### How to Differentiate an Integral

In calculus, a branch of mathematics, the **derivative** is a measure of how a function changes as its input changes. Loosely speaking, a derivative can be thought of as how much one quantity is changing in response to changes in some other quantity; for example, the derivative of the position of a moving object with respect to time is the object's instantaneous velocity.

The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. Informally, the derivative is the ratio of the infinitesimal change of the output over the infinitesimal change of the input producing that change of output. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization.
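The "slope of the tangent line" description can be checked numerically. The sketch below (the helper name, sample function, and step size are illustrative choices, not from the article) estimates a derivative by a central difference and uses it as the best linear approximation near the point:

```python
def derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x): the slope of the tangent line at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f(x) = x**3 the derivative at x = 2 is 12, so near x = 2 the best
# linear approximation (the linearization) is L(x) = f(2) + 12 * (x - 2).
f = lambda x: x ** 3
slope = derivative(f, 2.0)                      # close to 12.0
linearization = lambda x: f(2.0) + slope * (x - 2.0)

print(round(slope, 6))                          # 12.0
print(abs(f(2.001) - linearization(2.001)))     # tiny: the error is O((x - 2)**2)
```

The quadratic shrinkage of the error as the input approaches 2 is exactly what "best linear approximation" means here.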

A closely related notion is the differential of a function.

The process of finding a derivative is called **differentiation**. The reverse process is called *antidifferentiation*. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus.

In mathematics, the problem of **differentiation of integrals** is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space *X* with a measure *μ* and a metric *d*, one asks for what functions *f* : *X* → **R** does

$$\lim_{r \to 0} \frac{1}{\mu\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\mu = f(x)$$

for all (or at least for *μ*-almost all) *x* ∈ *X*? (Here, as in the rest of the article, *B*_{r}(*x*) denotes the open ball in *X* with *d*-radius *r* and centre *x*.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that *f*(*x*) is a "good representative" for the values of *f* near *x*.

## Theorems on the differentiation of integrals

### Lebesgue measure

One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider *n*-dimensional Lebesgue measure *λ*^{n} on *n*-dimensional Euclidean space **R**^{n}. Then, for any locally integrable function *f* : **R**^{n} → **R**, one has

$$\lim_{r \to 0} \frac{1}{\lambda^{n}\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\lambda^{n} = f(x)$$

for *λ*^{n}-almost all points *x* ∈ **R**^{n}. It is important to note, however, that the measure zero set of "bad" points depends on the function *f*.
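The theorem can be illustrated numerically in one dimension, where the ball *B*_{r}(*x*) is just the interval (*x* − *r*, *x* + *r*). A minimal sketch (the sample function and the midpoint-rule step count are illustrative assumptions):

```python
import math

def ball_average(f, x, r, n=10_000):
    """Mean value of f over B_r(x) = (x - r, x + r) with respect to 1-D
    Lebesgue measure, approximated by a midpoint Riemann sum:
    (1 / lambda(B_r(x))) * integral of f over B_r(x)."""
    total = 0.0
    for i in range(n):
        t = x - r + (i + 0.5) * (2 * r / n)
        total += f(t)
    return total / n

# exp is continuous, hence locally integrable, so every point is a "good" point.
f, x = math.exp, 0.7
for r in (1.0, 0.1, 0.01):
    print(r, abs(ball_average(f, x, r) - f(x)))   # error shrinks as r -> 0
```

For a continuous function the shrinking-ball averages converge at every point; the theorem's content is that for a merely locally integrable *f* they still converge almost everywhere.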
### Borel measures on **R**^{n}

The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if *μ* is any locally finite Borel measure on **R**^{n} and *f* : **R**^{n} → **R** is locally integrable with respect to *μ*, then

$$\lim_{r \to 0} \frac{1}{\mu\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\mu = f(x)$$

for *μ*-almost all points *x* ∈ **R**^{n}.

### Gaussian measures

The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (*H*, 〈 , 〉) equipped with a Gaussian measure *γ*. As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting:

- There is a Gaussian measure *γ* on a separable Hilbert space *H* and a Borel set *M* ⊆ *H* so that, for *γ*-almost all *x* ∈ *H*,

$$\lim_{r \to 0} \frac{\gamma\big(M \cap B_{r}(x)\big)}{\gamma\big(B_{r}(x)\big)} = 1.$$

- There is a Gaussian measure *γ* on a separable Hilbert space *H* and a function *f* ∈ *L*^{1}(*H*, *γ*; **R**) such that

$$\lim_{r \to 0} \inf \left\{ \frac{1}{\gamma\big(B_{s}(x)\big)} \int_{B_{s}(x)} f \, \mathrm{d}\gamma \,\middle|\, x \in H, 0 < s < r \right\} = + \infty.$$

However, there is some hope if one has good control over the covariance of *γ*. Let the covariance operator of *γ* be *S* : *H* → *H* given by

$$\langle Sx, y \rangle = \int_{H} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d}\gamma(z),$$

or, for some countable orthonormal basis (*e*_{i})_{i∈**N**} of *H*,

$$Sx = \sum_{i \in \mathbf{N}} \sigma_{i}^{2} \langle x, e_{i} \rangle e_{i}.$$

In 1981, Preiss and Tišer showed that if there exists a constant 0 < *q* < 1 such that

$$\sigma_{i + 1}^{2} \leq q \, \sigma_{i}^{2},$$

then, for all *f* ∈ *L*^{1}(*H*, *γ*; **R**),

$$\frac{1}{\gamma\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\gamma \xrightarrow[r \to 0]{} f(x),$$

where the convergence is convergence in measure with respect to *γ*. In 1988, Tišer showed that if

$$\sigma_{i + 1}^{2} \leq \frac{\sigma_{i}^{2}}{i^{\alpha}}$$

for some *α* > 5 ⁄ 2, then

$$\frac{1}{\gamma\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\gamma \xrightarrow[r \to 0]{} f(x)$$

for *γ*-almost all *x* and all *f* ∈ *L*^{p}(*H*, *γ*; **R**), *p* > 1.

As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure *γ* on a separable Hilbert space *H* so that, for all *f* ∈ *L*^{1}(*H*, *γ*; **R**),

$$\lim_{r \to 0} \frac{1}{\gamma\big(B_{r}(x)\big)} \int_{B_{r}(x)} f \, \mathrm{d}\gamma = f(x)$$

for *γ*-almost all *x* ∈ *H*. However, it is conjectured that no such measure exists, since the *σ*_{i} would have to decay very rapidly.
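The two decay hypotheses on the eigenvalues *σ*_{i}^{2} of the covariance operator are easy to check mechanically for a concrete sequence. A minimal illustrative sketch (the function names and sample eigenvalue sequences are mine, not from the article):

```python
from math import factorial

def geometric_decay(sigma_sq, q):
    """Preiss-Tiser (1981) hypothesis: sigma_{i+1}^2 <= q * sigma_i^2, 0 < q < 1."""
    return 0 < q < 1 and all(b <= q * a for a, b in zip(sigma_sq, sigma_sq[1:]))

def polynomial_decay(sigma_sq, alpha):
    """Tiser (1988) hypothesis: sigma_{i+1}^2 <= sigma_i^2 / i**alpha, alpha > 5/2.
    sigma_sq[0] stands for sigma_1^2, so sigma_sq[i] has 1-based index i + 1."""
    return alpha > 5 / 2 and all(
        sigma_sq[i + 1] <= sigma_sq[i] / (i + 1) ** alpha
        for i in range(len(sigma_sq) - 1)
    )

# Geometrically decaying eigenvalues satisfy the 1981 hypothesis with q = 1/2 ...
print(geometric_decay([2.0 ** -i for i in range(10)], 0.5))                     # True

# ... but merely polynomial decay (sigma_i^2 = 1/i^2) does not:
print(geometric_decay([1.0 / (i * i) for i in range(1, 10)], 0.5))              # False

# Very rapid (cubed-factorial) decay meets the 1988 hypothesis with alpha = 3:
print(polynomial_decay([1.0 / factorial(i) ** 3 for i in range(1, 12)], 3.0))   # True
```

This is only a finite-sequence sanity check of the hypotheses, of course; the theorems themselves concern the full infinite sequence of eigenvalues.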