How to Differentiate an Integral

In calculus, a branch of mathematics, the derivative is a measure of how a function changes as its input changes. Loosely speaking, a derivative can be thought of as how much one quantity is changing in response to changes in some other quantity; for example, the derivative of the position of a moving object with respect to time is the object's instantaneous velocity.
The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. Informally, the derivative is the ratio of an infinitesimal change in the output to the infinitesimal change in the input that produces it. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization. A closely related notion is the differential of a function.
The process of finding a derivative is called differentiation. The reverse process is called antidifferentiation. The fundamental theorem of calculus relates antidifferentiation to integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus.
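As a concrete numerical illustration (a minimal sketch; the position function s(t) = t² and the step sizes are assumed here purely for illustration), the difference quotient (s(t + h) − s(t))/h approaches the instantaneous velocity s′(t) = 2t as h shrinks:

    # Difference-quotient sketch of the derivative, for the assumed example
    # position function s(t) = t^2. The quotient (s(t + h) - s(t)) / h
    # approaches the instantaneous velocity s'(t) = 2 * t as h -> 0.
    def s(t):
        return t ** 2

    t = 3.0
    for h in [1.0, 0.1, 0.01, 0.001]:
        print(h, (s(t + h) - s(t)) / h)  # 7.0, 6.1, 6.01, ... -> s'(3) = 6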


In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space X with a measure μ and a metric d, one asks for what functions f : X → R does
\lim_{r \to 0} \frac1{\mu \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \mu(y) = f(x)
for all (or at least μ-almost all) x ∈ X? (Here, as in the rest of the article, Br(x) denotes the open ball in X with d-radius r and centre x.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that f(x) is a "good representative" for the values of f near x.
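To make the question concrete in the simplest setting (a minimal numerical sketch: X = R with Lebesgue measure and the test function f(y) = y² are assumed choices for illustration), the mean value of f over the ball Br(x) = (x − r, x + r) can be computed directly and watched converge to f(x):

    import numpy as np

    def ball_average(f, x, r, n=100_000):
        """Approximate (1 / mu(B_r(x))) * integral of f over B_r(x), where
        mu is Lebesgue measure on R, using a midpoint rule with n cells."""
        dy = 2 * r / n
        ys = x - r + dy * (np.arange(n) + 0.5)  # cell midpoints in (x - r, x + r)
        return f(ys).sum() * dy / (2 * r)       # integral divided by mu(B_r(x))

    f = lambda y: y ** 2
    for r in [1.0, 0.1, 0.01, 0.001]:
        print(r, ball_average(f, 1.0, r))  # tends to f(1) = 1 as r -> 0

For this f the exact average over (1 − r, 1 + r) is 1 + r²/3, so the convergence and its rate are visible directly.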


Theorems on the differentiation of integrals

Lebesgue measure

One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider n-dimensional Lebesgue measure λn on n-dimensional Euclidean space Rn. Then, for any locally integrable function f : Rn → R, one has
\lim_{r \to 0} \frac1{\lambda^{n} \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \lambda^{n}(y) = f(x)
for λn-almost all points x ∈ Rn. It is important to note, however, that the measure zero set of "bad" points depends on the function f.
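For example (the indicator function here is chosen purely as an illustration), let f be the indicator function of [0, ∞), with f(0) = 1. Then x = 0 is such a "bad" point: every ball (−r, r) meets [0, ∞) in exactly half of its length, so
\lim_{r \to 0} \frac1{\lambda^{1} \big( B_{r} (0) \big)} \int_{B_{r} (0)} f(y) \, \mathrm{d} \lambda^{1}(y) = \frac{1}{2} \neq f(0) = 1.
Redefining f on the single point 0 (a λ1-null set) changes whether 0 is a bad point, which is the sense in which the exceptional set depends on f.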

Borel measures on Rn

The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if μ is any locally finite Borel measure on Rn and f : Rn → R is locally integrable with respect to μ, then
\lim_{r \to 0} \frac1{\mu \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \mu (y) = f(x)
for μ-almost all points x ∈ Rn.
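As an illustration (a minimal numerical sketch; the density w(y) = e^{−y²} and the test function f(y) = sin y are assumed choices, not part of the statement above), take μ to be the Borel measure on R with density w with respect to Lebesgue measure; the μ-averages over shrinking balls again approach f(x):

    import numpy as np

    # mu is the locally finite Borel measure on R with the assumed density
    # w(y) = exp(-y^2) with respect to Lebesgue measure.
    w = lambda y: np.exp(-y ** 2)
    f = lambda y: np.sin(y)

    def mu_average(f, x, r, n=100_000):
        """Approximate (1 / mu(B_r(x))) * integral of f over B_r(x) with a
        midpoint rule, weighting each cell by its mu-mass w(y) * dy."""
        dy = 2 * r / n
        ys = x - r + dy * (np.arange(n) + 0.5)
        masses = w(ys) * dy                      # approximate d(mu) per cell
        return (f(ys) * masses).sum() / masses.sum()

    for r in [1.0, 0.1, 0.01, 0.001]:
        print(r, mu_average(f, 0.7, r))  # tends to f(0.7) = sin(0.7) ~ 0.644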

Gaussian measures

The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (H, 〈 , 〉) equipped with a Gaussian measure γ. Notably, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting:
  • There is a Gaussian measure γ on a separable Hilbert space H and a Borel set M ⊆ H so that, for γ-almost all x ∈ H,
\lim_{r \to 0} \frac{\gamma \big( M \cap B_{r} (x) \big)}{\gamma \big( B_{r} (x) \big)} = 1.
  • There is a Gaussian measure γ on a separable Hilbert space H and a function f ∈ L1(H, γ; R) such that
\lim_{r \to 0} \inf \left\{ \left. \frac1{\gamma \big( B_{s} (x) \big)} \int_{B_{s} (x)} f(y) \, \mathrm{d} \gamma(y) \right| x \in H, 0 < s < r \right\} = + \infty.
However, there is some hope if one has good control over the covariance of γ. Let the covariance operator of γ be S : H → H given by
\langle Sx, y \rangle = \int_{H} \langle x, z \rangle \langle y, z \rangle \, \mathrm{d} \gamma(z),
or, for some countable orthonormal basis (ei)i∈N of H,
Sx = \sum_{i \in \mathbf{N}} \sigma_{i}^{2} \langle x, e_{i} \rangle e_{i}.
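The defining identity for S can be checked numerically in a finite-dimensional truncation (a minimal sketch: the dimension d, the variances σi2 = 2^{−i}, and the sample size are all assumed purely for illustration). Taking ei to be the standard basis of Rd and γ = N(0, diag(σ12, …, σd2)), the Monte Carlo average of 〈x, z〉〈y, z〉 should match 〈Sx, y〉 = Σi σi2 xi yi:

    import numpy as np

    # Finite-dimensional sketch of <Sx, y> = integral of <x, z><y, z> dgamma(z):
    # truncate H to R^d and let gamma = N(0, diag(sigma_i^2)), with the
    # assumed variances sigma_i^2 = 2^(-i) for i = 1, ..., d.
    rng = np.random.default_rng(0)
    d = 5
    sigma2 = 2.0 ** -np.arange(1, d + 1)        # sigma_i^2 for i = 1..d
    z = rng.normal(scale=np.sqrt(sigma2), size=(1_000_000, d))  # gamma samples

    x = rng.normal(size=d)
    y = rng.normal(size=d)
    monte_carlo = np.mean((z @ x) * (z @ y))    # estimates the integral
    exact = np.dot(sigma2 * x, y)               # <Sx, y> with S = diag(sigma_i^2)
    print(monte_carlo, exact)                   # agree up to Monte Carlo error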
In 1981, Preiss and Jaroslav Tišer showed that if there exists a constant 0 < q < 1 such that
\sigma_{i + 1}^{2} \leq q \sigma_{i}^{2},
then, for all f ∈ L1(H, γ; R),
\frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \gamma(y) \xrightarrow[r \to 0]{\gamma} f(x),
where the convergence is convergence in measure with respect to γ. (Iterating the hypothesis shows that the variances σi2 must decay at least geometrically.) In 1988, Tišer showed that if
\sigma_{i + 1}^{2} \leq \frac{\sigma_{i}^{2}}{i^{\alpha}}
for some α > 5/2, then
\frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \gamma(y) \xrightarrow[r \to 0]{} f(x),
for γ-almost all x ∈ H and all f ∈ Lp(H, γ; R), p > 1.
As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure γ on a separable Hilbert space H so that, for all f ∈ L1(H, γ; R),
\lim_{r \to 0} \frac1{\gamma \big( B_{r} (x) \big)} \int_{B_{r} (x)} f(y) \, \mathrm{d} \gamma(y) = f(x)
for γ-almost all x ∈ H. However, it is conjectured that no such measure exists, since the σi would have to decay very rapidly.
