MA694 Rough paths theory

During the Spring 2013 semester I will teach a class on rough paths theory and post the lectures on this blog.

Rough paths theory was discovered by Terry Lyons in the 1990s. The theory allows one to solve differential equations driven by rough signals. It is deterministic but applies perfectly to the study of differential equations driven by rough random signals, such as Brownian motion, or even potentially rougher signals, such as fractional Brownian motion. The main reference for the course will be the book:

P. Friz, N. Victoir: Multidimensional stochastic processes as rough paths. Theory and Applications. Cambridge Studies in Advanced Mathematics (CUP, 2009)

A rough table of contents is as follows:

  • Ordinary differential equations
  • Young’s integration theory
  • Estimating iterated integrals
  • p-rough paths
  • Rough linear differential equations
  • Carnot groups
  • Geometric rough paths
  • Rough differential equations
  • Stochastic processes as rough paths

Lecture 35. Weak differentiability for solutions of stochastic differential equations and the existence of a smooth density

As usual, we consider a filtered probability space \left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathcal{F},\mathbb{P} \right) which satisfies the usual conditions and on which is defined an n-dimensional Brownian motion (B_t)_{t \ge 0}. Our purpose here is to prove that solutions of stochastic differential equations are differentiable in the sense of Malliavin.
The following lemma is easy to prove by using the Wiener chaos expansion.

Lemma. Let (u_s)_{0 \le s \le 1} be a progressively measurable process such that for every 0 \le s \le 1, u^i_s \in \mathbb{D}^{1,2} and
\mathbb{E} \left(\int_0^1 \|u_s\|^2 ds \right)<+\infty, \quad \mathbb{E} \left(\int_0^1 \int_0^1 \|\mathbf{D}_s u_t\|^2 ds dt\right)<+\infty.
Then \int_0^1 u_s dB_s \in \mathbb{D}^{1,2} and
\mathbf{D}_t\left( \int_0^1 u_s dB_s\right)=u_t+\sum_{i=1}^n \int_t^1 (\mathbf{D}_t u^i_s) dB^i_s.

Proof. We give the proof when n=1 and use the notation introduced in the Wiener chaos expansion Lecture. For f \in L^2([0,1]), we have
\mathbf{D}_t I_n( f^{\otimes n } )= f(t) I_{n-1} (f^{\otimes {(n-1)}}).
But we can write,
I_n( f^{\otimes n } )=\int_0^1f(t) \left( \int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}\right) dB_t,
and thus
I_n( f^{\otimes n } )=\int_0^1 u_s dB_s,
with u_t= f(t)\int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}. Since
f(t) I_{n-1} (f^{\otimes {(n-1)}})= f(t) \left( \int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}\right)
+ f(t) \int_t^1 f(s)\left( \int_{\Delta_{n-2}[0,s]} f^{\otimes {(n-2)}} dB_{t_1} \cdots dB_{t_{n-2}}\right)dB_s,
we get the result when \int_0^1 u_s dB_s can be written as I_n( f^{\otimes n } ). By continuity of the Malliavin derivative on the chaos of order n, we conclude that the formula is true whenever \int_0^1 u_s dB_s is a chaos of order n. The result finally holds in full generality by using the Wiener chaos expansion \square

We consider two functions b : \mathbb{R}^n \to \mathbb{R}^n and \sigma:\mathbb{R}^n \to \mathbb{R}^{n \times n}, and we assume that b and \sigma are C^\infty with bounded derivatives of every order greater than or equal to 1.

As we know, there exists a bicontinuous process (X_t^{x})_{t\ge 0, x \in \mathbb{R}^n} such that for t \ge 0,
X_t^{x} =x +\int_0^t b(X_s^{x}) ds +\sum_{k=1}^n \int_0^t \sigma_k(X_s^{x}) dB^k_s.
Moreover, for every p \ge 1, and T \ge 0
\mathbb{E} \left( \sup_{0 \le t \le T} \| X^x_t \|^p\right) < +\infty.
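As an illustration (not part of the lecture), a path of such an equation can be simulated by the Euler-Maruyama scheme. The sketch below is minimal; the coefficients b and \sigma are arbitrary smooth choices made up for the example.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000, rng=None):
    """Simulate one Euler-Maruyama path of dX = b(X) dt + sum_k sigma_k(X) dB^k.
    b: R^n -> R^n, sigma: R^n -> R^{n x n} (the columns sigma_k are the
    diffusion vector fields).  Returns the positions X_{t_i} on the grid."""
    rng = np.random.default_rng(rng)
    n = len(x0)
    dt = T / n_steps
    X = np.empty((n_steps + 1, n))
    X[0] = x0
    for i in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=n)
        X[i + 1] = X[i] + b(X[i]) * dt + sigma(X[i]) @ dB
    return X

# Hypothetical coefficients (smooth, with bounded derivatives), n = 2.
b = lambda x: -x
sigma = lambda x: np.array([[1.0, 0.0], [np.sin(x[0]), 1.0]])
path = euler_maruyama(b, sigma, x0=np.array([1.0, 0.0]), n_steps=500, rng=0)
print(path[-1])   # one sample of X_T^x
```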

Theorem. For every i=1,...,n, 0 \le t \le 1, X_t^{x,i} \in \mathbb{D}^{\infty} and for r \le t,
\mathbf{D}^j_r X_t^{x,i}= \sigma_{i,j}(X_r^{x}) +\sum_{l=1}^n \int_r^t \partial_l b_i(X_s^{x})\mathbf{D}^j_r X_s^{x,l} ds +\sum_{k,l=1}^n \int_r^t \partial_l\sigma_{i,k}(X_s^{x})\mathbf{D}^j_r X_s^{x,l} dB^k_s,
where \mathbf{D}^j_r X^i_t is the j-th component of \mathbf{D}_r X^i_t. If r > t, then \mathbf{D}^j_r X_t^{x,i}=0.

Proof. We first prove that X_t^{x,i} \in \mathbb{D}^{1,p} for every p \ge 1. We consider the Picard approximations given by X_0(t)=x and
X_{n+1}(t) =x +\int_0^t b(X_n(s)) ds +\sum_{k=1}^n \int_0^t \sigma_k(X_n(s)) dB^k_s.
By induction, it is easy to see that X_n(t) \in \mathbb{D}^{1,p} and that for every p \ge 1, we have
\Psi_n(t)=\sup_{0 \le r \le t} \mathbb{E} \left( \sup_{s \in [r,t]} \| \mathbf{D}_r X_n(s) \|^p \right)< +\infty,
and
\Psi_{n+1}(t)\le \alpha +\beta\int_0^t \Psi_n(s)ds.
Then, we observe that X_n(t) converges to X_t^x in L^p and that the sequence \| X_n(t) \|_{1,p} is bounded. As a consequence, X_t^{x,i} \in \mathbb{D}^{1,p} for every p \ge 1. The equation for the Malliavin derivative is obtained by differentiating the equation satisfied by X_t^x. Higher order derivatives may be treated in a similar way with a little additional work \square

Combining this theorem with the uniqueness property for solutions of linear stochastic differential equations, we obtain the following representation for the Malliavin derivative of a solution of a stochastic differential equation:

Corollary:
\mathbf{D}^j_r X^x_t=\mathbf{J}_{0 \rightarrow t}(x) \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma_j (X^x_r),~~j=1,...,n, ~~ 0\leq r \leq t,

where (\mathbf{J}_{0 \rightarrow t}(x))_{ t \geq 0} is the first variation process defined by
\mathbf{J}_{0 \rightarrow t}(x)=\frac{\partial X^x_t}{\partial x}(x).
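Numerically, the first variation process can be approximated by finite differences: perturb the initial condition and drive both trajectories with the same Brownian increments. A minimal sketch, with hypothetical coefficients chosen only for illustration:

```python
import numpy as np

# Finite-difference sketch of J_{0->T}(x) = dX_T^x / dx, using the SAME
# Brownian increments for the perturbed initial conditions.
rng = np.random.default_rng(1)
n, n_steps, T, eps = 2, 500, 1.0, 1e-4
dt = T / n_steps
dB = rng.normal(scale=np.sqrt(dt), size=(n_steps, n))

b = lambda x: -x                                               # hypothetical drift
sigma = lambda x: np.array([[1.0, 0.0], [np.sin(x[0]), 1.0]])  # hypothetical diffusion

def terminal_value(x0):
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        x = x + b(x) * dt + sigma(x) @ dB[i]
    return x

x0 = np.array([1.0, 0.0])
X_T = terminal_value(x0)
J = np.column_stack([(terminal_value(x0 + eps * e) - X_T) / eps
                     for e in np.eye(n)])
print(J)   # approximates J_{0->T}(x0); column j is dX_T^x / dx_j
```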

We now fix x \in \mathbb{R}^n as the initial condition for our equation and denote by \Gamma_t(x)=\left( \sum_{j=1}^n \int_0^1 \mathbf{D}_r^j X_t^{x,i}\mathbf{D}_r^j X^{x,i'}_t dr\right)_{1 \le i,i' \le n} the Malliavin matrix of X^x_t. From the previous corollary, we deduce that
\Gamma_t(x)=\mathbf{J}_{0 \rightarrow t}(x) \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma (X^x_r) \sigma (X^x_r)^* \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr \mathbf{J}_{0 \rightarrow t}(x)^*.

We are now finally in position to state the main theorem of the section:

Theorem. Assume that there exists \lambda > 0 such that for every x , y \in \mathbb{R}^n,
\langle \sigma (x) \sigma(x)^* y , y \rangle \ge \lambda \| y \|^2,
then for every t > 0 and x \in \mathbb{R}^n, the random variable X_t^x has a smooth density with respect to the Lebesgue measure.

Proof:
We want to prove that \Gamma_t(x) is invertible with inverse in L^p for p \ge 1. Since \mathbf{J}_{0 \rightarrow t}(x) is invertible and its inverse solves a linear equation, we deduce that for every p \ge 1,
\mathbb{E}\left( \| \mathbf{J}_{0 \rightarrow t}^{-1}(x) \|^p \right) < +\infty.

We conclude that it is enough to prove that C_t(x) is invertible with inverse in L^p where
C_t(x)= \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma (X^x_r) \sigma (X^x_r)^* \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr.
By the uniform ellipticity assumption, we have
C_t(x) \ge \lambda \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr,
where the inequality is understood in the sense that the difference of the two symmetric matrices is nonnegative. This implies that C_t(x) is invertible. Moreover, it is an easy exercise to prove that if M_t is a continuous map taking its values in the set of positive definite matrices, then we have
\left(\int_0^ t M_s ds\right)^{-1} \le \frac{1}{t^2} \left(\int_0^ t M^{-1}_s ds\right).
As a consequence, we obtain
C^{-1}_t(x) \le \frac{1}{t^2 \lambda} \int_0^t \mathbf{J}_{0 \rightarrow r}(x)^* \mathbf{J}_{0 \rightarrow r}(x) dr.
Since \mathbf{J}_{0 \rightarrow r}(x) has moments of all orders, we conclude that C_t(x) is invertible with inverse in L^p for every p \ge 1 \square
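The deterministic matrix inequality used in the proof is easy to test numerically. The following small sketch checks, on a randomly generated path of positive definite matrices with the integrals replaced by Riemann sums, that \frac{1}{t^2} \int_0^t M^{-1}_s ds - \left(\int_0^t M_s ds\right)^{-1} is positive semidefinite.

```python
import numpy as np

# Check that (int_0^t M_s ds)^{-1} <= t^{-2} int_0^t M_s^{-1} ds, where "<="
# means the difference of the symmetric matrices is positive semidefinite.
rng = np.random.default_rng(0)
n, n_steps, t = 3, 200, 1.0
dt = t / n_steps

def random_spd():
    A = rng.normal(size=(n, n))
    return A @ A.T + 0.1 * np.eye(n)

Ms = [random_spd() for _ in range(n_steps)]
int_M = sum(Ms) * dt
int_Minv = sum(np.linalg.inv(M) for M in Ms) * dt
diff = int_Minv / t**2 - np.linalg.inv(int_M)
print(np.linalg.eigvalsh(diff).min())   # should be >= 0 (up to roundoff)
```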

 

To conclude, we note that this approach to proving existence and smoothness of the density for solutions of stochastic differential equations can also be extended to stochastic differential equations driven by processes other than Brownian motion. For instance, this approach applies to stochastic differential equations driven by fractional Brownian motions.

 


Lecture 34. The Wiener chaos expansion

As in the previous Lectures, we consider a filtered probability space (\Omega, (\mathcal{F}_t)_{0 \le t \le 1}, \mathbb{P}) on which is defined a Brownian motion (B_t)_{0 \le t \le 1}, and we assume that (\mathcal{F}_t)_{0 \le t \le 1} is the usual completion of the natural filtration of (B_t)_{0 \le t \le 1}. Our goal here is to write an orthogonal decomposition of the space L^2(\mathcal{F}_1) that is particularly suited to the study of the space \mathbb{D}^{1,2}. For simplicity of the exposition, we restrict ourselves to the case where the Brownian motion (B_t)_{0 \le t \le 1} is one-dimensional.

In the sequel, for n \ge 1, we denote by \Delta_n the simplex \Delta_n =\{ 0\le t_1 \le \cdots \le t_n \le 1\} and if f_n \in L^2( \Delta_n),
I_n (f_n) =\int_0^1 \int_0^{t_n} \cdots \int_0^{t_2} f_n(t_1,\cdots,t_n)  dB_{t_1}...dB_{t_n}
=\int_{\Delta_n}  f_n(t_1,\cdots,t_n)  dB_{t_1}...dB_{t_n}.

The set
\mathbf{K}_n=\left\{\int_{\Delta_n}  f_n(t_1,\cdots,t_n)  dB_{t_1}...dB_{t_n}, f_n \in L^2( \Delta_n)  \right\}
is called the space of Wiener chaos of order n. By convention the set of constant random variables shall be denoted by \mathbf{K}_0.

By using Itō’s isometry, we readily compute that
\mathbb{E} \left(I_n (f_n)I_p (f_p) \right)=  \begin{cases}  0 & \text{if }p \neq n \\  \| f_n \|^2_{L^2(\Delta_n)} & \text{if }p=n.  \end{cases}
As a consequence, the spaces \mathbf{K}_n are orthogonal in L^2. It is easily seen that \mathbf{K}_n is the closure of the linear span of the family
\left\{ I_n (f^{\otimes n}), f \in L^2([0,1]) \right\},
where for f \in L^2([0,1]), we denoted by f^{\otimes n} the map \Delta_n \to \mathbb{R} such that f^{\otimes n}(t_1,\cdots,t_n)=f(t_1)\cdots f(t_n). It turns out that I_n (f^{\otimes n}) can be computed by using Hermite polynomials. The Hermite polynomial of order n is defined as
H_n (x)=(-1)^n \frac{1}{n!} e^{\frac{x^2}{2}} \frac{d^n}{dx^n} e^{-\frac{x^2}{2}}.
By the very definition of H_n, we see that for every t, x \in \mathbb{R},
\exp \left( t x -\frac{t^2}{2}\right)=\sum_{k=0}^{+\infty} t^k H_k(x).
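With this normalization the classical Hermite recurrence becomes H_0=1, H_1(x)=x and H_{n+1}(x)=\frac{1}{n+1}\left( x H_n(x)-H_{n-1}(x)\right). As a quick illustration (not part of the lecture), the following sketch computes the H_n by this recurrence and checks the generating function identity above at an arbitrary point.

```python
import numpy as np

def hermite(n_max, x):
    """H_0(x), ..., H_{n_max}(x) with the normalization used above, computed
    through the recurrence H_{n+1} = (x*H_n - H_{n-1}) / (n+1)."""
    H = [np.ones_like(x), x]
    for n in range(1, n_max):
        H.append((x * H[n] - H[n - 1]) / (n + 1))
    return H[: n_max + 1]

t, x, N = 0.7, 1.3, 40
H = hermite(N, x)
series = sum(t**k * H[k] for k in range(N + 1))
print(series, np.exp(t * x - t**2 / 2))   # the two numbers should agree
```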

Lemma. If f \in L^2([0,1]) then I_n (f^{\otimes n})=\| f \|^n_{L^2([0,1])} H_n \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right) .

Proof. On one hand, we have for \lambda \in \mathbb{R},
\exp \left( \lambda \int_0^1 f(s) dB_s-\frac{\lambda^2}{2} \int_0^1 f(s)^2 ds  \right)=\sum_{n=0}^{+\infty} \lambda^n \| f \|^n_{L^2([0,1])} H_n \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right) .

On the other hand, for 0 \le t \le 1, let us consider
M_t(\lambda)=\exp \left( \lambda \int_0^t f(s) dB_s-\frac{\lambda^2}{2} \int_0^t f(s)^2 ds  \right).
From Itō’s formula, we have
M_t(\lambda)=1+\lambda \int_0^t M_s(\lambda) f(s) dB_s.
By iterating the previous linear relation, we easily obtain that for every n \ge 1,
M_1(\lambda)=1+\sum_{k=1}^n \lambda^k I_k( f^{\otimes k})+\lambda^{n+1} \int_0^1 M_tf(t)\left(\int_{\Delta_n([0,t])}  f(t_1)\cdots f(t_n)  dB_{t_1}...dB_{t_n}\right) dB_t.
We conclude,
I_n( f^{\otimes n})=\frac{1}{n!} \frac{ d^n M_1}{d \lambda^n}(0)=\| f \|^n_{L^2([0,1])} H_n \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right) \square

As we pointed out, for p \neq n, the spaces \mathbf{K}_n and \mathbf{K}_p are orthogonal. We have the following orthogonal decomposition of L^2:

Theorem.[Wiener chaos expansion]
L^2 =\bigoplus_{n \ge 0} \mathbf{K}_n.

Proof. As a by-product of the previous proof, we easily obtain that for f \in L^2([0,1]),
\exp \left( \lambda \int_0^1 f(s) dB_s-\frac{\lambda^2}{2} \int_0^1 f(s)^2 ds  \right)=\sum_{n=0}^{+\infty} \lambda^n I_n( f^{\otimes n}),
with the convention I_0(f^{\otimes 0})=1, where the convergence of the series is almost sure but also in L^2. Therefore, if F \in L^2 is orthogonal to \bigoplus_{n \ge 0} \mathbf{K}_n, then F is orthogonal to every \exp \left( \lambda \int_0^1 f(s) dB_s-\frac{\lambda^2}{2} \int_0^1 f(s)^2 ds  \right), f \in L^2([0,1]), \lambda \in \mathbb{R}. Since the linear span of these exponentials is dense in L^2(\mathcal{F}_1), this implies that F=0 \square
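As a sanity check of the orthogonality relations (an illustration, not part of the lecture), note that for \| f \|_{L^2([0,1])}=1 we have I_n(f^{\otimes n})=H_n(G) with G=\int_0^1 f(s) dB_s \sim \mathcal{N}(0,1), so that \mathbb{E}\left(I_n(f^{\otimes n})I_p(f^{\otimes p})\right) should be 0 for n \neq p and \| f^{\otimes n}\|^2_{L^2(\Delta_n)}=1/n! for n=p. A crude Monte Carlo check:

```python
import numpy as np
from math import factorial

def hermite(n_max, x):
    # H_n with the normalization of the lecture, via the recurrence above.
    H = [np.ones_like(x), x]
    for n in range(1, n_max):
        H.append((x * H[n] - H[n - 1]) / (n + 1))
    return H

rng = np.random.default_rng(0)
G = rng.normal(size=1_000_000)   # G = int_0^1 f dB ~ N(0,1) when ||f||_{L^2} = 1
H = hermite(5, G)
for n in range(5):
    for p in range(5):
        est = np.mean(H[n] * H[p])
        exact = 1.0 / factorial(n) if n == p else 0.0
        print(n, p, round(float(est), 4), exact)
```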

As we are going to see, the space \mathbb{D}^{1,2} or more generally \mathbb{D}^{k,2} is easy to describe by using the Wiener chaos expansion. The key point is the following proposition:

Proposition. Let F=I_n(f_n) \in \mathbf{K}_n, then F \in \mathbb{D}^{1,2} and \mathbf{D}_t F=I_{n-1} ( \tilde{f}_n (\cdot, t)), where for 0\le t_1 \le \cdots \le t_{n-1} \le 1,
\tilde{f}_n (t_1,\cdots, t_{n-1},t)  =f_n (t_1,\cdots, t_k, t, t_{k+1}, \cdots, t_{n-1}) \quad  \text{if } t_{k} \le t \le t_{k+1}.

Proof. Let f \in L^2([0,1]). We have
I_n (f^{\otimes n})=\| f \|^n_{L^2([0,1])} H_n \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right) .
Thus F=I_n (f^{\otimes n}) is a smooth cylindric functional and
\mathbf{D}_t F =\| f \|^{n-1}_{L^2([0,1])} f(t) H'_n \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right).
It is easy to see that H_n'=H_{n-1}, therefore we have
\mathbf{D}_t F  =\| f \|^{n-1}_{L^2([0,1])} f(t) H_{n-1} \left(\frac{ \int_0^1 f(s) dB_s}{\| f \|_{L^2([0,1])} } \right)  =f(t) I_{n-1} (f^{\otimes {(n-1)}}).
As a consequence, we compute that \mathbb{E} \left(\int_0^1 (\mathbf{D}_t F)^2 dt  \right)=n \mathbb{E} (F^2). We now observe that \mathbf{K}_n is the closure in L^2 of the linear span of the family
\left\{ I_n (f^{\otimes n}), f \in L^2([0,1]) \right\},
and use this bound together with the closability of \mathbf{D} to conclude the proof of the proposition \square

We can finally turn to the description of \mathbb{D}^{1,2} using the chaos decomposition:

Theorem. Let F \in L^2 and let
F=\mathbb{E}(F) +\sum_{m \ge 1} I_m (f_m),
be the chaotic decomposition of F. Then F \in \mathbb{D}^{1,2} if and only if
\sum_{m \ge 1} m \mathbb{E}\left(  I_m (f_m)^2\right) < +\infty,
and in that case,
\mathbf{D}_t F= \mathbb{E}(\mathbf{D}_tF) + \sum_{m \ge 2} I_{m-1} ( \tilde{f}_m (\cdot, t)).

Proof. It is a consequence of the fact that for F \in \mathbf{K}_n, \mathbb{E} \left(\int_0^1 (\mathbf{D}_t F)^2 dt  \right)=n \mathbb{E} (F^2) \square.

An immediate but useful corollary of the previous theorem is the following result:

Corollary. Let (F_n)_{n \ge 0} be a sequence in \mathbb{D}^{1,2} that converges to F in L^2 and such that
\sup_{ n \ge 0} \mathbb{E} \left(\int_0^1 (\mathbf{D}_t F_n)^2 dt  \right) < +\infty.
Then, F \in \mathbb{D}^{1,2}.

Exercise. Let F \in L^2 and let
F=\mathbb{E}(F) +\sum_{m \ge 1} I_m (f_m),
be the chaotic decomposition of F. Show that F \in \mathbb{D}^{k,2}, k \ge 1, if and only if
\sum_{m \ge 1} m^k \mathbb{E}\left(  I_m (f_m)^2\right) < +\infty.

Exercise. Let L=\delta \mathbf{D} . Show that for F \in \mathbf{K}_n, LF=nF.


Some advice for young mathematicians

Being a mathematician consistently ranks among the best jobs. Research in mathematics certainly offers an exciting intellectual adventure.

For young mathematicians and graduate students who wish to pursue a career in this field, this article by Gian-Carlo Rota contains thoughts that, I think, can be especially useful.

The blog of Terence Tao is also an excellent source of advice on writing mathematics and on academic careers.


Lecture 33. The Malliavin matrix and existence of densities

More generally, by using the same methods as in the previous Lecture, we can introduce iterated derivatives. If F \in \mathcal{S}, we set
\mathbf{D}^k_{t_1,...,t_k} F = \mathbf{D}_{t_1} ...\mathbf{D}_{t_k} F.
We may then consider \mathbf{D}^k F as a square integrable random process indexed by [0,1]^{k} and valued in \mathbb{R}^n. By using the integration by parts formula, it is possible to prove, as we did in the previous Lecture, that for any p \geq 1, the operator \mathbf{D}^k is closable on \mathcal{S}. We denote by \mathbb{D}^{k,p} the domain of \mathbf{D}^k in L^p; it is the closure of the class of cylindric random variables with respect to the norm
\left\| F\right\| _{k,p}=\left( \mathbb{E}\left( |F|^{p}\right) +\sum_{j=1}^k \mathbb{E}\left( \left\| \mathbf{D}^j F\right\|_{\mathbf{L}^2 ([0,1]^j, \mathbb{R}^n)}^{p}\right) \right)^{\frac{1}{p}},
and
\mathbb{D}^{\infty}=\bigcap_{p \geq 1} \bigcap_{k \geq 1} \mathbb{D}^{k,p}.
We have the following key result which makes Malliavin calculus so useful when one wants to study the existence of densities for random variables.
Theorem. (P. Malliavin) Let F=(F_1,...,F_m) be an \mathcal{F}_1-measurable random vector such that:

  • For every i=1,...,m, F_i \in \mathbb{D}^{\infty};
  • The matrix
    \Gamma= \left( \int_0^1 \langle \mathbf{D}_s F_i , \mathbf{D}_s F_j \rangle_{\mathbb{R}^n} ds \right)_{1 \leq i,j \leq m}
    is invertible.

Then F has a density with respect to the Lebesgue measure. If moreover, for every p > 1,
\mathbb{E} \left( \frac{1}{\mid \det \Gamma \mid ^p} \right) < \infty,
then this density is C^\infty.

The matrix \Gamma is often called the Malliavin matrix of the random vector F.

This theorem relies on the following lemma of Fourier analysis for which we shall use the following notation: If \phi: \mathbb{R}^n \rightarrow \mathbb{R} is a smooth function then for \alpha =(i_1,...,i_k) \in  \{1,...,n\}^k, we denote
\partial_\alpha \phi =\frac{\partial^k}{\partial x_{i_1} \cdots \partial x_{i_k} } \phi.
Lemma. Let \mu be a probability measure on \mathbb{R}^n such that for every smooth and compactly supported function \phi :\mathbb{R}^n \rightarrow \mathbb{R},
\left| \int_{\mathbb{R}^n} \partial_\alpha \phi d\mu \right| \le C_\alpha \| \phi \|_\infty,
where \alpha  \in  \{1,...,n\}^k, k \ge 1, C_\alpha > 0. Then \mu is absolutely continuous with respect to the Lebesgue measure with a smooth density.

Proof. The idea is to show that we may assume that \mu is compactly supported and then use Fourier transform techniques. Let x_0 \in \mathbb{R}^n, R > 0 and R' > R. Let \Psi be a smooth function on \mathbb{R}^n such that \Psi =1 on the ball \mathbf{B} (x_0,R) and \Psi=0 outside the ball \mathbf{B} (x_0,R'). Let \nu be the measure on \mathbb{R}^n that has density \Psi with respect to \mu. It is easily seen, by induction and integration by parts, that for every smooth and compactly supported function \phi :\mathbb{R}^n \rightarrow \mathbb{R},
\left| \int_{\mathbb{R}^n} \partial_\alpha \phi d\nu \right| \le C'_\alpha \| \phi \|_\infty,
where \alpha  \in  \{1,...,n\}^k, k \ge 1, C'_\alpha > 0. Now, if we can prove that under the above assumption \nu has a smooth density, then we will be able to conclude that \mu has a smooth density because x_0 \in \mathbb{R}^n and R,R' are arbitrary. Let
\hat{\nu}(y) =\int_{\mathbb{R}^n} e^{i \langle y,x \rangle} \nu (dx)
be the Fourier transform of the measure \nu. The assumption implies that \hat{\nu} is rapidly decreasing (apply the inequality with \phi(x)=e^{i \langle y,x \rangle}). We conclude that \nu has a smooth density with respect to the Lebesgue measure and that this density f is given by the inverse Fourier transform formula:
f(x)=\frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i \langle y,x \rangle} \hat{\nu} (y) dy \square

We may now turn to the proof of the Theorem.

The proof relies on the integration by parts formula for the Malliavin derivative. Let \phi be a smooth and compactly supported function on \mathbb{R}^m. Since F_i \in \mathbb{D}^\infty, we easily deduce that \phi(F) \in \mathbb{D}^\infty and that
\mathbf{D} \phi (F) =\sum_{i=1}^m \partial_i \phi (F) \mathbf{D} F_i.
Therefore we obtain
\int_0^1 \langle  \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt= \sum_{i=1}^m \partial_i \phi (F)  \int_0^1 \langle \mathbf{D}_t F_i, \mathbf{D}_t F_j \rangle dt.
We conclude that
\partial_i \phi (F)=\sum_{j=1}^m (\Gamma^{-1})_{i,j}  \int_0^1 \langle  \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt .
As a consequence, we obtain
\mathbb{E} \left(\partial_i \phi (F) \right)  = \mathbb{E} \left(\sum_{j=1}^m (\Gamma^{-1})_{i,j}  \int_0^1 \langle  \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt \right)
=\sum_{j=1}^m \mathbb{E} \left(  \int_0^1 \langle  \mathbf{D}_t \phi (F), (\Gamma^{-1})_{i,j}\mathbf{D}_t F_j \rangle dt   \right)
=\sum_{j=1}^m \mathbb{E} \left( \phi (F) \delta ( (\Gamma^{-1})_{i,j}\mathbf{D} F_j ) \right)
= \mathbb{E} \left( \phi (F) \delta \left( \sum_{j=1}^m (\Gamma^{-1})_{i,j}\mathbf{D} F_j \right) \right).
By using inductively this integration by parts formula, it is seen that for every \alpha  \in  \{1,...,m\}^k, k \ge 1, there exists an integrable random variable Z_\alpha such that
\mathbb{E} \left( \partial_\alpha \phi (F)\right)=\mathbb{E} \left( \phi (F) Z_\alpha \right).
In particular, \left| \mathbb{E} \left( \partial_\alpha \phi (F)\right) \right| \le \mathbb{E}\left( |Z_\alpha| \right) \| \phi \|_\infty, so that the Fourier analysis lemma applies to the law of F and yields the conclusion \square
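To make this concrete in the simplest case (an illustration, not part of the lecture), take n=m=1 and F=\int_0^1 h(s) dB_s with a deterministic h. Then \mathbf{D}_sF=h(s), \Gamma=\|h\|^2_{L^2([0,1])} and \delta(\Gamma^{-1}\mathbf{D}F)=F/\|h\|^2_{L^2([0,1])}, so the integration by parts formula reduces to Stein’s identity \mathbb{E}(\phi'(F))=\mathbb{E}(\phi(F)F)/\|h\|^2_{L^2([0,1])}. A minimal Monte Carlo check, with an arbitrary test function and an arbitrary value of \|h\|^2:

```python
import numpy as np

# Check E(phi'(F)) = E(phi(F) * F) / ||h||^2 for F = int_0^1 h dB ~ N(0, ||h||^2).
rng = np.random.default_rng(0)
h_norm2 = 0.7                        # ||h||_{L^2([0,1])}^2, an arbitrary choice
F = rng.normal(scale=np.sqrt(h_norm2), size=2_000_000)
phi = np.tanh                        # smooth bounded test function
dphi = lambda x: 1.0 / np.cosh(x)**2
print(np.mean(dphi(F)), np.mean(phi(F) * F) / h_norm2)   # the two should be close
```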


Lecture 32. The Malliavin derivative

The next Lectures will be devoted to the study of the problem of the existence of a density for solutions of stochastic differential equations. The basic tool to study such questions is the so-called Malliavin calculus.

Let us consider a filtered probability space (\Omega, (\mathcal{F}_t)_{0 \le t \le 1}, \mathbb{P}) on which is defined a Brownian motion (B_t)_{0 \le t \le 1}. We assume that (\mathcal{F}_t)_{0 \le t \le 1} is the usual completion of the natural filtration of (B_t)_{0 \le t \le 1}.

An \mathcal{F}_{1}-measurable real-valued random variable F is said to be cylindric if it can be written
F=f \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s) dB_s \right)
where h^i \in \mathbf{L}^2 ([0,1], \mathbb{R}^n) and f:\mathbb{R}^m \rightarrow \mathbb{R} is a C^{\infty} function such that f and all its partial derivatives have polynomial growth. The set of cylindric random variables is denoted by \mathcal{S}. It is easy to see that \mathcal{S} is dense in L^p for every p \ge 1.

The Malliavin derivative of F \in \mathcal{S} is the \mathbb{R}^n valued stochastic process (\mathbf{D}_t F )_{0 \leq t \leq 1} given by
\mathbf{D}_t F=\sum_{i=1}^{m} h^i (t) \frac{\partial f}{\partial x_i} \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s)dB_s \right).
We can see \mathbf{D} as an (unbounded) operator from the space \mathcal{S} \subset L^p into the Banach space
\mathcal{L}^p=\left\{ (X_t)_{0 \le t \le 1},\mathbb{E}\left( \left( \int_0^1 \| X_t \|^2 dt\right)^{p/2} \right) < +\infty \right\}.
Our first task will be to prove that \mathbf{D} is closable. This will be a consequence of the following fundamental integration by parts formula which is interesting in itself.

Proposition. (Integration by parts formula) Let F \in \mathcal{S} and let (h(s))_{0 \le s \le 1} be a progressively measurable process such that \mathbb{E}\left( \int_0^1 \| h(s)\|^2 ds \right) < +\infty. We have
\mathbb{E} \left( \int_0^1( \mathbf{D}_s F)h(s) ds \right)=\mathbb{E}\left( F \int_0^{1} h(s)dB_s\right).

Proof.
Let
F=f \left( \int_0^{1} h^1(s) dB_s,...,\int_0^{1} h^m(s) dB_s \right) \in \mathcal{S}.
Let us now fix \varepsilon \ge 0 and denote
F_\varepsilon =f \left( \int_0^{1} h^1(s) d\left( B_s +\varepsilon \int_0^{s} h(u)du \right),...,\int_0^{1} h^m(s) d\left( B_s +\varepsilon \int_0^{s} h(u)du \right) \right).
From Girsanov’s theorem, we have
\mathbb{E} ( F_\varepsilon)=\mathbb{E} \left(\exp \left(\varepsilon \int_0^{1} h(u)dB_u -\frac{\varepsilon^2}{2}\int_0^{1} \|h(u)\|^2du \right) F \right).
Now, on one hand we compute
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( \mathbb{E} ( F_\varepsilon)-\mathbb{E} (F) \right) =\mathbb{E} \left( \int_0^1\sum_{i=1}^{m} \frac{\partial f}{\partial x_i} \left( \int_0^{1} h^1(u)dB_u,...,\int_0^{1} h^m(u) dB_u \right) h^i(s)h(s) ds \right)
=\mathbb{E} \left( \int_0^1( \mathbf{D}_s F)h(s) ds \right),
and on the other hand, we obtain
\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( \mathbb{E} ( F_\varepsilon)-\mathbb{E} (F) \right)=\mathbb{E}\left( F \int_0^{1} h(s)dB_s\right)
\square
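As a quick illustration (not in the lecture), here is a minimal Monte Carlo check of the integration by parts formula for the cylindric functional F=\left(\int_0^1 h^1(s) dB_s\right)^3, for which \mathbf{D}_sF=3\left(\int_0^1 h^1 dB\right)^2 h^1(s). The functions h and h^1 are arbitrary deterministic choices, and the Itō integrals are approximated by left-point Riemann sums.

```python
import numpy as np

# Check E( int_0^1 (D_s F) h(s) ds ) = E( F int_0^1 h(s) dB_s )
# for F = ( int_0^1 h1 dB )^3, so that D_s F = 3 (int_0^1 h1 dB)^2 h1(s).
rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps
t = np.arange(n_steps) * dt                      # left endpoints of the grid
h1, h = np.sin(np.pi * t), np.ones(n_steps)      # arbitrary deterministic integrands

dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
W1 = dB @ h1                                     # int_0^1 h1 dB  (Ito sums)
Wh = dB @ h                                      # int_0^1 h  dB
F = W1**3
lhs = np.mean(3 * W1**2) * np.sum(h1 * h) * dt   # E( int_0^1 (D_s F) h(s) ds )
rhs = np.mean(F * Wh)
print(lhs, rhs)                                  # both close to 3 ||h1||^2 <h1, h>
```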

Proposition. Let p > 1. As a densely defined operator from L^p into \mathcal{L}^p, \mathbf{D} is closable.

Proof. Let (F_n)_{n \in \mathbb{N}} be a sequence in \mathcal{S} that converges in L^p(\mathcal{F}_{1}) to 0 and such that \mathbf{D}F_n converges in \mathcal{L}^p to X. We want to prove that X=0. Let (h(s))_{0 \le s \le 1} be a bounded progressively measurable process. We have
\lim_{n \to \infty} \mathbb{E} \left( \int_0^1( \mathbf{D}_sF _n)h(s) ds \right)=\mathbb{E} \left( \int_0^1 X_s h(s) ds \right),
and
\lim_{n \to \infty}\mathbb{E}\left( F_n \int_0^{1} h(s)dB_s\right)=0.
As a consequence, we obtain
\mathbb{E} \left( \int_0^1 X_s h(s) ds \right)=0.
Since h is arbitrary, we conclude X=0. \square

The closure of \mathbf{D} in L^p shall still be denoted by \mathbf{D}. Its domain \mathbb{D}^{1,p} is the closure of \mathcal{S} with respect to the norm
\left\| F\right\| _{1,p}=\left( \mathbb{E}\left( |F|^{p}\right) + \mathbb{E}\left( \left\| \mathbf{D} F\right\|_{\mathbf{L}^2 ([0,1], \mathbb{R}^n)}^{p}\right) \right)^{\frac{1}{p}}.
For p > 1, we can consider the adjoint operator \delta of \mathbf{D}. This is a densely defined operator \mathcal{L}^q \to L^q(\mathcal{F}_{1}), with 1/p+1/q=1, which is characterized by the duality formula
\mathbb{E} (F \delta u)=\mathbb{E} \left(\int_0^1 (\mathbf{D}_s F) u_s ds \right) , \quad F \in \mathbb{D}^{1,p}.
From the integration by parts formula and the Burkholder-Davis-Gundy inequalities, it is clear that the domain of \delta in \mathcal{L}^q contains the set of progressively measurable processes (u_t)_{0 \le t \le 1} such that \mathbb{E} \left(\left( \int_0^1 \| u_s \|^2ds\right)^{q/2} \right) < + \infty and in that case, \delta u =\int_0^1 u_s dB_s. The operator \delta can thus be thought of as an extension of the Itō integral. It is often called the Skorokhod integral.

Exercise.(Clark-Ocone formula)
Show that for F \in \mathbb{D}^{1,2},
F=\mathbb{E}(F)+\int_0^1 \mathbb{E} \left( \mathbf{D}_tF \mid \mathcal{F}_t \right)dB_t.
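As a quick sanity check of the formula (an illustration, not the exercise itself), take F=B_1^2 with a one-dimensional Brownian motion: \mathbf{D}_tF=2B_1, so \mathbb{E}(\mathbf{D}_tF \mid \mathcal{F}_t)=2B_t and the formula reads B_1^2=1+2\int_0^1 B_t dB_t, which is exactly what Itō’s formula applied to B_t^2 gives.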


Lecture 31. Then, a miracle occurs


Lecture 30. Stratonovitch stochastic differential equations

As usual, let \left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathbb{P} \right) be a filtered probability space which satisfies the usual conditions. It is often useful to use the language of Stratonovitch integration to study stochastic differential equations, because Itō’s formula then takes a much nicer form. If (N_t)_{0 \leq t \leq T}, T > 0, is an \mathcal{F}-adapted real valued local martingale and if (\Theta_t)_{0 \leq t \leq T} is an \mathcal{F}-adapted continuous semimartingale satisfying \mathbb{P} \left( \int_0^T \Theta_t^2 d \langle N \rangle_t < +\infty \right)=1, then the Stratonovitch integral of (\Theta_t)_{0 \leq t \leq T} with respect to (N_t)_{0 \leq t \leq T} is defined as
\int_0^T \Theta_t \circ d  N_t =\int_0^T \Theta_t  d  N_t+\frac{1}{2} \langle \Theta, N \rangle_T,
where:

  • \int_0^T \Theta_t d  N_t is the Itō integral of (\Theta_t)_{0 \leq t \leq T} against (N_t)_{0 \leq t \leq T};
  • \langle \Theta, N \rangle_T is the quadratic covariation at time T between (\Theta_t)_{0 \leq t \leq T} and (N_t)_{0 \leq t \leq T}.

By using the Stratonovitch integral instead of Itō’s, Itō’s formula reduces to the classical change of variable formula.
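Numerically, the difference between the two integrals is the difference between left-point and trapezoidal Riemann sums (the fact that trapezoidal sums converge to the Stratonovitch integral is standard but not proved here). A minimal sketch comparing \int_0^1 B_t dB_t and \int_0^1 B_t \circ dB_t, consistent with the change of variable formula applied to f(x)=x^2:

```python
import numpy as np

# Left-point (Ito) vs trapezoidal (Stratonovitch) sums for int_0^1 B_t dB_t.
# The Ito sum approaches (B_1^2 - 1)/2, the Stratonovitch sum equals B_1^2 / 2.
rng = np.random.default_rng(0)
n_steps = 100_000
dB = rng.normal(scale=np.sqrt(1.0 / n_steps), size=n_steps)
B = np.concatenate([[0.0], np.cumsum(dB)])        # B_{t_i} on the grid

ito = np.sum(B[:-1] * dB)
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)
print(ito, (B[-1]**2 - 1) / 2)
print(strat, B[-1]**2 / 2)
```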

Theorem. Let (X_t)_{t \geq 0}=\left( X^1_t , \cdots , X^n_t \right)_{t \geq 0} be an n-dimensional continuous semimartingale. Let now f:\mathbb{R}^n \rightarrow \mathbb{R} be a C^2 function. We have
f(X_t)  =f(X_0)+\sum_{i=1}^n \int_0^t \frac{\partial f}{\partial x_i} (X_s) \circ dX^i_s, \quad t \ge 0.

Let \mathcal{O} \subset \mathbb{R}^n be a non empty open set. A smooth vector field V on \mathcal{O} is simply a smooth map
\begin{array}{llll}  V: & \mathcal{O} & \rightarrow  & \mathbb{R}^{n} \\  & x & \rightarrow  & (v_{1}(x),...,v_{n}(x)).  \end{array}
The vector field V defines a differential operator acting on smooth functions f: \mathcal{O} \rightarrow \mathbb{R} as follows:
Vf(x)=\sum_{i=1}^n v_i (x) \frac{\partial f}{\partial x_i}.
We note that V is a derivation, that is a map on \mathcal{C}^{\infty} (\mathcal{O} , \mathbb{R} ), linear over \mathbb{R}, satisfying for f,g \in \mathcal{C}^{\infty} (\mathcal{O} , \mathbb{R} ), V(fg)=(Vf)g +f (Vg).
An interesting result is that, conversely, any derivation on \mathcal{C}^{\infty} (\mathcal{O} , \mathbb{R} ) is a vector field.

Let now (B_t)_{t \geq 0}=(B^1_t,...,B^d_t)_{t \geq 0} be a d-dimensional Brownian motion and consider d+1 vector fields V_i : \mathbb{R}^n \rightarrow \mathbb{R}^n, n \geq 1, i=0,...,d, of class C^1. By using the language of vector fields and Stratonovitch integrals, the fundamental theorem for the existence and uniqueness of solutions of stochastic differential equations is the following:

Theorem. Assume that V_0,V_1,\cdots,V_d are bounded vector fields with bounded derivatives up to order 2. Let x_0 \in \mathbb{R}^n. On \left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathbb{P} \right), there exists a unique continuous and adapted process (X_t^{x_0})_{t \geq 0} such that for t \geq 0,
X_t^{x_0}=x_0 + \sum_{i=0}^d \int_0^t V_i (X_s^{x_0}) \circ dB^i_s,
with the convention that B^0_t=t.

Thanks to Itō’s formula the corresponding Itō’s formulation is
X_t^{x_0} =x_0 + \frac{1}{2} \sum_{i=1}^d \int_0^t \nabla_{V_i}  V_i (X_s^{x_0}) ds +\sum_{i=0}^d \int_0^t V_i (X_s^{x_0}) dB^i_s,
where for 1 \leq i \leq d, \nabla_{V_i} V_i is the vector field given by
\nabla_{V_i} V_i (x)=\sum_{j=1}^n \left( \sum_{k=1}^n v_i^k (x) \frac{\partial v^j_i}{\partial x_k}(x)\right)\frac{\partial}{\partial x_j}, \text{ }x \in \mathbb{R}^n.
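The correction term \frac{1}{2} \sum_{i=1}^d \nabla_{V_i} V_i is easy to compute numerically. A small sketch (the vector fields below are hypothetical and chosen only for illustration) approximating it by finite-difference directional derivatives:

```python
import numpy as np

def ito_correction(V_list, x, eps=1e-6):
    """Drift correction 0.5 * sum_i (D V_i)(x) V_i(x) appearing when a
    Stratonovitch equation is rewritten in Ito form; each (D V_i)(x) V_i(x)
    is approximated by a finite-difference directional derivative."""
    x = np.asarray(x, dtype=float)
    corr = np.zeros_like(x)
    for V in V_list:
        v = V(x)
        corr += 0.5 * (V(x + eps * v) - v) / eps
    return corr

# Hypothetical vector fields on R^2 (d = 2).
V1 = lambda x: np.array([np.sin(x[1]), x[0]])
V2 = lambda x: np.array([x[1]**2, 1.0])
print(ito_correction([V1, V2], [0.3, -0.2]))
```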
If f:\mathbb{R}^n \rightarrow \mathbb{R} is a C^2 function, from Itō’s formula, we have for t \geq 0,
f(X_t^{x_0})=f(x_0) + \sum_{i=0}^d \int_0^t (V_i f) (X_s^{x_0}) \circ dB^i_s,
and the process
\left( f(X_t^{x_0})-\int_0^t (Lf)(X_s^{x_0})ds \right)_{t \geq 0}
is a local martingale where L is the second order differential operator
L = V_0+\frac{1}{2} \sum_{i=1}^d V_i^2.


Lecture 29. The strong Markov property for solutions of stochastic differential equations

In the previous section, we have seen that if (X_t^x)_{t \ge 0} is the solution of a stochastic differential equation
X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s,
then (X_t^x)_{t \ge 0} is a Markov process, that is for every t,T \ge 0,
\mathbb{E}(f(X_{t+T}^x) \mid \mathcal{F}_T)=(P_{t}f )(X_T^x),
where P_tf(x)=\mathbb{E}( f(X_t^x)). It is remarkable that this property still holds when T is now any finite stopping time. This property is called the strong Markov property.
The key lemma is the following:

Lemma. Let (B_t)_{t\ge 0} be a standard Brownian motion and let T be a finite stopping time. The process, (B_{T+t}-B_T)_{t\ge 0} is a standard Brownian motion independent from \mathcal{F}_T.

Proof. Let T be a finite stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}. We first assume T bounded. Let us consider the process \tilde{B}_t=B_{T+t}-B_T, \quad t \ge 0. Let \lambda \in \mathbb{R}, 0\le s \le t. Applying Doob’s stopping theorem to the martingale \left( e^{i\lambda B_t +\frac{\lambda^2}{2} t}\right)_{t \ge 0}, with the stopping times t+T and s+T , yields:
\mathbb{E} \left( e^{i\lambda B_{T+t} +\frac{\lambda^2}{2} (T+t)}\mid \mathcal{F}_{T +s} \right)=e^{i\lambda B_{T+s} +\frac{\lambda^2}{2} (T+s)}.
Therefore
\mathbb{E} \left( e^{i\lambda (B_{T+t} -B_{T+s})}\mid \mathcal{F}_{T+s} \right)=e^{-\frac{\lambda^2}{2} (t-s) }.
The increments of (\tilde{B}_t)_{t \ge 0} are therefore independent and stationary. The conclusion then easily follows. If T is not bounded almost surely, then we can consider the stopping time T \wedge N; from the previous result, the finite dimensional distributions of (B_{ t_1 +T\wedge N}-B_{T \wedge N}, \cdots , B_{ t_n +T\wedge N}-B_{T \wedge N}) do not depend on N and coincide with those of a Brownian motion. We can then let N \to  + \infty to conclude \square

Theorem. For every x \in \mathbb{R}^n, (X_t^{x})_{t\ge 0} is a strong Markov process with semigroup (P_t)_{t \ge 0}: For every Borel function f:\mathbb{R}^n \to \mathbb{R} with polynomial growth, every t \ge 0, and every finite stopping time T,
\mathbb{E}(f(X_{t+T}^x) \mid \mathcal{F}_T)=(P_{t}f )(X_T^x).

Proof. The proof is identical to the proof of the usual Markov property with the additional ingredient given by the previous proposition \square

The strong Markov property for solutions of stochastic differential equations is useful to solve boundary value problems in partial differential equations theory. Let K be a bounded closed set in \mathbb{R}^n. For x \in K, we denote T_x= \inf \{ t \ge 0, X^x_t \in \partial K \}. If f is a bounded Borel function such that f_{| \partial K}=0, we define
P^K_t f(x)=\mathbb{E}\left( f(X^x_t) \mathbf{1}_{t \le T_x } \right).
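As an illustration (not part of the lecture), here is a crude Monte Carlo sketch of P^K_t f(x) in the simplest case where X is a Brownian motion (b=0, \sigma=\mathrm{Id}) and K is the closed unit ball; paths are killed when the discretized trajectory leaves K, which slightly underestimates the killing. The test function f below is an arbitrary choice vanishing on \partial K.

```python
import numpy as np

# Monte Carlo sketch of P_t^K f(x) = E( f(X_t^x) 1_{t <= T_x} ) for Brownian
# motion killed when leaving the closed unit ball K.
rng = np.random.default_rng(0)
n_paths, n_steps, t = 20_000, 400, 0.5
dt = t / n_steps
x0 = np.array([0.2, 0.0])
f = lambda x: np.maximum(1.0 - np.linalg.norm(x, axis=-1), 0.0)   # vanishes on the boundary

X = np.tile(x0, (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    X = X + rng.normal(scale=np.sqrt(dt), size=X.shape)
    alive &= np.linalg.norm(X, axis=1) < 1.0   # kill paths that reached the boundary
print(np.mean(f(X) * alive))                    # estimate of P_t^K f(x0)
```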
The proof of the following theorem is left to the reader.

Theorem. Let f:K \to \mathbb{R} be a bounded Borel function and assume that the function u(t,x)=(P^K_tf)(x) is C^{1,2}. Then u is the unique solution of the Dirichlet boundary value problem
\frac{\partial u}{\partial t} (t,x)=Lu(t,x)
in [0,+\infty) \times \mathrm{int}\, K , with the initial condition
u(0,x)=f(x),
and the boundary condition
u(t,x)=0, \quad x \in \partial K.


Lecture 28. The Feynman-Kac formula

It is now time to give some applications of the theory of stochastic differential equations to parabolic second order partial differential equations. In particular we are going to prove that solutions of such equations can be represented by using solutions of stochastic differential equations. This representation formula is called the Feynman-Kac formula.
As usual, we consider a filtered probability space \left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathcal{F},\mathbb{P} \right) which satisfies the usual conditions and on which is defined an n-dimensional Brownian motion (B_t)_{t \ge 0}. Again, we consider two functions b : \mathbb{R}^n \to \mathbb{R}^n and \sigma: \mathbb{R}^n \to \mathbb{R}^{n \times n} and we assume that there exists C > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le C \| x-y \|, x,y \in \mathbb{R}^n.
Let L be the second order differential operator
L=\sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} +\frac{1}{2}\sum_{i,j=1}^n a_{ij}(x)\frac{\partial^2}{\partial x_i \partial x_j} ,
where a_{ij}(x)=(\sigma(x)\sigma^*(x))_{ij}.

As we know, there exists a bicontinuous process (X_t^{x})_{t\ge 0, x \in \mathbb{R}^n} such that for t \ge 0,
X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s.
Moreover, as it has been stressed before, for every p \ge 1, and T \ge 0
\mathbb{E} \left( \sup_{0 \le t \le T} \| X^x_t \|^p\right) < +\infty.
As a consequence, if f:\mathbb{R}^n \to \mathbb{R} is a Borel function with polynomial growth, we can consider the function
P_t f (x) =\mathbb{E}(f(X_t^x)).
Theorem. For every x \in \mathbb{R}^n, (X_t^{x})_{t\ge 0} is a Markov process with semigroup (P_t)_{t \ge 0}. More precisely, for every Borel function f:\mathbb{R}^n \to \mathbb{R} with polynomial growth and every t \ge s,
\mathbb{E}(f(X_t^x) \mid \mathcal{F}_s)=(P_{t-s}f )(X_s^x).

Proof. The key point here is to observe that solutions are actually adapted to the natural filtration of the Brownian motion (B_t)_{t \ge 0}. More precisely, there exists on the space of continuous functions [0,+\infty) \rightarrow \mathbb{R}^n a predictable functional F such that for t \ge 0:
X_t^{x_0}=F(x_0, (B_u)_{0 \le u \le t}).
Indeed, let us first work on [0,T] where T is small enough. In that case, as seen previously, the process (X^{x_0}_t)_{0 \le t \le T} is the unique fixed point of the application \Phi defined by
\Phi(X)_t =x_0+\int_0^t b(X_s) ds + \int_0^t \sigma(X_s) dB_s.

Alternatively, one can interpret this by observing that (X^{x_0}_t)_{0 \le t \le T} is the limit of the sequence of processes (X^{n}_t)_{0 \le t \le T} inductively defined by
X^{n+1}=\Phi (X^n), \quad X^0=x_0.
It is easily checked that for each X^n there is a predictable functional F_n such that
X_t^{n}=F_n(x_0, (B_u)_{0 \le u \le t}),
which proves the above claim when T is small enough. To get the existence of F for any T, we can proceed inductively, patching together the functionals obtained on successive small time intervals.

With this in hand, we can now prove the Markov property. Let s \ge 0. For t \ge 0, we have
X_{s+t}^{x_0}  =X_s^{x_0} +\int_s^{s+t} b(X_u^{x_0}) du + \int_s^{s+t} \sigma(X_u^{x_0}) dB_u
=X_s^{x_0} +\int_0^{t} b(X_{u+s}^{x_0}) du + \int_0^{t} \sigma(X_{u+s}^{x_0}) d(B_{u+s}-B_s).
Consequently, from uniqueness of solutions,
X_{s+t}^{x_0}=F(X^{x_0}_s, (B_{u+s}-B_s)_{0 \le u \le t}).
We deduce that for a Borel function f:\mathbb{R}^n \rightarrow \mathbb{R} with polynomial growth,
\mathbb{E} \left( f(X_{s+t}^{x_0}) \mid \mathcal{F}_s \right)=\mathbb{E} \left( f(F(X^{x_0}_s, (B_{u+s}-B_s)_{0 \le u \le t}))\mid  \mathcal{F}_s \right)=P_t f (X^{x_0}_s),
because (B_{u+s}-B_s)_{0 \le u \le t} is a Brownian motion independent of \mathcal{F}_s \square

Theorem Let f:\mathbb{R}^n \to \mathbb{R} be a Borel function with polynomial growth and assume that the function
u(t,x)=(P_tf)(x)
is C^{1,2}, that is once differentiable with respect to t and twice differentiable with respect to x. Then u solves the Cauchy problem
\frac{\partial u}{\partial t} (t,x)=Lu(t,x)
in [0,+\infty) \times \mathbb{R}^n , with the initial condition u(0,x)=f(x).

Proof. Let T > 0 and consider the function v(t,x)=u(T-t,x). According to the previous theorem, we have
\mathbb{E}(f(X_T^x) \mid \mathcal{F}_t)=v(t,X_t^x).
As a consequence, the process v(t,X_t^x) is a martingale. But from Itō’s formula the bounded variation part of v(t,X_t^x) is \int_0^t \left( \frac{\partial v}{\partial t}(s, X_s^x) + L v(s, X_s^x) \right)ds, which is therefore 0. We conclude
\frac{\partial v}{\partial t}(0, x) + L v(0, x)=\lim_{t \to 0} \frac{1}{t} \int_0^t \left( \frac{\partial v}{\partial t}(s, X_s^x) + L v(s, X_s^x) \right)ds=0,
that is, \frac{\partial u}{\partial t}(T,x)=Lu(T,x). Since T > 0 is arbitrary, u solves the Cauchy problem \square

Exercise Show that if f is a C^2 function such that \nabla f and \nabla^2 f have polynomial growth, then the function P_tf(x) is C^{1,2}. Here, we denote by \nabla^2 f the Hessian matrix of f.

Theorem. Let f:\mathbb{R}^n \to \mathbb{R} be a Borel function with polynomial growth. Let u:[0,+\infty)\times \mathbb{R}^n \to \mathbb{R} be a solution of the Cauchy problem
\frac{\partial u}{\partial t} (t,x)=Lu(t,x)
with the initial condition u(0,x)=f(x).
If there exists a locally integrable function C and p \ge 0, such that for every t \ge 0 and x \in \mathbb{R}^n,
\| \nabla u (t,x) \| \le C(t) (1+\|x \|^p),
then u(t,x)= P_t f(x).

Proof. Let T > 0 and, as before, consider the function v(t,x)=u(T-t,x). As a consequence of Itō’s formula, we have
v(t,X_t^x)=u(T,x)+M_t,
where M_t is a local martingale with quadratic variation \sum_{i,j=1}^n \int_0^t a_{ij}(X_s^x) \frac{\partial v}{\partial x_i}(s,X_s^x)  \frac{\partial v}{\partial x_j} (s,X_s^x) ds. The conditions on \sigma and u imply that this quadratic variation is integrable. As a consequence, v(t,X_t^x) is a martingale and thus \mathbb{E} (v(T,X_T^x))=u(T,x), that is, P_Tf(x)=u(T,x) \square

The previous results may be extended to study parabolic equations with potential as well. More precisely, let V:\mathbb{R}^n \to \mathbb{R} be a bounded function. If f:\mathbb{R}^n \to \mathbb{R} is a Borel function with polynomial growth, we define
P^V_t f (x) =\mathbb{E}\left(e^{\int_0^t V(X_s^x) ds }f(X_t^x)\right).
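As an illustration, here is a minimal Monte Carlo sketch of this Feynman-Kac representation for a one-dimensional equation; the coefficients b and \sigma, the bounded potential V and the test function f below are arbitrary choices made for the example.

```python
import numpy as np

# Monte Carlo sketch of P_t^V f(x) = E( exp(int_0^t V(X_s^x) ds) f(X_t^x) ),
# with an Euler scheme for X and a left-point Riemann sum for int_0^t V(X_s) ds.
rng = np.random.default_rng(0)
n_paths, n_steps, t = 50_000, 200, 1.0
dt = t / n_steps

b = lambda x: -x                       # hypothetical drift
sigma = 1.0                            # constant diffusion coefficient
V = lambda x: -0.5 * np.tanh(x)**2     # bounded potential
f = np.cos                             # test function with polynomial growth

X = np.full(n_paths, 0.3)              # X_0 = x = 0.3
int_V = np.zeros(n_paths)
for _ in range(n_steps):
    int_V += V(X) * dt
    X = X + b(X) * dt + sigma * rng.normal(scale=np.sqrt(dt), size=n_paths)
print(np.mean(np.exp(int_V) * f(X)))   # estimate of P_t^V f(0.3)
```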
The same proofs as before will give the following theorems.

Theorem. For every x \in \mathbb{R}^n and every Borel function f:\mathbb{R}^n \to \mathbb{R} with polynomial growth and every t \ge s,
\mathbb{E}\left( e^{\int_0^t V(X_u^x) du } f(X_t^x) \mid \mathcal{F}_s\right)=e^{\int_0^s V(X_u^x) du }(P^V_{t-s}f )(X_s^x).

Theorem. Let f:\mathbb{R}^n \to \mathbb{R} be a Borel function with polynomial growth and assume that the function
u(t,x)=(P^V_tf)(x)
is C^{1,2}, that is once differentiable with respect to t and twice differentiable with respect to x. Then u solves the Cauchy problem
\frac{\partial u}{\partial t} (t,x)=Lu(t,x)+V(x)u(t,x)
in [0,+\infty) \times \mathbb{R}^n , with the initial condition
u(0,x)=f(x).

Theorem. Let f:\mathbb{R}^n \to \mathbb{R} be a Borel function with polynomial growth. Let u:[0,+\infty)\times \mathbb{R}^n \to \mathbb{R} be a solution of the Cauchy problem
\frac{\partial u}{\partial t} (t,x)=Lu(t,x)+V(x)u(t,x)
with the initial condition u(0,x)=f(x). If there exists a locally integrable function C and p \ge 0, such that for every t \ge 0 and x \in \mathbb{R}^n,
\| \nabla u (t,x) \| \le C(t) (1+\|x \|^p),
then u(t,x)= P^V_t f(x).
