Lecture 27. Stochastic differential equations. Regularity of the flow

In this lecture, we study the regularity of the solution of a stochastic differential equation with respect to its initial condition. The key tool is a multidimensional-parameter extension of the Kolmogorov continuity theorem, whose proof is almost identical to the one-dimensional case.

Theorem. Let (\Theta_x)_{x \in [0,1]^d} be an n-dimensional stochastic process such that there exist positive constants \gamma, C, \varepsilon such that for every x,y \in [0,1]^d,
\mathbb{E} \left( \| \Theta_x -\Theta_y \|^\gamma \right)\le C \| x -y \|^{d +\varepsilon}.
There exists a modification (\tilde{\Theta}_x)_{x \in [0,1]^d} of the process (\Theta_x)_{x \in [0,1]^d} such that for every \alpha \in [0, \varepsilon/\gamma) there exists a finite random variable K_\alpha such that for every x,y \in [0,1]^d
\| \tilde{\Theta}_x - \tilde{\Theta}_y \| \le K_\alpha \| x-y \|^\alpha.

As in the previous lecture, we consider two functions b : \mathbb{R}^n \to \mathbb{R}^n and \sigma: \mathbb{R}^n \to \mathbb{R}^{n \times n}, and we assume that there exists C > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le C \| x-y \|, x,y \in \mathbb{R}^n.
As we already know, for every x \in \mathbb{R}^n, there exists a continuous and adapted process (X_t^{x})_{t\ge 0} such that for t \ge 0,
X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s.

Proposition. Let T > 0 and K be a compact set in \mathbb{R}^n. For every p \ge 2, there exists a constant C_{p,T,K} > 0 such that for every 0 \le s \le t \le T and x,y \in K,
\mathbb{E} \left( \| X^x_t-X^y _s \|^p \right)\le C_{p,T,K} \left( \| x-y \|^p +|t-s|^{p/2} \right).
As a consequence, there exists a modification (\tilde{X}_t^{x})_{t\ge 0, x\in \mathbb{R}^n} of the process (X_t^{x})_{t\ge 0, x\in \mathbb{R}^n} such that for t \ge 0, x \in \mathbb{R}^n,
\tilde{X}_t^{x} =x +\int_0^t b(\tilde{X}_s^{x}) ds + \int_0^t \sigma(\tilde{X}_s^{x}) dB_s,
and such that (t,x) \to \tilde{X}^x_t (\omega) is continuous for almost every \omega.
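Numerically, this Lipschitz dependence on the initial condition can be observed with a simple Euler-Maruyama scheme. The following Python sketch is our own illustration, not part of the lecture; the name euler_flow and the chosen coefficients b, \sigma are arbitrary Lipschitz examples. Two nearby initial points are driven by the same Brownian path:

```python
import math
import random

def euler_flow(x0, dW, dt, b, sigma):
    """Euler-Maruyama approximation of X_T^{x0} driven by fixed increments dW."""
    x = x0
    for dw in dW:
        x = x + b(x) * dt + sigma(x) * dw
    return x

random.seed(0)
dt, n = 1e-3, 1000                       # horizon T = 1
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]  # one Brownian path

b = lambda x: -x                         # Lipschitz drift (our choice)
sigma = lambda x: math.sin(x)            # Lipschitz diffusion coefficient (our choice)

# Same Brownian path, nearby initial points: the flow keeps them close,
# consistently with the Lipschitz-in-x estimate of the proposition.
xa = euler_flow(1.0, dW, dt, b, sigma)
xb = euler_flow(1.0 + 1e-4, dW, dt, b, sigma)
print(abs(xa - xb))                      # stays of order 1e-4
```

The separation of the two trajectories remains comparable to the initial separation, as the estimate \mathbb{E}(\|X_t^x - X_t^y\|^p) \le \phi(t)\|x-y\|^p predicts.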

Proof. As before, we can find K > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le K \| x-y \|, x,y \in \mathbb{R}^n;
and \| b(x) \| + \| \sigma (x) \| \le K (1 +\| x \|), x \in \mathbb{R}^n.

We fix x,y \in \mathbb{R}^n and p \ge 2. Let
h(t)=\mathbb{E} \left( \|X_t^x-X_t^y\|^p \right).
By using the inequality \| a +b+c \|^p \le 3^{p-1} ( \| a \|^p + \| b \|^p +\| c\|^p ), we obtain
\|X_t^x-X_t^y\|^p \le 3^{p-1} \left( \| x-y \|^p +\left(\int_0^t \| b(X_s^x)-b(X_s^y) \| ds \right)^p + \left\| \int_0^t ( \sigma(X_s^x) -\sigma(X_s^y))dB_s \right\|^p \right).
We now have
\left(\int_0^t \| b(X_s^x)-b(X_s^y) \| ds \right)^p\le t^{p-1} \int_0^t \| b(X_s^x)-b(X_s^y) \|^p ds\le K^p t^{p-1} \int_0^t \| X_s^x-X_s^y \|^p ds,
and from the Burkholder-Davis-Gundy inequality,
\mathbb{E} \left( \left\| \int_0^t ( \sigma(X_s^x) -\sigma(X_s^y))dB_s \right\|^p \right) \le C_p \mathbb{E} \left( \left( \int_0^t \| \sigma(X_s^x) -\sigma(X_s^y) \|^2 ds \right)^{p/2} \right)
\le C_p K^p \mathbb{E} \left( \left( \int_0^t \| X_s^x -X_s^y \|^2 ds \right)^{p/2} \right)
\le C_p K^p t^{p/2 -1} \mathbb{E} \left( \int_0^t \| X_s^x -X_s^y \|^p ds \right).
As a conclusion we obtain
h(t) \le 3^{p-1} \left( \| x-y \|^p +(K^p t^{p-1}+C_p K^p t^{p/2 -1}) \int_0^t h(s) ds \right).
Gronwall’s inequality then yields
h(t)\le \phi(t) \| x-y \|^p,
where \phi is a continuous function.

We have for 0\le s \le t \le T,
\| X_t^x -X_s^x \|^p \le 2^{p-1}\left( \left\| \int_s^t b(X_u^{x}) du\right\|^p + \left\| \int_s^t \sigma(X_u^{x}) dB_u \right\|^p\right),
and
\left\| \int_s^t b(X_u^{x}) du\right\|^p \le K^p (t-s)^p \left( 1+ \sup_{0 \le u \le T} \| X^x_u \|\right)^p,
\mathbb{E} \left( \left\| \int_s^t \sigma(X_u^{x}) dB_u \right\|^p\right) \le C_p \mathbb{E} \left( \left( \int_s^t \|\sigma(X_u^{x})\|^2 du \right)^{p/2} \right)
\le C_pK^p (t-s)^{p/2} \mathbb{E} \left( \left( 1+ \sup_{0 \le u \le T} \| X^x_u \| \right)^{p} \right).
The conclusion then easily follows by combining the two previous estimates \square

In the sequel, of course, we shall always work with this bicontinuous version of the solution.

Definition. The process of continuous maps \Psi_t: x \to X_t^x is called the stochastic flow associated with the equation.

If the maps b and \sigma are moreover C^1, then the stochastic flow is itself differentiable, and the equation for the derivative can be obtained by formally differentiating the equation with respect to the initial condition. We will admit this result without proof:

Theorem. Let us assume that b and \sigma are C^1 Lipschitz functions. Then, for every t \ge 0, the flow \Psi_t associated to the equation is a flow of differentiable maps. Moreover, the first variation process (J_t)_{t \ge 0}, defined as the Jacobian matrix \frac{\partial \Psi_t}{\partial x} (x), is the unique solution of the matrix stochastic differential equation:
J_t=\mathbf{Id}+\int_0^t \frac{\partial b}{\partial x} (X_s^x)J_s ds+\sum_{i=1}^n \int_0^t \frac{\partial \sigma_i }{\partial x} (X_s^x) J_s dB^i_s.
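The variational equation can be solved numerically alongside the original equation. The one-dimensional sketch below is our own illustration (the helper solve and the coefficients are hypothetical choices); it compares the Euler approximation of J_t with a finite-difference derivative of the flow computed from the same Brownian increments:

```python
import math
import random

random.seed(3)
dt, n = 1e-3, 1000                        # horizon T = 1
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
b, db = (lambda x: -x), (lambda x: -1.0)  # drift and its derivative (our choice)
sig, dsig = math.sin, math.cos            # diffusion coefficient and its derivative

def solve(x):
    """Euler scheme for X_t together with J_t = dX_t/dx (variational equation)."""
    j = 1.0
    for dw in dW:
        # both updates use the coefficients evaluated at the current state
        x, j = (x + b(x) * dt + sig(x) * dw,
                j + db(x) * j * dt + dsig(x) * j * dw)
    return x, j

x0, h = 0.8, 1e-6
xT, jT = solve(x0)
fd = (solve(x0 + h)[0] - solve(x0 - h)[0]) / (2 * h)   # finite-difference flow derivative
print(jT, fd)                                          # the two estimates agree
```

Differentiating the discrete Euler recursion in x gives exactly the discrete recursion for J, which is why the two numbers match closely.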

Posted in Stochastic Calculus lectures

Lecture 26. Stochastic differential equations. Existence and uniqueness of solutions

We now turn to the theory of stochastic differential equations, which are the differential equations associated with the theory of stochastic integration.

As usual, we consider a filtered probability space \left( \Omega , (\mathcal{F}_t)_{t \geq 0}, \mathcal{F},\mathbb{P} \right) which satisfies the usual conditions and on which is defined an n-dimensional Brownian motion (B_t)_{t \ge 0}. Let b :\mathbb{R}^n \to \mathbb{R}^n and \sigma: \mathbb{R}^n \to \mathbb{R}^{ n \times n} be functions.

Theorem. Let us assume that there exists C > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le C \| x-y \|, \quad x,y \in \mathbb{R}^n.
Then, for every x_0 \in \mathbb{R}^n, there exists a unique continuous and adapted process (X_t^{x_0})_{t\ge 0} such that for t \ge 0
X_t^{x_0} =x_0 +\int_0^t b(X_s^{x_0}) ds + \int_0^t \sigma (X_s^{x_0}) dB_s.
Moreover, for every T \ge 0,
\mathbb{E} \left( \sup_{0 \le s \le T} \| X^{x_0}_s \|^2 \right) <+\infty .

Proof.
Let us first observe that from our assumptions, there exists K > 0 such that

  • \| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le K \| x-y \|, x,y \in \mathbb{R}^n;
  • \| b(x) \| + \| \sigma (x)  \| \le K (1 +\| x \|), x \in \mathbb{R}^n.

The idea is to apply a fixed point theorem in a convenient Banach space.
For T > 0, let us consider the space \mathcal{E}_T of continuous and adapted processes such that
\mathbb{E} \left( \sup_{0 \le s \le T} \| X_s \|^2 \right) < +\infty .
We endow this space with the norm
\parallel X \parallel^2 =\mathbb{E} \left( \sup_{0 \le s \le T} \| X_s \|^2 \right).
It is easily seen that (\mathcal{E}_T, \parallel \cdot \parallel) is a Banach space.

Step one: We first prove that if a continuous and adapted process (X^{x_0}_t)_{t \ge 0} is a solution of the equation then, for every T >0, (X^{x_0}_t)_{0 \le t \le T} \in \mathcal{E}_T.

Let us fix T > 0 and consider for n \in \mathbb{N} the stopping times T_n =\inf \{ t \ge 0, \| X^{x_0}_t \| > n \}. For t \le T,
X_{t \wedge T_n}^{x_0} =x_0 +\int_0^{t \wedge T_n} b(X_s^{x_0}) ds + \int_0^{t \wedge T_n} \sigma (X_s^{x_0}) dB_s.
Therefore, by using the inequality \|a+b+c\|^2 \le 3 (\|a\|^2 +\|b\|^2+\|c\|^2), we get
\left\| X_{t \wedge T_n}^{x_0} \right\|^2 \le 3 \left( \|x_0\|^2 +\left\| \int_0^{t \wedge T_n} b(X_s^{x_0}) ds \right\|^2 + \left\| \int_0^{t \wedge T_n} \sigma(X_s^{x_0}) dB_s\right\|^2 \right).
Thus, we have
\mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n} \left\| X_{u}^{x_0} \right\|^2 \right)
\le 3 \left( \|x_0\|^2 + \mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n}  \left\| \int_0^{u \wedge T_n} b(X_s^{x_0}) ds \right\|^2 \right) + \mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n} \left\| \int_0^{u \wedge T_n} \sigma(X_s^{x_0}) dB_s\right\|^2 \right) \right)
By using our assumptions, we first estimate
\mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n}  \left\| \int_0^{u \wedge T_n} b(X_s^{x_0}) ds \right\|^2 \right) \le K^2 \mathbb{E} \left( \left( \int_0^{t \wedge T_n} (1+\| X_s^{x_0} \| ) ds \right)^2 \right).
By using our assumptions and Doob’s inequality, we now estimate
\mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n} \left\| \int_0^{u \wedge T_n} \sigma(X_s^{x_0}) dB_s\right\|^2 \right) \le 4 K^2 \mathbb{E} \left( \int_0^{t \wedge T_n} (1 +\| X_s^{x_0} \|)^2 ds \right).
Therefore, from the inequality \|a+b\|^2 \le 2 (\|a\|^2 +\|b\|^2), we get
\mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n} \left\| X_{u }^{x_0} \right\|^2 \right)
\le 3 \left( \|x_0\|^2 +2(K^2 T +4K^2) \int_0^t \left(1+\mathbb{E} \left( \sup_{0 \le u \le s \wedge T_n} \left\| X_{u}^{x_0} \right\|^2 \right)\right) ds  \right).
We may now apply Gronwall’s lemma to the function t \rightarrow \mathbb{E} \left( \sup_{0 \le u \le t \wedge T_n} \left\| X_{u }^{x_0} \right\|^2 \right) and deduce
\mathbb{E} \left( \sup_{0 \le u \le T \wedge T_n} \left\| X_{u }^{x_0} \right\|^2 \right) \le C
where C is a constant that does not depend on n. By letting n \rightarrow +\infty, Fatou’s lemma implies that
\mathbb{E} \left( \sup_{0 \le u \le T} \left\| X_{u }^{x_0} \right\|^2 \right) \le C.
We conclude, as expected, that (X^{x_0}_t)_{0 \le t \le T} \in \mathcal{E}_T.
More generally, by using the same arguments we can observe that if a continuous and adapted process satisfies
X_t =X_0 +\int_0^t b(X_s) ds + \int_0^t \sigma (X_s) dB_s,
with \mathbb{E} (\| X_0 \|^2) < +\infty, then (X_t)_{0 \le t \le T} \in \mathcal{E}_T.

Step 2: We now show existence and uniqueness of solutions for the equation on a time interval [0,T] where T is small enough.

Let us consider the map \Phi that sends a continuous and adapted process (X_t)_{0 \le t \le T} to the process
\Phi (X)_t =x_0 +\int_0^t b(X_s) ds + \int_0^t \sigma (X_s) dB_s.
By using successively the inequality (a+b)^2 \le 2(a^2 +b^2), the Cauchy-Schwarz inequality and Doob’s inequality, we get
\parallel \Phi (X) - \Phi (Y) \parallel^2 \le 2(K^2T^2 +4K^2T) \parallel X - Y \parallel^2.
Moreover, arguing in the same way as above, we can prove that \parallel \Phi (0) \parallel^2 \le 3 (\|x_0\|^2+K^2T^2 +4K^2T).
Therefore, if T is small enough, \Phi is a map \mathcal{E}_T \rightarrow \mathcal{E}_T whose Lipschitz constant is strictly less than 1. Consequently, it has a unique fixed point. This fixed point is, of course, the unique solution of the equation on the time interval [0,T]. Here again, we can observe that the same reasoning applies if x_0 is replaced by a random variable X_0 that satisfies \mathbb{E}(\|X_0\|^2)<+\infty.

Step 3.
In order to get a solution of the equation on [0,+\infty), we may apply the previous step to get a solution on each interval [T_{n},T_{n+1}], where T_{n+1}-T_n is small enough and T_n \rightarrow +\infty. This provides a solution of the equation on [0,+\infty), and this solution is unique, from the uniqueness on each interval [T_{n},T_{n+1}].
\square
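The fixed point argument of Step 2 can be visualized numerically: on a short horizon, the discrete Picard iterates X^{(k+1)} = \Phi(X^{(k)}) converge rapidly. The Python sketch below is our own illustration (the coefficients and the name Phi are assumptions), with the Brownian increments frozen once and for all:

```python
import math
import random

random.seed(2)
dt, n, x0 = 0.01, 25, 1.0                 # short horizon T = 0.25, as in Step 2
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
b = lambda x: -x                          # Lipschitz coefficients (our choice)
sigma = lambda x: 0.2 * math.cos(x)

def Phi(X):
    """Discrete Picard map: (Phi X)_{t_k} = x0 + sum b(X) dt + sum sigma(X) dW."""
    out, acc = [x0], x0
    for i in range(n):
        acc = acc + b(X[i]) * dt + sigma(X[i]) * dW[i]
        out.append(acc)
    return out

X = [x0] * (n + 1)                        # start the iteration from the constant path
gaps = []
for _ in range(8):
    Y = Phi(X)
    gaps.append(max(abs(u - v) for u, v in zip(X, Y)))
    X = Y
print(gaps)   # sup-distances between successive iterates decay rapidly
```

The decay of the successive sup-distances mirrors the contraction of \Phi on \mathcal{E}_T for small T.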

Definition. An equation of the type considered in the previous theorem is called a stochastic differential equation.

Exercise: (Ornstein-Uhlenbeck process) Let \theta \in \mathbb{R}. We consider the following stochastic differential equation,
dX_t=\theta X_t dt +dB_t, \quad X_0=x.

  • Show that it admits a unique solution that is given by
    X_t= e^{\theta t} x+\int_0^t e^{\theta (t-s)} dB_s.
  • Show that (X_t)_{t \ge 0} is a Gaussian process. Compute its mean and covariance function.
  • Show that if \theta <0 then, when t \to +\infty, X_t converges in distribution to a Gaussian distribution with mean 0 and variance -\frac{1}{2\theta}.
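As a numerical sanity check of the last question (our own sketch, not part of the exercise), one can sample X_t from the explicit solution, whose law is Gaussian with mean e^{\theta t}x and variance (e^{2\theta t}-1)/(2\theta), and compare the empirical variance with 1/(2|\theta|):

```python
import math
import random

random.seed(1)
theta, x, t, N = -1.0, 2.0, 5.0, 200_000
# From the explicit solution, X_t is Gaussian with mean e^{theta t} x and
# variance (e^{2 theta t} - 1) / (2 theta).
var_t = (math.exp(2 * theta * t) - 1.0) / (2 * theta)
samples = [math.exp(theta * t) * x + math.sqrt(var_t) * random.gauss(0.0, 1.0)
           for _ in range(N)]
mean = sum(samples) / N
var = sum((s - mean) ** 2 for s in samples) / N
print(mean, var)   # mean close to e^{-5} * 2, variance close to 1/(2|theta|) = 0.5
```

For t moderately large the empirical mean is already nearly 0 and the empirical variance nearly 1/(2|\theta|), the variance of the limiting Gaussian distribution.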

Exercise. (Brownian bridge) We consider for 0\le t < 1 the following stochastic differential equation
dX_t =-\frac{X_t}{1-t}dt +dB_t, \quad X_0=0.

  • Show that
    X_t=(1-t)\int_0^t \frac{dB_s}{1-s}
    is the unique solution.
  • Deduce that (X_t)_{0 \le t < 1} is a Gaussian process. Compute its mean and covariance function.
  • Show that in L^2, when t \to 1, X_t \to 0.
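A quick Monte Carlo check of the variance t(1-t) of the Brownian bridge (our own sketch, not part of the exercise; the discretization parameters are arbitrary), obtained by discretizing the stochastic integral in the stated solution:

```python
import math
import random

random.seed(4)
dt, n, N = 0.0025, 200, 5000           # simulate up to t = n*dt = 0.5, N paths
t = n * dt
vals = []
for _ in range(N):
    integral, s = 0.0, 0.0
    for _ in range(n):
        # Riemann sum approximation of int_0^t dB_s / (1 - s)
        integral += random.gauss(0.0, math.sqrt(dt)) / (1.0 - s)
        s += dt
    vals.append((1.0 - t) * integral)  # X_t = (1-t) int_0^t dB_s / (1-s)
m = sum(vals) / N
v = sum((u - m) ** 2 for u in vals) / N
print(m, v)                            # mean close to 0, variance close to t(1-t) = 0.25
```

The empirical variance matches the covariance function computed in the exercise, Var(X_t)=t(1-t).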

Exercise. Let \mu \in \mathbb{R} and \sigma > 0. We consider the following stochastic differential equation,
dX_t =\mu X_t dt +\sigma X_t dB_t, \quad X_0=x > 0.
Show that
X_t=xe^{\sigma B_t +(\mu-\frac{\sigma^2}{2})t}
is the unique solution.
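A short Monte Carlo check (our own sketch, not part of the exercise) that the explicit solution has mean \mathbb{E}(X_t)=x e^{\mu t}, as expected for this geometric Brownian motion:

```python
import math
import random

random.seed(5)
x, mu, sig, t, N = 1.0, 0.1, 0.2, 1.0, 200_000
# Sample X_t = x exp(sig B_t + (mu - sig^2/2) t) with B_t ~ N(0, t).
est = sum(x * math.exp(sig * math.sqrt(t) * random.gauss(0.0, 1.0)
                       + (mu - sig ** 2 / 2.0) * t)
          for _ in range(N)) / N
print(est)   # close to x * exp(mu * t)
```

The -\frac{\sigma^2}{2}t correction in the exponent is exactly what makes e^{-\mu t}X_t a martingale, hence \mathbb{E}(X_t)=x e^{\mu t}.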

The next proposition shows that solutions of stochastic differential equations are intrinsically related to a second order differential operator. This connection will later be investigated in more detail.

Proposition. Let (X_t^x)_{t \ge 0} be the solution of a stochastic differential equation
X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s,
where b : \mathbb{R}^n \to \mathbb{R}^n and \sigma: \mathbb{R}^n \to \mathbb{R}^{ n \times n} are Borel functions. Let now f : \mathbb{R}^n \to \mathbb{R} be a C^2 function. The process
M^f_t=f(X_t^x)-\int_0^t Lf (X_s^x)ds,
is a local martingale, where L is the second order differential operator
L=\sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} +\frac{1}{2}\sum_{i,j=1}^n a_{ij}(x) \frac{\partial^2}{\partial x_i \partial x_j} ,
and a_{ij}(x)=(\sigma(x)\sigma^*(x))_{ij}.
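In dimension one, the action of L can be checked numerically with finite differences. The sketch below is our own illustration (the helper L and the Ornstein-Uhlenbeck coefficients are assumptions); it compares the finite-difference evaluation with the closed form Lf(x)=2\theta x^2+1 for f(x)=x^2, b(x)=\theta x, \sigma(x)=1:

```python
def L(f, b, sigma, x, h=1e-4):
    """Generator of dX = b(X)dt + sigma(X)dB in dimension one, via central differences."""
    fp = (f(x + h) - f(x - h)) / (2.0 * h)              # f'(x)
    fpp = (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2   # f''(x)
    return b(x) * fp + 0.5 * sigma(x) ** 2 * fpp

# Ornstein-Uhlenbeck generator applied to f(x) = x^2: L f(x) = 2 theta x^2 + 1.
theta, x = -0.7, 1.3
val = L(lambda y: y * y, lambda y: theta * y, lambda y: 1.0, x)
print(val, 2.0 * theta * x ** 2 + 1.0)   # the two values agree
```

For a quadratic f the central differences are exact up to rounding, so the numerical and analytic values of Lf(x) coincide.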


Lecture 25. Girsanov theorem

In this section, we describe a theorem which has far-reaching consequences in mathematical finance: the Girsanov theorem. It describes the impact of a change of probability measure on stochastic calculus.

Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) be a filtered probability space. We assume that (\mathcal{F}_t)_{t \ge 0} is the usual completion of the filtration of a Brownian motion (B_t)_{t \ge 0}. Let \mathbb{Q} be a probability measure on \mathcal{F}_\infty which is equivalent to \mathbb{P}. We denote by D the density of \mathbb{Q} with respect to \mathbb{P}.

Theorem (Girsanov theorem) There exists a progressively measurable process \left( \Theta_t \right)_{t \geq 0} such that for every t \ge 0, \mathbb{P} \left( \int_0^t \Theta_s^2ds < +\infty \right)=1 and \mathbb{E} \left( D \mid \mathcal{F}_t \right)=\exp \left( \int_0^t \Theta_s dB_s - \frac{1}{2} \int_0^t \Theta_s ^2 ds \right). Moreover, the process B_t - \int_0^t \Theta_s ds is a Brownian motion on the filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{Q}). As a consequence, a continuous and adapted process (X_t)_{t \ge 0} is a \mathbb{P}-semimartingale if and only if it is a \mathbb{Q}-semimartingale.

Proof. Since \mathbb{P} and \mathbb{Q} are equivalent on \mathcal{F}_\infty, they are of course also equivalent on \mathcal{F}_t for every t \ge 0. The density of \mathbb{Q}_{/ \mathcal{F}_t} with respect to \mathbb{P}_{/ \mathcal{F}_t} is given by D_t=\mathbb{E}^{\mathbb{P}} \left( D \mid \mathcal{F}_t \right). As a consequence, the process D_t is a positive martingale. From Itō’s representation theorem, we therefore deduce that there exists a progressively measurable process (u_t)_{t \ge 0} such that D_t=1+\int_0^t u_sdB_s. Let now \Theta_t=\frac{u_t}{D_t}. We have then,
D_t=1+\int_0^t \Theta_s D_s dB_s.
By applying Itō’s formula to the process D_t \exp \left( -\int_0^t \Theta_s dB_s +\frac{1}{2} \int_0^t \Theta_s ^2 ds \right), we see that this implies
D_t=\exp \left( \int_0^t \Theta_s dB_s - \frac{1}{2} \int_0^t \Theta_s ^2 ds \right).
We now want to prove that the process B_t - \int_0^t \Theta_s ds is a \mathbb{Q}-Brownian motion. It is clear that the \mathbb{Q}-quadratic variation of this process is t. From Lévy’s characterization theorem, we therefore just need to prove that it is a \mathbb{Q}-local martingale. For this, we are going to prove that the process
N_t= \left( B_t - \int_0^t \Theta_s ds\right) \exp \left( \int_0^t \Theta_s dB_s - \frac{1}{2} \int_0^t \Theta_s ^2 ds \right)
is a \mathbb{P}-local martingale. Indeed, from the integration by parts formula, it is immediate that
dN_t= D_t dB_t +\left( B_t - \int_0^t \Theta_s ds\right) dD_t.
Since D_t is the density of \mathbb{Q}_{/\mathcal{F}_t} with respect to \mathbb{P}_{/\mathcal{F}_t}, it is then easy to deduce that B_t - \int_0^t \Theta_s ds is a \mathbb{Q}-local martingale and thus, from the above, a \mathbb{Q}-Brownian motion \square

Exercise. Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) be a filtered probability space that satisfies the usual conditions. As before, let \mathbb{Q} be a probability measure on \mathcal{F}_\infty which is equivalent to \mathbb{P}. We denote by D the density of \mathbb{Q} with respect to \mathbb{P} and D_t=\mathbb{E}^{\mathbb{P}} (D \mid \mathcal{F}_t). Let (M_t)_{t \ge 0} be a \mathbb{P} local martingale. Show that the process
N_t=M_t-\int_0^t\frac{d \langle M, D \rangle_s}{D_s}
is a \mathbb{Q} local martingale. As a consequence, a continuous and adapted process (X_t)_{t \ge 0} is a \mathbb{P}-semimartingale if and only if it is a \mathbb{Q}-semimartingale.

Exercise Let (B_t)_{t\ge 0} be a Brownian motion. We denote by \mathbb{P} the Wiener measure, by (\pi_t)_{t \ge 0} the coordinate process and by (\mathcal{F}_t)_{t \ge 0} its natural filtration.

  • Let \mu \in \mathbb{R} and \mathbb{P}^\mu be the distribution of the process (B_t+\mu t)_{t\ge 0}. Show that for every t \ge 0, \mathbb{P}^\mu_{/ \mathcal{F}_t} \ll \mathbb{P}_{/ \mathcal{F}_t}, and that
    \frac{d \mathbb{P}^\mu_{/ \mathcal{F}_t} }{d\mathbb{P}_{/ \mathcal{F}_t} }=e^{\mu \pi_t -\frac{\mu^2}{2} t}.
  • Is it true that \mathbb{P}^\mu_{/ \mathcal{F}_\infty} \ll \mathbb{P}_{/ \mathcal{F}_\infty}?
  • For a \in \mathbb{R}_{\ge 0}, we denote T_a=\inf \{ t \ge 0, B_t+\mu t =a \}. Compute the density function of T_a (you may use the previous question).
  • More generally, let f :\mathbb{R}_{\ge 0} \rightarrow \mathbb{R} be a measurable function such that for every t \ge 0, \int_0^t f^2(s)ds < +\infty. We denote by \mathbb{P}^f the distribution of the process \left(B_t+\int_0^t f(s)ds\right)_{t\ge 0}. Show that for every t \ge 0,
    \mathbb{P}^f_{/ \mathcal{F}_t} \ll \mathbb{P}_{/ \mathcal{F}_t},
    and that
    \frac{d \mathbb{P}^f_{/ \mathcal{F}_t} }{d\mathbb{P}_{/ \mathcal{F}_t} }=e^{\int_0^t f(s)d\pi_s -\frac{1}{2} \int_0^t f^2(s)ds}.
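The first question of the exercise can be tested numerically: by the stated density, \mathbb{E}^{\mathbb{P}^\mu}(f(\pi_t))=\mathbb{E}^{\mathbb{P}}\left(f(\pi_t)e^{\mu \pi_t-\frac{\mu^2}{2}t}\right). The Python sketch below is our own illustration (not part of the exercise); it compares direct sampling of B_t+\mu t with reweighted sampling of B_t, for f(x)=x:

```python
import math
import random

random.seed(6)
mu, t, N = 0.5, 1.0, 200_000
lhs = rhs = 0.0
for _ in range(N):
    z = random.gauss(0.0, math.sqrt(t))              # law of B_t under P
    lhs += z + mu * t                                # sample of pi_t under P^mu
    rhs += z * math.exp(mu * z - 0.5 * mu ** 2 * t)  # reweighted sample under P
lhs /= N
rhs /= N
print(lhs, rhs)    # both estimate mu * t = 0.5
```

The two estimators agree, illustrating the change-of-measure identity behind Girsanov’s theorem.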

Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) be a filtered probability space that satisfies the usual conditions and let (B_t)_{t \ge 0} be a Brownian motion on it. Let now \left( \Theta_t \right)_{t \geq 0} be a progressively measurable process such that for every t \ge 0,   \mathbb{P} \left( \int_0^{t} \Theta_s^2ds <+\infty \right)=1. We denote
Z_t=\exp \left( \int_0^t \Theta_s dB_s - \frac{1}{2} \int_0^t \Theta_s ^2 ds \right), \text{ }t \geq 0.
As a consequence of Itō’s formula, it is clear that (Z_t)_{t \ge 0} is a local martingale. In general, (Z_t)_{t \ge 0} is not a true martingale, but the following two lemmas provide simple sufficient conditions ensuring that it is.

Lemma. If for every t \geq 0, \mathbb{E} (Z_t)=1, then (Z_t)_{t \ge 0} is a martingale.

Proof. The process Z is a nonnegative local martingale and thus a supermartingale; a supermartingale with constant expectation is a martingale \square

Lemma. (Novikov’s condition) If \mathbb{E} \left(\exp \left( \frac{1}{2} \int_0^\infty \Theta_s^2 ds \right) \right) < +\infty, then (Z_t)_{t \ge 0} is a uniformly integrable martingale.

Proof. We denote M_t=\int_0^t \Theta_s dB_s. As a consequence of \mathbb{E} \left(\exp \left( \frac{1}{2} \langle M \rangle_\infty \right) \right)<+\infty, the random variable \langle M \rangle_\infty has moments of all orders. So, from the Burkholder-Davis-Gundy inequalities, \sup_{t \ge 0} |M_t | has moments of all orders, which implies that M is a uniformly integrable martingale. We then have
\exp\left( \frac{1}{2} M_\infty \right) = \exp\left( \frac{1}{2}M_\infty-\frac{1}{4} \langle M\rangle_\infty \right)\exp\left( \frac{1}{4} \langle M \rangle_\infty \right).
The Cauchy-Schwarz inequality implies then that \mathbb{E} \left( \exp\left( \frac{1}{2} M_\infty \right) \right) < +\infty.
We deduce from Doob’s convergence theorem that the process \exp\left( \frac{1}{2} M \right) is a uniformly integrable submartingale. Let now 0 < \eta < 1 and 0<t \le +\infty. We have
\exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right)=\left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right)\right)^{\eta^2}\exp\left( \frac{\eta M_t}{1+\eta} \right)^{1-\eta^2}.
Hölder’s inequality then shows that
\mathbb{E} \left( \exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right) \right)
\le \mathbb{E} \left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right) \right)^{\eta^2} \mathbb{E} \left( \exp\left( \frac{\eta M_t}{1+\eta} \right) \right)^{1-\eta^2}
\le \mathbb{E} \left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right) \right)^{\eta^2} \mathbb{E} \left( \exp\left( \frac{M_t}{2} \right) \right)^{2\eta(1-\eta)}
\le \mathbb{E} \left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right) \right)^{\eta^2} \mathbb{E} \left( \exp\left( \frac{M_\infty}{2} \right) \right)^{2\eta(1-\eta)}.
If we can prove that \mathbb{E} \left( \exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right) \right)=1, then by letting \eta \to 1 in the above inequality, we would get
\mathbb{E} \left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right) \right) \ge 1
and thus \mathbb{E} \left( \exp \left( M_t -\frac{1}{2} \langle M \rangle_t \right) \right) = 1.

Let p > 1 be such that \frac{\eta \sqrt{p}}{\sqrt{p}-1} \le 1. Consider r=\frac{\sqrt{p}+1}{\sqrt{p}-1} and s =\frac{\sqrt{p}+1}{2}, so that 1/r+1/s=1. Using
\exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right)^p = \exp \left( \sqrt{\frac{p}{r}} \eta M_t -\frac{p}{2} \eta^2 \langle M \rangle_t \right) \exp \left( \eta \left( p -\sqrt{\frac{p}{r} } \right)M_t \right)
and then Hölder’s inequality shows that there is a constant C (depending only on M) such that for any stopping time T,
\mathbb{E} \left( \exp \left( \eta M_T -\frac{\eta^2}{2} \langle M \rangle_T \right)^p \right) \le C.
By Doob’s maximal inequality, this implies that the local martingale \exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right) is actually a true martingale. This implies \mathbb{E} \left( \exp \left( \eta M_t -\frac{\eta^2}{2} \langle M \rangle_t \right) \right)=1 and the desired conclusion \square

We now assume that (Z_t)_{t \ge 0} is a uniformly integrable martingale. In that case, it is easy to see that on the \sigma-field \mathcal{F}_\infty, there is a unique probability measure \mathbb{Q} equivalent to \mathbb{P} such that for every t \ge 0, \frac{d\mathbb{Q}_{/\mathcal{F}_t}}{d\mathbb{P}_{/\mathcal{F}_t}}=Z_t,\text{ }\mathbb{P}-a.s. The same argument as before then shows that, with respect to \mathbb{Q}, the process B_t - \int_0^t \Theta_s ds is a Brownian motion.


Lecture 24. Burkholder-Davis-Gundy inequalities

In this section, we study some of the most important martingale inequalities: the Burkholder-Davis-Gundy inequalities. Interestingly, the range of application of these inequalities is very large, and they play an important role in harmonic analysis and the study of singular integrals (see for instance the nice survey by my colleague Pr. Bañuelos). These inequalities admit several proofs. We present here a proof using Itō’s formula and an interesting domination inequality due to Lenglart. For an alternative proof, you may refer to the original approach by Burkholder, Davis and Gundy.

We admit without proof the following domination inequality, which is due to Lenglart.

Proposition.(Lenglart) Let (N_t)_{t \ge 0} be a positive adapted right-continuous process and (A_t)_{t \ge 0} be an increasing process. Assume that for every bounded stopping time \tau, \mathbb{E} (N_\tau \mid \mathcal{F}_0 ) \le \mathbb{E} (A_\tau \mid \mathcal{F}_0 ). Then, for every k \in (0,1), \mathbb{E} \left( \left(\sup_{0 \le t \le T} N_t \right)^k \right) \le \frac{2-k}{1-k} \mathbb{E} \left( A_T^k\right).

We shall use this result to prove the following:

Theorem. (Burkholder-Davis-Gundy inequalities) Let T > 0 and (M_t)_{ 0 \le t \le T} be a continuous local martingale such that M_0=0. For every 0 < p < \infty, there exist universal constants c_p and C_p, independent of T and (M_t)_{ 0 \le t \le T} such that
c_p \mathbb{E} \left( \langle M\rangle_T^{\frac{p}{2} } \right)\le  \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^p \right) \le C_p  \mathbb{E} \left( \langle M\rangle_T^{\frac{p}{2} } \right).

Proof. By stopping it is enough to prove the result for bounded M. Let q \ge 2. From Itō’s formula we have
d |M_t|^q  =q \mathbf{sgn}(M_t) |M_t|^{q-1} dM_t +\frac{1}{2} q(q-1) | M_t |^{q-2} d \langle M \rangle _t.
As a consequence of Doob’s stopping theorem, we get that for every bounded stopping time \tau,
\mathbb{E} \left(  |M_\tau|^q  \mid \mathcal{F}_0 \right) \le  \frac{1}{2} q(q-1) \mathbb{E} \left( \int_0^\tau | M_t |^{q-2} d\langle M \rangle_t \mid \mathcal{F}_0 \right).
From the Lenglart’s domination inequality, we deduce then that for every k \in (0,1),
\mathbb{E} \left( \left(\sup_{0 \le t \le T}  |M_t|^q  \right)^k \right) \le \frac{2-k}{1-k}  \left( \frac{1}{2} q(q-1)\right)^k  \mathbb{E} \left(\left( \int_0^T | M_t |^{q-2}  d\langle M \rangle_t\right)^k \right).
We now bound
\mathbb{E} \left(\left( \int_0^T | M_t |^{q-2} d \langle M \rangle _t\right)^k \right)
\le  \mathbb{E} \left(\left(\sup_{0 \le t \le T} |M_t| \right)^{k(q-2)}\left( \int_0^T d\langle M \rangle _t\right)^k \right)
\le  \mathbb{E} \left(\left(\sup_{0 \le t \le T} |M_t| \right)^{kq} \right)^{1-\frac{2}{q}} \mathbb{E}  \left( \langle M \rangle _T^{\frac{kq}{2}} \right)^{\frac{2}{q}}.
As a consequence, we obtain:
\mathbb{E} \left( \left(\sup_{0 \le t \le T}  |M_t|^q  \right)^k \right) \le \frac{2-k}{1-k}  \left( \frac{1}{2} q(q-1)\right)^k   \mathbb{E} \left(\left(\sup_{0 \le t \le T} |M_t| \right)^{kq} \right)^{1-\frac{2}{q}} \mathbb{E}  \left(\langle M \rangle_T^{\frac{kq}{2}} \right)^{\frac{2}{q}}.
Letting p=qk yields the claimed result, that is
\mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^p \right) \le C_p  \mathbb{E} \left( \langle M\rangle_T^{\frac{p}{2} } \right).
We proceed now to the proof of the left-hand side inequality. We have,
M_t^2 =\langle M \rangle_t +2\int_0^t M_s dM_s.
Therefore, we get
\mathbb{E} \left( \langle M\rangle_T^{\frac{p}{2} } \right) \le A_p \left(  \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^p \right) +  \mathbb{E}\left(\sup_{0 \le t \le T}\ \left| \int_0^t M_s dM_s\right|^{p/2} \right) \right).
By using the previous argument, we now have
\mathbb{E}\left(\sup_{0 \le t \le T}\ \left| \int_0^t M_s dM_s\right|^{p/2} \right)  \le B_p \mathbb{E}\left( \left( \int_0^T M^2_s d\langle M\rangle_s\right)^{p/4} \right)
\le B_p  \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^{p/2} \langle M \rangle_T^{p/4} \right)
\le B_p  \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^{p}\right)^{1/2}  \mathbb{E} \left( \langle M \rangle_T^{p/2} \right)^{1/2}.
As a conclusion, we obtain
\mathbb{E} \left( \langle M\rangle_T^{\frac{p}{2} } \right) \le A_p \left(   \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^p \right) + B_p  \mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^{p}\right)^{1/2}  \mathbb{E} \left( \langle M \rangle_T^{p/2} \right)^{1/2} \right).
This is an inequality of the form x^2 \le A_p \left( y^2 +B_p xy\right), with x=\mathbb{E} \left( \langle M\rangle_T^{p/2} \right)^{1/2} and y=\mathbb{E}\left(\left(\sup_{0 \le t \le T} |M_t|\right)^p \right)^{1/2}, which easily implies
c_p x^2 \le y^2, thanks to the inequality 2xy \le \frac{1}{\delta} x^2+\delta y^2 with a conveniently chosen \delta \square
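For p=2 and M=B a standard Brownian motion on [0,1] (so that \langle M\rangle_1=1), the inequalities read c_2 \le \mathbb{E}\left(\sup_{t\le 1}|B_t|^2\right) \le C_2, and Doob’s L^2 inequality gives the explicit window [1,4]. The Monte Carlo sketch below is our own illustration, with arbitrary discretization parameters:

```python
import math
import random

random.seed(7)
n, N = 500, 5000                 # time steps per path, number of paths
dt = 1.0 / n
est = 0.0
for _ in range(N):
    b = m = 0.0
    for _ in range(n):
        b += random.gauss(0.0, math.sqrt(dt))   # Brownian increment
        m = max(m, abs(b))                      # running sup of |B_t|
    est += m * m
est /= N
print(est)   # estimate of E(sup_{t<=1} |B_t|^2); should lie between 1 and 4
```

The estimate indeed falls between the lower bound \mathbb{E}(B_1^2)=1 and the Doob upper bound 4\mathbb{E}(\langle B\rangle_1)=4.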


Lecture 23. Martingales as a time changed Brownian motion

In the previous Lecture, we proved that any martingale which is adapted to a Brownian filtration can be written as a stochastic integral. In this section, we prove that any martingale can also be represented as a time changed Brownian motion. To prove this fact, we first give a characterization of Brownian motion which is interesting in itself. Throughout this section, (\mathcal{F}_t)_{t \ge 0} denotes a filtration that satisfies the usual conditions.

Proposition. (Lévy’s characterization theorem) Let (M_t)_{t \ge 0} be a continuous local martingale such that M_0=0 and such that for every t \ge 0, \langle M \rangle_t =t. Then the process (M_t)_{t \ge 0} is a standard Brownian motion.

Proof. Let N_t=e^{i\lambda M_t +\frac{1}{2}\lambda^2 t}. By using Itō’s formula, we obtain that for s \le t, N_t =N_s +i \lambda \int_s^t N_u dM_u.
As a consequence, the process (N_t)_{t \ge 0} is a martingale and, from the above equality we get
\mathbb{E}\left( e^{i\lambda (M_t-M_s)} \mid \mathcal{F}_s \right)=e^{-\frac{1}{2}\lambda^2 (t-s)}.
The process (M_t)_{t \ge 0} is therefore a continuous process with stationary and independent increments such that M_t is normally distributed with mean 0 and variance t. It is thus a Brownian motion \square

The next proposition shows that continuous martingales behave in a nice way with respect to time changes.

Proposition. Let (C_t)_{t \ge 0} be a continuous and increasing process such that for every t \ge 0, C_t is a finite stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}. Let (M_t)_{t \ge 0} be a continuous martingale with respect to (\mathcal{F}_t)_{t \ge 0}. The process (M_{C_t})_{t \ge 0} is a local martingale with respect to the filtration (\mathcal{F}_{C_t})_{t \ge 0}. Moreover \langle M_C \rangle =\langle M \rangle_C.

Proof. By using localization, we may assume C to be bounded. According to Doob’s stopping theorem, we need to prove that for every bounded stopping time T of the filtration (\mathcal{F}_{C_t})_{t \ge 0}, we have \mathbb{E}( M_{C_T})=0. But C_T is obviously a bounded stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}, and thus, from Doob’s stopping theorem, we indeed have \mathbb{E}( M_{C_T})=0. The same argument shows that M_C^2- \langle M \rangle_C is a local martingale \square

Exercise. Let (C_t)_{t \ge 0} be an increasing and right continuous process such that for every t \ge 0, C_t is a finite stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}. Let (M_t)_{t \ge 0} be a continuous martingale with respect to (\mathcal{F}_t)_{t \ge 0} such that M is constant on each interval [C_{t-},C_t]. Show that the process (M_{C_t})_{t \ge 0} is a continuous local martingale with respect to the filtration (\mathcal{F}_{C_t})_{t \ge 0} and that \langle M_C \rangle =\langle M \rangle_C.

We can now prove the following nice representation result for martingales.

Theorem. (Dambis, Dubins-Schwarz) Let (M_t)_{t \ge 0} be a continuous martingale such that M_0=0 and \langle M \rangle_\infty =+\infty. There exists a Brownian motion (B_t)_{t \ge 0} such that for every t \ge 0,
M_t =B_{\langle M \rangle_t}.

Proof. Let C_t =\inf \{ s \ge 0, \langle M \rangle_s > t \}. Then (C_t)_{t \ge 0} is a right continuous and increasing process such that for every t \ge 0, C_t is a finite stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}, and M is obviously constant on each interval [C_{t-},C_t]. From the previous exercise, the process B_t=M_{C_t} is a local martingale whose quadratic variation is equal to t; from Lévy’s characterization theorem, it is thus a Brownian motion. Finally, since M is constant on [t, C_{\langle M \rangle_t}], we have B_{\langle M \rangle_t}=M_{C_{\langle M \rangle_t}}=M_t \square

Exercise. Show that if (M_t)_{t \ge 0} is a continuous local martingale such that M_0=0 and \langle M \rangle_\infty =+\infty, there exists a Brownian motion (B_t)_{t \ge 0}, such that for every t \ge 0, M_t =B_{\langle M \rangle_t}.

Exercise.
Let (u_t)_{t \ge 0} be a continuous adapted process and let (B_t)_{t \ge 0} be a Brownian motion. Show that for every T \ge 0, the process \left( \int_0^t u_s dB_s\right)_{0\le t \le T} has (\frac{1}{2}-\varepsilon)-Hölder paths, where 0 < \varepsilon \le \frac{1}{2}.

The study of the planar Brownian motion is deeply connected to the theory of analytic functions. The fundamental property of the Brownian curve is that it is a conformal invariant. The following proposition is easily proved as a consequence of Itō’s formula and of the Dambis-Dubins-Schwarz theorem. By definition, a complex Brownian motion is a process (B_t)_{t \ge 0} in the complex plane that can be decomposed as B_t =B^1_t +i B^2_t, where B^1 and B^2 are independent Brownian motions.

Proposition.(Conformal invariance of the planar Brownian motion) Let (B_t)_{t \ge 0} be a complex Brownian motion and f : \mathbb{C} \to \mathbb{C} be an analytic function. Then
f(B_t)=f(0)+\int_0^t f'(B_s) dB_s.
As a consequence, there exists a complex Brownian motion (\beta_t)_{t \ge 0} such that
f(B_t)=f(0)+\beta_{\int_0^t | f'(B_s)|^2 ds}.
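
The time change can be checked numerically (our sketch, not part of the notes). For f(z) = z^2 the clock is A_t = \int_0^t |f'(B_s)|^2 ds = \int_0^t 4|B_s|^2 ds, and the realized quadratic variation of Re f(B) plus that of Im f(B) along a fine discretization should approximate 2 A_t, since each coordinate of the time-changed complex Brownian motion has quadratic variation A_t.

```python
import math
import random

random.seed(1)

# For f(z) = z^2: f(x + iy) = (x^2 - y^2) + 2ixy and |f'(z)|^2 = 4|z|^2.
# Realized QV of Re f(B) + Im f(B) should match twice the clock A_t.
n, T = 20000, 1.0
dt = T / n
x = y = 0.0                  # B_0 = 0
u_prev, v_prev = 0.0, 0.0    # Re f(B_0), Im f(B_0)
qv, clock = 0.0, 0.0
for _ in range(n):
    clock += 4.0 * (x * x + y * y) * dt    # left-point rule for the clock
    x += random.gauss(0.0, math.sqrt(dt))
    y += random.gauss(0.0, math.sqrt(dt))
    u, v = x * x - y * y, 2.0 * x * y
    qv += (u - u_prev) ** 2 + (v - v_prev) ** 2
    u_prev, v_prev = u, v
print(round(qv, 3), round(2.0 * clock, 3))
```

The two printed numbers agree up to the discretization error of the Euler scheme.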

To study the complex Brownian motion, it is useful to look at it in polar coordinates. It leads to the so-called skew-product decomposition of the complex Brownian motion.

Proposition. Let (B_t)_{t \ge 0} be a complex Brownian motion started at z \neq 0. There exists a complex Brownian motion (\beta_t)_{t \ge 0} such that
B_t=z \exp{\left( \beta_{\int_0^t \frac{ds}{\rho_s^2} }\right)}, where \rho_t =| B_t |.

Proof. The proof is left as an exercise to the reader. The main idea is to prove, by using Itō’s formula, that B_t=z \exp\left( \int_0^t \frac{dB_s}{B_s} \right), and then to use the Dambis-Dubins-Schwarz theorem \square

Exercise. In the previous proposition, show that the process (\rho_t)_{t \ge 0} is independent of the process (\mathbf{Im} ( \beta_t))_{t \ge 0}.

You will find below a video of a talk by Prof. Marc Yor concerning quadratic functionals of the planar Brownian motion. The talk was given at the University of Bristol in December 2008 for a special event.

Posted in Stochastic Calculus lectures | 2 Comments

Lecture 23. Itō’s representation theorem

In this Lecture we show that, remarkably, any square integrable random variable which is measurable with respect to a Brownian motion can be expressed as a stochastic integral with respect to this Brownian motion. A striking consequence of this result, which is known as Itō’s representation theorem, is that any square integrable martingale of a Brownian filtration has a continuous version.

Let (B_t)_{t \ge 0} be a Brownian motion. In the sequel, we consider the filtration (\mathcal{F}_t)_{t \ge 0} which is the usual completion of the natural filtration of (B_t)_{t \ge 0} (such a filtration is called a Brownian filtration).

The following lemma is a straightforward consequence of Itō’s formula.

Lemma. Let f : \mathbb{R}_{\ge 0}  \to \mathbb{R} be a locally square integrable function. The process \left( \exp\left( \int_0^t f(s) dB_s -\frac{1}{2} \int_0^t f(s)^2 ds \right)\right)_{t \ge 0} is a square integrable martingale.

Proof. From Itō’s formula we have
\exp\left( \int_0^t f(s) dB_s -\frac{1}{2} \int_0^t f(s)^2 ds \right)=1+\int_0^t f(s) \exp\left( \int_0^s f(u) dB_u -\frac{1}{2} \int_0^s f(u)^2 du \right)dB_s.
The random variable \int_0^s f(u) dB_u is a Gaussian random variable with mean 0 and variance \int_0^s f(u)^2 du. As a consequence
\mathbb{E}\left( \int_0^t f(s)^2 \exp\left( 2\int_0^s f(u) dB_u  \right) ds \right)<+\infty
and the process
\int_0^t f(s) \exp\left( \int_0^s f(u) dB_u -\frac{1}{2} \int_0^s f(u)^2 du \right)dB_s
is a martingale. \square
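
In particular, the martingale property forces \mathbb{E}\left( \exp\left( \int_0^t f(s) dB_s -\frac{1}{2} \int_0^t f(s)^2 ds \right)\right)=1 for every t. Here is a small Monte Carlo sketch of ours for the simplest case f = 1 on [0,1], where \int_0^1 f dB = B_1 \sim \mathcal{N}(0,1):

```python
import math
import random

random.seed(2)

# With f = 1 on [0,1], the lemma gives E[ exp(B_1 - 1/2) ] = 1.
n_paths = 50000
est = sum(math.exp(random.gauss(0.0, 1.0) - 0.5)
          for _ in range(n_paths)) / n_paths
print(round(est, 3))
```

The estimate should be close to 1, in accordance with the lemma.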

Lemma. Let \mathcal{D} be the set of compactly supported and piecewise constant functions \mathbb{R}_{\ge 0} \rightarrow \mathbb{R}, i.e. the set of functions f that can be written as f=\sum_{i=1}^n a_i \mathbf{1}_{(t_{i-1}, t_i]}, for some 0\le t_0 \le t_1 \le \cdots \le t_n and a_1, \cdots , a_n \in \mathbb{R}. The family \left\{  \exp\left( \int_0^{+\infty}  f(s) dB_s -\frac{1}{2} \int_0^{+\infty}  f(s)^2 ds \right), f \in \mathcal{D} \right\} is total in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ).

Proof.
Let F \in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ) such that for every f \in \mathcal{D},
\mathbb{E} \left(F  \exp\left( \int_0^{+\infty}  f(s) dB_s -\frac{1}{2} \int_0^{+\infty}  f(s)^2 ds \right) \right)=0.
Let 0=t_0 \le t_1 \le \cdots \le t_n. We have for every \lambda_1,\cdots,\lambda_n \in \mathbb{R},
\mathbb{E} \left(F  \exp\left( \sum_{i=1}^n \lambda_i (B_{t_i}-B_{t_{i-1}}) \right) \right)=0.
By analytic continuation, we see that
\mathbb{E} \left(F  \exp\left( \sum_{i=1}^n \lambda_i (B_{t_i}-B_{t_{i-1}}) \right) \right)=0.
actually also holds for every \lambda_1,\cdots,\lambda_n \in \mathbb{C}. By using the Fourier transform, it implies that
\mathbb{E} \left(F  \mid B_{t_1},\cdots , B_{t_n} \right)=0.
Since t_1,\cdots, t_n were arbitrary, we conclude that \mathbb{E}(F \mid \mathcal{F}_\infty)=0. Since F is \mathcal{F}_\infty-measurable, this yields F=0 \square

We are now in position to state the representation theorem.

Theorem. For every F \in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ), there is a unique progressively measurable process (u_t)_{t \ge 0} such that \mathbb{E} \left(\int_0^\infty u_s^2 ds \right)<+\infty and F=\mathbb{E} (F)+ \int_0^{+\infty} u_s dB_s.

Proof. The uniqueness is immediate as a consequence of Itō’s isometry for stochastic integrals. Let \mathcal{A} be the set of random variables F \in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ) such that there exists a progressively measurable process (u_t)_{t \ge 0} such that \mathbb{E} \left(\int_0^\infty u_s^2 ds \right)< +\infty and F=\mathbb{E} (F)+ \int_0^{+\infty} u_s dB_s. From the above lemma, it is clear that \mathcal{A} contains the set of random variables
\left\{  \exp\left( \int_0^{+\infty}  f(s) dB_s -\frac{1}{2} \int_0^{+\infty}  f(s)^2 ds \right), f \in \mathcal{D} \right\}.
Since this set is total in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ), we just need to prove that \mathcal{A} is closed in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ). So, let (F_n)_{n \in \mathbb{N}} be a sequence of random variables such that F_n \in \mathcal{A} and F_n \to_{n \to \infty} F in \mathbf{L}^2 (\mathcal{F}_\infty , \mathbb{P} ). There is a progressively measurable process (u^n_t)_{t \ge 0} such that \mathbb{E} \left(\int_0^\infty (u^n_s)^2 ds \right)<+\infty and F_n=\mathbb{E} (F_n)+ \int_0^{+\infty} u^n_s dB_s. By using Itō’s isometry, it is seen that the sequence u^n is a Cauchy sequence and therefore converges to a process u which is seen to satisfy
F=\mathbb{E} (F)+ \int_0^{+\infty} u_s dB_s \square
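
A concrete example (ours, not from the notes): for F = B_1^2, Itō’s formula gives the explicit representation F = \mathbb{E}(F) + \int_0^1 2 B_s dB_s = 1 + \int_0^1 2 B_s dB_s, i.e. u_s = 2 B_s. The sketch below checks this identity pathwise on a discretized Brownian path, with the stochastic integral approximated by a left-point Riemann sum.

```python
import math
import random

random.seed(3)

# Representation of F = B_1^2: F = 1 + int_0^1 2 B_s dB_s.
n = 5000
dt = 1.0 / n
b = 0.0
stoch_int = 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))
    stoch_int += 2.0 * b * db    # left-point (Ito) Riemann sum
    b += db
F = b * b                        # F = B_1^2
representation = 1.0 + stoch_int # E(F) + int u dB
print(round(F, 3), round(representation, 3))
```

The two printed values agree up to the discretization error of the Riemann sum.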

As a consequence of the representation theorem, we obtain the following description of the square integrable martingales of the filtration (\mathcal{F}_t)_{t \ge 0}.

Corollary. Let (M_t)_{t \ge 0} be a square integrable martingale of the filtration (\mathcal{F}_t)_{t \ge 0}. There is a unique progressively measurable process (u_t)_{t \ge 0} such that for every t \ge 0, \mathbb{E} \left(\int_0^t u_s^2 ds \right)<+\infty and M_t=\mathbb{E} (M_0)+ \int_0^{t} u_s dB_s. In particular, (M_t)_{t \ge 0} admits a continuous version.

Exercise. Show that if (M_t)_{t \ge 0} is a local martingale of the filtration (\mathcal{F}_t)_{t \ge 0}, then there is a unique progressively measurable process (u_t)_{t \ge 0} such that for every t \ge 0, \mathbb{P} \left(\int_0^t u_s^2 ds < +\infty \right)=1 and M_t=\mathbb{E} (M_0)+ \int_0^{t} u_s dB_s.


Lecture 22. Recurrence and transience of the multidimensional Brownian motion

In the next few Lectures we will illustrate the power of stochastic integration theory through several applications.

We start with a study of the multidimensional Brownian motion. As already pointed out, a multidimensional stochastic process (B_t)_{t \ge 0}= \left( B^1_t,\cdots, B^n_t\right)_{t \ge 0}, is called a Brownian motion if the processes (B^1_t)_{t \ge 0} , \cdots, (B^n_t)_{t \ge 0} are independent Brownian motions. In the sequel we denote by \Delta the Laplace operator on \mathbb{R}^n, that is

\Delta =\sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}.

The following result is an easy consequence of Itō’s formula.

Proposition. Let f: \mathbb{R}_{\ge 0} \times  \mathbb{R}^n\rightarrow \mathbb{R} be a function that is once continuously differentiable with respect to its first variable and twice continuously differentiable with respect to its second variable and let (B_t)_{t \ge 0}=(B^1_t,...,B^n_t)_{t \ge 0} be a n-dimensional Brownian motion. The process
X_t=f(t,B_t)-\int_0^t \left( \frac{1}{2} \Delta f (s,B_s)+\frac{\partial f}{\partial t}(s,B_s)\right) ds
is a local martingale. If moreover f is such that
\sum_{i=1}^n \left(  \frac{\partial f}{\partial x_i}(t,x) \right)^2 \le \phi (t) e^{K \| x \|},
for some continuous function \phi and some constant K \in \mathbb{R}, then (X_t)_{t \ge 0} is a martingale.

In particular, if f is a harmonic function, i.e. \Delta f=0, and if (B_t)_{t \ge 0} is a multidimensional Brownian motion, then the process (f(B_t))_{t \ge 0} is a local martingale. As we will see later, this nice fact has many consequences. A first nice application is the study of recurrence or transience of the multidimensional Brownian motion paths. As we have seen before, the one-dimensional Brownian motion is recurrent: It reaches any value with probability 1. In higher dimensions, the situation is more subtle.
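
As a quick illustration of the harmonic-function fact (our sketch, not part of the notes): f(x,y) = x^2 - y^2 is harmonic on \mathbb{R}^2, so (f(B_t))_{t \ge 0} is a martingale for a planar Brownian motion started at 0, and in particular \mathbb{E}(f(B_t)) = f(0) = 0 for every t.

```python
import math
import random

random.seed(4)

# f(x, y) = x^2 - y^2 is harmonic, so E[f(B_t)] = f(B_0) = 0
# for a planar Brownian motion started at the origin.
t = 1.0
n_paths = 50000
s = math.sqrt(t)
est = sum(random.gauss(0.0, s) ** 2 - random.gauss(0.0, s) ** 2
          for _ in range(n_paths)) / n_paths
print(round(est, 3))
```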

Let (B_t)_{t \ge 0}=(B^1_t,...,B^n_t)_{t \ge 0} be a n-dimensional Brownian motion with n \ge 2. For a > 0 and x \in \mathbb{R}^n, we consider the stopping time
T_a^x=\inf \{ t \ge 0, \| B_t +x \| =a \}.
Proposition. For a < \| x \|  < b,
\mathbb{P}\left( T_a^x < T_b^x \right)=  \begin{cases}  \frac{\ln b -\ln \|x \|}{\ln b-\ln a}, & n=2 \\  \frac{\| x\|^{2-n}-b^{2-n}}{a^{2-n}-b^{2-n}}, & n \ge 3.  \end{cases}

Proof. For a < \| x \| < b, we consider the function
f(x)=\Psi(\|x \|)=  \begin{cases}  \ln \| x\| , & n=2 \\  \| x \|^{2-n}, & n \ge 3.  \end{cases}
A straightforward computation shows that \Delta f=0. The process (f(x+B_{t \wedge  T_a^x \wedge T_b^x}))_{t \ge 0} is therefore a bounded martingale, which implies \mathbb{E}\left( f(x+B_{ T_a^x \wedge T_b^x})\right)=f(x). This yields
\Psi(a) \mathbb{P}\left( T_a^x < T_b^x \right)+\Psi(b) \mathbb{P}\left( T_b^x < T_a^x \right)=f(x).
Since
\mathbb{P}\left( T_a^x < T_b^x \right)+\mathbb{P}\left( T_b^x < T_a^x \right)=1,
we deduce that
\mathbb{P}\left( T_a^x < T_b^x \right)=  \begin{cases}  \frac{\ln b -\ln \|x \|}{\ln b-\ln a}, & n=2 \\  \frac{\| x\|^{2-n}-b^{2-n}}{a^{2-n}-b^{2-n}}, & n \ge 3.  \end{cases} \square
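
The n=2 formula is easy to check by simulation (our sketch; an Euler discretization, so the estimate carries a small overshoot bias at the boundaries). Starting at x = (2,0) with a = 1 and b = 4, the formula predicts \mathbb{P}(T_a^x < T_b^x) = \frac{\ln 4 - \ln 2}{\ln 4 - \ln 1} = \frac{1}{2}.

```python
import math
import random

random.seed(5)

# Monte Carlo estimate of P(hit radius a before radius b) for a planar
# Brownian motion started at (2, 0); predicted value: ln2 / ln4 = 0.5.
a, b, dt = 1.0, 4.0, 2e-3
sdt = math.sqrt(dt)
n_paths, hits_a = 1000, 0
for _ in range(n_paths):
    px, py = 2.0, 0.0
    for _ in range(20000):          # safety cap; paths exit much earlier
        px += random.gauss(0.0, sdt)
        py += random.gauss(0.0, sdt)
        r = math.hypot(px, py)
        if r <= a:
            hits_a += 1
            break
        if r >= b:
            break
p_hat = hits_a / n_paths
print(round(p_hat, 3))
```

The estimate should be close to 0.5, up to Monte Carlo and discretization error.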

By letting b \to \infty, we get
Corollary. For 0 < a <  \| x \| ,
\mathbb{P}\left( T_a^x < +\infty \right)=  \begin{cases}  1, & n=2 \\  \frac{\| x\|^{2-n}}{a^{2-n}}, & n \ge 3.  \end{cases}
As a consequence, for n=2 the Brownian motion is recurrent, that is, for every non empty open set \mathcal{O} \subset \mathbb{R}^2,
\mathbb{P}\left( \exists t \ge 0, B_t \in \mathcal{O}\right)=1.

Though the two-dimensional Brownian motion is recurrent, points are always polar.

Proposition.
For every x \in \mathbb{R}^n, \mathbb{P}( \exists t \ge 0, B_t=x)=0.

Proof. It suffices to prove that for every x \in \mathbb{R}^n, x \neq 0, \mathbb{P}\left( T_0^x < +\infty \right)=0. We have
\{ T_0^x < +\infty \} =\cup_{n \ge 0} \cap_{m \ge \frac{1}{\| x \|}} \{ T_{1/m}^x \le T_n^x\}.
Since \mathbb{P} \left(  \cap_{m \ge \frac{1}{\| x \|}} \{ T_{1/m}^x \le T_n^x\} \right)=\lim_{m \to \infty} \mathbb{P} \left( T_{1/m}^x \le T_n^x \right)=0, we get
\mathbb{P}\left( T_0^x < +\infty \right)=0 \square

As we have just seen, the two-dimensional Brownian motion will hit every non empty open set with probability one. The situation is different in dimension 3 and higher: Brownian motion paths will eventually leave any bounded set with probability one.

Proposition. Let (B_t)_{t \ge 0}=(B^1_t,...,B^n_t)_{t \ge 0} be a n-dimensional Brownian motion. If n \ge 3 then almost surely
\lim_{t \to \infty} \|B_t \| =+\infty.

Proof. Let us assume n \ge 3. Let \Phi(x)=\frac{1}{\| x+a \|^{n-2}} where a \in \mathbb{R}^n, a \neq 0. Since (B_t)_{t \ge 0} will never hit the point -a, we can consider the process (\Phi(B_t))_{t \ge 0}, which is seen to be a positive local martingale from Itō’s formula. A positive local martingale is always a supermartingale. Therefore, from Doob’s convergence theorem, the process (\Phi(B_t))_{t \ge 0} converges almost surely when t \to \infty to an integrable and non negative random variable Z. From Fatou’s lemma, we have \mathbb{E}(Z) \le \liminf_{t \to +\infty} \mathbb{E} ( \Phi(B_t)). By the scaling property of the Brownian motion, it is clear that \liminf_{t \to +\infty} \mathbb{E} ( \Phi(B_t))=0. We conclude that Z=0, which forces \| B_t \| \to +\infty almost surely \square
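
The scaling step of the proof can be observed numerically (our sketch, not part of the notes): for n = 3 and \Phi(x) = 1/\|x + a\|, the expectation \mathbb{E}(\Phi(B_t)) decays like t^{-1/2} and in particular is much smaller at t = 100 than at t = 1.

```python
import math
import random

random.seed(6)

# E[ 1 / ||B_t + a|| ] for a 3-dimensional Brownian motion, estimated
# by Monte Carlo; by scaling it decays like 1/sqrt(t).
a = (1.0, 0.0, 0.0)

def mc_phi(t, n_paths=20000):
    s = math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        total += 1.0 / math.sqrt(sum((random.gauss(0.0, s) + ai) ** 2
                                     for ai in a))
    return total / n_paths

e1, e100 = mc_phi(1.0), mc_phi(100.0)
print(round(e1, 3), round(e100, 3))
```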

Exercise (Probabilistic proof of Liouville theorem) By using martingale methods, prove that if f:\mathbb{R}^n \to \mathbb{R} is a bounded harmonic function, then f is constant.


Lecture 21. Itō’s formula

Itō’s formula is certainly the most important and useful formula of stochastic calculus. It is the change of variable formula for stochastic integrals. It is a very simple formula whose distinctive feature is the appearance of a quadratic variation term, which reflects the fact that semimartingales have a non-trivial quadratic variation.

Due to its importance, we first provide a heuristic argument on how to derive Itō’s formula. Let f : \mathbb{R} \rightarrow \mathbb{R} be a smooth function and x: \mathbb{R} \rightarrow \mathbb{R} be a C^1 path. We have the following heuristic computation:
f(x_{t+dt})=f(x_t +(x_{t+dt}-x_t))
=f(x_t)+f'(x_t)(x_{t+dt}-x_t)
=f(x_t)+f'(x_t)dx_{t}.
This suggests, by summation, the following correct formula:
f(x_t)=f(x_0)+\int_0^t f'(x_s)dx_s.
Let us now try to consider a Brownian motion (B_t)_{t \ge 0} instead of the smooth path x and let us try to adapt the previous computation to this case. Since Brownian motion has a non-zero quadratic variation, \langle B \rangle_t=t, we need to go to order 2 in the Taylor expansion of f. This leads to the following heuristic computation:
f(B_{t+dt})=f(B_t +(B_{t+dt}-B_t))
=f(B_t)+f'(B_t)(B_{t+dt}-B_t)+\frac{1}{2} f''(B_t) (B_{t+dt}-B_t)^2
=f(B_t)+f'(B_t)dB_{t}+\frac{1}{2} f''(B_t)dt.
By summation, we are therefore led to the formula
f(B_t)=f(0)+\int_0^t f'(B_s)dB_{s}+\frac{1}{2}\int_0^t f''(B_s)ds,
which, as we will see later, is perfectly correct.

In what follows, we consider a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. Our starting point to prove Itō’s formula is the following formula which is known as the integration by parts formula for semimartingales:

Proposition. (Integration by parts formula)
Let (X_t)_{t \ge 0} and (Y_t)_{t \ge 0} be two continuous semimartingales, then the process (X_t Y_t )_{t \ge 0} is a continuous semimartingale and we have:
X_t Y_t =X_0 Y_0 +\int_0^t X_s dY_s +\int_0^t Y_s dX_s+\langle X,Y \rangle_t, \quad t \ge 0.

Proof. By bilinearity (polarization), we may assume X=Y. Also, by considering, if needed, X-X_0 instead of X, we may assume that X_0=0. Let t \ge 0. For every sequence of subdivisions \Delta_n [0,t] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, we have
\sum_{k=1}^{n} \left( X_{t^n_k}-X_{t^n_{k-1}}\right)^2=X_t^2-2\sum_{k=1}^{n} X_{t^n_{k-1}} \left(X_{t^n_k} -X_{t^n_{k-1}}\right).
By letting n \to \infty, we therefore obtain the following identity which yields the expected result:
X_t^2=2\int_0^t X_s dX_s +\langle X \rangle_t
\square

We are now in position to prove Itō’s formula in its simpler form.

Theorem. (Itō’s formula I) Let (X_t)_{t \ge 0} be a continuous and adapted semimartingale and let f: \mathbb{R} \rightarrow \mathbb{R} be a function which is twice continuously differentiable. The process (f(X_t))_{t \ge 0} is a semimartingale and the following change of variable formula holds:
f(X_t)=f(X_0)+\int_0^t f'(X_s)dX_s +\frac{1}{2}\int_0^t f''(X_s) d\langle X \rangle_s.

Proof. We assume that the semimartingale (X_t)_{t \ge 0} is bounded. If it is not, we may apply the following arguments to the semimartingale (X_{t \wedge T_n})_{t \ge 0}, where T_n =\inf \{ t \ge 0, | X_t | \ge n \}, and then let n \to \infty. Let \mathcal{A} be the set of twice continuously differentiable functions f for which the formula given in the statement of the theorem holds. It is straightforward that \mathcal{A} is a vector space. Let us show that \mathcal{A} is also an algebra, that is, that it is stable under multiplication. Let f,g \in \mathcal{A}. By using the integration by parts formula with the semimartingales (f(X_t))_{t \ge 0} and (g(X_t))_{t \ge 0}, we obtain
f(X_t)g(X_t)=f(X_0)g(X_0)+\int_0^t f(X_s) dg(X_s) +\int_0^t g(X_s) df(X_s)+\langle f(X),g(X) \rangle_t.
The terms of the previous sum may be separately treated in the following way. Since f,g \in \mathcal{A}, we get:
\int_0^t f(X_s) dg(X_s) =\int_0^t f(X_s) g'(X_s)dX_s +\frac{1}{2}\int_0^t f(X_s)g''(X_s) d \langle X \rangle_s
\int_0^t g(X_s) df(X_s) =\int_0^t g(X_s) f'(X_s)dX_s+\frac{1}{2}\int_0^t g(X_s)f''(X_s) d \langle X \rangle_s
\langle f(X),g(X) \rangle_t= \int_0^t f'(X_s)g'(X_s)d\langle X \rangle_s.
Therefore,
f(X_t)g(X_t) = f(X_0)g(X_0)+\int_0^t f(X_s) g'(X_s)dX_s +\int_0^t g(X_s) f'(X_s)dX_s  +  \frac{1}{2}\int_0^t f(X_s)g''(X_s) d \langle X \rangle_s+\int_0^t f'(X_s)g'(X_s)d\langle X \rangle_s+\frac{1}{2}\int_0^t g(X_s)f''(X_s) d \langle X \rangle_s
=f(X_0)g(X_0)+\int_0^t (fg)'(X_s)dX_s +\frac{1}{2}\int_0^t (fg)''(X_s) d \langle X \rangle_s.
We deduce that fg \in \mathcal{A}.

As a conclusion, \mathcal{A} is an algebra of functions. Since \mathcal{A} contains the function x \rightarrow x, we deduce that \mathcal{A} actually contains every polynomial function. Now, in order to show that every twice continuously differentiable function f is actually in \mathcal{A}, we first observe that since X is assumed to be bounded, it takes its values in a compact set. It is then possible to find a sequence of polynomials P_n such that, on this compact set, P_n converges uniformly to f, P'_n converges uniformly to f' and P''_n converges uniformly to f''; passing to the limit in the formula written for the polynomials P_n then yields the result \square

As a particular case of the previous result, if we apply the formula with X a Brownian motion, we get the formula that was already pointed out at the beginning of the section: If f: \mathbb{R} \rightarrow \mathbb{R} is a twice continuously differentiable function, then
f(B_t)=f(0)+\int_0^t f'(B_s)dB_{s}+\frac{1}{2}\int_0^t f''(B_s)ds.
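
This formula can be checked pathwise on a discretized Brownian path (our sketch, not part of the notes). For f(x) = x^3 it reads B_t^3 = 3\int_0^t B_s^2 dB_s + 3\int_0^t B_s ds; the code below approximates the stochastic integral by a left-point Riemann sum and the drift term by an ordinary Riemann sum.

```python
import math
import random

random.seed(7)

# Pathwise check of Ito's formula for f(x) = x^3:
# B_1^3 = 3 int_0^1 B_s^2 dB_s + 3 int_0^1 B_s ds.
n = 5000
dt = 1.0 / n
b, stoch, drift = 0.0, 0.0, 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))
    stoch += 3.0 * b * b * db   # f'(B_s) dB_s term, left-point sum
    drift += 3.0 * b * dt       # (1/2) f''(B_s) ds term, since f'' = 6x
    b += db
lhs, rhs = b ** 3, stoch + drift
print(round(lhs, 3), round(rhs, 3))
```

Both sides agree up to the discretization error; note that dropping the drift term (the naive chain rule) would leave a visible gap.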

It is easy to derive the following variations of Itō’s formula:

Theorem: (Itō’s formula II) Let (X_t)_{t \ge 0} be a continuous and adapted semimartingale, and let (A_t)_{t \ge 0} be an adapted bounded variation process. If f: \mathbb{R}\times \mathbb{R} \rightarrow \mathbb{R} is a function that is once continuously differentiable with respect to its first variable and that is twice continuously differentiable with respect to its second variable, then for t \ge 0:
f(A_t, X_t)=f(A_0, X_0)+\int_0^t \frac{\partial f}{\partial t} (A_s, X_s)dA_s+\int_0^t \frac{\partial f}{\partial x} (A_s, X_s)dX_s +\frac{1}{2}\int_0^t \frac{\partial^2 f}{\partial x^2}(A_s, X_s) d \langle X \rangle_s.

Theorem. (Itō’s formula III) Let (X^1_t)_{t \ge 0},…,(X^n_t)_{t \ge 0} be n adapted and continuous semimartingales and let f: \mathbb{R}^n \rightarrow \mathbb{R} be a twice continuously differentiable function. We have:
f(X^1_t,...,X^n_t)=  f(X^1_0,...,X^n_0)+\sum_{i=1}^n\int_0^t \frac{\partial f}{\partial x_i}  (X^1_s,...,X^n_s)dX^i_s  +\frac{1}{2} \sum_{i,j=1}^n \int_0^t\frac{\partial^2 f}{\partial x_i \partial x_j} (X^1_s,...,X^n_s) d\langle X^i,X^j \rangle_s.

Exercise. Let f: \mathbb{R}_{\ge 0} \times \mathbb{R} \rightarrow \mathbb{C} be a function that is once continuously differentiable with respect to its first variable and twice continuously differentiable with respect to its second variable that satisfies
\frac{1}{2} \frac{\partial^2 f}{\partial x^2}+\frac{\partial f}{\partial t}=0. Show that if (M_t)_{t \ge 0} is a continuous local martingale, then (f(\langle M \rangle_t,M_t))_{t \ge 0} is a continuous local martingale. Deduce that for \lambda \in \mathbb{C}, the process \left( \exp (\lambda M_t -\frac{1}{2} \lambda^2 \langle M \rangle_t )\right)_{t \ge 0} is a local martingale.

Exercise. The Hermite polynomial of order n is defined as
H_n (x)=(-1)^n e^{\frac{x^2}{2}} \frac{d^n}{dx^n} e^{-\frac{x^2}{2}}.

  • Compute H_0, H_1,H_2,H_3.
  • Show that if (B_t)_{t \ge 0} is a Brownian motion, then the process \left(t^{n/2}H_n (\frac{B_t}{\sqrt{t}})\right)_{t \ge 0} is a martingale.
  • Show that
    t^{n/2}H_n (\frac{B_t}{\sqrt{t}})=n! \int_0^t \int_0^{s_n} \cdots \int_0^{s_2} dB_{s_1}\cdots dB_{s_n}.
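
For instance H_2(x)=x^2-1 and H_3(x)=x^3-3x, so the second bullet predicts that B_t^2 - t and B_t^3 - 3 t B_t are martingales; in particular both have mean 0. A quick Monte Carlo sketch of ours:

```python
import math
import random

random.seed(8)

# t H_2(B_t/sqrt(t)) = B_t^2 - t  and  t^{3/2} H_3(B_t/sqrt(t)) = B_t^3 - 3 t B_t
# are martingales started at 0, hence both have mean 0 at any time t.
t, n_paths = 2.0, 50000
s = math.sqrt(t)
m2 = m3 = 0.0
for _ in range(n_paths):
    bt = random.gauss(0.0, s)
    m2 += bt * bt - t
    m3 += bt ** 3 - 3.0 * t * bt
m2 /= n_paths
m3 /= n_paths
print(round(m2, 3), round(m3, 3))
```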


Lecture 20. Local martingales, Semimartingales and Integrators

The goal of this Lecture is to extend the domain of definition of the Itō integral with respect to Brownian motion. The idea is to use the fruitful concept of localization. We will then be interested in the wider class of processes for which it is possible to define a stochastic integral satisfying natural probabilistic properties. This will lead to the natural notion of semimartingales.

As before, we consider here a Brownian motion (B_t)_{t\ge 0} that is defined on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions.

Definition. We define the space L_{loc}^2 (\Omega, (\mathcal{F}_t)_{t \ge  0},\mathbb{P}), as the set of the processes (u_t)_{t \ge 0} that are progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that for every t \ge 0,
\mathbb{P} \left( \int_0^{t} u_s^2 ds <+\infty\right)=1.

We first have the following fact:

Lemma. Let u \in L_{loc}^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). There exists an increasing family of stopping times (T_n)_{n \ge 0} for the filtration (\mathcal{F}_t)_{t \ge 0} such that:

  • Almost surely, \lim_{n \rightarrow +\infty} T_n=+\infty;
  • \mathbb{E} \left( \int_0^{T_n} u_s^2 ds \right)<+\infty.

Thanks to this Lemma, it is now easy to naturally define \int_0^t u_s dB_s for u \in L_{loc}^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). Indeed, let u \in L_{loc}^2 (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathbb{P}) and let t \ge 0. According to the previous lemma, let us now consider an increasing sequence of stopping times (T_n)_{n \ge  0} such that:

  • Almost surely, \lim_{n \rightarrow +\infty} T_n=+\infty;
  • \mathbb{E} \left( \int_0^{T_n} u_s^2 ds \right)< +\infty.

Since \mathbb{E} \left( \int_0^{T_n} u_s^2 ds \right)<+\infty, the stochastic integral
\int_0^{T_n} u_s dB_s=\int_0^{+\infty} u_s 1_{[0,T_n]} (s)dB_s
exists. We may therefore define in a unique way a stochastic process \left(\int_0^{t} u_s dB_s \right)_{t \ge 0} such that:

  • \left(\int_0^{t} u_s dB_s \right)_{t \ge 0} is a continuous stochastic process adapted to the filtration (\mathcal{F}_t)_{t \ge 0};
  • The stochastic process \left(\int_0^{t\wedge T_n} u_s dB_s \right)_{t \ge 0} is a uniformly integrable martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} (because it is bounded in L^2).

This leads to the following definition:

Definition. A stochastic process (M_t)_{t \geq 0} is called a local martingale (with respect to the filtration (\mathcal{F}_t)_{t \geq 0}) if there is a sequence of stopping times (T_n)_{n \geq 0} such that:

  • The sequence (T_n)_{n\geq 0} is increasing and almost surely satisfies \lim_{n \rightarrow +\infty} T_n =+\infty;
  • For n \geq 1, the process (M_{t \wedge T_n})_{t \geq 0} is a uniformly integrable martingale with respect to the filtration (\mathcal{F}_t)_{t \geq 0}.

Thus, as an example, if u \in L_{loc}^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) then the process \left(\int_0^{t} u_s dB_s \right)_{t \ge 0} is a local martingale. Of course, any martingale turns out to be a local martingale. But, as we will see later, in general the converse is not true. The following Exercise gives a useful criterion to prove that a given local martingale is actually a martingale.

Exercise.
Let (M_t)_{t \geq 0} be a continuous local martingale such that for t \geq 0,
\mathbb{E} \left( \sup_{s \leq t} \mid M_s \mid \right) < +\infty.
Show that (M_t)_{t \geq 0} is a martingale. As a consequence, bounded local martingales are necessarily martingales.

It is interesting to observe that if (M_t)_{t \ge 0} is a local martingale, then the sequence of stopping times may explicitly be chosen so that the resulting stopped martingales enjoy nice properties.

Lemma. Let (M_t)_{t \geq 0} be a continuous local martingale on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) such that M_0=0. Let
T_n= \inf \{ t \ge 0, | M_t | \ge n \}.
Then, for n \in \mathbb{N}, the process (M_{t \wedge T_n})_{t \geq 0} is a bounded martingale.

Proof. Let (S_n)_{n \geq 0} be a sequence of stopping times such that:

  • The sequence (S_n)_{n\geq 0} is increasing and almost surely \lim_{n \rightarrow +\infty} S_n =+\infty;
  • For every n \geq 1, the process (M_{t \wedge S_n})_{t \geq 0} is a uniformly integrable martingale with respect to the filtration (\mathcal{F}_t)_{t \geq 0}.

For t \ge s and k,n \ge 0, we have:
\mathbb{E} \left( M_{t \wedge S_k \wedge T_n} \mid \mathcal{F}_s \right)=M_{s \wedge S_k \wedge T_n}.
Letting k \rightarrow +\infty then leads to the expected result \square

Since bounded martingales are of course square integrable, we easily deduce from the previous Lemma that the following result holds:

Theorem. Let (M_t)_{t \geq 0} be a continuous local martingale on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) such that M_0=0. Then, there is a unique continuous increasing process (\langle M \rangle_t)_{t \geq 0} such that:

  • \langle M \rangle_0=0;
  • The process (M_t^2 - \langle M \rangle_t)_{t \geq 0} is a local martingale.

Furthermore, for every t \ge 0 and every sequence of subdivisions \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, the following limit holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k} -M_{t^n_{k-1}}\right)^2=\langle M \rangle_t.
Moreover, if u is a progressively measurable process such that for every t \ge 0, \mathbb{P} \left( \int_0^t u_s^2 d\langle M \rangle_s <+\infty \right)=1, then we may define a stochastic integral \left( \int_0^t u_s dM_s \right)_{t \ge 0} such that the stochastic process \left( \int_0^t u_s dM_s \right)_{t \ge 0} is a continuous local martingale.
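
The convergence of the sums of squared increments is easy to observe numerically (our sketch, not part of the notes): for M = B a Brownian motion, the realized quadratic variation along a fine subdivision of [0,1] approximates \langle B \rangle_1 = 1.

```python
import math
import random

random.seed(9)

# Sum of squared increments of a Brownian path over a uniform
# subdivision of [0, 1]; it should be close to <B>_1 = 1.
n = 10000
dt = 1.0 / n
qv = sum(random.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(n))
print(round(qv, 3))
```

The fluctuation around 1 is of order \sqrt{2/n}, so with n = 10000 the realized quadratic variation is close to 1.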

At this point, we have almost found the widest class of stochastic processes with respect to which it is possible to naturally construct a stochastic integral. To go further in that direction, let us first observe that if we add a bounded variation process to a local martingale, then we obtain a process with respect to which a stochastic integral is naturally defined.

More precisely, if (X_t)_{t \ge 0} may be written in the form:
X_t=X_0+A_t+M_t, where (A_t)_{t \ge 0} is a bounded variation process and where
(M_t)_{t \geq 0} is a continuous local martingale on (\Omega, (\mathcal{F}_t)_{t \ge  0},\mathcal{F},\mathbb{P}) such that M_0=0, then if u is a progressively measurable process such that for t \ge 0,
\mathbb{P} \left( \int_0^t u_s^2 d\langle M \rangle_s <+\infty \right)=1, we may define a stochastic integral as
\left( \int_0^t u_s dX_s \right)_{t \ge 0}=\left( \int_0^t u_s dA_s + \int_0^t u_s dM_s \right)_{t \ge 0},
where \int_0^t u_s dA_s is simply understood as the Riemann-Stieltjes integral with respect to the process (A_t)_{t \ge 0}.

The class of stochastic processes that we obtained is called the class of semimartingales and, as we will see it later, is the most relevant one:
Definition.
Let (X_t)_{t \ge 0} be an adapted continuous stochastic process on the filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). We say that (X_t)_{t \ge 0} is a semimartingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} if (X_t)_{t \ge 0} may be written as:
X_t=X_0+A_t+M_t
where (A_t)_{t \ge 0} is a bounded variation process and (M_t)_{t \geq 0} is a continuous local martingale such that M_0=0. If it exists, the previous decomposition is unique.

Exercise. Let (M_t)_{t \ge 0} be a continuous local martingale on the filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). Show that (M^2_t)_{t \ge 0} is a semimartingale.

Since a bounded variation process has a zero quadratic variation, it is easy to prove the following result:
Proposition. Let
X_t=X_0+A_t+M_t, \quad t \ge 0,
be a continuous adapted semimartingale. For every t \ge 0 and every sequence of subdivisions \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n[0,t]\mid=0, the following limit holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( X_{t^n_k} -X_{t^n_{k-1}}\right)^2=\langle M\rangle_t.
We therefore call \langle M \rangle the quadratic variation of X and denote \langle X \rangle=\langle M \rangle.

Exercise. Let (X_t)_{t \geq 0} be a continuous semimartingale on the filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). If \Delta [0,T] is a subdivision of the time interval [0,T], we denote S_t^{\Delta [0,T]}(X)=\sum_{i=0}^{k-1}\left( X_{t_{i+1}} -X_{t_i} \right)^2 +(X_t-X_{t_k})^2, where k is such that t_k \le t <t_{k+1}. Let \Delta_n [0,T] be a sequence of subdivisions of [0,T] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,T]\mid=0. Show that the following limit holds in probability: \lim_{n \rightarrow +\infty} \sup_{0\le t \le T} \left| S_t^{\Delta_n [0,T]}(X) - \langle X \rangle_t \right|=0.

Exercise. Let (X_t)_{t \geq 0} be a continuous semimartingale on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). Let u^n be a sequence of locally bounded and adapted processes almost surely converging to 0 and such that | u^n | \le u, where u is a locally bounded process. Show that for T \ge 0, the following limit holds in probability
\lim_{n \rightarrow +\infty} \sup_{0\le t \le T} \left| \int_0^t u^n_s dX_s \right|=0.

It has already been observed that, in the Brownian case, though the stochastic integral is not an almost sure limit of Riemann sums, it is a limit in probability of such sums. This may be extended to semimartingales in the following way.

Proposition. Let u be a continuous and adapted process, let (X_t)_{t \ge 0} be a continuous and adapted semimartingale and let t \ge 0. For every sequence of subdivisions \Delta_n [0,t] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0,
the following limit holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=0}^{n-1}u_{t^n_{k}} \left( X_{t^n_{k+1}} - X_{t^n_{k}}\right)=\int_0^t u_s dX_s.

As we already suggested, the class of semimartingales is actually the widest class of stochastic processes with respect to which we may define a stochastic integral that enjoys natural properties. Let us explain more precisely what the previous statement means.

Let us denote by \mathcal{E}_b the set of processes (u_t)_{t \ge 0} such that:
u_t=\sum_{i=1}^{N} F_i 1_{(S_i,T_{i}]} (t), where 0\le S_1 \le T_1 \le ... \le S_N \le T_N are bounded stopping times and where the F_i's are bounded random variables, measurable with respect to \mathcal{F}_{S_i}. If (X_t)_{t \ge 0} is a continuous and adapted process and if u \in \mathcal{E}_b, then we naturally define
\int_0^t u_s dX_s =\sum_{i=1}^{N} F_i ( X_{T_i \wedge t}-X_{S_i \wedge t}).
We have the following result, which we shall admit without proof:

Proposition. Let (X_t)_{t \ge 0} be a continuous and adapted process. The process (X_t)_{t \ge 0} is a semimartingale if and only if for every sequence u^n in \mathcal{E}_b that almost surely converges to 0, we have for every t \ge 0 and \varepsilon >0,
\lim_{n \rightarrow + \infty} \mathbb{P} \left( \left| \int_0^t u^n_s dX_s \right| > \varepsilon \right)=0.


Lecture 19. Stochastic integrals with respect to square integrable martingales

In the same way that a stochastic integral with respect to Brownian motion was constructed, a stochastic integral with respect to square integrable martingales may be defined. We shall not repeat this construction, since it parallels the Brownian motion case, but we point out the main results without proofs.

Let (M_t)_{t \geq 0} be a continuous square integrable martingale on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. We assume that \sup_{t \ge 0} \mathbb{E} \left( M_t^2 \right) <+\infty, and M_0=0. Let us denote by \mathcal{L}_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) the set of processes (u_t)_{t \ge 0} that are progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that
\mathbb{E} \left( \int_0^{+\infty} u_s^2 d\langle M \rangle_s \right)<+\infty.
We still denote by \mathcal{E} the set of simple and predictable processes, that is the set of processes (u_t)_{t \ge 0} that may be written as:
u_t=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} (t),
where 0\le t_0 \le ... \le t_n and where F_i is a random variable that is measurable with respect to \mathcal{F}_{t_i} and such that \mathbb{E}( F_i^2)<+\infty. We define an equivalence relation \mathcal{R} on the set \mathcal{L}_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) as follows:
u\mathcal{R} v \Leftrightarrow \mathbb{E} \left( \int_0^{+\infty} (u_s-v_s)^2 d\langle M \rangle_s \right)=0,
and denote by
L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P})=\mathcal{L}_M^2 (\Omega, (\mathcal{F}_t)_{t \ge  0},\mathbb{P})/\mathcal{R},
the set of equivalence classes. It is easy to check that L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) endowed with the norm
\| u \|^2=\mathbb{E} \left( \int_0^{+\infty} u_s^2 d\langle M \rangle_s \right),
is a Hilbert space.

Theorem. There exists a unique linear map
\mathcal{I}_M:L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) \rightarrow L^2 (\Omega, \mathcal{F},\mathbb{P})
such that:

  • For u=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} \in \mathcal{E}, \mathcal{I}_M (u)=\sum_{i=0}^{n-1} F_i (M_{t_{i+1}} -M_{t_i});
  • For u \in L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}),
    \mathbb{E} \left( \mathcal{I}_M (u)^2\right)=\mathbb{E} \left( \int_0^{+\infty} u_s^2 d\langle M \rangle_s\right).

The map \mathcal{I}_M is called the Itô integral with respect to the continuous and square integrable martingale (M_t)_{t \geq 0} . We denote for u \in L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}),
\mathcal{I}_M (u)=\int_0^{+\infty} u_s d M_s.
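The isometry property in the theorem can be checked numerically in the simplest case M = B, a Brownian motion, for which d\langle M \rangle_s = ds. The Monte Carlo sketch below (Python with numpy, illustrative only) uses the deterministic integrand u_s = s on [0,1], approximated by left-point sums, so both sides of the isometry should be close to \int_0^1 s^2 ds = 1/3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of E[(int u dM)^2] = E[int u^2 d<M>] for M = B,
# a Brownian motion (so d<M>_s = ds), with u_s = s on [0, 1].
n_steps, n_paths = 500, 20000
dt = 1.0 / n_steps
s = np.arange(n_steps) * dt            # left endpoints of the subintervals
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
I = (s * dB).sum(axis=1)               # left-point approximation of int_0^1 s dB_s

lhs = (I ** 2).mean()                  # estimate of E[(int u dB)^2]
rhs = (s ** 2 * dt).sum()              # int_0^1 s^2 ds, approximately 1/3
```

Here I is a Gaussian random variable with mean 0 and variance rhs, so lhs estimates rhs exactly up to Monte Carlo error.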

Proposition. Let (u_t)_{t \ge 0} be a stochastic process which is progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that for every t \ge 0, \mathbb{E}\left( \int_0^t u_s^2 d\langle M \rangle_s \right) < +\infty. The process
\left( \int_0^t u_s dM_s \right)_{t \ge 0}=\left( \int_0^{+\infty} u_s 1_{[0,t]}(s)dM_s \right)_{t \ge 0}
is a square integrable martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} that admits a continuous modification.

Proposition. Let (u_t)_{t \ge 0} be a stochastic process which is progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that for every t \ge 0, \mathbb{E} \left( \int_0^t u_s^2 d\langle M \rangle_s \right)< +\infty. We have
\left\langle \int_0^{\cdot} u_s dM_s \right\rangle_t=\int_0^t u_s^2d\langle M \rangle_s.
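Again in the special case M = B, the formula for the bracket of the integral process can be observed numerically: the realized quadratic variation of I_t = \int_0^t u_s dB_s along a fine subdivision should be close to \int_0^t u_s^2 ds. A sketch (Python with numpy, integrand u_s = \cos s chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Realized quadratic variation of I_t = int_0^t cos(s) dB_s over [0, 1],
# compared with int_0^1 cos(s)^2 ds = 1/2 + sin(2)/4.
n = 2 ** 15
dt = 1.0 / n
s = np.arange(n) * dt
dB = rng.normal(0.0, np.sqrt(dt), n)
dI = np.cos(s) * dB                       # increments of the integral process
realized_qv = (dI ** 2).sum()             # sum of squared increments
predicted = (np.cos(s) ** 2 * dt).sum()   # int_0^1 cos(s)^2 ds
```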

Proposition. Let u \in L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) be a stochastic process whose paths are left continuous. Let t \ge 0. For every sequence of subdivisions \Delta_n [0,t] such that
\lim_{n \rightarrow +\infty} | \Delta_n [0,t] |=0,
the following convergence holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=0}^{n-1}u_{t^n_{k}} \left( M_{t^n_{k+1}} - M_{t^n_{k}}\right)=\int_0^t u_s dM_s.
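The left-point Riemann sums of the proposition can be observed numerically in the classical case u = M = B, a Brownian motion, where Itô's formula gives \int_0^1 B_s dB_s = (B_1^2 - 1)/2. The sketch below (Python with numpy, illustrative only) compares the left-point sum on a fine grid with this limit; note that it is the left endpoints that produce the Itô, rather than Stratonovich, limit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Left-point Riemann sum for int_0^1 B_s dB_s on a fine subdivision,
# compared with the Ito limit (B_1^2 - 1)/2.
n = 2 ** 14
dB = rng.normal(0.0, np.sqrt(1.0 / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

riemann = (B[:-1] * np.diff(B)).sum()  # sum u_{t_k} (B_{t_{k+1}} - B_{t_k})
exact = (B[-1] ** 2 - 1.0) / 2.0
```

Algebraically, the difference riemann - exact equals (1 - \sum (\Delta B)^2)/2, which is small because the quadratic variation of B on [0,1] concentrates around 1 as the mesh goes to 0.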

Proposition. Let us assume that M_t=\int_0^t \Theta_s dB_s where (B_t)_{t \ge 0} is a Brownian motion on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) and where \Theta \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). For u \in L_M^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}),
\int_0^t u_s dM_s=\int_0^t u_s \Theta_s dB_s.

Exercise. Let (M_t)_{t \ge 0} and (N_t)_{t \ge 0} be two square integrable martingales such that for every t \ge 0,
\mathbb{E} \left( \int_0^t M_s^2 d\langle N \rangle_s \right)<+\infty, \quad \mathbb{E} \left( \int_0^t N_s^2 d\langle M \rangle_s \right)<+\infty.
Show that for t \ge 0,
M_tN_t =M_0 N_0+ \int_0^t M_s dN_s +\int_0^t N_s dM_s +\langle M,N \rangle_t.
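A discrete form of this product rule holds exactly along any subdivision: telescoping M_{k+1}N_{k+1} - M_k N_k = M_k \Delta N + N_k \Delta M + \Delta M \, \Delta N gives the identity below, in which the last sum is the discrete counterpart of \langle M,N \rangle_t. A sketch (Python with numpy) with M, N two independent simulated Brownian motions, for which \langle M,N \rangle = 0:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete product rule: over any subdivision of [0, 1], exactly,
#   M_1 N_1 = sum M dN + sum N dM + sum dM*dN,
# where sum dM*dN approximates <M,N>_1 (= 0 for independent B.m.'s).
n = 2 ** 14
dM = rng.normal(0.0, np.sqrt(1.0 / n), n)
dN = rng.normal(0.0, np.sqrt(1.0 / n), n)
M = np.concatenate(([0.0], np.cumsum(dM)))
N = np.concatenate(([0.0], np.cumsum(dN)))

lhs = M[-1] * N[-1]
rhs = (M[:-1] * dN).sum() + (N[:-1] * dM).sum() + (dM * dN).sum()
```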
