Lecture 18. Square integrable martingales and quadratic variations

It turns out that stochastic integrals may be defined for stochastic processes other than Brownian motion. The key properties used in the above approach were the martingale property and the square integrability of the Brownian motion.

As above, we consider a filtered probability space (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. A martingale (M_t)_{t \ge 0} defined on this space is said to be square integrable if for every t \geq 0, \mathbb{E}\left( M_t^2 \right) < + \infty.

For instance, if (B_t)_{t \ge 0} is a Brownian motion on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) and if (u_t)_{t \ge 0} is a process which is progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} such that for every t \ge 0, \mathbb{E} \left( \int_0^t u_s^2 ds \right)<+\infty then, the process M_t=\int_0^t u_s dB_s, \quad t \ge 0 is a square integrable martingale.

The most important theorem concerning continuous square integrable martingales is that they admit a quadratic variation. Before proving this theorem, we state a preliminary lemma.

Lemma. Let (M_t)_{0\le t \le T} be a continuous martingale whose paths almost surely have bounded variation, that is,
\sup_{\Delta[0,T]} \sum_{k=0}^{n-1} | M_{t_{k+1}} -M_{t_{k}}|<+\infty,
where the supremum is taken over all subdivisions \Delta[0,T]=\{0=t_0 \le \dots \le t_n=T\} of [0,T]. Then (M_t)_{0\le t \le T} is constant.

Proof.
We may assume M_0=0. For N \ge 0, let us consider the stopping time
T_N=\inf \left\{ s \in [0,T],\ |M_s| \ge N \text{ or } \sup_{\Delta[0,s]} \sum_{k=0}^{n-1} | M_{t_{k+1}}  -M_{t_{k}}| \ge N \right\}\wedge T.
The stopped process (M_{t \wedge T_N})_{ 0 \le t \le T } is a martingale and therefore for s \le t,
\mathbb{E}((M_{t \wedge T_N} -M_{s \wedge T_N})^2)=\mathbb{E}(M_{t \wedge T_N}^2) -\mathbb{E}(M_{s \wedge T_N}^2).
Consider now a sequence of subdivisions \Delta_n[0,T]=\{0=t^n_0 \le \cdots \le t^n_n=T\} whose mesh tends to 0. Summing the above identity over the points of the subdivision yields
\mathbb{E}(M_{T \wedge T_N}^2)  =\mathbb{E}\left(\sum_{k=1}^{n} \left( M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right)^2\right)
\le \mathbb{E}\left( \sup_{1 \le k \le n} | M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N} | \sum_{k=1}^{n} \left| M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right|\right)
\le N \,\mathbb{E}\left( \sup_{1 \le k \le n} | M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N} |\right).
By letting n \to +\infty and using the continuity of M together with dominated convergence, we get \mathbb{E}(M_{T \wedge T_N}^2)=0, hence M_{t \wedge T_N}=0 for every t \le T. Letting now N \to \infty, we conclude that M=0 on [0,T]. \square

Theorem. Let (M_t)_{t \geq 0} be a martingale on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) which is continuous and square integrable and such that M_0=0. There is a unique continuous and increasing process denoted (\langle M \rangle_t)_{t \geq 0} that satisfies the following properties:

  • \langle M \rangle_0=0;
  • The process (M_t^2 - \langle M \rangle_t)_{t \geq 0} is a martingale.

Actually for every t \ge 0 and for every sequence of subdivisions \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0,
the following convergence takes place in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2=\langle M \rangle_t.
The process (\langle M \rangle_t)_{t \geq 0} is called the quadratic variation process of (M_t)_{t \geq 0}.

Proof.
We first assume that the martingale (M_t)_{t \geq 0} is bounded and prove that if \Delta_n [0,t] is a sequence of subdivisions of the interval [0,t] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0,
then the limit
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2
exists in L^2 and thus in probability.

Toward this goal, we introduce some notations. If \Delta [0,T] is a subdivision of the time interval [0,T] and if (X_t)_{t\ge 0} is a stochastic process, then we denote
S_t^{\Delta [0,T]}(X)=\sum_{i=0}^{k-1}\left( X_{t_{i+1}} -X_{t_i} \right)^2 +(X_t-X_{t_k})^2,
where k is such that t_k \le t < t_{k+1}.

An easy computation on conditional expectations shows that if (X_t)_{t\ge 0} is a square integrable martingale, then the process X_t^2-S_t^{\Delta [0,T]}(X), \quad t \le T is also a martingale. Also, if \Delta [0,T] and \Delta' [0,T] are two subdivisions of the time interval [0,T], we will denote by \Delta \vee \Delta' [0,T] the subdivision obtained by putting together the points of \Delta [0,T] and the points of \Delta' [0,T]. Let now \Delta_n [0,T] be a sequence of subdivisions of [0,T] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,T]\mid=0.
Let us show that the sequence S_T^{\Delta_n [0,T]}(M) is a Cauchy sequence in L^2. Since the process S^{\Delta_n [0,T]}(M)-S^{\Delta_p [0,T]}(M) is a martingale (as a difference of two martingales), we deduce that
\mathbb{E}\left( \left(S_T^{\Delta_n [0,T]}(M)-S_T^{\Delta_p [0,T]}(M) \right)^2 \right)
= \mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M)-S^{\Delta_p [0,T]}(M))\right)
\le  2 \left( \mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M))\right)+\mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_p [0,T]}(M))\right) \right).
Let us denote by s_k's the points of the subdivision \Delta_n \vee \Delta_p [0,T] and, for fixed s_k, we denote by t_l the point of \Delta_n [0,T] such that t_l \le s_k < t_{l+1}. We have
S_{s_{k+1}}^{\Delta_n [0,T]}(M)-S_{s_{k}}^{\Delta_n [0,T]}(M)=(M_{s_{k+1}} -M_{t_l})^2-(M_{s_{k}} - M_{t_l})^2
=(M_{s_{k+1}} -M_{s_k})(M_{s_{k+1}} +M_{s_k}-2M_{t_l}).
Therefore, from the Cauchy-Schwarz inequality,
\mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M))\right)\le \mathbb{E} \left( \sup_k (M_{s_{k+1}} +M_{s_k}-2M_{t_l})^4\right)^{1/2}\mathbb{E} \left( \left(S_T^{\Delta_n \vee \Delta_p [0,T]}(M) \right)^2\right)^{1/2}.
Since the martingale M is assumed to be continuous, when n,p \rightarrow +\infty,
\mathbb{E} \left( \sup_k (M_{s_{k+1}} +M_{s_k}-2M_{t_l})^4\right) \rightarrow 0.
Thus, in order to conclude, it suffices to prove that \mathbb{E} \left( \left(S_T^{\Delta_n \vee \Delta_p [0,T]}(M) \right)^2\right) is bounded, which is an easy consequence of the boundedness of M. Therefore, the following convergence holds in the L^2 sense:
\langle M \rangle_t =\lim_{n \rightarrow +\infty} \sum_{k=1}^{n}\left( M_{t^n_k} -M_{t^n_{k-1}}\right)^2.
The process (M_t^2 - \langle M \rangle_t)_{t \geq 0} is seen to be a martingale because for every n and T \ge 0, the process M_t^2-S_t^{\Delta_n [0,T]}(M), \quad t \le T is a martingale. Let us now show that the obtained process \langle M \rangle is a continuous process. From Doob’s inequality, for n,p \ge 0 and \varepsilon > 0,
\mathbb{P}\left( \sup_{0 \le t \le T} \left(S_t^{\Delta_n[0,T]}(M)-S_t^{\Delta_p [0,T]}(M) \right) \ge \varepsilon  \right)\le \frac{\mathbb{E}\left( \left(S_T^{\Delta_n[0,T]}(M)-S_T^{\Delta_p[0,T]}(M) \right)^2\right)}{\varepsilon^2}.

From the Borel-Cantelli lemma, there exists therefore a sequence n_k such that the sequence of continuous stochastic processes \left( S_t^{\Delta_{n_k} [0,T]}(M)\right)_{0 \le t \le T} almost surely converges uniformly to the process \left( \langle M \rangle_t\right)_{0 \le t \le T}. This proves the existence of a continuous version of \langle M \rangle. To prove that \langle M \rangle is increasing, it is enough to consider an increasing sequence of subdivisions whose mesh tends to 0. Let us now prove that \langle M \rangle is the unique process such that M^2-\langle M \rangle is a martingale. Let A and A' be two continuous and increasing stochastic processes such that A_0=A'_0=0 and such that (M_t^2 -A_t)_{ t\ge 0} and (M_t^2 -A'_t)_{ t\ge 0} are martingales. The process (N_t)_{t\ge 0}=(A_t -A'_t)_{t\ge 0} is then a continuous martingale with bounded variation. From the previous lemma, it is constant, and therefore equal to 0 due to its initial condition.

We now turn to the case where (M_t)_{t \ge 0} is not necessarily bounded. Let us introduce the sequence of stopping times:
T_N=\inf \{ t \ge 0, |M_t | \ge N \}.
According to the previous arguments, for every N \ge 0, there is an increasing process A^N such that (M_{t\wedge T_N}^2-A^N_t)_{t \ge 0} is a martingale. By uniqueness of this process, it is clear that A^{N+1}_{t\wedge T_N}=A^N_t, therefore we can define a process A_t by requiring that A_t(\omega)= A^N_t(\omega) provided that T_N(\omega)\ge t. By using convergence theorems, it is then checked that (M_t^2-A_t)_{t \ge 0} is a martingale.

Finally, let \Delta_n[0,t] be a sequence of subdivisions whose mesh tends to 0. We have for every \varepsilon >0,
\mathbb{P} \left( \left|A_t - \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2 \right|\ge \varepsilon  \right)
\le \mathbb{P} (T_N \le t)+\mathbb{P} \left( \left|A^N_t - \sum_{k=1}^{n} \left( M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right)^2 \right|\ge \varepsilon  \right).
This easily implies the announced convergence in probability of the quadratic sums to A_t. \square
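As a quick numerical sketch of the theorem (everything below is an illustration of my own choosing, not part of the lecture), take the continuous square integrable martingale M_t = B_t^2 - t; one can show that \langle M \rangle_t = 4\int_0^t B_s^2 ds, and the quadratic sums along a fine subdivision should be close to a Riemann sum for that bracket:

```python
import numpy as np

# Illustrative simulation (assumed setup): M_t = B_t^2 - t is a continuous
# square integrable martingale with <M>_t = 4 \int_0^t B_s^2 ds.
rng = np.random.default_rng(0)
n, t = 200_000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))       # Brownian path on [0, t]
M = B**2 - dt * np.arange(n + 1)                 # M_s = B_s^2 - s on the grid
qv = np.sum(np.diff(M) ** 2)                     # \sum (M_{t_{k+1}} - M_{t_k})^2
bracket = 4.0 * np.sum(B[:-1] ** 2) * dt         # Riemann sum for 4 \int_0^t B_s^2 ds
print(qv, bracket)
```

On a typical run the two printed numbers agree to within a few percent, consistent with the convergence in probability stated above.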

Exercise. Let (M_t)_{t \geq 0} be a square integrable martingale on a filtered probability space (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). Assume that M_0=0. If \Delta [0,T] is a subdivision of the time interval [0,T] and if (X_t)_{t\ge 0} is a stochastic process, we denote
S_t^{\Delta [0,T]}(X)=\sum_{i=0}^{k-1}\left( X_{t_{i+1}} -X_{t_i} \right)^2 +(X_t-X_{t_k})^2,
where k is such that t_k \le t <t_{k+1}. Let \Delta_n [0,T] be a sequence of subdivisions of [0,T] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,T]\mid=0.
Show that the following convergence holds in probability,
\lim_{n \rightarrow +\infty} \sup_{0\le t \le T} \left| S_t^{\Delta_n [0,T]}(M) - \langle M \rangle_t \right|=0.
Thus, in the previous theorem, the convergence is actually uniform on compact intervals.

We have already pointed out that stochastic integrals with respect to Brownian motion provide an example of square integrable martingales; they therefore admit a quadratic variation. The next proposition computes this quadratic variation explicitly.

Proposition. Let (B_t)_{t \ge 0} be a Brownian motion on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. Let (u_t)_{t \ge 0} be a progressively measurable process such that for every t \ge 0, \mathbb{E} \left( \int_0^t u_s^2 ds \right)<+\infty. Then, for t \ge 0:
\left\langle \int_0^{\cdot} u_s dB_s \right\rangle_t=\int_0^t u_s^2ds.

Proof.
Since the process \left( \int_0^t u_s^2ds \right)_{t \ge 0} is continuous, increasing and equals 0 when t=0, we just need to prove that
\left(  \int_0^{t} u_s dB_s \right)^2 - \int_0^t u_s^2ds
is a martingale.

If u \in \mathcal{E} is a simple process, it is easily seen that for t \ge s:
\mathbb{E} \left( \left(  \int_0^{t} u_v dB_v \right)^2 \mid\mathcal{F}_s \right)
=\mathbb{E} \left( \left(  \int_0^{s} u_v dB_v +\int_s^{t} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)
=\mathbb{E} \left( \left(  \int_0^{s} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)+\mathbb{E} \left( \left(  \int_s^{t} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)
=\left(  \int_0^{s} u_v dB_v \right)^2 +\mathbb{E} \left(   \int_s^{t} u_v^2 dv \mid \mathcal{F}_s \right).
We may then conclude by using the density of \mathcal{E} in L^2 (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathbb{P}) \square
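The proposition can be sanity-checked numerically; here is a sketch with the deterministic integrand u_s = s (an example of my own, simulated by Riemann sums), for which \langle \int_0^{\cdot} s dB_s \rangle_t = \int_0^t s^2 ds = t^3/3:

```python
import numpy as np

# Illustrative check (assumed setup): for u_s = s, the quadratic sums of
# M_t = \int_0^t s dB_s should be close to t^3/3.
rng = np.random.default_rng(1)
n, t = 200_000, 1.0
dt = t / n
left = dt * np.arange(n)                   # left endpoints t_k of the subdivision
dB = rng.normal(0.0, np.sqrt(dt), n)
dM = left * dB                             # increments of M along the subdivision
qv = np.sum(dM**2)                         # quadratic sums
print(qv, t**3 / 3)
```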

As a straightforward corollary of the existence of a quadratic variation for the square integrable martingales, we immediately obtain:

Corollary. Let (M_t)_{t \geq 0} and (N_t)_{t \ge 0} be two continuous square integrable martingales on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) such that M_0=N_0=0. There is a unique continuous process (\langle M ,N \rangle_t)_{t \geq 0} with bounded variation that satisfies:

  • \langle M ,N\rangle_0=0;
  • The process (M_t N_t - \langle M ,N \rangle_t)_{t \geq 0} is a martingale.

Moreover, for t \ge 0 and for every sequence \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, the following convergence holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k} -M_{t^n_{k-1}}\right)\left( N_{t^n_k} -N_{t^n_{k-1}}\right)=\langle M,N \rangle_t.
The process (\langle M ,N\rangle_t)_{t \geq 0} is called the quadratic covariation process of (M_t)_{t \geq 0} and (N_t)_{t \geq 0}.

Proof.
We may actually just use the formula
\langle M, N \rangle =\frac{1}{4} \left( \langle M+N \rangle - \langle M- N \rangle \right),
as a definition of the covariation and then check that the above properties are indeed satisfied \square

Exercise. Let (B_t^1)_{t\ge 0} and (B_t^2)_{t \ge 0} be two independent Brownian motions. Show that \langle B^1, B^2 \rangle_t =0.
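A numerical hint for the exercise (a sketch with simulation parameters of my own choosing, not a proof): for independent Brownian motions the discrete covariation sums concentrate near 0.

```python
import numpy as np

# Discrete covariation sums of two independent Brownian motions on [0, 1].
rng = np.random.default_rng(2)
n, t = 200_000, 1.0
dt = t / n
dB1 = rng.normal(0.0, np.sqrt(dt), n)      # increments of B^1
dB2 = rng.normal(0.0, np.sqrt(dt), n)      # increments of B^2, independent of B^1
cov_sum = np.sum(dB1 * dB2)                # \sum (B^1 increments)(B^2 increments)
print(cov_sum)                             # close to <B^1, B^2>_t = 0
```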


Lecture 17. Some properties of the Itō integral

In this lecture, we study properties of the Itō integral that was defined in the previous lecture. The following proposition is easy to prove and its proof is left to the reader as an exercise.

Proposition: Let u,v \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). Then
\mathbb{E} \left( \int_0^{+\infty} u_s dB_s \right)=0
and
\mathbb{E} \left(  \int_0^{+\infty} u_s dB_s \int_0^{+\infty} v_s dB_s \right)=\mathbb{E} \left( \int_0^{+\infty} u_s v_s ds \right).
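Both identities can be sanity-checked by Monte Carlo; below is a sketch with a simple integrand of my own choosing, u_s = \operatorname{sign}(B_{1/2}) 1_{(1/2,1]}(s), for which \int u dB = \operatorname{sign}(B_{1/2})(B_1 - B_{1/2}) and \int_0^{+\infty} u_s^2 ds = 1/2:

```python
import numpy as np

# Monte Carlo check of the mean-zero property and the isometry (assumed,
# illustrative integrand): E(\int u dB) = 0 and E((\int u dB)^2) = 1/2.
rng = np.random.default_rng(3)
N = 100_000
B_half = rng.normal(0.0, np.sqrt(0.5), N)      # B_{1/2} across N independent paths
incr = rng.normal(0.0, np.sqrt(0.5), N)        # B_1 - B_{1/2}, independent increment
I = np.sign(B_half) * incr                     # the stochastic integral, path by path
print(I.mean(), np.mean(I**2))                 # approximately 0 and 1/2
```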

Associated with Itō's integral, we can construct an integral process; its fundamental property is that it is a continuous martingale.

Proposition: Let u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). The process
\left( \int_0^t u_s dB_s \right)_{t \ge 0}=\left( \int_0^{+\infty} u_s 1_{[0,t]}(s)dB_s \right)_{t \ge 0}
is a martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} that admits a continuous modification.

Proof:

We first prove the martingale property. If
u_t=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} (t)
is in \mathcal{E}, then for every t \ge s,
\mathbb{E} \left( \int_0^t u_v dB_v \mid \mathcal{F}_s \right)
=\mathbb{E} \left( \sum_{i=0}^{n-1} F_i (B_{t_{i+1}\wedge t }-B_{t_i \wedge t}) \mid \mathcal{F}_s \right)
=\sum_{i=0}^{n-1} F_i (B_{t_{i+1}\wedge s}-B_{t_i \wedge s})
=\int_0^s u_v dB_v.

Thus if u \in \mathcal{E}, the process
\left( \int_0^t u_s dB_s \right)_{t \ge 0}=\left( \int_0^{+\infty} u_s 1_{[0,t]}(s)dB_s \right)_{t \ge 0}
is a martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0}. Since \mathcal{E} is dense in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}), and since it is easily checked that a limit in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) of martingales is still a martingale, we deduce the expected result.

We now prove the existence of a continuous version.

If u \in \mathcal{E}, the continuity of the integral process easily stems from the continuity of the Brownian paths. Let u \in L^2 (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathbb{P}) and let u^n be a sequence in \mathcal{E} that converges to u. From Doob’s inequality, we have for m,n \ge 0 and \varepsilon > 0,
\mathbb{P} \left( \sup_{ t \ge 0} \left| \int_0^t (u^n_s-u^m_s)dB_s\right| \ge \varepsilon \right)
\le \frac{\mathbb{E} \left( \mid\int_0^{+\infty} (u^n_s-u^m_s)dB_s \mid^2\right)}{\varepsilon^2}
\le \frac{\mathbb{E} \left(  \int_0^{+\infty}(u^n_s-u^m_s)^2 ds  \right)}{\varepsilon^2}.
There exists thus a sequence (n_k)_{k \ge 0} such that
\mathbb{P} \left( \sup_{ t \ge 0} \left| \int_0^t (u^{n_{k+1}}_s-u^{n_k}_s)dB_s \right| \ge \frac{1}{2^k} \right) \le \frac{1}{2^k}.
From the Borel-Cantelli lemma, the sequence of processes \left( \int_0^t u^{n_k}_sdB_s\right)_{t \ge 0} then converges almost surely, uniformly in t, to the process \left( \int_0^t u_sdB_s\right)_{t \ge 0}, which is therefore continuous \square

As a straightforward consequence of the previous proposition and Doob’s inequalities, we obtain

Proposition Let u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}).

  • For every \lambda > 0,
    \mathbb{P} \left( \sup_{t \ge 0} \left| \int_0^t u_s dB_s \right| \ge \lambda \right) \le \frac{\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right)}{\lambda^2};
  • \mathbb{E}\left( \left( \sup_{t \ge 0} \left| \int_0^t u_s dB_s \right|\right)^2 \right) \le 4 \mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right).

For u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}), the Riemann sums \sum_{k=0}^{n-1} u_{\frac{kt}{n}} \left( B_{\frac{(k+1)t}{n}}-B_{\frac{kt}{n}} \right) need not converge almost surely to \int_0^t u_s dB_s. However, the following proposition shows that under weak regularity assumptions we have convergence in probability.

Proposition: Let u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) be a left continuous process. Let t \ge 0. For every sequence of subdivisions \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, the following convergence holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=0}^{n-1}u_{t^n_{k}} \left( B_{t^n_{k+1}}-B_{t^n_{k}}\right)=\int_0^t u_s dB_s.

Proof.

Let us first assume that u is bounded almost surely. We have
\sum_{k=0}^{n-1}u_{t^n_{k}} \left( B_{t^n_{k+1}} -B_{t^n_{k}}\right)=\int_0^t u^n_s dB_s,
where u^n_s =\sum_{k=0}^{n-1}u_{t^n_{k}} 1_{(t^n_{k}, t^n_{k+1}]}(s). Itō's isometry and the Lebesgue dominated convergence theorem then show that \int_0^t u^n_s dB_s converges to \int_0^t u_s dB_s in L^2 and therefore in probability. For a general u, we can use a localization procedure. For N \ge 0, consider the stopping time T_N=\inf \{ t \ge 0, | u_t | \ge N \}.

We have for every \varepsilon >0,
\mathbb{P} \left( \left| \sum_{k=0}^{n-1}u_{t^n_{k}} \left( B_{t^n_{k+1}} -B_{t^n_{k}}\right) - \int_0^t u_s dB_s  \right| \ge \varepsilon \right)
\le \mathbb{P}(T_N \le t)+\mathbb{P} \left( \left| \sum_{k=0}^{n-1}u_{t^n_{k}} \left( B_{t^n_{k+1}} -B_{t^n_{k}}\right) - \int_0^t u_s dB_s  \right| \ge \varepsilon , T_N \ge t\right)
\le  \mathbb{P}(T_N \le t)+\mathbb{P} \left( \left| \sum_{k=0}^{n-1}u_{t^n_{k}} 1_{|u_{t^n_{k}} |\le N } \left( B_{t^n_{k+1}} -B_{t^n_{k}}\right) - \int_0^t u_s 1_{ |u_{s} |\le N }dB_s  \right| \ge \varepsilon \right).
This easily implies the convergence in probability.

\square
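The proposition can be illustrated on a single simulated path (a sketch; the integrand u_s = \exp(B_s), a continuous adapted process, and the meshes are my own choices): left-point Riemann sums on a coarse subdivision are already close to those computed on a much finer subdivision of the same path.

```python
import numpy as np

# Left-point Riemann sums for \int exp(B_s) dB_s at two mesh sizes on one path.
rng = np.random.default_rng(4)
n, t = 200_000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
fine = np.sum(np.exp(B[:-1]) * dB)              # mesh t/200000
step = 200
Bc = B[::step]                                  # the same path on a coarser grid
coarse = np.sum(np.exp(Bc[:-1]) * np.diff(Bc))  # mesh t/1000
print(fine, coarse)
```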

Exercise:

  • Show that for t \ge 0,
    \int_0^t B_s dB_s =\frac{1}{2} \left( B_t^2 -t \right).
    What is surprising in this formula?
  • Show that when n \rightarrow +\infty, the sequence
    \sum_{k=0}^{n-1} B_{\frac{(k+1)t}{n}} \left( B_{\frac{(k+1)t}{n}}-B_{\frac{kt}{n}} \right),
    converges in probability to a random variable that you should compute explicitly.
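The formula in the first part of the exercise can be checked numerically; here is a sketch (simulation parameters are my own) comparing the left-point Riemann sums with (B_t^2 - t)/2:

```python
import numpy as np

# Left-point Riemann sums for \int_0^t B_s dB_s versus (B_t^2 - t)/2.
rng = np.random.default_rng(5)
n, t = 200_000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
left_sum = np.sum(B[:-1] * dB)                 # \sum B_{t_k}(B_{t_{k+1}} - B_{t_k})
print(left_sum, 0.5 * (B[-1] ** 2 - t))
```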


Lecture 16. Itō integral

Since a Brownian motion (B_t)_{t \ge 0} does not have absolutely continuous paths, we cannot directly use the theory of Riemann-Stieltjes integrals to give a sense to integrals like \int_0^t \Theta_s dB_s for every continuous stochastic process (\Theta_s)_{s \ge 0}. However, if (\Theta_s)_{s \ge 0} is regular enough in the Hölder sense, then \int_0^t \Theta_s dB_s can still be constructed as a limit of Riemann sums by using a celebrated result of L.C. Young. In the sequel, we shall denote by \mathcal{C}^\alpha(I) the space of \alpha-Hölder continuous functions defined on an interval I.

Theorem. (Young's integral) Let f \in \mathcal{C}^\beta([0,T]) and g \in \mathcal{C}^\gamma([0,T]). If \beta+\gamma>1, then for every sequence of subdivisions \Delta_n [0,T] whose mesh tends to 0, the Riemann sums

\sum_{i=0}^{n-1} f(t^n_i)( g(t^n_{i+1}) - g(t^n_i))

converge, when n \to \infty, to a limit which is independent of the chosen sequence of subdivisions. This limit is denoted \int_0^T f dg and is called the Young integral of f with respect to g.

As a consequence of the previous result, we can therefore use Young's integral to give a sense to the integral \int_0^t \Theta_s dB_s as soon as the stochastic process (\Theta_s)_{s \ge 0} has \gamma-Hölder paths with \gamma >1/2. This is not satisfactory, since for instance the integral \int_0^t B_s dB_s is not even well defined. The alternative is then to give up the almost sure convergence of the Riemann sums and, instead, to take advantage of the quadratic variation of the Brownian motion paths and use the full power of probabilistic methods.
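Young's theorem can be illustrated with deterministic functions (the examples below are my own choices): f(t) = t^{0.8} is 0.8-Hölder and g(t) = t^{0.5} is 0.5-Hölder on [0,1], so \beta + \gamma = 1.3 > 1, and the Riemann sums converge to \int_0^1 f(t) g'(t) dt = \int_0^1 0.5\, t^{0.3} dt = 1/2.6.

```python
import numpy as np

# Riemann sums of a Young integral for Hölder functions with beta + gamma > 1.
n = 100_000
grid = np.linspace(0.0, 1.0, n + 1)
f = grid ** 0.8
g = grid ** 0.5
riemann = np.sum(f[:-1] * np.diff(g))      # \sum f(t_k)(g(t_{k+1}) - g(t_k))
print(riemann, 1.0 / 2.6)
```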

In what follows, we consider a Brownian motion (B_t)_{t\ge 0} which is defined on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). (B_t)_{t \ge 0} is assumed to be adapted to the filtration (\mathcal{F}_t)_{t \ge 0}. We also assume that (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) satisfies the usual conditions as they were defined at the beginning of Lecture 10. These assumptions imply in particular the following facts that we record here for later use:

  1. A limit (in L^2 or in probability) of adapted processes is still adapted;
  2. A modification of a progressively measurable process is still a progressively measurable process.

Exercise. Let (B_t)_{t \ge 0} be a standard Brownian motion. We denote by (\mathcal{F}^B_t)_{t \ge 0} its natural filtration, \mathcal{F}^B_t=\sigma ( B_u, u \le t), and by \mathcal{N} the null sets of \mathcal{F}^B_\infty=\sigma ( B_u,u \ge 0).
Show that the filtration (\sigma (\mathcal{F}^B_t, \mathcal{N}))_{t \ge 0} satisfies the usual conditions.

We denote by L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) the set of processes (u_t)_{t \ge 0} that are progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that

\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right)<+\infty.

Exercise. Show that the space L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) endowed with the norm

\| u \|^2=\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right)
is a Hilbert space.

We now denote by \mathcal{E} the set of processes (u_t)_{t \ge 0} that may be written as:

u_t=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} (t),

where 0\le t_0 \le ... \le t_n and where F_i is a random variable that is measurable with respect to \mathcal{F}_{t_i} and such that \mathbb{E}( F_i^2)<+\infty. The set \mathcal{E} is often called the set of simple previsible processes. We first observe that it is straightforward that

\mathcal{E} \subset L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}).

The following theorem provides the basic definition of the so-called Itō integral.

Theorem. (Itō integral) There is a unique linear map

\mathcal{I}:L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) \rightarrow L^2 (\Omega, \mathcal{F},\mathbb{P})

such that:

  1. For u=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} \in \mathcal{E}, \mathcal{I} (u)=\sum_{i=0}^{n-1} F_i (B_{t_{i+1}} -B_{t_i});
  2. For u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}), \mathbb{E} \left( \mathcal{I} (u)^2\right)=\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right).

The map \mathcal{I} is called the Itō integral and for u \in L^2 (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathbb{P}), we will use the notation

\mathcal{I} (u)=\int_0^{+\infty} u_s dB_s.

Proof: Since L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) endowed with the norm \| u \|^2=\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right)
is a Hilbert space, by the extension theorem for isometries we just have to prove that

  • For u=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} \in \mathcal{E},

    \mathbb{E} \left( \left(\sum_{i=0}^{n-1} F_i (B_{t_{i+1}} -B_{t_i})\right)^2\right)=\mathbb{E} \left(\int_0^{+\infty} u_s^2 ds \right);

  • The set \mathcal{E} is dense in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}).

Let u=\sum_{i=0}^{n-1} F_i 1_{(t_i,t_{i+1}]} \in \mathcal{E}. Due to the independence of the Brownian motion increments, we have:

\mathbb{E} \left( \left(\sum_{i=0}^{n-1} F_i (B_{t_{i+1}} -B_{t_i})\right)^2\right)
= \mathbb{E} \left( \sum_{i,j=0}^{n-1} F_i F_j (B_{t_{i+1}} -B_{t_i})(B_{t_{j+1}} -B_{t_j})\right)
=  \mathbb{E} \left( \sum_{i=0}^{n-1} F_i^2 (B_{t_{i+1}} -B_{t_i})^2\right)+2 \mathbb{E} \left(\sum_{0\le i<j \le n-1}F_i F_j (B_{t_{i+1}} -B_{t_i})(B_{t_{j+1}} -B_{t_j})\right)
= \mathbb{E} \left( \sum_{i=0}^{n-1} F_i^2 (t_{i+1} -t_i)\right)
=\mathbb{E} \left( \int_0^{+\infty} u_s^2 ds \right).

Let us now prove that \mathcal{E} is dense in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). We proceed in several steps. As a first step, let us observe that the set of progressively measurable bounded processes is dense in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}). Indeed, for u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}), the sequence (u_t \mathbf{1}_{[0,n]} (\mid u_t \mid))_{t \ge 0} converges to u.

As a second step, we remark that if u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) is a bounded process, then u is a limit of bounded processes in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) whose paths are almost surely supported in a fixed compact set (consider the sequence (u_t \mathbf{1}_{[0,n]} (t))_{t \ge 0}).

As a third step, if u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) is a bounded process whose paths are almost surely supported in a fixed compact set, then the sequence \left( \frac{1}{n} \int_{t-\frac{1}{n}} ^t u_s ds\, \mathbf{1}_{(\frac{1}{n},+\infty)} (t) \right)_{t \ge 0} is seen to converge to u. Therefore, u is a limit of left continuous and bounded processes in L^2(\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) whose paths are almost surely supported in a fixed compact set.

Finally, it suffices to prove that if u \in L^2 (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathbb{P}) is a left continuous and bounded process whose paths are almost surely supported in a fixed compact set, then u is a limit of processes that belong to \mathcal{E}. This may be proved by considering the sequence:
u^n_t=\sum_{i=0}^{+\infty} u_{\frac{i}{n}} 1_{(\frac{i}{n},\frac{i+1}{n}]} (t) \square
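Property 1 of the theorem is easy to implement literally; here is a sketch (the path simulation and the choice F_i = 1 are illustrative assumptions), where the integral telescopes to B_1 - B_0:

```python
import numpy as np

# I(u) = \sum F_i (B_{t_{i+1}} - B_{t_i}) for a simple process
# u = \sum F_i 1_{(t_i, t_{i+1}]}.
def ito_integral_simple(F, B_grid):
    """F[i] is the F_{t_i}-measurable weight on (t_i, t_{i+1}]; B_grid[i] = B_{t_i}."""
    return float(np.sum(np.asarray(F) * np.diff(B_grid)))

# Example on one simulated path over t_i = i/10, with F_i = 1, so that
# I(u) telescopes to B_1 - B_0.
rng = np.random.default_rng(6)
B_grid = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(0.1), 10))))
F = np.ones(10)
I = ito_integral_simple(F, B_grid)
print(I, B_grid[-1] - B_grid[0])
```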


Lecture 15. Quadratic variation of the Brownian motion paths

If \Delta_n [0,t]=\left\{ 0=t^n_0 \le t^n_1 \le ...\le t^n_n=t\right\}, is a subdivision of the time interval [0,t], we denote by \mid\Delta_n [0,t] \mid=\max \{ \mid t^n_{k+1}-t^n_k \mid , k=0,...,n-1 \}, the mesh of this subdivision.

Proposition. Let (B_t)_{t\ge 0} be a standard Brownian motion. Let t \ge 0. For every sequence \Delta_n [0,t] of subdivisions such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, the following convergence takes place in L^2 (and thus in probability),
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( B_{t^n_k}-B_{t^n_{k-1}}\right)^2=t.
As a consequence, almost surely, Brownian paths have an infinite variation on the time interval [0,t].

Proof.

Let us denote

V_n=\sum_{k=1}^{n} \left( B_{t^n_k} -B_{t^n_{k-1}}\right)^2.

Thanks to the stationarity and the independence of Brownian increments, we have:
\mathbb{E} \left( (V_n-t)^2\right)=\mathbb{E} \left(V_n^2\right)-2t\mathbb{E} \left( V_n\right)+t^2
=\sum_{j,k=1}^n\mathbb{E} \left( \left( B_{t^n_j} -B_{t^n_{j-1}}\right)^2\left( B_{t^n_k}-B_{t^n_{k-1}}\right)^2\right)-t^2
=\sum_{k=1}^n\mathbb{E} \left( \left( B_{t^n_k}-B_{t^n_{k-1}}\right)^4\right)+2\sum_{1\le j<k\le n}\mathbb{E} \left( \left( B_{t^n_j} -B_{t^n_{j-1}}\right)^2\left( B_{t^n_k}-B_{t^n_{k-1}}\right)^2\right)-t^2
=\sum_{k=1}^n (t^n_k-t^n_{k-1})^2 \mathbb{E} \left( B_1^4\right)+2\sum_{1\le j<k\le n} (t^n_j-t^n_{j-1})(t^n_k-t^n_{k-1})-t^2
=3\sum_{k=1}^n (t^n_k-t^n_{k-1})^2+2\sum_{1\le j<k\le n} (t^n_j-t^n_{j-1})(t^n_k-t^n_{k-1})-t^2
=2\sum_{k=1}^n (t^n_k-t^n_{k-1})^2
\le 2t\mid\Delta_n [0,t]\mid \rightarrow_{n \rightarrow +\infty} 0.

Let us now prove that, as a consequence of this convergence, the paths of the process (B_t)_{t\ge 0} almost surely have an infinite variation on the time interval [0,t]. It suffices to prove that there exists a sequence of subdivisions \Delta_n [0,t] such that almost surely

\lim_{n \rightarrow +\infty} \sum_{k=1}^n \mid B_{t^n_k} -B_{t^n_{k-1}}\mid=+\infty.

Arguing by contradiction, let us assume that the supremum over all subdivisions of the time interval [0,t] of the sums

\sum_{k=1}^n \mid B_{t^n_k} -B_{t^n_{k-1}}\mid

is bounded from above by some positive M. From the above result, since convergence in probability implies the existence of an almost surely convergent subsequence, we can find a sequence of subdivisions \Delta_n [0,t] whose mesh tends to 0 and such that almost surely,

\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( B_{t^n_k}-B_{t^n_{k-1}}\right)^2=t.

We then get
\sum_{k=1}^{n} \left( B_{t^n_k} -B_{t^n_{k-1}}\right)^2 \le M \sup_{1\le k \le n} \mid B_{t^n_k} -B_{t^n_{k-1}} \mid \rightarrow_{n \rightarrow +\infty} 0,
by the continuity of the Brownian paths, which is clearly absurd \square
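Both conclusions of the proposition can be seen numerically on a single path (a sketch, with meshes chosen for the illustration): refining the subdivision drives the quadratic sums toward t while the absolute-variation sums grow without bound.

```python
import numpy as np

# Quadratic sums and absolute-variation sums of one Brownian path on [0, 1]
# along three nested subdivisions.
rng = np.random.default_rng(7)
n, t = 1_000_000, 1.0
dB = rng.normal(0.0, np.sqrt(t / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))
quad_sums, var_sums = [], []
for m in (100, 10_000, 1_000_000):                # number of subdivision intervals
    incr = np.diff(B[:: n // m])                  # increments on the coarser grid
    quad_sums.append(float(np.sum(incr**2)))      # -> t
    var_sums.append(float(np.sum(np.abs(incr))))  # -> +infinity as the mesh -> 0
print(quad_sums)
print(var_sums)
```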

Exercise. Let (B_t)_{t\ge 0} be a Brownian motion.

  1. Show that for t \ge 0, almost surely
    \lim_{n \rightarrow +\infty}\sum_{k=1}^{2^n} \left(B_{\frac{kt}{2^n}} -B_{\frac{(k-1)t}{2^n}}\right)^2=t.
  2. Show that there exists a sequence of subdivisions \Delta_n [0,t] whose mesh tends to 0 and such that almost surely

    \lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( B_{t^n_k} -B_{t^n_{k-1}}\right)^2=+\infty.


Lecture 14. The law of iterated logarithm

We already observed that as a consequence of Kolmogorov’s continuity theorem, the Brownian paths are \gamma-Hölder continuous for every \gamma \in \left(0,\frac{1}{2}\right). The next proposition, which is known as the law of iterated logarithm shows in particular that Brownian paths are not \frac{1}{2}-Hölder continuous.

Theorem. Let (B_t)_{t\ge 0} be a Brownian motion. For s \ge 0,

\mathbb{P}\left( \lim \inf_{t \rightarrow 0} \frac{B_{t+s}-B_s}{\sqrt{2t \ln \ln \frac{1}{t}}} =-1 , \lim \sup_{t \rightarrow 0} \frac{B_{t+s}-B_s}{\sqrt{2t\ln \ln \frac{1}{t}}} =1 \right)=1.

Proof

Thanks to the symmetry and invariance by translation of the Brownian motion, it suffices to show that:

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t\ln \ln \frac{1}{t}}} =1 \right)=1.

Let us first prove that

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t \ln \ln \frac{1}{t}}} \le 1 \right)=1.

Let us denote

h(t)=\sqrt{2t \ln \ln \frac{1}{t}}.

Let \alpha, \beta >0, from Doob’s maximal inequality applied to the martingale \left(e^{\alpha B_t-\frac{\alpha^2}{2}t} \right)_{t \ge 0}, we have for t \ge 0:

\mathbb{P} \left(  \sup_{0 \le s \le t} \left( B_s-\frac{\alpha}{2}s\right)>\beta \right)=\mathbb{P} \left(  \sup_{0 \le s \le t} e^{\alpha B_s -\frac{\alpha^2}{2}s} >e^{\alpha \beta}\right) \le e^{-\alpha \beta}.
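As a side numerical check of this maximal inequality (the parameter choices \alpha = 2, \beta = 1, t = 1 and the discretization are mine, not part of the proof), Monte Carlo gives an empirical frequency for \mathbb{P}(\sup_{s \le 1}(B_s - s) > 1) that sits below the bound e^{-2}:

```python
import numpy as np

# Empirical frequency of {sup_{s<=1}(B_s - s) > 1} versus the bound e^{-2}.
rng = np.random.default_rng(8)
N, n = 10_000, 500
dt = 1.0 / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (N, n)), axis=1)
drifted = B - dt * np.arange(1, n + 1)         # B_s - (alpha/2) s with alpha = 2
freq = float(np.mean(drifted.max(axis=1) > 1.0))
print(freq, np.exp(-2.0))                      # empirical frequency vs the bound
```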

Let now \theta , \delta \in (0,1). Using the previous inequality for every n \in \mathbb{N} with

t=\theta^n, \alpha=\frac{(1+\delta)h(\theta^n)}{\theta^n}, \beta=\frac{1}{2} h (\theta^n),

yields when n \rightarrow +\infty,

\mathbb{P} \left(  \sup_{0 \le s \le \theta^n} \left( B_s -\frac{(1+\delta)h(\theta^n)}{2\theta^n}s\right)>\frac{1}{2} h (\theta^n) \right)=O\left( \frac{1}{n^{1+\delta}} \right).

Therefore from Borel-Cantelli lemma, for almost every \omega \in \Omega, we may find N(\omega)\in \mathbb{N} such that for n \ge N(\omega),

\sup_{0 \le s \le \theta^n} \left( B_s(\omega)-\frac{(1+\delta)h(\theta^n)}{2\theta^n} s\right) \le \frac{1}{2}h (\theta^n).

But,

\sup_{0 \le s \le \theta^n} \left( B_s(\omega) -\frac{(1+\delta)h(\theta^n)}{2\theta^n} s\right) \le \frac{1}{2} h (\theta^n)

implies that for \theta^{n+1} \le t\le \theta^n,

B_t (\omega) \le \sup_{0 \le s \le \theta^n} B_s(\omega) \le \frac{1}{2} (2+\delta)h (\theta^n) \le \frac{(2+\delta)h(t)}{2\sqrt{\theta}}.

We conclude:

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t\ln \ln \frac{1}{t}}} \le \frac{2+\delta}{2\sqrt{\theta}}\right)=1.

Letting now \theta \rightarrow 1 and \delta \rightarrow 0 yields

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t \ln \ln \frac{1}{t}}} \le 1\right)=1.

Let us now prove that

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t\ln \ln \frac{1}{t}}} \ge 1 \right)=1.

Let \theta \in (0,1). For n \in \mathbb{N}, we denote

A_n=\left\{ \omega, B_{\theta^n}(\omega)-B_{\theta^{n+1}}(\omega) \ge (1-\sqrt{\theta})h(\theta^n)\right\}.

Let us prove that
\sum \mathbb{P} (A_n)=+\infty.

The basic inequality
\int_{a}^{+\infty} e^{-\frac{u^2}{2}}du \ge \frac{a}{1+a^2}e^{-\frac{a^2}{2}},

implies

\mathbb{P} (A_n)=\frac{1}{\sqrt{2\pi}}\int_{a_n}^{+\infty}e^{-\frac{u^2}{2}}du  \ge \frac{1}{\sqrt{2\pi}}\frac{a_n}{1+a_n^2}e^{-\frac{a_n^2}{2}},

with

a_n=\frac{(1-\sqrt{\theta})h(\theta^n)}{\theta^{n/2} \sqrt{1-\theta}}.

When n \to +\infty, up to factors that are logarithmic in n,

\frac{a_n}{1+a_n^2} e^{-\frac{a_n^2}{2}} \asymp \frac{1}{n^{\frac{1+\theta-2\sqrt{\theta}}{1-\theta}}},

and since \frac{1+\theta-2\sqrt{\theta}}{1-\theta}=\frac{1-\sqrt{\theta}}{1+\sqrt{\theta}}<1, this yields

\sum \mathbb{P} (A_n)=+\infty.

As a consequence of the independence of the Brownian increments and of the second Borel-Cantelli lemma, the event

B_{\theta^n}-B_{\theta^{n+1}} \ge (1-\sqrt{\theta})h(\theta^n)

will occur almost surely for infinitely many n's. But, thanks to the first part of the proof, for almost every \omega, we may find N(\omega) such that for n \ge N(\omega),

B_{\theta^{n+1}}>-2h(\theta^{n+1})\ge-3\sqrt{\theta} h(\theta^n),

where the second inequality holds for n large enough, because h(\theta^{n+1})/(\sqrt{\theta}h(\theta^n)) \rightarrow 1 when n \rightarrow +\infty. Thus, almost surely, the event

B_{\theta^n} >h(\theta^n)(1-4\sqrt{\theta})

will occur for infinitely many n's. This implies

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t \ln \ln \frac{1}{t}}} \ge 1-4\sqrt{\theta} \right)=1.

We finally get

\mathbb{P}\left( \lim \sup_{t \rightarrow 0} \frac{B_{t}}{\sqrt{2t \ln \ln \frac{1}{t}}} \ge 1 \right)=1.

by letting \theta \to 0 \square

As a straightforward consequence, we may observe that the time inversion invariance property of Brownian motion implies:

Corollary. Let (B_t)_{t\ge 0} be a standard Brownian motion.

\mathbb{P}\left( \lim \inf_{t \rightarrow +\infty} \frac{B_{t}}{\sqrt{2t \ln \ln t}} =-1,\lim  \sup_{t \rightarrow +\infty} \frac{B_{t}}{\sqrt{2t \ln \ln t}} =1\right)=1.
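None of this is needed for the proof, but the corollary lends itself to a quick numerical sanity check. The sketch below (assuming numpy is available) simulates one Brownian path by summing independent Gaussian increments and tracks B_t/\sqrt{2t\ln\ln t} over a long horizon; on a finite horizon the observed extremes should be close to, and typically inside, [-1,1].

```python
import numpy as np

rng = np.random.default_rng(0)

# One Brownian path on [0, 10^6] with unit time steps, built by summing
# independent N(0, dt) increments.
T, n = 1_000_000.0, 1_000_000
dt = T / n
t = dt * np.arange(1, n + 1)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

# Track B_t / sqrt(2 t log log t) for t > e^e (so that log log t > 0).
mask = t > np.e**np.e
ratio = B[mask] / np.sqrt(2 * t[mask] * np.log(np.log(t[mask])))

# The corollary predicts lim sup = 1 and lim inf = -1 as t -> infinity.
print(ratio.max(), ratio.min())
```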

Posted in Stochastic Calculus lectures | Leave a comment

Lecture 13. Some basic properties of the Brownian motion

In this Lecture we present some basic properties of the Brownian motion paths.

Proposition. Let (B_t)_{t\ge 0} be a standard Brownian motion.

\mathbb{P} \left( \inf_{ t\ge 0} B_t = - \infty , \sup_{t \ge 0} B_t =+\infty \right)=1.

Proof

Since the process (-B_t)_{t\ge 0} is also a Brownian motion, in order to prove that

\mathbb{P} \left( \inf_{ t\ge 0} B_t = - \infty , \sup_{t \ge 0} B_t=+\infty \right)=1,

we just have to check that

\mathbb{P} \left(  \sup_{t \ge 0} B_t =+\infty \right)=1.

Let N\in \mathbb{N}. From the scaling property of Brownian motion, we have

\mathbb{P} \left( c  \sup_{t \ge 0} B_t \le N\right)=\mathbb{P}\left(   \sup_{t \ge 0} B_t \le N\right), c>0.

Therefore, by letting c \rightarrow +\infty and noting that \sup_{t \ge 0} B_t \ge B_0=0, we have

\mathbb{P} \left(   \sup_{t \ge 0} B_t \le N\right)=\mathbb{P}\left(   \sup_{t \ge 0} B_t = 0 \right).

Now, we may observe that

\mathbb{P} \left(   \sup_{t \ge 0} B_t = 0 \right) \le \mathbb{P}\left(B_1 \le 0, \sup_{t \ge 1} B_t =0\right)=\mathbb{P} \left(B_1 \le 0, \sup_{t\ge 0} (B_{t+1}-B_1) =-B_1\right).

Since the process (B_{t+1}-B_1)_{t\ge 0} is a Brownian motion independent of B_1, we have for c >0,

\mathbb{P} \left(B_1 \le 0, \sup_{t \ge 0}\left (B_{t+1}-B_1 \right)  =-B_1\right)=\mathbb{P} \left(B_1 \le 0, c \sup_{t \ge 0} (B_{t+1}-B_1) =-B_1\right).

Therefore we get

\mathbb{P} \left(B_1 \le 0, \sup_{t \ge 0} (B_{t+1}-B_1) =-B_1 \right) =\mathbb{P} \left(B_1 \le 0, \sup_{t \ge 0} (B_{t+1}-B_1) =0\right)
=\mathbb{P} \left(B_1 \le 0\right)  \mathbb{P} \left(\sup_{t \ge 0}(B_{t+1}-B_1)  =0\right)
=\frac{1}{2} \mathbb{P}\left(   \sup_{t \ge 0} B_t = 0 \right).

Thus,

\mathbb{P} \left(   \sup_{t \ge 0} B_t = 0 \right) \le \frac{1}{2} \mathbb{P} \left(   \sup_{t \ge 0} B_t = 0 \right)

and we can deduce that

\mathbb{P} \left(   \sup_{t \ge 0} B_t = 0 \right)=0,

and

\mathbb{P} \left(   \sup_{t \ge 0} B_t \le N\right)=0.

Since this holds for every N, it implies that

\mathbb{P} \left(  \sup_{t \ge 0} B_t =+\infty \right)=1.

\square

By using this proposition we deduce the following proposition, whose proof is left as an exercise for the reader.

Proposition (Recurrence property of Brownian motion)
Let (B_t)_{t\ge 0} be a Brownian motion. For every t \ge 0 and x \in \mathbb{R},

\mathbb{P} ( \exists s \ge t, B_s=x)=1.
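As a numerical illustration of recurrence (a sketch, not a proof; it assumes numpy and uses a fine discretization grid), one can count how often a simulated path crosses a given level over a long horizon; crossings keep occurring at arbitrarily large times.

```python
import numpy as np

rng = np.random.default_rng(1)

# One path on [0, 10000] with step dt = 0.005; count crossings of x = 1.
n, dt, x = 2_000_000, 0.005, 1.0
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

# A crossing happens whenever B - x changes sign between consecutive steps;
# recurrence says the true path hits every level at arbitrarily large times.
s = np.sign(B - x)
crossings = int(np.count_nonzero(s[1:] != s[:-1]))
print(crossings)
```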

Martingale theory provides powerful tools to study Brownian motion. We list in the Proposition below some martingales that are naturally associated with the Brownian motion and that will play an important role in the sequel.

Proposition. Let (B_t)_{t\ge 0} be a standard Brownian motion. The following processes are martingales (with respect to their natural filtration):

  1. (B_t)_{t\ge 0};
  2. (B_t^2-t)_{t\ge 0};
  3. \left( e^{\lambda B_t -\frac{\lambda^2}{2}t} \right)_{ t \ge 0}, \lambda \in \mathbb{C}.

Proof

  1. First, we note that for t \ge 0, \mathbb{E}(\mid B_t\mid)<+\infty because B_t is a Gaussian random variable. Now for t \ge s, \mathbb{E}(B_t-B_s\mid \mathcal{F}_s)=\mathbb{E}(B_t-B_s )=0, therefore \mathbb{E}(B_t\mid\mathcal{F}_s)=B_s.
  2. For t \ge 0, \mathbb{E}( B_t^2)=t<+\infty and for t \ge s, \mathbb{E}((B_t-B_s)^2 \mid \mathcal{F}_s)=\mathbb{E}((B_t-B_s)^2 )=t-s, therefore \mathbb{E}(B_t^2-t\mid \mathcal{F}_s)=B_s^2-s.
  3. For t \ge 0, \mathbb{E}\left( \left| e^{\lambda B_t -\frac{\lambda^2}{2}t} \right|\right) < +\infty, because B_t is a Gaussian random variable. Then we have for t \ge s, \mathbb{E}(e^{\lambda (B_t-B_s)} \mid \mathcal{F}_s)=\mathbb{E}(e^{\lambda (B_t-B_s)})=e^{\frac{\lambda^2}{2} (t-s)}, and therefore \mathbb{E} \left(e^{\lambda B_t -\frac{\lambda^2}{2}t}  \mid \mathcal{F}_s\right)=e^{\lambda B_s -\frac{\lambda^2}{2}s}.

\square
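Since B_t \sim \mathcal{N}(0,t), each of the three martingales above has constant expectation (0, 0 and 1 respectively), which is easy to check by Monte Carlo at a fixed time. A minimal sketch, assuming numpy (the values of t, \lambda and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# B_t ~ N(0, t): estimate the expectation of each candidate martingale at a
# fixed time t; by the martingale property each should match its value at 0.
t, lam, N = 2.0, 0.5, 1_000_000
Bt = rng.normal(0.0, np.sqrt(t), N)

m1 = Bt.mean()                                 # E(B_t), close to 0
m2 = (Bt**2 - t).mean()                        # E(B_t^2 - t), close to 0
m3 = np.exp(lam * Bt - lam**2 * t / 2).mean()  # E(exp(...)), close to 1
print(m1, m2, m3)
```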

The previous martingales may be used to explicitly compute the distribution of some functionals associated to Brownian motion.

Proposition. Let (B_t)_{t\ge 0} be a standard Brownian motion. We denote for a>0,

T_a=\inf \{ t \ge 0,\text{ } B_t=a \}.

For every \lambda >0, we have

\mathbb{E} \left( e^{-\lambda T_a}\right)=e^{-a \sqrt{2\lambda}}.

Therefore, the distribution of T_a is given by the density function

\mathbb{P} (T_a \in dt)=\frac{a}{\sqrt{2\pi t^{3}}}e^{-\frac{a^2}{2t}} dt,\quad t>0.

Proof

Let \alpha >0. For N \ge 1, we denote by T_N the almost surely bounded stopping time:

T_N=T_a \wedge N.

Applying Doob’s stopping theorem to the martingale \left( e^{\alpha B_t -\frac{\alpha^2}{2} t} \right)_{t \ge 0} yields:

\mathbb{E} \left( e^{\alpha B_{T_a \wedge N} -\frac{\alpha^2}{2} (T_a \wedge N)}  \right)=1.

But for N \ge 1,

e^{\alpha B_{T_a \wedge N} -\frac{\alpha^2}{2} (T_a \wedge N)}  \le e^{\alpha a}.

Therefore from the Lebesgue dominated convergence theorem, by letting N \rightarrow +\infty, we obtain

\mathbb{E} \left( e^{\alpha B_{T_a } -\frac{\alpha^2}{2} T_a }  \right)=1.

Since by continuity of the Brownian paths we have,

B_{T_a } =a,

we conclude,

\mathbb{E} \left(e^{ -\frac{\alpha^2}{2} T_a}   \right)=e^{-\alpha a}.

The formula for the density function of T_a is obtained by inverting the previous Laplace transform \square
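The density above is that of a Lévy distribution; equivalently, T_a has the same law as a^2/Z^2 with Z standard normal (this follows from the reflection principle). That gives a cheap way to cross-check the Laplace transform numerically. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)

# T_a has the Levy density a / sqrt(2*pi*t^3) * exp(-a^2 / (2t)), which is
# the law of a^2 / Z^2 for Z standard normal; sample it that way and compare
# the empirical Laplace transform with exp(-a * sqrt(2 * lambda)).
a, lam, N = 1.0, 0.7, 1_000_000
Z = rng.normal(0.0, 1.0, N)
Ta = a**2 / Z**2

mc = np.exp(-lam * Ta).mean()
exact = np.exp(-a * np.sqrt(2 * lam))
print(mc, exact)
```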


Lecture 12. The Brownian motion: Definition and basic properties

Definition: Let (\Omega,\mathcal{F},\mathbb{P}) be a probability space. A continuous real-valued process (B_t)_{t \ge 0} is called a standard Brownian motion if it is a Gaussian process with mean function

m(t)=\mathbb{E}(B_t)=0

and covariance function

R(s,t)=\mathbb{E}(B_sB_t)=\min (s,t).

It is seen that R(s,t)=\min (s,t) is a covariance function, because it is symmetric and for a_1,...,a_n \in \mathbb{R} and t_1,...,t_n \in  \mathbb{R}_{\ge 0},

\sum_{1 \le i,j \le n} a_i a_j \min (t_i,t_j) =\sum_{1 \le i,j \le n} a_i a_j \int_0^{+\infty} \mathbf{1}_{[0,t_i]} (s)  \mathbf{1}_{[0,t_j]} (s) ds

=  \int_0^{+\infty} \left( \sum_{i=1}^n a_i \mathbf{1}_{[0,t_i]} (s) \right)^2 ds \ge 0.
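The same positivity can be observed numerically: the Gram matrix K_{ij}=\min(t_i,t_j) built on any finite grid of times has nonnegative eigenvalues. A small sketch with numpy (the grid is an arbitrary choice):

```python
import numpy as np

# Gram matrix K[i, j] = min(t_i, t_j) on a grid of times: a covariance
# function must make every such matrix positive semidefinite.
t = np.linspace(0.1, 5.0, 50)
K = np.minimum.outer(t, t)

eigvals = np.linalg.eigvalsh(K)
print(eigvals.min())   # nonnegative, up to floating-point round-off
```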

The probability distribution of a standard Brownian motion is called the Wiener measure. It is a probability measure on the space of continuous functions [0, +\infty) \to \mathbb{R} (see Lecture 3).

Similarly, an n-dimensional stochastic process (B_t)_{t \ge 0} is called a standard Brownian motion if

(B_t)_{t \ge 0}=(B^1_t,\cdots,B^n_t)_{t \ge 0}

where the processes (B^i_t)_{t \ge 0} are independent standard Brownian motions.

Of course, the definition of Brownian motion is only worthwhile because such an object actually exists.

Theorem.
There exists a probability space (\Omega,\mathcal{F},\mathbb{P}) and a stochastic process on it which is a standard Brownian motion.

Proof

From the Daniell-Kolmogorov existence theorem, there exists a probability space (\Omega,\mathcal{F},\mathbb{P}) and a Gaussian process (X_t)_{t \ge 0} on it, whose mean function is 0 and covariance function is

\mathbb{E} (X_s X_t)=\min (s,t).

We have for n \ge 0 and 0 \le s \le t:

\mathbb{E} \left( (X_t - X_s)^{2n} \right)=\frac{(2n)!}{2^n n!} (t-s)^n.

Therefore, by using the Kolmogorov continuity theorem, there exists a modification (B_t)_{t \ge 0} of (X_t)_{t \ge 0} whose paths are locally \gamma-Hölder for every \gamma \in [0,\frac{n-1}{2n} ) \square

From the previous proof, we also easily deduce that the paths of a standard Brownian motion are locally \gamma-Hölder for every \gamma <\frac{1}{2}. It will later be shown that they are not \frac{1}{2}-Hölder (this is a consequence of the law of the iterated logarithm).
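This dichotomy around the exponent \frac{1}{2} is visible on simulated paths: the largest increment over a mesh h is of order \sqrt{h} up to logarithms, so dividing it by h^{\gamma} stays of the same order for \gamma<\frac{1}{2} and blows up as h \to 0 for \gamma>\frac{1}{2}. A rough numerical sketch (assuming numpy; the grid sizes and the exponents 0.4 and 0.6 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# A Brownian path on [0, 1] sampled on a dyadic grid of 2**20 points.
n = 2**20
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))

r_low, r_high = [], []            # ratios for gamma = 0.4 and gamma = 0.6
for k in (4, 8, 12, 16):
    lag = n >> k                  # lag corresponding to mesh h = 2**-k
    h = lag / n
    m = np.abs(B[lag:] - B[:-lag]).max()   # largest increment over mesh h
    r_low.append(m / h**0.4)      # stays of the same order: gamma < 1/2
    r_high.append(m / h**0.6)     # grows as h -> 0: gamma > 1/2
print(r_low, r_high)
```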

The following exercises give some first basic properties of Brownian motion. In these exercises, (B_t)_{t \ge 0} is a standard one-dimensional Brownian motion.

Exercise.
Show the following properties:

  • B_0=0 a.s.;
  • For any h \geq 0, the process (B_{t+h} - B_h)_{t \ge 0} is a standard Brownian motion;
  • For any t>s\geq 0, the random variable B_t -B_s is independent of the \sigma-algebra \sigma(B_u, u \le s ).

Exercise.(Symmetry property of the Brownian motion)

  • Show that the process (-B_t)_{t \ge 0} is a standard Brownian motion.
  • More generally, show that if

    (B_t)_{t \ge 0}=(B^1_t,\cdots,B^d_t)_{t \ge 0}

    is a d-dimensional Brownian motion and if M is an orthogonal d \times d matrix, then (MB_t)_{t \ge 0} is a standard Brownian motion.

Exercise.(Scaling property of the Brownian motion)

Show that for every c >0, the process (B_{ct})_{t \geq 0} has the same distribution as the process (\sqrt{c} B_t)_{t \geq 0}.

Exercise.(Time inversion property of Brownian motion)

  • Show that almost surely, \lim_{t \to +\infty} \frac{B_t}{t} =0.
  • Deduce that the process (t B_{\frac{1}{t}})_{t \geq 0} has the same law as the process ( B_t)_{t \geq 0}.

Exercise.(Non-canonical representation of Brownian motion)

  • Show that for t \ge 0, the Riemann integral \int_0^t \frac{B_s}{s} ds almost surely exists.
  • Show that the process \left( B_t-\int_0^t \frac{B_s}{s}ds\right)_{t \ge 0} is a standard Brownian motion.


Lecture 11. Doob’s martingale maximal inequalities

In this post, we prove some fundamental martingale inequalities that, once again, are due to Joe Doob.

Theorem (Doob’s maximal inequalities) Let (\mathcal{F}_t)_{t \ge 0} be a filtration on a probability space (\Omega, \mathcal{F},\mathbb{P}) and let (M_t)_{t \ge 0} be a continuous martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0}.

  • Let p \ge 1 and T>0. If \mathbb{E} (\mid M_T\mid^p) <+\infty , then we have

    \mathbb{P} \left( \sup_{0 \le t \le T} \mid M_t \mid \ge \lambda\right) \le \frac{\mathbb{E} \left( \mid M_T \mid^p\right)}{\lambda^p}.

  • Let p > 1 and T>0. If \mathbb{E} (\mid M_T\mid^p)<+\infty, then we have

    \mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) \le \left( \frac{p}{p-1} \right)^p \mathbb{E} (\mid M_T \mid^p).
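For Brownian motion on [0,T] and p=2, the second inequality reads \mathbb{E}\left(\sup_{0\le t\le T} B_t^2\right) \le 4\,\mathbb{E}(B_T^2)=4T, which can be checked by Monte Carlo on discretized paths. A sketch assuming numpy (path and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Doob's L^2 inequality for Brownian motion on [0, T]:
# E(sup_{t <= T} |B_t|^2) <= (p/(p-1))^p E(|B_T|^p) = 4T for p = 2.
T, n, N = 1.0, 500, 5_000
dB = rng.normal(0.0, np.sqrt(T / n), (N, n))
B = np.cumsum(dB, axis=1)

lhs = (np.abs(B).max(axis=1) ** 2).mean()   # Monte Carlo E(sup |B_t|^2)
rhs = 4 * (B[:, -1] ** 2).mean()            # Monte Carlo 4 E(B_T^2), about 4T
print(lhs, rhs)
```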

Proof:

Let p \ge 1 and T>0. If \mathbb{E} (\mid M_T \mid^p) < +\infty then, from Jensen’s inequality, the process (\mid M_t \mid^p)_{0 \le t\le T} is a submartingale. Let \lambda > 0 and

\tau =\inf \{  s \ge 0 \text{ such that } \mid M_s \mid \ge \lambda \}\wedge T,

with the usual convention that \inf \emptyset =+\infty. It is seen that \tau is an almost surely bounded stopping time. Therefore, from Doob’s stopping theorem,

\mathbb{E} (\mid M_{\tau} \mid^p) \le \mathbb{E} (\mid M_{T} \mid^p).

But from the very definition of \tau,

\mid M_{\tau} \mid^p \ge 1_{\sup_{0 \le t \le T} \mid M_t \mid \ge \lambda} \lambda^p+1_{\sup_{0 \le t \le T} \mid M_t \mid < \lambda} \mid M_{T} \mid^p.
which implies,
\mathbb{P} \left( \sup_{0 \le t \le T} \mid M_t \mid \ge \lambda \right) \le \frac{\mathbb{E} \left( \mid M_T \mid^p 1_{\sup_{0 \le t \le T} \mid M_t \mid \ge \lambda} \right)}{\lambda^p}\le \frac{\mathbb{E} \left( \mid M_T \mid^p  \right)}{\lambda^p}.

This concludes the proof of the first part of our statement.

Let now p > 1 and T > 0.

Let us first assume that:

\mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) < +\infty.

The previous proof shows that for \lambda >0,

\mathbb{P} \left( \sup_{0 \le t \le T} \mid M_t \mid \ge \lambda \right) \le \frac{\mathbb{E} \left( \mid M_T \mid 1_{\sup_{0 \le t \le T} \mid M_t \mid \ge \lambda} \right)}{\lambda}.

We deduce,

\int_0^{+\infty} \lambda^{p-1} \mathbb{P} \left( \sup_{0 \le t \le T} \mid M_t \mid \ge \lambda \right) d \lambda \le \int_0^{+\infty} \lambda^{p-2} \mathbb{E} \left( \mid M_T \mid 1_{\sup_{0 \le t \le T} \mid M_t \mid \ge \lambda} \right)d\lambda.

From Fubini’s theorem,

\int_0^{+\infty} \lambda^{p-1} \mathbb{P} \left( \sup_{0 \le t \le T} \mid M_t \mid \ge \lambda \right) d \lambda =\int_{\Omega} \left( \int_0^{\sup_{0 \le t \le T} \mid M_t \mid (\omega)} \lambda^{p-1} d\lambda \right) d \mathbb{P} (\omega)

=\frac{1}{p} \mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right).

Similarly, we obtain

\int_0^{+\infty} \lambda^{p-2} \mathbb{E} \left( \mid M_T \mid 1_{\sup_{0 \le t \le T} \mid M_t \mid \ge \lambda} \right)d\lambda=\frac{1}{p-1} \mathbb{E} \left( \left(\sup_{0 \le t \le T} \mid M_t \mid\right)^{p-1} \mid M_T \mid \right).

Hence,

\mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) \le  \frac{p}{p-1} \mathbb{E} \left( \left(\sup_{0 \le t \le T} \mid M_t \mid\right)^{p-1} \mid M_T \mid \right).

By using now Hölder’s inequality we obtain,

\mathbb{E} \left( \left(\sup_{0 \le t \le T} \mid M_t \mid\right)^{p-1} \mid M_T \mid \right) \le \mathbb{E} \left( \mid M_T \mid^p\right)^{\frac{1}{p}} \mathbb{E} \left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p\right)^{\frac{p-1}{p}},

which implies

\mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) \le  \frac{p}{p-1} \mathbb{E} \left( \mid M_T \mid^p\right)^{\frac{1}{p}} \mathbb{E} \left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p\right)^{\frac{p-1}{p}}.

As a conclusion if \mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) < +\infty, we have:

\mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) \le \left( \frac{p}{p-1} \right)^p \mathbb{E} (\mid M_T \mid^p).

Now, if \mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_t \mid \right)^p \right) = +\infty, we consider, for N \in \mathbb{N}, the stopping time \tau_N=\inf \{t \ge 0, \mid M_t \mid \ge N \} \wedge T. Applying the above result to the martingale (M_{t \wedge \tau_N})_{t \ge 0}, we obtain

\mathbb{E}\left( \left( \sup_{0 \le t \le T} \mid M_{t \wedge \tau_N} \mid \right)^p \right) \le \left( \frac{p}{p-1} \right)^p \mathbb{E} (\mid M_T \mid^p),

from which we may conclude by using the monotone convergence theorem \square


Lecture 10. Doob’s regularization theorem

When dealing with stochastic processes, it is often important to work with versions of these processes whose paths are as regular as possible. In that direction, Kolmogorov’s continuity theorem (see Lecture 6) provides a sufficient condition that allows one to work with continuous versions of stochastic processes. For martingales, the possibility of working with regular versions is related to the regularity properties of the filtration with respect to which the martingale property is satisfied.

Definition: Let (\mathcal{F}_t)_{t \ge 0} be a filtration on a probability space (\Omega, \mathcal{F},\mathbb{P}). If the following assumptions are fulfilled:

  • If A \in \mathcal{F} satisfies \mathbb{P} (A) =0, then every subset of A is in \mathcal{F}_0;
  • The filtration (\mathcal{F}_t)_{t \ge 0} is right continuous, that is for every t \ge 0

    \mathcal{F}_t=\cap_{\varepsilon >0} \mathcal{F}_{t+\varepsilon},

the filtered probability space

(\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P})

is said to satisfy the usual conditions.

The above set of assumptions is called the usual conditions because, as we will see in the next lectures, these are the conditions under which it is convenient to work in order to properly define the stochastic integral.

Let (\mathcal{F}_t)_{t \ge 0} be a filtration on a probability space (\Omega, \mathcal{F},\mathbb{P}) and let (M_t)_{t \ge 0} be a (sub, super) martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} whose paths are right continuous and left limited. The filtered probability space

(\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P})

may canonically be enlarged into a filtered probability space

(\Omega, (\mathcal{G}_t)_{t \ge 0},\mathcal{G},\mathbb{P})

that satisfies the usual conditions. Indeed, \mathcal{G} can be taken to be the \mathbb{P}-completion of \mathcal{F} and

\mathcal{G}_t=\cap_{u >t} \sigma ( \mathcal{F}_u, \mathcal{N} )

where \mathcal{N} is the set of events whose probability is zero. Moreover (M_t)_{t \ge 0} is a (sub, super) martingale with respect to the filtration (\mathcal{G}_t)_{t \ge 0} (this is not straightforward and is left to the reader as an exercise). The filtered probability space

(\Omega, (\mathcal{G}_t)_{t \ge 0},\mathcal{G},\mathbb{P})

is called the usual completion of

(\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}).

Exercise: Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) be a filtered probability space that satisfies the usual conditions and let (X_t)_{t \ge 0} be a stochastic process adapted to the filtration (\mathcal{F}_t)_{t \ge 0} whose paths are left limited and right continuous. Let K be a compact subset of \mathbb{R}. Show that the random time

T=\inf \{ t \ge 0, X_t \in K \}

is a stopping time of the filtration (\mathcal{F}_t)_{t \ge 0}.

Theorem (Doob’s regularization theorem): Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) be a filtered probability space that satisfies the usual conditions and let (M_t)_{t \ge 0} be a supermartingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0}. Let us assume that the function t \rightarrow \mathbb{E} (M_t) is right continuous. There exists a modification (\tilde{M}_t)_{t \ge 0} of (M_t)_{t \ge 0} such that:

  • (\tilde{M}_t)_{t \ge 0} is adapted to the filtration (\mathcal{F}_t)_{t \ge 0};
  • The paths of (\tilde{M}_t)_{t \ge 0} are locally bounded, right continuous and left limited;
  • (\tilde{M}_t)_{t \ge 0} is a supermartingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0}.

Proof:

As for the proof of Doob’s convergence theorem (see Lecture 9), the idea is to study the oscillations of (M_t)_{t\ge 0}. In what follows, we will use the notations introduced in the proof of that theorem, which we recall below.

For N \in \mathbb{N}, N>0 and n\in \mathbb{N}, we denote

\mathcal{D}_{n,N}=\left\{ \frac{kN}{2^n}, 0 \le k \le 2^n \right\},

\mathcal{D}_{N}=\cup_n \mathcal{D}_{n,N}

and

\mathcal{D}=\cup_{n,N} \mathcal{D}_{n,N}.

For a<b, let \mathcal{N} (a,b,n,N) be the greatest integer k for which we can find elements of \mathcal{D}_{n,N},

0 \le q_1 <r_1 <q_2 <r_2 <...<q_k<r_k\le N

such that

M_{q_i} < a, M_{r_i} > b.

Let now \Omega^* be the set of \omega \in \Omega such that \forall t\ge 0, \lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s (\omega) exists and is finite.

It is easily seen that:

\Omega^* =\cap_{a,b \in \mathbb{Q}} \cap_{N \in \mathbb{N}^*} \left\{ \omega \in \Omega, \sup_{t \in \mathcal{D}_N} \mid M_t (\omega) \mid <+\infty \text{ and } \sup_{n\in \mathbb{N}} \mathcal{N}(a,b,n,N) <+\infty \right\}.

Therefore we have \Omega^* \in \mathcal{F}. We may prove, as in the proof of Doob's convergence theorem, that \mathbb{P} (\Omega^*)=1.

For t \ge 0, we define (\tilde{M}_t)_{ t \ge 0} in the following way:

  • If \omega \in \Omega^*, \tilde{M}_t (\omega)=\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s(\omega)
  • If \omega \notin \Omega^*, \tilde{M}_t (\omega)=0.

It is clear that the paths of (\tilde{M}_t)_{t \ge 0} are locally bounded, right continuous and left limited. Let us now show that this process is the expected modification of (M_t)_{t \ge 0}.

We first observe that for t \ge 0, the random variable \lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s is measurable with respect to \cap_{s >t} \mathcal{F}_s=\mathcal{F}_t. Furthermore, \Omega \backslash \Omega^* has zero probability and is therefore in \mathcal{F}_0, according to the usual conditions. This shows that the process (\tilde{M}_t)_{t \ge 0} is adapted to the filtration (\mathcal{F}_t)_{t \ge 0}.

We now show that (\tilde{M}_t)_{t \ge 0} is a modification of (M_t)_{t \ge 0}.

Let t \ge 0. We have almost surely

\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s=\tilde{M}_t.

Let us prove that this convergence also holds in L^1. To prove this, it is enough to check that for every decreasing family (s_n)_{n \in \mathbb{N}} such that s_n \in \mathcal{D} and that converges toward t, the family (M_{s_n})_{n \in \mathbb{N}} is uniformly integrable.

Let \varepsilon >0. Since u \rightarrow \mathbb{E} (M_u) is assumed to be right continuous, we can find s > t such that for every s > u > t,

0 \le \mathbb{E} (M_u) - \mathbb{E} (M_s) \le \frac{\varepsilon}{2}.

For s>u>t and \lambda >0, we have:
\mathbb{E} ( \mid M_u \mid 1_{ \mid M_u \mid > \lambda})=-\mathbb{E} (  M_u  1_{  M_u  < -\lambda})+\mathbb{E}(M_u)-\mathbb{E} (  M_u  1_{  M_u  \le \lambda} )
\le \mathbb{E} (\mid  M_s \mid  1_{  \mid M_u \mid >  \lambda})+\frac{\varepsilon}{2}.

Now, since M_s \in L^1, we can find \delta >0 such that for every F \in \mathcal{F} that satisfies \mathbb{P} (F) < \delta, we have \mathbb{E} ( \mid M_s \mid 1_F)<\frac{\varepsilon}{2}. But for t < u < s,

\mathbb{P}(\mid M_u \mid > \lambda) \le \frac{\mathbb{E} ( \mid M_u \mid )}{\lambda} = \frac{\mathbb{E} (  M_u )+2\mathbb{E}(\max (-M_u,0))}{\lambda}.

From Jensen’s inequality, it is then seen that the process (\max (-M_u,0))_{t<u<s} is a submartingale, and therefore

\mathbb{E}(\max (-M_u,0))\le\mathbb{E}(\max (-M_{s},0)).

We deduce that for t < u < s,

\mathbb{P}(\mid M_u \mid > \lambda) \le \frac{ \mathbb{E} ( M_t )+2\mathbb{E}(\max (-M_{s},0))}{\lambda}.
It is thus possible to find A > 0 such that for every t < u < s,

\mathbb{P}(\mid M_u \mid > A) < \delta.

For t < u < s, we then have

\mathbb{E} ( \mid M_u \mid 1_{ \mid M_u \mid > A}) < \varepsilon.

This implies that for every decreasing family (s_n)_{n \in \mathbb{N}} such that s_n \in \mathcal{D} and that converges toward t, the family (M_{s_n})_{n \in \mathbb{N}} is uniformly integrable. The convergence

\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s=\tilde{M}_t.

thus also holds in L^1. Now, since (M_t)_{t \ge 0} is a supermartingale, for s \ge t we have

\mathbb{E} \left(  M_s \mid \mathcal{F}_t  \right) \le M_t.

This implies,

\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} \mathbb{E} \left( M_s \mid \mathcal{F}_t  \right) \le M_t,

and

\mathbb{E} \left( \tilde{M}_t \mid \mathcal{F}_t  \right) \le M_t.

Hence, since \tilde{M}_t is \mathcal{F}_t-measurable,

\tilde{M}_t \le M_t.

Due to the fact that the function u \rightarrow \mathbb{E} (M_u) is right continuous, we have

\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} \mathbb{E} (M_s)=\mathbb{E} (M_t).

But from the L^1 convergence, we also have

\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} \mathbb{E}(M_s)=\mathbb{E} \left(\lim_{s \rightarrow t, s>t, s \in \mathcal{D}} M_s\right)=\mathbb{E} (\tilde{M}_t).

This gives

\mathbb{E} (\tilde{M}_t)=\mathbb{E} (M_t).

The random variable M_t -\tilde{M}_t is therefore non-negative and has zero expectation. This implies that almost surely M_t=\tilde{M}_t. The stochastic process (\tilde{M}_t)_{t \ge 0} is therefore a modification of (M_t)_{t \ge 0}. Finally, since a modification of a supermartingale is still a supermartingale, this concludes the proof of the theorem \square

The following exercise shows that martingales naturally appear when studying equivalent measures on a filtered probability space. We assume here the reader familiar with the Radon-Nikodym theorem.

Exercise. Let (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) be a filtered probability space that satisfies the usual conditions. We denote

\mathcal{F}_{\infty}=\sigma \left( \mathcal{F}_t , t \ge 0 \right)

and for t\ge 0, \mathbb{P}_{/\mathcal{F}_t} is the restriction of \mathbb{P} to \mathcal{F}_t. Let \mathbb{Q} be a probability measure on \mathcal{F}_{\infty} such that for every t \ge 0,

\mathbb{Q}_{/\mathcal{F}_t} \ll \mathbb{P}_{/\mathcal{F}_t}.

  • Show that there exists a right continuous and left limited martingale (D_t)_{t \ge 0} such that for every t \ge 0,

    D_t=\frac{d\mathbb{Q}_{/\mathcal{F}_t}}{d\mathbb{P}_{/\mathcal{F}_t}},\text{  }\mathbb{P}-a.s.

  • Show that the following properties are equivalent:
    1) \mathbb{Q}_{/\mathcal{F}_\infty} \ll \mathbb{P}_{/\mathcal{F}_\infty};
    2) The martingale (D_t)_{t \ge 0} is uniformly integrable;
    3) (D_t)_{t \ge 0} converges in L^1;
    4) (D_t)_{t \ge 0} almost surely converges to an integrable and \mathcal{F}_\infty measurable random variable D such that D_t =\mathbb{E}(D\mid \mathcal{F}_t), \quad t \ge 0.


Lecture 9. Doob’s convergence theorem

Let us first recall some basic facts about the notion of uniform integrability, which is a crucial tool in the study of continuous time martingales.

Definition. Let (X_i)_{i \in \mathcal{I}} be a family of random variables. We say that the family (X_i)_{i \in \mathcal{I}} is uniformly integrable if for every \varepsilon >0, there exists K \ge 0 such that

\forall i \in \mathcal{I},\quad \mathbb{E}(\mid X_i \mid 1_{\mid X_i \mid >K})<\varepsilon.

We have the following properties:

  • A finite family of integrable random variables is uniformly integrable;
  • If the family (X_i)_{i \in \mathcal{I}} is uniformly integrable then it is bounded in L^1, that is \sup_\mathcal{I} \mathbb{E}(\mid X_i \mid ) <+\infty;
  • If the family (X_i)_{i \in \mathcal{I}} is bounded in L^p with p>1, that is \sup_\mathcal{I} \mathbb{E}(\mid X_i \mid^p ) <+\infty, then it is uniformly integrable.

The notion of uniform integrability is often used to prove a convergence in L^1 thanks to the following result:

Proposition. Let (X_n)_{n\in \mathbb{N}} be a sequence of integrable random variables. Let X be an integrable random variable. The sequence (X_n)_{n\in \mathbb{N}} converges toward X in L^1, that is \lim_{n \to +\infty} \mathbb{E}(\mid X_n-X \mid )=0, if and only if:

  • In probability, X_n \rightarrow_{n \rightarrow +\infty} X, that is for every \varepsilon >0, \lim_{n \to +\infty} \mathbb{P}( \mid X_n -X \mid \ge \varepsilon) =0;
  • The family (X_n)_{n\in \mathbb{N}} is uniformly integrable.
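To see why uniform integrability cannot be dropped, consider the classical counterexample X_n = n 1_{[0,1/n]} on [0,1] with the Lebesgue measure: X_n \to 0 in probability, yet \mathbb{E}(\mid X_n \mid)=1 for every n, so there is no convergence in L^1. A quick numerical sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(5)

# On ([0,1], Lebesgue), take X_n = n * 1_{U < 1/n} with U uniform on [0,1].
# X_n -> 0 in probability since P(X_n != 0) = 1/n, yet E|X_n| = 1 for all n:
# the family (X_n) is not uniformly integrable, and X_n does not go to 0 in L^1.
U = rng.uniform(0.0, 1.0, 1_000_000)

for n in (10, 100, 1000):
    Xn = n * (U < 1.0 / n)
    print(n, (Xn != 0).mean(), Xn.mean())
```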

 

It is clear that if X is an integrable random variable defined on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0}, \mathcal{F},\mathbb{P}) then the process \left(\mathbb{E}(X\mid \mathcal{F}_t) \right)_{t \ge 0} is a martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0}. The following theorem characterizes the martingales that are of this form.

Theorem (Doob’s convergence theorem):
Let (\mathcal{F}_t)_{t \ge 0} be a filtration defined on a probability space (\Omega,\mathcal{F},\mathbb{P}) and let (M_t)_{t \ge 0} be a martingale with respect to the filtration (\mathcal{F}_t)_{t \ge 0} whose paths are left limited and right continuous. The following properties are equivalent:

  • When t \to +\infty, (M_t)_{t \ge 0} converges in L^1;
  • When t \to +\infty, (M_t)_{t \ge 0} converges almost surely toward an integrable and \mathcal{F}-measurable random variable X that satisfies

    M_t =\mathbb{E}(X\mid \mathcal{F}_t), t \ge 0;

  • The family (M_t)_{t \ge 0} is uniformly integrable.

Proof:

As a first step, we show that if the martingale (M_t)_{t \ge 0} is bounded in L^1, that is

\sup_{t \ge 0} \mathbb{E} ( \mid M_t \mid)<+\infty

then (M_t)_{t \ge 0} almost surely converges toward an integrable and \mathcal{F}-measurable random variable X.

Let us first observe that

\left\{ \omega \in \Omega, M_t (\omega) \text{ converges} \right\}=\left\{ \omega \in \Omega, \lim \sup_{t \rightarrow +\infty} M_t(\omega)= \lim \inf_{t \rightarrow +\infty} M_t(\omega)\right\}

Therefore, in order to show that (M_t)_{t \ge 0} almost surely converges when t \to +\infty, we may prove that

\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t \rightarrow +\infty} M_t(\omega) >\lim \inf_{t \rightarrow +\infty} M_t (\omega)\right\} \right)=0.

Let us assume that

\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t \rightarrow +\infty} M_t(\omega) >\lim \inf_{t \rightarrow +\infty} M_t (\omega)\right\} \right)>0.

In that case we may find a<b such that:

\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t \rightarrow +\infty} M_t(\omega) >b>a>\lim \inf_{t \rightarrow +\infty} M_t (\omega)\right\} \right)>0.

The idea now is to study the oscillations of (M_t)_{t \ge 0} between a and b. For N \in \mathbb{N}, N>0 and n\in \mathbb{N}, we denote

\mathcal{D}_{n,N}=\left\{ \frac{kN}{2^n}, 0 \le k \le 2^n \right\},

and

\mathcal{D}=\cup_{n,N} \mathcal{D}_{n,N}.

Let \mathcal{N} (a,b,n,N) be the greatest integer k for which we may find elements of \mathcal{D}_{n,N},

0 \le q_1 <r_1 <q_2 <r_2 <...<q_k<r_k\le N

that satisfy

M_{q_i} <a, M_{r_i}>b.

Let now

Y_{n,N}=\sum_{k=1}^{2^n} C_{k} (M_{\frac{kN}{2^n}}-M_{\frac{(k-1)N}{2^n}}),

where C_k \in \{0,1\} is recursively defined by:

C_{ 1}=1_{ M_0 < a},

C_{k}=1_{C_{k-1}=1} 1_{ M_{\frac{(k-1)N}{2^n}} \le b}+1_{C_{k-1}=0} 1_{ M_{\frac{(k-1)N}{2^n}} < a}.

Since (M_t)_{t \ge 0} is a martingale, it is easily checked that

\mathbb{E} (Y_{n,N})=0.

Furthermore, thanks to the very definition of \mathcal{N} (a,b,n,N), we have

Y_{n,N} \ge (b-a)\mathcal{N} (a,b,n,N) -\max( a-M_N,0).

Therefore,

(b-a) \mathbb{E} \left( \mathcal{N} (a,b,n,N) \right) \le \mathbb{E}\left(\max( a-M_N,0)\right)\le\mid a \mid+ \mathbb{E} ( \mid M_N\mid ) \le \mid a \mid+ \sup_{t>0} \mathbb{E} (\mid M_t \mid ),

and thus

(b-a) \mathbb{E} \left( \sup_{n,N} \mathcal{N} (a,b,n,N) \right) \le \mid a \mid+ \sup_{t>0} \mathbb{E} ( \mid M_t \mid ).

This implies that almost surely \sup_{n,N} \mathcal{N}(a,b,n,N)<+\infty, from which we deduce

\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t \rightarrow +\infty,t\in \mathcal{D}} M_t(\omega) >b>a>\lim \inf_{t \rightarrow +\infty,t\in \mathcal{D}} M_t (\omega)\right\}\right)=0.

Since the paths of (M_t)_{t \ge 0} are right continuous, we have

\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t\rightarrow +\infty,t\in \mathcal{D}} M_t(\omega) >b>a>\lim \inf_{t\rightarrow +\infty,t\in \mathcal{D}} M_t (\omega)\right\}\right)
=\mathbb{P} \left( \left\{ \omega \in \Omega, \lim \sup_{t\rightarrow +\infty} M_t(\omega) >b>a>\lim \inf_{t \rightarrow +\infty} M_t (\omega)\right\} \right) .

This is absurd.

Thus, if (M_t)_{t \ge 0} is bounded in L^1, it almost surely converges toward an \mathcal{F}-measurable random variable X. Fatou’s lemma provides the integrability of X.
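The betting process C_k and the pathwise inequality Y_{n,N} \ge (b-a)\mathcal{N}(a,b,n,N)-\max(a-M_N,0) used above can be checked directly on a discrete path. A sketch assuming numpy (a Gaussian random walk stands in for the sampled martingale; the levels a, b are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# The "betting" process of the proof: C_k = 1 exactly while we wait to travel
# from below a up across b; the cumulated gain Y = sum_k C_k (M_k - M_{k-1})
# then dominates (b - a) * (number of upcrossings) - max(a - M_N, 0) pathwise.
a, b = -0.5, 0.5
M = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, 0.1, 5000))))

C = 1 if M[0] < a else 0          # C_1 = 1_{M_0 < a}
Y, up = 0.0, 0
for k in range(1, len(M)):
    Y += C * (M[k] - M[k - 1])    # gain of the k-th bet
    if C == 1 and M[k] > b:       # an upcrossing of [a, b] is completed
        up += 1
    # C_{k+1} = 1_{C_k = 1} 1_{M_k <= b} + 1_{C_k = 0} 1_{M_k < a}
    C = 1 if (C == 1 and M[k] <= b) or (C == 0 and M[k] < a) else 0

print(Y, up)
```

Each completed upcrossing gains more than b-a (the bet opens below a and closes above b), and an open bet at the end loses at most \max(a-M_N,0), which is exactly the inequality above.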

With this preliminary result in hands, we can now turn to the proof of the theorem.

Let us assume that (M_t)_{t \ge 0} converges in L^1. In that case, it is of course bounded in L^1, and thus almost surely converges toward an \mathcal{F}-measurable and integrable random variable X. Let t \ge 0 and A \in \mathcal{F}_t. For s \ge t, we have

\mathbb{E} ( M_s 1_A)=\mathbb{E} ( M_t 1_A)

By letting s \rightarrow +\infty, the convergence in L^1 yields

\mathbb{E} (X 1_A)=\mathbb{E} ( M_t 1_A).

Therefore, as expected, we obtain

\mathbb{E} (X \mid \mathcal{F}_t)= M_t.

Let us now assume that (M_t)_{t \ge 0} almost surely converges toward an \mathcal{F}-measurable and integrable random variable X that satisfies

M_t =\mathbb{E}(X\mid \mathcal{F}_t), t \ge 0.

We almost surely have \sup_{t \ge 0} \mid M_t \mid <+\infty and thus for A \ge 0,

\mathbb{E} ( \mid M_t \mid 1_{\mid M_t \mid \ge A} )=\mathbb{E} ( \mid \mathbb{E}(X\mid \mathcal{F}_t) \mid 1_{\mid M_t \mid \ge A})
\le \mathbb{E} ( \mid X \mid 1_{\mid M_t \mid \ge A} )
\le \mathbb{E} ( \mid X \mid 1_{\sup_{t \ge 0} \mid M_t \mid \ge A}).

Since \sup_{t \ge 0} \mid M_t \mid <+\infty almost surely, the right-hand side tends to 0 when A \rightarrow +\infty by the dominated convergence theorem, and the bound is uniform in t. This implies the uniform integrability of the family (M_t)_{t \ge 0}.

Finally, if the family (M_t)_{t \ge 0} is uniformly integrable, then it is bounded in L^1 and therefore almost surely converges. The almost sure convergence, together with the uniform integrability, provides the convergence in L^1 \square
