Lecture 10. Estimating iterated integrals (Part 1)

In the previous lecture we introduced the signature of a bounded variation path x as the formal series
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]}  dx^{\otimes k}.
If now x \in C^{p-var}([0,T],\mathbb{R}^d), p \ge 1, the iterated integrals \int_{\Delta^k [s,t]}  dx^{\otimes k} can only be defined as Young integrals when p < 2. In this lecture, we derive estimates that allow us to define the signature of some (not all) paths with finite p-variation when p \ge 2. These estimates are due to Terry Lyons in his seminal paper, and this is where rough paths theory really begins.

For P \in \mathbb{R} [[X_1,...,X_d]] that can be written as
P=P_0+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}a_{i_1,...,i_k} X_{i_1}...X_{i_k},
we define
\| P \| =|P_0|+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}|a_{i_1,...,i_k}|  \in [0,\infty].
It is quite easy to check that for P,Q \in \mathbb{R} [[X_1,...,X_d]]
\| PQ \| \le \| P \| \| Q\|.
Let x \in C^{1-var}([0,T],\mathbb{R}^d). For p \ge 1, we denote
\left\|  \int dx^{\otimes k}\right\|_{p-var, [s,t]}=\left( \sup_{ \Pi \in \mathcal{D}[s,t]} \sum_{i=0}^{n-1} \left\|  \int_{\Delta^k [t_i,t_{i+1}]}  dx^{\otimes k} \right\|^p \right)^{1/p},
where \mathcal{D}[s,t] is the set of subdivisions of the interval [s,t]. Observe that for k \ge 2, in general
\int_{\Delta^k [s,t]}  dx^{\otimes k}+ \int_{\Delta^k [t,u]}  dx^{\otimes k} \neq \int_{\Delta^k [s,u]}  dx^{\otimes k}.
Actually, from Chen's relations we have
\int_{\Delta^n [s,u]}  dx^{\otimes n}= \int_{\Delta^n [s,t]}  dx^{\otimes n}+ \int_{\Delta^n [t,u]}  dx^{\otimes n} +\sum_{k=1}^{n-1} \int_{\Delta^k [s,t]}  dx^{\otimes k }\int_{\Delta^{n-k} [t,u]}  dx^{\otimes (n-k) }.
It follows that \left\|  \int dx^{\otimes k}\right\|_{p-var, [s,t]} need not be the p-variation of t \to \int_{\Delta^k [s,t]} dx^{\otimes k}.
The first major result of rough paths theory is the following estimate:

Proposition: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in C^{1-var}([0,T],\mathbb{R}^d) and k  \ge 0,
\left\|  \int_{\Delta^k [s,t]}  dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\|  \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]}  \right)^k, \quad 0 \le s \le t \le T.

By \left( \frac{k}{p}\right)!, we of course mean \Gamma  \left( \frac{k}{p}+1\right). Some remarks are in order before we prove the result. If p=1, then the estimate becomes
\left\|  \int_{\Delta^k [s,t]}  dx^{\otimes k} \right\| \le \frac{C^k}{k!} \| x \|_{1-var, [s,t]}^k,
which is immediately checked because
\left\|  \int_{\Delta^k [s,t]}  dx^{\otimes k} \right\|
\le \sum_{I \in \{1,...,d\}^k} \left\|   \int_{\Delta^{k}[s,t]}dx^{I} \right\|
\le \sum_{I \in \{1,...,d\}^k}  \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t} \| dx^{i_1}(t_1) \| \cdots \| dx^{i_k}(t_k)\|
\le \frac{1}{k!} \left( \sum_{j=1}^ d \| x^j \|_{1-var, [s,t]} \right)^k.
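For a one-dimensional path (d=1) the k-th iterated integral collapses to (x(t)-x(s))^k/k!, which makes the factorial decay visible numerically. Here is a quick sketch (the discretization scheme, the sample path and the grid size are my own choices, not part of the lecture):

```python
import math

def iterated_integrals(xs, depth):
    """Left-point Riemann approximation of I_k(t) = int_s^t I_{k-1}(u) dx(u)
    for a 1D path sampled at the values xs, with I_0 = 1."""
    n = len(xs)
    levels = [[1.0] * n]                      # I_0 == 1
    for _ in range(depth):
        prev, cur = levels[-1], [0.0] * n
        for j in range(n - 1):
            cur[j + 1] = cur[j] + prev[j] * (xs[j + 1] - xs[j])
        levels.append(cur)
    return [lev[-1] for lev in levels]        # values at the right endpoint

# Sample a smooth path on [0, 1]
N = 20000
xs = [math.sin(3.0 * i / N) for i in range(N + 1)]
vals = iterated_integrals(xs, 5)
inc = xs[-1] - xs[0]
# For d = 1 the k-th iterated integral is exactly (x(t)-x(s))^k / k!
for k in range(1, 6):
    assert abs(vals[k] - inc ** k / math.factorial(k)) < 1e-3
```

The factorial in the denominator is what makes the signature series converge for bounded variation paths.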

We can also observe that for k \le p, the estimate is easy to obtain because
\left\|  \int_{\Delta^k [s,t]}  dx^{\otimes k} \right\| \le \left\|  \int dx^{\otimes k}\right\|_{\frac{p}{k}-var, [s,t]}.
So, all the work lies in proving the estimate when k > p. The proof is split into two lemmas. The first one is a binomial inequality which is actually quite difficult to prove:

Lemma: For x,y >0, n \in \mathbb{N}, n \ge 0, and p \ge 1,
\sum_{j=0}^n \frac{x^{j/p}}{\left( \frac{j}{p}\right)!} \frac{y^{(n-j)/p}}{\left( \frac{n-j}{p}\right)!} \le p \frac{(x+y)^{n/p}}{ {\left( \frac{n}{p}\right)!}}.

Proof: See Lemma 2.2.2 in the article by Lyons; a proof giving the sharp constant is also available in the literature \square
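The inequality is easy to test numerically. The following sketch spot-checks it on a small grid of parameters (my own arbitrary choices), with (a)! implemented as \Gamma(a+1) as in the lecture:

```python
import math

def fact(a):
    """(a)! means Gamma(a + 1), as in the lecture."""
    return math.gamma(a + 1.0)

def lhs(x, y, n, p):
    return sum(x ** (j / p) / fact(j / p)
               * y ** ((n - j) / p) / fact((n - j) / p)
               for j in range(n + 1))

def rhs(x, y, n, p):
    return p * (x + y) ** (n / p) / fact(n / p)

# Spot-check the binomial inequality; for p = 1 it reduces to the
# classical binomial identity sum_j x^j y^(n-j) / (j! (n-j)!) = (x+y)^n / n!
for p in (1.0, 1.5, 2.0, 3.5):
    for n in (0, 1, 5, 12):
        for (x, y) in ((0.3, 0.7), (2.0, 5.0), (10.0, 0.1)):
            assert lhs(x, y, n, p) <= rhs(x, y, n, p) + 1e-12
```

Of course a finite grid of parameters proves nothing; this is only a sanity check of the statement.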

The second one is a lemma that was essentially proved in the lecture on Young's integral, but was not explicitly stated there.

Lemma: Let \Gamma: \{ 0 \le s \le t \le T \} \to \mathbb{R}^N. Let us assume that:

  • There exists a control \tilde{\omega} such that
    \lim_{r \to 0} \sup_{(s,t), \tilde{\omega}(s,t) \le r } \frac{\| \Gamma_{s,t} \|}{r}=0;
  • There exists a control \omega and \theta >1, \xi >0 such that for 0 \le s \le t \le u \le T,
    \| \Gamma_{s,u} \| \le \| \Gamma_{s,t} \|+ \| \Gamma_{t,u} \| +\xi \omega(s,u)^\theta.

Then, for all 0 \le s \le t \le T,
\| \Gamma_{s,t} \| \le \frac{\xi}{1-2^{1-\theta}} \omega(s,t)^\theta.

Proof:
See the proof of the Young-Loeve estimate or Lemma 6.2 in the book by Friz-Victoir \square

We can now turn to the proof of the main result.

Proof:
Let us denote
\omega(s,t)=\left( \sum_{j=1}^{[p]} \left\|  \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]}  \right)^p.
We claim that \omega is a control. Indeed, for 0 \le s \le t \le u \le T, we have from Minkowski's inequality
\omega(s,t)+\omega(t,u)
= \left( \sum_{j=1}^{[p]} \left\|  \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]}  \right)^p+\left( \sum_{j=1}^{[p]} \left\|  \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [t,u]}  \right)^p
\le \left( \sum_{j=1}^{[p]}\left(  \left\|  \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [s,t]} +   \left\|  \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [t,u]}\right)^{1/p} \right)^p
\le \left( \sum_{j=1}^{[p]} \left\|  \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,u]}  \right)^p =\omega(s,u).

It is clear that for a constant \beta > 0 small enough, we have for k \le p,
\left\|  \int_{\Delta^k [s,t]}  dx^{\otimes k} \right\|  \le \frac{1}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}.

Let us now consider
\Gamma_{s,t}=  \int_{\Delta^{[p]+1} [s,t]}  dx^{\otimes ([p]+1)}.
From Chen's relations, for 0 \le s \le t \le u \le T,
\Gamma_{s,u}=  \Gamma_{s,t}+  \Gamma_{t,u}+\sum_{j=1}^{[p]} \int_{\Delta^j [s,t]}  dx^{\otimes j }\int_{\Delta^{[p]+1-j} [t,u]}  dx^{\otimes ([p]+1-j) }.
Therefore,
\| \Gamma_{s,u}\|
\le \|  \Gamma_{s,t} \| + \|  \Gamma_{t,u} \| +\sum_{j=1}^{[p]} \left\| \int_{\Delta^j [s,t]}  dx^{\otimes j }\right\|  \left\| \int_{\Delta^{[p]+1-j} [t,u]}  dx^{\otimes ([p]+1-j) }\right\|
\le  \|  \Gamma_{s,t} \| + \|  \Gamma_{t,u} \| +\frac{1}{\beta^2}  \sum_{j=1}^{[p]} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \|  \Gamma_{s,t} \| + \|  \Gamma_{t,u} \| +\frac{1}{\beta^2}  \sum_{j=0}^{[p]+1} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \|  \Gamma_{s,t} \| + \|  \Gamma_{t,u} \| +\frac{1}{\beta^2} p  \frac{(\omega(s,t)+\omega(t,u))^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}
\le  \|  \Gamma_{s,t} \| + \|  \Gamma_{t,u} \| +\frac{1}{\beta^2} p  \frac{\omega(s,u)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}.
On the other hand, we have
\|  \Gamma_{s,t} \| \le A \| x \|_{1-var,[s,t]}^{[p]+1}
for some constant A > 0, so that the first assumption of the previous lemma holds with the control \tilde{\omega}(s,t)=\| x \|_{1-var,[s,t]}.
We deduce from the previous lemma that
\|  \Gamma_{s,t} \| \le \frac{1}{\beta^2} \frac{p}{1-2^{1-\theta}}  \frac{\omega(s,t)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}},
with \theta=\frac{[p]+1}{p}. The general case k \ge p is dealt with by induction. The details are left to the reader \square

Posted in Rough paths theory | 4 Comments

Lecture 9. The signature of a bounded variation path

In this lecture we introduce the central notion of the signature of a path x \in C^{1-var}([0,T],\mathbb{R}^d), which conveniently encodes all the algebraic information on x that is relevant to the study of differential equations driven by x. The motivation for the definition of the signature comes from formal manipulations of Taylor series.

Let us consider a differential equation
y(t)=y(s)+\sum_{i=1}^d \int_s^t V_i (y(u) )dx^i(u),
where the V_i's are smooth vector fields on \mathbb{R}^n.

If f: \mathbb{R}^{n} \rightarrow \mathbb{R} is a C^{\infty} function, by the change of variable formula,
f(y(t))=f(y(s))+\sum^{d}_{i=1}\int^{t}_{s}V_{i}f(y(u))dx^{i}(u).

Now, a new application of the change of variable formula to V_{i}f(y(s)) leads to
f(y(t))=f(y(s))+\sum^{d}_{i=1}V_{i}f(y(s))\int^{t}_{s}dx^{i}(u)+\sum^{d}_{i,j=1}\int^{t}_{s}\int^{u}_{s} V_{j}V_{i}f(y(v))dx^{j}(v)dx^{i}(u).

We can continue this procedure to get after N steps
f(y(t))=f(y(s))+\sum^{N}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}+R_{N}(s,t)
for some remainder term R_{N}(s,t), where we used the notations:

  • \Delta^{k}[s,t]=\{(t_1,\cdots,t_k)\in[s,t]^{k}, s\leq t_1\leq t_2\cdots\leq t_k\leq t\}
  • If I=\left(i_1,\cdots,i_k\right)\in\{1,\cdots,d\}^k is a word with length k, \int_{\Delta^{k}[s,t]}dx^{I}=\displaystyle      \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t}dx^{i_1}(t_1)\cdots dx^{i_k}(t_k).

If we let N\rightarrow +\infty, assuming R_{N}(s,t) \to 0 (which is indeed true for t-s small enough if the V_i's are analytic), we are led to the formal expansion formula:
f(y(t))=f(y(s))+\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}.
This shows, at least at the formal level, that all the information given by x on y is contained in the iterated integrals \int_{\Delta^{k}[s,t]}dx^{I}.
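A minimal concrete instance of the expansion: take d=1, V_1(y)=y and f(y)=y, so that V_1^k f(y)=y for every k, and the expansion reduces to y(s)\sum_k (x(t)-x(s))^k/k! = y(s)e^{x(t)-x(s)}, the exact solution of dy=y\,dx. A numerical sketch (sample values below are my own choices):

```python
import math

# d = 1, V(y) = y, f(y) = y: the expansion reads
#   y(t) = y(s) * sum_{k=0}^{N} (x(t)-x(s))^k / k!  + remainder,
# while the exact solution of dy = y dx is y(s) * exp(x(t)-x(s)).
def truncated_expansion(y_s, dx, N):
    return y_s * sum(dx ** k / math.factorial(k) for k in range(N + 1))

y_s, dx = 2.0, 0.8          # a sample initial value and increment
exact = y_s * math.exp(dx)
errs = [abs(truncated_expansion(y_s, dx, N) - exact) for N in range(8)]
assert all(errs[i + 1] < errs[i] for i in range(7))   # remainder shrinks
assert errs[7] < 1e-4
```

Here the iterated integrals \int_{\Delta^k} dx^{\otimes k} are just (x(t)-x(s))^k/k!, which is the simplest illustration of how the signature terms drive the expansion.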

Let \mathbb{R} [[X_1,...,X_d]] be the non-commutative algebra over \mathbb{R} of formal series in d indeterminates, that is, the set of series
Y=y_0+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k} a_{i_1,...,i_k} X_{i_1}...X_{i_k}.

Definition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). The signature of x (or Chen’s series) is the formal series:
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \sum_{I \in \{1,...,d\}^k} \left(  \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}, \quad  0 \le s \le t \le T.

As we are going to see in the next few lectures, the signature is a fascinating algebraic object. At the source of the numerous properties of the signature lie the following so-called Chen’s relations

Lemma: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any word (i_1,...,i_n) \in \{ 1, ... , d \}^n and any 0 \le s \le t \le u \le T ,
\int_{\Delta^n [s,u]}  dx^{(i_1,...,i_n)}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{(i_1,...,i_k)}\int_{\Delta^{n-k} [t,u]}  dx^{(i_{k+1},...,i_n)},
where we used the convention that if I is a word of length 0, then \int_{\Delta^{0} [s,t]}  dx^I =1.

Proof: It follows readily by induction on n, noticing that
\int_{\Delta^n [s,u]}  dx^{(i_1,...,i_n)}=\int_s^u \left( \int_{\Delta^{n-1} [s,t_n]}  dx^{(i_1,...,i_{n-1})} \right) dx^{i_n}(t_n) \square
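Chen's relation can also be checked numerically at level 2. In the sketch below (path, grid size and time points are my own choices), \int_{\Delta^2[a,b]}dx^1 dx^2 = \int_a^b (x^1(r)-x^1(a))\,dx^2(r) is approximated by left-point Riemann sums:

```python
import math

def path(t):                        # a sample smooth 2D path
    return (math.cos(t), math.sin(2.0 * t))

def level1(i, a, b):                # increment of coordinate i (0-based)
    return path(b)[i] - path(a)[i]

def level2(a, b, n=20000):
    """Left-point Riemann sums for int_{Delta^2[a,b]} dx^1 dx^2,
    i.e. int_a^b (x^1(r) - x^1(a)) dx^2(r)."""
    acc, x1a = 0.0, path(a)[0]
    for k in range(n):
        t0 = a + (b - a) * k / n
        t1 = a + (b - a) * (k + 1) / n
        acc += (path(t0)[0] - x1a) * (path(t1)[1] - path(t0)[1])
    return acc

s, t, u = 0.0, 0.7, 1.5
# Chen's relation at level 2 for the word (1, 2):
lhs = level2(s, u)
rhs = level2(s, t) + level2(t, u) + level1(0, s, t) * level1(1, t, u)
assert abs(lhs - rhs) < 1e-3
```

The cross term level1 * level1 is exactly the k=1 term of the sum in the lemma.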

To avoid heavy notations, it will be convenient to denote
\int_{\Delta^k [s,t]}  dx^{\otimes k} =\sum_{I \in \{1,...,d\}^k} \left(  \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}.

This notation actually reflects a natural algebra isomorphism between \mathbb{R} [[X_1,...,X_d]] and \mathbb{R}\oplus \bigoplus_{k=1}^{+\infty} (\mathbb{R}^d)^{\otimes k}. With this notation, observe that the signature then writes
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]}  dx^{\otimes k},
and that Chen's relations become
\int_{\Delta^n [s,u]}  dx^{\otimes n}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]}  dx^{\otimes k }\int_{\Delta^{n-k} [t,u]}  dx^{\otimes (n-k) }.
Chen's relations imply the following flow property for the signature:

Proposition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any 0 \le s \le t \le u \le T ,
\mathfrak{S} (x)_{s,u} =\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}

Proof: Indeed,
\mathfrak{S} (x)_{s,u}
=1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,u]}  dx^{\otimes k}
=1 + \sum_{k=1}^{+\infty}\sum_{j=0}^{k} \int_{\Delta^j [s,t]}  dx^{\otimes j }\int_{\Delta^{k-j} [t,u]}  dx^{\otimes (k-j) }
=\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}
\square


Lecture 8. Young’s differential equations

In the previous lecture we defined the Young integral \int y dx when x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1. The integral path \int_0^t ydx then has bounded p-variation. Now, if V: \mathbb{R}^d \to \mathbb{R}^{d \times d} is a Lipschitz map, then the integral \int V(x) dx is only defined when \frac{1}{p}+\frac{1}{p} > 1, that is, for p < 2. With this in mind, it is apparent that Young's integration should be useful for solving differential equations driven by continuous paths with bounded p-variation for p  < 2. If p \ge 2, then the Young integral is of no help and the rough paths theory explained later in the course is the correct tool.

The basic existence and uniqueness result is the following. Throughout this lecture, we assume that p < 2.

Theorem: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a Lipschitz continuous map, that is there exists a constant K  > 0 such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
For every y_0 \in \mathbb{R}^e, there is a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Moreover y \in C^{p-var} ([0,T], \mathbb{R}^e).

Proof: The proof is of course based again on the fixed point theorem. Let 0 < \tau \le T and consider the map \Phi from the space C^{p-var} ([0,\tau], \mathbb{R}^e) to itself defined by
\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By using basic estimates on the Young’s integrals, we deduce that
\| \Phi(y^1)-\Phi(y^2) \|_{ p-var, [0,\tau]}
\le C \| x \|_{p-var,[0,\tau]} ( \| V(y^1)-V(y^2) \|_{ p-var, [0,\tau]} +\| V(y^1)(0)-V(y^2)(0)\|)
\le CK \| x \|_{p-var,[0,\tau]}(  \| y^1-y^2  \|_{ p-var, [0,\tau]}+\| y^1(0)-y^2(0)\|).
If \tau is small enough, then CK  \| x \|_{p-var,[0,\tau]} < 1, which means that \Phi is a contraction of the Banach space C^{p-var} ([0,\tau], \mathbb{R}^e) endowed with the norm \| y \|_{p-var,[0,\tau]} +\| y(0)\|.

The fixed point of \Phi, let us say y, is the unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
Then, by considering a subdivision
\{ \tau=\tau_1 <  \tau_2 <\cdots <\tau_n=T \}
such that C K  \| x \|_{p-var,[\tau_k,\tau_{k+1}]} < 1, we obtain a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T \square
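The fixed point construction suggests a practical approximation scheme: the explicit Euler scheme along a subdivision. The sketch below is my own illustration, not part of the lecture, and no convergence rate is claimed; it only checks the scheme against an explicitly solvable one-dimensional equation with a smooth driver:

```python
import math

def euler_solve(V, y0, x, ts):
    """Explicit Euler scheme y_{k+1} = y_k + V(y_k) (x(t_{k+1}) - x(t_k))
    for the 1D equation dy = V(y) dx along the grid ts."""
    y = y0
    for t0, t1 in zip(ts, ts[1:]):
        y = y + V(y) * (x(t1) - x(t0))
    return y

# Test case: V(y) = y, driver x(t) = sin(t); exact solution y0 * exp(sin t)
N = 50000
ts = [i / N for i in range(N + 1)]
y_num = euler_solve(lambda y: y, 2.0, math.sin, ts)
y_exact = 2.0 * math.exp(math.sin(1.0))
assert abs(y_num - y_exact) < 1e-3
```

For genuinely rough drivers with 1 < p < 2 the same scheme still converges to the Young solution, but proving that requires the Young-Loeve estimate of the previous lectures.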

As in the bounded variation case, the solution of a Young differential equation is a C^1 function of the initial condition:

Proposition: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a C^1 Lipschitz continuous map. Let \pi(t,y_0) be the flow of the equation
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Then for every 0\le t \le T, the map y_0 \to  \pi (t,y_0) is C^1 and the Jacobian J_t=\frac{\partial \pi(t,y_0)}{\partial y_0} is the unique solution of the matrix linear equation
J_t=Id+ \sum_{i=1}^d \int_0^t DV_i(\pi(s,y_0))J_s dx^i(s).

As we already mentioned, solutions of Young differential equations are continuous with respect to the driving path in the p-variation topology:

Theorem: Let x^n  \in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a bounded Lipschitz continuous map such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
Let y^n be the solution of the differential equation:
y^n(t)=y(0)+\int_0^t V(y^n(s)) dx^n(s), \quad 0\le t \le T.
If x^n converges to x in p-variation, then y^n converges in p-variation to the solution of the differential equation:
y(t)=y(0)+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.

Proof: Let 0\le s \le t \le T. We have
\| y-y^n \|_{p-var,[s,t]}
= \left\| \int_0^\cdot V(y(u)) dx(u) -\int_0^\cdot V(y^n(u)) dx^n(u) \right\|_{p-var,[s,t]}
\le  \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) + \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le  \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u)  \right\|_{p-var,[s,t]}+\left\| \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le CK \| x\|_{p-var,[s,t]} \| y-y^n \|_{p-var,[s,t]}+C\| x-x^n \|_{p-var,[s,t]}(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})
Thus, if s,t is such that CK \| x\|_{p-var,[s,t]} < 1, we obtain
\| y-y^n \|_{p-var,[s,t]} \le \frac{C(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[s,t]} } \| x-x^n \|_{p-var,[s,t]}.
In the very same way, provided CK \| x^n\|_{p-var,[s,t]} < 1, we get
\| y^n \|_{p-var,[s,t]} \le \frac{C\| V\|_{\infty, [0,T]}}{ 1-CK\| x^n\|_{p-var,[s,t]} }.

Let us fix 0 < \varepsilon < 1 and pick a subdivision 0=\tau_0 \le \tau_1 \le \cdots \le \tau_m=T such that CK \| x\|_{p-var,[\tau_i,\tau_{i+1}]}+\varepsilon < 1. Since \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]} \to  \| x\|_{p-var,[\tau_i,\tau_{i+1}]}, for n \ge N_1 with N_1 big enough, we have
CK \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]}+\frac{\varepsilon}{2} < 1.
We deduce that for n \ge N_1,
\| y^n \|_{p-var,[\tau_i,\tau_{i+1}]} \le \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}
and
\| y-y^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C(K  \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[\tau_i,\tau_{i+1}] }} \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left(  \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left(  \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[0,T]}.
For n \ge N_2 with N_2 \ge N_1 and big enough, we have
\| x-x^n \|_{p-var,[0,T]} \le \frac{\varepsilon^3}{m},
which implies
\| y-y^n \|_{p-var,[0,T]} \le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left(  \frac{2KC}{\varepsilon}+1 \right) \varepsilon^3.
\square


Lecture 7. Young’s integral

In this lecture we define the Young integral \int y dx when x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} >1. The cornerstone is the following Young-Loeve estimate.

Theorem: Let x \in C^{1-var} ([0,T], \mathbb{R}^d) and y \in  C^{1-var} ([0,T], \mathbb{R}^{e \times d}). Consider now p,q \ge 1 with \theta=\frac{1}{p}+\frac{1}{q} > 1. The following estimate holds: for 0 \le s \le t \le T,
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s))  \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.

Proof: For 0 \le s \le t \le T, let us define
\Gamma_{s,t} =\int_s^t y(u)dx(u) -y(s)(x(t)-x(s)) .
We have for s < t < u,
\Gamma_{s,u}-\Gamma_{s,t}-\Gamma_{t,u} =-y(s)(x(u)-x(s))+y(s)(x(t)-x(s))+y(t)(x(u)-x(t))= (y(s)-y(t))(x(t)-x(u)).
As a consequence, we get
\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\| x \|_{p-var; [t,u]} \| y \|_{q-var; [s,t]}.
Let now \omega(s,t)=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}. We claim that \omega is a control. The continuity and the vanishing on the diagonal are obvious to check, so we just need to justify the superadditivity. For s < t < u, we have from Hölder's inequality,
\omega(s,t)+\omega(t,u)
=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}+\| x \|^{1/\theta}_{p-var; [t,u]} \| y \|^{1/\theta}_{q-var; [t,u]}
\le (\| x \|^{p}_{p-var; [s,t]} + \| x \|^{p}_{p-var; [t,u]})^{\frac{1}{p\theta}}(\| y \|^{q}_{q-var; [s,t]} + \| y \|^{q}_{q-var; [t,u]})^{\frac{1}{q\theta}}
\le \| x \|^{1/\theta}_{p-var; [s,u]} \| y \|^{1/\theta}_{q-var; [s,u]}=\omega(s,u).
We have then
\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\omega(s,u)^\theta.
For \varepsilon > 0, consider then the control
\omega_\varepsilon (s,t)= \omega(s,t) +\varepsilon ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]}).
Define now
\Psi(r)= \sup_{s,u, \omega_\varepsilon (s,u)\le r}  \| \Gamma_{s,u}\|.
If s,u are such that \omega_\varepsilon (s,u) \le r, we can find t such that \omega_\varepsilon(s,t) \le \frac{1}{2} \omega_\varepsilon(s,u) and \omega_\varepsilon(t,u) \le \frac{1}{2} \omega_\varepsilon(s,u). Indeed, the continuity of \omega_\varepsilon forces the existence of a t such that \omega_\varepsilon(s,t)=\omega_\varepsilon(t,u), and the superadditivity of \omega_\varepsilon then gives 2\omega_\varepsilon(s,t) \le \omega_\varepsilon(s,u). We obtain therefore
\| \Gamma_{s,u}\|\le 2 \Psi(r/2) + r^\theta,
which implies by maximization,
\Psi(r)\le  2 \Psi(r/2) + r^\theta.
By iterating n times this inequality, we obtain
\Psi(r)
\le  2^n \Psi\left(\frac{r}{2^n} \right) +\sum_{k=0}^{n-1} 2^{k(1-\theta)} r^\theta
\le 2^n \Psi\left(\frac{r}{2^n} \right) + \frac{1}{1-2^{1-\theta}} r^\theta.
It is now clear that:
\| \Gamma_{s,t} \|
\le \left\|\int_s^t (y(u)-y(s))dx(u) \right\|
\le \| x \|_{1-var; [s,t]}  \| y-y(s) \|_{\infty; [s,t]}
\le  ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]})^2
\le \frac{1}{\varepsilon^2} \omega_\varepsilon (s,t)^2,
so that \Psi(r) \le \frac{r^2}{\varepsilon^2} and therefore
\lim_{n \to \infty} 2^n \Psi\left(\frac{r}{2^n} \right) =0.
We conclude
\Psi(r) \le  \frac{1}{1-2^{1-\theta}} r^\theta
and thus
\| \Gamma_{s,u}\| \le \frac{1}{1-2^{1-\theta}} \omega_\varepsilon(s,u) ^\theta.
Sending \varepsilon \to 0 finishes the proof \square

It is remarkable that the Young-Loeve estimate only involves \| x \|_{p-var; [s,t]} and \| y \|_{q-var; [s,t]}. As a consequence, we obtain the following result, whose proof is left to the reader:

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \theta=\frac{1}{p}+\frac{1}{q} >1. Let us assume that there exists a sequence x^n \in C^{1-var} ([0,T], \mathbb{R}^d) such that x^n \to x in C^{p-var} ([0,T], \mathbb{R}^d) and a sequence y^n \in C^{1-var} ([0,T], \mathbb{R}^{e \times d}) such that y^n \to y in C^{q-var} ([0,T], \mathbb{R}^{e \times d}), then for every s < t, \int_s^t y^n(u)dx^n(u) converges to a limit that we call the Young integral of y against x on the interval [s,t] and denote \int_s^t y(u)dx(u).
The integral \int_s^t y(u)dx(u) does not depend on the sequences x^n and y^n, and the following estimate holds: for 0 \le s \le t \le T,
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s))  \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.

The closure of C^{1-var} ([0,T], \mathbb{R}^d) in C^{p-var} ([0,T], \mathbb{R}^d) is C^{0, p-var} ([0,T], \mathbb{R}^d), and we know that C^{p-var} ([0,T], \mathbb{R}^d) \subset C^{0, (p+\varepsilon)-var} ([0,T], \mathbb{R}^d). It is therefore straightforward to extend the Young integral to every x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \theta=\frac{1}{p}+\frac{1}{q} >1, and the Young-Loeve estimate still holds
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s))  \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.
From this estimate, we easily see that for x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1, the sequence of Riemann sums
\sum_{i=0}^{n-1} y(t_i)( x(t_{i+1})-x(t_i))
will converge to \int_s^t y(u)dx(u) when the mesh of the subdivision goes to 0. We record for later use the following estimate on the Young’s integral, which is also an easy consequence of the Young-Loeve estimate (see Theorem 6.8 in the book for further details).
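The convergence of the Riemann sums can be illustrated with smooth test functions, where the Young integral coincides with the Riemann-Stieltjes one. A sketch (test functions and grids are my own choices):

```python
def young_riemann(y, x, ts):
    """Left-point Riemann sums sum_i y(t_i)(x(t_{i+1}) - x(t_i)); by the
    Young-Loeve estimate these converge to the Young integral as the mesh
    of the subdivision ts goes to 0."""
    return sum(y(t0) * (x(t1) - x(t0)) for t0, t1 in zip(ts, ts[1:]))

def x(t):
    return t ** 3

def y(t):
    return t ** 2

exact = 3.0 / 5.0          # int_0^1 t^2 d(t^3) = int_0^1 3 t^4 dt = 3/5
approx = [young_riemann(y, x, [i / n for i in range(n + 1)])
          for n in (10, 100, 1000)]
errs = [abs(a - exact) for a in approx]
assert errs[2] < errs[1] < errs[0]      # error shrinks with the mesh
assert errs[2] < 2e-3
```

For genuinely rough x and y the sums converge at a slower rate governed by \theta, but the mechanism is the same.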

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in  C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1. The integral path t \to \int_0^t y(u)dx(u) is continuous with a finite p-variation and we have
\left\|\int_0^\cdot y(u) dx(u) \right\|_{p-var, [s,t] }
\le C \| x \|_{p-var; [s,t]}  \left( \| y \|_{q-var; [s,t]}  + \| y \|_{\infty; [s,t]}  \right)
\le 2C  \| x \|_{p-var; [s,t]}  \left( \| y \|_{q-var; [s,t]}  + \| y(0)\|  \right)


Lecture 6. Continuous paths with bounded p-variation

Our next goal in this course is to define an integral that can be used to integrate rougher paths than bounded variation ones. As we are going to see, Young's integration theory allows us to define \int y dx as soon as y has finite q-variation and x has finite p-variation with 1/p+1/q>1. This integral is simply a limit of Riemann sums, as for the Riemann-Stieltjes integral. In this lecture we present some basic properties of the space of continuous paths with finite p-variation. We present these results for \mathbb{R}^d-valued paths, but most of them extend without difficulty to paths valued in metric spaces (see Chapter 5 in the book by Friz-Victoir).

Definition. A path x:[s,t] \to \mathbb{R}^d is said to be of finite p-variation, p > 0 if the p-variation of x on [s,t], which is defined as
\| x \|_{p-var; [s,t]} :=\left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)^{1/p},
is finite. The space of continuous paths x : [s,t] \to \mathbb{R}^d with a finite p-variation will be denoted by C^{p-var} ([s,t], \mathbb{R}^d).
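For a path sampled at finitely many points, the supremum defining the p-variation over subdivisions through those points can be computed exactly by dynamic programming over subsequences. The following is my own illustration, not part of the lecture:

```python
def p_variation(points, p):
    """Exact p-variation of the discrete path `points`: the sup over all
    subsequences of sum |x(t_{k+1}) - x(t_k)|^p, by O(n^2) dynamic
    programming (best[j] = best partition of points[0..j] ending at j)."""
    n = len(points)
    best = [0.0] * n
    for j in range(1, n):
        best[j] = max(best[i] + abs(points[j] - points[i]) ** p
                      for i in range(j))
    return best[-1] ** (1.0 / p)

# For a monotone path and p > 1, coarsening a partition increases the sum
# (a^p + b^p <= (a+b)^p for a, b >= 0), so the p-variation is |x(t) - x(s)|.
mono = [k * k / 100.0 for k in range(11)]       # increasing from 0 to 1
assert abs(p_variation(mono, 2.0) - 1.0) < 1e-12
# For p = 1 we recover the usual total variation.
zig = [0.0, 1.0, -1.0, 0.5]
assert abs(p_variation(zig, 1.0) - (1.0 + 2.0 + 1.5)) < 1e-12
```

Note that for a continuous path this only bounds the p-variation from below by the sampled one; refining the sample increases the value towards the true supremum.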

The notion of p-variation is only interesting when p \ge 1.

Proposition: Let x:[s,t] \to \mathbb{R}^d be a continuous path of finite p-variation with p < 1. Then, x is constant.

Proof: We have for s \le u \le t and any subdivision \{t_0, \cdots, t_n\} of [s,u],
\| x(u)-x(s)\|
\le \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|
\le \left( \max_k \| x(t_{k+1}) -x(t_k) \|^{1-p} \right)  \left(  \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)
\le  \left( \max_k \| x(t_{k+1}) -x(t_k) \|^{1-p} \right) \| x \|^p_{p-var; [s,t]}.

Since x is continuous, it is also uniformly continuous on [s,t]. By taking a sequence of subdivisions whose mesh tends to 0, we deduce then that
\| x(u)-x(s)\|=0,
so that x is constant \square
The following proposition is immediate:

Proposition: Let x:[s,t] \to \mathbb{R}^d, be a continuous path. If p \le p' then
\| x \|_{p'-var; [s,t]} \le  \| x \|_{p-var; [s,t]}.
As a consequence C^{p-var} ([s,t], \mathbb{R}^d) \subset C^{p'-var} ([s,t], \mathbb{R}^d).

We remind that a continuous map \omega: \{ 0 \le s \le t \le T \} \to [0,\infty) that vanishes on the diagonal is called a control if for all s \le t \le u,
\omega(s,t)+\omega(t,u) \le \omega (s,u).

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d). Then \omega(s,t)= \| x \|^p_{p-var; [s,t]} is a control such that for every s \le t,
\| x(s) -x(t) \| \le \omega(s,t)^{1/p}.

Proof: It is immediate that
\| x(s) -x(t) \| \le \omega(s,t)^{1/p},
so we focus on the proof that \omega is a control. If \Pi_1 \in \Delta [s,t] and \Pi_2 \in \Delta [t,u], then \Pi_1 \cup \Pi_2 \in \Delta [s,u]. As a consequence, we obtain
\sup_{ \Pi_1 \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p +\sup_{ \Pi_2 \in \Delta[t,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \le \sup_{ \Pi \in \Delta[s,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p,
thus
\| x \|^p_{p-var, [s,t]}+ \| x \|^p_{p-var, [t,u]} \le  \| x \|^p_{p-var, [s,u]}.
The proof of the continuity is left to the reader (see also Proposition 5.8 in the book by Friz-Victoir) \square

In the following sense, \| x \|^p_{p-var; [s,t]} is the minimal control of a path x.

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d) and let \omega: \{ 0 \le s \le t \le T \} \to [0,\infty) be a control such that for 0 \le s \le t \le T,
\| x(s)-x(t) \| \le C \omega (s,t)^{1/p},
then
\| x \|_{p-var; [s,t]} \le C \omega(s,t)^{1/p}.

Proof: We have
\| x \|_{p-var; [s,t]}
= \left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)^{1/p}
\le  \left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} C^p \omega(t_{k}, t_{k+1}) \right)^{1/p}
\le C  \omega(s,t)^{1/p}
\square

The next result shows that the set of continuous paths with bounded p-variation is a Banach space.

Theorem: Let p \ge 1. The space C^{p-var} ([0,T], \mathbb{R}^d) endowed with the norm \| x(0) \|+   \| x \|_{p-var, [0,T]} is a Banach space.

Proof: The proof is identical to the case p=1, so we let the careful reader check the details \square

Again, the set of smooth paths is not dense in C^{p-var} ([0,T], \mathbb{R}^d) for the p-variation convergence topology. The closure of the set of smooth paths in the p-variation norm shall be denoted by C^{0,p-var} ([0,T], \mathbb{R}^d). We have the following characterization of paths in C^{0,p-var} ([0,T], \mathbb{R}^d).

Proposition: Let p \ge 1. x \in C^{0,p-var} ([0,T], \mathbb{R}^d) if and only if
\lim_{\delta \to 0}   \sup_{ \Pi \in \Delta[0,T], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p=0.

Proof: See Theorem 5.31 in the book by Friz-Victoir \square

The following corollary shall often be used in the sequel:

Corollary: If 1 \le p< q, then C^{p-var} ([0,T], \mathbb{R}^d) \subset C^{0,q-var} ([0,T], \mathbb{R}^d).

Proof: Let \Pi \in  \Delta[s,t] whose mesh is less than \delta > 0. We have
\sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^q
\le \left(  \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p\right) \max_k  \| x(t_{k+1}) -x(t_k) \|^{q-p}
\le \| x \|^p_{p-var; [s,t]}  \max_k  \| x(t_{k+1}) -x(t_k) \|^{q-p}.
As a consequence, we obtain
\lim_{\delta \to 0}   \sup_{ \Pi \in \Delta[s,t], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^q=0 \square


Lecture 5. Exponential of vector fields and solutions of differential equations

Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a Lipschitz continuous map. In order to analyse the solution of the differential equation,
y(t)=y_0+\int_0^t V(y(s)) dx(s),
and make the geometry enter the scene, it is convenient to see V as a collection of vector fields V=(V_1, \cdots, V_d), where the V_i's are the columns of the matrix V. The differential equation then of course writes
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s).

Generally speaking, a vector field V on \mathbb{R}^{e} is a map
\begin{array}{llll}  V: & \mathbb{R}^{e}& \rightarrow  & \mathbb{R}^{e} \\  & x & \rightarrow  & (v_{1}(x),...,v_{e}(x)).  \end{array}
A vector field V can be seen as a differential operator acting on differentiable functions f: \mathbb{R}^{e} \rightarrow \mathbb{R} as follows:
Vf(x)=\langle V(x), \nabla f (x) \rangle= \sum_{i=1}^e v_i (x) \frac{\partial f}{\partial x_i}.
We note that V is a derivation, that is for f,g \in \mathcal{C}^{1} (\mathbb{R}^e , \mathbb{R} ),
V(fg)=(Vf)g +f (Vg).
For this reason we often use the differential notation for vector fields and write:
V=\sum_{i=1}^e v_i(x) \frac{\partial }{\partial x_i}.
Using this action of vector fields on functions, the change of variable formula for solutions of differential equations takes a particularly concise form:

Proposition: Let y be a solution of a differential equation that writes
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),
then for any C^1 function f: \mathbb{R}^{e} \rightarrow \mathbb{R},
f(y(t))=f(y_0)+\sum_{i=1}^d \int_0^t V_i f (y(s)) dx^i(s).

Let V be a Lipschitz vector field on \mathbb{R}^e. For any y_0 \in \mathbb{R}^e, the differential equation
y(t)=y_0+\int_0^t V(y(s)) ds
has a unique solution y: \mathbb{R} \to \mathbb{R}^e. By time homogeneity of the equation, the flow of this equation satisfies
\pi ( t_1 , \pi( t_2 ,y_0 ) )=\pi (t_1 +t_2,y_0),
and therefore \{ \pi( t, \cdot), t \in \mathbb{R}\} is a one-parameter group of diffeomorphisms \mathbb{R}^e \to \mathbb{R}^e. This group is generated by V in the sense that for every y_0 \in \mathbb{R}^e,
\lim_{t\to 0} \frac{\pi(t,y_0) -y_0}{t}=V(y_0).
For these reasons, we write \pi(t,y_0)=e^{tV}(y_0). Let us now assume that V is a C^1 Lipschitz vector field on \mathbb{R}^e. If \phi :\mathbb{R}^e \to \mathbb{R}^e is a diffeomorphism, the pull-back \phi^{\ast}V of the vector field V by the map \phi is the vector field defined by the chain rule,
\phi^{\ast}V (x)=(d \phi^{-1} )_{\phi (x) } \left( V (\phi(x)) \right). In particular, if V' is another C^1 Lipschitz vector field on \mathbb{R}^e, then for every t \in \mathbb{R}, we have a vector field (e^{tV})^{\ast} V'. The Lie bracket [V,V'] between V and V' is then defined as
[V,V']=\left( \frac{d}{dt} \right)_{t=0} (e^{tV})^{\ast}V'.
A computation shows that
[ V, V' ](x)=\sum_{i=1}^e \left( \sum_{j=1}^e v_j (x) \frac{\partial v'_i}{\partial x_j}(x)- v'_j (x) \frac{\partial v_i}{\partial x_j}(x)\right)\frac{\partial}{\partial x_i}.
Observe that the Lie bracket obviously satisfies [V,V']=-[V',V] and the so-called Jacobi identity:
[V,[V',V'']]+[V',[V'',V]]+[V'',[V,V']]=0.
What the Lie bracket [V,V'] really quantifies is the lack of commutativity of the respective flows generated by V and V'.

Lemma: Let V,V' be two C^1 Lipschitz vector fields on \mathbb{R}^e. Then, [V,V']=0 if and only if for every s,t \in \mathbb{R},
e^{sV} e^{t V'}=e^{sV+tV'}=e^{t V'} e^{sV}.

Proof: This is a classical result in differential geometry, so we only give one part of the proof. From the very definition of the Lie bracket and the multiplicativity of the flow, we see that [V,V']=0 if and only if for every s \in \mathbb{R}, (e^{sV})^{\ast}V'=V'. Now, suppose that [V,V']=0. Let y be the solution of the equation
y(t)=y_0+\int_0^t V'(y(s)) ds.
Since (e^{sV})^{\ast}V'=V', we obtain that e^{sV} (y(t)) is also a solution of the equation. By uniqueness of solutions, we obtain that
e^{sV}(y(t))=e^{tV'} ( e^{sV}(y_0)).
As a conclusion,
e^{sV} e^{t V'}=e^{t V'} e^{sV}
\square
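As a quick numerical illustration of the lemma (a sketch with hypothetical choices, not part of the lecture), take the two linear vector fields V(x_1,x_2)=(-x_2,x_1) and V'(x_1,x_2)=(x_1,x_2) on \mathbb{R}^2; their generating matrices (the rotation generator and the identity) commute, so [V,V']=0 and the flows should commute:

```python
# Two linear vector fields on R^2 with vanishing Lie bracket: V(x) = Ax with A
# the rotation generator, and V'(x) = x (the scaling field).  Since A commutes
# with the identity matrix, the lemma predicts e^{sV} e^{tV'} = e^{tV'} e^{sV}.

def V(p):
    x, y = p
    return (-y, x)

def Vp(p):
    x, y = p
    return (x, y)

def flow(field, t, p, n=2000):
    """Approximate e^{tV}(p) with a classical 4th-order Runge-Kutta scheme."""
    h = t / n
    for _ in range(n):
        k1 = field(p)
        k2 = field((p[0] + h / 2 * k1[0], p[1] + h / 2 * k1[1]))
        k3 = field((p[0] + h / 2 * k2[0], p[1] + h / 2 * k2[1]))
        k4 = field((p[0] + h * k3[0], p[1] + h * k3[1]))
        p = (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return p

y0 = (1.0, 0.5)
s, t = 0.7, 0.3
a = flow(Vp, t, flow(V, s, y0))   # e^{tV'} e^{sV} (y0)
b = flow(V, s, flow(Vp, t, y0))   # e^{sV} e^{tV'} (y0)
print(a, b)
```

Both compositions agree, up to the Runge-Kutta discretization error, with the closed form e^{0.3} times the rotation of y_0 by the angle 0.7.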

If we consider a differential equation
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),
as we will see throughout this class, the Lie brackets [V_i,V_j] play an important role in understanding the geometry of the set of solutions. The easiest result in that direction is the following:

Proposition: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V_1,\cdots, V_d be C^1 Lipschitz vector fields on \mathbb{R}^e. Assume that for every 1 \le i,j \le d , [V_i,V_j]=0, then the solution of the differential equation
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s), \quad 0 \le t \le T,
can be represented as
y(t)= \exp \left( \sum_{i=1}^d x^i(t) V_i \right) (y_0).

Proof: Let
F(x_1,\cdots,x_d)= \exp \left( \sum_{i=1}^d x_i V_i \right) (y_0).
Since the flows generated by the V_i's commute, we get
\frac{\partial F}{\partial x_i}(x)=V_i (F(x)).
The change of variable formula for bounded variation paths then implies that F(x^1(t),\cdots,x^d(t)) is a solution, and we conclude by uniqueness \square
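To see the proposition in action, here is a minimal numerical sketch (the commuting fields are illustrative choices, not from the lecture): take V_1(y)=(y_1,0) and V_2(y)=(0,y_2) on \mathbb{R}^2, which commute, and a driver with x(0)=0. The representation then reads y(t)=(y_1(0)e^{x^1(t)}, y_2(0)e^{x^2(t)}), which an Euler scheme for the driven equation reproduces:

```python
import math

# Commuting fields V_1(y) = (y_1, 0) and V_2(y) = (0, y_2) on R^2 (illustrative
# choices).  The driven equation reads dy_1 = y_1 dx^1, dy_2 = y_2 dx^2, and the
# claimed representation exp(x^1(t) V_1 + x^2(t) V_2)(y_0) equals
# (y_1(0) e^{x^1(t)}, y_2(0) e^{x^2(t)}) since x(0) = 0.

def x1(t): return math.sin(t)
def x2(t): return t * t / 2.0

y = [2.0, 0.5]          # initial condition y_0
y0 = list(y)
n, T = 20000, 1.0
for k in range(n):
    s, t = k * T / n, (k + 1) * T / n
    # Euler step of the driven equation: y_{k+1} = y_k + V_1(y_k) dx^1 + V_2(y_k) dx^2
    y = [y[0] + y[0] * (x1(t) - x1(s)),
         y[1] + y[1] * (x2(t) - x2(s))]

exact = [y0[0] * math.exp(x1(T)), y0[1] * math.exp(x2(T))]
print(y, exact)
```

The Euler solution of the driven equation matches the exponential representation up to the scheme's first-order error.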

Posted in Rough paths theory | 5 Comments

Lecture 4. Differential equations driven by bounded variation paths

In this lecture we establish the basic existence and uniqueness results concerning differential equations driven by bounded variation paths and prove the continuity in the 1-variation topology of the solution of an equation with respect to the driving signal.

Theorem: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a Lipschitz continuous map, that is, there exists a constant K > 0 such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
For every y_0 \in \mathbb{R}^e, there is a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Moreover y \in C^{1-var} ([0,T], \mathbb{R}^e).

Proof: The proof is a classical application of the fixed point theorem. Let 0 < \tau \le T and consider the map \Phi going from the space of continuous functions [0,\tau] \to \mathbb{R}^e into itself, which is defined by
\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By using estimates on Riemann-Stieltjes integrals, we deduce that
\| \Phi(y^1)-\Phi(y^2) \|_{ \infty, [0,\tau]}
\le \| V(y^1)-V(y^2) \|_{ \infty, [0,\tau]} \| x \|_{1-var,[0,\tau]}
\le K  \| y^1-y^2  \|_{ \infty, [0,\tau]} \| x \|_{1-var,[0,\tau]}
If \tau is small enough, then K  \| x \|_{1-var,[0,\tau]} < 1, which means that \Phi is a contraction that admits a unique fixed point y. This y is the unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By considering then a subdivision
\{ 0=\tau_0 <  \tau_1 < \cdots < \tau_n=T \}
such that K  \| x \|_{1-var,[\tau_k,\tau_{k+1}]} < 1 for every k, we obtain a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T
\square
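The contraction at the heart of the proof is easy to observe numerically. In the sketch below (an illustration, with V(y)=\sin(y), so K=1, and the driver x(t)=t/2, so K\|x\|_{1-var,[0,1]}=1/2<1), the successive Picard iterates of a discretized \Phi approach each other geometrically:

```python
import math

# Numerical sketch of the contraction argument: V(y) = sin(y) has Lipschitz
# constant K = 1 and the driver x(t) = t/2 has 1-variation 1/2 on [0,1], so
# K ||x||_{1-var} = 1/2 < 1 and the Picard map Phi is a contraction.

N = 1000                      # grid points on [0, 1]
dx = 0.5 / N                  # increment of x(t) = t/2 on each small interval
y0 = 1.0

def Phi(y):
    """Discrete Picard map: Phi(y)_t = y0 + int_0^t sin(y(s)) dx(s)."""
    out = [y0]
    for k in range(N):
        out.append(out[-1] + math.sin(y[k]) * dx)
    return out

y = [y0] * (N + 1)            # start the iteration from the constant path
sup_diffs = []
for _ in range(6):
    z = Phi(y)
    sup_diffs.append(max(abs(a - b) for a, b in zip(y, z)))
    y = z
print(sup_diffs)
```

Each sup-distance between consecutive iterates is at most half the previous one, exactly the contraction factor K\|x\|_{1-var,[0,1]}=1/2 from the proof.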

The solution of a differential equation is a continuous function of the initial condition; more precisely, we have the following estimate:

Proposition: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to  \mathbb{R}^{e\times d} be a Lipschitz continuous map such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
If y^1 and y^2 are the solutions of the differential equations:
y^1(t)=y^1(0)+\int_0^t V(y^1(s)) dx(s), \quad 0\le t \le T,
and
y^2(t)=y^2(0)+\int_0^t V(y^2(s)) dx(s), \quad 0\le t \le T,
then the following estimate holds:
\| y^1 -y^2 \|_{\infty,[0,T]} \le \| y^1(0) -y^2(0) \| \exp \left( K  \| x \|_{1-var,[0,T]}   \right).

Proof: We have
\| y^1-y^2 \|_{\infty,[0,t]} \le  \| y^1(0) -y^2(0) \| +K \int_0^t \| y^1-y^2 \|_{\infty,[0,s]} \| dx(s) \|,
and conclude by Gronwall’s lemma \square

This continuity can be understood in terms of flows. Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to  \mathbb{R}^{e\times d} be a Lipschitz map. Denote by \pi (t,y_0), 0 \le t \le T, y_0 \in \mathbb{R}^e, the unique solution of the equation
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
The previous proposition shows that for a fixed 0 \le t \le T, the map y_0 \to  \pi (t,y_0) is Lipschitz continuous. The set \{  \pi (t, \cdot), 0 \le t \le T \} is called the flow of the equation.
Under more regularity assumptions on V, the map y_0 \to  \pi (t,y_0) is even C^1 and the Jacobian map solves a linear equation.

Proposition: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to  \mathbb{R}^{e\times d} be a C^1 Lipschitz continuous map. Let \pi(t,y_0) be the flow of the equation
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Then for every 0\le t \le T, the map y_0 \to  \pi (t,y_0) is C^1 and the Jacobian J_t=\frac{\partial \pi(t,y_0)}{\partial y_0} is the unique solution of the matrix linear equation
J_t=Id+ \sum_{i=1}^d\int_0^t DV_i(\pi(s,y_0))J_s dx(s),
where the V_i's denote the columns of the matrix V.

Proof: We refer to Chapter 4 of the book by Friz-Victoir \square

We finally turn to the important estimate showing that solutions of differential equations are continuous with respect to the driving path in the 1-variation topology.

Theorem: Let x^1,x^2 \in C^{1-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a bounded Lipschitz continuous map such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
If y^1 and y^2 are the solutions of the differential equations:
y^1(t)=y(0)+\int_0^t V(y^1(s)) dx^1(s), \quad 0\le t \le T,
and
y^2(t)=y(0)+\int_0^t V(y^2(s)) dx^2(s), \quad 0\le t \le T,
then the following estimate holds:
\| y^1 -y^2 \|_{1-var,[0,T]} \le \| V \|_\infty  \left(   1+ K\| x^1 \|_{1-var,[0,T]}  \exp \left( K  \| x^1 \|_{1-var,[0,T]}   \right) \right)  \| x^1 -x^2 \|_{1-var,[0,T]} .

Proof: We first give an estimate in the supremum topology. It is easily seen that the assumptions imply
\| y^1 -y^2 \|_{\infty ,[0,t]} \le K \int_0^t \| y^1 -y^2 \|_{\infty ,[0,s]} \| dx^1(s) \| +\| V \|_\infty  \| x^1 -x^2 \|_{1-var,[0,T]}.
From Gronwall’s lemma, we deduce that
\| y^1 -y^2 \|_{\infty ,[0,T]} \le \| V \|_\infty   \exp \left( K  \| x^1 \|_{1-var,[0,T]}   \right)   \| x^1 -x^2 \|_{1-var,[0,T]} .
Now, we also have for any 0\le s \le t \le T,
\| y^1(t)-y^2(t)-(y^1(s)-y^2(s))\|\le K \| y^1 -y^2 \|_{\infty ,[0,T]} \| x^1 \|_{1-var,[s,t]} +\| V\|_\infty  \| x^1 -x^2 \|_{1-var,[s,t]} .
This implies,
\| y^1 -y^2 \|_{1-var,[0,T]}  \le K \| y^1 -y^2 \|_{\infty ,[0,T]} \| x^1 \|_{1-var,[0,T]} +\| V\|_\infty  \| x^1 -x^2 \|_{1-var,[0,T]}
and yields the conclusion \square


Lecture 3. Riemann-Stieltjes integrals

Let y:[0,T] \to \mathbb{R}^{e \times d} be a piecewise continuous path and x  \in C^{1-var} ([0,T], \mathbb{R}^d). It is well-known that we can integrate y against x by using the Riemann-Stieltjes integral, which is a natural extension of the Riemann integral. The idea is to use the Riemann sums
\sum_{k=0}^{n-1} y(t_k) (x(t_{k+1})-x(t_k)),
where \Pi=\{ 0 =t_0 < t_1 < \cdots < t_n =T \}. It is easy to prove that, when the mesh of the subdivision \Pi goes to 0, the Riemann sums converge to a limit which is independent of the chosen sequence of subdivisions. The limit is then denoted \int_0^T y(t) dx(t) and called the Riemann-Stieltjes integral of y against x. Since x has bounded variation, it is easy to see that, more generally,
\sum_{k=0}^{n-1} y(\xi_k) (x(t_{k+1})-x(t_k)),
with t_k \le \xi_k \le t_{k+1} would also converge to \int_0^T y(t) dx(t). If
x(t)=x(0)+\int_0^t g(s) ds
is an absolutely continuous path, then it is not difficult to prove that we have
\int_0^T y(t) dx(t) =\int_0^T y(t) g(t) dt,
where the integral on the right hand side is understood in Riemann’s sense.
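A short numerical sketch of these facts (with the illustrative choices y(t)=\cos t and x(t)=t^2, so g(t)=2t): the left-point Riemann-Stieltjes sums converge to the Riemann integral \int_0^1 2t\cos t\, dt=2(\cos 1+\sin 1-1):

```python
import math

# Riemann-Stieltjes sketch: integrate y(t) = cos(t) against x(t) = t^2 on [0, 1].
# Since x(t) = int_0^t 2s ds, the integral should equal the Riemann integral
# int_0^1 cos(t) 2t dt = 2(cos(1) + sin(1) - 1).

def rs_sum(n):
    """Left-point Riemann-Stieltjes sum over the uniform n-point subdivision."""
    total = 0.0
    for k in range(n):
        s, t = k / n, (k + 1) / n
        total += math.cos(s) * (t * t - s * s)
    return total

exact = 2.0 * (math.cos(1.0) + math.sin(1.0) - 1.0)
approx = rs_sum(100000)
print(approx, exact)
```

The sum converges at rate O(1/n), consistent with the bounded variation of x.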

We have
\left\| \sum_{k=0}^{n-1} y(t_k) (x(t_{k+1})-x(t_k))\right\|
\le \sum_{k=0}^{n-1} \| y(t_k)\| \| x(t_{k+1})-x(t_k)\|
\le  \sum_{k=0}^{n-1} \| y(t_k)\|  \| x \|_{1-var,[t_k,t_{k+1}]}.
Thus, by taking the limit when the mesh of the subdivision goes to 0, we obtain the estimate
\left\| \int_0^T y(t) dx(t) \right\| \le \int_0^T \| y(t) \| \| dx(t) \| \le \| y \|_{\infty, [0,T]}  \| x \|_{1-var,[0,T]},
where \int_0^T \| y(t) \| \| dx(t) \| is the notation for the Riemann-Stieltjes integral of \| y \| against the bounded variation path l(t)= \| x \|_{1-var,[0,t]}. We can also estimate the Riemann-Stieltjes integral in the 1-variation distance. We collect the following estimate for later use:

Proposition: Let y,y':[0,T] \to \mathbb{R}^{e \times d} be piecewise continuous paths and x,x' \in C^{1-var} ([0,T], \mathbb{R}^d). We have
\left\| \int_0^{\cdot} y'(t) dx'(t)-\int_0^{\cdot} y(t) dx(t) \right\|_{1-var,[0,T]} \le  \| x \|_{1-var,[0,T]} \| y-y' \|_{\infty, [0,T]} + \| y' \|_{\infty, [0,T]}  \| x -x'\|_{1-var,[0,T]}.

The Riemann-Stieltjes integral satisfies the usual rules of calculus; for instance, the integration by parts formula takes the following form:
Proposition: Let y \in C^{1-var} ([0,T], \mathbb{R}^{e \times d} ) and x\in C^{1-var} ([0,T], \mathbb{R}^d). Then
\int_0^T y(t) dx(t)+\int_0^T dy(t) x(t)=y(T)x(T) -y(0)x(0).
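The integration by parts formula can be checked numerically; in fact, if one sum uses left endpoints and the other right endpoints, the discrete Riemann-Stieltjes sums telescope exactly (a sketch with the illustrative paths y(t)=e^t and x(t)=\sin t):

```python
import math

# Discrete check of integration by parts for y(t) = e^t, x(t) = sin(t) on [0, 1].
# With left points for one sum and right points for the other, the terms
# y(t_k)(x_{k+1}-x_k) + x(t_{k+1})(y_{k+1}-y_k) telescope to x_{k+1}y_{k+1} - x_k y_k,
# and each sum converges to the corresponding Riemann-Stieltjes integral.

n, T = 2000, 1.0
ts = [k * T / n for k in range(n + 1)]
x = [math.sin(t) for t in ts]
y = [math.exp(t) for t in ts]

int_y_dx = sum(y[k] * (x[k + 1] - x[k]) for k in range(n))
int_dy_x = sum(x[k + 1] * (y[k + 1] - y[k]) for k in range(n))
lhs = int_y_dx + int_dy_x
rhs = y[n] * x[n] - y[0] * x[0]
print(lhs, rhs)
```

The two sides agree up to floating-point round-off, while each individual sum approximates its integral with an O(1/n) error.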

We also have the following change of variable formula:

Proposition: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let \Phi: \mathbb{R}^d \to \mathbb{R}^e be a C^1 map. We have
\Phi (x(T)) =\Phi (x(0)) + \int_0^T \Phi'(x(t)) dx(t).

Proof: From the mean value theorem
\Phi (x(T)) -\Phi (x(0))=\sum_{k=0}^{n-1} (\Phi (x(t_{k+1})) -\Phi (x(t_k)))=\sum_{k=0}^{n-1}\Phi'(x(\xi_k))  (x(t_{k+1}) -x(t_k)),
with t_k \le \xi_k \le t_{k+1}. The result is then obtained by taking the limit when the mesh of the subdivision goes to 0 \square

We finally state a classical analysis lemma, Gronwall’s lemma, which provides a wonderful tool to estimate solutions of differential equations.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d) and let \Phi: [0,T] \to [0,\infty) be a bounded measurable function. If
\Phi(t) \le A+B\int_0^t \Phi(s) \| d x(s)\|, \quad 0 \le t \le T,
for some A,B \ge 0, then
\Phi(t) \le A \exp (B \| x \|_{1-var,[0,t]}), \quad 0 \le t \le T.

Proof: Iterating the inequality
\Phi(t) \le A+B\int_0^t \Phi(s) \| d x(s)\|
n times, we get
\Phi(t) \le A+\sum_{k=1} ^n AB^{k} \int_0^ t \int_0^{t_1} \cdots \int_0^{t_{k-1}} \| d x(t_k)\| \cdots \| dx(t_1) \| +R_n(t),
where R_n(t) is a remainder term that goes to 0 when n \to \infty. Observing that
\int_0^ t \int_0^{t_1} \cdots \int_0^{t_{k-1}} \| d x(t_k)\| \cdots \| dx(t_1) \|=\frac{ \| x \|^k_{1-var,[0,t]} }{k!}
and sending n to \infty finishes the proof \square
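The simplex identity used in the last step can be checked numerically for k=2 (a sketch with the illustrative increasing path x(t)=t^2 on [0,1], for which \|dx\|=dx and \|x\|_{1-var,[0,t]}=x(t)-x(0)):

```python
# Numerical check of the simplex identity for k = 2 and the increasing scalar
# path x(t) = t^2 on [0, 1]: the double integral over {0 < t_2 < t_1 < t}
# of ||dx(t_2)|| ||dx(t_1)|| should equal ||x||_{1-var,[0,1]}^2 / 2!.

n = 2000
xs = [(k / n) ** 2 for k in range(n + 1)]      # x sampled on a uniform grid

double = 0.0
for i in range(n):                             # outer integral in t_1
    inner = xs[i] - xs[0]                      # int_0^{t_1} ||dx(t_2)|| = x(t_1) - x(0)
    double += inner * (xs[i + 1] - xs[i])      # left-point sum in t_1
total_var = xs[n] - xs[0]                      # ||x||_{1-var,[0,1]} = 1
print(double, total_var ** 2 / 2)
```

The nested sum approaches \|x\|^2_{1-var,[0,1]}/2 = 1/2, as the factorial identity predicts.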


Lecture 2. Continuous paths with bounded variation

The first few lectures are essentially reminders of undergraduate real analysis material. We will cover some aspects of the theory of differential equations driven by continuous paths with bounded variation. The point is to fix some notations that will be used throughout the course and to stress the importance of the topology of convergence in 1-variation if we are interested in stability results for solutions with respect to the driving signal.

If s \le t, we will denote by \Delta [s,t] the set of subdivisions of the interval [s,t]; that is, \Pi \in \Delta [s,t] can be written
\Pi=\left\{ s= t_0 < t_1 < \cdots < t_n =t \right\}.

Definition: A continuous path x : [s,t] \to \mathbb{R}^d is said to have bounded variation on [s,t] if the 1-variation of x on [s,t], which is defined as
\| x \|_{1-var; [s,t]} :=\sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|,
is finite. The space of continuous bounded variation paths x : [s,t] \to \mathbb{R}^d, will be denoted by C^{1-var} ([s,t], \mathbb{R}^d).

\| \cdot \|_{1-var; [s,t]} is not a norm, because constant functions have zero 1-variation, but it is obviously a semi-norm. If x is continuously differentiable on [s,t], it is easily seen that
\| x \|_{1-var, [s,t]}=\int_s^t \| x'(u) \| du.
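A quick numerical sketch of this formula (an illustration, not part of the lecture): for the unit-speed circular arc x(t)=(\cos t,\sin t) on [0,\pi], the subdivision sums increase to the arc length \int_0^\pi \|x'(t)\| dt=\pi as the grid is refined:

```python
import math

# For the C^1 path x(t) = (cos t, sin t) on [0, pi], the 1-variation is the arc
# length int_0^pi ||x'(t)|| dt = pi.  Uniform subdivision sums (each a lower
# bound for the supremum) approach this value from below as the grid is refined.

def var_sum(n):
    total = 0.0
    for k in range(n):
        s, t = k * math.pi / n, (k + 1) * math.pi / n
        total += math.hypot(math.cos(t) - math.cos(s), math.sin(t) - math.sin(s))
    return total

sums = [var_sum(n) for n in (10, 100, 1000)]
print(sums)
```

Each sum is n times the chord length 2\sin(\pi/2n), so the sequence increases to \pi.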

Proposition: Let x \in  C^{1-var} ([0,T], \mathbb{R}^d). The function (s,t)\to \| x \|_{1-var, [s,t]} is additive, i.e. for 0 \le s \le t \le u \le T,
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]}= \| x \|_{1-var, [s,u]},
and controls x in the sense that for 0 \le s \le t \le T,
\| x(s)-x(t) \| \le \| x \|_{1-var, [s,t]}.
The function s \to  \| x \|_{1-var, [0,s]} is moreover continuous and non decreasing.

Proof: If \Pi_1 \in \Delta [s,t] and \Pi_2 \in \Delta [t,u], then \Pi_1 \cup \Pi_2 \in \Delta [s,u]. As a consequence, we obtain
\sup_{ \Pi_1 \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \| +\sup_{ \Pi_2 \in \Delta[t,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \| \le \sup_{ \Pi \in \Delta[s,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|,
thus
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]} \le  \| x \|_{1-var, [s,u]}.
Let now \Pi \in \Delta[s,u]:
\Pi=\left\{ s= t_0 < t_1 < \cdots < t_n =u \right\}.
Let k=\max \{ j, t_j \le t\}. Inserting the point t between t_k and t_{k+1} and using the triangle inequality, we have
\sum_{j=0}^{n-1} \| x(t_{j+1}) -x(t_j) \|
\le  \sum_{j=0}^{k-1} \| x(t_{j+1}) -x(t_j) \| + \| x(t) -x(t_k) \| + \| x(t_{k+1}) -x(t) \| + \sum_{j=k+1}^{n-1} \| x(t_{j+1}) -x(t_j) \|
\le  \| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]}.
Taking the supremum over \Pi \in \Delta[s,u] yields
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]} \ge  \| x \|_{1-var, [s,u]},
which completes the proof. The proof of the continuity and monotonicity of s \to  \| x \|_{1-var, [0,s]} is left to the reader \square

This control of the path by the 1-variation norm is an illustration of the notion of controlled path which is very useful in rough paths theory.

Definition: A map \omega: \{ 0 \le s \le t \le T \} \to [0,\infty) is called superadditive if for all s \le t \le u,
\omega(s,t)+\omega(t,u) \le \omega (s,u).
If, in addition, \omega is continuous and \omega(t,t)=0, we call \omega a control. We say that a path x:[0,T] \to \mathbb{R}^d is controlled by a control \omega, if there exists a constant C > 0, such that for every 0 \le s \le t \le T,
\| x(t) -x(s) \| \le C \omega(s,t).

Obviously, Lipschitz functions have bounded variation. The converse is of course not true: t\to \sqrt{t} has bounded variation on [0,1] but is not Lipschitz. However, any continuous path with bounded variation is the reparametrization of a Lipschitz path in the following sense.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). There exists a Lipschitz function y:[0,1] \to \mathbb{R}^d, and a continuous and non-decreasing function \phi:[0,T]\to [0,1] such that x=y\circ \phi.

Proof: We assume \| x \|_{1-var, [0,T]} \neq 0 and consider
\phi(t)=\frac{  \| x \|_{1-var, [0,t]} }{ \| x \|_{1-var, [0,T]} }.
It is continuous and non decreasing. There exists a function y such that x=y\circ \phi because \phi(t_1)=\phi(t_2) implies x(t_1)=x(t_2). We have then, for s \le t,
\| y( \phi(t)) -y ( \phi(s)) \|=\| x(t) -x (s) \| \le  \| x \|_{1-var, [s,t]} =\| x \|_{1-var, [0,T]}  (\phi(t)-\phi(s) ) \square
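The construction in the proof can be traced numerically (a sketch with the illustrative path x(t)=\sqrt t on [0,1], which has bounded variation but is not Lipschitz at 0): building \phi from the cumulative variation, the key inequality \|x(t)-x(s)\| \le \|x\|_{1-var,[0,1]} (\phi(t)-\phi(s)) holds at every grid pair:

```python
import math

# Sketch of the reparametrization proof for x(t) = sqrt(t) on [0, 1].  Since x
# is increasing, its cumulative 1-variation is x(t) - x(0) = sqrt(t), the total
# variation is 1, and phi(t) = sqrt(t).  The Lipschitz bound from the proof,
# |x(t) - x(s)| <= ||x||_{1-var,[0,1]} (phi(t) - phi(s)), is checked on the grid.

n = 1000
ts = [k / n for k in range(n + 1)]
x = [math.sqrt(t) for t in ts]

cum = [0.0]                                   # cumulative 1-variation along the grid
for k in range(n):
    cum.append(cum[-1] + abs(x[k + 1] - x[k]))
total = cum[-1]                               # ||x||_{1-var,[0,1]} = 1
phi = [c / total for c in cum]

ok = all(abs(x[j] - x[i]) <= total * (phi[j] - phi[i]) + 1e-9
         for i in range(0, n + 1, 37) for j in range(i, n + 1, 41))
print(ok)
```

In other words, the reparametrized path y = x \circ \phi^{-1} is Lipschitz with constant equal to the total variation.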

The next result shows that the set of continuous paths with bounded variation is a Banach space.

Theorem: The space C^{1-var} ([0,T], \mathbb{R}^d) endowed with the norm \| x(0) \|+   \| x \|_{1-var, [0,T]} is a Banach space.

Proof: Let x^n \in C^{1-var} ([0,T], \mathbb{R}^d) be a Cauchy sequence. It is clear that
\| x^n -x^m \|_\infty \le \| x^n(0)-x^m(0) \|+   \| x^n-x^m \|_{1-var, [0,T]}.
Thus, x^n converges uniformly to a continuous path x :[0,T] \to \mathbb{R}^d. We need to prove that x has bounded variation. Let
\Pi=\{ 0=t_0 <t_1 < \cdots <t_n=T \}
be a subdivision of [0,T]. There is m \ge 0 such that \| x - x^m \|_\infty \le \frac{1}{2n}, thus
\sum_{k=0}^{n-1} \|x(t_{k+1})-x(t_k) \|
\le \sum_{k=0}^{n-1} \|x(t_{k+1})-x^m(t_{k+1}) \| +\sum_{k=0}^{n-1} \|x^m(t_{k})-x(t_k) \| +\| x^m \|_{1-var,[0,T]}
\le 1+\sup_{n} \| x^n \|_{1-var,[0,T]}.
Thus, we have
\| x \|_{1-var,[0,T]} \le 1+\sup_{n} \| x^n \|_{1-var,[0,T]} < \infty
\square

For approximation purposes, it is important to observe that the set of smooth paths is not dense in C^{1-var} ([0,T], \mathbb{R}^d) for the 1-variation convergence topology. The closure of the set of smooth paths in the 1-variation norm, which shall be denoted by C^{0,1-var} ([0,T], \mathbb{R}^d), is the set of absolutely continuous paths.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). Then, x \in C^{0,1-var} ([0,T], \mathbb{R}^d) if and only if there exists y \in L^1([0,T]) such that,
x(t)=x(0)+\int_0^t y(s) ds.

Proof: First, let us assume that
x(t)=x(0)+\int_0^t y(s) ds,
for some y \in  L^1([0,T]). Since smooth paths are dense in L^1([0,T]), we can find a sequence of smooth paths y^n such that \| y-y^n \|_1 \to 0. Define then,
x^n(t)=x(0)+\int_0^t y^n(s) ds.
We have
\| x-x^n \|_{1-var,[0,T]}=\| y-y^n \|_1.
This implies that x  \in C^{0,1-var} ([0,T], \mathbb{R}^d). Conversely, if x  \in C^{0,1-var} ([0,T], \mathbb{R}^d), there exists a sequence of smooth paths x^n that converges in the 1-variation topology to x. Each x^n can be written as,
x^n(t)=x^n(0)+\int_0^t y^n(s) ds.
We still have
\| x^m-x^n \|_{1-var,[0,T]}=\| y^m-y^n \|_1,
so that y^n converges to some y in L^1. It is then clear that
x(t)=x(0)+\int_0^t y(s) ds
\square

Exercise: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). Show that x is the limit in 1-variation of piecewise linear interpolations if and only if x  \in C^{0,1-var} ([0,T], \mathbb{R}^d).


Lecture 1. An overview of rough paths theory

Let us consider a differential equation that writes
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),
where the V_i's are vector fields on \mathbb{R}^n and where the driving signal x(t)=(x^1(t), \cdots, x^d(t)) is a continuous bounded variation path. If the vector fields are Lipschitz continuous then, for any fixed initial condition, there is a unique solution y(t) to the equation. We can see this solution y as a function of the driving signal x. It is an important question to understand for which topologies this function is continuous.

A simple example shows that the topology of uniform convergence is not the correct one here. Indeed, let us consider the differential equation
y_1(t) = x_1(t)
y_2(t)= x_2(t)
y_3(t)= -\int_0^t y_2(s) dx_1(s) +\int_0^t y_1(s) dx_2(s)
where
x_1(t)=\frac{1}{n} \cos (n^2 t ), \quad x_2(t)=\frac{1}{n} \sin (n^2 t).
A straightforward computation shows that y_3(t)=t. When n \to \infty, (x_1,x_2) converges uniformly to 0 whereas, of course, (y_1,y_2,y_3) does not converge to 0. In this framework, a correct topology is given by the topology of convergence in 1-variation on compact sets. To fix ideas, let us work on the interval [0,1]. The distance in 1-variation between two continuous bounded variation paths x,\tilde{x}:[0,1] \to \mathbb{R}^d is given by
\delta_1(x,\tilde{x})=\|x(0)-\tilde{x}(0) \|+ \sup_{\pi}  \sum_{i=0}^{n-1} \| (x(t_{i+1})-\tilde{x}(t_{i+1})) -(x(t_i)-\tilde{x}(t_i))  \|,
where the supremum is taken over all the subdivisions \pi =\{ 0 = t_0 < t_1 < \cdots < t_n = 1 \}.
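The counterexample above is easy to reproduce numerically (a sketch; the grid sizes are arbitrary choices): as n grows, the driving loop shrinks uniformly like 1/n, yet the Riemann-Stieltjes sums for the area integral y_3(1) stay close to 1:

```python
import math

# Numerical sketch of the counterexample: for x_1(t) = cos(n^2 t)/n and
# x_2(t) = sin(n^2 t)/n, the area y_3(T) = int_0^T (-y_2 dx_1 + y_1 dx_2) with
# y_1 = x_1, y_2 = x_2 equals T, even though sup |(x_1, x_2)| = 1/n -> 0.

def y3(n, T=1.0, steps=100000):
    total = 0.0
    for k in range(steps):
        s, t = k * T / steps, (k + 1) * T / steps
        x1s, x2s = math.cos(n * n * s) / n, math.sin(n * n * s) / n
        x1t, x2t = math.cos(n * n * t) / n, math.sin(n * n * t) / n
        # left-point Riemann-Stieltjes sum for -y_2 dx_1 + y_1 dx_2
        total += -x2s * (x1t - x1s) + x1s * (x2t - x2s)
    return total

areas = [y3(n) for n in (5, 20, 80)]
print(areas)
```

All three areas are close to 1 while the sup norms of the drivers are 1/5, 1/20 and 1/80, which is exactly why uniform convergence cannot control the solution map.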

It is a fact, to be proved in this class, that if the V_i's are bounded and if x^n:[0,1] \to \mathbb{R}^d is a sequence of bounded variation paths that converges in 1-variation to a continuous path x with bounded variation, then the solutions of the differential equations
y^n(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y^n(s)) dx^{i,n}(s),
converge in 1-variation to the solution of
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s).
This type of continuity result suggests using a topology in p-variation, p \ge 1, to try to extend the map x \to y to a larger class of driving signals x. More precisely, for p \geq 1, let us denote by \Omega^p  (\mathbb{R}^d) the closure of the set of continuous paths with bounded variation x:[0,1] \rightarrow  \mathbb{R}^d with respect to the distance in p-variation, which is given by
\delta_p(x,\tilde{x})=\left( \|x(0)-\tilde{x}(0) \|^p+ \sup_{\pi}  \sum_{i=0}^{n-1} \| (x(t_{i+1})-\tilde{x}(t_{i+1})) -(x(t_i)-\tilde{x}(t_i))  \|^p \right)^{1/p}.
We will then prove the following result:

Proposition: Let p < 2. If x^n:[0,1] \to \mathbb{R}^d is a sequence of bounded variation paths that converges in p-variation to a path x \in \Omega^p  (\mathbb{R}^d), then the solutions of the differential equations
y^n(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y^n(s)) dx^{i,n}(s),
converge in p-variation to some y \in \Omega^p  (\mathbb{R}^d). Moreover y is the solution of the differential equation
y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),
where the integrals are understood in the sense of Young’s integration.

The value p=2 is really a threshold: the result is simply false for p=2. The main idea of the rough paths theory is to introduce a much stronger topology than the convergence in p-variation. This topology, that we now explain, is related to the continuity of lifts of paths in free nilpotent Lie groups.

Let \mathbb{G}_N (\mathbb{R}^d) be the free N-step nilpotent Lie group with d generators X_1,\cdots,X_d. If x:[0,1] \rightarrow  \mathbb{R}^d is continuous with bounded variation, the solution x^* of the equation
x^*(t)=\sum_{i=1}^d \int_0^t X_i (x^*(s)) dx^i(s),
is called the lift of x in \mathbb{G}_N (\mathbb{R}^d). For p \geq 1, let us
denote \Omega^p \mathbb{G}_N (\mathbb{R}^d) the closure of the set of lifted paths x^*:[0,1]  \rightarrow \mathbb{G}_N (\mathbb{R}^d) with respect to the distance in p-variation which is given by
\delta^N_p (x^*,y^*) =\sup_{\pi} \left( \sum_{i=0}^{n-1} d_N \left( y_{t_i}^* (x_{t_i}^*)^{-1}, y_{t_{i+1}}^* (x_{t_{i+1}}^*)^{-1} \right)^p \right)^{\frac{1}{p}},
where d_N denotes the Carnot-Caratheodory distance on the group \mathbb{G}_N (\mathbb{R}^d). This is a distance that will be explained in detail later. Its main property is that it is homogeneous with respect to the natural dilation of \mathbb{G}_N (\mathbb{R}^d).

Consider now the map \mathcal{I} which associates to a continuous path with bounded variation
x: [0,1] \rightarrow \mathbb{R}^d the continuous path with bounded variation y : [0,1] \rightarrow \mathbb{R}^n that solves the ordinary differential equation
y(t) =y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s).
It is clear that there exists a unique map \mathcal{I}^{*} from the set of lifted continuous bounded variation paths [0,1] \rightarrow \mathbb{G}_N (\mathbb{R}^d) to the set of lifted continuous bounded variation paths [0,1] \rightarrow \mathbb{G}_N (\mathbb{R}^n) which makes the following diagram commutative
\begin{array}{lll}  & \mathcal{I}^{*} &  \\  x^{*} & \longrightarrow  & y^{*} \\  \uparrow  &  & \uparrow  \\  x & \longrightarrow  & y \\  & \mathcal{I} &  \end{array}.
The fundamental theorem of Lyons is the following:

Theorem: If N \geq [p], then in the topology of \delta^N_p-variation, there exists a continuous extension of \mathcal{I}^{*} from \Omega^p \mathbb{G}_N (\mathbb{R}^d) into \Omega^p \mathbb{G}_N (\mathbb{R}^n).

In particular, we can now give a sense to differential equations driven by some continuous paths with finite p-variation, for any p \ge 1. Indeed, let x:[0,1] \to \mathbb{R}^d be continuous with finite p-variation and assume that there exists x^* \in \Omega^p \mathbb{G}_N (\mathbb{R}^d) whose projection onto \mathbb{R}^d is x. The projection onto \mathbb{R}^n of \mathcal{I}^{*}(x^*) is then understood as being a solution of
y(t) =y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s).

An important example of application is given by the case where the driving signal is a Brownian motion (B(t))_{t \ge 0}. Brownian motion has finite p-variation for any p > 2 and, as we will see, admits a canonical lift in \Omega^p \mathbb{G}_2 (\mathbb{R}^d). As a conclusion, we can consider, in the rough paths sense, solutions to the equation
y(t) =y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dB^i(s).

It turns out that this notion of solution is exactly equivalent to the solutions obtained by using the Stratonovich integration theory. Therefore, the theory of stochastic differential equations appears as a very special case of rough paths theory!
