Lecture 8. Rough paths. Fall 2017

In this lecture, it is time to harvest the fruits of the two previous lectures. This will allow us to finally define the notion of a p-rough path and to construct the signature of such a path.

A first result, which is a consequence of the theorem proved in the previous lecture, is the following continuity of the iterated integrals with respect to a convenient topology. The proof uses arguments very similar to those of the previous two lectures, so we leave it as an exercise to the student.

Theorem: Let p \ge 1, K > 0 and x,y \in C^{1-var}([0,T],\mathbb{R}^d) such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le 1,
and
\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p \le K.
Then there exists a constant C \ge 0 depending only on p and K such that for k \ge 1
\left\| \int_{\Delta^k [0,\cdot]} dx^{\otimes k}- \int_{\Delta^k [0,\cdot]} dy^{\otimes k} \right\|_{p-var, [0,T]} \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!}.

This continuity result naturally leads to the following definition.

Definition: Let p \ge 1 and x \in C^{p-var}([0,T],\mathbb{R}^d). We say that x is a p-rough path if there exists a sequence x_n \in C^{1-var}([0,T],\mathbb{R}^d) such that x_n\to x in p-variation and such that for every \varepsilon > 0, there exists N \ge 0 such that for m,n \ge N,
\sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le \varepsilon.
The space of p-rough paths will be denoted \mathbf{\Omega}^p([0,T],\mathbb{R}^d).

From the very definition, \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is the closure of C^{1-var}([0,T],\mathbb{R}^d) inside C^{p-var}([0,T],\mathbb{R}^d) for the distance
d_{\mathbf{\Omega}^p([0,T],\mathbb{R}^d)}(x,y)= \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .

If x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) and x_n \in C^{1-var}([0,T],\mathbb{R}^d) is such that x_n\to x in p-variation and such that for every \varepsilon > 0, there exists N \ge 0 such that for m,n \ge N,
\sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le \varepsilon,
then we define \int_{\Delta^k [s,t]} dx^{\otimes k} for k \le p as the limit of the iterated integrals \int_{\Delta^k [s,t]} dx_n^{\otimes k}. However it is important to observe that \int_{\Delta^k [s,t]} dx^{\otimes k} may then depend on the choice of the approximating sequence x_n. Once the integrals \int_{\Delta^k [s,t]} dx^{\otimes k} are defined for k \le p, we can then use the previous theorem to construct all the iterated integrals \int_{\Delta^k [s,t]} dx^{\otimes k} for k > p. It is then obvious that if x,y \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d), then
1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}=1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dy^{\otimes k}
implies that
1 + \sum_{k=1}^{+\infty } \int_{\Delta^k [s,t]} dx^{\otimes k}=1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dy^{\otimes k}.
In other words, the signature of a p-rough path is completely determined by its truncated signature at order [p]:
\mathfrak{S}_{[p]} (x)_{s,t} =1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}.
For this reason, it is natural to represent a p-rough path by this truncated signature at order [p], in order to stress that the choice of the approximating sequence to construct the iterated integrals up to order [p] has been made. This will be explained in much more detail when we introduce the notion of geometric rough path over a rough path.

The following results are straightforward to obtain from the previous lectures by a limiting argument.

Lemma: Let x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d), p \ge 1. For 0 \le s \le t \le u \le T , and n \ge 1,
\int_{\Delta^n [s,u]} dx^{\otimes n}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.

Theorem: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in\mathbf{\Omega}^p([0,T],\mathbb{R}^d) and k \ge 1,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.

If p \ge 2, the space \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is not a priori a Banach space (it is not a linear space) but it is a complete metric space for the distance
d_{\mathbf{\Omega}^p([0,T],\mathbb{R}^d)}(x,y)= \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
The structure of \mathbf{\Omega}^p([0,T],\mathbb{R}^d) will be better understood in the next lectures, but let us recall that if 1 \le p < 2, then \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is the closure of C^{1-var}([0,T],\mathbb{R}^d) inside C^{p-var}([0,T],\mathbb{R}^d) for the p-variation distance; it is therefore the space we denoted C^{0,p-var}([0,T],\mathbb{R}^d). As a corollary we deduce

Proposition: Let 1 \le p < 2. Then x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) if and only if
\lim_{\delta \to 0} \sup_{ \Pi \in \mathcal{D}[0,T], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p=0,
where \mathcal{D}[0,T] is the set of subdivisions \Pi=\{ 0=t_0 \le t_1 \le \cdots \le t_n=T \} of [0,T]. In particular, for 1 \le q < p,
C^{q-var}([0,T],\mathbb{R}^d) \subset \mathbf{\Omega}^p([0,T],\mathbb{R}^d).

 

We are now ready to define solutions of linear differential equations driven by p-rough paths, p \ge 1, and to present Lyons' continuity theorem in this setting. Let x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) be a p-rough path with truncated signature \sum_{k=0}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}, and let x_n \in C^{1-var}([0,T],\mathbb{R}^d) be an approximating sequence such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \to 0.

Let us consider matrices M_1,\cdots,M_d \in \mathbb{R}^{n \times n}. We have the following theorem:

Theorem: Let y_n:[0,T] \to \mathbb{R}^n be the solution of the differential equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s).
Then, when n \to \infty, y_n converges in the p-variation distance to some y \in C^{p-var}([0,T],\mathbb{R}^n). The path y is called the solution of the rough differential equation
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).

Proof: It is a classical result that the solution of the equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s),
can be expanded as the convergent Volterra series:
y_n(t)=y_n(s)+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[s,t]}dx_n^{I} \right) y_n(s).
Therefore, in particular, for n,m \ge 0,
y_n(t)-y_m(t)=\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[0,t]}dx_n^{I}- \int_{\Delta^{k}[0,t]}dx_m^{I} \right) y(0),
which implies that
\| y_n(t)-y_m(t) \| \le \sum^{+\infty}_{k=1}M^k \left\| \int_{\Delta^{k}[0,t]}dx_n^{\otimes k}- \int_{\Delta^{k}[0,t]}dx_m^{\otimes k} \right\| \| y(0) \|
with M=\max \{ \| M_1 \| , \cdots , \| M_d \| \}. From the theorems of the previous lectures, there exists a constant C \ge 0 depending only on p and
\sup_n \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]}
such that for k \ge 1 and n,m big enough:
\left\| \int_{\Delta^k [0,\cdot]} dx_n^{\otimes k}- \int_{\Delta^k [0,\cdot]} dx_m^{\otimes k} \right\|_{p-var, [0,T]} \le \left( \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!}.
As a consequence, there exists a constant \tilde{C} such that for n,m big enough:
\| y_n(t)-y_m(t) \| \le \tilde{C} \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
This already proves that y_n converges in the supremum topology to some y. We now have
(y_n(t)-y_n(s))-(y_m(t)-y_m(s))
=\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[s,t]}dx_n^{I}y_n(s) -\int_{\Delta^{k}[s,t]}dx_m^{I} y_m(s)\right),
and we can bound
\left\| \int_{\Delta^{k}[s,t]}dx_n^{I}y_n(s) -\int_{\Delta^{k}[s,t]}dx_m^{I} y_m(s) \right\|
\le \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} \right\| \| y_n(s)-y_m(s) \|+\| y_m(s) \| \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} - \int_{\Delta^{k}[s,t]}dx_m^{I}\right\|
\le \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} \right\| \| y_n-y_m \|_{\infty, [0,T]} +\| y_m \|_{\infty, [0,T]} \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} - \int_{\Delta^{k}[s,t]}dx_m^{I}\right\|.
Again, from the theorems of the previous lectures, there exists a constant C \ge 0, depending only on p and
\sup_n \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]}
such that for k \ge 1 and n,m big enough
\left\| \int_{\Delta^k [s,t]} dx_n^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}, \quad 0 \le s \le t \le T.
\left\| \int_{\Delta^k [s,t]} dx_n^{\otimes k}- \int_{\Delta^k [s,t]} dx_m^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p} ,
where \omega is a control such that \omega(0,T)=1. Consequently, there is a constant \tilde{C} such that
\| (y_n(t)-y_n(s))-(y_m(t)-y_m(s)) \|
\le \tilde{C} \left( \| y_n-y_m \|_{\infty, [0,T]} + \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \omega(s,t)^{1/p}.
This implies the estimate
\| y_n -y_m \|_{p-var,[0,T]} \le \tilde{C} \left( \| y_n-y_m \|_{\infty, [0,T]} + \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)
and thus gives the conclusion \square
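The Volterra series at the heart of this proof can be summed explicitly when d = 1: the k-th iterated integral of a scalar bounded-variation path is (x(t)-x(0))^k/k!, so the series collapses to \exp(M(x(t)-x(0)))y(0). A small numerical sketch under these assumptions (the driver \sin t, the matrix M, the truncation level, and the step counts are illustrative choices, not from the lecture):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def volterra(M, y0, increment, N):
    # Partial sum  sum_{k=0}^N M^k (increment^k / k!) y0  of the Volterra series;
    # for a one-dimensional driver the k-th iterated integral of x over [0,t]
    # is (x(t) - x(0))^k / k!.
    y = [0.0, 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]  # running power M^k, starting at M^0
    for k in range(N + 1):
        c = increment ** k / math.factorial(k)
        Py0 = mat_vec(P, y0)
        y = [y[i] + c * Py0[i] for i in range(2)]
        P = mat_mul(P, M)
    return y

# Illustrative data: driver x(t) = sin t on [0, T], M a rotation generator
M = [[0.0, 1.0], [-1.0, 0.0]]
y0 = [1.0, 0.0]
T = 1.5
increment = math.sin(T) - math.sin(0.0)

# Consistency check: crude Euler scheme for dy = M y dx
steps = 20000
h = T / steps
y = y0[:]
for i in range(steps):
    dx = math.sin((i + 1) * h) - math.sin(i * h)
    My = mat_vec(M, y)
    y = [y[j] + My[j] * dx for j in range(2)]

print(volterra(M, y0, increment, 30), y)  # both close to exp(M * increment) y0
```

For a genuinely multidimensional rough driver the matrices M_i no longer commute and the full iterated integrals are needed, which is precisely why the estimates of the proof are stated at the level of the signature.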

With just a little more work, it is possible to prove the following stronger result, whose proof is left to the reader.
Theorem: Let y_n:[0,T] \to \mathbb{R}^n be the solution of the differential equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s),
and y be the solution of the rough differential equation:
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).
Then, y \in \mathbf{\Omega}^p([0,T],\mathbb{R}^n) and when n \to \infty,
\sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}- \int dy_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \to 0.

We can get useful estimates for solutions of rough differential equations. For that, we need the following analysis lemma:

Proposition: For x \ge 0 and p \ge 1,
\sum_{k=0}^{+\infty} \frac{x^k}{\left( \frac{k}{p} \right)!} \le p e^{x^p}.

Proof: For \alpha \ge 0, we denote
E_\alpha(x)=\sum_{k=0}^{+\infty} \frac{x^k}{\left( k \alpha \right)!}.
This is a special function called the Mittag-Leffler function. From the binomial inequality
E_\alpha(x)^2
=\sum_{k=0}^{+\infty} \left( \sum_{j=0}^k \frac{1}{\left( j \alpha \right)!\left( (k-j) \alpha \right)!}\right)x^k
\le \frac{1}{\alpha}\sum_{k=0}^{+\infty} 2^{\alpha k} \frac{x^k}{\left( k \alpha \right)!}=\frac{1}{\alpha}E_\alpha(2^\alpha x).
Thus we proved
E_\alpha(x)\le\frac{1}{\alpha^{1/2}}E_\alpha(2^\alpha x)^{1/2}.
Iterating this inequality k times, we obtain
E_\alpha(x)\le \frac{1}{\alpha^{\sum_{j=1}^k \frac{1}{2^j}}} E_\alpha(2^{\alpha k}x)^{1/2^k}.
It is known (and not difficult to prove) that
E_\alpha(x) \sim_{x \to \infty} \frac{1}{\alpha} e^{x^{1/\alpha}}.
By letting k \to \infty we conclude
E_\alpha(x) \le \frac{1}{\alpha} e^{x^{1/\alpha}}.
\square
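The proposition is easy to sanity-check numerically. The sketch below (not from the lecture; the grid of values is arbitrary) sums E_{1/p}(x) term by term, using lgamma rather than gamma so that large k does not overflow:

```python
import math

def mittag_leffler(alpha, x, terms=400):
    # Partial sum of E_alpha(x) = sum_k x^k / (k alpha)!  with (u)! = Gamma(u+1);
    # computed as exp(k log x - lgamma(k alpha + 1)) to avoid overflow.
    return sum(math.exp(k * math.log(x) - math.lgamma(k * alpha + 1.0))
               for k in range(terms))

# Check  sum_k x^k / (k/p)! <= p e^{x^p}  on a small grid (alpha = 1/p);
# the tiny tolerance only absorbs floating-point rounding (at p = 1 the
# inequality is an equality, since E_1(x) = e^x).
for p in (1.0, 1.5, 2.0, 3.0):
    for x in (0.1, 0.5, 1.0, 2.0):
        assert mittag_leffler(1.0 / p, x) <= p * math.exp(x ** p) * (1.0 + 1e-9)
```

Note that for p = 2 and x = 2 the two sides differ by less than one percent, so the constant p in the proposition is close to optimal, consistent with the asymptotics used in the proof.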

This estimate provides the following result:

Proposition: Let y be the solution of the rough differential equation:
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).
Then, there exists a constant C depending only on p such that for 0 \le t \le T,
\| y(t) \| \le p \| y(0)\| e^{ C M^p \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,t]} \right)^p},
where M=\max \{ \| M_1 \|, \cdots, \|M_d\| \}.

Proof: We have
y(t)=y(0)+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[0,t]}dx^{I} \right) y(0).
Thus we obtain
\| y(t)\| \le \left( 1+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M^k \left\| \int_{\Delta^{k}[0,t]}dx^{I} \right\| \right) \| y(0) \|,
and we conclude by using estimates on iterated integrals of rough paths together with the previous lemma \square

Posted in Rough paths theory

HW6. MA3160 Fall 2021, due October 21

Exercise 1. Patricia receives an average of two texts every 2 hours. If we assume that the number of texts is Poisson distributed, what is the probability that she receives five or more texts in a 9-hour period?

Exercise 2. A UConn student claims that she can distinguish Dairy Bar ice cream from Friendly's ice cream. As a test, she is given ten samples of ice cream (each sample is either from the Dairy Bar or Friendly's) and asked to identify each one. What is the probability that she would be right exactly eight times if she guessed each sample correctly with probability 2/3?

Honors Exercise.

Jennifer is baking cookies. She mixes 500 raisins and 600 chocolate chips into her cookie dough and ends up with 300 cookies.
(a) Find the probability that a randomly picked cookie will have three raisins in it.
(b) Find the probability that a randomly picked cookie will have at least one chocolate chip in it.
(c) Find the probability that a randomly picked cookie will have no more than two bits in it (a bit is either a raisin or a chocolate chip).

Posted in MA3160

HW5. MA3160 Fall 2017

Exercise 1. Three balls are randomly chosen with replacement from an urn containing 5 blue, 4 red, and 2 yellow balls. Let X denote the number of red balls chosen.

(a) What are the possible values of X?
(b) What are the probabilities associated to each value?

 

Exercise 2. Suppose X is a random variable such that E[X] = 50 and Var(X) = 12. Calculate the following quantities.

(a) E[X^2]
(b) E [3X + 2]

(c) E [(X+2)^2]

(d) Var[−X]

Posted in Uncategorized

Annales de la Faculté des Sciences de Toulouse

Annales de la Faculté des Sciences de Toulouse is a peer-reviewed international journal with a long tradition of excellence (going back to 1887 and Thomas Stieltjes). The journal periodically publishes surveys by the recipients of the Fermat Prize. The Editorial Board encourages high-level submissions.

Submissions in all areas of mathematics are accepted and decisions are usually made within 3 months. The electronic version is free and accessible without subscription.

Posted in Mathematicians

MA3160. Fall 2017. Midterm 1 sample

Practice midterm 1

 

We will go over the solutions in class on 09/28.

Posted in Uncategorized

HW4. MA360 Due 09/30/21

Exercise 1. Two dice are rolled. Consider the events A = {sum of two dice equals 3}, B = {sum of two dice equals 7 }, and C = {at least one of the dice shows a 1}.

(a) What is P(A | C)?

(b) What is P(B | C)?

(c) Are A and C independent? What about B and C?

Exercise 2. Suppose you roll two standard, fair, 6-sided dice. What is the probability that the sum is at least 9 given that you rolled at least one 6?

Exercise 3.  Color blindness is a sex-linked condition, and 5% of men and 0.25% of women are color blind. The population of the United States is 51% female. What is the probability that a color-blind American is a man?

Honors exercise. The UConn covid test is effective at x%. We know that 0.5% of the UConn population is covid positive. What is the minimum x so that, if someone tests positive, the probability that this person is covid positive is at least 90%?

Posted in MA3160

Lecture 7. Rough paths. Fall 2017

In the previous lecture we introduced the signature of a bounded variation path x as the formal series
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dx^{\otimes k}.
If now x \in C^{p-var}([0,T],\mathbb{R}^d), p \ge 1, the iterated integrals \int_{\Delta^k [s,t]} dx^{\otimes k} can only be defined as Young integrals when p < 2. In this lecture, we are going to derive some estimates that allow us to define the signature of some (not all) paths with finite p-variation when p \ge 2. These estimates are due to Terry Lyons in his seminal paper, and this is where rough paths theory really begins.

For P \in \mathbb{R} [[X_1,...,X_d]] that can be written as
P=P_0+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}a_{i_1,...,i_k} X_{i_1}...X_{i_k},
we define
\| P \| =|P_0|+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}|a_{i_1,...,i_k}| \in [0,\infty].
It is quite easy to check that for P,Q \in \mathbb{R} [[X_1,...,X_d]]
\| PQ \| \le \| P \| \| Q\|.
Let x \in C^{1-var}([0,T],\mathbb{R}^d). For p \ge 1, we denote
\left\| \int dx^{\otimes k}\right\|_{p-var, [s,t]}=\left( \sup_{ \Pi \in \mathcal{D}[s,t]} \sum_{i=0}^{n-1} \left\| \int_{\Delta^k [t_i,t_{i+1}]} dx^{\otimes k} \right\|^p \right)^{1/p},
where \mathcal{D}[s,t] is the set of subdivisions of the interval [s,t]. Observe that for k \ge 2, in general
\int_{\Delta^k [s,t]} dx^{\otimes k}+ \int_{\Delta^k [t,u]} dx^{\otimes k} \neq \int_{\Delta^k [s,u]} dx^{\otimes k}.
Actually, from Chen's relations we have
\int_{\Delta^n [s,u]} dx^{\otimes n}= \int_{\Delta^n [s,t]} dx^{\otimes n}+ \int_{\Delta^n [t,u]} dx^{\otimes n} +\sum_{k=1}^{n-1} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.
It follows that \left\| \int dx^{\otimes k}\right\|_{p-var, [s,t]} need not be the p-variation of t \to \int_{\Delta^k [s,t]} dx^{\otimes k}.
The first major result of rough paths theory is the following estimate:

Proposition: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in C^{1-var}([0,T],\mathbb{R}^d) and k \ge 0,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.

By \left( \frac{k}{p}\right)!, we of course mean \Gamma \left( \frac{k}{p}+1\right). Some remarks are in order before we prove the result. If p=1, then the estimate becomes
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{k!} \| x \|_{1-var, [s,t]}^k,
which is immediately checked because
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\|
\le \sum_{I \in \{1,...,d\}^k} \left\| \int_{\Delta^{k}[s,t]}dx^{I} \right\|
\le \sum_{I \in \{1,...,d\}^k} \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t} \| dx^{i_1}(t_1) \| \cdots \| dx^{i_k}(t_k)\|
\le \frac{1}{k!} \left( \sum_{j=1}^ d \| x^j \|_{1-var, [s,t]} \right)^k.
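The factorial in the p=1 estimate is exactly the volume of the simplex \Delta^k[s,t], namely (t-s)^k/k!; equivalently, k independent uniform points fall in increasing order with probability 1/k!. A quick Monte Carlo sanity check of this fact (the sample size and seed are arbitrary choices):

```python
import random

random.seed(42)

def simplex_fraction(k, samples=200_000):
    # Fraction of uniform points (t_1,...,t_k) in [0,1]^k that are increasing:
    # an unbiased estimate of the simplex volume |Delta^k[0,1]| = 1/k!.
    hits = 0
    for _ in range(samples):
        t = [random.random() for _ in range(k)]
        if all(t[i] <= t[i + 1] for i in range(k - 1)):
            hits += 1
    return hits / samples
```

For instance, simplex_fraction(3) should be close to 1/6, the volume of the 3-simplex.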

We can also observe that for k \le p, the estimate is easy to obtain because
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \left\| \int dx^{\otimes k}\right\|_{\frac{p}{k}-var, [s,t]}.
So, all the work is to prove the estimate when k >p. The proof is split into two lemmas. The first one is a binomial inequality which is actually quite difficult to prove:

Lemma: For x,y >0, n \in \mathbb{N}, n \ge 0, and p \ge 1,
\sum_{j=0}^n \frac{x^{j/p}}{\left( \frac{j}{p}\right)!} \frac{y^{(n-j)/p}}{\left( \frac{n-j}{p}\right)!} \le p \frac{(x+y)^{n/p}}{ {\left( \frac{n}{p}\right)!}}.

Proof: See Lemma 2.2.2 in the article by Lyons, or this proof for the sharp constant \square
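Although the proof is delicate, the inequality itself is easy to probe numerically. A sketch (not from the lecture; the grid of values is an arbitrary choice, and the tolerance only absorbs floating-point rounding, since at p = 1 the inequality reduces to the binomial identity):

```python
import math

def f(u):
    # (u)! := Gamma(u + 1), as in the lecture's convention
    return math.gamma(u + 1.0)

def lhs(x, y, n, p):
    # sum_{j=0}^n  x^{j/p}/(j/p)! * y^{(n-j)/p}/((n-j)/p)!
    return sum(x ** (j / p) / f(j / p) * y ** ((n - j) / p) / f((n - j) / p)
               for j in range(n + 1))

def rhs(x, y, n, p):
    # p (x+y)^{n/p} / (n/p)!
    return p * (x + y) ** (n / p) / f(n / p)

# Spot-check the binomial inequality on a small grid
for p in (1.0, 1.5, 2.0, 3.5):
    for n in (0, 1, 2, 5, 10):
        for x, y in ((0.3, 0.7), (1.0, 1.0), (2.0, 5.0)):
            assert lhs(x, y, n, p) <= rhs(x, y, n, p) * (1.0 + 1e-9)
```

For p = 1 both sides coincide with (x+y)^n/n!, which is why the constant p cannot be improved uniformly in p.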

The second one is a lemma that was already essentially proved in the lecture on Young's integral, but was not explicitly stated.

Lemma: Let \Gamma: \{ 0 \le s \le t \le T \} \to \mathbb{R}^N. Let us assume that:

  • There exists a control \tilde{\omega} such that
    \lim_{r \to 0} \sup_{(s,t), \tilde{\omega}(s,t) \le r } \frac{\| \Gamma_{s,t} \|}{r}=0;
  • There exists a control \omega and \theta >1, \xi >0 such that for 0 \le s \le t \le u \le T,
    \| \Gamma_{s,u} \| \le \| \Gamma_{s,t} \|+ \| \Gamma_{t,u} \| +\xi \omega(s,u)^\theta.

Then, for all 0 \le s \le t \le T,
\| \Gamma_{s,t} \| \le \frac{\xi}{1-2^{1-\theta}} \omega(s,t)^\theta.

Proof:
See the proof of the Young-Loeve estimate or Lemma 6.2 in the book by Friz-Victoir \square

We can now turn to the proof of the main result.

Proof:
Let us denote
\omega(s,t)=\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p.
We claim that \omega is a control. Indeed, for 0 \le s \le t \le u \le T, we have from Minkowski's inequality
\omega(s,t)+\omega(t,u)
= \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p+\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [t,u]} \right)^p
\le \left( \sum_{j=1}^{[p]}\left( \left\| \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [s,t]} + \left\| \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [t,u]}\right)^{1/p} \right)^p
\le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,u]} \right)^p =\omega(s,u).

It is clear that for some constant \beta > 0 which is small enough, we have for k \le p,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{1}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}.

Let us now consider
\Gamma_{s,t}= \int_{\Delta^{[p]+1} [s,t]} dx^{\otimes ([p]+1)}.
From Chen's relations, for 0 \le s \le t \le u \le T,
\Gamma_{s,u}= \Gamma_{s,t}+ \Gamma_{t,u}+\sum_{j=1}^{[p]} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{[p]+1-j} [t,u]} dx^{\otimes ([p]+1-j) }.
Therefore,
\| \Gamma_{s,u}\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\sum_{j=1}^{[p]} \left\| \int_{\Delta^j [s,t]} dx^{\otimes j }\right\| \left\| \int_{\Delta^{[p]+1-j} [t,u]} dx^{\otimes ([p]+1-j) }\right\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} \sum_{j=1}^{[p]} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} \sum_{j=0}^{[p]+1} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} p \frac{(\omega(s,t)+\omega(t,u))^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} p \frac{\omega(s,u)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}.
On the other hand, for some constant A \ge 0 we have
\| \Gamma_{s,t} \| \le A \| x \|_{1-var,[s,t]}^{[p]+1},
so that the first assumption of the previous lemma is satisfied with the control \tilde{\omega}(s,t)= \| x \|_{1-var, [s,t]}.
We deduce from the previous lemma that
\| \Gamma_{s,t} \| \le \frac{1}{\beta^2} \frac{p}{1-2^{1-\theta}} \frac{\omega(s,t)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}},
with \theta=\frac{[p]+1}{p}. The general case k > p is dealt with by induction. The details are left to the reader \square

 

Let x \in C^{1-var}([0,T],\mathbb{R}^d). Since
\omega(s,t)=\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p
is a control, the estimate
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.
easily implies that for k > p,
\left\| \int dx^{\otimes k} \right\|_{\frac{p}{k}-var, [s,t]} \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}.
We stress that it does not imply a bound on the 1-variation of the path t \to \int_{\Delta^k [0,t]} dx^{\otimes k}. What we can get for this path are bounds in p-variation:

Proposition: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in C^{1-var}([0,T],\mathbb{R}^d) and k \ge 0,
\left\| \int_{\Delta^k [0,\cdot]} dx^{\otimes k} \right\|_{p-var, [s,t]} \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{1/p} \omega(0,T)^{\frac{k-1}{p}}
where
\omega(s,t)= \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p, \quad 0 \le s \le t \le T.

Proof: This is an easy consequence of Chen's relations. Indeed,

\left\| \int_{\Delta^k [0,t]} dx^{\otimes k} - \int_{\Delta^k [0,s]} dx^{\otimes k} \right\|
=\left\| \sum_{j=1}^k \int_{\Delta^{k-j} [0,s]} dx^{\otimes (k-j)} \int_{\Delta^j [s,t]} dx^{\otimes j} \right\|
\le \sum_{j=1}^k \left\| \int_{\Delta^{k-j} [0,s]} dx^{\otimes (k-j)} \right\| \left\| \int_{\Delta^j [s,t]} dx^{\otimes j} \right\|
\le C^k \sum_{j=1}^k \frac{1}{\left( \frac{k-j}{p}\right)!} \omega(0,s)^{(k-j)/p} \frac{1}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}
\le C^k \omega(s,t)^{1/p} \sum_{j=1}^k \frac{1}{\left( \frac{j}{p}\right)!} \omega(0,T)^{(j-1)/p} \frac{1}{\left( \frac{k-j}{p}\right)!} \omega(0,T)^{(k-j)/p}
\le C^k \omega(s,t)^{1/p} \omega(0,T)^{(k-1)/p}\sum_{j=1}^k \frac{1}{\left( \frac{j}{p}\right)!} \frac{1}{\left( \frac{k-j}{p}\right)!},
and we conclude with the binomial inequality \square

We are now ready for a second major estimate, which is the key to defining the iterated integrals of a path with finite p-variation when p \ge 2.

Theorem: Let p \ge 1, K > 0 and x,y \in C^{1-var}([0,T],\mathbb{R}^d) such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le 1,
and
\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p \le K.
Then there exists a constant C \ge 0 depending only on p and K such that for 0\le s \le t \le T and k \ge 1
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p} ,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}
where \omega is the control
\omega(s,t)= \frac{ \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p } { \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p }
+\left( \frac{\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} - \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} }{\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} - \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} } \right)^p

Proof: We prove by induction on k that for some constants C,\beta,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p},
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \frac{C^k}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}

For k \le p, we trivially have
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^k \omega(s,t)^{k/p}
\le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \omega(s,t)^{k/p}.
and
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le K^{k/p} \omega(s,t)^{k/p}.
Now let us assume that the result is true for all 0 \le j \le k, for some k \ge [p]. Let
\Gamma_{s,t}=\int_{\Delta^{k+1} [s,t]} dx^{\otimes (k+1)}- \int_{\Delta^{k+1} [s,t]} dy^{\otimes (k+1)}.
From Chen's relations, for 0 \le s \le t \le u \le T,
\Gamma_{s,u}= \Gamma_{s,t}+ \Gamma_{t,u}
+\sum_{j=1}^{k} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }-\sum_{j=1}^{k} \int_{\Delta^j [s,t]} dy^{\otimes j }\int_{\Delta^{k+1-j} [t,u]} dy^{\otimes (k+1-j) }.
Therefore, from the binomial inequality
\| \Gamma_{s,u}\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\sum_{j=1}^{k} \left\| \int_{\Delta^j [s,t]} dx^{\otimes j }- \int_{\Delta^j [s,t]} dy^{\otimes j } \right\| \left\| \int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }\right\|
+\sum_{j=1}^{k} \left\| \int_{\Delta^{j} [s,t]} dy^{\otimes j }\right\| \left\| \int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }- \int_{\Delta^{k+1-j} [t,u]} dy^{\otimes (k+1-j) } \right\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2}\tilde{\omega}(0,T) \sum_{j=1}^{k} \frac{C^j}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p} \frac{C^{k+1-j}}{\left( \frac{k+1-j}{p}\right)!} \omega(t,u)^{(k+1-j)/p}
+\frac{1}{\beta^2}\tilde{\omega}(0,T) \sum_{j=1}^{k} \frac{C^j}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p} \frac{C^{k+1-j}}{\left( \frac{k+1-j}{p}\right)!} \omega(t,u)^{(k+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{2p}{\beta^2} \tilde{\omega}(0,T) C^{k+1} \frac{ \omega(s,u)^{(k+1)/p}}{\left( \frac{k+1}{p}\right)! }
where
\tilde{\omega}(0,T)=\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
We deduce
\| \Gamma_{s,t} \| \le \frac{2p}{\beta^2(1-2^{1-\theta})} \tilde{\omega}(0,T) C^{k+1} \frac{ \omega(s,t)^{(k+1)/p}}{\left( \frac{k+1}{p}\right)! }
with \theta= \frac{k+1}{p}. A correct choice of \beta finishes the induction argument \square

Posted in Rough paths theory

Lecture 6. Rough paths. Fall 2017

In this lecture we introduce the central notion of the signature of a path x \in C^{1-var}([0,T],\mathbb{R}^d), which is a convenient way to encode all the algebraic information on the path x relevant to the study of differential equations driven by x. The motivation for the definition of the signature comes from formal manipulations on Taylor series.

Let us consider a differential equation
y(t)=y(s)+\sum_{i=1}^d \int_s^t V_i (y(u) )dx^i(u),
where the V_i‘s are smooth vector fields on \mathbb{R}^n.

If f: \mathbb{R}^{n} \rightarrow \mathbb{R} is a C^{\infty} function, by the change of variable formula,
f(y(t))=f(y(s))+\sum^{d}_{i=1}\int^{t}_{s}V_{i}f(y(u))dx^{i}(u).

Now, a new application of the change of variable formula to V_{i}f(y(s)) leads to
f(y(t))=f(y(s))+\sum^{d}_{i=1}V_{i}f(y(s))\int^{t}_{s}dx^{i}(u)+\sum^{d}_{i,j=1}\int^{t}_{s}\int^{u}_{s} V_{j}V_{i}f(y(v))dx^{j}(v)dx^{i}(u).

We can continue this procedure to get after N steps
f(y(t))=f(y(s))+\sum^{N}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}+R_{N}(s,t)
for some remainder term R_{N}(s,t), where we used the notations:

  • \Delta^{k}[s,t]=\{(t_1,\cdots,t_k)\in[s,t]^{k}, s\leq t_1\leq t_2\cdots\leq t_k\leq t\}
  • If I=\left(i_1,\cdots,i_k\right)\in\{1,\cdots,d\}^k is a word with length k, \int_{\Delta^{k}[s,t]}dx^{I}=\displaystyle \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t}dx^{i_1}(t_1)\cdots dx^{i_k}(t_k).

If we let N\rightarrow +\infty, assuming R_{N}(s,t) \to 0 (which is by the way true for t-s small enough if the V_i‘s are analytic), we are led to the formal expansion formula:
f(y(t))=f(y(s))+\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}.
This shows, at least at the formal level, that all the information given by x on y is contained in the iterated integrals \int_{\Delta^{k}[s,t]}dx^{I}.

Let \mathbb{R} [[X_1,...,X_d]] be the non-commutative algebra over \mathbb{R} of formal series in d non-commuting indeterminates, that is, the set of series
Y=y_0+\sum_{k = 1}^{+\infty} \sum_{I =(i_1,\ldots,i_k)\in \{1,...,d\}^k} a_{i_1,...,i_k} X_{i_1}\cdots X_{i_k}.
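To make the algebra concrete, one can model a truncated element of \mathbb{R} [[X_1,...,X_d]] as a map from words to coefficients; the product then concatenates words. This is only an illustrative sketch with names of my own choosing, truncated at a fixed level:

```python
# A minimal model of the truncated algebra R[[X_1,...,X_d]]: a series is a
# dict {word (tuple of indices): coefficient}, multiplication concatenates words.

def mul(A, B, max_level):
    C = {}
    for u, a in A.items():
        for v, b in B.items():
            if len(u) + len(v) <= max_level:      # truncate beyond max_level
                C[u + v] = C.get(u + v, 0.0) + a * b
    return C

# (1 + X_1)(1 + X_2) = 1 + X_1 + X_2 + X_1 X_2, and X_2 X_1 does NOT appear
P = {(): 1.0, (1,): 1.0}
Q = {(): 1.0, (2,): 1.0}
print(mul(P, Q, 2))   # {(): 1.0, (2,): 1.0, (1,): 1.0, (1, 2): 1.0}
```

The absence of the word (2, 1) in the product makes the non-commutativity of the indeterminates visible.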

Definition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). The signature of x (or Chen’s series) is the formal series:
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \sum_{I=(i_1,\ldots,i_k) \in \{1,...,d\}^k} \left( \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}, \quad 0 \le s \le t \le T.

As we are going to see in the next few lectures, the signature is a fascinating algebraic object. At the source of its numerous properties lie the following so-called Chen relations.

Lemma: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any word (i_1,...,i_n) \in \{ 1, ... , d \}^n and any 0 \le s \le t \le u \le T ,
\int_{\Delta^n [s,u]} dx^{(i_1,...,i_n)}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{(i_1,...,i_k)}\int_{\Delta^{n-k} [t,u]} dx^{(i_{k+1},...,i_n)},
where we used the convention that if I is a word of length 0, then \int_{\Delta^{0} [s,t]} dx^I =1.

Proof: It follows readily by induction on n by noticing that
\int_{\Delta^n [s,u]} dx^{(i_1,...,i_n)}=\int_s^u \left( \int_{\Delta^{n-1} [s,t_n]} dx^{(i_1,...,i_{n-1})} \right) dx^{i_n}(t_n) \square
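Chen's relations can be checked numerically. The sketch below (illustrative; the function and path are my own choices) verifies the n = 2 relation for a polygonal approximation of x(t) = (\cos 2\pi t, \sin 2\pi t) with intermediate time t = 1/2. Since the trapezoidal scheme computes the iterated integrals of the polygonal path exactly, the two sides agree up to floating-point error.

```python
import math

def itint(xs, word):
    """Iterated integral of the polygonal path xs along the given word."""
    J = [1.0] * len(xs)
    for i in word:
        Jn = [0.0] * len(xs)
        for m in range(1, len(xs)):
            Jn[m] = Jn[m - 1] + 0.5 * (J[m] + J[m - 1]) * (xs[m][i - 1] - xs[m - 1][i - 1])
        J = Jn
    return J[-1]

N = 2000
full = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
        for k in range(N + 1)]
left, right = full[:N // 2 + 1], full[N // 2:]   # split at t = 1/2

# Chen's relation for n = 2, word (1,2): k = 2, k = 1 and k = 0 terms
lhs = itint(full, (1, 2))
rhs = (itint(left, (1, 2))
       + itint(left, (1,)) * itint(right, (2,))
       + itint(right, (1, 2)))
print(abs(lhs - rhs))  # ≈ 0 (floating-point error only)
```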

To avoid heavy notations, it will be convenient to denote
\int_{\Delta^k [s,t]} dx^{\otimes k} =\sum_{I=(i_1,\ldots,i_k) \in \{1,...,d\}^k} \left( \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}.

This notation actually reflects a natural algebra isomorphism between \mathbb{R} [[X_1,...,X_d]] and \mathbb{R}\oplus\bigoplus_{k=1}^{+\infty} (\mathbb{R}^d)^{\otimes k}. With this notation, observe that the signature may then be written
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dx^{\otimes k},
and that Chen's relations become
\int_{\Delta^n [s,u]} dx^{\otimes n}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.
Chen's relations imply the following flow property for the signature:

Proposition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any 0 \le s \le t \le u \le T ,
\mathfrak{S} (x)_{s,u} =\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}

Proof: Indeed,
\mathfrak{S} (x)_{s,u}
=1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,u]} dx^{\otimes k}
=1 + \sum_{k=1}^{+\infty}\sum_{j=0}^{k} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{k-j} [t,u]} dx^{\otimes (k-j) }
=\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}
\square


Lecture 6. Rough paths Fall 2017

In the previous lecture we defined the Young integral \int y dx when x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1. The integral path \int_0^t ydx then has bounded p-variation. Now, if V: \mathbb{R}^d \to \mathbb{R}^{d \times d} is a Lipschitz map, the integral \int V(x) dx is only defined when \frac{1}{p}+\frac{1}{p} > 1, that is, for p < 2. With this in mind, it is apparent that Young integration should be useful for solving differential equations driven by continuous paths of bounded p-variation only when p < 2. If p \ge 2, the Young integral is of no help, and the rough paths theory explained later is the correct tool.

The basic existence and uniqueness result is the following. Throughout this lecture, we assume that p < 2.

Theorem: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a Lipschitz continuous map, that is, there exists a constant K > 0 such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
For every y_0 \in \mathbb{R}^e, there is a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Moreover y \in C^{p-var} ([0,T], \mathbb{R}^e).

Proof: The proof is of course again based on the fixed point theorem. Let 0 < \tau \le T and consider the map \Phi from the space C^{p-var} ([0,\tau], \mathbb{R}^e) into itself defined by
\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By using basic estimates on Young integrals, we deduce that
\| \Phi(y^1)-\Phi(y^2) \|_{ p-var, [0,\tau]}
\le C \| x \|_{p-var,[0,\tau]} ( \| V(y^1)-V(y^2) \|_{ p-var, [0,\tau]} +\| V(y^1)(0)-V(y^2)(0)\|)
\le CK \| x \|_{p-var,[0,\tau]}( \| y^1-y^2 \|_{ p-var, [0,\tau]}+\| y^1(0)-y^2(0)\|).
If \tau is small enough, then CK \| x \|_{p-var,[0,\tau]} < 1, which means that \Phi is a contraction of the Banach space C^{p-var} ([0,\tau], \mathbb{R}^e) endowed with the norm \| y \|_{p-var,[0,\tau]} +\| y(0)\|.

The fixed point of \Phi, let us say y, is the unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By considering then a subdivision
\{ \tau=\tau_1 < \tau_2 <\cdots <\tau_n=T \}
such that C K \| x \|_{p-var,[\tau_k,\tau_{k+1}]} < 1, we obtain a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T \square
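In practice the fixed point can be approximated by an Euler scheme along a fine discretization of the driver. The following is a hedged sketch (the function names and the test case are my own, not from the lecture), checked on the linear equation dy = y\,dx with driver x(t) = t, whose solution is e^t.

```python
import math

def euler(V, y0, xs):
    """One-step Euler scheme for y(t) = y0 + \int_0^t V(y) dx.
    V: maps a point of R^e to an e x d matrix (list of rows);
    xs: sampled driver in R^d (list of tuples)."""
    y = [list(y0)]
    d, e = len(xs[0]), len(y0)
    for m in range(1, len(xs)):
        dx = [xs[m][j] - xs[m - 1][j] for j in range(d)]
        A = V(y[-1])
        # y_{m} = y_{m-1} + V(y_{m-1}) (x_m - x_{m-1})
        y.append([y[-1][i] + sum(A[i][j] * dx[j] for j in range(d))
                  for i in range(e)])
    return y

# test case: d = e = 1, V(y) = y, x(t) = t, so y(t) = e^t
N = 5000
xs = [(k / N,) for k in range(N + 1)]
y = euler(lambda z: [[z[0]]], [1.0], xs)
print(y[-1][0], math.e)  # both ≈ 2.718
```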

As in the bounded variation case, the solution of a Young differential equation is a C^1 function of the initial condition:

Proposition: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a C^1 Lipschitz continuous map. Let \pi(t,y_0) be the flow of the equation
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Then for every 0\le t \le T, the map y_0 \to \pi (t,y_0) is C^1 and the Jacobian J_t=\frac{\partial \pi(t,y_0)}{\partial y_0} is the unique solution of the matrix linear equation
J_t=Id+ \sum_{i=1}^d \int_0^t DV_i(\pi(s,y_0))J_s dx^i(s).
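The proposition can be illustrated numerically: solve the equation and the linear Jacobian equation with the same Euler scheme, then compare J_t with a finite-difference derivative of the flow in y_0. Everything below (the choice V(y) = \sin(y), d = e = 1, the driver x(t) = t) is an illustrative assumption of mine.

```python
import math

def flow_and_jacobian(y0, xs):
    """Euler scheme for dy = sin(y) dx together with dJ = cos(y) J dx."""
    y, J = y0, 1.0
    for m in range(1, len(xs)):
        dx = xs[m] - xs[m - 1]
        J += math.cos(y) * J * dx   # DV(y) = cos(y); update J before y
        y += math.sin(y) * dx       # V(y) = sin(y)
    return y, J

N = 20000
xs = [k / N for k in range(N + 1)]  # driver x(t) = t on [0, 1]
h = 1e-6
y_plus, _ = flow_and_jacobian(1.0 + h, xs)
y_minus, _ = flow_and_jacobian(1.0 - h, xs)
_, J = flow_and_jacobian(1.0, xs)
print(J, (y_plus - y_minus) / (2 * h))  # the two values agree
```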

As we already mentioned before, solutions of Young differential equations are continuous with respect to the driving path in the p-variation topology.

Theorem: Let x, x^n \in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a bounded Lipschitz continuous map such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
Let y^n be the solution of the differential equation:
y^n(t)=y(0)+\int_0^t V(y^n(s)) dx^n(s), \quad 0\le t \le T.
If x^n converges to x in p-variation, then y^n converges in p-variation to the solution of the differential equation:
y(t)=y(0)+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.

Proof: Let 0\le s \le t \le T. We have
\| y-y^n \|_{p-var,[s,t]}
= \left\| \int_0^\cdot V(y(u)) dx(u) -\int_0^\cdot V(y^n(u)) dx^n(u) \right\|_{p-var,[s,t]}
= \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) + \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) \right\|_{p-var,[s,t]}+\left\| \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le CK \| x\|_{p-var,[s,t]} \| y-y^n \|_{p-var,[s,t]}+C\| x-x^n \|_{p-var,[s,t]}(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})
Thus, if s,t are such that CK \| x\|_{p-var,[s,t]} < 1, we obtain
\| y-y^n \|_{p-var,[s,t]} \le \frac{C(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[s,t]} } \| x-x^n \|_{p-var,[s,t]}.
In the very same way, provided CK \| x^n\|_{p-var,[s,t]} < 1, we get
\| y^n \|_{p-var,[s,t]} \le \frac{C\| V\|_{\infty, [0,T]}}{ 1-CK\| x^n\|_{p-var,[s,t]} }.

Let us fix 0 < \varepsilon < 1 and pick a subdivision 0=\tau_1 < \cdots < \tau_m=T such that for every i, CK \| x\|_{p-var,[\tau_i,\tau_{i+1}]}+\varepsilon < 1. Since \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]} \to \| x\|_{p-var,[\tau_i,\tau_{i+1}]}, for n \ge N_1 with N_1 large enough, we have
CK \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]}+\frac{\varepsilon}{2} < 1.
We deduce that for n \ge N_1,
\| y^n \|_{p-var,[\tau_i,\tau_{i+1}]} \le \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}
and
\| y-y^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C(K \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[\tau_i,\tau_{i+1}] }} \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[0,T]}.
For n \ge N_2 with N_2 \ge N_1 large enough, we have
\| x-x^n \|_{p-var,[0,T]} \le \frac{\varepsilon^3}{m},
which implies
\| y-y^n \|_{p-var,[0,T]} \le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \varepsilon^3.
\square


HW3. MA3160, Due 09/23/21

Exercise 1. Two dice are simultaneously rolled. For each pair of events defined below, determine whether or not they are independent.

(a) A1 = {the sum is 7}, B1 = {the first die lands a 3}.

(b) A2 = {the sum is 9}, B2 = {the second die lands a 3}.

(c) A3 = {the sum is 9}, B3 = {the first die lands even}.
(d) A4 = {the sum is 9}, B4 = {the first die is less than the second}.

(e) A5 = {two dice are equal}, B5 = {the sum is 8}.
(f) A6 = {two dice are equal}, B6 = {the first die lands even}.

(g) A7 = {two dice are not equal}, B7 = {the first die is less than the second}.
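Since the two dice give 36 equally likely outcomes, each part can be verified by brute-force enumeration: A and B are independent iff P(A \cap B) = P(A)P(B). Here is a sketch (not the assigned write-up), shown only for part (a):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # (first die, second die)

def prob(event):
    # exact probability under the uniform distribution on the 36 outcomes
    return Fraction(sum(1 for w in outcomes if event(w)), 36)

def independent(A, B):
    return prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)

# part (a): A1 = {the sum is 7}, B1 = {the first die lands a 3}
print(independent(lambda w: w[0] + w[1] == 7, lambda w: w[0] == 3))  # True
```

Using exact fractions avoids any floating-point equality issues when comparing P(A \cap B) with P(A)P(B).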

Exercise 2. Are the events A1, B1 and B3 from Exercise 1 independent?

Honors Exercise. Suppose you toss a fair coin repeatedly and independently. If it comes up heads, you win a dollar, and if it comes up tails, you lose a dollar. Suppose you start with $20. What is the probability you will get to $150 before you go broke?
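For a fair coin this is the classical gambler's ruin problem: first-step analysis gives the boundary-value recurrence p_0 = 0, p_{150} = 1, p_k = (p_{k-1} + p_{k+1})/2, which can be solved numerically as a check on the closed-form answer. The code below is an illustrative sketch of that check, not the expected probabilistic argument.

```python
def ruin_probability(start, target):
    """p_k = P(reach `target` before 0 | start at k) for the fair walk:
    p_0 = 0, p_target = 1, p_k = (p_{k-1} + p_{k+1}) / 2 in between.
    Solved by forward elimination of p_{k-1}, then back substitution."""
    n = target
    c = [0.0] * n                 # after elimination: p_k = c[k] * p_{k+1}
    for k in range(1, n):
        c[k] = 0.5 / (1.0 - 0.5 * c[k - 1])
    p = [0.0] * (n + 1)
    p[n] = 1.0
    for k in range(n - 1, 0, -1):
        p[k] = c[k] * p[k + 1]
    return p[start]

print(ruin_probability(20, 150))  # 2/15 ≈ 0.1333
```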
