HW2 MA5311: Due February 3

Exercise 1. Let g : \mathbf{R}^2 \to \mathbf{R}^4, (u,v) \mapsto (\cos u , \sin u, \cos v, \sin v). Show that g( \mathbf{R}^2) is a 2-dimensional smooth manifold homeomorphic to the torus \mathbf{S}^1 \times \mathbf{S}^1.

Exercise 2. Let h_+: \mathbf{S}^2-N \to \mathbf{C} be the stereographic projection from the north pole N, and h_- be the stereographic projection from the south pole S.

  1. Show that for z \neq 0, h_+ \circ h_-^{-1} (z) =\frac{1}{\bar z}.
  2. Show that if P is a nonconstant polynomial, the map f defined by f=h_+^{-1} \circ P \circ h_+ on \mathbf{S}^2-N and f(N)=N is smooth.
  3. More generally, if Q: \mathbf{C} \to \mathbf{C} is smooth, find a condition on Q so that the map f=h_+^{-1} \circ Q \circ h_+, f(N)=N is smooth.
  4. By following Milnor’s argument in the proof of the fundamental theorem of algebra, find sufficient conditions so that a smooth map Q: \mathbf{C} \to \mathbf{C} is onto.

 

Exercise 3.  By using Sard’s theorem, prove that the set of regular values of a smooth map f : M \to N is dense in N.


MA5161. HW 1 due Wednesday 1/25

Exercise 1. Show that the  \sigma-algebra \mathcal{T}(\mathbb{R}_{\ge 0},\mathbb{R}^d) is also generated by the following families:

  • \{ f \in \mathcal{A}(\mathbb{R}_{\ge 0}, \mathbb{R}^d ), f(t_1) \in B_1,...,f(t_n) \in B_n \} where t_1,...,t_n \in \mathbb{R}_{\ge 0} and where B_1,...,B_n are Borel sets in \mathbb{R}^d.
  • \{ f \in \mathcal{A}(\mathbb{R}_{\ge 0}, \mathbb{R}^d), (f(t_1),...,f(t_n)) \in B \} where t_1,...,t_n \in \mathbb{R}_{\ge 0} and where B is a Borel set in (\mathbb{R}^{d})^{n}.

 

Exercise 2.  Show that the following sets are in \mathcal{B} ([0,1],\mathbb{R}):

  • \{ f \in \mathcal{C}([0,1], \mathbb{R}), \sup_{t\in [0,1]} f(t) <1 \}
  • \{ f \in \mathcal{C}([0,1], \mathbb{R}), \exists t\in [0,1],\ f(t) =0 \}

MA5311. About John Milnor

John Milnor is a renowned mathematician who made fundamental contributions to differential topology and was awarded the Fields Medal in 1962. One of his most striking results is the existence of several distinct differentiable structures on the 7-dimensional sphere (!).

The following video is an introductory lecture on differential topology given in 1965. The first 26 minutes will give you a good overview of what differential topology is about.

 


MA5311: Homework 1, due Wednesday 1/25

Exercise 1. Show that the  sphere

\mathbf{S}^n=\left\{ (x_1,\cdots,x_{n+1}) \in \mathbf{R}^{n+1}, x_1^2+\cdots+x_{n+1}^2=1 \right\} 

is an n-dimensional smooth manifold.

Exercise 2. We consider the following two  subsets of the plane

A= \{ ( x , \sin (1/x)), x \neq 0 \}

and

B= \{ ( x , | x| ), x \in \mathbf{R} \}.

Are A and B smooth manifolds? Of course, justify your answer with a proof.

Exercise 3. Let M \subset\mathbf{R}^{k} be an n-dimensional smooth manifold. Let x \in M. A smooth curve on M is a smooth map \gamma: \mathbf{R} \to M. We denote by \Gamma_0 the set of smooth curves on M such that \gamma (0)=x. Show that the set

\left\{ \gamma'(0), \gamma \in \Gamma_0 \right\} 

is a linear space isomorphic to TM_x.

 

 


MATH 5311: Differential Topology

During the spring, I will be teaching a class on differential topology. Lecture notes will not be posted on this blog since I will be explicitly using several books. The course will mainly be organized into two parts.

Part 1. Introduction to differential topology

In this part, to simplify the presentation, all manifolds are taken to be infinitely differentiable and to be explicitly embedded in Euclidean space. A small amount of point-set topology and of real variable theory is taken for granted. We shall follow the classical reference by John Milnor: Topology from the Differentiable Viewpoint.

  1. Smooth manifolds and smooth maps
  2. Sard-Brown theorem
  3. Smooth homotopy and smooth isotopy
  4. Brouwer degree
  5. Vector fields and Euler number

Part 2. Riemannian geometry and differential topology

 

In this part we plan to show how Riemannian geometry can be used to study topological properties of a manifold. Depending on the pace of the class, we may not be able to cover all of the following topics.

  1. Abstract manifolds
  2. Whitney embedding theorem
  3. Differential forms on manifolds
  4. De Rham cohomology
  5. Riemannian metrics
  6. The Laplacian on forms and Hodge theory
  7. Connections and curvature
  8. Weitzenböck formula
  9. The Chern-Gauss-Bonnet formula
  10. Introduction  to index theory and characteristic classes

General references for this part of the course will be

  • Riemannian Geometry, by Gallot-Hulin-Lafontaine
  • The Laplacian on a Riemannian manifold, by Rosenberg

Take home questions

 

  1. Let
    L=\Delta +\langle \nabla U, \nabla \cdot\rangle,
    where U is a smooth function on \mathbb{R}^n and \Delta the usual Laplace operator on \mathbb{R}^n. Show that with respect to the measure \mu(dx)=e^{U(x)} dx, the operator L is essentially self-adjoint on \mathcal{C}_c (\mathbb{R}^n,\mathbb{R}).
  2. Compute \Gamma_2(f)=\frac{1}{2}( L\Gamma(f,f)-2\Gamma(f,Lf)) for the previous operator.
  3. Show that if, as a bilinear form, \mathbf{Hess} U \ge \rho for some \rho \in  \mathbb{R}, then the semigroup P_t=e^{tL} is stochastically complete.
  4. Let now V_1,\cdots,V_n be smooth vector fields on \mathbb{R}^n such that V_1(x),\cdots, V_n(x) is a basis of \mathbb{R}^n for every x. We denote by \Delta^g the Laplace-Beltrami operator associated with the corresponding Riemannian metric and the diffusion operator L^g f =\Delta^g f +\sum_{i=1}^n V_i U V_i f, where U is a smooth function. Show that L^g is symmetric with respect to a measure that shall be computed.
  5. Show that if the vector fields V_i are globally Lipschitz, then L^g is essentially self-adjoint.
  6. Compute \Gamma^g_2(f)=\frac{1}{2}( L^g\Gamma^g(f,f)-2\Gamma^g(f,L^gf)) where \Gamma^g is the carre du champ of L^g.
  7. Deduce a criterion for the stochastic completeness of the semigroup P^g_t=e^{tL^g}.
  8. Let K be an elliptic diffusion operator with smooth coefficients on \mathbb{R}^n. Show that K can be written as K=L^g+Z, where Z is a smooth vector field and \Delta^g is the Laplace Beltrami operator of some Riemannian metric.
  9. Show that if K is symmetric with respect to a measure equivalent to the Lebesgue measure, then it can be written as L^g (see question 4) for some g and some U.

Lecture 13. Bochner's formula

The goal of this lecture is to prove the Bochner formula: a fundamental formula that relates the so-called Ricci curvature of the underlying Riemannian structure to the analysis of the Laplace-Beltrami operator. Bochner's formula is a local formula, so we only need to prove it on \mathbb{R}^n.

Let (V_1,\cdots,V_n) be an elliptic system of smooth vector fields on \mathbb{R}^n. As usual, we introduce the structure constants of the underlying Riemannian metric:
[V_i,V_j]=\sum_{k=1}^n \omega_{ij}^k V_k.

We know that the Laplace-Beltrami operator is given by L=\sum_{i=1}^n V_i^2 +V_0, where
V_0= -\sum_{i,k=1}^n \omega_{ik}^k V_i.
We first introduce the Ricci curvature, which is seen in this lecture as a first order differential bilinear form.

If f is a smooth and compactly supported function on \mathbb{R}^n, we define
\mathcal R(f,f) = \sum_{k,l=1}^n \mathcal{R}_{k,l} V_k f V_l f
where
\mathcal{R}_{k,l} = \sum_{j=1}^n (V_l\omega^j_{kj} - V_j\omega^k_{l j}) + \sum_{i,j=1}^n \omega_{ji}^i \omega^l_{k j} - \sum_{i=1}^n\omega_{k i}^i \omega_{l i}^i
+ \frac{1}{2} \sum_{1\le i<j\le n} \bigg(\omega^l_{ij} \omega^k_{ij} - (\omega_{l j}^i +\omega_{li}^j)(\omega^i_{kj} + \omega^j_{ki})\bigg)

Though it is not apparent, \mathcal{R} is actually an intrinsic Riemannian invariant. That is, \mathcal{R} only depends on the Riemannian metric g induced by the vector fields.

In the sequel, we will use the following differential bilinear form, which has already been widely used throughout these lectures:
\Gamma(f,g) =\frac{1}{2}(L(fg)-fLg-gLf)=\sum_{i=1}^n V_i f V_i g,
and we now introduce its iteration
\Gamma_{2}(f,g) = \frac{1}{2}\left(L\Gamma(f,g) - \Gamma(f, Lg)-\Gamma (g,Lf)\right).

Henceforth, we adopt the notation
f_{,ij} = \frac{V_i V_j f + V_j V_i f}{2}
for the entries of the symmetrized Hessian of f with respect to the vector fields V_1,\ldots,V_n. Noting that V_iV_j f = f_{,ij} + \frac{1}{2} [V_i,V_j] f and using the structure constants, we obtain the useful formula
V_iV_jf = f_{,ij} + \frac{1}{2} \sum_{l=1}^n \omega^l_{ij} V_l f.
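As a simple sanity check of the notation, consider the flat frame V_i = \frac{\partial}{\partial x_i}. All structure constants \omega_{ij}^k then vanish, so that V_0=0, L=\Delta, and
\Gamma(f,g)=\sum_{i=1}^n \frac{\partial f}{\partial x_i}\frac{\partial g}{\partial x_i}=\langle \nabla f, \nabla g\rangle, \qquad f_{,ij}=\frac{\partial^2 f}{\partial x_i \partial x_j}.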

Our principal result of this lecture is the following:

Theorem: (Bochner’s identity) For every smooth function f :\mathbb{R}^n \rightarrow \mathbb{R},
\Gamma_{2}(f,f)
= \sum_{l=1}^n \left(f_{,ll} -\sum_{i=1}^n \omega_{il}^l V_i f \right)^2 +2 \sum_{1 \le l<j \le n} \left( f_{,l j}-\sum_{i=1}^n \frac{\omega_{il}^j +\omega_{ij}^l}{2} V_i f \right)^2 +\mathcal{R}(f,f).
where \mathcal R(f,f) is the quadratic form defined above.
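As a consistency check, in the flat frame V_i=\frac{\partial}{\partial x_i} considered above, \mathcal{R}=0 and the identity reduces to
\Gamma_2(f,f)=\sum_{i,j=1}^n \left(\frac{\partial^2 f}{\partial x_i \partial x_j}\right)^2 = \| \mathbf{Hess}\, f\|^2_{HS},
which can also be checked directly from the definition of \Gamma_2 with L=\Delta. This special case is recorded only for orientation and is not used in the proof below.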

Proof: We begin by observing that for any smooth function F on \mathbb{R}^n, L(F^2) = 2 F LF + 2 \Gamma(F,F).
This and the definition of \Gamma give
L \Gamma(f,f) = \sum_{i=1}^n L((V_i f)^2) = 2 \sum_{i=1}^n V_i f L(V_i f) + 2 \sum_{i=1}^n \Gamma(V_i f,V_if) .
We now have
L(V_i f) = V_0 V_i f + \sum_{j=1}^n V_j^2 V_i f = V_i V_0 f + [V_0,V_i]f + \sum_{j=1}^n \big\{V_j (V_iV_j f) + V_j[V_j,V_i]f\big\}
= V_i V_0 f + [V_0,V_i]f + \sum_{j=1}^n \big\{V_i (V_jV_j f) + [V_j,V_i]V_j f + V_j[V_j,V_i]f\big\}
= V_i(Lf) + [V_0,V_i]f + \sum_{j=1}^n \big\{[V_j,V_i]V_j f + V_j[V_j,V_i]f\big\}
= V_i(Lf) + [V_0,V_i]f + 2 \sum_{j=1}^n [V_j,V_i]V_j f +\sum_{j=1}^n [V_j,[V_j,V_i]]f.
Using this identity we find
L \Gamma(f,f) = 2 \sum_{i=1}^n V_i f \left\{V_i(Lf) + [V_0,V_i]f + 2 \sum_{j=1}^n [V_j,V_i]V_j f + \sum_{j=1}^n [V_j,[V_j,V_i]]f\right\} + 2 \sum_{i,j=1}^n (V_jV_i f)^2
= 2 \Gamma(f,Lf) + 2 \sum_{i=1}^n V_i f [V_0,V_i]f + 4 \sum_{i,j=1}^n V_if [V_j,V_i]V_j f + 2 \sum_{i,j=1}^n V_i f [V_j,[V_j,V_i]]f + 2 \sum_{i,j=1}^n (V_jV_i f)^2.
Since, thanks to the skew-symmetry of the matrix \{[V_i,V_j]f\}_{i,j=1,...,n}, we have \sum_{i,j=1}^n f_{,ij} [V_i,V_j] f = 0, we find
\sum_{i,j=1}^n (V_jV_i f)^2 = \sum_{i,j=1}^n f_{,ij}^2 + \frac{1}{4} \sum_{i,j=1}^n ([V_i,V_j] f)^2 +\sum_{i,j=1}^n f_{,ij} [V_i,V_j] f
= \sum_{i,j=1}^n f_{,ij}^2 + \frac{1}{4} \sum_{i,j=1}^n ([V_i,V_j] f)^2.
We thus obtain

\frac{1}{2} \big[L \Gamma(f,f) - 2\Gamma(f,Lf)\big] = \sum_{i=1}^n V_i f [V_0,V_i]f + 2 \sum_{i,j=1}^n V_i f [V_j,V_i]V_j f + \sum_{i,j=1}^n V_i f [V_j,[V_j,V_i]]f + \sum_{i,j=1}^n f_{,ij}^2 +\frac{1}{4} \sum_{i,j=1}^n ([V_i,V_j] f)^2.

Since we have

\Gamma_{2}(f,f) = \frac{1}{2}\left(L\Gamma(f,f) - 2 \Gamma(f, Lf)\right), we conclude \Gamma_{2}(f,f) = \sum_{i,j=1}^n f_{,ij}^2 - 2 \sum_{i,j=1}^n V_i f [V_i,V_j]V_j f + \frac{1}{4} \sum_{i,j=1}^n ([V_i,V_j] f)^2 + \sum_{i=1}^n V_i f [V_0,V_i]f + \sum_{i,j=1}^n V_i f [[V_i,V_j],V_j]f

To complete the proof we need to recognize that the right-hand side coincides with that in the statement of our result.

With this objective in mind, using the structure constants we obtain
\sum_{i,j=1}^n f_{,ij}^2 - 2 \sum_{i,j=1}^n V_i f [V_i,V_j]V_j f
= \sum_{l=1}^n f_{,ll}^2 + 2 \sum_{1 \le l<j \le n} f_{,jl}^2 - 2 \sum_{i,j=1}^n V_i f \left(\sum_{l=1}^n \omega_{ij}^l V_l \right)V_j f
= \sum_{l=1}^n f_{,ll}^2 + 2 \sum_{1 \le l<j \le n} f_{,jl}^2 - 2 \sum_{i,j=1}^n \sum_{l=1}^n \omega_{ij}^l V_l V_j f\ V_i f
= \sum_{l=1}^n f_{,ll}^2 + 2 \sum_{1 \le l<j \le n} f_{,jl}^2 - 2 \sum_{i,j=1}^n \sum_{l=1}^n \omega_{ij}^l f_{,l j}V_i f - \sum_{i,j=1}^n \sum_{l, k=1}^n \omega_{ij}^l \omega^k_{l j} V_k f V_i f
= \sum_{l=1}^n f_{,ll}^2 + 2 \sum_{1 \le l<j \le n} f_{,jl}^2 - 2 \sum_{l,j=1}^n \left(\sum_{i=1}^n \omega_{ij}^l V_i f\right) f_{,l j} - \sum_{i,j=1}^n \sum_{l, k=1}^n \omega_{ij}^l \omega^k_{l j} V_k f V_i f
= \sum_{l=1}^n \left(f_{,ll}^2 - 2 \left(\sum_{i=1}^n \omega^l_{il} V_i f\right) f_{,l l}\right) + 2 \sum_{1 \le l<j \le n}\left( f_{,jl}^2 - 2 \left(\sum_{i=1}^n \frac{\omega_{ij}^l + \omega_{il}^j}{2} V_i f\right) f_{,l j} \right)-\sum_{i,j=1}^n \sum_{l, k=1}^n \omega_{ij}^l \omega^k_{l j} V_k f V_i f.
If we now complete the squares we obtain

\sum_{i,j=1}^n f_{,ij}^2 - 2 \sum_{i,j=1}^n V_i f [V_i,V_j]V_j f
= \sum_{l=1}^n \left( f_{,ll} -\sum_{i=1}^n \omega_{il}^l V_i f \right)^2 + 2 \sum_{1 \le l<j \le n} \left( f_{,jl} -\sum_{i=1}^n \frac{\omega_{ij}^l +\omega_{il}^j}{2} V_i f \right)^2
- \sum_{l=1}^n \left(\sum_{i=1}^n \omega_{il}^l V_i f \right)^2 - 2 \sum_{1 \le l<j \le n} \left(\sum_{i=1}^n \frac{\omega_{ij}^l +\omega_{il}^j}{2} V_i f \right)^2-\sum_{i,j,k,l=1}^n \omega_{ij}^l \omega^k_{l j} V_k f V_i f.

Next, we have
\sum_{i=1}^n V_i f [V_0,V_i]f = \sum_{i,j,k,l=1}^n \omega^k_{jk} \omega^l_{ij} V_l f V_i f + \sum_{i=1}^n \sum_{j,k=1}^n (V_i \omega^k_{jk}) V_if V_j f,
and also
\sum_{i,j=1}^n V_i f [[V_i,V_j],V_j]f = \sum_{i,j=1}^n \sum_{l= 1}^n [\omega^l_{ij} V_l,V_j]f V_if
= \sum_{i,j=1}^n \sum_{l=1}^n \omega^l_{ij} V_i f [V_l,V_j] f - \sum_{i,j=1}^n \sum_{l=1}^n (V_j\omega^l_{ij}) V_if V_l f.
Using the structure constants we find
\sum_{i,j=1}^n V_i f [[V_i,V_j],V_j]f = \sum_{i,j=1}^n \sum_{l,k=1}^n \omega^l_{ij} \omega^k_{l j} V_i f V_kf - \sum_{i,j=1}^n \sum_{l=1}^n (V_j\omega^l_{ij}) V_if V_l f.
Next we have
\frac{1}{4} \sum_{i,j=1}^n ([V_i,V_j]f)^2 = \frac{1}{2} \sum_{1\le i<j\le n}\left(\sum_{l=1}^n \omega^l_{ij} V_l f\right)^2 .
We obtain therefore
\Gamma_2(f,f) = \sum_{l=1}^n \left( f_{,ll} -\sum_{i=1}^n \omega_{il}^l V_i f \right)^2 + 2 \sum_{1 \le l<j \le n} \left( f_{,jl} -\sum_{i=1}^n \frac{\omega_{ij}^l +\omega_{il}^j}{2} V_i f \right)^2 + \mathcal{M}onster
where we have let

\mathcal{M}onster = - \sum_{l=1}^n \left(\sum_{i=1}^n \omega_{il}^l V_i f \right)^2 - 2 \sum_{1 \le l<j \le n} \left(\sum_{i=1}^n \frac{\omega_{ij}^l +\omega_{il}^j}{2} V_i f \right)^2
+ \sum_{i,j,k,l=1}^n \omega^k_{jk} \omega^l_{ij} V_l f V_i f - \sum_{i,j,k,l=1}^n \omega_{ij}^k \omega^l_{k j} V_l f V_i f + \sum_{i=1}^n \sum_{j,k=1}^n (V_i \omega^k_{jk}) V_if V_j f + \sum_{i,j=1}^n \sum_{l,k=1}^n \omega^l_{ij} \omega^k_{l j} V_i f V_kf
- \sum_{i,j=1}^n \sum_{l=1}^n (V_j\omega^l_{ij}) V_if V_l f+ \frac{1}{2} \sum_{1\le i<j\le n}\left(\sum_{l=1}^n \omega^l_{ij} V_l f\right)^2.

Simplifying the expression we obtain
\mathcal{M}onster = - \sum_{k,l=1}^n \sum_{i=1}^n \omega_{k i}^i \omega_{l i}^i V_k f V_l f - \frac{1}{2} \sum_{k,l=1}^n \sum_{1 \le i<j \le n} (\omega_{l j}^i +\omega_{l i}^j)(\omega^i_{kj} + \omega^j_{ki}) V_k f V_l f + \sum_{k,l=1}^n \sum_{j=1}^n (V_l\omega^j_{kj} - V_j\omega^k_{l j}) V_kf V_l f
+ \sum_{i,j,k,l=1}^n \omega_{ji}^i \omega^l_{k j} V_k f V_l f+ \frac{1}{2} \sum_{k,l=1}^n \sum_{1\le i<j\le n} \omega^l_{ij} \omega^k_{ij} V_k f V_l f.

To complete the proof we need to recognize that the monster coincides with \mathcal R(f,f). This simple computation is left to the reader \square

Let f:\mathbb{R}^n \rightarrow \mathbb{R} be a smooth function. The matrix whose (l,j) entry is given by f_{,l j} -\sum_{i=1}^n \frac{\omega_{il}^j +\omega_{ij}^l}{2} V_i f is a Riemannian invariant. This matrix is called the Riemannian Hessian of f and denoted by \mathbf{Hess} f or \nabla^2 f. As a consequence of this and of Bochner's identity, \mathcal{R} is seen to be a Riemannian invariant.

Definition: Let \mathbb{M} be a Riemannian manifold. The bilinear form (a (0,2) tensor) locally defined by \mathbf{Ric} (\nabla f, \nabla f)= \mathcal{R}(f,f) is called the Ricci curvature of \mathbb{M}.

On a Riemannian manifold, Bochner's formula can therefore be written synthetically as

\Gamma_2 (f,f)=\| \mathbf{Hess} f \|^2_{HS}+ \mathbf{Ric} (\nabla f, \nabla f).

As a consequence, it should come as no surprise that a lower bound on \mathbf{Ric} translates into a lower bound on \Gamma_2.

Theorem: Let \mathbb{M} be a Riemannian manifold. We have, in the sense of bilinear forms, \mathbf{Ric} \ge \rho if and only if for every f \in C^\infty(\mathbb{M}),
\Gamma_2(f,f) \ge \frac{1}{n} (Lf)^2 + \rho \Gamma(f,f).

Proof: Let us assume that \mathbf{Ric} \ge \rho. In that case, from Bochner's formula we deduce that \Gamma_2 (f,f) \ge \| \mathbf{Hess} f \|^2_{HS}+ \rho \Gamma(f,f). From the Cauchy-Schwarz inequality, we have the bound
\| \mathbf{Hess} f \|^2_{HS} \ge \frac{1}{n} \left(\mathbf{Tr}\, \mathbf{Hess} f \right)^2.
Since \mathbf{Tr} \left(\mathbf{Hess} f \right)=Lf, we conclude that
\Gamma_2(f,f) \ge \frac{1}{n} (Lf)^2 + \rho \Gamma(f,f).

Conversely, let us now assume that for every f \in C^\infty(\mathbb{M}),
\Gamma_2(f,f) \ge \frac{1}{n} (Lf)^2 + \rho \Gamma(f,f).

Let x \in \mathbb{M} and v \in \mathbf{T}_x \mathbb{M}. It is possible to find a function f \in C^\infty(\mathbb{M}) such that, at x, \mathbf{Hess} f =0 and \nabla f=v. Applying the assumed inequality together with Bochner's identity at x then gives \mathbf{Ric} (v,v) \ge \rho \| v \|^2 \square

The inequality \Gamma_2(f,f) \ge \frac{1}{n} (Lf)^2 + \rho \Gamma(f,f) is called the curvature-dimension inequality. It is an intrinsic property of the operator L.
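To fix ideas: for L=\Delta on \mathbb{R}^n with the flat metric, \mathbf{Ric}=0 and the curvature-dimension inequality reads
\Gamma_2(f,f)=\| \mathbf{Hess} f \|^2_{HS} \ge \frac{1}{n} (\Delta f)^2,
which is the Cauchy-Schwarz inequality applied to the eigenvalues of the Hessian: \sum_i \lambda_i^2 \ge \frac{1}{n}\left(\sum_i \lambda_i\right)^2.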

We finally mention another consequence of Bochner's identity which will be used later.

Lemma: Let \mathbb{M} be a Riemannian manifold such that \mathbf{Ric} \ge \rho. For every f \in C^\infty(\mathbb{M}),
\Gamma(\Gamma(f)) \le 4 \Gamma (f) \left( \Gamma_2(f)-\rho\Gamma(f)\right).

Proof: It follows from Bochner's formula, which gives \Gamma_2 (f,f) \ge \| \mathbf{Hess} f \|^2_{HS}+ \rho \Gamma(f,f), combined with the Cauchy-Schwarz inequality, which implies that \Gamma(\Gamma(f)) \le 4 \| \mathbf{Hess} f \|^2_{HS} \Gamma(f). Details are left to the reader \square
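In more detail, the Cauchy-Schwarz step can be sketched in intrinsic notation as follows. Since \Gamma(f)=\| \nabla f\|^2, we have \nabla \Gamma(f) = 2\, \mathbf{Hess} f(\nabla f, \cdot), hence
\Gamma(\Gamma(f)) = \| \nabla \Gamma(f) \|^2 = 4 \| \mathbf{Hess} f (\nabla f, \cdot) \|^2 \le 4 \| \mathbf{Hess} f \|^2_{HS}\, \Gamma(f),
and Bochner's formula with \mathbf{Ric} \ge \rho gives \| \mathbf{Hess} f \|^2_{HS} \le \Gamma_2(f)-\rho \Gamma(f), which yields the lemma.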


Lecture 12. The distance associated to subelliptic diffusion operators

In this lecture we show that most of the results proved for Laplace-Beltrami operators can actually be generalized to any locally subelliptic operator.
Let L be a locally subelliptic diffusion operator defined on \mathbb{R}^n. For all smooth functions f,g: \mathbb{R}^n \rightarrow \mathbb{R}, we recall that the carre du champ operator is the symmetric first-order differential form defined by:

\Gamma (f,g) =\frac{1}{2} \left( L(fg)-fLg-gLf \right).
A straightforward computation shows that if

L=\sum_{i,j=1}^n \sigma_{ij} (x) \frac{\partial^2}{ \partial x_i \partial x_j} +\sum_{i=1}^n b_i (x)\frac{\partial}{\partial x_i},
then,

\Gamma (f,g)=\sum_{i,j=1}^n \sigma_{ij} (x) \frac{\partial f}{\partial x_i} \frac{\partial g}{\partial x_j}.
As a consequence, for every smooth function f, \Gamma(f) \ge 0.
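For instance, for the weighted Laplacian L=\Delta + \langle \nabla U, \nabla \cdot \rangle on \mathbb{R}^n, we have \sigma_{ij}=\delta_{ij} and b_i=\frac{\partial U}{\partial x_i}, so that
\Gamma(f,g)=\langle \nabla f, \nabla g \rangle:
the first-order part of a diffusion operator does not contribute to its carre du champ.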

Definition: An absolutely continuous curve \gamma: [0,T] \rightarrow \mathbb{R}^n is said to be subunit for the operator L if for every smooth function f : \mathbb{R}^n \to \mathbb{R} we have \left| \frac{d}{dt} f ( \gamma(t) ) \right| \le \sqrt{ (\Gamma f) (\gamma(t)) }. We then define the subunit length of \gamma as \ell_s(\gamma) = T.
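As a quick check of this definition, take L=\Delta, so that \Gamma(f)=\| \nabla f \|^2. Testing the inequality against the linear functions f(x)=\langle v,x \rangle shows that \gamma is subunit if and only if \| \gamma'(t) \| \le 1 for almost every t, and the subunit length is then just the time needed to join the endpoints at speed at most one.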

Given x, y\in \mathbb{R}^n, we indicate with

S(x,y) =\{\gamma:[0,T]\to \mathbb{R}^n \mid \gamma\ \text{is subunit for}\ L, \gamma(0) = x,\ \gamma(T) = y\}.
In these lectures we always assume that

S(x,y) \not= \emptyset,\ \ \ \ \text{for every}\ x, y\in \mathbb{R}^n.
If L is an elliptic operator or if L is a sum of squares operator that satisfies Hörmander's condition, then this assumption is satisfied.
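Two simple examples illustrate this. For L=\frac{\partial^2}{\partial x_1^2} on \mathbb{R}^2, we have \Gamma(f)=\left(\frac{\partial f}{\partial x_1}\right)^2; testing the subunit condition against f(x)=x_2 forces \gamma_2'(t)=0 almost everywhere, so S(x,y)=\emptyset whenever x_2 \neq y_2 and the assumption fails (the bracket [\partial_{x_1},\partial_{x_1}]=0 generates nothing new). By contrast, for the Heisenberg-type operator
L=X^2+Y^2 \text{ on } \mathbb{R}^3, \qquad X=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial z}, \quad Y=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial z},
the matrix (\sigma_{ij}) has rank 2 at every point, so L is nowhere elliptic, but [X,Y]=\frac{\partial}{\partial z}, so Hörmander's condition holds and S(x,y)\neq \emptyset for all x,y.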

Under this assumption, it is easy to verify that

d(x,y) = \inf\{\ell_s(\gamma)\mid \gamma\in S(x,y)\},
defines a true distance on \mathbb{R}^n. This is the intrinsic distance associated to the subelliptic operator L. A beautiful result by Fefferman and Phong relates the subellipticity of L to the size of the balls for this metric:

Theorem: Let x \in \mathbb{R}^n. There exist constants  \varepsilon >0, r_0 >0 and C_1,C_2 >0 such that for 0 \le r < r_0,

B(x,r) \subset B_d (x,C_1 r^\varepsilon) \subset B (x,C_2 r^\varepsilon)  ,
where B_d denotes here the ball for the metric d and B the ball for the Euclidean metric on \mathbb{R}^n.

A corollary of this result is that the topology induced by d coincides with the Euclidean topology of \mathbb{R}^n. The distance d can also be computed using the following definition:

Proposition: For every x,y \in \mathbb{R}^n,
d(x,y)=\sup \left\{ |f(x) -f(y) | , f \in C^\infty(\mathbb{R}^n) , \| \Gamma(f) \|_\infty \le 1 \right\}.

Proof: Let x,y \in \mathbb{R}^n. We denote
\delta (x,y)=\sup \{ | f(x)-f(y) |, f \in C^\infty(\mathbb{R}^n), \| \Gamma(f) \|_\infty \le 1 \}.
Let \gamma: [0,T] \to \mathbb{R}^n be a sub-unit curve such that
\gamma(0)=x, \gamma(T)=y.
We have \left| \frac{d}{dt} f ( \gamma(t) ) \right| \le \sqrt{ (\Gamma f) (\gamma(t)) }, therefore, if \Gamma(f) \le 1,

\left| f(y)-f(x) \right| \le T.
As a consequence
\delta (x,y) \le d(x,y).

We now prove the converse inequality, which is trickier. We already know that if L is elliptic then d(x,y)=\delta(x,y). If L is only subelliptic, we consider the sequence of operators L_k=L+\frac{1}{k} \Delta where \Delta is the usual Laplacian. We denote by d_k the distance associated to L_k. It is easy to see that d_k increases with k and that d_k(x,y) \le d(x,y). We can find a curve \gamma_k:[0,1] \to \mathbb{R}^n, such that \gamma_k(0)=x,\gamma_k(1)=y and for every f \in C^\infty(\mathbb{R}^n),

\left| \frac{d}{dt} f(\gamma_k(t)) \right|^2 \le \left(d^2_k(x,y)+\frac{1}{k} \right)\left( \Gamma(f) (\gamma_k(t)) + \frac{1}{k} \Gamma_\Delta(f)(\gamma_k(t)) \right),
where \Gamma_\Delta is the carre du champ operator of \Delta. Since d_k \le d, we see that the sequence \gamma_k is uniformly equicontinuous. As a consequence of the Arzela-Ascoli theorem, we deduce that there exists a subsequence which we continue to denote \gamma_k that converges uniformly to a curve \gamma:[0,1] \to \mathbb{R}^n, such that \gamma(0)=x,\gamma(1)=y and for every f \in C^\infty(\mathbb{R}^n),

\left| \frac{d}{dt} f(\gamma (t)) \right|^2 \le \sup_k d^2_k(x,y) \Gamma(f) (\gamma(t)).
By definition of d, we deduce d(x,y) \le \sup_k d_k(x,y). As a consequence, we proved that d(x,y)=\lim_{k \to \infty} d_k(x,y). Since it is clear that

d_k(x,y)= \sup \left\{ |f(x) -f(y) | , f \in C^\infty(\mathbb{R}^n) , \left\| \Gamma(f) + \frac{1}{k} \Gamma_\Delta(f) \right\|_\infty \le 1 \right\} \le \delta(x,y),
we finally conclude that d(x,y) \le \delta(x,y), hence d(x,y)=\delta(x,y).
\square

A straightforward corollary of the previous proposition is the following useful result:

Corollary: If f \in C^\infty(\mathbb{R}^n) satisfies \Gamma(f)=0, then f is constant.
The Hopf-Rinow theorem is still true with an identical proof in the case of subelliptic operators.

Theorem: The metric space (\mathbb{R}^n,d) is complete (i.e. Cauchy sequences are convergent) if and only if the compact sets are the closed and bounded sets.
Similarly, we also have the following key result:
Proposition: There exists an increasing sequence h_n\in C^\infty_0(\mathbb{R}^n), 0 \le h_n \le 1, such that h_n\nearrow 1 on \mathbb{R}^n, and \| \Gamma(h_n) \|_{\infty} \to 0, as n\to \infty if and only if the metric space (\mathbb{R}^n,d) is complete.


Lecture 11. Laplace-Beltrami operators on \mathbb{R}^n

In this lecture we define Riemannian structures and the corresponding Laplace-Beltrami operators. We first study Riemannian structures on \mathbb{R}^n to avoid technicalities in the presentation of the main ideas and then, in a later lecture, will extend our results to the manifold case.

We start with the following basic definition:

Definition: A Riemannian structure on \mathbb{R}^n is a smooth map g from \mathbb{R}^n to the set of symmetric positive definite matrices.

In other words, a Riemannian structure induces at each point x \in \mathbb{R}^n an inner product g_x, and the dependence x \rightarrow g_x is required to be smooth.

A natural way to define Riemannian structures is to start from a family of smooth vector fields V_1,\cdots, V_n such that for every x \in \mathbb{R}^n, (V_1(x),\cdots, V_n(x)) is a basis of \mathbb{R}^n. It is then easily seen that there is a unique Riemannian structure on \mathbb{R}^n that makes (V_1(x),\cdots, V_n(x)) an orthonormal basis for every x \in \mathbb{R}^n.

Conversely, given a Riemannian structure on \mathbb{R}^n, it is possible to find smooth vector fields V_1,\cdots, V_n on \mathbb{R}^n such that for every x \in \mathbb{R}^n, (V_1(x),\cdots, V_n(x)) is an orthonormal basis for this Riemannian structure (for instance, by applying the Gram-Schmidt process pointwise to the coordinate vector fields).

In this course, we shall mainly adopt this point of view on Riemannian structures and use as much as possible the language of vector fields. This point of view is not restrictive and will allow easier extensions to the sub-Riemannian case in a later part of the course.

Let us consider a family of smooth vector fields V_1,\cdots, V_n such that for every x \in \mathbb{R}^n, (V_1(x),\cdots, V_n(x)) is a basis of \mathbb{R}^n. Without loss of generality we may assume that \mathbf{det} ( V_1(x),\cdots, V_n(x)) > 0. Our goal is to associate to this Riemannian structure a canonical diffusion operator.

As a first step, we associate with the vector fields V_1,\cdots, V_n a natural Borel measure \mu which is the measure with density d\mu =\frac{1}{ \mathbf{det} ( V_1(x),\cdots, V_n(x))} dx with respect to the Lebesgue measure. This is the so-called Riemannian measure. The diffusion operator we want to consider shall be symmetric with respect to this measure.

Remark: Let (U_1,\cdots,U_n) be another system of smooth vector fields on \mathbb{R}^n such that for every x \in \mathbb{R}^n, (U_1(x),\cdots,U_n(x)) is an orthonormal basis with respect to the inner product g_x. The systems of vector fields (U_1,\cdots,U_n) and (V_1,\cdots,V_n) are related to each other, at each point, by an orthogonal transformation. This implies that |\mathbf{det} ( V_1(x),\cdots, V_n(x))|=| \mathbf{det} ( U_1(x),\cdots, U_n(x))|. In other words, the Riemannian measure \mu only depends on the Riemannian structure g.

Due to the fact that for every x \in \mathbb{R}^n, (V_1(x),\cdots, V_n(x)) is a basis of \mathbb{R}^n, we may find smooth functions \omega^{k}_{ij} on \mathbb{R}^n such that [V_i,V_j]=\sum_{k=1}^n \omega_{ij}^k V_k. These functions are called the structure constants of the Riemannian structure. Every relevant geometric quantity may be expressed in terms of these functions. Of course, these functions depend on the choice of the vector fields V_i and thus are not Riemannian invariants, but several combinations of them, like curvature quantities, are Riemannian invariants.
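As a simple illustration, which we will come back to below: on \mathbb{R}^2, take V_1=\frac{\partial}{\partial x} and V_2=e^{x}\frac{\partial}{\partial y}. Then \mathbf{det}(V_1,V_2)=e^{x}>0, the Riemannian metric making (V_1,V_2) orthonormal is g=dx^2+e^{-2x}dy^2, the Riemannian measure is d\mu=e^{-x}dxdy, and since
[V_1,V_2]=e^{x}\frac{\partial}{\partial y}=V_2,
the only nonvanishing structure constants are \omega_{12}^2=1 and \omega_{21}^2=-1.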

The following proposition expresses the formal adjoint, with respect to the measure \mu, of the vector field V_i, which is seen as an operator acting on the space of smooth and compactly supported functions.

Proposition: If f,g\in \mathcal{C}_c(\mathbb{R}^n, \mathbb{R}) are smooth and compactly supported functions, then we have \int_{\mathbb{R}^n} (V_i f ) g d\mu =\int_{\mathbb{R}^n} f (V_i^* g) d\mu, where V_i^*=-V_i+ \sum_{k=1}^n \omega_{ik}^k. 

Proof: Let us denote V_i=\sum_{j=1}^n v_i^j \frac{\partial}{\partial x_j}, and m(x)= \mathbf{det} ( V_1(x),\cdots, V_n(x)). We have \int_{\mathbb{R}^n} (V_i f ) g d\mu =\int_{\mathbb{R}^n}\sum_{j=1}^n v_i^j \frac{\partial f}{\partial x_j} g \frac{dx}{m} =-\sum_{j=1}^n\int_{\mathbb{R}^n} f \left( m \frac{\partial}{\partial x_j} \frac{1}{m} g v_i^j \right) d\mu. We now compute \sum_{j=1}^n m \frac{\partial}{\partial x_j} \frac{1}{m} v_i^j=-\sum_{j=1}^n v_i^j \frac{1}{m} \frac{\partial m}{\partial x_j} +\sum_{j=1}^n\frac{\partial v_i^j}{\partial x_j}. We then observe that \frac{\partial m}{\partial x_j} = \frac{\partial}{\partial x_j} \mathbf{det} ( V_1(x),\cdots, V_n(x)) =\sum_{k=1}^n \mathbf{det} \left( V_1(x),\cdots,\frac{\partial V_k}{\partial x_j}(x),\cdots, V_n(x)\right).
Thus, we obtain
-\sum_{j=1}^n v_i^j \frac{\partial m}{\partial x_j} +\sum_{j=1}^n m \frac{\partial v_i^j}{\partial x_j} = \sum_{k=1}^n \mathbf{det} \left( V_1(x),\cdots,-\sum_{j=1}^n v_i^j \frac{\partial V_k}{\partial x_j}(x),\cdots, V_n(x)\right)
+\sum_{k=1}^n\mathbf{det} \left( V_1(x),\cdots,\sum_{j=1}^n v_k^j \frac{\partial V_i}{\partial x_j}(x) ,\cdots, V_n(x)\right)
=-\sum_{k=1}^n\mathbf{det} \left( V_1(x),\cdots,[V_i,V_k](x) ,\cdots, V_n(x)\right)
=-\sum_{k=1}^n\mathbf{det} \left( V_1(x),\cdots,\sum_{j=1}^n \omega_{ik}^j(x) V_j(x) ,\cdots, V_n(x)\right)
=-\sum_{k=1}^n \omega_{ik}^k (x)\mathbf{det} \left( V_1(x),\cdots,V_k(x) ,\cdots, V_n(x)\right)=-m\sum_{k=1}^n \omega_{ik}^k. \square
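In the example above (V_1=\partial_x, V_2=e^{x}\partial_y, m=e^{x}), the proposition can be checked by hand: integrating by parts in x gives \int_{\mathbb{R}^2} (\partial_x f) g\, e^{-x}dxdy=\int_{\mathbb{R}^2} f(-\partial_x g+g)e^{-x}dxdy, in agreement with V_1^*=-V_1+\omega_{12}^2=-V_1+1, while integrating by parts in y gives V_2^*=-V_2, since \omega_{21}^1=\omega_{22}^2=0.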

With this integration by parts formula in hand, we are led to the following natural definition.
Definition: The diffusion operator L =-\sum_{i=1}^n V_i^* V_i =\sum_{i=1}^n V_i^2 -\sum_{i,k=1}^n \omega_{ik}^k V_i is called the Laplace-Beltrami operator associated with the Riemannian structure g.
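In the same example, \sum_k \omega_{1k}^k=\omega_{12}^2=1 and \sum_k \omega_{2k}^k=0, so the Laplace-Beltrami operator is
L=V_1^2+V_2^2-V_1=\frac{\partial^2}{\partial x^2}+e^{2x}\frac{\partial^2}{\partial y^2}-\frac{\partial}{\partial x},
and one checks directly that it is symmetric with respect to d\mu=e^{-x}dxdy. This computation is meant only as an illustration of the definition.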

The following straightforward properties of L are left as an exercise to the reader:

  • L is an elliptic operator;
  • The Riemannian measure \mu is invariant for L;
  • The operator L is symmetric with respect to \mu.

Exercise: Let (U_1,\cdots,U_n) be another system of smooth vector fields on \mathbb{R}^n such that for every x \in \mathbb{R}^n, (U_1(x),\cdots,U_n(x)) is an orthonormal basis with respect to the inner product g_x. Show that \sum_{i=1}^n V_i^* V_i=\sum_{i=1}^n U_i^* U_i. In other words, the Laplace-Beltrami operator is a Riemannian invariant: It only depends on the Riemannian structure g.

In order to apply the diffusion semigroup theory developed in the first lectures and construct without ambiguity the semigroup associated to the Laplace-Beltrami operator L, we need to know if L is essentially self-adjoint. Interestingly, this property of essential self-adjointness is closely related to a metric property of the underlying Riemannian structure: the completeness of the associated distance.

Given an absolutely continuous curve \gamma: [0,T] \rightarrow \mathbb{R}^n, we define its Riemannian length by L_g (\gamma)=\int_0^T \sqrt{g_{\gamma(s)} (\gamma'(s),\gamma'(s) )} ds. If x,y \in \mathbb{R}^n, let us denote by \mathcal{C}(x,y) the set of absolutely continuous curves \gamma: [0,1] \rightarrow \mathbb{R}^n such that \gamma(0)=x, \gamma(1)=y. The Riemannian distance between x and y is defined by d(x,y)=\inf_{\gamma \in \mathcal{C}(x,y)} L_g(\gamma). By using reparametrization, we may define the Riemannian distance in an equivalent way by using the notion of sub-unit curve. Let \gamma: [0,T] \to \mathbb{R}^n be an absolutely continuous curve. Since the vector fields V_1,\cdots,V_n form a basis of \mathbb{R}^n at each point, we may find measurable functions \alpha_i:[0,T] \to \mathbb{R} such that \gamma'(t)=\sum_{i=1}^n \alpha_i(t) V_i (\gamma(t)). The curve \gamma is then said to be sub-unit if for almost every t \in [0,T], \sum_{i=1}^n \alpha_i(t)^2 \le 1. By using reparametrization, it is easily seen that for x,y \in \mathbb{R}^n, d(x,y)=\inf \left\{ T \text{ such that there exists a sub-unit curve } \gamma:[0,T] \to \mathbb{R}^n, \gamma(0)=x, \gamma(T)=y \right\}.
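For instance, for the standard Euclidean frame V_i=\frac{\partial}{\partial x_i}, a curve is sub-unit exactly when \| \gamma'(t) \| \le 1 for almost every t, and d is the usual Euclidean distance. In the two-dimensional example introduced earlier, g=dx^2+e^{-2x}dy^2, so a horizontal segment t \mapsto (x_0+t,y_0) has unit speed while a vertical segment from (x_0,y_0) to (x_0,y_0+h) has length e^{-x_0}|h|; moving first toward larger x therefore makes displacements in y cheaper. In fact, under the change of variable w=e^{x} this metric becomes \frac{dw^2+dy^2}{w^2}, the hyperbolic half-plane metric.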

Exercise: Let \theta_1, \cdots , \theta_n \in \mathbb{R}. Show that for every x \in \mathbb{R}^n, d \left(x, e^{\sum_{i=1}^n \theta_i V_i } x \right) \le \left(\sum_{i=1}^n \theta_i^2\right)^{1/2}.

An important fact is that d hence defined is indeed a distance that induces the usual topology of \mathbb{R}^n.

Proposition: The function d defined above is a distance that induces the usual topology of \mathbb{R}^n.

Proof: Since any curve can be parametrized backwards and forwards, we have d(x,y)=d(y,x). The triangle inequality is easily proved by using juxtaposition of curves. Plainly d(x,x)=0, so it remains to prove that if x \neq y then d(x,y) > 0. Let x,y \in \mathbb{R}^n such that x \neq y. Let us denote R=\| x-y \|. The closed Euclidean ball \bar{B}_e (x,R) is compact, therefore there exist two constants \alpha, \beta > 0 such that for every z \in \bar{B}_e (x,R) and u \in \mathbb{R}^n, \alpha^2 \| u \|^2 \le g_z(u,u) \le \beta^2 \| u \|^2. Let now \gamma:[0,1]\to \mathbb{R}^n be an absolutely continuous curve such that \gamma(0)=x, \gamma(1)=y. Let \tau =\inf \{ t, \| \gamma(t)-x \|=R \}.
We have
L_g(\gamma)
=\int_0^1 \sqrt{g_{\gamma(s)} (\gamma'(s),\gamma'(s) )} ds
\ge \int_0^\tau \sqrt{g_{\gamma(s)} (\gamma'(s),\gamma'(s) )} ds
\ge \alpha \int_0^\tau \| \gamma'(s) \| ds
\ge \alpha \| \gamma(\tau) - \gamma (0) \|
\ge \alpha \| x -y \|.
As a consequence, we deduce that d(x,y) \ge \alpha \| x -y \| > 0. Therefore d is indeed a distance. Moreover, it is shown as above that for every z \in \mathbb{R}^n, R> 0, there are constants C_1,C_2> 0 such that for every x,y \in \bar{B}_e (z,R), C_1 \| x-y \| \le d(x,y) \le C_2 \| x-y \|. This implies that d induces the usual topology of \mathbb{R}^n \square

As shown in the following proposition, the distance d is intrinsically associated to the Laplace-Beltrami operator.

Proposition: For x,y \in \mathbb{R}^n, we have d(x,y) =\sup \{ | f(x)-f(y) |, f \in \mathcal{C}_c^\infty(\mathbb{R}^n,\mathbb{R}), \| \Gamma(f,f) \|_\infty \le 1 \}. 

Proof: Let x,y \in \mathbb{R}^n. We denote \delta (x,y)=\sup \{ | f(x)-f(y) |, f \in \mathcal{C}_c^\infty(\mathbb{R}^n,\mathbb{R}), \| \Gamma(f,f) \|_\infty \le 1 \}. Let \gamma: [0,T] \to \mathbb{R}^n be a sub-unit curve such that \gamma(0)=x, \gamma(T)=y. We can find \alpha_1,\cdots,\alpha_n:[0,T] \to \mathbb{R} such that \gamma'(t)=\sum_{i=1}^n V_i (\gamma(t)) \alpha_i (t), and \sum_{i=1}^n \alpha_i^2 \le 1. Let now f\in \mathcal{C}_c^\infty(\mathbb{R}^n,\mathbb{R}), \| \Gamma(f,f) \|_\infty \le 1. From the chain rule and the fundamental theorem of calculus we have f(\gamma(T))= f(\gamma(0)) + \sum_{i=1}^n \int_0^T V_i f(\gamma(s)) \alpha_i(s) ds. Therefore, from the Cauchy-Schwarz inequality, \left| f(y)-f(x) \right| \le T. As a consequence \delta (x,y) \le d(x,y).

We now prove the converse inequality, which is trickier. The idea is to use the function f(y)=d(x,y) that satisfies | f(x) -f(y)|=d(x,y) and "\Gamma(f,f) =1". However, giving a precise meaning to \Gamma(f,f) =1 is not so easy, because it turns out that f is not differentiable at x. This suggests using an approximation of the identity to regularize f and avoid discussing this differentiability issue. More precisely, fix x_o ,y_o\in \mathbb{R}^n, and for N \ge 1, consider the function \Psi_N(y)= \eta \left( \int_{\mathbb{R}^n} \rho_N (t) d(x_o,y-t) dt \right), where \rho \in \mathcal{C}_c (\mathbb{R}^n,\mathbb{R}), \rho \ge 0, \int_{\mathbb{R}^n} \rho=1, \rho_N(t)=N^n \rho(N t) and \eta \in \mathcal{C}_c (\mathbb{R},\mathbb{R}), \eta \ge 0, has the property that \| \eta ' \|_\infty \le 1 and \eta (u)=u for u \in [0,1+d(x_o,y_o)]. Since for any \theta \in \mathbb{R}^{n}, \| \theta \|=1, t \ge 0, | d(x_o,y)-d\left(x_o, e^{t \sum_{i=1}^n \theta_i V_i} y\right)| \le d\left(y, e^{t \sum_{i=1}^n \theta_i V_i} y\right) \le t, it is easy to see that \Gamma(\Psi_N,\Psi_N) \le 1+\frac{C}{N}, for some constant C > 0. Hence, applying the definition of \delta to \Psi_N/\sqrt{1+C/N} and letting N \to +\infty, \delta(x_o,y_o) \ge \lim \inf_{N \to +\infty} | \Psi_N(y_o) -\Psi_N(x_o) |=d(x_o,y_o) \square

The following theorem is known as the Hopf-Rinow theorem; it provides a necessary and sufficient condition for the completeness of the metric space (\mathbb{R}^n,d).

Proposition: The metric space (\mathbb{R}^n,d) is complete (i.e. Cauchy sequences are convergent) if and only if the compact sets are the closed and bounded sets.

Proof: It is clear that if closed and bounded sets are compact then the metric space (\mathbb{R}^n,d) is complete; this comes from the fact that Cauchy sequences are convergent if and only if they have at least one cluster value. So, we need to prove that closed and bounded sets for the distance d are compact provided that (\mathbb{R}^n,d) is complete. To check this, it is enough to prove that closed balls are compact. Let x \in \mathbb{R}^n. Observe that if the closed ball \bar{B}(x, r) is compact for some r> 0, then \bar{B}(x, \rho) is compact for any \rho < r. Define R=\sup \{ r> 0, \bar{B}(x, r) \text{ is compact } \}. Since d induces the usual topology of \mathbb{R}^n, R>0. Let us assume that R < +\infty and let us show that this leads to a contradiction. We first show that \bar{B}(x, R) is compact. Since (\mathbb{R}^n,d) is assumed to be complete, it suffices to prove that \bar{B}(x, R) is totally bounded: that is, for every \varepsilon > 0 there is a finite set S_\varepsilon such that every point of \bar{B}(x, R) belongs to an \varepsilon-neighborhood of S_\varepsilon.

So, let \varepsilon > 0 be small enough. By definition of R, the ball \bar{B}(x, R-\varepsilon / 4) is compact; it is therefore totally bounded. We can find a finite set S=\{ y_1,\cdots,y_N\} such that every point of \bar{B}(x, R-\varepsilon / 4) lies in an \varepsilon / 2-neighborhood of S. Let now y \in \bar{B}(x, R). We claim that there exists y' \in \bar{B}(x, R-\varepsilon / 4) such that d(y,y') \le \varepsilon /2. If y \in \bar{B}(x, R-\varepsilon / 4), there is nothing to prove, so we may assume that y \notin \bar{B}(x, R-\varepsilon / 4). Consider then a sub-unit curve \gamma: [0,R+\varepsilon / 4] \to \mathbb{R}^n such that \gamma(0)=x, \gamma(R+\varepsilon/4)=y. Let \tau =\inf \{t, \gamma(t) \notin \bar{B}(x, R-\varepsilon / 4) \}. We have \tau \ge R-\varepsilon / 4. On the other hand, d(\gamma(\tau), \gamma(R+\varepsilon/4)) \le R+\varepsilon/4 -\tau. As a consequence, d(\gamma(\tau),y) \le \varepsilon /2. In every case, there exists therefore y' \in \bar{B}(x, R-\varepsilon / 4) such that d(y,y') \le \varepsilon /2. We may then pick y_k in S such that d(y_k,y') \le \varepsilon / 2. From the triangle inequality, we have d(y,y_k) \le \varepsilon. So, at the end, it turns out that every point of \bar{B}(x, R) lies in an \varepsilon-neighborhood of S. This shows that \bar{B}(x, R) is totally bounded and therefore compact, because (\mathbb{R}^n,d) is assumed to be complete. Actually, the previous argument shows more: it shows that if every point of \bar{B}(x, R) lies in an \varepsilon /2-neighborhood of a finite set S, then every point of \bar{B}(x, R+\varepsilon/4) will lie in an \varepsilon-neighborhood of S, so that the ball \bar{B}(x, R+\varepsilon/4) is also compact. This contradicts the definition of R. Therefore every closed ball is a compact set, due to the arbitrariness of x \square

Checking that the metric space (\mathbb{R}^n,d) is complete is not always easy in concrete situations. From the Hopf-Rinow theorem, it suffices to prove that the closed balls are compact. The following proposition is therefore useful.

Proposition: Suppose that the vector fields V_1,\cdots,V_n have globally Lipschitz coefficients. Then the closed ball \bar B(x,R) is compact for every x \in \mathbb{R}^n and R > 0. As a consequence the metric space (\mathbb{R}^n,d) is complete.
Proof: By the hypothesis on the V_j there exists a constant M > 0 such
that \|V(x)\|=\left(\sum_{j=1}^n \|V_j(x)\|^2\right)^\frac{1}{2}\leq M(1+\|x\|) for any x\in\mathbb{R}^n. Fix x_o,y\in\mathbb{R}^n and let \gamma:[0,T]\to\mathbb{R}^n be a sub-unit curve such that \gamma(0)=x_o, \gamma(T)=y. Letting y(t)=\|\gamma(t)\|^2 we obtain y'(t)=2\langle \gamma(t),\gamma'(t)\rangle \leq 2 \|\gamma(t)\|\|\gamma'(t)\| \leq 2\|\gamma(t)\|\|V(\gamma(t))\|. We infer that y'(t)\leq C\left(\sqrt{y(t)}+y(t) \right), for some C>0 depending only on M. Integrating the latter inequality, one obtains \|\gamma(t)\| \leq (A+\| x_o \|) e^{\frac{C}{2}T},\quad t\in [0,T], for some constant A >0. The previous estimate shows in particular that B(x_o,R)\subset B_e(0, (A+\| x_o \|) e^{CR}). We conclude that \bar B(x_o,R) is Euclidean compact. Since the metric and the Euclidean topology coincide, it is also d-compact \square

Completeness of the metric space (\mathbb{R}^n,d) is related to the essential self-adjointness of the Laplace-Beltrami operator.

Theorem: If the metric space (\mathbb{R}^n,d) is complete, then the Laplace-Beltrami operator L is essentially self-adjoint.

Proof: We know that if there exists an increasing sequence h_n\in C_c(\mathbb{R}^n,\mathbb{R}) such that h_n\nearrow 1 on \mathbb{R}^n, and ||\Gamma(h_n,h_n)||_{\infty} \to 0, as n\to \infty, then the operator L is essentially self-adjoint. We are therefore reduced to proving the existence of such a sequence. Let us fix a base point x_0\in\mathbb{R}^n. We can find an exhaustion function \rho\in C^\infty(\mathbb{R}^n,\mathbb{R}) and a constant K>0 such that |\rho - d(x_0,\cdot)| \le K,\ \ \ \ \ \ |\Gamma(\rho,\rho)|\le K \ \ \text{on } \mathbb{R}^n. By the completeness of (\mathbb{R}^n,d) and the Hopf-Rinow theorem, the level sets \Omega_s = \{x\in \mathbb{R}^n \mid \rho(x) < s\} are relatively compact and, furthermore, \Omega_s \nearrow \mathbb{R}^n as s\to \infty. We now pick an increasing sequence of functions \phi_n\in C^\infty([0,\infty)) such that \phi_n\equiv 1 on [0,n], \phi_n \equiv 0 outside [0,2n], and |\phi_n'|\le \frac{2}{n}. If we set h_n(x) = \phi_n(\rho(x)), then we have h_n\in C_c(\mathbb{R}^n,\mathbb{R}), h_n\nearrow1 on \mathbb{R}^n as n\to \infty, and ||\Gamma(h_n,h_n)||_{\infty} \le \frac{4K}{n^2} \square


Assistant Research Position at UConn

I am currently looking for a postdoc starting in August 2017 in the Mathematics department at the University of Connecticut.

The position is described here:

Assistant Research Position

Please apply through mathjobs.org and contact me for further details.

 

 
