## Why study “pullback attractors”?

Pullback attractors for PDE systems were my first research project. Consider a non-autonomous (partial) differential equation, for $t> \tau \in \mathbb R$,

$u'(t) = A(t)u(t) \qquad u(\tau) = u_{\tau}$

with $u_\tau \in X$ for a Banach space $X$. If the evolution equation has a unique solution in $X$ for each $u_\tau \in X$, then it generates an evolution process $U(t,\tau): X \rightarrow X$, where $U(t,\tau)u_{\tau}$ is the solution of the equation at time $t$ with initial data $u_\tau$ at time $\tau$.

The pullback attractor for $\{U(t,\tau)\}$ is a family of compact sets $\{\mathcal A(t)\}_{t\in\mathbb R}$ which is invariant, i.e. $U(t,\tau)\mathcal A(\tau) = \mathcal A(t)$ for all $t \geq \tau$, and which pullback attracts all bounded sets, that is, for any fixed time $t\in \mathbb R$,

$\lim_{\tau \rightarrow -\infty}d_{H}(U(t,\tau)B, \mathcal A(t)) = 0$

for every bounded set $B \subset X$, where $d_H$ denotes the Hausdorff semi-distance.

The main difference between the pullback attractor and the usual global attractor is that the pullback attractor attracts bounded sets in a “pullback” sense: instead of sending the final time $t$ to infinity, we send the initial time $\tau$ to minus infinity. In other words, instead of starting at an initial time and moving into the future, we fix the current time and go back into the past.

I have been asked many times: “What does it mean to go back into the past, when the main idea of attractors is to investigate the large-time behaviour, i.e. the future?” To be honest, I did not have a convincing answer.

Luckily, after attending a talk by Stefanie Sonner today, I now have a good answer to that question.

Consider the simple non-autonomous ODE

$u' = -u + t$ with initial data $u(\tau) = u_{\tau} \in \mathbb R$.

By solving the equation we easily get that

$u(t) = e^{-(t-\tau)}(u_{\tau} - \tau + 1) + t - 1$.

It is obvious that $\lim_{t\rightarrow+\infty}u(t) = +\infty$, so there is no hope for a compact (or even bounded) attracting set as $t\rightarrow+\infty$.

However, if we take the difference of two solutions $u$ and $v$ (with different initial data), then we have

$(u - v)' = -(u-v)$ or equivalently $(u-v)(t) = e^{-(t-\tau)}(u_\tau - v_{\tau})$.

This suggests that there is still an asymptotic behaviour of the equation worth studying, and that is where pullback attractor theory comes in: at each fixed time $t$, the difference of any two solutions vanishes as $\tau\rightarrow-\infty$, and from the explicit formula one checks that at time $t$ all solutions are pulled toward the single point $\{t-1\}$.
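As a quick numerical illustration (a sketch of my own; the solver and values are my choices, not from the post), one can evaluate the explicit solution above at a fixed time $t$ while sending the initial time $\tau$ to $-\infty$:

```python
import math

def u(t, tau, u_tau):
    # Explicit solution of u' = -u + t with u(tau) = u_tau:
    #   u(t) = e^{-(t - tau)} (u_tau - tau + 1) + t - 1
    return math.exp(-(t - tau)) * (u_tau - tau + 1) + t - 1

t = 0.0                       # fix the "current" time
for tau in (-1.0, -5.0, -10.0, -20.0):
    # Start from very different data at time tau; at the fixed time t
    # all of these solutions are pulled toward the same point t - 1.
    print(tau, u(t, tau, u_tau=3.0), u(t, tau, u_tau=-7.0))
```

Since the exponential factor kills any bounded $u_\tau$ (and even the linear growth in $\tau$) as $\tau\rightarrow-\infty$, the printed values approach $t - 1 = -1$: the pullback attractor at time $t$ is the single point $\{t-1\}$.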

Posted in Attractors | 2 Comments

## Entropy method for chemical reaction-diffusion systems II: A linear system

In the last post, I showed how the entropy method works for a single equation: the heat equation with homogeneous Neumann boundary condition. Applying the entropy method to systems is more involved and requires more effort to bring out what we want.

In this post, we first apply the method to a linear system. The application to nonlinear systems is more complicated and will be treated in subsequent posts.

Modelling the reaction-diffusion system

Assume that the reaction

$\boxed{A \leftrightharpoons B}$

takes place in a bounded domain $\Omega\subset \mathbb R^n$, with the forward and backward reaction rate constants normalised to $1$. The corresponding linear reaction-diffusion (RD) system for the two unknown concentrations $a(x,t)$ and $b(x,t)$ reads

$\begin{cases} a_t - d_a\Delta a = -a + b, \quad x\in \Omega\\ b_t - d_b\Delta b = a - b, \quad x\in\Omega\\ \nabla a \cdot \nu = \nabla b \cdot \nu = 0, \quad x\in \partial\Omega\end{cases}$

and initial data $a(x,0) = a_0(x)$ and $b(x,0) = b_0(x)$.

Normalised volume and average value

Throughout this post, we will assume that $\Omega$ has a normalised volume, that is $|\Omega| = 1$. For a function $f$, we denote by

$\overline{f} = \int_{\Omega}f(x)dx$

the average value of $f$.

Conservation of mass

The solution of this system admits the conservation of mass

$\overline{a}(t) + \overline{b}(t) = \overline{a_0} + \overline{b_0} =: M$ for all $t>0$.

Constant equilibrium

The unique equilibrium is $(a_{\infty}, b_{\infty}) = \left(\frac{M}{2}, \frac{M}{2}\right)$.

Convergence to equilibrium?

To prove the convergence to equilibrium $(a,b) \longrightarrow (a_{\infty}, b_{\infty})$ as $t\rightarrow +\infty$ we consider the quadratic entropy functional

$\boxed{E[a,b](t) = \|a\|^2 + \|b\|^2}$,

where $\|\cdot\|$ is the usual $L^2$ norm, and its entropy dissipation

$\boxed{D[a,b](t) = -\frac{d}{dt}E[a,b](t) = 2d_a\|\nabla a\|^2 + 2d_b\|\nabla b\|^2 + 2\|a - b\|^2}$.

We now aim to prove an entropy-entropy dissipation estimate of the form

$\boxed{D[a,b] \geq \lambda (E[a,b] - E[a_{\infty}, b_{\infty}])}\qquad \qquad (*)$

for some $\lambda >0$.

To prove the inequality (*) we follow a scheme which later turned out to be very effective for nonlinear systems as well.

Step 0: (Decompose the difference of entropy)

Using the mass conservation $\overline{a} + \overline{b} = M$ and values of the equilibrium we can show that

$E[a,b] - E[a_{\infty}, b_{\infty}] = E_1 + E_2$

where

$E_1 = E[a,b] - E[\overline{a}, \overline{b}] = \|a - \overline{a}\|^2 + \|b - \overline{b}\|^2$

and

$E_2 = E[\overline{a}, \overline{b}] - E[a_{\infty}, b_{\infty}] = (\overline{a} - a_{\infty})^2 + (\overline{b} - b_{\infty})^2$.

In the next steps, we will try to control $E_1$ and $E_2$ separately.

Step 1: (Role of diffusion)

To control $E_1$, we use the Poincaré inequality to get

$\|\nabla a\|^2 \geq \lambda_1\|a - \overline{a}\|^2$ and $\|\nabla b\|^2 \geq \lambda_1\|b - \overline{b}\|^2$.

Hence, we have

$\frac{1}{2}D[a,b] \geq \min\{d_a, d_b\}\lambda_1 E_1$.

Step 2: (Reaction of averages)

Applying Jensen’s inequality $\int_{\Omega}f^2dx \geq \left(\int_{\Omega}fdx\right)^2$ (valid here since $|\Omega|=1$; see here for more general inequalities) we have

$\|a - b\|^2 \geq (\overline{a} - \overline{b})^2 = [(\overline{a} - a_{\infty}) - (\overline{b} - b_{\infty})]^2 = 2\left[(\overline{a} - a_{\infty})^2 + (\overline{b} - b_{\infty})^2\right] = 2E_2$

where we used the mass conservation $\overline{a} + \overline{b} = M = a_{\infty} + b_{\infty}$, which gives $\overline{a} - a_{\infty} = -(\overline{b} - b_{\infty})$, at the last two steps. Therefore, we have

$\frac{1}{2}D[a,b] \geq \|a - b\|^2 \geq 2E_2 \geq E_2$.

Step 3: (Combining steps 1 and 2)

From steps 1 and 2, we easily see that

$\boxed{D[a,b] \geq \mu(E[a,b] - E[a_{\infty}, b_{\infty}])}$

with $\mu = \min\{\min\{d_a, d_b\}\lambda_1,1\}$ (adding the two halves of $D$ estimated in steps 1 and 2); thus, by the Gronwall inequality,

$E[a,b](t) - E[a_{\infty}, b_{\infty}] \leq e^{-\mu t}(E[a_0,b_0] - E[a_{\infty},b_{\infty}])$.

On the other hand, by the mass conservation we easily see that

$E[a,b](t) - E[a_{\infty}, b_{\infty}] = \|a(t) - a_{\infty}\|^2 + \|b(t) - b_{\infty}\|^2$.

In conclusion, we have proved

$\boxed{\|a(t) - a_{\infty}\|^2 + \|b(t) - b_{\infty}\|^2 \leq e^{-\mu t}(E[a_0,b_0] - E[a_{\infty},b_{\infty}])},$

that is, the solution converges exponentially to the equilibrium in $L^2(\Omega)$.
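To see the estimate in action, here is a minimal finite-difference sketch (my own discretisation, diffusivities and data, on $\Omega = (0,1)$; not from the post) checking that the relative entropy $E[a,b] - E[a_{\infty},b_{\infty}]$ indeed decays to zero:

```python
import numpy as np

# Sketch on Omega = (0,1) (so |Omega| = 1); grid, time step, diffusivities
# and initial data are illustrative choices.
n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
d_a, d_b = 0.5, 1.0
dt = 0.2 * dx**2 / max(d_a, d_b)        # stable step for explicit Euler

def neumann_lap(f):
    g = np.pad(f, 1, mode="edge")       # zero-flux (Neumann) ghost cells
    return (g[:-2] - 2.0 * f + g[2:]) / dx**2

a = 1.0 + np.cos(np.pi * x)
b = 2.0 + 0.5 * np.sin(2.0 * np.pi * x)
M = a.mean() + b.mean()                 # conserved total mass
E_inf = M**2 / 2.0                      # E[a_inf, b_inf] with a_inf = b_inf = M/2

rel_entropy = []
for _ in range(int(1.0 / dt)):          # integrate up to time T = 1
    a, b = (a + dt * (d_a * neumann_lap(a) - a + b),
            b + dt * (d_b * neumann_lap(b) + a - b))
    rel_entropy.append((a**2).mean() + (b**2).mean() - E_inf)

print(rel_entropy[0], rel_entropy[-1])  # relative entropy decays toward 0
```

The scheme conserves the discrete mass exactly, and the recorded relative entropy decreases monotonically, consistent with the entropy-entropy dissipation estimate above.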

## Entropy method for chemical reaction-diffusion systems I: Entropy method (continued)

In the last post, we saw the motivation for the question of convergence to equilibrium. This post presents a method which gives not only a qualitative result (does the trajectory converge to equilibrium?) but (usually) also a quantitative one (how fast is the convergence?).

Gronwall’s inequality

Let us start with the classical Gronwall inequality: assume that $f: \mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function. If

$\boxed{\partial_tf(t) \leq \lambda f(t)}$ for all $t\geq 0$

then

$\boxed{f(t) \leq e^{\lambda t}f(0)}$ for $t\geq 0$.

In particular, when $\lambda < 0$, $f(t)$ decays exponentially.
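For completeness, the proof is a one-line computation with the integrating factor $e^{-\lambda t}$:

$\frac{d}{dt}\left(e^{-\lambda t}f(t)\right) = e^{-\lambda t}\left(\partial_t f(t) - \lambda f(t)\right) \leq 0,$

so $e^{-\lambda t}f(t)$ is non-increasing, hence $e^{-\lambda t}f(t) \leq f(0)$, i.e. $f(t) \leq e^{\lambda t}f(0)$ for all $t \geq 0$.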

A heat equation

Let us now see how to show convergence to equilibrium for the heat equation with homogeneous Neumann boundary condition: let $\Omega \subset \mathbb{R}^N$ be a bounded domain with $C^1$ boundary. Consider the heat equation for $u(x,t)$

$u_t - \Delta u = 0$  in $\Omega$

$\partial_{\nu} u = 0$ on $\partial\Omega$

$u(x,0) = u_0(x)$ in $\Omega$.

The equilibrium $u_{\infty}$ satisfies $-\Delta u_{\infty} = 0$ in $\Omega$ and $\partial_{\nu}u_{\infty} = 0$ on $\partial \Omega$, so any constant function $u_{\infty} \equiv \mathrm{const}$ is an equilibrium.

However, note that from $u_t - \Delta u = 0$, we get, after integrating over $\Omega$

$\partial_t \int_{\Omega}udx - \int_{\Omega}\Delta u dx = 0$.

Using the homogeneous Neumann boundary condition we have $\int_{\Omega} \Delta u dx = 0$. Then it follows that

$\partial_t \int_{\Omega} udx = 0$

and consequently

$\int_{\Omega} u(x,t)dx = \int_{\Omega}u_0(x)dx$    for all $t>0$.

This is called the mass conservation (or conservation of mass) of the heat equation. It hints that the equilibrium should also satisfy the mass conservation, i.e.

$\int_{\Omega} u_{\infty}dx = \int_{\Omega} u_0(x)dx$ or equivalently $\boxed{u_{\infty} = \frac{1}{|\Omega|}\int_{\Omega}u_0(x)dx}$.

We now aim to prove the convergence of $u(x,t)$ to $u_{\infty}$ as $t\rightarrow +\infty$.

Denote by $\|\cdot\|$ the usual norm of $L^2(\Omega)$. We define an entropy functional

$E(u) = \|u(t) - u_{\infty}\|^2$.

Computing the time derivative of $E(u)$, we have

$\frac{d}{dt}E(u) = 2\int_{\Omega}(u - u_{\infty})u_tdx = 2\int_{\Omega}(u-u_{\infty})\Delta u dx = -2\|\nabla u\|^2$.

We call the quantity $D(u) := -\frac{d}{dt}E(u) = 2\|\nabla u\|^2$ the entropy dissipation (or entropy production). By the Poincaré inequality

$\|\nabla u\|^2 \geq \lambda \left\|u - \frac{1}{|\Omega|}\int_{\Omega}udx\right\|^2$

we have

$D(u) = 2\|\nabla u\|^2 \geq 2\lambda \left\|u - \frac{1}{|\Omega|}\int_{\Omega}udx\right\|^2 = 2\lambda\|u - u_{\infty}\|^2 = 2\lambda E(u)$.

This is called an entropy-entropy dissipation estimate. Therefore, we have

$\frac{d}{dt}E(u) = -D(u) \leq -2\lambda E(u)$,

thus, by the Gronwall lemma

$E(u)(t) \leq e^{-2\lambda t}E(u)(0)$

or equivalently

$\|u(t) - u_{\infty}\| \leq e^{-\lambda t}\|u_0 - u_{\infty}\|$.

Theorem. The trajectory $u(x,t)$ of the heat equation with homogeneous Neumann boundary condition converges exponentially to $u_{\infty}$ with rate $\lambda$, where $\lambda$ is the constant in the Poincaré inequality.
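As a numerical sanity check (my own discretisation; not from the post), one can integrate the 1D Neumann heat equation on $\Omega = (0,1)$, where the Poincaré constant is $\lambda = \pi^2$, starting from the slowest mode $u_0 = \cos(\pi x)$, and measure the decay rate of $\|u - u_{\infty}\|$:

```python
import numpy as np

# Finite-difference sketch (grid and step are illustrative choices). With
# u0 = cos(pi x) we have u_inf = mean(u0) = 0, and the theory predicts
# ||u(t) - u_inf|| ~ e^{-pi^2 t}.
n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
dt = 0.25 * dx**2                       # stable explicit Euler step
u = np.cos(np.pi * x)
norm0 = np.sqrt((u**2).mean())

steps = int(0.2 / dt)                   # integrate up to T = 0.2
for _ in range(steps):
    g = np.pad(u, 1, mode="edge")       # homogeneous Neumann BC
    u = u + dt * (g[:-2] - 2.0 * u + g[2:]) / dx**2

T = steps * dt
rate = -np.log(np.sqrt((u**2).mean()) / norm0) / T
print(rate)                             # close to pi^2 ~ 9.87
```

The measured rate agrees with $\pi^2$ up to discretisation error, matching the rate $\lambda$ in the Theorem.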

Entropy method

We now state the basic idea of the entropy method for an evolution equation of the form

$\partial_t f = F(f)$

which has a unique equilibrium $f_{\infty}$ and a set of mass conservation laws.

We aim to find

(i) an entropy functional $E(f)$ with the property $E(f) - E(f_{\infty}) \geq C\|f - f_{\infty}\|$;

(ii) nonnegative entropy dissipation $D(f) = -\frac{d}{dt}E(f) \geq 0$; and

(iii) an entropy-entropy dissipation estimate of the form

$D(f) \geq \lambda(E(f) - E(f_{\infty}))$.

If we have (i)-(iii) then it follows first by Gronwall’s lemma that $E(f)(t) \rightarrow E(f_{\infty})$ exponentially and then by (i) that $f(t) \rightarrow f_{\infty}$ exponentially.

(Some) Good things about entropy method

(i) The entropy method is based on a functional inequality (the entropy-entropy dissipation estimate) which is usually not directly linked to the evolution system. That makes the method quite robust under generalisation: once the functional inequality is established, it can be used in any other system with a similar entropy functional and entropy dissipation.

(ii) The entropy method usually gives an explicit rate of convergence: if we can prove the functional inequality with an explicit constant, then the convergence follows with an explicit rate. This is the quantitative result mentioned before.

Posted in Entropy | 1 Comment

## Entropy method for chemical reaction-diffusion systems I: Entropy method

The term “entropy method” can mean different things. In this series of posts, I will discuss an entropy method which is very useful for proving convergence to equilibrium for evolution equations. The main application will be in chemical reaction network theory.

Convergence to equilibrium of evolution equation

Consider a typical evolution equation

(1) $\boxed{\partial_tf(t) = \mathcal{F}(f)(t)}$ for all $t>0$

with initial data $f(0) = f_0$. A state $f_{\infty}$ is called an equilibrium if it is a time-independent solution of (1), that is

$\boxed{\mathcal F(f_{\infty}) =0}$.

Many physical or chemical phenomena tend to a stable state in large time. A typical example is a pendulum. After moving it from its equilibrium position, the pendulum starts to move but eventually comes back to its equilibrium because of air friction.

It is noted however that there are also different scenarios like pattern formation or chaos.

Hence, it is interesting to ask whether the trajectory $f(t)$ of (1) converges to the equilibrium $f_{\infty}$ as $t\rightarrow +\infty$ or not.

Questions concerning convergence to equilibrium

There are several important (and interesting) questions to be answered:

1. (The most natural one) Does the trajectory converge to an equilibrium when $t\rightarrow +\infty$?
2. (Multiple equilibria) It is possible to have multiple equilibria. So the next question is: to which equilibrium does the trajectory converge?
3. (The rate of convergence) How fast is the convergence?

The 1st question is qualitative (one just wants to know whether it converges or not) while the 3rd is quantitative (one wants to know how fast it converges). It should be noted that sometimes the 3rd question turns out to be much more difficult than the 1st. My favourite example is the inhomogeneous Boltzmann equation: though the convergence to equilibrium had been investigated since the beginning of the twentieth century, it was not until the early 2000s that L. Desvillettes and C. Villani published their result on how fast it converges to equilibrium.

The method they used is the entropy method, which will be the main tool in this series of posts.

Posted in Entropy | 1 Comment

## [Applications of Mathematics] How can we locate the positions emitting noise on a truck?

On the occasion of attending an interesting workshop, and having received a few questions about applications of mathematics, let me present the following example, which I find very meaningful.

(This post is based on joint work with K. Pieper, P. Trautmann and D. Walter.)

A company that manufactures long-haul trucks ran into trouble: during test drives the trucks emitted unusual noises. The company's engineers relied on their experience but still could not identify which component produced the noise. Of course, one cannot dismantle every part for inspection (and even if one could, there would be no way to test them, since the noise only appears while the truck is running).

[At this point, many readers may already have interesting ideas about how to find the spots causing the noise.]

We will shortly see a mathematical approach to this problem. [Let me stress that this is not a made-up problem invented by mathematicians: I recently met, quite by chance, an Italian friend working in mechanical engineering who wants to solve exactly this problem, of course for a different kind of engine rather than a truck.]

Back to the truck problem. To determine the noise positions, one installs microphones at several positions in the truck and drives it for some distance. When the moving truck emits noises, the microphones record the frequencies and amplitudes of the emitted sounds.

The problem is now the following: given the frequencies and amplitudes received by the microphones, how can one find the positions emitting the noise?

What is the mathematical model of this problem?

We know that sound waves satisfy the wave equation

$\boxed{p_{tt} - c^2\Delta p = f(t,x)}$ + boundary conditions

where $p$ is the sound amplitude, $c$ is the speed of sound, and $f$ is the sound source. We may (naturally) assume that only a few spots on the truck cause the noise, so the source has the form

$\boxed{f(t,x) = u(t)\delta_{x^*}}$

where $\delta_{x}$ denotes the Dirac delta at the point $x$ (for simplicity we assume here that there is only one noise-emitting point; the case of several points can be treated similarly). Moreover, since the motion of the truck can be regarded as uniform, the source is periodic in time, and can therefore be written as

$\boxed{u(t) = \sum_{n=1}^{N}\mathrm{Re}(u_n\exp(i\omega_n t))}$

where $\omega_n$ are the frequencies recorded by the microphones.

Now, applying the Fourier transform in time to the original wave equation at each frequency $\omega_n$, we obtain the Helmholtz equations

$\boxed{-\omega_n^2 p_n - c^2\Delta p_n = u_n\delta_{x^*}}$ + boundary conditions

Suppose that we record the amplitudes $p_d^k$, $k=1,2,\ldots,K$, at the points $x_k$, $k=1,2,\ldots,K$. The problem of determining the point $x^*$ then reduces to the following optimal control problem:

$\boxed{\min_{x^*\in \Omega,\, u\in \mathbb{C}^N}\left(\dfrac 12 \sum_{k=1}^{K}\|p(x_k) - p_d^k\|_{\mathbb C^N}^2 + \alpha \|u\|_{\mathbb C^N}\right)}$

subject to $\boxed{-\omega_n^2p_n - c^2\Delta p_n = u_n\delta_{x^*}}$ for $n=1,2,\ldots, N$.

This is a typical example leading to a class of problems in optimal control known as sparse optimal control, which has attracted a lot of attention recently.
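To make this concrete, here is a toy 1D sketch, entirely my own setup (discretisation, a single frequency, synthetic data, and a plain grid search instead of the sparse optimal control formulation above), that recovers a point-source position from a few “microphone” measurements:

```python
import numpy as np

# 1D Helmholtz  -omega^2 p - c^2 p'' = u * delta_{x*}  on (0,1) with
# Dirichlet boundary conditions; all numbers below are illustrative.
n = 200
dx = 1.0 / n
x = np.arange(1, n) * dx                 # interior grid points
c, omega = 1.0, 2.0                      # omega^2 is not an eigenvalue here

diag = np.full(n - 1, -omega**2 + 2.0 * c**2 / dx**2)
off = np.full(n - 2, -c**2 / dx**2)
A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def field(j_src, u=1.0):
    rhs = np.zeros(n - 1)
    rhs[j_src] = u / dx                  # discrete Dirac at x[j_src]
    return np.linalg.solve(A, rhs)

mics = [20, 65, 130, 185]                # microphone grid indices
j_true = 110                             # "unknown" source position
p_d = field(j_true, u=2.5)[mics]         # synthetic measurements

best = None
for j in range(n - 1):                   # grid search over candidate x*
    g = field(j)[mics]
    u_opt = (g @ p_d) / (g @ g)          # best amplitude for this position
    misfit = np.sum((p_d - u_opt * g)**2)
    if best is None or misfit < best[0]:
        best = (misfit, j, u_opt)

print(x[best[1]], best[2])               # recovered position and amplitude
```

Since the field depends linearly on the amplitude, the inner minimisation over $u$ is explicit, and the outer search over the source position recovers $x^*$ exactly in this noiseless toy example.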

## Exponential convergence to equilibrium for a class of chemical reactions

In this post, we will see how a Lyapunov functional helps us prove explicit convergence to equilibrium for some chemical reactions.

We consider a single reversible reaction

$2U \leftrightharpoons V$

with the forward and backward reaction rates both assumed to equal $1$. Denote by $u(t), v(t)$ the concentrations of $U, V$ at time $t$, respectively. Using the law of mass action, we arrive at the following nonlinear system of ODEs:

$\begin{cases}u' = -2(u^2 - v),&t>0,\\ v' = u^2 - v, &t>0,\\ u(0) = u_0\geq 0, \quad v(0) = v_0\geq 0 \end{cases} (1)$

We note that the system (1) possesses the mass conservation

$u(t) + 2v(t) = u_0 + 2v_0 =: M \quad \text{ for all } t>0$

where we call $M > 0$ the initial mass.

The system (1) has a unique equilibrium $(u_\infty, v_\infty)$ which balances the reaction and satisfies the mass conservation:

$\begin{cases}u_{\infty}^2 = v_{\infty},\\ u_{\infty} + 2v_{\infty} = M\end{cases}$

We remark that, without imposing the mass conservation, the system (1) would have infinitely many stationary states.

The main question is: do the concentrations tend to the equilibrium as $t\rightarrow +\infty$?

Theorem. There exists an explicit constant $\lambda>0$ depending on the initial mass $M$ such that,

$|u(t) - u_{\infty}| + |v(t) - v_{\infty}| \leq e^{-\lambda t}(|u_0-u_\infty| + |v_0 - v_\infty|)$.

The Theorem tells us that we are not only able to show convergence to equilibrium but also to compute the rate of convergence explicitly. The idea of the proof is to use the free energy functional (or Boltzmann-type entropy functional).

Proof.

Multiplying the equation for $u$ by $\log u$ and the equation for $v$ by $\log v$, we have

$(u\log u - u)_t = u_t\log u = -2\log(u)(u^2-v)$

and

$(v\log v - v)_t = v_t\log v = \log(v)(u^2-v)$.

Summing up the two equations above we have

$\frac{d}{dt}(u\log u - u + v\log v - v) = -(u^2-v)\log(u^2/v) \leq 0$

or equivalently

$\frac{d}{dt}E[u,v](t) = -D[u,v](t) \leq 0$

with $E[u,v] = u\log u - u + v\log v - v$ and $D[u,v] = (u^2 - v)\log(u^2/v)$.

Remark. The functional $E$ is called the free energy or (Boltzmann-type) entropy, and $D$ is accordingly called the entropy dissipation.

We note that $D[u,v] = 0$ iff $u^2 = v$, which combined with the mass conservation implies that

$D[u,v] = 0 \quad \text{ if and only if } (u,v) = (u_\infty,v_\infty)$.

This gives us a hope that the convergence actually happens!

To do that, we need two useful estimates related to the relative entropy:

(i) $E[u,v] - E[u_\infty,v_\infty] = u\log(u/u_\infty) - u + u_{\infty} + v\log(v/v_\infty) - v + v_\infty$; and

(ii) $E[u,v] - E[u_\infty,v_\infty] \geq C(|u-u_\infty|^2 + |v - v_\infty|^2)$.

Hints: The proof of (i) follows from direct computations using the mass conservation. The proof of (ii) is a bit more tricky and is left as an exercise (for you, the reader, to check!).

The property (ii) tells us that if we can prove the convergence of the entropy $E[u,v](t)$ to $E[u_\infty,v_\infty]$ as $t\rightarrow +\infty$, then we will get the convergence of the solutions.

To do that, we first observe

$\frac{d}{dt}(E[u,v] - E[u_\infty,v_\infty]) = -D[u,v]$.

If we can prove that

$D[u,v] \geq \alpha(E[u,v] - E[u_\infty,v_\infty])$ (*)

then we have

$\frac{d}{dt}(E[u,v] - E[u_\infty,v_\infty]) \leq -\alpha(E[u,v] - E[u_\infty,v_\infty])$

which, by the help of Gronwall’s lemma, gives us $E[u,v](t) \longrightarrow E[u_\infty,v_\infty]$ exponentially as $t\rightarrow +\infty$ with the rate $\alpha$.

Therefore, our aim now is to prove the inequality (*). This inequality is called entropy-entropy dissipation estimate.

In order to do this, we use the idea of the Bakry-Émery criterion: we investigate the second derivative of the relative entropy $E[u,v]-E[u_\infty,v_\infty]$, or equivalently the first derivative of the entropy dissipation $D[u,v]$.

By direct computations, we have

$\frac{d}{dt}D[u,v] = -(4u+1)(u^2 - v)\log(u^2/v) - (u^2 - v)^2(4/u + 1/v) \leq -D[u,v]$

thanks to the positivity of the solution (why do we have this? It would be nice if you could find the answer yourself). We thus get

$D[u,v](t) \leq e^{-t}D[u_0,v_0] \longrightarrow 0\quad \text{ as } t\rightarrow +\infty$.

Now, we integrate

$\frac{d}{dt}D[u,v] \leq -D[u,v]$

from $t$ to $+\infty$, and use $D[u,v](+\infty) = 0$ and $E[u,v](+\infty) = E[u_\infty,v_\infty]$ (one more why!!! Hint: $E[u,v]$ is a Lyapunov functional) to obtain

$-D[u,v](t) \leq -(E[u,v](t) - E[u_\infty,v_\infty])$

or equivalently

$D[u,v] \geq (E[u,v] - E[u_\infty,v_\infty])$.

We have proved (*) with $\alpha = 1$, and thus completed the proof of the Theorem.
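As a quick numerical check of the Theorem (the data, time step and explicit Euler integrator are my own choices), one can integrate (1) and watch the distance to the equilibrium determined by the mass $M$:

```python
import numpy as np

# Explicit Euler for u' = -2(u^2 - v), v' = u^2 - v. The equilibrium
# solves u_inf^2 = v_inf and u_inf + 2 v_inf = M, i.e. 2 u_inf^2 + u_inf = M,
# whose positive root is u_inf = (-1 + sqrt(1 + 8M)) / 4.
u, v = 3.0, 0.5
M = u + 2.0 * v                          # conserved mass (here M = 4)
u_inf = (-1.0 + np.sqrt(1.0 + 8.0 * M)) / 4.0
v_inf = u_inf**2

dt = 1e-4
err0 = abs(u - u_inf) + abs(v - v_inf)
for _ in range(int(5.0 / dt)):           # integrate up to T = 5
    u, v = u + dt * (-2.0 * (u**2 - v)), v + dt * (u**2 - v)

err = abs(u - u_inf) + abs(v - v_inf)
print(err0, err)                         # the distance decays exponentially
```

Note that the Euler update conserves $u + 2v$ exactly, so the discrete trajectory converges to the same equilibrium $(u_\infty, v_\infty)$.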

Remark.

(i) One could think of extending the method to the spatially inhomogeneous case with diffusion, that is, when the concentrations depend not only on time but also on the spatial variables. The system then reads

$\begin{cases}u_t - d_u\Delta u = -2(u^2 - v), &x\in\Omega, t>0,\\ v_t - d_v\Delta v = u^2 - v, &x\in\Omega, t>0\end{cases}$

with homogeneous Neumann boundary conditions. However, due to the presence of the spatial variables, the Bakry-Émery method leads to “nasty terms” which are (almost) impossible to control. There is nonetheless a way to prove a corresponding version of (*); it was first given in a work of Desvillettes and Fellner.

(ii) With patient computations, one sees that the method also works for the more general reversible reaction

$\alpha_1U_1 + \alpha_2U_2 + \ldots + \alpha_NU_N \leftrightharpoons \beta_1V_1 + \beta_2V_2 + \ldots + \beta_MV_M$

for any number $N$ and $M$ of substances.

Posted in Entropy | 3 Comments