On a Neumann boundary control in a parabolic system
Boundary Value Problems volume 2015, Article number: 166 (2015)
Abstract
In this paper we deal with controlling a boundary condition of a one-dimensional parabolic system. The aim of the control is to find the right-hand-side boundary function that brings the solution of the system at the final time as close as possible to a desired target. Since problems of this type are ill posed, we use a regularized solution. We test the theoretical results by numerical examples.
1 Introduction
We consider the following one-dimensional parabolic partial differential equation:
where \(k > 0\) and \(h ( x,t )\), \(u_{0} ( x )\) are given functions satisfying the following conditions:
We want to obtain a suitably sized boundary function \(g ( t ) \in H^{1} ( 0,T )\) that drives the solution of the problem (1.1)-(1.3) toward the desired target \(y ( x ) \in L_{2} ( 0,l )\) at the final time \(t = T\).
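Based on the physical description in Section 3 (source term h, insulated left end, prescribed heat flux at the right end), the system (1.1)-(1.3) presumably has the following form; the exact scaling of the flux condition is an assumption:

```latex
% Presumed form of (1.1)-(1.3); the scaling of the right-end flux condition is assumed.
\begin{aligned}
u_t &= k\,u_{xx} + h(x,t), && (x,t)\in(0,l)\times(0,T],\\
u(x,0) &= u_0(x), && x\in(0,l),\\
u_x(0,t) &= 0,\qquad k\,u_x(l,t) = g(t), && t\in(0,T].
\end{aligned}
```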
This process requires the use of the following cost functional:
and solving the problem
On the other hand we know that the problem (1.6) is numerically ill posed. In other words, quite different \(g ( t )\) functions can minimize the functional (1.5). Therefore, instead of the functional (1.5), we introduce the new functional
and solve the problem
Here \(\alpha > 0\) is a regularization parameter which ensures both the uniqueness of the solution and a balance between the norms \(\Vert u ( x,T;g ) - y ( x ) \Vert _{L_{2} ( 0,l )}^{2}\) and \(\Vert g \Vert _{H^{1} ( 0,T )}^{2}\). We show the ill-posedness for \(\alpha = 0\) by a numerical example. Detailed information as regards the regularization parameter can be found in [1].
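Collecting the two norms named above, the regularized functional (1.7) reads:

```latex
J_\alpha(g) = \bigl\Vert u(x,T;g) - y(x) \bigr\Vert_{L_2(0,l)}^{2}
  + \alpha\,\Vert g \Vert_{H^1(0,T)}^{2},
\qquad
\Vert g \Vert_{H^1(0,T)}^{2} = \int_0^T \bigl[ g^2(t) + \bigl(g'(t)\bigr)^2 \bigr]\,dt .
```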
2 Some previous works and the different aspects of this work
Neumann boundary control problems with various objective functionals have received a great deal of attention in recent years [2–5]. In addition, important studies involving final-time targets are as follows.
In his famous work, Lions [6] considered the control u in the parabolic system
minimizing the cost function
with target \(z_{d}\) and operator N. Taking \(f \in L_{2} ( Q )\), \(y_{0} \in L_{2} ( \Omega )\), \(u \in L_{2} ( \Sigma )\), he gave the optimality conditions.
Hasanoğlu [7] considered the boundary value problem
and investigated the determination of the pair \(w: = \{ F ( x,t ),T_{0} ( t ) \}\) in the set
minimizing the functional
Hasanoğlu obtained the Fréchet derivative of the functional, established a minimizing sequence, and stated that this sequence weakly converges to the quasi-solution of the problem.
Dhamo and Tröltzsch [8] investigated the controllability aspects for optimal parabolic boundary control problems of type
subject to the one-dimensional heat equation
on the set of feasible controls
Altmüller and Grüne [9] studied the stability properties of model predictive control without terminal constraints applied to the heat equation,
by the cost functional
on the controls set \(L_{\infty} ( [ 0,T ] )\).
This work chooses more regular controls than the previous works [6, 8, 9]: we take the controls in a closed and convex set \(G_{\mathrm{ad}} \subset H^{1} ( 0,T )\). This choice adds the \(H^{1} ( 0,T )\)-norm of the control to the functional. When the control lies in \(L_{2}\), the Fréchet derivative contains only the solution of the adjoint equation. In the \(H^{1} ( 0,T )\) case, the Fréchet derivative contains not only the solution of the adjoint equation but also the solution of a second-order ordinary differential equation.
Numerical examples are rarely encountered in the literature. This work contains a detailed numerical investigation. Both the ill-posedness for \(\alpha = 0\) and the regularizing effect of this parameter for \(\alpha > 0\) are illustrated in detail.
3 A motivation for the problem
In this section we give a motivation for the problem. Consider a wire with diffusivity constant k. This wire is heated by a discontinuous heat source h. The initial temperature distribution is \(u_{0}\). The left end is insulated and the right end has a heat flux \(g ( t )\). The heat flux intensity function \(g ( t )\) produces the heat distribution \(u ( x,t;g )\) which is the solution of the PDE.
We want to control, via α, both the magnitude of the heat flux function \(g ( t )\) and the distance between the heat distribution u at the final time T and \(y ( x )\). The optimal values are denoted by \(g_{*}\) and \(J_{*}\) (see Figure 1).
4 Existence and uniqueness of optimal solution
In this section we prove the existence and uniqueness of optimal solution. Let us define the closed and convex subset \(G_{\mathrm{ad}} \subset H^{1} ( 0,T )\) of admissible controls.
First of all we know from [10], p.33, that for every \(u_{0} ( x ) \in H^{1} ( \Omega )\), \(h ( x,t ) \in L_{2} ( Q )\), and \(g ( t ) \in H^{1} ( 0,T )\), the boundary value problem (1.1)-(1.3) admits a unique solution \(u \in H^{2,1} ( Q )\) that depends continuously on h, \(u_{0}\), and g by the following estimate:
where \(c_{1}\) is a constant independent of h, \(u_{0}\), and g. Before giving the existence and uniqueness theorem for an optimal solution, we rearrange the cost functional \(J_{\alpha} ( g )\) given by (1.7) as follows:
To use the linearity of the transform \(g \to u [ g ] - u [ 0 ]\), we add and subtract the term \(u ( x,T;0 )\) in the functional \(J_{\alpha} ( g )\).
If we define the auxiliary functionals
then \(J_{\alpha} ( g )\) in (4.2) is briefly written as
Due to the linearity of the transform \(g \to u [ g ] - u [ 0 ]\), it can easily be seen that the functional \(\pi ( g,g )\) defined by (4.3) is bilinear, coercive, symmetric, continuous, and strictly convex. In addition, the functional Lg is linear, continuous, and convex.
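Writing \(u[g] = u(x,T;g)\), the splitting that produces π and L can be sketched as the standard quadratic decomposition (a sketch consistent with (4.2)-(4.5)):

```latex
J_\alpha(g) = \underbrace{\Vert u[g] - u[0] \Vert_{L_2(0,l)}^2
              + \alpha \Vert g \Vert_{H^1(0,T)}^2}_{\pi(g,g)}
  \;-\; 2 \underbrace{\bigl( u[g] - u[0],\; y - u[0] \bigr)_{L_2(0,l)}}_{Lg}
  \;+\; \Vert y - u[0] \Vert_{L_2(0,l)}^2 ,
```

so minimizing \(J_\alpha\) over \(G_{\mathrm{ad}}\) is equivalent to minimizing \(\pi(g,g) - 2Lg\), the last term being independent of g.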
Now, we give the following theorem for the existence and uniqueness in view of [11].
Theorem 4.1
Let \(\pi ( g,g )\) be a coercive, bilinear, continuous, and symmetric form and let Lg be a linear and continuous functional. Then there is a unique element \(g_{*} \in G_{\mathrm{ad}}\) such that
for the functional given in (4.2).
Proof
Let \(\{ g_{k} \} \in G_{\mathrm{ad}}\) be a minimizing sequence for \(J_{\alpha} ( g )\). By this we mean that
for \(k \to \infty\). Coercivity and continuity of \(\pi ( g,g )\) give
Combining (4.8) with (4.9) we conclude that
Then the sequence \(\{ g_{k} \}\) has a weakly converging subsequence \(\{ g_{k_{m}} \}\) converging to the element \(g_{*} \in H^{1} ( 0,T )\). The set \(G_{\mathrm{ad}}\) is weakly closed, since it is closed and convex. Hence
Moreover, the transform \(g \to J_{\alpha} ( g )\) is weakly lower semicontinuous, since \(g \to \pi ( g,g )\) is weakly lower semicontinuous and \(g \to Lg\) is weakly continuous. Then by the definition of lower semicontinuity, we have
We can write the following using (4.8):
and by (4.11) we obtain
Hence the existence of the solution for the problem (1.1)-(1.6) is obtained.
For uniqueness we use the strict convexity of \(J_{\alpha} ( g )\), since for all \(g_{1} \ne g_{2} \in H^{1} ( 0,T )\) and \(\beta \in ( 0,1 )\),
Now let \(g_{1}\) and \(g_{2}\) be two elements satisfying
Since the set \(G_{\mathrm{ad}}\) is convex
and since \(J_{\alpha} ( g )\) is strictly convex while \(g_{1} \ne g_{2}\) we get
and this is a contradiction. Then we must have \(g_{1} = g_{2}\). This shows that the minimum element is unique. Theorem 4.1 has been proven. □
5 Well-posedness of the problem
In Section 4, we proved the existence and uniqueness of optimal solution. In this section, we show that for a minimizing sequence \(\{ g_{k} ( t ) \}\), the convergence of \(J_{\alpha} ( \{ g_{k} \} ) \to J_{\alpha} ( g_{*} )\) implies \(\Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )} \to 0\) for \(k \to \infty\) while \(\alpha > 0\).
For this purpose we must show that the functional \(J_{\alpha} ( g )\) is strongly convex.
Theorem 5.1
The functional \(J_{\alpha} ( g )\) is strongly convex with the convexity constant α.
Proof
By the definition of strong convexity of a functional, we must prove that
for \(\chi > 0\).
First, let us show that the functional \(\alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2}\) is strongly convex. For all \(g_{1},g_{2} \in G_{\mathrm{ad}}\) and \(\beta \in [ 0,1 ]\), we can write
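The identity used here holds in any inner-product space; multiplied by α and taken in the \(H^1(0,T)\) norm, it reads:

```latex
\alpha \bigl\Vert \beta g_1 + (1-\beta) g_2 \bigr\Vert_{H^1(0,T)}^2
  = \alpha\beta \Vert g_1 \Vert_{H^1(0,T)}^2
  + \alpha(1-\beta) \Vert g_2 \Vert_{H^1(0,T)}^2
  - \alpha\beta(1-\beta) \Vert g_1 - g_2 \Vert_{H^1(0,T)}^2 .
```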
Hence \(\alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2}\) is strongly convex with the convexity constant \(\chi = \alpha\). Recalling the expression of \(\pi ( g,g )\) and using the above equality, we have
On the other hand we know from Section 4 that \(\pi ( g,g )\) is strictly convex, so we get
The functional \(\pi ( g,g )\) is strongly convex with the convexity constant α. As for \(J_{\alpha} ( g )\) we get
and this implies (5.1). Hence \(J_{\alpha} ( g )\) is strongly convex with the convexity constant \(\chi = \alpha\). □
Theorem 5.2
For the strongly convex functional \(J_{\alpha} ( g )\) with the convexity constant α, there is a minimizing sequence which converges strongly to an element \(g_{*}\) and satisfies the following inequality:
Proof
This proof can be done in a similar way to [12]. If we take \(\beta = \frac{1}{2}\) in (5.1) then
On the other hand, since
we get
and
Hence the proof is done. □
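Taking \(\beta = \frac{1}{2}\) in (5.1) and using \(J_\alpha ( \frac{g_k + g_*}{2} ) \ge J_\alpha ( g_* )\) yields the rate estimate (a sketch consistent with the steps of the proof):

```latex
\frac{\alpha}{4}\,\Vert g_k - g_* \Vert_{H^1(0,T)}^2
 \le \frac{J_\alpha(g_k) - J_\alpha(g_*)}{2},
\qquad\text{i.e.}\qquad
\Vert g_k - g_* \Vert_{H^1(0,T)}^2 \le \frac{2}{\alpha}\bigl(J_\alpha(g_k)-J_\alpha(g_*)\bigr),
```

so the convergence \(J_\alpha(g_k) \to J_\alpha(g_*)\) forces \(g_k \to g_*\) strongly in \(H^1(0,T)\).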
6 Obtaining the optimal solution
Up to now we have seen that if a minimizing sequence is found, then the limit of this sequence is the solution of the optimal control problem. In this section, we investigate how to obtain this minimizing sequence. To do this, we must derive the adjoint problem and the Fréchet derivative of the functional.
6.1 Adjoint problem and Fréchet derivative of the functional
The Lagrange functional for the problem can be written as follows:
The stationarity condition \(\delta L = 0\) gives the adjoint problem
Let \(\Delta g ( t )\) be an increment to the function \(g ( t )\), then the difference function \(\Delta u ( x,t ) = u ( x,t;g + \Delta g ) - u ( x,t;g )\) is the solution of the difference problem:
Furthermore the difference function \(\Delta u ( x,t )\) satisfies the following estimate for \(t \in [ 0,T ]\):
On the other hand, the difference for the functional subject to \(\Delta g ( t )\) is
We can obtain the following equality using the adjoint and difference problems:
Also, considering (6.9) in (6.8), we get
In order to have the inner product in the space \(H^{1} ( 0,T )\) we must consider the function ξ, which is the weak solution of the following problem:
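A standard way to realize ξ is as the \(H^1(0,T)\) Riesz representative of the \(L_2\)-gradient density. Denoting that density by \(w(t)\) (in this problem the boundary trace of the adjoint state; an assumption here, since this is a sketch), ξ is the weak solution of the Neumann problem

```latex
% Hedged sketch: xi is the H^1(0,T) Riesz representative of the L_2-gradient density w.
-\xi''(t) + \xi(t) = w(t), \qquad \xi'(0) = \xi'(T) = 0 ,
```

so that \(\int_0^T (\xi\,\Delta g + \xi'\,\Delta g')\,dt = \int_0^T w\,\Delta g\,dt\) for every \(\Delta g \in H^1(0,T)\); this is the second-order ordinary differential equation mentioned in Section 2.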
Then we write
and
Rearranging this, we obtain the equality
We take into account (6.7) and the following definition of the Fréchet derivative:
We thus obtain the Fréchet derivative of the functional:
6.2 Constituting a minimizing sequence
In this section, we construct a minimizing sequence using the gradient method. If we take the initial element \(g_{0} \in G_{\mathrm{ad}}\), we can constitute a minimizing sequence by the rule
where \(J' ( g_{k} )\) is the Fréchet derivative evaluated at the element \(g_{k}\). The \(\beta_{k}\) are sufficiently small numbers satisfying
Computations of the \(\beta_{k}\) can be carried out by one of the methods shown in [12]. Since the functional is weakly lower semicontinuous, we have
Iteration can be stopped by one of the following criteria:
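The iteration (6.14) with a fixed step \(\beta_k = \beta\) and a smallness-based stopping rule can be sketched as follows. The quadratic functional below is a hypothetical stand-in for \(J_\alpha\), since evaluating the true functional requires the PDE solver:

```python
# Gradient iteration g_{k+1} = g_k - beta * J'(g_k), stopped when the step
# becomes small.  J here is a hypothetical quadratic stand-in for J_alpha:
# J(g) = sum_i (g_i - target_i)^2, with gradient J'(g) = 2 (g - target).

def minimize(g0, grad, beta=0.2, tol=1e-6, max_iter=1000):
    """Fixed-step gradient descent; returns the last iterate."""
    g = list(g0)
    for _ in range(max_iter):
        step = [beta * d for d in grad(g)]
        g = [gi - si for gi, si in zip(g, step)]
        if max(abs(s) for s in step) < tol:   # stopping criterion
            break
    return g

# Stand-in target playing the role of the (discretized) optimal control.
target = [6.0, 3.0, 1.0]
grad = lambda g: [2.0 * (gi - ti) for gi, ti in zip(g, target)]
g_star = minimize([0.0, 0.0, 0.0], grad)
```

For this stand-in with \(\beta = 0.2\), the error contracts by a factor 0.6 per iteration, mimicking the slow but monotone decrease of the functional values reported in Section 7.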
7 A numerical example
Let us consider the following problem on the domain \(( x,t ) \in Q = ( 0,1 ) \times ( 0,1 )\), choosing \(k = 1\), \(l = 1\), \(T = 1\):
We use the cost functional
and want to solve the problem
We consider the solution of the parabolic problem (7.1)-(7.3) as \(u = u_{1} + u_{2}\) with \(u_{2} = \frac{x^{2}}{2l}g ( t )\). Then the following problem with a homogeneous boundary condition for the function \(u_{1}\) is obtained:
The weak solution for the problem (7.6)-(7.8) can be defined as follows:
and the solution of this equality can be approximated by the Faedo-Galerkin method using the sum
Here the functions \(\varphi_{k} ( x )\) form an orthogonal basis in \(H^{1} ( \Omega )\). Compatible with the boundary values, we can take these functions as
The unknown functions \(c_{k} ( t )\) in (7.9) are found from the system of first-order ordinary differential equations
In this system,
is the matrix of unknowns,
is the initial data matrix. The coefficient matrices M and A are such that
and the right-hand side matrix H is
Since M and A are diagonal, the system (7.10) decouples into scalar ordinary differential equations. Therefore we can solve (7.10) and find the functions \(c_{k} ( t )\) exactly.
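Because M and A are diagonal, each mode satisfies a scalar linear ODE \(m_k c_k'(t) + a_k c_k(t) = H_k(t)\), which is solved by an integrating factor. A minimal numerical sketch (the coefficient values below are hypothetical placeholders, not the paper's actual matrices):

```python
import math

# With diagonal M and A, the Galerkin system M c' + A c = H decouples into
#   m_k c_k'(t) + a_k c_k(t) = H_k(t),  c_k(0) = b_k,
# whose exact solution via the integrating factor is
#   c(t) = e^{-lam t} * ( b_k + \int_0^t e^{lam s} H_k(s)/m_k ds ),  lam = a_k/m_k.

def solve_mode(m_k, a_k, H_k, b_k, t, n_steps=2000):
    """Evaluate the integrating-factor formula with trapezoidal quadrature."""
    lam = a_k / m_k
    integral, ds = 0.0, t / n_steps
    for i in range(n_steps):
        s0, s1 = i * ds, (i + 1) * ds
        f0 = math.exp(lam * s0) * H_k(s0) / m_k
        f1 = math.exp(lam * s1) * H_k(s1) / m_k
        integral += 0.5 * (f0 + f1) * ds
    return math.exp(-lam * t) * (b_k + integral)

# Example mode: m = 1, a = pi^2, constant forcing H = 1, zero initial data.
c = solve_mode(1.0, math.pi**2, lambda s: 1.0, 0.0, 1.0)
```

For this example mode the exact value at \(t = 1\) is \((1 - e^{-\pi^2})/\pi^2\), which the trapezoidal quadrature reproduces to several digits.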
First, let us take \(\alpha = 0\) and consider the functional
The minimum value of this functional is \(J_{*} = 0\), attained at \(g_{*} = 6\cos (t)\). Taking \(N = 10\) in the Faedo-Galerkin approximation, we obtain the minimum value \(J_{*} = 0.27 \times 10^{ - 8}\).
The problem is ill posed in this case, since nearly the same minimum value is attained by quite different \(g ( t )\) functions.
Starting with the initial element \(g_{0} = \cos t\), if we construct a minimizing sequence by (6.14) for \(\beta = 0.2\) then we obtain the following element after 101 iterations:
The value of the functional for the element \(g_{101}\) is \(J ( g_{101} ) = 0.020786\). But the norm of the difference between these functions is \(\Vert g_{101} - g_{*} \Vert _{H^{1} ( 0,1 )} = 2.354540\). A graph of this solution is given in Figure 2.
If we start from another initial element, \(g_{0} = 1\), and construct a minimizing sequence by (6.14) for \(\beta = 0.2\), then we obtain the following element after 101 iterations:
The value of the functional for the element \(g_{101}\) is \(J ( g_{101} ) = 0.029751\). But the norm of the difference between these functions is \(\Vert g_{101} - g_{*} \Vert _{H^{1} ( 0,1 )} = 2.817847\).
These examples show that the problem is numerically ill posed for \(\alpha = 0\).
We take \(\alpha > 0\) as a regularization parameter and minimize the functional (7.4) using the minimizing sequence by (6.14) for \(\beta = 0.2\).
The values \(\int_{0}^{1} [ u ( x,1;g ) - y ( x ) ]^{2}\, dx\) and \(\Vert g \Vert _{H^{1} ( 0,1 )}^{2}\) are obtained as given in Table 1, if the stopping criterion is taken as \(\vert J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) \vert < 1 \times 10^{ - 6}\).
In Figure 3, we can see that the values of \(\int_{0}^{1} [ u ( x,1;g ) - y ( x ) ]^{2}\, dx\) become smaller and the values of \(\Vert g \Vert _{H^{1} ( 0,1 )}^{2}\) become larger as α decreases. The opposite occurs as α increases.
The problem is well posed for any \(\alpha > 0\). For example if we take \(\alpha = 0.6\) we get the functional
Let us construct a minimizing sequence by (6.14) for \(\beta = 0.2\) and stop the iteration by the criterion \(\vert J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) \vert < 1 \times 10^{ - 6}\). If we start with the initial element \(g_{0} = 0\), we get the minimum value \(J_{0.6*} = 9.565356\) and the minimum element
If we start with the initial element \(g_{0} = \cos (t)\), we get the minimum value \(J_{0.6*} = 9.565356\) and the minimum element
The norm of the difference between these functions is \(\Vert g_{27} - g_{15} \Vert _{H^{1} ( 0,1 )} = 0.000841\).
It can be seen from Figure 4 that the minimum values and the minimum elements are close to each other. The problem is numerically well posed.
References
1. Vasilev, FP: Numerical Methods for Solving Extremal Problems. Nauka, Moscow (1988)
2. Levaggi, L: Variable structure control for parabolic evolution equations. In: Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference, Seville, Spain, 12-15 December (2005)
3. Qian, L, Tian, L: Boundary control of an unstable heat equation. Int. J. Nonlinear Sci. 3(1), 68-73 (2007)
4. Elharfi, A: Output-feedback stabilization and control optimization for parabolic equations with Neumann boundary control. Electron. J. Differ. Equ. 2011, 146 (2011)
5. Sadek, IS, Bokhari, MA: Optimal boundary control of heat conduction problems on an infinite time domain by control parameterization. J. Franklin Inst. 348, 1656-1667 (2011)
6. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)
7. Hasanoğlu, A: Simultaneous determination of the source terms in a linear parabolic problem from the final overdetermination: weak solution approach. J. Math. Anal. Appl. 330, 766-779 (2007)
8. Dhamo, V, Tröltzsch, F: Some aspects of reachability for parabolic boundary control problems with control constraints. Comput. Optim. Appl. 50, 75-110 (2011)
9. Altmüller, N, Grüne, L: A comparative stability analysis of Neumann and Dirichlet boundary MPC for the heat equation. In: Proceedings of the 1st IFAC Workshop on Control of Systems Modeled by Partial Differential Equations - CPDE (2013)
10. Lions, JL, Magenes, E: Non-Homogeneous Boundary Value Problems and Applications, vol. II. Springer, Berlin (1972)
11. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)
12. İskenderov, AD, Tagiyev, RQ, Yagubov, QY: Optimization Methods. Çaşıoğlu, Bakü (2002)
Acknowledgements
The authors are thankful to all the referees for their suggestions.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The main idea of this paper was proposed by ŞSŞ and MS. ŞSŞ and MS prepared the manuscript initially and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Şener, Ş.S., Subaşi, M. On a Neumann boundary control in a parabolic system. Bound Value Probl 2015, 166 (2015). https://doi.org/10.1186/s13661-015-0430-5