Constructive analysis of periodic solutions with interval halving
Boundary Value Problems, volume 2013, Article number: 57 (2013)
Abstract
For a constructive analysis of the periodic boundary value problem for systems of nonlinear nonautonomous ordinary differential equations, a numerical-analytic approach is developed which allows one both to study the solvability and to construct approximations to the solution. An interval halving technique is introduced, by means of which one can significantly weaken the conditions required to guarantee the convergence. The main assumption on the equation is that the nonlinearity is locally Lipschitzian.
An existence theorem based on properties of approximations is proved. A relation to Mawhin’s continuation theorem is indicated.
MSC: 34B15.
Introduction
In this paper, we shall develop a numerical-analytic approach to the analysis of periodic solutions of systems of nonautonomous ordinary differential equations using the idea introduced in [1]. The method is numerical-analytic in the sense that its realisation consists of two stages concerning, respectively, an explicit construction of certain equations and their numerical analysis, and is close in spirit to the Lyapunov-Schmidt reductions [2, 3]. However, neither a small parameter nor an implicit function argument is used.
We focus on numerical-analytic schemes based upon successive approximations. In the context of the theory of nonlinear oscillations, such types of methods were apparently first developed in [4–8]. We refer the reader to [9–20] for the related bibliography.
For a boundary value problem, the numerical-analytic approach usually replaces the problem by a family of initial value problems for a suitably perturbed system containing a vector parameter which most often has the meaning of the initial value of the solution. The solution of the Cauchy problem for the perturbed system is sought for in an analytic form by successive approximations, whereas the numerical value of the parameter is determined later from the so-called determining equations.
In order to guarantee the convergence, a Lipschitz-type condition is usually assumed [9–12] and a smallness restriction of the type
is imposed, where K is the Lipschitz matrix and ${q}_{T}$ depends on the period T. The improvement of condition (0.1) consists in maximising the value of the constant ${q}_{T}$.
In this paper, which is a continuation of [1], we provide a constructive approach to the study of solvability of the periodic problem (1.3), (1.4), where the analysis of convergence uses the interval halving technique. We shall see that, under fairly general assumptions, this idea allows one to replace (0.1) by the weaker condition
and, thus, significantly improve the convergence conditions established, in particular, in [6–9, 12]. The restriction imposed on the width of the domain is likewise improved. Finally, an existence theorem based upon the properties of approximate solutions is proved. The proofs use a number of technical facts from [1], which are stated in the course of exposition when appropriate.
1 Problem setting and basic assumptions
The method that we are interested in deals with T-periodic solutions of a system of nonlinear ordinary differential equations
where $f:{\mathbb{R}}^{n+1}\to {\mathbb{R}}^{n}$ is a continuous function such that
for all $z\in {\mathbb{R}}^{n}$ and $t\in (-\mathrm{\infty},\mathrm{\infty})$. Here, T is a given positive number. We restrict ourselves to considering continuously differentiable solutions of system (1.1) and, furthermore, instead of T-periodic solutions of (1.1), we shall always deal with the solutions $u:[0,T]\to {\mathbb{R}}^{n}$ of the corresponding periodic boundary value problem on the bounded interval $[0,T]$,
The passage to problem (1.3), (1.4) is justified by assumption (1.2).
Our main assumption is that $f:[0,T]\times {\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$ is Lipschitzian with respect to the space variable in a certain bounded set D, which is the closure of a bounded and connected domain in ${\mathbb{R}}^{n}$. For the sake of simplicity, we assume that there exists a nonnegative constant square matrix K of dimension n such that
for all $\{{x}_{1},{x}_{2}\}\subset D$ and $t\in [0,T]$.
Here and below, the obvious notation $x=col({x}_{1},{x}_{2},\dots ,{x}_{n})$ is used, and the inequalities between vectors are understood componentwise. The same convention is adopted implicitly for the operations ‘max’ and ‘min’ so that, e.g., $max\{h(z):z\in Q\}$ for any $h={({h}_{i})}_{i=1}^{n}:Q\to {\mathbb{R}}^{n}$, where $Q\subset {\mathbb{R}}^{m}$, $m\le n$, is defined as the column vector with the components $max\{{h}_{i}(z):z\in Q\}$, $i=1,2,\dots ,n$.
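The componentwise convention can be mirrored directly in code; a minimal numpy sketch, where the function h and the sample set Q are purely illustrative:

```python
import numpy as np

# Componentwise max over a sampled set Q: for h = (h_i), the result is the
# column vector with components max{h_i(z) : z in Q}, as in the convention above.
def vec_max(h, Q):
    return np.max(np.array([h(z) for z in Q]), axis=0)

# Illustrative h : R^2 -> R^2 and a two-point sample Q.
h = lambda z: np.array([z[0] ** 2, z[0] + z[1]])
Q = [np.array([1.0, 0.0]), np.array([-2.0, 3.0])]

print(vec_max(h, Q))  # componentwise maximum: [4.0, 1.0]
```

Note that the maximum is taken in each component separately, so the result need not equal h(z) for any single z in Q.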
2 Notation and symbols
We fix an $n\in \mathbb{N}$ and a bounded set $D\subset {\mathbb{R}}^{n}$. The following symbols are used in the sequel:

1. ${1}_{n}$ is the unit matrix of dimension n.
2. $r(K)$ is the maximal, in modulus, eigenvalue of a matrix K.
3. Given a closed interval $J\subseteq \mathbb{R}$, we define the vector ${\delta}_{J,D}(f)$ by setting
$${\delta}_{J,D}(f):=\underset{(t,z)\in J\times D}{max}f(t,z)-\underset{(t,z)\in J\times D}{min}f(t,z).$$(2.1)
4. ${e}_{k}$, $k=1,2,\dots ,n$: see (10.5).
5. $\partial \mathrm{\Omega}$ is the boundary of a domain Ω.
6. ${\vartriangleright}_{S}$: see Definition 10.1.
The notion of a set $D(r)$ associated with D, which could have been called an inner r-neighbourhood of D, will often be used in what follows.
Definition 2.1 For any nonnegative vector $r\in {\mathbb{R}}^{n}$, we put
where
One of the assumptions to be used below means that the inner r-neighbourhood of D is nonempty for r sufficiently large.
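For computations, both the vector ${\delta}_{J,D}(f)$ of (2.1) and membership in the inner r-neighbourhood $D(r)$ are straightforward to evaluate. A sketch, assuming D is a rectangular box sampled at its corners (sufficient here because the illustrative f is affine in z):

```python
import numpy as np

# delta_{J,D}(f): componentwise max minus min of f over a sample of J x D, cf. (2.1).
def delta(f, ts, zs):
    vals = np.array([f(t, z) for t in ts for z in zs])
    return vals.max(axis=0) - vals.min(axis=0)

# Membership in the inner r-neighbourhood D(r) when D is the box [lo, hi]:
# the whole componentwise r-ball around z must stay inside D.
def in_D_r(z, lo, hi, r):
    return bool(np.all(z - r >= lo) and np.all(z + r <= hi))

f = lambda t, z: np.array([np.sin(t) + z[1], z[0]])        # illustrative f
ts = np.linspace(0.0, 1.0, 50)
zs = [np.array([a, b]) for a in (-1.0, 1.0) for b in (-1.0, 1.0)]  # box corners
d = delta(f, ts, zs)

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print(in_D_r(np.array([0.0, 0.0]), lo, hi, 0.25 * d))
```

Corner sampling of D is exact only for functions monotone (e.g. affine) in z; in general, a finer grid over D would be needed.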
Finally, let the positive number ${\varrho}_{\ast}$ be determined by the equality
We refer, e.g., to [12, 21] for the discussion of other ways of introducing the constant ${\varrho}_{\ast}$ and for its meaning. What is important for us here is that ${\varrho}_{\ast}$ is the constant appearing in Lemma 3.2. One can show by computation that
3 p-periodic successive approximations
The method suggested by Samoilenko in [6, 7], originally called the numerical-analytic method for the investigation of periodic solutions, was later also referred to as the method of periodic successive approximations [9–12]. Its scheme, which is described in a form suitable for us by Propositions 3.1 and 3.4 below, is quite simple and deals with the investigation of the parametrised equation
where $z\in D$ is a parameter to be chosen later. For convenience of reference, we formulate the statements for the p-periodic problem
where $g:[{t}_{0},{t}_{0}+p]\times {\mathbb{R}}^{n}\to {\mathbb{R}}^{n}$ and ${t}_{0}\in (-\mathrm{\infty},\mathrm{\infty})$ is arbitrary but fixed.
Following [1], we now describe the original, unmodified periodic successive approximation scheme for the p-periodic problem (3.2), (3.3), which we are going to modify. With problem (3.2), (3.3), one associates the sequence of functions ${u}_{m}(\cdot ,z)$, $m\ge 0$, defined according to the rule
for $t\in [{t}_{0},{t}_{0}+p]$ and $m=1,2,\dots $ , where the vector $z=col({z}_{1},{z}_{2},\dots ,{z}_{n})$ is regarded as a parameter, the value of which is to be determined later.
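Rule (3.4) is the classical periodic successive-approximation step (see, e.g., [6, 12]). A minimal numerical sketch, assuming that classical form of the recursion; the scalar equation is purely illustrative:

```python
import numpy as np

# One step of the classical periodic successive approximations on [t0, t0 + p]:
#   u_m(t, z) = z + int_{t0}^{t} g(s, u_{m-1}(s, z)) ds
#               - ((t - t0)/p) * int_{t0}^{t0+p} g(s, u_{m-1}(s, z)) ds,
# with u_0(., z) = z.  The subtracted mean term forces u_m(t0) = u_m(t0 + p).
def iterate(g, z, t0, p, m, n_grid=400):
    t = np.linspace(t0, t0 + p, n_grid)
    u = np.full_like(t, z)                                  # u_0 == z
    for _ in range(m):
        gv = g(t, u)
        # cumulative trapezoidal integral of g(s, u_{m-1}(s, z))
        integral = np.concatenate(
            ([0.0], np.cumsum((gv[1:] + gv[:-1]) / 2 * np.diff(t))))
        u = z + integral - (t - t0) / p * integral[-1]
    return t, u

# Illustrative scalar equation u' = -0.3*u + sin(2*pi*t) with period p = 1.
g = lambda t, u: -0.3 * u + np.sin(2 * np.pi * t)
t, u = iterate(g, z=0.5, t0=0.0, p=1.0, m=30)
print(abs(u[0] - u[-1]))  # every iterate satisfies the p-periodicity condition
```

By construction, every iterate satisfies the periodic boundary condition exactly, while the differential equation is satisfied only in the limit and up to the determining term.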
Proposition 3.1 ([12, Theorem 3.17])
Let the function f satisfy the Lipschitz condition (1.5) with a matrix K for which the inequality
holds and, moreover,
Then, for any fixed $z\in D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))$, the following assertions are true:

1. Sequence (3.4) converges to a limit function
$${u}_{\mathrm{\infty}}(t,z)=\underset{m\to \mathrm{\infty}}{lim}{u}_{m}(t,z)$$(3.7)
uniformly in $t\in [{t}_{0},{t}_{0}+p]$.
2. The limit function (3.7) satisfies the p-periodic boundary conditions
$${u}_{\mathrm{\infty}}({t}_{0},z)={u}_{\mathrm{\infty}}({t}_{0}+p,z).$$
3. The function ${u}_{\mathrm{\infty}}(\cdot ,z)$ is the unique solution of the Cauchy problem
(3.8)
where
4. Given an arbitrarily small positive ε, one can choose a number ${m}_{\epsilon}\ge 1$ such that the estimate
$$\left|{u}_{m}(t,z)-{u}_{\mathrm{\infty}}(t,z)\right|\le \frac{1}{2}{\alpha}_{{m}_{\epsilon}}(t){K}^{{m}_{\epsilon}-1}{({\varrho}_{\epsilon}pK)}^{m-{m}_{\epsilon}+1}{({1}_{n}-{\varrho}_{\epsilon}pK)}^{-1}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g)$$
holds for all $t\in [{t}_{0},{t}_{0}+p]$ and $m\ge {m}_{\epsilon}$, where
Recall that, according to (2.2), condition (3.6) means the nonemptiness of the inner $\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g)$-neighbourhood of the set D, where ${\delta}_{[{t}_{0},{t}_{0}+p],D}(g)$ is the vector given by formula (2.1). This agrees with the natural supposition that, for an approximation technique to be applicable, the domain where the Lipschitz condition is assumed should be wide enough.
The proof of Proposition 3.1 is based on Lemma 3.2 formulated below, which provides an estimate for the sequence of functions ${\alpha}_{m}$, $m\ge 0$, given by the formula
where $m\ge 1$ and ${\alpha}_{0}(t):=1$, $t\in [{t}_{0},{t}_{0}+p]$. We provide the formulation here for a clearer understanding of the constants appearing in the estimates.
Lemma 3.2 ([16, Lemma 3])
For any $\epsilon \in (0,+\mathrm{\infty})$, one can specify an integer ${m}_{\epsilon}\ge 1$ such that
for all $t\in [{t}_{0},{t}_{0}+p]$ and $m\ge {m}_{\epsilon}$.
It should be noted that estimate (3.13) is optimal in the sense that ε can never be put equal to zero.
Remark 3.3 It follows from [22, Lemma 4] that if $\epsilon \ge {\epsilon}_{0}$, where
then ${m}_{\epsilon}=2$ in Lemma 3.2 (here, of course, we think of ${m}_{\epsilon}$ as the least integer possessing the property indicated).
The assertion of Proposition 3.1 suggests a natural way to establish a relation between the p-periodic solutions of the given equation (1.3) and those of the perturbed equation (3.8) (or, equivalently, solutions of the initial value problem (3.8), (3.9)). Indeed, it turns out that, by choosing the value of z appropriately, one can use function (3.7) to construct a solution of the original periodic boundary value problem (1.3), (1.4).
Proposition 3.4 Let the assumptions of Proposition 3.1 hold. Then:

1. Given a $z\in D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))$, the function ${u}_{\mathrm{\infty}}(\cdot ,z)$ is a solution of the p-periodic boundary value problem (3.2), (3.3) if and only if z is a root of the equation
$$\mathrm{\Delta}(z)=0.$$(3.15)
2. For any solution $u(\cdot )$ of problem (3.2), (3.3) with $u({t}_{0})\in D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))$, there exists a ${z}_{0}$ such that $u(\cdot )={u}_{\mathrm{\infty}}(\cdot ,{z}_{0})$.
The important assertion (2) means that equation (3.15), usually referred to as a determining equation, allows one to track all the solutions of the periodic boundary value problem (1.3), (1.4). In such a manner, the original infinitedimensional problem is reduced to a system of n numerical equations.
The method thus consists of two parts, namely, the analytic part, where the integral equation (3.1) is dealt with by using the method of successive approximations (3.4), and the numerical one, which consists in finding a value of the unknown parameter from equation (3.15). This closely correlates with the idea of the Lyapunov-Schmidt reduction [2, 3].
The main obstacle to an efficient application of Proposition 3.4 is the fact that the function ${u}_{\mathrm{\infty}}(\cdot ,z)$, $z\in D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))$, and, therefore, the mapping $\mathrm{\Delta}:D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))\to {\mathbb{R}}^{n}$ are not known explicitly. Nevertheless, it is possible to prove the existence of a solution on the basis of the properties of a certain iteration ${u}_{m}(\cdot ,z)$, which is constructed explicitly for a fixed m. For this purpose, one studies the approximate determining system
where ${\mathrm{\Delta}}_{m}:D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))\to {\mathbb{R}}^{n}$ is defined by the formula
for $z\in D(\frac{p}{4}{\delta}_{[{t}_{0},{t}_{0}+p],D}(g))$. This topic is discussed in detail, in particular, in [12], whereas a theorem of the kind specified, which corresponds to the scheme developed here, is proved in Section 9. Our main goal is to obtain a solvability theorem under assumptions weaker than those that would be needed when applying Proposition 3.1.
Indeed, in view of (2.5), assumption (3.5), which is essential for the proof of the uniform convergence of sequence (3.4), can be rewritten in the form
Inequality (3.17) can be treated either as a kind of upper bound for the Lipschitz matrix or as a smallness assumption on the period p, the latter interpretation presenting the scheme as particularly appropriate for the study of high-frequency oscillations.
Without assumption (3.17), Lemma 3.2 does not guarantee the convergence of sequence (3.4) when applied directly along the lines of the proof of Proposition 3.1. Nevertheless, it turns out that this limitation can be overcome and, by using a suitable parametrisation and modifying the scheme appropriately, one can always weaken the smallness condition (3.5) so that the constant on its right-hand side is doubled:
Note also that, although we have in mind to weaken mainly the smallness condition (3.17) guaranteeing the convergence of iterations, it turns out that the techniques suggested here for this purpose allow us to obtain a considerable improvement of condition (3.6) as well (Corollary 6.7).
Moreover, we shall see that, under the weaker condition (3.18), the modified scheme can be used to prove the existence of a periodic solution on the basis of results of computation (Theorem 10.2).
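Before turning to the halving construction, we note that the approximate determining equation is amenable to direct computation. A self-contained sketch, assuming the classical forms of recursion (3.4) and of ${\mathrm{\Delta}}_{m}$ as the mean of g along the m-th iterate (the exact formula (3.16) is not reproduced above); the scalar equation G is purely illustrative:

```python
import numpy as np

G = lambda t, u: -0.3 * u + np.sin(2 * np.pi * t) + 0.2    # illustrative g

# m-th periodic successive approximation u_m(., z) on [0, p] (classical rule).
def u_m(z, m, p=1.0, n=400):
    t = np.linspace(0.0, p, n)
    u = np.full_like(t, z)
    for _ in range(m):
        gv = G(t, u)
        I = np.concatenate(([0.0], np.cumsum((gv[1:] + gv[:-1]) / 2 * np.diff(t))))
        u = z + I - t / p * I[-1]
    return t, u

# Approximate determining function: the mean of g along u_m(., z)
# (the classical form of Delta_m; assumed here for illustration).
def Delta_m(z, m=20, p=1.0):
    t, u = u_m(z, m, p)
    gv = G(t, u)
    return np.sum((gv[1:] + gv[:-1]) / 2 * np.diff(t)) / p

# Numerical part of the method: solve Delta_m(z) = 0 by bisection.
a, b = -5.0, 5.0
for _ in range(60):
    c = (a + b) / 2
    if Delta_m(a) * Delta_m(c) <= 0:
        b = c
    else:
        a = c
z_star = (a + b) / 2
print(abs(Delta_m(z_star)) < 1e-6)
```

Any standard root-finder can replace the bisection; the point is only that the analytic stage produces the explicitly computable function ${\mathrm{\Delta}}_{m}$, and the numerical stage reduces to finitely many scalar equations.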
4 Interval halving, parametrisation and gluing
We would like to show that the approach described by Proposition 3.1 can also be used in cases where the smallness condition (3.5), which guarantees the convergence, is violated. For this purpose, a natural trick based on interval halving can be used, where the unmodified scheme, in a sense, works twice. However, some care must be taken with the boundary conditions.
Indeed, at first glance, one is tempted to implement the halving so that the original scheme is applied on each of the resulting half-intervals, and thus sequence (3.4) would be constructed twice for problem (3.2), (3.3) with ${t}_{0}=0$, $p=\frac{1}{2}T$, $g=f{|}_{[0,\frac{1}{2}T]\times {\mathbb{R}}^{n}}$ and ${t}_{0}=\frac{1}{2}T$, $p=\frac{1}{2}T$, $g=f{|}_{[\frac{1}{2}T,T]\times {\mathbb{R}}^{n}}$, respectively. This is impossible, however, because the boundary conditions on the half-intervals, with trivial exceptions, are never $\frac{1}{2}T$-periodic.
The correct halving scheme is obtained when, along with the periodic boundary value problem (1.3), (1.4), we consider two auxiliary problems
and
where $\lambda =col({\lambda}_{1},\dots ,{\lambda}_{n})$ is a free parameter, the value of which is to be determined suitably from the argument related to gluing. The mutual disposition of the graphs of x and y satisfying, respectively, problems (4.1), (4.2) and (4.3), (4.4) is as shown in Figure 1.
Our further reasoning related to problem (1.3), (1.4) uses the following simple observation. Let us put
Proposition 4.1 ([1])
Let $x:[0,\frac{1}{2}T]\to {\mathbb{R}}^{n}$ and $y:[\frac{1}{2}T,T]\to {\mathbb{R}}^{n}$ be solutions of problems (4.1), (4.2) and (4.3), (4.4), respectively, with a certain value of $\lambda \in {\mathbb{R}}^{n}$. Then the function
is a solution of the periodic boundary value problem (1.4) for the equation
Conversely, if a certain function $u:[0,T]\to {\mathbb{R}}^{n}$ is a solution of problem (1.3), (1.4), then its restrictions $x:=u{|}_{[0,\frac{1}{2}T]}$ and $y:=u{|}_{[\frac{1}{2}T,T]}$ to the corresponding intervals satisfy, respectively, problems (4.1), (4.2) and (4.3), (4.4).
Remark 4.2 A solution of the functional differential equation (4.7) is understood in the Carathéodory sense, and a jump of ${u}^{\prime}$ at $\frac{1}{2}T$ is allowed. Note that function (4.6) is always continuous at $\frac{1}{2}T$.
The idea of Proposition 4.1 is, in fact, to rewrite the periodic boundary condition (1.4) in the form
which naturally leads us to the introduction of the parameter λ.
Proposition 4.1 allows one to treat the Tperiodic problem (1.3), (1.4) as a kind of join of two independent twopoint problems (4.1), (4.2) and (4.3), (4.4). Solving them independently and considering λ as an unknown parameter, one can then try to ‘glue’ their solutions together by choosing the value of λ so that (4.9) holds. The possibility of this gluing is equivalent to the solvability of the original problem. A rigorous formulation is contained in the following
Proposition 4.3 ([1])
Assume that $x:[0,\frac{1}{2}T]\to {\mathbb{R}}^{n}$ and $y:[\frac{1}{2}T,T]\to {\mathbb{R}}^{n}$ are solutions of problems (4.1), (4.2) and (4.3), (4.4), respectively, for a certain value of $\lambda \in {\mathbb{R}}^{n}$. Then the function $u:[0,T]\to {\mathbb{R}}^{n}$ given by formula (4.6) is a solution of problem (1.3), (1.4) if and only if the equality
holds.
Conversely, if a certain $u:[0,T]\to {\mathbb{R}}^{n}$ is a solution of problem (1.3), (1.4), then the functions $x:=u{|}_{[0,\frac{1}{2}T]}$ and $y:=u{|}_{[\frac{1}{2}T,T]}$ satisfy, respectively, problems (4.1), (4.2) and (4.3), (4.4).
Introduce the functions ${\overline{\alpha}}_{m}:[0,\frac{1}{2}T]\to [0,+\mathrm{\infty})$ and ${\overline{\overline{\alpha}}}_{m}:[\frac{1}{2}T,T]\to [0,+\mathrm{\infty})$, $m\ge 0$, by putting ${\overline{\alpha}}_{0}\equiv 1$, ${\overline{\overline{\alpha}}}_{0}\equiv 1$,
for $t\in [0,\frac{1}{2}T]$, and
for $t\in [\frac{1}{2}T,T]$. In particular, we have
and
Functions (4.10) and (4.11), which are, in fact, appropriately scaled versions of (3.12), are involved in the estimates given in the sequel.
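The explicit expressions (4.12) and (4.13) are not reproduced here; assuming the classical definition ${\alpha}_{1}(t)=2t(1-t/p)$ from (3.12) scaled to the half-interval $p=\frac{1}{2}T$, i.e. ${\overline{\alpha}}_{1}(t)=2t(1-2t{T}^{-1})$ on $[0,\frac{1}{2}T]$, the constant $\frac{T}{8}$ appearing in the estimates of Section 7 is obtained by elementary calculus:
$${\overline{\alpha}}_{1}^{\prime}(t)=2-\frac{8t}{T}=0\phantom{\rule{1em}{0ex}}\text{at}\phantom{\rule{1em}{0ex}}t=\frac{T}{4},\qquad \underset{t\in [0,\frac{1}{2}T]}{max}{\overline{\alpha}}_{1}(t)={\overline{\alpha}}_{1}\left(\frac{T}{4}\right)=\frac{T}{4},$$
so that $\frac{1}{2}{\overline{\alpha}}_{1}(t)\le \frac{T}{8}$, which matches the constant in inequalities (7.2) and (7.3) below.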
5 Iterations on halfintervals
As Proposition 4.3 suggests, our approach to the Tperiodic problem (1.3), (1.4) requires that we first study the auxiliary problems (4.1), (4.2) and (4.3), (4.4) separately, for which purpose appropriate iteration processes will be introduced. Let us start by considering problem (4.1), (4.2). Following [1], we set
and define the recurrence sequence of functions ${X}_{m}:[0,\frac{1}{2}T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n}$, $m=0,1,\dots $ , by putting
for all $m=1,2,\dots $ , $\xi \in {\mathbb{R}}^{n}$ and $\lambda \in {\mathbb{R}}^{n}$. In a similar manner, for the parametrised problem (4.3), (4.4) on the interval $[\frac{1}{2}T,T]$, we introduce the sequence of functions ${Y}_{m}:[\frac{1}{2}T,T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n}$, $m\ge 0$, according to the formulae
for all η and λ from ${\mathbb{R}}^{n}$.
The recurrence sequences determined by equalities (5.1), (5.2) and (5.3), (5.4) arise in a natural way when boundary value problems of type (4.1), (4.2) and (4.3), (4.4) are considered. It is not difficult to verify that formulae (5.1), (5.2) and (5.3), (5.4) are particular cases of those corresponding to the iteration scheme for two-point boundary value problems (see, e.g., [23]). One can also derive these formulae directly from Proposition 3.1 by carrying out, respectively, the substitutions $x(t)=u(t)-2t{T}^{-1}\lambda $, $t\in [0,\frac{1}{2}T]$, and $y(t)=u(t)+(2t{T}^{-1}-1)\lambda $, $t\in [\frac{1}{2}T,T]$, after which one arrives at parametrised $\frac{1}{2}T$-periodic boundary value problems on the corresponding half-intervals.
It is important to note that all the members of the sequences ${X}_{m}(\cdot ,\xi ,\lambda )$, $m\ge 0$, and ${Y}_{m}(\cdot ,\xi ,\lambda )$, $m\ge 0$, satisfy, respectively, conditions (4.2) and (4.4).
Lemma 5.1 For any $\{\xi ,\eta ,\lambda \}\subset {\mathbb{R}}^{n}$ and $m\ge 0$, the functions ${X}_{m}(\cdot ,\xi ,\lambda )$ and ${Y}_{m}(\cdot ,\eta ,\lambda )$ satisfy the boundary conditions
Now recall that the vector λ, which is involved in all the above-stated relations, is the ‘gluing’ parameter determining the pair of auxiliary boundary value problems (4.1), (4.2) and (4.3), (4.4), for which a continuous join described by Proposition 4.3 is possible. In this connection, the following property is important.
Lemma 5.2 Let $m\ge 0$ be arbitrary. Then the equality
holds if and only if
Proof Indeed, it follows directly from (5.1) and (5.3) that ${X}_{0}(\frac{1}{2}T,\xi ,\lambda )=\xi +\lambda $ and ${Y}_{0}(\frac{1}{2}T,\eta ,\lambda )=\eta $, whence the assertion is obvious for $m=0$. Similarly, if $m\ge 1$, then, according to (5.2) and (5.4), we have ${X}_{m}(\frac{1}{2}T,\xi ,\lambda )=\xi +\lambda $ and ${Y}_{m}(\frac{1}{2}T,\eta ,\lambda )=\eta $ and, consequently, relation (5.7) is equivalent to (5.8) for any m. □
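The endpoint values computed in the proof make the gluing condition explicit: since ${X}_{m}(\frac{1}{2}T,\xi ,\lambda )=\xi +\lambda $ and ${Y}_{m}(\frac{1}{2}T,\eta ,\lambda )=\eta $ for every $m\ge 0$, equality (5.7) amounts to
$$\xi +\lambda =\eta ,\phantom{\rule{1em}{0ex}}\text{i.e.,}\phantom{\rule{1em}{0ex}}\lambda =\eta -\xi ,$$
which is presumably the content of (5.8): the gluing parameter is simply the difference of the starting values of the two half-interval problems. This is consistent with the arguments $(\xi ,\eta -\xi )$ and $(\eta ,\eta -\xi )$ appearing in the proofs of Section 7.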
6 Successive approximations and their convergence
Let us now pass to the construction of the iteration scheme for the original T-periodic problem (1.3), (1.4). The sequences ${X}_{m}:[0,\frac{1}{2}T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n}$ and ${Y}_{m}:[\frac{1}{2}T,T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n}$, $m\ge 0$, from the preceding section will be used for this purpose. We shall see that the graphs of the respective members of these sequences should be glued together in the sense of Lemma 5.2. Namely, we put
for any $m=0,1,\dots $ . Functions (6.1) and (6.2) will be considered only for those values of ξ and η that are located, in a sense, sufficiently far from the boundary of the domain D. More precisely, we consider $(\xi ,\eta )$ from the set ${G}_{D}(r)$, which, for any nonnegative vector r, is defined by the equality
Recall that we use notation (2.3). In other words, a couple of vectors $(\xi ,\eta )$ belongs to ${G}_{D}(r)$ if and only if every convex combination of ξ and η lies in D together with its r-neighbourhood. The inclusion $(\xi ,\eta )\in {G}_{D}(r)$ implies, in particular, that $B(\xi ,r)\subset D$ and $B(\eta ,r)\subset D$, i.e., the vectors ξ and η both belong to the set $D(r)$ defined by formula (2.2). It is also obvious from (6.3) that ${G}_{D}(r)\subset {D}^{2}$ for any r.
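Membership in ${G}_{D}(r)$ is also easy to test numerically. A sketch for the case where D is a rectangular box (the box, r, and the test points are illustrative):

```python
import numpy as np

# Membership test for G_D(r) when D = [lo, hi] is a box, cf. (6.3): every convex
# combination of xi and eta must lie in D together with its r-neighbourhood.
# For a box the constraint is convex, so checking a grid of theta is redundant
# (endpoints suffice), but the loop keeps the definition explicit.
def in_G(xi, eta, r, lo, hi):
    ok = lambda z: bool(np.all(z - r >= lo) and np.all(z + r <= hi))
    return all(ok((1 - th) * xi + th * eta) for th in np.linspace(0.0, 1.0, 11))

lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
r = np.array([0.5, 0.5])
print(in_G(np.array([-1.0, 0.0]), np.array([1.0, 1.0]), r, lo, hi))  # True
print(in_G(np.array([-1.9, 0.0]), np.array([1.0, 1.0]), r, lo, hi))  # False
```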
The following statement shows that sequence (6.1) is uniformly convergent and its limit is a solution of a certain perturbed problem for all $(\xi ,\eta )$ which are admissible in the sense that $(\xi ,\eta )\in {G}_{D}(r)$ with r sufficiently large.
Theorem 6.1 Let the vectorfunction $f:[0,T]\times D\to {\mathbb{R}}^{n}$ satisfy the Lipschitz condition (1.5) on the set D with a matrix K such that
Moreover, assume that
Then, for an arbitrary pair of vectors $(\xi ,\eta )\in {G}_{D}(\frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f))$:

1. The uniform, in $t\in [0,\frac{1}{2}T]$, limit
$$\underset{m\to \mathrm{\infty}}{lim}{x}_{m}(t,\xi ,\eta )=:{x}_{\mathrm{\infty}}(t,\xi ,\eta )$$(6.6)
exists and, moreover,
2. The function ${x}_{\mathrm{\infty}}(\cdot ,\xi ,\eta )$ is the unique solution of the Cauchy problem
(6.8)
where
3. Given an arbitrarily small positive ε, one can specify a number ${m}_{\epsilon}\ge 1$ such that
(6.11)
for all $t\in [0,\frac{1}{2}T]$ and $m\ge {m}_{\epsilon}$, where ${\varrho}_{\epsilon}$ is given by (3.11).
Recall that the constant ${\varrho}_{\ast}$ involved in condition (6.4) is given by equality (2.4), while the vector ${\delta}_{[0,\frac{1}{2}T],D}(f)$ arising in (6.5) is defined according to (2.1).
Remark 6.2 The error estimate (6.11) may look inconvenient because it is guaranteed starting from a sufficiently large iteration number, ${m}_{\epsilon}$, depending on the value of ε which can be arbitrarily small. It is, however, quite transparent when the required constant is not ‘too close’ to ${\varrho}_{\ast}$ (i.e., if ε is not ‘too small’). More precisely, in view of Remark 3.3, ${m}_{\epsilon}=2$ for $\epsilon \ge {\epsilon}_{0}$, where
is given by formula (3.14). Consequently, inequality (6.11) with $\epsilon \ge {\epsilon}_{0}$ holds for an arbitrary value of $m\ge 2$.
By analogy with Theorem 6.1, under similar conditions, we can establish the uniform convergence of sequence (6.2). Namely, the following statement holds.
Theorem 6.3 Assume that the vectorfunction f satisfies conditions (1.5), (6.4) and, moreover,
Then, for all fixed $(\xi ,\eta )\in {G}_{D}(\frac{T}{8}{\delta}_{[\frac{1}{2}T,T],D}(f))$:

1. The uniform, in $t\in [\frac{1}{2}T,T]$, limit
$$\underset{m\to \mathrm{\infty}}{lim}{y}_{m}(t,\xi ,\eta )=:{y}_{\mathrm{\infty}}(t,\xi ,\eta )$$(6.13)
exists and, moreover,
2. The function ${y}_{\mathrm{\infty}}(\cdot ,\xi ,\eta )$ is the unique solution of the Cauchy problem
(6.15)
where
3. For an arbitrarily small positive ε, one can find a number ${m}_{\epsilon}\ge 1$ such that
(6.18)
for all $t\in [\frac{1}{2}T,T]$ and $m\ge {m}_{\epsilon}$, where ${\varrho}_{\epsilon}$ is given by (3.11).
Remark 6.4 Similarly to Remark 6.2, one can conclude that the validity of estimate (6.18) is ensured for all $m\ge 1$ provided that $\epsilon \ge {\epsilon}_{0}$ with ${\epsilon}_{0}$ given by formula (3.14).
Theorems 6.1 and 6.3 are improved versions of Theorems 1 and 2 from [1], and their proofs follow the lines of those given therein. The main difference here is the use of Lemma 7.2 in order to guarantee that the values of the iterations do not escape from D. The rest of the argument closely parallels that of [1], and we omit it.
Note that the assumptions of Theorems 6.1 and 6.3 differ from each other in conditions (6.5) and (6.12) only. Therefore, by putting
we arrive immediately at the following statement summarising the last two theorems.
Theorem 6.5 Assume that the function f satisfies the Lipschitz condition (1.5) in D with K satisfying relation (6.4) and, moreover, D is such that
Then, for any $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$, the assertions of Theorems 6.1 and 6.3 hold.
Recall that D is the main domain where the Lipschitz condition (1.5) is assumed, whereas ${G}_{D}({r}_{D}(f))$ is the subset of ${D}^{2}$ defined according to (6.3). The latter set is, in a sense, a two-dimensional analogue of $D({r}_{D}(f))$ and, as has already been noted above, the inclusion
is true. By virtue of (6.21), assumption (6.20) implies in particular that
which is a condition of type (3.6) appearing in Proposition 3.1 (see Figure 2). It turns out that, in the case of a convex domain, condition (6.20) can always be replaced by (6.22). Indeed, the following statement holds.
Lemma 6.6 If the domain D is convex, then the corresponding set ${G}_{D}({r}_{D}(f))$ has the form
Proof In view of (6.21), it is sufficient to show that
Indeed, let us put $r:={r}_{D}(f)$ (the assertion is, of course, true for any nonnegative vector r, but the present formulation is sufficient for our purposes) and assume that, on the contrary, inclusion (6.23) does not hold. Then one can specify some ξ and η such that
According to definition (6.3), relation (6.25) means the existence of certain ${\theta}_{0}\in [0,1]$ and $z\in {\mathbb{R}}^{n}$ such that
Let us put $h:=z-(1-{\theta}_{0})\xi -{\theta}_{0}\eta $. Then, in view of (6.26), we have
Furthermore, it is obvious that
and, consequently, z is a convex combination of $\xi +h$ and $\eta +h$. By virtue of (2.2), (6.24) and (6.27), both vectors $\xi +h$ and $\eta +h$ belong to D and, therefore, so does z because (6.28) holds and the set D is convex. However, this contradicts relation (6.26). Thus, inclusion (6.23) holds, and our lemma is proved. □
By virtue of Lemma 6.6, the assertion of Theorem 6.5 for f Lipschitzian in a convex domain can be reformulated as follows.
Corollary 6.7 Let f satisfy conditions (1.5) and (6.4). If, moreover, the domain D is convex and (6.22) holds, then, for any ξ and η from $D({r}_{D}(f))$, all the assertions of Theorems 6.1 and 6.3 hold.
The convexity assumption on D is rather natural and, in fact, the domain where the Lipschitz condition for the nonlinearity is verified most frequently has the form of a ball (in our case, where the inequalities between vectors are understood componentwise, it is an ndimensional rectangular parallelepiped).
We note that the smallness assumption (6.4), which guarantees the convergence of the iterations in Corollary 6.7, is weaker by a factor of two than the corresponding condition (3.5) of Proposition 3.1:
Furthermore, it is rather interesting to observe that the condition on inner neighbourhoods also becomes less restrictive after the interval halving has been carried out. Indeed, it is clear from (2.1) and (6.19) that, for condition (6.22) of Corollary 6.7 to be satisfied, it would be sufficient if
whereas, at the same time, assumption (3.6) of Proposition 3.1 would require the relation
The radius of the inner neighbourhood in (6.30) is half as large as that in (6.31). Comparing (6.4) and (6.30) with the corresponding conditions (6.29) and (6.31) arising in Proposition 3.1, we conclude that the idea of interval halving described above allows us to improve the original scheme of periodic successive approximations in both directions.
Theorem 6.5 suggests that the iteration sequences (5.2) and (5.4) can be used to construct the solutions of auxiliary problems (4.1), (4.2) and (4.3), (4.4) and ultimately of the original problem (1.3), (1.4). A further analysis, which will lead us to an existence theorem, involves determining equations. Before continuing, we give some auxiliary statements.
7 Auxiliary statements
Several technical lemmata given below are needed in the proof of Theorems 6.1 and 6.3. We implicitly assume in the formulations that condition (6.20) is satisfied.
Given arbitrary $i\in \{0,1\}$ and $v\in C([\frac{1}{2}iT,\frac{1}{2}(i+1)T],{\mathbb{R}}^{n})$, put
for all $t\in [\frac{1}{2}iT,\frac{1}{2}(i+1)T]$. The linear mapping ${P}_{i}$, which obviously transforms the space $C([\frac{1}{2}iT,\frac{1}{2}(i+1)T],{\mathbb{R}}^{n})$ into itself, is in fact a scaled version of the corresponding projection operator used rather frequently in studies of the periodic boundary value problem (see, e.g., [12]). In our case, properties of this mapping are used when estimating the values of the Nemytskii operator generated by the function f involved in equation (1.3).
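Formula (7.1) is not reproduced above; the form below is an assumption consistent with the estimates (7.2), (7.3) of Lemma 7.1, and the classical bound underlying them can be checked numerically (the sample function v is illustrative):

```python
import numpy as np

# A plausible explicit form of the mapping P_i (formula (7.1) is assumed here):
#   (P v)(t) = int_a^t v(s) ds - ((t - a)/p) * int_a^{a+p} v(s) ds.
# The classical estimate |(P v)(t)| <= (1/2) * alpha_1(t) * delta(v), which
# underlies Lemma 7.1, is verified numerically for a sample function v.
a, p, n = 0.0, 0.5, 2001                   # the half-interval [0, T/2] with T = 1
t = np.linspace(a, a + p, n)
v = np.cos(4 * np.pi * t) + 0.3 * t        # illustrative continuous function

# cumulative trapezoidal integral and the mean-subtracted operator P
cum = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2 * np.diff(t))))
Pv = cum - (t - a) / p * cum[-1]

alpha1 = 2 * (t - a) * (1 - (t - a) / p)   # alpha_1 of (3.12) scaled to [a, a+p]
bound = 0.5 * alpha1 * (v.max() - v.min())
print(bool(np.all(np.abs(Pv) <= bound + 1e-6)))
```

Note that $(Pv)(a)=(Pv)(a+p)=0$, which reflects the fact that the iterates built from P automatically satisfy the boundary conditions.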
Lemma 7.1 Let $x:[0,\frac{1}{2}T]\to {\mathbb{R}}^{n}$ and $y:[\frac{1}{2}T,T]\to {\mathbb{R}}^{n}$ be arbitrary functions such that $\{x(t):t\in [0,\frac{1}{2}T]\}\subset D$ and $\{y(t):t\in [\frac{1}{2}T,T]\}\subset D$. Then:

1. For $t\in [0,\frac{1}{2}T]$,
$$\left|\left({P}_{0}f(\cdot ,x(\cdot ))\right)(t)\right|\le \frac{1}{2}{\overline{\alpha}}_{1}(t){\delta}_{[0,\frac{1}{2}T],D}(f)\le \frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f).$$(7.2)
2. For $t\in [\frac{1}{2}T,T]$,
$$\left|\left({P}_{1}f(\cdot ,y(\cdot ))\right)(t)\right|\le \frac{1}{2}{\overline{\overline{\alpha}}}_{1}(t){\delta}_{[\frac{1}{2}T,T],D}(f)\le \frac{T}{8}{\delta}_{[\frac{1}{2}T,T],D}(f).$$(7.3)
Recall that ${\overline{\alpha}}_{1}$ and ${\overline{\overline{\alpha}}}_{1}$ are functions (4.12), (4.13), and the vectors ${\delta}_{[0,\frac{1}{2}T],D}(f)$, ${\delta}_{[\frac{1}{2}T,T],D}(f)$ are defined according to (2.1). The proof of Lemma 7.1 is almost a literal repetition of that of [1, Lemma 7] and uses the estimate obtained in [22, Lemma 3].
Lemma 7.2 For arbitrary $m\ge 0$ and $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$, the inclusions
and
hold.
Proof
Let us fix an arbitrary pair of vectors
and prove, e.g., relation (7.4). We shall argue by induction. Indeed, in view of (5.1),
for $t\in [0,\frac{1}{2}T]$. This means that, at every point t from $[0,\frac{1}{2}T]$, the value of ${x}_{0}(t,\xi ,\eta )$ is a convex combination of ξ and η. Recalling definition (6.3) of the set ${G}_{D}({r}_{D}(f))$ and using assumption (7.6), we conclude that all the values of the function ${X}_{0}(\cdot ,\xi ,\eta -\xi )$ lie in D, i.e., (7.4) holds with $m=0$.
Assume now that
for a certain value of m and show that the inclusion
holds as well. Indeed, considering (5.2) and recalling notation (7.1), we conclude that, for all m, the identity
holds for any $t\in [0,\frac{1}{2}T]$. Since the validity of inclusion (7.4) has been assumed, we see that inequality (7.2) of Lemma 7.1 can be applied and, therefore, identity (7.10) yields
for all $t\in [0,\frac{1}{2}T]$. It follows from (7.11) that, at every point $t\in [0,\frac{1}{2}T]$, the value ${X}_{m+1}(t,\xi ,\eta -\xi )$ lies in the $\frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f)$-neighbourhood of a convex combination of the vectors ξ and η. Since ξ and η satisfy (7.6) and, by (6.19), ${r}_{D}(f)\ge \frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f)$, it follows from definition (6.3) of the set ${G}_{D}({r}_{D}(f))$ that all the values of the function ${X}_{m+1}(\cdot ,\xi ,\eta -\xi )$ belong to D, i.e., (7.9) holds. Thus, inclusion (7.4) is true for all $m\ge 0$. Recalling notation (6.1), we arrive immediately at (7.4).
Relation (7.5) is proved by analogy. Indeed, it follows from (5.3) that
where $\theta (t):=2(1-t{T}^{-1})$ for any $t\in [\frac{1}{2}T,T]$. Since, obviously, $0\le \theta (t)\le 1$ for all $t\in [\frac{1}{2}T,T]$, identity (7.12) and assumption (7.6) guarantee that the function ${Y}_{0}(\cdot ,\eta ,\eta -\xi )$ has values in D. Let us assume that, for a certain m,
and show that
By virtue of (5.4), for any $t\in [\frac{1}{2}T,T]$, we have
with the same definition of $\theta (\cdot )$ as in (7.12). According to assumption (7.13), the function ${Y}_{m}(\cdot ,\eta ,\eta -\xi )$ has values in D. Therefore, using equality (7.15) and estimate (7.3) of Lemma 7.1, we obtain
for all $t\in [\frac{1}{2}T,T]$. Since $\theta :[\frac{1}{2}T,T]\to [0,1]$, inequality (7.16) implies that all the values of the function ${Y}_{m+1}(\cdot ,\eta ,\eta -\xi )$ belong to the $\frac{T}{8}{\delta}_{[\frac{1}{2}T,T],D}(f)$-neighbourhood of a convex combination of ξ and η. Recalling now (6.3) and (6.19) and using assumption (7.6), we arrive at (7.14). Consequently, inclusion (7.13) holds for all m, and (7.5) follows immediately from (6.2) and (7.13). The lemma is proved. □
Finally, the corresponding assertions of Theorems 6.1 and 6.3 lead us immediately to the following statement.
Lemma 7.3 Under the assumptions of Theorem 6.5, the inclusions
and
hold true for any $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$.
The proof of Lemma 7.3 consists in passing to the limit in (7.4) and (7.5) as $m\to +\mathrm{\infty}$, the possibility of which is ensured by Theorem 6.5.
8 Limit functions and determining equations
The techniques based on the original periodic successive approximations (3.4), the applicability of which is guaranteed by Proposition 3.1, lead one to the necessary and sufficient conditions for the solvability formulated in terms of determining equations (3.15) of Proposition 3.4. A certain analogue of the last-mentioned statement should also be established for our new version of the method, with iterations constructed using the interval halving procedure, for the resulting scheme to be logically complete. It is natural to expect that the limit functions of the iterations on the half-intervals will help one to formulate solvability criteria for the original problem, and, in fact, it turns out that it is the functions $\mathrm{\Xi}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ and $\mathrm{H}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ defined according to equalities (6.10) and (6.17) that provide such a characterisation.
Indeed, Theorems 6.1 and 6.3 guarantee that, under the conditions assumed, the functions ${x}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[0,\frac{1}{2}T]\to {\mathbb{R}}^{n}$ and ${y}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[\frac{1}{2}T,T]\to {\mathbb{R}}^{n}$ are well defined for all $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$. Therefore, by putting
we obtain a function ${u}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[0,T]\to {\mathbb{R}}^{n}$, which is well defined for the same values of $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$. This function is obviously continuous.
The following theorem, which is a modified version of [[1], Theorem 4], establishes a relation of this function to the original periodic problem (1.3), (1.4) in terms of the zeroes of Ξ and H.
Theorem 8.1 Let f satisfy the Lipschitz condition (1.5) with a matrix K such that (6.4) holds. Furthermore, assume that D has property (6.20). Then:

1.
The function ${u}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[0,T]\to {\mathbb{R}}^{n}$ defined by (8.1) is a solution of the periodic boundary value problem (1.3), (1.4) if and only if the pair $(\xi ,\eta )$ satisfies the system of 2n equations
$$\begin{array}{r}\mathrm{\Xi}(\xi ,\eta )=0,\\ \mathrm{H}(\xi ,\eta )=0.\end{array}$$(8.2) 
2.
For every solution $u(\cdot )$ of problem (1.3), (1.4) with $(u(0),u(\frac{1}{2}T))\in {G}_{D}({r}_{D}(f))$, there exists a pair $({\xi}_{0},{\eta}_{0})$ such that $u(\cdot )={u}_{\mathrm{\infty}}(\cdot ,{\xi}_{0},{\eta}_{0})$.
Equations (8.2) are usually referred to as determining or bifurcation equations [3, 12] because their roots determine solutions of the original problem. The variables involved in system (8.2) admit a natural interpretation: ξ is the value of the solution at 0, whereas η corresponds to its value at $\frac{1}{2}T$. We can observe the main difference between the unmodified periodic successive approximations (Proposition 3.1) and the similar scheme obtained after the interval halving (Theorem 6.5): the convergence condition is twice as weak but, instead of the n numerical equations (3.15) of Proposition 3.4, we need to solve the 2n equations (8.2) of Theorem 8.1.
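In practice, the root-finding stage for a system of the form (8.2) is carried out numerically. The following minimal sketch illustrates only that stage for $n=1$: the functions playing the roles of Ξ and H below are toy stand-ins (not the functions of the paper), chosen so that the root $(\xi ,\eta )=(1,2)$ mimics the pair $(u(0),u(\frac{1}{2}T))$.

```python
import numpy as np

def newton_solve(F, z0, tol=1e-12, max_iter=50):
    """Solve F(z) = 0 by Newton's method with a finite-difference Jacobian."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        h = 1e-7
        J = np.empty((z.size, z.size))
        for j in range(z.size):
            dz = np.zeros_like(z)
            dz[j] = h
            J[:, j] = (F(z + dz) - Fz) / h   # j-th Jacobian column
        z = z - np.linalg.solve(J, Fz)
    return z

# Toy stand-in for the determining system (8.2) with n = 1:
def Phi(z):
    xi, eta = z
    return np.array([xi + eta - 3.0,   # plays the role of Xi(xi, eta) = 0
                     xi * eta - 2.0])  # plays the role of H(xi, eta) = 0

root = newton_solve(Phi, z0=[0.5, 2.5])
```

For $n>1$ the same scheme applies verbatim to the 2n-dimensional field $(\xi ,\eta )\mapsto (\mathrm{\Xi}(\xi ,\eta ),\mathrm{H}(\xi ,\eta ))$.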
A constructive solvability analysis involves a natural concept of approximate determining equations, which is discussed below.
9 Approximate determining equations
Although Theorem 8.1 provides a theoretical answer to the question on the construction of a solution of the periodic problem (1.3), (1.4), its application faces difficulties due to the fact that the explicit form of the functions $\mathrm{\Xi}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ and $\mathrm{H}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ appearing in (8.2) is usually unknown. This complication can be overcome by using the functions
and
for a fixed m, which will lead one to the so-called approximate determining equations. More precisely, similarly to [12, 24], it can be shown that, under certain natural assumptions, one can replace the exact determining system (8.2) by its approximate analogue
Note that, unlike system (8.2), the m th approximate determining system (9.3) contains only terms involving the functions ${x}_{m}:[0,\frac{1}{2}T]\times {G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ and ${y}_{m}:[\frac{1}{2}T,T]\times {G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ and is thus known explicitly.
It is natural to expect that approximations to the unknown solution of (1.3), (1.4) can be obtained by using the function ${u}_{m}(\cdot ,\xi ,\eta ):[0,T]\to {\mathbb{R}}^{n}$,
which is an ‘approximate’ version of (8.1) well defined for all $t\in [0,T]$ and $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$.
The piecewise character of the definition of function (9.4) does not affect the properties that a potential approximation obtained from it should possess. Indeed,
Proposition 9.1 If ξ and η satisfy equations (9.3) for a certain m, then the function ${u}_{m+1}(\cdot ,\xi ,\eta )$ determined by equality (9.4) is continuously differentiable on $[0,T]$.
Proof It follows immediately from (5.2), (5.4) and (9.4) that
and
Recall that, by virtue of (5.4) and (6.2),
Then, in view of (9.1) and (9.2), it follows from (9.3), (9.5) and (9.6) that
and, therefore, ${u}_{m+1}^{\prime}(\cdot ,\xi ,\eta )$ is continuous at $\frac{1}{2}T$. The continuous differentiability of the function ${u}_{m+1}(\cdot ,\xi ,\eta )$ at other points is obvious from its definition. □
In order to prove a statement on the solvability of problem (1.3), (1.4), we need some estimates of the functions ${\mathrm{\Xi}}_{m}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$ and ${\mathrm{H}}_{m}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{n}$, $m=0,1,\dots $ , defined by (9.1) and (9.2).
Lemma 9.2 Assume that (6.20) holds. Let f satisfy the Lipschitz condition (1.5) with a matrix K such that
Then the estimates
and
hold for any values of $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$ and $m\ge 2$.
Proof Let us fix arbitrary $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$ and $m\ge 2$. Recalling (6.10) and (9.1), we obtain
By Lemma 7.3, the function ${x}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[0,\frac{1}{2}T]\to {\mathbb{R}}^{n}$ has values in D and, therefore, the Lipschitz condition (1.5) can be used in (9.10). Then, applying estimate (6.11) of Theorem 6.1 with $\epsilon ={\epsilon}_{0}$, where ${\epsilon}_{0}\approx 0.00727$ is given by (3.14), we obtain
Recall now that, in view of Remark 6.2 and relations (3.11) and (3.14), one has
and, therefore, (9.11) can be rewritten in the form
Furthermore, it follows from (4.12) and (4.10) that the function ${\overline{\alpha}}_{2}$ has the form
whence we obtain by computation that
Considering (9.12) and (9.15), we find that inequality (9.11), in fact, means that
which estimate coincides with (9.8). Note that the invertibility of the matrix ${1}_{n}-\frac{3}{20}TK$ is guaranteed by condition (9.7).
In a similar manner, in order to establish (9.9), we use (6.17) and (9.1) to obtain the estimate
Lemma 7.3 guarantees that all the values of the function ${y}_{\mathrm{\infty}}(\cdot ,\xi ,\eta ):[\frac{1}{2}T,T]\to {\mathbb{R}}^{n}$ lie in D and, therefore, the Lipschitz condition (1.5) can be used in (9.17). Estimate (6.18) of Theorem 6.1 applied with $\epsilon ={\epsilon}_{0}$ then yields
Finally, it follows from (4.13) and (4.11) by computation that
and, hence,
Consequently, by virtue of relations (9.12) and (9.20), inequality (9.18) leads us directly to the required estimate (9.9). □
10 Solvability analysis based on approximation
The argument shown above allows us to draw conclusions on the solvability of the periodic problem (1.3), (1.4) on the basis of properties of iterations (5.2) and (5.4). More precisely, it turns out that the use of functions (9.1) and (9.2) allows one to study the vector field $\mathrm{\Phi}:{G}_{D}({r}_{D}(f))\to {\mathbb{R}}^{2n}$,
the critical points of which, as we have seen in Theorem 8.1, determine the solutions of the original problem (1.3), (1.4), through its approximation
where m is fixed. In the formulation of the theorem given below, the following notion is used.
Definition 10.1 ([12])
Let r and l be positive integers and $S\subset {\mathbb{R}}^{l}$ be an arbitrary nonempty set. For any pair of vector functions ${g}_{j}:{\mathbb{R}}^{l}\to {\mathbb{R}}^{r}$, $j=1,2$, we write
if and only if there exists a function $\nu :S\to \{1,2,\dots ,r\}$ such that the strict inequality
holds for all $z\in S$.
Here, ${e}_{k}$, $k=1,2,\dots ,r$, are the unit vectors,
and $\langle \cdot ,\cdot \rangle$ stands for the usual inner product in ${\mathbb{R}}^{r}$. The binary relation ${\vartriangleright}_{S}$ introduced by Definition 10.1 is a kind of strict inequality for vector functions, and its properties are similar to those of the usual strict inequality sign. For example, $f\ge g$ and $g{\vartriangleright}_{S}h$ imply that $f{\vartriangleright}_{S}h$. The last-named property will be used below in the proof of Theorem 10.2.
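On a finite sample of S, the relation of Definition 10.1 can be verified mechanically. The sketch below assumes the componentwise form $|\langle {e}_{\nu (z)},{g}_{1}(z)\rangle |>\langle {e}_{\nu (z)},{g}_{2}(z)\rangle$ for the display of the definition (not reproduced above); the functions and the sample set are purely illustrative.

```python
import numpy as np

def triangle_rel(g1, g2, sample):
    """Check g1 |>_S g2 on a finite sample of S: at every sampled point z
    there must be some component k with |g1(z)[k]| > g2(z)[k]; the index k
    may change from point to point (the function nu of Definition 10.1)."""
    for z in sample:
        if not np.any(np.abs(g1(z)) > np.asarray(g2(z))):
            return False   # no admissible row at this point of S
    return True

# Toy vector functions with r = 2 on a sampled planar set S:
g1 = lambda z: np.array([np.sin(z[0]) + 3.0, np.cos(z[1])])
g2 = lambda z: np.array([1.5, 5.0])
S = [np.array([a, b]) for a in np.linspace(0, 6, 13)
                      for b in np.linspace(0, 6, 13)]

holds = triangle_rel(g1, g2, S)   # first row always exceeds 1.5 in abs value
fails = triangle_rel(lambda z: np.array([0.5, 0.5]), g2, S)
```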
We are now able to formulate a statement guaranteeing the solvability of the original periodic problem (1.3), (1.4) based on the information obtained in the course of computation of iterations. In contrast to the unmodified scheme of periodic successive approximations (Proposition 3.1, $r(K)<{T}^{-1}{\varrho}_{\ast}^{-1}$), here the iterations are proved to be convergent under an assumption that is twice as weak as in the former case (Theorem 6.5, $r(K)<2{T}^{-1}{\varrho}_{\ast}^{-1}$). A similar observation can be made concerning the assumption on the domain D (see Corollary 6.7 and the remarks related to conditions (6.30) and (6.31)).
When stating the existence theorem, we restrict our consideration to a slightly weaker version of condition (6.4), where the value ${\varrho}_{\ast}\approx 0.2927$ is replaced by 0.3, and thus neglect the gap ($0,0.00727\dots $) for ε in estimates (6.11) and (6.18).
Theorem 10.2 Assume that the function f in (1.3) satisfies the Lipschitz condition (1.5) with a matrix K such that inequality (9.7) holds and the set D has property (6.20). Furthermore, let there exist a closed domain
such that, for a certain fixed value of $m\ge 2$, the mapping ${\mathrm{\Phi}}_{m}$ given by formula (10.2) satisfies the conditions
and
where
Then there exist certain values $({\xi}^{\ast},{\eta}^{\ast})\in \mathrm{\Omega}$ such that the function ${u}_{\mathrm{\infty}}(\cdot ,{\xi}^{\ast},{\eta}^{\ast})$ is a solution of the periodic boundary value problem (1.3), (1.4).
Recall that the symbol ${\vartriangleright}_{\partial \mathrm{\Omega}}$ in (10.7) is understood in the sense of Definition 10.1. It should be noted that condition (10.7) involves the values of functions on the boundary of Ω only.
Proof We shall use the lemmata stated above. By analogy to [12, 24], we shall prove that the fields Φ and ${\mathrm{\Phi}}_{m}$ are homotopic. It will be sufficient to consider the linear deformation
where $\theta \in [0,1]$. Indeed, it is clear that ${Q}_{\theta}$ is a continuous mapping on ∂ Ω for every $\theta \in [0,1]$ and, furthermore,
Let us fix an arbitrary pair $(\xi ,\eta )\in \partial \mathrm{\Omega}$. According to (10.1) and (10.2), we have
On the other hand, by Lemma 9.2, estimates (9.8) and (9.9) hold. Using relations (9.8) and (9.9) in (10.11), we show that
and hence ${Q}_{\theta}$ does not vanish on ∂ Ω for any θ. Thus, Φ is homotopic to ${\mathrm{\Phi}}_{m}$. The property of invariance of degree under homotopy then yields
and therefore, in view of (10.6), we conclude that $deg(\mathrm{\Phi},\mathrm{\Omega})\ne 0$. Consequently, there exist vectors ${\xi}^{\ast}$ and ${\eta}^{\ast}$ possessing the properties indicated, and it only remains to refer to Theorem 8.1. The theorem is proved. □
Note that Theorem 10.2 provides solvability conditions based upon properties of approximations starting from the second one inclusively. A similar statement allowing one to use the zeroth and the first approximations can be obtained if we use [[12], Lemma 3.16] instead of Lemma 3.2. In that case, condition (10.7) of Theorem 10.2 is replaced, respectively, by the relations
and
11 Approximation of a solution
The theorem proved in the preceding section can be complemented by the following natural observation. Let $(\hat{\xi},\hat{\eta})\in \mathrm{\Omega}$ be a root of the approximate determining system (9.3) for a certain m. Then the function
defined according to (9.4) can be regarded as the m th approximation to a solution of the periodic problem (1.3), (1.4). This is justified by Proposition 9.1 and the estimates
for $t\in [0,\frac{1}{2}T]$ and
for $t\in [\frac{1}{2}T,T]$, which, as is easy to see from (9.4), follow directly from Theorem 6.5. A uniform inequality, not given here, can be obtained by estimating the mapping $(\xi ,\eta )\mapsto {u}_{m}(t,\xi ,\eta )$ for any fixed $t\in [0,T]$.
It is worth emphasising the role of the unknown parameters whose values in (11.1) are determined from equations (9.3): $\hat{\xi}$ is an approximation of the initial value of the periodic solution, and $\hat{\eta}$ is that of its value at $\frac{1}{2}T$.
As regards the practical application of Theorem 10.2, it should be noted that, according to (10.2), the mapping ${\mathrm{\Phi}}_{m}$ is known in an analytic form because it is determined solely by the m th iteration, which has already been constructed at that stage. Of course, the degree in (10.6) is the Brouwer degree because all the vector fields are finite-dimensional. Likewise, all the terms on the right-hand side of inequality (10.7) are computed explicitly (e.g., by using computer algebra systems).
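For $n=1$, the field ${\mathrm{\Phi}}_{m}$ maps a planar domain Ω into ${\mathbb{R}}^{2}$, so the Brouwer degree in (10.6) can be evaluated as the winding number of ${\mathrm{\Phi}}_{m}$ along ∂ Ω. The sketch below computes that winding number over a densely sampled boundary loop; the fields used here are toy examples, not ${\mathrm{\Phi}}_{m}$ itself.

```python
import numpy as np

def winding_number(field, loop):
    """Brouwer degree of a nonvanishing planar field over the region whose
    positively oriented boundary is sampled by the closed polyline `loop`:
    the total change of arg(field) along the loop, divided by 2*pi."""
    v = np.array([field(p) for p in loop])
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.concatenate([ang, ang[:1]]))   # increments around the loop
    d = (d + np.pi) % (2 * np.pi) - np.pi         # reduce each to [-pi, pi)
    return int(round(d.sum() / (2 * np.pi)))

# Positively oriented boundary of the square [-1, 1]^2, densely sampled
t = np.linspace(0.0, 1.0, 200, endpoint=False)
one = np.ones_like(t)
loop = np.concatenate([
    np.stack([-1 + 2 * t, -one], axis=1),   # bottom edge, left to right
    np.stack([one, -1 + 2 * t], axis=1),    # right edge, upwards
    np.stack([1 - 2 * t, one], axis=1),     # top edge, right to left
    np.stack([-one, 1 - 2 * t], axis=1),    # left edge, downwards
])

deg_id = winding_number(lambda p: p, loop)                         # one simple zero
deg_sq = winding_number(lambda p: np.array([p[0]**2 - p[1]**2,
                                            2 * p[0] * p[1]]), loop)  # double zero
```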
12 An example
Let us consider the scalar π-periodic boundary value problem
where $h(t):=\frac{1}{2}(3\cos 2t-\sin 2t)+\frac{1}{8}(\sin 4t+1)$, $t\in [0,\pi ]$. It is easy to check that the function
is a solution of problem (12.1), (12.2). This solution has values in the domain $D:=[-1,1]$, where, as one can verify, the convergence condition (3.5) is not satisfied. However, the corresponding condition with the doubled constant (3.18) does hold, and therefore, the interval halving technique can be used.
The appropriate computations, which have been carried out by using Maple 14 and are omitted here, show that the approach based on Theorems 6.5 and 10.2 is indeed applicable in this case. The existence of solution (12.3) (let us forget for a moment that we know it explicitly in this academic example) is established by Theorem 10.2, whereas its approximations of type (11.1) are constructed as described above. For instance, in the first approximation, we have $u\approx {U}_{1}$ with
where ${\chi}_{\pi}$ is the indicator function (4.5) and
and
The numerical values of the parameters ξ and η corresponding to functions (12.4), (12.5) (see Table 1) are found from the system of equations (9.3) with $m=1$, which, in this case, has the form
and
The graphs obtained in the course of computation are shown in Figures 3 and 4, whereas Table 1 contains the corresponding numerical values of the parameters. Note that only the zeroth approximation has a derivative with a discontinuity at $\frac{1}{2}T$ (cf. Proposition 9.1). The graphs and the computed numerical values of the parameters show a rather good accuracy of approximation.
13 Comments
Several points can be outlined in relation to the techniques discussed in the preceding sections.
13.1 Approximation scheme in practice
An interesting feature of the approach indicated here is that a practical analysis of the periodic problem (1.3), (1.4) along its lines starts directly with the computation of iterations. We construct the approximate determining equations (9.3), solve them numerically in an appropriate region, substitute the corresponding roots into the formula for ${u}_{m}$ and form functions (11.1), which are, in a sense, candidates for approximations of a solution. Having constructed functions (11.1) for several values of m, we examine their behaviour heuristically; if it shows some signs of convergence, we stop the computation and verify the assumptions of the existence theorem. If this verification is successful, then from this moment on we know that a solution exists. Either we are satisfied with the achieved accuracy of approximation, in which case the scheme stops and the function ${U}_{m}$ given by (11.1) for the last computed value of m is proclaimed its outcome, or we find that a more accurate approximation is needed, in which case one more step is made and a similar check is carried out for the new approximation.
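The workflow just described can be summarised as a driver loop. All callbacks in the sketch below are hypothetical placeholders for the respective steps: constructing the iterate (9.4), solving system (9.3), verifying the conditions of the existence theorem, and estimating the accuracy of type (11.2).

```python
def analytic_numeric_driver(build_iterate, solve_determining,
                            existence_check, accuracy, tol, m_max=10):
    """Skeleton of the workflow described above; every callback stands in
    for the corresponding analytic or numerical step of the scheme."""
    for m in range(m_max):
        u_m = build_iterate(m)              # analytic form of the m-th iterate
        params = solve_determining(u_m)     # roots of the approximate system
        if params is None:
            continue                        # no admissible root at this step
        if existence_check(u_m, params):    # existence-theorem conditions
            if accuracy(u_m, params) <= tol:
                return m, u_m, params       # accept U_m as the outcome
    return None                             # no conclusion within m_max steps

# Toy run: the "iterate" is just its index, the existence conditions start
# to hold from m = 2, and the accuracy estimate decays like 2**(-m).
result = analytic_numeric_driver(
    build_iterate=lambda m: m,
    solve_determining=lambda u: (0.0, 0.0),
    existence_check=lambda u, p: u >= 2,
    accuracy=lambda u, p: 2.0 ** (-u),
    tol=0.05,
)
```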
It is important to observe that, once the existence of a solution is known from Theorem 10.2 at the m th step of iteration, we immediately obtain an approximation to it in the form (11.1). The scheme thus allows us to both study the solvability of the periodic problem and construct approximations to its solution.
It should be noted that the ability to derive the fact of solvability of the original problem from the corresponding properties of approximate problems is rather uncommon (see [12] for some details). For numerical methods, the generic situation is, in fact, quite the reverse: one or another technique is applied to solve a problem which is a priori assumed to be solvable.
13.2 Extension to other problems
The idea expressed above can easily be adopted for application to differential equations with argument deviations. The only issue that should be clarified in that case is the definition of iterations on the half-intervals at those points which are carried over the middle to the adjacent half-interval. For this purpose, sequences (5.2) and (5.4) should be computed simultaneously, with (5.4) serving as an initial function for (5.2) at the next step, and vice versa.
Likewise, with appropriate modifications, the technique developed here can be applied to problems with boundary conditions other than periodic ones. We do not dwell on this topic here.
13.3 Variable subinterval lengths
It is, of course, not necessary to keep the $1:2$ ratio of subinterval lengths. For example, if there is a point ${s}_{0}$ such that ${\delta}_{[0,{s}_{0}],D}(f)$ is much greater than ${\delta}_{[{s}_{0},T],D}(f)$, it is natural to continue the halving, or any other kind of division, on $[0,{s}_{0}]$. This is reminiscent of the idea used in adaptive numerical methods with a variable step length.
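Assuming that ${\delta}_{[a,b],D}(f)$ measures the oscillation of f over $[a,b]\times D$ (definition (2.1) is not reproduced in this section, so the quantity below is a sampled stand-in, taken as max − min), a balanced split point ${s}_{0}$ can be located numerically:

```python
import numpy as np

def oscillation(f, a, b, lo, hi, nt=201, nx=51):
    """Sampled stand-in for delta_{[a,b],D}(f) with D = [lo, hi]:
    the oscillation max f - min f of a scalar f over a grid on [a,b] x D."""
    ts = np.linspace(a, b, nt)[:, None]
    xs = np.linspace(lo, hi, nx)[None, :]
    vals = f(ts, xs)
    return vals.max() - vals.min()

def split_point(f, T, lo, hi, grid=101):
    """Pick s0 in (0, T) balancing the oscillations on [0, s0] and [s0, T]."""
    cands = np.linspace(T / grid, T * (1 - 1 / grid), grid)
    gaps = [abs(oscillation(f, 0.0, s, lo, hi) - oscillation(f, s, T, lo, hi))
            for s in cands]
    return cands[int(np.argmin(gaps))]

# Toy right-hand side whose t-variation is concentrated near t = 0, so the
# balanced split point lands well to the left of the midpoint T/2:
f = lambda t, x: np.sin(8 * t) * np.exp(-4 * t) + 0.1 * x
s0 = split_point(f, T=np.pi, lo=-1.0, hi=1.0)
```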
13.4 Applicability on small intervals
In contrast to purely numerical approaches, where one may be forced to discretise with a tiny step, the efficiency of the technique based on Theorem 6.5 is not so much affected by the smallness of the interval. This makes the scheme well applicable, in particular, for the study of highfrequency oscillations.
13.5 Advantages over other methods
The proposed technique has some other positive features distinguishing it from other approaches. For example, when applying it, one experiences no difficulties with the selection of the starting approximation (in contrast, e.g., to monotone iterative methods); there is no need to recalculate considerable amounts of data when passing to the next step of approximation (unlike projection methods); the global Lipschitz condition and the assumption on the unique solvability of the Cauchy problem are not necessary (unlike the shooting method); etc. As regards the last-mentioned condition, one should note that, for functional differential equations, it is violated even in very simple cases, and it is thus unnatural to require it when constructing a scheme of analysis for a reasonably wide class of problems.
13.6 Repeated interval halving
The interval halving procedure can be repeated. When doing so, we observe that the conditions on both the eigenvalues of the Lipschitz matrix and the size of the domain are weakened by half at each step. Indeed, it follows immediately from Corollary 6.7 that the periodic successive approximation scheme constructed with k interval halvings is applicable provided that
and
It is also clear that the sets $D({2}^{-k-2}T{\delta}_{[0,T],D}(f))$, $k=0,1,\dots $ , form a strictly increasing sequence tending to the original domain D in the limit as k grows to ∞. In other words, rather interestingly, the scheme suggested here is theoretically applicable however large the eigenvalues of K may be.
The side effect of the successive interval halving is the increase of the dimension of the system of determining equations, which contains ${2}^{k}n$ equations after the k th interval halving. One can regard this as a certain price to be paid for being able to apply interval halving in order to convert a divergent iteration scheme into a convergent one.
In this way, by carrying out interval halving sequentially, one can, in particular, re-establish the convergence of numerical-analytic algorithms for systems of ordinary differential equations with globally Lipschitzian nonlinearities (see [12, 25, 26]).
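A quick tabulation illustrates this trade-off. The $2^{k}$ scaling of the admissible spectral-radius bound is inferred from the statement that the condition is weakened by half at each step; the period T and dimension n below are illustrative assumptions.

```python
RHO_STAR = 0.2927          # the approximate constant rho_* quoted in the text
T, n = 1.0, 3              # hypothetical period and system dimension

rows = []
for k in range(4):                        # k successive interval halvings
    bound = 2 ** k / (T * RHO_STAR)       # admissible r(K), assumed 2**k scaling
    n_eqs = 2 ** k * n                    # size of the determining system
    rows.append((k, bound, n_eqs))
```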
13.7 Combination with other methods
The most difficult part of the scheme discussed consists in the analytic construction of as many members of the parametrised iteration sequence (9.4) as are sufficient to establish the solvability of the periodic problem (see conditions (10.6), (10.7)) and to achieve the required precision of approximation in (11.1). Its practical implementation, usually done by using symbolic computation systems, can be considerably facilitated by combining the analytic computation with a suitable kind of approximation. The use of polynomial or trigonometric interpolation (see [10, 27]) is very convenient for this purpose.
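As a sketch of the interpolation step, the snippet below fits a trigonometric polynomial to equidistant samples of a T-periodic function via the real FFT; the sampled function is a toy stand-in for an iterate ${u}_{m}$, and all names are illustrative.

```python
import numpy as np

def trig_fit(samples):
    """Cosine/sine coefficients of the trigonometric interpolant of real
    equidistant samples over one period, obtained from the real FFT."""
    c = np.fft.rfft(samples) / samples.size
    return 2.0 * c.real, -2.0 * c.imag    # a_k and b_k, k = 0, 1, ...

def trig_eval(a, b, t, T):
    """Evaluate a0/2 + sum_k [a_k cos(2 pi k t / T) + b_k sin(2 pi k t / T)]."""
    k = np.arange(a.size)
    w = 2.0 * np.pi * np.outer(np.atleast_1d(t), k) / T
    return np.cos(w) @ a + np.sin(w) @ b - a[0] / 2.0

# Recover a simple T-periodic "iterate" from 32 equidistant samples
T = np.pi
ts = np.linspace(0.0, T, 32, endpoint=False)
u = 0.3 + 0.5 * np.cos(2 * np.pi * ts / T) - 0.2 * np.sin(4 * np.pi * ts / T)
a, b = trig_fit(u)
```

Truncating the returned coefficient arrays after a few harmonics gives the low-order trigonometric approximation that can replace the exact iterate in subsequent analytic steps.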
13.8 Nondegeneracy condition for higherorder approximations
It is obvious from (9.7) and (10.8) that ${lim}_{m\to \mathrm{\infty}}{M}_{m}=0$ and, hence, the right-hand side of inequality (10.7) vanishes as m grows to +∞. On the other hand, it is easy to see that, under the conditions assumed, the mapping ${\mathrm{\Phi}}_{m}$ converges to Φ (uniformly on compact sets) as m tends to +∞. We thus arrive at the interesting observation that assumption (10.7) of Theorem 10.2, which is the main condition ensuring the nondegeneracy of the homotopy, has the form of the strict inequality
where ${\mathrm{\Phi}}_{m}$ approaches Φ while the term ${w}_{m}$ becomes arbitrarily small as m grows to +∞.
13.9 Relation to continuation theorems
Theorem 10.2 and similar statements can also be applied on the zeroth step of iteration, i.e., when one does not perform any iteration at all. This reminds us of the notion of a generating system appearing, e.g., in the asymptotic methods.
Indeed, having in mind Theorem 10.2 in its present formulation and recalling condition (10.12), let us put
for any $(\xi ,\eta )\in {G}_{D}({r}_{D}(f))$. Recall that ${G}_{D}({r}_{D}(f))$ is a subset of ${D}^{2}$ which a priori contains the value $(u(0),u(\frac{1}{2}T))$ for the periodic solution $u(\cdot )$ in question.
By using Theorem 10.2 for $m=0$ with condition (10.7) replaced by (10.12), we obtain the following statement on the solvability of the periodic problem (1.3), (1.4).
Corollary 13.1 Let assumption (6.20) hold and let the convergence condition (9.7) be satisfied. Furthermore, let there exist a closed domain $\mathrm{\Omega}\subset {G}_{D}({r}_{D}(f))$ such that
and
Then the periodic boundary value problem (1.3), (1.4) has at least one solution $u(\cdot )$ which has values in D and, moreover, is such that $(u(0),u(\frac{1}{2}T))\in \mathrm{\Omega}$.
Recall that the vectors ${\delta}_{[0,\frac{1}{2}T],D}(f)$ and ${\delta}_{[\frac{1}{2}T,T],D}(f)$ are computed directly according to formula (2.1), whereas ‘${\vartriangleright}_{\partial \mathrm{\Omega}}$’ means that, at every point from ∂ Ω, the strict inequality ‘>’ holds for at least one row, and the number of that row may vary with the point.
Assumptions of type (6.12), (13.5) are natural from various points of view. For example, let us imagine for a while that no interval halving has been carried out at all and thus, instead of Theorem 6.5, we are in the situation described by Proposition 3.1 with $g=f$, $p=T$ and ${t}_{0}=0$. The system of 2n determining equations (8.2) then turns back into the n-dimensional system (3.15),
the zeroth approximation of which, in the sense of the iteration process (3.4), has the form
Therefore, assumption (6.12) becomes
with a suitable domain $V\subset D$, where
for $\xi \in V$. Then, using [[12], Lemma 3.26] with $m=0$, one easily shows that the following statement holds.
Corollary 13.2 The conditions (13.7), $r(K)<10{(3T)}^{-1}$ and
are sufficient for the solvability of the periodic problem (1.3), (1.4).
Arguing in this manner, we can obtain, in particular, the well-known theorem of Mawhin [28], with (13.7) being the solvability condition for the generating equation (of course, one could use a condition of a priori bounds type instead of (13.9) for a more exact resemblance). In this context, Corollary 13.1 can be regarded as a ‘halved’ analogue of the last-mentioned statement, where the equations
determine the initial data of the zeroth approximation. The side effect of halving is visible in the presence of two independent variables, ξ and η, due to which system (13.10), (13.11), in contrast to (13.6), contains n extra equations.
It should be noted that the convergence of the iteration scheme in Corollary 13.1 is guaranteed under the assumption $r(K)<20{(3T)}^{-1}$, which is twice as weak as the corresponding condition of Corollary 13.2 ($r(K)<10{(3T)}^{-1}$).
References
 1.
Rontó A, Rontó M: Periodic successive approximations and interval halving. Miskolc Math. Notes 2012, 13(2):459-482.
 2.
Nirenberg L: Topics in Nonlinear Functional Analysis. Courant Institute of Mathematical Sciences, New York University, New York; 1974. (With a chapter by E. Zehnder, notes by R. A. Artino, Lecture Notes, 1973-1974)
 3.
Gaines RE, Mawhin JL: Coincidence Degree, and Nonlinear Differential Equations. Lecture Notes in Mathematics 568. Springer, Berlin; 1977.
 4.
Cesari L: Asymptotic Behavior and Stability Problems in Ordinary Differential Equations. 2nd edition. Ergebnisse der Mathematik und ihrer Grenzgebiete, N. F. 16. Academic Press, New York; 1963.
 5.
Hale JK: Oscillations in Nonlinear Systems. McGraw-Hill, New York; 1963.
 6.
Samoilenko AM: A numerical-analytic method for investigation of periodic systems of ordinary differential equations. I. Ukr. Math. J. 1965, 17(4):82-93. doi:10.1007/BF02526569
 7.
Samoilenko AM: A numerical-analytic method for investigation of periodic systems of ordinary differential equations. II. Ukr. Math. J. 1966, 18(2):50-59. doi:10.1007/BF02537778
 8.
Samoilenko AM: On a sequence of polynomials and the radius of convergence of its Abel-Poisson sum. Ukr. Math. J. 2003, 55(7):1119-1130. doi:10.1023/B:UKMA.0000010610.69570.13
 9.
Samoilenko AM, Ronto NI: Numerical-Analytic Methods of Investigating Periodic Solutions. Mir, Moscow; 1979. (With a foreword by Yu. A. Mitropolskii)
 10.
Samoilenko AM, Ronto NI: Numerical-Analytic Methods of Investigation of Boundary-Value Problems. Naukova Dumka, Kiev; 1986. (In Russian, with an English summary; edited and with a preface by Yu. A. Mitropolskii)
 11.
Samoilenko AM, Ronto NI: Numerical-Analytic Methods in the Theory of Boundary-Value Problems for Ordinary Differential Equations. Naukova Dumka, Kiev; 1992. (In Russian; edited and with a preface by Yu. A. Mitropolskii)
 12.
Rontó A, Rontó M: Successive approximation techniques in nonlinear boundary value problems for ordinary differential equations. In Handbook of Differential Equations: Ordinary Differential Equations. Vol. IV. Elsevier/North-Holland, Amsterdam; 2008:441-592.
 13.
Rontó A, Rontó M: Successive approximation method for some linear boundary value problems for differential equations with a special type of argument deviation. Miskolc Math. Notes 2009, 10:69-95.
 14.
Rontó A, Rontó M: On a Cauchy-Nicoletti type three-point boundary value problem for linear differential equations with argument deviations. Miskolc Math. Notes 2009, 10(2):173-205.
 15.
Rontó A, Rontó M: On nonseparated three-point boundary value problems for linear functional differential equations. Abstr. Appl. Anal. 2011, 2011: Article ID 326052. doi:10.1155/2011/326052
 16.
Ronto A, Rontó M: A note on the numerical-analytic method for nonlinear two-point boundary-value problems. Nonlinear Oscil. 2001, 4:112-128.
 17.
Rontó A, Rontó M: On some symmetric properties of periodic solutions. Nonlinear Oscil. 2003, 6:82-107. doi:10.1023/A:1024827821289
 18.
Rontó M, Shchobak N: On the numerical-analytic investigation of parametrized problems with nonlinear boundary conditions. Nonlinear Oscil. 2003, 6(4):469-496. doi:10.1023/B:NONO.0000028586.11256.d7
 19.
Rontó M, Shchobak N: On parametrization for a nonlinear boundary value problem with separated conditions. Electron. J. Qual. Theory Differ. Equ. 2007, 18:1-16.
 20.
Ronto AN, Ronto M, Shchobak NM: On the parametrization of three-point nonlinear boundary value problems. Nonlinear Oscil. 2004, 7(3):384-402. doi:10.1007/s11072-005-0019-5
 21.
Ronto AN, Rontó M, Samoilenko AM, Trofimchuk SI: On periodic solutions of autonomous difference equations. Georgian Math. J. 2001, 8:135-164.
 22.
Rontó M, Mészáros J: Some remarks on the convergence of the numerical-analytical method of successive approximations. Ukr. Math. J. 1996, 48:101-107. doi:10.1007/BF02390987
 23.
Rontó M, Samoilenko AM: Numerical-Analytic Methods in the Theory of Boundary-Value Problems. World Scientific, River Edge; 2000. (With a preface by Yu. A. Mitropolsky and an appendix by the authors and S. I. Trofimchuk)
 24.
Rontó A, Rontó M: Existence results for three-point boundary value problems for systems of linear functional differential equations. Carpath. J. Math. 2012, 28:163-182.
 25.
Kwapisz M: On modifications of the integral equation of Samoilenko’s numerical-analytic method of solving boundary value problems. Math. Nachr. 1992, 157:125-135.
 26.
Kwapisz M: On modification of Samoilenko’s numerical-analytic method of solving boundary value problems for difference equations. Appl. Math. 1993, 38(2):133-144.
 27.
Rontó A, Rontó M, Holubová G, Nečesal P: Numerical-analytic technique for investigation of solutions of some nonlinear equations with Dirichlet conditions. Bound. Value Probl. 2011, 2011: Article ID 58. doi:10.1186/1687-2770-2011-58
 28.
Mawhin J: Topological Degree Methods in Nonlinear Boundary Value Problems. CBMS Regional Conference Series in Mathematics 40. Am. Math. Soc., Providence; 1979. (Expository lectures from the CBMS Regional Conference held at Harvey Mudd College, Claremont, Calif., June 9-15, 1977)
Acknowledgements
Dedicated to Professor Jean Mawhin on the occasion of his 70th birthday.
This work was supported in part by RVO: 67985840 (A. Rontó). This research was carried out as part of the TAMOP-4.2.1.B-10/2/KONV-2010-0001 project with support from the European Union, co-financed by the European Social Fund (M. Rontó).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The initial draft was prepared mainly by the first two authors, while the third one carried out the numerical computations and an overall check of estimates. All the authors contributed equally to the final version of this work and approved its present form.
Keywords
 periodic solution
 periodic boundary value problem
 parametrisation
 periodic successive approximations
 numerical-analytic method
 interval halving
 existence
 continuation
 Mawhin’s theorem