Let us now pass to the construction of the iteration scheme for the original T-periodic problem (1.3), (1.4). For this purpose we use the sequences {X}_{m}:[0,\frac{1}{2}T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n} and {Y}_{m}:[\frac{1}{2}T,T]\times {\mathbb{R}}^{2n}\to {\mathbb{R}}^{n}, m\ge 0, from the preceding section. We shall see that the graphs of the respective members of these sequences should be glued together in the sense of Lemma 5.2. Namely, we put
for any m=0,1,\dots . Functions (6.1) and (6.2) will be considered only for those values of ξ and η that are located, in a sense, sufficiently far from the boundary of the domain D. More precisely, we consider (\xi ,\eta ) from the set {G}_{D}(r), which, for any nonnegative vector r, is defined by the equality
{G}_{D}(r):=\{(\xi ,\eta )\in {D}^{2}:B((1-\theta )\xi +\theta \eta ,r)\subset D\text{ for all }\theta \in [0,1]\}.
(6.3)
Recall that we use notation (2.3). In other words, a pair of vectors (\xi ,\eta ) belongs to {G}_{D}(r) if and only if every convex combination of ξ and η lies in D together with its r-neighbourhood. The inclusion (\xi ,\eta )\in {G}_{D}(r) implies, in particular, that B(\xi ,r)\subset D and B(\eta ,r)\subset D, i.e., the vectors ξ and η both belong to the set D(r) defined by formula (2.2). It is also obvious from (6.3) that {G}_{D}(r)\subset {D}^{2} for any r.
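For illustration, membership in {G}_{D}(r) is easy to check numerically when D is an n-dimensional box and, in line with the componentwise conventions used here, B(x,r) is the componentwise r-neighbourhood. The following sketch makes these assumptions (the box bounds, the sampling of θ, and all numerical values are illustrative and not part of the text):

```python
def ball_in_box(x, r, lo, hi):
    """B(x, r) lies in the box D = [lo, hi], balls taken componentwise."""
    return all(l + ri <= xi <= h - ri
               for xi, ri, l, h in zip(x, r, lo, hi))

def in_G_D(xi, eta, r, lo, hi, samples=101):
    """Sampled test of (xi, eta) in G_D(r), cf. (6.3): every convex
    combination of xi and eta must lie in D with its r-neighbourhood."""
    for k in range(samples):
        th = k / (samples - 1)
        z = [(1 - th) * a + th * b for a, b in zip(xi, eta)]
        if not ball_in_box(z, r, lo, hi):
            return False
    return True

# D = [0, 1]^2 with r = (0.1, 0.1):
print(in_G_D([0.2, 0.2], [0.8, 0.8], [0.1, 0.1], [0, 0], [1, 1]))   # True
print(in_G_D([0.05, 0.5], [0.8, 0.5], [0.1, 0.1], [0, 0], [1, 1]))  # False
```

The second pair fails because B(ξ, r) already leaves D at the left boundary, illustrating the remark above that (\xi ,\eta )\in {G}_{D}(r) forces both vectors into D(r).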
The following statement shows that sequence (6.1) is uniformly convergent and its limit is a solution of a certain perturbed problem for all (\xi ,\eta ) which are admissible in the sense that (\xi ,\eta )\in {G}_{D}(r) with r sufficiently large.
Theorem 6.1 Let the vector function f:[0,T]\times D\to {\mathbb{R}}^{n} satisfy the Lipschitz condition (1.5) on the set D with a matrix K such that
r(K)<\frac{2}{T{\varrho}_{\ast}}.
(6.4)
Moreover, assume that
{G}_{D}(\frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f))\ne \mathrm{\varnothing}.
(6.5)
Then, for an arbitrary pair of vectors (\xi ,\eta )\in {G}_{D}(\frac{T}{8}{\delta}_{[0,\frac{1}{2}T],D}(f)):

1.
The uniform, in t\in [0,\frac{1}{2}T], limit
\underset{m\to \mathrm{\infty}}{lim}{x}_{m}(t,\xi ,\eta )=:{x}_{\mathrm{\infty}}(t,\xi ,\eta )
(6.6)
exists and, moreover,
{x}_{\mathrm{\infty}}(\frac{T}{2},\xi ,\eta )-{x}_{\mathrm{\infty}}(0,\xi ,\eta )=\eta -\xi .
(6.7)

2.
The function
{x}_{\mathrm{\infty}}(\cdot ,\xi ,\eta )
is the unique solution of the Cauchy problem
where
\mathrm{\Xi}(\xi ,\eta ):=\eta -\xi -{\int}_{0}^{\frac{T}{2}}f(\tau ,{x}_{\mathrm{\infty}}(\tau ,\xi ,\eta ))\phantom{\rule{0.2em}{0ex}}d\tau .
(6.10)

3.
Given an arbitrarily small positive ε, one can specify a number {m}_{\epsilon}\ge 1 such that
for all t\in [0,\frac{1}{2}T] and m\ge {m}_{\epsilon}, where {\varrho}_{\epsilon} is given by (3.11).
Recall that the constant {\varrho}_{\ast} involved in condition (6.4) is given by equality (2.4), while the vector {\delta}_{[0,\frac{1}{2}T],D}(f) arising in (6.5) is defined according to (2.1).
Remark 6.2 The error estimate (6.11) may look inconvenient because it is only guaranteed from a sufficiently large iteration number {m}_{\epsilon} onwards, which depends on the value of ε and may grow as ε becomes arbitrarily small. The situation is, however, quite transparent when ε is not ‘too small’ (i.e., when {\varrho}_{\epsilon} is not ‘too close’ to {\varrho}_{\ast}). More precisely, in view of Remark 3.3, {m}_{\epsilon}=2 for \epsilon \ge {\epsilon}_{0}, where
{\epsilon}_{0}\approx 0.00727
is given by formula (3.14). Consequently, inequality (6.11) with \epsilon \ge {\epsilon}_{0} holds for an arbitrary value of m\ge 2.
By analogy with Theorem 6.1, under similar conditions, we can establish the uniform convergence of sequence (6.2). Namely, the following statement holds.
Theorem 6.3 Assume that the vector function f satisfies conditions (1.5), (6.4) and, moreover,
{G}_{D}(\frac{T}{8}{\delta}_{[\frac{1}{2}T,T],D}(f))\ne \mathrm{\varnothing}.
(6.12)
Then, for all fixed (\xi ,\eta )\in {G}_{D}(\frac{T}{8}{\delta}_{[\frac{1}{2}T,T],D}(f)):

1.
The uniform, in t\in [\frac{1}{2}T,T], limit
\underset{m\to \mathrm{\infty}}{lim}{y}_{m}(t,\xi ,\eta )=:{y}_{\mathrm{\infty}}(t,\xi ,\eta )
(6.13)
exists and, moreover,
{y}_{\mathrm{\infty}}(T,\xi ,\eta )-{y}_{\mathrm{\infty}}(\frac{T}{2},\xi ,\eta )=\xi -\eta .
(6.14)

2.
The function
{y}_{\mathrm{\infty}}(\cdot ,\xi ,\eta )
is the unique solution of the Cauchy problem
where
\mathrm{H}(\xi ,\eta ):=\xi -\eta -{\int}_{\frac{T}{2}}^{T}f(\tau ,{y}_{\mathrm{\infty}}(\tau ,\xi ,\eta ))\phantom{\rule{0.2em}{0ex}}d\tau .
(6.17)

3.
For an arbitrarily small positive ε, one can find a number {m}_{\epsilon}\ge 1 such that
for all t\in [\frac{1}{2}T,T] and m\ge {m}_{\epsilon}, where {\varrho}_{\epsilon} is given by (3.11).
Remark 6.4 Similarly to Remark 6.2, one can conclude that the validity of estimate (6.18) is ensured for all m\ge 1 provided that \epsilon \ge {\epsilon}_{0} with {\epsilon}_{0} given by formula (3.14).
Theorems 6.1 and 6.3 are improved versions of Theorems 1 and 2 of [1], and their proofs follow the lines of those given therein. The main difference here is the use of Lemma 7.2, which guarantees that the values of the iterations do not escape from D. The rest of the argument is quite similar to that of [1], and we omit it.
Note that the assumptions of Theorems 6.1 and 6.3 differ from each other in conditions (6.5) and (6.12) only. Therefore, by putting
{r}_{D}(f):=\frac{T}{8}max\{{\delta}_{[0,\frac{1}{2}T],D}(f),{\delta}_{[\frac{1}{2}T,T],D}(f)\},
(6.19)
we arrive immediately at the following statement summarising the last two theorems.
Theorem 6.5 Assume that the function f satisfies the Lipschitz condition (1.5) in D with K satisfying relation (6.4) and, moreover, D is such that
{G}_{D}({r}_{D}(f))\ne \mathrm{\varnothing}.
(6.20)
Then, for any (\xi ,\eta )\in {G}_{D}({r}_{D}(f)), the assertions of Theorems 6.1 and 6.3 hold.
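To illustrate how hypothesis (6.20) can be verified in practice, the following sketch computes {r}_{D}(f) according to (6.19) for a scalar example. It assumes that {\delta}_{[a,b],D}(f) is half the componentwise oscillation of f over [a,b]\times D; this is an assumption about definition (2.1), which is not reproduced in this section, and the nonlinearity and box D are illustrative:

```python
def half_oscillation(f, t_lo, t_hi, x_lo, x_hi, nt=60, nx=60):
    """Assumed form of delta_{[a,b],D}(f) from (2.1): half the oscillation
    of f over [a,b] x D, estimated on a uniform grid."""
    vals = [f(t_lo + (t_hi - t_lo) * i / (nt - 1),
              x_lo + (x_hi - x_lo) * j / (nx - 1))
            for i in range(nt) for j in range(nx)]
    return 0.5 * (max(vals) - min(vals))

T = 1.0
x_lo, x_hi = -1.0, 1.0                 # scalar domain D = [-1, 1]
f = lambda t, x: 0.4 * x + 0.3 * t     # illustrative right-hand side

d1 = half_oscillation(f, 0.0, T / 2, x_lo, x_hi)
d2 = half_oscillation(f, T / 2, T, x_lo, x_hi)
r = (T / 8) * max(d1, d2)              # r_D(f) as in (6.19)

# condition (6.22): D(r_D(f)) is nonempty iff x_lo + r <= x_hi - r
print(x_lo + r <= x_hi - r)            # True
```

For this example r ≈ 0.059, so the inner set D({r}_{D}(f)) = [x_lo + r, x_hi − r] is comfortably nonempty.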
Recall that D is the main domain where the Lipschitz condition (1.5) is assumed, whereas {G}_{D}({r}_{D}(f)) is the subset of {D}^{2} defined according to (6.3). The latter set is, in a sense, a two-dimensional analogue of D({r}_{D}(f)) and, as has already been noted above, the inclusion
{G}_{D}({r}_{D}(f))\subset D({r}_{D}(f))\times D({r}_{D}(f))
(6.21)
is true. By virtue of (6.21), assumption (6.20) implies in particular that
D({r}_{D}(f))\ne \mathrm{\varnothing},
(6.22)
which is a condition of type (3.6) appearing in Proposition 3.1 (see Figure 2). It turns out that, in the case of a convex domain, condition (6.20) can always be replaced by (6.22). Indeed, the following statement holds.
Lemma 6.6 If the domain D is convex, then the corresponding set {G}_{D}({r}_{D}(f)) has the form
{G}_{D}({r}_{D}(f))=D({r}_{D}(f))\times D({r}_{D}(f)).
Proof In view of (6.21), it is sufficient to show that
{G}_{D}({r}_{D}(f))\supset D({r}_{D}(f))\times D({r}_{D}(f)).
(6.23)
Indeed, let us put r:={r}_{D}(f) (the assertion is, of course, true for any nonnegative vector r, but the present formulation is sufficient for our purposes) and assume that, on the contrary, inclusion (6.23) does not hold. Then one can specify some ξ and η such that
According to definition (6.3), relation (6.25) means the existence of certain {\theta}_{0}\in [0,1] and z\in {\mathbb{R}}^{n} such that
z\in B((1-{\theta}_{0})\xi +{\theta}_{0}\eta ,r)\setminus D.
(6.26)
Let us put h:=z-(1-{\theta}_{0})\xi -{\theta}_{0}\eta . Then, in view of (6.26), we have
Furthermore, it is obvious that
(1-{\theta}_{0})(\xi +h)+{\theta}_{0}(\eta +h)=z
(6.28)
and, consequently, z is a convex combination of \xi +h and \eta +h. By virtue of (2.2), (6.24) and (6.27), both vectors \xi +h and \eta +h belong to D and, therefore, so does z because (6.28) holds and the set D is convex. However, this contradicts relation (6.26). Thus, inclusion (6.23) holds, and our lemma is proved. □
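The content of Lemma 6.6 is also easy to confirm numerically for a convex D. In the sketch below (the box domain, the value of r, and the random sampling are illustrative assumptions), every sampled pair (\xi ,\eta )\in D(r)\times D(r) passes the convex-combination test (6.3), in agreement with equality (6.23) and its converse inclusion (6.21):

```python
import random

def in_G(xi, eta, r, lo, hi, samples=50):
    """Sampled membership test for G_D(r) with a box D, per (6.3):
    every convex combination of xi and eta, together with its
    componentwise r-neighbourhood, must stay inside [lo, hi]."""
    for k in range(samples + 1):
        th = k / samples
        z = [(1 - th) * a + th * b for a, b in zip(xi, eta)]
        if not all(l + ri <= zi <= h - ri
                   for zi, ri, l, h in zip(z, r, lo, hi)):
            return False
    return True

random.seed(0)
lo, hi, r = [0.0, 0.0], [1.0, 1.0], [0.2, 0.2]

def sample_D_r():
    # D(r) for the box D = [lo, hi] is the smaller box [lo + r, hi - r]
    return [random.uniform(l + ri, h - ri)
            for l, h, ri in zip(lo, hi, r)]

# Lemma 6.6: for convex D, every pair from D(r) x D(r) lies in G_D(r)
print(all(in_G(sample_D_r(), sample_D_r(), r, lo, hi)
          for _ in range(200)))        # True
```

For a non-convex D this test would fail for pairs whose connecting segment crosses the excluded region, which is precisely why the general Theorem 6.5 needs the stronger hypothesis (6.20) rather than (6.22).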
By virtue of Lemma 6.6, the assertion of Theorem 6.5 for f Lipschitzian in a convex domain can be reformulated as follows.
Corollary 6.7 Let f satisfy conditions (1.5) and (6.4). If, moreover, the domain D is convex and (6.22) holds, then, for any ξ and η from D({r}_{D}(f)), all the assertions of Theorems 6.1 and 6.3 hold.
The convexity assumption on D is rather natural; indeed, the domain on which the Lipschitz condition for the nonlinearity is verified most frequently has the form of a ball (in our setting, where inequalities between vectors are understood componentwise, this is an n-dimensional rectangular parallelepiped).
We note that the smallness assumption (6.4), which guarantees the convergence of the iterations in Corollary 6.7, is less restrictive by a factor of two than the corresponding condition (3.5) of Proposition 3.1:
r(K)<\frac{1}{T{\varrho}_{\ast}}.
(6.29)
Furthermore, it is rather interesting to observe that the condition on inner neighbourhoods also becomes less restrictive after the interval halving has been carried out. Indeed, it is clear from (2.1) and (6.19) that, for condition (6.22) of Corollary 6.7 to be satisfied, it would be sufficient if
D(\frac{T}{8}{\delta}_{[0,T],D}(f))\ne \mathrm{\varnothing},
(6.30)
whereas, at the same time, assumption (3.6) of Proposition 3.1 would require the relation
D(\frac{T}{4}{\delta}_{[0,T],D}(f))\ne \mathrm{\varnothing}.
(6.31)
The radius of the inner neighbourhood in (6.30) is thus half of that required in (6.31). Comparing (6.4) and (6.30) with the corresponding conditions (6.29) and (6.31) of Proposition 3.1, we conclude that the interval-halving idea described above improves the original scheme of periodic successive approximations in both respects.
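The twofold gain can be made concrete: since (6.4) and (6.29) share the factor 1/(T{\varrho}_{\ast}), for a fixed Lipschitz matrix K the halved scheme admits periods up to twice as large, whatever the value of {\varrho}_{\ast}. A sketch (the matrix K is an arbitrary illustrative choice; the spectral radius r(K) is computed by power iteration to keep the example self-contained):

```python
def spectral_radius(K, iters=200):
    """Power iteration for the spectral radius of a nonnegative matrix."""
    n = len(K)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

K = [[0.3, 0.1],
     [0.2, 0.4]]              # illustrative Lipschitz matrix
rK = spectral_radius(K)       # here r(K) = 0.5

# Admissible periods, measured in units of 1 / varrho_* (which is fixed
# by (2.4) and cancels in the comparison):
#   (6.29): T < 1 / (r(K) varrho_*)   -- original scheme
#   (6.4):  T < 2 / (r(K) varrho_*)   -- interval-halved scheme
T_classical = 1.0 / rK
T_halved = 2.0 / rK
print(abs(T_halved / T_classical - 2.0) < 1e-12)   # True
```

The same cancellation argument applies to the inner-neighbourhood radii T/8 in (6.30) versus T/4 in (6.31).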
Theorem 6.5 suggests that the iteration sequences (5.2) and (5.4) can be used to construct the solutions of auxiliary problems (4.1), (4.2) and (4.3), (4.4) and ultimately of the original problem (1.3), (1.4). A further analysis, which will lead us to an existence theorem, involves determining equations. Before continuing, we give some auxiliary statements.