In this section, we investigate a specific type of inhomogeneous LDS called the constant-type multiple cellular neural network (constant CNN). To keep the exposition clear, Section 2.1 concentrates on constant CNNs with nearest neighborhood; the general cases of constant CNNs and deeper architectures are investigated in the rest of this section.
2.1 Constant cellular neural networks with nearest neighborhood
First we consider the LDS realized as
(1)
for . Denote the parameters that relate to the odd and even positions by and , respectively. We call the feedback template of (1), and is the threshold. It is seen that the templates in (1) are periodic; this model generalizes the classical cellular neural network and is called a constant-type multiple cellular neural network.
A system of ordinary differential equations is called completely stable if each of its solutions approaches an equilibrium state. Let , denote the collections of cells in odd and even coordinates, respectively. Express (1) as
(2)
where , , is a diagonal mapping (herein and ), and . Sufficient conditions for the complete stability of (1) are given as follows; the extension of Theorem 2.1 appears in Theorem 2.5.
Theorem 2.1 A constant CNN is completely stable if, for , one of the following conditions is satisfied.
S1 is symmetric.
S2 and for all i, j, where
The complete stability of (1) shows that the investigation of the equilibrium solutions is essential. To make the discussion clearer, we focus on the mosaic solutions, i.e., for all i, and study the complexity of the output space of the mosaic solutions. We investigate this complexity in two aspects:
To achieve our target, we introduce the ordering matrix and transition matrix first. The ordering matrix is defined as
herein the pattern ‘−’ stands for the state and ‘+’ stands for . Let
For , define the transition matrix of by
where consists of patterns of length n in X. Combining and , we derive the formulae for and . For the general cases of constant CNNs, Theorem 2.2 is generalized by Lemma 2.11 and Theorem 2.13.
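Since the transition matrices in the text are not reproduced here, the following minimal sketch uses the golden-mean shift (which forbids the word ‘++’) as a stand-in to illustrate how a transition matrix counts admissible patterns: the number of patterns of length n is the sum of the entries of T^(n-1).

```python
import numpy as np

# Illustrative transition matrix of the golden-mean shift on {-, +}:
# rows/columns indexed by (-, +); the entry T[i][j] = 1 iff symbol j
# may follow symbol i.  This matrix is an example, not one of the
# transition matrices defined in the text.
T = np.array([[1, 1],
              [1, 0]], dtype=np.int64)

def num_patterns(T, n):
    """Number of admissible patterns of length n: sum of entries of T^(n-1)."""
    return int(np.sum(np.linalg.matrix_power(T, n - 1)))

# Pattern counts grow geometrically, at rate rho(T) (the spectral radius).
counts = [num_patterns(T, n) for n in range(1, 7)]
```

For this example the counts follow the Fibonacci recursion, reflecting that each admissible pattern extends by ‘−’ always and by ‘+’ only after ‘−’.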
Theorem 2.2 Suppose for some , . and are the transition matrices of and , respectively. Then
where for any nonnegative matrix . Moreover, the topological entropy of Y is
where and are the spectral radii of and , respectively.
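The spectral radii in the entropy formula can be computed numerically. A minimal sketch, again using the golden-mean transition matrix as a stand-in: for a subshift whose transition matrix T is primitive, the topological entropy equals the logarithm of the spectral radius of T.

```python
import numpy as np

# Golden-mean example matrix (an assumption, not a matrix from the text).
T = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Spectral radius = largest eigenvalue modulus; here the golden ratio.
rho = max(abs(np.linalg.eigvals(T)))
entropy = float(np.log(rho))
```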
It is also natural to elucidate the influence of boundary conditions on the exact number of patterns of length n and on the topological entropy. Three types of boundary conditions are considered: periodic, Neumann, and Dirichlet. To reflect the influence of the boundary conditions, we introduce three boundary matrices. Let
The periodic boundary matrix is a matrix defined by
The Neumann boundary condition imposes zero flux on both sides of the space. The left and right Neumann boundary matrices are then defined by
respectively. Furthermore, the Dirichlet boundary condition indicates that both sides of the space are constant states and the corresponding boundary matrices are
Herein and relate to the states ‘−’ (i.e., ) and ‘+’ (i.e., ), respectively. Before presenting the formulae for and under the boundary condition , we introduce two operations on matrices.
Definition 2.3
1. Suppose that is a matrix and is an matrix. The Kronecker product is defined by
2. Suppose that are matrices. The Hadamard product is defined by
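Both operations of Definition 2.3 have direct numerical counterparts: `np.kron` computes the Kronecker product, and `*` on equally shaped arrays is the entrywise Hadamard product. The matrices below are arbitrary examples.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Kronecker product: block matrix whose (i, j) block is A[i, j] * B.
kron = np.kron(A, B)

# Hadamard product: entrywise product; A and B must share a shape.
hadamard = A * B
```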
With the boundary matrices and the Kronecker and Hadamard products in hand, we obtain Theorem 2.4, which gives the formulae for the exact number of patterns and the topological entropy under the three kinds of boundary conditions. The extension of Theorem 2.4 to general constant CNNs is demonstrated by Lemma 2.12 and Theorem 2.13.
Theorem 2.4 Suppose for some , . and are the transition matrices of and , respectively. Then , , if and are primitive matrices. Furthermore, the exact number of patterns of length n with boundary condition is as follows:
Herein.
Herein relate to the conditions that the patterns on the boundary are ‘−’ and ‘+’, respectively.
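As the precise boundary-matrix formulae are not reproduced here, a hedged illustration of the periodic case only: for a subshift of finite type with transition matrix T, the number of length-n patterns compatible with the periodic boundary condition equals trace(T^n), the number of closed walks of length n in the transition graph. T is again the golden-mean example.

```python
import numpy as np

# Example transition matrix (golden-mean shift), not one from the text.
T = np.array([[1, 1],
              [1, 0]], dtype=np.int64)

def periodic_count(T, n):
    """Number of length-n patterns under the periodic boundary condition:
    closed walks of length n in the transition graph, i.e., trace(T^n)."""
    return int(np.trace(np.linalg.matrix_power(T, n)))
```

For the golden-mean example these counts are the Lucas numbers.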
2.2 Stability of constant cellular neural networks
The rest of this section extends the results in Section 2.1. To keep the paper compact, we introduce the general setting for multi-dimensional inhomogeneous LDS and then concentrate on the one-dimensional case; multi-dimensional systems will be investigated in a separate paper.
A D-dimensional inhomogeneous CNN-based LDS is realized as
(6)
where , and , which is a finite subset of , indicates the neighborhood of neuron . The piecewise linear function is called the output function; refers to the threshold, and the feedback template stores the weights of the local interactions between neurons, where .
An inhomogeneous CNN-based LDS is called a constant CNN if the neighborhood , the template , and z are periodic up to shifts. More precisely, there exists such that , , and satisfy , , and for , where
It is seen that constant CNNs generalize the classical CNNs introduced in [1, 35]. More precisely, a classical CNN is a constant CNN with . A one-dimensional constant CNN is described in the following form:
(7)
where and . Without loss of generality, we assume for some , . In this case, the feedback template of (7) is , where . A stationary solution is called a mosaic solution if for all , and is called a mosaic pattern. A system of ordinary differential equations is said to be completely stable if every trajectory tends to an equilibrium point. Theorem 2.5 shows that a constant CNN is a completely stable system. (We remark that Theorem 2.5 is an extension of Theorem 2.1.)
Theorem 2.5 Suppose that is the template of (7) and the system is written as
Then a constant CNN is completely stable if, for, one of the following conditions is satisfied.
(1) is symmetric.
(2) is nonsingular and , where is defined in (10).
Let be a finite index set. The one-dimensional lattice ℤ can be decomposed into ℓ non-overlapping subspaces
Equation (7) can then be restated as
(8)
(It is easily seen that . We reindex the coordinates of the neurons to clarify the upcoming investigation.) To prove Theorem 2.5, we consider two kinds of feedback templates separately. For the case of a symmetric feedback template, Forti and Tesi demonstrated that a classical CNN is completely stable.
Theorem 2.6 ([36])
A classical CNN with symmetric feedback template is completely stable.
For the case where the feedback template is not symmetric, suppose that a CNN with n neurons is described as follows:
where , A is an constant matrix with diagonal elements satisfying
is a diagonal mapping from to , and is a constant vector. Takahashi and Chua proposed a criterion to determine whether a CNN is completely stable.
Theorem 2.7 ([37])
Let K be an matrix satisfying
(10)
for . A classical CNN with asymmetric feedback template is completely stable if K is nonsingular and ; herein a matrix means that for all i, j.
It comes immediately from Theorem 2.7 that if the feedback template of a CNN is asymmetric, then the system is completely stable provided there exists a positive constant r such that
(11)
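Since equation (10) is not reproduced here, the following is a hedged numerical sketch of a Takahashi–Chua-type test: we assume the commonly cited form in which complete stability follows when K is nonsingular and every entry of K⁻¹ is nonpositive. The matrix below is an arbitrary example, not a template from the text.

```python
import numpy as np

def satisfies_criterion(K, tol=1e-12):
    """True when K is nonsingular and K^{-1} <= 0 entrywise
    (the assumed form of the criterion in Theorem 2.7)."""
    if abs(np.linalg.det(K)) < tol:   # K must be nonsingular
        return False
    return bool((np.linalg.inv(K) <= tol).all())

# Example: a diagonally dominant matrix with negative diagonal passes.
K = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
```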
Proof of Theorem 2.5 Suppose ; in this case, a constant CNN reduces to a classical CNN. Theorem 2.6 shows that a constant CNN is completely stable if the feedback template is symmetric. Whenever is asymmetric, the system is still completely stable if the matrix K defined in (10) is nonsingular and . Equation (8) indicates that a constant CNN can be decomposed into ℓ independent CNN subsystems; the complete stability of a constant CNN thus follows from the complete stability of every subsystem. □
For a fixed template, the collection of mosaic patterns is called the output space of (7). Since the neighborhood is finite for each i, the output space is determined by the so-called admissible local patterns. Suppose that y is a mosaic pattern. For each and , the necessary and sufficient condition for is
(12)
and the necessary and sufficient condition for is
(13)
Set
The set of admissible local patterns ℬ of a constant CNN is then
Similar to the discussion in [17], the output space Y can be represented as
(Recall that in the above equation, .)
One of the important research issues in circuit theory is the learning problem: mathematically, which phenomena, and how many, the constant CNNs are capable of exhibiting. Theorem 2.9 shows that once is fixed, there are finitely many equivalence classes of templates and z that constrain the basic sets of admissible local patterns. Let be the parameter space of the classical CNNs, where . Theorem 2.8 indicates that can be partitioned into a finite number of subregions such that each subregion has the same mosaic patterns.
Theorem 2.8 ([34])
There is a positive integer and a unique set of open subregions satisfying
(i) ,
(ii) if ,
(iii) and for some k if and only if .
Here is the closure of P in .
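Since conditions (12) and (13) are not reproduced here, the following hedged sketch assumes the usual mosaic-solution inequalities for classical nearest-neighbour CNNs to illustrate the separation property: admissibility of each local pattern depends only on the signs of finitely many linear forms in the parameters (a_l, a, a_r, z), so the parameter space splits into finitely many regions sharing the same set of admissible local patterns.

```python
import itertools
import numpy as np

def admissible_patterns(a_l, a, a_r, z):
    """Set of admissible local patterns (y_left, y_center, y_right),
    under the assumed stability inequalities for a classical CNN."""
    pats = set()
    for yl, yr in itertools.product([-1, 1], repeat=2):
        u = a_l * yl + a_r * yr
        if a - 1 + z + u > 0:        # center cell can hold output +1
            pats.add((yl, 1, yr))
        if a - 1 - z - u > 0:        # center cell can hold output -1
            pats.add((yl, -1, yr))
    return frozenset(pats)

# Sampling many parameter points yields only finitely many distinct
# pattern sets: at most 2^8, since there are 8 candidate local patterns.
rng = np.random.default_rng(0)
regions = {admissible_patterns(*rng.uniform(-3, 3, 4)) for _ in range(400)}
```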
Let be the parameter space of (7). The following theorem demonstrates that is also partitioned into a finite number of equivalent subregions.
Theorem 2.9 (Separation property)
There is a positive integer K and a unique set of open subregions satisfying
(i) ,
(ii) if ,
(iii) and for some k if and only if .
Proof Similar to the proof of Theorem 2.5, a constant CNN reduces to a classical CNN whenever ; hence Theorem 2.9 holds in this case. When , the basic set of admissible local patterns of (7) is the ordered union of the basic sets of admissible local patterns . More specifically, is isomorphic to the direct product , where is the parameter space of (7)_j, the subsystem of (7) restricted to the cells . Since, for , each parameter space is partitioned into a finite number of equivalent subregions by Theorem 2.8, is then the union of a unique set of open subregions which satisfies conditions (i) to (iii). This derives the desired result. □
Let be an integer, and let Ω be a subset of the symbolic space which is invariant under the shift map defined by . Denote
which is invariant under σ. The set is called a multiple subshift if Ω is a subshift. Equation (8), together with the proof of Theorem 2.9, asserts that the output space Y of a constant CNN is decomposed into subspaces . Observe that Y is topologically conjugate to the direct product of the output spaces of the classical CNNs, that is, , where is determined by . This yields Theorem 2.10, which indicates that the output space of a constant CNN is a multiple subshift for some parameters.
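The decomposition can be pictured concretely: a point of the multiple subshift interleaves ℓ sequences, one from each factor subshift, so that reading every ℓ-th symbol of the merged sequence recovers a point of the corresponding factor. A minimal sketch (the helper name `interleave` is ours, not notation from the paper):

```python
def interleave(seqs):
    """Merge ell equal-length lists so that result[ell*k + j] = seqs[j][k]."""
    ell = len(seqs)
    n = len(seqs[0])
    return [seqs[k % ell][k // ell] for k in range(ell * n)]

# Two factor sequences over {-, +} merged into one (ell = 2);
# the slices y[0::2] and y[1::2] recover the factors.
y = interleave([['-', '+', '-'], ['+', '+', '-']])
```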
Theorem 2.10 Given a set of templates , where and . Let Y be the solution space of the constant CNN with respect to . Then
if and for , where Ω is an SFT that comes from the output space of the classical CNN with respect to template .
2.3 Boundary effect on constant cellular neural networks
This subsection elucidates the influence of the boundary condition on the exact number of mosaic patterns of finite length and on the growth rate as the length increases. The investigation starts with formulating the number of patterns. Denote by the coordinates of the neurons. In this case, the boundary sites are . For the constant CNNs on , the following three types of boundary conditions are considered:
(i) (7)_n-N: constant CNNs with Neumann boundary condition on ;
(ii) (7)_n-P: constant CNNs with periodic boundary condition on ;
(iii) (7)_n-D: constant CNNs with Dirichlet boundary condition on .
These boundary conditions are discrete analogues of the ones in PDEs; to be specific, a pattern satisfies: (i) the Neumann boundary condition if and ; (ii) the periodic boundary condition if ; (iii) the Dirichlet boundary condition if and are prescribed.
Since , the total number of patterns of finite length in a constant CNN relates to the number of patterns in the subspaces. For each , there is a transition matrix that is used to investigate the subspace (cf. [17] and Section 4). Lemma 2.11 gives the exact number of mosaic patterns of length n of a constant CNN without the influence of the boundary condition. The verification is straightforward and is omitted.
Lemma 2.11 For , write for some and . Then
where , and denotes the number of patterns of length q in X.
Let denote the collection of output patterns of length n with boundary condition B, where B = P, N, and D stand for the periodic, Neumann, and Dirichlet boundary conditions, respectively. To find the exact number , we introduce the following boundary matrices.
(i) Periodic boundary matrix . More precisely,
(ii) Dirichlet boundary matrices , , , and stands for the left/right Dirichlet boundary condition given by ‘−’ and ‘+’, respectively.
(iii) Neumann boundary matrices , . More precisely,
Here ⊗ is the Kronecker product, E is a matrix with all entries 1, I is the identity matrix, and is a column vector with all entries 1. Suppose that M is a matrix. Define by setting all the even/odd columns to zero vectors. Furthermore, denotes the matrix obtained from M by setting each of the lower-/upper-half rows to a zero vector.
Recall that the set function is defined by if and only if , where E is a nonempty subset of ℝ. For , define
(14)
It is seen that is a nonnegative integer. To clarify the formulae for the exact number of patterns of length n of constant CNNs with boundary conditions, we first introduce some notation. Suppose , where . For , set
(15)
and
(16)
Herein refers to the 1-norm of the matrix M. Lemma 2.12 demonstrates the explicit formulae of the number of patterns of length n with boundary conditions.
Lemma 2.12 Let , where . Suppose ; then the exact number with boundary condition is as follows:
(i) The periodic boundary condition:
(17)
(ii) The Dirichlet boundary condition:
(18)
where means the pattern on the boundary is ‘’.
(iii) The Neumann boundary condition:
(19)
and
(20)
otherwise.
Here is a matrix with all entries 1, and ∘ denotes the Hadamard product.
Proof We address the proof of ; the other cases can be verified analogously.
Suppose that . It is seen from Lemma 2.11 that
At the same time, indicates that for all j. A straightforward examination demonstrates that
and for . Therefore, we have
If , then and
where refers to the number of patterns with , and and are patterns of length in and , respectively. It is verified that
This derives
and completes the proof. □
Beyond the exact number of patterns of finite length, we next study the influence of boundary conditions on the growth rate of the number of patterns, that is, on the topological entropy of the output space Y. The topological entropy of a space X is defined by
(21)
The existence of comes immediately from the submultiplicativity of , which can be verified by applying Lemma 2.11. Theorem 2.13 gives the formula for the topological entropies of the constant CNNs and the relation between the topological entropies of the constant CNNs and the classical CNNs.
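Definition (21) can be checked numerically: a hedged illustration, again with the golden-mean transition matrix as a stand-in, approximating the entropy by (1/n) log(number of length-n patterns) and comparing with log of the spectral radius, to which the limit converges when the transition matrix is primitive.

```python
import numpy as np

# Example transition matrix (golden-mean shift), an assumption here.
T = np.array([[1.0, 1.0],
              [1.0, 0.0]])

def entropy_estimate(T, n):
    """(1/n) * log of the number of length-n patterns (definition (21))."""
    count = np.sum(np.linalg.matrix_power(T, n - 1))
    return float(np.log(count) / n)

rho = max(abs(np.linalg.eigvals(T)))      # spectral radius
err = abs(entropy_estimate(T, 60) - np.log(rho))
```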
Theorem 2.13 . Moreover, for provided is mixing for all .
Proof For , there exists a unique such that
Lemma 2.11 infers that
Applying the squeeze theorem, we have
This completes the first part of the proof.
To evaluate the boundary effect on the topological entropy of Y, we demonstrate that ; the other cases can be done analogously. Let τ denote the smallest integer such that for ; that is, for . According to the definition of ,
Suppose . Lemma 2.12 implies
if , and
otherwise. On the other hand, it is easily checked that
The above observations yield
and thus we have . □
The following theorem comes immediately from Theorem 2.13; the proof is omitted.
Theorem 2.14 The set of topological entropies of the constant CNNs is dense in the closed interval . More precisely, given and , there exists a constant CNN such that .