
# Multiple solutions of fourth-order difference equations with different boundary conditions

## Abstract

In the present paper, a class of fourth-order nonlinear difference equations with Dirichlet boundary conditions or periodic boundary conditions is considered. Based on the invariant sets of descending flow in combination with the mountain pass lemma, we establish a series of sufficient conditions on the existence of multiple solutions for these boundary value problems. In addition, some examples are provided to demonstrate the applicability of our results.

## Introduction

Given an integer $$N>1$$, $$[1, N]$$ denotes the discrete interval $$\{1, 2, \ldots, N\}$$. Consider the fourth-order nonlinear difference equation

$$\Delta ^{4} x(n-2)=f\bigl(n, x(n)\bigr),\quad n\in [1, N] ,$$
(1.1)

with Dirichlet boundary conditions

$$x(-1)=x(0)=0=x(N+1)=x(N+2)$$
(1.2)

or periodic boundary conditions

$$\Delta ^{i} x(-1)=\Delta ^{i} x(N-1),\quad i=0, 1, 2, 3.$$
(1.3)

Here $$f\in C([1, N]\times \mathbf{R}, \mathbf{R})$$, $$F(n, x)=\int _{0} ^{x} f(n, s)\,ds$$ and $$F(n, 0)=0$$. Δ is the forward difference operator and $$\Delta x(n)=x(n+1)-x(n)$$, $$\Delta ^{0} x(n)=x(n)$$. For $$i\geq 1$$, $$\Delta ^{i} x(n)=\Delta (\Delta ^{i-1} x(n))$$.
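As a quick sanity check of these definitions, the following sketch (the sample sequence and its domain are illustrative assumptions, not taken from the paper) confirms that iterating the forward difference four times reproduces the five-point stencil $$\Delta ^{4} x(n-2)=x(n+2)-4x(n+1)+6x(n)-4x(n-1)+x(n-2)$$ used throughout.

```python
# Numerical sanity check of the forward difference operator; the sample
# sequence and its domain [-2, 4] are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = {n: rng.standard_normal() for n in range(-2, 5)}  # x(n) on [-2, 4]

def delta_i(seq, n, i):
    """i-th forward difference: Delta^0 x(n) = x(n), Delta^i x(n) = Delta(Delta^{i-1} x(n))."""
    if i == 0:
        return seq[n]
    return delta_i(seq, n + 1, i - 1) - delta_i(seq, n, i - 1)

def stencil4(seq, n):
    """Expanded form Delta^4 x(n-2) = x(n+2) - 4x(n+1) + 6x(n) - 4x(n-1) + x(n-2)."""
    return seq[n + 2] - 4*seq[n + 1] + 6*seq[n] - 4*seq[n - 1] + seq[n - 2]

for n in (0, 1, 2):
    assert abs(delta_i(x, n - 2, 4) - stencil4(x, n)) < 1e-12
```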

Equation (1.1) can be regarded as a discrete analogue of the continuous equation

$$x^{(4)}(t)=f\bigl(t, x(t)\bigr), \quad t \in \mathbf{R},$$

which is used to describe the stationary states of the deflection of an elastic beam. As for difference equation (1.1), the existence of periodic solutions with minimal period has been established by employing variational techniques and the linking theorem. Using Dancer’s global bifurcation theorem, the existence and multiplicity of positive solutions of (1.1) have been shown in the form of

$$\Delta ^{4}x(n-2)=\lambda h(n)f\bigl(x(n)\bigr),\quad n\in [2, N].$$

With the rapid development of computer technology and the theory of nonlinear difference equations, difference equations have been widely used to study discrete models in many fields such as finance and insurance, computing, electrical circuit analysis, dynamical systems, physics, and biology; see [4, 5] and the references therein. Importantly, much literature and many monographs deal with problems of the existence and multiplicity of solutions by using various methods, such as critical point theory [6,7,8], topological degree theory, and fixed-point index theory. Recently, some works [11, 12] studied solutions of $$\phi _{c}$$-Laplacian difference equations. For more research on solutions of difference equations, we refer to [13,14,15,16,17,18,19,20,21, 28,29,30,31].

To the best of our knowledge, there are few studies on sign-changing solutions of fourth-order difference equations. In 2015, He, Zhou et al. obtained the existence of sign-changing solutions for the following periodic boundary value problem:

$$\textstyle\begin{cases} -\Delta [p(n-1) \Delta x(n-1)]+q(n)x(n)= f(n, x(n)), \quad n\in [1, N], \\ x(0)=x(N),\qquad \Delta x(0)=\Delta x(N), \end{cases}$$

by applying invariant sets of descending flow. Furthermore, [23,24,25] deal with other second-order nonlinear boundary value problems and obtain sign-changing solutions in a similar way.

Motivated by the above works, the aims of this paper are as follows. Based on the invariant sets of descending flow, the mountain pass lemma, and variational methods, we establish a series of sufficient conditions on the existence of multiple solutions, including positive solutions, negative solutions, and sign-changing solutions, for the Dirichlet boundary value problem (1.1) with (1.2) and the periodic boundary value problem (1.1) with (1.3). To demonstrate the applicability of our results, some examples are provided. Here and hereafter, a positive (negative) solution $$x(n)$$ to (1.1) with (1.2) or (1.1) with (1.3) is a sequence $$\{x(n)\}$$ that satisfies Eq. (1.1) and the boundary conditions (1.2) or (1.3) with $$x(n)>0$$ ($$x(n)<0$$) for all $$n\in [1, N]$$. If $$\{x(n)\}$$ contains both positive and negative components, we call $$x(n)$$ a sign-changing solution.

Next, we give some known results which are critical for the proofs of our main results.

### Definition 1.1


Let H be a Banach space. The functional $$I\in C^{1}(H, \mathbf{R})$$ is said to satisfy the Palais–Smale condition ((PS) condition for short) if any sequence $$\{x_{n}\}\subset H$$ satisfying $$|I(x_{n})|\leq c$$ for some $$c>0$$ and $$I'(x_{n})\rightarrow 0$$ as $$n\rightarrow \infty$$ possesses a convergent subsequence.

### Definition 1.2


Let H be a Banach space and $$I\in C^{1}(H, \mathbf{R})$$. If any sequence $$\{x_{n}\}\subset H$$ with $$I(x_{n})\rightarrow c$$ for some $$c\in \mathbf{R}$$ and $$(1+\|x_{n}\|) \|I'(x_{n})\|\rightarrow 0$$ as $$n\rightarrow \infty$$ has a convergent subsequence in H, then we say that I satisfies the Cerami condition ((C) condition for short).

Let $$B_{r}$$ denote the open ball in H with radius r and center 0, and $$\partial B_{r}$$ be its boundary.

### Lemma 1.1

(Mountain pass lemma)

Let H be a real Banach space, and let $$I\in C^{1}(H, \mathbf{R})$$ satisfy the $$(PS)$$ condition. If $$I(0)=0$$ and the following conditions hold:

1. (i)

there exist constants $$r>0$$ and $$\rho >0$$ such that $$I(x)\geq \rho$$ for all $$x\in \partial B_{r}$$;

2. (ii)

there exists $$x_{0}\in H \setminus B_{r}$$ such that $$I(x_{0}) \leq 0$$,

then I has a critical value $$c\geq \rho$$, and c can be characterized as

$$c=\inf_{h \in \varGamma } \max_{s \in [0, 1]} I\bigl(h(s)\bigr),$$

where

$$\varGamma =\bigl\{ h \in C\bigl([0, 1], H\bigr)|h(0)=0, h(1)=x_{0}\bigr\} .$$

### Lemma 1.2


Let $$I\in C^{1}(H, \mathbf{R})$$ be a functional defined on a Hilbert space H which satisfies the $$(PS)$$ condition and $$I'(x)=x-S(x)$$ for all $$x\in H$$. If there are open convex subsets $$D_{1}$$ and $$D_{2}$$ of H satisfying $$S(\partial D_{1})\subset D_{1}$$, $$S(\partial D_{2})\subset D_{2}$$ and $$D_{1} \cap D_{2}\neq \emptyset$$, and, moreover, there is a path $$h:[0, 1]\rightarrow H$$ such that

$$h(0)\in D_{1} \setminus D_{2},\qquad h(1)\in D_{2} \setminus D_{1},$$

and

$$\inf_{x\in \overline{D_{1}}\cap \overline{D_{2}}}I(x)> \sup_{\tau \in [0,1]}I\bigl(h(\tau ) \bigr),$$

then I has at least four critical points, one in $$H\setminus (\overline{D _{1}} \cup \overline{D_{2}})$$, one in $$D_{1} \setminus \overline{D _{2}}$$, one in $$D_{1} \cap D_{2}$$, and one in $$D_{2} \setminus \overline{D _{1}}$$.

### Remark 1.1

A known result (Theorem 5.1 in the cited literature) tells us that the (PS) condition can be replaced by the weaker $$(C)$$ condition in Lemma 1.2.

The rest of the paper is organized as follows. In Sect. 2, BVP (1.1) with (1.2) is considered, and a series of sufficient conditions are established to ensure the existence of multiple solutions including positive solutions, negative solutions and sign-changing solutions by variational methods together with invariant sets of descending flow. In a similar way, Sect. 3 achieves some results on BVP (1.1) with (1.3). Finally, three examples to illustrate the applicability of our theoretical results are provided in Sect. 4.

## Multiple solutions for BVP (1.1) with (1.2)

Given a constant $$m\geq 0$$, define the inner product of $$M_{1}$$ as

$$\langle x, y\rangle _{m}=\sum_{n=0}^{N+1} \Delta ^{2}x(n-1)\, \Delta ^{2}y(n-1)+m \sum_{n=1}^{N} x(n)y(n), \quad\forall x, y \in M_{1},$$

where $$M_{1}=\{x:[-1, N+2] \rightarrow \mathbf{R}| x(-1)=x(0)=0=x(N+1)=x(N+2) \}$$. Then $$M_{1}$$ is an N-dimensional Hilbert space and the induced norm is

$$\Vert x \Vert _{m}= \Biggl(\sum_{n=0}^{N+1} \bigl\vert {\Delta ^{2}}x(n-1) \bigr\vert ^{2}+m \sum _{n=1}^{N} \bigl\vert x(n) \bigr\vert ^{2} \Biggr)^{\frac{1}{2}}, \quad\forall x \in M_{1}.$$

Let H be an N-dimensional Hilbert space with the common inner product $$(\cdot, \cdot )$$ and norm $$\|\cdot \|$$. It follows that $$M_{1}$$ and H are isomorphic, and the norm $$\|\cdot \|_{m}$$ is equivalent to $$\|\cdot \|$$.

For BVP (1.1) with (1.2), we consider the following functional $$I_{1}: H\rightarrow \mathbf{R}$$:

$$I_{1}(x)=\frac{1}{2}\sum _{n=0}^{N+1} \bigl\vert {\Delta ^{2}}x(n-1) \bigr\vert ^{2}- \sum_{n=1}^{N} F\bigl(n, x(n)\bigr).$$
(2.1)

Write

\begin{aligned} A_{1}= \begin{pmatrix} 6 &-4 &1 & \cdots &0 &0 &0 \\ -4 &6 &-4 & \cdots &0 &0 &0 \\ 1 &-4 &6 & \cdots &0 &0 &0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 &0 &0 & \cdots &6 &-4 &1 \\ 0 &0 &0 & \cdots &-4 &6 &-4 \\ 0 &0 &0 & \cdots &1 &-4 &6 \end{pmatrix}_{N\times N}, \end{aligned}

then (2.1) can be rewritten as

$$I_{1}(x)=\frac{1}{2}(A_{1}x, x)-\sum _{n=1}^{N} F\bigl(n, x(n)\bigr),$$

where $$x=(x(1), x(2), \ldots, x(N))^{\tau }\in H$$.

It is not difficult to verify that $$A_{1}$$ is a positive definite matrix. Let the eigenvalues of $$A_{1}$$ be $$\lambda _{1}, \lambda _{2}, \ldots, \lambda _{N}$$; then $$\lambda _{j}>0$$ $$(j=1, 2, \ldots, N)$$. Without loss of generality, we can assume that

$$0< \lambda _{1} \leq \lambda _{2} \leq \cdots \leq \lambda _{N}.$$

### Remark 2.1

Obviously, any eigenvalue λ of the linear eigenvalue problem

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)=\lambda x(n),\quad n\in [1, N], \\ x(-1)=x(0)=0=x(N+1)=x(N+2), \end{cases}$$
(2.2)

corresponding to BVP (1.1) with (1.2) is exactly an eigenvalue of the matrix $$A_{1}$$.
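Remark 2.1 can be probed numerically. The sketch below (with an illustrative size N) assembles $$A_{1}$$, confirms that the stencil $$\Delta ^{4}x(n-2)$$ under the Dirichlet padding $$x(-1)=x(0)=0=x(N+1)=x(N+2)$$ agrees with $$A_{1}x$$, and verifies that all eigenvalues are positive.

```python
import numpy as np

N = 6  # illustrative size; any N > 1 works

def A1_matrix(N):
    """Pentadiagonal matrix A1: diagonal 6, first off-diagonals -4, second off-diagonals 1."""
    return (6*np.eye(N) - 4*np.eye(N, k=1) - 4*np.eye(N, k=-1)
            + np.eye(N, k=2) + np.eye(N, k=-2))

A1 = A1_matrix(N)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
xp = np.concatenate(([0.0, 0.0], x, [0.0, 0.0]))  # Dirichlet padding (1.2); xp[j] stores x(j-1)

# Delta^4 x(n-2) for n = 1..N agrees with the matrix-vector product A1 x
d4 = np.array([xp[k+4] - 4*xp[k+3] + 6*xp[k+2] - 4*xp[k+1] + xp[k] for k in range(N)])
assert np.allclose(d4, A1 @ x)

# A1 is symmetric positive definite, so 0 < lambda_1 <= ... <= lambda_N (Remark 2.1)
lam = np.sort(np.linalg.eigvalsh(A1))
assert lam[0] > 0
```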

Now we state the main results of this section.

### Theorem 2.1

Assume that:

$$(F_{1})$$ :

$$\max_{n\in [1, N]}\limsup_{x\rightarrow 0}|\frac{f(n, x)}{x}|<\lambda _{1}$$;

$$(F_{2})$$ :

$$\lim_{|x|\rightarrow \infty } \frac{f(n, x)}{x}=l$$ where either constant $$l>\lambda _{2}$$ or $$l=+\infty$$ with $$|f(n, x)|\leq C(1+|x|^{s-1})$$ for all $$n\in [1, N]$$ and some $$s>2$$, $$C>0$$;

$$(F_{3})$$ :

$$\lim_{|x|\rightarrow \infty }[xf(n, x)-2F(n, x)]=\infty$$, $$\forall n\in [1, N]$$.

Then we have

1. (i)

if $$(F_{1})$$ and $$(F_{2})$$ are satisfied and $$l\in (\lambda _{2}, +\infty ]$$ is not an eigenvalue of (2.2), then BVP (1.1) with (1.2) has at least three nontrivial solutions: one sign-changing, one positive, and one negative;

2. (ii)

if $$(F_{1})$$–$$(F_{3})$$ are satisfied, the conclusion of (i) remains true even if l is an eigenvalue of (2.2).

### Theorem 2.2

If $$\liminf_{|x|\rightarrow \infty }\frac{f(n, x)}{x}>\lambda _{1}$$ and $$\limsup_{x\rightarrow 0}\frac{f(n, x)}{x}<\lambda _{1}$$, then BVP (1.1) with (1.2) possesses at least one positive solution and one negative solution. More precisely,

1. (i)

when $$\liminf_{x\rightarrow -\infty } \frac{f(n, x)}{x}>\lambda _{1}$$ and $$\limsup_{x\rightarrow {0^{-}}}\frac{f(n, x)}{x}<\lambda _{1}$$, BVP (1.1) with (1.2) admits at least one negative solution;

2. (ii)

when $$\liminf_{x\rightarrow +\infty } \frac{f(n, x)}{x}>\lambda _{1}$$ and $$\limsup_{x\rightarrow {0^{+}}} \frac{f(n, x)}{x}<\lambda _{1}$$, BVP (1.1) with (1.2) admits at least one positive solution.

In the following, we make the preparations needed to prove our main results.

Consider the following problem to obtain Green’s function of BVP (1.1) with (1.2):

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)-m{\Delta ^{2}}x(n-1)=h(n), \quad n\in [1, N], \\ x(-1)=x(0)=0=x(N+1)=x(N+2), \end{cases}$$
(2.3)

here $$h: [1, N]\rightarrow \mathbf{R}$$. Define

\begin{aligned} B_{1}= \begin{pmatrix} 2 &-1 &0 & \cdots &0 &0 &0 \\ -1 &2 &-1 & \cdots &0 &0 &0 \\ 0 &-1 &2 & \cdots &0 &0 &0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 &0 &0 & \cdots &2 &-1 &0 \\ 0 &0 &0 & \cdots &-1 &2 &-1 \\ 0 &0 &0 & \cdots &0 &-1 &2 \end{pmatrix}_{N\times N}, \end{aligned}

then (2.3) is equivalent to the system $$(A_{1}+mB_{1})x=h$$. Therefore, the unique solution of (2.3) can be expressed as

$$x=(A_{1}+mB_{1})^{-1} h.$$
(2.4)
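The equivalence between (2.3) and the linear system $$(A_{1}+mB_{1})x=h$$ is easy to test directly; in this sketch N, m, and the right-hand side h are illustrative choices.

```python
import numpy as np

N, m = 6, 0.5  # illustrative choices with m >= 0
A1 = (6*np.eye(N) - 4*np.eye(N, k=1) - 4*np.eye(N, k=-1)
      + np.eye(N, k=2) + np.eye(N, k=-2))
B1 = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

rng = np.random.default_rng(1)
h = rng.standard_normal(N)
x = np.linalg.solve(A1 + m*B1, h)  # unique solution (2.4)

# Verify Delta^4 x(n-2) - m Delta^2 x(n-1) = h(n) under the Dirichlet padding (1.2)
xp = np.concatenate(([0.0, 0.0], x, [0.0, 0.0]))  # xp[j] stores x(j-1)
d4 = np.array([xp[k+4] - 4*xp[k+3] + 6*xp[k+2] - 4*xp[k+1] + xp[k] for k in range(N)])
d2 = np.array([xp[k+3] - 2*xp[k+2] + xp[k+1] for k in range(N)])
assert np.allclose(d4 - m*d2, h)
```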

Noting that $$X(r)=r^{2}-mr=0$$ has the two roots $$r_{1}=m$$ and $$r_{2}=0$$, we can write

$${\Delta ^{4}}x(n-2)-m{\Delta ^{2}}x(n-1)=\bigl(-{\Delta ^{2}} L+{r_{2}}\bigr) \bigl(- {\Delta ^{2}} L+{r_{1}}\bigr)x(n)=\bigl(-{\Delta ^{2}} L+{r_{1}} \bigr) \bigl(-{\Delta ^{2}} L+ {r_{2}}\bigr)x(n),$$

where $$x= (x(-1), x(0), \ldots, x(N+1), x(N+2))$$, $$Lx(n)=x(n-1)$$, $$n \in [1, N]$$.

Now we have the following lemma.

### Lemma 2.1

For $$i \in \{1, 2\}$$, the unique solution of BVP

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)+r_{i}x(n)=h(n), \quad n\in [1, N], \\ x(0)=0, \qquad x(N+1)=0, \end{cases}$$
(2.5)

has the form

$$x(n)=\sum_{s=1}^{N} {G_{i}}(n, s)h(s),\quad n \in [0, N+1],$$

with

\begin{aligned} &G_{1}(n, s)= \textstyle\begin{cases} \frac{(P^{s-N}-P^{N-s+2})(P^{n}-P^{-n})}{(1-P^{2})(P^{N+1}-P^{-N-1})}, & 0 \le n \le s \le N + 1, \\ \frac{(P^{n-N}-P^{N-n+2})(P^{s}-P^{-s})}{(1-P^{2})(P^{N+1}-P^{-N-1})}, & 0 \le s \le n \le N + 1, \end{cases}\displaystyle \\ &G_{2}(n, s)= \textstyle\begin{cases} \frac{n(N+1-s)}{N+1}, & 0 \le n \le s \le N + 1, \\ \frac{s(N+1-n)}{N+1}, & 0 \le s \le n \le N + 1, \end{cases}\displaystyle \end{aligned}

and $$P=\frac{(2+m)+\sqrt{(2+m)^{2}-4}}{2}$$.

### Proof

(i) When $$i=1$$, BVP (2.5) corresponds to

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)+mx(n)=h(n),\quad n\in [1, N], \\ x(0)=0,\qquad x(N+1)=0. \end{cases}$$
(2.6)

Consider the homogeneous equation of (2.6),

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)+mx(n)=0, \quad n\in [1, N], \\ x(0)=0, \qquad x(N+1)=0. \end{cases}$$
(2.7)

Then the corresponding characteristic equation to (2.7) is

$${P^{2}}- ({2+m} )P+1=0,$$

which means that $$P_{1, 2}=\frac{(2+m)\pm \sqrt{(2+m)^{2}-4}}{2}$$ are its two roots. Noting that $$P_{1}=\frac{1}{P_{2}}$$, we may denote

$$P=\frac{(2+m)+\sqrt{(2+m)^{2}-4}}{2} \quad\text{and}\quad P^{-1}=\frac{(2+m)- \sqrt{(2+m)^{2}-4}}{2}.$$

Therefore, the general solution of (2.6) can be expressed by

$$x(n)=c_{1}(n)P^{n}+c_{2}(n)P^{-n}.$$
(2.8)

Next, to determine the coefficients $$c_{1}(n)$$ and $$c_{2}(n)$$ in (2.8), we substitute (2.8) into (2.6) and get

$$\textstyle\begin{cases} \Delta c_{1}(n-1)P^{n}+\Delta c_{2}(n-1)P^{-n}=0, \\ \Delta c_{1}(n-1)P ^{n+1}+\Delta c_{2}(n-1)P^{-n-1}=-h(n). \end{cases}$$

Direct computation yields

$$c_{1}(n)=c_{1}(0)+\sum_{s=1}^{n} \frac{h(s)P^{-s+1}}{1-P^{2}},\qquad c_{2}(n)=c_{2}(0)+\sum _{s=1}^{n} \frac{-h(s)P^{s+1}}{1-P ^{2}}.$$

Thus (2.8) can be rewritten in the form of

$$x(n)=c_{1}(0)P^{n}+c_{2}(0)P^{-n}+\sum _{s=1}^{n} \frac{P^{-s+1+n}-P ^{s+1-n}}{1-P^{2}}h(s).$$

Applying the boundary conditions $$x(0)=0$$ and $$x(N+1)=0$$, we have

$$c_{2}(0)=-c_{1}(0) \quad{\text{and}}\quad c_{1}(0)=\sum _{s=1} ^{N} \frac{P^{s-N}-P^{N-s+2}}{(1-P^{2})(P^{N+1}-P^{-N-1})}h(s),$$

then

$$x(n)=\sum_{s=1}^{N} \frac{(P^{s-N}-P^{N-s+2})(P^{n}-P^{-n})}{(1-P ^{2})(P^{N+1}-P^{-N-1})}h(s)+ \sum_{s=1}^{n} \frac{P^{n-s+1}-P ^{s-n+1}}{1-P^{2}}h(s).$$

Denote

$$G_{1}(n, s)= \textstyle\begin{cases} \frac{(P^{s-N}-P^{N-s+2})(P^{n}-P^{-n})}{(1-P^{2})(P^{N+1}-P^{-N-1})},& 0 \le n \le s \le N+1, \\ \frac{(P^{n-N}-P^{N-n+2})(P^{s}-P^{-s})}{(1-P^{2})(P^{N+1}-P^{-N-1})}, & 0 \le s \le n \le N+1, \end{cases}$$

hence the unique solution of (2.6) is in the form of

$$x(n)=\sum_{s=1}^{N} G_{1}(n, s)h(s),\quad n\in [0, N+1].$$

(ii) When $$i=2$$, the general solution of the BVP

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)=h(n), \quad n\in [1, N], \\ x(0)=0, \qquad x(N+1)=0, \end{cases}$$
(2.9)

is given by

$$x(n)=c_{1}(0)+nc_{2}(0)+\sum_{s=1}^{n} (s-n)h(s).$$

Employing the boundary conditions, we get

$$x(n)=\sum_{s=1}^{N} \frac{n(N+1-s)}{N+1}h(s)+ \sum_{s=1} ^{n} (s-n)h(s).$$

Then the unique solution of (2.9) can be written as

$$x(n)=\sum_{s=1}^{N} {G_{2}}(n, s)h(s),\quad n \in [0, N+1],$$

where

$$G_{2}(n, s)= \textstyle\begin{cases} \frac{n(N+1-s)}{N+1}, & 0 \le n \le s \le N+1, \\ \frac{s(N+1-n)}{N+1}, & 0 \le s \le n \le N+1. \end{cases}$$

□

### Remark 2.2

For $$i=1, 2$$ and any $$n, s\in [1, N]$$, it is easy to verify $${G_{i}}(n, s) = {G_{i}}(s, n) > 0$$.
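The closed forms of $$G_{1}$$ and $$G_{2}$$ can be validated numerically: for the second-order problems (2.5), the resulting Green matrices should coincide with $$(B_{1}+mI)^{-1}$$ and $$B_{1}^{-1}$$, respectively, and the symmetry and positivity of Remark 2.2 follow. The size N and the value of m below are illustrative assumptions.

```python
import numpy as np

N, m = 6, 0.5                                  # illustrative; m > 0 so that P > 1
P = ((2 + m) + np.sqrt((2 + m)**2 - 4)) / 2    # root of P^2 - (2+m)P + 1 = 0

def G1(n, s):
    n, s = min(n, s), max(n, s)                # the two branches are the symmetric extension
    return ((P**(s - N) - P**(N - s + 2)) * (P**n - P**(-n))
            / ((1 - P**2) * (P**(N + 1) - P**(-N - 1))))

def G2(n, s):
    n, s = min(n, s), max(n, s)
    return n * (N + 1 - s) / (N + 1)

B1 = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # matrix of -Delta^2 with the (2.5) BCs
M1 = np.array([[G1(n, s) for s in range(1, N + 1)] for n in range(1, N + 1)])
M2 = np.array([[G2(n, s) for s in range(1, N + 1)] for n in range(1, N + 1)])

assert np.allclose(M1, np.linalg.inv(B1 + m*np.eye(N)))  # i = 1 in Lemma 2.1
assert np.allclose(M2, np.linalg.inv(B1))                # i = 2 in Lemma 2.1
for M in (M1, M2):
    assert np.allclose(M, M.T) and (M > 0).all()         # Remark 2.2
```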

Combining $${\Delta ^{4}} x(n-2)-m{\Delta ^{2}} x(n-1)=(-{\Delta ^{2}} L+ {r_{2}})(-{\Delta ^{2}} L+{r_{1}})x(n)$$ with Lemma 2.1 yields the following.

### Lemma 2.2

$$x(n)=\sum_{s=1}^{N} \Biggl(\sum _{j=1}^{N} {G_{1}}(n, j) {G_{2}}(j, s) \Biggr)h(s)=\sum_{s=1}^{N} \Biggl(\sum_{j=1}^{N} {G_{2}}(n, j){G_{1}}(j,s) \Biggr)h(s),\quad n\in [1, N],$$

is the unique solution of BVP (2.3).

### Proof

Making use of Lemma 2.1, we see that both

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)+r_{1} x(n)=y(n), \quad n\in [1, N], \\ x(0)=0, \qquad x(N+1)=0, \end{cases}$$

and

$$\textstyle\begin{cases} -\Delta ^{2} y(n-1)+r_{2} y(n)=h(n), \quad n\in [1, N], \\ y(0)=0, \qquad y(N+1)=0, \end{cases}$$

have exactly one solution, namely,

$$x(n)=\sum_{j=1}^{N} {G_{1}}(n, j)y(j),\quad n \in [0, N+1],$$

and

$$y(n)=\sum_{s=1}^{N} {G_{2}}(n, s)h(s),\quad n \in [0, N+1],$$

respectively.

Furthermore,

$${\Delta ^{4}} x(n-2)-m{\Delta ^{2}} x(n-1)=\bigl(-{\Delta ^{2}} L+{r_{2}}\bigr) \bigl(- {\Delta ^{2}} L+{r_{1}}\bigr)x(n).$$

Thus we find that (2.3) has a unique solution,

$$x(n)=\sum_{j=1}^{N} {G_{1}}(n, j) \Biggl(\sum_{s=1}^{N} {G_{2}}(j, s)h(s) \Biggr)=\sum_{s=1}^{N} \Biggl(\sum_{j=1}^{N} {G_{1}}(n, j){G_{2}}(j, s) \Biggr)h(s),\quad n \in [1, N].$$

Similarly,

$$x(n)=\sum_{s=1}^{N} \Biggl(\sum _{j=1}^{N} {G_{2}}(n, j) {G_{1}}(j, s) \Biggr)h(s),\quad n \in [1, N].$$

This completes the proof of Lemma 2.2. □
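Lemma 2.2 applies the two second-order Green operators in either order. The sketch below (illustrative N and m; the Green matrices are taken as the inverses identified in Lemma 2.1) checks the two stage-wise equations and confirms that the two orders of composition agree, since both matrices are functions of $$B_{1}$$ and hence commute.

```python
import numpy as np

N, m = 6, 0.5
B1 = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
M1 = np.linalg.inv(B1 + m*np.eye(N))  # Green matrix of -Delta^2 x + m x (Lemma 2.1, i = 1)
M2 = np.linalg.inv(B1)                # Green matrix of -Delta^2 x       (Lemma 2.1, i = 2)

rng = np.random.default_rng(2)
h = rng.standard_normal(N)

y = M2 @ h   # solves -Delta^2 y(n-1) = h(n) with y(0) = y(N+1) = 0
x = M1 @ y   # solves -Delta^2 x(n-1) + m x(n) = y(n) with x(0) = x(N+1) = 0
assert np.allclose(B1 @ y, h)
assert np.allclose((B1 + m*np.eye(N)) @ x, y)

# The two orders of composition give the same result: M1 and M2 commute
assert np.allclose(M1 @ (M2 @ h), M2 @ (M1 @ h))
```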

Define $$f_{m}$$, $$K_{m}: H\rightarrow H$$ as

\begin{aligned} &(f_{m}x) (n)=f\bigl(n, x(n)\bigr)-m\Delta ^{2} x(n-1), \\ &(K_{m}x) (n)=\sum_{s=1}^{N} \sum_{j=1}^{N} G_{2}(n, j)G_{1}(j, s)x(s). \end{aligned}

In view of the definition of $$K_{m}$$, together with (2.4) and Lemma 2.2, we have

$$K_{m}=(A_{1}+mB_{1})^{-1}.$$
(2.10)

### Remark 2.3

Let the completely continuous operator $$S_{m}: H\rightarrow H$$ be $$S_{m}=K_{m}f_{m}$$. Lemma 2.2 implies that $$x=\{x(n)\}_{n=-1} ^{N+2}$$ satisfies BVP (1.1) with (1.2) if and only if $$x=\{x(n)\}_{n=1} ^{N}\in H$$ satisfies $$S_{m}(x)=x$$.

### Lemma 2.3

The functional $$I_{1}$$ defined as (2.1) is Fréchet differentiable on H, and the critical points of $$I_{1}$$ are exactly the fixed points of $$S_{m}$$.

### Proof

For any $$x, y\in H$$, by the Lagrange mean value theorem there exists $$\theta (n)\in (0, 1)$$, $$n\in [1, N]$$, such that

\begin{aligned} & I_{1}(x+y)-I_{1}(x) \\ &\quad=\frac{1}{2} \sum_{n=0}^{N+1} \bigl\vert \Delta ^{2} (x+y) (n-1) \bigr\vert ^{2}- \sum _{n=1}^{N} F\bigl(n, (x+y) (n)\bigr)\\ &\qquad{}- \frac{1}{2} \sum_{n=0} ^{N+1} \bigl\vert \Delta ^{2} x(n-1) \bigr\vert ^{2}+\sum _{n=1}^{N} F\bigl(n, x(n)\bigr) \\ &\quad=\frac{1}{2} \sum_{n=0}^{N+1} \bigl\vert \Delta ^{2} y(n-1) \bigr\vert ^{2}+ \sum _{n=0}^{N+1} \Delta ^{2} x(n-1)\Delta ^{2} y(n-1)-\sum_{n=1}^{N} f \bigl(n, x(n)+\theta (n)y(n) \bigr)y(n) \\ &\quad=\frac{1}{2} \Vert y \Vert ^{2}_{m}- \frac{m}{2} \Vert y \Vert ^{2}+\langle x,y\rangle _{m}-m\sum_{n=1}^{N} x(n)y(n)- \sum_{n=1}^{N} f\bigl(n, x(n)+ \theta (n)y(n)\bigr)y(n). \end{aligned}

Then

\begin{aligned} & I_{1}(x+y)-I_{1}(x)-\langle x, y\rangle _{m}+\sum_{n=1} ^{N} \bigl[f \bigl(n, x(n)\bigr)+mx(n)\bigr]y(n) \\ &\quad=\frac{1}{2} \Vert y \Vert ^{2}_{m}- \frac{m}{2} \Vert y \Vert ^{2}+\sum _{n=1} ^{N} \bigl[f\bigl(n, x(n)\bigr)-f\bigl(n, x(n)+\theta (n)y(n)\bigr)\bigr]y(n). \end{aligned}

It follows from the continuity of f that

$$\lim_{\|y\|_{m}\rightarrow 0} \bigl[f\bigl(n, x(n)\bigr)-f\bigl(n, x(n)+\theta (n)y(n)\bigr)\bigr]=0.$$

Therefore,

$$\lim_{ \Vert y \Vert _{m}\rightarrow 0} \frac{I_{1}(x+y)-I_{1}(x)-\langle x, y\rangle _{m}+\sum_{n=1}^{N} [f(n, x(n))+mx(n)]y(n)}{ \Vert y \Vert _{m}}=0,$$

which signifies that $$I_{1}$$ is Fréchet differentiable on H and

$$\bigl\langle I_{1}'(x), y\bigr\rangle _{m}=\langle x, y\rangle _{m}-\sum _{n=1}^{N} \bigl[f\bigl(n, x(n)\bigr)+mx(n) \bigr]y(n).$$
(2.11)

Next, we only need to prove $$\langle {x-{S_{m}}(x),y}\rangle _{m}= \langle x,y\rangle _{m}-\sum_{n=1}^{N} [f(n, x(n))+mx(n)]y(n)$$ to accomplish the proof of Lemma 2.3.

For any $$x, y \in H$$, the boundary conditions imply that

\begin{aligned} \sum_{n=1}^{N} \Delta ^{4} x(n-2)y(n) &=\sum_{n=1} ^{N} \bigl[\Delta ^{2} x(n)-2\Delta ^{2} x(n-1)+\Delta ^{2} x(n-2) \bigr]y(n) \\ &=\sum_{n=1}^{N} \Delta ^{2} x(n)y(n)-2\sum_{n=1}^{N} \Delta ^{2} x(n-1)y(n)+\sum_{n=1}^{N} \Delta ^{2} x(n-2)y(n) \\ &=\sum_{n=1}^{N} \Delta ^{2} x(n-1)\Delta ^{2} y(n-1)+\Delta ^{2}x(N)\Delta ^{2} y(N)+\Delta ^{2} x(-1)\Delta ^{2} y(-1) \\ &=\sum_{n=0}^{N+1} \Delta ^{2} x(n-1)\Delta ^{2} y(n-1), \end{aligned}

which implies that

\begin{aligned} \bigl\langle {{S_{m}}(x), y}\bigr\rangle _{m} &=\sum_{n=0}^{N+1} \Delta ^{2} (S_{m} x) (n-1)\Delta ^{2}y(n-1)+m\sum _{n=1}^{N} (S_{m} x) (n)y(n) \\ &=\sum_{n=1}^{N} \Delta ^{4} (S_{m} x) (n-2)y(n)+m\sum_{n=1}^{N} (S_{m} x) (n)y(n) \\ &=\sum_{n=1}^{N} \bigl[\Delta ^{4} (S_{m} x) (n-2)+m(S_{m} x) (n) \bigr]y(n) \\ &=\sum_{n=1}^{N} \bigl[f\bigl(n, x(n) \bigr)+mx(n)\bigr]y(n). \end{aligned}
(2.12)

Then

$$\bigl\langle {x-{S_{m}}(x), y}\bigr\rangle _{m}=\langle x, y\rangle _{m}-\sum_{n=1}^{N} \bigl[f\bigl(n, x(n)\bigr)+mx(n)\bigr]y(n).$$

Hence

$$\bigl\langle I_{1}'(x), y\bigr\rangle _{m}= \bigl\langle {x-{S_{m}}(x), y}\bigr\rangle _{m},$$

which implies that $$I_{1}'(x)= x-{S_{m}}(x)$$ for all $$x\in H$$. □
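The summation-by-parts identity used in the proof, $$\sum_{n=1}^{N} \Delta ^{4} x(n-2)y(n)=\sum_{n=0}^{N+1} \Delta ^{2} x(n-1)\Delta ^{2} y(n-1)$$ for x, y obeying the Dirichlet padding, can be spot-checked numerically; the size N and the test vectors are illustrative.

```python
import numpy as np

N = 6
rng = np.random.default_rng(3)

def pad(v):
    """Extend v = (x(1), ..., x(N)) by x(-1) = x(0) = 0 = x(N+1) = x(N+2)."""
    return np.concatenate(([0.0, 0.0], v, [0.0, 0.0]))

def d2(vp):
    """Delta^2 x(n-1) for n = 0..N+1; vp[j] stores x(j-1)."""
    return np.array([vp[n + 2] - 2*vp[n + 1] + vp[n] for n in range(N + 2)])

def d4(vp):
    """Delta^4 x(n-2) for n = 1..N."""
    return np.array([vp[k + 4] - 4*vp[k + 3] + 6*vp[k + 2] - 4*vp[k + 1] + vp[k]
                     for k in range(N)])

x, y = rng.standard_normal(N), rng.standard_normal(N)
assert np.isclose(d4(pad(x)) @ y, d2(pad(x)) @ d2(pad(y)))
```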

Aiming to prove our main results, we introduce the following basic notations and necessary results. Let

$$\varLambda =\{x\in H: x\geq 0\},\qquad -\varLambda =\{x\in H: x\leq 0\}$$

be two convex cones and

$$D_{\varepsilon }^{+}=\bigl\{ x\in H: \operatorname{dist}_{m} (x, \varLambda )< \varepsilon \bigr\} ,\qquad D_{\varepsilon }^{-}=\bigl\{ x\in H: \operatorname{dist}_{m} (x, -\varLambda )< \varepsilon \bigr\}$$

denote two sets with $$\operatorname{dist}_{m}(x, \pm \varLambda )= \inf_{y\in \pm \varLambda }\|x-y\|_{m}$$ and $$\varepsilon >0$$ is an arbitrary constant. It is easy to find that $$D_{\varepsilon }^{\pm }$$ are open convex subsets on H with $$D_{\varepsilon }^{+}\cap D _{\varepsilon }^{-}\neq \emptyset$$. Furthermore, $$H\setminus (\overline{D _{\varepsilon }^{+}}\cup \overline{D_{\varepsilon }^{-}})$$ contains only sign-changing functions.

### Lemma 2.4

Under hypotheses $$(F_{1})$$ and $$(F_{2})$$, there exists $$\varepsilon _{0}>0$$ such that

$$S_{m}\bigl(\partial D_{\varepsilon }^{-} \bigr)\subset D_{\varepsilon }^{-},\qquad S_{m}\bigl(\partial D_{\varepsilon }^{+} \bigr)\subset D_{\varepsilon }^{+},$$

hold for $$\varepsilon \in (0, \varepsilon _{0})$$. If $$x \in D_{\varepsilon }^{-}$$ (resp. $$D_{\varepsilon }^{+}$$) is a nontrivial critical point of $$I_{1}$$, then x is a negative (resp. positive) solution of BVP (1.1) with (1.2).

### Proof

The proof for $$D_{\varepsilon }^{+}$$ is analogous to that for $$D_{\varepsilon }^{-}$$; here we prove the case of $$D_{\varepsilon }^{-}$$ in detail.

Because of $$(F_{1})$$ and $$(F_{2})$$, we have, for all $$x\in {\mathbf{R}} \backslash \{0\}$$ and $$n\in [1, N]$$,

$$x\bigl(f(n, x)+mx\bigr)>0.$$
(2.13)

From the definition of $${\|\cdot \|}_{m}$$, direct calculation leads to

$$\Vert x \Vert _{m}^{2}=\sum _{n=0}^{N+1} \bigl\vert \Delta ^{2}x(n-1) \bigr\vert ^{2} + m\sum_{n=1}^{N} \bigl\vert x(n) \bigr\vert ^{2}=(A_{1}x, x)+m \Vert x \Vert ^{2},$$

which means

$$\sqrt{\lambda _{1}+m} \Vert x \Vert \leq \Vert x \Vert _{m}\leq \sqrt{\lambda _{N}+m} \Vert x \Vert .$$
(2.14)
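Inequality (2.14) is a Rayleigh-quotient bound and is easy to probe numerically (the size N, the value of m, and the random trial vectors are illustrative):

```python
import numpy as np

N, m = 6, 0.5
A1 = (6*np.eye(N) - 4*np.eye(N, k=1) - 4*np.eye(N, k=-1)
      + np.eye(N, k=2) + np.eye(N, k=-2))
lam = np.sort(np.linalg.eigvalsh(A1))  # 0 < lam_1 <= ... <= lam_N

rng = np.random.default_rng(4)
for _ in range(100):
    x = rng.standard_normal(N)
    norm_m_sq = x @ A1 @ x + m*(x @ x)  # ||x||_m^2 = (A1 x, x) + m ||x||^2
    # (2.14): sqrt(lam_1 + m) ||x|| <= ||x||_m <= sqrt(lam_N + m) ||x||
    assert (lam[0] + m)*(x @ x) - 1e-9 <= norm_m_sq <= (lam[-1] + m)*(x @ x) + 1e-9
```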

For any $$x \in H$$, let $$x^{+}=\max \{x, 0\}$$, $${x^{-}}=\min \{x, 0\}$$ and $$y=S_{m}(x)$$, then

$$\bigl\Vert x^{+} \bigr\Vert =\inf _{\psi \in -\varLambda } \Vert x-\psi \Vert \leq \frac{1}{\sqrt{m+ \lambda _{1}}}\inf _{\psi \in -\varLambda } \Vert x-\psi \Vert _{m}=\frac{1}{\sqrt{m+ \lambda _{1}}} \operatorname{dist}_{m}(x, -\varLambda ).$$
(2.15)

It follows from $$(F_{1})$$ and $$(F_{2})$$ that there exists a constant $$\tau >0$$ such that

$$\bigl\vert f(n, x)+mx \bigr\vert \leq (m+\lambda _{1}-\tau ) \vert x \vert +C \vert x \vert ^{s-1},\quad \forall (n, x)\in [1, N]\times \mathbf{R}.$$
(2.16)

Note that $$y=y^{+}+y^{-}$$ and $$y^{-}\in {-\varLambda }$$, so $$\operatorname{dist} _{m}(y, -\varLambda )\leq \|y-y^{-}\|_{m}=\|y^{+}\| _{m}$$. In view of the properties of the operator $$S_{m}$$ together with the above inequalities, we obtain

\begin{aligned} &\operatorname{dist}_{m}(y, -\varLambda ) \bigl\Vert y^{+} \bigr\Vert _{m} \\ &\quad\leq \bigl\langle y^{+}, y ^{+}\bigr\rangle _{m} \\ &\quad=\bigl\langle {S_{m}}\bigl(x^{+}\bigr), y^{+} \bigr\rangle _{m} \\ &\quad=\sum_{n=1}^{N} \bigl[ f\bigl(n, x^{+}(n)\bigr)+mx^{+}(n) \bigr] y ^{+}(n) \\ &\quad\leq (m+\lambda _{1}-\tau ) \bigl\Vert x^{+} \bigr\Vert \bigl\Vert y^{+} \bigr\Vert +C \bigl\Vert x^{+} \bigr\Vert ^{s-1} \bigl\Vert y ^{+} \bigr\Vert \\ &\quad\leq \biggl(\frac{m+\lambda _{1}-\tau }{m+\lambda _{1}}\operatorname{dist} _{m}(x, - \varLambda )+\frac{C}{\sqrt{(m+\lambda _{1})^{s}}}\bigl(\operatorname{dist} _{m}(x, - \varLambda )\bigr)^{s-1} \biggr) \bigl\Vert y^{+} \bigr\Vert _{m}, \end{aligned}

that is,

$$\operatorname{dist}_{m}(y, -\varLambda )\leq \frac{m+\lambda _{1}-\tau }{m+ \lambda _{1}} \operatorname{dist}_{m}(x, -\varLambda )+\frac{C}{\sqrt{(m+ \lambda _{1})^{s}}}\bigl( \operatorname{dist}_{m}(x, -\varLambda )\bigr)^{s-1}.$$

Therefore, there exists $$\varepsilon _{0}>0$$ such that, for any $$0<\varepsilon <\varepsilon _{0}$$ and $$x\in D_{\varepsilon }^{-}$$,

$$\operatorname{dist}_{m}\bigl(S_{m}(x), - \varLambda \bigr) \le \frac{2(m+\lambda _{1})- \tau }{2(m+\lambda _{1})}\operatorname{dist}_{m}(x, - \varLambda ).$$
(2.17)

Since $$\frac{2(m+\lambda _{1})-\tau }{2(m+\lambda _{1})}<1$$, we have

$$S_{m}\bigl(\partial {D_{\varepsilon }^{-}}\bigr) \subset D_{\varepsilon }^{-}.$$

Furthermore, if $$x\in D_{\varepsilon }^{-}$$ is a nontrivial critical point of $$I_{1}$$, then $$I_{1}'(x)=x-S_{m}(x)=0$$, i.e., $$S_{m}(x)=x$$. Then (2.17) implies that $$\operatorname{dist}_{m}(x, -\varLambda )=0$$, i.e., $$x\in -\varLambda \setminus \{0\}$$. According to (2.13), we can show $$x(n)<0$$ for all $$n\in [1, N]$$, which indicates that x is a negative solution of BVP (1.1) with (1.2). □

### Lemma 2.5

Under the assumption $$(F_{2})$$, if either

1. (i)

$$l=+\infty$$ or

2. (ii)

$$l<+\infty$$ is not an eigenvalue of the matrix $$A_{1}$$,

then the functional $$I_{1}$$ satisfies the $$(PS)$$ condition.

### Proof

Let $$\{x_{k}\}\subset H$$ be a sequence such that $$|I_{1}(x_{k})| \leq K$$ for some $$K>0$$ and $$I_{1}'(x_{k})\rightarrow 0$$ as $$k\rightarrow \infty$$. Since H is an N-dimensional Hilbert space, it suffices to show that $$\{x_{k}\}$$ is bounded in order to extract a convergent subsequence.

(i) When $$l=+\infty$$, we can choose $$\eta _{1}>0$$ such that, for any $$(n, x) \in [1, N] \times \mathbf{R}$$, $$F(n, x) \ge {\lambda _{N}} {x^{2}}-\eta _{1}$$. Then

$$I_{1}(x_{k}) \leq -\frac{1}{2}\lambda _{N} \Vert x_{k} \Vert ^{2}+\eta _{1} N,$$
(2.18)

so

$$\frac{1}{2}\lambda _{N} \Vert x_{k} \Vert ^{2} \leq K+\eta _{1} N,$$

that is,

$$\Vert x_{k} \Vert ^{2} \leq \frac{2(K+\eta _{1} N)}{\lambda _{N}},$$

which indicates that $$\{x_{k}\}$$ is bounded.

(ii) When $$l<+\infty$$ is not an eigenvalue of matrix $$A_{1}$$, we complete the proof by contradiction. Suppose there exists a subsequence of $$\{x_{k}\}$$ (still denoted by $$\{x_{k}\}$$) such that $$\rho _{k}=\|x _{k}\|\rightarrow +\infty$$ as $$k\rightarrow \infty$$. Set $$y_{k}=\frac{x _{k}}{\rho _{k}}\in H$$, then $$\|y_{k}\|=1$$. Since H is finite-dimensional, $$\{y_{k}\}$$ has a convergent subsequence (still denoted by $$\{y_{k}\}$$) whose limit $$y\in H$$ satisfies $$\|y\|=1$$. Put

$$\omega _{k}= \biggl(\frac{f(1, x_{k}(1))}{x_{k}(1)}y_{k}(1), \ldots, \frac{f(N, x_{k}(N))}{x_{k}(N)}y_{k}(N) \biggr)^{\tau }.$$

By $$\lim_{|x|\rightarrow \infty }\frac{f(n, x)}{x}=l$$ and $$I_{1}'(x_{k})= {x_{k}} - {K_{m}}{f_{m}}{x_{k}}$$, we have

$$\frac{I_{1}'(x_{k})}{\rho _{k}}=y_{k}-\frac{1}{\rho _{k}}K_{0} f_{0} x _{k}=y_{k}-K_{0} \omega _{k}\rightarrow y-K_{0}ly, \quad k\rightarrow \infty.$$

Noting that $$\frac{I_{1}'(x_{k})}{\rho _{k}}\rightarrow 0$$ as $$k\rightarrow \infty$$, we get $$y-K_{0}ly=0$$ with $$y\neq 0$$. In view of (2.10), we find that l is an eigenvalue of matrix $$A_{1}$$, which is a contradiction. Therefore, $$\{x_{k}\}$$ is bounded. □

In the following we verify that $$I_{1}$$ satisfies the $$(C)$$ condition under suitable assumptions.

### Lemma 2.6

Under the assumption $$(F_{3})$$, the functional $$I_{1}$$ satisfies the $$(C)$$ condition.

### Proof

Let $$\{x_{k}\}\subset H$$ be a sequence satisfying $$I_{1}(x_{k})\rightarrow c$$ for some $$c\in \mathbf{R}$$ and $$(1+\|x_{k}\|_{m})\|I_{1}'(x_{k})\| _{m}\rightarrow 0$$ as $$k\rightarrow \infty$$. Due to the finite-dimensionality of H, we only need to show that $$\{x_{k}\}$$ is bounded.

(i) Assume $$\lim_{|x|\rightarrow \infty }[xf(n, x)-2F(n, x)]=- \infty$$. By the assumptions on $$\{x_{k}\}$$, there is a constant $$R_{1}>0$$ such that

$$-R_{1}\leq I_{1}(x_{k})\leq R_{1} \quad{\text{and}}\quad \bigl(1+ \Vert x_{k} \Vert _{m}\bigr) \bigl\Vert I_{1}'(x_{k}) \bigr\Vert _{m}\leq R_{1}.$$

Applying the Cauchy–Schwarz inequality yields

\begin{aligned} -3R_{1} &\leq 2I_{1}(x_{k})- \bigl(1+ \Vert x_{k} \Vert _{m}\bigr) \bigl\Vert I_{1}'(x_{k}) \bigr\Vert _{m} \\ &\leq 2I_{1}(x_{k})-\bigl\langle I_{1}'(x_{k}), x_{k}\bigr\rangle _{m} \\ &=\sum_{n=1}^{N} \bigl[x_{k}(n)f \bigl(n, x_{k}(n)\bigr)-2F\bigl(n, x_{k}(n)\bigr)\bigr]. \end{aligned}
(2.19)

We claim that $$\{x_{k}\}$$ is bounded. Otherwise, it possesses a subsequence, still denoted by $$\{x_{k}\}$$, such that, for some $$n_{0}\in [1, N]$$, $$|x_{k}(n_{0})|\rightarrow +\infty$$ as $$k\rightarrow \infty$$. Then

$$x_{k}(n_{0})f\bigl(n_{0}, x_{k}(n_{0}) \bigr)-2F\bigl(n_{0}, x_{k}(n_{0})\bigr) \to - \infty,\quad k \to \infty.$$

Bearing in mind the continuity of f and assumption (i), there exists a constant $$R_{2}>0$$ such that

$$xf(n, x)-2F(n, x)\leq R_{2}, \quad\forall n\in [1, N], x\in \mathbf{R}.$$

Therefore, when $$k\rightarrow \infty$$,

\begin{aligned} &\sum_{n=1}^{N} \bigl[x_{k}(n)f \bigl(n, x_{k}(n)\bigr)-2F\bigl(n, x_{k}(n)\bigr)\bigr]\\ &\quad\leq x _{k}(n_{0})f\bigl(n_{0}, x_{k}(n_{0}) \bigr)-2F\bigl(n_{0}, x_{k}(n_{0}) \bigr)+(N-1){R_{2}} \rightarrow -\infty, \end{aligned}

which contradicts (2.19). The claim is proved.

(ii) Assume $$\lim_{|x|\rightarrow \infty }[xf(n, x)-2F(n, x)]=+ \infty$$. In this case, the proof that $$I_{1}$$ satisfies the $$(C)$$ condition is similar to the above case, and we omit it.

In summary, $$I_{1}$$ satisfies the $$(C)$$ condition under $$(F_{3})$$, and the proof of Lemma 2.6 is completed. □

### Lemma 2.7

Let $$z_{1}, z_{2}$$ denote eigenvectors corresponding to eigenvalues $$\lambda _{1}, \lambda _{2}$$ of matrix $$A_{1}$$, respectively. If $$l>\lambda _{2}$$, then

$$\lim_{\|x\|_{m}\rightarrow +\infty } I_{1}(x)=-\infty, \quad\forall x\in H_{1}={\mathrm{span}}\{z_{1}, z_{2}\}.$$

### Proof

(i) When $$l=+\infty$$. From (2.18), it is easy to see that, for any $$x\in H_{1}$$, $$I_{1}(x) \rightarrow -\infty$$ as $$\|x\|_{m} \rightarrow +\infty$$.

(ii) Suppose $$l\in (\lambda _{2}, +\infty )$$. For any $$x\in H_{1}$$, x can be written as $$x=\varepsilon _{1} z_{1}+\varepsilon _{2} z_{2}$$. Without loss of generality, we assume that $$z_{1}$$ is orthogonal to $$z_{2}$$, then $$\|x\|^{2}=\varepsilon _{1}^{2}\|z_{1}\|^{2}+\varepsilon _{2}^{2}\|z_{2}\|^{2}$$. Choose ϵ such that $$0<\epsilon < \min \{l-\lambda _{i}\}$$, $$i=1, 2$$. In view of $$\lim_{|x|\rightarrow +\infty }\frac{f(n, x)}{x}=l$$, it follows that we can find $$\eta _{2}>0$$ such that $$F(n, x) \ge \frac{{l-\epsilon }}{2}{x^{2}} - \eta _{2}$$. Hence, for any $$x\in H_{1}$$,

\begin{aligned} I_{1}(x) &=\frac{1}{2}(A_{1}x, x)-\sum _{n=1}^{N} F\bigl(n, x(n)\bigr) \\ &\leq \frac{1}{2} \bigl(\lambda _{1}\varepsilon _{1}^{2} \Vert z_{1} \Vert ^{2}+ \lambda _{2}\varepsilon _{2}^{2} \Vert z_{2} \Vert ^{2} \bigr)-\frac{l-\epsilon }{2} \Vert x \Vert ^{2}+\eta _{2} N \\ &=\frac{\lambda _{1}-l+\epsilon }{2}\varepsilon _{1}^{2} \Vert z_{1} \Vert ^{2}+ \frac{\lambda _{2}-l+\epsilon }{2}\varepsilon _{2}^{2} \Vert z_{2} \Vert ^{2}+ \eta _{2} N. \end{aligned}

Thus $$I_{1}(x)\rightarrow -\infty$$ as $$\|x\|_{m}\rightarrow +\infty$$, since $$\lambda _{i}-l+\epsilon <0$$, $$i=1, 2$$. □

With the above preparations, we are in a position to give the proof of Theorem 2.1.

### Proof of Theorem 2.1

According to (2.16), we have

$$F(n, x)+\frac{m}{2} \vert x \vert ^{2}\le \frac{1}{2}(m+\lambda _{1}-\tau ) \vert x \vert ^{2}+ \frac{C}{s} \vert x \vert ^{s}.$$

For the constant $$s >2$$ given in $$(F_{2})$$, there exists $$C_{1}>0$$ such that

$$\vert x \vert _{s}: = \Biggl(\sum _{n=1}^{N} \bigl\vert x(n) \bigr\vert ^{s} \Biggr)^{\frac{1}{s}}\leq C _{1}\min \bigl\{ \Vert x \Vert , \Vert x \Vert _{m}\bigr\} ,\quad \forall x\in H.$$
(2.20)

Then

\begin{aligned} I_{1}(x) &=\frac{1}{2} \Vert x \Vert _{m}^{2}-\sum_{n=1}^{N} \biggl[F\bigl(n, x(n)\bigr)+ \frac{m}{2} \bigl\vert x(n) \bigr\vert ^{2} \biggr] \\ &\geq \frac{1}{2} \Vert x \Vert _{m}^{2}- \frac{m+\lambda _{1}-\tau }{2} \Vert x \Vert ^{2}-\frac{C}{s} \vert x \vert _{s}^{s} \\ &\geq \frac{\tau }{2(m+\lambda _{1})} \Vert x \Vert _{m}^{2}- \frac{CC_{1}^{s}}{s} \Vert x \Vert _{m}^{s}. \end{aligned}

Recall that, for all $$x\in \overline{D_{\varepsilon }^{+}} \cap \overline{D _{\varepsilon }^{-}}$$, $$\|x^{\pm }\|\leq \frac{1}{\sqrt{m+\lambda _{1}}}\operatorname{dist}_{m}(x, \mp \varLambda )\leq \frac{1}{\sqrt{m+\lambda _{1}}}\varepsilon _{0}$$. Then we can find a constant $$c_{0}>-\infty$$ such that $$\inf_{x\in \overline{D_{\varepsilon }^{+}} \cap \overline{D_{ \varepsilon }^{-}}}I_{1}(x)=c_{0}$$. Lemma 2.7 indicates that there exists a constant $$R>2\varepsilon _{0}$$ such that $$I_{1}(x)< c_{0}-1$$ holds for all $$x\in H_{1}$$ with $$\|x\|_{m}=R$$. Finally, define a path $$h: [0,1]\rightarrow H_{1}$$ as follows:

$$h(s)=R\frac{\cos (\pi s)z_{1}+\sin (\pi s)z_{2}}{ \Vert \cos (\pi s)z_{1}+ \sin (\pi s)z_{2} \Vert _{m}}.$$

Obviously, $$h(s)\in H_{1}$$ and $$\|h(s)\|_{m}=R$$ for $$s\in [0, 1]$$, so $$I_{1}(h(s))< c_{0}-1$$. Further,

\begin{aligned} &h(0)=R\frac{z_{1}}{ \Vert z_{1} \Vert _{m}}\in D_{\varepsilon }^{+}\setminus D _{\varepsilon }^{-}, \qquad h(1)=-R\frac{z_{1}}{ \Vert z_{1} \Vert _{m}}\in D_{\varepsilon }^{-} \setminus D _{\varepsilon }^{+}, \\ &\inf_{x\in \overline{D_{\varepsilon }^{+}} \cap \overline{D_{ \varepsilon }^{-}}}I_{1}(x)=c_{0}>c_{0}-1> \sup_{s\in [0, 1]}I_{1}\bigl(h(s)\bigr). \end{aligned}

Applying Lemma 1.2, $$I_{1}(x)$$ possesses a critical point in $$H\setminus (\overline{D_{\varepsilon }^{+}} \cup \overline{D_{ \varepsilon }^{-}})$$ which corresponds to a sign-changing solution of BVP (1.1) with (1.2). Besides, there also exist a critical point in $$D_{\varepsilon }^{-}\setminus \overline{D_{\varepsilon }^{+}}$$ and a critical point in $$D_{\varepsilon }^{+}\setminus \overline{D_{\varepsilon }^{-}}$$ corresponding to a negative solution and a positive solution of BVP (1.1) with (1.2), respectively. This completes the proof of (i).

With the aid of Remark 1.1 and Lemma 2.6, the verification of case (ii) is similar to that of case (i), and we leave it to the reader. □

In the remainder of this section, we study positive solutions and negative solutions of BVP (1.1) with (1.2) by means of the mountain pass lemma.

Consider the following functionals:

$$I_{1}^{\pm } (x)=\frac{1}{2} \langle x, x\rangle _{0}-\sum_{n=1} ^{N} F\bigl(n, x^{\pm }(n)\bigr), \quad\forall x\in H.$$

Referring to , we have the following.

### Lemma 2.8

Under the conditions of Theorem 2.2, $$I_{1}^{\pm }$$ are continuously differentiable. In addition, the critical points of $$I_{1}^{+}$$ ($$I_{1}^{-}$$) are just the positive (negative) solutions of BVP (1.1) with (1.2).

Therefore, we have reduced the problem of looking for positive (negative) solutions of BVP (1.1) with (1.2) to that of seeking nonzero critical points of the functional $$I_{1}^{+}$$ $$(I_{1}^{-})$$ on H.

### Lemma 2.9

If $$\liminf_{|x|\rightarrow \infty }\frac{f(n, x)}{x}>\lambda _{1}$$ for all $$n\in [1, N]$$, then $$I_{1}^{\pm }$$ satisfy the $$(PS)$$ condition.

### Proof

Here we treat the case of $$I_{1}^{+}$$ in detail; the other case can be proved analogously and is omitted.

Let $$\{x_{k}\}\subset H$$ be a sequence such that $$\{I_{1}^{+}(x_{k}) \}$$ is bounded and $${I_{1}^{+}}'(x_{k})\rightarrow 0$$ as $$k\rightarrow \infty$$. For simplicity, denote $$(f^{+} x)(n)=f(n, x^{+}(n))$$ for all $$n \in [1, N]$$. Then

$$\bigl\Vert x^{-}_{k} \bigr\Vert ^{2}_{0} \leq \bigl\langle x_{k}, x^{-}_{k}\bigr\rangle _{0}\leq \bigl\langle x_{k}-K_{0}f^{+}x_{k}, x^{-}_{k}\bigr\rangle _{0}= \bigl\langle {I_{1} ^{+}}'(x_{k}), x^{-}_{k}\bigr\rangle _{0}=o(1) \bigl\Vert x^{-}_{k} \bigr\Vert _{0},$$

which implies $$x^{-}_{k}\rightarrow 0$$ as $$k\rightarrow \infty$$. Next we prove that $$\{x^{+}_{k}\}$$ is bounded. If it is not, there is a subsequence of $$\{x_{k}\}$$ (still denoted by $$\{x_{k}\}$$) satisfying $$\rho _{k}=\|x^{+}_{k}\|_{0}\rightarrow +\infty$$ as $$k\rightarrow \infty$$. Set $$y_{k}=\frac{x^{+}_{k}}{\rho _{k}}$$, then $$\|y_{k}\|_{0} =1$$. Moreover, since $$\|y_{k}\|_{0}=1$$ and H is finite-dimensional, passing to a subsequence if necessary, there is $$y\in H$$ such that $$y_{k}\rightarrow y$$ as $$k\rightarrow \infty$$. Let $$z_{1}>0$$ denote the eigenvector corresponding to $$\lambda _{1}$$, then

\begin{aligned} \lambda _{1}\sum_{n=1}^{N} x_{k}(n)z_{1}(n) &=\sum_{n=1} ^{N} \Delta ^{4} x_{k}(n-2)z_{1}(n) \\ &=\sum_{n=0}^{N+1} \Delta ^{2} x_{k}(n-1)\Delta ^{2} z_{1}(n-1) \\ &=\langle x_{k}, z_{1}\rangle _{0} \\ &=\bigl\langle K_{0}f^{+}x_{k}, z_{1} \bigr\rangle _{0}+\bigl\langle {I_{1}^{+}}'(x _{k}), z_{1}\bigr\rangle _{0} \\ &=\sum_{n=1}^{N} f\bigl(n, x^{+}_{k}(n)\bigr)z_{1}(n)+\bigl\langle {I_{1} ^{+}}'(x_{k}), z_{1}\bigr\rangle _{0}. \end{aligned}

Dividing both sides by $$\rho _{k}$$, we obtain

$$\lambda _{1}\sum_{n=1}^{N} y_{k}(n)z_{1}(n)=\sum_{n=1} ^{N} \frac{f(n, x^{+}_{k}(n))}{x^{+}_{k}(n)}y_{k}(n)z_{1}(n)+o(1).$$
(2.21)

Passing to a further subsequence if necessary, for each $$n\in [1, N]$$, either $$x^{+}_{k}(n) \rightarrow +\infty$$ or $$\{x^{+}_{k}(n)\}$$ is bounded.

Since $$\liminf_{|x|\rightarrow \infty }\frac{f(n, x)}{x}>\lambda _{1}$$ for all $$n\in [1, N]$$ and $$[1, N]$$ is a finite set, we obtain

$$\min_{n\in [1, N]} \liminf_{|x|\rightarrow \infty } \frac{f(n, x)}{x}>\lambda _{1}.$$
(2.22)

If $$\{x^{+}_{k}(n)\}$$ is bounded, then $$\frac{f(n, x^{+}_{k}(n))}{\rho _{k}}\rightarrow 0$$ and $$y(n)=0$$. On the other hand, the definition of $$y_{k}$$ gives $$\|y_{k}\|_{0}=1$$, so $$\|y\|_{0}=1$$ and $$y\neq 0$$. Hence there is an n such that $$x^{+}_{k}(n)\rightarrow +\infty$$ and $$y(n)>0$$. Passing to the limit in k in (2.21) and making use of (2.22) yield

$$\lambda _{1}\sum_{n=1}^{N} y(n)z_{1}(n)=\lim_{k \rightarrow \infty } \Biggl(\sum _{n=1}^{N} \frac{f(n, x^{+}_{k}(n))}{x^{+}_{k}(n)}y_{k}(n)z_{1}(n) \Biggr)> \lambda _{1} \sum_{n=1}^{N} y(n)z_{1}(n),$$

which raises a contradiction. Therefore, $$I_{1}^{+}$$ satisfies the $$(PS)$$ condition. The proof of Lemma 2.9 is completed. □

With the help of Lemma 2.8 and Lemma 2.9, we can prove Theorem 2.2 via Lemma 1.1.

### Proof of Theorem 2.2

According to $$\max_{n\in [1, N]}\limsup_{x\rightarrow 0}\frac{f(n, x)}{x}<\lambda _{1}$$, there exist constants $$\xi _{1}>0$$ and $$r>0$$ such that

$$F(n, x)\leq \frac{\lambda _{1}-\xi _{1}}{2} \vert x \vert ^{2},\quad n\in [1, N], \vert x \vert \leq r.$$

For all $$x\in \partial B_{r}$$, we have

\begin{aligned} I_{1}^{+}(x) &=\frac{1}{2} \langle x, x \rangle _{0}-\sum_{n=1} ^{N} F \bigl(n, x^{+}(n)\bigr) \\ &\geq \frac{1}{2} \Vert x \Vert ^{2}_{0}- \frac{\lambda _{1}-\xi _{1}}{2} \Vert x \Vert ^{2} \\ &\geq \frac{1}{2} \Vert x \Vert ^{2}_{0}- \frac{\lambda _{1}-\xi _{1}}{2\lambda _{1}} \Vert x \Vert ^{2}_{0} \\ &=\frac{\xi _{1}r^{2}}{2\lambda _{1}}\triangleq \rho >0. \end{aligned}

From $$\min_{n\in [1, N]} \liminf_{|x|\rightarrow \infty }\frac{f(n, x)}{x}>\lambda _{1}$$, we can choose a constant $$\xi _{2}>0$$ such that

$$\min_{n\in [1, N]} \liminf_{|x|\rightarrow \infty } \frac{f(n, x)}{x}> \lambda _{1}+\xi _{2}.$$

Further, there exists a constant $$C'>0$$ such that

$$F(n, x)\geq \frac{\lambda _{1}+\xi _{2}}{2} \vert x \vert ^{2}-C'$$

holds for all $$x\in \mathbf{R}$$. Hence, for ν large enough,

\begin{aligned} I_{1}^{+}(\nu z_{1}) &= \frac{1}{2} \langle \nu z_{1}, \nu z_{1}\rangle _{0}-\sum_{n=1}^{N} F\bigl(n, \nu z_{1}^{+}(n)\bigr) \\ &\leq \frac{\nu ^{2}}{2} \Vert z_{1} \Vert ^{2}_{0}- \frac{\lambda _{1}+\xi _{2}}{2} {\nu ^{2}} \Vert z_{1} \Vert ^{2}+C' N \\ &=\frac{\nu ^{2}}{2} \Vert z_{1} \Vert ^{2}_{0}- \frac{\lambda _{1}+\xi _{2}}{2 \lambda _{1}} {\nu ^{2}} \Vert z_{1} \Vert ^{2}_{0}+C' N \\ &=-\frac{\xi _{2} \nu ^{2}}{2\lambda _{1}} \Vert z_{1} \Vert ^{2}_{0}+C' N< 0. \end{aligned}

Based on Lemma 2.9 and Lemma 1.1, $$I_{1}^{+}(x)$$ possesses a critical point $$x_{0} \in H$$ such that $${I_{1}^{+}}'(x_{0})=0$$ and $$I_{1}^{+}(x _{0})\geq \rho >0$$. Thus

$$\bigl\Vert x_{0}^{-} \bigr\Vert ^{2}_{0} \leq \bigl\langle x_{0}, x_{0}^{-}\bigr\rangle _{0}\leq \bigl\langle x_{0}-K_{0}f^{+}x_{0}, x_{0}^{-}\bigr\rangle _{0}=\bigl\langle {I_{1} ^{+}}'(x_{0}), x_{0}^{-}\bigr\rangle _{0}=0,$$

which implies that $$x_{0}^{-}=0$$ and $$x_{0}=x_{0}^{+}\geq 0$$. If $$x_{0}=0$$, then $$I_{1}^{+}(x_{0})=0$$, which contradicts $$I_{1}^{+}(x_{0})\geq \rho >0$$; thus $$x_{0}>0$$. In view of Lemma 2.8, $$x_{0}$$ is a positive solution of problem (1.1) with (1.2).

With a slight modification of the argument for $$I_{1}^{+}$$, we can obtain a negative solution in the case of $$I_{1}^{-}$$. □

## Multiple solutions for BVP (1.1) with (1.3)

In this section, we discuss the multiple solutions for BVP (1.1) with (1.3).

Let $$M_{2}=\{x:[-1, N+2] \rightarrow {\mathbf{R}}|{\Delta ^{i}} x(-1)= {\Delta ^{i}} x(N-1), i = 0, 1, 2, 3\}$$ equipped with the inner product

$$\langle x, y\rangle _{m}=\sum_{n=1}^{N} \bigl[{\Delta ^{2}} x(n-1) {\Delta ^{2}} y(n-1) +mx(n)y(n) \bigr].$$

Then $$M_{2}$$ is an N-dimensional Hilbert space and the induced norm is

$$\Vert x \Vert _{m}= \Biggl(\sum_{n=1}^{N} \bigl[ \bigl\vert {\Delta ^{2}} x(n-1) \bigr\vert ^{2}+m \bigl\vert x(n) \bigr\vert ^{2} \bigr] \Biggr) ^{\frac{1}{2}}.$$

Consider the functional $$I_{2}: H\rightarrow \mathbf{R}$$

$$I_{2}(x)=\frac{1}{2}\sum _{n=1}^{N} \bigl\vert {\Delta ^{2}} x(n-1) \bigr\vert ^{2}- \sum_{n=1}^{N} F\bigl(n, x(n)\bigr).$$
(3.1)

For any $$x=(x(1), x(2), \ldots, x(N))^{\tau }\in H$$, (3.1) can be rewritten as

$$I_{2}(x)=\frac{1}{2}(A_{2}x, x) - \sum_{n=1}^{N} F\bigl(n, x(n)\bigr),$$
(3.2)

where $$A_{2}$$ is an $${N}\times {N}$$ matrix defined as

\begin{aligned} A_{2}= \begin{pmatrix} 6 &-4 &1 & \cdots &0 &1 &-4 \\ -4 &6 &-4 & \cdots &0 &0 &1 \\ 1 &-4 &6 & \cdots &0 &0 &0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 &0 &0 & \cdots &6 &-4 &1 \\ 1 &0 &0 & \cdots &-4 &6 &-4 \\ -4 &1 &0& \cdots &1 &-4 &6 \end{pmatrix}_{N\times N}. \end{aligned}

Direct computation shows that $$\omega _{k}=16\sin ^{4} \frac{k\pi }{N}$$, $$k=0, 1, \ldots, N-1$$ are the eigenvalues of $$A_{2}$$. Then when $$k=1, 2, \ldots, N-1$$, we have

$$\omega _{k}=\omega _{N-k}, \quad 0< \omega _{1}\leq \omega _{2}\leq \cdots \leq \omega _{[\frac{N}{2}]}.$$
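The stated spectrum can be checked numerically. The sketch below (not part of the original argument) builds $$A_{2}$$ by wrapping the fourth-difference stencil $$(1, -4, 6, -4, 1)$$ modulo N, which reproduces the matrix displayed above for $$N\geq 5$$, and compares its eigenvalues with $$16\sin ^{4}\frac{k\pi }{N}$$; the helper name `periodic_fourth_diff_matrix` is ours.

```python
import numpy as np

def periodic_fourth_diff_matrix(N):
    """A_2 for the periodic problem: (A_2 x)(n) = Delta^4 x(n-2),
    with indices taken modulo N (stencil 1, -4, 6, -4, 1 wrapped)."""
    A = np.zeros((N, N))
    for n in range(N):
        for offset, c in [(-2, 1), (-1, -4), (0, 6), (1, -4), (2, 1)]:
            A[n, (n + offset) % N] += c
    return A

N = 8
A2 = periodic_fourth_diff_matrix(N)
computed = np.sort(np.linalg.eigvalsh(A2))  # A_2 is symmetric
claimed = np.sort(16 * np.sin(np.arange(N) * np.pi / N) ** 4)
print(np.allclose(computed, claimed))  # True
```

The pairing $$\omega _{k}=\omega _{N-k}$$ shows up as repeated eigenvalues in the computed spectrum.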

Our main results of this section are as follows.

### Theorem 3.1

Suppose that:

$$(F_{4})$$: $$\max_{n\in [1, N]}\limsup_{x\rightarrow 0}|\frac{f(n, x)}{x}|<\omega _{1}$$;

$$(F_{5})$$: $$x(f(n, x)+mx)>0$$ for $$x\neq 0$$;

$$(F_{6})$$: there exist constants $$\sigma >1$$, $$C_{3}$$, $$C_{4}>0$$ such that $$|f(n, x)|\leq C_{3}|x|^{\sigma }+C_{4}$$;

$$(F_{7})$$: $$\lim_{|x|\rightarrow \infty } \frac{f(n, x)}{x}=+\infty$$;

$$(F_{8})$$: $$xf(n, x)-2F(n, x)>0$$ for $$x\neq 0$$, and $$xf(n, x)-2F(n, x)\rightarrow \infty$$ as $$|x|\rightarrow \infty$$.

Then we have:

(i) if $$(F_{4})$$–$$(F_{7})$$ hold, then BVP (1.1) with (1.3) has at least three nontrivial solutions: one sign-changing, one positive and one negative;

(ii) the conclusion remains true if $$(F_{7})$$ is replaced by $$(F_{8})$$ in case (i).

Now we turn our attention to verifying Theorem 3.1 with the aid of Lemma 1.2.

Motivated by Sect. 2, we consider the following boundary value problem to obtain Green's function corresponding to BVP (1.1) with (1.3):

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)- m{\Delta ^{2}} x(n-1)=h(n), & n\in [1, N], \\ \Delta ^{i} x(-1)=\Delta ^{i} x(N-1),& i=0, 1, 2, 3, \end{cases}$$
(3.3)

where $$h: [1, N]\rightarrow \mathbf{R}$$.

Similar to Lemma 2.1, we have the following.

### Lemma 3.1

Denote

$$G_{3}(n, s)= \textstyle\begin{cases} \frac{P^{s-n+1+N}+P^{n-s+1}}{(1-P^{N})(1-P^{2})}, & 0 \le s \le n \le N+1, \\ \frac{P^{n-s+1+N}+P^{s-n+1}}{(1-P^{N})(1-P^{2})}, & 0 \le n \le s \le N+1, \end{cases}$$

and

$$G_{4}(n, s) = \textstyle\begin{cases} \frac{(s-1)(N+1-n)}{N}, &0 \le s \le n \le N+1, \\ \frac{(n-1)(N+1-s)}{N}, & 0 \le n \le s \le N+1, \end{cases}$$

where P is defined in Lemma 2.1. Then

$$x(n)=\sum_{s=1}^{N} {G_{j}}(n, s)h(s),\quad n \in [0, N+1], j=3, 4,$$

is the unique solution of

$$\textstyle\begin{cases} -\Delta ^{2} x(n-1)+r_{j}x(n)=h(n), & n\in [1, N], j=3, 4, \\ \Delta ^{i} x(-1)=\Delta ^{i} x(N-1), & i=0, 1, 2, 3, \end{cases}$$

where $$r_{3}$$, $$r_{4}$$ are the roots of the equation $${r^{2}}-mr=0$$, that is, $$\{r_{3}, r_{4}\}=\{0, m\}$$.

From Lemma 3.1 we obtain the following lemma.

### Lemma 3.2

The linear discrete fourth-order BVP (3.3) admits a unique solution

$$x(n)=\sum_{s=1}^{N} \Biggl(\sum _{j=1}^{N} {G_{3}}(n, j) {G_{4}}(j, s)h(s) \Biggr)=\sum_{s=1}^{N} \Biggl(\sum_{j=1}^{N} {G_{4}}(n, j){G_{3}}(j, s)h(s) \Biggr),\quad n \in [1, N]$$

satisfying $$x(-1)=x(N-1)$$, $$x(0)=x(N)$$, $$x(1)=x(N+1)$$ and $$x(2)=x(N+2)$$.

### Remark 3.1

Green’s function $${G_{j}}(n, s)$$ ($$j=3, 4$$) defined by Lemma 3.1 satisfies $${G_{j}}(n, s) = {G_{j}}(s, n) > 0$$, $$n, s \in [1, N]$$. For $$x\in H$$, define operators $$K^{*}_{m}$$, $$f^{*}_{m}$$ and $$S^{*}_{m}$$ as follows:

\begin{aligned} &\bigl(K^{*}_{m}x\bigr) (n)=\sum _{s=1}^{N} \sum_{j=1}^{N} G_{4}(n, j)G_{3}(j, s)x(s), \\ &\bigl(f^{*}_{m}x\bigr) (n)=f\bigl(n, x(n)\bigr)-m\Delta ^{2} x(n-1), \\ &S^{*}_{m}=K^{*}_{m}f^{*}_{m}, \end{aligned}

where $$S^{*}_{m}: H\rightarrow H$$ is a completely continuous operator. Then $$\{x(n)\}_{n=0}^{N+1}$$ is a solution of BVP (1.1) with (1.3) if and only if $$x=\{x(n)\}_{n=1} ^{N}\in H$$ is a fixed point of $$S^{*}_{m}$$.

With the aim of finding solutions of BVP (1.1) with (1.3) by Lemma 1.2, we need the following lemma; we omit its proof because of its similarity to that of Lemma 2.3.

### Lemma 3.3

The functional $$I_{2}$$ defined by (3.1) is Fréchet differentiable on H and $$I_{2}'(x)= x-{S^{*}_{m}}(x)$$ for all $$x\in H$$.

In view of Lemma 1.2 and Remark 1.1, we next verify the required compactness conditions.

### Lemma 3.4

Let $$I_{2}$$ be defined as (3.1). Then:

(i) $$I_{2}$$ satisfies the $$(PS)$$ condition under $$(F_{7})$$;

(ii) $$I_{2}$$ satisfies the $$(C)$$ condition under $$(F_{8})$$.

### Proof

(i) The proof is similar to that of Lemma 2.5(i) and thus is omitted.

(ii) Let $$\{x_{k}\}\subset H$$ be a $$(C)$$ sequence; then there exists a constant $$\tilde{C}>0$$ such that, for k large enough,

$$\bigl\vert I_{2}(x_{k}) \bigr\vert \leq \frac{\tilde{C}-1}{2}\quad {\text{and}}\quad \bigl(1+ \Vert x_{k} \Vert _{m}\bigr) \bigl\Vert {I_{2}}'(x_{k}) \bigr\Vert _{m}< 1.$$

It follows from the Cauchy–Schwarz inequality that

$$\bigl\langle {I_{2}}'(x_{k}), x_{k}\bigr\rangle _{m}\leq \Vert x_{k} \Vert _{m} \bigl\Vert {I_{2}}'(x _{k}) \bigr\Vert _{m}\leq \bigl(1+ \Vert x_{k} \Vert _{m}\bigr) \bigl\Vert {I_{2}}'(x_{k}) \bigr\Vert _{m}< 1.$$

Therefore,

\begin{aligned} \sum_{n=1}^{N} \bigl[x_{k}(n)f\bigl(n, x_{k}(n)\bigr)-2F\bigl(n, x_{k}(n)\bigr) \bigr]&=2I_{2}(x_{k})-\bigl\langle {I_{2}}'(x_{k}), x_{k}\bigr\rangle _{m} \\ &\leq 2 \bigl\vert {I_{2}}(x_{k}) \bigr\vert + \Vert x_{k} \Vert _{m} \bigl\Vert {I_{2}}'(x_{k}) \bigr\Vert _{m} \leq \tilde{C}. \end{aligned}
(3.4)

Suppose, to the contrary, that $$\{x_{k}\}$$ is unbounded. Then there exist $$n_{0}\in [1, N]$$ and a subsequence (still denoted by $$\{x_{k}\}$$) such that $$|x_{k}(n_{0})|\rightarrow \infty$$. In view of $$(F_{8})$$, each summand on the left-hand side of (3.4) is nonnegative, while

$$x_{k}(n_{0})f\bigl(n_{0}, x_{k}(n_{0})\bigr)-2F\bigl(n_{0}, x_{k}(n_{0})\bigr)\rightarrow +\infty,$$

which contradicts (3.4). Then $$\{x_{k}\}$$ is bounded and the proof is finished. □

### Lemma 3.5

Suppose that $$(F_{4})$$, $$(F_{5})$$ and $$(F_{6})$$ hold. Then there exists a constant $$\tilde{\varepsilon }_{0}>0$$ such that, for $$0<\varepsilon < \tilde{\varepsilon }_{0}$$:

(i) $${S^{*}_{m}}(\partial D_{\varepsilon }^{-}) \subset D _{\varepsilon }^{-}$$, and if $$x\in D_{\varepsilon }^{-}$$ is a nontrivial critical point of $$I_{2}$$, then x is a negative solution of BVP (1.1) with (1.3);

(ii) $${S^{*}_{m}}(\partial D_{\varepsilon }^{+}) \subset D _{\varepsilon }^{+}$$, and if $$x \in D_{\varepsilon }^{+}$$ is a nontrivial critical point of $$I_{2}$$, then x is a positive solution of BVP (1.1) with (1.3).

### Proof

(i) Since $$\omega _{1}$$ and $$\omega _{[\frac{N}{2}]}$$ are the minimum and maximum eigenvalues of the matrix $$A_{2}$$, respectively, we have

$$\sqrt{\omega _{1}+m} \Vert x \Vert \le \Vert x \Vert _{m} \le \sqrt{ \omega _{[\frac{N}{2}]}+m} \Vert x \Vert ,$$
(3.5)

\begin{aligned} \bigl\Vert x^{+} \bigr\Vert &=\mathop{\inf } _{\psi \in -\varLambda } \Vert x-\psi \Vert \le \frac{1}{{\sqrt{m+{\omega _{1}}} }}\mathop{\inf } _{\psi \in -\varLambda } \Vert x-\psi \Vert _{m} \\ &=\frac{1}{{\sqrt{m+ {\omega _{1}}} }}{ \operatorname{dist}_{m}}(x, -\varLambda ), \quad\forall x \in H. \end{aligned}
(3.6)

According to $$(F_{4})$$ and $$(F_{6})$$, we can find constants $$\tilde{\tau }>0$$ and $$C_{2}>0$$ such that

$$\bigl\vert f(n, x)+mx \bigr\vert \leq (m+\omega _{1}-\tilde{\tau }) \vert x \vert +C_{2} \vert x \vert ^{\sigma }, \quad\forall (n, x) \in [1, N] \times \mathbf{R}.$$
(3.7)

Let $$y=S^{*}_{m}(x)$$ for any $$x\in H$$. Since $${y^{+}}=y-{y^{-}}$$ and $${y^{-}} \in -\varLambda$$, we have $$\operatorname{dist} _{m}(y,-\varLambda ) \leq \|y-y^{-}\|_{m}=\|y^{+}\|_{m}$$. Combining (3.5), (3.6) and (3.7) yields

\begin{aligned} &\operatorname{dist}_{m}(y, -\varLambda ) \bigl\Vert y^{+} \bigr\Vert _{m} \\ &\quad\leq \bigl\langle y^{+}, y ^{+}\bigr\rangle _{m} \\ &\quad=\bigl\langle {S^{*}_{m}}\bigl(x^{+}\bigr), y^{+}\bigr\rangle _{m} \\ &\quad=\sum_{n=1}^{N} \bigl[f\bigl(n, x^{+}(n)\bigr)+mx^{+}(n) \bigr]y^{+}(n) \\ &\quad\leq (m+\omega _{1}-\tilde{\tau }) \sum_{n=1}^{N} \bigl\vert x^{+}(n) \bigr\vert y ^{+}(n)+C_{2} \sum_{n=1}^{N} \bigl\vert x^{+}(n) \bigr\vert ^{\sigma }y^{+}(n) \\ &\quad\leq (m+\omega _{1}-\tilde{\tau }) \bigl\Vert x^{+} \bigr\Vert \bigl\Vert y^{+} \bigr\Vert +C_{2} \bigl\Vert x^{+} \bigr\Vert ^{\sigma } \bigl\Vert y^{+} \bigr\Vert \\ &\quad\leq \biggl(\frac{m+\omega _{1}-\tilde{\tau }}{m+\omega _{1}}\operatorname{dist} _{m}(x, - \varLambda )+\frac{C_{2}}{\sqrt{(m+\omega _{1})^{\sigma +1}}}\bigl( \operatorname{dist}_{m}(x, - \varLambda )\bigr)^{\sigma } \biggr) \bigl\Vert y^{+} \bigr\Vert _{m}, \end{aligned}

which implies

$$\operatorname{dist}_{m}(y, -\varLambda ) \leq \frac{m+\omega _{1}-\tilde{\tau }}{m+ \omega _{1}} \operatorname{dist}_{m}(x, -\varLambda )+{C'_{2}} \bigl(\operatorname{dist}_{m}(x, -\varLambda )\bigr)^{\sigma },$$

where $${C'_{2}}=\frac{C_{2}}{\sqrt{(m+\omega _{1})^{\sigma +1}}}$$. Choosing $$\tilde{\varepsilon }_{0}$$ so small that $${C'_{2}}\tilde{\varepsilon }_{0}^{\sigma -1}\leq \frac{ \tilde{\tau }}{2(m+\omega _{1})}$$, we obtain, for $$\operatorname{dist}_{m}(x, -\varLambda )\leq \tilde{\varepsilon }_{0}$$,

$$\operatorname{dist}_{m}\bigl(S^{*}_{m}(x), -\varLambda \bigr) \le \frac{2(m+\omega _{1})- \tilde{\tau }}{2(m+\omega _{1})}\operatorname{dist}_{m}(x, - \varLambda ).$$
(3.8)

Clearly, $$\frac{2(m+\omega _{1})-\tilde{\tau }}{2(m+\omega _{1})}<1$$ holds for all $$\tilde{\tau }>0$$. Consequently,

$${S^{*}_{m}}(x) \in D_{\varepsilon }^{-},\quad \forall x \in \partial D_{\varepsilon }^{-}.$$

If $$x\in D_{\varepsilon }^{-}$$ is a nontrivial critical point of $$I_{2}$$, Lemma 3.3 shows that $${I'_{2}}(x)=x-S^{*}_{m}(x)=0$$. Moreover, (3.8) indicates $$x \in -\varLambda \backslash \{0\}$$. By $$(F_{5})$$ and Remark 3.1, we see that $$x(n)<0$$ for $$n\in [1,N]$$. Therefore, x is a negative solution of BVP (1.1) with (1.3).

Item (ii) can be proved similarly; we omit the details. □

We are now ready to complete the proof of the main result of this section.

### Proof of Theorem 3.1

Since H is a finite-dimensional Hilbert space, there exists a constant $$\bar{C}>0$$ such that

$$\vert x \vert _{\sigma +1}: = \Biggl(\sum _{n=1}^{N} \bigl\vert x(n) \bigr\vert ^{\sigma +1} \Biggr) ^{\frac{1}{\sigma +1}} \le \bar{C}\min \bigl\{ \Vert x \Vert , \Vert x \Vert _{m} \bigr\} ,\quad \forall x \in H.$$
(3.9)

According to (3.7), we have

$$F(n, x)+\frac{m}{2} \vert x \vert ^{2}\le \frac{1}{2}(m+\omega _{1}-\tilde{\tau }) \vert x \vert ^{2}+\frac{C _{2}}{\sigma +1} \vert x \vert ^{\sigma +1}.$$

Combining this with (3.5) and (3.9), we obtain

\begin{aligned} I_{2}(x) &=\frac{1}{2}\langle x, x\rangle _{m}-\sum_{n=1}^{N} \biggl[F \bigl(n, x(n)\bigr)+\frac{m}{2} \bigl\vert x(n) \bigr\vert ^{2} \biggr] \\ &\geq \frac{1}{2} \Vert x \Vert _{m}^{2}- \frac{m+\omega _{1}-\tilde{\tau }}{2} \Vert x \Vert ^{2}-\frac{C_{2}}{\sigma +1} \vert x \vert _{\sigma +1}^{\sigma +1} \\ &\geq \frac{\tilde{\tau }}{2(m+\omega _{1})} \Vert x \Vert _{m}^{2}- \frac{C_{2} \bar{C}^{\sigma +1}}{\sigma +1} \Vert x \Vert _{m}^{\sigma +1}. \end{aligned}

Due to (3.6), $$\|x^{\pm }\| \leq \frac{1}{\sqrt{m+\omega _{1}}} \operatorname{dist}_{m}(x, \mp \varLambda ) \leq \frac{1}{\sqrt{m+\omega _{1}}} {\tilde{\varepsilon }_{0}}$$ holds for all $$x\in \overline{D_{\varepsilon }^{+}}\cap \overline{D_{\varepsilon }^{-}}$$. Then there exists $${c_{1}}>-\infty$$ such that $$\mathop{\inf } _{x \in \overline{D_{\varepsilon }^{+}} \cap \overline{D_{ \varepsilon }^{-}}} I_{2}(x)={c_{1}}$$. Let $$\upsilon _{1}, \upsilon _{2}$$ be eigenvectors corresponding to the eigenvalues $$\omega _{1}, \omega _{2}$$ of the matrix $$A_{2}$$ and denote $$H_{2}={\mathrm{span}}\{\upsilon _{1}, \upsilon _{2}\}$$. Arguing as in Lemma 2.7, $$I_{2}(x)\rightarrow -\infty$$ as $$\|x\|_{m}\rightarrow +\infty$$ for $$x\in H_{2}$$. Thus we can find a constant $$\tilde{R}>2{\tilde{\varepsilon }_{0}}$$ such that $$I_{2}(x)<{c_{1}}-1$$ holds for all $$x\in H_{2}$$ with $${\|x\|}_{m}=\tilde{R}$$. Define a path $$g: [{0, 1} ] \to {H_{2}}$$ as

$$g(s)=\tilde{R}\frac{\cos (\pi s){\upsilon _{1}}+\sin (\pi s){\upsilon _{2}}}{ \Vert \cos (\pi s){\upsilon _{1}}+\sin (\pi s ){\upsilon _{2}} \Vert _{m}}.$$

Direct calculation gives

$$g(0)=\tilde{R}\frac{\upsilon _{1}}{ \Vert \upsilon _{1} \Vert _{m}} \in D_{\varepsilon }^{+} \backslash D_{\varepsilon }^{-},\qquad g(1)=-\tilde{R}\frac{\upsilon _{1}}{ \Vert \upsilon _{1} \Vert _{m}} \in D_{ \varepsilon }^{-} \backslash D_{\varepsilon }^{+},$$

and

$$\mathop{\inf } _{x \in \overline{D_{\varepsilon }^{+}} \cap \overline{D_{ \varepsilon }^{-}}} I_{2}(x)>\mathop{\sup } _{s \in [0, 1]} I _{2}\bigl(g(s)\bigr).$$

Combining Lemma 3.4(i) with Lemma 3.5, it is easy to see that there is a critical point in $$H\backslash (\overline{D_{\varepsilon }^{+}} \cup \overline{D_{\varepsilon }^{-}})$$ corresponding to a sign-changing solution of BVP (1.1) with (1.3). In addition, we have a critical point in $$D_{\varepsilon }^{+}\backslash \overline{D_{\varepsilon } ^{-}}$$ corresponding to a positive solution and have a critical point in $$D_{\varepsilon }^{-}\backslash \overline{D_{\varepsilon }^{+}}$$ corresponding to a negative solution of BVP (1.1) with (1.3). The proof of (i) is finished.

By Lemma 3.4(ii) and Remark 1.1, the proof of case (ii) is analogous. Thus, the proof of Theorem 3.1 is complete. □

## Examples

To demonstrate the applicability of our theoretical results, three examples are provided.

### Example 4.1

Let $$N=4$$ and consider the BVP

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)=\frac{-\frac{14}{5}x(n)}{1+x^{2}(n)}+3x(n),\quad n\in [1, 4], \\ x(-1)=x(0)=0=x(5)=x(6). \end{cases}$$
(4.1)

It is easy to see that $$f(n, x)=\frac{-\frac{14}{5}x}{1+x^{2}}+3x$$, $$\lambda _{1}\approx 0.3944$$ and $$\lambda _{2}\approx 2.6148$$. Then

(1) $$\lim_{x\rightarrow 0}\frac{f(n, x)}{x}=\lim_{x\rightarrow 0} (\frac{-\frac{14}{5}}{1+x^{2}}+3 )=0.2< \lambda _{1}$$;

(2) $$\lim_{|x|\rightarrow \infty }\frac{f(n, x)}{x}=\lim_{|x|\rightarrow \infty } (\frac{-\frac{14}{5}}{1+x^{2}}+3 )=3> \lambda _{2}$$;

(3) $$\lim_{|x|\rightarrow \infty }[xf(n, x)-2F(n, x)]= \lim_{|x|\rightarrow \infty } \frac{14}{5} (\ln (1+x^{2})-\frac{x ^{2}}{1+x^{2}} )=+\infty$$.

Hence (4.1) meets all conditions of Theorem 2.1. Therefore, Theorem 2.1 guarantees that (4.1) admits at least a positive solution, a negative solution and a sign-changing solution. By direct calculation, we see that $$(0, 0, -0.1566, -0.2993, -0.2993, -0.1566, 0, 0)$$ is a negative solution and $$(0, 0, 0.1566, 0.2993, 0.2993, 0.1566, 0, 0)$$ is a positive solution. In addition, $$(0, 0, 2.9287, 1.9285, -1.9285, -2.9287, 0, 0)$$ and $$(0, 0, -2.9287, -1.9285, 1.9285, 2.9287, 0, 0)$$ are two sign-changing solutions.
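These numerical solutions can be double-checked with a short script. The sketch below assumes $$A_{1}$$ is the $$N\times N$$ pentadiagonal matrix generated by $$\Delta ^{4}$$ under the Dirichlet conditions (1.2), as in Sect. 2; with that matrix, its two smallest eigenvalues reproduce $$\lambda _{1}$$, $$\lambda _{2}$$ above, and the tabulated vectors satisfy $$A_{1}x=f(\cdot, x)$$ up to rounding. The helper name `dirichlet_fourth_diff_matrix` is ours.

```python
import numpy as np

def dirichlet_fourth_diff_matrix(N):
    """A_1 for the Dirichlet problem: Delta^4 x(n-2) with
    x(-1) = x(0) = 0 = x(N+1) = x(N+2) (boundary terms dropped)."""
    A = np.zeros((N, N))
    for n in range(N):
        for offset, c in [(-2, 1), (-1, -4), (0, 6), (1, -4), (2, 1)]:
            j = n + offset
            if 0 <= j < N:
                A[n, j] = c
    return A

A1 = dirichlet_fourth_diff_matrix(4)
lam = np.sort(np.linalg.eigvalsh(A1))
print(np.round(lam[:2], 4))  # lambda_1 and lambda_2 of the example

f = lambda x: -(14 / 5) * x / (1 + x**2) + 3 * x
for x in [np.array([0.1566, 0.2993, 0.2993, 0.1566]),
          np.array([2.9287, 1.9285, -1.9285, -2.9287])]:
    # residual of Delta^4 x = f(n, x); nonzero only because the
    # tabulated solutions are rounded to four decimals
    assert np.max(np.abs(A1 @ x - f(x))) < 1e-3
```

For this matrix one can even check by hand that $$\lambda _{1}=4-\sqrt{13}\approx 0.3944$$ and $$\lambda _{2}=8-\sqrt{29}\approx 2.6148$$.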

### Example 4.2

Consider the following problem:

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)=x^{3}(n) + \frac{1}{5}x(n), \quad n\in [1, 5], \\ x(-1)=x(0)=0=x(6)=x(7). \end{cases}$$
(4.2)

In fact, $$f(n, x)=x^{3}+\frac{1}{5}x$$ and $$\lambda _{1}\approx 0.2121$$. Furthermore,

(1) $$\lim_{|x|\rightarrow \infty }\frac{f(n, x)}{x}=\lim_{|x|\rightarrow \infty } (x^{2}+\frac{1}{5} )=+\infty > \lambda _{1}$$;

(2) $$\lim_{x\rightarrow 0}\frac{f(n, x)}{x}=\lim_{x\rightarrow 0} (x^{2}+\frac{1}{5} )=0.2<\lambda _{1}$$.

Then Theorem 2.2 implies that (4.2) has at least a positive solution and a negative solution. Direct calculation shows that $$(0, 0, 0.050496, 0.105201, 0.128054, 0.105201, 0.050496, 0, 0)$$ is a positive solution and $$(0, 0, -0.050496, -0.105201, -0.128054, -0.105201, -0.050496, 0, 0)$$ is a negative solution.
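As in Example 4.1, the reported figures can be verified numerically, again assuming $$A_{1}$$ is the pentadiagonal Dirichlet matrix of $$\Delta ^{4}$$ (the helper name is ours); its smallest eigenvalue matches the stated $$\lambda _{1}$$ and the tabulated vector satisfies the equation up to rounding.

```python
import numpy as np

def dirichlet_fourth_diff_matrix(N):
    """A_1 for the Dirichlet problem: Delta^4 x(n-2) with
    x(-1) = x(0) = 0 = x(N+1) = x(N+2) (boundary terms dropped)."""
    A = np.zeros((N, N))
    for n in range(N):
        for offset, c in [(-2, 1), (-1, -4), (0, 6), (1, -4), (2, 1)]:
            j = n + offset
            if 0 <= j < N:
                A[n, j] = c
    return A

A1 = dirichlet_fourth_diff_matrix(5)
print(round(float(np.min(np.linalg.eigvalsh(A1))), 4))  # 0.2121

f = lambda x: x**3 + 0.2 * x
x = np.array([0.050496, 0.105201, 0.128054, 0.105201, 0.050496])
# residual of Delta^4 x = x^3 + x/5 for the tabulated positive solution
assert np.max(np.abs(A1 @ x - f(x))) < 1e-4
```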

### Example 4.3

Consider the following problem:

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)=\tau \vert x(n) \vert ^{\tau -2}x(n)-ax(n), & n\in [1, N], \\ \Delta ^{i} x(-1)=\Delta ^{i} x(N-1),& i=0, 1, 2, 3, \end{cases}$$
(4.3)

where $$\tau >2$$ and $$0\leq a<\min \{16\sin ^{4}\frac{\pi }{N}, m\}$$.

Here $$f(n, x)=\tau |x|^{\tau -2}x-ax$$, $$n\in [1, N]$$, so $$F(n, x)=\int ^{x}_{0}f(n, s)\,ds=|x|^{\tau }-\frac{a}{2}x^{2}$$, and

(1) $$\lim_{x \rightarrow 0}|\frac{f(n, x)}{x}|=\lim_{x \rightarrow 0} \vert \tau |x|^{\tau -2}-a \vert =|a|=a<16 \sin ^{4}\frac{\pi }{N}=\omega _{1}$$;

(2) $$xf(n, x)+mx^{2}=\tau |x|^{\tau }+(m-a)x^{2}>0$$ for $$x\neq 0$$;

(3) $$\lim_{|x|\rightarrow \infty }\frac{f(n, x)}{x}=\lim_{|x| \to \infty } (\tau |x|^{\tau -2}-a )=+\infty$$;

(4) $$xf(n, x)-2F(n, x)=(\tau -2)|x|^{\tau }>0$$ for $$x\neq 0$$, and

$$xf(n, x)-2F(n, x)=(\tau -2) \vert x \vert ^{\tau }\rightarrow +\infty \quad\text{as } \vert x \vert \rightarrow \infty.$$

Moreover, it is clear that $$(F_{6})$$ holds with $$\sigma =\tau -1>1$$ and suitable constants $$C_{3}$$, $$C_{4}>0$$:

$$\bigl\vert f(n, x) \bigr\vert = \bigl\vert \tau \vert x \vert ^{\tau -2}x-ax \bigr\vert \leq C_{3} \vert x \vert ^{\sigma }+C_{4}.$$

Hence, (4.3) satisfies all conditions of Theorem 3.1, which implies that (4.3) has at least three nontrivial solutions: one positive, one negative and one sign-changing.

In particular, choosing $$m=2$$, $$N=4$$, $$\tau =4$$ and $$a=1$$, (4.3) can be rewritten as

$$\textstyle\begin{cases} \Delta ^{4} x(n-2)=4x^{3}(n)-x(n), & n\in [1, 4], \\ \Delta ^{i} x(-1)= \Delta ^{i} x(3), & i=0, 1, 2, 3. \end{cases}$$
(4.4)

Obviously, $$(0, 0, 0, 0, 0, 0, 0, 0)$$ is the trivial solution. Nontrivial solutions include the positive solution $$(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2})$$, the negative solution $$(-\frac{1}{2}, - \frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, - \frac{1}{2}, -\frac{1}{2})$$ and two sign-changing solutions $$(\frac{\sqrt{5}}{2}, \frac{\sqrt{5}}{2}, -\frac{\sqrt{5}}{2}, -\frac{ \sqrt{5}}{2}, \frac{\sqrt{5}}{2}, \frac{\sqrt{5}}{2}, -\frac{ \sqrt{5}}{2}, -\frac{\sqrt{5}}{2})$$, $$(-\frac{\sqrt{5}}{2}, \frac{ \sqrt{5}}{2}, \frac{\sqrt{5}}{2}, -\frac{\sqrt{5}}{2}, -\frac{\sqrt{5}}{2}, \frac{\sqrt{5}}{2}, \frac{\sqrt{5}}{2}, -\frac{ \sqrt{5}}{2})$$.
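These candidates solve (4.4) exactly, which a few lines of arithmetic confirm. The sketch assumes that for $$N=4$$ the periodic matrix $$A_{2}$$ is the circulant with first row $$(6, -4, 2, -4)$$, i.e., the fourth-difference stencil wrapped modulo 4 so that the offsets $$\pm 2$$ coincide; the vectors below are the values on $$[1, 4]$$ read off the 8-tuples above.

```python
import numpy as np

def periodic_fourth_diff_matrix(N):
    """A_2 for the periodic problem: Delta^4 x(n-2) with indices mod N.
    For N = 4 the offsets +2 and -2 coincide, so that entry becomes 2."""
    A = np.zeros((N, N))
    for n in range(N):
        for offset, c in [(-2, 1), (-1, -4), (0, 6), (1, -4), (2, 1)]:
            A[n, (n + offset) % N] += c
    return A

A2 = periodic_fourth_diff_matrix(4)
f = lambda x: 4 * x**3 - x
a = np.sqrt(5) / 2
solutions = [np.full(4, 0.5), np.full(4, -0.5),
             np.array([a, a, -a, -a]), np.array([-a, a, a, -a])]
for x in solutions:
    # each candidate satisfies Delta^4 x(n-2) = 4x^3(n) - x(n)
    assert np.max(np.abs(A2 @ x - f(x))) < 1e-12
print("all candidates verified")
```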

## References

1. 1.

Rabinowitz, P.H.: Minimax Methods in Critical Point Theory with Applications to Differential Equations. American Mathematical Society, Providence (1986)

2. 2.

Liu, X., Zhou, T., Shi, H.P., Long, Y.H., Wen, Z.L.: Periodic solutions with minimal period for fourth-order nonlinear difference equations. Discrete Dyn. Nat. Soc. 2018, Article ID 4376156 (2018)

3. 3.

Ma, R.Y., Lu, Y.Q.: Existence and multiplicity of positive solutions of a nonlinear discrete fourth-order boundary value problem. Abstr. Appl. Anal. 2012, Article ID 918082 (2012)

4. 4.

Onozaki, T., Sieg, G., Yokoo, M.: Complex dynamics in a cobweb model with adaptive production adjustment. J. Econ. Behav. Organ. 41(2), 101–115 (2000)

5. 5.

Goldberg, S.: Introduction to Difference Equations: With Illustrative Examples from Economics, Psychology, and Sociology. John Wiley, New York (1958)

6. 6.

Liu, X., Zhang, Y.B., Shi, H.P.: Existence and nonexistence results for a fourth-order discrete Dirichlet boundary value problem. Hacet. J. Math. Stat. 44(4), 855–866 (2015)

7. 7.

Shi, H.P., Liu, X., Zhang, Y.B., Deng, X.Q.: Existence of periodic solutions of fourth-order nonlinear difference equations. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 108(2), 811–825 (2014)

8. 8.

Liu, X., Zhang, Y.B., Shi, H.P.: Existence theorems of periodic solutions for fourth-order nonlinear functional difference equations. J. Appl. Math. Comput. 42, 51–67 (2013)

9. 9.

Yang, J.P.: Sign-changing solutions to discrete fourth-order Neumann boundary value problems. Adv. Differ. Equ. (2013). https://doi.org/10.1186/1687-1847-2013-10

10. 10.

Ma, R.Y., Xu, Y.J.: Existence of positive solution for nonlinear fourth-order difference equations. Comput. Math. Appl. 59(12), 3770–3777 (2010)

11. 11.

Zhou, Z., Ling, J.X.: Infinitely many positive solutions for a discrete two point nonlinear boundary value problem with $$\phi _{c}$$-Laplacian. Appl. Math. Lett. 91, 28–34 (2019)

12. 12.

Zhou, Z., Su, M.T.: Boundary value problems for 2n-order $$\phi _{c}$$-Laplacian difference equations containing both advance and retardation. Appl. Math. Lett. 41, 7–11 (2015)

13. 13.

Wang, Q., Zhou, Z.: Solutions of the boundary value problem for a 2nth-order nonlinear difference equation containing both advance and retardation. Adv. Differ. Equ. (2013). https://doi.org/10.1186/1687-1847-2013-322

14. 14.

Long, Y.H., Zhang, Y.B., Shi, H.P.: Homoclinic solutions of 2nth-order difference equations containing both advance and retardation. Open Math. 14, 520–530 (2016)

15. 15.

Long, Y.H., Shi, H.P.: Multiple solutions for the discrete p-Laplacian boundary value problems. Discrete Dyn. Nat. Soc. 2014, Article ID 213702 (2014)

16. 16.

Graef, J.R., Kong, L.J., Liu, X.Y.: Existence of solutions to a discrete fourth order periodic boundary value problem. J. Differ. Equ. Appl. 22(8), 1167–1183 (2016)

17. 17.

He, T.S., Su, Y.L.: On discrete fourth-order boundary value problems with three parameters. J. Comput. Appl. Math. 233(10), 2506–2520 (2010)

18. 18.

Liu, X., Zhang, Y.B., Shi, H.P., Deng, X.Q.: Periodic and subharmonic solutions for fourth-order nonlinear difference equations. Appl. Math. Comput. 236, 613–620 (2014)

19. 19.

Liu, X., Zhang, Y.B., Shi, H.P.: Nonexistence and existence results for a class of fourth-order difference Neumann boundary value problems. Indag. Math. 26(1), 293–305 (2015)

20. 20.

Long, Y.H.: Multiplicity results for periodic solutions with prescribed minimal periods to discrete Hamiltonian systems. J. Differ. Equ. Appl. 17(10), 1499–1518 (2011)

21. 21.

Yang, S.J., Shi, B., Zhang, D.C.: Existence of positive solutions for boundary value problems of nonlinear functional difference equation with p-Laplacian operator. Bound. Value Probl. 2007, Article ID 38230 (2007)

22. 22.

He, T.S., Zhou, Y.W., Xu, Y.T., Chen, C.Y.: Sign-changing solutions for discrete second-order periodic boundary value problems. Bull. Malays. Math. Sci. Soc. 38(1), 181–195 (2015)

23. 23.

Long, Y.H., Zeng, B.L.: Sign-changing solutions for discrete Dirichlet boundary value problem. J. Appl. Math. Phys. 5, 2228–2243 (2017)

24.

Long, Y.H., Zeng, B.L.: Multiple and sign-changing solutions for discrete Robin boundary value problem with parameter dependence. Open Math. 15, 1549–1557 (2017)

25.

Long, Y.H., Chen, J.L.: Existence of multiple solutions to second-order discrete Neumann boundary value problems. Appl. Math. Lett. 83, 7–14 (2018)

26.

Liu, X.Q., Liu, J.Q.: On a boundary value problem in the half-space. J. Differ. Equ. 250(4), 2099–2142 (2011)

27.

Liu, Z.L., Sun, J.X.: Invariant sets of descending flow in critical point theory with applications to nonlinear differential equations. J. Differ. Equ. 172(2), 257–299 (2001)

28.

Pinelas, S., Dix, J.G.: Oscillation of solutions to non-linear difference equations with several advanced arguments. Opusc. Math. 37(6), 887–898 (2017)

29.

Psarros, N., Papaschinopoulos, G., Schinas, C.J.: On the stability of some systems of exponential difference equations. Opusc. Math. 38(1), 95–115 (2018)

30.

Stević, S.: Solvable subclasses of a class of nonlinear second-order difference equations. Adv. Nonlinear Anal. 5(2), 147–165 (2016)

31.

Xin, Y., Han, X.F., Cheng, Z.B.: Study on a kind of fourth-order p-Laplacian Rayleigh equation with linear autonomous difference operator. Bound. Value Probl. (2017). https://doi.org/10.1186/s13661-017-0756-2

### Acknowledgements

The authors gratefully acknowledge the two anonymous reviewers for their careful reading and valuable comments and suggestions.

### Availability of data and materials

Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.

## Funding

This work is supported by the Program for Changjiang Scholars and Innovative Research Team in University (IRT-16R16).

## Author information


### Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Yuhua Long.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 