Bitsadze-Samarsky type problems with double involution

Abstract

In this paper, the solvability of a new class of nonlocal boundary value problems for the Poisson equation is studied. Nonlocal conditions are specified in the form of a connection between the values of the unknown function at different points of the boundary. In this case, the boundary operator is determined using matrices of involution-type mappings. Theorems on the existence and uniqueness of solutions to the studied problems are proved. Using Green’s functions of the classical Dirichlet and Neumann boundary value problems, Green’s functions of the studied problems are constructed and integral representations of solutions to these problems are obtained.

1 Introduction

Boundary value problems in which a connection is specified between the values of the unknown function at different points of a domain or its boundary are called problems of the Bitsadze–Samarsky type, or nonlocal problems. A problem of this type was first studied in [1]. Later, [2] described in detail how such problems arise in the mathematical modeling of certain processes in plasma. Solution methods and examples of various applications of nonlocal boundary value problems of the Bitsadze–Samarsky type to problems in physics, technology, and other branches of science are described in [3, 4]. Note also that nonlocal boundary value problems for various differential equations were studied in [5–11].

In this paper, we study the solvability of two types of nonlocal boundary value problems for the Poisson equation. In these problems, nonlocal conditions are specified in the form of a connection between the values of the unknown function at different points of the boundary. Moreover, in the boundary conditions of the studied problems, the unknown function appears with transformed arguments, which are specified using matrices of involution-type mappings. Note that boundary value problems with transformed arguments were studied in the work of D. Przeworska-Rolewicz [12]. In that work, in the domain \(Q \subset {R^{2}}\), mappings of the type \(S_{2\pi /m}^{k}:\bar{Q}\to \bar{Q}\), \(k=0,1,\ldots,m-1\), where

$$ {{S}_{\alpha }}= \begin{pmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \end{pmatrix} $$

are considered. Using these mappings, analogues of the Dirichlet, Neumann, and Robin problems were studied; in particular, a boundary value problem was considered with a condition of the type

$$ \sum \limits _{k=0}^{N-1}{{{a}_{k}}u\left ( S_{2\pi /\alpha }^{k}x \right )=g(x),x\in \partial Q}. $$

Further, nonlocal boundary value problems with mappings of this type in the \(n\)-dimensional case, for \(n \ge 2\), were studied in [13, 14]. We also note that in [15, 16] the main boundary value problems for the nonlocal Poisson equation and the nonlocal biharmonic equation with mappings of the form \({S^{k}}\), where S is an orthogonal matrix, were studied. This work is a continuation of the studies presented in [13, 14].

Let us formulate the statements of the nonlocal problems that will be considered in this work. Let \(\Omega = \left \{ {x:|x| < 1} \right \}\) be the unit ball in \({R^{n}},\;n \ge 2\), and \(\partial \Omega \) be the unit sphere. Let also \({S_{1}}\), \({S_{2}}\) be two real commuting orthogonal \(n \times n\) matrices such that \(S_{i}^{{l_{i}}} = I\), \({l_{i}} \in N\), \(i = 1,2\). Denote \(\ell = {l_{2}}{l_{1}}\) and consider a sequence of real numbers \({a_{0}},\ldots,{a_{{l_{1}} - 1}},{a_{{l_{1}}}},\ldots,{a_{2{l_{1}} - 1}},\ldots,{a_{({l_{2}} - 1){l_{1}}}},\ldots,{a_{\ell - 1}}\), which we denote by a. If we write the summation index i in the form \(i = ({i_{2}},{i_{1}}) \equiv {i_{2}}{l_{1}} + {i_{1}}\), where \({i_{k}} = 0,1,\ldots,{l_{k}} - 1\) for \(k = 1,2\), then the components of a can be written as

$$ {a_{(0,0)}},\ldots,{a_{(0,{l_{1}} - 1)}},{a_{(1,0)}},\ldots,{a_{(1,{l_{1}} - 1)}},\ldots,{a_{({l_{2}} - 2,{l_{1}} - 1)}},\ldots,{a_{({l_{2}} - 1,{l_{1}} - 1)}}. $$

It is obvious that if \(0 \le i < \ell \), then \({i_{2}} = \left [ {i/{l_{1}}} \right ]\) and \({i_{1}} = {l_{1}}\left \{ {i/{l_{1}}} \right \}\), where \(\left [ \cdot \right ]\) and \(\left \{ \cdot \right \}\) denote the integer and fractional parts of a number. Further, we will also consider the sequence a as a vector \({\mathbf{{a}}} = \left ( {{a_{0}},{a_{1}}, \ldots ,{a_{\ell - 1}}} \right )\).
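As a quick sanity check of this decomposition, the correspondence \(i \leftrightarrow (i_{2},i_{1})\) can be sketched in a few lines of Python (the helper names `split_index` and `join_index` are ours, not from the paper):

```python
# Decompose a flat index i into the pair (i2, i1) with i = i2*l1 + i1,
# and recombine it.
def split_index(i, l1):
    return divmod(i, l1)  # (i2, i1) = ([i/l1], i mod l1)

def join_index(i2, i1, l1):
    return i2 * l1 + i1

l1, l2 = 2, 3
for i in range(l1 * l2):
    i2, i1 = split_index(i, l1)
    assert 0 <= i1 < l1 and 0 <= i2 < l2
    assert join_index(i2, i1, l1) == i
```

For \(l_{1}=2\) the flat index 5 corresponds to the pair \((2,1)\), which matches the examples used later in the text.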

Remark 1

It is obvious that \(|x|^{2}=(S_{i}^{T}S_{i}x,x)=(S_{i}x,S_{i}x)=|S_{i}x|^{2}\). Thus, \(x\in \Omega \Rightarrow S_{i}x\in \Omega \) and \(y\in \partial \Omega \Rightarrow {{S}_{i}}y\in \partial \Omega \).

For example, the matrix \(S_{i}\) could be an orthogonal matrix of the following type:

$$ S= \begin{pmatrix} I_{k}&{\mathbf{0}}&{\mathbf{0}}&{\mathbf{0}} \\ {\mathbf{0}}&\cos \alpha & -\sin \alpha & {\mathbf{0}} \\ {\mathbf{0}}&\sin \alpha & \cos \alpha & {\mathbf{0}} \\ {\mathbf{0}}&{\mathbf{0}}&{\mathbf{0}}& I_{n-k-2}\end{pmatrix} , $$

where \(\alpha =\frac{2k\pi}{l}\), \(0\le k\le n-2\), and \(\mathbf{0}\) denotes zero matrices of the corresponding orders. It is clear that \(S^{l}=I\).
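To illustrate, such a matrix can be assembled numerically and its orthogonality and the relation \(S^{l}=I\) verified; this is a sketch with illustrative sizes, taking the rotation angle \(2\pi /l\) (the helper name `make_S` is ours):

```python
import numpy as np

# Build a block-orthogonal matrix of the type in the text: identity blocks
# of sizes k and n-k-2 around a 2x2 rotation by alpha = 2*pi/l.
def make_S(n, k, l):
    alpha = 2 * np.pi / l
    S = np.eye(n)
    c, s = np.cos(alpha), np.sin(alpha)
    S[k:k+2, k:k+2] = [[c, -s], [s, c]]
    return S

n, k, l = 5, 1, 6
S = make_S(n, k, l)
assert np.allclose(S.T @ S, np.eye(n))                       # orthogonal
assert np.allclose(np.linalg.matrix_power(S, l), np.eye(n))  # S^l = I
```

By Remark 1, such an S maps the unit ball onto itself and the unit sphere onto itself.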

Let us introduce the nonlocal operator formed by the vector a:

$$ {{R}_{\mathbf{a}}}[u](x)=\sum \limits _{({{i}_{2}},{{i}_{1}})=0}^{({{l}_{2}}-1,{{l}_{1}}-1)}{{{a}_{({{i}_{2}},{{i}_{1}})}}u \left ( S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x \right )}. $$

Note that in [17, 18] eigenfunctions for the Laplace operator with double and multiple involution were studied.

Let us consider the following boundary value problems:

Dirichlet problem. Find a function \(u(x)\in {{C}^{2}}\left ( \Omega \right )\cap C\left ( {\bar{\Omega }} \right )\) that satisfies the conditions

$$\begin{aligned}& { } \Delta u(x)=f(x),\,x\in \Omega , \end{aligned}$$
(1)
$$\begin{aligned}& { } {{R}_{\mathbf{a}}}[u]{{|}_{\partial \Omega }}=g(x),\,x\in \partial \Omega . \end{aligned}$$
(2)

Neumann problem. Find a function \(u(x)\in {{C}^{2}}\left ( \Omega \right )\cap {{C}^{1}}\left ( \bar{\Omega } \right )\) that satisfies equation (1) and the condition

$$ { } {{R}_{\mathbf{a}}}{{\left . \left [ \frac{\partial u}{\partial \nu } \right ] \right |}_{\partial \Omega }}=g(x),\, x \in \partial \Omega . $$
(3)

2 Preliminary information

To study the above problems (1), (2) and (1), (3), we need some auxiliary statements. Let us introduce the function

$$ { } v(x)=\sum \limits _{(i_{2},i_{1})=0}^{(l_{2}-1,l_{1}-1)}a_{(i_{2},i_{1})}u(S_{2}^{i_{2}}S_{1}^{i_{1}}x), $$
(4)

where \(x\in \Omega \) or \(x\in \partial \Omega \), and the summation is carried out in the ascending order by index \(i=(i_{2},i_{1})\equiv i_{2}\cdot l_{1}+ i_{1}\) in the following form:

$$ (0,0),\ldots ,(0,{{l}_{1}}-1),(1,0),\ldots ,(1,{{l}_{1}}-1),\ldots ,({{l}_{2}}-2,{{l}_{1}}-1), \ldots ,({{l}_{2}}-1,{{l}_{1}}-1). $$

From equality (4), taking into account that \(S_{2}^{l_{2}}= S_{1}^{l_{1}}=I\), it is easy to conclude that functions of the type \(v(S_{2}^{j_{2}}S_{1}^{j_{1}}x)\), where \(j=0,\ldots ,\ell -1\), can be linearly expressed through the functions \(u(S_{2}^{i_{2}}S_{1}^{i_{1}}x)\). If we consider the following vectors of order \(\ell \),

$$ U(x)=\left ( u(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \right )_{i=0, \ldots ,\ell -1}^{t},\quad V(x)=\left ( v(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \right )_{i=0,\ldots ,\ell -1}^{t}, $$

then this dependence has the form

$$ \begin{pmatrix} v(x) \\ \vdots \\ v(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \\ \vdots \\ v(S_{2}^{{{l}_{2}}-1}S_{1}^{{{l}_{1}}-1}x) \end{pmatrix} ={{\left ( {{a}_{i,j}} \right )}_{i,j=0,\ldots ,\ell -1}} \begin{pmatrix} u(x) \\ \vdots \\ u(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \\ \vdots \\ u(S_{2}^{{{l}_{2}}-1}S_{1}^{{{l}_{1}}-1}x) \end{pmatrix}, $$

and it can be represented in the matrix form

$$ { } V(x)=A_{(2)}U(x), $$
(5)

where \({{A}_{(2)}}={{\left ( {{a}_{i,j}} \right )}_{i,j=0,\ldots ,\ell -1}}\) is the corresponding matrix of the order \(\ell \times \ell \). The subscript at \({{A}_{(2)}}\) means that the matrix is generated by two inversions \({{S}_{1}}\), \({{S}_{2}}\).

Thus, (5) follows from (4). The converse also holds, since the first row of (5) coincides with (4).

To describe the properties of the matrix \(A_{(2)}=\left (a_{i,j} \right )_{i,j=0,\ldots , l_{2}l_{1}-1}\), consider the operation of adding the indices of the matrix coefficients in the following sense:

$$ i\oplus j=({{i}_{2}},{{i}_{1}})\oplus ({{j}_{2}},{{j}_{1}})\equiv (({{i}_{2}}+{{j}_{2}} \!\bmod \!\ {{l}_{2}}),({{i}_{1}}+{{j}_{1}}\!\bmod \!\ {{l}_{1}})), $$

where \((i_{2},i_{1})\) is the representation of the index i given above. It is clear that \(\oplus \) is a commutative and associative operation on \(i\in \{0,\ldots ,{{l}_{2}}{{l}_{1}}-1\}\) and \((i_{2},i_{1})\oplus (0,0)=(i_{2}, i_{1})\). If \(i\oplus j=({{i}_{2}},{{i}_{1}})\oplus ({{j}_{2}},{{j}_{1}})=(0,0)\), then \({{i}_{2}}+{{j}_{2}}\equiv 0\ (\bmod \ {{l}_{2}})\) and \({{i}_{1}}+{{j}_{1}}\equiv 0\ (\bmod \ {{l}_{1}})\), which means that \(j_{2}=-i_{2}+n_{2}l_{2}\), \(j_{1}=-i_{1}+n_{1}l_{1}\), where \(n_{2}\), \(n_{1}\) are integers. Since \(j=(j_{2},j_{1})\) is a number \(j\in \{0,\ldots ,\ell -1\}\), only the case \(n_{2}=n_{1}=1\) is suitable, and thus \({{j}_{2}}={{l}_{2}}-{{i}_{2}}\), \({{j}_{1}}={{l}_{1}}-{{i}_{1}}\). Therefore, we can write \(\ominus i=(l_{2}-i_{2},l_{1}-i_{1})\).

For example, if \(l_{1}=2\), \(l_{2}=3\), then \(\ominus (2,1)=(1,1)\) or \(\ominus 5=3\). If we suppose that \((-i_{2},-i_{1})\equiv \ominus i=(l_{2}-i_{2},l_{1}-i_{1})\), then

$$\begin{aligned}& (-{{i}_{2}},-{{i}_{1}})\oplus ({{j}_{2}},{{j}_{1}})=({{l}_{2}}-{{i}_{2}},{{l}_{1}}-{{i}_{1}}) \oplus ({{j}_{2}},{{j}_{1}})\\& =({{l}_{2}}-{{i}_{2}}+{{j}_{2}}\!\bmod \!\ {{l}_{2}},{{l}_{1}}-{{i}_{1}}+{{j}_{1}} \!\bmod \!\ {{l}_{1}})=(-{{i}_{2}}+{{j}_{2}}\!\bmod \!\ {{l}_{2}},-{{i}_{1}}+{{j}_{1}} \!\bmod \!\ {{l}_{1}}), \end{aligned}$$

that is, the operation \(\oplus \) is formally applicable to numbers of the type \((-i_{2},-i_{1})\). We will assume that

$$ i\ominus j\equiv i\oplus (\ominus j)=({{i}_{2}}-{{j}_{2}}\!\bmod \!\ {{l}_{2}},{{i}_{1}}-{{j}_{1}} \!\bmod \!\ {{l}_{1}}). $$

Let us extend the \(\oplus \) and \(\ominus \) operations to all numbers of the type \((i_{2},i_{1})\), assuming that \(({{i}_{2}},{{i}_{1}})\equiv ({{i}_{2}}\!\bmod \! {{l}_{2}},{{i}_{1}} \!\bmod \! {{l}_{1}})\). For example, if \(l_{1}=2\), \(l_{2}=3\), then \((1,-1)=(1,1)\) and \((5,-3)=(2,1)\).
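These index operations are easy to mechanize; a minimal Python sketch (the function names are ours) that reproduces the examples above:

```python
# Componentwise modular index arithmetic from the text: the operations
# act on pairs (i2, i1) modulo (l2, l1).
l1, l2 = 2, 3

def oplus(i, j):
    return ((i[0] + j[0]) % l2, (i[1] + j[1]) % l1)

def ominus_pair(i):
    return ((l2 - i[0]) % l2, (l1 - i[1]) % l1)

def to_flat(i):   # (i2, i1) -> i2*l1 + i1
    return i[0] * l1 + i[1]

# The example from the text: with l1 = 2, l2 = 3 one has
# ominus (2,1) = (1,1), i.e. ominus 5 = 3 in flat numbering.
assert ominus_pair((2, 1)) == (1, 1)
assert to_flat(ominus_pair((2, 1))) == 3
# (1,-1) = (1,1) and (5,-3) = (2,1) after reduction mod (l2, l1):
assert (1 % l2, -1 % l1) == (1, 1)
assert (5 % l2, -3 % l1) == (2, 1)
```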

The following statement is proved in [17, Theorem 1].

Theorem 1

The matrix \(A_{(2)}\) from equality (5) can be represented in the form

$$ { } {{A}_{(2)}}\equiv {{\left ( {{a}_{i,j}} \right )}_{i,j=0,\ldots , \ell -1}}={{\left ( {{a}_{j\ominus i}} \right )}_{i,j=0,\ldots ,\ell -1}}. $$
(6)

The linear combination of matrices of the form (6) is a matrix of the form (6).

Example 1

For \(l_{2}=3\) and \(l_{1}=2\), we get \(\ell =6\), and matrix \({{A}_{(2)}}\) is written as

$$ {{A}_{(2)}}= \begin{pmatrix} a_{(0,0)\ominus (0,0)} & a_{(0,1)\ominus (0,0)} & a_{(1,0)\ominus (0,0)} & a_{(1,1)\ominus (0,0)} & a_{(2,0)\ominus (0,0)} & a_{(2,1)\ominus (0,0)} \\ a_{(0,0)\ominus (0,1)} & a_{(0,1)\ominus (0,1)} & a_{(1,0)\ominus (0,1)} & a_{(1,1)\ominus (0,1)} & a_{(2,0)\ominus (0,1)} & a_{(2,1)\ominus (0,1)} \\ a_{(0,0)\ominus (1,0)} & a_{(0,1)\ominus (1,0)} & a_{(1,0)\ominus (1,0)} & a_{(1,1)\ominus (1,0)} & a_{(2,0)\ominus (1,0)} & a_{(2,1)\ominus (1,0)} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ a_{(0,0)\ominus (2,1)} & a_{(0,1)\ominus (2,1)} & a_{(1,0)\ominus (2,1)} & a_{(1,1)\ominus (2,1)} & a_{(2,0)\ominus (2,1)} & a_{(2,1)\ominus (2,1)} \end{pmatrix} = \begin{pmatrix} a_{0} & a_{1} & a_{2} & a_{3} & a_{4} & a_{5} \\ a_{1} & a_{0} & a_{3} & a_{2} & a_{5} & a_{4} \\ a_{4} & a_{5} & a_{0} & a_{1} & a_{2} & a_{3} \\ a_{5} & a_{4} & a_{1} & a_{0} & a_{3} & a_{2} \\ a_{2} & a_{3} & a_{4} & a_{5} & a_{0} & a_{1} \\ a_{3} & a_{2} & a_{5} & a_{4} & a_{1} & a_{0} \end{pmatrix}. $$
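The construction of \(A_{(2)}\) from its first row, per Theorem 1, can be sketched numerically; the helper names below (`build_A2` and the inline `ominus`) are ours, and the result is checked against the \(6\times 6\) matrix of Example 1 with \(a_{j}=j\):

```python
import numpy as np

# Build A_(2) = (a_{j ominus i}) from its first row a (Theorem 1),
# for l2 = 3, l1 = 2 as in Example 1.
def build_A2(a, l2, l1):
    ell = l2 * l1
    def ominus(j, i):  # flat index of (j2 - i2 mod l2, j1 - i1 mod l1)
        j2, j1 = divmod(j, l1)
        i2, i1 = divmod(i, l1)
        return ((j2 - i2) % l2) * l1 + ((j1 - i1) % l1)
    return np.array([[a[ominus(j, i)] for j in range(ell)]
                     for i in range(ell)])

A = build_A2(np.arange(6), 3, 2)
expected = np.array([[0, 1, 2, 3, 4, 5],
                     [1, 0, 3, 2, 5, 4],
                     [4, 5, 0, 1, 2, 3],
                     [5, 4, 1, 0, 3, 2],
                     [2, 3, 4, 5, 0, 1],
                     [3, 2, 5, 4, 1, 0]])
assert np.array_equal(A, expected)
```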

Let us present important consequences of Theorem 1.

Corollary 1

The matrix \(A_{(2)}\) is uniquely determined by its first line \(\mathbf{a}=\left ( {{a}_{0}},{{a}_{1}},\ldots ,{{a}_{\ell -1}} \right )\).

In [15], the authors studied matrices \({{A}_{(1)}}({{a}_{0}},\ldots ,{{a}_{l-1}})\) generated by a single involution

$$ {{A}_{(1)}}({{a}_{0}},\ldots ,{{a}_{l-1}})={{\left ( {{a}_{j-i\,\bmod \,l}} \right )}_{i,j=0,\ldots ,l-1}}= \begin{pmatrix} a_{0} & a_{1} & \cdots & a_{l-1} \\ a_{l-1} & a_{0} & \cdots & a_{l-2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1} & a_{2} & \cdots & a_{0} \end{pmatrix}, $$

which coincides with the matrix \(A_{(2)}\) in the case \(l_{2}=1\), \(l_{1}=l\), as in this case \(\ell ={{l}_{1}}\), \((i_{2}, i_{1})=(0,i_{1})=i_{1}\), and \({{i}_{1}}=0,\ldots ,l-1\).

Corollary 2

The matrix \(A_{(2)}\) has a block structure consisting of an \(l_{2}\times l_{2}\) array of square blocks, each of which is a matrix of size \(l_{1}\times l_{1}\) and of type \(A_{(1)}\). If we represent the vector a in the form \(\mathbf{a}=({{\mathbf{a}}_{0}},\ldots ,{{\mathbf{a}}_{{{l}_{2}}-1}})\), where each \({{\mathbf{a}}_{{{j}_{2}}}}=({{a}_{{{j}_{2}}{{l}_{1}}}},\ldots ,{{a}_{({{j}_{2}}+1){{l}_{1}}-1}})\) is also a vector, and denote \(A_{(1)}^{(j_{2})}=A_{(1)}(\mathbf {a}_{j_{2}})\), then the equality

$$ {{A}_{(2)}}(\mathbf{a})={{A}_{(1)}}\left ( A_{(1)}^{(0)},A_{(1)}^{(1)},\ldots ,A_{(1)}^{({{l}_{2}}-1)} \right )\equiv \begin{pmatrix} A_{(1)}^{(0)} & A_{(1)}^{(1)} & \cdots & A_{(1)}^{({{l}_{2}}-1)} \\ A_{(1)}^{({{l}_{2}}-1)} & A_{(1)}^{(0)} & \cdots & A_{(1)}^{({{l}_{2}}-2)} \\ \vdots & \vdots & \ddots & \vdots \\ A_{(1)}^{(1)} & A_{(1)}^{(2)} & \cdots & A_{(1)}^{(0)} \end{pmatrix}, $$
(7)

where the block matrix repeats the structure of the matrix \({{A}_{(1)}}\) of the size \(l_{2}\times l_{2}\), is valid.

Example 2

Property (7) of matrices of the form \(A_{(2)}\) is clearly visible in the matrix \(A_{(2)}(\mathbf{a})\) from Example 1, where \(l_{2}=3\), \(l_{1}=2\), \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4},a_{5})\). If we denote \({{\mathbf{a}}_{0}}=({{a}_{0}},{{a}_{1}})\), \({{\mathbf{a}}_{1}}=({{a}_{2}},{{a}_{3}})\), \({{\mathbf{a}}_{2}}=({{a}_{4}},{{a}_{5}})\), and

$$\begin{aligned}& A_{(1)}(\mathbf {a}_{0})= \begin{pmatrix} a_{0} & a_{1} \\ a_{1} & a_{0} \end{pmatrix} = A^{(0)}_{(1)},\quad A_{(1)}(\mathbf {a}_{1})= \begin{pmatrix} a_{2} & a_{3} \\ a_{3} & a_{2} \end{pmatrix} = A^{(1)}_{(1)},\\& A_{(1)}(\mathbf {a}_{2})= \begin{pmatrix} a_{4} & a_{5} \\ a_{5} & a_{4} \end{pmatrix} =A^{(2)}_{(1)}, \end{aligned}$$

then \(\mathbf{a}=({{\mathbf{a}}_{0}},{{\mathbf{a}}_{1}},{{\mathbf{a}}_{2}})\), and matrix \(A_{(2)}(\mathbf{a})\) is written as

$$ A_{(2)}(\mathbf{a}) = \begin{pmatrix} a_{0} & a_{1} & a_{2} & a_{3} & a_{4} & a_{5}\\ a_{1} & a_{0} & a_{3} & a_{2} & a_{5} & a_{4}\\ a_{4} & a_{5} & a_{0} & a_{1} & a_{2} & a_{3}\\ a_{5} & a_{4} & a_{1} & a_{0} & a_{3} & a_{2}\\ a_{2} & a_{3} & a_{4} & a_{5} & a_{0} & a_{1}\\ a_{3} & a_{2} & a_{5} & a_{4} & a_{1} & a_{0} \end{pmatrix}= \begin{pmatrix} A^{(0)}_{(1)} & A^{(1)}_{(1)} & A^{(2)}_{(1)} \\ A^{(2)}_{(1)} & A^{(0)}_{(1)} & A^{(1)}_{(1)} \\ A^{(1)}_{(1)} & A^{(2)}_{(1)} & A^{(0)}_{(1)} \end{pmatrix} =A_{(1)}\big(A^{(0)}_{(1)},A^{(1)}_{(1)},A^{(2)}_{(1)}\big). $$

Corollary 3

The transposed matrix \(A_{(2)}^{t}(\mathbf{a})\) has the structure of a matrix of the form (6) and, in addition, \(A_{(2)}^{t}(\mathbf{a})={{A}_{(2)}}(\mathbf{c})\), where \(\mathbf{c}={{({{a}_{(-{{j}_{2}},-{{j}_{1}})}})}_{({{j}_{2}},{{j}_{1}})=0, \ldots ,({{l}_{2}}-1,{{l}_{1}}-1)}}\), and the components of the index, \(-j_{2}\) and \(-j_{1}\), are taken modulo \(l_{2}\) and modulo \({l_{1}}\), respectively.

Example 3

For the matrix \(A_{(2)}^{t}(\mathbf{a})\), from Example 2 we have

$$\begin{aligned}& \mathbf{c}={{({{a}_{\ominus j}})}_{j=0,\ldots ,5}}=({{a}_{0}},{{a}_{-1}},{{a}_{-2}},{{a}_{-3}},{{a}_{-4}},{{a}_{-5}})\\& =({{a}_{0}},{{a}_{(0,-1)}},{{a}_{(-1,0)}},{{a}_{(-1,-1)}},{{a}_{(-2,0)}},{{a}_{(-2,-1)}})\\& =({{a}_{0}},{{a}_{(0,1)}},{{a}_{(2,0)}},{{a}_{(2,1)}},{{a}_{(1,0)}},{{a}_{(1,1)}})=({{a}_{0}},{{a}_{1}},{{a}_{4}},{{a}_{5}},{{a}_{2}},{{a}_{3}}). \end{aligned}$$

Let us study the product of matrices of the form (6).

Theorem 2

([17, Theorem 2]) The product of matrices of the form (6) is again a matrix of the form (6). Multiplication of matrices of the form (6) is commutative \({{A}_{(2)}}(\mathbf{a}){{A}_{(2)}}(\mathbf{d})={{A}_{(2)}}( \mathbf{d}){{A}_{(2)}}(\mathbf{a})\).

The following theorem describes the eigenvectors and eigenvalues of matrices of the form (6). The result of [18, Theorem 3] can be represented in the following form.

Theorem 3

Eigenvectors of the matrix \(A_{(2)}(\mathbf{a})\) can be chosen in the form

$$ {{\mathbf{e}}_{k}}={{\mathbf{e}}_{({{k}_{2}},{{k}_{1}})}}= \begin{pmatrix} {{\mathbf{d}}_{{{k}_{1}}}} \\ {{\lambda }_{{{k}_{2}}}}{{\mathbf{d}}_{{{k}_{1}}}} \\ \vdots \\ \lambda _{{{k}_{2}}}^{{{l}_{2}}-1}{{\mathbf{d}}_{{{k}_{1}}}} \end{pmatrix},\qquad {{\mathbf{d}}_{{{k}_{1}}}}= \begin{pmatrix} 1 \\ {{\lambda }_{{{k}_{1}}}} \\ \vdots \\ \lambda _{{{k}_{1}}}^{{{l}_{1}}-1} \end{pmatrix}, $$

where \({{\lambda }_{{{k}_{1}}}}={{e}^{\textit{i}2\pi \frac{{{k}_{1}}}{{{l}_{1}}}}}\) is the \({{l}_{1}}\)-th root of unity, \({{k}_{1}}=0,\ldots ,{{l}_{1}}-1\), and \({{\lambda }_{{{k}_{2}}}}={{e}^{\textit{i}2\pi \frac{{{k}_{2}}}{{{l}_{2}}}}}\) is the \(l_{2}\)-th root of unity, \({{k}_{2}}=0,\ldots ,{{l}_{2}}-1\).

Denote \(\boldsymbol{\lambda }_{k}^{j}\equiv \boldsymbol{\lambda }_{({{k}_{2}},{{k}_{1}})}^{({{j}_{2}},{{j}_{1}})}= \lambda _{{{k}_{2}}}^{{{j}_{2}}}\lambda _{{{k}_{1}}}^{{{j}_{1}}}\). Then the following statement follows from Theorem 3.

Corollary 4

An eigenvector of the matrix \(A_{(2)}(\mathbf{a})\) with number \(k=({{k}_{2}},{{k}_{1}})=0,\ldots ,\ell -1\), where \({{k}_{1}}=0,\ldots ,{{l}_{1}}-1\), \({{k}_{2}}=0,\ldots ,{{l}_{2}}-1\), can be represented as

$$ { } {{\mathbf{e}}_{k}}=\left ( \boldsymbol{\lambda }_{k}^{j} \right )_{j=0, \ldots ,\ell -1}^{t}\equiv \left ( \lambda _{{{k}_{2}}}^{{{j}_{2}}} \lambda _{{{k}_{1}}}^{{{j}_{1}}} \right )_{({{j}_{2}},{{j}_{1}})=0, \ldots ,({{l}_{2}}-1,{{l}_{1}}-1)}^{t}, $$
(8)

and the eigenvalue corresponding to this eigenvector is determined from the equality

$$ { } {{\mu }_{k}}=\sum \limits _{j=0}^{\ell -1}{{{a}_{j}} \boldsymbol{\lambda }_{k}^{j}}=\sum \limits _{({{j}_{2}},{{j}_{1}})=0}^{({{l}_{2}}-1,{{l}_{1}}-1)}{{{a}_{({{j}_{2}},{{j}_{1}})}}} \lambda _{{{k}_{2}}}^{{{j}_{2}}}\lambda _{{{k}_{1}}}^{{{j}_{1}}}= \mathbf{a}\cdot \mathbf{e}_{k}^{t}. $$
(9)

Remark 2

The expression \({{\lambda }_{{{k}_{2}}}}{{\lambda }_{{{k}_{1}}}}\) is an ordered pair: in the first place is \({{\lambda }_{{{k}_{2}}}}\), an \(l_{2}\)-th root of unity, and in the second place is \({{\lambda }_{{{k}_{1}}}}\), an \(l_{1}\)-th root of unity. Therefore, in the general case, \(\lambda _{1}\lambda _{1}\ne \lambda _{1}^{2}\).

Remark 3

If we take \({{\mathbf{S}}^{j}}=S_{2}^{{{j}_{2}}}S_{1}^{{{j}_{1}}}\), then the operator \({{R}_{a}}[u]\) can be written as

$$ {{R}_{a}}[u](x)=\sum \limits _{j=0}^{\ell -1}{{{a}_{j}}u\left ( {{ \mathbf{S}}^{j}}x \right )}. $$

Example 4

For the matrix \({A_{(2)}}({\mathbf{{a}}})\) from Example 2, we have \({\lambda _{{i_{2}}}} = {\lambda ^{{i_{2}}}}\), \({\lambda _{{i_{1}}}} = {( - 1)^{{i_{1}}}}\), where \(\lambda = \exp ({\mathrm{{i}}}{\textstyle{\frac{2\pi}{3}}})\). Hence, from formula (8) we get that \({{\mathbf{{e}}}_{({i_{2}},{i_{1}})}} = \left ( {\lambda _{{i_{2}}}^{{j_{2}}} \lambda _{{i_{1}}}^{{j_{1}}}} \right )_{({j_{2}},{j_{1}}) = 0, \ldots ,(2,1)}^{t}\). Thus,

$$\begin{aligned}& {{\mathbf{e}}_{0}}={{\mathbf{e}}_{(0,0)}}={{(1,1,1,1,1,1)}^{t}},\quad {{\mathbf{e}}_{1}}={{\mathbf{e}}_{(0,1)}}={{(1,-1,1,-1,1,-1)}^{t}},\\& {{\mathbf{e}}_{2}}={{\mathbf{e}}_{(1,0)}}={{(1,1,\lambda ,\lambda , \bar{\lambda },\bar{\lambda })}^{t}},\quad {{\mathbf{e}}_{3}}={{\mathbf{e}}_{(1,1)}}={{(1,-1, \lambda ,-\lambda ,\bar{\lambda },-\bar{\lambda })}^{t}},\\& {{\mathbf{e}}_{4}}={{\mathbf{e}}_{(2,0)}}={{(1,1,\bar{\lambda }, \bar{\lambda },\lambda ,\lambda )}^{t}},\quad {{\mathbf{e}}_{5}}={{ \mathbf{e}}_{(2,1)}}={{(1,-1,\bar{\lambda },-\bar{\lambda },\lambda ,- \lambda )}^{t}}. \end{aligned}$$

From equality (9), \({{\mu }_{i}}=\mathbf{a}\cdot \mathbf{e}_{i}^{t}\) for \(\mathbf{a}=(a_{0},a_{1},a_{2},a_{3},a_{4},a_{5})\). Thus,

$$\begin{aligned}& {{\mu }_{(0,0)}}={{a}_{0}}+{{a}_{1}}+{{a}_{2}}+{{a}_{3}}+{{a}_{4}}+{{a}_{5}},{{ \mu }_{(0,1)}}={{a}_{0}}-{{a}_{1}}+{{a}_{2}}-{{a}_{3}}+{{a}_{4}}-{{a}_{5}},\\& {{\mu }_{(1,0)}}={{a}_{0}}+{{a}_{1}}+\lambda ({{a}_{2}}+{{a}_{3}})+ \bar{\lambda }({{a}_{4}}+{{a}_{5}}), {{\mu }_{(1,1)}}={{a}_{0}}-{{a}_{1}}+ \lambda ({{a}_{2}}-{{a}_{3}})+\bar{\lambda }({{a}_{4}}-{{a}_{5}}),\\& {{\mu }_{(2,0)}}={{a}_{0}}+{{a}_{1}}+\bar{\lambda }({{a}_{2}}+{{a}_{3}})+ \lambda ({{a}_{4}}+{{a}_{5}}),{{\mu }_{(2,1)}}={{a}_{0}}-{{a}_{1}}+ \bar{\lambda }({{a}_{2}}-{{a}_{3}})+\lambda ({{a}_{4}}-{{a}_{5}}). \end{aligned}$$
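The eigenrelations of Corollary 4 can be confirmed numerically for this example; the sketch below (our own check, with illustrative random data) rebuilds \(A_{(2)}(\mathbf{a})\) and verifies \(A_{(2)}(\mathbf{a}){{\mathbf{e}}_{k}}={{\mu }_{k}}{{\mathbf{e}}_{k}}\) with \({{\mathbf{e}}_{k}}\) from (8) and \({{\mu }_{k}}\) from (9):

```python
import numpy as np

# Check A_(2)(a) e_k = mu_k e_k for l2 = 3, l1 = 2 (Example 4).
l2, l1 = 3, 2
ell = l2 * l1
rng = np.random.default_rng(0)
a = rng.standard_normal(ell)

# A_(2)(a) = (a_{j ominus i}), per Theorem 1.
A = np.array([[a[((j // l1 - i // l1) % l2) * l1 + ((j % l1 - i % l1) % l1)]
               for j in range(ell)] for i in range(ell)])

lam2 = np.exp(2j * np.pi / l2)   # primitive l2-th root of unity
lam1 = np.exp(2j * np.pi / l1)   # primitive l1-th root of unity, i.e. -1

for k in range(ell):
    k2, k1 = divmod(k, l1)
    # Component j of e_k is lam2^(k2*j2) * lam1^(k1*j1), formula (8).
    e_k = np.array([lam2**(k2 * (j // l1)) * lam1**(k1 * (j % l1))
                    for j in range(ell)])
    mu_k = a @ e_k               # formula (9)
    assert np.allclose(A @ e_k, mu_k * e_k)
```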

Theorem 4

Let \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\) for \(k=0,\ldots ,\ell -1\), where the eigenvectors \({{\mathbf{e}}_{k}}\) are found from (8). Then there is a matrix inverse to the matrix \(A_{(2)}(\mathbf{a})\), and it has the form

$$ { } A_{(2)}^{-1}(\mathbf{a})=\frac{1}{\ell }{\mathrm{M}} {{\mathop{\mathrm{diag}} \nolimits} ^{ - 1}}\left ( {{\mu _{0}}, \ldots ,{\mu _{\ell - 1}}} \right ) \overline {\mathrm{M}}, $$
(10)

where \({\mathrm{M}} =\left ( {{\mathbf{e}}_{0}},\ldots ,{{\mathbf{e}}_{\ell -1}} \right )\). The matrix M is symmetric and satisfies \({\mathrm{M}}\overline {\mathrm{M}} =\ell I\).

Proof

Since \({{\mathbf{e}}_{k}}\) is the eigenvector of the matrix \({{A}_{(2)}}(\mathbf{a})\), then \({{A}_{(2)}}(\mathbf{a}){\mathrm{M}} =\left ( {{\mu }_{0}}{{\mathbf{e}}_{0}}, \ldots ,{{\mu }_{\ell -1}}{{\mathbf{e}}_{\ell -1}} \right )\), and it means that

$$\begin{aligned}& {A_{(2)}}({\mathbf{{a}}}){\mathrm{M}}{{\mathop{\mathrm{diag}}\nolimits} ^{ - 1}} \left ( {{\mu _{0}}, \ldots ,{\mu _{\ell - 1}}} \right ) = \left ( {{ \mu _{0}}{{\mathbf{{e}}}_{0}}, \ldots ,{\mu _{\ell - 1}}{{\mathbf{{e}}}_{\ell - 1}}} \right ){\mathop{\mathrm{diag}}\nolimits} \left ( {\mu _{0}^{ - 1}, \ldots ,\mu _{\ell - 1}^{ - 1}} \right )\\& = \left ( {{{\mathbf{{e}}}_{0}}, \ldots ,{{\mathbf{{e}}}_{\ell - 1}}} \right ) = { \mathrm{M}}. \end{aligned}$$

Hence,

$$ {A_{(2)}}({\mathbf{{a}}}){\mathrm{M}}{\mathop{\mathrm{diag}}\nolimits} \left ( {\mu _{0}^{ - 1}, \ldots ,\mu _{\ell - 1}^{ - 1}} \right )\overline {\mathrm{M}} = { \mathrm{M}}\overline {\mathrm{M}}. $$

Let \({{\mathbf{e}}_{j}}={{\mathbf{e}}_{({{j}_{2}},{{j}_{1}})}}\) and \({{\mathbf{e}}_{i}}={{\mathbf{e}}_{({{i}_{2}},{{i}_{1}})}}\) be two different columns of matrix M, i.e., \(j\ne i\). Then from (8), using equalities \(\lambda _{j_{2}}\bar{\lambda}_{i_{2}}=\lambda _{j_{2}-i_{2}}\) and \({{\lambda }_{{{j}_{1}}}}{{\bar{\lambda }}_{{{i}_{1}}}}={{\lambda }_{{{j}_{1}}-{{i}_{1}}}}\), we can write

$$\begin{aligned}& {{\mathbf{{e}}}_{j}}{\overline {\mathbf{{e}}} _{i}} = {{\mathbf{{e}}}_{({j_{2}},{j_{1}})}} \cdot{\overline {\mathbf{{e}}} _{({i_{2}},{i_{1}})}} = \left ( {\lambda _{{j_{2}}}^{{k_{2}}} \lambda _{{j_{1}}}^{{k_{1}}}} \right )_{({k_{2}},{k_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)}^{t}\cdot \left ( {\bar{\lambda}_{{i_{2}}}^{{k_{2}}} \bar{\lambda}_{{i_{1}}}^{{k_{1}}}} \right )_{({k_{2}},{k_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)}^{t}\\& = \sum \limits _{({k_{2}},{k_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} { \lambda _{{j_{2}}}^{{k_{2}}}} \lambda _{{j_{1}}}^{{k_{1}}} \bar{\lambda}_{{i_{2}}}^{{k_{2}}}\bar{\lambda}_{{i_{1}}}^{{k_{1}}} = \sum \limits _{({k_{2}},{k_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{{({ \lambda _{{j_{2}}}}{{\bar{\lambda}}_{{i_{2}}}})}^{{k_{2}}}}} {({ \lambda _{{j_{1}}}}{\bar{\lambda}_{{i_{1}}}})^{{k_{1}}}} = \sum \limits _{({k_{2}},{k_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} { \lambda _{{j_{2}} - {i_{2}}}^{{k_{2}}}} \lambda _{{j_{1}} - {i_{1}}}^{{k_{1}}}\\& = \sum \limits _{{k_{2}} = 0}^{{l_{2}} - 1} {\lambda _{{j_{2}} - {i_{2}}}^{{k_{2}}}} \sum \limits _{{k_{1}} = 0}^{{l_{1}} - 1} {\lambda _{{j_{1}} - {i_{1}}}^{{k_{1}}}}. \end{aligned}$$

Let \(j_{2}-i_{2}\ne 0\); then \({{\lambda }_{{{j}_{2}}-{{i}_{2}}}}\ne 1\), and by the formula for the sum of a geometric progression we find that

$$ \sum \limits _{{{k}_{2}}=0}^{{{l}_{2}}-1}{\lambda _{{{j}_{2}}-{{i}_{2}}}^{{{k}_{2}}}}= \frac{\lambda _{{{j}_{2}}-{{i}_{2}}}^{{{l}_{2}}}-1}{{{\lambda }_{{{j}_{2}}-{{i}_{2}}}}-1}=0. $$

If \({{j}_{2}}-{{i}_{2}}=0\), then \({{\lambda }_{{{j}_{2}}-{{i}_{2}}}}=1\) and \(\sum \limits _{{{k}_{2}}=0}^{{{l}_{2}}-1}{\lambda _{{{j}_{2}}-{{i}_{2}}}^{{{k}_{2}}}}={{l}_{2}}\). Thus, if \(j\ne i\), then either \({{j}_{2}}\ne {{i}_{2}}\) or \({{j}_{1}}\ne {{i}_{1}}\), and therefore

$$ \sum \limits _{{{k}_{2}}=0}^{{{l}_{2}}-1}{\lambda _{{{j}_{2}}-{{i}_{2}}}^{{{k}_{2}}}} \sum \limits _{{{k}_{1}}=0}^{{{l}_{1}}-1}{\lambda _{{{j}_{1}}-{{i}_{1}}}^{{{k}_{1}}}}=0, $$

and if \(j=i\), then

$$ \sum \limits _{{{k}_{2}}=0}^{{{l}_{2}}-1}{\lambda _{{{j}_{2}}-{{i}_{2}}}^{{{k}_{2}}}} \sum \limits _{{{k}_{1}}=0}^{{{l}_{1}}-1}{\lambda _{{{j}_{1}}-{{i}_{1}}}^{{{k}_{1}}}}={{l}_{2}}{{l}_{1}}= \ell . $$

Thus, we get

$$ { } {{\mathbf{{e}}}_{j}} \cdot {\overline {\mathbf{{e}}} _{i}} = \sum \limits _{k = 0}^{ \ell - 1} {{\boldsymbol{\lambda }}_{j}^{k}\overline {\boldsymbol{\lambda }} _{i}^{k}} = \left \{ { \textstyle\begin{array}{*{20}{c}} 0, &{j \ne i} \\ \ell, &{j = i} \end{array}\displaystyle } \right .. $$
(11)

Let us prove the symmetry of the matrix M. Indeed, as \(\lambda _{{{i}_{2}}}^{{{j}_{2}}}=\lambda _{{{j}_{2}}}^{{{i}_{2}}}\) and \(\lambda _{{{i}_{1}}}^{{{j}_{1}}}=\lambda _{{{j}_{1}}}^{{{i}_{1}}}\), then

$$\begin{aligned}& {{\mathrm{M}}^{t}} = \left ( {\lambda _{{j_{2}}}^{{i_{2}}}\lambda _{{j_{1}}}^{{i_{1}}}} \right )_{ \textstyle\begin{array}{*{20}{c}} {({i_{2}},{i_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \\ {({j_{2}},{j_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \end{array}\displaystyle }^{t} = {\left ( {\lambda _{{i_{2}}}^{{j_{2}}}\lambda _{{i_{1}}}^{{j_{1}}}} \right )_{ \textstyle\begin{array}{*{20}{c}} {({i_{2}},{i_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \\ {({j_{2}},{j_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \end{array}\displaystyle }}\\& = {\left ( {\lambda _{{j_{2}}}^{{i_{2}}}\lambda _{{j_{1}}}^{{i_{1}}}} \right )_{ \textstyle\begin{array}{*{20}{c}} {({i_{2}},{i_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \\ {({j_{2}},{j_{1}}) = 0, \ldots ,({l_{2}} - 1,{l_{1}} - 1)} \end{array}\displaystyle }} ={\left ( {{\boldsymbol{\lambda }}_{j}^{i}} \right )_{i,j = 0, \ldots , \ell - 1}} = {\mathrm{M}}. \end{aligned}$$

It follows that the ith row of the matrix M has the form \(\mathbf{e}_{i}^{t}\). It means that

$$ {\mathrm{M}}\overline {\mathrm{M}} = {\left ( {{{\mathbf{{e}}}_{i}} \cdot {{ \overline {\mathbf{{e}}} }_{j}}} \right )_{i,j = 0, \ldots ,\ell - 1}} = \ell I. $$

Using the obtained equality, we can write

$$ {A_{(2)}}({\mathbf{{a}}})\frac{1}{\ell }{\mathrm{M}}{{\mathop{\mathrm{diag}}\nolimits} ^{ - 1}}\left ( {{\mu _{0}}, \ldots ,{\mu _{\ell - 1}}} \right ) \overline {\mathrm{M}} = \frac{1}{\ell }\ell I = I. $$

The theorem is proved. □

Example 5

For the matrix \({{A}_{(2)}}(\mathbf{a})\) from Example 4 we have

$$ {\mathrm{M}} = \left ( {{{\mathbf{{e}}}_{0}}, \ldots ,{{\mathbf{{e}}}_{5}}} \right ) = \left ( { \textstyle\begin{array}{*{20}{c}} 1&1&1&1&1&1 \\ 1&{ - 1}&1&{ - 1}&1&{ - 1} \\ 1&1&\lambda &\lambda &{\bar{\lambda}}&{\bar{\lambda}} \\ 1&{ - 1}&\lambda &{ - \lambda }&{\bar{\lambda}}&{ - \bar{\lambda}} \\ 1&1&{\bar{\lambda}}&{\bar{\lambda}}&\lambda &\lambda \\ 1&{ - 1}&{\bar{\lambda}}&{ - \bar{\lambda}}&\lambda &{ - \lambda } \end{array}\displaystyle } \right ). $$
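The properties of M used in the proof of Theorem 4, its symmetry and relation (11) in the form \({\mathrm{M}}\overline {\mathrm{M}} =\ell I\), can be confirmed numerically for this example (a sketch; the construction of M is ours):

```python
import numpy as np

# Check the properties of M = (e_0, ..., e_5) for l2 = 3, l1 = 2
# (Example 5): M is symmetric and M * conj(M) = ell * I.
l2, l1 = 3, 2
ell = l2 * l1
lam2, lam1 = np.exp(2j * np.pi / l2), np.exp(2j * np.pi / l1)

# Column k of M is the eigenvector e_k from (8): entry (j, k) is
# lam2^(k2*j2) * lam1^(k1*j1).
M = np.array([[lam2**((k // l1) * (j // l1)) * lam1**((k % l1) * (j % l1))
               for k in range(ell)] for j in range(ell)])

assert np.allclose(M, M.T)                             # symmetry
assert np.allclose(M @ np.conj(M), ell * np.eye(ell))  # relation (11)
```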

Theorem 5

Let the matrix \({{A}_{(2)}}(\mathbf{a})\) be nonsingular; then the inverse matrix has the form \(A_{(2)}^{-1}(\mathbf{a})={{A}_{(2)}}(\mathbf{b})\), where

$$ { } {b_{j}} = \frac{1}{\ell }\sum \limits _{k = 0}^{\ell - 1} { \frac{{\overline {\boldsymbol{\lambda }} _{k}^{j}}}{{{\mu _{k}}}}}, \;\;j = 0, \ldots ,\ell - 1. $$
(12)

Proof

Let the matrix \({{A}_{(2)}}(\mathbf{a})\) be invertible, then its eigenvalues, found from (9), are nonzero, i.e., \({{\mu }_{k}}\ne 0\), and therefore Theorem 4 is applicable. Let us denote the elements of the inverse matrix as \({{b}_{i,j}}={{\left ( A_{(2)}^{-1}(\mathbf{a}) \right )}_{i,j}}\) for \(i,j=0,\ldots ,\ell -1\). Then, using formula (10) and symmetry of M, we find

$$\begin{aligned}& {b_{i,j}} = \frac{1}{\ell }{\left ( {{\mathrm{M}}{{{\mathop{\mathrm{diag}} \nolimits} }^{ - 1}}\left ( {{\mu _{0}}, \ldots ,{\mu _{\ell - 1}}} \right )\overline {\mathrm{M}} } \right )_{i,j}} = \frac{1}{\ell }\sum \limits _{k = 0}^{\ell - 1} { \frac{{{\boldsymbol{\lambda }}_{i}^{k}}}{{{\mu _{k}}}}{\boldsymbol{\lambda }}_{j}^{ - k}} = \frac{1}{\ell }\sum \limits _{k = 0}^{\ell - 1} { \frac{{{\boldsymbol{\lambda }}_{i - j}^{k}}}{{{\mu _{k}}}}}\\& = \frac{1}{\ell }\sum \limits _{k = 0}^{\ell - 1} { \frac{{{\boldsymbol{\lambda }}_{k}^{i - j}}}{{{\mu _{k}}}}} = \frac{1}{\ell } \sum \limits _{k = 0}^{\ell - 1} { \frac{{\overline {\boldsymbol{\lambda }} _{k}^{j - i}}}{{{\mu _{k}}}}}. \end{aligned}$$

If we use the notation (12), then we have \({{b}_{i,j}}={{b}_{j\ominus i}}\), where \({{b}_{j}}\) is determined from (12). Thus, according to Theorem 1, \(A_{(2)}^{-1}(\mathbf{a})={{\left ( {{b}_{j\ominus i}} \right )}_{i,j=0, \ldots ,\ell -1}}={{A}_{(2)}}(\mathbf{b})\). For \(i=0\) we get the first line \({{A}_{(2)}}(\mathbf{b})\), which proves equality (12). The theorem is proved. □
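A numerical check of Theorem 5 (a sketch with \(l_{2}=3\), \(l_{1}=2\) as in the examples, random real a, and helper names of our own): the vector b computed from formula (12) indeed satisfies \({{A}_{(2)}}(\mathbf{a}){{A}_{(2)}}(\mathbf{b})=I\).

```python
import numpy as np

l2, l1 = 3, 2
ell = l2 * l1
rng = np.random.default_rng(1)
a = rng.standard_normal(ell)

def A2(vec):  # A_(2)(vec) = (vec_{j ominus i}), per Theorem 1
    def om(j, i):
        return ((j // l1 - i // l1) % l2) * l1 + ((j % l1 - i % l1) % l1)
    return np.array([[vec[om(j, i)] for j in range(ell)]
                     for i in range(ell)])

lam2, lam1 = np.exp(2j * np.pi / l2), np.exp(2j * np.pi / l1)
lam = lambda k, j: lam2**((k // l1) * (j // l1)) * lam1**((k % l1) * (j % l1))

# Eigenvalues mu_k from (9).
mu = np.array([sum(a[j] * lam(k, j) for j in range(ell)) for k in range(ell)])
assert np.all(np.abs(mu) > 1e-12)      # eigenvalues nonzero: A is invertible

# Vector b from formula (12).
b = np.array([sum(np.conj(lam(k, j)) / mu[k] for k in range(ell)) / ell
              for j in range(ell)])
assert np.allclose(b.imag, 0)          # b is real for real a
assert np.allclose(A2(a) @ A2(b), np.eye(ell))
```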

Corollary 5

It is easy to see that according to (11) and (12)

$$ {\mathbf{{b}}} \cdot {\mathbf{{e}}}_{k}^{t} = \sum \limits _{i = 0}^{\ell - 1} { \frac{1}{\ell }} \sum \limits _{j = 0}^{\ell - 1} { \frac{{\overline {\boldsymbol{\lambda }} _{j}^{i}}}{{{\mu _{j}}}}} {\boldsymbol{{ \lambda }}}_{k}^{i} = \sum \limits _{j = 0}^{\ell - 1} { \frac{1}{{{\mu _{j}}}}} \frac{1}{\ell }\sum \limits _{i = 0}^{\ell - 1} {\overline {\boldsymbol{\lambda }} _{j}^{i}{\boldsymbol{\lambda }}_{k}^{i}} = \sum \limits _{j = 0}^{\ell - 1} {\frac{{{\delta _{j,k}}}}{{{\mu _{j}}}}} = \frac{1}{{{\mu _{k}}}}. $$

The vector b can be also found using the formula

$$ { } {\mathbf{{b}}} = \frac{1}{\ell }{{\boldsymbol{{\mu }}}_{-} }\overline {\mathrm{M}}, $$
(13)

where \({{\boldsymbol{\mu }}_{-}}={{\left ( \mu _{k}^{-1} \right )}_{k=0,\ldots , \ell -1}}\).

Corollary 6

Let the matrix \({{A}_{(2)}}(\mathbf{a})\) be nonsingular, i.e., \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) for \(k=0,\ldots ,\ell -1\), where vectors \({{\mathbf{e}}_{k}}\) are found from (8), then the solution to the system of algebraic equations \({{A}_{(2)}}(\mathbf{a})\mathbf{p}=\mathbf{q}\) can be written as

$$ {\mathbf{{p}}} = \frac{1}{\ell }{\left ( {\sum \limits _{k = 0}^{\ell - 1} { \frac{1}{{{\mu _{k}}}}\sum \limits _{j = 0}^{\ell - 1} { \overline {\boldsymbol{\lambda }} _{k}^{j - i}{q_{j}}} } } \right )_{i = 0, \ldots ,\ell - 1}}. $$
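The explicit solution formula of Corollary 6 can be checked against a direct linear solve; a sketch with \(l_{2}=3\), \(l_{1}=2\), random real data, and helper names of our own:

```python
import numpy as np

l2, l1 = 3, 2
ell = l2 * l1
rng = np.random.default_rng(2)
a, q = rng.standard_normal(ell), rng.standard_normal(ell)

def om(j, i):  # flat index of (j2 - i2 mod l2, j1 - i1 mod l1)
    return ((j // l1 - i // l1) % l2) * l1 + ((j % l1 - i % l1) % l1)

# A_(2)(a) = (a_{j ominus i}), per Theorem 1.
A = np.array([[a[om(j, i)] for j in range(ell)] for i in range(ell)])

lam2, lam1 = np.exp(2j * np.pi / l2), np.exp(2j * np.pi / l1)
lam = lambda k, j: lam2**((k // l1) * (j // l1)) * lam1**((k % l1) * (j % l1))
mu = np.array([sum(a[j] * lam(k, j) for j in range(ell)) for k in range(ell)])

# Corollary 6: lambda-bar_k^{j-i} is read as conj(lam(k, j ominus i)).
p = np.array([sum(sum(np.conj(lam(k, om(j, i))) * q[j] for j in range(ell))
                  / mu[k] for k in range(ell)) / ell for i in range(ell)])

assert np.allclose(p.imag, 0)      # the solution is real for real data
assert np.allclose(A @ p.real, q)  # it solves A_(2)(a) p = q
```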

Corollary 7

If the matrix \({{A}_{(2)}}(\mathbf{a})\) is not singular, then the eigenvectors of the matrix \(A_{(2)}^{-1}(\mathbf{a})\) are equal to \({{\mathbf{e}}_{k}}\), \(k=0,\ldots ,\ell -1\), and the eigenvalues have the form \(\mu _{k}^{-1}\). Indeed, since the eigenvectors \({{\mathbf{e}}_{k}}\) of the matrix \(A_{(2)}(\mathbf{a})\) do not depend on a, they will also be eigenvectors of the matrix \(A_{(2)}^{-1}(\mathbf{a})={{A}_{(2)}}(\mathbf{b})\). From the equality \({{A}_{(2)}}(\mathbf{a}){{\mathbf{e}}_{k}}={{\mu }_{k}}(\mathbf{a}){{ \mathbf{e}}_{k}}\) it also follows that

$$ A_{(2)}^{-1}(\mathbf{a}){{\mathbf{e}}_{k}}=\frac{1}{{{\mu }_{k}}}{{ \mathbf{e}}_{k}}. $$

3 Dirichlet problem

The following statement is required.

Lemma 1

([15, Lemma 3.1]) Let S be an orthogonal matrix. Then the operator \({I_{S}}u(x) = u(Sx)\) and the Laplace operator Δ commute, \(\Delta {I_{S}}u(x) = {I_{S}}\Delta u(x)\), for functions \(u \in {C^{2}}(\Omega )\). The operator \(\Lambda u(x) = \sum \limits _{i = 1}^{n} {{x_{i}}} {u_{{x_{i}}}}(x)\) and the operator \({I_{S}}\) also commute, \(\Lambda {I_{S}}u(x) = {I_{S}}\Lambda u(x)\), for functions \(u \in {C^{1}}(\bar{\Omega})\), and the equality \(\nabla {I_{S}} = {I_{S}}{S^{T}}\nabla \) holds.

Theorem 6

Let \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) for \(k=0,\ldots ,\ell -1\) and \(f\in {{C}^{\lambda }}\left ( \overline{\Omega } \right )\), \(0<\lambda <1\), \(g\in C\left ( \partial \Omega \right )\), then a solution to the Dirichlet problem (1), (2) exists, is unique, and can be written in the form

$$ { } u(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}} \hat{u}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x) = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\hat{u}\left ( {{{\mathbf{{S}}}^{j}}x} \right )} = {R_{\mathbf{{b}}}}[\hat{u}](x), $$
(14)

where \(\hat{u}(x)\) is a solution to the classical Dirichlet problem

$$\begin{aligned}& { } \Delta \hat{u} = \hat{f}(x),\,x \in \Omega , \end{aligned}$$
(15)
$$\begin{aligned}& { } \hat{u} = g(x),\,x \in \partial \Omega , \end{aligned}$$
(16)

the vector b is found from (13) and \(\hat{f}(x)={{R}_{\mathbf{a}}}[f]\).

Proof

1. Let us assume that \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) and a solution to the Dirichlet problem (1), (2) exists. By virtue of Lemma 1, from the equality \(\Delta u(x)=f(x)\) it follows that \(\Delta u(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x)=f(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x)\), and therefore if we denote

$$\begin{aligned}& \hat{u}(x) = {R_{\mathbf{{a}}}}[u](x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{a_{({i_{2}},{i_{1}})}}} u(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x),\\& \hat{f}(x) = {R_{\mathbf{{a}}}}[f] = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{a_{({i_{2}},{i_{1}})}}} f(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x), \end{aligned}$$

we obtain the Poisson equation for the function \(\hat{u}(x)\)

$$ \Delta \hat{u} = \hat{f}(x),\,x \in \Omega . $$

If we add boundary conditions (2)

$$ \hat{u} = {R_{\mathbf{{a}}}}[u]{|_{\partial \Omega }} = g(x),\,x \in \partial \Omega , $$

then for the function \(\hat{u}(x)\) we will get the classical Dirichlet problem (15), (16) for the Poisson equation. Under the assumptions made for \(f(x)\) and \(g(x)\), a solution \(\hat{u}(x)\in {{C}^{2}}\left ( \Omega \right )\cap C\left ( { \bar{\Omega }} \right )\) to this problem exists. For the function \(\hat{u}(x)\), the following equality can be written:

$$ \hat{u}(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{a_{({i_{2}},{i_{1}})}}} u(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x),\, x\in \overline{\Omega}, $$

which, in accordance with (5), generates the equivalent matrix equality

$$ {A_{(2)}}({\mathbf{{a}}})U(x) = \hat{U}(x), $$

where

$$ U(x)=\left ( u(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \right )_{({{i}_{2}},{{i}_{1}})=0, \ldots ,({{l}_{2}}-1,{{l}_{1}}-1)}^{t},\; \hat{U}(x)=\left ( \hat{u}(S_{2}^{{{i}_{2}}}S_{1}^{{{i}_{1}}}x) \right )_{({{i}_{2}},{{i}_{1}})=0,\ldots ,({{l}_{2}}-1,{{l}_{1}}-1)}^{t}. $$

Hence, provided that \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\), we get the matrix equality

$$ U(x) = A_{(2)}^{ - 1}({\mathbf{{a}}})\hat{U}(x) = A_{(2)}({\mathbf{{b}}})\hat{U}(x), $$

where the vector b is found from (13). If we take the first line of this equality, we get (14)

$$ u(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}} \hat{u}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x) = {R_{ \mathbf{{b}}}}[\hat{u}](x). $$

Thus, if a solution to the Dirichlet problem exists, it has the form (14).

2. Let \({\mathbf{{a}}} \cdot {\mathbf{{e}}}_{k}^{t} \ne 0\) for \(k = 0, \ldots ,\ell - 1\). Let us show that a solution to the Dirichlet problem (1), (2) exists and has the form (14). Let us take a function \(\hat{u}(x) \in {C^{2}}\left ( \Omega \right ) \cap C\left ( { \bar{\Omega}} \right )\), a solution to the Dirichlet problem (15), (16). Since, under the assumptions made, the vector b is defined by Theorem 5, the function \(u(x) \in {C^{2}}\left ( \Omega \right ) \cap C\left ( {\bar{\Omega}} \right )\) from equality (14) is also defined. Let us make sure that it is a solution to problem (1), (2). By virtue of Lemma 1, the following equality is valid:

$$ \Delta u(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}\Delta } \hat{u}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}} \hat{f}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x)} = {R_{\mathbf{{b}}}}[\hat{f}]. $$
(17)

The functional equality

$$ \hat{f}(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{a_{({i_{2}},{i_{1}})}}} f(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x) = {R_{ \mathbf{{a}}}}[f], $$

through which the function \(\hat{f}(x)\) from the conditions of the theorem was defined, generates the vector equality \(\hat{F}(x)=A_{(2)}^{{}}(\mathbf{a})F(x)\). From it, by virtue of Theorem 5, we find \(F(x)=A_{(2)}^{-1}(\mathbf{a})\hat{F}(x)=A_{(2)}^{{}}(\mathbf{b}) \hat{F}(x)\). The first elements of this vector equality give the scalar equality

$$ f(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}\hat{f}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x)} = {R_{ \mathbf{{b}}}}[\hat{f}]. $$

Therefore, equality (17) takes the form of equation (1). In particular, we proved that

$$ {R_{\mathbf{{b}}}}[{R_{\mathbf{{a}}}}[f]] = f. $$
(18)

Let us check that the boundary conditions are fulfilled. From (14), (16), and (18) it follows that

$$\begin{aligned}& {R_{\mathbf{{a}}}}[u]{|_{\partial \Omega }} = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}} {R_{\mathbf{{a}}}}[ \hat{u}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x)]{|_{\partial \Omega }} = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}} {R_{\mathbf{{a}}}}[\hat{u}](S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x){|_{\partial \Omega }}\\& = {R_{\mathbf{{b}}}}[{R_{\mathbf{{a}}}}[\hat{u}]]{|_{\partial \Omega }} = \hat{u}{|_{ \partial \Omega }} = g(x),\;x \in \partial \Omega . \end{aligned}$$

This means that the function \(u(x)\) from (14) is a solution to the Dirichlet problem (1), (2). Since the solution to problem (15), (16) is unique, the solution to problem (1), (2) is also unique. The theorem is proved. □

Remark 4

It is clear that from (18) it also follows that \({{R}_{\mathbf{a}}}[{{R}_{\mathbf{b}}}[f]]=f\): since, by Corollary 5, \(\mathbf{b}\cdot \mathbf{e}_{k}^{t}=\mu _{k}^{-1}\ne 0\), the vectors a and b can interchange positions in (18).
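The identity (18) and this remark can be observed numerically. A minimal Python sketch, assuming the cyclic coordinate shift \(S_{2}x=(x_{2},x_{3},x_{1})^{t}\) used in Example 6 below, for which \(R_{\mathbf{a}}[f](x)=f(S_{2}x)\) and \(R_{\mathbf{b}}[g](x)=g(S_{2}^{2}x)\):

```python
import numpy as np

S2 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])        # S2 x = (x2, x3, x1)^t, orthogonal, S2^3 = I

Ra = lambda f: (lambda x: f(S2 @ x))           # R_a[f](x) = f(S2 x)
Rb = lambda f: (lambda x: f(S2 @ S2 @ x))      # R_b[g](x) = g(S2^2 x)

f = lambda x: x[0]**2 - 3 * x[1] * x[2]        # an arbitrary test function
for x in (np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])):
    assert np.isclose(Rb(Ra(f))(x), f(x))      # R_b[R_a[f]] = f, eq. (18)
    assert np.isclose(Ra(Rb(f))(x), f(x))      # R_a[R_b[f]] = f, this remark
```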

4 Construction of a solution to the Dirichlet problem

Denote the Poisson kernel of the Dirichlet problem in a ball as

$$ P(x,y) = \frac{1}{{{\omega _{n}}}} \frac{{1 - |x{|^{2}}}}{{|x - y{|^{n}}}}, $$

where \({{\omega }_{n}}\) is the area of a unit sphere in \({{{R}}^{n}}\). Let \(G(x,y)\) be Green’s function of the Dirichlet problem, which is represented in the form (see, for example, [19])

$$ G(x,y) = \frac{1}{{{\omega _{n}}}}\left [ {E(x,y) - E\left ( {x|y|, \frac{y}{{|y|}}} \right )} \right ], $$
(19)

where \(E(x,y)\) is an elementary solution of Laplace’s equation

$$ E(x,y) = \left \{ \textstyle\begin{array}{l} - \ln |x - y|,\;n = 2 \\ \frac{1}{{n - 2}}|x - y{|^{2 - n}},\;n \ge 3 \end{array}\displaystyle \right .. $$

For further investigation, the following statement is necessary.

Lemma 2

([15, Lemma 4.1]) Let the function \(g(x)\) be continuous on \(\bar{\Omega}\) and S be an orthogonal matrix. Then, for any \(k\in {N}\), the equalities

$$ \int \limits _{\partial \Omega } {g({S^{k}}y)\;d{s_{y}}} = \int \limits _{\partial \Omega } {g(y)\;d{s_{y}}} ,\quad \int \limits _{\Omega }{g({S^{k}}y) \;dy} = \int \limits _{\Omega }{g(y)\;dy} $$

are valid. Based on this lemma, we will prove the following statement.

Theorem 7

Let the numbers \(\left \{ {{a}_{k}}:k=0,\ldots ,\ell -1 \right \}\) be such that \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\) for \(k=0,\ldots ,\ell -1\), where the vector \(\mathbf{e}_{k}^{{}}\) is found from (8) and \(f\in {{C}^{\lambda }}(\bar{\Omega })\), \(g\in {{C}^{\lambda +2}}( \partial \Omega )\), \(0<\lambda <1\). Then the solution to problem (1), (2) exists, is unique, belongs to the class \({{C}^{\lambda +2}}(\bar{\Omega })\), and is represented in the form

$$ u(x) = \int \limits _{\Omega }{G\left ( {x,y} \right )f(y)\;dy} + \int \limits _{\partial \Omega } {P\left ( {x,y} \right ){R_{\mathbf{{b}}}}[g](y) \;d{s_{y}}}, $$
(20)

where the function \(G(x,y)\) is determined from (19), and the numbers

$$ {b_{j}} = \frac{1}{\ell }\sum \limits _{k = 0}^{\ell - 1} { \frac{{\overline {\boldsymbol{\lambda }} _{k}^{j}}}{{{\mu _{k}}}}} $$

for \(j=0,\ldots ,\ell -1\) are found from (12).

Proof

According to Theorem 6, provided that \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\), a solution to problem (1), (2) exists. Using Remark 3, this solution can be written as (14)

$$ u(x) = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\hat{u}\left ( {{{\mathbf{{S}}}^{j}}x} \right )} = {R_{\mathbf{{b}}}}[\hat{u}](x), $$
(21)

where \(\hat{u}(x)\) is a solution of the classical Dirichlet problem (15), (16) and the \({{b}_{j}}\) are found from (12). It is clear that if \(f(x)\in {{C}^{\lambda }}(\bar{\Omega })\), then \(\hat{f}(x)={{R}_{\mathbf{a}}}[f]\in {{C}^{\lambda }}(\bar{\Omega })\), and for \(g(x)\in {{C}^{\lambda +2}}(\partial \Omega )\) a solution \(\hat{u}(x)\) to the Dirichlet problem (15), (16) exists, is unique, and belongs to the class \({{C}^{\lambda +2}}(\bar{\Omega })\) [20]. It is also known (see, for example, [19, p. 35]) that for given functions \(g(x)\) and \(\hat{f}(x)={{R}_{\mathbf{a}}}[f]\) the solution to problem (15), (16) is represented in the form

$$ \hat{u}(x) = \int \limits _{\Omega }{G(x,y)} {R_{\mathbf{{a}}}}[f]\left ( y \right )dy + \int \limits _{\partial \Omega } {P(x,y)g(y)\;d{s_{y}}}. $$
(22)

Using equality (21) we can write

$$ u(x) = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{ \Omega }{G\left ( {{{\mathbf{{S}}}^{j}}x,y} \right )} {R_{\mathbf{{a}}}}[f]\left ( y \right )dy + \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\partial \Omega } {P\left ( {{{\mathbf{{S}}}^{j}}x,y} \right )g(y) \;d{s_{y}}}. $$
(23)

It is not difficult to see that

$$ f \in {C^{\lambda }}(\bar{\Omega}),g \in {C^{\lambda + 2}}(\partial \Omega ) \Rightarrow \hat{u} \in {C^{\lambda + 2}}(\bar{\Omega}) \Rightarrow u \in {C^{\lambda + 2}}(\bar{\Omega}). $$

Taking into account Remark 1, we get

$$ |{{\mathbf{{S}}}^{j}}x - {{\mathbf{{S}}}^{j}}y| = |{{\mathbf{{S}}}^{j}}(x - y)| = |S_{2}^{{j_{2}}}S_{1}^{{j_{1}}}(x - y)| = |S_{1}^{{j_{1}}}(x - y)| = |x - y|, $$

thus

$$ E\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}y} \right ) = E\left ( {x,y} \right ). $$
(24)

Hence, \(G\left ( {{\mathbf{S}}^{j}}x,{{\mathbf{S}}^{j}}y \right )=G\left ( x,y \right )\). Further, using Lemma 2, we get

$$ \int \limits _{\Omega }{G\left ( {{{\mathbf{{S}}}^{j}}x,y} \right )\hat{f}(y) \;dy} = \int \limits _{\Omega }{G\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}y} \right )\hat{f}({{\mathbf{{S}}}^{j}}y)\;dy} = \int \limits _{\Omega }{G \left ( {x,y} \right )\hat{f}({{\mathbf{{S}}}^{j}}y)\;dy}. $$

It follows that

$$\begin{aligned}& \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{G \left ( {{{\mathbf{{S}}}^{j}}x,y} \right )} {R_{\mathbf{{a}}}}[f]\left ( y \right )dy = \int \limits _{\Omega }{G\left ( {x,y} \right )} \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} {R_{\mathbf{{a}}}}[f]\left ( {{{\mathbf{{S}}}^{j}}y} \right )dy\\& = \int \limits _{\Omega }{G\left ( {x,y} \right )} {R_{\mathbf{{b}}}}\left [ {{R_{\mathbf{{a}}}}[f](y)} \right ]dy. \end{aligned}$$

If we now take (18) into account, we obtain

$$ \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{G \left ( {{{\mathbf{{S}}}^{j}}x,y} \right )} {R_{\mathbf{{a}}}}[f]\left ( y \right )dy = \int \limits _{\Omega }{G\left ( {x,y} \right )f(y)\;dy} . $$

Similarly, it is not difficult to obtain that \(P\left ( {{\mathbf{S}}^{j}}x,{{\mathbf{S}}^{j}}y \right )=P\left ( x,y \right )\). Using Lemma 2, we obtain the following equalities:

$$\begin{aligned}& \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\partial \Omega } {P\left ( {{{\mathbf{{S}}}^{j}}x,y} \right )g(y)\;d{s_{y}}} = \int \limits _{\partial \Omega } {\sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} P\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}y} \right )g\left ( {{{\mathbf{{S}}}^{j}}y} \right )\;d{s_{y}}}\\& = \int \limits _{\partial \Omega } {P\left ( {x,y} \right )\sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} g\left ( {{{\mathbf{{S}}}^{j}}y} \right )\;d{s_{y}}} = \int \limits _{\partial \Omega } {P\left ( {x,y} \right ){R_{\mathbf{{b}}}}[g](y)\;d{s_{y}}} . \end{aligned}$$

Thus, the solution \(u(x)\) from (23) is transformed to the form (20). The theorem is proved. □
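The invariance \(G({{\mathbf{S}}^{j}}x,{{\mathbf{S}}^{j}}y)=G(x,y)\) used in the proof can also be checked numerically. A minimal sketch for \(n=3\), with Green's function (19) taken up to the constant factor \(1/\omega _{n}\) and a cyclic coordinate shift standing in for the orthogonal map \({{\mathbf{S}}^{j}}\):

```python
import numpy as np

n = 3
E = lambda x, y: np.linalg.norm(x - y)**(2 - n) / (n - 2)   # elementary solution, n >= 3

def G(x, y):
    # Green's function (19) in the unit ball, up to the factor 1/omega_n
    return E(x, y) - E(x * np.linalg.norm(y), y / np.linalg.norm(y))

S = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # an orthogonal map

x = np.array([0.1, 0.2, 0.3])
y = np.array([-0.2, 0.1, 0.25])
assert np.isclose(G(S @ x, S @ y), G(x, y))       # G(S^j x, S^j y) = G(x, y)
```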

Example 6

Find a solution to the Dirichlet problem

$$ \Delta u(x) = {x_{2}},\,x \in \Omega;\quad u({x_{2}},{x_{3}},{x_{1}}){|_{ \partial \Omega }} = x_{1}^{2},\, x \in \partial \Omega. $$
(25)

In this case \(n=3\), \({S_2}x = {({x_2},{x_3},{x_1})^{t}}\) \(({l_2} = 3)\), \({S_1}x = -x\) \(({l_1} = 2)\). It is clear that \(S_{2}S_{1}=S_{1}S_{2}\). Then \(\mathbf{a}=\left ( 0,0,1,0,0,0 \right )\), \({{R}_{\mathbf{a}}}[u](x)=u\left ( S_{2}^{1}S_{1}^{0}x \right )=u({{x}_{2}},{{x}_{3}},{{x}_{1}})\). The matrix \({{A}_{(2)}}(\mathbf{a})\), according to Example 2, has the form

$$ {A_{(2)}}({\mathbf{{a}}}) = \left ( { \textstyle\begin{array}{*{20}{c}} 0&0&1&0&0&0 \\ 0&0&0&1&0&0 \\ 0&0&0&0&1&0 \\ 0&0&0&0&0&1 \\ 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \end{array}\displaystyle } \right ). $$

From Example 4 we find

$$\begin{aligned}& {\mu _{0}} = {a_{0}} + {a_{1}} + {a_{2}} + {a_{3}} + {a_{4}} + {a_{5}} = 1,\\& {\mu _{1}} = {a_{0}} - {a_{1}} + {a_{2}} - {a_{3}} + {a_{4}} - {a_{5}} = 1,\\& {\mu _{2}} = {a_{0}} + {a_{1}} + \lambda ({a_{2}} + {a_{3}}) + \bar{\lambda}({a_{4}} + {a_{5}}) = \lambda,\\& {\mu _{3}} = {a_{0}} - {a_{1}} + \lambda ({a_{2}} - {a_{3}}) + \bar{\lambda}({a_{4}} - {a_{5}})=\lambda,\\& {\mu _{4}} = {a_{0}} + {a_{1}} + \bar{\lambda}({a_{2}} + {a_{3}}) + \lambda ({a_{4}} + {a_{5}}) = \bar{\lambda},\\& {\mu _{5}} = {a_{0}} - {a_{1}} + \bar{\lambda}({a_{2}} - {a_{3}}) + \lambda ({a_{4}} - {a_{5}}) = \bar{\lambda}. \end{aligned}$$

where \(\lambda =\exp (\text{i}\tfrac{2\pi }{3})=-\tfrac{1}{2}+\text{i} \tfrac{\sqrt{3}}{2}\). It follows that \({{\boldsymbol{\mu }}_{-}}=(1,1,\bar{\lambda },\bar{\lambda },\lambda , \lambda )\), and using equality (13), taking into account Example 5, we find

$$ {\mathbf{{b}}} = \frac{1}{\ell }{{\boldsymbol{{\mu }}}_{-} }\overline {\mathrm{M}} = \frac{1}{6}(1,1,\bar{\lambda},\bar{\lambda},\lambda ,\lambda )\left ( { \textstyle\begin{array}{*{20}{c}} 1&1&1&1&1&1 \\ 1&{ - 1}&1&{ - 1}&1&{ - 1} \\ 1&1&{\bar{\lambda}}&{\bar{\lambda}}&\lambda &\lambda \\ 1&{ - 1}&{\bar{\lambda}}&{ - \bar{\lambda}}&\lambda &{ - \lambda } \\ 1&1&\lambda &\lambda &{\bar{\lambda}}&{\bar{\lambda}} \\ 1&{ - 1}&\lambda &{ - \lambda }&{\bar{\lambda}}&{ - \bar{\lambda}} \end{array}\displaystyle } \right ) = (0,0,0,0,1,0). $$

Therefore \({R_{\mathbf{{b}}}}[g](x) = g(S_{2}^{2}S_{1}^{0}x) = g({x_{3}},{x_{1}},{x_{2}})\). The conditions of Theorem 7 are satisfied since \({{\mu }_{k}}=\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\), which means that the solution to the Dirichlet problem

$$ \Delta u(x) = f(x),\,x \in \Omega;\quad {\mathrm{{ }}}u({x_{2}},{x_{3}},{x_{1}}){|_{ \partial \Omega }} = g({x_{1}},{x_{2}},{x_{3}}),\,x \in \partial \Omega, $$

for \(f(x)={{x}_{2}}\), \(g(x)=x_{1}^{2}\) has the form

$$ u(x) = \int \limits _{\Omega }{G\left ( {x,y} \right )f(y)\;dy} + \int \limits _{\partial \Omega } {P\left ( {x,y} \right )g({y_{3}},{y_{1}},{y_{2}}) \;d{s_{y}}}. $$

It is easier to calculate the resulting solution for given \(f(x)\) and \(g(x)\) based on Theorem 6. For \(f(x)={{x}_{2}}\), \(g(x)=x_{1}^{2}\), the Dirichlet problem (15), (16) takes the form

$$ \Delta \hat{u} = \hat{f}(x)={x_{3}},\,x \in \Omega ;\;\;\hat{u}{|_{\partial \Omega }} = x_{1}^{2},\,x \in \partial \Omega. $$

Let us use [21, Theorem 10]: the solution to the Dirichlet problem

$$ \Delta u(x) = Q(x),\,x \in \Omega ,\quad {u_{|\partial \Omega }} = P{(x)_{| \partial \Omega }}, $$

in the unit ball can be written as

$$ u(x) = P(x) + \frac{{|x{|^{2}} - 1}}{2}\int _{0}^{1} {\sum \limits _{s = 0}^{\infty }{ \frac{{{{(1 - \alpha |x{|^{2}})}^{s}}{{(1 - \alpha )}^{s}}}}{{(2s + 2)!!(2s)!!}}} } {\Delta ^{s}}(Q - \Delta P)(\alpha x)\,{\alpha ^{n/2 - 1}}\,d \alpha . $$

Only the term with \(s=0\) remains under the sum sign, since \(\Delta (Q - \Delta P) = \Delta ({x_{3}} - 2) = 0\). Calculations give

$$\hat{u}(x)=x_{1}^{2}+\frac{|x|^{2}-1}{2}\int_{0}^{1}{\frac{1}{2}}(\alpha {{x}_{3}}-2)\,{{\alpha }^{1/2}}\,d\alpha =x_{1}^{2}+\frac{|x|^{2}-1}{4}\left(\frac{2{{x}_{3}}}{5}-\frac{4}{3} \right). $$

Therefore, by Theorem 6 we find that \({{R}_{\mathbf{b}}}[\hat{u}](x)=\hat{u}(S_{2}^{2}S_{1}^{0}x)=\hat{u}({{x}_{3}},{{x}_{1}},{{x}_{2}})\), i.e., \(x_{1}\to x_{3}\), \(x_{2}\to x_{1}\), \(x_{3}\to x_{2}\), and hence

$$ u(x)=S_{2}^{2}S_{1}^{0}\left(x_{1}^{2}+(|x|^{2}-1)\left(\frac{x_{3}}{10}-\frac{1}{3}\right)\right) = x_{3}^{2} + (|x|^{2}-1)\left(\frac{x_{2}}{10}-\frac{1}{3}\right). $$

It is easy to verify that the resulting function is indeed a solution to the Dirichlet problem (25).
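This verification can be carried out symbolically; a minimal sympy sketch checking that \(u=x_{3}^{2}+(|x|^{2}-1)(x_{2}/10-1/3)\) satisfies \(\Delta u={x_{2}}\) and the nonlocal boundary condition of (25):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
r2 = x1**2 + x2**2 + x3**2
u = x3**2 + (r2 - 1) * (x2 / 10 - sp.Rational(1, 3))

# Delta u = x2 in the unit ball
lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))
assert sp.simplify(lap - x2) == 0

# nonlocal boundary condition: u(x2, x3, x1) - x1^2 vanishes on |x| = 1,
# since it is a multiple of |x|^2 - 1
bnd = u.subs({x1: x2, x2: x3, x3: x1}, simultaneous=True) - x1**2
assert sp.simplify(bnd / (r2 - 1) - x3 / 10 + sp.Rational(1, 3)) == 0
```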

5 Neumann problem

Let \(f\in {{C}^{1}}(\bar{\Omega })\), \(\psi \in C(\partial \Omega )\). In [22], Green’s function

$$ {{\mathcal{N}}_{2}}(x,\xi ) = E(x,\xi ) - {E_{0}}(x,\xi ) $$
(26)

to the Neumann problem for the Poisson equation in Ω is constructed, where the function \(E_{0}(x,\xi )\) is harmonic with respect to \(x,\xi \in \Omega \) and is written in the form

$$ {E_{0}}(x,\xi ) = \int _{0}^{1} {\left ( {\hat{E}\left ( { \frac{x}{{|x|}},t|x|\xi } \right ) + 1} \right )} \frac{{dt}}{t}, $$
(27)

and \(\hat{E}(x,\xi )={{\Lambda }_{x}}E(x,\xi )\). Here, we use the notation \(\Lambda u= \sum _{i=1}^{n}x_{i} u_{x_{i}}\), and the index x indicates that the operator Λ is applied in the variables x. It is easy to see that for the unit ball Ω the equality \(\Lambda u=\frac{\partial u}{\partial \nu}\) is valid on ∂Ω. As \(\hat{E}(x,\xi )=-(|x{{|}^{2}}-x\cdot \xi )/|x-\xi {{|}^{n}}\), the function

$$ \hat{E}\left ( {\frac{x}{{|x|}},t|x|\xi } \right ) = - \frac{{1 - (x\cdot \xi )t}}{{{{(1 - 2t(x\cdot \xi ) + |x{|^{2}}|\xi {|^{2}}{t^{2}})}^{n/2}}}} $$
(28)

is symmetric in x and ξ, and therefore the function \(E_{0}(x,\xi )\), and hence the function \({\mathcal {N}_{2}}(x,\xi )\), is also symmetric. The following statement holds true.

Theorem 8

([22, Theorem 3]). Let \(f\in {{C}^{1}}\left ( \overline{\Omega } \right )\), \(\psi \in C\left ( \partial \Omega \right )\). Then the solution to the Neumann problem for the Poisson equation

$$ \Delta u(x) = f(x),\,x \in \Omega ;\quad {\left . { \frac{{\partial u(x)}}{{\partial \nu }}} \right |_{\partial \Omega }} = \psi (x),\,\;x \in \partial \Omega , $$
(29)

when the condition \(\int _{\partial \Omega }{\psi }(\xi )\,d{{s}_{\xi }}=\int _{\Omega }{f}( \xi )\,d\xi \) is satisfied, can be written, up to a constant, in the form

$$ u(x) = \frac{1}{{{\omega _{n}}}}\int _{\partial \Omega } {{{\mathcal{N}}_{2}}} (x,\xi )\psi (\xi )\,d{s_{\xi }} - \frac{1}{{{\omega _{n}}}}\int _{ \Omega }{{{\mathcal{N}}_{2}}} (x,\xi )f(\xi )\,d\xi . $$
(30)

From the proof of this theorem it follows that the solution \(u(x)\) has smoothness \(u\in {{C}^{2}}\left ( \Omega \right )\), \(\Lambda u\in C\left ( {\bar{\Omega }} \right )\).

Theorem 9

Let \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) for \(k=0,\ldots ,\ell -1\), and for the functions \(f\in {{C}^{1}}\left ( \overline{\Omega } \right )\), \(g\in C\left ( \partial \Omega \right )\) let the condition

$$ \int _{\partial \Omega } {g(\xi )} \,d{s_{\xi }} = \int _{\Omega }{{R_{ \mathbf{{a}}}}[f]} (\xi )\,d\xi $$

be satisfied. Then the solution to the Neumann problem (1), (3) exists, is unique up to a constant, and can be written in the form

$$ u(x) = \sum \limits _{({i_{2}},{i_{1}}) = 0}^{({l_{2}} - 1,{l_{1}} - 1)} {{b_{({i_{2}},{i_{1}})}}} \tilde{u}(S_{2}^{{i_{2}}}S_{1}^{{i_{1}}}x) = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\tilde{u}\left ( {{{\mathbf{{S}}}^{j}}x} \right )} = {R_{\mathbf{{b}}}}[\tilde{u}](x), $$
(31)

where \(\tilde{u}(x)\) is a solution to the classical Neumann problem (29) for \(f(x)=\hat{f}(x)={{R}_{\mathbf{a}}}[f]\) and \(\psi (x)=g(x)\).

Proof

Similar to the proof of Theorem 6, consider a function \(\tilde{u}\in {{C}^{2}}\left ( \Omega \right )\) such that \(\Lambda \tilde{u}\in C\left ( {\bar{\Omega }} \right )\), which is a solution to the Neumann problem

$$ \Delta \tilde{u}(x) = \hat{f}(x),\,x \in \Omega ;\quad {\left . { \frac{{\partial \tilde{u}}}{{\partial \nu }}} \right |_{\partial \Omega }} = g(x),\,x \in \partial \Omega . $$
(32)

According to Theorem 8, \(\tilde{u}(x)\) exists because the condition \(\int _{\partial \Omega }{g}(\xi )\,d{{s}_{\xi }}=\int _{\Omega }{{ \hat{f}}}(\xi )\,d\xi \) is satisfied. Since, by Theorem 5, the vector b is defined when \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\) for \(k=0,\ldots ,\ell -1\), equality (31) defines a function \(u(x)\) such that \(u={{R}_{\mathbf{b}}}[\tilde{u}]\in {{C}^{2}}\left ( \Omega \right )\), \(\Lambda u={{R}_{\mathbf{b}}}[\Lambda \tilde{u}]\in C\left ( { \bar{\Omega }} \right )\). By virtue of Lemma 1 and equality (18), for \(x\in \Omega \) we get

$$\begin{aligned}& \Delta u(x) = \Delta {R_{\mathbf{{b}}}}[\tilde{u}](x) = \Delta \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\tilde{u}\left ( {{{\mathbf{{S}}}^{j}}x} \right )} = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}(\Delta \tilde{u})\left ( {{{ \mathbf{{S}}}^{j}}x} \right )}\\& = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\hat{f}\left ( {{{\mathbf{{S}}}^{j}}x} \right )} = {R_{\mathbf{{b}}}}[\hat{f}](x) = {R_{\mathbf{{b}}}}[{R_{\mathbf{{a}}}}[f]](x) = f(x). \end{aligned}$$

Let us check the fulfillment of the boundary conditions. Since \(\Lambda \tilde{u}\in C\left ( {\bar{\Omega }} \right )\) and on ∂Ω the equality \(\Lambda u=\frac{\partial u}{\partial \nu}\) is valid, using Lemma 1 we find

$$\begin{aligned}& {\left . {\frac{{\partial u}}{{\partial \nu }}} \right |_{\partial \Omega }} = {\left . {\frac{{\partial {R_{\mathbf{{b}}}}[\tilde{u}]}}{{\partial \nu }}} \right |_{ \partial \Omega }} = \Lambda {R_{\mathbf{{b}}}}{\left . {\left [ {\tilde{u}} \right ]} \right |_{\partial \Omega }} = {R_{\mathbf{{b}}}}{\left . {\left [ {\Lambda \tilde{u}} \right ]} \right |_{\partial \Omega }}\\& = {R_{\mathbf{{b}}}}\left [ {{{\left . { \frac{{\partial \tilde{u}}}{{\partial \nu }}} \right |}_{\partial \Omega }}} \right ] = {R_{\mathbf{{b}}}}[g],\;x \in \partial \Omega . \end{aligned}$$

Therefore, from (31), (32) and Remark 4 it follows that

$$ {R_{\mathbf{{a}}}}{\left . {\left [ {\frac{{\partial u}}{{\partial \nu }}} \right ]} \right |_{\partial \Omega }} = {R_{\mathbf{{a}}}}\left [ {{{ \left . {\frac{{\partial u}}{{\partial \nu }}} \right |}_{\partial \Omega }}} \right ] = {R_{\mathbf{{a}}}}\left [ {{R_{\mathbf{{b}}}}[g]} \right ] = g,\;x \in \partial \Omega . $$

This means that the function \(u(x)\) from (31) is a solution to the Neumann problem (1), (3). The theorem is proved. □

The converse of Theorem 9 also holds.

Lemma 3

If a solution to the Neumann problem (1), (3) exists and \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) for \(k=0,\ldots ,\ell -1\), and the condition \(\int _{\partial \Omega }{g}(\xi )\,d{{s}_{\xi }}=\int _{\Omega }{{ \hat{f}}}(\xi )\,d\xi \) is satisfied, then \(\tilde{u}(x)={{R}_{\mathbf{a}}}[u](x)+C\) is a solution to problem (32).

Proof

Let, under the given conditions, a solution to the Neumann problem (1), (3) exist. By virtue of Lemma 1, it follows from the equality \(\Delta u(x)=f(x)\) that for the functions \(\tilde{u}(x)={{R}_{\mathbf{a}}}[u](x)+C\) and \(\hat{f}(x)={{R}_{\mathbf{a}}}[f](x)\) the equality \(\Delta \tilde{u}=\hat{f}(x)\), \(x\in \Omega \) is valid. From condition (3) we obtain that the boundary conditions on Ω are satisfied:

$$ g = {R_{\mathbf{{a}}}}{\left . {\left [ { \frac{{\partial u}}{{\partial \nu }}} \right ]} \right |_{\partial \Omega }} = {R_{\mathbf{{a}}}}\left [ {{{\left . {\Lambda u} \right |}_{ \partial \Omega }}} \right ] = {\left . {\Lambda \left ( {{R_{\mathbf{{a}}}}[u] + C} \right )} \right |_{\partial \Omega }} = {\left . {\Lambda \tilde{u}} \right |_{\partial \Omega }} = {\left . { \frac{{\partial \tilde{u}}}{{\partial \nu }}} \right |_{\partial \Omega }}. $$

According to Theorem 8, the solution \(\tilde{u}(x)\) to the resulting Neumann problem for the Poisson equation (32), exists by virtue of the condition \(\int _{\partial \Omega }{g}(\xi )\,d{{s}_{\xi }}= \int _{\Omega }{{\hat{f}}}(\xi )\,d\xi \). The lemma is proved. □

6 Construction of a solution to the Neumann problem

Using the Neumann function \({{\mathcal{N}}_{2}}(x,\xi )\), we construct a solution to the Neumann problem (1), (3). To do this, we need the following statement.

Lemma 4

For Green’s function \({{\mathcal{N}}_{2}}\left ( x,\xi \right )\) and any \(j=0,\ldots ,\ell -1\), the equality

$$ {{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi } \right ) = {{ \mathcal{N}}_{2}}\left ( {x,\xi } \right ) $$

is valid.

Proof

It is not difficult to see that from (28), in accordance with Remark 1, it follows

$$ \hat{E}\left ( {\frac{{{{\mathbf{{S}}}^{j}}x}}{{|{{\mathbf{{S}}}^{j}}x|}},t|{{\mathbf{{S}}}^{j}}x|{{ \mathbf{{S}}}^{j}}\xi } \right ) = - \frac{{1 - ({{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi )t}}{{{{(1 - 2t({{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi ) + |{{\mathbf{{S}}}^{j}}x{|^{2}}|{{\mathbf{{S}}}^{j}}\xi {|^{2}}{t^{2}})}^{n/2}}}} = \hat{E}\left ( {\frac{x}{{|x|}},t|x|\xi } \right ), $$

where \(j=0,\ldots ,\ell -1\). Therefore, from (27) we find

$$ {E_{0}}({{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi ) = \int _{0}^{1} ( \hat{E}( \frac{{{{\mathbf{{S}}}^{j}}x}}{{|{{\mathbf{{S}}}^{j}}x|}},t|{{\mathbf{{S}}}^{j}}x|{{\mathbf{{S}}}^{j}} \xi ) + 1)\frac{{dt}}{t} = \int _{0}^{1} ( \hat{E}(\frac{x}{{|x|}},t|x|\xi ) + 1)\frac{{dt}}{t} = {E_{0}}(x,\xi ).$$

Using (26) and (24) we can write

$$ {{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi } \right ) = E \left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi } \right ) - {E_{0}}\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}}\xi } \right ) = E\left ( {x,\xi } \right ) - {E_{0}}\left ( {x,\xi } \right ) = {{\mathcal{N}}_{2}}\left ( {x, \xi } \right ). $$

The lemma is proved. □
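The pointwise identity for \(\hat E\) at the heart of this proof can be checked numerically; a minimal sketch for \(n=3\), with a cyclic coordinate shift standing in for the orthogonal map \({{\mathbf{S}}^{j}}\), which also confirms the closed form (28):

```python
import numpy as np

n = 3
# \hat E(u, v) = -(|u|^2 - u . v) / |u - v|^n
Ehat = lambda u, v: -(np.dot(u, u) - np.dot(u, v)) / np.linalg.norm(u - v)**n

S = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # an orthogonal map

x = np.array([0.3, -0.2, 0.4])
xi = np.array([0.1, 0.25, -0.3])
for t in (0.3, 0.6, 0.9):
    rhs = Ehat(x / np.linalg.norm(x), t * np.linalg.norm(x) * xi)
    # invariance under the orthogonal map S, as in the proof above
    lhs = Ehat(S @ x / np.linalg.norm(S @ x), t * np.linalg.norm(S @ x) * (S @ xi))
    assert np.isclose(lhs, rhs)
    # the closed form (28)
    s = np.dot(x, xi)
    closed = -(1 - s * t) / (1 - 2 * t * s
                             + np.dot(x, x) * np.dot(xi, xi) * t**2)**(n / 2)
    assert np.isclose(rhs, closed)
```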

Theorem 10

Let \(\mathbf{a}\cdot {{\mathbf{e}}_{k}}\ne 0\) for \(k=0,\ldots ,\ell -1\), and for the functions \(f\in {{C}^{1}}\left ( \overline{\Omega } \right )\), \(g\in C\left ( \partial \Omega \right )\) let the condition \(\int _{\partial \Omega }{g(\xi )}\,d{{s}_{\xi }}=\int _{\Omega }{{{R}_{ \mathbf{a}}}[f]}(\xi )\,d\xi \) be satisfied. Then the solution to the Neumann problem (1), (3) can be written in the form

$$ u(x) = \frac{1}{{{\omega _{n}}}}\int \limits _{\partial \Omega } {{{ \mathcal{N}}_{2}}\left ( {x,\xi } \right ){R_{\mathbf{{b}}}}[g](\xi )\;d{s_{\xi }}} - \frac{1}{{{\omega _{n}}}}\int \limits _{\Omega }{{{\mathcal{N}}_{2}} \left ( {x,\xi } \right )} f\left ( \xi \right )d\xi + C. $$
(33)

Proof

According to Theorem 9, under the conditions \(\mathbf{a}\cdot \mathbf{e}_{k}^{t}\ne 0\), \(\int _{\partial \Omega }{g(\xi )}\,d{{s}_{\xi }}=\int _{\Omega }{{{R}_{ \mathbf{a}}}[f]}(\xi ) d\xi \) the solution \(u(x)\) to problem (1), (3) exists and can be written in the form

$$ u(x) = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}\tilde{u}\left ( {{{ \mathbf{{S}}}^{j}}x} \right )} = {R_{\mathbf{{b}}}}[\tilde{u}](x), $$
(34)

where \(\tilde{u}(x)\) is a solution of the classical Neumann problem (32) and the \({{b}_{j}}\) are found from (12). It is clear that if \(f(x)\in {{C}^{1}}(\bar{\Omega })\), then \(\hat{f}(x)={{R}_{\mathbf{a}}}[f]\in {{C}^{1}}(\bar{\Omega })\), which means that for \(g\in C(\partial \Omega )\) the solution \(\tilde{u}(x)\) to the Neumann problem (32) exists, is unique up to a constant, satisfies \(\tilde{u}\in {{C}^{2}}(\Omega )\), \(\Lambda \tilde{u}\in C(\overline{\Omega })\), and can also be represented in the form (30)

$$ \tilde{u}(x) = \frac{1}{{{\omega _{n}}}}\int _{\partial \Omega } {{{ \mathcal{N}}_{2}}} (x,\xi )g(\xi )\,d{s_{\xi }} - \frac{1}{{{\omega _{n}}}}\int _{\Omega }{{{\mathcal{N}}_{2}}} (x,\xi ){R_{ \mathbf{{a}}}}[f](\xi )\,d\xi + C. $$

Using formula (34) we can write

$$ u(x) = \frac{1}{{{\omega _{n}}}}\sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\partial \Omega } {{{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x, \xi } \right )g(\xi )\;d{s_{\xi }}} - \frac{1}{{{\omega _{n}}}}\sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{{{\mathcal{N}}_{2}} \left ( {{{\mathbf{{S}}}^{j}}x,\xi } \right )} {R_{\mathbf{{a}}}}[f]\left ( \xi \right )d\xi . $$
(35)

It is easy to see that, according to Theorem 8,

$$ f \in {C^{1}}(\bar{\Omega}),g \in C(\partial \Omega ) \Rightarrow \Lambda \tilde{u} \in C(\bar{\Omega}) \Rightarrow \Lambda u \in C( \bar{\Omega}). $$

According to Lemma 4, \({{\mathcal{N}}_{2}}\left ( {{\mathbf{S}}^{j}}x,{{\mathbf{S}}^{j}} \xi \right )={{\mathcal{N}}_{2}}\left ( x,\xi \right )\), and therefore, using Lemma 2, we get

$$\begin{aligned}& \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{{{ \mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,\xi } \right )} {R_{\mathbf{{a}}}}[f] \left ( \xi \right )d\xi = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{{{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,{{\mathbf{{S}}}^{j}} \xi } \right )} {R_{\mathbf{{a}}}}[f]\left ( {{{\mathbf{{S}}}^{j}}\xi } \right )d \xi \\& = \int \limits _{\Omega }{{{\mathcal{N}}_{2}}\left ( {x,\xi } \right )} \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} {R_{\mathbf{{a}}}}[f]\left ( {{{ \mathbf{{S}}}^{j}}\xi } \right )d\xi = \int \limits _{\Omega }{{{\mathcal{N}}_{2}} \left ( {x,\xi } \right )} {R_{\mathbf{{b}}}}\left [ {{R_{\mathbf{{a}}}}[f](y)} \right ]dy. \end{aligned}$$

If we now take (18) into account, we obtain

$$ \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\Omega }{{{ \mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,\xi } \right )} {R_{\mathbf{{a}}}}[f] \left ( \xi \right )d\xi = \int \limits _{\Omega }{{{\mathcal{N}}_{2}} \left ( {x,\xi } \right )} f\left ( \xi \right )d\xi . $$

Similarly, it is not difficult to obtain the equality

$$\begin{aligned}& \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\partial \Omega } {{{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,\xi } \right )g(\xi ) \;d{s_{\xi }}} = \sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} \int \limits _{\partial \Omega } {{{\mathcal{N}}_{2}}\left ( {{{\mathbf{{S}}}^{j}}x,{{ \mathbf{{S}}}^{j}}\xi } \right )g({{\mathbf{{S}}}^{j}}\xi )\;d{s_{\xi }}}\\& = \int \limits _{\partial \Omega } {{{\mathcal{N}}_{2}}\left ( {x,\xi } \right )\sum \limits _{j = 0}^{\ell - 1} {{b_{j}}} g({{\mathbf{{S}}}^{j}} \xi )\;d{s_{\xi }}} = \int \limits _{\partial \Omega } {{{\mathcal{N}}_{2}} \left ( {x,\xi } \right ){R_{\mathbf{{b}}}}[g](\xi )\;d{s_{\xi }}}. \end{aligned}$$

Thus, the function \(u(x)\) from (35) is transformed to the form (33). The theorem is proved. □

Example 7

Find a solution to the Neumann problem

$$ \Delta u(x) = {x_{2}},\,x \in \Omega;\;\; \frac{{\partial u}}{{\partial \nu }}({x_{2}},{x_{3}},{x_{1}}){|_{ \partial \Omega }} = {x_{1}},\,x \in \partial \Omega. $$
(36)

In this case, as in Example 6, \(n=3\), \({{S}_{2}}x={{({{x}_{2}},{{x}_{3}},{{x}_{1}})}^{t}}\) (\({{l}_{2}}=3\)), \({{S}_{1}}x=-x\) (\({{l}_{1}}=2\)) and \(\mathbf{a}=(0,0,1,0,0,0)\). Problem (32) takes the form

$$ \Delta \tilde{u}(x) = {x_{3}},\,x \in \Omega;\;\; {\left . { \frac{{\partial \tilde{u}}}{{\partial \nu }}} \right |_{\partial \Omega }} = {x_{1}},\,x \in \partial \Omega . $$
(37)

For this Neumann problem, according to [23], the auxiliary Dirichlet problem is

$$ \Delta v(x) = (\Lambda + 2){x_{3}} \equiv 3{x_{3}},\,x \in \Omega ;\quad v{|_{ \partial \Omega }} = {x_{1}},\,x \in \partial \Omega, $$

the solution of which, in accordance with Example 6, is

$$ v(x) = {x_{1}} + \left ( {|x{|^{2}} - 1} \right ) \frac{{3{x_{3}}}}{{10}}, $$

hence

$$ \tilde{u}(x) = \int_{0}^{1} {v(tx)\frac{{dt}}{t}} = \int_{0}^{1} {\left ( {x_{1}} + \left (|x|^{2}t^{2}-1\right ) \frac{{3{x_{3}}}}{{10}} \right )dt} = {x_{1}} + \left (|x|^{2}-3\right )\frac{{{x_{3}}}}{{10}}. $$

From Example 6 we find \({{R}_{\mathbf{b}}}[\tilde{u}](x)=\tilde{u}(S_{2}^{2}S_{1}^{0}x)=\tilde{u}({{x}_{3}},{{x}_{1}},{{x}_{2}})\), which means that by Theorem 9 we get

$$ u(x)={{R}_{\mathbf{b}}}[\tilde{u}](x)={{R}_{\mathbf{b}}}\left [ {{x}_{1}}+ \left ( |x{{|}^{2}}-3 \right )\frac{{{x}_{3}}}{10} \right ]={{x}_{3}}+ \left ( |x{{|}^{2}}-3 \right )\frac{{{x}_{2}}}{10}. $$

As noted in [23, Lemma 3], the solvability condition for the Neumann problem (37), and therefore the original problem (36), is equivalent to the fulfillment of the condition \(v(0)=0\), which is obviously true.
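As in Example 6, the verification can be done symbolically. A minimal sympy sketch checking that \(u={x_{3}}+(|x|^{2}-3){x_{2}}/10\) satisfies \(\Delta u={x_{2}}\) and, via \(\Lambda u=\partial u/\partial \nu \) on \(|x|=1\), the nonlocal Neumann condition of (36):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
r2 = x1**2 + x2**2 + x3**2
u = x3 + (r2 - 3) * x2 / 10

# Delta u = x2 in the unit ball
lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3))
assert sp.simplify(lap - x2) == 0

# On |x| = 1 the normal derivative equals Lambda u = x . grad u;
# the condition of (36) is (Lambda u)(x2, x3, x1)|_{|x|=1} = x1.
Lu = sum(v * sp.diff(u, v) for v in (x1, x2, x3))
bnd = Lu.subs({x1: x2, x2: x3, x3: x1}, simultaneous=True) - x1
# bnd is a multiple of |x|^2 - 1, hence vanishes on the boundary
assert sp.simplify(bnd / (r2 - 1) - sp.Rational(3, 10) * x3) == 0
```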

Data Availability

No datasets were generated or analysed during the current study.

References

  1. Bitsadze, A., Samarskii, A.: Some elementary generalizations of linear elliptic boundary value problems. Dokl. Akad. Nauk SSSR 185(4), 739–740 (1969)


  2. Samarskii, A.: Some problems of the theory of differential equations. Differ. Uravn. 16(11), 1925–1935 (1980)

    MathSciNet  Google Scholar 

  3. Skubachevskii, A.L.: Nonclassical boundary-value problems I. J. Math. Sci. 155, 199–334 (2008). https://doi.org/10.1007/s10958-008-9218-9

    Article  MathSciNet  Google Scholar 

  4. Skubachevskii, A.L.: Nonclassical boundary-value problems ii. J. Math. Sci. 166, 377–561 (2010). https://doi.org/10.1007/s10958-010-9873-5

    Article  MathSciNet  Google Scholar 

  5. Adil, N., Berdyshev, A.S., Eshmatov, B.E.: Solvability and Volterra property of nonlocal problems for mixed fractional-order diffusion-wave equation. Bound. Value Probl. 2023, 74 (2023). https://doi.org/10.1186/s13661-023-01735-0

    Article  MathSciNet  Google Scholar 

  6. Ashyralyyev, C.: On the stable difference scheme for source identification nonlocal elliptic problem. Math. Methods Appl. Sci. 46, 2488–2499 (2023). https://doi.org/10.1002/mma.8656

    Article  MathSciNet  Google Scholar 

  7. Ashyralyyev, C.: Numerical solution to Bitsadze-Samarskii type elliptic overdetermined multipoint NBVP. Bound. Value Probl. 2017, 74 (2017). https://doi.org/10.1186/s13661-017-0804-y

    Article  MathSciNet  Google Scholar 

  8. Assanova, A.T., Uteshova, R.: Solution of a nonlocal problem for hyperbolic equations with piecewise constant argument of generalized type. Chaos Solitons Fractals 165, 112816 (2022). https://doi.org/10.1016/j.chaos.2022.112816

    Article  MathSciNet  Google Scholar 

  9. Berikelashvili, G.: To a nonlocal generalization of the Dirichlet problem. J. Inequal. Appl. 2006, 93858 (2006). https://doi.org/10.1155/JIA/2006/93858.

    Article  MathSciNet  Google Scholar 

  10. Zhou, L., Yu, H.: Error estimate of a high accuracy difference scheme for Poisson equation with two integral boundary conditions. Adv. Differ. Equ. 2018, 225 (2018). https://doi.org/10.1186/s13662-018-1682-z

    Article  MathSciNet  Google Scholar 

  11. Li, C.: Uniqueness of a nonlinear integro-differential equation with nonlocal boundary condition and variable coefficients. Bound. Value Probl. 2023, 26 (2023). https://doi.org/10.1186/s13661-023-01713-6

    Article  MathSciNet  Google Scholar 

  12. Przeworska-Rolewicz, D.: Some boundary value problems with transformed argument. Comment. Math. Helv. 17, 451–457 (1974)

    MathSciNet  Google Scholar 

  13. Karachik, B.V., Turmetov: Solvability of one nonlocal Dirichlet problem for the Poisson equation. Novi Sad J. Math. 50, 67–88 (2020). https://doi.org/10.30755/NSJOM.08942

    Article  Google Scholar 

  14. Turmetov, B., Karachik, V.: Solvability of nonlocal Dirichlet problem for generalized Helmholtz equation in a unit ball. Complex Var. Elliptic Equ. 68, 1204–1218 (2023). https://doi.org/10.1080/17476933.2022.2040021

    Article  MathSciNet  Google Scholar 

  15. Karachik, V.V., Sarsenbi, B.K.A.M., Turmetov: On the solvability of the main boundary value problems for a nonlocal Poisson equation. Turk. J. Math. 43, 1604–1625 (2019). https://doi.org/10.3906/mat-1901-71

    Article  MathSciNet  Google Scholar 

  16. Turmetov, B., Karachik, V., Muratbekova, M.: On a boundary value problem for the biharmonic equation with multiple involution. Mathematics 9, 2020 (2021). https://doi.org/10.3390/math9172020

    Article  Google Scholar 

  17. Turmetov, B., Karachik, V.: Construction of eigenfunctions to one nonlocal second-order differential operator with double involution. Axioms 10, 543 (2022). https://doi.org/10.3390/axioms11100543

    Article  Google Scholar 

  18. Turmetov, B., Karachik, V.: On eigenfunctions and eigenvalues of a nonlocal Laplace operator with multiple involution. Symmetry 13, 1781 (2021). https://doi.org/10.3390/sym13101781

    Article  Google Scholar 

  19. Evans, L.C.: Partial Differential Equations, 2nd edn. Am. Math. Soc., Providence (2010)

    Google Scholar 

  20. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order. Springer, Berlin (2001)

    Book  Google Scholar 

  21. Karachik, V.V.: Construction of polynomial solutions to some boundary value problems for Poisson’s equation. Comput. Math. Math. Phys. 51, 1567–1587 (2011). https://doi.org/10.1134/S0965542511090120

    Article  MathSciNet  Google Scholar 

  22. Karachik, V.V., Turmetov, B.K.: On the Green’s function for the third boundary value problem. Sib. Adv. Math. 29, 32–43 (2019). https://doi.org/10.3103/S1055134419010036

    Article  MathSciNet  Google Scholar 

  23. Karachik, V.V.: Sufficient conditions for solvability of one class of Neumann-type problems for the polyharmonic equation. Comput. Math. Math. Phys. 61, 1276–1288 (2021). https://doi.org/10.1134/S0965542521040059

    Article  MathSciNet  Google Scholar 

Funding

This research has been funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (grant No. AP19677926).

Author information

Contributions

The authors contributed equally to this paper. All authors reviewed the manuscript.

Corresponding author

Correspondence to Batirkhan Turmetov.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Muratbekova, M., Karachik, V. & Turmetov, B. Bitsadze-Samarsky type problems with double involution. Bound Value Probl 2024, 86 (2024). https://doi.org/10.1186/s13661-024-01892-w