diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdbiq" "b/data_all_eng_slimpj/shuffled/split2/finalzzdbiq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdbiq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThis paper is devoted to a spectral analysis of the biharmonic operator subject to Neumann boundary conditions on a domain which undergoes a singular perturbation.\nThe focus is on planar dumbbell-shaped domains $\\Omega_{\\epsilon}$, with $\\epsilon >0$, described in Figure~\\ref{fig: dumbbell}. Namely,\ngiven two bounded smooth domains $\\Omega_L, \\Omega_R$ in $\\numberset{R}^2$ with $\\Omega_L\\cap \\Omega_R=\\emptyset $ such that $\\partial \\Omega_L \\supset \\{(0,y)\\in \\numberset{R}^2 : -1<y<1\\}$ and $\\partial \\Omega_R \\supset \\{(1,y)\\in \\numberset{R}^2 : -1<y<1\\}$, we set $\\Omega = \\Omega_L \\cup \\Omega_R$ and define the dumbbell domain $\\Omega_\\epsilon = \\Omega \\cup R_\\epsilon \\cup L_\\epsilon$ for $\\epsilon >0$ small enough. Here\n$ R_\\epsilon \\cup L_\\epsilon$ is a thin channel connecting $\\Omega_L$ and $\\Omega_R$ defined by\n\\begin{equation}\n\\label{def: R_eps}\nR_\\epsilon = \\{(x,y)\\in\\numberset{R}^2 : x\\in(0,1), 0< y< \\epsilon g(x) \\},\n\\end{equation}\n\\[ L_\\epsilon = \\bigl( \\{0\\} \\times (0, \\epsilon g(0)) \\bigr) \\cup \\bigl( \\{1\\}\\times (0, \\epsilon g(1)) \\bigr), \\]\nwhere $g \\in C^2[0,1]$ is a positive real-valued function. 
Note that $\\Omega_\\epsilon$ collapses to the limit set $\\Omega_0 = \\Omega \\cup ([0,1] \\times \\{0\\})$ as $\\epsilon \\to 0$.\n\nWe consider the eigenvalue problem\n\\begin{equation} \\label{PDE: main problem_eigenvalues}\n\\begin{cases}\n\\Delta^2u - \\tau \\Delta u + u = \\lambda \\, u, &\\textup{in $\\Omega_\\epsilon$,}\\\\\n(1-\\sigma) \\frac{\\partial^2 u}{\\partial n^2} + \\sigma \\Delta u = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\\\\\n\\tau \\frac{\\partial u}{\\partial n} - (1-\\sigma)\\, \\Div_{\\partial \\Omega_\\epsilon}(D^2u \\cdot n)_{\\partial \\Omega_\\epsilon} - \\frac{\\partial(\\Delta u)}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\n\\end{cases}\n\\end{equation}\nwhere $\\tau \\geq 0$, $\\sigma \\in (-1,1)$ are fixed parameters, and we analyse the behaviour of the eigenvalues and of the corresponding eigenfunctions as $\\epsilon \\to 0$. Here $\\Div_{\\partial \\Omega_\\epsilon}$ is the tangential divergence operator, and $(\\cdot)_{\\partial \\Omega_\\epsilon}$ is the projection on the tangent line to $\\partial \\Omega_\\epsilon$.\n The corresponding Poisson problem reads\n\\begin{equation} \\label{PDE: main problem}\n\\begin{cases}\n\\Delta^2u - \\tau \\Delta u +u= f, &\\textup{in $\\Omega_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 u}{\\partial n^2} + \\sigma \\Delta u = 0, &\\textup{on $\\partial \\Omega_\\epsilon$},\\\\\n\\tau \\frac{\\partial u}{\\partial n} - (1-\\sigma) \\Div_{\\partial \\Omega_\\epsilon}(D^2u \\cdot n)_{\\partial \\Omega_\\epsilon} - \\frac{\\partial(\\Delta u)}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$},\n\\end{cases}\n\\end{equation}\nwith datum\n$f \\in L^2(\\Omega_\\epsilon)$.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\textwidth]{omega_eps_crop}\n\\caption{The dumbbell domain $\\Omega_\\epsilon$.}\n\\label{fig: dumbbell}\n\\end{figure}\n\n Since $\\partial\\Omega_{\\epsilon}$ has corner singularities at the junctions $(0,0)$, $(0,\\epsilon 
g(0))$, $(1,0)$, $(1,\\epsilon g(1))$ and $H^{4}$\nregularity does not hold around those points, we shall always understand problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: main problem}\n(as well as analogous problems) in a weak (variational) sense, in which case only $H^2$ regularity is required.\n\nNamely, the variational formulation of problem \\eqref{PDE: main problem} is the following: find $u\\in H^2(\\Omega_\\epsilon)$ such that\n\\begin{equation} \\label{PDE: main problem weak}\n\\int_{\\Omega_\\epsilon} (1-\\sigma) D^2u : D^2\\varphi + \\sigma \\Delta u \\Delta \\varphi + \\tau \\nabla u \\cdot \\nabla \\varphi +u\\varphi \\,dx = \\int_{\\Omega_\\epsilon} f \\varphi\\,dx\\, ,\n\\end{equation}\nfor all $\\varphi \\in H^2(\\Omega_\\epsilon)$. The quadratic form associated with the left-hand side of (\\ref{PDE: main problem weak}), call it $B_{\\Omega_{\\epsilon}}(u, \\varphi )$, is coercive for all $\\tau \\geq 0$ and $\\sigma \\in (-1,1)$, see e.g. \\cite{ChAppl}, \\cite{Ch}.\nIn particular, by standard spectral theory this quadratic form allows one to define a non-negative self-adjoint operator $T=(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}$ in $L^2(\\Omega_{\\epsilon})$ which plays the role of the classical operator $\\Delta^2 - \\tau \\Delta +I$ subject to the boundary conditions above.\nMore precisely, $T$ is uniquely defined by the relation\n$$ B_{\\Omega_{\\epsilon}}(u, \\varphi )=\\langle T^{1\/2} u , T^{1\/2} \\varphi \\rangle_{L^2(\\Omega_{\\epsilon})} \\, , $$\nfor all\n$ u,\\varphi \\in H^2(\\Omega_{\\epsilon})$. In particular the domain of the square root $T^{1\/2}$ of $T$ is $H^2(\\Omega_{\\epsilon})$ and\n a function $u$ belongs to the domain of $T$ if and only if\n$u\\in H^2(\\Omega_{\\epsilon})$\nand there exists $f\\in L^2(\\Omega_{\\epsilon})$ such that\n$B_{\\Omega_{\\epsilon}}(u, \\varphi )= \\langle f , \\varphi \\rangle_{L^2(\\Omega_{\\epsilon})} $ for all $\\varphi \\in H^2(\\Omega_{\\epsilon})$, in which case\n$Tu=f$. 
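Before proceeding, we record the short computation behind the coercivity claim above (a sketch in dimension two; see \\cite{ChAppl}, \\cite{Ch} for complete proofs):

```latex
% Pointwise, (\Delta u)^2 = (u_{xx}+u_{yy})^2 \le 2|D^2u|^2 in dimension two,
% so splitting the cases \sigma \ge 0 and \sigma < 0 yields
(1-\sigma)\,|D^2u|^2 + \sigma\,(\Delta u)^2 \;\ge\; (1-|\sigma|)\,|D^2u|^2 .
% Integrating over \Omega_\epsilon then gives
B_{\Omega_\epsilon}(u,u) \;\ge\; (1-|\sigma|)\,\|D^2u\|^2_{L^2(\Omega_\epsilon)}
  + \tau\,\|\nabla u\|^2_{L^2(\Omega_\epsilon)} + \|u\|^2_{L^2(\Omega_\epsilon)} ,
% and the right-hand side is equivalent to \|u\|^2_{H^2(\Omega_\epsilon)}
% by interpolation of the gradient term between \|u\| and \|D^2u\|.
```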
We refer to \\cite[Chp.~4]{Daviesbook} for a general introduction to the variational approach to the spectral analysis of partial differential operators on non-smooth domains.\n\n The operator $T$ is densely defined and its eigenvalues and eigenfunctions are exactly those of problem (\\ref{PDE: main problem_eigenvalues}).\nMoreover, since the embedding $H^2(\\Omega_{\\epsilon} )\\subset L^2(\\Omega_{\\epsilon} )$ is compact, $(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}$ has compact resolvent, hence the spectrum is discrete and consists of a divergent increasing sequence of positive eigenvalues\n$\\lambda_{n}(\\Omega_{\\epsilon}),\\ n\\in \\numberset{N}$, with finite multiplicity (here each eigenvalue is repeated as many times as its multiplicity).\n\nProblem (\\ref{PDE: main problem_eigenvalues}) arises in linear elasticity in connection with the Kirchhoff-Love model for the study of vibrations and deformations of free plates, in which case $\\sigma $ represents the\n Poisson ratio of the material and $\\tau$ the lateral tension. 
In this sense, the dumbbell domain $\\Omega_{\\epsilon}$ could represent a plate and $R_{\\epsilon }$\n a part of it which degenerates to the segment $[0,1] \\times \\{0\\}$.\n\nWe note that problem (\\ref{PDE: main problem_eigenvalues}) can be considered as a natural fourth order version of the corresponding eigenvalue problem for the\nNeumann Laplacian $-\\Delta_N$, namely\n\\begin{equation} \\label{PDE: second problem_eigenvalues}\n\\begin{cases}\n-\\Delta u + u = \\lambda \\, u, &\\textup{in $\\Omega_\\epsilon$,}\\\\\n\\frac{\\partial u}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\\\\\n\\end{cases}\n\\end{equation}\nthe variational formulation of which reads\n\n\\begin{equation} \\label{PDE: second problem weak}\n\\int_{\\Omega_\\epsilon} Du \\cdot D \\varphi + u \\varphi \\,dx =\\lambda \\int_{\\Omega_\\epsilon} u \\varphi\\,dx ,\n\\end{equation}\nwhere the test functions $\\varphi$ and the unknown $u$ are considered in $H^1(\\Omega_{\\epsilon})$.\n\nAlthough the terminology used in the literature to refer to boundary value problems for fourth order operators is sometimes a bit misleading, we emphasise\nthat the formulation of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: main problem} is rather classical, see e.g. \\cite[Example~2.15]{necas}\nwhere problem \\eqref{PDE: main problem} with $\\tau =0$ is referred to as the Neumann problem for the biharmonic operator. 
Moreover, we point out that a number of recent papers devoted to the analysis of \\eqref{PDE: main problem_eigenvalues} have confirmed that problem (\\ref{PDE: main problem_eigenvalues})\ncan be considered as the natural Neumann problem for the biharmonic operator, see \\cite{arlacras}, \\cite{ArrLamb}, \\cite{BuosoProv}, \\cite{BuosoProvCH}, \\cite{bulacompl}, \\cite{ChAppl}, \\cite{Ch}, \\cite{Prov}.\nWe also refer to \\cite{GazzGS} for an extensive discussion on boundary value problems for higher order elliptic operators.\n\nIt is known that the eigenelements of the Neumann Laplacian on a typical dumbbell domain as above have a singular behaviour, see \\cite{ArrPhD}, \\cite{Arr1}, \\cite{Arr2}, \\cite{ACJdE}, \\cite{ACL}, \\cite{AHH}, and the references therein. For example, it is known that not all the eigenvalues of $-\\Delta_N$ on $\\Omega_{\\epsilon}$ converge to the eigenvalues of $-\\Delta_N$ in $\\Omega$; indeed, some of the eigenvalues of the dumbbell domain are asymptotically close to the eigenvalues of a boundary value problem defined in the channel $R_\\epsilon$. This allows the appearance in the limit of extra eigenvalues associated with an ordinary differential equation in the segment $(0,1)$, which are generally different from the eigenvalues of $-\\Delta_N$ in $\\Omega$.\nSuch singular behaviour reflects a general characteristic of boundary value problems with Neumann boundary conditions, the stability of which requires rather strong assumptions on the admissible domain perturbations, see e.g., \\cite{ACJdE}, \\cite{ArrLamb}, \\cite{lalaneu}. 
We refer to \\cite[p.~420]{C-H} for a classical counterexample.\n\n\nThe aim of the present paper is to clarify how Neumann boundary conditions affect the spectral behaviour of the operator $\\Delta^2-\\tau \\Delta $ on dumbbell domains, by extending the validity of some results known for the second order operator $-\\Delta_N$ to the fourth-order operator $(\\Delta^2 - \\tau \\Delta)_{N(\\sigma )}$.\n\nFirst of all, we prove that the eigenvalues of problem (\\ref{PDE: main problem_eigenvalues})\ncan be asymptotically decomposed into two families of eigenvalues as\n\\begin{equation}\\label{dec}\n(\\lambda_n(\\Omega_\\epsilon))_{n\\geq 1} \\approx (\\omega_k)_{k\\geq 1} \\cup (\\theta^\\epsilon_l)_{l\\geq 1}, \\ \\ {\\rm as }\\ \\epsilon \\to 0,\n\\end{equation}\n where $(\\omega_k)_{k\\geq 1}$ are the eigenvalues of problem\n\\begin{equation} \\label{PDE: Omega}\n\\begin{cases}\n\\Delta^2 w - \\tau \\Delta w + w = \\omega_k\\, w, &\\text{in $\\Omega$},\\\\\n(1-\\sigma) \\frac{\\partial^2 w}{\\partial n^2} + \\sigma \\Delta w = 0, &\\textup{on $\\partial \\Omega$},\\\\\n\\tau \\frac{\\partial w}{\\partial n} - (1-\\sigma) \\Div_{\\partial \\Omega}(D^2w \\cdot n)_{\\partial \\Omega} - \\frac{\\partial(\\Delta w)}{\\partial n} = 0, &\\textup{on $\\partial \\Omega$,}\n\\end{cases}\n\\end{equation}\nand $(\\theta^\\epsilon_l)_{l\\geq 1}$ are the eigenvalues of problem\n\\begin{equation} \\label{PDE: R_eps}\n\\begin{cases}\n\\Delta^2 v - \\tau \\Delta v + v = \\theta^\\epsilon_l\\, v, &\\text{in $R_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 v}{\\partial n^2} + \\sigma \\Delta v = 0, &\\textup{on $\\Gamma_\\epsilon$},\\\\\n\\tau \\frac{\\partial v}{\\partial n} - (1-\\sigma) \\Div_{\\Gamma_\\epsilon}(D^2v \\cdot n)_{\\Gamma_\\epsilon} - \\frac{\\partial(\\Delta v)}{\\partial n} = 0, &\\textup{on $\\Gamma_\\epsilon$,}\\\\\nv = 0 = \\frac{\\partial v}{\\partial n}, &\\text{on $L_\\epsilon$.}\n\\end{cases}\n\\end{equation}\nThe decomposition \\eqref{dec} is proved under 
the assumption that a certain condition on $R_\\epsilon$, called the H-Condition, is satisfied. We provide in particular a simple condition on the profile function $g$ which guarantees the validity of the H-Condition.\n\nThus, in order to analyse the behaviour of $\\lambda_n(\\Omega_\\epsilon)$ as $\\epsilon \\to 0$, it suffices to study $\\theta^\\epsilon_l$ as $\\epsilon \\to 0$. To do so, we need to pass to the limit in the variational formulation of problem \\eqref{PDE: R_eps}. Since the domain $R_\\epsilon$ collapses to a segment as $\\epsilon \\to 0$, we use thin domain techniques in order to find the appropriate limiting problem. As in the case of the Laplace operator, the limiting problem depends on the shape of the channel $R_\\epsilon$ via the profile function $g(x)$. More precisely, it can be written as follows:\n\\begin{equation}\\label{ODE: limit problem}\n\\begin{cases}\n\\frac{1 - \\sigma^2}{g} (gh'')'' - \\frac{\\tau}{g}(gh')' + h = \\theta h, &\\text{in $(0,1)$,}\\\\\nh(0)=h(1)=0,&\\\\\nh'(0)=h'(1)=0.&\n\\end{cases}\n\\end{equation}\nThis allows us to prove convergence results for the eigenvalues and eigenfunctions of problem \\eqref{PDE: main problem_eigenvalues}. 
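For instance, in the model case of a straight channel, $g \\equiv 1$, with $\\tau = 0$ (a worked special case, recorded here only for illustration), problem \\eqref{ODE: limit problem} reduces to a clamped beam equation whose eigenvalues can be computed explicitly:

```latex
% g = 1, \tau = 0: the limit problem becomes
(1-\sigma^2)\, h'''' + h = \theta\, h \ \text{in } (0,1), \qquad
h(0)=h(1)=h'(0)=h'(1)=0 .
% Setting \mu = (\theta-1)/(1-\sigma^2), this is the clamped
% (Euler-Bernoulli) beam problem h'''' = \mu h, whose eigenvalues are
% \mu_j = k_j^4 with k_j > 0 the positive solutions of \cos k \cosh k = 1
% (k_1 \approx 4.7300). Hence
\theta_j = 1 + (1-\sigma^2)\, k_j^4 , \qquad j \geq 1 ,
% which makes the distortion by the factor 1-\sigma^2 explicit.
```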
The precise statement can be found in Theorem~\\ref{lastthm}.\nRoughly speaking, Theorem~\\ref{lastthm} establishes the following alternative:\n\\begin{itemize} \\item[(A)] either $\\lambda_n(\\Omega_\\epsilon) \\to \\omega_k$ for some $k\\geq 1$, in which case the corresponding eigenfunctions converge in $\\Omega$ to the eigenfunctions associated with $\\omega_k$;\n\\item[(B)] or $\\lambda_n(\\Omega_\\epsilon) \\to \\theta_l$ as $\\epsilon \\to 0$ for some $l\\in \\numberset{N}$, in which case the corresponding eigenfunctions behave in $R_\\epsilon$ like the eigenfunctions\nassociated with $ \\theta_l$.\n\\end{itemize}\nMoreover, all eigenvalues $\\omega_k$ and $\\theta_l$ are reached in the limit by the eigenvalues $\\lambda_n(\\Omega_{\\epsilon})$.\n\nWe find it remarkable that for $\\sigma\\ne 0$ the limiting equation in (\\ref{ODE: limit problem}) is distorted by the coefficient $1-\\sigma^2\\ne 1$. This phenomenon\nshows that the dumbbell problem for the fourth order operator in \\eqref{PDE: main problem_eigenvalues} with $\\sigma \\ne 0$ is significantly different from the second order problem \\eqref{PDE: second problem_eigenvalues} considered in the literature.\n\nWe also note that the Dirichlet problem for the operator $\\Delta^2u - \\tau \\Delta u + u$, namely\n\\begin{equation} \\label{PDE: dir}\n\\begin{cases}\n\\Delta^2u - \\tau \\Delta u + u = \\lambda \\, u, &\\textup{in $\\Omega_\\epsilon$,}\\\\\n u = 0, &\\textup{on $\\partial \\Omega_\\epsilon$,}\\\\\n \\frac{\\partial u}{\\partial n} = 0, &\\textup{on $\\partial \\Omega_\\epsilon$}\n\\end{cases}\n\\end{equation}\nis stable in the sense that its eigenelements converge to those of the operator $\\Delta^2- \\tau \\Delta + I$ in $\\Omega$ as $\\epsilon\\to 0$. In other words, as for the Laplace operator, in the case of Dirichlet boundary conditions no eigenvalues from the channel $R_{\\epsilon}$ appear in the limit as $\\epsilon \\to 0$. 
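This stability in the Dirichlet case can be quantified by an elementary Poincar\\'e-type estimate on the channel (a sketch, stated for functions clamped on the whole of $\\partial R_\\epsilon$):

```latex
% For u \in H^2_0(R_\epsilon), each section u(x,\cdot) lies in
% H^2_0(0, \epsilon g(x)), so the one-dimensional Poincare inequality gives
\int_0^{\epsilon g(x)} |u|^2 \, dy
  \;\le\; C\,(\epsilon g(x))^4 \int_0^{\epsilon g(x)} |\partial_y^2 u|^2 \, dy .
% Integrating in x and bounding g by its maximum,
\|u\|^2_{L^2(R_\epsilon)} \;\le\; C'\,\epsilon^4\, \|D^2 u\|^2_{L^2(R_\epsilon)} ,
% so the Rayleigh quotient of any such u is at least of order \epsilon^{-4}:
% the channel contributes no bounded eigenvalues in the Dirichlet case.
```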
In fact, it is well known that Dirichlet eigenvalues on thin domains diverge to $+\\infty$ as $\\epsilon \\to 0$, because of the Poincar\\'e inequality.\n\nIn order to prove our results, we study the convergence of the resolvent operators $(\\Delta^2 - \\tau \\Delta +I)_{N(\\sigma )}^{-1}$ and this is done by using the notion of $\\mathcal{E}$-convergence, which is a useful tool in the analysis of boundary value problems defined on variable domains, see e.g., \\cite{ACL}, \\cite{arlacras}, \\cite{ArrLamb}.\n\n\nWe point out that, although many papers in the literature have been devoted to the spectral analysis of second order operators with either Neumann or Dirichlet boundary conditions on dumbbell domains, see \\cite{Arr1}, \\cite{Arr2}, \\cite{Jimbo1}, \\cite{Jimbo2} and references therein, very little seems to be known about these problems for higher order operators. We refer to \\cite{taylor} for a recent analysis of\nthe dumbbell problem in the case of elliptic systems subject to Dirichlet boundary conditions.\n\nFinally, we observe that it would be interesting to provide precise rates of convergence for the eigenvalues $\\lambda_n(\\Omega_{\\epsilon})$ and the corresponding eigenfunctions as $\\epsilon \\to 0$ in the spirit of the asymptotic analysis performed e.g., in \\cite{Arr2}, \\cite{Gady1}, \\cite{Gady2}, \\cite{Gady3}, \\cite{Gady4}, \\cite{Jimbo1}, \\cite{Jimbo2} for second order operators. However, in the case of higher order operators, this seems a challenging problem and is not addressed here.\n\n\nThe paper is organized as follows. In Section \\ref{sec: decomposition} we prove the asymptotic decomposition \\eqref{dec} of the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$. This is achieved in several steps. In Theorem \\ref{thm: upper bound} we provide a suitable upper bound for the eigenvalue $\\lambda_n(\\Omega_\\epsilon)$. 
Then, in Definition~\\ref{def: H condition} we introduce an assumption on the shape of the channel $R_\\epsilon$, called H-Condition, which is needed to prove a lower bound for $\\lambda_n(\\Omega_\\epsilon)$ as $\\epsilon \\to 0$, see Theorem~\\ref{thm: lower bound}. Finally, we collect the results of the section in Theorem \\ref{thm: eigenvalues decomposition} to deduce a convergence result for the eigenvalues and the eigenfunctions of problem \\eqref{PDE: main problem_eigenvalues} under the assumption that the H-Condition holds. In Section \\ref{sec: proof H condition regular dumbbells} we show that a wide class of regular dumbbell domains satisfy the H-Condition. In Section \\ref{sec: thin plates} we study the convergence of the solutions of problem \\eqref{PDE: R_eps} as $\\epsilon \\to 0$, we identify the limiting problem in $(0,1)$, and we prove the spectral convergence of problem \\eqref{PDE: R_eps} to problem \\eqref{ODE: limit problem}.\nFinally, in Section \\ref{conclusionsec} we combine the results of the previous sections and prove Theorem~\\ref{lastthm}.\n\n\n\n\n\n\n\n\n\n\\section{Decomposition of the eigenvalues} \\label{sec: decomposition}\nThe main goal of this section is to prove the decomposition of the eigenvalues of problem \\eqref{PDE: main problem_eigenvalues} into the two families of eigenvalues coming from \\eqref{PDE: Omega} and \\eqref{PDE: R_eps}. First of all we note that, since $\\Omega_{\\epsilon} $, $\\Omega $ and $R_{\\epsilon }$ are sufficiently regular, by standard spectral theory for differential operators it follows that the operators associated with the quadratic forms appearing in the weak formulation of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega}, \\eqref{PDE: R_eps} have compact resolvents. Thus, the spectra of such problems are discrete and consist of positive eigenvalues of finite multiplicity. 
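In what follows we shall repeatedly use the min-max characterisation of the eigenvalues of the above problems; for problem \\eqref{PDE: main problem_eigenvalues} it reads as follows (standard, see e.g. \\cite{Daviesbook}):

```latex
\lambda_n(\Omega_\epsilon)
  = \min_{\substack{V \subset H^2(\Omega_\epsilon) \\ \dim V = n}}
    \max_{\substack{\psi \in V \\ \psi \not\equiv 0}}
    \frac{B_{\Omega_\epsilon}(\psi,\psi)}{\|\psi\|^2_{L^2(\Omega_\epsilon)}} ,
% with analogous formulas for \omega_n and \theta_n^\epsilon, the energy
% space being H^2(\Omega) and the space of H^2 functions on R_\epsilon
% vanishing to first order on L_\epsilon, respectively.
```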
The eigenpairs of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega}, \\eqref{PDE: R_eps} will be denoted by $(\\lambda_n(\\Omega_\\epsilon), \\varphi_n^\\epsilon)_{n \\geq 1}$, $(\\omega_n, \\varphi_n^\\Omega)_{n \\geq 1}$, $(\\theta_n^\\epsilon, \\gamma_n^\\epsilon)_{n\\geq 1}$ respectively, where the three families of eigenfunctions $\\varphi_n^\\epsilon$, $\\varphi_n^\\Omega$, $\\gamma_n^\\epsilon$ are complete orthonormal bases of the spaces $L^2(\\Omega_{\\epsilon})$, $L^2(\\Omega )$, $L^2(R_{\\epsilon})$ respectively.\nMoreover we set $(\\lambda_n^\\epsilon)_{n\\geq 1} = (\\omega_k)_{k \\geq 1} \\cup (\\theta_l^\\epsilon)_{l\\geq 1}$, where it is understood that the eigenvalues are arranged in increasing order and repeated according to their multiplicity. In particular if $\\omega_k = \\theta_l^\\epsilon$ for some $k,l \\in \\numberset{N}$, then such an eigenvalue is repeated in the sequence $(\\lambda_n^\\epsilon)_{n \\geq 1}$ as many times as the sum of the multiplicities of $\\omega_k$ and $\\theta_l^\\epsilon$. Let us note explicitly that the order in the sequence $(\\lambda_n^\\epsilon)_{n\\geq 1}$ depends on $\\epsilon$. For each $\\lambda_n^\\epsilon$ we define the function $\\phi^\\epsilon_n \\in H^2(\\Omega) \\oplus H^2(R_\\epsilon)$ in the following way:\n\\begin{equation}\n\\label{def: phi_n 1}\n\\phi^\\epsilon_n = \\begin{cases}\n \\varphi_k^\\Omega, &\\text{in $\\Omega$},\\\\\n 0, &\\text{in $R_\\epsilon$},\n \\end{cases}\n\\end{equation}\nif $\\lambda_n^\\epsilon = \\omega_k$, for some $k \\in \\numberset{N}$; otherwise\n\\begin{equation}\n\\label{def: phi_n 2}\n\\phi^\\epsilon_n=\\begin{cases}\n 0, &\\text{in $\\Omega$},\\\\\n \\gamma_l^\\epsilon, &\\text{in $R_\\epsilon$},\n \\end{cases}\n\\end{equation}\nif $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$, for some $l \\in \\numberset{N}$. 
We observe that in the case $\\lambda_n^\\epsilon= \\omega_k = \\theta_l^\\epsilon$ for some $k,l \\in \\numberset{N}$, with $\\omega_k$ of multiplicity $m_1$ and $\\theta_l^\\epsilon$ of multiplicity $m_2$, we agree to order the eigenvalues (and the corresponding functions $\\phi^\\epsilon_n$) by listing first the $m_1$ eigenvalues $\\omega_k$, then the remaining $m_2$ eigenvalues $\\theta_l^\\epsilon$.\n\nNote that $(\\phi^\\epsilon_i, \\phi^\\epsilon_j)_{L^2(\\Omega_\\epsilon)} = \\delta_{ij}$ where $\\delta_{ij}$ is the Kronecker symbol, that is $\\delta_{ij}=0$ for $i\\ne j$ and $\\delta_{ij}=1$ for $i=j$. Note also that although the functions $\\phi_n^\\epsilon$ defined by \\eqref{def: phi_n 2} are in $H^2(\\Omega_\\epsilon)$ (due to the Dirichlet boundary conditions imposed on $L_\\epsilon$), the functions $\\phi_n^\\epsilon$ defined by \\eqref{def: phi_n 1} do not lie in $H^2(\\Omega_\\epsilon)$.\nTo bypass this problem we define a sequence of functions in $H^2(\\Omega_\\epsilon)$ by setting\n\\[\n\\xi_n^\\epsilon =\n\\begin{cases}\nE\\varphi_k^\\Omega, &\\text{if $\\lambda_n^\\epsilon = \\omega_k$,}\\\\\n\\phi^\\epsilon_n, &\\text{if $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$},\n\\end{cases}\n\\]\nwhere $E$ is a linear continuous extension operator mapping $H^2(\\Omega)$ to $H^2(\\numberset{R}^2)$. Then it is easy to verify that for fixed $i,j$, we have $(\\xi^\\epsilon_i, \\xi^\\epsilon_j)_{L^2(\\Omega_\\epsilon)}=\\delta_{ij}+o(1)$ as $\\epsilon \\to 0$. Hence, for fixed $n$ and for $\\epsilon$ small enough, $\\xi_1^\\epsilon,\\ldots,\\xi_n^\\epsilon$ are linearly independent.\n\nNow we prove an upper bound for the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$.\n\\begin{theorem}[Upper bound] \\label{thm: upper bound}\nLet $n\\geq 1$ be fixed. 
The eigenvalues $\\lambda_n^\\epsilon$ are uniformly bounded in $\\epsilon$ and\n\\begin{equation} \\label{eq: upper bound}\n\\lambda_n(\\Omega_\\epsilon) \\leq \\lambda_n^\\epsilon + o(1), \\quad \\text{as $\\epsilon \\to 0$.}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\n\nThe fact that $\\lambda_n^\\epsilon$ remains bounded as $\\epsilon \\to 0$ is an easy consequence of the inequality \\begin{equation} \\label{eq: boundedness lambda_n^eps}\n\\lambda_n^\\epsilon \\leq \\omega_n < \\infty,\n\\end{equation}\nwhich holds by definition of $\\lambda_n^\\epsilon$.\nIn the sequel we write $\\perp$ to denote the orthogonality in $L^2$, and $[f_1, \\dots, f_m]$ for the linear span of the functions $f_1, \\dots, f_m$.\n\nBy the variational characterization of the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$ we have\n\\begin{multline} \\label{eq: lambda_n(Omega_eps)var}\n\\lambda_n(\\Omega_\\epsilon) = \\min \\left\\{ \\frac{\\displaystyle \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\psi|^2 + \\sigma |\\Delta \\psi|^2 + \\tau |\\nabla \\psi|^2 + |\\psi|^2 }{\\displaystyle\\int_{\\Omega_\\epsilon} |\\psi|^2} \\right.\\\\\n\\left. : \\text{$\\psi \\in H^2(\\Omega_\\epsilon)$, $\\psi \\not\\equiv 0$ and $\\psi \\perp \\varphi_1^\\epsilon, \\dots, \\varphi_{n-1}^\\epsilon$} \\right\\}.\n\\end{multline}\nSince the functions $\\xi^\\epsilon_1,\\dots,\\xi^\\epsilon_n$ are linearly independent, by a dimension argument there exists $\\xi^\\epsilon \\in [\\xi^\\epsilon_1,\\dots,\\xi^\\epsilon_n]$ such that $\\norma{\\xi^\\epsilon}_{L^2(\\Omega_\\epsilon)}=1$, and $\\xi^\\epsilon \\perp \\varphi_1^\\epsilon, \\dots, \\varphi_{n-1}^\\epsilon$.\n\nWe can write\n$ \\xi^\\epsilon = \\sum_{i=1}^n \\alpha_i \\xi_i^\\epsilon$,\nfor some $\\alpha_1,\\dots, \\alpha_n \\in \\numberset{R}$ depending on $\\epsilon$ such that $\\sum_{i=1}^n \\alpha_i^2 = 1 + o(1)$ as $\\epsilon \\to 0$. 
By using $\\xi^\\epsilon$ as a test function in \\eqref{eq: lambda_n(Omega_eps)var} we get\n\n\\begin{equation} \\label{proof: computationsRQ}\n\\begin{split}\n&\\lambda_n(\\Omega_\\epsilon) \\leq \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi^\\epsilon|^2 + \\sigma |\\Delta \\xi^\\epsilon|^2 + \\tau |\\nabla \\xi^\\epsilon|^2 +|\\xi^\\epsilon|^2\\\\\n&= \\sum_{i=1}^n \\alpha_i^2 \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2 \\biggr) \\\\\n&+ \\sum_{i\\neq j}\\alpha_i\\alpha_j \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) (D^2\\xi^\\epsilon_i : D^2\\xi_j^\\epsilon) + \\sigma \\Delta \\xi_i^\\epsilon \\Delta\\xi_j^\\epsilon + \\tau \\nabla\\xi_i^\\epsilon \\cdot \\nabla \\xi_j^\\epsilon + \\xi_i^\\epsilon \\xi_j^\\epsilon \\biggr).\n\\end{split}\n\\end{equation}\n\nBy definition of $\\xi_i^\\epsilon$ and the absolute continuity of the Lebesgue integral, we have\n{\\small\\[\\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2=\n\\begin{cases}\n\\omega_k+o(1), & \\hbox{if }\\textup{$\\exists\\, k$ s.t. $\\lambda_i^\\epsilon=\\omega_k$},\\\\\n\\theta_l^\\epsilon, &\\hbox{if }\\textup{$\\exists\\, l$ s.t. $\\lambda_i^\\epsilon=\\theta_l^\\epsilon$},\n\\end{cases}\n\\]}\nwhich implies that\n$\\int_{\\Omega_\\epsilon} (1-\\sigma) |D^2\\xi_i^\\epsilon|^2 + \\sigma |\\Delta\\xi_i^\\epsilon|^2 + \\tau |\\nabla \\xi_i^\\epsilon|^2 + |\\xi_i^\\epsilon|^2\\leq \\lambda_n^\\epsilon+o(1).$\n\nNote that\n\\[\n\\begin{split}\n&\\sum_{i\\neq j}\\alpha_i\\alpha_j \\biggl( \\int_{\\Omega_\\epsilon} (1-\\sigma) (D^2\\xi^\\epsilon_i : D^2\\xi_j^\\epsilon) + \\sigma \\Delta \\xi_i^\\epsilon \\Delta\\xi_j^\\epsilon + \\tau \\nabla\\xi_i^\\epsilon \\cdot \\nabla \\xi_j^\\epsilon + \\xi_i^\\epsilon \\xi_j^\\epsilon \\biggr)=o(1).\n\\end{split}\n\\]\n\nHence,\n$\\lambda_n(\\Omega_\\epsilon)\\leq \\sum_{i=1}^n \\alpha_i^2 ( \\lambda_n^\\epsilon+o(1))+o(1)\\leq \\lambda_n^\\epsilon+o(1)$,\nwhich concludes the proof of \\eqref{eq: upper bound}.\n\\end{proof}\n\n\\begin{remark}\nNote that the shape of the channel $R_\\epsilon$ does not play any role in establishing the upper bound. The only fact needed is that the measure of $R_\\epsilon$ tends to $0$ as $\\epsilon \\to 0$.\n\\end{remark}\n\nIn the sequel we shall provide a lower bound for the eigenvalues $\\lambda_n(\\Omega_\\epsilon)$. Before doing so, let us introduce some notation.\n\n\\begin{definition}\\label{definitionNorm}\nLet $\\sigma \\in (-1,1)$, $\\tau \\geq 0$. 
We denote by $H^2_{L_\\epsilon}(R_\\epsilon)$ the space obtained as the closure in $H^2 (R_\\epsilon)$ of the set of $C^{\\infty}(\\overline{R_{\\epsilon}})$ functions which vanish in a neighbourhood of $L_\\epsilon$.\nFurthermore, for any bounded Lipschitz open set $U$ we define\n\\[\n[f]_{H^2_{\\sigma,\\tau}(U)} = \\bigl|(1-\\sigma) \\norma{D^2 f}_{L^2(U)}^2 + \\sigma \\norma{\\Delta f}_{L^2(U)}^2 + \\tau \\norma{\\nabla f}_{L^2(U)}^2 + \\norma{f}_{L^2(U)}^2 \\bigr|^{1\/2}\\, ,\n\\]\nfor all $f \\in H^2(U)$.\n\\end{definition}\n\nNote that the functions $u$ in $ H^2_{L_\\epsilon}(R_\\epsilon) $ satisfy the conditions $u=0$ and $\\nabla u =0$ on $L_{\\epsilon}$ in the sense of traces.\n\n\n\\begin{proposition} \\label{prop: convergence eigenprojections}\nLet $n \\in \\numberset{N}$ be such that the following two conditions are satisfied:\n\\begin{enumerate}[label=(\\roman*)]\n\\item For all $i=1,\\dots,n$,\n\\begin{equation} \\label{prop: lambda_i}\n\\abs{\\lambda_i^\\epsilon - \\lambda_i(\\Omega_\\epsilon)}\\to 0 \\quad \\quad \\text{as $\\epsilon \\to 0$,}\n\\end{equation}\n\\item There exists $\\delta>0$ such that\n\\begin{equation} \\label{prop: lambda_n+1}\n\\lambda_n^\\epsilon \\leq \\lambda_{n+1}(\\Omega_\\epsilon) - \\delta\n\\end{equation}\nfor any $\\epsilon >0$ small enough.\n\\end{enumerate}\nLet $P_n$ be the projector from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^\\epsilon,\\dots,\\phi_n^\\epsilon]$ defined by\n\\begin{equation}\nP_n g = \\sum_{i=1}^n (g, \\phi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\phi_i^\\epsilon\\, ,\n\\end{equation}\nfor all $g\\in L^2(\\Omega_\\epsilon)$, where $\\phi_i^\\epsilon$ is defined in \\eqref{def: phi_n 1}, \\eqref{def: phi_n 2}. 
Then\n\\begin{equation} \\label{prop: thesis}\n\\norma{\\varphi_i^\\epsilon - P_n \\varphi_i^\\epsilon}_{H^2(\\Omega) \\oplus H^2(R_\\epsilon)} \\to 0,\n\\end{equation}\nas $\\epsilon \\to 0$, for all $i=1,\\dots,n$.\n\\end{proposition}\n\\begin{proof}\nBy \\eqref{eq: upper bound} and \\eqref{eq: boundedness lambda_n^eps} we can extract a subsequence from both the sequences $(\\lambda_i^\\epsilon)_{\\epsilon>0}$ and $(\\lambda_i(\\Omega_\\epsilon))_{\\epsilon>0}$ such that\n\\[\n\\lambda_i^{\\epsilon_k} \\to \\lambda_i,\\ \\ {\\rm and}\\ \\\n\\lambda_i(\\Omega_{\\epsilon_k}) \\to \\widehat{\\lambda}_i,\n\\]\nas $k\\to \\infty$, for all $i=1,\\dots,n+1$.\\\\\nBy assumption we have $\\lambda_i = \\widehat{\\lambda}_i$ for all $i=1,\\dots,n$. Thus, by passing to the limit as $\\epsilon \\to 0$ in \\eqref{eq: upper bound} (with $n$ replaced by $n+1$) and in \\eqref{prop: lambda_n+1}, we get\n\\[ \\lambda_n \\leq \\widehat{\\lambda}_{n+1} - \\delta \\leq \\lambda_{n+1} - \\delta. \\]\n\nWe rewrite $\\lambda_1,\\dots,\\lambda_n$ without repetitions due to multiplicity in order to get a new sequence\n\\begin{equation} \\label{proof: nonoverlappeigenvalues}\n\\widetilde{\\lambda}_1< \\widetilde{\\lambda}_2<\\dots< \\widetilde{\\lambda}_s = \\lambda_n\n\\end{equation}\nand set $\\widetilde{\\lambda}_{s+1}:= \\widehat{\\lambda}_{n+1} \\leq \\lambda_{n+1}$. Thus, by assumption \\eqref{prop: lambda_n+1} we have that\n\\begin{equation} \\label{proof: nonoverlappeigenvalues2}\n\\widetilde{\\lambda}_s < \\widetilde{\\lambda}_{s+1}.\n\\end{equation}\nFor each $r=1,\\dots,s$, let $\\widetilde{\\lambda}_r = \\lambda_{i_r} = \\dots = \\lambda_{j_r}$, for some $i_r \\leq j_r$, $i_r, j_r \\in \\{1,\\dots,n \\}$, where it is understood that $j_r - i_r + 1$ is the multiplicity of $\\widetilde{\\lambda}_r$. 
Furthermore, we define the eigenprojector $Q_r$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\varphi_{i_r}^\\epsilon, \\dots, \\varphi_{j_r}^\\epsilon]$ by\n\\begin{equation} \\label{proof: def Q_r}\nQ_r g = \\sum_{i=i_r}^{j_r} (g, \\varphi_{i}^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\varphi_{i}^\\epsilon.\n\\end{equation}\nWe now proceed to prove the following\\\\ \\smallskip\n\n\\noindent\\emph{Claim:} $\\norma{\\xi_i^{\\epsilon_k} - Q_r \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k \\to \\infty$, for all $i_r \\leq i \\leq j_r$ and $r \\leq s$.\n\n\\noindent Let us prove it by induction on $1 \\leq r \\leq s$.\\\\\nIf $r=1$, we define the function\n\\[\n\\chi_{\\epsilon_k} = \\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k} = \\xi_i^{\\epsilon_k} - \\sum_{l=1}^{j_1} (\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\varphi_l^{\\epsilon_k}.\n\\]\nThen $\\chi_{\\epsilon_k} \\in H^2(\\Omega_{\\epsilon_k})$, $(\\chi_{\\epsilon_k} , \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})}= 0$ for all $l=1,\\dots, j_1$ and by the min-max representation of $\\lambda_2(\\Omega_{\\epsilon_k})$ we have that\n\\begin{equation} \\label{proof: bigger lambda_2}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\geq \\lambda_2(\\Omega_{\\epsilon_k}) \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} \\geq \\widetilde{\\lambda}_2 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1).\n\\end{equation}\nOn the other hand, it is easy to prove by definition of $\\chi_{\\epsilon_k}$ that\n\\begin{multline}\n\\int_{\\Omega_{\\epsilon_k}} (1-\\sigma) \\big(D^2\\chi_{\\epsilon_k} : D^2 \\psi\\big) + \\sigma \\Delta \\chi_{\\epsilon_k} \\Delta\\psi + \\tau \\nabla\\chi_{\\epsilon_k}\\cdot \\nabla \\psi + \\chi_{\\epsilon_k}\\psi \\, dx\\\\\n= \\lambda_1(\\Omega_{\\epsilon_k})\\int_{\\Omega_{\\epsilon_k}} \\chi_{\\epsilon_k} \\psi \\, dx + o(1)\n\\end{multline}\nfor all $\\psi\\in 
H^2(\\Omega_{\\epsilon_k})$. This in particular implies that\n\\begin{equation} \\label{proof: chi_eps equality}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = \\lambda_1(\\Omega_{\\epsilon_k}) \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} +o(1)\n\\end{equation}\nand consequently,\n\\begin{equation} \\label{proof: less lambda_1}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\leq \\widetilde{\\lambda}_1 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1).\n\\end{equation}\nHence, inequalities \\eqref{proof: bigger lambda_2}, \\eqref{proof: less lambda_1} imply that\n\\[\n\\widetilde{\\lambda}_2 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1) \\leq \\widetilde{\\lambda}_1 \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1),\n\\]\nwhich implies that $\\norma{\\chi_{\\epsilon_k}}_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ (otherwise we would have $\\widetilde{\\lambda}_2 \\leq \\widetilde{\\lambda}_1 + o(1)$, against \\eqref{proof: nonoverlappeigenvalues}). Finally, equation \\eqref{proof: chi_eps equality} implies that $[\\chi_{\\epsilon_k}]_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = o(1)$, so that also $\\norma{\\chi_{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}= o(1)$.\n\nLet $r>1$ and assume by induction hypothesis that\n\\begin{equation} \\label{proof: ind_hyp}\n\\norma{\\xi^{\\epsilon_k}_i - Q_t \\xi^{\\epsilon_k}_i}_{H^2(\\Omega_{\\epsilon_k})} \\to 0\n\\end{equation}\nas $k \\to \\infty$, for all $i_t \\leq i \\leq j_t$ and for all $t=1,\\dots,r-1$. We have to prove that \\eqref{proof: ind_hyp} holds also for $t=r$. Let $i_r \\leq i \\leq j_r$ and let $\\chi_{\\epsilon_k} = \\xi_i^{\\epsilon_k} - Q_r \\xi_i^{\\epsilon_k}$. 
Then\n\\begin{equation} \\label{proof: almost orthog}\n(\\chi_{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\to 0 \\quad \\text{as $k \\to \\infty$, for all $h=1,\\dots,j_r$ }.\n\\end{equation}\nIndeed, if $h \\in \\{i_r, \\dots, j_r\\}$ then by definition of $\\chi_{\\epsilon_k}$, $(\\chi_{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = 0$. Otherwise, if $h < i_r$, note that the function $\\varphi_h^{\\epsilon_k}$ satisfies\n\\begin{multline*}\n\\int_{\\Omega_{\\epsilon_k}} (1-\\sigma) \\left( D^2\\varphi_h^{\\epsilon_k} : D^2\\psi \\right) + \\sigma \\Delta \\varphi_h^{\\epsilon_k} \\Delta \\psi + \\tau \\nabla \\varphi_h^{\\epsilon_k} \\nabla\\psi + \\varphi_h^{\\epsilon_k} \\psi\\, dx \\\\\n= \\lambda_h(\\Omega_{\\epsilon_k}) \\int_{\\Omega_{\\epsilon_k}} \\varphi_h^{\\epsilon_k} \\psi\\,dx\\, ,\n\\end{multline*}\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$, briefly\n$\nB_{\\Omega_{\\epsilon_k}}(\\varphi_h^{\\epsilon_k}, \\psi) = \\lambda_h(\\Omega_{\\epsilon_k})(\\varphi_h^{\\epsilon_k}, \\psi)_{L^2(\\Omega_{\\epsilon_k})}\\, ,\n$\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$, where $B_U$ denotes the quadratic form associated with the operator\n$\\Delta^2-\\tau\\Delta +I$\non an open set $U$. Similarly,\n$\nB_{\\Omega_{\\epsilon_k}}(\\xi_i^{\\epsilon_k}, \\psi) = \\lambda_i^{\\epsilon_k}(\\xi_i^{\\epsilon_k}, \\psi)_{L^2(\\Omega_{\\epsilon_k})} + o(1)\n$\nfor all $\\psi \\in H^2(\\Omega_{\\epsilon_k})$. 
Thus,\n$\\lambda_h(\\Omega_{\\epsilon_k})(\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = \\lambda_i^{\\epsilon_k}(\\xi_i^{\\epsilon_k}, \\varphi_h^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} + o(1)\n$\nwhich implies\n\\begin{equation}\n\\label{proof: difference eigenvalues}\n( \\lambda_h(\\Omega_{\\epsilon_k}) - \\lambda_i^{\\epsilon_k}) (\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = o(1)\n\\end{equation}\nand since $( \\lambda_h(\\Omega_{\\epsilon_k}) - \\lambda_i^{\\epsilon_k}) \\to (\\widetilde{\\lambda}_h - \\widetilde{\\lambda}_i) \\neq 0$ by assumption, by \\eqref{proof: difference eigenvalues} we deduce that $(\\varphi_h^{\\epsilon_k}, \\xi_i^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ as $\\epsilon_k \\to 0$, for all $h=1,\\dots,j_r$, which implies \\eqref{proof: almost orthog}.\n\nAs in the case $r=1$ we may deduce that\n\\begin{equation} \\label{proof: bigger lambda_r+1}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\geq \\widetilde{\\lambda}_{r+1} \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} - o(1).\n\\end{equation}\nOn the other hand, by definition of $\\chi_{\\epsilon_k}$ we have\n\\begin{equation} \\label{proof: less lambda_r}\n[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} \\leq \\widetilde{\\lambda}_r \\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} + o(1).\n\\end{equation}\nBy \\eqref{proof: bigger lambda_r+1}, \\eqref{proof: less lambda_r} and \\eqref{proof: nonoverlappeigenvalues} it must be $\\norma{\\chi_{\\epsilon_k}}^2_{L^2(\\Omega_{\\epsilon_k})} = o(1)$ and by \\eqref{proof: less lambda_r} we deduce that $[\\chi_{\\epsilon_k}]^2_{H^2_{\\sigma,\\tau}(\\Omega_{\\epsilon_k})} = o(1)$, hence $\\norma{\\chi_{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$,\nas $k\\to \\infty$. 
This concludes the proof of the Claim.\\\\\n\nNow define the projector $\\widetilde{Q}_n$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\varphi_1^{\\epsilon}, \\dots, \\varphi_n^{\\epsilon}]$ by\n\\[\n\\widetilde{Q}_n g = \\sum_{i=1}^n (g,\\varphi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\varphi_i^\\epsilon.\n\\]\nThen, as a consequence of the Claim we have that\n\\begin{equation}\n\\label{convergence xi}\n\\norma{\\xi_i^{\\epsilon_k} - \\widetilde{Q}_n \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0\n\\end{equation}\nas $k \\to \\infty$, for all $i=1,\\dots,n$. Indeed, for every index $i=1,\\dots,n$ there exists $1 \\leq r\\leq s$ such that $i_r \\leq i \\leq j_r$; let us assume for simplicity that $r=1$. Then we have $\\norma{\\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k\\to \\infty$, and also\n\\[\n\\norma{\\xi_i^{\\epsilon_k} - \\widetilde{Q}_n \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\leq \\norma{\\xi_i^{\\epsilon_k} - Q_1 \\xi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} + \\sum_{l = j_1+1}^n \\big\\lvert(\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})}\\big\\rvert \\norma{\\varphi_l^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}\n\\]\nand the right-hand side tends to 0 as $k \\to \\infty$ because $\\norma{\\varphi_l^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})}$ is uniformly bounded in $k$ and $(\\xi_i^{\\epsilon_k}, \\varphi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\to 0$ as $k\\to \\infty$ (to see this it is sufficient to argue as in the proof of \\eqref{proof: difference eigenvalues}). Moreover, since $\\norma{\\xi_i^{\\epsilon_k} - \\phi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k})} \\to 0$ as $k \\to \\infty$ for all $i=1,\\dots,n$, we also have $\\norma{\\phi_i^{\\epsilon_k} - \\widetilde{Q}_n \\phi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k})} \\to 0$\nas $k \\to \\infty$, for all $i=1,\\dots,n$.
Thus $(\\widetilde{Q}_n \\phi_1^{\\epsilon_k}, \\dots,$ $ \\widetilde{Q}_n \\phi_n^{\\epsilon_k} )$ is a basis of the linear span $[\\varphi_1^{\\epsilon_k}, \\dots, \\varphi_n^{\\epsilon_k}]$ in $L^2(\\Omega_{\\epsilon_k})$. Hence,\n$\n\\varphi_i^{\\epsilon_k} = \\sum_{l=1}^n a_{li}^{\\epsilon_k} \\widetilde{Q}_n \\phi_l^{\\epsilon_k}\n$\nfor some coefficients $a_{li}^{\\epsilon_k} = (\\varphi_i^{\\epsilon_k}, \\phi_l^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} + o(1)$ as $k\\to\\infty$. Then for all $i =1,\\dots,n$ we have\n\\begin{multline*}\n\\norma{\\varphi_i^{\\epsilon_k} - P_n \\varphi_i^{\\epsilon_k}}_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k}\\!)}\\\\\n = \\bigg\\lVert \\sum_{l=1}^n (\\varphi_i^{\\epsilon_k}, \\phi_l^{\\epsilon_k})_{L^2} [\\phi_l^{\\epsilon_k} - \\widetilde{Q}_n \\phi_l^{\\epsilon_k}] + o(1) \\sum_{l=1}^n \\widetilde{Q}_n \\phi_l^{\\epsilon_k} \\bigg \\rVert_{H^2(\\Omega) \\oplus H^2(R_{\\epsilon_k}\\!)}\n\\end{multline*}\nand the right-hand side tends to 0 as $k \\to \\infty$.\n\\end{proof}\n\n\\begin{remark}\n\\label{rmk: orthogonal matrix A}\nWith the notation of the previous proof, one can also prove that the matrix\n$A = (a_{li}^{\\epsilon_k})_{l,i=1,\\dots,n} $\nis almost orthogonal, in the sense that $A A^t = A^t A = \\mathbb{I} + o(1)$ as $k \\to \\infty$. To prove this it is sufficient to show that the matrix\n $\\tilde A= \\bigl((\\phi^{\\epsilon_k}_l, \\varphi^{\\epsilon_k}_m)_{L^2(\\Omega_{\\epsilon_k})}\\bigr)_{l,m=1,\\dots,n}$\nis almost orthogonal.
Let $l$ be fixed and note that\n$\n\\phi^{\\epsilon_k}_l = \\sum_{m=1}^n (\\phi^{\\epsilon_k}_l, \\varphi_m^{\\epsilon_k})_{L^2(\\Omega_{\\epsilon_k})} \\varphi^{\\epsilon_k}_m + (\\mathbb{I}-\\widetilde{Q}_n) \\phi^{\\epsilon_k}_l,\n$\nhence, by \\eqref{convergence xi} we deduce that\n\\begin{equation}\\label{almost orthogonal matrix}\n\\delta_{li} = (\\phi^{\\epsilon_k}_l, \\phi^{\\epsilon_k}_i)_{L^2(\\Omega_{\\epsilon_k})} = \\sum_{m=1}^n (\\phi^{\\epsilon_k}_l, \\varphi^{\\epsilon_k}_m)_{L^2(\\Omega_{\\epsilon_k})} (\\varphi^{\\epsilon_k}_m, \\phi^{\\epsilon_k}_i)_{L^2(\\Omega_{\\epsilon_k})} + o(1)\\, ,\n\\end{equation}\nas $k \\to \\infty$.\nNote that we can rewrite \\eqref{almost orthogonal matrix} as $\\tilde A \\tilde A^t = \\mathbb{I} + o(1)$, and in a similar way we also get that $\\tilde A^t \\tilde A = \\mathbb{I} + o(1)$, concluding the proof.\n\\end{remark}\n\nIn the sequel we shall need the following lemma.\n\n\\begin{lemma} \\label{lemma: equation for chi} Let $1 \\leq i \\leq j \\leq n$.
Assume that $\\widehat \\lambda \\in \\numberset{R}$ is such that, possibly passing to a subsequence, $\\lambda_m(\\Omega_\\epsilon )\\to \\widehat{\\lambda}$ as $\\epsilon \\to 0$ for all $m \\in \\{i, \\dots, j \\}$.\nIf $\\chi_{\\epsilon} \\in [\\varphi_i^{\\epsilon}, \\dots, \\varphi_j^{\\epsilon}]$, $\\norma{\\chi_{\\epsilon}}_{L^2(\\Omega_{\\epsilon})}=1$ and $\\chi_{\\epsilon}|_{\\Omega} \\rightharpoonup \\chi$ in $H^2(\\Omega)$\nthen\n\\begin{equation}\n\\label{eq: equation for chi}\n\\int_{\\Omega} (1-\\sigma) (D^2 \\chi : D^2 \\psi) + \\sigma \\Delta \\chi \\Delta \\psi + \\tau \\nabla \\chi \\cdot\\nabla \\psi + \\chi \\psi\\, dx = \\widehat{\\lambda} \\int_{\\Omega} \\chi \\psi\\, dx\\, ,\n\\end{equation}\nfor all $\\psi \\in H^2(\\Omega)$.\n\\end{lemma}\n\\begin{proof}\nSince $\\chi_{\\epsilon} \\in [\\varphi_i^{\\epsilon}, \\dots, \\varphi_j^{\\epsilon}]$ and $\\norma{\\chi_{\\epsilon}}_{L^2(\\Omega_{\\epsilon})}=1$ there exist coefficients $(a_l(\\epsilon))_{l=i}^j$ such that\n$\n\\chi_{\\epsilon} = \\sum_{l=i}^j a_l(\\epsilon) \\varphi_l^{\\epsilon}$ and $\\sum_{l=i}^j a_l^2(\\epsilon) = 1.$\nNote that for all $m \\in \\{i, \\dots, j \\}$, possibly passing to a subsequence, there exists $\\widehat{\\varphi}_m \\in H^2(\\Omega)$ such that $\\varphi_m^{\\epsilon}|_{\\Omega} \\rightharpoonup \\widehat{\\varphi}_m$ in $H^2(\\Omega)$. Since $\\chi_{\\epsilon}|_{\\Omega} \\rightharpoonup \\chi$ in $H^2(\\Omega)$ by assumption, we get that $\\chi = \\sum_{l=i}^j a_l \\widehat{\\varphi}_l$ in $\\Omega$ for some coefficients $(a_l)_{l=i}^j$. Let $\\psi \\in H^2(\\Omega)$ be fixed and consider an extension $\\widetilde{\\psi} = E \\psi \\in H^2(\\numberset{R}^N)$. 
Then\n\\begin{equation} \\label{proof: lemma chi}\n\\begin{split}\n&\\int_{\\Omega_{\\epsilon}} (1-\\sigma) \\bigl(D^2\\chi_{\\epsilon} : D^2 \\widetilde{\\psi}\\bigr) + \\sigma \\Delta \\chi_{\\epsilon} \\Delta \\widetilde{\\psi} + \\tau \\nabla \\chi_{\\epsilon} \\cdot \\nabla \\widetilde{\\psi} + \\chi_{\\epsilon}\\widetilde{\\psi} \\, dx\\\\\n&= \\sum_{l=i}^j a_l(\\epsilon) \\biggl[ \\int_{\\Omega_{\\epsilon}} (1-\\sigma) \\bigl(D^2 \\varphi_l^{\\epsilon} : D^2 \\widetilde{\\psi}\\bigr) + \\sigma \\Delta \\varphi_l^{\\epsilon} \\Delta \\widetilde{\\psi} + \\tau \\nabla \\varphi_l^{\\epsilon} \\cdot \\nabla \\widetilde{\\psi} + \\varphi_l^{\\epsilon}\\widetilde{\\psi} \\, dx \\biggr]\\\\\n&= \\sum_{l=i}^j a_l(\\epsilon) \\lambda_l(\\Omega_{\\epsilon}) \\int_{\\Omega_{\\epsilon}} \\varphi_l^{\\epsilon} \\widetilde{\\psi} \\, dx.\n\\end{split}\n\\end{equation}\nThen it is possible to pass to the limit on both sides of \\eqref{proof: lemma chi} by splitting the integrals over $\\Omega_{\\epsilon}$ into an integral over $R_{\\epsilon}$ (which tends to $0$ as $\\epsilon \\to 0$) and an integral over $\\Omega$. Moreover, the integrals over $\\Omega$ will converge to the corresponding integrals in \\eqref{eq: equation for chi} as $\\epsilon \\to 0$, because of the weak convergence of $\\chi_{\\epsilon}$ in $H^2(\\Omega)$ and the strong convergence of $E\\psi$ to $\\psi$ in $H^2(\\Omega)$.\n\\end{proof}\n\n\nWe proceed to prove the lower bound for $\\lambda_n(\\Omega_\\epsilon)$.\nTo do so, we need to add an extra assumption on the shape of $\\Omega_{\\epsilon}$.
Hence,\n we introduce the following condition in the spirit of what is known for the Neumann Laplacian (see e.g., \\cite{ArrPhD}, \\cite{Arr1}, \\cite{AHH}).\n\n\\begin{definition}[H-Condition]\n\\label{def: H condition}\nWe say that the family of dumbbell domains $\\Omega_\\epsilon$, $\\epsilon>0$, satisfies the H-Condition if, given functions $u_\\epsilon \\in H^2(\\Omega_\\epsilon)$ such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)} \\leq R$ for all $\\epsilon>0$, there exist functions $\\bar{u}_\\epsilon \\in H^2_{L_\\epsilon}(R_\\epsilon)$ such that\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\norma{u_\\epsilon - \\bar{u}_\\epsilon }_{L^2(R_\\epsilon)} \\to 0$ as $\\epsilon \\to 0$,\n\\item $[\\bar{u}_\\epsilon]^2_{H^2_{\\sigma, \\tau}(R_\\epsilon)} \\leq [u_\\epsilon]^2_{H^2_{\\sigma, \\tau}(\\Omega_\\epsilon)} + o(1)$ as $\\epsilon \\to 0$.\n\\end{enumerate}\n\\end{definition}\n\nRecall that $[\\cdot ]_{H^2_{\\sigma,\\tau}}$ is defined above in Definition \\ref{definitionNorm}. We will show in Section \\ref{sec: proof H condition regular dumbbells} that a wide class of channels $R_\\epsilon$ satisfies the H-Condition.\n\n\n\\begin{theorem}[Lower bound] \\label{thm: lower bound}\nAssume that the family of dumbbell domains $\\Omega_\\epsilon$, $\\epsilon>0$, satisfies the H-Condition. Then for every $n\\in \\numberset{N}$ we have $\\lambda_n(\\Omega_\\epsilon) \\geq \\lambda_n^\\epsilon - o(1)$ as $\\epsilon \\to 0$.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{thm: upper bound} and its proof we know that both $\\lambda_i(\\Omega_\\epsilon)$ and $\\lambda_i^\\epsilon$ are uniformly bounded in $\\epsilon$. 
Then, for each subsequence $\\epsilon_k$ we can find a subsequence (which we still call $\\epsilon_k$), sequences of real numbers $(\\lambda_i)_{i\\in \\numberset{N}}$, $(\\widehat{\\lambda}_i)_{i\\in \\numberset{N}}$, and sequences of $H^2(\\Omega)$ functions $(\\phi_i)_{i \\in \\numberset{N}}$, $(\\widehat{\\varphi}_i)_{i \\in \\numberset{N}}$, such that the following conditions are satisfied:\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\lambda_i^{\\epsilon_k} \\longrightarrow \\lambda_i$, for all $i \\geq 1$;\n\\item $\\lambda_i(\\Omega_{\\epsilon_k}) \\longrightarrow \\widehat{\\lambda}_i$, for all $i \\geq 1$;\n\\item $\\xi^{\\epsilon_k}_i|_{\\Omega} \\longrightarrow \\phi_i $ strongly in $H^2(\\Omega)$, for all $i \\geq 1$;\n\\item $\\varphi_i^{\\epsilon_k}|_{\\Omega} \\longrightarrow \\widehat{\\varphi}_i$ weakly in $H^2(\\Omega)$, for all $i \\geq 1$.\n\\end{enumerate}\nNote that $(iii)$ immediately follows by recalling that $\\xi^{\\epsilon_k}_i|_{\\Omega}$ is either zero or coincides with $\\varphi_i^\\Omega$. Then $(iv)$ follows from the estimate\n$\\norma{\\varphi_i^{\\epsilon_k}}_{H^2(\\Omega_{\\epsilon_k})} \\leq c\\, \\lambda_i(\\Omega_{\\epsilon_k})$ and from the boundedness of the sequence $\\lambda_i(\\Omega_{\\epsilon_k})$, $k \\in \\numberset{N}$.\n\nWe plan to prove that $\\widehat{\\lambda}_i = \\lambda_i$ for all $i\\geq 1$. We do this by induction.\nFor $i=1$ we clearly have $\\lambda_1 = \\lambda_1(\\Omega) = 1 = \\lambda_1(\\Omega_{\\epsilon_k})$ for all $k$; hence, passing to the limit as $k\\to\\infty$ in the right-hand side of the last equality we get $\\lambda_1 = \\widehat{\\lambda}_1$.\nThen, we assume by the induction hypothesis that $\\widehat{\\lambda}_i = \\lambda_i$ for all $i=1,\\dots,n$ and we prove that $\\widehat{\\lambda}_{n+1} = \\lambda_{n+1}$. There are two possibilities: either $\\lambda_n = \\lambda_{n+1}$ or $\\lambda_n < \\lambda_{n+1}$.
In the first case we deduce by \\eqref{eq: upper bound} that\n\\[\n\\lambda_n = \\widehat{\\lambda}_n \\leq \\widehat{\\lambda}_{n+1} \\leq \\lambda_{n+1} = \\lambda_n,\n\\]\nhence all the inequalities are equalities and in particular $\\widehat{\\lambda}_{n+1} = \\lambda_{n+1}$.\nConsequently we can assume without loss of generality that $\\lambda_n < \\lambda_{n+1}$. In this case we must have $\\widehat{\\lambda}_{n+1} \\in [\\lambda_n, \\lambda_{n+1}]$ because $\\lambda_n=\\widehat{\\lambda}_n$ and $\\lambda_n(\\Omega_{\\epsilon_k}) \\leq \\lambda_{n+1}(\\Omega_{\\epsilon_k}) \\leq \\lambda_{n+1}^{\\epsilon_k} + o(1)$ as $k\\to \\infty$. Let $r = \\max\\{ i \\in \\numberset{N} : \\lambda_i < \\lambda_{n+1} \\}$.\n\\end{proof}\n\nWe say that $x_\\epsilon$ divides the spectrum of the operators $\\Delta^2 - \\tau \\Delta + I$ in $\\Omega_\\epsilon$, $\\epsilon>0$, with compact resolvents in $L^2(\\Omega_\\epsilon)$, if there exist $\\delta, M, N, \\epsilon_0 > 0$ such that\n\\begin{align}\n[x_\\epsilon - \\delta, x_\\epsilon + \\delta] \\cap \\{ \\lambda_n^\\epsilon \\}_{n=1}^\\infty = \\emptyset,& \\quad\\forall \\epsilon < \\epsilon_0\\\\\nx_\\epsilon \\leq M,& \\quad\\forall \\epsilon < \\epsilon_0\\\\\nN(x_{\\epsilon}) := \\#\\{ \\lambda_i^{\\epsilon} : \\lambda_i^{\\epsilon} \\leq x_\\epsilon\\}\\leq N < \\infty,& \\quad\\forall \\epsilon < \\epsilon_0.\n\\end{align}\nIf $x_\\epsilon$ divides the spectrum we define the projector $P_{x_\\epsilon}$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^{\\epsilon}, \\dots, \\phi_{N(x_\\epsilon)}^{\\epsilon}]$ of the first $N(x_\\epsilon)$ eigenfunctions by\n\\[\nP_{x_\\epsilon} g = \\sum_{i=1}^{N(x_\\epsilon)} (g,\\phi_i^\\epsilon)_{L^2(\\Omega_\\epsilon)} \\phi_i^\\epsilon\\, ,\n\\]\nfor all $g\\in L^2(\\Omega_\\epsilon)$. Then, recalling Theorem \\ref{thm: upper bound} and Theorem \\ref{thm: lower bound} we deduce the following.\n\n\\begin{theorem}[Decomposition of the eigenvalues] \\label{thm: eigenvalues decomposition}\nLet $\\Omega_\\epsilon$, $\\epsilon>0$, be a family of dumbbell domains satisfying the H-Condition.
Then the following statements hold:\n\\begin{enumerate}[label =(\\roman*)]\n\\item $\\lim_{\\epsilon \\to 0}\\, \\abs{\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^\\epsilon} = 0$, for all $n\\in \\numberset{N} $.\n\n\\item For any $x_\\epsilon$ dividing the spectrum,\n $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{r_\\epsilon} - P_{x_\\epsilon} \\varphi^\\epsilon_{r_\\epsilon}}_{H^2(\\Omega) \\oplus H^2(R_\\epsilon)} = 0$, for all $r_\\epsilon = 1,\\dots, N(x_\\epsilon)$.\n\\end{enumerate}\n\\end{theorem}\n\n\\section{Proof of the H-Condition for regular dumbbells}\n\\label{sec: proof H condition regular dumbbells}\nThe goal of this section is to prove that the H-Condition holds for regular dumbbell domains. More precisely, we will consider channels $R_\\epsilon$ such that the profile function $g$ has the following monotonicity property:\\vspace{8pt}\\\\\n (MP): \\textit{ there exists $\\delta \\in ]0, 1\/2[$ such that $g$ is decreasing on $[0,\\delta)$ and increasing on $(1-\\delta, 1]$. } \\vspace{8pt}\\\\\n\\noindent If (MP) is satisfied then the set $A_{\\epsilon} = \\{ (x,y) \\in \\numberset{R}^2 : x \\in (0,\\delta) \\cup (1-\\delta, 1),\\, 0 < y < \\epsilon g(x) \\}$, namely the part of the channel lying near the two junctions, is the region where the reflection arguments of this section take place. For $\\gamma, \\beta > 0$ we define the function $f_{\\gamma, \\beta} \\in C^{1,1}(0,1)$ by setting\n\\begin{equation}\nf(x)=f_{\\gamma,\\beta}(x) =\n\\begin{cases} -\\epsilon^\\gamma \\Big(\\frac{x}{\\epsilon^\\beta}\\Big)^2 + (\\epsilon^\\beta+2\\epsilon^\\gamma) \\Big( \\frac{x}{\\epsilon^\\beta} \\Big) - \\epsilon^\\gamma, & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad x, & x\\in (\\epsilon^\\beta, 1).\n\\end{cases}\n\\end{equation}\nNote that $f$ is a $C^{1,1}$-diffeomorphism from $(0, \\epsilon^\\beta)$ onto $(-\\epsilon^\\gamma, \\epsilon^\\beta)$.
Then,\n\\[\nf'(x) =\n\\begin{cases} 1+2 \\epsilon^{\\gamma-\\beta} \\, (1-\\frac{x}{\\epsilon^\\beta}), & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad 1, & x\\in (\\epsilon^\\beta, 1),\n\\end{cases}\n\\]\nand\n\\[\nf''(x) =\n\\begin{cases} - 2 \\epsilon^{\\gamma-2\\beta}, & x\\in (0,\\epsilon^\\beta), \\\\\n\\qquad 0, & x\\in (\\epsilon^\\beta, 1),\n\\end{cases}\n\\]\nwhich implies that $|f'(x)-1|\\leq 2 \\epsilon^{\\gamma-\\beta}$, for all $x\\in (0,1)$, and $|f''(x)|\\leq 2 \\epsilon^{\\gamma-2\\beta}$, for all $x\\in (0,1)$. Thus,\n if $\\gamma>\\beta$ then\n\\begin{equation}\n\\label{eq: asymptotics f'}\nf'(x) = 1 + o(1) \\quad \\hbox{ as } \\epsilon\\to 0.\n\\end{equation}\n\nFor any $\\theta \\in (0,1)$, we define the following sets:\n\\begin{align*}\n&K_\\epsilon^\\theta = \\{ (x,y) \\in \\Omega: - \\epsilon^\\theta < x < 0,\\, 0 < y < \\epsilon g(0) \\}\\, , \\\\\n&\\Gamma_\\epsilon^\\theta = \\{ (-\\epsilon^\\theta, y) : 0 < y < \\epsilon g(0) \\}\\, , \\\\\n&J_\\epsilon^\\theta = \\{ (x,y) \\in R_\\epsilon : 0 < x < \\epsilon^\\theta \\}\\, .\n\\end{align*}\n\n\\begin{proposition} \\label{prop: sym arg}\nLet $u_\\epsilon \\in H^2(\\Omega_\\epsilon)$ be such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)} \\leq R$ for all $\\epsilon > 0$. Then, with the notation above and for $0<\\theta<\\frac{1}{3}$, we have\n\\begin{equation}\n \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} = O(\\epsilon^{2\\theta}), \\quad \\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\theta)}=O(\\epsilon^\\theta), \\quad \\hbox{ as } \\epsilon\\to 0.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWe define the function $u_\\epsilon^s \\in H^2(J_\\epsilon^\\theta)$ by setting\n\\[\nu_\\epsilon^s (x,y) = -3 u_\\epsilon(-x,y) + 4 u_\\epsilon \\Bigl(-\\frac{x}{2}, y \\Bigr)\n\\]\nfor all $(x,y) \\in J_\\epsilon^\\theta$. The function $u_\\epsilon^s$ can be viewed as a higher order reflection of $u_\\epsilon$ with respect to the $y$-axis.
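The choice of the coefficients $-3$ and $4$ guarantees that $u_\\epsilon^s$ matches $u_\\epsilon$ to first order across the segment $\\{0\\} \\times (0, \\epsilon g(0))$. Indeed, a direct computation, included here for the reader's convenience, gives\n\\[\nu_\\epsilon^s(0,y) = (-3+4)\\, u_\\epsilon(0,y) = u_\\epsilon(0,y), \\qquad \\frac{\\partial u_\\epsilon^s}{\\partial x}(0,y) = (3-2)\\, \\frac{\\partial u_\\epsilon}{\\partial x}(0,y) = \\frac{\\partial u_\\epsilon}{\\partial x}(0,y),\n\\]\nsince $\\partial_x [-3 u_\\epsilon(-x,y)] = 3 (\\partial_x u_\\epsilon)(-x,y)$ and $\\partial_x [4 u_\\epsilon(-x\/2,y)] = -2 (\\partial_x u_\\epsilon)(-x\/2,y)$. In particular, $u_\\epsilon - u_\\epsilon^s$ vanishes together with its first-order $x$-derivative on $\\{x=0\\}$.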
Let us note that we can estimate the $L^2$ norm of $u^s_\\epsilon$, of its gradient and of its derivatives of order 2, in the following way:\n\\begin{align}\n&\\norma{u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C \\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 1} \\\\\n&\\norma{\\nabla u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C \\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 2} \\\\\n&\\norma{D^\\alpha u_\\epsilon^s}_{L^2(J_\\epsilon^\\theta)} \\leq C\\norma{D^\\alpha u_\\epsilon}_{L^2(K_\\epsilon^\\theta)}, \\label{proof: ineq 3}\n\\end{align}\nfor any multiindex $\\alpha$ of length $2$ and for some constant $C$ independent of $\\epsilon$. To obtain the three inequalities above, we are using that the image of $K_\\epsilon^\\theta$ under the reflection about the $y$-axis contains $J_\\epsilon^\\theta$. This is a consequence of (MP).\nSince the $L^2$ norms on the right-hand sides of the inequalities above are taken on a subset of $\\Omega$, we can improve the estimate of \\eqref{proof: ineq 1} and \\eqref{proof: ineq 2} using H\\\"older's inequality and Sobolev embeddings to obtain\n\\begin{equation}\\label{sobolev-1}\n\\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\theta)} \\leq |K_\\epsilon^\\theta|^{1\/2} \\norma{u_\\epsilon}_{L^\\infty(\\Omega)} \\leq c \\bigl( \\epsilon^{\\theta + 1}\\bigr)^{1\/2} \\norma{u_\\epsilon}_{H^2(\\Omega)}\n\\end{equation}\nand in a similar way\n\\begin{equation}\\label{sobolev-2}\n\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\theta)} \\leq |K_\\epsilon^\\theta|^{\\frac{1}{2} - \\frac{1}{p}} \\norma{\\nabla u_\\epsilon}_{L^p(\\Omega)} \\leq c \\bigl(\\epsilon^{\\theta + 1}\\bigr)^{\\frac{1}{2} - \\frac{1}{p}} \\norma{u_\\epsilon}_{H^2(\\Omega)}\n\\end{equation}\nfor any $2 < p < \\infty$, where we have used \\eqref{bound-second-derivatives}.
Hence we rewrite inequality \\eqref{proof: Poincare ineq} in the following way:\n\\begin{equation} \\label{proof: decay ineq psi_eps}\n\\Big \\lVert \\frac{\\partial \\psi_\\epsilon}{\\partial x_i} \\Big \\rVert_{L^2(J_\\epsilon^\\theta)} \\leq \\frac{2}{\\pi} \\epsilon^\\theta (C R + o(1)) = O(\\epsilon^\\theta)\n\\end{equation}\nas $\\epsilon \\to 0$, for $i=1,2$.\n\nFinally, by the inequalities \\eqref{proof: decay ineq nabla u eps^s}, \\eqref{proof: decay ineq psi_eps} we deduce that\n\\begin{equation}\n\\begin{split}\n\\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} &\\leq \\norma{\\nabla \\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} + \\norma{\\nabla u^s_\\epsilon}_{L^2(J_\\epsilon^\\theta)}\\\\\n&\\leq O(\\epsilon^\\theta) + C \\bigl(\\epsilon^{\\theta + 1}\\bigr)^{\\frac{1}{2} - \\frac{1}{p}} \\norma{u_\\epsilon}_{H^2(\\Omega)}\\leq O(\\epsilon^\\theta),\n\\end{split}\n\\end{equation}\nwhere we have used that $(\\theta + 1) (1\/2 - 1\/p) > \\theta$ for large enough $p$.\n\nIt remains to prove that $\\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)}= O(\\epsilon^{2\\theta})$ as $\\epsilon \\to 0$. We can repeat the argument for $u_\\epsilon$ instead of $\\partial_{x_i} u_\\epsilon$, with the difference that now we can improve the decay of $\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)}$ by using the one-dimensional Poincar\\'{e} inequality twice. More precisely we have that\n\\[\n\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} \\leq \\Big(\\frac{2}{\\pi}\\Big)^2 \\epsilon^{2\\theta} \\bigg \\lVert \\frac{\\partial^2 \\psi_\\epsilon}{\\partial x^2} \\bigg \\rVert_{L^2(J_\\epsilon^\\theta)}\n\\]\nfrom which we deduce\n$\n\\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} = O(\\epsilon^{2\\theta})\n$\nas $\\epsilon \\to 0$. 
Hence,\n\\begin{equation}\\label{eq:estimate ueps}\n\\begin{split}\n\\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\theta)} &\\leq \\norma{\\psi_\\epsilon}_{L^2(J_\\epsilon^\\theta)} + \\norma{u^s_\\epsilon}_{L^2(J_\\epsilon^\\theta)}\n\\leq O(\\epsilon^{2\\theta}) + C \\epsilon^{\\frac{\\theta + 1}{2}} \\norma{u_\\epsilon}_{H^2(\\Omega)} = O(\\epsilon^{2\\theta})\n\\end{split}\n\\end{equation}\nas $\\epsilon \\to 0$, concluding the proof.\n\\end{proof}\n\nWe can now give a proof of Theorem \\ref{thm: (MP) implies (H)}.\n\\begin{proof}[Proof of Theorem \\ref{thm: (MP) implies (H)}]\nLet $u_\\epsilon\\in H^2(\\Omega_\\epsilon)$ be such that $\\norma{u_\\epsilon}_{H^2(\\Omega_\\epsilon)}\\leq R$ for any $\\epsilon > 0$. We prove that the H-Condition holds if we choose $\\overline{u}_\\epsilon$ as in \\eqref{def: u bar} with $\\gamma <1\/3$. Note that $u_\\epsilon \\equiv \\overline{u}_\\epsilon$ on $R_\\epsilon \\setminus J_\\epsilon^\\beta$. Let us first estimate $\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$. By a change of variable and by \\eqref{eq: asymptotics f'} we deduce that\n\\begin{equation}\n\\begin{split}\n\\norma{\\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} &= \\int_0^{\\epsilon^\\beta} \\int_0^{\\epsilon g(x)} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(f(x),y)|^2\\, dy dx\\\\\n&= \\int_{-\\epsilon^\\gamma}^{\\epsilon^\\beta} \\int_0^{\\epsilon g(f^{-1}(z))} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(z,y)|^2 |f'(f^{-1}(z))|^{-1}\\, dy dz\\\\\n&\\leq (1+o(1)) \\int_{-\\epsilon^\\gamma}^{\\epsilon^\\beta} \\int_0^{\\epsilon g(f^{-1}(z))} |(u_\\epsilon \\chi_\\epsilon^\\gamma)(z,y)|^2 dy dz\\\\\n&\\leq (1+o(1)) \\norma{u_\\epsilon}^2_{L^2(Z_\\epsilon^\\gamma)},\n\\end{split}\n\\end{equation}\nwhere $Z_\\epsilon^\\gamma = \\{ (x,y) \\in \\Omega_\\epsilon : -\\epsilon^\\gamma < x < \\epsilon^\\beta, 0 < y < \\epsilon g(f^{-1}(x)) \\}$. 
Note that, since by (MP) the function $g$ is decreasing on $[0,\\delta)$, we have $Z_\\epsilon^\\gamma\\subset K_\\epsilon^{\\gamma}\\cup J_\\epsilon^\\beta$. Hence,\n\\begin{equation} \\label{proof: estimate overline(u)}\n\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2 \\leq (1+o(1))( \\norma{u_\\epsilon}_{L^2(K^\\gamma_\\epsilon)}^2+ \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2).\n\\end{equation}\nNote that the last summand on the right-hand side of \\eqref{proof: estimate overline(u)} behaves as $O(\\epsilon^{4\\beta})$ as $\\epsilon \\to 0$ because of Proposition \\ref{prop: sym arg}. Also, by \\eqref{sobolev-1} with $\\theta$ replaced by $\\gamma$, we get\n\\[\n\\norma{u_\\epsilon}_{L^2(K^\\gamma_\\epsilon)} \\leq c \\epsilon^{\\frac{\\gamma+1}{2}} \\norma{u_\\epsilon}_{H^2(\\Omega)}.\n\\]\n\n\\noindent Thus,\n\\[\n\\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}^2 \\leq (1+o(1)) \\bigl(O(\\epsilon^{4\\beta}) + O(\\epsilon^{\\gamma+1})\\bigr) = O(\\epsilon^{4\\beta})\n\\]\nas $\\epsilon \\to 0$. We then have by Proposition \\ref{prop: sym arg} that\n\\[\n\\norma{u_\\epsilon - \\overline{u}_\\epsilon}_{L^2(R_\\epsilon)} = \\norma{u_\\epsilon - \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} \\leq \\norma{u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} + \\norma{\\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^{2\\beta})\n\\]\nas $\\epsilon \\to 0$. This concludes the proof of $(i)$ in the H-Condition.\n\nIn order to prove $(ii)$ from Definition \\ref{def: H condition}, we first need to compute $\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$ and $\\norma{D^2 \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)}$.
We have\n\\[\n\\begin{split}\n&\\frac{\\partial \\overline{u}_\\epsilon}{\\partial x} (x,y)= \\Bigg[\\bigg(\\frac{\\partial u_\\epsilon }{\\partial x} \\chi_\\epsilon^\\gamma\\bigg) (f(x),y) + (u_\\epsilon (\\chi_\\epsilon^\\gamma)')(f(x),y) \\Bigg] f'(x)\\\\\n&\\frac{\\partial \\overline{u}_\\epsilon}{\\partial y} (x,y)= \\bigg(\\frac{\\partial u_\\epsilon}{\\partial y} \\chi_\\epsilon^\\gamma\\bigg)(f(x),y).\n\\end{split}\n\\]\nHence,\n\\begin{equation}\n\\label{proof: gradient estimate}\n\\begin{split}\n\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} &\\leq \\norma{f'}_{L^\\infty} \\bigl( \\norma{\\nabla u_\\epsilon (f(\\cdot),\\cdot)}_{L^2(J_\\epsilon^\\beta)} + \\norma{(u_\\epsilon (\\chi_\\epsilon^\\gamma)')(f(\\cdot),\\cdot)}_{L^2(J_\\epsilon^\\beta)}\\bigr)\\\\\n&\\leq \\norma{f'}_{L^\\infty} \\norma{f'}_{L^\\infty}^{-1\/2}\\bigl(\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + c_1 \\norma{\\epsilon^{-\\gamma} u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)}\\bigr)\\\\\n&\\leq (1 + o(1)) \\bigl(\\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} + \\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} + c_1 \\epsilon^{-\\gamma} \\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)}\\bigr)\n\\end{split}\n\\end{equation}\nwhere we have used the definition of $\\chi_\\epsilon^\\gamma$ and the change of variables $(f(x), y) \\mapsto (x, y)$. By Proposition \\ref{prop: sym arg} we know that $\\norma{\\nabla u_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^{\\beta})$ as $\\epsilon \\to 0$. 
Moreover, by \\eqref{sobolev-1}, \\eqref{sobolev-2} with $\\theta$ replaced by $\\gamma$, we deduce that\n\\[\n\\norma{u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} = O(\\epsilon^{ \\frac{\\gamma+1}{2}} ), \\quad\\quad \\norma{\\nabla u_\\epsilon}_{L^2(K_\\epsilon^\\gamma)} = O(\\epsilon^{\\gamma_p}),\n\\]\nfor any $p < \\infty$, where we have set\n\\[\n\\gamma_p = \\biggl(\\frac{1}{2} - \\frac{1}{p}\\biggr)(\\gamma + 1).\n\\]\nFinally, we deduce by \\eqref{proof: gradient estimate} that\n\\begin{equation} \\label{proof: gradient estimate final}\n\\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) (O(\\epsilon^{\\gamma_p}) + O(\\epsilon^{\\beta}) + \\epsilon^{-\\gamma} O(\\epsilon^{\\gamma_p}) ) = O(\\epsilon^{\\beta})\n\\end{equation}\nbecause $\\gamma_p-\\gamma>\\beta$, for sufficiently large $p$ (note that $\\beta < (1-\\gamma )\/2$ for $\\gamma < 1\/3$).\n\n\nWe now estimate the $L^2$ norm of $D^2 \\overline{u}_\\epsilon$. In order to simplify our notation we write $F(x,y) = (f(x),y)$, $\\chi_\\epsilon^\\gamma = \\chi$, $\\bar u_\\epsilon=\\bar u$, $u_\\epsilon=u$ and we use the subscript notation for the partial derivatives, that is, $u_x=\\frac{\\partial u}{\\partial x}$ and so on.
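For later use, we record the elementary computation behind the claim that $\\gamma_p - \\gamma > \\beta$ for large $p$: since\n\\[\n\\gamma_p - \\gamma = \\biggl(\\frac{1}{2} - \\frac{1}{p}\\biggr)(\\gamma+1) - \\gamma \\longrightarrow \\frac{1-\\gamma}{2} \\quad \\hbox{ as } p \\to \\infty,\n\\]\nand $\\beta < 1\/3$ (as required to apply Proposition \\ref{prop: sym arg} with $\\theta = \\beta$) while $(1-\\gamma)\/2 > 1\/3$ when $\\gamma < 1\/3$, the inequality $\\gamma_p - \\gamma > \\beta$ indeed holds for $p$ sufficiently large.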
First, note that\n\\begin{equation}\\label{2D-eq1}\n\\begin{split}\n&\\bar u_{xx} = \\Big[\\Big(u_{xx} \\chi + 2 u_x \\chi' + u \\chi''\\Big) \\circ F \\Big] \\cdot |f'|^2 + \\Big[\\Big(u_x \\chi + u \\chi' \\Big) \\circ F\\Big] \\cdot f'', \\\\\n&\\bar u_{xy} = \\Big[\\Big( u_{xy} \\chi + u_y \\chi' \\Big) \\circ F \\Big] \\cdot f', \\\\\n&\\bar u_{yy} =\\Big( u_{yy} \\chi \\Big) \\circ F,\n\\end{split}\n\\end{equation}\nand we may write\n\\begin{equation*}\n\\bar u_{xx} = [u_{xx} \\chi \\circ F]\\cdot |f'|^2+ R_1, \\quad \\bar u_{xy} = [u_{xy} \\chi \\circ F] \\cdot f'+ R_2, \\quad \\bar u_{yy} = u_{yy} \\chi \\circ F,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\begin{split}\n&R_1= \\Big[\\Big( 2 u_x \\chi' + u \\chi''\\Big) \\circ F \\Big] \\cdot |f'|^2 + \\Big[\\Big(u_x \\chi + u \\chi' \\Big) \\circ F\\Big] \\cdot f'', \\\\\n&R_2 = \\bigl[(u_y \\chi') \\circ F\\bigr] \\cdot f'.\n\\end{split}\n\\end{equation*}\n\nWe now show that $\\|R_1\\|_{L^2(J_\\epsilon^\\beta)}=o(1)$, $\\|R_2\\|_{L^2(J_\\epsilon^\\beta)}=o(1)$ as $\\epsilon\\to 0$. For this, we will prove that each single term in $R_1$ and $R_2$ is $o(1)$ as $\\epsilon\\to 0$. Recall that $f'(x)=1+o(1)$ and $f''(x)=o(1)$, $\\chi'=O(\\epsilon^{-\\gamma})$ and $\\chi''=O(\\epsilon^{-2\\gamma})$ for $x\\in (0,\\epsilon^\\beta)$.
By a change of variables, by the Sobolev Embedding Theorem and the definition of $\\chi$ it is easy to deduce that\n\\begin{align*}\n&\\norma{(u_x \\chi')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) \\norma{u_x \\chi'}_{L^2(K^\\gamma_\\epsilon)} \\leq C R \\epsilon^{\\gamma_p - \\gamma} = O(\\epsilon^\\beta)\\\\\n&\\norma{(u \\chi'')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq c_2 (1+o(1)) \\norma{u \\epsilon^{-2\\gamma}}_{L^2(K_\\epsilon^\\gamma)} \\leq C R \\epsilon^{\\frac{1-3\\gamma}{2}}\\\\\n&\\norma{(u_y \\chi')\\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq c_1 (1+o(1)) \\norma{\\epsilon^{-\\gamma} u_y}_{L^2(K^\\gamma_\\epsilon)} \\leq C R \\epsilon^{\\gamma_p - \\gamma}= O(\\epsilon^\\beta)\\, .\n\\end{align*}\nBy \\eqref{proof: gradient estimate final} we also have\n\\begin{equation}\n\\norma{(u_x \\chi + u \\chi' ) \\circ F}_{L^2(J_\\epsilon^\\beta)} \\leq (1+o(1)) \\norma{\\nabla \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = O(\\epsilon^\\beta).\n\\end{equation}\nHence the $L^2$ norms of $R_1$, $R_2$ vanish as $\\epsilon \\to 0$. In particular,\n\\begin{equation*}\n\\norma{D^2 \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = (1 + o(1))\\norma{D^2 u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + O(\\epsilon^{\\frac{1-3\\gamma}{2}}) + O(\\epsilon^\\beta),\n\\end{equation*}\nas $\\epsilon\\to 0$. In a similar way we can also prove that\n\\begin{equation*}\n\\norma{\\Delta \\overline{u}_\\epsilon}_{L^2(J_\\epsilon^\\beta)} = (1 + o(1))\\norma{\\Delta u_\\epsilon}_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + O(\\epsilon^{\\frac{1-3\\gamma}{2}}) + O(\\epsilon^\\beta),\n\\end{equation*}\nas $\\epsilon\\to 0$. 
Hence,\n\\begin{multline}\n\\label{proof: channel energy}\n(1-\\sigma) \\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} + \\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(J_\\epsilon^\\beta)} + \\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(J_\\epsilon^\\beta)}\\\\\n =(1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + \\sigma \\norma{\\Delta u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup J_\\epsilon^\\beta)} + o(1).\n\\end{multline}\nBy adding to both sides of \\eqref{proof: channel energy} the terms {\\small$(1-\\sigma)\\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$, $\\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$} and the lower order term $\\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)}$, and taking into account that $\\overline{u}_\\epsilon \\equiv u_\\epsilon$ on $R_\\epsilon \\setminus J_\\epsilon^\\beta$, we deduce that\n\\begin{multline}\\label{mon}\n(1-\\sigma) \\norma{D^2 \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon)} + \\sigma \\norma{\\Delta \\overline{u}_\\epsilon}^2_{L^2(R_\\epsilon)} + \\tau \\norma{\\nabla \\overline{u}_\\epsilon }^2_{L^2(R_\\epsilon)}\\\\\n = (1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup R_\\epsilon)} + \\sigma\\norma{\\Delta u_\\epsilon}^2_{L^2(K_\\epsilon^\\gamma \\cup R_\\epsilon)} + \\tau \\norma{\\nabla u_\\epsilon }^2_{L^2(R_\\epsilon \\setminus J_\\epsilon^\\beta)} + o(1)\\\\\n\\leq (1-\\sigma)\\norma{D^2 u_\\epsilon}^2_{L^2(\\Omega_\\epsilon)} + \\sigma \\norma{\\Delta u_\\epsilon}^2_{L^2(\\Omega_\\epsilon)} + \\tau \\norma{\\nabla u_\\epsilon }^2_{L^2(\\Omega_\\epsilon)} + o(1),\n\\end{multline}\nas $\\epsilon \\to 0$, concluding the proof of $(ii)$ in the H-Condition.\n Note that in \\eqref{mon} we have used the monotonicity of the quadratic form with respect to inclusion of sets. 
This property is straightforward\nfor $\\sigma \\in [0,1)$. In the case $\\sigma \\in (-1,0)$ it follows by observing that\n\\begin{multline*}\n(1-\\sigma) \\bigl[ u^2_{xx} + 2 u^2_{xy} + u^2_{yy} \\bigr] + \\sigma \\bigl[ u^2_{xx} + 2 u_{xx} u_{yy} + u^2_{yy} \\bigr]\\\\\n \\geq u^2_{xx} + u^2_{yy} + \\sigma (u^2_{xx} + u^2_{yy} ) = (1+\\sigma) (u^2_{xx} + u^2_{yy} ) \\geq 0,\n\\end{multline*}\nfor all $u \\in H^2(\\Omega_\\epsilon)$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Asymptotic analysis on the thin domain}\n\\label{sec: thin plates}\nThe purpose of this section is to study the convergence of the eigenvalue problem \\eqref{PDE: R_eps} as $\\epsilon \\to 0$. Since the thin domain $R_\\epsilon$ shrinks to the segment $(0,1)$ as $\\epsilon \\to 0$, we plan to identify the limiting problem in $(0,1)$ and to prove that the resolvent operator of problem \\eqref{PDE: R_eps} converges as $\\epsilon \\to 0$ to the resolvent operator of the limiting problem, in a suitable sense which guarantees the spectral convergence.\n\nMore precisely, we shall prove that the limiting eigenvalue problem in $(0,1)$ is\n\\begin{equation}\\label{classiceigenode}\n\\begin{cases}\n\\frac{1-\\sigma^2}{g} (gh'')''- \\frac{\\tau}{g}(gh')' + h = \\theta h, &\\text{in $(0,1)$,}\\\\\nh(0)=h(1)=0,&\\\\\nh'(0)=h'(1)=0.&\n\\end{cases}\n\\end{equation}\nNote that the weak formulation of (\\ref{classiceigenode}) is\n\\[\n(1-\\sigma^2)\\int_0^1 h''\\psi'' g\\,dx+\\tau \\int_0^1 h'\\psi' g\\,dx+\\int_0^1 h\\psi g \\,dx=\\theta \\int_0^1 h\\psi g \\,dx,\n\\]\nfor all $\\psi\\in H^2_0(0,1)$, where $h$ is to be found in the Sobolev space $H^2_0(0,1)$. 
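Although not needed for the proofs, the limiting problem \\eqref{classiceigenode} is easy to approximate numerically, which provides a sanity check on the weak formulation above. The sketch below is our illustration, not from the paper: the routine name and discretization parameters are ours. It discretizes the clamped problem by second-order finite differences, encoding $h'(0)=h'(1)=0$ with ghost nodes. For $g\\equiv 1$ and $\\sigma=\\tau=0$ the problem reduces to the clamped beam equation $h''''+h=\\theta h$, whose first eigenvalue is $1+\\mu_1^4\\approx 501.6$, where $\\mu_1\\approx 4.7300$ solves $\\cos\\mu\\cosh\\mu=1$.

```python
import numpy as np

def limit_eigvals(g, sigma=0.0, tau=0.0, N=400, k=3):
    """Approximate the smallest k eigenvalues theta of
       (1-sigma^2)/g (g h'')'' - (tau/g)(g h')' + h = theta h  on (0,1),
       h(0)=h(1)=h'(0)=h'(1)=0,
    by second-order finite differences with ghost nodes."""
    dx = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    gv = g(x)
    n = N - 1                                 # interior nodes 1..N-1

    # s_j ~ h''(x_j), j = 0..N; ghosts h_{-1}=h_1, h_{N+1}=h_{N-1}
    # encode h'(0)=h'(1)=0, and h_0 = h_N = 0 is built in.
    S = np.zeros((N + 1, n))
    S[0, 0] = 2.0 / dx**2
    S[N, n - 1] = 2.0 / dx**2
    for j in range(1, N):
        for coef, off in ((1.0, -1), (-2.0, 0), (1.0, 1)):
            jj = j + off
            if 1 <= jj <= N - 1:
                S[j, jj - 1] += coef / dx**2

    # (g h'')'' at interior nodes: outer second difference of w = g * h''
    D2 = np.zeros((n, N + 1))
    for i in range(1, N):
        D2[i - 1, i - 1:i + 2] = np.array([1.0, -2.0, 1.0]) / dx**2
    K4 = D2 @ (gv[:, None] * S)

    # -(g h')' at interior nodes (conservative differences, h_0 = h_N = 0)
    gm = 0.5 * (gv[:-1] + gv[1:])             # g at midpoints
    K2 = np.zeros((n, n))
    for i in range(1, N):
        K2[i - 1, i - 1] = (gm[i - 1] + gm[i]) / dx**2
        if i >= 2:
            K2[i - 1, i - 2] = -gm[i - 1] / dx**2
        if i <= N - 2:
            K2[i - 1, i] = -gm[i] / dx**2

    L = ((1.0 - sigma**2) * K4 + tau * K2) / gv[1:N, None] + np.eye(n)
    return np.sort(np.linalg.eigvals(L).real)[:k]
```

Note that for $g\\equiv 1$ and $\\tau=0$ the discrete operator is exactly $(1-\\sigma^2)K_4+I$, so $\\theta_n-1$ scales by $1-\\sigma^2$ at the discrete level, mirroring the factor $1-\\sigma^2$ in \\eqref{classiceigenode}.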
In the sequel, we shall denote by $L^2_g(0,1)$ the Hilbert space $L^2((0,1); g(x)dx)$.\n\n\\subsection{Finding the limiting problem}\n\\label{subsection: finding limit prb}\n\nIn order to use thin domain techniques in the spirit of \\cite{HR}, we need to fix a reference domain $R_1$ and pull back the eigenvalue problem defined on $R_{\\epsilon}$ onto $R_1$ by means of a suitable diffeomorphism.\n\nLet $R_1$ be the rescaled domain obtained by setting $\\epsilon = 1$ in the definition of $R_\\epsilon$ (see \\eqref{def: R_eps}). For any fixed $\\epsilon >0$, let $\\Phi_\\epsilon$ be the map from $R_1$ to $R_\\epsilon$ defined by $\\Phi_\\epsilon(x',y') = (x', \\epsilon y')= (x,y)$ for all $(x',y') \\in R_1$. We consider the composition operator $T_\\epsilon$ from $L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ to $L^2(R_1)$ defined by\n\\[\nT_\\epsilon u(x',y') = u \\circ \\Phi_\\epsilon (x', y') = u(x', \\epsilon y')\\, ,\n\\]\nfor all $u \\in L^2(R_\\epsilon)$, $(x',y') \\in R_1$. We also endow the spaces $H^2(R_1)$ and $H^2(R_\\epsilon)$ with the norms defined by\n\\begin{multline}\n\\|\\varphi\\|_{H^2_{\\epsilon, \\sigma, \\tau}(R_1)}^2 =\\int_{R_1} \\Bigg((1-\\sigma) \\Bigg[ \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}}^2 + \\frac{2}{\\epsilon^2}\\abs*{\\frac{\\partial^2 \\varphi}{\\partial x \\partial y}}^2 + \\frac{1}{\\epsilon^4} \\abs*{\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 \\Bigg]\\\\\n+ \\sigma \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2} + \\frac{1}{\\epsilon^2}\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 + \\tau \\Bigg[ \\abs*{\\frac{\\partial \\varphi}{\\partial x}}^2 + \\frac{1}{\\epsilon^2}\\abs*{\\frac{\\partial \\varphi}{\\partial y}}^2 \\Bigg] + \\abs{\\varphi}^2\\, \\Bigg)dxdy\\, ,\n\\end{multline}\n\n\\begin{multline}\n\\|\\varphi\\|_{H^2_{\\sigma, \\tau}(R_\\epsilon)}^2 =\\int_{R_\\epsilon} \\Bigg((1-\\sigma) \\Bigg[ \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}}^2 + 2\\abs*{\\frac{\\partial^2 \\varphi}{\\partial x \\partial y}}^2 
+ \\abs*{\\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 \\Bigg]\\\\\n+ \\sigma \\abs*{\\frac{\\partial^2 \\varphi}{\\partial x^2}+ \\frac{\\partial^2 \\varphi}{\\partial y^2}}^2 + \\tau \\Bigg[ \\abs*{\\frac{\\partial \\varphi}{\\partial x}}^2 + \\abs*{\\frac{\\partial \\varphi}{\\partial y}}^2 \\Bigg] + \\abs{\\varphi}^2\\,\\Bigg) dxdy\\, .\n\\end{multline}\nIt is not difficult to see that if $\\varphi\\in H^2(R_\\epsilon)$ then\n\\[\\|T_\\epsilon \\varphi\\|_{H^2_{\\epsilon,\\sigma,\\tau}(R_1)}^2= \\epsilon^{-1} \\|\\varphi\\|_{H^2_{\\sigma,\\tau}(R_\\epsilon)}^2.\\]\n\nWe consider the following Poisson problem with datum $f_\\epsilon \\in L^2(R_\\epsilon)$:\n\\begin{equation} \\label{PDE: R_eps f_eps}\n\\begin{cases}\n\\Delta^2 v_\\epsilon - \\tau \\Delta v_\\epsilon + v_\\epsilon = f_\\epsilon, &\\text{in $R_\\epsilon$},\\\\\n(1-\\sigma) \\frac{\\partial^2 v_\\epsilon}{\\partial n_\\epsilon^2} + \\sigma \\Delta v_\\epsilon = 0, &\\textup{on $\\Gamma_\\epsilon$},\\\\\n\\tau \\frac{\\partial v_\\epsilon}{\\partial n_\\epsilon} - (1-\\sigma) \\Div_{\\Gamma_\\epsilon}(D^2v_\\epsilon \\cdot n_\\epsilon)_{\\Gamma_\\epsilon} - \\frac{\\partial(\\Delta v_\\epsilon)}{\\partial n_\\epsilon} = 0, &\\textup{on $\\Gamma_\\epsilon$,}\\\\\nv_\\epsilon = 0 = \\frac{\\partial v_\\epsilon}{\\partial n_\\epsilon}, &\\text{on $L_\\epsilon$.}\n\\end{cases}\n\\end{equation}\nNote that the energy space associated with Problem \\eqref{PDE: R_eps f_eps} is exactly $H^2_{L_{\\epsilon}}(R_\\epsilon)$.\nBy setting $\\tilde{v}_\\epsilon = v_\\epsilon (x', \\epsilon y')$, $\\tilde{f}_\\epsilon = f_\\epsilon(x', \\epsilon y')$ and pulling back problem (\\ref{PDE: R_eps f_eps}) to $R_1$ by means of $\\Phi_\\epsilon$, we get the following equivalent problem in $R_1$ in the unknown $\\tilde{v}_\\epsilon $ (we use again the variables $(x,y)$ instead of $(x',y')$ to simplify the notation):\n{\\small \\begin{equation} \\label{PDE: R_1}\n\\begin{cases}\n\\frac{\\partial^4 
\\tilde{v}_\\epsilon}{\\partial x^4} + \\frac{2}{\\epsilon^2} \\frac{\\partial^4 \\tilde{v}_\\epsilon}{\\partial x^2 \\partial y^2} + \\frac{1}{\\epsilon^4} \\frac{\\partial^4 \\tilde{v}_\\epsilon}{\\partial y^4} - \\tau \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\Big) + \\tilde{v}_\\epsilon = \\tilde{f}_\\epsilon, &\\text{in $R_1$},\\\\\n(1-\\sigma) \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2}\\tilde{n}_x^2 + \\frac{2}{\\epsilon} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x \\partial y}\\tilde{n}_x \\tilde{n}_y + \\frac{1}{\\epsilon^2}\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2}\\tilde{n}_y^2 \\Big) + \\sigma \\Big( \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\Big)= 0, &\\textup{on $\\Gamma_1$},\\\\\n\\tau \\Big(\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial x} \\tilde{n}_x + \\frac{1}{\\epsilon} \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\tilde{n}_y \\Big) - (1-\\sigma) \\Div_{\\Gamma_{1,\\epsilon}}(D_\\epsilon^2 \\tilde{v}_\\epsilon \\cdot \\tilde{n})_{\\Gamma_{1,\\epsilon}} - \\nabla_\\epsilon(\\Delta_\\epsilon \\tilde{v}_\\epsilon) \\cdot \\tilde{n} = 0, &\\textup{on $\\Gamma_{1}$,}\\\\\n\\tilde{v}_\\epsilon = 0 = \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial x} \\tilde{n}_x + \\frac{1}{\\epsilon}\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\tilde{n}_y , &\\text{on $L_1$.}\n\\end{cases}\n\\end{equation}}\nHere $\\tilde{n} = (\\tilde{n}_x, \\tilde{n}_y) = (n_x, \\epsilon^{-1}n_y)$ and the operators $\\Delta_\\epsilon, \\nabla_\\epsilon$ are the standard differential operators associated with $(\\partial_x, \\epsilon^{-1} \\partial_y)$. 
Moreover,\n\\[\\Div_{\\Gamma_{1,\\epsilon}} F = \\frac{\\partial F_1}{\\partial x} + \\frac{1}{\\epsilon}\\frac{\\partial F_2}{ \\partial y} - \\tilde{n} \\nabla_{\\!\\epsilon} F\\, \\tilde{n}\\]\nand $(F)_{\\Gamma_{1,\\epsilon}} = F - (F, \\tilde{n})\\, \\tilde{n}$ for any vector field $F =(F_1,F_2)$.\n\nAssume now that the data $f_{\\epsilon }$, $\\epsilon>0$, are such that $(\\tilde{f}_\\epsilon)_{\\epsilon>0}$ is an equibounded family in $L^2(R_1)$, i.e.,\n\\begin{equation} \\label{hypotesis on f eps}\n\\int_{R_1} \\abs{\\tilde{f}_\\epsilon}^2\\,dxdy \\leq c, \\quad \\textup{or equivalently} \\quad \\int_{R_\\epsilon} \\abs*{f_\\epsilon}^2 dxdy \\leq c \\epsilon\\, ,\n\\end{equation}\nfor all $\\epsilon>0$, where $c$ is a positive constant not depending on $\\epsilon$.\n\nWe plan to pass to the limit in \\eqref{PDE: R_1} as $\\epsilon \\to 0$ by arguing as follows. If $\\tilde{v}_\\epsilon \\in H^2_{L_1}(R_1)$ is the solution to problem \\eqref{PDE: R_1}, then we have the following integral equality\n\\small\n\\begin{multline}\n\\label{eq: weak formulation R1}\n(1-\\sigma) \\int_{R_1} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} \\frac{\\partial^2 \\varphi }{\\partial x^2} + \\frac{2}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x \\partial y} \\frac{\\partial^2 \\varphi }{\\partial x \\partial y} + \\frac{1}{\\epsilon^4} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2} \\frac{\\partial^2 \\varphi }{\\partial y^2} dx\\\\\n+ \\sigma \\int_{R_1} \\Big(\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2} \\Big) \\Big( \\frac{\\partial^2 \\varphi }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\varphi }{\\partial y^2} \\Big) dx\\\\\n + \\tau \\int_{R_1} \\frac{\\partial \\tilde{v}_\\epsilon }{\\partial x} \\frac{\\partial \\varphi }{\\partial x} + \\frac{1}{\\epsilon^2} \\frac{\\partial \\tilde{v}_\\epsilon 
}{\\partial y} \\frac{\\partial \\varphi }{\\partial y} dx + \\int_{R_1} \\tilde{v}_\\epsilon \\varphi dx = \\int_{R_1} \\tilde{f}_\\epsilon \\varphi dx\n\\end{multline}\n\\normalsize\nfor all $\\varphi \\in H^2_{L_1}(R_1)$. By choosing $\\varphi = \\tilde{v}_\\epsilon$ we deduce the following a priori estimate:\n\\small\n\\begin{multline}\n\\label{ineq: apriori ineq tilde v eps}\n(1-\\sigma) \\int_{R_1} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2}}^2 + \\frac{2}{\\epsilon^2} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x \\partial y}}^2 + \\frac{1}{\\epsilon^4} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2}}^2 dx + \\sigma \\int_{R_1} \\abs*{\\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial x^2} + \\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon }{\\partial y^2}}^2 dx\\\\\n+ \\tau \\int_{R_1} \\abs*{\\frac{\\partial \\tilde{v}_\\epsilon }{\\partial x}}^2 + \\frac{1}{\\epsilon^2} \\abs*{\\frac{\\partial \\tilde{v}_\\epsilon }{\\partial y}}^2 dx + \\int_{R_1} \\abs{\\tilde{v}_\\epsilon}^2 dx \\leq \\frac{1}{2} \\int_{R_1} \\abs{\\tilde{f}_\\epsilon}^2\\,dx + \\frac{1}{2} \\int_{R_1} \\abs{\\tilde{v}_\\epsilon}^2\\,dx\n\\end{multline}\n\\normalsize\nfor all $\\epsilon>0$. This implies that $\\norma{\\tilde{v}_\\epsilon}_{H^2_{\\epsilon,\\sigma, \\tau}(R_1)} \\leq C$ for all $\\epsilon> 0$, and in particular $\\norma{\\tilde{v}_\\epsilon}_{H^2(R_1)} \\leq C(\\sigma, \\tau)$ for all $\\epsilon>0$; hence, there exists $v \\in H^2(R_1)$ such that, up to a subsequence,\n$\\tilde{v}_\\epsilon \\to v$ weakly in $H^2(R_1)$ and strongly in $H^1(R_1)$. 
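The rescalings used here all hinge on the change-of-variables identity $\\|T_\\epsilon \\varphi\\|^2_{H^2_{\\epsilon,\\sigma,\\tau}(R_1)} = \\epsilon^{-1} \\|\\varphi\\|^2_{H^2_{\\sigma,\\tau}(R_\\epsilon)}$ stated before \\eqref{PDE: R_eps f_eps}. As a symbolic sanity check (our sketch, not from the paper), one can verify the identity for a sample polynomial on the rectangular channel $g\\equiv 1$, where $R_\\epsilon=(0,1)\\times(0,\\epsilon)$:

```python
import sympy as sp

x, y, e = sp.symbols('x y epsilon', positive=True)
s, t = sp.symbols('sigma tau', positive=True)

# arbitrary smooth sample function on R_eps = (0,1) x (0, eps)   (g == 1)
phi = x**2 * (1 - x)**2 * y**2 * (3 - y)

def energy(u, a):
    """Integrand of the H^2_{sigma,tau} energy; each y-derivative is
    weighted by a  (a = 1 on R_eps, a = 1/eps after the pull-back)."""
    uxx = sp.diff(u, x, 2)
    uxy = a * sp.diff(u, x, y)
    uyy = a**2 * sp.diff(u, y, 2)
    ux, uy = sp.diff(u, x), a * sp.diff(u, y)
    return ((1 - s) * (uxx**2 + 2 * uxy**2 + uyy**2)
            + s * (uxx + uyy)**2 + t * (ux**2 + uy**2) + u**2)

phit = phi.subs(y, e * y)                          # T_eps phi, defined on R_1
lhs = sp.integrate(energy(phit, 1 / e), (y, 0, 1), (x, 0, 1))
rhs = sp.integrate(energy(phi, 1), (y, 0, e), (x, 0, 1)) / e
assert sp.simplify(lhs - rhs) == 0                 # identity holds term by term
```

The check is of course no substitute for the (elementary) proof, but it confirms in particular the $\\epsilon^{-2}$ weight on the first-order $y$-derivative in the $\\tau$-term, consistent with \\eqref{eq: weak formulation R1}.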
Moreover, from \\eqref{ineq: apriori ineq tilde v eps} we deduce that\n\\begin{align}\n\\label{ineq: decay y derivatives 1} &\\norma*{\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial x \\partial y}}_{L^2(R_1)} \\leq C \\epsilon, \\quad\\quad \\norma*{\\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y}}_{L^2(R_1)} \\leq C \\epsilon , \\\\\n\\label{ineq: decay y derivatives 2}&\\norma*{\\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2}}_{L^2(R_1)} \\leq C \\epsilon^2, \\end{align}\nfor all $\\epsilon > 0$; hence there exists $u \\in L^2(R_1)$ such that, up to a subsequence,\n\\begin{equation}\\label{weakly-to-u}\n\\frac{1}{\\epsilon^2} \\frac{\\partial^2 \\tilde{v}_\\epsilon}{\\partial y^2} \\rightharpoonup u, \\hbox{ weakly in }L^2(R_1)\n\\end{equation}\n as $\\epsilon \\to 0$. By \\eqref{ineq: decay y derivatives 1} we deduce that the limit function $v$ is constant in $y$. Indeed, if we choose any function $\\phi \\in C^\\infty_c(R_1)$, then\n\\[\n\\int_{R_1} v \\frac{\\partial \\phi}{ \\partial y} = \\lim_{\\epsilon \\to 0} \\int_{R_1} \\tilde{v}_\\epsilon \\frac{\\partial \\phi}{ \\partial y} = - \\lim_{\\epsilon \\to 0} \\int_{R_1} \\frac{\\partial \\tilde{v}_\\epsilon}{\\partial y} \\phi = 0,\n\\]\nhence $\\frac{\\partial v}{\\partial y} = 0$ and then $v(x,y) \\equiv v(x)$ for almost all $(x,y) \\in R_1$. This suggests choosing test functions $\\psi$ depending only on $x$ in the weak formulation \\eqref{eq: weak formulation R1}. Possibly passing to a subsequence, there exists $f \\in L^2(R_1)$ such that\n\\[\n\\tilde{f}_\\epsilon \\rightharpoonup f \\quad \\quad \\text{in $L^2(R_1)$, as $\\epsilon \\to 0$}.\n\\]\nLet $\\psi \\in H^2_0(0,1)$. Then $\\psi \\in H^2(R_1)$ (here it is understood that the function is extended to the whole of $R_1$ by setting $\\psi(x,y) = \\psi(x)$ for all $(x,y) \\in R_1$) and clearly $\\psi \\equiv 0$ on $L_1$. 
Use $\\psi$ as a test function in \\eqref{eq: weak formulation R1}, pass to the limit as $\\epsilon \\to 0$ and consider \\eqref{weakly-to-u} to get\n\\begin{equation} \\label{eq: limit probl weak x}\n\\int_0^1 \\Big( \\frac{\\partial^2 v}{\\partial x^2} \\frac{\\partial^2 \\psi}{\\partial x^2} + \\sigma \\mathcal{M}(u) \\frac{\\partial^2 \\psi}{\\partial x^2} + \\tau \\frac{\\partial v}{\\partial x} \\frac{\\partial \\psi}{\\partial x} + v \\psi \\Big) g(x)\\, dx = \\int_0^1 \\mathcal{M}(f) \\psi g(x)\\, dx\n\\end{equation}\nfor all $\\psi \\in H^2_0(0,1)$.\nHere, the averaging operator $\\mathcal{M} $ is defined from $L^2(R_1)$ to $L^2_g(0,1)$ by\n\\[\n\\mathcal{M} h (x) = \\frac{1}{ g(x)} \\int_0^{ g(x)} h(x,y)\\, dy\\, ,\n\\]\nfor all $h\\in L^2(R_1)$ and for almost all $x \\in (0,1)$.\n\nFrom \\eqref{eq: limit probl weak x} we deduce that\n\\[\n\\frac{1}{g} (v'' g)'' + \\frac{\\sigma}{g} (\\mathcal{M}(u) g)'' - \\frac{\\tau}{g} (v' g)' + v = \\mathcal{M}(f), \\quad\\quad \\text{in (0,1)},\n\\]\nwhere the equality is understood in the sense of distributions.\\\\\nComing back to \\eqref{eq: weak formulation R1} we may also choose test functions $\\varphi(x,y) = \\epsilon^2 \\zeta(x,y)$, where $\\zeta \\in H_{L_1}^2(R_1)$. Using \\eqref{ineq: decay y derivatives 1}, \\eqref{ineq: decay y derivatives 2} and letting $\\epsilon \\to 0$ we deduce\n\\begin{equation*}\n(1-\\sigma) \\int_{R_1} u \\frac{\\partial^2 \\zeta}{ \\partial y^2} + \\sigma \\int_{R_1}\\Big( \\frac{\\partial^2 v}{\\partial x^2} \\frac{\\partial^2 \\zeta}{\\partial y^2} + u \\frac{\\partial^2 \\zeta}{\\partial y^2} \\Big) = 0\n\\end{equation*}\nwhich can be rewritten as\n\\begin{equation}\n\\label{eq: identity 2 order deriv}\n\\int_{R_1} \\Big( u + \\sigma \\frac{\\partial^2 v}{\\partial x^2} \\Big) \\frac{\\partial^2 \\zeta}{\\partial y^2} = 0\n\\end{equation}\nfor all $\\zeta \\in H_{L_1}^2(R_1)$. 
In particular this holds for all $\\zeta \\in C^\\infty_c(R_1)$; hence, in the sense of distributions,\n\\begin{equation} \\label{eq: second derivative yy = 0}\n\\frac{\\partial^2}{\\partial y^2}\\Big( u + \\sigma \\frac{\\partial^2 v}{\\partial x^2} \\Big) = 0.\n\\end{equation}\nHence, $u(x,y) + \\sigma \\frac{\\partial^2 v}{\\partial x^2} = \\psi_1(x) + \\psi_2(x) y$ for almost all $(x,y) \\in R_1$ and for some functions $\\psi_1, \\psi_2 \\in L^2(0,1)$, and then \\eqref{eq: identity 2 order deriv} can be written as\n\\begin{equation} \\label{eq: limit probl weak y}\n \\int_{R_1} (\\psi_1(x)+y\\psi_2(x)) \\frac{\\partial^2 \\zeta}{\\partial y^2} = 0.\n\\end{equation}\nIntegrating twice by parts in $y$ in equation \\eqref{eq: limit probl weak y}, we deduce that\n\\begin{equation}\n\\label{eq: boundary identity}\n- \\int_{\\partial R_1} \\psi_2(x) \\zeta n_y dS + \\int_{\\partial R_1} (\\psi_1(x)+y\\psi_2(x)) \\frac{\\partial \\zeta}{\\partial y} n_y dS = 0\n\\end{equation}\nfor all $\\zeta \\in H_{L_1}^2(R_1)$. We now choose particular functions $\\zeta$ in \\eqref{eq: boundary identity}. Consider first $b=\\frac{1}{2}\\min_{x\\in [0,1]} g(x)>0$, so that the rectangle $(0,1)\\times (0,b)\\subset R_1$, and consider a function $\\eta=\\eta(y)$ with $\\eta\\in C^\\infty(0,b)$ such that $\\eta(y)=1+\\alpha y$ in a neighbourhood of $y=0$.\n\n\\section{Spectral convergence}\n\\label{sec: spectral convergence}\n\nLet $\\mathcal{H}_\\epsilon$, $\\epsilon \\geq 0$, be a family of Hilbert spaces. We assume the existence of a family of linear operators $\\mathcal{E}_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_0, \\mathcal{H}_\\epsilon)$, $\\epsilon >0$, such that\n\\begin{equation}\n\\label{def: basic property E_eps}\n\\norma{\\mathcal{E}_\\epsilon u_0}_{\\mathcal{H}_\\epsilon} \\to \\norma{u_0}_{\\mathcal{H}_0},\\ \\ {\\rm as}\\ \\epsilon\\to 0,\n\\end{equation}\nfor all $u_0 \\in \\mathcal{H}_0$.\n\n\n\n\\begin{definition} Let $\\mathcal{H}_\\epsilon$ and $\\mathcal{E}_\\epsilon$ be as above.\n\\begin{enumerate}[label =(\\roman*)]\n\\item Let $u_\\epsilon\\in \\mathcal{H}_\\epsilon$, $\\epsilon >0$. 
We say that $u_\\epsilon$ $\\mathcal{E}$-converges to $u$ as $\\epsilon \\to 0$ if $\\norma{u_\\epsilon - \\mathcal{E}_\\epsilon u}_{\\mathcal{H}_\\epsilon} \\to 0$ as $\\epsilon \\to 0$. We write $u_\\epsilon \\overset{E}{\\longrightarrow} u$.\n\\item Let $ B_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_\\epsilon)$, $\\epsilon >0$. We say that $B_\\epsilon$ $\\mathcal{E}\\mathcal{E}$-converges to a linear operator $B_0 \\in \\mathcal{L}(\\mathcal{H}_0)$ if $B_\\epsilon u_\\epsilon \\overset{E}{\\longrightarrow} B_0 u$ whenever $u_\\epsilon \\overset{E}{\\longrightarrow} u\\in \\mathcal{H}_0$. We write $B_\\epsilon \\overset{EE}{\\longrightarrow} B_0$.\n\\item Let $ B_\\epsilon \\in \\mathcal{L}(\\mathcal{H}_\\epsilon)$, $ \\epsilon >0$. We say that $B_\\epsilon$ compactly converges to $B_0 \\in \\mathcal{L}(\\mathcal{H}_0)$ (and we write $B_\\epsilon \\overset{C}{\\longrightarrow} B_0$) if the following two conditions are satisfied:\n \\begin{enumerate}[label=(\\alph*)]\n \\item $B_\\epsilon \\overset{EE}{\\longrightarrow} B_0$ as $\\epsilon \\to 0$;\n \\item for any family $u_\\epsilon \\in \\mathcal{H}_{\\epsilon}$, $\\epsilon>0$, such that $\\norma{u_\\epsilon}_{\\mathcal{H}_\\epsilon}=1$ for all $\\epsilon > 0$, there exist a subsequence $B_{\\epsilon_k}u_{\\epsilon_k}$ of $B_\\epsilon u_\\epsilon$ and $\\bar{u} \\in \\mathcal{H}_0$ such that $B_{\\epsilon_k}u_{\\epsilon_k} \\overset{E}{\\longrightarrow} \\bar{u}$ as $k \\to \\infty$.\n \\end{enumerate}\n\\end{enumerate}\n\\end{definition}\n\nFor any $\\epsilon \\geq 0$, let $A_\\epsilon$ be a (densely defined) closed, nonnegative differential operator on $\\mathcal{H}_\\epsilon$ with domain $\\mathscr{D}(A_\\epsilon) \\subset \\mathcal{H}_\\epsilon$. 
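A minimal finite-dimensional toy model (our illustration, not from the paper) may help fix these notions: take $\\mathcal{H}_0=\\numberset{R}$, $\\mathcal{H}_\\epsilon=\\numberset{R}^2$, $\\mathcal{E}_\\epsilon u=(u,0)$, and $B_\\epsilon=\\mathrm{diag}(1,\\epsilon)$, which is the resolvent of $A_\\epsilon=\\mathrm{diag}(1,\\epsilon^{-1})$. Then $B_\\epsilon$ compactly converges to $B_0=1$ on $\\numberset{R}$: the eigenvalue $\\epsilon^{-1}$ of $A_\\epsilon$ escapes to $+\\infty$, while the eigenvalue $1$ persists in the limit.

```python
import numpy as np

def B(eps):                       # resolvent of A_eps = diag(1, 1/eps)
    return np.diag([1.0, eps])

def E(u):                         # extension operator E_eps : R -> R^2
    return np.array([u, 0.0])

u0 = 2.0
for eps in (1e-1, 1e-3, 1e-6):
    # a family that E-converges to u0 as eps -> 0
    u_eps = E(u0) + np.array([0.0, np.sqrt(eps)])
    # (a) EE-convergence: B_eps u_eps is close to E_eps(B_0 u0) = E(u0)
    assert np.linalg.norm(B(eps) @ u_eps - E(u0)) <= 2.0 * eps ** 1.5
    # (b) compactness: images of unit vectors have E-convergent subsequences;
    #     here the transverse unit vector e2 is mapped near E(0)
    assert np.linalg.norm(B(eps) @ np.array([0.0, 1.0])) <= 2.0 * eps
```

In the dumbbell setting, the roles of $\\numberset{R}$ and $\\numberset{R}^2$ are played by $L^2_g(0,1)$ and $L^2(R_\\epsilon;\\epsilon^{-1}dxdy)$, and the escaping eigenvalue loosely mimics modes whose energy blows up in the singular limit.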
We assume for simplicity that $0$ does not belong to the spectrum of $A_{\\epsilon}$ and that\n\\[\n\\textup{(H1): $A_\\epsilon$ has compact resolvent $B_\\epsilon := A_\\epsilon^{-1}$ for any $\\epsilon \\in [0,1)$,}\n\\]\nand\n\\[\n\\textup{(H2): $B_\\epsilon \\overset{C}{\\longrightarrow} B_0 $, as $\\epsilon \\to 0$.}\n\\]\nGiven an eigenvalue $\\lambda$ of $A_0$ we consider the generalized eigenspace $S(\\lambda, A_0) := Q(\\lambda, A_0)\\mathcal{H}_0$, where\n\\[\nQ(\\lambda, A_0) = \\frac{1}{2\\pi i} \\int_{|\\xi - \\lambda| = \\delta} (\\xi \\mathbb{I} - A_0)^{-1}\\, d\\xi\n\\]\nand $\\delta > 0$ is such that the disk $\\{\\xi \\in \\mathbb{C} : |\\xi - \\lambda| \\leq \\delta \\}$ does not contain any eigenvalue except for $\\lambda$. In a similar way, if (H1),(H2) hold, then we can define $S(\\lambda, A_{\\epsilon}) := Q(\\lambda, A_{\\epsilon})\\mathcal{H}_{\\epsilon}$, where\n\\[\nQ(\\lambda, A_\\epsilon) = \\frac{1}{2\\pi i} \\int_{|\\xi - \\lambda| = \\delta} (\\xi \\mathbb{I} - A_\\epsilon)^{-1}\\, d\\xi.\n\\]\nThis definition makes sense because for $\\epsilon$ small enough $(\\xi \\mathbb{I} - A_\\epsilon)$ is invertible for all $\\xi$ such that $|\\xi - \\lambda| = \\delta$, see \\cite[Lemma 4.9]{ACJdE}. Then the following theorem holds.\n\n\\begin{theorem}\n\\label{thm: E conv -> spectral conv}\nLet $A_\\epsilon$, $A_0$ be operators as above satisfying conditions (H1), (H2). Then the operators $A_{\\epsilon}$ {\\rm are spectrally convergent} to $A_0$ as $\\epsilon\\to 0$, i.e., the following statements hold:\n\\begin{enumerate}[label=(\\roman*)]\n\\item If $\\lambda_0$ is an eigenvalue of $A_0$, then there exists a sequence of eigenvalues $\\lambda_\\epsilon$ of $A_\\epsilon$ such that $\\lambda_\\epsilon \\to \\lambda_0$ as $\\epsilon \\to 0$. 
Conversely, if $\\lambda_\\epsilon$ is an eigenvalue of $A_\\epsilon$ for all $\\epsilon >0$, and $\\lambda_\\epsilon \\to \\lambda_0$, then $\\lambda_0$ is an eigenvalue of $A_0$.\n\\item There exists $\\epsilon_0 > 0$ such that the dimension of the generalized eigenspace $S(\\lambda_0, A_\\epsilon)$ equals the dimension of $S(\\lambda_0, A_0)$, for any eigenvalue $\\lambda_0$ of $A_0$, for any $\\epsilon \\in [0,\\epsilon_0)$.\n\\item If $\\varphi_0 \\in S(\\lambda_0, A_0)$ then for any $\\epsilon >0$ there exists $\\varphi_\\epsilon \\in S(\\lambda_0, A_\\epsilon)$ such that $\\varphi_\\epsilon \\overset{E}{\\longrightarrow} \\varphi_0$ as $\\epsilon \\to 0$.\n\\item If $\\varphi_\\epsilon \\in S(\\lambda_0, A_\\epsilon)$ satisfies $\\norma{\\varphi_\\epsilon}_{\\mathcal{H}_\\epsilon} = 1$ for all $\\epsilon >0$, then $\\varphi_\\epsilon $, $\\epsilon >0$, has an $\\mathcal{E}$-convergent subsequence whose limit is in $S(\\lambda_0, A_0)$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nSee \\cite[Theorem 4.10]{ACL}.\n\\end{proof}\n\nWe now apply Theorem \\ref{thm: E conv -> spectral conv} to problem \\eqref{PDE: R_eps}. To do so, we consider the following Hilbert spaces\n\\[\n\\mathcal{H}_\\epsilon = L^2(R_\\epsilon; \\epsilon^{-1}dxdy),\\ \\ {\\rm and}\\ \\ \\mathcal{H}_0 = L^2_g(0,1),\n\\]\nand we denote by $\\mathcal{E}_\\epsilon$ the extension operator from $L^2_g(0,1)$ to $L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$, defined by\n\\begin{equation}\n\\label{def: extension}\n(\\mathcal{E}_\\epsilon v)(x,y) = v(x),\n\\end{equation}\nfor all $v \\in L^2_g(0,1)$, for almost all $(x,y) \\in R_\\epsilon$. 
Clearly $\\norma{\\mathcal{E}_\\epsilon u_0}_{L^2(R_\\epsilon; \\epsilon^{-1}dxdy)} = \\norma{u_0}_{L^2_g(0,1)}$, hence $\\mathcal{E}_\\epsilon$ trivially satisfies property \\eqref{def: basic property E_eps}.\n\nWe consider the operators $A_{\\epsilon}=(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$, $A_0=(\\Delta^2 - \\tau \\Delta +I)_{D }$ on $\\mathcal{H}_{\\epsilon}$ and $\\mathcal{H}_0$, associated with the eigenvalue problems \\eqref{PDE: R_eps} and \\eqref{ODE: limit problem}, respectively. Namely, $(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$ is the operator $\\Delta^2 - \\tau \\Delta +I$ on $R_{\\epsilon}$ subject to Dirichlet boundary\nconditions on $L_{\\epsilon}$ and Neumann boundary conditions on $\\partial R_{\\epsilon}\\setminus L_{\\epsilon }$ as described in \\eqref{PDE: R_eps}. Similarly, $(\\Delta^2 - \\tau \\Delta +I)_{D }$\nis the operator $\\Delta^2 - \\tau \\Delta +I$ on $(0,1)$ subject to Dirichlet boundary conditions as described in \\eqref{ODE: limit problem}.\n\nThen we can prove the following\n\\begin{theorem}\\label{spectfin} The operators $(\\Delta^2 - \\tau \\Delta +I)_{ L_{\\epsilon }}$ spectrally converge to\\\\ $(\\Delta^2 - \\tau \\Delta +I)_{D }$ as $\\epsilon \\to 0$, in the sense of Theorem~\\ref{thm: E conv -> spectral conv}.\n\\end{theorem}\n\n\\begin{proof}\nIn view of Theorem \\ref{thm: E conv -> spectral conv}, it is sufficient to prove the following two facts:\n\\begin{enumerate}[label=(\\arabic*)]\n\\item if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ is such that $\\epsilon^{-1\\/2}\\norma{f_\\epsilon}_{L^2(R_\\epsilon)} = 1$ for any $\\epsilon >0$, and $v_\\epsilon$ is the corresponding solution of Problem \\eqref{PDE: R_eps f_eps}, then there exist a subsequence $\\epsilon_k \\to 0$ as $k \\to \\infty$ and $\\bar{v} \\in L^2_g(0,1)$ such that $v_{\\epsilon_k}$ $\\mathcal{E}$-converges to $\\bar{v}$ as $k \\to \\infty$.\n\\item if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ and 
$f_\\epsilon \\overset{E}{\\longrightarrow} f$ as $\\epsilon \\to 0$, then the corresponding solutions $v_\\epsilon$ of Problem \\eqref{PDE: R_eps f_eps} $\\mathcal{E}$-converge to the solution of Problem \\eqref{ODE: auxiliary problem sigma2} with datum $f$.\n\\end{enumerate}\nNote that (1) follows immediately from the computations in Section \\ref{subsection: finding limit prb}. Indeed, if $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ is as in (1), up to a subsequence, $\\tilde{f}_\\epsilon \\rightharpoonup f$ in $L^2(R_1)$, which implies that $\\tilde{v}_\\epsilon \\rightharpoonup v_0\\in H_0^2(0,1)$ in $H^2(R_1)$, where $v_0$ is the solution of Problem \\eqref{ODE: auxiliary problem sigma2}. This implies that $\\norma{v_\\epsilon - \\mathcal{E}_\\epsilon v_0}_{L^2(R_\\epsilon; \\epsilon^{-1}dxdy)} \\to 0$, hence (1) is proved.\\\\\nIn order to show (2) we take a sequence of functions $f_\\epsilon \\in L^2(R_\\epsilon; \\epsilon^{-1}dxdy)$ and $f\\in L^2_g(0,1)$ such that $\\epsilon^{-1\\/2}\\norma{f_\\epsilon - \\mathcal{E}_\\epsilon f}_{L^2(R_\\epsilon)} \\to 0$ as $\\epsilon \\to 0$. After a change of variable, this is equivalent to $\\norma{\\tilde{f}_\\epsilon - \\mathcal{E} f}_{L^2(R_1)} \\to 0$ as $\\epsilon \\to 0$. Arguing as in Section \\ref{subsection: finding limit prb}, one shows that $\\tilde{v}_\\epsilon \\rightharpoonup v$ in $H^2(R_1)$, where $v \\in H^2_0(0,1)$ solves problem \\eqref{ODE: auxiliary problem sigma2}. 
Hence $\\norma{\\tilde{v}_\\epsilon - \\mathcal{E} v}_{L^2(R_1)} \\to 0$ as $\\epsilon \\to 0$, or equivalently, $\\norma{v_\\epsilon - \\mathcal{E}_\\epsilon v}_{L^2(R_\\epsilon; \\epsilon^{-1}dxdy)} \\to 0$ as $\\epsilon \\to 0$, proving (2).\n\n\n\\end{proof}\n\n\n\n\n\n\n\\section{Conclusion}\\label{conclusionsec}\n\nRecall that the eigenpairs of problems \\eqref{PDE: main problem_eigenvalues}, \\eqref{PDE: Omega} are denoted by $(\\lambda_n(\\Omega_\\epsilon), \\varphi_n^\\epsilon)$, $(\\omega_n, \\varphi_n^\\Omega)_{n\\geq1}$ respectively, where the two families of eigenfunctions $\\varphi_n^\\epsilon$, $\\varphi_n^\\Omega$ are complete orthonormal bases of the spaces $L^2(\\Omega_{\\epsilon})$, $L^2(\\Omega )$, respectively. Denote now by $(h_n, \\theta_n)_{n\\geq 1}$ the eigenpairs of problem $\\eqref{ODE: limit problem}$\nwhere the eigenfunctions $h_n$ define an orthonormal basis of the space $L^2_g(0,1)$.\nIn the spirit of the definition of $\\lambda_n^{\\epsilon} $ given in Section 2, we set now $(\\lambda_n^0)_{n\\geq 1} = (\\omega_k)_{k \\geq 1} \\cup (\\theta_l )_{l\\geq 1}$, where it is understood that the eigenvalues are arranged in increasing order and repeated according to their multiplicity. 
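The increasing rearrangement of the two families $(\\omega_k)_{k\\geq1}$ and $(\\theta_l)_{l\\geq1}$ is simply a sorted merge with repetitions kept, which can be sketched in one line (the numerical values below are made up, purely for illustration):

```python
import heapq

# hypothetical sample eigenvalues, each family sorted with multiplicity:
omega = [1.2, 3.4, 3.4, 8.0]   # eigenvalues on Omega
theta = [2.5, 3.4, 9.1]        # eigenvalues of the limiting 1D problem

# lambda_n^0: increasing rearrangement of the union, multiplicities kept
lam0 = list(heapq.merge(omega, theta))
print(lam0)   # -> [1.2, 2.5, 3.4, 3.4, 3.4, 8.0, 9.1]
```

In particular an eigenvalue common to both families, such as $3.4$ above, appears in $(\\lambda_n^0)_{n\\geq1}$ with the sum of its multiplicities.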
For each $\\lambda_n^0$ we define the function $\\phi_n^0 \\in H^2(\\Omega) \\oplus H^2(R_\\epsilon)$ in the following way:\n\\begin{equation*}\n\\phi^0_n =\n\\begin{cases}\n \\varphi_k^\\Omega, &\\text{in $\\Omega$}\\\\\n 0, &\\text{in $R_\\epsilon$},\n\\end{cases}\n\\end{equation*}\nif $\\lambda_n^0 = \\omega_k$, for some $k \\in \\numberset{N}$; otherwise\n\\begin{equation*}\n\\phi^0_n=\n\\begin{cases}\n 0, &\\text{in $\\Omega$}, \\\\\n \\epsilon^{-1\\/2}\\mathcal{E}_\\epsilon h_l, &\\text{in $R_\\epsilon$}\n\\end{cases}\n\\end{equation*}\nif $\\lambda_n^0 = \\theta_l$, for some $l \\in \\numberset{N}$ (here we agree to order the eigenvalues and the eigenfunctions following the same rule used in the definition of $\\lambda_n^{\\epsilon}$ and $\\phi_n^{\\epsilon }$ in Section 2).\n\nFinally, if $x>0$ divides the spectrum $\\lambda_n(\\Omega_{\\epsilon})$ for all $\\epsilon >0 $ sufficiently small (see the end of Section 2) and $N(x)$ is the number of eigenvalues with $\\lambda_n(\\Omega_{\\epsilon})\\le x$ (counted with multiplicity), we define the projector $P_{x}^0$ from $L^2(\\Omega_\\epsilon)$ onto the linear span $[\\phi_1^{0}, \\dots, \\phi_{N(x)}^{0}]$ by setting\n\\[\nP_{x}^0 u = \\sum_{i=1}^{N(x)} (u,\\phi_i^0)_{L^2(\\Omega_\\epsilon)} \\phi_i^0\n\\]\nfor all $u\\in L^2(\\Omega_\\epsilon)$. (Note that choosing $x$ independent of $\\epsilon$ is possible by the limiting behaviour of the eigenvalues.) Then, using Theorems \\ref{thm: eigenvalues decomposition} and \\ref{spectfin} we deduce the following.\n\n\\begin{theorem} \\label{lastthm}\nLet $\\Omega_\\epsilon$, $\\epsilon>0$, be a family of dumbbell domains satisfying the H-Condition. 
Then the following statements hold:\n\\begin{enumerate}[label =(\\roman*)]\n\\item $\\lim_{\\epsilon \\to 0}\\, \\abs{\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^0} = 0$, for all $n\\in \\numberset{N} $.\n\\item For any $x$ dividing the spectrum,\n $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{H^2(\\Omega) \\oplus L^2(R_\\epsilon)} = 0$, for all $n = 1,\\dots, N(x)$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nThe convergence of the eigenvalues follows directly from Theorems \\ref{thm: eigenvalues decomposition} and \\ref{spectfin}. Indeed, by Theorem \\ref{thm: eigenvalues decomposition} we know that $|\\lambda_n(\\Omega_\\epsilon) - \\lambda_n^\\epsilon| \\to 0$ as $\\epsilon \\to 0$. If $\\lambda_n^\\epsilon = \\omega_k$ for some $k\\in \\numberset{N}$, for all sufficiently small $\\epsilon$, then we are done; otherwise, if $\\lambda_n^\\epsilon = \\theta_l^\\epsilon$ for some $l\\in \\numberset{N}$, for all sufficiently small $\\epsilon$, by Theorem \\ref{spectfin} we deduce that $\\theta_l^\\epsilon \\to \\theta_l$ as $\\epsilon \\to 0$, hence $|\\lambda_n(\\Omega_\\epsilon) - \\theta_l| \\leq |\\lambda_n(\\Omega_\\epsilon) - \\theta_l^\\epsilon| + |\\theta_l^\\epsilon - \\theta_l| \\to 0 $ as $\\epsilon \\to 0$.\\\\\nConsider now the convergence of the eigenfunctions. By Theorems~\\ref{thm: E conv -> spectral conv},~\\ref{spectfin} it follows that for any $\\epsilon >0$ there exists an orthonormal sequence of generalized eigenfunctions $\\delta_j^{\\epsilon }$ in $L^2(R_{\\epsilon}, \\epsilon^{-1}dxdy)$ associated with the eigenvalues\n$\\theta_j^{\\epsilon}$ of problem \\eqref{PDE: R_eps} such that for every $j\\in \\numberset{N}$\n\\begin{equation}\\label{lastthm1}\n\\norma{\\delta^\\epsilon_j - \\mathcal{E}_{\\epsilon} h_j}_{L^2(R_\\epsilon, \\epsilon^{-1}dxdy )} \\to 0,\n\\end{equation}\nas $\\epsilon \\to 0$. 
Recall that a generalized eigenfunction is an element of a generalized eigenspace, see Section~\\ref{sec: spectral convergence}. We set\n$\n\\gamma_j^\\epsilon =\\epsilon^{-1\/2}\\delta^\\epsilon_j\n$\nand we note that $\\gamma_j^{\\epsilon}$ is a sequence of generalized eigenfunctions of Problem \\eqref{PDE: R_eps} which is orthonormal in $L^2(R_{\\epsilon})$, as required in Theorem~\\ref{thm: eigenvalues decomposition}. Thus by Theorem~\\ref{thm: eigenvalues decomposition} $(ii)$, we deduce that\n\\small{\n\\begin{equation*}\n\\begin{split}\n&\\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x) }_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2} \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)} \\leq \\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon }_{L^2(R_\\epsilon)}\\\\\n&+ \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon - \\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i }_{L^2(R_\\epsilon)}\\\\\n&\\leq o(1) + \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} ( \\gamma_i^\\epsilon -\\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i) }_{L^2(R_\\epsilon)} + \\norma*{\\sum^{N(x)}_{i=1} (\\varphi_n^\\epsilon, \\gamma_i^\\epsilon - \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\gamma_i^\\epsilon}_{L^2(R_\\epsilon)}\\\\\n\\end{split}\n\\end{equation*}}\n\\normalsize\nHence, \n\\begin{equation}\\label{lastthm2}\n\\begin{split}\n&\\norma*{\\varphi_n^{\\epsilon} - \\sum^{N(x) }_{i=1} (\\varphi_n^\\epsilon, \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i)_{L^2(R_\\epsilon)} \\epsilon^{-1\/2} \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)}\\\\ \n&\\leq o(1) + C \\sum^{N(x)}_{i=1} 
\\norma{\\gamma_i^\\epsilon - \\epsilon^{-1\/2}\\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon)} =o(1) + C \\sum^{N(x)}_{i=1} \\norma{\\delta _i^\\epsilon - \\mathcal{E}_\\epsilon h_i}_{L^2(R_\\epsilon ,\\epsilon^{-1}dxdy )}.\n\\end{split}\n\\end{equation}\nSince the right-hand side of the last inequality in \\eqref{lastthm2} goes to zero as $\\epsilon \\to 0$ by (\\ref{lastthm1}), we conclude that $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{L^2(R_\\epsilon)} = 0$. Finally, the fact that $\\lim_{\\epsilon \\to 0}\\, \\norma{\\varphi^\\epsilon_{n} - P^0_{x} \\varphi^\\epsilon_{n}}_{H^2(\\Omega)} = 0$ follows directly from Theorem \\ref{thm: eigenvalues decomposition}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection*{Acknowledgment}\nThe first author is partially supported by grants MTM2012-31298, MTM2016-75465, ICMAT Severo Ochoa project SEV-2015-0554, MINECO, Spain and Grupo de Investigaci\\'on CADEDIF, UCM. The third author acknowledges financial support from the INDAM - GNAMPA project 2016 ``Equazioni differenziali con applicazioni alla meccanica\". The second and third authors are also members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).\\\\\nThe second and the third authors are very thankful to the Departamento de Matem\\'atica Aplicada of the Universidad Complutense de Madrid for the warm hospitality received on the occasion of their visits. 
The authors are thankful to an anonymous referee for pointing out a number of items in the reference list.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn semiconductors or insulators with broken inversion symmetry, an intrinsic electromechanical\ncoupling between stresses and electric polarizations can be observed, which is called the piezoelectric effect.\nTwo-dimensional (2D) materials can show unique properties compared to their bulk counterparts, and the reduction in dimensionality of 2D\nmaterials can often eliminate inversion symmetry, which allows these materials to become piezoelectric\\cite{q4}.\nIt has been theoretically reported that many 2D\nmaterials break inversion symmetry\nand hence can exhibit piezoelectricity, such as group IIA and IIB metal oxides, group-V binary semiconductors, transition metal dichalcogenides (TMD), Janus TMD and group III-V semiconductors\\cite{q7,q7-2,q7-3,q7-3-1,q7-3-2,q9,q10,q11,q12,qr,nr,nr1}. A majority of these structures have piezoelectric coefficients greater than the typical value of bulk piezoelectric materials (5 pm\/V). Significantly, the monolayer SnSe,\nSnS, GeSe and GeS with puckered structure possess giant piezoelectricity, as high as 75-251 pm\/V\\cite{q10}, which may have huge potential applications in the fields of sensors, actuators and energy harvesters.\n Depending on the crystal symmetry, a material can exhibit only in-plane piezoelectricity, as in TMD monolayers\\cite{q9}, both in-plane and out-of-plane piezoelectricity, as in 2D Janus monolayers\\cite{q7,q7-3}, or purely out-of-plane piezoelectricity, as in penta-graphene\\cite{q7-4}. 
It has been proved that strain may be an effective strategy to tune the piezoelectric properties of 2D materials\\cite{r1,r3}.\nThe experimentally discovered piezoelectricity of $\\mathrm{MoS_2}$\\cite{q5,q6}, MoSSe\\cite{q8} and $\\mathrm{In_2Se_3}$\\cite{q8-1} has triggered intense interest in the piezoelectric properties of 2D materials.\n\n\\begin{figure*}\n \n \\includegraphics[width=16.0cm]{Fig1.eps}\n \\caption{(Color online) The crystal structures of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$, including top and side views. The purple balls represent M atoms, the blue balls A atoms, and the yellow balls Z atoms. These crystal structures can be divided into three categories: A ($\\alpha_1$, $\\alpha_2$), B ($\\alpha_3$, $\\alpha_4$) and C ($\\alpha_5$, $\\alpha_6$) according to the relative positions of the M and A atoms. The different categories can be connected by a translation operation, and the different structures in the same category can be related by mirror or rotation operations. The green lines represent the mirror plane, translation direction or rotation axis.}\\label{t0}\n\\end{figure*}\n\n\nIt is meaningful to explore the piezoelectricity of new 2D families. Recently, the layered\n2D $\\mathrm{MoSi_2N_4}$ has been experimentally achieved by chemical vapor deposition (CVD)\\cite{msn}, which possesses semiconducting behavior, high strength and excellent ambient stability. Soon afterwards, 2D\n$\\mathrm{WSi_2N_4}$ was also synthesized by CVD. In the wake of $\\mathrm{MSi_2N_4}$ (M=Mo and W), the $\\mathrm{MA_2Z_4}$ family was constructed, with twelve different structures ($\\alpha_i$ and $\\beta_i$ ($i$=1 to 6)), by intercalating a $\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into an InSe-type $\\mathrm{A_2Z_2}$ monolayer\\cite{m20}. The $\\mathrm{MA_2Z_4}$ family spans a wide range of properties from semiconductor to topological insulator to Ising superconductor, depending on the number of valence electrons (VEC). 
The intrinsic piezoelectricity of monolayer $\\mathrm{XSi_2N_4}$ (X=Ti, Zr, Hf, Cr, Mo and W) in the $\\alpha_1$ phase has been studied by first-principles calculations\\cite{m21}, and the independent in-plane piezoelectric constant $d_{11}$ is predicted to range from 0.78 pm\/V to 1.24 pm\/V. The valley-dependent properties of monolayer $\\mathrm{MoSi_2N_4}$, $\\mathrm{WSi_2N_4}$ and $\\mathrm{MoSi_2As_4}$ have been investigated by first-principles calculations\\cite{g1}. The structural, mechanical, thermal, electronic, optical and photocatalytic properties of $\\mathrm{MoSi_2N_4}$ have been studied using hybrid density functional theory (HSE06-DFT)\\cite{g2}.\n\nIn this work, the role of the crystal structure in the intrinsic piezoelectricity of monolayer $\\mathrm{MSi_2N_4}$ (M=Mo and W) is studied by using density functional perturbation theory (DFPT)\\cite{pv6}. It is interesting to note that the same structural dependence of $d_{11}$ and $e_{11}$ is observed for monolayer $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$. Calculated results show that the atomic arrangement of the $\\mathrm{A_2Z_2}$ double layers has an important effect on the in-plane piezoelectric polarization of $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers.\nFinally, we investigate the intrinsic piezoelectricity of monolayer $\\alpha_1$- and $\\alpha_2$- $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$. It is found that the $\\mathrm{MA_2P_4}$ exhibit stronger piezoelectricity than the $\\mathrm{MA_2N_4}$. Hence,\n experimentally synthesizing monolayer $\\mathrm{MA_2Z_4}$ containing P atoms is very promising for energy harvesting and piezoelectric sensing.\n\n\n\n\n\nThe rest of the paper is organized as follows. In the next\nsection, we give our computational details and the methods used to obtain the piezoelectric coefficients.\n In the third section, we perform a symmetry analysis of the elastic and piezoelectric coefficients of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$. 
In the fourth section, we present the main results and analysis. Finally, we give our conclusions in the fifth section.\n\\begin{table}\n\\centering \\caption{The optimized lattice constants of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) using GGA ($\\mathrm{{\\AA}}$). }\\label{tab0}\n \\begin{tabular*}{0.48\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n \\hline\\hline\nName & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ & $\\alpha_4$ & $\\alpha_5$ & $\\alpha_6$ \\\\\\hline\\hline\n $\\mathrm{MoSi_2N_4}$ &2.91& 2.90 & 2.84 & 2.84 & 2.86 & 2.85\\\\\n $\\mathrm{WSi_2N_4}$ &2.91 & 2.90 & 2.84 & 2.84 & 2.87 & 2.85\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table}\n\n\n\\section{Computational details}\nOur simulations are carried out within density functional theory (DFT)\\cite{1} as implemented\nin the plane-wave code VASP\\cite{pv1,pv2,pv3}. The exchange-correlation functional is treated within the popular generalized gradient\napproximation of Perdew, Burke and Ernzerhof (GGA-PBE)\\cite{pbe} for the structural relaxation and the calculations of the elastic and\npiezoelectric tensors. For the energy band calculations, spin-orbit coupling (SOC)\nis also taken into account, since these compounds contain transition-metal atoms.\nProjector-augmented wave pseudopotentials are used with a cutoff energy of 500 eV for the plane-wave expansion.\nA vacuum spacing of more than 20 $\\mathrm{{\\AA}}$ is adopted to prevent any interactions\nbetween the adjacent periodic images of the 2D monolayers. 
The total energy convergence criterion is set\nto $10^{-8}$ eV, and the\natomic positions are optimized until all components of\nthe forces on each atom are reduced to values less than 0.0001 $\\mathrm{eV.{\\AA}^{-1}}$.\nWe calculate the elastic stiffness coefficients $C_{ij}$ by using the strain-stress relationship (SSR) and the piezoelectric stress coefficients $e_{ij}$ by the DFPT method\\cite{pv6}.\nA Monkhorst-Pack mesh of 15$\\times$15$\\times$1 in the first Brillouin zone\nis sampled for $C_{ij}$, and 9$\\times$16$\\times$1 for $e_{ij}$.\nThe 2D elastic coefficients $C^{2D}_{ij}$\n and piezoelectric stress coefficients $e^{2D}_{ij}$\nhave been renormalized by the length of the unit cell along the z direction ($Lz$): $C^{2D}_{ij}$=$Lz$$C^{3D}_{ij}$ and $e^{2D}_{ij}$=$Lz$$e^{3D}_{ij}$.\nHowever, the $d_{ij}$ are independent of $Lz$.\n\n\\begin{figure*}\n \n \\includegraphics[width=12cm]{Fig2.eps}\n \\caption{(Color online) The energy band structures of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ using GGA+SOC; the VBM and CBM are connected by a red arrow. }\\label{band}\n\\end{figure*}\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig3.eps}\n \\caption{(Color online) The energy band gaps of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ using GGA+SOC.}\\label{band1}\n\\end{figure}\n\n\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig4.eps}\n \\caption{(Color online) The elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$.}\\label{c}\n\\end{figure}\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig5.eps}\n \\caption{(Color online) The piezoelectric stress coefficients $e_{11}$, the ionic contribution and electronic contribution to $e_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$. 
}\\label{e}\n\\end{figure}\n\n\\begin{table}\n\\centering \\caption{The $d_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) using GGA (pm\/V). }\\label{tab0-1}\n \\begin{tabular*}{0.48\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n \\hline\\hline\nName & $\\alpha_1$ & $\\alpha_2$ & $\\alpha_3$ & $\\alpha_4$ & $\\alpha_5$ & $\\alpha_6$ \\\\\\hline\\hline\n $\\mathrm{MoSi_2N_4}$ &1.15 & 0.65& 0.34 & -1.98 & 3.53 & 1.32\\\\\n $\\mathrm{WSi_2N_4}$ &0.78 &0.25 & -0.07 & -2.05 & 2.91 & 0.88\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table}\n\n\\section{Symmetry Analysis}\n The piezoelectric stress tensor $e_{ijk}$ and the piezoelectric strain tensor $d_{ijk}$ are defined as:\n \\begin{equation}\\label{pe0}\n e_{ijk}=\\frac{\\partial P_i}{\\partial \\varepsilon_{jk}}=e_{ijk}^{elc}+e_{ijk}^{ion}\n \\end{equation}\nand\n \\begin{equation}\\label{pe0-1}\n d_{ijk}=\\frac{\\partial P_i}{\\partial \\sigma_{jk}}=d_{ijk}^{elc}+d_{ijk}^{ion}\n \\end{equation}\nwhere $P_i$, $\\varepsilon_{jk}$ and $\\sigma_{jk}$ are the polarization vector, strain and stress, respectively.\nThe $e_{ijk}^{elc}$ ($d_{ijk}^{elc}$) is the clamped-ion\npiezoelectric tensor resulting from the pure electronic contribution. The relaxed-ion\npiezoelectric tensor $e_{ijk}$ ($d_{ijk}$) is obtained from the sum of the ionic\nand electronic contributions. The $d_{ijk}$ can be connected with $e_{ijk}$ by the elastic stiffness tensor $C_{ijkl}$.\nBy employing the frequently used Voigt notation (11$\\rightarrow$1,\n22$\\rightarrow$2, 33$\\rightarrow$3, 23$\\rightarrow$4, 31$\\rightarrow$5 and 12$\\rightarrow$6),\nthe elastic tensor $C_{ijkl}$ and the piezoelectric tensors $e_{ijk}$ and $d_{ijk}$ reduce to $C_{ij}$ (6$\\times$6 matrix), $e_{ij}$ (3$\\times$6 matrix) and $d_{ij}$ (3$\\times$6 matrix). 
The symmetry of the crystal\nstructure further reduces the number of independent $C_{ij}$, $e_{ij}$ and $d_{ij}$ components.\n\nBy intercalating a $\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into an InSe-type $\\mathrm{A_2Z_2}$ monolayer,\nsix $\\alpha_i$ and six $\\beta_i$ ($i$=1 to 6) $\\mathrm{MA_2Z_4}$ monolayers can be constructed\\cite{m20}.\nThe six $\\alpha_i$ have the same $P\\bar{6}m2$ space group, obtained by inserting a 2H-$\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into $\\alpha$-InSe-type $\\mathrm{A_2Z_2}$ double layers, which breaks inversion symmetry. The six $\\beta_i$ are built by intercalating a 1T-$\\mathrm{MoS_2}$-type $\\mathrm{MZ_2}$ monolayer into $\\beta$-InSe-type $\\mathrm{A_2Z_2}$ double layers with the same $P\\bar{3}m1$ space group, which preserves inversion symmetry.\nTherefore, the $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$ monolayers are piezoelectric.\n\n The six $\\alpha_i$ geometric structures of the $\\mathrm{MA_2Z_4}$ monolayer are\nplotted in \\autoref{t0}. All six considered $\\alpha_i$ crystal structures have the same $\\bar{6}m2$ point group. Only the in-plane piezoelectric effect is allowed in\nmonolayer $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MA_2Z_4}$, when a uniaxial in-plane strain is applied. For 2D semiconductors, in general,\nonly in-plane stresses and strains are allowed,\nwhile the out-of-plane direction is strain\/stress free\\cite{q9,q10,q11}. 
The $e_{ij}$, $d_{ij}$ and $C_{ij}$ matrices can then be written as:\n \\begin{equation}\\label{pe1}\n \\left(\n \\begin{array}{ccc}\n e_{11} &-e_{11} & 0 \\\\\n 0 &0 & -e_{11}\\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\n \\end{equation}\n \\begin{equation}\\label{pe1-2}\n \\left(\n \\begin{array}{ccc}\n d_{11} & -d_{11} & 0 \\\\\n 0 &0 & -2d_{11} \\\\\n 0 & 0 & 0 \\\\\n \\end{array}\n \\right)\n \\end{equation}\n \\begin{equation}\\label{pe1-3}\n \\left(\n \\begin{array}{ccc}\n C_{11} & C_{12} &0 \\\\\n C_{12} & C_{11} &0 \\\\\n 0 & 0 & \\frac{C_{11}-C_{12}}{2} \\\\\n \\end{array}\n \\right)\n \\end{equation}\n The forms of these piezoelectric and stiffness constants are the same as those for TMD monolayers\\cite{q9,q11} due to the same point group.\nFrom $e_{ik}$=$d_{ij}C_{jk}$, the only independent in-plane coefficient $d_{11}$ is found to be:\n\\begin{equation}\\label{pe2-7}\n d_{11}=\\frac{e_{11}}{C_{11}-C_{12}}\n\\end{equation}\n\n\n\n\\begin{table*}\n\\centering \\caption{The optimized lattice constants of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ using GGA ($\\mathrm{{\\AA}}$). 
}\\label{tab1}\n \\begin{tabular*}{0.96\\textwidth}{@{\\extracolsep{\\fill}}cccccccccccc}\n \\hline\\hline\nName & $\\mathrm{CrSi_2N_4}$ & $\\mathrm{MoSi_2N_4}$ & $\\mathrm{WSi_2N_4}$ & $\\mathrm{MoGe_2N_4}$ & $\\mathrm{WGe_2N_4}$ & $\\mathrm{CrSi_2P_4}$&$\\mathrm{MoSi_2P_4}$&$\\mathrm{WSi_2P_4}$&$\\mathrm{CrGe_2P_4}$&$\\mathrm{MoGe_2P_4}$&$\\mathrm{WGe_2P_4}$ \\\\\\hline\\hline\n $\\alpha_1$ &2.84& 2.91 & 2.91 & 3.02 & 3.02& 3.42 & 3.47 & 3.47 &3.50 & 3.54 &3.54\\\\\n $\\alpha_2$ &2.84 & 2.90 & 2.90 & 3.01 &3.01 & 3.41 & 3.45 & 3.46 &3.49 & 3.53 & 3.53\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n\\section{Main calculated results}\n Firstly, we discuss the structural relations among the six $\\alpha_i$ crystal structures of $\\mathrm{MA_2Z_4}$.\n According to the relative positions of the M and A atoms, the six $\\alpha_i$ crystal structures can be divided into three categories: A ($\\alpha_1$, $\\alpha_2$), B ($\\alpha_3$, $\\alpha_4$) and C ($\\alpha_5$, $\\alpha_6$).\nThe different categories can be connected by a translation operation. The $\\alpha_3$ structure can be attained by translating the $\\mathrm{A_2Z_2}$ double layers of $\\alpha_2$ along the green line in the top view of $\\alpha_2$ in \\autoref{t0}, while keeping the $\\mathrm{MZ_2}$ monolayer fixed.\n The $\\alpha_6$ can be attained from $\\alpha_4$ by a similar translation operation. The different structures in the same category can be related by mirror or rotation operations. The $\\alpha_2$ can be built by mirroring the $\\mathrm{A_2Z_2}$ double layers of $\\alpha_1$ with respect to the vertical plane defined by the two green lines in the top and side views of $\\alpha_1$. 
The $\\alpha_4$ ($\\alpha_6$) can be constructed by rotating the $\\mathrm{A_2Z_2}$ double layers of $\\alpha_3$ ($\\alpha_5$) by $\\pi$\/3 about the vertical axis defined by the line linking the two A atoms.\n\n \\begin{figure}\n \n \\includegraphics[width=7cm]{Fig6.eps}\n \\caption{(Color online) The piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$. }\\label{d}\n\\end{figure}\n\n\n\n\n\n\n\\begin{table*}\n\\centering \\caption{The $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ using GGA (pm\/V). }\\label{tab1-1}\n \\begin{tabular*}{0.96\\textwidth}{@{\\extracolsep{\\fill}}cccccccccccc}\n \\hline\\hline\nName & $\\mathrm{CrSi_2N_4}$ & $\\mathrm{MoSi_2N_4}$ & $\\mathrm{WSi_2N_4}$ & $\\mathrm{MoGe_2N_4}$ & $\\mathrm{WGe_2N_4}$ & $\\mathrm{CrSi_2P_4}$&$\\mathrm{MoSi_2P_4}$&$\\mathrm{WSi_2P_4}$&$\\mathrm{CrGe_2P_4}$&$\\mathrm{MoGe_2P_4}$&$\\mathrm{WGe_2P_4}$ \\\\\\hline\\hline\n $\\alpha_1$ &1.24&\t1.15&\t0.78&\t1.85&\t1.31&\t6.03&\t4.91&\t4.16&\t6.12&\t5.27&\t4.36\\\\\n $\\alpha_2$ &1.42&\t0.65&\t0.25&\t0.75&\t0.26&\t3.96&\t2.64&\t1.65&\t5.06&\t3.87&\t2.77\\\\ \\hline\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n It has been proved that the monolayers $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$, with 34 VEC, are non-magnetic in the $\\alpha_1$ or $\\alpha_2$ crystal structure, and are both dynamically and thermodynamically stable\\cite{m20}. A piezoelectric material should be a semiconductor to\nprevent current leakage. Only the $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers are semiconductors for all six $\\alpha_i$ crystal structures.\nSo, we mainly study the structural effect on the intrinsic piezoelectricity of $\\mathrm{MSi_2N_4}$ (M=Mo and W). 
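As a rough consistency check of \\autoref{pe2-7}, one can combine the relaxed-ion values reported in this work for the experimentally synthesized $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ ($e_{11}$=4.40$\\times$$10^{-10}$ C\/m and $d_{11}$=1.15 pm\/V from \\autoref{tab0-1}); the resulting stiffness difference is our own back-of-the-envelope estimate, not a value quoted in this work:\n\\begin{equation*}\nC_{11}-C_{12}=\\frac{e_{11}}{d_{11}}\\approx\\frac{4.40\\times 10^{-10}\\ \\mathrm{C\/m}}{1.15\\ \\mathrm{pm\/V}}\\approx 3.8\\times 10^{2}\\ \\mathrm{N\/m},\n\\end{equation*}\nwhere we used $1\\ \\mathrm{pm\/V}=10^{-12}\\ \\mathrm{C\/N}$. Such a large stiffness difference illustrates why a sizeable $e_{11}$ can still translate into a modest $d_{11}$ in these stiff lattices.\n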
The\nstructural parameters of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MSi_2N_4}$ (M=Mo and W) are optimized, and the lattice constants are listed in \\autoref{tab0}.\nIt is found that the lattice constants of $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ in the same phase are almost the same.\nThe sizes of these lattice constants can also be classified into the categories A, B and C, which indicates that the relative positions of the M and A atoms determine the lattice constants.\n\nNext, we use the optimized crystal structures to investigate the electronic structures.\n Although the SOC has little effect on the energy band gaps of the $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers, it can produce an observable spin-orbit splitting in the valence bands at the K point\\cite{m21}. Because the energy band profiles of $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are very similar, only the energy bands of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ are shown in \\autoref{band} using GGA+SOC. It is clearly seen that they are all indirect band gap semiconductors, with gaps ranging from 0.18 eV to 1.99 eV. The conduction band minimum (CBM) is located at the K point for all six $\\alpha_i$, except for $\\alpha_6$, where it is at the M point. The valence band maximum (VBM) of $\\alpha_1$, $\\alpha_2$ and $\\alpha_5$ is at the $\\Gamma$ point, while that of $\\alpha_3$, $\\alpha_4$ and $\\alpha_6$ is slightly off the $\\Gamma$ point, at a point along the $\\Gamma$-K line.\nThe energy band gaps of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ using GGA+SOC are plotted in \\autoref{band1}. It is clearly seen that the structural dependence of the band gap of $\\mathrm{WSi_2N_4}$ is the same as that of $\\mathrm{MoSi_2N_4}$, and the gap ranges from 0.08 eV to 2.37 eV. 
Therefore, it is very effective to tune the electronic structures of the $\\mathrm{MSi_2N_4}$ (M=Mo and W) monolayers by translating or rotating the $\\mathrm{Si_2N_2}$ bilayer.\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig7.eps}\n \\caption{(Color online) The energy band gaps of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ using GGA+SOC. The direct band gaps are marked by \"D\", and the unmarked ones are indirect.}\\label{band2}\n\\end{figure}\n\n\n\n\n\n\nTo calculate the $d_{11}$, the two independent elastic stiffness coefficients ($C_{11}$ and $C_{12}$) of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are attained by SSR, and are plotted in \\autoref{c}, together with $C_{11}$-$C_{12}$. For all six structures, the calculated elastic coefficients of $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ satisfy the Born stability criteria\\cite{ela}, which means that they are all mechanically stable. A similar structural dependence of $C_{11}$, $C_{12}$ and $C_{11}$-$C_{12}$ is observed for $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$.\nIt is found that the $C_{12}$ values of two structures in the same category are very close if the two structures are connected by a mirror operation ($\\alpha_1$ and $\\alpha_2$). However, the $C_{12}$ values differ markedly for two structures related by a rotation operation ($\\alpha_3$ and $\\alpha_4$, or $\\alpha_5$ and $\\alpha_6$). The $\\alpha_4$ ($\\alpha_5$) structure has a larger\n $C_{12}$ than $\\alpha_3$ ($\\alpha_6$). Between $\\alpha_4$ ($\\alpha_3$) and $\\alpha_5$ ($\\alpha_6$), the only difference is the position of the Si atoms.\nIt is found that the $C_{11}$-$C_{12}$ of $\\alpha_1$, $\\alpha_2$, $\\alpha_4$ and $\\alpha_5$ are close, while $\\alpha_3$ and $\\alpha_6$ have larger $C_{11}$-$C_{12}$, which works against a large $d_{11}$ according to \\autoref{pe2-7}. 
These elastic stiffness coefficients are much larger than those of\n TMD monolayers\\cite{q9,q11}, which indicates that\n$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ are not easily deformed.\n\\begin{figure}\n \n \\includegraphics[width=8cm]{Fig8.eps}\n \\caption{The energy band structures of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{CrGe_2P_4}$ using GGA+SOC.}\\label{band-c}\n\\end{figure}\n\n\n\n\n\n\n\n The other key physical quantity, $e_{11}$, of $\\alpha_i$- ($i$=1 to 6) $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ is calculated to attain $d_{11}$. The piezoelectric coefficients $e_{11}$, along with the ionic and electronic contributions to $e_{11}$, are shown in \\autoref{e}.\n A similar structural dependence for $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ is clearly observed.\n It is found that the ionic contributions of two structures in the same category connected by mirror or rotation operations have opposite signs. Across different categories, the ionic contributions of two structures connected by a translation operation have the same sign.\n Calculated results show that the ionic and electronic contributions have opposite signs for all $\\alpha_i$ except $\\alpha_5$.\n For categories A and B, the electronic contribution has a structural dependence similar to that of the ionic contribution.\n In category C, the rotation operation gives rise to identical signs of the electronic contribution from $\\alpha_6$ to $\\alpha_5$.\nAmong the six structures considered, the $e_{11}$ of $\\alpha_5$ has the largest value, owing to the constructive superposition of the ionic and electronic contributions. The $e_{11}$ of the $\\alpha_5$ phase is 13.95$\\times$$10^{-10}$ C\/m for $\\mathrm{MoSi_2N_4}$, and 12.17$\\times$$10^{-10}$ C\/m for $\\mathrm{WSi_2N_4}$. 
These $e_{11}$ are much larger than those of 2D TMD, metal oxide, III-V\nsemiconductor and Janus TMD materials\\cite{q7,q9,q11}.\nThe $e_{11}$ of the experimentally synthesized $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ is 4.40$\\times$$10^{-10}$ C\/m and 3.14$\\times$$10^{-10}$ C\/m, respectively, which\nare comparable to those of most 2D materials, such as TMD and Janus TMD materials\\cite{q7,q9,q11}.\nUsing the calculated $C_{11}$-$C_{12}$ and $e_{11}$, the $d_{11}$ can be attained according to \\autoref{pe2-7}; the results are shown in \\autoref{d}. From $\\alpha_1$ to $\\alpha_6$, the $d_{11}$ and $e_{11}$ show a very similar structural dependence. For the $\\alpha_5$ phase, the $d_{11}$ attains its largest value: 3.53 pm\/V for $\\mathrm{MoSi_2N_4}$, and 2.91 pm\/V for $\\mathrm{WSi_2N_4}$.\n For the experimentally synthesized $\\alpha_1$-$\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$, the $d_{11}$ is 1.15 pm\/V and 0.78 pm\/V, respectively, which are smaller than those of 2D TMDs\\cite{q9,q11} due to the very large $C_{11}$-$C_{12}$. The related $d_{11}$ are listed in \\autoref{tab0-1}.\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig9.eps}\n \\caption{(Color online) The elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$.}\\label{c1}\n\\end{figure}\n\n\n\n\nFor the $\\alpha_1$ and $\\alpha_2$ phases, the monolayers $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are all semiconductors according to GGA+SOC calculations. The energy band gaps of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ using GGA+SOC are plotted in \\autoref{band2}. It is found that the gap of $\\mathrm{CrGe_2P_4}$ is very small: 0.008 eV for the $\\alpha_1$ phase and 0.061 eV for the $\\alpha_2$ phase. 
To confirm unambiguously that they are semiconductors, the energy band structures of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{CrGe_2P_4}$ using GGA+SOC are shown in \\autoref{band-c}.\n For the same material, the gap of the $\\alpha_2$ phase is larger than that of the $\\alpha_1$ phase. For $\\mathrm{MA_2N_4}$, the gap increases with M from Cr to Mo to W, while the gap of $\\mathrm{MA_2P_4}$ first increases and then decreases. Moreover, the formation enthalpies of the $\\alpha_1$ and $\\alpha_2$ phases are very close\\cite{m20}.\nSo, we investigate the intrinsic piezoelectricity of the eleven materials in both the $\\alpha_1$ and $\\alpha_2$ phases.\nThe optimized lattice constants are listed in \\autoref{tab1}; for the same material, the lattice constants of the $\\alpha_1$ and $\\alpha_2$ phases are almost the same, because both phases belong to the same category A.\nAs the elements change from Cr to Mo to W, from Si to Ge, and from N to P, the lattice constants of both the $\\alpha_1$ and $\\alpha_2$ phases increase, owing to the increasing atomic radii.\n\n\n\\begin{figure}\n \n \\includegraphics[width=7cm]{Fig10.eps}\n \\caption{(Color online) The piezoelectric stress coefficients $e_{11}$, the ionic contribution and electronic contribution to $e_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$. }\\label{e1}\n\\end{figure}\n\n\nThe elastic constants $C_{ij}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are plotted in \\autoref{c1}. For all studied materials, the $C_{11}$-$C_{12}$ of the $\\alpha_2$ phase is larger than that of the $\\alpha_1$ phase, owing to a larger $C_{11}$ and a smaller $C_{12}$. The $C_{ij}$ of the $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ containing P atoms are much smaller than those containing N atoms, which favors a large $d_{11}$. 
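For the hexagonal $\\bar{6}m2$ symmetry considered here, the Born mechanical-stability criteria invoked above reduce to a single condition on the two independent constants; a standard statement (see, e.g., Ref.~\\cite{ela}) is\n\\begin{equation*}\nC_{11}>|C_{12}|,\\qquad \\text{which implies}\\qquad C_{66}=\\frac{C_{11}-C_{12}}{2}>0,\n\\end{equation*}\nso that every mechanically stable structure automatically has $C_{11}-C_{12}>0$ and the denominator in \\autoref{pe2-7} is positive.\n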
The piezoelectric stress coefficients $e_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$, together with the ionic and electronic contributions to $e_{11}$, are shown in \\autoref{e1}. When M changes from Cr to Mo to W with the same A and Z atoms, the electronic contribution of $\\mathrm{MA_2Z_4}$ in the $\\alpha_1$ phase decreases, while that of the $\\alpha_2$ phase becomes more negative. It is found that the absolute value of the electronic contribution of all $\\mathrm{MA_2Z_4}$ in the $\\alpha_2$ phase is smaller than that in the $\\alpha_1$ phase. The ionic contribution of all materials in the $\\alpha_2$ phase is positive, which is the same sign as the electronic contribution in the $\\alpha_1$ phase.\n As M changes from Cr to Mo to W, the ionic contribution of $\\mathrm{MA_2Z_4}$ in the $\\alpha_2$ phase with the same A and Z atoms decreases, while that of the $\\alpha_1$ phase becomes more negative, except for $\\mathrm{CrSi_2N_4}$. It is clearly seen that the $e_{11}$ of all materials are positive. The $e_{11}$ of the P-containing $\\mathrm{MA_2Z_4}$ is larger than that of the N-containing one with the same M and A atoms, for both the $\\alpha_1$ and $\\alpha_2$ phases. For the $\\alpha_1$ phase, the $e_{11}$ ranges from 3.14$\\times$$10^{-10}$ C\/m to 9.31$\\times$$10^{-10}$ C\/m, and the whole range for the $\\alpha_2$ phase is 0.85$\\times$$10^{-10}$ C\/m to 7.39$\\times$$10^{-10}$ C\/m.\n \\begin{figure}\n \n \\includegraphics[width=7cm]{Fig11.eps}\n \\caption{(Color online) The piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$. }\\label{d1}\n\\end{figure}\n\n\n Finally, the piezoelectric strain coefficients $d_{11}$ of $\\alpha_i$- ($i$=1 to 2) $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) except $\\mathrm{CrGe_2N_4}$ are plotted in \\autoref{d1}. 
The related $d_{11}$ are also summarized in \\autoref{tab1-1}. For the $\\alpha_1$ phase, the range of $d_{11}$ is 0.78 pm\/V to 6.12 pm\/V, while for the $\\alpha_2$ phase it is 0.25 pm\/V to 5.06 pm\/V. The trend of $d_{11}$ across the materials is very similar to that of $e_{11}$. It is clearly seen that the P-containing monolayer $\\mathrm{MA_2Z_4}$ exhibit a superior piezoelectric response owing to their high $d_{11}$.\n Most of these $d_{11}$ values are larger than the $d_{33}$ = 3.1 pm\/V of the familiar bulk piezoelectric wurtzite GaN\\cite{zh1}.\nSo, it is highly recommended to synthesize P-containing monolayer $\\mathrm{MA_2Z_4}$, such as $\\alpha_1$-$\\mathrm{CrSi_2P_4}$, $\\alpha_1$-$\\mathrm{MoSi_2P_4}$, $\\alpha_1$-$\\mathrm{CrGe_2P_4}$, $\\alpha_1$-$\\mathrm{MoGe_2P_4}$ and $\\alpha_2$-$\\mathrm{CrGe_2P_4}$.\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\nWe have demonstrated a strong structural effect on the intrinsic piezoelectricity of septuple-atomic-layer $\\mathrm{MSi_2N_4}$ (M=Mo and W)\nthrough first-principles simulations. The same structural dependence of $d_{11}$ and $e_{11}$, as well as of the ionic and electronic contributions to $e_{11}$, is found for the $\\mathrm{MoSi_2N_4}$ and $\\mathrm{WSi_2N_4}$ monolayers, and the $\\alpha_5$ phase has the largest piezoelectric coefficients. 
The intrinsic piezoelectricity of monolayer $\\mathrm{MA_2Z_4}$ (M=Cr, Mo and W; A=Si and Ge; Z=N and P) with $\\alpha_1$ and $\\alpha_2$ phases except $\\mathrm{CrGe_2N_4}$ is explored, and the monolayer $\\mathrm{MA_2P_4}$ exhibits stronger piezoelectric polarization than the N-containing monolayer $\\mathrm{MA_2Z_4}$.\nThe largest $d_{11}$ among the $\\mathrm{MA_2N_4}$ materials is only 1.85 pm\/V, while the largest $d_{11}$ of $\\mathrm{MA_2P_4}$ is up to 6.12 pm\/V.\n Among the 22 studied materials, the $d_{11}$ of monolayer $\\alpha_1$-$\\mathrm{CrSi_2P_4}$, $\\alpha_1$-$\\mathrm{MoSi_2P_4}$, $\\alpha_1$-$\\mathrm{CrGe_2P_4}$, $\\alpha_1$-$\\mathrm{MoGe_2P_4}$ and $\\alpha_2$-$\\mathrm{CrGe_2P_4}$ are greater than or close to 5 pm\/V.\n These $d_{11}$ of $\\mathrm{MA_2P_4}$ compare favorably with the piezoelectric coefficients of\nfamiliar bulk piezoelectrics such as $\\alpha$-quartz ($d_{11}$ = 2.3\npm\/V), wurtzite GaN ($d_{33}$ = 3.1 pm\/V) and wurtzite AlN ($d_{33}$ = 5.1 pm\/V)\\cite{zh1,zh2}.\nOur work provides valuable guidance for\nexperimental synthesis efforts, and we hope this study will stimulate further\nresearch interest in the $\\mathrm{MA_2Z_4}$ family, especially for\nits applications in the field of piezoelectrics.\n\n\n\n\n\\section{Data availability}\nThe data that support the findings of this study are available from the corresponding author upon reasonable request.\n\n\n\n\\begin{acknowledgments}\nThis work is supported by the Natural Science Foundation of Shaanxi Provincial Department of Education (19JK0809). We are grateful to the Advanced Analysis and Computation Center of China University of Mining and Technology (CUMT) for the award of CPU hours and WIEN2k\/VASP software to accomplish this work.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\bigskip\nThere have been a number of recent investigations into the \n{\\em chiral odd} structure functions of the nucleon. 
As in the case \nof the polarized structure functions there are two quantities of \ninterest at leading twist: The transverse spin chiral odd structure\nfunction $h_T(x,Q^2)$ and the longitudinal spin chiral odd structure \nfunction $h_L(x,Q^2)$. Within the context of the operator product \nexpansion (OPE) the analysis in terms of twist reveals that the \ntransverse chiral odd structure function $h_T(x,Q^2)$ is purely twist--2, \nwhile the longitudinal structure function $h_L(x,Q^2)$ contains \nboth twist--2 and twist--3 contributions.\nAccordingly, the decomposition of $h_L(x,Q^2)$ into twist--2 \nand twist--3 ($\\overline{h}_L(x,Q^2)$) pieces is given by\n\\begin{eqnarray}\nh_L(x,Q^2)=2x\\, \\int_{x}^1\\ dy\\frac{h_T(y,Q^2)}{y^2}+ \n\\overline{h}_L(x,Q^2)\\ .\n\\label{hL}\n\\end{eqnarray}\nAs a reminder we note that the kinematics are defined such that $q$ denotes \nthe momentum transferred to a nucleon of momentum $p$. In the Bjorken\nlimit, {\\it i.e.} $Q^2=-q^2\\to\\infty$ with $x=Q^2\/2p\\cdot\\! q$ fixed,\nthe leading twist contributions to the nucleon structure\nfunctions dominate the $1\/Q^2$ expansion. The additional and \nimportant logarithmic dependence on $Q^2$, which is associated \nwith soft gluon emission, is included via the evolution program \nof perturbative quantum--chromo--dynamics (QCD).\n\nWhile the chiral odd structure functions are not directly accessible \nin deep inelastic lepton nucleon scattering (DIS) there is the well known \nproposal at {\\em RHIC} to extract the quark transversality distributions\n$h_T^{(a)}(x,Q^2)$ ($a$ being the flavor index) from Drell--Yan \ndilepton--production resulting from transversely polarized proton \nbeams \\cite{Ra79}. Unfortunately dilepton production processes \nare difficult to extract from proton--proton collisions as the purely\nhadronic processes dominate. Furthermore this experiment will provide\nonly the product of the chiral odd distributions for quarks and \nantiquarks. 
As the latter are presumably small these flavor distributions \nare not easily measurable in the Drell--Yan process. In the light of these \ndisadvantages it has recently been pointed out that the transversality \ndistributions may also be measured in the fragmentation region of \nDIS \\cite{Ja98}. The key observation is that these distribution \nfunctions can be extracted from an asymmetry in the two meson production \nin the special case that this two meson state (like $\\pi^+\\pi^-$) is a \nsuperposition of different $C$--parity states, as {\\it e.g.} $\\sigma$ \nand $\\rho$. Then the phases in the final state interactions do not \nvanish on the average and the differential cross section is proportional \nto the product of chiral odd distributions and the interference \nfragmentation functions. The latter describe the emission and \nsubsequent absorption of a two pion intermediate state from quarks \nof different helicity. In case these fragmentation functions are not \nanomalously small the chiral odd distribution functions can then be \nobtained from DIS processes\\footnote{The relevant fragmentation and \ndistribution functions depend on different kinematical variables: the \ntwo meson state momentum fraction and the Bjorken variable, respectively.} \nlike $eN\\to e^\\prime \\pi^+\\pi^-X$ with the nucleon $N$ being transversely \npolarized. Assuming isospin covariance for the fragmentation functions \nthese DIS processes will provide access to the charge squared weighted \nchiral odd distribution functions \\cite{Ja98}. Such processes \nshould be measurable in the transversely polarized target experiments at \n{\\it HERMES}. Knowledge of the chiral odd structure \nfunctions will serve to complete our picture of the spin structure\nof the nucleon as they correspond to the distribution of the quark \ntransverse spin in a nucleon which is transversely polarized \n\\cite{Ja96}. 
With these data being expected in the near future \nit is, of course, interesting to understand the structure of \nthe nucleon from the theoretical point of view. As we are still \nlacking a bound state wave function for the nucleon in terms of quarks \nand gluons, {\\it i.e.} computed from first principles in QCD,\nit is both mandatory and fruitful to investigate these chiral odd \nflavor distributions and their charge weighted average\nnucleon structure functions within hadronic models of the\nnucleon \\cite{Ja92,St93,Ba97,Sc97,Sc97a,Ka97,Po96}. \n\nIn the context of the spin structure of the nucleon chiral soliton\nmodels are particularly interesting as they provide an explanation\nfor the small magnitude of the quark spin contribution to the proton \nspin, {\\it i.e.} the vanishingly small matrix element of the singlet \naxial current \\cite{We96}. In these models the nucleon is described as a \nnon--perturbative field configuration in some non--linear effective \nmeson theory \\cite{Sk61,Ad83,Al96}. Unfortunately in many of these soliton \nmodels the evaluation of structure functions is infeasible due to the \nhighly non--linear structure of the current operators and the inclusion \nof higher derivative operators which complicates the current commutation \nrelations. However, it has recently been recognized that the soliton \nsolution \\cite{Al96} which emerges after bosonization \\cite{Eb86} of \nthe Nambu--Jona--Lasinio (NJL) \\cite{Na61} chiral quark model can be \nemployed to compute nucleon structure functions \\cite{We96a,We97}. In \norder to project this soliton configuration onto nucleon states with good \nspin and flavor a cranking procedure must be employed \\cite{Ad83,Re89} \nwhich implements significant $1\/N_C$ contributions ($N_C$ is the number \nof color degrees of freedom). 
When extracting the structure functions \nfrom the NJL chiral soliton model the full calculation which also\nincludes effects of the vacuum polarized by the background soliton is \nquite laborious. In addition we are still lacking a regularization \nprescription of the vacuum contribution to the structure functions\nwhich is derived from the action functional and which yields algebraic \nexpressions for their moments which are {\\em consistent} with those for \nthe static nucleon properties. Fortunately it is known that the \ndominant contribution to \nstatic nucleon properties stems from the single quark level which has the \nlowest energy eigenvalue (in magnitude) and is strongly bound by the \nsoliton \\cite{Al96}. This is particularly the case for spin related \nquantities. Hence it is a reasonable approximation to consider\nonly the contribution of this level to the structure functions. In \nthe following section the NJL chiral soliton model together with the \nabove mentioned approximation, which we will call {\\it valence quark \napproximation}\\footnote{This notation refers to the valence quark in the \nNJL chiral soliton model and should not be confused with the valence quark\nin the parton model.} will be described in more detail. \n\nThe NJL model for the quark flavor dynamics incorporates spontaneous\nbreaking of chiral symmetry in a dynamic fashion. Hence the quark fields \nwhich build up the soliton self--consistently \\cite{Re88} are {\\em \nconstituent quarks} with a constituent quark mass of several hundred \n{\\rm MeV}. Keeping this in mind we calculate both the {\\em effective}\nconstituent quark distributions and in turn the corresponding leading twist\ncontributions to nucleon structure functions ({\\it cf.} eq (\\ref{chgw}))\nat a low scale $Q_0^2$. In the language of Feynman diagrams \nthe DIS processes are described by a constituent quark of the nucleon \nabsorbing a quantum of the external source. 
In the Bjorken limit the quark \nthen propagates highly off--shell before emitting a quantum of the external \nsource. The intermediate quark may propagate forward and backward.\nHence the complete structure functions acquire contributions from \nboth distributions where the intermediate constituent quark moves \nforward and backward. We will focus on nucleon structure functions which\nare defined as the sum over the charge--weighted flavor distributions \n\\cite{Ja92}\n\\begin{eqnarray}\nh_{T\/L}^{(\\pm)}(x,Q_0^2)=\\frac{1}{2} \\sum_a \ne_{a}^2 h^{(a,\\pm)}_{T\/L}(x,Q_0^2) \\ ,\n\\label{chgw}\n\\end{eqnarray}\nin analogy to those of the chiral even spin polarized and unpolarized \nnucleon structure functions \\cite{Ja98,Ja96}. Here $a$ represents a \nquark label, while $(\\pm)$ refers to the forward $(+)$ and backward\n$(-)$ propagating intermediate constituent quarks. Furthermore $e_{a}$ \ndenotes the charge fraction of the considered quark flavor $a$. The \ncomplete chiral odd structure functions are finally obtained as the sum\n\\begin{eqnarray}\nh_{T\/L}(x,Q_0^2)=h_{T\/L}^{(+)}(x,Q_0^2)+h_{T\/L}^{(-)}(x,Q_0^2)\\ .\n\\label{defintro}\n\\end{eqnarray}\nThe calculation of the flavor distributions $h^{(a)}_{T\/L}$ in the \nvalence approximation to the NJL chiral soliton model \\cite{We96a,We97} \nis summarized in section 3.\n\nFurther it is important to note that when considering model structure \nfunctions the OPE implies that the initial condition,\n$\\mu^2=Q_0^2$, for the evolution is, \n{\\it a priori}, a free parameter in any baryon model \\cite{Sc91}. \nFor the model under consideration it has previously been determined to be \n$Q_0^2\\approx0.4{\\rm GeV}^2$ by studying the evolution dependence of \nthe model prediction for the unpolarized structure functions \\cite{We96a}. 
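The charge-squared weighting of eq (chgw) can be sketched numerically; in the snippet below the two light-flavor distributions are toy assumptions for illustration only, not model output:

```python
# Sketch of the charge-squared weighting, eq. (chgw):
#   h(x) = (1/2) * sum_a e_a^2 * h^(a)(x),
# here for the two light flavors with e_u = 2/3, e_d = -1/3.
# The flavor distributions are toy assumptions, not model output.

E2 = {"u": (2.0 / 3.0) ** 2, "d": (-1.0 / 3.0) ** 2}

def charge_weighted(h_flavor, x):
    return 0.5 * sum(E2[a] * h_flavor[a](x) for a in h_flavor)

h_flavor = {"u": lambda x: (1.0 - x) ** 3,     # toy up distribution
            "d": lambda x: -0.5 * (1.0 - x) ** 4}  # toy down distribution
print(round(charge_weighted(h_flavor, 0.0), 4))  # prints 0.1944
```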
\nIn a subsequent step to compute the chiral odd structure functions we \nemploy a leading order evolution program \\cite{Ba97,Ka97} to obtain the \nchiral odd structure functions at a larger scale, {\\it e.g.} \n$Q^2\\approx 4{\\rm GeV}^2$ relevant to the experimental conditions. This \nevolution program incorporates the leading logarithmic corrections to \nthe leading twist pieces. The evolution procedure as applied to our \nmodel structure functions will be explained in section 4.\n\nThe numerical results for the chiral odd structure functions are\npresented in section 5 while concluding remarks are contained in \nsection 6. Technical details on the model calculations and the QCD \nevolution procedure are relegated to appendices. Let us also mention\nthat there has been a previous calculation of $h_T(x,Q_0^2)$ \\cite{Po96} \nwhich, however, ignored both the projection onto good nucleon states \nand the QCD evolution. Furthermore in that calculation an (arbitrary) \nmeson profile was employed rather than a self--consistent soliton \nsolution to the static equations of motion. 
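The twist-2/twist-3 decomposition of eq (hL) can be checked numerically. A minimal sketch, assuming the toy input h_T(y) = y, for which the twist-2 integral 2x ∫_x^1 dy h_T(y)/y^2 has the closed form -2x ln x:

```python
# Numerical sketch of the twist-2 piece of h_L, eq. (hL):
#   h_L^(2)(x) = 2x * int_x^1 dy h_T(y) / y^2.
# The toy input h_T(y) = y is assumed purely as a check; the integral
# is then -2x*ln(x) in closed form.

import math

def hL_twist2(hT, x, n=2000):
    """Trapezoidal estimate of 2x * int_x^1 hT(y)/y^2 dy."""
    h = (1.0 - x) / n
    ys = [x + i * h for i in range(n + 1)]
    vals = [hT(y) / y ** 2 for y in ys]
    integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return 2.0 * x * integral

x = 0.3
approx = hL_twist2(lambda y: y, x)
exact = -2.0 * x * math.log(x)
print(abs(approx - exact) < 1e-4)  # prints True
```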
\n\n\\bigskip\n\\section{The NJL--Model Chiral Soliton}\n\\bigskip\n \nBefore continuing with the discussion of the chiral odd structure\nfunctions, we will review the issue of the \nchiral soliton in the NJL model.\n\nThe Lagrangian of the NJL model in terms of quark degrees of freedom \nreads \\cite{Na61,Eb86}\n\\begin{eqnarray}\n{\\cal L} = \\bar q (i\\partial \\hskip -0.5em \/ - m^0 ) q +\n 2G_{\\rm NJL} \\sum _{i=0}^{3}\n\\left( (\\bar q \\frac {\\tau^i}{2} q )^2\n +(\\bar q \\frac {\\tau^i}{2} i\\gamma _5 q )^2 \\right) .\n\\label{NJL}\n\\end{eqnarray}\nHere $q$, $m^0$ and $G_{\\rm NJL}$ denote the quark field, the \ncurrent quark mass and a dimensionful coupling constant, respectively.\nThis model is motivated as follows:\nIntegrating out the gluon fields from QCD yields a current--current \ninteraction mediated by one gluon exchange to leading order\nin powers of the quark current. Replacing the mediating gluon \npropagator with a local contact interaction and \nperforming the appropriate Fierz--transformations yields the \nLagrangian (\\ref{NJL}) in leading order of $1\/N_C$ \\cite{Ca87,Re90}, \nwhere $N_C$ refers to the number of color degrees of freedom. Although\nonly a subset of the possible non--perturbative gluonic modes is \ncontained in the contact interaction term in eq (\\ref{NJL}), \nit is important to stress that gluonic effects are present in the \nmodel (\\ref{NJL}). 
Furthermore the NJL model embodies the approximate \nchiral symmetry of QCD and has to be understood as an effective \n(non--renormalizable) theory of the low--energy quark flavor dynamics.\n\nApplication of functional bosonization techniques \\cite{Eb86} to the \nLagrangian (\\ref{NJL}) yields the mesonic action\n\\begin{eqnarray}\n{\\cal A}&=&{\\rm Tr}_\\Lambda\\log(D)+\\frac{1}{4G_{\\rm NJL}}\n\\int d^4x\\ {\\rm tr}\n\\left(m^0\\left(M+M^{\\dag}\\right)-MM^{\\dag}\\right)\\ , \n\\label{bosact} \\\\\nD&=&i\\partial \\hskip -0.5em \/-\\left(M+M^{\\dag}\\right)\n-\\gamma_5\\left(M-M^{\\dag}\\right)\\ ,\n\\label{dirac}\n\\end{eqnarray}\nwhere $M=S+iP$ comprises composite scalar ($S$) and pseudoscalar ($P$) \nmeson fields which appear as quark--antiquark bound states. \nFor regularization, which is indicated by the cut--off $\\Lambda$, we \nwill adopt the proper--time scheme \\cite{Sch51}. The free parameters \nof the model are the current quark mass $m^0$, the coupling constant \n$G_{\\rm NJL}$ and the cut--off $\\Lambda$. The equation of motion for \nthe scalar field $S$ may be considered as the gap--equation for the \norder parameter $\\langle {\\bar q} q\\rangle$ of chiral symmetry breaking. \nThis equation relates the vacuum expectation value \n$\\langle M\\rangle=m{\\mbox{{\\sf 1}\\zr{-0.16}\\rule{0.04em}{1.55ex}\\zr{0.1}}}$ to the model parameters $m^0$, $G_{\\rm NJL}$ \nand $\\Lambda$. For apparent reasons $m$ is called the {\\em constituent} \nquark mass. The occurrence of this vacuum expectation value reflects the \nspontaneous breaking of chiral symmetry and causes the pseudoscalar fields \nto emerge as (would--be) Goldstone bosons. Expanding ${\\cal A}$ to quadratic \norder in $P$ (around $\\langle M\\rangle$) these parameters are related to \nphysical quantities; that is, the pion mass, $m_\\pi=135{\\rm MeV}$ and the \npion decay constant, $f_\\pi=93{\\rm MeV}$. 
This leaves one undetermined \nparameter which we choose to be the constituent quark mass \\cite{Eb86}.\n\nThe NJL model chiral soliton \\cite{Al96,Re88} is given \nby a non--perturbative meson configuration which is assumed to be of the \nhedgehog type\n\\begin{eqnarray}\nM_{\\rm H}(\\mbox{\\boldmath $x$})=m\\ {\\rm exp}\n\\left(i\\mbox{\\boldmath $\\tau$}\\cdot{\\hat{\\mbox{\\boldmath $x$}}}\n\\Theta(r)\\right)\\ .\n\\label{hedgehog}\n\\end{eqnarray}\nIn order to compute the functional trace in eq (\\ref{bosact}) for this \nstatic configuration we express the \nDirac operator (\\ref{dirac}) in terms of a Hamiltonian\noperator $h$, {\\it i.e.} $D=i\\beta(\\partial_t-h)$ with\n\\begin{eqnarray}\nh=\\mbox{\\boldmath $\\alpha$}\\cdot\\mbox{\\boldmath $p$}+m\\ \\beta\\\n{\\rm exp}\\left(i\\gamma_5\\mbox{\\boldmath $\\tau$}\n\\cdot{\\hat{\\mbox{\\boldmath $x$}}}\\Theta(r)\\right)\\ .\n\\label{hamil}\n\\end{eqnarray}\nWe denote the eigenvalues and eigenfunctions of $h$ by \n$\\epsilon_\\mu$ and $\\Psi_\\mu$, respectively. Explicit expressions for \nthese wave--functions are displayed in appendix B of ref \\cite{Al96}. \nIn the proper--time regularization scheme the energy functional of \nthe NJL model is found to be \\cite{Re89,Al96}, \n\\begin{eqnarray}\nE[\\Theta]=\n\\frac{N_C}{2}\\epsilon_{\\rm v}\n\\left(1+{\\rm sgn}(\\epsilon_{\\rm v})\\right)\n&+&\\frac{N_C}{2}\\int^\\infty_{1\/\\Lambda^2}\n\\frac{ds}{\\sqrt{4\\pi s^3}}\\sum_\\nu{\\rm exp}\n\\left(-s\\epsilon_\\nu^2\\right)\n\\nonumber \\\\* && \\hspace{1.5cm}\n+\\ m_\\pi^2 f_\\pi^2\\int d^3r \\left(1-{\\rm cos}\\Theta(r)\\right) .\n\\label{efunct}\n\\end{eqnarray}\nThe subscript ``${\\rm v}$\" denotes the valence quark level. This state \nis the distinct level bound in the soliton background, {\\it i.e.}\n$-m<\\epsilon_{\\rm v}<m$. In the rest frame the resulting model structure \nfunctions do not vanish identically for $x>1$, although the contributions for $x>1$ \nare very small. 
\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=updown.400.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=updown.450.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_htud}\nThe valence quark approximation of the transverse chiral--odd nucleon \ndistribution function as a function of Bjorken--$x$ for the up and down \nquark flavor content in the rest frame. For comparison also the model \ncalculation \\protect\\cite{We97} for the twist--2 polarized structure \nfunction $g_1(x,Q_0^2)$ is shown for the respective flavor channels.\nTwo values of the constituent quark mass are considered:\n$m=400 {\\rm MeV}$ (left panel) and $m=450 {\\rm MeV}$ (right panel).}\n\\end{figure}\n\nThe calculation of nucleon \nstructure functions in the Bjorken limit, however, singles out \nthe null plane, $\\xi^+=0$. This condition can be satisfied upon \ntransformation to the infinite momentum frame (IMF) even for models \nwhere the nucleon emerges as a (static) localized object \\cite{Hu77}. \nFor the quark soliton model under consideration this transformation \ncorresponds to performing a boost in the space of the collective \ncoordinate $\\bbox{x}_0$, {\\it cf.} eq (\\ref{cht}). Upon this boost \nto the IMF we have observed \\cite{Ga97} that the common problem of \nimproper support for the structure functions, {\\it i.e.} non--vanishing \nstructure functions for $x>1$, is cured along the line suggested by \nJaffe \\cite{Ja80} some time ago. The reason simply is that the Lorentz \ncontraction associated with the boost to the IMF maps the infinite line \nexactly onto the interval $x\\in [0,1[$. In addition we have observed that \nthis Lorentz contraction affects the structure functions also at small \nand moderate $x$. 
Incorporating these results for the general set \nof leading twist structure functions within the NJL--chiral soliton model\nyields the following form for the forward and backward\nmoving intermediate quark\nstate contributions to the chiral odd transverse\nspin structure function, $h^{(\\pm)}_T\\left(x,Q^2\\right)$,\n\\begin{eqnarray}\n\\hspace{-0.3cm}\nh^{(\\pm)}_T(x)&=&\\pm N_C\\frac{M}{\\pi(1-x)}\n\\int_{p_{\\rm min}}^\\infty \\hspace{-0.2cm} pdp d\\varphi \\\n\\nonumber \\\\ && \\hspace{1.0cm}\n\\times\\langle N |\\tilde{\\psi}^\\dagger (\\bbox{p}_{\\mp})\n\\left(1\\mp\\alpha_3\\right)\\gamma_{\\perp}\\gamma_5{\\cal Q}^2\n\\tilde{\\psi}(\\bbox{p}_{\\mp})|N\\rangle\n\\Big|_{{\\rm cos}\\theta=-\n{\\textstyle \\frac{M\\ {\\rm ln}(1-x)\\pm\\epsilon_{\\rm v}}{p}}} \\ .\n\\label{htp}\n\\end{eqnarray}\nIn general the resulting relation between structure functions \nin the IMF and the rest frame (RF) reads\n\\begin{eqnarray}\nf_{\\rm IMF}(x)=\\frac{\\Theta(1-x)}{1-x} f_{\\rm RF}\n\\Big(-{\\rm ln}(1-x)\\Big)\\ .\n\\label{fboost}\n\\end{eqnarray}\nOf course, in the context of the chiral odd structure functions \n$f_{\\rm RF}$ is to be identified with the expressions in \neqs (\\ref{ht11},\\ref{hl11},\\ref{hltnjl}). 
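The boost relation of eq (fboost) can be checked on a toy rest-frame profile; for the assumed input f_RF(X) = exp(-X) the substitution X = -ln(1-x) produces the constant function 1 on [0,1[, and the support indeed ends at x = 1:

```python
# Check of the boost relation, eq. (fboost):
#   f_IMF(x) = Theta(1-x)/(1-x) * f_RF(-ln(1-x)).
# The rest-frame profile f_RF(X) = exp(-X) is a toy assumption.

import math

def f_imf(f_rf, x):
    """Boosted structure function; vanishes for x >= 1."""
    if x >= 1.0:
        return 0.0
    return f_rf(-math.log(1.0 - x)) / (1.0 - x)

f_rf = lambda X: math.exp(-X)   # toy rest-frame profile (assumed)

# exp(-X) boosts to the constant function 1 on [0,1):
print(all(abs(f_imf(f_rf, x) - 1.0) < 1e-12 for x in (0.0, 0.3, 0.9)))  # prints True
# and the support is cut off at x = 1:
print(f_imf(f_rf, 1.2))  # prints 0.0
```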
As will be recognized \nshortly the solution to\nthe proper support problem is essential in order to \napply the evolution program of perturbative QCD.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=updownpr.400.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=updownpr.450.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_htudpr}\nSame as figure \\protect\\ref{fig_htud} \nin the IMF (\\protect\\ref{fboost}).}\n\\end{figure}\nThe chiral odd and polarized structure functions resulting from this\ntransformation are shown in figure \\ref{fig_htudpr}.\n\nIn order to include the logarithmic corrections to the \ntwist--2 pieces of the chiral odd structure functions we \napply the well--established GLAP procedure \\cite{Gr72}.\nFor the transverse component $h_T(x,Q^2)$ this is \nstraightforward as it is pure twist--2. For the longitudinal\npiece $h_L(x,Q^2)$ one first has to extract the twist--2\ncomponent through $h_T(x,Q^2)$ namely,\n$h_L^{(2)}(x,Q^2)=2x\\, \\int_{x}^1\\ dy\\,h_T(y,Q^2)\/y^2$.\n\nWe simultaneously denote by $h^{(2)}$ the twist--2 parts of $h_T$ \nand $h_L$. To leading order (in $\\alpha_{QCD}(Q^2)$) the variations\nof the structure functions from a change $\\delta t$ of the \nmomentum scale is given by\n\\begin{eqnarray}\nh^{(2)}(x,t+\\delta{t})=h^{(2)}(x,t)\\ \n+ \\frac{dh^{(2)}(x,t)}{dt} \\ \\delta t\\ ,\n\\label{h2var}\n\\end{eqnarray}\nwhere $t={\\rm log}\\left(Q^2\/\\Lambda_{QCD}^2\\right)$. The variation\n(\\ref{h2var}) is essentially due to the emission and absorption of \nsoft gluons. 
The explicit expression for the evolution differential \nequation is given by the convolution integral,\n\\begin{eqnarray}\n\\frac{d\\, h^{(2)}(x,t)}{dt} =\\frac{\\alpha_{QCD}(t)}{2\\pi}\nC_{R}(F)\\int^1_{x}\\ \\frac{dy}{y}P_{qq}^h\\left(y\\right)\nh^{(2)}\\left(\\frac{x}{y},t\\right)\n\\label{convl}\n\\end{eqnarray}\nwhere the leading order splitting function \\cite{Ar90,Ba97} is\ngiven by,\n\\begin{eqnarray} \nP_{qq}^{h}\\left(z\\right)=\n\\frac{2}{\\left(1-z\\right)_+}-2\n+\\frac{3}{2}\\ \\delta(z-1)\n\\end{eqnarray}\nand $C_R(F)=\\left(N_C^2-1\\right)\/2N_C$ denotes the quadratic Casimir of the \nfundamental representation,\n$\\alpha_{QCD}(t)=4\\pi\/\\left[b_0\\log\\left(Q^2\/ \\Lambda^2\\right)\\right]$\nand $b_0=(11N_C-2n_f)\/3$ for $n_f$ active flavors.\nEmploying the ``+\" prescription yields for three light flavors and\n$N_C=3$\n\\begin{eqnarray}\n\\frac{d h^{(2)}(x,t)}{dt}&=&\\frac{\\alpha_{QCD}(t)}{2\\pi}\n\\left\\{\\ \\left(2 + \\frac{8}{3}\\log(1-x)\\right)h^{(2)}(x,t)\n\\right.\n\\nonumber \\\\*&& \\hspace{-0.7cm} \n\\left. \n+\\, \\frac{8}{3}\\int^{1}_{x}\\ \\frac{dy}{y}\\left[\n\\frac{1}{1-y}\\left(h^{(2)}(\\frac{x}{y},t)-yh^{(2)}(x,t)\\right)\n- h^{(2)}(\\frac{x}{y},t)\\right]\\right\\}\\ .\n\\label{evhtw2}\n\\end{eqnarray}\nAs indicated above, the structure functions must vanish at the boundary \n$x=1$ in order to cancel the divergence of the logarithm in eq \n(\\ref{evhtw2}) and thus for the GLAP procedure to be applicable. This \nmakes the projection of the rest frame structure functions mandatory.\nThe variation of the structure functions for finite intervals \nin $t$ is straightforwardly obtained by iteration of these \nequations, {\\it i.e.} as a solution to the differential \nequation (\\ref{evhtw2}). As discussed previously the initial value \nfor integrating the differential equation is given by the scale \n$Q_0^2$ at which the model is defined. It should be emphasized that \nthis scale essentially is a new parameter of the model. 
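The iteration of eq (evhtw2) can be sketched as one explicit Euler step in t. In the snippet below the input profile h^(2)(x) = x(1-x)^2 and the value of Lambda_QCD are placeholder assumptions, chosen only to make the sketch runnable; they are not model results:

```python
# One explicit Euler step of the leading-order evolution, eq. (evhtw2).
# The input profile and Lambda_QCD below are placeholder assumptions.

import math

B0 = (11 * 3 - 2 * 3) / 3.0               # b_0 for N_C = n_f = 3

def alpha_s(t):
    """Leading-order running coupling, t = log(Q^2/Lambda^2)."""
    return 4.0 * math.pi / (B0 * t)

def dh_dt(h, x, t, n=4000):
    """Right-hand side of eq. (evhtw2), midpoint rule on y in (x, 1)."""
    local = (2.0 + (8.0 / 3.0) * math.log(1.0 - x)) * h(x)
    step = (1.0 - x) / n
    integral = 0.0
    for i in range(n):
        y = x + (i + 0.5) * step
        integral += ((h(x / y) - y * h(x)) / (1.0 - y) - h(x / y)) / y
    integral *= step
    return alpha_s(t) / (2.0 * math.pi) * (local + (8.0 / 3.0) * integral)

h0 = lambda x: x * (1.0 - x) ** 2         # toy h^(2) at the model scale
t0 = math.log(0.4 / 0.04)                 # Q0^2 = 0.4 GeV^2, Lambda^2 = 0.04 GeV^2 (assumed)
dt = 0.1

h_new = lambda x: h0(x) + dt * dh_dt(h0, x, t0)
print(h_new(0.7) < h0(0.7))  # prints True: evolution depletes the large-x region
```

Note that the profile vanishes at x = 1, as required for the logarithm in eq (evhtw2) to be harmless.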
For a given \nconstituent quark mass $m$ we adjust $Q_0^2$ to maximize the \nagreement of the previously calculated \\cite{We96a} unpolarized \nstructure functions with the experimental data for \nelectron--nucleon DIS: $F_2^{ep}-F_2^{en}$. For the constituent \nquark mass $m=400{\\rm MeV}$ we have obtained $Q_0^2\\approx0.4{\\rm GeV}^2$.\nNote that this value of $Q_0^2$ is indeed (as it should be) smaller than \nthe ultraviolet cut--off of the underlying NJL soliton model as \n$\\Lambda^2\\approx 0.56{\\rm GeV}^2$. The latter quantity indicates the range \nof validity of the model. In figure \\ref{fig_ht2p}a we compare the un--evolved, \nprojected proton structure function $h_T^{p}\\left(x,Q_0^2\\right)$ with \nthe one evolved from $Q_0^2=0.4{\\rm GeV}^2$ to $Q^2=4.0{\\rm GeV}^2$. As \nexpected the evolution enhances the structure function at low $x$. \n\nThis change towards small $x$ is a generic feature of the projection \nand evolution process and presumably not very sensitive to the \nprescription applied here. In particular, the choice of a projection \ntechnique \\cite{Tr97} alternative to (\\ref{fboost}) may easily be \ncompensated by an appropriate variation of the scale $Q_0^2$. In \nfigure \\ref{fig_ht2p}b the same calculation for $h_L^{(2)}(x,Q^2)$ is \npresented.\n\nIn the evolution of the twist--2 pieces we have restricted ourselves\nto the leading order in $\\alpha_s$ because for the twist--3 piece of\n$h_L$, the necessary ingredients are not known in next--to--leading \norder. Even the leading order evolution is only known in the large \n$N_C$ limit. It should be noted that such an approach seems \nparticularly suited for soliton models which equally utilize large \n$N_C$ arguments. As pointed out by Balitskii et al. 
\\cite{Bal96} the \nadmixture of independent quark and quark--gluon operators contributing \nto the twist--3 portion ${\\overline{h}}_L(x,Q^2)$ grows with $n$ \nwhere $n$ refers to the $n^{\\rm th}$ moment,\n${\\cal M}_n\\left[ \\overline{h}_L(Q^2)\\right]$ of $h_L(x,Q^2)$.\nHowever, much like the case with\nthe spin--polarized structure function, $g_2(x,Q^2)$ \\cite{Ali91},\nin the $N_C\\rightarrow \\infty$ limit the quark operators of \ntwist--3 decouple from the quark--gluon operators of the same twist.\nThen the anomalous dimensions $\\gamma_n$ which govern the \nlogarithmic $Q^2$ dependence of ${\\cal M}_n$ can be computed. Once the \n$\\gamma_n$'s are known an evolution kernel can be constructed that \n``propagates'' the twist--3 part $\\overline{h}_L(x,Q^2)$ in momentum\n\\begin{eqnarray}\n\\overline{h}_L(x,Q^2)&=&\\int_x^1 \\frac{dy}{y} b(x,y;Q^2,Q_0^2)\n\\overline{h}_L(y,Q_0^2)\\ .\n\\label{evkern}\n\\end{eqnarray}\nWe relegate the detailed discussion of the kernel $b(x,y;Q^2,Q_0^2)$,\nwhich is obtained by inverting the $Q^2$ dependence of ${\\cal M}_n$,\nto appendix C. In figure \\ref{fig_h2bllp}a we show the evolution of \n$\\overline{h}_L(x)$. Again we used $Q_0^2=0.4{\\rm GeV}^2$ and \n$Q^2=4.0{\\rm GeV}^2$.\n\nAs discussed in ref \\cite{Bal96} the merit of this \napproach is that to leading order in $N_C$ the knowledge of \n$h_L(x,Q^2)$ at one scale is sufficient to predict it at any arbitrary \nscale, which is not the case at finite $N_C$.\\footnote{As noted in \n\\cite{Bal96}, next to leading order corrections are estimated to go \nlike $O\\left(1\/N^2_c\\times{\\rm ln}(n)\/\\, n\\right)$ at large $n$.}\nThus $h_L(x,Q^2)$ obeys a generalized GLAP evolution equation. \nThis finally enables us (in much the same manner as was the case \nfor $g_2(x,Q^2)$ in \\cite{We97}) to compute the longitudinal chiral odd \nstructure function $h_L(x,Q^2)$ by combining the separately evolved \ntwist--2 and twist--3 components together. 
The result for \n$Q_0^2=0.4{\\rm GeV}^2$ and $Q^2=4.0{\\rm GeV}^2$ is shown in figure\n\\ref{fig_h2bllp}b. We recall that the only ingredients have been the leading \ntwist pieces of the chiral odd structure functions at the model \nscale $Q_0$.\\footnote{A feature of $h_L(x)$ compared with $g_2(x)$ \nis that $h_L(x)$ does not mix with gluon distributions \nowing to its chiral-odd nature, and its $Q^2$ evolution is given by \n(\\ref{mom}), (\\ref{adm}) even for the flavor singlet piece.} \n\n\\bigskip\n\\section{Discussion of the Numerical Results}\n\\bigskip\nIn this section we discuss the results of the chiral-odd structure\nfunctions calculated from eqs (\\ref{hT0})--(\\ref{hL1}) for constituent\nquark masses $m=400 {\\rm MeV}$ and $m=450 {\\rm MeV}$. In figure\n\\ref{fig_htud} we have shown the up and down quark contributions\nto the transverse chiral odd structure function of the proton. Figure\n\\ref{fig_htudpr} displays them boosted to the IMF. We observe\nthat these structure functions are always smaller (in magnitude) than\nthe twist--2 polarized structure function $g_1$ with the same flavor\ncontent. This relation is also known from the bag model \\cite{Ja92}.\nSimilar to the confinement model calculation of Barone {\\it et al.}\n\\cite{Ba97} we find that $h_T^{(d)}(x)$ is negative at small $x$. In\ncontrast to $g_1^{(d)}(x)$, however, it might change sign although\nthe positive contribution appears to be small and diminishing with\nincreasing constituent quark mass.\n\nAs already indicated in the introduction the DIS processes which are \nsensitive to these distributions will provide access to the charge \nweighted combinations thereof. We will hence concentrate on this flavor \ncontent. In any event, as we will be discussing both the proton and \nthe neutron chiral odd distributions, other flavor combinations can \nstraightforwardly be extracted by disentangling the isoscalar \nand isovector pieces in eq (\\ref{qsquare}). 
In \nconnection with the chiral--odd transverse nucleon structure function \nwe also calculate its zeroth moments, which are referred to as the isoscalar \nand isovector nucleon tensor charges \\cite{Ja92},\n\\begin{eqnarray}\n\\Gamma^S_{T}(Q^2) &=& \\frac{18}{5} \\int_0^1 dx\\ \n\\left[ h_T^p\\left(x,Q^2\\right)\n+ h_T^n\\left(x,Q^2\\right)\\right]\n\\label{gtens} \\\\\n\\Gamma^V_{T}(Q^2) &=& 6 \\int_0^1 dx\\ \\left[ h_T^p\\left(x,Q^2\\right)\n- h_T^n\\left(x,Q^2\\right)\\right] \n\\label{gtenv}\n\\end{eqnarray}\nat both the low scale, $Q_0^2=0.4 {\\rm GeV}^2$ and a scale commensurate\nwith experiment, $Q^2= 4 {\\rm GeV}^2$. Of course, for the neutron we \nhave to reverse the signs of the isovector pieces in eq (\\ref{hltnjl}).\nIn eqs (\\ref{gtens}) and (\\ref{gtenv}) the normalization factors are \ndue to the separation into isosinglet and isovector contributions, \n{\\it cf.} eq (\\ref{qsquare}). Note that due to \n$\\int_0^1 dz P_{qq}^h(z)\\ne0$ the tensor charge is not protected against \nlogarithmic corrections. Our results for the valence quark approximation \nare summarized in table \\ref{tab_1}. For completeness we also add the vacuum \ncontribution to the tensor charges at the model scale $Q_0^2$. Their \nanalytic expressions are given in appendix D. Obviously this \nvacuum contribution is negligibly small. This is a strong\njustification of the valence quark approximation to the chiral \nodd structure functions. \n\\begin{table}[ht]\n\\caption{\\label{tab_1}\nNucleon tensor charges calculated from eqs (\\ref{gtens}) and\n(\\ref{gtenv}) as a function of the constituent quark mass $m$ in the\nNJL chiral--soliton model. The momentum scales are $Q_0^2=0.4{\\rm GeV}^2$\nand $Q^2=4.0{\\rm GeV}^2$. The numbers in parentheses in the respective\nupper rows include the negligible contribution from the polarized quark\nvacuum. 
We compare with results from the Lattice \\protect\\cite{Ao97},\nQCD sum rules \\protect\\cite{He95}, the constituent quark model with\nGoldstone boson effects \\protect\\cite{Su97} and a quark soliton model \ncalculation \\protect\\cite{Ki96} including multiplicative $1\/N_C$ corrections \nviolating PCAC in the similar case of the axial vector current \n\\protect\\cite{Al93}. Finally the predictions from the confinement model \nof ref \\protect\\cite{Ba97} with the associated momentum scales \n(in ${\\rm GeV}^2$) are shown.}\n~ \\vskip0.1cm\n\\centerline{\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}{c|lll|llll|ll}\n$m$ ({\\rm MeV}) &~~~350 &~~~400 &~~~450 \n& Lat. & ~SR & ~CQ & ~QS & ~$Q^2$ & CM\n\\\\ \\hline\n$\\Gamma^S_T(Q_0^2)$ & 0.80 (0.82)\n& 0.72 (0.76) & 0.67 (0.72)\n& 0.61 & 0.61 & 1.31 & 0.69 & 0.16 & 0.90 \\\\\n$\\Gamma^S_T(Q^2) $ & 0.73 & 0.65 & 0.61 \n&\\multicolumn{4}{c|}{no scale attributed} \n&25.0 & 0.72\\\\\n\\hline\n$\\Gamma^V_T(Q_0^2)$ & 0.88 (0.89) \n& 0.86 (0.87) & 0.86 (0.85) \n& 1.07 & 1.37 & 1.07 & 1.45 & 0.16 & 1.53 \\\\\n$\\Gamma^V_T(Q^2) $ & 0.80 & 0.78 & 0.77 \n&\\multicolumn{4}{c|}{no scale attributed}\n&25.0 & 1.22 \\\\\n\\end{tabular}}\n\\renewcommand{\\arraystretch}{1.0}\n\\end{table}\nA further justification comes from a recent\nstudy of the Gottfried sum rule within the same model \\cite{Wa98}.\nAlso in that case the contribution of the distorted quark vacuum\nto the relevant structure function turned out to be negligibly\nsmall.\n\nBesides justifying the valence quark approximation for the chiral \nodd distributions table \\ref{tab_1} contains the comparison to other \nmodel calculations of the nucleon tensor charges. We note that in obtaining \nthe isovector tensor charge $\\Gamma_T^V$ we have omitted contributions \nwhich are suppressed by $1\/N_C$ ({\\it cf.} appendix D). These contributions\narise when one adopts a non--symmetric ordering of the operators in \nthe space of the collective operators \\cite{Ki96}. 
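The tensor charges of eqs (gtens) and (gtenv) are straightforward numerical integrals over the transverse structure functions. A minimal sketch, in which the proton and neutron profiles are toy assumptions rather than the model prediction:

```python
# Numerical sketch of the tensor charges, eqs. (gtens)/(gtenv):
#   Gamma_T^S = (18/5) * int_0^1 dx [h_T^p(x) + h_T^n(x)],
#   Gamma_T^V =    6   * int_0^1 dx [h_T^p(x) - h_T^n(x)].
# The proton/neutron profiles below are toy assumptions, not the model
# prediction.

def moment(f, n=2000):
    """Zeroth moment on [0,1] via the midpoint rule."""
    h = 1.0 / n
    return h * sum(f(h * (i + 0.5)) for i in range(n))

hT_p = lambda x: 1.2 * (1.0 - x) ** 3    # toy proton profile
hT_n = lambda x: -0.1 * (1.0 - x) ** 3   # toy neutron profile

gamma_S = 18.0 / 5.0 * moment(lambda x: hT_p(x) + hT_n(x))
gamma_V = 6.0 * moment(lambda x: hT_p(x) - hT_n(x))
print(round(gamma_S, 3), round(gamma_V, 3))  # prints 0.99 1.95
```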
The main reason for
taking the symmetric ordering is that in the case of the isovector axial
charge, $g_A$, any non--symmetric ordering of the collective operators
leads to a sizable violation of PCAC unless the meson profile is
modified \cite{Al93}. These multiplicative $1/N_C$ corrections \cite{Da94}
may be the reason why our predictions for $\Gamma_T^V$ are somewhat lower
than those of other models. In the case of the flavor singlet component,
which does not have such corrections, our results compare nicely with
other model calculations except for the constituent quark model of
ref \cite{Su97}.

In figure \ref{fig_htnp} we display the transverse chiral odd proton
$h_T^{p}\left(x,Q_0^2\right)$ and neutron
$h_T^{n}\left(x,Q_0^2\right)$ structure functions at the low momentum
scale $Q_0^2$, while in figure \ref{fig_hlnp} we do the same for the
corresponding chiral odd longitudinal structure functions
$h_L^{p}\left(x,Q_0^2\right)$
and $h_L^{n}\left(x,Q_0^2\right)$.
\begin{figure}[ht]
\centerline{
\epsfig{figure=hTp.ps,height=8.5cm,width=8.0cm,angle=270}
\hspace{-0.5cm}
\epsfig{figure=hTn.ps,height=8.5cm,width=8.0cm,angle=270}}
\caption{\label{fig_htnp}
The valence quark approximation of the chiral--odd
nucleon structure functions as a function of Bjorken--$x$.
Left panel: $h_{T}^{p}\left(x ,Q_0^2\right)$ for constituent
quark masses $m=400 {\rm MeV}$ (solid line) and
$m=450 {\rm MeV}$ (long--dashed line).
Right panel: $h_{T}^{n}\left(x,Q_0^2\right)$.}
\end{figure}
\begin{figure}[ht]
\centerline{
\epsfig{figure=hLp.ps,height=8.5cm,width=8.0cm,angle=270}
\hspace{-0.5cm}
\epsfig{figure=hLn.ps,height=8.5cm,width=8.0cm,angle=270}}
\caption{\label{fig_hlnp}
The valence quark approximation of the chiral--odd
nucleon structure functions as a function of Bjorken--$x$.
Left panel: $h_{L}^{p}\left(x ,Q_0^2\right)$ for constituent
quark masses $m=400 {\rm MeV}$ (solid line)
and 
$m=450 {\rm MeV}$ (long--dashed line).
Right panel: $h_{L}^{n}\left(x,Q_0^2\right)$.}
\end{figure}
We observe that the structure
functions $h_{T}^{N}(x,Q_0^{2})$ and $h_{L}^{N}(x,Q_0^{2})$ are
reasonably localized in the interval $0\le x\le1$. In particular, this
is the case for the chiral odd structure functions of the neutron.
Nevertheless, a projection as in eq (\ref{fboost}) is required to
implement Lorentz covariance. In addition the computed structure functions
exhibit a pronounced maximum at $x\approx0.3$, which is smeared out as the
constituent quark mass $m$ increases. This can be understood as follows:
In our chiral soliton model the constituent mass serves as a coupling
constant of the quarks to the chiral field (see eqs (\ref{bosact})
and (\ref{hamil})). The valence quark level becomes more strongly bound
as the constituent quark mass increases. Hence the lower components of
the valence quark wave--function increase with $m$ and relativistic
effects become more important. This effect results in the above--mentioned
broadening of the maximum.

As discussed above, a sensible comparison with (eventually available)
data requires either evolving the model results upward according to
the QCD renormalization group equations or comparing them
with a low momentum scale parameterization of the leading
twist pieces of the structure functions. The latter requires the
knowledge of the structure functions at some scale in the whole
interval $x\in[0,1[$. At present no such data are available for
the chiral odd structure functions $h_T(x)$ and $h_L(x)$. Therefore,
in anticipation of results from {\em RHIC} and/or {\em HERMES}, we
apply leading order evolution procedures to evolve the structure
functions from the model scale, $Q_0^2=0.4 {\rm GeV}^2$, to
$Q^2=4{\rm GeV}^2$. In Figs. 
\\ref{fig_ht2p}a and \\ref{fig_ht2p}b we \ndisplay the results of the two step process of projection and evolution \nfor the twist--2 transverse structure function, $h_T^{p}(x,Q^2)$ and \n$h_L^{p(2)}(x,Q^2)$, respectively for a constituent quark mass\nof $m=400 {\\rm MeV}$. \n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hTpe.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=h2Lpe.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_ht2p}\nLeft panel: The evolution of $h_{T}^{p}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to $Q^2=4 {\\rm GeV}^2$ \n(long--dashed line) for the constituent quark mass $m=400 {\\rm MeV}$.\nRight panel: The evolution of the twist--2 contribution to the \nlongitudinal chiral odd structure function,\n$h_{L}^{p(2)}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for $m=400 {\\rm MeV}$.}\n\\end{figure}\nIn figure \\ref{fig_h2bllp} we present the evolution of \n$h_L^{p}(x)$ along with its decomposition into terms of the leading \ntwist--2 contribution, $2x \\int_{x}^1\\ dy h^p_T(y,Q^2)\/y^2$, and the \nremaining twist--3 piece, $\\overline{h}^p_L(x,Q^2)$.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=h2bLpe.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hLpe.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_h2bllp}\nLeft panel (\\protect\\ref{fig_h2bllp}a): \nThe evolution of the twist--3 contribution to the longitudinal \nchiral odd structure function, $\\overline{h}_L^p(x,Q^2)$\nalong with the corresponding twist--2 piece,\n$h_{L}^{p(2)}\\left(x ,Q^{2}\\right)$.\nRight panel (\\protect\\ref{fig_h2bllp}b): The evolution\nof $h_{L}^{p}\\left(x ,Q^{2}\\right)=h_{L}^{p(2)}\\left(x ,Q^{2}\\right)\n+\\overline{h}_L^p(x,Q^2)$ from $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for the constituent\nquark mass $m=400 {\\rm 
MeV}$.}\n\\end{figure}\nAs in the case of the polarized structure\nfunction, $g_2(x,Q^2)$, the non--trivial twist--3\npiece arises as a result of the binding of the constituent \nquarks through the pion fields acting as effective non--perturbative \ngluonic modes. The twist--3 contribution is evolved according to the \nlarge $N_C$ scheme \\cite{Bal96,Ali91,Io95} outlined in the preceding\nsection (and in Appendix C). Similarly in Figs. \\ref{fig_ht2n} and \n\\ref{fig_h2blln} we display the projection and \nevolution procedure to the twist--2 and 3 contribution to the neutron \nstructure functions, $h_L^{n(2)}(x,Q^2)$ and $\\overline{h}_L^n(x,Q^2)$,\nrespectively.\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=hTne.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=h2Lne.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_ht2n}\nLeft panel:The evolution\nof $h_{T}^{n}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for the constituent\nquark mass $m=400 {\\rm MeV}$.\nRight panel: The evolution\nof the twist--2 contribution to the longitudinal chiral odd\nstructure function,\n$h_{L}^{n(2)}\\left(x ,Q^{2}\\right)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ (long--dashed line) for $m=400 {\\rm MeV}$.}\n\\end{figure}\n\\begin{figure}[ht]\n\\centerline{\n\\epsfig{figure=h2bLne.ps,height=8.5cm,width=8.0cm,angle=270}\n\\hspace{-0.5cm}\n\\epsfig{figure=hLne.ps,height=8.5cm,width=8.0cm,angle=270}}\n\\caption{\\label{fig_h2blln}\nLeft panel: The evolution of the twist--3\ncontribution to the longitudinal chiral odd\nstructure function, $\\overline{h}_L^n(x,Q^2)$\nalong with the corresponding twist--2 piece,\n$h_{L}^{n(2)}\\left(x ,Q^{2}\\right)$.\nRight panel: The evolution\nof $h_{L}^{n}\\left(x ,Q^{2}\\right)=h_{L}^{n(2)}\\left(x ,Q^{2}\\right)\n+\\overline{h}_L^n(x,Q^2)$\nfrom $Q^2_0=0.4 {\\rm GeV}^2$ (solid line) to\n$Q^2=4 {\\rm GeV}^2$ 
(long--dashed line) for the constituent\nquark mass $m=400 {\\rm MeV}$.}\n\\end{figure}\n\nBesides the absolute magnitudes, the major difference between the chiral \nodd structure functions of the proton and the neutron is that the latter \ndrop to zero at a lower value of $x$. As can be observed from figure \n\\ref{fig_htnp} this is inherited from the model chiral odd structure \nfunction at the low momentum scale and can be linked to the smallness of \nthe down quark component of $h_T$, {\\it cf.} figure \\ref{fig_htud}. \nApparently the projection and evolution program does not alter this \npicture.\n\nWe would also like to compare our results from the NJL chiral soliton\nmodel to those obtained in other approaches. A MIT bag model calculation of \nthe isovector contribution $6(h_T^{p}-h_T^{n})$ has been presented \nin ref \\cite{Ja92}. In shape ({\\it e.g.} position of the maximum) that \nresult is quite similar to ours. However, the absolute value is a bit \nlarger in the MIT bag model. This reflects the fact that in the MIT bag \nmodel the isovector combinations of the axial and tensor charges turn \nout to be bigger than in the present model. Additionally, the QCD evolution \nof the MIT bag model prediction for $h_T$ has been studied in \nref \\cite{St93} utilizing the Peierls--Yoccoz projection as in ref \\cite{Sc91}. \nIn that case the maximum at $x\\approx0.5$ gets shifted to a value as low \nas $x=0.2$. Also the structure function becomes rather broad at the large \nscale. The fact that in that calculation the evolution effects are more \npronounced than in the present approach is caused by the significantly \nlower scale ($\\mu_{\\rm bag}=0.08{\\rm GeV}^2$) used in ref \\cite{St93}.\nOn the other hand our results \nare quite different to those obtained in the QCD sum rule approach of \nref \\cite{Io95}. The sum rule approach essentially predicts $h_T$ to be \nconstant in the interval $0.31$. 
This \ncan be cured by Lorentz boosting to the infinite momentum frame which \nis particularly suited for DIS processes. Although the un--boosted\nstructure functions are negligibly small at $x>1$ the transformation \nto this frame is essential and has sizable effects on the structure\nfunctions at moderate $x$. However, the most important \nissue when comparing the model predictions to (not yet available)\nexperimental data is the observation that the model represents \nQCD at a low momentum scale $Q_0^2$. {\\it A priori} this scale \nrepresents an additional parameter to the model calculation \nwhich, for consistency, has to be smaller than the ultraviolet \ncut--off of the model $\\Lambda^2=0.56{\\rm GeV}^2$. For the model \nunder consideration we previously fixed $Q_0^2$ when studying the \nunpolarized structure functions and found $Q_0^2=0.4{\\rm GeV}^2$. \nThe important logarithmic corrections\nto the model structure functions are then obtained within a generalized\nGLAP evolution program. In this context we have restricted ourselves\nto a leading order (in $\\alpha_{\\rm QCD}$) calculation because \nthe anomalous dimensions, which govern the QCD evolution, for the \ntwist--3 piece of the longitudinal part of the chiral odd structure \nare only known to that order. As the full evolution to the longitudinal\nstructure function involves both twist--2 and twist--3 pieces this \nrestriction is consistent. We have seen that the QCD evolution of the \nchiral odd structure function leads to sizable enhancements at low $x$, \n{\\it i.e.} in the region $0.01\\le x\\le 0.10$. In this respect the present \nsituation is similar to that for the polarized structure functions.\nA difference to the polarized structure function is that the lowest moment\nis not protected against logarithmic corrections, even at leading order\nin $\\alpha_{\\rm QCD}$. 
For the nucleon tensor charge we thus find a
reduction of about 10\% upon evolution to $Q^2=4.0{\rm GeV}^2$.
We have also compared the neutron and proton chiral odd structure
functions. This has been achieved by the inclusion of the $1/N_C$
cranking corrections. In absolute value the proton structure functions
are about twice as large as those of the neutron. Furthermore the
neutron structure functions drop to zero at a lower value of $x$.
These two effects can be linked to the down quark component of the
transverse nucleon chiral odd distribution functions being significantly
smaller than the component with up--quark quantum numbers. We have also
observed that neither of these features is affected by the evolution
program.

\bigskip
\section*{Acknowledgements}
\bigskip
This work has been supported in part by the
Deutsche Forschungsgemeinschaft (DFG) under contract Re 856/2-3,
and by the U.S. Department of Energy (D.O.E.) under
contract DE--FE--02--95ER40923.
One of us (LG) is grateful to G. R. Goldstein for helpful comments
and to K. A. Milton for encouragement and support.

\section{Introduction}
Finding the spectrum of a transition matrix is a very popular subject in graph theory and Markov chain theory. There are only a few techniques known to describe the exact spectrum of a Markov chain, and they usually work under very specific conditions, such as when the Markov chain is a random walk on a finite group, generated only by a conjugacy class \cite{DiaSha}. Most well-known examples where a transition matrix has been diagonalized usually rely on a combination of advanced representation theory, Fourier analysis, and combinatorial arguments \cite{DSaliola}, \cite{Hough}, \cite{Star}, \cite{hyp}, \cite{hyp2}, \cite{hyp3}, \cite{Brown}, \cite{Pike}. 
But even in most of these cases, there is no description of what an eigenbasis of the transition matrix would look like, although such an eigenbasis is in general needed as well in order to fully understand the chain.

In this work, we present the full spectrum of the simple random walk on complete, finite $d-$ary trees and a corresponding eigenbasis, and we use this information to produce a lower bound for the interchange process on the trees, which we conjecture is sharp. Consider the complete, $d-$ary tree $\mathcal T_h$ of height $h$, which has $n = 1+d+\dots+d^{h} = \frac{d^{h+1}-1}{d-1}$ vertices. We study the simple random walk on $\mathcal T_h$, whose transition matrix is denoted by $Q_h$, defined as follows: when we are at the root, we stay fixed with probability $1/(d+1)$ or move to each child with probability $1/(d+1)$. When we are at a leaf, we stay fixed with probability $d/(d+1)$; otherwise we move to the unique parent with probability $1/(d+1)$. At any other node, we move to each of the $d+1$ neighbors with probability $1/(d+1)$.

This is a well-studied Markov chain. Aldous \cite{Aldous} proved that the cover time is asymptotic to $2 h^2 d^{h+1} \log h/(h-1)$. The order of the spectral gap and the mixing time of this Markov chain have been widely known for a long time. In fact, the random walk on $\mathcal T_h$ is one of the most famous examples of a random walk not exhibiting cutoff (see Example 18.6 of \cite{AMPY4}). However, finding the exact value of the spectral gap has been an open question for years, let alone finding the entire spectrum and an eigenbasis of the transition matrix $Q_h$.

We denote by $\rho$ the root of $\mathcal T_h$, by $V(\mathcal T_h)$ the vertex set of $\mathcal T_h$, and by $E(\mathcal T_h)$ the set of edges of $\mathcal T_h$. Let $\ell: V(\mathcal T_h) \rightarrow \{0,\ldots, h\}$ denote the distance from the root. 
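The transition rule above is easy to realize numerically. The following sketch is illustrative code only; the level-order vertex indexing and all names are our own choices, not the paper's.

```python
import numpy as np

def dary_tree_walk(d, h):
    """Transition matrix Q_h of the walk described above, with vertices in
    level order: the root is 0 and the children of v are d*v+1, ..., d*v+d."""
    n = (d**(h + 1) - 1) // (d - 1)            # number of vertices
    Q = np.zeros((n, n))
    for v in range(n):
        nbrs = [d * v + c for c in range(1, d + 1) if d * v + c < n]
        if v > 0:
            nbrs.append((v - 1) // d)          # the unique parent
        for w in nbrs:
            Q[v, w] = 1 / (d + 1)
        Q[v, v] = 1 - Q[v].sum()               # holding probability at root/leaves
    return Q

Q = dary_tree_walk(2, 3)                       # binary tree of height 3, 15 vertices
assert np.allclose(Q.sum(axis=1), 1)           # each row is a probability vector
assert np.isclose(Q[0, 0], 1 / 3)              # root holds with probability 1/(d+1)
```

The leaves hold with probability $d/(d+1)$ and interior vertices never hold, exactly as in the verbal description.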
For every node $v$, let $\mathcal T^v$ be the complete $d-$ary subtree rooted at $v$, namely consisting of $v$ and all vertices of $V( \mathcal T_h)$ that are descendants of $v$ in $\mathcal T_h$. Let $\mathcal T_i^v$ be the complete $d-$ary subtree of $\mathcal T^v$ rooted at the $i-$th child of $v$.

The next theorem contains the first result of this paper: it presents the eigenvalues and an eigenbasis of $Q_h$.
\begin{theorem}\label{thm:spectrum} 
	\begin{enumerate} [label = (\alph*)]
		\item $Q_h$ is diagonalizable with $1$ being an eigenvalue with multiplicity 1. Every other eigenvalue $\lambda\neq 1$ of $Q_h$ is of the form
		\begin{equation}\label{eq:lambda:x:thm}
		\lambda = \frac{d}{d+1}\left (x+ \frac{1}{xd}\right ),
		\end{equation}
		where $x\neq \pm \frac{1}{\sqrt d}$ is a solution of one of the following $h+1$ equations:
		\begin{equation} \label{eq:x:sym:thm}
		d^{h+1}x^{2h+2} = 1
		\end{equation}
		and 
		\begin{equation} \label{eq:x:antisym}
		d^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0,\quad \text{for some } 0 \leq k \leq h-1.
		\end{equation}
		
		Conversely, each solution $x\neq \pm \frac{1}{\sqrt d}$ of these equations corresponds to an eigenvalue $\lambda$ according to \eqref{eq:lambda:x:thm}. 
		For each of these equations, if $x$ is a solution then so is $\frac{1}{xd}$. Both $x$ and $\frac{1}{xd}$ correspond to the same $\lambda$. 
The correspondence between $x$ and $\\lambda$ is $2$-to-$1$.\n\t\t\n\t\t\\item \\label{thm:spectrum:eigenvector} For each solution $x\\neq \\pm \\frac{1}{\\sqrt d}$ of \\eqref{eq:x:sym:thm}, an eigenvector $f_{\\lambda}$ with respect to $\\lambda$ is given by the formula \n\t\t\\begin{equation}\\label{eq:thm:spectrum:sym}\n\t\tf_{\\lambda}(v) = \\frac{dx^{2}-x}{dx^{2}-1} x^i + \\frac{x-1}{dx^{2}-1}\\frac{1}{d^{i} x^{i}} \\quad\\text{for every $v$ with $\\ell(v)=i$, $0\\le i\\le h$}.\n\t\t\\end{equation}\n\t\t\n\t\tFor each $0 \\leq k \\leq h-1$, each solution $x\\neq \\pm \\frac{1}{\\sqrt d}$ of \\eqref{eq:x:antisym}, each $v \\in V(\\mathcal T_h)$ such that $\\ell(v)= h-1-k$, and each $j\\in [1, \\dots, d-1]$, an eigenvector $f_{v, j, j+1}$ with respect to $\\lambda$ is given by the formula \n\t\t\\begin{equation}\\label{eq:thm:spectrum:antisym}\n\t\tf_{ v, j, j+1}(w) = \n\t\t\\begin{cases}\n\t\t& \\frac{dx^{i+2}}{dx^{2}-1} - \\frac{1}{(dx^{2}-1)d^{i} x^{i}} \\quad \\mbox{ for } w\\in \\mathcal T^v_j,\\mbox{ where } i= \\ell(w) -h+k,\\\\\n\t\t& -\\frac{dx^{i+2}}{dx^{2}-1} + \\frac{1}{(dx^{2}-1)d^{i }x^{i}} \\quad \\mbox{ for } w\\in \\mathcal T^v_{j+1}, \\mbox{ where } i= \\ell(w) -h+k,\\\\\n\t\t& 0, \\mbox{ otherwise.} \n\t\t\\end{cases}\n\t\t\\end{equation}\n\t\t\\item The collection of these eigenvectors together with the all-1 vector form an eigenbasis of $Q_h$.\n\t\\end{enumerate}\n\n\\end{theorem}\n\n\n\nIn Lemma \\ref{lm:sym:antisym} and Figure \\ref{fig:sym:antisym}, we describe and illustrate the eigenvectors in more detail.\n\nThe idea behind the proof is to consider appropriate projections of the random walk. For example, let $X_t$ be the state of the random walk at time $t$ and let $Y_t$ be the distance of $X_t$ from the root. Then $Y_t$ is a Markov chain on $[0,h],$ whose eigenvalues are also eigenvalues of $Q_h$. Also, the eigenvectors of $Y_t$ lift to give the eigenvectors presented in \\eqref{eq:thm:spectrum:sym}. 
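For small trees the symmetric part of the theorem can be checked directly. The sketch below is illustrative (it assumes a level-order vertex indexing; the variable names are ours): it computes the spectrum of $Q_h$ numerically and verifies that every non-real root of \eqref{eq:x:sym:thm} yields an eigenvalue via \eqref{eq:lambda:x:thm}.

```python
import numpy as np

d, h = 2, 3
n = (d**(h + 1) - 1) // (d - 1)
Q = np.zeros((n, n))
for v in range(n):
    nbrs = [d * v + c for c in range(1, d + 1) if d * v + c < n]
    if v > 0:
        nbrs.append((v - 1) // d)
    for w in nbrs:
        Q[v, w] = 1 / (d + 1)
    Q[v, v] = 1 - Q[v].sum()
spec = np.linalg.eigvals(Q)

# the roots of d^{h+1} x^{2h+2} = 1 are x = d^{-1/2} e^{i*pi*k/(h+1)}
for k in range(2 * h + 2):
    x = d**-0.5 * np.exp(1j * np.pi * k / (h + 1))
    if abs(x.imag) < 1e-12:                 # skip the real roots x = +-1/sqrt(d)
        continue
    lam = d / (d + 1) * (x + 1 / (x * d))   # real up to rounding
    assert abs(lam.imag) < 1e-12
    assert np.min(np.abs(spec - lam.real)) < 1e-8
```

For $d=2$, $h=3$ this confirms that $2/3$, $0$ and $-2/3$ are eigenvalues of $Q_3$, as predicted by the symmetric family.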
This computation is not going to give us the full spectrum, however. \n\nFor example, in the case of the binary tree, another type of projection to consider is as follows. We consider the process $W_t,$ which is equal to $-Y_t$ if $X_t \\in \\mathcal T^{\\rho}_1$ and equal to $Y_t$ otherwise. The second largest eigenvalue can be derived by this new process, while the eigenvectors are of the form presented in \\eqref{eq:thm:spectrum:antisym}. The reason why this is the right process to study is hidden in the mixing time of the random walk on $\\mathcal T_h$. A coupling argument roughly says that we have to wait until $X_t$ reaches the root $\\rho$. The first time that $X_t$ hits $\\rho$ is captured by $W_t$, since $W_t$ is a Markov chain on $[-h,h]$, where the bias is towards the ends and away from zero. The projections that we consider form birth and death processes, whose mixing properties have been thoroughly studied by Ding, Lubetzky, and Peres \\cite{DLP}. To capture the entire spectrum, our method is to find in each eigenspace a well-structured eigenvector, which occurs by considering an appropriate projection.\n\nOur analysis has immediate applications to card shuffling, namely the interchange process on $\\mathcal T_h$, and to the exclusion process. We enumerate the nodes in $V (\\mathcal T _h)$ and we assign cards to the nodes. At time zero, card $i$ is assigned to node $i$. The interchange process on $\\mathcal T_h$ chooses an edge uniformly at random and then flips a fair coin. If heads, interchange the cards on the ends of $e$; if tails, stay fixed. A configuration of the deck corresponds to an element of the symmetric group.\n\nLet $g \\in S_n$. Let $P$ be the transition matrix of the interchange process on the complete, finite $d-$ary tree $\\mathcal T_h$ and let $P^t_{id}(g)$ be the probability that we are at $g$ after $t$ steps, given that we start at the identity. 
We define the total variation distance between $P^t_{id}$ and the uniform measure $U$ to be 
\begin{equation}
	d(t)= \frac{1}{2} \sum_{x \in S_n} \left \vert P^t_{id}(x) -\frac{1}{n!}\right \vert. \nonumber
\end{equation}

A celebrated result concerning the interchange process was the proof of Aldous' conjecture \cite[Theorem 1.1]{CLR}, which states that the spectral gap of $P$ is the same as the spectral gap of the Markov chain performed by the ace of spades. Adjusting our computations, we now get the following result.
\begin{theorem}\label{thm:lowerbound}
	For the interchange process on the complete $d$-ary tree of depth $h$, we have that 
	\begin{itemize}
		\item[(a)] the spectral gap of the transition matrix is $\frac{(d-1)^{2}}{2(n-1) d^{h+1}} + O \left (\frac{\log_{d} n}{n^{3}}\right )$, 
		\item[(b)] and if $t=\frac{1}{d-1}n^{2}\log n- \frac{1}{d-1}n^2 \log \left( \frac{1}{\varepsilon} \right) + O\left ( n^{2}\right ) $, then 
		$$d(t) \geq 1- \varepsilon,$$
		where $\varepsilon$ is any positive constant. 
	\end{itemize}
\end{theorem}
This is already much faster than the interchange process on the path, another card shuffle that uses $n-1$ transpositions, which Lacoin \cite{Lacoin} recently proved exhibits cutoff at $\frac{1}{2\pi^2} n^3 \log n$. We conjecture that the lower bound in part $(b)$ of Theorem \ref{thm:lowerbound} is sharp and that the interchange process on $\mathcal T_h$ exhibits cutoff at $\frac{1}{d-1} n^{2}\log n$.

We can get lower bounds for the mixing time of another well-studied process, the exclusion process on the complete $d$-ary tree. This is a famous interacting particle system, according to which at time zero, $k \leq n/2$ nodes of the tree are occupied by indistinguishable particles. At time $t$, we pick an edge uniformly at random and exchange the contents of its two endpoints. 
Computations similar to those in the proof of Theorem \ref{thm:lowerbound} give that if $t=\frac{1}{d-1}n^{2}\log k- \frac{1}{d-1}n^2 \log \left( \frac{1}{\varepsilon} \right) + o\left ( n^{2} \log k\right ) $, then 
$$d(t) \geq 1- \varepsilon,$$
where $\varepsilon>0$ is a constant. Combining Oliveira's result \cite{OL} with Theorem \ref{thm:lowerbound} $(b)$, we get that the order of the mixing time of the exclusion process on the complete $d-$ary tree is $n^2 \log k$. 

As potential open questions, we suggest trying to find the spectrum, or just the exact value of the spectral gap, for the simple random walk on finite Galton-Watson trees or for the frog model as presented in \cite{Jon}.
 
\section{The spectrum of $Q_h$}
This section is devoted to the proof of Theorem \ref{thm:spectrum}.

Let $\lambda$ be an eigenvalue of $Q_h$ and let $E(\lambda)$ be the corresponding eigenspace. We first show that there exists an eigenvector in $E(\lambda)$ that has the form described in Theorem \ref{thm:spectrum} \ref{thm:spectrum:eigenvector}.
\begin{lemma}\label{lm:sym:antisym}
The eigenspace $E(\lambda)$ contains an eigenvector $f$ that has one of the following forms:
\begin{enumerate}
\item[(a)] [Completely symmetric] $f(v)=f(w)$ for every $v,w \in V(\mathcal T_h) $ such that $\ell(v)=\ell(w)$. In this case we will call $f$ completely symmetric for $\mathcal T_h$;
\item[(b)] [Pseudo anti-symmetric] There is a node $v$ and $i,j \in \{ 1,\ldots, d\}$ such that $f(w)=0$ for every $w \notin V(\mathcal T_i^v\cup \mathcal T_j^v)$, $f \vert_{\mathcal T_i^v}$ and $f \vert_{\mathcal T_j^v}$ are completely symmetric, and $f \vert_{\mathcal T_i^v}=-f \vert_{\mathcal T_j^v}$. 
We call such $f$ pseudo anti-symmetric.\n\\end{enumerate}\n\\end{lemma}\nThe following illustrations explain what the described eigenvectors look like for binary trees.\n\n\\tikzset{every tree node\/.style={minimum width=2em,draw,circle},\n blank\/.style={draw=none},\n edge from parent\/.style=\n {draw,edge from parent path={(\\tikzparentnode) -- (\\tikzchildnode)}},\n level distance=1.5cm}\n \n\n\\begin{figure}[H] \n\t\\centering\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\\Tree\n[.$y_0$ \n[.$y_1$ \n[.$y_2$\n[.$y_3$ ]\n[.$y_3$ ]\n]\n[.$y_2$\n[.$y_3$ ]\n[.$y_3$ ]\n]]\n[.$y_1$\n[.$y_2$ \n[.$y_3$ ]\n[.$y_3$ ]]\n[.$y_2$ \n[.$y_3$ ]\n[.$y_3$ ]]]\n]\n\t\\end{minipage}%\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\\Tree\n[.0 \n[.0 \n[.$y_0$ \n[.$y_1$ ]\n[.$y_1$ ]]\n[.-$y_0$ \n[.-$y_1$ ]\n[.-$y_1$ ]]]\n[.0 \n[.0 \n[.0 ]\n[.0 ]]\n[.0 \n[.0 ]\n[.0 ]\n]]\n]\n\t\\end{minipage}\t\n\\caption{Completely symmetric eigenvectors (left) and Pseudo anti-symmetric eigenvectors (right)}\n\\label{fig:sym:antisym}\n\t\\end{figure}\n\n\n\n\\begin{proof}\nAssume that $E(\\lambda)$ does not contain a completely symmetric eigenvector. Let $f$ be a nonzero element of $E(\\lambda)$. Since $f$ is not completely symmetric, there exist vertices of the same level at which $f$ takes different values. Let $v$ be a vertex with the largest $l(v)$ such that there are at least two of its children, say the $i$-th and $j$-th children, at which $f$ has different values. For example, if there are two leaves $u$ and $w$ at which $f(u)\\neq f(w)$ that have the same parent $v'$ then we simply take $v$ to be $v'$. \n\nBy the choice of $v$, $f\\vert _{\\mathcal T_k^v}$ is completely symmetric for all $k\\in [d]$. Indeed, let $u$ be the $k$-th child of $v$. We have $\\mathcal T_k^v = \\mathcal T^{u}$. By the choice of $v$, $f$ takes the same value at all children of $u$. Let $u_1, u_2$ be two arbitrary children of $u$. 
Again by the choice of $v$, $f$ takes the same value, denoted by $f_1$, at all children of $u_1$, and the same value, denoted by $f_2$, at all children of $u_2$. Since $f$ is an eigenvector of $Q_h$, 
\begin{equation}
\lambda f(u_1) = \frac{d}{d+1} f_1+\frac{1}{d+1} f(u) \quad \text{and}\quad \lambda f(u_2) = \frac{d}{d+1} f_2+\frac{1}{d+1} f(u). \nonumber
\end{equation}
Since $f(u_1) = f(u_2)$, $f_1 = f_2$. Thus, $f$ takes the same value at all grandchildren of $u$. Repeating this argument shows that $f\vert _{\mathcal T^u}$ is completely symmetric.

Consider the vector $g$ obtained from $f$ by switching its values on $\mathcal T^{v}_{i}$ and $\mathcal T^{v}_{j}$. More specifically, $g\vert _{\mathcal T^{v}_{i}} = f\vert _{\mathcal T^{v}_{j}}$, $g\vert _{\mathcal T^{v}_{j}} = f\vert _{\mathcal T^{v}_{i}}$, and $g = f$ elsewhere.
 
By the symmetry of the tree and the matrix $Q_h$, $g$ also belongs to $E(\lambda)$. So does $f-g$, which we denote by $\psi$. Observe that $\psi$ is an eigenvector that is 0 everywhere except on $\mathcal T^{v}_{i} \cup \mathcal T^{v}_{j}$ and $\psi \vert_{\mathcal T_i^v}=f\vert_{\mathcal T_i^v} - f\vert_{\mathcal T_j^v}=-\psi \vert_{\mathcal T_j^v}$. Moreover, $\psi$ is completely symmetric when restricted to $\mathcal T^{v}_{i}$ and $ \mathcal T^{v}_{j}$ because both $f$ and $g$ are, as seen above. Thus, $\psi \in E(\lambda)$ and is pseudo anti-symmetric.
\end{proof}

\subsection{Completely symmetric eigenvectors}\label{subsection:sym}

In this section, we describe completely symmetric eigenvectors. 
We shall show that the completely symmetric eigenvectors of $Q_h$ are given by the formula \eqref{eq:thm:spectrum:sym} and correspond to $\lambda$ and $x$ satisfying \eqref{eq:lambda:x:thm} and \eqref{eq:x:sym:thm} as in Theorem \ref{thm:spectrum}.

Since a completely symmetric eigenvector of $Q_h$ has the same value at every node of the same level (see Figure \ref{fig:1}), we can project it onto the path $[0, h]$ and obtain an eigenvector of the corresponding random walk on the path.
 
\begin{figure}[H]
	\centering
\begin{tikzpicture}
\Tree
[.$y_0$ 
 [.$y_{1}$ 
 [.$y_{2}$ 
 [.$\ldots$ ]
 [.$\ldots$ ]]
 [.$y_{2}$ 
 [.$\ldots$ ]
 [.$\ldots$ ]]]
 [.$y_{1}$ 
 [.$y_2$ 
 [.$\ldots$ ]
 [.$\ldots$ ]]
 [.$y_2$ 
 [.$\ldots$ ]
 [.$\ldots$ ]
 ]]
] 
\end{tikzpicture}
\caption{Completely symmetric eigenvectors}
\label{fig:1}
\end{figure}

\begin{lemma} \label{lm:sym}
	There are exactly $h+1$ linearly independent completely symmetric eigenvectors of $Q_h$.
\end{lemma}
\begin{proof} Each completely symmetric eigenvector of $Q_h$ corresponds one-to-one to an eigenvector of the following projection onto the path $[0, h]$ with transition matrix $R_h$:
	\begin{itemize}
		\item $R_h(0, 1) = \frac{d}{d+1}, R_h(0, 0) = \frac{1}{d+1}$,
		\item $R_h(l, l-1) = \frac{1}{d+1}$, $R_h(l, l+1) = \frac{d}{d+1}$ for all $1\le l\le h-1$,
		\item $R_h(h, h-1) = \frac{1}{d+1}$, $R_h(h, h) = \frac{d}{d+1}$.
	\end{itemize}

	Since $R_h$ is reversible with respect to the measure $\pi := [1, d, d^{2}, \dots, d^{h}]$ (its stationary distribution after normalization), the matrix $A:= D^{1/2} R_h D^{-1/2}$ is symmetric, where $D$ is the diagonal matrix with $D(x, x)= \pi(x)$. Therefore, $A$ is diagonalizable and so is $R_h$. In other words, $R_h$ has $h+1$ independent real eigenvectors. 
This implies that $Q_h$ has $h+1$ linearly independent completely symmetric eigenvectors.
\end{proof}

\begin{lemma}\label{lm:sym:detail}
	The matrix $R_h$ has 1 as an eigenvalue with multiplicity 1. Each of the remaining $h$ eigenvalues $\lambda\neq 1$ of $R_h$ is of the form 
	$$\lambda = \frac{d}{d+1}\left (x+ \frac{1}{xd}\right )$$
	where $x \neq \pm \frac{1}{\sqrt{d}}$ is a non-real solution of the equation
	\begin{equation} 
	d^{h+1}x^{2h+2} = 1.\nonumber
	\end{equation}
	This equation has exactly $2h$ such solutions. If $x$ is a solution, so is $\frac{1}{xd}$. There is a 2-to-1 correspondence between $x$ and $\lambda$. An eigenvector $y = (y_0, y_1, \dots, y_h)$ of $R_h$ with respect to $\lambda$ is given by 
	\begin{equation} 
	y_i = \frac{dx^{2}-x}{dx^{2}-1} x^i + \frac{x-1}{dx^{2}-1}\frac{1}{d^{i} x^{i}} \quad\text{for every $0\le i\le h$}.\nonumber
	\end{equation}
	The vector $f:\mathcal T_h\to \mathbb R$ that takes value $y_i$ at all nodes of depth $i$ is an eigenvector of $Q_h$ with respect to $\lambda$.
\end{lemma}
\begin{proof}
	Let $\lambda$ be an eigenvalue of $R_h$ and $y = (y_0, y_1, \dots, y_h)$ be an eigenvector corresponding to $\lambda$. We have
	\begin{enumerate}[label=(R\arabic{*}), ref=R\arabic{*}]
		\item\label{eq:R:0} $\frac{d}{d+1} y_{1} +\frac{1}{d+1} y_{0} = \lambda y_0$,
		\item\label{eq:R:i} $\frac{1}{d+1} y_{i-1} +\frac{d}{d+1} y_{i+1} = \lambda y_i$ for all $1\le i\le h-1$,
		\item\label{eq:R:h} $\frac{1}{d+1} y_{h-1} +\frac{d}{d+1} y_{h} = \lambda y_h$.
	\end{enumerate}

Since $y$ is not the zero vector, the above equations imply that $y_0\neq 0$. Without loss of generality, we assume $y_0=1$. 
\n\t\n\tLet $x_1, x_2$ be the solutions to the characteristic equation of \\eqref{eq:R:i}: \n\t$$\\frac{1}{d+1} - \\lambda x + \\frac{d}{d+1}x^{2} = 0$$\n\tor equivalently\n\t\\begin{equation}\\label{eq:x:lambda:1}\n\td x^{2} - (d+1)\\lambda x+1 = 0.\n\t\\end{equation}\n\t\n\tBy \\eqref{eq:x:lambda:1}, we have\n\t$$x_1 x_2 = \\frac{1}{d}$$\n\tand \n\t\\begin{equation} \n\t\\lambda = \\frac{d}{d+1}(x_1+ x_2) = \\frac{d}{d+1}\\left (x_1+ \\frac{1}{x_1 d}\\right ) .\\label{eq:lambda:x}\n\t\\end{equation}\n\t\n\t\n\t\n\tIf $x_1\\neq x_2$ then we can write $y_0 = \\alpha_1 - \\alpha_2$, $y_1 = \\alpha_1 x_1 -\\alpha_2 x_2$ for some $\\alpha_1, \\alpha_2$. We show that for all $0\\le i\\le h$,\n\t\\begin{equation}\\label{eq:recurrent:y:1}\n\ty_i = \\alpha_1 x_1^{i} - \\alpha_2 x_2^{i}.\n\t\\end{equation}\n\tIndeed, assuming that \\eqref{eq:recurrent:y:1} holds for $y_0, \\dots, y_i$ for some $1\\le i\\le h-1$ then by \\eqref{eq:x:lambda:1},\n\t\\begin{eqnarray}\n\t\\lambda y_i - \\frac{1}{d+1} y_{i-1} = \\alpha_1 x_1^{i-1}\\left (\\lambda x_1 - \\frac{1}{d+1}\\right )-\\alpha_2 x_2^{i-1}\\left (\\lambda x_2 - \\frac{1}{d+1}\\right ) = \\frac{d}{d+1} \\left (\\alpha_1 x_1^{i+1}- \\alpha_2 x_2^{i+1}\\right ).\\nonumber\n\t\\end{eqnarray}\n\tThus, by \\eqref{eq:R:i}, \n\t\\begin{equation}\\label{key}\n\t\\frac{d}{d+1} y_{i+1} = \\frac{d}{d+1} \\alpha_1 x_1^{i+1}- \\frac{d}{d+1} \\alpha_2 x_2^{i+1}\\nonumber\n\t\\end{equation}\n\tand so\n\t\\begin{equation}\\label{key}\n\ty_{i+1} = \\alpha_1 x_1^{i+1}- \\alpha_2 x_2^{i+1}.\\nonumber\n\t\\end{equation}\n\tThus, \\eqref{eq:recurrent:y:1} also holds for $y_{i+1}$ and hence, for all $y_0, \\dots, y_h$. 
\n\t\n\tSimilarly, by \\eqref{eq:R:h}, we get\n\t\\begin{eqnarray}\n\t\\frac{d}{d+1} y_h &=&\\lambda y_h - \\frac{1}{d+1} y_{h-1} = \\alpha_1 x_1^{h-1}\\left (\\lambda x_1 - \\frac{1}{d+1}\\right )-\\alpha_2 x_2^{h-1}\\left (\\lambda x_2 - \\frac{1}{d+1}\\right )\\nonumber\\\\\n\t& =& \\frac{d}{d+1} \\left (\\alpha_1 x_1^{h+1}- \\alpha_2 x_2^{h+1}\\right ).\\nonumber\n\t\\end{eqnarray}\n\tThus, \n\t\\begin{equation}\\label{eq:R:h:1}\n\t\\alpha_1 x_1^{h+1}- \\alpha_2 x_2^{h+1} = \\alpha_1 x_1^{h}- \\alpha_2 x_2^{h}\n\t\\end{equation}\n\tas they are both equal to $y_h$.\n\t\n\t\n\tWriting $x = x_1$, so that $x_2 = \\frac{1}{xd}$, Equation \\eqref{eq:R:0} becomes, by \\eqref{eq:recurrent:y:1},\n\t\\begin{equation}\\label{eq:R:0:1}\n\td(\\alpha_1x_1 - \\alpha_2 x_2) = \\left (xd+\\frac{1}{x} - 1\\right ) (\\alpha_1 - \\alpha_2).\n\t\\end{equation}\n\t\n\t\n\t\n\tFor simplicity, we write $\\alpha = \\alpha_1$. By \\eqref{eq:recurrent:y:1} for $i=0$, we get\n\t$$\\alpha_2 = \\alpha-1.$$\n\t\n\n\tEquation \\eqref{eq:R:0:1} then becomes\n\t\\begin{equation} \n\td\\alpha x - \\frac{\\alpha-1}{x} = dx+\\frac{1}{x} - 1\\nonumber\n\t\\end{equation}\n\twhich gives\n\t\\begin{equation}\\label{eq:R:0:2}\n\t\\alpha_1 = \\alpha = \\frac{dx^{2}-x}{dx^{2}-1} \\quad\\text{and}\\quad \\alpha_2 = \\alpha-1 = \\frac{1-x}{dx^{2}-1}. \n\t\\end{equation}\n\t\n \n\t\n\tPlugging \\eqref{eq:R:0:2} into \\eqref{eq:R:h:1} and taking into account $x_2 = \\frac{1}{xd}$, we get\n\t\\begin{equation}\\label{key}\n\t(dx-1)(x-1)(d^{h+1}x^{2h+2}-1) = 0.\\nonumber\n\t\\end{equation}\n\t\n\tIf $x = 1$ then $\\alpha_2 = \\alpha-1 = 0$ by \\eqref{eq:R:0:2}, and so $y = \\alpha(1, \\dots, 1)$, which is an eigenvector for the eigenvalue 1. Since $\\lambda\\neq 1$, $x\\neq 1$. If $x=\\frac{1}{d}$ then $x_2 = \\frac{1}{xd} = 1$. By the symmetry of $x_1$ and $x_2$, this also corresponds to $\\lambda=1$, which is not the case. 
\n\t\n\tThus, $x$ satisfies\n\t\\begin{equation} \n\td^{h+1}x^{2h+2}-1=0.\\nonumber\n\t\\end{equation}\n\t \n\t\n\tThis equation has $2h$ non-real solutions and 2 real solutions $\\pm \\frac{1}{\\sqrt{d}}$. For each non-real solution $x_1$, observe that $x_2:=\\frac{1}{dx_1}$ is also a non-real solution. Note that $x_1\\neq x_2$ and by setting $\\lambda$ and $y$ as in \\eqref{eq:lambda:x} and \\eqref{eq:recurrent:y:1} with $\\alpha_1$ and $\\alpha_2$ as in \\eqref{eq:R:0:2}, one can check that $y$ is indeed an eigenvector corresponding to $\\lambda$. Thus, these $2h$ non-real solutions correspond to exactly $h$ eigenvalues $\\lambda\\neq 1$ of $R_h$. Since $R_h$ has exactly $h+1$ eigenvalues, these are all.\n\\end{proof}\n\n\n\\subsection{Pseudo anti-symmetric eigenvectors}\\label{subsection:anti}\nIn this section, we describe pseudo anti-symmetric eigenvectors. We shall show that the pseudo anti-symmetric eigenvectors of $Q_h$ are given by the formula \\eqref{eq:thm:spectrum:antisym} and correspond to $\\lambda$ and $x$ satisfying \\eqref{eq:lambda:x:thm} and \\eqref{eq:x:antisym} as in Theorem \\ref{thm:spectrum}.\n\n\nConsider a pseudo anti-symmetric eigenvector $f$ with node $v$ and indices $i, j$ as described in Lemma \\ref{lm:sym:antisym} (see Figure \\ref{fig:sym:antisym}). Let $k = h-\\ell(v)-1\\in [0, h-1]$. As in Figure \\ref{fig:sym:antisym} and Figure \\ref{fig:anti}, let $y=(y_0, y_1, \\dots, y_k)$ where $y_0$ is the value of $f$ at the $i$-th child of $v$, which is denoted by $u$, $y_1$ is the value of $f$ at the children of $u$ and so on. With these notations, we also write $f$ as $f_{y, v, i, j}$. 
Observe that $y$ is an eigenvector of the following matrix $S_k$:\n\\begin{itemize}\n\t\\item $S_k(0, 1) = \\frac{d}{d+1}$,\n\t\\item $S_k(l, l-1) = \\frac{1}{d+1}$, $S_k(l, l+1) = \\frac{d}{d+1}$ for all $1\\le l\\le k-1$,\n\t\\item $S_k(k, k-1) = \\frac{1}{d+1}$, $S_k(k, k) = \\frac{d}{d+1}$.\n\\end{itemize}\n\nConversely, for any eigenvector $y$ of $S_k$, for any node $v$ at depth $h-k-1$ and for any choice of $i, j\\in [1, d]$ with $i\\neq j$, we can lift it to a pseudo anti-symmetric eigenvector $f_{y, v, i, j}$.\n\n\n\\begin{figure}[H]\t\\centering\n\t\\begin{tikzpicture}\n\t\\Tree\n\t[.0\n\t[.$0$ \n\t[.$y_0$ \n\t[.$y_1$ \n\t[.$y_2$ ]\n\t[.$y_2$ ]]\n\t[.$y_1$ \n\t[.$y_2$ ]\n\t[.$y_2$ ]]]\n\t[.$-y_{0}$ \n\t[.$-y_1$ \n\t[.$-y_2$ ]\n\t[.$-y_2$ ]]\n\t[.$-y_1$ \n\t[.$-y_2$ ]\n\t[.$-y_2$ ]\n\t]]\n\t]\n\t[.$0$ \n\t[.$0$ \n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]\n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]]\n\t[.$0$ \n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]]\n\t[.$0$ \n\t[.$0$ ]\n\t[.$0$ ]\n\t]]\n\t]\n\t]\n\t\\end{tikzpicture}\n\t\\caption{Pseudo anti-symmetric eigenvectors}\\label{fig:anti}\n\n\\end{figure}\n\n \\begin{lemma} \\label{lm:antisym}\n\tFor each $k\\in [0, h-1]$, $S_k$ has $k+1$ linearly independent eigenvectors. For each eigenvector $y$ of $S_k$ and for each $v$ with $\\ell(v) = h-k-1$, there are $d-1$ linearly independent pseudo anti-symmetric eigenvectors of $Q_h$ of the form $f_{y,v,i,j}$.\n\\end{lemma}\n\\begin{proof}\n\tSince $S_k$ differs from $R_k$ only at the $(0, 0)$ entry, it also satisfies the equation $\\pi(x) S_k(x, y) = \\pi(y)S_k(y, x)$ where $\\pi = [1, d, d^{2}, \\dots, d^{k}]$. Thus, like $R_k$, the matrix $DS_k D^{-1}$ is symmetric where $D$ is the diagonal matrix with $D(x, x) = \\pi(x)^{1\/2}$. Being symmetric, $D S_k D^{-1}$ has $k+1$ linearly independent eigenvectors, and hence so does $S_k$. \n\t\n\tFor each eigenvector $y$ of $S_k$, we create $d-1$ linearly independent vectors $f_{y, v, i, i+1}$ for $1\\le i\\le d-1$. It is clear that any $f_{y, v, i, j}$ can be written as a linear combination of these vectors. 
This completes the proof.\n\\end{proof}\nWe now describe the eigenvectors of $S_k$.\n\\begin{lemma}\\label{lm:anti:detail}\n\tEach of the $k+1$ eigenvalues $\\lambda$ of $S_k$ is of the form \n\t$$\\lambda = \\frac{d}{d+1}\\left (x+ \\frac{1}{dx}\\right )$$\n\twhere $x\\neq \\pm \\frac{1}{\\sqrt d}$ is a solution of the equation\n\t\\begin{equation} \n\td^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0.\\nonumber\n\t\\end{equation}\n\tThis equation has $2k+2$ solutions that differ from $\\pm \\frac{1}{\\sqrt d}$. If $x$ is a solution, so is $\\frac{1}{dx}$. There is a 2-to-1 correspondence between $x$ and $\\lambda$. An eigenvector $y = (y_0, y_1, \\dots, y_k)$ of $S_k$ with respect to $\\lambda$ is given by \n\t\\begin{equation} \n\ty_i = \\frac{dx^{i+2}}{dx^{2}-1} - \\frac{1}{(dx^{2}-1)d^{i} x^{i}} \\quad\\text{for every $0\\le i\\le k$}.\\nonumber\n\t\\end{equation}\n\\end{lemma}\n\n\n\\begin{proof}\n\tLet $\\lambda$ be an eigenvalue of $S_k$ and $y = (y_0, y_1, \\dots, y_k)$ be an eigenvector corresponding to $\\lambda$. 
We have\n\t\\begin{enumerate}[label=(S\\arabic{*}), ref=S\\arabic{*}]\n\t\t\\item\\label{eq:S:0} $\\frac{d}{d+1} y_{1} = \\lambda y_0$,\n\t\t\\item\\label{eq:S:i} $\\frac{1}{d+1} y_{i-1} +\\frac{d}{d+1} y_{i+1} = \\lambda y_i$ for all $1\\le i\\le k-1$,\n\t\t\\item\\label{eq:S:k} $\\frac{1}{d+1} y_{k-1} +\\frac{d}{d+1} y_{k} = \\lambda y_k$.\n\t\\end{enumerate}\n\t\n\tAs before, we let $x_1, x_2$ be the solutions to the equation \n\t$$\\frac{1}{d+1} - \\lambda x + \\frac{d}{d+1}x^{2} = 0.$$\n\tBy exactly the same argument as in the proof of Lemma \\ref{lm:sym:detail}, setting $y_0=1$ and writing $x = x_1$, we derive that \n\t$$y_i = \\alpha_1 x_1^{i} - \\alpha_2 x_2^{i}$$\nwhere\n\t$$\\alpha_1 = \\frac{dx^{2}}{dx^{2}-1}\\quad\\text{and}\\quad \\alpha_2 =\\frac{1}{dx^{2}-1}$$\n\tand $x_1$ and $x_2$ satisfy\n\t\\begin{equation}\\label{eq:x:2}\n\td^{k+2} x^{2k+4}- d^{k+2} x^{2k+3}+ dx -1 = 0.\n\t\\end{equation}\n\t\n \tNote that $x = \\pm \\frac{1}{\\sqrt d}$ are solutions of \\eqref{eq:x:2}. The remaining $2k+2$ solutions split into pairs $(x, \\frac{1}{dx})$ of distinct elements. For each of these pairs, let $x_1 := x$ and $x_2:=\\frac{1}{dx}$. We have $x_1\\neq x_2$ and by setting $\\lambda$ and $y$ as in \\eqref{eq:lambda:x} and \\eqref{eq:recurrent:y:1} with $\\alpha_1 = \\frac{dx^{2}}{dx^{2}-1}$ and $\\alpha_2 =\\frac{1}{dx^{2}-1}$, one can check that $y$ is indeed an eigenvector corresponding to $\\lambda$. Thus, these $2k+2$ solutions correspond to exactly $k+1$ eigenvalues $\\lambda$ of $S_k$. Since $S_k$ has exactly $k+1$ eigenvalues, these are all.\n\\end{proof}\n\n\n\\subsection{Proof of Theorem \\ref{thm:spectrum}}\\label{subsection:proof:spectrum}\nThe following lemma shows that we can retrieve all eigenvectors of $Q_h$ from completely symmetric and pseudo anti-symmetric eigenvectors. 
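As a sanity check on Lemma \ref{lm:sym:detail}, whose roots are available in closed form, the eigenpair formula can be verified numerically. The following pure-Python sketch (an illustration with our own helper names, not part of the proof) builds $R_h$ from (R1)--(R3) and checks that $R_h y = \lambda y$ up to rounding error for every non-real root $x$ of $d^{h+1}x^{2h+2}=1$:

```python
# Numerical sanity check for Lemma lm:sym:detail (illustrative only, not part
# of the proof). We build the (h+1)x(h+1) matrix R_h from (R1)-(R3) and verify
# R_h y = lambda * y for every non-real root x of d^{h+1} x^{2h+2} = 1,
# namely x = exp(i*pi*j/(h+1)) / sqrt(d) with j != 0, h+1.
import cmath

def R_matrix(h, d):
    n = h + 1
    M = [[0.0] * n for _ in range(n)]
    M[0][0], M[0][1] = 1 / (d + 1), d / (d + 1)               # (R1)
    for i in range(1, h):
        M[i][i - 1], M[i][i + 1] = 1 / (d + 1), d / (d + 1)   # (R2)
    M[h][h - 1], M[h][h] = 1 / (d + 1), d / (d + 1)           # (R3)
    return M

def max_residual(h, d):
    """Largest entry of |R_h y - lambda y| over the 2h non-real roots x."""
    M = R_matrix(h, d)
    worst = 0.0
    for j in range(1, 2 * h + 2):
        if j == h + 1:  # skip the real roots x = +-1/sqrt(d)
            continue
        x = cmath.exp(1j * cmath.pi * j / (h + 1)) / cmath.sqrt(d)
        lam = d / (d + 1) * (x + 1 / (x * d))
        y = [(d * x * x - x) / (d * x * x - 1) * x ** i
             + (x - 1) / (d * x * x - 1) / (d ** i * x ** i)
             for i in range(h + 1)]
        Ry = [sum(M[r][c] * y[c] for c in range(h + 1)) for r in range(h + 1)]
        worst = max(worst, max(abs(Ry[r] - lam * y[r]) for r in range(h + 1)))
    return worst
```

An analogous check for $S_k$ would require computing the roots of \eqref{eq:x:2} numerically, since they have no closed form.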
Let $\\mathcal A_{S_k}$ be the eigenbasis of $S_k$ as described in Lemma \\ref{lm:anti:detail} and $\\mathcal B$ be a collection of $h+1$ linearly independent completely symmetric eigenvectors of $Q_h$ as in Lemma \\ref{lm:sym:detail}. Let \n$$\\mathcal A := \\lbrace f_{y, v, i, i+1}, v \\in V(\\mathcal T_{h-1}), y \\in \\mathcal A_{S_{h-\\ell(v)-1}}, i \\in [d-1] \\rbrace.$$\n\\begin{lemma}\\label{lm:spanning}\n\tThe collection $\\mathcal A \\cup \\mathcal B$ is an eigenbasis for $Q_h$.\n\\end{lemma}\n\nAssuming Lemma \\ref{lm:spanning}, we now put everything together to complete the proof of Theorem \\ref{thm:spectrum}.\n\\begin{proof}[Proof of Theorem \\ref{thm:spectrum}]\n\tThe first part of the theorem follows from Lemmas \\ref{lm:sym:detail} and \\ref{lm:anti:detail}. As seen in Lemma \\ref{lm:sym:detail}, the set $\\mathcal B$ in Lemma \\ref{lm:spanning} consists of eigenvectors as in \\eqref{eq:thm:spectrum:sym} and the all-1 vector. By Lemmas \\ref{lm:antisym} and \\ref{lm:anti:detail}, the set $\\mathcal A$ consists of eigenvectors as in \\eqref{eq:thm:spectrum:antisym}. That gives the second part. Finally, the third part follows from Lemma \\ref{lm:spanning}.\n\\end{proof}\n \n\nBefore proving Lemma \\ref{lm:spanning}, we make the following simple observation. For a rooted tree $T$ that is not necessarily regular, recall that a vector $f: T\\to \\mathbb R$ is said to be \\textit{completely symmetric} if $f(u) = f(v)$ for all pairs of vertices $u, v$ at the same level. A vector $f$ is said to be \\textit{energy-preserving} if for every level $l$, \n$$\\sum_{v\\in T: \\ell(v)=l} f(v)=0.$$\n\\begin{observation}\\label{obs}\n\tFor any rooted tree $T$ and any vector $f: T\\to \\mathbb R$, if $f$ is both energy-preserving and completely symmetric then it is the zero vector.\n\\end{observation}\n\n\n \\begin{proof}[Proof of Lemma \\ref{lm:spanning}]\nFirst of all, we check that the number of vectors in $\\mathcal A \\cup \\mathcal B$ is equal to $n$. 
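Before the formal count, the claimed dimension count can be confirmed numerically; a minimal pure-Python sketch (the helper names are ours, purely illustrative):

```python
# Quick check (illustrative) that the number of completely symmetric vectors
# (h+1) plus the number of pseudo anti-symmetric lifts equals the number n of
# nodes of the complete d-ary tree of depth h.
def num_eigenvectors(d, h):
    total = h + 1                       # completely symmetric vectors
    for k in range(h):                  # nodes v at depth h-k-1: d**(h-k-1) many
        total += (k + 1) * (d - 1) * d ** (h - k - 1)
    return total

def num_nodes(d, h):
    return (d ** (h + 1) - 1) // (d - 1)
```

For instance, $d=3$ and $h=4$ give $121$ on both sides.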
By Lemmas \\ref{lm:sym} and \\ref{lm:antisym}, the total number of vectors is\n\\begin{align*}\nh+1+ \\sum_{k=0}^{h-1} (k+1)(d-1) d^{h-k-1} \n\\end{align*}\nwhere $d^{h-k-1}$ is the number of nodes $v$ of depth $h-k-1$. By algebraic manipulation, this number is exactly $\\frac{d^{h+1} -1}{d-1}=n$.\n\n\nWe will now prove that the vectors considered are linearly independent. Assume that there exist coefficients $c_{y,v,i}$ and $c_g$ such that\n\\begin{equation} \n\\sum c_{y,v,i} f_{y,v,i,i+1} + \\sum_{g\\in \\mathcal B} c_g g = 0\\nonumber\n\\end{equation}\nwhere the first sum runs over all $v \\in V(\\mathcal T_{h-1}), y \\in \\mathcal A_{S_{h-\\ell(v)-1}}, i \\in [d-1]$. We need to show that $ c_{y,v,i}$ and $c_g$ are all 0.\n\nSince pseudo anti-symmetric vectors are energy-preserving on $\\mathcal T_{h}$, the sum $\\sum_{g\\in \\mathcal B} c_g g = -\\sum c_{y,v,i} f_{y,v,i,i+1}$ is both completely symmetric and energy-preserving. And so, by Observation \\ref{obs},\n\\begin{equation}\\label{eq:indep:sum}\n\\sum c_{y,v,i} f_{y,v,i,i+1} = \\sum_{g\\in \\mathcal B} c_g g = 0\n\\end{equation}\nBy the independence of vectors in $\\mathcal B$, we conclude that $c_g = 0$ for all $g\\in \\mathcal B$. \n \nWe now prove by induction on the vertices of $v\\in V(\\mathcal T_{h-1})$ and $i\\in [d-1]$ that $c_{y,v,i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$. 
For this induction, we shall use the natural ordering of pairs $(v, i)$ as follows.\n$$(v, i)< (v', i')\\quad \\text{if and only if} \\quad \\ell(v)< \\ell(v') \\text{ or } \\ell(v) = \\ell(v') \\text{ and } i< i'.$$ \n \n \n For the base case, which is for $v := \\rho$ and $i:=1$, from \\eqref{eq:indep:sum}, we have\n \\begin{equation}\\label{key}\nF_{\\rho, 1}:= \\sum_{y\\in \\mathcal A_{S_{h-1}}} c_{y,\\rho,1} f_{y,\\rho,1,2} = -\\sum c_{y,u,j} f_{y,u,j,j+1} \\nonumber\n \\end{equation}\n where the second sum runs over all $u \\in V(\\mathcal T_{h-1})$ and $j \\in [d-1]$ with $(\\rho, 1)< (u, j)$ and all $y \\in \\mathcal A_{S_{h-\\ell(u)-1}}$. Note that, when restricted to the subtree $\\mathcal T_{1}^{\\rho}$, $F_{\\rho, 1}$ is a completely symmetric vector because each $f_{y,\\rho,1,2}$ is completely symmetric there. Likewise, $F_{\\rho, 1}$ is energy-preserving on $\\mathcal T_{1}^{\\rho}$, because each of the vectors $f_{y,u,j,j+1}$ on the right-hand side is. By Observation \\ref{obs}, $F_{\\rho, 1}=0$ on $\\mathcal T_{1}^{\\rho}$. Since the $f_{y,\\rho,1,2}$ are only supported on $\\mathcal T_{1}^{\\rho}\\cup \\mathcal T_{2}^{\\rho}$ and satisfy $f_{y,\\rho,1,2}\\vert_{\\mathcal T_{1}^{\\rho}} = -f_{y,\\rho,1,2}\\vert_{\\mathcal T_{2}^{\\rho}} $, the same is true of $F_{\\rho, 1}$. Therefore, $F_{\\rho, 1} = 0$ on $\\mathcal T_{2}^{\\rho}$ and thus on $\\mathcal T_{h}$. So, \n \\begin{equation}\\label{key}\n\\sum_{y\\in \\mathcal A_{S_{h-1}}} c_{y,\\rho,1} f_{y,\\rho,1,2} = 0. \\nonumber\n \\end{equation}\n By the independence of vectors in $ \\mathcal A_{S_{h-1}}$, we conclude that $c_{y,\\rho,1} = 0$ for all $y\\in \\mathcal A_{S_{h-1}}$, establishing the base case.\n \n For the induction step, assume that for some $(v, i)$, it is proven that $c_{y,w, k} = 0$ for all $(w, k)< (v, i)$ and $y\\in \\mathcal A_{S_{h-\\ell(w)-1}}$. We now show that $c_{y,v, i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$. 
By this assumption, the left-most side in \\eqref{eq:indep:sum} reduces to\n \\begin{equation} \\label{eq:indep:induction}\n \\sum c_{y,u, j} f_{y,u, j, j+1} = 0 \n \\end{equation}\n where the sum runs over all $(u, j)\\ge (v, i)$. Our argument now is similar to the base case. From \\eqref{eq:indep:induction}, we have\n \\begin{equation} \n F_{v, i}:=\\sum_{y\\in \\mathcal A_{S_{h-\\ell(v)-1}}} c_{y,v,i} f_{y,v,i,i+1} = -\\sum_{y, (v, i)<(u, j)} c_{y,u,j} f_{y,u,j,j+1}.\\nonumber\n \\end{equation}\n Similarly to the base case, when restricted to the subtree $\\mathcal T_{i}^{v}$, $F_{v, i}$ is both completely symmetric and energy-preserving on $\\mathcal T_{i}^{v}$. By Observation \\ref{obs}, $F_{v, i}=0$ on $\\mathcal T_{i}^{v}$. This leads to $F_{v, i} = 0$ on $\\mathcal T_{i+1}^{v}$ and thus $F_{v, i}=0$ on $\\mathcal T_{h}$. So, \n \\begin{equation}\\label{key}\n \\sum_{y\\in \\mathcal A_{S_{h-\\ell(v)-1}}} c_{y,v,i} f_{y,v,i,i+1} = 0. \\nonumber\n \\end{equation}\n By the independence of vectors in $ \\mathcal A_{S_{h-\\ell(v)-1}}$, we conclude that $c_{y,v, i} = 0$ for all $y\\in \\mathcal A_{S_{h-\\ell(v)-1}}$, establishing the induction step and thus finishing the proof.\n\\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{thm:lowerbound}}\n\\subsection{Proof of Theorem \\ref{thm:lowerbound}(a)}\nConsider the interchange process on $\\mathcal T_h$. Let $Q_h'$ be the transition matrix of the ace of spades. In other words, $Q_h'$ is the transition matrix of any fixed card on the tree. \nBy \\cite[Theorem 1.1]{CLR}, the spectral gap of the interchange process on the complete $d$-ary tree of depth $h$ is the same as the spectral gap of $Q_h'$. 
We note that\n\\begin{equation}\\label{key}\nQ_h' = \\frac{2n-d-3}{2(n-1)} I_n + \\frac{d+1}{2(n-1)} Q_h.\n\\end{equation}\nTherefore, the spectral gap of $Q_h'$ is $\\frac{d+1}{2(n-1)}$ times the spectral gap of $Q_h$.\n\n\n\nThus Theorem \\ref{thm:lowerbound}(a) is deduced from the following.\n\\begin{lemma}\\label{lm:Q_h:gap} For sufficiently large $h$, the spectral gap of $Q_h$ is equal to $1 - \\lambda_2$, where $\\lambda_2$ is the second largest eigenvalue of $Q_h$. Moreover,\n\\begin{equation}\\label{eq:lm:gap}\n\\lambda_2 = 1- \\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}} + O \\left (\\frac{\\log_{d} n}{n^{2}}\\right ).\n\\end{equation}\n\\end{lemma} \n\n \n\n\n\nTo prove Lemma \\ref{lm:Q_h:gap}, we shall use Theorem \\ref{thm:spectrum}. Let $\\lambda$ be an eigenvalue of $Q_h$ and $x$ be a solution of\n\\begin{equation} \\label{eq:x:lambda:1:1}\nd x^{2} - (d+1)\\lambda x+1 = 0\n\\end{equation}\nwhich we have encountered in \\eqref{eq:x:lambda:1}.\n\nNote that if $\\lambda^{2}\\ge \\frac{4d}{(d+1)^{2}}$ then this equation has two real solutions both of which have the same sign as $\\lambda$.\n \nSince the equation \\eqref{eq:x:sym:thm} only has non-real solutions except $x = \\pm \\frac{1}{\\sqrt d}$, combining this observation with Theorem \\ref{thm:spectrum} shows that each eigenvalue $\\lambda$ with $\\lambda^{2}\\ge \\frac{4d}{(d+1)^{2}}$ is given by Equation \\eqref{eq:lambda:x:thm} for some $x \\neq \\pm \\frac{1}{\\sqrt d}$ satisfying\n \\begin{equation} \\label{eq:x:anti:k}\n d^{k+1} x ^{2k+2}- d^{k+1} x ^{2k+1}+ dx -1 = 0,\n \\end{equation}\nfor some $k\\in [1, h]$, which is simply Equation \\eqref{eq:x:antisym} (with $k$ being shifted for notational convenience). \n \n \n We shall show the following.\n \\begin{lemma}\\label{lm:bound:lambda}\n \t\\begin{enumerate} [label = (\\alph*)]\n \t\t\\item For all $k\\in [1, h]$, Equation \\eqref{eq:x:anti:k} has no solutions in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$. 
There are no eigenvalues of $Q_h$ less than $-\\sqrt \\frac{4d}{(d+1)^{2}}$.\n\\item \t There exists a constant $h_0>0$ such that for all $k\\ge h_0$, the largest solution $x$ of \\eqref{eq:x:anti:k} satisfies\n \t\\begin{equation}\\label{eq:bound:x}\n \t1 - \\frac{a}{d^{k+1}} < x<1 - \\frac{d-1}{d^{k+1}} \\quad\\text{where}\\quad a=d-1 + \\frac{2(d-1)^{2}(k+1)}{d^{k+1}}.\n \t\\end{equation}\n \tFurthermore, for $k=h$, the eigenvalue that corresponds to this $x$ satisfies\n \t\\begin{equation}\\label{eq:bound:lambda}\n \t\\left |\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}}\\right )\\right | = O\\left (\\frac{\\log_{d} n}{n^{2}}\\right ).\n \t\\end{equation}\n \\end{enumerate}\n \\end{lemma}\n \n Assuming Lemma \\ref{lm:bound:lambda}, we conclude that for sufficiently large $h$, the largest $x$ that satisfies one of the equations \\eqref{eq:x:anti:k} for some $k$ in $[1, h]$ satisfies\n \\begin{equation} \n 1 - \\frac{a}{d^{h+1}} < x<1 - \\frac{d-1}{d^{h+1}} \\quad\\text{where}\\quad a=d-1 + \\frac{2(d-1)^{2}(h+1)}{d^{h+1}}.\\nonumber\n \\end{equation}\nSince the right-hand side of \\eqref{eq:lambda:x:thm} is increasing in $x$ for $x\\ge \\frac{1}{\\sqrt d}$, the second largest eigenvalue $\\lambda_2$ of $Q_h$ corresponds to such $x$ and so it satisfies \\eqref{eq:bound:lambda}, proving \\eqref{eq:lm:gap}. By the first part of Lemma \\ref{lm:bound:lambda}, there are no eigenvalues of $Q_h$ whose absolute value is larger than $\\lambda_2$. This proves Lemma \\ref{lm:Q_h:gap}.\n \n \\begin{proof}[Proof of Lemma \\ref{lm:bound:lambda}]\n \tLet $f(x) = d^{k+1} x^{2k+2} - d^{k+1} x^{2k+1} + dx-1$. \n \t\n \tTo prove part (a), for all $x< -\\frac{1}{\\sqrt d}$, we have\n \t$$d^{k+1}x^{2k+2}> 1\\quad\\text{and}\\quad - d^{k+1} x^{2k+1} > - dx$$\n \tand so $f$ has no roots in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$. Assume that there were an eigenvalue $\\lambda<-\\sqrt \\frac{4d}{(d+1)^{2}}$. 
By the argument right before \\eqref{eq:x:anti:k}, Equation \\eqref{eq:x:lambda:1:1} has two distinct negative solutions $x_1, x_2$ with $x_1 x_2 = \\frac{1}{d}$, and hence one of them lies in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$. Since both solutions satisfy \\eqref{eq:x:anti:k} for some $k\\in [1, h]$, this contradicts the fact that $f$ has no roots in $\\left (-\\infty, -\\frac{1}{\\sqrt d}\\right )$, and part (a) follows.\n \t\n \tTo prove part (b), we first compute\n \t\\begin{equation}\\label{key}\n \tf'(x) = d^{k+1}x^{2k}\\left ((2k+2)x - (2k+1)\\right ) + d > 0 \\quad\\text{for all $x\\ge 1 - \\frac{1}{2k+2}$}.\\nonumber\n \t\\end{equation}\n \tThus, $f$ is increasing on the interval $[1 - \\frac{1}{2k+2} , \\infty)$ which contains $[1 - \\frac{a}{d^{k+1}}, 1 - \\frac{d-1}{d^{k+1}}]$ for sufficiently large $k$. Thus, to prove \\eqref{eq:bound:x}, it suffices to show that \n \t\\begin{equation}\\label{eq:derivative:test}\n \tf\\left (1 - \\frac{a}{d^{k+1}}\\right )<0<f\\left (1 - \\frac{d-1}{d^{k+1}}\\right ).\n \t\\end{equation}\n \tWriting $x = 1-\\alpha$ gives the identity\n \t\\begin{equation}\\label{key}\n \tf(1-\\alpha) = -\\alpha d^{k+1}(1-\\alpha)^{2k+1} + (d-1) - d\\alpha,\\nonumber\n \t\\end{equation}\n \tand a direct computation using the elementary bounds $1-(2k+1)\\alpha \\le (1-\\alpha)^{2k+1} \\le \\frac{1}{1+(2k+1)\\alpha}$, applied with $\\alpha = \\frac{a}{d^{k+1}}$ and $\\alpha = \\frac{d-1}{d^{k+1}}$ respectively, yields both inequalities for all sufficiently large $k$, proving \\eqref{eq:derivative:test}. \n \t\n \tBy the intermediate value theorem, we have shown that there exists a solution $x = 1-\\alpha$ where $\\frac{d-1}{d^{k+1}}\\le \\alpha \\le \\frac{a}{d^{k+1}}$. Let $\\lambda$ be the eigenvalue corresponding to $x$ as in \\eqref{eq:lambda:x}. We have\n \t\\begin{equation}\\label{key}\n \t\\frac{d+1}{d}\\lambda = 1 - \\alpha+\\frac{1}{d(1-\\alpha)} \\in \\left (1 - \\alpha+\\frac{1}{d} (1 +\\alpha), 1 - \\alpha+\\frac{1}{d} (1 +\\alpha+2\\alpha^{2})\\right ). \\nonumber \n \t\\end{equation}\n \tIn other words, \n \t\\begin{equation}\\label{key}\n \t\\frac{d+1}{d}\\lambda \\in\\left (\\frac{d+1}{d} -\\frac{d-1}{d} \\alpha , \\frac{d+1}{d} -\\frac{d-1}{d} \\alpha +\\frac{2}{d}\\alpha^{2}\\right ). 
\\nonumber\n \t\\end{equation}\n \tUsing the bounds $\\frac{d-1}{d^{k+1}}\\le \\alpha \\le \\frac{a}{d^{k+1}}$, together with the fact that $a\\le d$ for sufficiently large $k$, we obtain\n \t\\begin{equation}\\label{key}\n \t\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}}\\right ) \\le \\frac{2}{d+1}\\alpha^{2}\\le \\frac{2a^{2}}{(d+1)\\cdot d^{2k+2}}\\le \\frac{2}{(d+1)\\cdot d^{2k}}\\nonumber \n \t\\end{equation}\n \tand\n \t\\begin{equation}\\label{key}\n \t\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}}\\right ) \\ge -\\frac{d-1}{d+1}\\alpha+\\frac{(d-1)^{2}}{(d+1)\\cdot d^{k+1}} \\ge -\\frac{2(d-1)^{3}(k+1)}{(d+1)\\cdot d^{2k+2}}\\ge - \\frac{2(k+1)}{d^{2k}}.\\nonumber \n \t\\end{equation}\n \tThus, for $k=h$,\n \t\\begin{equation} \n \t\\left |\\lambda - \\left (1-\\frac{(d-1)^{2}}{(d+1)\\cdot d^{h+1}}\\right )\\right | \\le \\frac{2(h+1)}{d^{2h}} .\\nonumber\n \t\\end{equation}\n \tThese bounds together with the equation $n = \\frac{d^{h+1}-1}{d-1} \\in (d^{h}, 2d^{h})$ give \\eqref{eq:bound:lambda}.\n \\end{proof}\n \n \n \\subsection{Proof of Theorem \\ref{thm:lowerbound} (b)}\n For the proof of the lower bound, we will use Wilson's lemma.\n \\begin{lemma}[Lemma 5, \\cite{Wilson}]\\label{W}\n \tLet $\\varepsilon, R$ be positive numbers and $0<\\gamma< 2-\\sqrt{2} $. 
Let $F: X\\to \\mathbb R$ be a function on the state space $X$ of a Markov chain $(C_t)$ such that \n \t$$\\expect{F(C_{t+1})\\vert C_t }= (1 - \\gamma )F(C_t), \\quad \\expect{\\left [F(C_{t+1})- F(C_{t})\\right ]^2 \\vert C_t} \\leq R,$$ and \n \t$$t \\leq \\frac{ \\log \\max_{x\\in X}F(x) + \\frac{1}{2} \\log( \\gamma \\varepsilon\/(4R))}{-\\log (1 - \\gamma )}.\n \t$$ Then the total variation distance from stationarity at time $t$ is at least $1-\\varepsilon$.\n \\end{lemma}\n \n \n \\begin{proof}[Proof of Theorem \\ref{thm:lowerbound} (b)]\n Let $0 j > m$ such that $\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}(s_j,s_{j+1})$ holds.\n Furthermore, by construction of $\\mathcal{G}_{post}$, all states $s_j$ with $n > j > m$ satisfy $\\varphi$.\n Hence $\\varphi \\land \\lnot\\mathfrak{S}^{post}_{\\texttt{{SAFE}}{}}(s_j,s_{j+1})$ holds, which implies that $\\lnot \\mathfrak{S}_{\\texttt{{SAFE}}}(s_j,s_{j+1})$ holds. We conclude that $\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}$ is a necessary subgoal in $\\mathcal{G}^I$.\n\t\n\tWe now show (c). Since we return in line~\\ref{line:thirdreturn}, we have $\\operatorname{Unsat}(\\operatorname{Enf}(F,\\mathcal{G}))$ and, by induction hypothesis, $\\operatorname{Unsat}(\\operatorname{Enf}(\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post},\\mathcal{G}_{post}))$.\n As the transition relation of $\\mathcal{G}_{post}$ is restricted to $\\varphi$, this implies $\\operatorname{Unsat}(\\operatorname{Enf}(\\varphi \\land \\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post},\\mathcal{G}_{post}))$.\n We also have $F \\implies \\lnot \\varphi$ and $(\\varphi \\land \\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}) \\implies \\varphi$. 
As $(\\mathit{Safe}_{post} \\lor \\mathit{Reach}_{post}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ holds, we can apply \\Cref{lem:unsat_sum} to conclude $\\operatorname{Unsat}(\\operatorname{Enf}(F \\lor (\\varphi\\land\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})),\\mathcal{G})$, which implies $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathit{Safe} \\lor F \\lor (\\varphi\\land\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})),\\mathcal{G}) = \\operatorname{Unsat}(\\operatorname{Enf}(\\lnot\\mathfrak{S}_{\\texttt{{SAFE}}{}},\\mathcal{G}))$.\n\t\t\n\t\\medskip\n\t{\\it Case 4: $\\operatorname{Reach}(\\mathcal{G})$ returns in line~\\ref{line: last return} and the {\\upshape\\textbf{if}} statement in line~\\ref{line:transback} is false}.\n\tBy induction hypothesis we assume that the recursive calls in lines~\\ref{line: recursion1} and \\ref{line: recursion2} returned tuples $(R_{post}, \\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}, \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})$ and $(R_{pre}, \\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre}, \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre})$ satisfying properties (a)--(c) above for $\\mathcal{G}_{post}$ and $\\mathcal{G}_{pre}$. 
We now show these properties in $\\mathcal{G}$ for $R\\lor R_{pre}$, and \n\t\\begin{align*}\n\t\t\\mathfrak{S}_{\\texttt{{REACH}}{}} &= \\;\\;(\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\implies \\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\\\\n & \\land \\;\\; ((\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\land \\operatorname{Pre}(F)) \\implies F) \\\\\n & \\land \\;\\; ((\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\land \\neg \\operatorname{Pre}(F)) \\implies \\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre}) \\\\\n & \\land \\;\\; (\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{post}) \\lor \\operatorname{Pre}(F) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}{}}^{pre})) \\\\\n\t\t\\mathfrak{S}_{\\texttt{{SAFE}}{}} &= (\\neg \\varphi \\implies \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre}) \\land (\\varphi \\implies \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post}),\n\t\\end{align*} \n\twith $F= C \\land \\mathit{R}_{post}[\\var\/\\varp]$.\n\nWe first show that (a) $\\mathfrak{S}_{\\texttt{{REACH}}{}}$ is winning for \\texttt{{REACH}}{} from states satisfying $R\\lor R_{pre} \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}})$. 
For states in $R$ this is trivial, so let $\\rho = s_0 s_1 \\ldots$ be a play in $\\mathcal{G}$ conforming to $\\mathfrak{S}_{\\texttt{{REACH}}{}}$ such that $R_{pre}(s_0)$ holds.\n Our first claim is that if there exists $k \\in \\mathbb{N}$ such that $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})(s_k)$ holds, then $\\rho$ must be winning for \\texttt{{REACH}}{}.\n This is due to the fact that $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ is winning in $\\mathcal{G}_{post}$ from all states satisfying $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})$, which allows us to use~\\Cref{lem:gpost}.\n To argue that $\\rho$ keeps playing according to $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ once such a state is reached, we observe that if a symbolic reachability strategy $\\mathfrak{S}$ wins from $s$, then $\\operatorname{Pre}(\\mathfrak{S})$ holds in any state in $S_{\\texttt{{REACH}}}$ reachable from $s$ via a play prefix conforming to $\\mathfrak{S}$, by definition.\n \n Now we show that such a position $k$ must exist.\n First, for $j \\in \\mathbb{N}$ such that $(\\neg \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}) \\land \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})))(s_j)$ holds, the transition $(s_j,s_{j+1})$ must satisfy $F$.\n This is because if $s_j \\in S_{\\texttt{{SAFE}}}$, then all outgoing transitions from $s_j$ satisfy $F$.\n Otherwise, it follows from the fact that $\\rho$ conforms to $\\mathfrak{S}_{\\texttt{{REACH}}}$.\n As $\\operatorname{Post}(F) \\equiv R_{post}[\\var\/\\varp]$ and $\\mathfrak{S}_{\\texttt{{REACH}}}^{post}$ wins from all states satisfying $R_{post}$ by assumption, it follows that $s_{j+1}$ satisfies $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post})$.\n\n As long as $\\rho$ visits only states satisfying $(\\neg \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})) \\land \\neg 
\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}))$, the strategy $\\mathfrak{S}_{\\texttt{{REACH}}}$ prescribes to play according to $\\mathfrak{S}_{\\texttt{{REACH}}}^{pre}$.\n By assumption, this strategy is winning for \\texttt{{REACH}}{} in $\\mathcal{G}_{pre}$, and hence the play $\\rho$ eventually visits a state in $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$.\n As above, the play is guaranteed to stay in $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{pre})$ until that position.\n\n The above argument also shows that $\\mathfrak{S}_{\\texttt{{REACH}}}$ is winning for all states satisfying $\\operatorname{Pre}(F) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{post}) \\lor \\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}}^{pre})$, which is implied by $\\operatorname{Pre}(\\mathfrak{S}_{\\texttt{{REACH}}})$.\n Also, $\\mathfrak{S}_{\\texttt{{REACH}}} \\implies (\\mathit{Reach} \\lor \\mathit{Safe})$ is valid, as the corresponding statements hold for the pre- and post-strategies, and $F \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ is valid.\n\n\n\n\t\n\n\t \n\n\t\n\tNext we show that (b) $\\lnot \\mathfrak{S}_{\\texttt{{SAFE}}{}}$ is a necessary subgoal in $\\mathcal{G}^I$.\n\tNo player can play back from $\\mathcal{G}_{post}$ to $\\mathcal{G}_{pre}$ without $\\texttt{{REACH}}$ having already won in $\\mathcal{G}_{post}$. We first show that under this condition,\n \\[\\neg \\mathfrak{S}_{\\texttt{{SAFE}}} = (\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{pre}) \\lor (\\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}{}}^{post})\\]\n qualifies as a necessary subgoal in $\\mathcal{G}^I$. For this, consider the necessary subgoal $C$.\n For any play $\\rho = s_0 s_1 \\ldots$ with $n \\in \\mathbb{N}$ such that $\\mathit{Goal}(s_n)$ there is some $k \\in \\mathbb{N}$ with $k < n$ and $C(s_k,s_{k+1})$. 
As $F$ characterizes a subset of $C$, we check two cases: Either (1) $\\lnot F(s_k,s_{k+1})$ or (2) $F(s_k,s_{k+1})$. In case (1), we have $\\lnot R_{post}(s_{k+1})$ and because of our assumption that no transition of the game satisfies $\\varphi \\land \\neg \\varphi'$, for all $j \\in \\mathbb{N}$ with $k < j < n: \\varphi(s_j)$. It follows by induction hypothesis that there is some $l \\in \\mathbb{N}$ such that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{post}(s_l,s_{l+1})$.\n In case (2) we use that $\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$ plays only moves available in $\\mathcal{G}_{pre}$, and hence $\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre} \\implies \\neg F$ is valid.\n Furthermore $F \\implies \\neg \\varphi$, as $F$ characterizes a subset of the subgoal $C$.\n Hence we can conclude that $F \\implies (\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre})$ is valid.\n It follows that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}$ qualifies as necessary subgoal.\n\n Finally we show (c) that $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}},\\mathcal{G}))$ holds.\n We have $(\\mathit{Safe}_{pre} \\lor \\mathit{Reach}_{pre}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ and $(\\mathit{Safe}_{post} \\lor \\mathit{Reach}_{post}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$. 
As $\\operatorname{Pre}(\\neg \\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}) \\land \\operatorname{Pre}(\\varphi \\land \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{post})$ is clearly unsatisfiable, we can again apply Lemma \\ref{lem:unsat_sum} to infer $\\operatorname{Unsat}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}},\\mathcal{G}))$.\n This uses that any transition reachable in $\\mathcal{G}_{post}$ has to satisfy $\\varphi$ in this case.\n\t\n\t\\medskip\n\t{\\it Case 5: $\\operatorname{Reach}(\\mathcal{G})$ returns in line~\\ref{line: last return} and the {\\upshape\\textbf{if}} statement in line~\\ref{line:transback} is true}. \n\n\tWe assume that both recursive calls terminated and, by induction, returned triples $(R_{post},\\mathfrak{S}_{\\texttt{{REACH}}}^{post},\\mathfrak{S}_{\\texttt{{SAFE}}}^{post})$ and $(R_{pre},\\mathfrak{S}_{\\texttt{{REACH}}}^{pre},\\mathfrak{S}_{\\texttt{{SAFE}}}^{pre})$ satisfying (a)--(c).\n\n (a) is shown exactly as in Case 4.\n\n\tFor (b) we first observe that by setting $\\varphi$ to $\\texttt{false}$ (see line~\\ref{line:tpostfalse}) in this case we get $\\mathfrak{S}_{\\texttt{{SAFE}}} = \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$.\n We show that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}} = \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$ is a necessary subgoal in $\\mathcal{G}^I$.\n The transition predicate $F$ in line~\\ref{line: recursion1} is a sufficient subgoal by induction hypothesis, but due to the restriction on the post-game, we cannot conclude that states in $\\operatorname{Post}(C)$ that are not in $\\operatorname{Post}(F)$ are winning for \\texttt{{SAFE}}{}.\n\tBy adding all transitions to $\\mathit{Goal}$ (line~\\ref{line: E2}) we get that $F$ in line~\\ref{line: recursion2} is a necessary and sufficient subgoal (clearly, any winning play must go through $\\mathit{Goal} [\\var\/\\varp]$).\n\tAs we have ensured that $F$ is necessary, we know for all plays $\\rho = s_0 s_1 \\ldots$ with some $n \\in 
\\mathbb{N}$ such that $\\mathit{Goal}(s_n)$ there is some $k \\in \\mathbb{N}$ with $k < n$ and $F(s_k,s_{k+1})$. As in Case 4 we may conclude that $F \\implies \\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre}$.\nIt follows that $\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}$ is a necessary subgoal in $\\mathcal{G}_I$.\n\nFor (c) we observe that $\\operatorname{Unsat}(\\operatorname{Pre}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre},\\mathcal{G}_{pre})))$ holds by induction hypothesis, which directly implies $\\operatorname{Unsat}(\\operatorname{Pre}(\\operatorname{Enf}(\\neg \\mathfrak{S}_{\\texttt{{SAFE}}}^{pre},\\mathcal{G})))$. This concludes the argument for the final case, and the proof is complete.\n\t\\qed\n\n\\end{proof}\n\n\n\\terminationfinite*\n\\begin{proof}\n\tWe denote by $\\operatorname{size}(\\mathcal{G})$ the number of concrete transitions of $\\mathcal{G}$, formally: $\\operatorname{size}(\\mathcal{G}) = |\\{ (s,s') \\in S \\times S \\mid (\\mathit{Safe} \\lor \\mathit{Reach})(s,s') \\text{ is valid}\\}|$.\n\tIf the domains of all variables are finite, then so is $\\operatorname{size}(\\mathcal{G})$.\n\tWe assume that this is the case and show that the subgames on which $\\operatorname{Reach}(\\mathcal{G})$ recurses are strictly smaller in this measure.\n\tThis is enough to guarantee termination.\n\t\n\tThe first subgame is constructed in line~\\ref{line:gpost} and takes the form:\n\t\\[\\mathcal{G}_{post} = \\langle \\operatorname{Post}(\\mathit{C})[\\varp\/\\var],\\mathit{Safe} \\land \\varphi, \\mathit{Reach} \\land \\varphi, \\mathit{Goal} \\rangle.\\]\n\tThe important restriction of this game is that both safety and reachability player transitions have the additional precondition $\\varphi$.\n\tWe may assume that $\\operatorname{Enf}(C,\\mathcal{G})$ is satisfiable, as otherwise the algorithm does not reach line~\\ref{line:gpost}.\n\tThen, in particular, $C$ is satisfiable, by the definition of 
$\\operatorname{Enf}(C,\\mathcal{G})$.\n\tBut $C = \\operatorname{Instantiate}(\\varphi,\\mathcal{G}) = (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\varphi'$, which means that there exist states $s,s'$ such that $ (\\mathit{Safe} \\lor \\mathit{Reach})(s,s')$, $\\neg \\varphi(s)$, and $\\varphi(s')$ are all valid. \n\tThis transition from $s$ to $s'$ in $\\mathcal{G}$ is excluded in $\\mathcal{G}_{post}$, and as no new transitions are included, it follows that $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$.\n\t\n\tThe second subgame is constructed in line~\\ref{line:gpre2} and takes the form:\n\t\\[\\mathcal{G}_{pre} = \\langle I,\\mathit{Safe} \\land \\lnot F,\\mathit{Reach} \\land \\lnot F, \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))\\rangle.\\]\n We may assume that $F \\land (\\mathit{Safe} \\lor \\mathit{Reach})$ is satisfiable, as otherwise the algorithm would not have moved past line~\\ref{line:safeavoidsE}.\n Observe that if $F$ is changed in line~\\ref{line: E2} then it is only extended and hence satisfiability is preserved.\n As no transition satisfying $F$ exists in $\\mathcal{G}_{pre}$ it follows that $\\operatorname{size}(\\mathcal{G}_{pre}) < \\operatorname{size}(\\mathcal{G})$.\n This concludes the proof.\n\t\\qed\n\\end{proof}\n\n\\terminationbisim*\n\\begin{proof}\n\tLet $S_1,\\ldots,S_n$ be the bisimulation classes of $\\mathcal{G}$, and $\\psi_1, \\ldots, \\psi_n \\in \\cal L(\\mathcal{V})$ be the formulas that define them.\n\tWe define\n\t\\[\\operatorname{size}(\\mathcal{G}) = |\\{(S_i,S_j) \\mid (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\psi_i \\land \\psi_j' \\text{ is satisfiable} \\}|,\\]\n\twhich equals the number of transitions in the bisimulation quotient of $\\mathcal{G}$ under $\\sim$.\n\tOur aim is to show that $\\operatorname{Reach}(\\cdot)$ terminates for all subgames that are considered in any recursive call of 
$\\operatorname{Reach}(\\mathcal{G})$.\n\t\n\tTo this end, we show that $\\operatorname{Reach}(\\mathcal{G})$ terminates for all reachability games $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ such that\n\t\\begin{itemize}\n\t\\item $\\operatorname{size}(\\mathcal{G})$ is finite,\n \\item the relation $\\sim$ is a bisimulation on $\\mathcal{G}$, and\n\t\\item $\\mathit{Goal}$ is equivalent to a disjunction of formulas $\\psi_i$.\n\t\\end{itemize}\n\tWe show this by induction on $\\operatorname{size}(\\mathcal{G})$.\n\t\n\tLet $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ satisfy these conditions, and assume that $\\operatorname{size}(\\mathcal{G}) = 0$.\n\tThen it follows that $\\mathit{Safe} \\lor \\mathit{Reach}$ is unsatisfiable.\n\tThis is because if some pair $(s_1,s_2)$ satisfied $\\mathit{Safe} \\lor \\mathit{Reach}$, then in particular $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\psi_i \\land \\psi_j'$ would be satisfied by $(s_1,s_2)$, where we assume $s_1 \\in S_i$ and $s_2 \\in S_j$.\n\tIt follows that $\\operatorname{Unsat}(\\operatorname{Enf}(C,\\mathcal{G}))$ in line~\\ref{line: cpre} is true, as $\\operatorname{Enf}(C,\\mathcal{G}) \\implies (\\mathit{Safe} \\lor \\mathit{Reach})$ is valid for any $C$.\n\tBut then Algorithm~\\ref{alg:algreach} terminates on input $\\mathcal{G}$.\n\t\n\tNow suppose that we have $\\mathcal{G}$ with $\\operatorname{size}(\\mathcal{G}) > 0$.\n\tIf the algorithm does not return in lines~\\ref{line:ret1} or~\\ref{line:safety wins}, we have to consider the first subgame\n\t\\[\\mathcal{G}_{post} = \\langle \\operatorname{Post}(C)[\\varp\/\\var], \\mathit{Safe} \\land \\varphi, \\mathit{Reach} \\land \\varphi, \\mathit{Goal} \\rangle,\\]\n\twhich is constructed in line~\\ref{line:gpost}.\n\tWe may assume that for some $I \\subseteq \\{1,\\ldots,n\\}$ we have $\\varphi \\equiv \\bigvee_{i \\in I} \\psi_i$, due to 
our assumption on the function $\\operatorname{Interpolate}$.\n\tHence the effect of restricting all transitions to $\\varphi$ is to remove all transitions in states not in $\\bigcup \\{S_i \\mid i \\in I\\}$, which are exactly the states in $\\bigcup \\{S_i \\mid i \\in \\{1,\\ldots,n\\} \\setminus I\\}$.\n\tIt is clear that $\\sim$ is still a bisimulation in the resulting game, and that the goal states are preserved.\n\tTo see that $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$ we may assume that $\\operatorname{Unsat}(\\operatorname{Enf}(C,\\mathcal{G}))$ is false, otherwise we would have returned in line~\\ref{line:safety wins}.\n\tThen, in particular, there is a transition in $\\mathcal{G}$ satisfying $\\neg \\varphi$, which means that there is a pair $S_i,S_j$ such that $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\psi_i \\land \\psi_j'$ is satisfiable.\n\tThis is clearly unsatisfiable when replacing $(\\mathit{Safe} \\lor \\mathit{Reach})$ by $(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\varphi$.\n\tHence, $\\operatorname{size}(\\mathcal{G}_{post}) < \\operatorname{size}(\\mathcal{G})$.\n\tAs a result, we can apply the induction hypothesis to conclude that the recursive call $\\operatorname{Reach}(\\mathcal{G}_{post})$ in line~\\ref{line: recursion1} terminates.\n\t\n\tNow let us consider the second subgame $\\mathcal{G}_{pre}$, as constructed in line~\\ref{line:gpre2}.\n\tFirst, we observe that $F \\equiv (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\neg \\varphi \\land \\varphi' \\land R_{post}[\\var\/\\varp]$, where $R_{post}$ is a state predicate characterizing the initial winning states of $\\mathcal{G}_{post}$ (this uses~\\Cref{thm:partcorr}).\n\tAs $\\sim$ is a bisimulation on $\\mathcal{G}_{post}$, it follows by~\\Cref{lem:bisimpreservesreach} that $R_{post}$ is equivalent to a disjunction of formulas $\\psi_i$.\n\tAs a consequence, we can equivalently write $F$ as $\\phi_1 \\land 
(\\phi_2[\\var\/\\varp])$ for two formulas $\\phi_1,\\phi_2 \\in \\cal L(\\mathcal{V})$ that are both equivalent to disjunctions of $\\psi_i$.\n\tBy~\\Cref{lem:preenfbisim} it follows that $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$ is also equivalent to a disjunction of $\\psi_i$.\n\t\n\tRestricting transitions to $\\neg F$ in $\\mathcal{G}_{pre}$ has the effect of removing all transitions from states in $\\bigcup\\{S_i \\mid \\psi_i \\implies \\phi_1 \\text{ is valid}\\}$ to states in $\\bigcup\\{S_i \\mid \\psi_i \\implies \\phi_2 \\text{ is valid}\\}$.\n\tIt is clear that $\\sim$ is still a bisimulation in the resulting game.\n\tFurthermore, as $\\operatorname{Enf}(F,\\mathcal{G})$ is satisfiable, there is at least one such transition in $\\mathcal{G}$.\n\tIt follows that $\\operatorname{size}(\\mathcal{G}_{pre}) < \\operatorname{size}(\\mathcal{G})$ and hence the algorithm terminates by induction hypothesis.\n\t\\qed\n\\end{proof}\n\n\\begin{lemma}\n\t\\label{lem:preenfbisim}\n\tLet $\\sim$ be a bisimulation on $\\mathcal{G}$ which is also an equivalence relation, and $S_1,\\ldots, S_n$ be its equivalence classes.\n\tAssume that $S_1,\\ldots, S_n$ are defined by $\\psi_1,\\ldots, \\psi_n \\in \\cal L(\\mathcal{V})$.\n\tLet $\\phi_1 \\land (\\phi_2[\\var\/\\varp]) \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ be such that both $\\phi_1,\\phi_2$ are equivalent to disjunctions of formulas $\\psi_i$.\n\t\n\tThen, $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))$ is equivalent to a disjunction of formulas $\\psi_i$.\n\\end{lemma}\n\\begin{proof}\n\tWe show that if there exists a state in $S_i$ that satisfies $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))$, then so do all states in $S_i$.\n\tLet $s_1 \\in S_i$ be such that $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))(s_1)$ is valid.\n\t\n\tWe make a case 
distinction on whether $s_1 \\in S_{\\texttt{{REACH}}}$ holds.\n\tIf so, then there exists a state $q_1$ such that $(\\mathit{Reach} \\land \\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_1,q_1)$ is valid.\n\tIn particular, $\\phi_1(s_1)$ and $\\phi_2(q_1)$ are both valid.\n\tAssuming that $q_1 \\in S_j$ holds, both $\\psi_i \\implies \\phi_1$ and $\\psi_j \\implies \\phi_2$ are valid, as both $\\phi_1$ and $\\phi_2$ are equivalent to disjunctions of $\\psi$-formulas (which have pairwise disjoint sets of models).\n\tNow take any other state $s_2 \\in S_i$.\n\tAs $s_1 \\sim s_2$ and $\\mathit{Reach}(s_1,q_1)$ is valid, there exists a state $q_2 \\in S_j$ such that $\\mathit{Reach}(s_2,q_2)$ is valid.\n\tFurthermore, as $\\psi_i(s_2)$ and $\\psi_j(q_2)$ are both valid, so is $(\\mathit{Reach} \\land \\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_2,q_2)$.\n\tHence, $\\operatorname{Pre}(\\operatorname{Enf}(\\phi_1 \\land (\\phi_2[\\var\/\\varp]),\\mathcal{G}))(s_2)$ is valid.\n\t\n\tNow assume that $s_1 \\in S_{\\texttt{{SAFE}}}$.\n\tThen, for all states $q_1$ such that $\\mathit{Safe}(s_1,q_1)$ is valid, $(\\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_1,q_1)$ holds.\n\tWhenever this is the case, and $q_1 \\in S_j$ holds, it follows that $\\psi_j \\implies \\phi_2$ is valid.\n\n\tNow take any other state $s_2 \\in S_i$ and assume, for contradiction, that there exists a $q_2$ such that $\\mathit{Safe}(s_2,q_2)$ is valid, but not $(\\phi_1 \\land (\\phi_2[\\var\/\\varp]))(s_2,q_2)$.\n\tAssuming $q_2 \\in S_j$, we have that $\\psi_j \\land \\phi_2$ is unsatisfiable.\n\tAs $s_1 \\sim s_2$ holds, we find $q_1$ such that $\\mathit{Safe}(s_1,q_1) \\land \\psi_j(q_1)$ is valid.\n By the previous reasoning, this would imply that $\\psi_j \\implies \\phi_2$ is valid.\n\tThis is a contradiction as $\\psi_j$ is satisfiable.\n\t\\qed\n\\end{proof}\n\n\\section{Conclusion}\nOur work is a step towards the fully automated synthesis of software. 
\nIt targets symbolically represented reachability games, which are expressive enough to model a variety of problems, from common game benchmarks to program synthesis problems. \nThe presented approach exploits causal information in the form of \\emph{subgoals}, which are parts of the game that the reachability player needs to pass through in order to win.\nHaving computed a subgoal, which can be done using Craig interpolation, the game is split along the subgoal and solved recursively.\nAt the same time, the algorithm infers a structured symbolic strategy for the winning player.\nThe evaluation of our prototype implementation \\textsc{CabPy} shows that our approach is practically applicable and scales much better than previously available tools on several benchmarks. \nWhile termination is only guaranteed for games with a finite bisimulation quotient, the experiments demonstrate that several infinite games can be solved as well.\n\nThis work opens up several interesting questions for further research.\nOne concerns the quality of the returned strategies.\nDue to its compositional nature, at first sight it seems that our approach is not well-suited to handle global optimization criteria, such as reaching the goal in the fewest possible steps.\nOn the other hand, the returned strategies often involve only a few key decisions, and we therefore believe that they are often very sparse, although this remains to be investigated.\nWe also plan to automatically extract deterministic strategies~\\cite{Bloem,Ehlers} from the symbolic ones we currently consider.\n\nAnother question regards the computation of subgoals. 
\nThe performance of our algorithm is highly influenced by which interpolant is returned by the solver.\nIn particular, this affects the number of subgames that have to be solved, and how complex they are.\nWe believe that template-based interpolation \\cite{template_interpolation} could be a promising candidate to explore with the goal of computing good interpolants. \nThis could be combined with the possibility for the user to provide templates or expressive interpolants directly, thereby benefiting from the user's domain knowledge.\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\n\\section{Motivating Example} \\label{sec:motivation}\n\nConsider a scenario in which an expensive painting is displayed in a large exhibition room of a museum.\nIt is secured with an alarm system that is controlled via a control panel on the opposite side of the room.\nA security guard is sleeping at the control panel and occasionally wakes up to check whether the alarm is still armed.\nTo steal the painting, a thief first needs to disable the alarm and then reach the painting before the alarm has been reactivated. We model this scenario as a two-player game between a safety player (the guard) and a reachability player (the thief) in the theory of linear arithmetic.\nThe moves of both players, their initial positions, and the goal condition are described by the formulas:\n\n\n\\iffalse\n\\begin{figure}[t]\n\t\\centering\n\t\\scalebox{0.4}{\n\t\t\\begin{tikzpicture}[node distance=2cm]\n\t\t\\node(mona) at (10,5){\\def\\svgwidth{1cm}\\input{monalisa_color.pdf_tex}};\n\t\t\\node(thief) at (0,0){\\def\\svgwidth{1.15cm}\\input{thief_color.pdf_tex}};\n\t\t\\node(guard) at (0,10){\\def\\svgwidth{2cm}\\input{guard2_color.pdf_tex}};\n\t\t\\draw[step=2.0,black,thin] (-0.5,-0.5) grid (10.5,10.5);\n\t\t\\node(guard) at (-1,0.0){$0$};\n\t\t\\node(guard) at (0,-1){$0$};\n\t\t\\node(guard) at (10.0,-1){$10$};\n\t\t\\node(guard) at (-1,10.0){$10$};\n\t\t\\end{tikzpicture}}\n\t\\caption{The Mona Lisa problem. 
The painting is secured with an alarm the thief has to disable first in order to remain undetected. The sleeping guard will occasionally wake up to check whether the alarm is still on.}\n\t\\label{exp_monalisa_pic}\n\\end{figure}\n\\fi\n\n\n\\begin{align*}\n\\mathit{Init} &\\equiv && \\lnot \\mathbf{r} \\land x = 0 \\land y = 0 \\land p = 0 \\land a = 1 \\land t = 0, &&&\\\\\n\\mathit{Guard} &\\equiv &&\\neg\\mathbf{r} \\land \\mathbf{r}' \\land x' = x \\land y' = y \\land p' = p &&&\\\\\n& &&\\land ((t' = t - 1 \\land a' = a)\\lor (t \\leq 0 \\land t' = 2)),&&&(\\text{sleep or wake up})\\\\\n\\mathit{Thief} &\\equiv && \\mathbf{r} \\land \\neg\\mathbf{r}' \\land t' = t \\\\\n\t&&&\\land x + 1 \\ge x' \\ge x - 1 \\land y + 1 \\ge y' \\ge y - 1&&&(\\text{move})\\\\\n& &&\\land (x' \\neq 0 \\lor y' \\neq 10 \\implies a' = a)&&&(\\text{alarm off})\\\\\n& &&\\land (x' \\neq 10 \\lor y' \\neq 5 \\lor a = 1 \\implies p' = p),&&&(\\text{steal})\\\\\n\\mathit{Goal} &\\equiv && \\lnot \\mathbf{r} \\land p = 1. &&&\n\\end{align*} \n\nThe thief's position in the room is modeled by two coordinates $x,y \\in \\mathbb{R}$ with initial value $(0,0)$, and with every transition the thief can move some bounded distance. \nNote that we use primed variables to represent the value of variables after taking a transition.\nThe control panel is located at $(0,10)$ and the painting at $(10,5)$. \nThe status of the alarm and the painting are described by two boolean variables $a,p \\in \\{0,1\\}$. \nThe guard wakes up every two time units, modeled by the variable $t \\in \\mathbb{R}$. \nThe variables $x,y$ are bounded to the interval $[0,10]$ and $t$ to $[0,2]$. \nThe boolean variable \\textbf{r} encodes who makes the next move. \nIn the presented configuration, the thief needs more time to move from the control panel to the painting than the guard will sleep. 
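As an aside, the symbolic formulas above can be mirrored by explicit-state Python predicates. The following is a toy sketch with hypothetical names; our tool manipulates these formulas symbolically via PySMT rather than enumerating states:

```python
# Explicit-state mirror of the Mona Lisa formulas (illustration only).
# A state is a dict with keys r, x, y, p, a, t.

def init(s):
    return (not s["r"] and s["x"] == 0 and s["y"] == 0
            and s["p"] == 0 and s["a"] == 1 and s["t"] == 0)

def thief(s, t):
    # (move): at most one unit per coordinate, and the turn passes.
    move = (s["r"] and not t["r"] and t["t"] == s["t"]
            and abs(t["x"] - s["x"]) <= 1 and abs(t["y"] - s["y"]) <= 1)
    # (alarm off): the alarm may only change at the control panel (0,10).
    alarm = (t["x"], t["y"]) == (0, 10) or t["a"] == s["a"]
    # (steal): p may only change at the painting (10,5) with the alarm off.
    steal = ((t["x"], t["y"]) == (10, 5) and s["a"] == 0) or t["p"] == s["p"]
    return move and alarm and steal

def goal(s):
    return not s["r"] and s["p"] == 1
```

For instance, from a state adjacent to the control panel the thief can switch the alarm off in one move, but cannot set $p$ anywhere other than at the painting.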
\nIt follows that there is a winning strategy for the guard, namely, to always reactivate the alarm upon waking up.\n\nAlthough it is intuitively fairly easy to come up with this strategy for the guard, it is surprisingly hard for game solving tools to find it. The main obstacle is the infinite state space of this game.\nOur approach for solving games represented in this logical way imitates \\emph{causal reasoning}: \nHumans observe that in order for the thief to steal the painting (i.e., the effect $p=1$), a transition must have been taken whose source state does not satisfy the pre-condition of (steal) while the target state does. \nPart of this cause is the condition $a=0$, i.e., the alarm is off. Recursively, in order for the effect $a=0$ to happen, a transition setting $a$ from $1$ to $0$ must have occurred, and so on. \n\nOur approach captures these cause-effect relationships through the notion of \\emph{necessary subgoals}, which are essential milestones that the reachability player has to transition through in order to achieve their goal.\nThe first necessary subgoal corresponding to the intuitive description above is\n$$C_1 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land p \\neq 1 \\land p' = 1.$$\nIn this case, it is easy to see that $C_1$ is also a \\emph{sufficient subgoal}, meaning that all successor states of $C_1$ are winning for the thief. Therefore, it is enough to solve the game with the modified objective to reach those predecessor states of $C_1$ from which the thief can \\emph{enforce} $C_1$ being the next move (even if it is not their turn). Doing so recursively produces the necessary subgoal\n$$C_2 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land a \\neq 0 \\land a' = 0,$$\nmeaning that some transition must have caused the effect that the alarm is disabled. However, $C_2$ is \\emph{not} sufficient, which can be seen by recursively solving the game spanning from successor states of $C_2$ to $C_1$. 
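In explicit-state toy form, lifting a state predicate $\varphi$ to such a subgoal candidate is a one-liner; in the algorithm this step is the symbolic operation $(\mathit{Safe} \lor \mathit{Reach}) \land \neg\varphi \land \varphi'$ applied to a Craig interpolant $\varphi$, and the names below are hypothetical:

```python
# Lift a state predicate phi to the transition predicate
# (trans) AND (phi false in the source) AND (phi true in the target).
def instantiate(phi, trans):
    return lambda s, t: trans(s, t) and not phi(s) and phi(t)

# The subgoal C1 from above: some move sets p from 0 to 1.
# For illustration we use a trivial move relation accepting any pair.
any_move = lambda s, t: True
c1 = instantiate(lambda s: s["p"] == 1, any_move)
```

Every play reaching a state with $p = 1$ must take a transition satisfying this predicate, which is exactly the necessary-subgoal property.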
This computation has an important caveat: After passing through $C_2$, it may happen that $a$ is reset to $1$ at a later point (in this particular case, this constitutes precisely the winning strategy of the safety player), which means that there is no canonical way to slice the game along this subgoal into smaller parts. Hence the recursive call solves the game from $C_2$ to $C_1$ \\emph{subject to} the bold assumption that any move from $a = 0$ to $a' = 1$ is winning for the guard. This generally underapproximates the winning states of the thief. Remarkably, we show that this approximation is enough to build winning strategies for \\emph{both} players from their respective winning regions. In this case, it allows us to infer that moving through $C_2$ is always a losing move for the thief. However, at the same time, any play reaching $\\mathit{Goal}$ has to move through $C_2$. It follows that the thief loses the global game.\n\nWe evaluated our method on several configurations of this game, which we call \\emph{Mona Lisa}. The results in Section \\ref{sec:experiments} support our conjecture that the room size has little influence on the time our technique needs to solve the game.\n\n\\section{Case Studies}\n\\label{sec:experiments}\n\nIn this section we evaluate our approach on a number of case studies. \nOur prototype \\textsc{CabPy}{}\\footnote[2]{The source code of \\textsc{CabPy}{} and our experimental data are both available at \\url{https:\/\/github.com\/reactive-systems\/cabpy}. We provide a virtual machine image with \\textsc{CabPy}{} already installed for reproducing our evaluation~\\cite{VM}.} is written in Python and implements the game solving part of the presented algorithm. \nExtending it to return a symbolic strategy using the ideas outlined above is straightforward.\nWe compared our prototype with \\textsc{SimSynth} \\cite{FarzanK17}, the only other readily available tool for solving linear arithmetic games. 
\nThe evaluation was carried out with Ubuntu 20.04, a 4-core Intel\\textsuperscript{\\textregistered} Core\\texttrademark~i5 2.30GHz processor, as well as 8GB of memory. \\textsc{CabPy}{} uses the PySMT~\\cite{pysmt2015} library as an interface to the MathSAT5~\\cite{mathsat5} and Z3~\\cite{z3solver} SMT solvers.\nOn all benchmarks, the timeout was set to 10 minutes. In addition to the winner, we report the runtime and the number of subgames our algorithm visits. Both may vary with different SMT solvers or in different environments.\n\\subsection{Game of Nim}\n\nGame of Nim is a classic game from the literature \\cite{Bouton1901} and played on a number of heaps of stones. Both players take turns choosing a single heap and removing at least one stone from it. We consider the version where the player that removes the last stone wins. Our results are shown in \\Cref{exp_nim}. In instances with three heaps or more we bounded the domains of the variables in the instance description, by specifying that no heap exceeds its initial size or goes below zero.\n\nFollowing the discussion in \\Cref{sec:termination}, we need to bound the domains to ensure the termination of our tool on these instances.\nRemarkably, bounding the variables was not necessary for instances with only two heaps, where our tool \\textsc{CabPy}{} scales to considerably larger instances than \\textsc{SimSynth}.\nWe did not add the same constraints to the input of \\textsc{SimSynth}{}, as for \\textsc{SimSynth}{} this resulted in longer rather than shorter runtimes.\nIn Game of Nim, there are no natural necessary subgoals that the safety player can locally control.\n\nThe results (see~\\Cref{exp_nim}) demonstrate that our approach is not completely dependent on finding the right interpolants and is in particular also competitive when the reachability player wins the game. 
We suspect that \\textsc{SimSynth}{} performs worse in these cases because the safety player has a large range of possible moves in most states, and inferring the win of the reachability player requires the tool to backtrack and try out all of them.\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tHeaps & Subgames & Time(s) & Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t(4,4) & 19 & 1.50 & 10.44 & $\\texttt{{REACH}}$\\\\\n\t\t\t(4,5) & 23 & 1.92 & 12.74 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(5,5) & 23 & 1.99 & 85.75 & $\\texttt{{REACH}}$\\\\\n\t\t\t(5,6) & 27 & 2.90 & 91.66 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(6,6) & 28 & 3.04 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(6,7) & 31 & 3.76 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t(20,20) & 88 & 94.85 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(20,21) & 94 & 113.04 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t(30,30) & 128 & 364.13 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(30,31) & 135 & 404.02 & Timeout & $\\texttt{{SAFE}}$\\\\\\hline\n\t\t\t(3,3,3)b & 23 & 13.63 & 2.85 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(1,4,5)b & 32 & 7.00 & 289.85 & $\\texttt{{REACH}}$\\\\\n\t\t\t(4,4,4)b & 33 & 50.55 & 24.39 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(2,4,6)b & 38 & 19.77 & Timeout & $\\texttt{{REACH}}$\\\\\n\t\t\t(5,5,5)b & 33 & 127.89 & 162.50 & $\\texttt{{SAFE}}$\\\\\n\t\t\t(3,5,6)b & 40 & 86.56 & Timeout & $\\texttt{{REACH}}$\\\\\\hline\n\t\t\t(2,2,2,2)b & 39 & 84.79 & 213.79 & $\\texttt{{REACH}}$\\\\\n\t\t\t(2,2,2,3)b & 41 & 102.01 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Game of Nim. The notation $(h_1,\\ldots, h_n)$ denotes the instance played on $n$ heaps, each of which consists of $h_i$ stones. 
Instances marked with b indicate that the variable domains were explicitly bounded in the input for \\textsc{CabPy}{}.}\n\t\\label{exp_nim}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\t$r$ & Subgames & Time(s) &Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t10 & 10 & 0.57 & 3.93 & $\\texttt{{SAFE}}$\\\\\n\t\t\t20 & 20 & 1.23 & 20.48 & $\\texttt{{SAFE}}$\\\\\n\t\t\t40 & 40 & 3.42 & 121.96 & $\\texttt{{SAFE}}$\\\\\n\t\t\t60 & 60 & 7.36 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t80 & 80 & 17.72 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t100 & 100 & 26.36 & Timeout & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Corridor game. The safety player controls the door between rooms $r-1$ and $r$.}\n\t\\label{exp_corridor}\n\\end{figure}\n\n\\subsection{Corridor}\n\nWe now consider an example that demonstrates the potential of our method in case the game structure contains natural bottlenecks. Consider a corridor of $100$ rooms arranged in sequence, i.e., each room $i$ with $0 \\leq i < 100$ is connected to room $i+1$ with a door. The objective of the reachability player is to reach room 100 and they are free to choose valid values from $\\mathbb{R}^2$ for the position in each room at every other turn. The safety player controls some door to a room $r \\leq 100$. Naturally, a winning strategy is to prevent the reachability player from passing that door, which is a natural bottleneck and necessary subgoal on the way to the last room.\n\n The experimental results are summarized in Figure \\ref{exp_corridor}. We evaluated several versions of this game, increasing the length from the start to the controlled door. The results confirm that our causal synthesis algorithm finds the trivial strategy of closing the door quickly. 
This is because Craig interpolation focuses the subgoals on the room number variable while ignoring the movement in the rooms in between, as can be seen from the number of considered subgames. \\textsc{SimSynth}, which tries to generalize a strategy obtained from a step-bounded game, struggles because the tool solves the games that happen between each of the doors before reaching the controlled one.\n\n\\subsection{Mona Lisa}\n\nThe game described in Section \\ref{sec:motivation} between a thief and a security guard is well suited to further assessing the strengths and limitations of both our approach and \\textsc{SimSynth}{}. We ran several experiments with this scenario, scaling the size of the room and the sleep time of the guard, as well as trying a scenario where the guard does not sleep at all. Scaling the size of the room makes it harder for \\textsc{SimSynth}{} to solve this game with a forward unrolling approach, while our approach extracts the necessary subgoals irrespective of the room size. However, scaling the guard's sleep time makes it harder to solve the subgame between the two necessary subgoals, while it only has a minor effect on the length of the unrolling needed to stabilize the play in a safe region, as done by \\textsc{SimSynth}.\n\n The results in Figure \\ref{exp_monalisa} support this conjecture. The size of the room has \\emph{almost no effect at all} on both the runtime of \\textsc{CabPy}{} and the number of considered subgames. However, as the results for a sleep value of 4 show, the employed combination of quantifier elimination and interpolation introduces some instability in the produced formulas. This means we may get different Craig interpolants and slice the game into more or fewer subgoals. Therefore, we see a lot of potential in optimizing the interplay between the employed tools for quantifier elimination and interpolation. 
The phenomenon of the runtime being sensitive to these small changes in values is also seen with \\textsc{SimSynth}, where a longer sleep time sometimes means a faster execution.\n\n\\begin{figure}[tbp]\n\t\\centering\n {\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& & \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tSize & Sleep & Subgames & Time(s) & Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & - & 7 & 0.61 & 4.79 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & - & 7 & 0.60 & 25.26 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & - & 7 & 0.61 & 157.62 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 1 & 10 & 4.22 & 20.31 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 1 & 11 & 4.34 & 36.44 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 1 & 11 & 4.65 & 226.14 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 2 & 13 & 5.88 & 7.40 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 2 & 14 & 5.98 & 60.00 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 2 & 13 & 5.92 & 270.48 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 3 & 18 & 26.58 & 13.94 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 3 & 17 & 26.19 & 115.53 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 3 & 18 & 27.85 & 290.12 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\t\t$10\\times10$ & 4 & 30 & 175.27 & 13.96 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$20\\times20$ & 4 & 22 & 204.79 & 60.08 & $\\texttt{{SAFE}}$\\\\\n\t\t\t$40\\times40$ & 4 & 27 & 123.95 & 319.47 & $\\texttt{{SAFE}}$\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for the Mona Lisa game.}\n\t\\label{exp_monalisa}\n\\end{figure}\n\n\\subsection{Program Synthesis}\n\nLastly, we study two benchmarks that are directly related to program synthesis. 
\nThe first problem is to synthesize a controller for a thermostat by filling out an incomplete program, as described in \\cite{BeyeneCPR14}. A range of possible initial values of the room temperature $c$ is given, e.g., $20.8 \\leq c \\leq 23.5$, together with the temperature dynamics, which depend on whether the heater is on (variable $o \\in \\mathbb{B}$).\nThe objective for $\\texttt{{SAFE}}$ is to control the value of $o$ in every round such that $c$ stays between $20$ and $25$. This is a common benchmark for program synthesis tools and both \\textsc{CabPy}{} and \\textsc{SimSynth}{} solve it quickly.\nThe other problem relates to Lamport's bakery algorithm~\\cite{Lamport1974}. We consider two processes using this protocol to ensure mutually exclusive access to a shared resource. The game describes the task of synthesizing a scheduler that violates mutual exclusion. This is essentially a model checking problem, and we study it to see how well the tools can infer a safety invariant that is outside the safety player's control. For our approach, this makes no difference, as both players may play through a subgoal and the framework is well suited to finding a safety invariant. The forward unrolling approach of \\textsc{SimSynth}, however, seems to explore the whole state space before inferring safety, and fails to find an invariant before a timeout. 
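To give a flavour of the thermostat task, the following closed-loop simulation sketches one candidate safety strategy. The update rule and constants here are assumed for illustration only and are not taken from the benchmark:

```python
# Hypothetical thermostat dynamics; the benchmark's actual update rule
# may differ. The safety player picks the heater state each round.
def step(c, heater_on):
    return c + 0.7 if heater_on else c - 0.9  # assumed linear update

def controller(c):
    # One simple strategy: heat whenever below the midpoint of [20, 25].
    return c < 22.5

c = 20.8  # a possible initial temperature from the given range
for _ in range(1000):
    c = step(c, controller(c))
    assert 20.0 <= c <= 25.0  # the safety objective holds along this run
```

Such a simulation only checks a single run; the synthesis task is to find a strategy together with an invariant that works against every behaviour of the environment.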
\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\\def\\arraystretch{1.1}\\tabcolsep=5pt\n \\small\n\t\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{\\textsc{CabPy}} & \\textsc{SimSynth} & \\\\\n\t\t\t\\hline\n\t\t\tName & Subgames & Time(s) &Time(s) & Winner\\\\\n\t\t\t\\hline\n\t\t\tThermostat & 6 & 0.44 & 0.39 & $\\texttt{{SAFE}}$ \\\\\n\t\t\tBakery & 46 & 18.25 & Timeout & $\\texttt{{SAFE}}$ \\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\\caption{Experimental results for program synthesis problems.}\n\t\\label{exp_synth}\n\\end{figure}\n\n\n\n\n\\section{Introduction}\nTwo-player games are a fundamental model in logic and verification due to their connection to a wide range of topics such as decision procedures, synthesis and control~\\cite{Harding05,Alur15,Alur16,BLOEM2012911,BLOEM20073,6225075}. \nAlgorithmic techniques for \\emph{finite-state} two-player games have been studied extensively for many acceptance conditions~\\cite{GraedelTW2002}.\nFor \\emph{infinite-state} games most problems are directly undecidable. \nHowever, infinite state spaces occur naturally in domains like software synthesis~\\cite{RyzhykCKLH09} and cyber-physical systems~\\cite{JessenRLD07}, and hence handling such games is of great interest. 
\nAn elegant classification of infinite-state games that can be algorithmically handled, depending on the acceptance condition of the game, was given in~\\cite{deAlfaroHM2001}.\nThe authors assume a symbolic encoding of the game in a very general form.\nMore recently, incomplete procedures for solving infinite-state two-player games specified using logical constraints were studied~\\cite{BeyeneCPR14,FarzanK17}.\nWhile~\\cite{BeyeneCPR14} is based on automated theorem-proving for Horn formulas and handles a wide class of acceptance conditions, the work in~\\cite{FarzanK17} focusses on reachability games specified in the theory of linear arithmetic, and uses sophisticated decision procedures for that theory.\n\nIn this paper, we present a novel technique for solving logically represented reachability games based on the notion of \\emph{subgoals}.\nA \\emph{necessary} subgoal is a transition predicate that is satisfied at least once on every play that reaches the overall goal.\nIt represents an intermediate target that the reachability player must reach in order to win. Subgoals open up game solving to the study of cause-effect relationships in the form of counterfactual reasoning~\\cite{counterfactual}: If a cause (the subgoal) had not occurred, then the effect (reaching the goal) would not have happened.\nThus for the safety player, a necessary subgoal provides a chance to win the game based on local information:\nIf they control all states satisfying the pre-condition of the subgoal, then any strategy that in these states picks a transition outside of the subgoal is winning. \nFinding such a necessary subgoal may let us conclude that the safety player wins without ever having to unroll the transition relation.\n\nOn the other hand, passing through a necessary subgoal is in general not enough for the reachability player to win. 
We call a subgoal \\emph{sufficient} if the reachability player indeed has a winning strategy from every state satisfying the post-condition of the subgoal.\nDual to the description in the preceding paragraph, sufficient subgoals provide a chance for the reachability player to win the global game, as they must merely reach this intermediate target. The two properties differ in one key aspect: While necessity of a subgoal only considers the paths of the game arena, for sufficiency the game structure is crucial. \n\nWe show how Craig interpolants can be used to compute necessary subgoals, making our methods applicable to games represented by any logic that supports interpolation. In contrast, determining whether a subgoal is sufficient requires a partial solution of the given game. This motivates the following recursive approach. We slice the game along a necessary subgoal into two parts, the pre-game and the post-game.\nIn order to guarantee that these games are smaller, we solve the post-game under the assumption that the considered subgoal was bridged \\emph{for the last time}.\nWe conclude that the safety player wins the overall game if they can avoid all initial states of the post-game that are winning for the reachability player.\nOtherwise, the pre-game is solved subject to the winning condition given by the sufficient subgoal consisting of these states. \nThis approach not only determines which player wins from each initial state, but also computes symbolically represented winning strategies with a causal structure. \nWinning safety player strategies induce necessary subgoals that the reachability player cannot pass, which constitutes a cause for their loss.
Winning reachability player strategies represent a sequence of sufficient subgoals that will be passed, providing an explanation for the win.\n\nThe Python-based implementation \\textsc{CabPy}{} of our approach was used to compare its performance to \\textsc{SimSynth} \\cite{FarzanK17}, which is, to the best of our knowledge, the only other available tool for solving linear arithmetic reachability games. Our experiments demonstrate that our algorithm is competitive in many case studies. We can also confirm the expectation that our approach heavily benefits from qualitatively expressive Craig interpolants. It is noteworthy that like \\textsc{SimSynth}{} our approach is fully automated and does not require any input in the form of hints or templates. \nOur contributions are summarized as follows:\n\\begin{itemize}[topsep=0.5ex]\n\t\\item We introduce the concept of \\emph{necessary} and \\emph{sufficient subgoals} and show how Craig interpolation can be used to compute necessary subgoals (\\Cref{sec:subgoals}).\n\t\\item We describe an algorithm for solving logically represented two-player reachability games using these concepts.\n We also discuss how to compute representations of winning strategies in our approach (\\Cref{sec:gamesolving}).\n\t\\item We evaluate our approach experimentally through our Python-based tool \\textsc{CabPy}, demonstrating a competitive performance compared to the previously available tool \\textsc{SimSynth}{} on various case studies (\\Cref{sec:experiments}).\n\\end{itemize}\n\n\\textbf{Related Work. 
} \nThe problem of solving linear arithmetic games is addressed in~\\cite{FarzanK17} using an approach that relies on a dedicated decision procedure for quantified linear arithmetic formulas, together with a method to generalize safety strategies from truncated versions of the game that end after a prescribed number of rounds.\nOther approaches for solving infinite-state games include deductive methods that compute the winning regions of both players using proof rules~\\cite{BeyeneCPR14}, \npredicate abstraction where an abstract controlled predecessor operation is used on the abstract game representation~\\cite{WalkerR14}, \nand symbolic BDD-based exploration of the state space~\\cite{Edelkamp2002}. \nAdditional techniques are available for finite-state games, e.g., generalizing winning runs into a winning strategy for one of the players~\\cite{NarodytskaLBRW14}. \n\nOur notion of subgoal is related to the concept of landmarks as used in planning~\\cite{HoffmannPS11}. Landmarks are milestones that must be true on every successful plan, and they can be used to decompose a planning task into smaller sub-tasks. \nLandmarks have also been used in a game setting to prevent the opponent from reaching their goal using counter-planning~\\cite{PozancoEFB18}. \nWhenever a planning task is unsolvable, one method to find out why is checking hierarchical abstractions for solvability and finding the components causing the problem~\\cite{SreedharanSSK19}. \n\nCausality-based approaches have also been used for model checking of multi-threaded concurrent programs~\\cite{DBLP:conf\/concur\/KupriyanovF13,DBLP:conf\/cav\/KupriyanovF14}. \nIn our approach, we use Craig interpolation to compute the subgoals. \nInterpolation has already been used in similar contexts before, for example to extract winning strategies from game trees~\\cite{EenLNR15} or to compute new predicates to refine the game abstractions~\\cite{SlicingAbstractions}. 
\nIn~\\cite{FarzanK17}, interpolation is used to synthesize concrete winning strategies from so-called \\emph{winning strategy skeletons}, which describe a set of strategies of which at least one is winning.\n\n\n\n\\section{Preliminaries}\n\\label{sec:prelims}\nWe consider two-player reachability games defined by formulas in a given logic $\\cal L$.\nWe let $\\cal L(\\mathcal{V})$ be the $\\cal L$-formulas over a finite set of variables $\\mathcal{V}$, also called \\emph{state predicates} in the following.\nWe call $\\mathcal{V'} = \\{\\mathit{v'} \\mid \\mathit{v} \\in \\mathcal{V}\\}$ the set of \\emph{primed variables}, which are used to represent the values of variables after taking a transition.\nTransitions are expressed by formulas in the set $\\cal L(\\mathcal{V} \\cup \\mathcal{V'})$, called \\emph{transition predicates}.\nFor some formula $\\varphi \\in \\cal L(\\mathcal{V})$, we denote the substitution of all variables by their primed variants by $\\varphi[\\var\/\\varp]$. Similarly, we define $\\varphi[\\varp\/\\var]$.\n\nFor our algorithm, we require the satisfiability problem of $\\cal L$-formulas to be decidable and \\emph{Craig interpolants}~\\cite{Craig1957} to exist for any two mutually unsatisfiable formulas.\nFormally, we assume there is a function $\\operatorname{Sat} : \\cal L(\\mathcal{V}) \\to \\mathbb{B}$ that checks the satisfiability of a formula $\\varphi \\in \\cal L(\\mathcal{V})$, and an unsatisfiability check $\\operatorname{Unsat} : \\cal L(\\mathcal{V}) \\rightarrow \\mathbb{B}$.
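For intuition only, the assumed $\operatorname{Sat}$/$\operatorname{Unsat}$ interface can be mimicked over small finite domains by brute-force enumeration; actual instances of $\cal L$ such as linear arithmetic would instead query an SMT solver. In this sketch, predicates are Python functions over valuations, and the variable domains are illustrative assumptions.

```python
from itertools import product

# A toy stand-in for the assumed Sat/Unsat interface: predicates are
# Boolean Python functions over valuations (dicts from variables to
# values), and satisfiability is decided by brute-force enumeration
# over small finite domains. Real instances of the logic L (e.g.
# linear arithmetic) would use an SMT solver instead.

DOMAINS = {"x": range(-2, 6), "r": (False, True)}  # illustrative domains

def sat(phi) -> bool:
    """Return True iff some valuation of the variables satisfies phi."""
    names = list(DOMAINS)
    return any(
        phi(dict(zip(names, values)))
        for values in product(*(DOMAINS[v] for v in names))
    )

def unsat(phi) -> bool:
    return not sat(phi)

# Example: x <= 0 and x >= 5 are mutually unsatisfiable state predicates.
init = lambda s: s["x"] <= 0
goal = lambda s: s["x"] >= 5
assert sat(init) and sat(goal) and unsat(lambda s: init(s) and goal(s))
```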
\nFor interpolation, we assume that there is a function $\\operatorname{Interpolate} : \\cal L(\\mathcal{V}) \\times \\cal L(\\mathcal{V}) \\to \\cal L(\\mathcal{V})$ computing a \\emph{Craig interpolant} for mutually unsatisfiable formulas: If $\\varphi ,\\psi \\in \\cal L(\\mathcal{V})$ are such that $\\operatorname{Unsat}(\\varphi\\land\\psi)$ holds, then $\\psi \\implies \\operatorname{Interpolate}(\\varphi,\\psi)$ is valid, $\\operatorname{Interpolate}(\\varphi,\\psi)\\land \\varphi$ is unsatisfiable, and $\\operatorname{Interpolate}(\\varphi,\\psi)$ only contains variables shared by $\\varphi$ and $\\psi$.\n\nThese functions are provided by many modern \\emph{Satisfiability Modulo Theories} (SMT) solvers, in particular for the theories of linear integer arithmetic and linear real arithmetic, which we will use for all our examples. Note that interpolation is usually only supported for the quantifier-free fragments of these logics, while our algorithm will introduce existential quantifiers.\nTherefore, we resort to quantifier elimination wherever necessary, for which there are known procedures for both linear integer arithmetic and linear real arithmetic formulas~\\cite{presburger1929uber,Monniaux2008}.\n\nIn order to distinguish the two players, we will assume that a Boolean variable called $\\mathbf{r} \\in \\mathcal{V}$ exists, which holds exactly in the states controlled by the reachability player.\nFor all other variables $v \\in \\mathcal{V}$, we let $\\mathcal{D}(v)$ be the domain of $v$, and we define $\\mathcal{D} = \\bigcup \\{\\mathcal{D}(v) \\mid v \\in \\mathcal{V}\\}$.\nIn the remainder of the paper, we consider the variables $\\mathcal{V}$ and their domains to be fixed.\n\n\\begin{definition}[Reachability Game]\n\tA reachability game is defined by a tuple $\\G = \\langle \\init, \\safe, \\reach, \\goal \\rangle${}, where $\\mathit{Init} \\in \\cal L(\\mathcal{V})$ is the \\emph{initial condition}, $\\mathit{Safe} \\in \\cal L(\\mathcal{V} \\cup 
\\mathcal{V'})$ defines the transitions of player $\\texttt{{SAFE}}$, $\\mathit{Reach} \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ defines the transitions of player $\\texttt{{REACH}}$ and $\\mathit{Goal} \\in \\cal L(\\mathcal{V})$ is the \\emph{goal condition}.\n\n We require the formulas $(\\mathit{Safe} \\implies \\neg \\mathbf{r})$ and $(\\mathit{Reach} \\implies \\mathbf{r})$ to be valid.\n\\end{definition}\n\nA \\emph{state} $s$ of $\\mathcal{G}$ is a valuation of the variables $\\mathcal{V}$, i.e., a function $s\\colon\\mathcal{V} \\to \\mathcal{D}$ that satisfies $s(v) \\in \\mathcal{D}(v)$ for all $v \\in \\mathcal{V}$.\nWe denote the set of states by $S$, and we let $S_\\texttt{{SAFE}}$ be the states $s$ such that $s(\\mathbf{r}) = \\texttt{false}$, and $S_\\texttt{{REACH}}$ be the states $s$ such that $s(\\mathbf{r}) = \\texttt{true}$. The variable $\\mathbf{r}$ determines whether \\texttt{{REACH}}{} or \\texttt{{SAFE}}{} makes the move out of the current state, and in particular $\\mathit{Safe} \\land \\mathit{Reach}$ is unsatisfiable. \n\nGiven a state predicate $\\varphi \\in \\cal L(\\mathcal{V})$, we denote by $\\varphi(s)$ the closed formula we get by replacing each occurrence of variable $v \\in \\mathcal{V}$ in $\\varphi$ by $s(v)$.\nSimilarly, given a transition predicate $\\tau \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ and states $s,s'$, we let $\\tau(s,s')$ be the formula we obtain by replacing all occurrences of $v \\in \\mathcal{V}$ in $\\tau$ by $s(v)$, and all occurrences of $v' \\in \\mathcal{V'}$ in $\\tau$ by $s'(v)$. For replacing only $v \\in \\mathcal{V}$ by $s(v)$, we define $\\tau(s)\\in\\cal L(\\mathcal{V}')$. 
A \\emph{trap state} of $\\mathcal{G}$ is a state $s$ such that $(\\mathit{Safe} \\lor \\mathit{Reach})(s)\\in\\cal L(\\mathcal{V}')$ is unsatisfiable (i.e., $s$ has no outgoing transitions).\n\nA \\emph{play} of $\\mathcal{G}$ starting in state $s_0$ is a finite or infinite sequence of states $\\rho = s_0 s_1 s_2 \\ldots \\in \\S^+ \\cup \\S^\\omega$ such that for all $i < \\operatorname{len}(\\rho)$ either $\\mathit{Safe}(s_i,s_{i+1})$ or $\\mathit{Reach}(s_i,s_{i+1})$ is valid, and if $\\rho$ is a finite play, then $s_{\\operatorname{len}(\\rho)}$ is required to be a trap state.\nHere, $\\operatorname{len}(s_0\\ldots s_n) = n$ for finite plays, and $\\operatorname{len}(\\rho) = \\infty$ if $\\rho$ is an infinite play. The set of plays of some game $\\G = \\langle \\init, \\safe, \\reach, \\goal \\rangle${} is defined as $\\operatorname{Plays}(\\mathcal{G}) = \\{\\rho = s_0 s_1 s_2 \\ldots \\mid \\rho\\text{ is a play in } \\mathcal{G} \\text{ s.t. } \\mathit{Init}(s_0)\\text{ holds} \\}$.\n$\\texttt{{REACH}}$ \\emph{wins} some play $\\rho = s_0 s_1 \\ldots$ if the play reaches a goal state, i.e., if there exists some integer $0\\leq k \\leq\\operatorname{len}(\\rho)$ such that $\\mathit{Goal}(s_k)$ is valid.\nOtherwise, $\\texttt{{SAFE}}$ wins play $\\rho$.\nA \\emph{reachability strategy} $\\sigma_{\\mathit{R}}$ is a function $\\sigma_{\\mathit{R}} : \\S^*S_\\texttt{{REACH}} \\to \\S$ such that if $\\sigma_{\\mathit{R}}(\\omega s) =s'$ and $s$ is not a trap state, then $\\mathit{Reach}(s,s')$ is valid. \nWe say that a play $\\rho = s_0 s_1 s_2 \\ldots$ is \\emph{consistent} with $\\sigma_{\\mathit{R}}$ if for all $i$ such that $s_i(\\mathbf{r}) = \\texttt{true}$ we have $s_{i+1} = \\sigma_{\\mathit{R}}(s_0 \\ldots s_i)$.\nA reachability strategy $\\sigma_{\\mathit{R}}$ is \\emph{winning} from some state $s$ if $\\texttt{{REACH}}$ wins every play consistent with $\\sigma_{\\mathit{R}}$ starting in $s$. 
We define \\emph{safety strategies} $\\sigma_{\\mathit{S}}$ for $\\texttt{{SAFE}}$ analogously. We say that a player \\emph{wins in or from a state}~$s$ if they have a winning strategy from $s$. Lastly, $\\texttt{{REACH}}$ \\emph{wins the game} $\\mathcal{G}$ if they win from some initial state.\nOtherwise, $\\texttt{{SAFE}}$ wins.\n\nWe often project a transition predicate $T$ onto the source or target states of transitions satisfying $T$, which is taken care of by the formulas $\\operatorname{Pre}(\\mathit{T})=\\exists \\mathcal{V'}.\\:\\mathit{T}$ and $\\operatorname{Post}(\\mathit{T})=\\exists \\mathcal{V}.\\:\\mathit{T}$.\nThe notation $\\exists \\mathcal{V}$ (resp. $\\exists \\mathcal{V'}$) represents the existential quantification over all variables in the corresponding set.\nGiven $\\varphi \\in \\cal L(\\mathcal{V})$, we call the set of transitions in $\\mathcal{G}$ that move from states not satisfying $\\varphi$, to states satisfying $\\varphi$, the \\emph{instantiation} of $\\varphi$, formally:\n\\[\\operatorname{Instantiate}(\\varphi,\\mathcal{G})=(\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\lnot\\varphi\\land\\varphi'.\\]\n\n\\section{Subgoals}\n\\label{sec:subgoals}\n\nWe formally define the notion of subgoals.\nLet $\\mathcal{G} = \\langle \\mathit{Init}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ be a fixed reachability game throughout this section, where we assume that $\\mathit{Init} \\land \\mathit{Goal}$ is unsatisfiable.\nWhenever this assumption is not satisfied in our algorithm, we will instead consider the game $\\mathcal{G}' = \\langle \\mathit{Init} \\land \\neg \\mathit{Goal}, \\mathit{Safe}, \\mathit{Reach}, \\mathit{Goal} \\rangle$ which does satisfy it.\nAs states in $\\mathit{Init} \\land \\mathit{Goal}$ are immediately winning for \\texttt{{REACH}}, this is not a real restriction.\n\n\\begin{definition}[Enforceable transitions]\n\tThe set of \\emph{enforceable transitions} relative to a transition predicate $T \\in 
\\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ is defined by the formula\n\t\\[\\operatorname{Enf}(\\mathit{T},\\mathcal{G})=\\; (\\mathit{Safe} \\lor \\mathit{Reach}) \\land \\mathit{T} \\land \\lnot \\exists \\mathcal{V'}.\\:\\big(\\mathit{Safe}\\land\\lnot\\mathit{T}\\big).\\]\n\t\n\\end{definition}\n\nThe enforceable transitions operator serves a purpose similar to the \\emph{controlled predecessors} operator commonly known in the literature, which is often used in a backwards fixed point computation, called \\emph{attractor construction} \\cite{Thomas95}. For both operations, the idea is to determine controllability by \\texttt{{REACH}}{}. The main difference is that we do not consider the whole transition relation, but only a predetermined set of transitions and check from which predecessor states the post-condition of the set can be enforced by \\texttt{{REACH}}{}. These include all transitions in $T$ controlled by \\texttt{{REACH}}{} and additionally transitions in $T$ controlled by \\texttt{{SAFE}}{} such that \\emph{all other transitions} in the origin state of the transition also satisfy $T$. 
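On an explicitly given finite game, the enforceable transitions operator can be computed directly from its defining formula. The following sketch does so over explicit sets of transitions; the tiny game it is evaluated on is a made-up example, not one of the paper's benchmarks.

```python
# Enf(T, G) on an explicit finite game: keep the transitions of T that
# are controlled by REACH, plus those SAFE-controlled transitions whose
# source state has no SAFE transition leaving T. States are opaque
# labels; the game below is a made-up illustration.

def enf(T, safe, reach):
    """Transitions in T whose post-condition REACH can enforce."""
    result = set()
    for (s, t) in (safe | reach) & T:
        # no SAFE escape: every SAFE transition from s stays inside T
        if all((s, u) in T for (src, u) in safe if src == s):
            result.add((s, t))
    return result

# From SAFE-controlled state "a", one transition enters T and one avoids
# it, so (a, b) is not enforceable; from "c" (also SAFE-controlled) every
# transition lies in T, so (c, b) is enforceable; (y, b) is controlled by
# REACH and hence enforceable.
safe = {("a", "b"), ("a", "x"), ("c", "b")}
reach = {("y", "b")}
T = {("a", "b"), ("c", "b"), ("y", "b")}
assert enf(T, safe, reach) == {("c", "b"), ("y", "b")}
```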
The similarity with the controlled predecessor is exemplified by the following lemma:\n\\begin{lemma}\n \\label{lem:enf}\n Let $T$ be a transition predicate, and suppose that all states satisfying $\\operatorname{Post}(T)[\\varp\/\\var]$ are winning for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n Then all states in $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ are winning for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n\\end{lemma}\n\\begin{proof}\n Clearly, all states in $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ that are under the control of \\texttt{{REACH}}{} are winning for \\texttt{{REACH}}{}, as in any such state they have a transition satisfying $T$ (observe that $\\operatorname{Enf}(T,\\mathcal{G}) \\implies T$ is valid), which leads to a winning state by assumption.\n\n So let $s$ be a state satisfying $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))$ that is under the control of \\texttt{{SAFE}}{}.\n As $\\operatorname{Pre}(\\operatorname{Enf}(T,\\mathcal{G}))(s)$ is valid, $s$ has a transition that satisfies $T$ (in particular, $s$ is not a trap state).\n Furthermore, we know that there is no $s' \\in \\S$ such that $\\mathit{Safe}(s,s')\\land\\lnot\\mathit{T}(s,s')$ holds, and hence there is no transition satisfying $\\lnot\\mathit{T}$ from $s$. Since $\\operatorname{Post}(T)[\\varp\/\\var]$ is winning for \\texttt{{REACH}}{}, it follows that from $s$ player \\texttt{{SAFE}}{} cannot avoid playing into a winning state of \\texttt{{REACH}}{}.\n \\qed\n\\end{proof}\n\nWe now turn to a formal definition of \\emph{necessary subgoals}, which intuitively are sets of transitions that appear on every play that is winning for \\texttt{{REACH}}{}. 
\n\\begin{definition}[Necessary subgoal]\\label{necessary_subgoal}\n\tA \\emph{necessary subgoal} $C \\in \\cal L(\\mathcal{V} \\cup \\mathcal{V'})$ for~$\\mathcal{G}$ is a transition predicate such that for every play $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ and $n \\in \\mathbb{N}$ such that $\\mathit{Goal}(s_{n})$ is valid, there exists some $k < n$ such that $C(s_k,s_{k+1})$ is valid.\n\\end{definition}\n\nNecessary subgoals provide a means by which winning safety player strategies can be identified, as formalized in the following lemma.\n\n\\begin{lemma}\\label{prop_safestrat}\n\tA safety strategy $\\sigma_{\\mathit{S}}$ is winning in $\\mathcal{G}$ if and only if there exists a necessary subgoal $\\mathit{C}$ for $\\mathcal{G}$ such that for all plays $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ consistent with~$\\sigma_{\\mathit{S}}$ there is no $n \\in \\mathbb{N}$ such that $C(s_n,s_{n+1})$ holds. \n\\end{lemma}\n\\begin{proof}\n ``$\\implies$''. The transition predicate $\\mathit{Goal}[\\var\/\\varp]$ (i.e., transitions with endpoints satisfying $\\mathit{Goal}$) is clearly a necessary subgoal. If $\\sigma_{\\mathit{S}}$ is winning for \\texttt{{SAFE}}, then no play consistent with $\\sigma_{\\mathit{S}}$ contains a transition in this necessary subgoal. \\\\\n \\noindent ``$\\Longleftarrow$''. Let $C$ be a necessary subgoal such that no play consistent with $\\sigma_{\\mathit{S}}$ contains a transition of $C$. Then by \\Cref{necessary_subgoal} no play consistent with $\\sigma_{\\mathit{S}}$ contains a state satisfying $\\mathit{Goal}$. Hence $\\sigma_{\\mathit{S}}$ is a winning strategy for \\texttt{{SAFE}}.\n\t\\qed\n\\end{proof}\n\nOf course, the question remains how to compute non-trivial subgoals. 
Indeed, using $\\mathit{Goal}$ as outlined in the proof above provides no further benefit over a simple backwards exploration (see~\\Cref{rem:attractor} in the following section).\n\nIdeally, a subgoal should represent an interesting key decision to focus the strategy search.\nAs we show next, Craig interpolation allows us to extract partial causes for the mutual unsatisfiability of $\\mathit{Init}$ and $\\mathit{Goal}$ and can in this way provide necessary subgoals. \nRecall that a Craig interpolant $\\varphi$ between $\\mathit{Init}$ and $\\mathit{Goal}$ is a state predicate that is implied by $\\mathit{Goal}$, and unsatisfiable in conjunction with $\\mathit{Init}$. \nIn this sense, $\\varphi$ describes an observable \\emph{effect} that must occur if $\\texttt{{REACH}}{}$ wins, and the concrete transition that instantiates the interpolant \\emph{causes} this effect.\n\n\\begin{proposition}\\label{prop_necessary}\n\tLet $\\varphi$ be a Craig interpolant for $\\mathit{Init}$ and $\\mathit{Goal}$.
Then the transition predicate $\\operatorname{Instantiate}(\\varphi,\\mathcal{G})$ is a necessary subgoal.\n\\end{proposition}\n\\begin{proof}\n As $\\varphi$ is an interpolant, it holds that $\\mathit{Goal} \\implies \\varphi$ is valid and $\\mathit{Init} \\land \\varphi$ is unsatisfiable.\n Consider any play $\\rho = s_0 s_1 \\ldots$ of $\\mathcal{G}$ such that $\\mathit{Goal}(s_n)$ is valid for some $n \\in \\mathbb{N}$.\n It follows that $\\lnot \\varphi(s_0)$ and $\\varphi(s_n)$ are both valid, since $\\mathit{Init}(s_0)$ holds and $\\mathit{Init} \\land \\varphi$ is unsatisfiable, while $\\mathit{Goal}(s_n)$ implies $\\varphi(s_n)$.\n Consequently, there is some $0 \\leq i < n$ such that $\\lnot \\varphi(s_i)$ and $\\varphi(s_{i+1})$ are both valid.\n As all pairs $(s_k,s_{k+1})$ satisfy either $\\mathit{Safe}$ or $\\mathit{Reach}$, it follows that $\\big(\\operatorname{Instantiate}(\\varphi,\\mathcal{G})\\big)(s_i,s_{i+1})$ is valid.\n Hence, $\\operatorname{Instantiate}(\\varphi,\\mathcal{G})$ is a necessary subgoal.\n\t\\qed\n\\end{proof}\n\nWhile avoiding a necessary subgoal is a winning strategy for \\texttt{{SAFE}}{}, reaching a necessary subgoal is in general not sufficient to guarantee a win for \\texttt{{REACH}}{}.\nThis is because there might be some transitions in the necessary subgoal that produce the desired effect described by the Craig interpolant, but that trap \\texttt{{REACH}}{} in a region of the state space where they cannot enforce some other necessary effect needed to reach the goal.
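The construction in the proposition can be replayed on a toy integer game. Here $\mathit{Init}$ is $x \leq 0$, $\mathit{Goal}$ is $x \geq 5$, and $x \geq 1$ is a valid interpolant (it is implied by $\mathit{Goal}$ and contradicts $\mathit{Init}$); the specific plays below are made-up illustrations rather than plays of any benchmark game.

```python
# Instantiating an interpolant yields a necessary subgoal: on any play
# from Init (x <= 0) to Goal (x >= 5), some transition must cross from
# states violating phi to states satisfying phi, where phi is an
# interpolant such as x >= 1. The plays below are made-up illustrations.

phi = lambda x: x >= 1   # interpolant: Goal => phi, and Init & phi is unsat

def crosses_subgoal(play) -> bool:
    """True iff some transition of the play moves from not-phi to phi."""
    return any(not phi(s) and phi(t) for s, t in zip(play, play[1:]))

# A play from an initial state (x = -1) to a goal state (x = 5):
play = [-1, 0, 2, 1, 3, 5]
assert play[0] <= 0 and play[-1] >= 5   # starts in Init, ends in Goal
assert crosses_subgoal(play)            # ...so it must cross the subgoal

# A play that never reaches Goal need not cross the subgoal:
assert not crosses_subgoal([-1, 0, -2, 0])
```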
\nFor the purpose of describing a set of transitions that is guaranteed to be winning for the reachability player, we introduce \\emph{sufficient subgoals}.\n\n \\begin{definition}[Sufficient subgoal]\n A transition predicate $\\mathit{F}\\in\\cal L(\\mathcal{V}\\cup\\mathcal{V'})$ is called a \\emph{sufficient subgoal} if $\\texttt{{REACH}}$ wins from every state satisfying $\\operatorname{Post}(\\mathit{F})[\\varp\/\\var]$.\n \\end{definition}\n\n\\begin{example}\n\tConsider the Mona Lisa game $\\mathcal{G}$ described in Section~\\ref{sec:motivation}.\n\t\\[C_1 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land p \\neq 1 \\land p' = 1\\]\n\tqualifies as a sufficient subgoal, because $\\texttt{{REACH}}$ wins from every successor state, as all those states satisfy $\\mathit{Goal}$. \n\tAlso, every play reaching $\\mathit{Goal}$ eventually passes $C_1$, and hence $C_1$ is also necessary. On the other hand, \n\t\\[C_2 = (\\mathit{Guard} \\lor \\mathit{Thief}) \\land a \\neq 0 \\land a' = 0\\]\n\tis only a necessary subgoal in $\\mathcal{G}$, because $\\texttt{{SAFE}}$ wins from some (in fact all) states satisfying $\\operatorname{Post}(C_2)$.\n\t\n\\end{example}\n \nIf the set of transitions in the necessary subgoal $C$ that lead to winning states of \\texttt{{REACH}}{} is definable in $\\cal L$, then we call the transition predicate $F$ that defines it the \\emph{largest sufficient subgoal} included in $C$. \nIt is characterized by the properties (1) $F \\implies C$ is valid, and (2) if $F'$ is such that $F \\implies F'$ is valid, then either $F \\equiv F'$, or $F'$ is not a sufficient subgoal. Since $C$ is a necessary subgoal and $F$ is maximal with the properties above, \\texttt{{REACH}}{} needs to see a transition in $F$ eventually in order to win.
This balance of necessity and sufficiency allows us to partition the game along $F$ into a game that happens after the subgoal and one that happens before.\n\n\\begin{proposition}\n\t\t\\label{lem:slicing}\n\t Let $C$ be a necessary subgoal, and $F$ be the largest sufficient subgoal included in $C$. Then \\texttt{{REACH}}{} wins from an initial state $s$ in $\\mathcal{G}$ if and only if \\texttt{{REACH}}{} wins from $s$ in the pre-game \n \\[\\mathcal{G}_{pre} = \\langle \\mathit{Init}, \\mathit{Safe} \\land \\neg F, \\mathit{Reach} \\land \\neg F, \\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G})) \\rangle.\\]\n\\end{proposition}\n\\begin{proof}\n ``$\\implies$''. Suppose that \\texttt{{REACH}}{} wins in $\\mathcal{G}$ from $s$ using strategy $\\sigma_R$. Assume for a contradiction that \\texttt{{SAFE}}{} wins in $\\mathcal{G}_{pre}$ from $s$ using strategy $\\sigma_S$. Consider strategy $\\sigma'_S$ such that $\\sigma_{\\mathit{S}}'(\\omega s') = \\sigma_{\\mathit{S}}(\\omega s')$ if $(\\mathit{Safe} \\land \\lnot F)(s')$ is satisfiable, and else $\\sigma_{\\mathit{S}}'(\\omega s') = \\sigma_{\\mathit{S}}''(\\omega s')$, where $\\sigma_{\\mathit{S}}''$ is an arbitrary safety player strategy in $\\mathcal{G}$. Let $\\rho = s_0s_1\\ldots$ be the (unique) play of $\\mathcal{G}$ consistent with both $\\sigma_R$ and $\\sigma'_S$, where $s_0 = s$. Since $\\sigma_R$ is winning in $\\mathcal{G}$ and $C$ is a necessary subgoal in $\\mathcal{G}$, there must exist some $m\\in\\mathbb{N}$ such that $C(s_m, s_{m+1})$ is valid. Let $m$ be the smallest such index. Since $F \\implies C$, we know for all $0 \\leq k < m$ that $\\lnot F (s_k,s_{k+1})$ holds. Hence, there is the play $\\rho' = s_0s_1\\ldots s_m \\ldots$ in $\\mathcal{G}_{pre}$ consistent with $\\sigma_S$. The state $s_{m+1}$ is winning for \\texttt{{REACH}}{} in $\\mathcal{G}$, as it is reached on a play consistent with the winning strategy $\\sigma_R$. 
Hence, we know that $F(s_m, s_{m+1})$ holds, because $F$ is the largest sufficient subgoal included in $C$.\n If $(\\mathit{Reach} \\land F)(s_m, s_{m+1})$ held, we would have that $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))(s_m)$ holds: a contradiction with $\\rho'$ being consistent with $\\sigma_S$, which we assumed to be winning in $\\mathcal{G}_{pre}$. It follows that $(\\mathit{Safe} \\land F)(s_m, s_{m+1})$ holds. We can conclude that $(\\mathit{Safe} \\land \\lnot F)(s_m)$ is unsatisfiable (i.e., $s_m$ is a trap state in $\\mathcal{G}_{pre}$), because in all other cases $\\texttt{{SAFE}}$ plays according to $\\sigma_{\\mathit{S}}$, which cannot choose a transition satisfying $F$. However, this implies that $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))(s_m)$ holds, again a contradiction with $\\rho'$ being consistent with the winning strategy $\\sigma_S$.\n\n\t\\noindent ``$\\Longleftarrow$''. If $\\texttt{{REACH}}$ wins in $\\mathcal{G}_{pre}$, they have a strategy $\\sigma_{\\mathit{R}}$ such that every play consistent with $\\sigma_{\\mathit{R}}$ reaches the set $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$.\n As $F$ is a sufficient subgoal, the states satisfying $\\operatorname{Post}(F)[\\varp\/\\var]$ are winning for \\texttt{{REACH}}{} by definition.\n It follows by~\\Cref{lem:enf} that all states satisfying $\\operatorname{Pre}(\\operatorname{Enf}(F,\\mathcal{G}))$ are winning in $\\mathcal{G}$.\n Combining $\\sigma_{R}$ with a strategy that wins in all these states yields a winning strategy for \\texttt{{REACH}}{} in $\\mathcal{G}$.\n \\qed\n\\end{proof}\n
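On explicit finite games, the slicing proposition can be checked directly: solve the game with a standard attractor fixpoint, cut the largest sufficient subgoal $F$ out of a necessary subgoal $C$, and compare the winner of the pre-game with that of the original game. The following sketch does this on a made-up four-state game; it is an illustration of the statement, not the symbolic algorithm of the paper.

```python
# Attractor-based solver for explicit finite reachability games, used
# to check the slicing proposition on a made-up toy game. States are
# labels; owner[s] is "R" (REACH moves from s) or "S" (SAFE moves).

def win_reach(states, owner, safe, reach, goal):
    """Fixpoint: states from which REACH can force a visit to `goal`."""
    W = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states - W:
            out_s = {t for (u, t) in safe if u == s}
            out_r = {t for (u, t) in reach if u == s}
            if owner[s] == "R" and out_r & W:
                W.add(s); changed = True
            # SAFE loses from s iff s is not a trap and all moves enter W
            elif owner[s] == "S" and out_s and out_s <= W:
                W.add(s); changed = True
    return W

# Made-up game: SAFE moves from a; REACH moves from b; goal is {d};
# c is a trap state, so finite plays ending there are won by SAFE.
states = {"a", "b", "c", "d"}
owner = {"a": "S", "b": "R", "c": "S", "d": "S"}
safe = {("a", "b"), ("a", "c")}
reach = {("b", "d")}
goal = {"d"}
W = win_reach(states, owner, safe, reach, goal)
assert W == {"b", "d"}          # SAFE escapes to the trap c from a

# Slicing along the necessary subgoal C = transitions entering d:
C = {(s, t) for (s, t) in safe | reach if t == "d"}
F = {(s, t) for (s, t) in C if t in W}   # largest sufficient subgoal
# Pre-game: cut F, make Pre(Enf(F)) the new goal. Here b is
# REACH-controlled, so Enf(F) = F and the new goal is just {b}.
pre_goal = {s for (s, t) in F}
W_pre = win_reach(states, owner, safe - F, reach - F, pre_goal)
# REACH wins from a in G iff REACH wins from a in the pre-game:
assert ("a" in W) == ("a" in W_pre)
```

In this example both games are won by SAFE from the initial state $a$, since SAFE can escape to the trap state $c$ before either the goal or the subgoal is reached.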