diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzauhy" "b/data_all_eng_slimpj/shuffled/split2/finalzzauhy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzauhy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{introduction}\nNear-field wireless power transfer (WPT) has been attracting growing interest, due to its high efficiency for power transmission. Near-field WPT is realized by inductive coupling (IC) \\cite{MurakamiSatoh96, KimCho01} for short-range applications within centimeters, or magnetic resonant coupling (MRC) \\cite{KursSoljatic07, HuiLee14} for mid-range applications up to a couple of meters. With the short-range WPT technology reaching the stage of commercial use, the mid-range WPT technology has been gathering momentum in the last decade. For recent progress on mid-range WPT, we refer the reader to the review paper \\cite{HuiLee14} and the references therein.\n\nThe MRC-WPT system with multiple transmitters (TXs) and\/or multiple receivers (RXs) has been studied in the literature \\cite{YoonLing11, AhnHong13, JadidianKatabi14, RezaZhang:C15}. The MRC-WPT system with only two TXs and one RX is studied in \\cite{YoonLing11, AhnHong13}, but the obtained results cannot be readily extended to the case of more than two TXs. Recently, a ``Magnetic MIMO'' charging system~\\cite{JadidianKatabi14} was demonstrated to charge a phone inside a user's pocket 40~cm away from the array of TX coils, independently of the phone's orientation. For an MRC-WPT system with multiple RXs, the load resistances of the RXs are jointly optimized in \\cite{RezaZhang:C15} to minimize the total transmit power and address the ``near-far'' fairness problem. Deploying multiple TXs can help focus the magnetic fields on the RX~\\cite{JadidianKatabi14}, in a manner analogous to beamforming in far-field wireless communications~\\cite{GershmanSPM10}. However, to the best of our knowledge, there has been no prior work that designs the magnetic beamforming from a signal processing and optimization perspective for an MRC-WPT system with an arbitrary number of TXs, which thus motivates our work.\n\n\nIn this paper, as shown in Fig.~\\ref{fig:Fig1}, we consider an MRC-WPT system with a single RX and multiple TXs whose currents (or, equivalently, source voltages) can be adjusted such that the magnetic fields are constructively combined at the RX, thus achieving a magnetic beamforming gain. We formulate a problem to minimize the total power drawn from the voltage sources of all TXs by designing the currents flowing through different TXs, subject to the minimum power required by the RX load as well as the TXs' constraints on the peak voltage and current. For the special case of identical TX resistances and neglecting the TXs' constraints, the optimal current magnitude of each TX is shown to be proportional to the mutual inductance between its TX coil and the RX coil. In general, our formulated problem is a non-convex quadratically constrained quadratic programming (QCQP) problem. It is recast as a semidefinite programming (SDP) problem. We show that its semidefinite relaxation (SDR) \\cite{LuoZhangSPM10, HuangPalomar10} is tight, i.e., the optimal solution to the SDR problem is always rank-one. The optimal solution to the QCQP problem can thus be obtained via standard convex optimization methods~\\cite{ConvecOptBoyd04}. 
Numerical results show that magnetic beamforming significantly enhances the deliverable power and the WPT efficiency over the benchmark scheme of equal current allocation.\n\n\n\n\\section{System Model} \\label{system_model}\nAs shown in Fig.~\\ref{fig:Fig1}, we consider an MRC-WPT system with $N \\geq 1$ TXs each equipped with a single coil, indexed by $n$, and a single RX with a coil, indexed by $0$. Each TX $n$ is connected to a stable energy source supplying a sinusoidal voltage over time, given by $\\tilv_{ n}(t) = {\\mathrm{Re}} \\{v_{ n} e^{jwt}\\}$, with the complex $v_{ n}$ denoting an adjustable voltage and $w > 0$ denoting its operating angular frequency. Let $\\tili_{ n}(t) = {\\mathrm{Re}} \\{i_{ n} e^{jwt}\\}$ denote the steady-state current flowing through TX $n$, with the complex current $i_{ n}$. This current produces a time-varying magnetic flux in the $n$-th TX coil, which passes through the RX coil and induces a time-varying current in it. Let $\\tili_{0}(t) = {\\mathrm{Re}} \\{i_{ 0} e^{jwt}\\}$ be the steady-state current at the RX, with the complex current $i_{ 0}$.\n\nWe use $M_{n0}$ and $M_{n_1 n_2}$ to denote the mutual inductance between the $n$-th TX coil and the RX coil, and the mutual inductance between the $n_1$-th TX coil and the $n_2$-th TX coil, respectively. Each mutual inductance plays the role of a magnetic channel between a pair of coils, and is a positive or negative real number depending on the physical characteristics, the relative distance, and the orientations of the pair of coils \\cite{JadidianKatabi14}.\n\n\n\n\\begin{figure} [htbp]\n\\centering\n\\includegraphics[width=.65\\columnwidth]{Circuit_Diagram_V0923_MISO.eps}\n\\caption{System model of an MRC-WPT system.}\n\\label{fig:Fig1}\n\\end{figure}\n\n\n\nWe denote the self-inductance and the capacitance of the $n$-th TX coil (RX coil) by $L_{ n} >0$ ($L_0 > 0$) and $C_{ n} >0$ ($C_0 > 0$), respectively. The capacitors are chosen such that all TXs and the RX have the same resonant angular frequency $w$. We use $r_{ n} >0$ to denote the sum of the source resistance and the internal parasitic resistance of the $n$-th TX. The resistance of the RX, denoted by $r_{ 0} >0$, consists of the parasitic resistance $r_{ \\sft p, 0} >0$ and the load resistance $r_{ \\sft l, 0} >0$, i.e., $r_{ 0} = r_{ \\sft p, 0} + r_{ \\sft l, 0}$. The load is assumed to be purely resistive.\n\n\n\n\nIn this paper, we assume that all electrical parameters are fixed and known by the central controller. For analytical convenience, we treat the complex currents $i_{ n}$'s flowing through the TXs as design parameters\\footnote{In practice, it is more convenient to use a voltage source rather than a current source. However, by using standard circuit theory, one can easily compute the source voltages $v_{ n}$'s that generate the required currents $i_{ n}$'s.}, which are adjusted to realize magnetic beamforming.\n\n\nBy applying Kirchhoff's circuit law to the RX, we obtain its current $i_{ 0}$ as follows\n\\begin{align}\n i_{ 0} &= \\frac{jw}{r_0} \\sum \\limits_{n=1}^N M_{0n} i_{ n}. 
\\label{eq:current_receiver_q}\n\\end{align}\n\n\\noindent Define the vector of mutual inductances between the RX coil and all TX coils as $\\boldm = [M_{01} \\; M_{02} \\; \\cdots \\; $ $ M_{0N}]^H$.\nFrom \\eqref{eq:current_receiver_q}, the power delivered to the RX load is\n\\begin{align}\n p_0 &= \\frac{1}{2} |i_0|^2 r_{ \\sft l, 0} =\\frac{w^2 r_{ \\sft l, 0} }{2 r_ 0^2} {\\bi}^H \\boldm \\boldm^H {\\bi}.\n \\label{eq:load_power_vec_complex}\n\\end{align}\n\n\n\n\n\n\nBy applying Kirchhoff's circuit law to each TX $n$, we obtain its source voltage as follows\n\\begin{align}\n v_n&=r_n i_n + j w \\sum \\limits_{k=1, \\neq n}^N M_{nk} i_k - j w M_{n0} i_0 \\nonumber \\\\\n \n &= \\left(\\! r_n \\!+\\! \\frac{M_{n0}^2 w^2}{r_0} \\!\\right) i_n \\!+\\! \\sum \\limits_{k \\neq n} \\left(\\! j w M_{nk} \\!+\\! \\frac{M_{n0} M_{0k} w^2}{r_0} \\! \\right) i_k. \\label{eq:voltage_transmitter_n}\n\\end{align}\n\n\\noindent We define the $N$-order square matrix $\\bB $ as follows\n\\begin{align}\n\\bB &= {\\overline{\\bB}} + j {\\widehat{\\bB}},\\label{eq_complex_B}\n\\end{align}\nwhere the elements of the real-part matrix ${\\overline{\\bB}}$ and the imaginary-part matrix $\\widehat{\\bB}$ are given by\n\\begin{align}\n {\\overline{B}}_{nk} &=\n \\left\\{ \\begin{array}{cl}\n r_{ n} + \\frac{M_{n0}^2 w^2}{r_0} , &\\mbox{if}\\; k=n\\\\\n \\frac{M_{n0} M_{0k} w^2}{r_0}, \\qquad &\\mbox{otherwise} \\\\\n \\end{array}\n \\right. \\label{eq_complex_B_real}\\\\\n {\\widehat{B}}_{nk} &= \\left\\{ \\begin{array}{cl}\n 0, &\\mbox{if}\\; k=n\\\\\n - w M_{ nk}, \\qquad &\\mbox{otherwise} \\\\\n \\end{array}\n \\right.\\label{eq_complex_B_imaginar}\n \\end{align}\nThe matrices $\\bB , \\; {\\overline{\\bB}} $ and ${\\widehat{\\bB}}$ are symmetric, due to the fact that $M_{ nk} = M_{ kn}, \\; \\forall n \\neq k$. Denote the $n$-th column of the matrices $\\bB , \\; \\overline{\\bB} , \\; \\widehat{\\bB}$ by $\\bb_n ,\\; \\overline{\\bb}_n , \\; \\widehat{\\bb}_n$, respectively. Moreover, the matrix ${\\overline{\\bB}} $ is positive semidefinite (PSD), as it can be rewritten as follows\n \\begin{align}\n {\\overline{\\bB}} = \\bR + \\frac{w^2 \\boldm \\boldm^H}{r_0}. \\label{eq:barB}\n \\end{align}\n\n\n\\noindent The source voltage of each TX $n$ can be equivalently rewritten as\n\\begin{align}\n v_{ n}= {\\bb}_n^H \\bi. \\label{eq:voltage_transmitter_n_vector}\n\\end{align}\n\n\nFrom \\eqref{eq_complex_B} and \\eqref{eq:voltage_transmitter_n_vector}, the total power drawn from the sources of all TXs, denoted by $p$, is derived as follows\n\\begin{align\n p &=\\frac{1}{2} {\\mathrm{Re}} \\left\\{ \\sum \\limits_{n=1}^N \\bi^H {\\bb}_n i_{ n}\\right\\}\n \n \n \n = \\frac{1}{2} {{\\bi}}^H {\\overline{\\bB}} {{\\bi}}. \\label{eq:p_transmit_general_vec_complex}\n\\end{align}\n\n\\begin{myrem}\\label{remark:M_tx_p}\n From \\eqref{eq_complex_B_real} and~\\eqref{eq:p_transmit_general_vec_complex}, we observe that the total power drawn from all TXs' voltage sources depends on the mutual inductance $M_{n0}$ between the coils of each TX $n$ and the RX, but it is independent of the mutual inductance $M_{ nk}$ between any pair of TX coils.\n \n\\end{myrem}\n\n\\section{Problem Formulation}\\label{sec: formulation}\nIn this section, we formulate a problem to minimize the total power drawn from the voltage sources of all TXs by jointly designing the currents $\\bi$ flowing through TXs, subject to practical constraints of the MRC-WPT system. 
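Before turning to the constraints, we note that the circuit relations above are straightforward to evaluate numerically. The following minimal sketch (in Python) builds the matrix $\\overline{\\bB}$ and evaluates the delivered load power and the total TX power for a given current vector; the number of TXs, the resistances and the mutual inductances used below are hypothetical placeholder values, not those used later in the paper.

\\begin{verbatim}
import numpy as np

# --- illustrative (hypothetical) circuit parameters: N TXs, one RX ---
N = 4
w = 2 * np.pi * 6.78e6                        # operating angular frequency [rad/s]
r = np.full(N, 0.336)                         # TX resistances r_n [Ohm]
r_p0, r_l0 = 0.336, 50.0                      # RX parasitic / load resistances [Ohm]
r0 = r_p0 + r_l0
m = 1e-6 * np.array([1.6, 0.15, -0.03, 0.3])  # mutual inductances M_{0n} [H]
M_tx = 0.3e-6 * (np.ones((N, N)) - np.eye(N)) # TX-TX mutual inductances M_{nk} [H]

# B = Bbar + j*Bhat relates source voltages to TX currents via v_n = b_n^H i
Bbar = np.diag(r) + w**2 * np.outer(m, m) / r0   # real part (positive semidefinite)
Bhat = -w * M_tx                                 # imaginary part
B = Bbar + 1j * Bhat

def rx_current(i):       # RX current i_0 from Kirchhoff's law at the RX
    return 1j * w * (m @ i) / r0

def load_power(i):       # power p_0 delivered to the RX load
    return 0.5 * r_l0 * abs(rx_current(i))**2

def tx_power(i):         # total power p drawn from all TX voltage sources
    return 0.5 * np.real(np.vdot(i, Bbar @ i))

i = np.ones(N, dtype=complex)        # an arbitrary feasible current vector [A]
v = np.conj(B) @ i                   # source voltages v_n = b_n^H i
assert np.isclose(tx_power(i), 0.5 * np.real(np.vdot(v, i)))
print(load_power(i), tx_power(i))
\\end{verbatim}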
Particularly, we consider the following constraints: first, the power delivered to the RX load should exceed a given power level $\\beta_0 >0$, i.e., $p_0 \\geq \\beta_0$; second, the maximally allowable amplitude of the source voltage $v_{ n}$ is $V_{ n}$. i.e., $|v_n| \\leq V_n$; third, the maximally allowable amplitude of the current $i_{ n}$ is $A_{ n}$, i.e., $|i_n| \\leq A_n$. Let $\\bQ_n$ be the rank-one matrix with the $n$-th diagonal element as one and all other elements as zero. From \\eqref{eq:load_power_vec_complex}, \\eqref{eq:voltage_transmitter_n_vector} and \\eqref{eq:p_transmit_general_vec_complex}, the problem is thus formulated as follows\n\\begin{subequations}\\label{eq:optimP1}\n\\begin{align}\n \\mathrm{(P1)}: \\ \\ \\ &\\underset{ {\\bi} \\in \\calC^N }{\\text{min}} \\ \\\n \\frac{1}{2} {{\\bi}}^H {\\overline{\\bB}} {{\\bi}} \\label{eq:rewardP1} \\\\\n \\quad \\text{s. t.} \\ \\\n &\\frac{w^2 r_{ \\sft l, 0} }{2 r_0^2} {\\bi}^H \\boldm \\boldm^H {\\bi} \\geq \\beta_0 \\label{eq:const1P1} \\\\\n &\\bi^H {\\bb}_n {\\bb}_n^H \\bi \\leq V_{ n}^2, \\;\\; n=1, \\; 2, \\; \\cdots \\; N \\label{eq:const2P1} \\\\\n &\\bi^H \\bQ_n \\bi \\leq A_{ n}^2, \\;\\; n=1, \\; 2, \\; \\cdots \\; N \\label{eq:const3P1}\n \n\\end{align}\n\\end{subequations}\n$\\mathrm{(P1)}$ is a complex-valued non-convex QCQP problem \\cite{ConvecOptBoyd04}. Although solving such a problem is nontrivial in general \\cite{LuoZhangSPM10, HuangPalomar10}, we obtain the optimal solution to $\\mathrm{(P1)}$ in Section~\\ref{sec: solution}.\n\n\\section{Optimal Solutions}\\label{sec: solution}\n\\subsection{Optimal Solution to ($\\mathrm{P1}$) without Constraints~\\eqref{eq:const2P1} and~\\eqref{eq:const3P1}}\nIn this subsection, we consider the simplified ($\\mathrm{P1}$) only with delivered load power constraint~\\eqref{eq:const1P1} but without voltage and current constraints given in \\eqref{eq:const2P1} and \\eqref{eq:const3P1}, respectively, to get useful insights on magnetic beamforming. We observe that the real-part currents $\\bar{\\bi}$ and the imaginary-part currents $\\hat{\\bi}$ contribute in the same way to the total TX power in \\eqref{eq:rewardP1} and the delivered load power in \\eqref{eq:const1P1}, since both $\\overline{\\bB}$ and $\\boldm \\boldm^H$ are symmetric matrices. As a result, we can set $\\hat{\\bi}=\\mathbf{0}$ and design $\\bar{\\bi}$ only, i.e., we need to solve\n\\begin{subequations}\\label{eq:optimP2}\n\\begin{align}\n \\mathrm{(P2)}: \\ \\ \\ \\underset{ \\bar{\\bi} \\in \\bbR^N}{\\text{min}} \\ \\\n & \\frac{1}{2} {\\bar{\\bi}}^H {\\overline{\\bB}} {\\bar{\\bi}} \\label{eq:rewardP2} \\\\\n \\quad \\text{s. 
t.} \\ \\\n &\\frac{w^2 r_{ \\sft l, 0} }{2 r_0^2} \\bar{\\bi}^H \\boldm \\boldm^H \\bar{\\bi} \\geq \\beta_0 \\label{eq:const1P2}\n\\end{align}\n\\end{subequations}\nThe optimal solution to $\\mathrm{(P2)}$ is given as follows.\n\\begin{mythe}\\label{the:solution_specialP2}\nThe optimal solution to $\\mathrm{(P2)}$ is ${\\bar{\\bi}}^{\\star} = \\alpha \\bu_1$, where $\\alpha$ is a constant such that the constraint \\eqref{eq:const1P2} holds with equality, and $\\bu_1$ is the eigenvector associated with the minimum eigenvalue, denoted by $\\gamma_1$, of the matrix\n\\begin{align}\n \\bT=\\bR + \\frac{w^2 (r_0 -v^{\\star} r_{ \\sft l, 0})}{r_0^2} \\boldm \\boldm^H, \\label{eq:solution_wo_constraint}\n\\end{align}\nwhere $v^{\\star}$ is chosen such that $\\gamma_1 =0$.\n\nSpecifically, for the case of identical TX resistances, i.e., $\\bR=r \\bI$,\nthe optimal currents to ($\\mathrm{P2}$) is\n \\begin{align}\n {\\bar{\\bi}}^{\\star} = \\frac{\\alpha \\boldm}{\\| \\boldm \\|_2}\\label{eq:solution_wo_constraint_common_r}.\n \\end{align}\n\\end{mythe}\n\n\n\\begin{proof}\nWe construct the Lagrange function of $\\mathrm{(P2)}$ as\n\\begin{align}\n L(\\bar{\\bi}, v) &= \\frac{1}{2} {\\bar{\\bi}}^H {\\overline{\\bB}} {\\bar{\\bi}} + v \\left( \\beta_0 - \\frac{w^2 r_{ \\sft l, 0} }{2r_0^2} \\bar{\\bi}^H \\boldm \\boldm^H \\bar{\\bi}\\right) \\label{eq:Lagrangian}\n\\end{align}\n\nThen, the Lagrange dual function is given by\n\\begin{align}\nL(v) &= \\beta_0 v + \\underset{{\\bar{\\bi}}}{\\inf} \\;\\left( \\frac{1}{2} {\\bar{\\bi}}^H {\\overline{\\bB}} {\\bar{\\bi}} - \\frac{w^2 r_{ \\sft l, 0} v }{2r_0^2} \\bar{\\bi}^H \\boldm \\boldm^H \\bar{\\bi} \\right) \\nonumber \\\\\n &= \\beta_0 v \\!+\\! \\underset{{\\bar{\\bi}}}{\\inf} \\;\\left( \\frac{1}{2} {\\bar{\\bi}}^H \\left(\\bR + \\frac{w^2 \\boldm \\boldm^H}{r_{ \\sft l, 0} + r_{ \\sft p, 0}}\\right) {\\bar{\\bi}} \\!-\\! \\frac{w^2 r_{ \\sft l, 0} v }{2r_0^2} \\bar{\\bi}^H \\boldm \\boldm^H \\bar{\\bi} \\right) \\nonumber \\\\\n \n \n &= \\beta_0 v \\!+\\! \\underset{{\\bar{\\bi}}}{\\inf} \\;\\frac{1}{2} {\\bar{\\bi}}^H \\! \\left( \\bR \\!+\\! \\frac{w^2 ((1 \\!-\\! v) r_{ \\sft l, 0} \\!+\\! r_{ \\sft p, 0})}{r_0^2} \\boldm \\boldm^H \\right) \\bar{\\bi}. \\label{eq:LagrangeDual_1}\n\\end{align}\nTo obtain the best lower bound on the optimal objective value of ($\\mathrm{P2}$), the dual variable $v$ should be optimized to maximize the Lagrange dual~\\eqref{eq:LagrangeDual_1}. For dual feasibility, the Lagrange dual function~\\eqref{eq:LagrangeDual_1} should be bounded below. For convenience, we denote the following singular-value-decomposition (SVD)\n\\begin{align}\n \\bR + \\frac{w^2 ((1-v) r_{ \\sft l, 0} + r_{ \\sft p, 0})}{r_0^2} \\boldm \\boldm^H = \\bU \\bGamma \\bU^H, \\label{eq:SVD_specialP3}\n\\end{align}\nwhere the matrix $\\bU = [\\bu_1 \\; \\bu_2 \\; \\cdots \\; \\bu_N]$ is orthogonal, and $\\bGamma = \\diag\\{\\gamma_1, \\cdots, \\gamma_N\\}$, with $\\gamma_1 \\leq \\gamma_2 \\leq \\cdots \\leq \\gamma_N$. For the case of arbitrary transmitter resistances, the Lagrangian in \\eqref{eq:Lagrangian} is bounded below in $\\bar{\\bi}$ and the Lagrange dual function \\eqref{eq:LagrangeDual_1} is maximized, only when $v$ is chosen as $v^{\\star}$ such that $\\gamma_1 =0$.\n\nMoreover, we observe that the objective \\eqref{eq:rewardP2} is minimized when the constraint \\eqref{eq:const1P2} holds with equality, since both $\\overline{\\bB}$ and the matrix $\\boldm \\boldm^H$ are PSD. 
Hence, the optimal current can be written as ${\\bar{\\bi}}^{\\star} = \\alpha \\bu_1$, where $\\bu_1$ is the eigenvector associated with the eigenvalue $\\gamma_1 =0$, and $\\alpha$ is a constant such that the constraint \\eqref{eq:const1P2} holds with equality.\n\nFor the case of identical transmitter resistance, i.e., $\\bR = r \\bI_N$,\nfrom the isometric property of the identity matrix $\\bI_N$, the diagonal matrix $\\bGamma$ is given by\n\\begin{align}\n\\bGamma &= \\diag \\left\\{ r + \\frac{w^2 ((1-v) r_{ \\sft l, 0} + r_{ \\sft p, 0})}{r_0^2}, \\; r,\\; \\cdots, \\; r\\right\\}\n\\nonumber\n\\end{align}\nand the eigenvector $\\bu_1 = \\frac{\\boldm}{\\| \\boldm \\|_2}$, and $\\bu_n, \\; \\forall n \\geq 2$, are arbitrarily orthogonal vectors constructed by methods such as Gram$-$Schmidt method. {It is standard to show that the Lagrangian in \\eqref{eq:Lagrangian} is bounded below in $\\bar{\\bi}$ and the Lagrange dual function \\eqref{eq:LagrangeDual_1} is maximized, only when $v$ is chosen such that the first eigenvalue is zero}, i.e., the optimal dual variable is\n\\begin{align}\nv^{\\star} = 1 + \\frac{r r_0^2 }{r_{ \\sft l, 0} w^2} + \\frac{r_{ \\sft p, 0}}{r_{ \\sft l, 0}},\n\\end{align}\nand the optimal current is given by\n\\begin{align}\n{\\bar{\\bi}}^{\\star} = \\frac{\\alpha \\boldm}{\\| \\boldm \\|_2}, \\label{eq:opt_current_wpecial}\n\\end{align}\nwhere $\\alpha$ is a constant such that \\eqref{eq:const1P2} holds with equality.\n\n\\end{proof}\n\nIn fact, all TX currents can be rotated by arbitrarily common phase.\n\\begin{myrem}\n Theorem \\ref{the:solution_specialP2} implies that for the case of identical TX resistances, the optimal current magnitude of each TX $n$ is proportional to the mutual inductance $M_{0n}$ between the RX and TX $n$. This is analogous to the traditional maximum-ratio-transmission (MRT) beamforming in wireless communications~\\cite{GershmanSPM10}. The magnetic beamforming also differs from the traditional beamforming in wireless communications. The former operates over the near-field magnetic flux, and only the TX current magnitudes are adjusted according to the real magnetic channels (i.e., positive or negative mutual inductances)~\\cite{JadidianKatabi14}; while the latter operates over the far-field propagating waves, and the amplitude as well as the phase of the signal at each TX are adjusted according to complex wireless channels including amplitude fluctuation and phase shift due to prorogation delay~\\cite{GoldsmithWC2005}.\n \n\\end{myrem}\n\n\n\n\\subsection{Optimal Solution to ($\\mathrm{P1}$)}\nDefine $\\bX \\triangleq \\bi \\bi^H$, $\\bM \\triangleq \\boldm \\boldm^H$, and $\\bB_n \\triangleq \\bb_n \\bb_n^H$. Thus, $\\mathrm{(P1)}$ can be equivalently rewritten as the following SDP problem\n\\begin{subequations}\\label{eq:optimP2-SDP}\n\\begin{align}\n (\\mathrm{P1}\\mathrm{-SDP}): \\ \\ \\ &\\underset{ \\bX \\in \\bbC^{N \\times N}}{\\text{min}} \\ \\\n \\frac{1}{2} \\Tr \\left( {\\overline{\\bB}} \\bX \\right) \\label{eq:rewardP1SDP} \\\\\n \\quad \\text{s. 
t.} \\ \\\n &\\Tr \\left( {\\bM} \\bX \\right) \\geq \\frac{2 r_0^2 \\beta_0}{w^2 r_{ \\sft l, 0} } \\label{eq:const1P1SDP} \\\\\n &\\Tr \\left( {\\bB_n} \\bX \\right) \\leq V_{ n}^2, \\;\\; n=1, \\; \\cdots \\; N \\label{eq:const2P1SDP}\n \\\\\n &\\Tr \\left( \\bQ_n \\bX \\right) \\leq A_{ n}^2, \\;\\; n=1, \\; \\cdots \\; N \\label{eq:const3P1SDP} \\\\\n & \\bX \\succcurlyeq 0, \\; \\rank \\left(\\bX\\right)=1 \\label{eq:const4P1SDP}\n\\end{align}\n\\end{subequations}\nwhere $\\bX \\succcurlyeq 0$ indicates that $\\bX$ is PSD.\n\n\n\n\nAs is well known, the rank constraint in \\eqref{eq:const4P1SDP} is non-convex \\cite{LuoZhangSPM10,HuangPalomar10}. By ignoring the rank-one constraint in~\\eqref{eq:const4P1SDP}, we obtain the SDR of $(\\mathrm{P1}\\mathrm{-SDP})$, denoted by $(\\mathrm{P1}\\mathrm{-SDR})$, which is convex. Moreover, we have the following theorem on $(\\mathrm{P1}\\mathrm{-SDR})$ and the optimal solution to $(\\mathrm{P1})$.\n\\begin{mythe}\\label{the:rank-one}\n The SDR of $(\\mathrm{P1}\\mathrm{-SDP})$ is tight, i.e., the optimal solution $\\bX^{\\star}$ to $(\\mathrm{P1}\\mathrm{-SDR})$ is always rank-one (i.e., $\\bX^{\\star} = \\bi^{\\star} \\left(\\bi^{\\star}\\right)^H$). The optimal solution to $(\\mathrm{P1})$ is $\\bi^{\\star}$.\n\\end{mythe}\n\n\\begin{proof}\nLet $\\lambda \\geq 0, \\ \\bm{\\rho}=(\\rho_1, \\; \\cdots, \\rho_N)^T, \\ \\forall \\rho_n \\geq 0, \\ \\bm{\\mu} =(\\mu_1, \\cdots, \\mu_N), \\ \\forall \\mu_n \\geq 0,$ be the dual variables corresponding to the constraints given in~\\eqref{eq:const1P1},~\\eqref{eq:const2P1}, and~\\eqref{eq:const3P1}, respectively. Let the matrix $\\bS \\succcurlyeq 0$ be the dual variable corresponding to the constraint $\\bX \\succcurlyeq 0$ in~\\eqref{eq:const4P1SDP}. The Lagrangian of $(\\mathrm{P1}\\mathrm{-SDR})$ is then written as\n\\begin{align}\n L(\\bX, \\lambda, \\bm{\\rho}, \\bm{\\mu}, \\bS) &= \\frac{1}{2} \\Tr \\left( {\\overline{\\bB}} \\bX \\right) - \\lambda \\left( \\Tr \\left( {\\bM} \\bX \\right) - \\frac{2 r_0^2 \\beta_0}{w^2 r_{ \\sft l, 0} }\\right) \\!+ \\nonumber \\\\\n &\\quad \\!\\! \\sum \\limits_{n=1}^N \\! \\rho_n \\! \\left( \\Tr \\left( {\\bB_n} \\bX \\right) \\!-\\! V_{ n}^2 \\right) \\!+\\! \\! \\sum \\limits_{n=1}^N \\! \\mu_n \\! \\left( \\Tr \\left( {\\bQ_n} \\bX \\right) \\!-\\! A_{ n}^2 \\right) \\!-\\! \\Tr \\left( \\bS \\bX \\right).\n\\end{align}\n\nLet $\\bX^{\\star}, \\lambda^{\\star}, \\bm{\\rho}^{\\star}, {\\bm{\\mu}^{\\star}}$, and $\\bS^{\\star}$ be the optimal primal variables and dual variables, respectively. Moreover, the Karush$-$Kuhn$-$Tucker (KKT) conditions are given by\n\\begin{align}\n \\nabla_{\\bX} L(\\bX^{\\star}, \\lambda^{\\star}, \\bm{\\rho}^{\\star}, \\bm{\\mu}^{\\star}, \\bS^{\\star}) &= \\frac{1}{2} {\\overline{\\bB}} -\\lambda^{\\star} \\bM + \\sum \\limits_{n=1}^N \\rho_n^{\\star} {\\bB_n} + \\sum \\limits_{n=1}^N \\mu_n^{\\star} {\\bQ_n} - \\bS^{\\star} = \\bm{0}. \\label{eq:KKT_orthgonality} \\\\\n \\bS^{\\star} \\bX^{\\star} &= \\bm{0}. \\label{eq:KKT_complementarity}\n\\end{align}\n\nNext, by post-multiplying \\eqref{eq:KKT_orthgonality} by $\\bX^{\\star}$ and substituting \\eqref{eq:KKT_complementarity} into the resulting equation, we obtain\n\\begin{align}\n \\frac{1}{2} {\\overline{\\bB}} \\bX^{\\star} -\\lambda^{\\star} \\bM \\bX^{\\star} + \\sum \\limits_{n=1}^N \\rho_n^{\\star} {\\bB_n} \\bX^{\\star} + \\sum \\limits_{n=1}^N \\mu_n^{\\star} {\\bQ_n} \\bX^{\\star}= \\bm{0}. 
\\label{eq:KKT_orthgonality2}\n\\end{align}\n\n\\noindent We further have\n\\begin{align}\n\\rank \\left( \\left( \\frac{1}{2} {\\overline{\\bB}}+ \\sum \\limits_{n=1}^N \\rho_n^{\\star} {\\bB_n} + \\sum \\limits_{n=1}^N \\mu_n^{\\star} {\\bQ_n} \\right) \\bX^{\\star} \\right) =\\rank \\left( \\bM \\bX^{\\star}\\right) \\leq \\rank \\left( \\bM \\right) =1. \\label{eq:KKT_orthgonality3}\n\\end{align}\nSince $\\overline{\\bB}$ is PSD, the matrix $\\left(\\! \\frac{1}{2} {\\overline{\\bB}} \\!+\\! \\sum \\limits_{n=1}^N \\rho_n^{\\star} {\\bB_n} \\!+\\! \\sum \\limits_{n=1}^N \\mu_n^{\\star} {\\bQ_n} \\! \\right)$ has full rank. Hence, the relation \\eqref{eq:KKT_orthgonality3} implies that\n\\begin{align}\n \\! \\! \\! \\rank \\left( \\bX^{\\star} \\right) \\!= \\! \\rank \\! \\left( \\! \\left(\\! \\frac{1}{2} {\\overline{\\bB}} \\!+\\! \\sum \\limits_{n=1}^N \\rho_n^{\\star} {\\bB_n} \\!+\\! \\sum \\limits_{n=1}^N \\mu_n^{\\star} {\\bQ_n} \\!\\right) \\bX^{\\star} \\! \\right) \\!=\\! 1. \\label{eq:KKT_orthgonality4}\n\\end{align}\nSince $\\bX^{\\star}$ is rank-one, it can be written as $\\bX^{\\star} = \\bi^{\\star} \\left(\\bi^{\\star}\\right)^H$. The vector $\\bi^{\\star}$ is thus the solution to $(\\mathrm{P1})$. This completes the proof.\n\\end{proof}\n\n\n\n\\begin{myrem}\nAs a convex problem, $(\\mathrm{P1}\\mathrm{-SDR})$ can be polynomially solved via an interior-point method~\\cite{ConvecOptBoyd04}, to arbitrary accuracy. The optimal solution to $(\\mathrm{P1})$ is directly obtained from the optimal solution to $(\\mathrm{P1}\\mathrm{-SDR})$, without any postprocessing required.\n\\end{myrem}\n\n\\section{Numerical Results}\\label{sec: simulation}\nAs shown in Fig.~\\ref{fig:Fig2}, we consider the setup with $N=5$ TX coils and one RX coil, each of which has 100 turns and a radius of 0.1 meter. We use cooper wire with radius of 0.1 millimeter for all coils. All the coils are horizontal with respect to the $xy$-plane. The total resistance of each TX is the same as $0.336 \\Omega$. For the RX, its parasitic resistance and load resistance are $r_{ \\sft {p}, 0} =0.336 \\Omega$ and $r_{\\sft{l},0}=50 \\Omega$, respectively. The self and mutual inductances are given in Table~\\ref{table_inductance}. All the capacitors are chosen such that the resonance angular frequency is $w=6.78 \\times 2 \\pi$ rad\/second \\cite{RezaZhang:C15}. We assume that the constraint thresholds $V_n=30\\sqrt{2}$~V and $A_{n}=5\\sqrt{2}$~A.\n\\begin{figure} [htbp]\n\\centering\n\\includegraphics[width=.65\\columnwidth]{System_Setup_V0923_MISO.eps}\n\\caption{System setup\n\\label{fig:Fig2}\n\\end{figure}\n\\begin{table} [htbp]\n\\centering\n\\caption{Mutual\/Self inductances ($\\mu$H)} \\label{table_inductance}\n\\small{\n\\begin{tabular}{*{6}{c}}\n \\hline \\hline\n & TX 1 & TX 2 & TX 3 & TX 4 & TX 5\\\\\n \\hline\n TX 1 & 5886.8 & 0.3565 & 0.1253 & 0.3565 & 0.2984 \\\\\n TX 2 & 0.3565 & 5886.8 & 0.3565 & 0.1253 & 0.2984 \\\\\n TX 3 & 0.1253 & 0.3565 & 5886.8 & 0.3565 & 0.2984 \\\\\n TX 4 & 0.3565 & 0.1253 & 0.3565 & 5886.8 & 0.2984 \\\\\n TX 5 & 0.2984 & 0.2984 & 0.2984 & 0.2984 & 5886.8 \\\\\n RX & 1.6121 & 0.00781 & -0.0296 & 0.00781 & 0.1508 \\\\\n \n \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\nFor comparison, we consider an uncoordinated benchmark scheme of equal current allocation, i.e., all TXs carry the same in-phase current which can be adjusted. 
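The optimal currents used in the comparisons below are obtained by solving the relaxed problem $(\\mathrm{P1}\\mathrm{-SDR})$. As an illustration, the following sketch solves it with an off-the-shelf modeling package (here CVXPY; this particular tool is our own choice and is not prescribed by the paper), assuming the circuit quantities $\\overline{\\bB}$, $\\bB$, $\\boldm$ and the thresholds are already available, e.g., from the previous sketch.

\\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_P1_SDR(Bbar, B, m, w, r0, r_l0, beta0, V, A):
    """Solve the SDR of (P1-SDP) and recover the (rank-one) optimal currents.
    V and A are the arrays of per-TX voltage and current limits V_n, A_n."""
    N = len(m)
    Mmat = np.outer(m, m)                              # m m^H (m is real)
    X = cp.Variable((N, N), hermitian=True)
    cons = [X >> 0,
            cp.real(cp.trace(Mmat @ X)) >= 2 * r0**2 * beta0 / (w**2 * r_l0)]
    for n in range(N):
        bn = B[:, n]
        Bn = np.outer(bn, bn.conj())                   # b_n b_n^H
        Qn = np.zeros((N, N)); Qn[n, n] = 1.0
        cons += [cp.real(cp.trace(Bn @ X)) <= V[n]**2,  # peak-voltage constraint
                 cp.real(cp.trace(Qn @ X)) <= A[n]**2]  # peak-current constraint
    prob = cp.Problem(cp.Minimize(0.5 * cp.real(cp.trace(Bbar @ X))), cons)
    prob.solve()
    # The SDR is tight (rank-one optimizer), so i* is its leading eigenvector.
    vals, vecs = np.linalg.eigh(X.value)
    i_opt = np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
    return i_opt, prob.value

# usage (with Bbar, B, m, w, r0, r_l0 from the previous sketch):
#   i_opt, p_min = solve_P1_SDR(Bbar, B, m, w, r0, r_l0, beta0=60.0,
#                               V=np.full(len(m), 30*np.sqrt(2)),
#                               A=np.full(len(m), 5*np.sqrt(2)))
\\end{verbatim}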
In particular, we compare the WPT performance for the case with TXs' constraints on the peak voltage and current, and for the case without TXs' constraints. We define the efficiency of WPT as the ratio of the delivered load power $\\beta_0$ to the total TX power $p$, i.e., $\\eta \\triangleq \\frac{\\beta_0}{p}$.\n\n\nFig.~\\ref{fig:Fig_A_fixedR} plots the total TX power $p$ and the efficiency $\\eta$ versus the delivered load power $\\beta_0$. All curves show the feasible and optimal values. For the case without TXs' constraints, we observe that the WPT efficiencies by using magnetic beamforming and by using the benchmark are 87.1$\\%$ and 62.0$\\%$, respectively. Hence, the gain of magnetic beamforming is an increase of the WPT efficiency by 25.1 percentage points.\n\nFor the case with TXs' constraints, we observe that magnetic beamforming can deliver up to 73.6~W to the RX load with an efficiency of 74.1$\\%$, while the benchmark can deliver at most 36.0~W to the RX load with an efficiency of 62.0$\\%$. That is, the deliverable power is enhanced by 104.4$\\%$ by using magnetic beamforming, over the uncoordinated benchmark of equal current allocation. Thus, besides the efficiency improvement, another important benefit of magnetic beamforming is the enhancement of the deliverable power, under practical TX constraints.\n\n\n\\begin{figure} [htbp]\n\\centering\n\\includegraphics[width=.65\\columnwidth]{f0921_A_load50.eps}\n\\caption{Power and efficiency vs. RX load power $\\beta_0$.}\n\\label{fig:Fig_A_fixedR}\n\\end{figure}\n\nFig.~\\ref{fig:Fig_A_fixedR} also shows that the WPT efficiency decreases for $60.5 < \\beta_0 < 73.6$~W. To explain the decreasing efficiency and obtain insights into magnetic beamforming, we investigate the cases of $\\beta_0=60$~W and 73.5~W in the following. The optimal currents and powers of all TXs are given in Table~\\ref{table_solution}. For $\\beta_0=60$~W, TX 1 carries almost the peak current, and most of the power is drawn by TX 1, which has the largest mutual inductance with the RX. This implies that, to achieve high WPT efficiency, the TXs with larger mutual inductances with the RX should carry higher currents. For $\\beta_0=60$~W, all TX constraints are inactive, and it can be further checked that the current of each TX is proportional to its mutual inductance with the RX. This verifies \\eqref{eq:solution_wo_constraint_common_r} in Theorem \\ref{the:solution_specialP2}. To support the higher RX load power of 73.5~W, we observe that TX 1, 3 and 5 carry the (maximally allowable) peak current, and TX 2 as well as TX 4 increase their carried currents. 
The cost is a decreased overall efficiency, since the mutual inductances between TXs 2, 3, 4, 5 and the RX are smaller than that between TX 1 and the RX.\n\n\\begin{table} [htbp]\n\\centering\n\\caption{Optimal results for $\\beta_0=60$ and $73.5$~W (currents in A, powers in W).} \\label{table_solution}\n\\small{\n\\begin{tabular}{*{3}{c}}\n \\hline \\hline\n & $\\beta_0=60$ & $\\beta_0=73.5$\\\\\n \\hline\n $(i_1^{\\star}, p_1^{\\star})$ & $(-7.0698, 68.25)$ & $(-7.071, 74.66)$ \\\\\n $(i_2^{\\star}, p_2^{\\star})$ & $(-0.0342, 0.0016)$ & $(-3.499, 2.22)$ \\\\\n $(i_3^{\\star}, p_3^{\\star})$ & $(0.1296, 0.023)$ & $(-7.071, 9.62)$ \\\\\n $(i_4^{\\star}, p_4^{\\star})$ & $(-0.0342, 0.0016)$ & $(-3.499, 2.22)$\\\\\n $(i_5^{\\star}, p_5^{\\star})$ & $(-0.6617, 0.598)$ & $(-7.071, 14.60)$\\\\\n \\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\n\\section{Conclusion}\nThis paper has studied the optimal magnetic beamforming for an MRC-WPT system with multiple TXs and a single RX. We formulated an optimization problem to minimize the total power drawn from the voltage sources of all TXs by jointly designing the currents flowing through different TXs, subject to the RX constraint on the required load power and the TXs' constraints on the peak voltage and current. For the special case of identical TX resistances and neglecting TXs' constraints, the optimal current of each TX is shown to be proportional to the mutual inductance between its TX coil and the RX coil. In general, the formulated non-convex QCQP problem is recast as an SDP problem. Its SDR is shown to be tight. Numerical results show that magnetic beamforming significantly enhances the deliverable power and the WPT efficiency, compared to the uncoordinated benchmark scheme of equal current allocation. Furthermore, it is numerically shown that it is favorable for TXs with large mutual inductances with the RX to carry high currents, in order to achieve high WPT efficiency.\n\n\n\n\n\n\\renewcommand{\\baselinestretch}{1.45}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
The realization of synthetic lattice systems has allowed for the experimental implementation of Thouless pumps and for the observation of the related quantized motion, in both photonics~\\cite{kraus2012topological,verbin2015topological,zilberberg2018photonic,grinberg2020robust,cerjan2020thouless,ke2016topological} and ultracold gases setups~\\cite{lohse2016thouless,nakajima2016topological}.\n\nInterestingly, synthetic topological systems~\\cite{ozawa2019topological,cooper2019topological} can operate beyond the linear regime of the Schr\u00f6dinger equation, hence opening the door to nonlinear topological physics~\\cite{smirnova2020nonlinear}. In this emerging framework, a central topic concerns the possible interplay between nonlinear excitations, known as solitons, and the underlying topological band structure~\\cite{leykam2016edge,solnyshkov2017chirality,st2017lasing,bisianov2019stability,ivanov2020vector,gonzalez2020dynamical,mukherjee2020observation,mukherjee2020observation2,xia2020nontrivial,pernet2021topological,mittal2021topological,kirsch2021nonlinear}. Interestingly, exact correspondences between topological indices and nonlinear modes have been identified in mechanical systems~\\cite{lo2021topology} and for the Korteweg-de-Vries equation of fluid dynamics~\\cite{oblak2020berry}, hence allowing for a formal topological classification of nonlinear excitations in certain special cases. In the context of nonlinear topological photonics, a recent experimental study reported on the quantized motion of solitons in a lattice system undergoing a Thouless pump sequence~\\cite{jurgensen2021quantized}.\nDespite the presence of considerable nonlinearity, these observations suggest that the quantization of the solitons displacement is dictated by the Chern numbers of the underlying band structure.\n\n\nIt is the aim of this work to elucidate and explore the quantized transport of solitons in nonlinear topological Thouless pumps. Inspired by the experiment of Ref.~\\cite{jurgensen2021quantized}, we address this topic by considering a general class of one-dimensional systems described by the discrete nonlinear Schr\\\"odinger equation (DNLS)~\\cite{Sulem, kevrekidis2009discrete}. In the regime of weak nonlinearity, solitons are known to predominantly occupy a single Bloch band~\\cite{kevrekidis2009discrete}; following Ref.~\\cite{alfimov2002wannier}, we represent the solitons in terms of maximally localized Wannier functions, defined in the corresponding Bloch band. In this Wannier representation, the adiabatic motion of the soliton can be deduced from an ordinary (scalar) DNLS; from this, we show that the quantized motion of the soliton is directly related to the quantized displacement of Wannier functions upon pumping, which is known to be set by the Chern number of the band~\\cite{asboth2016short,mei2014topological,lohse2016thouless}. This general approach allows us to mathematically demonstrate the topological nature of nonlinear Thouless pumps, by relating the quantized motion of solitons to the Chern number of the underlying Bloch band, see Figs.~\\ref{Fig_sketch}(a)-(b). More generally, these developments introduce a theoretical framework by which a broad class of nonlinear topological phenomena can be formulated in terms of topological band indices. \n\n\n\nWe then broaden the scope by applying this nonlinear topological framework to the realm of quantum gases. 
By considering an instructive mapping to a Bose-Bose atomic mixture on a lattice~\\cite{bloch2008many,chin2010feshbach,1D_mixtures}, which encompasses the aforementioned DNLS as its semiclassical limit, we identify a scenario by which a topological pump emerges from inter-particle interaction processes:~a soliton of impurity atoms is dragged by the driven majority atoms, hence leading to interaction-induced topological transport; see Fig.~\\ref{Fig_sketch}(c). This intriguing phenomenon, which could be implemented in ultracold atomic mixtures in optical lattices~\\cite{bloch2008many,chin2010feshbach,1D_mixtures}, is reminiscent of topological polarons~\\cite{grusdt2016interferometric,grusdt2019topological,camacho2019dropping,de2020anyonic,pimenov2021topological,PhysRevB.104.035133}, in the sense that impurities inherit the topological properties of their environment through genuine interaction processes.\n\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=6.7cm]{Fig_sketch_new.pdf}\n\\vspace{-0.25cm}\n\\caption{(a) In a Thouless pump, the Wannier functions perform a quantized drift established by the Chern number of the corresponding band. (b) In a nonlinear setting, the motion of a soliton follows the quantized Wannier drift. (c) In a Bose-Bose atomic mixture, quantized pumping can be induced by interactions :~a soliton of impurity atoms is dragged by the driven majority atoms, leading to interaction-induced topological transport.}\n\\label{Fig_sketch}\n\\end{figure}\n\n\n\\subsection{Topological pumps of solitons:~General theory} \n\nOur theoretical framework concerns a generic class of lattice models governed by the DNLS,\n\\begin{equation}\\label{DNLS_1}\n i \\partial_t \\phi_{i, \\alpha} = \\sum_{j, \\beta} \\, H_{i j}^{\\alpha \\beta}(t) \\, \\phi_{j, \\beta} - g \\, |\\phi_{i, \\alpha}|^2 \\phi_{i, \\alpha} \\, ,\n\\end{equation}\nwhere the field $\\phi_{i, \\alpha}$ is defined at the lattice site $\\alpha$ of the $i$th unit cell; $H(t)$ is a time-dependent Hamiltonian matrix, which includes a Thouless pump sequence~\\cite{PhysRevB.27.6083,asboth2016short}; and $g\\!>\\!0$ is the (onsite) nonlinearity strength. Equation~\\eqref{DNLS_1} preserves the norm of the field, which we set to \\(\\sum_{\\alpha, i} \\, |\\phi_{i, \\alpha}|^2\\!=\\!1\\), without loss of generality.\n\nAn illustrative model, used below to validate the general theory, is provided by the two-band Rice-Mele model~\\cite{lohse2016thouless}:~a 1D chain with alternating couplings $J_{1,2}(t)$ and staggered potential $\\pm \\Delta(t)$ (Appendix). Considering the \\emph{nonlinear} Rice-Mele model, Eq.~\\eqref{DNLS_1} takes the more explicit form \n\\begin{align}\n&i \\partial_t \\phi_{i,1}=J_1(t) \\phi_{i,2} + J_2(t) \\phi_{i-1,2}+\\Delta(t)\\phi_{i,1} - g \\, |\\phi_{i, 1}|^2 \\phi_{i, 1} \\notag , \\\\\n&i \\partial_t \\phi_{i,2}=J_1(t) \\phi_{i,1} + J_2(t) \\phi_{i+1,1}-\\Delta(t)\\phi_{i,2} - g \\, |\\phi_{i, 2}|^2 \\phi_{i, 2} \\label{Rice-Mele_NLSE}.\n\\end{align}\nHere, the Thouless pump cycle corresponds to a loop in the parameter space spanned by $(J_2-J_1)$ and $\\Delta$, which encircles the origin $(J_1\\!=\\!J_2, \\Delta=0)$; see inset of Fig.~\\ref{Fig_two} and Appendix. When $g\\!=\\!0$, the Bloch bands defined in momentum-time space are associated with a Chern number $C\\!=\\!\\pm1$. 
This topological invariant is known to determine the quantized displacement for a filled band upon each cycle of the pump~\\cite{asboth2016short}.\n\nOur analysis starts by studying the adiabatic evolution associated to the general Eq.~\\eqref{DNLS_1}, which is characterized by the period of the pump $T$ (exceeding all other time scales). To simplify notations, we use the multi-index $\\vc{i}\\!=\\!(i,\\alpha)$ and write $H_{\\vc{i}\\vc{j}}\\!\\equiv\\!H_{i j}^{\\alpha \\beta}(t)$. Introducing the \\emph{adiabatic time} \\(s\\!=\\!t\/T\\), Eq.~\\eqref{DNLS_1} takes the form \\(i \\varepsilon \\partial_s \\phi_{\\vc{i}} = \\sum_{\\vc{j}} \\, H_{\\vc{i}\\vc{j}}(s) \\, \\phi_{\\vc{j}} - g |\\phi_{\\vc{i}}|^2 \\phi_{\\vc{i}}\\), where \\(\\varepsilon=1\/T\\). The solutions to the adiabatic DNLS can be well approximated by stationary states of the form \\(\\phi_{\\vc{i}} \\propto e^{-i \\theta_s} \\, \\varphi_{\\vc{i}}\\), where \\(\\theta_s\\) is a time-dependent phase factor and \\(\\varphi_{\\vc{i}}\\) is an instantaneous solution to the stationary nonlinear Schr\\\"{o}dinger equation (see Appendix and Refs.~\\cite{gang2017adiabatic,carles2011semiclassical})\n\\begin{equation}\\label{DNLS_inst_1}\n \\mu_{s} \\, \\varphi_{\\vc{i}} = \\sum_{\\vc{j}} \\, H_{\\vc{i}\\vc{j}}(s) \\, \\varphi_{\\vc{j}} - g \\, |\\varphi_{\\vc{i}}|^2 \\varphi_{\\vc{i}} \\, ,\n\\end{equation}\nwhere the instantaneous eigenvalue $\\mu_{s}$ explicitly depends on the adiabatic time $s$. \n\nEquation~\\eqref{DNLS_inst_1} admits (bright) solitons as stationary state solutions, which are stable localized structures in the bulk. For sufficiently weak nonlinearity, solitons predominantly occupy the band from which they bifurcate~\\cite{szameit2010discrete}, while increasing nonlinearity leads to band mixing. In real space, solitons are immobile without external forcing, and are degenerate modulo a lattice translation set by the translational symmetry of the system. By adiabatically changing the Hamiltonian $H_{\\vc{i}\\vc{j}}(s)$, a single soliton undergoes smooth deformation, and after one period, it is mapped to the manifold of initial solutions, implying translation by an integer multiple of the unit cell. \nThe observations of Ref.~\\cite{jurgensen2021quantized} suggest that solitons bifurcating from a single Bloch band undergo a quantized displacement dictated by the Chern number of the band~\\cite{PhysRevB.27.6083} over each pump cycle. Demonstrating this intriguing relation between the transport of nonlinear excitations and topological band indices is at the core of the present work.\n\nTo elucidate the topological nature of nonlinear pumps, we follow Ref.~\\cite{alfimov2002wannier} and represent the solitons of Eq.~\\eqref{DNLS_inst_1} in the basis of maximally localized Wannier states,\n\\begin{equation}\\label{soliton_expansion_wannier}\n \\varphi_{\\vc{i}} = \\sum_{n} \\, \\varphi^{(n)}_{\\vc{i}} ,\\, \\quad \\varphi^{(n)}_{\\vc{i}} = \\sum_{l} \\, a^{(n)}_{l} \\, w^{(n)}_{\\vc{i}}(l) \\, ,\n\\end{equation}\nwhere the superscript \\(n\\) denotes the occupied band; the index \\(l\\) labels the unit cell on which the Wannier state is localized; and all dependence on the adiabatic time $s$ is henceforth implicit. 
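As an aside, the instantaneous stationary states entering this expansion can be obtained numerically. The sketch below (in Python) builds the single-particle part of the nonlinear Rice-Mele model along an assumed pump parametrization and relaxes a localized seed towards the soliton by normalized gradient descent on the energy functional; the lattice size, pump path and iteration parameters are illustrative choices and are not meant to reproduce the Appendix.

\\begin{verbatim}
import numpy as np

def rice_mele_H(s, ncell=40, J0=1.0, a=0.9, b=0.9):
    """Single-particle Rice-Mele Hamiltonian (periodic chain) at adiabatic time s,
    for an assumed pump loop J1 - J2 = a*cos(2*pi*s), Delta = b*sin(2*pi*s)
    encircling the degeneracy point (J1 = J2, Delta = 0)."""
    J1 = J0 + 0.5 * a * np.cos(2 * np.pi * s)
    J2 = J0 - 0.5 * a * np.cos(2 * np.pi * s)
    Delta = b * np.sin(2 * np.pi * s)
    H = np.zeros((2 * ncell, 2 * ncell))
    for i in range(ncell):
        A, B = 2 * i, 2 * i + 1                 # sublattice sites of unit cell i
        H[A, A], H[B, B] = Delta, -Delta        # staggered on-site potential
        H[A, B] = H[B, A] = J1                  # intra-cell coupling
        Anext = 2 * ((i + 1) % ncell)           # first site of the next cell
        H[B, Anext] = H[Anext, B] = J2          # inter-cell coupling
    return H

def soliton(H, g, phi0, dt=0.05, nsteps=20000):
    """Stationary state of  mu*phi = H phi - g|phi|^2 phi  (unit norm), found by
    normalized gradient descent on E[phi] = phi^H H phi - (g/2) sum |phi|^4."""
    phi = phi0 / np.linalg.norm(phi0)
    for _ in range(nsteps):
        grad = H @ phi - g * np.abs(phi)**2 * phi
        phi = phi - dt * grad
        phi = phi / np.linalg.norm(phi)
    mu = np.real(np.vdot(phi, H @ phi)) - g * np.sum(np.abs(phi)**4)
    return phi, mu

# soliton in the lowest band at s = 0, seeded by a narrow bump in the bulk
H0 = rice_mele_H(0.0)
seed = np.exp(-0.5 * ((np.arange(80) - 40) / 2.0)**2)
phi0, mu0 = soliton(H0, g=1.0, phi0=seed)
\\end{verbatim}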
The coefficients \\(a^{(n)}_{l}\\) obey the analogue of Eq.~\\eqref{DNLS_inst_1} in the Wannier representation (Appendix)\n\\begin{equation}\\label{NLSE_wannier}\n\\begin{split}\n \\mu_{s} \\, a^{(n)}_{l} & = \\sum_{l_1} \\, \\omega_{l-l_1} \\, a^{(n)}_{l_1} \\, \\\\ & - g \\sum_{n_1,n_2,n_3} \\sum_{l_1,l_2,l_3} W^{(\\underline{n})}_{\\underline{l}} a^{(n_1)*}_{l_1} \\, a^{(n_2)}_{l_2}\\, a^{(n_3)}_{l_3} \\, ,\n\\end{split}\n\\end{equation}\nwhere \\(\\omega_{l} = 1\/N \\sum_{\\mt{k}} \\, \\mt{exp}(i \\mt{k} \\, l) \\, \\epsilon^{(n)}_{\\mt{k}}\\) is the Fourier transform of the $n$th Bloch band $\\epsilon^{(n)}_{\\mt{k}}$ associated with $H_{\\vc{i}\\vc{j}}(s)$; \\(\\underline{n}=(n, n_1, n_2, n_3)\\), \\(\\underline{l}=(l, l_1, l_2, l_3)\\); and \\( W^{(\\underline{n})}_{(\\underline{l})}\\) are the following Wannier overlaps\n\\begin{equation}\\label{W}\n W^{(\\underline{n})}_{(\\underline{l})} = \\sum_{\\vc{j}} \\, w^{(n)*}_{\\vc{j}}(l) \\, w^{(n_1)*}_{\\vc{j}}(l_1) \\, w^{(n_2)}_{\\vc{j}}(l_2) \\, w^{(n_3)}_{\\vc{j}}(l_3) \\, .\n\\end{equation}\n\nThe Wannier states of a Bloch band are not unique, as they depend on the gauge choice for the Bloch functions \\cite{yu2011equivalent}. Nevertheless, a unique set of maximally localized Wannier functions is provided by the eigenstates of the position operator's projection onto the associated band. Since such Wannier functions are exponentially localized, the contribution to the Wannier overlaps in Eq.~\\eqref{W} from Wannier functions corresponding to different unit cells are negligible. The Wannier overlaps can thus be simplified as \\(W^{(\\underline{n})}_{\\underline{l}}=W^{(\\underline{n})} \\, \\delta_{ll_1} \\, \\delta_{l_1 l_2} \\, \\delta_{l_2 l_3}\\,\\), where $ W^{(\\underline{n})} = \\sum_{\\vc{j}} \\, w^{(n)*}_{\\vc{j}}(l) \\, w^{(n_1)*}_{\\vc{j}}(l) \\, w^{(n_2)}_{\\vc{j}}(l) \\, w^{(n_3)}_{\\vc{j}}(l)$; we point out that $W^{(\\underline{n})}$ does not depend on the index $l$, because of translational invariance.\n\nMoreover, in the regime of weak nonlinearity, the initial state soliton occupies a single band~\\cite{jurgensen2021chern,jurgensen2021quantized,alfimov2002wannier}, which allows us to neglect inter-band contributions to Eq.~\\eqref{NLSE_wannier}. We note that this simplification holds throughout the evolution of the pump, during which the soliton adiabatically follows the same band.\n\nUnder those realistic assumptions, the Wannier representation of the DNLS reduces to the form\n\\begin{equation}\\label{NLSE_Wannier_simple}\n\\begin{split}\n \\mu_{s} \\, a^{(n)}_{l} & = \\sum_{l_1} \\, \\omega_{l-l_1} \\, a^{(n)}_{l_1} \\, - g W^{(n)} |a^{(n)}_{l}|^2 a^{(n)}_{l} \\, ,\n\\end{split}\n\\end{equation}\nwhere $W^{(n)} = \\sum_{\\vc{j}} \\, \\vert w^{(n)}_{\\vc{j}}(l) \\vert^4 $. Equation~\\eqref{NLSE_Wannier_simple} has the form of a scalar DNLS on a simple lattice with one degree of freedom per unit cell labeled by Wannier indices \\(l\\), with hopping terms involving nearest and beyond-nearest neighbors. The properties of such scalar DNLS are well established~\\cite{kevrekidis2009discrete, kevrekidis2003instabilities, kivshar1993peierls}:~Equation~\\eqref{NLSE_Wannier_simple} admits inter-site solitons, with maxima on two adjacent sites, and on-site solitons, with their maximum on a single site. The inter-site solitons are known to be unstable against small perturbations, we thus restrict ourselves to the stable on-site solitons. 
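For completeness, the ingredients of the scalar lattice model above, namely the Wannier functions of the chosen band, the effective hoppings and the on-site weight $W^{(n)}$, can be computed as sketched below; diagonalizing the band-projected position operator in its exponentiated (Resta-type) form is our own choice for handling periodic boundaries, and the routine is meant to be used with the Hamiltonian constructor of the previous sketch.

\\begin{verbatim}
import numpy as np

def wannier_band(H, ncell, band=0):
    """Wannier functions of one band of a two-band periodic chain, obtained as
    eigenstates of the position operator projected onto that band (exponentiated
    form, appropriate for a ring).  H has dimension 2*ncell, e.g. rice_mele_H(s)."""
    vals, vecs = np.linalg.eigh(H)
    occ = vecs[:, band * ncell:(band + 1) * ncell]      # eigenstates of that band
    cell = np.repeat(np.arange(ncell), 2)               # unit-cell index of each site
    Xexp = np.diag(np.exp(2j * np.pi * cell / ncell))   # Resta-type position operator
    ev, U = np.linalg.eig(occ.conj().T @ Xexp @ occ)    # band-projected position
    order = np.argsort(np.angle(ev) % (2 * np.pi))
    W = occ @ U[:, order]
    W = W / np.linalg.norm(W, axis=0)                   # columns ~ w^{(n)}(l)
    centers = ncell * (np.angle(ev[order]) % (2 * np.pi)) / (2 * np.pi)
    return centers, W

def scalar_dnls_ingredients(W, J1, J2, Delta, ncell):
    """Effective hoppings omega_l (Fourier coefficients of the lower Bloch band)
    and the on-site weight W^{(n)} = sum_j |w_j^{(n)}(l)|^4 of a bulk Wannier state."""
    k = 2 * np.pi * np.arange(ncell) / ncell
    eps_k = -np.sqrt(Delta**2 + np.abs(J1 + J2 * np.exp(1j * k))**2)
    omega = np.fft.ifft(eps_k).real          # omega_l = (1/N) sum_k e^{ikl} eps_k
    w_bulk = W[:, ncell // 2]
    W_onsite = np.sum(np.abs(w_bulk)**4)
    return omega, W_onsite
\\end{verbatim}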
Crucially, on-site solitons are always peaked around a single site (\\(l\\)) throughout the pumping cycle, as there is a finite energy (Peierls-Nabarro) barrier for delocalization~\\cite{kevrekidis2009discrete,kivshar1993peierls}. Interestingly, the Peierls-Nabarro barrier plays a role analogous to the ``gap condition\" of conventional topological physics, by forbidding transitions to other stable states during the adiabatic time evolution. This observation suggests that solitons are dragged by Wannier states upon pumping, hence exhibiting a quantized displacement in real space established by the Chern number~\\cite{asboth2016short,mei2014topological,lohse2016thouless}; see Figs.~\\ref{Fig_sketch}(a)-(b).\n\nTo firmly prove the topological nature of the nonlinear Thouless pump, we evaluate the soliton's center-of-mass displacement after one period $s\\!=\\!1$ (Appendix)\n\\begin{align}\\label{soliton_COM}\n \\Delta \\langle \\varphi^{(n)}, X \\varphi^{(n)} \\rangle & = \\Delta \\langle w^{(n)}(0), X w^{(n)}(0) \\rangle \\, \\\\ & + \\Delta \\sum_{l \\neq l'} \\, a^{(n)*}_{l'} a^{(n)}_{l}\\langle w^{(n)}(l'), X w^{(n)}(l) \\rangle,\\notag\n\\end{align}\nwhere \\(X\\) is the position operator of the lattice; \\( \\langle f, g \\rangle \\equiv \\sum_{\\vc{i}} f^{*}_{\\vc{i}} g_{\\vc{i}} \\) is the inner product of fields on the lattice; and \\(\\Delta (\\cdot) \\equiv (\\cdot)_{s=1} - (\\cdot)_{s=0} \\). The first term in Eq.~\\eqref{soliton_COM} reflects the displacement of Wannier functions upon one pump cycle, which is known to correspond to the Chern number of the band~\\cite{asboth2016short,mei2014topological,lohse2016thouless}; the additional terms displayed on the second line are possible corrections due to the finite overlap of different Wannier states. Importantly, we find that these small interference effects are periodic in time (Appendix), such that these correction terms in Eq.~\\eqref{soliton_COM} do not contribute to the soliton's center-of-mass displacement over a pump cycle. Altogether, this completes the proof:~the displacement of solitons is indeed quantized according to the Chern number of the band from which they emanate.\\\\\n\n\\subsection{Numerical validation} \n\nWe now demonstrate the validity of our assumptions by solving the nonlinear Rice-Mele model [Eq.~\\eqref{Rice-Mele_NLSE}]. In Figs.~\\ref{Fig_one}(a)-(b), we compare the on-site soliton solution of the simplified Eq.~\\eqref{NLSE_Wannier_simple}, which emerges from the lowest band, with the Wannier representation of the exact soliton obtained by solving the full DNLS in Eq.~\\eqref{DNLS_inst_1}. We then perform a similar comparison in real space, by convolving the soliton of Eq.~\\eqref{NLSE_Wannier_simple} with the corresponding Wannier states, and by comparing this result to the exact soliton of the original nonlinear Rice-Mele model; see Figs.~\\ref{Fig_one}(c)-(d). The perfect agreement validates the description of the soliton in Wannier representation using the ordinary nonlinear Schr\\\"{o}dinger equation~\\eqref{NLSE_Wannier_simple}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=8.6cm]{Fig_1_solitons_real_new.pdf}\n\\vspace{-0.1cm}\n\\caption{(a)-(b): Wannier representation of a soliton in the lowest band of the nonlinear Rice-Mele model (blue solid line), compared with the soliton obtained from the simplified DNLS Eq.~\\eqref{NLSE_Wannier_simple} (dashed red line), for $g\\!=\\!J_0$ and $g\\!=\\!2 \\, J_0$ and time $s\\!=\\!0.12$. 
Here, $J_0$ is a characteristic hopping strength (Appendix). Note how increasing the nonlinearity further localizes the soliton. (c)-(d) Same comparison in real space. }\n\\label{Fig_one}\n\\end{figure}\n\nWe depict the motion of the exact soliton in Fig.~\\ref{Fig_two}, as obtained by solving Eq.~\\eqref{DNLS_inst_1} over two pump cycles $s\\!\\in\\![0,2]$, and we compare this trajectory with the drift of its underlying Wannier function, i.e.~the Wannier state that contributes the most to the expansion~\\eqref{soliton_expansion_wannier}. In order to obtain a contiguous path for the Wannier center, we relabeled the Wannier functions whenever the Wannier centers met discontinuities; this smoothing corresponds to a singular gauge transformation of the corresponding Bloch states, and has no physical implication. Figure~\\ref{Fig_two} indicates that the trajectories of the soliton and Wannier center differ at intermediate times ($s\\!\\ne\\!$ integer), which we attribute to the aforementioned interference effects involving different Wannier states (Appendix); however, in agreement with our theoretical predictions, this deviation remains small and time-periodic over the whole pump cycle, and does not introduce any (integer) correction to the quantized center-of-mass displacement.\\\\\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.6cm]{Fig_2_pump_inset.pdf}\n\\vspace{-0.25cm}\n\\caption{Adiabatic evolution of the soliton's center-of-mass during two full pump cycles (inset), as obtained by solving Eq.~\\eqref{DNLS_inst_1} on the Rice-Mele lattice with $g\\!=\\!J_0$, and selecting a soliton in the lowest band. This is compared to the evolution of the center-of-mass of the Wannier function with largest contribution to the soliton's expansion [Eq.~\\eqref{soliton_expansion_wannier}]. For clarity, the Wannier functions are relabeled during the pump cycle such that their center-of-mass follows a contiguous path instead of winding around a unit cell. The quantized displacement is set by the Chern number $C\\!=\\!-1$ of the occupied band.}\n\\label{Fig_two}\n\\end{figure}\n\n\\vspace{-1.715cm}\n\n\\subsection{An interaction-induced topological pump \\\\ for ultracold atomic mixtures} \n\nThe theoretical framework presented in this work is based on the general DNLS in Eq.~\\eqref{DNLS_1}, and hence, it applies to a broad range of nonlinear lattice systems. Here, we exploit this universality by introducing a mapping to an imbalanced Bose-Bose atomic mixture, which encompasses the DNLS in Eq.~\\eqref{DNLS_1} as its semiclassical limit (within the Thomas-Fermi approximation). As we explain below, this approach reveals an interaction-induced topological pump, where solitons of impurity atoms undergo a quantized drift resulting from genuine inter-particle interaction processes; see Fig.~\\ref{Fig_sketch}(c). 
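Before introducing the microscopic mixture Hamiltonian, we note that the quantized drift reported above can be reproduced numerically by sweeping the adiabatic time slowly and re-converging the instantaneous soliton at each step, reusing the routines sketched earlier; the schematic loop below tracks the soliton's center of mass over one pump cycle (the discretization and convergence parameters are illustrative).

\\begin{verbatim}
import numpy as np

def pump_soliton_com(g, ncell=40, nsamples=400):
    """Center-of-mass trajectory of the soliton over one pump cycle, obtained by
    recomputing the instantaneous stationary state at each adiabatic time s
    (uses rice_mele_H() and soliton() from the previous sketches)."""
    dim = 2 * ncell
    x = np.repeat(np.arange(ncell), 2).astype(float)     # unit-cell position of sites
    phi = np.exp(-0.5 * ((np.arange(dim) - dim // 2) / 2.0)**2)   # initial seed
    com = []
    for s in np.linspace(0.0, 1.0, nsamples):
        H = rice_mele_H(s, ncell=ncell)
        phi, _ = soliton(H, g, phi0=phi, nsteps=2000)    # warm-start from previous s
        com.append(np.sum(x * np.abs(phi)**2))           # center of mass [unit cells]
    return np.array(com)

# The net displacement com[-1] - com[0] after one cycle is expected to approach
# the Chern number of the band the soliton emanates from (here +1 or -1,
# depending on the band and on the orientation of the assumed pump loop).
\\end{verbatim}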
\n\n\\vspace{-0.11cm}\n\nWe start from a microscopic theory for an imbalanced Bose-Bose atomic mixture on a 1D lattice~\\cite{1D_mixtures}, as described by the second-quantized Hamiltonian\n\\begin{align}\n \\hat H &= \\sum_{ \\langle \\vc{i},\\vc{j} \\rangle} \\, \\hat\\phi^{\\dagger}_{\\vc{i}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\hat\\phi_{\\vc{j}} \n + \\sum_{\\vc{i}} \\, \\frac{U_{\\phi\\phi}}{2} \\, \\hat\\phi^{\\dagger}_{\\vc{i}} \\hat\\phi^{\\dagger}_{\\vc{i}} \\hat\\phi_{\\vc{i}} \\hat\\phi_{\\vc{i}} \\notag \\\\\n &+ \\sum_{\\langle \\vc{i}, \\vc{j} \\rangle} \\, \\hat\\sigma^{\\dagger}_{\\vc{i}} \\, H^{(\\sigma)}_{\\vc{i}\\vc{j}} \\, \\hat\\sigma_{\\vc{j}} \n + \\sum_{\\vc{i}} \\, \\frac{U_{\\sigma \\sigma}}{2} \\, \\hat\\sigma^{\\dagger}_{\\vc{i}} \\hat\\sigma^{\\dagger}_{\\vc{i}} \\hat\\sigma_{\\vc{i}} \\,\n \\hat\\sigma_{\\vc{i}} \\, \\notag \\\\\n &+ \\, \\sum_{\\vc{i}} \\, U_{\\phi \\sigma} \\, \\hat\\phi^{\\dagger}_{\\vc{i}}\\hat\\phi_{\\vc{i}} \\, \\hat\\sigma^{\\dagger}_{\\vc{i}} \\hat{\\sigma}_{\\vc{i}}.\\label{eq:imp_bos_Ham}\n\\end{align}\nwhere $\\hat\\phi_{\\vc{i}}$ and $\\hat\\sigma_{\\vc{i}}$ are bosonic field operators on the lattice; note that we use the same conventions for indices \\(\\vc{i}=(i,\\alpha)\\) as before. Specifically, the first line describes single-body processes (i.e.~nearest-neighbor hopping and onsite potentials) and intra-species contact interaction processes for the majority atoms, which are described by the field operator $\\hat\\phi_{\\vc{i}}$; the second line describes single-body processes and intra-species contact interactions for impurity atoms, represented by the field operator $\\hat\\sigma_{\\vc{i}}$; and the third line describes inter-species interaction processes. We assume that the intra-species interaction strengths are both repulsive, $(U_{\\sigma \\sigma}, U_{\\phi \\phi} > 0)$, whereas the inter-species interaction strength is attractive (\\(U_{\\phi \\sigma} < 0\\)).\n\nIn the semi-classical limit, where quantum fluctuations are suppressed for both species, this Bose-Bose mixture setting is well described by two coupled nonlinear Schr\\\"odinger equations (Appendix and Ref.~\\cite{1D_mixtures})\n\\vspace{-0.2cm}\n\\begin{equation} (\\omega_0+\\mu_{\\phi}) \\, \\phi_{\\vc{i}} - \\sum_{\\vc{j}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}} \n - \\Big ( \\, g_{\\phi \\phi}|\\phi_{\\vc{i}}|^2 \\, \n + g_{\\phi \\sigma}|\\sigma_{\\vc{i}}|^2 \\, \\Big ) \\phi_{\\vc{i}} = 0 \\, , \\notag\\\\ \n\\end{equation}\n\\vspace{-0.52cm}\n\\begin{equation}\\label{var_eq_12_main}\n (\\omega_0+\\mu_{\\sigma}) \\, \\sigma_{\\vc{i}} - \\sum_{\\vc{j}} \\, H^{(\\sigma)}_{\\vc{i}\\vc{j}} \\, \\sigma_{\\vc{j}} - \n \\Big ( \\,\n g_{\\phi \\sigma}|\\phi_{\\vc{i}}|^2 + g_{\\sigma \\sigma} \\, |\\sigma_{\\vc{i}}|^2 \n \\, \\Big ) \\, \\sigma_{\\vc{i}} = 0 \\, ,\n\\end{equation}\nwhere $\\phi_{\\vc{i}}$ and $\\sigma_{\\vc{i}}$ denote classical fields satisfying the constraints \\(\\sum_{\\vc{i}} \\, |\\phi_{\\vc{i}}|^2 = N_{\\phi}\/(N_{\\phi}+N_{\\sigma})\\) and \\(\\sum_{\\vc{i}} \\, |\\sigma_{\\vc{i}}|^2 = N_{\\sigma}\/(N_{\\phi}+N_{\\sigma})\\), with \\(N_{\\phi}\\) and \\(N_{\\sigma}\\) the particle number of majority and impurity species, respectively; the interaction parameters are defined as $g_{\\alpha \\beta}\\!=\\!U_{\\alpha \\beta} (N_{\\phi}+N_{\\sigma})$, with $\\alpha,\\beta=(\\phi, \\sigma)$; $\\mu_{\\phi,\\sigma}$ denote the chemical potentials; and \\(\\omega_0\\) is the eigenvalue of the nonlinear 
Eqs.~\\eqref{var_eq_12_main}.\n\nConsidering the case of heavy impurities, we neglect their kinetic-energy contributions ($H^{(\\sigma)}_{\\vc{i}\\vc{j}}$) to Eq.~\\eqref{var_eq_12_main}, the so-called Thomas-Fermi approximation. In this regime, one can relate the impurity mean-field profile to the majority profile as \n\\begin{equation}\n|\\sigma_{\\vc{i}}|^2 =- (g_{\\phi \\sigma}\/g_{\\sigma \\sigma}) \\, |\\phi_{\\vc{i}}|^2,\\label{profile_match}\n\\end{equation}\nand Eq.~\\eqref{var_eq_12_main} simplifies to the DNLS (Appendix)\n\\begin{equation}\\label{EOM_HS}\n \\begin{split}\n (\\omega_0+\\mu_{\\phi}) \\, \\phi_{\\vc{i}} = \\left (\\sum_{\\vc{j}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} - u^{\\mt{MF}}_{\\vc{i}} \\right)\\phi_{\\vc{i}} \\, , \\quad u^{\\mt{MF}}_{\\vc{i}} = g |\\phi_{\\vc{i}}|^2 \\, ,\n \\end{split}\n\\end{equation}\nwhere \\(g\\!=\\!-g_{\\phi \\phi}+g^{2}_{\\phi \\sigma}\/g_{\\sigma \\sigma}\\). Interestingly, Eq.~\\eqref{EOM_HS} is formally equivalent to the DNLS in Eq.~\\eqref{DNLS_inst_1}:~the majority atoms described by the field $\\phi_{\\vc{i}}$ can form a soliton and undergo a quantized motion upon driving a Thouless pump sequence in the corresponding lattice Hamiltonian, i.e.~$H^{(\\phi)}_{\\vc{i}\\vc{j}}(s)$. Importantly, according to Eq.~\\eqref{profile_match}, the impurity atoms also form a soliton and undergo a quantized motion:~the impurities exhibit topological pumping from genuine interaction processes with the majority atoms. In particular, this interaction-induced topological pumping occurs even when the lattice felt by the impurities $H^{(\\sigma)}_{\\vc{i}\\vc{j}}$ is associated with a trivial (non-topological) band structure.\n\nWe first analyze this interaction-induced topological effect by considering the Thomas-Fermi approximation. It appears from Eq.~\\eqref{EOM_HS} that $u^{\\mt{MF}}$ acts as an effective potential for the majority atoms; a soliton then emerges as the bound state of the impurity field. In the context of highly-imbalanced mixtures with strong impurity-majority coupling, i.e.~in the strong-coupling Bose polaron regime, it is customary to assume a variational ansatz describing the profile of the impurity and majority fields \\cite{grusdt2015new}; the majority field is then found as the bound state of the impurity potential \\(u^{\\mt{MF}}\\) using the first relation in Eq.~\\eqref{EOM_HS}. Here, the variational problem for obtaining \\(u^{\\mt{MF}}\\) reduces to one for \\(\\phi\\), because of the constraint $u^{\\mt{MF}} = g |\\phi|^2$. As before, we express \\(\\phi\\) in the Wannier basis, and the variational problem is then solved simultaneously for both \\(u^{\\mt{MF}}\\) and \\(\\phi\\) using the ansatz \\(a_{l}=\\eta \\, \\mt{sech}( \\xi \\, (l-l_0) ) \\) for the Wannier coefficients of \\(\\phi\\). The bound state of the resulting impurity potential $u^{\\mt{MF}}\\!=\\!g |\\phi|^2$ then corresponds to the soliton (Appendix). \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.7cm]{Fig_4_variational.pdf}\n\\vspace{-0.5cm}\n\\caption{\n(a) Evolution of the soliton's width $\\xi$ in Wannier space over two pump cycles, as obtained by fitting the numerical solution of Eq.~\\eqref{DNLS_inst_1} with a sech function (blue solid line). This is compared to the width of the variational-ansatz solution (dashed red line), and to that of the bound-state solution (green dotted line); here \\(g\\!=\\!J_0\\). (b) Same for the amplitude of the soliton $\\eta$. 
(c) Amplitude and width of the exact (solid blue line) and variational-ansatz (dashed red line) solutions as a function of $g$, at time $s\\!=\\!0.12$. (d) Center-of-mass displacement of the calculated bound state over one pump cycle. The inset shows the corresponding bound state profiles. The quantized motion is dictated by the Chern number $C=-1$; compare with Fig.~\\ref{Fig_two}.}\n\\label{Fig_three}\n\\end{figure}\n\nFigures~\\ref{Fig_three}(a) and (b) show the adiabatic evolution of the amplitude \\(\\eta\\) and width \\(\\xi\\) of the variational solution \\(a_{l}\\!=\\!\\eta \\, \\mt{sech}( \\xi \\, (l-l_0) ) \\) used for the Wannier coefficients of \\(\\phi\\). We compare these results with the amplitude and width extracted from the bound-state solution associated with the impurity potential $u_{\\mt{MF}} = g |\\phi|^2$, as well as to those extracted from the exact soliton of Eq.~\\eqref{DNLS_inst_1} expressed in Wannier representation. We also show the dependence of these parameters on the nonlinearity \\(g\\) in Fig.~\\ref{Fig_three}(c), for both the exact soliton and the variational solution. These results validate our variational approach, as well as the bound-state picture of our soliton.\n\nThe minimum-energy solutions obtained from the variational ansatz are realized for integer values of the Wannier index \\(l_0\\), and thus correspond to stable on-site solitons. Moreover, this Wannier index \\(l_0\\) remains constant over a pump cycle. Hence, this again suggests that the real-space motion of the soliton should follow the quantized Wannier drift, as established by the Chern number. This is verified in Fig.~\\ref{Fig_three}(d), where the center-of-mass displacement of the calculated bound state is shown to be quantized over a pump cycle (compare with Fig.~\\ref{Fig_two}).\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.7cm]{mixture_new.pdf}\n\\caption{Center-of-mass trajectories of both species in a Bose-Bose mixture. Here, the interaction strengths are set as \\(g_{\\phi \\phi} \\simeq 0.226 \\, J_0, \\, g_{\\phi \\sigma} \\simeq -11.32 \\, J_0 \\) and \\(g_{\\sigma \\sigma} \\simeq 2.26 \\, J_0\\). The majority atoms undergo the pumping sequence as in Fig.~\\ref{Fig_two}, while impurities feel a trivial lattice.~Impurity atoms undergo quantized transport through interactions with their environment.}\n\\label{Fig_pump_mixutre}\n\\end{figure}\n\\\n\nIn order to demonstrate the validity of our results, in particular, the robustness of the interaction-induced topological pump away from the Thomas-Fermi limit, we solve Eq.~\\eqref{var_eq_12_main} numerically for a mass-balanced mixture, thus including the effects of the impurities' kinetic energy. We again use the Rice-Mele model, but consider two different pump sequences for the majority and impurity species:~the majority feels the same (topological) pump sequence as in Fig.~\\ref{Fig_two}, while we apply a trivial sequence for the impurity species. We obtain the steady state solution of Eq.~\\eqref{var_eq_12_main} over two pump cycles, where the majority particles predominantly occupy the lowest Bloch band. The corresponding trajectories of the CM of both species are depicted in Fig.~\\ref{Fig_pump_mixutre}, where the impurity CM is shown to be dragged by the majority particles. 
While the exact form of the CM trajectories depend on the details of the model and pumping sequence, the CM displacement after one pump cycle is dictated by the Chern number of the topological band occupied by the majority species (\\(C\\!=\\!-1\\) in this case). Although the impurity atoms experience a topologically trivial lattice (Appendix), they are shown to undergo topological pumping through genuine interaction effects with their environment. \\\\\n\n\\subsection{Implementation in ultracold atoms} \n\nThe interaction-induced topological pump introduced above could be experimentally implemented in ultracold atomic gases involving two bosonic species. In fact, the parameters values incorporated in our numerical simulations of Eq.~\\eqref{var_eq_12_main}, and displayed in Fig.~\\ref{Fig_pump_mixutre}, are compatible with an experimental realization based on bosonic \\(^{7}\\mt{Li}-^{7}\\mt{Li}\\) mixtures, with two different hyperfine states of \\(^{7}\\mt{Li}\\) as ``majority\" and ``impurity\" atoms; we note that the formation of solitons in Lithium gases was previously investigated, both theoretically and experimentally~\\cite{ahufinger2004creation,strecker2002formation}. Following Ref.~\\cite{hulet2020methods}, the scattering lengths between atoms in state \\((F\\!=\\!1,m_F\\!=\\!1)\\) -- ``impurity\" atoms -- and \\((F\\!=\\!1,m_F\\!=\\!0)\\) -- ``majority\" atoms -- can be set to \\(a_{\\phi \\phi} \\simeq 0.154 \\, a_0, \\, a_{\\phi \\sigma} \\simeq -7.57 \\, a_0, \\, a_{\\sigma \\sigma} \\simeq 1.514 \\, a_0\\), at a magnetic field \\(B \\simeq 575 \\, G\\), where \\(a_0\\) is the Bohr radius (\\(a_0=0.0529 \\, \\mt{nm}\\)); we note that these scattering lengths are highly tunable thanks to a broad Feshbach resonance. As further discussed below, this configuration is compatible with the interaction parameters ($g_{\\phi \\phi}, g_{\\sigma \\sigma}, g_{\\phi \\sigma}$) used in our numerics.\n\nThe lattice structure and pump sequence can be designed within a time-dependent optical lattice. For instance, following Ref.~\\cite{nakajima2016topological},\nthe atoms can be loaded in a potential landscape comprised of two superimposed optical lattices, with a long-wavelength lattice (\\(\\lambda_l\\!=\\!1064 \\, \\mt{nm}\\)) and a shorter lattice (\\(\\lambda_s\\!=\\!\\lambda_l\/2\\)), with different amplitudes (\\(V_l\\!=\\!3.0 \\, E_R\\) and \\(V_s\\!=\\!1.0 \\, E_R\\), with \\(E_R\\!=\\!h^2\/(2m \\, \\lambda^2_l)\\) the recoil energy of the long lattice). Such an optical lattice potential takes the form \\(V(x, \\phi) = - V_l \\, \\mt{cos}^2(2\\pi x\/\\lambda_l-\\phi) - V_s \\, \\mt{cos}^2(2\\pi x\/\\lambda_s)\\), and it implements the Rice-Mele lattice considered in our numerics:~the Thouless pump sequence is simply realized by sweeping the phase \\(\\phi\\) from \\(0\\) to \\(2 \\pi\\). The relevant parameters of the Rice-Mele model can be extracted from a tight-binding analysis of the optical lattice potential~\\cite{nakajima2016topological}, and the resulting pump sequence is described by the following elliptic path in parameter space: \\(((J_1-J_2)\/a)^2+(\\Delta\/b)^2\\!=\\!1\\), with \\(a\\simeq0.19 \\, E_R\\) and \\(b\\simeq0.475 \\, E_R\\). In our numerics, we choose a closely related pumping sequence with \\(a=0.15 \\, E_R\\) and \\(b=0.5 \\, E_R\\); this choice does not affect our final conclusions, since topological pumping is robust against smooth deformations of the pumping sequence. 
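\nAs a minimal illustration (a simple sketch, using only the parameter values quoted above; it is not meant to replace the band-structure analysis of Ref.~\\cite{nakajima2016topological}), the bichromatic potential implementing the pump can be tabulated over one cycle as follows:\n\\begin{verbatim}\nimport numpy as np\n\nlam_l = 1064e-9            # long-lattice wavelength (m)\nlam_s = lam_l \/ 2.0        # short-lattice wavelength\nV_l, V_s = 3.0, 1.0        # depths in units of E_R (recoil of the long lattice)\n\ndef V(x, phi):\n    # bichromatic potential; phi is the pump phase swept from 0 to 2*pi\n    return (-V_l * np.cos(2 * np.pi * x \/ lam_l - phi) ** 2\n            - V_s * np.cos(2 * np.pi * x \/ lam_s) ** 2)\n\nx = np.linspace(0.0, 2 * lam_l, 400)          # two long-lattice periods\nfor phi in np.linspace(0.0, 2 * np.pi, 5):    # snapshots over one pump cycle\n    print(phi, V(x, phi).min())\n\\end{verbatim}\nWe stress that this snippet only visualizes the potential landscape; the tight-binding parameters \\((J_1,J_2,\\Delta)\\) quoted above still follow from the band-structure analysis of Ref.~\\cite{nakajima2016topological}.\n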
Finally, to reveal the interaction-induced topological transport for impurities, we propose to implement a trivial pump sequence for that species only [see Fig.~\\ref{Fig_pump_mixutre}]; this could be realized by designing a state-dependent optical lattice~\\cite{Jaksch_toolbox}, for instance, using the Floquet-engineering scheme of Ref.~\\cite{Jotzu_state}.\n\nThe particle numbers of the two species can be set to \\(N_{\\phi}\\!\\simeq\\!1500\\) \\cite{bradley1997bec} and \\(N_{\\sigma}\/N_{\\phi}\\!\\simeq\\!1\/30\\). With this choice, we obtain the interaction parameters according to the relation \\(g_{\\alpha \\beta}\/E_R\\!=\\!(N_{\\sigma}+N_{\\phi})\\, \\sqrt{8\/\\pi}\\,k_l \\, a_{\\alpha \\beta} (V_s\/E_R)^{3\/4}\\) \\cite{bloch2008many}, where \\(\\alpha, \\beta\\!=\\! (\\phi,\\sigma)\\) and \\(k_l\\!=\\!2\\pi\/\\lambda_l\\). Setting the pump parameter \\(J_0\\!=\\!0.5E_R\\), the numerical values for the interaction parameters are obtained as \\(g_{\\phi \\phi} \\simeq 0.226 \\, J_0, \\, g_{\\phi \\sigma} \\simeq -11.32 \\, \\, J_0 \\) and \\(g_{\\sigma \\sigma} \\simeq 2.26 \\, J_0\\), which are the values used in our numerical simulations [Fig.~\\ref{Fig_pump_mixutre}].\n\n\n\\subsection{Conclusions}\n\nIn this work, we outlined a general theoretical framework that connects Bloch band's topology to nonlinear excitations, hence elucidating the topological transport of solitons in the context of nonlinear Thouless pumps. Solitons are stable states of nonlinear lattice systems described by the paradigmatic discrete nonlinear Schr\\\"{o}dinger equation (DNLS), which is central in describing nonlinear phenomena in a wide range of physical settings, from nonlinear optics and photonics, to ultracold quantum matter, fluid dynamics and plasma physics. In this sense, characterizing the influence of Bloch band's topology on the behavior of the stable states of DNLS is of prime importance. This program is particularly challenging due to the lack of generic theoretical approaches connecting notions of topological physics to nonlinear systems and vice versa. Furthermore, introducing nonlinearities in more sophisticated topological systems, such as higher-dimensional settings, or lattices exhibiting higher-order topology and symmetry-protected features, could lead to exotic phenomena exhibited by the nonlinear modes of the system; see Ref.~\\cite{Liberto_orbital} and references therein. By providing a scheme that naturally connects topological indices of band structures to nonlinear excitations, our work opens the door to the exploration of novel nonlinear topological phenomena.\n\nWe also illustrated the universality of our approach, by introducing a topological pump for Bose-Bose atomic mixtures, where one species (impurity atoms) experience a quantized drift through genuine interaction processes with the other species (the surrounding majority atoms). Importantly, the impurity atoms inherit the topological properties of their environment through inter-species interactions. We note that such interaction-induced topology has been previously studied in the context of topological polarons, namely, in mixtures with strong population imbalance, where individual topological excitations can bind to mobile impurities~\\cite{grusdt2016interferometric,grusdt2019topological,de2020anyonic,PhysRevB.104.035133}. The present scheme extends those concepts to more complex majority-impurity states, such as coupled coherent states within a superfluid phase. 
We also point out that the proposed scheme can be implemented using available cold-atom technologies, and the quantized transport of impurities can be measured in-situ, using state-selective imaging techniques~\\cite{Bloch_Sylvain_review}. Besides, the Chern number characterizing the interaction-induced topological pump could also be directly extracted by interferometry~\\cite{grusdt2019topological}.\\\\\n\n\nDuring the preparation of this manuscript, the authors became aware of a related work by M. J\\\"urgensen and M. C. Rechtsman~\\cite{jurgensen2021chern}.\\\\\n\n\n\\section{Appendix}\n\n{\\bf Adiabatic theorem for NLS.} The adiabatic theorem for NLS (both continuous and discrete forms), follows closely the formulation of its linear counterpart~\\cite{carles2011semiclassical,gang2017adiabatic}. For a system with a time-dependent Hamiltonian \\(H(t)\\), which varies on a time scale \\(T\\) much larger than all the time scales in the problem, the time-dependent NLS takes the following form (see main text)\n\\begin{equation}\\label{NLS_s}\ni \\varepsilon \\, \\partial_s \\phi = \\, H(s) \\, \\phi - g |\\phi|^2 \\phi , \\, \n\\end{equation}\nwhere \\(s=t\/T\\) is the adiabatic time and \\(\\varepsilon=1\/T\\) the rate of change. The stationary state solutions of Eq.~\\eqref{NLS_s} are of the form \n\\begin{equation}\\label{ad_eq}\n \\phi_s = e^{-i \\theta_s} \\, \\big ( \\, \\varphi_s + \\delta \\, \\varphi_s \\, \\big ) , \\,\n\\end{equation}\nwhere \\(\\varphi_s\\) is the instantaneous solution of the stationary NLS,\n\\begin{equation}\\label{DNLS_inst_1_SM}\n \\mu_{s} \\, \\varphi_s = \\, H(s) \\, \\varphi_s - g \\, |\\varphi_s|^2 \\varphi_s \\, ,\n\\end{equation}\nand \\(\\theta_s = 1\/\\varepsilon \\big ( \\, \\int^{s}_0 ds' \\, \\mu_{s'} - \\gamma_s \\, \\big ) \\) is a global phase factor consisting of a dynamical contribution and a Berry phase, and it can be ignored. The correction term \\(\\delta \\, \\varphi_s\\) accounts for non-adiabatic variations, and for \\(\\varepsilon \\to 0 \\), it behaves as \\(||\\delta \\, \\varphi|| \\sim \\varepsilon \\,\\)\n, hence vanishes in the adiabatic limit \\(\\varepsilon \\to 0 \\,\\). The relevant dynamical information is therefore encoded in the instantaneous solutions of Eq.~\\eqref{DNLS_inst_1_SM}.\\\\\n\n{\\bf The Rice-Mele model and pump sequence.} Throughout this work, we illustrate the general concepts and results using the Rice-Mele model, with periodic boundary conditions. This simple two-band model, which is reviewed in some detail below, is known to exhibit a topological (Thouless) pump sequence. \n\nThe Rice-Mele model is a 1D tight-binding model with alternating nearest-neighbor tunneling matrix elements ($J_1$, $J_2$, $J_1$, $J_2$, \\dots), and a staggered on-site potential. We denote the two sites within each unit cell by \\(\\alpha = A, B\\) and the unit cells by \\(i, \\, 0 \\leq i \\leq N-1\\), where \\(N\\) is the number of unit cells. The hopping matrix element between sites \\(A\\) and \\(B\\) within each unit cell (resp. between adjacent unit cells) is written as \\(J_1\\!=\\!-J(1+\\delta)\\) (resp. \\(J_2\\!=\\!-J(1-\\delta)\\)) and the magnitude of the staggered potential on site \\(A\\) (resp.~\\(B\\)) equals \\(\\Delta\\) (resp.~\\(-\\Delta\\)). 
The Hamiltonian of the Rice-Mele model thus reads\n\\begin{align}\nH =& -\\sum^{N-1}_{i=0} \\, \\Big [ \\, J(1+\\delta) \\, \\dyad{i,A}{i,B} \\\\\n& + J(1-\\delta) \\dyad{i, A}{i-1,B} \\, \\Big ] \\notag\\\\\n&+ \\frac{\\Delta}{2} \\,\\sum^{N-1}_{i=0} \\Big [ \\, \\, \\dyad{i,A}{i,A} - \\, \\dyad{i,B}{i,B} \\, \\Big ] + \\text{h.c.} \\, \\label{RM_Ham}\n\\end{align}\n\nThe simulations shown in the main text were performed on a lattice with \\(N=100\\) unit cells, and using the following pump sequence \n\\begin{equation}\n \\begin{split}\n & J(s) = J_0 \\big ( \\, 1 + 1\/2 \\, \\mt{cos}(2\\pi s) \\, \\big ) \\, ,\\\\\n & \\delta(s) = \\delta_0 \\, \\mt{cos}(2 \\pi s)\/( 2 + \\mt{cos}(2\\pi s) ) \\, , \\\\\n & \\Delta(s) = \\, J_0 \\, \\mt{sin}(2\\pi s), \n \\end{split}\\label{sequence}\n\\end{equation}\nwith \\(J_0\\!=\\!0.5 \\,\\) and \\(\\delta_0\\!=\\!0.6\\), corresponding to a topological pump with Chern number $C\\!=\\!-1$. The \\emph{nonlinear} Rice-Mele model, which is used in our simulations, is obtained by adding an on-site nonlinearity to this lattice model; see Eq.~\\eqref{Rice-Mele_NLSE}. \n\nIn order to demonstrate the interaction-induced topological pumping in the Bose-Bose mixture setting, we assume that the two species experience the same Rice-Mele lattice described above, but with different pump sequences:~the majority atoms experience the topological pumping sequence in Eq.~\\eqref{sequence}, while the impurity atoms experience a trivial sequence with \\(J \\delta\\!=\\!\\Delta\\!=\\!0\\). The resulting center-of-mass displacement of both species are depicted in Fig.~\\ref{Fig_pump_mixutre} of the main text.\\\\\n\n{\\bf Derivation of the scalar DNLS.} We outline the derivation of the simplified scalar DNLS from the original lattice DNLS,\n\\begin{equation}\\label{NLS_ij}\n \\mu \\, \\phi_{\\vc{i}} = \\sum_{\\vc{j}} \\, H_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}} - g \\, |\\phi_{\\vc{i}}|^2 \\, \\phi_{\\vc{i}} \\, .\n\\end{equation}\nThe Wannier functions are related to the Bloch waves of the Hamiltonian by the following relations\n\\begin{equation}\\label{w_psi}\n\\begin{split}\n w^{(n)}_{\\vc{j}}(l) & = \\frac{1}{\\sqrt{N}} \\, \\sum^{N-1}_{\\mt{k}=0} \\, e^{i(2\\pi\/N)\\mt{k}(-l)} \\, \\psi^{(n)}_{\\vc{j}}(\\mt{k}) \\, \\\\\n & = \n \\frac{1}{\\sqrt{N}} \\, \\sum^{N-1}_{\\mt{k}=0} \\, e^{i(2\\pi\/N)\\mt{k}(j-l)} \\, u^{(n)}_{\\vc{j}}(\\mt{k}), \\,\n\\end{split}\n\\end{equation}\nwhere \\(\\psi^{(n)}_{\\vc{j}}(\\mt{k})=e^{i(2\\pi\/N)\\mt{k}(j)}\\,u^{(n)}_{\\vc{j}}(\\mt{k})\\) is the Bloch wave of band \\(n\\) with momentum \\(\\mt{k}\\) and \\(u^{(n)}_{\\vc{j}}(\\mt{k})\\) is the corresponding Bloch function, which is periodic over the unit cells and does not depend on \\(j\\). 
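\nAs a cross-check of the Chern number quoted for the sequence~\\eqref{sequence}, the lowest Bloch band can also be analyzed numerically. The following minimal sketch (added for illustration; it assumes the standard Fukui-Hatsugai discretization of the Berry curvature on the \\((\\mt{k},s)\\) torus) computes the Chern number of the lowest Rice-Mele band:\n\\begin{verbatim}\nimport numpy as np\n\nJ0, delta0 = 0.5, 0.6          # pump parameters used in the text\nNk, Ns = 60, 60                # discretization of the (k, s) torus\n\ndef bloch_h(k, s):\n    # 2x2 Bloch Hamiltonian of the Rice-Mele model along the pump sequence\n    J = J0 * (1.0 + 0.5 * np.cos(2 * np.pi * s))\n    delta = delta0 * np.cos(2 * np.pi * s) \/ (2.0 + np.cos(2 * np.pi * s))\n    Delta = J0 * np.sin(2 * np.pi * s)\n    J1, J2 = -J * (1.0 + delta), -J * (1.0 - delta)\n    off = J1 + J2 * np.exp(-1j * k)      # A-B coupling in the Bloch basis\n    return np.array([[Delta, off], [np.conj(off), -Delta]])\n\nu = np.empty((Nk, Ns, 2), dtype=complex)\nfor i, k in enumerate(np.linspace(0.0, 2.0 * np.pi, Nk, endpoint=False)):\n    for j, s in enumerate(np.linspace(0.0, 1.0, Ns, endpoint=False)):\n        vals, vecs = np.linalg.eigh(bloch_h(k, s))\n        u[i, j] = vecs[:, 0]             # lowest-band eigenvector\n\ndef link(a, b):                          # U(1) link variable between neighbors\n    z = np.vdot(a, b)\n    return z \/ abs(z)\n\nflux = 0.0\nfor i in range(Nk):\n    for j in range(Ns):\n        a, b = u[i, j], u[(i + 1) % Nk, j]\n        c, d = u[(i + 1) % Nk, (j + 1) % Ns], u[i, (j + 1) % Ns]\n        flux += np.angle(link(a, b) * link(b, c) * link(c, d) * link(d, a))\nprint('Chern number of the lowest band:', int(round(flux \/ (2.0 * np.pi))))\n\\end{verbatim}\nUp to the orientation convention chosen for the \\((\\mt{k},s)\\) torus, this reproduces \\(|C|=1\\) for the lowest band, consistently with the value \\(C=-1\\) quoted above.\n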
To represent the Hamiltonian part in Wannier basis, we evaluate the matrix elements of the Hamiltonian over the Wannier states\n\\begin{equation}\\label{H_ww}\n\\begin{split}\n & \\langle w^{(n')}(l'), H \\, w^{(n)}(l) \\rangle \\\\ & = \\frac{1}{N} \\, \\sum^{N-1}_{\\mt{k}, \\mt{k}'=0} \\, e^{i(2\\pi\/N)(\\mt{k}'l'-\\mt{k}l)} \\, \\langle \\psi^{(n')}(\\mt{k}'), \\, H \\, \\psi^{(n)}(\\mt{k}) \\rangle \\\\ \n & = \\delta_{nn'} \\, \\cdot \\, \\frac{1}{N} \\, \\sum^{N-1}_{\\mt{k}=0} \\, e^{i(2\\pi\/N)\\mt{k}(l'-l)} \\, \\, \\epsilon^{(n)}_{\\mt{k}} \\, = \\delta_{nn'} \\, \\cdot \\, \\omega_{l'-l} \\, ,\n \\end{split}\n\\end{equation}\nwhere \\(\\omega_{l} = 1\/N \\, \\sum^{N-1}_{\\mt{k}=0} \\, e^{i(2\\pi\/N)\\mt{k}(l)} \\, \\epsilon^{(n)}_{\\mt{k}}\\) is the Fourier transform of the Bloch band \\(\\epsilon^{(n)}_{\\mt{k}}\\); see main text.\n\nNext, we express the nonlinearity in terms of Wannier functions,\n\\begin{equation}\\label{www}\n\\begin{split}\n & \\langle w^{(n)}(l), \\, |\\phi|^2 \\, \\phi\n \\rangle \n \\\\ & = \\sum_{n_1, n_2, n_3} \\sum_{l_1, l_2, l_3 } \\, \\\\ & \n \\Bigg ( \\, \\sum_{\\vc{i}} \\,\n w^{(n)*}_{\\vc{i}}(l)w^{(n_1)*}_{\\vc{i}}(l_1)w^{(n_2)}_{\\vc{i}}(l_2)w^{(n_3)}_{\\vc{i}}(l_3) \\, \\, \\Bigg )\n \\, \\\\ & \\times\n a^{(n_1)*}_{l_1} \\, a^{(n_2)}_{l_2} \\, a^{(n_3)}_{l_3} \\, .\n\\end{split}\n\\end{equation}\nTaking the inner product of Eq.~\\eqref{NLS_ij} with \\(w^{(n)}_{l}\\) and using Eqs.~\\eqref{H_ww} and \\eqref{www}, we obtain the following DNLS\n\\begin{equation}\\label{NLSE_wannier_SM}\n\\begin{split}\n \\mu_{s} \\, a^{(n)}_{l} & = \\sum_{l_1} \\, \\omega_{l-l_1} \\, a^{(n)}_{l_1} \\, \\\\ & - g \\sum_{n_1,n_2,n_3} \\sum_{l_1,l_2,l_3} W^{(\\underline{n})}_{\\underline{l}} a^{(n_1)*}_{l_1} \\, a^{(n_2)}_{l_2}\\, a^{(n_3)}_{l_3} \\, .\n\\end{split}\n\\end{equation}\n\n{\\bf Derivation of the soliton center-of-mass displacement.} Here, we prove that the quantized displacement of the solitons center-of-mass is determined by the Chern number of the related Bloch band. For later convenience, we derive the following identity for matrix elements of position operator over the Wannier functions,\n\\begin{equation}\\label{useful_id}\n\\begin{split}\n & \\langle w^{(n)}(l'), X w^{(n)}(l) \\rangle \\\\ & = \\langle w^{(n)}({l'-l}), (T^{\\dagger}_{l} \\, X \\, T_{l}) \\, w^{(n)}({0}) \\rangle \\\\ \n & = \\langle w^{(n)}({l'-l}), X \\, w^{(n)}({0}) \\rangle + l \\, \\langle w^{(n)}({l'-l}), \\, w^{(n)}({0}) \\rangle \\\\ \n & = \\langle w^{(n)}({l'-l}), X \\, w^{(n)}({0}) \\rangle + l \\, \\delta_{ll'} \n\\end{split}\n\\end{equation}\nwhere \\(T_{l}\\) is the translation operator by \\(l\\) unit cells. In deriving Eq.~\\eqref{useful_id} we used the relation \\(T^{\\dagger}_{l} \\, X T_{l} = X + l \\,\\) together with the orthogonality of Wannier functions. 
The soliton center-of-mass then reads \n\\begin{equation}\\label{sol_CM}\n\\begin{split}\n & \\langle\n \\varphi^{(n)}, X \\varphi^{(n)}\n \\rangle_s\n \\\\ & = \n \\sum_{l,l'} \\, \n a^{(n)*}_{l'} \\, a^{(n)}_{l} \\,\n \\langle \n w^{(n)}(l'), X w^{(n)}(l)\n \\rangle_s \\\\ \n & = \\sum_{l} \\, \n |a^{(n)}_{l}|^2 \\,\n \\langle \n w^{(n)}(l), X w^{(n)}(l) \n \\rangle_s\n \\\\ & \n + \\sum_{l\\neq l'} \\, \n a^{(n)*}_{l'} \\, a^{(n)}_{l} \\,\n \\langle \n w^{(n)}(l'), X w^{(n)}(l)\n \\rangle_s \\\\\n & = \\Big ( \\, \\sum_{l} \\, \n |a^{(n)}_{l}|^2 \\, \\Big ) \\,\n \\langle \n w^{(n)}(\\mt{0}), X w^{(n)}(\\mt{0}) \n \\rangle_s \n \\\\ & +\n \\Big ( \\,\n \\sum_{l} \\, \n |a^{(n)}_{l}|^2 \\,\n l\n \\, \\Big )\n \\, \\langle \n w^{(n)}(\\mt{0}), w^{(n)}(\\mt{0}) \n \\rangle_s \\\\\n & + \\sum_{\\delta l\\neq0} \\, \\Big ( \\, \\sum_{l} \\, \n a^{(n)*}_{l+\\delta l} \\, a^{(n)}_{l} \n \\, \\Big ) \\,\n \\langle \n w^{(n)}(\\delta l), X w^{(n)}(\\mt{0})\n \\rangle_s \\, , \n \\end{split}\n\\end{equation}\nwhere we used Eq.~\\eqref{useful_id} in the last equality. The first term in the last equality of Eq.~\\eqref{sol_CM} reduces to \\(\\langle \n w^{(n)}(\\mt{0}), X w^{(n)}(\\mt{0}) \n \\rangle_s\\) since we normalized the soliton intensity to unity, \\(N_{\\phi} = \\sum_{l} \\, |a^{(n)}_{l}|^2 = 1\\). The second term in the last expression is the mean value of the position of the Wannier functions indices, which is constant since the on-site solution is always peaked around a Wannier label and remains symmetric around it. Its contribution to the displacement over a pump cycle thus vanishes. The third term contains products of the form \\( \\big ( \\, \\sum_{l} \\, \n a^{(n)*}_{l+\\delta l} \\, a^{(n)}_{l} \\, \\big ) \\langle \n w^{(n)}(\\delta l), X w^{(n)}(\\mt{0})\n \\rangle_s \\) and its treatment requires more care. The coefficient \\( \\big ( \\, \\sum_{l} \\, \n a^{(n)*}_{l+\\delta l} \\, a^{(n)}_{l} \\, \\big ) \\) is time-periodic, since \\(a^{(n)}_{l}\\) is, by assumption, the solution of the scalar DNLS in Eq.~\\eqref{NLSE_Wannier_simple}, in the main text. To investigate the behavior of \\(\\langle \n w^{(n)}(\\delta l), X w^{(n)}(\\mt{0})\n \\rangle_s\\), we note that after a pump cycle, the Wannier functions are displaced by the Chern number, \\( w^{(n)}(l)|_{s=1} = w^{(n)}(l+\\mathcal{C}_n)|_{s=0} \\), with \\(\\mathcal{C}_{n}\\) the Chern number of band \\(n\\). Thus, after a pump cycle, we have \n \\begin{equation}\\label{Cross_wan}\n \\begin{split}\n \\langle \n & w^{(n)}(\\delta l), X w^{(n)}(\\mt{0})\n \\rangle|_{s=1} \\\\ & = \n \\langle \n w^{(n)}(\\delta l+\\mathcal{C}_n), X w^{(n)}(\\mathcal{C}_n)\n \\rangle|_{s=0} \\\\ & = \n \\langle \n w^{(n)}(\\delta l), X w^{(n)}(0)\n \\rangle|_{s=0} \\, ,\n \\end{split}\n \\end{equation}\n where we used Eq.~\\eqref{useful_id} in the last step. This proves that the quantity \\(\\langle \n w^{(n)}(\\delta l), X w^{(n)}(0)\n \\rangle|_s\\), in the last equality of Eq.~\\eqref{sol_CM}, is a time-periodic quantity. 
\n \n Altogether, the third term in Eq.~\\eqref{sol_CM} is also time-periodic, and the soliton's center-of-mass displacement over a pump cycle is given by\n \\begin{equation}\n \\Delta \\langle\n \\varphi^{(n)}, X \\varphi^{(n)}\n \\rangle = \\Delta \\,\n \\langle \n w^{(n)}(\\mt{0}), X w^{(n)}(\\mt{0}) \n \\rangle \\, .\n \\end{equation}\n This result directly relates the soliton's displacement to the displacement of Wannier functions upon one pump cycle, as dictated by the Chern number of the band~\\cite{asboth2016short,mei2014topological,lohse2016thouless}. This proves the quantized pumping of the soliton according to the Chern number. \\\\\n \n\n \n \n \n{\\bf Derivation of the Bose-Bose mixture equations.} In order to derive the equations governing the coherent state profiles of the two species in the mixture, we start from the microscopic Hamiltonian in Eq.~\\eqref{eq:imp_bos_Ham}. The coherent-state action of the system takes the following form (\\(\\hbar=1\\)),\n\\begin{equation}\\label{CS_action}\n\\begin{split}\n S[\\bar{\\phi}, \\phi; \\bar{\\sigma}, \\sigma] & = \\int^{t_f}_{t_i} \\, dt \\, L[\\bar{\\phi},\\phi;\\bar{\\sigma},\\sigma] \\, ,\n\\end{split}\n\\end{equation}\nwith the Lagrangian \n\\begin{equation}\n\\begin{split}\n & L[\\bar{\\phi},\\phi;\\bar{\\sigma},\\sigma] \\\\ & = \n \\sum_{\\vc{i}} \\, \\bar{\\phi}_{\\vc{i}} \\, \n \\big [ \\, \\, i \\partial_t + \\mu_{\\phi} \n \\, \\big ] \n \\phi_{\\vc{i}}\n - \\sum_{\\langle \\vc{i},\\vc{j} \\rangle} \\, \\bar{\\phi}_{\\vc{i}} \\, t_{\\phi} H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}}\n \\, - \\sum_{\\vc{i}} \\, \\frac{g_{\\phi \\phi}}{2} \\, |\\phi_{\\vc{i}}|^4 \\, \n \\\\ &\n +\n \\sum_{\\vc{i}} \n \\, \\bar{\\sigma}_{\\vc{i}} \\,\n \\big [ \\, \\, i \\partial_t + \\mu_{\\sigma} \n \\, \\big ] \n \\sigma_{\\vc{i}}\n - \\sum_{\\langle \\vc{i},\\vc{j} \\rangle} \\, \\bar{\\sigma}_{\\vc{i}} \\, H^{(\\sigma)}_{\\vc{i}\\vc{j}} \\, \\sigma_{\\vc{j}}\n - \\sum_{\\vc{i}} \\, \\frac{g_{\\sigma \\sigma}}{2} \\, |\\sigma_{\\vc{i}}|^4 \\, \\\\ & \n -\n \\sum_{\\vc{i}} \n g_{\\phi \\sigma} \\,\n |\\sigma_{\\vc{i}}|^2 |\\phi_{\\vc{i}}|^2 .\n\\end{split}\n\\end{equation}\nTo proceed, we seek stationary state solutions for the coherent state fields of the form \\(\\phi^{(\\mt{ss})}_{\\vc{i}}(t)=e^{-i\\omega_0 t}\\,\\phi_{\\vc{i}}\\) and \\(\\sigma^{(\\mt{ss})}_{\\vc{i}}(t)=e^{-i\\omega_0 t}\\,\\sigma_{\\vc{i}}\\), which minimize \\( L[\\bar{\\phi},\\phi;\\bar{\\sigma},\\sigma]\\). Such solutions are the saddle-point solutions of the quantum mechanical action, giving the mean-field stable states of the system. 
The Lagrangian then takes the time-independent form\n\\begin{equation}\n\\begin{split}\n & L[\\bar{\\phi},\\phi;\\bar{\\sigma},\\sigma] \\\\ & = \n \\sum_{\\vc{i}} \\, \\bar{\\phi}_{\\vc{i}} \\,\n \\big [ \\, \\, \\omega_0 + \\mu_{\\phi} \n \\, \\big ] \n \\phi_{\\vc{i}}\n - \\sum_{\\langle \\vc{i},\\vc{j} \\rangle} \\, \\bar{\\phi}_{\\vc{i}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}}\n - \\sum_{\\vc{i}} \\, \\frac{g_{\\phi \\phi}}{2} \\, |\\phi_{\\vc{i}}|^4 \\,\n \\\\ &\n +\n \\sum_{\\vc{i}} \n \\, \\bar{\\sigma}_{\\vc{i}} \\,\n \\big [ \\, \\, \\omega_0 + \\mu_{\\sigma} \n \\, \\big ] \n \\sigma_{\\vc{i}}\n - \\sum_{\\langle \\vc{i},\\vc{j} \\rangle} \\, \\bar{\\sigma}_{\\vc{i}} \\, H^{(\\sigma)}_{\\vc{i}\\vc{j}} \\, \\sigma_{\\vc{j}}\n - \\sum_{\\vc{i}} \\, \\frac{g_{\\sigma \\sigma}}{2}\\, |\\sigma_{\\vc{i}} |^4 \\, \\\\ & \n - \\sum_{\\vc{i}} g_{\\phi \\sigma}\n \\, |\\sigma_{\\vc{i}}|^2 \\,\n |\\phi_{\\vc{i}}|^2.\n\\end{split}\n\\end{equation}\nTo minimize the Lagrangian, the corresponding Euler-Lagrange equations are derived from \\(\\delta L\/\\delta \\, \\bar{\\phi}_{\\vc{i}}=0\\, \\) and \\(\\delta L\/\\delta \\, \\bar{\\sigma}_{\\vc{i}}=0\\,\\), which leads to the two coupled equations in Eq.~\\eqref{var_eq_12_main} in the main text. \n\nIn the limiting case of heavy impurities, we neglect their kinetic-energy contributions ($H^{(\\sigma)}_{\\vc{i}\\vc{j}}$) to Eq.~\\eqref{var_eq_12_main}, the so-called Thomas-Fermi approximation. In this case, the second equation in Eq.~\\eqref{var_eq_12_main} reduces to \\( (\\omega_0+\\mu_{\\sigma}) = g_{\\phi \\sigma}|\\phi_{\\vc{i}}|^2 + g_{\\sigma \\sigma} \\, |\\sigma_{\\vc{i}}|^2 \\). For the bright soliton solutions of Eq.~\\eqref{var_eq_12_main}, \\(\\phi_{\\vc{i}}\\) and \\( \\sigma_{\\vc{i}} \\) decay exponentially away from the soliton center, thus, to zeroth order in the impurities hopping strength, \\(\\omega_0+\\mu_{\\sigma}=0\\). Eq.~\\eqref{var_eq_12_main} then reduce to\n\\begin{equation}\\label{var_eq_21}\n (\\omega_0+\\mu_{\\phi}) \\, \\phi_{\\vc{i}} = \\sum_{\\vc{j}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}} \n + \\Big ( \\, g_{\\phi \\phi}|\\phi_{\\vc{i}}|^2 \\, \n + g_{\\phi \\sigma}|\\sigma_{\\vc{i}}|^2 \\, \\Big ) \\phi_{\\vc{i}} \\, , \\\\ \n\\end{equation}\n\\begin{equation}\\label{var_eq_22}\n |\\sigma_{\\vc{i}}|^2 = -g_{\\phi \\sigma}\/g_{\\sigma\\sigma} \\, |\\phi_{\\vc{i}}|^2 \\, .\n\\end{equation}\nInserting Eq.~\\eqref{var_eq_22} into Eq.~\\eqref{var_eq_21}, we obtain an effective DNLS for \\(\\phi_{\\vc{i}}\\),\n\\begin{equation}\\label{var_eq_23}\n (\\omega_0+\\mu_{\\phi}) \\, \\phi_{\\vc{i}} = \\sum_{\\vc{j}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}} \n + \\big ( \\, g_{\\phi \\phi} \\, \n - g^2_{\\phi \\sigma}\/g_{\\sigma \\sigma} \\, \\big ) |\\phi_{\\vc{i}}|^2 \\phi_{\\vc{i}} \\, , \\\\ \n\\end{equation}\nwith the effective nonlinearity strength \\(g = -g_{\\phi \\phi} \\, \n+ g^2_{\\phi \\sigma}\/g_{\\sigma \\sigma} \\,\\), which for \\(g_{\\phi \\phi} g_{\\sigma \\sigma} < g^2_{\\phi \\sigma}\\) corresponds to a defocusing nonlinearity. 
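\nAs a simple consistency check (added here for concreteness), inserting the interaction parameters quoted in the main text for Fig.~\\ref{Fig_pump_mixutre}, namely \\(g_{\\phi \\phi} \\simeq 0.226 \\, J_0\\), \\(g_{\\phi \\sigma} \\simeq -11.32 \\, J_0\\) and \\(g_{\\sigma \\sigma} \\simeq 2.26 \\, J_0\\), gives\n\\begin{equation*}\ng = -g_{\\phi \\phi} + g^{2}_{\\phi \\sigma}\/g_{\\sigma \\sigma} \\simeq \\Big( -0.226 + \\frac{(11.32)^2}{2.26} \\Big) J_0 \\simeq 56.5 \\, J_0 \\, ,\n\\end{equation*}\nso that the condition \\(g_{\\phi \\phi} \\, g_{\\sigma \\sigma} < g^{2}_{\\phi \\sigma}\\) (here \\(0.226 \\cdot 2.26 \\simeq 0.51\\) versus \\((11.32)^2 \\simeq 128\\), in units of \\(J^2_0\\)) is comfortably satisfied for the parameters used in our simulations.\n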
\\\\\n\n{\\bf Variational ansatz for the state of Bose-Bose mixture in the Thomas-Fermi limit.} The variational treatment of Eqs.~\\eqref{profile_match} and \\eqref{EOM_HS} amounts to minimizing the following energy functional for the field \\(\\phi\\)\n\\begin{equation}\\label{funcnl_GPE}\n\\begin{split}\n H[\\bar{\\phi}, \\phi] & = \\sum_{\\vc{i},\\vc{j}} \\, \\bar{\\phi}_{\\vc{i}} \\, H^{(\\phi)}_{\\vc{i}\\vc{j}} \\, \\phi_{\\vc{j}} - \\frac{g}{2} \\sum_{\\vc{i}} \\, \\vert\\phi_{\\vc{i}}\\vert^4 \\\\ & - \\mu_{\\phi} \\, \\Big ( \\, \\sum_{\\vc{i}} \\, \\vert\\phi_{\\vc{i}}\\vert^2 - N_{\\phi}\\, \\Big ) \\, . \n\\end{split}\n\\end{equation}\nFrom the knowledge obtained from the soliton solutions of the DNLS in the main text, we assume that \\(\\phi_{\\vc{i}}\\) belongs to a single band and expand it in terms of the Wannier functions of the corresponding band, \\(\\phi_{\\vc{i}} \\!=\\! \\sum_{l} \\, a^{(n)}_{l} \\, w^{(n)}(l)\\). We then use a sech variational ansatz for the coefficient amplitudes, \\(a^{(n)}_{l} \\!=\\! \\eta \\, \\mt{sech}\\big ( \\, \\xi(l-l_0) \\, \\big ) \\). The variational energy functional takes the following form \n\\begin{equation}\\label{H_var}\n\\begin{split}\nH\/\\eta^2 & = \\omega_0 \\, N_{\\phi}\/\\eta^2 + \\sum^{\\infty}_{n=1} \\, \\frac{4n}{\\mt{sinh}(\\xi n)} \\omega_n \\\\ & - \\frac{2 \\,g}{3} \\eta^2 \\Bigg [ \\, \\frac{1}{\\xi} + \\sum^{\\infty}_{m=1} \\frac{2 \\pi^2}{ \\xi^2} \\Big ( \\, 1 + \\frac{\\pi^2 m^2}{\\xi^2} \\, \\Big ) \\frac{m \\, \\mt{cos}(2\\pi m l_0)}{\\mt{sinh}(\\frac{\\pi^2 m}{\\xi})} \\, \\Bigg ] \\, ,\n\\end{split}\n\\end{equation}\nsubject to the constraint \\(N_{\\phi}=\\mt{const.}\\), where\n\\begin{equation}\\label{N_var}\n N_{\\phi}\/\\eta^2 = \\frac{2}{\\xi} + \\sum^{\\infty}_{m=1} \\, \\frac{4\\pi^2}{\\xi^2} \\, \\frac{m \\, \\mt{cos}(2 \\pi m l_0)}{\\mt{sinh}(\\frac{\\pi^2 m}{\\xi})} \\, .\n\\end{equation}\nFor the simulations presented in the main text [Fig.~\\ref{Fig_three}], we assume that \\(N_{\\phi}\\!=\\!1 \\,\\); see Refs.~\\cite{kevrekidis2009discrete,kevrekidis2003instabilities} for more details on variational ans\\\"{a}tze for the DNLS. From the minimization of Eq.~\\eqref{H_var} subject to the constraint~\\eqref{N_var} we then obtain the boson field \\(\\phi_{\\vc{i}}\\), which is then used to obtain the effective attractive potential \\(u^{\\mt{MF}}_{\\vc{i}}=g|\\phi_{\\vc{i}}|^2\\,\\); see Eq.~\\eqref{EOM_HS}. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nIn\nthis article we study the deformations of smooth surfaces $X$ of general type whose canonical map is a finite, degree $2$ morphism onto a minimal rational surface $Y$, embedded in projective space by a very ample complete linear series. Minimal rational surfaces are the projective plane and the Hirzebruch surfaces $\\mathbf F_e$ with $e \\neq 1$; nevertheless, for completeness' sake we will include canonical double covers of all Hirzebruch surfaces (also the non--minimal $\\mathbf F_1$) in our study. \n\n\n\\medskip\nOur main result is Theorem~\\ref{coversofP2andruled}, in which we show that any deformation of the canonical morphism of most such surfaces $X$ is again a morphism of degree $2$. Thus in Section~\\ref{non.existence.onminimal.section} we study what the deformations of the canonical morphism $\\varphi$ of $X$ look like. Since $\\varphi$ is by hypothesis finite and of degree $2$ onto its image, any deformation of $\\varphi$ has to be also a finite morphism and either of degree $2$ or of degree $1$. 
A priori, it is not clear which of the two possibilities should happen. On the one hand, it is well--known (see~\\cite{Hor2p_g-4}) that if $\\varphi$ is a double cover of a surface of minimal degree, any deformation of $\\varphi$ has again degree $2$. Of course, there is a good reason for this to occur, namely, the invariants of a canonical double cover of a surface of minimal degree lie below Castelnuovo's line $c_1^2=3p_g-7$ (recall that no surface below Castelnuovo's line can have a birational canonical map). On the other hand, however, if $\\, Y$ is embedded by an arbitrary very ample complete linear series, there are many surfaces $X$ whose invariants lie on or above Castelnuovo's line (see the display below for a concrete instance). Thus the existence of deformations of $\\varphi$ to degree $1$ morphisms would be plausible in these cases (for instance, Examples~\\ref{example39} and~\\ref{example45} show the existence of surfaces of general type with very ample canonical divisors and the same invariants as certain canonical double covers of minimal rational surfaces). Moreover, if we consider canonical double covers of non--minimal rational surfaces, there exist cases for which the canonical morphism can be deformed to a morphism of degree $1$ (see~\\cite[4.5]{AK} and~\\cite[Theorem 3.14]{MP}). In this article we completely settle this matter when $Y$ is $\\mathbf P^2$ or a Hirzebruch surface and the branch divisor of the canonical double cover is base--point--free. In Theorem~\\ref{coversofP2andruled}\nwe show that any deformation of $\\varphi$ is again a degree $2$ morphism. Theorem~\\ref{coversofP2andruled} can be rephrased in terms of the moduli space in the following way: the irreducible component containing $[X]$ in its moduli space parametrizes surfaces whose canonical map is a finite, degree $2$ morphism. Theorem~\\ref{coversofP2andruled} therefore contrasts with the results\nof~\\cite[4.5]{AK} and~\\cite[Theorem 3.14]{MP} mentioned above; instead, Theorem~\\ref{coversofP2andruled} generalizes\nthe behavior of the deformations of canonical covers of surfaces of minimal degree mentioned earlier. 
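\nTo make the comparison with Castelnuovo's line concrete, note that, by the formulas of Proposition~\\ref{cancover.P2.invariants} below, for the canonical double covers of $\\mathbf P^2$ embedded by $|\\mathcal O_{\\mathbf P^2}(d)|$ a straightforward computation gives\n\\begin{equation*}\nc_1^2-(3p_g-7)=2d^2-\\Big(\\frac{3}{2}d^2+\\frac{9}{2}d-4\\Big)=\\frac{1}{2}(d-1)(d-8),\n\\end{equation*}\nso the invariants of these surfaces lie on or above Castelnuovo's line as soon as $d \\geq 8$, even though their canonical map is a finite morphism of degree $2$. 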
\nThis variety of behaviors\nshows how different and\ncomplicated the moduli spaces of surfaces of general type that are canonical double covers of arbitrarily embedded rational surfaces can be when compared with either the\nmoduli of curves or, for that matter, the moduli of surfaces of general type that are canonical double covers of surfaces of minimal degree.\n\n\n\\medskip\n\n\n\n\nThe key point for the proof of Theorem~\\ref{coversofP2andruled} is the vanishing\nof Hom$(\\mathcal I\/\\mathcal I^2,\\omega_Y(-1))$, which we prove in Proposition~\\ref{nonexist.canonical.structures}.\nThis vanishing can be interpreted as the non--existence, at an ``infinitesimal'' level, of deformations\nof $\\varphi$ to degree $1$ morphisms.\nThus Theorem~\\ref{coversofP2andruled} shows how infinitesimal properties translate into global\nstatements such as the fact that the moduli component\ncontaining\n$[X]$ parametrizes surfaces whose canonical map is a degree $2$ morphism.\nOn the other hand, finite morphisms that can be deformed to morphisms of degree $1$ are linked\nto the existence of certain multiple structures embedded in projective\nspace (see~\\cite{Fong}, \\cite{GPcarpets}, \\cite{Gon} and~\\cite{Compositio}).\nThus, as a by--product of Proposition~\\ref{nonexist.canonical.structures}, in Section~\\ref{noropes}\nwe prove the non--existence of ``canonically'' embedded\n(i.e., such that the dualizing sheaf is the restriction of $\\mathcal O_{\\mathbf P^N}(1)$)\nmultiple structures on $\\mathbf P^2$ and on Hirzebruch surfaces.\nIn particular, this means that there do not exist canonically embedded double structures on smooth\nsurfaces of minimal degree. This fact is interesting since there do exist double structures of other\nkinds on a surface of minimal degree (see ~\\cite{GPcarpets} where the existence of double\nstructures on rational normal scrolls with the invariants of a smooth $K3$ surface is shown).\n\n\\medskip\n\n\n\nFinite covers of rational surfaces have interesting implications for the geography and the moduli of surfaces of general type (see e.g.~\\cite{Cat.moduli} \n or~\\cite{Hor2p_g-4}). In Section~\\ref{moduli.section} we chart the region in the geography\ncovered by the double covers $X$ of Theorem~\\ref{coversofP2andruled}. This is done in Propositions~\\ref{cancover.P2.invariants} and~\\ref{cancover.ruled.invariants} and in Remark ~\\ref{cancover.ruled.geography}. The Chern quotient $\\frac{c_1^2}{c_2}$ of our surfaces approaches $\\frac{1}{2}$; \n on the other hand, our region is not contained but goes well inside the region above Castelnuovo's line. In the second part of Section~\\ref{moduli.section} we explore the moduli space of $[X]$. 
First, in Proposition~\\ref{moduli.dim}, we compute the dimension of the irreducible moduli component of $[X]$, which is $2d^2+15d+19$ if $\\, Y$ is $\\mathbf P^2$ embedded by $|\\mathcal O_{\\mathbf P^2}(d)|$ and $(2a+5)(2b-ae+5)-7$ if $\\, Y$ is $\\mathbf F_e$ embedded by an arbitrary very ample linear series\n$|\\mathcal O_Y(aC_0+bf)|$.\nFinally we go a bit further in studying the complexity of some\nof these\nmoduli spaces.\nWe\nfind examples (see Examples~\\ref{example39} and~\\ref{example45}) of moduli spaces having two kind of components: components parametrizing surfaces which can be canonically embedded and components parametrizing surfaces whose canonical map is a degree $2$ morphism.\n\n\n\n\\section{Canonical double covers of\nHirzebruch surfaces and $\\mathbf P^2$}\\label{non.existence.onminimal.section}\n\n\n\nIn this section we study how the canonical morphism of canonical double covers of $\\mathbf P^2$ and of Hirzebruch surfaces deforms.\nWe introduce the notation that we will use in this section and in the remaining of the article:\n\n\n\\begin{noname}\\label{setup}\n{\\bf Set--up and notation:} {\\rm Throughout this article \nwe will use the following notation and will follow these conventions:\n\\begin{enumerate}\n\\item We will work over an algebraically closed field $\\mathbf k$ of characteristic $0$.\n\\item $X$ and $Y$ will be smooth, irreducible projective varieties.\n\\item $i$ will denote a projective embedding $i: Y \\hookrightarrow \\mathbf P^N$ induced by a complete linear series on $Y$. In this case, $\\mathcal I$ will denote the ideal sheaf of $i(Y)$ in $\\mathbf P^N$. Likewise, we will often abridge $i^*\\mathcal O_{\\mathbf P^N}(1)$ as $\\mathcal O_Y(1)$.\n\\item $\\pi$ will denote a finite morphism $\\pi: X \\longrightarrow Y$ of degree $n \\geq 2$; in this case, $\\mathcal E$ will denote the trace--zero module of $\\pi$ ($\\mathcal E$ is a vector bundle on $Y$ of rank $n-1$).\n\\item $\\varphi$ will denote a projective morphism $\\varphi: X \\longrightarrow \\mathbf P^N$ such that $\\varphi= i \\circ \\pi$.\n\\item $\\mathbf F_e$ will denote the Hirzebruch surface whose minimal section $C_0$ has self--intersection $C_0^2=-e$. We will denote by $f$ the fiber of $\\mathbf F_e$ onto $\\mathbf P^1$.\n\\end{enumerate}}\n\\end{noname}\n\n\\noindent\nFor the reader's convenience, we include here without a proof two results from~\\cite{MP} (Theorem 2.6 and Lemma 3.9) that we will use throughout the article. To state the first one we need also Proposition 3.7 from~\\cite{Gon}.\nWe assume the notation just stated but point out that Proposition~\\ref{morphism.miguel} and Theorem~\\ref{Psi2=0} hold also for embeddings $i$ induced by linear series non necessarily complete.\n\n\n\n\\begin{proposition}\\label{morphism.miguel} \nThere exists a homomorphism\n\\begin{equation*}\n H^0(\\mathcal N_\\varphi) \\overset{\\Psi}\\longrightarrow \\mathrm{Hom}(\\pi^*(\\mathcal I\/\\mathcal I^2), \\mathcal O_X),\n\\end{equation*}\nthat appears when taking cohomology on the commutative diagram~\\cite[(3.3.2)]{Gon}. 
Since\n\\begin{equation*}\n\\mathrm{Hom}(\\pi^*(\\mathcal I\/\\mathcal I^2), \\mathcal O_X)=\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\pi_*\\mathcal O_X)=\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal O_Y) \\oplus \\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal E)\n\\end{equation*}\nthe homomorphism $\\Psi$ has two components\n\\begin{eqnarray*}\nH^0(\\mathcal N_\\varphi) & \\overset{\\Psi_1} \\longrightarrow & \\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal O_Y) \\cr\nH^0(\\mathcal N_\\varphi) & \\overset{\\Psi_2} \\longrightarrow & \\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal E).\n\\end{eqnarray*}\n\\end{proposition}\n\n\\begin{theorem}\\label{Psi2=0}\nWith the notation of~\\ref{setup}\nand of Proposition~\\ref{morphism.miguel}, let $X$ be a smooth variety of general type of dimension $m \\geq 2$ with ample and base--point--free canonical bundle, let $\\varphi$ be its canonical morphism and assume that the degree of $\\pi$ is $n=2$.\nAssume furthermore that\n\\begin{enumerate}\n\\item $h^1(\\mathcal O_Y)=h^{m-1}(\\mathcal O_Y)=0$ (in particular, $Y$ is regular);\n\\item $h^1(\\mathcal O_Y(1))=h^{m-1}(\\mathcal O_Y(1))=0$;\n\\item $h^0(\\omega_Y(-1))=0$;\n\\item $h^1(\\omega_Y^{-2}(2))=0$;\n\\item the variety $Y$ is unobstructed in $\\mathbf P^N$ (i.e., the base of the universal deformation space of $Y$ in $\\mathbf P^N$ is smooth); and\n\\item $\\Psi_2 = 0$.\n\\end{enumerate}\nThen\n\\begin{enumerate}\n\\item[(a)] $X$ and $\\varphi$ are unobstructed (for a definition of unobstructedness, see e.g.~\\cite{Ser}), and\n\\item[(b)] any deformation of $\\varphi$ is a (finite) canonical morphism of degree $2$. Thus the canonical map of a variety corresponding to a general point of the component of $X$ in its moduli space is a finite morphism of degree $2$.\n\\end{enumerate}\n \\end{theorem}\n\n\n\\begin{lemma}\\label{isom.Hom.Ext} \nLet $S$ be a (smooth) surface, embedded in $\\mathbf P^N$, let $\\mathcal J$ be the ideal sheaf of $S$ in $\\mathbf P^N$ and consider the connecting homomorphism\n\\begin{equation*}\n\\mathrm{Hom}(\\mathcal J\/\\mathcal J^2,\\omega_{S}(-1)) \\overset{\\delta} \\longrightarrow \\mathrm{Ext}^1(\\Omega_{S},\\omega_{S}(-1)).\n\\end{equation*}\n\\begin{enumerate}\n\\item If $p_g(S)=0$ and $h^1(\\mathcal O_{S}(1))=0$, then $\\delta$ is injective;\n\\item if $q(S)=0$ and $S$ is embedded by a complete linear series,\nthen $\\delta$ is surjective.\n\\end{enumerate}\n\\end{lemma}\n\n\\noindent\nAs we have seen in~\\cite{Gon}, \\cite{Compositio} or~\\cite{MP}, the study of the deformations of $\\varphi$ is linked to the space Hom$(\\mathcal I\/\\mathcal I^2, \\mathcal E)$. Since, as we will see later, if $\\varphi$ is a canonical double cover (and $h^0(\\omega_Y(-1)=0$), then $\\mathcal E=\\omega_Y(-1)$, we will study $\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal \\omega_Y(-1))$ in the next proposition:\n\n\n\n\\begin{proposition}\\label{nonexist.canonical.structures} Let $Y$ be either $\\mathbf P^2$ or a Hirzebruch surface and let $i$ be as in \\ref{setup}.\nThen\n $\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal \\omega_Y(-1))=0$.\n\\end{proposition}\n\n\\begin{proof}\n First we argue when $Y$ is a Hirzebruch surface. Let $\\mathcal O_Y(1)=\\mathcal O_Y(aC_0+bf)$. 
\nSince \\linebreak\n$\\mathcal O_Y(aC_0+bf)$ is very ample, we have\n\\begin{equation}\\label{Hirz.veryample}\n b-ae \\geq 1.\n\\end{equation}\nWe can apply Lemma~\\ref{isom.Hom.Ext};\nindeed, $p_g(Y)=0$, $h^1(\\mathcal O_Y(1))=0$,\n$q(Y)=0$\nand $h^2(\\mathcal O_Y(1))=0$.\nThen, Lemma~\\ref{isom.Hom.Ext}\nsays that it suffices to see that $\\mathrm{Ext}^1 (\\Omega_Y, \\omega_Y(-1))=0$.\nFor this, apply~\\cite[Proposition II.8.11]{Hart} to the fibration of $Y$ to $\\mathbf P^1$ (see also the proof of~\\cite[Proposition 1.7]{GPcarpets}) and get the sequence\n\\begin{equation}\\label{fibration}\n0 \\longrightarrow \\mathcal O_Y(-2f) \\longrightarrow \\Omega_Y \\longrightarrow \\mathcal O_Y(-2C_0-ef) \\longrightarrow 0.\n\\end{equation}\nThen, applying Hom$(-,\\omega_Y(-1))$ we get\n\\begin{equation*}\n \\mathrm{Ext}^1(\\mathcal O_Y(-2C_0-ef), \\omega_Y(-1)) \\longrightarrow \\mathrm{Ext}^1(\\Omega_Y, \\omega_Y(-1)) \\longrightarrow \\mathrm{Ext}^1(\\mathcal O_Y(-2f), \\omega_Y(-1)).\n\\end{equation*}\nNow $\\mathrm{Ext}^1(\\mathcal O_Y(-2C_0-ef), \\omega_Y(-1))\n=H^1(\\mathcal O_Y((a-2)C_0+(b-e)f))^{\\vee}=0$ (by pushing down to $\\mathbf P^1$ and because of~\\eqref{Hirz.veryample}). Also\n$\\mathrm{Ext}^1(\\mathcal O_Y(-2f), \\omega_Y(-1))=H^1(\\mathcal O_Y(aC_0+(b-2)f))^{\\vee}=0$ (again by pushing down to $\\mathbf P^1$ and because of~\\eqref{Hirz.veryample}), so $\\mathrm{Ext}^1(\\Omega_Y, \\omega_Y(-1))=0$.\nThen, by Lemma~\\ref{isom.Hom.Ext},\nit follows that Hom$(\\mathcal I\/\\mathcal I^2, \\omega_Y(-1))=0$.\n\n\\smallskip\n\n\\noindent Now we prove the proposition for $Y=\\mathbf P^2$ embedded by $\\mathcal O_{Y}(1)=\\mathcal O_{\\mathbf P^2}(d)$. Also in this occasion $p_g(Y)=0$, $h^1(\\mathcal O_Y(1))=0$,\n$q(Y)=0$\nand $h^2(\\mathcal O_Y(1))=0$, so\nby Lemma~\\ref{isom.Hom.Ext}\nit suffices to see that $\\mathrm{Ext}^1 (\\Omega_Y, \\omega_Y(-1))=0$.\nFor this we use the Euler sequence of $\\mathbf P^2$\n\\begin{equation}\\label{euler.P2}\n0 \\longrightarrow \\Omega_{\\mathbf P^2} \\longrightarrow H^0(\\mathcal O_{\\mathbf P^2}(1)) \\otimes \\mathcal O_{\\mathbf P^2}(-1) \\longrightarrow \\mathcal O_{\\mathbf P^2} \\longrightarrow 0.\n\\end{equation}\nTo the sequence~\\eqref{euler.P2} we apply the functor Hom$(-,\\omega_Y(-1))$ to obtain the exact sequence\n\\begin{equation*}\n0 \\longrightarrow \\mathrm{Ext}^1(\\Omega_{\\mathbf P^2},\\omega_Y(-1)) \\longrightarrow \\mathrm{Ext}^2(\\mathcal O_{\\mathbf P^2}, \\omega_Y(-1))\n\\longrightarrow \\mathrm{Ext}^2(H^0(\\mathcal O_{\\mathbf P^2}(1)) \\otimes \\mathcal O_{\\mathbf P^2}(-1), \\omega_Y(-1)).\n\\end{equation*}\nDualizing we get\n\\begin{equation*}\nH^0(\\mathcal O_{\\mathbf P^2}(d-1)) \\otimes H^0(\\mathcal O_{\\mathbf P^2}(1)) \\overset{\\alpha} \\longrightarrow H^0(\\mathcal O_{\\mathbf P^2}(d)) \\longrightarrow \\mathrm{Ext}^1(\\Omega_{\\mathbf P^2},\\omega_Y(-1))^{\\vee} \\longrightarrow 0.\n\\end{equation*}\nNow the multiplication map $\\alpha$ is surjective if $d \\geq 1$, so $\\mathrm{Ext}^1(\\Omega_{\\mathbf P^2},\\omega_Y(-1))$ vanishes and, by Lemma~\\ref{isom.Hom.Ext},\nso does Hom$(\\mathcal I\/\\mathcal I^2, \\omega_Y(-1))$.\n\\end{proof}\n\n\n\n\n\n\\noindent\nWe will use Proposition~\\ref{nonexist.canonical.structures} to obtain the main result of this section, Theorem~\\ref{coversofP2andruled}. In order to do this we need first the following\n\n\\begin{lemma}\\label{Yunobs}\nLet $Y$ be either $\\mathbf P^2$ or a Hirzebruch surface and let $i$ be as in~\\eqref{setup}. 
Then $H^1(\\mathcal N_{i(Y)\/\\mathbf P^N})=0$. In particular, $i(Y)$ is unobstructed in $\\mathbf P^N$.\n\\end{lemma}\n\n\\begin{proof}\nRecall that $H^1(\\mathcal N_{i(Y)\/\\mathbf P^N})=\\mathrm{Ext}^1(\\mathcal I\/\\mathcal I^2,\\mathcal O_Y)$, which fits into the exact sequence\n\\begin{equation}\\label{Yunobs.seq}\n \\mathrm{Ext}^1(\\Omega_{\\mathbf P^N}|_Y,\\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^1(\\mathcal I\/\\mathcal I^2,\\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^2(\\Omega_Y,\\mathcal O_Y).\n\\end{equation}\nWe want to see that both $\\mathrm{Ext}^1(\\Omega_{\\mathbf P^N}|_Y,\\mathcal O_Y)$ and $\\mathrm{Ext}^2(\\Omega_Y,\\mathcal O_Y)$ vanish. To handle the vanishing of $\\mathrm{Ext}^1(\\Omega_{\\mathbf P^N}|_Y,\\mathcal O_Y)$, consider the sequence\n\\begin{equation}\\label{Yunobs.seq2}\n\\mathrm{Ext}^1(\\mathcal O_Y^{N+1}(-1),\\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^1(\\Omega_{\\mathbf P^N}|_Y,\\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^2(\\mathcal O_Y,\\mathcal O_Y).\n\\end{equation}\nIt is clear that $\\mathrm{Ext}^1(\\mathcal O_Y^{N+1}(-1),\\mathcal O_Y)$ and $\\mathrm{Ext}^2(\\mathcal O_Y,\\mathcal O_Y)$ both vanish because $h^1(\\mathcal O_{Y}(1))=0$ and $p_g(Y)=0$.\n\\smallskip\n\\noindent To prove the vanishing of $\\mathrm{Ext}^2(\\Omega_Y,\\mathcal O_Y)$ we argue for $\\mathbf P^2$ and for Hirzebruch surfaces separately. If $Y=\\mathbf P^2$, after applying Hom$(-,\\mathcal O_Y)$ to~\\eqref{euler.P2} we see that $\\textrm{Ext}^2(\\Omega_{\\mathbf P^2},\\mathcal O_{\\mathbf P^2})$ fits into the exact sequence\n\\begin{equation*}\n\\mathrm{Ext}^2(\\mathcal O_{\\mathbf P^2}(-1), \\mathcal O_{\\mathbf P^2})^{\\oplus 3} \\longrightarrow \\mathrm{Ext}^2(\\Omega_{\\mathbf P^2},\\mathcal O_{\\mathbf P^2}) \\longrightarrow \\mathrm{Ext}^3(\\mathcal O_{\\mathbf P^2}, \\mathcal O_{\\mathbf P^2}),\n\\end{equation*}\nso $\\textrm{Ext}^2(\\Omega_Y,\\mathcal O_Y)=0$. Now, if $Y$ is a Hirzebruch surface we apply Hom$(-,\\mathcal O_Y)$ to\n\\eqref{fibration} and get\n\\begin{equation*}\n \\mathrm{Ext}^2(\\mathcal O_Y(-2C_0-ef), \\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^2(\\Omega_Y,\\mathcal O_Y) \\longrightarrow \\mathrm{Ext}^2(\\mathcal O_Y(-2f), \\mathcal O_Y).\n\\end{equation*}\nThen $\\mathrm{Ext}^2(\\Omega_Y,\\mathcal O_Y)=0$ because $H^2(\\mathcal O_Y(2C_0+ef))=H^2(\\mathcal O_Y(2f))=0$. Thus $H^1(\\mathcal N_{i(Y)\/\\mathbf P^N})=0$, and by~\\cite[Corollary 3.2.7]{Ser}, $i(Y)$ is unobstructed in $\\mathbf P^N$. \n\\end{proof}\n\n\n\n\\begin{theorem}\\label{coversofP2andruled}\nLet $X$ be a smooth surface of general type with ample and base--point-free canonical line bundle and let $\\varphi$ be its canonical morphism. Assume that the degree of $\\pi$ is $n=2$\nand that $Y$ is either $\\mathbf P^2$ or a Hirzebruch surface and let $i$ be induced by the complete linear series of a very ample divisor $D$ on $Y$ (if $\\, Y$ is a Hirzebruch surface, then $D=aC_0+bf$ with $b-ae \\geq 1$ and $a \\geq 1$).\nMoreover, if $\\, Y$ is the Hirzebruch surface assume that\n$\\omega_Y^{-2}(2)$ is base--point--free. \nThen\n\\begin{enumerate}\n\\item[(a)] $X$ and $\\varphi$ are unobstructed, and\n\\item[(b)] any deformation of $\\varphi$ is a canonical $2:1$ morphism. 
Thus the canonical map of a variety corresponding to a general point of the component of $X$ in its moduli space is a finite morphism of degree $2$.\n\\end{enumerate}\n\\end{theorem}\n\n\n\\begin{proof}\nWe will use Theorem~\\ref{Psi2=0}, so we check now that its hypotheses are satisfied.\nCondition (1) holds because the surface $Y$ is regular. Condition (2) of Theorem~\\ref{Psi2=0} is\n$h^1(\\mathcal O_Y(1))=0$ and this has been already observed in the proof of Lemma~\\ref{Yunobs}.\nCondition (3) follows from the fact that $p_g(Y)=0$.\nTo check Condition (4) observe that, since $X$ is smooth, so is its branch locus, which is a divisor in $|\\omega_Y^{-2}(2)|$ since $\\varphi$ is\nthe canonical morphism of $X$. If $Y$ is a Hirzebruch surface,\nthen $\\omega_Y^{-2}(2)$ is base--point-free by assumption. Then all the twists of the push--down of $\\omega_Y^{-2}(2)$ to $\\mathbf P^1$ are non negative, so Condition (4) of Theorem~\\ref{Psi2=0} holds in this case. If $Y=\\mathbf P^2$, then Condition (4) of Theorem~\\ref{Psi2=0} follows trivially from the vanishing of intermediate cohomology on $\\mathbf P^2$. Condition (5) of Theorem~\\ref{Psi2=0} follows from Lemma~\\ref{Yunobs}. Finally recall that, since $\\varphi$ is the canonical morphism of $X$ and Condition (3) of Theorem~\\ref{Psi2=0} holds, the trace zero module of $\\pi$ is $\\omega_Y(-1)$. This follows from relative duality, having in account that on $Y$ numerical equivalence is the same as linear equivalence (see also Lemma~\\ref{canon.cover.split} for a more general statement). Then Condition (6) of Theorem~\\ref{Psi2=0} follows from Proposition~\\ref{nonexist.canonical.structures}.\n\\end{proof}\n\n\n\\noindent The assumption made in the statement of Theorem~\\ref{coversofP2andruled} asking $\\omega_Y^{-2}(2)$ to be base--point--free if $Y$ is a Hirzebruch surface is only needed if $e \\geq 6$ and even, in which case it is not a very strong assumption: \n\n\n\\begin{remark}\\label{freeness.remark} Let $Y=\\mathbf F_e$ and assume that $X, Y$ and $\\varphi$ are as in Theorem~\\ref{coversofP2andruled} except that no base--point-freeness assumption is made on $\\omega_Y^{-2}(2)$. Then $\\omega_Y^{-2}(2)$ is base--point-free unless $e$ is even, $e \\geq 6$ and $b-ae=\\frac{1}{2}e-2$. In fact, under our hypotheses we have $b-ae \\geq \\frac{1}{2}e-2$.\nTherefore, when $ e$ is even and $e \\geq 6$, the extra \ncondition we have to impose on a Hirzebruch surface so that $\\omega_Y^{-2}(2)$ be base--point--free is\n$b-ae \\neq \\frac{1}{2}e-2$, or equivalently $b-ae > \\frac{1}{2}e-2$.\n\\end{remark}\n\n\\begin{proof}\nSince $X$ is smooth, $|\\omega_Y^{-2}(2)|$ should have a smooth member. Recall that $\\omega_Y^{-2}(2)=\\mathcal O_Y((2a+4)C_0+(2b+2e+4)f)$. Then $2b-2ae -e + 4=(2b+2e+4)-(2a+3)e \\geq 0$, otherwise $2C_0$ would be in the fixed part of any divisor of $|\\omega_Y^{-2}(2)|$. Now, if $\\omega_Y^{-2}(2)$ is not base--point-free, then $2b-2ae -e + 4 < e$. On the other hand, if $0 \\leq 2b-2ae -e + 4 < e$, the divisor $(2a+4)C_0+(2b+2e+4)f$ is the sum of a fixed part $C_0$ and a base--point--free part $(2a+3)C_0+(2b+2e+4)f$, so $|\\omega_Y^{-2}(2)|$ has smooth members if and only if $2b-2ae -e + 4=((2a+3)C_0+(2b+2e+4)f)\\cdot C_0=0$. This only happens if $e$ is even and $b-ae=\\frac{1}{2}e-2$. 
Since under our hypotheses $aC_0+bf$ is very ample, $b-ae \\geq 1$, so if $\\omega_Y^{-2}(2)$ is not base--point--free, we have in addition $e \\geq 6$.\n\\end{proof}\n\n\n\n\n\n\\section{Non existence of canonical ropes}\\label{noropes}\n\n\nThe deformation of morphisms is related to the appearance or non appearance of multiple structures embedded in projective space. In this section we apply Proposition~\\ref{nonexist.canonical.structures} to prove the non existence of ``canonical, embedded ropes'' on $i(Y)$, i.e., multiple structures on $i(Y)$ with conormal bundle $\\mathcal E$, when $Y,i,\\varphi$ and $\\mathcal E$ are as in~\\ref{setup} and, in addition, $\\varphi$ is a canonical morphism and $Y$ is either a Hirzebruch surface or $\\mathbf P^2$. For that we need the following lemma on the structure of $\\mathcal E$:\n\n\\begin{lemma}\\label{canon.cover.split}\nLet $X$ be a variety of general type with ample and base--point-free canonical divisor and let $\\varphi$ be the canonical morphism of $X$. Assume\n\\begin{equation}\\label{nulhyp}\nH^0(\\omega_Y(-1))=0.\n\\end{equation}\nThen the trace zero module of $\\pi$ splits as\n\\begin{equation*}\\label{split}\n{\\mathcal{E}}= {\\mathcal{E}}' \\oplus \\omega_Y(-1),\n\\end{equation*}\nwhere ${\\mathcal{E}}'$ is a rank $(n-2)$ locally free sheaf.\n\\end{lemma}\n\n\\begin{proof}\nOur hypothesis says that $\\omega_X = \\pi^* {\\mathcal{O}}_Y(1)$ and\n\\begin{equation}\\label{series}\nH^0(\\omega_X)=H^0({\\mathcal{O}}_Y(1)).\n\\end{equation}\nMoreover, since $\\pi_* {\\mathcal{O}}_X= {\\mathcal{O}}_Y \\oplus {\\mathcal{E}}$, we have\n\\begin{equation}\\label{pushomega}\n\\pi_* \\omega_X = {\\mathcal{O}}_Y(1) \\oplus ({\\mathcal{E}} \\otimes {\\mathcal{O}}_Y(1)).\n\\end{equation}\nSo $H^0(\\omega_X)=H^0({\\mathcal{O}}_Y(1)) \\oplus H^0({\\mathcal{E}} \\otimes {\\mathcal{O}}_Y(1))$. Hence \\eqref{series} is equivalent to\n\\begin{equation}\\label{vanishing}\nH^0({\\mathcal{E}} \\otimes {\\mathcal{O}}_Y(1))=0.\n\\end{equation}\nOn the other hand, relative duality for $\\pi$ yields $\\pi_* \\omega_X = (\\pi_* {\\mathcal{O}}_X)^\\vee \\otimes \\omega_Y$ or equivalently\n\\begin{equation}\\label{relative}\n\\pi_* \\omega_X = \\omega_Y \\oplus ({\\mathcal{E}}^\\vee \\otimes \\omega_Y).\n\\end{equation}\n\n\\noindent\nFrom \\eqref{pushomega} and \\eqref{relative} we get a sequence\n\\begin{equation*} 0 \\longrightarrow {\\mathcal{O}}_Y(1) \\overset{\\iota} \\longrightarrow \\omega_Y \\oplus ({\\mathcal{E}}^{\\vee} \\otimes \\omega_Y) \\longrightarrow {\\mathcal{E}} \\otimes {\\mathcal{O}}_Y(1)\\longrightarrow 0.\n\\end{equation*}\nThen~\\eqref{nulhyp} implies that $\\iota$ factors through ${\\mathcal{E}}^{\\vee} \\otimes \\omega_Y$ so we get a diagram\n\\begin{equation*}\\label{diag2}\n\\xymatrix@C-5pt@R-7pt{\n & & 0 \\ar[d] & 0 \\ar[d] & \\\\\n0 \\ar[r] & {\\mathcal{O}}_Y(1) \\ar@{=}[d]\\ar[r] & {\\mathcal{E}}^{\\vee} \\otimes \\omega_Y \\ar[d] \\ar[r] & {\\mathcal{E}}{'}(1) \\ar[d] \\ar[r]& 0 \\\\\n0 \\ar[r] & {\\mathcal{O}}_Y(1) \\ar[r]^-\\iota & \\omega_Y \\oplus ({\\mathcal{E}}^{\\vee} \\otimes \\omega_Y) \\ar[d]\\ar[r] & {\\mathcal{E}} \\otimes {\\mathcal{O}}_Y(1)\\ar[d]\\ar[r] &0\\\\\n & & \\omega_Y \\ar[d] \\ar@{=}[r] & \\omega_Y \\ar[d]& \\\\\n & & 0 & 0, &}\n\\end{equation*}\nwhere ${\\mathcal{E}}'$ is a rank $(n-2)$ locally free sheaf on $Y$. 
Besides, both middle exact sequences split, and so the top horizontal and the right--hand vertical exact sequences also split.\nNow, from the splitting of the right hand side vertical sequence we get\n\\begin{equation}\\label{vert-split}\n{\\mathcal{E}} = {\\mathcal{E}}' \\oplus \\omega_Y(-1).\n\\end{equation}\n\\end{proof}\n\n\\noindent\nThe next corollary shows that there are no canonical double structures on either $\\mathbf P^2$ or a Hirzebruch surface. This result \ncontrasts with the results of~\\cite{GPcarpets} where the existence of double structures on smooth rational normal scrolls having the same invariants of smooth $K3$ surfaces is shown.\n\n\n\\begin{corollary}\\label{nonexist.canonical.structures.cor} Let $Y$ as in~\\ref{setup} and either $\\mathbf P^2$ or a Hirzebruch surface and let $X, \\varphi$ and $i$ be also as in~\\ref{setup}. Assume furthermore that $X$ is a surface of general type and that $\\varphi$ is its\ncanonical morphism.\nThen\n\\begin{enumerate}\n\\item $\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2, \\mathcal E)$ does not contain surjective homomorphisms;\n\\item there are no multiple structures inside $\\mathbf P^N$, supported on $i(Y)$ with conormal bundle $\\mathcal E$; and\n\\item there are no double structures inside $\\mathbf P^N$, supported on $i(Y)$ whose conormal bundle is a subsheaf of $\\mathcal E$.\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nSince $p_g(Y)=0$, then $h^0(\\omega_Y(-1))=0$. Thus Lemma~\\ref{canon.cover.split} implies\n$\\mathcal E= \\mathcal E' \\oplus \\omega_Y(-1)$ so\n\\begin{equation}\\label{Hom.split}\n\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2,\\mathcal E)=\\mathrm{Hom}(\\mathcal I\/\\mathcal I^2,\\mathcal E') \\oplus \\mathrm{Hom}(\\mathcal I\/\\mathcal I^2,\\omega_Y(-1)).\n\\end{equation}\nThe last summand in~\\eqref{Hom.split} is $0$ by Proposition~\\ref{nonexist.canonical.structures}, so we get (1).\nParts (2) and (3) follow from~\\cite[Proposition 2.1]{Gon}.\n\\end{proof}\n\n\\section{Consequences for geography and moduli}\\label{moduli.section}\n\n\nIn this section we compute the invariants of the surfaces of general type we have constructed in Section~\\ref{non.existence.onminimal.section}, thus finding the region of the geography of surfaces of general type they reside in. In addition, we compute the dimension of the moduli components parametrizing our surfaces. Finally we show two examples of moduli spaces having components of different nature: components parametrizing canonically embedded surfaces and components parametrizing surfaces whose canonical map is a finite morphism of degree $2$.\n\n\n\n\\begin{proposition}\\label{cancover.P2.invariants}\nLet $Y=\\mathbf P^2$ embedded by $|\\mathcal O_{\\mathbf P^2}(d)|$ and let $X$ be a canonical double cover of $Y$ as the ones appearing in Theorem~\\ref{coversofP2andruled}. Then the surface $X$ has the following invariants:\n\\begin{eqnarray}\\label{invariant.formulae.P2}\n p_g&=&\\frac{1}{2}d^2+\\frac{3}{2}d+1 \\cr\n \\cr\nq&=&0 \\cr\n\\cr\n\\chi&=&\\frac{1}{2}d^2+\\frac{3}{2}d+2 \\cr\n\\cr\nc_1^2&=&2d^2 \\cr\n \\cr\n\\frac{c_1^2}{c_2}&=&\\frac{d^2}{2d^2+9d+12}.\n\\end{eqnarray}\n\\end{proposition}\n\n\n\\begin{proof}\nIn our situation $p_g=h^0(\\mathcal O_{\\mathbf P^2}(d))$. Since $h^1(\\mathcal O_X)=h^1(\\mathcal O_Y) + h^1(\\omega_Y(-1))$ and $q(Y)=h^1(\\mathcal O_Y(1))=0$, $q(X)=0$.\nThe values of $c_1^2$ are obvious, since $\\varphi$ has degree $2$ onto $Y$. 
Finally, the values of $\\frac{c_1^2}{c_2}$ follow from Noether's formula.\n\\end{proof}\n\n\n\\begin{proposition}\\label{cancover.ruled.invariants}\nLet $Y$ be a Hirzebruch surface $Y=\\mathbf F_e$ embedded by a very ample linear system $|aC_0+bf|$ and let $X$ be a\ncanonical double cover of $\\, Y$ as the ones appearing in Theorem~\\ref{coversofP2andruled}.\nThen $X$ has the following invariants:\n\\begin{eqnarray}\\label{invariant.formulae}\n p_g&=&(a+1)(b+1-\\frac{ae}{2}) \\cr\nq&=&0 \\cr\n\\chi&=& (a+1)(b+1-\\frac{ae}{2})+1\\cr\nc_1^2&=&2a(2b-ae) \\cr\n \\cr\n\\frac{c_1^2}{c_2}&=&\\frac{2ab-a^2e}{4ab-2a^2e+6a-3ae+6b+12}.\n\\end{eqnarray}\n\\end{proposition}\n\n\n\n\n\\begin{proof}\nSince $\\varphi$ is a canonical cover, $p_g$ is the dimension of $H^0(\\mathcal O_Y(aC_0+bf))$, which is well known. By the construction of $X$,\n$h^1(\\mathcal O_X)=h^1(\\mathcal O_Y) + h^1(\\omega_Y(-1))$. Again we know that $q(Y)$ and $h^1(\\mathcal O_Y(1))$ are both $0$, so $q(X)=0$ and then the values stated for $\\chi$ are obvious. The values of $c_1^2$ follow also from the construction and properties of $X$ and $\\varphi$; indeed, $\\omega_X=\\varphi^*\\mathcal O_Y(1)$ and $\\varphi$ has degree $2$ onto its image $Y$. Finally, the values of $\\frac{c_1^2}{c_2}$ follow from Noether's formula.\n\\end{proof}\n\n\n\\begin{remark}\\label{Chern.quotient} For any of the surfaces $X$ in Propositions~\\ref{cancover.P2.invariants} and~\\ref{cancover.ruled.invariants}, $\\frac{c_1^2}{c_2} < \\frac{1}{2}$ but there exist surfaces $X$ as in Proposition~\\ref{cancover.P2.invariants} and surfaces $X$ as in Proposition~\\ref{cancover.ruled.invariants} for which $\\frac{c_1^2}{c_2}$ is arbitrarily close to $\\frac{1}{2}$.\n\\end{remark}\n\n\\begin{proof} If $X$ is as in Proposition~\\ref{cancover.P2.invariants}, both claims are clear from \\eqref{invariant.formulae.P2}.\nIf $X$ is as in Proposition~\\ref{cancover.ruled.invariants}, note that\n\\begin{equation*}\n \\frac{c_1^2}{c_2}=\\frac{2ab-a^2e}{4ab-2a^2e+6a-3ae+6b+12}=\\frac{a(2b-ae)}{2a(2b-ae)+6a+3b+3(b-ae)+12}.\n\\end{equation*}\nThe very ampleness of $\\mathcal O_Y(aC_0+bf)$ implies $b-ae \\geq 1$, so $6a+3b+3(b-ae)+12 > 0$ and the claim is also clear in this case.\n\\end{proof}\n\n\\begin{remark}\\label{cancover.ruled.geography}\n{\\rm We now present the information given in Proposition~\\ref{cancover.ruled.invariants} more graphically, by displaying on a plane the pairs $(x,y)=(\\chi,c_1^2)$ of the covers of ruled surfaces appearing in Theorem~\\ref{coversofP2andruled}. If we fix an integer $a \\geq 1$, then the points $(x,y)$ corresponding to the covers of ruled surfaces appearing in Theorem~\\ref{coversofP2andruled} are points (with integer coordinates) lying on the line $l_{a}$ passing through the point $(a+2,0)$ with slope $\\frac{4a}{a+1}$, i.e., the line of equation\n\\begin{equation}\\label{graphic}\n y=\\frac{4a}{a+1}(x-a-2).\n\\end{equation}\n\n\\noindent\nMore precisely, for each $a \\geq 1$, the invariants form an unbounded set consisting of all the integer points on the semiline of $l_{a}$ starting at the point $(2a+3,4a)$ (which is included) and going up and to the right.\nNote that any two distinct lines $l_a$ and $l_{a'}$ described above meet at the point with $x=aa'+a+a'+2$. Note also that $l_1$ is obviously Noether's line $y-2x+6=0$; the reason for this is that if $a=1$, $Y$ is embedded as a surface of minimal degree. 
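Let us also indicate the computation behind \\eqref{graphic}: by \\eqref{invariant.formulae} we have $x-a-2=\\chi-a-2=(a+1)(b-\\frac{ae}{2})$, and therefore\n\\begin{equation*}\n\\frac{4a}{a+1}(x-a-2)=4a\\Big(b-\\frac{ae}{2}\\Big)=2a(2b-ae)=c_1^2=y.\n\\end{equation*}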
Note also that the limiting points of the semilines described above lie on Noether's line $y-2x+6=0$ (as should be, since in this case the limiting point $(2a+3,4a)$ is obtained when considering canonical double covers of $\\mathbf F_0$ embedded by $|aC_0+f|$, which are surfaces of minimal degree).\n\n\\smallskip\n\\noindent\nNote that as $a$ goes to infinity, the slopes of the lines $l_a$ approaches $4$. This means that Chern ratio $\\frac{c_1^2}{c_2}$ approaches $\\frac{1}{2}$. Note also that, if $a \\geq 4$, when $x$ is sufficiently large, the line $l_a$ goes into the region $y \\geq 3x-10$, bounded by the Castelnuovo line $y=3x-10$.\n }\\end{remark}\n\n\\smallskip\n\n\\noindent We illustrate Remark~\\ref{cancover.ruled.geography} with Figures 1 and 2. In Figure 1 low values of $(\\chi,c_1^2)$ appear while Figure 2 zooms out in order to display larger values. Recall that we denote $\\chi$ by $x$ and $c_1^2$ by $y$. \n\n\n\n\\smallskip\n\n\\begin{figure}[!hb]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth,\nheight=0.6\\textheight\n]{figure1}\n\\end{center}\n\\caption{Solid lines, from less steep to more steep, are $l_1$ (which is also Noether's line) to $l_4$. The dashed line is Castelnuovo's line. The points marked are the integer points lying on $l_1$ with first coordinate $x \\geq 5$ and the integer points lying on $l_2, l_3$ and $l_4$ and above $l_1$.}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth, height=1.\n\\textwidth]{figure2}\n\\end{center}\n\\caption{Solid lines, from less steep to more steep, are $l_1$ (which is also Noether's line) to $l_6$; the dashed line is Castelnuovo's line.}\n\\end{figure}\n\n\n\\noindent \nNext we remark that the surfaces constructed in Theorem~\\ref{coversofP2andruled} are not only regular, but also simply connected:\n\n\n\\begin{remark}\nThe surfaces of general type $X$ that we have constructed in Theorem~\\ref{coversofP2andruled} are simply connected.\n\\end{remark}\n\n\\begin{proof} Let $Y$ be either a minimal rational surface or $F_1$. The fundamental group of $Y$ is well known to be $0$. The morphism $\\pi$ is a double cover of $Y$ branched along a divisor in $|\\omega_Y^{-2}(2)|$. If $Y$ is $\\mathbf P^2$ this branch divisor is obviously base--point-free\nand if $\\, Y$ is a Hirzebruch surface, the branch divisor is also base--point--free by \nthe hypothesis of Theorem~\\ref{coversofP2andruled}. Then the fundamental group of $X$ is \nthe same as the fundamental group of $Y$ by~\\cite[Corollary 2.7]{Nori} \n(note that the ampleness hypothesis required there can be relaxed to big and nefness), so $X$ is simply connected.\nThen, consider the families of surfaces associated to the deformations of $X$ given in Theorem~\\ref{coversofP2andruled}.\nAll the smooth fibers in such families are diffeomorphic to each other, hence they are also simply connected.\nThus the surfaces constructed in Theorem~\\ref{coversofP2andruled} are simply connected.\n\\end{proof}\n\n\n\n\\medskip\n\\noindent\nIn the second part of this section we compute the dimension of the components of the moduli parametrizing the surfaces of general type appearing in Theorem~\\ref{coversofP2andruled}:\n\n\n\\begin{proposition}\\label{moduli.dim}\n Let $X$ be a surface of general type as in Theorem~\\ref{coversofP2andruled}. 
Then there is only one irreducible component of the moduli containing $[X]$ and its dimension is\n\\begin{enumerate}\n\\item $\\mu=2d^2+15d+19$, if $\\, Y=\\mathbf P^2$;\n\\item $\\mu=(2a+5)(2b-ae+5)-7$, if $\\, Y$ is a Hirzebruch surface.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nPart (a) of Theorem~\\ref{coversofP2andruled} implies that \nthe base of the formal semiuniversal deformation space of $X$ is smooth, so in particular $[X]$ belongs to a unique irreducible component \nof the moduli.\nNote that \\cite[Lemma 4.4]{MP} holds also under our hypothesis. Then\n\\begin{equation}\\label{value.of.mu}\n\\mu= h^0(\\mathcal N_\\pi) - h^1(\\mathcal N_\\pi) + h^1(\\mathcal T_Y) - h^0(\\mathcal T_Y) + \\textrm{ dim Ext}^1 (\\Omega_Y,\\omega_Y(-1)).\n\\end{equation}\nThus, we will compute now the dimensions of the cohomology groups that appear in~\\eqref{value.of.mu}. First recall that in the proof of Proposition~\\ref{nonexist.canonical.structures} we obtained $\\textrm{Ext}^1 (\\Omega_Y,\\omega_Y(-1))=0$.\n\n\\smallskip\n\\noindent Now we prove that $h^1(\\mathcal N_\\pi)=0$. Recall (see~\\cite[Lemma 2.5]{MP}) that $h^1(\\mathcal N_\\pi)=h^1(\\mathcal O_B(B))$, where $B$ is the branch divisor of $\\pi$. The divisor $B$ is a smooth member of $|\\omega_Y^{-2}(2)|$. Recall also that we showed in the proof of Theorem~\\ref{coversofP2andruled} that $H^1(\\omega_Y^{-2}(2))=0$. Then the sequence\n\\begin{equation}\nH^1(\\mathcal O_Y) \\longrightarrow H^1(\\mathcal O_Y(B)) \\longrightarrow H^1(\\mathcal O_B(B)) \\longrightarrow H^2(\\mathcal O_Y)\n\\end{equation}\nand the fact that $p_g(Y)=0$ imply the vanishing of $H^1(\\mathcal N_\\pi)$.\n\n\\smallskip\n\\noindent Next we compute the number $h^0(\\mathcal T_Y)-h^1(\\mathcal T_Y)$. If $Y=\\mathbf P^2$, since $h^2(\\mathcal T_Y)=0$, then\n\\begin{equation}\\label{Xi.TP2}\n h^0(\\mathcal T_Y)-h^1(\\mathcal T_Y)=\\chi(\\mathcal T_Y)=8.\n\\end{equation}\nNow if $\\, Y$ is a Hirzebruch surface, dualizing and taking global sections on~\\eqref{fibration} yields\n\\begin{equation}\\label{Xi.THirz}\n 0 \\longrightarrow H^0(\\mathcal O_Y(2C_0+ef)) \\longrightarrow H^0(\\mathcal T_Y) \\longrightarrow H^0(\\mathcal O_Y(2f)) \\longrightarrow H^1(\\mathcal O_Y(2C_0+ef)) \\longrightarrow H^1(\\mathcal T_Y)\\longrightarrow 0.\n\\end{equation}\nThen $h^0(\\mathcal T_Y)-h^1(\\mathcal T_Y)=6$.\n\n\\smallskip\n\\noindent Now, to complete the computation we find $h^0(\\mathcal N_\\pi)$.\nAgain by~\\cite[Lemma 2.5]{MP} we have $h^0(\\mathcal N_\\pi)=h^0(\\mathcal O_B(B))=h^0(\\mathcal O_Y(B))-1$, because $Y$ is regular. 
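Note that $\\mathcal O_Y(B)=\\omega_Y^{-2}(2)$. For instance, when $Y=\\mathbf P^2$ is embedded by $|\\mathcal O_{\\mathbf P^2}(d)|$, this line bundle is $\\mathcal O_{\\mathbf P^2}(2d+6)$, so\n\\begin{equation*}\nh^0(\\mathcal O_Y(B))=\\frac{(2d+7)(2d+8)}{2}=2d^2+15d+28.\n\\end{equation*}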
If $\\, Y=\\mathbf P^2$, embedded by $|\\mathcal O_{\\mathbf P^2}(d)|$,\nthen $h^0(\\mathcal N_\\pi)=2d^2+15d+27$ and if $\\, Y$ is a Hirzebruch surface $\\mathbf F_e$ \nas in Theorem~\\ref{coversofP2andruled}, then by Riemann--Roch, $h^0(\\mathcal N_\\pi)=(2a+5)(2b-ae+5)-1$ \n(recall that $\\omega_Y^{-2}(2)$ is base--point--free and that for a Hirzebruch surface $Y$ this implies \nthe vanishing of $H^1(\\omega_Y^{-2}(2))$).\nPlugging all this in~\\eqref{value.of.mu} yields the result.\n\\end{proof}\n\n\\noindent\nSome of the\nsurfaces constructed in Theorem~\\ref{coversofP2andruled} provide examples of moduli spaces with\ninteresting properties:\n\n\\begin{example}\\label{example39}\nThe moduli space $\\mathcal M_{(39,0,110)}$ parametrizing surfaces of general type with $p_g=39, q=0$ and $c_1^2=110$ has one component $\\mathcal M_1$ whose general point corresponds to a surface $S$ as in~\\cite[Lemma 4.10]{MP} and another component $\\mathcal M_2$ whose general point corresponds to a surface $X$ as in Theorem~\\ref{coversofP2andruled}. In particular a general point of $\\mathcal M_1$ corresponds to a surface that can be canonically embedded whereas a general point of $\\mathcal M_2$ corresponds to a surface whose canonical map is a degree $2$, finite morphism.\n\\end{example}\n\n\\begin{proof} For instance, the linear system $|4H+10F|$ of $S(1,1,1)$ has smooth members $S$ which are surfaces like those of~\\cite[Lemma 4.10]{MP} and with $(p_g(S),c_1^2(S))=(39,110)$ ($H$ is the tautological divisor of $S(1,1,1)$ and $F$ is a fiber over $\\mathbf P^1$). On the other hand, canonical double covers $X$ of $\\mathbf F_1$ embedded by $|5C_0+8f|$ are surfaces like those in Theorem~\\ref{coversofP2andruled} and have $(p_g(X),c_1^2(X))=(39,110)$. Since by Theorem~\\ref{coversofP2andruled} the canonical map of $X$ deforms to a finite morphism of degree $2$, $S$ and $X$ belong to different components of $\\mathcal M_{(39,0,110)}$.\n\\end{proof}\n\n\\begin{example}\\label{example45}\nThe moduli space $\\mathcal M_{(45,0,128)}$ parametrizing surfaces of general type with $p_g=45, q=0$ and $c_1^2=128$ has at least three components $\\mathcal M_1$, $\\mathcal M_2$ and $\\mathcal M_3$. A general point of $\\mathcal M_1$ corresponds to a surface $S$ as in~\\cite[Lemma 4.10]{MP}, while general points of $\\mathcal M_2$ and $\\mathcal M_3$ correspond to surfaces $X$ as in Theorem~\\ref{coversofP2andruled}. In particular a general point of $\\mathcal M_1$ corresponds to a surface that can be canonically embedded whereas general points of $\\mathcal M_2$ and $\\mathcal M_3$ correspond to surfaces whose canonical map is a degree $2$, finite morphism.\n\\end{example}\n\n\n\\begin{proof} For instance, the linear system $|4H+12F|$ of $S(1,1,1)$ has smooth members $S$ which are surfaces like those of~\\cite[Lemma 4.10]{MP} and with $(p_g(S),c_1^2(S))=(45,128)$ (recall that $H$ is the tautological divisor of $S(1,1,1)$ and $F$ is a fiber over $\\mathbf P^1$). On the other hand, canonical double covers of $\\mathbf P^2$ embedded by octics are surfaces $X_2$ as in Theorem~\\ref{coversofP2andruled} having $(p_g(X_2),c_1^2(X_2))=(45,128)$ (see \\eqref{invariant.formulae.P2}). In addition, canonical double covers of $\\mathbf F_0$ embedded by $|4C_0+8f|$ are also surfaces $X_3$ as in Theorem~\\ref{coversofP2andruled} having $(p_g(X_3),c_1^2(X_3))=(45,128)$ (see \\eqref{invariant.formulae}). 
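Indeed, these invariants can be read off directly from the stated formulae:\n\\begin{equation*}\n\\Big(\\frac{1}{2}\\cdot 8^2+\\frac{3}{2}\\cdot 8+1,\\ 2\\cdot 8^2\\Big)=(45,128) \\ \\ \\ \\text{ and } \\ \\ \\ \\big((4+1)(8+1),\\ 2\\cdot 4\\cdot(2\\cdot 8)\\big)=(45,128).\n\\end{equation*}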
Now recall that the point in the moduli space corresponding to $X_2$ belongs to only one component of $\\mathcal M_{(45,0,128)}$ (see Proposition~\\ref{moduli.dim}), which we will call $\\mathcal M_2$. Likewise, the point in the moduli space corresponding to $X_3$ belongs to only one component of $\\mathcal M_{(45,0,128)}$, which we will call $\\mathcal M_3$. Then $\\mathcal M_2$ and $\\mathcal M_3$ are different because their dimensions differ: indeed, applying Proposition~\\ref{moduli.dim} we get that the dimension of $\\mathcal M_2$ is $2\\cdot 8^2+15\\cdot 8+19=267$ whereas the dimension of $\\mathcal M_3$ is $(2\\cdot 4+5)(2\\cdot 8+5)-7=266$.\n\\end{proof}\n\n\n\\begin{remark}\nFor any integer $m$, $m \\geq 4$, let $\\Xi_m$ be the set of values $(x',y)$ for which there exist a smooth surface $X$ as in Theorem~\\ref{coversofP2andruled} with $(p_g(X),c_1^2(X))=(x',y)$ and a smooth surface $S$ as in~\\cite[Lemma 4.10]{MP} with $(p_g(S),c_1^2(S))=(x',y)$.\n\\begin{enumerate}\n\\item The set $\\Xi_m$ is finite (and possibly empty) for every $m \\geq 4$. In particular, $\\Xi_4=\\{(39,110),$ $(45,128)\\}$ and $\\Xi_5=\\Xi_6=\\emptyset$.\n\\item There are no surfaces $X$ with $Y=\\mathbf P^2$ such that $(p_g(X),c_1^2(X)) \\in \\Xi_4 \\cup \\Xi_5 \\cup \\cdots$, except the surfaces $X_2$ appearing in Example~\\ref{example45}.\n\\end{enumerate}\n\\end{remark}\n\n\n\n\n\n\n\\begin{proof}\nThe remark follows from elementary although somewhat involved computations, once we take into account~\\cite[(3.17.1)]{MP}, the hypothesis of~\\cite[Lemma 4.10]{MP}, \\eqref{invariant.formulae.P2}, \\eqref{invariant.formulae}, \\eqref{graphic} and the fact that if\n $S$ is as in~\\cite[Lemma 4.10]{MP}, then $(p_g(S),c_1^2(S))=(x',y)$ satisfies the equation\n\\begin{equation}\\label{yx'm}\n y=6\\frac{m-3}{m-2}x'-(m-3)(m+3)\n\\end{equation}\n(see~\\cite[(3.17.2)]{MP}).\n\\end{proof}\n\n\n\n\\begin{acknowledgement}\n{\\rm We are very grateful to Edoardo Sernesi for drawing our attention to our earlier work on deformation of morphisms and for suggesting that we use it to study the canonical maps of surfaces of general type. We thank Madhav Nori for a helpful conversation. 
\nWe also thank Tadashi Ashikaga for bringing to our attention his result~\\cite[4.5]{AK} with Kazuhiro Konno.\n}\n\\end{acknowledgement}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\n\n\nIn this paper we consider the asymptotic behavior, as $\\varepsilon \\to 0+$,\nof the solution $u^\\varepsilon$ of the boundary value problem\nfor the Hamilton-Jacobi equation, with a small parameter $\\varepsilon > 0$,\n\\begin{align} \\label{epHJ} \n\\begin{cases} \n\\lambda u^\\varepsilon - \\cfrac{b \\cdot Du^\\varepsilon}{\\varepsilon} + G(x, Du^\\varepsilon ) = 0 \\ \\ \\ &\\text{ in } \\Omega, \\tag{$\\mathrm{HJ}^\\varepsilon$} \\\\ \nu^\\varepsilon = g^\\varepsilon \\ \\ \\ &\\text{ on } \\partial \\Omega.\n\\end{cases}\n\\end{align}\n\n\nHere $\\lambda$ is a positive constant,\n$\\Omega$ is an open subset of $\\mathbb R^2$ with boundary $\\partial \\Omega$,\n$u^\\varepsilon : \\overline \\Omega \\to \\mathbb R$ is the unknown,\n$G : \\overline \\Omega \\times \\mathbb R^2 \\to \\mathbb R$ and $g^\\varepsilon : \\partial \\Omega \\to \\mathbb R$ are given functions, \nand $b : \\mathbb R^2 \\to \\mathbb R^2$ is a Hamiltonian vector field,\nthat is, for a given Hamiltonian $H : \\mathbb R^2 \\to \\mathbb R$,\n\\begin{equation*}\nb = (H_{x_2}, -H_{x_1}),\n\\end{equation*}\nwhere the subscript $x_i$ indicates\nthe differentiation with respect to the variable $x_i$.\n\n\nLet us consider an optimal control problem,\nwhere the state is described by the initial value problem\n\\begin{align} \\label{state}\n\\begin{cases}\n\\dot X^\\varepsilon (t) = \\cfrac{1}{\\varepsilon} \\, b(X^\\varepsilon (t)) + \\alpha (t) \\ \\ \\ \\text{ for } t \\in \\mathbb R, \\\\\nX^\\varepsilon (0) = x \\in \\mathbb R^2, \\\\\n\\alpha \\in L^\\infty (\\mathbb R ; \\mathbb R^2), \n\\end{cases}\n\\end{align}\nof which the solution will be denoted by $X^\\varepsilon (t, x, \\alpha)$,\nthe discount rate and pay-off are given by $\\lambda$ and $g^\\varepsilon$, respectively, \nand the running cost is given by the function $L$,\ncalled \\textit{Lagrangian} of $G$, defined by\n\\begin{equation*}\nL(x, \\xi ) = \\sup_{p \\in \\mathbb R^2} \\{ -\\xi \\cdot p - G(x, p) \\} \\ \\ \\ \\text{ for } (x, \\xi ) \\in \\overline \\Omega \\times \\mathbb R^2.\n\\end{equation*} \nThen \\eqref{epHJ} is the dynamic programming equation for this optimal control problem. \n\n\nWe may regard \\eqref{state} as an equation\nobtained by perturbing by control $\\alpha$ the Hamiltonian system \n\\begin{align}\n\\begin{cases} \\label{epHS}\n\\dot X^\\varepsilon (t) = \\cfrac{1}{\\varepsilon} \\, b(X^\\varepsilon (t)) \\ \\ \\ \\text{ for } t \\in \\mathbb R, \\\\ \\tag{$\\mathrm{HS}^\\varepsilon$}\nX^\\varepsilon (0) = x \\in \\mathbb R^2,\n\\end{cases}\n\\end{align}\nof which the solution will be denoted by $X^\\varepsilon (t, x)$.\n\n\nPerturbation problems for Hamiltonian flows,\nsimilar to ours, were studied by Freidlin-Wentzell \\cites{FW}. 
\nIn their work \\cites{FW}, they considered the stochastic perturbation of \\eqref{epHS}\n\\begin{align} \\label{SDE}\n\\begin{cases}\ndX_t^\\varepsilon = \\cfrac{1}{\\varepsilon} \\, b(X_t^\\varepsilon ) dt + dW_t \\ \\ \\ \\text{ for } t \\in \\mathbb R, \\\\\nX(0) = x \\in \\mathbb R^2,\n\\end{cases}\n\\end{align} \nwhere $W_t$ is a standard two-dimensional Brownian motion,\nand associated with \\eqref{SDE} is, in place of \\eqref{epHJ},\nthe boundary value problem for the linear second-order elliptic partial differential equation\n\\begin{align} \\label{LDE}\n\\begin{cases}\n- \\, \\cfrac{1}{2} \\, \\Delta u^\\varepsilon - \\cfrac{b \\cdot Du^\\varepsilon}{\\varepsilon} = f \\ \\ \\ &\\text{ in } \\Omega, \\\\\nu^\\varepsilon = g^\\varepsilon \\ \\ \\ &\\text{ on } \\partial \\Omega,\n\\end{cases}\n\\end{align}\nwhere $f \\in C(\\overline \\Omega)$ is a given function.\n\n\nIn \\cites{FW}, the authors proved, by a probabilistic approach, that solutions of \\eqref{LDE} converge,\nas $\\varepsilon \\to 0+$, to solutions of a boundary value problem\nfor a system of ordinary differential equations (odes, for short)\non a graph.\nThe domain $\\Omega$ thus degenerates to a graph in the limiting process,\nwhich we call \\textit{singular perturbations of domains}.\nMore precisely, the limit of the function $u^\\varepsilon$, as $\\varepsilon \\to 0+$, \nbecomes a function which is constant along each connected component of the level sets of the Hamiltonian $H$.\nAs a consequence, a natural parametrization to describe the limiting function is\nto use the height of the Hamiltonian $H$ and not the original two-dimensional variables (see Fig. 2 below).\nAfter the work \\cites{FW}, such problems have been studied by many authors, including Freidlin-Wentzell \\cites{FW},\nand many of these studies use the probabilistic approach. \nIn a recent study, Ishii-Souganidis \\cites{IS} obtained, by using pure pde-techniques, \nsimilar results for linear second-order degenerate elliptic partial differential equations.\n\n\nFrom the viewpoint of pde, \nlinear differential equations, like \\eqref{LDE}, give the basis \nfor studying stochastic perturbations of \\eqref{epHS},\nwhile nonlinear differential equations, like \\eqref{epHJ}, play the same role \nfor perturbations of \\eqref{epHS} by control terms. \n\n\nHere we treat \\eqref{epHJ}, establish the convergence\nof the solution $u^\\varepsilon$ of \\eqref{epHJ} to a function on a graph,\nand identify the limit of $u^\\varepsilon$ as the unique solution of a boundary value problem \nfor a system of odes on the graph.\nThe result is stated in Theorem \\ref{main}. \nThe argument for establishing this result depends heavily on\nviscosity solution techniques including the perturbed test function method \nas well as representations, as value functions in optimal control, of solutions of \\eqref{epHJ}. \n\n\nIn \\cites{AT}, the authors treat a problem similar to the one above. \nThey consider general Hamilton-Jacobi equations in optimal control \non an unbounded thin set converging to a graph,\nprove the convergence of the solutions, and identify the limit of the solutions. \n\n\nAn interesting point of our result is that we have to treat \na non-coercive Hamiltonian in the Hamilton-Jacobi equation \\eqref{epHJ}. 
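To illustrate this point, note that if $G$ grows at most linearly, say $G(x,p) \\leq C(|p|+1)$ for some constant $C>0$ (a growth condition imposed here only for this illustration), then for every $x$ with $b(x) \\not= 0$, every $\\varepsilon < |b(x)|\/C$ and every $t>0$,\n\\begin{equation*}\n- \\, \\frac{b(x) \\cdot tb(x)}{\\varepsilon} + G(x, tb(x)) \\leq t|b(x)| \\Big( C - \\frac{|b(x)|}{\\varepsilon} \\Big) + C \\longrightarrow -\\infty \\ \\ \\ \\text{ as } t \\to \\infty,\n\\end{equation*}\nso the function $p \\mapsto -b(x) \\cdot p\/\\varepsilon + G(x,p)$ is not coercive.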
\nMany authors in their studies on Hamilton-Jacobi equations\non graphs, including \\cites{AT}, \nassume, in order to guarantee the existence of continuous solutions, \na certain coercivity of the Hamiltonian in the Hamilton-Jacobi equations, which corresponds, in terms of \noptimal control, to a certain controllability of the dynamics. \nIn our result, we also make a coercivity assumption (see (G4) below) \non the unperturbed Hamiltonian, called $G$ in \\eqref{epHJ}, \nbut, because of the term $b \\cdot Du^\\varepsilon\/\\varepsilon$ in \\eqref{epHJ},\nwhen $\\varepsilon > 0$ is very small, the perturbed Hamiltonian becomes non-coercive\nand the controllability of the dynamics \\eqref{state} breaks down. \nA crucial point in our study is that, when $\\varepsilon$ is very small, \nthe perturbed term $b \\cdot Du^\\varepsilon\/\\varepsilon$ \nmakes the solution $u^\\varepsilon$ nearly constant along the level sets of the \nHamiltonian $H$, while the perturbed Hamiltonian \n$-b(x) \\cdot p\/\\varepsilon + G(x,p)$ in \n\\eqref{epHJ} is ``coercive in the direction orthogonal to $b(x)$'', that is, \nthe direction of the gradient $DH(x)$. \nHeuristically at least, these two characteristics combined allow us \nto analyze the asymptotic behavior of the solution $u^\\varepsilon$ of \n\\eqref{epHJ} as $\\varepsilon \\to 0+$. \n\n\nThis paper is organized as follows.\nIn the next section, we describe precisely the \nHamiltonian $H$, the domain $\\Omega$ as well as \nsome relevant properties of the Hamiltonian system \\eqref{HS}, \npresent the assumptions on the unperturbed Hamiltonian $G$ used throughout the paper,\nand give a basic existence and uniqueness proposition (see Proposition \\ref{viscosity-sol}) \nfor \\eqref{epHJ} and a proposition concerning the dynamic programming principle. \nSection 3, which is divided into two parts, is devoted to establishing Theorem \\ref{main}.\nIn the first part, we give some observations on the odes \n\\eqref{lim-HJ} on the graph,\nwhich the limiting function of the solution $u^\\varepsilon$ of \\eqref{epHJ} should satisfy. \nThe last part is devoted to the proof of Theorem \\ref{main}. \nIt relies on three propositions.\nThey are the characterization of the half relaxed-limits of $u^\\varepsilon$\n(Theorem \\ref{characterize} in Section 4),\nand two estimates for the half relaxed-limits of $u^\\varepsilon$\n(Lemmas \\ref{v^+-u_i^+} and \\ref{v^--d_0} in Sections 5 and 6, respectively). \nIn Section 7, we are concerned with the admissibility of boundary data for the odes on the graph. \nIn our formulation of the main result (see Theorem \\ref{main}), \nwe make rather implicit (or ad hoc) assumptions (G5) and (G6). \nIndeed, (G5) readily gives us a unique viscosity solution of \\eqref{epHJ} \nand (G6) essentially assumes that no boundary-layer phenomenon occurs for the solution of \n\\eqref{epHJ} in the limiting process as $\\varepsilon \\to 0+$. \nIt is thus important to know when (G5) and (G6) hold. \nSection 7 focuses on giving a sufficient condition under which they hold.\n\n\nBefore closing the introduction, we fix some notation. \n\n\n\n\\subsection*{Notation}\n\n\nFor $c, d \\in \\mathbb R$, we write $c \\wedge d = \\min \\{ c, d \\}$ and $c \\vee d = \\max \\{ c, d \\}$. 
\nFor $r > 0$, we denote by $B_r$ the open disc centered at the origin with radius $r$.\nWe write $\\mathbf{1}_E$ for the characteristic function of the set $E$.\n\n\n\n\n\n\\section{Preliminaries}\n\n\n\\begin{figure}[t]\n\\centering\n\\input{Hamiltonian.tex}\n\\caption{\\ }\n\\end{figure}\n\n\n\\subsection{The domain $\\Omega$}\n\n\nWe assume the following (H1)--(H3) throughout this paper.\n\n\n\\begin{itemize}\n\\item[(H1)] $H \\in C^2 (\\mathbb R^2)$ and $\\lim_{|x| \\to \\infty} H(x) = \\infty$. \n \n \n\\item[(H2)] $H$ has exactly three critical points $z_1, z_3 \\in \\mathbb R^2$ \nand $z_2 = (0,0) := 0$. \n\n\n\\item[(H3)] There exists $\\kappa > 0$ such that \n \\begin{equation*}\n\t\t\t\tH(x_1, x_2) = x_2^2 -x_1^2 \\ \\ \\ \\text{ on } \\overline B_\\kappa.\n \\end{equation*}\n\\end{itemize}\nThe graph of the Hamiltonian $H$ \nsatisfying (H1)--(H3) is depicted in Fig. 1 above.\n\n\nIt follows from these assumptions that\nfor any $h > 0$, the open set $\\{ x \\in \\mathbb R^2 \\ | \\ H(x) < h \\}$ is connected,\nand the open set $\\{ x \\in \\mathbb R^2 \\ | \\ H(x) < 0 \\}$ consists of\ntwo connected components $D_1$ and $D_3$ such that $z_1 \\in D_1$ and $z_3 \\in D_3$.\nWe may assume that $(-\\kappa, 0) \\in D_1$ and $(\\kappa, 0) \\in D_3$.\n\n\nThe shape of the domain $\\Omega$ is depicted in Fig. 2(a) below.\n\n\nWe choose $h_1, h_2, h_3 \\in \\mathbb R$ so that\n\\begin{equation*}\nh_1, h_3 < 0 < h_2 \\ \\ \\ \\text{ and } \\ \\ \\ H(z_i) < h_i \\ \\ \\ \\text{ for } i \\in \\{ 1, 3 \\}, \n\\end{equation*}\nand consider the intervals\n\\begin{equation*}\nJ_2 = (0, h_2) \\ \\ \\ \\text{ and } \\ \\ \\ J_i = (h_i, 0) \\ \\ \\ \\text{ for } i \\in \\{ 1, 3 \\}, \n\\end{equation*}\nthe open sets\n\\begin{equation*}\n\\Omega_2 = \\{ x \\in \\mathbb R^2 \\ | \\ H(x) \\in J_2 \\} \\ \\ \\ \\text{ and } \\ \\ \\ \\Omega_i = \\{ x \\in D_i \\ | \\ H(x) \\in J_i \\} \\ \\ \\ \\text{ for } i \\in \\{ 1, 3 \\}, \n\\end{equation*}\nand their ``outer\" boundaries\n\\begin{equation*}\n\\partial_i \\Omega = \\{ x \\in \\overline \\Omega_i \\ | \\ H(x) = h_i \\} \\ \\ \\ \\text{ for } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\n\nNow we introduce $\\Omega$ as the open connected set \n\\begin{equation*}\n\\Omega = \\left( \\bigcup_{i = 1}^3 \\Omega_i \\right) \\cup \\{ x \\in \\mathbb R^2 \\ | \\ H(x) = 0 \\},\n\\end{equation*}\nwith the boundary\n\\begin{equation*}\n\\partial \\Omega = \\bigcup_{i = 1}^3 \\partial_i \\Omega.\n\\end{equation*}\nIt is obvious that, by replacing $\\kappa$ by a smaller positive number if necessary,\nwe may assume that $\\overline B_\\kappa \\subset \\Omega$ in (H3).\nFor later convenience, the constant $\\kappa > 0$ in (H3) will be always assumed\nsmall enough so that $\\overline B_\\kappa \\subset \\Omega$.\n\n\nWe define loops $c_i (h)$ for $h \\in \\bar J_i$ and $i \\in \\{ 1, 2, 3 \\}$ by\n\\begin{equation*}\nc_i (h) = \\{ x \\in \\overline \\Omega_i \\ | \\ H(x) = h \\}.\n\\end{equation*}\n\n\nIf we identify all points belonging to a loop $c_i (h)$,\nthen we obtain a graph, which is shown in Fig. 
2(b),\nconsisting of three segments, parametrized by $J_1$, $J_2$, and $J_3$.\n\n\n\\begin{figure}[t]\n\\centering\n\\input{Omega.tex}\n\\caption{\\ }\n\\end{figure}\n\n\nWe consider\nthe initial value problem\n\\begin{align} \\label{HS}\n\\begin{cases}\n\\dot X (t) = b(X(t)) \\ \\ \\ \\text{ for } t \\in \\mathbb R, \\\\ \\tag{HS}\nX(0) = x \\in \\mathbb R^2,\n\\end{cases}\n\\end{align}\nwhich admits a unique global in time solution $X = X(t, x)$.\nNote that, in view of (H1),\n\\begin{equation*}\nX, \\dot X \\in C^1 (\\mathbb R \\times \\mathbb R^2 ; \\mathbb R^2) \\ \\ \\ \\text{ and } \\ \\ \\ H(X(t, x)) = H(x) \\ \\ \\ \\text{ for all } (t, x) \\in \\mathbb R \\times \\mathbb R^2. \n\\end{equation*} \n\n\nFix $i \\in \\{ 1, 2, 3 \\}$ and $h \\in \\bar J_i \\setminus \\{ 0 \\}$.\nSince $\\{ X(t, x) \\ | \\ t \\in \\mathbb R \\} \\subset c_i (h)$ if $x \\in c_i (h)$,\nand $DH(x) \\not= 0$ for all $x \\in c_i (h)$, \nit is easily seen that the map $t \\mapsto X(t, x)$ is periodic in $t$ for all $x \\in c_i (h)$,\nand that the minimal period of $X(\\cdot, x)$ is independent of $x \\in c_i (h)$.\nFor $x \\in c_i (h)$, let $T_i (h)$ be the minimal period of $X(\\cdot, x)$, that is,\n\\begin{equation*}\nT_i (h) = \\inf \\{ t > 0 \\ | \\ X(t, x) = x \\}.\n\\end{equation*}\n\n\n\\begin{lem} \\label{T_i}\nFor all $i \\in \\{ 1, 2, 3 \\}$, $T_i \\in C^1 (\\bar J_i \\setminus \\{ 0 \\})$ \nand $T_i (h) = O(|\\log |h||)$ as $(-1)^ih \\to 0+$.\n\\end{lem} \n\n\n\nThe proposition above can be found in \\cites{IS} in a slightly different context,\nto which we refer for the proof.\n\n\nFinally we observe that, for any $i \\in \\{ 1, 2, 3 \\}$ and $h \\in \\bar J_i \\setminus \\{ 0 \\}$,\nthe length $L_i (h)$ of the loop $c_i (h)$ is described by\n\\begin{equation} \\label{def-L_i}\nL_i (h) = \\int_0^{T_i (h)} |\\dot X(t, x)| \\, dt = \\int_0^{T_i (h)} |DH(X(t, x))| \\, dt,\n\\end{equation}\nwhere $x \\in \\overline \\Omega$ is chosen so that $x \\in c_i (h)$.\nThe value of the integral above is independent of the choice of $x \\in c_i (h)$.\n\n\nThe following lemma is a consequence of Lemma \\ref{T_i} and \\eqref{def-L_i}.\n\n\n\\begin{lem} \\label{L_i}\nFor all $i \\in \\{ 1, 2, 3 \\}$, $L_i \\in C^1 (\\bar J_i \\setminus \\{ 0 \\})$.\n\\end{lem}\n\n\nWe focus on the domain given by Hamiltonian $H$\nsatisfying (H1)--(H3) (see Fig. 1) in this paper for simplicity of presentation. \nIt is possible to treat many junctions if Hamiltonian $H$ has only critical points that are non-degenerate. \nHowever, the argument in this paper does not cover the case when critical points of Hamiltonian $H$ are degenerate,\nand thus junctions with more than three line segments are outside the scope of this paper.\n\n\n\n\\subsection{Viscosity solutions of \\eqref{epHJ}}\n\n\nWe prove here the existence and uniqueness of viscosity solutions of \\eqref{epHJ},\nwhich are continuous up to the boundary. \nWe do not recall here the definition and basic properties of viscosity solutions\nand we refer instead to \\cites{BCD, CIL, L} for them. 
\n\n\nWe need the following assumptions on $G$ and $g^\\varepsilon$.\n\n\n\\begin{itemize}\n\\item[(G1)] $G \\in C(\\overline \\Omega \\times \\mathbb R^2)$.\n\n \n\\item[(G2)] There exists a modulus $m$ such that\n \\begin{equation*}\n |G(x, p) - G(y, p)| \\leq m(|x - y|(1 + |p|)) \\ \\ \\ \\text{ for all } x, y \\in \\overline \\Omega \\text{ and } p \\in \\mathbb R^2.\n \\end{equation*}\n\n \n\\item[(G3)] For each $x \\in \\overline \\Omega$, the function $p \\mapsto G(x, p)$ is convex on $\\mathbb R^2$.\n\n \n\\item[(G4)] $G$ is coercive, that is, \n \\begin{equation*}\n G(x, p) \\to \\infty \\ \\ \\ \\text{ uniformly for } x \\in \\overline \\Omega \\text{ as } |p| \\to \\infty.\n \\end{equation*}\n\\end{itemize} \n\n\nUnder assumptions (G1), (G3), and (G4), there exist $\\nu, M > 0$ such that\n\\begin{equation*}\nG(x, p) \\geq \\nu |p| - M \\ \\ \\ \\text{ for all } (x, p) \\in \\overline \\Omega \\times \\mathbb R^2,\n\\end{equation*}\nwhich yields, together with the definition of $L$,\n\\begin{equation} \\label{upper-bound-L}\nL(x, \\xi ) \\leq M \\ \\ \\ \\text{ for all } (x, \\xi) \\in \\overline \\Omega \\times \\overline B_\\nu.\n\\end{equation}\nAlso, by the definition of $L$, we get\n\\begin{equation} \\label{lower-bound-L}\nL(x, \\xi ) \\geq -G(x, 0) \\ \\ \\ \\text{ for all } (x, \\xi) \\in \\overline \\Omega \\times \\mathbb R^2, \n\\end{equation}\nand, using (G1), we see that $L$ is lower semicontinuous\nin $\\overline \\Omega \\times \\mathbb R^2$.\n\n\n\\begin{itemize} \n\\item[(G5)] There exists $\\varepsilon_0 \\in (0, 1)$ such that\n\t\t\t\t$\\{ g^\\varepsilon \\}_{\\varepsilon \\in (0, \\varepsilon_0)} \\subset C(\\partial \\Omega)$\n is uniformly bounded on $\\partial \\Omega$,\n and, for any $\\varepsilon \\in (0, \\varepsilon_0)$, $g^\\varepsilon$ satisfies the condition\n\t\t\t\t\\begin{equation*}\n g^\\varepsilon (x) \\leq \\int_0^{\\vartheta} L(X^\\varepsilon (t, x, \\alpha ), \\alpha (t)) e^{-\\lambda t} \\, dt\n\t\t\t\t+ g^\\varepsilon (y) e^{-\\lambda \\vartheta} \n \\end{equation*}\n for all $x, y \\in \\partial \\Omega$, $\\vartheta \\in [0, \\infty)$, and $\\alpha \\in L^\\infty (\\mathbb R; \\mathbb R^2)$,\n where the conditions\n \\begin{equation*}\n X^\\varepsilon (\\vartheta, x, \\alpha) = y \\ \\ \\ \\text{ and } \\ \\ \\\n\t\t\t\tX^\\varepsilon (t, x, \\alpha) \\in \\overline \\Omega \\ \\ \\ \\text{ for all } t \\in [0, \\vartheta]\n \\end{equation*}\n are satisfied, that is, $\\vartheta$ is a visiting time at the target $y$\n\t\t\t\tof the trajectory $\\{ X^\\varepsilon (t, x, \\alpha) \\}_{t \\geq 0}$ constrained in $\\overline \\Omega$.\n\\end{itemize}\n\n\nCondition (G5) is a sort of compatibility condition,\nand we are motivated to introduce this condition by \\cites{L},\nwhere conditions similar to (G5) are used to guarantee\nthe continuity of the value functions in optimal control.\nHere (G5) has the same role as those in \\cites{L} and is to ensure the continuity\nof the function $u^\\varepsilon$ given by \\eqref{represent} below.\n\n\n\\begin{prop} \\label{viscosity-sol}\nAssume that {\\rm (G1)--(G5)} hold.\nFor $\\varepsilon \\in (0, \\varepsilon_0)$,\nwe define the function $u^\\varepsilon : \\overline \\Omega \\to \\mathbb R$ by\n\\begin{align} \\label{represent}\n\\begin{aligned}\nu^\\varepsilon (x) = \\inf \\Big\\{ \\int_0^{\\tau^\\varepsilon} L(X^\\varepsilon (t, x, \\alpha), \\alpha &(t))e^{-\\lambda t} \\, dt \\\\\n &+ g^\\varepsilon (X^\\varepsilon (\\tau^\\varepsilon, x, \\alpha))e^{-\\lambda \\tau^\\varepsilon}\n\t\t\t\t\t\t \\ 
| \\ \\alpha \\in L^\\infty (\\mathbb R; \\mathbb R^2) \\Big\\},\n\\end{aligned}\n\\end{align}\nwhere $\\tau^\\varepsilon$ is a visiting time in $\\partial \\Omega$\nof the trajectory $\\{ X^\\varepsilon (t, x, \\alpha) \\}_{t \\geq 0}$ constrained in $\\overline \\Omega$,\nthat is, $\\tau^\\varepsilon$ is a nonnegative number such that\n\\begin{equation*}\nX^\\varepsilon (\\tau^\\varepsilon, x, \\alpha) \\in \\partial \\Omega \\ \\ \\ \\text{ and } \\ \\ \\\nX^\\varepsilon (t, x, \\alpha) \\in \\overline \\Omega \\ \\ \\ \\text{ for all } t \\in [0, \\tau^\\varepsilon].\n\\end{equation*} \nThen $u^\\varepsilon$ is continuous on $\\overline \\Omega$,\nthe unique viscosity solution of \\textrm{\\eqref{epHJ}} and\nsatisfies $u^\\varepsilon = g^\\varepsilon$ on $\\partial \\Omega$.\nFurthermore the family $\\{ u^\\varepsilon \\}_{\\varepsilon \\in (0, \\varepsilon_0)}$\nis uniformly bounded on $\\overline \\Omega$.\n\\end{prop}\n\n\n\\begin{proof}\nWe begin by showing that\n$\\{ u^\\varepsilon \\}_{\\varepsilon \\in (0, \\varepsilon_0)}$ is uniformly bounded on $\\overline \\Omega$.\nFix a constant $C > 0$ so that\n$|g^\\varepsilon (x)| \\leq C$ for all $(x, \\varepsilon) \\in \\partial \\Omega \\times (0, \\varepsilon_0)$.\nWe may assume as well that $|G(x, 0)| \\leq C$ for all $x \\in \\overline \\Omega$.\n\n\nWe intend to define $\\tau^\\varepsilon := \\tau^\\varepsilon (x) \\in [0, \\infty)$\nand $Y^\\varepsilon : [0, \\tau^\\varepsilon] \\times \\overline \\Omega \\to \\mathbb R^2$.\nLet $\\varepsilon \\in (0, \\varepsilon_0)$ and $x \\in \\overline \\Omega$.\nIf $x \\in \\partial \\Omega$, then we take $\\tau^\\varepsilon = 0$ and set $Y^\\varepsilon (0, x) = x$.\nIf $x \\in \\Omega \\setminus \\{ 0 \\}$, then we solve the initial value problems\n\\begin{equation*}\n\\dot X^\\pm (t) = \\frac{b(X^\\pm (t))}{\\varepsilon} \\pm \\nu \\, \\frac{DH(X^\\pm (t))}{|DH(X^\\pm (t))|}\n\\ \\ \\ \\text{ and } \\ \\ \\ X^\\pm (0) = x.\n\\end{equation*}\nThese problems have unique solutions $X^\\pm (t)$ for $t \\geq 0$\nas far as $X^\\pm (t)$ stay away from the origin.\nSince $b(y) \\cdot DH(y) = 0$ for all $y \\in \\overline \\Omega$ and, hence,\n\\begin{equation} \\label{growth-H}\n\\mathrm{\\cfrac{d}{dt}} \\, H(X^\\pm (t)) = \\pm \\nu |DH(X^\\pm (t))| \\ \\ \\ \\text{ for all } t > 0,\n\\end{equation}\nwe see that if $x \\in (\\Omega_i \\cup c_i (0)) \\setminus \\{ 0 \\}$ and $i \\in \\{ 1, 3 \\}$,\nthen $X^- (t_x^-) \\in \\partial_i \\Omega$ and $X^- (t) \\in \\Omega_i$\nfor all $t \\in (0, t_x^-)$ and for some $t_x^- \\in (0, \\infty)$.\nSimilarly, if $x \\in (\\Omega_2 \\cup c_2 (0)) \\setminus \\{ 0 \\}$,\nthen $X^+ (t_x^+) \\in \\partial_2 \\Omega$ and $X^+ (t) \\in \\Omega_2$\nfor all $t \\in (0, t_x^+)$ and for some $t_x^+ \\in (0, \\infty)$.\nIn view of these observations and the fact that $c_2 (0) = c_1 (0) \\cup c_3 (0)$, \nwhen $x \\in \\Omega_1 \\cup \\Omega_3$, we set $\\tau^\\varepsilon = t_x^-$\nand $Y^\\varepsilon (t, x) = X^- (t)$ for $t \\in [0, \\tau^\\varepsilon]$,\nand when $(\\Omega_2 \\cup c_2 (0)) \\setminus \\{ 0 \\}$, we set $\\tau^\\varepsilon = t_x^+$\nand $Y^\\varepsilon (t, x) = X^+ (t)$ for $t \\in [0, \\tau^\\varepsilon]$.\n\n\nNow, we consider the case where $x = 0$\nand set $\\delta = \\kappa \\wedge (\\nu \\varepsilon\/4)$,\nwhere $\\kappa$ is the constant from (H3).\nBy (H3), for any $y \\in B_\\delta$, we have $H(y) = y_2^2 -y_1^2$ \nand $|b(y)|\/\\varepsilon = 2|y|\/\\varepsilon < \\nu\/2$.\nWe set $t_0 = 2\\delta\/\\nu$ and $X(t) = (\\nu t\/2)(0, 1) \\in 
\\mathbb R^2$ for $t \\in [0, t_0]$\nand note that $X(t_0) \\in \\Omega_2 \\cap \\partial B_\\delta$\nand that $X(t) \\in \\overline B_\\delta$ and $|\\dot X(t)| = \\nu\/2$ for all $t \\in [0, t_0]$.\nWe set $\\tau^\\varepsilon = \\tau^\\varepsilon (0) = t_0 + \\tau^\\varepsilon (X(t_0))$ and \n\\begin{align*}\nY^\\varepsilon (t, 0) =\n\\begin{cases}\nX(t) \\ \\ \\ &\\text{ for } t \\in [0, t_0], \\\\\nY^\\varepsilon (t - t_0, X(t_0)) \\ \\ \\ &\\text{ for } t \\in [t_0, \\tau^\\varepsilon].\n\\end{cases}\n\\end{align*}\nIt is now easily seen that for any $x \\in \\overline \\Omega$,\n$Y^\\varepsilon (\\tau^\\varepsilon, x) \\in \\partial \\Omega$ and\n$Y^\\varepsilon (t, x) \\in \\overline \\Omega$ for all $t \\in [0, \\tau^\\varepsilon]$ and that \n\\begin{equation*}\n\\dot Y^\\varepsilon (t, x) - \\frac{b(Y^\\varepsilon (t, x))}{\\varepsilon} \\in \\overline B_\\nu \\ \\ \\ \n\\text{ for all } (t, x) \\in (0, \\tau^\\varepsilon) \\times \\Omega.\n\\end{equation*}\n\n\nObserve by the definition of $u^\\varepsilon$ and\ninequalities \\eqref{lower-bound-L} and \\eqref{upper-bound-L} that for any $x \\in \\overline \\Omega$,\n\\begin{equation*}\nu^\\varepsilon (x) \\geq \\inf_{\\tau \\in [0, \\infty)} \\Big\\{ \\int_0^\\tau -Ce^{-\\lambda t} \\, dt + (-C)e^{-\\lambda \\tau} \\Big\\}\n = -((C\/\\lambda) \\vee C),\n\\end{equation*}\nand\n\\begin{equation*}\nu^\\varepsilon (x) \\leq \\int_0^{\\tau^\\varepsilon} L(Y^\\varepsilon (t, x), \\alpha (t, x))e^{-\\lambda t} \\, dt + Ce^{-\\lambda \\tau^\\varepsilon}\n \\leq \\int_0^{\\tau^\\varepsilon} Me^{-\\lambda t} \\, dt + Ce^{-\\lambda \\tau^\\varepsilon} \\leq (M\/\\lambda) \\vee C,\n\\end{equation*}\nwhere\n\\begin{align*}\n\\alpha (t, x) :=\n\\begin{cases}\n\\dot Y^\\varepsilon (t, x) - \\cfrac{b(Y^\\varepsilon (t, x))}{\\varepsilon} \\ \\ \\ &\\text{ if } x \\in \\Omega, \\\\\n0 \\ \\ \\ &\\text{ if } x \\in \\partial \\Omega. \n\\end{cases}\n\\end{align*}\nThus, we have\n\\begin{equation*}\n|u^\\varepsilon (x)| \\leq C \\vee (M\/\\lambda) \\vee (C\/\\lambda) \\ \\ \\ \n\\text{ for all } (x, \\varepsilon) \\in \\overline \\Omega \\times (0, \\varepsilon_0),\n\\end{equation*}\nwhich shows that $\\{ u^\\varepsilon \\}_{\\varepsilon \\in (0, \\varepsilon_0)}$ is uniformly bounded on $\\overline \\Omega$.\n\n\nNow, we may define the upper (resp., lower) semicontinuous envelope $(u^\\varepsilon)^\\ast$ (resp., $(u^\\varepsilon)_\\ast$)\nas a bounded function on $\\overline \\Omega$. \nAs is well-known (see, for instance, \\cites{I}), $(u^\\varepsilon)^\\ast$ and $(u^\\varepsilon)_\\ast$ are, respectively,\na viscosity subsolution and supersolution of \n\\begin{equation} \\label{pde*}\n\\lambda u - \\cfrac{b \\cdot Du}{\\varepsilon} + G(x, Du) = 0 \\ \\ \\ \\text{ in } \\Omega. 
\n\\end{equation}\n\n\nIt remains to prove that $u^\\varepsilon \\in C(\\overline \\Omega)$.\nWe first demonstrate that \n\\begin{equation} \\label{bdry-c}\n\\lim_{\\overline \\Omega \\ni y \\to x} u^\\varepsilon (y) = g^\\varepsilon (x) \\ \\ \\ \\text{ for all } x \\in \\partial \\Omega,\n\\end{equation}\nwhere the convergence is uniform in $x \\in \\partial \\Omega$.\n\n\nTo do this, we argue by contradiction\nand thus suppose that there exist a sequence $\\{ x_n \\}_{n \\in \\mathbb N} \\subset \\overline \\Omega$,\nconverging to $x_0 \\in \\partial_i \\Omega$ for some $i \\in \\{ 1, 2, 3 \\}$,\nand a positive constant $\\gamma > 0$ so that\n\\begin{equation*}\n|u^\\varepsilon (x_n) - g^\\varepsilon (x_0)| \\geq \\gamma \\ \\ \\ \\text{ for all } n \\in \\mathbb N.\n\\end{equation*}\nThere are two cases: for infinitely many $n \\in \\mathbb N$, we have\n\\begin{equation} \\label{infinite+}\nu^\\varepsilon (x_n) \\geq g^\\varepsilon (x_0) + \\gamma,\n\\end{equation}\nor, otherwise,\n\\begin{equation} \\label{infinite-}\nu^\\varepsilon (x_n) \\leq g^\\varepsilon (x_0) - \\gamma.\n\\end{equation}\nBy passing to a subsequence, we may assume that, in the first case (resp., in the second case),\n\\eqref{infinite+} (resp., \\eqref{infinite-}) is satisfied for all $n \\in \\mathbb N$.\nWe may assume as well that $x_n \\in \\overline \\Omega_i$ for all $n \\in \\mathbb N$.\nThe set $H \\big(\\overline B_\\kappa \\big) = \\{ H(y) \\, | \\, y \\in \\overline B_\\kappa \\}$ is clearly a closed interval,\nwhich we denote by $[h_-, h_+]$,\nand, since $\\overline B_\\kappa \\subset \\Omega$,\nwe have $h_1 \\vee h_3 < h_- < 0 < h_+ < h_2$.\nWe may assume by passing once again to a subsequence if necessary that for all $n \\in \\mathbb N$,\n$H(x_n) \\in (h_+, h_2]$ if $i = 2$, and $H(x_n) \\in [h_i, h_-)$ otherwise.\nBy \\eqref{growth-H}, we see that $Y^\\varepsilon (t, x_n) \\in \\overline \\Omega_i \\setminus B_\\kappa$\nfor all $t \\in [0, \\tau^\\varepsilon (x_n)]$ and $n \\in \\mathbb N$.\nWe set $c_0 = \\min_{\\overline \\Omega \\setminus B_\\kappa} |DH| (> 0)$.\n\n\nWe treat first the case where \\eqref{infinite+} holds for all $n \\in \\mathbb N$. 
\nBy \\eqref{growth-H}, we have\n\\begin{align*}\n(-1)^i h_i - (-1)^i H(x_n) &= (-1)^i H(Y^\\varepsilon (\\tau^\\varepsilon (x_n), x_n)) - (-1)^i H(x_n) \\\\\n &= \\nu \\int_0^{\\tau^\\varepsilon (x_n)} |DH(Y^\\varepsilon (t, x_n))| \\, dt \\geq \\nu c_0 \\tau^\\varepsilon (x_n),\n\\end{align*}\nand, therefore,\n\\begin{equation*}\n\\lim_{n \\to \\infty} \\tau^\\varepsilon (x_n) = 0.\n\\end{equation*}\nSetting $\\alpha_n (t) = (-1)^i \\nu DH(Y^\\varepsilon (t, x_n))\/|DH(Y^\\varepsilon (t, x_n))|$\nfor $t \\in [0, \\tau^\\varepsilon (x_n)]$, we get\n\\begin{align*}\nu^\\varepsilon (x_n) &\\leq \\int_0^{\\tau^\\varepsilon (x_n)} L(Y^\\varepsilon (t, x_n), \\alpha_n (t))e^{-\\lambda t} \\, dt\n + g^\\varepsilon (Y^\\varepsilon (\\tau^\\varepsilon (x_n), x_n))e^{-\\lambda \\tau^\\varepsilon (x_n)} \\\\\n &\\leq M \\tau^\\varepsilon (x_n) + g^\\varepsilon (Y^\\varepsilon (\\tau^\\varepsilon (x_n), x_n))e^{-\\lambda \\tau^\\varepsilon (x_n)}, \n\\end{align*}\nand, moreover,\n\\begin{equation*}\n\\limsup_{n \\to \\infty} u^\\varepsilon (x_n) \\leq g^\\varepsilon (x_0),\n\\end{equation*}\nwhich contradicts \\eqref{infinite+}.\n\n\nNext, we consider the case where \\eqref{infinite-} holds for all $n \\in \\mathbb N$.\nWe choose $\\alpha_n \\in L^\\infty ([0, \\infty); \\mathbb R^2)$ and $\\tau_n \\in [0, \\infty)$ for each $n \\in \\mathbb N$\nso that $X_n (t) := X^\\varepsilon (t, x_n, \\alpha_n) \\in \\overline \\Omega$ for all $t \\in [0, \\tau_n]$,\n$X_n (\\tau_n) \\in \\partial \\Omega$, and\n\\begin{equation} \\label{opposite}\nu^\\varepsilon (x_n) + \\frac{\\gamma}{2} > \\int_0^{\\tau_n} L(X_n (t), \\alpha_n (t))e^{-\\lambda t} \\, dt \n + g^\\varepsilon (X_n (\\tau_n))e^{-\\lambda \\tau_n}.\n\\end{equation}\n\n\nWe define $Z^\\varepsilon (t, x)$ for $(t, x) \\in [0, \\infty) \\times \\overline \\Omega_i \\setminus B_\\kappa$\nas the unique solution of\n\\begin{equation*}\n\\dot X(t) = - \\, \\frac{b(X(t))}{\\varepsilon} + (-1)^i \\nu \\, \\frac{DH(X(t))}{|DH(X(t))|} \\ \\ \\ \\text{ and } \\ \\ \\ X(0) = x.\n\\end{equation*}\nSimilarly to the case of $Y^\\varepsilon$, we deduce that there exists $\\sigma (x) \\in [0, \\infty)$ such that\n\\begin{equation*}\nZ^\\varepsilon (\\sigma (x), x) \\in \\partial_i \\Omega \\ \\ \\ \\text{ and } \\ \\ \\\nZ^\\varepsilon (t, x) \\in \\overline \\Omega_i \\setminus B_\\kappa \\ \\ \\ \\text{ for all } t \\in [0, \\sigma (x)].\n\\end{equation*}\n\n\nWe set $s_n = \\sigma (x_n)$, $t_n = s_n + \\tau_n$, and \n\\begin{align*}\nY_n (t) = \n\\begin{cases}\nZ^\\varepsilon (s_n - t, x_n) \\ \\ \\ &\\text{ for } t \\in [0, s_n], \\\\\nX_n (t - s_n) \\ \\ \\ &\\text{ for } t \\in [s_n, t_n].\n\\end{cases}\n\\end{align*}\nNote that $Y_n$ is continuous at $t = s_n$ and satisfies\n\\begin{align*}\n\\dot Y_n (t) =\n\\begin{cases}\n\\cfrac{b(Y_n (t))}{\\varepsilon} - (-1)^i \\nu \\, \\cfrac{DH(Y_n (t))}{|DH(Y_n (t))|} \\ \\ \\ &\\text{ for } t \\in (0, s_n), \\\\\n\\cfrac{b(Y_n (t))}{\\varepsilon} + \\alpha_n (t - s_n) \\ \\ \\ &\\text{ for a.e. } t \\in (s_n, t_n),\n\\end{cases}\n\\end{align*}\nthat $Y_n (t) \\in \\overline \\Omega$ for all $t \\in [0, t_n]$, and that\n\\begin{equation*}\n\\lim_{n \\to \\infty} s_n = 0 \\ \\ \\ \\text{ and } \\ \\ \\ \\lim_{n \\to \\infty} Y_n (0) = x_0.\n\\end{equation*}\nSetting\n\\begin{align*}\n\\beta_n (t) =\n\\begin{cases}\n- (-1)^i \\nu \\, \\cfrac{DH(Y_n (t))}{|DH(Y_n (t))|} \\ \\ \\ &\\text{ for } t \\in (0, s_n), \\\\\n\\alpha_n (t - s_n) \\ \\ \\ &\\text{ for a.e. 
} t \\in (s_n, t_n),\n\\end{cases}\n\\end{align*}\nwe have\n\\begin{equation*}\n\\dot Y_n (t) = \\frac{b(Y_n (t))}{\\varepsilon} + \\beta_n (t) \\ \\ \\ \\text{ for a.e. } t \\in (0, t_n).\n\\end{equation*}\nSince $Y_n (0), Y_n (t_n) \\in \\partial \\Omega$, we see by (G5) that\n\\begin{equation*}\ng^\\varepsilon (Y_n (0)) \\leq \\int_0^{t_n} L(Y_n (t), \\beta_n (t))e^{-\\lambda t} \\, dt\n + g^\\varepsilon (Y_n (t_n))e^{-\\lambda t_n},\n\\end{equation*}\nfrom which, together with \\eqref{opposite} and \\eqref{infinite-}, we get\n\\begin{align*}\ng^\\varepsilon (Y_n (0)) &\\leq Ms_n + e^{-\\lambda s_n} \\Big( \\int_0^{\\tau_n} L(Y_n (s_n + t), \\alpha_n (t))e^{-\\lambda t} \\, dt\n + g^\\varepsilon (Y_n (t_n))e^{-\\lambda \\tau_n} \\Big) \\\\ \n &= Ms_n + e^{-\\lambda s_n} \\Big( \\int_0^{\\tau_n} L(X_n (t), \\alpha_n (t))e^{-\\lambda t} \\, dt\n + g^\\varepsilon (X_n (\\tau_n))e^{-\\lambda \\tau_n} \\Big) \\\\\n &< Ms_n + e^{- \\lambda s_n} \\left( u^\\varepsilon (x_n) + \\frac{\\gamma}{2} \\right)\n \\leq Ms_n + e^{-\\lambda s_n} \\left( g^\\varepsilon (x_0) - \\frac{\\gamma}{2} \\right). \n\\end{align*}\nSending $n \\to \\infty$ yields\n\\begin{equation*}\ng^\\varepsilon (x_0) \\leq g^\\varepsilon (x_0) - \\frac{\\gamma}{2},\n\\end{equation*}\nwhich is a contradiction.\nThus, we conclude that $u^\\epsilon$ satisfies \\eqref{bdry-c}.\nIn particular, we have $u^\\varepsilon (x) = g^\\varepsilon (x)$ for all \n$x \\in \\partial \\Omega$.\n\n\nTo see the continuity of $u^\\varepsilon$, we note that the pde \\eqref{pde*} has the form\n\\begin{equation*}\n\\lambda u + F(x, Du) = 0 \\ \\ \\ \\text{ in } \\Omega,\n\\end{equation*}\nwhere $F$ is given by\n\\begin{equation*}\nF(x, p) = - \\, \\frac{b(x) \\cdot p}{\\varepsilon} + G(x, p)\n\\end{equation*}\nand satisfies \n\\begin{equation*}\n|F(x, p) - F(y, p)| \\leq \\frac{K}{\\varepsilon} |x - y| |p| + m(|x - y|(|p| + 1)) \\ \\ \\ \n\\text{ for all } x, y \\in \\Omega \\text{ and } p \\in \\mathbb R^2,\n\\end{equation*}\nwith $K > 0$ being a Lipschitz bound of $b$.\nA standard comparison theorem, together with \\eqref{bdry-c} and the viscosity properties of $u^\\varepsilon$, ensures\nthat $(u^\\varepsilon)^\\ast \\leq (u^\\varepsilon)_\\ast$ on $\\overline \\Omega$,\nwhich implies the continuity of $u^\\varepsilon$ on $\\overline \\Omega$.\n\\end{proof}\n\nHenceforth, throughout this paper $u^\\varepsilon$ denotes the function \ndefined in Proposition \\ref{viscosity-sol}. \n\n\n\n\n\\begin{prop} \\label{prop: DPP}\nAssume that {\\rm (G1)--(G5)} hold.\nLet $\\varepsilon \\in (0, \\varepsilon_0)$, $t \\geq 0$, and $x \\in \\overline \\Omega$. 
Then\n\\begin{align*}\nu^\\varepsilon (x) = \\inf \\Big\\{ \\int_0^{t \\wedge \\tau^\\varepsilon} L(X^\\varepsilon (s, x, \\alpha )&, \\alpha (s)) e^{-\\lambda s} \\, ds\n+ \\mathbf{1}_{\\{ t < \\tau^\\varepsilon \\}} u^\\varepsilon (X^\\varepsilon (t, x, \\alpha )) e^{-\\lambda t} \\\\ \n&+ \\mathbf{1}_{\\{ t \\geq \\tau^\\varepsilon \\}} g^\\varepsilon (X^\\varepsilon (\\tau^\\varepsilon, x, \\alpha )) e^{-\\lambda \\tau^\\varepsilon} \\ | \\ \\alpha \\in L^\\infty (\\mathbb R ; \\mathbb R^2 ) \\Big\\},\n\\end{align*}\nwhere $\\tau^\\varepsilon$ is a visiting time in $\\partial \\Omega$\nof the trajectory $\\{ X^\\varepsilon (t, x, \\alpha) \\}_{t \\geq 0}$ constrained in $\\overline \\Omega$.\n\\end{prop}\n\n\nWe refer to, for instance, \\cites{L} for a proof of this proposition.\nThe identity in the proposition above is called the \\emph{dynamic programming principle}.\n\n\n\n\nWe introduce the half relaxed-limits of $u^\\varepsilon$ as $\\varepsilon \\to 0+$:\n\\begin{align*}\n&v^+ (x) = \\lim_{r \\to 0+} \\sup \\{ u^\\varepsilon (y) \\ | \\ y \\in B_r(x) \\cap \\overline \\Omega, \\ \\varepsilon \\in (0, r) \\}, \\\\\n&v^- (x) = \\lim_{r \\to 0+} \\inf \\{ u^\\varepsilon (y) \\ | \\ y \\in B_r(x) \\cap \\overline \\Omega, \\ \\varepsilon \\in (0, r) \\},\n\\end{align*}\nwhich are well-defined and bounded on $\\overline \\Omega$\nsince the family $\\{ u^\\varepsilon \\}_{\\varepsilon \\in (0, \\varepsilon_0)}$\nis uniformly bounded on $\\overline \\Omega$.\n\n\nIn addition to (G1)--(G5), we always assume the following (G6).\n\n \n\\begin{itemize}\n\\item[(G6)] There exist constants $d_i$, with $i \\in \\{ 1, 2, 3\\}$, such that\n\t\t\t\t$v^\\pm (x) = d_i$ for all $x \\in \\partial_i \\Omega$ and $i \\in \\{ 1, 2, 3 \\}$.\n\\end{itemize}\nObviously, this implies that\n\\begin{equation*} \\label{g-d_i}\n\\lim_{\\Omega \\ni y \\to x} v^\\pm (y) = \\lim_{\\varepsilon \\to 0+} g^\\varepsilon (x) = d_i \\ \\ \\\n\\text{ uniformly for } x \\in \\partial_i \\Omega \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\n\nAssumption (G6) is rather implicit and looks restrictive, but it simplifies our arguments below,\nsince, in the limiting process of sending $\\varepsilon \\to 0+$,\nany boundary layer does not occur. Our use of assumptions (G5) and (G6) \nis somewhat related to the fact that \n\\eqref{epHJ} is not coercive when $\\varepsilon$ is very small. \nHowever, for instance, in the case where $G(x, p) = |p| - f(x)$ with $f \\in C(\\overline \\Omega)$ and $f \\geq 0$,\n$g^\\varepsilon \\equiv 0$, and $d_i = 0$ for all $i \\in \\{ 1, 2, 3 \\}$, assumptions (G1)--(G6) hold.\nOur formulation of asymptotic analysis of \\eqref{epHJ} is based on (G5) and (G6),\nwhich may look a bit silly in the sense that it is not clear which $g^\\varepsilon$ and $d_i$, with $i \\in \\{ 1, 2, 3 \\}$, \nsatisfy (G5) and (G6). This question will be taken up in Section 7 and there \nwe give a fairly general sufficient condition on the data $(d_1,d_2,d_3)$ \nfor which conditions (G5) and (G6) hold. 
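For instance, in the example $G(x,p)=|p|-f(x)$ mentioned above, the Lagrangian can be computed directly from its definition:\n\\begin{equation*}\nL(x, \\xi ) = \\sup_{p \\in \\mathbb R^2} \\{ -\\xi \\cdot p - |p| \\} + f(x) =\n\\begin{cases}\nf(x) \\ \\ \\ &\\text{ if } |\\xi| \\leq 1, \\\\\n+\\infty \\ \\ \\ &\\text{ if } |\\xi| > 1,\n\\end{cases}\n\\end{equation*}\nso that $L \\geq 0$, and the inequality required in (G5) with $g^\\varepsilon \\equiv 0$ holds trivially.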
\n\n\n\n\n\\section{Main result}\n\n\n\\subsection{The limiting problem}\n\n\nIn this section,\nwe are concerned with the nonlinear ordinary differential equation \n\\begin{equation} \\label{lim-HJ}\n\\lambda u + \\overline G_i (h, u') = 0 \\ \\ \\ \\text{ in } J_i \\text{ and } i\\in\\{ 1, 2, 3 \\}, \\tag{$3.1_i$} \n\\end{equation} \nwhere $u : J_i \\to \\mathbb R$ is the unknown and\n$\\overline G_i : \\bar J_i \\setminus \\{ 0 \\} \\times \\mathbb R \\to \\mathbb R$ is the function defined by\n\\begin{equation*}\n\\overline G_i (h, q) = \\cfrac{1}{T_i (h)} \\int_0^{T_i (h)} G \\big( X(t, x), qDH(X(t, x)) \\big) \\, dt,\n\\end{equation*} \nwhere $x \\in \\overline \\Omega$ is chosen so that $x \\in c_i (h)$.\nThe value of the integral above is independent of the choice of $x \\in c_i (h)$.\nIn Theorem \\ref{main} in the next section, \nthe limit of $u^\\varepsilon$, as $\\varepsilon \\to 0+$,\nis described by use of an ordered triple of viscosity solutions of \\eqref{lim-HJ}, \nwith $i \\in \\{ 1, 2, 3 \\}$.\n\n\n\\setcounter{equation}{1}\n\n\nWe give here some lemmas concerning odes \\eqref{lim-HJ}. \n\n\n\\begin{lem} \\label{G-bar}\nFor any $i \\in \\{ 1, 2, 3 \\}$, $\\overline G_i \\in C(\\bar J_i \\setminus \\{ 0 \\} \\times \\mathbb R)$,\nand the function $\\overline G_i$ is locally coercive in the sense that,\nfor any compact interval $I$ of $\\bar J_i \\setminus \\{ 0 \\}$,\n\\begin{equation*}\n\\lim_{r \\to \\infty} \\inf \\{ \\overline G_i (h, q) \\, | \\, h \\in I, |q| \\geq r \\} = \\infty.\n\\end{equation*}\n\\end{lem}\n\n\n\\begin{proof}\nThe continuity of $\\overline G_i$ follows from the definition of $\\overline G_i$ and Lemma \\ref{T_i}.\n\n\nFix any $i \\in \\{ 1, 2, 3 \\}$ and $h_0 \\in J_i$.\nSet $I = [h_0, h_2]$ if $i = 2$ and, otherwise, $I = [h_i, h_0]$.\nWe choose $c_0 > 0$ so that\n\\begin{equation*}\n|DH(x)| \\geq c_0 \\ \\ \\ \\text{ for all } x \\in \\bigcup_{r \\in I} c_i (r).\n\\end{equation*}\nLet $h \\in I$ and choose $x \\in c_i (h)$.\nBy \\eqref{lower-bound-L}, we get\n\\begin{align} \\label{loc-coer}\n\\begin{aligned}\n\\overline G_i (h, q) &\\geq \\frac{1}{T_i (h)} \\int_0^{T_i (h)} \\Big( \\nu |q| |DH(X(t, x))| - M \\Big) \\, dt \\\\\n &\\geq \\nu c_0 |q| - M.\n\\end{aligned}\n\\end{align}\nThis shows the local coercivity of $\\overline G_i$.\n\\end{proof}\n\n\nFor $i \\in \\{1, 2, 3 \\}$, let $\\mathcal S_i$ (resp., $\\mathcal S_i^-$ or $\\mathcal S_i^+$) be the set of\nall viscosity solutions (resp., viscosity subsolutions or viscosity supersolutions ) of \\eqref{lim-HJ}.\n\n\n\\begin{lem} \\label{con-ext}\nLet $i \\in \\{ 1, 2, 3 \\}$ and $u \\in \\mathcal S_i^-$.\nThen $u$ is uniformly continuous in $J_i$ and, hence,\nit can be extended uniquely to $\\bar J_i$ as a continuous function on $\\bar J_i$.\n\\end{lem}\n\n\n\\begin{proof}\nFor any compact interval $I$ of $J_i$,\nnoting that $u$ is upper semicontinuous in $J_i$ and $\\overline G_i$ is locally coercive,\nwe find that $|u'| \\leq C_1 (u, I)$ in the viscosity sense\nfor some constant $C_1 (u, I) > 0$ depending on $u$ and $I$, \nwhich shows that $u$ is locally Lipschitz continuous in $J_i$.\n\n\nLet $h \\in J_i$ and fix $x \\in c_i (h)$.\nBy \\eqref{loc-coer}, we have\n\\begin{equation*}\n\\overline G_i (h, q) \\geq \\frac{ \\nu L_i (h)}{T_i (h)} |q| - M.\n\\end{equation*}\nHence, we get\n\\begin{equation} \\label{uc}\n\\lambda u + \\frac{\\nu L_i}{T_i} |u'| - M \\leq 0 \\ \\ \\ \\text{ in } J_i\n\\end{equation}\nin the viscosity sense and hence in the almost everywhere sense.\n\n\nWe define 
$v \\in C(J_i)$ by $v(h) = \\lambda u(h) - M$ and observe that\n\\begin{equation*}\n|v' (h)| + \\frac{\\lambda T_i (h)}{\\nu L_i (h)} v(h) \\leq 0 \\ \\ \\ \\text{ for a.e. } h \\in J_i.\n\\end{equation*}\nIt is obvious that the length $L_i (h)$ of $c_i (h)$ is bounded from below by a positive constant,\nwhile Lemma \\ref{T_i} assures that $T_i \\in L^1 (J_i)$.\nConsequently, we find that $T_i\/L_i \\in L^1 (J_i)$.\nGronwall's inequality yields, for any $h, a \\in J_i$,\n\\begin{equation} \\label{bound-v}\n|v(h)| \\leq |v(a)| \\exp \\int_{J_i} \\frac{\\lambda T_i (s)}{\\nu L_i (s)} \\, ds,\n\\end{equation}\nwhich shows that $u$ is a bounded function in $J_i$.\nFrom \\eqref{uc}, we get\n\\begin{equation} \\label{abs-con}\n|u' (h)| \\leq \\frac{T_i (h)}{\\nu L_i (h)} \\left( M + \\lambda \\sup_{J_i} |u| \\right) \\ \\ \\ \\text{ for a.e. } h \\in J_i.\n\\end{equation}\nSince $T_i\/L_i \\in L^1 (J_i)$, the inequality above shows that $u$ is uniformly continuous in $J_i$.\n\\end{proof}\n\n\n\nThanks to the lemma above, we may assume\nany $u \\in \\mathcal S_i^-$, with $i \\in \\{ 1, 2, 3 \\}$, as a function in $C(\\bar J_i)$.\nTo make this explicit notationally, we write $\\mathcal S_i^- \\cap C(\\bar J_i)$ for $\\mathcal S_i^-$.\nThis comment also applies to $\\mathcal S_i$ since $\\mathcal S_i \\subset \\mathcal S_i^-$.\n\n\nThe following lemma is a direct consequence of \\eqref{abs-con}.\n\n\n\\begin{lem} \\label{equi-con}\nLet $i \\in \\{ 1, 2, 3 \\}$ and $\\mathcal S \\subset \\mathcal S_i^-$.\nAssume that $\\mathcal S$ is uniformly bounded on $\\bar J_i$.\nThen $\\mathcal S$ is equi-continuous on $\\bar J_i$.\n\\end{lem}\n\n\n\\begin{lem} \\label{bound-by-bc}\nLet $i \\in \\{ 1, 2, 3 \\}$ and $u \\in \\mathcal S_i^- \\cap C(\\bar J_i)$.\nThen there exists a constant $C > 0$, independent of $u$, such that\n\\begin{equation*}\n|u(h)| \\leq C(|u(a)| + 1) \\ \\ \\ \\text{ for all } h, a \\in \\bar J_i.\n\\end{equation*} \n\\end{lem}\n\n\n\\begin{proof}\nSet\n\\begin{equation*}\nC_1 = \\exp \\int_{J_i} \\frac{\\lambda T_i (s)}{\\nu L_i (s)} \\, ds,\n\\end{equation*}\nand fix $h, a \\in \\bar J_i$.\nAccording to \\eqref{bound-v}, we have\n\\begin{equation*}\n|\\lambda u(h) - M| \\leq C_1 |\\lambda u(a) -M|,\n\\end{equation*}\nand, hence,\n\\begin{equation*}\n|u(h)| \\leq C_1 |u(a)| + 2 \\lambda^{-1} C_1 M. 
\\qedhere\n\\end{equation*}\n\\end{proof}\n\n\n\n \n \n\\subsection{Main result}\n\n\nThe main result is stated as follows.\nRecall that, throughout this paper, (H1)--(H3) and (G1)--(G6) are satisfied\nand $u^\\varepsilon$ is the unique solution of \\eqref{epHJ}.\n\n\n\\begin{thm} \\label{main}\nThere exist functions $u_i \\in \\mathcal S_i \\cap C(\\bar J_i)$, with $i \\in \\{ 1, 2, 3 \\}$,\nsuch that $u_1 (0) = u_2 (0) = u_3 (0)$,\n\\begin{equation*}\nu_i (h_i) = d_i \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nand, as $\\varepsilon \\to 0+$,\n\\begin{equation*}\nu^\\varepsilon \\to u_i \\circ H \\ \\ \\ \\text{ uniformly on } \\overline \\Omega_i \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nThat is, if we define $u_0 \\in C(\\overline \\Omega)$ by\n\\begin{align*}\nu_0 (x) = \n\\begin{cases}\nu_1 \\circ H (x) \\ \\ \\ \\text{ if } x \\in \\overline \\Omega_1, \\\\\nu_2 \\circ H (x) \\ \\ \\ \\text{ if } x \\in \\overline \\Omega_2, \\\\\nu_3 \\circ H (x) \\ \\ \\ \\text{ if } x \\in \\overline \\Omega_3, \n\\end{cases}\n\\end{align*}\nthen, as $\\varepsilon \\to 0+$,\n\\begin{equation*}\nu^\\varepsilon \\to u_0 \\ \\ \\ \\text{ uniformly on } \\overline \\Omega.\n\\end{equation*}\n\\end{thm}\n\n\nBefore giving the proof,\nwe note that the stability of viscosity solutions yields\n\\begin{equation*} \n-b \\cdot Dv^+ \\leq 0 \\ \\ \\ \\text{ and } \\ \\ \\ -b \\cdot Dv^- \\geq 0 \\ \\ \\ \\text{ in } \\Omega,\n\\end{equation*}\nin the viscosity sense.\nThese show that $v^+$ and $v^-$ are nondecreasing and nonincreasing\nalong the flow $\\{ X(t, x) \\}_{t \\in \\mathbb R}$, respectively.\n\n\nFix $i \\in \\{ 1, 2, 3 \\}$ and $x \\in \\Omega_i$ and set $h = H(x)$.\nThe monotonicity of $v^+$ along the flow $\\{ X(t, x) \\}_{t \\in \\mathbb R}$ yields, for all $t \\in [0, T_i (h)]$,\n\\begin{equation*}\nv^+ (x) = v^+ \\big( X(T_i (h), x) \\big) \\geq v^+ (X(t, x)) \\geq v^+ (X(0, x)) = v^+ (x).\n\\end{equation*}\nHence $v^+$ is constant on the loop $c_i (h)$.\nSimilarly we see that $v^-$ is also constant on the loop $c_i (h)$.\n\n\nThus, for any $h \\in J_i$ and $i \\in \\{ 1, 2, 3 \\}$,\nthe image $v^+ (c_i (h)) := \\{ v^+ (x) \\, | \\, x \\in c_i (h) \\}$ of $c_i (h)$ by $v^+$\n(resp., $v^- (c_i (h)) := \\{ v^- (x) \\, | \\, x \\in c_i (h) \\}$ of $c_i (h)$ by $v^-$)\nconsists of a single element. 
This ensures that the relation\n\\begin{equation} \\label{u_i^pm}\nu_i^+ (h) \\in v^+ (c_i (h)) \\ \\ \\ \\text{ (resp., } u_i^- (h) \\in v^- (c_i (h)))\n\\end{equation}\ndefines a function $u_i^+$ in $J_i$ (resp., $u_i^-$ in $J_i$).\nIt is easily seen that $u_i^+$ and $u_i^-$ are, respectively,\nupper and lower semicontinuous in $J_i$.\n\n\nFor the proof of Theorem \\ref{main}, we need the following three propositions.\n\n\n\\begin{thm} \\label{characterize}\nFor all $i \\in \\{ 1, 2, 3 \\}$, $u_i^+ \\in \\mathcal S_i^-$ and $u_i^- \\in \\mathcal S_i^+$.\n\\end{thm}\n\n\nWith this theorem at hand,\nwe assume (see Lemma \\ref{con-ext}) that $u_i^+ \\in C(\\bar J_i)$ for all $i \\in \\{ 1, 2, 3 \\}$.\nMoreover, by assumption (G6), we have\n\\begin{equation} \\label{bc-u_i^+}\nu_i^+ (h_i) = \\lim_{J_i \\ni h \\to h_i} u_i^- (h) = v^{\\pm} (x) = d_i\n\\ \\ \\ \\text{ for all } x \\in c_i (h_i) \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation}\n\n\n\\begin{lem} \\label{v^+-u_i^+}\nWe have\n\\begin{equation*}\nv^+ (x) \\leq \\min _{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0) \\ \\ \\ \\text{ for all } x \\in c_2 (0).\n\\end{equation*}\n\\end{lem}\n\n\n\\begin{lem} \\label{v^--d_0} \nSet $d_0 = \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0)$. Then\n\\begin{equation*}\nv^- (x) \\geq d_0 \\ \\ \\ \\text{ for all } x \\in c_2 (0). \n\\end{equation*}\n\\end{lem}\n\n\nAssuming temporarily\nTheorem \\ref{characterize}, and Lemmas \\ref{v^+-u_i^+} and \\ref{v^--d_0},\nwe continue with the\n\n\n\\begin{proof}[Proof of Theorem \\ref{main}]\nBy Theorem \\ref{characterize}, we have\n$u_i^+ \\in \\mathcal S_i^- \\cap C(\\bar J_i)$ and $u_i^- \\in \\mathcal S_i^+$ for all $i \\in \\{ 1, 2, 3 \\}$.\n\n\nBy definition of the half-relaxed limits, it is obvious that\n\\begin{equation*} \\label{v^--v^+}\nv^- \\leq v^+ \\ \\ \\ \\text{ on } \\overline \\Omega,\n\\end{equation*}\nthat $v^+$ and $-v^-$ are upper semicontinuous on $\\overline \\Omega$\nand that if $v^+ \\leq v^-$, then $v^+ = v^-$ and, as $\\varepsilon \\to 0+$,\n\\begin{equation*}\nu^\\varepsilon \\to v^- = v^+ \\ \\ \\ \\text{ uniformly on } \\overline \\Omega.\n\\end{equation*}\n\n\nBy Lemmas \\ref{v^+-u_i^+} and \\ref{v^--d_0}, we have\n\\begin{equation*}\nv^+ (x) \\leq d_0 \\leq v^- (x) \\ \\ \\ \\text{ for all } x \\in c_2 (0),\n\\end{equation*}\nwhere $d_0 := \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0)$.\nMoreover, by the semicontinuity properties of $v^\\pm$, we get\n\\begin{equation*}\nu_i^+ (0) \\leq v^+ (x) \\leq d_0 \\leq v^- (x) \\leq \\lim_{J_i \\ni h \\to 0} u_i^- (h)\n\\ \\ \\ \\text{ for all } x \\in c_2 (0) \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nThis implies that \n\\begin{equation*}\nu_i^+ (0) = v^+ (x) = v^- (x) = \\lim_{J_i \\ni h \\to 0} u_i^- (h)\n\\ \\ \\ \\text{ for all } x \\in c_2 (0) \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nAccording to \\eqref{bc-u_i^+}, we have\n\\begin{equation*}\nu_i^+ (h_i) = \\lim_{J_i \\ni h \\to h_i} u_i^- (h) \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nThus, by the comparison principle applied to \\eqref{lim-HJ},\nwe find that $u_i^+ = u_i^-$ in $J_i$ for all $i \\in \\{ 1, 2, 3 \\}$.\nIn particular, setting $u_i = u_i^+$ on $\\bar J_i$ for $i \\in \\{ 1, 2, 3 \\}$ and recalling \\eqref{bc-u_i^+},\nwe see that $u_i \\in \\mathcal S_i \\cap C(\\bar J_i)$ for all $i \\in \\{ 1, 2, 3 \\}$,\nthat $u_1 (0) = u_2 (0) = u_3 (0)$, that $u_i (h_i) = d_i$ for all $i \\in \\{ 1, 2, 3 \\}$, and that\n\\begin{equation*}\nv^+ (x) = v^- (x) = u_i (h) \\ \\ \\ \\text{ 
for all } x \\in c_i (h), h \\in \\bar J_i, \\text{ and } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nthat is, \n\\begin{equation*}\nv^+ (x) = v^- (x) = u_i \\circ H(x) \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega_i \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*} \nThis completes the proof. \n\\end{proof}\n\n\n\n\n\\section{Proof of Theorem \\ref{characterize}}\n\n\nBefore giving the proof of Theorem \\ref{characterize},\nwe introduce the functions $\\tau_i$ and $\\tilde \\tau_i$.\n\n\nFor each $i \\in \\{ 1, 2, 3 \\}$, we fix $p_i \\in c_i (0) \\setminus \\{ 0 \\}$,\ndenote by $Y_i (h)$ the solution of the initial value problem\n\\begin{align*}\n\\begin{cases}\nY' (h) = \\cfrac{DH(Y(h))}{|DH(Y(h))|^2} \\ \\ \\ \\text{ for } h \\in \\bar J_i \\setminus \\{ 0 \\}, \\\\\nY(0) = p_i,\n\\end{cases}\n\\end{align*}\nand set\n\\begin{equation*}\nl_i = \\{ Y_i (h) \\ | \\ h \\in \\bar J_i \\setminus \\{ 0 \\} \\}.\n\\end{equation*}\nIt is immediate that\n\\begin{equation*}\nY_i \\in C^1(\\bar J_i ; \\mathbb R^2) \\ \\ \\ \\text{ and } \\ \\ \\ H(Y_i (h)) = h \\ \\ \\ \\text{ for all } h \\in \\bar J_i \\text{ and } i \\in \\{ 1, 2, 3 \\}. \n\\end{equation*} \n\n\nFor $i \\in \\{ 1, 2, 3 \\}$ and $x \\in \\overline \\Omega_i \\setminus c_i (0)$,\nlet $\\tau_i (x)$ be the first time\nthe flow $\\{ X(t, x) \\}_{t > 0}$ reaches the curve $l_i$, that is,\n\\begin{equation} \\label{def-tau_i}\n\\tau_i (x) = \\inf \\{ t > 0 \\ | \\ X(t, x) \\in l_i \\}.\n\\end{equation} \nNote that although $\\tau_i$ are continuous\nin $\\overline \\Omega_i \\setminus (c_i (0) \\cup l_i)$,\nthey have jump discontinuities across the curves $l_i$.\nTo avoid this difficulty, for each $i \\in \\{ 1, 2, 3 \\}$,\nwe modify $\\tau_i$ near $l_i$ by considering\nthe set $U_i = \\{ x \\in \\overline \\Omega_i \\setminus c_i (0) \\ | \\ \\tau_i (x) \\not= T_i \\circ H(x)\/2 \\}$\nand the function $\\tilde \\tau_i : U_i \\to (0, \\infty)$ defined by\n\\begin{equation*}\n\\tilde \\tau_i (x) =\n\\begin{cases}\n\\tau_i (x) \\ \\ \\ &\\text{ if } \\tau_i (x) > T_i \\circ H(x)\/2, \\\\\n\\tau_i (x) + T_i \\circ H(x) \\ \\ \\ &\\text{ if } \\tau_i (x) < T_i \\circ H(x)\/2.\n\\end{cases}\n\\end{equation*}\n\n\n\\begin{lem}\nFor all $i \\in \\{ 1, 2, 3 \\}$, $\\tau_i \\in C^1 \\left( \\overline \\Omega_i \\setminus (c_i (0) \\cup l_i) \\right)$ and $\\tilde \\tau_i \\in C^1 (U_i)$.\n\\end{lem}\n\n\nThe lemma above, as well as Lemma \\ref{T_i},\ncan be found in \\cites{IS} in a slightly different context,\nto which we refer for the proof.\n\n\\begin{proof}[Proof of Theorem \\ref{characterize}]\nWe only show that $u_1^+ \\in \\mathcal S_1^-$\nsince the other cases can be treated in a similar way.\n\n\nLet $\\phi \\in C^1 (J_1)$ and $\\hat h \\in J_1$ and assume that\n$\\hat h$ is a strict maximum point in $J_1$ of the function $u_1^+ - \\phi$.\nSet $V_r = \\{ x \\in \\Omega_1 \\ | \\ |H(x) - \\hat h| < r \\}$ for $r > 0$\nand fix $r > 0$ so that $\\overline V_r \\subset \\Omega_1$.\n\nFix any $\\eta > 0$.\nDefine the function $g \\in C(\\Omega_1)$ by\n\\begin{equation*}\ng(x) = G\\big( x, \\phi' \\circ H(x)DH(x) \\big),\n\\end{equation*}\nand choose a function $f \\in C^1 (\\Omega_1)$ so that\n\\begin{equation*}\n|g(x) - f(x)| < \\frac{\\eta}{2} \\ \\ \\ \\text{ for all } x \\in V_r.\n\\end{equation*}\n\n\nLet $\\psi$ be the function in $\\Omega_1$ defined by\n\\begin{equation*}\n\\psi (x) = \\int_0^{\\tau_1 (x)} \\Big( f(X(t, x)) - \\bar f(x) \\Big) \\, dt,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\bar f (x) := 
\\cfrac{1}{T_1 \\circ H(x)} \\int_0^{T_1 \\circ H(x)} f(X(s, x)) \\, ds,\n\\end{equation*}\nand observe that\n\\begin{equation} \\label{solvability}\n\\int_0^{T_1 \\circ H(x)} \\Big( f(X(t, x)) - \\bar f(x) \\Big) \\, dt = 0 \\ \\ \\ \\text{ for all } x \\in \\Omega_1.\n\\end{equation}\n\n\nRecalling that\n$X \\in C^1 (\\mathbb R \\times \\mathbb R^2)$,\n$\\tau_1 \\in C^1 \\left( \\overline \\Omega_1 \\setminus (c_1 (0) \\cup l_1) \\right)$, and $T_1 \\in C^1 (\\bar J_1 \\setminus \\{ 0 \\})$,\nit is clear that $\\psi \\in C^1 (\\Omega_1 \\setminus l_1)$.\nMoreover, recalling the definition of $\\tilde \\tau_1$,\nwe obtain from \\eqref{solvability} that\n\\begin{equation*}\n\\psi (x) = \\int_0^{\\tilde \\tau_1 (x)} \\Big( f(X(t, x)) - \\bar f(x) \\Big) \\, dt \\ \\ \\ \\text{ for all } x \\in U_1,\n\\end{equation*}\nand, hence, we see that $\\psi \\in C^1 (U_1)$\nand, moreover, that $\\psi \\in C^1 (\\Omega_1)$.\nBy using the dynamic programming principle, we see that\n\\begin{equation} \\label{eq: colector}\n- b \\cdot D\\psi = -f + \\bar f \\ \\ \\ \\text{ in } \\Omega_1. \n\\end{equation} \nIndeed, for any $x \\in \\Omega_1$ and $s \\in \\mathbb R$, we have \n\\begin{align*}\n\\begin{aligned}\n\\psi (X(-s, x)) &= \\int_0^{\\tau_1 (X(-s, x))} \\Big( f \\big( X(t, X(-s, x)) \\big) - \\bar f(X(-s, x)) \\Big) \\, dt \\\\\n &=\\int_{-s}^{\\tau_1(x)} \\Big( f(X(t, x)) - \\bar f(x) \\Big) \\, dt.\n\\end{aligned}\n\\end{align*}\nDifferentiating this with respect to $s$ at $s=0$, we get \n\\begin{equation*}\n-b(x) \\cdot D\\psi (x) = -f(x) + \\bar f(x). \n\\end{equation*}\n\n \nChoose sequences $\\{ \\varepsilon_n \\}_{n \\in \\mathbb N} \\subset (0, 1)$, converging to zero,\nand $\\{ y_n \\}_{n \\in \\mathbb N} \\subset \\overline V_r$ so that $\\lim_{n \\to \\infty} y_n = x_0$\nand $\\lim_{n \\to \\infty} u^{\\varepsilon_n} (y_n) = v^+ (x_0)$ for some $x_0 \\in c_1 (\\hat h)$.\nLet $\\{ x_n \\}_{n \\in \\mathbb N} \\subset \\overline V_r$ be a sequence consisting of\nmaximum points over $\\overline V_r$ of the functions $u^{\\varepsilon_n} - \\phi \\circ H - \\varepsilon_n \\psi$.\nBy replacing the sequence by its subsequence if necessary, we may assume that\n$\\lim_{n \\to \\infty} x_n = \\hat x$ and $\\lim_{n \\to \\infty} u^{\\varepsilon_n} (x_n) = a$\nfor some $\\hat x \\in \\overline V_r$ and $a \\in \\mathbb R$.\nNoting that, for all $n \\in \\mathbb N$,\n\\begin{equation*}\n(u^{\\varepsilon_n} - \\phi \\circ H - \\varepsilon_n \\psi )(y_n) \\leq (u^{\\varepsilon_n} - \\phi \\circ H - \\varepsilon_n \\psi )(x_n),\n\\end{equation*}\nand letting $n \\to \\infty$, we obtain\n\\begin{align*}\n(u_1^+ - \\phi )(\\hat h) &= (v^+ - \\phi \\circ H)(x_0) \\leq a - \\phi \\circ H (\\hat x) \\\\\n &\\leq (v^+ - \\phi \\circ H)(\\hat x) = (u_1^+ - \\phi) \\circ H(\\hat x).\n\\end{align*}\nSince $\\hat h$ is a strict maximum point in $J_1$ of the function $u_1^+ - \\phi$,\nwe see that $\\hat x \\in c_1 (\\hat h)$ and, moreover, that $a = u_1^+ (\\hat h)$. 
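\n\n\nFor clarity, we spell out this last step; it uses only the inequalities displayed above and the inclusion $\\overline V_r \\subset \\Omega_1$. Since $\\hat x \\in \\overline V_r \\subset \\Omega_1$, we have $H(\\hat x) \\in J_1$, and the maximality of $\\hat h$ yields\n\\begin{equation*}\n(u_1^+ - \\phi ) \\circ H(\\hat x) \\leq (u_1^+ - \\phi )(\\hat h),\n\\end{equation*}\nso that all the inequalities in the chain above are in fact equalities. Since the maximum at $\\hat h$ is strict, this forces $H(\\hat x) = \\hat h$, that is, $\\hat x \\in c_1 (\\hat h)$, and then $a - \\phi (\\hat h) = (u_1^+ - \\phi )(\\hat h)$, that is, $a = u_1^+ (\\hat h)$.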
\n\n\nIf $\\varepsilon = \\varepsilon_n$ and $n$ is sufficiently large, then $x_n \\in V_r$ and\n\\begin{equation*}\n\\lambda u^\\varepsilon (x_n) - b(x_n) \\cdot D\\psi (x_n) + G\\big( x_n, \\phi' \\circ H(x_n)DH(x_n) + \\varepsilon D\\psi (x_n) \\big) \\leq 0.\n\\end{equation*} \nCombining this with \\eqref{eq: colector} yields\n\\begin{equation*}\n\\lambda u^\\varepsilon (x_n) - f(x_n) + \\bar f(x_n) + G\\big( x_n, \\phi' \\circ H(x_n)DH(x_n) + \\varepsilon D\\psi (x_n) \\big) \\leq 0.\n\\end{equation*} \nTaking the limit, as $n \\to \\infty$, in the inequality above, we get\n\\begin{equation*}\n\\lambda u_1^+ (\\hat h) - f(\\hat x) + \\bar f(\\hat x) + g(\\hat x) \\leq 0,\n\\end{equation*} \nand, moreover,\n\\begin{align*}\n0 &\\geq \\lambda u_1^+ (\\hat h) - f(\\hat x) + g(\\hat x) + \\frac{1}{T_1 (\\hat h)} \\int_0^{T_1(\\hat h)} f(X(t, \\hat x)) \\, dt \\\\\n &> \\lambda u_1^+ (\\hat h) - \\frac{\\eta}{2} + \\frac{1}{T_1 (\\hat h)} \\int_0^{T_1(\\hat h)} \\Big( g(X(t, \\hat x)) - \\frac{\\eta}{2} \\Big) \\, dt \\\\ \n &= \\lambda u_1^+ (\\hat h) + \\overline G_1 (\\hat h, \\phi' (\\hat h)) - \\eta.\n\\end{align*} \nSince $\\eta > 0$ is arbitrary, we conclude from the inequality above that\n\\begin{equation*}\n\\lambda u_1^+ (\\hat h) + \\overline G_1 (\\hat h, \\phi' (\\hat h)) \\leq 0,\n\\end{equation*}\nand, therefore, $u_1^+ \\in \\mathcal S_1^-$.\n\\end{proof}\n\n\n\n\n\n\n\\section{Proof of lemma \\ref{v^+-u_i^+}}\n\n\nThe key point of the proof of Lemma \\ref{v^+-u_i^+}\nis the behavior of $H(X^\\varepsilon (t, x, \\alpha))$ regarding the initial value $x$\nin a neighborhood of the homoclinic orbit $\\{ x \\in \\mathbb R^2 \\ | \\ H(x) = 0 \\}$.\n\n\nWe write $\\mathcal C$ for the subspace of those $\\beta \\in C^1 (\\Omega \\setminus \\{ 0 \\}; \\mathbb R^2)$\nthat are bounded in $\\Omega \\setminus \\{ 0 \\}$\nand, for each $\\varepsilon \\in (0, \\varepsilon_0)$ and $\\beta \\in \\mathcal C$,\nconsider the initial value problem\n\\begin{align} \\label{beta}\n\\begin{cases}\n\\dot X^\\varepsilon (t) = \\cfrac{1}{\\varepsilon} \\, b(X^\\varepsilon (t)) + \\beta (X^\\varepsilon (t)) \\ \\ \\ \\text{ for } t \\in \\mathbb R, \\\\\nX^\\varepsilon (0) = x \\in \\Omega \\setminus \\{ 0 \\}.\n\\end{cases}\n\\end{align}\nAs is well-known, problem \\eqref{beta} has a unique solution $X^\\varepsilon (t)$,\nwhich is also denoted by $\\xi^\\varepsilon (t, x, \\beta)$,\nin the maximal interval $(\\sigma_-^\\varepsilon (x, \\beta), \\sigma_+^\\varepsilon (x, \\beta))$\nwhere $\\sigma_-^\\varepsilon (x, \\beta) < 0 < \\sigma_+^\\varepsilon (x, \\beta)$,\nand the maximality means that\neither $\\sigma_-^\\varepsilon (x, \\beta) = -\\infty$ or\n$\\lim_{t \\to \\sigma_-^\\varepsilon (x, \\beta) + 0} \\mathrm{dist} (X^\\varepsilon (t), \\partial \\Omega \\cup \\{ 0 \\}) = 0$,\nand either $\\sigma_+^\\varepsilon (x, \\beta) = \\infty$ or\n$\\lim_{t \\to \\sigma_+^\\varepsilon (x, \\beta) - 0} \\mathrm{dist} (X^\\varepsilon (t), \\partial \\Omega \\cup \\{ 0 \\}) = 0$.\n\n\n\nNext we define $\\gamma \\in \\mathcal C$ by\n\\begin{equation*}\n\\gamma (x) = \\mu \\, \\cfrac{DH(x)}{|DH(x)|},\n\\end{equation*}\nwhere $\\mu$ is a positive constant chosen so that\n\\begin{equation*}\nL(x, \\xi ) \\leq C \\ \\ \\ \\text{ for all } (x, \\xi ) \\in \\overline \\Omega \\times \\overline B_\\mu \\text{ and some } C > 0,\n\\end{equation*}\nand set $h_0 = \\min_{i \\in \\{ 1, 2, 3 \\}} |h_i|$ and, for $h \\in (0, h_0)$,\n\\begin{equation} \\label{Omega-h}\n\\Omega_i (h) = \\{ x \\in \\overline \\Omega_i \\ | 
\\ 0 \\leq |H(x)| < h \\} \\ \\text{ for } i \\in \\{ 1, 2, 3 \\} \\ \\ \\text{ and } \\ \\ \\Omega (h) = \\bigcup_{i = 1}^3 \\Omega_i (h).\n\\end{equation} \n\n\nNow, by (H3), we have\n\\begin{equation*}\n|DH(x)|^2 = 4|x|^2 \\geq 4|H(x)| \\ \\ \\ \\text{ for all } x \\in \\overline B_\\kappa.\n\\end{equation*}\nSince $DH (x) \\not= 0$ for all $x \\in \\Omega \\setminus \\overline B_\\kappa$,\nthere exists $c_0 \\in (0, 2)$ such that\n\\begin{equation*}\n|DH(x)| \\geq c_0 |H(x)|^{\\frac{1}{2}} \\ \\ \\ \\text{ for all } x \\in \\Omega \\setminus \\overline B_\\kappa.\n\\end{equation*}\nCombining these yields\n\\begin{equation} \\label{ineq: DH}\n|DH(x)| \\geq c_0 |H(x)|^{\\frac{1}{2}} \\ \\ \\ \\text{ for all } x \\in \\Omega.\n\\end{equation} \n\n\n\\begin{lem} \\label{lem: tau_2-tau_1} \nLet $\\varepsilon \\in (0, \\varepsilon_0)$, $h \\in (0, h_0)$, and $x \\in \\Omega (h)$.\nIf $\\tau_1, \\tau_2 \\in (\\sigma_-^\\varepsilon (x, \\gamma), \\sigma_+^\\varepsilon (x, \\gamma))$ are such that\n$\\tau_1 < \\tau_2$ and $\\xi^\\varepsilon (t, x, \\gamma) \\in \\Omega (h)$ for all $t \\in (\\tau_1, \\tau_2)$, then\n\\begin{equation} \\label{ineq: tau_2-tau_1}\n\\tau_2 - \\tau_1 \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu}.\n\\end{equation}\nAlso the inequality \\eqref{ineq: tau_2-tau_1} holds with $\\gamma$ being replaced by $-\\gamma$.\n\\end{lem} \n \n \n\\begin{proof}\nLet $\\tau_1, \\tau_2 \\in (\\sigma_-^\\varepsilon (x, \\gamma), \\sigma_+^\\varepsilon (x, \\gamma))$ are such that\n$\\tau_1 < \\tau_2$ and $\\xi^\\varepsilon (t, x, \\gamma) \\in \\Omega (h)$ for all $t \\in (\\tau_1, \\tau_2)$.\nSetting $\\psi (r) = r|r|^{-\\frac{1}{2}}$ for $r \\in \\mathbb R \\setminus \\{ 0 \\}$,\nwe compute that $\\psi' (r) = \\frac{1}{2} |r|^{-\\frac{1}{2}}$\nand, moreover, by using \\eqref{ineq: DH}, that\n\\begin{align*}\n2 \\sqrt{h} &\\geq \\psi \\circ H (\\xi^\\varepsilon (\\tau_2, x, \\gamma )) - \\psi \\circ H (\\xi^\\varepsilon (\\tau_1, x, \\gamma )) \\\\\n &= \\int_{\\tau_1}^{\\tau_2} \\psi' \\circ H (\\xi^\\varepsilon (s, x, \\gamma )) DH(\\xi^\\varepsilon (s, x, \\gamma )) \\cdot \\dot \\xi^\\varepsilon (s, x, \\gamma ) \\, ds \\\\\n &=\\frac{\\mu}{2} \\int_{\\tau_1}^{\\tau_2} |H(\\xi^\\varepsilon (s, x, \\gamma ))|^{-\\frac{1}{2}} |DH(\\xi^\\varepsilon (s, x, \\gamma ))| \\, ds \\geq c_0 \\mu (\\tau_2 - \\tau_1 ),\n\\end{align*}\nfrom which we conclude that\n\\begin{equation*}\n\\tau_2 - \\tau_1 \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu}.\n\\end{equation*}\n\n\nSimilarly if $\\tau_1, \\tau_2 \\in (\\sigma_-^\\varepsilon (x, -\\gamma), \\sigma_+^\\varepsilon (x, -\\gamma))$ are such that\n$\\tau_1 < \\tau_2$ and $\\xi^\\varepsilon (t, x, -\\gamma) \\in \\Omega (h)$ for all $t \\in (\\tau_1, \\tau_2)$, then\n\\begin{align*}\n-2 \\sqrt{h} &\\leq \\psi \\circ H (\\xi^\\varepsilon (\\tau_2, x, -\\gamma )) - \\psi \\circ H (\\xi^\\varepsilon (\\tau_1, x, -\\gamma )) \\\\\n &= \\int_{\\tau_1}^{\\tau_2} \\psi' \\circ H (\\xi^\\varepsilon (s, x, -\\gamma )) DH(\\xi^\\varepsilon (s, x, -\\gamma )) \\cdot \\dot \\xi^\\varepsilon (s, x, -\\gamma ) \\, ds \\\\\n &= - \\, \\frac{\\mu}{2} \\int_{\\tau_1}^{\\tau_2} |H(\\xi^\\varepsilon (s, x, -\\gamma ))|^{-\\frac{1}{2}} |DH(\\xi^\\varepsilon (s, x, -\\gamma ))| \\, ds \\leq -c_0 \\mu (\\tau_2 - \\tau_1 ),\n\\end{align*}\nand, hence,\n\\begin{equation*}\n\\tau_2 - \\tau_1 \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu}. 
\\qedhere\n\\end{equation*}\n\\end{proof}\n\n\n\nFor $\\varepsilon \\in (0, \\varepsilon_0)$, $\\beta \\in \\mathcal C$,\n$x \\in \\Omega \\setminus \\{ 0 \\}$, and $t \\in (\\sigma_-^\\varepsilon (x, \\beta), \\sigma_+^\\varepsilon (x, \\beta))$,\nwe define $\\pi^\\varepsilon [\\beta] (t, x) \\in \\mathbb R^2$ by\n\\begin{equation*}\n\\pi^\\varepsilon [\\beta ] (t, x) = \\beta (\\xi^\\varepsilon (t, x, \\beta )).\n\\end{equation*}\nIt is clear that $t \\mapsto \\pi^\\varepsilon [\\beta] (t, x)$ is a $C^1$ function\nin $(\\sigma_-^\\varepsilon (x, \\beta), \\sigma_+^\\varepsilon (x, \\beta))$ and that,\nfor all $t \\in (\\sigma_-^\\varepsilon (x, \\beta), \\sigma_+^\\varepsilon (x, \\beta))$,\n\\begin{equation*}\nX^\\varepsilon (t, x, \\pi^\\varepsilon [\\beta ] (\\cdot, x)) = \\xi^\\varepsilon (t, x, \\beta ).\n\\end{equation*}\n\n\n\\begin{proof}[Proof of Lemma \\ref{v^+-u_i^+}] \nWe may assume that\n\\begin{equation*}\n\\sup \\{ u^\\varepsilon (x) \\ | \\ x \\in \\Omega, \\ \\varepsilon \\in (0, \\varepsilon_0 ) \\} \n\\leq C.\n\\end{equation*}\n\n\nLet $h \\in (0, h_0)$.\nFix any $\\eta > 0$ and fix $\\delta \\in (0, \\varepsilon_0)$ so that\nif $\\varepsilon \\in (0, \\delta)$, then\n\\begin{equation*}\nu^\\varepsilon (x) < v^+ (x) + \\eta \\ \\ \\ \\text{ for all } x \\in \\overline{\\Omega (h)}.\n\\end{equation*}\n\n\nFix any $\\varepsilon \\in (0, \\delta)$.\nLet $x \\in \\Omega (h)$.\nObserve that, for all $t \\in (\\sigma_-^\\varepsilon (x, \\gamma), \\sigma_+^\\varepsilon (x, \\gamma))$,\n\\begin{equation*}\n\\mathrm{\\cfrac{d}{dt}} \\, H(\\xi^\\varepsilon (t, x, \\gamma )) = DH(\\xi^\\varepsilon (t, x, \\gamma )) \\cdot \\gamma (\\xi^\\varepsilon (t, x, \\gamma )) = |DH(\\xi^\\varepsilon (t, x, \\gamma ))| > 0.\n\\end{equation*}\nThis shows that $H(\\xi^\\varepsilon (t, x, \\gamma))$ is increasing\nin $t \\in (\\sigma_-^\\varepsilon (x, \\gamma), \\sigma_+^\\varepsilon (x, \\gamma))$.\nIt follows from this monotonicity that if $x \\in \\Omega_2 (h) \\setminus \\{ 0 \\}$,\nthen $\\xi^\\varepsilon (\\cdot, x, \\gamma)$ reaches the loop $c_2 (h)$\nat $\\tau_2^\\varepsilon (x, h) \\in (0, \\sigma_+^\\varepsilon (x, \\gamma))$.\nSimilarly we see that $H(\\xi^\\varepsilon (t, x, -\\gamma))$ is decreasing\nin $t \\in (\\sigma_-^\\varepsilon (x, -\\gamma)$, $\\sigma_+^\\varepsilon (x, -\\gamma))$,\nand, if $x \\in \\Omega_i (h) \\setminus \\{ 0 \\}$ and $i \\in \\{ 1, 3 \\}$,\nthen $\\xi^\\varepsilon (\\cdot, x, -\\gamma)$ reaches the loop $c_i (-h)$\nat $\\tau_i^\\varepsilon (x, h) \\in (0, \\sigma_+^\\varepsilon (x, -\\gamma))$ (see Fig. 
3).\n\n \nIn the case where $x \\in \\Omega_2 (h) \\setminus \\{ 0 \\}$,\nsetting $\\tau_2 = \\tau_2^\\varepsilon (x, h) > 0$ and $x_2 = \\xi^\\varepsilon (\\tau_2, x, \\gamma) \\in c_2 (h)$, \nwe have, by Lemma \\ref{lem: tau_2-tau_1},\n\\begin{equation*}\n\\tau_2 \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu},\n\\end{equation*}\nand, moreover, by Proposition \\ref{prop: DPP},\n\\begin{align} \\label{ineq: x_2}\n\\begin{aligned}\nu^\\varepsilon (x) &\\leq \\int_0^{\\tau_2} L \\big( X^\\varepsilon (s, x, \\pi^\\varepsilon [\\gamma ] (\\cdot, x)), \\pi^\\varepsilon [\\gamma ] (\\cdot, x) \\big) e^{-\\lambda s} \\, ds \\\\\n &\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad + u^\\varepsilon \\big( X^\\varepsilon (\\tau_2, x, \\pi^\\varepsilon [\\gamma ] (\\cdot, x)) \\big) e^{-\\lambda \\tau_2} \\\\ \n &\\leq \\int_0^{\\tau_2} L \\big( \\xi^\\varepsilon (s, x, \\gamma ), \\gamma (\\xi^\\varepsilon (s, x, \\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (x_2) e^{- \\lambda \\tau_2} \\\\\n &\\leq C \\tau_2 + u^\\varepsilon (x_2) + (1 - e^{- \\lambda \\tau_2}) |u^\\varepsilon (x_2)| \\\\\n &\\leq C(1 + \\lambda ) \\tau_2 + u^\\varepsilon (x_2) \\leq C(1 + \\lambda ) \\, \\cfrac{2 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (x_2).\n\\end{aligned} \n\\end{align}\n\n\n\\begin{figure}[t]\n\\centering\n\\input{trajectory.tex}\n\\caption{\\ }\n\\end{figure}\n\n\nSimilarly, in the case where $x \\in \\Omega_i (h) \\setminus \\{ 0 \\}$ and $i \\in \\{ 1, 3 \\}$,\nsetting $\\tau_i = \\tau_i^\\varepsilon (x, h) > 0$ and $x_i = \\xi^\\varepsilon (\\tau_i, x, -\\gamma) \\in c_i (-h)$, we have\n\\begin{equation*}\n\\tau_i \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu},\n\\end{equation*}\nand\n\\begin{align} \\label{ineq: x_i}\n\\begin{aligned}\nu^\\varepsilon (x) &\\leq \\int_0^{\\tau_i} L \\big( \\xi^\\varepsilon (s, x, -\\gamma ), -\\gamma (\\xi^\\varepsilon (s, x, -\\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (x_i) e^{-\\lambda \\tau_i} \\\\\n\t\t\t\t\t &\\leq C(1 + \\lambda ) \\tau_i + u^\\varepsilon (x_i) \\\\\n\t\t\t\t\t &\\leq C(1 + \\lambda ) \\cfrac{2 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (x_i).\n\\end{aligned} \n\\end{align}\n\n\nFrom the monotonicity of $H(\\xi^\\varepsilon (t, x, \\gamma))$\nin $t \\in (\\sigma_-^\\varepsilon (x, \\gamma), \\sigma_+^\\varepsilon (x, \\gamma))$,\nit also follows that if $x \\in c_i (0) \\setminus \\{ 0 \\}$ and $i \\in \\{ 1, 3 \\}$,\nthen $\\xi^\\varepsilon (\\cdot, x, \\gamma)$ reaches the loop $c_2 (h)$\nat $\\tau_{i, +}^\\varepsilon (x, h, \\gamma) \\in (0, \\sigma_+^\\varepsilon (x, \\gamma))$\nand reaches the loop $c_i (-h)$ at $\\tau_{i, -}^\\varepsilon (x, h, \\gamma) \\in (\\sigma_-^\\varepsilon (x, \\gamma), 0)$.\nSimilarly if $x \\in c_i (0) \\setminus \\{ 0 \\}$ and $i \\in \\{ 1, 3 \\}$,\nthen $\\xi^\\varepsilon (\\cdot, x, -\\gamma)$ reaches the loop $c_i (-h)$\nat $\\tau_{i, +}^\\varepsilon (x, h, -\\gamma) \\in (0, \\sigma_+^\\varepsilon (x, -\\gamma))$\nand reaches the loop $c_2 (h)$ at $\\tau_{i, -}^\\varepsilon (x, h, -\\gamma) \\in (\\sigma_-^\\varepsilon (x, -\\gamma), 0)$.\n\n\nNow, let $x = p_1$ (recall here that $p_i$ are the fixed points on $c_i (0) \\setminus \\{ 0 \\}$). 
Setting\n\\begin{align*}\n\\begin{cases}\ns_1= -\\tau_{1, -}^\\varepsilon (p_1, h, \\gamma ), \\ s_2 = \\tau_{1, +}^\\varepsilon (p_1, h, \\gamma ), \\ \ns_3 = -\\tau_{1, -}^\\varepsilon (p_1, h, -\\gamma ), \\ s_4 = \\tau_{1, +}^\\varepsilon (p_1, h, -\\gamma ), \\\\\ny_1 = \\xi^\\varepsilon (-s_1, p_1, \\gamma ) \\in c_1 (-h), \\ y_2 = \\xi^\\varepsilon (s_2, p_1, \\gamma ) \\in c_2 (h), \\\\\ny_3 = \\xi^\\varepsilon (-s_3, p_1, -\\gamma ) \\in c_2 (h), \\ y_4 = \\xi^\\varepsilon (s_4, p_1, -\\gamma ) \\in c_1 (-h) \\text{ (see Fig. 4), }\n\\end{cases}\n\\end{align*}\nwe see that\n\\begin{equation*}\n\\xi^\\varepsilon (t, y_1, \\gamma ) = \\xi^\\varepsilon (t, \\xi^\\varepsilon (-s_1, p_1, \\gamma ), \\gamma ) = \\xi^\\varepsilon (t - s_1, p_1, \\gamma ) \\in \\Omega_1 (h) \\ \\ \\ \\text{ for all } t \\in (0, s_1),\n\\end{equation*}\nand\n\\begin{equation*}\n\\xi^\\varepsilon (s_1, y_1, \\gamma ) = \\xi^\\varepsilon (0, p_1, \\gamma ) = p_1.\n\\end{equation*}\nSimilarly we see that\n\\begin{equation*}\n\\xi^\\varepsilon (t, y_1, \\gamma ) = \\xi^\\varepsilon (t - s_1, p_1, \\gamma ) \\in \\Omega_2 (h) \\ \\ \\ \\text{ for all } t \\in (s_1, s_1 + s_2),\n\\end{equation*}\nand\n\\begin{equation*}\n\\xi^\\varepsilon (s_1 + s_2, y_1, \\gamma ) = y_2.\n\\end{equation*}\nMoreover we have\n\\begin{equation*}\ns_k \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu} \\ \\ \\ \\text{ for all } k \\in \\{ 1, 2, 3, 4 \\}.\n\\end{equation*}\nFrom these, we get \n\\begin{align} \\label{ineq: y_2}\n\\begin{aligned}\nu^\\varepsilon (y_1) &\\leq \\int_0^{s_1 + s_2} L \\big( \\xi^\\varepsilon (s, y_1, \\gamma ), \\gamma (\\xi^\\varepsilon (s, y_1, \\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (y_2) e^{-\\lambda (s_1 + s_2)} \\\\\n &\\leq C(1 + \\lambda )(s_1 + s_2) + u^\\varepsilon (y_2) \\\\\n &\\leq C(1 + \\lambda ) \\cfrac{4 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (y_2).\n\\end{aligned}\n\\end{align}\nSimilarly we get\n\\begin{align} \\label{ineq: y_4}\n\\begin{aligned}\nu^\\varepsilon (y_3) &\\leq \\int_0^{s_3 + s_4} L \\big( \\xi^\\varepsilon (s, y_3, -\\gamma ), -\\gamma (\\xi^\\varepsilon (s, y_3, -\\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (y_4) e^{-\\lambda (s_3 + s_4)} \\\\\n &\\leq C(1 + \\lambda ) \\cfrac{4 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (y_4).\n\\end{aligned}\n\\end{align}\n\n\n\\begin{figure}[t]\n\\centering\n\\input{trajectory-1.tex}\n\\caption{\\ }\n\\end{figure}\n\n\nNext, let $x = p_3$. 
Setting\n\\begin{align*}\n\\begin{cases}\nt_1= -\\tau_{3, -}^\\varepsilon (p_3, h, \\gamma ), \\ t_2 = \\tau_{3, +}^\\varepsilon (p_3, h, \\gamma ), \\ \nt_3 = -\\tau_{3, -}^\\varepsilon (p_3, h, -\\gamma ), \\ t_4 = \\tau_{3, +}^\\varepsilon (p_3, h, -\\gamma ), \\\\\nz_1 = \\xi^\\varepsilon (-t_1, p_3, \\gamma ) \\in c_3 (-h), \\ z_2 = \\xi^\\varepsilon (t_2, p_3, \\gamma ) \\in c_2 (h), \\\\\nz_3 = \\xi^\\varepsilon (-t_3, p_3, -\\gamma ) \\in c_2 (h), \\ z_4 = \\xi^\\varepsilon (t_4, p_3, -\\gamma ) \\in c_3 (-h),\n\\end{cases}\n\\end{align*}\nwe have\n\\begin{equation*}\nt_k \\leq \\cfrac{2 \\sqrt{h}}{c_0 \\mu} \\ \\ \\ \\text{ for all } k \\in \\{ 1, 2, 3, 4 \\},\n\\end{equation*}\nand, moreover,\n\\begin{align*}\n\\begin{aligned}\nu^\\varepsilon (z_1) &\\leq \\int_0^{t_1 + t_2} L \\big( \\xi^\\varepsilon (s, z_1, \\gamma ), \\gamma (\\xi^\\varepsilon (s, z_1, \\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (z_2) e^{-\\lambda (t_1 + t_2)} \\\\\n &\\leq C(1 + \\lambda ) \\cfrac{4 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (z_2),\n\\end{aligned}\n\\end{align*}\nand\n\\begin{align*}\n\\begin{aligned}\nu^\\varepsilon (z_3) &\\leq \\int_0^{t_3 + t_4} L \\big( \\xi^\\varepsilon (s, z_3, -\\gamma ), -\\gamma (\\xi^\\varepsilon (s, z_3, -\\gamma )) \\big) e^{-\\lambda s} \\, ds + u^\\varepsilon (z_4) e^{-\\lambda (t_3 + t_4)} \\\\\n &\\leq C(1 + \\lambda ) \\cfrac{4 \\sqrt{h}}{c_0 \\mu} + u^\\varepsilon (z_4).\n\\end{aligned}\n\\end{align*}\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$ and $x, y \\in c_i ((-1)^i h)$, we set\n\\begin{equation*}\nt_i (x, y) = \\inf \\{ t \\geq 0 \\ | \\ X^\\varepsilon (t, x) = y \\}.\n\\end{equation*}\n\n\nFix $i \\in \\{ 1, 2, 3 \\}$.\nAs noted before, if $x \\in c_i ((-1)^i h)$,\nthen the solution $X(t, x)$ of \\eqref{HS} is periodic with period $T_i ((-1)^i h)$.\nThis periodicity is readily transferred to the solution $X^\\varepsilon (t, x)$ of \\eqref{epHS},\nthanks to the scaling property $X(\\varepsilon^{-1} t, x) = X^\\varepsilon (t, x)$,\nand $X^\\varepsilon (t, x)$ is periodic with period $T_i^\\varepsilon ((-1)^i h) := \\varepsilon T_i ((-1)^i h)$.\nHence we see that\n\\begin{equation*}\n0 \\leq t_i (x, y) < \\varepsilon T_i ((-1)^i h) \\ \\ \\ \\text{ for all } x, y \\in c_i ((-1)^i h).\n\\end{equation*}\n\n\nLet $x \\in \\Omega_2 (h) \\setminus \\{ 0 \\}$\nand let $\\tau_2 > 0$ and $x_2 \\in c_2 (h)$ be as in \\eqref{ineq: x_2}.\nBy using \\eqref{ineq: x_2}, we get\n\\begin{equation} \\label{ineq: u_2}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\cfrac{2 \\sqrt{h}}{c_0 \\mu} + v^+ (x_2) + \\eta = C(1 + \\lambda ) \\cfrac{2 \\sqrt{h}}{c_0 \\mu} + u_2^+ (h) + \\eta.\n\\end{equation}\nNext, noting that $x_2, y_3 \\in c_2 (h)$, we have\n\\begin{align*}\nu^\\varepsilon (x_2) &\\leq \\int_0^{t_2 (x_2, y_3)} L(X^\\varepsilon (s, x_2), 0) e^{- \\lambda s} \\, ds + u^\\varepsilon (y_3) e^{- \\lambda t_2 (x_2, y_3)} \\\\\n &\\leq C(1 + \\lambda ) \\varepsilon T_2 (h) + u^\\varepsilon (y_3).\n\\end{align*}\nCombining this with \\eqref{ineq: x_2} and \\eqref{ineq: y_4} yields\n\\begin{equation*}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_2 (h) \\right) + u^\\varepsilon (y_4),\n\\end{equation*}\nand, hence, noting that $y_4 \\in c_1 (-h)$, we get\n\\begin{equation} \\label{ineq: u_1}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_2 (h) \\right) + u_1^+ (-h) + \\eta.\n\\end{equation}\nSimilarly we get\n\\begin{equation*}\nu^\\varepsilon (x_2) \\leq C(1 + \\lambda ) 
t_2 (x_2, z_3) + u^\\varepsilon (z_3),\n\\end{equation*}\n\\begin{equation*}\nu^\\varepsilon (x) \\leq C(1 +\\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_2 (h) \\right) + u^\\varepsilon (z_4),\n\\end{equation*}\nand\n\\begin{equation} \\label{ineq: u_3}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_2 (h) \\right) + u_3^+ (-h) + \\eta.\n\\end{equation}\nNow, \\eqref{ineq: u_2}, \\eqref{ineq: u_1}, and \\eqref{ineq: u_3} together yield\n\\begin{equation} \\label{ineq: sup-Omega_2}\n\\sup_{\\overline{\\Omega_2 (h)}} u^\\varepsilon \\leq C(1 + \\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_2 (h) \\right) + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) + \\eta.\n\\end{equation}\nHere we have used the fact that\n\\begin{equation*}\n\\sup_{\\Omega_2 (h) \\setminus \\{ 0 \\}} u^\\varepsilon = \\sup_{\\overline{\\Omega_2 (h)}} u^\\varepsilon,\n\\end{equation*}\nwhich is a consequence of the continuity of $u^\\varepsilon$ on $\\overline \\Omega$.\n\n\nNext, let $x \\in \\Omega_1 (h) \\setminus \\{ 0 \\}$\nand let $\\tau_1 > 0$ and $x_1 \\in c_1 (-h)$ are as in \\eqref{ineq: x_i}.\nNoting that $x_1, y_1 \\in c_1(-h)$, we have\n\\begin{align*}\nu^\\varepsilon (x_1) &\\leq \\int_0^{t_1 (x_1, y_1)} L(X^\\varepsilon (s, x_1), 0) e^{- \\lambda s} \\, ds + u^\\varepsilon (y_1) e^{- \\lambda t_1 (x_1, y_1)} \\\\\n &\\leq C(1 + \\lambda )\\varepsilon T_1 (-h) + u^\\varepsilon (y_1).\n\\end{align*}\nCombining this with \\eqref{ineq: x_i} and \\eqref{ineq: y_2} yields\n\\begin{equation*}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\left( \\cfrac{6 \\sqrt{h}}{c_0 \\mu} + \\varepsilon T_1 (-h) \\right) + u^\\varepsilon (y_2),\n\\end{equation*}\nand, hence,\nnoting that $y_2 \\in c_2 (h) \\subset \\overline \\Omega_2 (h)$ and combining this with \\eqref{ineq: sup-Omega_2},\nwe get\n\\begin{equation*}\nu^\\varepsilon (x) \\leq C(1 + \\lambda ) \\left( \\cfrac{12 \\sqrt{h}}{c_0 \\mu} + \\varepsilon (T_1 (-h) + T_2 (h)) \\right) + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) + \\eta,\n\\end{equation*}\nfrom which we conclude that\n\\begin{equation} \\label{ineq: sup-Omega_1}\n\\sup_{\\overline{\\Omega_1 (h)}} u^\\varepsilon \\leq C(1 + \\lambda ) \\left( \\cfrac{12 \\sqrt{h}}{c_0 \\mu} + \\varepsilon (T_1 (-h) + T_2 (h)) \\right) + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) + \\eta.\n\\end{equation}\n\n\nAn argument parallel to the above yields\n\\begin{equation*}\n\\sup_{\\overline{\\Omega_3 (h)}} u^\\varepsilon \\leq C(1 + \\lambda ) \\left( \\cfrac{12 \\sqrt{h}}{c_0 \\mu} + \\varepsilon (T_3 (-h) + T_2 (h)) \\right) + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) + \\eta,\n\\end{equation*}\nand, moreover, combining this\nwith \\eqref{ineq: sup-Omega_2} and \\eqref{ineq: sup-Omega_1} gives us the inequality\n\\begin{equation*}\n\\sup_{\\overline{\\Omega (h)}} u^\\varepsilon \\leq C(1 + \\lambda ) \\left( \\cfrac{12 \\sqrt{h}}{c_0 \\mu} + \\varepsilon (T_1 (-h) + T_2 (h) + T_3 (-h)) \\right) + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) + \\eta.\n\\end{equation*}\nSince $\\eta > 0$ and $\\varepsilon \\in (0, \\delta)$ are arbitrary,\nwe conclude from the inequality above that\n\\begin{equation*}\n\\sup_{c_2 (0)} v^+ \\leq \\sup_{\\overline{\\Omega (h)}} v^+ \\leq C(1 + \\lambda ) \\cfrac{8 \\sqrt{h}}{c_0 \\mu} + \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h),\n\\end{equation*}\nand, therefore,\n\\begin{equation*}\n\\sup_{c_2 (0)} v^+ \\leq \\lim_{h \\to 0+} \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ ((-1)^i h) = 
\\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0). \\qedhere\n\\end{equation*}\n\\end{proof} \n \n \n \n \n\n\\section{Proof of Lemma \\ref{v^--d_0}}\n\n\nWe recall here that $h_0 = \\min_{i \\in \\{ 1, 2, 3 \\}} |h_i|$ and\nthat $\\Omega (\\cdot)$ denotes\nthe set defined in \\eqref{Omega-h}.\n\n\nThe key step of the proof of Lemma \\ref{v^--d_0} is\n\n\n\\begin{lem} \\label{key}\nFor any $\\eta > 0$,\nthere exist $\\delta \\in (0, h_0)$ and $\\psi \\in C^1 (\\Omega (\\delta))$ such that\n\\begin{equation*}\n-b \\cdot D\\psi + G(x, 0) < G(0, 0) + \\eta \\ \\ \\ \\text{ in } \\Omega (\\delta ).\n\\end{equation*}\n\\end{lem}\n\n\nFor $r \\in (0, \\kappa)$ and $x \\in \\Omega (r^2) \\setminus \\overline B_r$,\nlet $\\tau_+^r (x)$ be the first time the flow $\\{ X(t, x) \\}_{t > 0}$ reaches $\\partial B_r$, that is,\n\\begin{equation*}\n\\tau_+^r (x) = \\inf \\{ t > 0 \\ | \\ X(t, x) \\in \\partial B_r \\},\n\\end{equation*}\nand, similarly, set\n\\begin{equation*}\n\\tau_-^r (x) = \\sup \\{ t < 0 \\ | \\ X(t, x) \\in \\partial B_r \\},\n\\end{equation*}\nand \n\\begin{equation*}\nT^r (x) = (\\tau_+^r - \\tau_-^r) (x) ( > 0).\n\\end{equation*}\n\n\n\\begin{lem} \\label{low-bound-T_i^r}\nWe have\n\\begin{equation} \\label{ineq-T_i^r}\nT^r (x) \\geq \\log \\frac{\\kappa}{r} - \\frac{\\log 2}{2},\n\\end{equation}\nfor all $x \\in \\Omega (r^2) \\setminus \\overline B_r$ and $r \\in (0, \\kappa)$.\n\\end{lem}\n\n\n\\begin{proof}\nFix any $r \\in (0, \\kappa)$.\nIf $r > \\kappa\/\\sqrt{2}$, then the right-hand side of the inequality \\eqref{ineq-T_i^r} is negative.\n\n\nWe assume henceforth that $r \\leq \\kappa\/\\sqrt{2}$.\nSet\n\\begin{equation*}\nE (x) = \\{ t \\in [-\\tau_-^r (x), \\tau_+^r (x)] \\ | \\ X(t, x) \\in \\overline B_{\\kappa} \\setminus B_r \\} \\ \\ \\ \\text{ for } x \\in \\Omega (r^2) \\setminus \\overline B_r, \n\\end{equation*}\nnote that \n\\begin{equation*}\nT^r (x) \\geq |E (x)| \\ \\ \\ \\text{ for all } x \\in \\Omega (r^2) \\setminus \\overline B_r,\n\\end{equation*}\nand fix any $x \\in \\Omega (r^2) \\setminus \\overline B_r$.\n\n\nConsider first the case where $x \\in c_2 (0)$.\nFor each $s \\in [r, \\kappa]$,\nthe line $c_2 (0) \\cap \\left( \\overline B_s \\setminus B_r \\right) \\cap \\{ (x_1, x_2) \\, | \\, x_1 \\geq 0, \\, x_2 \\geq 0 \\}$\nis represented as\n\\begin{equation} \\label{line}\nX \\left( t, \\frac{1}{\\sqrt{2}} (r, r) \\right) = \\frac{r}{\\sqrt{2}} \\, (e^{2t}, e^{2t}) \\ \\ \\ \\text{ with } t \\in \\left[0, \\, \\frac{1}{2}\\log\\frac{s}{r} \\right],\n\\end{equation}\nwhich implies that\n\\begin{equation*} \n|E(x)| = \\log\\frac{\\kappa}{r}.\n\\end{equation*}\nHence, we have\n\\begin{equation} \\label{T^r-bound1}\nT^r (x) \\geq \\log\\frac{\\kappa}{r}.\n\\end{equation}\n\n\nConsider next the case where $x \\not \\in c_2 (0)$.\nSet $h = |H(x)|$.\nFor each $s \\in [r, \\kappa]$,\nthe curve (hyperbola) $c_2 (h) \\cap \\overline B_s \\cap \\{ (x_1, x_2) \\, | \\, x_2 > 0 \\}$ is represented as\n\\begin{equation} \\label{curve}\nX \\left(t, (0, \\sqrt{h}) \\right) = \\frac{\\sqrt{h}}{2} (e^{2t} -e^{-2t}, e^{2t} + e^{-2t}) \\ \\ \\ \\text{ with } t \\in [-\\sigma^s (h), \\sigma^s (h)],\n\\end{equation} \nwhere\n\\begin{equation*} \\label{sigma^s}\n\\sigma^s (h) := \\frac{1}{4} \\log \\left( \\frac{s^2}{h} + \\sqrt{\\frac{s^4}{h^2} - 1} \\right).\n\\end{equation*}\n\n\nNoting, by the rotational symmetry, that\n\\begin{equation*} \n|E(x)| = 2(\\sigma^{\\kappa} (h) - \\sigma^r (h)),\n\\end{equation*}\nwe compute\n\\begin{align*}\nT^r (x) &\\geq 
2(\\sigma^{\\kappa} (h) - \\sigma^r (h)) \n \\geq \\frac{1}{2} \\log \\frac{\\kappa^2}{h} - \\frac{1}{2} \\log \\frac{2r^2}{h} \\\\\n &= \\log \\frac{\\kappa}{r} - \\frac{\\log 2}{2}.\n\\end{align*}\nCombining this with \\eqref{T^r-bound1} completes the proof.\n\\end{proof}\n\n\n\\begin{lem} \\label{tau^r}\nFor all $r \\in (0, \\kappa)$,\n$\\tau_{\\pm}^r \\in C^1 \\left( \\Omega (r^2) \\setminus \\overline B_r \\right)$.\n\\end{lem}\n\n\n\\begin{proof}\nFix any $r \\in (0, \\kappa)$.\nLet $\\varphi \\in C^1 (\\mathbb R^2)$ and $F \\in C^1 (\\mathbb R \\times \\mathbb R^2)$ are the functions defined by\n\\begin{equation*}\n\\varphi (x_1, x_2) = x_1^2 + x_2^2 -r^2 \\ \\ \\ \\text{ and } \\ \\ \\ F(t, x) = \\varphi (X(t, x)).\n\\end{equation*}\nObserve that for all $x \\in \\Omega (r^2) \\setminus \\overline B_r$,\n$F(\\tau_{\\pm}^r (x), x) = 0$, and, in view of (H3),\n\\begin{align*}\n\\mathrm{\\cfrac{d}{dt}} \\, F(t, x) \\Big|_{t = \\tau_{\\pm}^r (x)} = D\\varphi \\big( X(\\tau_{\\pm}^r (x), x) \\big) \\cdot \\dot X (\\tau_{\\pm}^r (x), x)\n = 8 X_1 (\\tau_{\\pm}^r (x), x) X_2 (\\tau_{\\pm}^r (x), x) \\not= 0,\n\\end{align*}\nwhere $X := (X_1, X_2)$.\nNow the claim follows from the implicit function theorem.\n\\end{proof}\n\n\nThe following lemma is a consequence of Lemma \\ref{tau^r}.\n\n\n\\begin{lem} \\label{T^r}\nFor all $r \\in (0, \\kappa)$, $T^r \\in C^1 \\left( \\Omega (r^2) \\setminus \\overline B_r \\right)$.\n\\end{lem}\n\n\n\\begin{proof}[Proof of Lemma \\ref{key}]\nFix any $\\eta > 0$.\nChoose a function $\\widetilde G \\in C^1 (\\overline \\Omega)$ so that\n\\begin{equation*}\n|G(x, 0) - \\widetilde G (x)| < \\frac{\\eta}{8} \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega,\n\\end{equation*}\ndefine the function $g \\in C^1 (\\overline \\Omega)$ by\n\\begin{equation*}\ng(x) = \\widetilde G(x) - \\widetilde G(0),\n\\end{equation*}\nand fix $\\delta \\in (0, \\kappa\/2)$ so that\n\\begin{equation*}\n|g(x)| < \\frac{\\eta}{4} \\ \\ \\ \\text{ for all } x \\in B_{2\\delta}.\n\\end{equation*}\n\n\nFor each $r > 0$,\nwe choose a cut-off function $\\zeta_r \\in C^1 (\\mathbb R^2)$ so that\n\\begin{align*} \n\\begin{cases}\n\\zeta_r (x) = 0 \\ \\ \\ &\\text{ if } |x| \\leq r, \\\\\n0 \\leq \\zeta_r (x) \\leq 1 \\ \\ \\ &\\text{ if } r \\leq |x| \\leq 2r, \\\\\n\\zeta_r (x) = 1 \\ \\ \\ &\\text{ if } |x| \\geq 2r,\n\\end{cases}\n\\end{align*}\nand define the function $f \\in C^1 (\\overline \\Omega)$ by\n\\begin{equation*}\nf (x) = g (x) \\zeta_{\\delta} (x),\n\\end{equation*}\nwhich satisfies \n\\begin{equation*}\nf = 0 \\ \\ \\ \\text{ on } \\overline B_{\\delta} \\ \\ \\ \\text{ and } \\ \\ \\ |g - f| < \\frac{\\eta}{4} \\ \\ \\ \\text{ on } \\overline \\Omega.\n\\end{equation*}\n\n\nFix a small $r \\in (0, \\delta\/4)$.\nLet $\\psi$ be the function in $\\Omega (r^2) \\setminus \\overline B_r$ defined by\n\\begin{equation*}\n\\psi (x) = \\int_0^{\\tau_+^r (x)} \\Big( f(X(t, x)) - \\bar f (x) \\Big) \\, dt,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\bar f (x) := \\cfrac{1}{T^r (x)} \\int_{\\tau_-^r (x)}^{\\tau_+^r (x)} f(X(s, x)) \\, ds.\n\\end{equation*}\n\n\nRecalling that $X \\in C^1 (\\mathbb R \\times \\mathbb R^2)$ and\n$\\tau_{\\pm}^r$, $T^r \\in C^1 \\left( \\Omega (r^2) \\setminus \\overline B_r \\right)$,\nit is clear that $\\psi \\in C^1 \\left( \\Omega (r^2) \\setminus \\overline B_r \\right)$.\nBy using the dynamic programming principle,\nwe see that\n\\begin{equation} \\label{psi_i}\n-b \\cdot D\\psi = -f + \\bar f \\ \\ \\ \\text{ in } \\Omega (r^2) \\setminus 
\\overline B_r.\n\\end{equation}\n\n\nNext, note that\n\\begin{equation*}\n\\lim_{\\Omega (r^2) \\setminus \\overline B_r \\ni y \\to x} \\psi (y) = 0 \\ \\ \\ \n\\text{ for all } x \\in \\Omega (r^2) \\cap \\partial B_r,\n\\end{equation*}\nand let $\\tilde \\psi$ and $\\varphi$ are the functions in $\\Omega (r^2)$ defined by \n\\begin{align*}\n\\tilde \\psi (x) = \n\\begin{cases}\n\\psi (x) \\ \\ \\ &\\text{ if } x \\in \\Omega (r^2) \\setminus \\overline B_r, \\\\\n0 \\ \\ \\ &\\text{ if } x \\in \\Omega (r^2) \\cap \\overline B_r, \n\\end{cases}\n\\end{align*}\nand\n\\begin{equation*}\n\\varphi (x) = \\tilde \\psi (x) \\zeta_{2r} (x).\n\\end{equation*}\nNoting that $\\tilde \\psi \\in C(\\Omega (r^2)) \\cap C^1 (\\Omega (r^2) \\setminus \\partial B_r)$,\nwe see that $\\varphi \\in C^1 (\\Omega (r^2))$ and, moreover, that\n\\begin{equation} \\label{varphi}\n-b \\cdot D \\varphi = -b \\cdot D \\tilde \\psi \\zeta_{2r} - \\tilde \\psi b \\cdot D\\zeta_{2r} \\ \\ \\ \\text{ in } \\Omega (r^2) \\setminus \\partial B_r.\n\\end{equation} \n\n\nNow it is enough to show that\nif $r \\in (0, \\delta\/4)$ is sufficiently small, then $\\varphi$ satisfies\n\\begin{equation} \\label{enough}\n- b \\cdot D \\varphi + G(x, 0) < G(0, 0) + \\eta \\ \\ \\ \\text{ in } \\Omega (r^2).\n\\end{equation} \n\n\nSince $\\varphi (x) = f (x) = 0$ for all $x \\in \\overline B_{2r}$, we conclude that\n\\begin{equation} \\label{center}\n-b(x) \\cdot D \\varphi (x) + f(x) = 0 \\ \\ \\ \\text{ for all } x \\in \\overline B_{2r}.\n\\end{equation}\n\n\nIn the case where $x \\in \\Omega (r^2) \\setminus B_{4r}$,\nsince $\\zeta_{2r} (x) = 1$ and $D\\zeta_{2r} (x) = 0$,\nby \\eqref{varphi}, we see that\n\\begin{equation} \\label{bvarphi-bpsi_i}\n-b(x) \\cdot D\\varphi (x) = -b (x) \\cdot D \\tilde \\psi (x) = -b (x) \\cdot D\\psi (x).\n\\end{equation}\nFix any $y \\in \\Omega (r^2) \\setminus \\overline B_{\\delta}$.\nNoting that\n\\begin{equation*}\nf(X(t, y)) = 0 \\ \\ \\ \\text{ for all } t \\in [\\tau_-^r (y), \\tau_-^{\\delta} (y)] \\cup [\\tau_+^{\\delta} (y), \\tau_+^r (y)],\n\\end{equation*}\nwe have\n\\begin{equation} \\label{T_i^delta} \n\\int_{\\tau_-^r (z)}^{\\tau_+^r (z)} f(X(t, z)) \\, dt = \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} f (X(t, y)) \\, dt \\ \\ \\ \n\\text{ for all } z \\in \\Omega (r^2) \\setminus \\overline B_r.\n\\end{equation}\nCombining \\eqref{psi_i}, \\eqref{bvarphi-bpsi_i}, and \\eqref{T_i^delta}\nand using Lemma \\ref{low-bound-T_i^r}, we get\n\\begin{align*} \n\\begin{aligned}\n|- b(x) \\cdot D\\varphi (x) + f (x)| &= |\\bar f (x)| \\\\ \n &\\leq \\left( \\log \\frac{\\kappa}{r} - \\frac{\\log 2}{2} \\right)^{-1} \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} |f (X(t, y))| \\, dt,\n\\end{aligned}\n\\end{align*}\nfrom which, by replacing $r \\in (0, \\delta \/4)$ by a smaller number if necessary, we conclude that \n\\begin{equation} \\label{outer}\n- b(x) \\cdot D\\varphi (x) + f (x) < \\frac{\\eta}{2}.\n\\end{equation}\n\n\nLet $x \\in \\Omega (r^2) \\cap \\left( B_{4r} \\setminus \\overline B_{2r} \\right)$.\nNoting that $f (x) = 0$, by \\eqref{varphi}, we have\n\\begin{equation} \\label{varphi-psi_1} \n-b(x) \\cdot D\\varphi (x) + f(x) = -b(x) \\cdot D \\tilde \\psi (x) \\zeta_{2r} (x) - \\tilde \\psi (x)b(x) \\cdot D\\zeta_{2r} (x).\n\\end{equation}\nUsing \\eqref{T_i^delta}, we get\n\\begin{align*}\n- b (x) \\cdot D\\tilde \\psi (x) \\zeta_{2r} (x) &\\leq |- b (x) \\cdot D\\tilde \\psi (x)| \\\\\n &= |\\bar f (x)| \\leq \\left( \\log \\frac{\\kappa}{r} - \\frac{\\log 2}{2} 
\\right)^{-1} \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} |f (X(t, y))| \\, dt, \n\\end{align*}\nfrom which,\nby replacing $r \\in (0, \\delta \/4)$ by a smaller number if necessary, we conclude that \n\\begin{equation} \\label{middle1}\n-b (x) \\cdot D\\tilde \\psi (x) \\zeta_{2r} (x) < \\cfrac{\\eta}{4}.\n\\end{equation}\n\n\nNext we note, by (H3), that $|b(y)| = O(|y|)$ as $y \\to 0$.\nWe may assume that $|D\\zeta_{2r} (y)| \\leq C_0 |2r|^{-1}$ for all $y \\in B_{4r}$ and for some $C_0 > 0$.\nHence we have\n\\begin{equation*}\n|b (y) \\cdot D\\zeta_{2r} (y)| \\leq C_1 \\ \\ \\ \\text{ for all } y \\in B_{4r} \\text{ and for some } C_1 > 0,\n\\end{equation*}\nand\n\\begin{equation} \\label{zeta_r}\n-\\tilde \\psi b \\cdot D\\zeta_{2r} \\leq C_1 |\\tilde \\psi| \\ \\ \\ \\text{ in } \\Omega (r^2) \\cap \\left( B_{4r} \\setminus \\overline B_{2r} \\right).\n\\end{equation}\n\n\nNow, if $x \\in c_2 (0)$, then, in view of \\eqref{line}, we see that\n\\begin{equation*}\n\\tau_+^r (x) \\wedge \\tau_-^r (x) \\leq \\frac{1}{2} \\log \\frac{4r}{r} = \\log 2,\n\\end{equation*}\nand, if $x \\not \\in c_2 (0)$ and $h = |H(x)|$, then, in view of \\eqref{curve}, we see that\n\\begin{equation*}\n\\tau_+^r (x) \\wedge \\tau_-^r (x) \\leq \\sigma^{4r} (h)- \\sigma^r (h) \\leq \\frac{5}{4} \\log 2.\n\\end{equation*}\nThese together yield\n\\begin{equation*}\n\\tau_+^r (x) \\wedge \\tau_-^r (x) \\leq \\log 2.\n\\end{equation*}\n\n\nConsider first the case where $\\tau_+^r (x) \\wedge \\tau_-^r (x) = \\tau_-^r (x)$.\nUsing \\eqref{T_i^delta}, we get\n\\begin{align} \\label{psi_1}\n\\begin{aligned}\n\\tilde \\psi (x) = \\left( 1- \\frac{\\tau_+^r (x)}{T^r (x)} \\right) \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} f(X(t, y)) \\, dt.\n\\end{aligned}\n\\end{align}\nSince \n\\begin{equation*}\nT^r (x) - \\log 2 \\leq \\tau_+^r (x) < T^r (x),\n\\end{equation*}\nby Lemma \\ref{low-bound-T_i^r}, we have\n\\begin{equation*}\n0 < 1- \\frac{\\tau_+^r (x)}{T^r (x)} \\leq \\frac{\\log 2}{T^r (x)} \\leq \\log 2 \\left( \\log \\cfrac{\\kappa}{r} -\\cfrac{\\log 2}{2} \\right)^{-1},\n\\end{equation*}\nand, hence, by replacing $r \\in (0, \\delta \/4)$ by a smaller number if necessary,\nwe conclude from \\eqref{psi_1} that\n\\begin{equation*}\nC_1 |\\tilde \\psi (x)| < \\frac{\\eta}{4},\n\\end{equation*}\nand, moreover, from \\eqref{zeta_r}, that\n\\begin{equation} \\label{psi_1-b-zeta_r}\n-\\tilde \\psi (x)b(x) \\cdot D\\zeta_{2r} (x) < \\frac{\\eta}{4}.\n\\end{equation}\n\n\nConsider next the case where $\\tau_+^r (x) \\wedge \\tau_-^r (x) = \\tau_+^r (x)$.\nNoting that\n\\begin{equation*}\nf (X(t, x)) = 0 \\ \\ \\ \\text{ for all } t \\in [0, \\tau_+^r (x)],\n\\end{equation*}\nand using \\eqref{T_i^delta}, we get\n\\begin{align*}\n|\\tilde \\psi (x)| \\leq \\frac{\\tau_+^r (x)}{T^r (x)} \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} |f(X(t, y))| \\, dt\n \\leq \\log 2 \\left( \\log \\cfrac{\\kappa}{r} - \\cfrac{\\log 2}{2} \\right)^{-1} \\int_{\\tau_-^\\delta (y)}^{\\tau_+^\\delta (y)} |f(X(t, y))| \\, dt,\n\\end{align*}\nfrom which, by replacing $r \\in (0, \\delta \/4)$ by a smaller number if necessary, we conclude that \n\\begin{equation*}\nC_1 |\\tilde \\psi (x)| < \\frac{\\eta}{4},\n\\end{equation*}\nand, moreover, that\n\\begin{equation*}\n-\\tilde \\psi (x)b(x) \\cdot D\\zeta_{2r} (x) < \\frac{\\eta}{4}.\n\\end{equation*}\nCombining this with \\eqref{psi_1-b-zeta_r} yields\n\\begin{equation} \\label{middle2}\n-\\tilde \\psi b \\cdot D\\zeta_{2r} < \\frac{\\eta}{4} \\ \\ \\ \\text{ in } \\Omega (r^2) 
\\cap \\left( B_{4r} \\setminus \\overline B_{2r} \\right).\n\\end{equation} \nNow, \\eqref{varphi-psi_1}, \\eqref{middle1}, and \\eqref{middle2} together yield\n\\begin{equation*}\n-b \\cdot D\\varphi + f < \\frac{\\eta}{2} \\ \\ \\ \\text{ in } \\Omega (r^2) \\cap \\left( B_{4r} \\setminus \\overline B_{2r} \\right).\n\\end{equation*} \nCombining this with \\eqref{center} and \\eqref{outer}, we get\n\\begin{equation*}\n-b \\cdot D\\varphi + f < \\frac{\\eta}{2} \\ \\ \\ \\text{ in } \\Omega (r^2),\n\\end{equation*}\nand, moreover, for any $x \\in \\Omega (r^2)$,\n\\begin{align*}\n\\frac{\\eta}{2} &> - b (x) \\cdot D\\varphi (x) + f (x) > - b (x) \\cdot D\\psi (x) + g (x) - \\frac{\\eta}{4} \\\\\n &= - b (x) \\cdot D\\varphi (x) + \\tilde G(x) - \\tilde G(0) -\\frac{\\eta}{4} \\\\\n &> - b (x) \\cdot D\\varphi (x) + G(x, 0) - G(0, 0) - \\frac{\\eta}{2}.\n\\end{align*}\nThis proves \\eqref{enough}. The proof is complete.\n\\end{proof}\n\n\n\\begin{lem} \\label{lim-min-barG_i}\nWe have \n\\begin{equation*}\n\\lim_{h \\to 0+} \\min_{q \\in \\mathbb R} \\overline G_i ((-1)^i h, q) = G(0, 0) \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\\end{lem}\n\n\nIn what follows, let $L(c)$ denote the length of a given curve $c$.\n\n\n\\begin{lem} \\label{length-c_i}\nFor any $r \\in (0, \\kappa)$,\n\\begin{equation*}\nL(c_i (h) \\cap B_r) \\leq 4r \\ \\ \\ \\text{ for all } h \\in J_i \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\\end{lem}\n\n\n\\begin{proof}\nFix any $i \\in \\{ 1, 2, 3 \\}$ and $h \\in J_i$.\nIf $|h| \\geq r^2$, then $c_i (h) \\cap B_r = \\emptyset$ and $L(c_i (h) \\cap B_r) = 0$.\n\n\nWe assume henceforth that $|h| < r^2$ and, for the time being, that $i = 2$.\nAs seen in the proof of Lemma \\ref{low-bound-T_i^r},\nthe curve (hyperbola) $c_2 (h) \\cap B_r \\cap \\{ (x_1, x_2) \\, | \\, x_2 > 0 \\}$ is represented as\n\\begin{equation*}\n(x_1, x_2) = (\\xi_1 (t), \\xi_2 (t)) \\ \\ \\ \\text{ with } t \\in (-\\tau, \\tau),\n\\end{equation*} \nwhere\n\\begin{equation*}\n(\\xi_1 (t), \\xi_2 (t)) := \\frac{\\sqrt{h}}{2} \\, (e^{2t} -e^{-2t}, e^{2t} + e^{-2t}),\n\\end{equation*}\nand\n\\begin{equation*}\n\\tau := \\frac{1}{4} \\log \\left( \\frac{r^2}{h} + \\sqrt{\\frac{r^4}{h^2} - 1} \\right).\n\\end{equation*}\n\n\nWe note\n\\begin{equation*}\ne^{2\\tau} \\leq \\sqrt{\\frac{2}{h}} \\, r,\n\\end{equation*}\nand compute that\n\\begin{align*}\nL(c_2 (h) \\cap B_r) &= 2L(c_2 (h) \\cap B_r \\cap \\{ (x_1, x_2) \\, | \\, x_2 > 0 \\})\n = 4\\int_0^{\\tau} \\sqrt{{\\dot \\xi_1 (t)}^2 + {\\dot \\xi_2 (t)}^2} \\, dt \\\\\n &= 4\\sqrt{2h} \\int_0^{\\tau} \\sqrt{e^{4t} + e^{-4t}} \\, dt \n \\leq 4\\sqrt{2h} \\int_0^{\\tau} (e^{2t} + e^{-2t}) \\, dt \\leq 2\\sqrt{2h} e^\\tau \\leq 4r.\n\\end{align*}\n\nBy the rotational symmetry, the computation above also implies\nthat $L(c_i (h) \\cap B_r) \\leq 2r$ when $i = 1$ or $i = 3$.\nThe proof is complete.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Lemma \\ref{lim-min-barG_i}] \nWe begin by noting that, for some $c_0 > 0$,\n\\begin{equation} \\label{c_0}\n|DH(x)| \\geq c_0 |x| \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega.\n\\end{equation}\nIndeed, by (H3), we have\n\\begin{equation} \\label{DH-near-0}\n|DH(x)| = 2|x| \\ \\ \\ \\text{ for all } x \\in B_{\\kappa},\n\\end{equation}\nand also, we have\n\\begin{equation*}\n\\min_{\\overline \\Omega \\setminus B_{\\kappa}} |DH(x)| > 0.\n\\end{equation*}\nThese together show that \\eqref{c_0} hold for some constant $c_0 > 0$.\n\n\nFix any $\\gamma > 0$.\nIn view of 
the coercivity of $G$, we choose $R > 0$ so that\n$G(x, p) \\geq G(0, 0)$ for all $(x, p) \\in \\overline \\Omega \\times (\\mathbb R^n \\setminus B_R)$.\nSince $G$ is uniformly continuous on $\\overline \\Omega \\times \\overline B_R$,\nthere is a constant $C_1 > 0$ such that\n\\begin{equation*}\n|G(x, p) - G(0, 0)| \\leq \\gamma + C_1 (|x| + |p|) \\ \\ \\ \\text{ for all } (x, p) \\in \\overline \\Omega \\times \\overline B_R.\n\\end{equation*}\nHence, we have\n\\begin{align*}\n&G(x, p) \\geq G(0, 0) - \\gamma -C_1 (|x| + |p|) \\ \\ \\ \\text{ for all } (x, p) \\in \\overline \\Omega \\times \\mathbb R^2, \\\\\n&|G(x, 0) - G(0, 0)| \\leq \\gamma + C_1|x| \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega. \n\\end{align*}\n\n\nThe two inequalities above combined with \\eqref{c_0} yield\n\\begin{align}\n&G(x, p) \\geq G(0, 0) - \\gamma -C_2 (|DH(x)| + |p|) \\ \\ \\ \\text{ for all } (x, p) \\in \\overline \\Omega \\times \\mathbb R^2, \\label{G-bound1} \\\\\n&|G(x, 0) - G(0, 0)| \\leq \\gamma + C_2|DH(x)| \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega \\label{G-bound2}\n\\end{align}\nfor some constant $C_2 > 0$.\n\n\nFix any $i \\in \\{ 1, 2, 3 \\}$ and $h \\in J_i$.\nFix a point $x \\in c_i (h)$.\nTaking the average of both sides of \\eqref{G-bound2} along $c_i (h)$, we get\n\\begin{equation} \\label{G_i-bound0}\n|\\overline G_i (h, 0) - G(0, 0)| \\leq \\gamma + \\frac{C_2}{T_i (h)} \\int_0^{T_i (h)} |DH(X(t, x))| \\, dt = \\gamma + \\frac{C_2 L_i (h)}{T_i (h)}.\n\\end{equation}\n\n\nFix any $q \\in \\mathbb R$.\nConsider first the case where $q \\leq \\gamma T_i (h)$.\nUsing \\eqref{G-bound1}, we compute\n\\begin{align} \\label{G_i-bound1}\n\\begin{aligned}\n\\overline G_i (h, q) - G(0, 0) + \\gamma &\\geq - \\, \\frac{C_2}{T_i (h)} \\int_0^{T_i (h)} (1 + |q|)|DH(X(t, x))| \\, dt \\\\\n &= - (1 + |q|) \\frac{C_2 L_i (h)}{T_i (h)} \\geq - \\, \\frac{C_2 L_i (h)}{T_i (h)} - \\gamma C_2 L_i (h).\n\\end{aligned}\n\\end{align}\n\n\nConsider next the case where $q > \\gamma T_i (h)$.\nSet\n\\begin{equation*}\nS = \\{ t \\in [0, T_i (h)] \\ | \\ |X(t, x)| \\leq R\/(c_0 |q|) \\}. \n\\end{equation*}\nChoose $\\delta > 0$ so that if $|h| \\leq \\delta$, then\n\\begin{equation*}\n\\frac{R}{c_0 \\gamma T_i (h)} \\leq \\kappa,\n\\end{equation*}\nand assume henceforth that $|h| < \\delta$.\nNote that $R\/(c_0 |q|) \\leq \\kappa$ and, by Lemma \\ref{length-c_i},\n\\begin{equation*}\nL(c_i (h) \\cap B_{R\/(c_0 |q|)}) \\leq \\frac{4R}{c_0 |q|},\n\\end{equation*}\nwhich implies that\n\\begin{equation} \\label{G_i-bound2}\n\\int_S |DH(X(t, x))| \\, dt = L(c_i (h) \\cap B_{R\/(c_0 |q|)}) \\leq \\frac{4R}{c_0 |q|}.\n\\end{equation}\nOn the other hand, if $t \\in [0, T_i (h)] \\setminus S$, then, by \\eqref{c_0},\n\\begin{equation*}\n\\frac{R}{c_0 |q|} < |X(t, x)| \\leq \\frac{|DH(X(t, x))|}{c_0},\n\\end{equation*}\nthat is, $|q| |DH(X(t, x))| > R$. 
Hence,\n\\begin{equation} \\label{G_i-bound3}\nG \\big( X(t, x), qDH(X(t, x)) \\big) \\geq G(0, 0) \\ \\ \\ \\text{ for all } t \\in [0, T_i (h)] \\setminus S.\n\\end{equation}\n\n\nUsing \\eqref{G-bound1}, \\eqref{G_i-bound2}, and \\eqref{G_i-bound3},\nwe compute that\n\\begin{align*}\n\\int_0^{T_i (h)} &G(X(t, x), qDH(X(t, x))) \\, dt \\\\\n &\\geq \\int_S \\Big( G(0, 0) - \\gamma - C_2(1+|q|)|DH(X(t, x))| \\Big) \\, dt + G(0, 0) \\int_{[0, T_i (h)] \\setminus S} \\, dt \\\\\n &\\geq (G(0, 0) - \\gamma) T_i (h) - C_2 L_i (h) - C_2 |q| L(c_i (h) \\cap B_{R\/(c_0 |q|)}) \\\\\n &\\geq (G(0, 0) - \\gamma) T_i (h) - C_2 L_i (h) - \\frac{4C_2 R}{c_0},\n\\end{align*}\nfrom which we get\n\\begin{equation*}\n\\overline G_i (h, q) \\geq G(0, 0) - \\gamma - \\frac{L_i (h)}{T_i (h)} - \\frac{2R}{c_0 T_i (h)} \\ \\ \\ \\text{ if } |h| < \\delta.\n\\end{equation*}\n\n\nThe inequality above together with \\eqref{G_i-bound1} implies that\n\\begin{equation*}\n\\liminf_{J_i \\ni h \\to 0} \\min_{q \\in \\mathbb R} \\overline G_i (h, q) \\geq G(0, 0) \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nwhile \\eqref{G_i-bound0} yields\n\\begin{equation} \\label{lim-barG_i-00}\n\\lim_{J_i \\ni h \\to 0} \\overline G_i (h, 0) = G(0, 0) \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation}\nThese two together complete the proof.\n\\end{proof}\n\n\nFormula \\eqref{lim-barG_i-00} can be easily generalized as\n\\begin{equation} \\label{general-lim-barG_i}\n\\lim_{J_i \\times \\mathbb R \\ni (h, q) \\to (0, 0)} \\overline G_i (h, q) = G(0, 0) \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation}\n\n\n\nAs in Lemma \\ref{v^--d_0}, for $i \\in \\{ 1, 2, 3 \\}$,\nlet $u_i^+ \\in \\mathcal S_i^- \\cap C(\\bar J_i)$ be the function\ndefined by \\eqref{u_i^pm} in $J_i$ and extended by continuity to $\\bar J_i$\nand let $d_0 = \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0)$.\n\n\n\\begin{lem} \\label{u_i^+0-G00}\nWe have\n\\begin{equation*}\n\\lambda u_i^+ (0) + G(0, 0) \\leq 0 \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\\end{lem}\n\n\n\\begin{proof}\nFix $i \\in \\{1, 2, 3 \\}$.\nSince $u_i^+ \\in \\mathcal S_i^- \\cap C(\\bar J_i)$,\nwe see that\n\\begin{equation*}\n\\lambda u_i^+ (h) + \\min_{q \\in \\mathbb R} \\overline G_i (h, q) \\leq 0 \\ \\ \\ \\text{ for all } h \\in J_i.\n\\end{equation*}\nBy Lemma \\ref{lim-min-barG_i},\nwe conclude that $\\lambda u_i^+ (0) + G(0, 0) \\leq 0$.\n\\end{proof}\n\n\n\\begin{lem} \\label{nu_i^d} \nLet $d \\in (-\\infty, d_0)$ and set\n\\begin{equation} \\label{def-nu_i^d}\n\\nu_i^d (h) = \\sup \\{ u (h) \\ | \\ u \\in \\mathcal S_i^- \\cap C(\\bar J_i), \\ u(0) = d \\} \\ \\ \\ \\text{ for } h \\in \\bar J_i \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation}\nThen there exists $\\delta \\in (0, h_0)$ such that\n\\begin{equation*}\n\\nu_i^d (h) > d \\ \\ \\ \\text{ for all } h \\in J_i \\cap [-\\delta, \\delta ] \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\\end{lem}\n\n\nHere an important remark on $\\mathcal S_i^-$ is\nthat if $u \\in \\mathcal S_i^-$, with $i \\in \\{ 1, 2, 3 \\}$,\nthen $u - a \\in \\mathcal S_i^-$ for any constant $a > 0$.\nIn particular, the sets of all $u \\in \\mathcal S_i^-$ that satisfy $u(0) = d$, with $d < d_0$ and $i \\in \\{ 1, 2, 3 \\}$,\nare non-empty and, by Lemmas \\ref{equi-con} and \\ref{bound-by-bc},\nthese are uniformly bounded and equi-continuous on $\\bar J_i$.\nThus, the functions $\\nu_i^d$, with $i \\in \\{ 1, 2, 3 \\}$, are well-defined as continuous 
functions on $\\bar J_i$ and,\nin view of Perron's method, these are solutions of \\eqref{lim-HJ}.\n\n\n\\begin{proof} \nSince $d < d_0$, we have\n\\begin{equation*}\n\\lambda d + G(0, 0) < 0 \\ \\ \\ \\text{ and } \\ \\ \\ \\min_{i \\in \\{ 1, 2, 3 \\}} u_i^+ (0) > d.\n\\end{equation*}\nBy \\eqref{general-lim-barG_i}, there exists $\\delta \\in (0, h_0)$ such that,\nfor all $i \\in \\{ 1, 2, 3 \\}$ and $h \\in J_i \\cap [ -\\delta, \\delta ]$,\n\\begin{equation*}\n\\lambda (d + (-1)^i h \\delta ) + \\overline G_i (h, (-1)^i \\delta ) < 0 \\ \\ \\ \\text{ and } \\ \\ \\ u_i^+ (h) \\geq d + (-1)^i h \\delta.\n\\end{equation*}\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$, we set \n\\begin{align*}\nw_i (h) = \n\\begin{cases}\nu_i^+ (h) - u_i^+ ((-1)^i \\delta ) + d + \\delta^2 \\ \\ \\ &\\text{ if } h \\in \\bar J_i \\setminus [-\\delta, \\delta ], \\\\\nd + (-1)^i h \\delta \\ \\ \\ &\\text{ if } h \\in \\bar J_i \\cap [-\\delta, \\delta ], \n\\end{cases}\n\\end{align*}\nand observe that $w_i \\in \\mathrm{Lip} (\\bar J_i)$ and \n\\begin{equation*}\n\\lambda w_i (h) + \\overline G_i (h, Dw_i (h)) \\leq 0 \\ \\ \\ \\text{ for a.e. } h \\in J_i.\n\\end{equation*}\nThis and the convexity of $\\overline G_i (h, \\cdot)$ imply that $w_i \\in \\mathcal S_i^-$ for all $i \\in \\{ 1, 2, 3 \\}$.\nNoting that $w_i (0) = d$, we see that $\\nu_i^d \\geq w_i$ on $\\bar J_i$ for all $i \\in \\{ 1, 2, 3 \\}$, which,\nin particular, shows that $\\nu_i^d (h) \\geq d + (-1)^i h \\delta > d$ for all $h \\in J_i \\cap [-\\delta, \\delta]$ and $i \\in \\{ 1, 2, 3 \\}$.\nThe proof is complete.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Lemma \\ref{v^--d_0}] \nWe argue by contradiction.\nWe thus set $d = \\min_{c_2 (0)} v^-$ and suppose that $d < d_0$.\n\n\nFor $i\\in\\{1,2,3\\}$, let $\\nu_i^d$ be the functions defined by \\eqref{def-nu_i^d}.\nFor $i \\in \\{ 1, 2, 3 \\}$ and $h \\in \\bar J_i$, we set\n\\begin{equation*}\nv_i (h) = u_i^+ (h) \\wedge \\nu_i^d (h),\n\\end{equation*}\nand observe that $v_i \\in \\mathcal S_i \\cap C(\\bar J_i)$, $v_i (0) = d$, and $v_i (h_i) \\leq u_i^+ (h_i) = d_i$. \nNoting that $u_i^-\\in\\mathcal S_i^+$, $\\lim_{J_i\\ni h\\to h_i} u_i^- (h) =d_i$, \nand $\\lim_{J_i\\ni h\\to 0}u_i^-(h)\\geq d$ for all $i\\in\\{1,2,3\\}$ and using the comparison principle, we get\n\\begin{equation*}\nu_i^-(h) \\geq v_i (h) \\ \\ \\ \\text{ for all } h \\in J_i \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\n\nSince $d < d_0$, thanks to Lemma \\ref{u_i^+0-G00},\nwe may choose $\\eta > 0$ so that\n\\begin{equation*}\n\\lambda d + G(0, 0) < - \\eta.\n\\end{equation*}\nBy Lemma \\ref{key},\nthere exist $\\delta \\in (0, h_0)$ and $\\psi \\in C^1 (\\overline{\\Omega (\\delta)})$ such that\n\\begin{equation*}\n-b \\cdot D\\psi + G(x, 0) \\leq G(0, 0) + \\eta \\ \\ \\ \\text{ on } \\overline{\\Omega (\\delta )}.\n\\end{equation*}\nIn view of Lemma \\ref{nu_i^d},\nby replacing $\\delta \\in (0, h_0)$ by a smaller number if necessary, we may assume that\n\\begin{equation*}\nv_i (h) > d \\ \\ \\ \\text{ for all } h \\in J_i \\cap [-\\delta, \\delta ] \\text{ and } i \\in \\{ 1, 2, 3 \\}. \n\\end{equation*}\nNow we may choose $c \\in (d, d_0)$ so that\n\\begin{equation*}\nc < \\min_{i \\in \\{ 1, 2, 3 \\}} v_i ((-1)^i \\delta ) \\ \\ \\ \\text{ and } \\ \\ \\ \\lambda c + G(0, 0) < -\\eta.\n\\end{equation*}\n\n\nFix any $\\gamma > 0$ so that $\\gamma < c - d$.\nThat is, $c - \\gamma > d$. 
Set\n\\begin{equation*}\nw^\\varepsilon (x) = c - \\gamma + \\varepsilon \\psi (x) \\ \\ \\ \\text{ for } x \\in \\overline{\\Omega (\\delta )} \\text{ and } \\varepsilon \\in (0, \\varepsilon_0 ),\n\\end{equation*}\nand compute that, for any $\\varepsilon \\in (0, \\varepsilon_0)$ and $x \\in \\overline{\\Omega (\\delta)}$,\n\\begin{align} \\label{ineq: w^ep}\n\\begin{aligned}\n\\lambda w^\\varepsilon (x) &- \\cfrac{b (x) \\cdot Dw^\\varepsilon (x)}{\\varepsilon} + G(x, Dw^\\varepsilon (x)) \\\\\n &= \\lambda (c - \\gamma ) + \\varepsilon \\lambda \\psi (x) - b(x) \\cdot D\\psi (x) + G(x, \\varepsilon D\\psi (x)) \\\\\n &= \\lambda (c - \\gamma ) + \\varepsilon \\lambda \\psi (x) - b(x) \\cdot D \\psi (x) + G(x, 0) + G(x, \\varepsilon D\\psi (x)) - G(x, 0) \\\\\n &\\leq - \\lambda \\gamma + \\varepsilon \\lambda \\psi (x) + \\lambda c + G(0, 0) + \\eta + G(x, \\varepsilon D \\psi (x)) - G(x, 0) \\\\\n &< -\\lambda \\gamma + \\varepsilon \\lambda \\psi (x) + G(x, \\varepsilon D\\psi (x)) - G(x, 0),\n\\end{aligned}\n\\end{align}\nfrom which, by replacing $\\varepsilon_0 \\in (0, 1)$ by a smaller number if necessary,\nwe may assume that if $\\varepsilon \\in (0, \\varepsilon_0)$, then \n\\begin{equation*}\n\\lambda w^\\varepsilon (x) - \\cfrac{b(x) \\cdot Dw^\\varepsilon (x)}{\\varepsilon} + G(x, Dw^\\varepsilon (x)) \\leq 0 \\ \\ \\ \\text{ for all } x \\in \\Omega (\\delta ).\n\\end{equation*}\nSince\n\\begin{equation*}\nc - \\gamma < \\min_{i \\in \\{ 1, 2, 3 \\}} v_i ((-1)^i \\delta) - \\gamma,\n\\end{equation*}\nby replacing $\\varepsilon_0 \\in (0, 1)$ by a smaller number if necessary,\nwe may assume that if $\\varepsilon \\in (0, \\varepsilon_0)$, then\n\\begin{equation*}\nw^\\varepsilon (x) < \\min_{i \\in \\{ 1, 2, 3 \\}} v_i ((-1)^i \\delta ) -\\gamma \\ \\ \\ \\text{ for all } x \\in \\overline{\\Omega (\\delta )}.\n\\end{equation*}\nMoreover, since $u_i^- \\geq v_i$ in $J_i$,\nby replacing $\\varepsilon_0 \\in (0, 1)$ by a smaller number if necessary,\nwe may assume that if $\\varepsilon \\in (0, \\varepsilon_0)$, then\n\\begin{equation*}\n\\min_{i \\in \\{ 1, 2, 3 \\}} v_i ((-1)^i \\delta ) - \\gamma < u^\\varepsilon (x) \\ \\ \\ \\text{ for all } x \\in c_i ((-1)^i \\delta ) \\text{ and } i \\in \\{ 1, 2 ,3 \\}.\n\\end{equation*}\nNoting that\n\\begin{equation*}\n\\partial \\Omega (\\delta ) = \\bigcup_{i \\in \\{ 1, 2, 3 \\}} c_i ((-1)^i \\delta ),\n\\end{equation*}\nand applying the comparison principle on $\\overline{\\Omega (\\delta)}$,\nwe get\n\\begin{equation*}\nw^\\varepsilon(x) \\leq u^\\varepsilon(x) \\ \\ \\ \\text{ for all } x \\in \\overline{\\Omega (\\delta )} \\text{ and } \\varepsilon \\in (0, \\varepsilon_0),\n\\end{equation*}\nwhich yields \n\\begin{equation*}\nc - \\gamma \\leq v^- (x) \\ \\ \\ \\text{ for all } x \\in c_2 (0).\n\\end{equation*}\nSince $c - \\gamma > d = \\min_{c_2 (0)} v^-$, we get a contradiction. The proof is complete.\n\\end{proof}\n\n\nWe now turn to the solvability of \\eqref{HJ}.\nFor $i \\in \\{ 1, 2, 3 \\}$, let $I_i$ denote the set of $d \\in \\mathbb R$ such that\n\\begin{equation*}\n\\{ u \\in \\mathcal S_i^- \\cap C(\\bar J_i) \\ | \\ u(h_i) = d \\} \\not= \\emptyset.\n\\end{equation*}\nSince $\\lambda > 0$, it is obvious that\nif $d \\in I_i$ and $c < d$, then $c \\in I_i$.\nObserve that if $d \\in \\mathbb R$ satisfies\n\\begin{equation*}\n\\lambda d + \\max_{h \\in \\bar J_i} \\overline G_i (h, 0) \\leq 0, \n\\end{equation*}\nthen $d \\in \\mathcal S_i^-$ and $d \\in I_i$,\nand that if $d \\in \\mathbb R$ satisfies\n\\begin{equation*}\n\\lambda d + \\min_{(h, p) \\in \\bar J_i \\times \\mathbb R} \\overline G_i (h, p) > 0,\n\\end{equation*} \nthen $d \\not \\in I_i$.\nThus we see that $I_i = (-\\infty, a_i]$\nfor some $a_i \\in \\mathbb R$.\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$, $d \\in I_i$, and $h \\in \\bar J_i$, we set\n\\begin{equation*}\n\\rho_i^d (h) = \\sup \\{ u(h) \\ | \\ u \\in \\mathcal S_i^- 
\\cap C(\\bar J_i), \\ u(h_i) = d \\},\n\\end{equation*}\nand observe that $\\rho_i^d \\in \\mathcal S_i\\cap C(\\bar J_i)$ and $\\rho_i^d (h_i)=d$. \n\n\nWe set\n\\begin{equation*}\n\\rho_0 = \\min_{i \\in \\{ 1, 2, 3 \\}} \\sup_{d \\in I_i} \\rho_i^d (0),\n\\end{equation*}\nand note, in view of \\eqref{bound-by-bc}, that $\\rho_0 < \\infty$.\n\n\nLet $I_0$ be the set of $d \\in \\mathbb R$ such that\n\\begin{equation*}\n\\{ u \\in \\mathcal S_i^- \\cap C(\\bar J_i) \\ | \\ u(0) = d \\} \\not= \\emptyset \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nIt is obvious, as well as $I_i$, that if $d \\in I_0$ and $c < d$, then $c \\in I_0$.\nObserve that if $d > \\rho_0$, then $d \\not\\in I_0$,\nand that if $d \\in I_0$, \nthen \nthere exist $c_i \\in I_i$, with $i \\in \\{ 1, 2, 3 \\}$,\nsuch that $\\rho_i^{c_i} (0) = d$.\nThus we see that $I_0 = (-\\infty, \\rho_0]$.\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$, $d \\in I_0$, and $h \\in \\bar J_i$, we set\n\\begin{equation*}\n\\nu_i^d (h) = \\sup \\{ u(h) \\ | \\ u \\in \\mathcal S_i^- \\cap C(\\bar J_i), \\ u(0) = d \\},\n\\end{equation*}\nand observe that $\\nu_i^d \\in \\mathcal S_i \\cap C(\\bar J_i)$ and $\\nu_i^d (0) = d$.\n\n \n\\begin{thm} \\label{exist}\nLet $(d_0, d_1, d_2, d_3) \\in \\mathbb R^4$. \nThe problem \\eqref{HJ} has\na viscosity solution \n$(u_1, u_2, u_3)$ $\\in C(\\bar J_1) \\times C(\\bar J_2) \\times C(\\bar J_3)$\nif and only if\n\\begin{align} \\label{condi: admissible}\n\\begin{cases}\n(d_0, d_1, d_2, d_3) \\in I_0 \\times I_1 \\times I_2 \\times I_3, \\\\\n\\min_{i \\in \\{ 1, 2, 3 \\}} \\rho_i^{d_i} (0) \\geq d_0, \\\\\n\\nu_i^{d_0} (h_i) \\geq d_i \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\}.\n\\end{cases}\n\\end{align} \n\\end{thm}\n\n\n\\begin{proof}\nFirst, assume that \\eqref{condi: admissible} is satisfied. Set\n\\begin{equation*}\nu_i (h) = \\rho_i^{d_i} (h) \\wedge \\nu_i^{d_0} (h) \\ \\ \\ \\text{ for } h \\in \\bar J_i \\text{ and } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nobserve that $u_i \\in \\mathcal S_i \\cap C(\\bar J_i)$, $u_i (h_i) = d_i$, and $u_i (0) = d_0$ for all $i \\in \\{ 1, 2, 3 \\}$,\nand conclude that $(u_1, u_2, u_3)$ is a viscosity solution of \\eqref{HJ}.\n\n\nNow, assume that \\eqref{HJ} has\na viscosity solution $(u_1, u_2, u_3) \\in C(\\bar J_1) \\times C(\\bar J_2) \\times C(\\bar J_3)$. \nObviously, $(d_0, d_1, d_2, d_3) \\in I_0 \\times I_1 \\times I_2 \\times I_3$,\n$\\rho_i^{d_i} \\geq u_i$ and $\\nu_i^{d_0} \\geq u_i$ on $\\bar J_i$ for all $i \\in \\{ 1, 2, 3 \\}$.\nMoreover we see that $\\rho_i^{d_i} (0) \\geq u_i (0) = d_0$\nand $\\nu_i^{d_0} (h_i) \\geq u_i = d_i$ for all $i \\in \\{ 1, 2, 3 \\}$.\nThus \\eqref{condi: admissible} is valid. \n\\end{proof}\n\n\nWe set\n\\begin{equation*}\\mathcal{D} = \\{ (d_0, d_1, d_2, d_3) \\in \\mathbb R^4 \\ | \\\n\\text{ \\eqref{condi: admissible} is satisfied } \\},\n\\end{equation*}\nand \n\\begin{align*}\n\\mathcal D_0 = \\{ (d_0, &d_1, d_2, d_3) \\in \\mathbb R^4 \\mid \\\\&\\text{ there exists $a > 0$ such that }\n(d_0 + a, d_1 + a, d_2 + a, d_3 + a)\\in \\mathcal D \\}. \n\\end{align*}\n\n\n\n\\begin{lem} \\label{D_0}\nLet $(d_0, d_1, d_2, d_3) \\in \\mathcal D_0$.\nThen \n\\begin{equation*}\nd_0 < \\min_{i \\in \\{ 1, 2, 3 \\}} \\rho_i^{d_i} (0). \n\\end{equation*}\n\\end{lem}\n\n\n\\begin{proof}\nChoose $ a > 0$ so that $(d_0 + a, d_1 + a, d_2 + a, d_3 + a) \\in \\mathcal D$. 
\nFix any $i \\in \\{ 1, 2, 3 \\}$ and note that\nthe function $u := \\rho_i^{d_i + a} - a$ is a subsolution of \n\\begin{equation*}\n\\lambda u + \\overline G_i (h, u') = -\\lambda a \\ \\ \\ \\text{ in } J_i.\n\\end{equation*}\nSelect a smooth function $\\psi \\in C^1(\\bar J_i)$ so that \n$\\psi (h) = 1$ in a neighborhood of $0$ and $\\psi (h) =0$ in a neighborhood of $h_i$.\nAccordingly, $\\psi'$ is supported in $J_i$. \nLet $\\varepsilon > 0$ and consider the function $u_\\varepsilon := u + \\varepsilon \\psi$.\nLet $M > 0$ be a Lipschitz bound of $u$ in $\\mathrm{supp} \\, \\psi'$, \nnote that $\\overline G_i$ is uniformly continuous in $\\mathrm{supp} \\, \\psi' \\times [-M, M]$, and \nobserve that if $\\varepsilon > 0$ is sufficiently small, then \n\\begin{equation*}\n\\overline G_i (h, u'(h) + \\varepsilon \\psi' (h)) \\leq \\overline G_i (h, u'(h)) + \\frac{\\lambda a}{2} \\ \\ \\ \\text{ for a.e. } h \\in J_i.\n\\end{equation*}\nThus, if $\\varepsilon \\in (0, a\/2)$ is sufficiently small, then we have\n\\begin{equation*}\n\\lambda u_\\varepsilon + \\overline G_i (h, u'_\\varepsilon) \\leq 0 \\ \\ \\ \\text{ in } J_i\n\\end{equation*}\nin the viscosity sense.\nFix such $\\varepsilon > 0$ and observe that\n$u_\\varepsilon (h_i) = u(h_i) = d_i$ and,\nby the definition of $\\rho_i^{d_i}$, that $\\rho_i^{d_i} \\geq u_\\varepsilon$ on $\\bar J_i$.\nSince $u_\\varepsilon (0) = \\rho_i^{d_i + a} (0) - a + \\varepsilon > \\rho^{d_i + a} (0) - a$, \nwe get $\\rho_i^{d_i} (0) > \\rho_i^{d_i + a} (0) - a$. \nSince $(d_0 + a, d_1 + a, d_2 + a, d_3 + a) \\in \\mathcal D$,\nwe have $d_0 + a \\leq \\rho_i^{d_i+a} (0)$.\nCombining this with the inequality above, we get $d_0 < \\rho_i^{d_i} (0)$.\nThis is true for all $i \\in \\{ 1, 2, 3 \\}$ and the proof is complete.\n\\end{proof}\n\n\n\\begin{thm}\nFor any $(d_0, d_1, d_2, d_3) \\in \\mathcal D_0$, \\emph{(G5)} and \\emph{(G6)} hold for \nsome boundary data $g^\\varepsilon$. \n\\end{thm}\n\n\n\\begin{proof}\nChoose $a>0$ so that $(d_0+3a,d_1+3a,d_2+3a,d_3+3a)\\in\\mathcal D$.\nSet $a_i = d_i + a$ for $i \\in \\{ 0, 1, 2, 3 \\}$ and\n\\begin{equation*}\nv_i (h) = \\rho_i^{a_i+ a} (h) \\wedge \\nu_i^{a_0 + a} (h) - a \\ \\ \\ \\text{ for } h \\in \\bar J_i \\text{ and } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nand observe that $v_i (h_i) = a_i$ and $v_i (0) = a_0$ for all $i \\in \\{ 1, 2, 3 \\}$.\nObserving that $\\rho_i^{a_i+ a} \\wedge \\nu_i^{a_0 + a} \\in \\mathcal S_i \\cap C(\\bar J_i)$ and\nnoting that $v_i$ are locally Lipschitz continuous in $\\bar J_i \\setminus \\{ 0 \\}$,\nwe may choose $c > 0$ so that \n\\begin{equation} \\label{ineq: v_i}\n\\lambda v_i (h) + \\overline G_i (h, v_i' (h)) \\leq -c \\ \\ \\ \\text{ for a.e. 
} h \\in J_i \\text{ and all } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation}\nNoting that $\\rho_i := \\rho_i^{a_i + a} - a \\in \\mathcal S_i^- \\cap C(\\bar J_i)$ for all $i \\in \\{ 1, 2, 3 \\}$,\nwe see that\n\\begin{equation*}\n\\lambda \\rho_i (h) + \\min_{q \\in \\mathbb R} \\overline G_i (h, q) \\leq 0 \\ \\ \\ \n\\text{ for all } h \\in J_i \\text{ and } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nand, hence, using Lemma \\ref{lim-min-barG_i}, we get\n\\begin{equation*}\n\\lambda \\min_{i \\in \\{ 1, 2, 3 \\}} \\rho_i (0) + G(0, 0) \\leq 0.\n\\end{equation*}\nSince $(a_0 + a, a_1 + a, a_2 + a, a_3 + a) \\in \\mathcal D_0$,\nby Lemma \\ref{D_0}, we see that\n\\begin{equation} \\label{d_0+a}\na_0 < \\min_{i \\in \\{ 1, 2, 3 \\}} \\rho_i (0),\n\\end{equation}\nfrom which, we may choose $\\eta > 0$ so that\n\\begin{equation*}\n\\lambda a_0 + G(0, 0) \\leq -\\eta. \n\\end{equation*}\nBy Lemma \\ref{key}, there exist\n$\\delta \\in (0, h_0)$ and $\\psi \\in C^1 \\big(\\overline{\\Omega (\\delta)} \\big)$ such that\n\\begin{equation*}\n-b \\cdot D\\psi + G(x, 0) \\leq G(0, 0) + \\eta \\ \\ \\ \\text{ on } \\overline{\\Omega (\\delta )}.\n\\end{equation*}\nAccording to \\eqref{d_0+a}, by applying Lemma \\ref{nu_i^d} with $d = a_0$ and $d_0 = \\min_{i = 1, 2, 3} \\rho_i (0)$ and\nby replacing $\\delta \\in (0, h_0)$ by a smaller number if necessary,\nwe may assume that\n\\begin{equation*}\nv_i (h) > a_0 \\ \\ \\ \\text{ for all } h \\in J_i \\cap [-\\delta, \\delta ] \\text{ and } i \\in \\{ 1, 2, 3 \\}. \n\\end{equation*}\nNow we may choose $\\gamma \\in \\big( a_0, \\min_{i\\in\\{1,2,3\\}}\\rho_i(0) \\big)$ so that\n\\begin{equation*}\n\\gamma < \\min_{i \\in \\{ 1, 2, 3 \\}} v_i ((-1)^i \\delta ) \\ \\ \\ \\text{ and } \\ \\ \\ \\lambda \\gamma + G(0, 0) < -\\eta.\n\\end{equation*}\n\n\nSetting\n\\begin{equation*}\nw_0^\\varepsilon (x) = \\gamma - a + \\varepsilon \\psi (x) \\ \\ \\ \\text{ for } x \\in \\overline{\\Omega (\\delta )} \\text{ and } \\varepsilon \\in (0, 1),\n\\end{equation*}\nand computing as in \\eqref{ineq: w^ep}, we see that there exists\n$\\varepsilon_0 \\in (0, 1)$ such that if $\\varepsilon \\in (0, \\varepsilon_0)$, then \n\\begin{equation} \\label{ineq: w_0^ep}\n\\lambda w_0^\\varepsilon (x) - \\cfrac{b(x) \\cdot Dw_0^\\varepsilon (x)}{\\varepsilon} + G(x, Dw_0^\\varepsilon (x)) \\leq 0 \\ \\ \\ \\text{ for all } x \\in \\Omega (\\delta ).\n\\end{equation}\n\n\nNext, note that\nthere exist $\\delta_i \\in (0, \\delta)$, with $i \\in \\{ 1, 2, 3 \\}$, such that\n\\begin{equation*}\nv_i ((-1)^i \\delta_i ) = \\gamma \\ \\ \\ \\text{ and } \\ \\ \\ v_i (h) < \\gamma\n\\end{equation*}\nfor all $h \\in J_i \\cap (-\\delta_i, \\delta_i )$ and $i\\in\\{1,2,3\\}$.\nSet\n\\begin{equation*}\nJ_i (h) = J_i \\cup (h_i - h, h_i + h) \\ \\ \\ \\text{ for } h > 0 \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\nCombining \\eqref{ineq: v_i} and the fact that there exist $p_i \\in \\mathbb R$, \nwith $i \\in \\{ 1, 2, 3 \\}$, such that $\\lim_{J_i \\ni h \\to h_i} v_i' (h) \n= p_i$ along a subsequence for all $i \\in \\{ 1, 2, 3 \\}$, we have\n\\begin{equation*}\n\\lambda v_i (h_i) + \\overline G_i (h_i, p_i) \\leq -c \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nand, hence, for each $i \\in \\{ 1, 2, 3 \\}$ and for some $r_i \\in (0, \\delta_i\/2)$,\nwe can extend the domain of definition of $v_i$ to $\\overline{J_i (r_i)}$ so that\n\\begin{equation*}\nv_i (h) + \\overline G_i (h, v_i' (h)) \\leq 0 \\ \\ \\ \\text{ for a.e. 
} h \\in J_i (r_i),\n\\end{equation*}\nby setting\n\\begin{equation*}\nv_i (h) = v_i (h_i) + p_i (h - h_i) \\ \\ \\ \\text{ for } h \\in [h_i - r_i, h_i + r_i] \\setminus \\bar J_i.\n\\end{equation*}\nHere we have used the fact that, in view of the definition of $\\overline G_i$,\nwe may assume that, for each $i \\in \\{ 1, 2, 3 \\}$, $\\overline G_i$ is defined in $\\overline{J_i (r_i)} \\times \\mathbb R$.\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$, $r \\in (0, r_i\/2)$, and $h \\in J_i (r_i\/2) \\setminus [-\\delta_i\/2, \\delta_i\/2]$, set\n\\begin{equation*}\nv_i^r (h) = \\varphi_r \\ast v_i (h), \n\\end{equation*} \nwhere $\\varphi_r (h) = (1\/r)\\varphi ((1\/r)h)$ and\n$\\varphi \\in C^\\infty (\\mathbb R)$ is a standard mollification kernel, that is,\n$\\varphi \\geq 0$, supp $\\varphi \\subset [-1, 1]$, and $\\int_\\mathbb R \\varphi \\, dx = 1$.\n\n\nDue to the local Lipschitz continuity of \n$v_i$ in $\\overline{J_i (r_i)} \\setminus \\{ 0 \\}$,\nthere exist $C_i > 0$, with $i \\in \\{ 1, 2, 3 \\}$, such that\n\\begin{equation*}\n|v_i' (h)| \\leq C_i\n\\end{equation*} \nfor a.e. $h \\in \\overline{J_i (r_i)} \\setminus (-(\\delta_i - r_i)\/2, \n(\\delta_i - r_i)\/2)$. \nFor $i \\in \\{ 1, 2, 3 \\}$, let $m_i$ be a modulus of $\\overline G_i$\non $\\overline{J_i (r_i)} \\setminus (-(\\delta_i - r_i)\/2, (\\delta_i -r_i)\/2) \\times [-C_i, C_i]$.\n\n\nFix any $i \\in \\{ 1, 2, 3 \\}$, $r \\in (0, r_i\/2)$, and $h \\in J_i \\setminus [-\\delta_i\/2, \\delta_i\/2]$, and compute that\n\\begin{align*}\n0 &\\geq \\lambda \\varphi_r \\ast v_i (h) + \\varphi_r \\ast \\overline G_i (\\cdot, v_i' (\\cdot )) (h) \\\\\n &= \\lambda v_i^r (h) + \\int_{h - r}^{h + r} \\varphi_r (h - s) \\overline G_i (s, v_i' (s)) \\, ds \\\\\n &\\geq \\lambda v_i^r (h) + \\int_{h - r}^{h + r} \\varphi_r (h - s) \\big( \\overline G_i (h, v_i' (s)) - m_i (r) \\big) \\, ds \\\\\n &\\geq \\lambda v_i^r (h) + \\overline G_i (h, \\varphi_r \\ast v_i' (h)) - m_i (r) \\\\\n &= \\lambda v_i^r (h) + \\overline G_i (h, (v_i^r)' (h)) - m_i (r).\n\\end{align*}\nHere we have used Jensen's inequality in the third inequality. \nMoreover we have\n\\begin{equation*}\n|v_i (h) - v_i^r (h)| \\leq \\int_{h - r}^{h + r} |v_i (h) - v_i (s)| \\varphi_r (h - s) \\, ds \\leq C_i r,\n\\end{equation*}\nand, hence, we get\n\\begin{equation*} \\label{eq: v_i-v_i^r}\n\\lambda v_i (h) + \\overline G_i (h, (v_i^r)' (h)) \\leq m_i (r) + \\lambda C_i r\n\\end{equation*}\nfor all $h \\in J_i \\setminus \\left[-\\delta_i\/2,\\, \\delta_i\/2 \\right]$, \n$r\\in(0,\\,r_i\/2)$, and $i \\in \\{ 1, 2, 3 \\}$. 
\n\n\nFix a small $r \\in (0, r_i\/2)$.\nFor $i \\in \\{ 1, 2, 3 \\}$, we define the function $g_i \\in C(\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2))$ by\n\\begin{equation*}\ng_i (x) = G(x, (v_i^r)' \\circ H(x)DH(x)),\n\\end{equation*}\nand choose $f_i \\in C^1 (\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2))$ so that\n\\begin{equation*}\n|g_i (x) - f_i (x)| < r \\ \\ \\ \\text{ for all } x \\in \\overline \\Omega_i \\setminus \\Omega_i \\left( \\frac{\\delta_i}{2} \\right).\n\\end{equation*}\n\n\nLet $\\tau_i$ are the functions defined in \\eqref{def-tau_i},\nand, for $i \\in \\{ 1, 2, 3 \\}$,\nlet $\\psi_i$ be the function on $\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)$ defined by\n\\begin{equation*}\n\\psi_i (x) = \\int_0^{\\tau_i (x)} \\Big( f_i (X(t, x)) - \\bar f_i (x) \\Big) \\, dt,\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\bar f_i (x) := \\frac{1}{T_i \\circ H(x)} \\int_0^{T_i \\circ H(x)} f_i (X(t, x)) \\, dt. \n\\end{equation*}\n\n\nRecalling that \n$X \\in C^1 (\\mathbb R \\times \\mathbb R^2)$, that \n$\\tau_i \\in C^1 \\left( \\left( \\overline \\Omega_i \\setminus c_i (0) \\right) \\setminus l_i \\right)$ and $T_i \\in C^1 (\\bar J_i \\setminus \\{ 0 \\})$ for all $i\\in\\{1,2,3\\}$,\nand that the definition of $\\tilde \\tau_i$,\nit is clear that $\\psi_i \\in C^1 (\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2))$ for all $i \\in \\{ 1, 2, 3 \\}$.\nBy using the dynamic programming principle,\nwe see that\n\\begin{equation*}\n-b(x) \\cdot D\\psi_i(x) = -f_i(x) + \\bar f_i(x) \n\\ \\ \\ \\text{ for all } \nx \\in \\overline \\Omega_i \\setminus \\Omega_i \\left( \\frac{\\delta_i}{2} \\right)\n\\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{equation*}\n\n\nFor $i \\in \\{ 1, 2, 3 \\}$, $\\varepsilon \\in (0, \\varepsilon_0)$,\nand $x \\in \\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)$, we set\n\\begin{equation*}\nw_i^\\varepsilon (x) = v_i \\circ H(x) - a + \\varepsilon \\psi_i (x), \n\\end{equation*}\nand observe that $w_i^\\varepsilon \\in \\mathrm{Lip} \\big( \\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2) \\big)$.\nFix any $i \\in \\{ 1, 2, 3 \\}$ and $\\varepsilon \\in (0, \\varepsilon_0)$,\nand compute that, for almost every $x \\in \\Omega_i \\setminus \\overline{\\Omega_i (\\delta_i\/2)}$,\n\\begin{align*}\n\\lambda &w_i^\\varepsilon (x) - \\frac{b (x) \\cdot Dw_i^\\varepsilon (x)}{\\varepsilon} + G(x, Dw_i^\\varepsilon (x)) \\\\\n &= \\lambda v_i \\circ H(x) - \\lambda a + \\varepsilon \\lambda \\psi_i (x) - b(x) \\cdot D\\psi_i (x) \\\\\n & \\quad + G\\big( x, v_i' \\circ H(x)DH(x) + \\varepsilon D\\psi_i (x) \\big) \\\\\n &= \\lambda v_i \\circ H(x) - \\lambda a + \\varepsilon \\lambda \\psi_i (x) - f_i (x) + \\bar f_i (x) \\\\ \n & \\quad + G\\big( x, v_i' \\circ H(x)DH(x) + \\varepsilon D\\psi_i (x) \\big) \\\\\n &< \\lambda v_i \\circ H(x) - \\lambda a + \\varepsilon \\lambda \\psi_i (x) - g_i (x) + r \\\\\n & \\quad + \\frac{1}{T_i \\circ H(x)} \\int_0^{T_i \\circ H(x)} \\Big( g_i (X(t, x)) + r \\Big) \\, dt + G(x, v_i' \\circ H(x)DH(x) + \\varepsilon D\\psi_i (x)) \\\\\n &\\leq - \\lambda a + \\varepsilon \\lambda \\psi_i (x) + m_i (r) + (2 + \\lambda C_i)r \\\\ \n & \\quad - G\\big( x, (v_i^r)' \\circ H(x)DH(x)\\big) + G\\big( x, v_i' \\circ H(x)DH(x) + \\varepsilon D\\psi_i (x)\\big),\n\\end{align*}\nfrom which,\nby replacing $r \\in (0, r_i)$ and $\\varepsilon_0 \\in (0, 1)$ by smaller numbers if necessary,\nwe may assume that if $\\varepsilon \\in (0, \\varepsilon_0)$, 
then,\n\\begin{equation} \\label{ineq: w_i^ep}\n\\lambda w_i^\\varepsilon (x) - \\frac{b (x) \\cdot Dw_i^\\varepsilon (x)}{\\varepsilon} + G(x, Dw_i^\\varepsilon (x)) \\leq 0,\n\\end{equation}\nfor a.e. $x \\in \\Omega_i \\setminus \\overline{\\Omega_i (\\delta_i\/2)}$ and for all $i \\in \\{ 1, 2, 3 \\}$. \n\nObserving that\n\\begin{align}\\label{W_0-bound1} \n\\lim_{\\varepsilon \\to 0} w_0^\\varepsilon (x) &= \\gamma - a \\ \\ \\ \\text{ uniformly for } x \\in \\overline{\\Omega (\\delta )}, \\\\ \\label{W_0-bound2}\n \\lim_{\\varepsilon \\to 0} w_i^\\varepsilon (x) &= v_i \\circ H(x) - a \\ \\ \\ \\text{ uniformly for } x \\in \\overline{\\Omega}_i \\setminus \\Omega_i (\\delta_i\/2 ),\n\\end{align}\nand\n\\begin{equation}\\label{W_0-bound3} \nv_i ((-1)^i \\delta ) > \\gamma \\ \\ \\ \\text{ and } \\ \\ \\ v_i \\left( \\frac{(-1)^i \\delta_i}{2} \\right) < \\gamma \\ \\ \\ \\text{ for all } i \\in \\{ 1, 2, 3 \\},\n\\end{equation}\nby replacing $\\varepsilon_0 \\in (0, 1)$ by a smaller number if necessary,\nwe may assume that if $\\varepsilon \\in (0, \\varepsilon_0)$,\nthen, for all $i \\in \\{ 1, 2, 3 \\}$,\n\\begin{equation*}\nw_i^\\varepsilon > w_0^\\varepsilon \\ \\ \\ \\text{ on } \\ c_i ((-1)^i \\delta ) \\ \\ \\ \\text{ and } \\ \\ \\ w_i^\\varepsilon < w_0^\\varepsilon \\ \\ \\ \\text{ on } c_i \\left(\\frac{(-1)^i \\delta_i}{2} \\right).\n\\end{equation*}\n\n\nFor $\\varepsilon \\in (0, \\varepsilon_0)$, we set\n\\begin{align*}\nw^\\varepsilon (x) = \n\\begin{cases}\nw_0^\\varepsilon (x) \\ \\ \\ &\\text{ if } x \\in \n\\overline{\\Omega_i (\\delta_i\/2 )} \\text{ and } i \\in \\{ 1, 2, 3 \\}, \\\\\nw_0^\\varepsilon (x) \\vee w_i^\\varepsilon (x) \\ \\ \\ &\\text{ if } x \\in \\Omega_i (\\delta ) \\setminus \\overline{\\Omega_i (\\delta_i\/2 )} \\text{ and } i \\in \\{ 1, 2, 3 \\}, \\\\\nw_i^\\varepsilon (x) \\ \\ \\ &\\text{ if } x \\in \\overline \\Omega_i \\setminus \\Omega_i (\\delta ) \\text{ and } i \\in \\{ 1, 2, 3 \\}.\n\\end{cases}\n\\end{align*}\nNoting that $w^\\varepsilon \\in \\mathrm{Lip} (\\overline \\Omega)$,\nby \\eqref{ineq: w_0^ep} and \\eqref{ineq: w_i^ep},\nwe have, for all $\\varepsilon \\in (0, \\varepsilon_0)$,\n\\begin{equation} \\label{subsol-w^ep} \nw^\\varepsilon - \\frac{b \\cdot Dw^\\varepsilon}{\\varepsilon} + G(x, Dw^\\varepsilon) \\leq 0 \\ \\ \\ \\text{ in } \\Omega,\n\\end{equation}\nin the viscosity sense.\n\n\nNow we set\n\\begin{equation*}\ng^\\varepsilon (x) = w^\\varepsilon (x) \\ \\ \\ \\text{ for } \\varepsilon \\in (0, \\varepsilon_0 ) \\text{ and } x \\in \\partial \\Omega.\n\\end{equation*}\n\n\nNow we intend to show that (G5) and (G6) hold. 
\nFor $i \\in \\{ 1, 2, 3 \\}$, $\\varepsilon \\in (0, \\varepsilon_0)$, and $x \\in \\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)$\nand for some constant $C > 0$, set\n\\begin{equation*}\nW_i^\\varepsilon (x) = w_i^\\varepsilon (x) + (-1)^i C (h_i - H(x)),\n\\end{equation*}\nand observe that $W_i^\\varepsilon \\in \\mathrm{Lip} \\big( \\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2) \\big)$,\n$W_i^\\varepsilon = g^\\varepsilon$ on $\\partial_i \\Omega$, and\n$W_i^\\varepsilon \\geq w_i^\\varepsilon$ on $\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)$.\nFix any $i \\in \\{ 1, 2, 3 \\}$ and $\\varepsilon \\in (0, \\varepsilon_0)$\nand compute that, for almost every $x \\in \\Omega_i \\setminus \\overline{\\Omega_i (\\delta_i\/2)}$,\n\\begin{align*}\n\\lambda &W_i^\\varepsilon (x) - \\frac{b (x) \\cdot DW_i^\\varepsilon (x)}{\\varepsilon} + G(x, DW_i^\\varepsilon (x)) \\\\\n &= \\lambda v_i \\circ H(x) - \\lambda a + \\varepsilon \\lambda \\psi_i (x) + (-1)^i \\lambda C (h_i - H(x)) - b(x) \\cdot D\\psi_i (x) \\\\\n & \\quad + G \\big( x, v_i' \\circ H(x)DH(x) + \\varepsilon D\\psi_i (x) - (-1)^i CDH(x) \\big),\n\\end{align*}\nfrom which, in view of the coercivity of $G$, \nby replacing $C > 0$, independently of $\\varepsilon \\in (0, \\, \\varepsilon_0)$, by a larger number if necessary, we conclude that\n\\begin{equation*}\n\\lambda W_i^\\varepsilon (x) - \\frac{b (x) \\cdot DW_i^\\varepsilon (x)}{\\varepsilon} + G(x, DW_i^\\varepsilon (x)) \\geq 0.\n\\end{equation*}\nWe note that\n\\begin{equation*}\n\\lim_{\\varepsilon \\to 0+} W_i^\\varepsilon (x) = W_i \\circ H (x) := v_i \\circ H (x) - a + (-1)^i C(h_i - H(x))\n\\end{equation*}\nuniformly for $x \\in \\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)$ and that $W_i \\in \\mathrm{Lip} \\big( \\bar J_i \\setminus (-\\delta_i\/2, \\, \\delta_i\/2) \\big)$ and $W_i (h_i) = d_i$.\n\n\nNext, set $M = \\max_{\\overline \\Omega} |G(x, 0)|$ and\n\\begin{equation*}\nW_0^\\varepsilon (x) = (M\/\\lambda) \\vee \\max_{\\overline \\Omega} w^\\varepsilon \\ \\ \\ \\text{ for } x \\in \\overline \\Omega \\text{ and } \\varepsilon \\in (0, \\varepsilon_0).\n\\end{equation*}\nIt is now easily seen that for all $\\varepsilon \\in (0, \\varepsilon_0)$ and $x \\in \\overline \\Omega$,\n$W_0^\\varepsilon (x) \\geq w^\\varepsilon (x)$ and \n\\begin{equation*}\n\\lambda W_0^\\varepsilon (x) - \\frac{b (x) \\cdot DW_0^\\varepsilon (x)}{\\varepsilon} + G(x, DW_0^\\varepsilon (x)) \\geq 0.\n\\end{equation*}\nMoreover, combining \\eqref{W_0-bound1}, \\eqref{W_0-bound2}, and \\eqref{W_0-bound3},\nwe see from the definition of $w^\\varepsilon$ that\n\\begin{equation*}\n\\lim_{\\varepsilon \\to 0+} W_0^\\varepsilon (x) = W_0 (x) := (M\/\\lambda) \\vee \\max_{i \\in \\{ 1, 2, 3 \\}} \\max_{\\overline \\Omega_i \\setminus \\Omega_i (\\delta_i\/2)} (v_i \\circ H - a)\n\\end{equation*}\nuniformly for $x \\in \\overline \\Omega$.\n\n\nNow, we note that\n\\begin{equation*}\nW_i^\\varepsilon (x) = w_i^\\varepsilon (x) = g^\\varepsilon (x) \\leq W_0^\\varepsilon (x)\n\\end{equation*}\nfor all $x \\in \\partial_i \\Omega$, $\\varepsilon \\in (0, \\varepsilon_0)$, and $i \\in \\{ 1, 2, 3 \\}$.\nBy replacing $C > 0$ by a larger number if necessary, we may assume that\n\\begin{equation*}\nW_i^\\varepsilon (x) > W_0^\\varepsilon (x)\n\\end{equation*}\nfor all $x \\in \\Omega_i (\\delta)$, $\\varepsilon \\in (0, \\varepsilon_0)$, and $i \\in \\{ 1, 2, 3 \\}$.\n\n\nFor $\\varepsilon \\in (0, \\varepsilon_0)$, we 
set\n\\begin{equation*}\nW^\\varepsilon (x) = W_0^\\varepsilon (x) \\wedge W_i^\\varepsilon (x) \\ \\ \\ \\text{ if } x \\in \\overline \\Omega_i \\text{ and } i \\in \\{ 1, 2, 3 \\},\n\\end{equation*}\nand observe that $W^\\varepsilon \\in \\mathrm{Lip} (\\overline \\Omega)$, $W^\\varepsilon$ is a viscosity supersolution of \\eqref{epHJ},\n$W^\\varepsilon = g^\\varepsilon$ on $\\partial \\Omega$, $W^\\varepsilon \\geq w^\\varepsilon$ on $\\overline \\Omega$, and \n\\begin{equation*}\n\\lim_{\\varepsilon \\to 0+} W^\\varepsilon (x) = W(x) := W_0 (x) \\wedge W_i \\circ H (x)\n\\end{equation*}\nuniformly for $x \\in \\overline \\Omega_i$ and for all $i \\in \\{ 1, 2, 3 \\}$.\nIt is obvious that $W_0 \\wedge W_i \\circ H \\in \\mathrm{Lip} (\\overline \\Omega_i)$\nand $W_0 \\wedge W_i \\circ H = d_i$ on $\\partial_i \\Omega$ for all $i \\in \\{ 1, 2, 3 \\}$.\n\n\nNow, by Perron's method and the comparison principle,\nthere exists a unique viscosity solution $u^\\varepsilon \\in C(\\overline \\Omega)$ of \\eqref{epHJ} satisfying $u^\\varepsilon = g^\\varepsilon$ on $\\partial \\Omega$\nsuch that \n\\begin{equation*}\nw^\\varepsilon \\leq u^\\varepsilon \\leq W^\\varepsilon \\ \\ \\ \\text{ on } \\overline \\Omega,\n\\end{equation*}\nand, hence, in view of Proposition \\ref{viscosity-sol}, (G5) holds.\nAlso, the inequality above yields, for all $i \\in \\{ 1, 2, 3 \\}$,\n\\begin{equation*}\nv_i \\circ H - a \\leq v^- \\leq v^+ \\leq W_i \\circ H\n\\end{equation*}\nin a neighborhood of $\\partial \\Omega_i$,\nand, therefore, for all $i \\in \\{ 1, 2, 3 \\}$ and $x \\in \\partial_i \\Omega$,\n\\begin{align*}\nd_i = \\lim_{\\Omega_i \\ni y \\to x} v_i \\circ H(y) - a &\\leq \\lim_{\\Omega_i \\ni y \\to x} v^- (y) \\\\\n &\\leq \\lim_{\\Omega_i \\ni y \\to x} v^+ (y) \\leq \\lim_{\\Omega_i \\ni y \\to x} W_i \\circ H (y) = d_i.\n\\end{align*}\nThis implies (G6). The proof is complete.\n\n\n\n\n\\end{proof}\n\n\n\n\n\\subsection*{Acknowledgments}\n\n\nThe author would like to thank Prof. Hitoshi Ishii\nfor many helpful comments and discussions about\nthe singular perturbation problem of Hamilton-Jacobi equations treated here.\nThe author would like to thank the anonymous referee for careful reading, \nvaluable comments and pointing out several errors.\n\n\n\\begin{bibdiv}\n\\begin{biblist}\n\\bib{ACCT}{article}{\n author={Achdou, Yves},\n author={Camilli, Fabio},\n author={Cutr{\\`{\\i}}, Alessandra},\n author={Tchou, Nicoletta},\n title={Hamilton-Jacobi equations constrained on networks},\n journal={NoDEA Nonlinear Differential Equations Appl.},\n volume={20},\n date={2013},\n number={3},\n pages={413--445},\n issn={1021-9722},\n review={\\MR{3057137}},\n doi={10.1007\/s00030-012-0158-1},\n}\n\n\n\\bib{AT}{article}{\n author={Achdou, Yves},\n author={Tchou, Nicoletta},\n title={Hamilton-Jacobi equations on networks as limits of singularly\n perturbed problems in optimal control: dimension reduction},\n journal={Comm. 
Partial Differential Equations},\n volume={40},\n date={2015},\n number={4},\n pages={652--693},\n issn={0360-5302},\n review={\\MR{3299352}},\n doi={10.1080\/03605302.2014.974764},\n}\n\n\n\\bib{BCD}{book}{\n author={Bardi, Martino},\n author={Capuzzo-Dolcetta, Italo},\n title={Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman\n equations},\n series={Systems \\& Control: Foundations \\& Applications},\n note={With appendices by Maurizio Falcone and Pierpaolo Soravia},\n publisher={Birkh\\\"auser Boston, Inc., Boston, MA},\n date={1997},\n pages={xviii+570},\n isbn={0-8176-3640-4},\n review={\\MR{1484411 (99e:49001)}},\n doi={10.1007\/978-0-8176-4755-1},\n}\n\n\n\\bib{B}{book}{\n author={Barles, Guy},\n title={Solutions de viscosit\\'e des \\'equations de Hamilton-Jacobi},\n language={French, with French summary},\n series={Math\\'ematiques \\& Applications (Berlin) [Mathematics \\&\n Applications]},\n volume={17},\n publisher={Springer-Verlag, Paris},\n date={1994},\n pages={x+194},\n isbn={3-540-58422-6},\n review={\\MR{1613876 (2000b:49054)}},\n}\n\n\n\\bib{CIL}{article}{\n author={Crandall, Michael G.},\n author={Ishii, Hitoshi},\n author={Lions, Pierre-Louis},\n title={User's guide to viscosity solutions of second order partial\n differential equations},\n journal={Bull. Amer. Math. Soc. (N.S.)},\n volume={27},\n date={1992},\n number={1},\n pages={1--67},\n issn={0273-0979},\n review={\\MR{1118699 (92j:35050)}},\n doi={10.1090\/S0273-0979-1992-00266-5},\n}\n\n\n\\bib{E}{article}{\n author={Evans, Lawrence C.},\n title={The perturbed test function method for viscosity solutions of\n nonlinear PDE},\n journal={Proc. Roy. Soc. Edinburgh Sect. A},\n volume={111},\n date={1989},\n number={3-4},\n pages={359--375},\n issn={0308-2105},\n review={\\MR{1007533 (91c:35017)}},\n doi={10.1017\/S0308210500018631},\n}\n\n\n\\bib{FW}{article}{\n author={Freidlin, Mark I.},\n author={Wentzell, Alexander D.},\n title={Random perturbations of Hamiltonian systems},\n journal={Mem. Amer. Math. Soc.},\n volume={109},\n date={1994},\n number={523},\n pages={viii+82},\n issn={0065-9266},\n review={\\MR{1201269 (94j:35064)}},\n doi={10.1090\/memo\/0523},\n}\n\n\n\\bib{IMZ}{article}{\n author={Imbert, Cyril},\n author={Monneau, R{\\'e}gis},\n author={Zidani, Hasnaa},\n title={A Hamilton-Jacobi approach to junction problems and application to\n traffic flows},\n journal={ESAIM Control Optim. Calc. Var.},\n volume={19},\n date={2013},\n number={1},\n pages={129--166},\n issn={1292-8119},\n review={\\MR{3023064}},\n doi={10.1051\/cocv\/2012002},\n}\n\n\\bib{I}{article}{\n author={Ishii, Hitoshi},\n title={A boundary value problem of the Dirichlet type for Hamilton-Jacobi\n equations},\n journal={Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4)},\n volume={16},\n date={1989},\n number={1},\n pages={105--135},\n issn={0391-173X},\n review={\\MR{1056130}},\n}\n\n\n\\bib{IS}{article}{\n author={Ishii, Hitoshi},\n author={Souganidis, Panagiotis E.},\n title={A pde approach to small stochastic perturbations of Hamiltonian\n flows},\n journal={J. 
Differential Equations},\n volume={252},\n date={2012},\n number={2},\n pages={1748--1775},\n issn={0022-0396},\n review={\\MR{2853559}},\n doi={10.1016\/j.jde.2011.08.036},\n}\n\n\n\\bib{L}{book}{\n author={Lions, Pierre-Louis},\n title={Generalized solutions of Hamilton-Jacobi equations},\n series={Research Notes in Mathematics},\n volume={69},\n publisher={Pitman (Advanced Publishing Program), Boston, Mass.-London},\n date={1982},\n pages={iv+317},\n isbn={0-273-08556-5},\n review={\\MR{667669 (84a:49038)}},\n}\n\n\n\\bib{S}{article}{\n author={Sowers, Richard B.},\n title={Stochastic averaging near a homoclinic orbit with multiplicative\n noise},\n journal={Stoch. Dyn.},\n volume={3},\n date={2003},\n number={3},\n pages={299--391},\n issn={0219-4937},\n review={\\MR{2017030}},\n doi={10.1142\/S0219493703000759},\n} \n\\end{biblist}\n\\end{bibdiv}\n\n\n\n\n\n(T. Kumagai) \nDepartment of Pure and Applied Mathematics, \nGraduate School of Fundamental Science and Engineering, Waseda University, Shinjuku, Tokyo 69-8050 Japan\n\n\nE-mail: kumatai13@gmail.com\n\n\n\n\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\subsection{One-variable setting.} Let $\\theta$ be an inner function on the Hardy space $H^2(\\mathbb{D})$ and let $\\mathcal{K}_{\\theta} := H^2(\\mathbb{D}) \\ominus \\theta H^2(\\mathbb{D})$ be its model space. The associated compressions of the shift $S_\\theta := P_{\\theta} M_z|_{\\mathcal{K}_{\\theta}}$ (multiplication by $z$ followed by the orthogonal projection onto $\\mathcal{K}_{\\theta}$) have played a pivotal role in both operator and function theory. Indeed, allowing $\\theta$ to be operator valued, the famous Sz.-Nagy--Foias model theory says: every completely nonunitary, $C_0$ contraction is unitarily equivalent to a compression of the shift $S_\\theta$ on a model space $\\mathcal{K}_{\\theta}$ \\cite{Snf10}. \n\nIf the inner function is a finite Blaschke product $B$, i.e.\n\\[ B(z) = \\prod_{i=1}^m \\frac{ z-\\alpha_i}{1-\\bar{\\alpha}_iz}, \\qquad \\text{ where } \\alpha_1, \\dots, \\alpha_m \\in \\mathbb{D},\\]\nthen the associated compression of the shift $S_B$ is quite well behaved. Indeed, the matrix of $S_{B}$ with respect to a basis called the Takenaka-Malmquist-Walsh basis $\\{f_1, \\dots, f_m\\}$, see \\cite{gr15}, is the upper triangular matrix $M_B$ given entry-wise by\n \\begin{equation} \\label{eqn:onevar}\n (M_{B})_{ij}:= \\left \\langle S_B f_j, f_i\\right \\rangle_{\\mathcal{K}_{\\theta}} = \\left \\{ \\begin{array}{cc}\n \\alpha_i & \\text{ if } i=j; \\\\\n \\prod_{k=i+1}^{j-1} (-\\overline{\\alpha_k})(1-|\\alpha_i|^2)^{1\/2} (1-|\\alpha_j|^2)^{1\/2} &\\text{ if } i j.\n \\end{array}\n \\right.\n \\end{equation}\nFor this matrix, see the survey \\cite[pp.~180]{gw03}. Formula \\eqref{eqn:onevar} allows one to answer many natural questions about the structure of $S_{\\theta}.$ Answers concerning the numerical range and radius are particularly nice. Namely, if $T: \\mathcal{H} \\rightarrow \\mathcal{H}$ is a bounded operator on a Hilbert space $\\mathcal{H}$, then the \\emph{numerical range} of $T$ is the set \n\\[ \\mathcal{W}(T) := \\left \\{ \\langle T h, h \\rangle_{\\mathcal{H}} : \\| h\\|_{\\mathcal{H}} = 1 \\right \\}\\]\nand the \\emph{numerical radius} of $T$ is the number \n\\[ w(T) := \\sup \\left\\{ |\\lambda | : \\lambda \\in \\mathcal{W}(T) \\right\\}.\\]\nDiscussion of these sets for compressed shifts associated to finite Blaschke products requires some geometry. 
Recall that Poncelet's closure theorem says:~given two ellipses with one contained in the other, if there is an $N$-sided polygon circumscribing the smaller ellipse that has all of its vertices on the larger ellipse, then for every $\\lambda$ on the larger ellipse there is such an $N$-sided circumscribing polygon with a vertex at $\\lambda$, see \\cite[Section $5$]{gw03}. Similarly, for $N \\ge 3$, we say a curve $\\Gamma \\subset \\mathbb{D}$ satisfies the \\emph{$N$-Poncelet property} if for \n each point $\\lambda \\in \\partial \\mathbb{D}=\\mathbb{T},$ there is an $N$-sided polygon circumscribing $\\Gamma$ with one vertex at $\\lambda$ and all other vertices on $\\mathbb{T}$, see \\cite[pp. 182]{gw03}.\n \n Surprisingly, Poncelet curves have close ties to numerical ranges. Indeed, let $B$ be a finite Blaschke product of degree $m>1$. Then, as shown by Mirman \\cite{m98} and Gau and Wu \\cite{gw98}, the boundary $\\partial \\mathcal{W}(S_B)$ actually possesses the $(m+1)$-Poncelet property. The idea behind the proof is quite intuitive; the inscribing polygons are in one-to-one correspondence with the unitary $1$-dilations of $S_B$, which are obtained from \\eqref{eqn:onevar}. Moreover the vertices of the polygons are exactly the eigenvalues of the unitary $1$-dilations, and because $\\partial \\mathcal{W}(S_B)$ is strictly contained in $\\mathbb{D}$, the numerical radius $w(S_B)$ is always strictly less than $1$. For a detailed exploration of Poncelet ellipses for $B$ a degree-$3$ Blaschke product, see \\cite{DGM}, and for similar results concerning infinite Blaschke products, see \\cite{cgp09}. \n \n In what follows, we study these and other geometric properties of numerical ranges and radii of compressions of shifts on the bidisk $\\mathbb{D}^2$. \n\n\\subsection{Two-variable setting} For the two-variable case, let $\\Theta$ be an inner function on $\\mathbb{D}^2$, namely a function holomorphic on $\\mathbb{D}^2$ whose boundary values satisfy $|\\Theta(\\tau)|=1$ for almost every $\\tau \\in \\mathbb{T}^2$. Then let $\\mathcal{K}_{\\Theta}$ be the associated two-variable model space defined by\n\\[ \n\\mathcal{K}_{\\Theta} := H^2(\\mathbb{D}^2) \\ominus \\Theta H^2(\\mathbb{D}^2)= \\mathcal{H}\\left( \\frac{1 - \\Theta(z) \\overline{\\Theta(w)}}{(1-z_1\\bar{w}_1)(1-z_2\\bar{w}_2)} \\right),\\]\nwhere $\\mathcal{H}(K)$ denotes the reproducing kernel Hilbert space with reproducing kernel $K$. In this paper, we use $\\Theta$ to denote two-variable inner functions and $\\theta$ for simpler, often one-variable inner functions. In this setting, one natural compression of the shift is the operator\n\\[ \\widetilde{S}^1_{\\Theta} := P_{\\Theta} M_{z_1}|_{ \\mathcal{K}_{\\Theta}},\\] \nwhere $P_{\\Theta}$ denotes the orthogonal projection of $H^2(\\mathbb{D}^2)$ onto $\\mathcal{K}_{\\Theta}$ and $M_{z_1}$ is multiplication by $z_1$. Although we explicitly study $ \\widetilde{S}^1_{\\Theta}$, symmetric results will hold for a similarly-defined $\\widetilde{S}_{\\Theta}^2.$ \n\nAs in the one-variable discussion, we restrict attention to $\\Theta$ that are both rational and inner. Section \\ref{sec:rational} includes most needed details about rational inner functions, but discussing our main results will require some notation. First, the degree of $\\Theta$, denoted $\\deg \\Theta = (m,n),$ is defined as follows:~write $\\Theta=\\frac{q}{p}$ with $p$ and $q$ polynomials with no common factors. 
Then $m$ is the highest degree of $z_1$ and $n$ the highest degree of $z_2$ appearing in either $p$ or $q$. Moreover, if $\\Theta$ is rational inner with $\\deg \\Theta = (m,n)$, then there is an (almost) unique polynomial $p$ with no zeros on $\\mathbb{D}^2$ such that \n$\\Theta = \\frac{\\tilde{p}}{p},$\nwhere $\\tilde{p}(z) := z_1^m z_2^n \\overline{p( \\frac{1}{\\bar{z}_1}, \\frac{1}{\\bar{z}_2})}$ and $p$ and $\\tilde{p}$ share no common factors. See \\cite{ams06, Rud69} for details.\n\nOur goal is to study the numerical range of a general compression of the shift $ \\widetilde{S}^1_{\\Theta}$ associated to a rational inner function $\\Theta$. Unfortunately, the question\n\\begin{center} ``What are the properties of $\\mathcal{W}( \\widetilde{S}^1_{\\Theta})$?'' \\end{center}\noften has a trivial answer. To observe the problem, one can decompose $\\mathcal{K}_{\\Theta}$ as \n\\begin{equation} \\label{eqn:sub} \\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2,\\end{equation}\nwhere $\\mathcal{S}_1$ and $\\mathcal{S}_2$ are respectively $M_{z_1}$- and $M_{z_2}$-invariant. There are canonical ways to obtain such decompositions, and details are provided in \nSection \\ref{sec:rational}. If $\\mathcal{S}_1$ is nontrivial, then $ \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1} = M_{z_1}|_{\\mathcal{S}_1}$ and one can further show that\n\\[ \\text{Clos} \\left \\{ \\left \\langle \\widetilde{S}^1_{\\Theta} f, f \\right \\rangle_{\\mathcal{K}_{\\Theta}} : \\| f\\|_{\\mathcal{K}_{\\Theta}} = 1, f \\in \\mathcal{S}_1 \\right \\} = \\overline{\\mathbb{D}}.\\]\nThen since $ \\widetilde{S}^1_{\\Theta}$ is a contraction, we can conclude that $\\text{Clos}( \\mathcal{W}( \\widetilde{S}^1_{\\Theta}))$ equals $\\overline{\\mathbb{D}};$ see Lemma \\ref{lem:range} for details. Because of this, we compress $ \\widetilde{S}^1_{\\Theta}$ to the $M_{z_2}$-invariant subspace $\\mathcal{S}_2$ from \\eqref{eqn:sub} and study this more interesting compression of the shift:\n\\begin{equation} \\label{eqn:compression} S^1_{\\Theta} : = P_{\\mathcal{S}_2} \\widetilde{S}^1_{\\Theta} |_{\\mathcal{S}_2} = P_{\\mathcal{S}_2} M_{z_1} |_{\\mathcal{S}_2}.\\end{equation}\n\n\n\\subsection{Outline and Main Results}\nThis paper studies the structure of the compression of the shift $ S^1_{\\Theta}$ defined in \\eqref{eqn:compression} and the geometry of its numerical range. It is outlined as follows: in Section \\ref{sec:rational}, we detail needed results about rational inner functions and their model spaces on the bidisk. In Section \\ref{sec:structure}, we obtain most of our structural results about $ S^1_{\\Theta}$ and its numerical range, while in Section \\ref{sec:examples}, we illustrate the results from Section \\ref{sec:structure} with examples. In Sections \\ref{Zero} and \\ref{sec:boundary}, we study the geometry of the numerical ranges $\\mathcal{W}( S^1_{\\Theta})$ associated to simple rational inner functions; Section \\ref{Zero} addresses the zero inclusion question, and Section \\ref{sec:boundary} examines the shape of the boundary of the numerical range.\n\nBefore stating our main results, we require the following notation: $H^2_2(\\mathbb{D})$ denotes the one-variable Hardy space with independent variable $z_2$ and $H^2_2(\\mathbb{D})^m : = \\bigoplus_{i=1}^m H^2_2(\\mathbb{D})$ denotes the space of vector-valued functions $\\vec{f} = (f_1, \\dots, f_m)$ with each $f_i \\in H^2_2(\\mathbb{D})$. 
Define $L_2^2(\\mathbb{T})^m$ analogously, and let $F$ be a bounded $m\\times m$ matrix-valued function defined for almost every $ z_2 \\in \\mathbb{T}$. Then the \\emph{$z_2$-matrix-valued Toeplitz operator with symbol F} is the operator\n\\begin{equation} \\label{eqn:Toeplitz}\nT_F : H^2_2(\\mathbb{D})^m \\rightarrow H^2_2(\\mathbb{D})^m \\ \\ \\text{ defined by } \\ \\ T_F \\vec{f} = P_{H_2^2(\\mathbb{D})^m} \\big( F \\vec{f} \\ \\big),\n\\end{equation}\nwhere $P_{H_2^2(\\mathbb{D})^m} $ is the orthogonal projection of $L_2^2(\\mathbb{T})^m$ onto $H_2^2(\\mathbb{D})^m$.\n\nThen, in Section \\ref{sec:structure}, we show that each $ S^1_{\\Theta}$ is unitarily equivalent to a $z_2$-matrix-valued Toeplitz operator with a well-behaved symbol as follows:\n\n\\medskip\n\\noindent\n{\\bf Theorem~\\ref{thm:unitary}.} {\\it Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$ and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. Then there exists an $m\\times m$ matrix-valued function $M_{\\Theta}$, with entries that are rational functions of $\\bar{z}_2$ and continuous on $\\overline{\\mathbb{D}},$ such that\n\\[ S^1_{\\Theta} = \\mathcal{U} \\ T_{M_{\\Theta}} \\ \\mathcal{U}^*, \\]\nwhere $\\mathcal{U}: H^2_2(\\mathbb{D})^m \\rightarrow \\mathcal{S}_{2}$ is a unitary operator defined in \\eqref{eqn:unitary}.}\n\\medskip\n\nOne can view Theorem \\ref{thm:unitary} as a generalization of the formula \\eqref{eqn:onevar} for the matrix of a compressed shift associated to a Blaschke product. As in the one-variable setting, this structural result gives information about the numerical range of $ S^1_{\\Theta},$ namely:\n\n\\medskip\n\\noindent\n{\\bf Corollary~\\ref{thm:nr}.} {\\it Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$, let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}, and let $M_{\\Theta}$ be as in Theorem \\ref{thm:unitary}. Then\n\\[ \\text{Clos}\\left(\\mathcal{W}\\left(S^1_{\\Theta} \\right)\\right) = \\text{Conv} \\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}\\left(M_{\\Theta}(\\tau) \\right) \\Big).\\]}\n\nHere ``Clos'' denotes the closure and ``Conv'' denotes the convex hull of the given sets. Then Corollary \\ref{thm:nr} says that $\\mathcal{W}( S^1_{\\Theta} )$ is built out of numerical ranges of specific $m \\times m$ matrices. We also connect $\\mathcal{W}( S^1_{\\Theta} )$ to the numerical ranges of compressed shifts associated to degree-$m$ Blaschke products, see Theorem \\ref{thm:generalnr}. This result is particularly important because it links the rich one-variable theory to this two-variable setting. For example, it implies that Clos($\\mathcal{W}( S^1_{\\Theta} )$) is the closed convex hull of a union of sets whose boundaries satisfy the $(m+1)$-Poncelet property. Amongst other results, we also combine Theorem \\ref{thm:generalnr} with one-variable facts to characterize when the numerical radius is maximal:\n\n\\medskip\n\\noindent\n{\\bf Theorem~\\ref{thm:numericalradius}.} {\\it Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$ and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. Then the numerical radius $w\\big( S^1_{\\Theta} \\big) =1$ if and only if $\\Theta$ has a singularity on $\\mathbb{T}^2$.}\n\\medskip\n\nThis theorem shows that certain one-variable properties do not (in general) hold in this two-variable setting. 
Indeed, as $N$-Poncelet sets cannot touch $\\mathbb{T}$, this implies that if $\\Theta$ has a singularity on $\\mathbb{T}^2$, then the boundary of $\\mathcal{W}( S^1_{\\Theta} )$ does not satisfy an $N$-Poncelet property.\n\nIn Section \\ref{sec:examples}, we illustrate these theorems with examples. We consider $\\Theta := \\prod_{i=1}^m \\theta_i,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function with a singularity on $\\mathbb{T}^2.$ Specifically, we decompose the associated $\\mathcal{K}_{\\Theta}$ into concrete $M_{z_1}$- and $M_{z_2}$-invariant subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$, find an orthonormal basis of $\\mathcal{H}(K_2) := \\mathcal{S}_2 \\ominus M_{z_2} \\mathcal{S}_2$, and use that to compute explicitly the matrix-valued function $M_{\\Theta}$ from Theorem \\ref{thm:unitary}. Proposition \\ref{prop:productform} contains the decomposition of $\\mathcal{K}_{\\Theta}$ and the orthonormal basis of $\\mathcal{H}(K_2)$, while Theorem \\ref{thm:nr1} contains the formula for $M_{\\Theta}.$\n\nIn Section~\\ref{Zero}, we restrict attention to $\\Theta = \\theta_1 \\theta_2,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function with a singularity on $\\mathbb{T}^2.$ For these $\\Theta,$ Theorem \\ref{thm:nr1} gives a formula for $M_{\\Theta}$, which shows that $\\mathcal{W}( S^1_{\\Theta})$ is basically the convex hull of an infinite union of ellipses with specific foci and axes. This information allows us to study the geometry of these numerical ranges and, in particular, investigate the classical problem:\n\n\\begin{center} ``When is zero in the numerical range $\\mathcal{W}( S^1_{\\Theta})$?''\\end{center} \n\nAn answer to the zero inclusion question often yields useful information. For example, the numerical range of a compact operator $T$ is closed if and only if $0 \\in \\mathcal{W}(T)$ \\cite{BGS}. Bourdon and Shapiro \\cite{BS} studied the zero inclusion question for composition operators, showing, among other things, that the numerical range of a composition operator other than the identity always contains zero in the closure of the numerical range. More recently, Higdon \\cite{HIGDON} showed that if $\\varphi$ is a holomorphic self-map of $\\mathbb{D}$ with Denjoy-Wolff point on the unit circle that is not a linear fractional transformation, then zero is an interior point of the numerical range of the composition operator $\\mathcal{C}_\\varphi$. \n\nIn our setting, we obtain several results related to the zero inclusion question for $\\mathcal{W}( S^1_{\\Theta} )$. First, in Proposition~\\ref{lem:general}, we obtain two conditions guaranteeing that zero is in this numerical range; these conditions involve the foci of the elliptical disks comprising $\\mathcal{W}( S^1_{\\Theta} )$. We then impose additional restrictions on the coefficients of the rational inner function. Under these restrictions, in Proposition~\\ref{lem:positive_eigenvalues}, we obtain necessary and sufficient conditions both for zero to be in the interior and for zero to be on the boundary of the numerical range.\n\nIn Section \\ref{sec:boundary}, we further study the shape of the numerical range $\\mathcal{W}( S^1_{\\Theta} )$. Due to the complexity of the computations, we only consider rational inner functions of the form $\\Theta = \\theta^2_1$, where $\\theta_1 = \\frac{\\tilde{p}}{p}$ for a polynomial $p(z)=a -z_1 + c z_2$ with no zeros on $\\mathbb{D}^2$, a zero on $\\mathbb{T}^2$, and $a, c >0$. 
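For instance, one admissible choice of such a polynomial, recorded here purely for illustration, is $p(z) = 2 - z_1 + z_2$; in this case $a = 2$, $c = 1$, the only zero of $p$ on $\\overline{\\mathbb{D}^2}$ is the boundary point $(1,-1) \\in \\mathbb{T}^2$, and $\\theta_1 = \\frac{2z_1z_2 + z_1 - z_2}{2 - z_1 + z_2}$. 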
We initially consider the question: \n\\begin{center} ``When is the numerical range $\\mathcal{W}( S^1_{\\Theta})$ circular?''\\end{center}\n\nFor more general operators, this question has a long and interesting history. For example, Anderson showed that if an $m \\times m$ matrix $A$ has the property that $\\mathcal{W}(A)$ is contained in $\\overline{\\mathbb{D}}$ and there are more than $m$ points with modulus $1$ in the numerical range, then $\\mathcal{W}(A) = \\overline{\\mathbb{D}}$ and zero is an eigenvalue of $A$ of multiplicity at least $2$. In \\cite{Wu}, Wu extends these results. \n\nWe show that for our restricted class of rational inner functions, which seem to be the ones most likely to produce a circular numerical range, $\\mathcal{W}( S^1_{\\Theta})$ is never circular. We then interpret the union of circles comprising $\\mathcal{W}( S^1_{\\Theta})$ as a family of curves. Using the theory of envelopes, we are able to obtain a precise description of the boundary of the numerical range. The exact parameterization is given in Theorem \\ref{thm:boundary}. We refer the reader to \\cite{Wu} for more information and other references about this question.\n\n\\subsection*{Acknowledgements} The authors gratefully acknowledge Institut Mittag-Leffler, where this work was initiated.\n The authors would also like to thank Elias Wegert for sharing a simple method for computing the envelope of a family of curves. \n\\section{Rational Inner Functions \\& Model Spaces} \\label{sec:rational}\n\nLet $\\Theta$ be a rational inner function on $\\mathbb{D}^2$ with $\\deg \\Theta = (m,n)$. As mentioned earlier, there is a basically unique polynomial $p$ with no zeros on $\\mathbb{D}^2$ such that $\\Theta = \\frac{\\tilde{p}}{p}$, where $\\tilde{p}(z) = z_1^m z_2^n \\overline{p( \\frac{1}{\\bar{z}_1}, \\frac{1}{\\bar{z}_2})}$ and $\\tilde{p}, p$ have no common factors.\n\nAn application of B\\'ezout's Theorem implies that $p, \\tilde{p}$ have at most $2mn$ common zeros, including intersection multiplicity, and moreover, they will have exactly $2mn$ common zeros if $\\deg p = \\deg \\tilde{p}.$ Moreover, one can easily check that $p$ and $\\tilde{p}$ have the same zeros on $\\mathbb{T}^2$. Then as common zeros of $p$ and $\\tilde{p}$ on $\\mathbb{T}^2$ have even intersection multiplicity, $p$ can vanish at no more than $mn$ points on $\\mathbb{T}^2$. For further details and proofs of these comments, see \\cite{k14}. Then, an application of Theorem $4.9.1$ in \\cite{Rud69} implies that $p$ also has no zeros on $(\\mathbb{D} \\times \\mathbb{T}) \\cup (\\mathbb{T} \\times \\mathbb{D})$. \n\nIf $\\Theta$ is an inner function (not necessarily rational), the structure of the model space $\\mathcal{K}_{\\Theta}$ is also quite interesting. As mentioned earlier, there are canonical ways to decompose every nontrivial $\\mathcal{K}_{\\Theta}$ into subspaces that are $M_{z_1}$- and $M_{z_2}$-invariant, or equivalently, $z_1$- and $z_2$-invariant, as in \\eqref{eqn:sub}. For example, as discussed in \\cite{bsv05,bk13}, if you set $\\mathcal{S}_1^{max}$ to be the maximal subspace of $\\mathcal{K}_{\\Theta}$ invariant under $M_{z_1}$, then $\\mathcal{S}^{max}_1$ is clearly $z_1$-invariant and $\\mathcal{S}_2^{min} := \\mathcal{K}_{\\Theta} \\ominus \\mathcal{S}^{max}_1$ is $z_2$-invariant. 
One can similarly define $\\mathcal{S}_2^{max}$ and $\\mathcal{S}_1^{min}.$\n\nGiven any such subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$ with $\\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2$ and each $\\mathcal{S}_j$ $z_j$-invariant, it makes sense to define reproducing kernels $K_1$, $K_2: \\mathbb{D}^2 \\times \\mathbb{D}^2 \\rightarrow \\mathbb{C}$ by\n\\begin{equation} \\label{eqn:kernel}\n\\mathcal{H}(K_1) = \\mathcal{S}_1 \\ominus z_1 \\mathcal{S}_1 \\ \\ \\text{ and } \\ \\\n\\mathcal{H}(K_2) = \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2.\n\\end{equation}\nThe resulting pair of kernels $(K_1, K_2)$ is called a pair of \\emph{Agler kernels} of $\\Theta$ because the kernels satisfy the equation\n\\begin{equation} \\label{eqn:ad} 1-\\Theta(z) \\overline{\\Theta(w)} = (1-z_1 \\bar{w}_1)K_2(z,w) + (1-z_2 \\overline{w}_2) K_1(z,w),\\end{equation}\nfor all $z,w \\in \\mathbb{D}^2.$ Indeed, any positive semidefinite kernels $(K_1, K_2)$ satisfying \\eqref{eqn:ad} are called \\emph{Agler kernels of $\\Theta$} and the equation \\eqref{eqn:ad} is called an \n\\emph{Agler decomposition of $\\Theta.$} The existence of Agler decompositions was first proved by Agler in \\cite{ag90}.\n \nIf $\\Theta$ is rational inner, there are close connections between the properties of $\\Theta$ and the structure of the Hilbert spaces $\\mathcal{H}(K_1)$ and $\\mathcal{H}(K_2).$ The following result appears in \\cite{k11} and follows by an examination of the degrees and singularities of the functions in \\eqref{eqn:ad}:\n\n\\begin{theorem} \\label{thm:dim} Let $\\Theta = \\frac{ \\tilde{p}}{p}$ be a rational inner function of degree $(m,n)$ and let $K_1, K_2$ be Agler kernels of $\\Theta$ as in \\eqref{eqn:ad}. Then $\\dim \\mathcal{H}(K_1)$, $\\dim \\mathcal{H}(K_2)$ are both finite. Moreover, if $g$ is a function in $\\mathcal{H}(K_1),$ then $g = \\frac{r}{p}$ where $\\deg r \\le (m,n-1)$ and if $f$ is a function in $\\mathcal{H}(K_2)$, then $f = \\frac{q}{p}$ where $\\deg q \\le (m-1,n).$\n\\end{theorem}\n\nDefine the following exceptional set\n\\begin{equation} \\label{eqn:exceptional} E_{\\Theta} := \\Big\\{ \\tau \\in \\mathbb{T}: \\exists \\ \\tau_1 \\in \\mathbb{T} \\text{ such that } p(\\tau_1, \\tau)=0 \\Big\\}. \\end{equation}\nBy the above comments about $\\Theta$, the set $E_{\\Theta}$ is necessarily finite. For $\\tau \\in \\mathbb{T}$, define the slice function $\\Theta_{\\tau}$ by $\\Theta_{\\tau} \\equiv \\Theta(\\cdot, \\tau).$ Then $\\Theta_{\\tau}$ is a finite Blaschke product and in what follows, $\\mathcal{K}_{\\Theta_{\\tau}}$ will denote the one-variable model space associated to $\\Theta_{\\tau}.$\n\nThe following result is proved for Hilbert spaces arising from canonical decompositions of $\\mathcal{K}_{\\Theta}$ in \\cite{bk13, w10}. Specifically, see Theorems 1.6-1.8 in \\cite{bk13} as well as Proposition $2.5$ in \\cite{w10}. Here, we include the proof for more general decompositions of $\\mathcal{K}_{\\Theta}$, which basically mirrors the ideas appearing in \\cite{bk13}. \n\n\\begin{theorem} \\label{thm:rational} Let $\\Theta = \\frac{ \\tilde{p}}{p}$ be a rational inner function of degree $(m,n)$ and let $K_1, K_2$ be defined as in \\eqref{eqn:kernel}. Then for any $\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}$, $\\Theta_\\tau$ is a Blaschke product with $\\deg \\Theta_ {\\tau}=m$ and the restriction map $\\mathcal{J}_{\\tau}: \\mathcal{H}(K_2) \\rightarrow \\mathcal{K}_{\\Theta_{\\tau}}$ defined by $\\mathcal{J}_{\\tau} f = f(\\cdot, \\tau)$ is unitary. 
Furthermore, $\\dim \\mathcal{H}(K_2) =m.$ The analogous statements hold for $\\mathcal{H}(K_1).$\n \\end{theorem}\n\n\\begin{proof} By Theorem \\ref{thm:dim}, $\\dim \\mathcal{H}(K_2)=M$ for some $M \\in \\mathbb{N}.$ We will later conclude that $M=m$. Let $\\{f_1, \\dots, f_M\\}$ be an orthonormal basis of $\\mathcal{H}(K_2)$. Then by \\cite[Proposition $2.18$]{am01}, we have $K_2(z,w) = \\sum_{i=1}^M f_i(z) \\overline{f_i(w)}.$\n\n Fix $\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}$. Then $\\Theta_{\\tau}$ is a one-variable rational inner function and thus, is a Blaschke product with $\\deg \\Theta_{\\tau} \\le m.$ Further, as $p$ has no zeros on $\\mathbb{D}\\times \\mathbb{T}$, one can show that $\\deg \\tilde{p}(\\cdot, \\tau) =m.$ Since $p(\\cdot, \\tau)$ also has no zeros on $\\mathbb{T}$, no polynomials cancel in the fraction $\\Theta_{\\tau}=\\frac{\\tilde{p}(\\cdot, \\tau)}{p(\\cdot, \\tau)}$. This implies $\\deg \\Theta_{\\tau} =m$ and $\\dim \\mathcal{K}_{\\Theta_{\\tau}} = m$. Now, letting $z_2, w_2 \\rightarrow \\tau$ in \\eqref{eqn:ad} and dividing by $1-z_1\\overline{w_1}$ gives\n\\[ \\frac{1-\\Theta_{\\tau}(z_1) \\overline{\\Theta_{\\tau}(w_1)}}{1-z_1 \\overline{w_1}} = \\sum_{i=1}^M f_i(z_1, \\tau) \\overline{f_i(w_1, \\tau)}.\\]\nThus, the set $\\{ f_1(\\cdot, \\tau), \\dots, f_M(\\cdot, \\tau)\\}$ spans $\\mathcal{K}_{\\Theta_{\\tau}}$ and so the restriction map $\\mathcal{J}_{\\tau}$ is well defined (i.e.~maps $\\mathcal{H}(K_2)$ into $\\mathcal{K}_{\\Theta_{\\tau}}$) and is surjective. \n\nTo show that each $\\mathcal{J}_{\\tau}$ is an isometry, fix $f,g \\in \\mathcal{H}(K_2)$ and for $z_2 \\in \\mathbb{T}$, define\n\\[ F_{f,g}(z_2) := \\int_{\\mathbb{T}} f(z_1, z_2) \\overline{g(z_1, z_2)} d\\sigma(z_1) = \\left \\langle f(\\cdot, z_2), g(\\cdot, z_2)\\right \\rangle_{\\mathcal{K}_{\\Theta_{z_2}}},\\]\nwhere $d\\sigma(z_1)$ is normalized Lebesgue measure on $\\mathbb{T}$ and the last equality holds for $z_2 \\in \\mathbb{T} \\setminus E_{\\Theta}.$ An application of H\\\"older's inequality immediately implies that $F_{f,g} \\in L^1(\\mathbb{T})$. Furthermore, our assumptions imply that $\\mathcal{H}(K_2) \\perp z_2 \\mathcal{H}(K_2).$ From this we can conclude that $f \\perp z_2^j g$ in $\\mathcal{S}_2$, and hence in $H^2(\\mathbb{D}^2)$, for all $j \\in \\mathbb{Z} \\setminus \\{0\\}$. Then the Fourier coefficients of $F_{f,g}$ can be computed as follows:\n\\[ \\widehat{F_{f,g}}(j) = \\int_{\\mathbb{T}} z_2^{-j} F_{f,g}(z_2) d\\sigma(z_2) =\\int_{\\mathbb{T}^2} z_2^{-j} f(z) \\overline{g(z)} d\\sigma(z_1)d\\sigma(z_2)=0\\]\nfor $j \\in \\mathbb{Z} \\setminus \\{0\\}$. Then basic Fourier analysis (for example, Corollary 8.45 in \\cite{folland}) implies that \n\\[ F_{f,g}(z_2) = \\widehat{F_{f,g}}(0) = \\left \\langle f,g \\right \\rangle_{\\mathcal{H}(K_2)}\\ \\text{ for a.e.~} z_2 \\in \\mathbb{T}.\\]\nBut, the formula for $F_{f,g}$ implies that it is continuous on $\\mathbb{T}\\setminus E_{\\Theta}$ and so for $z_2 \\in \\mathbb{T}\\setminus E_{\\Theta}$,\n\\[ \\left \\langle f(\\cdot, z_2), g(\\cdot, z_2)\\right \\rangle_{\\mathcal{K}_{\\Theta_{z_2}}} = F_{f,g}(z_2)= \\left \\langle f,g \\right \\rangle_{\\mathcal{H}(K_2)}.\\]\nThis implies $\\mathcal{J}_{\\tau}$ is an isometry for $\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}$. Since it is also surjective, $\\mathcal{J}_{\\tau}$ is unitary and so\n\\[ \\dim \\mathcal{H}(K_2) = \\dim \\mathcal{K}_{\\Theta_{\\tau}} = m,\\]\ncompleting the proof. 
\\end{proof}\n\n\\begin{remark} \\label{rem:kernel} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner with $\\deg \\Theta = (m,n)$ and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. Then Theorems \\ref{thm:dim} and \\ref{thm:rational} can be used to deduce information about both the functions in $\\mathcal{S}_2$ and the inner product of $\\mathcal{S}_2$. As mentioned earlier, we let $H^2_2(\\mathbb{D})$ denote the one-variable Hardy space with independent variable $z_2.$\n\nFirst, as in \\eqref{eqn:kernel}, let $K_2$ be the reproducing kernel satisfying $\\mathcal{H}(K_2) = \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2.$ By Theorems \\ref{thm:dim} and \\ref{thm:rational}, there is an orthonormal basis $\\{ \\frac{q_1}{p}, \\dots, \\frac{q_m}{p}\\} $ of $\\mathcal{H}(K_2)$ with $\\deg q_i \\le (m-1,n)$ for $i=1,\\dots, m.$ Then, since $\\frac{q_i}{p} \\perp \\frac{q_j}{p} z_2^k$ for all $i \\ne j$ and $k\\in \\mathbb{Z}$, one can show\n\\begin{equation} \\label{eqn:S2decomp} \\mathcal{S}_2 = \\mathcal{H}\\left( \\frac{\\sum_{i=1}^m \\frac{q_i(z)}{p(z)} \\frac{\\overline{q_i(w)}}{\\overline{p(w)}}}{1-z_2\\bar{w}_2}\\right) = \\bigoplus_{i=1}^m \\mathcal{H}\\left( \\frac{ \\frac{q_i(z)}{p(z)} \\frac{\\overline{q_i(w)}}{\\overline{p(w)}}}{1-z_2\\bar{w}_2}\\right),\n\\end{equation}\nwhere the last term indicates an orthogonal decomposition of $\\mathcal{S}_2$ into $m$ subspaces. We also claim that each subspace\n\\[ \\mathcal{S}_2^i := \\mathcal{H}\\left( \\frac{ \\frac{q_i(z)}{p(z)} \\frac{\\overline{q_i(w)}}{\\overline{p(w)}}}{1-z_2\\bar{w}_2}\\right)\\]\n is precisely the set of functions $\\frac{q_i}{p}H_2^2(\\mathbb{D})$ and for each pair of functions $\\frac{q_i}{p}f_i, \\frac{q_i}{p}g_i \\in \\mathcal{S}_2^i$, \n \\begin{equation} \\label{eqn:inner} \\left \\langle \\frac{q_i}{p} f_i, \\frac{q_i}{p} g_i \\right \\rangle_{\\mathcal{S}_2^i} = \\left \\langle f_i, g_i \\right \\rangle_{H_2^2(\\mathbb{D})}.\\end{equation}\n One can prove this claim by defining the above inner product on the set $\\frac{q_i}{p}H_2^2(\\mathbb{D})$. A straightforward computation shows that this turns $\\frac{q_i}{p}H_2^2(\\mathbb{D})$ into a reproducing kernel Hilbert space with reproducing kernel $\\frac{q_i(z)}{p(z)} \\frac{\\overline{q_i(w)}}{\\overline{p(w)}} \\frac{1}{1-z_2\\bar{w}_2}$. By the uniqueness of reproducing kernels, the set $\\frac{q_i}{p} H_2^2(\\mathbb{D})$ with the proposed inner product is exactly $\\mathcal{S}_2^i.$ \n \n \nThen, we can define a linear map $\\mathcal{U}: H^2_2(\\mathbb{D})^m \\rightarrow \\mathcal{S}_2$ by\n\\begin{equation} \\label{eqn:unitary} \\mathcal{U} \\vec{f} := \\sum_{i=1}^m \\frac{q_i}{p} f_i,\\qquad \\text{ for } \\vec{f} = (f_1, \\dots,f_m )\\in H^2_2(\\mathbb{D})^m.\\end{equation} \nWe will show that this map is actually unitary. First, observe that this map is well defined and surjective since \\eqref{eqn:S2decomp} and the above characterization of the subspaces $\\mathcal{S}_2^i$ imply that $\\mathcal{S}_2$ is composed precisely of functions of the form $\\sum_{i=1}^m \\frac{q_i}{p} f_i$, where each $f_i \\in H_2^2(\\mathbb{D})$.
Moreover, as \\eqref{eqn:S2decomp} is an orthogonal decomposition and \\eqref{eqn:inner} gives the inner product on each $\\mathcal{S}_2^i$, we can conclude that for all $\\vec{f}, \\vec{g} \\in H^2_2(\\mathbb{D})^m$, \n\\[\n \\left \\langle \\mathcal{U} \\vec{f} , \\mathcal{U} \\vec{g } \\right \\rangle_{\\mathcal{S}_2} = \\sum_{i=1}^m \\left \\langle \\frac{q_i}{p}f_i, \\frac{q_i}{p}g_i \\right \\rangle_{\\mathcal{S}_2^i}\n = \\sum_{i=1}^m \\left \\langle f_i, g_i \\right \\rangle_{H^2_2(\\mathbb{D})} = \\left \\langle \\vec{f}, \\vec{g} \\right \\rangle_{H_2^2(\\mathbb{D})^m}.\\] \nThus, $\\mathcal{U}$ is unitary as desired.\n\\end{remark}\n\n\n\\section{The Structure and Numerical Range of $S^1_{\\Theta}$ }\\label{sec:structure}\n\nLet $\\Theta$ be rational inner and write $\\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2,$ for subspaces $\\mathcal{S}_1$, $\\mathcal{S}_2$ that are respectively $z_1$- and $z_2$-invariant. As the following lemma shows, the numerical range of $P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1}$ is not particularly interesting.\n\n\\begin{lemma} \\label{lem:range} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$ and let $\\mathcal{S}_1$ be a $z_1$-invariant subspace of $\\mathcal{K}_{\\Theta}$ as in \\eqref{eqn:sub}.\n\\begin{itemize}\n\\item[a.] If $n=0$, then Clos $\\mathcal{W}(P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1})= \\{0\\}.$ \n\\item[b.] If $n > 0$, then Clos $\\mathcal{W}(P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1})= \\overline{\\mathbb{D}}.$ \n\\end{itemize}\n\\end{lemma}\n\\begin{proof} Let $K_1$ be as in \\eqref{eqn:kernel}, i.e.~the reproducing kernel satisfying $\\mathcal{H}(K_1) = \\mathcal{S}_1 \\ominus z_1 \\mathcal{S}_1.$ Then \n\\[ \\mathcal{S}_1 = \\bigoplus_{k=0}^{\\infty} z_1^k \\mathcal{H}(K_1).\\]\n If $n=0$, then Theorem \\ref{thm:rational} implies that $\\dim \\mathcal{H}(K_1) =0$, so $\\mathcal{S}_1 = \\{0\\}.$ It follows immediately that Clos $\\mathcal{W}(P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1})= \\{0\\}.$ \n\nNow assume $n>0$. Then by Theorems~\\ref{thm:dim} and \\ref{thm:rational}, we can find an orthonormal basis $\\{ \\frac{r_1}{p}, \\dots, \\frac{r_n}{p}\\}$ of $\\mathcal{H}(K_1)$ with each $r_i$ a polynomial. Define\n\\[ Z_{K_1} := \\{ w_1 \\in \\mathbb{D} : r_i(w_1, w_2) =0 \\ \\text{ for all } w_2 \\in \\mathbb{D} \\text{ and } 1 \\le i \\le n\\}.\\]\nIf $w_1 \\in Z_{K_1}$, then each $r_i(w_1, \\cdot) \\equiv 0$ on $\\mathbb{D}$. Thus, $ r_i(w_1, \\cdot) \\equiv 0$ on $\\mathbb{C}$. This implies $r_i$ vanishes on the zero set of $z_1-w_1.$ Since $z_1-w_1$ is irreducible, Hilbert's Nullstellensatz implies that $z_1 - w_1$ divides each $r_i$ and as the $r_i$ are polynomials, this implies that $Z_{K_1}$ is a finite set. Observe that \n\\[ \\widehat{K}_1(z,w) := \\frac{K_1(z,w)}{1-z_1\\bar{w}_1} = \\sum_{i=1}^n \\frac{\\frac{r_i(z)}{p(z)}\\frac{\\overline{r_i(w)}}{\\overline{p(w)}}}{1-z_1\\bar{w}_1}\\]\nis the reproducing kernel for $\\mathcal{S}_1$. 
Fix $w_1 \\in \\mathbb{D} \\setminus Z_{K_1}$ and choose $w_2 \\in \\mathbb{D}$ so that at least one $r_i(w_1, w_2) \\ne 0.$ Then setting $w=(w_1, w_2)$, we have $ \\| \\widehat{K}_1(\\cdot, w)\\|^2_{\\mathcal{S}_1} = \\widehat{K}_1(w,w) \\ne 0$ and since $\\mathcal{S}_1$ is $z_1$-invariant,\n\\[ w_1 \\left \\| \\widehat{K}_1(\\cdot, w) \\right\\|^2_{\\mathcal{S}_1} = w_1\\widehat{K}_1(w,w) = \\left \\langle M_{z_1}\\widehat{K}_1(\\cdot, w), \\widehat{K}_1(\\cdot, w) \\right \\rangle_{\\mathcal{S}_1} = \\left \\langle \\widetilde{S}^1_{\\Theta} \\widehat{K}_1(\\cdot, w), \\widehat{K}_1(\\cdot, w) \\right \\rangle_{\\mathcal{S}_1}. \\]\n Since $\\| \\widehat{K}_1(\\cdot, w)\\|^2_{\\mathcal{S}_1} \\ne 0$, we can divide both sides of the above equation by it and conclude that the point $w_1 \\in \\mathcal{W}(P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1}).$ Since this works for all $w_1 \\in \\mathbb{D} \\setminus Z_{K_1}$ and $Z_{K_1}$ is finite,\n \\[\\overline{\\mathbb{D}} \\subseteq \\text{Clos }\\mathcal{W}(P_{\\mathcal{S}_1} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_1}).\\]\n The other containment follows immediately because $ \\widetilde{S}^1_{\\Theta}$ is a contraction. \\end{proof}\n\n\nBy Lemma \\ref{lem:range}, the interesting behavior of $ \\widetilde{S}^1_{\\Theta}$ occurs on the subspace $\\mathcal{S}_2$. Because of this, as mentioned earlier, we primarily study this alternate compression of the shift\n\\[ S_{\\Theta}^1 : = P_{\\mathcal{S}_2} \\widetilde{S}^1_{\\Theta}|_{\\mathcal{S}_2} = P_{\\mathcal{S}_2} M_{z_1} |_{\\mathcal{S}_2}.\\]\nIn the following result, we show that $S^1_{\\Theta}$ is unitarily equivalent to a simple $z_2$-matrix-valued Toeplitz operator, as defined in \\eqref{eqn:Toeplitz}.\n\n\\begin{theorem} \\label{thm:unitary} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$ and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. Then there exists an $m\\times m$ matrix-valued function $M_{\\Theta}$, with entries that are rational functions of $\\bar{z}_2$ and continuous on $\\overline{\\mathbb{D}},$ such that\n\\begin{equation} \\label{eqn:unitary1} S^1_{\\Theta} = \\mathcal{U} \\ T_{M_{\\Theta}} \\ \\mathcal{U}^*, \\end{equation}\nwhere $\\mathcal{U}: H^2_2(\\mathbb{D})^m \\rightarrow \\mathcal{S}_{2}$ is the unitary operator defined in \\eqref{eqn:unitary}.\n\\end{theorem}\n\n\\begin{proof} Throughout this proof, we use the notation defined and explained in Remark \\ref{rem:kernel}. Recall that $\\{ \\frac{q_1}{p}, \\dots, \\frac{q_m}{p}\\} $ denotes the previously-obtained orthonormal basis of $\\mathcal{H}(K_2):= \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2.$\n\nBy Proposition $3.4$ in \\cite{b13}, $\\mathcal{S}_2$ is invariant under the backward shift operator $S_{\\Theta}^{1*}=M_{z_1}^*|_{\\mathcal{S}_2}$. This means that there are one-variable functions $h_{1j}, \\dots, h_{mj} \\in H_2^2(\\mathbb{D})$ such that\n\\begin{equation} \\label{eqn:bwsform1} M_{z_1}^*\\left( \\frac{q_j}{p} \\right)= \\frac{q_1}{p} h_{1j} + \\dots + \\frac{q_m}{p} h_{mj}, \\quad \\text{ for } j=1, \\dots, m.\\end{equation} Define the $m\\times m$ matrix-valued function $H$ by\n\\begin{equation} \\label{eqn:H} H:= \\begin{bmatrix} h_{11} & \\cdots & h_{1m} \\\\\n\\vdots & \\ddots & \\vdots \\\\\nh_{m1} & \\cdots & h_{mm}\n\\end{bmatrix},\n\\end{equation}\nand define the matrix-valued function $M_{\\Theta}$ by\n\\begin{equation} \\label{eqn:M} M_{\\Theta} := H^*. 
\\end{equation}\n To establish the properties of $M_{\\Theta}$, we will show that $H$ has entries that are rational in $z_2$ and continuous on $\\overline{\\mathbb{D}}.$ First rewrite the terms in \\eqref{eqn:bwsform1} as\n\\[ \\left(M_{z_1}^*\\frac{q_j}{p} \\right)(z) = \\frac{Q_j(z)}{p(z) p(0,z_2)} \\ \\text{ and } \\ Q_{j}(z) = \\sum_{k=0}^{m-1} Q_{jk}(z_2)z_1^k\\]\nfor a polynomial $Q_j$ and write \n\\[ q_i(z) = \\sum_{k=0}^{m-1} q_{ik}(z_2)z_1^k, \\qquad \\text{ for } i=1, \\dots, m.\\]\nThen, by canceling the common factor of $p$ from each denominator in \\eqref{eqn:bwsform1} and comparing the coefficients of each power $z_1^k$ separately, \\eqref{eqn:bwsform1} can be rewritten as \n\\[ \\begin{bmatrix} q_{10}(z_2) & \\dots & q_{m0}(z_2) \\\\\n\\vdots & \\ddots & \\vdots \\\\\nq_{1(m-1)}(z_2) & \\dots & q_{m(m-1)}(z_2) \n\\end{bmatrix} \n\\begin{bmatrix} \nh_{1j}(z_2) \\\\\n\\vdots \\\\\nh_{mj}(z_2) \n\\end{bmatrix} \n= \\begin{bmatrix}\n\\frac{Q_{j0}(z_2)}{p(0,z_2)} \\\\\n\\vdots \\\\\n\\frac{Q_{j(m-1)}(z_2)}{p(0,z_2)} \n\\end{bmatrix},\n\\]\nfor $z_2 \\in \\mathbb{D}$ and $j=1, \\dots, m.$\nLet $A_j$ denote the $m\\times m$ matrix function in the above equation. Since $\\det A_j$ is a one-variable polynomial, it is either identically zero or has finitely many zeros. First assume $\\det A_j\\equiv 0$, so that clearly $\\det A_j(\\tau) = 0$ for each $\\tau \\in \\mathbb{T}$. This implies that for each fixed $\\tau \\in \\mathbb{T}$, one $q_k(\\cdot, \\tau)$ can be written as a linear combination of the other $q_i(\\cdot, \\tau).$ However by Theorem \\ref{thm:rational}, for $\\tau\\in \\mathbb{T} \\setminus E_{\\Theta}$, the set \n\\[ \\left\\{\\frac{q_1}{p}(\\cdot,\\tau), \\dots , \\frac{q_m}{p}(\\cdot,\\tau)\\right \\}\\]\n is a basis for the $m$-dimensional space $\\mathcal{K}_{\\Theta_\\tau}$. Thus the set must be linearly independent, a contradiction.\n\nHence, $\\det A_j \\not \\equiv 0.$ Thus, the matrix $A_j(z_2)$ is invertible except at (at most) a finite number of points $z_2 \\in \\mathbb{D}$ and so we can solve for each column of $H$ as \n\\[\\begin{bmatrix} \nh_{1j}(z_2) \\\\\n\\vdots \\\\\nh_{mj}(z_2) \n\\end{bmatrix} = \\begin{bmatrix} q_{10}(z_2) & \\dots & q_{m0}(z_2) \\\\\n\\vdots & \\ddots & \\vdots \\\\\nq_{1(m-1)}(z_2) & \\dots & q_{m(m-1)}(z_2) \n\\end{bmatrix}^{-1} \n\\begin{bmatrix}\n\\frac{Q_{j0}(z_2)}{p(0,z_2)} \\\\\n\\vdots \\\\\n\\frac{Q_{j(m-1)}(z_2)}{p(0,z_2)} \n\\end{bmatrix}.\n\\]\nThis shows that the entries of $H$ are rational functions in $z_2$ and so by \\eqref{eqn:M}, the entries of $M_{\\Theta}$ are rational in $\\bar z_2$.\n\nSince the entries of $H$ are also in $H_2^2(\\mathbb{D})$, we claim that they cannot have any singularities in $\\overline{\\mathbb{D}}$. That there are no singularities in $\\mathbb{D}$ should be clear. To see that there are no singularities on $\\mathbb{T}$, proceed by contradiction and assume that some $h_{ij}$ has a singularity at a $\\tau \\in \\mathbb{T}$. Then, after writing $h_{ij}$ as a ratio of one-variable polynomials with no common factors, the denominator of $h_{ij}$ vanishes at $\\tau$ but the numerator does not.
\nBy the reproducing property of $H_2^2(\\mathbb{D})$, we know that for each $z_2 \\in \\mathbb{D}$, \n\\[ | h_{ij}(z_2) | = \\left | \\left \\langle h_{ij}, \\frac{1}{1-\\cdot \\bar{z_2}} \\right \\rangle_{H_2^2(\\mathbb{D})} \\right| \\le \\| h_{ij} \\|_{H_2^2(\\mathbb{D})} \\frac{1}{ \\sqrt{1- |z_2|^2}}.\\]\nBut since $h_{ij}$ has a singularity at $\\tau$, there is a sequence $\\{z_{2,n}\\} \\rightarrow \\tau$ and positive constant $C$ such that $|h_{ij}(z_{2,n})| \\ge C \\frac{1}{1-|z_{2,n}|}$ for each $n$, a contradiction. Thus $H$, and hence $M_{\\Theta},$ has entries continuous on $\\overline{\\mathbb{D}}.$ \n\nNow we establish \\eqref{eqn:unitary1}. Fix $f,g \\in \\mathcal{S}_2$. Then by Remark \\ref{rem:kernel}, there exist vector-valued functions $\\vec{f}=(f_1, \\dots, f_m) , \\vec{g}=(g_1, \\dots, g_m) \\in H^2_2(\\mathbb{D})^m$ such that \n\\[ f =\\sum_{i=1}^m \\frac{q_i}{p} f_i= \\mathcal{U} \\ \\vec{f} \\ \\ \\text{ and } \\ \\ g =\\sum_{i=1}^m \\frac{q_i}{p} g_i = \\mathcal{U} \\ \\vec{g},\\]\nso $\\vec{f} = \\mathcal{U}^*f$ and $\\vec{g} = \\mathcal{U}^* g.$ Then using the inner product formulas from Remark \\ref{rem:kernel}, we can compute\n\\[\n\\begin{aligned} \n\\left \\langle S^1_{\\Theta} f, g \\right \\rangle_{\\mathcal{S}_2}\n &= \\left \\langle f, M_{z_1}^* g \\right \\rangle_{\\mathcal{S}_2} \\\\\n& = \\left \\langle \\sum_{i=1}^m \\frac{q_i}{p} f_i, \\sum_{j=1}^m M^*_{{z}_1} \\left( \\frac{q_j}{p} \\right) g_j \\right \\rangle_{\\mathcal{S}_2} \\\\\n& = \\sum_{i,j=1}^m \\left \\langle \\frac{q_i}{p} f_i, \\frac{q_i}{p} h_{ij} g_j \\right \\rangle_{\\mathcal{S}_2}\\\\\n& = \\sum_{i,j=1}^m \\left \\langle f_i, h_{ij} g_j \\right \\rangle_{H_2^2(\\mathbb{D})}\\\\\n& = \\left \\langle \\vec{f}, T_H \\vec{g} \\right \\rangle_{H_2^2(\\mathbb{D})^m} \\\\\n& = \\left \\langle T_{M_{\\Theta}} \\vec{f}, \\vec{g} \\right \\rangle_{H_2^2(\\mathbb{D})^m}\\\\\n& = \\left \\langle \\ \\mathcal{U} \\ T_{M_{\\Theta}} \\ \\mathcal{U}^* f, g\\right \\rangle_{\\mathcal{S}_2},\n\\end{aligned}\n\\]\nwhere $T_H$ is the $z_2$-matrix-valued Toeplitz operator with symbol $H$. Since $f, g \\in \\mathcal{S}_2$ were arbitrary, this immediately gives \\eqref{eqn:unitary1}. \\end{proof}\n\n\\begin{example} Before proceeding, observe that Theorem \\ref{thm:unitary} generalizes the matrix from \\eqref{eqn:onevar}. Specifically, let $\\Theta = \\frac{\\tilde{p}}{p}$ be a rational inner function with $\\deg \\Theta = (m,0)$, so $\\Theta$ is a finite Blaschke product of degree $m$. Then the associated two-variable model space is\n\\[ \\mathcal{K}_{\\Theta} = \\mathcal{H} \\left( \\frac{1- \\Theta(z_1)\\overline{\\Theta(w_1)}}{(1-z_1\\overline{w}_1)(1-z_2\\overline{w}_2)}\\right),\\]\nwhich is $z_2$-invariant. Thus, we can set $\\mathcal{S}_2 = \\mathcal{K}_{\\Theta}$ and $\\mathcal{S}_1 = \\{0\\}$. One can actually show that this is the only choice of $\\mathcal{S}_1$ and $\\mathcal{S}_2.$ Then\n\\[ \\mathcal{H}(K_2) := \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2 = \\mathcal{H} \\left( \\frac{1-\\Theta(z_1)\\overline{\\Theta(w_1)}}{1-z_1\\overline{w}_1}\\right)\\]\nis the one-variable model space associated to $\\Theta$ with independent variable $z_1$. 
It follows immediately that the one-variable Takenaka-Malmquist-Walsh basis $\\{f_1, \\dots, f_m\\}$ is an orthonormal basis for $\\mathcal{H}(K_2)$ and each $f_i = \\frac{q_i}{p}$ for some one-variable polynomial $q_i$ with $\\deg q_i \\le m-1.$ Because the one-variable model space (with independent variable $z_1$) is also invariant under the backward shift $M_{z_1}^*$, we can conclude that the unique $h_{ij}$ from \\eqref{eqn:bwsform1} are constants. Then since $\\mathcal{H}(K_2)$ is a subspace of $\\mathcal{K}_{\\Theta}$, we can use \\eqref{eqn:bwsform1}-\\eqref{eqn:M} to conclude\n\\[ \\left( {M_{\\Theta}}\\right)_{ij} = \\overline{H_{ji}} = \\overline{ \\left \\langle M_{z_1}^*\\frac{q_i}{p}, \\frac{q_j}{p} \\right \\rangle_{\\mathcal{K}_{\\Theta}}} = \\overline{ \\left \\langle M_{z_1}^*\\frac{q_i}{p}, \\frac{q_j}{p} \\right \\rangle_{\\mathcal{H}(K_2)}} = \\left \\langle P_{\\mathcal{H}(K_2)} M_{z_1} f_j ,f_i \\right \\rangle_{\\mathcal{H}(K_2)},\\]\nwhich is a constant matrix agreeing with the matrix from \\eqref{eqn:onevar}.\n\\end{example}\n\nAs a corollary of Theorem \\ref{thm:unitary}, we can characterize the numerical range of $ S^1_{\\Theta},$ denoted by $\\mathcal{W}( S^1_{\\Theta})$.\n\n\n\\begin{corollary} \\label{thm:nr} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$, let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}, and let $M_{\\Theta}$ be as in Theorem \\ref{thm:unitary}. Then\n\\begin{equation} \\label{eqn:nr} \\text{Clos}\\left( \\mathcal{W}\\left ( S^1_{\\Theta} \\right)\\right) = \\text{Conv} \\Big( \\ \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}\\left(M_{\\Theta}(\\tau) \\right) \\ \\Big).\\end{equation}\n\\end{corollary}\n\n\\begin{proof} By Theorem \\ref{thm:unitary}, the operator $ S^1_{\\Theta}$ has the same numerical range as the $z_2$-matrix-valued Toeplitz operator $T_{M_\\Theta}: H_2^2(\\mathbb{D})^m \\rightarrow H_2^2(\\mathbb{D})^m$. By \\cite[Theorem 1]{SpitBeb}, the closure of the numerical range $\\mathcal{W}(T_{M_{\\Theta}})$ is equal to\n$$\\mbox{Conv} \\left \\{\\mathcal{W}(A): A \\in \\mathcal{R}(M_{\\Theta})\\right\\},$$ \nwhere $\\mathcal{R}(M_{\\Theta} )$ is the essential range of $M_{\\Theta}$ as a function on $\\mathbb{T}.$ It is easy to see that this set is closed and so, we do not need to take its closure. Since $M_{\\Theta}$ is continuous on $\\mathbb{T}$, its essential range will equal its range, i.e.\n\\[ \\mbox{Conv} \\left\\{\\mathcal{W}(A): A \\in \\mathcal{R}(M_{\\Theta}) \\right\\} = \\text{Conv} \\Big( \\ \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}\\left(M_{\\Theta}(\\tau) \\right) \\ \\Big), \\]\nproving \\eqref{eqn:nr}.\n\\end{proof}\n\n\nOne can also consider the family of one-variable functions $\\{ \\Theta_{\\tau} = \\Theta(\\cdot, \\tau): \\tau \\in \\mathbb{T} \\setminus E_{\\Theta} \\},$ where $E_{\\Theta}$ is the exceptional set defined in \\eqref{eqn:exceptional}. For each $\\tau \\in \\mathbb{T}\\setminus E_{\\Theta}$, let $S_{\\Theta_{\\tau}}$ denote the compression of the shift on $\\mathcal{K}_{\\Theta_{\\tau}},$ the one-variable model space associated to $\\Theta_{\\tau}.$ It turns out that the numerical ranges $\\mathcal{W}(S_{\\Theta_{\\tau}})$ are closely related to $\\mathcal{W}( S^1_{\\Theta}).$\n\n\\begin{theorem} \\label{thm:generalnr} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$, let $E_{\\Theta}$ be the exceptional set from \\eqref{eqn:exceptional} and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. 
Then\n\\begin{equation} \\label{eqn:onevariable}\\text{Clos}\\left( \\mathcal{W} \\left( S^1_{\\Theta}\\right) \\right) = \\text{Clos} \\Big( \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}} \\mathcal{W}(S_{\\Theta_{\\tau}})\\Big)\\Big).\\end{equation}\n\\end{theorem}\n\n\\begin{proof} This proof will use the same notation as the proof of Theorem \\ref{thm:unitary}. First fix $\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}$. By Theorem \\ref{thm:rational}, the set\n\\[ \\Big\\{ \\frac{q_1}{p}(\\cdot, \\tau), \\dots, \\frac{q_m}{p}(\\cdot, \\tau) \\Big\\}\\]\nis an orthonormal basis for $\\mathcal{K}_{\\Theta_{\\tau}}$ with independent variable $z_1.$ Consider \\eqref{eqn:bwsform1}. As all involved functions are rational with no singularities on $\\overline{\\mathbb{D}} \\times \\left(\\overline{\\mathbb{D}} \\setminus E_{\\Theta} \\right)$ and the backward shift operator $S_{\\Theta}^{1*}=M^*_{z_1}|_{\\mathcal{S}_2}$ treats $z_2$ like a constant, we can extend this formula to the functions $\\frac{q_i}{p}(\\cdot, \\tau).$\nSpecifically, \\[ M_{z_1}^* \\left( \\frac{q_j}{p}(\\cdot, \\tau) \\right) = \\left( M_{z_1}^* \\frac{q_j}{p}\\right)(\\cdot, \\tau) = \\frac{q_1}{p}(\\cdot, \\tau) h_{1j}(\\tau) + \\dots + \\frac{q_m}{p}(\\cdot, \\tau) h_{mj}(\\tau), \n\\]\nfor $j=1, \\dots, m.$ Now, we use arguments similar to those in the proof of Theorem \\ref{thm:unitary} to show $\\mathcal{W}(S_{\\Theta_{\\tau}}) = \\mathcal{W}(M_{\\Theta}(\\tau)).$ \nSpecifically, fix $f \\in \\mathcal{K}_{\\Theta_{\\tau}}$. Then there exist unique constants\n$a_1, \\dots, a_m \\in \\mathbb{C}$ such that\n\\[ f= \\sum_{i=1}^m a_i \\frac{q_i}{p}(\\cdot,\\tau).\\]\nMoreover, $\\| f\\|^2_{\\mathcal{K}_{\\Theta_{\\tau}}} =1$ if and only if $\\sum_{i=1}^m |a_i|^2=1,$ i.e.~exactly when\n$\\vec{a}:= (a_1, \\dots, a_m) \\in \\mathbb{C}^m$ has norm one. Then,\n\\[\n\\begin{aligned} \n\\left \\langle S_{\\Theta_{\\tau}} f,f \\right \\rangle_{\\mathcal{K}_{\\Theta_{\\tau}}} &= \\left \\langle f, M_{z_1}^* f \\right \\rangle_{\\mathcal{K}_{\\Theta_{\\tau}}} \\\\\n& = \\left \\langle \\sum_{i=1}^m a_i \\frac{q_i}{p}(\\cdot, \\tau), \\sum_{j=1}^m a_j M_{z_1}^*\\left( \\frac{q_j}{p}(\\cdot, \\tau) \\right) \\right \\rangle_{\\mathcal{K}_{\\Theta_{\\tau}}} \\\\\n& = \\sum_{i,j=1}^m \\left \\langle a_i \\frac{q_i}{p}(\\cdot, \\tau), \\frac{q_i}{p}(\\cdot, \\tau) a_j h_{ij}(\\tau) \\right \\rangle_{\\mathcal{K}_{\\Theta_{\\tau}}}\\\\\n& = \\sum_{i,j=1}^m \\left \\langle a_i, a_j h_{ij}(\\tau) \\right \\rangle_{\\mathbb{C}} \\\\\n& = \\left \\langle \\vec{a}, H(\\tau)\\vec{a} \\right \\rangle_{\\mathbb{C}^m} \\\\\n& = \\left \\langle M_{\\Theta}(\\tau) \\vec{a}, \\vec{a} \\right \\rangle_{\\mathbb{C}^m}, \n\\end{aligned}\n\\]\nwhere we used the definitions of $H$ and $M_{\\Theta}$ from \\eqref{eqn:H} and \\eqref{eqn:M}. 
This sequence of equalities\nproves that $\\mathcal{W}(S_{\\Theta_{\\tau}}) = \\mathcal{W}(M_{\\Theta}(\\tau)).$ Thus, we have \n\n\\[ \n\\begin{aligned}\n\\text{Clos} \\Big( \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}} \\mathcal{W}(S_{\\Theta_{\\tau}})\\Big)\\Big) &= \\text{Clos} \\Big( \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}} \\mathcal{W}(M_{\\Theta}(\\tau))\\Big)\\Big)\\\\\n& = \\text{Clos} \\Big( \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}(M_{\\Theta}(\\tau))\\Big)\\Big) \\\\\n& = \\text{Clos}\\left( \\mathcal{W}( S^1_{\\Theta}) \\right),\n\\end{aligned}\n\\] \nwhere we used Corollary \\ref{thm:nr} and the fact that $M_{\\Theta}$ is continuous on $\\mathbb{T}$.\n\\end{proof}\n\nIf $\\Theta = \\frac{\\tilde{p}}{p}$ is rational inner of degree $(m,n)$, then there are typically many ways to decompose $\\mathcal{K}_{\\Theta}$ into shift invariant subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$. Indeed, according to Corollary 13.6 in \\cite{k14}, if $\\deg p = \\deg \\tilde{p}$, there is a unique such decomposition if and only if $\\tilde{p}$ and $p$ have $2mn$ common zeros (including intersection multiplicity) on $\\mathbb{T}^2.$ \nNevertheless, Theorem \\ref{thm:generalnr} allows us to show that $\\mathcal{W}( S^1_{\\Theta})$ does not depend on the decomposition chosen.\n\n\\begin{corollary} \\label{cor:S2} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$. Let \n\\[ \\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2 = \\widetilde{\\mathcal{S}}_1 \\oplus \\widetilde{\\mathcal{S}}_2\\]\nwhere both $\\mathcal{S}_j, \\widetilde{\\mathcal{S}}_j$ are $z_j$-invariant subspaces for $j=1,2$. Then\n\\[ \\text{Clos} \\left( \\mathcal{W}\\left( P_{\\mathcal{S}_2} M_{z_1} |_{\\mathcal{S}_2} \\right)\\right)= \\text{Clos} \\left( \\mathcal{W}\\left(P_{\\widetilde{\\mathcal{S}}_2} M_{z_1}|_{\\widetilde{\\mathcal{S}}_2} \\right)\\right).\\]\n\\end{corollary}\n\n\\begin{proof} By Theorem \\ref{thm:generalnr},\n \\[ \n \\text{Clos} \\left( \\mathcal{W}\\left(P_{\\mathcal{S}_2} M_{z_1} |_{\\mathcal{S}_2} \\right)\\right)=\n \\text{Clos} \\Big( \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}} \\mathcal{W}(S_{\\Theta_{\\tau}})\\Big)\\Big) =\n \\text{Clos} \\left( \\mathcal{W}\\left(P_{\\widetilde{\\mathcal{S}}_2} M_{z_1} |_{\\widetilde{\\mathcal{S}}_2} \\right)\\right),\n \\]\n as desired. \\end{proof}\n \n Theorem \\ref{thm:generalnr} is particularly useful because the compressions of the shift on one-variable model spaces are well studied. Specifically, let $B$ be a degree $m$ Blaschke product with zeros $\\alpha_1, \\dots, \\alpha_m$ and let $S_B$ denote the compression of the shift on $\\mathcal{K}_B.$ Then, as mentioned in the introduction, one matrix of $S_B$ is given by \\eqref{eqn:onevar}. Using this formula, it is easy to deduce that the zeros $\\alpha_1, \\dots, \\alpha_m$ are all in $\\mathcal{W}(S_B)$. We will use this to establish the following result:\n \n\\begin{theorem}\\label{thm:numericalradius} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be rational inner of degree $(m,n)$ and let $\\mathcal{S}_2$ be as in \\eqref{eqn:sub}. Then the numerical radius $w\\big( S^1_{\\Theta} \\big) =1$ if and only if $\\Theta$ has a singularity on $\\mathbb{T}^2$.\n\\end{theorem} \n\\begin{proof} \n($\\Rightarrow$) Assume $w\\big( S^1_{\\Theta} \\big) =1$. 
Then there exists a sequence $\\{\\lambda_n\\} \\subseteq \\mathcal{W}( S^1_{\\Theta})$ such that $|\\lambda_n| \\rightarrow 1.$ Since $\\{\\lambda_n\\}$ is bounded, it has a subsequence converging to some $\\lambda \\in \\mathbb{T}$. Thus, $\\lambda \\in \\text{Clos}(\\mathcal{W}( S^1_{\\Theta})).$ By Corollary \\ref{thm:nr}, \n\\[ \\lambda \\in \\text{Conv} \\Big( \\ \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}\\left(M_{\\Theta}(\\tau) \\right) \\ \\Big).\\]\nAgain by Corollary \\ref{thm:nr}, as $S^1_{\\Theta}$ is a contraction, every $\\alpha \\in \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}\\left(M_{\\Theta}(\\tau)\\right)$ satisfies $|\\alpha| \\le 1$. Since $|\\lambda|=1$, we can conclude that there is some $\\tilde{\\tau} \\in \\mathbb{T}$ and some $\\tilde{\\lambda} \\in \\mathcal{W}\\left(M_{\\Theta}(\\tilde{\\tau})\\right)$ such that $|\\tilde{\\lambda}|=1.$ \n\nNow, by way of contradiction, assume that $\\Theta$ has no singularity at $(\\tau_1, \\tilde{\\tau})$ for any $\\tau_1 \\in \\mathbb{T}.$ Then $\\tilde{\\tau} \\in \\mathbb{T} \\setminus E_{\\Theta}$ and so by the proof of Theorem \\ref{thm:generalnr}, $\\mathcal{W}(S_{\\Theta_{\\tilde{\\tau}}}) = \\mathcal{W}(M_{\\Theta}(\\tilde{\\tau}))$. Thus $\\tilde{\\lambda} \\in \\mathcal{W}(S_{\\Theta_{\\tilde{\\tau}}}).$ This gives a contradiction since the numerical range of a compressed shift on a model space associated to a finite Blaschke product is strictly contained in $\\mathbb{D}$. See p.~$181$ of \\cite{gw03} for details. Thus $\\Theta$ must have a singularity at $(\\tau_1, \\tilde{\\tau})$ for some $\\tau_1 \\in \\mathbb{T}.$ \\\\\n\n\\noindent ($\\Leftarrow$) Since $S_{\\Theta}^1$ is a contraction, $w\\big( S^1_{\\Theta} \\big) \\le 1.$ Assume $\\Theta$ has a singularity at $\\tilde{\\tau} = (\\tilde{\\tau}_1, \\tilde{\\tau}_2) \\in \\mathbb{T}^2.$ Then as $\\Theta = \\frac{\\tilde{p}}{p}$, we must have $\\tilde{p}(\\tilde{\\tau})=0.$ To prove the desired claim, we will show that $\\tilde{\\tau}_1 \\in \\text{Clos}\\left(\\mathcal{W}( S^1_{\\Theta})\\right)$ and as $|\\tilde{\\tau}_1|=1$, we have $w\\big( S^1_{\\Theta} \\big) \\ge 1.$\nWrite\n\\[ \\tilde{p}(z_1, z_2) = \\sum_{k=0}^{m} \\tilde{p}_k(z_2) z_1^k = \\tilde{p}_m(z_2) \\left( z_1^m + \\sum_{k=0}^{m-1} \\frac{ \\tilde{p}_k(z_2)}{ \\tilde{p}_m(z_2)} z_1^k \\right),\\]\nfor one-variable polynomials $\\tilde{p}_0, \\dots, \\tilde{p}_m.$\nNote that $ \\tilde{p}_{m}$ does not vanish on $\\mathbb{T}.$ If it did, one could conclude that $p(0,\\cdot)$ vanishes on $\\mathbb{T}$, a contradiction of the fact that $p$ does not vanish on $\\mathbb{D} \\times \\mathbb{T}.$ \nNow for each $\\tau \\in \\mathbb{T}$, consider the one-variable polynomial \n\\[ \\tilde{p}(z_1, \\tau) = \\tilde{p}_m(\\tau) \\left( z_1^m + \\sum_{k=0}^{m-1} \\frac{ \\tilde{p}_k(\\tau)}{ \\tilde{p}_m(\\tau)} z_1^k \\right)\\]\nand factor it as\n\\[ \\tilde{p}(z_1, \\tau) = \\tilde{p}_m(\\tau) \\prod_{k=1}^m\\left( z_1- \\alpha_k(\\tau)\\right),\\]\nwhere $\\alpha_1(\\tau), \\dots, \\alpha_m(\\tau)$ are the zeros of $\\tilde{p}(\\cdot, \\tau)$. \nNow we use the fact that the zeros of a polynomial depend continuously on its coefficients; see \\cite{us77}.\n\nFix $\\epsilon >0$.
Since the coefficients $ \\left \\{\\frac{ \\tilde{p}_k(\\tau)}{ \\tilde{p}_m(\\tau)} \\right \\}$ are continuous on $\\mathbb{T}$, there exist $\\delta_1, \\delta_2 >0$ such that if $| \\tau - \\tilde{\\tau}_2| < \\delta_1$, then \n\\[ \\left | \\frac{ \\tilde{p}_k(\\tau)}{ \\tilde{p}_m(\\tau)} - \\frac{ \\tilde{p}_k(\\tilde{\\tau}_2)}{ \\tilde{p}_m(\\tilde{\\tau}_2)}\\right | < \\delta_2 \\qquad \\text{ for } k=1, \\dots, m-1\\]\nand reordering the $\\alpha_k(\\tau)$ if necessary\n\\[ |\\alpha_k(\\tau) - \\alpha_k(\\tilde{\\tau}_2)| < \\epsilon \\qquad \\text{ for } k=1, \\dots, m.\\]\nWithout loss of generality, we can assume $\\alpha_1(\\tilde{\\tau}_2)=\\tilde{\\tau}_1.$ Since $\\epsilon >0$ was arbitrary and $E_{\\Theta}$ is finite, the above arguments shows that \n\\[ \n\\begin{aligned} \\tilde{\\tau}_1 & \\in \\text{Clos} \\left\\{ \\alpha_1(\\tau): \\tau \\in \\mathbb{T} \\setminus E_{\\Theta}\\right\\} \\\\\n& \\subseteq \\text{Clos} \\left( \\bigcup_{\\tau \\in \\mathbb{T} \\setminus E_{\\Theta}} \\mathcal{W}(S_{\\Theta_{\\tau}})\\right) \\\\\n& \\subseteq \\text{Clos} \\left( \\mathcal{W}( S^1_{\\Theta})\\right),\n\\end{aligned}\n\\]\nwhere we used Equation \\eqref{eqn:onevar} to show that each $\\alpha_1(\\tau) \\in \\mathcal{W}(S_{\\Theta_{\\tau}})$ and Theorem \\ref{thm:generalnr} to conclude the last containment. \n\\end{proof}\n \n\\section{Example: $S^1_{\\Theta}$ for Simple Rational Inner Functions} \\label{sec:examples}\n\nIn this section, we illustrate Theorem \\ref{thm:unitary} using a particular class of rational inner functions. Specifically, let $\\Theta = \\prod_{i=1}^m \\theta_i,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function with a singularity on $\\mathbb{T}^2.$\nIn what follows, we will decompose $\\mathcal{K}_{\\Theta}$ into specific $M_{z_1}$- and $M_{z_2}$-invariant subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$ (also called $z_1$- and $z_2$-invariant), find an orthonormal basis of $\\mathcal{H}(K_2) := \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2$, and use this basis to compute the matrix-valued function $M_{\\Theta}$ from Theorem \\ref{thm:unitary}.\n\n\\subsection{Preliminaries} We first require preliminary information about degree $(1,1)$ rational inner functions with a singularity on $\\mathbb{T}^2$ and their associated model spaces. To indicate that these are particularly simple functions, we denote \nthem with $\\theta$ rather than $\\Theta.$ Then for such a $\\theta,$ there is a polynomial $p(z) =a + bz_1 + cz_2 + d z_1 z_2$ with no zeros in $\\mathbb{D}^2 \\cup (\\mathbb{T} \\times \\mathbb{D}) \\cup (\\mathbb{D} \\times \\mathbb{T})$ such that \n \\[ \\theta(z) = \\frac{\\tilde{p}(z)}{p(z)} = \\frac{ \\bar{a}z_1 z_2 + \\bar{b}z_2 + \\bar{c} z_1 + \\bar{d}}{ a + bz_1 + cz_2 + d z_1 z_2 }.\\]\n In this situation, it is particularly easy to identify shift-invariant subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$ associated to the two-variable model space $\\mathcal{K}_{\\theta}.$\n\n\\begin{lemma} \\label{lem:AD} Let $\\theta =\\frac{\\tilde{p}}{p}$ be a degree $(1,1)$ rational inner function with $p(z) = a + bz_1 + cz_2+d z_1 z_2$. 
Assume $p$ vanishes at $\\tau = (\\tau_1, \\tau_2) \\in \\mathbb{T}^2.$ Then $\\mathcal{K}_{\\theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2$, where\n\\begin{equation} \\label{eqn:subspaces} \\mathcal{S}_1 = \\mathcal{H}\\left( \\frac{g(z)\\overline{g(w)}}{1-z_1\\bar{w}_1} \\right) \\ \\ \\text{ and }\\ \\ \\mathcal{S}_2 = \\mathcal{H}\\left( \\frac{f(z) \\overline{f(w)}}{1-z_2 \\bar{w}_2}\\right)\\end{equation}\nwith the functions in the reproducing kernels given by\n\\[ g(z) = \\frac{\\gamma (z_1 - \\tau_1)}{p(z)} \\ \\ \\text{ and } \\ \\ f(z) = \\frac{\\lambda (z_2 - \\tau_2)}{p(z)},\\] \nfor any $\\lambda, \\gamma$ satisfying $|\\lambda|^2 = |\\bar{a}c -d \\bar{b}|$ and $|\\gamma|^2 = |\\bar{a}b-d\\bar{c}|.$ Moreover, $\\mathcal{S}_1$ and $\\mathcal{S}_2$ are the only subspaces of $\\mathcal{K}_{\\theta}$ satisfying $\\mathcal{K}_{\\theta} =\\mathcal{S}_1 \\oplus \\mathcal{S}_2$ that are respectively $z_1$- and $z_2$-invariant.\n\\end{lemma}\n\n\\begin{proof} Define $f$ and $g$ as above. As mentioned earlier, by \\cite{bsv05,bk13}, there are canonical subspaces $\\mathcal{S}^{max}_1$ and $\\mathcal{S}^{min}_2$ with $\\mathcal{K}_{\\theta}= \\mathcal{S}^{max}_1 \\oplus \\mathcal{S}^{min}_2$ that are respectively $z_1$- and $z_2$-invariant. As they are subspaces of $H^2(\\mathbb{D}^2)$, we can write them as \n\\[ \\mathcal{S}^{max}_1 = \\mathcal{H}\\left(\\frac{K_1(z,w)}{1-z_1 \\bar{w}_1}\\right) \\ \\ \\text{ and } \\ \\ \\mathcal{S}^{min}_2 = \\mathcal{H}\\left(\\frac{K_2(z,w)}{1-z_2 \\bar{w}_2} \\right),\\] \nfor Agler kernels $(K_1, K_2)$ of $\\theta$ defined as in \\eqref{eqn:kernel}. Our first goal is to show that $K_1(z,w) = g(z)\\overline{g(w)}$ and $K_2(z,w) = f(z)\\overline{f(w)}.$ Now, by Theorems \\ref{thm:dim} and \\ref{thm:rational}, there are polynomials \n\\[ r(z_1) = A + Bz_1 \\ \\text{ and } \\ q(z_2) = C + D z_2 \\]\nsuch that $K_1(z,w) = \\frac{r(z_1)}{p(z)}\\overline{\\frac{r(w_1)}{p(w)}}$ and $K_2(z,w) = \\frac{q(z_2)}{p(z)}\\overline{\\frac{q(w_2)}{p(w)}}$. The definition of Agler kernels implies that $(K_1,K_2)$ satisfy the formula\n\\begin{equation} \\label{eqn:AD2} 1 - \\theta(z) \\overline{\\theta(w)} = (1- z_1 \\overline{w_1}) \\frac{q(z_2)}{p(z)} \\frac{ \\overline{q(w_2)}}{\\overline{p(w)}} + (1- z_2 \\overline{w_2}) \\frac{r(z_1)}{p(z)} \\frac{ \\overline{r(w_1)}}{\\overline{p(w)}}.\\end{equation}\nMultiplying through by $p(z) \\overline{p(w)}$ and letting $(w_1, w_2) \\rightarrow (\\tau_1, \\tau_2)$ gives\n\\[ 0 = \\overline{q(\\tau_2)} (1- z_1 \\overline{\\tau_1}) q(z_2) + \\overline{r(\\tau_1)} (1- z_2 \\overline{\\tau_2}) r(z_1). \\]\nThis implies that $q(\\tau_2)=0$ and so, $q(z_2) = F (z_2 - \\tau_2)$ for some constant $F$. 
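(To see that $q(\\tau_2)=0$, set $z_2 = \\tau_2$ in the last display: the second term vanishes and the first becomes $|q(\\tau_2)|^2 (1- z_1 \\overline{\\tau_1})$, which can only vanish for every $z_1 \\in \\mathbb{D}$ if $q(\\tau_2)=0.$)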
Similarly, $r(z_1) = G (z_1 - \\tau_1)$ for some constant $G.$ To show that $K_1$ and $K_2$ have the desired expressions in terms of $g$ and $f$, we just need to show that $|F|^2 = |\\lambda|^2$ and $|G|^2 = |\\gamma|^2.$\n \nSubstituting the formulas for $q$ and $r$ into \\eqref{eqn:AD2} and multiplying through by $p(z) \\overline{p(w)}$ gives\n\\[ p(z) \\overline{p(w)} - \\tilde{p}(z) \\overline{\\tilde{p}(w)} = (1- z_1 \\overline{w_1}) |F|^2 (z_2 - \\tau_2) \\overline{ (w_2 - \\tau_2)} + (1- z_2 \\overline{w_2}) |G|^2 (z_1 - \\tau_1)\\overline{ (w_1 - \\tau_1)}.\\]\nRecalling that $p(z) = a+ bz_1+ cz_2 + dz_1z_2$ and $\\tilde{p}(z) = \\bar{a}z_1 z_2 + \\bar{b}z_2 + \\bar{c}z_1 + \\bar{d},$ we can equate the coefficients of the monomials $1, z_1 \\bar{w}_1, z_1$ and $z_2$ from both sides of the above equation to conclude:\n\\[\n\\begin{aligned} \n|a|^2 - |d|^2 &= |F|^2 +|G|^2\\\\\n|b|^2 - |c|^2 &= -|F|^2 + |G|^2 \\\\\n \\bar{a} b-d \\bar{c} &= -\\overline{\\tau_1} |G|^2 \\\\\n\\bar{a} c- d \\bar{b}&= -\\overline{\\tau_2} |F|^2.\n\\end{aligned}\n\\]\nThe last two equations show $|F|^2 = |\\lambda|^2$ and $|G|^2 = |\\gamma|^2$, implying that $K_1(z,w) = g(z)\\overline{g(w)}$ and $K_2(z,w) = f(z)\\overline{f(w)}.$\nIn combination with the first equation, one can also obtain the useful formulas\n\\begin{equation} \\label{eqn:zeros} \\tau_1 = \\frac{-2( a\\bar{b} - c \\bar{d})}{ |a|^2 +|b|^2 - |c|^2 -|d|^2} \\ \\text{ and } \\ \\tau_2 = \\frac{-2( a\\bar{c} - b \\bar{d})}{ |a|^2 +|c|^2 - |b|^2 -|d|^2}.\\end{equation}\nTo finish the proof, observe that $p$ and $\\tilde{p}$ have two common zeros (including intersection multiplicity) on $\\mathbb{T}^2$. As $\\theta$ is a degree $(1,1)$ rational inner function, Corollary 13.6 in \\cite{k14} implies that $\\theta$ has a unique pair of Agler kernels and hence, a unique pair of decomposing subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$ that are respectively $z_1$- and $z_2$-invariant. This unique pair $\\mathcal{S}_1$ and $\\mathcal{S}_2$ must then be the subspaces $\\mathcal{S}^{max}_1$ and $\\mathcal{S}^{min}_2$ found earlier.\n\\end{proof}\n\nIt is worth pointing out that for the function $f$ in Lemma \\ref{lem:AD}, we can choose any $\\lambda$ satisfying $|\\lambda|^2 = |\\bar{a}c -d \\bar{b}|$. However, in the sequel, we will typically choose the particular $\\lambda$ satisfying $\\lambda^2 = \\bar{a}c-d\\bar{b}$. We now obtain additional information about $M_{z_1}^*$ applied to $\\theta$ and this particular function $f$ from Lemma \\ref{lem:AD}. \n\n\\begin{lemma} \\label{lem:bws} Let $\\theta =\\frac{\\tilde{p}}{p}$ be a degree $(1,1)$ rational inner function with $p(z) = a + bz_1 + cz_2+d z_1 z_2$. Assume $p$ vanishes at $\\tau= (\\tau_1, \\tau_2) \\in \\mathbb{T}^2$ and let $f$ be defined as in Lemma \\ref{lem:AD} with $\\lambda$ further satisfying $\\lambda^2 = \\bar{a}c-d\\bar{b}$. 
Then\n\\[ \n\\begin{aligned}\n\\left( M_{z_1}^* f \\right) (z) &= f(z) \\left( -\\frac{b +dz_2}{a + cz_2} \\right); \\\\\n\\left( M_{z_1}^* \\theta\\right) (z) & = f(z) \\lambda \\left( \\frac{ z_2 - \\tau_2}{a + cz_2}\\right).\n\\end{aligned}\n\\] \n\\end{lemma}\n\\begin{proof} First, simple computations using the definition of $f$ and $p$ give\n\\[ \n\\begin{aligned}\n(M_{z_1}^*f)(z) &= \\frac{\\lambda(z_2 -\\tau_2)}{z_1} \\left( \\frac{1}{p(z)} - \\frac{1}{p(0,z_2)} \\right) \\\\\n& = \\lambda (z_2 -\\tau_2)\\frac{ -b-dz_2}{p(z) p(0,z_2)} \\\\\n& = f(z) \\left( -\\frac{b +dz_2}{a + cz_2} \\right).\n\\end{aligned}\n\\]\nSimilarly, one can compute\n\\[ \n(M_{z_1}^*\\theta)(z) = \\frac{1}{z_1} \\left( \\frac{\\tilde{p}(z)}{p(z)} - \\frac{\\tilde{p}(0,z_2)}{p(0,z_2)} \\right).\\]\nUsing the definitions of $p$ and $\\tilde{p}$, one can obtain a common denominator, collect like terms, and cancel the $z_1$ from the denominator to obtain:\n\\begin{equation} \\label{eqn:thetashift} (M_{z_1}^* \\theta)(z) = \\frac{ (\\bar{a}c -d\\bar{b})z_2^2 + (|a|^2 +|c|^2 - |b|^2 -|d|^2)z_2 + (a\\bar{c} - b \\bar{d}) }{p(z)p(0,z_2)}.\\end{equation}\nRecall that $\\tau_2 \\in \\mathbb{T}$. Then using the formula for $\\tau_2$ from \\eqref{eqn:zeros}, one can conclude that $a\\bar{c} - b \\bar{d} \\ne 0$ and \n\\begin{equation}\\label{eqn:tau} \\tau_2 = \\frac{1}{\\overline{\\tau_2}} = - \\frac{|a|^2 +|c|^2 - |b|^2 -|d|^2}{2(\\bar{a} c-\\bar{b} d)} \\ \\ \\text{ and } \\ \\ \\tau_2^2 = \\frac{\\tau_2}{\\overline{\\tau_2}} = \\frac{a \\bar{c} -b\\bar{d}}{ \\bar{a}c - d \\bar{b}}.\\end{equation}\nTaking the numerator from \\eqref{eqn:thetashift} and factoring out $(\\bar{a}c -d\\bar{b})$ gives\n\n\\[\n\\begin{aligned}\n (\\bar{a}c -d\\bar{b})z_2^2 &+ (|a|^2 +|c|^2 - |b|^2 -|d|^2)z_2 + (a\\bar{c} - b \\bar{d}) \\\\\n &= (\\bar{a}c -d\\bar{b}) \\left( z_2^2 + \\frac{|a|^2 +|c|^2 - |b|^2 -|d|^2}{\\bar{a}c -d\\bar{b}} z_2+ \\frac{ a\\bar{c} - b \\bar{d}}{\\bar{a}c -d\\bar{b}}\\right) \\\\ \n & = \\lambda^2 \\left( z_2^2 - 2 \\tau_2 z_2 + \\tau_2^2\\right).\n\\end{aligned}\n\\]\nCombining our formulas gives\n\\[ (M_{z_1}^* \\theta)(z) = \\lambda^2 \\frac{ (z_2 - \\tau_2)^2}{p(z) p(0,z_2)} = \\lambda f(z) \\left( \\frac{z_2 -\\tau_2}{a +c z_2}\\right),\\]\nthe desired equality.\\end{proof}\n\n\n\\subsection{$M_\\Theta$ for product $\\Theta$} \n\nLet us now return to the question posed at the beginning of the section. Let $\\Theta = \\prod_{i=1}^m \\theta_i,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function with a singularity on $\\mathbb{T}^2.$ We can now use Lemma \\ref{lem:AD} to decompose $\\mathcal{K}_{\\Theta}$ into specific $z_1$- and $z_2$- invariant subspaces $\\mathcal{S}_1$ and $\\mathcal{S}_2$ and find an orthonormal basis of $\\mathcal{H}(K_2):= \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2$. Then using Lemma \\ref{lem:bws}, we will compute the matrix function $M_{\\Theta}$ from Theorem \\ref{thm:unitary}.\n\nFor each $i$, let $\\mathcal{S}_{1,\\theta_i}$, $\\mathcal{S}_{2, \\theta_i}$, and $f_i$ denote the canonical subspaces and reproducing function associated to $\\theta_i$ in Lemma \\ref{lem:AD}. 
Then:\n\n\\begin{proposition} \\label{prop:productform}\nLet $\\Theta =\\prod_{i=1}^m \\theta_i,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function $\\frac{\\tilde{p}_i}{p_i}$ where $p_i(z) = a_i + b_i z_1 + c_i z_2 + d_i z_1 z_2$ with a singularity at $(\\tau_{1,i}, \\tau_{2,i}) \\in \\mathbb{T}^2.$ Define\n\\begin{equation} \\label{eqn:decompK}\n\\mathcal{S}_1 := \\bigoplus_{i=1}^m\\Big( \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big) \\mathcal{S}_{1, \\theta_i} \\Big) \\ \\text{ and } \\\n\\mathcal{S}_2 := \\bigoplus_{i=1}^m\\Big( \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big) \\mathcal{S}_{2, \\theta_i} \\Big).\n\\end{equation}\nThen $\\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2$ and $\\mathcal{S}_1$, $\\mathcal{S}_2$ are respectively $z_1$- and $z_2$-invariant. Furthermore, if $\n\\mathcal{H}(K_2) = \\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2$, then the set\n\\begin{equation} \\label{eqn:onbasis} \\left \\{ f_1, \\ \\theta_1 f_2, \\ \\theta_1 \\theta_2 f_3, \\dots, \\big({\\textstyle \\prod_{k=1}^{m-1}\\theta_k} \\big) f_m \\right \\}\\end{equation}\nis an orthonormal basis for $\\mathcal{H}(K_2)$, where each $f_i(z) = \\frac{\\lambda_i (z_2 - \\tau_{2,i})}{p_i(z)}$ and $\\lambda^2_i = \\overline{a_i}c_i-d_i\\overline{b_i}$. \n\\end{proposition}\n\n\\begin{proof} Observe that \n\\begin{equation} \\label{eqn:decomposition2} \\mathcal{K}_{\\Theta} = \\mathcal{K}_{\\theta_1} \\oplus \\theta_1 \\mathcal{K}_{\\theta_2} \\oplus (\\theta_1 \\theta_2) \\mathcal{K}_{\\theta_3} \\oplus \\dots \\oplus \\big({\\textstyle \\prod_{k=1}^{m-1}\\theta_k} \\big) \\mathcal{K}_{\\theta_m} = \\bigoplus_{i=1}^m \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big)\\mathcal{K}_{\\theta_i}.\\end{equation}\nThis can be seen by observing that the subspaces in \\eqref{eqn:decomposition2} are orthogonal to each other and their reproducing kernels add to that of $\\mathcal{K}_{\\Theta}.$\nNow by Lemma \\ref{lem:AD}, we can write each $ \\mathcal{K}_{\\theta_i} = \\mathcal{S}_{1, \\theta_i} \\oplus \\mathcal{S}_{2, \\theta_i},$ \nwhere these subspaces are respectively $z_1$- and $z_2$-invariant. \nDefine $\\mathcal{S}_1$ and $\\mathcal{S}_2$ as in \\eqref{eqn:decompK}.\nThen, $\\mathcal{S}_1$ is an orthogonal sum of $z_1$-invariant subspaces and so is also a $z_1$-invariant subspace. Similarly, $\\mathcal{S}_2$ is $z_2$-invariant. By \\eqref{eqn:decomposition2}, it immediately follows that $\\mathcal{K}_{\\Theta} = \\mathcal{S}_1 \\oplus \\mathcal{S}_2.$ To prove the orthonormal basis result, observe that the components of $\\mathcal{S}_2$ in \\eqref{eqn:decompK} are pairwise-orthogonal and each is $z_2$-invariant. Thus\n\\[ \\begin{aligned}\n\\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2 & = \\bigoplus_{i=1}^m \\Big( \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big) \\mathcal{S}_{2,\\theta_i} \\ominus z_2( \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big)\\mathcal{S}_{2,\\theta_i}\\Big) \\\\\n& = \\bigoplus_{i=1}^m \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big)\\Big( \\mathcal{S}_{2,\\theta_i} \\ominus z_2\\mathcal{S}_{2,\\theta_i}\\Big), \n\\end{aligned}\n\\]\nwhere we used the fact that each ${\\textstyle \\prod_{k=1}^{i-1}\\theta_k}$ is inner.\nBy the reproducing kernel formula in Lemma \\ref{lem:AD}, each singleton set $\\{f_i\\}$ is an orthonormal basis for $\\mathcal{S}_{2,\\theta_i} \\ominus z_2 \\mathcal{S}_{2,\\theta_i}$. 
Thus \neach singleton set $ \\left \\{ \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big) f_i \\right \\} $ is an orthonormal basis for $ \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big)\\left(\\mathcal{S}_{2,\\theta_i} \\ominus z_2 \\mathcal{S}_{2,\\theta_i}\\right) $. Since the decomposition of $\\mathcal{S}_2 \\ominus z_2 \\mathcal{S}_2 $ into components in the above equation is orthogonal, the set\n$ \\left \\{ f_1, \\theta_1 f_2, \\dots, \\big({\\textstyle \\prod_{k=1}^{m-1}\\theta_k} \\big) f_m \\right \\}$ gives the desired orthonormal basis.\n\\end{proof}\n\n\nRecall that $\\deg \\Theta = (m,m)$. By Theorem \\ref{thm:unitary}, the operator $S_{\\Theta}^1 := P_{\\mathcal{S}_2} M_{z_1} |_{\\mathcal{S}_2}$ is unitarily equivalent to a $z_2$-matrix-valued Toeplitz operator with $m\\times m$ symbol $M_{\\Theta}$, whose entries are rational in $\\bar{z}_2$ and continuous on $\\overline{\\mathbb{D}}.$ For this particular $\\Theta$ and $\\mathcal{S}_2$, we can compute $M_{\\Theta}$:\n\n\\begin{theorem} \\label{thm:nr1} Let $\\Theta = \\prod_{i=1}^m \\theta_i,$ where each $\\theta_i$ is a degree $(1,1)$ rational inner function $\\frac{\\tilde{p}_i}{p_i}$ where $p_i(z) = a_i + b_i z_1 + c_i z_2 + d_i z_1 z_2$ has a zero at $\\tau_i = (\\tau_{1,i}, \\tau_{2,i}) \\in \\mathbb{T}^2$. Let $\\mathcal{S}_2$ be as in \\eqref{eqn:decompK}. Then, the $m \\times m$ matrix-valued function $M_{\\Theta}$ from Theorem \\ref{thm:unitary} is given entry-wise by\n\\[ M_{\\Theta} (z_2)_{ji} = \\left \\{ \\begin{array}{cc}\n\\overline{ \\lambda_j \\left( \\frac{ z_2 - \\tau_{2,j}}{a_j + c_jz_2}\\right) \\lambda_i \\left( \\frac{ z_2 - \\tau_{2,i}}{a_i + c_iz_2}\\right) \\prod_{k=i+1}^{j-1} \\left(\\frac{\\overline{b_k}z_2 + \\overline{d_k}}{a_k + c_k z_2}\\right)} & \\text{ if $j>i$}; \\\\\n& \\\\\n\\overline{\\left( -\\frac{b_i +d_iz_2}{a_i + c_iz_2} \\right) } & \\text{ if $j=i$}; \\\\\n&\\\\ \n0 & \\text{ if $j<i$}.\n\\end{array}\n\\right. \\]\n\\end{theorem}\n\n\\begin{proof} Let $ \\left \\{ f_1, \\ \\theta_1 f_2, \\dots, \\big({\\textstyle \\prod_{k=1}^{m-1}\\theta_k} \\big) f_m \\right \\}$ be the orthonormal basis of $\\mathcal{H}(K_2)$ from Proposition \\ref{prop:productform} and, as in \\eqref{eqn:bwsform1}, let $h_{ij} \\in H_2^2(\\mathbb{D})$ denote the functions satisfying\n\\[ M_{z_1}^*\\left( \\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big) f_j \\right) = \\sum_{i=1}^m \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big) f_i \\, h_{ij}, \\qquad \\text{ for } j=1, \\dots, m.\\]\nWe first show that $h_{ij} \\equiv 0$ whenever $i>j$. Observe that the following subspace of $\\mathcal{K}_{\\Theta}$\n\\[ \\bigoplus_{\\ell=1}^j \\big({\\textstyle \\prod_{k=1}^{\\ell-1}\\theta_k} \\big)\\mathcal{K}_{\\theta_{\\ell}}\\]\nis the two-variable model space associated to the inner function ${\\textstyle \\prod_{k=1}^{j}\\theta_k}$ and hence, is invariant under $M_{z_1}^*$. Thus if $i >j$, the fact that\n\\[M_{z_1}^*\\left( \\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big) f_j \\right) \\in \\bigoplus_{\\ell=1}^j \\big({\\textstyle \\prod_{k=1}^{\\ell-1}\\theta_k} \\big)\\mathcal{K}_{\\theta_\\ell} \\perp \\big({\\textstyle \\prod_{k=1}^{i-1}\\theta_k} \\big)\\mathcal{K}_{\\theta_i}\\]\n implies that $h_{ij}\\equiv 0.$ \nFor the other cases, we will use the identity\n\\begin{equation} \\label{eqn:bwsform} M_{z_1}^* \\left( GH \\right) = H M_{z_1}^*( G) +G(0, z_2)M_{z_1}^*(H),\\end{equation}\nfor any $G,H \\in H^2(\\mathbb{D}^2)$ with $GH \\in H^2(\\mathbb{D}^2)$.
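This identity follows directly from the formula for the backward shift in $z_1$, namely $\\left(M_{z_1}^* F\\right)(z) = \\frac{F(z_1,z_2) - F(0,z_2)}{z_1}$ for $F \\in H^2(\\mathbb{D}^2)$; adding and subtracting $G(0,z_2)H(z)$ in the numerator gives\n\\[ \\frac{G(z)H(z) - G(0,z_2)H(0,z_2)}{z_1} = H(z)\\, \\frac{G(z)-G(0,z_2)}{z_1} + G(0,z_2)\\, \\frac{H(z)-H(0,z_2)}{z_1}.\\]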
Now fix $i=j$ and observe that by \\eqref{eqn:bwsform} and Lemma \\ref{lem:bws},\n\\[ \n\\begin{aligned}\nM_{z_1}^* \\left ( \\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big) f_j \\right) &=\\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big) M_{z_1}^*(f_j) + f_j(0,z_2) M_{z_1}^*\\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big)\\\\\n& =\\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big) f_j \\left( -\\frac{b_j +d_jz_2}{a_j + c_jz_2} \\right) + f_j(0,z_2) M_{z_1}^* \\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big).\n\\end{aligned}\n\\]\nThe second term is in the model space associated to ${\\textstyle \\prod_{k=1}^{j-1}\\theta_k}$ and hence, is orthogonal to $\\big({\\textstyle \\prod_{k=1}^{j-1}\\theta_k} \\big)\\mathcal{K}_{\\theta_j}$. This follows from Proposition $3.5$ in \\cite{b13}, which shows that if $\\phi$ is an inner function on $\\mathbb{D}^2$, then $\\left( M_{z_1}^*\\phi \\right)H^2_2(\\mathbb{D}) \\subseteq \\mathcal{K}_{\\phi}$. Thus, we can conclude that \n\\[ h_{jj}(z_2) = -\\frac{b_j +d_jz_2}{a_j + c_jz_2}.\\]\nLastly, fix $i,j$ with $i<j$. A similar, though lengthier, computation using \\eqref{eqn:bwsform} and Lemma \\ref{lem:bws} handles this case, and altogether we obtain\n\\[ h_{ij}(z_2) = \\left \\{ \\begin{array}{cc}\n\\lambda_i \\left( \\frac{ z_2 - \\tau_{2,i}}{a_i + c_iz_2}\\right) \\lambda_j \\left( \\frac{ z_2 - \\tau_{2,j}}{a_j + c_jz_2}\\right) \\prod_{k=i+1}^{j-1} \\left(\\frac{\\overline{b_k}z_2 + \\overline{d_k}}{a_k + c_k z_2}\\right) & \\text{ if $i<j$}; \\\\\n& \\\\\n-\\frac{b_i +d_iz_2}{a_i + c_iz_2} & \\text{ if $i=j$}; \\\\\n&\\\\ \n0 & \\text{ if $i > j$}.\n\\end{array}\n\\right. \\]\nThen the fact that $M_{\\Theta} = H^*$ gives the desired formula. \\end{proof}\n\nTo make this concrete, we compute several $M_{\\Theta}$ using the formula from Theorem \\ref{thm:nr1}.\n\n\\begin{example} First, let $\\Theta$ be the following degree $(2,2)$ rational inner function:\n\\[ \n\\Theta(z) = \\theta_1(z) \\theta_2(z) = \\left( \\frac{2z_1z_2-z_1-z_2}{2-z_1-z_2} \\right) \\left( \\frac{3z_1z_2-2z_1-z_2}{3-z_1-2z_2} \\right).\\]\nThen $\\tau_{2,1} = \\tau_{2,2}=1$ and we can take $\\lambda_1 = i \\sqrt{2}$ and $\\lambda_2 = i \\sqrt{6}$. Then, by Theorem \\ref{thm:nr1}:\n\\[ M_{\\Theta}(z_2) =\n\\left[ { \\setstretch{3}\n \\begin{array}{cc}\n\\dfrac{1}{2-\\bar{z}_2} & 0 \\\\\n\\dfrac{ -\\sqrt{12}(1-\\bar{z}_2)^2}{(2-\\bar{z}_2)(3-2\\bar{z}_2)} & \\dfrac{1}{3-2\\bar{z}_2} \n\\end{array} }\n\\right].\\]\nThus, $S^1_{\\Theta}$ is unitarily equivalent to the matrix-valued Toeplitz operator with this symbol.\n\\end{example}\n\n\\begin{example} Now, let $\\Theta$ be the following degree $(3,3)$ rational inner function:\n\\[\\Theta(z) = \\theta_1(z) \\theta_2(z)\\theta_3(z) = \\left( \\frac{2z_1z_2-z_1-z_2}{2-z_1-z_2} \\right) \\left( \\frac{3z_1z_2-2z_1-z_2}{3-z_1-2z_2} \\right) \\left(\\frac{3z_1z_2-z_1-z_2-1}{3-z_1-z_2-z_1z_2} \\right).\\]\nThen $\\tau_{2,1} = \\tau_{2,2}=\\tau_{2,3}=1$ and we have $\\lambda_1 = i \\sqrt{2}$, $\\lambda_2 = i \\sqrt{6}$, and $\\lambda_3 = 2i.$ By Theorem \\ref{thm:nr1}:\n\\[ M_{\\Theta}(z_2) =\n\\left[ { \\setstretch{3}\n \\begin{array}{ccc}\n\\dfrac{1}{2-\\bar{z}_2} & 0 & 0 \\\\\n\\dfrac{ -\\sqrt{12}(1-\\bar{z}_2)^2}{(2-\\bar{z}_2)(3-2\\bar{z}_2)} & \\dfrac{1}{3-2\\bar{z}_2} & 0\\\\\n\\dfrac{2 \\sqrt{2}\\, \\bar{z}_2(1-\\bar{z}_2)^2}{(2-\\bar{z}_2)(3-\\bar{z}_2)(3-2\\bar{z}_2)} & \\dfrac{-2\\sqrt{6} (1-\\bar{z}_2)^2}{(3-2\\bar{z}_2)(3-\\bar{z}_2)} & \\dfrac{1+\\bar{z}_2}{3-\\bar{z}_2} \n\\end{array} }\n\\right],\\]\nso $S^1_{\\Theta}$ is unitarily equivalent to the matrix-valued Toeplitz operator with this symbol.\n\nIt is worth pointing out that these $M_{\\Theta}$ are lower triangular (rather than upper triangular like \\eqref{eqn:onevar}) because in our computations, we ordered our bases in a different way than is typically done in the one-variable situation.\n\\end{example}\n\n\n\n\\section{Zero Inclusion Question for the Numerical Range}\n\\label{Zero}\n\n\nIn this section, we study the question of when zero is in the numerical range
associated to a product of two degree $(1,1)$ rational inner functions: Using $\\mathcal{S}_2$ as defined in \\eqref{eqn:decompK} and recalling that $ S^1_{\\Theta} : = P_{\\mathcal{S}_2} M_{z_1}|_{\\mathcal{S}_2}$, we are interested in the question of when zero is in $\\mathcal{W}\\big( S^1_{\\Theta}\\big)$. \n\nWe begin with some notation. Let $\\Theta = \\theta_1 \\theta_2$, where each $\\theta_j$ is a degree $(1,1)$ rational inner function $\\frac{\\tilde{p}_j}{p_j}$ and each $p_j(z) = a_j + b_j z_1 + c_j z_2 + d_j z_1 z_2$ has a zero at $\\tau_j = (\\tau_{1,j}, \\tau_{2,j}) \\in \\mathbb{T}^2$. By Corollary \\ref{thm:nr}, \n\\[ \\text{Clos}\\big(\\mathcal{W}( S^1_{\\Theta})\\big) = \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}(M_{\\Theta}(\\tau))\\Big),\\] \nwhere $M_{\\Theta}$ is the $2 \\times 2$ matrix-valued function given in Theorem~\\ref{thm:nr1}. For our $\\Theta=\\theta_1\\theta_2$, \n\\begin{equation}\\label{eqn:2by2}\nM_\\Theta(z_2) =\n \\begin{bmatrix}\n -\\overline{\\left(\\frac{b_1 + d_1 z_2}{a_1 + c_1 z_2}\\right)} & 0\\\\\n &\\\\\n \\bar{\\lambda}_1 \\bar{\\lambda}_2 \n \\overline{\\left({\\frac{z_2 - \\tau_{2, 1}}{a_1 + c_1 z_2}}\\right)} \\cdot \\overline{\\left({\\frac{z_2 - \\tau_{2,2}}{a_2 + c_2 z_2}}\\right)} & -\\overline{\\left(\\frac{b_2 + d_2 z_2}{a_2 + c_2 z_2}\\right)}\n \\end{bmatrix},\n\\end{equation}\nwhere $\\lambda_j^2= \\bar{a}_j c_j - d_j \\bar{b}_j$, for $j = 1, 2$. In future computations, we let $f_{j, z_2}$ denote $(j,j)$-entry of $M_\\Theta(z_2)$ and let $\\beta_j$ denote the center of the circle $\\mathcal{C}_j := \\{ -\\frac{b_j + d_j z}{a_j + c_j z}: z \\in \\mathbb{T}\\}$ for $j=1,2.$ By Theorem~\\ref{thm:numericalradius}, the numerical radius $w( S^1_{\\Theta} ) =1$ and so, the entries (eigenvalues) $f_{1, z_2}$ and $f_{2, z_2}$ as well as the entire circles $\\mathcal{C}_1$ and $\\mathcal{C}_2$ are in $\\overline{\\mathbb{D}}$. \nFor each $z_2 \\in \\mathbb{T}$, define\n\\[U_{\\alpha_{z_2}} :=\n \\begin{bmatrix}\n 0 & e^{i \\alpha_{z_2} }\n\\\\\n1 & 0\n \\end{bmatrix},\n\\]\nwhere $\\alpha_{z_2} \\in \\mathbb{R}$ is chosen so that \n\\begin{equation} \\label{eqn:pos12} \\widehat{M}_{\\Theta}(z_2): = U_{\\alpha_{z_2}}^* M_\\Theta(z_2) U_{\\alpha_{z_2}} \\end{equation}\nhas positive $(1, 2)$-entry. Since $U_{\\alpha_{z_2}}$ is unitary, $\\widehat{M}_{\\Theta}(z_2)$ and $M_\\Theta(z_2)$ have the same numerical range. We will often apply the Elliptical Range Theorem (see, for example, \\cite{ckli}), which says that the numerical range of a $2 \\times 2$ upper-triangular matrix\n\\begin{equation}\\label{ERT} A =\n \\begin{bmatrix}\n a & c \n\\\\\n 0& b\n \\end{bmatrix}\n\\end{equation}\nis an elliptical disk with foci at $a$ and $b$ and minor axis of length $|c| = \\left({\\mbox{trace} (A^* A) - |a|^2 - |b|^2}\\right)^{1\/2}$. 
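Since the distance between the foci of this ellipse is $|a-b|$, its major axis has length $\\left(|c|^2 + |a-b|^2\\right)^{1\/2}$. For instance, if $a = b = 0$ and $c=1$, the foci coincide at the origin and $\\mathcal{W}(A)$ is the closed disk of radius $\\frac{1}{2}$.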
\nIn particular, the numerical range of $ \\widehat{M}_{\\Theta}(z_2)$, and hence of $M_{\\Theta}(z_2)$, is an elliptical disk with foci\n$f_{1,z_2}$ and $f_{2,z_2}$ and minor axis length:\n \\begin{equation} \\label{eqn:minor_axis} m_{z_2} := |\\lambda_1\\, \\lambda_2| \\left|\\frac{z_2 - \\tau_{2, 1}}{a_1 + c_1 z_2}\\right| \\cdot \\left|\\frac{z_2 - \\tau_{2, 2}}{a_2 + c_2z_2}\\right|.\\end{equation}\n\n\n\n\\subsection{When is $0$ in the numerical range?} Now let us consider the zero inclusion question.\n\n\\begin{proposition}\\label{lem:general} Let $\\Theta = \\theta_1 \\theta_2,$ where each $\\theta_j$ is a degree $(1,1)$ rational inner function $\\frac{\\tilde{p}_j}{p_j}$, where $p_j(z) = a_j + b_j z_1 + c_j z_2 + d_j z_1 z_2$ has a zero at $\\tau_j = (\\tau_{1,j}, \\tau_{2,j}) \\in \\mathbb{T}^2$. If there exists $\\gamma \\in \\mathbb{T} \\setminus \\{\\tau_{2, j}\\}_{j = 1, 2}$ such that either\n \\begin{equation}\\label{item:equal} \n|f_{1, \\gamma}| + |f_{2, \\gamma}| < |1 - \\bar{f}_{1, \\gamma} f_{2, \\gamma}|\n\\end{equation}\nor \n\\begin{equation}\\label{item:notequal} \n|\\bar{\\beta}_j - f_{j, \\gamma}| > |\\bar{\\beta}_j| \\quad \\text{for $j=1$ or $j=2$,} \\end{equation} \nthen $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$. Furthermore, if $a_j \\bar{b}_j - c_j \\bar{d}_j \\in \\mathbb{R}$, then \\eqref{item:notequal} holds if and only if\n\[|\\bar{a}_j d_j - b_j \\bar{c}_j| > |a_j \\bar{b}_j - c_j \\bar{d}_j|.\]\n \\end{proposition}\n \n \\begin{proof} We first perform a general computation for any $z_2 \\in \\mathbb{T}$. By the discussions preceding \\eqref{eqn:minor_axis}, the numerical range of $M_{\\Theta}(z_2)$ is an elliptical disk with foci $f_{1,z_2}$ and $f_{2,z_2}$ and minor axis $m_{z_2}$ given in \\eqref{eqn:minor_axis}.\n We will show that $m_{z_2}$ also satisfies\n \\begin{equation} \\label{eqn:minor_axis2} m_{z_2} = \\sqrt{1 - |f_{1, z_2}|^2}\\sqrt{1 - |f_{2, z_2}|^2}.\n \\end{equation}\nTo this end, observe that\n\[\n{1 - |f_{j, z_2}|^2} = \\frac{|a_j|^2 + |c_j|^2 - |b_j|^2 - |d_j|^2 + (\\overline{a}_j c_j - \\overline{b}_j d_j) z_2 + (a_j \\overline{c}_j - b_j \\overline{d}_j) \\overline{z}_2 }{|a_j + c_j z_2|^2}.\] \nFrom \\eqref{eqn:tau}, we know that each $\\tau_{2,j} = \\frac{1}{\\overline{\\tau_{2,j}}} = - \\frac{|a_j|^2 +|c_j|^2 - |b_j|^2 -|d_j|^2}{2(\\bar{a}_j c_j-\\bar{b}_j d_j)}$. This implies that\n\[{1 - |f_{j, z_2}|^2} = (|a_j|^2 + |c_j|^2 - |b_j|^2 - |d_j|^2) \\frac{1 - \\frac{\\bar{\\tau}_{2, j}}{2} z_2 - \\frac{{\\tau}_{2, j}}{2} \\bar{z}_2}{|a_j + c_j z_2|^2}.\n\]\nSince both $\\tau_{2, j}\\in \\mathbb{T}$ and $z_2 \\in \\mathbb{T}$, and as $|f_{j,z_2}| \\le 1$, we have\n\[ \\begin{aligned}\n1 - |f_{j, z_2}|^2 & = \\frac{(\\bar{a}_j c_j-\\bar{b}_j d_j) \\bar{z}_2 (z_2 - \\tau_{2, j})^2}{|a_j + c_j z_2|^2} \\\\\n& = \\frac{|\\bar{a}_j c_j-\\bar{b}_j d_j| \\, |z_2 - \\tau_{2, j}|^2}{{|a_j + c_j z_2|^2}} =\n\\frac{|\\lambda_j|^2 \\, |z_2 - \\tau_{2, j}|^2}{{|a_j + c_j z_2|^2}},\n\\end{aligned}\n\]\nwhere $\\lambda_j$ is defined as in Theorem~\\ref{thm:nr1}. This proves \\eqref{eqn:minor_axis2}. Then a simple computation using the definition of an ellipse\nshows that the ellipse bounding $\\mathcal{W}(M_{\\Theta}(z_2))$ \nhas major axis of length\n\\begin{equation} \\label{eqn:major} M_{z_2} := |1 - \\overline{f}_{1, z_2} f_{2, z_2}|.\\end{equation}\n\nNow, to establish the first claim, assume there is some $\\gamma \\in \\mathbb{T} \\setminus \\{\\tau_{2, j}\\}_{j = 1, 2}$ satisfying \\eqref{item:equal}. 
One can see that the ellipse bounding $\\mathcal{W}( M_{\\Theta}(z_2))$ is non-degenerate because \\eqref{eqn:minor_axis} implies $m_{\\gamma} \\ne 0.$ Then combining condition~\\eqref{item:equal} with the formula for the major axis \\eqref{eqn:major} immediately gives $0 \\in \\mathcal{W}(M_\\Theta(\\gamma))^0 \\subseteq \\mathcal{W}( S^1_{\\Theta})^0$.\n \nTo establish the second claim, assume there is some $\\gamma \\in \\mathbb{T} \\setminus \\{\\tau_{2, j}\\}_{j = 1, 2}$ such that a focus $f_{j,\\gamma}$ satisfies \\eqref{item:notequal}. Then, since $\\bar{\\beta}_j$ is the center of the circle on which the $f_{j,z_2}$ lie, $|\\bar{\\beta}_j - f_{j, z_2}| > |\\bar{\\beta}_j|$ for all $z_2 \\in \\mathbb{T}$. Thus, the convex hull of the foci contains zero and since each $|\\bar{\\beta_j} - f_{j,z_2}| > 0$, we know that zero lies in $\\mathcal{W}( S^1_{\\Theta})^0$.\n \n\nNow suppose that $a_j \\bar{b}_j - c_j \\bar{d}_j \\in \\mathbb{R}$ and consider the circle $\\{ -\\frac{b_j + d_j z}{a_j + c_j z}: z \\in \\mathbb{T}\\}$ with center $\\beta_j$. If $z \\in \\mathbb{T}$ and $w = -\\frac{b_j + d_j z}{a_j + c_j z}$, then a computation gives\n \\[z = - \\frac{b_j + a_j w}{d_j + c_j w}.\\]\n Since $|z|^2 = 1$, we have\n \\[1 = - \\frac{b_j + a_j w}{d_j + c_j w} \\cdot \\overline{- \\frac{b_j + a_j w}{d_j + c_j w}},\\] \n which implies\n \\[(|a_j|^2 - |c_j|^2)|w|^2 + 2(a_j \\bar{b}_j - c_j \\bar{d}_j) \\Re w + (|b_j|^2 - |d_j|^2) = 0.\\]\n Writing $w = x + i y$, completing the square and computing, we see that the center $\\beta_j$ satisfies\n \\[\\beta_j = -\\frac{a_j \\bar{b}_j - c_j \\bar{d}_j}{|a_j|^2 - |c_j|^2},\\] so $\\beta_j = \\bar{\\beta}_j$ by our assumption that $a_j \\bar{b}_j - c_j \\bar{d}_j$ is real. The radius of the circle is\n \\[\\left|\\frac{|d_j|^2 - |b_j|^2}{|a_j|^2 - |c_j|^2} + \\frac{(a_j \\bar{b}_j - c_j \\bar{d}_j)^2}{(|a_j|^2 - |c_j|^2)^2}\\right|^{1\/2} = \\frac{|a_j \\bar{d}_j - \\bar{b}_j c_j|}{|a_j|^2 - |c_j|^2},\\] where we used the fact that our assumptions imply $|a_j| \\ne |c_j|$. Thus, condition~\\eqref{item:notequal} holds if and only if\n \n \\[|a_j \\bar{d}_j - \\bar{b}_j c_j| > |a_j \\bar{b}_j - c_j \\bar{d}_j|,\\]\n as desired.\n \\end{proof}\n \n\\begin{remark} In the first part of Proposition~\\ref{lem:general}, when zero lies in the interior of a single ellipse, we can say more if the foci $f_{1, \\gamma}$ and $f_{2, \\gamma}$ lie on a line through the origin. First, if the line segment joining the foci contains the origin in its interior, then condition~\\eqref{item:equal} implies that the ellipse is nondegenerate and zero immediately lies in the interior of the ellipse. A similar argument can be made if one or both of the foci is zero.\n \n \nIf the foci lie on a line that passes through the origin and are in the same quadrant, we can write $f_{1, \\gamma} = r_1 e^{i \\phi}$ and $f_{2, \\gamma} = r_2 e^{i \\phi}$ with $r_1, r_2 > 0$. Since the numerical radius is at most $1$, we know that $r_j \\le 1$ for $j = 1, 2$. If either $r_1 = 1$ or $r_2 = 1$, then condition \\eqref{item:equal} cannot hold. 
Thus $0 \\le r_j < 1$ for $j = 1, 2$ and condition \\eqref{item:equal} holds if and only if\n $r_1 + r_2 < 1 - r_1 r_2$, which happens precisely when \n $r_1 < \\frac{1 - r_2}{1 + r_2}.$\n \\end{remark}\n\n\nBefore proceeding further, we require the following lemma:\n\n\\begin{proposition}\\label{prop:elementary} Let $\\Theta = \\frac{\\tilde{p}}{p}$ be a rational inner function on $\\mathbb{D}^2$, where $p(z) = a + bz_1 + cz_2$ is a polynomial with a zero on $\\mathbb{T}^2$. Then $|a| = |b| + |c|$. \\end{proposition}\n\n\\begin{proof} \nSince $\\Theta$ is holomorphic, the polynomial $p$ does not vanish inside $\\mathbb{D}^2$. If $|a| < |b| + |c|$, we could choose $z_1$ and $z_2$ to make $p$ vanish in $\\mathbb{D}^2$, so this is impossible. Thus, we know that $|a| \\ge |b| + |c|$. But $p$ has a zero $(\\tau_1, \\tau_2)$ on $\\mathbb{T}$. Thus, $a = -b \\tau_1 - c \\tau_2$ and so $|a| \\le |b| + |c|$. Combining these two inequalities, we obtain $|a| = |b| + |c|$.\n\\end{proof}\n\n\nIn the following proposition, we restrict to the situation where $\\Theta = \\theta_1 \\theta_2$ and each $\\theta_j=\\frac{\\tilde{p}_j}{p_j}$ with $p_j(z) = a_j + b_j z_1 + c_j z_2$. We further require that $b_j <0$ and $a_j, c_j >0$. Given these assumptions, one can divide through by $|b_j|$ and automatically assume $b_j =-1.$ \n\nWe can now answer the zero inclusion question using the coefficients of the polynomials defining $\\Theta$ as follows:\n \n\\begin{proposition}\\label{lem:positive_eigenvalues} Let $\\Theta = \\theta_1 \\theta_2,$ where each $\\theta_j$ is a degree $(1,1)$ rational inner function $\\frac{\\tilde{p}_j}{p_j}$ where $p_j(z) = a_j - z_1 + c_j z_2$ has a zero at $\\tau_j = (\\tau_{1,j}, \\tau_{2,j}) \\in \\mathbb{T}^2$ and $a_j, c_j >0.$ Then\n\n\\begin{enumerate}\n\\item[$\\bullet$] $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$ if and only if $c_1 c_2 > \\frac{1}{2}$;\n\\item[$\\bullet$] $0 \\in \\partial \\mathcal{W}( S^1_{\\Theta})$ if and only if $c_1 c_2 = \\frac{1}{2}$.\n\\end{enumerate}\n\\end{proposition}\n\n\n\n\n\\begin{proof} First observe that \\eqref{eqn:zeros} and our assumptions on the coefficients $a_j, c_j$ imply that $\\frac{1}{a_j - c_j } = 1$ and $\\tau_{2,j} = -1$ for $j=1,2$. Corollary ~\\ref{thm:nr} implies that\n\\[ \\text{Clos}\\left( \\mathcal{W}( S^1_{\\Theta}) \\right) = \\text{Conv} \\Big( \\bigcup_{z \\in \\mathbb{T}} \\mathcal{W}(M_\\Theta(z)) \\Big) = \\text{Conv} \\Big( \\bigcup_{z \\in \\mathbb{T}} \\mathcal{W}(M_\\Theta(\\bar{z})) \\Big),\\]\nand to simplify notation, we will often work with $M_\\Theta(\\bar{z})$. Observe that the circles of foci $\\left\\{\\frac{1}{a_j + c_j z}: \\bar{z} \\in \\mathbb{T} \\right\\}$ lie in $\\mathcal{W}( S^1_{\\Theta})$ and cannot contain $\\infty$ since $ S^1_{\\Theta}$ is a contraction. The circles pass through the points $\\frac{1}{a_j + c_j} \\in \\mathbb{R},$ when $z = 1$, and $\\frac{1}{a_j - c_j} = 1,$ when $z = -1$. \n\nNow we show that $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$ if and only if $c_1 c_2 > \\frac{1}{2}.$ As pointed out after \\eqref{eqn:pos12}, $M_\\Theta(\\bar{z})$ has the same numerical range as \n\\[\n\\widehat{M}_{\\Theta}(\\bar{z}) =\n \\begin{bmatrix}\n \\frac{1}{a_2 + c_2 z}& \\sqrt{|a_1 a_2 c_1 c_2|} \\frac{|z +1|^2}{|a_1 + c_1 z|\\,|a_2 + c_2 z|}\\\\\n 0 & \\frac{1}{a_1 + c_1 z}\n \\end{bmatrix}\n\\]\nand so, we work with $\\widehat{M}_{\\Theta}(\\bar{z})$. 
In particular $H(M_\\Theta(\\bar{z})):= \\frac{1}{2} \\left(\\widehat{M}_{\\Theta}(\\bar{z}) + \\widehat{M}_{\\Theta}(\\bar{z})^*\\right)$ is a Hermitian matrix and therefore its numerical range is a real line segment. The endpoints are the minimum and maximum eigenvalues of $H(M_\\Theta(\\bar{z}))$, see \\cite[p.12]{HJ} or \\cite{K08}. Furthermore, $\\mathcal{W}(H(M_\\Theta(\\bar{z})))$ is the projection of $\\mathcal{W}\\big(\\widehat{M}_{\\Theta}(\\bar{z})\\big)$ and hence, of $\\mathcal{W}(M_\\Theta(\\bar{z}))$, onto the real axis. We now study the eigenvalues of $H(M_\\Theta(\\bar{z}))$, which give the minimum and maximum real parts of the elements in $\\mathcal{W}(M_\\Theta(\\bar{z}))$.\n\n\n\nFirst, the trace of $H(M_\\Theta(\\bar{z}))$, which is the sum of the two eigenvalues of $H(M_\\Theta(\\bar{z}))$, equals\n\\[\\frac{a_1 + c_1 \\Re z}{|a_1 + c_1 z|^2}+\\frac{a_2 + c_2 \\Re z}{|a_2 + c_2 z|^2} >0,\\] since $a_j - c_j = 1$. This shows that at least one eigenvalue of $H(M_\\Theta(\\bar{z}))$ is positive. Then, the minimum eigenvalue will be negative if and only if $\\det(H(M_\\Theta(\\bar{z})) < 0$. In this case, we have\n\\[\\det \\left(H(M_\\Theta(\\bar{z}))\\right) =\n\\det\\begin{bmatrix}\n \\frac{a_2 + c_2 \\Re z}{|a_2 + c_2 z|^2}& \\frac{\\sqrt{|a_1 a_2 c_1 c_2|}}{2} \\left(\\frac{|z +1|^2}{|a_1 + c_1 z|\\,|a_2 + c_2 z|}\\right)\\\\\n &\\\\\n\\frac{\\sqrt{|a_1 a_2 c_1 c_2|}}{2} \\left(\\frac{|z +1|^2}{|a_1 + c_1 z|\\,|a_2 + c_2 z|}\\right) & \\frac{a_1 + c_1 \\Re z}{|a_1 + c_1 z|^2}\n \\end{bmatrix}.\n\\]\nLet $x = \\Re z$. Then some $H(M_\\Theta(\\bar{z}))$ will have a negative eigenvalue if and only if there exists $x \\in (-1, 1]$ with\n\\[f(x) = (1 + c_1 + c_1 x)(1 + c_2 + c_2 x) - ((c_1+c_1^2)(c_2 + c_2^2))(1+ x)^2 < 0.\\]\nThe two zeros of $f$ occur at\n$$\\frac{1}{c_1 c_2} - 1 ~\\mbox{and}~ -1 - \\frac{1}{c_1 + c_2 + c_1 c_2}.$$\nThus, $f$ has a zero between $-1$ and $1$ if and only if $c_1 c_2 > \\frac{1}{2}$ and $f$ will be negative at some point $x \\in (-1, 1)$ if and only if one zero lies strictly between $-1$ and $1$. Therefore:\n\n\\begin{enumerate}\n\\item \\label{item:1} If $c_1 c_2 < \\frac{1}{2}$, then there is no such value of $x$. This implies that for each $z \\in \\mathbb{T}$, the matrix $H(M_\\Theta(\\bar{z}))$ has only positive eigenvalues and so, $\\mathcal{W}(M_\\Theta(\\bar{z})) \\subseteq \\{x + i y: x > 0\\}$. From this, we can conclude that $0 \\notin \\mathcal{W}( S^1_{\\Theta})^0$.\n\n\\smallskip\n\\item If $c_1 c_2 > \\frac{1}{2}$, then $f$ is negative at some point strictly between $\\frac{1}{c_1 c_2} - 1$ and $1$. Therefore, for some $z_0 \\in \\mathbb{T}$ (with $z_0 \\ne \\pm 1$) one eigenvalue of $H(M_\\Theta(\\bar{z}_0))$ is positive and one is negative. Thus, $\\mathcal{W}(M_\\Theta(\\bar{z}_0))$ contains a point $\\lambda_{z_0}$ with negative real part. \n\n\nRecall that the numerical range of any $M_\\Theta(z)$ is the elliptical disk with foci at $ \\frac{1}{a_j + c_j \\bar{z}}$ and minor axis of length \n\\[ \\sqrt{|a_1 a_2 c_1 c_2|} \\frac{|z +1|^2}{|a_1 + c_1 z|\\,|a_2 + c_2 z|}.\\]\nThis implies that $\\mathcal{W}(M_\\Theta(z_0))$ is the reflection of $\\mathcal{W}(M_\\Theta(\\bar{z}_0))$ across the $x$-axis and thus, $\\bar{\\lambda}_{z_0} \\in \\mathcal{W}( S^1_{\\Theta})$. 
If $\\lambda_{z_0} \\notin \\mathbb{R}$, the triangle joining $\\lambda_{z_0}, \\bar{\\lambda}_{z_0}$ and $1$ must be contained in $\\mathcal{W}( S^1_{\\Theta})$, which implies $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$.\n\nNow let $\\lambda_{z_0} \\in \\mathbb{R}$. By assumption, we also have $\\lambda_{z_0}$ negative.\nBy earlier arguments, the circle $\\big\\{\\frac{1}{a_1 + c_1 z}: z \\in \\mathbb{T}\\big\\} \\subset \\mathcal{W}( S^1_{\\Theta})$. This circle passes through the points $1$ and $\\frac{1}{a_1 + c_1}$ so it contains points in the first and fourth quadrants. Denote two such points by $\\lambda_I$ and $\\lambda_{IV}$. Then, the triangle joining $\\lambda_{z_0}$, $\\lambda_I$, and $\\lambda_{IV}$ is contained in the numerical range and so $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$.\n\n\\smallskip\n\n \n\\item If $c_1 c_2 = \\frac{1}{2}$, then $f(x) > 0$ for all $x \\in (-1, 1)$ and there are no values in any $\\mathcal{W}(M_\\Theta(\\bar{z}))$ with negative real part; i.e., $\\bigcup_{z \\in \\mathbb{T}} \\mathcal{W}(M_\\Theta(\\bar{z})) \\subseteq \\{z \\in \\mathbb{C}: z = x + i y, x \\ge 0\\}$. On the other hand, if we consider $z = 1$, we can see that zero satisfies the equation \n\[\\left|0 - \\frac{1}{a_1 + c_1}\\right| + \\left|0 - \\frac{1}{a_2 + c_2}\\right| = \\left|1 - \\left(\\frac{1}{a_1 + c_1}\\right)\\left(\\frac{1}{a_2 + c_2}\\right)\\right|.\] Thus $0 \\in \\mathcal{W}(M_\\Theta(1))$ and therefore $0 \\in \\partial \\mathcal{W}( S^1_{\\Theta})$.\n \\end{enumerate}\n\nFrom these arguments, we know that if $c_1 c_2 > \\frac{1}{2}$, then $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$ and if $c_1 c_2 < \\frac{1}{2}$, then $0 \\notin \\mathcal{W}( S^1_{\\Theta})^0$. Furthermore, if $c_1 c_2 = \\frac{1}{2}$, then $0 \\in \\partial \\mathcal{W}( S^1_{\\Theta})$. Thus, we have proven most of Proposition~\\ref{lem:positive_eigenvalues}. It just remains to show that if $0 \\in \\partial \\mathcal{W}( S^1_{\\Theta})$, then $c_1 c_2 = \\frac{1}{2}$.\n\n\nAssume $0 \\in \\partial \\mathcal{W}( S^1_{\\Theta})$. If $c_1 c_2 > \\frac{1}{2}$, then $0 \\in \\mathcal{W}( S^1_{\\Theta})^0$, a contradiction. If \n$c_1 c_2 < \\frac{1}{2}$, the zeros of $f$ are at \n\[\\frac{1}{c_1 c_2} - 1 >1 ~\\mbox{ and }~ -1 - \\frac{1}{c_1 + c_2 + c_1 c_2} <-1.\]\n Since $f(x) = \\alpha_1 + \\alpha_2 x + \\alpha_3 x^2$ with $\\alpha_3 = c_1 c_2 - c_1 c_2 (1+c_1)(1+c_2) < 0$, the minimum value of $f$ on $[-1,1]$ must be either $f(-1) = 1$ or \[f(1) = (1 + 2c_1)(1 + 2c_2) - 4 c_1 c_2(1+c_1)(1+c_2) = (1 - 4 c_1^2 c_2^2)+2c_1(1 - 2c_1 c_2)+2c_2(1-2c_1c_2) > 0.\] \nNow define the quantity\n \[m:=\\min \\left \\{f(-1), f(1) \\right \\} > 0,\]\nso that $f(x) \\ge m$ for all $x \\in [-1,1]$, since $f$ is concave. Fix $z \\in \\mathbb{T}$, set $x = \\Re z$, and let $\\lambda_1 \\le \\lambda_2$ be the two eigenvalues of $H(M_\\Theta(\\bar{z}))$. Since $ S^1_{\\Theta}$ is a contraction, we know $\\lambda_1, \\lambda_2 \\le 1.$ Moreover, the determinant computed above satisfies\n\[ \\det \\left( H(M_\\Theta(\\bar{z}))\\right) = \\frac{f(x)}{|a_1 + c_1 z|^2 \\, |a_2 + c_2 z|^2} > 0,\]\nwhich, together with the positive trace, gives $0 < \\lambda_1 \\le \\lambda_2 \\le 1$. Since $|a_j + c_j z| \\le a_j + c_j$, we can conclude that \n \[ \\lambda_1 = \\frac{ \\det \\left( H(M_\\Theta(\\bar{z}))\\right) }{\\lambda_2} \\ge \\det \\left( H(M_\\Theta(\\bar{z}))\\right) \\ge \\frac{m}{(a_1+c_1)^2(a_2+c_2)^2} =: m' > 0.\]\nThis immediately implies that for each $z \\in \\mathbb{T}$, we have $\\mathcal{W}\\left(M_{\\Theta}(\\bar{z})\\right) \\subseteq \\{x + i y: x \\ge m' \\}$ and zero cannot lie in the convex hull of the union of these sets. 
So, if zero lies in the boundary of the numerical range, then $c_1 c_2 = \\frac{1}{2}$.\n\\end{proof}\n\n\n\\section{Boundary of the Numerical Range} \\label{sec:boundary}\n\n\\subsection{Initial Reductions and Formulas}\nWe now analyze the boundary of $\\mathcal{W}( S^1_{\\Theta})$ or equivalently, the boundary of $\\text{Clos} (\\mathcal{W}( S^1_{\\Theta})),$ for a special class of rational inner functions. Specifically, let $\\Theta = \\theta^2_1$, where $\\theta_1$ has a zero on $\\mathbb{T}^2$ and $\\theta_1 = \\frac{\\tilde{p}}{p}$ for $p(z)=a -z_1 + c z_2$ with $a,c \\ne 0$. The following remark shows that, without further loss of generality, we can assume $a, c >0.$\n\n\\begin{remark} Assume $\\theta_1 = \\frac{\\tilde{p}}{p}$ for some $p(z)=a -z_1 + c z_2$ and set $\\Theta = \\theta_1^2.$ Then by Corollary~\\ref{thm:nr},\n\[\\text{Clos}\\left( \\mathcal{W} \\left( S^1_{\\Theta}\\right) \\right) = \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}(M_{\\Theta}(\\tau))\\Big),\] where $M_{\\Theta}$ is the $2\\times 2$ matrix-valued function from Theorem \\ref{thm:nr1}. Now write $a = |a| e^{i \\theta}$, $c = |c| e^{i \\psi}$, and $w = e^{i (\\psi - \\theta)}z$ and observe that \\eqref{eqn:zeros} implies that $\\tau_2 = - e^{i (\\theta - \\psi)}$. With these substitutions, $M_{\\Theta}^*$ changes from\n\[\nM^*_\\Theta(z) =\n \\begin{bmatrix}\n \\frac{1}{a + c z}& \\bar{a}c \\frac{(z -\\tau_2)^2}{(a + c z)^2}\\\\\n 0 & \\frac{1}{a + c z}\n \\end{bmatrix}\n\\ \\text{ to } \\ \n\\widetilde{M}_{\\Theta}^*(w)=\n e^{-i\\theta} \\begin{bmatrix}\n \\frac{1}{|a| + |c|w}& |ac|e^{-i\\psi} \\frac{(w +1)^2}{(|a| + |c| w)^2}\\\\\n 0 & \\frac{1}{|a| + |c| w}\n \\end{bmatrix}.\n\]\nSince, when computing numerical ranges, the variables $z$ and $w$ above will take on all values in $\\mathbb{T}$, we can conclude \n\[\\text{Clos}\\left( \\mathcal{W} \\left( S^1_{\\Theta}\\right) \\right) = \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}(M_{\\Theta}(\\tau))\\Big) = \\text{Conv}\\Big( \\bigcup_{\\tau \\in \\mathbb{T}} \\mathcal{W}(\\widetilde{M}_{\\Theta}(\\tau))\\Big).\]\nThus, if we set $q(z) = |a| -z_1 + |c| z_2$ and $\\phi_1 = \\frac{\\tilde{q}}{q}$ and define $\\Phi = \\phi_1^2$, then $\\text{Clos} (\\mathcal{W}( S^1_{\\Phi}))$ equals $\\text{Clos} (\\mathcal{W}( S^1_{\\Theta}))$.\n\\end{remark}\n\nHenceforth, we assume that $p(z)= a -z_1 +cz_2$ where $a, c >0$. By \\eqref{eqn:zeros} and Proposition~\\ref{prop:elementary}, this forces $\\tau_2=-1$ and $a-c=1$. Furthermore, by the Elliptical Range Theorem, the boundary of each $\\mathcal{W}(M_{\\Theta}(\\tau))$ is a circle with center $c_{\\tau}:=\\frac{1}{a + c \\bar{\\tau}}$ and radius $r_{\\tau}$ equal to half the modulus of the $(2,1)$-entry of $M_{\\Theta}(\\tau)$. Thus, we need to understand a family of circles. 
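This family of circles can already be sampled numerically. The short Python sketch below is a sanity check only; it assumes \texttt{numpy}, and the values $a=2$, $c=1$ are purely illustrative and satisfy $a-c=1$. It builds $M_{\Theta}(\tau)$ as the adjoint of the matrix $M^*_{\Theta}$ displayed in the remark above and confirms that the support function of $\mathcal{W}(M_{\Theta}(\tau))$ matches that of the disk with the stated center $c_{\tau}$ and radius $r_{\tau}$.
\begin{verbatim}
import numpy as np

a, c = 2.0, 1.0        # illustrative values with a - c = 1

def M_theta(tau):
    # Adjoint of M_Theta^*(tau) for Theta = theta_1^2, p(z) = a - z_1 + c z_2, tau_2 = -1.
    Mstar = np.array([[1 / (a + c * tau), a * c * (tau + 1) ** 2 / (a + c * tau) ** 2],
                      [0.0, 1 / (a + c * tau)]])
    return Mstar.conj().T

def support(A, phi):
    # Support function of W(A): largest eigenvalue of Re(e^{-i phi} A).
    B = np.exp(-1j * phi) * A
    return np.linalg.eigvalsh((B + B.conj().T) / 2).max()

phis = np.linspace(0, 2 * np.pi, 200)
for t in np.linspace(0.2, 2 * np.pi, 6, endpoint=False):
    tau = np.exp(1j * t)
    center = 1 / (a + c * np.conj(tau))
    radius = 0.5 * a * c * abs(tau + 1) ** 2 / abs(a + c * tau) ** 2
    err = max(abs(support(M_theta(tau), p) - ((center * np.exp(-1j * p)).real + radius))
              for p in phis)
    print(f"angle of tau = {t:.2f}: discrepancy {err:.1e}")    # expect ~ 1e-16
\end{verbatim}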
For later computations, we require the following alternate parameterization.\n\n\\begin{remark} \\label{rem:circles} The set of circles $\\left \\{ \\partial \\mathcal{W}\\big( M_{\\Theta}(\\tau)\\big) \\right\\}_{\\tau \\in \\mathbb{T}}$ is equal to the set of circles $\\{ \\mathcal{C}_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$, where each $\\mathcal{C}_{\\theta}$ has center and radius given by\n\\begin{equation}\n\\label{eqn:circle} c(\\theta) := \\frac{ a + ce^{i \\theta}}{a+c} \\ \\ \\text{ and } \\ \\ \nr(\\theta) := \\frac{ac}{(a+c)^2}\\left(1- \\cos \\theta\\right).\\end{equation}\nTo see this, define the Blaschke factor\n\[ B(z): = - \\frac{ \\frac{c}{a} +z}{1 + \\frac{c}{a} z } = - \\frac{ c +a z }{a +c z}.\]\nThen $B$ maps $\\mathbb{T}$ one-to-one and onto itself. Now fix $\\tau \\in \\mathbb{T}$, set $\\lambda = B(\\bar{\\tau}) \\in \\mathbb{T}$, and choose $\\theta$ to be the unique\nangle in $[0, 2\\pi)$ with $\\lambda = e^{i\\theta}.$ Observe that \n\[ a + c \\lambda = \\frac{ a ( a +c \\bar{\\tau}) -c(c +a \\bar{\\tau})}{a + c\\bar{\\tau}} = \\frac{(a+c)(a-c)}{a + c \\bar{\\tau}} = \\frac{ a+c}{a+c\\bar{\\tau}}, \] \nwhere we used $a-c=1$. Then the center of $\\partial \\mathcal{W}\\big( M_{\\Theta}(\\tau))$ is \n\[ c_{\\tau} = \\frac{1}{a+c \\bar{\\tau}} \\cdot \\frac{ a+c\\lambda}{a+c\\lambda} = \\frac{a+c\\lambda}{a+c} = c(\\theta).\]\nTo consider the radius, first observe that since $\\lambda \\in \\mathbb{T}$, we have $2(1-\\cos \\theta ) = \\left | 1 - \\lambda \\right|^2.$ Moreover\n\[ \\left | 1 - \\lambda \\right|^2 = \\left | 1 + \\frac{ c +a \\bar{\\tau} }{a +c \\bar{\\tau}} \\right|^2 = (a+c)^2 \\left |\\frac{ 1 + \\bar{\\tau}}{a + c\\bar{\\tau}} \\right |^2. \]\nUsing that equation, we can write the radius of $\\partial \\mathcal{W}\\big( M_{\\Theta}(\\tau))$ as\n\[ r_{\\tau} = \\frac{ac}{2} \\left | \\frac{ 1 + \\bar{\\tau}}{a + c \\bar{\\tau}}\\right|^2 = \\frac{ac\\left | 1 - \\lambda \\right|^2}{2(a+c)^2} = \\frac{ac(1 -\\cos \\theta)}{(a+c)^2} = r(\\theta), \]\nwhich proves the claim.\n\\end{remark}\n\n\\subsection{Circular Numerical Ranges}\n\nIn the one-variable situation, if $B$ is a degree-$2$ Blaschke product, then the numerical range of $S_B$ is a circular disk if and only if the two zeros of $B$ are the same. One might conjecture that a similar statement should hold in two variables, namely if $\\Theta = \\theta_1^2$, then Clos$(\\mathcal{W}( S^1_{\\Theta}))$ is circular. In this section, we show this is not the case.\n\nNow fix $\\tau \\in \\mathbb{T}$. Then $\\partial \\mathcal{W}(M_{\\Theta}(\\tau))$ is a circle with radius\n\[r_{\\tau} = \\frac{ac}{2} \\frac{|\\bar{\\tau} + 1|^2}{|a + c\\bar{\\tau}|^2} = ac \\frac{1 + x}{(a + c x)^2 + c^2(1-x^2)},\] \nwhere $\\tau = x + i y$. One can check that $r_{\\tau}$ increases as $x$ increases. Therefore, the maximum and minimum values of $r_{\\tau}$\noccur when $\\tau = 1$ and $\\tau = -1$, respectively. Now consider the alternate formulas given in \\eqref{eqn:circle}. First, since the centers are exactly the points\n\[ c(\\theta) = \\frac{a+ce^{i\\theta} }{a+c} = \\left( \\frac{a}{a+c} + \\frac{c}{a+c}\\cos \\theta, \\ \\frac{c}{a+c}\\sin \\theta\\right),\]\n$c(\\theta)$ and $c(2\\pi-\\theta)$ are reflections of each other across the real axis. Moreover, \\eqref{eqn:circle} also implies that $r(\\theta) = r(2\\pi-\\theta)$ and so\n the set of circles $\\{\\mathcal{C}_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$ is symmetric with respect to the real axis. 
This immediately implies Clos$(\\mathcal{W}( S^1_{\\Theta}))$ must also be symmetric with respect to the real axis.\n\nThus, if Clos$(\\mathcal{W}( S^1_{\\Theta}))$ were circular, the real line would contain the diameter. Furthermore, the value $\\frac{1}{a-c} = 1$ obtained when $\\tau = -1$ is in the numerical range and the numerical radius is $1$. So $1$ is the maximum value on the real axis. The smallest value on the real axis occurs when $\\tau= 1$ or equivalently, when $\\theta=\\pi.$ Then \\eqref{eqn:circle} shows that \nthe center $c_{1}= \\frac{1}{a+c}$ is real and has real part smaller than any other $c_{\\tau}$. Similarly, the radius $r_{1} =\\frac{2ac}{(a+c)^2}$ is maximal and so, the smallest value of Clos$(\\mathcal{W}( S^1_{\\Theta}))$ on the real axis is\n \\[\n \\frac{1}{a + c} -\\frac{2 ac}{(a + c)^2}= \\frac{a + c - 2 a c}{(a+c)^2}.\n \\] \nThus, these are the extreme real values of the numerical range, and if the numerical range were circular, they would be the endpoints of a diameter. Then the center of the circle would be the point $(\\alpha, 0)$, with $\\alpha$ given by\n\\begin{align}\n \\alpha &:= \\frac{1}{2}\\left(1 + \\frac{a + c - 2 a c}{(a+c)^2}\\right) \n= \\frac{a^2 + a + c(1+c)}{2(a+c)^2}\n= \\frac{a^2 + a + (a-1)a}{2(a + c)^2} = \\frac{a^2}{(a+c)^2},\n\\end{align}\nwhere we used $a=c+1$. Similarly, the radius $r$ would be\n\\begin{align*}\nr &: =\\frac{1}{2}\\left(1 - \\frac{a + c - 2 a c}{(a+c)^2}\\right) = \\frac{a^2 - a + 4ac + c^2 - c}{2(a+c)^2} \\\\\n&=\\frac{4 a c + c^2 - c+(c+1)^2-(c+1)}{2(a + c)^2}= \\frac{2 a c + c^2}{(a + c)^2}. \n\\end{align*}\nWe can now find a point $Q$ that is in the numerical range but is not in that circle. Specifically, consider $\\theta = \\frac{\\pi}{2}$\nand using \\eqref{eqn:circle}, define the point $Q$ by\n\\[Q := c\\Big(\\tfrac{\\pi}{2}\\Big) + r\\Big(\\tfrac{\\pi}{2}\\Big) e^{i\\frac{\\pi}{2}} = \\left(\\frac{a}{a + c}, \\frac{c^2 + 2ac}{(a+c)^2}\\right),\\]\nwhich is on $\\mathcal{C}_{\\frac{\\pi}{2}}$ and hence is in Clos$(\\mathcal{W}( S^1_{\\Theta}))$. If the numerical range were circular, the point $Q$ would lie in or on the circle bounding the numerical range with center $(\\alpha,0)$ and radius $r$. Computing the distance from $Q$ to the center gives\n\\[\n~\\mbox{dist}^2(Q,(\\alpha, 0)) = \\left(\\frac{a}{a+c} - \\frac{a^2}{(a+c)^2}\\right)^2+\\frac{(c^2 + 2 ac)^2}{(a + c)^4} = \\frac{(ac)^2}{(a+c)^4}+\\frac{(c^2+2ac)^2}{(a+c)^4}.\\] \nFor $Q$ to be in the circle, we must have \n\\[~\\mbox{dist}^2(Q,(\\alpha, 0)) \\le r^2 = \\frac{(2 a c+c^2)^2}{(a + c)^4},\\] \nwhich is impossible.\nThus, Clos$(\\mathcal{W}( S^1_{\\Theta}))$ cannot be circular.\n\n\n\\subsection{The Boundary of the Numerical Range}\n\nThe goal of this section is to prove the following theorem:\n\n\\begin{theorem} \\label{thm:boundary} Let $\\Theta = \\theta^2_1$ be a degree $(2,2)$ rational inner function, where $\\theta_1 = \\frac{\\tilde{p}}{p}$ for a polynomial $p(z)=a -z_1 + c z_2$ with no zeros on $\\mathbb{D}^2$, a zero on $\\mathbb{T}^2$, and $a, c >0$. 
Then the boundary of $\\mathcal{W}( S^1_{\\Theta})$ is given by the curve $E = (x(\\theta), y(\\theta))$ where \n\\[ \\begin{aligned}\nx(\\theta) &= \\frac{a + c \\cos \\theta}{a+c} + \\frac{ac(1-\\cos \\theta)}{(a+c)^2} \\cos\\left ( \\theta - \\text{\\emph{arcsin}}\\left( \\frac{a}{a+c} \\sin \\theta \\right) \\right) \\\\\n y(\\theta) &= \\frac{c \\sin \\theta}{a+c} + \\frac{ac(1-\\cos \\theta)}{(a+c)^2} \\sin\\left ( \\theta - \\text{\\emph{arcsin}}\\left( \\frac{a}{a+c} \\sin \\theta \\right) \\right),\n\\end{aligned}\n\\]\nfor $\\theta \\in [0, 2\\pi)$. \n\\end{theorem} \n\nWe prove Theorem \\ref{thm:boundary} using the theory of envelopes of families of curves. The proof takes a bit of work, so we break it into sections.\n\n\\subsubsection{Introduction to Envelopes} Let $f(x,y,\\theta)=0$ be a family of (distinct) curves parameterized by $\\theta$.\nOne may think of the {\\it envelope} $E$ of a family of curves as a curve that is tangent to each member of the family. There are several competing definitions for the notion of an envelope, one of which is the curve that satisfies the {\\it envelope algorithm} that we describe below. We take that as our definition, noting that in this case, the standard ways of thinking about envelopes agree. A discussion of these notions can be found in Courant \\cite[p. 171]{Courant}. We also refer readers interested in envelopes to \\cite{kalman07}. \n\nAssume the family of curves $f(x,y,\\theta)=0$ satisfies $f_x^2 + f_y^2 \\ne 0.$ Let $E$ be a curve parameterized as $(x(\\theta), y(\\theta))$ where $x(\\theta)$ and $y(\\theta)$ are continuously differentiable functions. Then we say that $E$ satisfies the {\\it envelope algorithm} if the points on $E$ satisfy the equations\n\\begin{equation} \\label{eqn:econd} f(x, y, \\theta) = 0 \\text{ and } f_{\\theta}(x, y, \\theta) = 0 \\end{equation} \nand the functions $x(\\theta)$ and $y(\\theta)$ satisfy \n\\begin{equation}\\label{derivative_theta} \\left(\\tfrac{dx}{d\\theta}\\right)^2 + \\left(\\tfrac{dy}{d\\theta}\\right)^2 \\ne 0.\n\\end{equation}\nAn alternate way to compute an envelope $E$ involves using intersections of the curves $f(x,y,\\theta)=0$ associated to different $\\theta$. For this method, assume an envelope $E$ exists and can be parameterized as $(x(\\theta), y(\\theta))$ for $x(\\theta), y(\\theta)$ continuously differentiable functions satisfying \\eqref{derivative_theta}. Then, fix $h$ and $\\theta$ and locate the intersection point of the curves $f(x, y, \\theta+h) = 0$ and $f(x, y, \\theta) = 0$; call this point $p_{h,\\theta}.$ Then $p_{\\theta}:= \\lim_{h \\rightarrow 0} p_{h, \\theta}$ gives the point on the envelope $E$ tangent to the curve $f(x,y,\\theta)=0$.\n\n\\subsubsection{Notation and Summary.} \nTo study $\\mathcal{W}( S^1_{\\Theta})$, Corollary \\ref{thm:nr} implies that we need to study the family of circles $\\{ \\partial \\mathcal{W}(M_{\\Theta}(\\tau))\\}_{\\tau \\in \\mathbb{T}}$. 
By Remark \\ref{rem:circles}, it is equivalent to consider the family of circles $\\{ \\mathcal{C}_{\\theta} \\}_{\\theta \\in [0, 2\\pi)}$, where\neach $\\mathcal{C}_{\\theta}$ has center and radius given by\n\\[ \n\\begin{aligned}\nc(\\theta) &= c_1(\\theta) + i c_2(\\theta) = \\frac{ a+c \\cos \\theta}{a+c} +i \\frac{c \\sin \\theta}{a+c}; \\\\\n r(\\theta) &= \\frac{ac}{(a+c)^2}\\left(1- \\cos \\theta\\right).\n \\end{aligned}\n \\]\n To align with the envelope notation, observe that the family of circles $\\{C_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$ is also the set of curves satisfying $f(x,y,\\theta)=0$ for\n\\begin{equation} \\label{eqn:Ctheta} f(x,y,\\theta) = \\left( x- c_1(\\theta) \\right)^2 + \\left( y- c_2(\\theta) \\right)^2 - r(\\theta)^2, \\quad \\theta \\in [0, 2\\pi). \\end{equation}\nFor each $\\theta \\in [0, 2\\pi)$, let $\\mathcal{D}_{\\theta}$ denote\n the open disk with boundary $\\mathcal{C}_{\\theta}$. Let $\\mathcal{C}: = \\{c(\\theta) : \\theta \\in [0, 2\\pi)\\} $ denote the circle of centers of the \n $\\mathcal{C}_{\\theta}$ and let $\\mathcal{D}$ denote the open disk with boundary $\\mathcal{C}.$ \n Set $\\Omega = \\mathcal{D} \\cup \\bigcup_{\\theta \\in [0, 2\\pi)} \\mathcal{D}_{\\theta}$ and let $\\mathcal{B}$ denote the boundary of $\\Omega.$ Then the closure of the numerical \n range $\\mathcal{W}( S^1_{\\Theta})$ is the closed convex hull of $\\Omega.$ \n\nIn what follows, we find an envelope of the family of curves $\\{\\mathcal{C}_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$ and use it to compute the boundary of \n$\\mathcal{W}( S^1_{\\Theta})$. First, observe that our family of curves satisfies $f_x^2 + f_y^2 \\ne 0$ for $\\theta \\ne 0.$ Then to find an envelope of \n$\\{\\mathcal{C}_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$, we need only find a curve $E$ satisfying \\eqref{eqn:econd} and \\eqref{derivative_theta}.\nSpecifically, we will find all points satisfying \\eqref{eqn:econd}. These points \n will yield two curves $E_1$ and $E_2$. We will show $E_1$ also satisfies \\eqref{derivative_theta} and thus, gives an envelope for\n our family of curves. We further show that $E_1$ is a convex curve bounding the set $\\Omega.$ This implies $\\Omega$ is convex\n and so $\\overline{\\Omega} =$ Clos$(\\mathcal{W}( S^1_{\\Theta}))$. 
Thus $E_1$ gives the boundary of Clos$(\\mathcal{W}( S^1_{\\Theta})),$ and hence of $\\mathcal{W}( S^1_{\\Theta})$,\n as desired.\n \n\\subsubsection{Finding the Envelope.} We first identify all points satisfying \\eqref{eqn:econd}, which\ngives the two equations $f(x,y,\\theta)=0$\nand \n\\begin{equation} \\label{eqn:deriv_theta2} -\\left( x- c_1(\\theta) \\right) c_1'(\\theta) - \\left( y- c_2(\\theta) \\right) c_2'(\\theta) - r(\\theta) r'(\\theta)=0.\\end{equation}\nObserve that we can write each circle $\\mathcal{C}_{\\theta}$ parametrically as\n\\[ x(s) = c_1(\\theta) + r(\\theta) \\cos(s), \\ \\ y(s) =c_2(\\theta) + r(\\theta) \\sin (s), \\ \\ s\\in [0, 2\\pi).\\] \nThen \\eqref{eqn:deriv_theta2} is equivalent to \n\\[- r(\\theta) \\cos(s) \\frac{c \\sin \\theta}{a+c} + r(\\theta) \\sin(s) \\frac{c \\cos \\theta}{a+c} + r(\\theta) \\frac{ac}{(a+c)^2} \\sin \\theta=0.\\]\nFor $\\theta \\ne 0$, we have $r(\\theta) \\ne 0$ and so, this is equivalent to \n\\begin{equation} \\label{eqn:s} \\sin(s-\\theta) = \\cos \\theta \\sin(s) - \\sin \\theta \\cos(s) = -\\frac{a}{a+c} \\sin \\theta.\\end{equation}\nNote that the above equation has two solutions for $s$:\n\\begin{equation} \\label{eqn:sj} s_1(\\theta) := \\theta - \\text{arcsin}\\left( \\frac{a}{a+c} \\sin \\theta\\right) \\ \\text{ and } \\ s_2(\\theta) := \\theta - \\pi + \\text{arcsin}\\left( \\frac{a}{a+c} \\sin \\theta\\right).\\end{equation}\nThen the curves $E_1(\\theta)=(x_1(\\theta), y_1(\\theta)) $ and $E_2(\\theta)=(x_2(\\theta), y_2(\\theta)) $ defined by \n\\[ \nx_j(\\theta) := c_1(\\theta) + r(\\theta) \\cos( s_j(\\theta)) \\text{ and } y_j(\\theta) := c_2(\\theta) + r(\\theta) \\sin( s_j(\\theta)), \\ \\ \\theta \\in (0, 2\\pi), j=1,2 \\\\\n\\]\ngive two curves whose points satisfy \\eqref{eqn:econd}. \n\nSince we are concerned with the convex hull of the family of circles $\\{\\mathcal{C}_{\\theta}\\}_{\\theta\\in [0, 2\\pi)}$, we consider the outer curve $E_1$. To show that $E_1$ satisfies \\eqref{derivative_theta}, we need to do a little more work. 
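Before doing so, we note that \eqref{eqn:econd} can be checked numerically at sampled parameters. The minimal Python sketch below (assuming \texttt{numpy}; the values $a=2$, $c=1$ are illustrative and satisfy $a-c=1$) evaluates $f$ and a centered-difference approximation of $f_{\theta}$ at points of $E_1$ and $E_2$; both quantities vanish up to numerical error.
\begin{verbatim}
import numpy as np

a, c = 2.0, 1.0                        # illustrative values with a - c = 1

c1 = lambda t: (a + c * np.cos(t)) / (a + c)
c2 = lambda t: c * np.sin(t) / (a + c)
r  = lambda t: a * c * (1 - np.cos(t)) / (a + c) ** 2
f  = lambda x, y, t: (x - c1(t)) ** 2 + (y - c2(t)) ** 2 - r(t) ** 2

s1 = lambda t: t - np.arcsin(a / (a + c) * np.sin(t))
s2 = lambda t: t - np.pi + np.arcsin(a / (a + c) * np.sin(t))

def E(t, s):
    # The point c(t) + r(t) e^{i s(t)}, returned as coordinates (x, y).
    return c1(t) + r(t) * np.cos(s(t)), c2(t) + r(t) * np.sin(s(t))

h = 1e-6
for t in np.linspace(0.3, 2 * np.pi - 0.3, 8):
    for s in (s1, s2):
        x, y = E(t, s)
        f_theta = (f(x, y, t + h) - f(x, y, t - h)) / (2 * h)
        print(f"theta = {t:.2f}:  f = {f(x, y, t):+.1e},  f_theta = {f_theta:+.1e}")
\end{verbatim}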
\nFirst, observe that \\eqref{eqn:s} implies the following two equations:\n\\begin{equation} \\label{eqn:useful} \\cos\\big( \\theta - s_1(\\theta) \\big) \\left( 1-s_1'(\\theta)\\right) = \\frac{a}{a+c} \\cos \\theta \\ \\ \\text{ and } \\ \\ \\Re \\left(c^\\prime(\\theta) e^{-i s_1(\\theta)}\\right) = - r^\\prime(\\theta).\\end{equation}\nWe can obtain more information by writing \n\\[ E_1(\\theta) = c(\\theta) + r(\\theta) e^{i s_1(\\theta)} = \\frac{a+ce^{i\\theta}}{a+c} + \\frac{ac(1-\\cos \\theta)}{(a+c)^2} e^{i s_1(\\theta)}\\]\nand then computing derivatives as follows:\n\\[\n\\begin{aligned}\nx_1^\\prime(\\theta) + i y_1^\\prime(\\theta) &= e^{is_1(\\theta)} \\left(c^\\prime(\\theta) e^{-i s_1(\\theta)} + r^\\prime(\\theta) + i r(\\theta) s_1'(\\theta)\\right)\\\\\n&= e^{is_1(\\theta)} \\left(\\Re\\left(c^\\prime(\\theta) e^{-is_1(\\theta)} + r^\\prime(\\theta)\\right) + i \\Im \\left(c^\\prime(\\theta) e^{-is_1(\\theta)} + r^\\prime(\\theta)\\right) + i r(\\theta) s_1'(\\theta) \\right).\n\\end{aligned}\n\\]\nThen, using \\eqref{eqn:useful} and the fact that $r^\\prime(\\theta)$ is real, we have\n\\[\n\\begin{aligned}\nx_1^\\prime(\\theta) + i y_1^\\prime(\\theta) &= i e^{is_1(\\theta)} \\left(\\Im\\left(\\frac{c}{a + c} i e^{i \\theta} e^{-is_1(\\theta)} + r^\\prime(\\theta)\\right) + r(\\theta) s_1'(\\theta) \\right)\\\\\n&=i e^{is_1(\\theta)}\\left(\\frac{c}{a+c}\\cos(\\theta - s_1(\\theta) ) + \\frac{ac}{(a + c)^2}(1 - \\cos(\\theta))s_1'(\\theta) \\right),\n \\end{aligned}\n \\]\nwhich allows us to conclude that\n\\begin{eqnarray}\n \\label{eqn:x_derivative} x_1^\\prime(\\theta) &=& -\\sin \\left( s_1(\\theta) \\right) \\left(\\frac{c}{a+c} \\cos\\left(\\theta - s_1(\\theta) \\right) + \\frac{ac}{(a+c)^2} (1 - \\cos\\theta)s_1'(\\theta) \\right); \\\\\n \\label{eqn:y_derivative} y_1^\\prime(\\theta) &=& \\cos \\left( s_1(\\theta) \\right) \\left(\\frac{c}{a+c} \\cos\\left (\\theta - s_1(\\theta) \\right) + \\frac{ac}{(a+c)^2} (1 - \\cos\\theta)s_1'(\\theta)\\right).\n \\end{eqnarray}\nTo conclude \\eqref{derivative_theta} for $E_1$, one just needs to show that \n\\begin{equation} \\label{eqn:deriv_zero} \\frac{c}{a+c} \\cos\\left (\\theta - s_1(\\theta) \\right) + \\frac{ac}{(a+c)^2} (1 - \\cos\\theta)s_1'(\\theta) \\ne 0,\\end{equation}\nfor $\\theta \\ne 0.$ This is almost immediate. First observe that since $\\left| \\frac{a}{a+c}\\right| <1$, for $\\theta \\in [0, 2 \\pi)$,\n\\[ -\\frac{\\pi}{2} < \\text{arcsin}\\left( \\frac{a}{a+c} \\sin \\theta \\right) < \\frac{\\pi}{2},\\]\nand so $ \\theta - s_1(\\theta) \\in (- \\frac{\\pi}{2}, \\frac{\\pi}{2})$. This implies $\\cos( \\theta - s_1(\\theta)) >0$. Moreover, one can compute\n\\[ s_1'(\\theta) = 1 - \\frac{a}{a+c} \\frac{ \\cos \\theta}{ \\sqrt{ 1 - \\frac{a^2}{(a+c)^2} \\sin^2 \\theta}} \\] \nand observe that $s_1'$ is continuous and $s'_1(\\frac{\\pi}{2}) = 1 >0$. One can show that $s_1'(\\theta)=0$ leads to the contradiction $ 1 = \\frac{a}{a+c}$. \nThus, $s_1'>0$ as well and we can conclude that \\eqref{eqn:deriv_zero} is strictly positive. This implies $E_1$ satisfies \\eqref{derivative_theta} and thus,\nis an envelope for the family $\\{ \\mathcal{C}_{\\theta} \\}_{\\theta \\in [0, 2\\pi)}$. \n\nFinally, a word about $\\theta=0$. Because the circle $\\mathcal{C}_{0}$ is the single point $(1,0)$, it does not make sense to say a curve is tangent to\n$\\mathcal{C}_{0}$. 
However, the formulas $(x_j(\\theta), y_j(\\theta))$ for each $E_j$ extend to continuously differentiable functions on intervals containing zero in their interior.\nIn particular, we can certainly extend $E_1$ and $E_2$ to $\\theta=0$ by specifying $E_j(0)= 1$ for $j=1,2$.\n\n\n\n\n\\subsubsection{Location of $E_1, E_2$} Let us briefly consider the relationship between the curves $E_1, E_2$ \nand intersections of the circles $\\{\\mathcal{C}_{\\theta}\\}_{\\theta \\in [0,2\\pi)}$. We use this relationship to show that with the\nexception of the point $(1,0)$, the curve $E_1$ lies completely outside of $\\overline{\\mathcal{D}}$ and the curve $E_2$ lies completely in the interior of $\\mathcal{D}.$\n\nFix $\\theta \\ne0$. Then for $h$ with $|h|$ sufficiently small, the circles $\\mathcal{C}_{\\theta}$ and $\\mathcal{C}_{\\theta +h}$ intersect in two points. To verify this, observe that the disks $\\mathcal{D}_{\\theta}$ and $\\mathcal{D}_{\\theta +h}$ will overlap for $|h|$ sufficiently small. Moreover, the circle formula \\eqref{eqn:Ctheta} paired with the formulas for $c(\\theta)$ and $r(\\theta)$ can be used to show that no circle $\\mathcal{C}_{\\theta}$ is fully contained in a different circle $\\mathcal{C}_{\\tilde{\\theta}}.$ Thus, there must be two intersection points; call them $p^1_{\\theta, h}$ and $p^2_{\\theta, h}$.\n\nBasic geometry shows that the points $p^1_{\\theta, h}$ and $p^2_{\\theta, h}$ will be symmetric across the straight line connecting the centers $c(\\theta)$ and $c(\\theta +h)$. Since $r(\\theta) \\ne 0$, we can conclude that one point, say $p^1_{\\theta, h}$, is in $\\overline{\\mathcal{D}}^c$ and the other point $p^2_{\\theta, h}$ is in $\\mathcal{D}.$ Now write the intersection points as \n\\[ p_{\\theta,h}^j = c(\\theta) + r(\\theta)e^{i {s^j_h}},\\]\nwhere $s^j_h$ is an angle depending on $j$ and $h$. Substituting this formula for $p_{\\theta,h}^j$ into the equation for $\\mathcal{C}_{\\theta+h}$ gives:\n\\[ \\Big( c_1(\\theta) + r(\\theta)\\cos( s^j_h)- c_1(\\theta +h) \\Big)^2 + \\Big( c_2(\\theta) + r(\\theta)\\sin( s^j_h)- c_2(\\theta+h) \\Big)^2 - r(\\theta+h)^2= 0, \\]\nand one can use trigonometric approximations to show that \n\\[ \\lim_{h \\rightarrow 0 } \\sin(s_h^j-\\theta) = -\\frac{a}{a+c} \\sin \\theta.\\]\nThis shows that the sets $\\{ p_{\\theta,h}^1 \\}, \\{ p_{\\theta,h}^2 \\}$ converge to points $p^1_{\\theta}$ and $p^2_{\\theta}$ on $\\mathcal{C}_{\\theta}$ satisfying \\eqref{eqn:s}. This implies that the sets $\\{ p^1_{\\theta}, p^2_{\\theta} \\} $ and $\\{ E_1(\\theta), E_2(\\theta)\\}$ are equal.\n\nNow we can examine the location of the curves $E_1$ and $E_2.$ First since $E_1(\\theta)$ and $E_2(\\theta)$ are limits of the $\\{p^j_{\\theta,h}\\}$, they are symmetric points across $\\mathcal{C}$. \nThus, if either of $E_1(\\theta)$, $E_2(\\theta)$ is on $\\mathcal{C}$, we must have $E_1(\\theta)=E_2(\\theta).$ However, using \\eqref{eqn:sj}, one can show that $E_1(\\theta)=E_2(\\theta)$ only at $\\theta=0$. 
Thus, $E_1$ and $E_2$ only touch $\\mathcal{C}$ at $\\theta=0.$\n\nThen by the properties of $p^1_{\\theta}$ and $p^2_{\\theta}$, except at $\\theta=0$, one of the curves $E_1, E_2$ is always in $\\overline{\\mathcal{D}}^c$ and one is always in $\\mathcal{D}.$ By checking at $\\theta =\\pi$, we can conclude \n\\[ \n\\begin{aligned}\nE_1(\\theta) &= p^1_{\\theta} \\ \\in \\overline{\\mathcal{D}}^c \\ \\ \\text{ for } 0 < \\theta < 2 \\pi, \\quad E_1(0) = (1,0); \\\\\nE_2(\\theta) &= p^2_{\\theta} \\ \\in {\\mathcal{D}} \\ \\ \\text{ for } 0 < \\theta < 2 \\pi, \\quad E_2(0) = (1,0). \n\\end{aligned}\n\\]\n\n\\subsubsection{The Boundary of $\\Omega$.} Recall that $\\mathcal{B}$ denotes the boundary of $\\Omega = \\mathcal{D} \\cup \\bigcup_{\\theta \\in [0, 2\\pi)} \\mathcal{D}_{\\theta}$. We will show that $\\mathcal{B} = E_1$. Our initial goal is to show $\\mathcal{B} \\subseteq E_1$. First, it is easy to conclude that $\\mathcal{B} \\subseteq \\cup_{\\theta \\in [0, 2\\pi)} \\mathcal{C}_{\\theta}.$ To see this, note that $\\mathcal{B}$ is in the boundary of $ \\bigcup_{\\theta \\in [0, 2\\pi)} \\mathcal{D}_{\\theta}$. Then if $\\{ c(\\theta_n) + \\lambda_n r(\\theta_n )e^{i s_n}\\}$ with $0\\le \\lambda_n \\le 1$ is a sequence converging to a point on $\\mathcal{B}$, one can use convergent subsequences of the $\\{\\theta_n\\}$, $\\{s_n\\}$, and $\\{\\lambda_n\\}$ to conclude that it must converge to a point on some $\\mathcal{C}_{\\theta}$. \n\nSince $\\Omega$ is in the closure of the numerical range of a contraction, we also know $E_1(0) = (1,0)=\\mathcal{C}_0 \\in \\mathcal{B}.$ Now, we determine the points that the $\\mathcal{C}_{\\theta}$ with $\\theta \\ne 0$ can contribute to $\\mathcal{B}.$ Fix $\\theta \\ne 0$. Set $\\mathcal{B}_{\\theta} = \\mathcal{B} \\cap \\mathcal{C}_{\\theta}.$ Further, define\n\\[ \\Omega^{\\theta}_N := \\mathcal{D}_{\\theta} \\cup \\mathcal{D} \\cup \\left( \\bigcup_{\\ell=0}^{N-1} \\mathcal{D}_{ \\frac{2 \\pi \\ell}{N}} \\right).\\]\n Let $\\mathcal{B}_N$ denote the boundary of $\\Omega^{\\theta}_N$; then $\\mathcal{B}_{N}$ is composed of arcs of circles from the boundaries of the disks comprising $\\Omega^{\\theta}_N.$ Let $\\mathcal{B}_N^{\\theta}$ be the contribution of $\\mathcal{C}_{\\theta}$ to $\\mathcal{B}_N$. Since $\\Omega$ is open, we know that \n \\[\\mathcal{B}^{\\theta}_N := \\mathcal{B}_{N} \\cap \\mathcal{C}_{\\theta} = \\mathcal{C}_{\\theta} \\cap \\left(\\Omega_N^{\\theta} \\right)^c.\\]\n One can use the definition of boundary and the density of the roots of unity in $\\mathbb{T}$ to show \n \\[ \\mathcal{B}_{\\theta} = \\lim_{N \\rightarrow \\infty} \\mathcal{B}^{\\theta}_N.\\]\nFix $N$ and assume $\\mathcal{B}^{\\theta}_N \\ne \\emptyset.$\nBy earlier discussions, for $N$ sufficiently large (i.e. the difference between the angles sufficiently small), $\\mathcal{C}_{\\theta}$ will have one intersection point in $\\overline{\\mathcal{D}}^c$, call it $p_{\\psi}$, with each close $\\mathcal{C}_{\\psi}$ bounding a disk from $\\Omega_N^{\\theta}$. Then a whole segment of $\\mathcal{C} _{\\theta}$ between $p_{\\psi}$ and the point on $\\mathcal{C}_{\\theta} \\cap \\mathcal{C}$ closest to $c(\\psi)$ will be contained in $\\mathcal{D}_{\\psi}$. 
This implies that $\\mathcal{B}_N^{\\theta}$ must be an arc on $\\mathcal{C}_{\\theta}$ whose endpoints are intersection points of $\\mathcal{C}_{\\theta}$ and two nearby circles $\\mathcal{C}_{\\psi_1}$ and $\\mathcal{C}_{\\psi_2}$.\n\nBy earlier remarks about intersection points, as $N \\rightarrow \\infty$, the intersection points in $\\overline{\\mathcal{D}}^c$ between $\\mathcal{C}_{\\theta}$ and the closest $\\mathcal{C}_{\\psi}$'s will approach $E_1(\\theta).$\nThus we can conclude that either $\\mathcal{B}_{\\theta} = \\emptyset$ or $\\mathcal{B}_{\\theta} = E_1(\\theta).$\nThis proves the claim that $\\mathcal{B} \\subseteq E_1.$ \n\nTo show $\\mathcal{B}= E_1$, proceed by contradiction and assume there is some $E_1(\\theta)=(x_1(\\theta), y_1(\\theta)) \\not \\in \\mathcal{B}$. Without loss of generality, assume $0< \\theta < \\pi$. Earlier arguments showed that $s_1'$ is always positive, so $s_1$ is strictly increasing. Thus, $s_1(\\theta) \\in (s_1(0), s_1(\\pi)) =(0,\\pi)$. This implies $\\sin(s_1(\\theta)) >0$ and by \\eqref{eqn:x_derivative}, $x_1$ is strictly decreasing on $[0, \\pi]$. Moreover, on $(0, \\pi)$, we have $y_1>0$ and on $(\\pi, 2\\pi)$, we have $y_1<0$. Thus, there is no point on $E_1$ with $x$-coordinate $x_1(\\theta)$ and $y$-coordinate strictly larger than $y_1(\\theta).$\n\nTo obtain the contradiction, define $\\alpha = \\sup\\{ \\epsilon : (x_1(\\theta), y_1(\\theta) + \\epsilon) \\in \\Omega\\}$. Since $\\Omega$ is bounded, such an $\\alpha$ exists and since $E_1(\\theta) \\not \\in \\mathcal{B}$, we know $\\alpha > 0$. But, then $(x_1(\\theta), y_1(\\theta) +\\alpha) \\in \\mathcal{B}$ and since $\\mathcal{B}\\subseteq E_1$, we must have $(x_1(\\theta), y_1(\\theta) +\\alpha) \\in E_1$. But, this contradicts our previous statement about $E_1$.\nThen it follows that $\\mathcal{B} = E_1.$\n\n\\subsubsection{The Proof of Theorem \\ref{thm:boundary}}\nLet $\\widehat{\\Omega}$ be the closed convex hull of $\\Omega$. By previous facts, this implies $\\widehat{\\Omega} =$Clos$(\\mathcal{W}( S^1_{\\Theta}))$.\nWe will show that $E_1$ is the boundary of $\\widehat{\\Omega}$ and hence, of Clos($\\mathcal{W}( S^1_{\\Theta}))$ and $\\mathcal{W}( S^1_{\\Theta})$.\n\nFirst we show $E_1$ is the boundary of some convex set. To show this, we use the Parallel Tangents condition, which says that a curve $C$ is the boundary of a convex set if and only if there are no three points on $C$ such that the tangents at these points are parallel. Observe that the tangents of $E_1$ are given by $( x_1'(\\theta), y_1'(\\theta))$ for $\\theta \\in [0, 2\\pi).$ By way of contradiction, assume there are three points whose tangents are parallel, say at $\\theta_1, \\theta_2, \\theta_3 \\in [0, 2\\pi).$ This implies that \n\\begin{equation} \\label{eqn:3tangents} \\frac{ y_1'(\\theta_1)}{x_1'(\\theta_1)} = \\frac{ y_1'(\\theta_2)}{x_1'(\\theta_2)} =\\frac{ y_1'(\\theta_3)}{x_1'(\\theta_3)} .\\end{equation}\nBy \\eqref{eqn:x_derivative} and \\eqref{eqn:y_derivative}, we know $\\frac{ y_1'(\\theta)}{x_1'(\\theta)} = -\\cot(s_1(\\theta))$ for $\\theta \\in [0, 2\\pi)$. Then, since $s_1$ is a one-to-one function mapping $[0, 2\\pi)$ onto $[0, 2\\pi)$, Equation \\eqref{eqn:3tangents} says that there are three distinct angles $\\psi_1, \\psi_2, \\psi_3 \\in [0, 2\\pi)$ satisfying\n\\[ \\cot(\\psi_1) = \\cot(\\psi_2) = \\cot(\\psi_3),\\]\nwhich contradicts properties of cotangent. Thus, $E_1$ is the boundary of a convex set $S$. 
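For particular values of $a$ and $c$, this convexity can also be observed numerically. In the Python sketch below (assuming \texttt{numpy}; the values $a=2$, $c=1$ are illustrative), the cross products of consecutive edges of a fine polygonal approximation of $E_1$ never change sign, as expected for a convex closed curve traversed once.
\begin{verbatim}
import numpy as np

a, c = 2.0, 1.0                        # illustrative values with a - c = 1
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
s1 = t - np.arcsin(a / (a + c) * np.sin(t))
rad = a * c * (1 - np.cos(t)) / (a + c) ** 2
x = (a + c * np.cos(t)) / (a + c) + rad * np.cos(s1)
y = c * np.sin(t) / (a + c) + rad * np.sin(s1)

# Cross products of consecutive edge vectors of the inscribed polygon.
dx = np.diff(np.r_[x, x[0]])
dy = np.diff(np.r_[y, y[0]])
cross = dx[:-1] * dy[1:] - dy[:-1] * dx[1:]
print("sign changes along E_1:", int(np.sum(cross[:-1] * cross[1:] < 0)))  # expect 0
\end{verbatim}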
\n\nAs $E_1$ is a bounded closed curve and $S$ is convex, its closure $\\overline{S}$ must be the closed convex hull of $E_1$. Similarly, as $\\Omega$ is composed of circular disks including $\\mathcal{D}$, one can show that $\\Omega$ is contained in the closed convex hull of $E_1$. But, then $\\Omega \\subseteq \\overline{S} \\subseteq \\widehat{\\Omega}$, which implies that $\\widehat{\\Omega} = \\overline{S}$. Thus, $E_1$ is the boundary of $\\widehat{\\Omega}$ and hence, the boundary of Clos$(\\mathcal{W}(S^1_{\\Theta}))$ and $\\mathcal{W}(S_{\\Theta}^1)$. \\\\\n\n\nFinally, we remark that the boundary of the numerical range is not, in general, the set of extreme points that one obtains from the circles. Here, by an extreme point, we mean the point on $\\mathcal{C}_{\\theta}$ furthest away from the center of $\\mathcal{C}.$ In Figure~\\ref{fig:Numerical_range_1_2} for $a=2$ and $c=1$, we present some of the circles $\\{\\mathcal{C}_{\\theta}\\}$, the curve consisting of the extreme points of the $\\mathcal{C}_{\\theta}$, and the boundary of the numerical range of $S_{\\Theta}^1.$\n\n\n\n\\begin{figure}[ht]\n\\includegraphics[height=6cm]{allthreecurves}\n\\caption{The numerical range of $S_{\\Theta}^1$ with $a=2$ and $c=1$, the curve of extreme points in red, and the outer envelope of the family of circles in green.}\n\\label{fig:Numerical_range_1_2}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}