diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeyjo" "b/data_all_eng_slimpj/shuffled/split2/finalzzeyjo" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeyjo" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe rest-frame ultraviolet spectra of galaxies contain a wealth of\ninformation about the population of massive stars, the properties of \nthe nebular gas those stars ionize, and the galactic-scale outflows they power. \nWe can therefore use rest-UV spectra to constrain the famous ``galactic feedback''\nthat drives metals into the intergalactic medium and to better understand the role \nof this feedback in shutting down future star formation. A key science goal for \n20--30~m telescopes is to understand this feedback process, but until the next \ngeneration of telescopes are built, there are only two ways \nto obtain rest-UV diagnostics for typical star-forming galaxies at\nredshifts above z $\\sim$1.5. The first is to stack low-quality spectra of \nmany galaxies (e.g., Shapley \\etal\\ 2003). This results in a good \nsampling of the average properties of star forming galaxies, but removes the \npossibility of understanding the variations in the observable properties and \nhow changes in one or more observables affects some or all of the others.\nThe second way is to target galaxies that have been highly amplified by gravitational \nlensing. In previously published good signal-to-noise (S\/N) rest-frame UV spectra for \nfour such bright gravitationally lensed galaxies there are numerous strong features that trace \nthe properties of massive stars and outflowing gas (Pettini \\etal\\ 2002, Finkelstein \n\\etal\\ 2009, Quider \\etal\\ 2009, 2010). All four of these galaxies have remarkably \nsimilar P~Cygni profiles, yet each has very different outflow \nproperties -- we do not yet know what is a ``normal'' outflow at z $>$ 1.\n\nAt the heart of the problem is the fact that the fundamental physical scale of star formation \nand stellar outflows is not the scale of a single galaxy, but rather the scale of individual \nstar forming regions within galaxies. However, strong gravitational lensing provides \nuniquely powerful views of distant galaxies, and in the most extreme \nhigh-magnification cases we are able to spatially resolute structures on $\\lesssim$100 pc \nscales within galaxies at z $>$ 1. In this proceeding we present a few highlights of the \nfirst results from an ongoing effort to obtain good S\/N rest-frame UV spectra \nof individual star forming regions within distant galaxies. Here we will focus on two \nsource in particular -- RCS2 J0327-1326 and SGAS J1050+0017.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.98\\columnwidth]{bayliss_fig1.eps} \n\\caption{\nSlit positions for MagE observations of RCS2 J0327-1326. The top grayscale images \nshow individual emission regions\/knots in the arc labeled with lowercase letters. Solid and \ndashed lines indicate the positions of the MagE slit along the arc. Slits indicated by \nsolid and dashed lines on the top left depict the slit positions for observations taken \nat two different slit widths, where the wider-slit was used for observations during \ntimes of worse seeing. The bottom panel \nshows a reconstruction of the giant arc in the source plane from Sharon \\etal\\ (2012), \nwith the same emission knots -- each with a physical size of $\\lesssim$100 pc -- are \nlabeled with lowercase letters. 
The solid cyan \nellipsoids indicate the four knots for which we have good S\/N \nspectra.}\\label{fig:0327slits}\n\\end{figure}\n\n\n\\section{RCS2 J0327-1326}\n\nRCS2 J0327-1326 is a spectacular strongly lensed galaxy at $z = 1.704$ that forms \na giant arc extending $\\sim$38'' along the sky (Wuyts \\etal\\ 2010, Sharon \\etal\\ 2012). \nThis galaxy has been well-studied at NIR wavelengths, which sample the rest-frame optical \n(Rigby \\etal\\ 2011, Whitaker \\etal\\ 2014, Wuyts \\etal\\ 2014). We have also been \nconducting a follow-up campaign to acquire good S\/N moderate resolution optical \nspectra of this arc with the MagE spectrograph on the Magellan-II (Clay) telescope. \nDue to the high magnification, it is possible to place standard spectroscopic slits at different \npositions that map to distinct star forming regions with sizes $\\lesssim$100 pc in the \nsource galaxy (Figure~\\ref{fig:0327slits}).\n\nSpectra of each of the four star forming knots within RCS2 J0327-1326 \nexhibit strong differences in the strength and structure of many prominent rest-UV \nfeatures. In particular, the C {\\small IV} and Mg {\\small II} P Cygni lines, which result \nfrom outflowing gas, show large variations between the different knots \n(Figure~\\ref{fig:0327_lines}). It is also notable that there is no positive correlation \nbetween the strength of the C {\\small IV} and Mg {\\small II} lines, indicating that \nthese two lines are generated by different mechanisms and\/or in different physical \nlocations. These data also informed the observed lack of correlation between \nLy-$\\alpha$ and Mg {\\small II} P Cygni emission in a sample of strongly lensed galaxies \n(Rigby \\etal\\ 2014), which further supports a physical picture in which Mg {\\small II} emission \ntraces stellar wind driven outflows, possibly providing a diagnostic measure of the \nradiative transfer mechanisms in those outflows. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.496\\columnwidth]{bayliss_fig2.eps} \n\\includegraphics[width=0.496\\columnwidth]{bayliss_fig3.eps} \n\\caption{\nThe Mg {\\small II} 2796,2803 (left) and C {\\small IV} 1548,1551 (right) P Cygni lines \nas observed in four different $\\lesssim$100 pc scale star forming regions within \nRCS2 J0327-1326. Both panels are consistent in their use of four \ndifferent colors to indicate the four distinct star forming regions.\n}\\label{fig:0327_lines}\n\\end{figure}\n\n\\section{SGAS J1050+0017}\n\nThis bright lensed galaxy at $z = 3.625$ was published by Bayliss \\etal\\ (2014), in which \nthe authors analyzed optical through NIR (rest-frame UV through optical) spectra of the \nsource, as well as imaging data spanning 0.4 through 4.5 microns. This multi-wavelength \nanalysis had difficulty explaining the properties of the integrated spectra of the giant arc, \nincluding very strong and narrow P Cygni features, as well as differences \nbetween the ionization parameter measurements. Here we show a new analysis of the \nGemini\/GMOS-North spectra of this arc in which we isolate and individually extract the \nemission from two distinct $\\sim$100-200 pc star forming knots along the arc (see Figures \n1 \\& 2 in Bayliss \\etal\\ 2014). The spectra clearly exhibit differences between the two \ndistinct knots, with the second knot, plotted in red, having a much weaker C {\\small IV} \nfeature, as well as weaker He {\\small II} emission (Figure~\\ref{fig:s1050}). 
These spectra \nprovide direct evidence that the conditions of the ionizing radiation field and the population \nof massive stars vary significantly across different regions at $z = 3.625$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.496\\columnwidth]{bayliss_fig4.eps}\n\\includegraphics[width=0.496\\columnwidth]{bayliss_fig5.eps}\n\\caption{\nThe C {\\small IV} 1548,1551 (left) P~Cygni lines and He {\\small II} 1640 (right) emission \nline as seen in distinct star forming regions (indicated by different colors) within \nSGAS J1050+0017.\n}\\label{fig:s1050}\n\\end{figure}\n\n\\section{Conclusions}\n\n\nLooking at individual star forming regions within the most highly magnified giant \narcs, we see significant variations along different lines of sight, including variable \nstrength and profile shape of P Cygni wind lines, with no evidence for \ncorrelation between C {\\small IV} and Mg {\\small II} P Cygni features, and \nindications of significant differences in the ionizing radiation field. These results \nreinforce the argument that the astrophysics of the interstellar medium (ISM) \nin distant star forming galaxies is a complex business. \n\nBright lensed galaxies extending out to z $\\sim$5 have recently become commonplace \n(e.g., Bayliss \\etal\\ 2010, Koester \\etal\\ 2010, Bayliss \\etal\\ 2011b, Gladders \\etal\\ in prep);\nthese lensed galaxies typically reside at z $\\sim$ 2 and therefore sample the \nepoch of peak star formation in the universe (Bayliss \\etal\\ 2011a, Bayliss 2012). \nThe highest magnification systems drawn from these large samples provide a new \nopportunity to leave single-object analysis behind and study the UV spectra of \nindividual star forming regions within many z $>$ 1 galaxies. Such observations \nwould allow us to characterize the relationship between local signatures of the winds \nof massive stars and their interaction with their surrounding ISM. These data will, in \nturn, reveal important new information about the astrophysics of the radiative transfer \nin the ISM in regions of prolific star formation. \n\n\\section*{Acknowledgements}\n\\noindent\nThis work is based on observations from the Magellan-II (Clay) and the Gemini-North \nTelescopes. Support was provided by the National Science Foundation through Grant \nAST-1009012 and by NASA through grant HST-GO-13003.01.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}\\label{#2}}\n\n\n\\begin{document}\n\n\\onehalfspacing\n\n\\title[Invariant Couplings]{Invariant Coupling of Determinantal Measures \\\\ on Sofic Groups}\n\\author{Russell Lyons}\n\\address{R.L., Dept.\\ of Math., 831 E. 3rd St., Indiana Univ., Bloomington, IN 47405-7106, USA}\n\\email{rdlyons@indiana.edu}\n\n\\author{Andreas Thom}\n\\address{A.T., Math. Institut, Univ. Leipzig, PF 100920, D-04009 Leipzig, Germany\n}\n\\email{thom@math.uni-leipzig.de}\n\n\\thanks{R.L.'s research is\npartially supported by NSF grant DMS-1007244 and Microsoft Research. 
A.T.'s research is supported by ERC-StG 277728.}\n\n\\date{15 May 2014} \n\n\\begin{abstract}\n\\baselineskip=12pt\nTo any positive contraction $Q$ on $\\ell^2(W)$, there is associated a\ndeterminantal probability measure $\\P^Q$ on $2^W$, where $W$ is a\ndenumerable set.\nLet $\\gp$ be a countable sofic finitely generated\ngroup and $G = (\\gp, \\edge)$ be a Cayley graph of $\\gp$.\nWe show that if $Q_1$ and $Q_2$ are two $\\gp$-equivariant positive\ncontractions on $\\ell^2(\\gp)$ or on $\\ell^2(\\edge)$ with $Q_1 \\le Q_2$,\nthen there exists a $\\gp$-invariant monotone coupling of the corresponding\ndeterminantal probability measures witnessing the stochastic domination\n$\\P^{Q_1} \\dom \\P^{Q_2}$.\nIn particular, this applies to the wired and free uniform spanning forests,\nwhich was known before only when $\\gp$ is residually amenable.\nIn the case of spanning forests, we also give a second more explicit proof,\nwhich has the advantage of showing an explicit way to create the free\nuniform spanning forest as a limit over a sofic approximation.\nAnother consequence of our main result is to prove that all determinantal\nprobability measures $\\P^Q$ as above are $\\dbar$-limits of finitely\ndependent processes. Thus, when $\\gp$ is amenable, $\\P^Q$ is\nisomorphic to a Bernoulli shift, which was known before only when $\\gp$ is\nabelian.\nWe also prove analogous results for sofic unimodular random rooted graphs.\n\\end{abstract}\n\n\\maketitle\n\n\\tableofcontents\n\n\\bsection{Introduction}{s.intro}\n\nThe study of Bernoulli percolation and other random subgraphs of Cayley\ngraphs of non-amenable groups began to flourish in the mid 1990s.\nAlthough the lack of averaging over F\\o lner sequences was replaced by use\nof the Mass-Transport Principle, and expansion of all finite sets was even\nturned to advantage, coupling questions remained vexing:\nIn the amenable context, if two invariant probability\nmeasures have the property that one stochastically dominates the other,\nthen there is an invariant coupling (joining) of the two measures that\nwitnesses the domination, which is called a monotone coupling.\nWhether this holds on non-amenable groups remained open until resolved in\nthe negative by Mester \\rref b.Mester:mono\/.\nNonetheless, an invariant monotone coupling may still exist for certain\npairs of measures.\nIndeed, the coupling question was originally motivated by the special case\nof the so-called wired and free spanning forest measures, denoted $\\wsf$\nand $\\fsf$.\nBowen \\cite{bowen} showed that a monotone joining exists for $\\wsf$ and\n$\\fsf$ on every residually amenable group.\nWe show that it exists for $\\wsf$ and $\\fsf$ on every sofic group, a wide\nclass of groups that no group is known not to belong to.\nWe show this as a consequence of a more general result: These spanning\nforest measures are examples of determinantal probability measures, where,\nas we review in the next section, for every positive contraction on\n$\\ell^2(W)$, $W$ being a denumerable set, there is associated a probability\nmeasure $\\P^Q$ on $2^W$. 
It is known \\cite{Lyons:det,BBL:Rayleigh} that when $Q_1 \\le\nQ_2$, we have the stochastic domination $\\P^{Q_1} \\dom \\P^{Q_2}$.\nWe show that\nfor any pair of\ndeterminantal probability measures corresponding to equivariant\npositive contractions\n$Q_1 \\le Q_2$ on $\\ell^2$ of any sofic group, there is not only a monotone\ncoupling of $\\P^{Q_1}$ with $\\P^{Q_2}$, but in fact one that is\n$\\gp$-invariant.\nDeterminantal probability measures and point processes are a class of\nmeasures that appear in a variety of contexts, including several areas of\nmathematics as well as physics and machine learning. See, e.g., \\cite{\nMacchi, Soshnikov:survey, Lyons:det, \nKulTas}.\n\nThis is our main result, obtained via some abstract considerations of\nultraproducts of tracial von Neumann algebras.\nThe proof proceeds along the following broad outline: \nGiven $\\gp$-equivariant positive contractions $Q_1 \\le Q_2$\nand a sofic approximation to a Cayley diagram of\n$\\gp$ by finite graphs $G_n$ labelled with the generators of $\\gp$, we form\nthe metric ultraproduct of the sequence of von Neumann algebras associated\nto the sequence $(G_n)_n$.\nThis ultraproduct allows us to approximate $Q_i$ by positive contractions\n$Q_{1, n} \\le Q_{2, n}$ on $\\ell^2\\big(\\verts(G_n)\\big)$.\nWe may then take a limit point of monotone couplings of $\\P^{Q_{1, n}}$ with\n$\\P^{Q_{2, n}}$ to obtain a $\\gp$-invariant monotone coupling of $\\P^{Q_1}$\nwith $\\P^{Q_2}$.\nWe remark that ultraproducts are not essential to our proofs, but they\nallow us to make convenient statements about operators before deducing\ncorresponding consequences for determinantal probability measures.\n\nWe also describe some consequences for comparisons of return probabilities\nof random walks in random environments.\nWhile we are able to deduce a consequence for $\\fsf$ that was not\npreviously known, our coupling is not sufficiently explicit to deduce\nmuch more.\nIn the motivating case of $\\wsf$ and $\\fsf$, we also give a somewhat more\nconcrete way to obtain such a coupling.\nIn particular, this concrete approach yields a simple way to obtain the\n$\\fsf$ as a limit over a sofic approximation.\nNamely, for $L \\ge 0$, let $\\CYCLE_L(G)$ denote the space spanned by the\ncycles in $G$ of length at most $L$.\nWrite $\\fsf_{G, L}$ for the determinantal probability measure corresponding to\nthe orthogonal projection onto $\\CYCLE_L(G)^\\perp$.\nWe show that if a Cayley graph $G$\nis the random weak limit of $(G_n)_n$, then for $L(n) \\to\\infty$\nsufficiently slowly,\nthe random weak limit of $\\fsf_{G_n, L(n)}$ \nequals $\\fsf_G$.\nWe also extend our result from the context of Cayley graphs to its\nnatural setting of unimodular random rooted networks.\n\nFinally, we derive\na consequence of our monotone joining result for the ergodic\ntheory of group actions.\nLet $\\gp$ be a countable group and $X$ and $Y$ be two sets on which $\\gp$\nacts. A map\n$\\phi \\colon X \\to Y$ is called \\dfnterm{$\\gp$-equivariant} if $\\phi$\nintertwines the actions of $\\gp$: \n$$\n\\phi(\\gpe x) = \\gpe \\big(\\phi(x)\\big) \\qquad (\\gpe \\in \\gp,\\, x \\in X)\n\\,.\n$$\nIf $X$ and $Y$ are both measurable spaces, then a $\\gp$-equivariant\nmeasurable $\\phi$ is called a \\dfnterm{$\\gp$-factor}.\nLet $\\mu$ be a measure on $X$. 
If $\\phi$ is a $\\gp$-factor, then the\npush-forward measure $\\phi_* \\mu$ is called a \\dfnterm{$\\gp$-factor of $\\mu$}.\nThe measure $\\mu$ is \\dfnterm{$\\gp$-invariant} if \n$$\n\\mu(\\gpe B) = \\mu(B) \\qquad (\\gpe \\in \\gp,\\, B \\subseteq X \\hbox{\nmeasurable}).\n$$\nIf $\\nu$ is a measure on $Y$, then a $\\nu$-a.e.-invertible $\\gp$-factor\n$\\phi$ such that $\\nu = \\phi_* \\mu$ is called an \\dfnterm{isomorphism} from\n$(X, \\mu, \\gp)$ to $(Y, \\nu, \\gp)$.\nWe are interested in the case where $X$ and $Y$ are product spaces\nof the form $A^\\gp$ or, more generally, $A^W$, where $A$ is a measurable\nspace and $W$ is a countable set on which $\\gp$ acts. In such a case,\n$\\gp$ acts on $A^W$ by \n$$\n\\big(\\gpe \\omega\\big)(x) \n:=\n\\omega( \\gpe^{-1} x) \\qquad (\\omega \\in A^W,\\, x \\in W,\\, \\gpe\n\\in \\gp)\n\\,.\n$$\nIf $\\lambda$ is a probability measure on $A$ and\n$\\mu$ is the product measure $\\lambda^W$, then we call the action of $\\gp$\non $A^W$ a \\dfnterm{Bernoulli shift}.\nOrnstein \\cite{Orn:book} proved a number of fundamental results about\nBernoulli shifts for $\\gp = \\Z$ that were extended in work with Weiss\n\\cite{OrnW:amen} to amenable groups.\nIn particular, they showed that factors of Bernoulli shifts are isomorphic\nto Bernoulli shifts.\nOn the other hand, in the non-amenable setting, Popa\n\\rref b.Popa\/ gave an example of a factor of a Bernoulli shift that is not\nisomorphic to a Bernoulli shift.\nMore generally, it is not well understood which actions are factors of\nBernoulli shifts when $\\gp$ is not amenable.\nThe utility of factors of Bernoulli shifts\non non-amenable groups has\nbeen shown in various ways; for example, see \n\\cite{Popa,\nChifanIoana,\nHoudayer,\nLyons:fixed,\nAW:Bernoulli,\nKun:Lip,\nLyons:fiid}.\n\nIt is easy to see that every factor of a Bernoulli shift is a $\\dbar$-limit of\nfinitely dependent processes, by approximating the factor with a block factor.\nWhen $\\gp$ is amenable, the converse is true: every $\\dbar$-limit of finitely\ndependent processes is a factor of a Bernoulli shift.\nAlthough we do not know whether \ndeterminantal probability measures on $2^\\gp$ arising from equivariant\npositive contractions are factors of Bernoulli shifts,\nwe show here that they are $\\dbar$-limits of finitely dependent\n$\\gp$-invariant probability measures on $2^\\gp$\nprovided $\\gp$ is\nsofic; in particular, when $\\gp$ is amenable, these determinantal measures\nare isomorphic to Bernoulli shifts, a result that was first shown for\nabelian $\\gp$ by \\rref b.LS:dyn\/.\n\nIn \\rref s.def\/, we give the relevant background on determinantal measures,\nincluding the motivating case of spanning forests.\nWe discuss various basic notions related to groups, their Cayley graphs, \ntheir von Neumann algebras, and soficity in \\rref s.cay\/.\nA review of ultraproducts and some new results we need is in \\rref\ns.proofs\/.\nThese tools then lead quickly to a proof of our main result in \\rref\ns.existence\/.\nFollowing this, \\rref s.approxim\/ gives the alternative proof for the\nspanning forest measures.\nConsequences of an invariant coupling are discussed in \\rref s.conseq\/, including the\ndefinitions of the $\\dbar$-metric and finitely dependent processes.\nTools needed for the extension to unimodular random rooted networks are\ngiven in \\rref s.unimodular\/.\nIn fact, this generalization has the advantage that the case of measures on\nsubsets of edges, which had to be treated separately and somewhat\ncumbersomely in 
earlier sections, here can be deduced as merely a special\ncase.\nAfter these tools are developed, we again have a short proof of the\nexistence of unimodular (sofic) couplings in \\rref s.sofic-couple\/.\n\n\n\\bsection{Determinantal probability measures}{s.def}\n\nA determinantal probability measure is one whose elementary\ncylinder probabilities are given by determinants.\nMore specifically, suppose that $E$ is a finite or countable set and that\n$Q$ is an $E \\times E$ matrix.\nFor a subset $A \\subset E$, let $Q\\restrict A$ denote the submatrix of\n$Q$ whose rows and columns are indexed by $A$.\nIf $\\qba$ is a random subset of $E$ with the property that for all finite $A\n\\subset E$, we have \n\\rlabel e.DPM\n{\\P[A \\subset \\qba] = \\det (Q\\restrict A)\n\\,, }\nthen we call $\\P$ a \\dfnterm{determinantal probability measure}.\nThe inclusion-exclusion principle in combination with \\eqref{e.DPM}\ndetermines the probability of each elementary\ncylinder event.\nTherefore,\nfor every $Q$, there is at most one probability\nmeasure satisfying \\eqref{e.DPM}.\nConversely, it is known (see, e.g., \\rref b.Lyons:det\/) that there is a\ndeterminantal probability measure corresponding to $Q$ if $Q$ is the matrix of\na \\dfnterm{positive contraction} on $\\ell^2(E)$ (in the standard\northonormal basis), which means that for all $u \\in \\ell^2(E)$, we have\n$0 \\le \\iprod{Q u, u} \\le \\iprod{u, u}$.\n\nWe identify a subset of $E$ with an element of $\\{0, 1\\}^E = 2^E$ in the\nusual way.\n\nAn event $\\mathcal A \\subseteq 2^E$ is called \\dfnterm{increasing} if for\nall $A \\in \\mathcal A$ and all $e \\in E$, we have $A \\cup \\{e\\} \\in\n\\mathcal A$.\nGiven two probability measures $\\P^1$, $\\P^2$ on $2^E$, we say that\n\\dfnterm{$\\P^2$ stochastically dominates $\\P^1$} and write $\\P^1 \\dom \\P^2$ if\nfor all increasing events $\\mathcal A$, we have $\\P^1(\\mathcal A) \\le\n\\P^2(\\mathcal A)$.\nA \\dfnterm{coupling} of two probability measures $\\P^1$, $\\P^2$ on\n$2^E$ is a probability measure $\\mu$\non $2^E \\times 2^E$ whose coordinate projections are $\\P^1$, $\\P^2$.\nA coupling $\\mu$ is called\n\\dfnterm{monotone} if\n$$\n\\mu\\big\\{(A_1, A_2) \\st A_1 \\subset A_2\\big\\} = 1\n\\,.\n$$\nBy Strassen's theorem \\cite{Strassen}, stochastic\ndomination $\\P^1 \\preccurlyeq \\P^2$ is equivalent to the existence of a\nmonotone coupling of $\\P^1$ and $\\P^2$.\nHowever, even if $\\P^1$ and $\\P^2$ are $\\gp$-invariant probability measures\non $2^\\gp$ with $\\P^1 \\dom \\P^2$, it does not follow that there is a\n$\\gp$-invariant monotone coupling of $\\P^1$ and $\\P^2$; see \\rref\nb.Mester:mono\/.\nWe need the following theorem; see \\rref b.Lyons:det\/ \nand \\rref b.BBL:Rayleigh\/.\n\\procl t.dominate\nLet $E$ be finite and let\n$Q_1 \\le Q_2$ be positive contractions of $\\ell^2(E)$. Then $\\P^{Q_1} \\dom\n\\P^{Q_2}$.\n\\endprocl\nThe most well-known example of a (nontrivial discrete) determinantal\nprobability measure is that where $\\qba$ is a uniformly chosen random spanning\ntree of a finite connected graph $G = (\\vertex, \\edge)$ with $E := \\edge$.\nIn this case, $Q$ is the \\dfnterm{transfer current matrix} $Y$,\nwhich is defined as follows.\nOrient the edges of $G$ arbitrarily.\nRegard $G$ as an electrical network with each edge having unit\nconductance. 
Then $Y(e, f)$ is the amount of current flowing along the edge\n$f$ when a battery is hooked up between the endpoints of $e$ of such\nvoltage that in the network as a whole, unit current flows from the tail of\n$e$ to the head of $e$.\nThe fact that \\eqref{e.DPM} holds for the uniform spanning tree is due to\n\\rref b.BurPem\/ and is called the Transfer Current Theorem.\nThe case with $|A| = 1$ was shown much earlier by \\rref b.Kirchhoff\/, while\nthe case with $|A| = 2$ was first shown by \\rref b.BSST\/.\nWrite $\\ust_G$ for the uniform spanning tree measure on $G$.\n\nThe study of the analogue on an infinite graph of a uniform spanning tree\nwas begun by \\rref b.Pemantle:ust\/ at the suggestion of Lyons.\nPemantle showed\nthat if an infinite connected graph $G$ is exhausted by a sequence of finite\nconnected subgraphs $G_n$, then the weak limit of the uniform spanning tree\nmeasures $\\ust_{G_n}$ on $G_n$ exists. \nHowever, it may happen that the limit measure is not supported on trees,\nbut on forests.\nThis limit measure is now called the \\dfnterm{free uniform spanning forest} on\n$G$, denoted $\\fsf_G$. Considerations of electrical networks play the\ndominant role in the proof of existence of the limit.\nIf $G$ is itself a tree, then this measure is\ntrivial, namely, it is concentrated on $\\{G\\}$. Therefore, \\rref b.Hag:rcust\/\nintroduced another limit that had been considered more implicitly by \\rref\nb.Pemantle:ust\/ on $\\Z^d$, namely, the weak limit of the uniform\nspanning tree measures on $G_n^*$, where $G_n^*$ is the graph $G_n$ with\nits boundary identified to a single vertex. As \\rref b.Pemantle:ust\/ showed,\nthis limit also always exists\non any graph and is now called the \\dfnterm{wired uniform spanning forest},\ndenoted $\\wsf_G$. \n\nIn many cases, the free and the wired limits are the same. In particular,\nthis is the case on all euclidean lattices such as $\\Z^d$. The general\nquestion of when the free and wired uniform spanning forest measures are\nthe same turns out to be quite interesting: The measures are the same iff\nthere are no nonconstant harmonic Dirichlet functions on $G$ (see\n\\BLPSusf). For a Cayley graph of a group $\\Gamma$, this condition is\nequivalent to the non-vanishing of the first $\\ell^2$-Betti number of the\ngroup, i.e., $\\beta_1^{(2)}(\\Gamma) \\neq 0$. This important property of a\ngroup and its implications have been studied extensively; see, for\nexample, \\cite{bekka, petthom}. \n\nIn the paper \\BLPSusf, it was noted that the Transfer Current Theorem\nextends to the free and wired spanning forests if one uses the free and\nwired currents, respectively. 
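For instance, on the triangle graph $K_3$ with unit conductances, one has\n$Y(e,e) = 2\/3$ for every edge $e$ and $|Y(e,f)| = 1\/3$ for $e \\neq f$,\nso that \\eqref{e.DPM} gives $\\P[e \\in \\qba] = 2\/3$ and\n$\\P[\\{e, f\\} \\subset \\qba] = (2\/3)^2 - (1\/3)^2 = 1\/3$, in agreement with\ndirect enumeration of the three spanning trees of $K_3$. On an infinite\ngraph, the same determinantal formula holds with the free or wired current\nmatrix in place of $Y$, yielding $\\fsf_G$ or $\\wsf_G$, respectively.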
\nTo explain this, note that the orthocomplement of the row space $\\STAR(G)$ of\nthe vertex-edge incidence matrix of a finite graph $G$ is the kernel, denoted\n$\\CYCLE(G)$, of the matrix.\nWe call $\\STAR(G)$ the \\dfnterm{star space} of $G$ and $\\CYCLE(G)$ the\n\\dfnterm{cycle\nspace} of $G$.\nFor an infinite graph $G = (\\vertex, \\edges)$ exhausted by finite subgraphs\n$G_n$, we let $\\STAR(G)$ be the closure of $\\bigcup \\STAR(G_n^*)$ and\n$\\CYCLE(G)$ be the closure of $\\bigcup \\CYCLE(G_n)$, where we take the closure\nin $\\ell^2(\\edges)$.\nThen $\\wsf_G$ is the determinantal probability measure corresponding to the\nprojection $P_{\\STAR(G)}$, while $\\fsf_G$ is the determinantal probability measure\ncorresponding to $P_{\\CYCLE(G)}^\\perp := P_{\\CYCLE(G)^\\perp}$.\nIn particular, $\\wsf_G = \\fsf_G$ iff $\\STAR(G) = \\CYCLE(G)^\\perp$.\n\nWhile the wired spanning forest is quite well understood,\nthe free spanning forest measure is in general poorly understood. \nA more detailed summary of uniform spanning forest measures can be found in\n\\rref b.Lyons:bird\/.\n\nIf $E$ is infinite and $Q_n$ are positive contractions on $\\ell^2(E)$\nthat tend to $Q$ in the weak operator topology (i.e., in each matrix\nentry with respect to the standard orthonormal basis, we have\nconvergence), then clearly $\\P^{Q_n}$ tend to $\\P^Q$ weak*. Since\n$\\STAR(G_n^*) \\subset \\CYCLE(G_n)^\\perp$, it follows from \\rref\nt.dominate\/ that $\\wsf_G \\dom \\fsf_G$, though this was proved first by other\nmeans in \\BLPSusf.\n\n\\bsection{Cayley graphs and Cayley diagrams}{s.cay}\n\nFor any group $\\gp$ acting on a set $X$, if the contraction $Q$ on\n$\\ell^2(X)$ is $\\gp$-equivariant, then $\\P^Q$ is $\\gp$-invariant.\n\n\\begin{dfn} Let $\\gp$ be a group. If $S$ is a set of elements of\\\/ $\\gp$, we\nwrite $S^{-1} := \\{s^{-1} \\st s \\in S\\}$. If $S = S^{-1}$ is such that\nthe smallest subgroup of\\\/ $\\gp$ that contains all elements of $S$ is $\\gp$\nitself, then the corresponding \\dfnterm{Cayley graph} of\\\/ $\\gp$ with respect\nto $S$ is the undirected graph whose vertex set is $\\gp$ and whose edge set\nis $[\\gpe, \\gpe s]$ for $\\gpe \\in \\gp$ and $s \\in S$.\nThe \\dfnterm{Cayley diagram} ${\\rm Cay}(\\Gamma,S)$ contains more\ninformation and is a labelled oriented graph; namely, it is the graph whose\nvertex set is $\\gp$ and whose edge set is $(\\gpe, \\gpe s)$ for $\\gpe \\in\n\\gp$ and $s \\in S$, where that edge is labelled $s$.\nThus, for each unoriented edge of the Cayley graph, there are two oriented\nedges of the Cayley diagram, with inverse labels.\nWe shall always assume that $S = S^{-1}$.\nAn \\dfnterm{$S$-labelled graph} is a graph each of whose edges is assigned\na label in $S$.\n\\end{dfn}\n\nA \\dfnterm{rooted graph} $(G, o)$ is a graph $G$ with a distinguished\nvertex $o$ of $G$. We denote by $[(G, o)]$ the class of rooted graphs that\nare isomorphic to $G$ via an isomorphism that preserves the root. If $G$ is\nlabelled or oriented, then we also require the isomorphism to preserve that\nextra structure. Generally we are interested in isomorphism classes and\nshall, after the first uses, drop the square brackets in the notation and\nthus not distinguish notationally between a rooted graph and its\nisomorphism class.\n\nGiven a vertex $v$ in a (possibly labelled and oriented) graph $G$ and $r\n\\ge 0$, write $B(v, r; G)$ for the (possibly labelled and oriented) graph\ninduced by $G$ on the vertices within distance $r$ of $v$, with $v$ as \nthe root. 
If $G$ is finite, let $\\nu_{G, r}$\ndenote the law of $[B(v, r; G)]$ when $v$ is chosen uniformly at random.\nIf $G$ is a Cayley graph with identity element $o$ and $G_n$ are finite\nundirected graphs such that \nfor every $r > 0$, the laws $\\nu_{G_n, r}$ tend to\n$\\delta_{[B(o, r; G)]}$, then we say that the \\dfnterm{random weak limit} of\n$(G_n)_n$ is $G$.\n\nWe say that $\\gp$ is \\dfnterm{sofic} if there exists a sequence $(G_n)_{n\n\\ge 1}$ of finite oriented graphs whose edges are labelled by elements of\n$S$ such that for every $r > 0$, the laws $\\nu_{G_n, r}$ tend to\n$\\delta_{[B(o, r; G)]}$, where $G$ is the Cayley diagram of $\\gp$. In this\ncase, we say that ${\\rm Cay}(\\Gamma,S)$ is a limit of finite $S$-labelled\ngraphs. It is well known and easy to see that if ${\\rm Cay}(\\Gamma,S)$ is\na limit of finite $S$-labelled graphs, then the same holds for any other\nfinite generating set of elements $S' \\subset \\Gamma$ \\cite{Weiss}. \n\n\n\\begin{dfn}\nA $S$-labelled oriented graph $G$ is called an \\dfnterm{$S$-labelled\nSchreier graph} if\nfor each vertex in $G$ and each $s \\in S$, there is precisely one incoming edge and one outgoing edge with the label $s$.\n\\end{dfn}\n\nLet $\\bF_{S}$ denote the free group on the set $S$, i.e.,\n$\\bF_{S}$ consists of formal products of letters $s^{\\pm}$ with\n$s \\in S$. We denote the neutral element of $\\bF_S$ by $\\varnothing$. Each $S$-labelled Schreier graph is equipped with a natural\naction of $\\bF_S$. The action of $s \\in S$ on $v \\in \\vertex$\nyields the unique vertex $v' \\in \\vertex$ such that there exists an\n$s$-labelled oriented edge $(v,v')$. We shall denote this vertex by\n$v.s$; and hence consider this action as a right action on $\\vertex$.\nMore generally, for $v \\in \\vertex$ and $w \\in \\bF_S$, $v.w$\ndenotes the vertex which is obtained by an oriented walk following the\nlabels determined by $w \\in \\bF_S$. \nLikewise, given an edge $e = (v, v.s)$ and $w \\in \\FS$, write $e.w := (v.w, v.w.s)$.\nFor a set $A \\subset \\FS$, write $v.A := \\{v.w \\st w \\in A\\}$ and $e.A :=\n\\{e.w \\st w \\in A\\}$.\n\nThe following was proved in slightly different language by\n\\cite{ES:direct}. \n\n\\procl l.def-sofic\nLet\\\/ $\\Gamma$ be a group and $S$ a generating set of elements of\\\/\n$\\Gamma$. The group $\\Gamma$ is sofic if and only if\\\/ ${\\rm Cay}(\\Gamma,S)$ is a limit of finite $S$-labelled Schreier graphs.\n\\endprocl\n\nSuppose that $\\theta$ is a probability measure on $\\{0, 1\\}^\\vertex$ or\n$\\{0, 1\\}^\\edges$ for a graph $G = (\\vertex, \\edge)$, regarded as giving\nrandom subsets (of either vertices or edges) by using $\\{0, 1\\}$-valued\nmarks.\nFor a vertex $v$ and $r \\ge 0$, denote the restriction of $\\theta$ to $B(v,\nr; G)$ by $\\theta(v, r)$.\nWrite $[\\theta(v, r)]$ for the probability measure induced on $[B(v, r;\nG)]$.\nIf $G$ is finite, let $\\theta(G, r) := \\frac1{|\\vertex|}\\sum_{v \\in\n\\vertex} [\\theta(v, r)]$.\nIn other words, if $H$ is a rooted $\\{0, 1\\}$-marked graph of radius at\nmost $r$, then with $\\cong$ denoting isomorphism of rooted marked graphs,\n$\\theta(G, r)(H) = \\frac1{|\\vertex|}\\sum_{v \\in \\vertex} \\sum_{H' \\cong\nH} \\theta(v, r)(H')$. 
\nSuppose that $G = (\\vertex, \\edge)$ is the random weak limit of finite \ngraphs $G_n = (\\vertex_n, \\edge_n)$, that $\\theta_n$ are probability\nmeasures on $\\{0, 1\\}^{\\vertex_n}$ or $\\{0, 1\\}^{\\edges_n}$, and that\n$\\theta$ is a probability measure on $\\{0, 1\\}^\\vertex$ or $\\{0,\n1\\}^\\edge$, respectively. \nIf the weak* limit of $\\theta_n({G_n}, r)$ equals $\\theta(o, r)$ for\nall $r \\ge 0$, then we say that $\\theta$ is the \\dfnterm{random weak limit}\nof $(\\theta_n)_n$.\n\nLet $\\Gamma$ be a group and $S$ be a finite generating set of elements of\n$\\Gamma$. There is a natural action of $\\Gamma$ on ${\\rm Cay}(\\Gamma,S)\n=(\\gp,\\edge)$ that we denote by $\\lambda \\colon \\Gamma \\to {\\rm\nAut}\\big({\\rm Cay}(\\Gamma,S)\\big)$. It is defined by the translations\n$\\lambda(g)(h) = gh$ for $g,h \\in \\gp$. The vertex set $\\gp$ of ${\\rm\nCay}(\\Gamma,S)$ admits another $\\Gamma$ action, which is given by the\nformula $\\rho(g)(h) = hg^{-1}$ for $g,h \\in \\Gamma$. Both actions extend to\nunitary representations of $\\Gamma$ on the Hilbert space $\\ell^2 \\Gamma$,\nwhich we also denote by $\\lambda$ and $\\rho$. We denote the natural\northonormal basis of $\\ell^2 \\Gamma$ by $\\{\\delta_\\gamma \\st \\gamma \\in\n\\Gamma \\}$. Since the multiplication in $\\Gamma$ is associative, these\nactions commute. We define the (right) \\dfnterm{group von Neumann\nalgebra} of $\\Gamma$ to be the algebra of $\\gp$-equivariant operators,\n$$R(\\Gamma) := \\big\\{ T \\in B(\\ell^2 \\Gamma) \\st \\forall g \\in \\Gamma \\colon\n\\lambda(g)T=T\\lambda(g) \\big\\}\\,,$$\nand note that $\\rho(\\Gamma) \\subset R(\\Gamma)$. We use $\\rho$ also to\ndenote the linearization $\\rho \\colon \\bbC \\Gamma \\to R(\\Gamma)$, which is\na $*$-homomorphism, where $\\bbC \\Gamma$ denotes the complex group ring,\nwhich carries the natural involution that sends $g$ to $g^{-1}$ and is\ncomplex conjugation on the coefficients. The group von Neumann\nalgebra $R(\\Gamma)$ comes with a natural involution $T \\mapsto T^*$ (the\nadjoint map) and a \\dfnterm{trace}\ngiven by the formula \n$$\n\\tau(T) := \\langle T \\delta_o ,\\delta_o \\rangle\n\\,,\n$$\nwhere $o$ is the identity element of $\\gp$. The functional $\\tau \\colon\nR(\\Gamma) \\to \\bbC$ leads to the definition of \\dfnterm{spectral measure}:\nLet $T \\in R(\\Gamma)$ be a self-adjoint operator. There exists a unique\nprobability measure $\\mu_T$ on the interval $\\big[{-}\\|T\\|,\\|T\\|\\big] \\subset \\bbR$ such that\n$$\\forall n \\in \\bbN\\quad \\tau(T^n) = \\int t^n \\ d \\mu_T(t)\\, .$$\n\n\n\nNote that the action of $\\Gamma$ on $\\edge$ is isomorphic to the natural left\naction of $\\Gamma$ on $\\Gamma \\times S$, where we identify a pair $(\\gamma,s)\n\\in \\Gamma \\times S$ with the edge $(\\gamma,\\gamma s) \\in \\edge$. Abusing\nnotation, we denote the natural unitary action of $\\Gamma$ on $\\ell^2 \\edge$\nalso by $\\lambda$.\nWe define the von Neumann algebra of the Cayley diagram ${\\rm Cay}(\\Gamma,S)$ to be\n$$R(\\Gamma,S) := \\big\\{T \\in B(\\ell^2 \\edge) \\st \\forall g \\in \\Gamma\n\\colon \\lambda(g)T=T\\lambda(g) \\big\\}\\,.$$\n\n\\begin{lem} \\label{block} There is a natural identification\n$$R(\\Gamma,S)=M_S\\big(R(\\Gamma)\\big)\\,,$$ where $M_S(Z)$ denotes the $S\n\\times S$-matrices over a ring $Z$.\n\\end{lem}\n\\begin{proof}\nThe isomorphism is described as follows. 
For $s \\in S$, we denote the\northogonal projection onto $\\ell^2(\\{e \\in \\edge \\st e = (\\gamma,\\gamma s),\n\\gamma \\in \\Gamma \\})$ by $p_s$. Clearly, $p_s$ is $\\lambda$-equivariant\nand there is a natural $\\Gamma$-equivariant identification of $\\{e \\in\n\\edge \\st e = (\\gamma,\\gamma s), \\gamma \\in \\Gamma \\}$ with $\\Gamma$.\nIndeed, since ${\\rm Cay}(\\Gamma,S)$ is a Schreier graph, there is precisely\none edge starting at $\\gpe \\in \\Gamma$ with label $s \\in S$.\nLet now $s,t \\in S$.\nWith a slight abuse of notation, every operator $T \\in R(\\Gamma,S)$ determines\na matrix $(p_s T p_t)_{s,t \\in S}$ of elements in $R(\\Gamma)$.\nIt is easily checked that this identification preserves all the structure.\n\\end{proof}\n\nNote that the von Neumann algebra $R(\\Gamma,S)$ also comes equipped with\na natural trace $\\tau_S \\colon R(\\Gamma,S) \\to \\bbC$ given by $\\tau_S(T)\n:= \\sum_{s \\in S} \\tau(p_s T p_s)$ with $p_s$ being as defined in the\nproof of Lemma \\ref{block}; we call $\\tau_S$ the \\dfnterm{natural\nextension} of $\\tau$.\n\nLet now $G=(\\vertex,\\edge)$ be an $S$-labelled Schreier graph. We\nconsider the algebra $B(\\ell^2 \\vertex)$ of bounded operators on the\nHilbert space $\\ell^2 \\vertex$ and note that we have a natural\nidentification $B(\\ell^2 \\edge) = M_S(B(\\ell^2 \\vertex))$ as in the proof\nof the previous lemma. If $\\vertex$ is finite, then $B(\\ell^2 \\vertex)$\ncarries a natural normalized trace $$\\tr_\\vertex(T) :=\n\\frac{1}{|\\vertex| }\\sum_{v \\in \\vertex} \\langle T \\delta_v , \\delta_v\n\\rangle\\,.$$ Similarly, there is then a natural trace $\\tr_{\\edge}$ on the\nalgebra $B(\\ell^2 \\edge)$ when $\\edges$ is finite. Every element $w \\in\n\\bF_S$ determines a\nunitary operator $\\pi(w)$ on $\\ell^2(\\vertex)$ for each $S$-labelled\nSchreier graph\n$G=(\\vertex,\\edge)$. It is given by linearity and the formula\n\\begin{equation} \\label{defpi}\n\\forall w \\in \\bF_S \\ \\ \\forall v \\in \\vertex \\quad\n\\pi(w)(\\delta_v) := \\delta_{v.w^{-1}}\\,.\n\\end{equation}\nBy taking linear combinations, every element in the complex group ring $a\n\\in \\bbC \\bF_S$ determines a bounded operator $\\rho_G(a)$ in\n$B(\\ell^2 \\vertex)$. Note that if $G= {\\rm Cay}(\\Gamma,S)$, then\n$\\rho_G(a)=\\rho(\\pi(a))$, where $\\pi \\colon \\bbC \\bF_S \\to \\bbC\n\\Gamma$ denotes the natural homomorphism of group rings extending the map\nfrom $S$ to its image in $\\Gamma$.\n\nThe following lemma is well known \\cite{elekszabo}; let us include a proof for convenience.\n\n\\begin{lem} \\label{lem:sofic}\nLet\\\/ $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. Let $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\ngraphs whose limit is $G:={\\rm Cay}(\\gp,S)$. Then for each $a \\in \\bbC\n\\bF_{S}$, we have\n$$\\tau(\\rho_G(a)) = \\lim_{n \\to \\infty} \\tr_{\\vertex_n}(\\rho_{G_n}(a))\\,.$$\n\\end{lem}\n\\begin{proof} It is enough to prove the statement for $a = w \\in {\\mathbf\nF}_S$. Then the left-hand side is $1$ or $0$ depending on whether\n$\\pi(w)= o$ in $\\Gamma$ or not. The right-hand side equals\n$$\\lim_{n \\to \\infty}\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \\langle\n\\rho_{G_n}(w) \\delta_v , \\delta_v \\rangle$$ and thus measures the\nfraction of vertices that are fixed by the action of $\\rho_{G_n}(w)$. 
If\n$w^{-1} = s_1^{\\varepsilon_1} \\cdots s_n^{\\varepsilon_n}$ with\n$s_1,\\dots,s_n \\in S$ and $\\varepsilon_1,\\dots,\\varepsilon_n \\in \\{\\pm\n1\\}$, then this is the fraction of vertices $v \\in \\vertex_n$ for which an\noriented walk with labels $s_1^{\\varepsilon_1}, \\dots, s_n^{\\varepsilon_n}$\nends at $v \\in \\vertex_n$, i.e., if $v.w^{-1}=v$ or not. It is clear that this fraction converges to $1$\nor $0$ depending on whether the corresponding walk in $G$ starting at $o\n\\in \\Gamma$ ends at $o \\in \\Gamma$ or not. It ends at $o$ if and only if\n$\\pi(w) = o$ in $\\Gamma$. Hence, the left- and the right-hand sides are\nequal. This finishes the proof.\n\\end{proof}\n\n\n\\bsection{Tracial von Neumann algebras and their ultraproducts}{s.proofs}\n\nBefore we introduce the metric ultraproduct of tracial von Neumann\nalgebras, we shall recall some basic facts about von Neumann algebras. A\n\\dfnterm{von Neumann algebra} is defined to be a weakly closed unital\n$*$-subalgebra of the space $B(H)$ of bounded linear operators on some\nHilbert space $H$. \nNote that the weak topology here refers to the weak operator topology (WOT).\n\nIf $K \\subset B(H)$ is a self-adjoint subset, then we define the\n\\dfnterm{commutant} of $K$ as $K' := \\{T \\in B(H) \\st \\forall S \\in K \\\nST=TS \\}$. It is easy to see that $K'$ is a von Neumann algebra. Most von Neumann algebras arise naturally as the\ncommutant of some explicit set of operators. For example, we defined\n$R(\\Gamma)= \\{\\lambda(\\gamma) \\in B(\\ell^2 \\Gamma) \\st \\gamma \\in \\Gamma\n\\}'$. Von Neumann's Double Commutant Theorem says that this construction\nexhausts all von Neumann algebras; more specifically, $M \\subset B(H)$ is a\nvon Neumann algebra if and only if $M=M^*$ and $M=M''$.\n\nSince any weakly closed algebra is also norm closed in $B(H)$, a von\nNeumann algebra is in particular a $C^*$-algebra. Like $C^*$-algebras,\nvon Neumann algebras admit an abstract characterization. A $C^*$-algebra\n$M$ is $*$-isomorphic to a von Neumann algebra if and only if as a\nBanach space, $M$ is a dual Banach space; in this case, the\npre-dual $M_*$ is unique up to isometry. In case $M=B(H)$, the pre-dual\n$M_*$ is the\nBanach space of trace-class operators on $H$ with the natural pairing\nbetween $M$ and $M_*$ given by the trace. It is well known that the weak*\ntopology (seeing $B(H)$ as a dual Banach space) is nothing but the\nso-called ultra-weak topology. Moreover, if $M$ is a weakly closed\nsub-algebra of $B(H)$, then the natural weak* topology on $M = (M_*)^*$\ncan be identified with the ultra-weak topology inherited from $B(H)$.\n \nFollowing Dedekind's characterization (or rather definition) of an infinite set, a projection\n(which always means ``orthogonal projection\" for us) $P \\in M$ in a von\nNeumann algebra is called \\dfnterm{infinite} if it is equivalent in $M$ to a\nsubprojection of itself, i.e., if\nthere exists a partial isometry $U \\in M$ such that\n$U^*U=P$ and $UU^*$ is a proper subprojection of $P$. A projection\nthat is not infinite is called \\dfnterm{finite}. For our purposes, only\n\\dfnterm{finite} von Neumann algebras matter, and these are (by definition)\nthe ones in which every projection is finite. Note that $B(\\ell^2 \\vertex)$ is finite if and only if $\\vertex$ is finite.\n\nA certificate for finiteness of a von Neumann algebra $M$ is the\nexistence of a positive, faithful trace $\\tau \\colon M \\to \\bbC$. 
Here, a\n(linear) functional $\\tau \\colon M \\to \\bbC$ is called \\dfnterm{positive}\nif $\\tau(T^*T) \\geq 0$. A positive functional is called \\dfnterm{faithful} if $\\tau(T^*T)=0$ implies\n$T=0$. The functional $\\tau$ is called a \\dfnterm{trace} if\n$\\tau(TS)=\\tau(ST)$ for all $S,T \\in M$. For the sake of getting used to\nthe definitions, let us convince ourselves that a von Neumann algebra $M$\nwith a positive and faithful trace $\\tau \\colon M \\to \\bbC$ is finite.\nLet $P$ be a projection and $U$ be a partial isometry with $U^*U=P$. \nSuppose that $UU^*$ is a\nsubprojection of $P$. Then $Q:=P-UU^*$ is also a projection and hence\n$Q=Q^2=Q^*Q$. However, $\\tau(Q^*Q)=\\tau(Q) = \\tau(U^*U-UU^*) = 0$ and hence $Q=0$. This shows that $UU^*=P$. \nHence, $M$ is a finite von Neumann algebra. \n\nA functional $\\tau \\colon M \\to \\bbC$ is called\n\\dfnterm{normal} if it is ultra-weakly continuous; natural examples are\nsums of so-called vector states $\\tau(T):= \\sum_i \\langle T \\xi_i,\\xi_i\n\\rangle$ for $\\xi_1,\\dots,\\xi_n \\in H$. The functional \nis called \\dfnterm{unital} if\n$\\tau(1)=1$. \nWe call a pair $(M,\\tau)$ a \\dfnterm{tracial von Neumann\nalgebra} if $M$ is a von Neumann algebra and $\\tau \\colon M \\to \\bbC$ is\na normal, positive, faithful, and unital trace. One can show that $\\tau\n\\colon R(\\Gamma) \\to \\bbC$ is a normal, positive, faithful, and unital\ntrace. Hence, $R(\\Gamma)$ is a finite von Neumann algebra.\n\nA $*$-homomorphism $\\varphi \\colon (N,\\tau_N) \\to (M,\\tau_M)$ between\n$*$-algebras equipped with traces is called \\dfnterm{trace preserving} if\n$\\tau_N= \\tau_M \\circ \\varphi$. If $\\tau \\colon M \\to \\bbC$ is a normal,\npositive, faithful, and unital trace, then the norm $T \\mapsto\n\\tau(T^*T)^{1\/2}$ determines the ultra-weak topology on norm-bounded sets.\nIn particular, this implies that if $(N,\\tau_N)$ and $(M,\\tau_M)$ are\ntracial von Neumann algebras, $N_0 \\subset (N,\\tau_N)$ is a dense\n$*$-subalgebra, and $\\tilde \\varphi \\colon (N_0,\\tau_N \\restrict {N_0}) \\to\n(M,\\tau_M)$ is a trace-preserving $*$-homomorphism, then $\\tilde \\varphi$\nhas a unique extension to a trace-preserving $*$-homomorphism $\\varphi\n\\colon (N,\\tau_N) \\to (M,\\tau_M)$. For all these facts about von Neumann\nalgebras and more, see \\cite{takesaki} and the references therein.\n\n\nThe following construction of metric ultraproducts of von Neumann algebras\npreserves the class of tracial von Neumann algebras and plays a crucial\nrole in their study. \n\\begin{dfn} \\label{dfn:ultra}\nLet $(M_n,\\tau_n)$ be a sequence of tracial von Neumann algebras. Let $\\omega$ be a non-principal ultrafilter on $\\bbN$. 
We consider \n$$\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n) := \\big\\{ (T_n)_n \\st T_n \\in M_n, \\ \\sup\n\\{\\|T_n\\| \\st n \\in \\bbN\\}< \\infty \\big\\}$$\nand set\n$$J_{\\omega}:= \\big\\{ (T_n)_n \\in \\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n) \\st\n\\lim_{n \\to \\omega} \\tau_n(T_n^*T_n) = 0 \\big\\}\\,.$$\nIt is well known and easy to verify that $J_{\\omega} \\subset\n\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$ is a two-sided $*$-ideal.\nThe \\dfnterm{metric ultraproduct} of $(M_n,\\tau_n)$ is defined to be the\nquotient\n$$\\prod_{n \\to \\omega} (M_{n},\\tau_{n}) :=\n\\frac{\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)}{J_{\\omega}} \\quad \\mbox{with\ntrace}\n\\quad \\tau_{\\omega}\\big((T_n)_n+J_{\\omega}\\big) := \\lim_{n \\to \\omega}\n\\tau_n(T_n)\\,.$$\n\\end{dfn}\n\nIt has been shown by Connes \\cite[Section I.3]{connes} that $(\\prod_{n \\to\n\\omega} (M_{n},\\tau_{n}),\\tau_{\\omega})$ is again a von Neumann algebra\nacting on a concrete Hilbert space ${}_\\omega H$. This is not\nstraightforward, since---if $M_n \\subset B(H_n)$---the ultraproduct\n$\\prod_{n \\to \\omega} (M_{n},\\tau_{n})$ does {\\em not} act on the Hilbert space arising as the ordinary\nBanach space ultraproduct $H_{\\omega}:= \\prod_{n \\to \\omega} H_n$ of the\nsequence of Hilbert spaces $(H_n)_n$. One can show that $\\tau_{\\omega}\n\\colon \\prod_{n \\to \\omega} (M_{n},\\tau_{n}) \\to \\bbC$ is a normal,\npositive, faithful, and unital trace.\n\nIn the context of the previous definition, we shall say that a sequence\n$(T_n)_n$ of operators with $T_n \\in M_n$ \\dfnterm{represents} some\noperator $T$ in\nthe ultraproduct if $(T_n)_n \\in \\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$ and\n$T$ is its residue class modulo $J_{\\omega}$. Since $J_{\\omega}$ is a\ntwo-sided $*$-ideal, we get: If $(T_n)_n$ represents $T$ and $(S_n)_n$\nrepresents $S$, then $(T_n^*)_n$ represents $T^*$, $(T_n+S_n)_n$ represents\n$T+S$, and $(T_nS_n)_n$ represents $TS$. The following lemma is only\nslightly more involved. \nRecall that if $T$ is a self-adjoint operator and $f \\colon \\bbR \\to \\bbR$ is\ncontinuous, then $f(T)$ is defined by using the spectral decomposition of\n$T$.\n\n\\begin{lem} \\label{lem:app}\nLet $(T_n)_n$ be a sequence of self-adjoint operators in\n$\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$ representing some operator $T$ in the ultraproduct. Let $f \\colon \\bbR\n\\to \\bbR$ be a continuous function. Then the sequence $(f(T_n))_n$ represents $f(T)$. Let $(S_n)_n$ be another sequence in $\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$ of self-adjoint operators representing some operator $S$ in the ultraproduct. If $S_n \\leq T_n$ for all $n \\in \\bbN$, then $S \\leq T$.\n\\end{lem}\n\n\\begin{proof}\nBy the above, the first statement is clear for polynomial functions. It follows\nfor general continuous functions by the Stone-Weierstrass approximation\ntheorem. Note that $S_n \\leq T_n$ if and only if $T_n - S_n = C_n^*C_n$\nfor some $C_n \\in M_n$. Since $(S_n)_n, (T_n)_n \\in\n\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$, we also have $(C_n)_n \\in\n\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$. Let $C$ be the operator that is\nrepresented by the sequence $(C_n)_n$. Then $T-S = C^*C$ and hence $S\n\\leq T$. 
This finishes the proof.\n\\end{proof}\n\n\\begin{remark} \\label{specmeas}The definition of metric ultraproduct is made in such a way that it is\ncompatible with spectral measures in the following sense.\nIf $(T_n)_n$ is a sequence of\nself-adjoint operators in $\\ell^{\\infty}(\\bbN,(M_n,\\tau_n)_n)$ that\nrepresents some self-adjoint operator $T$ in the ultraproduct,\nthen by definition and the above remarks, $\\lim_{n \\to \\omega}\n\\tau_n(T_n^k) = \\tau_\\omega(T^k)$ for all $k \\ge 0$, whence\n$\\lim_{n \\to \\omega} \\mu_{T_n} = \\mu_T$ in the weak* topology.\n\\end{remark}\n\nThe following proposition summarizes and extends results from \\cite{elekszabo}.\n\n\\begin{pro} \\label{emb}\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. Let $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\ngraphs whose limit is $G:={\\rm Cay}(\\gp,S)$. There exist trace-preserving\nembeddings\n$$\\iota \\colon (R(\\Gamma),\\tau) \\to \\prod_{n \\to \\omega} (B(\\ell^2\n\\vertex_n),\\tr_{\\vertex_n}) \\quad \\mbox{and} \\quad \\iota_S\n\\colon(R(\\Gamma,S),\\tau_S) \\to \\prod_{n \\to \\omega}(B(\\ell^2\n\\edge_n),\\tr_{\\edge_n})\\,.$$\nMoreover, if $(T_n)_n \\in \\ell^{\\infty}( \\bbN, (B(\\ell^2 \\vertex_n),{\\rm\ntr}_{\\vertex_n}))$ represents $\\iota(T)$ for some $T \\in R(\\Gamma)$, then\n\\begin{equation} \\label{eq1}\n\\lim_{n \\to \\omega} |\\vertex_n|^{-1} \\sum_{v \\in \\vertex_n}\n|\\iprod{T_n \\delta_{v.\\gamma}, \\delta_{v.\\gamma'}} - \\iprod{T\n\\delta_{o.\\gpe}, \\delta_{o.\\gpe'}}| = 0\n\\end{equation}\nfor all $\\gpe, \\gpe' \\in \\bF_S$. \nSimilarly, if $(T_n)_n \\in\n\\ell^{\\infty}( \\bbN, (B(\\ell^2 \\edge_n),\\tr_{\\edge_n}))$\nrepresents $\\iota_S(T)$ for some $T \\in R(\\Gamma,S)$, then\n\\begin{equation} \\label{eq1b}\n\\lim_{n \\to \\omega} |\\vertex_n|^{-1} \\sum_{v \\in \\vertex_n}\n|\\iprod{T_n \\delta_{(v.\\gamma,v.\\gamma s)},\\delta_{(v. \\gamma',v.\\gamma's')}} - \\iprod{T\n\\delta_{(o.\\gamma,o.\\gamma s)}, \\delta_{(o.\\gamma', o.\\gamma's')}}| = 0\n\\end{equation}\nfor all $\\gpe, \\gpe' \\in \\bF_S$ and $s,s' \\in S$.\n\n\\end{pro}\n\nEmbedding theorems like the preceding proposition were first proved by\nElek and Szab\\'o; see, for example, \\cite[Theorem 2]{elekszabo}. However, our\nemphasis is on the special features of embeddings coming from a sofic\napproximation. In particular, we are interested in Equations \\eqref{eq1} and\n\\eqref{eq1b}, which are not just features of every trace-preserving\nembedding.\n\n\n\\begin{proof}[Proof of Proposition \\ref{emb}:]\nFor each $n \\in \\bbN$, we consider the $*$-homomorphism $$\\rho_{G_n}\n\\colon \\bbC \\bF_S \\to (B(\\ell^2 \\vertex_n),{\\rm\ntr_{\\vertex_n}})\\,.$$ This sequence induces a $*$-homomorphism\n$$\\rho_\\omega \\colon \\bbC \\bF_S \\to \\prod_{n \\to \\omega}\n(B(\\ell^2 \\vertex_n),\\tr_{\\vertex_n})\\,.$$ We denote the canonical\ntrace on $\\prod_{n \\to \\omega} (B(\\ell^2 \\vertex_n),{\\rm\ntr}_{\\vertex_n})$ by $\\tr_{\\omega} := \\lim_{n \\to \\omega} {\\rm\ntr}_{\\vertex_n}$. Since $(G_n)_n$ is a sofic approximation to $\\Gamma$, we\nobtain\n$$\\tau_{\\omega}(\\rho_\\omega(w)) = \\begin{cases} 1 & \\mbox{if }\\pi(w)=e \\mbox{ in\n$\\Gamma$}, \\\\\n0 & \\mbox{if }\\pi(w)\\neq e \\mbox{ in $\\Gamma$}. 
\\end{cases}$$\nAs the result depends only on $\\pi(w)$,\nthis shows that $\\rho_\\omega$ descends to a $*$-homomorphism\n$\\rho_\\omega \\colon \\bbC \\Gamma \\to \\prod_{n \\to \\omega} (B(\\ell^2\n\\vertex_n),\\tr_{\\vertex_n})$ that preserves the canonical trace on\n$\\bbC \\Gamma$. Note also that $\\rho(\\bbC \\Gamma) \\subset R(\\Gamma)$ is\nweakly dense. Indeed, this is a standard fact and follows, for example,\nfrom the Commutation Theorem \\cite[Theorem 1 on page 80]{Dixmier}. If there\nis no risk of confusion, we shall sometimes identify $\\bbC\\Gamma$ with its image in $R(\\Gamma)$. \nIt is a standard fact---see the remarks before Definition\n\\ref{dfn:ultra}---that $\\rho_\\omega$ has an extension $\\iota$ to the group von Neumann algebra $R(\\Gamma)$ of $\\Gamma$.\n\nThe second embedding is obtained after passing to matrix algebras\n$M_S(\\cbuldot)$ via the natural isomorphisms $M_S R(\\Gamma) = R(\\Gamma,S)$ and $M_S(B(\\ell^2 \\vertex_n)) = B(\\ell^2 \\edge_n)$. \n\nWe shall now prove Equation \\eqref{eq1}. Upon replacing $T$ by\n$\\rho_G(\\gamma')T \\rho_G(\\gamma^{-1})$ and $T_n$ by\n$\\rho_{G_n}(\\gamma')T_n \\rho_{G_n}(\\gamma^{-1})$, we may assume that\n$\\gamma=\\gamma'=o$. We first study the case $T=0$. We need to show that \n\\begin{equation} \\label{eq2}\n\\lim_{n \\to \\omega} |\\vertex_n|^{-1} \\sum_{v \\in \\vertex_n}\n|\\iprod{T_n \\delta_{v}, \\delta_{v}}| = 0.\n\\end{equation}\nWe may write $T_n := \\sum_{k=0}^3 i^k T^{(k)}_n$ with $T_n^{(k)}$ positive\nfor each $0 \\leq k \\leq 3$ and $n \\in \\bbN$. Here, e.g., $T_n^{(0)} -\nT_n^{(2)} = \\Re (T_n) := (T_n + T_n^*)\/2$ are self-adjoint operators that\nform a sequence representing $(T + T^*)\/2 = 0$, and $T_n^{(0)} = (\\Re\nT_n)_+$ is a function of $\\Re T_n$.\nBy Lemma \\ref{lem:app} and the remarks preceding it, each\nsequence $(T_n^{(k)})_n$ represents zero in the ultraproduct. Hence,\nEquation \\eqref{eq2} holds for $(T_n^{(k)})_n$ for $0 \\leq k \\leq 3$\nbecause all scalar products are positive already, and hence \\eqref{eq2}\nholds for $(T_n)_n$\nby the triangle inequality.\n\nWe conclude again from the triangle inequality that if Equation \\eqref{eq1} holds for one sequence $(T_n)_n$ representing some operator $\\iota(T)$ in the ultraproduct, then it holds for every sequence representing $\\iota(T)$. Thus, whether or not Equation \\eqref{eq1} holds is a property of $\\iota(T) \\in \\prod_{n \\to \\omega} (B(\\ell^2 \\vertex_n),\\tr_{\\vertex_n})$ alone.\n\nIt is clear that Equation \\eqref{eq1} holds for $\\iota(\\rho(\\gamma))$ for\n$\\gamma \\in \\Gamma$, as follows directly from the fact that the limit of\n$(G_n)_n$ is $G$. It\nis also clear that the set of operators $T \\in R(\\Gamma)$ for which\nEquation \\eqref{eq1} holds is closed under addition and multiplication by\nscalars. Thus, Equation \\eqref{eq1} holds for the complex group ring\n$\\bbC \\Gamma \\subset R(\\Gamma)$ and every ultrafilter. 
Moreover, our\nconstruction shows that for each $T \\in \\bbC \\Gamma$, there exists a\nsequence $(T_n)_n \\in \\ell^{\\infty}( \\bbN, (B(\\ell^2 \\vertex_n),{\\rm\ntr_{\\vertex_n}}))$ such that\n$$\\lim_{n \\to \\infty} |\\vertex_n|^{-1} \\sum_{v \\in \\vertex_n}\n|\\iprod{T_n \\delta_{v}, \\delta_{v}} - \\iprod{T\n\\delta_o, \\delta_o}| = 0\n\\,.$$\nMoreover, $(T_n)_n$ represents $\\iota(T)$ in the ultraproduct with respect to any ultrafilter.\nNow, let $T \\in R(\\Gamma)$ and let us choose a sequence $(S_i)_i$ with\n$S_i \\in \\bbC \\Gamma$, $S_i \\to T$ in the weak operator topology and $\\sup_i \\|S_i\\| \\leq\n\\|T\\|$. For each $i$, let $(S_{i,n})_n$ be a sequence that represents\n$\\iota(S_i)$ in the ultraproduct. It is now clear that if the sequence\n$(k_n)_n$ of natural numbers increases slowly enough, then $(S_{k_n,n})_n$\nrepresents $\\iota(T)$ in the ultraproduct and that Equation \\eqref{eq1}\nholds for this sequence. Since we have shown that Equation \\eqref{eq1}\ndepends only on $\\iota(T)$, it follows that Equation \\eqref{eq1} holds for\nany sequence $(T_n)_n$ that represents $\\iota(T)$ in the ultraproduct.\nThis proves Equation \\eqref{eq1} for every $T \\in R(\\Gamma)$.\nThe proof of the second equality is similar. This finishes the proof of the proposition.\n\\end{proof}\n\n\n\nThe following lemma shows that our previous result is useful to connect the approximation of some operator $\\iota(T)$ by a sequence $(T_n)_n$ to an approximation of the associated determinantal probability measures.\n\n\\procl l.ultradetl\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. Let $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\ngraphs whose limit is $G:={\\rm Cay}(\\gp,S)$. Let $\\iota$ and $\\iota_S$\nbe trace-preserving embeddings as in Proposition \\ref{emb}.\nLet $T \\in R\\gp$ be such that $0 \\leq T \\leq I$ and suppose that $(T_n)_n$\nrepresents $\\iota(T)$ in the ultraproduct $\\prod_{n \\to \\omega}\n(B(\\ell^2 \\vertex_n),\\tr_{\\vertex_n})$ with $0 \\leq T_n \\leq I$ for\neach $n \\in \\bbN$.\nThen $\\lim_{n \\to \\omega} \\P^{T_n} = \\P^T$ in the random weak topology,\nin other words,\n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n\\P^{T_n}[v.A \\subset \\qba]\n=\n\\P^{T}[o.A \\subset \\qba]\n$$\nfor all finite $A \\subset \\bF_S$.\nLikewise, \nif $T \\in R(\\gp, S)$ is such that $0 \\leq T \\leq I$ and $(T_n)_n$\nrepresents $\\iota_S(T)$ in the ultraproduct $\\prod_{n \\to \\omega}\n(B(\\ell^2 \\edge_n),\\tr_{\\edge_n})$ with $0 \\leq T_n \\leq I$ for\neach $n \\in \\bbN$,\nthen $\\lim_{n \\to \\omega} \\P^{T_n} = \\P^T$ in the random weak topology;\nin other words, given a finite set $A_s \\subset {\\mathbf F_S}$ for each\n$s \\in S$, we have\n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n\\P^{T_n}\\Big[\\bigcup_{s \\in S} (v, v.s).A_s \\subset \\qba\\Big]\n=\n\\P^{T}\\Big[\\bigcup_{s \\in S} (o, s).A_s \\subset \\qba\\Big]\n\\,.\n$$\n\\endprocl\n\n\\rproof\nWe prove the first statement, as the second is similar and just involves\nmore notation.\n\nBy definition of determinantal probability measures, the desired identity\nis the same as\n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n\\det (T_n \\restrict v.A)\n=\n\\det (T \\restrict o.A)\n\\,.\n$$\nBecause $G$ is the limit of $(G_n)_n$, we may assume that $|o.A| = |A|$,\ni.e., that $\\pi$ is injective on $A$.\nDefine \n$$\na_n(v, \\gpe, \\gpe') \n:=\n|\\iprod{T_n 
\\delta_{v.\\gamma}, \\delta_{v.\\gamma'}} - \\iprod{T\n\\delta_\\gpe, \\delta_{\\gpe'}}| \n\\,.\n$$\nBy Equation (\\ref{eq1}), we have \n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n\\sum_{\\gpe, \\gpe' \\in A}\na_n(v, \\gpe, \\gpe')\n=\n0\n\\,.\n$$\nSince $a_n(v, \\gpe, \\gpe') \\le 2$, it follows that\n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n\\prod_{\\gpe' \\in A} \\sum_{\\gpe \\in A}\na_n(v, \\gpe, \\gpe')\n=\n0\n\\,,\n$$\nwhence by Hadamard's inequality, \n$$\n\\lim_{n \\to \\omega}\n\\frac{1}{|\\vertex_n| }\\sum_{v \\in \\vertex_n} \n|\\det (T_n \\restrict v.A)\n-\n\\det (T \\restrict o.A)|\n=\n0\n\\,,\n$$\nwhich is stronger than the limit we desired.\n\\Qed\n\n\nThe following lemma is well known \\cite{Douglas}, but we provide a proof\nfor the convenience of the reader, as it is an essential tool for our\nproofs.\n\n\\begin{lem} \\label{prev}\nLet $H$ be a Hilbert space, $M \\subseteq B(H)$ be a von Neumann algebra, and $S,T \\in M$ with $0 \\leq S \\leq T \\leq I$. Then there exists a positive contraction $C \\in M$ with $T^{1\/2}CT^{1\/2}=S$.\n\\end{lem}\n\\begin{proof}\nLet $H_0 \\subset H$ be the closure of the image of $T$ and note that $H_0 =\n\\ker(T)^{\\perp}$. Since the orthogonal projection onto $H_0$ is contained in $M$, we may assume without loss of generality that $H_0=H$. The\nspace $H_0':=\\{T^{1\/2}\\xi \\st \\xi \\in H \\}$ is dense in $H_0$. We define\n$D$ to be $D(T^{1\/2}\\xi) := S^{1\/2}\\xi$ on $H_0'$. Now, since $0 \\leq S\n\\leq T$, it is easy to see that $D$ is well defined and extends to a\ncontraction. Indeed, $$\\|D(T^{1\/2}\\xi)\\|^2 = \\langle\nD(T^{1\/2}\\xi),D(T^{1\/2}\\xi) \\rangle =\\langle S^{1\/2}\\xi, S^{1\/2}\\xi \\rangle\n= \\langle S \\xi,\\xi \\rangle \\leq \\langle T \\xi,\\xi \\rangle = \\|T^{1\/2}\n\\xi\\|^2\\,.$$\nThe inequality shows that $D$ is well defined; the whole\nshows that $D$ is contractive on $H_0'$ and hence has a unique contractive\nextension to $H$. It is also clear that $DT^{1\/2} = S^{1\/2}$ on $H$. For every operator $U \\in M'$, we get\n$$UDT^{1\/2} = US^{1\/2} = S^{1\/2}U = DT^{1\/2}U = DUT^{1\/2}.$$ Hence, $UD=DU$ on the image of $T^{1\/2}$, which we assume is dense in $H$. This implies $D \\in M''$ and hence $D \\in M$, since $M$ is a von Neumann algebra.\nWe can now set $C:=D^*D \\geq 0$, note that $C \\in M$, and that we have $T^{1\/2} CT^{1\/2} = T^{1\/2} D^*DT^{1\/2} = S^{1\/2} S^{1\/2} = S$. It is obvious that $C$ is also a contraction. This finishes the proof of the lemma.\n\\end{proof}\n\nThe previous lemma can now be used to show that self-adjoint operators $S,T$ in the ultra-product with $0\\leq S \\leq T \\leq I$ can be approximated by sequences satisfying the same relation.\n\n\\begin{lem} \\label{getlim}\nLet $(M_n,\\tau_n)$ be a sequence of tracial von Neumann algebras. Let $S,T\n\\in \\prod_{n \\to \\omega} (M_{n},\\tau_{n})$ be operators such that $0 \\leq S\n\\leq T \\leq I$. Then there exist sequences $(T_n)_n$ and $(S_n)_n$ with\n$T_n,S_n \\in M_n$ that represent $T$ and $S$ in the ultraproduct and\nsuch that $0 \\leq S_n \\leq T_n \\leq I$ for each $n \\in \\bbN$.\n\\end{lem}\n\\begin{proof}\nFirst of all, by Lemma \\ref{prev} there exists a positive contraction $C\\in \\prod_{n \\to \\omega} (M_{n},\\tau_{n})$ such that $T^{1\/2}C T^{1\/2} = S$.\nLet $(T_n)_n$ be some representative of $T$. 
By Lemma \\ref{lem:app},\n$\\big((T_n^* T_n)^{1\/2}\\big)_n$ represents $(T^* T)^{1\/2} = T$, so by\nreplacing $T_n$ with\n$(T_n^*T_n)^{1\/2}$, we may assume\nthat $T_n \\geq 0$. Let $\\epsilon_n := \\mu_{T_n}\\big((1,\\infty)\\big)$, where $\\mu_{T_n}$\ndenotes the spectral measure of $T_n$. Since $T \\leq I$, we\nknow that $\\epsilon_n \\to 0$ as $n \\to \\infty$. Let $Q_n$ be the spectral projection onto the\ninterval $[0,1]$; thus, $\\tau_n(Q_n) = 1 - \\epsilon_n \\to 1$ as $n \\to\n\\infty$. This implies that $(Q_n)_n$ represents $I$ in the ultraproduct.\nHence, we may replace $T_n$ by $Q_n T_n$ to obtain a sequence $(T_n)_n$\nthat represents $T$ and satisfies $0 \\leq T_n \\leq I$ for all $n \\in\n\\bbN$. The same argument applies to $0 \\leq C \\leq I$ and we obtain a\nsequence $(C_n)_n$ representing $C$ such that $0 \\leq C_n \\leq I$ for all $n \\in \\bbN$. Moreover, the sequence $(S_n)_n$ with $S_n:=T_n^{1\/2} C_n T_n^{1\/2}$ represents $S=T^{1\/2}CT^{1\/2} $. Now\n$0 \\leq C_n \\leq I$ implies $0 \\leq S_n \\leq T_n$. This finishes the proof.\n\\end{proof}\n\n\\bsection{Existence of invariant monotone couplings}{s.existence}\n\nWe shall now use the previous results to show the existence of certain\n$\\Gamma$-invariant couplings between $\\Gamma$-invariant determinantal\nmeasures on $\\Gamma$ itself, as well as on the edge set of any Cayley graph\nof $\\Gamma$. Recall that a positive contraction $Q$ in $R(\\Gamma)$ leads to\na $\\Gamma$-invariant determinantal measure on $\\Gamma$, which we denote by\n$\\P^Q$. Note also that\n$\\P^{Q_2}$ stochastically dominates $\\P^{Q_1}$ if $0 \\leq Q_1\n\\leq Q_2 \\leq I$. This is stated in \\rref t.dominate\/ in the finite case\nand is shown to imply the same for the infinite case in \\rref b.Lyons:det\/.\n\n\\procl t.gencouple\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. \nIf $0 \\le Q_1 \\le Q_2 \\le I$ in $R(\\gp)$ or in $R(\\gp, S)$, then there exists\na $\\gp$-invariant (sofic) monotone coupling of\\\/ $\\P^{Q_1}$ and $\\P^{Q_2}$.\n\\endprocl\n\n\\rproof\nLet $(G_n)_n$ be a sequence of finite $S$-labelled Schreier graphs whose\nlimit is $G:={\\rm Cay}(\\gp,S)$.\nWe prove the theorem for operators in $R(\\gp)$, as the other case is\nessentially identical.\nBy Proposition \\ref{emb} and Lemma \\ref{getlim}, there exist $0 \\le S_n \\le\nT_n \\le I$ in $B(\\ell^2(\\verts_n))$ so that $(S_n)_n$ and $(T_n)_n$\nrepresent $\\iota(Q_1)$ and $\\iota(Q_2)$ in the ultraproduct.\nLet $\\mu_n$ be a monotone coupling of $\\P^{S_n}$ with $\\P^{T_n}$, as obtained from Strassen's theorem \\cite{Strassen}. \nAs explained in \\cite[Example 10.3]{AL:urn}, the random weak limit of\n$(\\mu_n)_n$ (perhaps after\ntaking a subsequence) is a $\\gp$-invariant coupling of $\\P^{Q_1}$ and\n$\\P^{Q_2}$, which is necessarily monotone. 
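To fix ideas (a degenerate illustration only), if $\\vertex_n$ consisted of a single vertex $v$ and $S_n = (p)$, $T_n = (q)$ with $0 \\le p \\le q \\le 1$, then $\\P^{S_n}$ and $\\P^{T_n}$ are Bernoulli measures and one such monotone coupling is\n$$\n\\mu_n\\big(\\{(\\varnothing, \\varnothing)\\}\\big) = 1-q\\,, \\qquad \\mu_n\\big(\\{(\\varnothing, \\{v\\})\\}\\big) = q-p\\,, \\qquad \\mu_n\\big(\\{(\\{v\\}, \\{v\\})\\}\\big) = p\n\\,,\n$$\nwhich has marginals $\\P^{S_n}$ and $\\P^{T_n}$ and is supported on pairs $(A,B)$ with $A \\subset B$.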
However, we give another proof\nin the framework we are using here.\n\nNote that\n$\\mu_n$ is a probability measure on $2^{\\vertex_n} \\times 2^{\\vertex_n}$.\nIn general, given a set $V$, let $V'$ be a disjoint copy of $V$ with\nbijection $\\phi \\colon V \\to V'$ and\nidentify elements of $2^V \\times 2^V$ with subsets of $V \\cup V'$.\nThus, elementary cylinder events are identified with events of the form\n$\\mathcal A = \\{A \\subset V \\cup V' \\st A \\supset A_1,\\, A \\cap A_2 =\n\\varnothing\\}$ for finite $A_1, A_2 \\subset V \\cup V'$.\nLet $A_1,A_2 \\subset \\gp \\cup \\gp'$.\nLet $\\mathcal A := \\{A \\subset \\gp \\cup \\gp' \\st A \\supset A_1,\\, A \\cap\nA_2 = \\varnothing\\}$.\nLet $\\sigma \\colon \\Gamma \\to \\bF_S$ be a section of the natural\nsurjection $\\pi \\colon \\bF_S \\to \\Gamma$. \nFor $v \\in \\vertex_n$, write\n$$\nA_{i,n,v} := v.\\sigma(A_i \\cap \\gp) \\cup \\phi(v.\\sigma(A_i \\cap \\phi(\\gp)))\n\\subset \\vertex_n \\cup \\vertex_n'\n$$\nfor $i \\in \\{1,2\\}$ and define\n$$\\mathcal A_{v,n}:=\n\\left\\{ A \\in 2^{\\vertex_n} \\times 2^{\\vertex_n} \\st A_{1,n,v} \\subset A,\nA_{2,n,v} \\cap A = \\varnothing \\right\\} \\subset 2^{\\vertex_n} \\times\n2^{\\vertex_n}\\,.$$\nWe define\n$$\\tilde \\mu(\\mathcal A) := \\lim_{n \\to \\omega} \\frac1{|\\vertex_n|}\\sum_{v\n\\in \\vertex_n}\\mu_n(\\mathcal A_{v,n})\\,.$$\nSince ultralimits are finitely additive, this extends to define\na consistent measure on cylinder events, whence by Kolmogorov's Extension\nTheorem, \nthere exists a unique measure $\\mu$ on the Borel $\\sigma$-algebra of\n$2^{\\gp} \\times 2^{\\gp}$ that extends $\\tilde \\mu$. We claim that\n$\\mu$ is the desired coupling.\nIt is a basic property of the formula\nin the definition of $\\tilde \\mu$ that the action of $\\bF_S$ on\n$G_n$ just permutes the summands of the right-hand side. Hence, we conclude\nthat $\\tilde \\mu$ is $\\Gamma$-invariant. Uniqueness of the extension in\nKolmogorov's Extension Theorem implies that $\\mu$ is also\n$\\Gamma$-invariant. It follows from \\rref l.ultradetl\/\nthat the marginals of\n$\\mu$ are just $\\P^{Q_1}$ and $\\P^{Q_2}$. Finally, it is a monotone\ncoupling because each $\\mu_n$ is monotone.\nThis finishes the proof.\n\\end{proof}\n\nAs a special case, we obtain the following:\n\n\\procl c.sfcouple\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. \nThere exists a $\\Gamma$-invariant monotone coupling between $\\wsf_G \\dom\n\\fsf_G$ on the associated Cayley graph.\n\\endprocl\n\n\\rproof\nIt remains only to pass from the Cayley diagram to the Cayley graph.\nEvery edge of the Cayley graph is doubled in the diagram.\nHowever, recall that in order to define the spanning forest measures, one\nmust first choose an orientation for each unoriented edge.\nThus, we may simply choose one of each pair in the diagram to be the\norientation of the corresponding unoriented edge. If we ignore the other\nedge, then the corresponding determinantal probability measures are\nprecisely the ones we want when we identify each chosen oriented edge with\nits corresponding unoriented edge. \n\\Qed\n\nThe preceding corollary was proven by Bowen for residually amenable groups \\cite{bowen}. Elek and Szab\\'o \\cite{elekszabo2} gave examples of finitely generated sofic groups that are not residually amenable. Later, Cornulier gave the first examples of finitely presented groups that are sofic but not residually amenable (or even limits of amenable groups) \\cite[Corollary 3]{cornulier}. 
\n\nSimilar reasoning shows, e.g., that if $0 \\le Q_1 \\le \\cdots \\le Q_r \\le I$\nin $R(\\Gamma)$, then there exists a $\\Gamma$-invariant sofic coupling of\nall $\\P^{Q_i}$ simultaneously that is monotone for each successive pair\n$i$, $i+1$.\n\n\\bsection{Free uniform spanning forest measures as limits over a sofic\napproximation}{s.approxim}\n\nFor certain determinantal measures we can say more. Let us first\nintroduce some more notation.\nNote that for $d \\in \\bbN$, the embedding $(R(\\Gamma),\\tau) \\subset\n\\prod_{n \\to \\omega} (B(\\ell^2 \\vertex_n),\\tr_{\\vertex_n})$ gives\nrise to an embedding $$(M_dR(\\Gamma),\\tau^{(d)}) \\subset \\prod_{n \\to \\omega}\n(M_dB(\\ell^2 \\vertex_n), \\tr^{(d)}_{\\vertex_n})\\,,$$ where ${\\rm\ntr}^{(d)}_{\\vertex_n}$ denotes the natural extension of the trace ${\\rm\ntr}_{\\vertex_n}$ to $M_dB(\\ell^2 \\vertex_n)$. Similarly, we denote the natural\nextension of $\\tau$ to $M_d R(\\Gamma)$ by $\\tau^{(d)}$. \nLet $G=(\\vertex,\\edge)$ be an $S$-labelled Schreier graph. We shall\nconsider the natural extension of $\\rho_{G}$ (defined after \\eqref{defpi})\nto $$\\rho_G^{(d)} \\colon M_d\n\\bbC \\bF_S \\to M_d B(\\ell^2 \\vertex)\\,.$$\n\nThe following theorem is a variant of L\\\"uck's approximation theorem. In\nthe generality that we need, it was first proved by Elek and Szab\\'o in\n\\cite[Proposition 6.1(a)]{elekszabo}. \nLet us state what we need in our notation.\n\n\\begin{thm}[Elek-Szab\\'o] \\label{eleksz}\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. Let $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\ngraphs whose limit is $G:={\\rm Cay}(\\gp,S)$. Let $d \\in \\bbN$ and $a \\in\nM_d \\Z \\bF_S$. Let $P_n \\in M_d B(\\ell^2 \\vertex_n)$ denote the\nprojection onto the kernel of $\\rho^{(d)}_{G_n}(a)$ and $P \\in M_d R\n\\Gamma$ denote the projection onto the kernel of $\\rho^{(d)}_G(a)$. Then\n$$\\lim_{n \\to\\infty} \\tr^{(d)}_{\\vertex_n}(P_n) = \\tau^{(d)}(P)\\,.$$\n\\end{thm}\n\nThe statement of the previous theorem is not just a consequence of weak*\nconvergence of spectral measures. The proof uses in an essential way the\nintegrality of coefficients of $a \\in M_d \\Z \\bF_S$. The first\nresults of this form were obtained by L\\\"uck in \\cite{lueck}, and over the\nyears they inspired many results of the same type.\nAnalogues of L\\\"uck's approximation theorem in the context of convergent sequences of finite graphs were studied in \\cite{abertviragthom}.\n\n\nNote that Proposition 6.1 in \\cite{elekszabo}\ncontains a part (b), which asserted that \n\\begin{equation} \\label{claimb}\n\\int_{0^+}^{\\infty} \\log (t) \\ d \\mu_{|\\rho^{(d)}_{G_n}(a)|}(t) \\to \\int_{0^+}^{\\infty} \\log (t) \\ d \\mu_{|\\rho^{(d)}_{G}(a)|}(t) \\quad \\mbox{as } n \\to \\infty,\\end{equation}\nin a slightly more general form not assuming that the approximation is given by Schreier graphs; see \\cite{elekszabo}. \nThis part of the claim remained unproven in \\cite{elekszabo}. Very\nrecently, it was discovered that \\cite[Proposition 6.1(b)]{elekszabo} is\nactually wrong (in the form more general than \\eqref{claimb}).\nIt was shown independently by Lov\\'asz and by\nGrabowski-Thom that already the Cayley diagram of $G=\\Z$ with respect to some\nspecific multi-set of generators admits a sofic approximation\n$(G_n)_n$---albeit not by Schreier graphs---so that $|\\vertex(G_n)|^{-1}\n\\log | \\!\\det A(G_n)|$ does not converge, where $A(G_n)$ denotes the\nadjacency matrix of $G_n$. 
It is still possible that \\eqref{claimb} holds\nas written here.\n\n\\begin{cor} \\label{soficapp}\nLet\\\/ $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. Let $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\ngraphs whose limit is $G:={\\rm Cay}(\\gp,S)$. Let $d \\in \\bbN$ and $a \\in\nM_d \\Z \\bF_S$. Let $P_n \\in M_d B(\\ell^2 \\vertex_n)$ denote the\nprojection onto the kernel of $\\rho^{(d)}_{G_n}(a)$ and $P \\in M_d R\n\\Gamma$ denote the projection onto the kernel of $\\rho^{(d)}_G(a)$. Then the\nsequence $(P_n)_n$ represents $\\iota_S(P) \\in \\prod_{n \\to \\omega}\n(M_dB(\\ell^2 \\vertex_n),\\tr^{(d)}_{\\vertex_n})$.\n\\end{cor}\n\\begin{proof} Since $\\ker(T^*T)=\\ker(T)$ for any operator $T$ on a Hilbert\nspace, we may assume without loss of generality that $a = b^*b$ for some $b\n\\in M_d\\bbC \\bF_s$.\nWe set $T_n:= \\rho^{(d)}_{G_n}(a)$ and $T:= \\iota_S(\\rho^{(d)}(a))$. It is\nclear from Corollary \\ref{emb} that the sequence $(T_n)_n$ represents $T$.\nSet $c:= \\sum_{i,j=1}^d \\|a_{i,j}\\|_1$,\nwhere $a_{i,j}$ denotes the $(i,j)$-entry of the matrix $a$ and\n$\\|\\sum_{\\gamma} \\alpha_{\\gamma} \\gamma\\|_1 := \\sum_{\\gamma} |\\alpha_{\\gamma}|$. It\nis easy to verify that $c \\geq \\sup \\big\\{ \\|T_n\\| \\st n \\in \\bbN \\big\\}$\nand $c \\geq \\|T\\|$.\nBy Lemma \\ref{lem:app}, for any continuous function $f \\colon \\bbR\n\\to \\bbR$, the sequence $(f(T_n))_n$ represents $f(T)$. \nNow, let $(f_k)_k$ be a sequence of polynomials that are non-negative on\n$[0, c]$ and satisfy \n$$\\inf_k f_k(x) = \\begin{cases} 1 & \\mbox{if } x=0, \\\\ 0 & \\mbox{if } x \\in (0,c]. \\end{cases}$$\nFor example, one can take $f_k(x) := (1-x\/c)^{2k}$.\nNote that $\\inf_k f_k(T_n) = P_n$ and $\\inf_k f_k(T)=\\iota_S(P)$, where the infimum is taken with respect to the usual ordering on self-adjoint operators.\nIt is clear from Lemma \\ref{lem:app} that $(P_n)_n$ represents a\nprojection that is smaller than $f_k(T)$ for each $k \\in \\bbN$ and hence\nsmaller than $\\iota_S(P) = \\inf_k f_k(T)$. \nL\\\"uck's approximation theorem (Theorem \\ref{eleksz}) says that\n$\\lim_{n \\to \\infty} \\tr^{(d)}_{\\vertex_n}(P_n) = \\tau^{(d)}(P) =\n\\tau^{(d)}_{\\omega}(\\iota_S(P))$. This shows that the subprojection of\n$\\iota_S(P)$ that $(P_n)_n$ represents has the same trace as\n$\\iota_S(P)$.\nSince $\\tau^{(d)}_{\\omega}$ is faithful, this implies that $(P_n)_n$\nrepresents $\\iota_S(P)$. This finishes the proof.\n\\end{proof}\n\nLet $a \\in M_d \\Z \\bF_S$ be self-adjoint. The heart of the proof\nof the preceding theorem is that the convergence of spectral measures of\n$T_n:=\\rho^{(d)}_{G_n}(a)$ to the spectral measure of $T:=\\rho^{(d)}(a)$\nis far better than expected from Remark \\ref{specmeas}, due to the\nintegrality of the coefficients of $a$. Indeed, L\\\"uck's approximation\ntheorem asserts that $\\lim_{n \\to\\infty} \\mu_{T_n}(\\{0\\}) =\n\\mu_T(\\{0\\})$, which is not a consequence of weak convergence $\\mu_{T_n}\n\\to \\mu_T$ alone. In fact, even more is true. It is a consequence of\nresults in \\cite{thom} that the integrated densities $F_{T_n}(\\lambda):=\n\\mu_{T_n}((-\\infty,\\lambda])$ converge uniformly to $F_T(\\lambda):=\n\\mu_T((-\\infty,\\lambda])$, i.e., $\\sup\\{ |F_{T_n}(\\lambda) -\nF_{T}(\\lambda)| \\st \\lambda \\in \\bbR \\} \\to 0$ as $n \\to \\infty$. 
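To see that such statements genuinely go beyond weak convergence of spectral measures, note that weak convergence alone does not control the atom at $0$; the following elementary (and deliberately non-integral) example is meant only as an illustration. For $A_n := n^{-1} I_{\\ell^2 \\vertex_n}$, the spectral measure $\\mu_{A_n}$ is the unit point mass at $1\/n$, so\n$$\n\\mu_{A_n}(\\{0\\}) = 0 \\quad \\mbox{for every } n, \\qquad \\mbox{while} \\qquad \\mu_{A_n} \\to \\mu_{0} \\ \\mbox{weakly and} \\ \\mu_{0}(\\{0\\}) = 1\n\\,,\n$$\nwhere $\\mu_0$, the spectral measure of the zero operator, is the unit point mass at $0$. Uniform convergence of the integrated densities rules out precisely this kind of escape of spectral mass to $0^+$.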
This\nallows for more results like Theorem \\ref{soficapp}, for example, for\nother spectral projections of the operator $\\rho^{(d)}(a)$.\n\nSuppose that the random weak limit of $(G_n)$ is a Cayley graph $G$. In the\ncase where $G$ is amenable and $G_n$ are merely connected subgraphs of $G$,\nwe have that the random weak limit of the uniform spanning tree measures\n$\\ust_{G_n}$ equals $\\wsf_G =\n\\fsf_G$. Despite the definition of $\\fsf$, the random weak limit of\n$\\ust_{G_n}$ need not be $\\fsf_G$\nfor a sofic approximation $(G_n)_n$ to a non-amenable group, $G$.\nIn fact, the limit of \n$\\ust_{G_n}$ is always $\\wsf_G$ (see \\cite[Proposition\n7.1]{AL:urn}). It is more complicated to get $\\fsf_G$ as a limit.\nThe proof of \\rref c.sfcouple\/ provides such, but is not very explicit.\nHere we give a more explicit method, which still provides an invariant\nmonotone coupling with $\\wsf_G$.\nFor $L \\ge 0$, let $\\CYCLE_L(G)$ denote the space spanned by the cycles in\n$G$ of length at most $L$.\nWrite $\\fsf_{G, L}$ for the determinantal probability measure corresponding to\nthe projection onto $\\CYCLE_L(G)^\\perp$.\nThis measure is not necessarily concentrated on forests; rather, it is\nconcentrated on subgraphs of girth larger than $L$.\nBy \\rref t.dominate\/, we have $\\ust_G \\dom \\fsf_{G, L}$ for all finite $G$ and\n$L$.\n\n\\procl t.cycleslimit\nIf $G$ is a Cayley graph of a group $\\gp$ and\nis the random weak limit of $(G_n)_n$, then if $L(n) \\to\\infty$\nsufficiently slowly,\nthe random weak limit of $\\fsf_{G_n, L(n)}$ \nequals $\\fsf_G$.\nA subsequence of monotone couplings witnessing $\\ust_{G_n} \\dom \\fsf_{G_n,\nL(n)}$ has a $\\gp$-invariant monotone coupling witnessing $\\wsf_G \\dom\n\\fsf_G$ as a weak* limit.\n\\endprocl\n\n\\rproof\nBy Corollary \\ref{soficapp} and \\rref l.ultradetl\/, for all $L \\ge 0$, the\nrandom weak limit of $\\fsf_{G_n, L}$ exists and equals $\\fsf_{G, L}$. Since \n$\\CYCLE_L(G) \\uparrow \\CYCLE(G)$,\nthe weak* limit of $\\fsf_{G, L}$ equals $\\fsf_G$.\nA subsequence of monotone couplings witnessing $\\ust_{G_n} \\dom \\fsf_{G_n,\nL}$ has a $\\gp$-invariant monotone coupling witnessing $\\wsf_G \\dom\n\\fsf_{G, L}$ as a random weak limit, which, as $L \\to\\infty$, has a\n$\\gp$-invariant monotone coupling witnessing $\\wsf_G \\dom \\fsf_G$ as a\nweak* limit.\nThe result follows. 
\n\\Qed\n\nIn particular, with the assumptions of \\rref t.cycleslimit\/, calculation of\naverage expected degree shows that\n$$\n\\lim_{L \\to\\infty} \\lim_{n \\to\\infty} \\frac{\\dim\n\\CYCLE_L(G_n)}{|\\vertex(G_n)|}\n=\n2\\beta_1^{(2)}(\\gp) + 2\n\\,.\n$$\nThis equation is already known, as it follows from L\\\"uck's results and the\nDeterminant Conjecture, which was established for sofic groups in\n\\cite[Theorem 5]{elekszabo}.\n\n\n\\bsection{Consequences of the invariant monotone couplings}{s.conseq}\n\n\nGiven a network with positive edge\nweights and a time $t > 0$, form the \\dfnterm{transition operator} $P_t$ for\ncontinuous-time random walk whose rates are the edge weights; in the case\nof unbounded weights (or degrees), we take the minimal process, which dies\nafter an explosion.\nThat is, if the entries of a matrix $A$ indexed by the vertices are equal\noff the diagonal to the negative of the edge weights and the diagonal\nentries are chosen to make the row sums zero, then $P_t := e^{-A t}$; in\nthe case of unbounded weights, we take the self-adjoint extension of $A$\ncorresponding to the minimal process.\nThe matrix $A$ is called the \\dfnterm{infinitesimal generator} or the\n\\dfnterm{Laplacian} of the network.\n\nWhen the weights are random, we have a continuous-time random walk in a\nrandom environment. If the distribution $\\mu$ of the edge weights is\ngroup-invariant, then $\\E[P_t(o, o)] = \\tr_\\mu(e^{-A t})$.\nHence, if there are two sets of random weights, $A^{(1)}$ and $A^{(2)}$,\ncoupled by an invariant measure $\\nu$ with the property that $A^{(1)}(e)\n\\le A^{(2)}(e)$ $\\nu$-a.s.\\ for all edges $e$, then the corresponding\nreturn probabilities $P^{(1)}$ and $P^{(2)}$ satisfy $\\E[P_t^{(1)}(o, o)]\n\\ge \\E[P_t^{(2)}(o, o)]$: see \\cite[Theorem 5.1]{AL:urn}.\nWhether this inequality holds without assuming the existence of an\ninvariant coupling, but merely that the two weight distributions are\ninvariant, is open; it was asked by \\rref b.FontesMathieu\/. \n\nOf course, if the weights are simply the indicators of invariant random subsets,\nthen we obtain random walk on the random clusters. Thus, when we have an\ninvariant coupling of two percolation measures, we have the above\ninequality on return probabilities. \nIn particular, we have shown that such an inequality holds when the two\npercolation measures are determinantal and arise from positive contractions\nin $R(\\gp)$.\nA similar result holds when a more\ncomplicated increasing function of the random subsets is used (such as\nusing for a weight of an edge the sum of the degrees of its\nendpoints in the cluster), since given an invariant monotone coupling of\nthe two cluster measures, one easily constructs an invariant monotone\ncoupling of such weights.\n\nFor another consequence of our coupling result, we consider the $\\FSF$.\nIt was proved in \n\\BLPSusf\\ that for every Cayley graph, whether sofic or not, a.s.\\\neach tree in $\\wsf_G$ has one end. In addition, \\BLPSusf\\ also proved that\nif $\\fsf_G \\ne \\wsf_G$, then a.s.\\ at least one tree in the $\\FSF$ has\ninfinitely many ends. \\BLPSusf\\ conjectured that a.s.\\ every tree in the\n$\\FSF$ has infinitely many ends in this case. 
We can now make a small\ncontribution to this conjecture:\n\n\\procl c.inftyends\nIf $G$ is the Cayley graph of a sofic group, then either $\\fsf_G = \\wsf_G$,\nin which case a.s.\\ each tree in $\\fsf_G$ has one end, or $\\fsf_G \\ne \\wsf_G$,\nin which case a.s.\\ each tree in $\\fsf_G$ has one or infinitely many ends,\nwith some tree having infinitely many ends.\n\\endprocl\n\n\\rproof\nSuppose that $\\fsf_G \\ne \\wsf_G$.\nLet $\\mu$ be an invariant monotone coupling of the two spanning forest\nmeasures.\nBecause $\\WSF$ is spanning, each tree in $\\FSF$ consists of unions of\n(infinite) trees in $\\WSF$ with additional connecting edges.\nIf there are only finitely many connecting edges, say, $N$, in some tree\n$T$ in $\\FSF$, then each vertex in $T$ can send mass $1\/N$ to each\nendpoint of each of the connecting edges in $T$. Such endpoints would\nreceive infinite mass, yet no point would send out mass more than 2.\nThus, the Mass-Transport Principle tells us that this event has probability\n0.\nTherefore, there are a.s.\\ either no connecting edges or infinitely many in\neach tree of $\\FSF$. Combining this with what was previously known\ngives the corollary. \n\\Qed\n\nThe above consequences of \\rref t.gencouple\/ were for specific models. We\nclose with a general consequence that is relevant in ergodic theory.\n\nFor a set $X$, write $\\pi_x \\colon \\{0, 1\\}^X \\to \\{0, 1\\}$ for the\nnatural coordinate projections ($x \\in X$).\nFor $K \\subseteq X$, write $\\fd(K)$ for the $\\sigma$-field on\n$\\{0, 1\\}^X$ generated the maps $\\pi_x$ for $x \\in K$. \nWhen $X$ is the vertex set $\\verts$ of a graph,\na probability measure $\\mu$ on $\\{0, 1\\}^\\verts$ is called\n\\dfnterm{$m$-dependent} if $\\fd(K_1), \\dots, \\fd(K_p)$ are independent whenever\nthe sets $K_i$ are pairwise separated by graph distance $> m$. A similar\ndefinition holds when $X$ is the edge set of a graph.\nWe say that\n$\\mu$ is \\dfnterm{finitely dependent} if it is $m$-dependent for some $m <\n\\infty$.\n\nNote that if $Q \\in \\bbC\\Gamma$ is a positive contraction, then $\\P^Q$ is\nfinitely dependent.\nThe Kaplansky density theorem implies that\nevery positive contraction $Q \\in R(\\Gamma)$ is the limit in the\nstrong operator topology (SOT) of positive contractions $Q_n \\in \\bbC\\Gamma$ (see\n\\cite[Cor.~5.3.6]{KR1}). Combining these two observations, we see\nthat $\\P^Q$ is the weak* limit of\nthe finitely dependent measures $\\P^{Q_n}$.\nLikewise, if $Q \\in R(\\gp, S)$ is a positive contraction, then there are\npositive contractions $Q_n \\in M_S(\\bbC\\gp)$ such that \n$\\P^Q$ is the weak* limit of the finitely dependent measures $\\P^{Q_n}$.\n\nIn the case that $\\Gamma$ is sofic, we can strengthen weak* convergence to\n$\\dbar$-convergence because of our\nmonotone coupling result. This follows ideas of \\rref b.LS:dyn\/, but that\ncase, where $\\gp$ is commutative, is much easier.\n\nLet $\\mu_1$ and $\\mu_2$ be two $\\gp$-invariant probability measures on $A^W$,\nwhere $\\gp$ acts quasi-transitively on $W$ and $A$ is finite. Let\n$W'$ be a section of $\\gp\\backslash W$. Then \nOrnstein's $\\dbar$-metric is defined as\n$$\n\\dbar(\\mu_1, \\mu_2)\n:=\n\\min \\Big\\{\\sum_{w \\in W'} \\Pbig{X_1(w) \\ne X_2(w)} \\st X_1 \\sim \\mu_1,\\, X_2\n\\sim \\mu_2,\\, (X_1, X_2) \\hbox{ is $\\gp$-invariant} \\Big\\}\n\\,.\n$$\nThis is a metric for the following reason.\nSuppose that $(X_1, X_2)$ is a joining of $(\\mu_1, \\mu_2)$ and $(X_3, X_4)$\nis a joining of $(\\mu_2, \\mu_3)$. 
Given a Borel set $C \\subseteq A^W$,\nwrite $f_C(X_2) := \\P[X_1 \\in C \\mid X_2]$ and\n$g_C(X_3) := \\P[X_4 \\in C \\mid X_3]$. The \\dfnterm{relatively independent\njoining of $(X_1, X_2)$ and $(X_3, X_4)$ over $\\mu_2$} is defined\nto be the measure $\\mu$ on $(A^W)^3$ determined by \n$$\n(C_1, C_2, C_3)\n\\mapsto\n\\int_{C_2} f_{C_1}(y) g_{C_3}(y) \\,d\\mu_2(y)\n$$\nfor $C_1, C_2, C_3 \\subseteq 2^W$.\nIt is easily verified and well known that this measure $\\mu$ is indeed\n$\\gp$-invariant, and therefore a joining.\n(Intuitively, we merely choose $X_2 = X_3$ to create this joining out of\nthe original pair of joinings. More precisely, $X_1$ and $X_4$ are then\nchosen independently given $X_2$.)\nNow choose the joinings $(X_1, X_2)$ and $(X_3, X_4)$ to achieve the minima\nin the definition of $\\dbar$.\nIf $(Y_1, Y_2, Y_3) \\sim \\mu$, then $\\dbar(\\mu_1, \\mu_3) \\le \\sum_{w \\in\nW'} \\Pbig{Y_1(w) \\ne Y_3(w)} \\le \\sum_{w \\in\nW'} \\Pbig{Y_1(w) \\ne Y_2(w)} + \\sum_{w \\in\nW'} \\Pbig{Y_2(w) \\ne Y_3(w)} =\n\\sum_{w \\in\nW'} \\Pbig{X_1(w) \\ne X_2(w)} + \\sum_{w \\in\nW'} \\Pbig{X_2(w) \\ne X_3(w)} = \\dbar(\\mu_1, \\mu_2) + \\dbar(\\mu_2, \\mu_3)$,\nas desired.\n\nIf $(\\mu_1, \\mu_2, \\ldots, \\mu_n)$ is a sequence of $\\gp$-invariant\nprobability measures on $A^W$ and $\\mu_{k, k+1}$ is a joining of $(\\mu_k,\n\\mu_{k+1})$ for each $k = 1, 2, \\ldots, n-1$, then there is an associated\nrelatively independent joining of all $n$ measures obtained by successively\ntaking a relatively independent joining $(Y_1, Y_2, Y_3)$ of $(\\mu_1, \\mu_2)$\nwith $(\\mu_2, \\mu_3)$ over $\\mu_2$, then the relatively independent joining\nof $(Y_1, Y_2, Y_3)$ with $(\\mu_3, \\mu_4)$ over $\\mu_3$, where we regard\n$(Y_1, Y_2, Y_3) \\in (A \\times A)^W \\times A^W$, etc.\nBy taking a limit of such joinings, we can do the same for an infinite\nsequence of invariant measures on $A^W$ with given successive joinings.\n\nIn case $A = \\{0, 1\\}$ and there is a monotone joining of $\\mu_1$ and $\\mu_2$,\nthen such a joining may be used to calculate $\\dbar(\\mu_1, \\mu_2)$:\n\n\\procl l.dbarmonocalc\nLet $\\mu_1$ and $\\mu_2$ be two $\\gp$-invariant probability measures on $2^W$,\nwhere $\\gp$ acts quasi-transitively on $W$. Let\n$W'$ be a section of\\\/ $\\gp\\backslash W$. If there is a monotone joining\n$(X_1, X_2)$ of\n$(\\mu_1, \\mu_2)$, then \n$$\n\\dbar(\\mu_1, \\mu_2)\n=\n\\sum_{w \\in W'} |\\P[X_1(w) = 0] - \\P[X_2(w) = 0]|\n=\n\\sum_{w \\in W'} \\Pbig{X_1(w) \\ne X_2(w)}\n\\,.\n$$\nSuppose that in addition,\n$\\mu_3$ and $\\mu_4$ are two $\\gp$-invariant probability measures on $2^W$\nsuch that there are monotone joinings witnessing\n$\\mu_1 \\dom \\mu_3 \\dom \\mu_2$ and\n$\\mu_1 \\dom \\mu_4 \\dom \\mu_2$.\nThen $\\dbar(\\mu_3, \\mu_4) \\le \\dbar(\\mu_1, \\mu_2)$.\n\\endprocl\n\n\\rproof\nIt is clear that any joining $(X_1, X_2)$ of $(\\mu_1, \\mu_2)$ has the\nproperty that \n$$\n\\sum_{w \\in W'} \\Pbig{X_1(w) \\ne X_2(w)} \\ge \n\\sum_{w \\in W'} |\\P[X_1(w) = 0] - \\P[X_2(w) = 0]|\n$$\nand that a monotone joining gives equality. \nFurthermore, if we extend $(X_1, X_2)$ to a relatively independent joining\n$(X_1, X_2, X_3, X_4)$ with the assumed joinings \nsatisfying $X_1 \\le X_3 \\le X_2$ and $X_1 \\le X_4 \\le X_2$, then the\njoining $(X_3, X_4)$ witnesses the desired inequality.\n\\Qed\n\nWe shall prove the following:\n\n\\procl t.dbarfindep\nLet $\\gp$ be a sofic group and $S$ be a finite generating set of elements\nin $\\gp$. 
\nIf $Q$ is a positive contraction in $R(\\gp)$ or in $R(\\gp, S)$, then there\nexists a sequence of positive contractions $Q_n$ in $\\bbC\\gp$ or\nin $M_S(\\bbC\\gp)$ such that the finitely dependent probability measures\n$\\P^{Q_n}$ converge to $\\P^Q$ in the $\\dbar$-metric.\n\\endprocl\n\n\nNote that when $\\gp$ is amenable, \\rref t.gencouple\/ is easy.\nIn addition, when $\\gp$ is amenable, it is known that \nthe $\\gp$-invariant finitely dependent processes are isomorphic to\nBernoulli shifts by using the very weak Bernoulli condition of \\rref\nb.Orn:book\/, extended to the amenable case by \\rref b.Adams\/; \nthat\nfactors of Bernoulli shifts are isomorphic to Bernoulli shifts\n\\cite{Orn:factor,OrnW:amen};\nand that\nthe class of processes isomorphic to Bernoulli shifts is $\\dbar$-closed\n\\cite{Orn:factor,OrnW:amen}.\nThus, we have the following corollary, which was proved for abelian $\\gp$\nin \\cite{LS:dyn}:\n\n\\procl c.dbarFIID\nLet $\\gp$ be an amenable group and $S$ be a finite generating set of elements\nin $\\gp$. \nIf $Q$ is a positive contraction in $R(\\gp)$ or in $R(\\gp, S)$, then \n$\\P^Q$ is isomorphic to a Bernoulli shift.\n\\endprocl\n\nOn the other hand, in the non-amenable setting,\nPopa gave an example of a factor of a Bernoulli shift that is not\nisomorphic to a Bernoulli shift. Indeed, \\cite[Corollary 2.14]{Popa} showed\nthat for any infinite group $\\Gamma$ \nwith Kazhdan's Property (T), the natural action $\\Gamma\n\\curvearrowright ({\\mathbb T},\\mu_{\\rm Haar})^{\\Gamma}\/(\\Z\/n\\Z)$ is not\nisomorphic to a Bernoulli shift for any $n \\geq 2$. Here, $\\Z\/n\\Z$ is\nunderstood to act diagonally on $({\\mathbb T},\\mu_{\\rm Haar})^{\\Gamma}$ by\nrotation in the obvious way. For such $\\gp$, it follows that the natural\naction $\\Gamma \\curvearrowright (\\Z\/n\\Z,\\mu_{\\rm Haar})^{\\Gamma}\/(\\Z\/n\\Z)$\nis not isomorphic to a Bernoulli shift for any $n \\geq 2$.\n\nNatural questions, therefore, include these, which are all settled in the\namenable case:\n\n\\procl q.findep\nIs every finitely dependent process a factor of a Bernoulli shift?\n\\endprocl\n\n\n\\procl q.dbar\nLet $\\gp$ act quasi-transitively on a countable set $W$ and let $A$ be finite.\nIs the class of measures on $A^W$ that are factors of Bernoulli shifts\nclosed in the $\\dbar$-metric?\n\\endprocl\n\n\\procl q.detlFIID\nAre determinantal probability measures associated to equivariant positive\ncontractions factors of Bernoulli shifts? \n\\endprocl\n\nBy \\rref t.dbarfindep\/, positive answers to Questions \\ref{question:findep}\nand \\ref{question:dbar} would imply a positive answer to \\rref\nq.detlFIID\/ on sofic groups.\n\nIn order to prove \\rref t.dbarfindep\/, we shall use two lemmas.\nWe give statements and\ndetails for $R(\\gp)$; they\nadmit straightforward extensions to $R(\\Gamma,S)=M_S(R(\\gp))$.\n\n\n\\procl l.dbarnorm\nIf $Q$ and $Q'$ are positive contractions in $R(\\gp)$, then\n$$\n\\dbar\\big(\\P^Q, \\P^{Q'}\\big) \\le \\frac{6 \\|Q - Q'\\|}{1 + 2\\|Q - Q'\\|}\n\\,.\n$$\n\\endprocl\n\n\\rproof\nWrite $r := \\|Q - Q'\\|$ and $t := r\/(1+2r)$. 
We set $Q_t := (1-t)Q+ t(I-Q)$.\nThen \n$\nQ \\ge (1-t)Q$ and $Q_t \\ge (1-t)Q$,\nwhence by the triangle inequality and \\rref l.dbarmonocalc\/, we have\n$$\n\\dbar\\big(\\P^Q, \\P^{Q_t}\\big)\n\\le\n\\dbar\\big(\\P^Q, \\P^{(1-t)Q}\\big)\n+\n\\dbar\\big(\\P^{(1-t)Q}, \\P^{Q_t}\\big)\n\\le\n\\|tQ\\| + \\|t (I-Q)\\|\n\\le 2t\n\\,.\n$$\nLikewise, with $Q'_t := (1-t)Q'+ t(I-Q')$, we have \n$$\n\\dbar\\big(\\P^{Q'}, \\P^{Q'_t}\\big)\n\\le 2t\n\\,.\n$$\nIn addition, $tI \\le Q'_t \\le (1-t)I$ and $Q_t - Q'_t = (1-2t)(Q-Q')$ has\nnorm $(1-2t)r = t$, whence\n$$\n0 \\le Q'_t - tI \\le Q_t \\le Q'_t + tI \\le I\n\\,.\n$$\n\\rref l.dbarmonocalc\/ again yields\n$$\n\\dbar\\big(\\P^{Q_t}, \\P^{Q'_t}\\big) \n\\le\n\\dbar\\big(\\P^{Q'_t-tI}, \\P^{Q'_t+tI}\\big) \n=\n2t\n\\,.\n$$\nPutting together these inequalities and using the triangle inequality for\nthe $\\dbar$-metric gives $\\dbar\\big(\\P^Q, \\P^{Q'}\\big) \\le 6t$, which is\nthe desired result. \n\\Qed\n\nWhen $Q$ and $Q'$ commute,\none can improve the bound in \\rref l.dbarnorm\/ by replacing the norm on the\nright-hand side with the Schatten 1-norm. Recall that this norm is \n$$\n\\|T\\|_1 := \\tau\\big((T^* T)^{1\/2}\\big)\n\\,.\n$$\nIn this language,\nwhen $\\Gamma$ is abelian, \\rref b.LS:dyn\/ showed that\n$\\dbar(\\P^Q, \\P^{Q'}) \\le \\|Q - Q'\\|_1$.\nIn fact, the same proof can be adapted for all $\\gp$ to the case that\n$Q$ and $Q'$ commute. We do not know\nwhether this inequality always holds, but we have the following weaker\nversion:\n\n\\procl l.dbarSchatten\nIf $Q$ and $Q'$ are positive contractions in $R(\\gp)$, then\n$$\n\\dbar\\big(\\P^Q, \\P^{Q'}\\big) \\le \n6 \\cdot 3^{2\/3} \\|Q - Q'\\|_1^{1\/3}\n\\,.\n$$\nIf $Q_n$ and $Q$ are positive contractions in $R(\\gp)$ with\n$Q_n \\to Q$ in {\\rm SOT}, then \n$\\dbar\\big(\\P^{Q_n}, \\P^Q\\big) \\to 0$.\n\\endprocl\n\n\\rproof\nWe shall use the Schatten 2-norm, $\\|T\\|_2 := \\sqrt{\\tau (T^* T)}$, and\nthe Powers-St{\\o}rmer inequality, $\\|T_1 - T_2\\|^2_2 \\le \\|T_1^2 -\nT_2^2\\|_1$ for $0 \\le T_1, T_2 \\in R(\\gp)$; see \\cite[Proposition\n6.2.4]{BroOz} for a proof that extends to our context.\n\nWrite $T := Q^{1\/2}$ and $T' := (Q')^{1\/2}$.\nLet $E$ be the spectral resolution of the identity for $T-T'$, so that\n$$\nT-T' = \\int_{-1}^1 s\\,dE(s)\n\\,.\n$$\nThus, \n$$\nr \n:= \n\\int_{-1}^1 s^2 \\,d\\nu(s)\n=\n\\|T-T'\\|_2^2\n\\le\n\\|Q - Q'\\|_1\n$$\nfor the scalar measure $A \\mapsto \\nu(A) := \\tau \\big(E(A)\\big)$.\nDefine $t := (r\/3)^{1\/3}$ and $B := [-1, -t] \\cup [t, 1]$.\nWe have $\\nu(B) \\le r\/t^2$ by Markov's inequality.\nWrite $P := E(B)$ and $P^\\perp := E\\big((-t, t)\\big)$.\nDefine $Q_1 := T P T$\nand $Q'_1 := T' P T'$, and write $Q_2 := Q-Q_1$, $Q'_2 := Q'-Q'_1$.\nThese are all positive contractions in $R(\\gp)$; for example, $Q_2 = T\nP^\\perp T = (P^\\perp T)^* (P^\\perp T) \\ge 0$.\nFurthermore, \n$$\nQ_2 - Q'_2\n=\nT P^\\perp (T - T') + (T-T') P^\\perp T'\n$$\nand $\\|T\\|, \\|T'\\| \\le 1$,\nwhence $\\|Q_2 - Q_2'\\| \\le 2 \\|(T - T') P^\\perp\\|$. 
Since\n$$\n(T - T') P^\\perp\n=\n\\int_{(-t, t)} s \\,dE(s)\n\\,,\n$$\nit follows that $\\|(T - T') P^\\perp\\| \\le t$, whence\n$\\|Q_2 - Q'_2\\| \\le 2t$ and $\\dbar\\big(\\P^{Q_2},\n\\P^{Q'_2}\\big) \\le 12t$ by \\rref l.dbarnorm\/.\nNow, $\\tau (Q_1) = \\tau (T^2 P) = \\tau (T^2 P^2) = \\tau (P T^2 P) = \\|T P\n\\delta_o\\|^2 \\le \\|T\\|^2 \\|P \\delta_o\\|^2 \\le \\tau (P) = \\nu(B) \\le r\/t^2$.\nSince $Q_2 \\le Q$, it follows that $\\dbar\\big(\\P^Q, \\P^{Q_2}\\big) =\n\\tau(Q-Q_2) = \\tau (Q_1) \\le r\/t^2$.\nLikewise, $\\dbar\\big(\\P^{Q'}, \\P^{Q'_2}\\big) \\le r\/t^2$.\nTherefore, \n$$\n\\dbar\\big(\\P^Q, \\P^{Q'}\\big)\n\\le\n\\dbar\\big(\\P^Q, \\P^{Q_2}\\big) + \\dbar\\big(\\P^{Q_2}, \\P^{Q'_2}\\big) +\n\\dbar\\big(\\P^{Q'_2}, \\P^{Q'}\\big)\n\\le\n\\frac{r}{t^2} + 12t + \\frac{r}{t^2}\n=\n6 (9r)^{1\/3}\n\\,,\n$$\nas desired. \n\nThe second part of the assertion follows from the inequality\n$$\\|Q_n \\delta_o - Q\n\\delta_o\\| = \\|Q_n - Q\\|_2 \\ge \\|Q_n - Q\\|_1\\,.$$\nThis finishes the proof.\\Qed\n\n\nWe remark that\nif it is assumed only that $Q_n$ converges to $Q$ in WOT, then it does {\\em\nnot} follow that $\\P^{Q_n}$ converges to $\\P^Q$ in $\\dbar$. In fact, on\n$\\Z$, entropies need not converge: see the end of Sec.~6 of \\rref b.LS:dyn\/\nfor examples.\nHere, we are using the fact that for processes on $\\Z$, entropy is\n$\\dbar$-continuous \\cite[Proposition 15.20]{Glasner}.\n\n\\begin{proof}[Proof of \\rref t.dbarfindep\/:]\nThis is immediate from \\rref l.dbarSchatten\/ and Kaplansky's Density Theorem, i.e., the fact that positive contractions in \n$R(\\gp)$ are SOT-limits of positive contractions in $\\C\\gp$.\n\\end{proof}\n\n\nBy analogy with Bernoulli processes in the amenable case,\none could ask whether determinantal processes are finitely determined,\nwhere we could define $\\mu$ as \\dfnterm{finitely determined} if whenever\n$\\mu_n \\to \\mu$\nweak* and in sofic entropy, we have $\\dbar(\\mu_n, \\mu) \\to 0$. The converse\npresumably holds for all processes. When $\\gp$ is amenable, this is\nknown since sofic entropy equals ordinary metric entropy.\nHere, we are relying on definitions and results of \\cite{Bowen:sofic}.\n\nNumerical calculation suggests that the inequality \n$\\dbar(\\P^Q, \\P^{Q'}) \\le \\|Q - Q'\\|_1$ always holds, even for finite\nmatrices without any invariance. 
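As a trivial sanity check in the $1 \\times 1$ case (so that no invariance is involved and $\\dbar$ is simply the minimal probability of disagreement over all couplings), take $Q = (p)$ and $Q' = (q)$ with $0 \\le p \\le q \\le 1$; then $\\P^Q$ and $\\P^{Q'}$ are Bernoulli measures and\n$$\n\\dbar\\big(\\P^Q, \\P^{Q'}\\big) = q - p = \\|Q - Q'\\|_1\n\\,,\n$$\nso the conjectured inequality holds there with equality.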
Our proof of the weaker inequality \\rref\nl.dbarSchatten\/ holds in that generality.\nThese inequalities appear to imply similar inequalities for continuous\ndeterminantal point processes $(\\sX, \\sY)$, where the $\\dbar$-metric is\nreplaced by taking the minimum over all joinings of the intensity of the\nsymmetric difference $\\sX \\triangle \\sY$; we plan to pursue this elsewhere.\n\n\n\\bsection{Unimodular random rooted graphs}{s.unimodular}\n\nWe now extend the preceding theorems to their natural setting encompassing\nall random weak limits of finite graphs with bounded degree (and somewhat\nbeyond).\nOne other setting in which it would be natural to investigate these\nquestions is that of vertex-transitive graphs and their\nautomorphism-invariant determinantal probability measures.\nHowever, we are able to treat only the sofic ones (again), which, in\nparticular, excludes all non-unimodular transitive graphs.\n\nWe review a few definitions from the theory of unimodular random rooted\nnetworks; for more details, see \\rref b.AL:urn\/.\nA \\dfnterm{network} is a (multi-)graph $\\gh = (\\vertex, \\edge)$ together with a\ncomplete separable metric space $\\marks$ called the \\dfnterm{mark space} and\nmaps from $\\vertex$ and $\\edge$ to $\\marks$. Images in $\\marks$ are called\n\\dfnterm{marks}.\nThe only assumption on degrees is that they are finite when loops are not\ncounted.\nWe omit the mark maps from our notation for\nnetworks. \n\nA \\dfnterm{rooted network} $(\\gh, \\bp)$ is a network $\\gh$ with a distinguished\nvertex $\\bp$ of $\\gh$, called the \\dfnterm{root}.\nA \\dfnterm{rooted isomorphism} of rooted networks is an isomorphism of the\nunderlying networks that takes the root of one to the root of the other.\nWe do not distinguish between a rooted network and its\nisomorphism class.\nLet $\\GG_*$ denote the set of rooted isomorphism classes of rooted\n{\\it connected\\\/} locally finite networks.\nDefine a separable complete metric $d_* \\colon \\GG_* \\times \\GG_* \\to [0,1]$ on $\\GG_*$ by letting the distance\nbetween $(G_1, o_1)$ and $(G_2, o_2)$ be $1\/(1+\\alpha)$, where $\\alpha$ is\nthe supremum of those $r > 0$ such that there is some rooted isomorphism of\nthe balls of (graph-distance) radius $\\flr{r}$ around the roots of $G_i$\nsuch that each pair of corresponding marks has distance less than $1\/r$.\nFor probability measures $\\rtd$, $\\rtd_n$ on $\\GG_*$, we write $\\rtd_n \\cd\n\\rtd$ when $\\rtd_n$ converges weakly with respect to this metric.\n\nFor a (possibly disconnected)\nnetwork $\\gh$ and a vertex $x \\in \\verts(\\gh)$, write $\\gh_x$ for the\nconnected component of $x$ in $\\gh$.\nIf $\\gh$ is finite, then write $U_\\gh$ for a uniform random vertex of $\\gh$\nand $U(\\gh)$ for the\ncorresponding distribution of $\\big(\\gh_{U_\\gh}, U_\\gh\\big)$ on $\\GG_*$.\nSuppose that $\\gh_n$ are finite networks and that $\\rtd$ is a\nprobability measure on $\\GG_*$.\nWe say that the \\dfnterm{random weak limit} of a sequence $(\\gh_n)_n$ is $\\rtd$ if $U(\\gh_n)\n\\cd \\rtd$. 
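A standard example, recalled only for orientation (the notation $C_n$, $\\rtd_0$ is ad hoc): if $C_n$ denotes the cycle of length $n$ with the same fixed mark at every vertex and edge, then\n$$\nU(C_n) \\cd \\rtd_0\n\\,,\n$$\nwhere $\\rtd_0$ is the unit point mass at the rooted two-sided infinite path carrying those constant marks; indeed, once $n > 2r+1$, the ball of radius $r$ around any vertex of $C_n$ is rooted-isomorphic to the $r$-ball of the infinite path.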
\n\nA probability measure\nthat is a random weak limit of finite networks is called \\dfnterm{sofic}.\nIn particular, a group is called sofic when its Cayley diagram is sofic.\n\nAll sofic measures are unimodular, which we now define.\nSimilarly to the space $\\GG_*$, we define the space $\\gtwo$ of isomorphism\nclasses of locally\nfinite connected networks with an ordered pair of distinguished vertices\nand the natural topology thereon.\nWe shall write a function $f$ on $\\gtwo$ as $f(\\gh, x, y)$.\nWe refer to $f(\\gh, x, y)$ as the \\dfnterm{mass} sent from $x$ to $y$ in $\\gh$.\n\n\\procl d.unimodular \nLet $\\rtd$ be a probability measure on $\\GG_*$.\nWe call $\\rtd$ \\dfnterm{unimodular} if it obeys the \\dfnterm{Mass-Transport\nPrinciple}:\nFor all Borel\n$f \\colon \\gtwo \\to [0, \\infty]$,\nwe have\n$$\n\\int \\sum_{x \\in \\vertex(\\gh)} f(\\gh, \\bp, x) \\,d\\rtd(\\gh, \\bp)\n=\n\\int \\sum_{x \\in \\vertex(\\gh)} f(\\gh, x, \\bp) \\,d\\rtd(\\gh, \\bp)\n\\,.\n$$\n\\endprocl\n\nIt is easy to see that every sofic measure \nis unimodular, as observed by \\rref b.BS:rdl\/, who introduced this\ngeneral form of the Mass-Transport Principle under the name \n``intrinsic Mass-Transport Principle\".\nThe converse was posed as a question by \\rref b.AL:urn\/; it remains open.\n\nConsider the Hilbert space $\\Hilb(\\mu) := \\int^\\oplus \\ell^2\\big(\\vertex(\\gh)\\big)\n\\,d\\rtd(\\gh, \\bp)$, a direct integral (see, e.g., \\rref b.Nielsen\/ or\n\\cite[Chapter 14]{KR}).\nHere, we always choose canonical representatives for rooted-isomorphism\nclasses of networks, as explained\nin \\cite[Sec.~2]{AL:urn}; in particular, $\\verts(\\gh) = \\bbN$. However,\nthis is merely for technical reasons of measurability, so we omit this from\nour notation.\nThe space $\\Hilb(\\mu)$ is defined as the set\nof ($\\rtd$-equivalence classes of) $\\rtd$-measurable functions $\\xi$ defined on\n(canonical) \nrooted networks $(\\gh, \\bp)$ that satisfy $\\xi(\\gh, \\bp) \\in\n\\ell^2(\\verts(\\gh))$ and $\\int \\|\\xi(\\gh, \\bp)\\|^2 \\,d\\rtd(\\gh, \\bp) < \\infty$.\nWe write $\\xi = \\int^\\oplus \\xi(\\gh, \\bp) \\,d\\rtd(\\gh, \\bp)$.\nThe inner product is given by $\\iprod{\\xi, \\eta} := \\int \\iprod{ \\xi(\\gh, \\bp), \\eta(\\gh,\n\\bp } \\,d\\rtd(\\gh, \\bp)$.\nLet $T \\colon (\\gh, \\bp) \\mapsto T_{\\gh, \\bp}$ be a measurable assignment of\nbounded linear operators on $\\ell^2\\big(\\vertex(\\gh)\\big)$\nwith $\\mu$-finite\nsupremum of the norms $\\|T_{\\gh, \\bp}\\|$.\nThen $T$ induces a bounded linear operator $T := T^\\rtd := \\int^\\oplus T_{\\gh,\n\\bp} \\,d\\rtd(\\gh, \\bp)$ on $\\Hilb$ via\n$$\nT^\\rtd \\colon \\int^\\oplus \\xi(\\gh, \\bp)\n\\,d\\rtd(\\gh, \\bp) \\mapsto \\int^\\oplus T_{\\gh, \\bp} \\xi(\\gh, \\bp)\n\\,d\\rtd(\\gh, \\bp)\n\\,.\n$$\nThe norm $\\|T^\\rtd\\|$ of $T^\\rtd$ is the $\\rtd$-essential supremum of\n$\\|T_{\\gh, \\bp}\\|$.\nWe say that $T$ as above is \\dfnterm{equivariant} if for all network isomorphisms $\\phi \\colon \\gh_1 \\to \\gh_2$ preserving the marks, all\n$\\bp_1, x, y \\in \\verts(\\gh_1)$ and all $\\bp_2 \\in \\verts(\\gh_2)$,\nwe have $\\iprod{T_{\\gh_1, \\bp_1} \\delta_x, \\delta_y} = \\iprod{T_{\\gh_2,\n\\bp_2} \\delta_{\\phi(x)}, \\delta_{\\phi(y)}}$. 
For $T \\in B(\\Hilb(\\mu))$ equivariant, we have in particular that $T_{\\gh, \\bp}$ depends on\n$\\gh$ but not on the root $\\bp$, so we shall simplify our notation and\nwrite $T_\\gh$ in place of $T_{\\gh, \\bp}$.\nFor simplicity, we shall even write $T$ for $T_\\gh$ when no confusion can arise.\n\nWe shall show that sofic probability measures can be extended to sofic\nmeasures on $S$-labelled Schreier networks; any new loops will get new marks\nindicating that they were not in the original underlying graph.\nThis will be an important technical tool. It can only enlarge the class of\nequivariant operators.\n\n\\procl p.addS\nLet $(G_n)_n$ be networks with finitely many vertices and edges\nwhose random weak limit is $\\mu$ and whose\nmark space is $\\marks$.\nLet $|S|$ be at least twice the degree of every vertex in every\n$G_n$; possibly $|S| = \\infty$.\nThen there exist $S$-labelled Schreier\nnetworks $H_n$ with mark space $\\marks \\times \\{0, 1\\}$\nwith the following properties:\n\\begin{enumerate}\n\\item[(i)] \nThe underlying graph of each component of\n$H_n$ is equal to that of\\\/ $G_n$ except\nthat $H_n$ may have additional loops whose second mark coordinate is 1.\n\\item[(ii)] \nThe first coordinate marks of each component of\n$H_n$ restricted to the underlying graph of $G_n$\nagree with the marks on $G_n$.\n\\item[(iii)] \nThe sequence $(H_n)_n$ has a random weak limit carried by $S$-labelled\nSchreier networks.\n\\end{enumerate}\n\\endprocl\n\n\\rproof\nGiven a locally finite\nnetwork $G$ with mark space $\\marks$, produce a random $S$-labelled\nSchreier network, $\\phi(G)$, with mark space $\\marks \\times \\{0, 1\\}$ as\nfollows.\nLet $\\mk_0$ be an arbitrary element of $\\marks$.\nLet $U_k(e)$ be independent uniform $[0, 1]$ random variables for $k \\ge 1$\nand $e \\in \\edges(G)$.\nFor an edge $e$, let $N(e)$ be the set of edges (including $e$) that share\nan endpoint with $e$.\nWrite $S = \\{s_1, s_2, \\ldots\\}$.\nWe shall use the identity map for the involution $i \\colon S \\to S$.\nAssign a second mark coordinate of 0 to every vertex and edge of $G$.\nAssign the label $s_1$ to every edge $e$ such that $U_1(e) = \\min \\{U_1(e')\n\\st e' \\in N(e)\\}$.\nWe assign further labels from $S$ recursively.\nSupposing that a partial assignment has been made using the random\nfields $U_1, \\ldots, U_k$, let $J(e)$ be the minimum index $j$ such that \nno edge in $N(e)$ has been assigned the label $s_j$, if any.\nBy choice of $|S|$, there is always such an index when $e$ does not yet\nhave a label.\nNow assign the label $s_{J(e)}$ to every $e$ that does not have a label and\nfor which $U_{k+1}(e) = \\min \\{U_{k+1}(e') \\st e' \\in N(e)\\}$.\nAfter all these assignments are completed, to every vertex $x$,\nadd new loops with mark $(\\mk_0, 1)$ so that the degree of $x$ is equal\nto $|S|$ and so that the resulting network, $\\phi(G)$, is $S$-labelled.\n\nExcept for the fact that $\\phi(G_n)$ is random, the sequence\n$\\big(\\phi(G_n)\\big)_n$ has all the desired properties: note that\n$U\\big(\\phi(G_n)\\big) \\Rightarrow \\nu$, where for a measurable set $A$ of\nrooted networks,\n$$\n\\nu(A)\n:=\n\\int \\P\\big[\\big(\\phi(G), o\\big) \\in A\\big] \\,d\\mu(G, o)\n\\,.\n$$\n\nTo fix this problem, let $\\mk_1, \\mk_2, \\ldots$ be a dense subset of\n$\\marks$.\nLet $\\psi_n \\colon \\marks \\to \\{\\mk_1, \\ldots, \\mk_n\\}$ be a map that takes\neach mark $\\mk$ to one of the closest points to it among $\\{\\mk_1, \\ldots,\n\\mk_n\\}$.\nThen $\\psi_n$ naturally induces a map 
$\\tilde \\psi_n$ on networks.\nThe push-forward $\\nu_n$ of the law of\n$U\\big(\\phi(G_n)\\big)$ by $\\tilde \\psi_n$ gives a finitely supported\nprobability measure.\nBy taking a rational approximation of its probabilities, we may find a\nfinite (disconnected)\nnetwork $H_n$ such that $U(H_n)$ is within total variation distance\n$1\/n$ of $\\nu_n$.\nThis is the sequence desired. \n\\Qed\n\n\nLet $(X,d)$\nbe a separable compact metric space. From now on, we shall assume that\n$G=(\\vertex,\\edge)$ is a rooted connected $S$-labelled Schreier network.\nMoreover, we assume that the\n{\\em vertex} labels take values in the space $X$. If $S$ is finite, then\nthe space of marks $\\marks = S \\cup X$ is compact. We denote the set of\nsuch (rooted connected $S$-labelled Schreier) networks by $\\sGG$ or\n$\\sGG(S,X)$.\nWe use the following metric on $\\sGG(S, X)$:\nWrite $S = \\{s_1, s_2, \\ldots\\}$. \nFor a rooted $S$-labelled Schreier network $(G, o)$ and $n \\ge 1$, let\n$G^{n}$ denote the connected component of $o$ in the subnetwork of $G$ formed\nby deleting all edges with a label $s_k$ (in either direction) for any $k > n$.\nDefine a separable complete metric $d_* \\colon \\sGG \\times \\sGG \\to [0,1]$\non $\\sGG$ by letting the distance\nbetween $(G_1, o_1)$ and $(G_2, o_2)$ be $1\/(1+\\alpha)$, where $\\alpha$ is\nthe supremum of those $r > 0$ such that there is some rooted isomorphism of\nthe balls of (graph-distance) radius $\\flr{r}$ around the roots of\n$G_i^{\\flr{r}}$ that preserves marks up to an error of at most $1\/r$ in the\nmetric of the mark space.\nEven if $S$ is infinite,\n$\\sGG(S,X)$ is a compact\nmetric space (basically because $\\{0, 1\\}^S$ is compact)\nand thus for any sequence $(\\mu_n)_n$ of probability\nmeasures on $\\sGG(S,X)$, we have $\\mu_n \\Rightarrow \\mu$ if and only if\n$\\mu_n \\to \\mu$ in the weak* topology. \nFor simplicity of notation, we omit all other edge marks; one may actually\nencode edge marks via vertex marks in any case.\n\nBefore we proceed, let us discuss a natural example. Let $\\Gamma$ be a\ngroup that acts by\nhomeomorphisms on a compact metrizable space $X$, preserving a probability\nmeasure, $\\mu$. We assume that $\\Gamma$ is generated by a finite symmetric set $S \\subset \\Gamma$ and define the involution $i \\colon S \\to S$ by $i(s)=s^{-1}$. Then we can associate to each point $x \\in X$ the\n$S$-labelled rooted Schreier graph that arises from the restriction of the\naction of $\\Gamma$ to the orbit of $x$. Each vertex in this graph carries a\nnatural label in $X$. We obtain a continuous map $\\varphi \\colon X \\to\n\\sGG(S,X)$ and can consider the push-forward measure $\\varphi_*(\\mu)$.\nThis measure is unimodular. Moreover, there is a natural equivariant factor\nmap $\\psi \\colon \\sGG(S,X) \\to X$, sending a rooted graph to the label of\nits root.\n\nSuppose that $\\rtd$ is a unimodular probability measure on $\\sGG(S,X)$.\nNote that there is a continuous action of $\\bF_S$ on the compact space\n$\\sGG(S,X)$ that moves the root according to the labels seen at the root.\nMoreover, this action preserves the measure $\\mu$, since $\\mu$ is\nunimodular. Consider the ring $C(\\sGG(S,X))$ of continuous functions on\n$\\sGG(S,X)$ and the \\dfnterm{algebraic crossed product algebra} $C(\\sGG(S,X))\n\\rtimes \\bF_S$. Recall that\n$C(\\sGG(S,X)) \\rtimes \\bF_S$ is a $*$-algebra and consists of finite\nformal sums $\\sum_{w \\in \\bF_S} f_w w$ with $f_w \\in C(\\sGG(S,X))$. 
The\nmultiplication and involution are defined by linearity and the formulas\n\\begin{align*}\n\\forall f_1,f_2 \\in C(\\sGG(S,X)) \\ \\ \\forall w_1,w_2 \\in \\bF_S \\quad (f_1\nw_1)\n\\cdot (f_2 w_2) &:= f_1 (\\prescript{w_1}{}{\\!f_2}) w_1w_2\\,,\\\\\n(f_1 w_1)^* &:=\n(\\prescript{w_1^{-1}}{}{\\!\\bar f_1})\nw_1^{-1}\\,,\n\\end{align*}\nwhere we use the convention $\\prescript{w}{}{\\!f}(G,v) := f(G,v.w)$. The measure $\\mu$ gives rise to a functional $\\tau_{\\mu}$ on the crossed product algebra as follows:\n$$\\tau_{\\mu}\\left(\\sum_{w \\in \\bF_S} f_w w \\right) := \\sum_{w \\in \\bF_S}\n\\int \\bfone_{o.w = o} \\cdot f_{w}(G,o) \\\nd\\mu(G,o) \\,.$$\n\nThere are two natural actions, $F$ and $M$, on $\\Hilb(\\mu)$\nof the algebra of continuous functions on\n$\\sGG(S,X)$, which will both be of importance. First of all, $f \\in\nC(\\sGG(S,X))$ can act as a constant on the fibers, i.e., $F(f)_{G,o} :=\nf(G,o) \\cdot I_{\\ell^2(\\verts(G))}$, or equivalently\n$$\nF(f)(\\xi)\n:=\n\\int^\\oplus f(\\gh, \\bp) \\xi(\\gh, \\bp) \\,d\\rtd(\\gh, \\bp)\n\\,.\n$$ \nIt is a basic fact that an operator $T\n\\in B(\\Hilb(\\mu))$ arises as above\nfrom a measurable family $(G,o) \\mapsto T_{G,o}$\niff $T$ commutes with $F(C(\\sGG(S,X))$.\nA second action of $C(\\sGG(S,X))$ is defined by the formula $M(f)_{G,o}\n\\delta_v = f(G,v) \\cdot \\delta_v$ for all $v \\in \\verts(G)$, in other\nwords, \n$$\nM(f)(\\xi)\n:=\n\\int^\\oplus M(f)_{\\gh, \\bp} \\xi(\\gh, \\bp) \\,d\\rtd(\\gh, \\bp)\n\\,,\n$$\nwhere $M(f)_{G,o} \\xi(G, o)(v) := f(G,v) \\cdot \\xi(G, o)(v)$.\n\nWe denote by $\\rho_{\\mu}(s) \\in B(\\Hilb(\\mu))$ the operator that assigns to $(G,o)$\nthe unitary operator on $\\ell^2(\\vertex(G))$ that sends $\\delta_v$ to\n$\\delta_{v.i(s)}$.\nIt is easy to see that $\\rho_{\\mu}(s)$ is equivariant\nfor all $s \\in S$, and that this assignment extends to a unitary\nrepresentation $\\rho_{\\mu} \\colon \\bF_S \\to U(\\Hilb(\\mu))$ such that\n$\\rho_{\\mu}(s)^* = \\rho_\\mu(i(s))$ for all $s \\in S$.\nThis unitary\nrepresentation $\\rho_{\\mu} \\colon \\bF_S \\to U(\\Hilb(\\mu))$ extends in turn\nto a natural $*$-homomorphism $\\rho_{\\mu} \\colon C(\\sGG(S,X)) \\rtimes \\bF_S\n\\to B(\\Hilb(\\mu))$.\nIndeed, consider the natural representation of $C(\\sGG(S,X))$ on $\\Hilb(\\mu)$ by\nmultiplication, $f \\mapsto M(f)$. 
In order to see that $M$ and $\\rho_{\\mu}\n\\colon \\bF_S \\to U(\\Hilb(\\mu))$ combine via linearity and $\\rho_\\mu(f w) :=\nM(f) \\rho_\\mu(w)$ to a $*$-representation of the algebraic crossed product,\nit suffices to check that\n$$\\forall w \\in \\bF_S, f \\in C(\\sGG(S,X)) \\quad \\rho_{\\mu}(w) M(f)\n\\rho_{\\mu}(w)^* = M({}^w\\!f)\\,,$$\nas a simple verification using the definition of $C(\\sGG(S,X)) \\rtimes\n\\bF_S$ shows.\nTo prove that this equation indeed holds, we compute in $\\ell^2(\\verts(G))$\nfor $v \\in \\vertex(G)$ that\n$$\\rho_{\\mu}(w) M(f) \\rho_{\\mu}(w)^*\\delta_v = \n\\rho_{\\mu}(w) M(f) \\delta_{v.w} = f(G,v.w) \\cdot \\rho_{\\mu}(w) \\delta_{v.w}\n= M({}^{w}\\!f)\\delta_v\\,.$$\nThis shows that $\\rho_{\\mu} \\colon C(\\sGG(S,X)) \\rtimes \\bF_S \\to\nB(\\Hilb(\\mu))$ exists as desired.\n\nNow, if $\\delta^{\\mu} \\in \\Hilb(\\mu)$ denotes the naturally defined\nvector $(G,o) \\mapsto \\delta_o \\in \\ell^2(\\verts(G))$, then \n$$\\forall T \\in C(\\sGG(S,X)) \\rtimes \\bF_S \\qquad \\tau_\\mu(T) = \\langle\n\\rho_{\\mu}(T) \\delta^{\\mu}, \\delta^{\\mu} \\rangle\\,.$$\nIndeed, for all $w \\in \\bF_S, f \\in C(\\sGG(S,X))$, we have\n\\begin{eqnarray*}\n\\langle\n\\rho_{\\mu}(fw) \\delta^{\\mu}, \\delta^{\\mu} \\rangle\n&=& \\langle\nM(f) \\rho(w) \\delta^{\\mu}, \\delta^{\\mu} \\rangle \\\\\n&=& \\int \\langle\nM(f) \\rho(w) \\delta_o, \\delta_o \\rangle \\ d\\mu(G,o) \\\\\n&=& \\int \\langle\nM(f) \\delta_{o.w^{-1}}, \\delta_o \\rangle \\ d\\mu(G,o) \\\\\n&=& \\int \\bfone_{o.w = o} \\cdot f(G,o)\\ d \\mu(G,o).\n\\end{eqnarray*}\n\nThis shows that $\\tau_{\\mu} \\colon C(\\sGG(S,X)) \\rtimes \\bF_S \\to \\bbC$ is\na positive linear functional. Although we shall not use it, we remark that\nthe Hilbert space $\\Hilb(\\mu)$ is the\nGNS-construction associated with the trace $\\tau_{\\mu}$; see \\cite[Lemma 4 in Chapter 4]{Dixmier} for basics about the GNS-construction.\nWe denote by $R(\\mu)$ the von Neumann algebra generated by the\n$\\rho_\\mu$-image of\n$C(\\sGG(S,X)) \\rtimes \\bF_S$, i.e., $R(\\mu):= \\rho_{\\mu}(C(\\sGG(S,X))\n\\rtimes \\bF_S)''$.\nWe call $R(\\mu)$ the \\dfnterm{von Neumann algebra of the unimodular random\nnetwork $\\mu$}. Since $F(C(\\sGG(S,X)) \\subset \\rho_{\\mu}(C(\\sGG(S,X))\n\\rtimes \\bF_S)'$, we conclude that $R(\\mu) = \\rho_{\\mu}(C(\\sGG(S,X))\n\\rtimes \\bF_S)'' \\subset F(C(\\sGG(S,X))'$. Hence, every operator $T \\in R(\\mu)$ arises from a measurable family $(G,o) \\mapsto T_{G,o}$.\nWe extend $\\tau_{\\mu}$ to a normal, positive linear functional\n$\\tr_{\\mu}$ on $R(\\mu)$ by the formula\n\\begin{equation} \\label{deftrace}\n\\tr_{\\mu}(T):= \\langle T \\delta^{\\mu},\\delta^{\\mu} \\rangle = \\E\\big[ \\iprod{T_{\\gh,o} \\delta_o, \\delta_o} \\big] := \\int \\iprod{T_{\\gh,o} \\delta_o, \\delta_o}\n\\,d\\rtd(\\gh, \\bp)\\,.\n\\end{equation}\nThe left-regular representation $\\lambda_{\\mu} \\colon \\bF_S \\to U(\\Hilb(\\mu))$\nis defined as acting on the underlying measure space, i.e., a vector $(G,o)\n\\mapsto \\xi(G,o)$ is mapped via $\\lambda_\\mu(w)$\nto $(G,o) \\mapsto \\xi(G,o.w)$. Using similar\narguments as above, we see that $F$ (rather than $M$) and $\\lambda_{\\mu}$\ncombine to give another representation $\\lambda_{\\mu} \\colon C(\\sGG(S,X))\n\\rtimes \\bF_S \\to B(\\Hilb(\\mu))$ and we set $L(\\mu):=\n\\lambda_{\\mu}\\big(C(\\sGG(S,X)) \\rtimes \\bF_S\\big)''$. 
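We note in passing (as a sketch only) that any operator $T$ given by a measurable, root-independent family $(G,o) \\mapsto T_{G}$ commutes with $\\lambda_{\\mu}(\\bF_S)$: for all $w \\in \\bF_S$ and $\\xi \\in \\Hilb(\\mu)$,\n$$\n\\big(\\lambda_{\\mu}(w) T \\xi\\big)(G,o) = T_{G}\\, \\xi(G, o.w) = \\big(T \\lambda_{\\mu}(w) \\xi\\big)(G,o)\n\\,;\n$$\ncommutation with $F(C(\\sGG(S,X)))$ was noted above.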
It is now a matter of\nchecking definitions to see that an operator $T \\in B(\\Hilb(\\mu))$ is\nequivariant if and only if $T \\in L(\\mu)'$.\n\nPut\n$N_{\\tau_{\\mu}}:= \\{T \\in C(\\sGG(S,X)) \\rtimes \\bF_S \\st\n\\tau_{\\mu}(T^*T)=0 \\}$.\nIn order to put the players in the right framework, let us note that \n$\\left(C(\\sGG(S,X)) \\rtimes \\bF_S \\right) \/N_{\\tau_{\\mu}}$ together with\nthe inner-product $(T_1|T_2):= \\tau_{\\mu}(T_2^*T_1)$ is a Hilbert algebra in the\nsense of \\cite[Chapters 5 and 6]{Dixmier}. The algebras $L(\\mu)$ and\n$R(\\mu)$ can be identified with the von Neumann algebras that are\nleft- and right-associated with this Hilbert algebra. Indeed, as we mentioned\nabove, the associated von Neumann algebras arise from the natural GNS-construction.\n\nIt follows from the Commutation Theorem \\cite[Theorem 1 on page 80]{Dixmier} that $R(\\mu) = L(\\mu)'$, i.e., the operators in $R(\\mu)$ are precisely the equivariant operators.\nIt was proved by \\rref b.AL:urn\/ that\n$T \\mapsto \\int \\iprod{T_{\\gh,o} \\delta_o, \\delta_o}\n\\,d\\rtd(\\gh, \\bp)$ is a trace on the algebra of equivariant operators. This\nresult is also an easy consequence of the general theory of Hilbert\nalgebras; see, for example, \\cite[Theorem 1 on page 97]{Dixmier}.\n\nAnother useful point of view is to see $R(\\mu)$ as the von Neumann algebra\nassociated with a discrete measured groupoid; see \\cite{renault} for a\ndefinition. Indeed, consider the $r$-discrete topological groupoid with\nbase space $\\sGG(S,X)$ and an arrow between $(G,o)$ and $(G',o')$ for each\n$v \\in G$ such that $(G,v) \\cong (G',o')$. Any unimodular measure $\\mu$\nturns this object into a discrete measured groupoid. The von Neumann\nalgebra $R(\\mu)$ that has been described concretely above is the von\nNeumann algebra associated with the discrete measured groupoid associated\nto the measure $\\mu$. We refer to \\cite{feldmanmoore2} for details about the von Neumann algebra associated to a discrete measured equivalence relation and to \\cite[Section 3]{sauer} or \\cite{renault} for an extension to the realm of discrete measured groupoids.\n\n\nLet us summarize:\n\\begin{thm} An operator $T \\in B(\\Hilb(\\mu))$ is equivariant if and only if $T \\in R(\\mu)$.\nThe pair $(R(\\mu),\\tr_{\\mu})$ is a tracial von Neumann algebra.\n\\end{thm}\n\n\nWe illustrate the definitions in two special cases.\n(1) If $G=(\\vertex,\\edge)$ is a finite $S$-labelled Schreier network with automorphism\ngroup $\\Lambda$ and we consider the natural action of $\\Lambda$ on $\\ell^2\n\\vertex$, then there exists a natural isomorphism $$R(U(G)) \\stackrel{\\sim}{\\to}\nB(\\ell^2 \\vertex)^{\\Lambda}:=\\{ T \\in B(\\ell^2 \\vertex) \\st \\forall \\lambda\n\\in \\Lambda \\ T \\lambda = \\lambda T \\}\\,.$$\n(2) If $\\mu$ is concentrated on a Cayley diagram of a group $\\Gamma$ (with\nfinite generating set $S$) and $X$ is a singleton, then $R(\\mu) = R(\\Gamma)$.\n\nIn complete analogy to the group case, we can define the von Neumann\nalgebra $R(\\mu,S) \\subset B(\\Hilb(\\mu,S))$, where $$\\Hilb(\\mu,S) :=\n\\int_{\\sGG(S,X)} \\ell^2(\\edge(G)) \\ d\\mu(G,o)\\,.$$\nAgain, there is a natural isomorphism $R(\\mu,S) = M_S \\left(R(\\mu)\n\\right)$. We denote the natural trace on $R(\\mu,S)$ by $\\tr_{\\mu} \\colon R(\\mu,S) \\to \\bbC$.\n\nWe shall now state and prove an embedding theorem for sofic unimodular\nnetworks. The techniques are inspired by work of Elek and Lippner\n\\cite{eleklip} and P\\u{a}unescu \\cite{paunescu}. 
Again, the point is not to\ngive another proof of these results, but to prove new approximation formulas\nthat allow for applications to the approximation of the associated determinantal measures.\n\n\\begin{thm} \\label{unimodapp}\nLet $(G_n)_n$ be a sequence of finite $S$-labelled Schreier networks and let $\\mu$\nbe a probability measure on $\\sGG(S,X)$. Let $\\omega$ be a non-principal\nultrafilter on $\\bbN$. If \n$U(\\gh_n)\n\\cd \\rtd$, then there exists a trace-preserving embedding\n$$\\iota \\colon (R(\\mu),\\tr_{\\mu}) \\to \\prod_{n \\to \\omega} (R(U(G_n)),\n\\tr_{U(G_n)})\\,.$$\nMoreover, there exists a sequence of probability measures $(\\nu_n)_n$ on\n$\\big(\\{G_n\\} \\times \\vertex(G_n)\\big) \\times \\sGG(S,X)$ with the following properties:\n\\begin{enumerate}\n\\item[(i)] The measure $\\nu_n$ has marginals $U(G_n)$ and $\\mu$.\n\\item[(ii)] With respect to the natural metric $d_*$ on $\\sGG(S,X)$, we have\n$$\\lim_{n \\to \\infty} \\int d_*((G_n,v),(G,o)) \\ d \\nu_n((G_n, v),(G,o)) \\to 0\\,.$$\n\\item[(iii)] \nIf $(T_n)_n \\in \\ell^{\\infty}( \\bbN, (R(U(G_n)), \\tr_{U(G_n)}))$ represents $\\iota(T)$ for some $T \\in R(\\mu)$, then\n\\begin{equation} \\label{eq1c}\n\\lim_{n \\to \\omega} \\int \\left| \n\\iprod{T_n \\delta_{v.\\gamma}, \\delta_{v.\\gamma'}} - \\iprod{T_{G,o}\n\\delta_{o.\\gpe}, \\delta_{o.\\gpe'}} \\right| d \\nu_n((G_n, v),(G,o)) =0 \n\\end{equation}\nfor all $\\gpe, \\gpe' \\in \\bF_S$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof} \nFirst of all, $\\mu$ is unimodular. Recall the \npositive and unital trace $\\tau_{\\mu} \\colon C(\\sGG(S,X))\n\\rtimes \\bF_S \\to \\bbC$ defined in \\eqref{deftrace}. \nAs in the proof of Proposition \\ref{emb}, we consider\nthe unital $*$-homomorphism\n$$\\rho:= \\lim_{n \\to \\omega} \\rho_{U(G_n)} \\colon C(\\sGG(S,X)) \\rtimes\n\\bF_S \\to \\prod_{n \\to \\omega} (R(U(G_n)), \\tr_{U(G_n)})\\,.$$\nSince $U(G_n) \\cd \\mu$, we have that $\\tau_{\\mu} = \\tr_{\\omega} \\circ \\rho$,\nwhere $\\tr_{\\omega}$\ndenotes the trace on the ultraproduct, i.e., $\\tr_\\omega((T_n)_n) :=\n\\lim_{n \\to \\omega} \\tr_{U(G_n)}(T_n)$ for all norm-bounded sequences\n$(T_n)_n$ with $T_n \\in R(U(G_n))$. As $\\tr_{\\mu}$ is faithful on $R(\\mu)$,\n$\\rho$ factors through the image $\\im (\\rho_\\mu)$ of $\\rho_{\\mu} \\colon\nC(\\sGG(S,X)) \\rtimes \\bF_S \\to R(\\mu)$. That is, there is a unique \nbounded $*$-homomorphism\n$$\n\\psi\n\\colon \\im (\\rho_\\mu) \\to \\prod_{n \\to \\omega} (R(U(G_n)), \\tr_{U(G_n)})\n$$\nsuch that $\\rho = \\psi \\circ \\rho_\\mu$.\nBy definition\n$\\im (\\rho_{\\mu})$ is weakly dense in $R(\\mu)$, whence\n$\\psi$ extends to a\ntrace-preserving $*$-homomorphism $\\iota$ from $R(\\mu)$ to the\nultraproduct von Neumann algebra.\n\nWeak convergence of measures on $\\big(\\sGG(S,X), d_*\\big)$ is equivalent\n(see the last corollary in \\cite{Strassen} or \\cite[3.1.1]{Skorohod})\nto convergence in the Wasserstein metric\n$$d_{\\rm W}(\\mu',\\mu) := \\inf_{\\nu} \\int d_*((G',o'),(G,o))\\\nd\\nu((G',o'),(G,o))\\,,$$ where\nthe infimum is taken over all measures $\\nu$ on $\\sGG(S,X) \\times \\sGG(S,X)$\nwith marginals $\\mu'$ and $\\mu$. 
Hence, we obtain a sequence of measures\n$\\nu_n'$ on $\\sGG(S,X) \\times \\sGG(S,X)$ with marginals $U(G_n)$ and\n$\\mu$ so that \n$$\\lim_{n \\to \\infty} \\int d_*((G',o'),(G,o)) \\ d \\nu'_n((G',o'),(G,o)) \\to\n0\\,.$$\nSince the natural map $\\big(G_n \\times \\verts(G_n)\\big) \\times \\sGG(S,X) \\to \\sGG(S,X) \\times\n\\sGG(S,X)$ is finite-to-one,\nwe can lift $\\nu'_n$ to a measure\n$\\nu_n$ on $(G_n \\times \\verts(G_n)\\big)\\times \\sGG(S,X)$. This proves (i) and (ii). \n\nIt remains\nto prove claim (iii) and in doing so, we follow the strategy of the proof of\nProposition \\ref{emb}. Indeed, using the arguments in the proof of\nProposition \\ref{emb}, it is again easy to see that the truth of (iii) \ndepends only on $T \\in R(\\mu)$ and not on the choice of an approximating\nsequence $(T_n)_n$. If $T$ lies in the image of $C(\\sGG(S,X)) \\rtimes\n\\bF_S$ and $T' = \\sum_{w} f_w w$ is some choice of a preimage of $T$ in the\ncrossed product algebra, then there is a canonical approximating sequence\n$(T_n)_n$ that represents $\\iota(T)$. Indeed, for each $n$, there is a\n$*$-homomorphism $\\rho_{U(G_n)} \\colon C(\\sGG(S,X))\n\\rtimes \\bF_S \\to R(U(G_n))$ and we set $T_n := \\rho_{U(G_n)}(T')$ for each $n \\in\n\\bbN$.\nFor each $w \\in \\bF_S$, the function $f_w$ in the presentation of $T'$ is\nuniformly continuous and hence $(G,o) \\mapsto \\langle T \\delta_{o.\\gamma},\n\\delta_{o.\\gamma'} \\rangle$ is uniformly continuous as well for such $T$.\nThus, (ii) easily implies (iii) for every element in the image of\n$C(\\sGG(S,X)) \\rtimes \\bF_S$. As in the proof of Proposition \\ref{emb}, a\ndiagonalization\nargument shows that (iii) holds for all $T \\in R(\\mu)$. This finishes the\nproof. \n\\end{proof}\n\n\n\\bsection{Existence of sofic monotone couplings}{s.sofic-couple}\n\n\nLet $\\mu$ be a unimodular probability measure on rooted networks.\nGiven an equivariant positive contraction $Q$ on $\\Hilb(\\mu)$,\nwe obtain a determinantal probability measure\n$\\P^{Q_G}$ on $\\{0, 1\\}^{\\vertex(G)}$ associated to $\\mu$-a.e.\\ rooted network\n$(G, o)$.\nNote that if $G$ already has marks, then we regard $\\P^{Q_G}$ as producing\n(at random) new marks, $\\eta \\in \\{0, 1\\}^{\\verts(G)}$,\nwhich we may take formally as second coordinates after the existing marks.\nIn other words, we define the probability measure $\\mu^Q$\nby the equation\n$$\n\\mu^Q\\big[B(o, r; G) \\cong (A, v),\\ \\eta \\restrict C \\equiv 1\\big]\n=\n\\int_{[B(o, r; G) \\cong (A, v)]} \\det(Q_G \\restrict C) \\,d\\mu(G, o)\n$$\nfor every rooted network $(A, v)$ of radius $r$ and every measurable choice\nof $C \\subseteq B(o, r; G)$.\nUsing involution invariance, it is easy to check that $\\mu^Q$ is\nunimodular.\n\nAs shown in \\rref p.addS\/, we may assume that $\\mu$ is carried by\n$S$-labelled Schreier networks.\nIn this case, it suffices to take the measurable choice\nof $C \\subseteq B(o, r; G)$ in the definition of $\\mu^Q$ to be of the form\n$\\{o.w_1, \\ldots, o.w_n\\}$ for some $w_1, \\ldots, w_n \\in \\FS$.\n\nAs a special case, let $G$ be a finite network and $Q$ be a positive\ncontraction on $\\ell^2\\big(\\vertex(G)\\big)$. 
Then $U(G)^Q$ is the\ndeterminantal probability measure reviewed in \\rref s.def\/, regarded as a\nrandomly rooted network.\n\nGiven a unimodular probability measure $\\mu$ with mark space $\\marks$ and\ntwo unimodular probability measures $\\mu_i$ with mark spaces $\\marks \\times\n\\{0, 1\\}$ for $i = 1, 2$, both of whose marginals forgetting the second\ncoordinate of the marks is $\\mu$, we say that a unimodular probability measure\n$\\nu$ with the 3-coordinate \nmark space $\\marks \\times \\{0, 1\\} \\times \\{0, 1\\}$ is a\n\\dfnterm{monotone coupling} of $\\mu_1$ and $\\mu_2$ if its marginal\nforgetting the coordinate $4-i$ is $\\mu_i$ for $i = 1, 2$ and $\\nu$ is\nconcentrated on networks whose marks $(\\xi, j, k)$\nsatisfy $j \\le k$.\n\nWe shall prove the following extension of \\rref t.gencouple\/:\n\n\\procl t.gencoupleURN\nLet $\\mu$ be a sofic probability measure on rooted networks.\nIf $0 \\le Q_1 \\le Q_2 \\le I$ in $R(\\mu)$, then there\nexists a sofic monotone coupling of $\\P^{Q_1}$ and $\\P^{Q_2}$.\n\\endprocl\n\nAs we noted, we may assume that $\\mu$ is carried by\nSchreier networks.\nWe have the following extension of\n\\rref l.ultradetl\/:\n\n\\procl l.ultradetlURN\nLet $(G_n)_n$ be a sequence of finite $S$-labelled Schreier\nnetworks whose random weak limit is $\\mu$. Let $\\iota$ and $\\iota_S$\nbe trace-preserving embeddings as in Theorem \\ref{unimodapp}.\nLet $T \\in R(\\mu)$ be such that $0 \\leq T \\leq I$ and suppose that $(T_n)_n$\nrepresents $\\iota(T)$ in the ultraproduct $\\prod_{n \\to \\omega}\n(B(\\ell^2 \\vertex_n),\\tr_{\\vertex_n})$ with $0 \\leq T_n \\leq I$ for\neach $n \\in \\bbN$.\nThen $\\lim_{n \\to \\omega} U(G_n)^{T_n} = \\mu^T$ in the weak topology.\n\\endprocl\n\nThe proof is essentially the same as that for \\rref l.ultradetl\/, using the\ncoupling probability measures $\\nu_n$ of Theorem \\ref{unimodapp} (but not\nassuming any analogue of the injectivity of $\\pi$).\n\nWe may now use \\rref l.ultradetlURN\/ to prove \\rref t.gencoupleURN\/,\njust as \\rref t.gencouple\/ was proved.\n\nAll the above was for determinantal probability measures on subsets of\nvertices. 
In order to deduce corresponding results for determinantal\nprobability measures on subsets of edges (or even ``mixed\" measures on\nsubsets of both vertices and edges), we use the following construction.\nGiven a network $G$, subdivide each edge $e$ by adding a new vertex $x_e$,\nwhich is joined to each endpoint of $e$ and which receives the mark of $e$.\nAlso assign a second coordinate to the new vertices so that we may\ndistinguish them.\nProvided that the expected degree of the root under the unimodular measure\n$\\mu$ is finite, we may choose a re-rooting of the subdivided networks in\norder to obtain a natural unimodular probability measure, $\\tilde \\mu$: see\n\\cite[Example 9.8]{AL:urn} for details.\nIn fact, when $\\mu$ is the random weak limit of $(G_n)_n$,\nwe may simply subdivide the edges of $G_n$ and take the random weak limit\nof the resulting networks, $G'_n$.\nUsing the finiteness of the expected degree under $\\mu$,\nit is not hard to check that $U(G'_n)$ does indeed converge to $\\tilde \\mu$.\nWith this construction, if we desire a determinantal probability measure on\nthe edges for $\\mu$, we may simply use the corresponding positive\ncontraction on the vertices for $\\tilde \\mu$, where all entries are 0 that\ndo not correspond to a pair of new vertices.\n\nIn particular, this result allows to extend our observations on the existence of invariant monotone couplings between $\\wsf$ and $\\fsf$ to unimodular random networks. In combination with the results in \\cite{abertviragthom}, we are also able to extend the results in Section \\ref{s.approxim}. We omit proofs since the strategy and the techniques are unchanged.\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgments}\n\nThis research started when R.L. visited Universit\\\"at G\\\"ottingen as a guest of the Courant Research Centre G\\\"ottingen in November 2008 and was continued later when A.T.\\ visited Indiana University at Bloomington as a Visiting Scholar in March 2011. A.T. thanks ERC for support.\nWe thank Ben Hayes for help with the proof of \\rref l.dbarSchatten\/.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBellman described multi-stage decision processes from a mathematical point of view in \\citea{Bellman1957}, this algorithm was called \\gls{DP}. The \\gls{DP} principle comes down to describing each state with a value in a value function and moving the system to the state with the highest value. \nThe value function is also called the cost-to-go function and stores the expected sum of future rewards for each state. In general this function cannot be found directly, therefore an iterative approach like \\gls{TD}-learning \\citeb{SB98, Sut88} is used.\n\n\\gls{DP} has the ability to solve complex control problems in diverse environments, and it is possible to view the environment as a black-box by modelling it using system identification techniques. The advantage of combining \\gls{DP} and system identification techniques, is that it leads to an adaptive control scheme.\nThese adaptive controllers have already been successfully trained off-line for many purposes, ranging from agile missile interception \\citeb{HB99} to aircraft auto-landing and control \\citeb{SB97}. 
Also, an on-line adaptive critic flight control was implemented on a six-degree-of-freedom business jet aircraft over its full operating envelope, improving its performance when unexpected conditions are encountered for the first time \\citeb{FS04}.\n\nBarto, Sutton and Anderson used neural networks to parametrise the value function \\citeb{BSA83}. However, this function approximator is non-linear in the parameters, with the result that stability can only be guaranteed if bounded network weights are used, where bounds are determined by off-line analysis. This approach has been applied to examples \\citeb{HB99,SB97,FS04} mentioned before.\nMore recently, an adaptive controller has been introduced in \\citea{AYB+07}, and again the neural network weights are bounded to guarantee stability. A drawback is that for a time-varying system these bounds shift and stability can no longer be guaranteed. This combination of neural networks and \\gls{DP} is commonly referred to as \\gls{NDP} \\citeb{bertsekas1996neuro}.\n\nThere is a proof of convergence when linear-in-the-parameters function approximators are used in \\citea{TVR97}, however, this proof demands knowledge of the shape of the optimal value function. \nWith the \\gls{RLSTD} algorithm convergence in a stochastic framework is assured \\citeb{BB96}, even when the linear regression basis cannot perfectly fit the value function.\nIn the last decade, the \\gls{DP} theory in continuous time and space has been further developed in \\citea{Doy96, Doy00, MD05}.\nMore recently, the proof of convergence has been extended to include the optimal policy \\citeb{ma2009}, however one of the problems that remains is the a-priori unknown shape of the value function.\n\nUsing polynomials as a function approximator in a \\gls{DP} framework was investigated by Bellman in 1963 \\citeb{BKK63}. In 1985, \\citea{Sch85} concentrates on the use of global polynomials in combination with \\gls{DP}. \nWith the development of the \\gls{RLSTD} algorithm, it is possible to obtain a proven convergence by combining it with a polynomial approximation as discussed in \\citea{ma2009}. However the limitation is that the approximation power of global polynomials can only be increased by increasing the order of the polynomials, which will also lead to numerical instabilities in the solution schemes of the approximation. According to \\citea{summers2013}, using the sum of squares allows the use of higher order polynomials, but will eventually still lead to numerical instability.\n\nA recommendation in \\citea{ma2009} is to use local polynomial regression to treat \\gls{MDP} problems with value functions of unknown form. This recommendation is supported by \\citea{Da76}, which states that it is highly desirable from an efficiency point of view to use local polynomial regression. Recently a novel method based on multivariate simplex B-splines has been applied in a linear regression framework \\citeb{deVisser2009a}. The use of a local polynomial basis allows for transparency and efficient (sparse) computational schemes. Furthermore, the spatial location of the B-coefficients and modularity of the triangulation allow for local model modification and refinement \\citeb{Lai2007b}. These properties make multivariate simplex B-splines an excellent candidate for use in the \\gls{DP} framework.\n\nThere are existing approaches which combines \\gls{MARS} \\citeb{Friedman1991} and \\gls{DP} \\citeb{Chen1999,Chen19992,Cervellera2007}. 
However, multivariate simplex B-splines distinguishes itself from \\gls{MARS} in terms of computational efficiency by using B-splines \\citeb{Bakin2000}. And they are supported by a triangulation of simplices, allowing functionality in a non-square domain \\citeb{Lai2007b}.\n\nThe contribution of this paper is a framework that allows the use of multivariate simplex B-splines in combination with the \\gls{RLSTD} algorithm, giving rise to \\gls{SDP} that enables control of non-linear stochastic systems. A method for continuous local value function adaptation is presented which is enabled by the spatial location property of the coefficients of the multivariate splines; this is achieved by implementing a new formulation for the covariance update step. The effectiveness of this \\gls{SDP} framework is investigated by comparing it with \\gls{NDP} in terms of computational complexity and performance. Furthermore, the \\gls{RLSTD} algorithm is modified to allow for adaptive control of time-varying systems. The validation and comparison of both cases are investigated with a non-linear 2D control problem, the pendulum swing-up.\n\nIn section \\ref{sec:dp}, we briefly introduce the \\gls{DP} framework and show how the value function and greedy policy are constructed. In section \\ref{sec:splines}, we give a brief introduction on the mathematical background of multivariate simplex B-splines, the function approximator used in the \\gls{SDP} framework. The \\gls{SDP} framework itself is explained in section \\ref{sec:sdp}; the main purpose of this section is to address the steps specific to using multivariate simplex B-splines for value function approximation in combination with the \\gls{RLSTD} algorithm. Section \\ref{sec:results} is there to demonstrate that the \\gls{SDP} framework indeed works for the given control problem. \\gls{SDP} is compared with neural networks for system performance on a stochastic system and a time-varying system. These results are discussed in section \\ref{sec:discussion} and finally conclusions and recommendations are presented in section \\ref{sec:conclusion}.\n\n\\section{Preliminaries on Dynamic Programming} \\label{sec:dp}\nIn this section, we present the preliminaries on \\gls{DP}, the algorithm that is part of the \\gls{SDP} framework.\nFor a more complete description we refer to \\citea{SB98, SBP+04, WBP07, BBDS+10, Ber07}.\nWe start with a brief overview of the \\gls{MDP} followed by the policy evaluation and concluded with the policy improvement.\n\n\\subsection{Markov Decision Processes}\nThe policy evaluation problem associated with discrete-time stochastic optimal control problems is referred to as a \\gls{MDP}. Finding the solution to an \\gls{MDP} is a sequential optimization problem where the goal is to find a policy that maximizes the sum of the expected infinite-horizon discounted rewards.\n\nLet $\\gls{x}_t \\in X$ be the state vector and $\\gls{u}_t \\in U$ be the input vector, both at time $t$, where $\\gls{u}_t$ is determined by the policy $\\gls{pi}$ and $X$ and $U$ denote the finite sets of states and inputs. The reward function is $\\gls{r}_{t+1}(\\gls{x}_{t+1},\\gls{u}_t)$ and $0\\leq\\gls{gamma}<1$ is the discount factor. 
The goal is to find a policy $\\pi$ that obtains the maximum total reward.\n\nFor each policy $\\gls{pi}$ there exists a value function $V^{\\gls{pi}}(\\gls{x}_t)$ that indicates a measure of long-term performance at each state:\n\\begin{equation} V^{\\gls{pi}}(\\gls{x}_t) = \\sum_{k=t}^{\\infty} \\gls{gamma}^{k-t} \\gls{r}_{k+1}(\\gls{x}_{k+1},\\gls{u}_k) \\label{eq:valuefunc} \\end{equation}\nThe objective can now be formulated as finding a policy $\\gls{pi}^*$ such that $V^{\\gls{pi}^*}(\\gls{x}) \\geq V^{\\gls{pi}}(\\gls{x})$ for all $\\gls{x} \\in X$ and for all policies $\\gls{pi}$. This policy $\\gls{pi}^*$ is called the optimal policy and can be found by applying both policy evaluation and policy improvement \\citea{SB98}.\n\nThe policy evaluation determines the $V^{\\gls{pi}}(\\gls{x}_t)$ of the current policy $\\gls{pi}$, where the policy improvement uses this knowledge to adjust the policy $\\gls{pi}$ such that it ends up in the most valuable states.\n\n\\subsection{Policy evaluation} \\label{sec:policyeval}\nIn order to evaluate the value function in an iterative fashion, \\citea{Sut88} uses \\gls{TD}-learning. For \\gls{TD}-learning, the following must hold:\n\\begin{equation} \\begin{array}{rl}\nV^{\\gls{pi}}(\\gls{x}_t) = \t& \\gls{r}_{t+1} + \\sum_{k={t+1}}^{\\infty} \\gls{gamma}^{k-t} \\gls{r}_{k+1} \\\\\n= \t\t\t\t& \\gls{r}_{t+1} + \\gls{gamma} V^{\\gls{pi}}(\\gls{x}_{t+1})\n\\end{array} \\end{equation}\nIf this equality does not hold, the difference is called the \\gls{TD}-error:\n\\begin{equation} e_t = \\gls{r}_{t+1} + \\gls{gamma} V^{\\gls{pi}}(\\gls{x}_{t+1}) - V^{\\gls{pi}}(\\gls{x}_t) \\label{eq:td} \\end{equation}\nBy minimizing this \\gls{TD}-error, the value function can be constructed in an iterative approach.\nIn order to construct a value function for a continuous problem, there is need for parameterization to describe a complete state space with a finite number of parameters.\nThe value function now becomes $V^{\\gls{pi}}(\\gls{x}_t,\\gls{Ch}_t)$, where $\\gls{Ch}_t$ are the parameters at time $t$ that shape the continuous value function in domain $X$.\n\nIn \\citea{BB96} the \\gls{RLSTD} algorithm was introduced; this algorithm and its computational complexity is visible in Table~\\ref{tab:trainRLSTD}. Here $\\gls{dh}$ and $\\gls{ah}$ represent the number of coefficients per simplex and total number of coefficients respectively, $\\gls{xb}$ represents the linear regression matrix and \\gls{P} is the parameter covariance matrix. These parameters will be further explained in section~\\ref{sec:splines}. \\gls{RLSTD} converges to the least squares approximation of the optimal value function $\\gls{Ch}^*$, given that each state $\\gls{x} \\in \\gls{X}$ is visited infinitely often.\nAlthough \\gls{RLSTD} requires more computations per time-step than \\acrshort{TD}($\\lambda$) algorithms \\citeb{Sut88}, it is more efficient in the statistical sense as more information is extracted from training experience, allowing it to converge faster \\citeb{BB96}. 
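To make the update concrete, the following is a minimal Python sketch of a single \\gls{RLSTD} step as listed in Table~\\ref{tab:trainRLSTD}. The variable names are hypothetical: phi and phi_next stand for the feature vectors $\\gls{xb}_t$ and $\\gls{xb}_{t+1}$ produced by the chosen linear-in-the-parameters function approximator, and P and c stand for $\\gls{P}_t$ and $\\gls{Ch}_t$.\n\\begin{verbatim}\nimport numpy as np\n\ndef rlstd_step(c, P, phi, phi_next, reward, gamma):\n    # Temporal-difference feature vector (x_t - gamma * x_{t+1}).\n    d = phi - gamma * phi_next\n    # Step (1): TD-error under the current parameter estimate.\n    e = reward - d @ c\n    # Shared denominator 1 + (x_t - gamma*x_{t+1})^T P_t x_t.\n    denom = 1.0 + d @ P @ phi\n    # Step (3) uses the old P_t, so form the gain before updating P.\n    gain = (P @ phi) / denom\n    # Step (2): rank-one update of the parameter covariance matrix.\n    P = P - np.outer(P @ phi, d @ P) / denom\n    # Step (3): update of the parameter vector.\n    c = c + gain * e\n    return c, P\n\\end{verbatim}\nIn line with Table~\\ref{tab:trainRLSTD}, step (3) uses the covariance matrix from before the update; the modified algorithm introduced in section~\\ref{sec:sdp} employs $\\gls{P}_{t+1}$ instead.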
Furthermore, while \\gls{LMS} aims to decrease the mean square error $e_t$ at each time-step separately, \\gls{RLSTD} minimizes this objective function:\n\\begin{equation} \\gls{COST}_t = \\frac{1}{t} \\sum_{k=0}^{t} \\left[ e_k \\right]^2 \\label{eq:classicLS} \\end{equation}\n\n\\begin{table}\n\\caption{\\acrshort{RLSTD} algorithm, from \\protect{\\citea{BB96}}}\n\\label{tab:trainRLSTD}\n\\centering\n \\begin{tabular}{lrll}\n \\hline\n \\hline\n\t \\textbf{Step} & & \\textbf{Action} & \\textbf{Computational} \\\\\n\t\t\t& & & \\textbf{Complexity} \t\t\\\\\n \\hline\n(1)& $e_t =$ & $\\gls{r}_{t+1} - (\\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1})^{\\top} \\gls{Ch}_{t}$ & $\\mathcal{O}(\\gls{dh})$ \\\\\n(2)& $\\gls{P}_{t+1} =$ & $\\gls{P}_{t} - \\frac{ \\gls{P}_{t} \\gls{xb}_t ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t}}{1 + ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t} \\gls{xb}_t } $ & $\\mathcal{O}(\\gls{ah}^2)$ \\\\\n(3) & $ \\gls{Ch}_{t+1} =$ & $\\gls{Ch}_{t} + \\frac{ \\gls{P}_{t} }{ 1 + ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t} \\gls{xb}_t } \\gls{xb}_t e_t$ & $\\mathcal{O}(\\gls{ah}^2)$ \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\subsection{Policy improvement} \\label{sec:pi}\nThe computation of the value function is called policy evaluation. Using this value function, a greedy action can be selected. If a policy is updated in this manner, it is called (greedy) policy improvement:\n\\begin{equation} \\gls{u}_t ( \\gls{x}_t ) = \\max_{\\gls{u}_t \\in \\gls{U}} \\left[ \\gls{V}^{\\gls{pi} } ( \\gls{x}_t ) \\right] \\end{equation}\nThe repetition of the policy evaluation and policy improvement is called policy iteration and will result in an optimal policy \\citeb{SB98}. According to \\citea{Doy96}, the optimal non-linear feedback control law is a function of value function's gradient:\n\\begin{equation} \\gls{u}_t ( \\gls{x}_t ) = \\gls{u}^{\\max} ~ g \\left( \\frac{1}{ c } \\gls{tau} \\frac{\\partial \\gls{V}^{\\gls{pi}} ( \\gls{x}_t ) }{\\partial \\gls{x}_t} \\frac{\\partial f(\\gls{x}_t,\\gls{u}_t)}{\\partial \\gls{u}_t} \\right) \\end{equation}\nwhere $\\gls{u}^{\\max}$ is the maximum control input, $g(x) = \\tanh (\\frac{\\pi}{2} x)$, $c$ is the control cost parameter, $\\tau$ is the step-size parameter and $f(\\gls{x}_t,\\gls{u}_t)$ are the system dynamics.\n\nThis optimal control law is applied to the pendulum swing-up task, with the result visible in \\ref{sec:swingup}. Having discussed both the policy evaluation and improvement, section~\\ref{sec:splines} will now describe the parametrization of the value function using multivariate simplex B-splines.\n\n\\section{Preliminaries on Multivariate Simplex B-Splines} \\label{sec:splines}\nThis section serves as a brief introduction to the mathematical theory of the simplex B-splines. \nFor a more extensive and general introduction to multivariate spline theory we refer to \\citea{Lai2007b}.\nWe start by introducing the basic concept of a single basis polynomial and B-form, then introduce the triangulation, followed by the vector notation of the B-form. Finally the \\gls{RLS} estimator for simplex splines is reviewed\n\n\\subsection{Simplex and barycentric coordinates}\nThe polynomial basis of a multivariate simplex B-spline is defined on a simplex. A simplex is defined by the non-degenerate vertices $ ( \\gls{upsilon}_0, \\gls{upsilon}_1, \\ldots , \\gls{upsilon}_n ) \\in \\mathbb{R}^n $ and thus creates a span in $n$-dimensional space. 
Any point $ \\gls{x} = ( x_1 , x_2 , \\ldots , x_n ) $ can be transformed to a barycentric coordinate $ \\gls{b}(\\gls{x}) = ( {b}_0 , {b}_1 , \\ldots , {b}_n ) $ with respect to a simplex. \nThe relation between Cartesian coordinate $\\gls{x}$ and barycentric coordinate $\\gls{b}(\\gls{x})$ is:\n\\begin{equation} \\gls{x} = \\sum^n_{i=0} b_i \\gls{upsilon}_i \\qquad \\sum^n_{i=0} b_i = 1 \\end{equation}\n\n\\subsection{Triangulation}\nAny number of simplices can be combined into a triangulation, where\na triangulation $\\mathcal{T}$ is a special partitioning of a domain into a set of $\\mathit{J}$ non-overlapping simplices and is defined in \\citea{Lai2007b} as:\n\\begin{equation} \\mathcal{T} \\equiv \\bigcup_{i=1}^{\\mathit{J}} t_i , \\quad t_i \\cap t_j \\in \\{ \\emptyset, \\tilde t \\}, \\quad \\forall t_i, t_j \\in \\mathcal{T} \\quad , i \\neq j \\end{equation}\nwith $\\tilde t$ a simplex of dimension less than $n$; that is, two simplices in $\\mathcal{T}$ are either disjoint or share a complete lower-dimensional face such as an edge or a vertex.\n\n\\subsection{Basis polynomials and the B-form}\nOn each simplex the spline function is expressed in the Bernstein basis. Following \\citea{Lai2007b}, the Bernstein basis polynomials of degree $d$ in terms of the barycentric coordinates $\\gls{b} = ( b_0 , b_1 , \\ldots , b_n )$ are:\n\\begin{equation} B^d_{\\gls{kappa}} (\\gls{b}) := \\frac{d!}{\\kappa_0! \\kappa_1! \\cdots \\kappa_n!} b_0^{\\kappa_0} b_1^{\\kappa_1} \\cdots b_n^{\\kappa_n} \\label{eq:bern} \\end{equation}\nwith $\\gls{kappa} = ( \\kappa_0 , \\kappa_1 , \\ldots , \\kappa_n )$ a multi-index of non-negative integers such that $| \\gls{kappa} | := \\kappa_0 + \\kappa_1 + \\cdots + \\kappa_n = d$. The number of valid multi-indices, and therefore the number of basis polynomials and B-coefficients per simplex, is $\\gls{dh} = \\frac{(d+n)!}{n! \\, d!}$. Any polynomial $\\gls{p}(\\gls{b})$ of degree $d$ on a simplex can be written in the B-form $\\gls{p}(\\gls{b}) = \\sum_{|\\gls{kappa}| = d} c_{\\gls{kappa}} B^d_{\\gls{kappa}} (\\gls{b})$, with $c_{\\gls{kappa}}$ the B-coefficients. The B-coefficients are sorted lexicographically on their multi-indices: $\\kappa_{\\nu,\\mu,\\varkappa} > \\kappa_{i,j,k}$ if $\\nu > i$, or if $\\nu = i$, then $\\mu > j$, or if $\\nu = i$ and $\\mu = j$, then $\\varkappa > k$. Thus for $d=2$ the order is:\n\\begin{equation} \\kappa_{2,0,0} , \\quad \\kappa_{1,1,0} , \\quad \\kappa_{1,0,1} , \\quad \\kappa_{0,2,0} , \\quad \\kappa_{0,1,1} , \\quad \\kappa_{0,0,2} \\label{eq:lex} \\end{equation}\nEach B-coefficient has a unique position within the simplex based on $\\gls{kappa}$; this relation between B-coefficient and spatial position allows the creation of a B-net as seen in Figure~\\ref{fig:bnet}.\n\n\n\\begin{figure}\n\\centering\n\\resizebox{!}{3cm}{\\input{tex\/fig_Bnet_d4_T1.tex}}\n\\caption{B-net overview of a $4^{th}$ degree simplex spline}\n\\label{fig:bnet}\n\\end{figure}\n\n\\subsection{Vector formulation of the B-form}\nIn order to complete the vector formulation, $\\gls{BERN}_{t_j} (\\gls{b}) \\in \\mathbb{R}^{\\hat{d} \\times 1}$ is introduced as a vector of Bernstein basis polynomials (Eq. \\ref{eq:bern}) of simplex $t_j$, which are sorted lexicographically as indicated by Eq.~\\ref{eq:lex}. Adapted from \\citea{visser2013}, we define the B-form in vector form on simplex $t_j$:\n\\begin{equation} \\gls{p}(\\gls{b}) = \\left\\{ \\begin{array}{rl} \\gls{BERN}(\\gls{b})^{\\top} \\cdot \\gls{Ch}^{t_j} ,& \\gls{x} \\in t_j \\\\ 0 ,& \\gls{x} \\notin t_j \\end{array} \\right. \\end{equation}\nwhere $\\mathbf{c}^{t_j}$ are the B-coefficients on simplex $t_j$. \nThe matrix operation to evaluate the simplex B-spline function of degree $d$ and continuity order $r$, defined on a triangulation $\\mathcal{T}_\\mathit{J}$ is:\n\\begin{equation}\n s_d^r (\\gls{b}) \\equiv \\gls{B}(\\gls{b})^{\\top} \\cdot \\gls{Ch} \\in \\mathbb{R}, \\quad \\gls{x} \\in \\mathcal{T}_\\mathit{J}\n\\end{equation}\nNow $\\gls{B}(\\gls{b})$ (note the absence of the superscript 'd') is the global vector of basis polynomials:\n\\begin{equation} \\gls{B}(\\gls{b}) \\equiv \\left[ \\gls{BERN}(\\gls{b})_{t_1}^{\\top} ~ \\gls{BERN}(\\gls{b})_{t_2}^{\\top} ~ \\cdots ~ \\gls{BERN}(\\gls{b})_{t_J}^{\\top} \\right]^{\\top} \\in \\mathbb{R}^{\\gls{ah} ~ \\times ~ 1} \\end{equation}\nThe global vector of B-coefficients \\gls{Ch} is defined as:\n\\begin{equation} \\gls{Ch} \\equiv \\left[ {\\gls{Ch}^{t_1}}^{\\top} ~ {\\gls{Ch}^{t_2}}^{\\top} ~ \\cdots ~ {\\gls{Ch}^{t_J}}^{\\top} \\right]^{\\top} \\in \\mathbb{R}^{\\gls{ah} ~ \\times ~ 1} \\label{eq:defc} \\end{equation}\nThe spline space is the space of all spline functions $s_d^r$ in the triangulation $\\mathcal{T}$. 
We use the definition of the spline space from \\citea{Lai2007b}:\n\\begin{equation} S_d^r (\\mathcal{T}) \\equiv \\{ s_d^r \\in C^r(\\mathcal{T}) : s_d^r |_t \\in \\mathbb{P}_d , \\forall t \\in \\mathcal{T} \\} \\end{equation}\nwhere $\\mathbb{P}_d$ is the space of polynomials of degree $d$.\n\n\\subsection{Continuity} \\label{sec:continuity}\nSince $\\gls{p}(\\gls{b})$ is a linear combination of continuous functions, $s_d^r (\\gls{b})$ is naturally continuous on each simplex. However, in order to assure continuity of $S_d^r (\\mathcal{T})$ between simplices, constraints are imposed on the relations between the coefficients of different simplices. The continuity order $r$ fixes the derivatives $\\frac{d^r p}{db^r}$ on the edges between neighboring simplices. The required continuity conditions can be calculated using \\citea{Lai2007b}:\n\\begin{equation} \\begin{array}{r}\nc^{t_i}_{(\\kappa_0,\\ldots,\\kappa_{n-1},m)} = \\sum_{| \\gamma | = m} c^{t_j}_{(\\kappa_0,\\ldots,\\kappa_{n-1},0) + \\gamma} B^m_\\gamma (w) \n\\\\ \\\\\n0 \\leq m \\leq r\n\\end{array} \\label{eq:contcond} \\end{equation}\nHere $w$ is a vertex of simplex $t_j$ which is not found on the edge that is shared with simplex $t_i$. \nAll constraints required for continuity are collected in the smoothness matrix $\\gls{H}$, with each row containing a new constraint and the columns consisting of the coefficients \\citea{deVisser2009a,deVisser2011}. These equations are all equaled to zero, resulting in the following matrix form:\n\\begin{equation} \\gls{H} \\gls{Ch} = 0 \\label{eq:matrixH} \\end{equation}\nwith \\gls{Ch} as in Eq.~\\ref{eq:defc}.\n\n\\subsection{Approximation power}\nTo describe the approximation power, the following definition from \\citea{Lai2007b}, chapter $10.1$ is used:\n\\begin{definition} \\label{def:approx}\n\\emph{(Approximation power of $S_d^r (\\mathcal{T})$)} \nFix $0 \\leq r < d$ and $0 < \\theta \\leq \\pi \/3$. Let $m$ be the largest integer such that for every polygonal domain $\\Omega$ and every regular triangulation $\\mathcal{T}$ of $\\Omega$ with smallest angle $\\theta$, for every $f \\in W_q^m ( \\Omega )$, there exists a spline $s \\in S_d^r (\\mathcal{T})$ with\n\\begin{equation} ||f - s||_{q,\\Omega} \\leq K ~ |\\mathcal{T}|^m ~ |f|_{m,q,\\Omega} \\label{eq:approx} \\end{equation}\nwhere the constant $K$ depends only on $r$, $d$, $\\theta$, and the Lipschitz constant of the boundary of $\\Omega$. Then we say that $S_d^r$ has approximation power $m$ in the $q$-norm. If this holds for $m = d + 1$, we say that $S_d^r$ has full approximation power in the $q$-norm.\n\\end{definition}\nThe theory behind this is extensive and available in \\citea{Lai2007b}, however for now it is important to realize that $|\\mathcal{T}|^m$ is a function of the longest edge in triangulation $\\mathcal{T}$. 
This reveals that it is possible to locally increase the approximation power by reducing the length of the longest edge in $\\mathcal{T}$.\n\n\\subsection{Recursive least squares} \\label{sec:rls}\n\\gls{RLS} is a method which allows the estimated parameters $\\gls{Ch}$ to be updated online with the use of the parameter covariance matrix $\\gls{P}$ \\citea{deVisser2011}.\nThe algorithm and the computational complexity is found in Table~\\ref{tab:trainMIL}.\nNote that it is essential to keep the column relations of \\gls{P} intact to enforce the constraints $\\gls{H} \\gls{Ch} = 0$ from Eq.~\\ref{eq:matrixH}.\nTo initialize the \\gls{P} matrix, we use $\\mathbf{Z}$, the orthogonal projector on the null-space of \\gls{H} \\citea{Lawson1974}:\n\\begin{equation} \\gls{P}_1 = \\gls{forget}_1 ~ \\mathbf{Z} \\label{eq:p0} \\end{equation}\nwhere $\\mathbf{Z} = (\\gls{I} - \\gls{H}^+\\gls{H})$, in which $\\gls{I}$ and $\\gls{H}$ are the identity and smoothness matrix respectively. The parameter $\\gls{forget}_1 > 0$ indicates the confidence in the initial estimated parameters, where a larger $\\gls{forget}_1$ indicates a lower confidence level.\nTo initialize $\\gls{Ch}$, it is important to pick these such that the continuity constraints are not violated. For this, a constrained \\gls{LS} fit of the estimated shape can be used, using the approach from \\citea{deVisser2009a}. If no knowledge is available, initialization of all coefficients at zero will satisfy the constraints, resulting in:\n\\begin{equation} \\gls{Ch}_1 = \\mathbf{0} \\end{equation}\n\n\\begin{table}[t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{\\acrshort{RLS} algorithm from \\citea{deVisser2011}}\n\\label{tab:trainMIL}\n\\centering\n \\begin{tabular}{lrll}\n \\hline\n \\hline\n\t\t\\textbf{Step} & & \\textbf{Action} & \\textbf{Computational} \\\\\n\t\t & & & \\textbf{Complexity} \\\\\n \\hline\n (1) & $\\gls{e}_{t} =$ & $\\gls{y}_{t} - \\gls{xb}_{t}^{\\top} \\gls{Ch}_{t}$ \t& $\\mathcal{O}(\\gls{dh})$\t\\\\\n (2) & $\\gls{P}_{t+1} =$ & $\\gls{P}_{t} - \\frac{ \\gls{P}_{t} \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\gls{P}_{t}}{1 + \\gls{xb}_{t}^{\\top} \\gls{P}_{t} \\gls{xb}_{t}} $\t& $\\mathcal{O}(\\gls{ah}^2)$ \\\\\n (3) & $\\gls{Ch}_{t+1} =$ & $\\gls{Ch}_{t} + \\gls{P}_{t+1} \\gls{xb}_{t} \\gls{e}_{t}$ & $\\mathcal{O}(\\gls{ah}^2)$\t\\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\n\\section{Spline Dynamic Programming} \\label{sec:sdp}\nThis section has a dual purpose; it will introduce the framework that combines both simplex splines and the \\gls{RLSTD} algorithm, and the modified \\gls{RLSTD} algorithm with the ability to track time-varying systems will be defined.\n\n\\subsection{The SDP framework} \\label{sec:sdpframework}\nTo successfully represent the optimal value function with a simplex spline, a spline space has to have sufficient approximation power at each point of the value function domain. While it is theoretically possible to have an infinite refinement in terms of triangulation, the idea behind the parametrization of the value function is that there is no need for an infinite amount of states, but only a limited amount of parameters to describe the entire state space $\\gls{X}$. \nWith no a-priori information available, a triangulation consisting of nodes positioned in a grid is a good initial estimate, as it evenly distributes the approximation power over the domain according to Eq.~\\ref{eq:approx}. 
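Before turning to the algorithmic details, a minimal Python sketch (with hypothetical function names) illustrates how the regression vector used by \\gls{RLSTD} is obtained from the spline basis of section~\\ref{sec:splines}: the state is transformed to the barycentric coordinates of the simplex that contains it, the Bernstein basis polynomials of Eq.~\\ref{eq:bern} are evaluated in lexicographical order, and the local value is the inner product with the B-coefficients of that simplex. Locating the containing simplex and assembling the sparse global vector $\\gls{xb}$ are omitted for brevity.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\nfrom math import factorial\n\ndef barycentric(x, vertices):\n    # vertices: (n+1) x n array holding the simplex vertices v_0..v_n.\n    # Solve x = sum_i b_i v_i together with sum_i b_i = 1 for b.\n    n = vertices.shape[1]\n    A = np.vstack([vertices.T, np.ones(n + 1)])\n    return np.linalg.solve(A, np.append(x, 1.0))\n\ndef multi_indices(n, d):\n    # All kappa with |kappa| = d, in the order of Eq. (eq:lex).\n    idx = [k for k in product(range(d + 1), repeat=n + 1) if sum(k) == d]\n    return sorted(idx, reverse=True)\n\ndef bernstein_vector(b, d):\n    # Vector of Bernstein basis polynomials B^d_kappa(b), Eq. (eq:bern).\n    out = []\n    for kappa in multi_indices(len(b) - 1, d):\n        w = float(factorial(d))\n        for bi, ki in zip(b, kappa):\n            w = w / factorial(ki) * bi ** ki\n        out.append(w)\n    return np.array(out)\n\ndef local_spline_value(x, vertices, c_local, d):\n    # B-form on a single simplex: s(x) = B(b(x))^T c^{t_j}.\n    return bernstein_vector(barycentric(x, vertices), d) @ c_local\n\\end{verbatim}\nFor the spline space used later in this paper ($n=2$, $d=4$) this yields $\\gls{dh} = 15$ basis polynomial values per simplex; the global regression vector $\\gls{xb}$ contains these values at the entries belonging to simplex $t_j$ and zeros elsewhere, consistent with the B-form above.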
\nWhat remains is to construct a spline space $S_d^r (\\mathcal{T})$ by selecting the polynomial degree $d$ and the continuity order $r$, parameters with a global effect on the spline function. Finally, to start the procedure, only the initial coefficients and covariance matrix have to be constructed. \nThere are additional settings required for exploration (included in the policy) and for adaptability (to be discussed in the next section).\nNote that there is an efficient method to derive the directional derivatives, which are used in the optimal policy, available in \\citea{deVisser2011}.\n\nWhile the framework functions in an infinite-time setting, the simulation has a time limit after which a new trial is started. This is done to promote convergence to the optimal value function: each trial starts from a random initial state $\\gls{x}_0$, which helps to visit every state. At each time step an action is selected, the next state is determined, the reward is calculated and the \\gls{RLSTD} algorithm updates the parameter coefficients $\\gls{Ch}$ (Eq. \\ref{eq:defc}) and covariance matrix $\\gls{P}$ (Eq. \\ref{eq:p0}). An overview of the complete algorithm is available in Table~\\ref{tab:SDP}.\n\nThe requirement of each state $\\gls{x} \\in \\gls{X}$ being visited is essential to guarantee the convergence to the optimal coefficients $\\gls{Ch}^*$. Therefore an explorer is introduced into the framework that explores the entire state-space $\\gls{X}$. Note that this guarantee assumes that the entire state-space is reachable, which is not true for every environment.\n\nIn this situation, the \\gls{SDP} framework consists of a direct implementation of the \\gls{RLSTD} algorithm, using multivariate simplex B-splines as a linear-in-the-parameters function approximator. As a consequence, the convergence proof as given in \\citea{BB96} applies.\n\nA block diagram of the control scheme is presented in Figure~\\ref{fig:SDPblock}. Both the state $\\gls{x}$ and input $\\gls{u}$ are in the diagram, as well as the reward $\\gls{r}$ and the system disturbance $w$. The \\gls{DP} components are the Policy, Reward and Value; while both the Policy and Reward components are identical for \\gls{NDP} and \\gls{SDP}, the Value component differs. 
The method to construct the value function in the \\gls{SDP} framework is described in section~\\ref{sec:policyeval}.\n\n\\begin{table}\n\\caption{The \\acrshort{SDP} overview}\n\\label{tab:SDP}\n\\centering\n \\begin{tabular}{ll}\n\\hline\n\\hline\n\\textbf{Step} & \\textbf{Action} \\\\\n\\hline\n(0) & Initialization \\\\\n ~ (0a)& Create a spline space $S_d^r (\\mathcal{T})$ \\\\\n ~ (0b)& Set the parameter values $\\gls{Ch}_1 = \\mathbf{0}$ \\\\\n ~ (0c)& Set the covariance matrix $\\gls{P}_1 = \\gls{forget}_1 \\cdot (\\gls{I} - \\gls{H}^+\\gls{H})$ \\\\\n(1) & Do for $n = 1, \\ldots ,$ trials, \\\\\t\t\t\n ~ (1a)& Set the initial state $\\gls{x}_1$ \\\\\n ~ (2) & Do for $t = 1, \\ldots ,$ end, \\\\\t\t\t\n ~ ~ (2a)& $\\gls{u}_t = f(\\gls{x}_t,\\gls{Ch}_t)$ \\\\\n ~ ~ (2b)& $\\gls{x}_{t+1} = f(\\gls{x}_t,\\gls{u}_t)$ \\\\\n ~ ~ (2c)& $\\gls{r}_{t+1} = f(\\gls{x}_{t+1},\\gls{u}_t)$ \\\\\n ~ ~ (2d)& Update $\\gls{Ch}$ and $\\gls{P}$ with the \\acrshort{RLSTD} algorithm \\\\\n\\hline\n\\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{figure}\n \\tikzstyle{block} = [draw, rectangle, fill=gray!10,\n minimum height=3em, minimum width=6em]\n \\tikzstyle{block1} = [draw, rectangle, \n minimum height=3em, minimum width=6em]\n \\centering\n\\resizebox{!}{3cm} {\n \\begin{tikzpicture}[auto, node distance=2cm]\n \n \\node [block1] (controller) {Policy \\footnotesize{(2a)}};\n \\node [block, right of=controller, node distance=3cm] (system) {System \\footnotesize{(2b)}};\n \\node [coordinate, above of=system, node distance=1cm] (w) {};\n \\node [right of=w, node distance=0.25cm] (w2) {$w$};\n \\draw [->] (w) -- (system);\n \\node [block1, above of=controller] (value) {Value \\footnotesize{(2d)}};\n \\node [block1, right of=value, node distance=3cm] (reward) {Reward \\footnotesize{(2c)}};\n \n \n \\draw [->] (controller) -- node[name=u, below] {\\gls{u}} (system);\n \\draw [->] (reward) -- node[name=r] {\\gls{r}} (value);\n \\node [coordinate, right of=system] (output) {};\n \\node [above of=output, node distance=0.25cm, xshift=-0.2cm] (o2) {\\gls{x}};\n \\node [coordinate, right of=system, node distance = 1.4cm] (y1) {};\n \\node [coordinate, right of=reward, node distance = 1.4cm] (y2) {};\n \\node [coordinate, node distance = 1cm, above of=r] (top) {};\n \\node [coordinate, node distance = 0.6cm, below of=u] (bottom) {};\n \\node [coordinate, node distance = 1cm, above of=u] (mid) {};\n \n \\draw [->] (controller.east) -- ([xshift=0.5cm]controller.east) -- ([xshift=0.5cm, yshift=1cm]controller.east) -| ([xshift=-0.6cm]reward.south);\n \\draw [->] (system) -- (output);\n \\draw [->] (y1) |- (bottom) -| (controller);\n \\draw [->] (y1) -- (y2) -- (reward.east);\n \\draw [->] (y2) |- (top) -| (value.north);\n \\draw [->, dashed] (value) -- (controller);\n \\end{tikzpicture}\n}\n\\caption{The control diagram of the \\acrshort{SDP} framework, including the corresponding step from Table~\\ref{tab:SDP}}\n\\label{fig:SDPblock}\n\\end{figure}\n\n\\subsection{Recursive weighted least squares}\nWhile the objective function in Eq.~\\ref{eq:classicLS} will converge to the classic \\gls{LS} solution, it is unable to cope with time-varying systems as it weighs each measurement equally, driving the \\gls{P} matrix to zero. 
For the \\gls{RLS} algorithm to track time-varying systems, a popular and effective solution in adaptive control is using the forget factor, changing the quadratic objective function to:\n\\begin{equation} \\gls{COST}_t = \\frac{1}{t} \\sum_{k=1}^{t} \\gls{forget}^{t-k} \\left[ e_k \\right]^2 \\label{eq:weightedLS} \\end{equation}\nwhere $\\gls{forget}$ represents the forget factor. This equation can be rewritten as:\n\\begin{equation} \\gls{COST}_t = \\frac{1}{t} \\left[ \\gls{forget} \\left( t - 1 \\right) \\gls{COST}_{t-1} + \\left[ e_t \\right]^2 \\right] \\end{equation}\nmaking it clear that \\gls{forget} has a discounting effect on the past errors, reducing the importance given to old data. Therefore, by applying a forget factor the \\gls{LS} solution is converted to a \\gls{WLS} solution, where the newest data-points have the most influence on the parameters. \nAccording to \\citea{WZ91} the forget factor can be applied to the covariance matrix as:\n\\begin{equation} \\gls{P}_{t+1} = \\gls{forget}^{-1} ~ \\gls{P}_{t} \\label{eq:globalforget} \\end{equation}\nApplying the forget factor in this form has the disadvantage that it scales all elements of \\gls{P} equally. This will result in covariance wind-up when no new information is available over a long period, caused by certain elements of \\gls{P} becoming very large.\nThis is a direct consequence of the spatial influence of the B-coefficients, visible in the B-net seen in Figure~\\ref{fig:bnet}.\nThe forget method of Eq.~\\ref{eq:globalforget} is therefore only used at initialization, represented by $\\gls{forget}_1$ seen in Eq.~\\ref{eq:p0}.\n\nA solution to prevent the covariance wind-up is to apply the forget factor only to the updated parameters. This approach is called directional forgetting \\citea{WZ91} and updates the covariance matrix as follows:\n\\begin{equation} \\gls{P}_{t+1} = \\gls{P}_{t} + \\gls{forget}_2 ~ \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\end{equation}\nwhere $\\gls{forget}_{2}$ represents a forget factor applied to the updated parameters.\nHowever, the problem with this approach is that it destroys the continuity by ignoring the constraints set by \\gls{H} in Eq.~\\ref{eq:matrixH}.\nTherefore, in order to keep $\\gls{Ch}$ in the null-space of $\\gls{H}$, the following update is proposed:\n\\begin{equation} \\gls{P}_{t+1} = \\gls{P}_{t} + \\gls{forget}_{2} \\left[ \\mathbf{Z} \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\mathbf{Z} \\right] \\end{equation}\nwhere $\\mathbf{Z} = (\\gls{I} - \\gls{H}^+\\gls{H})$, which is the projection on the null-space of $\\mathbf{H}$, introduced in Eq.~\\ref{eq:p0}.\n\nThis approach can be implemented by altering step (2) in the original \\gls{RLS} algorithm from Table~\\ref{tab:trainMIL} to:\n\\begin{equation}\n\\gls{P}_{t+1} = \\gls{P}_{t} - \\frac{ \\gls{P}_{t} \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\gls{P}_{t}}{1 + \\gls{xb}_{t}^{\\top} \\gls{P}_{t} \\gls{xb}_{t}} + \\gls{forget}_{2} \\left[ \\mathbf{Z} \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\mathbf{Z} \\right] \\label{eq:modcov} \\end{equation}\nresulting in our new formulation for the covariance matrix update step.\nThe modified \\gls{RLSTD} algorithm with the additional term in step (2) is visible in Table~\\ref{tab:trainRLSTDmod}, including the computational complexity. 
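In code form, the modification amounts to a single additional rank-one term in the covariance update. The sketch below (hypothetical names) shows one step of the modified algorithm; Z stands for the orthogonal projector $\\mathbf{Z} = \\gls{I} - \\gls{H}^+\\gls{H}$, computed once from the smoothness matrix, and lam2 stands for $\\gls{forget}_{2}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef projector(H):\n    # Orthogonal projector on the null space of the smoothness matrix H.\n    return np.eye(H.shape[1]) - np.linalg.pinv(H) @ H\n\ndef rlstd_step_forget(c, P, phi, phi_next, reward, gamma, Z, lam2):\n    d = phi - gamma * phi_next\n    # Step (1): temporal-difference error.\n    e = reward - d @ c\n    denom = 1.0 + d @ P @ phi\n    # Step (2): covariance update with directional forgetting restricted\n    # to the null space of H, cf. Eq. (eq:modcov).\n    Zphi = Z @ phi\n    P = P - np.outer(P @ phi, d @ P) / denom + lam2 * np.outer(Zphi, Zphi)\n    # Step (3): parameter update, now using the already-updated P_{t+1}.\n    c = c + (P @ phi) * e\n    return c, P\n\\end{verbatim}\nWith lam2 = 0 the step reduces to the original \\gls{RLSTD} update of Table~\\ref{tab:trainRLSTD}, since in that case $\\gls{P}_{t+1} \\gls{xb}_t = \\gls{P}_{t} \\gls{xb}_t \/ \\left( 1 + ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t} \\gls{xb}_t \\right)$.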
To immediately apply the forget factor at time $t$, step (3) employs the $\\gls{P}_{t+1}$ matrix, as done in \\citea{ljung1983theory}.\n\nThe \\gls{RLS} algorithm with directional forgetting is simply convergent for a system where the data generation mechanism is deterministic \\citea{bittanti1990}. It should be noted that under this assumption, \\gls{LMS} algorithms also have proven convergence \\citea{TVR97}.\n\nAdditionally, the modified \\gls{RLSTD} algorithm is capable of filtering out the residual noise to end up near the optimal coefficients $\\gls{Ch}^*$. With $\\gls{forget}_{2} = 0$, the filter has an infinite window in time, while at $\\gls{forget}_{2} > 0$, the window is infinite no longer, which has the advantage of being able to track time-varying systems and disadvantage of being susceptible to noise. This trade-off between noise filtering and tracking is an often returning phenomenon in adaptive control \\citea{WZ91}. In principle $\\gls{forget}_{2} > 0$ only has a beneficial effect on the control of a time-varying system.\nLuckily, this approach allows the use of a variable $\\gls{forget}_{2}$, able to increase and decrease as desired. The design of a successful variable forget factor will result in both good tracking behavior and a good performance with residual noise.\n\n\\begin{table}\n\\caption{\\acrshort{RLSTD} algorithm, modified from Table~\\ref{tab:trainRLSTD}}\n\\label{tab:trainRLSTDmod}\n\\centering\n \\begin{tabular}{lrll}\n \\hline\n \\hline\n\t \\textbf{Step} & & \\textbf{Action} & \\textbf{Computational} \\\\\n\t\t\t& & & \\textbf{Complexity} \t\t\\\\\n \\hline\n(1)& $e_t =$ & $ \\gls{r}_{t+1} - (\\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1})^{\\top} \\gls{Ch}_{t}$ & $\\mathcal{O}(\\gls{dh})$ \\\\\n(2)& $\\gls{P}_{t+1} =$ & $\\gls{P}_{t} - \\frac{ \\gls{P}_{t} \\gls{xb}_{t} ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t}}{1 + ( \\gls{xb}_t - \\gls{gamma} \\gls{xb}_{t+1} )^{\\top} \\gls{P}_{t} \\gls{xb}_t} $ & $\\mathcal{O}(\\gls{ah}^2)$ \\\\\n & & $+ \\gls{forget}_{2} \\left[ \\mathbf{Z} \\gls{xb}_{t} \\gls{xb}_{t}^{\\top} \\mathbf{Z} \\right]$ & \\\\\n(3) & $ \\gls{Ch}_{t+1} =$ & $ \\gls{Ch}_{t} + \\gls{P}_{t+1} \\gls{xb}_t e_t$ & $\\mathcal{O}(\\gls{ah}^2)$ \\\\\n \\hline\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\section{Performance Evaluation of SDP} \\label{sec:results}\nThe proposed \\gls{SDP} framework has been implemented on a pendulum swing-up non-linear control problem, as seen in \\ref{sec:swingup}.\nAt the start, the controller has no knowledge about the optimal value function, and has to learn from online measurements. \nThe gain input of the plant is assumed to be known; it will increase the learning time for both algorithms with the same amount if it is to be identified using model identification.\nThe objective of the experiment is to move and keep the pendulum in an upwards position by using a limited torque. \nThe controller receives reinforcement at each state, where the top position is most beneficial.\nThe system dynamics are simulated using an Euler integration scheme in combination with the equations of motion as presented in \\ref{sec:swingup}. \n\nThe performance of each trial is measured by the maximum amount of time $t_{up}$ the pendulum is consecutively kept in an upwards position, where the upwards position is defined as:\n\\begin{equation} | \\gls{theta}_{up} | < \\frac{\\pi}{4} = 45^{\\circ} \\label{eq:up} \\end{equation}\nA trial itself consists of $20$ s, with time steps of $0.02$ s. 
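For concreteness, a minimal sketch of the simulated plant and of the performance metric is given below. The names and parameter values are placeholders: the equations of motion are assumed to be the standard pendulum model of \\citea{MD05}, $m l^2 \\gls{thetadd} = -\\mu \\gls{thetad} + m g l \\sin \\gls{theta} + T + w$, with the actual form and constants given in \\ref{sec:swingup}, and $w$ the additive acceleration disturbance (zero for the deterministic system).\n\\begin{verbatim}\nimport numpy as np\n\ndef pendulum_step(theta, theta_dot, torque, dt=0.02, m=1.0, l=1.0,\n                  g=9.81, mu=0.01, sigma_w=0.0, rng=None):\n    # One Euler step of the assumed pendulum model; theta = 0 is upright.\n    rng = np.random.default_rng() if rng is None else rng\n    w = rng.normal(0.0, sigma_w) if sigma_w > 0.0 else 0.0\n    theta_dd = (-mu * theta_dot + m * g * l * np.sin(theta) + torque) / (m * l**2) + w\n    theta_dot = theta_dot + dt * theta_dd\n    theta = theta + dt * theta_dot\n    # Wrap the angle to [-pi, pi).\n    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi\n    return theta, theta_dot\n\ndef t_up(theta_history, dt=0.02):\n    # Longest consecutive time with |theta| < pi/4, cf. Eq. (eq:up).\n    best = run = 0\n    for th in theta_history:\n        run = run + 1 if abs(th) < np.pi / 4.0 else 0\n        best = max(best, run)\n    return best * dt\n\\end{verbatim}\nA trial then consists of $1000$ such steps ($20$ s at $0.02$ s per step), after which the performance of that trial is summarized by $t_{up}$.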
As each trial is initialized in a random angle $\\gls{theta}$ and a zero angle rate $\\gls{thetad}$ (consistent with the experiment in \\citea{MD05}), some trials require more swings to reach the top. Therefore a lower $t_{up}$ does not mean a worse performance per se, but it may have been initialized in a lower position. \nHowever, while $\\gls{theta}_0$ is random, it is identical for each trial over the different methods. This is done to remove the chance of one method having better initializations than the other, degrading the quality of the comparison.\n\nThe neural networks used for \\gls{NDP} are constructed using either radial basis function or using a sigmoid function as a basis. This corresponds to the radial basis network and feedforward network respectively. In case of the radial basis network, the TD($\\lambda$) algorithm is used for training to increase the performance, while the feedforward network is trained using the gradient descent approach. More information on how these networks are constructed and trained is available in \\citea{bertsekas1996neuro} and \\citea{Roj96}.\n\nFor \\gls{SDP}, a $4^{th}$ degree spline space with $1^{st}$ order continuity, without ($\\gls{forget}_2 = 0$) and with ($\\gls{forget}_2 = 0.4$) forget factor (see Eq.~\\ref{eq:modcov}) has been selected. The polynomial degree has been determined by trial and error, such that the simplex spline is capable of estimating the optimal value function. \nBecause the optimal policy is based on the first derivative and \\gls{u} has no rate restrictions, a discontinuous first derivative would give an unfair advantage to \\gls{SDP}. Therefore the continuity degree has been chosen such that the first derivative is continuous, identical to \\gls{NDP}.\nThe value of $\\gls{forget}_2$ is selected such that an increase of tracking behavior is witnessed in the experiments.\nA type III Delaunay triangulation of nodes positioned in a grid is used to produce the triangulation $\\mathcal{T}_{32}$ seen in Figure~\\ref{fig:triangulation}. As explained in section~\\ref{sec:sdpframework}, it is essential that this triangulation is capable of approximating the optimal value function.\n\nThe parameters of \\gls{NDP} used in the simulation have been selected such that there is a comparable amount of parameters in each function approximator;\nthe $S_4^1(\\mathcal{T}_{32})$ has a total of $480$ coefficients, the feedforward network has $480$ weights, and the 12x12 radial basis network has $432$ weights. The centers of the radial basis network are positioned in a grid, including one centered in $\\gls{x} = [0 ~ 0]^{\\top}$.\nA result of the continuity constraints is that the number of free parameters is less than the total amount of parameters, thus effectively lowering the approximation power. In this case, there are $151$ free parameters as the rank of \\gls{H} is $329$.\n\nA search for the best set of learning parameters for the feedforward and radial basis network was performed in an attempt to have a strong comparison between \\gls{NDP} and \\gls{SDP}. An overview of the parameters used in the experiments is visible in Table~\\ref{tab:sumparam}.\nThe initialization of network weights, center weights or nodes has a significant impact; for the feedforward network and simplex splines half of the initializations failed when the networks weights or nodes were selected randomly, while for the radial basis network only 4\\% failed. In this case, success is described as scoring a $t_{up} > 10$ s for at least one trial. 
In the experiment the center weights and nodes were defined a-priori, removing the dependency on the initialization.\n\nFirst, in section~\\ref{sec:exp1} the four control methods are tasked with controlling a stochastic system, which will demonstrate the influence of system noise. Secondly, in section~\\ref{sec:exp2} the methods are tasked with controlling a time-varying system; this is meant to exhibit the adaptability of the control methods.\n\n\\begin{table}\n\\caption{The Learning Parameters of the \\gls{NDP} and \\gls{SDP} Methods used to obtain the Results}\n\\label{tab:sumparam}\n\\centering\n \\begin{tabular}{c}\n \\acrshort{NDP} - Feedforward \\\\\n \\begin{tabular}{c|c|c|c|c}\n\t\\hline\n\t\\hline\n\t \\textbf{Parameter}\t& $\\eta_1$ \t& $\\eta_2$ \t& Neurons & Total parameters\t\\\\\n\t \\hline\n\t \\textbf{Value}\t& $10^{-3}$ \t& $10^{-3}$\t& $160$ & $480$ \\\\\n\t\\hline\n\t\\hline\n \\end{tabular}\n \\\\ \\\\ \\acrshort{NDP} - Radial basis \\\\\n \\begin{tabular}{c|c|c|c|c|c}\n\t\\hline\n\t\\hline\n\t \\textbf{Parameter}\t& $\\eta_1$ \t& $\\eta_2$ \t& $\\lambda$ \t& Neurons & Total parameters\t\\\\\n\t \\hline\n\t \\textbf{Value}\t& $10^{-3}$ \t& $10^{-2}$ \t& $0.8$ \t\t& 12x12 & $432$\t\\\\\n\t\\hline\n\t\\hline\n \\end{tabular}\n \\\\ \\\\ \\acrshort{SDP} \\\\\n \\begin{tabular}{c|c|c|c|c|c}\n\t\\hline\n\t\\hline\n\t \\textbf{Parameter} & $S_d^r$ & $\\gls{forget}_1$ & $\\gls{forget}_2$ & Triangulation & Total parameters \\\\\n\t \\hline\n\t \\textbf{Value} & $S_4^1$ & $10$ & $0$ \/ $0.4$ & Type III $\\mathcal{T}_{32}$ & $480$\t\\\\\n\t\\hline\n\t\\hline\n \\end{tabular}\n \\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\input{tex\/tri.tex\n\\caption{Type III Delaunay Triangulation $\\mathcal{T}_{32}$}\n\\label{fig:triangulation}\n\\end{figure}\n\n\\subsection{Experiment I - The Stochastic System} \\label{sec:exp1}\nIn the first experiment the four methods are tasked with controlling a stochastic system. \nThe stochastic system has a system disturbance of $\\sigma_w = 3^{\\circ}\/s^2$. This means that the standard deviation of the system noise is $60\\%$ of the input's influence, as the maximum influence is $T^{\\max}\/ml^2 = 5^{\\circ}\/s^2$.\nSince the system noise is applied to the equation of motion, the effect of the noise is only directly connected to $\\gls{thetadd}$.\nFurthermore, in order to identify the influence of system noise on each method, the deterministic system ($\\sigma_w = 0^{\\circ}\/s^2$) is used as a baseline.\n\nThe results of the four methods are visible in the figures~\\ref{fig:e1_ndp_w0} and \\ref{fig:e1_sdp_w0} for the deterministic system, and in the figures~\\ref{fig:e1_ndp_w3} and \\ref{fig:e1_sdp_w3} for the stochastic system.\nIt shows that while the learning parameters and initialized weights are identical for both systems, the \\gls{NDP} radial basis has learned to swing the pendulum up in less trials for the stochastic system. This increased learning rate of a dynamic programming algorithm in a stochastic system is a well known phenomenon, and is a result of the extra exploration that occurs due to the system noise \\citea{bertsekas1996neuro}. \nNevertheless the learning rate of \\gls{SDP} remains the highest in both the deterministic and stochastic system.\n\nAnother observation is that the stability of \\gls{NDP} is effected most by the presence of system noise, visible by an overall decrease of $t_{up}$. 
\n\n\\begin{table}\n\\caption{Mean and standard deviation (std) of $t_{up}$ of the $100$ trials of Experiment I}\n\\label{tab:results1}\n\\centering\n\\begin{tabular}{c}\n $\\sigma_w = 0^{\\circ}\/s^2$ \\\\\n \\begin{tabular}{c|c|c|c|c} \n\t\\hline\n\t\\hline\n\t\t\t& \\acrshort{NDP} - \t& \\acrshort{NDP} - \t& \\acrshort{SDP} - \t& \\acrshort{SDP} - \\\\\n\t\t\t& FF \t\t& RBF \t\t& $\\gls{forget}_{2} = 0$ & $\\gls{forget}_{2} = 0.4$ \\\\\n\t \\hline\n\t \\textbf{Mean}\t& $12.71$ s \t& $13.60$ s\t& $18.30$ s\t& $18.16$ s\t\\\\\n\t \\textbf{Std}\t& $7.89$ s\t& $7.80$ s\t& $2.84$ s\t& $3.24$ s\t\\\\\n\t\\hline\n\t\\hline\n \\end{tabular} \\\\ \\\\\n $\\sigma_w = 3^{\\circ}\/s^2$ \\\\\n \\begin{tabular}{c|c|c|c|c} \n\t\\hline\n\t\\hline\n\t\t\t& \\acrshort{NDP} - \t& \\acrshort{NDP} - \t& \\acrshort{SDP} - \t& \\acrshort{SDP} - \\\\\n\t\t\t& FF \t\t& RBF \t\t& $\\gls{forget}_{2} = 0$ & $\\gls{forget}_{2} = 0.4$ \\\\\n\t \\hline\n\t \\textbf{Mean}\t& $11.95$ s\t& $15.48$ s\t& $18.14$ s\t& $17.86$ s\t\\\\\n\t \\textbf{Std}\t& $6.62$ s\t& $4.93$ s\t& $3.08$ s\t& $2.91$ s\t\\\\\n\t\\hline\n\t\\hline\n \\end{tabular}\n\\end{tabular}\n\\end{table}\n\n\n\n\\newcommand{7cm}{7cm}\n\\newcommand{4cm}{4cm}\n\n\\begin{figure*}\n\t\\centering\n\t\\input{tex\/trials_NDP_w0_v3.tex}\n\t\\caption{Performance overview of \\acrshort{NDP} - Radial basis 12x12 ($432$ parameters) and feedforward network 160 neurons ($480$ parameters), without system noise ($\\sigma_w = 0^{\\circ}\/s^2$). The trendline represents a 5-point moving average.}\n\t\\label{fig:e1_ndp_w0}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\input{tex\/trials_NDP_w3_v3.tex}\n\t\\caption{Performance overview of \\acrshort{NDP} - Radial basis 12x12 ($432$ parameters) and feedforward network 160 neurons ($480$ parameters), with system noise ($\\sigma_w = 3^{\\circ}\/s^2$). The trendline represents a 5-point moving average.}\n\t\\label{fig:e1_ndp_w3}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t{\\input{tex\/trials_SDP_w0_v3.tex}}\n\t\\caption{Performance overview of \\acrshort{SDP} - $S_4^1 (\\mathcal{T}_{32})$ ($480$ parameters), with ($\\gls{forget}_{2} = 0.4$) and without ($\\gls{forget}_{2} = 0$) forget factor, and without system noise ($\\sigma_w = 0^{\\circ}\/s^2$). The trendline represents a 5-point moving average.}\n\t\\label{fig:e1_sdp_w0}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\n\t{\\input{tex\/trials_SDP_w3_v3.tex}}\n\n\t\\caption{Performance overview of \\acrshort{SDP} - $S_4^1 (\\mathcal{T}_{32})$ ($480$ parameters), with ($\\gls{forget}_{2} = 0.4$) and without ($\\gls{forget}_{2} = 0$) forget factor, and with system noise ($\\sigma_w = 3^{\\circ}\/s^2$). The trendline represents a 5-point moving average.}\n\t\\label{fig:e1_sdp_w3}\n\\end{figure*}\n\n\\subsection{Experiment II - The Time-Varying System} \\label{sec:exp2}\n\nThe second experiment involves the control of a time-varying system. To simulate this, the control system is first allowed to converge to the optimal value function by executing $1000$ learning trials. The pendulum's mass is then changed from $m = 1$ kg to $m = 1.5$ kg, and $100$ further trials are simulated, as in the first experiment.
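\n\nSchematically, this protocol can be summarized as in the sketch below, where \\texttt{agent} and \\texttt{run\\_trial} are again hypothetical placeholders for one of the four learning controllers and the pendulum simulation, respectively.\n\\begin{verbatim}\nM_NOMINAL = 1.0   # pendulum mass in kg before the change\nM_CHANGED = 1.5   # pendulum mass in kg after the change\n\ndef run_trial(agent, mass):\n    # Placeholder for one learning trial on a pendulum of the given mass.\n    return 0.0\n\nagent = None      # placeholder for NDP-FF, NDP-RBF, SDP(0) or SDP(0.4)\n\n# Convergence phase: 1000 learning trials on the nominal system.\nfor _ in range(1000):\n    run_trial(agent, M_NOMINAL)\n\n# The mass is then increased by 50%; the following 100 trials are the ones\n# reported in the figures and in Table results2.\nt_up_after_change = [run_trial(agent, M_CHANGED) for _ in range(100)]\n\\end{verbatim}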
\n\nThe values of $t_{up}$ for the $100$ trials after the change are shown in Figures~\\ref{fig:e2_trial_ndp} and \\ref{fig:e2_trial_sdp} for the \\gls{NDP} and \\gls{SDP} methods, respectively. \nIncreasing the pendulum's mass by $50\\%$ does not render the old control system useless; in fact, in many cases the top can still be reached, albeit not as close to $\\gls{theta} = 0$ as before the change (i.e., the pendulum is held stationary at a slight angle). \nThe changed optimal value function introduces a \\acrshort{TD}-error which propagates through the network.\nFor the feedforward network a minor adaptation of the parameters is sufficient to adjust the global shape of the estimated value function, allowing \\acrshort{NDP} - feedforward to continue controlling the altered system without a temporary decrease in performance. This is in contrast to \\acrshort{NDP} - radial basis, which does show a decrease in performance as the \\acrshort{TD}-error propagates through the estimated value function.\nBecause \\acrshort{SDP} - $\\gls{forget}_{2} = 0$ gives each data point an equal weight, it is slow to adapt to a new situation, spending a long period in the transition phase where performance is reduced. \\acrshort{SDP} - $\\gls{forget}_{2} = 0.4$ has a shorter transition phase and has adapted itself to the new system before the $50^{th}$ trial.\n\nIn Table~\\ref{tab:results2}, the mean and standard deviation of $t_{up}$ over the $100$ trials are presented. This quantification identifies \\acrshort{NDP} - feedforward as the method with the best performance, and \\acrshort{SDP} - $\\gls{forget}_{2} = 0.4$ as the method with the second best performance. As stated before, the performance of the feedforward network is a direct consequence of its ability to generalize. \nHowever, there is no guarantee of convergence. \nFurthermore, the table rates \\acrshort{NDP} - radial basis and \\acrshort{SDP} - $\\gls{forget}_{2} = 0$ as equally poor; however, Figure~\\ref{fig:e2_trial_ndp} shows that \\acrshort{NDP} - radial basis has recovered from the system change in the last $20$ trials. This indicates that \\acrshort{NDP} - radial basis is capable of recovering, although it takes more trials than the other methods.\n\n\\begin{table}\n\\caption{Mean and standard deviation (std) of $t_{up}$ of the $100$ trials of Experiment II}\n\\label{tab:results2}\n\\centering\n \\begin{tabular}{c|c|c|c|c} \n\t\\hline\n\t\\hline\n\t\t\t& \\acrshort{NDP} - \t& \\acrshort{NDP} - \t& \\acrshort{SDP} - \t& \\acrshort{SDP} - \\\\\n\t\t\t& FF \t\t& RBF \t\t& $\\gls{forget}_{2} = 0$ & $\\gls{forget}_{2} = 0.4$ \\\\\n\t \\hline\n\t \\textbf{Mean}\t& $17.88$ s\t& $11.44$ s\t& $12.40$ s\t& $15.90$ s\t\\\\\n\t \\textbf{Std}\t& $1.47$ s\t& $7.89$ s\t& $6.76$ s\t& $5.15$ s\t\\\\\n\t\\hline\n\t\\hline\n \\end{tabular}\n\\end{table}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\input{tex\/trials_NDP_w0_v3.tex}\n\t\\caption{Performance overview of \\acrshort{NDP} - Radial basis 12x12 ($432$ parameters) and feedforward network 160 neurons ($480$ parameters), with an altered system parameter ($m = 1.5$) at $t = 0$ and without system noise ($\\sigma_w = 0^{\\circ}\/s^2$).
\nThe trendline represents a 5-point moving average.}\n\t\\label{fig:e2_trial_ndp}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\input{tex\/trials_SDP_w0_v3.tex}\n\t\\caption{Performance overview of \\acrshort{SDP} - $S_4^1 (\\mathcal{T}_{32})$ ($480$ parameters), with ($\\gls{forget}_{2} = 0.4$) and without ($\\gls{forget}_{2} = 0$) forget factor, with an altered system parameter ($m = 1.5$) at $t = 0$ and without system noise ($\\sigma_w = 0^{\\circ}\/s^2$). The trendline represents a 5-point moving average.}\n\t\\label{fig:e2_trial_sdp}\n\\end{figure*}\n\n\\section{Discussion} \\label{sec:discussion}\nThe most important difference between neural networks and simplex splines in a \\gls{DP} framework is that neural networks are non-linear in the parameters, while simplex splines are linear in the parameters. Using a linear-in-the-parameters function approximator allows for the use of the \\gls{RLSTD} algorithm, which has fast and proven convergence in a stochastic framework \\citea{BB96}.\nAs a result, the \\gls{SDP} framework without forget factor ($\\gls{forget}_{2} = 0$) has proven convergence and demonstrates stable performance when learning online. Nevertheless, in certain circumstances it is beneficial to trade these properties for adaptability, which is done by using the modified \\gls{RLSTD} algorithm from Table~\\ref{tab:trainRLSTDmod} and thus introducing a forget factor ($\\gls{forget}_{2} > 0$). These circumstances arise when the environment is susceptible to system parameter changes and quick adaptation is required.\n\nTo successfully implement the \\gls{SDP} framework, it is important to construct the proper spline space.\nThis is because, even though \\gls{SDP} has proven convergence to the best fit, there is no guarantee of system performance. To obtain this guarantee, the optimal value function, or its shape, must be known a priori. In practice this means that either an off-line simulation is used, or the system is tested to see whether the desired performance is reached.\nAnother option is to treat all unknown parameters as an additional optimization, and solve the entire optimization problem using Intersplines \\citea{dVvK+12}. Unfortunately, at the moment Intersplines are limited to two-dimensional inputs, and require too much computational power to be attractive for real-time applications.\n\nWhile it is possible to use multivariate simplex B-splines in higher dimensions \\citea{Boor1986}, two problems arise. First, there is the construction of the triangulation, which is not automated and is already tedious in dimension $N = 3$. Second, due to the ``curse of dimensionality'' \\citea{Bellman1957}, the computational costs of dynamic programming become very high when moving to higher dimensions. The effects of this curse can be reduced by selecting a triangulation that has fewer simplices but retains the ability to represent the optimal value function. However, as noted above, constructing such a triangulation becomes increasingly difficult in higher dimensions and is not yet automated. This creates difficulties when applying the \\gls{SDP} framework to a problem of dimension $N>2$, since a grid approach similar to the one used in the simulation produces high computational costs.\n\nA final remark concerns the methods used in the experiments. The four methods were considered to be comparable in terms of the number of parameters. However, the continuity constraints reduced the number of free parameters in the \\gls{SDP} methods, thereby reducing the approximation power of the splines relative to the neural networks. Experiments with \\gls{NDP} have shown that reducing the number of parameters reduces the performance, which suggests that \\gls{SDP} may perform even better relative to \\gls{NDP} than these experiments indicate.
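\n\nTo illustrate the type of update rule referred to above, the sketch below shows a generic recursive least-squares TD(0) step with an exponential forget factor for a linear-in-the-parameters value function approximator, loosely following the form of \\citea{BB96}. It is only a sketch of the underlying idea: the modified \\gls{RLSTD} algorithm of Table~\\ref{tab:trainRLSTDmod} additionally has to preserve the continuity constraints of the simplex spline, and its forget factor $\\gls{forget}_2$ is parameterized differently from the classical factor used here (which equals one when there is no forgetting).\n\\begin{verbatim}\nimport numpy as np\n\ndef rlstd_step(theta, P, phi, phi_next, reward, discount=0.95, forget=1.0):\n    # One recursive least-squares TD(0) update for V(x) ~ phi(x)^T theta.\n    # forget = 1.0: all samples weighted equally (no forgetting);\n    # forget < 1.0: old samples are exponentially down-weighted, trading\n    # noise filtering for adaptability to parameter changes.\n    diff = phi - discount * phi_next             # regressor difference\n    gain = (P @ phi) / (forget + diff @ P @ phi)\n    td_error = reward - diff @ theta             # r + discount*V(x') - V(x)\n    theta = theta + gain * td_error\n    P = (P - np.outer(gain, diff @ P)) / forget  # covariance update\n    return theta, P\n\\end{verbatim}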
\n\n\\section{Conclusion} \\label{sec:conclusion}\nIn this paper the \\gls{SDP} framework was introduced: a combination of the \\gls{RLSTD} algorithm and multivariate simplex B-splines. It was shown to be capable of solving the non-linear control problem of the pendulum swing-up with nothing but a reward function as feedback, and to do so in significantly fewer trials than \\gls{NDP} systems supported by function approximators with a comparable number of parameters.\nIn addition, \\gls{SDP} exhibited greater resilience to system noise than \\gls{NDP}, showing hardly any decrease in performance in the presence of a disturbance.\nFurthermore, a forget method that preserves the continuity constraints was introduced and merged with the \\gls{RLSTD} algorithm to create an adaptive control system.\nIn conclusion, the high convergence rate of the \\gls{RLSTD} algorithm, in combination with the high approximation power of the multivariate simplex B-splines, provides a basis for high-performance non-linear control at the cost of a higher computational load. By using the modified \\gls{RLSTD} algorithm, it is even possible to track time-varying systems, resulting in an adaptive control method for non-linear systems.\n\n\nIn order to obtain a good trade-off between noise filtering and adaptability, it is important to design a forget factor that is capable of detecting a parameter change in the controlled system. There is extensive literature available on this subject, and it is recommended to investigate which approach is the most effective within the \\gls{SDP} framework.\n\nThe remaining unsolved issue for multivariate simplex B-splines is the search for the optimal triangulation. While the static triangulation assumed here performs adequately, as seen in the experiments, much can be gained in terms of computational complexity and performance by optimizing the triangulation. The advantage of finding the optimal triangulation grows exponentially with the dimension, as it is connected to the ``curse of dimensionality''.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFor various classical groups, Bessel models of local or global representations have proven to be a useful substitute for the frequently missing Whittaker model. The uniqueness of Bessel models in the local non-archimedean case has now been established in a wide variety of cases; see \\cite{AizGouRalSch2010}, \\cite{GanGrossPrasad2012}. In this work we are only concerned with the group ${\\rm GSp}_4$ over a $p$-adic field $F$, for which uniqueness of Bessel models was proven as early as 1973; see \\cite{NovPia1973}.\n\nUnlike Whittaker models, which are essentially independent of any choices made, Bessel models depend on some arithmetic data. In the case of ${\\rm GSp}_4$, part of this data is a choice of non-degenerate symmetric $2\\times2$-matrix $S$ over the field $F$. The discriminant $\\mathbf{d}$ of this matrix determines a quadratic extension $L$ of $F$; this extension may be isomorphic to $F\\oplus F$, which will be referred to as the \\emph{split case}.\n
The second ingredient entering into the definition of a Bessel model is a character $\\Lambda$ of the multiplicative group $L^\\times$. \n\nNow let $(\\pi,V)$ be an irreducible, admissible representation of ${\\rm GSp}_4(F)$. Given $S$ and $\\Lambda$, the representation $\\pi$ may or may not have a Bessel model with respect to this data. In the case of ${\\rm GSp}_4$, it is possible to precisely say which representations have which Bessel models; see \\cite{PraTak2011}. In particular, every irreducible, admissible representation has a Bessel model for an appropriate choice of $S$ and $\\Lambda$.\n\nGiven $\\pi$, $S$ and $\\Lambda$, it is one thing to know that a Bessel model exists, but it is another to identify a good \\emph{test vector}. By definition, a test vector is a vector in the space of $\\pi$ on which the relevant Bessel functional is non-zero. Equivalently, in the actual Bessel model consisting of functions $B$ on the group with the Bessel transformation property, $B$ is a test vector if and only if $B(1)$ is non-zero. In this paper, we will identify test vectors for a class of representations that is relevant for the theory of Siegel modular forms of degree $2$. In addition, we shall give explicit formulas for the corresponding Bessel functions. See \\cite{Pitale2011}, \\cite{Saha2009} for the Steinberg case and \\cite{Sug1985} for the spherical case.\n\nMore precisely, the class of representations we consider are those that have a non-zero vector invariant under $P_1$, the Siegel ($\\Gamma_0$-type) congruence subgroup of level $\\mathfrak p$. These representations appear as local components of global automorphic representations generated by Siegel modular forms of degree $2$ with respect to the congruence subgroup $\\Gamma_0(N)$, where $N$ is a square-free positive integer. They fall naturally into thirteen classes, only four of which consist of generic representations (meaning representations that admit a Whittaker model); see our Table \\ref{Iwahoritable} below.\n\nOf the thirteen classes, six are actually spherical, meaning they have a non-zero vector invariant under the maximal compact subgroup ${\\rm GSp}_4({\\mathfrak o})$ (here, ${\\mathfrak o}$ is the ring of integers of our local field $F$). Sugano \\cite{Sug1985} has given test vectors and explicit formulas in the spherical cases. Our main focus, therefore, is on the seven classes consisting of non-spherical representations with non-zero $P_1$-fixed vectors. Of those, five classes have a one-dimensional space of $P_1$-fixed vectors, and two classes have a two-dimensional space of $P_1$-fixed vectors. The one-dimensional cases require a slightly different treatment from the two-dimensional cases. In all cases, our main tool will be two Hecke operators $T_{1,0}$ and $T_{0,1}$, coming from the double cosets\n$$\n P_1\\,{\\rm diag}(\\varpi,\\varpi,1,1)P_1\\qquad\\text{and}\\qquad P_1\\,{\\rm diag}(\\varpi^2,\\varpi,1,\\varpi)P_1.\n$$\nEvidently, these operators act on the spaces of $P_1$-invariant vectors. In Sect.\\ \\ref{heckeeigenvaluessec} we give their eigenvalues for all of our seven classes of representations. The results are contained in Table \\ref{Iheckeeigenvaluestable}. In the one-dimensional cases, trivially, the unique $P_1$-invariant vector is a common eigenvector for both $T_{1,0}$ and $T_{0,1}$. 
In the two-dimensional cases, it turns out there is a nice basis consisting of common eigenvectors for $T_{1,0}$ and $T_{0,1}$.\n\nIn Sect.\\ \\ref{T10sec} we will apply the two Hecke operators to $P_1$-invariant Bessel functions $B$ and evaluate at certain elements of ${\\rm GSp}_4(F)$. Assuming that $B$ is an eigenfunction, this leads to several formulas relating the values of $B$ at various elements of the group; see Lemma \\ref{T10actionlemma} as an example for this kind of result. The calculations are all based on a ${\\rm GL}_2$ integration formula, which we establish in Lemma \\ref{GL2integrationlemma1}.\n\nIn Sect.\\ \\ref{maintowersec} we use some of these formulas to establish a generating series for the values of $B$ at diagonal elements; see Proposition \\ref{maintowerprop}. It turns out that there is one ``initial element''\n$$\n h(0,m_0)={\\rm diag}(\\varpi^{2m_0},\\varpi^{m_0},1,\\varpi^{m_0}),\n$$\nwhere $m_0$ is the conductor of the Bessel character $\\Lambda$; see (\\ref{m0defeq}) for the precise definition. If $B(h(0,m_0))=0$, then $B$ is zero on \\emph{all} diagonal elements.\n\nA generating series like in Proposition \\ref{maintowerprop} is precisely the kind of result that will be useful in global applications. However, it still has to be established that $B(h(0,m_0))\\neq0$; this is the test vector problem, and it is not trivial since $B$ may vanish on all diagonal elements and yet be non-zero. It turns out that, in almost all cases, $B(h(0,m_0))\\neq0$ as expected; see our main results Theorem \\ref{onedimtheorem} and Theorem \\ref{twodimtheorem}.\n\nHowever, there is \\emph{one} exceptional case, occurring only for split Bessel models of representations of type IIa, and then only for a certain unramified Bessel character $\\Lambda$. In this very special case, our Bessel function $B$ is non-zero not at the identity, as expected, but at a certain other element; see Theorem \\ref{onedimtheorem} i) for the precise statement.\n\nTo handle this exceptional case, and one other split case for VIa type representations, we require an additional tool besides our Hecke operators. This additional tool are zeta integrals, which are closely related to split Bessel models. We also require part of the theory of paramodular vectors from \\cite{NF}. Roughly speaking, we take paramodular vectors and ``Siegelize'' them to obtain $P_1$-invariant vectors. Calculations with zeta integrals then establish the desired non-vanishing at specific elements. This is the topic of Sect.\\ \\ref{genericsplitsec}.\n\nThe final Sects.\\ \\ref{onedimsec} and \\ref{twodimsec} contain our main results. Theorem \\ref{onedimtheorem} exhibits test vectors in all one-dimensional cases, and Theorem \\ref{twodimtheorem} exhibits test vectors in all two-dimensional cases. As mentioned above, explicit formulas for these vectors can be found in Proposition \\ref{maintowerprop}. We mention that, evidently, explicit formulas imply uniqueness of Bessel models. Hence, as a by-product of our calculations, we reprove the uniqueness of these models for the representations under consideration.\n\nLet us remark here that the proofs of several of the lemmas in this paper are very long but not conceptually very deep. Hence, we have omitted the proofs of these results. 
We direct the reader to \\cite{Pitale-Schmidt-preprint-2012} for a longer version of this article which contains all the details of the proofs.\n\\section{Basic facts and definitions}\\label{besselsubgroupsec}\nLet $F$ be a non-archimedean local field of characteristic zero, ${\\mathfrak o}$ its ring of integers, $\\mathfrak p$ the maximal ideal of ${\\mathfrak o}$, and $\\varpi$ a generator of $\\mathfrak p$. We fix a non-trivial character $\\psi$ of $F$ such that $\\psi$ is trivial on ${\\mathfrak o}$ but non-trivial on $\\mathfrak p^{-1}$. We let $v$ be the normalized valuation map on $F$.\n\nAs in \\cite{Fu1994} we fix three elements $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ in $F$ such that $\\mathbf{d}=\\mathbf{b}^2-4\\mathbf{a}\\mathbf{c}\\neq0$. Let \\begin{equation}\\label{Sxidefeq}\n S=\\mat{\\mathbf{a}}{\\frac{\\mathbf{b}}2}{\\frac{\\mathbf{b}}2}{\\mathbf{c}},\\qquad\n \\xi=\\mat{\\frac{\\mathbf{b}}2}{\\mathbf{c}}{-\\mathbf{a}}{\\frac{-\\mathbf{b}}2}.\n\\end{equation}\nThen $F(\\xi)=F+F\\xi$ is a two-dimensional $F$-algebra. If $\\mathbf{d}$ is not a square in $F^\\times$, then $F(\\xi)$ is isomorphic to the field $L=F(\\sqrt{\\mathbf{d}})$ via the map $x+y\\xi\\mapsto x+y\\frac{\\sqrt{\\mathbf{d}}}2$. If $\\mathbf{d}$ is a square in $F^\\times$, then $F(\\xi)$ is isomorphic to $L=F\\oplus F$ via $x+y\\xi\\mapsto(x+y\\frac{\\sqrt{\\mathbf{d}}}2,x-y\\frac{\\sqrt{\\mathbf{d}}}2)$. Let $z\\mapsto\\bar z$ be the obvious involution on $L$ whose fixed point set is $F$. The determinant map on $F(\\xi)$ corresponds to the norm map on $L$, defined by $N(z)=z\\bar z$. Let\n\\begin{equation}\\label{TFdefeq}\n T(F)=\\{g\\in{\\rm GL}_2(F):\\:^tgSg=\\det(g)S\\}.\n\\end{equation}\nOne can check that $T(F)=F(\\xi)^\\times$, so that $T(F)\\cong L^\\times$ via the isomorphism $F(\\xi)\\cong L$. We define the Legendre symbol as\n\\begin{equation}\\label{legendresymboldefeq}\n \\Big(\\frac L\\mathfrak p\\Big)=\\begin{cases}\n -1&\\text{if $L\/F$ is an unramified field extension},\\\\\n 0&\\text{if $L\/F$ is a ramified field extension},\\\\\n 1&\\text{if }L=F\\oplus F.\n \\end{cases}\n\\end{equation}\nThese three cases are referred to as the \\emph{inert case}, \\emph{ramified case}, and \\emph{split case}, respectively. If $L$ is a field, then let ${\\mathfrak o}_L$ be its ring of integers and $\\mathfrak p_L$ be the maximal ideal of ${\\mathfrak o}_L$. If $L = F \\oplus F$, then let ${\\mathfrak o}_L = {\\mathfrak o} \\oplus {\\mathfrak o}$. Let $\\varpi_L$ be a uniformizer in ${\\mathfrak o}_L$ if $L$ is a field, and set $\\varpi_L = (\\varpi,1)$ if $L$ is not a field. In the field case let $v_L$ be the normalized valuation on $L$. \nExcept in Sect.\\ \\ref{genericsplitsec}, where we consider certain split cases, we will make the following \\emph{standard assumptions},\n\\begin{equation}\\label{standardassumptions}\n \\begin{minipage}{90ex}\n \\begin{itemize}\n \\item $\\mathbf{a},\\mathbf{b}\\in{\\mathfrak o}$ and $\\mathbf{c}\\in{\\mathfrak o}^\\times$.\n \\item If $\\mathbf{d}\\notin F^{\\times2}$, then $\\mathbf{d}$ is a generator of the discriminant of $L\/F$.\n \\item If $\\mathbf{d}\\in F^{\\times2}$, then $\\mathbf{d}\\in{\\mathfrak o}^\\times$.\n \\end{itemize}\n \\end{minipage}\n\\end{equation}\nUnder these assumptions, the group $T({\\mathfrak o}):=T(F)\\cap{\\rm GL}_2({\\mathfrak o})$ is isomorphic to ${\\mathfrak o}_L^\\times$ via the isomorphism $T(F)\\cong L^\\times$. 
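\n\nFor odd residue characteristic, the value of the Legendre symbol (\\ref{legendresymboldefeq}) can be read off from $\\mathbf{d}$ modulo $\\mathfrak p$. As a purely illustrative sketch, the following function does this for $F={\\mathbb Q}_p$ with $p$ an odd prime and $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ rational integers satisfying (\\ref{standardassumptions}); it uses Euler's criterion and is not part of the arguments below.\n\\begin{verbatim}\ndef legendre_symbol_L(a, b, c, p):\n    # Return (L/p) in {-1, 0, 1} for L determined by d = b^2 - 4ac.\n    # Assumes p is an odd prime and a, b, c satisfy the standard\n    # assumptions: a, b integral, c a unit, and d either a unit or a\n    # generator of the discriminant of L/F.\n    d = b * b - 4 * a * c\n    if d % p == 0:\n        return 0            # ramified case\n    # Euler's criterion: d is a square mod p iff d^((p-1)/2) = 1 mod p;\n    # a unit square lifts to a square in Z_p (Hensel), giving the split case.\n    return 1 if pow(d, (p - 1) // 2, p) == 1 else -1\n\n# Example: a = c = 1, b = 0 gives d = -4, which is a square modulo 5\n# (split case) but not modulo 7 (inert case).\nassert legendre_symbol_L(1, 0, 1, 5) == 1\nassert legendre_symbol_L(1, 0, 1, 7) == -1\n\\end{verbatim}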
\n\nUnder our assumptions (\\ref{standardassumptions}) it makes sense to consider the quadratic equation $\\mathbf{c}u^2+\\mathbf{b}u+\\mathbf{a}=0$ over the residue class field ${\\mathfrak o}\/\\mathfrak p$. The number of solutions of this equation is $\\big(\\frac L\\mathfrak p\\big)+1$. In the ramified case we will fix an element $u_0\\in{\\mathfrak o}$ and in the split case we will fix two mod $\\mathfrak p$ inequivalent elements $u_1,u_2\\in{\\mathfrak o}$ such that\n\\begin{equation}\\label{u1u2defeq}\n \\mathbf{c}u_i^2+\\mathbf{b}u_i+\\mathbf{a}\\in\\mathfrak p,\\qquad i=0,1,2.\n\\end{equation}\n\\subsubsection*{Groups}\nWe define the group ${\\rm GSp}_4$, considered as an algebraic $F$-group, using the symplectic form\n$\n J=\\mat{}{1_2}{-1_2}{}.\n$\nHence, ${\\rm GSp}_4(F)=\\{g\\in{\\rm GL}_4(F):\\:^tgJg=\\mu(g)J\\}$, where the scalar $\\mu(g)\\in F^\\times$ is called the \\emph{multiplier} of $g$. Let $Z$ be the center of ${\\rm GSp}_4$. Let $B$ denote the Borel subgroup, $P$ the Siegel parabolic subgroup, and $Q$ the Klingen parabolic subgroup of ${\\rm GSp}_4$.\nLet $K={\\rm GSp}_4({\\mathfrak o})$ be the standard maximal compact subgroup of ${\\rm GSp}_4(F)$. The parahoric subgroups corresponding to $B$, $P$ and $Q$ are the \\emph{Iwahori subgroup} $I$, the \\emph{Siegel congruence subgroup} $P_1$, and the \\emph{Klingen congruence subgroup} $P_2$, given by\n\\begin{equation}\\label{parahoricsubgroupsdefeq}\n I=K\\cap\\begin{bmatrix}{\\mathfrak o}&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\\\{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p&\\mathfrak p&\\mathfrak p&{\\mathfrak o}\\end{bmatrix},\\quad\n P_1=K\\cap\\begin{bmatrix}{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}\\\\{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\end{bmatrix},\\quad\n P_2=K\\cap\\begin{bmatrix}{\\mathfrak o}&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\\\{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}\\\\{\\mathfrak o}&\\mathfrak p&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p&\\mathfrak p&\\mathfrak p&{\\mathfrak o}\\end{bmatrix}.\n\\end{equation}\nWe will also have occasion to consider, for a non-negative integer $n$, the \\emph{paramodular group} of level $\\mathfrak p^n$, defined as\n\\begin{equation}\\label{paradefeq}\n K^{\\rm para}(\\mathfrak p^n)=\\{g\\in{\\rm GSp}_4(F):\\:g\\in\\begin{bmatrix}{\\mathfrak o}&\\mathfrak p^n&{\\mathfrak o}&{\\mathfrak o}\\\\{\\mathfrak o}&{\\mathfrak o}&{\\mathfrak o}&\\mathfrak p^{-n}\\\\{\\mathfrak o}&\\mathfrak p^n&{\\mathfrak o}&{\\mathfrak o}\\\\\\mathfrak p^n&\\mathfrak p^n&\\mathfrak p^n&{\\mathfrak o}\\end{bmatrix},\\:\\det(g)\\in{\\mathfrak o}^\\times\\}.\n\\end{equation}\nThe eight-element Weyl group $W$ of ${\\rm GSp}_4$, defined in the usual way as the normalizer modulo the centralizer of the subgroup of diagonal matrices, is generated by the images of\n\\begin{equation}\\label{s1s2defeq}\n s_1=\\begin{bmatrix}&1\\\\1\\\\&&&1\\\\&&1\\end{bmatrix}\\qquad\\text{and}\\qquad\n s_2=\\begin{bmatrix}&&1\\\\&1\\\\-1\\\\&&&1\\end{bmatrix}.\n\\end{equation}\n\\subsubsection*{Bessel models}\nLet $\\mathbf{a},\\mathbf{b},\\mathbf{c}$, the matrix $S$ and the torus $T(F)$ be as above. 
We consider $T(F)$ a subgroup of ${\\rm GSp}_4(F)$ via\n\\begin{equation}\\label{TFembeddingeq}\n T(F)\\ni g\\longmapsto\\mat{g}{}{}{\\det(g)\\,^tg^{-1}}\\in{\\rm GSp}_4(F).\n\\end{equation}\nLet $U(F)$ be the unipotent radical of the Siegel parabolic subgroup $P$\nand $R(F)=T(F)U(F)$. We call $R(F)$ the \\emph{Bessel subgroup} of ${\\rm GSp}_4(F)$ (with respect to the given data $\\mathbf{a},\\mathbf{b},\\mathbf{c}$). Let $\\theta:\\:U(F)\\rightarrow{\\mathbb C}^\\times$ be the character given by\n\\begin{equation}\\label{thetadefeq}\n \\theta(\\mat{1}{X}{}{1})=\\psi({\\rm tr}(SX)),\n\\end{equation}\nwhere $\\psi$ is our fixed character of $F$ of conductor ${\\mathfrak o}$. \nWe have $\\theta(t^{-1}ut)=\\theta(u)$ for all $u\\in U(F)$ and $t\\in T(F)$. Hence,\nif $\\Lambda$ is any character of $T(F)$, then the map\n$tu\\mapsto\\Lambda(t)\\theta(u)$ defines a character of $R(F)$. We denote\nthis character by $\\Lambda\\otimes\\theta$. Let $\\mathcal{S}(\\Lambda,\\theta)$ be the space of all locally constant functions $B:\\:{\\rm GSp}_4(F)\\rightarrow{\\mathbb C}$ with the \\emph{Bessel transformation property}\n\\begin{equation}\\label{Besseltransformationpropertyeq}\n B(rg)=(\\Lambda\\otimes\\theta)(r)B(g)\\qquad\\text{for all $r\\in R(F)$ and $g\\in {\\rm GSp}_4(F)$}.\n\\end{equation}\nOur main object of investigation is the subspace $\\mathcal{S}(\\Lambda,\\theta,P_1)$ consisting of functions that are right invariant under $P_1$. The group ${\\rm GSp}_4(F)$ acts on $\\mathcal{S}(\\Lambda,\\theta)$ by right translation. If an irreducible, admissible representation $(\\pi,V)$ of ${\\rm GSp}_4(F)$ is isomorphic to a subrepresentation of $\\mathcal{S}(\\Lambda,\\theta)$, then this realization of $\\pi$ is called a \\emph{$(\\Lambda,\\theta)$-Bessel model}. It is known by \\cite{NovPia1973}, \\cite{PraTak2011} that such a model, if it exists, is unique; we denote it by $\\mathcal{B}_{\\Lambda,\\theta}(\\pi)$. Since the Bessel subgroup contains the center, an obvious necessary condition for existence is\n$ \\Lambda(z)=\\omega_\\pi(z)$ for all $z\\in F^\\times$,\nwhere $\\omega_\\pi$ is the central character of $\\pi$.\n\\subsubsection*{Change of models}\nOf course, Bessel models can be defined with respect to any non-degenerate symmetric matrix $S$, not necessarily subject to the conditions (\\ref{standardassumptions}) we imposed on $\\mathbf{a},\\mathbf{b},\\mathbf{c}$. Since our calculations and explicit formulas will assume these conditions, we shall briefly describe how to switch to more general Bessel models. Hence, let $\\lambda$ be in $F^\\times$ and $A$ be in ${\\rm GL}_2(F)$, and let $S'=\\lambda\\,^t\\!ASA$. Replacing $S$ by $S'$ in the definitions (\\ref{TFdefeq}) and (\\ref{thetadefeq}), we obtain the group $T'(F)$ and the character $\\theta'$ of $U(F)$. There is an isomorphism $T'(F)\\rightarrow T(F)$ given by $t\\mapsto AtA^{-1}$. Let $\\Lambda'$ be the character of $T'(F)$ given by $\\Lambda'(t)=\\Lambda(AtA^{-1})$. 
For $B\\in\\mathcal{B}_{\\Lambda,\\theta}(\\pi)$, let\n$ B'(g)=B(\\mat{A}{}{}{\\lambda^{-1}\\,^t\\!A^{-1}}g)$, for $g\\in {\\rm GSp}_4(F)$.\nIt is easily verified that $B'$ has the $(\\Lambda',\\theta')$-Bessel transformation property, and that the map $B\\mapsto B'$ provides an isomorphism $\\mathcal{B}_{\\Lambda,\\theta}(\\pi)\\cong\\mathcal{B}_{\\Lambda',\\theta'}(\\pi)$.\n\nLet $S=\\mat{\\mathbf{a}}{\\frac{\\mathbf{b}}2}{\\frac{\\mathbf{b}}2}{\\mathbf{c}}$ be the usual matrix such that $\\mathbf{d}=\\mathbf{b}^2-4\\mathbf{a}\\mathbf{c}$ is a square in $F^\\times$, and let\n\\begin{equation}\\label{splitstandardSeq}\n S'=\\mat{}{1\/2}{1\/2}{}.\n\\end{equation}\nThen we can take $\\lambda = 1$ and $ A=\\frac1{\\sqrt{\\mathbf{d}}}\\mat{1}{-2\\mathbf{c}}{-\\frac1{2\\mathbf{c}}(\\mathbf{b}-\\sqrt{\\mathbf{d}})}{\\;\\mathbf{b}+\\sqrt{\\mathbf{d}}}$ above to get $S'=\\,^t\\!ASA$. The torus $T'(F)$ is explicitly given by \n\\begin{equation}\\label{splitstandardTFeq}\n T'(F)=\\{\\mat{a}{}{}{d}:\\:a,d\\in F^\\times\\}.\n\\end{equation}\nIf $B\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$ and $B$ is a $T_{1,0}$ eigenfunction (see (\\ref{T10defeq})) with non-zero eigenvalue then an explicit computation shows that\n\\begin{equation}\\label{BBpat1eq}\n B'(1)\\neq0\\qquad\\Longleftrightarrow\\qquad B(1)\\neq0.\n\\end{equation}\nSee \\cite{Pitale-Schmidt-preprint-2012} for details.\n\\subsubsection*{The Iwahori-spherical representations of ${\\rm GSp}_4(F)$ and their Bessel models}\nTable \\ref{Iwahoritable} below is a reproduction of Table A.15 of \\cite{NF}. It lists all the irreducible, admissible representations of ${\\rm GSp}_4(F)$ that have a non-zero fixed vector under the Iwahori subgroup $I$, in a notation borrowed from \\cite{SallyTadic1993}. \nIn Table \\ref{Iwahoritable}, all characters are assumed to be unramified. \nFurther, Table \\ref{Iwahoritable} lists the dimensions of the spaces of fixed vectors under the various parahoric subgroup; every parahoric subgroup is conjugate to exactly one of $K$, $P_{02}$, $P_2$, $P_1$ or $I$. In this paper we are interested in the representations that have a non-zero $P_1$-invariant vector. Except for the one-dimensional representations, these are the following:\n\\begin{itemize}\n \\item I, IIb, IIIb, IVd, Vd and VId. These are the \\emph{spherical} representations, meaning they have a $K$-invariant vecor.\n \\item IIa, IVc, Vb, VIa and VIb. These are non-spherical representations that have a one-dimensional space of $P_1$-fixed vectors. Note that Vc is simply a twist of Vb by the character $\\xi$, so we will not list it separately.\n \\item IIIa and IVb. These are the non-spherical representations that have a two-dimensional space of $P_1$-fixed vectors.\n\\end{itemize}\n\n\n\\begin{table}\n\\caption[Iwahori-spherical representations]{The Iwahori-spherical representations of ${\\rm GSp}_4(F)$ and the dimensions of their spaces of fixed vectors under the parahoric subgroups. Also listed are the conductor $a(\\pi)$ and the value of the $\\varepsilon$-factor at $1\/2$. 
The symbol $\\nu$ stands for the absolute value on $F^\\times$, normalized such that $\\nu(\\varpi)=q^{-1}$, and $\\xi$ stands for the non-trivial, unramified, quadratic character of $F^\\times$.} \\label{Iwahoritable}\n$$\\renewcommand{\\arraystretch}{1.3}\n \\begin{array}{cccccccccc}\n \\midrule\n &&\\pi&a(\\pi)&\\varepsilon(1\/2,\\pi)\n &\\begin{minipage}{5.6ex}\\begin{center}$K$\\end{center}\\end{minipage}\n &\\begin{minipage}{5.6ex}\\begin{center}$P_{02}$\\end{center}\\end{minipage}\n &\\begin{minipage}{5.6ex}\\begin{center}$P_2$\\end{center}\\end{minipage}\n &\\begin{minipage}{5.6ex}\\begin{center}$P_1$\\end{center}\\end{minipage}\n &\\begin{minipage}{5.6ex}\\begin{center}$I$\\end{center}\\end{minipage}\\\\\n \\toprule\n {\\rm I}&&\\chi_1\\times\\chi_2\\rtimes\\sigma\\quad\n \\mbox{(irreducible)}&0&1&1&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array} &4&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 4 \\\\ {\\scriptscriptstyle ++}\\\\ {\\scriptscriptstyle --} \\end{array} \n &\\!\\!\\!\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 8 \\\\ {\\scriptscriptstyle ++++}\\\\ {\\scriptscriptstyle ----}\\end{array}\\!\\!\\! \\\\\\midrule\n {\\rm II}&{\\rm a}&\\chi{\\rm St}_{{\\rm GL}(2)}\\rtimes\\sigma&1&-(\\sigma\\chi)(\\varpi)&0&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -}\\end{array}&2&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}\n &\\!\\!\\!\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 4 \\\\ {\\scriptscriptstyle +---} \\end{array}\\!\\!\\!\n \\\\\n \\cmidrule{2-10}\n &{\\rm b}&\\chi{\\mathbf1}_{{\\rm GL}(2)}\\rtimes\\sigma&0&1&1\n &\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&2&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 3 \\\\ {\\scriptscriptstyle ++-} \\end{array}\n &\\!\\!\\!\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 4 \\\\ {\\scriptscriptstyle +++-} \\end{array}\\!\\!\\!\\\\\n \\midrule\n {\\rm III}&{\\rm a}&\\chi\\rtimes\\sigma{\\rm St}_{{\\rm GSp}(2)}&2&1\n &0&0&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}\n &\\!\\!\\!\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 4 \\\\ {\\scriptscriptstyle ++--} \\end{array}\\!\\!\\!\\\\\n \\cmidrule{2-10}\n &{\\rm b}&\\chi\\rtimes\\sigma{\\mathbf1}_{{\\rm GSp}(2)}\n &0&1&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}&3&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}\n &\\!\\!\\!\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 4 \\\\ {\\scriptscriptstyle ++--} \\end{array}\\!\\!\\!\\\\\n \\midrule\n {\\rm IV}&{\\rm a}&\\sigma{\\rm St}_{{\\rm GSp}(4)}&3&-\\sigma(\\varpi)&0&0&0&0&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm b}&L(\\nu^2,\\nu^{-1}\\sigma{\\rm St}_{{\\rm GSp}(2)})&2&1&0&0&1&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 3 \\\\ {\\scriptscriptstyle ++-} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm c}&L(\\nu^{3\/2}{\\rm St}_{{\\rm GL}(2)},\\nu^{-3\/2}\\sigma)&1&-\\sigma(\\varpi)&0&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} 
\\end{array}&2&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 3 \\\\ {\\scriptscriptstyle +--} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm d}&\\sigma{\\mathbf1}_{{\\rm GSp}(4)}&0&1&1&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}\\\\\n \\midrule\n {\\rm V}&{\\rm a}&\\delta([\\xi,\\nu\\xi],\\nu^{-1\/2}\\sigma)&2&-1\n &0&0&1&0&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm b}&L(\\nu^{1\/2}\\xi{\\rm St}_{{\\rm GL}(2)},\\nu^{-1\/2}\\sigma)\n &1&\\sigma(\\varpi)&0&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle ++} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm c}&\\!\\!L(\\nu^{1\/2}\\xi{\\rm St}_{{\\rm GL}(2)},\\xi\\nu^{-1\/2}\\sigma)\\!\\!\n &1&-\\sigma(\\varpi)&0&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle --} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm d}&L(\\nu\\xi,\\xi\\rtimes\\nu^{-1\/2}\\sigma)&0&1\n &1&0&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}\\\\\n \\midrule\n {\\rm VI}&{\\rm a}&\\tau(S,\\nu^{-1\/2}\\sigma)&2&1&0&0&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 3 \\\\ {\\scriptscriptstyle +--} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm b}&\\tau(T,\\nu^{-1\/2}\\sigma)&2&1&0&0&0&\n \\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}\n &\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm c}&L(\\nu^{1\/2}{\\rm St}_{{\\rm GL}(2)},\\nu^{-1\/2}\\sigma)\n &1&-\\sigma(\\varpi)&0&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}&1&0&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle -} \\end{array}\\\\\n \\cmidrule{2-10}\n &{\\rm d}&L(\\nu,1_{F^\\times}\\rtimes\\nu^{-1\/2}\\sigma)\n &0&1&1&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 1 \\\\ {\\scriptscriptstyle +} \\end{array}&2&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 2 \\\\ {\\scriptscriptstyle +-} \\end{array}&\\renewcommand{\\arraystretch}{0.5}\\begin{array}[t]{c} 3 \\\\ {\\scriptscriptstyle ++-} \\end{array}\\\\\n \\toprule\n \\end{array}\n$$\n\\end{table}\n\nLet $\\pi$ be any irreducible, admissible representation of ${\\rm GSp}_4(F)$. Let $S$ be a matrix as in (\\ref{Sxidefeq}), and $\\theta$ the associated character of $U(F)$; see (\\ref{thetadefeq}). 
Given this data, one may ask for which characters $\\Lambda$ of the torus $T(F)$, defined in (\\ref{TFdefeq}) and embedded into ${\\rm GSp}_4(F)$ via (\\ref{TFembeddingeq}), the representation $\\pi$ admits a $(\\Lambda,\\theta)$-Bessel model. This question can be answered, based on results from \\cite{PraTak2011}. We have listed the data relevant for our current purposes in Table \\ref{besselmodelstable}. Note that, in this table, the characters $\\chi,\\chi_1,\\chi_2,\\sigma,\\xi$ are not necessarily assumed to be unramified, i.e., these results hold for all Borel-induced representations. For the split case $L=F\\oplus F$ in Table \\ref{besselmodelstable}, it is assumed that the matrix $S$ is the one in (\\ref{splitstandardSeq}). The resulting torus $T(F)$ is given in (\\ref{splitstandardTFeq}); embedded into ${\\rm GSp}_4(F)$ it consists of all matrices ${\\rm diag}(a,b,b,a)$ with $a,b$ in $F^\\times$.\n\n\n\\begin{table}\n\\caption[Bessel models of Borel-induced representations]{The Bessel models of the irreducible, admissible representations of ${\\rm GSp}_4(F)$ that can be obtained via induction from the Borel subgroup. The symbol $N$ stands for the norm map from $L^\\times\\cong T(F)$ to $F^\\times$.} \\label{besselmodelstable}\n$$\\renewcommand{\\arraystretch}{1.1}\n \\begin{array}{ccccc}\n \\toprule\n &&\\text{representation}&\n \\multicolumn{2}{c}{(\\Lambda,\\theta)-\\text{Bessel model existence condition}}\n \\\\\n \\cmidrule{4-5}\n &&&L=F\\oplus F&L\/F\\text{ a field extension}\\\\\n \\toprule\n {\\rm I}&& \\chi_1 \\times \\chi_2 \\rtimes \\sigma\\ \n \\mathrm{(irreducible)}&\\text{all }\\Lambda&\\text{all }\\Lambda\\\\\n \\midrule\n \\mbox{II}&\\mbox{a}&\\chi {\\rm St}_{{\\rm GL}(2)} \\rtimes \\sigma&\n \\text{all }\\Lambda&\\Lambda\\neq(\\chi\\sigma)\\circ N \\\\\n \\cmidrule{2-5}\n &\\mbox{b}&\\chi {\\mathbf1}_{{\\rm GL}(2)} \\rtimes \\sigma\n &\\Lambda=(\\chi\\sigma)\\circ N &\\Lambda=(\\chi\\sigma)\\circ N \\\\\n \\midrule\n \\mbox{III}&\\mbox{a}&\\chi \\rtimes \\sigma {\\rm St}_{{\\rm GSp}(2)}&\\text{all }\\Lambda\n &\\text{all }\\Lambda\\\\\\cmidrule{2-5}\n &\\mbox{b}&\\chi \\rtimes \\sigma {\\mathbf1}_{{\\rm GSp}(2)}\n &\\Lambda({\\rm diag}(a,b,b,a))=&\\text{---}\\\\\n &&&\\chi(a)\\sigma(ab)\\text{ or }\\chi(b)\\sigma(ab)&\\\\\n \\midrule\n \\mbox{IV}&\\mbox{a}&\\sigma{\\rm St}_{{\\rm GSp}(4)}&\\text{all }\\Lambda&\n \\Lambda\\neq\\sigma\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{b}&L(\\nu^2,\\nu^{-1}\\sigma{\\rm St}_{{\\rm GSp}(2)})&\\Lambda=\\sigma\\circ N \n &\\Lambda=\\sigma\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{c}&L(\\nu^{3\/2}{\\rm St}_{{\\rm GL}(2)},\\nu^{-3\/2}\\sigma)\n &\\Lambda({\\rm diag}(a,b,b,a))=&\\text{---}\\\\\n &&&\\nu(ab^{-1})\\sigma(ab)\\text{ or }\\nu(a^{-1}b)\\sigma(ab)&\\\\\\cmidrule{2-5}\n &\\mbox{d}&\\sigma{\\mathbf1}_{{\\rm GSp}(4)}&\\text{---}&\\text{---}\\\\\n \\midrule\n \\mbox{V}&\\mbox{a}&\\delta([\\xi,\\nu \\xi], \\nu^{-1\/2} \\sigma)&\\text{all }\\Lambda\n &\\Lambda\\neq\\sigma\\circ N ,\\:\\Lambda\\neq(\\xi\\sigma)\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{b}&L(\\nu^{1\/2}\\xi{\\rm St}_{{\\rm GL}(2)},\\nu^{-1\/2} \\sigma)&\\Lambda=\\sigma\\circ N \n &\\Lambda=\\sigma\\circ N ,\\:\\Lambda\\neq(\\xi\\sigma)\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{c}&L(\\nu^{1\/2}\\xi{\\rm St}_{{\\rm GL}(2)},\\xi\\nu^{-1\/2} \\sigma)\n &\\Lambda=(\\xi\\sigma)\\circ N \n &\\Lambda\\neq\\sigma\\circ N ,\\:\\Lambda=(\\xi\\sigma)\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{d}&L(\\nu\\xi,\\xi\\rtimes\\nu^{-1\/2}\\sigma)&\\text{---}\n &\\Lambda=\\sigma\\circ N 
,\\:\\Lambda=(\\xi\\sigma)\\circ N \\\\\n \\midrule\n \\mbox{VI}&\\mbox{a}&\\tau(S, \\nu^{-1\/2}\\sigma)&\\text{all }\\Lambda\n &\\Lambda\\neq\\sigma\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{b}&\\tau(T, \\nu^{-1\/2}\\sigma)&\\text{---}&\\Lambda=\\sigma\\circ N \\\\\\cmidrule{2-5}\n &\\mbox{c}&L(\\nu^{1\/2}{\\rm St}_{{\\rm GL}(2)},\\nu^{-1\/2}\\sigma)\n &\\Lambda=\\sigma\\circ N&\\text{---}\\\\\\cmidrule{2-5}\n &\\mbox{d}&L(\\nu,1_{F^\\times}\\rtimes\\nu^{-1\/2}\\sigma)&\n \\Lambda=\\sigma\\circ N&\\text{---}\\\\\n \\toprule\n \\end{array}\n$$\n\\end{table}\n\\section{Hecke eigenvalues}\\label{heckeeigenvaluessec}\nFor integers $l$ and $m$, let $h(l,m)$ be as in (\\ref{hlmdefeq}). Let $(\\pi,V)$ be a smooth representation of ${\\rm GSp}_4(F)$ for which $Z({\\mathfrak o})$ acts trivially. We define two endomorphisms of $V$ by the formulas\n\\begin{align}\n \\label{T10defeq}T_{1,0}v&=\\frac1{{\\rm vol}( P_1)}\\int\\limits_{ P_1h(1,0) P_1}\\pi(g)v\\,dg,\\\\\n \\label{T01defeq}T_{0,1}v&=\\frac1{{\\rm vol}( P_1)}\\int\\limits_{ P_1h(0,1) P_1}\\pi(g)v\\,dg\n\\end{align}\nand the \\emph{Atkin-Lehner element}\n\\begin{equation}\\label{ALdefeq}\n \\eta=\\begin{bmatrix}&&&-1\\\\&&1\\\\&\\varpi\\\\-\\varpi\\end{bmatrix}=s_2s_1s_2\\begin{bmatrix}\\varpi\\\\&-\\varpi\\\\&&1\\\\&&&-1\\end{bmatrix}.\n\\end{equation}\nEvidently, $T_{1,0}$ and $T_{0,1}$ induce endomorphisms of the subspace of $V$ consisting of $P_1$-invariant vectors. Table \\ref{Iheckeeigenvaluestable} gives the eigenvalues of $T_{1,0}$ and $T_{0,1}$ on the space of $P_1$-invariant vectors for the representations in Table \\ref{Iwahoritable} which have non-zero $P_1$-invariant vectors, but no non-zero $K$-invariant vectors. \nWe will not give all the details of the eigenvalue calculation, since the method is similar to the one employed in \\cite{schm2005}. See \\cite{Pitale-Schmidt-preprint-2012} for details. Table \\ref{satakenotationtable} explains the notation. \n\n\n\\begin{table}[!htb]\n\\caption[Eigenvalues of some Iwahori-Hecke operators]{Eigenvalues of the operators $T_{1,0}$, $T_{0,1}$ and $\\eta$ on spaces of $P_1$-invariant vectors in irreducible representations. } \\label{Iheckeeigenvaluestable}\n $$\n \\begin{array}{cccccc}\n \\text{type}&\\dim&T_{1,0}&T_{0,1}&\\eta\\\\\n \\toprule\n {\\rm IIa}&1&\\alpha\\gamma q&\\alpha^2\\gamma^2(\\alpha+\\alpha^{-1})q^{3\/2}&-\\alpha\\gamma\\\\\n {\\rm IIIa}&2&\\alpha\\gamma q,\\:\\gamma q&\\alpha\\gamma^2(\\alpha q+1)q,\\:\\alpha\\gamma^2(\\alpha^{-1}q+1)q&\\pm\\sqrt{\\alpha}\\gamma\\\\\n {\\rm IVb}&2&\\gamma,\\:\\gamma q^2&\\gamma^2(q+1),\\:\\gamma^2q(q^3+1)&\\pm\\gamma\\\\\n {\\rm IVc}&1&\\gamma q&\\gamma^2(q^3+1)&-\\gamma\\\\\n {\\rm Vb}&1&-\\gamma q&-\\gamma^2q(q+1)&\\gamma\\\\\n {\\rm VIa}&1&\\gamma q&\\gamma^2q(q+1)&-\\gamma\\\\\n {\\rm VIb}&1&\\gamma q&\\gamma^2q(q+1)&\\gamma\n \\end{array}\n $$\n\\end{table}\nWe make one more comment on the eigenvalues in Table \\ref{Iheckeeigenvaluestable}, concerning the representations where the space of $P_1$ invariant vectors is two-dimensional. One can verify that the operators $T_{1,0}$ and $T_{0,1}$ commute, so that there exists a basis of common eigenvectors. 
The ordering for types IIIa and IVb in Table \\ref{Iheckeeigenvaluestable} is such that \\emph{the first eigenvalue for $T_{1,0}$ corresponds to the first eigenvalue for $T_{0,1}$, and the second eigenvalue for $T_{1,0}$ corresponds to the second eigenvalue for $T_{0,1}$}.\n\\begin{table}[!htb]\n\\caption[Notation for Sataka parameters]{Notation for Satake parameters for those Iwahori-spherical representations listed in Table \\ref{Iheckeeigenvaluestable}. The ``restriction'' column reflects the fact that certain charcaters $\\chi$ are not allowed in type IIa and IIIa representations. The last column shows the central character of the representation.} \\label{satakenotationtable}\n $$\n \\begin{array}{ccccccc}\n \\text{type}&\\text{representation}&\\text{parameters}&\\text{restrictions}&\\text{c.c.}\\\\\n \\toprule\n {\\rm IIa}&\\chi{\\rm St}_{{\\rm GL}(2)}\\rtimes\\sigma&\\alpha=\\chi(\\varpi),\\;\\gamma=\\sigma(\\varpi)&\\alpha^2\\neq q^{\\pm1},\\:\\alpha\\neq q^{\\pm3\/2}&\\chi^2\\sigma^2\\\\\n {\\rm IIIa}&\\chi\\rtimes\\sigma{\\rm St}_{{\\rm GSp}(2)}&\\alpha=\\chi(\\varpi),\\;\\gamma=\\sigma(\\varpi)&\\alpha\\neq1,\\:\\alpha\\neq q^{\\pm2}&\\chi\\sigma^2\\\\\n {\\rm IVb}&L(\\nu^2,\\nu^{-1}\\sigma{\\rm St}_{{\\rm GSp}(2)})&\\gamma=\\sigma(\\varpi)&&\\sigma^2\\\\\n {\\rm IVc}&L(\\nu^{3\/2}{\\rm St}_{{\\rm GL}(2)},\\nu^{-3\/2}\\sigma)&\\gamma=\\sigma(\\varpi)&&\\sigma^2\\\\\n {\\rm Vb}&L(\\nu^{1\/2}\\xi{\\rm St}_{{\\rm GL}(2)},\\nu^{-1\/2}\\sigma)&\\gamma=\\sigma(\\varpi)&&\\sigma^2\\\\\n {\\rm VIa}&\\tau(S,\\nu^{-1\/2}\\sigma)&\\gamma=\\sigma(\\varpi)&&\\sigma^2\\\\\n {\\rm VIb}&\\tau(T,\\nu^{-1\/2}\\sigma)&\\gamma=\\sigma(\\varpi)&&\\sigma^2\n \\end{array}\n $$\n\\end{table}\n\\section{Double cosets, an integration formula and automatic vanishing}\nLet $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ be as in Sect.\\ \\ref{besselsubgroupsec}, subject to the conditions (\\ref{standardassumptions}). Let $T(F)$ be the subgroup of ${\\rm GL}_2(F)$ defined in (\\ref{TFdefeq}). By \\cite{Sug1985}, Lemma 2-4, there is a disjoint decomposition\n\\begin{equation}\\label{toricdecompositioneq}\n {\\rm GL}_2(F)=\\bigsqcup_{m=0}^\\infty T(F)\\mat{\\varpi^m}{}{}{1}{\\rm GL}_2({\\mathfrak o})\n\\end{equation}\n(here it is important that our assumptions (\\ref{standardassumptions}) on $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ are in force; for example, (\\ref{toricdecompositioneq}) would obviously be wrong for $\\mathbf{a}=\\mathbf{c}=0$). The following two lemmas provide further decompositions for the group ${\\rm GL}_2({\\mathfrak o})$. 
We will use the notations\n\\begin{equation}\\label{Gamma0pdefeq}\n \\Gamma_0(\\mathfrak p)={\\rm GL}_2({\\mathfrak o})\\cap\\mat{{\\mathfrak o}}{{\\mathfrak o}}{\\mathfrak p}{{\\mathfrak o}}\\qquad\\text{and}\\qquad\n \\Gamma^0(\\mathfrak p)={\\rm GL}_2({\\mathfrak o})\\cap\\mat{{\\mathfrak o}}{\\mathfrak p}{{\\mathfrak o}}{{\\mathfrak o}}.\n\\end{equation}\n\\begin{lemma}\\label{TOGL2OGammaplemma}\n \\begin{enumerate}\n \\item In the inert case $\\big(\\frac L\\mathfrak p\\big)=-1$,\n \\begin{equation}\\label{TOGL2OGammaplemmaeq1}\n {\\rm GL}_2({\\mathfrak o})=T({\\mathfrak o})\\mat{}{1}{1}{}\\Gamma_0(\\mathfrak p)=T({\\mathfrak o})\\Gamma^0(\\mathfrak p).\n \\end{equation}\n \\item In the ramified case $\\big(\\frac L\\mathfrak p\\big)=0$,\n \\begin{equation}\\label{TOGL2OGammaplemmaeq2}\n {\\rm GL}_2({\\mathfrak o})=T({\\mathfrak o})\\mat{1}{}{u_0}{1}\\Gamma_0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\mat{}{1}{1}{}\\Gamma_0(\\mathfrak p)=T({\\mathfrak o})\\mat{1}{}{u_0}{1}\\mat{}{1}{1}{}\\Gamma^0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\Gamma^0(\\mathfrak p),\n \\end{equation}\n with $u_0$ as in (\\ref{u1u2defeq}). We have $T({\\mathfrak o})\\mat{1}{}{u_0}{1}\\Gamma_0(\\mathfrak p)=\\mat{1}{}{u_0}{1}\\Gamma_0(\\mathfrak p)$.\n \\item In the split case $\\big(\\frac L\\mathfrak p\\big)=1$,\n \\begin{align}\\label{TOGL2OGammaplemmaeq3}\n {\\rm GL}_2({\\mathfrak o})&=T({\\mathfrak o})\\mat{1}{}{u_1}{1}\\Gamma_0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\mat{1}{}{u_2}{1}\\Gamma_0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\mat{}{1}{1}{}\\Gamma_0(\\mathfrak p)\\nonumber\\\\\n &=T({\\mathfrak o})\\mat{1}{}{u_1}{1}\\mat{}{1}{1}{}\\Gamma^0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\mat{1}{}{u_2}{1}\\mat{}{1}{1}{}\\Gamma^0(\\mathfrak p)\\;\\sqcup\\;T({\\mathfrak o})\\Gamma^0(\\mathfrak p).\n \\end{align}\n with $u_1,u_2$ as in (\\ref{u1u2defeq}). We have $T({\\mathfrak o})\\mat{1}{}{u_i}{1}\\Gamma_0(\\mathfrak p)=\\mat{1}{}{u_i}{1}\\Gamma_0(\\mathfrak p)$ for $i=1,2$.\n \\item Let $m$ be a positive integer, and\n \\begin{equation}\\label{TOmdefeq}\n T({\\mathfrak o})_m:=\\mat{\\varpi^{-m}}{}{}{1}T({\\mathfrak o})\\mat{\\varpi^m}{}{}{1}\\cap{\\rm GL}_2({\\mathfrak o}).\n \\end{equation}\n Then\n \\begin{equation}\\label{TOGL2OGammapmlemmaeq1}\n {\\rm GL}_2({\\mathfrak o})=T({\\mathfrak o})_m\\Gamma_0(\\mathfrak p)\\sqcup T({\\mathfrak o})_m\\mat{}{1}{1}{}\\Gamma_0(\\mathfrak p)=T({\\mathfrak o})_m\\mat{}{1}{1}{}\\Gamma^0(\\mathfrak p)\\sqcup T({\\mathfrak o})_m\\Gamma^0(\\mathfrak p).\n \\end{equation}\n We have $T({\\mathfrak o})_m\\Gamma_0(\\mathfrak p)=\\Gamma_0(\\mathfrak p)$.\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe lemma follows by using standard representatives for ${\\rm GL}_2({\\mathfrak o})\/\\Gamma_0(\\mathfrak p)$ and the definition of $T({\\mathfrak o})$. See \\cite{Pitale-Schmidt-preprint-2012}.\n\\end{proof}\n\nWe turn to double coset decompositions for ${\\rm GSp}_4$. For $l, m \\in {\\mathbb Z}$, let\n\\begin{equation}\\label{hlmdefeq}\n h(l,m)=\\begin{bmatrix}\\varpi^{l+2m}\\\\&\\varpi^{l+m}\\\\&&1\\\\&&&\\varpi^m\\end{bmatrix}.\n\\end{equation}\nUsing (\\ref{toricdecompositioneq}) and the Iwasawa decomposition, it is easy to see that\n\\begin{equation}\\label{SuganoHFdecompositioneq}\n {\\rm GSp}_4(F)=\\bigsqcup_{\\substack{l,m\\in{\\mathbb Z}\\\\m\\geq0}}R(F)h(l,m)K;\n\\end{equation}\ncf.\\ (3.4.2) of \\cite{Fu1994}. 
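\n\nAs a small sanity check on (\\ref{hlmdefeq}) and (\\ref{SuganoHFdecompositioneq}), note that $h(l,m)$ indeed lies in ${\\rm GSp}_4(F)$: since $h(l,m)$ is diagonal, $\\,^th(l,m)\\,J\\,h(l,m)=\\mat{}{D}{-D}{}$ with $D={\\rm diag}(\\varpi^{l+2m}\\cdot1,\\,\\varpi^{l+m}\\cdot\\varpi^m)=\\varpi^{l+2m}1_2$, so that\n$$\n \\mu(h(l,m))=\\varpi^{l+2m}.\n$$\n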
\nUsing Lemma \\ref{TOGL2OGammaplemma}, the right coset representatives for $K\/P_1$ from \\cite{NF}, Lemma 5.1.1,\nand employing the useful identity\n\\begin{equation}\\label{usefulidentityeq}\n \\mat{1}{}{z}{1} = \\mat{1}{z^{-1}}{}{1} \\mat{-z^{-1}}{}{}{-z} \\mat{}{1}{-1}{} \\mat{1}{z^{-1}}{}{1},\n\\end{equation}\nwhich holds for $z\\in F^\\times$, we get the following proposition (see \\cite{Pitale-Schmidt-preprint-2012}).\n\\begin{proposition}\\label{HFRFSipdecomplemma}\n \\begin{enumerate}\n\\item\nFor $m>0$,\n \\begin{align}\\label{HFRFSipdecomplemmaeq2}\n R(F)h(l,m)K&=R(F)h(l,m) P_1\\;\\sqcup\\;R(F)h(l,m)s_2 P_1\\nonumber\\\\\n &\\qquad\\sqcup\\;R(F)h(l,m)s_1s_2 P_1\\;\\sqcup\\;R(F)h(l,m)s_2s_1s_2\\, P_1,\n \\end{align}\n \\item In the inert case $\\big(\\frac L\\mathfrak p\\big)=-1$,\n \\begin{align*}\n R(F)h(l,0)K&=R(F)h(l,0) P_1\\;\\sqcup\\;R(F)h(l,0)s_2 P_1\\;\\sqcup\\;R(F)h(l,0)s_2s_1s_2 P_1.\n \\end{align*}\n \\item In the ramified case $\\big(\\frac L\\mathfrak p\\big)=0$,\n \\begin{align*}\n R(F)h(l,0)K&=R(F)h(l,0) P_1\\;\\sqcup\\;R(F)h(l,0)s_2 P_1\\;\\sqcup\\;R(F)h(l,0)s_2s_1s_2 P_1\\\\\n &\\qquad\\sqcup\\;R(F)h(l,0)\\hat u_0s_1s_2\\, P_1,\n \\end{align*}\n where\n \\begin{equation}\\label{hatu0defeq}\n \\hat u_0=\\begin{bmatrix}1\\\\u_0&1\\\\&&1&-u_0\\\\&&&1\\end{bmatrix}.\n \\end{equation}\n \n \\item In the split case $\\big(\\frac L\\mathfrak p\\big)=1$,\n \\begin{align*}\n R(F)h(l,0)K&=R(F)h(l,0) P_1\\;\\sqcup\\;R(F)h(l,0)s_2 P_1\\;\\sqcup\\;R(F)h(l,0)s_2s_1s_2 P_1\\\\\n &\\qquad\\sqcup\\;R(F)h(l,0)\\hat u_1s_1s_2\\, P_1\\;\\sqcup\\;R(F)h(l,0)\\hat u_2s_1s_2\\, P_1,\n \\end{align*}\n where, for $i=1,2$,\n \\begin{equation}\\label{hatu1u2defeq}\n \\hat u_i=\\begin{bmatrix}1\\\\u_i&1\\\\&&1&-u_i\\\\&&&1\\end{bmatrix}.\n \\end{equation}\n \n \\end{enumerate}\n\\end{proposition}\n\\subsection*{An integration formula}\nThe integration formula on ${\\rm GL}_2({\\mathfrak o})$ presented in the following lemma will be used in many of our Hecke operator calculations. Let $\\Lambda$ be a character of $L^\\times\\cong T(F)$. Let\n\\begin{equation}\\label{m0defeq}\n m_0 = {\\rm min}\\big\\{m \\geq 0\\::\\:\\Lambda\\big|_{(1+\\mathfrak p^m{\\mathfrak o}_L)\\cap {\\mathfrak o}_L^\\times}=1\\big\\}.\n\\end{equation}\n\\begin{lemma}\\label{GL2integrationlemma1}\n Let $\\Lambda$ be a character of $L^\\times\\cong T(F)$ which is trivial on ${\\mathfrak o}^\\times$, and let $m_0$ be as in (\\ref{m0defeq}). Let $m$ be a non-negative integer. Let $f:\\:{\\rm GL}_2({\\mathfrak o})\\rightarrow{\\mathbb C}$ be a function with the property\n $$\n f(tg)=\\Lambda(\\mat{\\varpi^m}{}{}{1}t\\mat{\\varpi^{-m}}{}{}{1})f(g)\\qquad\\text{for all $t\\in T({\\mathfrak o})_m$}.\n $$\n Let the Haar measure on ${\\rm GL}_2({\\mathfrak o})$ be normalized such that the total volume is $1$.\n \\begin{enumerate}\n \\item Assume that $f$ is right invariant under $\\Gamma_0(\\mathfrak p)$. Then\n $$\n \\int\\limits_{{\\rm GL}_2({\\mathfrak o})}f(g)\\,dg=\\begin{cases}\n 0&\\text{if }m0$. Then\n $$\n B(h(l,0)\\hat u_is_1s_2)=0\n $$\n with $i=0$ in the ramified case and $i\\in\\{1,2\\}$ in the split case.\n \\item In the ramified case,\n $$\n B(h(l,0)\\hat u_0s_1s_2)=0\\qquad\\text{for }l<-1.\n $$\n \\item In the split case,\n $$\n B(h(l,0)\\hat u_is_1s_2)=0\\qquad\\text{for }l<0.\n $$\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThese are standard arguments using the $P_1$-invariance of $B$ and the definition of $m_0$. 
\n\\end{proof}\n\n\n\n\n\\section{The Hecke operators}\\label{T10sec}\nIn the following lemmas we will give the value of $T_{1,0}B$ and $T_{0,1}B$, where $B\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$, on various double coset representatives from Proposition \\ref{HFRFSipdecomplemma}. The main tools are the integration formulas in Lemma \\ref{GL2integrationlemma1} and the useful matrix identity (\\ref{usefulidentityeq}). The proofs are long and tedious and we will not present them here. See \\cite{Pitale-Schmidt-preprint-2012} for details.\n\n\\begin{lemma}\\label{T10actionlemma}\n Let $B\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$, and let $l$ and $m$ be non-negative integers. Let $h(l,m)$ be as in (\\ref{hlmdefeq}). Then the following formulas hold.\n \\begin{enumerate}\n \\item\n $$\n (T_{1,0}B)(h(l,m))=q^3B(h(l+1,m)).\n $$\n \\item\n \\begin{align*}\n &(T_{1,0}B)(h(l,m)s_2)=q^2(q-1)B(h(l+1,m))\\\\[1ex]\n &+\\begin{cases}\n -qB(h(l-1,m+1)s_1s_2)&\\text{if }m0$, we have\n \\begin{align}\\label{T01s2conslemmaeq1}\n &\\lambda\\Lambda(\\varpi) B(h(l,m)s_2)-\\mu qB(h(l+1,m)s_2)+\\lambda q^3B(h(l+2,m)s_2)\\nonumber\\\\\n &\\qquad=q^5(q-1)B(h(l+3,m))-q^3(q-1)B(h(l+1,m+1))\n \\end{align}\n \n \\item For $m\\geq{\\rm max}(m_0,1)$, we have\n \\begin{align}\\label{T01s2conseq4}\n &\\mu B(h(0,m)s_2)-q^2\\lambda B(h(1,m)s_2)-\\Lambda(\\varpi)^2B(h(0,m-1)s_2)\\nonumber\\\\\n &\\qquad=q^2(q-1)B(h(0,m+1))-q^4(q-1)B(h(2,m))\n\\end{align}\n\n\\item For $m_0 > 0$, we have\n\\begin{align}\\label{T01s2conseq4b}\n &\\mu B(h(0,m_0)s_2)-q^2\\lambda B(h(1,m_0)s_2)-\\Lambda(\\varpi)^2B(h(0,m_0-1)s_2)\\nonumber\\\\\n &\\qquad=q^{-2}(q-1)(\\mu-\\lambda^2)B(h(0,m_0))\n\\end{align}\n\n\\item For $l \\geq 0, m_0 > 0$, we have\n\\begin{align}\\label{T01s2conslemmaeq1b}\n &\\lambda\\Lambda(\\varpi) B(h(l,m_0-1)s_2)-\\mu qB(h(l+1,m_0-1)s_2)+\\lambda q^3B(h(l+2,m_0-1)s_2)\\nonumber\\\\\n &\\qquad=-\\lambda(q-1)B(h(l,m_0))\n\\end{align}\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n(\\ref{T01s2conslemmaeq1}) and (\\ref{T01s2conseq4}) follow from Lemma \\ref{T01actionlemma} ii) and Lemma \\ref{T10actionlemma} ii). (\\ref{T01s2conseq4b}) follows from (\\ref{T01s2conseq4}), Lemma \\ref{T10actionlemma} i) and Lemma \\ref{T01actionlemma} i). (\\ref{T01s2conslemmaeq1b}) follows from (\\ref{T01s2conslemmaeq1}), the identity $\\lambda B(h(l,m_0))=q^3B(h(l+1,m_0))$ and some automatic vanishing.\n\\end{proof}\n\n\\section{The main tower}\\label{maintowersec}\nAgain we consider the matrix $S$ and the associated character $\\theta$ of $U(F)$; see (\\ref{thetadefeq}). As usual, the assumptions (\\ref{standardassumptions}) are in force. Let $\\Lambda$ be a character of $T(F)$ and define the non-negative integer $m_0$ as in (\\ref{m0defeq}). Let $B$ be a function in the space $\\mathcal{S}(\\Lambda,\\theta,P_1)$. We refer to the values of $B$ at the elements $h(l,m)$, defined in (\\ref{hlmdefeq}), as the \\emph{main tower} (in view of Proposition \\ref{HFRFSipdecomplemma}, there is also an $s_2$-tower etc). Note that $B(h(l,m))=0$ for $l<0$ or $0\\leq m0,\\\\[1ex]\n (q+1)^{-1}\\mu&\\text{if }m_0=0\\text{ and }\\big(\\frac L\\mathfrak p\\big)=-1,\\\\[1ex]\n \\Lambda(\\varpi_L)\\lambda&\\text{if }m_0=0\\text{ and }\\big(\\frac L\\mathfrak p\\big)=0,\\\\[1ex]\n (q-1)^{-1}(q\\lambda(\\Lambda(\\varpi, 1)+\\Lambda(1,\\varpi))-\\mu)&\\text{if }m_0=0\\text{ and }\\big(\\frac L\\mathfrak p\\big)=1.\n \\end{cases}\n $$\n\\end{proposition}\n\\begin{proof}\nThe relation (\\ref{main-tower-gen-l+1tol}) is immediate from Lemma \\ref{T10actionlemma} i). 
Combining (\\ref{main-tower-gen-l+1tol}) with Lemma \\ref{T01actionlemma} i), we get, for $l\\geq0$ and $m\\geq m_0$,\n\\begin{equation}\\label{general-2-step-recursion}\n q^4B(h(l,m+2)) - \\mu B(h(l,m+1)) +\\lambda^2q^{-3}\\Lambda(\\varpi)B(h(l,m)) = 0.\n\\end{equation}\nWe multiply this by $Y^{m+2}$ and apply $\\sum_{m=m_0}^\\infty$ to both sides, arriving at the formal identity\n$$\n \\sum_{m=m_0}^\\infty B(h(l,m))Y^m=\\frac{(q^4-\\mu Y)B(h(l,m_0))+q^4YB(h(l,m_0+1))}{q^4-\\mu Y+\\lambda^2q^{-3}\\Lambda(\\varpi)Y^2}Y^{m_0}.\n$$\nSetting $m=m_0$ in Lemma \\ref{T01actionlemma} i) and using (\\ref{main-tower-gen-l+1tol}) provides a relation between $B(h(l,m_0))$ and $B(h(l,m_0+1))$. Substituting this relation, we obtain the asserted formula.\n\\end{proof}\n\\section{Generic representations and split Bessel models}\\label{genericsplitsec}\nWe recall some basic facts about generic representations of ${\\rm GSp}_4(F)$, and refer to Sect.\\ 2.6 of \\cite{NF} for details. We denote by $N$ the unipotent radical of the Borel subgroup $B$, and for $c_1,c_2$ in $F^\\times$, consider the character $\\psi_{c_1,c_2}$ of $N(F)$ given in Sect.\\,2.1 of \\cite{NF}.\nAn irreducible, admissible representation $\\pi$ of ${\\rm GSp}_4(F)$ is called \\emph{generic} if ${\\rm Hom}_{N(F)}(\\pi,\\psi_{c_1,c_2})\\neq0$. In this case there is an associated Whittaker model $\\mathcal{W}(\\pi,\\psi_{c_1,c_2})$ consisting of functions ${\\rm GSp}_4(F)\\rightarrow{\\mathbb C}$ that transform on the left according to $\\psi_{c_1,c_2}$. For $W\\in\\mathcal{W}(\\pi,\\psi_{c_1,c_2})$, there is an associated zeta integral\n\\begin{equation}\\label{ZsWdefeq}\n Z(s,W)=\\int\\limits_{F^\\times}\\int\\limits_F\n W(\\begin{bmatrix}a\\\\&a\\\\x&&1\\\\&&&1\\end{bmatrix})|a|^{s-3\/2}\\,dx\\,d^\\times a.\n\\end{equation}\nThis integral is convergent for ${\\rm Re}(s)>s_0$, where $s_0$ is independent\nof $W$; see \\cite{NF}, Proposition 2.6.3. \nMoreover, there exists an $L$-factor of the form\n$\n L(s,\\pi)=Q(q^{-s})^{-1},Q(X)\\in{\\mathbb C}[X],\\;Q(0)=1,\n$\nsuch that\n\\begin{equation}\\label{zetaLquotienteq}\n \\frac{Z(s,W)}{L(s,\\pi)}\\in{\\mathbb C}[q^{-s},q^s]\\qquad\\text{for all }\n W\\in \\mathcal{W}(\\pi,\\psi_{c_1,c_2}).\n\\end{equation}\nWe consider split Bessel models with respect to the quadratic form $S$ defined in (\\ref{splitstandardSeq}).\nLet $\\theta$ be the corresponding character of $U(F)$. The resulting group $T(F)$, defined in (\\ref{TFdefeq}), is a split torus. We think of $T(F)$ embedded into ${\\rm GSp}_4(F)$ as all matrices of the form ${\\rm diag}(a,b,b,a)$ with $a,b\\in F^\\times$. We write a character $\\Lambda$ of $T(F)$ as a function $\\Lambda(a,b)$. Consider the functional $f_s$ on the $\\psi_{c_1,c_2}$-Whittaker model of $\\pi$ given by\n\\begin{equation}\\label{fs-defn}\n f_s(W)=\\frac{Z(s,\\pi(s_2)(W))}{L(s,\\pi)}.\n\\end{equation}\nBy analytic continuation and the defining property of $L(s,\\pi)$, this is a well-defined and non-zero functional on $\\pi$ for \\emph{any} value of $s$. 
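A brief justification of the last two claims (we use the characterization of $L(s,\\pi)$ from \\cite{NF}): for fixed $W$ the function $s\\mapsto f_s(W)$ lies in ${\\mathbb C}[q^{-s},q^s]$ by (\\ref{zetaLquotienteq}) applied to $\\pi(s_2)W$, which gives the continuation to all $s$; moreover, since the zeta integrals $Z(s,W)$, $W\\in\\mathcal{W}(\\pi,\\psi_{c_1,c_2})$, span the ideal $L(s,\\pi){\\mathbb C}[q^{-s},q^s]$, one may choose $W_1$ with $Z(s,W_1)=L(s,\\pi)$, and then\n$$\n f_s(\\pi(s_2)^{-1}W_1)=\\frac{Z(s,W_1)}{L(s,\\pi)}=1\\qquad\\text{for every }s,\n$$\nso the functional is non-zero for any value of $s$.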
A direct computation shows that the functional $f_s$ is a split $(\\Lambda,\\theta)$-Bessel functional with respect to the character $\\Lambda$ given by\n$\n \\Lambda({\\rm diag}(a,b,b,a))=|a^{-1}b|^{-s+1\/2}.\n$\nIf we assume that $\\pi$ has trivial central character, then any unramified character of $T(F)$ is of this form for an appropriate $s$.\n\\subsection*{Zeta integrals of Siegel vectors}\nLet $(\\pi,V)$ be an irreducible, admissible, generic representation of ${\\rm GSp}_4(F)$ with unramified central character. We assume that $V=\\mathcal{W}(\\pi,\\psi_{c_1,c_2})$ is the Whittaker model with respect to the character $\\psi_{c_1,c_2}$ of $N(F)$. For what follows we will assume that $c_1,c_2\\in{\\mathfrak o}^\\times$. Recall the Siegel congruence subgroup $P_1$ and the Klingen congruence subgroup $P_2$ defined in (\\ref{parahoricsubgroupsdefeq}). \n\n\\begin{lemma}\\label{simplifiedzetalemma}\n Let $(\\pi,V)$ be as above, and let $W$ be an element of $V=\\mathcal{W}(\\pi,\\psi_{c_1,c_2})$.\n \\begin{enumerate}\n \\item If $W$ is $P_1$-invariant, then\n $$\n Z(s,\\pi(s_2)W)=\\int\\limits_{F^\\times}W(\\begin{bmatrix}a\\\\&a\\\\&&1\\\\&&&1\\end{bmatrix}s_2)|a|^{s-3\/2}\\,d^\\times a.\n $$\n \\item If $W$ is $P_2$-invariant, then\n $$\n Z(s,W)=\\int\\limits_{F^\\times}W(\\begin{bmatrix}a\\\\&a\\\\&&1\\\\&&&1\\end{bmatrix})|a|^{s-3\/2}\\,d^\\times a.\n $$\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nii) is a special case of Lemma 4.1.1 of \\cite{NF}, and i) is proved similarly. \n\\end{proof}\n\n\n\nLet $W\\in V$ be a $P_2$-invariant vector. Then\n$$\n T_{\\rm Si}W:=\\sum_{g\\in{\\rm GL}_2({\\mathfrak o})\/\\Gamma^0(\\mathfrak p)}\\pi(\\mat{g}{}{}{^tg^{-1}})W\n$$\nis $P_1$-invariant. Using Lemma \\ref{simplifiedzetalemma}, the standard representatives for ${\\rm GL}_2({\\mathfrak o})\/\\Gamma^0(\\mathfrak p)$ and (\\ref{usefulidentityeq}), one calculates that\n\\begin{equation}\\label{zetaintegralformulaeq}\n Z(s,\\pi(s_2)(T_{\\rm Si}W))=q\\int\\limits_{F^\\times}\n W(\\begin{bmatrix}a\\\\&a\\\\&&1\\\\&&&1\\end{bmatrix}s_2s_1)|a|^{s-3\/2}\\,d^\\times a+Z(s,W).\n\\end{equation}\n\\subsection*{The IIa case}\n\n\\begin{lemma}\\label{IIazetalemma}\n Let $\\pi=\\chi{\\rm St}_{{\\rm GL}(2)}\\rtimes\\sigma$ be a representation of type IIa with unramified $\\chi$ and $\\sigma$. Let $\\alpha=\\chi(\\varpi)$ and $\\gamma=\\sigma(\\varpi)$. We assume that $\\alpha^2\\gamma^2=1$, i.e., that $\\pi$ has trivial central character. Let $W_0$ be a non-zero $K^{\\rm para}(\\mathfrak p)$-invariant vector in the $\\psi_{c_1,c_2}$-Whittaker model of $\\pi$ normalzed such that $Z(s,W_0)=L(s,\\pi)$. Let $\\omega=-\\alpha\\gamma$ be the eigenvalue of the Atkin-Lehner element $\\eta$ on $W_0$.\n \\begin{enumerate}\n \\item \n For any $s\\in{\\mathbb C}$,\n $$\n Z(s,\\pi(s_2)(T_{\\rm Si}W_0))=(\\omega q^{s-1\/2}+1)L(s,\\pi).\n $$\n\\item If $\\omega q^{s-1\/2}+1=0$, then\n $$\n Z(s,T_{\\rm Si}W_0)=(1-q^{-1})^{-1}.\n $$\n \\end{enumerate}\n\n\\end{lemma}\n\\begin{proof}\nWe may assume that $s$ is in the region of convergence. 
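This reduction is justified as follows: by (\\ref{zetaLquotienteq}), both sides of the identity in i), after division by $L(s,\\pi)$, belong to ${\\mathbb C}[q^{-s},q^s]$,\n$$\n \\frac{Z(s,\\pi(s_2)(T_{\\rm Si}W_0))}{L(s,\\pi)}\\in{\\mathbb C}[q^{-s},q^s]\\qquad\\text{and}\\qquad \\omega q^{s-1\/2}+1\\in{\\mathbb C}[q^{-s},q^s],\n$$\nand two elements of ${\\mathbb C}[q^{-s},q^s]$ that agree on a right half plane agree for all $s$.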
Since the element $t_1 = h(1,0)^{-1}s_1s_2s_1h(1,0)$ lies in the paramodular group $K^{\\rm para}(\\mathfrak p)$ we have, for any $g$ in ${\\rm GSp}_4(F)$, the relation\n$ \\omega W_0(gs_2s_1)=W_0(gh(1,0)).$ Substituting this into (\\ref{zetaintegralformulaeq}), we obtain\n\\begin{align*}\n Z(s,\\pi(s_2)(T_{\\rm Si}W_0))&=\\omega q\\int\\limits_{F^\\times}W_0(\\begin{bmatrix}a\\varpi\\\\&a\\varpi\\\\&&1\\\\&&&1\\end{bmatrix})|a|^{s-3\/2}\\,d^\\times a+Z(s,W_0)\\\\\n &=(\\omega q^{s-1\/2}+1)L(s,\\pi).\n\\end{align*}\nThis proves part i) of the lemma. Proof of part ii) is a long computation and we do not present it here. See \\cite{Pitale-Schmidt-preprint-2012} for details.\n\\end{proof}\n\nFor the following proposition we continue to assume that $\\pi=\\chi{\\rm St}_{{\\rm GL}(2)}\\rtimes\\sigma$ is a representation of type IIa with unramified $\\chi$ and $\\sigma$, but we will drop the condition that $\\pi$ has trivial central character. If the non-zero vector $W$ spans the space of $P_1$-invariant vectors, then we still have $\\eta W=\\omega W$ with $\\omega=-\\alpha\\gamma$, but this constant is no longer necessarily $\\pm1$. We consider split Bessel models with respect $S$ as in (\\ref{splitstandardSeq}). We want the character $\\Lambda$ of $T(F)$ to coincide on the center of ${\\rm GSp}_4(F)$ with the central character of $\\pi$, i.e., $\\Lambda(a,a)=(\\chi\\sigma)^2(a)$. Since $\\Lambda(\\varpi,1)\\Lambda(1,\\varpi)=\\omega^2$, we have $\\Lambda(1,\\varpi)=-\\omega$ if and only if $\\Lambda(\\varpi,1)=-\\omega$.\n\n\\begin{proposition}\\label{IIasplitprop}\n Assume that $\\pi=\\chi{\\rm St}_{{\\rm GL}(2)}\\rtimes\\sigma$ is a representation of type IIa with unramified $\\chi$ and $\\sigma$. Let $S$, $\\theta$ and $T(F)$ be as above. Let $\\Lambda$ be an unramified character of $T(F)$ such that $\\Lambda(a,a)=(\\chi\\sigma)^2(a)$. Let $B$ be a non-zero vector in the $(\\Lambda,\\theta)$-Bessel model of $\\pi$ spanning the one-dimensional space of $P_1$-invariant vectors. Then we have\n $$\\Lambda(\\varpi,1)\\neq-\\omega\\neq\\Lambda(1,\\varpi) \\quad \\Longleftrightarrow \\quad B(1)\\neq0.$$\n\\end{proposition}\n\\begin{proof}\nAfter twisting by an unramified character, we may assume that $\\pi$ has trivial central character. Let $W_0$ be a non-zero vector in the $\\psi_{c_1,c_2}$-Whittaker model of $\\pi$ spanning the one-dimensional space of $K^{\\rm para}(\\mathfrak p)$-invariant vectors. By Lemma \\ref{IIazetalemma}, the vector $T_{\\rm Si}W_0$ is non-zero, and hence spans the one-dimensional space of $P_1$-invariant vectors. Let $f_s$ be as in (\\ref{fs-defn}).\nNote that\n$\n \\Lambda(\\varpi,1)=-\\omega=\\Lambda(1,\\varpi)$ if and only if $\\omega q^{s-1\/2}+1=0$. Then, by i) of Lemma \\ref{IIazetalemma}, we have $f_s(T_{\\rm Si}W_0)\\neq0$ if and only if $\\Lambda(\\varpi,1)\\neq-\\omega\\neq\\Lambda(1,\\varpi)$. The proposition now follows from the fact that $f_s(T_{\\rm Si}W_0)\\neq0$ is equivalent to $B(1)\\neq0$ by uniqueness of Bessel models.\n\\end{proof}\n\\subsection*{The VIa case}\nLet $\\pi=\\tau(S,\\nu^{-1\/2}\\sigma)$ be the representation of type VIa. We assume that $\\sigma$ is unramified and that $\\pi$ has trivial central character, i.e., $\\sigma^2=1$. This is a representation of conductor $2$\nBy Table A.8 of \\cite{NF},\n\\begin{equation}\\label{VIaLeq}\n L(s,\\pi)=L(s,\\nu^{1\/2}\\sigma)^2=\\frac1{(1-\\sigma(\\varpi)q^{-1\/2-s})^2}.\n\\end{equation}\nWe assume that $\\pi$ is given in its $\\psi_{c_1,c_2}$-Whittaker model. 
Let $W_0$ be the paramodular newform, i.e., a non-zero element invariant under the paramodular group of level $\\mathfrak p^2$; see (\\ref{paradefeq}). By Theorem 7.5.4 of \\cite{NF}, we can normalize $W_0$ such that\n$\n Z(s,W_0)=L(s,\\pi).\n$\nWe let\n\\begin{equation}\\label{shadowvectordefeq}\n W':=\\sum_{x,y,z\\in{\\mathfrak o}\/\\mathfrak p}\\pi(\\begin{bmatrix}1&y\\varpi\\\\&1\\\\&x\\varpi&1\\\\x\\varpi&z\\varpi&-y\\varpi&1\\end{bmatrix})W_0.\n\\end{equation}\nThis is a $P_2$-invariant vector; it was called the \\emph{shadow of the newform} in Sect.\\ 7.4 of \\cite{NF}. By Proposition 7.4.8 of \\cite{NF},\n\\begin{equation}\\label{VIashadowzetaeq}\n Z(s,W')=(1-q^{-1})L(s,\\pi).\n\\end{equation}\n\n\\begin{lemma}\\label{VIazetalemma}\n For any complex number $s$,\n \\begin{equation}\\label{VIacalceq8}\n \\frac{Z(s,\\pi(s_2)(T_{\\rm Si}W'))}{L(s,\\pi)}=2(q-1)\\sigma(\\varpi)q^{-1\/2+s}.\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe may assume that $s$ is in the region of convergence. By (\\ref{zetaintegralformulaeq}) and (\\ref{VIashadowzetaeq}),\n\\begin{equation}\\label{zetaintegralformulaeq2}\n Z(s,\\pi(s_2)(T_{\\rm Si}W'))=q\\int\\limits_{F^\\times}\n W'(\\begin{bmatrix}a\\\\&a\\\\&&1\\\\&&&1\\end{bmatrix}s_2s_1)|a|^{s-3\/2}\\,d^\\times a+(1-q^{-1})L(s,\\pi).\n\\end{equation}\nTo evaluate the integral in this equation, we compute the zeta integral $Z(s,\\pi(s_2s_1)W')$ in two different ways -- first, by using the local functional equation for $Z(s,W)$, and second, by using a direct computation and the fact that the representation has no $K^{\\rm para}(\\mathfrak p)$-invariant vectors. The result is that\n\\begin{equation}\\label{VIacalceq5}\n \\int\\limits_{F^\\times}W'(\\begin{bmatrix}a\\\\&a\\\\&&1\\\\&&&1\\end{bmatrix}s_2s_1)|a|^{s-3\/2}\\,d^\\times a=(q-1)q^{2s-1}(L(s,\\pi)-1).\n\\end{equation}\nFrom (\\ref{zetaintegralformulaeq2}) and (\\ref{VIacalceq5}), we now get\n\\begin{equation}\\label{VIacalceq6}\n Z(s,\\pi(s_2)(T_{\\rm Si}W'))=(q-1)q^{2s}(L(s,\\pi)-1)+(1-q^{-1})L(s,\\pi).\n\\end{equation}\nUsing the explicit form (\\ref{VIaLeq}) of $L(s,\\pi)$, the assertion follows. See \\cite{Pitale-Schmidt-preprint-2012} for further details.\n\\end{proof}\n\n\\begin{proposition}\\label{VIasplitprop}\n Assume that $\\pi=\\tau(S,\\nu^{-1\/2}\\sigma)$ is a representation of type VIa with unramified $\\sigma$. Let $S$ be as in (\\ref{splitstandardSeq}). Let $\\Lambda$ be an unramified character of $T(F)$ such that $\\Lambda(a,a)=\\sigma^2(a)$. Let $B$ be a non-zero vector in the $(\\Lambda,\\theta)$-Bessel model of $\\pi$ spanning the one-dimensional space of $P_1$-invariant vectors. Then $B(1)\\neq0$.\n\\end{proposition}\n\\begin{proof}\nAfter twisting by an unramified character, we may assume that $\\pi$ has trivial central character. Let $W_0$ be a non-zero vector in the $\\psi_{c_1,c_2}$-Whittaker model of $\\pi$ spanning the one-dimensional space of $K^{\\rm para}(\\mathfrak p^2)$-invariant vectors, and let $W'$ be the $P_2$-invariant vector defined in (\\ref{shadowvectordefeq}). By Lemma \\ref{VIazetalemma}, the vector $T_{\\rm Si}W'$ is non-zero, and hence spans the one-dimensional space of $P_1$-invariant vectors. Let $f_s$ be as in (\\ref{fs-defn}).\nBy Lemma \\ref{VIazetalemma}, we have $f_s(T_{\\rm Si}W')\\neq0$. 
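Explicitly, combining (\\ref{fs-defn}) with (\\ref{VIacalceq8}),\n$$\n f_s(T_{\\rm Si}W')=\\frac{Z(s,\\pi(s_2)(T_{\\rm Si}W'))}{L(s,\\pi)}=2(q-1)\\sigma(\\varpi)q^{-1\/2+s},\n$$\nwhich is non-zero for every value of $s$.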
By uniqueness of Bessel models, this is equivalent to $B(1)\\neq0$.\n\\end{proof}\n\\section{The one-dimensional cases}\\label{onedimsec}\nIn this section we will identify good test vectors for those irreducible, admissible representations of ${\\rm GSp}_4(F)$ which are not spherical, but possess a one-dimensional space of $P_1$-invariant vectors. A look at Table \\ref{Iwahoritable} shows that these are the Iwahori-spherical representations of type IIa, IVc, Vb, VIa and VIb (Vc is a twist of Vb and is not counted separately). \nThis entire section assumes that the elements $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ satisfy the conditions (\\ref{standardassumptions}). The matrix $S$, the group $T(F)\\cong L^\\times$ and the character $\\theta$ have the usual meaning. \n\n\n\\begin{lemma}\\label{onedimlemma}\n Let $\\Lambda$ be a character of $L^\\times$, and $m_0$ as in (\\ref{m0defeq}). \n \\begin{enumerate}\n \\item Assume that $B\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$ satisfies\n $$\n T_{1,0}B=\\lambda B,\\qquad T_{0,1}B=\\mu B,\\qquad \\eta B=\\omega B\n $$\n with complex numbers $\\lambda,\\mu,\\omega$ satisfying $\\lambda = -q\\omega$.\n\\begin{itemize}\n \\item If $m_0=0$, $\\big(\\frac L\\mathfrak p\\big)=0$ and $\\Lambda(\\varpi_L)=-\\omega$, then $B=0$. \n \\item If $m_0=0$, $\\big(\\frac L\\mathfrak p\\big)=1$, $\\Lambda(1,\\varpi)=-\\omega=\\Lambda(\\varpi,1)$ and $B(1)=0$, then\n$$\n B=0\\qquad\\Longleftrightarrow\\qquad B(\\hat u_1s_1s_2)=0\\qquad\\Longleftrightarrow\\qquad B(\\hat u_2s_1s_2)=0.\n $$\n\\item Otherwise, \n $$\n B=0\\qquad\\Longleftrightarrow\\qquad B(h(0,m_0))=0.\n $$\n \\end{itemize}\n\\item Assume that $B\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$ satisfies $\\eta B=\\omega B$ and \n\\begin{equation}\\label{VIbspecialeq}\n \\sum_{c\\in{\\mathfrak o}\/\\mathfrak p}\\pi(\\begin{bmatrix}1\\\\&1\\\\c&&1\\\\&&&1\\end{bmatrix})B+\\pi(s_2)B=0.\n\\end{equation}\nThen\n$$\n B=0\\qquad\\Longleftrightarrow\\qquad B(h(0,m_0))=0.\n $$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\ni) From Lemma \\ref{T10actionlemma} and the Atkin-Lehner relations, we get\n\\begin{align}\n\\label{onedimT10eq1}&\\lambda B(h(l,m))=q^3B(h(l+1,m)), \\quad \\text{ for } l,m \\geq 0, \\\\\n\\label{onedimT10eq5a}\n& (q-1)B(h(l,m))=q^2B(h(l,m)s_2)+q\\omega B(h(l+1,m-1)s_2) \\quad \\text{ for } l \\geq 0,\\: m\\geq\\max(m_0,1).\n\\end{align}\n\nAssume that $m_0>0$ and that $B(h(0,m_0))=0$. Then, by Proposition \\ref{maintowerprop}, the values $B(h(l,m))$ are zero for all $l$ and $m$. Considering (\\ref{onedimT10eq5a}) with $l=0$, (\\ref{onedimT10eq5a}) with $l=1$, Lemma \\ref{T01s2conslemma} with $l=0$, $m=m_0-1$, (\\ref{T01s2conseq4}) and Lemma \\ref{T01actionlemma} ii) with $l=0$, $m=m_0-1$ we get a homogeneous system of $5$ linear equations with a non-singular matrix in the variables\n$$\n B(h(0,m_0-1)s_2),\\:B(h(1,m_0-1)s_2),\\:B(h(2,m_0-1)s_2),\\:B(h(0,m_0)s_2),\\:B(h(1,m_0)s_2).\n$$\nHence, the above values are equal to zero.\nLemma \\ref{T01s2conslemma} then implies that $B(h(l,m_0-1)s_2)=0$ for all $l\\geq0$. Using (\\ref{onedimT10eq5a}), it follows that $B(h(l,m)s_2)=0$ for all $l\\geq0$ and $m\\geq m_0-1$. \nThe Atkin-Lehner relations now imply that the main tower, $s_2$-tower, $s_1s_2$-tower and $s_2s_1s_2$-tower are all zero. By Proposition \\ref{HFRFSipdecomplemma} and Lemma \\ref{automaticvanishinglemma}, the function $B$ is zero. This proves i) of the lemma for $m_0>0$.\n\nNow assume that $m_0=0$. 
Using Proposition \\ref{maintowerprop}, Lemma \\ref{automaticvanishinglemma}, (\\ref{onedimT10eq5a}), and the Atkin-Lehner relations, we can show the following: If $B(1)=0$ and $B(h(l,0)s_2)=0$ for all $l\\geq0$, then $B(h(l,m)w)=0$ for all $l\\in{\\mathbb Z}$, all $m\\geq0$, and all $w\\in\\{1,s_2,s_2s_1s_2\\}$, as well as $B(h(l,m)s_1s_2)=0$ for all $l\\in{\\mathbb Z}$ and all $m\\geq1$.\n\nAssume that $\\big(\\frac L\\mathfrak p\\big)=-1$ (the inert case). Then, from Lemma \\ref{T10actionlemma} ii),\n (\\ref{onedimT10eq1}), Atkin-Lehner relations and using $\\lambda=-q\\omega$, we get\n$ q(q+1)B(h(l,0)s_2)=(q-1)B(h(l,0))$ for $l\\geq0$. Now, from the double coset representatives in Proposition \\ref{HFRFSipdecomplemma}, it follows that $B=0$ if $B(1)=0$. The other cases are similar. See \\cite{Pitale-Schmidt-preprint-2012} for details.\n\nii) Evaluating (\\ref{VIbspecialeq}) at $h(l,m)s_2$, we obtain\n\\begin{equation}\\label{VIbspecialeq2}\n q B(h(l,m)s_2)=-B(h(l,m))\n\\end{equation}\nfor all $l,m\\geq0$. Let $i=0$ in the ramified case and $i=1$ or $i=2$ in the split case. Evaluating (\\ref{VIbspecialeq}) at $h(l,0)\\hat u_is_1s_2s_1s_2$ leads to\n\\begin{equation}\\label{VIbspecialeq3}\n q B(h(l,0)s_2s_1s_2)=-B(h(l,0)\\hat u_is_1s_2)\n\\end{equation}\nfor $l\\geq-1$; observe here the defining property (\\ref{u1u2defeq}) of the elements $u_i$. Assume that $B(h(0,m_0))$, and therefore, by Proposition \\ref{maintowerprop}, the whole main tower, is zero. Then, by (\\ref{VIbspecialeq2}) and Lemma \\ref{automaticvanishinglemma}, the entire $s_2$-tower is also zero. By the Atkin-Lehner relations, the $s_1s_2$ and $s_2s_1s_2$ towers are also zero. In the inert case, in view of the double cosets given in Proposition \\ref{HFRFSipdecomplemma}, and v) of Lemma \\ref{automaticvanishinglemma}, it follows that $B=0$. Taking into account (\\ref{VIbspecialeq3}), the same conclusion holds in the ramified and split cases.\n\\end{proof}\n\nThe condition $\\lambda = -q\\omega$ in i) of the above lemma is satisfied by the representations of type IIa, IVc, Vb and VIa. The representations of type VIb satisfies $\\lambda = q\\omega$. These representations have a one-dimensional space of $P_1$-invariant vectors, but no non-zero $P_2$-invariant vectors. Hence, if a non-zero $P_1$-invariant vector $B$ is made $P_2$-invariant by summation, the result is zero -- using standard representatives, we get (\\ref{VIbspecialeq}) in ii) of the above lemma.\n\n\\subsection*{The main result}\nThe following theorem, which is the main result of this section, identifies good test vectors for those irreducible, admissible, infinite-dimensional representations of ${\\rm GSp}_4(F)$ that have a one-dimensional space of $P_1$-invariant vectors. \n\\begin{theorem}\\label{onedimtheorem}\n Let $\\pi$ be an irreducible, admissible, Iwahori-spherical representation of ${\\rm GSp}_4(F)$ that is not spherical but has a one-dimensional space of $P_1$-invariant vectors. Let $S$ be the matrix defined in (\\ref{Sxidefeq}), with $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ subject to the conditions (\\ref{standardassumptions}). Let $T(F)$ be the group defined in (\\ref{TFdefeq}). Let $\\theta$ be the character of $U(F)$ defined in (\\ref{thetadefeq}), and let $\\Lambda$ be a character of $T(F)\\cong L^\\times$. Let $m_0$ be as in (\\ref{m0defeq}). We assume that $\\pi$ admits a $(\\Lambda,\\theta)$-Bessel model. 
Let $B$ be an element in this Bessel model spanning the space of $P_1$-invariant vectors.\n \\begin{enumerate}\n \\item Assume that $\\pi$ is of type IIa. Then $B(h(0,m_0))\\neq0$, except in the split case with $\\Lambda(1,\\varpi)=-\\omega=\\Lambda(\\varpi,1)$. In this latter case $B(1)=0$, but $B(\\hat u_is_1s_2)\\neq0$ for $i=1,2$. Here, the elements $\\hat u_i$ are defined in (\\ref{hatu1u2defeq}).\n \\item If $\\pi$ is of type IVc, Vb, VIa or VIb, then $B(h(0,m_0))\\neq0$.\n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nLet $B$ be a non-zero $P_1$-invariant vector in the $(\\Lambda,\\theta)$-Bessel model of $\\pi$. Then $B$ is an element of the space $\\mathcal{S}(\\Lambda,\\theta,P_1)$. Since the space of $P_1$-invariant vectors in $\\pi$ is one-dimensional, $B$ is an eigenvector for $T_{1,0}$, $T_{0,1}$ and $\\eta$; let $\\lambda$, $\\mu$ and $\\omega$ be the respective eigenvalues. Lemma \\ref{onedimlemma} proves our assertions in case $m_0>0$. For the rest of the proof we will therefore assume that $m_0=0$, meaning that $\\Lambda$ is unramified. \n\ni) In this case we have $\\lambda = -q\\omega$. If we are not in the split case, or if we are in the split case and $\\Lambda(1,\\varpi) \\neq -\\omega$ then our assertion follows from Lemma \\ref{onedimlemma} i). Assume that we are in the split case and that $\\Lambda(1,\\varpi)=-\\omega=\\Lambda(\\varpi,1)$. Then $B(1)=0$ by Proposition \\ref{IIasplitprop} and (\\ref{BBpat1eq}). Hence our assertion follows from Lemma \\ref{onedimlemma} i).\n\nii) If $\\pi$ is of type VIb, then $B$ satisfies (\\ref{VIbspecialeq}) and the assertion follows from Lemma \\ref{onedimlemma} ii). Assume that $\\pi$ is of type IVc, Vb or VIa. If we are not in the split case, our assertions follow from Lemma \\ref{onedimlemma} i). Assume we are in the split case, and that $\\pi$ is of type IVc or Vb. Then, by Table \\ref{besselmodelstable} and Table \\ref{Iheckeeigenvaluestable}, we have $\\Lambda(1,\\varpi) \\neq -\\omega$. Hence our assertions follow from Lemma \\ref{onedimlemma} i). Assume we are in the split case, and that $\\pi$ is of type VIa. Then our assertion follows from Proposition \\ref{VIasplitprop} and (\\ref{BBpat1eq}).\n\\end{proof}\n\\section{The two-dimensional cases}\\label{twodimsec}\nIn this section, let $\\pi$ be an irreducible, admissible representation of ${\\rm GSp}_4(F)$ of type IIIa or IVb. In both cases the space of $P_1$-invariant vectors is two-dimensional.\n\\subsection*{The IIIa case}\nLet $B_1$ and $B_2$ be common eigenvectors for $T_{1,0}$ and $T_{0,1}$ in the IIIa case. Then, one can choose the normalizations such that\n \\begin{alignat}{2}\\label{IIIaB1B2eq}\n T_{1,0}\\,B_1&=\\alpha\\gamma q\\,B_1,\\qquad&\n T_{1,0}\\,B_2&=\\gamma q\\,B_2,\\nonumber\\\\\n T_{0,1}\\,B_1&=\\alpha\\gamma^2(\\alpha q+1)q\\,B_1,\\qquad&\n T_{0,1}\\,B_2&=\\alpha\\gamma^2(\\alpha^{-1}q+1)q\\,B_2,\\\\\n \\eta\\,B_1&=\\alpha\\gamma\\,B_2,\\qquad&\n \\eta\\,B_2&=\\gamma\\,B_1.\\nonumber\n \\end{alignat}\nNote that $\\alpha \\neq 1$ in the IIIa case; see Table \\ref{satakenotationtable}.\n\\begin{lemma}\\label{twodimIIIalemma}\n Let $\\Lambda$ be a character of $L^\\times$, and $m_0$ as in (\\ref{m0defeq}). Assume that $B_1,B_2\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$ satisfy (\\ref{IIIaB1B2eq}) with $\\alpha\\neq1$. Assume also that $\\Lambda(\\varpi)=\\alpha\\gamma^2$. 
Then\n $$\n B_1=0\\qquad\\Longleftrightarrow\\qquad B_1(h(0,m_0))=0.\n $$\n\\end{lemma}\n\\begin{proof}\nUsing the Atkin-Lehner relations (\\ref{IIIaB1B2eq}) and Lemma \\ref{T10actionlemma}, we get \nfor any $l \\geq -1$ and $m \\geq 0$,\n\\begin{equation}\\label{B1-s2s1s2-l-l+1}\n\\gamma B_1(h(l,m)s_2s_1s_2)=q^2B_1(h(l+1,m)s_2s_1s_2),\n\\end{equation}\nand, for $l \\geq 0$ and $m \\geq {\\rm max}(m_0,1)$, the three equations\n\\begin{align}\n \\label{3aT10eq318}0 &= -\\alpha\\gamma q B_1(h(l,m)s_1s_2) + q^2B_1(h(l-1,m+1)s_1s_2)\n +\\alpha \\gamma (q-1)B_1(h(l,m)),\\\\\n \\label{3aT10eq328}0 &= -\\gamma q B_1(h(l+1,m-1)s_2) + q^2 B_1(h(l,m)s_2)\n +q^2(q-1) B_1(h(l,m)s_2s_1s_2),\\\\\n \\label{3aT10eq428}0 &= q^3 B_1(h(l,m)s_2s_1s_2) + q^2 B_1(h(l,m)s_2) + q B_1(h(l,m)s_1s_2) + B_1(h(l,m)).\n\\end{align}\n \nWe will first consider the case ${\\bf m_0 > 0}$. Assume that $B_1(h(0,m_0))=0$; we will show that $B_1=B_2=0$. Note that, by the Atkin-Lehner relations, we only need to show that $B_1=0$. By Proposition \\ref{maintowerprop}, we know that $B_1(h(l,m)) = 0$ for all $l,m \\geq 0$. Considering (\\ref{3aT10eq428}) with $l=0$ and $m=m_0$, Lemma \\ref{T01actionlemma} ii) with $B=B_2$, $l=0$ and $m=m_0-1$, (\\ref{T01s2conslemmaeq1b}) applied to $B_1$, (\\ref{T01s2conseq4b}) applied to $B_1$, Lemma \\ref{T01actionlemma} ii) with $B=B_2$, $l=0$ and $m=m_0$, Lemma \\ref{T01actionlemma} ii) with $B=B_1$, $l=0$ and $m=m_0-1$, (\\ref{T01s2conslemmaeq1b}) applied to $B_2$ and Lemma \\ref{T01actionlemma} ii) with $B=B_1$, $l=0$ and $m=m_0$, \nwe get a homogeneous system of $8$ linear equations in the $7$ variables\n\\begin{align}\\label{lin-sys-var}\n &B_1(h(0,m_0-1)s_2),\\;B_1(h(0,m_0)s_2),\\;B_1(h(1,m_0)s_2),\\;B_1(h(0,m_0)s_1s_2),\\nonumber\\\\ \n &B_1(h(1,m_0)s_1s_2),\\;B_1(h(-1,m_0)s_1s_2),\\;B_1(h(0,m_0)s_2s_1s_2).\n\\end{align}\nFor any $\\alpha$, either the set of first $7$ equations or the set of last $7$ equations has a non-singular matrix. Hence, all the values in (\\ref{lin-sys-var}) are zero. Now (\\ref{T01s2conslemmaeq1}) and (\\ref{3aT10eq328}) imply that\n$\n B_1(h(l,m)s_2) = 0$ for all $l \\geq 0, \\,\\,m = m_0-1$ or $m=m_0$.\nUsing (\\ref{B1-s2s1s2-l-l+1}) and (\\ref{3aT10eq428}), we get\n$B_1(h(l,m_0)s_2s_1s_2) = B_1(h(l,m_0)s_1s_2) = 0$ for $l \\geq -1$. Now, using (\\ref{3aT10eq318}), (\\ref{3aT10eq328}), (\\ref{3aT10eq428}), induction, Proposition \\ref{HFRFSipdecomplemma} and the automatic vanishing from Lemma \\ref{automaticvanishinglemma}, it follows that $B_1=0$. This concludes our proof in case $m_0>0$.\n\nWe next consider the case ${\\bf m_0 = 0}$. Assume that $B_1(1)=0$; we will show that $B_1=0$. By Proposition \\ref{maintowerprop}, we know that $B_1(h(l,m)) = 0$ for all $l,m \\geq 0$. Suppose we can show that $B_1$ vanishes on all the double coset representatives in Proposition \\ref{HFRFSipdecomplemma} that have $m=0$ and that $B_1(h(l,1)s_1s_2) = 0$ for all $l\\geq-1$. Using induction and (\\ref{3aT10eq318}), we get $B_1(h(l,m)s_1s_2) = 0$ for all $l\\geq-1$ and $m\\geq0$. Now, induction and (\\ref{3aT10eq328}), (\\ref{3aT10eq428}), gives us $B_1 = 0$, and hence, $B_2 = 0$. \n\nWe will give the proof of the inert case $\\big(\\frac L{\\mathfrak p}\\big) = -1$ here. The other cases are similar (see \\cite{Pitale-Schmidt-preprint-2012}). 
Using Lemma \\ref{T10actionlemma} ii), we get, for $l \\geq 0$,\n\\begin{equation}\\label{m0=0-inert-3a-1}\n \\alpha \\gamma q B_1(h(l,0)s_2) = q^2 B_1(h(l-1,1)s_1s_2).\n\\end{equation}\nUsing (\\ref{B1-s2s1s2-l-l+1}) and Lemma \\ref{T10actionlemma} iv), we get, for $l \\geq 0$,\n\\begin{equation}\\label{m0=0-inert-3a-2}\n \\alpha \\gamma q B_1(h(l,0)s_2s_1s_2) = -(q+1)B_1(h(l-1,1)s_1s_2).\n\\end{equation}\nHence, from (\\ref{B1-s2s1s2-l-l+1}), (\\ref{m0=0-inert-3a-1}) and (\\ref{m0=0-inert-3a-2}), we get, for $l \\geq 0$, \n\\begin{equation}\\label{m0=0-inert-3a-3}\n \\gamma B_1(h(l,0)s_2) = q^2 B_1(h(l+1,0)s_2).\n\\end{equation}\nUsing (\\ref{m0=0-inert-3a-1}), (\\ref{m0=0-inert-3a-3}) and Lemma \\ref{T01actionlemma} ii), we get\n\\begin{equation}\\label{m0=0-inert-3a-4}\n \\alpha \\gamma^2 (\\alpha q +1) q B_1(s_2) = q^4 B_1(h(0,1)s_1s_2) = \\alpha \\gamma q^3 B_1(h(1,0)s_2) = \\alpha \\gamma^2 q B_1(s_2).\n\\end{equation}\nSince $\\alpha \\neq 0$ , we get $B_1(s_2) = 0$. Now (\\ref{m0=0-inert-3a-1}), (\\ref{m0=0-inert-3a-2}) and (\\ref{m0=0-inert-3a-3}) implies that $B_1$ vanishes on all the double coset representatives in Proposition \\ref{HFRFSipdecomplemma} that have $m=0$. Hence $B_1 =0$, as claimed.\n\\end{proof}\n\\subsection*{The IVb case}\nLet $B_1$ and $B_2$ be common eigenvectors for $T_{1,0}$ and $T_{0,1}$ in the IVb case. We can choose the normalizations such that\n \\begin{alignat}{2}\\label{IVbB1B2eq}\n T_{1,0}\\,B_1&=\\gamma\\,B_1,\\qquad&\n T_{1,0}\\,B_2&=\\gamma q^2\\,B_2,\\nonumber\\\\\n T_{0,1}\\,B_1&=\\gamma^2(q+1)\\,B_1,\\qquad&\n T_{0,1}\\,B_2&=\\gamma^2q(q^3+1)\\,B_2,\\\\\n \\eta\\,B_1&=\\gamma\\,B_2,\\qquad&\n \\eta\\,B_2&=\\gamma\\,B_1.\\nonumber\n \\end{alignat}\nRecall that $\\gamma = \\sigma(\\varpi)$, where $\\sigma$ is an unramified character. From Table \\ref{besselmodelstable}, a $(\\Lambda,\\theta)$-Bessel model exists if and only if $\\Lambda = \\sigma \\circ N_{L\/F}$. In particular, the number $m_0$ defined in (\\ref{m0defeq}) must be zero, i.e., $\\Lambda$ must be unramified. The central character condition is equivalent to $\\Lambda(\\varpi) = \\gamma^2$. Moreover, in the ramified case $\\big(\\frac L\\mathfrak p\\big)=0$, evaluating at $\\varpi_L$, we get $\\Lambda(\\varpi_L)=\\gamma$, and in the split case $\\big(\\frac L\\mathfrak p\\big)=1$, evaluating at $(\\varpi,1)$ and $(1,\\varpi)$, we get $\\Lambda(\\varpi,1)=\\Lambda(1,\\varpi)=\\gamma$.\n\\begin{lemma}\\label{twodimIVblemma}\n Let $\\Lambda$ be an unramified character of $L^\\times$ satisfying $\\Lambda(\\varpi) = \\gamma^2$. If $\\big(\\frac L\\mathfrak p\\big)=0$, assume that $\\Lambda(\\varpi_L)=\\gamma$, and if $\\big(\\frac L\\mathfrak p\\big)=1$, assume that $\\Lambda(\\varpi,1)=\\Lambda(1,\\varpi)=\\gamma$. Assume that $B_1,B_2\\in\\mathcal{S}(\\Lambda,\\theta,P_1)$ satisfy (\\ref{IVbB1B2eq}). Then\n $$\n B_1=0\\quad\\Longleftrightarrow\\quad B_1(1)=0\\quad\\Longleftrightarrow\\quad B_2(1)=0\\quad\\Longleftrightarrow\\quad B_2=0.\n $$\n\\end{lemma}\n\\begin{proof} The proof is similar to that of Lemma \\ref{twodimIIIalemma}, $m_0=0$ case, and is therefore omitted. See \\cite{Pitale-Schmidt-preprint-2012} for details.\n\\end{proof}\n\\subsection*{The main result}\nThe following theorem identifies good test vectors for those irreducible, admissible representations of ${\\rm GSp}_4(F)$ that are not spherical but have a two-dimensional space of $P_1$-invariant vectors. 
Recall from Table \\ref{Iwahoritable} that these are precisely the Iwahori-spherical representations of type IIIa and IVb.\n\n\\begin{theorem}\\label{twodimtheorem}\n Let $\\pi$ be an irreducible, admissible, Iwahori-spherical representation of ${\\rm GSp}_4(F)$ that is not spherical but has a two-dimensional space of $P_1$-invariant vectors. Let $S$ be the matrix defined in (\\ref{Sxidefeq}), with $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ subject to the conditions (\\ref{standardassumptions}). Let $T(F)$ be the group defined in (\\ref{TFdefeq}). Let $\\theta$ be the character of $U(F)$ defined in (\\ref{thetadefeq}), and let $\\Lambda$ be a character of $T(F)\\cong L^\\times$. Let $m_0$ be as in (\\ref{m0defeq}). We assume that $\\pi$ admits a $(\\Lambda,\\theta)$-Bessel model. Then the space of $P_1$-invariant vectors is spanned by common eigenvectors for the Hecke operators $T_{1,0}$ and $T_{0,1}$, and if $B$ is any such eigenvector, then $B(h(0,m_0))\\neq0$.\n\\end{theorem}\n\\begin{proof}\nAssume first that $\\pi$ is of type IIIa. Then $\\pi$ has a Bessel model with respect to any $\\Lambda$; see Table \\ref{besselmodelstable}. Let $B_1$ and $B_2$ be the $P_1$-invariant vectors which are common eigenvectors for $T_{1,0}$ and $T_{0,1}$, as in (\\ref{IIIaB1B2eq}). Then $B_1(h(0,m_0))\\neq0$ by Lemma \\ref{twodimIIIalemma}. If we replace $\\gamma$ by $\\alpha^{-1}\\gamma$ and then $\\alpha$ by $\\alpha^{-1}$ in the equations (\\ref{IIIaB1B2eq}), then the roles of $B_1$ and $B_2$ get reversed. This symmetry shows that also $B_2(h(0,m_0))\\neq0$.\n\nNow assume that $\\pi$ is the representation $L(\\nu^2,\\nu^{-1}\\sigma{\\rm St}_{{\\rm GSp}(2)})$ of type IVb. Then, by Table \\ref{besselmodelstable}, we must have $\\Lambda=\\sigma\\circ N_{L\/F}$. In particular, $\\Lambda$ is unramified and satisfies the hypotheses of Lemma \\ref{twodimIVblemma}. Let $B_1$ and $B_2$ be the $P_1$-invariant vectors which are common eigenvectors for $T_{1,0}$ and $T_{0,1}$, as in (\\ref{IVbB1B2eq}). Then $B_1(1)\\neq0$ and $B_2(1)\\neq0$ by Lemma \\ref{twodimIVblemma}.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:1}\n\nIn 1970 V. Efimov predicted \\cite{vefimov} a remarkable and counterintuitive phenomenon now called the Efimov effect, which can be stated as follows.\nIf the negative continuous spectrum of a three-particle Hamiltonian $H$ is empty but\nat least two of the particle pairs have a resonance at zero energy then $H$ has an infinite number of negative energy bound states. Thereby the pair--interactions' range\ncan be finite. This effect was in striking contradiction with the general knowledge of that time saying\nthat an infinite number of bound states can only be produced by long--range interactions. The first sketch of mathematical proof\nof the Efimov effect was done by\nL. D. Faddeev shortly after V. Efimov told him about his discovery \\cite{efimovprivate}. The first published proof, which was not\nmathematically flawless, appeared in \\cite{amadonoble}. Later D. R. 
Yafaev \\cite{yafaev}, building on Faddeev's idea, presented a complete mathematical proof;\nin \\cite{ovchinnikov,tamura,fonseca} one finds other proofs by different methods.\nFaddeev's original argument and his derivation of the discrete spectrum asymptotics in the case of three identical particles can be found in \\cite{merkuriev}.\nThe spectral asymptotics for particles with unequal masses was analyzed in \\cite{sobolev}.\nIn \\cite{wang} the author claimed to have generalized the result in \\cite{yafaev,sobolev} to the case of three clusters, but the proof in \\cite{wang}\ncontains a mistake \\cite{wangwrong}.\n\nAfter the Efimov effect was proved to exist in the three--particle case the researchers had an eye on its\nmost straightforward generalization, namely, the case of four bosons with an empty negative continuous spectrum.\nAmado and Greenwood in \\cite{greenwood} claimed to have proved that the Efimov effect is impossible for $N \\geq 4$ bosons.\nFor four bosons their prediction later got numerical confirmation, see f. e. \\cite{universal}. This showed that somehow\nthe Efimov effect appeared to be possible only for three bosons, not more, not less.\nThe ``proof'' in \\cite{greenwood} is invalid: the authors make various unclear and ungrounded assumptions (in particular,\none could question the validity of the expansion Eq.~(10), where somehow $B\\neq 0$, or the finiteness of the numerator in Eq.~(6) in \\cite{greenwood}, etc.).\nThe reader should also be warned against\nthe misused terminology in \\cite{amadonoble,greenwood}, where\nthe authors use the term ``zero energy bound states'' both for zero energy\nresonances and $L^2$ states, which may lead to a controversy already in the three--particle case \\cite{1}. The aim of the present paper is to give a\ncorrect mathematical proof of the Amado--Greenwood conjecture. For a more detailed explanation of our result we refer the reader to Sec.~\\ref{25.9;15}.\n\nIn a series of papers (in particular, see \\cite{vugzhisl1,vugzhisl2,vugzhisl3,vugzhisl4})\nVugal'ter and Zhislin, using the variational approach, derived several theorems concerning the finiteness of the discrete spectrum of Schr\\\"odinger\noperators. Applied to systems where subsystems may have virtual levels, their results are the most advanced ones so far.\nIn Theorem~1.3 in \\cite{vugzhisl1} these authors prove the following: suppose that\nthe system of $N$ particles is described by the Hamiltonian $H$ with $\\sigma_{ess} (H) = [0, \\infty)$. Suppose also that the particles can be partitioned into two clusters\n$C_1$ and $C_2$ in such a way that any subsystem having the particles both from $C_1$ and $C_2$ does not have a virtual level at zero energy. Then $H$ has at most a finite\nnumber of negative eigenvalues. Note that subsystems having the particles only from $C_1$ (or equivalently only from $C_2$) are allowed to have virtual\nlevels! Already in \\cite{yafaev}, as a byproduct of the main result, one finds the proof\nthat if $N=3$ and at most one particle pair has a zero energy resonance then the number of negative energy levels is finite; in \\cite{yafaev29}\nthis case is analyzed in more detail.
\nIn that sense one can view the theorem of Vugal'ter and Zhislin as a\ngeneralization of Yafaev's result regarding the finiteness of the discrete spectrum to $N \\geq 4$.\nThe proof of Theorem~1.3 in \\cite{vugzhisl1} is rather involved\n(a proposed simplification in \\cite{ahia} helps only a little since the method is practically the same as in \\cite{vugzhisl1}\nbut various results from \\cite{vugzhisl1,vugzhisl2,vugzhisl3} are cited without a proof). Sec.~\\ref{1.08;33} of this paper\ncontains a relatively simple proof of Theorem~\\ref{30.07;1}, which implies the theorem of Vugal'ter and Zhislin, however, restricted to the class of potentials considered here.\nOur proof uses the Birman--Schwinger (BS) principle and we believe that it contributes to a better understanding of the results in \\cite{vugzhisl1}.\n\n\n\nThe Amado--Greenwood's conjecture \\cite{greenwood} that is most interesting from a physics point of view\nis not covered by the results of Vugal'ter and Zhislin. Here one can mention two sorts of arising difficulties.\nOne is, quoting Zhislin, a lack of information on the near--threshold resolvent behavior of the\nN-body system for $N \\geq 3$. Another difficulty appears if one tries to extend Yafaev's analysis\nto the case $N \\geq 4$. Namely, in \\cite{yafaev,sobolev} the eigenvalues were ``counted'' using\nsymmetrized Faddeev equations, which have a compact kernel away from the\nresonances. In the case $N \\geq 4$ Faddeev equations seize being compact, which is a\nknown problem \\cite{motovilov,yakovlev,merkuriev}. Their exists a generalization in form of\nFaddeev--Yakubovsky equations \\cite{yakubovsky,merkuriev}, but it is not at all obvious how one can adopt Faddeev--Yakubovsky equations\nto the purpose of counting eigenvalues.\n\nIn our proof of the Amado--Greenwood's conjecture we use an approach, which is in the spirit of \\cite{yafaev}.\nWe reduce the problem to the analysis of the spectrum of\nan integral operator, which arises after successively applying $N$ times the BS principle. In distinction from \\cite{yafaev,sobolev},\nwhere integral equations had a $3\\times 3$ matrix form,\nthe resulting integral equation in our approach is written in one line; this equation is subsequently used for counting eigenvalues. One should also\nmention the fundamental difference between the cases of 3 and 4 identical particles. In the case $N=3$\nthe two--particle subsystems have at most a zero energy resonance but cannot have a zero energy $L^2$ state.\nOn the contrary, for $N=4$ the 3--particle subsystems may have square integrable zero energy\nbound states \\cite{1,jpasubmit}. These zero energy three--body ground states have a power fall--off,\nthe most trivial lower bound \\cite{1} being $|\\psi_0 (x)| \\geq (1+|x|)^{-4}$, where\n$x \\in \\mathbb{R}^6$. The hard part of the proof is to\nshow in some sense that these states fall off rapidly enough in some sense so that they cannot lead to an infinite number of bound states. The presence of a bound state at zero energy also\naffects the dependence of the energy on the coupling constant near the threshold \\cite{klaus1,klaus2,simonfuncan} and shapes the low--energy behavior of the resolvent.\n\nThe paper is organized as follows. 
In Sec.~\\ref{1.08;33} we prove Theorem~\\ref{30.07;1}, which implies Theorem~1.3 in \\cite{vugzhisl1}.\nSec.~\\ref{25.9;15} contains the statement of the main theorem and an intriguing conjecture concerning the existence of the true four--body Efimov effect.\nSec.~\\ref{1.08;1} deals with the situation when the N--particle system is at critical coupling (for the definition of critical coupling, see \\cite{jpasubmit}).\nThe results of Sec.~\\ref{1.08;1} are then applied to the subsystems\ncontaining $N-1$ particles in Sec.~\\ref{1.08;2}, where we prove the Amado--Greenwood's conjecture.\nThe Appendix reviews the BS principle, thereby, we extensively use the results from \\cite{klaus2}.\n\n\nHere are some of the notations used in the paper. We define\n$\\mathbb{R}_+ := \\{x\\in \\mathbb{R}|x \\geq 0\\}$ and $\\mathbb{Z}_+ := \\{n \\in \\mathbb{Z}| n \\geq 0\\}$, where $\\mathbb{R} , \\mathbb{Z}$ are reals and integers respectively.\nThe function $\\chi_R : \\mathbb{R}_+ \\to \\mathbb{R}_+$ for $R > 0$ is such that\n$\\chi_R (r) = 1$ for $r \\leq R$ and $\\chi_R (r) = 0$ otherwise.\nFor $v : \\mathbb{R}^n \\to \\mathbb{R}$ the positive and negative parts are $v_{\\pm} := \\max[\\pm v , 0]$ so\nthat $v= v_+ - v_-$. For a self--adjoint operator $A$ following \\cite{imsigal} we denote $\\# (\\textrm{evs}(A) >\n\\lambda)$ the number of eigenvalues of $A$ (counting multiplicities) larger\nthan $\\lambda \\in \\mathbb{R}$. In the case when $\\sigma_{ess} (A) \\cap (\\lambda, \\infty) \\neq \\emptyset$ we set by definition $\\# (\\textrm{evs}(A) > \\lambda) = \\infty$.\nThe versions of this definition using other relation symbols like $<, \\geq, \\leq$ are self--explanatory.\nThe linear space of bounded\noperators on a Hilbert space $\\mathcal{H}$ is always denoted as $\\mathfrak{B}(\\mathcal{H})$. For a self--adjoint operator $A$ the notation $A \\ngeqq 0$\nmeans that there exists $f_0 \\in D(A)$ such that $(f_0, Af_0) < 0$. $\\|A \\|_{HS}$ denotes the Hilbert--Schmidt norm of the Hilbert--Schmidt operator $A$. The notation\n$f \\in L^\\infty_\\infty (\\mathbb{R}^n)$ means that $f: \\mathbb{R}^n \\to \\mathbb{C}$ is a bounded Borel function going to zero at infinity.\n\n\n\n\\section{Theorem of Vugal'ter and Zhislin}\\label{1.08;33}\n\nWe consider the Schr\\\"odinger operator of $N$ particles in $\\mathbb{R}^3$\n\\begin{equation}\\label{30.07;8}\n H = H_0 + \\sum_{1 \\leq i< j \\leq N} v_{ij} ,\n\\end{equation}\nwhere $H_0$ is the kinetic energy operator with the center of mass removed and\n$v_{ij}$ are operators of multiplication by $v_{ij} (r_i -r_j)$. Here and further $r_i \\in \\mathbb{R}^3$ and $m_i \\in \\mathbb{R}_+ \/\\{0\\}$ always denote particle position vectors and masses. \nFor pair--interactions we shall require (throughout this section) that $(v_{ij})_+ \\in L^2 (\\mathbb{R}^3) + L^\\infty_\\infty (\\mathbb{R}^3)$ and \n$(v_{ij})_- \\in L^3 (\\mathbb{R}^3) \\cap L^{ 3\/2 - \\alpha} (\\mathbb{R}^3)$, where $\\alpha \\in (0, 1\/2)$ has a fixed value throughout this section. 
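Let us record a simple consequence of these assumptions (a standard interpolation inequality), which is useful for the estimates below: for every $p\\in[3\/2-\\alpha,3]$ we have $\\bigl(v_{ij}\\bigr)_-\\in L^p (\\mathbb{R}^3)$, since with $\\theta\\in[0,1]$ defined by $\\frac 1p=\\frac{\\theta}{3\/2-\\alpha}+\\frac{1-\\theta}{3}$\n$$\n \\bigl\\|\\bigl(v_{ij}\\bigr)_-\\bigr\\|_p\\leq \\bigl\\|\\bigl(v_{ij}\\bigr)_-\\bigr\\|_{3\/2-\\alpha}^{\\theta}\\,\\bigl\\|\\bigl(v_{ij}\\bigr)_-\\bigr\\|_{3}^{1-\\theta}.\n$$\nIn particular, $\\bigl(v_{ij}\\bigr)_-\\in L^2 (\\mathbb{R}^3)$, so each $v_{ij}=\\bigl(v_{ij}\\bigr)_+-\\bigl(v_{ij}\\bigr)_-$ belongs to $L^2 (\\mathbb{R}^3) + L^\\infty_\\infty (\\mathbb{R}^3)$.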
\nBy the Kato--Rellich theorem \\cite{kato,teschl}\n$H$ is self--adjoint on $D(H_0) = \\mathcal{H}^2 (\\mathbb{R}^{3N-3}) \\subset L^2 (\\mathbb{R}^{3N-3})$ (the symbol $\\mathcal{H}^2$ denotes the corresponding Sobolev space \\cite{teschl,liebloss}).\nThe conditions on pair--interactions guarantee that for all $\\gamma \\in [0, \\alpha(3-2\\alpha)^{-1}]$ there is a constant $c_\\gamma \\in \\mathbb{R}_+$ such that \n\\begin{equation}\\label{30.07;4}\n\\left[ \\int \\frac{\\bigl(v_{ij}(r)\\bigr)_- \\bigl(v_{ij}(r')\\bigr)_- } {|r-r'|^{2 - 8\\gamma}} d^3r d^3 r' \\right]^{1\/2} < c_\\gamma \\quad \\quad (1 \\leq i < j \\leq N), \n\\end{equation}\nwhereby the constant $c_\\gamma$ can be determined from the Hardy--Littlewood--Sobolev inequality \\cite{liebloss}. \nIncidentally, $c_0$ is the Rollnik norm of $\\bigl(v_{ij}\\bigr)_- $ \\cite{quadrforms}. \n\n\nSuppose all particles are partitioned into two non--empty clusters\n$\\mathfrak{C}_1$ and $\\mathfrak{C}_2$ each containing $\\# \\mathfrak{C}_1$ and\n$\\# \\mathfrak{C}_2$ particles respectively. Then we can write\n\\begin{gather}\n H = h_{\\mathfrak{C}_1 \\mathfrak{C}_2} - \\Delta_R + V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^+ - V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^- , \\label{31.07;4}\\\\\nV_{\\mathfrak{C}_1 \\mathfrak{C}_2}^\\pm := \\sum_{\\substack{i \\in \\mathfrak{C}_1 \\\\ j \\in \\mathfrak{C}_2 }}\n\\bigl(v_{ij}\\bigr)_\\pm , \\label{31.07;22}\n\\end{gather}\nwhere $h_{\\mathfrak{C}_1 \\mathfrak{C}_2}$ is the Hamiltonian of internal motion in the clusters, $R \\in \\mathbb{R}^3$ is a vector pointing from the\ncenter of mass of $\\mathfrak{C}_1$ to the center of mass of $\\mathfrak{C}_2$ (for convenience we set in (\\ref{31.07;4}) the coefficient in front of the Laplace operator\nequal to unity). $V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^+ $ (resp. $V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^- $) are the sums of positive (resp. negative) parts of\ninteractions between the clusters. For each partition we define\n\\begin{gather}\n H(\\lambda) := h_{\\mathfrak{C}_1 \\mathfrak{C}_2} - \\Delta_R + V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^+ - \\lambda V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^- \\quad \\quad (\\lambda \\geq\n1), \\label{31.07;12}\\\\\n E_{thr} (\\lambda) := \\inf \\sigma_{ess} \\bigl( H (\\lambda)\\bigr), \\label{30.07;9}\n\\end{gather}\nwhere, clearly, $H = H(1)$. Our aim in this section is to prove the following\n\\begin{theorem}\\label{30.07;1}\n Let $H$ be defined as in (\\ref{30.07;8}). Suppose that there exists a partition into two clusters $\\mathfrak{C}_{1,2}$ and $\\lambda_0 > 1$ such that\n$E_{thr}(\\lambda_0)=\\inf \\sigma_{ess} (H)$, where $E_{thr}(\\lambda_0)$ is defined in (\\ref{30.07;9}). 
Then $\\#(\\textnormal{evs}\\:(H) < \\inf \\sigma_{ess} (H)) < \\infty$.\n\\end{theorem}\nThe proof would be given later in this section, where at the end we would also\ndemonstrate that in the case when $\\inf \\sigma_{ess} (H) = 0$ Theorem~1.3 in \\cite{vugzhisl1} (restricted to the class of potentials discussed here)\nfollows from Theorem~\\ref{30.07;1}.\n\n\n\nLet us introduce the following operator function\n\\begin{equation}\n D (\\gamma, \\epsilon) := \\bigl[H_w + \\epsilon \\bigr]^{-1\/2-\\gamma} (V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-)^{1\/2} ,\n\\end{equation}\nwhere $\\gamma\\geq 0$, $\\epsilon >0$ and for a shorter notation we set\n\\begin{equation}\nH_w := (h_{\\mathfrak{C}_1 \\mathfrak{C}_2} - E_{thr}(1)) -\n\\Delta_R + V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^+ .\n\\end{equation}\nThe relevant properties of $D (\\gamma, \\epsilon)$ are established\nin the following two lemmas.\n\\begin{lemma}\\label{31.07;1}\nFor all $\\gamma \\in [0, \\alpha(3-2\\alpha)^{-1}]$ \n\\begin{equation}\n\\Lambda_\\gamma := \\sup_{\\epsilon>0} \\left\\| D(\\gamma, \\epsilon)\\right\\| < \\infty\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nNote that for a self--adjoint operator $A \\geq 0$ and $\\epsilon, p >0$ we have\n\\begin{equation}\\label{30.07;10}\n (A+ \\epsilon)^{-p} = [p\\Gamma(p)]^{-1} \\int_0^\\infty \\exp\\{-t^{1\/p}(A+ \\epsilon)\\} dt ,\n\\end{equation}\nwhere the integral is to be understood in the strong sense. Eq.~(\\ref{30.07;10}) can be easily verified using the spectral theorem.\nTherefore, for any $f \\in L^2 (\\mathbb{R}^{3N-3}), \\gamma \\geq 0$\n\\begin{equation}\n \\bigl(H_w + \\epsilon \\bigr)^{-1\/2-\\gamma} f = \\left[\\left(1\/2 +\n\\gamma\\right)\\Gamma \\left(1\/2 + \\gamma\\right)\\right]^{-1} \\int_0^\\infty\n \\exp\\{-t^{\\frac 2{1+2\\gamma}}(H_w + \\epsilon)\\} f dt\n\\end{equation}\nThe operator under the integral is positivity preserving \\cite{lpestim}, which\nallows us to write\n\\begin{equation}\\label{3.8;1}\n \\left| \\bigl(H_w + \\epsilon \\bigr)^{-1\/2-\\gamma} f \\right| \\leq\n\\left[\\left(1\/2 + \\gamma\\right)\\Gamma \\left(1\/2 +\n\\gamma\\right)\\right]^{-1} \\int_0^\\infty\n \\exp\\{-t^{\\frac 2{1+2\\gamma}}(H_w + \\epsilon)\\} |f| dt\n\\end{equation}\nBy the Lie--Trotter product formula (see Sec. VIII of vol.~1 in \\cite{reed})\n\\begin{equation}\ne^{-t(H_w + \\epsilon)} = \\textnormal{s--}\\negmedspace\\lim_{\\negthickspace\n\\negthickspace \\negthickspace n \\to \\infty} \\left( e^{-(t\/n)(H'_w +\n\\epsilon)}e^{-(t\/n)V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^+}\\right)^n ,\n\\end{equation}\nwhere $H'_w := (h_{\\mathfrak{C}_1 \\mathfrak{C}_2} - E_{thr}(1)) -\\Delta_R$. This gives us the inequality\n\\begin{equation}\n \\left| \\bigl(H_w + \\epsilon \\bigr)^{-1\/2-\\gamma} f \\right| \\leq \\bigl(H'_w +\n\\epsilon \\bigr)^{-1\/2-\\gamma} |f| .\n\\end{equation}\nHence,\n\\begin{gather}\n\\Lambda_\\gamma \\leq \\sup_{\\epsilon>0} \\left\\| \\bigl(H'_w + \\epsilon\n\\bigr)^{-1\/2-\\gamma} \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\| \\nonumber\\\\\n = \\sup_{\\epsilon>0} \\left\\|\n\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2} \\bigl(H'_w + \\epsilon \\bigr)^{-1-2\\gamma}\n\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\|^{1\/2} . \\label{31.07;5}\n\\end{gather}\nBy the spectral theorem\n\\begin{equation}\\label{27.01\/1}\n \\bigl(H'_w + \\epsilon \\bigr)^{-1-2\\gamma} \\leq \\bigl(-\\Delta_R + \\epsilon\n\\bigr)^{-1-2\\gamma} . 
\n\\end{equation}\n(here we benefit from the fact that $[h_{\\mathfrak{C}_1 \\mathfrak{C}_2}, -\\Delta_R] = 0$). Using\n(\\ref{27.01\/1}) we get from (\\ref{31.07;5})\n\\begin{gather}\n\\Lambda_\\gamma^2 \\leq \\sup_{\\epsilon>0} \\left\\| \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\n\\bigl(-\\Delta_R + \\epsilon \\bigr)^{-1-2\\gamma} \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\|\n\\nonumber \\\\\n= \\sup_{\\epsilon>0} \\left\\| \\bigl(-\\Delta_R + \\epsilon \\bigr)^{-1\/2-\\gamma}\nV_{\\mathfrak{C}_1 \\mathfrak{C}_2}^- \\bigl(-\\Delta_R + \\epsilon \\bigr)^{-1\/2-\\gamma} \\right\\|\\nonumber\\\\\n\\leq \\bigl(\\# \\mathfrak{C}_1 \\bigr) \\bigl(\\# \\mathfrak{C}_2 \\bigr) \\max_{i \\in\n\\mathfrak{C}_1 , j \\in \\mathfrak{C}_2} \\sup_{\\epsilon>0} \\left\\|\n\\bigl(v_{ij}\\bigr)_-^{1\/2} \\bigl(-\\Delta_R + \\epsilon \\bigr)^{-1-2\\gamma}\n\\bigl(v_{ij}\\bigr)_-^{1\/2}\\right\\| . \\label{31.07;6}\n\\end{gather}\nNote that for $i \\in \\mathfrak{C}_1 , j \\in \\mathfrak{C}_2$ we can write $r_j\n- r_i = R + \\sum_k \\mathbf{m}^{(ij)}_k x_k$, where $\\mathbf{m}^{(ij)}_k $ are\nreal\ncoefficients depending on masses and $x_k \\in \\mathbb{R}^3$ for $k = 1, \\ldots, N-2$ are intercluster coordinates. (It is easy to see that the coefficient in\nfront of $R$ is always 1 by fixing all $|x_k|$ and taking $|R|\\gg 1$).\nThus we can trivially estimate the norm on the rhs of (\\ref{31.07;6})\n\\begin{equation}\n \\left\\| \\bigl(v_{ij}\\bigr)_-^{1\/2} \\bigl(-\\Delta_R + \\epsilon\n\\bigr)^{-1-2\\gamma} \\bigl(v_{ij}\\bigr)_-^{1\/2}\\right\\|^2 \\leq \\int \\bigl(v_{ij}(R)\\bigr)_-\nG_\\gamma^2 (\\epsilon; R-R') \\bigl(v_{ij}(R')\\bigr)_- d^3R d^3R' , \\label{31.07;7}\n\\end{equation}\nwhere $G_\\gamma (\\epsilon; R-R')$ is the integral kernel of the operator\n$\\bigl(-\\Delta_R + \\epsilon \\bigr)^{-1-2\\gamma}$ on $L^2(\\mathbb{R}^3)$, which is positive \\cite{reed,lpestim}.\nUsing\n(\\ref{30.07;10}) and the formula on p. 59 in \\cite{reed}, vol. II (c.f . the formula on the top of\npage 262 in \\cite{klaus1}) this integral kernel can be written\nas\n\\begin{gather}\n G_\\gamma (\\epsilon; R) = (4\\pi)^{-3\/2} [p\\Gamma(p)]^{-1} \\int_0^\\infty t^{-\\frac{3}{2p}}\n\\exp\\{-\\epsilon t^{\\frac 1p} -2^{-2}|R|^2 t^{-\\frac 1p}\\} dt\\nonumber\\\\\n\\leq (4\\pi)^{-3\/2} [p\\Gamma(p)]^{-1} \\frac{8p2^{-2p}}{|R|^{3-2p}} \\int_0^\\infty y^{-p + \\frac 12}e^{-y} =\\frac{2^{-2p}\\Gamma( 3\/2 -p )}{\\pi^{3\/2}\\Gamma(p)} \\frac{1}{|R|^{3-2p}} , \\label{2.10;1}\n\\end{gather}\nwhere $p= (1+2\\gamma)$ and in the integral we used the substitution \n$y = |R|^2 \/(4 t^{1\/p})$. \nSubstituting this upper bound into (\\ref{31.07;7}) and using (\\ref{30.07;4}) finishes the proof. \n\\end{proof}\nLet us remark that \n\\begin{equation}\\label{5.10;1}\n \\sup_{\\epsilon>0} \\left\\| \\bigl[H_w + \\epsilon \\bigr]^{-1\/2} (V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-)\\right\\| < \\infty. \n\\end{equation}\nEq.~(\\ref{5.10;1}) follows from the proof of Lemma~\\ref{31.07;1} since for all $1 < i \\leq j \\leq N$ \n\\begin{equation}\\label{5.10;2}\n \\sup_{\\epsilon >0} \\left\\|\\bigl(v_{ij}(r)\\bigr)_- \\bigl(-\\Delta_r + \\epsilon\\bigr)^{- \\frac 12}\\right\\| < \\infty , \n\\end{equation}\nwhere the norm on the rhs is that of $L^2 (\\mathbb{R}^3)$. 
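More precisely (a sketch): repeating the estimates (\\ref{3.8;1})--(\\ref{27.01\/1}) with $\\gamma=0$ and with $V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-$ in place of $\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}$ gives\n$$\n \\sup_{\\epsilon>0} \\left\\| \\bigl[H_w + \\epsilon \\bigr]^{-1\/2} (V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-)\\right\\|\\leq\\sum_{i\\in\\mathfrak{C}_1,\\,j\\in\\mathfrak{C}_2}\\,\\sup_{\\epsilon>0} \\left\\|\\bigl(v_{ij}(r)\\bigr)_- \\bigl(-\\Delta_r + \\epsilon\\bigr)^{- \\frac 12}\\right\\|,\n$$\nso that (\\ref{5.10;1}) is indeed an immediate consequence of (\\ref{5.10;2}).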
To check that (\\ref{5.10;2}) holds note that \n$(-\\Delta_r + \\epsilon)^{-1\/2} f = h * f$ for any $f \\in L^2 (\\mathbb{R}^3)$, where by (\\ref{2.10;1}) \n\\begin{equation}\n h(r) \\leq \\frac 1{2 \\pi^2 |r|^2} \\int_0^\\infty e^{-y} dy = \\frac 1{2 \\pi^2 |r|^2} \n\\end{equation}\nBy the Sobolev's inequality (eq.~(4.2) in \\cite{traceideals}) \n\\begin{equation}\\label{9.10;1}\n \\left\\|\\bigl(v_{ij}(r)\\bigr)_- \\bigl(-\\Delta_r + \\epsilon\\bigr)^{- \\frac 12}\\right\\| \\leq \\left(\\frac{2}{3\\pi^2}\\right)^{2\/3} C_3\\|\\bigl(v_{ij}\\bigr)_-\\|_3 \n\\end{equation}\n(for an estimation of the constant $C_3$ see f. e. \\cite{liebloss}, where this inequality is called the weak Young inequality). \n\\begin{lemma}\\label{31.07;2}\nThe function $D(0, \\epsilon)$ is norm--continuous for $\\epsilon >0 $ and has a norm limit for $\\epsilon\\to +0$.\n\\end{lemma}\n\\begin{proof}\n$D(0, \\epsilon)$ is uniformly norm--bounded for $\\epsilon >0$ by Lemma~\\ref{31.07;1}. Below we prove the following\ninequality\n\\begin{equation} \n \\left\\| D(0, \\epsilon_2 ) - D(0, \\epsilon_1)\\right\\| \\leq \\frac{\\Lambda_{\\gamma_0} |\\epsilon_1 -\n\\epsilon_2|^{1\/2}}{\\bigl(\\max \\{\\epsilon_1, \\epsilon_2\\} \\bigr)^{1\/2 - \\gamma_0}} \\quad \\quad (\\textnormal{for\n$\\epsilon_{1,2} >0$}), \\label{xwdcont}\n\\end{equation}\nwhere $\\gamma_0 := \\alpha(3-2\\alpha)^{-1}$ and $\\Lambda_{\\gamma_0} $ is defined in Lemma~\\ref{31.07;1}. From (\\ref{xwdcont}) it follows that the\nnorm limit\n$D(0, 0):= \\lim_{\\epsilon\\to +0} D(0, \\epsilon)$ exists because it is a\nnorm--limit of a Cauchy sequence (the norm--continuity \nis also a trivial consequence of (\\ref{xwdcont})). For $\\epsilon_2 \\geq \\epsilon_1 >0$ we can write\n\\begin{gather}\nD(0, \\epsilon_1) - D(0, \\epsilon_2) = \\Bigl[ \\bigl(H_w + \\epsilon_2\\bigr)^{1\/2}\n-\n\\bigl(H_w + \\epsilon_1\\bigr)^{1\/2} \\Bigr] \\bigl(H_w + \\epsilon_1 \\bigr)^{-1\/2}\n\\bigl(H_w + \\epsilon_2\\bigr)^{-1\/2}\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2} \\nonumber\\\\\n= \\Bigl[ \\bigl(H_w + \\epsilon_2\\bigr)^{1\/2} -\n\\bigl(H_w + \\epsilon_1\\bigr)^{1\/2} \\Bigr] \\bigl(H_w + \\epsilon_2\\bigr)^{-1\/2 +\n\\gamma_0} \\bigl(H_w + \\epsilon_1 \\bigr)^{-1\/2} \\bigl(H_w +\n\\epsilon_2\\bigr)^{-\\gamma_0} \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}. 
\\label{xwnonu}\n\\end{gather}\nThe following inequality for $f \\in D(H_0)$ is a trivial consequence of the spectral theorem\n\\begin{equation}\n \\left\\| \\Bigl[ \\bigl(H_w + \\epsilon_2\\bigr)^{1\/2} -\n\\bigl(H_w + \\epsilon_1\\bigr)^{1\/2} \\Bigr] f \\right\\| \\leq |\\epsilon_2 -\n\\epsilon_1|^{1\/2} \\|f\\| .\n\\end{equation}\nBy the same reasons\n\\begin{equation}\n \\| \\bigl(H_w + \\epsilon_2 \\bigr)^{-1\/2 + \\gamma_0} \\| \\leq |\\epsilon_2|^{-1\/2 +\n\\gamma_0}.\n\\end{equation}\nThus from (\\ref{xwnonu})\n\\begin{equation}\n \\left\\| D(0, \\epsilon_2 ) - D(0, \\epsilon_1)\\right\\| \\leq |\\epsilon_2 -\n\\epsilon_1|^{1\/2} |\\epsilon_2|^{-1\/2 + \\gamma_0} \\left\\| \\bigl(H_w + \\epsilon_1\n\\bigr)^{-1\/2} \\bigl(H_w + \\epsilon_2\\bigr)^{-\\gamma_0}\n\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\| .\\label{xwbotw2}\n\\end{equation}\nFor the norm on the rhs we can write\n\\begin{gather}\n \\left\\| \\bigl(H_w + \\epsilon_1 \\bigr)^{-1\/2} \\bigl(H_w +\n\\epsilon_2\\bigr)^{-\\gamma_0} \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\|^2 \\nonumber\\\\\n= \\left\\| \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2} \\bigl(H_w + \\epsilon_1\\bigr)^{-1\/2} \\bigl(H_w +\n\\epsilon_2 \\bigr)^{-2\\gamma_0} \\bigl(H_w + \\epsilon_1\\bigr)^{-1\/2}\n\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\| \\nonumber\\\\\n\\leq \\left\\| \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2} \\bigl(H_w + \\epsilon_1 \\bigr)^{-1-2\\gamma_0}\n\\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\| = \\left\\| \\bigl(H_w + \\epsilon_1\n\\bigr)^{-1\/2-\\gamma_0} \\bigl(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-\\bigr)^{1\/2}\\right\\|^2 \\leq \\Lambda^2_{\\gamma_0}\n\\label{xwbotw}.\n\\end{gather}\nwhere we used the inequality $ \\bigl(H_w + \\epsilon_2 \\bigr)^{-2\\gamma_0} \\leq\n\\bigl(H_w + \\epsilon_1 \\bigr)^{-2\\gamma_0}$,\nwhich again follows from the spectral theorem.\nNow (\\ref{xwdcont}) follows from (\\ref{xwbotw2}) and (\\ref{xwbotw}).\n\\end{proof}\n\n\n\nThe following lemma is trivial.\n\\begin{lemma}\\label{30.07;2}\n Let $A$ be a Hilbert--Schmidt operator on a Hilbert space $\\mathcal{H}$.\nSuppose there is $\\delta >0$ and\nan orthonormal set $\\varphi_1 , \\ldots, \\varphi_n \\in \\mathcal{H}$ such that\n$|(\\varphi_i, A\\varphi_i)| \\geq \\delta$ for $i = 1,\\ldots, n$.\nThen $n \\leq \\|A\\|_{HS}^2\/\\delta^2$.\n\\end{lemma}\n\\begin{proof}\nFrom Lemma's conditions it follows that\n\\begin{equation}\n \\delta^2 \\leq |(\\varphi_i , A\\varphi_i)|^2 \\leq \\|A \\varphi_i\\|^2 = (\\varphi_i\n, A^* A \\varphi_i)\n\\end{equation}\nFrom the last inequality it follows that\n\\begin{equation}\n n \\delta^2 \\leq \\sum_{i=1}^n (\\varphi_i , A^* A \\varphi_i) \\leq \\|A\\|_{HS}^2 .\n\\end{equation}\n\\end{proof}\n\nNow we pass to the\n\\begin{proof}[Proof of Theorem~\\ref{30.07;1}]\nThe Hamiltonian in (\\ref{31.07;12}) has the form $H(\\lambda) = H_w - \\lambda V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-$. \nBefore we apply Theorem~\\ref{31.07;16} we need to verify its conditions. Note that $D(V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-)\\supseteq D(H_w^{\\frac 12})$ due to (\\ref{5.10;1}) \n(see a first Remark in Sec.~\\ref{31.07;14}). Besides, $H_w + \\mu V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^-$ is self--adjoint on $D(H_0) = D(H_w)$ for all $\\mu \\in [0, \\infty)$. 
\n\nThe\nassociated BS operator for $\\epsilon >0$ is given by \n\\begin{equation}\n K(\\epsilon)= \\bigl(H_w + \\epsilon \\bigr)^{-1\/2} V_{\\mathfrak{C}_1 \\mathfrak{C}_2}^- \\bigl(H_w + \\epsilon\n\\bigr)^{-1\/2} = D(0, \\epsilon) D^* (0, \\epsilon).\n\\end{equation}\nFrom Lemmas~\\ref{31.07;1}, \\ref{31.07;2} it follows that $K(\\epsilon)$ is norm--continuous on $[0, \\infty)$. \nUsing that\n$\\sigma_{ess} \\bigl( H(\\lambda_0)\\bigr) = [E_{thr}(1), \\infty)$ and Theorem~\\ref{31.07;16} we\nconclude that\n\\begin{equation}\\label{xwthisholds}\n \\sigma_{ess} \\bigl( K(\\epsilon)\\bigr) \\cap \\left(\\lambda^{-1}_0 , \\infty\\right)\n= \\emptyset\n\\end{equation}\nfor $\\epsilon >0$.\nBy Theorem~9.5 in \\cite{weidmann} (\\ref{xwthisholds}) holds also for $\\epsilon =\n0$. Hence, due to $\\lambda_0 >1$ there must exist $\\delta > 0$ such that\n\\begin{equation}\\label{xwthisholds4}\n K(0) \\leq 1-2\\delta + \\mathcal{C}_f ,\n\\end{equation}\nwhere $\\mathcal{C}_f$ is a finite rank operator. Now let us assume by\ncontradiction that $\\#(\\textnormal{evs}\\:(H(1)) < E_{thr}(1))$ is infinite. Then by the\nBS principle\n\\begin{equation}\\label{xwthisholds2}\n \\lim_{\\epsilon \\to + 0} \\#\\bigl(\\textnormal{evs}\\:(K(\\epsilon)) > 1 \\bigr) \\to \\infty .\n\\end{equation}\nDue to the norm--continuity of $K(\\epsilon)$ from (\\ref{xwthisholds2}) it follows that for any\n$n = 1, 2, \\ldots$ one can find an orthonormal set $\\psi_1, \\psi_2 , \\ldots ,\n\\psi_n \\in L^2 (\\mathbb{R}^{3N-3})$\nsuch that\n\\begin{equation}\\label{xwthisholds3}\n \\bigl( \\psi_i, K(0)\\psi_i\\bigr) \\geq 1-\\delta \\quad \\quad (i = 1, 2, \\ldots, n).\n\\end{equation}\nThen (\\ref{xwthisholds4}) gives\n\\begin{equation}\n \\bigl( \\psi_i, \\mathcal{C}_f \\psi_i\\bigr) \\geq \\delta \\quad \\quad (i = 1, 2, \\ldots, n).\n\\end{equation}\nBy Lemma~\\ref{30.07;2} $n \\leq \\|\\mathcal{C}_f \\|^2_{HS}\/\\delta^2$,\nwhich contradicts $n$ being arbitrary positive integer.\n\\end{proof}\n\nLet us briefly show that Theorem~1.3 in \\cite{vugzhisl1} follows from Theorem~\\ref{30.07;1}. Suppose that $d_1 \\subset \\{1, \\ldots, N\\}$ and $d_2 \\subset \\{1, \\ldots, N\\}$ are\ntwo nonempty disjoint clusters. Similar to (\\ref{31.07;12})--(\\ref{30.07;9}) the Hamiltonian of the subsystem $d_1 \\cup d_2$ can be written as\n\\begin{equation}\\label{31.07;21}\n H_{d_1 d_2} (\\lambda) := h_{d_1 d_2} - \\Delta_{r_{12}} + V_{d_1 d_2}^+ - \\lambda V_{d_1 d_2}^- ,\n\\end{equation}\nwhere $r_{12} \\in \\mathbb{R}^3$ points from the center of mass of $d_1$ in the direction to the center of mass of $d_2$; the scale is chosen so as to make\n(\\ref{31.07;21}) hold. The meaning of other notations is clear from (\\ref{31.07;4})--(\\ref{31.07;22}).\nThe Hamiltonian (\\ref{31.07;21}) acts on $L^2 (\\mathbb{R}^n)$, where $n = 3 \\bigl( (\\# d_1) + (\\# d_2) - 1\\bigr)$.\nPutting aside the comparison of restrictions on the potentials, Theorem~1.3\nin \\cite{vugzhisl1} can be equivalently reformulated as follows\n\\begin{theorem}[S. Vugal'ter and G. Zhislin 1986]\n Suppose the Hamiltonian in (\\ref{30.07;8}) is such that $\\sigma_{ess} (H) = [0, \\infty)$. Suppose there exists a partition in two clusters $\\mathfrak{C}_{1,2}$ \nwhere for all \nsubsystems $d_1 \\cup d_2$ such that $d_1 \\subseteq \\mathfrak{C}_1$, $d_2 \\subseteq \\mathfrak{C}_2$ and $(\\#d_1) + (\\#d_2) \\leq N-1$,\nthe Hamiltonian $H_{d_1 d_2} (1)$ does not have a virtual level at zero energy. 
Then $\\#(\\textnormal{evs}\\:(H) < 0) < \\infty$.\n\\end{theorem}\n\\begin{proof}\nWe need to show that the conditions of Theorem~\\ref{30.07;1} are fulfilled. We say that the subsystem $d_1 \\cup d_2$ is at critical coupling if\n$H_{d_1 d_2} (1) \\geq 0$ and $H_{d_1 d_2} (1+ \\epsilon) \\ngeqq 0$ for $\\epsilon >0$ (this differs from Definition~1 in \\cite{jpasubmit}; for the definition of virtual\nlevel see, for example, Definition~3 in \\cite{jpasubmit}). By the HVZ theorem the conditions of Theorem~\\ref{30.07;1} are verified if we prove that\n$d_1 \\cup d_2$ is not at critical coupling for all $d_{1,2}$ described in the conditions of the theorem being proved. Assume,\nby contradiction, that there exist $d_{1,2}$ such that $H_{d_1 d_2}$ is at critical coupling.\nWithout loss of generality we can assume that the subsystems $d'_1 \\cup d'_2$, where\n$d'_1 \\subseteq d_1$, $d'_2 \\subseteq d_2$ and $(\\#d'_1) + (\\#d'_2) < (\\#d_1) + (\\#d_2) $ are \\textit{not} at critical\ncoupling (otherwise we can pass to an appropriate smaller subsystem).\nThus there must exist $\\omega > 0$ such that $\\sigma_{ess} \\bigl( H_{d_1 d_2} (1+\\omega)\\bigr) = [0, \\infty)$.\nFor $d_1 \\cup d_2$ we construct the BS operator\n\\begin{equation}\n K(\\epsilon) := \\left[h_{d_1d_2} - \\Delta_{r_{12}} + V_{d_1 d_2}^+ + \\epsilon \\right]^{-\\frac 12} V_{d_1 d_2}^-\n\\left[h_{d_1d_2} - \\Delta_{r_{12}} + V_{d_1 d_2}^+ + \\epsilon \\right]^{-\\frac 12} . \n\\end{equation}\nBy the analysis above $K(\\epsilon)$ is positivity preserving and $K(\\epsilon) \\to K(0)$ in norm, where $K(0)$ is a positivity preserving operator as well. \nBy Theorem~\\ref{31.07;16} and Theorem~9.5 in \\cite{weidmann}\n$ \\sigma_{ess} \\bigl(K(0)\\bigr) \\subseteq [0 ,(1+\\omega)^{-1}]$.\nBecause $d_1 \\cup d_2$ is at critical coupling we have $\\|K(0)\\| = 1$ (this follows from the BS principle and norm--continuity of $K(\\epsilon)$).\nDue to the location of the essential spectrum $\\|K(0)\\|$ is an eigenvalue. \nThus there exists $\\phi_0 \\in L^2 (\\mathbb{R}^n)$ such that $K(0)\\phi_0 = \\phi_0$ and $\\phi_0 >0$, see \\cite{reed}. From this fact and\nfrom the variational principle it follows that for all $a >0$ there exists $\\epsilon > 0$ such that $\\|K' (\\epsilon,a) \\| >1$, where\n\\begin{gather}\n K' (\\epsilon,a) := \\left[h_{d_1d_2} - \\Delta_{r_{12}} + V_{d_1 d_2}^+ + \\epsilon \\right]^{-\\frac 12} \\left\\{ V_{d_1 d_2}^- + ae^{-|x|} \\right\\}\n\\left[h_{d_1d_2} - \\Delta_{r_{12}} + V_{d_1 d_2}^+ + \\epsilon \\right]^{-\\frac 12}.\n\\end{gather}\nBy the BS principle this means that $H_{d_1 d_2} - ae^{-|x|} \\ngeq 0$ for any $a>0$. Now it is clear that $H_{d_1 d_2}$ has a virtual level at zero energy, contrary to the conditions of the\ntheorem.\n\\end{proof}\n\n\n\n\\section{Main Result and Discussion}\\label{25.9;15}\n\nWe shall consider the Hamiltonian of $N$ particles in $\\mathbb{R}^3$\n\\begin{gather}\n H = H_0 + V \\label{1.08;3}\\\\\nV = \\sum_{i< k} v_{ik}, \\quad\\quad \\quad v_{ik} \\in L^1 (\\mathbb{R}^3) \\cap L^3 (\\mathbb{R}^3) \\label{1.08;4},\n\\end{gather}\nwhere $H_0$ is the kinetic energy operator with the center of mass removed and\n$v_{ik}$ are operators of multiplication by $v_{ik} (r_i - r_k)$. 
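\nTo illustrate the class of pair potentials admitted in (\\ref{1.08;4}) (the example is purely illustrative and is not used in what follows), besides any bounded compactly supported potential one may take a cut--off power--law well\n\\begin{equation}\n v_{ik} (r) = -g |r|^{-\\beta} \\chi_{\\{|r| \\leq 1\\}}(r) \\quad \\quad (g > 0, \\; 0 \\leq \\beta < 1) ,\n\\end{equation}\nsince then $\\int |v_{ik}| d^3 r = 4\\pi g \\int_0^1 r^{2-\\beta} dr < \\infty$ and $\\int |v_{ik}|^3 d^3 r = 4\\pi g^3 \\int_0^1 r^{2-3\\beta} dr < \\infty$, so that $v_{ik} \\in L^1 (\\mathbb{R}^3) \\cap L^3 (\\mathbb{R}^3)$; for $\\beta = 1$ the second integral diverges logarithmically and the requirement $v_{ik} \\in L^3 (\\mathbb{R}^3)$ fails.\n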
\nBy Kato--Rellich's theorem \\cite{kato,teschl}\n$H$ is self--adjoint on $D(H_0) = \\mathcal{H}^2 (\\mathbb{R}^{3N-3}) \\subset L^2 (\\mathbb{R}^{3N-3})$.\n\nFor an ordered multi--index $\\pmb{i} = \\{i_1, \\ldots , i_s\\}$, where $1 \\leq s \\leq N$ and $1 \\leq i_1 < i_2 < \\cdots < i_s \\leq N$ let us define the set\n$S_{\\pmb{i}} = \\{1, 2, \\ldots, N\\} \/ \\{i_1, \\ldots , i_s\\}$, which is the subsystem containing $N - \\# \\pmb{i}$ particles.\nThe sums of pair--interactions for the subsystem $S_{\\pmb{i}}$ are defined as\n\\begin{gather}\n V_{\\pmb{i}} := \\sum_{\\substack{j < k\\\\ j,k \\in S_{\\pmb{i}}}} v_{jk} \\label{7.8;1a} \\\\\nV^{\\pm}_{\\pmb{i}} := \\sum_{\\substack{j < k\\\\ j,k \\in S_{\\pmb{i}}}} (v_{jk})_\\pm \\label{7.8;1}\n\\end{gather}\nIn particular, $V_{\\{j\\}}$ is the sum of pair--interactions in\nthe subsystem, where particle $j$ is removed; $V_{\\{j, s\\}}$ is the sum of\npair--interactions in the subsystem,\nwhere particles $j$ and $s$ are removed, etc. Obviously, $V_{\\pmb{i}} = V^{+}_{\\pmb{i}} - V^{-}_{\\pmb{i}}$.\nWe shall make the following assumption\n\\begin{list}{R\\arabic{foo}}\n{\\usecounter{foo}\n \n \\setlength{\\rightmargin}{\\leftmargin}}\n\\item $\\sigma_{ess} (H) = [0, \\infty)$. There exists $\\omega >0$ such that $H_0 + V_{\\{j, s\\}}^+ -\n(1+\\omega)V_{\\{j, s\\}}^- \\geq 0$ for all $1 \\leq j < s \\leq N $.\n\\end{list}\nIn particular, R1 implies that a\nsubsystem containing $N-2$ or less particles is not at critical coupling \\cite{jpasubmit}.\nIn Sec.~\\ref{1.08;2} we prove the following\n\\begin{theorem}\\label{1.08;9}\n Suppose that $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies R1 and $N \\geq 4$. Then\n$\\#(\\textnormal{evs}\\:(H) < 0)$ is finite.\n\\end{theorem}\nThe first thing worth noting is that Theorem~\\ref{1.08;9} does not hold for $N=3$ because\nof the Efimov effect \\cite{vefimov,yafaev,sobolev,merkuriev}.\n\\begin{corollary}\\label{1.08;10}\n Suppose the system of 4 identical particles is described by the Hamiltonian $H$ in (\\ref{1.08;3})--(\\ref{1.08;4}) and $\\sigma_{ess} (H) = [0, \\infty)$.\nThen the number of negative\nenergy bound states is finite.\n\\end{corollary}\n\\begin{proof}\n We need only to check the second part of R1. If one pair of particles would be\nat critical coupling then this would be true for all particle pairs because the\nparticles are\nidentical. Then due to the Efimov effect \\cite{yafaev,tamura} three--particle subsystems would have negative energy bound states thereby violating the condition\n$\\sigma_{ess} (H) = [0, \\infty)$.\nTherefore, none of the particle pairs is at critical coupling and Theorem~\\ref{1.08;9} applies.\n\\end{proof}\nUnfortunately, we did not succeed in extending Corollary~\\ref{1.08;10} to $N\\geq 5$. To\nexplain the difficulty let us consider $N=5$. Like in the proof of Corollary~\\ref{1.08;10} we\nconclude that\nnone of the particle pairs is at critical coupling. It can happen, however, that\nall particle triples would be at critical coupling, each of them having a zero\nenergy\nbound state \\cite{1}. It is natural to assume that in most cases this would lead to negative\nenergy bound states in 4--particle subsystems. 
But it is unclear how to prove\nthat even for negative pair--interactions.\n\n It is natural to ask whether instead of assumption R1 in the condition of Theorem~\\ref{1.08;9} one could simply require $\\sigma_{ess} (H) = [0, \\infty)$.\nIn this regard we pose the following conjecture\n\\begin{conjecture}\\label{1.08;14}\n Suppose that in (\\ref{1.08;3})--(\\ref{1.08;4}) $N=4$ and all $v_{ik} \\leq 0$ are bounded and finite--range potentials. Suppose also that $\\sigma_{ess} (H) = [0, \\infty)$,\nthe particle pair $\\{1,2\\}$ and the particle triples $\\{1,2,3\\}$ and $\\{1,2,4\\}$ are at critical coupling (in the sense of Definition~1 in \\cite{jpasubmit}),\nand the subsystems $\\{1,3,4\\}$ and $\\{2,3,4\\}$ do not have zero energy bound states. Then $H$\nhas an infinite number of negative energy bound states.\n\\end{conjecture}\nThe conditions in Conjecture~\\ref{1.08;14} can always be met by appropriate tuning of the coupling constants, see Sec.~6 in \\cite{1}. Note, that by Theorem~3 in \\cite{1}\nnone of the subsystems has zero energy bound states and, therefore, the no--clustering theorem applies (see Theorem~3 in \\cite{jpasubmit}).\nConjecture~\\ref{1.08;14}, if true, would mean the existence of a ``true'' Efimov effect for\nfour particles (``true'' means that it does not trivially reduce to the case of three clusters).\n\n\n\n\n\n\n\n\n\n\nLet us remark that using Theorem~2 from \\cite{jpasubmit} Theorem~\\ref{1.08;9} can be reformulated in the following way\n\\begin{theorem}\\label{1.08;9aa}\n Suppose that $N \\geq 4$ and $H$ in (\\ref{1.08;3})--(\\ref{1.08;4}) is such that $\\sigma_{ess} (H) = [0, \\infty)$. Suppose also that\nnone of the particle pairs is at critical coupling and none of the subsystems consisting of $N-2$ or less particles has a square--integrable zero energy ground state.\nThen $H$ has a finite number of bound states with negative energies.\n\\end{theorem}\n\nLet us now discuss how the material is arranged in the next sections. The main\ntool of our analysis is the BS operator, see Sec.~\\ref{31.07;14}. The somewhat uncommon form of the BS operator, which we\nadopt in this paper, has an advantage that the BS operator of an $N$--particle\nsystem can be expressed through the BS operators of the subsystems. In Sec.~\\ref{1.08;1} we\nanalyze the spectrum of the BS operator, which corresponds to the\n$N$--particle system at critical coupling, whose subsystems are not at critical coupling. From Theorem~2 in \\cite{jpasubmit} we know that the Hamiltonian of \nsuch system has eigenvalue equal to zero. Here of special\ninterest is the behavior of the eigenfunction corresponding to the largest positive\neigenvalue of the BS operator. Sec.~\\ref{1.08;2} is devoted to the proof of Theorem~\\ref{1.08;9}.\nFrom Sec.~\\ref{1.08;33} we already know that in proving Theorem~\\ref{1.08;9}\nwe need to focus on the case when some of the\n $N-1$--particle subsystems are at critical coupling (otherwise the proof is\n accomplished by applying Theorem~\\ref{30.07;1}). 
For these subsystems we shall need the results of Sec.~\\ref{1.08;1}.\n\nWe define the BS operator associated with (\\ref{1.08;3}) as\n\\begin{equation}\\label{1.08;35}\n K(\\epsilon) := \\bigl(H_0 + \\epsilon \\bigr)^{-1\/2} V \\bigl(H_0 + \\epsilon\n\\bigr)^{-1\/2} \\quad \\quad (\\epsilon > 0) ,\n\\end{equation}\nThe main object of our interest is\n\\begin{equation}\\label{7.8;55}\n N_\\epsilon := \\# (\\textrm{evs}(H) < -\\epsilon)= \\# (\\textrm{evs}(K(\\epsilon)) >\n1),\n\\end{equation}\nwhere the last equation follows from the BS principle. (The applicability of Theorem~\\ref{31.07;16} can always be checked in the same way it is done \nin the proof of Theorem~\\ref{30.07;1}). \n\\begin{lemma}\\label{1.8;54}\nOne can define $K(0)$ so that $K(\\epsilon)$ is\nnorm--continuous on $[0, \\infty)$.\n\\end{lemma}\n\\begin{proof}\nWe can write\n\\begin{equation}\\label{1.08;32}\n K(\\epsilon) = \\sum_{1 \\leq i < k\\leq N} d_{ik} (\\epsilon) \\textnormal{sign}\\: (v_{ik})\nd^*_{ik} (\\epsilon) ,\n\\end{equation}\nwhere\n\\begin{equation}\n d_{ik} (\\epsilon) := \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} |v_{ik}|^{1\/2} .\n\\end{equation}\nand $\\textnormal{sign}\\: (v_{ik})$ is the operator of multiplication by the sign of $v_{ik}$.\nRepeating the\narguments from the proof of Lemma~\\ref{31.07;2} we prove the following inequality analogous to (\\ref{xwdcont}) (the restrictions on the pair--potentials \nallow us to set $\\gamma_0 = 1\/10$)\n\\begin{equation}\\label{dik}\n \\|d_{ik} (\\epsilon_1) - d_{ik} (\\epsilon_2)\\| \\leq \\Lambda_{\\frac 1{10}} |\\epsilon_2 -\n\\epsilon_1|^{1\/2} \\epsilon_2^{-2\/5},\n\\end{equation}\nwhere $\\epsilon_2 \\geq \\epsilon_1 >0$. From (\\ref{dik}) it\nfollows that $d_{ik} (\\epsilon)$, and, hence, $K(\\epsilon)$ is norm continuous on $[0, \\infty)$. \n\\end{proof}\n\n\\section{N--Particle System at Critical Coupling}\\label{1.08;1}\n\nHere we shall analyze the BS operator (\\ref{1.08;35}) in the case when $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) \nis at critical coupling and has a bound state at zero energy.\nLet us define $\\mu : \\mathbb{R}_+ \\to \\mathbb{R}$ \n\\begin{equation}\\label{1.08;41}\n \\mu(\\epsilon):= \\sup \\sigma\\bigl( K (\\epsilon)\\bigr) .\n\\end{equation}\nWe shall make the following assumption\n\\begin{list}{R\\arabic{foo}}\n{\\usecounter{foo}\n \n \\setlength{\\rightmargin}{\\leftmargin}}\\setcounter{foo}{1}\n\\item $H \\geq 0$. There exists $\\omega >0$ such that $H_0 + V_{\\{j\\}}^+\n- (1+\\omega)V_{\\{j\\}}^- \\geq 0$ for $j = 1, \\ldots, N $. Besides, $H_0 + \\delta V \\ngeqq 0$ for all $\\delta >0$. \n\\end{list}\nBy Theorem~2 in \\cite{jpasubmit} $H$ satisfying R2 has zero as an eigenvalue.\n\\begin{theorem}\\label{3.8;3}\nSuppose $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies R2.\nThen there is $\\pmb{\\epsilon} > 0$ such that $\\mu (\\epsilon)$ defined in (\\ref{1.08;41}) is an eigenvalue of $K(\\epsilon)$ for $\\epsilon \\in [0, \\pmb{\\epsilon}]$.\nAs eigenvalue $\\mu(\\epsilon)$ is isolated and non--degenerate, as a function it is continuous and monotone decreasing on $[0,\n\\pmb{\\epsilon}]$. Besides, $\\mu(0) = 1$, $\\mu(\\pmb{\\epsilon}) \\geq (1+\\omega\/2)^{-1}$ and there is $a_\\mu >0$ such that\n\\begin{equation}\\label{mubound}\n 1 - \\mu(\\epsilon) \\geq a_\\mu \\epsilon \\quad \\quad \\textnormal{for\n$\\epsilon \\in [0, \\pmb{\\epsilon}]$}. \n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nBy R2 and the HVZ theorem $\\sigma_{ess} (H_0 + (1+\\omega) V) = [0, \\infty)$. 
By Theorem~\\ref{31.07;16} and Theorem~9.5 in \\cite{weidmann}\n\\begin{equation}\n \\sigma_{ess} (K(\\epsilon)) \\cap ((1+\\omega)^{-1} , \\infty) = \\emptyset \\quad\n\\quad \\textnormal{for $\\epsilon \\in [0, \\infty)$}.\n\\end{equation}\nOn one hand, from the BS principle and the condition $H \\geq 0$ it follows that\n\\begin{equation}\n \\sigma \\bigl(K(\\epsilon)\\bigr) \\cap (1 , \\infty) = \\emptyset \\quad\n\\quad \\textnormal{for $\\epsilon >0$}.\\label{1.8;51}\n\\end{equation}\nOn the other hand, since $H$ is at critical coupling, there must exist the sequences $\\omega_n \\to +0$, $\\epsilon_n \\to +0$, where $\\omega_n < \\omega$, such that\n$\\inf \\sigma \\bigl(H_0 + (1+\\omega_n)V\\bigr) = - \\epsilon_n$. Hence, by the BS principle\n\\begin{equation}\n \\sigma \\bigl( (1+\\omega_n)K(\\epsilon_n)\\bigr) \\cap (1, \\infty) \\neq \\emptyset. \\label{1.8;52}\n\\end{equation}\nComparing (\\ref{1.8;51}) and (\\ref{1.8;52}) and using the continuity of $K(\\epsilon)$ (Lemma~\\ref{1.8;54}) we conclude\nthat $\\mu(0) = 1$ is an eigenvalue of $K(0)$ lying aside from the essential\nspectrum. (The non--degeneracy of this eigenvalue would easily follow from $V \\leq 0$, for in this case $K(\\epsilon)$\nis a positivity\npreserving operator).\nLet us assume by contradiction that the eigenvalue $\\mu(0)$ is degenerate. Then by\ncontinuity for $\\epsilon_n \\to +0$ there must exist a sequence $\\mu_n \\nearrow\n1$ such that\n$K (\\epsilon_n)$ has at least two eigenvalues in the interval $(\\mu_n, 1)$.\nWe can choose the sequences $\\epsilon_n, \\mu_n$ so that $(1+\\omega)^{-1} < \\mu_n$. Thereby we guarantee that\n\\begin{equation}\\label{23.03:01}\n \\sigma_{ess} \\left(H_0 + V^+ - \\mu_n^{-1} V^- \\right) = [0, \\infty),\n\\end{equation}\nwhere by definition\n\\begin{equation}\n V^{\\pm} := \\sum_{i < j} \\bigl( v_{ij} \\bigr)_\\pm .\n\\end{equation}\nSince $\\mu_n^{-1} K(\\epsilon_n)$ has at least 2 eigenvalues in the interval $(1,\n\\infty)$, by the BS principle (Theorem~\\ref{31.07;16}) the operator\n\\begin{equation}\n H_0 + \\mu^{-1}_n \\bigl(V^+ - V^- \\bigr)\n\\end{equation}\nhas at least 2 eigenvalues in the interval $(-\\infty, -\\epsilon_n)$. From the\noperator inequality\n\\begin{equation}\\label{31.01\/1}\n H_0 + \\mu^{-1}_n \\bigl(V^+ - V^- \\bigr) \\geq H_0 + V^+ - \\mu^{-1}_n V^- ,\n\\end{equation}\nmaxmin principle and (\\ref{23.03:01}) it follows that the operator on the rhs of\n(\\ref{31.01\/1}) has at least 2 eigenvalues in the interval $(-\\infty,\n-\\epsilon_n)$.\nThe BS operator associated with the operator on the rhs of (\\ref{31.01\/1}) is $\n\\mu_n^{-1} \\tilde K(\\epsilon)$, where\n\\begin{equation}\n\\tilde K (\\epsilon) =\\left[H_0 + V^+ +\n\\epsilon\\right]^{-1\/2}\\bigl(V^-\\bigr)\\left[H_0 + V^+ + \\epsilon\\right]^{-1\/2}.\n\\end{equation}\nBy the BS principle\n\\begin{equation}\\label{23.03:02}\n \\# (\\textrm{evs}(\\tilde K(\\epsilon_n)) > \\mu_n) \\geq 2.\n\\end{equation}\nThe proof that $\\tilde K(\\epsilon)$ is norm--continuous on $[0, \\infty)$ repeats that of Lemma~\\ref{1.8;54}. 
Using the inequality\n\\begin{equation}\n\\sigma_{ess} (H_0 + V^+ - (1+\\omega) V^- ) = [0, \\infty),\n\\end{equation}\nTheorem~\\ref{31.07;16} and Theorem~9.5 in \\cite{weidmann} we conclude that\n\\begin{equation}\n \\sigma_{ess} (\\tilde K(\\epsilon)) \\cap ((1+\\omega)^{-1} , \\infty) = \\emptyset\n\\quad \\quad (\\textnormal{for $\\epsilon \\in [0, \\infty]$} ).\n\\end{equation}\n\nRepeating the arguments in the beginning of the proof we infer that $\\|\\tilde K\n(0) \\| = 1$ is an eigenvalue of $\\tilde K(0)$. By norm--continuity and\n(\\ref{23.03:02})\nwe know that this eigenvalue must be at least two--fold degenerate. However,\n$\\tilde K(\\epsilon)$ for $\\epsilon >0$ is a product of positivity preserving operators\n(c. f. formula (\\ref{3.8;1})), and $\\tilde K(0)$ is also positivity\npreserving being the norm limit of positivity preserving operators. By\nTheorem~XIII.43 in vol.~4 \\cite{reed} $\\|\\tilde K(0)\\|$ must be a non--degenerate eigenvalue, a\ncontradiction. The existence of $\\pmb{\\epsilon} > 0$ such that $\\mu(\\epsilon)$ is continuous on $[0, \\pmb{\\epsilon}]$ and $\\mu(\\pmb{\\epsilon}) \\geq (1+\\omega\/2)^{-1}$ is a trivial consequence\nof the norm--continuity of $K(\\epsilon)$. Thus there exists $\\varphi(\\epsilon) : \\mathbb{R}_+ \\to L^2 (\\mathbb{R}^{3N-3})$ such that\n\\begin{equation}\\label{3.8;5}\n K(\\epsilon) \\varphi (\\epsilon) = \\mu (\\epsilon) \\varphi(\\epsilon) \\quad \\quad (\\| \\varphi (\\epsilon) \\| = 1, \\epsilon \\in [0, \\pmb{\\epsilon}]) \n\\end{equation}\nand by definition $\\varphi(\\epsilon) \\equiv 0$ for $\\epsilon \\in (\\pmb{\\epsilon}, \\infty)$. \nLet us define\n\\begin{equation}\n \\psi (\\epsilon) := \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} \\varphi(\\epsilon) \\quad \\quad (\\epsilon >0). \\label{11.01.12\/1}\n\\end{equation}\nDue to (\\ref{3.8;5}) $\\varphi(\\epsilon) \\in \\textrm{Ran} \\bigl( (H_0 + \\epsilon)^{-1\/2}\\bigr) $, hence, $\\psi(\\epsilon)\\in D(H_0)$. Besides, $\\psi(\\epsilon)$\nsatisfies the Schr\\\"odinger\nequation\n\\begin{equation}\n \\bigl( H_0 + \\epsilon\\bigr) \\psi (\\epsilon) + \\frac 1{\\mu(\\epsilon)}V \\psi\n(\\epsilon) = 0 \\quad \\quad (\\epsilon \\in (0, \\pmb{\\epsilon}]). \\label{11.01.12\/1.3}\n\\end{equation}\nMonotonicity of $\\mu(\\epsilon)$ follows from the fact that $-\\epsilon = \\inf \\sigma \\left( H_0 + \\mu^{-1}(\\epsilon) V\\right) $ is monotone \ndecreasing with $\\mu^{-1}$ \\cite{reed}. \n Inequality (\\ref{mubound}) is a direct\nconsequence of Lemma~3.1 in \\cite{simonfuncan}.\n\\end{proof}\n\\begin{remark}\n With additional effort instead of (\\ref{mubound}) one can possibly prove\nthat $\\mu(\\epsilon) = 1 -(\\psi (0) , V \\psi (0))^{-1} \\epsilon +\n\\hbox{o}(\\epsilon)$, that is $\\mu(\\epsilon)$ has a derivative at $\\epsilon =\n0$. Here $\\psi(0) = \\lim_{\\epsilon \\to 0} \\psi(\\epsilon)$, where the limit is in norm (its existence is proved in Corollary~\\ref{18.9;4} below). \n\\end{remark}\n\nLet us introduce the projection operator\n\\begin{equation}\nP(\\epsilon):= \\bigl( \\varphi(\\epsilon),\\cdot\\bigr)\\varphi(\\epsilon) , \\label{7.8;31}\n\\end{equation}\nwhere $\\varphi(\\epsilon)$ is defined in (\\ref{3.8;5}). So far we have defined $\\mu(\\epsilon)$ by equation (\\ref{1.08;41}). 
Now we redefine $\\mu(\\epsilon)$ setting \n\\begin{equation}\\label{7.01;41}\n \\mu(\\epsilon):= \\begin{cases}\n\\sup \\sigma\\bigl( K (\\epsilon)\\bigr) & \\text{if $\\epsilon \\in [0, \\pmb{\\epsilon}]$},\\\\\n\\mu(\\pmb{\\epsilon}) & \\text{if $\\epsilon \\in (\\pmb{\\epsilon} , \\infty)$}.\n\\end{cases} \n\\end{equation}\nBy Lemma~\\ref{3.8;3}\n\\begin{gather}\n K(\\epsilon) = \\mu(\\epsilon)P(\\epsilon) + K(\\epsilon)\\bigl(1-P(\\epsilon)\\bigr) ,\\\\\n\\|K(\\epsilon)(1-P(\\epsilon)) \\| \\leq \\eta , \\label{19.9;11}\n\\end{gather}\nwhere $\\eta \\in (0, 1)$ is some constant.\n\\begin{lemma}\\label{7.8;53}\n For $\\epsilon >0 $ the following formula holds\n\\begin{equation}\\label{wurzel}\n \\Bigl[1 - \\mu(\\epsilon) P(\\epsilon)\\Bigr]^{-1\/2} = 1 + \\left(\\frac\n1{\\sqrt{1-\\mu(\\epsilon)}} - 1\\right)P(\\epsilon).\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n Using von Neumann series we get\n\\begin{equation}\\label{wurzel2}\n \\Bigl[1 - \\mu(\\epsilon) P(\\epsilon)\\Bigr]^{-1} = 1 +\n\\frac{\\mu(\\epsilon)}{1-\\mu(\\epsilon)}P(\\epsilon) .\n\\end{equation}\nThe operator on the rhs of (\\ref{wurzel}) is positive and by the direct check\none finds that its square is equal to the operator on the rhs of\n(\\ref{wurzel2}).\n\\end{proof}\n\nIn the rest of this section we shall derive various estimates on $\\varphi(\\epsilon), \\psi(\\epsilon)$, the key result in this respect being\nTheorem~\\ref{3.8;8}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{Figure1.eps}\n\\caption{Illustration to the choice of orthogonal Jacobi coordinates. The filled square symbolizes the center of mass\nof the particles $\\{3,4, \\ldots, N\\}$. Coordinates' scales\nare set so that $H_0 = -\\sum_i \\Delta_{x_i} = -\\sum_i \\Delta_{z_i}= -\\sum_i \\Delta_{t_i}$ holds.}\n\\label{Fig:1}\n\\end{center}\n\\end{figure}\n\nWe shall use $N$ sets of Jacobi coordinates each associated with the particle number $1, \\ldots, N$.\nLet us construct the Jacobi coordinates $x_1, \\ldots, x_{N-1} \\in\n\\mathbb{R}^3$ associated with the first particle. Here $x_1$ points from the center of mass of particles $\\{2, 3,\n\\ldots, N\\}$ in the\ndirection of particle $1$, $x_i$ points from the center of mass of particles $\\{i+1, \\ldots, N\\}$ in the\ndirection of particle $i$. The coordinates' scales are chosen so that $H_0 = -\n\\sum_i \\Delta_{x_i}$ holds. The coordinates $z_1, z_2, \\ldots, z_{N-1} \\in\n\\mathbb{R}^3$ are associated with particle $2$. Here\n$z_1$ points from the center of mass of particles $\\{1, 3,\n\\ldots, N\\}$ in the\ndirection of particle $2$, $z_2$ points from the center of mass of particles $\\{ 3,\n\\ldots, N\\}$ in the direction of particle $1$, $z_i$ for $i \\geq 3$ points from the center of mass of particles $\\{i+1, \\ldots, N\\}$ in the\ndirection of particle $i$. The coordinates' scales are chosen so that $H_0 = -\n\\sum_i \\Delta_{z_i}$ holds. This choice of coordinates is illustrated in Fig.~\\ref{Fig:1}.\nThe two sets of coordinates are connected through\n\\begin{gather}\n z_1 = a_{11} x_1 + a_{12} x_2 , \\label{24.9;28}\\\\\n z_2 = a_{21} x_1 + a_{22} x_2 , \\label{24.9;29}\\\\\nz_i = x_i \\quad \\quad (i \\geq 3) ,\n\\end{gather}\nwhere the $2\\times 2$ real matrix $a_{ik}$ is orthogonal. 
In fact,\n\\begin{gather}\na_{11} = -\\left[ \\frac{m_1 m_2}{(M - m_1)(M-m_2)} \\right]^{\\frac 12} \\label{21.9;1}\\\\\na_{12} = \\left[ \\frac{M(M- m_1 -m_2 )}{(M - m_1)(M-m_2)} \\right]^{\\frac 12}, \\label{21.9;2}\n\\end{gather}\nwhere $M := \\sum_{i=1}^N m_i$ and $a_{22} = -a_{11}$ and $a_{12} = a_{21}$.\n\nFor each set of coordinates $j = 1, \\ldots , N$ we introduce the full and the partial Fourier transforms denoted as $F_j$ and $\\mathcal{F}_j$ respectively.\nIn particular,\n\\begin{gather}\n \\hat f (p_1 , \\ldots, p_{N-1}) = (F_1 f) = \\frac 1{(2 \\pi)^{\\frac {3N-3}2}} \\int e^{-i \\sum_{k=1}^{N-1} p_k \\cdot x_k }f(x_1, \\ldots, x_{N-1})d^3 x_1 \\ldots d^3 x_{N-1} , \\label{6.8;51}\\\\\n\\hat f (p_1 , x_2 , \\ldots, x_{N-1}) = (\\mathcal{F}_1 f) = \\frac 1{(2 \\pi)^{3\/2}} \\int e^{-ip_1 \\cdot x_1 }f(x_1, \\ldots, x_{N-1})d^3 x_1 , \\label{7.8;51}\\\\\n\\hat f (q_1, \\ldots, q_{N-1}) = (F_2 f) = \\frac 1{(2 \\pi)^{\\frac {3N-3}2}} \\int e^{-i \\sum_{k=1}^{N-1} q_k \\cdot z_k }f(z_1, \\ldots, z_{N-1})d^3 z_1 \\ldots d^3 z_{N-1} , \\\\\n\\hat f (q_1 , z_2 , \\ldots, z_{N-1}) = (\\mathcal{F}_2 f) = \\frac 1{(2 \\pi)^{3\/2}} \\int e^{-iq_1 \\cdot z_1 }f(z_1, \\ldots, z_{N-1})d^3 z_1 , \\label{20.9;27}\n\\end{gather}\nFor shorter notation let us define the following tuples $x_r := (x_2, x_3, \\ldots, x_{N-1})\\in \\mathbb{R}^{3N-6}$, $x_c := (x_3, \\ldots, x_{N-1}) \\in \\mathbb{R}^{3N-9}$ and\n$p_r := (p_2, p_3, \\ldots, p_{N-1})\\in \\mathbb{R}^{3N-6}$, $p_c := (p_3, \\ldots, p_{N-1})\\in \\mathbb{R}^{3N-9}$. Similarly, we define $z_r, q_r \\in \\mathbb{R}^{3N-6}$ and\n$z_c, q_c \\in \\mathbb{R}^{3N-9}$.\n\n\nFor $j=1,2,\\ldots, N$, $s \\in \\mathbb{R}^3$ and $\\epsilon > 0$ let us define the positive operator\n$G_j (s, \\epsilon)$, where $G_1 (s, \\epsilon)$ acts on $f \\in L^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\\label{6.8;1}\nG_1 (s, \\epsilon) f = \\mathcal{F}_1^{-1} \\left[(p_1 + s)^2 + \\epsilon\n\\right]^{-1\/2} (\\mathcal{F}_1 f).\n\\end{equation}\nThe operators $G_j (s, \\epsilon)$ for $j \\geq 2$ are constructed analogously using appropriate coordinates.\n\\begin{theorem}\\label{3.8;8}\n Suppose the conditions of Theorem~\\ref{3.8;3} are fulfilled. Then for all $\\alpha \\in [1,\n\\frac 32 )$ and $\\varphi(\\epsilon)$ defined in (\\ref{3.8;5}) the following bound holds\n\\begin{equation}\\label{6.8;2}\n \\sup_{\\epsilon >0} \\sup_{\\epsilon' >0} \\sup_{s \\in \\mathbb{R}^3} \\| G_j^\\alpha (s, \\epsilon')\n\\varphi(\\epsilon) \\| < \\infty \\quad (j = 1, 2, \\ldots , N).\n\\end{equation}\n\\end{theorem}\nBefore we proceed with the proof we shall need a couple of technical lemmas\n\\begin{lemma}\\label{7.8;8}\n Suppose $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies R2. For a multi--index $\\pmb i$ and $\\epsilon >0$ let us define the operators\n\\begin{gather}\n \\mathcal{K}_{\\pmb i} (\\epsilon) := \\mu^{-1} (\\epsilon) \\bigl( H_0 + V^+_{\\pmb i} + \\epsilon \\bigr)^{-1\/2} V^-_{\\pmb i} \\bigl( H_0 + V^+_{\\pmb i} + \\epsilon \\bigr)^{-1\/2} \\\\\nQ_{\\pmb i} (\\epsilon) := [1-\\mathcal{K}_{\\pmb i} (\\epsilon)]^{-1} ,\n\\end{gather}\nwhere $V^\\pm_{\\pmb i}$, $\\mu (\\epsilon)$ were defined in (\\ref{7.8;1}), (\\ref{7.01;41}) respectively. 
\nThen $\\mathcal{K}_{\\pmb i} (\\epsilon) , Q_{\\pmb i} (\\epsilon) \\in \\mathfrak{B}(L^2 (\\mathbb{R}^{3N-3}))$ and $\\|\\mathcal{K}_{\\pmb i} (\\epsilon)\\| \\leq (1+\\omega)^{-1} (1+\\omega\/2)$,\n$\\|Q_{\\pmb i} (\\epsilon)\\| \\leq 2 \\omega^{-1} (1+\\omega)$.\n\\end{lemma}\n\\begin{proof}\nFrom R2 and the HVZ theorem it follows that\n\\begin{equation}\\label{7.8;3}\n H_0 + \\mu^{-1} (\\epsilon) V^+_{\\pmb i} - (1+ \\omega)V^-_{\\pmb i} \\geq 0 , \n\\end{equation}\nsince $\\mu^{-1}(\\epsilon) > 1$. \nTherefore, by the BS principle (Theorem~\\ref{31.07;16}) we get\n$\\sigma \\bigl( (1+ \\omega)\\mu(\\epsilon) \\mathcal{K}_{\\pmb i} (\\epsilon) \\bigr) \\cap (1, \\infty) = \\emptyset $. Together \nwith $\\mathcal{K}_{\\pmb i} (\\epsilon) \\geq 0 $ this gives\n$\\|\\mathcal{K}_{\\pmb i} (\\epsilon)\\| \\leq (1+\\omega)^{-1} (1+\\omega\/2) <1 $, see Theorem~\\ref{3.8;3}. The rest of the proof is trivial.\n\\end{proof}\n\\begin{lemma}\\label{6.8;22}\nSuppose $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies R2. For $f \\in D(H^{1\/2}_0)$, $\\epsilon >0$ and any ordered multi--index $\\pmb i$ \nthe following inequality holds\n\\begin{equation}\n \\left\\| \\bigl( H_0 + \\mu^{-1}(\\epsilon) V^+_{\\pmb i} + \\epsilon \\bigr)^{-1\/2} \\bigl( H_0 + \\epsilon\n\\bigr)^{1\/2} f\\right\\| \\leq \\| f\\| ,\n\\end{equation}\nwhere $V^+_{\\pmb i}$ is defined in (\\ref{7.8;1}).\n\\end{lemma}\n\\begin{proof}\nIndeed,\n\\begin{gather}\n\\left\\| \\bigl( H_0 + \\epsilon + \\mu^{-1}(\\epsilon) V^+_{\\pmb i} \\bigr)^{-1\/2} \\bigl( H_0 + \\epsilon\n\\bigr)^{1\/2} f\\right\\|^2 \\nonumber\\\\\n= \\Bigl(\\bigl( H_0 + \\epsilon \\bigr)^{1\/2} f , \\bigl(\nH_0 + \\epsilon + \\mu^{-1}(\\epsilon) V^+_{\\pmb i} \\bigr)^{-1} \\bigl( H_0 + \\epsilon \\bigr)^{1\/2} f \\Bigr)\n\\nonumber\\\\\n\\leq \\Bigl(\\bigl( H_0 + \\epsilon \\bigr)^{1\/2} f , \\bigl(H_0 + \\epsilon\n\\bigr)^{-1} \\bigl( H_0 + \\epsilon \\bigr)^{1\/2} f \\Bigr) = \\|f\\|^2 .\n\\label{17.01\/02}\n\\end{gather}\nIn (\\ref{17.01\/02}) we have used the operator inequality\n\\begin{equation}\n \\bigl( H_0 + \\epsilon + \\mu^{-1}(\\epsilon) V^+_{\\pmb i} \\bigr)^{-1} \\leq \\bigl(H_0 + \\epsilon \\bigr)^{-1}, \n\\end{equation}\nwhich follows from $H_0 + \\mu^{-1}(\\epsilon) V^+_{\\pmb i} \\geq H_0 \\geq 0 $ (see, for example, Proposition\nA.2.5 on page 131 in \\cite{glimmjaffe}).\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{3.8;8}]\nWithout loss of generality we can set $j = 1$; the power $\\alpha \\in [1, \\frac 32 )$ remains fixed throughout the proof. 
We shall further on\nassume that $\\epsilon' \\leq \\epsilon$ because from (\\ref{6.8;1}) clearly follows that\n\\begin{equation}\n \\sup_{\\epsilon >0} \\sup_{\\epsilon' >0} \\sup_{s \\in \\mathbb{R}^3} \\| G_j^\\alpha (s, \\epsilon')\n\\varphi(\\epsilon) \\| = \\sup_{\\epsilon >0} \\sup_{0 <\\epsilon' \\leq \\epsilon} \\sup_{s \\in \\mathbb{R}^3} \\| G_j^\\alpha (s, \\epsilon')\n\\varphi(\\epsilon) \\|\n\\end{equation}\nUsing (\\ref{11.01.12\/1}), (\\ref{11.01.12\/1.3}) and $\\mu(\\epsilon) \\geq (1+\\omega\/2)^{-1}$ we can write\n\\begin{gather}\n \\| G^\\alpha_1 (s, \\epsilon') \\varphi(\\epsilon) \\| \\leq (1+\\omega\/2) \\sum_{i = 2}^N \\| G^\\alpha_1 (s, \\epsilon') \\bigl(\nH_0 + \\epsilon\\bigr)^{-1\/2} v_{1i} \\psi(\\epsilon) \\| \\nonumber\\\\\n+ (1+\\omega\/2) \\sum_{2\\leq i < k \\leq N} \\| G^\\alpha_1 (s, \\epsilon') \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2}\nv_{ik} \\psi(\\epsilon) \\| .\n\\end{gather}\nThus without\nloss of generality, (\\ref{6.8;2}) follows from the following inequalities\n\\begin{gather}\n \\sup_{\\epsilon >0} \\sup_{0 <\\epsilon' \\leq \\epsilon} \\sup_{s \\in \\mathbb{R}^3} \\| G^\\alpha_1 (s, \\epsilon') \\bigl( H_0 +\n\\epsilon\\bigr)^{-1\/2} v_{12} \\psi(\\epsilon) \\| < \\infty \\label{12.01\/01}\\\\\n\\sup_{\\epsilon >0} \\sup_{0 <\\epsilon' \\leq \\epsilon} \\sup_{s \\in \\mathbb{R}^3} \\| G^\\alpha_1 (s, \\epsilon') \\bigl( H_0 +\n\\epsilon\\bigr)^{-1\/2} v_{23} \\psi(\\epsilon) \\| < \\infty \\label{12.01\/02}\n\\end{gather}\nWe shall follow the method developed in \\cite{1,jpasubmit,lmp}. Let us start with\n(\\ref{12.01\/01}).\nWe introduce another set of Jacobi coordinates $y_1 = \\sqrt{2\\mu_{12}} (r_2 -\nr_1)$, $y_2 = (\\sqrt{2 M_{12;3}})\\bigl[ r_3 -\nm_1\/(m_1+m_2) r_1 - m_2\/(m_1+m_2) r_2\\bigr]$ etc.,\nwhere $M_{ik;j} := (m_i + m_k)m_j \/ (m_i + m_k + m_j)$ and\n$\\mu_{ik} := m_i m_k \/(m_i + m_k)$ denote the reduced masses.\nThe coordinate $y_i \\in \\mathbb{R}^3$ is proportional to the\nvector pointing from the centre of mass of the particles $[1, 2, \\ldots , i]$\nto the particle $i+1$, and the scales are set in order to guarantee that $H_0 = - \\sum_i \\Delta_{y_i} $. The full and partial Fourier transforms have the form\n\\begin{gather}\n (F_{12} f ) (p_{y_1}, p_{y_2}, \\ldots, p_{y_{N-1}}):= \\frac 1{(2\\pi)^{\\frac{3N-3}2}} \\int e^{-i\\sum_{k = 1}^{N-1} p_{y_k} \\cdot y_k} f(y_1, \\ldots, y_{N-1}) d^3 y_1 \\ldots d^3 y_{N-1} \\label{6.8;52}\\\\\n(\\mathcal{F}_{12} f ) (y_1, p_{y_2}, \\ldots, p_{y_{N-1}}):= \\frac 1{(2\\pi)^{\\frac{3N-6}2}} \\int e^{-i\\sum_{k = 2}^{N-1} p_{y_k} \\cdot y_k} f(y_1, \\ldots, y_{N-1}) d^3 y_2 \\ldots d^3 y_{N-1}\n\\end{gather}\nFor shorter notation we shall\ndenote by\n$y_r , p_{y_r}\\in \\mathbb{R}^{3N-6}$ the following tuples $y_r := (y_2, y_3,\n\\ldots, y_{N-1})$ and $p_{y_r} := (p_{y_2}, p_{y_3}, \\ldots, p_{y_{N-1}})$. The coordinate set $y_i$ can be expressed through $x_i$ as follows\n\\begin{equation}\\label{6.8;53}\n x_i = \\sum_{k=1}^{N-1} b_{ik} y_k \\quad (i = 1, \\ldots, N-1),\n\\end{equation}\nwhere the $(N-1)\\times (N-1)$ real orthogonal matrix $b_{ik}$ depends on the mass ratios. The expressions for these coefficients are complicated, we just mention that\n\\begin{equation}\n b_{11} = - \\left[ \\frac{m_2 M}{(M - m_1)(m_1 + m_2)} \\right]^{\\frac 12} . 
\n\\end{equation}\n\n\n\nSimilar to \\cite{jpasubmit,lmp} we introduce the operator, which acts on $f \\in\nL^2(\\mathbb{R}^3)$ according to the rule\n\\begin{equation}\\label{7.8;15}\n B_{12} (\\epsilon) f = (1+ \\epsilon^{\\zeta\/2})f + \\mathcal{F}_{12}^{-1} (|p_{y_r}|^\\zeta -\n1)\\chi_1 (|p_{y_r}|) (\\mathcal{F}_{12} f),\n\\end{equation}\nwhere\n\\begin{equation}\\label{defzeta28}\n \\zeta = \\frac \\alpha2 + \\frac 14 < 1\n\\end{equation}\nFor all $\\epsilon >0$ the operators $B_{12} (\\epsilon)$ and $B_{12}^{-1}\n(\\epsilon)$ are bounded. Inserting the identity $B_{12} (\\epsilon') B_{12}^{-1} (\\epsilon') = 1$ into\n(\\ref{12.01\/01})\nand using $[B_{12} (\\epsilon') , v_{12}] =0$ we get\n\\begin{equation}\n \\| G^\\alpha_1 (s, \\epsilon') \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} v_{12} \\psi(\\epsilon) \\| \\leq\n\\| \\mathcal{C} (s, \\alpha, \\epsilon, \\epsilon') \\| \\, \\| B_{12}^{-1}\n(\\epsilon')\\bigl|v_{12}\\bigr|^{1\/2} \\psi (\\epsilon)\\| ,\n\\end{equation}\nwhere by definition\n\\begin{equation}\\label{12.01\/04}\n \\mathcal{C} (s, \\alpha, \\epsilon, \\epsilon') := G^\\alpha_1 (s, \\epsilon') \\bigl( H_0 +\n\\epsilon\\bigr)^{-1\/2} B_{12} (\\epsilon') \\bigl|v_{12}\\bigr|^{1\/2}\n\\end{equation}\nBy Lemma~\\ref{6.8;5} below to prove (\\ref{12.01\/01}) it suffices to show that\n\\begin{equation}\\label{6.8;6}\n \\sup_{\\epsilon >0 } \\sup_{0 < \\epsilon' \\leq \\epsilon } \\| B_{12}^{-1} (\\epsilon') \\bigl|v_{12}\\bigr|^{1\/2} \\psi\n(\\epsilon)\\| < \\infty .\n\\end{equation}\nUsing the method in \\cite{jpasubmit} we shall prove that\n\\begin{equation}\\label{6.8;7}\n \\sup_{\\epsilon >0 } \\sup_{0 < \\epsilon' \\leq \\epsilon } \\| B_{12}^{-1} (\\epsilon') \\bigl((v_{12})_-\\bigr)^{\\frac 12} \\psi\n(\\epsilon)\\| < \\infty .\n\\end{equation}\nWe first prove that (\\ref{6.8;6}) follows from (\\ref{6.8;7}) and afterwards prove that (\\ref{6.8;7}) holds.\nAfter rearranging the terms in the Schr\\\"odinger equation (\\ref{11.01.12\/1.3}) we obtain\n\\begin{gather}\n B_{12}^{-1} (\\epsilon') \\bigl|v_{12}\\bigr|^{1\/2} \\psi(\\epsilon) = \\mu^{-1} (\\epsilon)\\bigl|v_{12}\\bigr|^{1\/2}\nB_{12}^{-1} (\\epsilon') \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\\\\n\\times \n\\bigl((v_{12})_- - \\sum_{i=3}^N v_{1i} - \\sum_{2 \\leq i < j \\leq N} v_{ij}\\bigr)\n\\psi(\\epsilon)\n\\end{gather}\nThis leads to the upper bound\n\\begin{gather}\n \\| B_{12}^{-1} (\\epsilon') \\bigl|v_{12}\\bigr|^{1\/2} \\psi (\\epsilon)\\| \\nonumber\\\\\n\\leq (1+ \\omega\/2)\\Bigl\\|\n\\bigl|v_{12}\\bigr|^{1\/2} \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1}\n\\bigl((v_{12})_-\\bigr)^{1\/2}\\Bigr\\|\n\\Bigl\\| B_{12}^{-1} (\\epsilon') \\bigl((v_{12})_-\\bigr)^{1\/2} \\psi(\\epsilon) \\Bigr\\| \\nonumber\\\\\n+ (1+ \\omega\/2) \\sum_{i=3}^N \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} B_{12}^{-1} (\\epsilon')\\bigl[H_0 + \\epsilon\n+ \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl| v_{1i} \\bigr|^{1\/2}\\Bigr\\| \\Bigl\\| \\bigl| v_{1i}\n\\bigr|^{1\/2}\\psi(\\epsilon) \\Bigr\\| \\nonumber\\\\\n+ (1+ \\omega\/2) \\sum_{2 \\leq i < j \\leq N} \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} B_{12}^{-1} (\\epsilon')\n\\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl| v_{ij} \\bigr|^{1\/2}\\Bigr\\|\n\\Bigl\\| \\bigl| v_{ij} \\bigr|^{1\/2}\\psi(\\epsilon) \\Bigr\\|, \\label{6.8;11}\n\\end{gather}\nwhere we have used $\\mu (\\epsilon) \\leq (1+\\omega\/2)$. 
Note that for $1 \\leq i < j \\leq N$ the terms $\\bigl\\| \\bigl| v_{ij}\n\\bigr|^{1\/2}\\psi(\\epsilon) \\bigr\\|$ are uniformly bounded. Indeed, by\n(\\ref{11.01.12\/1})\n\\begin{gather}\n\\sup_{\\epsilon>0}\\Bigl\\| \\bigl| v_{ij} \\bigr|^{1\/2}\\psi(\\epsilon) \\Bigr\\| =\n\\sup_{\\epsilon>0} \\Bigl\\| \\bigl| v_{ij} \\bigr|^{1\/2} \\bigl( H_0 +\n\\epsilon\\bigr)^{-1\/2} \\varphi(\\epsilon) \\Bigr\\| \\nonumber \\\\\n\\leq\nC_v \\equiv \\max_{i < j}\\, \\sup_{\\epsilon > 0} \\Bigl\\| \\bigl| v_{ij} \\bigr|^{1\/2} \\bigl( H_0 +\n\\epsilon\\bigr)^{-1} \\bigl| v_{ij} \\bigr|^{1\/2} \\Bigr\\|^{1\/2} < \\infty , \\label{16.01\/1}\n\\end{gather}\nwhere we have used $\\| \\varphi(\\epsilon) \\| \\leq 1$ and $|v_{ij} (r)|^{1\/2} \\in L^3 (\\mathbb{R}^3)$, cf.~(\\ref{9.10;1}). \nApplying Lemma~\\ref{6.8;14} and (\\ref{16.01\/1}) we conclude\nthat\nall terms under the sums in (\\ref{6.8;11}) are uniformly bounded. For the first operator norm on the rhs of\n(\\ref{6.8;11}) using (\\ref{16.01\/1}) we get\n\\begin{gather}\n\\sup_{\\epsilon>0} \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1}\n\\bigl((v_{12})_-\\bigr)^{1\/2}\\Bigr\\| \\nonumber \\\\\n\\leq\n\\sup_{\\epsilon>0} \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} \\bigl[H_0 + \\epsilon \\bigr]^{-1}\n\\bigl|v_{12}\\bigr|^{1\/2}\\Bigr\\| \\leq C_v^2 . \\label{6.8;16}\n\\end{gather}\nThus from (\\ref{6.8;11})--(\\ref{6.8;16}) and (\\ref{6.8;7}) inequality (\\ref{6.8;6}) follows. It remains to prove (\\ref{6.8;7}).\nUsing Eq.~16 in \\cite{jpasubmit} (where one has to set $k_n^2 =\n\\epsilon$ and $\\lambda_n = 1$) we get\n\\begin{gather}\nB_{12}^{-1} (\\epsilon') \\bigl((v_{12})_-\\bigr)^{\\frac 12} \\psi\n(\\epsilon) \\nonumber\\\\\n= - \\mu^{-1} (\\epsilon) Q_{\\{3,\\ldots, N\\}}(\\epsilon) \\sum_{i=3}^N \\bigl((v_{12})_-\\bigr)^{\\frac 12} B_{12}^{-1} (\\epsilon') \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} v_{1i} \\psi(\\epsilon) \\nonumber\\\\\n- \\mu^{-1} (\\epsilon) Q_{\\{3,\\ldots, N\\}}(\\epsilon) \\sum_{2 \\leq i < j \\leq N} \\bigl((v_{12})_-\\bigr)^{\\frac 12} B_{12}^{-1} (\\epsilon') \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} v_{ij} \\psi(\\epsilon) ,\n\\end{gather}\nwhere\n\\begin{equation}\n Q_{\\{3,\\ldots, N\\}}(\\epsilon) = \\left\\{1 - \\mu^{-1} (\\epsilon) \\bigl((v_{12})_-\\bigr)^{\\frac 12} \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl((v_{12})_-\\bigr)^{\\frac 12} \\right\\}^{-1}\n\\end{equation}\nwas defined in Lemma~\\ref{7.8;8}. 
By Lemma~\\ref{7.8;8} and (\\ref{16.01\/1})\n\\begin{gather}\n \\left\\|B_{12}^{-1} (\\epsilon') \\bigl((v_{12})_-\\bigr)^{\\frac 12} \\psi(\\epsilon) \\right\\| \\nonumber \\\\\n\\leq 2 \\omega^{-1} (1+\\omega) (1+\\omega\/2) C_v \\sum_{i=3}^N \\left\\| \\bigl((v_{12})_-\\bigr)^{\\frac 12} B_{12}^{-1} (\\epsilon') \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} |v_{1i}|^{\\frac12} \\right\\| \\nonumber \\\\\n+ 2 \\omega^{-1} (1+\\omega) (1+\\omega\/2) C_v \\sum_{2 \\leq i < j \\leq N} \n\\left\\| \\bigl((v_{12})_-\\bigr)^{\\frac 12} B_{12}^{-1} (\\epsilon') \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} |v_{ij}|^{\\frac 12} \\right\\| \\nonumber\n\\end{gather}\nNow (\\ref{6.8;7}) follows from Lemma~\\ref{6.8;14} and (\\ref{12.01\/01}) is proved.\n\nLet us now consider (\\ref{12.01\/02}).\nAfter rearranging the terms in the Schr\\\"odinger equation (\\ref{11.01.12\/1.3}) we obtain\n\\begin{gather}\n G^\\alpha_1 (s, \\epsilon) \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} v_{23} \\psi(\\epsilon) \\nonumber \\\\\n= \\mu^{-1} (\\epsilon) \\bigl(\nH_0 + \\epsilon\\bigr)^{-1\/2} v_{23} \\bigl( H_0 + \\epsilon + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ \\bigr)^{-1}\n G^\\alpha_1 (s, \\epsilon) \\left\\{ V_{\\{1\\}}^- - \\sum_{i=2}^N v_{1i}\\right\\} \\psi(\\epsilon) \\nonumber\n\\\\\n= \\mu^{-1} (\\epsilon)\\mathcal{J}_1(\\epsilon) \\psi_1 (\\epsilon, \\epsilon') - \\mu^{-1} (\\epsilon)\\mathcal{J}_2(\\epsilon) \\psi_2\n(\\epsilon, \\epsilon') \\label{17.01\/0}\n\\end{gather}\nwhere by definition\n\\begin{gather}\n \\mathcal{J}_1(\\epsilon) := \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} v_{23} \\bigl(\nH_0 + \\epsilon + \\mu^{-1} (\\epsilon)V_{\\{1\\}}^+ \\bigr)^{-1} \\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\\\\n\\mathcal{J}_2(\\epsilon) := \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} v_{23} \\bigl( H_0\n+ \\epsilon + \\mu^{-1} (\\epsilon)V_{\\{1\\}}^+ \\bigr)^{-1\/2}\n\\end{gather}\nand\n\\begin{gather}\n\\psi_1(\\epsilon, \\epsilon') := G^\\alpha_1 (s, \\epsilon') \\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\psi(\\epsilon) \\label{6.8;27}\\\\\n\\psi_2(\\epsilon, \\epsilon') := \\sum_{i=2}^N \\bigl( H_0 + \\epsilon + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ \\bigr)^{-1\/2}\nG^\\alpha_1 (s, \\epsilon') v_{1i} \\psi(\\epsilon) \\label{6.8;28}\n\\end{gather}\nIn (\\ref{17.01\/0}) we have used that $[ G^\\alpha_1 (s, \\epsilon), V_{\\{1\\}}^\\pm] = 0$ due to\n$V_{\\{1\\}}^\\pm$ being dependent only on $x_2, \\ldots, x_{N-1}$.\nIt is easy to show that $\\|\\mathcal{J}_{1,2} (\\epsilon)\\|$ is uniformly bounded.\nFor example,\n\\begin{gather}\n \\|\\mathcal{J}_1 (\\epsilon)\\| \\leq \\left\\||v_{23}|^{1\/2} (H_0 + \\epsilon)^{-1}\n|v_{23}|^{1\/2}\\right\\|^{1\/2} \\,\n\\left\\||v_{23}|^{1\/2} (H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ + \\epsilon )^{-1}\n|v_{23}|^{1\/2}\\right\\|^{1\/2} \\nonumber\\\\\n\\times \\left\\|(V_{\\{1\\}}^-)^{1\/2} (H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ + \\epsilon )^{-1} (V_{\\{1\\}}^-)^{1\/2}\n\\right\\|^{1\/2} \\leq C_v (1+\\omega)^{-1\/2} ,\n\\end{gather}\nwhere we have used Lemma~\\ref{7.8;8}.\nIt remains to prove that\n\\begin{equation}\n \\sup_{\\epsilon > 0} \\sup_{0 < \\epsilon' \\leq \\epsilon} \\|\\psi_{1,2} (\\epsilon, \\epsilon')\\| < \\infty. 
\\label{6.8;35}\n\\end{equation}\nBy Lemma~\\ref{6.8;22} and (\\ref{12.01\/01})\n\\begin{equation}\n \\sup_{\\epsilon >0 } \\sup_{0 < \\epsilon' \\leq \\epsilon} \\| \\psi_2(\\epsilon, \\epsilon')\\| \\leq \\sum_{i=2}^N \\sup_{\\epsilon >0 }\n\\sup_{0 < \\epsilon' \\leq \\epsilon} \\bigl\\| \\bigl( H_0 + \\epsilon\n\\bigr)^{-1\/2} G^\\alpha_1 (s, \\epsilon') v_{1i} \\psi(\\epsilon)\\bigr\\| < \\infty . \\label{6.8;32}\n\\end{equation}\nAfter\nrearranging the terms in (\\ref{11.01.12\/1.3}) we obtain\n\\begin{equation}\n \\psi(\\epsilon) = \\mu^{-1} (\\epsilon)\\bigl[H_0 + \\mu^{-1} (\\epsilon)V_{\\{1\\}}^+ + \\epsilon\\bigr]^{-1}\n\\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\psi(\\epsilon) - \\bigl[H_0 +\n\\mu^{-1} (\\epsilon)V_{\\{1\\}}^+ + \\epsilon\\bigr]^{-1} \\sum_{i=2}^N v_{1i} \\psi(\\epsilon) . \\nonumber\n\\end{equation}\nTherefore,\n\\begin{gather}\n \\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\psi(\\epsilon) = -\\mu^{-1} (\\epsilon) Q_{\\{1\\}} (\\epsilon)\n\\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\bigl[H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ + \\epsilon\\bigr]^{-1\/2} \\nonumber \\\\\n\\times \\sum_{i=2}^N\n\\bigl[H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ + \\epsilon\\bigr]^{-1\/2} v_{1i} \\psi(\\epsilon) , \\label{6.8;25}\n\\end{gather}\nwhere $Q_{\\{1\\}} (\\epsilon)$ was defined in Lemma~\\ref{7.8;8}. Using (\\ref{6.8;25}) and Lemma~\\ref{7.8;8} we obtain from (\\ref{6.8;27}), (\\ref{6.8;28})\n\\begin{gather}\n \\| \\psi_1(\\epsilon, \\epsilon')\\| \\leq 2 (1+\\omega) \\omega^{-1} \\mu^{-1} (\\epsilon) \\left\\|\n\\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\bigl[H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ + \\epsilon\\bigr]^{-1\/2}\\right\\|\n\\|\\psi_2 (\\epsilon, \\epsilon')\\| \\nonumber\\\\\n\\leq 2 (1+\\omega) \\omega^{-1} \\mu^{-1} (\\epsilon) \\left\\|\\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\bigl[H_0 + \\mu^{-1} (\\epsilon) V_{\\{1\\}}^+ +\n\\epsilon\\bigr]^{-1} \\bigl(V_{\\{1\\}}^-\\bigr)^{1\/2} \\right\\|^{1\/2} \\|\\psi_2 (\\epsilon, \\epsilon')\\| \\nonumber\\\\\n\\leq 2[(1+\\omega)(1+\\omega\/2)]^{1\/2} \\omega^{-1} \\|\\psi_2 (\\epsilon, \\epsilon')\\|. \\label{6.8;31}\n\\end{gather}\n Now (\\ref{6.8;35}) follows from (\\ref{6.8;32}) and (\\ref{6.8;31}).\n\\end{proof}\n\\begin{lemma}\\label{6.8;5}\nFor all $\\alpha \\in [1, \\frac 32 )$ the following inequality holds\n\\begin{equation}\n \\sup_{\\epsilon >0 } \\sup_{0 < \\epsilon' \\leq \\epsilon } \\sup_{s \\in \\mathbb{R}^3} \\| \\mathcal{C} (s, \\alpha,\n\\epsilon, \\epsilon') \\| < \\infty ,\n\\end{equation}\nwhere $ \\mathcal{C} (s, \\alpha,\\epsilon, \\epsilon')$ is defined in (\\ref{12.01\/04}).\n\\end{lemma}\n\\begin{proof}\nWithout loosing generality we can consider $0 < \\epsilon < 1$. 
For the dual\ncoordinates defined in (\\ref{6.8;51}), (\\ref{6.8;52}) the following relation holds $p_1 = \\sum_{i=1}^{N-1} b_{1i} p_{y_i} $,\nwhere we have used that the matrix $b_{ik}$ in (\\ref{6.8;53}) is orthogonal.\n The Fourier--transformed operator $M= F_{12} \\mathcal{C} (s, \\alpha, \\epsilon, \\epsilon')\nF_{12}^{-1}$ acts on $f (p_{y_1}, p_{y_r}) \\in L^2 (\\mathbb{R}^{3N-3})$ as\nfollows\n\\begin{equation}\n (Mf)(p_{y_1}, p_{y_r}) = \\int \\mathcal{M}(p_{y_1} , p'_{y_1} ; p_{y_r}) f(p'_{y_1} ,\np_{y_r}) d^3 p'_{y_1} ,\n\\end{equation}\nwhere\n\\begin{gather}\n \\mathcal{M} (p_{y_1} , p'_{y_1} ; p_{y_r}) =\n\\frac{(2\\mu_{12})^{3\/2}}{(2\\pi)^{3\/2}}\\left[(b_{11} p_{y_1} + \\sum_{k=2}^{N-1} b_{1k}\np_{y_k} +s)^2 + \\epsilon' \\right]^{-\\alpha\/2}\n\\left(p_{y_1}^2 + p_{y_r}^2 + \\epsilon \\right)^{-1\/2} \\nonumber \\\\\n\\times \\left\\{1 + (\\epsilon')^{\\zeta\/2} + (|p_{y_r}|^\\zeta -1|)\\chi_1\n(|p_{y_r}|)\\right\\} \\widehat{|v_{12}|^{\\frac 12}}\\bigl(\\sqrt{2\\mu_{12}}(p_{y_1}\n- p'_{y_1})\\bigr) ,\n\\end{gather}\nand the hat denotes standard Fourier transform in $L^2 (\\mathbb{R}^3)$. We\nestimate the norm as\n\\begin{equation}\n \\|M\\|^2 \\leq \\sup_{|p_{y_r}| \\leq 1} \\int \\bigl|\\mathcal{M}(p_{y_1} , p'_{y_1} ;\np_{y_r})\\bigr|^2 d^3 p'_{y_1} d^3 p_{y_1} + \\sup_{|p_{y_r}| > 1} \\int\n\\bigl|\\mathcal{M}(p_{y_1} , p'_{y_1} ; p_{y_r})\\bigr|^2 d^3 p'_{y_1} d^3 p_{y_1} \\label{6.8;56}\n\\end{equation}\nFor the first term on the rhs in (\\ref{6.8;56}) we get\n\\begin{gather}\n \\sup_{|p_{y_r}| \\leq 1} \\int \\bigl|\\mathcal{M}(p_{y_1} , p'_{y_1} ; p_{y_r})\\bigr|^2 d^3\np'_{y_1} d^3 p_{y_1} \\nonumber \\\\\n\\leq C_0 \\sup_{|p_{y_r}| \\leq 1} \\sup_{s' \\in \\mathbb{R}^3}\n\\int \\left[(b_{11} p_{y_1} + s')^2 + \\epsilon' \\right]^{-\\alpha} \\frac{\n\\left(|p_{y_r}|^\\zeta + (\\epsilon')^{\\zeta\/2}\\right)^2 }{\\left(p_{y_1}^2 +\np_{y_r}^2 + \\epsilon' \\right)} d^3 p_{y_1} , \\label{6.8;59}\n\\end{gather}\nwhere\n\\begin{equation}\n C_0 := \\frac{(2\\mu_{12}^3)}{2\\pi^3} \\int \\left|\\widehat{|v_{12}|^{\\frac\n12}}\\bigl(\\sqrt{2\\mu_{12}} p_{y_1}\\bigr)\\right|^2 d^3 p_{y_1}\n\\end{equation}\nis finite due to $|v_{12}|^{1\/2} \\in L^2 (\\mathbb{R}^3)$. In (\\ref{6.8;59}) we have also used that $\\epsilon' \\leq \\epsilon$. Let us use the following inequality\n\\begin{equation}\n \\left(|p_{y_r}|^\\zeta + (\\epsilon')^{\\zeta\/2}\\right)^2 \\leq 2|p_{y_r}|^{2\\zeta} +\n2(\\epsilon')^\\zeta \\leq 4\\left(|p_{y_r}|^2 + \\epsilon'\\right)^\\zeta ,\\label{8.6;61}\n\\end{equation}\nwhich follows from $a^\\gamma + b^\\gamma \\leq 2(a+b)^\\gamma$ for any $a, b \\geq\n0$ and $0 \\leq \\gamma\\leq 1$. Using (\\ref{8.6;61}) and (\\ref{defzeta28}) we obtain\nfrom (\\ref{6.8;59})\n\\begin{gather}\n \\sup_{|p_{y_r}| \\leq 1} \\int \\bigl|\\mathcal{M} (p_{y_1} , p'_{y_1} ; p_{y_r})\\bigr|^2 d^3\np'_{y_1} d^3 p_{y_1} \\leq 4 C_0 |b_{11}|^{-2\\alpha} \\mathcal{J} \\sup_{|p_{y_r}|\n\\leq 1} \\left(|p_{y_r}|^2 + \\epsilon'\\right)^{\\frac{3-2\\alpha}4} \\nonumber\\\\\n\\leq 4 C_0 |b_{11}|^{-2\\alpha} 2^{\\frac{3-2\\alpha}4} \\mathcal{J} , \\label{6.8;72}\n\\end{gather}\nwhere\n\\begin{equation}\n \\mathcal{J} := \\sup_{s' \\in \\mathbb{R}^3} \\int \\frac{d^3\np_{y_1}}{|p_{y_1}|^{2\\alpha} \\left[(p_{y_1} + s')^2 +1\\right]} . \\label{8.6;64}\n\\end{equation}\nIt remains to show that the expression in (\\ref{8.6;64}) is finite. 
This becomes clear from the following upper bound\n\\begin{gather}\n \\mathcal{J} \\leq \\int_{|p_{y_1}| \\leq 2} \\frac{d^3\np_{y_1}}{|p_{y_1}|^{2\\alpha}} + \\sup_{s' \\in \\mathbb{R}^3} \\int_{|p_{y_1}| \\geq\n2} \\frac{d^3p_{y_1}}{((1\/2)|p_{y_1}|^{2\\alpha} + 1) [(p_{y_1} + s')^2 + 1]} \\nonumber \\\\\n\\leq \\frac{32\\pi}{(3-2\\alpha)4^\\alpha} + \\left[\\int\n\\frac{d^3p_{y_1}}{((1\/2)|p_{y_1}|^{2\\alpha} + 1)^2}\\right]^{1\/2} \\left[\\int\n\\frac{d^3p_{y_1}}{(|p_{y_1}|^2 + 1)^2}\\right]^{1\/2}\n\\end{gather}\nwhere we have used the Cauchy--Schwarz inequality; the last\ntwo integrals are obviously convergent.\nFor the second term in (\\ref{6.8;56}) we obtain\n\\begin{gather}\n \\sup_{|p_{y_r}| > 1} \\int \\bigl|\\mathcal{M}(p_{y_1} , p'_{y_1} ; p_{y_r})\\bigr|^2 d^3\np'_{y_1} d^3 p_{y_1} \\leq C_0 |b_{11}|^{-2\\alpha} \\mathcal{J} \\label{6.8;71}\n\\end{gather}\nBecause the expressions on the rhs of (\\ref{6.8;72}) and (\\ref{6.8;71}) do not depend on\n$\\epsilon , \\epsilon', s$, the lemma is proved.\n\\end{proof}\n\n\\begin{lemma}\\label{6.8;14}\nThe following inequalities hold\n\\begin{gather}\n \\sup_{\\epsilon > 0} \\sup_{\\epsilon' > 0} \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} B_{12}^{-1} (\\epsilon') \\bigl[H_0 +\n\\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl| v_{1i} \\bigr|^{1\/2}\\Bigr\\| < \\infty\n\\quad (i=3,\\ldots, N ) \\label{6.8;75}\\\\\n\\sup_{\\epsilon > 0} \\sup_{\\epsilon' > 0} \\Bigl\\| \\bigl|v_{12}\\bigr|^{1\/2} B_{12}^{-1} (\\epsilon')\\bigl[H_0 +\n\\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl| v_{ij} \\bigr|^{1\/2}\\Bigr\\| < \\infty\n\\quad (2 \\leq i < j \\leq N )\n\\end{gather}\n\\end{lemma}\n\\begin{proof}\n The proof practically repeats that of Lemma~2 in \\cite{jpasubmit}, where $\\zeta=1\/2$ was used\nin the definition of $B_{12}$\n(note that the coordinate notation here differs from that in\n\\cite{jpasubmit}). 
So we shall restrict ourselves to the proof of (\\ref{6.8;75}) for $i=3$, which is equivalent to\n$\\sup_{\\epsilon, \\epsilon' >0}\\|D(\\epsilon, \\epsilon')\\|< \\infty$, where\n\\begin{equation}\n D (\\epsilon, \\epsilon') = \\mathcal{F}_{12}^{-1} \\bigl|v_{12}\\bigr|^{1\/2} B_{12}^{-1} (\\epsilon')\n\\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1} \\bigl| v_{13} \\bigr|^{1\/2}\n\\mathcal{F}_{12}.\n\\end{equation}\nWe split $D (\\epsilon, \\epsilon')$ as follows\n\\begin{gather}\nD (\\epsilon, \\epsilon') = D^{(1)} (\\epsilon, \\epsilon') + D^{(2)}(\\epsilon, \\epsilon') , \\\\\nD^{(1)} (\\epsilon, \\epsilon') := \\mathcal{F}_{12}^{-1} \\bigl|v_{12}\\bigr|^{\\frac 12}\n\\left\\{ B_{12}^{-1} (\\epsilon') - (1+(\\epsilon')^{\\frac \\zeta2})^{-1} \\right\\}\n\\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1}\\bigl| v_{13} \\bigr|^{\\frac 12}\n\\mathcal{F}_{12} , \\\\\n D^{(2)} (\\epsilon, \\epsilon') := (1+(\\epsilon')^{\\frac \\zeta2})^{-1} \\mathcal{F}_{12}^{-1}\n\\bigl|v_{12}\\bigr|^{\\frac 12}\n \\bigl[H_0 + \\epsilon + \\mu^{-1} (\\epsilon) (v_{12})_+ \\bigr]^{-1}\\bigl| v_{13} \\bigr|^{\\frac 12}\n\\mathcal{F}_{12}.\n\\end{gather}\nFor convenience we define the tuple $p_{y_c} := (p_{y_3} , p_{y_4}, \\ldots, p_{y_{N-1}})\n\\in \\mathbb{R}^{3N-9}$.\nThe operator $D^{(1)} (\\epsilon, \\epsilon')$ acts on $\\phi(y_1, p_{y_2}, p_{y_c} ) \\in\nL^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\n\\bigl( D^{(1)} (\\epsilon, \\epsilon') \\phi\\bigr) (y_1 , p_{y_2}, p_{y_c} ) = \\int d^3 y'_1 \\: d^3\np'_{y_2} \\: \\mathcal{D}^{(1)} (y_1, y'_1 , p_{y_2}, p'_{y_2} ; p_{y_c} ) \\phi(y'_1 ,\np'_{y_2}, p_{y_c} ) . \\label{6.8;77}\n\\end{equation}\nThe integral kernel in (\\ref{6.8;77}) has the form, see \\cite{1,jpasubmit} \n\\begin{eqnarray}\n\\mathcal{D}^{(1)} (y_1, y'_1 , p_{y_2}, p'_{y_2} ; p_{y_c} ) = \\frac\n1{2^{\\frac{3N-2}2}\\pi^{\\frac{3N-4}2} \\gamma^3}\n\\left\\{|p_{y_r}|^{\\zeta} + (\\epsilon')^{\\frac \\zeta2} \\right\\}^{-1} \\chi_1\n(|p_{y_r}|)\\left| v_{12}\\bigl( (2\\mu_{12})^{-1} y_1 \\bigr) \\right|^{\\frac 12}\n\\nonumber\\\\\n\\times G\\Bigl((p_{y_r}^2 + \\epsilon)^{\\frac 12};y_1, y'_1\\Bigr) \\exp\\left(\ni\\beta \\gamma^{-1} y'_1 \\cdot (p_{y_2} - p'_{y_2})\\right) \\:\\widehat{\\bigl|\nv_{23}\\bigr|^{\\frac 12}} (\\gamma^{-1}(p_{y_2} - p'_{y_2})), \\quad\n\\end{eqnarray}\nwhere $\\beta := -m_2 \\hbar \/ ((m_1 + m_2)\\sqrt{2\\mu_{12}})$, $\\gamma :=\n\\hbar\/\\sqrt{2M_{12;3}}$ and $p_{y_r}^2 = p_{y_2}^2 + p_{y_c}^2 $.\nThe function $G(k; y_1, y'_1)$ denotes the integral kernel of the operator\n$(-\\Delta_{y_1} + k^2 + \\mu^{-1} (\\epsilon)(v_{12})_+)^{-1}$ acting in $L^2 (\\mathbb{R}^3)$. By the arguments in \\cite{jpasubmit} (around Eqs. 
(18)--(19))\n\\begin{equation}\n 0 \\leq G(k; y_1, y'_1) \\leq \\frac{e^{-k|y_1 -y'_1|}}{|y_1 -y'_1|} \\label{6.8;80}\n\\end{equation}\naway from $y_1 = y'_1$.\nUsing the estimate\n\\begin{equation}\n \\| D^{(1)} (\\epsilon, \\epsilon') \\|^2 \\leq \\sup_{ p_{y_c}} \\int \\bigl|\\mathcal{D}^{(1)} (y_1, y'_1\n, p_{y_2}, p'_{y_2} ; p_{y_c} )\\bigr|^2 d^3 y_1 d^3 y'_1 d^3 p_{y_2} d^3\np'_{y_2}\n\\end{equation}\nand the upper bound (\\ref{6.8;80}) we get\n\\begin{gather}\n\\| D^{(1)} (\\epsilon, \\epsilon') \\|^2 \\leq C_0 \\sup_{ p_{y_c}} \\int_{|p_{y_2}|\\leq 1}\n\\left\\{\\bigl( p^2_{y_2} + |p_{y_c}|^2\\bigr)^{\\frac \\zeta2} + (\\epsilon')^{\\frac\n\\zeta2} \\right\\}^{-2}\n\\left( p^2_{y_2} + |p_{y_c}|^2 + \\epsilon \\right)^{-\\frac 12} d^3 p_{y_2} \\nonumber\\\\\n\\leq C_0 \\int_{|p_{y_2}|\\leq 1} \\frac{d^3 p_{y_2}}{|p_{y_2}|^{2\\zeta+1}} \\leq\n2\\pi C_0 (1-\\zeta)^{-1}\n\\end{gather}\nwhere\n\\begin{equation}\n C_0 := \\frac 1{2^{3N-3} \\pi^{3N-5} \\gamma^6} \\left\\{\\int \\Bigl|\\widehat{\\bigl|\nv_{23}\\bigr|^{\\frac 12}} (\\gamma^{-1}p_{y_2})\\Bigr|^2 d^3 p_{y_2} \\right\\}\n \\left\\{\\int \\Bigl|v_{12}\\bigl( (2\\mu_{12})^{-1} y_1 \\bigr)\\Bigr| d^3 y_1\n\\right\\}\n\\end{equation}\nis finite due to $v_{ik} \\in L^1 (\\mathbb{R}^3)$. Therefore,\n$\\sup_{\\epsilon, \\epsilon' >0} \\| D^{(1)} (\\epsilon, \\epsilon') \\| <\\infty $. The proof that\n$\\sup_{\\epsilon, \\epsilon'>0} \\| D^{(2)} (\\epsilon, \\epsilon') \\| <\\infty $ is trivial (c. f. proof of\nLemma~2 in \\cite{jpasubmit}).\n\\end{proof}\n\nThe following corollaries to Theorem~\\ref{3.8;8} provide further necessary estimates. We\nagreed to set $\\|\\varphi(\\epsilon)\\| = 1$ for $\\epsilon \\in [0, \\pmb{\\epsilon}]$, while for $\\psi(\\epsilon)$ we can\nprove\n\\begin{corollary} \\label{20.9;6}\n\\begin{equation}\n C_\\psi := \\sup_{\\epsilon >0}\\|\\psi(\\epsilon)\\| < \\infty\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\n The proof easily follows from (\\ref{11.01.12\/1}) and Theorem~\\ref{3.8;8}.\n\\end{proof}\n\\begin{corollary}\\label{24.9;9}\n For $\\epsilon , \\epsilon' >0$ let us set\n\\begin{equation}\\label{11.10;1}\n \\eta(\\epsilon, \\epsilon') := |v_{12}|^{\\frac12} B_{12}^{-1}(\\epsilon') \\bigl( H_0 +\n\\epsilon\\bigr)^{-\\frac 12} \\varphi(\\epsilon) ,\n\\end{equation}\nwhere $B_{12} (\\epsilon)$ was defined in (\\ref{7.8;15}) and $\\zeta \\in (0, 1)$. Then\n\\begin{equation}\n C_\\eta := \\sup_{\\epsilon >0} \\sup_{0 < \\epsilon' \\leq \\epsilon}\\|\\eta(\\epsilon, \\epsilon')\\| < \\infty . \n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nUsing (\\ref{11.01.12\/1}) we can write\n\\begin{equation}\n \\eta(\\epsilon, \\epsilon') = |v_{12}|^{\\frac12} B_{12}^{-1}(\\epsilon') \\psi(\\epsilon) . \n\\end{equation}\nNow the result follows from (\\ref{6.8;6}).\n\\end{proof}\nNote that due to continuity of $\\varphi(\\epsilon)$ on $[0, \\pmb{\\epsilon}]$ and\ncompactness of the interval\n\\begin{equation}\n \\sup_{\\substack{\\epsilon_{1, 2} \\in [0, \\pmb{\\epsilon}] \\\\ |\\epsilon_1 - \\epsilon_2| <\nd}} \\|\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2)\\| = \\hbox{o} (d) \\quad \\quad (\\textnormal{when $d \\to 0$}). \\label{7.8;25}\n\\end{equation}\nThe following is also true\n\\begin{corollary}\\label{19.9;10}\nLet $\\alpha \\in [1, \\frac 32 )$. 
For all $\\varepsilon > 0$ there exists\n$\\delta > 0$ such that\n\\begin{equation}\n\\sup_{\\epsilon ' > 0} \\sup_{s \\in \\mathbb{R}^3} \\| G_j^\\alpha (s, \\epsilon')\n(\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2))\\| < \\varepsilon \\label{7.8;21}\n\\end{equation}\nfor all $|\\epsilon_1 - \\epsilon_2 | < \\delta$ and $j = 1, 2, \\ldots, N$.\n\\end{corollary}\n\\begin{proof}\n Without losing generality let us set $j = 1$. Let $\\alpha'$ be an exponent for which\n\\begin{equation}\n M_{\\alpha'} := \\sup_{\\epsilon > 0} \\sup_{\\epsilon ' >0} \\sup_{s \\in\n\\mathbb{R}^3} \\|G_1^{\\alpha'} (s, \\epsilon') \\varphi(\\epsilon)\\|^2\n\\end{equation}\nis finite by Theorem~\\ref{3.8;8}. For any $r \\in \\mathbb{R}_+ \/ \\{0\\}$, splitting the square of the norm in (\\ref{7.8;21}) into the two contributions determined by $r$ and estimating each of them separately, we obtain\n\\begin{equation}\n\\sup_{\\epsilon ' > 0} \\sup_{s \\in \\mathbb{R}^3} \\| G_1^\\alpha (s, \\epsilon')\n(\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2))\\|^2 \\leq r^{-\\alpha} \\|\n\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2) \\|^2 + 2 (2r)^{\\alpha' - \\alpha}\nM_{\\alpha'} .\n\\end{equation}\nNow (\\ref{7.8;21}) for $j = 1$ holds if the following two inequalities are fulfilled:\n\\begin{gather}\n4 (2r)^{\\alpha' - \\alpha} M_{\\alpha'} \\leq \\varepsilon^2 \\label{7.8;22} , \\\\\n2 r^{-\\alpha} \\| \\varphi(\\epsilon_1) - \\varphi(\\epsilon_2) \\|^2 \\leq\n\\varepsilon^2 \\label{7.8;23} . \n\\end{gather}\nLet us fix $r$ so that (\\ref{7.8;22}) is satisfied. Then by (\\ref{7.8;25}) we can always\nchoose an appropriate $\\delta >0$ so that\n(\\ref{7.8;23}) holds for $|\\epsilon_1 - \\epsilon_2| < \\delta$.\n\\end{proof}\n\nUsing (\\ref{11.01.12\/1}) let us define\n\\begin{equation}\n \\tilde \\varphi(\\epsilon) := - \\mu^{-1} (\\epsilon) V \\psi (\\epsilon) \\quad \\quad (\\epsilon \\in (0, \\pmb{\\epsilon}]). \\label{18.9;1}\n\\end{equation}\nObviously\n\\begin{equation}\n\\bigl(H_0 + \\epsilon\\bigr)^{\\frac 12} \\varphi(\\epsilon) = \\bigl(H_0 + \\epsilon\\bigr) \\psi(\\epsilon) = \\tilde \\varphi(\\epsilon).\n\\end{equation}\n\\begin{corollary}\\label{18.9;4}\nThe norm limits $\\psi(0)= \\lim_{\\epsilon \\to +0} \\psi(\\epsilon)$ and $\\tilde \\varphi(0)=\n\\lim_{\\epsilon \\to +0} \\tilde \\varphi(\\epsilon)$ exist and $\\tilde \\varphi(\\epsilon), \\psi(\\epsilon)$ are norm--continuous on $[0, \\pmb{\\epsilon}]$. The following is also true for\n$L > 0$\n\\begin{equation}\\label{19.9;3}\n \\sup_{\\epsilon \\in [0, \\pmb{\\epsilon}]}\\left\\|\\bigl(1-\\chi_L (|x|^2)\\bigr) \\tilde \\varphi(\\epsilon)\\right\\| = \\hbox{o} (L^{-1}) \\quad \\quad (\\textnormal{when $L \\to \\infty$})\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nLet us first prove the following statement: for all $\\varepsilon > 0$ there exist $\\delta , \\delta' > 0$ such that\n\\begin{gather}\n\\sup_{\\substack{\\epsilon_{1, 2} \\in (0, \\pmb{\\epsilon}] \\\\ |\\epsilon_1 - \\epsilon_2| <\\delta}} \\|\\psi (\\epsilon_1) - \\psi(\\epsilon_2)\\| < \\varepsilon \\label{18.9;2} \\\\\n\\sup_{\\substack{\\epsilon_{1, 2} \\in (0, \\pmb{\\epsilon}] \\\\ |\\epsilon_1 - \\epsilon_2| <\\delta'}} \\|\\tilde \\varphi (\\epsilon_1) - \\tilde \\varphi (\\epsilon_2)\\| < \\varepsilon \\label{18.9;3}\n\\end{gather}\nNote that (\\ref{18.9;3}) easily follows from (\\ref{18.9;2}) since $V$ is relatively $H_0$--bounded with relative bound zero and\n$\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\|H_0 \\psi(\\epsilon)\\| < \\infty$ (see, e.g., Lemma~1 in \\cite{1}). 
For the same reason\n$\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}] } \\|\\tilde \\varphi(\\epsilon)\\|< \\infty$. The norm--continuity of $\\psi(\\epsilon), \\tilde \\varphi(\\epsilon)$ on $[0, \\pmb{\\epsilon}]$\nfollows directly from (\\ref{18.9;2})--(\\ref{18.9;3}). To prove (\\ref{18.9;2}), suppose that $0 <\\epsilon_1 \\leq \\epsilon_2 \\leq \\pmb{\\epsilon}$. Then\n\\begin{gather}\n \\|\\psi (\\epsilon_1) - \\psi(\\epsilon_2)\\| \\leq \\left\\|\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12}\\bigl[\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2)\\bigr]\\right\\| +\n\\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]\\varphi(\\epsilon_2)\\right\\| \\nonumber\\\\\n\\leq \\left\\|G_1 (0, \\epsilon_1) \\bigl[\\varphi(\\epsilon_1) - \\varphi(\\epsilon_2)\\bigr]\\right\\| + \\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]\\varphi(\\epsilon_2)\\right\\| . \n\\end{gather}\nBy Corollary~\\ref{19.9;10} we can choose $\\delta > 0$ so that\n\\begin{equation}\\label{18.9;9}\n \\|\\psi (\\epsilon_1) - \\psi(\\epsilon_2)\\| < \\varepsilon\/2 + \\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]\\varphi(\\epsilon_2)\\right\\|\n\\end{equation}\nfor $|\\epsilon_1 - \\epsilon_2| < \\delta$. Let us introduce $f:\\mathbb{R}_+ \\to \\mathbb{R}_+ $ such that $f(r) = r^{2\/3}$ for $r \\in [0, 1]$ and $f(r) = 1$ for $r \\geq 1$. Now we consider\nthe last term in (\\ref{18.9;9}):\n\\begin{gather}\n \\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]\\varphi(\\epsilon_2)\\right\\| \\nonumber\\\\\n\\leq\n\\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]f(H_0 + \\epsilon_1)\\right\\| \\left\\|\\left[f(H_0 + \\epsilon_1)\\right]^{-1}\\varphi(\\epsilon_2)\\right\\| . \\label{18.9;10}\n\\end{gather}\nThe expression $f(H_0 + \\epsilon_1)$ in (\\ref{18.9;10}) is to be understood in terms of the functional calculus of self--adjoint operators. \nUsing the Fourier transform it is easy to see that\n\\begin{equation}\n \\left\\|\\Bigl[\\bigl(H_0 + \\epsilon_1\\bigr)^{-\\frac 12} - \\bigl(H_0 + \\epsilon_2\\bigr)^{-\\frac 12}\\Bigr]f(H_0 + \\epsilon_1)\\right\\| \\leq|\\epsilon_1 - \\epsilon_2|^{1\/6}\n\\end{equation}\nfor $|\\epsilon_2 - \\epsilon_1| \\leq 1$ (it suffices to check that $\\bigl[(t+\\epsilon_1)^{-1\/2} - (t + \\epsilon_2)^{-1\/2}\\bigr] f(t+\\epsilon_1) \\leq |\\epsilon_1 - \\epsilon_2|^{1\/6}$ for all $t \\geq 0$, which one verifies by treating the cases $t + \\epsilon_1 \\leq |\\epsilon_1 - \\epsilon_2|$ and $t + \\epsilon_1 > |\\epsilon_1 - \\epsilon_2|$ separately). The second norm in (\\ref{18.9;10}) can be estimated as follows \n \\begin{equation}\n \\left\\|\\left[f(H_0 + \\epsilon_1)\\right]^{-1}\\varphi(\\epsilon_2)\\right\\| \\leq 1 + \\|G^{\\frac 43}_1 (0 , \\epsilon_1) \\varphi(\\epsilon_2) \\| , \n \\end{equation}\nwhere the last norm is uniformly bounded by Theorem~\\ref{3.8;8}. Thus we can always choose $\\delta > 0$ so as to ensure that the last term in (\\ref{18.9;9}) is less than $\\varepsilon\/2$. Eq.~(\\ref{19.9;3}) is a\ntrivial consequence of the norm--continuity of $\\tilde \\varphi(\\epsilon)$ on $[0, \\pmb{\\epsilon}]$.\n\\end{proof}\n\n\n\\section{Proof of Main Theorem}\\label{1.08;2}\n\nThroughout this section we assume that the N--particle Hamiltonian (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies the assumption R1. 
\nFor $j = 1, 2, \\ldots, N$ let $\\mathfrak{C}_j$ denote the subsystem\nof $N-1$ particles obtained by removing particle $j$.\nFor each subsystem $\\mathfrak{C}_j$ we introduce the operators\n\\begin{gather}\n K_j (\\epsilon):= -\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} V_{\\{j\\}} \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} , \\\\\nL_j (\\epsilon) := -\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} (V-V_{\\{j\\}}) \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} ,\n\\end{gather}\nwhere $\\epsilon >0$ and $V_{\\{j\\}}$ was defined in (\\ref{7.8;1a}).\nClearly, $K_j, L_j \\in \\mathfrak{B}(L^2(\\mathbb{R}^{3N-3}))$ and\n\\begin{equation}\n K (\\epsilon)= K_j(\\epsilon) + L_j(\\epsilon) \\quad \\quad (j = 1, \\ldots, N).\n\\end{equation}\nAfter an appropriate Fourier transform $K_j (\\epsilon)$ becomes the BS operator for the subsystem\n$\\mathfrak{C}_j$.\nSuppose that the subsystem $\\mathfrak{C}_j$ (as a system of $N-1$ particles) is\nat critical coupling and satisfies the conditions of Theorem~\\ref{3.8;3}.\nUsing Theorem~\\ref{3.8;3} and (\\ref{7.01;41}) for each such subsystem we can define $\\pmb{\\epsilon}_j$, \n$\\varphi_j (\\epsilon) \\in L^2 (\\mathbb{R}^{3N-6})$, $\\mu_j (\\epsilon)$. Inequality\n(\\ref{mubound}) then reads $1- \\mu_j (\\epsilon) \\geq a_\\mu^{(j)}\\epsilon$ for\n$\\epsilon \\in [0, \\pmb{\\epsilon}_j]$.\nIt is convenient to set $\\pmb{\\epsilon} := \\min_j \\pmb{\\epsilon}_j$ and $a_\\mu := \\min_j\na_\\mu^{(j)}$ (where the minima are taken over all such $j$ for which\n$\\mathfrak{C}_j$ satisfies the conditions of Theorem~\\ref{3.8;3}). The function $\\mu_j (\\epsilon)$ is defined as in (\\ref{7.01;41}) with this choice of $\\pmb{\\epsilon}$. \nNote that $\\mu_j (\\epsilon) \\geq (1+\\omega\/2)^{-1}$, where $\\omega$ is defined in R1. \nLet us extend these definitions as follows: for $\\epsilon \\in [0, \\pmb{\\epsilon}]$ the\nfunctions $\\varphi_j (\\epsilon) , \\tilde \\varphi_j (\\epsilon)$ are defined through\nTheorem~\\ref{3.8;3} and (\\ref{18.9;1}), respectively, \nand $\\varphi_j (\\epsilon) = \\tilde \\varphi_j (\\epsilon) = 0$ if $\\epsilon > \\pmb{\\epsilon}$.\n\n\n Now suppose that the subsystem $\\mathfrak{C}_1$ is at critical coupling. We use\nJacobi coordinates $x_1, \\ldots, x_{N-1}$,\nwhich we have already introduced in Sec.~\\ref{1.08;1}. Then $\\varphi_1 (\\epsilon)$\ndepends explicitly\non the coordinates as $\\varphi_1 (\\epsilon; x_r)$. 
Similar to (\\ref{7.8;31}) for $\\epsilon >0$ we\ndefine the projection operator $\\hat P_1 (\\epsilon)$,\nwhich acts on $\\hat f (p_1, x_r) \\in L^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\\label{29.8;3}\n \\hat P_1 (\\epsilon) \\hat f := \n \\varphi_1 (\\epsilon + p_1^2; x_r) {\\displaystyle \\int \\hat f (p_1, x_r)}\n\\varphi^*_1 (\\epsilon + p_1^2; x_r) \\: d^{3N-6} x_r \n\\end{equation}\nIts Fourier--transformed version is $P_1 (\\epsilon) := \\mathcal{F}_1^{-1}\n\\hat P_1 (\\epsilon)\\mathcal{F}_1$, where $\\mathcal{F}_1$ was defined in\n(\\ref{7.8;51}).\nAdditionally, let us introduce the operator functions $\\mathcal{P}_1 , \\mathfrak{m}_1 : \\mathbb{R}_+ \\to \\mathfrak{B} (L^2 (\\mathbb{R}^{3N-3}))$, and \n$\\mathfrak{g}_1 : \\mathbb{R}_+ \/ \\{0\\}\\to \\mathfrak{B} (L^2 (\\mathbb{R}^{3N-3}))$, which act on\n$f(x_1, x_r) \\in L^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{gather}\n \\mathfrak{m}_1 (\\epsilon)f = \\mathcal{F}_1^{-1} \\mu_1 (\\epsilon + p_1^2)\n(\\mathcal{F}_1 f) , \\\\\n\\mathcal{P}_1 (\\epsilon) := \\mathfrak{m}_1 (\\epsilon) P_1 (\\epsilon) , \\\\\n\\mathfrak{g}_1 (\\epsilon)f = \\mathcal{F}_1^{-1} \\left(\\left[1-\\mu_1 (\\epsilon\n+ p_1^2) \\right]^{-\\frac 12} - 1\\right) (\\mathcal{F}_1 f). \\label{24.9;26}\n\\end{gather}\nSimilarly, for each subsystem $\\mathfrak{C}_j$ at critical coupling we define\n$\\hat P_j , P_j, \\mathcal{P}_j , \\mathfrak{g}_j , \\mathfrak{m}_j, \\tilde \\varphi_j (\\epsilon)$,\nwhere for each $j$ one has to choose appropriate Jacobi coordinates.\nIf the subsystem $\\mathfrak{C}_j$ is not at critical coupling we simply set\n$\\hat P_j , P_j, \\mathcal{P}_j , \\mathfrak{g}_j, \\mathfrak{m}_j , \\tilde \\varphi_j (\\epsilon) = 0$.\nAccording to Lemma~\\ref{7.8;53} the operator\n\\begin{equation}\n\\mathcal{R}_j (\\epsilon) := \\mathfrak{g}_j (\\epsilon) P_j (\\epsilon)= P_j (\\epsilon) \\mathfrak{g}_j (\\epsilon) \\quad \\quad (\\epsilon >0)\\label{20.9;11}\n\\end{equation}\nsatisfies\n\\begin{equation}\\label{8.8;1}\n \\mathcal{R}_j (\\epsilon) = \\bigl( 1- \\mathcal{P}_j (\\epsilon) \\bigr)^{-1\/2} -\n1.\n\\end{equation}\n\nFrom definitions it is clear that the operator $\\mathcal{P}_j (\\epsilon)$ is not norm--continuous. We shall now construct its norm--continuous\nanalogue. Let us introduce a continuous function $\\theta: \\mathbb{R} \\to \\mathbb{R}$ depending on a parameter $\\gamma >0$ such that\n$\\theta(r) = 1$ if $r\\in [1-\\gamma , 1+\\gamma]$; $\\theta(r) = 0$ if $r\\in \\mathbb{R}\/(1-2\\gamma , 1+2\\gamma)$; in the intervals $[1-2\\gamma, 1-\\gamma]$ and\n$[1+\\gamma, 1+2\\gamma]$ the function $\\theta(r)$ is linear, see Fig.~\\ref{28.08;1}. Recall that $\\mu_j (\\epsilon)$ on $[0, \\pmb{\\epsilon}]$ is continuous and monotone decreasing. Let us set\n$\\gamma$ so that the following equation is fulfilled\n\\begin{equation}\n 1 - 2\\gamma = \\max_j \\mu_j (\\pmb{\\epsilon}\/2) ,\n\\end{equation}\nwhere the maximum is taken over all such $j$ that $\\mathfrak{C}_j$ is at critical coupling. If $\\mathfrak{C}_1$ is at critical coupling then the operators\n$\\mathfrak{m}^{(c)}_1 (\\epsilon), \\mathcal{P}^{(c)}_1 (\\epsilon) $ given by expressions \n\\begin{gather}\n \\mathfrak{m}^{(c)}_1 (\\epsilon)f := \\mathcal{F}_1^{-1} \\theta\\bigl( \\mu_1 (\\epsilon + p_1^2) \\bigr)\n(\\mathcal{F}_1 f) \\label{29.8;11}\\\\\n\\mathcal{P}^{(c)}_1 (\\epsilon) := \\mathfrak{m}^{(c)}_1 (\\epsilon) P_1 (\\epsilon)\n\\end{gather}\nare norm--continuous on $[0, \\infty)$ (the superscript ``c'' stands for continuous). 
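For definiteness, one admissible choice of $\\theta$ consistent with the above description and with Fig.~\\ref{28.08;1} (any continuous function with the stated properties would do equally well) is\n\\begin{equation}\n \\theta(r) = \\begin{cases}\n 0 , & r \\leq 1-2\\gamma \\quad \\textnormal{or} \\quad r \\geq 1+2\\gamma , \\\\\n \\gamma^{-1}\\bigl(r - (1-2\\gamma)\\bigr) , & 1-2\\gamma \\leq r \\leq 1-\\gamma , \\\\\n 1 , & 1-\\gamma \\leq r \\leq 1+\\gamma , \\\\\n \\gamma^{-1}\\bigl((1+2\\gamma) - r\\bigr) , & 1+\\gamma \\leq r \\leq 1+2\\gamma .\n \\end{cases}\n\\end{equation}\n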
Similarly, using appropriate coordinates\none defines $\\mathfrak{m}^{(c)}_j (\\epsilon), \\mathcal{P}^{(c)}_j (\\epsilon) $ if\n$\\mathfrak{C}_j$ is at critical coupling and sets $\\mathfrak{m}^{(c)}_j (\\epsilon), \\mathcal{P}^{(c)}_j (\\epsilon) = 0$ otherwise.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{fig_new.eps}\n\\caption{Behavior of the continuous function $\\theta(r)$.}\n\\label{28.08;1}\n\\end{center}\n\\end{figure}\nIt is easy to see that $\\mathfrak{m}_j (\\epsilon) - \\mathfrak{m}^{(c)}_j (\\epsilon) \\geq 0$, therefore \n\\begin{equation}\\label{28.08;3}\n\\mathcal{P}_j (\\epsilon) - \\mathcal{P}^{(c)}_j (\\epsilon) \\geq 0 \\quad \\quad (j = 1, \\ldots, N).\n\\end{equation}\nBy construction of $\\mathcal{P}^{(c)}_j$ and (\\ref{19.9;11})\n\\begin{equation}\\label{19.9;1}\n \\pmb{\\eta} := \\max_{j = 1, \\ldots, N} \\sup_{\\epsilon >0 } \\sup \\sigma\\left( K_j\n(\\epsilon) - \\mathcal{P}^{(c)}_j (\\epsilon)\\right) < 1.\n\\end{equation}\n\nThe following theorem, which is used for counting the eigenvalues, is the\ncentral ingredient in the proof of Theorem~\\ref{1.08;9}.\n\\begin{theorem}\\label{8.8.;8}\nSuppose that $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) is such that $\\sigma_{ess} (H) = [0, \\infty)$. Then the following relation\nholds for $N_\\epsilon$ defined in (\\ref{7.8;55})\n\\begin{equation}\n N_\\epsilon = \\# (\\textnormal{evs}\\: (\\mathcal{T}_k (\\epsilon)) > 1) \\quad \\quad (k = 1,\n\\ldots, N),\n\\end{equation}\nwhere $\\epsilon >0$ and\n\\begin{gather}\n \\mathcal{T}_k (\\epsilon) := K(\\epsilon) - \\sum_{i=1}^k \\mathcal{P}_i (\\epsilon)\n+ \\mathcal{M}_k (\\epsilon) , \\\\\n\\mathcal{M}_1 (\\epsilon):= \\mathcal{R}_1 L_1 \\mathcal{R}_1 + \\mathcal{R}_1 L_1 +\nL_1 \\mathcal{R}_1 , \\label{8.8;2}\n\\end{gather}\nand the operators $\\mathcal{M}_k (\\epsilon)$ for $k = 2, 3, \\ldots, N$ are\ndetermined through the following recurrence relation\n\\begin{gather}\n\\mathcal{M}_k (\\epsilon) = \\bigl(1+ \\mathcal{R}_k\\bigr)\n\\mathcal{M}_{k-1}\\bigl(1+ \\mathcal{R}_k\\bigr) + \\mathcal{R}_k \\left\\{ L_k -\n\\sum_{i=1}^{k-1} \\mathcal{P}_i \\right\\} \\mathcal{R}_k \\nonumber\\\\\n+ \\mathcal{R}_k \\left\\{ L_k - \\sum_{i=1}^{k-1} \\mathcal{P}_i \\right\\} +\n\\left\\{ L_k - \\sum_{i=1}^{k-1} \\mathcal{P}_i \\right\\} \\mathcal{R}_k . \\label{8.8.7}\n\\end{gather}\nBesides, $\\sigma_{ess} (\\mathcal{T}_k (\\epsilon )) \\cap (1, \\infty) = \\emptyset$ for $k = 1, \\ldots, N$.\n\\end{theorem}\n\\begin{proof}\n The proof consists in applying the BS principle $N$ times. As\nthe first step let us write\n\\begin{equation}\n 1-K(\\epsilon) = 1- \\mathcal{P}_1 (\\epsilon) - \\left[K(\\epsilon) - \\mathcal{P}_1\n(\\epsilon) \\right] ,\n\\end{equation}\nwhere the operator on the lhs has $N_\\epsilon$ negative eigenvalues, see (\\ref{7.8;55}). By Theorem~\\ref{31.07;16} we also have\n$\\sigma_{ess} ( 1-K(\\epsilon)) \\subset [0, \\infty)$. 
Obviously, for $\\epsilon >0$\n\\begin{equation}\n 1- \\mathcal{P}_j (\\epsilon) \\geq 1- \\mu_j (\\epsilon) >0 \\quad \\quad (j = 1, \\ldots, N)\n\\end{equation}\nApplying the BS principle\n(see a remark after Theorem~\\ref{31.07;16}) we obtain\n\\begin{equation}\n N_\\epsilon = \\#\\bigl(\\textnormal{evs}\\:( \\mathcal{T}_1 (\\epsilon)) > 1 \\bigr)\n\\end{equation}\nwhere\n\\begin{equation}\n \\mathcal{T}_1 (\\epsilon ) = \\bigl( 1- \\mathcal{P}_1 \\bigr)^{-1\/2} \\bigl[ K -\n\\mathcal{P}_1\\bigr] \\bigl( 1- \\mathcal{P}_1 \\bigr)^{-1\/2} .\n\\end{equation}\nBesides, $\\sigma_{ess} (\\mathcal{T}_1 (\\epsilon )) \\cap (1, \\infty) = \\emptyset$. Using (\\ref{8.8;1}) and $K(\\epsilon) = K_1(\\epsilon) + L_1(\\epsilon)$ we get\n\\begin{equation}\n \\mathcal{T}_1 (\\epsilon ) = K - \\mathcal{P}_1 + L_1 \\mathcal{R}_1 +\n\\mathcal{R}_1 L_1 + \\mathcal{R}_1 L_1 \\mathcal{R}_1 = K - \\mathcal{P}_1 +\n\\mathcal{M}_1\n\\end{equation}\nwhere we have used $\\mathcal{R}_1(K_1 - \\mathcal{P}_1) = 0$ and (\\ref{8.8;2}). Now\nwe do the second step and use that\n\\begin{equation}\n 1 - \\mathcal{T}_1 (\\epsilon ) = 1- \\mathcal{P}_2 (\\epsilon) -\n\\left[\\mathcal{T}_1 (\\epsilon) - \\mathcal{P}_2 (\\epsilon) \\right]\n\\end{equation}\nhas $N_\\epsilon$ negative eigenvalues (counting multiplicities). Therefore, again by the BS principle\n\\begin{equation}\n N_\\epsilon = \\#\\bigl(\\textnormal{evs}\\:( \\mathcal{T}_2 (\\epsilon)) > 1 \\bigr),\n\\end{equation}\nwhere\n\\begin{gather}\n \\mathcal{T}_2 (\\epsilon ) = \\bigl( 1- \\mathcal{P}_2 \\bigr)^{-1\/2} \\bigl[\n\\mathcal{T}_1 (\\epsilon) - \\mathcal{P}_2\\bigr] \\bigl( 1- \\mathcal{P}_2\n\\bigr)^{-1\/2} \\nonumber\\\\\n= \\bigl( 1+ \\mathcal{R}_2 \\bigr) \\bigl[ K - \\mathcal{P}_1 - \\mathcal{P}_2 +\n\\mathcal{M}_1 \\bigr] \\bigl( 1+ \\mathcal{R}_2 \\bigr) \\nonumber\\\\\n= K - \\mathcal{P}_1 - \\mathcal{P}_2 + \\bigl( 1+ \\mathcal{R}_2 \\bigr)\n\\mathcal{M}_1 \\bigl( 1+ \\mathcal{R}_2 \\bigr) + \\mathcal{R}_2 \\bigl[ K -\n\\mathcal{P}_1 - \\mathcal{P}_2 \\bigr] \\mathcal{R}_2 \\nonumber\\\\\n+ \\mathcal{R}_2 \\bigl[ K - \\mathcal{P}_1 - \\mathcal{P}_2 \\bigr] + \\bigl[ K -\n\\mathcal{P}_1 - \\mathcal{P}_2 \\bigr] \\mathcal{R}_2\n\\end{gather}\nhas $N_\\epsilon$ eigenvalues larger than one. From the BS principle it also follows that $\\sigma_{ess} (\\mathcal{T}_2 (\\epsilon )) \\cap (1, \\infty) = \\emptyset$.\nFor the last three terms in square\nbrackets we substitute $K = K_2 + L_2$ and use $\\mathcal{R}_2(K_2 -\n\\mathcal{P}_2) = (K_2 - \\mathcal{P}_2)\\mathcal{R}_2 = 0$.\nThis leads to\n\\begin{equation}\n \\mathcal{T}_2 = K - \\mathcal{P}_1 - \\mathcal{P}_2 + \\mathcal{M}_2 ,\n\\end{equation}\nwhere $\\mathcal{M}_2 $ is defined through (\\ref{8.8.7}). Proceeding in the same way, that is writing each time\n\\begin{equation}\n 1 - \\mathcal{T}_k (\\epsilon ) = 1- \\mathcal{P}_{k+1} (\\epsilon) -\n\\left[\\mathcal{T}_k (\\epsilon) - \\mathcal{P}_{k+1} (\\epsilon) \\right] \\quad (k = 3, \\ldots, N-1)\n\\end{equation}\nand applying the BS principle we prove the theorem.\n\\end{proof}\nLet us define the norm--continuous operator function $\\mathcal{G}_N : \\mathbb{R}_+ \\to \\mathfrak{B}(L^2 (\\mathbb{R}^{3N-3}))$ as\n\\begin{equation}\n \\mathcal{G}_N (\\epsilon) := K(\\epsilon) - \\sum_{j=1}^N \\mathcal{P}^{(c)}_j (\\epsilon). 
\\label{13.8;11}\n\\end{equation}\n\\begin{lemma}\\label{13.8;4}\n There exist $q, \\varepsilon_0 >0$ such that for $\\epsilon \\in [0, \\varepsilon_0]$\n\\begin{equation}\n \\sigma_{ess} \\bigl(\\mathcal{G}_N (\\epsilon)\\bigr) \\subset (- \\infty , 1-q).\n\\end{equation}\n\\end{lemma}\nThe proof of Lemma~\\ref{13.8;4} will be given later. Let us define \n\\begin{equation}\n \\mathcal{A}_\\infty := \\left\\{ W:\\mathbb{R}_+ \/\\{0\\} \\to \\mathfrak{B}(L^2 (\\mathbb{R}^{3N-3})) \\Bigl| \\; \\; \\sup_{\\epsilon >0} \\| W(\\epsilon)\\| < \\infty \\right\\} . \n\\end{equation}\nBy definition $W \\in \\mathfrak{J}\\subset \\mathcal{A}_\\infty$ {\\it iff} for all $\\beta >0$ there exists a\ndecomposition $W = W^{B} + W^{HS}$, \nwhere $W^{B}, W^{HS} \\in \\mathcal{A}_\\infty$ satisfy the following inequalities\n\\begin{gather}\n\\sup_{\\epsilon >0} \\| W^{B} (\\epsilon)\\| < \\beta \\\\\n\\sup_{\\epsilon >0} \\| W^{HS} (\\epsilon)\\|_{HS} < \\infty .\\label{13.8;8}\n\\end{gather}\n(Eq.~(\\ref{13.8;8}) implies that $W^{HS} (\\epsilon)$ is a Hilbert--Schmidt operator for all $\\epsilon >0$). Obviously, if $W \\in \\mathfrak{J}$ then $W(\\epsilon)$ is a \ncompact operator for all $\\epsilon >0$. \n\\begin{lemma}\\label{13.8;9}\n $\\mathcal{A}_\\infty$ is an algebra and $\\mathfrak{J}$ is a two--sided ideal in\n$\\mathcal{A}_\\infty$.\n\\end{lemma}\nThe proof of Lemma~\\ref{13.8;9} is a trivial consequence of the Hilbert--Schmidt class properties and we omit it.\n\\begin{lemma}\\label{13.8;1}\n The function $\\mathcal{M}_N\\in \\mathcal{A}_\\infty$ defined in (\\ref{8.8.7}) is such that $\\mathcal{M}_N \\in \\mathfrak{J}$.\n\\end{lemma}\nThe proof of Lemma~\\ref{13.8;1} will be given later.\n\n\\begin{proof}[Proof of Theorem~\\ref{1.08;9}]\n Let us assume, by contradiction, that $N_\\epsilon \\to \\infty$. Then by Theorem~\\ref{8.8.;8}\n\\begin{equation}\n \\lim_{\\epsilon \\to +0 }\\# (\\textnormal{evs}\\: (\\mathcal{T}_N (\\epsilon)) > 1) = \\infty . \\label{13.8;31a}\n\\end{equation}\nLet us define\n\\begin{equation}\n \\mathcal{T'}_N (\\epsilon) := \\mathcal{T}_N (\\epsilon) + \\sum_{j = 1}^N \\left[ \\mathcal{P}_j (\\epsilon) - \\mathcal{P}^{(c)}_j (\\epsilon) \\right].\n\\end{equation}\nUsing (\\ref{13.8;11}) we can write\n\\begin{equation}\n \\mathcal{T'}_N(\\epsilon) = \\mathcal{G}_N (0) + \\left\\{ \\mathcal{G}_N (\\epsilon) -\n\\mathcal{G}_N (0)\\right\\} + \\mathcal{M}_N(\\epsilon) . \\label{13.8;32}\n\\end{equation}\nBy norm--continuity of $\\mathcal{G}_N (\\epsilon)$ and by Lemma~\\ref{13.8;4} there exist $\\varepsilon_0, q >0$ such that for all $\\epsilon \\in (0, \\varepsilon_0)$\n\\begin{gather}\n \\|\\mathcal{G}_N (\\epsilon) - \\mathcal{G}_N (0)\\| < q , \\label{13.8;33}\\\\\n \\mathcal{G}_N (0) \\leq 1 - 3q + \\mathcal{C}_f , \\label{13.8;34}\n\\end{gather}\nwhere $\\mathcal{C}_f$ is a fixed finite--rank self--adjoint operator.\nBy Lemma~\\ref{13.8;1} we can write the decomposition\n\\begin{equation}\n \\mathcal{M}_N (\\epsilon) = \\mathcal{M}_N^B (\\epsilon) + \\mathcal{M}_N^{HS} (\\epsilon) , \\label{13.8;35}\n\\end{equation}\nwhere\n\\begin{gather}\n\\|\\mathcal{M}_N^B (\\epsilon)\\|< q \\label{13.8;36} \\\\\n\\sup_{\\epsilon \\in (0,\\pmb{\\epsilon}]} \\| \\mathcal{M}_N^{HS} (\\epsilon) \\|_{HS} =\\vartheta < \\infty . 
\\label{13.8;37}\n\\end{gather}\nOn the one hand, from (\\ref{13.8;31a}) we infer that for any $n \\in \\mathbb{Z}_+$ there is\n$\\epsilon \\in (0, \\varepsilon_0)$ and an orthonormal set\n$\\phi_1 , \\ldots, \\phi_n \\in L^2 (\\mathbb{R}^{3N-3})$ such that $(\\phi_i, \\mathcal{T}_N(\\epsilon) \\phi_i) >\n1$ holds for $i = 1, \\ldots, n$. Due to (\\ref{28.08;3}) $(\\phi_i, \\mathcal{T'}_N(\\epsilon) \\phi_i) >\n1$ holds as well for $i = 1, \\ldots, n$. With (\\ref{13.8;32})--(\\ref{13.8;36}) this results in\n\\begin{equation}\n \\bigl| \\bigl(\\phi_i , [\\mathcal{C}_f + \\mathcal{M}_N^{HS}\n(\\epsilon)]\\phi_i\\bigr)\\bigr| > q \\quad \\quad (i = 1, \\ldots, n).\\label{13.8;38}\n\\end{equation}\nOn the other hand, from Lemma~\\ref{30.07;2} and (\\ref{13.8;37}), (\\ref{13.8;38}) it follows that $n\n\\leq (\\|\\mathcal{C}_f\\|_{HS} +\\vartheta)^2\/q^2$, which contradicts the fact that $n$ is an arbitrary positive integer.\n\\end{proof}\n\n\nOur next aim is to prove Lemma~\\ref{13.8;4}. Note that the operator $H_0^{1\/2} \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} $ and its \nadjoint $\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} H_0^{1\/2}$ are uniformly bounded for $\\epsilon >0$ (the second operator\ncan obviously be extended from $D(H_0^{1\/2})$ to the whole Hilbert space by the BLT theorem). Let us define\n\\begin{equation}\n \\mathcal{P}'_j (\\epsilon) := \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} H_0^{1\/2} \\mathcal{P}^{(c)}_j (0) H_0^{1\/2} \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} \\quad \\quad (\\epsilon >0) .\n\\end{equation}\n\\begin{lemma}\\label{8.8;19}\nThe following is true:\n\\begin{equation}\n \\lim_{\\epsilon \\to +0} \\left\\| \\mathcal{P}'_j (\\epsilon) - \\mathcal{P}^{(c)}_j\n(\\epsilon) \\right\\| = 0 \\quad \\quad (j = 1, \\ldots, N)\n\\end{equation}\n \\end{lemma}\n\\begin{proof}\nNote that $\\textnormal{w--}\\lim_{\\epsilon \\to +0} \\mathcal{P}'_j (\\epsilon) = \\mathcal{P}^{(c)}_j\n(0)$ because\n\\begin{equation}\n \\textnormal{s--}\\lim_{\\epsilon \\to +0} H_0^{1\/2} \\bigl( H_0 + \\epsilon\\bigr)^{-1\/2} = 1.\n\\end{equation}\nSo the lemma will be proved if we show that the operators $\\mathcal{P}'_j (\\epsilon)$\nform a Cauchy sequence as $\\epsilon \\to +0$.\nWe follow the same recipe as in Lemma~\\ref{31.07;2}. It is enough to prove that\n\\begin{equation}\n \\mathcal{D}_j (\\epsilon) := \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} H_0^{1\/2} P_j\n(0)\n\\end{equation}\nforms a Cauchy sequence as $\\epsilon \\to +0$. Repeating the arguments from\nLemma~\\ref{31.07;2} we obtain for $\\epsilon_2 \\geq \\epsilon_1 >0$\n\\begin{gather}\n\\left\\|\\mathcal{D}_j (\\epsilon_2) - \\mathcal{D}_j (\\epsilon_1)\\right\\| \\nonumber \\\\\n\\leq \\left\\|\\left\\{(H_0 + \\epsilon_2)^{\\frac 12} - (H_0 + \\epsilon_1)^{\\frac\n12}\\right\\}\n(H_0 + \\epsilon_1)^{-\\frac 12} (H_0 + \\epsilon_2)^{- \\frac 12} H_0^{1\/2} P_j\n(0)\\right\\| \\nonumber\\\\\n\\leq |\\epsilon_2 - \\epsilon_1|^{\\frac 12} |\\epsilon_2|^{- \\frac 14}\n\\left\\|H_0^{\\frac 12} (H_0 + \\epsilon_1)^{- \\frac 12} (H_0 + \\epsilon_2)^{-\n\\frac 14 } P_j (0)\\right\\| \\nonumber \\\\\n\\leq |\\epsilon_2 - \\epsilon_1|^{\\frac 12} |\\epsilon_2|^{- \\frac 14}\nC_0 , \\label{14.8;41}\n\\end{gather}\nwhere\n\\begin{equation}\n C_0 := \\sup_{\\epsilon >0} \\left\\|(H_0 + \\epsilon)^{- \\frac 14} P_j (0)\\right\\|. \\label{14.8;23}\n\\end{equation}\nIt remains to show that $C_0$ in (\\ref{14.8;23}) is finite. 
Without loss of generality let us set $j = 1$.\n We have\n\\begin{equation}\\label{29.8;1}\n C_0 = \\sup_{\\epsilon >0}\\|(p_1^2 + \\epsilon - \\Delta_{x_r})^{- \\frac 14} \\hat P_1 (0) \\| =\n\\sup_{\\epsilon >0}\\sup_{\\epsilon' >0}\\|(\\epsilon' + \\epsilon - \\Delta_{x_r})^{- \\frac 14} \\varphi_1 (\\epsilon')\\| , \n\\end{equation}\nwhere the last norm is that of $L^2 (\\mathbb{R}^{3N-6})$. The expression on the rhs of (\\ref{29.8;1}) is finite due to Theorem~\\ref{3.8;8}.\nFrom (\\ref{14.8;41}) it follows that $D_j (\\epsilon)$ form\na Cauchy sequence for $\\epsilon \\to +0$.\n\\end{proof}\n\n\nConsider a hermitian sesquilinear form\n$\\mathfrak{q}_j (f, g ) = \\bigl( H_0^{\\frac 12} f , \\mathcal{P}^{(c)}_j (0) H_0^{\\frac 12} g \\bigr)$ with the domain $D( H_0^{\\frac 12}) \\times D( H_0^{\\frac 12})$. Let us first\nshow that $\\mathfrak{q}_j$ is bounded. Using norm--continuity we get\n\\begin{gather}\n \\Bigl\\| \\bigl[\\mathfrak{m}^{(c)}_j (0)\\bigr]^{\\frac 12} P_j (0) H_0^{\\frac 12} f\\Bigr\\| = \\lim_{\\epsilon \\to +0} \\Bigl\\| \\bigl[\\mathfrak{m}^{(c)}_j (\\epsilon)\\bigr]^{\\frac 12} P_j (\\epsilon) H_0^{\\frac 12} f\\Bigr\\| \\nonumber\\\\\n= \\lim_{\\epsilon \\to +0} \\Bigl\\| \\bigl[\\mathfrak{m}_j (\\epsilon)\\bigr]^{-1} \\bigl[\\mathfrak{m}^{(c)}_j (\\epsilon)\\bigr]^{\\frac 12} P_j (\\epsilon) K_j (\\epsilon) H_0^{\\frac 12} f\\Bigr\\| \\label{29.8;12}\n\\end{gather}\nRecall that we chose $\\pmb{\\epsilon}$ so that $\\mu_j (\\pmb{\\epsilon}) \\geq (1 + \\omega\/2)^{-1}$. Hence, \n\\begin{equation}\n \\bigl[\\mathfrak{m}_j (\\epsilon)\\bigr]^{-1} \\bigl[\\mathfrak{m}^{(c)}_j (\\epsilon)\\bigr]^{\\frac 12} \\leq (1+ \\omega\/2)^{1\/2}\n\\end{equation}\nTherefore, we can rewrite (\\ref{29.8;12}) as\n\\begin{gather}\n \\Bigl\\| \\bigl[\\mathfrak{m}^{(c)}_j (0)\\bigr]^{\\frac 12} P_j (0) H_0^{\\frac 12} f\\Bigr\\| \\leq (1+ \\omega\/2)^{1\/2} \\lim_{\\epsilon \\to +0} \\| K_j (\\epsilon) H_0^{\\frac 12} f \\| \\\\\n= (1+ \\omega\/2)^{1\/2} \\lim_{\\epsilon \\to +0} \\| (H_0 + \\epsilon)^{- \\frac 12 } V_{\\{j\\}} (H_0 +\n\\epsilon)^{- \\frac 12 } H_0^{\\frac 12} f \\| \\leq c_j \\| (H_0 + \\epsilon)^{- \\frac\n12 } H_0^{\\frac 12} f\\| \\leq c_j \\|f\\|, \\label{29.8;41}\n\\end{gather}\nwhere\n\\begin{equation}\n c_j := (1+ \\omega\/2)^{1\/2} \\sup_{\\epsilon >0} \\| (H_0 + \\epsilon)^{- \\frac 12 } V_{\\{j\\}} \\| < \\infty\n\\end{equation}\nFrom (\\ref{29.8;41}) it follows that $|\\mathfrak{q}_j (f, g )| \\leq c_j \\|f\\|\\|g\\|$. Hence, there exists a self--adjoint operator\n$\\mathcal{Z}_j \\in \\mathfrak{B} (L^2 (\\mathbb{R}^{3N-3}))$ such that\n\\begin{equation}\n \\mathfrak{q}_j (f, g ) = (f, \\mathcal{Z}_j g ). \\label{8.1;1}\n\\end{equation}\nIt is easy to check that\n\\begin{equation}\n \\mathcal{P}'_j (\\epsilon) = \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} \\mathcal{Z}_j \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} \\quad \\quad (\\epsilon >0). \\label{14.8;1}\n\\end{equation}\n\n\\begin{lemma}\\label{8.8;20}\n Suppose that $H$ defined in (\\ref{1.08;3})--(\\ref{1.08;4}) satisfies R1. Then there\nexists $\\lambda_0 > 1 $ such that $\\sigma_{ess} (\\tilde H(\\lambda_0)) =\n[0, \\infty) $, where\n\\begin{equation}\\label{esshard}\n \\tilde H(\\lambda) := H_0 + \\lambda V + \\lambda \\sum_{i = 1}^N \\mathcal{Z}_i .\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe operator $\\tilde H (\\lambda)$ is self--adjoint on $D(H_0) \\subset L^2 (\\mathbb{R}^{3N-3})$. 
\nLet $J_s \\in C^{2}(\\mathbb{R}^{3N-3})$ for $s=1,2,\\ldots, N$ denote the Ruelle--Simon partition of\nunity, see Definition\n3.4 and Proposition 3.5 in \\cite{cikon} and also \\cite{jpasubmit,coulombgridnev}. One has\n$J_s \\geq 0$, $\\sum_s J^{2}_s =1$ and $J_s (k x) =J_s (x)$ for $k\n\\geq 1$ and $|x|= 1$. Besides, there exists $C > 0$ such that for $i \\neq s$\n\\begin{equation}\\label{ims5}\n \\textrm{supp}\\: J_s \\cap \\{ x | |x| > 1 \\} \\subset \\{x|\\; |r_i - r_s | \\geq C |x|\\} .\n\\end{equation}\nWe shall use the following version of the IMS formula (see eq.~(42) in \\cite{coulombgridnev}):\n\\begin{equation}\\label{06.02\/1}\n \\Delta = \\sum_{s, s' = 1}^N J_s J_{s'} \\Delta J_{s'}J_s + 2\\sum_{s=1}^N\n|\\nabla J_s|^2 .\n\\end{equation}\nThe previous equation can be obtained from the standard IMS formula (Theorem 3.2\nin \\cite{cikon}) if one notes that the $N^2$ functions $J_s J_{s'}$ satisfy\n$\\sum_{s, s '} (J_s J_{s'})^2 = 1 $. With the help of (\\ref{06.02\/1}) we can\nwrite\n\\begin{equation}\n \\tilde H (\\lambda) = \\sum_{s=1}^N J^2_s \\left[H_0 + \\lambda V_{\\{s\\}} + \\lambda\n\\mathcal{Z}_s \\right] J^2_s\n+ \\sum_{s=1}^N \\sum_{\\substack{s'=1\\\\ s\\neq s'}}^N J_s J_{s'} \\tilde H_{s s'}\n(\\lambda)J_{s'} J_s + \\mathcal{C}_1 + \\mathcal{C}_2 + \\mathcal{C}_3 , \\label{ims}\n\\end{equation}\nwhere\n\\begin{gather}\n\\tilde H_{s s'} (\\lambda) := H_0 + \\lambda V_{\\{s, s'\\}} \\quad \\quad (s \\neq s')\\\\\n\\mathcal{C}_1 := \\lambda \\sum_{s=1}^N \\sum_{\\substack{i=1\\\\i \\neq s}}^N J_s^2\nv_{is} J_s^2 + 2\\sum_{s=1}^N |\\nabla J_s|^2 \\label{C1} \\\\\n\\mathcal{C}_2 := \\lambda \\sum_{s=1}^N \\sum_{\\substack{s' = 1\\\\ s'\\neq s}}^N\nJ_s^2 J_{s'}^2 \\left\\{ v_{ss'} + \\sum_{\\substack{i=1\\\\i \\neq s, s'}}^N \\left(\nv_{is} + v_{is'} \\right) \\right\\} \\label{C2}\\\\\n\\mathcal{C}_3 := \\lambda \\sum_{s=1}^N \\sum_{m =1}^N \\sum_{\\substack{k = 1 \\\\ k\n\\neq m}}^N J^2_m \\mathcal{Z}_s J^2_k . \\label{C3}\n\\end{gather}\nBy the standard arguments in the proof of the HVZ theorem the lemma will be\nproved if we show that $\\mathcal{C}_{1, 2, 3}$ are relatively $H_0$ compact\nand\nthe operators under the sums in (\\ref{ims}) are non--negative for some $\\lambda_0\n>1$. From the derivation of the HVZ theorem, see \\cite{teschl,cikon}, it follows that the\noperators\n$\\mathcal{C}_{1,2}$ are relatively $H_0$ compact. To prove the same for\n$\\mathcal{C}_3$ it suffices to show that\n$J^2_m \\mathcal{Z}_s$ for $m \\neq s$ is relatively $H_0$ compact.\nWithout losing generality we consider only $J^2_2 \\mathcal{Z}_1$. Let $\\phi \\in C_0^\\infty(\\mathbb{R}_+)$ be such that $\\phi(r) = 1$ if $r \\in [0, 1]$,\n$\\phi(r) \\in [0,1]$ for $r \\in [1, 2]$ and\n$\\phi(r)=0$ if $r \\in [2, \\infty)$. For $L >0$ we set $\\phi_L (r) := \\phi(L^{-1}r)$.\nWe\nhave\n\\begin{equation}\\label{71}\n J^2_2 \\mathcal{Z}_1 = J^2_2 \\phi_L (x_r^2) \\mathcal{Z}_1 +\nJ^2_2 \\bigl(1- \\phi_L (x_r^2)\\bigr) \\mathcal{Z}_1 ,\n\\end{equation}\nwhere $x_r^2 = x_2^2 + \\ldots + x_{N-1}^2$.\nObviously, the operator $J^2_2 \\phi_L (x_r^2)$ is\nrelatively $H_0$ compact for all $L >0$.\nIt remains to show that the norm of the second term in (\\ref{71}) can\nbe made as small as desired by choosing $L$ large enough. 
Before we estimate this term let us introduce the operator $\\hat Y_1$, analogous to the expression in (\\ref{29.8;3}),\nwhich acts on $\\hat f (p_1, x_r) \\in L^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\\label{29.8;314}\n \\hat Y_1 (\\epsilon) \\hat f := \n \\tilde \\varphi_1 (\\epsilon + p_1^2; x_r) {\\displaystyle \\int \\hat f (p_1, x_r)}\n\\tilde \\varphi^*_1 (\\epsilon + p_1^2; x_r) \\: d^{3N-6} x_r ,\n\\end{equation}\nwhere $\\tilde \\varphi$ was defined in (\\ref{18.9;1}). We denote its Fourier--transformed version by $Y_1 (\\epsilon) := \\mathcal{F}_1^{-1}\n\\hat Y_1 (\\epsilon)\\mathcal{F}_1$ (the Fourier transform $\\mathcal{F}_1$ was defined in (\\ref{7.8;51})). By (\\ref{8.1;1}), (\\ref{14.8;1}) for any $f, g \\in C_0^\\infty (\\mathbb{R}^{3N-3})$\n\\begin{gather}\n \\left| \\bigl(f, (1-\\phi_L)\\mathcal{Z}_1 g\\bigr) \\right| = \\left| \\bigl(H_0^{\\frac 12} (1-\\phi_L)f, \\mathcal{P}^{(c)}_1 (0) H_0^{\\frac 12} g\\bigr) \\right| \\nonumber\\\\\n= \\lim_{\\epsilon \\to +0} \\left| \\bigl((H_0+\\epsilon)^{\\frac 12} (1-\\phi_L)f, \\mathfrak{m}_1^{(c)} (\\epsilon) P_1 (\\epsilon) (H_0+\\epsilon)^{\\frac 12} g\\bigr) \\right| \\nonumber\\\\\n= \\lim_{\\epsilon \\to +0} \\left| \\bigl(\\mathfrak{m}_1^{(c)} (\\epsilon) f, (1-\\phi_L) Y_1 (\\epsilon) g\\bigr) \\right| \\leq \\|f\\| \\|g\\| \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\\|(1-\\phi_L) Y_1 (\\epsilon)\\| \\nonumber\\\\\n\\leq \\|f\\| \\|g\\| \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\\|(1-\\phi_L) \\tilde \\varphi_1 (\\epsilon)\\| \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\\|\\tilde \\varphi_1 (\\epsilon)\\| ,\n\\end{gather}\nwhere the last two norms are those of $L^2 (\\mathbb{R}^{3N-6})$. Now by Corollary~\\ref{18.9;4} it follows that the norm of the second term on the rhs of (\\ref{71}) can be\nmade as small as desired by choosing $L$ large enough and, hence, $\\mathcal{C}_3$ is relatively $H_0$ compact.\n\nDue to R1 we have $\\tilde H_{ss'}\\bigl( 1+\\omega \\bigr) \\geq\n0$, hence any $\\lambda_0 \\in (1, 1+\\omega)$ ensures that all terms under the double sum in (\\ref{ims}) are non--negative operators.\nThus, to prove the lemma it remains to show that with an appropriate $\\lambda_0 > 1$\nthe operator\nin square brackets in (\\ref{ims}) is non--negative or, equivalently, that for\nsome $\\lambda_0 > 1$ there is a sequence $\\epsilon_n \\to +0$ such that\n\\begin{equation}\n H_0 + \\lambda_0 V_{\\{s\\}} + \\lambda_0 \\mathcal{Z}_s +\\epsilon_n \\geq 0 . 
\\label{19.9;17}\n\\end{equation}\nWe have\n\\begin{gather}\n H_0 + \\lambda V_{\\{s\\}} + \\lambda \\mathcal{Z}_s +\\epsilon_n \\nonumber\\\\\n =\\bigl(H_0 + \\epsilon_n\\bigr)^{1\/2} \\left\\{ 1 - \\lambda \\Bigl[ K_s\n(\\epsilon_n) -\n\\mathcal{P}^{(c)}_s (\\epsilon_n)\\Bigr] + \\lambda \\Bigl(\\mathcal{P'}_s (\\epsilon_n) -\n\\mathcal{P}^{(c)}_s (\\epsilon_n)\\Bigr)\\right\\}\n\\bigl(H_0 + \\epsilon_n\\bigr)^{1\/2} , \\label{19.9;14}\n\\end{gather}\nwhere $\\epsilon_n \\in (0, \\pmb{\\epsilon})$.\nBy Lemma~\\ref{8.8;19} for any $\\varepsilon > 0$ we can choose $\\epsilon_n \\to +0$ such\nthat\n\\begin{equation}\n \\Bigl\\|\\mathcal{P'}_s (\\epsilon_n) - \\mathcal{P}^{(c)}_s (\\epsilon_n)\\Bigr\\| <\n\\varepsilon\n\\end{equation}\nHence, by (\\ref{19.9;1}) from (\\ref{19.9;14}) follows\n\\begin{gather}\n H_0 + \\lambda V_{\\{s\\}} + \\lambda \\mathcal{Z}_s +\\epsilon_n \\nonumber\\\\\n\\geq \\bigl(H_0 + \\epsilon_n\\bigr)^{1\/2} \\left\\{ 1 - \\lambda (\\pmb{\\eta} +\n\\varepsilon) \\right\\}\n\\bigl(H_0 + \\epsilon_n\\bigr)^{1\/2} .\n\\end{gather}\nSetting $\\lambda_0 \\leq (\\pmb{\\eta} + \\varepsilon)^{-1}$ makes (\\ref{19.9;17}) hold.\nTaking $\\varepsilon$ sufficiently small we ensure that $\\lambda_0 \\in(1, 1+\\omega)$.\n\nThe inclusion\n$[0, \\infty) \\subset \\sigma_{ess} (\\tilde H(\\lambda)) $ for all $\\lambda > 0$ is\nstandard and we omit its proof (it is not used in other proofs anyway). \n\\end{proof}\n\\begin{proof}[Proof of Lemma~\\ref{13.8;4}]\nLet us first consider $\\epsilon \\in (0, \\pmb{\\epsilon}]$. We can write\n\\begin{equation}\n \\mathcal{G}_N(\\epsilon) = \\mathcal{G'} (\\epsilon) + \\sum_{i=1}^N \\left[\\mathcal{P'}_i (\\epsilon) - \\mathcal{P}^{(c)}_i (\\epsilon)\\right] ,\n\\end{equation}\nwhere\n\\begin{equation}\n \\mathcal{G'} (\\epsilon):= K(\\epsilon) - \\sum_{i=1}^N \\mathcal{P'}_i (\\epsilon) .\n\\end{equation}\nIf we set $A=H_0$ and $B = V + \\sum_{i=1}^N \\mathcal{Z}_i$ in the\nBS principle (Theorem~\\ref{31.07;16}) then from (\\ref{14.8;1}) and Lemma~\\ref{8.8;20} it follows that $\\sigma_{ess} \\bigl( \\mathcal{G'} (\\epsilon)\n\\bigr) \\subset (-\\infty , \\lambda_0^{-1}]$ if $\\epsilon \\in (0, \\pmb{\\epsilon}]$. Now the result follows from Lemma~\\ref{8.8;19} and Theorem~9.5 in \\cite{weidmann}.\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{13.8;1}]\nWe shall prove by induction that $\\mathcal{M}_k (\\epsilon) \\in \\mathfrak{J}$\nfor $k= 1, 2, \\ldots, N$. We make the following induction assumption.\nSuppose that for $\\mathcal{M}_k$ the following holds: (a) $\\mathcal{M}_k \\in\n\\mathfrak{J}$; (b) for any $s, s' \\geq k+1$ one has\n$\\mathcal{M}_k \\mathcal{R}_s , \\mathcal{R}_s\\mathcal{M}_k , \\mathcal{R}_s\n\\mathcal{M}_k \\mathcal{R}_{s'} \\in \\mathfrak{J}$.\nLet us first show that the induction\nassumption is fulfilled for $k=1$. 
That $\\mathcal{M}_1 \\in \\mathfrak{J}$ follows from (\\ref{8.8;2})\nand Lemma~\\ref{19.9;21}.\nChecking (b) is also straightforward if one applies Lemmas~\\ref{19.9;21}, \\ref{19.9;22}, \\ref{19.9;23}.\nFor example,\n\\begin{gather}\n \\mathcal{R}_s\\mathcal{M}_1 \\mathcal{R}_{s'} = \\left[ \\mathcal{R}_s P_1\\right]\n\\left[ \\mathcal{R}_1 \\mathcal{L}_1 \\mathcal{R}_1 \\right]\n\\left[P_1 \\mathcal{R}_{s'} \\right] + \\left[ \\mathcal{R}_s P_1\\right] \\left[\n\\mathcal{R}_1 \\mathcal{L}_1 \\mathcal{R}_{s'} \\right] \\nonumber \\\\\n+ \\left[ \\mathcal{R}_s \\mathcal{L}_1 \\mathcal{R}_1 \\right] \\left[ P_1\n\\mathcal{R}_{s'} \\right] \\quad \\quad (s, s' \\geq 2) , \\label{20.9;1}\n\\end{gather}\nwhere we have used $P_i \\mathcal{R}_i = \\mathcal{R}_i P_i = \\mathcal{R}_i$. All\nexpressions in square brackets are elements of $\\mathfrak{J}$ according to Lemmas~\\ref{19.9;21}, \\ref{19.9;22}, \\ref{19.9;23}, \nhence, the lhs of (\\ref{20.9;1}) also belongs to $\\mathfrak{J}$ \naccording to Lemma~\\ref{13.8;9}. \nThe implication $k \\rightarrow k+1$ is proved similarly. The fact that\n$\\mathcal{M}_{k+1} \\in \\mathfrak{J}$ follows directly from the induction\nassumption and Lemmas~\\ref{19.9;21}, \\ref{19.9;22}, \\ref{19.9;23}. Let us consider, for example,\n$\\mathcal{R}_s \\mathcal{M}_{k+1} $ for $s \\geq k+2$. By (\\ref{8.8.7}) we obtain\n\\begin{gather}\n \\mathcal{R}_s \\mathcal{M}_{k+1} = \\left[ \\mathcal{R}_s \\mathcal{M}_{k} \\right]\n+ \\left[ \\mathcal{R}_s P_{k+1} \\right] \\left[\\mathcal{R}_{k+1} \\mathcal{M}_{k}\n\\right]\n+ \\left[\\mathcal{R}_s \\mathcal{M}_{k} \\mathcal{R}_{k+1} \\right]\n+ \\left[\\mathcal{R}_s P_{k+1} \\right] \\left[\\mathcal{R}_{k+1} \\mathcal{M}_{k}\n\\mathcal{R}_{k+1} \\right] \\nonumber\\\\\n+ \\left[\\mathcal{R}_s P_{k+1} \\right] \\left[\\mathcal{R}_{k+1} \\mathcal{L}_{k+1}\n\\mathcal{R}_{k+1} - \\sum_{i=1}^k \\left[ \\mathcal{R}_{k+1} P_i \\right]\n\\mathcal{P}_i \\left[P_i \\mathcal{R}_{k+1}\\right]\\right] \\nonumber\\\\\n+ \\left[\\mathcal{R}_s P_{k+1} \\right] \\left[\\mathcal{R}_{k+1} \\mathcal{L}_{k+1}\n- \\sum_{i=1}^k \\left[\\mathcal{R}_{k+1} P_i \\right]\\mathcal{P}_i \\right]\n+ \\left[\\mathcal{R}_{s} \\mathcal{L}_{k+1} \\mathcal{R}_{k+1} - \\sum_{i=1}^k\n\\left[ \\mathcal{R}_{s} P_i \\right] \\mathcal{P}_i \\left[P_i \\mathcal{R}_{k+1}\n\\right]\\right] . \\nonumber\n\\end{gather}\nAgain, according to Lemmas~\\ref{19.9;21}, \\ref{19.9;22}, \\ref{19.9;23} and the induction assumption all expressions in\nsquare brackets belong to $\\mathfrak{J}$.\n\\end{proof}\n\n\\begin{lemma} \\label{19.9;21}\nFor $i \\neq j$\n\\begin{equation}\n \\sup_{\\epsilon > 0} \\left\\|\\mathcal{R}_i \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}\n|v_{ij}|^{1\/2}\\right\\|_{HS} < \\infty . \\label{20.9;51}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWithout losing generality we can consider the integral operator\n$\\mathcal{C}(\\epsilon):= \\mathcal{F}_1 \\mathcal{R}_1 \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} |v_{12}|^{1\/2} \\mathcal{F}^{-1}_1$.\nWe use the Jacobi coordinates $x_1, \\ldots, x_{N-1}$ and from Fig.~\\ref{Fig:1} one finds $r_1 - r_2 = \\gamma_1 x_1 + \\gamma_2 x_2$, where\n\\begin{gather}\n \\gamma_1 = \\left[ \\frac{M}{2m_1 (M - m_1)} \\right]^{\\frac 12} , \\label{21.9;32}\\\\\n \\gamma_2 = - \\left[ \\frac{M- m_1 - m_2}{2m_2 (M - m_1)} \\right]^{\\frac 12} . 
\n\\end{gather}\nThe integral operator $\\mathcal{C}(\\epsilon)$ acts on $f(p_1 , x_r) \\in L^2\n(\\mathbb{R}^{3N-3})$ as\nfollows\n\\begin{equation}\n \\bigl(\\mathcal{C}(\\epsilon) f\\bigr) (p_1, x_r)= \\int \\mathcal{C}(p_1, p'_1,\nx_r, x'_r; \\epsilon) f(p'_1, x_r') d^3 p'_1 d^{3N-6}x'_r ,\n\\end{equation}\nwhere the integral kernel is\n\\begin{gather}\n\\mathcal{C}(p_1, p'_1, x_r, x'_r; \\epsilon) = \\frac{\\gamma_1^3}{(2 \\pi)^{3\/2}}\n\\varphi_1 (\\epsilon + p_1^2; x_r ) \\psi_1 (\\epsilon + p_1^2; x'_r)\n\\left\\{[1-\\mu_1(\\epsilon + p_1^2)]^{-\\frac 12} - 1\\right\\} \\nonumber\\\\\n \\times \\widehat{\\bigl| v_{12}\\bigr|^{\\frac 12}} \\bigl(\\gamma_1 (p_1 -\np'_1)\\bigr)\n\\exp\n\\left\\{ i\\gamma_2 (p'_1 - p_1)\\cdot x_2 \\right\\} , \\label{20.9;3}\n\\end{gather}\nand $\\psi_1$ is expressed through $\\varphi_1$ through (\\ref{11.01.12\/1}).\nThe hat in (\\ref{20.9;3}) denotes a standard Fourier transform in $\\mathbb{R}^3$.\nNow we calculate the square of the Hilbert--Schmidt norm\n\\begin{gather}\n \\left\\|\\mathcal{C}(\\epsilon)\\right\\|_{HS}^2 = \\int \\left| \\mathcal{C}(p_1,\np'_1, \\xi, \\xi'; \\epsilon)\\right|^2 \\; d^3 p_1 d^3 p'_1 d^{3N-6}\\xi d^{3N-6}\\xi'\n\\nonumber\\\\\n\\leq \\gamma_1^3 C^2_\\psi C_0 \\int_{ p_1^2 \\leq \\pmb{\\epsilon} - \\epsilon} \\left\\{[1-\\mu_1(\\epsilon +\np_1^2)]^{-\\frac 12} - 1\\right\\}^2 d^3 p_1 \\label{31.01\/02},\n\\end{gather}\nwhere $C_\\psi$ was defined in Corollary~\\ref{20.9;6} and\n\\begin{equation}\n C_0 := (2\\pi)^{-3} \\max_{i < k} \\int \\left| \\widehat{\\bigl| v_{ik}\\bigr|^{\\frac\n12}} \\bigl( t\\bigr) \\right|^2 d^3t\n\\end{equation}\nis finite since $|v_{ik}|^{\\frac 12} \\in L^2 (\\mathbb{R}^3)$. Using (\\ref{mubound}) we\nobtain\n\\begin{gather}\n \\left\\|\\mathcal{C}(\\epsilon)\\right\\|_{HS}^2 \\leq \\gamma_1^3 C^2_\\psi C_0 \\int_{ |p_1|\\leq\n\\sqrt{\\pmb{\\epsilon}}} \\left(a_\\mu^{-\\frac 12} |p_1|^{-1} - 1\\right)^2 d^3p_1 \\nonumber \\\\\n= (4\\pi)\\gamma_1^3 C^2_\\psi C_0 \\left[a_\\mu^{-1} \\sqrt{\\pmb{\\epsilon}} - a_\\mu^{-\\frac 12} \\pmb{\\epsilon} +\n(1\/3) \\pmb{\\epsilon}^{\\frac 32}\\right] . \n\\end{gather}\n\\end{proof}\nAnother less trivial estimate is given by\n\\begin{lemma}\\label{19.9;22}\n For all $1 \\leq i < j \\leq N$ and $1 \\leq i < s \\leq N $\n\\begin{equation}\n\\sup_{\\epsilon > 0}\n\\left\\|\\mathcal{R}_i \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} v_{is} \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} \\mathcal{R}_j\\right\\|_{HS} < \\infty\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n For $s = j$ the result easily follows from Lemma~\\ref{19.9;21}. 
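Indeed, here is a minimal sketch of the case $s = j$ (it uses only the self--adjointness of $\\mathcal{R}_i , \\mathcal{R}_j$, cf. (\\ref{20.9;11})--(\\ref{8.8;1}), the bound $\\|\\textnormal{sign}\\: (v_{ij})\\| \\leq 1$, and the fact that the Hilbert--Schmidt norm dominates the operator norm and is invariant under taking adjoints): writing $v_{ij} = |v_{ij}|^{1\/2}\\, \\textnormal{sign}\\: (v_{ij})\\, |v_{ij}|^{1\/2}$ one gets\n\\begin{equation}\n \\left\\|\\mathcal{R}_i \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} v_{ij} \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} \\mathcal{R}_j\\right\\|_{HS} \\leq \\left\\|\\mathcal{R}_i \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}\n|v_{ij}|^{1\/2}\\right\\|_{HS} \\left\\|\\mathcal{R}_j \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}\n|v_{ij}|^{1\/2}\\right\\|_{HS} ,\n\\end{equation}\nand both factors on the rhs are uniformly bounded in $\\epsilon$ by Lemma~\\ref{19.9;21}.\n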
So without losing\ngenerality it suffices to prove that\n\\begin{equation}\n\\sup_{\\epsilon > 0} \\left\\|\\mathcal{C}_1 (\\epsilon) \\right\\|_{HS} < \\infty ,\\label{20.9;25}\n\\end{equation}\nwhere we have defined\n\\begin{equation}\n\\mathcal{C}_1 (\\epsilon):= \\mathcal{R}_1 \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}\nv_{13} \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} \\mathcal{R}_2 .\n\\end{equation}\nBy (\\ref{20.9;11}) we have\n\\begin{gather}\n \\mathcal{C}_1 (\\epsilon) =\nP_1 (\\epsilon) \\mathfrak{g}_1 (\\epsilon)\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} v_{13}\n\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} \\mathfrak{g}_2 (\\epsilon) P_2 (\\epsilon) \\nonumber \\\\\n= P_1 (\\epsilon)\\mathfrak{g}_2 (\\epsilon) \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}\n\\mathfrak{g}_1 (\\epsilon) v_{13} \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}P_2 (\\epsilon) .\n\\end{gather}\nWe can write $\\mathcal{C}_1 (\\epsilon) = \\mathcal{C}_2 (\\epsilon) -\n\\mathcal{C}_3 (\\epsilon)$, where\n\\begin{gather}\n\\mathcal{C}_2 (\\epsilon):= P_1 (\\epsilon)[\\mathfrak{g}_2 (\\epsilon) +1]\\bigl(H_0\n+ \\epsilon\\bigr)^{-1\/2} \\mathfrak{g}_1 (\\epsilon) v_{13} \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2}P_2 (\\epsilon) , \\label{20.9;41}\\\\\n\\mathcal{C}_3 (\\epsilon):= \\mathcal{R}_1 (\\epsilon)\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} v_{13} \\bigl(H_0 + \\epsilon\\bigr)^{-1\/2}P_2 (\\epsilon) .\n\\end{gather}\nFrom Lemma~\\ref{19.9;21} it easily follows that $\\sup_{\\epsilon > 0} \\left\\|\\mathcal{C}_3\n(\\epsilon) \\right\\|_{HS} < \\infty $. Therefore, (\\ref{20.9;25}) reduces to proving\nthat\n\\begin{equation}\n \\sup_{\\epsilon > 0} \\left\\|\\mathcal{C}_2 (\\epsilon) \\right\\|_{HS} < \\infty . \\label{20.9;42}\n\\end{equation}\nApart from the sets of coordinates $x_i , z_i$ depicted in Fig.~\\ref{Fig:1} (left) we shall need a third set of coordinates\n$t_1, t_2, \\ldots, t_{N-1}$ depicted in Fig.~\\ref{Fig:1} (right).\nHere $t_1 = z_1$\nand $t_2 = \\sqrt{2\\mu_{13}} (r_3 - r_1)$. The coordinate $t_3$ points in the\ndirection from the center of mass of the particles $\\{4, 5, \\ldots, N\\}$\nto the center of mass of the particle pair\n$\\{1,3\\}$ and $t_i = x_i = z_i$ for $i \\geq 4$. The scales are set so as to make the kinetic energy\noperator take the form $H_0 = - \\sum_i \\Delta_{t_i} $. The coordinates are connected through\n\\begin{gather}\n t_2 = b_{22} z_2 + b_{23} z_3 , \\\\\n t_3 = b_{32} z_2 + b_{33} z_3 ,\n\\end{gather}\nwhere\n\\begin{gather}\nb_{22} = -\\left[ \\frac{m_3 (M-m_2)}{(M - m_1-m_2)(m_1 +m_3)} \\right]^{\\frac 12} , \\\\\nb_{23} = \\left[ \\frac{m_1(M- m_1 -m_2 -m_3)}{(m_1 +m_3)(M - m_1 -m_2)} \\right]^{\\frac 12},\n\\end{gather}\nand $b_{32} = b_{23}$, $b_{33} = - b_{22}$. We also set $b_{11} = 1$, $b_{12} = b_{13} = b_{21} = b_{31} = 0$. Then $b_{ik}$ \nare the entries of the $3\\times 3$ orthogonal matrix $b$. \nFrom the $2\\times 2$ matrix in\n(\\ref{21.9;1})--(\\ref{21.9;2}) we can construct the $3 \\times 3 $ orthogonal matrix $a$ by setting $a_{33} = 1$ and $a_{13} = a_{23} = a_{32} = a_{31} = 0$. Then\n\\begin{equation}\n t_i = \\sum_{k=1}^3 c_{ik} x_k \\quad \\quad (i = 1, 2, 3),\n\\end{equation}\nwhere $c_{ik}$ are elements of the orthogonal matrix $c = ab$. 
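As a quick check of the orthogonality of $b$ (included for the reader's convenience; it is not needed in what follows) note the elementary identity\n\\begin{equation}\n b_{22}^2 + b_{23}^2 = \\frac{m_3 (M-m_2) + m_1(M- m_1 -m_2 -m_3)}{(m_1 +m_3)(M - m_1 -m_2)} = \\frac{(m_1 + m_3)(M - m_1 - m_2)}{(m_1 +m_3)(M - m_1 -m_2)} = 1 ,\n\\end{equation}\nwhich together with $b_{32} = b_{23}$, $b_{33} = - b_{22}$ and the choice of the remaining entries gives $b\\, b^{T} = 1$.\n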
\n\nThe full and partial Fourier transforms associated with $t_i$ are \n\\begin{gather}\n (F_t f ) (p_{t_1}, p_{t_2}, \\ldots, p_{t_{N-1}}):= \\frac 1{(2\\pi)^{\\frac{3N-3}2}} \\int e^{-i\\sum_{k = 1}^{N-1} p_{t_k} \\cdot t_k} f(t_1, \\ldots, t_{N-1}) d^3 t_1 \\ldots d^3 t_{N-1} , \\\\\n(\\mathcal{F}_t f ) (t_1, p_{t_2}, \\ldots, p_{t_{N-1}}):= \\frac 1{(2\\pi)^{\\frac{3N-6}2}} \\int e^{-i\\sum_{k = 2}^{N-1} p_{t_k} \\cdot t_k} f(t_1, \\ldots, t_{N-1}) d^3 t_2 \\ldots d^3 t_{N-1} .\n\\end{gather}\nFor shorter notation we introduce the tuples\n$p_{t_r} := (p_{t_2}, p_{t_3}, \\ldots, p_{t_{N-1}})$ and $p_{t_c} := (p_{t_3},\np_{t_4}, \\ldots, p_{t_{N-1}})$.\nIn analogy with the proof of Theorem~\\ref{3.8;8} let us introduce the operator $ B_t\n(\\epsilon)$, which acts on $f \\in L^2(\\mathbb{R}^{3N-3})$ as\n\\begin{equation}\n B_t (\\epsilon) f = F_t^{-1} (1+ (p_{t_1}^2 + \\epsilon)^{\\frac \\zeta2})(F_t\nf) +F_t^{-1} (|p_{t_c}|^\\zeta - 1)\\chi_1 (|p_{t_c}|) (F_t f) ,\n\\end{equation}\nwith $\\zeta = 3\/4$ (in fact, we could take any $\\zeta \\in (1\/2 , 1)$). For\nall $\\epsilon >0$ the operators $B_t (\\epsilon)$ and $B_t^{-1} (\\epsilon)$\nare bounded. Inserting the identity $B_t B_t^{-1} = 1$ into (\\ref{20.9;41})\nand using $[B_t , v_{13}] =0$ we get\n\\begin{equation}\n \\mathcal{C}_2 (\\epsilon) = \\mathcal{C}_4 (\\epsilon) \\textnormal{sign}\\: (v_{13})\n\\mathcal{C}_5 (\\epsilon) ,\n\\end{equation}\nwhere\n\\begin{gather}\n \\mathcal{C}_4 (\\epsilon) := P_1 (\\epsilon) [\\mathfrak{g}_2 (\\epsilon) + 1] \n\\bigl(H_0 + \\epsilon\\bigr)^{-1\/2} B_t (\\epsilon) \\mathfrak{g}_1\n(\\epsilon)\\bigl|v_{13}\\bigr|^{1\/2} , \\\\\n\\mathcal{C}_5 (\\epsilon) := \\bigl|v_{13}\\bigr|^{1\/2} \\bigl(H_0 +\n\\epsilon\\bigr)^{-1\/2} B^{-1}_t (\\epsilon) P_2 (\\epsilon) . \n\\end{gather}\nNow (\\ref{20.9;42}) follows from the inequalities\n\\begin{gather}\n \\sup_{\\epsilon > 0} \\| \\mathcal{C}_4 (\\epsilon)\\|_{HS} < \\infty , \\\\\n\\sup_{\\epsilon > 0} \\| \\mathcal{C}_5 (\\epsilon)\\| < \\infty . \\label{24.9;6}\n\\end{gather}\nBy the construction of the coordinates $r_1- r_3 = \\gamma_1 x_1 + \\gamma_2' x_2 + \\gamma_3 x_3$, where $\\gamma_1$ was defined in (\\ref{21.9;32}) and\n\\begin{gather}\n \\gamma_2' = \\left[ \\frac{m_2}{2(M - m_1)(M - m_1 -m_2 )} \\right]^{\\frac 12} , \\\\\n \\gamma_3 = - \\left[ \\frac{M- m_1 - m_2 -m_3}{2m_3 (M - m_1-m_2)} \\right]^{\\frac 12} . \n\\end{gather}\nLet us first consider the operator $ \\mathcal{\\hat C}_4 (\\epsilon) :=\n\\mathcal{F}_1 \\mathcal{C}_4 (\\epsilon) \\mathcal{F}^{-1}_1$, which acts on $f(p_1 , x_r) \\in L^2 (\\mathbb{R}^{3N-3})$ as\nfollows\n\\begin{equation}\n \\bigl(\\mathcal{\\hat C}_4 (\\epsilon) f\\bigr) (p_1, x_r)= \\int \\mathcal{C}_4\n(p_1, p'_1, x_r, x'_r; \\epsilon) f(p'_1, x_r') d^3 p'_1 d^{3N-6}x'_r ,\n\\end{equation}\nwhere the integral kernel is (cf. eq.~(\\ref{20.9;3}))\n\\begin{gather}\n\\mathcal{C}_4 (p_1, p'_1, x_r, x'_r; \\epsilon) = \\frac{\\gamma_1^3}{(2\n\\pi)^{3\/2}} \\varphi_1 (\\epsilon + p_1^2; x_r ) \\psi'_1 (p_1, x'_r)\n\\left\\{[1-\\mu_1(\\epsilon + p_1^2)]^{-\\frac 12} - 1\\right\\} \\nonumber\\\\\n \\times \\widehat{\\bigl| v_{13}\\bigr|^{\\frac 12}} \\bigl(\\gamma_1 (p_1 -\np'_1)\\bigr)\n\\exp\n\\left\\{ i(p'_1 - p_1)\\cdot(\\gamma_2' x_2 + \\gamma_3 x_3 )\\right\\} . 
\\label{20.9;54}\n\\end{gather}\nIn (\\ref{20.9;54}) we have introduced the function $\\psi_1' = \\mathcal{F}_1 F_1^{-1} \\hat \\psi'_1$,\nwhere\n\\begin{gather}\n\\hat \\psi'_1 = \\hat \\varphi_1 (\\epsilon +p_1^2; p_r) \\left[1- \\mu_2 (\\epsilon +\np_{t_1}^2)\\right]^{- \\frac 12} \\left[p_1^2 + p_r^2 + \\epsilon\\right]^{- \\frac\n12} \\nonumber\\\\\n\\times \\left\\{1+ (p_{t_1}^2 + \\epsilon)^{\\frac \\zeta2} + (|p_{t_c}|^\\zeta -\n1)\\chi_1 (|p_{t_c}|) \\right\\} \\label{21.9;31}\n\\end{gather}\nand in (\\ref{21.9;31}) one has to substitute $p_{t_1} = a_{11} p_1 + a_{12} p_2$ and\n\\begin{equation}\n p_{t_c}^2 = \\sum_{i=4}^{N-1}p_i^2 + (c_{31}p_1 + c_{32}p_2 + c_{33}p_3)^2.\n\\end{equation}\nRepeating the arguments in Lemma~\\ref{19.9;21} we obtain\n\\begin{equation}\n \\|\\mathcal{C}_4 (\\epsilon)\\|_{HS} \\leq (4\\pi) \\gamma_1^3 C^2_{\\psi'} C_0 \\left[a_\\mu^{-1}\n\\sqrt{\\pmb{\\epsilon}} - a_\\mu^{-\\frac 12} \\pmb{\\epsilon} + (1\/3) \\pmb{\\epsilon}^{\\frac 32}\\right] ,\n\\end{equation}\nwhere by definition\n\\begin{equation}\n C^2_{\\psi'} = \\sup_{\\epsilon >0} \\sup_{p_1 \\in \\mathbb{R}^3}\\int \\left|\n\\psi'_1 (p_1, x_r)\\right|^2 d^{3N-6} x_r =\n\\sup_{\\epsilon >0} \\sup_{p_1 \\in \\mathbb{R}^3} \\int \\left| \\hat \\psi'_1 (p_1,\np_r)\\right|^2 d^{3N-6} p_r \\label{21.9;61}\n\\end{equation}\nTo further estimate the expression in (\\ref{21.9;61}) we use the inequality\n\\begin{equation}\n\\frac{\\left\\{1+ (p_{t_1}^2 + \\epsilon)^{\\frac \\zeta2} + (|p_{t_c}|^\\zeta -\n1)\\chi_1 (|p_{t_c}|) \\right\\}^2 }{p_{t_1}^2 + p_{t_2}^2 + p_{t_c}^2 + \\epsilon}\n\\leq \\frac{4}{(p_{t_1}^2 + \\epsilon)^{1-\\zeta}} \\label{24.9;1}\n\\end{equation}\nLet us check (\\ref{24.9;1}) for $|p_{t_c}| < 1$. By (\\ref{8.6;61})\n\\begin{gather}\n \\frac{\\left\\{ (p_{t_1}^2 + \\epsilon)^{\\frac \\zeta2} + |p_{t_c}|^\\zeta\n\\right\\}^2 }{p_{t_1}^2 + p_{t_2}^2 + p_{t_c}^2 + \\epsilon} \\leq\n \\frac{4\\left\\{ p_{t_1}^2 + p_{t_c}^2 + \\epsilon \\right\\}^\\zeta }{p_{t_1}^2 +\np_{t_c}^2 + \\epsilon} \\leq \\frac{4}{(p_{t_1}^2 + \\epsilon)^{1-\\zeta}}\n\\end{gather}\nSimilarly, one proves that (\\ref{24.9;1}) holds for $|p_{t_c}| \\geq 1$. Substituting\n(\\ref{24.9;1}), (\\ref{21.9;31}) into (\\ref{21.9;61}) and using that\n$p_1^2 + p_r^2 = p_{t_1}^2 + p_{t_2}^2 + p_{t_c}^2$ we obtain the estimate\n\\begin{gather}\n C^2_{\\psi'} \\leq 4 \\sup_{\\epsilon >0} \\sup_{p_1 \\in \\mathbb{R}^3} \\int \\left|\n\\hat \\varphi_1 (\\epsilon +p_1^2; p_r) \\right|^2 \\left[1- \\mu_2 (\\epsilon +\np_{t_1}^2)\\right]^{-1} (p_{t_1}^2 + \\epsilon)^{\\zeta-1} d^{3N-6} p_r \\nonumber\\\\\n\\leq 4 a_\\mu^{-1}\n \\sup_{\\epsilon >0} \\sup_{p_1 \\in \\mathbb{R}^3} \\int \\left| \\hat \\varphi_1\n(\\epsilon +p_1^2; p_r) \\right|^2 \\left[\\epsilon + (a_{11}p_1 + a_{12} p_2)^2\n\\right]^{-2+ \\zeta} d^{3N-6} p_r \\nonumber\\\\\n\\leq 4 a_\\mu^{-1} \\sup_{\\epsilon >0} \\sup_{p_1 \\in \\mathbb{R}^3} \\sup_{s\\in\n\\mathbb{R}^3} \\int \\left| \\hat \\varphi_1 (\\epsilon +p_1^2; p_r) \\right|^2\n\\left[\\epsilon + (s + a_{12} p_2)^2 \\right]^{-2+ \\zeta} d^{3N-6} p_r \\nonumber\\\\\n\\leq 4 a_\\mu^{-1} \\sup_{0 < \\epsilon' < \\epsilon } \\sup_{s\\in \\mathbb{R}^3}\n\\int \\left| \\hat \\varphi_1 (\\epsilon; p_r) \\right|^2 \\left[\\epsilon' + (s +\na_{12} p_2)^2 \\right]^{-2+ \\zeta} d^{3N-6} p_r , \\label{24.9;4}\n\\end{gather}\nwhere we have used (\\ref{mubound}). For $\\zeta = 3\/4$ the expression on the rhs of (\\ref{24.9;4}) is finite due to\nTheorem~\\ref{3.8;8}. It remains to prove (\\ref{24.9;6}). 
Like in Corollary~\\ref{24.9;9} let us define $\\eta_2 (\\epsilon;t_r) \\in L^2 (\\mathbb{R}^{3N-6})$\nas\n\\begin{equation}\n \\eta_2 (\\epsilon;t_r) := |v_{13}|^{\\frac 12} \\bigl(-\\Delta_{t_r} +\n\\epsilon\\bigr)^{-\\frac 12} \\tilde B_t^{-1} (\\epsilon) \\varphi_2 (\\epsilon). \\label{24.9;10}\n\\end{equation}\nIn (\\ref{24.9;10}) we use the inverse of $\\tilde B_t (\\epsilon) \\in \\mathfrak{B}\n(L^2 (\\mathbb{R}^{3N-6}))$, which acts on $f \\in L^2(\\mathbb{R}^{3N-6})$ as\n\\begin{equation}\n \\tilde B_t (\\epsilon) f = (1+ \\epsilon^{\\frac \\zeta2} ) f +\\mathcal{F}_{t}^{-1}\n(|p_{t_c}|^\\zeta - 1)\\chi_1 (|p_{t_c}|) (\\mathcal{F}_t f).\n\\end{equation}\nand $\\zeta = 3\/4$. Eq.~(\\ref{24.9;10}) is equivalent to the expression (\\ref{11.10;1}) corresponding to the subsystem $\\mathfrak{C}_2$ (though $\\tilde B_t (\\epsilon)$ and \n$B_{12} (\\epsilon)$ are defined using different coordinates, they are, in fact, equal, see the discussion around Eq.~(8), (9) in \\cite{jpasubmit}). \nThen the operator $F_t \\mathcal{C}_5 (\\epsilon) F_t^{-1}$ acts on\n$\\hat f(p_{t_1}, p_{t_r}) \\in L^2 (\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\n \\bigl(F_t \\mathcal{C}_5 (\\epsilon) F_t^{-1} \\hat f\\bigr)(p_{t_1}, p_{t_r}) =\n\\hat \\eta_2^* (p_{t_1}^2 + \\epsilon; p_{t_r}) \\int \\hat \\varphi_2 (p_{t_1}^2 +\n\\epsilon; p'_{t_r}) \\hat f(p_{t_1}, p'_{t_r}) d^{3N-6}p'_{t_r} ,\n\\end{equation}\nwhere $\\hat \\eta_2 (\\epsilon) = \\mathcal{F}_t \\eta_2 (\\epsilon)$, $\\hat \\varphi\n(\\epsilon) = \\mathcal{F}_t \\varphi (\\epsilon)$. Now it is trivial to show that\n\\begin{equation}\n \\| \\mathcal{C}_5 (\\epsilon)\\|^2 \\leq \\sup_{\\epsilon >0} \\sup_{p_{t_1} \\in\n\\mathbb{R}^3} \\int \\left| \\hat \\eta_2 (p_{t_1}^2 + \\epsilon; p_{t_r}) \\right|^2\nd^{3N-6} p_{t_r} =\n \\sup_{\\epsilon >0} \\int \\left| \\eta_2 (\\epsilon; t_r) \\right|^2 d^{3N-6} t_r. \\label{24.9;12}\n\\end{equation}\nThe rhs in (\\ref{24.9;12}) is finite due to Corollary~\\ref{24.9;9}.\n\\end{proof}\n\nNote that in the expression (\\ref{29.8;3}) for the operator $\\hat P_1$ the function $\\hat \\varphi_1 (\\epsilon + p_1^2; p_r)$ depends also on $p_1$ through\nits first argument.\nThis is a source of trouble when one attempts to prove, for example, that\n$\\sup_{\\epsilon >0}\\|P_1 (\\epsilon) P_2 (\\epsilon)\\|_{HS} < \\infty$.\nTherefore, in the expression for $\\hat P_j$ it makes sense to approximate $\\hat\n\\varphi_j$ by a function, which is\npiecewise constant in the first argument. Namely, for $\\epsilon \\in (0, \\pmb{\\epsilon}]$,\n$n \\in \\mathbb{Z}_+ $ and $k = 1, \\ldots , n-1$\nlet us define $\\hat P_1^{(k)} (\\epsilon, n) $ as an operator, which acts on\n$\\hat f (p_1, p_r) \\in L^2(\\mathbb{R}^{3N-3})$ as follows\n\\begin{equation}\\label{24.9;43}\n \\hat P^{(k)}_1 (\\epsilon, n) \\hat f := \\begin{cases}\n \\hat \\varphi^*_1 (\\epsilon_k ; p_r) {\\displaystyle \\int \\hat \\varphi_1\n(\\epsilon_k , p'_r) f (p_1, p'_r)} \\: d^{3N-6} p'_r &\\text{if $p_1^2 +\n\\epsilon\\in (\\epsilon_k, \\epsilon_{k+1}]$},\\\\\n0& \\text{if $p_1^2 + \\epsilon \\notin (\\epsilon_k, \\epsilon_{k+1}]$} ,\n\\end{cases}\n\\end{equation}\nwhere $\\epsilon_k := k\\pmb{\\epsilon} n^{-1}$. 
We define $ P_1^{(k)}\n(\\epsilon, n) := F_1^{-1} \\hat P_1^{(k)} (\\epsilon, n) F_1$.\nSimilarly, using appropriate coordinates one defines $\\hat P_j^{(k)} (\\epsilon,\nn) , P_j^{(k)} (\\epsilon, n)$ for $j = 1, \\ldots, N$.\n\\begin{lemma}\\label{24.9;41}\n The following approximation formulas hold for $j, s = 1, \\ldots, N$:\n\\begin{gather}\n \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\\left\\| P_j (\\epsilon) - \\sum_{k=1}^{n - 1}\nP^{(k)}_j (\\epsilon, n) \\right\\| = \\hbox{o} (1\/n) \\\\\n\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\left\\| \\mathfrak{ g}_s (\\epsilon) P_j\n(\\epsilon) - \\sum_{k=1}^{n - 1} \\mathfrak{ g}_s (\\epsilon) P^{(k)}_j\n(\\epsilon, n) \\right\\| = \\hbox{o} (1\/n) \\quad \\quad (j \\neq s)\n\\end{gather}\n\\end{lemma}\n\\begin{proof}\nWithout losing generality we set $j = 1$ and $s=2$. Generally, suppose $f, h,\ng, g' \\in \\mathcal{H}$, where $\\|f\\| = \\|h\\|=1$ and $\\mathcal{H}$ denotes some\nHilbert space.\nThen the norm of the difference of the rank--one operators $g(f, \\cdot )$ and $g' (h, \\cdot )$\ncan be trivially estimated as follows\n\\begin{equation}\n \\|g(f, \\cdot ) - g' (h, \\cdot)\\| \\leq \\|g - g'\\| + \\|g\\| \\|f-h\\| \\label{24.9;22}\n\\end{equation}\nand, consequently,\n\\begin{equation}\n \\|f( f, \\cdot ) - h(h, \\cdot )\\| \\leq 2\\|f-h\\|. \\label{24.9;21}\n\\end{equation}\nUsing (\\ref{24.9;21}) we get\n\\begin{gather}\n \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\\left\\| \\hat P_1 (\\epsilon) - \\sum_{k=1}^{n - 1}\n\\hat P^{(k)}_1 (\\epsilon, n) \\right\\| \\leq 2 \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\n\\sup_{k = 1, \\ldots, n} \\sup_{p_1^2 \\in [\\epsilon_k - \\epsilon, \\epsilon_{k+1} -\n\\epsilon)} \\| \\varphi_1 (p_1^2 + \\epsilon) - \\varphi_1 (\\epsilon_k)\\| \\nonumber\\\\\n\\leq 2\\sup_{|\\epsilon' - \\epsilon|\\leq \\pmb{\\epsilon} n^{-1} } \\| \\varphi_1 (\\epsilon') -\n\\varphi_1 (\\epsilon)\\| = \\hbox{o} (n^{-1}) \\label{24.9;24}\n\\end{gather}\ndue to (\\ref{7.8;25}). The norms on the rhs in (\\ref{24.9;24}) are those of $L^2\n(\\mathbb{R}^{3N-6})$.
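For the reader's convenience we note that (\\ref{24.9;22}) follows from the splitting\n\\begin{equation}\n g(f, \\cdot ) - g' (h, \\cdot) = (g - g')(h, \\cdot ) + g\\bigl((f-h), \\cdot \\bigr) ,\n\\end{equation}\nsince the operator norm of $u(w, \\cdot )$ equals $\\|u\\| \\|w\\|$ and $\\|f\\| = \\|h\\| = 1$; setting $g = f$ and $g' = h$ then yields (\\ref{24.9;21}).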
For the second approximation formula, using (\\ref{24.9;22}) we\nget\n\\begin{gather}\n\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\left\\| \\mathfrak{ g}_2 (\\epsilon) P_1\n(\\epsilon) - \\sum_{k=1}^{n - 1} \\mathfrak{ g}_2 (\\epsilon) P^{(k)}_1\n(\\epsilon, n) \\right\\| \\nonumber \\\\\n\\leq \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\left\\| (\\mathfrak{\\hat g}_2 (\\epsilon) +1)\n\\hat P_1 (\\epsilon) - \\sum_{k=1}^{n - 1} (\\mathfrak{\\hat g}_2 (\\epsilon) +1)\n\\hat P^{(k)}_1 (\\epsilon, n) \\right\\| +\n\\hbox{o}(n^{-1}),\n\\end{gather}\nwhere $\\mathfrak{ \\hat g}_2 (\\epsilon) := F_1 \\mathfrak{ g}_2 (\\epsilon) F_1^{-1}$.\nUsing (\\ref{24.9;26}) and (\\ref{24.9;28})--(\\ref{24.9;29}) we estimate the first term on the rhs as follows\n\\begin{gather}\n\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\left\\| (\\mathfrak{\\hat g}_2 (\\epsilon) +1) \\hat\nP_1 (\\epsilon) - \\sum_{k=1}^{n - 1} (\\mathfrak{\\hat g}_2 (\\epsilon) +1) \\hat\nP^{(k)}_1 (\\epsilon, n) \\right\\| \\leq \\nonumber\\\\\n\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\n\\sup_{k = 1, \\ldots, n} \\sup_{p_1^2 \\in [\\epsilon_k - \\epsilon, \\epsilon_{k+1} -\n\\epsilon)} \\left\\|\\left[1-\\mu_2 \\bigl(\\epsilon + (a_{11} p_1 + a_{12} p_2\n)^2\\bigr)\\right]^{- \\frac 12} \\bigl(\\hat \\varphi_1 (p_1^2 + \\epsilon) - \\hat\n\\varphi_1 (\\epsilon_k)\\bigr) \\right\\| \\nonumber\\\\\n+ \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\sup_{k = 1, \\ldots, n} \\sup_{p_1^2 \\in\n[\\epsilon_k - \\epsilon, \\epsilon_{k+1} - \\epsilon)} \\left\\|\\left[1-\\mu_2\n\\bigl(\\epsilon + (a_{11} p_1 + a_{12} p_2 )^2\\bigr)\\right]^{- \\frac 12} \\hat\n\\varphi_1 (p_1^2 + \\epsilon) \\right\\| \\nonumber\\\\\n\\times \\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]} \\sup_{k = 1, \\ldots, n} \\sup_{p_1^2 \\in\n[\\epsilon_k - \\epsilon, \\epsilon_{k+1} - \\epsilon)} \\| \\varphi_1 (p_1^2 +\n\\epsilon) - \\varphi_1 (\\epsilon_k)\\|\n\\end{gather}\nNow applying (\\ref{mubound}) we continue the last line as\n\\begin{gather}\n \\leq a_{12}^{-1} a_\\mu^{- \\frac 12} \\sup_{|\\epsilon - \\epsilon'| < \\pmb{\\epsilon}\nn^{-1}} \\sup_{\\epsilon'' >0} \\sup_{s \\in \\mathbb{R}^3} \\left\\|G_2 (s,\n\\epsilon'') \\bigl(\\varphi_1 (\\epsilon') - \\varphi_1 (\\epsilon)\\bigr)\\right\\| \\nonumber \\\\\n+ a_{12}^{-1} a_\\mu^{- \\frac 12} \\left\\{\\sup_{\\epsilon \\in (0, \\pmb{\\epsilon}]}\n\\sup_{\\epsilon' >0} \\sup_{s \\in \\mathbb{R}^3} \\left\\|G_2 (s, \\epsilon')\n\\varphi_1 (\\epsilon)\\right\\| \\right\\} \\times\n\\left\\{\\sup_{|\\epsilon' - \\epsilon|\\leq \\pmb{\\epsilon} n^{-1} } \\| \\varphi_1 (\\epsilon')\n- \\varphi_1 (\\epsilon)\\|\\right\\} \\nonumber \\\\\n= \\hbox{o}(n^{-1}),\n\\end{gather}\nwhere we have used Theorem~\\ref{3.8;8}, Corollary~\\ref{19.9;10} and (\\ref{7.8;25}).\n\\end{proof}\n\nIt remains to prove\n\\begin{lemma}\\label{19.9;23}\n$\\mathcal{R}_i P_j , P_i P_j\\in \\mathfrak{J}$ for $i \\neq j$.\n\\end{lemma}\n\\begin{proof}\n Without losing generality it suffices to prove that $\\mathcal{R}_1 P_2 \\in\n\\mathfrak{J}$. By Lemma~\\ref{24.9;41} it is enough to prove that\n\\begin{gather}\n \\sup_{\\epsilon > 0} \\left\\|P_1^{(i)}(\\epsilon, n) \\bigl(\\mathfrak{g}_1\n(\\epsilon) + 1 \\bigr) P_2^{(k)}(\\epsilon, n) \\right\\|_{HS} < \\infty \\label{24.9;44} , \\\\\n\\sup_{\\epsilon > 0} \\left\\|P_1^{(i)}(\\epsilon, n) P_2^{(k)}(\\epsilon, n)\n\\right\\|_{HS} < \\infty \\label{24.9;45}
\n\\end{gather}\nfor any given $i, k = 1, \\ldots, n-1$ and $n \\in \\mathbb{Z}_+$.\nWe shall consider only (\\ref{24.9;44}); (\\ref{24.9;45}) is proved analogously. Eq.~(\\ref{24.9;44}) follows from the\ninequality\n\\begin{equation}\n\\sup_{\\epsilon > 0} \\left\\|\\mathcal{C}_L \\mathcal{C}_R\\right\\|_{HS} < \\infty ,\n\\end{equation}\nwhere $\\mathcal{C}_L = F_1 P_1^{(i)}(\\epsilon, n) \\bigl(\\mathfrak{g}_1 (\\epsilon)\n+ 1 \\bigr) F_1^{-1}$ and $\\mathcal{C}_R = F_1 P_2^{(k)}(\\epsilon, n) F_1^{-1}$ are\nintegral\noperators with the kernels\n\\begin{gather}\n\\mathcal{C}_L (p_1, p_2 , p_c ; p'_1, p'_2 , p'_c) = \\hat \\varphi^*_1 (\\epsilon_i\n, p_2, p_c) \\left[1-\\mu_1(\\epsilon + p_1^2)\\right]^{-\\frac 12}\n\\chi_{(\\epsilon_i, \\epsilon_{i+1}]} (|p_1|) \\hat \\varphi_1 (\\epsilon_i , p'_2,\np'_c) \\nonumber\\\\\n\\times \\delta (p_1 - p'_1 ) , \\\\\n\\mathcal{C}_R (p_1, p_2 , p_c ; p'_1, p'_2 , p'_c) = \\hat \\varphi^*_2 (\\epsilon_k\n, a_{21}p_1 + a_{22}p_2, p_c)\n\\hat \\varphi_2 (\\epsilon_k , a_{21}p'_1 + a_{22}p'_2, p'_c) \\nonumber \\\\\n\\times \\chi_{(\\epsilon_k, \\epsilon_{k+1}]} (|a_{11}p_1 + a_{12}p_2|) \\delta\n(a_{11}p_1 + a_{12}p_2 - a_{11}p'_1 - a_{12}p'_2) ,\n\\end{gather}\nwhere $ \\chi_\\Omega : \\mathbb{R} \\to \\mathbb{R}$ is the characteristic function\nof the interval $\\Omega \\subset \\mathbb{R}$ (the delta--function is needed formally here to compute the product kernel).\n\nLet us define the function $D:\\mathbb{R}^3 \\times \\mathbb{R}^3 \\to \\mathbb{C}$\nas\n\\begin{equation}\n D (p_2, q_2) := \\int \\hat \\varphi_1 (\\epsilon_i ; p_2, p_c) \\hat \\varphi_2^*\n(\\epsilon_k ; q_2, p_c) d^{3N-9} p_c.\n\\end{equation}\nBy the Cauchy--Schwarz inequality\n\\begin{equation}\n |D (p_2, q_2)|^2 \\leq \\rho_1 (p_2) \\rho_2 (q_2) , \\label{24.9;72}\n\\end{equation}\nwhere\n\\begin{gather}\n \\rho_1 (p_2) := \\int \\bigl|\\hat \\varphi_1 (\\epsilon_i ; p_2, p_c) \\bigr|^2\nd^{3N-9} p_c \\\\\n \\rho_2 (q_2) := \\int \\bigl|\\hat \\varphi_2 (\\epsilon_k ; q_2, p_c) \\bigr|^2\nd^{3N-9} p_c,\n\\end{gather}\nand by normalization\n\\begin{equation}\n \\int \\rho_1 (z) d^3 z = \\int \\rho_2 (z) d^3 z = 1.
\\label{24.9;71}\n\\end{equation}\n\nA straightforward calculation shows that the kernel of the product $\\mathcal{C}_L \\mathcal{C}_R$ has the form\n\\begin{gather}\n(\\mathcal{C}_L \\mathcal{C}_R) (p_1, p_2 , p_c ; p'_1, p'_2 , p'_c) =\n|a_{12}|^{-3}\\hat \\varphi_1 (\\epsilon_i , p_2, p_c) \\left[1-\\mu_1(\\epsilon +\np_1^2)\\right]^{-\\frac 12}\n\\chi_{(\\epsilon_i, \\epsilon_{i+1}]} (|p_1|) \\nonumber\\\\\n\\times D\\bigl(a_{12}^{-1}a_{11} (p'_1 - p_1) + p'_2 , a_{21} p_1 +\na_{22}a_{12}^{-1} a_{11} (p'_1 - p_1) + a_{22}p'_2 \\bigr) \\nonumber\\\\\n\\times \\hat \\varphi_2 (\\epsilon_k , a_{21} p'_1 + a_{22} p'_2 , p'_c)\n\\chi_{(\\epsilon_k, \\epsilon_{k+1}]} (|a_{11}p'_1 + a_{12}p'_2|)\n\\end{gather}\nUsing (\\ref{24.9;72}), (\\ref{24.9;71}) and (\\ref{mubound}) the square of the Hilbert--Schmidt norm\ncan be estimated as follows\n\\begin{gather}\n\\left\\|\\mathcal{C}_L \\mathcal{C}_R\\right\\|^2_{HS} = \\int \\left|(\\mathcal{C}_L\n\\mathcal{C}_R) (p_1, p_2 , p_c ; p'_1, p'_2 , p'_c)\\right|^2\nd^3 p_1 d^3 p_2 d^{3N-9} p_c d^3 p'_1 d^3 p'_2 d^{3N-9} p'_c \\nonumber\\\\\n\\leq |a_{12}|^{-6} a_\\mu^{-1}\\int d^3 p_1 d^3 p_2 d^{3N-9} p_c d^3 p'_1 d^3 p'_2 d^{3N-9}\np'_c \\bigl|\\hat \\varphi_1 (\\epsilon_i , p_2, p_c)\\bigr|^2 \\bigl[\\epsilon + p_1^2\n\\bigr]^{-1} \\nonumber\\\\\n\\times \\rho_1 \\bigl(a_{12}^{-1}a_{11} (p'_1 - p_1) + p'_2 \\bigr) \\rho_2\n\\bigl(a_{21} p_1 + a_{22}a_{12}^{-1} a_{11} (p'_1 - p_1) + a_{22}p'_2 \\bigr) \\nonumber\\\\\n\\times \\bigl|\\hat \\varphi_2 (\\epsilon_k , a_{21} p'_1 + a_{22} p'_2 , p'_c)\n\\bigr|^2\n= |a_{12}|^{-6} a_\\mu^{-1} \\int d^3 p_1 d^3 p_2 d^3 p'_1 d^3 p'_2 \\bigl[\\epsilon + p_1^2\n\\bigr]^{-1} \\rho_1 (p_2) \\nonumber\\\\\n\\times \\rho_1 \\bigl(a_{12}^{-1}a_{11} (p'_1 - p_1) + p'_2 \\bigr) \\rho_2\n\\bigl(a_{21} p_1 + a_{22}a_{12}^{-1} a_{11} (p'_1 - p_1) + a_{22}p'_2 \\bigr)\n\\rho_2 \\bigl(a_{21} p'_1 + a_{22} p'_2 \\bigr) \\nonumber\\\\\n= |a_{12}|^{-6} a_\\mu^{-1} \\int d^3 p_1 d^3 p'_1 d^3 p'_2 \\bigl[\\epsilon + p_1^2\n\\bigr]^{-1} \\rho_1 \\bigl(a_{12}^{-1}a_{11} (p'_1 - p_1) + p'_2 \\bigr) \\nonumber \\\\\n\\times \\rho_2 \\bigl(a_{12}^{-1} (p'_1 - p_1) + a_{21}p'_1 + a_{22}p'_2 \\bigr)\n\\rho_2 \\bigl(a_{21} p'_1 + a_{22} p'_2 \\bigr),\n\\end{gather}\nwhere we have used the relation $a_{11}a_{22} - a_{12}a_{21} = 1$.\nNow we make the change of variables\n\\begin{gather}\n \\xi_1 := a_{12}^{-1} a_{11} (p'_1 - p_1 ) + p'_2 \\\\\n\\xi_2 := a_{12}^{-1} (p'_1 - p_1 ) + a_{21}p'_1 + a_{22} p'_2 \\\\\n\\xi_3 := a_{21}p'_1 + a_{22}p'_2\n\\end{gather}\nThe inverse transformation has the form\n\\begin{gather}\n p_1 = -a_{21}^{-1} a_{22} \\xi_1 + a_{21}^{-1} \\xi_2 \\\\\np'_1 = -a_{21}^{-1} a_{22} \\xi_1 + a_{21}^{-1} a_{22} a_{11}\\xi_2 - a_{12}\\xi_3\\\\\np'_2 = \\xi_1 - a_{11} \\xi_2 + a_{11} \\xi_3\n\\end{gather}\nThis gives\n\\begin{gather}\n\\left\\|\\mathcal{C}_L \\mathcal{C}_R\\right\\|^2_{HS} \\leq |a_{12}|^{-5} |a_{21}|^{-1} a_\\mu^{-1} \\int d^3 \\xi_1 d^3 \\xi_2 \\rho_1 (\\xi_1) \\rho_2 (\\xi_2)\n\\left\\{\\epsilon + a_{21}^{-2} (\\xi_2 - a_{22}\\xi_1 )^2 \\right\\}^{-1} \\\\\n\\leq |a_{12}|^{-5} |a_{21}| a_\\mu^{-1} \\sup_{\\epsilon >0} \\sup_{s \\in \\mathbb{R}^3} \\int d^3 \\xi_2\\rho_2 (\\xi_2)\n\\left\\{\\epsilon + (\\xi_2 + s)^2 \\right\\}^{-1}\n\\end{gather}\nThat the rhs is finite follows from Theorem~\\ref{3.8;8}. 
\\end{proof}\n\nIn conclusion, let us explain why the proof of Theorem~\\ref{1.08;9} does not apply to the three--particle \ncase, where the Efimov effect is possible \\cite{yafaev,merkuriev,sobolev}.\nFor simplicity let us assume that the pair--interactions are bounded and of finite\nrange, the particle pairs $\\{2,3\\}$ and $\\{1,3\\}$ have zero energy resonances and\n$v_{12} = 0$. Theorem~\\ref{3.8;3} applies in this case as well, but instead of (\\ref{mubound}) one has $1 - \\mu (\\epsilon) = a_\\mu \\sqrt\\epsilon +\n\\mathcal{O} (\\epsilon)$; see \\cite{klaus1} or \\cite{yafaev2}\n(this makes the operators $\\mathcal{R}_j$ defined in (\\ref{20.9;11}) less singular\ncompared to the case of Theorem~\\ref{1.08;9}!). Lemma~\\ref{13.8;4} and Theorem~\\ref{8.8.;8} apply without change. However, in the\ncase $N=3$ the proof of\nTheorem~\\ref{1.08;9}\nbreaks down because $\\mathcal{M}_3 \\notin \\mathfrak{J}$. Indeed, from the definition\nin Theorem~\\ref{8.8.;8} it follows that $\\mathcal{M}_3 = \\mathcal{M}_2$, while the\nexpression for\n$\\mathcal{M}_2$ contains the term $\\mathcal{R}_1 L_1 \\mathcal{R}_2 +\n\\mathcal{R}_2 L_1 \\mathcal{R}_1 = \\mathcal{R}_1 \\mathfrak{m}_2 \\mathcal{R}_2 +\n\\mathcal{R}_2 \\mathfrak{m}_2 \\mathcal{R}_1$. Slightly modifying the analysis in\n\\cite{yafaev,sobolev} one can show that for $\\epsilon \\to +0$ the eigenvalues of the last operator accumulate at a point lying\nin the interval $(1, \\infty)$, which implies that $\\mathcal{M}_3 \\notin \\mathfrak{J}$.