diff --git a/data_all_eng_slimpj/shuffled/split2/finalzako b/data_all_eng_slimpj/shuffled/split2/finalzako new file mode 100644 index 0000000000000000000000000000000000000000..2ddcbbdfb7dd852d02ad8455b97c28744d28655d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzako @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe interaction of protostellar outflows with the ambient molecular cloud occurs through radiative\nshocks that compress and heat the gas, which in turn cools down through line emission at\ndifferent wavelengths. In the dense medium where the\nstill very embedded protostars (the so called class 0 sources) are located, shocks are primarily non-dissociative, and hence the cooling is mainly through emission from \nabundant molecules. Molecular hydrogen is by far the most abundant species in these environments, \nand although H$_2$\\, emits only through quadrupole transitions with low radiative rates,\nit represents the main gas coolant in flows from young protostars. H$_2$\\, shocked emission in outflows has been widely studied in the past mainly through its\nro-vibrational emission in the near-IR (e.g. Eisl{\\\"o}ffel et al. 2000; Giannini et al. 2004; Caratti o Garatti et al. 2006) that traces the dense\ngas at T$\\sim$2000-4000 K. Most of the thermal energy associated with the shocks is however radiated away through\nthe emission of H$_2$\\ rotational transitions of the ground state vibrational level at $\\lambda \\le 28\\mu$m\n(e.g. Kaufman \\& Neufeld 1996). \nMid-IR H$_2$\\, lines are easily excited at low densities and temperatures between 300 and 1500 K: therefore they\nare very good tracers of the molecular shocks associated with the acceleration of ambient gas by\nmatter ejection from the protostar. Given the low excitation temperature, \nthey can also probe regions where H$_2$\\, has not yet reached the ortho-to-para equilibrium, thus \ngiving information on the thermal history of the shocked gas (Neufeld et al. 1998; Wilgenbus et al. 2000).\nIn addition, given the different excitation temperature and critical densities of the v=0--0 and v$\\ge$ 1 H$_2$\\ lines,\nthe combination of mid-IR with near-IR observations is a very powerful tool to constrain \nthe global physical structure and the shock conditions giving rise to the observed emission.\n\nThe study of the 0--0 rotational emission in outflows started in some detail with the \n\\emph{Infrared Space Observatory}. Thanks to the observations performed with the SWS and ISOCAM instruments,\nthe shock conditions, the ortho-para ratio and the global H$_2$\\, cooling have been derived in a handful of \nflows (e.g. Neufeld et al. 1998; Nisini et al. 2000; Molinari et al. 2000; Lefloch et al. 2003). More recently, the \\emph{Infrared Spectrometer} (Houck et al. 2004) on board\n\\emph{Spitzer}, with its enhanced spatial resolution and sensitivity with respect to the ISO spectrometers, \nhas been used to obtain detailed images of the H$_2$\\, rotational emission, from S(0) to S(7), of several\noutflows, from which maps of important physical parameters, such as temperature, column density and o\/p\nratio, have been constructed (Neufeld et al. 2006; Maret et al. 2009; Dionatos et al. 2010). \nIn this framework, Neufeld et al. (2009, hereafter N09) have recently presented IRS spectroscopic maps observations of five young \nprotostellar outflows at wavelengths between 5.2 and 37$\\mu$m, and discussed their averaged physical properties \nand overall energetics. 
In all the flows, the H$_2$\\, S(0)-S(7) emission has been detected and contributes to more than 95\\% of the \ntotal line luminosity in the 5.2-37$\\mu$m\\, range, while atomic emission, in the form of FeII and SI fine structure lines, accounts for only the remaining $\\sim$5\\%. \n\nIn the present paper, we will analyse the H$_2$\\, line maps obtained by N09 towards the L1157 outflow,\nwith the aim of deriving the main physical conditions pertaining to the molecular gas and their variations \nwithin the flow. This in turn will give information on the thermal history of the flow and on how \nenergy is progressively transferred from the primary ejection event to the slow moving ambient gas. \n\nFor this first detailed analysis, L1157 has been chosen among the sources observed by N09 given its uniqueness as a very active and well studied flow at different wavelengths. \nMore than 20 different chemical species have been indeed \ndetected in the shocked spots of this object (Bachiller \\& Perez Gutierrez 1997; Benedettini et al. 2007), some of them for the first time in outflows (e.g. HNCO, Rodr{\\'{\\i}}guez-Fern{\\'a}ndez et al. 2010, and complex organic molecules, Arce et al. 2008, Codella et al. 2009)\n, testifying for a rich shock induced chemistry. Warm H$_2$\\ shocked emission in L1157\nis also evidenced through near-IR maps (e.g. Davis \\& Eisl\\\"offel, 1995) and Spitzer-IRAC\nimages (Looney et al. 2007). The L1157\noutflow has been also recently investigated with the \\textit{Herschel Space Observatory}, showing\nto be very strong also at far-IR wavelengths (Codella et al. 2010, Lefloch et al. 2010,\nNisini et al. 2010).\n\nThe L1157 outflow extends about 0.7 pc in length. Its distance is uncertain and has been estimated between 250 and 440 pc. Here we will adopt D=440 pc for an easier comparison with other works.\nThe outflow is driven by a highly embedded, low mass class 0 source (L1157-mm or IRAS20386+6751) having $L_{bol} \\sim $ 8.3 $L_\\odot$ (Froebrich 2005). \nIt is a very nice example of an outflow driven by a precessing and pulsed jet, possessing an S-shaped structure and different cavities, whose morphology has been reproduced assuming that the outflow\nis inclined by $\\sim$80$^\\circ$ to the line of sight and the axis of the underlying jet precesses\non a cone of 6$^\\circ$ opening angle (Gueth et al. 1996). The episodic mass ejection events are\nevidenced by the presence, along the flow, of individual clumps that are symmetrically displaced \nwith respect to the central source.\nIt is therefore a very interesting target for a study of the physical conditions pertaining to these active regions through an H$_2$\\ excitation analysis.\n\nThe paper is organized as follow: the observations and the main results are summarized in \\S 2. In \\S 3 \nwe describe the analysis performed on the H$_2$\\ images to derive maps of temperature, column density and \northo-to-para ratio. A more detailed NLTE analysis on individual emission peaks is also presented here, where the \nSpitzer data are combined with near-IR data to further constrain the excitation conditions. The implications of these results for the shock conditions along the L1157 flow are discussed in \\S 4, together with an analysis of the global\nenergy budget in the flow. A brief summary follows in $\\S 5$.\n\n\\section{Observations and results \\label{analysis}}\n\nObservations of the L1157 outflow were obtained in November 2007 with the IRS instrument, during Cycle 4 of the Spitzer mission . 
\nThe full IRS spectral range (5.2-36.5$\\mu$m) was observed with the Long-High (LH), Short-High (SH) (R $\\sim$ 600) \nand Short-Low (SL) (R between 64 and 128) modules. \nThe L1157 outflow region was covered through 5 individual IRS maps of $\\sim$ 1\\arcmin x1\\arcmin\\, of size each, arranged along the outflow axis. Each map was obtained by stepping the IRS slit by half of its width in the direction perpendicular\nto the slit length. For the SH and LH modules the slit was stepped also parallel to its axis by 4\/5 (SL) and 1\/5 (LH) of its length. \nDetails on the data reduction that generated the individual line maps from the IRS scans are given in N09. The final maps have been resampled to a grid of 2\\arcsec\\, spacing allowing a pixel by pixel comparison of maps obtained with the different IRS modules.\nMaps of the brightest detected lines as well as the full spectrum in a representative position are shown in Fig.\\,7 and Fig.\\,12 of N09. As regards to H$_2$, all the pure rotational lines of the first vibrational levels, from S(0) to S(7),\n are detected at various intensity along the flow. Here we report, in Tab. \\ref{fluxes}, the H$_2$\\, brightness measured in a 20\\arcsec\\, FWHM Gaussian aperture towards different positions.\n\n Fig\\,1 shows the L1157 maps of the S(1) and S(2) lines while Fig.\\,2 displays the S(5) line with superimposed contours of the CO 2--1 emission from Bachiller et al. 2001.\n In the same figure, a map of the 2.12 $\\mu$m 1--0 S(1) line is also presented. The morphology of the 0--0 S(5) and 1--0 S(1) is very similar, with peaks of mid-IR emission \nlocated at the near-IR knots from A to D, as identified by Davis \\& Eisl\\\"offel (1995). \n When compared with the CO map, the mid-IR H$_2$\\, emission appears to follow the curved chain of clumps (labelled as B0-B1-B2 and R0-R1-R, for the blue-shifted and red-shifted lobes, respectively) that also correspond to peaks of SiO emission, as resolved in interferometric observation by Gueth et al. (1998) and Zhang et al. (2000). \n The L1157 outflow morphology has been suggested to delineate a precessing flow (Gueth et al. 1996), where the H$_2$\\, and SiO peak emission knots follow the location of the actual working surface of the precessing jet and are thus associated with the youngest ejection episodes. \nDiffuse H$_2$\\ emission is also detected in the S(1)-S(2) maps, that delineates the wall of a cavity that connects the central source with both\nthe R0 and B0 clumps. Such a cavity has been recognized in the CO 1-0 interferometric maps and it is likely created by the propagation of large bow-shocks.\nThe S(1)-S(2) maps of Fig.\\,1 show extended emission of H$_2$\\, also in the SE direction (i.e. where the B2 clump is located) and in the eastern edge of the northern lobe, that also follow quite closely the CO morphology: these regions at lower excitation might trace additional cavities created by an older ejection episode of the precessing jet.\n\n\\section{ H$_2$ Analysis }\n\\subsection{LTE 2D analysis of the rotational lines: maps of averaged parameters }\n\\label{sec:maps}\nWe have used the H$_2$\\, line maps to obtain the 2D distribution of basic H$_2$\\, physical parameters, through the analysis \nof the rotational diagrams in each individual pixel. 
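\nIn such diagrams the extinction-corrected column density $N_u$ of each upper level, divided by its statistical weight $g_u$ (which includes both the rotational degeneracy and the ortho\/para nuclear spin weight), is plotted against the level excitation energy $E_u$. For reference, for an isothermal gas in LTE the points align on a straight line whose slope measures the gas temperature,\n\\begin{equation*}\n\\ln\\left(\\frac{N_u}{g_u}\\right) = \\ln\\left(\\frac{N({\\rm H_2})}{Z(T)}\\right) - \\frac{E_u}{k_{\\rm B}T},\n\\end{equation*}\nwhere $Z(T)$ is the H$_2$\\, partition function.\n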
As described in N09, who analysed the global \nH$_2$\\, excitation conditions in L1157, the distribution of upper level column densities of the S(0)-S(7) lines as a function of their \nexcitation energy, does not follow a straight line, indicating that a temperature stratification in the \nobserved medium exists. The exact form of this temperature stratification depends on the type of shock\nthe H$_2$\\, lines are tracing, as they probe the post-shock regions where the gas cools from $\\sim $ 1000 K \nto $ \\sim $ 200 K. \n\nThe simplest way to parametrize the post-shock temperature stratification\nis to assume a power-law distribution: this approach was applied by \nNeufeld \\& Yuan (2008), which also show that this type of distribution is expected \nin gas ahead of unresolved bow-shocks. On this basis, and following also N09, we fit the observations \nassuming a slab of gas where the H$_2$\\, column density in each layer at a given $T$ varies as \n$ dN \\propto T^{-\\beta}dT$.\n\nThis law is integrated, to find the total column density, between a minimum ($T_{min} $) and a maximum ($T_{max}$) temperatures . For our calculations \nwe have kept $T_{max}$ fixed at 4000 K, since gas at temperatures larger than this value is not expected to contribute to the emission of the observed pure rotational lines. $T_{min} $ was instead assumed to be equal, in each position, to the minimum temperature probed by the observed lines. This $T_{min} $ is taken as the excitation temperature giving\nrise to the observed ratio of the S(0) and S(1) column densities, assuming a Boltzman\ndistribution. \nThe $T_{min} $ value ranges between $\\sim$150 and 400 K.\n\nWe found that the approach of a variable $T_{min}$\nproduces always fits with a better $\\chi^2$ than assuming a fixed low value in all positions.\nWe also assume the gas is in LTE conditions. Critical densities of rotational lines from S(0) to S(7)\nrange between 4.9 cm$^{-3}$\\, (S(0)) and 4.4$\\times$10$^5$cm$^{-3}$(S(7)) at T=1000 K assuming only H$_2$\\, collisions\n(Le Bourlot et~al. 1999): critical densities decrease if collisions with H and He are not negligible. \nDeviations from LTE can be therefore expected only for the high-$J$ S(6) and S(7) transitions: the S\/N of these \ntransitions in the individual pixels is however not high enough to disentangle, in the rotational diagrams, NLTE \neffects from the effects caused by the variations of the other considered parameters. In particular, as also\ndiscussed in N09, there is a certain degree of degeneracy in the density and the $\\beta$ parameter \nof the temperature power law in a NLTE treatment that we are not able to remove in the analysis of the\nindividual pixels. This issue will be further discussed in \\S \\ref{sec:NLTE}. \nAn additional parameter of our fit is the ortho-to-para ratio (OPR) value. It is indeed recognized that the 0--0 H$_2$\\, lines are\noften far from being in ortho-to-para equilibrium, an effect that in a rotational diagram is evidenced by\na characteristic zigzag behavior in which column densities of lines with even-$J$ lie systematically above those of odd-$J$ lines. In order not to introduce too many parameters, we assume a single OPR value as a free parameter for the\nfit. In reality, the OPR value is temperature dependent (e.g. N09 and \\S \\ref{sec:NLTE}), and therefore\nthe high-$J$ lines might present an OPR value closer to equilibrium then the low-$J$ transitions. 
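\nFor reference, the equilibrium value at a given kinetic temperature is set by the Boltzmann weighting of the ortho (odd-$J$) and para (even-$J$) levels,\n\\begin{equation*}\nOPR_{LTE}(T) = \\frac{3\\sum_{J\\,{\\rm odd}}(2J+1)\\,e^{-E_J\/k_{\\rm B}T}}{\\sum_{J\\,{\\rm even}}(2J+1)\\,e^{-E_J\/k_{\\rm B}T}},\n\\end{equation*}\nwhich is close to 3 for temperatures above a few hundred K and drops well below 3 only in colder gas.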
\nOur fit gives therefore only a value averaged over the temperature range probed by the lines considered (i.e. $\sim$200-1500 K). \n\nIn summary, we have varied only three parameters, namely the total H$_2$\, column density N(H$_2$), the OPR, and the temperature power law index $\beta$, in order to obtain the best model fit through a $\chi^2$ minimization procedure and assuming a 20$\%$ flux uncertainty for all the lines. \nThe fit was performed only in those pixels where at least four lines with an S\/N larger\nthan 3 have been detected. \nBefore performing the fit, the line column densities were corrected for extinction assuming $A_v$ = 2 (Caratti o Garatti et al. 2006) and \nadopting the Rieke \& Lebofsky (1985) extinction law. At the considered wavelengths,\nvariations of $A_v$ of the order of 1-2 mag do not affect any of the derived results.\n\nFigure \ref{fig:ncol} shows the derived map of the H$_2$\, column density, while in Fig. \ref{fig:b_tmin} and \ref{fig:op_tcold} \nmaps of the OPR and $\beta$ are displayed together with temperature maps relative to the ''cold'' and ''warm'' components ($T_{cold}$ and $T_{warm}$),\ni.e. the temperature derived from linearly fitting the S(0)-S(1)-S(2) and S(5)-S(6)-S(7) lines, once corrected for the derived OPR value. \nIn Fig. 6 we also show the individual excitation diagrams for selected positions along the flow, obtained from intensities\nmeasured in a 20$\"$ FWHM Gaussian aperture centered towards emission peaks (Tab. 1). Values of the\nfitted parameters in these positions are reported in Tab. \ref{param}. In addition to $T_{cold}$ and $T_{warm}$,\nwe give in this table also the values of the average temperature in each knot, derived through a\nlinear fit to all the H$_2$\ lines ($T_{med}$).\n\nThe maps show significant variations in the inferred parameters along the outflow. \nThe H$_2$\ column density ranges between 5$\times$10$ ^{19} $ and \n3$\times$10$ ^{20} $cm$^{-2}$. The region at the highest column density is located towards the B1 molecular bullet (see Fig. \ref{fig:h2co})\footnote{Fig. \ref{fig:h2co} shows that the molecular clumps B1, R0 and\nR coincide in position with the NIR H$_2$\ knots A, C and D. In the paper, both nomenclatures\nwill be used, specifying if we refer to the NIR or mm condensations}. \nThis is consistent with the higher column density of CO found in B1 with respect \nto other positions in the blue lobe (Bachiller \& Perez Gutierrez 1997), and might suggest that this is a zone where the outflowing gas is compressed due \nto the impact with a region of higher density (Nisini et al. 2007). Towards the NW red-shifted \noutflow, the column density has a more uniform distribution, with a plateau at $ \sim 10 ^{20} $ cm$^{-2}$\ that follows the H$_2$\ intensity distribution. \nThe N(H$_2$) decreases at the apex of the red-shifted outflow, with a value slightly below $ 10 ^{20} $ cm$^{-2}$\ at the position\nof the D near-IR knot.\n\n$T_{cold}$ ranges between $ \sim $ 250 and 550 K. The highest values are found at the tip of the northern outflow lobe, while local maxima correspond to the positions of line intensity peaks.\n$T_{warm}$ ranges between $ \sim $ 1000 and 1500 K. In this case the highest values are in the southern lobe, at the position of the A NIR knot. \nAs a general trend, the $T_{warm}$ value decreases going from the southern to the northern peaks of emission, with\nthe minimum value at the position of the D NIR knot. 
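\n\nTo give a feeling for what the fitted $\beta$ values discussed below imply, note that under the adopted parametrization $dN \propto T^{-\beta}dT$ the column density of gas warmer than a temperature $T$ scales as\n\begin{equation*}\nN(>T) \propto \frac{T^{1-\beta}}{\beta -1} \qquad (T \ll T_{max}),\n\end{equation*}\nso that, for $\beta$=4, the column of gas above 1000 K is only $(1000\/300)^{-3}\simeq$ 3\% of that above 300 K, and even less for steeper (larger $\beta$) distributions.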
\n\n$\\beta$ values range between $ \\sim $4-4.5 in the blue-shifted lobe while it is larger in the red-shifted\nlobe, with maximum values of $ \\sim $ 5.5 at the tip of the flow. \nDue to the degeneracy between $\\beta$ and density discussed in the previous section, these\nvalues can be considered as upper limits because of our assumption of LTE conditions. \nNeufeld \\& Yuan (2008) have discussed the $\\beta$ index expectations in bow-shock excitation.\nA $\\beta$ index of $ \\sim $ 3.8 is expected in paraboloid bow shocks having a velocity at the bow apex\nhigh enough to dissociate H$_2$, in which case the temperature distribution\nextends to the maximum allowed temperature. Slower shocks that are not able to attain the maximum\ntemperature, produce steeper temperature distributions, i.e. with values of $ \\beta $ greater than 3.8.\nThis is consistent with our findings: low values of $ \\beta $ (of the order of 4) are found in the blue-shifted\nlobe: here, evidence of H$_2$\\ dissociation is given by the detection\nof atomic lines (i.e. [SiII], [FeII] and [SI]) in the IRS spectrum. In the red-shifted lobe, where values of $\\beta$ larger than 4 are derived in the LTE\nassumption, no atomic emission is detected and the $T_{warm}$ values\nare lower than those measured in the blue-shifted lobe, indicating a maximum temperature lower than in the blue flow .\n\nThe OPR varies significantly along the outflow, spanning from $ \\sim $ 0.6 to 2.8. Hence it is always below the equilibrium value of 3. Although a one-to-one correlation between temperature and OPR cannot be discerned, some trends can be\ninferred from inspection of Fig. \\ref{fig:op_tcold}. In general the OPR minima are observed in plateau regions between two consecutive\nintensity peaks, where also the cold temperature has its lowest values. At variance with this trend, the emission\nfilaments delineating the outflow cavity within $\\pm$ 20$ \\arcsec $ from the mm source, where the cold \ntemperature reaches a minimum value of $ \\sim $ 250 K, show rather high\nvalues of the OPR, $ \\sim $2.4-2.8. This might suggests that this region has experienced an older \nshock event that has raised the OPR, though not to the equilibrium value, and where the gas\nhad time to cool at a temperature close to the pre-shock gas temperature. \nOn the other hand, at the apex of both the blue- and red-shifted lobes, where the cold temperatures \nare relatively high (i.e. 500-550 K), the OPR is rather low, $ 1.5-2.0 $. Evidence of regions of low OPR and high \ntemperatures at the outflow tips has been already given in other flows (Neufeld et al. 2006; Maret et al. 2009). It has been suggested that these represent zones subject to recent shocks where the OPR has not had time yet to reach the\nequilibrium value. \n\n\n\\subsection{ NLTE analysis: constraints on H and H$_2$ particle density}\n\\label{sec:NLTE}\n\nAdditional constraints on the physical conditions responsible for the H$_2$\\, excitation, are provided by combining\nthe emission of the mid-IR H$_2$\\ pure-rotational lines from the ground vibrational level with the emission from near-IR ro-vibrational lines.\nIt can be seen from Fig. \\ref{fig:h2co} that the 2.12$\\mu$m\\, emission follows quite closely the\nemission of the 0--0 lines at higher $J$. In addition to the 2.12$\\mu$m\\, data presented in Fig. \\ref{fig:h2co}, we have also considered\nthe NIR long-slit spectra obtained on the A and C NIR knots by Caratti o Garatti (2006). 
These knots, at the spatial\nresolution of the NIR observations (i.e. $\sim$0.8\arcsec), are separated into several different sub-structures that have been individually investigated with the long-slit spectroscopic observations. For our analysis we have considered the\ndata obtained on the brightest of the sub-structures, which coincide in position \nwith peaks of the 1--0 S(1) line.\n\nIn order to inter-calibrate the Spitzer data in flux with these NIR long-slit data, obtained with a slit-width of 0.5 arcsec, \nwe have proceeded as follows: we first convolved the \n2.12$\mu$m\, image to the resolution of the Spitzer images and then performed photometry on the\nA and C peak positions with a 20\arcsec\, diameter Gaussian aperture, i.e. with the same\naperture adopted for the brightness given in Tab. \ref{fluxes}. We have then scaled the fluxes of the individual lines given in \nCaratti o Garatti (2006) in order to match the 2.12$\mu$m\ flux gathered in the slit with that measured by the\nimage photometry. In doing this, we assumed that the average excitation conditions within the 20$\arcsec$ aperture are\nnot very different from those of the A-C peaks. This assumption is \nobservationally supported by the fact that the ratios of different H$_2$\ NIR lines \ndo not change significantly (i.e. less than 20\%) in the A-C knot substructures separately\ninvestigated in Caratti o Garatti (2006).\nWe have considered only those lines detected with S\/N larger \nthan 5; in practice this means considering lines from the first four and three vibrational levels for knots A and C, respectively. \nThe excitation diagrams obtained by combining the Spitzer and NIR data for these two knots are displayed in Fig.\ref{fig:ACfit}.\n\nIn order to model together the 0--0 lines and the near-IR ro-vibrational lines, we have implemented\n two modifications to the approach adopted previously. First of all, the NIR lines probe gas\nat temperatures higher than the pure rotational lines, of a few thousand K, at which it is expected\nthat the OPR has already reached equilibrium. Thus, \nthe ortho-para conversion time as a function of temperature needs to be included in the\nfitting procedure, since lines excited at different temperatures have different OPRs.\nWe have here adopted the approach of N09 and used an analytical expression for the OPR as a function of the temperature, considering a gas that had an initial value of the ortho-to-para ratio OPR$_0 $ and has been heated to a temperature T for a time $ \tau$. Assuming that the para-to-ortho conversion occurs through reactive collisions with atomic hydrogen, we have:\n\n\begin{equation}\n\frac{OPR(\tau)}{1+OPR(\tau)} = \frac{OPR_0}{1+OPR_0}\,e^{-n(H)k\tau} + {\frac{OPR_{LTE}}{1+OPR_{LTE}}}\,\left( 1 - e^{-n(H)k\tau}\right) \n\end{equation}\n\nIn this expression, n(H) is the number density of atomic hydrogen and OPR$_{LTE}$ is the ortho-to-para ratio equilibrium value.\nThe parameter $ k $ is given by the sum of the rate coefficients for para-to-ortho conversion ($ k_{po} $), estimated\nas 8$ \times $10$ ^{-11} $exp(-3900\/T) cm$ ^{3} $\,s$ ^{-1} $, and for ortho-to-para conversion, $ k_{op} \sim k_{po}\/3 $\n(Schofield et al. 1967). Thus the dependence of the OPR on the temperature is implicitly given by the dependence on T of the\n$ k $ coefficient. 
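\nAs an illustration of how steep this temperature dependence is, the e-folding time for the conversion is $ t_{conv} = [n(H)\,k]^{-1} $: adopting for example n(H) = 10$^{4}$ cm$^{-3}$ (a value within the range of atomic hydrogen densities discussed below), the above rate coefficients give $ t_{conv} \sim $ 1.5 yr at T=1000 K but $ \sim $ 10$^{4}$ yr at T=300 K, so that the OPR equilibrates rapidly in the warm shocked gas while retaining memory of its initial value in the colder layers.\n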
The inclusion of a function of OPR on $T$, introduces one additional parameter to our fit: while we have previously \nconsidered only the average OPR of the 0--0 lines, we will fit here the initial OPR$_0 $ value and the coefficient $ K = n(H)\\tau $.\n\nThe second important change that we have introduced with respect to the previous fitting procedure, \nis to include a NLTE treatment of the H$_2$\\, level column densities. In fact, the critical densities\nof the NIR lines are much higher than those of the pure rotational lines (see Le Bourlot et~al. 1999). For example, the \n$n _{crit} $ of the 1-0 S(1) 2.12$\\mu$m\\, line is 10$^7$cm$^{-3}$\\, assuming only collisions with H$_2$\\, and T=2000 K. \nTherefore the previously adopted LTE approximation might not be valid when combining lines from different \nvibrational levels.\nThis is illustrated in Fig.\\ref{fig:plot_dens}, where we plot the results obtained by varying the H$_2$\\ density between \n10$ ^{3} $ and 10$ ^{7} $cm$^{-3}$\\, while keeping the other model parameters fixed . The observed column densities in the A\nposition are displayed for comparison. \nFor the NLTE statistical equilibrium computation we have adopted the H$_2$\\, collisional rate coefficients given by Le Bourlot et~al. (1999) \\footnote{The rate coefficients for collisions with ortho- and para-H$_2$, HI and He, computed in Le Bourlot et~al. (1999) \nare available at the web-site: http:\/\/ccp7.dur.ac.uk\/cooling$\\_$by$\\_$h2\/} .\nThis figure demonstrates the sensitivity of the relative ratios between 0--0 and 1--0 transitions to density variations. For example, in this particular case, the ratio N(H$_2$)$_{0-0 S(7)}$\/N(H$_2$)$_{1-0 S(1)}$ is 64.6 at n(H$_2$)=10$ ^{4} $ cm$^{-3}$\\, and 1.9 at n(H$_2$) $ \\gtrsim $ 10$ ^{7} $ cm$^{-3}$. The figure also shows that the observational points display only a small misalignment in column densities between the 0--0 lines and the 1--0 lines, already indicating that the ro-vibrational lines are close to\nLTE conditions at high density.\n\nIn Fig.\\ref{fig:ACfit}, we show the final best-fit models for the combined mid- and near-IR column densities \nin the A and C positions. \nAs anticipated, the derived n(H$_2$) densities are large, of the order of 10$ ^{7} $ and 6$\\times$10$ ^{6}$ cm$^{-3}$, for the A and C positions, respectively. The two positions indeed show very similar excitation conditions: only the column density is a factor of 3 smaller in knot C. \nHence, we conclude that the lack of detection of rotational lines with v$ > $ 3 in knot C, in contrast to knot A (Caratti o Garatti et al. 2006), is due\npurely to a smaller number of emitting molecules along the line of sight and not to different excitation conditions.\n\nThe derived H$_2$\\, densities are much higher than previous estimates based on other tracers. Nisini et al. (2007)\nderive a density of 4$\\times$10$ ^{5} $ cm$^{-3}$\\, at the position of knot A from multi-line SiO observations, thus more than an order of magnitude smaller than those inferred from our analysis. The high-J CO lines observed along the blue-shifted lobe of L1157 by \nHirano et al. (2001), indicate a density even smaller, of the order of 4$\\times$10$ ^{4} $ cm$^{-3}$. \nSiO is synthesized and excited in a post-shock cooling zone where the maximum compression is reached, therefore\nit should trace post-shock regions at densities higher than H$_2$ (e.g. Gusdorf et al. 2008). 
\nOne possibility at the origin of the discrepancy is our assumption of collisions with only H$_2$\\ molecules, and thus of a \nnegligible abundance of H. This can be considered roughly true in the case of non-dissociative C-shocks, where H atoms are \nproduced primarely in the chemical reactions that form H$_2$O from O and H$_2$, with an abundance n(H)\/n(H$_2$+H) $ \\sim $ 10$^{-3}$\n(e.g. Kaufmann \\& Neufeld 1996). However, if the shock is partially dissociative, the abundance of H can increase considerably and \ncollisions with atomic hydrogen cannot be neglected, in view of its large efficiency in the H$_2$\\, excitation.\nThis situation cannot be excluded at least for the knot A, where atomic emission from [FeII] and [SI] has been \ndetected in our Spitzer observations.\nSince we cannot introduce the n(H) as an additional independent parameter of our fit, we have fixed n(H$_2$) at the value\nderived from SiO observations (4$\\times$10$ ^{5} $ cm$^{-3}$, Nisini et al. 2007) and varied the H\/H$_2$\\, abundance ratio. \nThe best fit is in this case obtained with a ratio H\/H$_2$=0.3: this indicates that our observational data are\nconsistent with previous H$_2$\\ density determinations only if a large fraction of the gas is in atomic form.\n\nTurning back to the inferred OPR variations with temperature, our fit implies that the OPR in the cold gas component at T=300 K\nis significantly below the equilibrium value, while the value of \nOPR=3 is reached in the hot gas at T=2000 K traced by the NIR lines. The parameter $K=n(H)\\tau$\nis constrained to be $\\sim$10$^6$ and 10$^7$ yr\\,cm$^{-3}$\\, for knots A and C, respectively. \nWe can also estimate the time needed for the gas to reach this distribution of OPR, from the limits on the\natomic hydrogen abundance previously discussed.\nOur data implies a high value of the n(H) density: a minimum value of n(H) $\\sim$0.6-1$\\times$10$^4$cm$^{-3}$, (for knots C and A, respectively) \n is given if we assume H\/H$_2$$\\sim$ 10$^{-3}$ (and thus the n(H$_2$) $\\sim$ 10$ ^{7} $cm$^{-3}$, given\n by our fit), while a maximum value of $\\sim$10$^5$cm$^{-3}$,\nis derived from the fit where n(H$_2$) is kept equal to 4$\\times$10$ ^{5} $ cm$^{-3}$.\nThe high abundance of atomic H ensures that conversion of para- to\northo-H$_2$\\, proceeds very rapidly: the fitted values of the K parameter indeed imply that the observed range of\nOPR as function of temperature have been attained in a timescale between 100 and 1000 yrs for both the knots.\n\nFinally, given the column density and particle density discussed above, we can estimate the H$_2$\\,\ncooling length ($L \\sim$ N(H$_2$)\/n(H$_2$)). If we consider the case of n(H$_2$) $\\sim$ 10$ ^{7} $cm$^{-3}$,\nand negligible n(H), we have $L \\sim$ 10$ ^{13} $ cm while a length of $\\sim$ 10$ ^{15} $ cm is inferred\nin the case of n(H$_2$)$\\sim$ 4$\\times$10$^5$cm$^{-3}$. \nAll the parameters derived from the above analysis are summarised in Tab. \\ref{shock} and they will be discussed\nin the next section in the framework of different shock models.\n\n\\section{Discussion}\n\n\\subsection{Shock conditions giving rise to the H$_2$\\, emission}\n\nThe copious H$_2$\\, emission at low excitation observed along the L1157 outflow \nindicates that the interaction of the flow with the ambient medium occurs\nprevalently through non-dissociative shocks. \nBoth the Spitzer IRS maps of N09, and the NIR narrow band images of Caratti o Garatti et al. 
2006, show that significant gas dissociation in L1157 occurs only at the A peak, where both mid-IR \nlines from [FeII], [SII] and [SI] and weak [FeII] at 1.64$\\mu$m\\ have been detected. \nWeaker [SI]\\,25$\\mu$m\\ and [FeII]\\,26$\\mu$m\\ emission have been also detected on the C spot, but\noverall the atomic transitions give a negligible contribution to the total gas cooling,\nas pointed out in N09.\nThese considerations suggest that most of the shocks along the outflow occur at speeds\n below $\\sim$ 40km\\,s$^{-1}$, as this is the velocity limit above which H$_2$\\ is expected to be dissociated.\nThe knot A is the only one showing a clear bow-shock structure. Here the velocity at\nthe bow apex is probably high, causing H$_2$\\ dissociation and atomic line excitation,\nconsistent with the fitted temperature power law $\\beta$ of $\\sim$ 4, as discussed in \\S 3.1,\nwhile the bulk of the H$_2$\\, emission comes from shocks at lower velocities originating in\nthe bow wings.\n\nConstraints on the shock velocity that gives rise to the molecular emission in L1157\n have been already given in previous works. The sub-mm SiO emission and\nabundances, measured in different outflow spots, suggest shock velocities of the \norder of 20-30km\\,s$^{-1}$\\ (Nisini et al. 2007). The comparison of SiO and H$_2$\\ \nemission against detailed shock models performed by Gusdorf et al. (2008) confirm a similar\nrange of velocities in the NIR-A knot, although the authors could not find a unique \nshock model that well represents both the emissions. \n\nCabrit et al. (1999) found that the column density of the mid-IR H$_2$\\, emission lines, \nfrom S(2) to S(8), observed by ISO-ISOCAM was consistent either with C-shocks having\nvelocities of $\\sim$\\,25km\\,s$^{-1}$\\, or with J-shocks at lower velocity, of the order\nof $\\sim$\\,10km\\,s$^{-1}$. Gusdorf et al. (2008), however, conclude\nthat stationary shock models, either of C- or J-type, are not able to reproduce the observed\nrotational diagram on the NIR-A position, constructed combining ISOCAM data and \nNIR vibrational lines emission. A better fit was obtained \nby these authors considering non-stationary shock models, which have developed a magnetic precursor but which retain a J-type discontinuity (the so-called CJ shocks, Flower et al. 2003). \nSimilar conclusions, but on a different outflow, have been reached by Giannini et al. 2006\nwho studied the H$_2$\\ mid- and near-IR emission in HH54: in general, stady-state C- and J-type \nshocks fail to reproduce simultaneously the column densities of both the ro-vibrational \nand the v=0, pure-rotational H$_2$\\ levels. \n\nA different way to look at the issue of the prevailing shock conditions in the observed\nregions, is to compare the set of physical parameters that we have inferred from our analysis \nto those expected from different shocks. With this aim, we summarize in \nTab. \\ref{shock} the physical properties derived on the A and C H$_2$\\, knots. \nIn addition to the parameters derived from the NLTE analysis \n reported in Section \\ref{analysis}, namely H$_2$ post-shock density, H\/H$_2$\\ fraction, \n initial OPR, cooling length and time, the table reports also the average values of \nOPR and rotational temperature, as they are measured from a simple linear fit\n of the rotational diagrams presented in Fig. \\ref{fig:fit_nir}.\n\n\nAs mentioned in \\S 3.2, the high fraction of atomic hydrogen inferred by our analysis rule out excitation in a pure\n C-shock. 
In fact, dissociation in C-shocks is always too low to have a \nH\/H$_2$ ratio higher than 5$\\times$10$^{-3}$, irrespective from the shock velocity and magnetic field strength \n(Kaufman \\& Neufeld 1996; Wilgenbus et al. 2000).\nC-shocks are not consistent with the derived parameters even if we consider the model fit with\nthe high H$_2$\\ post-shock density of the order of 10$^{7}$ cm$^{-3}$ and negligible atomic hydrogen: in this case we derive an emission length of 10$^{13}$ cm,\nwhich is much lower than the cooling length expected in C-shocks, which, although \ndecreasing with the pre-shock density, is never less than 10$^{15}$ cm (Neufeld et al. 2006) .\n\nStationary J-shock models better reproduce some of our derived parameters.\nFor example, in J-type shocks the fraction of hydrogen in the post-shocked gas\ncan reach the values of 0.1-0.3 we have inferred, provided that the shock velocity is larger\nthan $\\sim$ 20 km\\,s$^{-1}$. In general, a reasonable agreement with the inferred post-shock density and \nH\/H$_2$\\ ratio is achieved with models having $v_s$=20-25 km\\,s$^{-1}$ and pre-shock densities of 10$^3$cm$^{-3}$\n(Wilgenbus et al. 2000). Such models predict a shock flow time of the order of 100 yr or less, which\nis also in agreement with the value estimated in our analysis at least in knot A.\n In such models, however, the cooling length is an order of magnitude smaller than the \ninferred value of $ \\sim $10$ ^{15} $ cm. In addition, the gas temperature remains high for most of the post-shocked region: the average rotational temperature of \nthe v=0 vibrational level is predicted to be, according to the Wilgenbus et al. (2000)\ngrid of models, always about 1600 K or larger, as compared with the value of about 800-900 K inferred from observations. \nThe consequence of the above inconsistencies is that J-type shocks tend to underestimate \nthe column densities of the lowest H$_2$\\, rotational levels in L1157, an effect already pointed \nout by Gusdorf et al. (2008). \n\nAs mentioned before, Gusdorf et al. (2008) conclude that the H$_2$\\ pure rotational emission in L1157 \nis better fitted with a non-stationary C+J shock model with either $v_s$ between 20 and 25 km\\,s$^{-1}$\\ and pre-shock densities $n_H = 10^4$ cm$^{-3}$, or with $v_s \\sim 15$ km\\,s$^{-1}$\\, and\nhigher pre-shock densities of $n_H = 10^5$ cm$^{-3}$. Such models, however, still underestimate\nthe column densities of the near-IR transitions: the post-shocked H$_2$\\ gas density \nremains lower than the NIR transitions critical density and the atomic hydrogen\nproduced from H$_2$\\ dissociation is not high enough to populate the vibrational\nlevels to equilibrium conditions.\n\nThe difficulty of finding a suitable single model that reproduce the derived physical \nconditions is likely related to possible geometrical effects and to the fact that multiple shocks with different velocities might be present along the line of sight. It would be indeed interesting to explore whether bow-shock models might be able to predict the averaged physical characteristics \nalong the line of sight that we infer from our analysis.\n\n\\subsection{Flow energetics}\n\nH$_2$\\ emission represents one of the main contributor to the energy radiated\naway in shocks along outflows from very young stars. 
Kaufman \& Neufeld (1996) predicted that\nbetween 40 and 70\% of the total shock luminosity is emitted in H$_2$\, lines\nfor shocks with pre-shock density lower than 10$^5$ cm$^{-3}$\ and shock velocities larger\nthan 20 km\,s$^{-1}$, the other main contributions being in CO and H$_2$O rotational emission. \nThis has also been observationally tested by Giannini et al. (2001) who measured the \nrelative contribution of the different species to the outflow cooling in a sample of class 0 \nobjects observed with ISO-LWS. \n\nWe will discuss here the role of the H$_2$\, cooling in the global radiated energy of the L1157 outflow.\nFrom the best fit model obtained for the knots A and C, we have derived the total, extinction corrected,\nH$_2$\ luminosity by integrating over all the ro-vibrational transitions considered by our\nmodel. $L_{\rm H_2}$ is found\nto be 8.4$\times$10$^{-2}$ and 3.7$\times$10$^{-2}$ L$_\odot$ for the A and C knots, respectively. Out of this total\nluminosity, the contribution of only the rotational lines is 5.6$\times$10$^{-2}$(A) and 2.7$\times$10$^{-2}$(C) L$_\odot$,\nwhich means that in both cases they represent about 70\% of the total H$_2$\, luminosity.\n\nN09 have found that the total luminosity of the H$_2$\ rotational lines from S(0) to\nS(7), integrated over the entire L1157 outflow, \namounts to 0.15 L$_\odot$. If we take into account an additional 30\% of contribution from the v$>$0 \nvibrational levels, we estimate a total H$_2$\ luminosity of 0.21 L$_\odot$. This is 30\% larger\nthan the total H$_2$\ luminosity estimated by Caratti o Garatti (2006) in this outflow, \nassuming a single component gas at a temperature between 2000 and 3000 K that fits the NIR H$_2$\ lines. \n\nIf we separately compute the H$_2$\ luminosity\nin the two outflow lobes, we derive $L_{\rm H_2}$ = 8.5$\times$10$^{-2}$ L$_\odot$ in the blue lobe and 1.3$\times$10$^{-1}$ L$_\odot$ in the red\nlobe. Comparing these numbers with those derived in the individual A and C knots, \nwe note that the A knot alone contributes most of the H$_2$\ luminosity in the blue lobe. By contrast,\nthe H$_2$\ luminosity of the red lobe is distributed among several peaks of similar value. \nThis might suggest that most of the energy carried by the blue-shifted jet is\nreleased when the leading bow-shock encounters a density enhancement at the position of the A knot. \nOn the other hand, the red-shifted gas flows more freely without large density discontinuities, and \nthe corresponding shocks are internal bow-shocks, all with similar luminosities.\n\nThe integrated luminosity radiated by CO, H$_2$O and OI in L1157 has been estimated, through ISO and\nrecent Herschel observations, as $\sim$0.2 L$_\odot$ (Giannini et al. 2001, Nisini et al. 2010), \nwhich means that H$_2$\ alone contributes about 50\% of the\ntotal luminosity radiated by the outflow. \nIncluding all contributions, the total shock cooling along the L1157 outflow amounts to about 0.4 L$_\odot$, i.e. $L_{cool}\/L_{bol}$ $\sim$ 5$\times$10$^{-2}$, assuming $L_{bol}$=8.4 L$_\odot$ \nfor L1157-mm (Froebrich 2005). This ratio is consistent with the range of values derived for other class 0 sources \nfrom ISO observations (Nisini et al. 2002).\n\nThe total kinetic luminosity (L$_{kin}$) of the L1157 molecular outflow \nestimated by Bachiller et al. (2001) amounts to 0.2 L$_\odot$ without any correction for the\noutflow inclination angle, or to 1.2 L$_\odot$ if an inclination angle of 80 degrees \nis assumed. 
Considering that the derivation of the L$_{kin}$ value has normally\nan uncertainty of a factor of five (Downes \\& Cabrit 2007), we conclude that the mechanical energy flux \ninto the shock, estimated as $L_{cool}$, is comparable to the kinetic energy of the swept-out \noutflow and thus that the shocks giving rise to the H$_2$ emission have \nenough power to accelerate the molecular outflow.\n\nThe total shock cooling derived above can be also used to infer the momentum flux \nthrough the shock, i.e. $\\dot{P}$ = 2$L_{cool}$\/V$_s$, where V$_s$ is the shock velocity that\nwe can assume, on the basis of the discussion in the previous section, to be of the order of 20 km\\,s$^{-1}$.\nComputing the momentum flux separately for the blue and red outflow lobe, we derive \n$\\dot{P}_{red} \\sim$ 1.7$\\times$10$^{-4}$ and $\\dot{P}_{blue} \\sim$ 1.1$\\times$10$^{-4} $M$_\\odot$ yr$^{-1}$ km\\,s$^{-1}$. \nIn this calculation, we have assumed that the contribution from cooling species\n different from H$_2$, as estimated by ISO and Herschel, is distributed among the two lobes \n in proportion to the H$_2$\\ luminosity. If we assume that the molecular \n outflow is accelerated at the shock front through momentum conservation, then the\n above derived momentum flux should results comparable to the thrust of the outflow,\n derived from the mass, velocity and age measured through CO observations.\nThe momentum flux measured in this way by Bachiller et al. (2001) is 1.1$\\times$10$^{-4}$and 2$\\times$10$^{-4}$ M$_\\odot$ yr$^{-1}$ km\\,s$^{-1}$\nin the blue and red lobes, respectively, i.e. comparable to our derived values. It is interesting to note that \nthe $\\dot{P}$ determination from the shock luminosity confirms the asymmetry between\nthe momentum fluxes derived in two lobes. As shown by Bachiller et al. (2001), the L1157 red lobe \nhas a 30\\% smaller mass with respect to the blue lobe, but a higher momentum flux due to the larger flow velocity. The northern red lobe is in fact more extended than the southern lobe:\nhowever, given the higher velocity of the red-shifted gas, the mean kinematical ages of the two lobes is very\nsimilar. \n\n\n\\section{Conclusions \\label{conclusions}}\n\nWe have analysed the H$_2$\\ pure rotational line emission, from S(0) to S(7),\nalong the outflow driven by the L1157-mm protostar, mapped with the Spitzer - IRS \ninstrument. The data have been analysed assuming a gas temperature stratification where the H$_2$\\ column \ndensity varies as $T^{-\\beta}$ and 2D maps of the H$_2$\\ column density,\northo-to-para ratio (OPR) and temperature spectral index $\\beta$\nhave been constructed. \nFurther constraints on the physical conditions of the shocked gas have been derived \nin two bright emission knots by combining the Spitzer observations with near-IR \ndata of H$_2$\\ ro-vibrational emission. Finally, the global H$_2$\\ radiated energy of the\noutflow has been discussed in comparison with the energy budget of the associated\nCO outflow.\n\nThe main conclusions derived by our analysis are the following:\n\\begin{itemize}\n\\item H$_2$\\ transitions with $J_{lower} \\le$ 2 follows the morphology of the CO molecular\noutflow, with peaks correlated with individual CO clumps and more diffuse\nemission that delineates the CO cavities created by the precessing jet. 
\nLines with higher $J$ are localized on the shocked peaks, presenting a morphology\nsimilar to that of the H$_2$\\, 2.12$\\mu$m\\, ro-vibrational emission.\n\\item Significant variations of the derived parameters are observed along the flow. \nThe H$_2$\\ column density ranges between 5$\\times$10$^{19} $ and 3$\\times$10$^{20} $cm$^{-2}$: \nthe highest values are found in the blue-shifted lobe, suggesting that here the outflowing\ngas is compressed due to the impact with a high density region.\nGas components in a wide range of temperature values, from $ \\sim $ 250 to $ \\sim $ 1500 K\ncontribute to the H$_2$\\ emission along individual lines of sight. The largest range\nof temperature variations is derived towards the intensity peaks closer to the \ndriving source, while a more uniform temperature distribution, with $ T $\nbetween $ 400 $ and $ 1000$ K, is found at the tip of the northern outflow lobe.\n\\item The OPR is in general lower than the equilibrium value at high temperatures and spans a range from $\\sim$0.6 to 2.8, with the lowest values\nfound in low temperature plateau regions between consecutive intensity peaks. \nAs in previous studies, we also found the presence of regions at low OPR (1.5-1.8) \nbut with relatively high temperatures. These might represent zones subject to recent shocks \nwhere the OPR has not had time yet to reach the equilibrium value.\n\n\\item Additional shock parameters have been derived in the two bright near-IR \nknots A and C, located \nin the blue- and red-shifted outflow lobes, where the mid- and near-IR H$_2$\\ \ndata have been combined. \nThe ratio between mid- and near-IR lines is very sensitive to the molecular plus atomic hydrogen particle density. A high \nabundance of atomic hydrogen (H\/H$_2$ $\\sim$ 0.1-0.3) is implied by the \nthe observed H$_2$\\ column densities if we assume n(H$_2$) values as derived by independent \nmm observations. With this assumption, the cooling lengths of the shock result \nof the order of 7$\\times$10$ ^{14} $ and 10$ ^{15} $ cm for the A and C knot, respectively.\nThe distribution of OPR values as a function of temperature and the \nderived abundance of atomic hydrogen, implies that the shock passing time is of the\norder of 100 yr for knot A and 1000 yr for knot C, given the assumption that the para-to-ortho\nconversion occurs through reactive collisions with atomic hydrogen.\nWe find that planar shock models, either of C- or J-type, are\nnot able to consistently reproduce all the physical parameters derived from our analysis \nof the H$_2$\\ emission. \n\\item Globally, H$_2$\\ emission contributes to about 50\\% of the total shock radiated energy in the L1157 outflow. We find that the momentum flux through the shocks derived from the radiated luminosity is\ncomparable to the thrust of the associated molecular outflow, supporting a scenario\nwhere the working surface of the shocks drives the molecular outflow. \n\\end{itemize}\n\n\\acknowledgments\n\nThis work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Financial support from contract ASI I\/016\/07\/0 is acknowledged. \n\n\\bibliographystyle{plainnat}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFrames are overcomplete (or redundant) sets of vectors that serve to faithfully represent signals. 
They were introduced in $1952$ by Duffin and Schaeffer \\cite{duscha52}, and reemerged with the advent of wavelets \\cite{Christensen:2003ab, Dau92,Ehler:2007aa, Ehler:2008ab, hw89}. \nThough the overcompleteness of frames precludes signals from having unique representation in the frame expansions, it is, in fact, the driving force behind the use of frames in signal processing \\cite{Casazza:2003aa, koche1, koche2}.\n\nIn the finite dimensional setting, frames are exactly spanning sets. However, many applications require ``custom-built'' frames that possess additional properties which are dictated by these applications. As a result, the construction of frames with prescribed structures has been actively pursued. For instance, a special class called {\\it finite unit norm tight frames } (FUNTFs) that provide a Parseval-type representation very similar to orthonormal bases, has been customized to model data transmissions \\cite{Casazza:2003aa, Goyal:2001aa}. Since then the characterization and construction of FUNTFs and some of their generalizations have received a lot of attention \\cite{Casazza:2003aa, koche1, koche2}. Beyond their use in applications, FUNTFs are also related to some deep open problems in pure mathematics such as the Kadison-Singer conjecture \\cite{cftw}. FUNTFs appear also in statistics where, for instance, Tyler used them to construct $M$-estimators of multivariate scatter \\cite{Tyler:1987}. We elaborate more on the connection between the $M$-estimators and FUNTFs in Remark~\\ref{remark:M estimator FUNTF}. These $M$-estimators were subsequently used to construct maximum likelihood estimators for the the wrapped Cauchy distribution on the circle in \\cite{KentTaylor1994} and for the angular central Gaussian distribution on the sphere in \\cite{Tyler:1988}. \n\nFUNTFs are exactly the minimizers of a functional called the frame potential \\cite{Benedetto:2003aa}. This was extended to characterize all finite tight frames in \\cite{Waldron:2003aa}. Furthermore, in \\cite{fjko, jbok}, finite tight frames with a convolutional structure, which can be used to model filter banks, have been characterized as minimizers of an appropriate potential. All these potentials are connected to other functionals whose extremals have long been investigated in various settings. We refer to \\cite{cokum06, Delsarte:1977aa, Seidel:2001aa, Venkov:2001aa, Welch:1974aa} for details and related results. \n\nIn the present paper, we study objects beyond both FUNTFs and the frame potential. In fact, we consider a family of functionals, the {\\it $p$-frame potentials}, which are defined on sets $\\{x_{i}\\}_{i=1}^{N}$ of unit vectors in $\\R^d$; see Section~\\ref{section:pfp}. These potentials have been studied in the context of spherical $t$-designs for even integers $p$, cf.~Seidel in \\cite{Seidel:2001aa}, and their minimizers are not just FUNTFs but FUNTFs that inherit additional properties and structure. Common FUNTFs are recovered only for $p=2$. In the process, we extend Seidel's results on spherical $t$-designs in \\cite{Seidel:2001aa} to the entire range of positive real $p$. \n\nIn Section~\\ref{section:estimates}, we give lower estimates on the $p$-frame potentials, and prove that in certain cases their minimizers are FUNTFs, which possess additional properties and structure. In particular, if $0

2$. Finally in Section \\ref{section:intro prob}, we introduce {\\it probabilistic $p$-frames} that generalize the concepts of frames and $p$-frames. We characterize the minimizers of {\\it probabilistic $p$-frame potentials} in terms of probabilistic $p$-frames. The latter problem is solved completely for $02}\nLet $\\{x_i\\}_{i=1}^N\\subset S^{d-1}$, $N\\geq d$, and $22}\n\\FP_{p, N}(\\{x_{i}\\}_{i=1}^{N}) \\geq N(N-1)\\big(\\frac{N-d}{d(N-1)}\\big)^{p\/2}+N,\n\\end{equation}\n and equality holds if and only if $\\{x_i\\}_{i=1}^N$ is an equiangular FUNTF.\n\\end{proposition}\n\n\n\\begin{proof}\nFor $\\frac{1}{2}=\\frac{1}{p}+\\frac{1}{r}$, H\\\"older's inequality yields\n\\begin{equation}\\label{eq:hoelder numer 1}\n\\|(\\langle x_i,x_j\\rangle)_{i\\neq j}\\|_{\\ell_2} \\leq \\|(\\langle x_i,x_j\\rangle)_{i\\neq j}\\|_{\\ell_p}(N(N-1))^{1\/r}.\n\\end{equation}\nRaising to the $p$-th power and applying $\\frac{1}{r}=\\frac{1}{2}-\\frac{1}{p}$ leads to \n\\begin{equation}\\label{eq:raising to the power}\n\\|(\\langle x_i,x_j\\rangle)_{i\\neq j}\\|_{\\ell_2}^p \\leq \\|(\\langle x_i,x_j\\rangle)_{i\\neq j}\\|_{\\ell_p}^p(N(N-1))^{p\/2-1}.\n\\end{equation}\nTherefore, \n$$\n \\sum_{i\\neq j}|\\langle x_i,x_j\\rangle |^p \\geq \\big(\\sum_{i\\neq j}|\\langle x_i,x_j\\rangle |^2\\big)^{p\/2} (N(N-1))^{1-p\/2}.$$\nUsing the fact that $\\sum_{i\\neq j}|\\langle x_i,x_j\\rangle |^2\\geq \\frac{N^2}{d}-N$ (see Theorem~\\ref{theorem:Benedetto Fickus}) implies that\n$$ \\sum_{i\\neq j}|\\langle x_i,x_j\\rangle |^p \\geq \\big(N(\\frac{N}{d}-1)\\big)^{p\/2} (N(N-1))^{1-p\/2} = N(N-1)\\bigg(\\frac{N-d}{d(N-1)}\\bigg)^{p\/2},$$\nwhich proves~\\eqref{eq:potential for p>2}. \n\nTo establish the last part of the Proposition, we recall that an equiangular FUNTF $\\{x_{k}\\}_{k=1}^{N} \\subset \\R^d$ satisfies \n\\begin{equation}\\label{eq:equi}\n|\\langle x_i,x_j\\rangle | = \\sqrt{\\frac{N-d}{d(N-1)}},\\quad \\text{ for all }i\\neq j\n\\end{equation} \nsee, \\cite{Casazza:2008ab,Sustik:2007aa}, for details. Consequently, if $\\{x_{k}\\}_{k=1}^{N}$ is an equiangular FUNTF, then~\\eqref{eq:potential for p>2} holds with equality. \n\nOn the other hand, if equality holds in \\eqref{eq:potential for p>2}, then $\\sum_{i\\neq j}|\\langle x_i,x_j\\rangle |^2 = \\frac{N^2}{d}-N$ and $\\{x_i\\}_{i=1}^N$ is a FUNTF due to Theorem \\ref{theorem:Benedetto Fickus}. Moreover, the H\\\"older estimate \\eqref{eq:hoelder numer 1} must have been an equality which means that $|\\langle x_i,x_j\\rangle|=C$ for $i\\neq j$, and some constant $C\\geq 0$. Thus, the FUNTF must be equiangular.\n\\end{proof}\n\n\n By comparing \\eqref{eq:Welch} with \\eqref{eq:potential for p>2}, it is easily seen that the Welch bound is not optimal for small $N$: \n \n \n \\begin{proposition}\\label{prop:second one}\n Let $\\{x_i\\}_{i=1}^N\\subset S^{d-1}$ and $p=2k>2$ be an even integer. If $d\\frac{N^2}{\\binom{d+k-1}{k}}.\n \\end{equation} \n \\end{proposition}\n \n \n \\begin{proof} \n The condition on $N$ implies $1 \\geq \\frac{N}{\\binom{d+k-1}{k}}$, and adding $ (N-1)\\big(\\frac{N-d}{d(N-1)}\\big)^k>0$ to the right hand side leads to\n \\begin{equation*}\n (N-1)\\big(\\frac{N-d}{d(N-1)}\\big)^k +1 > \\frac{N}{\\binom{d+k-1}{k}}.\n \\end{equation*}\n Multiplication by $N$ and Proposition \\ref{prop:potential for p>2} then yield \\eqref{eq:abc}.\n \\end{proof}\n\n\\begin{remark} The estimate in Proposition \\ref{prop:potential for p>2} is sharp if and only if an equiangular FUNTF exists. 
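\nFor instance, for $d=2$ and $N=3$, three unit vectors with pairwise angles of $120^\circ$ form an equiangular FUNTF with $|\langle x_i,x_j\rangle|=1\/2$ for $i\neq j$, and a direct computation gives\n\begin{equation*}\n\FP_{p, 3}(\{x_{i}\}_{i=1}^{3}) = 3+6\cdot 2^{-p} = N(N-1)\Big(\frac{N-d}{d(N-1)}\Big)^{p\/2}+N,\n\end{equation*}\nso the lower bound of Proposition \ref{prop:potential for p>2} is attained in this case.\n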
In \\cite[Sections 4 $\\&$ 6]{Sustik:2007aa}, construction (and hence existence) of equiangular FUNTFs was established when $d+2 \\leq N \\leq 100$. For general $d$ and $N$, a necessary condition for existence of equiangular FUNTFs is given, and it is conjectured that the conditions are sufficient as well. The authors essentially provide on upper bound on $N$ that depends on the dimension $d$. \nTherefore, Proposition~\\ref{prop:potential for p>2} might not be optimal when the redundancy $N\/d$ is much larger than $1$. \n\\end{remark}\n\n\n\\subsection{Relations to spherical $t$-designs}\\label{subsection:t design}\nA \\emph{spherical $t$-design} is a finite subset $\\{x_i\\}_{i=1}^N$ of the unit sphere $S^{d-1}$ in $\\R^d$,\nsuch that,\n\\begin{equation*}\n\\frac{1}{N}\\sum_{i=1}^N h(x_i) = \\int_{S^{d-1}} h(x)d\\sigma(x),\n\\end{equation*}\nfor all homogeneous polynomials $h$ of total degree equals or less than $t$ in $d$ variables and where $\\sigma$ denotes the uniform surface measure on $S^{d-1}$ normalized to have mass one. The following result is due to \\cite[Theorem 8.1]{Venkov:2001aa} (see \\cite{Seidel:2001aa}, \\cite{Delsarte:1977aa} for similar results). \n\n \n \\begin{theorem}\\label{theorem:p even integer discrete}\\cite[Theorem 8.1]{Venkov:2001aa}\nLet $p=2k$ be an even integer and $\\{x_i\\}_{i=1}^N=\\{-x_i\\}_{i=1}^N\\subset S^{d-1}$, then \n \\begin{equation*}\n\\FP_{p, N}(\\{x_{i}\\}_{i=1}^{N}) \\geq \\frac{1\\cdot 3\\cdot 5\\cdots(p-1)}{d(d+2)\\cdots (d+p-2) }N^2,\n \\end{equation*}\nand equality holds if and only if $\\{x_i\\}_{i=1}^N$ is a spherical $p$-design. \n \\end{theorem} \n \n \n \n\\subsection{Optimal configurations for the $p$-frame potential} \nWe first use Theorem~\\ref{theorem:Benedetto Fickus} to characterize the minimizers of the $p$-frame potential for $02}, so we focus on $p\\in (0,2)$. \n\nOne easily verifies that, for $p_0=\\frac{\\log(\\frac{d(d+1)}{2})}{\\log(d)}$, an orthonormal basis plus one repeated vector and an equiangular FUNTF have the same $p_0$-frame potential $\\FP_{p_{0}, d+1}$. Under the assumption that those two systems are exactly the minimizers of $\\FP_{p_{0}, d+1}$, the next result will give a complete characterization of the minimizers of $\\FP_{p, d+1}$, for $01$. According to Proposition \\ref{prop:potential for p>2}, the minimizers of the $p$-frame potential for $22} and \\ref{prop:second one} still hold for complex vectors $\\{z_i\\}_{i=1}^N\\subset \\C^d$ that have unit norm. The constraints on $N$ and $d$ that allow for the existence of a complex FUNTF are slightly weaker than in the real case~\\cite{Sustik:2007aa}.\n\\end{remark}\n\n\n\n\n\n\n\n\n\n \n\\section{The probabilistic $p$-frame potential}\\label{section:intro prob}\n\nThe present section is dedicated to introducing a probabilistic version of the previous section. We shall consider probability distributions on the sphere rather than finite point sets. Let $\\mathcal{M}(S^{d-1},\\mathcal{B})$ denote the collection of probability distributions on the sphere with respect to the Borel sigma algebra $\\mathcal{B}$. \n\nWe begin by introducing the probabilistic $p$-frame which generalizes the notion of probabilistic frames introduced in~\\cite{Ehler:2010aa}. 
\n\n\\begin{definition}\\label{probpframe}\nFor $00$ such that\n \\begin{equation}\\label{ppframeineq}\nA\\|y\\|^p\\leq \\int_{S^{d-1}} |\\langle x,y\\rangle|^p d\\mu(x) \\leq B\\|y\\|^p, \\quad\\forall y\\in\\R^d.\n \\end{equation}\n We call $\\mu$ a \\emph{tight probabilistic $p$-frame} if and only if we can choose $A=B$.\n \\end{definition}\n Due to Cauchy-Schwartz, the upper bound $B$ always exists. \nConsequently, in order to check that $\\mu$ is a probabilistic $p$-frame one only needs to focus on the lower bound $A$. \n\n Since the uniform surface measure $\\sigma$ on $S^{d-1}$ is invariant under orthogonal transformations, one can easily check that it constitutes a tight probabilistic $p$-frame, for any $02$. Then, for all $y\\neq 0 \\in \\R^d$, \n\\begin{align*}\nA\\|y\\|^p&\\leq \\int_{S^{d-1}} |\\langle x,y\\rangle|^p d\\mu(x)\\\\\n&=\\int_{S^{d-1}} |\\langle x,y\\rangle|^2\\, |\\langle x,y\\rangle|^{p-2}\\, d\\mu(x)\\\\\n& \\leq \\int_{S^{d-1}} \\|x\\|^{p-2}\\, \\|y\\|^{p-2}\\, |\\langle x,y\\rangle|^2 d\\mu(x) \\\\\n&= \\|y\\|^{p-2}\\, \\int_{S^{d-1}} |\\langle x,y\\rangle|^2 d\\mu(x),\n\\end{align*}from which it follows that $$A\\|y\\|^2 \\leq \\int_{S^{d-1}} |\\langle x,y\\rangle|^2 d\\mu(x).$$\n\nIf $\\mu$ is a probabilistic $p$-frame for some $p<2$. Then, for all $y\\neq 0 \\in \\R^d$, \n$$\\|y\\|^{2}=|\\langle Sy,S^{-1}y\\rangle_{\\R^{d}}|=|\\langle F^{*}Fy,S^{-1}y\\rangle_{\\R^{d}}|=|\\langle Fy,FS^{-1}y\\rangle_{L_{p}\\to L_{p'}}|,$$ which can be estimated by \n$$\\|y\\|^{2} \\leq \\|Fy\\|_{L_{p}}\\|FS^{-1}y\\|_{L_{p'}}\\leq C \\|Fy\\|_{L_{2}}\\|y\\|,$$ where we have used the fact that for $p<2$, $L_{2}(S^{d-1}, \\mu) \\subset L_{p}(S^{d-1}, \\mu)$. This conclude the proof of a). \n\n\nb) If $\\mu$ is a probabilistic $p$-frame for some $1\\leq p < \\infty,$ then by a) $\\mu$ is a probabilistic frame. In this case, $\\tilde{\\mu}$ is known to be a probabilistic frame, cf.~\\cite{Ehler:2010aa}, and thus a probabilistic $p$-frame. \n\\end{proof}\n\n\n \n\n\n\nWe are particularly interested in tight probabilistic $p$-frame potentials, which we seek to characterize in terms of minimizers of appropriate potentials. This motivates the following definition: \n\n \\begin{definition}\\label{profframpot}\nFor $00$. One can check that the measure $\\nu$ defined by \n\\begin{equation*}\n\\nu(E) := m\\delta_{y_2}(E)-\\mu(E\\cap K),\\quad E\\in\\mathcal{B},\n\\end{equation*}\nsatisfies $\\nu(S^{d-1})=0$, and $\\mu +\\epsilon \\nu \\geq 0$. Hence, $\\PFP(\\mu,\\nu,p)\\geq 0$. On the other hand, we can estimate\n$$\n\\PFP(\\mu,\\nu,p) = \\int_{S^{d-1}} P_\\mu(y)d\\nu(y)= P_\\mu(y_2)m - \\int_K P_\\mu(y)d\\mu(y)= am - \\int_K P_\\mu(y)d\\mu(y)$$ and so\n\n$$\\PFP(\\mu,\\nu,p) \\leq am - (b-\\frac{b-a}{2})m = - \\frac{b-a}{2}m <0.$$\n\nThis is a contradiction to $\\PFP(\\mu,\\nu,p)\\geq 0$ and implies that there is a constant $C$ such that $P_\\mu(y)=C$, for all $y\\in\\supp(\\mu)$. \nWe still have to verify that the constant $C$ is in fact $\\PFP(p)$: \n\\begin{align*}\n\\PFP(p) = \\PFP(\\mu,p) & = \\int_{S^{d-1}} P_\\mu(y)d\\mu(y) \\\\\n& = \\int_{\\supp(\\mu)} P_\\mu(y)d\\mu(y)\\\\\n& = \\int_{\\supp(\\mu)} C d\\mu(y) = C.\n\\end{align*}\n\nThe proof of $(2)$ is similar to the one above, and so we omit it. \n\\end{proof}\nThe following result is an immediate consequence of Proposition~\\ref{prop:1}. 
\n\n\\begin{corollary}\\label{theorem:tight p frame is necessary}\nLet $00$ and $\\delta_\\varepsilon>0$ such that \n \\begin{itemize}\n \\item[(a)] $B_\\varepsilon(v)\\cap B_\\varepsilon(w)=\\emptyset$ and $\\mu(B_\\varepsilon(v)), \\mu(B_\\varepsilon(w)) \\geq \\delta_\\varepsilon$. \n \\item[(b)] for all $x\\in B_\\varepsilon(v)$ and $y\\in B_\\varepsilon(w)$, $|\\langle x,y\\rangle |^p\\geq |\\langle x,y\\rangle |^2+\\varepsilon$.\n \\end{itemize}\nBy using $B=B_\\varepsilon(v)\\times B_\\varepsilon(w)$, this implies\n\\begin{align*}\n\\PFP(\\mu,p) & = \\int_{B} |\\langle x,y\\rangle|^p d\\mu(x) d\\mu(y) + \\int_{S^{d-1}\\times S^{d-1}\\setminus B } |\\langle x,y\\rangle|^p d\\mu(x) d\\mu(y)\\\\\n& \\geq \\int_{B} (|\\langle x,y\\rangle|^2+\\varepsilon) d\\mu(x) d\\mu(y) + \\int_{S^{d-1}\\times S^{d-1}\\setminus B } |\\langle x,y\\rangle|^2 d\\mu(x) d\\mu(y)\\\\\n& = \\PFP(\\mu,2) + \\varepsilon \\mu(B_\\varepsilon(v)) \\mu(B_\\varepsilon(w))\\\\\n&\\geq \\PFP(\\mu,2) +\\varepsilon \\delta_\\varepsilon^2 > \\PFP(\\mu,2),\n\\end{align*}\n which is a contradiction. Thus, we have verified that $|\\langle x,y\\rangle|\\in \\{0,1\\}$, for all $x,y\\in \\supp(\\mu)$. Distinct elements in $\\supp(\\mu)$ are then either orthogonal to each other or antipodes. According to Corollary \\ref{theorem:tight p frame is necessary}, $\\supp(\\mu)$ is complete in $\\R^d$. Thus, there must be an orthonormal basis $\\{x_i\\}_{i=1}^d$ such that\n \\begin{equation*}\n \\{x_1,\\ldots,x_d\\} \\subset \\supp(\\mu) \\subset \\{\\pm x_1,\\ldots,\\pm x_d\\}.\n \\end{equation*} \nConsequently, there is a density $f:S^{d-1}\\rightarrow\\R$ that vanishes on $S^{d-1}\\setminus \\supp(\\mu)$ such that $\\mu(x)=f(x)\\nu_{\\pm x_1,\\ldots,\\pm x_d}(x)$. \n \nTo verify that $f$ satisfies (ii), let us define $\\tilde{f}:S^{d-1}\\rightarrow \\R$ by \n\\begin{equation*}\n\\tilde{f}(x)=\\begin{cases} f(x)+f(-x),& x\\in\\{x_1,\\ldots,x_d\\}\\\\\n0,& \\text{ otherwise. } \n\\end{cases}\n\\end{equation*}\nThis implies that $\\tilde{\\mu}(x)=\\tilde{f}(x)\\nu_{x_1,\\ldots,x_d}(x)$ is also a minimizer of $\\PFP(\\cdot,2)$. But the minimizers of the probabilistic frame potential for $p=2$ have been investigated in~\\cite[Section 3]{Ehler:2010aa}. We can follow the arguments given there to obtain $\\tilde{f}(x_i)=\\frac{1}{d}$, for all $i=1,\\ldots,d$. \n\\end{proof}\n\n\n \n For even integers $p$, we can give the minimum of $\\PFP(\\mu, p)$ and characterize its minimizers. The following theorem generalizes Theorem \\ref{theorem:p even integer discrete}. Moreover, note that the bounds are now sharp, i.e., for any even integer $p$, there is a probabilistic tight $p$-frame: \n \n \\begin{theorem}\\label{theorem:p even integer}\n Let $p$ be an even integer. For any probability distribution $\\mu$ on $S^{d-1}$, \n \\begin{equation*}\n \\PFP(\\mu, p)=\\int_{S^{d-1}}\\int_{S^{d-1}} |\\langle x,y\\rangle|^p d\\mu(x) d\\mu(y) \\geq \\frac{1\\cdot 3\\cdot 5\\cdots(p-1)}{d(d+2)\\cdots (d+p-2) },\n \\end{equation*}\nand equality holds if and only if $\\mu$ is a probabilistic tight $p$-frame. 
\n \\end{theorem}\n \n\n\n\n \\begin{proof}\n Let $\\alpha=\\frac{d}{2}-1$ and consider the Gegenbauer polynomials $\\{C_{n}^{\\alpha}\\}_{n\\geq 0}$ defined by \n \\begin{equation*}\n C_0^\\alpha(x) = 1, \\qquad C_1^\\alpha(x) = 2 \\alpha x,\n \\end{equation*}\n \\begin{align*}\nC_{n}^\\alpha(x) &= \\frac{1}{n}[2x(n+\\alpha-1)C_{n-1}^\\alpha(x) - (n+2\\alpha-2)C_{n-2}^\\alpha(x)]\\\\\n&= C_n^{(\\alpha)}(z)=\\sum_{k=0}^{\\lfloor n\/2\\rfloor} (-1)^k\\frac{\\Gamma(n-k+\\alpha)}{\\Gamma(\\alpha)k!(n-2k)!}(2z)^{n-2k}.\n\\end{align*}\n$\\{C_{n}^{(\\alpha)}\\}_{n=1}^s$ is an orthogonal basis for the collection of polynomials of degree less or equal to $s$ on the interval $[-1,1]$ with respect to the weight\n\\begin{equation*}\nw(z) = \\left(1-z^2\\right)^{\\alpha-\\frac{1}{2}},\n\\end{equation*} \ni.e., for $m\\neq n$,\n \\begin{equation*}\n \\int_{-1}^1 C_n^{(\\alpha)}(x)C_m^{(\\alpha)}(x)w(x)\\,dx = 0.\n \\end{equation*}\n They are normalized by\n \\begin{equation*}\n \\int_{-1}^1 \\left[C_n^{(\\alpha)}(x)\\right]^2(1-x^2)^{\\alpha-\\frac{1}{2}}\\,dx = \\frac{\\pi 2^{1-2\\alpha}\\Gamma(n+2\\alpha)}{n!(n+\\alpha)[\\Gamma(\\alpha)]^2}.\n \\end{equation*}\nThe polynomials $t^p$, $p$ an even integer, can be represented by means of\n\\begin{equation*}\nt^p=\\sum_{k=0}^p \\lambda_k C^{\\alpha}_k(t).\n\\end{equation*}\nIt is known (see, e.g.,~\\cite{Bachoc:2005aa,Delsarte:1977aa}) that $\\lambda_i> 0$, $i=0,\\ldots,p$, and $\\lambda_0$ is given by\n \\begin{equation*}\n\\lambda_0= \\frac{1}{c}\\int_{-1}^1 t^p w(t) dt,\n \\end{equation*}\n where \n \\begin{equation*}\n c = \\frac{\\pi 2^{d+3}\\Gamma(d-2) }{(\\frac{d}{2}-1)\\Gamma(\\frac{d}{2}-1)^2}.\n \\end{equation*}\n Moreover, $C^\\alpha_k$ induces a positive kernel, i.e., for $\\{x_i\\}_{i=1}^N\\subset S^{d-1}$ and $\\{u_i\\}_{i=1}^N\\subset \\R$,\n \\begin{equation*\n \\sum_{i,j=1}^{N} u_iC^{\\alpha}_k (\\langle x_i,x_j\\rangle )u_j \\geq 0, \\quad \\forall k=0,1,2,...\n \\end{equation*}\n see~\\cite{Bachoc:2005aa,Delsarte:1977aa}. Note that the probability measures with finite support are weak star dense in $\\mathcal{M}(S^{d-1},\\mathcal{B})$. Since $C^{\\alpha}_k$ is continuous, we obtain, for all $\\mu\\in \\mathcal{M}(S^{d-1},\\mathcal{B})$, \n \\begin{equation*} \n \\int_{S^{d-1}} \\int_{S^{d-1}} C^{\\alpha}_k (\\langle x,y\\rangle )d\\mu(x) d\\mu(y) \\geq 0, \\quad \\forall k=0,1,2,...\n \\end{equation*}\nWe can then estimate\n\\begin{align*}\n\\int_{S^{d-1}} \\int_{S^{d-1}} |\\langle x,y\\rangle|^p d\\mu(x) d\\mu(y) & = \\int_{S^{d-1}} \\int_{S^{d-1}}\\sum_{k=0}^p \\lambda_k C^{\\alpha}_k (\\langle x,y\\rangle )d\\mu(x) d\\mu(y)\\\\\n& = \\sum_{k=0}^p \\lambda_k \\int_{S^{d-1}} \\int_{S^{d-1}}C^{\\alpha}_k (\\langle x,y\\rangle ) d\\mu(x) d\\mu(y) \\geq \\lambda_0.\n\\end{align*}\nFrom the results in \\cite{Seidel:2001aa}, one can deduce that \n \\begin{equation*}\n\\lambda_0= \\frac{1\\cdot 3\\cdot 5\\cdots(2t-1)}{d(d+2)\\cdots (d+2t-2) },\n \\end{equation*}\nwhich provides the desired estimate.\n\nWe still have to address the ``if and only if'' part. Equality holds if and only if $\\mu$ satisfies \n \\begin{equation*}\n \\int_{S^{d-1}}\\int_{S^{d-1}} C^{\\alpha}_k (\\langle x,y\\rangle ) d\\mu(x) d\\mu(y) = 0, \\quad \\forall k=1,\\ldots, p. \n \\end{equation*}\nWe shall follow the approach outlined in \\cite{Venkov:2001aa} in which the analog of Theorem~\\ref{theorem:p even integer discrete} was addressed for finite symmetric collections of points. 
In this case, the finite symmetric sets of points lead to finite sums rather than integrals as above. The key ideas that we need in order to use the approach presented in \\cite{Venkov:2001aa} are: First, $\\tilde{\\mu}(E):=\\frac{1}{2}(\\mu(E)+\\mu(-E))$, for $E\\in\\mathcal{B}$, satisfies $\\PFP(\\tilde{\\mu},p) = \\PFP(\\mu,p)$. Thus, we can assume that $\\mu$ is symmetric. Secondly and more critically, the map \n\\begin{equation*}\ny\\mapsto \\int_{S^{d-1}} |\\langle x,y\\rangle |^p d\\mu(x)\n\\end{equation*}\nis a polynomial in $y$. In fact, the integral resolves in the polynomial's coefficients. These two observations enable us to follow the lines in \\cite{Venkov:2001aa}, and we can conclude the proof.\n\\end{proof}\n \n \\begin{remark}\nOne may speculate that Theorem \\ref{theorem:p even integer} could be extended to $p\\geq 2$ that are not even integers. This is not true in general. For $d=2$ and $p=3$, for instance, the equiangular FUNTF with $3$ elements induces a smaller potential than the uniform distribution. The uniform distribution is a probabilistic tight $3$-frame, but the equiangular FUNTF is not.\n\\end{remark}\n\n\n\n\n\\section*{Acknowledgements}\nThe authors would like to thank C.~Bachoc, W.~Czaja, C.~Wickman, and W.~S.~Yu for discussions leading to some of the results presented here. M.~Ehler was supported by the Intramural Research Program of the National Institute of Child Health and Human Development and by NIH\/DFG Research Career Transition Awards Program (EH 405\/1-1\/575910). K.~A.~Okoudjou was partially supported by ONR grant N000140910324, by RASA from the Graduate School of UMCP, and by the Alexander von Humboldt foundation. \n\n\n\n\n\\bibliographystyle{plain}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\thispagestyle{empty}\n\nOne equivalent characterization of the amenability of an infinite group $G$, called the \\textit{F{\\o}lner condition}, is that the isoperimetric constant (also known as Cheeger constant) of its Cayley graph should be $0$.\nThat constant is defined as the infimum of $\\frac{|\\partial F|}{|F|}$ over all finite sets $F\\subset G$ with $|F|\\leq\\frac{1}{2}|G|$.\nAs the quotient cannot reach $0$, amenability of infinite groups is therefore characterized by the existence of a sequence of sets $F_n$ such that $\\frac{|\\partial F_n|}{|F_n|}$ converges towards $0$, also known as a \\textit{F{\\o}lner sequence}.\nOne natural direction for studying the possible F{\\o}lner sequences on a given group is to ask how small the sets can be.\nWe consider the F{\\o}lner function.\nIt has classically been defined using the inner boundary:\n\\begin{equation}\\label{defdin}\n\t\\partial_{in}F=\\left\\{g\\in F:\\exists s\\in S\\bigcup S^{-1}:gs\\notin F\\right\\}.\n\\end{equation}\n\\begin{defi}\\label{foldef}\n\tThe \\textit{F{\\o}lner function} $\\Fol$ (or $\\Fol_S$; or $\\Fol_{G,S}$) of a group $G$ with a given finite generating set $S$ is defined on $\\N$ by\n\t$$\\Fol(n)=\\min\\left(|F|:F\\subset G,\\frac{|\\partial_{in}F|}{|F|}\\leq\\frac{1}{n}\\right).$$\n\\end{defi}\n\nRemark that $\\Fol(1)=1$ and that the values of the function are finite if and only if $G$ is amenable.\nIts values clearly depend on the choice of a generating set, but the functions arising from different generating sets (and more generally, functions arising from quasi-isometric spaces) are asymptotically equivalent.\nTwo functions are asymptotically equivalent if there are constants $A$ and $B$ such that 
$f(x\/A)\/B\\lambda)$.\nThen for all finite sets $F$:\n\n$$\\frac{|\\partial_{in}F|}{|F|}\\geq\\frac{1}{8|S|\\phi(2|F|)}.$$\n\\end{thm}\nThe multiplicative constants can improved (see G\u00e1bor Pete~\\cite[Theorem~5.11]{pete}, Bruno Luiz Santos Correia~\\cite{csc2020}):\n\\begin{equation}\\label{csceq}\n\t\\frac{|\\partial_{in}F|}{|F|}\\geq\\frac{1}{2\\phi(2|F|)}.\n\\end{equation}\nThe result of Santos Correia is also announced for finite groups for $|F|\\leq\\frac{1}{2}|G|$.\nSantos Correia and Troyanov~\\cite{csc2021} show:\n\\begin{equation}\\label{csceqlam}\n\t\\frac{|\\partial_{in}F|}{|F|}\\geq\\left(1-\\frac{1}{\\lambda}\\right)\\frac{1}{\\phi(\\lambda|F|)}\n\\end{equation}\nfor $1<\\lambda\\leq\\frac{|G|}{|S|}$ (in particular, for arbitrarily large $\\lambda$ if $G$ is infinite).\nThe result is replicated in~\\cite{csccollab} without an upper bound on $\\lambda$.\n\nThe Coulhon and Saloff-Coste inequality (Theorem~\\ref{csc}) implies in particular that for a group with exponential growth, the F{\\o}lner function must also grow at least exponentially.\nIt can be obtained then that it is exactly exponential if one can describe F{\\o}lner sets with exponential growth.\nOne simple example is the lamplighter group $\\Z\\wr\\Z\/2\\Z$ with the standard generating set (\\ref{defst}).\nSimilarly, it is known that the F{\\o}lner functions of groups with polynomial growth are polynomial (see for example~\\cite[Section~I.4.C]{woess2000random}).\nAnother inequality on group isoperimetry is given by \u017buk~\\cite{Zuk2000}.\nVershik~\\cite{vershik1973countable} asks if F{\\o}lner function can be super-exponential, initiating the study of F{\\o}lner functions.\nHe suggests studying the wreath product $\\Z\\wr\\Z$ as a possible example.\nPittet~\\cite{Pittet1995} shows that the F{\\o}lner functions of polycyclic groups are at most exponential (and are therefore exponential for polycyclic groups with exponential growth).\nThis is true more generally for solvable groups of finite Pr\u00fcfer rank, see~\\cite{Pittet2003} and~\\cite{Kropholler2020}.\nThe first example of a group with super-exponential F{\\o}lner function is obtained by Pittet and Saloff-Coste~\\cite{PittetSaloffCoste} for $\\Z^d\\wr\\Z\/2\\Z$ with $d\\geq3$.\nLater the F{\\o}lner functions of wreath products with certain regularity conditions are described by Erschler~\\cite{Erschler2003} up to asymptotic equivalence.\nSpecifically, say that a function $f$ verifies property $(*)$ if for all $C>0$ there is $k>0$ such that $f(kn)>Cf(n)$.\nThe result of \\cite{Erschler2003} than states that if the F{\\o}lner function of a group $A$ verifies property $(*)$ (for some fixed generating set), then for any non-trivial group $B$, the F{\\o}lner function of $A\\wr B$ is $\\Fol_{A\\wr B}(n)=\\Fol_B(n)^{\\Fol_A(n)}$.\n\nOther examples with know F{\\o}lner functions have been presented by Gromov~\\cite[Section~8.2,Remark~(b)]{Gromov2009} for all functions with sufficiently fast growing derivatives.\nSaloff-Coste and Zheng~\\cite{saloffcostezheng} provide upper and lower bounds for it on, among others, \"bubble\" groups and cyclic Neumann-Segal groups, and those two bounds are asymptotically equivalent under certain conditions.\nRecently, Brieussel and Zheng~\\cite{BrieusselZheng} show that for any function $g$ that can be written as the inverse function of $x\/f(x)$ for some non-decreasing $f$ with $f(1)=1$ and $x\/f(x)$ also non-decreasing, there is a group the F{\\o}lner function of which is asymptotically equivalent to $\\exp(g(n))$.\nErschler 
and Zheng~\\cite{Erschler2017} obtain examples for a class of super-exponential functions under $\\exp(n^2)$ with weaker regularity conditions.\nSpecifically, for any $d$ and any non-decreasing $\\tau$ such that $\\tau(n)\\leq n^d$, there is a group $G$ and a constant $C$ such that\n\\begin{equation}\\label{annatyani}\n\tCn\\exp(n+\\tau(n))\\geq\\Fol_G(n)\\geq\\exp(\\frac{1}{C}(n+\\tau(n\/C))).\n\\end{equation}\nThe left-hand side of this inequality is always asymptotically equivalent to $\\exp(n+\\tau(n))$, and it suffices therefore that the right-hand side be asymptotically equivalent to that function to have a description of the F{\\o}lner function of $G$.\nNotice in particular that if $\\tau$ verifies condition $(*)$, this is verified.\nRemark that the conditions we mentioned only consider functions at least as large as $\\exp(n)$; it is an open question whether a F{\\o}lner function can have intermediate growth (see Grigorchuk~\\cite[Conjecture~5(ii)]{grigsurvey}).\nA negative answer would imply the Growth Gap Conjecture~\\cite[Conjecture~2]{grigsurvey}, which conjectures that the volume growth function must be either polynomial or at least as fast as $\\exp(\\sqrt{n})$.\nThose conjectures also have weak versions, which are equivalent to each other (see discussion after Conjecture~6 in~\\cite{grigsurvey}).\n\n\\section{Statement of results}\\label{statesect}\n\nIn this paper, we obtain the exact values of the F{\\o}lner function for the standard generating set $S=\\{t,\\delta\\}$ (see~(\\ref{defst})) on the lamplighter group $\\Z\\wr\\Z_2$ (see Definition~\\ref{deflamp}) where by $\\Z_2$ we denote $\\Z\/2\\Z$.\n\n\\begin{thm}\\label{thmmain}\nFor $n\\geq2$, the F{\\o}lner function of the lamplighter group $\\Z\\wr\\Z_2$ is, for the standard generating set: \n$$\\Fol(n)=2n2^{2(n-1)}.$$\n\\end{thm}\n\nWe also describe the sets that give rise to this function.\nSpecifically, we obtain that the standard F{\\o}lner sets $F_n=\\{(k,f):k\\in[\\![1,n]\\!],\\supp(f)\\subset[\\![1,n]\\!]\\}$ are optimal (see Definition~\\ref{optimal}) for the outer and edge boundary (see Section~\\ref{prel1}).\nWe then show that by Lemma~\\ref{equiv}, $F_n\\bigcup\\partial_{out}F_n$ is optimal for the inner boundary, from which Theorem~\\ref{thmmain} follows:\n\n\\begin{thm}\\label{thmlamp}\nConsider the lamplighter group $\\Z\\wr\\Z_2$ with the standard generating set.\n\\begin{enumerate}\n\t\\item For any $n\\geq2$ and any $F\\subset\\Z\\wr\\Z_2$ such that $|F|\\leq|F_n|$, we have\n\t$$\\frac{|\\partial_{edge}F|}{|F|}\\geq\\frac{|\\partial_{out}F|}{|F|}\\geq\\frac{|\\partial_{out}F_n|}{|F_n|}=\\frac{|\\partial_{edge}F_n|}{|F_n|},$$\n\tand if $|F|<|F_n|$, the inequality $\\frac{|\\partial_{out}F|}{|F|}>\\frac{|\\partial_{out}F_n|}{|F_n|}$ is strict,\n\t\\item From point $(1)$ it follows that for any $n\\geq2$ and any $F\\subset\\Z\\wr\\Z_2$ such that $|F|\\leq|F_n\\bigcup\\partial_{out}F_n|$, we have\n\t$$\\frac{|\\partial_{in}F|}{|F|}\\geq\\frac{|\\partial_{in}(F_n\\bigcup\\partial_{out}F_n)|}{|F_n\\bigcup\\partial_{out}F_n|},$$\n\tand if $|F|<|F_n\\bigcup\\partial_{out}F_n|$, the inequality is strict.\n\\end{enumerate}\nFurthermore, the sets that give equality are unique up to translation.\n\\end{thm}\n\nWe can substitute those values in the Coulhon and Saloff-Coste inequality in order to study the multiplicative constant.\nAs in~\\cite{csccollab}, we define\n\\begin{defi}\\label{constq}\nFor a group $G$ and a generating set $S$, denote\n$$C_{G,S}=\\sup\\left\\{c\\geq0:\\exists\\alpha\\geq0\\mbox{ such that 
}\\forall F\\subset G,\\frac{|\\partial_{in}F|}{|F|}\\geq c\\frac{1}{\\phi((1+\\alpha)|F|)}\\right\\},$$\nwhere $F$ is assumed to be finite and non-empty.\n\\end{defi}\nThe original inequality obtains that for all $G,S$, $C_{G,S}\\geq\\frac{1}{8|S|}$.\nThe results of \\cite[Theorem~5.11]{pete} and \\cite{csc2020} that we cited as Equation~\\ref{csceq} give a lower bound of $\\frac{1}{2}$.\nEquation~\\ref{csceqlam} from~\\cite{csc2021} further implies that $C_{G,S}\\geq1$ for all $G,S$.\n\nIn \\cite{csccollab}, it is shown that for groups of exponential growth:\n$$C_{G,S}=\\frac{\\liminf\\frac{\\ln\\Fol(n)}{n}}{\\lim\\frac{\\ln V(n)}{n}}.$$\n\\begin{prop}\\label{const}\nThe lamplighter group verifies\n\n$$C_{\\Z\\wr\\Z_2,S}=\\frac{\\lim\\frac{\\ln\\Fol(n)}{n}}{\\lim\\frac{\\ln V(n)}{n}}=\\frac{\\ln4}{\\ln(\\frac{1}{2}(1+\\sqrt{5}))}\\approx2,88$$\nfor the standard generating set. \n\\end{prop}\n\n\\begin{remark}\\label{const2}\nFor the switch-walk-switch generating set $S'=\\{t,\\delta,t\\delta,\\delta t,\\delta t\\delta\\}$, we have\n\n$$C_{\\Z\\wr\\Z_2,S'}=\\frac{\\liminf\\frac{\\ln\\Fol_{sws}(n)}{n}}{\\lim\\frac{\\ln V_{sws}(n)}{n}}\\leq2.$$\n\\end{remark}\n\nAnother direction that can be considered once one has exact evaluations of a F{\\o}lner function is studying the power series $\\sum_n\\Fol(n)x^n$.\nThe equivalent series have been studied for volume growth (see Grigorchuk-de la Harpe~\\cite[Section~(4)]{Grigorchuk1997}).\nOne central question that a lot of authors have considered is the rationality of those series as a function.\nFor the example shown here, the power series of the F{\\o}lner function is a rational function:\n$$\\sum_{n\\in\\N}\\Fol(n)x^n=\\frac{2x}{(4x-1)^2}.$$\n\nWe also obtain results for the Baumslag-Solitar group $BS(1,2)$ (see Definition~\\ref{defbs}), however only in respect to the edge boundary.\nTaking the notation from the definition, its standard sets are defined the same way as in the lamplighter group.\n\\begin{thm}\\label{bsthm}\nConsider the Baumslag-Solitar group $BS(1,2)$ with the standard generating set.\nThen for any $n\\geq2$ and any $F\\subset BS(1,2)$ such that $|F|\\leq|F_n|$, we have $\\frac{|\\partial_{edge}F|}{|F|}\\geq\\frac{|\\partial_{edge}F_n|}{|F_n|}$ (where $F_n$ are the standard F{\\o}lner sets), and if $|F|<|F_n|$, the inequality is strict.\n\\end{thm}\nThis result is not always true for $B(1,p)$ for larger $p$, and we provide a counter example in Example~\\ref{exbsp}.\nHowever this counter example uses that $p$ is significant when compared to the length of the interval defining the standard set, and it is possible that for $B(1,p)$ as well, standard sets are optimal above a certain size.\n\nWe present more detailed definitions in the next section.\nIn Section~\\ref{prelim2}, we present associated graphs, which are the main tool of the proof, and prove some general results.\nIn particular, we show Lemma~\\ref{equiv}, which will be used to obtain that part $(2)$ of Theorem~\\ref{thmlamp} follows from part $(1)$.\nIn Section~\\ref{mainsection}, we prove Theorem~\\ref{thmlamp}.\nIn Section~\\ref{csc-const-sect}, we prove Proposition~\\ref{const} and Remark~\\ref{const2}.\nFinally, in Section~\\ref{bssect}, we prove Theorem~\\ref{bsthm} and Example~\\ref{exbsp}.\n\n\\section{Preliminaries}\\label{prel1}\n\nThe concept of amenability finds its origins in a 1924 result by Banach and Tarski~\\cite{Banach-Tarski-original}, where they decompose a solid ball in $\\R^3$ into five pieces, and reassemble them into two balls using 
rotations.\nThat is now called the Banach-Tarski paradox.\nThe proof makes use of the fact that the group of rotations of $\\R^3$ admits a free subgroup.\nVon Neumann~\\cite{Neumann1929} considers it as a group property and introduces the concept of amenable groups.\nNowadays, there are multiple different characterizations of amenability; see books by Greenleaf~\\cite{greenleaf} and Wagon~\\cite{Banach-Tarski}, or an article by Ceccherini-Silberstein-Grigorchuk-la~Harpe~\\cite{MR1721355}, or a recent survey by Bartholdi~\\cite{bartholdi}.\n\n\\begin{defi}[F{\\o}lner criterion]\\label{folamdef}\n\tA group $G$ is amenable if and only if for every finite set $S\\subset G$ and every $\\varepsilon>0$ there exists a set $F$ such that\n\t\n\t$$|F\\Delta F.S|\\leq\\varepsilon|F|.$$\n\\end{defi}\n\nIf $G$ is finitely generated, it suffices to consider a single generating set $S$ instead of all finite sets.\nWe can also apply Definition~\\ref{folamdef} for $S\\bigcup S^{-1}\\bigcup\\{\\Id\\}$.\nThen $|F\\Delta(S\\bigcup S^{-1}\\bigcup\\{\\Id\\}).F|$ is the set of vertices in the Cayley graph of $G$ that are at a distance exactly $1$ from $F$.\nWe denote that the outer boundary $\\partial_{out}F$.\nThen the condition can be written as $\\frac{|\\partial_{out}F_n|}{|F_n|}\\leq\\varepsilon$, or in other words that the infimum of those quotients should be $0$.\nRecall the definition (\\ref{defdin}) of the inner boundary.\nFinally, we consider $\\partial_{edge}F$ to be the set of edges between $F$ and its complement.\nRemark that while those values can differ, whether the infimum of $\\frac{|\\partial F|}{|F|}$ is $0$ or not does not depend on which boundary we consider.\n\nFor groups of subexponential growth, for every $\\varepsilon$, there is some $n$ such that the ball around the identity of radius $n$ is a corresponding F{\\o}lner set.\nNote that to obtain a F{\\o}lner sequence from this, one needs to consider a subsequence of the sequence of balls of radius $n$.\nIt is an open question whether in every group of subexponential growth, all balls form a F{\\o}lner sequence.\nFor groups of exponential growth, it is generally not sufficient to consider balls, and it is an open question whether there exists any group of exponential growth where some subsequence of balls forms a F{\\o}lner sequence (see for example Tessera~\\cite[Question~15]{Tessera2007}).\n\nFor two groups $A$ and $B$ and a function $f\\in B^A$, denote\n$$\\supp(f)=\\{a\\in A:f(a)\\neq\\Id_B\\}.$$\nLet $B^{(A)}$ be the set of functions from $A$ onto $B$ with finite support.\n\\begin{defi}\\label{deflamp}\n\tThe (restricted) wreath product $A\\wr B$ is the semidirect product $A\\ltimes B^{(A)}$ where $A$ acts on $B^{(A)}$ by translation.\n\\end{defi}\nWe can write the elements as $(a,f)$ with $a\\in A$ and $f\\in B^{(A)}$.\nThe group law is then $(a,f)(a',f')=(aa',x\\mapsto f(x)f'(a^{-1}x))$.\n\nGiven generating sets $S$ and $S'$ on $A$ and $B$ respectively, we can define a standard generating set on $A\\wr B$.\nIt consists of the elements of the form $(s,\\mathbb{Id_B})$ for $s\\in S$ (where $\\mathbb{Id_B}=\\Id_B$ for all $x\\in A$), as well as $(\\Id_A,\\delta_{\\Id_A}^{s'})$ for $s'\\in S'$ where $\\delta_{\\Id_A}^{s'}(\\Id_A)=s'$ and $\\delta_{\\Id_A}^{s'}(x)=\\Id_B$ for all other $x$.\nOne can verify that $(a,f)(s,\\mathbb{Id_B})=(as,f)$, and $(a,f)(\\Id_A,\\delta_{\\Id_A}^{s'})=(a,f+\\delta_a^{s'})$, or in other words the value of $f$ at the point $a$ is changed by $s'$.\n\nSimilarly, given F{\\o}lner sets $F_A$ and $F_B$ on $A$ 
and $B$ respectively, one obtains standard F{\\o}lner sets on $A\\wr B$: \n$$F=\\{(a,f):a\\in F_A,\\supp(f)\\subset F_A,\\forall x:f(x)\\in F_B\\}.$$\nTheir outer boundary is\n\\begin{equation*}\n\\begin{split}\n\\partial_{out}F&=\\{(a,f):a\\in\\partial_{out}F_A,\\supp(f)\\subset F_A,\\forall x:f(x)\\in F_B\\}\\\\\n&\\cup\\{(a,f):a\\in F_A,\\supp(f)\\subset F_A,f(a)\\in\\partial_{out}F_B,\\forall x\\neq a:f(x)\\in F_B\\}.\n\\end{split}\n\\end{equation*}\nAs $|F|=|F_A||F_B|^{|F_A|}$ and $|\\partial_{out}F|=|\\partial_{out}F_A||F_B|^{|F_A|}+|F_A||F_B|^{|F_A|-1}|\\partial_{out}F_B|$, we have\n$$\\frac{|\\partial_{out}F|}{|F|}=\\frac{|\\partial_{out}F_A|}{|F_A|}+\\frac{|\\partial_{out}F_B|}{|F_B|}.$$\n\nWe will focus on the lamplighter group $\\Z\\wr\\Z_2$.\nAs both of those groups have standard generating sets, this gives us a standard generating set on the lamplighter group:\n\n\\begin{equation}\\label{defst}\nS=\\{t,\\delta\\}\\mbox{ where }t=(1,\\mathbbold{0})\\mbox{ and }\\delta=(0,\\delta^1_0).\n\\end{equation}\n\nThe Baumslag-Solitar groups are defined as follows:\n\\begin{defi}\\label{defbs}\n\tThe Baumslag-Solitar group $BS(m,n)$ is the two-generator group given by the presentation $\\langle a,b:a^{-1}b^ma=b^n\\rangle$.\n\\end{defi}\nThe standard generating set is $\\{a,b\\}$.\n\nWe will focus on the groups $BS(1,p)$.\nThat group is isomorphic to the group generated by $x\\mapsto px$ and $x\\mapsto x+1$ (by mapping $a^{-1}$ and $b$ to them respectively).\nBy abuse of notation, we will also denote the images of $a$ and $b$ with the same letters.\nIn that group, any element can be written as $x\\mapsto p^nx+f$ with $n\\in\\Z$ and $f\\in\\Z[\\frac{1}{p}]$.\nWe then have $(x\\mapsto p^nx+f)a=x\\mapsto p^{n-1}x+f$ and $(x\\mapsto p^nx+f)b=x\\mapsto p^nx+(f+p^n)$.\n\nRemark that the subgroup $N=\\{x\\mapsto x+f:f\\in\\Z[\\frac{1}{p}]\\}$ is normal.\nIndeed, we have $(x\\mapsto p^{-n}(x-f))\\circ(x\\mapsto p^nx+f)=\\Id$ and\n$$(x\\mapsto p^{-n}(x-f'))\\circ(x\\mapsto x+f)\\circ(x\\mapsto p^nx+f')=x\\mapsto x+p^nf.$$\nThus $BS(1,p)$ is isomorphic to the semidirect product $\\Z\\ltimes\\Z[\\frac{1}{p}]$ defined by the action $n.f=p^nf$.\nWe therefore write the element $x\\mapsto p^nx+f$ as $(n,f)$.\nThe standard F{\\o}lner sets are then expressed in the same way as for wreath products.\nIn other words:\n$$F_n=\\{(k,f):k\\in[\\![0,n-1]\\!],f\\in\\Z,0\\leq f2^n$, the result follows immediately.\nAssume that $|V(\\overline{G})|\\leq2^n$, and thus $c_K\\leq n$, and $c_i\\leq n-K+i$.\nThen\n$$|E(\\overline{G})|=\\sum_{i=1}^Kc_i2^{c_i}+\\sum_{io(n)=n-1$.\n\nThus $|V(G)|=n'\\leq n$.\nFrom Lemma~\\ref{baums} we obtain $|E(G)|\\leq e(n')$.\nWe have:\n\n$$\\frac{|V(G)|+o(G)}{|E(G)|+o(G)}\\geq\\frac{n'+o(n')+(o(G)-o(n'))}{e(n')+o(n')+(o(G)-o(n'))}\\geq\\frac{n'+o(n')}{e(n')+o(n')}\\geq\\frac{n+o(n)}{e(n)+o(n)}.$$\n\\end{proof}\n\nThe large inequality of Theorem~\\ref{bsthm} follows directly from this corollary and Lemma~\\ref{comparebs}.\nThe strict inequality is obtained by noticing that if $|V(\\overline{\\widetilde{F}})|+o(\\overline{\\widetilde{F}})$4 is not \nenough \\citep{Fan06,Cow09,HM12} and the number of faint AGNs at high redshifts is \nstill not well constrained. Based on X-ray samples, at low luminosities in this redshift \nrange, the space density of obscured AGNs is at least two times higher than the unobscured \npopulation \\citep{Mar16b}, indicating that optically selected luminosity functions (LFs) \ncould only be a lower limit. 
\n\nIn order to answer the question of whether faint AGNs can contribute to the ionizing \nultraviolet background (UVB), three aspects need to be quantified: (i) the exact level \nof the UVB; (ii) the fraction of ionizing radiation escaping these sources \n({\\it $f_{esc}$}); and (iii) the faint slope of the AGN LF. \n\nObservations of the ionizing UVB intensity in the redshift range 2$4$ becomes crucial in \naccurately determining the level of this contribution.\n\nCurrently the consensus is that the LF for bright AGNs is well constrained, showing a \npeak at $z\\sim$3 and then rapidly declining \\citep{Bon07,Cro09}. However, for $z>$3 the \ndebate is still open, with various studies presenting contradicting results. Works \npresented by \\citet{Ike11} and \\citet{Gli11} suggest that the number of faint AGNs \nat $z>$3 is higher than expected, producing a steeper slope at the faint end of the LF. \nBut although the faint-end slopes are similar, the normalization factor $\\Phi^{*}$ derived \nby \\citet{Gli11} is three times higher than what calculated by \\citet{Ike11} and \nsubsequently reproduced by other studies \\citep[i.e][]{Mas12,Aki18}. These latter \nstudies report a strong decline in AGN numbers going from z=3 to z=4. In other words, \nthere is still wide disagreement on the actual shape and normalization of the \nLF at $z\\sim$4.\n\nWork by \\citet{Gia15}, including photometric and spectroscopic redshifts of X-ray-selected \nAGN candidates in the CANDELS GOODS-South region, has shown that at $z>$4 the probed AGN \npopulation could produce the necessary ionization rate to keep the IGM highly ionized \n\\citep{MH15}. This result is still controversial, with recent works claiming the opposite \n\\citep[i.e.][]{DAl17,Ric17,Aki18,Has18,Par18}. In fact, so far, the optical LFs at this \nredshift range and luminosities are based on a handful of spectroscopically confirmed \nsources (e.g., eight for \\citet{Ike11} and five for \\citet{Gia15}). Since the bulk of \nionizing photons come from AGNs close to L$^{*}$, it is mandatory to measure their LF \nat $z>$4 in this luminosity range. For this reason, we started a pilot study in the \nCOSMOS field, ideal for this kind of analysis thanks to its multi-wavelength catalog, \nX-ray, and radio coverage, which allows us to robustly select our AGN candidates. Here \nwe present the bright part of our spectroscopically confirmed sample of \nintermediate-\/low-luminosity AGNs, reaching an absolute magnitude of M$_{1450}$=-23 \nand discuss a robust determination of the space density at $z\\sim$4.\n\nThroughout the paper we adopt the $\\Lambda$ cold dark matter ($\\Lambda$CMD) concordance \ncosmological model (H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\\Omega_{M}$ = 0.3, and \n$\\Omega_{\\Lambda}$ = 0.7). All magnitudes are in the AB system.\n\n\\section{AGN Candidate Selection} \nThe selection of our sample is based on: (i) photometric redshifts, (ii) color-color \nselection, and (iii) X-ray emission. 
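Schematically, a source enters the candidate list if it satisfies at least one of the three criteria. The short sketch below (Python) illustrates this union, anticipating the magnitude and redshift thresholds quoted in the remainder of this section; the catalog column names and the precomputed boolean flag for the color locus are placeholders rather than the actual selection pipeline.
\\begin{verbatim}
import numpy as np

def select_candidates(cat):
    # cat: dict of numpy arrays; column names are illustrative only
    photoz = (cat["z_phot"] >= 3.0) & (cat["z_phot"] <= 5.0) & (cat["i_AB"] < 23.0)
    color  = cat["in_locus"] & (cat["i_AB"] <= 23.0)   # (B_J-V_J, r-i) cut given below
    xray   = cat["xray_det"] & (cat["z_phot_x"] >= 3.0) & (cat["i_AB"] < 23.0)
    return photoz | color | xray    # union: at least one criterion satisfied
\\end{verbatim}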
\n\n\\begin{deluxetable*}{ccccccccc}\n\\tablecaption{Color-Color Candidates \\label{tab:colorsel}}\n\\tablehead{\n\\colhead{ID} & \\colhead{R.A.} &\n\\colhead{Decl.} & \\colhead{i$_{AB}$} & \\colhead{$z_{phot}$} &\n\\colhead{$z_{spec}$} & \\colhead{(B$_{J}$-V$_{J}$)} &\n\\colhead{(r-i)} & \\colhead{X-ray}\\\\}\n\\startdata\n658294\\tablenotemark{*} & 149.467350 &1.855592 &21.056&-1.000 & 4.174 &1.40&0.25 & no \\\\\n1856470\\tablenotemark{*}& 150.475680 &2.798362 &21.282&0.000 & 4.110 &1.42&0.32 & yes \\\\\n1581239& 150.746170 &2.674495 &21.556&0.293 & -1.000&1.77&0.48 & no \\\\\n507779& 150.485630 &1.871927 &22.034&0.605 & 4.450 &4.94&0.55 & yes \\\\\n38736\\tablenotemark{*}& 150.732540 &1.516127 &22.088&-1.000 & 4.183 &1.69&0.64 & no \\\\\n1226535& 150.100980 &2.419435 &22.325&0.480 & 4.637 &1.68&0.43 & yes \\\\\n422327 & 149.701500 &1.638375 &22.409&0.343 & 3.201 &1.54&0.14 & no \\\\\n664641\\tablenotemark{*} & 149.533720 &1.809260 &22.436&0.338 & 3.986 &1.69&0.30 & no \\\\\n1163086\\tablenotemark{*}& 150.703770 &2.370019 &22.444&-1.000 & 3.748 &1.44&0.25 & yes \\\\\n330806\\tablenotemark{*} & 150.107380 &1.759201 &22.555&3.848 & 4.140 &1.48&0.30 & yes \\\\\n344777 & 150.188180 &1.664540 &22.634&0.392 & -1.000&1.89&-0.44 & no \\\\\n1450499& 150.115830 &2.563627 &22.685&0.280 & 3.355 &1.94&0.63 & no \\\\\n1687778& 150.006940 &2.779943 &22.715&0.437 & -1.000&1.96&0.44 & no \\\\\n96886 & 150.289380 &1.559480 &22.765&3.860 & -1.000&1.77&0.27 & no \\\\\n1573716& 150.729200 &2.739130 &22.783&0.376 & -1.000&1.35&0.48 & no \\\\\n346317 & 150.205950 &1.654837 &22.800&0.352 & -1.000&1.450&-0.21 & no \\\\\n1257518& 150.025190 &2.371214 &22.810&0.241 & -1.000&1.60&0.34 & no \\\\\n1322738& 149.444050 &2.424602 &22.839&0.428 & -1.000&1.92&0.71 & no \\\\\n1663056& 150.185000 &2.779340 &22.862&3.658 & -1.000&2.29&0.52 & no \\\\\n1719143& 149.755390 &2.738555 &22.873&-1.000 & 3.535 &1.76&0.23 & yes \\\\\n125420 & 150.222680 &1.510574 &22.898&0.181 & -1.000&1.83&0.53 & no \\\\\n867305 & 149.446230 &2.115336 &22.950&0.651 & -1.000&2.11&0.71 & no \\\\\n612661 & 149.838500 &1.829048 &23.011&4.229 & 4.351 &1.93&0.60 & no \\\\\n\\enddata\n\\tablenotetext{*}{Used for the LF}\n\\end{deluxetable*}\n\nWe use the photometric catalog and redshifts presented by \\citet{Ilb09}. This is a 30-band \ncatalog, spanning from NUV photometry to IRAC data, with calculated $z_{phot}$ in a \nregion covering 1.73 deg$^{2}$ in COSMOS. The reported $z_{phot}$ dispersion \nis $\\sigma_{(\\Delta z)\/(z_{s}+1)}$=0.007 at i$_{AB}<$22.5 and increases to \n$\\sigma_{(\\Delta z)\/(z_{s}+1)}$=0.012 at i$_{AB}<$24. As discussed in \\citet{Ilb09}, \ntheir $z_{phot}$ determination is mostly based on galaxy templates. However, as showed \nby \\citet{Gia15}, for z$>$4 the accuracy on the photometric redshift estimate is weakly \ndependent on the adopted spectral libraries but it is mainly driven by the Lyman break \nfeature at rest frame wavelength (912\\AA). To take into account possible larger errors \non photometric redshifts for the AGN population, we extended the redshift interval. Thus, \nwe obtained a list of 42 candidates that have a photometric redshift estimate in the \ninterval $3.0\\leq z_{phot}\\leq 5.0$ and a magnitude i$_{AB}<$23.0.\n\nTo increase our selection efficiency and mitigate shortcomings of the $z_{phot}$ technique, \nwe include a color criterion. 
Since we have a wide number of bands available, initially \nwe explored various combinations of color-color selections, i.e., (B$_{J}$-V$_{J}$) versus \n(r-i), (B$_{J}$-r) versus (r-i), (g-r) versus (r-i), (u$_{*}$-B$_{J}$) versus (r-i), and \n(u$_{*}$-g) versus (r-i). Cross-correlating those candidates with known AGNs from the \nliterature, and after exploratory spectroscopy with \nLDSS-3\\footnote{http:\/\/www.lco.cl\/telescopes-information\/magellan\/instruments\/ldss-3}, \nfor this pilot study we narrowed down our selection to the most promising criterion, \ni.e. (B$_{J}$-V$_{J}$) versus (r-i). In the (B$_{J}$-V$_{J}$) versus (r-i) color-color \ndiagram we consider as high-redshift AGN candidates the sources found in the locus \ndelimited by: \\\\\n\\\\\n(B$_{J}$-V$_{J}$)$>$1.3 \\\\\nand \\\\\n(r-i)$\\leq$0.60$\\times$(B$_{J}$-V$_{J}$) - 0.30.\\\\\n\\\\\nWith this criterion we obtained 23 candidates down to i$_{AB}$=23.0, summarized in \nTable \\ref{tab:colorsel}. We decided not to put any constraints on the morphology, \nsince the population of low-luminosity AGNs (M$_{1450}\\sim$-23) includes Seyferts, \nwhere the host galaxy could be visible. \n\n\\begin{figure}\n\\label{fig:spec}\n\\plotone{fig1.pdf}\n\\caption{The spectra of the six AGNs with $3.6\\leq z\\leq 4.2$ and i$_{AB}\\leq $23.0 \ndiscovered during our spectroscopic campaign with IMACS and LDSS-3. The red line corresponds \nto zero flux F$_{\\lambda}$, in arbitrary units.} \n\\end{figure}\n\nThere is a relatively small overlap between the candidates selected by the various methods. \nMore specifically, only 7\\% of the candidates selected by photometric redshifts are also \nincluded in the color-selected sample (three out of 42 objects), which is useful to \nincrease our completeness. This is a clear advantage with respect to the works of \n\\citet{Gli11} and \\citet{Ike11} which only used four bands for their selections.\n\nThe final criterion for the creation of our sample was X-ray emission. In practice, we \nselected 38 sources detected in X-rays by deep Chandra observations in the COSMOS field \n\\citep{Civ16} with $z_{phot}\\geq$3 and a limiting magnitude i$_{AB}<$23. These photometric \nredshifts were provided by \\citet{Mar16a} based on AGNs, galaxies or hybrid templates, \nas described in \\citet{Sal11}. This sample consists both of type-1 and type-2 AGNs, and \nrepresents an unbiased census of the faint AGN population at this redshift. Only eight of \nthe sources selected with the first two criteria present also emission in X-rays, while six \ncandidates have been selected both by X-ray and color criteria. \n\nOur final sample consists of 92 AGN candidates with magnitudes i$_{AB}<$23, that have been \nselected by at least one of the methods mentioned above. Thanks to extensive spectroscopic \ncampaigns carried out in the COSMOS field \\citep[e.g.][]{Bru09,Ike11,Civ12,Mar16a,Hasin18}, \n22 of our 92 candidates have secure spectroscopic redshifts. To establish the nature of \nthe remaining 70 sources (five of which have uncertain spectroscopic redshifts), we started \nan exploratory spectroscopic campaign at the Magellan Telescopes. \n\n\\section{Spectroscopic Follow-up} \n\nWe were awarded 2.5 nights with the wide-field Inamori-Magellan Areal Camera and \nSpectrograph \\citep[IMACS,][]{Dre11} on the 6.5m Magellan-Baade telescope at Las \nCampanas Observatory to obtain spectra for our AGN candidates. 
We observed a total of five \nmulti-slit masks with the IMACS f\/2 camera (27$\\arcmin$ diameter field of view) \nwith total exposure times ranging from 3hr to 6hr, during dark time in 2018 February and \nMarch. The width of the slits was 1$\\arcsec$.0 and the detector was used without binning \n(0$\\arcsec$.2\/pixel in the spatial direction). \n\nFor the three 6hr masks we used the 300 line mm$^{-1}$ red-blazed grism (300\\_26.7) with \nspectral sampling of 1.25{\\AA} pixel$^{-1}$, while for the two 3hr masks we used the 200 \nline mm$^{-1}$ grism that has a slightly lower resolution, sampling 2.04{\\AA} pixel$^{-1}$. \nIt is worth noting that the space density of our AGN candidates is such that only around \nthree objects typically fall in an IMACS field of view at this magnitude limit.\n\nWe observed a total of 16 AGN candidates with magnitudes ranging from i$_{AB}=$20 to 23.0, \nand for 14 of them we obtained robust redshift determination at $z>3$, resulting in an \nefficiency of $\\sim 88\\%$, and two uncertain redshifts at $z>3$. Out of the sub-sample with \nsecure redshifts, we found six AGNs in the redshift range $3.6\\leq z_{spec}\\leq 4.2$, and \neight AGNs with a measured redshift of either $3.1$4.2. \nNotice that half of the confirmed AGNs are found outside the color selection locus and that \nhalf of the color selected candidates still need to be observed.} \n\\end{figure}\n\nThe distribution of our candidates in the color-color space can be seen in Figure 2. \nHere we plot the entire color-selected sample and indicate which sources were confirmed as \nAGNs, in the redshift range of interest, either after our spectroscopic campaign or from \nthe literature. We also indicate sources that lie in the color locus but their spectroscopic \nredshifts are either $z_{spec}<$3.6 or $z_{spec}>$4.2. 
A detailed presentation of the \nfull spectroscopic sample and a comprehensive description of the different color criteria \nare not the main aims of the present paper and will be discussed in a future work.\n\n\\begin{deluxetable*}{ccccccccc}\n\\tablecaption{Confirmed AGNs Used for Determining Space Density \\label{tab:spaced}}\n\\tablehead{\n\\colhead{ID} & \\colhead{R.A.} &\n\\colhead{Decl.} & \\colhead{$i_{AB}$} & \n\\colhead{$z_{spec}$} & \\colhead{$r_{AB}$} &\\colhead{M$_{1450}$} & References \\\\}\n\\startdata\n38736 & 150.732540 & 1.516127 & 22.088 & 4.183 & 22.897 & -23.341 & our spectroscopy \\\\\n247934 & 150.801300 & 1.657550 & 22.334 & 3.772 & 22.817 & -23.182 & our spectroscopy \\\\\n330806 & 150.107380 & 1.759201 & 22.555 & 4.140 & 23.105 & -23.110 & \\citet{Ike11} \\\\\n658294 & 149.467350 & 1.855592 & 21.056 & 4.174 & 21.603 & -24.630 & \\citet{Tru09} \\\\\n664641 & 149.533720 & 1.809260 & 22.436 & 3.986 & 22.946 & -23.182 & our spectroscopy \\\\\n899256 & 150.782210 & 2.285049 & 21.927 & 3.626 & 22.363 & -23.545 & our spectroscopy \\\\\n1054048\\tablenotemark{*} & 149.879200 & 2.225839 & 22.697 & 3.650 & 23.200 & -22.722 & \\citet{Mar16a} \\\\\n1159815 & 150.638440 & 2.391350 & 22.157 & 3.650 & 22.539 & -23.383 & \\citet{Ike11} \\\\\n1163086 & 150.703770 & 2.370019 & 22.444 & 3.748 & 22.863 & -23.122 & \\citet{Mar16a} \\\\\n1208399 & 150.259540 & 2.376141 & 21.424 & 3.717 & 21.488 & -24.478 & \\citet{Mar16a} \\\\\n1224733 & 150.208990 & 2.438466 & 21.147 & 3.715 & 21.485 & -24.480 & \\citet{Mar16a} \\\\\n1273346\\tablenotemark{*} & 149.776910 & 2.444306 & 22.779 & 4.170 & 23.274 & -22.952 & \\citet{Mar16a} \\\\\n1730531\\tablenotemark{*} & 149.843220 & 2.659095 & 22.900 & 3.748 & 23.439 & -22.545 & our spectroscopy \\\\\n1856470 & 150.475680 & 2.798362 & 21.282 & 4.110 & 21.753 & -24.445 & \\citet{Mar16a} \\\\\n1938843 & 149.845860 & 2.860459 & 22.160 & 3.630 & 22.619 & -23.290 & our spectroscopy \\\\\n1971812 & 149.472870 & 2.793400 & 21.887 & 3.610 & 22.179 & -23.717 & \\citet{Mar16a} \\\\\n\\enddata\n\\tablenotetext{*}{Not included in the space density bins because M$_{1450}>$-23.0}\n\\end{deluxetable*}\n\n\\section{Space Density Determination}\n\nThe advantage of doing this study in the COSMOS field is that it already contains extensive \nspectroscopic follow-up and extensive multi-wavelength data from radio to X-rays. Thus, \ncombining the confirmed candidates presented above, with known AGNs from the literature, \nwe obtain a sample of 16 spectroscopically confirmed AGNs with 3.6$1.1$ instead of 1.3 as threshold, the expected total \nnumber of AGNs is 34 and the completeness corrections remain at the $\\sim 50\\%$ level. \nThis indicates that the green squares (corrected space density) in Figure 3 are quite \nrobust with respect to the details of the adopted color criterion.\n\n\\begin{deluxetable*}{cccccc}\n\\tablecaption{AGN Space Density ($$=3.9)\\label{tab:spacedn}}\n\\tablehead{\n\\colhead{M$_{1450}$} & \\colhead{$\\Phi$} &\n\\colhead{$\\sigma_\\Phi^{up}$} & \\colhead{$\\sigma_\\Phi^{low}$} & \n\\colhead{N$_{AGN}$} & \\colhead{$\\Phi_{corr}$}\\\\ & $Mpc^{-3} Mag^{-1}$ & & & & \\\\}\n\\startdata\n-24.5 & 3.509e-07& 2.789e-07& 1.699e-07& 4 &7.018e-07\\\\\n-23.5 &7.895e-07& 3.616e-07& 2.595e-07& 9 & 1.579e-06 \\\\\n\\enddata\n\\end{deluxetable*}\n\nIn Figure 3 we also present the LFs calculated by \\citet{Aki18}, \\citet{Par18}, and \n\\citet{Mas12} for comparison. The sample created by \\citet{Aki18} is limited to g-band \ndropout (i.e. $3.5$4. 
Even though the faint-end \nslope in \\citet{Par18} is steeper than that found by \\citet{Gli11}, their space density in \nabsolute magnitudes M$_{1450}<$-23 is marginally in agreement with our estimates. We also \nshow the space density derived by \\citet{Mar16b}, based on X-ray data, after being converted \nto UV \\citep{Ric17}. Although these points are higher than most optical LFs, they are slightly \nlower than our estimate. In Table \\ref{tab:spacedn} we present the estimate of the AGN space \ndensity $\\Phi$, based on our analysis, in the two absolute magnitude bins. Even excluding \nthe COSMOS247934, which is the least certain among our sources, the uncorrected space density \nat M$_{1450}$=-23.5 becomes 7.018e-07 $Mpc^{-3}Mag^{-1}$, which is still higher than all LFs \npresented in Figure 3, except for \\citet{Par18}. Considering the space density corrected for\nincompleteness, also the \\citet{Par18} LF also turns out to be underestimated.\n\nAn important aspect, made clear by our sample, is that selections based on color criteria \ncan be highly incomplete, since out of the 16 spectroscopically confirmed AGNs only six \nhave been selected by color. So far, the majority of studies on the AGN LF at this redshift \nrange is based on color-selected samples and this could be the reason why faint AGN number \ndensities have been underestimated. When a first attempt was made by \\citet{Gia15} to create \nan AGN sample based on non-traditional criteria, a different picture emerged. Given that AGNs, \neven at faint magnitudes, have a large escape fraction as shown by \\citet{Gra18}, an increase \nof the estimate of their population can have significant implications on the contribution \nof AGNs to the H$_{I}$ ionizing background. \n\n\\section{Discussion and Conclusions}\n\nOur estimates of the space density in the range $-24.54$. In fact, in a subsequent work, once our spectroscopic sample is complete, we will \npresent the global shape of the LF at $z\\sim$4 and the associated emissivity. This can have \ndeep implications on the extrapolation of the number of QSOs expected at high-z in wide and \ndeep large area surveys, either ground based, e.g. LSST, or from space, e.g., e-Rosita, \nEuclid, WFIRST. An upward revision of the number density of L=L$^{*}$ AGNs would certainly \nimply a reconsideration of the expected QSO and AGN numbers at $z>4$ in these future missions. \n\n\\acknowledgments\nWe would like to thank the anonymous referee for useful suggestions and constructive \ncomments that helped us improve this paper. This paper includes data gathered with the \n6.5 meter Magellan Telescopes located at Las Campanas Observatory (LCO), Chile.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\nWe consider the equation\n\\begin{equation}\n \\label{eq:1}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps + |\\psi^\\eps|^2 \\psi^\\eps,\\quad (t,x)\\in \\R\\times \\R^3,\n\\end{equation}\nand both semi-classical ($\\eps\\to 0$) and large time ($t\\to \\pm\n\\infty$) limits. Of course these limits must not be expected to\ncommute, and one of the goals of this paper is to analyze this lack of\ncommutation on specific asymptotic data, under the form of coherent\nstates as described below. 
Even though our main result\n(Theorem~\\ref{theo:cv}) is proven specifically for the above case of a\ncubic three-dimensional equation, two important intermediate results\n(Theorems~\\ref{theo:scatt-quant} and \\ref{theo:scatt-class}) are\nestablished in a more general setting. Unless specified otherwise, we\nshall from now on consider $\\psi^\\eps:\\R_t \\times \\R^d_x\\to \\C$, $d\\ge\n1$. \n\n\n\\subsection{Propagation of initial coherent states}\n\\label{sec:prop-init-coher}\n\nIn this subsection, we consider the initial value problem, as opposed\nto the scattering problem treated throughout this paper. More\nprecisely, we assume here that the wave function is, at time $t=0$,\ngiven by the coherent state\n\\begin{equation}\n \\label{eq:ci}\n \\psi^\\eps(0,x) = \\frac{1}{\\eps^{d\/4}}a\\(\\frac{x-q_0}{\\sqrt\\eps}\\)\n e^{ip_0\\cdot (x-q_0)\/\\eps},\n\\end{equation}\nwhere $q_0,p_0\\in \\R^d$ denote the initial position and velocity,\nrespectively. The function $a$ belongs to the Schwartz class,\ntypically. In the case where $a$ is a (complex) Gaussian, many\nexplicit computations are available in the linear case (see\n\\cite{Hag80}). Note that the $L^2$-norm of $\\psi^\\eps$ is independent\nof $\\eps$, $\\|\\psi^\\eps(t,\\cdot)\\|_{L^2(\\R^d)} =\\|a\\|_{L^2(\\R^d)}$. \n\n Throughout this subsection, we assume that the external\npotential $V$ is smooth and real-valued, $V\\in C^\\infty(\\R^d;\\R)$, and\nat most quadratic, in the sense that \n\\begin{equation*}\n \\d^\\alpha V\\in L^\\infty(\\R^d),\\quad \\forall |\\alpha|\\ge 2.\n\\end{equation*}\nThis assumption will be strengthened when large time behavior is\nanalyzed. \n\\subsubsection{Linear case}\n\\label{sec:linear-case}\n Resume \\eqref{eq:1} in the absence of nonlinear term:\n\\begin{equation}\n \\label{eq:lin}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps,\\quad x\\in \\R^d,\n\\end{equation}\nassociated with the initial datum \\eqref{eq:ci}. To derive an\napproximate solution, and to describe the propagation of the initial\nwave packet, introduce the Hamiltonian flow\n\\begin{equation}\n \\label{eq:hamil}\n \\dot q(t)= p(t),\\quad \\dot p(t)=-\\nabla V\\(q(t)\\),\n\\end{equation}\nand prescribe the initial data $q(0)=q_0$, $p(0)=p_0$. 
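For concreteness, the flow \\eqref{eq:hamil} is easily integrated numerically. The minimal sketch below (Python, $d=1$) computes $(q(t),p(t))$ together with the classical action introduced next; the specific potential is an arbitrary smooth choice made only for the illustration.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

V     = lambda q: 1.0/(1.0 + q**2)             # illustrative smooth potential
gradV = lambda q: -2.0*q/(1.0 + q**2)**2       # its derivative

def rhs(t, y):
    q, p, S = y
    return [p, -gradV(q), 0.5*p**2 - V(q)]     # (eq:hamil) plus the action integrand

q0, p0 = -1.0, 2.0
sol = solve_ivp(rhs, (0.0, 10.0), [q0, p0, 0.0], rtol=1e-10, atol=1e-12)
q, p, S = sol.y                                # trajectory and classical action S(t)
\\end{verbatim}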
Since the\npotential $V$ is smooth and at most quadratic, the solution\n$(q(t),p(t))$ is smooth, defined for all time, and grows at most\nexponentially.\nThe classical action is given by\n\\begin{equation}\\label{eq:action}\nS(t)=\\int_0^t \\left( \\frac{1}{2} |p(s)|^2-V(q(s))\\right)\\,ds.\n\\end{equation}\nWe observe that if we change the unknown function $\\psi^\\eps$ to\n$u^\\eps$ by\n\\begin{equation}\n \\label{eq:chginc}\n \\psi^\\eps(t,x)=\\eps^{-d\/4} u^\\eps \n\\left(t,\\frac{x-q(t)}{\\sqrt\\eps}\\right)e^{i\\left(S(t)+p(t)\\cdot\n (x-q(t))\\right)\/\\eps},\n\\end{equation}\nthen, in terms of $u^\\eps=u^\\eps(t,y)$, the Cauchy problem\n\\eqref{eq:lin}--\\eqref{eq:ci} is equivalent to \n\\begin{equation}\\label{eq:ueps0}\ni\\d_t\nu^\\eps+\\frac{1}{2}\\Delta u^\\eps=V^\\eps(t,y)\nu^\\eps\\quad ;\\quad u^\\eps(0,y) = a(y),\n\\end{equation}\nwhere the external time-dependent potential $V^\\eps$ is given by\n\\begin{equation}\n \\label{eq:Veps}\n V^\\eps(t,y)= \\frac{1}{\\eps}\\(V(x(t)+\n\\sqrt{\\eps}y)-V(x(t))-\\sqrt{\\eps}\\<\\nabla V(x(t)),y\\>\\).\n\\end{equation}\nThis potential corresponds to the first term of a Taylor expansion of\n$V$ about the point $q(t)$, and we naturally introduce \n$u=u(t,y)$ solution to \n\\begin{equation}\\label{eq:ulin}\ni\\d_tu+\\frac{1}{2}\\Delta u=\\frac{1}{2}\\< Q(t)y,y\\> u\\quad\n;\\quad u(0,y)=a(y),\n\\end{equation}\nwhere\n\\begin{equation*}\n Q(t):= \\nabla^2 V\\(q(t)\\), \\quad \\text{so that } \\frac{1}{2}\\<\n Q(t)y,y\\> = \\lim_{\\eps \\to 0} V^\\eps(t,y). \n\\end{equation*}\nThe obvious candidate to approximate the initial wave function\n$\\psi^\\eps$ is then:\n\\begin{equation}\n \\label{eq:phi}\n \\varphi^\\eps(t,x)=\\eps^{-d\/4} u\n\\left(t,\\frac{x-q(t)}{\\sqrt\\eps}\\right)e^{i\\left(S(t)+p(t)\\cdot\n (x-q(t))\\right)\/\\eps}.\n\\end{equation}\nIndeed, it can be proven (see\ne.g. \\cite{BGP99,BR02,CoRoBook,Hag80,HaJo00,HaJo01}) that there\nexists \n$C>0$ independent of $\\eps$ such that\n\\begin{equation*}\n \\|\\psi^\\eps(t,\\cdot)-\\varphi^\\eps (t,\\cdot)\\|_{L^2(\\R^d)}\\le\n C\\sqrt\\eps e^{Ct}. \n\\end{equation*}\nTherefore, $\\varphi^\\eps$ is a good approximation of $\\psi^\\eps$ at least up to time\nof order $c\\ln\\frac{1}{\\eps}$ (Ehrenfest time). \n\n\\subsubsection{Nonlinear case}\n\\label{sec:nonlinear}\n When adding a nonlinear term to \\eqref{eq:lin}, one has to be\n cautious about the size of the solution, which rules the importance\n of the nonlinear term. To simplify the discussions, we restrict our\n analysis to the case of a gauge invariant, defocusing, power nonlinearity,\n $|\\psi^\\eps|^{2\\si}\\psi^\\eps$. We choose to measure the importance of\n nonlinear effects not directly through the size of the initial data,\n but through an $\\eps$-dependent coupling factor: we keep the initial\n datum \\eqref{eq:ci} (with an $L^2$-norm independent of $\\eps$), and\n consider\n \\begin{equation*}\n i\\eps\\d_t \\psi^\\eps + \\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps + \\eps^\\alpha|\\psi^\\eps|^{2\\si}\\psi^\\eps.\n \\end{equation*}\nSince the nonlinearity is homogeneous, this approach is equivalent to\nconsidering $\\alpha=0$, up to multiplying the initial datum by\n$\\eps^{\\alpha\/(2\\si)}$. 
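Indeed, setting $\\phi^\\eps = \\eps^{\\alpha\/(2\\si)}\\psi^\\eps$, so that $\\eps^\\alpha|\\psi^\\eps|^{2\\si}\\psi^\\eps = \\eps^{-\\alpha\/(2\\si)}|\\phi^\\eps|^{2\\si}\\phi^\\eps$, a direct computation yields
\\begin{equation*}
 i\\eps\\d_t \\phi^\\eps +\\frac{\\eps^2}{2}\\Delta \\phi^\\eps =
 V(x)\\phi^\\eps + |\\phi^\\eps|^{2\\si} \\phi^\\eps,
\\end{equation*}
that is, the case $\\alpha=0$ with datum multiplied by $\\eps^{\\alpha\/(2\\si)}$.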
\nWe assume $\\si>0$, with $\\si<2\/(d-2)$ if $d\\ge 3$: for $a\\in \\Sigma$,\ndefined by\n\\begin{equation*}\n \\Sigma = \\{f\\in H^1(\\R^d),\\quad x\\mapsto \\ f(x)\\in\n L^2(\\R^d)\\},\\quad \\=\\(1+|x|^2\\)^{1\/2},\n\\end{equation*}\nwe have, for fixed $\\eps>0$, $\\psi^\\eps_{\\mid t=0}\\in \\Sigma$, and the\nCauchy problem is globally well-posed, $\\psi^\\eps\\in C(\\R_t;\\Sigma)$\n(see e.g. \\cite{Ca11}). It was established in \n\\cite{CaFe11} that the value \n\\begin{equation*}\n \\alpha_c = 1+\\frac{d\\si}{2}\n\\end{equation*}\nis critical in terms of the effect of the nonlinearity in the\nsemi-classical limit $\\eps\\to 0$. If $\\alpha>\\alpha_c$, then \n$\\varphi_{\\rm lin}^\\eps$, given by \\eqref{eq:ulin}-\\eqref{eq:phi}, is\nstill a good approximation of $\\psi^\\eps$ at least up to time\nof order $c\\ln\\frac{1}{\\eps}$. On the other hand, if\n$\\alpha=\\alpha_c$, nonlinear effects alter the behavior of $\\psi^\\eps$\nat leading order, through its envelope only. Replacing \\eqref{eq:ulin}\nby \n\\begin{equation}\\label{eq:u}\ni\\d_tu+\\frac{1}{2}\\Delta u=\\frac{1}{2}\\< Q(t)y,y\\> u+|u|^{2\\si}u,\n\\end{equation}\nand keeping the relation \\eqref{eq:phi}, $\\varphi^\\eps$ is now a good\napproximation of $\\psi^\\eps$. In \\cite{CaFe11} though, the time of\nvalidity of the approximation is not always proven to be of order at\nleast $c\\ln\\frac{1}{\\eps}$, sometimes shorter time scales (of the\norder $c\\ln\\ln\\frac{1}{\\eps}$) have to be considered, most likely for\ntechnical reasons only. Some of these restrictions have been removed in\n\\cite{Ha13}, by considering decaying external\npotentials $V$. \n\n\n\\subsection{Linear scattering theory and coherent states}\n\\label{sec:line-scatt-theory}\n\nWe now consider the aspect of large time, and instead of prescribing\n$\\psi^\\eps$ at $t=0$ (or more generally at some finite time), we\nimpose its behavior at $t=-\\infty$.\n In the linear case \\eqref{eq:lin},\nthere are several results addressing the question mentioned above,\nconsidering different forms of asymptotic states at $t=-\\infty$.\nBefore describing them, we recall important facts concerning quantum\nand classical scattering. \n\n\\subsubsection{Quantum scattering}\n\\label{sec:quantum-scattering}\n\nThroughout this paper, we assume that the external potential is\nshort-range, and satisfies the following properties:\n\\begin{hyp}\\label{hyp:V}\n We suppose that $V$ is smooth and real-valued, $V\\in\n C^\\infty(\\R^d;\\R)$. In addition, it is short range in the following\n sense: there exists $\\mu>1$ such that\n \\begin{equation}\n \\label{eq:short}\n |\\d^\\alpha V(x)|\\le \\frac{C_\\alpha}{(1+|x|)^{\\mu+|\\alpha|}},\\quad\n \\forall \\alpha\\in \\N^d. \n \\end{equation}\n\\end{hyp}\nOur final result is established under the stronger condition\n$\\mu>2$ (a condition which is needed in several steps of the proof), but\nsome results are established under the mere assumption\n$\\mu>1$. Essentially, the analysis of the approximate solution is valid\nfor $\\mu>1$\n(see Section~\\ref{sec:class}), while the rest of the analysis requires $\\mu>2$. \n\\smallbreak\n\nDenote by \n\\begin{equation*}\nH_0^\\eps= -\\frac{\\eps^2}{2}\\Delta\\quad \\text{and}\\quad\nH^\\eps=-\\frac{\\eps^2}{2}\\Delta+V(x) \n\\end{equation*}\nthe underlying Hamiltonians. 
For fixed $\\eps>0$, the (linear) wave\noperators are given by\n\\begin{equation*}\n W_\\pm^\\eps = \\lim_{t\\to \\pm \\infty}e^{i\\frac{t}{\\eps}H^\\eps}e^{-i\\frac{t}{\\eps}H^\\eps_0},\n\\end{equation*}\nand the (quantum) scattering operator is defined by\n\\begin{equation*}\n S^\\eps_{\\rm lin} = \\(W_+^\\eps\\)^* W_-^\\eps. \n\\end{equation*}\nSee for instance \\cite{DG}.\n\n\\subsubsection{Classical scattering}\n\\label{sec:classical-scattering}\n\nLet $V$ satisfying Assumption~\\ref{hyp:V}. \nFor $(q^-,p^-)\\in \\R^d\\times \\R^d$, we consider the classical\ntrajectories $(q(t),p(t))$ defined by \\eqref{eq:hamil}, \nalong with the prescribed asymptotic behavior as $t\\to -\\infty$:\n\\begin{equation}\n \\label{eq:CI-hamilton}\n \\lim_{t\\to -\\infty}\\left| q(t)-p^- t -q^-\\right| = \\lim_{t\\to\n -\\infty} |p(t)-p^-|=0. \n\\end{equation}\nThe existence and uniqueness of such a trajectory can be found in\ne.g. \\cite{DG,ReedSimon3}, provided that $p^-\\not =0$. Moreover, there\nexists a closed set $\\mathcal N_0$ of Lebesgue measure zero in\n$\\R^{2d}$ such that for all $(q^-,p^-)\\in \\R^{2d}\\setminus \\mathcal\nN_0$, there exists $(q^+,p^+)\\in \\R^d\\times\n \\(\\R^d\\setminus\\{0\\}\\)$ such that\n\\begin{equation*}\n \\lim_{t\\to +\\infty}\\left| q(t)-p^+ t -q^+\\right| = \\lim_{t\\to\n +\\infty} |p(t)-p^+|=0. \n\\end{equation*}\nThe classical scattering operator is $S^{\\rm cl}:(q^-,p^-)\\mapsto\n(q^+,p^+)$. Choosing $(q^-,p^-)\\in \\R^{2d}\\setminus \\mathcal\nN_0$ implies that the following assumption is satisfied:\n\\begin{assumption}\\label{hyp:flot}\n The asymptotic center in phase space, $(q^-,p^-)\\in \\R^d\\times\n \\(\\R^d\\setminus\\{0\\}\\)$ is such that the classical scattering\n operator is well-defined, \n \\begin{equation*}\n S^{\\rm cl}(q^-,p^-)=\n(q^+,p^+),\\quad p^+\\not =0,\n \\end{equation*}\nand the classical action has limits as $t\\to \\pm\\infty$:\n\\begin{equation*}\n \\lim_{t\\to -\\infty}\\left|S(t)-t\\frac{|p^-|^2}{2}\\right| =\n \\lim_{t\\to +\\infty}\\left|S(t)-t\\frac{|p^+|^2}{2}-S_+\\right| =0,\n\\end{equation*}\nfor some $S_+\\in \\R$. \n\\end{assumption}\n \n\\subsubsection{Some previous results}\n\\label{sec:some-prev-results}\n\n\nIt seems that the first mathematical result involving both the\nsemi-classical and large time limits appears in\n\\cite{GV79mean}, where the classical field limit of non-relativistic\nmany-boson theories is studied in space dimension $d\\ge 3$. \n\nIn\n\\cite{Yajima79}, the\ncase of a short range potential (Assumption~\\ref{hyp:V}) is\nconsidered, with asymptotic states \nunder the form of semi-classically concentrated functions,\n\\begin{equation*}\n e^{-i\\frac{\\eps t}{2}\\Delta}\\psi^\\eps(t)_{\\mid t =-\\infty}\n =\\frac{1}{\\eps^{d\/2}}\\widehat f\\(\\frac{x-q^-}{\\eps}\\),\\quad f\\in L^2(\\R^d),\n\\end{equation*}\nwhere $\\widehat f$ denotes the standard Fourier transform (whose\ndefinition is independent of $\\eps$). The main result from\n\\cite{Yajima79} shows that the semi-classical limit for $S^\\eps_{\\rm lin}$\ncan be expressed in terms of the classical scattering operator, of the\nclassical action, and of\nthe Maslov index associated to each classical trajectory. We refer to\n\\cite{Yajima79} for a precise statement, and to \\cite{Yaj81} for the\ncase of long range potentials, requiring modifications of the\ndynamics. 
\n\\smallbreak\n\nIn \\cite{Hag81,HaJo00}, coherent states are considered,\n\\begin{equation}\n \\label{eq:asym-state}\n e^{-i\\frac{\\eps t}{2}\\Delta}\\psi^\\eps(t)_{\\mid t =-\\infty}=\n \\frac{1}{\\eps^{d\/4}}u_-\\(\\frac{x-q^-}{\\sqrt\\eps}\\) \n e^{ip^-\\cdot (x-q^-)\/\\eps+iq^-\\cdot p^-\/(2\\eps)}=:\\psi_-^\\eps(x).\n\\end{equation}\nMore precisely, in \\cite{Hag81,HaJo00}, the asymptotic state $u_-$ is assumed\nto be a complex Gaussian function. Introduce the notation\n\\begin{equation*}\n \\delta(t) = S(t)-\\frac{q(t)\\cdot p(t) - q^-\\cdot p^-}{2}.\n\\end{equation*}\nThen Assumption~\\ref{hyp:flot} implies that there exists $\\delta^+\\in\n\\R$ such that\n\\begin{equation*}\n \\delta(t)\\Tend t {-\\infty}0\\quad \\text{and}\\quad \\delta(t)\\Tend t\n {+\\infty} \\delta^+. \n\\end{equation*}\n In \n\\cite{CoRoBook,HaJo00}, we find the following general result (an asymptotic\nexpansion in powers of $\\sqrt\\eps$ is actually given, but we stick to\nthe first term to ease the presentation):\n\\begin{theorem}\\label{theo:version-lineaire}\n Let Assumptions~\\ref{hyp:V} and \\ref{hyp:flot} be satisfied, and let\n \\begin{equation*}\n u_-(y) = a_- \\exp \\(\\frac{i}{2}\\<\\Gamma_-y,y\\>\\),\n \\end{equation*}\nwhere $a_-\\in \\C$ and $\\Gamma_-$ is a complex symmetric $d\\times d$\nmatrix whose \nimaginary part is positive and non-degenerate. Consider $\\psi^\\eps$\nsolution to \\eqref{eq:lin}, with \\eqref{eq:asym-state}. Then the\nfollowing asymptotic expansion holds in $L^2(\\R^d)$:\n\\begin{equation*}\n S^\\eps_{\\rm lin} \\psi_-^\\eps = \\frac{1}{\\eps^{d\/4}}e^{i\\delta^+\/\\eps} e^{ip^+\\cdot\n (x-q^+)\/\\eps+iq^+\\cdot p^+\/(2\\eps)} \\hat R(G_+)\n u_-\\(\\frac{x-q^+}{\\sqrt\\eps}\\) +\\O(\\sqrt\\eps),\n\\end{equation*}\nwhere $\\hat R(G_+)$ is the metaplectic transformation associated to\n$G_+ = \\frac{\\d (q^+,p^+)}{\\d(q^-,p^-)}$. \n\\end{theorem}\nAs a corollary, our main result yields another interpretation of the above\nstatement. It turns out that a complete scattering theory is available\nfor \\eqref{eq:ulin}. As a particular case of\nTheorem~\\ref{theo:scatt-class} (which addresses the nonlinear case), given $u_-\\in\n\\Sigma$, there exist a unique $u\\in C(\\R;\\Sigma)$ solution to\n\\eqref{eq:ulin} and a unique $u_+\\in\n\\Sigma$ such that \n\\begin{equation*}\n \\|e^{-i\\frac{t}{2}\\Delta}u(t)-u_\\pm \\|_\\Sigma \\Tend t {\\pm \\infty}\n 0. \n\\end{equation*}\nThen in the above theorem (where $u_-$ is restricted to be a Gaussian), we have\n\\begin{equation*}\n u_+ = \\hat R(G_+)\n u_-.\n\\end{equation*}\nFinally, we mention in passing the paper \\cite{NierENS}, where similar\nissues and results are obtained for\n\\begin{equation*}\n i\\eps\\d_t \\psi^\\eps + \\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V\\(\\frac{x}{\\eps}\\) \\psi^\\eps + U(x)\\psi^\\eps,\n\\end{equation*}\nfor $V$ a short-range potential, and $U$ is bounded as well as its\nderivatives. The special scaling in $V$ implies that initially\nconcentrated waves (at scaled $\\eps$) first undergo the effects of $V$,\nthen exit a time layer of order $\\eps$, through which the main action of $V$\ncorresponds to the above quantum scattering operator (but with $\\eps=1$\ndue to the new scaling in the equation). Then, the action of $V$\nbecomes negligible, and the propagation of the wave is dictated by\nthe classical dynamics associated to $U$. 
\n\n\n\n\n\n\\subsection{Main results}\n\\label{sec:main}\n We now consider the nonlinear equation\n\\begin{equation}\n \\label{eq:psi-eps}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps = V(x)\\psi^\\eps +\n \\eps^\\alpha|\\psi^\\eps|^{2\\si}\\psi^\\eps,\n\\end{equation}\nalong with asymptotic data \\eqref{eq:asym-state}. We first prove that\nfor fixed $\\eps>0$, a scattering theory is available for\n\\eqref{eq:psi-eps}: at this stage, the value of $\\alpha$ is naturally\nirrelevant, as well as the form \\eqref{eq:asym-state}.\nTo establish a large data scattering theory for \\eqref{eq:psi}, we\nassume that the attractive part of the potential,\n\\begin{equation*}\n (\\d_r V(x))_+=\n \\(\\frac{x}{|x|}\\cdot \\nabla V(x)\\)_+\n\\end{equation*}\n is not too large, where $f_+=\\max (0,f)$ for any real number $f$.\n\\begin{theorem}\\label{theo:scatt-quant}\n Let $d\\ge 3$, $\\frac{2}{d}<\\si<\\frac{2}{d-2}$, and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>2$. There exists $M=M(\\mu,d)$ such\n that if the attractive part of the potential $(\\d_r V)_+$ satisfies\n \\begin{equation*}\n (\\d_r V(x))_+\\le \\frac{M}{(1+|x|)^{\\mu+1}},\\quad \\forall x\\in\n \\R^d,\n \\end{equation*}\none can define a\n scattering operator for \\eqref{eq:psi} in $H^1(\\R^d)$: for\n all $\\psi_-^\\eps\\in H^1(\\R^d)$, there exist a unique $\\psi^\\eps\\in\n C(\\R;H^1(\\R^d))$ solution to \\eqref{eq:psi} and a unique $\\psi_+^\\eps\\in\n H^1(\\R^d)$ such that\n \\begin{equation*}\n \\|\\psi^\\eps(t)-e^{i\\frac{\\eps t}{2}\\Delta}\\psi_\\pm^\\eps\\|_{H^1(\\R^d)}\\Tend t {\\pm\n \\infty} 0.\n \\end{equation*}\nThe (quantum) scattering operator is the map\n$S^\\eps:\\psi_-^\\eps\\mapsto \\psi_+^\\eps$.\n\\end{theorem}\nWe emphasize the fact that several recent results address the same\nissue, under various assumptions on the external potential $V$:\n\\cite{ZhZh14} treats the case where $V$ is an inverse square (a\nframework which is ruled out in our contribution), while in\n\\cite{CaDa-p}, the potential is more general than merely inverse\nsquare. In \\cite{CaDa-p}, a magnetic field is also included, and the\nLaplacian is perturbed with variable coefficients. We make more\ncomparisons with \\cite{CaDa-p} in Section~\\ref{sec:quant}. \n\\smallbreak\n\nThe second result of this paper concerns the scattering theory for the\nenvelope equation:\n\n\\begin{theorem}\\label{theo:scatt-class}\n Let $d\\ge 1$, $\\frac{2}{d}\\le \\si<\\frac{2}{(d-2)_+}$, and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>1$. One can define a\n scattering operator for \\eqref{eq:u} in $\\Sigma$: for\n all $u_-\\in \\Sigma$, there exist a unique $u\\in\n C(\\R;\\Sigma)$ solution to \\eqref{eq:u} and a unique $u_+\\in\n \\Sigma$ such that\n \\begin{equation*}\n \\|e^{-i\\frac{t}{2}\\Delta}u(t)-u_\\pm\\|_{\\Sigma}\\Tend t {\\pm\n \\infty} 0.\n \\end{equation*}\n\\end{theorem}\nAs mentioned above, the proof includes the construction of a linear\nscattering operator, comparing the dynamics associated to\n\\eqref{eq:ulin} to the free dynamics $e^{i\\frac{t}{2}\\Delta}$. In the\nabove formula, we have incorporated the information that\n$e^{i\\frac{t}{2}\\Delta}$ is unitary on $H^1(\\R^d)$, but \\emph{not on\n}$\\Sigma$ (see e.g. \\cite{CazCourant}). \n\\smallbreak\n\nWe can now state the nonlinear analogue to\nTheorem~\\ref{theo:version-lineaire}. Since\nTheorem~\\ref{theo:scatt-quant} requires $d\\ge 3$, we naturally have to\nmake this assumption. 
On the other hand, we will need the\napproximate envelope $u$ to be rather smooth, which requires a smooth\nnonlinearity, $\\si\\in \\N$. Intersecting this property with the\nassumptions of Theorem~\\ref{theo:scatt-quant} leaves only one case:\n$d=3$ and $\\si=1$, that is \\eqref{eq:1}, up to the scaling. We will\nsee in Section~\\ref{sec:cv} that considering $d=3$ is also crucial,\nsince the argument uses dispersive estimates which are known only in\nthe three-dimensional case for $V$ satisfying Assumption~\\ref{hyp:V}\nwith $\\mu>2$ (larger values for $\\mu$ could be considered in higher\ndimensions, though). Introduce\nthe notation\n\\begin{equation*}\n \\Sigma^k=\\{ f\\in H^k(\\R^d),\\quad x\\mapsto |x|^k f(x)\\in\n L^2(\\R^d)\\}. \n\\end{equation*}\n\n\\begin{theorem}\\label{theo:cv}\n Let Assumptions~\\ref{hyp:V} and \\ref{hyp:flot} be satisfied, with\n $\\mu>2$ and $V$ as in Theorem~\\ref{theo:scatt-quant}. Consider \n $\\psi^\\eps$ solution to \n \\begin{equation*}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps + \\eps^{5\/2}|\\psi^\\eps|^2 \\psi^\\eps,\\quad (t,x)\\in \\R\\times \\R^3,\n \\end{equation*}\nand such that \\eqref{eq:asym-state} holds, with $u_-\\in \\Sigma^7$.\nThen the\nfollowing asymptotic expansion holds in $L^2(\\R^3)$:\n\\begin{equation}\\label{eq:asym-finale}\n S^\\eps \\psi_-^\\eps = \\frac{1}{\\eps^{3\/4}}e^{i\\delta^+\/\\eps} e^{ip^+\\cdot\n (x-q^+)\/\\eps+iq^+\\cdot p^+\/(2\\eps)} u_+\\(\\frac{x-q^+}{\\sqrt\\eps}\\)\n +\\O(\\sqrt\\eps), \n\\end{equation}\nwhere $S^\\eps$ is given by Theorem~\\ref{theo:scatt-quant} and $u_+$\nstems from Theorem~\\ref{theo:scatt-class}. \n\\end{theorem}\n\\begin{remark}\n In the subcritical case, that is if we consider\n \\begin{equation*}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps + \\eps^{\\alpha}|\\psi^\\eps|^2 \\psi^\\eps,\\quad (t,x)\\in \\R\\times \\R^3,\n \\end{equation*}\nalong with \\eqref{eq:asym-state}, for some $\\alpha>5\/2$, the argument\nof the proof shows that \\eqref{eq:asym-finale} remains true, but with $u_+$ given\nby the scattering operator associated to \\eqref{eq:ulin} (as opposed\nto \\eqref{eq:u}), that is, the same conclusion as in\nTheorem~\\ref{theo:version-lineaire} when $u_-$ is a Gaussian. \n\\end{remark}\nAs a corollary of the proof of the above result, and of the analysis\nfrom \\cite{CaFe11}, we infer:\n\\begin{corollary}[Asymptotic decoupling]\\label{cor:decoupling}\n Let Assumption~\\ref{hyp:V} be satisfied, with\n $\\mu>2$ and $V$ as in Theorem~\\ref{theo:scatt-quant}. Consider \n $\\psi^\\eps$ solution to \n \\begin{equation*}\n i\\eps\\d_t \\psi^\\eps +\\frac{\\eps^2}{2}\\Delta \\psi^\\eps =\n V(x)\\psi^\\eps + \\eps^{5\/2}|\\psi^\\eps|^2 \\psi^\\eps,\\quad (t,x)\\in \\R\\times \\R^3,\n \\end{equation*}\nwith initial datum\n\\begin{equation*}\n \\psi^\\eps(0,x) = \\sum_{j=1}^N\\frac{1}{\\eps^{3\/4}}a_j\\(\\frac{x-q_{0j}}{\\sqrt\\eps}\\)\n e^{ip_{0j}\\cdot (x-q_{0j})\/\\eps}=:\\psi_0^\\eps(x),\n\\end{equation*}\nwhere $N\\ge 2$, $q_{0j},p_{0j}\\in \\R^3$, $p_{0j}\\not =0$ so that scattering is\navailable as\n$t\\to +\\infty$ for $(q_j(t),p_j(t))$, in the sense of\nAssumption~\\ref{hyp:flot}, and $a_j\\in \\Sch(\\R^3)$. We suppose\n$(q_{0j},p_{0j})\\not =(q_{0k},p_{0k})$ for $j\\not =k$. 
Then we have\nthe uniform estimate: \n\\begin{equation*}\n \\sup_{t\\in \\R}\\left\\| \\psi^\\eps(t) - \\sum_{j=1}^N\n \\varphi_j^\\eps(t)\\right\\|_{L^2(\\R^3)} \\Tend \\eps\n 0 0 ,\n\\end{equation*}\nwhere $\\varphi_j^\\eps$ is the approximate solution with the $j$-th\nwave packet as an initial datum. As a consequence, the \n asymptotic expansion holds in $L^2(\\R^3)$, as $\\eps \\to 0$:\n\\begin{equation*}\n \\(W^\\eps_\\pm\\)^{-1} \\psi_0^\\eps =\\sum_{j=1}^N\n \\frac{1}{\\eps^{3\/4}}e^{i\\delta_{j}^\\pm\/\\eps} e^{ip_{j}^\\pm\\cdot \n (x-q_{j}^\\pm)\/\\eps+iq_{j}^\\pm\\cdot p_{j}^\\pm\/(2\\eps)}\n u_{j\\pm}\\(\\frac{x-q_{j}^\\pm}{\\sqrt\\eps}\\) \n +o(1), \n\\end{equation*}\nwhere the inverse wave operators $\\(W^\\eps_\\pm\\)^{-1} $ stem from\nTheorem~\\ref{theo:scatt-quant}, the $u_{j\\pm}$'s \nare the asymptotic states emanating from $a_j$, and \n\\begin{equation*}\n \\delta_{j}^\\pm = \\lim_{t\\to \\pm\\infty}\\(S_j(t) - \\frac{q_j(t)\\cdot\n p_j(t)-q_{0j}\\cdot p_{0j}}{2}\\)\\in \\R. \n\\end{equation*}\n\\end{corollary}\n\\begin{remark}\n In the case $V=0$, the approximation by wave packets is actually\n exact, since then $Q(t)\\equiv 0$, hence $u^\\eps=u$. For one wave\n packet, Theorem~\\ref{theo:cv} \n becomes empty, since it is merely a rescaling. On the other hand,\n for two initial wave packets, even in the case $V=0$,\n Corollary~\\ref{cor:decoupling} brings some information, reminiscent\n of profile decomposition. More precisely, define $u^\\eps$ by\n \\eqref{eq:chginc}, and choose (arbitrarily) to privilege the trajectory\n $(q_1,p_1)$. The Cauchy problem is then equivalent to \n \\begin{equation*}\n\\left\\{\n\\begin{aligned}\n &i\\d_t u^\\eps+\\frac{1}{2}\\Delta u^\\eps = |u^\\eps|^2 u^\\eps,\\\\\n& u^\\eps(0,y) = a_1(y) + a_2\\( y +\\frac{q_{01}-q_{02}}{\\sqrt\\eps}\\)\n e^{ip_{02}\\cdot \\delta q_0\/\\eps -i\\delta p_0\\cdot y\/\\sqrt\\eps},\n \\end{aligned}\n\\right.\n\\end{equation*}\nwhere we have set $\\delta p_0 = p_{01}-p_{02}$ and $\\delta q_0\n=q_{01}-q_{02}$. \nNote however that the initial datum is uniformly bounded in\n$L^2(\\R^3)$, but in no $H^s(\\R^3)$ for $s>0$ (if $p_{01}\\not =\np_{02}$), while the equation is \n$\\dot H^{1\/2}$-critical, Therefore, even in the case\n$V=0$, \nCorollary~\\ref{cor:decoupling} does not seem to be a consequence of\nprofile decompositions like in\ne.g. \\cite{DuHoRo08,Keraani01,MerleVega98}. In view of\n\\eqref{eq:hamil}, the approximation provided by\nCorollary~\\ref{cor:decoupling} reads, in that case:\n\\begin{equation*}\n u^\\eps(t,y) = u_1(t,y) + u_2\\(t,y\n +\\frac{t \\delta p_0+\\delta q_0}{\\sqrt\\eps}\\)\n e^{i\\phi_2^\\eps(t,y)}+o(1)\\quad \\text{in }L^\\infty(\\R;L^2(\\R^3)),\n\\end{equation*}\nwhere the phase shift is given by\n\\begin{align*}\n \\phi^\\eps_2(t,y) &= \\frac{1}{\\eps}p_{02}\\cdot \\( t\\delta p_0+\\delta\n q_0\\) -\\frac{1}{\\sqrt\\eps}\\delta p_0\\cdot y +\\frac{t}{2\\eps}\n \\( |p_{02}|^2-|p_{01}|^2\\) \\\\\n&= \\frac{1}{\\eps}p_{02}\\cdot \\delta\n q_0 -\\frac{1}{\\sqrt\\eps}\\delta p_0\\cdot y -\\frac{t}{2\\eps}|\\delta\n p_0|^2. \n\\end{align*}\n\\end{remark}\n\n\n\n\\noindent {\\bf Notation.} We write $a^\\eps(t)\\lesssim b^\\eps(t)$\nwhenever there exists $C$ independent of $\\eps\\in (0,1]$ and $t$ such\nthat $a^\\eps(t)\\le C b^\\eps(t)$. 
\n \n\n\n\n\n\n\n\n\n\n\\section{Spectral properties and consequences}\n\\label{sec:spectral}\n\nIn this section, we derive some useful properties for the Hamiltonian\n\\begin{equation*}\n H=-\\frac{1}{2}\\Delta +V.\n\\end{equation*}\nSince the dependence upon $\\eps$ is not addressed in this\nsection, we assume $\\eps=1$.\n\\smallbreak\n\nFirst, it follows for instance from \\cite{Mourre} that\nAssumption~\\ref{hyp:V} implies that $H$ has no\nsingular spectrum. Based on Morawetz estimates, we show that $H$ has\nno eigenvalue, provided that the attractive part of $V$ is\nsufficiently small. Therefore, the spectrum of $H$ is\npurely absolutely continuous. \nFinally, again if the attractive part of $V$ is\nsufficiently small, zero is not a resonance of $H$, so Strichartz\nestimates are available for $e^{-itH}$. \n\n\n\n\n\\subsection{Morawetz estimates and a first consequence}\n\\label{sec:morawetz}\n\nIn this section, we want to treat both linear and nonlinear equations,\nso we consider\n\\begin{equation}\n \\label{eq:psi-gen}\n i\\d_t \\psi +\\frac{1}{2}\\Delta \\psi = V\\psi + \\lambda\n |\\psi|^{2\\si}\\psi,\\quad \\l \\in \\R.\n\\end{equation}\nMorawetz estimate in the linear case $\\l=0$ will show the absence of\neigenvalues. In the nonlinear case $\\l>0$, these estimates will be a\ncrucial tool for prove scattering in the quantum case. \n The following lemma and its proof are essentially a rewriting of the\n presentation from \\cite{BaRuVe06}. \n\\begin{proposition}[Morawetz inequality]\\label{prop:Morawetz}\n Let $d\\ge 3$, and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>2$. There exists $M=M(\\mu,d)>0$ such\n that if the attractive part of the potential satisfies\n \\begin{equation*}\n (\\d_r V(x))_+ \\le \\frac{M}{(1+|x|)^{\\mu+1}},\\quad \\forall x\\in\n \\R^d,\n \\end{equation*}\nthen any solution $\\psi\\in L^\\infty(\\R;H^1(\\R^d))$ to \\eqref{eq:psi-gen}\nsatisfies\n\\begin{equation}\\label{eq:morawetz}\n \\l \\iint_{\\R\\times \\R^d}\\frac{|\\psi(t,x)|^{2\\si+2}}{|x|}dtdx +\n \\iint_{\\R\\times \\R^d}\\frac{|\\psi(t,x)|^{2}}{(1+|x|)^{\\mu+1}}dtdx\\lesssim \n \\|\\psi\\|_{L^\\infty(\\R;H^1)}^2. \n\\end{equation}\n\\end{proposition}\nIn other words, the main obstruction to global dispersion for $V$\ncomes from $(\\d_r V)_+$, which is the attractive contribution of $V$\nin classical trajectories, while $(\\d_r V)_-$ is the repulsive part,\nwhich does not ruin the dispersion associated to $-\\Delta$ (it may\n reinforce it, see e.g. \\cite{CaDCDS}, but repulsive\npotentials do not necessarily improve the dispersion, see \\cite{GoVeVi06}). \n\\begin{proof}\n The proof follows standard arguments, based on virial identities\n with a suitable weight. We resume the main steps of the\n computations, and give more details on the choice of the weight in\n our context. For a real-valued function $h(x)$, we compute, for $\\psi$ solution\n to \\eqref{eq:psi},\n \\begin{equation*}\n \\frac{d}{dt}\\int h(x)|\\psi(t,x)|^2dx = \\IM \\int \\bar \\psi(t,x) \\nabla\n h(x)\\cdot \\nabla \\psi(t,x)dx,\n \\end{equation*}\n \\begin{equation}\n \\label{eq:viriel}\n \\begin{aligned}\n \\frac{d}{dt}\\IM \\int \\bar \\psi(t,x) \\nabla\n h(x)\\cdot \\nabla \\psi(t,x)dx &= \\int \\nabla \\bar \\psi(t,x)\\cdot\n \\nabla^2h(x)\\nabla \\psi(t,x)dx \\\\\n-\\frac{1}{4}\\int |\\psi(t,x)|^2 &\\Delta^2\n h(x)dx -\\int |\\psi(t,x)|^2\\nabla V\\cdot \\nabla h(x)dx\\\\\n & +\\frac{\\l\\si}{\\si+1}\\int |\\psi(t,x)|^{2\\si+2}\\Delta h(x) dx. 
\n \\end{aligned}\n \\end{equation}\nIn the case $V=0$, the standard choice is $h(x)=|x|$, for which\n\\begin{equation*}\n \\nabla h=\\frac{x}{|x|},\\quad \\nabla^2_{jk}h =\n \\frac{1}{|x|}\\(\\delta_{jk}-\\frac{x_jx_k}{|x|^2}\\),\\quad \\Delta h\\ge\n \\frac{d-1}{h},\\quad \\text{and }\\Delta^2 h\\le 0\\text{ for }d\\ge 3. \n\\end{equation*}\nThis readily yields Proposition~\\ref{prop:Morawetz} in the repulsive case\n$\\d_r V\\le 0$, since $\\nabla h\\in L^\\infty$. \n\\smallbreak\n\nIn the same spirit as in \\cite{BaRuVe06}, we proceed by perturbation to\nconstruct a suitable weight when the attractive part of the potential\nis not too large. We seek a priori a radial weight, $h=h(|x|)\\ge 0$, so we\nhave \n\\begin{align*}\n & \\Delta h = h'' +\\frac{d-1}{r} h',\\\\\n& \\Delta^2 h = h^{(4)}\n +2\\frac{d-1}{r} h^{(3)} +\\frac{(d-1)(d-3)}{r^2}h'' -\n \\frac{(d-1)(d-3)}{r^3}h',\\\\\n&\\nabla^2_{jk} h = \\frac{1}{r}\\(\\delta_{jk}-\\frac{x_jx_k}{r^2}\\) h'\n+\\frac{x_jx_k}{r^2}h''. \n\\end{align*}\nWe construct a function $h$ such that $h',h''\\ge 0$, so the condition\n$\\nabla^2 h\\ge 0$ will remain. The goal is then to construct a radial\nfunction $h$ such that the second line in \\eqref{eq:viriel} is\nnon-negative, along with $\\Delta h \\ge \\eta\/|x|$ for some $\\eta>0$.\n\\smallbreak\n\n\\noindent {\\bf Case $d=3$.} In this case, the expression for $\\Delta^2\nh$ is simpler, and the above conditions read\n\\begin{align*}\n &\\frac{1}{4} h^{(4)}\n +\\frac{1}{r} h^{(3)} + \\nabla V(x)\\cdot \\nabla h\\le 0,\\\\\n& h''+\\frac{2}{r}h'\\ge \\frac{\\eta}{r},\\quad h',h''\\ge 0. \n\\end{align*}\nSince we do not suppose a priori that $V$ is a radial potential, the\nfirst condition is not rigorous. We actually use the fact that for\n$h'\\ge 0$, Assumption~\\ref{hyp:V} implies\n\\begin{equation*}\n \\nabla V(x)\\cdot \\nabla h \\le \\(\\d_r V(x)\\)_+ h'(r)\\le\n \\frac{M}{(1+r)^{\\mu+1}}h'(r). \n\\end{equation*}\nTo achieve our goal, it is therefore sufficient to require:\n\\begin{align}\n \\label{eq:h1}&\\frac{1}{4} h^{(4)}\n +\\frac{1}{r} h^{(3)} + \\frac{M}{(1+r)^{\\mu+1}}h' \\le 0,\\\\\n\\label{eq:h2}& h''+\\frac{2}{r}h'\\ge \\frac{\\eta}{r},\\quad h'\\in L^\\infty(\\R_+), \\\nh',h''\\ge 0. \n\\end{align}\nIn view of \\eqref{eq:h2}, we seek\n\\begin{equation*}\n h'(r) = \\eta +\\int_0^r h''(\\rho)d\\rho.\n\\end{equation*}\nTherefore, if $h''\\ge 0$ with $h''\\in L^1(\\R_+)$, \\eqref{eq:h2} will\nbe automatically fulfilled. We now turn to \\eqref{eq:h1}. Since we\nwant $h'\\in L^\\infty$, we may even replace $h'$ by a constant in\n\\eqref{eq:h1}, and solve, for $C>0$, the ODE\n\\begin{equation*}\n \\frac{1}{4} h^{(4)}\n +\\frac{1}{r} h^{(3)} + \\frac{C}{(1+r)^{\\mu+1}}=0.\n\\end{equation*}\nWe readily have\n\\begin{equation*}\n h^{(3)}(r) = -\\frac{4C}{r^4}\\int_0^r\\frac{\\rho^4}{(1+\\rho)^{\\mu+1}}d\\rho,\n\\end{equation*}\nalong with the properties $h^{(3)}(0)=0$, \n\\begin{equation*}\n h^{(3)}(r)\\Eq r \\infty -\\frac{k}{r^{\\min (\\mu,4)}},\\quad \\text{for some }k>0.\n\\end{equation*}\nIt is now natural to set\n\\begin{equation*}\n h''(r) = -\\int_r^\\infty h^{(3)}(\\rho)d\\rho,\n\\end{equation*}\nso we have $h''\\in C([0,\\infty);\\R_+)$ and\n\\begin{equation*}\n h''(r) \\Eq r \\infty \\frac{\\kappa}{r^{\\min (\\mu-1,3)}},\\quad \\text{for some }\\kappa>0.\n\\end{equation*}\nThis function is indeed in $L^1$ if and only if $\\mu>2$. 
We \ndefine $h$ by $h(r)= \\int_0^r h'(\\rho)d\\rho$,\n\\begin{equation}\\label{eq:h3}\n h^{(3)}(r) = -\\frac{K}{r^4}\\int_0^r\\frac{\\rho^4}{(1+\\rho)^{\\mu+1}}d\\rho,\n\\end{equation}\nfor some $K>0$, $h''$ and $h'$ being given by the above relations:\n\\eqref{eq:h2} is satisfied for any value of $K>0$, and \\eqref{eq:h1}\nboils down to an inequality of the form\n\\begin{equation}\\label{eq:M}\n -\\frac{K}{4} +M\\(\\eta +C(\\mu)K\\)\\le 0,\n\\end{equation}\nwhere $C(\\mu)$ is proportional to \n\\begin{equation*}\n \\frac{1}{K} \\|h'\\|_{L^\\infty} = \\int_0^\\infty \\int_r^\\infty\n \\frac{1}{\\rho^4}\\int_0^\\rho \\frac{s^4}{(1+s)^{\\mu+1}}dsd\\rho dr.\n\\end{equation*}\nWe infer that \\eqref{eq:h3} is satisfied for $K\\gg \\eta$, provided\nthat $M<\\frac{1}{4C(\\mu)}$. Note then that by construction, we may also\nrequire\n\\begin{equation*}\n \\frac{1}{4}\\Delta^2 h +\\nabla V\\cdot \\nabla h\\le \\frac{-c_0}{(1+|x|)^{\\mu+1}},\n\\end{equation*}\nfor $c_0>0$ morally very small. \n\\smallbreak\n\n\\noindent {\\bf Case $d\\ge 4$.} Resume the above reductions, pretending\nthat the last two terms in $\\Delta^2 h$ are not present: \\eqref{eq:h3}\njust becomes\n\\begin{equation*}\n h^{(3)}(r) = -\\frac{K}{r^{2d-2}}\\int_0^r\\frac{\\rho^{2d-2}}{(1+\\rho)^{\\mu+1}}d\\rho,\n\\end{equation*}\nand we see that with $h''$ and $h'$ defined like before, we have\n\\begin{equation*}\n rh''-h'= -\\eta- \\int_0^r h'' +rh''.\n\\end{equation*}\nSince this term is negative at $r=0$ and has a non-positive\nderivative, we have $rh''-h'\\le 0$, so finally $\\Delta^2 h\\le 0$. \n\\end{proof}\n\nWe infer that $H$ has no eigenvalue. Indeed, if there were an $L^2$ solution\n$\\psi=\\psi(x)$ \nto $H\\psi =E\\psi$, $E\\in \\R$, then $\\psi\\in\nH^2(\\R^d)$, and $\\psi(x)e^{-iEt}$ would be an $H^1$\nsolution to \\eqref{eq:psi-gen} for $\\l=0$. This is contradiction with\nthe global integrability in time from \\eqref{eq:morawetz}, so\n$\\si_{\\rm pp}(H)=\\emptyset$. \n\n\\subsection{Strichartz estimates}\n\\label{sec:strichartz}\n\nIn\n\\cite[Proposition~3.1]{BaRuVe06}, it is proved that zero is\nnot a resonance of $H$, but with a definition of resonance which is\nnot quite the definition in \\cite{RodnianskiSchlag}, which contains a\nresult that we want to use. So we shall resume the argument.\n\nBy definition (as in \\cite{RodnianskiSchlag}), zero is a resonance of\n$H$, if there is a distributional solution \n$\\psi\\not\\in L^2$, such that $\\^{-s}\\psi\\in L^2(\\R^d)$ for all\n$s>\\frac{1}{2}$, to\n$H\\psi=0$. \n\\begin{corollary}\n Under the assumptions of Proposition~\\ref{prop:Morawetz}, zero is not a\n resonance of $H$.\n\\end{corollary}\n\\begin{proof}\n Suppose that zero is a resonance of $H$. Then by definition, we\n obtain a stationary distributional solution of \\eqref{eq:psi-gen} (case\n $\\l=0$), $\\psi= \\psi(x)$, and we may assume that it is\n real-valued. Since $\\Delta \\psi = \n 2V\\psi$, Assumption~\\ref{hyp:V} implies\n \\begin{equation*}\n \\^{\\mu-s}\\Delta \\psi\\in L^2(\\R^d),\\quad \\forall s>\\frac{1}{2}. 
\n \\end{equation*}\nThis implies that $\\nabla \\psi\\in L^2$, by taking for instance $s=1$ in\n\\begin{equation*}\n \\int|\\nabla \\psi|^2 = -\\int \\^{-s}\\psi \\^s\\Delta \\psi.\n\\end{equation*}\nBy definition, for all test function $\\varphi$,\n\\begin{equation}\\label{eq:variat}\n \\frac{1}{2}\\int_{\\R^d}\\nabla \\varphi(x) \\cdot \\nabla \\psi(x)dx\n +\\int_{\\R^d}V(x)\\varphi(x)\\psi(x)dx =0.\n\\end{equation}\nLet $h$ be the weight constructed in the proof of\nProposition~\\ref{prop:Morawetz}, and consider\n\\begin{equation*}\n \\varphi = \\psi\\Delta h +2\\nabla \\psi\\cdot \\nabla h. \n\\end{equation*}\nSince $\\nabla h\\in L^\\infty$, $\\nabla^2 h(x)=\\O(\\^{-1})$, and\n$\\nabla^3 h(x)= \\O(\\^{-2})$, we see\nthat $\\varphi \\in H^1$, and that this choice is allowed in\n\\eqref{eq:variat}. Integration by parts then yields \\eqref{eq:viriel}\n(where the left hand side is now zero):\n\\begin{equation*}\n 0=\\int \\nabla \\psi\\cdot \\nabla^2h \\nabla \\psi -\\frac{1}{4}\\int\n \\psi^2\\Delta^2 h -\\int \\psi^2 \\nabla V\\cdot \\nabla h.\n\\end{equation*}\nBy construction of $h$, this implies\n\\begin{equation*}\n \\int_{\\R^d}\\frac{\\psi(x)^2}{(1+|x|)^{\\mu+1}}dx\\le 0,\n\\end{equation*}\nhence $\\psi\\equiv 0$. \n\\end{proof}\nTherefore,\n\\cite[Theorem~1.4]{RodnianskiSchlag} implies non-endpoint global in\ntime Strichartz estimates. In the case $d=3$, we know from\n\\cite{Go06} that (in view of the above spectral properties)\n\\begin{equation*}\n \\|e^{-itH}\\|_{L^1\\to L^\\infty}\\le C |t|^{-d\/2},\\quad \\forall t\\not\n =0,\n\\end{equation*}\na property which is stronger than Strichartz estimates, and yields the\nendpoint Strichartz estimate missing in \\cite{RodnianskiSchlag}, from\n\\cite{KT}. On the other \nhand, this dispersive estimate does not seem to be known under\nAssumption~\\ref{hyp:V} with $\\mu>2$ when $d\\ge 4$: stronger assumptions\nare always present so far (see e.g. \\cite{CaCuVo09,ErGr10}). However,\nendpoint Strichartz estimates for $d\\ge 4$ are a consequence of\n\\cite[Theorem~1.1]{AFVV10}, under the assumptions of\nProposition~\\ref{prop:Morawetz}. \n\n\\begin{proposition}\\label{prop:StrichartzRS}\nLet $d\\ge 3$. Under the assumptions of\nProposition~\\ref{prop:Morawetz}, for all $(q,r)$ such that \n \\begin{equation}\\label{eq:adm}\n \\frac{2}{q}=d\\(\\frac{1}{2}-\\frac{1}{r}\\),\\quad 22$ seems essentially sharp in order to have\nglobal in time Strichartz estimates. The result remains true for $\\mu\n=2$ (\\cite{BPST03,BPST04}), but in \\cite{GoVeVi06}, the authors\nprove that for repulsive potentials which are homogeneous of degree\nsmaller than $2$, global Strichartz estimates fail to exist.\n\n\n\n\n\n\\section{Quantum scattering}\n\\label{sec:quant}\n\nIn this section, we prove Theorem~\\ref{theo:scatt-quant}. Since the\ndependence upon $\\eps$ is not measured in\nTheorem~\\ref{theo:scatt-quant}, we shall \nconsider the case $\\eps=1$, corresponding to \n\\begin{equation}\n \\label{eq:psi}\n i\\d_t \\psi +\\frac{1}{2}\\Delta \\psi = V\\psi + |\\psi|^{2\\si}\\psi.\n\\end{equation}\nWe split the proof of Theorem~\\ref{theo:scatt-quant} into two\nsteps. First, we solve the Cauchy problem with data prescribed at\n$t=-\\infty$, that is, we show the existence of wave operators. Then,\ngiven an initial datum at $t=0$, we show that the (global) solution to\n\\eqref{eq:psi} behaves asymptotically like a free solution, which\ncorresponds to asymptotic completeness. 
\n\\smallbreak\n\nFor each of these two steps, we first show that the nonlinearity is\nnegligible for large time, and then recall that the potential is\nnegligible for large time (linear scattering). This means that for any $\\tilde \\psi_-\\in\nH^1(\\R^d)$, there exists a unique $\\psi\\in \n C(\\R;H^1(\\R^d))$ solution to \\eqref{eq:psi} such that\n \\begin{equation*}\n \\|\\psi(t)-e^{-itH}\\tilde \\psi_-\\|_{H^1(\\R^d)}\\Tend t {-\n \\infty} 0,\n \\end{equation*}\nand for any $\\varphi\\in H^1(\\R^d)$, there exist a unique $\\psi\\in\n C(\\R;H^1(\\R^d))$ solution to \\eqref{eq:psi} and a unique $\\tilde\\psi_+\\in\n H^1(\\R^d)$ such that\n\\begin{equation*}\n \\|\\psi(t)-e^{-itH}\\tilde \\psi_+\\|_{H^1(\\R^d)}\\Tend t {+\n \\infty} 0.\n \\end{equation*}\nThen, we recall that the potential $V$ is negligible for large\ntime. We will adopt the following notations for the propagators,\n\\begin{equation*}\n U(t)=e^{i\\frac{t}{2}\\Delta},\\quad U_V(t)= e^{-itH}. \n\\end{equation*}\n\n\n\nIn order to construct wave operators which show that the nonlinearity\ncan be neglected for large time, we shall work with an $H^1$\nregularity, on the Duhamel's formula associated to \\eqref{eq:psi} in\nterms of $U_V$, with a prescribed asymptotic behavior as $t\\to\n-\\infty$:\n\\begin{equation}\n \\label{eq:duhamel-}\n \\psi(t) = U_V(t)\\tilde \\psi_- -i\\int_{-\\infty}^t\n U_V(t-s)\\(|\\psi|^{2\\si}\\psi(s)\\)ds. \n\\end{equation}\nApplying the gradient to this formulation brings up the problem of\nnon-commutativity with $U_V$. The worst term is actually the linear\none, $U_V(t)\\tilde \\psi_-$, since\n\\begin{equation*}\n \\nabla \\(U_V(t)\\tilde \\psi_-\\) = U_V(t)\\nabla \\tilde \\psi_-\n -i\\int_0^t U_V(t-s)\\((U_V(s)\\tilde \\psi_-)\\nabla V\\)ds.\n\\end{equation*}\nSince the construction of wave operators relies on the use of\nStrichartz estimates, it would be necessary to have an estimate of\n\\begin{equation*}\n \\left\\|\\nabla \\(U_V(t)\\tilde \\psi_-\\)\\right\\|_{L^qL^r}\n\\end{equation*}\nin terms of $\\psi_-$, for admissible pairs\n$(q,r)$. Proposition~\\ref{prop:StrichartzRS} yields\n\\begin{equation*}\n \\left\\|\\nabla \\(U_V(t)\\tilde \\psi_-\\)\\right\\|_{L^qL^r} \\lesssim \\|\\nabla \\tilde\n \\psi_-\\|_{L^2} + \\|(U_V(t)\\tilde \\psi_-)\\nabla V\\|_{L^{\\tilde\n q'}L^{\\tilde r'}},\n\\end{equation*}\nfor any admissible pair $(\\tilde q,\\tilde r)$. In the last factor,\ntime is present only in the term $U_V(t)\\tilde \\psi_-$, so to be able\nto use Strichartz estimates again, we need to consider $\\tilde\nq=2$, in which case $\\tilde r=2^*:=\\frac{2d}{d-2}$:\n\\begin{equation*}\n \\|(U_V(t)\\tilde \\psi_-)\\nabla V\\|_{L^2L^{{2^*}'}}\\le \\|U_V(t)\\tilde\n \\psi_-\\|_{L^2L^{2^*}}\\|\\nabla V\\|_{L^{d\/2}},\n\\end{equation*}\nwhere Assumption~\\ref{hyp:V} implies $\\nabla V\\in L^{d\/2}(\\R^d)$ as\nsoon as $\\mu>1$. Using the endpoint Strichartz estimate from\nProposition~\\ref{prop:StrichartzRS}, we have\n\\begin{equation*}\n \\|U_V(t)\\tilde\n \\psi_-\\|_{L^2L^{2^*}} \\lesssim \\|\\tilde \\psi_-\\|_{L^2}, \n\\end{equation*}\nand we have:\n\\begin{lemma}\\label{lem:stri2}\n Let $d\\ge 3$. Under the assumptions of\n Proposition~\\ref{prop:Morawetz}, for all admissible pair $(q,r)$, \n \\begin{equation*}\n \\|e^{-itH}f\\|_{L^q(\\R;W^{1,r}(\\R^d))}\\lesssim \\|f\\|_{H^1(\\R^d)}. 
\n \\end{equation*}\n\\end{lemma}\nWe shall rather use a vector-field, for we believe this approach may be\ninteresting in other contexts.\n\n\n\\subsection{Vector-field}\n\\label{sec:vector-field}\n\n We\nintroduce a vector-field which naturally commutes with $U_V$, and\nis comparable with the gradient. \n\\smallbreak\n\nFrom Assumption~\\ref{hyp:V}, $V$ is bounded, so there exists $c_0\\ge\n0$ such that $V+c_0\\ge 0$. We shall consider the operator\n\\begin{equation*}\n A = \\sqrt{H+c_0}=\\sqrt{-\\frac{1}{2}\\Delta +V+c_0}.\n\\end{equation*}\n\\begin{lemma}\\label{lem:A}\n Let $d\\ge 3$, and $V$ satisfying Assumption~\\ref{hyp:V} with\n $V+c_0\\ge 0$. For every $1^{-|\\beta|},\n\\end{equation*}\nfor all $\\alpha,\\beta\\in \\N^d$. This implies that the\npseudo-differential operators of symbol \n$a$ and $b$, respectively, are bounded on $L^r(\\R^d)$, for\nall $12$. For\n all $\\tilde \\psi_-\\in H^1(\\R^d)$, there exists a unique \n$$\\psi\\in\n C((-\\infty,0];H^1(\\R^d))\\cap\n L^{\\frac{4\\si+4}{d\\si}}((-\\infty,0);L^{2\\si+2}(\\R^d))$$\n solution to \\eqref{eq:psi} such that \n \\begin{equation*} \n \\|\\psi(t)-e^{-it H}\\tilde\\psi_-\\|_{H^1(\\R^d)}\\Tend t {-\n \\infty} 0.\n \\end{equation*} \n\\end{proposition}\n\\begin{proof}\n The main part of the proof is to prove that \\eqref{eq:duhamel-} has\n a fixed point. Let\n \\begin{equation*}\n q=\\frac{4\\si+4}{d\\si}.\n \\end{equation*}\nThe pair $(q,2\\si+2)$ is admissible, in the sense that it satisfies\n\\eqref{eq:adm}. \nWith the notation $L^\\beta_TY=L^\\beta(]-\\infty,-T];Y)$, we\n introduce: \n \\begin{align*}\n X_T:=\\Big\\{ \\psi\\in C(]-\\infty,-T];H^1)\\ ;\\ &\\left\\|\n \\psi\\right\\|_{L^q_TL^{2\\si+2}} \\le K\\|\\tilde\n \\psi_-\\|_{L^2},\\\\\n\\left\\|\n \\nabla \\psi\\right\\|_{L^q_TL^{,2\\si+2}} \\le K\\|\\tilde\n \\psi_-\\|_{H^1},\\quad &\n \\left\\| \\psi\\right\\|_{L^\\infty_TL^2} \\le 2 \\|\\tilde \\psi_-\\|_{L^2}\\,\n ,\\\\\n \\left\\| \\nabla \\psi\\right\\|_{L^\\infty_TL^2} \\le K \\|\\tilde \\psi_-\\|_{H^1}\n,\\quad \n&\\left\\| \\psi\\right\\|_{L^q_T L^{2\\si+2}} \\le 2 \\left\\|\n U_V(\\cdot)\\tilde \\psi_-\\right\\|_{L^q_T L^{2\\si+2}}\\Big\\},\n \\end{align*}\nwhere $K$ will be chosen sufficiently large in terms of the\nconstants present in Strichartz estimates presented in\nProposition~\\ref{prop:StrichartzRS}. Set\n$r=s=2\\si +2$: we have\n\\begin{equation*}\n \\frac{1}{r'}= \\frac{1}{r}+\\frac{2\\si}{s},\\quad\n\\frac{1}{q'}= \\frac{1}{q}+\\frac{2\\si}{k},\n\\end{equation*}\nwhere $q\\le k<\\infty$ since $2\/d\\le\n\\si<2\/(d-2)$. Denote by $\\Phi(\\psi)$ the right hand side of\n\\eqref{eq:duhamel-}. For $\\psi\\in X_T$, Strichartz estimates and H\\\"older\ninequality yield, for all admissible pairs $(q_1,r_1)$:\n\\begin{align*}\n \\left\\| \\Phi(\\psi)\\right\\|_{L^{q_1}_T L^{r_1}} &\\le C_{q_1}\\|\\tilde \n\\psi_-\\|_{L^2} + C\\left\\| |\\psi|^{2\\si}\\psi\\right\\|_{L^{q'}_TL^{r'}} \n \\\\\n&\\le C_{q_1}\\|\\tilde \\psi_-\\|_{L^2} +\nC\\|\\psi\\|_{L^k_TL^s}^{2\\si}\\|\\psi\\|_{L^q_T L^r}\\\\\n&\\le C_{q_1}\\|\\tilde \\psi_-\\|_{L^2} + C\\|\\psi\\|_{L^q_TL^r}^{2\\si\\theta\n }\\|\\psi\\|_{L^\\infty_TL^r}^{2\\si(1-\\theta) } \\|\\psi\\|_{L^q_T L^r} ,\n\\end{align*}\nfor some $0<\\theta\\le 1$, where we have used the property\n$r=s=2\\si+2$. 
Sobolev embedding and the definition of $X_T$ then imply:\n\\begin{align*}\n \\left\\| \\Phi(\\psi)\\right\\|_{L^{q_1}_T L^{r_1}} \\le C_{q_1}\\|\\tilde\n \\psi_-\\|_{L^2} + C\\left\\|U_V(\\cdot) \\tilde \n \\psi_-\\right\\|_{L^q_TL^r}^{2\\si\\theta \n }\\|\\psi\\|_{L^\\infty_TH^1}^{2\\si(1-\\theta) } \\|\\psi\\|_{L^q_T L^r} .\n\\end{align*}\nWe now apply the operator $A$. Since $A$ commutes with $H$, we have\n\\begin{equation*}\n \\left\\| A\\Phi(\\psi)\\right\\|_{L^{q_1}_T L^{r_1}} \\lesssim\n \\|A\\tilde \\psi_-\\|_{L^2} + \\left\\|\n A\\(|\\psi|^{2\\si}\\psi\\)\\right\\|_{L^{q'}_TL^{r'}}. \n\\end{equation*}\nIn view of Lemma~\\ref{lem:A}, we have successively,\n\\begin{align*}\n \\|A\\tilde \\psi_-\\|_{L^2} &\\lesssim \\|\\tilde \\psi_-\\|_{H^1},\\\\\n \\left\\|\n A\\(|\\psi|^{2\\si}\\psi\\)\\right\\|_{L^{q'}_TL^{r'}}&\\lesssim \\left\\|\n |\\psi|^{2\\si}\\psi\\right\\|_{L^{q'}_TL^{r'}} + \\left\\|\n \\nabla \\(|\\psi|^{2\\si}\\psi\\)\\right\\|_{L^{q'}_TL^{r'}} \\\\\n&\\lesssim \\|\\psi\\|_{L^k_TL^s}^{2\\si}\\(\\|\\psi\\|_{L^q_T L^r} +\n \\|\\nabla\\psi\\|_{L^q_T L^r} \\)\\\\\n&\\lesssim \\|\\psi\\|_{L^k_TL^s}^{2\\si}\\(\\|\\psi\\|_{L^q_T L^r} +\n \\|A\\psi\\|_{L^q_T L^r} \\).\n\\end{align*}\nWe infer along the same lines as above,\n\\begin{align*}\n \\left\\| \\nabla\\Phi(\\psi)\\right\\|_{L^{q_1}_T L^{r_1}} &\\lesssim\n \\|\\tilde \\psi_-\\|_{H^1} +\\left\\|U_V(\\cdot) \\tilde\n \\psi_-\\right\\|_{L^q_TL^r}^{2\\si\\theta \n }\\|\\psi\\|_{L^\\infty_TH^1}^{2\\si(1-\\theta) } \\( \n\\| \\psi\\|_{L^q_T L^r} + \\|A\\psi\\|_{L^q_T L^r}\\) .\n \\end{align*}\nWe have also\n\\begin{align*}\n\\left\\| \\Phi(\\psi)\\right\\|_{L^{q}_T L^{r}}& \\le\n \\left\\|U_V(\\cdot)\\tilde \\psi_-\\right\\|_{L^{q}_TL^{r}} \n+ C\\left\\|U_V(\\cdot) \\tilde\n \\psi_-\\right\\|_{L^q_TL^r}^{2\\si\\theta \n }\\|\\psi\\|_{L^\\infty_TH^1}^{2\\si(1-\\theta) } \\|\\psi\\|_{L^q_T L^r}.\n\\end{align*}\nFrom Strichartz estimates, $U_V(\\cdot)\\tilde \\psi_- \\in L^{q}(\\R;L^{r})$, so \n\\begin{equation*}\n \\left\\|U_V(\\cdot)\\tilde \\psi_-\\right\\|_{L^q_TL^r} \\to 0\\quad \\text{as }T\\to +\\infty.\n\\end{equation*}\nSince $\\theta>0$, we infer that $\\Phi$ sends $X_T$ to itself, for\n$T$ sufficiently large. \n\\smallbreak\n\nWe have also, for $\\psi_2,\\psi_1\\in X_T$:\n\\begin{align*}\n \\left\\| \\Phi(\\psi_2)-\\Phi(\\psi_1)\\right\\|_{L^q_T L^r}&\\lesssim\n \\max_{j=1,2}\\| \\psi_j\\|_{L^k_TL^s}^{2\\si} \\left\\|\n \\psi_2-\\psi_1\\right\\|_{L^q_T L^r}\\\\\n&\\lesssim \\left\\|U_V(\\cdot)\\tilde \\psi_-\\right\\|_{L^q_TL^r}^{2\\si\\theta\n }\\|\\tilde \\psi_-\\|_{H^1}^{2\\si(1-\\theta) }\\left\\|\n \\psi_2-\\psi_1\\right\\|_{L^q_T L^r}.\n\\end{align*}\nUp to choosing $T$ larger, $\\Phi$ is a contraction on $X_T$, equipped\nwith the distance\n\\begin{equation*}\n d(\\psi_2,\\psi_1) = \\left\\|\n \\psi_2-\\psi_1\\right\\|_{L^q_T L^r} + \\left\\|\n \\psi_2-\\psi_1\\right\\|_{L^\\infty_T L^2},\n\\end{equation*}\nwhich makes it a Banach space (see \\cite{CazCourant}). \nTherefore, $\\Phi$\nhas a unique fixed point in $X_T$, solution to\n\\eqref{eq:duhamel-}. It follows from \\eqref{eq:A} that this solution\nhas indeed an $H^1$ regularity with \n\\begin{equation*}\n \\|\\psi(t)-e^{-it H}\\tilde\\psi_-\\|_{H^1(\\R^d)}\\Tend t {-\n \\infty} 0.\n\\end{equation*}\n In view\nof the global well-posedness results for the Cauchy problem associated\nto \\eqref{eq:psi} (see e.g. 
\\cite{CazCourant}),\nthe proposition follows.\n\\end{proof}\n\\subsection{Asymptotic completeness}\n\\label{sec:AC-quant}\n\nThere are mainly three approaches to prove asymptotic completeness for\nnonlinear Schr\\\"odinger equations (without potential). The initial\napproach (\\cite{GV79Scatt}) consists in working with a\n$\\Sigma$ regularity. This \nmakes it possible to use the operator $x+it\\nabla$, which enjoys\nseveral nice properties, and to which an important evolution law (the\npseudo-conformal conservation law) is associated; see\nSection~\\ref{sec:class} for more details. This law provides\nimportant a priori estimates, from which asymptotic completeness\nfollows very easily the the case $\\si\\ge 2\/d$, and less easily for\nsome range of $\\si$ below $2\/d$; see e.g. \\cite{CazCourant}. \n\\smallbreak\n\nThe second historical approach relaxes the localization assumption,\nand allows \nto work in $H^1(\\R^d)$, provided that $\\si>2\/d$. It is based on\nMorawetz inequalities: asymptotic completeness is then established in\n\\cite{LiSt78,GV85} for the case $d\\ge 3$, and in \\cite{NakanishiJFA} for the low\ndimension cases $d=1,2$, by introducing more intricate Morawetz\nestimates. Note that the case $d\\le 2$ is already left out in our case, since we\nhave assumed $d\\ge 3$ to prove Proposition~\\ref{prop:waveop-quant}. \n\\smallbreak\n\nThe most recent approach to prove asymptotic completeness in $H^1$\nrelies on the introduction of interaction Morawetz estimates in \\cite{CKSTTCPAM},\nan approach which has been revisited since, in particular in\n\\cite{PlVe09} and \\cite{GiVe10}. See also \\cite{Vi09} for a very nice\nalternative approach of the use of interaction Morawetz estimates. In\nthe presence of an external potential, this approach was used in\n\\cite{CaDa-p}, by working with Morrey-Campanato type norms. \n\\smallbreak\n\nAn analogue for the pseudo-conformal evolution law is available (see\ne.g. \\cite {CazCourant}), but it seems that in the presence of $V$\nsatisfying Assumption~\\ref{hyp:V}, it cannot be exploited to get\nsatisfactory estimates. We shall rather consider\nMorawetz estimates as in \\cite{GV85}, and thus give an alternative\nproof of the corresponding result from \\cite{CaDa-p}: note that for $\\l=1$,\nthe first part of \\eqref{eq:morawetz} provides exactly the same a\npriori estimate as in \\cite{GV85}. \n\\begin{proposition}\\label{prop:AC-quant}\n Let $d\\ge 3$, $\\frac{2}{d}<\\si<\\frac{2}{d-2}$, and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>2$. There exists $M=M(\\mu,d)$ such\n that if the attractive part of the potential satisfies\n \\begin{equation*}\n (\\d_r V(x))_+\\le \\frac{M}{(1+|x|)^{\\mu+1}},\\quad \\forall x\\in\n \\R^d,\n \\end{equation*}\nthen for\n all $\\varphi\\in H^1(\\R^d)$, there exist a unique $\\psi\\in\n C(\\R;H^1(\\R^d))$ solution to \\eqref{eq:psi} with $\\psi_{\\mid\n t=0}=\\varphi$, and a unique $\\tilde\\psi_+\\in\n H^1(\\R^d)$ such that\n \\begin{equation*}\n \\|\\psi(t)-e^{-itH}\\tilde \\psi_+\\|_{H^1(\\R^d)}\\Tend t {+\n \\infty} 0.\n \\end{equation*}\nIn addition, $\\psi,\\nabla \\psi\\in L^q(\\R_+,L^r(\\R^d))$ for all\nadmissible pairs $(q,r)$. \n\\end{proposition}\n\\begin{proof}\n The proof follows that argument presented in \\cite{GV85} (and\n resumed in \\cite{GinibreDEA}), so we shall only described the main\n steps and the modifications needed in the present context. 
The key\n property in the proof consists in showing that there exists\n $22^*=\\frac{2d}{d-2}$ and $\\alpha>0$ \nsuch that \n\\begin{equation}\n \\left\\|\\int_{t_0}^{t-\\ell}\n U(t-s)\\(V\\psi(s)\\)ds\\right\\|_{L^{r_1}(\\R^d)}\\le C\n\\ell^{-\\alpha}\\|\\psi\\|_{L^\\infty(\\R;H^1)}, \n\\end{equation}\nConsider a Lebesgue index $r_1$\nslightly larger than \n$2^*$, \n\\begin{equation*}\n \\frac{1}{r_1} = \\frac{1}{2^*} -\\eta,\\quad 0<\\eta\\ll 1. \n\\end{equation*}\nLet $\\ell>0$, and consider\n\\begin{equation*}\n I_1(t) = \\left\\|\\int_{t_0}^{t-\\ell}\n U(t-s)\\(V\\psi(s)\\)ds\\right\\|_{L^{r_1}(\\R^d)}.\n\\end{equation*}\nStandard dispersive estimates for $U$ yield\n\\begin{equation*}\n I_1(t) \\lesssim \\int_{t_0}^{t-\\ell} (t-s)^{-\\delta_1} \\|V\\psi(s)\\|_{L^{r'_1}}ds,\n\\end{equation*}\nwhere $\\delta_1$ is given by\n\\begin{equation*}\n \\delta_1 = d\\(\\frac{1}{2}-\\frac{1}{r_1}\\) = 1+\\eta d.\n\\end{equation*}\nNow we apply H\\\"older inequality in space, in view of the identity\n\\begin{equation*}\n \\frac{1}{r'_1} = \\frac{1}{2}+\\frac{1}{d}-\\eta =\n \\underbrace{\\frac{1}{2}-\\frac{1}{d} +\\eta}_{1\/k} +\n \\underbrace{\\frac{2}{d} -2\\eta}_{1\/q}. \n\\end{equation*}\nFor $\\eta>0$ sufficiently small, $V\\in L^q(\\R^d)$ since $\\mu>2$, and so\n\\begin{equation*}\n \\|V\\psi(s)\\|_{L^{r'_1}} \\le \\|V\\|_{L^{q}}\\|\\psi(s)\\|_{L^k}\\lesssim \\|\\psi\\|_{L^\\infty(\\R;H^1)},\n\\end{equation*}\nwhere we have used Sobolev embedding, since $20$, let \n\\begin{equation*}\n I_2(t) = \\left\\|\\int_{t-\\ell}^t\n U(t-s)\\(V\\psi(s)\\)ds\\right\\|_{L^{2\\si+2}(\\R^d)}.\n\\end{equation*}\nWe show that for any $\\ell>0$, $I_2(t)\\to 0$ as $t\\to \\infty$. \nDispersive estimates for $U(t)$ yield\n\\begin{equation*}\n I_2(t) \\lesssim \\int_{t-\\ell}^t\n (t-s)^{-\\delta}\\|V\\psi(s)\\|_{L^{\\frac{2\\si+2}{2\\si+1}}}ds,\\quad\n \\delta = d\\(\\frac{1}{2}-\\frac{1}{2\\si+2}\\) = \\frac{d\\si}{2\\si+2}<1. \n\\end{equation*}\nFor (a small) $\\alpha$ to be fixed later, H\\\"older inequality yields\n\\begin{equation*}\n \\|V\\psi(s)\\|_{L^{\\frac{2\\si+2}{2\\si+1}}} =\\left\\| |x|^{\\alpha}V\n \\frac{\\psi(s)}{|x|^\\alpha} \\right\\|_{L^{\\frac{2\\si+2}{2\\si+1}}}\n \\le \\left\\| |x|^{\\alpha}V \\right\\|_{L^{\\frac{\\si+1}{\\si}}}\n\\left\\| \n \\frac{\\psi(s)}{|x|^\\alpha} \\right\\|_{L^{2\\si+2}}.\n\\end{equation*}\nNote that for $0<\\alpha\\ll 1$, $\\left\\| |x|^{\\alpha}V\n\\right\\|_{L^{\\frac{\\si+1}{\\si}}}$ is finite, since\n$\\frac{\\si+1}{\\si}>\\frac{d}{2}$ and $\\mu>2$. For $0<\\theta<1$, write \n\\begin{align*}\n \\left\\| \n \\frac{\\psi(s)}{|x|^\\alpha} \\right\\|_{L^{2\\si+2}}= \\left\\| \n \\frac{|\\psi(s)|^{\\theta}}{|x|^\\alpha}\n |\\psi(s)|^{1-\\theta}\\right\\|_{L^{2\\si+2}}&\\le \\left\\| \n \\frac{\\psi(s)}{|x|^{\\alpha\/\\theta}}\\right\\|_{L^{2\\si+2}}^\\theta\n \\left\\|\\psi(s)\\right\\|_{L^{2\\si+2}}^{1-\\theta}\\\\\n & \\lesssim \\left\\| \n \\frac{\\psi(s)}{|x|^{\\alpha\/\\theta}}\\right\\|_{L^{2\\si+2}}^\\theta\n \\left\\|\\psi\\right\\|_{L^\\infty(\\R;H^1)}^{1-\\theta}.\n\\end{align*}\nTo use Morawetz estimate, we impose $\\alpha\/\\theta= 1\/(2\\si+2)$, so\nthat we have\n\\begin{equation*}\n \\left\\| \n \\frac{\\psi(s)}{|x|^\\alpha} \\right\\|_{L^{2\\si+2}} \\lesssim\n\\( \\int_{\\R^d}\\frac{|\\psi(s,x)|^{2\\si+2}}{|x|}dx\\)^{\\theta\/(2\\si+2)}\n\\left\\|\\psi\\right\\|_{L^\\infty(\\R;H^1)}^{1-\\theta}. 
\n\\end{equation*}\nWe conclude by applying H\\\"older inequality in time: since $\\delta<1$,\nthe map $s\\mapsto (t-s)^{-\\delta}$ belongs to $L^q_{\\rm loc}$ for $1\\le\n q\\le 1+\\gamma$ and $\\gamma>0$ sufficiently small. Let $q=1+\\gamma$\n with $0<\\gamma\\ll 1$ so that $s\\mapsto (t-s)^{-\\delta}\\in L^q_{\\rm\n loc}$: we have $q'<\\infty$, and we can choose $0<\\theta\\ll 1$ (or\n equivalently $0<\\eta\\ll 1$) so\n that \n \\begin{equation*}\n \\theta q'=2\\si+2. \n \\end{equation*}\nWe end up with \n\\begin{equation*}\n I_2(t) \\lesssim \\ell^\\beta \\(\\iint_{[t-\\ell,t]\\times\\R^d}\n \\frac{|\\psi(s,x)|^{2\\si+2}}{|x|}dsdx\\)^{1\/(2\\si+2)q'},\n\\end{equation*}\nfor some $\\beta>0$. The last factor goes to zero as $t\\to \\infty$ from\nProposition~\\ref{prop:Morawetz}. \n\\end{proof}\n\n\n\\subsection{Scattering}\n\\label{sec:conclusion}\n\nUnder Assumption~\\ref{hyp:V}, a linear scattering theory is available,\nprovided that $\\mu>1$; see e.g. \\cite[Section~4.6]{DG}. This means that\nthe following strong limits exist in $L^2(\\R^d)$,\n\\begin{equation*}\n \\lim_{t\\to -\\infty} U_V(-t)U(t),\\quad\\text{and}\\quad \\lim_{t\\to +\\infty} U(-t)U_V(t),\n\\end{equation*}\nwhere the second limit usually requires to project on the continuous\nspectrum. Recall that this projection is the identity in our\nframework. \n\\begin{lemma}\\label{lem:Cook-quant}\n Let $d\\ge 3$, $V$ satisfying Assumption~\\ref{hyp:V} with $p>1$. Then \nthe strong limit\n \\begin{equation*}\n \\lim_{t\\to -\\infty} U_V(-t)U(t)\n\\end{equation*}\nexists in $H^1(\\R^d)$. \n\\end{lemma}\n\\begin{proof}\n Following Cook's method (\\cite[Theorem~XI.4]{ReedSimon3}), it\n suffices to prove that for all $\\varphi \\in \\Sch(\\R^d)$,\n \\begin{equation*}\n t\\mapsto \\left\\| U_V(-t) VU(t)\\varphi\\right\\|_{H^1}\\in\n L^1((-\\infty,-1]). \n \\end{equation*}\nFor the $L^2$ norm, we have\n\\begin{equation*}\n \\left\\| U_V(-t) VU(t)\\varphi\\right\\|_{L^2} = \\left\\|\n VU(t)\\varphi\\right\\|_{L^2} .\n\\end{equation*}\nAssumption~\\ref{hyp:V} implies that $V\\in L^q(\\R^d)$ for all\n$q>d\/\\mu$. For $\\mu>1$, let $q$ be given by \n\\begin{equation*}\n \\frac{1}{q} = \\frac{1}{d}+\\eta,\\text{ with } \\eta>0\\text{ and }q>\\frac{d}{\\mu}. \n\\end{equation*}\nWe apply H\\\"older inequality with the identity\n\\begin{equation*}\n \\frac{1}{2} = \\frac{1}{q} +\\underbrace{\\frac{1}{2}-\\frac{1}{d}-\\eta}_{1\/r}.\n\\end{equation*}\nUsing dispersive estimates for $U(t)$, we have\n\\begin{equation*}\n \\left\\|\n VU(t)\\varphi\\right\\|_{L^2} \\lesssim \\|U(t)\\varphi\\|_{L^r}\\lesssim\n |t|^{-d\\(\\frac{1}{2}-\\frac{1}{r}\\)}\\|\\varphi\\|_{L^{r'}}= |t|^{-1-d\\eta}\\|\\varphi\\|_{L^{r'}},\n\\end{equation*}\nhence the existence of the strong limit in $L^2$. \n\\smallbreak\n\nFor the $H^1$ limit, recall that from Lemma~\\ref{lem:A}, \n\\begin{equation*}\n \\left\\| \\nabla U_V(-t) VU(t)\\varphi\\right\\|_{L^2}\\lesssim \\left\\| A\n U_V(-t) VU(t)\\varphi\\right\\|_{L^2} \n\\end{equation*}\nSince $A$ commutes with $U_V$ which is unitary on $L^2$, the right\nhand side is equal to \n\\begin{equation*}\n\\left\\| A\n VU(t)\\varphi\\right\\|_{L^2}\\lesssim \\|VU(t)\\varphi\\|_{H^1},\n\\end{equation*}\nwhere we have used Lemma~\\ref{lem:A} again. Now\n\\begin{equation*}\n \\|VU(t)\\varphi\\|_{H^1} \\le \\|VU(t)\\varphi\\|_{L^2}+ \\|\\nabla V\\times\n U(t)\\varphi\\|_{L^2} + \\|VU(t)\\nabla \\varphi\\|_{L^2},\n\\end{equation*}\nand each term is integrable, like for the $L^2$ limit, from\nAssumption~\\ref{hyp:V}. 
\n\\end{proof}\n\n In the case $d=3$, the dispersive estimates established by Goldberg\n \\cite{Go06} make it possible to prove asymptotic completeness in\n $H^1$ by Cook's method as well: for all $\\varphi\\in\n \\Sch(\\R^d)$,\n \\begin{equation*}\n t\\mapsto \\|U(-t)VU_V(t)\\varphi\\|_{H^1}\\in L^1(\\R),\n \\end{equation*}\na property which can be proven by the same computations as above, up\nto changing the order of the arguments. To complete the proof of\nTheorem~\\ref{theo:scatt-quant}, it therefore remains to prove that for\n$d\\ge 4$, $\\psi_+\\in H^1(\\R^d)$ and\n \\begin{equation}\\label{eq:cvH1}\n \\|\\psi(t)-U(t)\\psi_+\\|_{H^1(\\R^d)}\\Tend t \\infty 0. \n \\end{equation}\nIt follows from the above results that\n\\begin{equation*}\n \\psi(t) = U(t) \\psi_+ +i\\int_t^{+\\infty}\n U(t-s)\\(|\\psi|^{2\\si}\\psi(s)\\)ds\n +i\\int_t^{+\\infty}U(t-s)\\(V(\\psi(s)\\)ds,\n\\end{equation*}\nand that $\\psi,\\nabla \\psi \\in L^q(\\R;L^r(\\R^d))$ for all admissible\npairs $(q,r)$. Since we have\n\\begin{equation*}\n \\psi_+= U(-t)\\psi(t) - i\\int_t^{+\\infty}\n U(-s)\\(|\\psi|^{2\\si}\\psi(s)\\)ds\n -i\\int_t^{+\\infty}U(-s)\\(V(\\psi(s)\\)ds,\n\\end{equation*}\nthe previous estimates show that $\\psi_+\\in H^1(\\R^d)$, along with\n\\eqref{eq:cvH1}. \n\n\n\n\\section{Scattering for the asymptotic envelope}\n\\label{sec:class}\n\n\nIn this section, we prove Theorem~\\ref{theo:scatt-class}. The general\nargument is similar to the quantum case: we first prove that the\nnonlinear term can be neglected to large time, and then rely on\nprevious results to neglect the potential. \nRecall that in view of Assumption~\\ref{hyp:V}, the time dependent\nharmonic potential $\\frac{1}{2}\\$ satisfies\n \\begin{equation}\\label{eq:decayQ}\n \\left\\|\\frac{d^\\alpha}{dt^\\alpha}Q(t)\\right\\|\\lesssim\n \\^{-\\mu-2-\\alpha},\\quad \\alpha \\in \\N,\n \\end{equation}\nwhere $\\|\\cdot\\|$ denotes any matricial norm. \nWe denote by \n\\begin{equation*}\n H_Q = -\\frac{1}{2}\\Delta + \\frac{1}{2}\\\n\\end{equation*}\nthe time-dependent Hamiltonian present in \\eqref{eq:u}. Like in the\nquantum case, we show that the nonlinearity is negligible for large\ntime by working on Duhamel's formula associated to \\eqref{eq:u} in\nterms of $H_Q$. Since $H_Q$ depends on time, we recall that the\npropagator $U_Q(t,s)$ is the operator which maps $u_0$ to $u_{\\rm lin}(t)$,\nwhere $u_{\\rm lin}$ solves\n\\begin{equation*}\n i\\d_t u_{\\rm lin} +\\frac{1}{2}\\Delta u_{\\rm lin} =\n \\frac{1}{2}\\u_{\\rm lin};\\quad u_{{\\rm lin}}(s,y)=u_0(y). \n\\end{equation*}\nIt is a unitary dynamics, in the sense that $U_Q(s,s)=1$, and\n$U_Q(t,\\tau)U_Q(\\tau,s)=U_Q(t,s)$;\nsee e.g. \\cite{DG}. Then to prove the existence of wave operators, we consider the\nintegral formulation\n\\begin{equation}\n \\label{eq:duhamel-wave-class}\n u(t) = U_Q(t,0)\\tilde u_--i\\int_{-\\infty}^t U_Q(t,s)\\(|u|^{2\\si}u(s)\\)ds.\n\\end{equation}\nA convenient tool is given by Strichartz estimates associated to\n$U_Q$. Local in time Strichartz estimates follow from general results\ngiven in \\cite{Fujiwara}, where local dispersive estimates are\nproven for more general potential. To address large time, we take\nadvantage of the fact that the \npotential is exactly quadratic with respect to the space variable, so\nan explicit formula is available for $U_Q$, entering the general\nfamily of Mehler's formulas (see e.g. \\cite{Feyn,HormanderQuad}). 
\n\\subsection{Mehler's formula}\n\\label{sec:mehler}\n\nConsider, for $t_0\\ll -1$,\n\\begin{equation*}\ni\\d_tu+\\frac{1}{2}\\Delta u=\\frac{1}{2}\\< Q(t)y,y\\> u\\quad\n;\\quad u(t_0,y)=u_0(y).\n\\end{equation*}\nWe seek a solution of the form\n\\begin{equation}\n \\label{eq:mehler}\n u(t,y) = \\frac{1}{h(t)}\\int_{\\R^d}\n e^{\\frac{i}{2}\\(\\+\\+2\\\\)}u_0(z)dz, \n\\end{equation}\nwith symmetric matrices $M_1, M_2,P\\in \\mathcal S_d(\\R)$. \nExperience shows that no linear term is needed in this formula, since\nthe potential is exactly quadratic (see\ne.g. \\cite{CLSS08}). \n\\smallbreak\n\nWe compute:\n\\begin{align*}\n i\\d_t u & = -i\\frac{\\dot h}{h}u -\\frac{1}{2}\\<\\dot M_1(t)y,y\\>u\\\\\n&\\quad \n +\\frac{1}{h}\\int e^{\\frac{i}{2}\\(\\dots\\)} \\(-\\frac{1}{2}\\<\\dot\n M_2(t)z,z\\>-\\<\\dot P(t)y,z\\>\\)u_0(z)dz,\n\\end{align*}\n\\begin{align*}\n \\d_{j}^2 u &= \\frac{1}{h}\\int e^{\\frac{i}{2}\\(\\dots\\)}\n \\(-\\(\\(M_1(t)y\\)_j + \\(P(t)z\\)_j\\)^2 -i\\(M_1\\)_{jj}\\)u_0(z)dz,\n\\end{align*}\nhence\n\\begin{align*}\n & i\\d_tu+\\frac{1}{2}\\Delta u = -i\\frac{\\dot h}{h}u\n +\\frac{i}{2}\\operatorname{tr} M_1 - \\frac{1}{2}\\<\\dot M_1(t)y,y\\>u\\\\\n&+ \\frac{1}{2h}\\int\n e^{\\frac{i}{2}\\(\\+\\+2\\\\)}u_0(z)\\times\\\\\n&\\times\\( \n-\\<\\dot M_2(t)z,z\\>-2\\<\\dot\nP(t)y,z\\>-|M_1(t)y|^2 -|P(t)z|^2 -2\n\\\\)dz. \n\\end{align*}\nIdentifying the quadratic forms (recall that the matrices $M_j$ and\n$P$ are symmetric), we find:\n\\begin{align*}\n&\\frac{\\dot h}{h}= \\frac{1}{2}\\operatorname{tr} M_1,\\\\\n& \\dot M_1+M_1^2+Q=0,\\\\\n& \\dot M_2 +P^2=0,\\\\\n& \\dot P + PM_1=0.\n\\end{align*}\nDispersion is given by\n\\begin{equation*}\n h(t) = h(t_1)\\exp\\(\\frac{1}{2}\\int_{t_1}^t \\operatorname{tr} M_1(s)ds\\),\n\\end{equation*}\nwhere $M_1$ solves the matrix Riccati equation\n\\begin{equation}\n \\label{eq:riccati}\n \\dot M_1 + M_1^2 + Q=0;\\quad M_1(t_0)=\\frac{1}{t_0}{\\rm I}_d.\n\\end{equation}\nNote that in general, solutions to Riccati equations develop\nsingularities in finite time. What saves the day here is that\n\\eqref{eq:riccati} is not translation invariant, and can be\nconsidered, for $t\\le t_0\\ll -1$, \nas a perturbation of the Cauchy problem\n\\begin{equation*}\n \\dot M + M^2 =0;\\quad M(t_0)=\\frac{1}{t_0}{\\rm I}_d,\n\\end{equation*}\nwhose solution is given by \n\\begin{equation*}\n M(t) = \\frac{1}{t}{\\rm I}_d. \n\\end{equation*}\n\\begin{lemma}\\label{lem:riccati}\n Let $Q$ be a symmetric matrix satisfying \\eqref{eq:decayQ} for $\\mu>1$. There\n exists $t_0<0$ such that \\eqref{eq:riccati} has a unique solution\n $M_1\\in C((-\\infty,t_0];\\mathcal S_d(\\R))$. In addition, it\n satisfies\n \\begin{equation*}\n M_1(t)= \\frac{1}{t}{\\rm I}_d +\\O\\(\\frac{1}{t^2}\\)\\quad \\text{as\n }t\\to -\\infty. \n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\nSeek a solution of the form $M_1(t) = \\frac{1}{t}{\\rm I}_d +R(t)$,\nwhere $R$ is s symmetric matrix solution of\n\\begin{equation*}\n \\dot R + \\frac{2}{t}R+R^2 +Q= 0;\\quad R(t_0)=0. \n\\end{equation*}\nEquivalently, the new unknown $\\tilde R = t^2 R$ must satisfy\n\\begin{equation}\\label{eq:Rmatrix}\n \\dot {\\tilde R} + \\frac{1}{t^2}\\tilde R^2 +t^2Q= 0;\\quad \\tilde R(t_0)=0. \n\\end{equation}\nCauchy-Lipschitz Theorem yields a local solution: we show that it is\ndefined on $(-\\infty,t_0]$, along with the announced decay. 
\nIntegrating between $t_0$ and $t$, we find\n\\begin{equation*}\n \\tilde R(t) = -\\int_{t_0}^t \\frac{1}{s^2}\\tilde R(s)^2ds -\n \\int_{t_0}^ts^2 Q(s)ds.\n\\end{equation*}\nNote that $s\\mapsto s^2 Q$ is integrable as $s\\to -\\infty$ from \\eqref{eq:decayQ}\n(we assume $\\mu>1$). Setting \n\\begin{equation*}\n \\rho(t) =\\sup_{t\\le s\\le t_0}\\|\\tilde R(s)\\|,\n\\end{equation*}\nwhere $\\|\\cdot\\|$ denotes any matricial norm, we have\n\\begin{equation*}\n \\rho(t) \\le \\frac{C}{t_0}\\rho(t)^2 + \\frac{C}{t_0^{\\mu-1}},\n\\end{equation*}\nfor some constant $C$. Choosing $t_0\\ll -1$, global existence follows\nfrom the following bootstrap argument (see \\cite{BG3}):\nLet $f=f(t)$ be a nonnegative continuous function on $[0,T]$ such\nthat, for every $t\\in [0,T]$, \n\\begin{equation*}\n f(t)\\le \\eps_1 + \\eps_2 f(t)^\\theta,\n\\end{equation*}\nwhere $\\eps_1,\\eps_2>0$ and $\\theta >1$ are constants such that\n\\begin{equation*}\n \\eps_1 <\\left(1-\\frac{1}{\\theta} \\right)\\frac{1}{(\\theta \\eps_2)^{1\/(\\theta\n-1)}}\\ ,\\ \\ \\ f(0)\\le \\frac{1}{(\\theta \\eps_2)^{1\/(\\theta-1)}}.\n\\end{equation*}\nThen, for every $t\\in [0,T]$, we have\n\\begin{equation*}\n f(t)\\le \\frac{\\theta}{\\theta -1}\\ \\eps_1.\n\\end{equation*}\nThis shows that for $|t_0|$ sufficiently large, the matrix $R$ (hence\n$M_1$) is defined on $(-\\infty,t_0]$. Moreover, since $\\tilde R$ is\nbounded, $R(t)=\\O(t^{-2})$ as $t\\to -\\infty$, hence the result. \n\\end{proof}\nWe infer\n\\begin{equation*}\n h(t)\\Eq t {-\\infty} c|t|^{d\/2},\n\\end{equation*}\nwhich is the same dispersion as in the case without\npotential. Putting this result together with local dispersive estimates from\n\\cite{Fujiwara}, we have:\n\\begin{lemma}\\label{lem:strichartz-quad}\n Let $Q$ be a symmetric matrix satisfying \\eqref{eq:decayQ} for\n $\\mu>1$. Then for all admissible pairs $(q,r)$, \nthere exists $C=C(q,d)$ such that for all $s\\in \\R$,\n\\begin{equation*}\n \\|U_Q(\\cdot,s)f\\|_{L^q(\\R;L^r(\\R^d))}\\le C \\|f\\|_{L^2(\\R^d)},\\quad\n \\forall f\\in L^2(\\R^d). \n\\end{equation*}\nFor two admissible pairs $(q_1,r_1)$ and $(q_2,r_2)$, there exists $C_{q_1,q_2}$ such\nthat for all time interval $I$, if we denote by\n\\begin{equation*}\n R(F)(t,y) = \\int_{I\\cap \\{s\\le t\\}} U_Q(t,s)F(s,y)ds,\n\\end{equation*}\nwe have\n\\begin{equation*}\n \\|R(F)\\|_{L^{q_1}(I;L^{r_1}(\\R^d))}\\le\n C_{q_1,q_2}\\|F\\|_{L^{q_2'}(I;L^{r_2'}(\\R^d))},\\quad \\forall F\\in\n L^{q_2'}(I;L^{r_2'}(\\R^d)). \n\\end{equation*}\n\\end{lemma}\n\\begin{remark}\n Since we have dispersive estimates, end-point Strichartz estimates\n ($q=2$ when $d\\ge 3$)\n are also available from \\cite{KT}. \n\\end{remark}\n\\subsection{Wave operators}\n\\label{sec:wave-class}\n\nIn this section, we prove:\n\\begin{proposition}\\label{prop:wave-class}\n Let $d\\ge 1$, $\\frac{2}{d}\\le \\si<\\frac{2}{(d-2)_+}$, and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>1$. For\n all $\\tilde u_-\\in \\Sigma$, there exists a unique $u\\in\n C(\\R;\\Sigma)$ solution to \\eqref{eq:u} such that\n \\begin{equation*}\n \\|U_Q(0,t)u(t)-\\tilde u_-\\|_{\\Sigma}\\Tend t {-\n \\infty} 0.\n \\end{equation*}\n\\end{proposition}\n\\begin{remark}\n The assumption $\\si\\ge \\frac{2}{d}$ could easily be relaxed,\n following the classical argument (see e.g. \\cite{CazCourant}). We do\n not present the argument, since Theorem~\\ref{theo:scatt-quant} is\n proven only for $\\si>\\frac{2}{d}$. 
\n\\end{remark}\n\n\\begin{proof}\n The proof follows closely the approach without potential\n ($Q=0$). From this perspective, a key tool is the vector field\n \\begin{equation*}\n J(t)=y+it\\nabla.\n \\end{equation*}\nIt satisfies three important properties:\n\\begin{itemize}\n\\item It commutes with the free Schr\\\"odinger dynamics,\n \\begin{equation*}\n \\left[ i\\d_t +\\frac{1}{2}\\Delta,J\\right]=0. \n \\end{equation*}\n\\item It acts like a derivative on gauge invariant nonlinearities. If\n $F(z)$ is of the form $F(z)=G(|z|^2)z$, then \n \\begin{equation*}\n J(t)\\(F(u)\\) = \\d_z F(u)J(t)u -\\d_{\\bar z}F(u)\\overline{J(t)u}.\n \\end{equation*}\n\\item It provides weighted Gagliardo-Nirenberg inequalities:\n \\begin{align*}\n \\|f\\|_{L^r}\\lesssim &\n \\frac{1}{|t|^{\\delta(r)}}\\|f\\|_{L^2}^{1-\\delta(r)}\\|J(t)f\\|_{L^2}^{\\delta(r)},\n \\quad \\delta(r)=d\\(\\frac{1}{2}-\\frac{1}{r}\\), \\\\\n&\\text{with }\n\\left\\{\n \\begin{aligned}\n 2\\le r\\le \\infty &\\text{ if }d=1,\\\\\n2\\le r<\\infty &\\text{ if }d=2,\\\\\n2\\le r\\le \\frac{2d}{d-2}&\\text{ if }d\\ge 3. \n \\end{aligned}\n\\right.\n \\end{align*}\n\\end{itemize}\nThe last two properties stem from the factorization $J(t)f =\nit e^{i\\frac{|y|^2}{2t}}\\nabla \\(e^{-i\\frac{|y|^2}{2t}}f\\)$. Note that\nthe commutation property does not incorporate the quadratic potential:\n\\begin{align*}\n \\left[ i\\d_t -H_Q,J\\right]= itQ(t)y=itQ(t)J(t) +t^2 Q(t)\\nabla. \n\\end{align*}\nNow the important remark is that $t\\mapsto t^2Q(t)$ is integrable,\nfrom \\eqref{eq:decayQ} since $\\mu>1$. \n\\smallbreak\n\nTo prove Proposition~\\ref{prop:wave-class}, we apply a fixed point argument \nto the Duhamel's formula \\eqref{eq:duhamel-wave-class}. As in the case\nof the quantum scattering operator, we have to deal with the fact that\nthe gradient does not commute with $U_Q$, leading to the problem\ndescribed in Section~\\ref{sec:vector-field}. Above, we have sketched\nhow to deal with the inhomogeneous term in\n\\eqref{eq:duhamel-wave-class}, while in\nSection~\\ref{sec:vector-field}, we had underscored the difficulty\nrelated to the homogeneous term. We therefore start by showing that\nfor any admissible pair $(q_1,r_1)$, there exists $K_{q_1}$ such that\n\\begin{equation}\\label{eq:Qhomo}\n \\|\\nabla U_Q(t,0)f\\|_{L^{q_1}(\\R;L^{r_1})} +\n \\|J(t)U_Q(t,0)f\\|_{L^{q_1}(\\R;L^{r_1})} \\le K_{q_1} \\|f\\|_{\\Sigma}. \n\\end{equation}\nTo prove this, denote \n\\begin{equation*}\n v_0(t)=U_Q(t,0)f,\\quad v_1(t)= \\nabla U_Q(t,0)f, \\quad v_2(t)=\nJ(t)U_Q(t,0)f.\n\\end{equation*}\nSince $yv_0 =v_2-it v_1$, we have:\n\\begin{align*}\n &i\\d_t v_1=H_Q v_1 +Q(t)yv_0 = H_Qv_1 +Q(t)v_2-it Q(t)v_1;\\quad\n v_1(0,y)=\\nabla f(y),\\\\\n& i\\d_t v_2 = H_Qv_2 +itQ(t)v_2+t^2Q(t)v_1;\\quad v_2(0,y)=yf(y). \n\\end{align*}\nLemma~\\ref{lem:strichartz-quad} yields\n\\begin{align*}\n \\|v_1\\|_{L^{q_1}(\\R;L^{r_1})} + \\|v_2\\|_{L^{q_1}(\\R;L^{r_1})}\n &\\lesssim \\|f\\|_\\Sigma + \\int_{-\\infty}^\\infty \\|\\<t\\>Q(t)v_2(t)\\|_{L^2}dt \\\\\n&\\quad +\n \\int_{-\\infty}^\\infty \\|\\<t\\>^2Q(t)v_1(t)\\|_{L^2}dt ,\n\\end{align*}\nwhere we have chosen $(q_2,r_2)=(\\infty,2)$. The fact that $U_Q$ is\nunitary on $L^2$ and \\eqref{eq:decayQ} imply\n\\begin{equation*}\n \\|\\<t\\>Q(t)v_2(t)\\|_{L^2}\\lesssim \\<t\\>^{-\\mu-1}\\|yf\\|_{L^2},\\quad\n \\|\\<t\\>^2Q(t)v_1(t)\\|_{L^2}\\lesssim \\<t\\>^{-\\mu}\\|\\nabla f\\|_{L^2}, \n\\end{equation*}\nhence \\eqref{eq:Qhomo}. 
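As a purely illustrative aside, with no bearing on the proof, the factorization of $J(t)$ recalled above can be checked on a grid; the space interval, the mesh, the test function and the value of $t$ in the sketch below are arbitrary choices.
\\begin{verbatim}
# Purely illustrative check of J(t)f = it e^{i|y|^2/(2t)} d/dy( e^{-i|y|^2/(2t)} f )
# in dimension 1, with centered finite differences and an arbitrary test function.
import numpy as np

y = np.linspace(-15.0, 15.0, 4001)
h = y[1] - y[0]
t = 2.7
f = np.exp(-(y - 1.0) ** 2 + 0.5j * y)

def ddy(g):
    return np.gradient(g, h)

lhs = y * f + 1j * t * ddy(f)                     # J(t)f = y f + i t f'
phase = np.exp(1j * y ** 2 / (2.0 * t))
rhs = 1j * t * phase * ddy(f / phase)             # factorized expression
print("max|difference| =", np.max(np.abs(lhs - rhs)))  # of the order of the mesh error
\\end{verbatim}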
\nWe then apply a fixed point\nargument in\n\\begin{align*}\n X(T) =&\\Big\\{ u\\in L^\\infty((-\\infty,-T];H^1), \\\\\n& \\quad\\sum_{B\\in \\{{\\rm Id}, \\nabla, J\\}}\n\\( \\|B u\\|_{L^\\infty((-\\infty,-T];L^2)}+\n\\|B u\\|_{L^q((-\\infty,-T];L^r)}\\)\\le {\\mathbf K}\\|\\tilde u_-\\|_{\\Sigma}\\Big\\},\n\\end{align*}\nwhere the admissible pair $(q,r)$ is given by\n\\begin{equation*}\n (q,r) = \\(\\frac{4\\si+4}{d\\si},2\\si+2\\),\n\\end{equation*}\n and the constant $\\mathbf K$ is related to the constants $C_q$ from\n Strichartz inequalities \n(Lemma~\\ref{lem:strichartz-quad}), and $K_q$ from\n\\eqref{eq:Qhomo}, whose value we do not try to optimize. The fixed\npoint argument is applied \nto the Duhamel's formula \\eqref{eq:duhamel-wave-class}: we denote by\n$\\Phi(u)$ the left hand side, and let $u\\in X(T)$. We have \n\\begin{equation*}\n \\|\\Phi(u)\\|_{L^\\infty((-\\infty,-T];L^2)}\\le \\|\\tilde u_-\\|_{L^2} + C\n \\left\\| |u|^{2\\si}u\\right\\|_{L^{q'}_TL^{r'}},\n\\end{equation*}\nwhere $L^a_T$ stands for $L^a((-\\infty,-T])$. H\\\"older inequality\nyields\n\\begin{equation*}\n \\left\\| |u|^{2\\si}u\\right\\|_{L^{q'}_TL^{r'}} \\le\n \\|u\\|_{L^k_TL^r}^{2\\si} \\|u\\|_{L^q_TL^r},\n\\end{equation*}\nwhere $k$ is given by\n\\begin{equation*}\n \\frac{1}{q'}=\\frac{1}{q}+\\frac{2\\si}{k},\\text{ that is }k =\n \\frac{4\\si(\\si+1)}{2-(d-2)\\si}. \n\\end{equation*}\nWeighted Gagliardo-Nirenberg inequality and the definition of $X(T)$ yield\n\\begin{equation*}\n \\|u(t)\\|_{L^r}\\lesssim \\frac{1}{|t|^{\\frac{d\\si}{2\\si+2}}}\\|u_-\\|_{\\Sigma}.\n\\end{equation*}\nWe check that for $\\si\\ge \\frac{2}{d}$, \n\\begin{equation*}\n k\\times \\frac{d\\si}{2\\si+2} = \\frac{2d\\si^2}{2-(d-2)\\si}\\ge 2,\n\\end{equation*}\nand so\n\\begin{equation*}\n \\|u\\|_{L^k_TL^r}^k =\\O\\(\\frac{1}{T}\\) \\text{ as }T\\to \\infty. \n\\end{equation*}\nBy using Strichartz estimates again,\n\\begin{equation*}\n \\|\\Phi(u)\\|_{L^q_TL^r}\\le C_q\\|\\tilde u_-\\|_{L^2} + C\n \\left\\| |u|^{2\\si}u\\right\\|_{L^{q'}_TL^{r'}},\n\\end{equation*}\nwhich shows, like above, that if $T$ is sufficiently large,\n$\\|\\Phi(u)\\|_{L^q_TL^r}\\le 2C_q\\|\\tilde u_-\\|_{L^2}$. \n\\smallbreak\n\nWe now apply $\\nabla$ and $J(t)$ to $\\Phi$, and get a closed system of\nestimates:\n\\begin{align*}\n \\nabla \\Phi(u) &= \\nabla U_Q(t,0)\\tilde u_- - i \\int_{-\\infty}^t\n U_Q(t,s)\\nabla \\(|u|^{2\\si}u(s)\\)ds \\\\\n& -i\\int_{-\\infty}^t U_Q(t,s)\\(Q(s)J(s)\\Phi(u)\\)ds - \\int_{-\\infty}^t\n U_Q(t,s)\\(sQ(s)\\nabla\\Phi(u)\\)ds, \\\\\n J(t) \\Phi(u) &= J(t) U_Q(t,0)\\tilde u_- - i \\int_{-\\infty}^t\n U_Q(t,s)J(s) \\(|u|^{2\\si}u(s)\\)ds \\\\\n& +\\int_{-\\infty}^t U_Q(t,s)\\(sQ(s)J(s)\\Phi(u)\\)ds - i\\int_{-\\infty}^t\n U_Q(t,s)\\(s^2Q(s)\\nabla\\Phi(u)\\)ds,\n\\end{align*}\nwhere we have used the same algebraic properties as in the proof of\n\\eqref{eq:Qhomo}. Set \n\\begin{equation*}\n M(T) = \\sum_{B\\in \\{\\nabla, J\\}} \\( \\|B(t)\\Phi(u)\\|_{L^\\infty_TL^2}\n + \\|B(t)\\Phi(u)\\|_{L^q_TL^r}\\).\n\\end{equation*}\nLemma~\\ref{lem:strichartz-quad} and\n\\eqref{eq:Qhomo} yield\n\\begin{align*}\n M(T)&\\lesssim \\|\\tilde u_-\\|_\\Sigma + \\sum_{B\\in \\{\\nabla, J\\}}\\left\\||u|^{2\\si}B\n u\\right\\|_{L^{q'}_TL^{r'}} \\\\\n&\\quad+\\|\\Q(t)J(t)\\Phi(u)\\|_{L^1_TL^2} +\n \\|\\^2Q(t)\\nabla\\Phi(u)\\|_{L^1_TL^2}, \n\\end{align*}\nwhere we have also used the fact that $J(t)$ acts like a derivative on\ngauge invariant nonlinearities. 
The same H\\\"older inequalities as\nabove yield\n\\begin{equation*}\n \\left\\||u|^{2\\si}B\n u\\right\\|_{L^{q'}_TL^{r'}} \\le\n \\|u\\|_{L^k_TL^r}^{2\\si}\\|Bu\\|_{L^q_TL^r}\\lesssim \\frac{1}{T^{2\\si\/k}}\\|Bu\\|_{L^q_TL^r}.\n\\end{equation*}\nOn the other hand, from \\eqref{eq:decayQ},\n\\begin{equation*}\n \\|\\Q(t)J(t)\\Phi(u)\\|_{L^1_TL^2} +\n \\|\\^2Q(t)\\nabla\\Phi(u)\\|_{L^1_TL^2}\\lesssim \\frac{1}{T^{\\mu-1}}M(T),\n\\end{equation*}\nand so\n\\begin{equation*}\n M(T) \\lesssim \\|\\tilde u_-\\|_\\Sigma +\n \\frac{1}{T^{2\\si\/k}}\\sum_{B\\in \\{\\nabla, J\\}} \\|Bu\\|_{L^q_TL^r} + \\frac{1}{T^{\\mu-1}}M(T).\n\\end{equation*}\nBy choosing $T$ sufficiently large, we infer\n\\begin{equation*}\n M(T) \\lesssim \\|\\tilde u_-\\|_\\Sigma +\n \\frac{1}{T^{2\\si\/k}}\\sum_{B\\in \\{\\nabla, J\\}} \\|Bu\\|_{L^q_TL^r},\n\\end{equation*}\nand we conclude that $\\Phi$ maps $X(T)$ to $X(T)$ for $T$\nsufficiently large. Up to choosing $T$ even larger, $\\Phi$ is a\ncontraction on $X(T)$ with respect to the weaker norm $L^q_TL^r$,\nsince for $u,v\\in X(T)$, we have \n\\begin{align*}\n \\|\\Phi(u)-\\Phi(v)\\|_{L^q_TL^r}&\\lesssim \\left\\| |u|^{2\\si}u\n -|v|^{2\\si}v\\right\\|_{L^{q'}_TL^{r'}}\\lesssim \\(\n \\|u\\|_{L^k_TL^r}^{2\\si}+\\|v\\|_{L^k_TL^r}^{2\\si}\\)\\|u-v\\|_{L^q_TL^r}\\\\\n&\\lesssim \\frac{1}{T^{2\\si\/k}}\\|u-v\\|_{L^q_TL^r},\n\\end{align*}\nwhere we have used the previous estimate. Therefore, there exists\n$T>0$ such that $\\Phi$ has a unique fixed point in $X(T)$. This\nsolution actually belongs to $C(\\R;\\Sigma)$ from \\cite{CaSi15}. \nUnconditional uniqueness (in $\\Sigma$, without referring to mixed\nspace-time norms) stems from the approach in \\cite{TzVi-p}. \n\\end{proof}\n\n\\subsection{Vector field}\n\\label{sec:vector-field-Q}\n\n It is possible to construct a vector field adapted to the presence\n of $Q$, even though it is not needed to prove\n Proposition~\\ref{prop:wave-class}. Such a vector field will be useful\n in Section~\\ref{sec:cv}, and since its construction is very much in\n the continuity of Section~\\ref{sec:mehler}, we present it now. Set,\n for a scalar function $f$, \n\\begin{equation*}\n {\\mathcal A} f= i W(t) e^{i\\phi(t,y)}\\nabla \\(\n e^{-i\\phi(t,y)}f\\)= W(t) \\(f\\nabla \\phi +i\\nabla f\\),\n\\end{equation*}\nwhere $W$ is a matrix and the phase $\\phi$ solves the eikonal equation\n\\begin{equation*}\n \\d_t \\phi +\\frac{1}{2}|\\nabla \\phi|^2 + \\frac{1}{2}\\< Q(t)y,y\\>=0. \n\\end{equation*}\nSince the underlying Hamiltonian is quadratic, $\\phi$ has the form\n\\begin{equation*}\n \\phi(t,y) = \\frac{1}{2}\\,\n\\end{equation*}\nwhere $K(t)$ is a symmetric matrix. For $ {\\mathcal A}$ to commute\nwith $i\\d_t -H_Q$, we come up with the conditions\n\\begin{equation*}\n \\dot K + K^2 + Q=0,\\quad \\dot W = W \\nabla^2\\phi= WK .\n\\end{equation*}\nWe see that we can take $K=M_1$ as in the proof of\nLemma~\\ref{lem:riccati}, and $ {\\mathcal A}$ will then satisfy the\nsame three properties as $J$, up to the fact that the commutation\nproperty now includes the quadratic potential. \n\\smallbreak\n\nSince the construction of this vector field\nboils down to solving a matricial Riccati equation with initial data\nprescribed at large time (see \\eqref{eq:riccati}), we naturally\nconstruct two vector fields $\\mathcal A_\\pm$, associated to $t\\to \\pm\n\\infty$. 
In view of Lemma~\\ref{lem:riccati}, $\\mathcal A_-$ is defined\non $(-\\infty,-T]$, while $\\mathcal A_+$ is defined\non $[T,\\infty)$, for a common $T\\gg 1$, with\n\\begin{equation*}\n \\mathcal A_\\pm = W_\\pm(t)\\(\\nabla \\phi_\\pm + i\\nabla\\), \\quad\n \\phi_\\pm (t,y) = \\frac{1}{2}\\,\n\\end{equation*}\nwhere $K_\\pm$ and $W_\\pm$ satisfy\n\\begin{equation*}\n \\dot K_\\pm +K_\\pm^2+Q=0,\\quad \\dot W_\\pm =W_\\pm K_\\pm,\n\\end{equation*}\nso that Lemma~\\ref{lem:riccati} also yields\n\\begin{equation}\\label{eq:vector-asym}\n K_\\pm(t)\\sim \\frac{1}{t}{\\rm I}_d,\\quad W_\\pm (t)\\sim t {\\rm\n I}_d\\quad \\text{as }t\\to \\pm \\infty. \n\\end{equation} \nWe construct commuting vector fields for large time only, essentially\nbecause on finite time intervals, the absence of commutation is not a\nproblem, so we can use $\\nabla$, $y$ or $J$. \n\n\n\n\\subsection{Asymptotic completeness}\n\\label{sec:ac-class}\n\nIn this section we prove:\n\\begin{proposition}\\label{prop:AC-class}\n Let $d\\ge 1$, $\\frac{2}{d}\\le \\si<\\frac{2}{(d-2)_+}$, and $V$\n satisfying Assumption~\\ref{hyp:V} for some $\\mu>1$. For all $u_0\\in\n \\Sigma$, there exists a unique $\\tilde u_+\\in \\Sigma$ such that the\n solution $u\\in C(\\R;\\Sigma)$ to \\eqref{eq:u} with $u_{\\mid t=0}=u_0$\n satisfies\n \\begin{equation*}\n \\sum_{\\Gamma\\in \\{{\\rm Id},\\nabla, J\\}} \\|\\Gamma(t) u(t)-\n \\Gamma(t)U_Q(t,0)\\tilde u_+\\|_{L^2} \\Tend t {+\\infty} 0. \n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n In the case $Q=0$, such a result is a rather direct consequence of\n the \\emph{pseudo-conformal conservation law}, established in\n \\cite{GV79Scatt}. Recalling that $J(t)=y+it\\nabla$, this law reads\n\\begin{equation*}\n \\frac{d}{dt}\\(\\frac{1}{2}\\|J(t)u\\|_{L^2}^2\n +\\frac{t^2}{\\si+1}\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}\\)\n =\\frac{t}{\\si+1}(2-d\\si)\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}. \n\\end{equation*}\nA way to derive this relation is to apply $J$ to \\eqref{eq:u}. The operator $J$\ncommutes with the linear part ($Q=0$), and the standard $L^2$\nestimate, which consists in multiplying the outcome by\n$\\overline{Ju}$, integrating in space, and taking the imaginary part, yields:\n\\begin{equation*}\n\\frac{1}{2} \\frac{d}{dt}\\|J(t)u\\|_{L^2}^2 = \\IM \\int \\overline {Ju}J\\(|u|^{2\\si}u\\).\n\\end{equation*}\nSince we have $J= i t e^{i\\frac{|y|^2}{2t}} \\nabla\\( \\cdot\ne^{-i\\frac{|y|^2}{2t}}\\)$, \n\\begin{equation*}\n J\\(|u|^{2\\si}u\\) = (\\si+1)|u|^{2\\si}Ju + \\si u^{\\si+1}\\bar\n u^{\\si-1}\\overline{Ju}.\n\\end{equation*}\nThe first term is real, and the rest of the computation consists in\nexpanding the remaining term. 
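To visualize the starting point of this computation, one may check numerically that in the linear, potential-free case the quantity $\\|J(t)u(t)\\|_{L^2}$ is conserved, since $J(t)=U(t)yU(-t)$. The following sketch is only an illustration of this linear mechanism: the discretization, the FFT-based free propagator and the Gaussian datum are arbitrary choices.
\\begin{verbatim}
# Purely illustrative (linear, free case): since J(t) = U(t) y U(-t), the norm
# ||J(t) U(t) u0||_{L^2} does not depend on t.  1D, FFT-based free propagator.
import numpy as np

N, L = 2048, 80.0
y = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u0 = np.exp(-y ** 2 + 1.0j * y)                   # arbitrary datum in Sigma

def free_flow(u, t):                              # e^{i t Delta / 2} u
    return np.fft.ifft(np.exp(-0.5j * k ** 2 * t) * np.fft.fft(u))

def J_norm(u, t):
    uy = np.fft.ifft(1j * k * np.fft.fft(u))      # spectral derivative
    return np.sqrt(np.sum(np.abs(y * u + 1j * t * uy) ** 2) * (L / N))

for t in (0.0, 0.5, 2.0, 5.0):
    print(f"t = {t:4.1f}   ||J(t)u(t)||_{{L^2}} = {J_norm(free_flow(u0, t), t):.8f}")
\\end{verbatim}
All printed values coincide up to aliasing error; the nonlinear and time-dependent corrections to this conservation are precisely what is estimated below.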
\n\\smallbreak\n\nIn the case where $Q\\not =0$, we resume the above approach: the new\ncontribution is due to the fact that $J$ does not commute with the\nexternal potential, so we find:\n\\begin{align*}\n \\frac{1}{2} \\frac{d}{dt}\\|J(t)u\\|_{L^2}^2 & =\\text{like before} + \\RE \\int\n t Q(t)xu\\cdot \\overline {Ju}\\\\\n&=\\text{like before} + \n t\\RE\\int_{\\R^d} \\ +t^2 \\IM \\int_{\\R^d} \\.\n\\end{align*}\nOn the other hand, we still have\n\\begin{align*}\n \\frac{d}{dt}\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}& =2 (\\si+1)\\int\n |u|^{2\\si}\\RE \\(\\bar u\\d_tu\\) = 2 (\\si+1)\\int\n |u|^{2\\si}\\RE \\(\\bar u \\times\\frac{i}{2}\\Delta u\\) ,\n\\end{align*}\nand so,\n\\begin{align*}\n \\frac{d}{dt}\\(\\frac{1}{2}\\|J(t)u\\|_{L^2}^2\n +\\frac{t^2}{\\si+1}\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}\\)\n &=\\frac{t}{\\si+1}(2-d\\si)\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}\\\\\n+ \n t\\RE\\int_{\\R^d} \\& +t^2 \\IM \\int_{\\R^d} \\ . \n\\end{align*}\nThus for $t\\ge 0$ and $\\si\\ge\\frac{2}{d}$, \\eqref{eq:decayQ} implies\n\\begin{equation*}\n \\frac{d}{dt}\\(\\frac{1}{2}\\|J(t)u\\|_{L^2}^2\n +\\frac{t^2}{\\si+1}\\|u(t)\\|_{L^{2\\si+2}}^{2\\si+2}\\)\n \\lesssim \n \\^{-\\mu-1}\\|J(t)u\\|_{L^2}^2 +\\^{-\\mu}\\| \\nabla u\\|_{L^2} \\|Ju\\|_{L^2}. \n\\end{equation*}\nEven though there is no conservation of the energy for \\eqref{eq:u}\nsince the potential depends on time, we know from \\cite{Ha13} that $u\\in\nL^\\infty(\\R;H^1(\\R^d))$. As a matter of fact, the proof given in\n\\cite[Section~4]{Ha13} concerns the case $\\si=1$ in $d=2$ or $3$, but\nthe argument, based on energy estimates, remains valid for $d\\ge 1$,\n$\\si<\\frac{2}{(d-2)_+}$, since we then know that $u\\in\nC(\\R;\\Sigma)$. Since $\\mu>1$, we infer\n\\begin{equation}\\label{eq:Jborne}\n Ju\\in L^\\infty(\\R_+;L^2). \n\\end{equation}\nWriting Duhamel's formula for \\eqref{eq:u} with initial datum $u_0$,\nin terms of $U_Q$, we have\n\\begin{equation*}\n u(t) = U_Q(t,0)u_0-i\\int_0^t U_Q(t,s)\\(|u|^{2\\si}u(s)\\)ds.\n\\end{equation*}\nResuming the computations presented in the proof of\nProposition~\\ref{prop:wave-class}, \\eqref{eq:Jborne} and (weighted)\nGagliardo-Nirenberg inequalities make it possible to prove that\n\\begin{equation*}\n Bu \\in L^{q_1}(\\R_+;L^{r_1}),\\ \\forall (q_1,r_1)\\text{ admissible},\n \\ \\forall B\\in \\{{\\rm Id},\\nabla, J\\}. \n\\end{equation*}\nDuhamel's formula then yields, for $0^{-\\mu-1}J(s)u\\|_{L^1(t,\\infty;L^2)} + \\|\\^{-\\mu}\\nabla u\\|_{L^1(t,\\infty;L^2)}. \n\\end{align*}\nThe right hand side goes to zero as $t\\to \\infty$, hence the\nproposition. \n\\end{proof}\n\n\\begin{remark}\n As pointed out in the previous section, it would be possible to\n prove the existence of wave operators by using an adapted vector\n field $\\mathcal A$. On the other hand, if $Q(t)$ is not proportional\n to the identity matrix, it seems that no (exploitable) analogue of\n the pseudo-conformal conservation law is available in terms of\n $\\mathcal A$ rather than in terms of $J$. \n\\end{remark}\n\\subsection{Conclusion}\n\\label{sec:concl-class}\n\nLike in the case of quantum scattering, we use a stronger version of\nthe linear scattering theory:\n\\begin{proposition}\\label{prop:Cook-class}\n Let $d\\ge 1$, $V$ satisfying Assumption~\\ref{hyp:V} with $\\mu>1$. Then \nthe strong limits\n \\begin{equation*}\n \\lim_{t\\to \\pm \\infty} U_Q(0,t)U(t) \\quad \\text{and}\\quad \\lim_{t\\to\n \\pm\\infty} U(-t) U_Q(t,0) \\quad \\text{and}\\quad \n\\end{equation*}\nexist in $\\Sigma$. 
\n\\end{proposition}\n\\begin{proof}\n For the first limit (existence of wave operators), again in view of\n Cook's method, we prove that for all $\\varphi\\in \n \\Sch(\\R^d)$, \n\\begin{equation*}\n t\\mapsto \\left\\| U_Q(0,t) \\<Q(t)y,y\\>U(t)\\varphi\\right\\|_{\\Sigma}\\in\n L^1(\\R). \n \\end{equation*}\nFor the $L^2$ norm, we have, in view of \\eqref{eq:decayQ},\n\\begin{equation*}\n \\left\\| U_Q(0,t) \\<Q(t)y,y\\>U(t)\\varphi\\right\\|_{L^2} \\lesssim\n \\<t\\>^{-\\mu-2}\\sum_{j=1}^d\\| y_j^2 U(t)\\varphi\\|_{L^2}.\n\\end{equation*}\nWrite\n\\begin{equation*}\n y_j^2 = (y_j+it\\d_j)^2 +t^2\\d_j^2 -2ity_j\\d_j = (y_j+it\\d_j)^2\n -t^2\\d_j^2 -2it(y_j+it\\d_j)\\d_j,\n\\end{equation*}\nto take advantage of the commutation\n\\begin{equation*}\n (y_j+it\\d_j)U(t) = U(t)y_j,\n\\end{equation*}\nand infer\n\\begin{equation*}\n \\left\\| U_Q(0,t) \\<Q(t)y,y\\>U(t)\\varphi\\right\\|_{L^2} \\lesssim\n \\<t\\>^{-\\mu-2}\\(\\||y|^2\\varphi\\|_{L^2} +t^2\\|\\Delta \\varphi\\|_{L^2}\n \\)\\lesssim \\<t\\>^{-\\mu}.\n\\end{equation*}\nThe right hand side is integrable since $\\mu>1$, so the strong limits\n\\begin{equation*}\n \\lim_{t\\to \\pm\\infty} U_Q(0,t)U(t)\n\\end{equation*}\nexist in $L^2$. \nTo infer that these strong limits actually exist in $\\Sigma$, we\nsimply invoke \\eqref{eq:Qhomo} in the case $(q_1,r_1)=(\\infty,2)$, so\nthe above computations are easily adapted. \n\\smallbreak\n\nFor asymptotic completeness, we can adopt the same strategy. Indeed,\nit suffices to prove that for all $\\varphi\\in \n \\Sch(\\R^d)$, \n\\begin{equation*}\n t\\mapsto \\left\\| U(-t) \\<Q(t)y,y\\>U_Q(t,0)\\varphi\\right\\|_{\\Sigma}\\in\n L^1(\\R). \n \\end{equation*}\nFor the $L^2$ norm, we have\n\\begin{align*}\n \\left\\| U(-t) \\<Q(t)y,y\\>U_Q(t,0)\\varphi\\right\\|_{L^2}&= \\left\\|\n \\<Q(t)y,y\\>U_Q(t,0)\\varphi\\right\\|_{L^2}\\\\\n& \\lesssim\n \\<t\\>^{-\\mu-2}\\sum_{j=1}^d \\left\\|\n y_j^2U_Q(t,0)\\varphi\\right\\|_{L^2}.\n\\end{align*}\nWe first proceed like above, and write\n\\begin{equation*}\n y_j^2 = (y_j+it\\d_j)^2\n -t^2\\d_j^2 -2it(y_j+it\\d_j)\\d_j.\n\\end{equation*}\nThe operator $J$ does not commute with $U_Q$, but this lack of\ncommutation is harmless for our present goal, from\n\\eqref{eq:Qhomo}. By considering the system satisfied by\n$$(y_j+it\\d_j)^2U_Q(t,0)\\varphi, \\d_j^2 U_Q(t,0)\\varphi,\n\\d_j(y_j+it\\d_j)U_Q(t,0)\\varphi,$$ \nwe obtain \n\\begin{align*}\n \\sum_{j=1}^d&\\( \\| (y_j+it\\d_j)^2U_Q(t,0)\\varphi\\|_{L^2} + \\|\\d_j^2\n U_Q(t,0)\\varphi\\|_{L^2} +\n \\|\\d_j(y_j+it\\d_j)U_Q(t,0)\\varphi\\|_{L^2}\\)\\\\\n&\\le C \\|\\varphi\\|_{\\Sigma^2},\n\\end{align*}\nwhere $\\Sigma^k$ is the space of $H^k$ functions with $k$ momenta in\n$L^2$, and $C$ does not depend on time. Finally, we also have a\nsimilar estimate by considering one more derivative or momentum. The\nkey remark in the computation is that the external\npotential $\\<Q(t)y,y\\>$ is exactly quadratic in space, and so\ndifferentiating it three times with respect to any space variables yields zero. \n\\end{proof}\n\n\n\n\\section{Proof of Theorem~\\ref{theo:cv}}\n\\label{sec:cv}\n\nThe main result of this section is:\n\\begin{theorem}\\label{theo:cv-unif}\n Let $d=3$, $\\si=1$, $V$ as in Theorem~\\ref{theo:scatt-quant}, and\n $u_-\\in \\Sigma^7$. Suppose that Assumption~\\ref{hyp:flot} \n is satisfied. Let $\\psi^\\eps$ be given by\n Theorem~\\ref{theo:scatt-quant}, $u$ be given by\n Theorem~\\ref{theo:scatt-class}, $\\varphi^\\eps$ defined by\n \\eqref{eq:phi}. 
We have\n the uniform error \n estimate:\n \\begin{equation*}\n \\sup_{t\\in \\R}\\|\\psi^\\eps(t)-\\varphi^\\eps(t)\\|_{L^2(\\R^3)} =\n \\O\\(\\sqrt\\eps\\). \n \\end{equation*}\n\\end{theorem}\nTheorem~\\ref{theo:cv} is a direct consequence of the above\nresult, whose proof is the core of\nSection~\\ref{sec:cv}. From now on, we assume $d=3$ and $\\si=1$. \n\\subsection{Extra properties for the approximate solution}\n\\label{sec:extra-u}\n\nFurther regularity and localization properties on $u$ will be\nneeded. \n\\begin{proposition}\\label{prop:extra-u}\n Let $\\si=1$, $1\\le d\\le 3$, $k\\ge 2$ and $V$ satisfying\n Assumption~\\ref{hyp:V} for some $\\mu>1$. If $u_-\\in \\Sigma^k$, then\n the solution $u\\in C(\\R;\\Sigma)$ provided by Theorem~\\ref{theo:scatt-class}\n satisfies $u\\in C(\\R;\\Sigma^k)$. The momenta\n of $u$ satisfy\n \\begin{equation*}\n \\lVert \\lvert y\\rvert^\\ell u(t,y)\\|_{L^2(\\R^d)}\\le C_\\ell\n \\^\\ell,\\quad 0\\le \\ell\\le k,\n \\end{equation*}\nwhere $C_\\ell$ is independent of $t\\in \\R$.\n \\end{proposition}\n\\begin{proof}\n We know from the proof of Theorem~\\ref{theo:scatt-class} that since\n $u_-\\in \\Sigma$,\n\\begin{equation*}\n u,\\nabla u, Ju \\in L^\\infty(\\R;L^2(\\R^d)).\n \\end{equation*}\nThe natural approach is then to proceed by induction on $k$,\nto prove that \n\\begin{align*}\n \\nabla^k u,J^k u\\in L^\\infty(\\R;L^2(\\R^d)). \n\\end{align*}\n We have, as we have seen in the proof of\nProposition~\\ref{prop:wave-class},\n\\begin{align*}\n i\\d_t \\nabla u &= H_Q \\nabla u + Q(t)y u +\\nabla\n \\(|u|^2 u\\)\\\\\n&+ H_Q \\nabla u + Q(t)J(t)u -it Q(t)\\nabla u +\\nabla\n \\(|u|^2 u\\),\\\\\ni\\d_t Ju & = H_Q Ju +it Q(t)y u +J\n \\(|u|^2 u\\)\\\\\n& = H_Q J u + itQ(t)J(t)u +t^2Q(t)\\nabla u +J\n \\(|u|^2 u\\).\n\\end{align*}\nApplying the operators $\\nabla$ and $J$ again, we find\n\\begin{align*}\n i\\d_t \\nabla^2 u &= H_Q \\nabla^2 u + 2Q(t)y \\nabla u +Q(t) u +\\nabla^2\n \\(|u|^2 u\\)\\\\\n&+ H_Q \\nabla u + 2Q(t)J(t)\\nabla u -2it Q(t)\\nabla^2 u+Q(t)u +\\nabla^2\n \\(|u|^2 u\\),\\\\\ni\\d_t J^2u & = H_Q J^2u -2t^2 Q(t)y \\nabla u -t^2Q(t)u+J^2\n \\(|u|^2 u\\)\\\\\n& = H_Q J^2 u - 2t^2Q(t)J\\nabla u +2it^3Q(t) J^2 u +itQ(t)u+J^2\n \\(|u|^2 u\\).\n\\end{align*}\nIn view of \\eqref{eq:decayQ}, we see that $t\\mapsto t^3 Q(t)$ need not be\nintegrable (unless we make stronger and stronger assumptions of $\\mu$,\nas $k$ increases), so the commutator seems to be fatal to this approach. To\novercome this issue, we use the vector field mentioned in\nSection~\\ref{sec:vector-field-Q}. \nFor bounded time $t\\in\n[-T,T]$, the above mentioned lack of commutation is not a problem, and\nwe can use the operator $J$, which is defined for all time. \nWe note that either of the operators $\\mathcal A_\\pm$ or \n$J$ satisfies more generally the pointwise identity\n\\begin{equation*}\n B\\(u_1\\overline u_2 u_3\\) =\\( B u_1\\) \\overline u_2 u_3 +\n u_1\\(\\overline{B u_2}\\) u_3 + u_1\\overline u_2\\( Bu_3\\),\n\\end{equation*}\nfor all differentiable functions $u_1,u_2,u_3$. \n\nNow we have all the tools to proceed by induction, and mimic the\nproof from \\cite[Appendix]{Ca11}. The main idea is that the proof is\nsimilar to the propagation of higher regularity for energy-subcritical\nproblems, with the difference that large time is handled thanks to\nvector fields. 
We leave out the details, which are not difficult but\nrather cumbersome: considering\n\\begin{equation*}\n B(t) =\n\\left\\{\n \\begin{aligned}\n \\mathcal A_-(t)&\\text{ for }t\\le -T,\\\\\nJ(t)&\\text{ for }t\\in [-T,T],\\\\\n \\mathcal A_+(t)&\\text{ for }t\\ge T,\n \\end{aligned}\n\\right.\n\\end{equation*}\n we can then prove that \n \\begin{equation*}\n \\nabla^k u,B^k u\\in L^\\infty(\\R;L^2(\\R^d)). \n \\end{equation*}\nBack to the definition of $\\mathcal A_\\pm$, \n\\begin{equation*}\n \\mathcal A_\\pm (t) = W_\\pm (t)K_\\pm (t)y +iW_\\pm (t)\\nabla,\n\\end{equation*}\n\\eqref{eq:vector-asym}\nthen yields the result. \n\\end{proof}\n\n\\subsection{Strichartz estimates}\n\\label{sec:strichartz-raff}\n\nIntroduce the following notations, taking the dependence upon $\\eps$\ninto account:\n\\begin{equation*}\n H^\\eps=-\\frac{\\eps^2}{2}\\Delta+V(x),\\quad U_V^\\eps(t) =\n e^{-i\\frac{t}{\\eps} H^\\eps}. \n\\end{equation*}\nSince we now work only in space dimension $d=3$, we can use the result\nfrom \\cite{Go06}. Resuming the proof from \\cite{Go06} (a mere scaling\nargument is not sufficient), we have, along with the preliminary\nanalysis from Section~\\ref{sec:spectral}, the global dispersive estimate\n\\begin{equation}\n \\label{eq:disp-semi-glob}\n \\|U^\\eps_V(t)\\|_{L^1(\\R^3)\\to L^\\infty(\\R^3)}\\lesssim\n \\frac{1}{(\\eps|t|)^{3\/2}},\\quad t\\not =0.\n\\end{equation}\nFor $|t|\\le \\delta$, $\\delta>0$ independent of $\\eps$, the above\nrelation stems initially from \\cite{Fujiwara}. As a consequence, we\ncan measure the dependence upon $\\eps$ in Strichartz estimates. We\nrecall the definition of admissible pairs related to Sobolev\nregularity.\n\\begin{definition}\n Let $d=3$ and $s\\in \\R$. A pair $(q,r)$ is called $\\dot H^s$-admissible if \n \\begin{equation*}\n \\frac{2}{q}+\\frac{3}{r} = \\frac{3}{2}-s. \n \\end{equation*}\n\\end{definition}\nFor $t_0\\in \\R\\cup \\{-\\infty\\}$, we denote by\n\\begin{equation*}\n R^\\eps_{t_0}(F)(t) = \\int_{t_0}^t U_V^\\eps(t-s)F(s)ds\n\\end{equation*}\nthe retarded term related to Duhamel's formula. Since the dispersive\nestimate \\eqref{eq:disp-semi-glob} is the same as the one for\n$e^{i\\eps t\\Delta}$, we get the same scaled Strichartz estimates as\nfor this operator, which can in turn be obtained by scaling\narguments from the case $\\eps=1$. \n\\begin{lemma}[Scaled $L^2$-Strichartz estimates]\\label{lem:stri-eps}\n Let $t_0\\in \\R\\cup\\{-\\infty\\}$, and let $(q_1,r_1)$ and $(q_2,r_2)$\n be $L^2$-admissible pairs, $2\\le r_j\\le 6$. We have\n \\begin{equation*}\n \\eps^{\\frac{1}{q_1}} \\|U_V^\\eps(\\cdot) f\\|_{L^{q_1}(\\R;L^{r_1}(\\R^3))}\\lesssim\n \\|f\\|_{L^2(\\R^3)}, \n \\end{equation*}\n \\begin{equation*}\n \\eps^{\\frac{1}{q_1}+\\frac{1}{q_2}}\n \\|R^\\eps_{t_0}(F)\\|_{L^{q_1}(I;L^{r_1}(\\R^3))}\\le C_{q_1,q_2} \\|F\\|_{L^{q_2'}(I;L^{r_2'}(\\R^3))},\n \\end{equation*}\nwhere $C_{q_1,q_2}$ is independent of $\\eps$, $t_0$, and of $I$ such that\n$t_0\\in \\bar I$. 
\n\\end{lemma}\nWe will also use Strichartz estimates for non-admissible pairs, as\nestablished in \\cite{Kat94} (see also \\cite{CW92,FoschiStri}).\n\\begin{lemma}[Scaled inhomogeneous Strichartz\n estimates]\\label{lem:stri-inhom-eps} \n Let $t_0\\in \\R\\cup\\{-\\infty\\}$, and let $(q_1,r_1)$ be an $\\dot\n H^{1\/2}$-admissible pair, and $(q_2,r_2)$\n be an $\\dot H^{-1\/2}$-admissible pair, with \n \\begin{equation*}\n 3\\le r_1,r_2<6.\n \\end{equation*}\nWe have\n\\begin{equation*}\n \\eps^{\\frac{1}{q_1}+\\frac{1}{q_2}}\n \\|R^\\eps_{t_0}(F)\\|_{L^{q_1}(I;L^{r_1}(\\R^3))}\\le C_{q_1,q_2} \\|F\\|_{L^{q_2'}(I;L^{r_2'}(\\R^3))},\n \\end{equation*}\nwhere $C_{q_1,q_2}$ is independent of $\\eps$, $t_0$, and of $I$ such that\n$t_0\\in \\bar I$. \n\\end{lemma}\n\\subsection{Preparing the proof}\n\\label{sec:preparing-proof}\n\n\n\n\n\n Subtracting the equations satisfied by\n$\\psi^\\eps$ and $\\varphi^\\eps$, respectively, we obtain as in\n\\cite{CaFe11}: $w^\\eps=\\psi^\\eps-\\varphi^\\eps$ satisfies\n\\begin{equation}\\label{eq:restecrit}\n i\\eps\\d_t w^\\eps +\\frac{\\eps^2}{2}\\Delta w^\\eps =V w^\\eps -\\mathcal L^\\eps\n + \n \\eps^{5\/2}\\(|\\psi^\\eps|^{2}\\psi^\\eps\n -|\\varphi^\\eps|^{2}\\varphi^\\eps\\),\n\\end{equation}\nalong with the initial condition\n\\begin{equation*}\n e^{-i\\frac{\\eps\n t}{2}\\Delta}w^\\eps_{\\mid t=-\\infty}=0, \n\\end{equation*}\nwhere the source term is given by \n\\begin{equation*}\n {\\mathcal L}^\\eps(t,x) = \\(V(x) - V\\(q(t)\\)\n -\\sqrt\\eps \\<\\nabla V\\(q(t)\\),y\\>\n -\\frac{\\eps}{2}\\\\)\\Big|_{y=\\frac{x-q(t)}{\\sqrt\\eps}}\n \\varphi^\\eps(t,x). \n\\end{equation*}\nDuhamel's\nformula for $w^\\eps$ reads\n\\begin{align*}\n w^\\eps(t) &= -i\\eps^{3\/2}\\int_{-\\infty}^t U^\\eps_V(t-s)\\(|\\psi^\\eps|^{2}\\psi^\\eps\n -|\\varphi^\\eps|^{2}\\varphi^\\eps\\)(s)ds\\\\\n&\\quad +i\\eps^{-1}\\int_{-\\infty}^t U^\\eps_V(t-s) \\mathcal L^\\eps(s)ds. \n\\end{align*}\nDenoting $L^a(]-\\infty,t];L^b(\\R^3))$ by $L^a_tL^b$, Strichartz\nestimates yield, for any $L^2$-admissible pair $(q_1,r_1)$,\n\\begin{equation}\\label{eq:stri-weps}\n \\eps^{1\/q_1}\\|w^\\eps\\|_{L^{q_1}_t L^{r_1}} \\lesssim\n \\eps^{3\/2-1\/q}\\left\\||\\psi^\\eps|^{2}\\psi^\\eps \n -|\\varphi^\\eps|^{2}\\varphi^\\eps\\right\\|_{L^{q'}_tL^{r'}} +\n \\frac{1}{\\eps}\\|\\mathcal L^\\eps\\|_{L^1_tL^2},\n\\end{equation}\nwhere $(q,r)$ is the admissible pair chosen in the proof of\nProposition~\\ref{prop:waveop-quant}, that is $r=2\\si+2$. Since we now\nhave $d=3$ and $\\si=1$, this means:\n\\begin{equation*}\n q=\\frac{8}{3},\\quad k=8,\n\\end{equation*}\nand \\eqref{eq:stri-weps} yields\n\\begin{equation}\\label{eq:w-presque}\n \\eps^{1\/q_1}\\|w^\\eps\\|_{L^{q_1}_t L^{r_1}} \\lesssim\n \\eps^{9\/8}\\( \\|w^\\eps\\|^2_{L^8_t L^4}+ \\|\\varphi^\\eps\\|^2_{L^8_t\n L^4}\\)\\|w^\\eps\\|_{L^{8\/3}_tL^4} +\n \\frac{1}{\\eps}\\|\\mathcal L^\\eps\\|_{L^1_tL^2}.\n\\end{equation}\nThe strategy is then to first\nobtain an a priori estimate for $w^\\eps$ in $L^8_tL^4$, and then to\nuse it in the above estimate. In order to do so, we begin by\nestimating the source term $\\mathcal L^\\eps$, in the next subsection. \n\\subsection{Estimating the source term}\n\\label{sec:estim-source-term}\n\n\\begin{proposition}\\label{prop:est-source}\n Let $d= 3$, $\\si=1$, $V$ satisfying Assumption~\\ref{hyp:V}\n with $\\mu>2$, and $u_-\\in \\Sigma^k$ for some $k\\ge\n 7$. 
Suppose that Assumption~\\ref{hyp:flot} is satisfied.\nLet $u\\in C(\\R;\\Sigma^k)$ given by\n Theorem~\\ref{theo:scatt-class} and \n Proposition~\\ref{prop:extra-u}. The source term $\\mathcal L^\\eps$ satisfies\n \\begin{equation*}\n \\frac{1}{\\eps} \\|\\mathcal L^\\eps(t)\\|_{L^2(\\R^3)}\\lesssim \\frac{\\sqrt\n \\eps}{\\^{3\/2} }\\quad\\text{and}\\quad \\frac{1}{\\eps}\n \\|\\mathcal L^\\eps(t)\\|_{L^{3\/2}(\\R^3)}\\lesssim \\frac{\n \\eps^{3\/4}}{\\^{3\/2} },\n\\quad \\forall\n t\\in \\R.\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\nTo ease notation, we note that\n\\begin{equation*}\n \\frac{1}{\\eps} \\mathcal L^\\eps(t,x) = \\frac{1}{\\eps^{3\/4}} {\\mathcal\n S}^\\eps(t,y)\\Big|_{y=\\frac{x-q(t)}{\\sqrt\\eps}}\n e^{i(S(t)+ip(t)\\cdot (x-q(t)))\/\\eps}, \n\\end{equation*}\nwhere\n\\begin{equation*}\n {\\mathcal S}^\\eps(t,y) = \\frac{1}{\\eps}\\( V\\(q(t)+y\\sqrt \\eps\\) -V\\(q(t)\\)\n -\\sqrt\\eps \\<\\nabla V\\(q(t)\\),y\\> -\\frac{\\eps}{2}\\\\)u(t,y).\n\\end{equation*}\nIn particular,\n\\begin{equation*}\n \\frac{1}{\\eps}\\| \\mathcal L^\\eps(t)\\|_{L^2(\\R^3)} = \\|{\\mathcal\n S}^\\eps(t)\\|_{L^2(\\R^3)} ,\\quad \\frac{1}{\\eps}\\| \\mathcal\n L^\\eps(t)\\|_{L^{3\/2}(\\R^3)} = \\eps^{1\/4}\\|{\\mathcal \n S}^\\eps(t)\\|_{L^{3\/2}(\\R^3)}. \n\\end{equation*}\n Taylor's formula and Assumption~\\ref{hyp:V} yield the pointwise estimate\n \\begin{equation*}\n | {\\mathcal S}^\\eps(t,y) | \\lesssim \\sqrt\\eps |y|^3 \\int_0^1\n \\frac{1}{\\^{\\mu+3}}d\\theta |u(t,y)|. \n \\end{equation*}\nTo simplify notations, we consider only positive times. Recall that\nfrom Assumption~\\ref{hyp:flot}, $p^+\\not =0$. Introduce, for\n$0<\\eta< |p^+|\/2$,\n\\begin{equation*}\n \\Omega = \\left\\{y\\in \\R^3,\\quad |y|\\ge \\eta\\frac{t}{\\sqrt\\eps}\\right\\}.\n\\end{equation*}\nSince $q(t)\\sim p^+ t$ as\n$t\\to \\infty$, on the complement of $\\Omega$, we can use the decay of $V$,\n\\eqref{eq:short}, to infer the pointwise estimate\n\\begin{equation}\\label{eq:Spoint}\n | {\\mathcal S}^\\eps(t,y) | \\lesssim \\sqrt\\eps |y|^3\n \\frac{1}{\\^{\\mu+3}} |u(t,y)| \\quad \\text{on }\\Omega^c.\n\\end{equation}\nTaking the $L^2$-norm, we have\n\\begin{equation*}\n \\|\\mathcal S^\\eps(t)\\|_{L^2(\\Omega^c)}\\le \\frac{\\sqrt \\eps\n }{\\^{\\mu+3}}\\||y|^3u(t,y)\\|_{L^2(\\R^3)}\\lesssim \\frac{\\sqrt \\eps\n }{\\^{\\mu}},\n\\end{equation*}\nwhere we have used Proposition~\\ref{prop:extra-u}. On $\\Omega$\nhowever, the argument of the potential in Taylor's formula is not\nnecessarily going to infinity, so the decay of the potential is\napparently useless. Back to the definition of $\\mathcal L^\\eps$, that is leaving\nout Taylor's formula, we see that all the terms but the first one can\nbe easily estimated on $\\Omega$. Indeed, the definition of $\\Omega$ implies\n\\begin{equation*}\n |V(q(t))u(t,y)| \\lesssim \\frac{1}{\\^\\mu}|u(t,y)|\\lesssim\n \\frac{1}{\\^\\mu} \\left| \\frac{y\\sqrt \\eps}{t}\\right|^k |u(t,y)|,\n\\end{equation*}\nwhere $k$ will be chosen shortly. Taking the $L^2$ norm, we find\n\\begin{equation*}\n \\frac{1}{\\eps}\\|V(q(t))u(t)\\|_{L^2(\\Omega)} \\lesssim\n \\frac{\\eps^{k\/2-1}}{\\^{\\mu+k}}\\||y|^k u(t,y)\\|_{L^2(\\R^3)}\n \\lesssim \\frac{\\eps^{k\/2-1}}{\\^{\\mu}},\n\\end{equation*}\nwhere we have used Proposition~\\ref{prop:extra-u} again. Choosing\n$k=3$ yields the expected estimate. The last two terms in $\\mathcal\nL^\\eps$ can be estimated accordingly. 
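As an aside, the size of ${\\mathcal S}^\\eps$ predicted by Taylor's formula is easy to observe on a model potential. In the sketch below, the potential $V(x)=(1+x^2)^{-\\mu/2}$ in dimension one, together with the values of $\\mu$, $y$, $q$ and the range of $\\eps$, are arbitrary stand-ins used only to illustrate the $\\sqrt\\eps\\,|y|^3\\<q\\>^{-\\mu-3}$ scaling of the cubic remainder; no claim is made beyond this illustration.
\\begin{verbatim}
# Purely illustrative: cubic Taylor remainder for a decaying model potential in
# dimension 1.  V, mu, y, q below are arbitrary stand-ins for Assumption (hyp:V).
import numpy as np

mu = 2.5
def V(x):   return (1.0 + x * x) ** (-mu / 2.0)
def dV(x):  return -mu * x * (1.0 + x * x) ** (-mu / 2.0 - 1.0)
def d2V(x): return mu * (1.0 + x * x) ** (-mu / 2.0 - 2.0) * ((mu + 1.0) * x * x - 1.0)

def S(eps, y, q):   # (1/eps) x [ second order Taylor remainder of V at q ]
    return (V(q + y * np.sqrt(eps)) - V(q) - np.sqrt(eps) * dV(q) * y
            - 0.5 * eps * d2V(q) * y ** 2) / eps

y, q = 2.0, 15.0
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    ratio = abs(S(eps, y, q)) / (np.sqrt(eps) * abs(y) ** 3
                                 * (1.0 + q * q) ** (-(mu + 3.0) / 2.0))
    print(f"eps = {eps:7.0e}   |S|/(sqrt(eps)|y|^3 <q>^(-mu-3)) = {ratio:.3f}")
\\end{verbatim}
The ratio stabilizes as $\\eps\\to 0$, reflecting the cubic remainder and the decay of the third derivatives of $V$.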
For the first term in $\\mathcal\nL^\\eps$ however, we face the same problem as above: the argument of\n$V$ has to be considered as bounded. A heuristic argument goes as\nfollows. In view of Theorem~\\ref{theo:scatt-class},\n\\begin{equation*}\n u(t,y) \\Eq t \\infty e^{i\\frac{t}{2}\\Delta}u_+ \\Eq t \\infty\n \\frac{1}{t^{3\/2}}\\widehat u_+\\(\\frac{y}{t}\\)e^{i|y|^2\/(2t)},\n\\end{equation*}\nwhere the last behavior stems from standard analysis of the\nSchr\\\"odinger group (see e.g. \\cite{Rauch91}). In view of\nthe definition of $\\Omega$, we have, formally for $y\\in \\Omega$, \n\\begin{equation*}\n |u(t,y)|\\lesssim\n \\frac{1}{t^{3\/2}}\\sup_{ |z|\\ge \\eta}\\left |\\widehat\n u_+\\(\\frac{z}{\\sqrt\\eps}\\)\\right|. \n\\end{equation*}\nThen the idea is to keep the linear dispersion measured by the factor\n$t^{-3\/2}$ (which is integrable since $d=3$), and use decay properties\nfor $\\widehat u_+$ to gain powers of $\\eps$. To make this argument\nrigorous, we keep the idea \nthat $u$ must be assessed in $L^\\infty$ rather than in $L^2$, and write\n\\begin{equation*}\n \\frac{1}{\\eps}\\|V\\(q(t)+y\\sqrt \\eps\\)u(t,y)\\|_{L^2(\\Omega)} \\le\n \\frac{1}{\\eps}\\|u(t)\\|_{L^\\infty(\\Omega)} \\|V\\(q(t)+y\\sqrt \\eps\\)\\|_{L^2(\\Omega)} .\n\\end{equation*}\nFor the last factor, we have\n\\begin{equation*}\n \\|V\\(q(t)+y\\sqrt\n \\eps\\)\\|_{L^2(\\Omega)}\\le \\eps^{-3\/4}\\|V\\|_{L^2(\\R^3)},\n\\end{equation*}\nwhere the last norm is finite since $\\mu>2$. For the $L^\\infty$ norm of\n$u$, we use Gagliardo-Nirenberg inequality and the previous\nvector-fields. To take advantage of the localization in space,\nintroduce a non-negative cut-off function $\\chi\\in C^\\infty(\\R^3)$, such that:\n\\begin{equation*}\n \\chi(z)=\\left\\{\n \\begin{aligned}\n 1& \\text{ if }|z|\\ge \\eta,\\\\\n0 & \\text{ if }|z|\\le\\frac{\\eta}{2}.\n \\end{aligned}\n\\right.\n\\end{equation*}\nIn view of the definition of $\\Omega$, \n\\begin{equation*}\n \\|u(t)\\|_{L^\\infty(\\Omega)} \\le \\left\\|\n \\chi\\(\\frac{y\\sqrt\\eps}{t}\\)u(t,y)\\right\\|_{L^\\infty(\\R^3)}. \n\\end{equation*}\nNow with $B$ as defined in the proof of\nProposition~\\ref{prop:extra-u}, Gagliardo-Nirenberg inequality yields,\nfor any smooth function $f$\n (recall that $y\\in \\R^3$),\n \\begin{equation*}\n \\|f\\|_{L^\\infty(\\R^3)}\\lesssim\n \\frac{1}{t^{3\/2}}\\|f\\|_{L^2(\\R^3)}^{1\/4}\\|B^2(t)f\\|_{L^2(\\R^3)}^{3\/4}.\n \\end{equation*}\nWe use this inequality with\n\\begin{equation*}\n f(t,y) = \\chi\\(\\frac{y\\sqrt\\eps}{t}\\)u(t,y),\n\\end{equation*}\nand note that\n\\begin{equation*}\n B(t)f (t,y)= \\chi\\(\\frac{y\\sqrt\\eps}{t}\\)B(t)u(t,y) + i\\frac{\\sqrt\n \\eps}{t}W(t) \\nabla \\chi \\(\\frac{y\\sqrt\\eps}{t}\\) \\times u(t,y),\n\\end{equation*}\nwhere $W(t)$ stands for $W_\\pm$ or $t$. Recall that $t\\mapsto W(t)\/t$\nis bounded, so the last term is actually ``nice''. \nProceeding in the same way as above, we obtain\n\\begin{equation*}\n \\|u(t)\\|_{L^2(\\Omega)}\\lesssim\n \\left\\| \\left| \\frac{y\\sqrt \\eps}{t}\\right|^k\n u(t,y)\\right\\|_{L^2(\\Omega)}\\lesssim \\eps^{k\/2},\n\\end{equation*}\nprovided that $u_-\\in \\Sigma^k$. 
Similarly,\n\\begin{equation*}\n \\|B^2(t)u\\|_{L^2(\\Omega)} \\lesssim \\eps^{k\/2-1},\n\\end{equation*}\nand so \n\\begin{equation*}\n \\frac{1}{\\eps}\\|V\\(q(t)+y\\sqrt \\eps\\)u(t,y)\\|_{L^2(\\Omega)}\\lesssim \n \\frac{1}{t^{3\/2}}\\eps^{-7\/4 +k\/8+3(k\/2-1)\/4}= \\frac{\\eps^{k\/2-5\/2}}{t^{3\/2}}.\n\\end{equation*}\nTherefore, the $L^2$ estimate follows as soon as $k \\ge 6$. For the\n$L^{3\/2}$-estimate, we resume the same computations, and use the extra\nestimate: for all $s>1\/2$, \n\\begin{equation}\\label{eq:localizing}\n \\|f\\|_{L^{3\/2}(\\R^3)}\\lesssim \\|f\\|_{L^2(\\R^3)}^{1-1\/2s}\\||x|^sf\\|_{L^2(\\R^3)}^{1\/2s}.\n\\end{equation}\nThis estimate can easily be proven by writing\n\\begin{equation*}\n \\|f\\|_{L^{3\/2}(\\R^3)} \\le \\|f\\|_{L^{3\/2}(|y|R)},\n\\end{equation*}\nso H\\\"older inequality yields, provided that $s>1\/2$ (so that\n$y\\mapsto |y|^{-s}\\in L^6(|y|>R)$)\n\\begin{equation*}\n \\|f\\|_{L^{3\/2}(\\R^3)} \\le \\sqrt R \\|f\\|_{L^2} + \\frac{1}{R^{s-1\/2}}\\||x|^s f\\|_{L^2},\n\\end{equation*}\nand by optimizing in $R$. Now from \\eqref{eq:Spoint}, we have\n\\begin{align*}\n \\|\\mathcal S^\\eps(t)\\|_{L^{3\/2}(\\Omega^c)}&\\le \\frac{\\sqrt \\eps\n }{\\^{\\mu+3}}\\||y|^3u(t,y)\\|_{L^{3\/2}(\\R^d)}\\\\\n&\\lesssim \\frac{\\sqrt \\eps\n }{\\^{\\mu+3}}\\||y|^3u(t,y)\\|_{L^2(\\R^d)}^{1\/2}\\||y|^4u(t,y)\\|_{L^2(\\R^d)}^{1\/2}\\\\\n&\n\\lesssim \\frac{\\sqrt \\eps\n }{\\^{\\mu-1\/2}}\\lesssim \\frac{\\sqrt \\eps\n }{\\^{3\/2}}\n\\end{align*}\nwhere we have used \\eqref{eq:localizing} with $s=1$, \nProposition~\\ref{prop:extra-u}, and the fact that $\\mu>2$. \n\nOn $\\Omega$, we can repeat the computations from the $L^2$-estimate\n(up to incorporating \\eqref{eq:localizing}): for the last term, we\nnote that\n\\begin{equation*}\n \\frac{1}{\\eps}\\|V\\(q(t)+y\\sqrt \\eps\\)u(t,y)\\|_{L^{3\/2}(\\Omega)} \\le\n \\frac{1}{\\eps}\\|u(t)\\|_{L^\\infty(\\Omega)} \\|V\\(q(t)+y\\sqrt \\eps\\)\\|_{L^{3\/2}(\\Omega)} ,\n\\end{equation*}\nand that\n\\begin{equation*}\n \\|V\\(q(t)+y\\sqrt\n \\eps\\)\\|_{L^{3\/2}(\\Omega)}\\le \\eps^{-1}\\|V\\|_{L^{3\/2}(\\R^3)},\n\\end{equation*}\nwhere the last norm is finite since $\\mu>2$. Up to taking $u$ in\n$\\Sigma^7$, we conclude\n\\begin{equation*}\n \\|\\mathcal S^\\eps(t)\\|_{L^{3\/2}(\\R^3)}\\lesssim \\frac{\\sqrt\\eps}{\\^{3\/2}},\n\\end{equation*}\nand the proposition follows. \n\\end{proof}\n\n\\subsection{A priori estimate for the error in the critical norm}\n\\label{sec:priori-estim-error}\n\nIn this subsection, we prove:\n\\begin{proposition}\\label{prop:w-crit}\n Under the assumptions of Theorem~\\ref{theo:cv-unif}, the error\n $w^\\eps=\\psi^\\eps-\\varphi^\\eps$ satisfies the a priori estimate,\n for any $\\dot H^{1\/2}$-admissible pair \n $(q,r)$, \n \\begin{equation*}\n \\eps^{\\frac{1}{q}}\\|w^\\eps\\|_{L^q(\\R;L^r(\\R^3))}\\lesssim \\eps^{1\/4}. \n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n The reason for considering $\\dot H^{1\/2}$-admissible pairs is that\n the cubic three-dimensional Schr\\\"odinger equation is $\\dot\n H^{1\/2}$-critical; see e.g. \\cite{CW90}. The proof of\n Proposition~\\ref{prop:w-crit} is then very similar to the proof of\n \\cite[Proposition~2.3]{HoRo08}. 
\n\nAn important tool is the known estimate for the approximate solution\n$\\varphi^\\eps$: we have, in view of the fact\nthat $u,Bu\\in L^\\infty L^2$,\n\\begin{equation}\\label{eq:est-a-priori-phi}\n \\|\\varphi^\\eps(t)\\|_{L^r(\\R^3)}\\lesssim\n \\(\\frac{1}{\\\\sqrt\\eps}\\)^{3\\(\\frac{1}{2}-\\frac{1}{r}\\)},\\quad\n 2\\le r\\le 6.\n\\end{equation}\nNote that for an $\\dot H^{1\/2}$ admissible pair, we infer\n\\begin{equation*}\n \\|\\varphi^\\eps(t)\\|_{L^q(\\R;L^r(\\R^3))}\\lesssim\n \\eps^{-\\frac{3}{2}\\(\\frac{1}{2}-\\frac{1}{r}\\)} = \\eps^{-\\frac{1}{q}-\\frac{1}{4}},\n\\end{equation*}\nso Proposition~\\ref{prop:w-crit} shows a $\\sqrt\\eps$ gain\nfor $w^\\eps$ compared to $\\varphi^\\eps$, which is the order of\nmagnitude we eventually prove in $L^\\infty L^2$, and stated in\nTheorem~\\ref{theo:cv-unif}. \nLet $0<\\eta\\ll 1$, and set\n\\begin{equation*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I)} :=\\sup_{{(q,r)\\ \\dot\n H^{1\/2}-\\text{admissible}}\\atop 3\\le r\\le\n 6-\\eta}\\eps^{\\frac{1}{q}}\\|w^\\eps\\|_{L^q(I;L^r(\\R^3)}. \n\\end{equation*}\nDuhamel's formula for \\eqref{eq:restecrit} reads, given $w^\\eps_{\\mid\n t=-\\infty}=0$,\n\\begin{equation*}\n w^\\eps(t) =-i\\eps^{3\/2} \\int_{-\\infty}^t\n U_V^\\eps(t-s)\\(|\\psi^\\eps|^2\\psi^2-|\\varphi^\\eps|^2\\varphi^\\eps\\)(s)ds\n+i\\eps^{-1} \\int_{-\\infty}^t U_V^\\eps(t-s)\\mathcal L^\\eps(s)ds. \n\\end{equation*}\nSince we have the point-wise estimate\n\\begin{equation*}\n \\left|\n |\\psi^\\eps|^2\\psi^2-|\\varphi^\\eps|^2\\varphi^\\eps\\right|\\lesssim\n \\(|w^\\eps|^2+|\\varphi^\\eps|^2\\)|w^\\eps|, \n\\end{equation*}\nLemma~\\ref{lem:stri-inhom-eps} yields, with\n$(q_2,r_2)=(\\frac{10}{7},5)$ for the first term of the right hand\nside, and with $(q_2,r_2)=(2,3)$ for the second term,\n\\begin{align*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}&\\lesssim \\eps^{3\/2-7\/10}\\left\\|\n \\(|w^\\eps|^2+|\\varphi^\\eps|^2\\)w^\\eps\\right\\|_{L^{10\/3}_tL^{5\/4}}\n + \\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2_t L^{3\/2}}\\\\\n&\\lesssim \\eps^{4\/5}\\( \\|w^\\eps\\|_{L^{20}_tL^{10\/3}}^2 +\n\\|\\varphi^\\eps\\|_{L^{20}_tL^{10\/3}}^2 \\)\n\\|w^\\eps\\|_{L^{5}_tL^{5}}\n + \\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2_t L^{3\/2}},\n\\end{align*}\nwhere we have used H\\\"older inequality. Note that the pairs\n$(20,\\frac{10}{3})$ and $(5,5)$ are $\\dot H^{1\/2}$-admissible. Denote\nby \n\\begin{equation*}\n \\omega(t) =\\frac{1}{\\^{3\/5}}. \n\\end{equation*}\nThis function obviously belongs to $L^{20}(\\R)$. \nThe estimate \\eqref{eq:est-a-priori-phi} and the definition of the\nnorm $\\mathcal N^\\eps$ yield\n\\begin{equation*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}\\lesssim \\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}^3 + \n\\|\\omega\\|_{L^{20}(-\\infty,t)}^2 \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}\n + \\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2_t L^{3\/2}}.\n\\end{equation*}\nTaking $t\\ll -1$, we infer\n\\begin{equation*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}\\lesssim \\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}^3 \n + \\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2_t L^{3\/2}}\\lesssim \\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t)}^3 + \\eps^{1\/4},\n\\end{equation*}\nwhere we have use Proposition~\\ref{prop:est-source}. 
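Since the proof relies on a certain amount of numerology, we record the elementary verifications of the admissibility conditions and of the powers of $\\eps$ used above; the throw-away script below (exact rational arithmetic, $d=3$, $\\si=1$) merely re-checks identities stated in the text and proves nothing new.
\\begin{verbatim}
# Throw-away bookkeeping for the exponents used above (d = 3, sigma = 1),
# in exact rational arithmetic.
from fractions import Fraction as F

def admissible(q, r, s):    # \dot H^s-admissible in d = 3:  2/q + 3/r = 3/2 - s
    return F(2) / q + F(3) / r == F(3, 2) - s

print(all(admissible(q, r, F(1, 2))                  # \dot H^{1/2} pairs used above
          for q, r in [(F(8), F(4)), (F(20), F(10, 3)), (F(5), F(5))]))
print(all(admissible(q, r, F(-1, 2))                 # \dot H^{-1/2} pairs used above
          for q, r in [(F(10, 7), F(5)), (F(2), F(3))]))
print(admissible(F(8, 3), F(4), F(0)))               # the L^2-admissible pair (8/3, 4)

print(F(3, 2) - F(7, 10))            # 4/5 : prefactor of the cubic term
print(F(4, 5) - F(2, 20) - F(1, 5))  # 1/2 : net power of eps in front of ||w||^3
print(F(-3, 2) + F(7, 4))            # 1/4 : net power of eps from the source term
\\end{verbatim}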
We can now use a\nstandard bootstrap argument, as recalled in Section~\\ref{sec:class}.\nWe infer that for $t_1\\ll -1$,\n\\begin{equation*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t_1)}\\lesssim \\eps^{1\/4}.\n\\end{equation*}\nUsing Duhamel's formula again, we have\n\\begin{align*}\n U_V^\\eps(t-t_1)w^\\eps(t_1) &=-i\\eps^{3\/2} \\int_{-\\infty}^{t_1}\n U_V^\\eps(t-s)\\(|\\psi^\\eps|^2\\psi^2-|\\varphi^\\eps|^2\\varphi^\\eps\\)(s)ds\\\\\n&+i\\eps^{-1} \\int_{-\\infty}^{t_1} U_V^\\eps(t-s)\\mathcal L^\\eps(s)ds,\n\\end{align*}\nso we infer\n\\begin{align*}\n \\| U_V^\\eps(t-t_1)w^\\eps(t_1)\\|_{\\mathcal N^\\eps(\\R)}& \\lesssim \\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(-\\infty,t_1)}^3 + \n\\|\\omega\\|_{L^{20}(-\\infty,t_1)}^2 \\|w^\\eps\\|_{\\mathcal\n N^\\eps(-\\infty,t_1)}\\\\\n& \\quad + \\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2((-\\infty,t_1]; L^{3\/2})}\\\\\n&\\le\nC_0\\eps^{1\/4}.\n\\end{align*}\nWe now rewrite Duhamel's formula with some initial time $t_j$:\n\\begin{align*}\n w^\\eps(t) &= U_V^\\eps(t-t_j)w^\\eps(t_j)-i\\eps^{3\/2} \\int_{t_j}^t\n U_V^\\eps(t-s)\\(|\\psi^\\eps|^2\\psi^2-|\\varphi^\\eps|^2\\varphi^\\eps\\)(s)ds\\\\\n&\\quad\n+i\\eps^{-1} \\int_{t_j}^t U_V^\\eps(t-s)\\mathcal L^\\eps(s)ds. \n\\end{align*}\nFor $t\\ge t_j$ and $I=[t_j,t]$, the same estimates as above yield\n\\begin{align*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I)}&\\le \\|U_V^\\eps(\\cdot-t_j)w^\\eps(t_j)\\|_{\\mathcal N^\\eps(I)}\n + C\\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I)}^3 + \nC\\|\\omega\\|_{L^{20}(I)}^2 \\|w^\\eps\\|_{\\mathcal N^\\eps(I)}\\\\\n& \\quad+ C\\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2(I; L^{3\/2})},\n\\end{align*}\nwhere the above constant $C$ is independent of $\\eps, t_j$ and $t$. We\nsplit $\\R_t$ into finitely many intervals\n\\begin{equation*}\n \\R = (-\\infty,t_1]\\cup \\bigcup_{j=1}^N [t_j,t_{j+1}]\\cup\n[t_N,\\infty)=:\\bigcup_{j=0}^{N+1} I_j, \n\\end{equation*}\n on which \n\\begin{equation*}\n C\\|\\omega\\|_{L^{20}(I_j)}^2 \\le\\frac{1}{2},\n\\end{equation*}\nso that we have\n\\begin{align*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I_j)}&\\le 2\\|U_V^\\eps(\\cdot-t_j)w^\\eps(t_j)\\|_{\\mathcal N^\\eps(I_j)}\n +2 C\\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I_j)}^3 +\n 2 C\\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2(I_j; L^{3\/2})}\\\\\n&\\le 2\\|U_V^\\eps(\\cdot-t_j)w^\\eps(t_j)\\|_{\\mathcal N^\\eps(I_j)}\n +2 C\\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I_j)}^3 + \\tilde C\n \\eps^{1\/4}\\left\\|\\^{-3\/2}\\right\\|_{L^2(I_j)}, \n\\end{align*}\nwhere we have used Proposition~\\ref{prop:est-source} again. Since we\nhave\n\\begin{equation*}\n \\| U_V^\\eps(t-t_1)w^\\eps(t_1)\\|_{\\mathcal N^\\eps(\\R)}\\le C_0\\eps^{1\/4},\n\\end{equation*}\nthe bootstrap argument shows that at least for $\\eps\\le \\eps_1$\n($\\eps_1>0$),\n\\begin{equation*}\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I_1)}\\le 3\n \\|U_V^\\eps(\\cdot-t_1)w^\\eps(t_1)\\|_{\\mathcal N^\\eps(I_1)} + \\frac{3}{2} \\tilde C\n \\eps^{1\/4}\\left\\|\\^{-3\/2}\\right\\|_{L^2(I_1)}.\n\\end{equation*}\nOn the other hand, Duhamel's formula implies\n\\begin{align*}\n U_V^\\eps(t-t_{j+1})w^\\eps(t_{j+1}) &=\n U_V^\\eps(t-t_j)w^\\eps(t_j)\n+i\\eps^{-1} \\int_{t_j}^{t_{j+1}} U_V^\\eps(t-s)\\mathcal L^\\eps(s)ds\\\\\n&\\quad -i\\eps^{3\/2}\n \\int_{t_j}^{t_{j+1}} \n U_V^\\eps(t-s)\\(|\\psi^\\eps|^2\\psi^2-|\\varphi^\\eps|^2\\varphi^\\eps\\)(s)ds\n. 
\n\\end{align*}\nTherefore, we infer\n\\begin{align*}\n \\| U_V^\\eps(t-t_{j+1})w^\\eps(t_{j+1})\\|_{\\mathcal N^\\eps(\\R)}&\\le\n \\|U_V^\\eps(t-t_{j})w^\\eps(t_{j})\\|_{\\mathcal N^\\eps(\\R)}+ \n + C\\sqrt\\eps\n \\|w^\\eps\\|_{\\mathcal N^\\eps(I_j)}^3 \\\\\n& \\quad + \nC\\|\\omega\\|_{L^{20}(I_j)}^2 \\|w^\\eps\\|_{\\mathcal N^\\eps(I_j)}\n+ C\\eps^{-3\/2}\\|\\mathcal L^\\eps\\|_{L^2(I_j; L^{3\/2})}.\n\\end{align*}\nBy induction (carrying over finitely many steps), we conclude\n\\begin{equation*}\n \\|U_V^\\eps(t-t_{j})w^\\eps(t_{j})\\|_{\\mathcal N^\\eps(\\R)}\n =\\O\\(\\eps^{1\/4}\\),\\quad 0\\le j\\le N+1,\n\\end{equation*}\nand $\\|w^\\eps\\|_{\\mathcal N^\\eps(\\R)}=\\O\\(\\eps^{1\/4}\\)$ as announced. \n\\end{proof}\n\n\\subsection{End of the argument}\n\\label{sec:end-argument}\n\nResume the estimate \\eqref{eq:w-presque} with the $L^2$-admissible\npair $(q_1,r_1)= (\\frac{8}{3},4)$:\n\\begin{equation*}\n \\eps^{3\/8}\\|w^\\eps\\|_{L^{8\/3}_t L^{4}} \\lesssim\n \\eps^{3\/4}\\( \\|w^\\eps\\|^2_{L^8_t L^4}+ \\|\\varphi^\\eps\\|^2_{L^8_t\n L^4}\\) \\eps^{3\/8}\\|w^\\eps\\|_{L^{8\/3}_tL^4} +\n \\frac{1}{\\eps}\\|\\mathcal L^\\eps\\|_{L^1_tL^2}.\n\\end{equation*}\nFrom Proposition~\\ref{prop:w-crit} (the pair $(8,4)$ is $\\dot H^{1\/2}$-admissible),\n\\begin{equation*}\n \\|w^\\eps\\|_{L^8(\\R; L^4)}\\lesssim \\eps^{1\/8},\n\\end{equation*}\nand we have seen in the course of the proof that\n\\begin{equation*}\n \\|\\varphi^\\eps\\|_{L^8(\\R; L^4)}\\lesssim \\eps^{-3\/8}.\n\\end{equation*}\nTherefore, we can split $\\R_t$ into finitely many intervals, in a way\nwhich is independent of $\\eps $, so that \n\\begin{equation*}\n \\eps^{3\/4}\\( \\|w^\\eps\\|^2_{L^8(I; L^4)}+ \\|\\varphi^\\eps\\|^2_{L^8(I;\n L^4)}\\)\\le \\eta \n\\end{equation*}\non each of these intervals, with $\\eta$ so small that we infer\n\\begin{equation*}\n \\eps^{3\/8}\\|w^\\eps\\|_{L^{8\/3}(\\R; L^{4})} \\lesssim\n \\frac{1}{\\eps}\\|\\mathcal L^\\eps\\|_{L^1(\\R;L^2)}\\lesssim \\sqrt\\eps,\n\\end{equation*}\nwhere we have used Proposition~\\ref{prop:est-source}. Plugging this\nestimate into \\eqref{eq:w-presque} and now taking $(q_1,r_1)$,\nTheorem~\\ref{theo:cv-unif} follows.\n\n\n\n\n\n\\section{Superposition}\n\\label{sec:superp}\n\nIn this section, we sketch the proof of\nCorollary~\\ref{cor:decoupling}. This result heavily relies on the\n(finite time) superposition \n principle established in \\cite{CaFe11}, in the case of two initial\n coherent states with different centers in phase space. We present\n the argument in the case of two initial wave packets, and explain\n why it can be generalized to any finite number of initial coherent\n states. \n\\smallbreak\n\nFollowing the proof of\n\\cite[Proposition~1.14]{CaFe11}, we introduce the approximate\nevolution of each individual initial wave packet:\n\\begin{equation*}\n \\varphi_j^\\eps(t,x)=\\eps^{-3\/4} u_j\n\\left(t,\\frac{x-q_j(t)}{\\sqrt\\eps}\\right)e^{i\\left(S_j(t)+p_j(t)\\cdot\n (x-q_j(t))\\right)\/\\eps},\n\\end{equation*}\nwhere $u_j$ solves \\eqref{eq:u} with initial datum $a_j$. In the proof\nof \\cite[Proposition~1.14]{CaFe11}, the main remark is that all that\nis needed is the control of a new source term, corresponding to the\ninteractions of the approximate solutions. 
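Before defining the global error, let us illustrate the mechanism at stake: profiles attached to distinct trajectories in phase space have a negligible overlap once they are measured at scale $\\sqrt\\eps$. The sketch below uses Gaussian profiles and a fixed separation, which is an arbitrary simplification of the actual interaction term estimated further down.
\\begin{verbatim}
# Purely illustrative: overlap of two profiles whose centers are separated by
# a/sqrt(eps), as in the interaction source term; Gaussian profiles and the
# separation a are arbitrary choices.
import numpy as np

y = np.linspace(-50.0, 50.0, 20001)
h = y[1] - y[0]
u2 = np.exp(-(y - 1.0) ** 2)
a = 0.5

for eps in (1e-1, 1e-2, 1e-3):
    u1_shifted = np.exp(-(y - a / np.sqrt(eps)) ** 2)
    overlap = np.sqrt(np.sum((u1_shifted * u2) ** 2) * h)
    print(f"eps = {eps:7.0e}   ||u1(.-a/sqrt(eps)) u2||_{{L^2}} = {overlap:.3e}")
\\end{verbatim}
For Gaussians the overlap is $\\O(\\eps^\\infty)$; for general data in $\\Sigma$ one only gets a rate, cf. \\cite{CaFe11}.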
Set \n\\begin{equation*}\n w^\\eps = \\psi^\\eps -\\varphi_1^\\eps-\\varphi_2^\\eps.\n\\end{equation*}\nIt solves \n\\begin{equation*}\n i\\eps\\d_t w^\\eps +\\frac{\\eps^2}{2}\\Delta w^\\eps = Vw^\\eps -\\mathcal\n L^\\eps+\\mathcal N_I^\\eps+ \\mathcal N_s^\\eps\\quad ;\\quad w^\\eps_{\\mid t=0}=0,\n\\end{equation*}\nwhere the linear source term is the same as in Section~\\ref{sec:cv}\n(except than now we consider the sums of two such terms), $\\mathcal\nN_s^\\eps$ is the semilinear term\n\\begin{equation*}\n \\mathcal N_s^\\eps =\\eps^{5\/2}\\( |w^\\eps+\n \\varphi_1^\\eps+\\varphi_2^\\eps|^2 (w^\\eps+\n \\varphi_1^\\eps+\\varphi_2^\\eps) - |\n \\varphi_1^\\eps+\\varphi_2^\\eps|^2 (\n \\varphi_1^\\eps+\\varphi_2^\\eps)\\),\n\\end{equation*}\nand $\\mathcal N_I^\\eps$ is precisely the new interaction term,\n\\begin{equation*}\n \\mathcal N_I^\\eps =\\eps^{5\/2}\\( |\n \\varphi_1^\\eps+\\varphi_2^\\eps|^2 (\n \\varphi_1^\\eps+\\varphi_2^\\eps) - |\n \\varphi_1^\\eps|^2 \\varphi_1^\\eps-|\n \\varphi_2^\\eps|^2 \\varphi_2^\\eps\\).\n\\end{equation*}\nIn \\cite{CaFe11}, it is proven that if $(q_{01},p_{01})\\not\n=(q_{02},p_{02})$, then the possible interactions between\n$\\varphi_1^\\eps$ and $\\varphi_2^\\eps$ are negligible on every finite\ntime interval, in the sense that\n\\begin{equation*}\n \\frac{1}{\\eps} \\| \\mathcal N_I^\\eps\\|_{L^1(0,T;L^2)}\\le C(T,\\gamma) \\eps^\\gamma,\n\\end{equation*}\nfor every $\\gamma<1\/2$. We infer that\n$\\|w^\\eps\\|_{L^\\infty(0,T;L^2)}=\\O(\\eps^\\gamma)$ for every $T>0$. For\n$t\\ge T$, we have\n\\begin{align*}\n \\frac{1}{\\eps} \\| \\mathcal N_I^\\eps(t)\\|_{L^2}& \\lesssim\n \\sum_{\\ell_1,\\ell_2\\ge 1,\\ \\ell_1+\\ell_2 =3}\\left\\|\n u_1^{\\ell_1}\\(t,y-\\frac{q_1(t)-q_2(t)}{\\sqrt\\eps } \\)\n u_2^{\\ell_2}(t,y)\\right\\|_{L^2}\\\\\n&\\lesssim\n \\sum_{\\ell_1,\\ell_2\\ge 1,\\ \\ell_1+\\ell_2 =3}\\|\n u_1(t)\\|_{L^\\infty}^{\\ell_1} \n\\| u_2(t)\\|_{L^\\infty}^{\\ell_2-1} \n\\| u_2(t)\\|_{L^2}\\lesssim \\frac{1}{t^3}.\n\\end{align*}\nSimilarly, resuming the same estimates as in the proof of\nProposition~\\ref{prop:est-source}, \n\\begin{equation*}\n \\frac{1}{\\eps} \\| \\mathcal N_I^\\eps(t)\\|_{L^{3\/2}}\\lesssim\n \\frac{\\eps^{1\/4}}{t^{5\/2}}. \n\\end{equation*}\nBy resuming the proof of Theorem~\\ref{theo:cv-unif} on the time\ninterval $[T,\\infty)$, we infer\n\\begin{equation*}\n \\|w^\\eps\\|_{L^\\infty(0,\\infty;L^2)}\\le C(T,\\gamma)\\eps^\\gamma + \\frac{C}{T^2}.\n\\end{equation*}\nTherefore, \n\\begin{equation*}\n \\limsup_{\\eps\\to 0} \\|w^\\eps\\|_{L^\\infty(0,\\infty;L^2)}\\lesssim \\frac{1}{T^2},\n\\end{equation*}\nfor all $T>0$, hence the result by letting $T\\to \\infty$. \n\\smallbreak\n\nIn the case of more than two initial coherent states, the idea is that\nthe nonlinear interaction term, $\\mathcal N_I^\\eps$, always contains\nthe product of two approximate solutions corresponding to different\ntrajectories in phase space. 
This is enough for the proof of\n\\cite[Proposition~1.14]{CaFe11} to go through: we always have\n\\begin{align*}\n \\frac{1}{\\eps} \\| \\mathcal N_I^\\eps(t)\\|_{L^2}& \\\\\n\\lesssim\n \\sum_{{j\\not =k, \\ \\ell_j,\\ell_k\\ge 1}\\atop {\\ell_j+\\ell_k+\\ell_m =3}}&\\left\\|\n u_j^{\\ell_j}\\(t,y-\\frac{q_j(t)-q_k(t)}{\\sqrt\\eps } \\)\n u_k^{\\ell_k}(t,y)u_m^{\\ell_m}\\(t,y-\\frac{q_m(t)-q_k(t)}{\\sqrt\\eps } \\)\\right\\|_{L^2}\\\\\n\\lesssim\n \\sum_{{j\\not =k,\\ \\ell_j,\\ell_k\\ge 1}\\atop {\\ell_j+\\ell_k+\\ell_m\n =3}}&\\|u_m(t)\\|_{L^\\infty}^{\\ell_m}\\left\\| \n u_j^{\\ell_j}\\(t,y-\\frac{q_j(t)-q_k(t)}{\\sqrt\\eps } \\)\n u_k^{\\ell_k}(t,y)\\right\\|_{L^2},\n\\end{align*}\nso the last factor is exactly the one considered in \\cite{CaFe11} and\n above. \n\\subsection*{Acknowledgements}\nThe author is grateful to Jean-Fran\\c cois Bony, Clotilde Fermanian,\nIsabelle Gallagher and Fabricio Maci\\`a for fruitful discussions about\nthis work. \n\n\\bibliographystyle{siam}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzito b/data_all_eng_slimpj/shuffled/split2/finalzito new file mode 100644 index 0000000000000000000000000000000000000000..7a1356ef727861a7bb8c3226ba466e950b983f8d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzito @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{Intro}Introduction}\n\nOver four decades passed since it was first shown that plasmas and\nbeam-plasma systems immersed in an external magnetic field can\nsupport travelling electromagnetic waves with specific features.\nThese waves propagate parallel to the applied magnetic field being\ncircularly polarized in a plane transverse to the direction of\npropagation. It has become conventional in the physics of\nmagnetized plasmas to call such structures waves in the whistler\nmode.\n\nAlthough the linear stability properties of the electromagnetic\nwaves in the whistler mode are relatively well studied\n\\cite{Weibel,Neufeld,Bell,Sudan}, there is a serious gap in the\nunderstanding of their nonlinear behaviour. Chen et al.\n\\cite{Wurtele} have shown that electromagnetic whistler waves can\nbe considered as complementary to the nonlinear travelling\nelectrostatic waves, known as the Bernstein-Greene-Kruskal (BGK)\nmodes \\cite{BGK}. While the BGK modes are longitudinal, the\nwhistler modes are transverse, in other words, the components of\nthe electric and magnetic field of the whistler wave parallel to\nthe external magnetic field are both zero. The study of the\nnonlinear behaviour of whistler waves has been initiated by\nTaniuti and Washimi \\cite{Taniuti}, who obtained a nonlinear\nSchr\\\"{o}dinger equation for the slowly varying amplitude (see\nalso Reference \\cite{Shukla}).\n\nThe present paper is aimed at filling the gap in the understanding\nof the nonlinear evolution of whistler waves. 
The method adopted\nhere is the renormalization group (RG) method \\cite{Oono,Tzenov}.\nThe basic feature of this approach is that it provides a\nconvenient and straightforward tool to obtain an adequate\ndescription of the physically essential properties of\nself-organization and formation of patterns in complex systems.\nCoherent structures which result from the nonlinear interaction\nbetween plane waves evolve on time and\/or spatial scales\ncomparatively large compared to those the fast oscillations occur.\nThe RG method can be considered as a powerful systematic procedure\nto separate the relatively slow dynamics from the fast one, which\nis of no considerable physical relevance. In a context similar to\nthat of the present paper, it has been successfully applied by one\nof the authors \\cite{Tzenov,Tzenov1} to study collective effects\nin intense charged-particle beams.\n\nThe paper is organized as follows. In the next section, we state\nthe basic equations which will be the subject of the\nrenormalization group reduction in section III. Starting from a\nsingle equation [see equation (\\ref{Waveeqallord})] for the\nelectromagnetic vector potential, we obtain a formal perturbation\nexpansion of its solution to second order. As expected, it\ncontains secular terms proportional to powers of the time variable\nwhich is the only renormalization parameter adopted in our\napproach. In section IV, the arbitrary constant amplitudes of the\nperturbation expansion are renormalized such as to eliminate the\nsecular terms. As a result, a set of equations for the\nrenormalized slowly varying amplitudes is obtained, known as the\nrenormalization group equations (RGEs). These equations comprise\nan infinite system of coupled nonlinear Schr\\\"{o}dinger equations.\nIn section V, the latter are analyzed in the simplest case.\nFinally, section VI is dedicated to discussion and conclusions.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Formulation of the Problem and Basic Equations}\n\nPlasmas and beam-plasma systems considered in the present paper\nare assumed to be weakly collisional. Therefore, the dynamics of\nplasma species is well described by the hydrodynamic equations\ncoupled with the equations for the electromagnetic self-fields. We\nstart with the equations for plasma in an external constant\nmagnetic field ${\\bf B}_0$, which can be written as follows\n\\begin{equation}\n{\\frac {\\partial n_a} {\\partial t}} + \\nabla \\cdot {\\left( n_a\n{\\bf V}_a \\right)} = 0, \\label{Continuity}\n\\end{equation}\n\\begin{equation}\n{\\frac {{\\rm D}_a {\\bf V}_a} {{\\rm D} t}} = - {\\frac {k_B T_a}\n{m_a n_a}} \\nabla n_a + {\\frac {e q_a} {m_a}} {\\left[ {\\bf E} +\n{\\bf V}_a \\times {\\left( {\\bf B}_0 + {\\bf B} \\right)} \\right]},\n\\label{Mombalance}\n\\end{equation}\nwhere $n_a$ and ${\\bf V}_a$ are the density and the current\nvelocity of the species $a$. Furthermore, $m_a$, $q_a$ and $T_a$\nare the mass, the relative charge and the temperature,\nrespectively, while $k_B$ is the Boltzmann constant. The\nsubstantional derivative on the left-hand-side of equation\n(\\ref{Mombalance}) is defined as\n\\begin{equation}\n{\\frac {{\\rm D}_a} {{\\rm D} t}} = {\\frac {\\partial} {\\partial t}}\n+ {\\bf V}_a \\cdot \\nabla. 
\\label{Substant}\n\\end{equation}\nThe electromagnetic self-fields ${\\bf E}$ and ${\\bf B}$ can be\nobtained in terms of the electromagnetic vector ${\\bf A}$ and\nscalar $\\varphi$ potentials according to the well-known relations\n\\begin{equation}\n{\\bf E} = - \\nabla \\varphi - {\\frac {\\partial {\\bf A}} {\\partial\nt}}, \\qquad {\\bf B} = \\nabla \\times {\\bf A}. \\label{Elmagfield}\n\\end{equation}\nThe latter satisfy the wave equations\n\\begin{equation}\n\\Box {\\bf A} = - \\mu_0 e \\sum \\limits_a n_a q_a {\\bf V}_a, \\qquad\n\\Box \\varphi = - {\\frac {e} {\\epsilon_0}} \\sum \\limits_a n_a q_a,\n\\label{Waveequatvec}\n\\end{equation}\nin the Lorentz gauge\n\\begin{equation}\n{\\frac {1} {c^2}} {\\frac {\\partial \\varphi} {\\partial t}} + \\nabla\n\\cdot {\\bf A} = 0. \\label{Lorgauge}\n\\end{equation}\nHere $\\Box$ denotes the well-known d'Alembert operator. In what\nfollows, we consider the case of a quasineutral plasma\n\\begin{equation}\n\\sum \\limits_a n_a q_a = 0, \\label{Quasineutr}\n\\end{equation}\nin a constant external magnetic field along the $x$-axis ${\\bf\nB}_0 = {\\left( B_0, 0, 0 \\right)}$. Then, equations\n(\\ref{Continuity})--(\\ref{Lorgauge}) possess a stationary solution\n\\begin{equation}\nn_a = n_{a0} = {\\rm const}, \\quad {\\bf V}_a = 0, \\quad {\\bf A} =\n0, \\quad \\varphi = 0. \\label{Statsol}\n\\end{equation}\nThe frequency of the wave will be taken as much higher than the\nion-cyclotron frequency. Therefore, we can further neglect the ion\nmotion and scale the hydrodynamic and field variables as\n\\begin{equation}\nn_e = n_0 + \\epsilon N, \\quad {\\bf V}_e = \\epsilon {\\bf V}, \\quad\n{\\bf A} \\longrightarrow \\epsilon {\\bf A}, \\quad \\varphi\n\\longrightarrow \\epsilon \\varphi, \\label{Scale}\n\\end{equation}\nwhere $\\epsilon$ is a formal small parameter introduced for\nconvenience, which will be set equal to one at the end of the\ncalculations. Thus, the basic equations to be used for the\nsubsequent analysis can be written in the form\n\\begin{equation}\n{\\frac {\\partial N} {\\partial t}} + n_0 \\nabla \\cdot {\\bf V} +\n\\epsilon \\nabla \\cdot {\\left( N {\\bf V} \\right)} = 0,\n\\label{Continuit}\n\\end{equation}\n\\begin{equation}\n{\\frac {\\partial {\\bf V}} {\\partial t}} + \\epsilon {\\bf V} \\cdot\n\\nabla {\\bf V} = - {\\frac {k_B T} {m {\\left( n_0 + \\epsilon N\n\\right)}}} \\nabla N \\nonumber\n\\end{equation}\n\\begin{equation}\n- {\\frac {e} {m}} {\\left[ {\\bf E} + {\\bf V} \\times {\\left( {\\bf\nB}_0 + \\epsilon {\\bf B} \\right)} \\right]}, \\label{Mombalanc}\n\\end{equation}\n\\begin{equation}\n\\Box {\\bf A} = \\mu_0 e {\\left( n_0 + \\epsilon N \\right)} {\\bf V},\n\\qquad {\\frac {1} {c^2}} {\\frac {\\partial \\varphi} {\\partial t}} +\n\\nabla \\cdot {\\bf A} = 0. 
\\label{Waveequat}\n\\end{equation}\nBefore we continue with the renormalization group reduction of the\nsystem of equations (\\ref{Continuit})--(\\ref{Waveequat}) in the\nnext section, let us assume that the actual dependence of the\nquantities $N$, ${\\bf V}$, ${\\bf A}$ and $\\varphi$ on the spatial\nvariables is represented by the expression\n\\begin{equation}\n{\\widehat{\\Psi}} = {\\widehat{\\Psi}} {\\left( {\\bf x}, {\\bf X}; t\n\\right)}, \\qquad {\\widehat{\\Psi}} = {\\left( N, {\\bf V}, {\\bf A},\n\\varphi \\right)}, \\label{Actualdep}\n\\end{equation}\nwhere ${\\bf X} = \\epsilon {\\bf x}$ is a slow spatial variable.\nThus, the only renormalization parameter left at our disposal is\nthe time $t$ which will prove extremely convenient and simplify\ntedious algebra in the sequel.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Renormalization Group Reduction of the Magnetohydrodynamic\nEquations}\n\nFollowing the standard procedure of the renormalization group\nmethod, we represent ${\\widehat{\\Psi}}$ as a perturbation\nexpansion\n\\begin{equation}\n{\\widehat{\\Psi}} = \\sum \\limits_{n=0}^{\\infty} \\epsilon^n\n{\\widehat{\\Psi}}_n, \\label{Perturbexp}\n\\end{equation}\nin the formal small parameter $\\epsilon$. The next step consists\nin expanding the system of hydrodynamic and field equations\n(\\ref{Continuit})-(\\ref{Waveequat}) in the small parameter\n$\\epsilon$, and obtaining their naive perturbation solution order\nby order. Note that in all orders the perturbation equations\nacquire the general form\n\\begin{equation}\n{\\frac {\\partial N_n} {\\partial t}} + n_0 \\nabla \\cdot {\\bf V}_n =\n\\alpha_n, \\label{Continuitn}\n\\end{equation}\n\\begin{equation}\n{\\frac {\\partial {\\bf V}_n} {\\partial t}} = - {\\frac {v_T^2}\n{n_0}} \\nabla N_n - {\\frac {e} {m}} {\\bf E}_n - \\omega_c {\\bf V}_n\n\\times {\\bf e}_x + {\\bf W}_n, \\label{Mombalancn}\n\\end{equation}\n\\begin{equation}\n\\Box {\\bf A}_n = \\mu_0 e n_0 {\\bf V}_n + {\\bf U}_n, \\qquad {\\frac\n{1} {c^2}} {\\frac {\\partial \\varphi_n} {\\partial t}} + \\nabla\n\\cdot {\\bf A}_n = \\beta_n, \\label{Waveequatn}\n\\end{equation}\nwhere $\\alpha_n$, $\\beta_n$, ${\\bf U}_n$ and ${\\bf W}_n$ are\nquantities, that have been already determined from previous\norders. Here\n\\begin{equation}\nv_T^2 = {\\frac {k_B T} {m}}, \\qquad \\omega_c = {\\frac {e B_0} {m}}\n\\label{Parameters}\n\\end{equation}\nare the thermal velocity of electrons and the electron-cyclotron\nfrequency, respectively and ${\\bf e}_x = {\\left( 1, 0, 0 \\right)}$\nis the unit vector in the $x$-direction. Manipulating in an\nobvious manner equations (\\ref{Continuitn})--(\\ref{Waveequatn}),\nit is possible to obtain a single equation for ${\\bf A}_n$. 
The\nlatter reads as\n\\begin{equation}\n\\Box {\\frac {\\partial^2 {\\bf A}_n} {\\partial t^2}} - v_T^2 \\Box\n\\nabla {\\left( \\nabla \\cdot {\\bf A}_n \\right)} + \\omega_c \\Box\n{\\frac {\\partial {\\bf A}_n} {\\partial t}} \\times {\\bf e}_x\n\\nonumber\n\\end{equation}\n\\begin{equation}\n- {\\frac {\\omega_p^2} {c^2}} {\\frac {\\partial^2 {\\bf A}_n}\n{\\partial t^2}} + \\omega_p^2 \\nabla {\\left( \\nabla \\cdot {\\bf A}_n\n\\right)} = \\mu_0 e n_0 {\\frac {\\partial {\\bf W}_n} {\\partial t}} +\n{\\frac {\\partial^2 {\\bf U}_n} {\\partial t^2}} \\nonumber\n\\end{equation}\n\\begin{equation}\n- \\mu_0 e v_T^2 \\nabla \\alpha_n - v_T^2 \\nabla {\\left( \\nabla\n\\cdot {\\bf U}_n \\right)} + \\omega_c {\\frac {\\partial {\\bf U}_n}\n{\\partial t}} \\times {\\bf e}_x + \\omega_p^2 \\nabla \\beta_n,\n\\label{Waveeqallord}\n\\end{equation}\nwhere\n\\begin{equation}\n\\omega_p^2 = {\\frac {e^2 n_0} {\\epsilon_0 m}}, \\label{Plasmafreq}\n\\end{equation}\nis the electron plasma frequency. Note that the thermal velocity\n$v_T$ as defined by equation (\\ref{Parameters}) can be\nalternatively expressed according to the expression\n\\begin{equation}\nv_T = \\omega_p r_D, \\qquad r_D^2 = {\\frac {\\epsilon_0 k_B T} {e^2\nn_0}}, \\label{Thermvel}\n\\end{equation}\nwhere $r_D$ is the electron Debye radius. Equation\n(\\ref{Waveeqallord}) represents the starting point for the\nrenormalization group reduction, the final goal of which is to\nobtain a description of the relatively slow dynamics leading to\nformation of patterns and coherent structures.\n\nLet us proceed order by order. We assume that the dependence on\nthe fast spatial variables ${\\bf x} = {\\left( x, y, z \\right)}$ is\nthrough the longitudinal (parallel to the external magnetic field\n${\\bf B}_0$) $x$-coordinate only. The solution to the zero-order\nperturbation equations (\\ref{Waveeqallord}) can be written as\n\\begin{equation}\n{\\bf A}_0 = \\sum \\limits_{k} {\\bf A}_{k}^{(0)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}}, \\label{Zeroordera}\n\\end{equation}\nwhere\n\\begin{equation}\n\\psi_{k} {\\left( x; t \\right)} = k x - \\omega_{k} t, \\label{Phase}\n\\end{equation}\nand ${\\cal A}_{k}$ is an infinite set of constant complex\namplitudes, which will be the subject of the renormalization\nprocedure in the sequel. Here \"constant\" means that the amplitudes\n${\\cal A}_{k}$ do not depend on the fast spatial variable $x$ and\non the time $t$, however, it can depend on the slow spatial\nvariables ${\\bf X}$. The summation sign in equation\n(\\ref{Zeroordera}) and throughout the paper implies summation over\nthe wave number $k$ in the case where it takes discrete values, or\nintegration in the continuous case. From the dispersion equation\n\\begin{equation}\n{\\cal D} {\\left( k; \\omega_{k} \\right)} = \\omega_k^2 {\\left[\n\\omega_k^2 {\\left( \\Box_k - {\\frac {\\omega_p^2} {c^2}} \\right)}^2\n- \\omega_c^2 \\Box_k^2 \\right]} = 0, \\label{Disperequat}\n\\end{equation}\nit follows that the wave frequency $\\omega_{k}$ can be expressed\nin terms of the wave number $k$, where the Fourier-image $\\Box_k$\nof the d'Alembert operator can be written according to\n\\begin{equation}\n\\Box_{k} = {\\frac {\\omega_{k}^2} {c^2}} - k^2. 
\\label{Dalembert}\n\\end{equation}\nMoreover, it can be verified in a straightforward manner that the\nconstant vector ${\\bf A}_{k}^{(0)}$ can be expressed as\n\\begin{equation}\n{\\bf A}_{k}^{(0)} = {\\left( 0, 1, -i {\\rm sgn} (k) \\right)},\n\\label{Constvect}\n\\end{equation}\nwhere ${\\rm sgn} (k)$ is the well-known sign-function. Details\nconcerning the derivation of the dispersion law\n(\\ref{Disperequat}) and equation (\\ref{Constvect}) can be found in\nthe Appendix. Note that equation (\\ref{Constvect}) is an\nalternative representation of the solvability condition\n(\\ref{Appsolcond}). It is important to emphasize that\n\\begin{equation}\n\\omega_{-k} = - \\omega_k, \\qquad {\\cal A}_{-k} = {\\cal\nA}_{k}^{\\ast}, \\label{Importnote}\n\\end{equation}\nwhere the asterisk denotes complex conjugation. The latter assures\nthat the vector potential as defined by equation\n(\\ref{Zeroordera}) is a real quantity. The zero-order current\nvelocity ${\\bf V}_0$ obtained directly from the first equation\n(\\ref{Waveequatn}) can be written as\n\\begin{equation}\n{\\bf V}_0 = \\sum \\limits_{k} {\\bf V}_{k}^{(0)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}}, \\qquad {\\bf V}_{k}^{(0)} = {\\frac {\\Box_{k}}\n{\\mu_0 e n_0}} {\\bf A}_{k}^{(0)}. \\label{Zeroorderv}\n\\end{equation}\nIn addition, the zero-order density, scalar potential and magnetic\nfield are represented by the expressions\n\\begin{equation}\nN_0 \\equiv 0, \\qquad \\varphi_0 \\equiv 0, \\qquad {\\bf B}_0 = \\sum\n\\limits_{k} {\\bf B}_{k}^{(0)} {\\cal A}_{k} {\\rm e}^{i \\psi_{k}},\n\\label{Zeroordern}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\bf B}_{k}^{(0)} = -k {\\bf A}_{k}^{(0)} {\\rm sgn} (k) = {\\left(\n0, -k {\\rm sgn} (k), ik \\right)}. \\label{Zeroorderb}\n\\end{equation}\n\nIt has been mentioned that the first-order \"source terms\" on the\nright-hand-side of equation (\\ref{Waveeqallord}) can be expressed\nvia quantities already known from zero order. Thus, we have\n\\begin{equation}\n\\alpha_1 = - n_0 {\\widehat{\\nabla}} \\cdot {\\bf V}_0, \\qquad\n\\beta_1 = - {\\widehat{\\nabla}} \\cdot {\\bf A}_0,\n\\label{Firstordalp}\n\\end{equation}\n\\begin{equation}\n{\\bf U}_1 = - 2 \\nabla \\cdot {\\widehat{\\nabla}} {\\bf A}_0, \\qquad\n{\\bf W}_1 = - {\\frac {e} {m}} {\\bf V}_0 \\times {\\bf B}_0,\n\\label{Firstordu}\n\\end{equation}\nwhere the shorthand notation\n\\begin{equation}\n{\\widehat{\\nabla}} = {\\frac {\\partial} {\\partial {\\bf X}}}\n\\label{Firstorddef}\n\\end{equation}\nhas been introduced. Note that the vector ${\\bf W}_1$ representing\nthe zero-order Lorentz force has the only nonzero component along\nthe external magnetic field, that is\n\\begin{equation}\n{\\bf W}_1 = {\\bf e}_x \\sum \\limits_{k,l} \\alpha_{kl} {\\cal A}_k\n{\\cal A}_l {\\rm e}^{i {\\left( \\psi_k + \\psi_l \\right)}},\n\\label{Zeroordlf}\n\\end{equation}\nwhere\n\\begin{equation}\n\\alpha_{kl} = - {\\frac {i} {2 \\mu_0 n_0 m}} {\\left( k \\Box_l + l\n\\Box_k \\right)} {\\left[ 1 - {\\rm sgn} (k) {\\rm sgn} (l) \\right]}.\n\\label{Firstordalph}\n\\end{equation}\n\nEquation (\\ref{Waveeqallord}) has now two types of solutions. The\nfirst is a secular solution linearly dependent on the time\nvariable in the first-order approximation. As a rule, the highest\npower in the renormalization parameter of the secular terms\ncontained in the standard perturbation expansion is equal to the\ncorresponding order in the small perturbation parameter. 
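This is the familiar secular behaviour of naive expansions. As an elementary illustration, consider the oscillator with a slightly shifted frequency, $\\ddot{x}+(1+\\epsilon)x=0$ with $x(0)=1$ and $\\dot{x}(0)=0$, whose naive expansion reads\n\\begin{equation}\nx(t)=\\cos t-\\epsilon {\\frac {t} {2}} \\sin t+O{\\left( \\epsilon^2 \\right)}. \\nonumber\n\\end{equation}\nThe first-order term grows linearly in $t$, although the exact solution $\\cos {\\left( t {\\sqrt {1+\\epsilon}} \\right)}$ remains bounded; it is precisely this type of spurious growth that the renormalization of the amplitudes performed below is designed to remove.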
The\nsecond solution of equation (\\ref{Waveeqallord}) arising from the\nnonlinear interaction between waves in the first order, is\nregular. Omitting tedious but standard algebra, we present here\nonly the result\n\\begin{equation}\n{\\bf A}_{1} = \\sum \\limits_{k} {\\widehat{\\bf A}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + {\\bf e}_x \\sum \\limits_{k,l}\nA_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k +\n\\psi_l \\right)}} , \\label{Firstordvps}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf A}}_{k}^{(1)} = {\\left( {\\widehat{A}}_{kx}^{(1)}, t\n{\\widehat{A}}_{ky}^{(1)}, -i t {\\widehat{A}}_{ky}^{(1)} {\\rm sgn}\n(k) \\right)}, \\label{Firstordvec}\n\\end{equation}\nSome of the details of the calculations are presented in the\nAppendix. In explicit form, the components of the vector operator\n${\\widehat{\\bf A}}_{\\bf k}^{(1)}$ and those of the infinite matrix\n$A_{kl}^{(1)}$ are given by the expressions\n\\begin{equation}\n{\\widehat{A}}_{kx}^{(1)} = - {\\frac {i k \\beta_k} {\\gamma_k\n\\Box_k}} {\\widehat{\\nabla}}_k, \\qquad {\\widehat{\\nabla}}_k = {\\bf\nA}_{k}^{(0)} \\cdot {\\widehat{\\nabla}}, \\label{Firstordakx}\n\\end{equation}\n\\begin{equation}\n{\\widehat{A}}_{ky}^{(1)} = - {\\frac {{\\widehat{F}}_{k}} {2\n\\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}},\n\\label{Firstordaky}\n\\end{equation}\n\\begin{equation}\nA_{kl}^{(1)} = {\\frac {e} {2m v_T^2}} {\\frac {\\omega_k + \\omega_l}\n{\\Box_{kl} {\\cal D}_{kl}}} {\\left( k \\Box_l + l \\Box_k \\right)}\n{\\left[ 1 - {\\rm sgn} (k) {\\rm sgn} (l) \\right]},\n\\label{Firstordaklx}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{F}}_{k} = 2k \\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]} {\\widehat{\\nabla}}_{X}, \\label{Firstordoper}\n\\end{equation}\n\\begin{equation}\n\\Box_{kl} = {\\frac {{\\left( \\omega_k + \\omega_l \\right)}^2} {c^2}}\n- (k+l)^2, \\label{Firstordconst}\n\\end{equation}\n\\begin{equation}\n{\\cal D}_{kl} = {\\frac {{\\left( \\omega_k + \\omega_l \\right)}^2}\n{v_T^2}} - (k+l)^2 - {\\frac {1} {r_D^2}}. \\label{Firstordconsta}\n\\end{equation}\nIn addition, the constants $\\alpha_k$, $\\beta_k$, $\\gamma_k$ and\n$\\chi_k$ entering the expressions above are given by\n\\begin{equation}\n\\alpha_k = \\Box_k + {\\frac {\\omega_k^2 - \\omega_p^2} {c^2}},\n\\qquad \\beta_k = \\Box_k - {\\frac {1} {r_D^2}}, \\label{Constantsfo}\n\\end{equation}\n\\begin{equation}\n\\gamma_k = {\\frac {\\omega_k^2} {v_T^2}} - k^2 - {\\frac {1}\n{r_D^2}}, \\qquad \\chi_k = \\Box_k + {\\frac {2 \\omega_k^2} {c^2}}.\n\\label{Constantsfor}\n\\end{equation}\nFurthermore, the first-order current velocity can be expressed as\n\\begin{equation}\n{\\bf V}_{1} = \\sum \\limits_{k} {\\widehat{\\bf V}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + {\\bf e}_x \\sum \\limits_{k,l}\nV_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k +\n\\psi_l \\right)}}, \\label{Firstordcvel}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf V}}_{k}^{(1)} = {\\left( {\\widehat{V}}_{kx}^{(1)},\n{\\widehat{V}}_{ky}^{(1)}, -i {\\widehat{V}}_{ky}^{(1)} {\\rm sgn}\n(k) \\right)}. 
\\label{Firstordvcvel}\n\\end{equation}\nThe corresponding operators and matrix coefficients can be written\nexplicitly according to the expressions\n\\begin{equation}\n{\\widehat{V}}_{kx}^{(1)} = {\\frac {\\Box_k} {\\mu_0 e n_0}}\n{\\widehat{A}}_{kx}^{(1)}, \\qquad V_{kl}^{(1)} = {\\frac {\\Box_{kl}}\n{\\mu_0 e n_0}} A_{kl}^{(1)}, \\label{Currentvelx}\n\\end{equation}\n\\begin{equation}\n{\\widehat{V}}_{ky}^{(1)} = {\\frac {1} {\\mu_0 e n_0}} {\\left[ t\n\\Box_k {\\widehat{A}}_{ky}^{(1)} + 2i {\\left( {\\frac {\\omega_k}\n{c^2}} {\\widehat{A}}_{ky}^{(1)} + k {\\widehat{\\nabla}}_X \\right)}\n\\right]}, \\label{Currentvely}\n\\end{equation}\nCalculating the first-order density $N_1$ from equation\n(\\ref{Continuitn}), we obtain\n\\begin{equation}\nN_{1} = \\sum \\limits_{k} {\\widehat{N}}_{k}^{(1)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}} + \\sum \\limits_{k,l} N_{kl}^{(1)} {\\cal A}_{k}\n{\\cal A}_{l} {\\rm e}^{i {\\left( \\psi_k + \\psi_l \\right)}},\n\\label{Firstordden}\n\\end{equation}\n\\begin{equation}\n{\\widehat{N}}_{k}^{(1)} = {\\frac {\\Box_k} {\\mu_0 e \\omega_k}}\n{\\left( k {\\widehat{A}}_{kx}^{(1)} - i {\\widehat{\\nabla}}_k\n\\right)}, \\label{Firstorddenc}\n\\end{equation}\n\\begin{equation}\nN_{kl}^{(1)} = {\\frac {k + l} {2 \\mu_0 m v_T^2 {\\cal D}_{kl}}}\n{\\left( k \\Box_l + l \\Box_k \\right)} {\\left[ 1 - {\\rm sgn} (k)\n{\\rm sgn} (l) \\right]}. \\label{Firstorddenco}\n\\end{equation}\nAnalogously, for the first-order scalar potential $\\varphi_1$, we\nfind\n\\begin{equation}\n\\varphi_1 = \\sum \\limits_{k} {\\widehat{\\varphi}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}} + \\sum \\limits_{k,l}\n\\varphi_{kl}^{(1)} {\\cal A}_{k} {\\cal A}_{l} {\\rm e}^{i {\\left(\n\\psi_k + \\psi_l \\right)}}, \\label{Firstordscp}\n\\end{equation}\n\\begin{equation}\n{\\widehat{\\varphi}}_{k}^{(1)} = {\\frac {e} {\\epsilon_0 \\Box_k}}\n{\\widehat{N}}_{k}^{(1)} = {\\frac {c^2} {\\omega_k}} {\\left( k\n{\\widehat{A}}_{kx}^{(1)} - i {\\widehat{\\nabla}}_k \\right)},\n\\label{Firstordscpc}\n\\end{equation}\n\\begin{equation}\n\\varphi_{kl}^{(1)} = {\\frac {e c^2 (k+l)} {2 m v_T^2 \\Box_{kl}\n{\\cal D}_{kl}}} {\\left( k \\Box_l + l \\Box_k \\right)} {\\left[ 1 -\n{\\rm sgn} (k) {\\rm sgn} (l) \\right]}. \\label{Firstordscpco}\n\\end{equation}\nFinally, the first-order magnetic field is calculated to be\n\\begin{equation}\n{\\bf B}_{1} = \\sum \\limits_{k} {\\widehat{\\bf B}}_{k}^{(1)} {\\cal\nA}_{k} {\\rm e}^{i \\psi_{k}}, \\label{Firstordmagf}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\widehat{\\bf B}}_{k}^{(1)} = {\\left( - i {\\rm sgn} (k)\n{\\widehat{\\nabla}}_k, {\\widehat{B}}_{ky}^{(1)}, -i\n{\\widehat{B}}_{ky}^{(1)} {\\rm sgn} (k) \\right)},\n\\label{Firstordmagfi}\n\\end{equation}\n\\begin{equation}\n{\\widehat{B}}_{ky}^{(1)} = - {\\rm sgn} (k) {\\left( t k\n{\\widehat{A}}_{ky}^{(1)} - i {\\widehat{\\nabla}}_X \\right)}.\n\\label{Firstordmafi}\n\\end{equation}\n\nA couple of interesting features of the zero and first-order\nperturbation solution are noteworthy to be commented at this\npoint. First of all, the zero-order density $N_0$ vanishes which\nmeans that no density waves are induced by the whistler\neigenmodes. The second terms in the expressions for the\nfirst-order density $N_1$ and current velocity ${\\bf V}_1$ [see\nequations (\\ref{Firstordcvel}) and (\\ref{Firstordden})] imply\ncontribution from nonlinear interaction between waves according to\nthe nonlinear Lorentz force. 
It will be shown in the remainder\nthat these terms give rise to nonlinear terms in the\nrenormalization group equation and describe solitary wave\nbehaviour of the whistler mode.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{The Renormalization Group Equation}\n\nPassing over to the final stage of our renormalization group\nprocedure, we note that in second order the quantities ${\\bf U}_2$\nand ${\\bf W}_2$ entering the right-hand-side of equation\n(\\ref{Waveeqallord}) can be written as\n\\begin{equation}\n{\\bf U}_2 = - 2 \\nabla \\cdot {\\widehat{\\nabla}} {\\bf A}_1 -\n{\\widehat{\\nabla}}^2 {\\bf A}_0 + \\mu_0 e N_1 {\\bf V}_0,\n\\label{Secondordu}\n\\end{equation}\n\\begin{equation}\n{\\bf W}_2 = {\\frac {e} {m}} {\\widehat{\\nabla}} \\varphi_1 - {\\frac\n{v_T^2} {n_0}} {\\widehat{\\nabla}} N_1 - {\\bf V}_1 \\cdot \\nabla\n{\\bf V}_0 - {\\frac {e} {m}} {\\bf V}_1 \\times {\\bf B}_0,\n\\label{Secondordw}\n\\end{equation}\nSince we are interested only in the secular terms in second order,\nappearing in the expressions for the $y$ and $z$ components of the\nelectromagnetic vector potential ${\\bf A}_2$, contributions in the\nsource vectors ${\\bf U}_2$ and ${\\bf W}_2$ leading to such terms\nare sufficient for completing the renormalization group procedure.\nThus, we can write\n\\begin{equation}\n{\\bf A}_2 = \\sum \\limits_{k} {\\left( t {\\widehat{\\bf A}}_k^{(2)} +\nt^2 {\\widehat{\\bf C}}_k \\right)} {\\cal A}_{k} {\\rm e}^{i \\psi_{k}}\n\\nonumber\n\\end{equation}\n\\begin{equation}\n+ t \\sum \\limits_{k} {\\widehat{\\bf D}}_k^{(2)} {\\cal A}_{k} {\\rm\ne}^{i \\psi_{k}} + t \\sum \\limits_{k,l} {\\mathbf \\Gamma}_{kl}\n{\\left| {\\cal A}_{l} \\right|}^2 {\\cal A}_{k} {\\rm e}^{i \\psi_k}.\n\\label{Secondordvps}\n\\end{equation}\nAn important remark is in order at this point. From the\nsolvability condition (\\ref{Appsolcond}) it follows that the\ncomplex amplitude ${\\cal A}_k$ must satisfy the complex Poisson\nequation\n\\begin{equation}\n{\\widehat{\\nabla}}_k^2 {\\cal A}_k = 0. \\label{Secondordscon}\n\\end{equation}\nThe latter imposes additional restrictions on the dependence of\nthe wave amplitudes ${\\cal A}_k$ on the slow transverse\nindependent variables $Y$ and $Z$. 
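Note that according to equation (\\ref{Firstordakx}) we have ${\\widehat{\\nabla}}_k={\\widehat{\\nabla}}_Y-i {\\rm sgn} (k) {\\widehat{\\nabla}}_Z$, so that equation (\\ref{Secondordscon}) is satisfied, for instance, by any amplitude of the form\n\\begin{equation}\n{\\cal A}_k=f_k {\\left( X, Y-i {\\rm sgn} (k) Z \\right)}, \\nonumber\n\\end{equation}\nwhere $f_k$ is an arbitrary twice differentiable function. The simplest member of this family is an amplitude independent of the transverse variables, which is the case analyzed in the next section.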
Straightforward calculations\nyield (see the Appendix for details)\n\\begin{equation}\n{\\widehat{A}}_{ky}^{(2)} = - {\\frac {i {\\rm sgn} (k)} {2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} {\\left( \\beta_k^{(2)}\n{\\widehat{A}}_{ky}^{(1) {\\bf 2}} - {\\widehat{G}}_k \\right)},\n\\label{Secondordaky}\n\\end{equation}\n\\begin{equation}\n{\\widehat{D}}_{ky}^{(2)} = {\\frac {i v_T^2 \\beta_k {\\rm sgn} (k)}\n{2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} {\\left( 1 +\n{\\frac {k^2 \\beta_k} {\\gamma_k \\Box_k}} \\right)}\n{\\widehat{\\nabla}}_Y {\\widehat{\\nabla}}_k, \\label{Secondorddky}\n\\end{equation}\n\\begin{equation}\n{\\widehat{C}}_{ky} = {\\frac {1} {2}} {\\widehat{A}}_{ky}^{(1) {\\bf\n2}}, \\label{Secondordcky}\n\\end{equation}\nwhere\n\\begin{equation}\n\\beta_k^{(2)} = \\alpha_k + {\\frac {4 \\omega_k^2} {c^2}} + {\\frac\n{3 \\omega_c \\omega_k} {c^2}} {\\rm sgn} (k), \\label{Secondordcon}\n\\end{equation}\n\\begin{equation}\n{\\widehat{G}}_k = \\omega_k {\\rm sgn} (k) {\\left[ \\omega_k {\\rm\nsgn} (k) + \\omega_c \\right]} {\\widehat{\\nabla}}^2.\n\\label{Secondordoper}\n\\end{equation}\nThe matrix coefficient $\\Gamma_{kly}$ determining the nonlinear\ncontribution represented by the second term in equation\n(\\ref{Secondordvps}) reads explicitly as\n\\begin{equation}\n\\Gamma_{kly} = - {\\frac {1 - {\\rm sgn} (k) {\\rm sgn} (l)} {\\mu_0\nn_0 m v_T^2 \\omega_l {\\cal D}_{kl}}} {\\frac {i \\omega_k \\Box_l\n{\\left( k \\Box_l + l \\Box_k \\right)} {\\rm sgn} (k)} {2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( l \\omega_k - k \\omega_l \\right)}\n{\\rm sgn} (l) + (k+l) \\omega_k \\omega_l \\right]}.\n\\label{Secondordgam}\n\\end{equation}\nFollowing the standard procedure \\cite{Tzenov} of the RG method,\nwe finally obtain the desired RG equation\n\\begin{equation}\n{\\frac {\\partial {\\widetilde{\\cal A}}_k} {\\partial t}} - \\epsilon\n{\\widehat{A}}_{ky}^{(1)} {\\widetilde{\\cal A}}_k \\nonumber\n\\end{equation}\n\\begin{equation}\n= \\epsilon^2 {\\left( {\\widehat{A}}_{ky}^{(2)} +\n{\\widehat{D}}_{ky}^{(2)} \\right)} {\\widetilde{\\cal A}}_k +\n\\epsilon^2 \\sum \\limits_l \\Gamma_{kly} {\\left| {\\widetilde{\\cal\nA}}_l \\right|}^2 {\\widetilde{\\cal A}}_k, \\label{Rgroupeq}\n\\end{equation}\nwhere now ${\\widetilde{\\cal A}}_k$ is the renormalized complex\namplitude \\cite{Tzenov}. Thus, the renormalized solution for the\nelectromagnetic vector potential acquires the form\n\\begin{equation}\n{\\bf A} = \\sum \\limits_{k} {\\bf A}_{k}^{(0)} {\\widetilde{\\cal\nA}}_{k} {\\rm e}^{i \\psi_{k}}. \\label{Renormsolut}\n\\end{equation}\nAnalogously, for the electric and magnetic field of the whistler\nwave, one can obtain in a straightforward manner the following\nexpressions\n\\begin{equation}\n{\\bf B} = \\sum \\limits_{k} {\\bf B}_{k}^{(0)} {\\widetilde{\\cal\nA}}_{k} {\\rm e}^{i \\psi_{k}}, \\qquad {\\bf E} = i \\sum \\limits_{k}\n\\omega_k {\\bf A}_{k}^{(0)} {\\widetilde{\\cal A}}_{k} {\\rm e}^{i\n\\psi_{k}}. 
\\label{Renormsolb}\n\\end{equation}\n\nIt is important to mention that the plasma density remains\nunchanged ($N = 0$) contrary to the case of electrostatic waves,\nwhere the evolution of the induced electrostatic waves follows the\nevolution of the density waves.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{\\label{Essent}System of Coupled Nonlinear Schr\\\"{o}dinger\nEquations}\n\nThe simplest case of the validity of the solvability condition\n(\\ref{Secondordscon}) consists in the assumption that the slow\nwave amplitudes ${\\cal A}_k$ do not depend on the transverse\ncoordinates. Setting $\\epsilon = 1$ in equation (\\ref{Rgroupeq}),\nwe obtain the following system of coupled nonlinear\nSchr\\\"{o}dinger equations\n\\begin{equation}\ni {\\rm sgn} (k) {\\frac {\\partial {\\cal A}_k} {\\partial t}} + i\n\\nu_k {\\rm sgn} (k) {\\frac {\\partial {\\cal A}_k} {\\partial x}} =\n\\lambda_k {\\frac {\\partial^2 {\\cal A}_k} {\\partial x^2}} + \\sum\n\\limits_l \\mu_{kl} {\\left| {\\cal A}_l \\right|}^2 {\\cal A}_k,\n\\label{Couplednse}\n\\end{equation}\nwhere for simplicity the tilde-sign over the renormalized\namplitude has been dropped. Moreover, the coefficients $\\nu_k$,\n$\\lambda_k$ and $\\mu_{kl}$ are given by the expressions\n\\begin{equation}\n\\nu_k = {\\frac {2k \\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c\n\\chi_k}}, \\label{Coefficientnu}\n\\end{equation}\n\\begin{equation}\n\\lambda_k = {\\frac {\\omega_k {\\left[ \\omega_k {\\rm sgn} (k) +\n\\omega_c \\right]}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) + \\omega_c\n\\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left\\{ {\\frac {4k^2 \\omega_k \\beta_k^{(2)} {\\left[\n\\omega_k {\\rm sgn} (k) + \\omega_c \\right]}} {{\\left[ 2 \\omega_k\n\\alpha_k {\\rm sgn} (k) + \\omega_c \\chi_k \\right]}^2}} - {\\rm sgn}\n(k) \\right\\}}, \\label{Coefficientla}\n\\end{equation}\n\\begin{equation}\n\\mu_{kl} = {\\frac {1 - {\\rm sgn} (k) {\\rm sgn} (l)} {\\mu_0 n_0 m\nv_T^2 \\omega_l {\\cal D}_{kl}}} {\\frac {\\omega_k \\Box_l {\\left( k\n\\Box_l + l \\Box_k \\right)}} {2 \\omega_k \\alpha_k {\\rm sgn} (k) +\n\\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( l \\omega_k - k \\omega_l \\right)}\n{\\rm sgn} (l) + (k+l) \\omega_k \\omega_l \\right]}.\n\\label{Coefficientmu}\n\\end{equation}\nInterestingly enough, the infinite matrix of coupling coefficients\n$\\mu_{kl}$ represents a sort of selection rules. Clearly,\n\\begin{equation}\n\\mu_{kk} = 0, \\qquad \\mu_{k, -k} = 0, \\label{Selectrule1}\n\\end{equation}\nand\n\\begin{equation}\n\\mu_{kl} = 0, \\qquad {\\rm for} \\quad {\\rm sgn} (k) {\\rm sgn} (l) =\n1. \\label{Selectrule2}\n\\end{equation}\nThis means that a generic mode with a wave number $k$ cannot\ncouple with itself, neither can it couple with another mode with a\nwave number of the same sign. Note that this feature is a\nconsequence of the vector character of the nonlinear coupling\nbetween modes and is due to the nonlinear Lorentz force.\nTherefore, for a given mode $k$ the simplest nontrivial reduction\nof the infinite system of coupled nonlinear Schr\\\"{o}dinger\nequations consists of minimum two coupled equations.\n\nWithout loss of generality, we can assume in what follows that the\nsign of an arbitrary mode $k$ under consideration is positive ($k\n> 0$). 
Suppose that for a particular whistler mode with a\npositive wave number $k$ there exists a mode with wave number $-l$\nfor which the coupling coefficient $\\mu_{k, -l}$ is maximum.\nNeglecting all modes other than $k$ and $-l$, we can\nwrite\n\\begin{equation}\ni {\\frac {\\partial {\\cal A}_k} {\\partial t}} + i \\nu_k {\\frac\n{\\partial {\\cal A}_k} {\\partial x}} = \\lambda_k {\\frac {\\partial^2\n{\\cal A}_k} {\\partial x^2}} + \\mu_1 {\\left| {\\cal A}_l \\right|}^2\n{\\cal A}_k, \\label{Couplednsek}\n\\end{equation}\n\\begin{equation}\ni {\\frac {\\partial {\\cal A}_l} {\\partial t}} + i \\nu_l {\\frac\n{\\partial {\\cal A}_l} {\\partial x}} = \\lambda_l {\\frac {\\partial^2\n{\\cal A}_l} {\\partial x^2}} + \\mu_2 {\\left| {\\cal A}_k \\right|}^2\n{\\cal A}_l, \\label{Couplednsel}\n\\end{equation}\nwhere\n\\begin{equation}\n\\mu_1 = {\\frac {2} {\\mu_0 n_0 m v_T^2 \\omega_l {\\cal D}_{k, -l}}}\n{\\frac {\\omega_k \\Box_l {\\left( k \\Box_l - l \\Box_k \\right)}} {2\n\\omega_k \\alpha_k + \\omega_c \\chi_k}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( k \\omega_l - l \\omega_k \\right)} +\n(k-l) \\omega_k \\omega_l \\right]}. \\label{Coefficientmu1}\n\\end{equation}\n\\begin{equation}\n\\mu_2 = {\\frac {2} {\\mu_0 n_0 m v_T^2 \\omega_k {\\cal D}_{k, -l}}}\n{\\frac {\\omega_l \\Box_k {\\left( k \\Box_l - l \\Box_k \\right)}} {2\n\\omega_l \\alpha_l + \\omega_c \\chi_l}} \\nonumber\n\\end{equation}\n\\begin{equation}\n\\times {\\left[ \\omega_c {\\left( k \\omega_l - l \\omega_k \\right)} +\n(k-l) \\omega_k \\omega_l \\right]}. \\label{Coefficientmu2}\n\\end{equation}\n\nThe system of coupled nonlinear Schr\\\"{o}dinger equations\n(\\ref{Couplednsek}) and (\\ref{Couplednsel}) is not integrable in\ngeneral \\cite{Manakov}. It represents an important starting point\nfor further investigations of the nonlinear dynamics and evolution\nof whistler waves in magnetized plasmas.\n\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\setcounter{equation}{0}\n\n\\section{Discussion and conclusions}\n\nWe studied the nonlinear dynamics of whistler waves in magnetized\nplasmas. Since the plasmas and beam-plasma systems considered here are\nassumed to be weakly collisional, the point of reference for the\nanalysis performed in the present paper is the system of\nhydrodynamic and field equations. We apply the renormalization\ngroup method to obtain dynamical equations for the slowly varying\namplitudes of whistler waves. As a result of the investigation\nperformed, it has been shown that the amplitudes of the eigenmodes\nsatisfy an infinite system of coupled nonlinear Schr\\\"{o}dinger\nequations. In this sense, the whistler eigenmodes form a sort of\ngas of interacting quasiparticles, while the slowly varying\namplitudes can be considered as dynamical variables carrying the\nrelevant information about the system.\n\nAn important feature of our description is that whistler waves do\nnot perturb the initially uniform density of plasma electrons. The\nplasma response to the induced whistler waves consists in a velocity\nredistribution which follows exactly the behaviour of the\nwhistlers. Another interesting peculiarity is the set of selection rules\ngoverning the nonlinear mode coupling. According to these rules,\nmodes with the same sign do not couple, which is a direct\nconsequence of the vector character of the interaction.
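Indeed, with the zero-order polarization vectors (\\ref{Constvect}) one finds\n\\begin{equation}\n{\\bf A}_{k}^{(0)} \\times {\\bf A}_{l}^{(0)}=i {\\left[ {\\rm sgn} (k)-{\\rm sgn} (l) \\right]} {\\bf e}_x, \\nonumber\n\\end{equation}\nwhich is directed along the external magnetic field and vanishes precisely when the wave numbers of the two modes have the same sign.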
Careful\ninspection shows that the initial source of the nonlinear\ninteraction between waves in the whistler mode is the zero-order\nLorentz force [see equation (\\ref{Zeroordlf})]. Since the quantity\n${\\bf W}_1$ is proportional to ${\\bf A}_k^{(0)} \\times {\\bf\nA}_l^{(0)}$, the above mentioned selection rules follow directly,\nprovided the only case in which the cross product does not vanish\nis the case, where modes $k$ and $l$ have different sign.\n\nWe believe that the results obtained in the present paper might\nhave a wide class of possible applications ranging from laboratory\nexperiments to observations of a variety of effects relevant to\nspace plasmas.\n\n\\begin{acknowledgments}\nIt is a pleasure to thank B. Baizakov for many interesting and\nuseful discussions concerning the subject of the present paper.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe detection of primordial gravitational waves (GWs) via B-mode\npolarization of Cosmic Microwave Background Radiation (CMBR) by\nBICEP2~\\cite{Ade:2014xna} has shown that the cosmic inflation\noccurred at a high scale of $10^{16}$ GeV is the most plausible\nsource of generating primordial GWs. However, more data are\nrequired to confirm the above situation. The primordial GWs can be\nimprinted in the anisotropies and polarization spectrum of CMBR\n by making the photon redshifts. The B-mode signal\nobserved by BICEP2 might contain contributions from other sources\nof vector modes and cosmic strings, in addition to tensor\nmodes~\\cite{Moss:2014cra}.\n\nThe conformal gravity\n$C^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}(=C^2)$ of being invariant\nunder the conformal transformation of $g_{\\mu\\nu}\\to\n\\Omega^2(x)g_{\\mu\\nu}$ has its own interests in gravity and\ncosmology. On the gravity side, it gives us a promising combination\nof $R_{\\mu\\nu}^2-R^2\/3$ up to the Gauss-Bonnet term which kills\nmassive scalar GWs when it couples to Einstein gravity (known to be\nthe Einstein-Weyl gravity)~\\cite{Lu:2011zk}.\nStelle~\\cite{Stelle:1976gc} has first introduced the quadratic\ncurvature gravity of $a(R_{\\mu\\nu}^2-R^2\/3)+b R^2$ to improve the\nperturbatively renormalizable property of Einstein gravity. In case\nof $ab\\not=0$, the renormalizability was achieved but the unitarity\nwas violated unless $a=0$, showing that the renormalizability and\nunitarity exclude to each other. Although the $a$-term of providing\nmassive GWs improves the ultraviolet divergence, it induces\nsimultaneously ghost excitations which spoil the unitarity. The\nprice one has to pay for making the theory renormalizable is the\nloss of unitarity. This issue is not resolved completely until now.\n\nHowever, the conformal gravity itself is renormalizable. 
Also, it\nprovides the AdS black hole solution~\\cite{Riegert:1984zz} and its\nthermodynamic properties and holography were discussed extensively\nin the literature~\\cite{Klemm:1998kf,Lu:2012xu,Grumiller:2013mxa}.\nThe authors have investigated the AdS black hole thermodynamics and\nstability in the Einstein-Weyl gravity and in the limit of the\nconformal gravity~\\cite{Myung:2013uka}.\n\nOn the cosmology side of the conformal gravity, it provides surely a\nmassive vector propagation generated during de Sitter inflation in\naddition to massive tensor ghosts when it couples to Einstein\ngravity~\\cite{Clunan:2009er,Deruelle:2010kf,Deruelle:2012xv}.\nRecently, the authors have shown that in the limit of $m^2 \\to 0$\n(keeping the conformal gravity only), the vector and tensor power\nspectra disappear. It implies that their power spectra are not\ngravitationally produced because the vector and tensor perturbations\nare decoupled from the expanding de Sitter background. This occurs\ndue to conformal invariance as a transversely massive vector has\nbeen shown in the $m^2\\to 0$ limit of the massive Maxwell theory\n($-F^2\/4+m^2A^2\/2$)~\\cite{Dimopoulos:2006ms}. We note here that\n$F^2$ is conformally invariant like $C^2$ under the transformation\nof $g_{\\mu\\nu}\\to \\Omega^2(x)g_{\\mu\\nu}$~\\cite{Myung:2014aia}.\n The\nconformal gravity implication to cosmological perturbation was first\nstudied in~\\cite{Mannheim:2011is} which might indicate that there\nexists a difference between conformal and Einstein gravities in\ntheir perturbed equations around de Sitter background. Even though\nhe has obtained a ``degenerate fourth-order equation\" for the metric\nperturbation tensor $h_{\\mu\\nu}$ from the conformal gravity, any\nrelevant quantity was not found because he did not split\n$h_{\\mu\\nu}$ according to the SO(3) decomposition for cosmological\nperturbations. As far as we know, there is no definite computation\nof an observable like the power spectrum in the conformal gravity.\n\nIn this Letter, we will study the conformal gravity as a\nhigher-order gravity theory to compute the vector and tensor power\nspectra generated from de Sitter inflation. Considering the\nconformal invariant of the conformal gravity seriously, we expect to\nobtain the constant power spectra for vector and tensor\nperturbations.\n\n\n\\section{Conformal gravity }\n\n\nLet us first consider the conformal gravity whose action is given\nby\n\\begin{equation} \\label{ECSW}\nS_{\\rm CG}=\\frac{1}{4\\kappa m^2}\\int d^4x\n\\sqrt{-g}\\Big[C^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}\\Big],\n\\end{equation}\nwhere the Weyl-squared term is given by\n\\begin{eqnarray}\nC^{\\mu\\nu\\rho\\sigma}C_{\\mu\\nu\\rho\\sigma}&=&2\\Big(R^{\\mu\\nu}R_{\\mu\\nu}-\\frac{1}{3}R^2\\Big)+\n(R^{\\mu\\nu\\rho\\sigma}R_{\\mu\\nu\\rho\\sigma}-4R^{\\mu\\nu}R_{\\mu\\nu}+R^2)\n\\end{eqnarray}\nwith the Weyl tensor\n\\begin{equation}\nC_{\\mu\\nu\\rho\\sigma}=R_{\\mu\\nu\\rho\\sigma}-\\frac{1}{2}\\Big(\ng_{\\mu\\rho}R_{\\nu\\sigma}-g_{\\mu\\sigma}R_{\\nu\\rho}-g_{\\nu\\rho}R_{\\mu\\sigma}+g_{\\nu\\sigma}R_{\\mu\\rho}\\Big)+\\frac{1}{6}R(g_{\\mu\\rho}g_{\\nu\\sigma}-g_{\\mu\\sigma}g_{\\nu\\rho}).\n\\end{equation}\n Here we have\n$\\kappa=8\\pi G=1\/M^2_{\\rm P}$, $M_{\\rm P}$ being the reduced Planck\nmass and a mass-squared $m^2$ is introduced to make the action\ndimensionless. Greek indices run from 0 to 3 with conventions\n$(-+++)$, while Latin indices run from 1 to 3. 
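As a simple consistency check of this definition, contraction of the Weyl tensor with $g^{\\mu\\rho}$ gives\n\\begin{equation}\ng^{\\mu\\rho}C_{\\mu\\nu\\rho\\sigma}=R_{\\nu\\sigma}-\\frac{1}{2}\\Big(4R_{\\nu\\sigma}-R_{\\nu\\sigma}-R_{\\nu\\sigma}+g_{\\nu\\sigma}R\\Big)+\\frac{1}{6}R\\Big(4g_{\\nu\\sigma}-g_{\\nu\\sigma}\\Big)=0,\n\\end{equation}\nso that $C_{\\mu\\nu\\rho\\sigma}$ is the completely trace-free part of the Riemann tensor.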
Further, we note that\nthe Weyl-squared term is invariant under the conformal\ntransformation of $g_{\\mu\\nu} \\to \\Omega^2(x) g_{\\mu\\nu}$.\n\nIts equation takes the form\n\\begin{equation} \\label{ein-eq}\n2 \\nabla^\\rho\\nabla^\\sigma\nC_{\\mu\\rho\\nu\\sigma}+G^{\\rho\\sigma}C_{\\mu\\rho\\nu\\sigma}=0\n\\end{equation} with the Einstein tensor $G_{\\mu\\nu}$.\nThe solution is de Sitter space whose curvature quantities are\ngiven by\n\\begin{equation}\n\\bar{R}_{\\mu\\nu\\rho\\sigma}=H^2(\\bar{g}_{\\mu\\rho}\\bar{g}_{\\nu\\sigma}-\\bar{g}_{\\mu\\sigma}\\bar{g}_{\\nu\\rho}),~~\\bar{R}_{\\mu\\nu}=3H^2\\bar{g}_{\\mu\\nu},~~\\bar{R}=12H^2\n\\end{equation}\nwith $H$=constant. We choose de Sitter background explicitly by\nchoosing a conformal time $\\eta$\n\\begin{eqnarray} \\label{frw}\nds^2_{\\rm dS}=\\bar{g}_{\\mu\\nu}dx^\\mu\ndx^\\nu=a(\\eta)^2\\Big[-d\\eta^2+\\delta_{ij}dx^idx^j\\Big],\n\\end{eqnarray}\nwhere the conformal scale factor is\n\\begin{eqnarray}\na(\\eta)=-\\frac{1}{H\\eta}\\to a(t)=e^{Ht}.\n\\end{eqnarray}\nHere the latter denotes the scale factor with respect to cosmic\ntime $t$.\n\nWe choose the Newtonian gauge of $B=E=0 $ and $\\bar{E}_i=0$ which\nleads to $10-4=6$ degrees of freedom (DOF). In this case, the\ncosmologically perturbed metric can be simplified to be\n\\begin{eqnarray} \\label{so3-met}\nds^2=a(\\eta)^2\\Big[-(1+2\\Psi)d\\eta^2+2\\Psi_i d\\eta\ndx^{i}+\\Big\\{(1+2\\Phi)\\delta_{ij}+h_{ij}\\Big\\}dx^idx^j\\Big]\n\\end{eqnarray}\nwith the transverse vector $\\partial_i\\Psi^i=0$ and\ntransverse-traceless tensor $\\partial_ih^{ij}=h=0$. We emphasize\nthat choosing the SO(3)-perturbed metric (\\ref{so3-met}) contrasts\nsharply with the covariant approach to the cosmological conformal\ngravity~\\cite{Mannheim:2011is}.\n\nIn order to get the cosmological perturbed equations, one is\nfirst to obtain the bilinear action and then, varying it to yield\nthe perturbed equations. We expand the conformal gravity action\n(\\ref{ECSW}) up to quadratic order in the perturbations of\n$\\Psi,\\Phi,\\Psi_i,$ and $h_{ij}$ around the de Sitter\nbackground~\\cite{Deruelle:2010kf}. Then, the bilinear actions for\nscalar, vector and tensor perturbations can be found as\n\\begin{eqnarray}\n&&\\hspace*{-2.3em}S_{\\rm CG}^{({\\rm S})}=\\frac{1}{3\\kappa m^2}\\int\nd^4x\\Big[\\nabla^2 (\\Psi-\\Phi)\\Big]^2,\n\\label{scalar}\\\\\n&&\\nonumber\\\\\n &&\\hspace*{-2.3em}S_{\\rm CG}^{({\\rm V})}=\\frac{1}{4\\kappa m^2}\\int\nd^4x\\Big(\\partial_i\\Psi'_{j}\\partial^{i}\\Psi'^{j}\n-\\nabla^2\\Psi_i\\nabla^2\\Psi^i\\Big),\\label{vpeq}\\\\\n&&\\nonumber\\\\\n&&\\hspace*{-2.3em} S_{\\rm CG}^{({\\rm T})}=\\frac{1}{8\\kappa m^2}\\int\nd^4x\\Big(h''_{ij}h''^{ij} -2\\partial_kh'_{ij}\\partial^{k}h'^{ij}\n+\\nabla^2h_{ij}\\nabla^2h^{ij}\\Big)\\label{hpeq}.\n\\end{eqnarray}\nVarying the actions (\\ref{vpeq}) and (\\ref{hpeq}) with respect to\n$\\Psi^{i}$ and $h^{ij}$ leads to the equations of motion for vector\nand tensor perturbations\n\\begin{eqnarray}\n&&\\Box\\Psi_i=0,\\label{veq}\\\\\n &&\\Box^2h_{ij}=0,\\label{heq}\n\\end{eqnarray}\nwhere $\\Box=d^2\/d\\eta^2-\\nabla^2$ with $\\nabla^2$ the Laplacian\noperator.\n It is worth noting that Eqs.(\\ref{veq}) and (\\ref{heq}) are independent of\nthe expanding de Sitter background in the conformal gravity.\n\nFinally, we would like to mention two scalars $\\Phi$ and $\\Phi$. Two\nscalar equations\n are given by $\\nabla^2\\Psi=\\nabla^2\\Phi=0$, which implies\nthat they are obviously\n non-propagating modes in the de Sitter background. 
This means that the cosmological conformal gravity describes 4 DOF of vector and tensor modes. Hereafter,\n thus, we will not consider the scalar sector.\n\n\\section{Primordial power spectra}\nThe power spectrum is given by the two-point correlation function\nwhich could be computed when one chooses the vacuum state\n$|0\\rangle$. It is defined by\n\\begin{equation}\n\\langle0|{\\cal F}(\\eta,\\bold{x}){\\cal\nF}(\\eta,\\bold{x}')|0\\rangle=\\int d^3\\bold{k} \\frac{{\\cal P}_{\\cal\nF}}{4\\pi k^3}e^{-i \\bold{k}\\cdot (\\bold{x}-\\bold{x}')},\n\\end{equation}\nwhere ${\\cal F}$ denotes vector and tensor and $k=|\\bold{k}|$ is the\nwave number. In general, fluctuations are created on all length\nscales with wave number $k$. Cosmologically relevant fluctuations\nstart their lives inside the Hubble radius which defines the\nsubhorizon: $k~\\gg aH~(z=-k\\eta\\gg 1)$. On the other hand, the\ncomoving Hubble radius $(aH)^{-1}$ shrinks during inflation while\nthe comoving wavenumber $k$ is constant. Therefore, eventually all\nfluctuations exit the comoving Hubble radius which defines the\nsuperhorizon: $k~\\ll aH~(z=-k\\eta\\ll 1)$.\n\nOne may compute the two-point function by taking the Bunch-Davies\nvacuum $|0\\rangle$.\n In the de Sitter inflation, we choose the subhorizon limit\nof $z\\to \\infty$ to define the Bunch-Davies vacuum, while we\nchoose the superhorizon limit of $z\\to 0$ to get a definite form of\nthe power spectrum which stays alive after decaying. For example,\nfluctuations of scalar and tensor originate on subhorizon scales and\nthey propagate for a long time on superhorizon scales. This can be\nchecked by computing their power spectra which are scale-invariant.\nAccordingly, it would be interesting to check what happens when one\ncomputes the power spectra for vector and tensor perturbations\ngenerated from de Sitter inflation in the frame work of conformal\ngravity.\n\n\n\\subsection{Vector power spectrum}\n\n\nLet us consider Eq.(\\ref{veq}) for vector perturbation and then,\nexpand $\\Psi_i$ in plane waves with the linearly polarized states\n\\begin{eqnarray}\\label{psim}\n\\Psi_i(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\sum_{s=1,2}p_i^{s}({\\bf k})\\Psi_{\\bf k}^{s}(\\eta)e^{i{\\bf\nk}\\cdot{\\bf x}},\n\\end{eqnarray}\nwhere $p^{1\/2}_{i}$ are linear polarization vectors with\n $p^{1\/2}_i\np^{1\/2, i}=1$. Also, $\\Psi_{\\bf k}^{s}$ denote linearly polarized\nvector modes. Plugging (\\ref{psim}) into the equation (\\ref{veq}),\none finds the equation\n\\begin{eqnarray}\\label{v0eq}\n\\Bigg[\\frac{d^2}{d\\eta^2}+k^2\\Bigg]\\Psi_{\\bf k}^s(\\eta)=0.\n\\end{eqnarray}\nIntroducing $z=-k\\eta$, Eq.(\\ref{v0eq}) takes a simple form\n\\begin{equation}\\label{v1eq}\n\\Big[\\frac{d^2}{dz^2}+1\\Big]\\Psi_{\\bf k}^{s}(z)=0\\end{equation}\nwhose positive frequency solution is given by \\begin{equation}\n\\Psi_{\\bf k}^{s}(z)\\sim e^{iz}\\end{equation} up to the\nnormalization.\n\n We are willing to calculate\nvector power spectrum. For this purpose, we define a commutation\nrelation for the vector. In the bilinear action (\\ref{vpeq}), the\nconjugate momentum for the field $\\Psi_j$ is found to be\n\\begin{eqnarray}\\label{vconj}\n\\pi_{\\Psi}^{j}=-\\frac{1}{2\\kappa m^2}\\nabla^2\\Psi'^{j},\n\\end{eqnarray}\nwhere one observes an unusual factor $\\nabla^2$ which reflects that\nthe vector $\\Psi_i$ is not a canonically defined vector, but it is\nfrom the cosmological conformal gravity. 
The canonical quantization\nis implemented by imposing the commutation relation\n\\begin{eqnarray}\\label{vcomm}\n[\\hat{\\Psi}_{j}(\\eta,{\\bf x}),\\hat{\\pi}_{\\Psi}^{j}(\\eta,{\\bf\nx}^{\\prime})]=2i\\delta({\\bf x}-{\\bf x}^{\\prime})\n\\end{eqnarray}\nwith $\\hbar=1$.\n\n\nNow, the operator $\\hat{\\Psi}_{j}$ can be expanded in Fourier modes\nas\n\\begin{eqnarray}\\label{vex}\n\\hat{\\Psi}_{j}(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int\nd^3{\\bf k}\\sum_{s=1,2}\\Big(p_{j}^{s}({\\bf k})\\hat{a}_{\\bf\nk}^{s}\\Psi_{\\bf k}^{s}(\\eta)e^{i{\\bf k}\\cdot{\\bf x}}+{\\rm h.c.}\\Big)\n\\end{eqnarray}\nand the operator $\\hat{\\pi}_{\\Psi}^{j}=\\frac{k^2}{2\\kappa\nm^2}\\hat{\\Psi}'^{j}$ can be easily obtained from (\\ref{vex}).\nPlugging (\\ref{vex}) and $\\hat{\\pi}_{\\Psi}^{j}$ into (\\ref{vcomm}),\nwe find the commutation relation and Wronskian condition as\n\\begin{eqnarray}\n&&\\hspace*{-2em}[\\hat{a}_{\\bf k}^{s},\\hat{a}_{\\bf k^{\\prime}}^{\ns^{\\prime}\\dag}]=\\delta^{ss^{\\prime}}\\delta^3({\\bf k}-{\\bf\nk}^{\\prime}),\\label{comm0}\\\\\n&&\\hspace*{-2em}\\Psi_{\\bf k}^{s}\\Big(\\frac{k^2}{2\\kappa\nm^2}\\Big)(\\Psi_{\\bf k}^{*s})^{\\prime}-{\\rm c.c.}=i \\to \\Psi_{\\bf\nk}^{s}\\frac{d\\Psi_{\\bf k}^{*s}}{dz}-{\\rm c.c.}=-\\frac{2i\\kappa\nm^2}{k^3}. \\label{vwcon}\n\\end{eqnarray}\n We\nchoose the positive frequency mode for a Bunch-Davies vacuum\n$|0\\rangle$ normalized by the Wronskian condition\n\\begin{eqnarray} \\label{vecsol}\n\\Psi_{\\bf k}^{s}(z) =\\sqrt{\\frac{\\kappa m^2}{k^3}} e^{iz}\n\\end{eqnarray}\nas the solution to (\\ref{v1eq}).\n On the other hand, the\nvector power spectrum is defined by\n\\begin{eqnarray}\\label{powerv}\n\\langle0|\\hat{\\Psi}_{j}(\\eta,{\\bf x})\\hat{\\Psi}^{j}(\\eta,{\\bf\nx}')|0\\rangle=\\int d^3{\\bf k}\\frac{{\\cal P}_{\\Psi}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})},\n\\end{eqnarray}\nwhere we take the Bunch-Davies vacuum state $|0\\rangle$ by\nimposing $\\hat{a}_{\\bf k}^{s}|0\\rangle=0$. The vector power\nspectrum ${\\cal P}_{\\Psi}$ takes the form \\begin{equation}\n\\label{vecpt}{\\cal\nP}_{\\Psi}\\equiv\\sum_{s=1,2}\\frac{k^3}{2\\pi^2}\\Big|\\Psi_{\\bf\nk}^{s}\\Big|^2.\\end{equation}\n Plugging (\\ref{vecsol}) into\n(\\ref{vecpt}), we find a constant power spectrum for a vector\nperturbation\n\\begin{eqnarray} \\label{vec-pow}\n{\\cal P}_{\\Psi}=\\frac{m^2}{\\pi^2 M^2_{\\rm P}}.\n\\end{eqnarray}\n\n\\subsection{Tensor power spectrum}\n\n\nWe take Eq.(\\ref{heq}) to compute tensor power spectrum. In this\ncase, the metric tensor $h_{ij}$ can be expanded in Fourier modes\n\\begin{eqnarray}\\label{hijm}\nh_{ij}(\\eta,{\\bf x})=\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\sum_{s={\\rm +,\\times}}p_{ij}^{s}({\\bf k})h_{\\bf\nk}^{s}(\\eta)e^{i{\\bf k}\\cdot{\\bf x}},\n\\end{eqnarray}\nwhere $p^{s}_{ij}$ linear polarization tensors with $p^{s}_{ij}\np^{s,ij}=1$. Also, $h_{\\bf k}^{s}(\\eta)$ represent linearly\npolarized tensor modes. 
Plugging (\\ref{hijm}) into (\\ref{heq}) leads\nto the fourth-order differential equation\n\\begin{eqnarray}\n&&(h_{\\bf k}^{s})^{''''}+2k^2(h_{\\bf k}^{s})^{''}+k^4h_{\\bf k}^{s}\n=0,\\label{heq2}\n\\end{eqnarray}\nwhich is further rewritten as a factorized form\n\\begin{eqnarray}\n&&\\Bigg[\\frac{d^2}{d\\eta^2}+k^2\\Bigg]^2h_{\\bf k}^{s}(\\eta)=0.\n\\label{hee}\n\\end{eqnarray}\nIntroducing $z=-k\\eta$, Eq.(\\ref{hee}) takes the form of a\ndegenerate fourth-order equation\n\\begin{equation}\\label{hc0}\n\\Big[\\frac{d^2}{dz^2}+1\\Big]^2h_{\\bf k}^{s}(z)=0.\\end{equation} This\nis the same equation for a degenerate Pais-Uhlenbeck (PU)\noscillator and its solution is given by\n\\begin{equation} \\label{desol}\nh_{\\bf k}^{s}(z)=\\frac{N}{2k^2}\\Big[(a_2^s+a_1^s z)e^{iz}+c.c.\\Big]\n\\end{equation}\n with $N$ the normalization constant. After quantization, $a^s_2$\n and $a^s_1$ are promoted to operators $\\hat{a}^s_2({\\bf k})$ and $\\hat{a}^s_1({\\bf\n k})$. The presence of $z$\nin $(\\cdots)$ reflects clearly that $ h_{\\bf k}^{s}(z)$ is a\nsolution to the degenerate\n equation (\\ref{hc0}).\nHowever, it is difficult to quantize $h_{ij}$ in the subhorizon\nregion directly because it satisfies the degenerate fourth-order\nequation (\\ref{heq}). In order to quantize $h_{ij}$, we have to\nconsider (\\ref{heq}) as a final equation obtained by making use of\nan auxiliary tensor $\\beta_{ij}$.\n\nFor this purpose, one rewrites the fourth-order action (\\ref{hpeq})\nas a second-order action\n\\begin{equation}\nS_{\\rm AC}^{({\\rm T})}=-\\frac{1}{4 \\kappa m^2}\\int\nd^4x\\Big(\\eta^{\\mu\\nu}\\partial_\\mu \\beta_{ij}\\partial_\\nu h^{ij}\n+\\frac{1}{2} \\beta_{ij}\\beta^{ij}\\Big).\\label{ahpeq}\n\\end{equation}\nTheir equations are given by\n\\begin{equation}\n\\Box h_{ij}=\\beta_{ij},~~\\Box \\beta_{ij}=0,\n\\end{equation}\nwhich are combined to give the fourth-order tensor equation\n(\\ref{heq}). Explicitly, acting $\\Box$ on the first equation leads\nto (\\ref{heq}) when one uses the second one. Actually, this is an\nextension of the singleton action describing a dipole ghost pair as\nthe fourth-order scalar\ntheory~\\cite{Myung:1999nd,Rivelles:2003jd,Kim:2013waf,Kim:2013mfa}.\nThis is related to not a non-degenerate PU oscillator and its\nquantization, but a degenerate PU and\nquantization~\\cite{Mannheim:2004qz,Smilga:2008pr}. The canonical\nconjugate momenta are given by\n\\begin{equation}\n\\pi_h^{ij}=\\frac{1}{4\\kappa\nm^2}\\beta'^{ij},~~\\pi_\\beta^{ij}=\\frac{1}{4\\kappa m^2}h'^{ij}.\n\\end{equation}\nAfter expanding $\\hat{h}_{ij}$ and $\\hat{\\beta}_{ij}$ in their\nFourier modes, their amplitudes at each mode are given by\n\\begin{eqnarray}\n\\label{sol1}\\hat{\\beta}^s_{\\bf k}(z)&=&iN\\Big(\\hat{a}^s_1({\\bf k})e^{iz}-\\hat{a}^{s\\dagger}_1({\\bf k})e^{-iz}\\Big),\\\\\n \\label{sol2}\\hat{h}^s_{\\bf k}(z)&=&\\frac{N}{2k^2}\\Big[\\left(\\hat{a}^s_2({\\bf k})+\\hat{a}^s_1({\\bf\n k})z\\right)e^{iz}+{\\rm h.c.}\\Big].\n \\end{eqnarray}\nNow, the canonical quantization is accomplished by imposing\nequal-time commutation relations:\n\\begin{eqnarray}\\label{comm}\n[\\hat{h}_{ij}(\\eta,{\\bf x}),\\hat{\\pi}_{h}^{ij}(\\eta,{\\bf\nx}^{\\prime})]=2i\\delta^3({\\bf x}-{\\bf\nx}^{\\prime}),~~[\\hat{\\beta}_{ij}(\\eta,{\\bf\nx}),\\hat{\\pi}_{\\beta}^{ij}(\\eta,{\\bf x}^{\\prime})]=2i\\delta^3({\\bf\nx}-{\\bf x}^{\\prime}),\n\\end{eqnarray}\nwhere the factor 2 is coming from the fact that $h_{ij}$ and\n$\\beta_{ij}$ represent 2 DOF, respectively. 
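Before proceeding, it is straightforward to verify symbolically that the mode function entering (\\ref{sol2}) obeys the degenerate equation (\\ref{hc0}), and that a single application of $d^2\/dz^2+1$ reproduces the $e^{iz}$ structure of (\\ref{sol1}); a minimal check with the open-source Python package sympy reads\n\\begin{verbatim}\nimport sympy as sp\n\nz = sp.symbols('z')\na1, a2 = sp.symbols('a1 a2')\n\n# positive-frequency part of the degenerate mode function, up to normalization\nh = (a2 + a1*z)*sp.exp(sp.I*z)\n\n# the operator (d^2\/dz^2 + 1) acting on a fixed Fourier mode\nbox = lambda f: sp.diff(f, z, 2) + f\n\nprint(sp.simplify(box(box(h))))  # 0: the degenerate fourth-order equation\nprint(sp.simplify(box(h)))       # 2*I*a1*exp(I*z): proportional to the beta mode\n\\end{verbatim}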
Taking (\\ref{sol1}) and\n(\\ref{sol2}) into account, the two operators $\\hat{\\beta}_{ij}$ and\n$\\hat{h}_{ij}$ are given by\n\\begin{eqnarray}\\label{hex1}\n\\hat{\\beta}_{ij}(z,{\\bf x})&=&\\frac{ N}{(2\\pi)^{\\frac{3}{2}}}\\int\nd^3{\\bf k}\\Bigg[\\sum_{s=+,\\times}\\Big(ip_{ij}^{s}({\\bf\nk})\\hat{a}_1^s({\\bf k})e^{iz}e^{i{\\bf k}\\cdot{\\bf\nx}}\\Big)+{\\rm h.c.}\\Bigg], \\\\\n\\label{hex2} \\hat{h}_{ij}(z,{\\bf\nx})&=&\\frac{1}{(2\\pi)^{\\frac{3}{2}}}\\int d^3{\\bf\nk}\\frac{N}{2k^2}\\Bigg[\\sum_{s=+,\\times}\\Big\\{p_{ij}^{s}({\\bf\nk})\\Big(\\hat{a}_2^s({\\bf k})+\\hat{a}_1^s({\\bf k})z\n\\Big)e^{iz}e^{i{\\bf k}\\cdot{\\bf x}}\\Big\\}+{\\rm h.c.}\\Bigg].\n\\end{eqnarray}\nPlugging (\\ref{hex1}) and (\\ref{hex2}) into (\\ref{comm}) determines\nthe normalization constant $N=\\sqrt{2\\kappa m^2}$ and the commutation\nrelations between $\\hat{a}_i^s({\\bf k})$ and\n$\\hat{a}^{s\\dagger}_j({\\bf k}')$ as\n \\begin{equation} \\label{scft}\n [\\hat{a}_i^s({\\bf k}), \\hat{a}^{s^{\\prime}\\dagger}_j({\\bf k}')]= 2k \\delta^{ss'}\n \\left(\n \\begin{array}{cc}\n 0 & -i \\\\\n i & 1 \\\\\n \\end{array}\n \\right)\\delta^3({\\bf k}-{\\bf k}').\n \\end{equation}\nHere the commutation relation $[\\hat{a}_2^s({\\bf k}),\n\\hat{a}^{s^{\\prime}\\dagger}_2({\\bf k}')]$ is determined by the\ncondition\n\\begin{equation} [\\hat{h}_{ij}(\\eta,{\\bf\nx}),\\hat{\\pi}_{\\beta}^{ij}(\\eta,{\\bf x}^{\\prime})]=0.\n\\end{equation}\nWe are now ready to compute the power spectrum of the\ngravitational waves, defined by\n\\begin{eqnarray}\\label{power}\n\\langle0|\\hat{h}_{ij}(\\eta,{\\bf x})\\hat{h}^{ij}(\\eta,{\\bf\nx^{\\prime}})|0\\rangle=\\int d^3k\\frac{{\\cal P}_{\\rm h}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})}.\n\\end{eqnarray}\nHere we choose the Bunch-Davies vacuum $|0\\rangle$ by imposing\n$\\hat{a}_i^s({\\bf k})|0\\rangle=0$. The tensor power spectrum ${\\cal\nP}_{\\rm h}$ in (\\ref{power}) is defined as ${\\cal P}_{\\rm\nh}\\equiv\\sum_{s={+,\\times}}{\\cal P}^s_{\\rm h}$, where ${\\cal\nP}^s_{\\rm h}$ is given by\n\\begin{eqnarray}\n{\\cal P}^s_{\\rm h}=\\frac{k^3}{2\\pi^2}\\Big|h_{\\bf\nk}^{s}\\Big|^2=\\frac{m^2}{2\\pi^2M^2_{\\rm P}}.\n\\end{eqnarray}\nFinally, we obtain the tensor power spectrum\n\\begin{equation}\n{\\cal P}_{\\rm h}=\\frac{m^2}{\\pi^2M^2_{\\rm P}}\n\\end{equation}\nwhich corresponds to a constant power spectrum of the same\nform as the vector power spectrum (\\ref{vec-pow}).\n\nOn the other hand, the power spectrum of the auxiliary tensor\n$\\beta_{ij}$ is defined by\n\\begin{equation}\n\\langle0|\\hat{\\beta}_{ij}(\\eta,{\\bf x})\\hat{\\beta}^{ij}(\\eta,{\\bf\nx^{\\prime}})|0\\rangle=\\int d^3k\\frac{{\\cal P}_{\\rm \\beta}}{4\\pi\nk^3}e^{i{\\bf k}\\cdot({\\bf x}-{\\bf x^{\\prime}})}.\n\\end{equation}\nHere we obtain a vanishing power spectrum,\n\\begin{equation}\n{\\cal P}_{\\rm \\beta}=0,\n\\end{equation}\nupon using the commutation relation $[\\hat{a}_1^s({\\bf k}),\n\\hat{a}^{s^{\\prime}\\dagger}_1({\\bf k}')]=0$. This is clear because\n$\\beta_{ij}$ is merely an auxiliary tensor introduced to lower the fourth-order action\nto a second-order one. However, it is not yet understood why\n$\\hat{h}_{ij}$ can be expanded in terms of both $\\hat{a}_2^s({\\bf k})$ and\n$\\hat{a}_1^s({\\bf k})$ without introducing $\\beta_{ij}$, whereas\n$\\beta_{ij}$ is expanded in terms of $\\hat{a}_1^s({\\bf k})$ alone.\n\n\\section{Discussions}\n\nWe have found the constant vector and tensor power spectra generated\nduring de Sitter inflation from conformal gravity.
\n\n\\section{Discussions}\n\nWe have found the constant vector and tensor power spectra generated during de Sitter inflation from conformal gravity. These constant power spectra can be understood as a consequence of the invariance of conformal gravity under conformal (Weyl) transformations. This means that their power spectra are constant with respect to $z=-k\\eta$, since the vector and tensor perturbations are decoupled from the expanding de Sitter background. In other words, this is so because the bilinear actions (\\ref{vpeq}) and (\\ref{hpeq}) are independent of the conformal scale factor $a(\\eta)$ as a result of conformal invariance. In contrast to Ref.\\cite{Mannheim:2011is}, it is therefore less interesting to investigate further cosmological implications of conformal gravity.\n\n Hence, our analysis implies that Einstein-Weyl gravity is more promising than conformal gravity for obtaining a physical tensor power spectrum, because the Einstein-Hilbert term provides a coupling to the scale factor $a$ of the form $a^2(h'_{ij}h'^{ij}-\\partial_lh_{ij}\\partial^lh^{ij})$. Also, the singleton Lagrangian ${\\cal L}_s=-\\sqrt{-\\bar{g}}(\\frac{1}{2}\\bar{g}^{\\mu\\nu}\\partial_\\mu\\phi_1\\partial_\\nu\\phi_2+\\frac{1}{2}\\phi_1^2)$ is quite interesting because it provides the two scalar equations $(\\Box +2aH\\partial_\\eta)\\phi_2=\\phi_1$ and $(\\Box +2aH\\partial_\\eta)\\phi_1=0$, which combine to yield the degenerate fourth-order equation $(\\Box +2aH\\partial_\\eta)^2 \\phi_2=0$. Here we observe the presence of the scale factor $a$ in the perturbed equation of the singleton.\n\n Consequently, the conformal invariance of a Lagrangian such as $\\sqrt{-g}C^2$ or $\\sqrt{-g}F^2$ cannot be responsible for generating the observed fluctuations during inflation.\n\n\\vspace{0.25cm}\n\n {\\bf Acknowledgement}\n\n\\vspace{0.25cm}\n This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No.2012-R1A1A2A10040499).\n\n\\newpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nDeep neural networks (DNNs) are a popular machine learning technique and have shown superior performance in many scientific problems. Despite their high prediction accuracy, DNNs are often criticized for the lack of an interpretation of how changes in the input variables influence the output. Indeed, for applications in many scientific fields such as biology and medicine, understanding the statistical models described by the networks can be as important as, if not more important than, the prediction accuracy. In a DNN, because of its nonlinearity and inherent complexity, one should generally not expect a concise relationship between each input variable and the output, such as the conditional monotonicity in linear regression or logistic regression. A more realistic approach to interpreting a DNN model is to select, among all input variables, a subset of variables that have significant predictive power for the output, a task known as ``variable selection''. This paper considers the variable selection problem in DNNs.\n\nDuring the past decades, many methods have been proposed for this task. The variable selection methods for neural networks, similar to those for other machine learning techniques, can be broadly classified into three categories: filters, wrappers and embedded methods \\cite{may_review_2011-1,guyon_introduction_2003,chandrashekar_survey_2014}.
Filters select variables by information theoretic criteria such as mutual information \\cite{battiti_using_1994} and partial mutual information \\cite{may_non-linear_2008}, and the selection procedure does not involve network training. In contrast, both wrappers and embedded methods are based on the training of neural networks. Wrappers wrap the training phase with a search strategy, which searches through the set, or a subset, of all possible combinations of input variables and selects the combination whose corresponding network gives the highest prediction accuracy. A number of sequential \\cite{sung1998ranking,maier1998use} and heuristic search strategies \\cite{brill_fast_1992,tong_genetic_2010,sivagaminathan_hybrid_2007} have been used. Embedded methods, unlike wrappers, select variables during the training of the network of interest. This can be done by gradually removing\/pruning weights or variables according to their importance measured in various ways (a detailed review is given in the Methods section) or by incorporating a regularization term into the loss function of the neural network to impose sparsity on the weights \\cite{grandvalet_outcomes_1999,chapados_input_2001,simila_combined_2009,scardapane_group_2017}. For a more exhaustive review of variable selection methods in neural networks, see \\cite{may_review_2011-1,zhang_neural_2000}.\n\nWhile a lot of variable selection methods have been developed for neural networks, there are still challenges that hinder them from being widely used. First and foremost, these methods lack a control on the quality of selected variables. When selecting from a large number of variables, a standard way of quality control is to calculate false discovery rate (FDR) \\cite{benjamini1995controlling} and control it at a certain level, particularly in biological and medical studies. In the context of variable selection, FDR is the (expected) proportion of false positives among all variables called significant; for example, if 20 variables are selected (called significant), and two of them are actually null, then the FDR is $2\/20=0.1$. However, no variable selection methods for neural networks so far have tried to estimate FDR or keep FDR under control. Second, among these methods, many were developed for specific types of networks, especially very shallow networks, and they do not work, or work inefficiently, for deeper networks. Third, many of the methods are not applicable to large datasets, on which their computational loads can be prohibitively high.\n\nIn this paper, we develop a method called SurvNet for variable selection in neural networks that overcomes these limitations. It is an embedded method that gradually removes least relevant variables until the FDR of remaining variables reaches a desired threshold. Figure \\ref{fig:1} is the flowchart of SurvNet. It starts by adding a set of simulated input variables called ``surrogate variables'' that will help estimate FDR and training a network with all variables, including both original and surrogate variables. Then it calculates the importance of each variable (original or surrogate) and eliminates the variables that are least important. When eliminating a variable, its corresponding input neuron and all outgoing connections of this neuron are removed from the network. After this, SurvNet estimates the FDR of the original variables that remain in the model. 
If the estimated FDR is greater than the pre-set threshold, SurvNet will go back to the step of training the (updated) network; otherwise, the elimination stops, and all remaining surrogate variables are removed before the final model is trained. Note that each updated network is trained using the values of weights in the last trained network as initial values for a ``warm start''.\n\nThere are three major novelties in this backward elimination procedure of SurvNet. First, it proposes a new measure\/score of variable importance, which works regardless of the type of problems (classification or regression), the number of output neurons (one or multiple), and the number of hidden layers (one or multiple) in neural networks. In fact, this score can be readily computed for networks with arbitrary depths and activation functions. Second, SurvNet proposes an easy and quick way of estimating FDRs. Statistical estimation of FDRs requires obtaining the null distribution of the importance scores, that is, the distribution of the scores of irrelevant variables \\cite{storey2003statistical}. This is often done by permuting the output values of samples and training multiple independent models in parallel, each of which corresponds to a permuted dataset, but the computational cost is typically unaffordable for neural networks. SurvNet proposes a distinct way. It generates a set of null variables which serve as surrogates of the (unknown) null original variables to obtain the null distribution. With the introduction of surrogate variables, an estimate of FDR can be given by a simple mathematical formula without training a large number of networks at each step. Third, instead of eliminating one variable or any pre-specified number of variables at each step, SurvNet is able to adaptively determine an appropriate number of variables to eliminate by itself. This number, expressed in a concise mathematical formula, makes the elimination highly efficient while having the FDR well controlled on the desired level. The formula includes a parameter called ``elimination rate'', which is a constant between 0 and 1 and controls the ``aggressiveness'' of elimination. When this parameter is chosen to be 1, the elimination is the most aggressive.\n\nPut together, SurvNet is a computationally efficient mechanism for variable selection in neural networks that needs little manual intervention. After setting the initial network structure, an FDR cutoff $\\eta ^*$ (0.1 is the most commonly used value), and an elimination rate $\\varepsilon$ (1 is often an acceptable choice), the elimination procedure will automatically determine how many and which variables to eliminate at each step and stop when the estimated FDR is no greater than $\\eta ^*$.\n\n\\section*{Data and results}\nWe applied SurvNet to digits 4's and 9's in the MNIST database (Dataset 5), a single-cell RNA-Seq dataset (Dataset 6), as well as four simulation datasets (Datasets 1 $\\sim$ 4).\n\nMNIST \\cite{lecun1998gradient} contains 60,000 training images (including 5,000 validation images) and 10,000 testing images of ten handwritten digits from 0 to 9. Each image contains $28\\times28 = 784$ pixels, which are treated as 784 input variables.\n\nSingle-cell RNA-Seq \\cite{kolodziejczyk2015technology} is a biological technique for measuring gene expression in cells. 
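\n\nBefore turning to the remaining datasets, the elimination loop outlined above can be summarized in a short, self-contained sketch. This is not the authors' implementation: the network training and the derivative-based scores of the Methods section are abstracted into a user-supplied {\\tt score\\_fn}, a crude two-sample $t$-statistic stand-in is included only so that the sketch runs end to end, and all function names are hypothetical.\n\\begin{verbatim}\nimport numpy as np\n\ndef survnet_sketch(X, y, score_fn, eta_star=0.1, eps=1.0, seed=0):\n    # Backward elimination with surrogate null variables (minimal sketch).\n    # score_fn(X_sub, y) returns one non-negative importance score per column;\n    # SurvNet itself re-trains the network and uses gradient-based scores.\n    rng = np.random.default_rng(seed)\n    n, p = X.shape\n    q = p                                   # as many surrogates as original variables\n    Xs = rng.permutation(X.ravel()).reshape(n, q)   # permuted entries -> null columns\n    Z = np.hstack([X, Xs])\n    keep = np.arange(p + q)                 # indices of variables still in the model\n    while True:\n        r0 = int(np.sum(keep >= p))         # surrogate variables still kept\n        r = keep.size\n        fdr_hat = (r0 \/ max(r - r0, 1)) * (p \/ q)\n        if fdr_hat <= eta_star or r0 == 0:\n            break\n        m = int(np.ceil(eps * (1 - eta_star \/ fdr_hat) * r0))\n        scores = score_fn(Z[:, keep], y)\n        keep = keep[np.argsort(scores)[m:]] # drop the m least important variables\n    return keep[keep < p]                   # selected original variables\n\ndef t_score(X, y):                          # crude stand-in, binary labels 0\/1\n    a, b = X[y == 0], X[y == 1]\n    se = np.sqrt(a.var(0) \/ len(a) + b.var(0) \/ len(b) + 1e-12)\n    return np.abs(a.mean(0) - b.mean(0)) \/ se\n\\end{verbatim}\nFor data generated as in Dataset 1 below, {\\tt survnet\\_sketch(X, y, t\\_score)} returns the indices of the retained original variables; replacing {\\tt t\\_score} by a network-based score recovers the spirit of the full procedure.\n\n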
Along with other single-cell techniques, it was recognized as the ``2018 Breakthrough of the Year'' by the \\textit{Science} magazine on account of its important applications in biomedical and genomic research. In single-cell RNA-Seq data, the samples are the cells, the inputs are expression levels of individual genes, and the output is the cell type. Biologically, it is often believed that the cell type is determined by a small set of genes, and thus single-cell RNA-Seq data can be a good choice to study variable selection. \n\nThe classification accuracy of SurvNet for these real data was evaluated by several criteria, including initial test loss, initial test error, final test loss and final test error. Here ``test loss'' and ``test error'' refer to the cross-entropy loss and the misclassification rate on the test data, respectively; and their ``initial'' and ``final'' values were derived by using the network with all original variables and with selected variables only, respectively. See Supplementary Materials for details about how they were calculated.\n\nFor these real datasets (Datasets 5 $\\sim$ 6), however, it is unknown which variables are truly significant. Hence we relied on simulated data to quantitatively assess the accuracy of selected variables, the most important aspect of SurvNet. Four datasets were simulated under different schemes. Datasets 1 $\\sim$ 3 were for classification and Dataset 4 was for regression.\n\nExcept for the MNIST data, each dataset was divided into a training set and a test set, with 80\\% of the samples in the training set and 20\\% in the test set, and 30\\% of training samples were further separated for validation (used to decide when to stop training, see Supplementary Materials).\n\nSurvNet was implemented on TensorFlow 1.8 \\cite{tensorflow2015-whitepaper}. For each dataset, we used a common and simple network structure with two hidden layers, which consisted of 40 and 20 nodes respectively. The ReLU activation function was used, together with a batch size of 50 and a learning rate of 0.05 (0.01 for the regression problem).\n\nIn our experiments, Datasets 1 $\\sim$ 4 were simulated 25 times, and the results of variable selection using SurvNet are averaged over these 25 simulations. For Dataset 1, we demonstrate how SurvNet works step by step to look into its behavior, and we also study the influence of the elimination rate by setting $\\varepsilon$ to different values. On other simulation datasets, results are similar and thus are not given.\n\n\\subsection*{Dataset 1: simulated data with independent variables}\nWe simulated a $10,000\\times784$ matrix $\\bm{X}$, with $x_{ij} \\sim {\\rm i.i.d.}\\ U(0,1)$ for $1 \\le i \\le 10,000$, $1 \\le j \\le 784$, where $U$ means uniform distribution, and treated its rows and columns as samples and variables respectively. The samples were randomly assigned into two classes $C_1$ and $C_2$ of equal size. Then $p^\\prime = 64$ variables were chosen at random and their values in one class were shifted: for each of these variables, we generated a shift value $\\delta_j \\sim U(0.1,0.3)$, with its direction having equal probability of being positive and negative. More precisely, $x_{ij} \\leftarrow x_{ij} + (2\\alpha_j-1) \\cdot \\delta_j$ for $i \\in C_1$, $j \\in \\Omega_{p^\\prime}$, where $\\alpha_j \\sim {\\rm Bernoulli}(\\frac{1}{2})$ and $\\Omega_{p^\\prime}$ was the set of $p^\\prime$ randomly chosen variables. 
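\n\nFor concreteness, this scheme can be reproduced with a few lines of numpy; the sketch below follows the stated distributions, while the random seed and the 0\/1 class labels are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)                 # arbitrary seed\nn, p, p_sig = 10_000, 784, 64\n\nX = rng.uniform(0.0, 1.0, size=(n, p))\ny = rng.permutation(np.repeat([0, 1], n \/\/ 2)) # two classes of equal size\nsig = rng.choice(p, size=p_sig, replace=False) # the truly significant variables\ndelta = rng.uniform(0.1, 0.3, size=p_sig)      # shift magnitudes delta_j\nsign = 2*rng.integers(0, 2, size=p_sig) - 1    # +1 or -1 with equal probability\n\nrows = np.flatnonzero(y == 0)                  # take class 0 as C_1\nX[np.ix_(rows, sig)] += sign*delta             # shift the means in C_1 only\n\\end{verbatim}\n\n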
In this way, the 784 variables were independent from each other, and the 64 variables were significant because each of them had different mean values in the two classes. This ``independent-variable differential-mean'' scheme is a very widely used simulation scheme for studying variable selection.\n\nWe ran SurvNet on this dataset with an FDR cutoff $\\eta^* = 0.1$ and an elimination rate $\\varepsilon = 1$. To demonstrate how SurvNet works step by step, Figure \\ref{fig:2}a shows, in one instance of simulation, the number of original variables and surrogate variables left at each step of a selection process as well as the corresponding estimated FDR. The number of variables to be eliminated in the subsequent step is also displayed, and notice that our algorithm was efficient: it eliminated a large number of variables at the beginning and gradually slowed down the elimination as the number of remaining variables decreased and the estimated FDR got closer to the desired value. When the estimated FDR became less than 0.1, the selection process stopped, and the final model turned out to contain all the 64 truly significant variables. On the same data, we studied the influence of elimination rate, and the results of using $\\varepsilon = 1$ and $\\varepsilon = 0.5$ are shown in Figure \\ref{fig:2}b and \\ref{fig:2}c. It is found that while a larger elimination rate led to a faster selection process with fewer steps, the number of variables left at the end of the selection was almost the same (Figure \\ref{fig:2}b). Moreover, regardless of elimination rate, our method gave an accurate estimate of FDR, and the true value of FDR was well controlled throughout the selection process (Figure \\ref{fig:2}c).\n\nThe overall performance of SurvNet under $\\eta^* = 0.1$ and $\\varepsilon = 1$ was summarized in Table \\ref{tab:1}. The test loss and test error on the model with selected variables were both less than those on the model that contains all original variables, indicating enhanced predictive power of the network. More importantly, SurvNet accurately selected the significant variables: it kept 61.92 of the 64 significant variables, along with 7.42 false positives, and the selected variables had an FDR of 0.105, which was very close to the cutoff value 0.1. The estimated FDR, 0.093, was also close to the actual FDR.\n\nThe results under different elimination rates ($\\varepsilon = 1$ and $\\varepsilon = 0.5$), different FDR cutoffs ($\\eta^* = 0.1$ and $\\eta^* = 0.05$), and different numbers of significant variables ($p^\\prime = 64$ and $p^\\prime = 32$) are shown in Table S1.\n\n\\subsection*{Dataset 2: simulated data with correlated variables}\nWe considered correlated variables in this simulation dataset. It is well known that variable dependence often makes FDR estimation difficult \\cite{benjamini2001control, heesen2015inequalities}, and we wondered whether SurvNet was still valid in this case. Images are perfect examples of data with correlated variables, as the value of a pixel usually highly depends on the value of its surrounding pixels. Here we used all images of digit 0 in the MNIST data and randomly assigned them into two classes, and all variables were supposed to be non-significant for classification at this time. Then we picked $p^\\prime = 64$ variables and shifted their mean values in one class in the same way we did in Dataset 1.\n\nTable \\ref{tab:1} shows the performance of SurvNet under $\\eta^* = 0.1$ and $\\varepsilon = 1$. 
Similar to that on Dataset 1, the test loss decreased after variable selection. The test errors before and after variable selection were both zero, possibly due to the positive correlation between pixels, which reduced the difficulty of the classification problem. Although SurvNet identified slightly fewer significant variables (59.36 of the 64 significant variables) than it did in Dataset 1, the FDR 0.107 was still very close to the desired cutoff, and its estimated value 0.094 was accurate as well. For results under different sets of parameter values, see Table S2.\n\n\\subsection*{Dataset 3: simulated data with variance-inflated variables}\nThe third simulation scheme is very challenging. Unlike in the previous two datasets, the significant variables did not differ in the mean values of the two classes; instead, they differed only in the variances. As in Dataset 1, we simulated a $10,000\\times784$ matrix $\\bm{X}$ whose element $x_{ij} \\sim {\\rm i.i.d.}\\ U(0,1)$ and divided the samples into two equal-size classes $C_1$ and $C_2$. But then, to make $p^\\prime=64$ randomly chosen variables significant, we let $x_{ij} \\leftarrow x_{ij} + (2\\alpha_{ij}-1) \\cdot \\delta_{ij}$ for $i \\in C_1$, $j \\in \\Omega_{p^\\prime}$, where $\\alpha_{ij} \\sim {\\rm Bernoulli}(\\frac{1}{2})$, and $\\delta_{ij} \\sim U(0.8,1)$. Note that, unlike in the first two simulation schemes, here $\\delta$ and $\\alpha$ depend on both $i$ and $j$. Thus the means of these variables remained unchanged, but their standard deviations were inflated from 0.29 to 0.95 (see Supplementary Materials for calculations). In other words, the only difference between the two classes was that the values of 64 out of 784 pixels were ``noisier''. In this case, classifiers and tests based on discrepancies in the mean values would fail. For example, the t-test identified merely 0.20 (averaged over 25 instances) of the 64 significant variables.\n\nThe results of applying SurvNet with $\\eta^* = 0.1$ and $\\varepsilon = 1$ are shown in Table \\ref{tab:1}, and the first thing to notice is the dramatic improvement of classification accuracy on the test set. While the test error given by the network with all 784 variables was 49.42\\%, it dropped to 0.47\\% after variable selection by SurvNet; that is, from an almost random guess to an almost perfect classification. This implies that variable selection gives back to the DNN the ability to utilize all types of information useful for classification, which was masked by the overwhelming number of irrelevant variables. \nAmong the selected variables, 23.00 were truly significant variables, and 3.40 were false positives. Although only 36\\% of the significant variables were successfully identified, the FDR of the remaining variables, 0.114, was close to the cutoff, and the estimated FDR was acceptably accurate.\n\nWe then scrutinized the selection process of SurvNet on this dataset, and found that the reason that only a proportion of the significant variables were retained was that the initial network, which made almost random guesses, could not accurately determine the importance of the variables, and thus many significant variables were removed. As the selection proceeded, the network gained higher classification accuracy and also a stronger ability to distinguish the significant variables.
When we used a smaller elimination rate, say $\\varepsilon = 0.5$, SurvNet was able to keep a larger proportion of significant variables (see Table S3 for details).\n\n\n\\subsection*{Dataset 4: simulated regression data}\nSuppose the data matrix is $\\bm{X}=(x_{ij})_{10,000 \\times 784}$, and each $x_{ij} \\sim U(-1,1)$. Of the 784 variables, 64 were randomly chosen as significant variables (denoted by $x_{k_j}, j=1,\\ldots,64$), and $y$ was set to be the linear combination of $x_{k_j}$ or its nonlinear functions, plus a few interaction terms and a random error term:\n\\[\n\\begin{split}\ny_i=\\sum_{j=1}^{16} \\beta_j x_{ik_j} + \\sum_{j=17}^{32} \\beta_j \\sin x_{ik_j} + \\sum_{j=33}^{48} \\beta_j e^{x_{ik_j}} + \\sum_{j=49}^{64} \\beta_j \\max (0,x_{ik_j}) \\\\\n+ \\beta_1^\\prime x_{ik_{15}} x_{ik_{16}} + \\beta_2^\\prime x_{ik_{31}} x_{ik_{32}} + \\beta_3^\\prime x_{ik_{47}} x_{ik_{48}} + \\beta_4^\\prime x_{ik_{63}} x_{ik_{64}} + \\varepsilon_i,\n\\end{split}\n\\]\nwhere $\\beta_j=(2\\alpha_j-1) \\cdot b_j$, $\\alpha_j \\sim {\\rm Bernoulli}(\\frac{1}{2})$, $b_j \\sim U(1,3)$, $\\varepsilon_i \\sim N(0,1)$ for $i=1,\\ldots,10,000$, $j=1,\\ldots,64$, and $\\beta_1^\\prime$, $\\beta_2^\\prime$, $\\beta_3^\\prime$, $\\beta_4^\\prime$ have the same distribution as $\\beta_j$.\n\nWe ran SurvNet with $\\eta^*=0.1$ and $\\varepsilon=1$ on 25 instances of simulation and the results are reported in the format of mean $\\pm$ standard deviation. After variable selection, the test loss was reduced greatly, from $33.013\\pm27.059$ to $8.901\\pm1.988$. The number of remaining original variables was $71.16\\pm5.02$ on average, and $63.96\\pm0.20$ of the 64 significant variables were kept. The actual FDR of the selected variables was $0.097\\pm0.061$, close to the desired value 0.1, and the estimated FDR, $0.094\\pm0.004$, was accurate. The results suggest that SurvNet is highly effective for this regression dataset.\n\n\\subsection*{Dataset 5: digits 4 and 9 in MNIST}\nAfter four simulation datasets, we applied SurvNet to the MNIST data. Here we only used the images of two digits that look alike (4 and 9), as they are similar in most pixels and are only different in pixels in certain regions. In Figure \\ref{fig:3}a, we show two representative 4's that differ in the width of top opening and two representative 9's that differ in the presence of a bottom hook. The four regions circled in red are likely to be most significant in differentiating 4's and 9's, especially the region in the upper middle denoting whether the top is closed or open, and the region in the lower middle denoting whether there is a hook at the bottom.\n\nFrom left to right, Figure \\ref{fig:3}b shows the pixels that were selected by SurvNet under four combinations of FDR cutoffs ($\\eta^* = 0.1$ or 0.01) and elimination rates ($\\varepsilon = 1$ or 0.5). The colors display the relative importance, defined by equation \\ref{eq:Sj_L2} (see Methods), of the selected pixels, and a darker color means greater importance. We found that different parameter settings gave quite consistent results, and they all picked out the four regions that were speculated to be significant.\n\n\\subsection*{Dataset 6: single-cell RNA-Seq data}\nChen \\textit{et al.} performed single-cell RNA-Seq analysis of the adult mouse hypothalamus and identified 45 cell types based on clustering analysis \\cite{chen_single-cell_2017}. 
We used 5,282 cells in two non-neuronal clusters, oligodendrocyte precursor cell (OPC) and myelinating oligodendrocyte (MO), which reflected two distinct stages of oligodendrocyte maturation. Following a standard pre-processing protocol of single-cell RNA-Seq data \\cite{hwang2018single}, we filtered out the genes whose expression could not be detected in more than 30\\% of these cells, which left 1,046 genes for further analysis, and used $\\log({\\rm TPM}+1)$ for measuring gene expression levels, where TPM standed for ``transcripts per million''.\n\nWith $\\eta^*=0.01$ and $\\varepsilon=1$, SurvNet selected 145 genes in one realization. Figure \\ref{fig:4} shows the heatmap of the expression values of these genes, in which rows are genes and columns are cells. The top banner shows the class labels for the samples. For gene expression data, the set of significant genes are typically identified by ``differential expression'' analysis, which finds differences in the mean expression levels of genes between classes. Indeed, as the heatmap shows, most genes have evidently different mean expression levels in the OPCs and MOs. However, among the 145 significant genes identified by SurvNet, 16 have log-fold-changes (logFCs) less than 1, meaning that their mean expression levels are not very different in the OPCs and MOs. In Figure \\ref{fig:4}, these genes are marked in purple on the left banner, in contrast to green for the other genes. Actually, Bartlett's test, which tests the difference in variance, claimed that 14 of these 16 genes had unequal variances in the two groups of cells (p-value < 0.05); thus, they are instances of variance-inflated variables selected by SurvNet, in addition to the ones in Dataset 3. Again, SurvNet demonstrates its ability to identify various types of significant variables, not just variables with different means.\n\nFurther, the functional interpretations of the selected genes match the biological characteristics of OPCs and MOs. We conducted Gene Ontology (GO) analysis using DAVID 6.8 program \\cite{huang2009systematic,da2009bioinformatics}, and found that these genes were likely to play an important role in a number of biological processes, for example, substantia nigra development (with p-value $8.8\\times10^{-9}$, fold enrichment 29.1), nervous system development ($1.8\\times10^{-5}$, 4.8), positive regulation of dendritic spine development ($1.2\\times10^{-3}$, 19.0) and astrocyte differentiation ($3.2\\times10^{-3}$, 34.5). In particular, oligodendrocyte differentiation ($1.8\\times10^{-3}$, 16.2) defines the transition from OPCs to their mature form (MOs) \\cite{rubio2004vitro,barateiro2014temporal}, and myelination ($7.1\\times10^{-5}$, 13.8), which is the process of generating myelin and is a kind of axon ensheathment ($3.5\\times10^{-2}$, 55.2), is unique to MOs \\cite{menn2006origin,barateiro2014temporal}. Corresponding to these processes, the selected genes were also enriched for cellular components such as myelin sheath ($2.4\\times10^{-19}$, 16.2), axon ($1.2\\times10^{-5}$, 5.0) as well as internode region of axon ($2.9\\times10^{-4}$, 106.1), and molecular functions like structural constituent of myelin sheath ($3.1\\times10^{-6}$, 115.3). 
Besides, among the 16 selected genes whose expression levels had no obvious differences between the OPCs and MOs, \\textit{Cd9} was involved in oligodendrocyte development \\cite{terada2002tetraspanin}, and \\textit{Ckb}, \\textit{Actb}, \\textit{Tuba1a} as well as \\textit{Gpm6b} were related to the myelin sheath or the myelin proteolipid protein PLP \\cite{jahn2009myelin,werner2013critical}.\n\nAfter variable selection, the test loss was reduced from $4.230\\times10^{-3}$ to $3.460\\times10^{-3}$, and the test error dropped from 0.083\\% to 0.076\\% (averaged over 25 realizations).\n\n\\section*{Conclusions and discussion}\nWe have presented a largely automatic procedure for variable selection in neural networks (SurvNet). It is based on a new measure of variable importance that applies to a variety of networks, deep or shallow, for regression or classification, and with one or multiple output units. More importantly, SurvNet is the first method that estimates and controls the FDR of selected variables, which is essential for applications where the trustworthiness of variable selection is pivotal. By introducing surrogate variables, it avoids training multiple networks in parallel. SurvNet also adaptively adjusts the number of variables to eliminate at each step, and the ``warm start'' nature of backward elimination facilitates the training of networks. On multiple simulated and real datasets, SurvNet has effectively identified the significant variables and given a dependable estimate of FDR.\n\nSurvNet takes advantage of modern developments in DNNs. The importance scores of input variables, which are based on derivatives with respect to the inputs, can be efficiently computed by functions in deep-learning packages such as TensorFlow, PyTorch, and Theano. Moreover, advances in optimization techniques and computation platforms have made the training of DNNs highly scalable. In particular, DNNs can accommodate a large number of input variables, which enables the introduction of surrogate variables.\n\nGiven a dataset, SurvNet may select different sets of significant variables in different runs owing to the randomness originating from the generation of surrogate variables and the training of networks (e.g., the random initial values of the weights). While the former is unique to SurvNet, the latter is ubiquitous in any application of neural networks. The randomness caused by generating surrogate variables may be reduced by, for example, using a larger number of surrogate variables or aggregating results from multiple runs, but this randomness should not be a major concern if it is not much larger than the inevitable randomness coming from network training. To study this, we take Dataset 5 as an example. Using $\\eta^*=0.1$ and $\\varepsilon=1$, we ran SurvNet 25 times and found that it selected $114.16\\pm11.36$ variables, and the overlapping proportion of the selected variables between each pair of realizations was approximately 0.77. These results reflected both sources of randomness. Then we fixed the surrogate variables in each realization, and SurvNet selected $118.32\\pm7.94$ variables, with the overlapping proportion between each pair of realizations around 0.79. This indicates that, for this dataset, the randomness brought by the surrogate variables was much less than that brought by the training of networks.
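\n\nOne simple way to quantify this run-to-run agreement is the average pairwise overlap of the selected index sets. The sketch below is one plausible implementation; the exact overlap measure behind the 0.77 and 0.79 figures is not spelled out in the text, so the normalization used here (intersection size divided by the average set size) is an assumption.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations\n\ndef mean_pairwise_overlap(selections):\n    # selections: list of 1-D arrays of selected variable indices, one per run\n    props = [2*len(np.intersect1d(a, b)) \/ (len(a) + len(b))\n             for a, b in combinations(selections, 2)]\n    return float(np.mean(props))\n\\end{verbatim}\n\n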
And (hopefully) as peace of mind, some other well-known techniques for statistical tests and variable selection, such as permutation tests and bootstrap tests (and especially, parametric bootstrap tests), also have extra randomness caused by permutations or random number generations, but they are still very widely used.\n\nNext we discuss how many surrogate variables should be generated. In all experiments in this paper, we simply set the number of surrogate variables ($q$) to be the same as the number of original variables ($p$). A larger $q$ may lower the randomness brought by the surrogate variables and thus give a more stable selection of variables and a more accurate estimate of FDR. These improvements can be noticeable and worth pursuing when the number of original variables is small. On the other hand, a larger number of surrogate variables may increase the computational load. As a rule of thumb, we recommend using $q=p$ for datasets with moderate to large sample size, and $q$ can be a few times larger than $p$ if $p$ is small and be smaller than $p$ if $p$ is very large.\n\nAlthough variable selection is critical to many real applications and is often considered one of the most fundamental problems in machine learning \\cite{trevor2009elements, tibshirani2015statistical}, it is worth noting that this task does not apply to certain problems or certain types of DNNs. As an example, for some image datasets like ImageNet \\cite{deng2009imagenet}, deep convolutional neural networks are often a good choice due to their translation invariance characteristics, as the object of interest, such as a dog, may appear at any position in an image and thus theoretically every pixel should be relevant. Also, in the area of natural language processing, where recurrent neural networks are often used, the number of input variables (i.e. the length of input sequence) is not fixed and variable selection makes little sense.\n\nThe main aim of variable selection is to identify significant variables, which may, for example, shed light on the mechanisms of biological processes or guide further experimental validation. Apart from that, an additional aim may be to improve the classification accuracy. Although we did observe an improvement of generalization accuracy on all our simulated and real datasets, such an improvement is not guaranteed even if the variable selection procedure works perfectly. In some datasets, except for a set of significant variables, all other variables are almost completely irrelevant to the outcome, and variable selection may give extra power in prediction. However, in some other datasets, the relevances of variables are not polarized; there are many variables each having a very small influence on the output, but their accumulative contribution is non-negligible. For these datasets, such variables are likely to be ruled out during selection since it is hard to confidently determine their individual significance, but ignoring all of them could cause a loss of prediction power.\n\n\\section*{Methods}\n\n\\subsection*{Measures of variable importance}\n\n\\subsubsection*{Notation}\nWe use a tuple ($\\bm{x}$,$\\bm{y}$) to represent the input and the output of the network, with $\\bm{y}$ being either one-dimensional or multi-dimensional. $x_j$ denotes the $j^\\mathrm{th}$ component of $\\bm{x}$, namely the $j^\\mathrm{th}$ variable, and ($\\bm{x}^{(i)}$,$\\bm{y}^{(i)}$) ($i=1,\\ldots,n$) is the $i^\\mathrm{th}$ sample, where $n$ is the total number of samples (in the training set). 
Given a proper form of the loss $L(\\cdot,\\cdot)$, the loss function $L^*=\\sum_{i=1}^n L(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))$, where $f$ denotes the output function of the network. The most popular choices for $L(\\cdot,\\cdot)$ are the squared error loss for regression problems and the cross-entropy loss for classification problems.\n\n\\subsubsection*{Existing measures}\nMany statistics have been proposed to measure the importance of variables in neural networks, and they generally fall into two categories \\cite{tetko_neural_1996, steppe_feature_1997}.\n\nOne category of methods estimate the importance of $x_j$, denoted by $S_j$, based on the magnitudes of the connection weights in the network \\cite{sen_predicting_1995, yacoub_hvs:_1997, garson_interpreting_1991, nath_determining_1997, gevrey_review_2003}. A simple example is the sum of absolute values of input weights \\cite{sen_predicting_1995}, but larger values of weights in the input layer do not mean greater importance if connections in hidden layers have small weights, and a better alternative is to replace the input weights with the products of the weights on each path from this input to the output \\cite{yacoub_hvs:_1997}. These measures were developed for networks with only one hidden layer, and they are unlikely to work well for deeper networks as the outgoing weights of a neuron does not reflect its importance once the neuron is inactive (e.g., when the input of a sigmoid neuron is far from zero or the input of a ReLU neuron is negative).\n\nThe other category of methods estimate $S_j$ by the sum of influences of the input weights on the loss function, i.e. $S_j=\\sum_{k \\in \\Omega_j} \\delta L^*_k$, where $\\Omega_j$ is the set of outgoing weights from the $j^\\mathrm{th}$ input neuron, and $\\delta L^*_k$ is the increment of the loss function caused by the removal of weight $w_k$ \\cite{tetko_neural_1996}. $\\delta L^*_k$ can be approximated by a Taylor series of the loss function using first-order terms \\cite{mozer_skeletonization:_1989, karnin_simple_1990} or second-order terms \\cite{lecun_optimal_1990, cibas_variable_1994, hassibi_second_1993}. However, it is unclear why $S_j$ equals the (unweighted) sum of $\\delta L^*_k$'s.\n\nApart from these two major categories of measures, it was also proposed to use $S_j=\\frac{\\partial f}{\\partial x_j}$, i.e. $S_j=\\frac{\\partial y}{\\partial x_j}$, when the output $y$ is one-dimensional \\cite{dimopoulos_use_1995,dimopoulos_neural_1999}. But it is unclear how $S_j$ should be defined when there are multiple output units. Let $y_1,\\ldots,y_K$ be the output values of $K$ output units, and one definition of $S_j$ was given by $S_j=\\sum_{k=1}^{K} |\\frac{\\partial y_k}{\\partial x_j}|$ \\cite{ruck_feature_1990}. However, using this summation seems problematic in some cases, especially when $y_1,\\ldots,y_K$ are the outputs of softmax functions.\n\n\\subsubsection*{Our new measure}\nWe propose a simple and direct measure of the importance of variable $j$ based on $\\frac{\\partial L}{\\partial x_j}$, which describes how the loss changes with $x_j$. There are a few advantages of using $\\frac{\\partial L}{\\partial x_j}$. First, regardless of the structure of the network and whether the output(s) is\/are continuous or categorical, $L$ is always well defined since it is the target for the optimization\/training of the network. Thus the proposed measure is applicable to a wide variety of networks. 
Second, no matter how many output units there are, $L$ is always a scalar and hence $\\frac{\\partial L}{\\partial x_j}$ is always a scalar. There is no difficulty in combining effects from multiple output units. Third, $\\frac{\\partial L}{\\partial x_j}$ is easily computable with the backpropagation method, and popular frameworks\/libraries for DNN computations (e.g., TensorFlow, PyTorch and Theano) all use differentiators that efficiently compute partial derivatives (gradients) of arbitrary forms.\n\nNote that $\\frac{\\partial L}{\\partial x_j}$ is a function of the tuple ($\\bm{x}$,$\\bm{y}$), and hence it is natural to estimate it by its mean over all observations in the training set. To avoid cancellation of positive and negative values, we measure the importance of $x_j$ by the mean of absolute values\n\\begin{equation}\n\tS_j=\\frac{1}{n} \\sum_{i=1}^n \\Big|\\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))\\Big|,\n\t\\label{eq:Sj_L1}\n\\end{equation}\nor the mean of squares\n\\begin{equation}\n\tS_j=\\frac{1}{n} \\sum_{i=1}^n \\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))^2,\n\t\\label{eq:Sj_L2}\n\\end{equation}\nwhere $\\frac{\\partial L}{\\partial x_j}(\\bm{y}^{(i)},f(\\bm{x}^{(i)}))$ is the value of $\\frac{\\partial L}{\\partial x_j}$ at the $i^\\mathrm{th}$ training sample.\n\nThe importance scores given by equation \\ref{eq:Sj_L1} and equation \\ref{eq:Sj_L2} implicitly assume that all the input variables have similar ranges, which is typically the case for DNNs, since it is common practice to standardize\/scale the variables before supplying them to the network for the sake of faster and more stable training of the network \\cite{bishop1995neural, lecun2012efficient}. If this is not the case, we suggest that the score in equation \\ref{eq:Sj_L1} be multiplied by the (sample) standard deviation of $x_j$ and the score in equation \\ref{eq:Sj_L2} be multiplied by the (sample) variance of $x_j$.\n\nNote that in the case of multiple linear regression, $L = \\frac{1}{2} (y-\\hat{y})^2 = \\frac{1}{2} (y-\\sum_j \\beta_j x_j)^2$, where $y$ is a scalar response and $\\beta_j$ is the $j^\\mathrm{th}$ regression coefficient, so that $\\frac{\\partial L}{\\partial x_j}=-(y-\\hat{y})\\beta_j$. Thus, $S_j$ is defined as $|\\beta_j| \\cdot \\frac{1}{n} \\sum_{i=1}^n |e_i|$ or $\\beta_j^2 \\cdot \\frac{1}{n} \\sum_{i=1}^n e_i^2 $ by equations \\ref{eq:Sj_L1} and \\ref{eq:Sj_L2} respectively, where $e_i=y^{(i)}-\\hat{y}^{(i)}$. Note that $S_j$ is proportional to $|\\beta_j|$ or $\\beta_j^2$, as $\\frac{1}{n} \\sum_{i=1}^n |e_i|$ and $\\frac{1}{n} \\sum_{i=1}^n e_i^2$ are constants. Therefore, both of them are reasonable measures of the contribution of the $j^\\mathrm{th}$ variable, and they are actually equivalent in this case. The meaning of $S_j$ in some other special cases, such as linear regression with multiple outputs and logistic regression with one or multiple outputs, is elaborated in the Supplementary Materials.\n\nAll results in the main text were obtained using equation \\ref{eq:Sj_L2}. Results obtained using equation \\ref{eq:Sj_L1} (given in the Supplementary Materials) are not significantly different.\n\n\\subsection*{Elimination procedure with FDR control}\nIn this section, we first introduce how we estimate FDR and then describe how we use this estimate to determine the number of variables to eliminate at each step.\n\n\\subsubsection*{Introduction of surrogate variables}\nThe key to estimating FDR \\cite{storey2003statistical} is to estimate\/generate the null distribution of the test statistic.
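\n\nBefore turning to the surrogate construction, it may help to make the computation of the scores in equations \\ref{eq:Sj_L1} and \\ref{eq:Sj_L2} concrete. The minimal sketch below uses automatic differentiation; it is written with PyTorch for brevity (the implementation used in this paper is in TensorFlow 1.8), the data are random placeholders, and the network simply mirrors the two hidden layers of 40 and 20 ReLU units described earlier.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\ndef importance_scores(model, loss_fn, X, y):\n    # Derivatives of the per-sample losses with respect to the inputs,\n    # averaged over the training samples (mean absolute and mean squared).\n    X = X.clone().requires_grad_(True)\n    loss = loss_fn(model(X), y)   # reduction='sum', so X.grad[i, j] = dL_i\/dx_j\n    loss.backward()\n    g = X.grad\n    return g.abs().mean(dim=0), g.pow(2).mean(dim=0)\n\np = 784\nmodel = nn.Sequential(nn.Linear(p, 40), nn.ReLU(),\n                      nn.Linear(40, 20), nn.ReLU(),\n                      nn.Linear(20, 2))\nloss_fn = nn.CrossEntropyLoss(reduction='sum')\nX, y = torch.rand(500, p), torch.randint(0, 2, (500,))\ns_abs, s_sq = importance_scores(model, loss_fn, X, y)\n\\end{verbatim}\n\n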
In our case, it is to obtain the distribution of the importance score $S_j$ defined by equation \\ref{eq:Sj_L2} or equation \\ref{eq:Sj_L1} for variables that are not significant. Since the network is a complicated and highly nonlinear model, a theoretical distribution that applies to various network structure and various types of data may not exist. This null distribution needs to be obtained for the network and the data in hand.\n\nHowever, it is usually unknown which variables are truly null. If we construct the null distribution by permuting the output values of the data, it seems inevitable to train multiple networks from scratch in parallel. For this reason, we propose to introduce\/add a number of variables that are known\/generated to be null. We call these variables ``surrogate null variables'' (or ``surrogate variables'' for short). These variables will be concatenated with the original variables to form a larger data matrix.\n\nTo be precise, suppose there are $p$ original variables and $n$ training samples (including validation samples). Then after we add $q$ surrogate variables, the new data matrix will be of size $n\\times(p+q)$, which binds the original $n\\times p$ data matrix $\\bm{X}$ with a $n\\times q$ data matrix for surrogate variables $\\bm{X}_s$. It is assumed that the original variables are distributed in similar ranges or have been standardized, which is a suggested pre-processing step as it benefits the training of the network, and the elements in $\\bm{X}_s$ are sampled with replacement (or without replacement when $q \\le p$) from the elements in $\\bm{X}$. As a result, the $q$ surrogate variables are null, and their importance scores give the null distribution.\n\nWe recommend $q$ to be on the same scale as $p$ (see Conclusions and Discussion for a more detailed discussion about the choice of $q$). For convenience, $q$ takes the same value as $p$ in all experiments in this paper. In this case, the elements in $\\bm{X}_s$ can be generated by permuting the elements in $\\bm{X}$.\n\nThe selection procedure of SurvNet starts with using all $p+q$ variables as inputs. Then at each step, it eliminates a number of least important variables, including both original variables and surrogate variables. The remaining variables are used to continue training the network, and the elimination stops once the FDR falls below the cutoff.\n\n\\subsubsection*{FDR estimation}\nThen we consider how to estimate FDR at any given time of the selection process. Suppose $r$ variables are retained in the network, among which there are $r_0$ surrogate variables, then $r_0\/q$ proportion of surrogate (null) variables have not been eliminated yet. Accordingly, one would expect that roughly the same proportion of null original variables still exist at this time, that is, approximately $\\frac{r_0}{q} \\cdot p_0$ variables among the remaining original variables are falsely called significant, where $p_0$ is the number of null variables in the original dataset. Thus, an estimate of the FDR of the $r-r_0$ original variables is given by\n\\begin{equation}\n\t\\tilde{\\eta}=\\frac{\\frac{r_0}{q} \\cdot p_0}{r-r_0}\n\\end{equation}\nIn practice, however, $p_0$ is unknown, and a common strategy is to replace it with its upper bound $p$ \\cite{storey2003statistical}. 
Hence we have the following estimated FDR,\n\\begin{equation}\n\t\\hat{\\eta}=\\frac{\\frac{r_0}{q} \\cdot p}{r-r_0}=\\frac{r_0}{r-r_0} \\cdot \\frac{p}{q}\n\\label{eq:est_fdr}\n\\end{equation}\nApparently, when $\\hat{\\eta}$ is controlled to be no greater than a pre-specified threshold $\\eta^*$, $\\tilde{\\eta}$ is guaranteed to be no greater than $\\eta^*$ as well. When $q=p$, $\\hat{\\eta}$ can be simplified as $\\frac{r_0}{r-r_0}$.\n\n\n\\subsubsection*{Determination of the number of variables to eliminate}\nIf the estimated FDR $\\hat{\\eta}$ (given by equation \\ref{eq:est_fdr}) is less than or equal to the FDR cutoff $\\eta^*$, the variable selection procedure stops. Otherwise, the procedure proceeds, and we want to decide how many variables to eliminate among the $r$ variables that are still in the model. Let this number be $m$, and the determination of $m$ is based on the following considerations. On one hand, we expect that the elimination process is time-saving and reaches the FDR threshold quickly; on the other hand, we want to avoid eliminating too many variables at each step, in which case the FDR may fall much lower than the threshold. We have\n\\begin{thm}\n\tIf $m$ variables are further eliminated from the current model, the smallest possible estimated FDR after this step of elimination is\n\t\\begin{equation}\n\t\\min \\hat{\\eta}^{\\rm new}= (1-\\frac{m}{r_0})\\cdot \\hat{\\eta},\n\t\\label{eq:rm_new}\n\t\\end{equation}\n\twhere $r_0$ is the number of surrogate variables that are in the model before this step of elimination.\n\\end{thm}\n\\begin{proof}\n\tSuppose there are $m_0$ surrogate variables among the $m$ variables to be eliminated, $0 \\le m_0 \\le m$, then according to equation \\ref{eq:est_fdr}, $\\hat{\\eta}$ will be updated to\n\t\\begin{equation}\n\t\\hat{\\eta}^{\\rm new}=\\frac{r_0-m_0}{r-r_0-(m-m_0)} \\cdot \\frac{p}{q}.\n\t\\end{equation}\n\tNote that $\\hat{\\eta}^{\\rm new}$ is monotonically decreasing with respect to $m_0$ for any fixed $m$, we have\n\t\\begin{equation}\n\t\\min \\hat{\\eta}^{\\rm new}=\\hat{\\eta}^{\\rm new}|_{m_0=m}=\\frac{r_0-m}{r-r_0} \\cdot \\frac{p}{q}.\n\t\\label{eq:rm_new_int}\n\t\\end{equation}\n\tEquation \\ref{eq:est_fdr} indicates that $\\frac{1}{r - r_0} \\cdot \\frac{p}{q}=\\frac{\\hat{\\eta}}{r_0}$. Plugging it into \\ref{eq:rm_new_int}, we have\n\t\\[\n\t\\min \\hat{\\eta}^{\\rm new}=(r_0-m) \\cdot \\frac{\\hat{\\eta}}{r_0} = (1-\\frac{m}{r_0})\\cdot \\hat{\\eta}.\n\t\\]\n\\end{proof}\n\nIt follows from equation \\ref{eq:rm_new} that $\\min \\hat{\\eta}^{\\rm new} = \\eta^*$ when $m=(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$. Also, note that $\\min \\hat{\\eta}^{\\rm new}$ is a monotonically decreasing function of $m$. Therefore, when $m < (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, $\\min \\hat{\\eta}^{\\rm new} > \\eta^*$ and thus $\\hat{\\eta}^{\\rm new} > \\eta^*$. That is,\n\\begin{cor}\n\tWhen $m < (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, the estimated FDR after this step of elimination $\\hat{\\eta}^{\\rm new}$ is guaranteed to be still greater than the FDR cutoff $\\eta ^*$.\n\\end{cor}\nOn the other hand, when $m \\ge (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, $\\min \\hat{\\eta}^{\\rm new} \\le \\eta^*$. 
That is,\n\\begin{cor}\n\tWhen $m \\ge (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$, the estimated FDR after this step of elimination, $\\hat{\\eta}^{\\rm new}$, may reach the FDR cutoff $\\eta ^*$.\n\\end{cor}\n\nCorollary 1 says that $m$ values less than $(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$ are ``safe'' but the elimination will not stop after this step. Corollary 2 says that $m$ values much larger than $(1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0$ may not be ``safe'' anymore. Taking both into consideration, we choose the step size to be\n\\begin{equation}\n m=\\lceil (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0 \\rceil,\n \\label{eq:m}\n\\end{equation}\nwhere $\\lceil \\cdot \\rceil$ denotes ``ceiling'', i.e. the smallest integer that is no less than its argument. Notice that when $\\hat{\\eta} > \\eta^*$, which is the premise for continuing to eliminate variables, $1-\\frac{\\eta^*}{\\hat{\\eta}}>0$, and $r_0>0$ as well since $\\hat{\\eta}$ is positive. Thus $m$ is ensured to be no less than 1 at each step of variable elimination.\n\nThis form of $m$ seems to be quite reasonable for the following reasons. First, if a great number of surrogate variables still remain in the network, clearly more of them should be taken out. As $r_0$ decreases, $m$ becomes smaller, and this makes sense since one should be more careful in further elimination. Second, when $\\hat{\\eta}$ is much higher than $\\eta^*$, one will naturally expect a larger $m$ so that the updated estimated FDR will approach this cutoff.\n\nUsing the $m$ determined by equation \\ref{eq:m}, there is a chance that the estimated FDR will reach the cutoff in only one step. Often such a fast pace is not preferred, as removing too many inputs at a time may make the warm start of the training no longer warm. Hence we may introduce an ``elimination rate'' $\\varepsilon$, which is a constant between 0 and 1, and take\n\\begin{equation}\n m=\\lceil \\varepsilon \\cdot (1-\\frac{\\eta^*}{\\hat{\\eta}}) \\cdot r_0 \\rceil.\n\\end{equation}\n\n\\section*{Author Contributions}\nJ.L. conceived the study, J.L. and Z.S. proposed the methods, Z.S. implemented the methods and conducted the data analysis, Z.S. drafted the manuscript, and J.L. substantively revised it.\n\n\\section*{Competing Interests statement}\nThe authors declare no competing interests.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the fundamental results in commutative algebra is the irreducible decomposition theorem \\cite[Satz II and Satz IV]{N21} proved by Emmy Noether in 1921. In that paper she showed that any ideal $I$ of a Noetherian ring $R$ can be expressed as a finite intersection of irreducible ideals, and that the number of irreducible ideals in such an irredundant irreducible decomposition is independent of the choice of the decomposition. This number is then called the index of reducibility of $I$ and denoted by $\\mathrm{ir}_R(I)$. Although irreducible ideals are among the basic objects of commutative algebra, there are not so many papers on the study of irreducible ideals and the index of reducibility. Perhaps the first important paper on irreducible ideals after Noether's work is that of W. Gr\\\"{o}bner \\cite{G35} (1935).
Since then there have been interesting works on the index of reducibility of parameter ideals of local rings by D.G. Northcott \\cite{No57} (1957), S. Endo and M. Narita \\cite{EN64} (1964) and S. Goto and N. Suzuki \\cite{GS84} (1984). In particular, W. Heinzer, L.J. Ratliff and K. Shah propounded in a series of papers \\cite{HRS94}, \\cite{HRS95-1}, \\cite{HRS95-2}, \\cite{HRS95-3} a theory of maximal embedded components which is useful for the study of irreducible ideals. It is clear that the concepts of irreducible ideals, the index of reducibility and maximal embedded components can be extended to finitely generated modules. The purpose of this paper is to investigate the index of reducibility of submodules of a finitely generated $R$-module $M$ in connection with its maximal embedded components, as well as the behaviour of the function ir$_M(I^nM)$, where $I$ is an ideal of $R$, and to present applications of the index of reducibility to the study of the structure of the module $M$. The paper is divided into 5 sections. Let $M$ be a finitely generated module over a Noetherian ring and $N$ a submodule of $M$. We present in the next section a formula to compute the index of reducibility $\\mathrm{ir}_M(N)$ by using the socle dimension of the module $(M\/N)_{\\frak p}$ for all $\\frak p \\in \\mathrm{Ass}_R(M\/N)$ (see Lemma 2.3). This formula is a generalization of a well-known result which says that $\\mathrm{ir}_M(N) = \\dim_{R\/\\frak m} \\mathrm{Soc}(M\/N)$ provided $(R, \\frak m)$ is a local ring and $\\lambda_R(M\/N) < \\infty$. Section 3 is devoted to answering the following question: When is the index of reducibility of a submodule $N$ equal to the sum of the indices of reducibility of its primary components in a given irredundant primary decomposition of $N$? It turns out here that the notion of maximal embedded components of $N$ introduced by Heinzer, Ratliff and Shah is the key to answering this question (see Theorem \\ref{T3.2}). In Section 4, we consider the index of reducibility $\\mathrm{ir}_M(I^nM)$ of powers of an ideal $I$ as a function of $n$ and show that this function is in fact a polynomial for sufficiently large $n$. Moreover, we can prove that $\\mathrm{bight}_M(I)-1$ is a lower bound and $\\ell_M(I)-1$ is an upper bound for the degree of this polynomial (see Theorem \\ref{T4.1}), where $\\mathrm{bight}_M(I)$ is the big height and $\\ell_M(I)$ is the analytic spread of $M$ with respect to the ideal $I$. However, the degree of this polynomial is still mysterious to us. We can only give examples to show that these bounds are optimal. In the last section, we work out some applications of the index of reducibility. A classical result of Northcott \\cite{No57} says that the index of reducibility of a parameter ideal in a Cohen-Macaulay local ring depends only on the ring and not on the choice of the parameter ideal. We will generalize Northcott's result in this section and obtain a characterization of the Cohen-Macaulayness of a Noetherian module in terms of the index of reducibility of parameter ideals (see Theorem \\ref{T4.7}).\n\n\\section{Index of reducibility of submodules}\nThroughout this paper $R$ is a Noetherian ring and $M$ is a finitely generated $R$-module. For an $R$-module $L$, $\\lambda_R(L)$ denotes the length of $L$.\n\n\\begin{definition}\\rm A submodule $N$ of $M$ is called an {\\it irreducible submodule} if $N$ cannot be written as an intersection of two properly larger submodules of $M$.
The number of irreducible components of an irredundant irreducible decomposition of $N$, which is independent of the choice of the decomposition by Noether \\cite{N21}, is called the {\\it index of reducibility} of $N$ and denoted by $\\mathrm{ir}_M(N)$.\n\\end{definition}\n\n\\begin{remark}\\rm We denoted by $\\mathrm{Soc}(M)$ the sum of all simple submodules of $M$. $\\mathrm{Soc}(M)$ is called the socle of $M$. If $R$ is a local ring with the unique maximal ideal $\\frak m$ and $\\frak k = R\/\\frak m$ its residue field, then it is well-known that $\\mathrm{Soc}(M) = 0:_M\\frak m$ is a $\\frak k$-vector space of finite dimension. Let $N$ be a submodule of $M$ with $\\lambda_R(M\/N) < \\infty$. Then it is easy to check that $\\mathrm{ir}_M(N) = \\lambda_R((N:\\frak m)\/N) = \\dim_{\\frak k} \\mathrm{Soc}(M\/N).$\n\\end{remark}\n\nThe following lemma presents a formula for computing the index of reducibility $\\mathrm{ir}_M(N)$ without the requirement that $R$ is local and $\\lambda_R(M\/N) < \\infty$. It should be mentioned here that the first conclusion of the lemma would be known to experts. But, we cannot find its proof anywhere. So for the completeness, we give a short proof for it. Moreover, from this proof we obtain immediately a second conclusion which is useful for proofs of further results in this paper. For a prime ideal $\\frak p$, we use $k(\\frak p)$ to denote the residue field $ R_{\\frak p}\/\\frak pR_{\\frak p}$ of the local ring $R_{\\frak p}$.\n\n\\begin{lemma}\\label{L2.3} Let $N$ be a submodule of $M$. Then\n $$\\mathrm{ir}_M(N) = \\sum_{\\frak p \\in \\mathrm{Ass}_R(M\/N)} \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p} .$$\nMoreover, for any $\\frak p \\in \\mathrm{Ass}_R(M\/N)$, there is a $\\frak p$-primary submodule $N(\\frak p)$ of $M$ with $\\mathrm{ir}_M(N(\\frak p)) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p} $ such that\n$$N = \\bigcap_{\\frak p \\in \\mathrm{Ass}_R(M\/N)}N(\\frak p)$$\nis an irredundant primary decomposition of $N$.\n\\end{lemma}\n\\begin{proof}\n Passing to the quotient $M\/N$ we may assume without any loss of generality that $N=0$. Let $\\mathrm{Ass}_R(M) = \\{ \\frak p_1,..., \\frak p_n\\}$. We set $t_i = \\dim_{k(\\frak p_i)}\\mathrm{Soc}(M_{\\frak p_i})$ and $t = t_1 + \\cdots + t_n$. Let $\\mathcal{F} = \\{\\frak p_{11},..., \\frak p_{1t_1}, \\frak p_{21},..., \\frak p_{2t_2}, ..., \\frak p_{n1},..., \\frak p_{nt_n}\\}$ be a family of prime ideals of $R$ such that $\\frak p_{i1} = \\cdots = \\frak p_{it_i} = \\frak p_i$ for all $i = 1,..., n$. Denote $E(M)$ the injective envelop of $M$. Then we can write\n $$E(M) = \\bigoplus_{i=1}^n E(R\/\\frak p_i)^{t_i} = \\bigoplus_{\\frak p_{ij} \\in \\mathcal{F}}E(R\/\\frak p_{ij}).$$\n Let $$ \\pi_i: \\oplus_{i=1}^n E(R\/\\frak p_i)^{t_i} \\to E(R\/\\frak p_i)^{t_i} \\ \\ \\mathrm{and} \\ \\ \\pi_{ij}: \\oplus_{\\frak p_{ij} \\in \\mathcal{F}}E(R\/\\frak p_{ij}) \\to E(R\/\\frak p_{ij})$$ be the canonical projections for all $i = 1,..., n$ and $j = 1,..., t_i$, and set $N(\\frak p_i) = M \\cap \\ker \\pi_i$, $N_{ij} = M \\cap \\ker \\pi_{ij}$. Since $E(R\/\\frak p_{ij})$ are indecomposible, $N_{ij}$ are irreducible submodules of $M$. Then it is easy to check that $N(\\frak p_i)$ is a $\\frak p_i$-primary submodule of $M$ having an irreducible decomposition $N(\\frak p_i) = N_{i1} \\cap \\cdots \\cap N_{it_i}$ for all $i=1,\\ldots , n$. 
\n Moreover, because of the minimality of $E(M)$ among injective modules containing $M$, the finite intersection $$0 = N_{11} \\cap \\cdots \\cap N_{1t_1} \\cap \\cdots \\cap N_{n1} \\cap \\cdots \\cap N_{nt_n}$$ \n is an irredundant irreducible decomposition of $0$. Therefore $0 = N(\\frak p_1) \\cap \\cdots \\cap N(\\frak p_n)$ is an irredundant primary decomposition of $0$ with\n $\\mathrm{ir}_M(N(\\frak p_i)) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M\/N)_{\\frak p_i} $ and $\\mathrm{ir}_M(0) = \\sum_{\\frak p \\in \\mathrm{Ass}(M)} \\dim_{k(\\frak p)} \\mathrm{Soc}(M)_{\\frak p}$\n as required.\n \\end{proof}\n\n\\section{Index of reducibility of maximal embedded components}\n\nLet $N$ be a submodule of $M$ and $\\frak p \\in \\mathrm{Ass}_R(M\/N)$. We use $\\bigwedge_{\\frak p}(N)$ to denote the set of all $\\frak p$-primary submodules of $M$ which appear in an irredundant primary decomposition of $N$. We say that a $\\frak p$-primary submodule $Q$ of $M$ is a $\\frak p$-primary component of $N$ if $Q \\in \\bigwedge_{\\frak p}(N)$, and $Q$ is said to be a {\\it maximal embedded component} (or more precisely, $\\frak p$-maximal embedded component) of $N$ if $Q$ is a maximal element in the set $\\bigwedge_{\\frak p}(N)$. It should be mentioned that the notion of maximal embedded components was first introduced for commutative rings by Heinzer, Ratliff and Shah. They proved in the papers \\cite{HRS94}, \\cite{HRS95-1}, \\cite{HRS95-2}, \\cite{HRS95-3} many interesting properties of maximal embedded components as well as they showed that this notion is an important tool for studying irreducible ideals.\\\\\n\nWe recall now a result of Y. Yao \\cite{Y02} which is often used in the proof of the next theorem.\n\\begin{theorem}[Yao \\cite{Y02}, Theorem 1.1] \\label{T3.1} Let $N$ be a submodule of $M$, $\\mathrm{Ass}_R(M\/N) = \\{\\frak p_1,..., \\frak p_n\\}$ and $Q_i \\in \\bigwedge_{\\frak p_i}(N)$, $i = 1,..., n$. Then $N = Q_1 \\cap \\cdots \\cap Q_n$ is an irredundant primary decomposition of $N$.\n\\end{theorem}\nThe following theorem is the main result of this section.\n\\begin{theorem}\\label{T3.2} Let $N$ be a submodule of $M$ and $\\mathrm{Ass}_R(M\/N) = \\{\\frak p_1,..., \\frak p_n\\}$. Let $N = Q_1 \\cap \\cdots \\cap Q_n$ be an irredundant primary decomposition of $N$, where $Q_i$ is $\\frak p_i$-primary for all $i= 1, \\ldots , n$. Then $\\mathrm{ir}_M(N) = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n)$ if and only if $Q_i$ is a $\\frak p_i$-maximal embedded component of $N$ for all embedded associated prime ideals $\\frak p_i$ of $N$.\n \\end{theorem}\n \\begin{proof} As in the proof of Lemma \\ref{L2.3}, we may assume that $N = 0$.\\\\\n {\\it Sufficient condition}: Let $0 = Q_1 \\cap \\cdots \\cap Q_n$ be an irredundant primary decomposition of the zero submodule $0$, where $Q_i$ is maximal in $\\bigwedge_{\\frak p_i}(0)$, $i = 1,..., n$. Setting $\\mathrm{ir}_M(Q_i) = t_i$, and let $Q_i = Q_{i1} \\cap \\cdots \\cap Q_{it_i}$ be an irredundant irreducible decomposition of $Q_i$. Suppose that\n $$t_1 + \\cdots + t_n = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n) > \\mathrm{ir}_M(0).$$\n Then there exist an $i \\in \\{1,..., n \\}$ and a $j \\in \\{1,..., t_i\\}$ such that\n $$Q_1 \\cap \\cdots \\cap Q_{i-1} \\cap Q_i' \\cap Q_{i+1} \\cap \\cdots \\cap Q_n \\subseteq Q_{ij},$$\n where $Q_i' = Q_{i1} \\cap \\cdots \\cap Q_{i(j-1)} \\cap Q_{i(j+1)} \\cap \\cdots \\cap Q_{it_i} \\supsetneqq Q_i$. 
Therefore\n $$Q_i' \\bigcap (\\cap_{k \\neq i} Q_k) = Q_i \\bigcap (\\cap_{k \\neq i} Q_k) = 0$$\n is also an irredundant primary decomposition of $0$. Hence $Q_i' \\in \\bigwedge_{\\frak p_i}(0)$ which contradicts the maximality of $Q_i$ in $\\bigwedge_{\\frak p_i}(0)$. Thus $\\mathrm{ir}_R(0) = \\mathrm{ir}_R(Q_1) + \\cdots + \\mathrm{ir}_R(Q_n)$ as required.\\\\\n {\\it Necessary condition}: Assume that $0 = Q_1 \\cap \\cdots \\cap Q_n$ is an irredundant primary decomposition of $0$ such that $\\mathrm{ir}_M(0) = \\mathrm{ir}_M(Q_1) + \\cdots + \\mathrm{ir}_M(Q_n)$. We have to prove that $Q_i$ are maximal in $\\bigwedge_{\\frak p_i}(0)$ for all $i = 1,..., n$. Indeed, let $N_1=N(\\frak p_1),..., N_n=N(\\frak p_n)$ be primary submodules of $M$ as in Lemma \\ref{L2.3}, that is $N_i \\in \\bigwedge_{\\frak p_i}(0)$, $0 = N_1 \\cap \\cdots \\cap N_n$ and $\\mathrm{ir}_M (0) = \\sum_{i=1}^n \\mathrm{ir}_M(N_i) = \\sum_{i=1}^n \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$. Then by Theorem \\ref{T3.1} we see for any $0 \\leq i \\leq n$ that\n $$0 = N_1 \\cap \\cdots \\cap N_{i-1} \\cap Q_i \\cap N_{i+1} \\cap \\cdots \\cap N_n = N_1 \\cap \\cdots \\cap N_n$$\nare two irredundant primary decompositions of $0$. Therefore\n$$\\mathrm{ir}_M(Q_i) + \\sum_{j \\neq i} \\mathrm{ir}_M(N_j) \\geq \\mathrm{ir}_M(0) = \\sum_{j =1}^n \\mathrm{ir}_M(N_j),$$\nand so $\\mathrm{ir}_M(Q_i) \\geq \\mathrm{ir}_M(N_i) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$ by Lemma \\ref{L2.3}.\\\\\nSimilarly, it follows from the two irredundant primary decompositions\n$$0 = Q_1 \\cap \\cdots \\cap Q_{i-1} \\cap N_i \\cap Q_{i+1} \\cap \\cdots \\cap Q_n = Q_1 \\cap \\cdots \\cap Q_n$$\nand the hypothesis that $\\mathrm{ir}_M(N_i) \\geq \\mathrm{ir}_M(Q_i)$. Thus we get\n$$\\mathrm{ir}_M(Q_i) = \\mathrm{ir}_M(N_i) = \\dim_{k(\\frak p_i)} \\mathrm{Soc}(M_{\\frak p_i})$$\nfor all $i = 1, ..., n$. Now, let $Q_i'$ be a maximal element of $\\bigwedge_{\\frak p_i}(0)$ and $Q_i \\subseteq Q_i'$. It remains to prove that $Q_i = Q_i'$. By localization at $\\frak p_i$, we may assume that $R$ is a local ring with the unique maximal ideal $\\frak m = \\frak p_i$. Then, since $Q_i$ is an $\\frak m$-primary submodule and by the equality above we have\n$$\\lambda_R((Q_i:\\frak m)\/Q_i) = \\mathrm{ir}_M(Q_i) = \\dim_{\\frak k}\\mathrm{Soc}(M) = \\lambda_R(0:_M \\frak m) = \\lambda_R \\big( (Q_i + 0:_M \\frak m)\/Q_i\\big).$$\nIt follows that $Q_i: \\frak m = Q_i + 0:_M \\frak m$. If $Q_i \\subsetneqq Q_i'$, there is an element $x \\in Q_i' \\setminus Q_i$. Then we can find a positive integer $l$ such that $\\frak m^l x \\subseteq Q_i$ but $\\frak m^{l-1} x \\nsubseteq Q_i$. Choose $y \\in \\frak m^{l-1} x \\setminus Q_i$. We see that\n$$y \\in Q_i' \\cap (Q_i : \\frak m) = Q_i' \\cap (Q_i + 0:_M \\frak m) = Q_i + (Q_i' \\cap 0:_M \\frak m).$$\nSince $0:_M \\frak m \\subseteq \\cap_{j \\neq i} Q_j$ and $Q_i' \\cap (\\cap_{j \\neq i} Q_j) = 0$ by Theorem \\ref{T3.1}, $Q_i' \\cap (0:_M \\frak m) = 0$. Therefore $y \\in Q_i$ which is a contradiction with the choice of $y$. Thus $Q_i = Q_i'$ and the proof is complete.\n \\end{proof}\n\nThe following characterization of maximal embedded components of $N$ in terms of the index of reducibility follows immediately from the proof of Theorem \\ref{T3.2}.\n\\begin{corollary}\\label{T3.3}\nLet $N$ be a submodule of $M$ and $\\frak p$ an embedded associated prime ideal of $N$. 
Then an element $Q \\in \\bigwedge_{\\frak p}(N)$ is a maximal embedded component of $N$ if and only if $\\mathrm{ir}_M(Q) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$.\n\\end{corollary}\n\nAs consequences of Theorem \\ref{T3.2}, we can obtain again several results on maximal embedded components proved by Heinzer, Ratliff and Shah. The following corollary is one of that results stated for modules. For a submodule $L$ of $M$ and $\\frak p$ a prime ideal, we denote by $\\mathrm{IC}_{\\frak p}(L)$ the set of all irreducible $\\frak p$-primary submodules of $M$ that appear in an irredundant irreducible decomposition of $L$, and denote by $\\mathrm{ir}_{\\frak p}(L)$ the number of irreducible $\\frak p$-primary components in an irredundant irreducible decomposition of $L$ (this number is well defined by Noether \\cite[Satz VII]{N21}).\n\n\\begin{corollary}[see \\cite{HRS95-3}, Theorems 2.3 and 2.7] Let $N$ be a submodule of $M$ and $\\frak p$ an embedded associated prime ideal of $N$. Then\n\\begin{enumerate}[{(i)}]\\rm\n\\item {\\it $\\mathrm{ir}_{\\frak p}(N) = \\mathrm{ir}_{\\frak p}(Q) = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$ for any $\\frak p$-maximal embedded component $Q$ of $N$.}\n\\item {\\it $\\mathrm{IC}_{\\frak p}(N) = \\bigcup_Q \\mathrm{IC}_{\\frak p}(Q)$, where the submodule $Q$ in the union runs over all $\\frak p$-maximal embedded components of $N$.}\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n(i) follows immediately from the proof of Theorem \\ref{T3.2} and Corollary \\ref{T3.3}.\\\\\n(ii) Let $Q_1 \\in \\mathrm{IC}_{\\frak p}(N)$ and $t_1 = \\dim_{k(\\frak p)} \\mathrm{Soc}(M\/N)_{\\frak p}$. By the hypothesis and (i) there exists an irredundant irreducible decomposition $N= Q_{11}\\cap \\ldots \\cap Q_{1 t_1} \\cap Q_2 \\cap \\ldots \\cap Q_l$ such that $Q_{11} = Q_1 , \\ Q_{12}, \\ldots , Q_{1 t_1}$ are all $\\frak p$-primary submodules in this decomposition. Therefore $Q=Q_{11}\\cap \\ldots \\cap Q_{1 t_1}$ is a maximal embedded component of $N$ by Corollary \\ref{T3.3}, and so $Q_1\\in \\mathrm{IC}_{\\frak p}(Q)$. The converse inclusion can be easily proved by applying Theorems \\ref{T3.1} and \\ref{T3.2}.\n\\end{proof}\n\n\\section{Index of reducibility of powers of an ideal}\nLet $I$ be an ideal of $R$. It is well known by \\cite{B79} that the $\\mathrm{Ass}_R(M\/I^nM)$ is stable for sufficiently large $n$ ($n\\gg 0$ for short). We will denote this stable set by $\\text{A}_M(I)$. The big height, $\\mathrm{bight}_M(I)$, of $I$ on $M$ is defined by\n$$\\mathrm{bight}_M(I)=\\max\\{\\dim_{R_{\\frak p}} M_{\\frak p}\\mid \\text{ for all minimal prime ideals } {\\frak p}\\in \\mathrm{Ass}_R(M\/IM)\\}.$$\nLet $G(I)=\\bigoplus\\limits_{n\\geq 0} I^n\/I^{n+1}$ be the associated graded ring of $R$ with respect to $I$ and $G_M(I)=\\bigoplus\\limits_{n\\geq 0} I^nM\/I^{n+1}M$ the associated graded $G(I)$-module of $M$ with respect to $I$. 
If $R$ is a local ring with the unique maximal ideal $\\frak m$, then the analytic spread $\\ell_M(I)$ of $I$ on $M$ is defined by\n$$\\ell_M(I)=\\dim_{G(I)}(G_M(I)\/{\\frak m} G_M(I)).$$\nIf $R$ is not local, the analytic spread $\\ell_M(I)$ is also defined by\n$$\n\\begin{aligned}\n\\ell_M(I)=\\max\\{ \\ell_{M_{\\frak m}}(IR_{\\frak m})\\mid {\\frak m} &\\text{ is a maximal ideal and }\n\\\\&\n\\text{ there is a prime ideal }{\\frak p} \\in A_M(I) \\text{ such that } {\\frak p}\\subseteq {\\frak m}\\}.\n\\end{aligned}$$\nWe use $\\ell(I)$ to denote the analytic spread of the ideal $I$ on $R$.\nThe following theorem is the main result of this section.\n\n\\begin{theorem} \\label{T4.1} Let $I$ be an ideal of $R$. Then there exists a polynomial $\\mathrm{Ir}_{M,I}(n)$ with rational coefficients such that $\\mathrm{Ir}_{M,I}(n)=\\mathrm{ir}_M(I^nM)$ for sufficiently large $n$. Moreover, we have\n$$\\mathrm{bight}_M(I)-1\\le \\deg(\\mathrm{Ir}_{M,I}(n))\\le \\ell_M(I)-1.$$\n\\end{theorem}\n\nTo prove Theorem \\ref{T4.1}, we need the following lemma.\n\n\\begin{lemma}\\label{L4.2} Suppose that $R$ is a local ring with the unique maximal ideal $\\frak m$ and $I$ an ideal of $R$. Then\n\\begin{enumerate}[{(i)}]\\rm\n \\item {\\it $\\dim_{\\frak k}\\mathrm{Soc}(M\/I^nM)=\\lambda_R(I^nM:{\\frak m}\/I^nM)$ is a polynomial of degree $\\le \\ell_M(I)-1$ for $n\\gg 0$.}\n \\item {\\it Assume that $I$ is an ${\\frak m}$-primary ideal. Then $\\mathrm{ir}_M(I^nM)=\\lambda_R(I^nM:{\\frak m}\/I^nM)$ is a polynomial of degree $\\dim_RM-1$ for $n\\gg 0$.}\n \\end{enumerate}\n \\end{lemma}\n\n \\begin{proof}\n\n(i) Consider the homogeneous submodule $0:_{G_M(I)}{\\frak m}G(I)$. Then \n$$\\lambda_R(0:_{G_M(I)}{\\frak m}G(I))_n=\\lambda_R(((I^{n+1}M:{\\frak m})\\cap I^nM)\/I^{n+1}M)$$ is a polynomial for $n\\gg 0$. Using a result proved by P. Schenzel \\cite[Proposition 2.1]{S98}, proved for rings but easily extendible to modules, we find a positive integer $l$ such that for all $n\\ge l$, $0:_M{\\frak m}\\cap I^nM=0$ and\n$$I^{n+1}M:{\\frak m}= I^{n+1-l}(I^l M:{\\frak m})+ 0:_M{\\frak m}.$$\nTherefore \n$$\n\\begin{aligned}\n(I^{n+1}M:{\\frak m})\\cap I^nM&=I^{n+1-l}(I^l M:{\\frak m})+0:_M{\\frak m}\\cap I^nM\\\\\n&=I^{n+1-l}(I^l M:{\\frak m}).\n\\end{aligned}\n$$\nHence, $\\lambda_R(I^{n+1-l}(I^l M:{\\frak m})\/I^{n+1}M)=\\lambda_R(((I^{n+1}M:{\\frak m})\\cap I^nM)\/I^{n+1}M)$ is a polynomial for $n\\gg 0$. It follows that\n$$\\dim_{\\frak k}\\mathrm{Soc}(M\/I^nM) =\\lambda_R((I^n M:{\\frak m})\/I^{n}M)=\\lambda_R(I^{n-l}(I^l M:{\\frak m})\/I^{n}M)+\\lambda_R(0:_M{\\frak m})$$\nis a polynomial for $n\\gg 0$, and the degree of this polynomial is just equal to\n$$\\dim_{G(I)}(0:_{G_M(I)}{\\frak m}G(I))-1\\le \\dim_{G(I)}({G_M(I)}\/{\\frak m}G_M(I))-1=\\ell_M(I)-1.$$\n(ii) The second statement follows from the first one and the fact that\n$$\n\\begin{aligned}\n\\lambda_R(I^nM\/I^{n+1}M) &=\\lambda_R(\\mathrm{Hom}_R(R\/I,I^nM\/I^{n+1}M))\\\\\n &\\le \\lambda_R(R\/I)\\lambda_R(\\mathrm{Hom}_R(R\/{\\frak m},I^nM\/I^{n+1}M)) \\le \\lambda_R(R\/I)\\mathrm{ir}_M(I^{n+1}M).\n\\end{aligned}\n$$\n\\end{proof}\n\nWe are now able to prove Theorem \\ref{T4.1}.\n\n\\begin{proof}[Proof of Theorem \\ref{T4.1}] Let $\\text{A}_M(I)$ denote the stable set $\\mathrm{Ass}_R(M\/I^nM)$ for $n\\gg 0$. 
Then, by Lemma \\ref{L2.3} we get that\n$$\\mathrm{ir}_M(I^nM)=\\sum\\limits_{\\frak p\\in A_M(I)}\\dim_{k({\\frak p})}\\mathrm{Soc}(M\/I^nM)_{\\frak p}$$ for all $n\\gg 0$.\nFrom Lemma \\ref{L4.2}, (i), $\\dim_{k({\\frak p})}\\mathrm{Soc}(M\/I^nM)_{\\frak p}$ is a polynomial of degree $\\le \\ell_{M_{\\frak p}}(IR_{\\frak p})-1$ for $n\\gg0$. Therefore there exists a polynomial $\\mathrm{Ir}_{M,I}(n)$ of such that $\\mathrm{Ir}_{M,I}(n)=\\mathrm{ir}_M(I^nM)$ for $n\\gg 0$ and \n$$\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))\\le \\max \\{\\ell_{M_{\\frak p}}(IR_{\\frak p})-1\\mid \\frak p \\in A_M(I)\\}\\le \\ell_M(I)-1.$$\nLet $\\mathrm{Min}(M\/IM)=\\{{\\frak p_1},\\ldots,{\\frak p_m}\\}$ be the set of all minimal associated prime ideals of $IM$. It is clear that ${\\frak p_i}$ is also minimal in $A_M(I)$. Hence $\\Lambda_{\\frak p_i}(I^nM)$ has only one element, says $Q_{in}$. It is easy to check that \n$$\\mathrm{ir}_M(Q_{in})={\\rm ir}_{M_{\\frak p_i}}(Q_{in})_{\\frak p_i}=\\mathrm{ir}_{M_{\\frak p_i}}(I^nM_{\\frak p_i})$$ for $i=1, \\ldots , m$. This implies by Theorem \\ref{T3.2} that\n$\\mathrm{ir}_M(I^nM)\\ge \\sum\\limits_{i=1}^m \\mathrm{ir}_{M_{\\frak p_i}}(I^nM_{\\frak p_i})$. It follows from Lemma \\ref{L4.2}, (ii) for $n\\gg 0$ that\n$$\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))\\ge \\max\\{\\dim_{R_{\\frak p_i} }M_{\\frak p_i}-1\\mid i=1, \\ldots ,m\\}=\\mathrm{bight}_M(I)-1.$$\n\\end{proof}\n\n The following corollaries are immediate consequences of Theorem \\ref{T4.1}.\n An ideal $I$ of a local ring $R$ is called an equimultiple ideal if $\\ell(I)=\\mathrm{ht}(I)$, and therefore $\\mathrm{bight}_R(I)=\\mathrm{ht}(I)$.\n\n\\begin{corollary} \\label{C4.3} Let $I$ be an ideal of $R$ satisfying $\\ell_M(I)= \\mathrm{bight}_M(I)$. Then $$\\deg(\\mathrm{Ir}_{M,I}(n))=\\ell_M(I)-1.$$\n\\end{corollary}\n\n\\begin{corollary} \\label{C4.4} Let $I$ be an equimultiple ideal of a local ring $R$ with the unique maximal ideal $\\frak m$. Then $$\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=\\mathrm{ht}(I)-1$$. \n\\end{corollary}\n\nExcepting the corollaries above, the authors of the paper do not know how to compute exactly the degree of the polynomial of index of reducibility $\\mathrm{Ir}_{M,I}(n)$. Therefore it is maybe interesting to find a formula for this degree in terms of known invariants associated to $I$ and $M$. Below we give examples to show that although these bounds are sharp, neither $\\mathrm{bight}_M(I)-1$ nor $\\ell_M(I)-1$ equal to $\\mathrm{deg}(\\mathrm{Ir}_{M,I}(n))$ in general.\n\n\\begin{example} \\label{E4.5}{\\rm\n(1) Let $R=K[X,Y]$ be the polynomial ring of two variables $X$, $Y$ over a field $K$ and $I=(X^2,XY)=X(X,Y)$ an ideal of $R$. Then we have\n$$\\mathrm{bight}_R(I)=\\mathrm{ht}(I)=1, \\text{ }\\ell(I)=2,$$ and by Lemma \\ref{L2.3}\n$$\\mathrm{ir}_R(I^n)=\\mathrm{ir}_R(X^n(X,Y)^n)=\\mathrm{ir}_R((X,Y)^n) +1=n+1.$$\nTherefore \n$$\\mathrm{bight}_R(I)-1=0 <1= \\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=\\ell (I)-1.$$\n\n(2) Let $T=K[X_1,X_2,X_3,X_4,X_5,X_6]$ be the polynomial ring in six variables over a field $K$ and $R=T_{(X_1,\\ldots,X_6)}$ the localization of $T$ at the homogeneous maximal ideal $(X_1,\\ldots,X_6)$. 
Consider the monomial ideal\n$$\n\\begin{aligned}\nI&=(X_1X_2,X_2X_3,X_3X_4,X_4X_5,X_5X_6,X_6X_1)= (X_1,X_3,X_5)\\cap (X_2,X_4,X_6)\\cap \\\\\n&\\cap (X_1,X_2,X_4,X_5) \\cap (X_2,X_3,X_5,X_6)\\cap (X_3,X_4,X_6,X_1).\n\\end{aligned}\n$$\nSince the associated graph to this monomial ideal is a bipartite graph, it follows from \\cite[Theorem 5.9]{SVV94} that\n$\\mathrm{Ass}(R\/I^n)=\\mathrm{Ass}(R\/I)=\\mathrm{Min}(R\/I)$ for all $n\\geq 1$. Therefore $\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))= \\mathrm{bight}(I)-1= 3$ by Theorem \\ref{T3.2} and Lemma \\ref{L4.2} (ii). On the other hand, by \\cite[Exercise 8.21]{HS06} $\\ell(I)=5$, so\n$$\\mathrm{deg}(\\mathrm{Ir}_{R,I}(n))=3< 4=\\ell(I)-1.$$\n}\n\\end{example}\n\nLet $I$ be an ideal of $R$ and $n$ a positive integer. The $n$th symbolic power $I^{(n)}$ of $I$ is defined by\n$$I^{(n)} = \\bigcap_{\\frak p \\in \\mathrm{Min}(I)}(I^nR_{\\frak p}\\cap R),$$ where ${\\rm Min} (I)$ is the set of all minimal associated prime ideals in Ass$(R\/I)$. Contrary to the function ${\\rm ir}(I^n)$, the behaviour of the function ${\\rm ir}(I^{(n)})$ seems to be better.\n\\begin{proposition} \\label{C4.2}\nLet $I$ be an ideal of $R$. Then there exists a polynomial $p_I(n)$ with rational coefficients that such\n$p_I(n)={\\rm ir}_R (I^{(n)}) $ for sufficiently large $n$ and $${\\rm deg}(p_I(n))={\\rm bight}(I)-1.$$\n\\end{proposition} \\label{C4.2}\n\\begin{proof} It should be mentioned that Ass$(R\/I^{(n)})={\\rm Min} (I)$ for all positive integer $n$. Thus, by virtue of Theorem \\ref{T3.2}, we can show as in the proof of Theorem \\ref{T4.1} that \n$${\\rm ir} _R(I^{(n)})=\\sum\\limits_{\\frak p\\in {\\rm Min}(I)}\\mathrm{ir}_{R_{\\frak p}}(I^nR_{\\frak p})$$ for all $n$. So the proposition follows from Lemma \\ref{L4.2}, (ii).\n\\end{proof}\n\n\\section{Index of reducibility in Cohen-Macaulay modules}\nIn this section, we assume in addition that $R$ is a local ring with the unique maximal ideal $\\frak m$, and $\\frak k = R\/\\frak m$ is the residue field. Let $\\frak q=(x_1,\\ldots, x_d)$ be a parameter ideal of $M$ ($d=\\dim M$). Let $H^i({\\frak q}, M)$ be the $i$-th Koszul cohomology module of $M$ with respect to $\\frak q$ and $H^i_{\\frak m}(M)$ the $i$-th local cohomology module of $M$ with respect to the maximal ideal $\\frak m$. In order to state the next theorem, we need the following result of Goto and Sakurai \\cite[Lemma 3.12]{GS03}.\n\n\\begin{lemma}\\label{L4.6}\nThere exists a positive integer $l$ such that for all parameter ideals $\\frak q$ of $M$ contained in $\\frak m^l$, the canonical homomorphisms on socles\n$$\\mathrm{Soc}(H^i({\\frak q}, M))\\to \\mathrm{Soc}(H^i_{\\frak m}(M))$$\nare surjective for all $i$.\n \\end{lemma}\n\n\\begin{theorem} \\label{T4.7} Let $M$ be a finitely generated $R$-module of $\\dim M=d$. 
Then the following conditions are equivalent:\n\\begin{enumerate}[{(i)}]\\rm\n \\item {\\it $M$ is a Cohen-Macaulay module.}\n \\item {\\it $\\mathrm{ir}_M({\\frak q}^{n+1}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M)) \\binom{n+d-1}{d-1}$ for all parameter ideals $\\frak q$ of $M$ and all $n \\geq 0$.}\n \\item {\\it $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$ for all parameter ideals $\\frak q$ of $M$.}\n \\item {\\it There exists a parameter ideal $\\frak q$ of $M$ contained in $\\frak m^l$, where $l$ is a positive integer as in Lemma \\ref{L4.6}, such that $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M))$.}\n \\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n(i) $\\Rightarrow$ (ii) Let $\\frak q$ be a parameter ideal of $M$. Since $M$ is Cohen-Macaulay, we have a natural isomorphism of graded modules\n\n$$G_M(\\frak q)=\\bigoplus\\limits_{n\\ge 0}\\frak q^nM\/\\frak q^{n+1}M\\to M\/\\frak q M[T_1,\\ldots,T_d],$$ where $T_1,\\ldots,T_d$ are indeterminates.\nThis deduces $R$-isomomorphisms on graded parts\n$$\\frak q^nM\/\\frak q^{n+1}M\\to\\big( M\/\\frak q M[T_1,\\ldots,T_d]\\big)_n\\cong M\/\\frak q M^{\\binom{n+d-1}{d-1}}$$ for all $n\\geq 0$.\nOn the other hand, since $\\frak q$ is a parameter ideal of a Cohen-Macaulay module, $\\frak q^{n+1}M:\\frak m\\subseteq \\frak q^{n+1}M:\\frak q=\\frak q^{n}M$. It follows that\n$$\n\\begin{aligned}\n\\mathrm{ir}_M({\\frak q}^{n+1}M)&=\\lambda_R(\\frak q^{n+1}M:\\frak m\/\\frak q^{n+1}M)=\\lambda_R(0:_{\\frak q^{n}M\/\\frak q^{n+1}M}\\frak m)\\\\\n&=\\lambda_R(0:_{M\/\\frak q M}\\frak m) \\binom{n+d-1}{d-1}=\\dim_{\\frak k}(\\mathrm{Soc}(M\/\\frak q M))\\binom{n+d-1}{d-1}.\n\\end{aligned}\n$$\nSo the conclusion is proved, if we show that $\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$. Indeed, let $\\frak q=(x_1,\\ldots, x_d)$ and $\\overline{M}= M\/x_1M$. Then, it is easy to show by induction on $d$ that\n$$\n\\begin{aligned}\n\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)&=\\dim_{\\frak k} \\mathrm{Soc}(\\overline{M}\/\\frak q \\overline{M})\\\\\n&=\\dim_{\\frak k} \\mathrm{Soc}(H^{d-1}_{\\frak m}(\\overline{M})) = \\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M)).\n\\end{aligned}\n$$\n\n(ii) $\\Rightarrow$ (iii) and (iii) $\\Rightarrow$ (iv) are trivial.\n\n(iv) $\\Rightarrow$ (i) Let $\\frak q =(x_1, \\ldots , x_d)$ be a parameter ideal of $M$ such that $\\frak q\\subseteq \\frak m^l$, where $l$ is a positive integer as in Lemma 5.1 such that the canonical homomorphism on socles\n$$\\mathrm{Soc}(M\/\\frak q M)=\\mathrm{Soc}(H^d({\\frak q}, M))\\to \\mathrm{Soc}(H^d_{\\frak m}(M))$$\nis surjective. Consider the submodule $(\\underline{x})_M^{\\mathrm{lim}}=\\bigcup\\limits_{t\\ge 0}(x_1^{t+1},\\ldots,x_d^{t+1}):(x_1\\ldots x_d)^{t}$ of $M$. This submodule is called the limit closure of the sequence $x_1,\\ldots,x_d$. Then \n$(\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M$ is just the kernel of the canonical homomorphism $M\/\\frak q M\\to H^d_{\\frak m}(M)$ (see \\cite{CHL99}, \\cite{CQ10}). Moreover, it was proved in \\cite[Corollary 2.4]{CHL99} that the module $M$ is Cohen-Macaulay if and only if $(\\underline{x})_M^{\\mathrm{lim}}=\\frak q M$. Now we assume that $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M))$, therefore $\\dim_{\\frak k} \\mathrm{Soc}(H^d_{\\frak m}(M)) =\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)$. 
Then it follows from the exact sequence\n$$0\\to (\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M \\to M\/\\frak q M\\to H^d_{\\frak m}(M) $$\n and the choice of $l$ that the sequence\n$$0\\to \\mathrm{Soc}((\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M) \\to \\mathrm{Soc}( M\/\\frak q M)\\to\\mathrm{Soc}( H^d_{\\frak m}(M))\\to 0 $$\n is a short exact sequence. Hence $\\dim_{\\frak k} \\mathrm{Soc}((\\underline{x})_M^{\\mathrm{lim}}\/\\frak q M)=0 $ by the hypothesis. So $(\\underline{x})_M^{\\mathrm{lim}}=\\frak q M$, and therefore $M$ is a Cohen-Macaulay module.\n \\end{proof}\n \\rm It should be mentioned here that the proof of implication (iv) $\\Rightarrow$ (i) of Theorem \\ref{T4.7} is essentially following the proof of \\cite [Theorem 2.7] {MRS08}. It is well-known that a Noetherian local ring $R$ with $\\dim R =d$ is Gorenstein if and only if $R$ is Cohen-Macaulay with the Cohen-Macaulay \ntype r$(R)=\\dim_{\\frak k}\\mathrm{Ext}^d(\\frak k,M))= 1$. Therefore the following result, which is the main result of \\cite[Theorem]{MRS08}, is an immediate consequence of Theorem \\ref{T4.7}.\n \n \\begin{corollary}\\label{5.3}\n Let $(R, \\frak m)$ be a Noetherian local ring of dimension $d$. Then $R$ is Gorenstein if and only if there exists an irreducible parameter ideal $\\frak q$ contained in $\\frak m^l$, where $l$ is a positive integer as in Lemma \\ref{L4.6}. Moreover, if $R$ is Gorenstein, then for any parameter ideal $\\frak q$ it holds ${\\rm ir}_R(\\frak q^{n+1}) = \\binom{n+d-1}{d-1}$ for all $n\\geq 0$. \n \\end{corollary}\n \\begin{proof}\nLet $\\frak q=(x_1,\\ldots , x_d)$ be an irreducible parameter ideal contained in $\\frak m^l$ such that the map\n $$\\mathrm{Soc}( M\/\\frak q M)\\to\\mathrm{Soc}( H^d_{\\frak m}(M))$$ is surjective.\n Since $\\dim_{\\frak k}\\mathrm{Soc}( H^d_{\\frak m}(M))\\not= 0$ and $\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)=1$ by the hypothesis, \n $\\dim_{\\frak k} \\mathrm{Soc}( H^d_{\\frak m}(M))=1. $ This imples by Theorem \\ref{T4.7} that $M$ is a Cohen-Macaulay module with \n $${\\rm r}(R)=\\dim_{\\frak k}\\mathrm{Ext}^d(\\frak k,M)=\\dim_{\\frak k} \\mathrm{Soc}(M\/\\frak q M)= 1,$$ and so $R$ is Gorenstein. The last conclusion follows from Theorem \\ref{T4.7}.\n \\end{proof}\n \\begin{remark} \\rm Recently, it was shown by many works that the index of reducibility of parameter ideals can be used to deduce a lot of information on the structure of some classes of modules such as Buchsbaum modules \\cite{GS03}, generalized Cohen-Macaulay modules \\cite{CT08}, \n\\cite{Q12} and sequentially Cohen-Macaulay modules \\cite{T13}.\n It follows from Theorem \\ref{T4.7} that $M$ is a Cohen-Macaulay module if and only if there exists a positive integer $l$ such that\n $\\mathrm{ir}_M({\\frak q}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M))$ for all parameter ideals $\\frak q$ of $M$ contained in $\\frak m^l$. The necessary condition of this result can be extended for a large class of modules called generalized Cohen-Macaulay modules. An $R$-module $M$ of dimension $d$ is said to be a generalized Cohen-Macaulay module (see \\cite{CST78}) if $H^i_{\\frak m}(M)$ is of finite length for all $i=0,\\ldots , d-1$. We proved in \\cite [Theorem 1.1]{CT08} (see also \\cite [Corollary 4.4]{CQ11}) that if $M$ is a generalized Cohen-Macaulay module, then there exists an integer $l$ such that \n $${\\rm ir}_M(\\frak qM) = \\sum_{i=0}^d \\binom{d}{i}\\dim_{\\frak k}\\mathrm{Soc}(H^i_{\\frak m}(M)).$$\n for all parameter ideals $\\frak q \\subseteq \\frak m^l$. 
Therefore, we close this paper with the following two open questions, which are suggested during the work in this paper, on the characterization of the Cohen-Macaulayness and of the generalized Cohen-Macaulayness in terms of the index of reducibility of parameter ideals as follows.\n \\vskip0.1cm\n \\noindent\n {\\bf Open questions 5.5.} Let $M$ be a finitely generated module of dimension $d$ over a local ring $R$. Then our questions are \\\\\n 1. Is $M$ a Cohen-Macaulay module if and only if there exists a parameter ideal $\\frak q$ of $M$ such that $$\\mathrm{ir}_M({\\frak qM}^{n+1}M)=\\dim_{\\frak k}\\mathrm{Soc}(H^d_{\\frak m}(M)) \\binom{n+d-1}{d-1}$$ for all $n\\geq 0$?\n \n \\noindent\n 2. Is $M$ a generalized Cohen-Macaulay module if and only if there exists a positive integer $l$ such that $${\\rm ir}_M(\\frak qM)= \\sum_{i=0}^d \\binom{d}{i}\\dim_{\\frak k}\\mathrm{Soc}(H^i_{\\frak m}(M))$$ for all parameter ideals $\\frak q \\subseteq \\frak m^l$?\n \\end{remark}\n \n\\bigskip\n\\noindent{\\bf Acknowledgments.} The authors would like to thank the anonymous referee for helpful comments on the earlier version. This paper was finished during the authors' visit at the Vietnam\nInstitute for Advanced Study in Mathematics (VIASM).\nThey would like to thank VIASM for their support and hospitality.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Motivation}\n\nIn 1998, Ellis \\cite{el98} extended the theory of the Schur multiplier for a pair of groups. By a pair of groups $(G,N)$ we mean a group $G$ with a normal subgroup $N$ of $G$. The Schur multiplier of a pair $(G,N)$ of groups is a\nfunctorial abelian group $M(G,N)$ whose principal feature is a\nnatural exact sequence\n\\begin{eqnarray*}\n H_3(G) &\\rightarrow& H_3(G\/N) \\rightarrow M(G,N) \\rightarrow M(G)\n\\rightarrow M(G\/N)\\\\\n&\\rightarrow& N\/[N,G]\\rightarrow (G)^{ab} \\rightarrow (G\/N)^{ab} \\rightarrow 0\n\\end{eqnarray*}\nin which $H_3(G)$ is the third homology group of $G$ with integer coefficients.\nIn particular, if $N=G$, then $M(G,G)$ is the usual Schur multiplier $M(G)$.\n\nIt has been a considerable question that when $\\exp(M(G))$ divides $\\exp(G)$, in which $\\exp(G)$ denotes the exponent of $G$.\nMacdonald and Wamsely (see \\cite{bay})\nconstructed an example of a group of exponent 4, whereas its Schur\nmultiplier has exponent 8, hence the conjecture is not true in\ngeneral. In 1973, Jones \\cite{jons} proved\nthat the exponent of the Schur multiplier of a finite $p$-group of\nclass $c \\geq 2$ and exponent $p^{e}$ is at most $p^{e(c-1)}$ and hence $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a $p$-group of class 2. In 1987, Lubotzky and Mann \\cite{lub} proved that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a powerful $p$-group. A\nresult of Ellis \\cite{el2001} shows that if $G$ is a $p$-group of class $k\n\\geq 2$ and exponent $p^{e}$, then $\\exp(M(G))\\leq p^{e\\lceil k\/2\\rceil}$, where $\\lceil k\/2\\rceil$ denotes the smallest integer\n$n$ such that $n \\geq k\/2$. Moravec \\cite{mor} showed that\n$\\lceil k\/2\\rceil$ can be replaced by $2\\lfloor \\log_2{k}\\rfloor$\nwhich is an improvement if $k\\geq 11$. He \\cite{mor} also proved that if $G$ is\na metabelian group of exponent $p$, then $\\exp(M(G))$ divides $p$.\nKayvanfar and Sanati \\cite{kay} proved that if $G$ is a $p$-group, then $\\exp(M(G))$ divides\n$\\exp(G)$ when $G$ is a finite $p$-group of class 3, 4 or 5 with some conditions. 
The authors \\cite{mhm} extended the result and proved that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a finite $p$-group of class at most $p-1$.\\\n\nOn the other hand, Ellis \\cite{el98} proved that $\\exp(M(G,N))$ divides $|N|$ for any pair $(G,N)$ of finite groups, in which $|N|$ denotes the order of $N$. Now a question that can naturally arise, is whether $\\exp(M(G,N))$ divides $\\exp(N)$ when $N$ is a proper normal subgroup of $G$. In this paper, first we present an example to give\n a negative answer to the question. Second, we give some conditions\n under which the exponent of $M(G,N)$ divides the exponent of $N$.\n\n In Section 2, we give an upper bound for $\\exp(M(G,N))$ in terms of $\\exp(N)$, when $(G,N)$ is a pair of finite $p$-groups such that $N$ admits a complement in $G$, and apply it to prove that if $(G,N)$ is a pair of finite $p$-groups of class at most $p-1$ (i.e $[N, \\ _{p-1}G]=1$), then $\\exp(M(G,N))$ divides $\\exp(N)$.\nFinally in Section 3, we show that if $(G,N)$ is a pair of finite $p$-groups and $N$ is powerfully embedded in $G$, then $\\exp(M(G,N))$ divides $\\exp(N)$.\n\n\\section{ Nilpotent pairs of $p$-groups}\n\nI. D. Macdonald and J.W. Wamsley \\cite{bay} gave an example which shows that $\\exp(M(G,G))$ dose not divide $\\exp(G)$, in general. The following example shows that $\\exp(M(G,N))$ dose not divide $\\exp(N)$ when $N$ is a proper normal subgroup of $G$. \\\\\n\n\\begin{ex}\nLet $D=A >\\!\\!\\!\\! \\lhd $, where $A=\\times \\times\\times\\cong {\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{2}}$ and $x_1$ is an automorphism of order 2 of $A$ acting in the following way:\n$$[x_2,x_1]=x_2^2,\\ \\ [x_3,x_1]=x_3^2,\\ \\ [x_4,x_1]=x_4^2,\\ \\ [x_5,x_1]=1.$$\nThere exists an automorphism $a$ of $D$ of order 4 acting on $D$ as follows:\n$$[x_1,a]=x_3,\\ \\ [x_2,a]=x_2^2 x_3^2 x_4^3\\ \\ [x_3,a]=x_5,\\ \\ [x_4,a]=x_2^2,\\ \\ [x_5,a]=x_3^2.$$\nForm $N=D >\\!\\!\\! \\lhd $ and put $G=N >\\!\\!\\! \\lhd $, where $b^2=1$ and\n$[x_1,b]=x_2,\\ [x_2,b]=x_2^2x_4^3x_5,\\ [x_3,b]=x_4,\\ [x_4,b]=x_3^2x_4^2,\\ [x_5,b]=x_2^2x_3^2x_4^2,\\ [a,b]=x_1.$\n Moravec \\cite{mor} showed that the group $G$ is a nilpotent group of class 6 and exponent 4 and $M(G)\\cong{\\mathbf Z}_{{2}}\\times{\\mathbf Z}_{{4}}\\times{\\mathbf Z}_{{8}}$.\n Ellis \\cite{el98} proved that if $G=K >\\!\\!\\!\\! \\lhd Q$, then $M(G)\\cong M(G,K)\\oplus M(Q)$. This implies that $M(G,N)\\cong M(G)$. Therefore $\\exp(M(G,N))=8$ dose not divide $\\exp(N)=4$.\n \\end{ex}\n\nHere we first give an upper bound for the exponent of $M(G,N)$ in terms of the exponent of $N$, when $(G,N)$ is a pair of finite $p$-groups such that $N$ admits a complement in $G$. Since our proof relies on commutator calculations, we need to state the following lemmas.\n\n\\begin{lem} (\\cite{st}). Let $x_1, x_2,\n\\ldots , \\ x_r$ be any elements of a group and $\\alpha$ be a nonnegative integer. Then\n$$(x_1 x_2 ... 
x_r)^{\\alpha}=x_{i_1}^{\\alpha}x_{i_2}^{\\alpha} ...x_{i_r}^{\\alpha} \\upsilon_1^\n{f_1(\\alpha)}\\upsilon_2^{f_2(\\alpha)}\\cdots \\ ,$$\nwhere $\\{i_1, \\ i_2, \\ldots,\\ i_r \\} = \\{1, 2, \\ldots, r \\}$ and $\\upsilon_1,\\upsilon_2, \\cdots$ are commutators of weight at least two in the letters $x_i^,$ in ascending order and\n\\begin{equation}\nf_i(\\alpha)=a_1{\\alpha \\choose 1}+ a_2{\\alpha \\choose 2}+\\cdots+a_{w_i}{\\alpha \\choose w_i},\n\\end{equation}\nwith $a_1, \\ldots, a_{wi} \\in \\mathbf{Z} $ and $w_i$ is the\nweight of $\\upsilon_i$ in elements $x_1, \\ldots, x_r$.\n\\end{lem}\n\n\\begin{lem} (\\cite{st}). Let $\\alpha$\nbe a fixed integer and $G$ be a nilpotent group of class at most\n$k$. If $b_1, \\ldots, b_r \\in G$ and $rl$. We will prove the result for $l$. Put\n$\\alpha=p^{e+m(k+1-l)}$ with $m= \\lfloor \\log_p k \\rfloor $ and let $u=[n, x_2, \\ldots , x_l]$ be a commutator of weight $l$, where $n\\in N^*$ and $x_2, \\ldots , x_l \\in G^*$. Then by Lemma 2.2, we have\n$$ [n^{\\alpha}, x_2, \\ldots , x_l]=[n, x_2, \\ldots , x_l]^{\\alpha}\n\\upsilon_1^{f_1(\\alpha)} \\upsilon_2^{f_2(\\alpha)} \\cdots,$$\nwhere $\\upsilon_i$ is a commutator on $n, x_2, \\ldots, x_l$ of weight $w_i$ such that $l < w_i \\leq k+1 $, and $f_i(\\alpha)=a_1{\\alpha \\choose 1}+\na_2{\\alpha \\choose 2}+ \\dots +a_{k_i}{\\alpha \\choose k_i}$, where\n$k_i=w_i - l + 1 \\leq k$, for all $ i \\geq 1$.\nOne can easily check that $p^t$ divides $p^{t+m} \\choose s$ with $m=\\lfloor \\log_p k \\rfloor$, for any prime $p$ and any positive integers $t,s$ with $s \\leq k$.\nThis implies that $p^{e+m(k-l)}$ divides the $f_i(\\alpha)$'s and so by induction hypothesis\n$\\upsilon_i^{f_i(\\alpha)} =1$, for all $ i \\geq 1$.\nOn the other hand, it is clear that $[n^{\\alpha},x_2, \\ldots, x_l] =1$. Therefore $u^{\\alpha}=1$ and this\ncompletes the proof.\n\\end{proof}\n\n\\begin{thm}\nIf $(G,N)$ is\na nilpotent pair of finite groups of class $k$ and $N$ is a $p$-group of exponent\n$p^e$, then $\\exp([N^*,G^*])$ divides $p^{e+m(k-1)}$,\nwhere $m= \\lfloor \\log_p k \\rfloor $.\n\\end{thm}\n\n\\begin{proof}\nEvery element $g \\in [N^*,G^*]$ can be expressed as $g=y_1 y_2 \\cdots\ny_n$, where $y_i=[n_i,g_i]$ for $ n_i \\in N^*, g_i \\in G^*$. Put $\\alpha = p^{e+m(k-1)} $.\nBy Lemma 2.1, we have\n$$g^{\\alpha} = y_{i_1}^{\\alpha} y_{i_2}^{\\alpha} \\cdots y_{i_n}^{\\alpha}\n\\upsilon_1^{f_1(\\alpha)} \\upsilon_2^{f_2(\\alpha)} \\cdots,$$\nwhere $\\{i_1, i_2, \\ldots,i_n \\}=\\{1, 2, \\ldots, n \\}$ and $\\upsilon_i$ is a basic commutator of weight $w_i$ in $y_1, y_2, \\ldots, y_n$, with $2\\leq w_i \\leq k$, for all $i \\geq 1$, and also $f_i(\\alpha)$ is of the form (2.1). Hence by an argument similar to the proof of Lemma 2.5 $p^{e+m(k-2)}$\ndivides $f_i(\\alpha)$. Then applying Lemma 2.5, we have $\\upsilon_i^{f_i(\\alpha)} =1$,\nfor all $i \\geq 1$, and $y_j^{\\alpha} =1$, for all $j$, $1\\leq j \\leq n$.\nWe therefore have $g^{\\alpha}=1$ and the desired result follows.\n\\end{proof}\n\nAn upper bound for the exponent of the Schur multiplier of some pairs of finite groups is given in the following theorem.\n\\begin{thm}\nLet $(G,N)$ be a nilpotent pair of finite groups of class $k$ such that $\\exp(N)=p^e$. 
Then $\\exp(M(G,N))$ is a divisor of $p^{e+m(k-1)}$, where $m= \\lfloor \\log_p k \\rfloor $.\n\\end{thm}\n\n\\begin{proof}\n The result follows by Theorem 2.6 and the fact that $M(G,N)\\cong A \\leq [N^*,G]\\leq [N^*,G^*]$.\n\\end{proof}\n\nThe following corollary gives a condition under which the exponent of the Schur multiplier of a pair $(G,N)$ divides the exponent of $N$.\n\\begin{cor}\nLet $(G,N)$ be a pair of finite $p$-groups of class at most $p-1$. Then $\\exp(M(G,N))$ divides $\\exp(N)$.\n\\end{cor}\n\n\\begin{rem}\nLet $G$ be a finite $p$-group of class $k$ with $exp(G)=p^e$ . Since $M(G,G)=M(G)$, Theorem 2.7 implies that $exp(M(G))$ divides $p^{e+[\\log_pk](k-1)}$. It is easy to see that this bound improves the bound $p^{(2e\\lfloor \\log_2{k}\\rfloor)}$ given by Moravec \\cite{mor}. For example for any $p$-group $G$ of class $k$, $2\\leq k \\leq p-1$ with $exp(G)=p^e$, we have $p^{e+[\\log_pk](k-1)} \\leq p^{(2e\\lfloor \\log_2{k}\\rfloor)}$.\n\\end{rem}\n\n\\begin{rem}\nLet $(G,N)$ be a pair of finite nilpotent groups of class at most $k$. Let $S_1,S_2, \\ldots, S_n$ be all the Sylow subgroups of $G$.\nBy [2, Corollary 1.2], we have $$ M(G,N)=M(S_{1} , S_{1} \\cap N) \\times \\dots\n\\times M(S_{n} , S_{n} \\cap N).$$ Put $m_i= \\lfloor \\log_{p_i} k \\rfloor $, for all $i$,\n $1\\leq i \\leq n$. Then by Theorem 2.7, we have\n$$\\exp( M(G,N) \\mid \\prod^n_{i=1}p_i^{e_i+m_i(k-1)},$$ where\n$p_i^{e_i} = \\exp(S_{i})$.\n\\end{rem}\n\n\\section{ Pairs of powerful $p$-groups}\n\nIn 1987, A. Lubotzky and A. Mann \\cite{lub} defined powerful $p$-groups which are used for studying $p$-groups.\nThey gave some bounds for the order, the exponent and the number of\ngenerators of the Schur multiplier of a powerful $p$-group. Also, they showed that $\\exp(M(G))$ divides $\\exp(G)$ when $G$ is a powerful $p$-group. The purpose of this section is to show that\nif $(G,N)$ is a pair of finite $p$-groups and $N$ is powerfully embedded in $G$, then the exponent of $M(G,N)$ divides the exponent of $N$.\n Throughout this section $\\mho_i(G)$ denotes the subgroup of\n$G$ generated by all $p^{i}$th powers of elements of $G$. It is easy to see that $\\mho_{i+j}(G) \\subseteq \\mho_i(\\mho_j(G))$, for all positive integers $i,j$.\n\\begin{defn}\n$(i)$ A $p$-group $G$ is\ncalled powerful if $p$ is odd and $G' \\leq \\mho_1(G)$, or\n$p=2$ and $G' \\leq \\mho_2(G)$.\\\\\n$(ii)$ Let $G$ is a $p$-group and $N\\leq G$. Then $N$ is powerfully\nembedded in $G$ if $p$ is odd and $[N,G]\\leq \\mho_1(N)$, or $p=2$ and $[N,G]\\leq \\mho_2(N)$.\n\\end{defn}\n\nAny powerfully embedded subgroup is itself a powerful\n$p$-group and must be normal in the whole group. Also a $p$-group is\npowerful exactly when it is powerfully embedded in itself. While it\nis obvious that factor groups and direct products of powerful\n$p$-groups are powerful, this property is not subgroup-inherited \\cite{lub}.\nThe following lemma gives some properties of powerful $p$-groups.\\\\\n\n\\begin{lem}\n (\\cite{lub}). The following statements\nhold for a powerful $p$-group\n$G$.\\\\\n$(i)$ $\\gamma_i(G), G^{i}, \\mho_i(G), \\Phi(G)$ are powerfully\nembedded in $G$.\\\\\n$(ii)$ $\\mho_i(\\mho_j(G))=\\mho_{i+j}(G)$.\\\\\n$(iii)$ Each element of $\\mho_i(G)$ can be written as $a^{p^{i}},$\nfor\nsome $a\\in G$ and hence $\\mho_i(G)=\\{g^{p^{i}}: g\\in G\\}$.\\\\\n$(iv)$ If $G=\\langle a_1,a_2,...,a_d\\rangle$, then $\\mho_i(G)=\\langle\na_1^{p^{i}},a_2^{p^{i}},...,a_d^{p^{i}}\\rangle$.\n\\end{lem}\n\n\\begin{lem}\n (\\cite{lub}). 
Let $N$ be powerfully embedded in $G$. Then $\\mho_i(N)$ is powerfully embedded in $G$.\n\\end{lem}\n\nThe proof of the following lemma is straightforward.\n\\begin{lem}\nLet $M$ and $G$ be two groups with an action of $G$ on $M$. Then for all $m,n \\in M$, $g,h \\in G$, and any integer $k$ we have the following equalities. \\\\\n$(i)$ $[mn,g]=[m,g]^n[n,g]$;\\\\\n$(ii)$ $[m,gh]=[m,h][m,g]^h$;\\\\\n$(iii)$ $[m^{-1},g]^{-1}=[m,g]^{m^{-1}}$;\\\\\n$(iv)$ $[m,g^{-1}]^{-1}=[m,g]^{g^{-1}}$;\\\\\n$(v)$ $[m,g^{-1},h]^g[m,[g,h^{-1}]]^h[[m^{-1},h]^{-1},g]^m=1$;\\\\\n$(vi)$ $[m^k,g]=[m,g]^k [m,g,m]^{k(k-1)\/2} \\pmod{[M, \\ _3G]}$.\n\\end{lem}\n\n\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Suppose that $M$ and $K$ are two normal subgroups of $N^*$. Then $M \\leq K$ if $M \\leq K[M,G]$.\n \\end{lem}\n\n\\begin{proof}\n Applying Lemma 3.4 we have\n$$ M \\leq K[M,G]\\leq K[K[M,G],G]\\leq K[K,G][M,G,G]\\leq \\dots \\leq K[M, \\ _iG], $$\nfor all $i\\geq 1$. On the other hand, since $G$ is a finite $p$-group, there exists an integer $l$ such that $[N, \\ _lG]=1 $. Hence $[N^*, \\ _{l+1}\\ G]=1$ and the result follows.\n\\end{proof}\n\n\\begin{lem}\n Let $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Let $M$ be a normal\nsubgroup of $H$. Then the following statements hold.\\\\\n$(i)$ If $p>2$, then $[\\mho_1(M),G]\\subseteq \\mho_1([M,G] )[M, \\ _3G]$.\\\\\n$(ii)$ If $p=2$, then $[\\mho_2(M),G]\\subseteq \\mho_2([M,G] ) \\mho_1([M, \\ _2G])[M, \\ _3G]$.\n\\end{lem}\n\\begin{proof} $(i)$ It is enough to show that $[m^p,g]\\in \\mho_1([M,G] )[M, \\ _3G]$, for all $m \\in M, g \\in G$.\n By Lemma 3.4 $[m^p,g]={[m,g]}^p {[m,g,m]}^{p(p-1)\/2}\\ \\ \\ \\pmod {[M, \\ _3G]}.$ Since $p$ is odd and $p \\mid \\frac{p(p-1)}{2}$ we have ${[m,g]}^p {[m,g,m]}^{p(p-1)\/2} \\in \\mho_1([M,G] )$. Now the result holds.\\\\\n $(ii)$ The proof is similar to $(i)$.\n\\end{proof}\n\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Suppose that $K \\leq N^*$. Then the following statements hold.\\\\\n$(i)$ If $p>2$, then $[K,G] \\leq \\mho_1 (K)$ if and only if $[K\/[K,\\ _2G], G]\\leq \\mho_1( K\/[K,\\ _2G])$.\\\\\n$(ii)$ If $p=2$, then $[K,G] \\leq \\mho_2 (K)$ if and only if $[K\/[K,\\ _2G], G]\\leq \\mho_2( K\/[K,\\ _2G])$.\\\\\n$(iii)$ If $p=2$, then $[K,G] \\leq \\mho_2 (K)$ if and only if $[K\/\\mho_1([K, G]), G]\\leq \\mho_2( K\/\\mho_1([K, G]))$.\n\\end{lem}\n\\begin{proof} $(i)$ Let $[K,G] \\leq \\mho_1 (K)$ and put $H=[K, \\ _2G]$. Then\n$$ [\\frac{K}{H},G]=\\frac{[K,G]H}{H} \\leq \\frac{\\mho_1(K)H}{H}=\\mho_1(\\frac{K}{H}),$$ as desired. Sufficiency follows by Lemma 3.5.\\\\\n$(ii)$ The proof is similar to $(i)$.\\\\\n$(iii)$ Necessity follows as for (i). Let $[K\/\\mho_1([K, G]), G]\\leq \\mho_2( K\/\\mho_1([K, G]))$. Then $[K,G]\\leq \\mho_2(K) \\mho_1([K,G])$. On the other hand, $\\mho_1([K\/\\mho_1([K, G]), G])$. Thus $[K\/\\mho_1([K, G]), G]$ is abelian and so $\\Phi([K\/\\mho_1([K, G]), G])=1$. This implies that $\\Phi([K,G])=\\mho_1([K,G])$. Therefore $[K,G]\\leq \\mho_2(K)$.\n\\end{proof}\n\nThe following useful remark is a consequence of Lemma 3.7.\n\\begin{rem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a relative central extension of $(G,N)$. Let $K \\leq N^*$. 
Then to prove that $[K,G] \\leq \\mho_1 (K)$ ( $[K,G] \\leq \\mho_2 (K)$ for $p=2$) we can assume that\\\\\n$(i)$ $[K, \\ _2G]=1$;\\\\\n$(ii)$ $\\mho_1(K)=1\\ ( \\mho_2(K)=1$ for $p=2$ ) and try to show that $[K,G]=1$;\\\\\n$(iii)$ $\\mho_1([K,G])=1$ whenever $p=2$.\n\\end{rem}\n\\begin{lem}\nLet $(G,N)$ be a pair of finite $p$-groups and $\\sigma : N^* \\rightarrow G$ be a covering pair of $(G,N)$. Let $N$ be powerfully embedded in $G$. \\\\\n$(i)$ If $p>2$, then $[\\mho_n([N^*,G]),G]\\leq \\mho_1(\\mho_n([N^*,G]))$ .\\\\\n$(ii)$ If $p=2$, then $[\\mho_n([N^*,G]),G]\\leq \\mho_2(\\mho_n([N^*,G]))$.\n\\end{lem}\n\\begin{proof}\n $N^*$ has a subgroup $A$ such that $A \\leq Z(N^*,G)\\cap [N^*,G]$, $A\\cong M(G,N)$ and $N\\cong N^*\/A$.\\\\\n$(i)$ Let $p>2$. We use induction on $n$. If $n=0$, then by Remark 3.8 we may assume that $[[N^*,G],G,G]=1$, $\\mho_1([N^*,G])=1$ and\n we should show that $[[N^*,G],G]=1$.\n Since $N$ is powerfully embedded in $G$, we have $[N,G]\\leq \\mho_1(N)$, and therefore $[N^*,G]\\leq \\mho_1(N^*)A$. Now we claim that $\\mho_1(N^*)\\leq Z(N^*,G)$.\n To prove the claim, let $ a \\in N^*$ and $b\\in G$. Since $\\gamma_3(\\langle a, [N^*,G]\\rangle)=1$, we have $cl() \\leq cl()\\leq 2$ ($cl(H)$ denotes the nilpotency class of $H$). On the other hand, Lemma 3.4 implies that $$\n (a^p)^b=a^p[a^p,b]\\equiv a^p[a,b]^p[a,b,a]^{p(p-1)\/2} \\pmod{[, \\ _3G]}.$$ Therefore $(a^p)^b=a^p$ since $[[N^*,G],G,G]=1$ and $\\mho_1([N^*,G])=1$.\n Hence $\\mho_1(N^*)\\leq Z(N^*,G)$ as desired. Thus $[N^*,G] \\leq \\mho_1(N^*)A\\leq Z(N^*,G)$ and the result follows for $n=0$.\n \n Now suppose that the induction hypothesis is true for $n=k$. The first step of induction implies that $[N^*,G]$ is powerful.\n Using Lemmas 3.5 and 3.6, one can see that if $H$ is a subgroup of $N^*$ and $[H,G] \\leq \\mho_1(H)$, then $[\\mho_1(H),G] \\leq \\mho_1(\\mho_1(H))$. Hence by Lemma 3.2 and induction hypothesis we have\n$$[\\mho_{k+1}([N^*,G]),G]= [\\mho_1(\\mho_{k}([N^*,G])),G]\\leq \\mho_1(\\mho_1(\\mho_{k}([N^*,G])))$$ $$=\\mho_1(\\mho_{k+1}([N^*,G]))$$ which completes the proof.\\\\\n$(ii)$ Let $p=2$. The proof is similar to (i), but we need to prove that if $H$ is a subgroup of $N^*$ and $[H,G]\\leq \\mho_2(H)$, then $[ \\mho_1(H),G] \\leq \\mho_2(\\mho_1(H))$. By Remark 3.8,\n for $a \\in H, b \\in G$ we have $[a^4,b]=[a^2,b]^2=1$. So $a^4 \\in Z(H,G)$ and $\\mho_2(H)\\leq Z(H,G)$. Then $[H,G]\\leq \\mho_2(H)\\leq Z(H,G)$. Therefore $[a^2,b]=[a,b]^2$ and\n \\begin{equation}\n [\\mho_1( H),G]=\\mho_1([H,G]).\n \\end{equation}\n On the other hand, since $\\mho_2(H)\\leq Z(H,G)$, we have\n $$\\mho_1(\\mho_2(H))=<({a_1}^4 \\dots {a_k}^4)^2| a_i \\in H>=<{a_1}^8 \\dots {a_k}^8>=\\mho_3(H)\\leq \\mho_2(\\mho_1(H)). $$\n Hence (3.1) implies that $[\\mho_1(H),G] \\leq \\mho_2(\\mho_1(H))$ which completes the proof of the above claim.\n\\end{proof}\n\\begin{lem}\nLet $H$ and $G$ be two arbitrary groups with an action of $G$ on $N$. If $x \\in H$ and $g\\in G$, then $$[x^n,g]=[x,g]^n c, $$ where $M=\\langle x,[x,g] \\rangle$ and $ c \\in \\gamma_2(M)$.\n\\end{lem}\n\\begin{proof} Applying Lemma 2.1, we have\n$$ [x^n,g]=(x^n)^{-1}(x^n)^{g}=(x)^{-n}(x^g)^n=(x)^{-n}(x[x,g])^n=[x,g]^n c, $$\nwhere $M=\\langle x,[x,g] \\rangle, c \\in \\gamma_2(M)$.\n\\end{proof}\n\nNow we can state the main result of this section.\n\\begin{thm}\n Let $(G,N)$ be a pair of finite $p$-groups in which $N$ is powerfully embedded in $G$. 
Then\n $\\exp(M(G,N))$ divides $ \\exp(N)$.\n \\end{thm}\n\\begin{proof} Let $p>2$ and $\\sigma : N^* \\rightarrow G$ be a covering pair of $(G,N)$ with a subgroup $A$ such that $A \\leq Z(N^*,G)\\cap [N^*,G]$, $A\\cong M(G,N)$ and $N\\cong N^*\/A$. It is enough to show that $\\exp([N^*,G])=\\exp(N^*\/Z(N^*,G))$. For this we use induction on $k$ and show that\n\\begin{equation}\n\\mho_k([N^*,G])=[\\mho_k(N^*),G].\n\\end{equation}\nIf $k=0$, then (3.2) holds. Now assume that (3.2) holds, for $k=n$.\nWorking in powerful $p$-group $N^*\/A$ we get $\\mho_{n+1}(N^*\/A)=\\mho_1 (\\mho_n(N^*\/A))$ by Lemma 3.2. Hence\n\\begin{equation}\n \\frac{\\mho_{n+1}(N^*)A}{A}=\\mho_1(\\frac{\n\\mho_n(N^*)A}{A})=\\frac{\\mho_1(\\mho_n(N^*)A)A}{A}.\n\\end{equation}\nThen Lemmas 3.6 and 3.9 and induction hypothesis imply that\n\\begin{eqnarray*}\n [\\mho_{n+1}(N^*),G]=\n[\\mho_1 (\\mho_n(N^*)A)A,G] & \\leq &\n\\mho_1 ([\\mho_n(N^*)A,G])[\\mho_n(N^*)A, \\ _3G] \\\\& \\leq & \\mho_1 ([\\mho_n(N^*),G])[\\mho_n(N^*)A, \\ _2G] \\\\&\\leq& \\mho_1 (\\mho_n([N^*,G])[\\mho_n([N^*,G]),G] \\\\&\\leq& \\mho_1 (\\mho_n([N^*,G])=\\mho_{n+1}([N^*,G]).\n\\end{eqnarray*}\nFor the reverse inclusion, we show that\n$$\\mho_{n+1}([N^*,G]) \\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$$\nSince by $(3.4)$, $[\\mho_{n+1}(N^*),G])=[\\mho_1 (\\mho_n(N^*)A)A,G]=[\\mho_1 (\\mho_n(N^*)A),G]$, it follows that\n$\\mho_1 (\\mho_n(N^*)A)A \\leq Z(N^*,G) \\pmod{[\\mho_{n+1}(N^*),G]}$.\\\\\nOn the other hand, since $N$ is powerfully embedded in $G$, we have\n\\begin{eqnarray*}\n [\\mho_{n}(N^*),G]=[\\mho_{n}(N^*)A,G] &\\leq& \\mho_1 (\\mho_n(N^*)A)A \\\\&\\leq& Z(N^*,G) \\pmod{[\\mho_{n+1}(N^*),G]}.\n \\end{eqnarray*}\nTherefore $[\\mho_{n}(N^*),G,G]\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$\\\\\n Moreover, by Lemma 3.10 we have\n $$[\\mho_1 (\\mho_n(N^*)A),G][\\mho_{n}(N^*),G,G]=\\mho_1( [ \\mho_n(N^*),G])[\\mho_{n}(N^*),G,G].$$\n It follows that $\\mho_1 [(\\mho_n(N^*)),G]\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}$. Then by induction hypothesis, we have\n $$\\mho_{n+1}([N^*,G])= \\mho_1(\\mho_{n}[N^*,G])=\\mho_1 ([\\mho_n(N^*),G])\\equiv 1 \\pmod{[\\mho_{n+1}(N^*),G]}.$$\n This completes the proof for odd primes $p$. The proof for the case $p=2$ is similar.\n\\end{proof}\n\\section*{Acknowledgement}\nThe authors would like to thank the referee for the valuable comments and useful suggestions to improve the present paper.\n\nThis research was supported by a grant from Ferdowsi University of Mashhad; (No. 
MP90259MSH).\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzakwy b/data_all_eng_slimpj/shuffled/split2/finalzzakwy new file mode 100644 index 0000000000000000000000000000000000000000..0bf67643a70b6e03e3c053393a19e816c3edd1cc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzakwy @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n$Q$-balls represent stationary localized solutions \nof a complex scalar field theory with\na suitable self-interaction in flat space\n\\cite{Friedberg:1976me,Coleman:1985ki}.\nThe global phase invariance of the scalar field theory\nis associated with a conserved charge $Q$ \\cite{Friedberg:1976me},\nwhich represents the electromagnetic charge,\nonce the theory is promoted to a gauge theory.\n\nThe simplest type of $Q$-balls is spherically symmetric.\nThese possess a finite mass and charge, but carry no angular momentum.\nConsidering their mass as a function of their charge,\nthere are two branches of $Q$-balls, merging and ending at a cusp,\nwhere mass and charge assume their minimal values \\cite{Friedberg:1976me}.\n\nRecently, a new type of scalar potential for $Q$-balls was considered,\nleading to the signum-Gordan equation for the scalar field\n\\cite{Arodz:2008jk,Arodz:2008nm}.\nThis potential gives rise to spatially compact $Q$-balls, \nwhere the scalar field vanishes identically outside a critical\nradius $r_o$ \\cite{Arodz:2008jk}.\nWhen coupled to electromagnetism, a new type of solution appears,\n$Q$-shells \\cite{Arodz:2008nm}.\nIn $Q$-shells the scalar field vanishes identically both inside\na critical radius $r_i$ and outside a critical radius $r_o$,\nthus forming a finite shell $r_i < r < r_o$ of charged matter.\n\nWhen gravity is coupled to $Q$-balls, boson stars arise,\nrepresenting globally regular self-gravitating solutions\n\\cite{Lee:1991ax,Jetzer:1991jr,Mielke:2000mh,Schunck:2003kk}.\nThe presence of gravity has a crucial influence\non the domain of existence of the classical solutions.\nInstead of only two branches of solutions joint at a single cusp, \nthe boson stars exhibit an intricate cusp structure,\nwhere mass and charge oscillate endlessly.\nFor black holes with scalar fields, on the other hand, \na number of theorems exist, which exclude their existence\nunder a large variety of conditions \n\\cite{Bekenstein:1971hc,Bekenstein:1995un,Mayo:1996mv}.\n\nHere we consider the effect of gravity on the $Q$-balls and\n$Q$-shells of the signum-Gordon model coupled to a Maxwell field.\nWe construct these charged boson stars and gravitating $Q$-shells\nand analyze their properties and their domains of existence.\nWe observe that at certain critical values of the mass and charge,\nthe space-times form a throat at the (outer) radius $r_o$, \nrendering the respective exterior space-time\nan exterior extremal Reissner-Nordstr\\\"om space-time.\n\nMoreover, we show that in this model\nthe black hole theorems can be elluded,\nthat forbid black holes with scalar hair.\nIndeed, the gravitating $Q$-shells can be endowed with a horizon $r_H$\nin the interior region $0< r_H < r_i$,\nwhere the scalar field vanishes and the gauge potential is constant.\nThis Schwarzschild-type black hole in the interior\nis surrounded by a shell of charged matter, $r_i \\alpha_{cr}$ \nthey then split into a right and left set of solutions,\nforming regions II and IIa, respectively.\nThe solutions in region II correspond to the larger values of $b(0)$,\nwhile the solutions in region IIa are restricted 
to the\nsmaller values of $b(0)$.\nWith increasing $\\alpha$, the sets of solutions in region IIa\nmove towards smaller values of $b(0)$,\npossibly disappearing at some critical value of the gravitational coupling,\nwhereas the sets of solutions in region II\nmove towards larger values of $b(0)$.\n\nFig.~\\ref{QB_r0_vs_b0} shows the outer radius $r_o$ for these sets of solutions,\nand thus the size of the corresponding boson stars.\nClearly, the biggest size for a given $\\alpha \\le \\alpha_{cr}$\nis always reached in region I at the bifurcation point\nwith the shell-like solutions.\nThe oscillations of the gauge field value $b(0)$ \nwith increasing scalar field value $h(0)$ seen in region II in\nFig.~\\ref{phasediag}\nare reflected in the spirals\nformed by the outer radius $r_o$ in region II in Fig.~\\ref{QB_r0_vs_b0}.\nThey are also present in regions IIa and Ia, whenever\nthe gauge field value $b(0)$ exhibits oscillations.\n\nThe mass $M$ and the charge $Q$ of these sets of boson star solutions\nare exhibited in Figs.~\\ref{QB_M_vs_b0} and \\ref{QB_Q_vs_b0}.\nBoth show a very similar pattern.\nAgain, the biggest mass and charge for a given $\\alpha \\le \\alpha_{cr}$\nare reached in region I at the respective bifurcation point\nwith the shell-like solutions,\nwhile the oscillations of $b(0)$ seen in regions Ia, II and IIa\nlead to spiral patterns for the mass and charge.\n\nThe corresponding family of curves for the asymptotic\nvalue $b(\\infty)$ of the gauge field function $b(r)$ at infinity\n(which can be indentified with the value of the\nscalar field frequency $\\omega$ in the gauge,\nwhere the gauge field vanishes at infinity)\nis exhibited in Fig.~\\ref{QB_om_vs_b0}.\nHere the overall pattern is different, but spirals occur as well.\nFinally, in Fig.~\\ref{QB_Mvs_Q} we exhibit\nthe ratio of mass and charge $M\/Q$ versus $Q$.\nWe observe a linear increase of $M\/Q$ with $Q$ for the larger values\nof $Q$ in regions I and IIa, where\nthe slope decreases with increasing $\\alpha$,\nmaking $M\/Q$ almost constant for larger values of \n$\\alpha$ (e.g.,~$\\alpha=0.22$).\n\nWhile the occurrence of spirals is a typical feature of\nboson star solutions \\cite{Friedberg:1976me}, \nthe present sets of solutions exhibit a for boson stars new phenomenon,\nnamely the formation of throats.\nAs a throat is formed,\nthe minimum of the metric function $N(r)$ tends to zero,\nand the zero is reached precisely at the outer radius $r_o$.\nAt the same time the metric function $A(r)$ tends to a step\nfunction, that vanishes inside $r_o$, and assumes the asymptotic value\n$A(r)=1$ outside $r_o$.\n(In Fig.~\\ref{fun} the functions close to throat formation\nare exhibited in the case of black holes.)\n\nThe space-time for $r \\ge r_o$ then corresponds to the exterior\nspace-time of an extremal Reissner-Nordstr\\\"om (RN) black hole.\nIndeed, there the metric function $N(r)$ can be expressed as\n\\begin{equation}\nN(r) = 1 - \\frac{2 \\alpha^2 M}{r} + \\frac{\\alpha^2 Q^2}{r^2} \n = \\left( 1 - \\frac{\\alpha Q}{r} \\right)^2 \n , \\label{RN} \\end{equation}\ni.e., $r_H = r_o= \\alpha^2 M = \\alpha Q$ for the extremal RN solution\n(in the units employed).\nAs seen in Fig.~\\ref{QB_Mvs_Q},\nthis relation is precisely satisfied, when $b(0) \\rightarrow 0$.\nThus a throat is formed, when in a set of solutions the value $b(0)$ \nof the gauge field function tends to zero.\nIn fact, the function $b(r)$ then tends to zero in the whole region\n$r< r_o$, and its derivative $b'(r)$ does so as well.\nHowever, at $r_o$ 
the derivative $b'(r)$ jumps to a finite\nvalue, necessary for the \nCoulomb fall-off of a solution with charge $Q$.\n\nFinally we note, that the sets of boson star solutions\nwith fixed gravitational coupling constant $\\alpha$ \nsatisfy a mass relation.\nThis relation is based on the observation, that\n\\begin{equation}\nd M = b(\\infty) d Q \n , \\label{Mreg2} \\end{equation}\nshown to hold for the regular solutions in flat space \\cite{Arodz:2008nm}.\nSince (\\ref{Mreg2}) continues to hold for the gravitating solutions,\nintegration \nyields the mass relation\n\\begin{equation}\nM_2 = M_1 + M_Q =\n M_1 + \\int_{Q_1}^{Q_2} b(\\infty) d Q\n , \\label{Mreg} \\end{equation}\nwhere the mass $M_2$ of a regular solution with charge $Q_2$\nis obtained \nby integrating from any regular solution $M_1$ with charge $Q_1$\nalong the curve of intermediate solutions of the set.\n\n\n\n\\section{Gravitating $Q$-Shells}\n \n\\begin{figure}[t!]\n\\begin{center}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig2a.eps}\n\\label{fun1}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig2b.eps}\n\\label{fun2}\n}\n}\n\\end{center}\n\\caption{Functions of the gravitating $Q$-shell solutions \nshown versus the radial coordinate $r$ for $Q=10$ and $\\alpha^2=0.1$:\n(a) metric functions $A(r)$ and $N(r)$;\n(b) matter functions $h(r)$ and $b(r)$.\nAlso shown are the corresponding functions for\nblack hole solutions with several horizon radii $r_H$.\nThe largest $r_H$ is close to the critical\nvalue, where the throat is formed.\n\\label{fun}\n}\n\\end{figure}\n\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig3a.eps}\n\\label{phaseSdiag2}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig3b.eps}\n\\label{QS_r0_vs_b1}\n}\n}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig3c.eps}\n\\label{QBS_M_vs_b1}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig3d.eps}\n\\label{QS_MQ_vs_b1}\n}\n}\n\\end{center}\n\\caption{Properties of the gravitating $Q$-shell solutions shown versus \n$b(r_i)$,\nthe value of the gauge field function $b(r)$ at the inner \nshell radius $r_i$:\n(a) $r_i\/r_o$, the ratio of inner and outer shell radii;\n(b) the outer shell radius $r_o$;\n(c) the mass $M$ of shell-like solutions and boson stars \n(resp.~$Q$-balls for $\\alpha=0$);\n(d) the scaled mass $\\alpha^2 M$ and the scaled charge $\\alpha Q$.\nNote that $a=\\alpha^2$, and the asterisks mark the transition\npoints from boson stars ($Q$-balls) to $Q$-shells.\n\\label{Q_shell}\n}\n\\end{figure}\n\nLet us next consider the gravitating shell-like solutions.\nHere the space-time consists of 3 parts.\nIn the inner part $0 \\le r < r_i$ \nthe gauge potential is constant and the scalar field vanishes.\nConsequently, it is Minkowski-like,\nwith $N(r)=1$ and $A(r)={\\rm const}<1$. 
\nThe middle region $r_i < r < r_o$ then represents the shell of\ncharged bosonic matter, while the outer region\n$r_o < r < \\infty$ corresponds to part of a Reissner-Nordstr\\\"om\nspace-time, where the gauge field exhibits the standard Coulomb fall-off,\nwhile the scalar field vanishes identically.\nThis behaviour of the functions is demonstrated in Fig.~\\ref{fun} for\nthe shell-like solution with charge $Q=10$ and \ngravitational coupling constant $\\alpha^2=0.1$.\n\nWe exhibit in Fig.~\\ref{Q_shell} the properties of\nshell-like solutions.\nFig.~\\ref{phaseSdiag2} shows the ratio of the\ninner radius $r_i$ to the outer radius $r_o$ for these solutions.\nFor a given finite value of the gravitational coupling,\nthe branch of gravitating shells emerges\nat the corresponding boson star solution and ends,\nwhen a throat is formed at the outer radius $r_o$.\nAs this happens, the value of\n$b(r_i)$ reaches zero (or equivalently $b(0)\\rightarrow 0$,\nsince $b(r)$ is constant in the interior, $0 \\le r \\le r_i$).\nThe exterior space-time $r > r_o$ then corresponds to the exterior of an\nextremal RN space-time.\n\nThus in contrast to $Q$-shells in flat space, which grow rapidly\nin size, mass and charge as the ratio $r_i\/r_o \\rightarrow 1$,\nthe growth of gravitating $Q$-shells is limited by gravity,\nand the restriction in size, mass and charge is the stronger, \nthe greater the value of the gravitational coupling constant $\\alpha$.\nThis is demonstrated in Figs.~\\ref{QS_r0_vs_b1}, \\ref{QBS_M_vs_b1}\nand \\ref{QS_MQ_vs_b1}, \nwhere the outer radius $r_o$, the mass $M$\nand the charge $Q$ are exhibited\nfor a sequence of values of the gravitational coupling constant.\n\nIn Fig.~\\ref{QBS_M_vs_b1} for comparison also the mass of the\ncorresponding boson star solutions (resp.~$Q$-ball solutions\nfor vanishing gravitational coupling constant) are exhibited.\nThe transitions from the ball-like to the shell-like solutions\nare indicated in the figure by the small asterisks.\nWith increasing $\\alpha$ the sets of shell-like solutions\ndecrease rapidly, until at the critical value\n$\\alpha_{sh}$ (see Fig.~\\ref{phasediag})\nthey cease to exist.\n\nFig.~\\ref{QS_MQ_vs_b1} exhibits the scaled mass $\\alpha^2 M$\nand the scaled charge $\\alpha Q$ for several sets of\ngravitating $Q$-shells. 
Together with Fig.~\\ref{QS_r0_vs_b1}\nthe figure demonstrates,\nthat the condition \nfor extremal RN solutions,\n$r_o= \\alpha^2 M = \\alpha Q$,\nis satisfied\nfor the shell-like solutions,\nas the throat forms at the outer radius $r_o$.\n\nFinally we note, that the shell-like solutions satisfy the mass relation\n(\\ref{Mreg}) as well.\nConsequently the mass relation holds for any two globally regular solutions\nof a set with given gravitational coupling constant,\nthus relating also ball-like and shell-like solutions.\n\n\\section{Black Holes}\n\n\\begin{figure}[p!]\n\\begin{center}\n\\vspace{-1.0cm}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4a.eps}\n\\label{BHQ10_b1_vs_rh}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4b.eps}\n\\label{BHQ100_b1_vs_rh}\n}\n}\n\\vspace{-0.5cm}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4c.eps}\n\\label{BHQ10_M_vs_rh}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4d.eps}\n\\label{BHQ100_M_vs_rh}\n}\n}\n\\vspace{-0.5cm}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4e.eps}\n\\label{BHQ10_T_vs_rh}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig4f.eps}\n\\label{BHQ100_T_vs_rh}\n}\n}\n\\end{center}\n\\caption{Properties of the black hole solutions \nwith Schwarzschild-type interior shown versus \nthe horizon radius $r_H$:\nThe left column exhibits for solutions with fixed charge $Q=10$\n(a) \n$b(r_i)$, the value of the gauge\nfield function $b(r)$ at the inner shell radius $r_i$;\n(c) the mass $M$;\n(e) the ratio of the temperature $T$ at the black hole horizon $r_H$ to the\ncorresponding Schwarzschild black hole temperature $T_S$.\nThe right column ((b), (d), (f)) exhibits the\nsame for solutions with fixed charge $Q=100$.\nNote that $a=\\alpha^2$. 
\n\\label{BH1}\n}\n\\end{figure}\n\nLet us finally address black holes in this model.\nThe simplest type of black holes is obtained,\nwhen the Minkowski-like inner part of the space-time,\n$0 \\le r \\le r_i$, of gravitating $Q$-shell solutions\nis replaced by the inner part of a curved Schwarzschild-like space-time.\nThe metric in the interior region $0 \\le r \\le r_i$\nis then determined by the function $N(r)= 1 - (r_H\/r)$ \nand a constant function $A(r)$.\nThus the event horizon resides at $r_H < r_i$.\nBut the presence of the $Q$-shell outside the event horizon,\nmakes the properties of the black hole differ from those\nof a pure Schwarzschild black hole.\n\nSince with the event horizon size a further variable appears,\nwhich is an important physical quantity,\nwe discuss the black hole properties with respect to the\nhorizon radius $r_H$ in the following.\nThe metric and matter field functions \nfor black holes with charge $Q=10$ \nat gravitational coupling $\\alpha^2=0.1$\nare exhibited in Fig.~\\ref{fun} \nfor several values of the horizon radius $r_H$.\n\nTo illustrate the domain of existence of such black hole solutions,\nwe again choose a sequence\nof values for the gravitational coupling constant,\nbut we now keep the charge $Q$ fixed, as we vary \nthe horizon radius, starting from the corresponding\nglobally regular $Q$-shell solution.\nA respective set of solutions is shown in Fig.~\\ref{BH1}\nfor $Q=$10 and 100.\n\nFirst of all we note, that the horizon radius is always limited\nin size, where the maximal size grows with the charge $Q$.\nFor small $Q$, e.g.~$Q=$10, we observe two distinct patterns\nfor the black hole solutions.\nThe first pattern arises when the\nfixed gravitational coupling constant has a value\nbelow a certain critical value.\nHere a maximal horizon size is reached,\nwhen the horizon radius $r_H$ gets close to the inner radius \nof the shell $r_i$.\nThere a bifurcation occurs and a second branch emerges,\nwhich ends at a second bifurcation, where a third branch emerges, etc.\nThis results in a spiralling pattern,\nwhere the mass $M$ and the temperature $T$ of the solutions tend towards\nfinite limiting values.\n(The first few branches are apparent in Fig.~\\ref{BHQ10_b1_vs_rh},\nand enlarged in the inlet for a representative value\nof the gravitational coupling constant, $\\alpha^2=0.01$,\nwhile the higher branches are too small to be resolved there.)\n\nThe second pattern is present above that critical value\nof the coupling constant. Here the set of black hole solutions\nfor fixed gravitational coupling ends, when a throat is formed\nat the outer shell radius $r_o$.\nThere the condition for extremal RN solutions,\n$r_o= \\alpha^2 M = \\alpha Q$, is satisfied again,\nas seen in Figs.~\\ref{BHQ10_M_vs_rh} and \\ref{BHQ100_M_vs_rh}.\nAs the throat forms,\nthe temperature $T$ at the event horizon of the Schwarzschild-like\nblack hole $r_H < r_i$ tends to zero,\nas seen in Figs.~\\ref{BHQ10_T_vs_rh} and \\ref{BHQ100_T_vs_rh}.\n\nWhile appearing at first unexpected,\nthe reason for the vanishing of the temperature $T$\nis the behaviour of the metric function $A(r)$\nin $g_{tt}$, since $A(r)$ tends to zero in the interior,\nwhen the throat is formed,\nas seen in Fig.~\\ref{fun1}.\nWe recall, that the ratio of the temperature $T$ of\nthe black hole within the $Q$-shell \nto the temperature \n$T_{\\rm S} = (4 \\pi r_{\\rm H})^{-1}$ \nof the Schwarzschild black hole is given by\n$\\displaystyle T \/ T_{\\rm S}\n = \\left. 
r A N' \\right|_{r_{\\rm H}} = A(r_H)$.\n\nFor larger (fixed) values of the charge we always observe\nthis second pattern,\nalthough the throat may either be reached directly after a monotonic\nincrease of the horizon radius $r_H$ to its maximum value,\nor along a second branch, where the\nhorizon radius is decreasing again (having passed a bifurcation),\nas seen in Fig.~\\ref{BHQ100_b1_vs_rh}.\n\nAs seen in the figure,\nwhenever bifurcations occur, there are two (or more) black hole\nspace-times with the same value of the charge $Q$ and the\nsame horizon radius $r_H$ (within a certain range of values),\nbut different values of the total mass $M$\nas measured at infinity.\nSurprisingly, however, \nthere are also two (or more) black hole space-times\nwith the same value of the charge $Q$ and the same \nvalue of the total mass $M$ (within a certain range of values).\nThese black holes thus have the same set of global charges\nbut are otherwise distinct solutions of the Einstein-matter equations.\nConsequently black hole uniqueness does not hold in this model\nof scalar electrodynamics.\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\vspace{-0.5cm}\n\\mbox{\\hspace{-0.5cm}\n\\subfigure[][]{\\hspace{-1.0cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig5a.eps}\n\\label{figiso1}\n}\n\\subfigure[][]{\\hspace{-0.5cm}\n\\includegraphics[height=.27\\textheight, angle =0]{Fig5b.eps}\n\\label{figiso2}\n}\n}\n\\end{center}\n\\caption{Mass formulae for black hole solutions:\n(a) the mass $M$ of solutions with fixed charge $Q=10$ and fixed\ngravitational coupling constant $\\alpha$,\nobtained from the asymptotic metric (\\ref{mass}) and the\nmass formula (\\ref{IHmu});\n(b) the mass $M$ of solutions with variable charge $Q$ but\nfixed ratio of inner and outer shell radii $r_i\/r_o$ for several fixed\nvalues of the gravitational coupling constant $\\alpha$,\nobtained from the asymptotic metric (\\ref{mass}) and the\nmass formula (\\ref{IHmuDQ2}).\nNote that $a=\\alpha^2$.\n\\label{BH2}\n}\n\\end{figure}\n\nLet us finally consider some mass relations\nfor these black holes space-times possessing $Q$-shells.\nWe begin by recalling an interesting result\nobtained in the isolated horizon framework\n\\cite{Ashtekar:2004cn}.\nIt states that the mass $M$ \nof a black hole space-time with horizon radius $r_H$\nand the mass $M_{\\rm reg}$\nof the corresponding globally regular space-time\nobtained in the limit $r_H \\rightarrow 0$ are related via\n\\cite{Corichi:1999nw,Ashtekar:2000nx,Ashtekar:2004cn}\n\\begin{equation}\nM = M_{\\rm reg} + M_\\Delta ,\n\\label{IHmu} \\end{equation}\nwhere the mass contribution $M_\\Delta$ is defined by\n\\begin{equation}\nM_\\Delta = \\frac{1}{\\alpha^2} \\, \\int_0^{r_H} \\kappa(r'_H)r'_H d r'_H .\n\\label{IHmuD}\n\\end{equation}\nHere $\\kappa(r_H)$ represents the surface gravity \nof the black hole with horizon radius $r_H$,\n$\\kappa = 2 \\pi T$.\nAccordingly, the mass $M$ \nof a space-time with a black hole\nwith horizon radius $r_H$ within a $Q$-shell with total charge $Q$\nshould be obtained as the sum of the globally regular gravitating $Q$-shell\nwith charge $Q$ and the integral $M_\\Delta$ along the set of black hole\nspace-times, obtained by increasing the horizon radius for fixed charge\nfrom zero to $r_H$.\n\nThis relation is demonstrated in Fig.~\\ref{figiso1}\nfor the set of solutions with charge $Q=10$ and gravitational coupling\nconstant $\\alpha^2=0.01$.\nThe values for the mass $M$\nobtained from the relation (\\ref{IHmuD}) are seen to agree with\nthe 
values for the black hole mass $M$ obtained from the\nasymptotics (\\ref{mass}).\nThe set of solutions exhibited has spiralling character,\ni.e., it has besides the main first branch\na second branch, also exhibited, and further branches, \nnot resolved in the figure.\n\nWhen the charge is allowed to vary, too, one expects\na change of the above relation in accordance with (\\ref{Mreg2})\nand the first law (in the units empoyed), i.e.,\n\\begin{equation}\ndM = \\frac{\\kappa}{8 \\pi \\alpha^2} d{\\cal A} + b(\\infty) dQ\n , \\label{firstlaw} \\end{equation}\nwhere ${\\cal A} = 4 \\pi r_H^2$ denotes the area of the horizon\nand $b(\\infty)$ represents the electrostatic potential at infinity.\nThus we generalize the above relation (\\ref{IHmu}) to read\n\\begin{equation}\nM = M_{\\rm reg} + M_\\Delta + M_Q =\n M_{\\rm reg} + M_\\Delta \n + \\int_{Q_{\\rm reg}}^{Q} b(\\infty) d Q' .\n\\label{IHmuDQ2}\n\\end{equation}\nThis relation is demonstrated in Fig.~\\ref{figiso2},\nwhere for several values of the gravitational coupling constant\nand for fixed ratio\nof inner and outer shell radii $r_i\/r_o$,\nthe values for the mass $M$\nobtained from the relation (\\ref{IHmuDQ2}) are shown together with\nthe values for the mass $M$ obtained from the\nasymptotics (\\ref{mass}).\n\n\n\\section{Conclusion and Outlook}\n\nWe have considered boson stars, gravitating $Q$-shells and\nblack holes within $Q$-shells in scalar electrodynamics\nwith a $V$-shaped scalar potential,\nwhere the scalar field is finite only in compact ball-like or\nshell-like regions.\n\nThe gravitating $Q$-shells surround a flat Minkoswki-like interior region,\nwhile their exterior represents part of an exterior\nRN space-time.\nWhen the flat interior is replaced by a Schwarzschild-like\ninterior, black hole space-times result, where\na Schwarzschild-like black hole is surrounded by a compact shell of\ncharged matter, whose exterior again represents part of an exterior\nRN space-time.\n\nThese black hole space-times violate black hole uniqueness,\nin certain regions of parameter space.\nHere for the same values of the mass $M$ and the charge $Q$\ntwo or more distinct solutions of the Einstein-matter equations\nexist.\n\nThe solutions satisfy certain relations of the type obtained first in the\nisolated horizon formalism, which connect the mass $M$ of \na black hole solution with the mass $M_{\\rm reg}$ of the\nassociated globally regular solution.\nThe masses of two regular solutions are related in an analogous \n(simpler) manner.\nThis formalism further suggests to interpret the\nblack hole space-times as bound states of\nSchwarzschild-type black holes and gravitating $Q$-shells\n\\cite{Ashtekar:2000nx}.\n\nWhile we have restricted our discussion here to\nSchwarzschild-type black holes in the interior,\nthere are also black hole space-times with charged, i.e.,\nReissner-Nordstr\\\"om-type interior solutions.\nThese more general black hole space-times \nwill be discussed elsewhere.\n\nThe inclusion of rotation presents another interesting generalization\nof the solution considered here, \nsince rotating boson stars are well-known \n\\cite{Mielke:2000mh,Schunck:2003kk,Yoshida:1997qf,Kleihaus:2005me,Kleihaus:2007vk}.\nThe construction of the corresponding rotating shells\nand their black hole generalizations, however, still poses a challenge.\n\n\\vspace{0.5cm}\n{\\bf Acknowledgement}\n\n\\noindent\nBK gratefully acknowledges support by the DFG,\nCL and ML by the DLR.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} 
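As a quick consistency check on the extremal Reissner-Nordström limit discussed in the boson-star/$Q$-shell paper above (the throat condition $r_o = \alpha^2 M = \alpha Q$ and the factorized metric function $N(r)$), the following is a minimal symbolic sketch. It is illustrative only, not part of the paper's numerics; the symbol names are ours.

```python
# Minimal sketch (illustrative, not the paper's code): verify that
# N(r) = 1 - 2*alpha^2*M/r + alpha^2*Q^2/r^2 factorizes as (1 - alpha*Q/r)^2
# exactly when alpha*M = Q, so that r_o = alpha^2*M = alpha*Q is a double root.
import sympy as sp

r, alpha, Q = sp.symbols('r alpha Q', positive=True)
M = Q / alpha                                   # extremality condition alpha*M = Q
N = 1 - 2*alpha**2*M/r + alpha**2*Q**2/r**2

print(sp.simplify(N - (1 - alpha*Q/r)**2))      # -> 0, the factorized form
print(sp.solve(sp.Eq(N, 0), r))                 # -> [alpha*Q], the (double) root r_o

# At throat formation the interior metric function A(r) tends to zero, so the
# ratio T/T_S = A(r_H) vanishes, consistent with the zero-temperature limit
# of the black holes inside Q-shells discussed in the text.
```

The check only restates relations already given in the text; it is meant as a reading aid, not as new input to the analysis.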
+{"text":"\\section{Introduction}\n\nHurwitz theory is the study of maps of algebraic curves, viewed as ramified covers of orientable surfaces. At the intersection of geometry, representation theory and combinatorics, it is an area that naturally lends itself to making bridges and connections. In this paper we study the combinatorial structure of certain Hurwitz spaces via a parallel investigation in tropical geometry. \n\n\n\\subsection{Summary}\n\nThe study of the connections between classical and tropical Hurwitz theory was initiated in \\cite{CJM10} and \\cite{CJMwc}; the tropical point of view provided a combinatorial interpretation of double Hurwitz numbers which was very well tuned to describing the polynomial aspects and the wall crossing phenomena occurring in the theory. We continue this exploration by studying rational double Hurwitz loci inside spaces of (relative\/tropical) stable maps, generically parameterizing covers of $\\mathbb{P}^1$ with two prescribed ramification profiles and a part of the branch divisor fixed, and their pushforwards to the moduli space of curves, which we call double Hurwitz cycles. Besides the genus, the discrete invariants we fix are the total length $n$ of the special ramification profile, and the dimension of the loci we want to study. Then we study families of Hurwitz loci parameterized by integral points in an $(n-1)$ dimensional vector space. \n\nOn the classical side, we realize Hurwitz loci as the pullback via a natural branch morphism of appropriate strata in spaces of pointed chains of projective lines (Losev-Manin spaces, Section \\ref{sec:mscm}). This gives a boundary expression for Hurwitz cycles where the coefficients are piecewise polynomials in the entries of the special ramification data (Theorem \\ref{thm:poly}). \n\nOn the tropical side, we realize the Hurwitz loci as tropical Gromov-Witten cycles. Our tropical Hurwitz cycles are balanced polyhedral complexes; their topology is constant in the chambers of polynomiality of (classical) Hurwitz cycles, whereas their geometry (affine integral structure, weights and coordinates of vertices) varies in a polynomial way in terms of the special ramification profiles. \n\nNaturally, we also study the connection between classical and tropical Hurwitz cycles (Section \\ref{sec:tcc}) and observe a natural combinatorial duality between tropical and classical strata. To be more precise, the stratification on the tropical side is the polyhedral structure inherited from the moduli space of tropical curves. The stratification on the classical side is the usual stratification in boundary classes. For Hurwitz cycles of dimension $d$, $k$-dimensional classical strata correspond to $d-k$-dimensional tropical strata, and the combinatorial type of the tropical stratum can be encoded in terms of the dual graph of the classical stratum.\n\n\nWe conclude the paper by studying how Hurwitz cycles vary across the walls of the chambers of polynomiality, both on the classical and on the tropical side. In a similar fashion to Hurwitz numbers, the wall crossing formulae have an inductive form: the cycles in the formula can be obtained as pushforwards via appropriate gluing morphism of pairs of boundary strata coming from Hurwitz cycles where the profile data is split according to the equation of the wall, and the dimensions are split in all possible ways adding up to the correct one. 
\nEven though the final form of the tropical and classical wall-crossing formulae is essentially the same, there are some subtleties involved in even making sense of what a tropical wall crossing formula may be: that's why we treat the two cases separately. The classical wall crossing formula is Theorem \\ref{thm:wc}, the (cleanest form of the) tropical one Corollary \\ref{cor:twc}.\n\nTo make our exposition easier to follow, throughout the paper we use the one-dimensional case ((tropical) Hurwitz curves) as a running example.\n\n\\subsection{Context, Motivation and further directions of our research}\n\nBecause of the many and diverse categories equivalent to curves and their maps, Hurwitz theory is by nature ``interdisciplinary'': exploring the dictionary between the tropical and classical approaches to Hurwitz theory is a natural thing to do. It has already been fruitful and hopefully will bear even more applications in the future. \n\nBefore Hurwitz theory made its appearance in tropical geometry, the area of tropical enumerative geometry was pioneered by Mikhalkin's celebrated Correspondence Theorem which relates classical numbers of plane curves to their tropical counterparts \\cite{Mi03}. Nowadays, numbers of tropical curves can (at least in the rational case) be understood as intersection products in an appropriate moduli space of tropical curves, just as in the classical world. For higher genus, understanding the appropriate moduli space of tropical curves and its connection to the moduli space of algebraic curves is an active area of research (see e.g.\\ \\cites{CMV12,Cap12a}). Also in the rational case, the connection between the intersection theory of the moduli space of algebraic curves and the moduli space of tropical curves is not yet completely understood. Combinatorial dualities such as the one we observe in Section \\ref{sec:tcc} are present in many situations, but in our opinion they do not fully explain the success of tropical methods in enumerative geometry. We expect deeper connections to be discovered. Our paper contributes some interesting and geometrically meaningful families of cycles in the intersection ring of tropical $\\mathcal{M}_{0,n}$, and makes important steps in understanding the correspondence of these cycles to their classical counterparts. We hope to extent the study to higher genus, and to contribute to the understanding of the deeper connections between moduli spaces of algebraic and tropical curves.\n\n\nClassically double Hurwitz loci were a key ingredient in the proof of Theorem $\\star$, the main result of \\cite{gv:rvl}: tautological classes in the moduli space of curves of degree greater than $g-1$ (say $g+k$ with $k$ a non-negative integer) must admit an expression supported on the boundary, and more specifically on strata parameterizing curves with at least $k$ rational components. Then again they were applied to the study of tautological classes in \\cite{gjv:last}, even though in neither of these cases they were viewed as families of loci with any sort of polynomial structure. This makes its appearence for the case of $0$-dimensional cycles, or more mundanely double Hurwitz numbers, in \\cite{gjv:ttgodhn}. 
After \\cites{ssv:cbodhn,CJM10,CJMwc} the algebro-combinatorial aspects of the piecewise polynomiality\nof double Hurwitz numbers are well understood, leading the way to some really interesting geometric questions: \n\\begin{description}\n\\item[ELSV-type formula for double Hurwitz numbers] can double Hurwitz numbers be obtained as intersections of tautological classes on some family of birational moduli spaces --- constant in the polynomiality chambers --- in a way that naturally explains the structure of the polynomials? Can the wall crossings be somehow seen as coming from the birational transformations occurring when crossing the walls?\n\\item[Higher dimensional Hurwitz loci] How well does the piecewise polynomial structure carry over to higher dimensional loci? In particular can we understand full Hurwitz spaces (or rather their compactifications such as Admissible Covers or Relative Stable Maps) as tautological classes in the moduli space of curves?\n\\end{description}\nThis paper provides an exhaustive answer for the second question in genus $0$: here the full moduli space is birational to $\\overline{M}_{0,n}$, and we show that every time we increase the codimension by one by fixing a simple branch point in the branch divisor we obtain a polynomial class of one degree higher. In genus $0$ an ELSV formula is trivially showed to hold in the one part double Hurwitz number case (\\cite{gjv:ttgodhn}), and we are hopeful that a geometric understanding of the wall crossings will be complete soon. In higher genus, the situation is both more complicated and more interesting: here the Hurwitz moduli space represents a codimension $g$ tautological class, which has recently been at the center of attention because of its connections with symplectic field theory (\\cite{eliashberg}). Recently Hain (\\cite{H11}) has produced tautological classes on $\\overline{M}_{g,n}$, which agree with double Hurwitz loci when restricted to the partial compactification of curves of compact type (however simple intersection computations show that Hain's class does not agree with either the Admissible Cover nor the Relative Stable Map compactification of the Hurwitz space already in genus one). Interestingly, Hain's class is homogeneous polynomial of degree $2g$. Understanding how Hain's class compares with Admissible Covers or Relative Stable Maps, besides being a very interesting question on its own, is likely a useful ingredient in the quest for an ELSV formula for double Hurwitz numbers. Our work is aiming in that direction in a couple different ways. On the one hand we interpolate a family of classes starting from the zero dimensional loci which we understand very well. On the other hand we make a connection with tropical geometry, which is usually well tuned to give information about the deeper boundary strata of the classical moduli spaces of curves.\n\n\n\n\\subsection{Acknowledgements}\nThis work is the result of a two week long \\textit{Research in Pairs} at the Oberwolfach Institute for Mathematics, which the authors thank for its hospitality. The second author was at the intersection of supports by a Simons Collaboration Grant and NSF grant DMS110549. The third author was partially supported by DFG-grant MA 4797\/1-2.\n \n\n\n\\section{Background and Notation}\nIn this section we recall the basic definitions and constructions that are needed for the set-up of the theory. 
\n\\subsection{Moduli Spaces of Curves and Maps}\n\\label{sec:mscm}\n\nWe assume that the reader is familiar with $\\overline{M}_{0,n}$, the moduli space of rational pointed stable curves, a smooth projective variety of dimension $n-3$. The Chow ring of $\\overline{M}_{0,n}$ is generated by irreducible boundary divisors, with the only relations (besides the obvious ones given by empty intersections) generated by the WDVV relations (\\cite{sk:m0n}). Irreducible boundary strata are identified by their dual graph: given a graph $\\Gamma$, we denote the corresponding stratum by $\\Delta_\\Gamma$.\n\nIn \\textit{weighted stable curves} one tweaks the stability of a rational pointed curve $(X= \\cup_j X_j, p_1, \\ldots, p_n)$ by assigning weights $a_i$ to the marked points and requiring the restriction to each $X_j$ of $\\omega_X +\\sum a_ip_i$ to be ample (this amounts to the combinatorial condition that $\\sum_{p_i\\in X_j} a_i + n_j>2$, where $n_j$ is the number of shadows of nodes on the $j$-th component of the normalization of $X$). \n\nWhen two points are given weight $1$ and all other points very small weight, the space $\\overline{M}_{0,2+r}(1^2,\\varepsilon^{r})$ is classically known as the \\textit{Losev-Manin} space {\\cite{lm:lms}}: it parameterizes chains of $\\mathbb{P}^1$'s with the heavy points on the two terminal components and light points (possibly overlapping amongst themselves) in the smooth locus of the chain.\n\nLet $\\mathbf{x}$ be an $n$-tuple of integers adding to $0$, and denote $\\mathbf{x^+}$ (resp. $\\mathbf{x^-}$) the sub-tuple of positive (resp. negative) parts. We consider moduli space of relative stable maps $\\overline{M}_0(\\mathbb{P}^1; \\mathbf{x^+}0, \\mathbf{x^-}\\infty)$ and their ``rubber'' variant $\\overline{M}^\\sim_0(\\mathbb{P}^1; \\mathbf{x^+}0, \\mathbf{x^-}\\infty)$ (see \\cites{gv:rvl,mp:tvgwt}). An important technical detail is we mark the preimages of the relative points. In order to mark some of the simple ramification points on the source curve, we introduce a space of relative weighted stable maps, where in addition to $0$ and $\\infty$ there are $j$ moving simple transposition points that are marked and given weight $\\varepsilon$. These spaces are typically denoted $\\overline{M}^\\sim_0(\\mathbb{P}^1; \\mathbf{x^+}0, \\mathbf{x^-}\\infty, \\varepsilon t_1, \\ldots, \\varepsilon t_j)$. \n\\begin{note}\nIn all our spaces of maps we make the notation lighter by forgetting the target (always $\\mathbb{P}^1$), noting that $\\mathbf{x}$ gives sufficient information to determine the relative points fixed at $0 $ and $\\infty$ and that the additional transposition points are understood to be ``light''. For example we write $\\overline{M}^\\sim_0(\\mathbf{x}, t_1, \\ldots, t_j)$ for $\\overline{M}^\\sim_0(\\mathbb{P}^1; \\mathbf{x^+}0, \\mathbf{x^-}\\infty, \\varepsilon t_1, \\ldots, \\varepsilon t_j)$. \n\\end{note}\n\n\nThere is a natural stabilization map $\\st$ to $\\overline{M}_{0,n}$ that forgets the map and remembers the (marked) points over $0$ and $\\infty$, and a branch map to an appropriate quotient of a Losev-Manin space, recording the position of the $r+2=n$ branch points. \nSince a degree $0$ divisor on $\\mathbb{P}^1$ determines a rational function up to a multiplicative constant, the map $\\st: \\overline{M}^\\sim_0(\\mathbf{x}) \\to \\overline{M}_{0,n}$ is birational. Marking $j$ simple transpositions makes $\\st$ into a degree ${{r}\\choose{j}}$ cover. 
The branch map is a cover of the Losev-Manin space of degree the double Hurwitz number $H_0(\\mathbf{x})$. \n\n\\subsubsection{Multiplicities of boundary strata.}\nBoundary strata in moduli spaces of relative stable maps corresponding to breaking the target are naturally described in terms of products of other moduli spaces of relative stable maps. It is important to keep careful track of various multiplicities coming both from combinatorics of the gluing and infinitesimal automorphisms (see \\cite{gv:rvl}*{Theorem 4.5}). \nLet $S$ be a boundary stratum in $\\overline{M}^\\sim_0(\\mathbf{x})$, parameterizing maps to a chain $T^N$ of $N$ projective lines. $S$ can be seen as the image of:\n$$\n\\gl: \\prod_{i=1}^N \\mathcal{M}^\\bullet_i \\to S \\subset \\overline{M}^\\sim_0(\\mathbf{x}),\n$$\n\nwhere the $\\mathcal{M}^\\bullet_i$ are moduli spaces of possibly disconnected relative stable maps, where the relative condition imposed at the point $\\infty$ in the $i$-th line matches the condition at $0$ in the $(i+1)$-th line. We denote by $\\mathbf{z_i}=(z_i^1, \\ldots, z_i^{r_i})$ such relative condition and by abuse of notation we say it is the relative condition at the $i$-th node of $T^N$. Then,\n\\begin{equation}\n\\label{bound:multi}\n[S] = \\prod_{i=1}^{N-1}\\frac{\\prod_{j=1}^{r_i} z_i^j}{|\\Aut(\\mathbf{z_i})|} \\left[\\gl_\\ast\\left( \\prod_{i=1}^N \\mathcal{M}^\\bullet_i \\right)\\right].\n\\end{equation}\n\nEquation \\eqref{bound:multi} seems horrendous, but it amounts to the following recipe: the general element in $S$ is represented by a map from a nodal curve $X$ to $T^N$, with matching ramification on each side of each node of $X$. The multiplicity $m(S)$ is the product of ramification orders for each node of $X$ divided by the (product of the order of the group of) automorphisms of each partition of the degree prescribing the ramification profile over each node of $T^N$.\n\n\n\n\n\n\\subsection{Tropical Geometry}\n\\label{sec:tg}\nWe assume that the reader is familiar with tropical varieties and tropical cycles in a vector space with a lattice, i.e.\\ with weighted polyhedral complexes (possibly with negative weights in the case of cycles) satisfying the balancing condition around each cell of codimension one.\nThe exact polyhedral complex structure is not important. We do not distinguish between equivalent tropical cycles, i.e.\\ cycles which allow a common refinement respecting the weights. \n\n A rational function\n on a tropical cycle is a continuous function that is\n affine on each cell, and whose linear part is\n integer. To a tropical cycle $X$ and a rational function $\\varphi$, we can associate the divisor $\\varphi\\cdot X$, a tropical subcycle of codimension one supported on\n the subset of $X$ where $\\varphi$ is not linear \\cite{AR07}*{Construction 3.3}.\nWe can also form multiple intersection products $\\varphi_1 \\cdot \\ldots \\cdot\n \\varphi_m \\cdot X$. They are commutative by \\cite{AR07}*{Proposition 3.7}. The weights of cells of intersection products can be computed locally as follows.\n \n\\begin{rem}\\label{rem-intersect}\n Let $h_1, \\ldots, h_m$ be linearly independent integer linear\n functions on $\\mathbb{R}^n$. By $H : \\mathbb{Z}^n \\rightarrow \\mathbb{Z}^m $ we\n denote the linear map given by $x \\mapsto (h_1(x), \\ldots, h_m(x))$. Consider\n the rational functions $ \\varphi_i = \\max\\{h_i,p_i\\}$ on $\\mathbb{R}^n$, where $p_i$ are fixed constants in $\\mathbb{R}$. 
These rational functions give rise\n to an intersection product, which obviously consists of only one cone with \n weight equal to the order of the torsion part of $ \\mathbb{Z}^m \/ \\text{Im}(\n H)$, i.e. the greatest common divisor of\n the absolute values of the maximal minors of $H$ (see e.g.\\ \\cite{MR08}*{Lemma 5.1}). \n \\end{rem}\n\nA morphism between tropical cycles\nis a locally affine linear map, with\nthe linear part induced by a map between the underlying lattices.\n A rational function $\\varphi$ on a tropical cycle $Y$ can be pulled back along a morphism $f:X\\rightarrow Y$\n to the rational function $f^*(\\varphi) = \\varphi \\circ f$ on $X$. Also, we can push forward\n subcycles $Z$ of $X$ to subcycles $f_*(Z)$ of $Y$ \\cite{AR07}*{Proposition\n 4.6 and Corollary 7.4}. \n\\vspace{0.3cm}\n\nWe refer the reader to \\cite{GKM07,Mi07,CJM10} for comprehensive background on moduli spaces of tropical curves and maps. A (marked, rational, abstract) tropical curve is a metric tree $\n\\Gamma $ without 2-valent vertices. Edges leading to 1-valent vertices have infinite length, are marked by the numbers $1,\\ldots,N$ and are called ends. The space of all marked tropical curves with\n$N$ ends is denoted $ \\mathcal{M}_{0,N} $. It can be embedded into $\\mathbb{R}^{\\binom{N}{2}-N}$ using the distance map. It follows from \\cite{SS04a}*{Theorem 3.4},\n\\cite{Mi07}*{Section 2}, or \\cite{GKM07}*{Theorem 3.7} that $\\mathcal{M}_{0,N}$ is a\ntropical variety which is even a fan. All top-dimensional cones have weight one. \nFor a subset $I\\subset\\left[N\\right]$ of cardinality $1<|I|0$, $x_3,x_4,x_5<0$, $x_1>|x_i+x_j|$ for all $i\\not=j\\in\\{3,4,5\\}$ and $x_2<-x_i$ for all $i=3,\\ldots,5$ in $\\mathcal{M}_{0,5}$.\nWe start with the cells in the cone spanned by $v_{12}$ and $v_{34}$ in $\\mathcal{M}_{0,5}$. To fix a combinatorial type of covers such that the corresponding cell of $\\mathbb{H}^{\\trop}_1(\\mathbf{x})$ lives in this cone, we first have to choose a moving vertex among the three vertices of the tree with ends $1$ and $2$ coming together and ends $3$ and $4$ coming together. For each of these three choices, there is one ordering of the remaining vertices which is compatible with the orientation of the edges (see Figure \\ref{fig:m05}). We thus have three cells of $\\mathbb{H}^{\\trop}_1(\\mathbf{x})$ in this cone, two of these cells are constant ends, the other one is an linear edge. Figure \\ref{fig:m052} shows the cone and the three cells.\nBy symmetry, the cones spanned by $v_{12}$ and $v_{35}$ resp.\\ by $v_{12}$ and $v_{45}$ look analogous. Similar arguments also show that all remaining cones except the cones spanned by $v_{23}$ and $v_{45}$, resp.\\ $v_{24}$ and $v_{35}$, resp.\\ $v_{25}$ and $v_{35}$, look alike.\n\n\\end{example}\n\\begin{figure}[tb]\n\\input{m05.pstex_t}\n\\caption{The combinatorial types of the Hurwitz curve in the cone of $\\mathcal{M}_{0,5}$ spanned by $v_{12}$ and $v_{34}$.}\n\\label{fig:m05}\n\\end{figure}\n\n\\begin{figure}[tb]\n\\input{m052.pstex_t}\n\\caption{The Hurwitz curve in the cone of $\\mathcal{M}_{0,5}$ spanned by $v_{12}$ and $v_{34}$. We require the vertex $6$ to be mapped to $0$ as usual and $7$ to $1$.}\n\\label{fig:m052}\n\\end{figure}\n\nIn the cone spanned by $v_{23}$ and $v_{45}$, we have two constant ends when the moving vertex is adjacent to two ends. If the third vertex is moving, there are two orderings of the remaining vertices compatible with the orientation of the edges (see Figure \\ref{fig:m053}). 
Altogether, we get two constant and two linear ends as depicted in Figure \\ref{fig:m054}.\n\n\\begin{figure}[tb]\n\\input{m053.pstex_t}\n\\caption{The combinatorial types of the Hurwitz curve in the cone of $\\mathcal{M}_{0,5}$ spanned by $v_{23}$ and $v_{45}$.}\n\\label{fig:m053}\n\\end{figure}\n\n\\begin{figure}[tb]\n\\input{m054.pstex_t}\n\\caption{The Hurwitz curve in the cone of $\\mathcal{M}_{0,5}$ spanned by $v_{23}$ and $v_{45}$.}\n\\label{fig:m054}\n\\end{figure}\n\n\n\n\n\n\\section{Tropical-Classical Correspondence}\n\\label{sec:tcc}\n\nOur work in Sections \\ref{sec:hl} and \\ref{sec:thl} has highlighted a combinatorial correspondence between classical and tropical Hurwitz cycles. In this section we make a precise statement of such a correspondence, and illustrate it in the example of one dimensional cycles. A satisfactory correspondence should also encode the polynomial multiplicities of strata. In Section \\ref{sec:int} we interpret the (classical) multiplicity of a stratum in the Hurwitz cycle as an intersection multiplicity of the corresponding tropical face with the $k$-dimensional skeleton of $\\mathcal{M}_{0,n}$.\n\\begin{co}\n\\label{co:tropclas}\nThere is a natural bijection between $i$-dimensional faces of $\\mathbb{H}^{\\trop}_k(\\mathbf{x})$ and connected components in $\\tilde\\mathbb{H}_k(\\mathbf{x})$ of the inverse image via $\\st$ of irreducible strata in $\\mathbb{H}_k(\\mathbf{x})$ of dimension $k-i$. Further, incidence of faces on the tropical side corresponds to intersection of strata on the classical side. \n\\end{co}\n\nThe key ingredient here is the correspondence between tropical graphs and boundary strata of moduli spaces of relative stable maps outlined in Lemma \\ref{flatten}. The subtlety to observe is that a cell of the tropical Hurwitz cycle is sensitive to the type of the directed graph parameterized and to the ordering of the fixed vertices, but not to the ordering of moving vertices amongst themselves or with respect to the fixed ones. So two general points $P_1,P_2$ in the same $i$-dimensional tropical cell may correspond to graphs where two adjacent vertices (at least one of which is moving) have switched order, and hence to different $(k-i)$-dimensional boundary strata $\\tilde{\\Delta}_1, \\tilde{\\Delta}_2$ of relative stable maps. The segment joining $P_1$ and $P_2$ contains a point $P$ where the two incriminated vertices map to the same image point: this corresponds to a $(k-i+1)$-dimensional boundary stratum $\\Delta$ that contains $\\tilde{\\Delta}_1$ and $\\tilde{\\Delta}_2$ as specializations. The stratum $\\Delta$ does not belong to $\\tilde{\\mathbb{H}}_k(\\mathbf{x})$, but it does belong to the inverse image via $\\st$ of the Hurwitz cycle. We feel that describing this phenomenon in full generality would only mire us in notational confusion, so we choose to illustrate it in one very specific example.\n\n\n\\subsection{Hurwitz Curves Continued}\n\nConsider the one dimensional cell labelled $I$ in Figure \\ref{fig:m054}, and the corresponding graphs parameterized by points in such cell (these are illustrated in Figure \\ref{fig:m053}). Let $\\ell$ be a coordinate for this cell, corresponding to the length of the segment joining vertices $6$ and $M$. For $\\ell < -\\frac{1}{x_4+x_5}$ (resp. $\\ell > -\\frac{1}{x_4+x_5}$) any point of $I$ is the tropical dual graph to the stratum $\\tilde{\\Delta}_1$ (resp. $\\tilde{\\Delta}_2$) as depicted in Figure \\ref{fig:contract}. 
The point $\\ell = -\\frac{1}{x_4+x_5}$ correspond to the stratum $\\tilde{\\Delta}$, which is a ${\\mathbb{P}^1}$ connecting $\\tilde{\\Delta}_1$ and $\\tilde{\\Delta}_2$ in $\\st^{-1}({\\mathbb{H}_1}(\\mathbf{x}))$.\n\n\\begin{figure}[tb]\n\\input{strata.pstex_t}\n\\caption{Strata of relative stable maps in $\\st^{-1}({\\mathbb{H}_1}(\\mathbf{x}))$.}\n\\label{fig:contract}\n\\end{figure}\n\n\\subsection{The Hurwitz cycle \nintersecting the codimension $k$-skeleton of $\\mathcal{M}_{0,n}$}\n\n\\label{sec:int}\n\n\n\nIn this section we realise the (classical) multiplicities of the strata in the Hurwitz cycle as intersection numbers of the tropical Hurwitz cycle with the codimension $k$-skeleton of $\\mathcal{M}_{0,n}$. To do so we must view each codimension $k$ cell as a part of an intersection product of divisors. \nWe recall the boundary divisors $D_I$ that ``play well'' in the intersection theory of $ \\mathcal{M}_{0,n}$ are defined as divisors of appropriate rational functions \\cite{Rau08}*{Definition 2.4}. Each $D_I$ is a linear combination of codimension one cells of $ \\mathcal{M}_{0,n}$ with appropriate weights. In the following lemma we describe some intersection products of these boundary divisors in term of the tropical curves they parameterize.\n\\begin{lemma}\\label{lem:divintersect}\nThe intersection of the tropical boundary divisors $ D_{12}\\cdot D_{123}\\cdot \\ldots \\cdot D_{1...j}$ for some $j\\geq 2$ in $\\mathcal{M}_{0,m}$ consists of all cones corresponding to a type with the ends $1,\\ldots,j$ at\n\\begin{itemize}\n \\item a $j+2$-valent vertex and only $3$-valent vertices otherwise with weight one,\n\\item a $j+1$-valent vertex adjacent to a $4$-valent vertex and only $3$-valent vertices otherwise with weight $-1$.\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\nWe show this by induction. The induction beginning is \\cite{Rau08}*{Lemma 2.5}. For the induction step, assume that the statement is true for $j-1$. \nThe codimension $j$ cones in $ D_{12}\\cdot D_{123}\\cdot \\ldots \\cdot D_{1...j-1}$ then consist of all cones corresponding to a type with the ends $1,\\ldots,j-1$ at\n\\begin{itemize}\n \\item a $j+2$-valent vertex and only $3$-valent vertices otherwise,\n\\item a $j+1$-valent vertex and one $4$-valent vertex,\n\\item a $j$-valent vertex adjacent to a $5$-valent vertex,\n\\item a $j$-valent vertex adjacent to a $4$-valent vertex and one other $4$-valent vertex.\n\\end{itemize}\nTo obtain the coefficients of such cones in the product $ D_{12}\\cdot D_{123}\\cdot \\ldots \\cdot D_{1...j}$ we must compute the intersection with $\\varphi_{1...j}$ around each of these cones.\n\nIn the first case, the neighbors of $ D_{12}\\cdot D_{123}\\cdot \\ldots \\cdot D_{1...j-1}$ are the resolutions of the $j+2$-valent vertex in a $j+1$-valent vertex with $1,\\ldots,j-1$ adjacent. Each such neighbor is spanned by a vector corresponding to this resolution. Since only the vector $v_{1...j}$ is mapped to one by $\\varphi_{1...j}$, we get a contribution of one for the weight of the codimension one cone only if also $j$ is adjacent to the $j+2$-valent vertex. The weight then equals one.\n\nIn the second case, we can resolve the $j+1$-valent vertex, but none of the resolutions is spanned by the vector $v_{1..j}$. We can also resolve the $4$-valent vertex. Again, none of the resolutions is spanned by the vector $v_{1...j}$, however, this vector is contained in the codimension one cone if also $j$ is adjacent to the $j+1$-valent vertex. 
If in addition the $4$-valent vertex is adjacent to the $j+1$-valent vertex, the sum of the vectors spanning its three resolutions contains $v_{1...j}$ as a summand: if we denote the four edges adjacent to the $4$-valent vertex by $e_1,\\ldots,e_4$ and assume that the subset of ends that can be reached via $e_i$ from the $4$-valent vertex is $A_i$, then the three resolutions are spanned by $v_{A_1\\cup A_2}$, $v_{A_1\\cup A_3}$ and $v_{A_2\\cup A_3}$ and their sum satisfies $v_{A_1\\cup A_2}+v_{A_1\\cup A_3}+v_{A_2\\cup A_3}=v_{A_1}+v_{A_2}+v_{A_3}+v_{A_1\\cup A_2 \\cup A_3}= v_{A_1}+v_{A_2}+v_{A_3}+v_{A_4}$ by \\cite{KM07}*{Lemma 2.6}. This yields a contribution of minus one for the weight of this cone.\n\nIn the third and fourth case, neither the codimension one cone itself nor any of its neighbors contains the vector $v_{1...j}$. Therefore it gets weight zero. \nThe claim follows.\n\\end{proof}\n\n\\begin{rem}\\label{rem:coneintersect}\nThe important consequence of this lemma is the statement that a cone with a $j+2$-valent vertex adjacent to the ends $1,\\ldots,j$ appears with weight one in the intersection $ D_{12}\\cdot D_{123}\\cdot \\ldots \\cdot D_{1...j}$. It is straight-forward to generalize this statement to a cone $C$ with an arbitrary $j+2$-valent vertex $V$, adjacent to the edges $e_1,\\ldots,e_{j+2}$.\nWe denote by $A_i$ the subset of ends that can be reached from $V$ via $e_i$. Then $C$ appears with weight one in the intersection $ D_{A_1\\cup A_2}\\cdot D_{A_1\\cup A_2 \\cup A_3}\\cdot \\ldots \\cdot D_{A_1\\cup \\ldots\\cup A_{j-1}}$.\nAlso, to cut out a cone with several higher-valent vertices, we can combine several such intersection products.\n\\end{rem}\n\n\n\n\\begin{lemma}\\label{lem:pullbackintersect}\n The intersection $\\Psi_\\alpha\\cdot \\ft_\\alpha^\\ast( D_{12})\\cdot\\ft_\\alpha^\\ast( D_{123})\\cdot \\ldots \\cdot \\ft_\\alpha^\\ast(D_{1...j})$ for some $j\\geq 2$ in $\\mathcal{M}_{0,m+1}$ consists of all cones corresponding to a type with the ends $1,\\ldots,j$ \n\\begin{itemize}\n \\item and $\\alpha$ at a $j+3$-valent vertex with weight $j$, \n\\item at a $j+2$-valent vertex, and $\\alpha$ adjacent to some other vertex with weight $1$,\n\\item and $\\alpha$ at a $j+2$-valent vertex adjacent to a $4$-valent vertex with weight $-(j-1)$,\n\\item at a $j+1$-valent vertex adjacent to a $5$-valent vertex with $\\alpha$ with weight $-2$,\n\\item at a $j+1$-valent vertex adjacent to a $4$-valent vertex and $\\alpha$ adjacent to some other vertex with weight $-1$.\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n The proof is again by induction. For $j=2$, we intersect $\\Psi_\\alpha$ with $\\ft_\\alpha^\\ast(\\varphi_{12})=\\varphi_{12}\\circ \\ft_\\alpha$. This map sends the vector $v_{12}$ and the vector $v_{12\\alpha}$ to one and all other $v_i$ to zero. Codimension one cones of $\\Psi_\\alpha$ either have a $5$-valent vertex with $\\alpha$ or a $4$-valent vertex with $\\alpha$ and another $4$-valent vertex. In the first case, if $1$ and $2$ are also adjacent to the $5$-valent vertex, two of the six neighbors are spanned by vectors mapping to one, so we get weight $2$. In the second case, the vector $v_{12\\alpha}$ is contained in the codimension one cone itself and appears in the sum of the vectors spanning the neighbors if $1$ and $2$ are adjacent to the vertex with $\\alpha$ and the other $4$-valent vertex is adjacent. Such a cone then comes with weight minus one.\n\nFor the induction step, assume the statement is true for $j-1$. 
The codimension one cones of $\\Psi_\\alpha\\cdot \\ft_\\alpha^\\ast( D_{12})\\cdot\\ft_\\alpha^\\ast( D_{123})\\cdot \\ldots \\cdot \\ft_\\alpha^\\ast(D_{1...j})$ then consist of all cones corresponding to a type with the ends $1,\\ldots,j-1$ \n\\begin{enumerate}\n \\item and $\\alpha$ at a $j+3$-valent vertex, \n\\item and $\\alpha$ at a $j+2$-valent vertex and one $4$-valent vertex,\n\\item at a $j+2$-valent vertex, $\\alpha$ somewhere else ,\n\\item at a $j+1$-valent vertex and a $5$-valent vertex with $\\alpha$,\n\\item at a $j+1$-valent vertex and a $4$-valent vertex, $\\alpha$ somewhere else,\n\\item and $\\alpha$ at a $j+1$-valent vertex, next to a $5$-valent vertex,\n\\item and $\\alpha$ at a $j+1$-valent vertex next to a $4$-valent vertex and another $4$-valent vertex,\n\\item at a $j$-valent vertex next to a $6$-valent vertex with $\\alpha$,\n\\item at a $j$-valent vertex next to a $5$-valent vertex with $\\alpha$ and another $4$-valent vertex,\n\\item at a $j$-valent vertex next to a $5$-valent vertex and $\\alpha$ somewhere else,\n\\item at a $j$-valent vertex next to a $4$-valent vertex, and another $4$-valent vertex and $\\alpha$ somewhere else. \n\\end{enumerate}\nFor the cases (6)-(11) it is easy to see that neither the codimension one cone itself nor any of its neighbors contains the vectors $v_{1..j}$ or $v_{1..j\\alpha}$ which are mapped to one by $\\ft_\\alpha^\\ast(\\varphi_{1..j})=\\varphi_{1..j}\\circ \\ft_\\alpha$. Thus any of these cones is taken with weight zero.\n\nIn the first case, if $j$ is also adjacent to the $j+3$-valent vertex, we have a neighbor spanned by $v_{1...j\\alpha}$ with weight $j-1$, and a neighbor spanned by $v_{1...j}$ with weight one. Altogether the weight is $j$.\n\nIn the second case, if $j$ is also adjacent to the $j+2$-valent vertex the vector $v_{1...j\\alpha}$ is contained in the codimension one cone itself. If in addition the $4$-valent vertex is adjacent to the $j+2$-valent vertex, the vector $v_{1...j\\alpha}$ appears in the sum of the three neighbors corresponding to the resolutions of the $4$-valent vertex. Since any such resolution comes with weight $j-1$, this codimension one cone has weight $-(j-1)$.\n\nIn the third case, if $j$ is also adjacent to the $j+2$-valent vertex, we have one neighbor spanned by $v_{1...j}$ with weight one, so we also get weight one for this cone.\n\nIn the fourth case, none of the resolutions of the $j+1$-valent vertex contains the vectors $v_{1...j}$ or $v_{1...j\\alpha}$.\n The vector $v_{1...j}$ is contained in the cone itself however if $j$ is also adjacent to the $j+1$-valent vertex. We can also resolve the $5$-valent with $\\alpha$ in such a way that $\\alpha$ is still at a $4$-valent vertex. All these six neighbors have weight one. Denote the four edges not equal to the end $\\alpha$ but adjacent to the $5$-valent vertex by $e_1,\\ldots,e_4$ and denote by $A_i$ the subset of ends that can be reached from the $5$-valent vertex via $e_i$. Then the six neighbors are spanned by the vectors $v_{A_1\\cup A_2}$, $v_{A_1\\cup A_3}$, $v_{A_2\\cup A_3}$, $v_{A_1\\cup A_2\\cup\\{\\alpha\\}}$, $v_{A_1\\cup A_3\\cup\\{\\alpha\\}}$ and $v_{A_2\\cup A_3\\cup\\{\\alpha\\}}$ whose sum equals $2v_{A_1}+2v_{A_2}+2v_{A_3}+2\\cdot v_{A_1\\cup A_2\\cup A_3\\cup\\{\\alpha\\}}=2v_{A_1}+2v_{A_2}+2v_{A_3}+2\\cdot v_{A_4}$ by \\cite{KM07}{Lemma 2.6}. 
Thus we get weight $-2$ if and only if $A_i=\\{1,\\ldots,j\\}$ for $i=1,2,3$ or $4$ which is the case if and only if the $5$-valent vertex with $\\alpha$ is adjacent to the $j+1$-valent vertex with $1,\\ldots,j$.\n\nThe fifth case is analogous to the second case of Lemma \\ref{lem:divintersect}. All neighbors have weight one. Therefore we get weight minus one if the $4$-valent vertex is adjacent to the $j+1$-valent vertex and $j$ is adjacent to the $j+1$-valent vertex. The claim follows.\n\\end{proof}\n\n\n Now we intersect $\\mathbb{H}^{\\trop}_k(\\mathbf{x})$ with the codimension $k$-skeleton of $\\mathcal{M}_{0,n}$. Let $K$ denote a cone of the codimension $k$-skeleton. It corresponds to a combinatorial type of a tree $\\Gamma$ with $r-k$ vertices $V_1,\\ldots,V_{r-k}$ of valence $\\val(V_i)=k_i$ with $\\sum (k_i-3)=k$. \nRemark \\ref{rem:coneintersect} tells us how to pick functions $\\varphi_1,\\ldots,\\varphi_k$ that cut out $K$ with weight one. Thus we want to compute \n$\\mathbb{H}^{\\trop}_k(\\mathbf{x})\\cdot\\varphi_1\\cdot\\ldots\\cdot\\varphi_k $ locally around $K$. We have\n\\begin{align*}\n& \\mathbb{H}^{\\trop}_k(\\mathbf{x})\\cdot\\varphi_1\\cdot\\ldots\\cdot\\varphi_k = \\\\\n& \\ft_{\\ast}\\Big(\\Psi_{n+1}\\cdot \\prod_{i=n+2}^{n+r-k} \\big(\\Psi_i \\cdot \\ev_i^{\\ast}(p_i) \\big) \\Big) \\cdot\\varphi_1\\cdot\\ldots\\cdot\\varphi_k =\\\\\n& \\ft_{\\ast}\\Big( \\Psi_{n+1}\\cdot \\prod_{i=n+2}^{n+r-k} \\big(\\Psi_i \\cdot \\ev_i^{\\ast}(p_i) \\big) \\cdot\\ft^\\ast(\\varphi_1)\\cdot\\ldots\\cdot\\ft^\\ast(\\varphi_k) \\Big) =\\\\\n& \\ft_{\\ast}\\Big( \\prod_{i=n+1}^{n+r-k}\\Psi_{i} \\cdot\\ft^\\ast(\\varphi_1)\\cdot\\ldots\\cdot\\ft^\\ast(\\varphi_k) \\cdot \\prod_{i=n+2}^{n+r-k} \\ev_i^{\\ast}(p_i) \\Big)\n\\end{align*}\nwhere the second equality holds by the projection formula \\cite{AR07}*{Proposition 4.8}.\n\nTo get a nonzero intersection of $\\prod_{i=n+1}^{n+r-k}\\Psi_{i} \\cdot\\ft^\\ast(\\varphi_1)\\cdot\\ldots\\cdot\\ft^\\ast(\\varphi_k) $ with the cycle $ \\prod_{i=n+2}^{n+r-k} \\ev_i^{\\ast}(p_i) $, the ends $n+1,\\ldots, n+r-k$ must be adjacent to different vertices. The type $\\Gamma$ corresponding to $K$ has $r-k$ vertices, and so we can attach one new end to each vertex of $\\Gamma$. There are $m(\\Gamma)$ ways to do this, where we use the notation from Definition \\ref{def:weights} (we do not need to pick moving vertices here). Hence $\\mathbb{H}^{\\trop}_k(\\mathbf{x})$ intersects $K$ in $m(\\Gamma)$ points with a nonzero weight.\nEach such point is the push-forward of an intersection point of $ \\prod_{i=n+2}^{n+r-k} \\ev_i^{\\ast}(p_i) $ with a cone $\\tilde{K}$ of $\\prod_{i=n+1}^{n+r-k}\\Psi_{i}$ where all ends are adjacent to different vertices. To compute the weight of $\\tilde{K}$ in $\\prod_{i=n+1}^{n+r-k}\\Psi_{i} \\cdot\\ft^\\ast(\\varphi_1)\\cdot\\ldots\\cdot\\ft^\\ast(\\varphi_k) $, we use a generalization of Lemma \\ref{lem:pullbackintersect} analogous to Remark \\ref{rem:coneintersect}: in $\\tilde{K}$, we have $r-k$ vertices $V_i$ of valence $k_i+1$ each of which is adjacent to an end with a Psi-class condition. We thus get weight $\\prod (k_i-2)$. 
From Lemma \\ref{lem-evproduct}, the further intersection with $ \\prod_{i=n+2}^{n+r-k} \\ev_i^{\\ast}(p_i) $ yields a factor equal to the product of weights of all bounded edges.\nWe have thus proved the following statement that again illustrates the analogy between classical and tropical Hurwitz loci (compare with Lemma \\ref{coeff}):\n\n\\begin{prop}\n Let $K$ be a cone of $\\mathcal{M}_{0,N}$ of codimension $k$, corresponding to the type $\\Gamma$.\nThen the intersection $\\mathbb{H}^{\\trop}_k(\\mathbf{x})\\cdot K$ consists of $m(\\Gamma)$ points, each with weight $\\prod_v (\\val(v)-2)\\cdot \\varphi(\\Gamma)$\nwhere the product goes over all vertices $v$ of $\\Gamma$.\n\\end{prop}\n\n\n\n\\section{Wall Crossings}\n\n\n\nWe have seen that Hurwitz cycles are polynomials in each chamber $\\mathfrak{c}$. In this section we investigate wall-crossings, i.e. how the cycles change from chamber to chamber. \n\\begin{defn}\nLet $I\\subseteq \\{1, \\ldots, n\\}$ and consider the wall $W_I=\\{\\sum_{i\\in I}x_i =0\\} $. Let $\\mathfrak{c}^+$ and $\\mathfrak{c}^-$ be two adjacent chambers: $\\sum_{i\\in I}x_i >0$ in $\\mathfrak{c}^+$, $\\sum_{i\\in I}x_i < 0$ in $\\mathfrak{c}^-$ and for every $J\\not=I \\subseteq \\{1, \\ldots, n\\}$ the sign of $ \\sum_{i\\in J}x_i$ is the same in both chambers. Let $\\mathbb{H}_k^+(\\mathbf{x})$ (resp. $\\mathbb{H}_k^-(\\mathbf{x})$) be the polynomial class giving the Hurwitz cycle in $\\mathfrak{c}^+$(resp. $\\mathfrak{c}^-$). By {wall crossing formula} at the wall $I$ we mean the formal difference of cycles:\n\\begin{equation}\nWC_{I,k}(\\mathbf{x}):= \\mathbb{H}_k^+(\\mathbf{x}) -\\mathbb{H}_k^-(\\mathbf{x}) \\in Z_k(\\overline{M}_{0,n}).\n\\end{equation}\n\n\\end{defn}\nNaturally the difference of two polynomial cycles is a polynomial cycle: the upshot is that such a cycle can be expressed inductively in terms of Hurwitz cycles.\n\n\\begin{note} Let $\\mathbf{x}$ and $\\mathbf{y}$ be two tuples of integers such that $\\sum{x_i}=-\\sum{y_j}=\\epsilon\\not=0$. Consider the Hurwitz cycles $\\mathbb{H}_{k_1}(\\mathbf{x},-\\epsilon)\\in \\overline{M}_{0,n_1+1}$ and $\\mathbb{H}_{k_2}(\\mathbf{y},\\epsilon)\\in \\overline{M}_{0,n_2+1}$ and the gluing morphism \n$\\gl:\\overline{M}_{0,n_1+1}\\times\\overline{M}_{0,n_2+1}\\to \\overline{M}_{0,n_1+n_2}$. We denote:\n$$\n\\mathbb{H}_{k_1}(\\mathbf{x},-\\epsilon)\\boxtimes\\mathbb{H}_{k_2}(\\mathbf{y},\\epsilon):= \\gl_\\ast\\left( \\mathbb{H}_{k_1}(\\mathbf{x},-\\epsilon)\\times\\mathbb{H}_{k_2}(\\mathbf{y},\\epsilon)\\right) \\in Z_{k_1+k_2}( \\overline{M}_{0,n_1+n_2}).\n$$\n\\end{note}\n\nWith this notation in place we are ready to state the wall crossing formulas.\n\n\\begin{thm}[Classical Wall Crossing]\n\\label{thm:wc}\nLet $I\\subseteq \\{1, \\ldots, n\\}$ and consider the wall $W_I=\\{\\epsilon:=\\sum_{i\\in I}x_i =0\\} $. Then:\\begin{equation}\n\\label{eq:wc}\nWC_{I,k}(\\mathbf{x})= \\epsilon \\sum_{j=\\max\\{0,1+k-r_2\\}}^{\\min\\{k,r_1-1\\}} {{r-k}\\choose{|I|-1-j}}\\mathbb{H}_j(\\mathbf{x}_I,-\\epsilon)\\boxtimes\\mathbb{H}_{k-j}(\\mathbf{x}_{I^c},\\epsilon)\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nThe proof of Theorem \\ref{thm:wc} is parallel to \\cite{CJM10}*{Theorem 6.10}.\nWe first remark that the bounds of the summation are simply recording the fact that $j$ (resp. $k-j$) must be less than or equal that the dimension of $\\overline{M}_0^{\\sim}(\\mathbf{x}_I,-\\epsilon)$ (resp. $\\overline{M}_0^{\\sim}(\\mathbf{x}_{I^c},\\epsilon)$). 
One may make the summation simply from $0$ to $k$ by noting that the Hurwitz loci of dimension greater than the corresponding moduli spaces of maps are empty.\n\nHurwitz cycles are completely described by the tropical dual graphs of the boundary strata in the moduli spaces of maps. In order for a tropical dual graph to contribute to the wall crossing, it must have an edge with weight equal to the equation of the wall. For a given tropical dual graph $\\Gamma$, if such an edge exists, then it is unique and we call it the special edge. \n\nCutting the special edge separates the graph into two subtrees $\\Gamma_I$ and $\\Gamma_{I^c}$. \nThe ends of $\\Gamma_I$ (resp. $\\Gamma_{I^c}$) are labelled by $x_i\\in I$ and $-\\epsilon$ (resp. $x_i\\in I^c$ and $\\epsilon$). We note immediately that $\\Gamma_I,\\Gamma_{I^c}$ are a pair of graphs identifying a boundary stratum appearing in the product of Hurwitz cycles on the right hand side of formula \\eqref{eq:wc}. We make this connection more precise in order to extract quantitative information.\n\nFor $0\\leq j\\leq k$, let\n$\nR_j=\\left\\{ \\left(\\Gamma_1, \\Gamma_2, \\mathfrak{m} \\right)\\right\\},\n$\nwhere:\n\\begin{itemize}\n\\item $\\Gamma_1$ is the tropical dual graph of a stratum in $\\tilde{\\mathbb{H}}_j(\\mathbf{x}_I,-\\epsilon)$ (pushing forward non-trivially to $\\overline{M}_{0,n}$).\n\\item $\\Gamma_2$ is the tropical dual graph of a stratum in $\\tilde{\\mathbb{H}}_{k-j}(\\mathbf{x}_{I^c},\\epsilon)$ (pushing forward non-trivially to $\\overline{M}_{0,n}$).\n\\item $\\mathfrak{m}$ is a total ordering of the vertices of $\\Gamma_1 \\cup \\Gamma_2$, compatible with the total ordering of the vertices of $\\Gamma_1$ and $\\Gamma_2$.\n\\end{itemize}\n\nCutting the special edge gives a function $\\Cut$ from the set of graphs contributing to the wall crossing formula to the union $R= \\cup_{j=0}^k R_j$. We claim that $\\Cut$ is a bijection, and it will come as little surprise that the inverse function $\\Glue$ consists in gluing the two graphs along the special edge labelled $\\pm \\epsilon$. The total ordering $\\mathfrak{m}$ is precisely the information needed to make such gluing well defined. We note in particular that $\\mathfrak{m}$ determines in which direction the special edge is pointing once it is glued.\n\nGiven a graph $\\Gamma$ contributing to the wall crossing, we note that the multiplicity it contributes to the wall crossing formulas is $\\epsilon$ times the product of all weights of all non-special internal edges (this is obvious if $\\Gamma$ comes from $\\mathbb{H}_k^+(\\mathbf{x})$. If $\\Gamma$ comes from $\\mathbb{H}_k^-(\\mathbf{x})$, then the weight of the special edge is $-\\epsilon$, and there is another minus sign coming from the wall crossing formula). On the other hand the pair of graphs $\\Gamma_1$ and $\\Gamma_2$ in $\\Cut(\\Gamma)$ have multiplicity in $\\mathbb{H}_j(\\mathbf{x}_I,-\\epsilon)\\boxtimes\\mathbb{H}_{k-j}(\\mathbf{x}_{I^c},\\epsilon)$ equal to the product of all non-special internal edges of $\\Gamma$. Therefore $\\Cut$ is a bijection that preserves the multiplicities on both sides of formula \\eqref{eq:wc}.\n\n The proof is then concluded by remarking that if $\\Gamma_1, \\Gamma_2$ appear in $R_j$, there are ${{r-k}\\choose{|I|-1-j}}$ possible ways of giving a total ordering of the vertices of $\\Gamma_1 \\cup \\Gamma_2$, compatible with the total ordering of the vertices of $\\Gamma_1$ and $\\Gamma_2$. 
\n\\end{proof}\n\nThe wall crossing formula on the tropical side is similar: the only apparent difference is the lack of the multiplicative factor $\\epsilon$, reflecting the fact that polynomiality does not appear in the generic representative of a tropical Hurwitz cycle.\nDifferently from the classical side, however, it is not only the weights that depend on $\\mathbf{x}$ but the cycles themselves, making even the statement of a wall crossing formula more subtle.\n\nFix a wall $W_I$ and two adjacent chambers $\\mathfrak{c}^+$ and $\\mathfrak{c}^-$ with $\\epsilon:=\\sum_{i\\in I}x_i >0$ in $\\mathfrak{c}^+$, $\\epsilon < 0$ in $\\mathfrak{c}^-$. We denote by $\\mathbb{H}^{\\trop,+}_k(\\mathbf{x})$ resp.\\ $\\mathbb{H}_k^{\\trop,-}(\\mathbf{x})$ the Hurwitz cycles in the two chambers.\nWe would like to evaluate both $\\mathbb{H}^{\\trop,+}_k(\\mathbf{x})$ and $\\mathbb{H}_k^{\\trop,-}(\\mathbf{x})$ at $\\mathbf{x}\\in \\mathfrak{c}^+$ and then consider the difference. However, the fact that $\\epsilon$ changes sign when crossing the wall requires some interpretation, since the edge lengths of tropical covers\nare required to be positive. \nConsider a point in $\\mathbb{H}^{\\trop,-}_k(\\mathbf{x})$ corresponding to a tropical cover with an edge of weight $-\\epsilon$ connecting two vertices that are mapped to two points $p<q$ of $\\TP$. Since $-\\epsilon$ becomes negative at $\\mathbf{x}\\in \\mathfrak{c}^+$, we interpret such an edge as an edge of weight $\\epsilon$ pointing in the opposite direction, i.e.\\ with the images of its two vertices interchanged.\n\n\\begin{example}\nConsider one-dimensional tropical Hurwitz cycles in $\\mathcal{M}_{0,5}$, for the chamber given by $x_1,x_2>0$, $x_3,x_4,x_5<0$, $x_1>|x_i+x_j|$ for $i,j \\in\\{3,4,5\\}$ and $x_2<-x_i$ for $i=3,4,5$. We now cross the wall $\\epsilon:=x_1+x_4+x_5=0$. \n\n$\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) $ and $ \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$ only differ in cones whose corresponding type contains an edge with weight $\\pm\\epsilon$, i.e.\\ in any cone containing the vector $v_{23}$. There are three such cones. For two of these cones, the Hurwitz curve $\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) $ looks as depicted in Figure \\ref{fig:m052} and for the third cone as in Figure \\ref{fig:m054}. In $ \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$, any edge with weight $-\\epsilon$ changes direction. In the cone spanned by $v_{23}$ and $v_{14}$, if we choose the vertex adjacent to end $5$ to be the moving vertex, we now have two ways to order the remaining vertices, both corresponding to linear ends. Thus one linear edge is replaced by two linear ends in this cone. The same is true for the cone spanned by $v_{23}$ and $v_{15}$ by symmetry. In the cone spanned by $v_{23}$ and $v_{45}$ contrarily, if the moving vertex is adjacent to end $1$, we have only one way to order the remaining vertices corresponding to a linear edge instead of the two linear ends (see Figures \\ref{fig:m053} and \\ref{fig:m054}). \nNote also that the constant ends of direction $v_{23}$ do not have a factor of $\\epsilon$ in their weight, thus they appear with the same sign both in $\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) $ and $ \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$ and cancel in the difference. The other constant ends have weight $\\epsilon$ in $\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) $ but weight $-\\epsilon$ in $ \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$, so they do not cancel but add up to a constant end with weight $2\\epsilon$. Figure \\ref{fig:wc} depicts the tropical wall crossing curve for these two chambers. Blue edges are edges of $\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) $, red edges are edges of $ \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$. The green constant ends appear in both and add up to the weight $2\\epsilon$. 
The picture only shows the three cones of $\\mathcal{M}_{0,5}$ in which the difference $\\mathbb{H}^{\\trop,+}_1(\\mathbf{x}) - \\mathbb{H}^{\\trop,-}_1(\\mathbf{x})$ is nonzero.\n\n\n\\begin{figure}[tb]\n\\input{wc.pstex_t}\n\\caption{The wall crossing curve in $\\mathcal{M}_{0,5}$ for $\\mathfrak{c}^+$ and $\\mathfrak{c}^-$.}\n\\label{fig:wc}\n\\end{figure}\n\n\\end{example}\n\n\nAs in the classical world, we want to describe a tropical wall crossing in terms of cutting and regluing. \n Any cell contributing to the wall crossing parameterizes graphs with a special edge that we may cut to obtain two subgraphs each of which is a tropical cover of the projective line. Remembering the length of the edge that gets cut accounts for the fact that we want to mod out by translations only once. We make this process precise in the following paragraph.\n\n\n\\begin{construction}\n \nLet $I\\subseteq \\{1, \\ldots, n\\}$ and consider the wall $W_I=\\{\\epsilon:=\\sum_{i\\in I}x_i =0\\} $. \nAssume that $|I|=n_1$ and $|I^c|=n_2$ and denote $r_i=n_i-1$ for $i=1,2$.\nLet $\\Gamma$ be a tropical cover in a $k$-dimensional cell that contributes to the wall crossing, and hence contains an edge $e$ with weight $\\pm\\epsilon$. Cutting $e$ we obtain two subgraphs $\\Gamma_1$ and $\\Gamma_2$ that are themselves tropical covers of the projective line. We assume that $\\Gamma_1$ contains the ends in $I$ and $\\Gamma_2$ contains the ends in $I^c$. Both $\\Gamma_1$ and $\\Gamma_2$ have an extra end that we denote by $E_1$ (resp.\\ $E_2$). According to our conventions ends are oriented inward, and hence the balancing condition gives $E_1$ weight $-\\epsilon$ and $E_2$ weight $\\epsilon$. \nAssuming that $r_1-j$ fixed vertices are in $\\Gamma_1$, we can interpret $\\Gamma_1$ as an element in $\\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))$ and $\\Gamma_2$ as an element in $\\mathcal{M}_{0,r_2-(k-j)}(\\TP,(\\mathbf{x}_{I^c},\\epsilon))$ (adjusting the labeling of the ends). We wish to remember the length of the edge we cut and whether $\\Gamma$ belonged to $\\mathbb{H}^{\\trop,+}_k(\\mathbf{x}) $ or $\\mathbb{H}^{\\trop,-}_k(\\mathbf{x}) $, so we define:\n$\n\\Cut (\\Gamma) \\in \\left(\\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))\\times {\\mathbb R}\\right)\\times \\left(\\mathcal{M}_{0,r_2-(k-j)}(\\TP,(\\mathbf{x}_{I^c},\\epsilon))\\times {\\mathbb R}\\right)\n$\nby \n\n\\begin{equation} \n\\label{eq:cut}\n\\Cut(\\Gamma) =\\begin{cases} \\left((\\Gamma_1, 0) , (\\Gamma_2, l(e))\\right) & \\Gamma \\in \\mathbb{H}^{\\trop,+}_k(\\mathbf{x}) \\\\ \\left((\\Gamma_1, 0) , (\\Gamma_2, -l(e))\\right) & \\Gamma \\in \\mathbb{H}^{\\trop,-}_k(\\mathbf{x}). \\end{cases}\n\\end{equation}\n\n\n\n\n\\begin{rem} \nOur choice to have two ${\\mathbb R}$ coordinates and to assign one of them to be $0$ seems, and in fact is, somewhat arbitrary. However, it will be handy when comparing weights of the same cells appearing on opposite sides of the wall crossing formula.\n\\end{rem}\n\\begin{rem}\nIn order for equation \\eqref{eq:cut} to make sense we need to drop Convention \\ref{conv} (see page \\pageref{conv}) and remember that tropical covers are equivalent up to translation. 
We therefore use one of the vertices in each of the subgraphs to fix a parameterization of $\\TP$.\n\\end{rem}\n\nWe reverse this operation to glue two graphs in $\\left(\\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))\\times {\\mathbb R}\\right) \\times\\left(\\mathcal{M}_{0,r_2-(k-j)}(\\TP,(\\mathbf{x}_{I^c},\\epsilon))\\times {\\mathbb R}\\right)$. Denote by $l_i$ the ${\\mathbb R}$ coordinate function for the $i$-th factor in the product.\nIf $V_i$ is the interior vertex of $\\Gamma_i$ adjacent to $E_i$, then define $\\ev_{E_i}: \\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))\\times {\\mathbb R}\\to {\\mathbb R}$ by $$\\ev_{E_i}(\\Gamma_i, l_i)=\\ev_{V_i}+(-1)^il_i\\cdot \\epsilon. $$\n\n\n\nWe can glue any two pieces in $(\\ev_{E_1}-\\ev_{E_2})^\\ast(0)\\cdot (\\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))\\times {\\mathbb R}) \\times (\\mathcal{M}_{0,r_2-(k-j)}(\\TP,(\\mathbf{x}_{I^c},\\epsilon))\\times {\\mathbb R}) $ to one tropical cover in $\\mathcal{M}_{0,r-k}(\\TP,\\mathbf{x})$. To make this operation the inverse to $\\Cut$ defined above, we further impose $l_1=0$. \n\n\nThe above discussion shows that \n\\begin{align*}\\mathcal{G} := (\\ev_{E_1}-\\ev_{E_2})^\\ast(0)\\cdot l_1^\\ast(0) \\cdot \\Big( &(\\mathcal{M}_{0,r_1-j}(\\TP,(\\mathbf{x}_I,-\\epsilon))\\times {\\mathbb R}) \\times \\\\ & (\\mathcal{M}_{0,r_2-(k-j)}(\\TP,(\\mathbf{x}_{I^c},\\epsilon))\\times {\\mathbb R}) \\Big)\\end{align*}\n is in bijection to the set of covers in $\\mathcal{M}_{0,r-k}(\\TP,\\mathbf{x})$ contributing to the wall crossing \n(see \\cite{GMO} for a similar gluing construction for moduli spaces). \n\nWe now define the folding map $\\fold:\\mathcal{G} \\rightarrow \\mathcal{M}_{0,r-k}(\\TP,\\mathbf{x})$ that maps $l_2$ to its absolute value.\nThe folding map is not globally a tropical morphism, it is only locally a morphism away from $\\mathcal{G}\\cdot l_2^\\ast(0)$.\nConsequently, while $\\mathcal{G}$\nis a tropical variety, its image under $\\fold$ is not. Since the wall crossing curve is also not a tropical variety however, this is not disturbing.\n\nTo make the image of $\\fold$ a weighted polyhedral complex, we give each cell the sum of the weights of its preimages (cells are subdivided by \n$\\mathcal{G}\\cdot l_2^\\ast(0))$. This coincides with the weight of the push forward of $\\fold$ locally where it is a morphism.\n\\end{construction}\n\n\n\nWe now state the first version of the tropical wall crossing. \n \\begin{prop}[Tropical Wall Crossing, first version]\nLet $I\\subseteq \\{1, \\ldots, n\\}$ and consider the wall $W_I=\\{\\epsilon:=\\sum_{i\\in I}x_i =0\\} $. Then:\n\n\n\n\n\n\n\n\\begin{equation}\\hspace{-1cm}\n\\begin{array}{cl}\nWC^{\\trop}_{I,k}(\\mathbf{x}) &= \\ft_{\\ast} \\bigg( \\sum_{j=\\max\\{0,1+k-r_2\\}}^{\\min\\{k,r_1-1\\}} \\;\\;\\;\\; \\sum_{n+1 \\leq i_1<\\ldots1$) \\textit{can} reach the Stokes-flow limit, so there is an inertially limited viscous to Stokes crossover that is traversed as the neck radius grows. \n\nThe final piece of the coalescence phase diagram is the viscous-to-inertial crossover time, $\\tau_c$ (or the crossover radius, $r_c$), where the dynamics switch from the inertially limited viscous regime to a regime where only inertia is important. \nFor many fluid flows, this crossover is easily identified by computing the dimensionless Reynolds number, $Re=\\rho U L\/\\mu$, where $U$ and $L$ are characteristic velocity- and length-scales in the flows, respectively. \nCrossover behavior is expected when $Re\\approx 1$. 
\nFor coalescence, it was always assumed that $L=r_{\\text{min}}$ \\cite{Eggers1999,Eggers2003,Wu2004,Yao2005,Thoroddsen2005,Bonn2005,Burton2007,Case2008,Case2009,Thompson2012_2}. \n\nTo observe the viscous-to-inertial crossover, Paulsen \\textit{et al.}\\ \\cite{Paulsen2011} used an ultrafast electrical method (following refs.\\ \\cite{Case2008,Case2009}), which measures the neck radius down to tens of nanoseconds after the drops touch. \nFor salt-water drops, Paulsen \\textit{et al.}\\ observed viscous behavior more than $3$ decades later than the prediction using the accepted Reynolds number for coalescence. \nTo explain this discrepancy, they proposed that the dominant length-scale for the flows is instead given by the neck height, $L=r_{\\text{min}}^2\/A$. \nThus, a revised phase diagram for coalescence was constructed, which is pictured in Fig.\\ \\ref{phaseDiagram}(b). \n\n\nThis paper provides a more detailed experimental description and presents additional evidence for the picture developed in refs.\\ \\cite{Paulsen2012, Paulsen2011}. \nFirst, section \\ref{ExperDesc} describes the electrical method, the fluids used, and the high-speed imaging technique, and section \\ref{StokesInertialTheory} outlines several theoretical predictions for the purely viscous (Stokes) regime and the inertial regime. \n\nSection \\ref{ILVregime} provides measurements and analysis of coalescence in the Stokes regime and the inertially limited viscous regime. \nWhereas Paulsen \\textit{et al.}\\ \\cite{Paulsen2012} identified the inertially limited viscous to Stokes crossover by the motion of the back of the drops, I show that the same motions occur in the center-of-mass of the drops. \nThe neck shapes in the inertially limited viscous and Stokes regimes are consistent with two distinct similarity solutions, and the interfacial curvature at the neck minimum can be used to distinguish between the regimes. \nThe phase diagram is robust to different boundary conditions.\n\nSection \\ref{VIcrossover} provides measurements and analysis of the viscous-to-inertial crossover. \nI collapse the electrical data with a different analysis from ref.\\ \\cite{Paulsen2011}, to demonstrate that the results are not sensitive to the details of the collapse protocol. \nI argue for a new Reynolds number for coalescence, as was done in ref.\\ \\cite{Paulsen2011}, now coming from the viscous side of the transition. \nI present high-speed imaging data where the surface tension is varied, which follows the crossover scaling calculated with the new Reynolds number. \n\nSection \\ref{approach} studies the drops during their approach.\nUsing optical, electrical resistance, and capacitance measurements, I show that at low approach-speed, the drops coalesce as undeformed spheres at finite separation. \nThe data suggest that at low voltage, Van der Waals forces form the initial liquid neck (instead of forces due to the applied voltage).\nThe measurements provide an upper bound on the initial neck radius, $r_0$, which is smaller than previous estimates \\cite{Thoroddsen2005,Fezzaa2008}.\n\n\nAppendix \\ref{elecSystematic} reports checks on the electrical method, which show that the applied voltages and resulting electric fields do not affect the coalescence dynamics. Appendix \\ref{prevCross} addresses previous measurements of the viscous-to-inertial crossover in the literature. \n\nThis work gives a consistent picture wherein the inertially limited viscous regime is the asymptotic regime of liquid drop coalescence in vacuum or air. 
\nViscous drops ($Oh>1$) transition into the Stokes regime later on, and low-viscosity drops ($Oh<1$) crossover into the inertial regime. \nIn the inertially limited viscous regime and the inertial regime, the dominant flow gradients are on the scale of the neck height, $r_{\\text{min}}^2\/A$. \n\n\n\n\n\n\\section{Experimental Description}\n\\label{ExperDesc}\n\n\\subsection{Ultrafast electrical method}\n\nIn the experiment, two drops are formed on vertically aligned teflon nozzles of radius $A=2$ mm, which are separated by a distance $2A$. \nThe pendant drop is fixed while the sessile drop is slowly grown with a syringe pump until the drops coalesce. \nExcept for section \\ref{approach}, the experiments are at sufficiently low approach-speed ($U_{\\text{app}}< 9 \\times 10^{-5}$ m\/s) where the drops do not deform before contact.\n\n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.0in]{Schematic_Signals.pdf}\n\\end{center}\n\\caption{\n(Color online) \nElectrical method. \n(a) Coalescence cell and measurement circuit. \nLiquid hemispheres are formed on nozzles. \nOne drop is grown slowly with a syringe pump (Razel Scientific, R-99) to initiate coalescence, while an AC voltage, $V_{\\text{in}}$ (Hewlett-Packard, HP3325A), is applied across the drops and known circuit elements ($R_k$, $C_k$). \nVoltages $V_1$ and $V_2$ are recorded with a high-speed digitizer (NI PCI-5105, National Instruments) and converted to the time-varying complex impedance of the coalescence cell. \n$Z_{\\text{CR}}$: impedance of the coalescence region (dashed box). \n$Z_t$, $Z_b$: impedances of the fluid-filled nozzles. \n$C_p$: stray capacitance between the nozzles. \n(b-d) Signals for a single saturated aqueous NaCl coalescence versus $\\tau\\equiv t-t_0$. \n(b) The phase angle, $\\Delta\\phi$, between $V_1$ and $V_2$ decreases sharply when the drops touch, which is used to measure $t_0$. \n(c) Capacitance of the coalescence region, $C_{\\text{CR}}$, is roughly constant before and after contact. \n(d) Resistance of the coalescence region, $R_{\\text{CR}}$, after contact.\n}\n\\label{Schematic}\n\\end{figure}\n\nFollowing the AC electrical method developed by Case \\textit{et al.}\\ \\cite{Case2008,Case2009} and used in refs.\\ \\cite{Paulsen2012, Paulsen2011}, I measure the time-varying complex impedance, $Z_{\\text{CR}}$, of two liquid hemispheres while they are coalescing (see Fig.\\ \\ref{Schematic}). \nSalt (NaCl) is added to the drops to make them electrically conductive. \nA high-frequency ($0.6 \\leq f \\leq 10$ MHz), low-amplitude ($V_{\\text{in}} \\leq 2$ V) AC voltage is applied across the drops by gold electrodes that are submerged in the fluid. \nBy simultaneously sampling the voltage below the coalescence cell and the voltage below known passive circuit elements, the impedance of the coalescence cell is determined. \nTwo backgrounds are subtracted: one is measured by bringing the nozzles together, and the second is a small parallel capacitance, $C_p=0.61\\pm 0.12$ pF, that is measured before forming drops on the nozzles. \nThis isolates the impedance of the coalescing drops, $Z_{\\text{CR}}$, which is modeled as a time-varying resistor, $R_{\\text{CR}}$, and capacitor, $C_{\\text{CR}}$, in parallel. \nAt the instant the drops touch, there is a sharp decrease in the phase difference, $\\Delta\\phi$, between the two measured voltages, which indicates the moment of contact, $t_0$, to within 1\/$f$. 
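\nIn this parallel model the complex impedance satisfies\n\\begin{equation*}\n\\frac{1}{Z_{\\text{CR}}} = \\frac{1}{R_{\\text{CR}}} + i\\omega C_{\\text{CR}}, \\qquad \\omega = 2\\pi f,\n\\end{equation*}\nso that $R_{\\text{CR}}(\\tau)$ and $C_{\\text{CR}}(\\tau)$ can be extracted at each sampled time from the real and imaginary parts of the measured $Z_{\\text{CR}}$.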
\n\nExamples of these measured quantities are shown in Fig.\\ \\ref{Schematic}(b-d) as a function of $\\tau \\equiv t-t_0$, which measures time elapsed since the moment of contact, $t_0$. \nMore than $10^4$ points are sampled, thereby capturing a large dynamic range from a single coalescence event. \n\nTo ensure that the applied voltage and the resulting electric fields between the drops do not alter the coalescence dynamics, a variety of checks were performed on the electrical method (see Appendix \\ref{elecSystematic}). \n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.15in]{EStat.pdf} \n\\end{center}\n\\caption{\n(Color online) \nConversion between electrical resistance, $R_{\\text{CR}}$, and neck radius, $r_{\\text{min}}$. \n(a) Three axisymmetric models of the coalescence region. \nTop to bottom: two truncated hemispheres are joined with a cylindrical neck of radius $r_{\\text{min}}$, with planar equipotentials (dashed lines) sandwiching the neck; the same geometry without the equipotentials; two hemispheres joined smoothly with a circular arc. \n(b) Electrical resistance versus $r_{\\text{min}}$ calculated numerically for $\\sigma=1$ $\\Omega^{-1}$m$^{-1}$ and $A=2$ mm. \nThe data from all three models are well described by $R_{\\text{CR}}=2\/(\\xi \\sigma r_{\\text{min}})+1\/(\\sigma \\pi A)$ (solid line: Eq.\\ (\\ref{conversion})), where $\\xi=3.62\\pm 0.05$ is a fitting parameter. \nFor small $r_{\\text{min}}$, the data follow $R_{\\text{CR}}=2\/(\\xi \\sigma r_{\\text{min}})$ (dashed line). \n}\n\\label{EStat}\n\\end{figure}\n\nThe conversion between $R_{\\text{CR}}$ and $r_{\\text{min}}$ is geometrical, and was determined numerically using the electrostatics calculation package EStat (FieldCo). \nTo assess the dependance of the conversion on the choice of the model, this conversion was calculated in three different ways, pictured in Fig.\\ \\ref{EStat}(a). \nFirst, the conversion by Case \\textit{et al.}\\ \\cite{Case2008,Case2009} was repeated, in which equipotentials are fixed on two planes that sandwich a cylindrical neck of radius $r_{\\text{min}}$ and height $r_{\\text{min}}^2\/A$, so that the drops and their connecting neck are treated as series contributions to the total resistance. \nThis calculation was compared with a second model with the same geometry but no such restriction on the field lines. \nIn the third model, the shape of the interface is given by a circular arc connecting two hemispheres smoothly. \nAs shown in Fig.\\ \\ref{EStat}(b), the three conversions agree within error bars, and the data are well described by:\n\\begin{equation}\nR_{\\text{CR}} = \\frac{2}{\\xi \\sigma r_{\\text{min}}} + \\frac{1}{\\sigma\\pi A},\n\\label{conversion}\n\\end{equation}\n\n\\noindent where $\\sigma$ is the electrical conductivity of the fluid and the dimensionless constant $\\xi=3.62\\pm 0.05$ is determined empirically. \n\nThe first term in the conversion, $2\/(\\xi \\sigma r_{\\text{min}})$, is twice the resistance of a hemisphere with an opening of radius $r_{\\text{min}}$. \nThe constant term in the conversion, $1\/(\\sigma\\pi A)$, can be understood as coming from the fluid neck itself. \n(This expression is the electrical resistance of a cylinder with radius $r_{\\text{min}}$ and height $r_{\\text{min}}^2\/A$, with equipotentials on its flat faces.) 
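\nIndeed, a cylinder of radius $r_{\\text{min}}$, height $r_{\\text{min}}^2\/A$, and conductivity $\\sigma$ has resistance\n\\begin{equation*}\nR_{\\text{cyl}} = \\frac{1}{\\sigma}\\,\\frac{r_{\\text{min}}^2\/A}{\\pi r_{\\text{min}}^2} = \\frac{1}{\\sigma \\pi A},\n\\end{equation*}\nindependent of $r_{\\text{min}}$, which is why this contribution appears as a constant term in Eq.\\ (\\ref{conversion}).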
\nOther neck geometries (e.g., an overturned neck shape, which is predicted to occur in the inertial regime \\cite{Eggers2003, Fezzaa2008}) are expected to give the same conversion when $r_{\\text{min}}$ is small, since the dominant term in the resistance comes from the general feature of a conducting hemisphere with an opening of radius $r_{\\text{min}}$. \n\n\n\n\n\n\\subsection{Varying the liquid viscosity}\n\nFor the electrical measurements, the drops were mixtures of glycerol and water, with salt (NaCl) added to make the fluids electrically conductive. \nDe-ionized water was saturated with NaCl at room temperature and mixed with glycerol.\nEach mixture was characterized by measuring its density, surface tension, viscosity, and electrical conductivity. \nDensity was measured by weighing a known volume of fluid. \nSurface tension was measured by matching numerical solutions of the Young-Laplace equation to an image of a pendant drop. \nViscosity was measured with glass capillary viscometers (Cannon-Fenske). \nElectrical conductivity was determined by measuring the electrical impedance of a thin cylindrical channel filled with fluid, using the coalescence cell and measurement circuit. \n\nThe measured fluid parameters are shown in Fig.\\ \\ref{fluidParams}. \nBy changing the volume fraction of glycerol, the liquid viscosity was varied over two decades (from $1.9$ mPa s to $230$ mPa s) while the density and surface tension remained nearly constant, changing by factors of only $1.04$ and $1.6$, respectively. \n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.2in]{FluidParams.pdf} \n\\end{center}\n\\caption{\nFluid parameters for glycerol-water-NaCl mixtures used for electrical measurements. \n(a) Mass density, $\\rho$, is approximately constant over the range of mixtures used. \n(b) Surface tension, $\\gamma$, is approximately constant. \n(Aqueous NaCl has $\\gamma=88.5 \\pm 2$ mN\/m, which is higher than for pure water.) \n(c) Viscosity, $\\mu$, varies over a large range. \n(d) AC electrical conductivity, $\\sigma$ (at 1 to 10 MHz), decreases with increasing glycerol concentration. \n(e) AC electrical conductivity as a function of viscosity decreases slightly faster than $\\mu^{-1}$. \nThe low electrical conductivity at high viscosity sets the upper viscosity limit for the electrical method. \n}\n\\label{fluidParams}\n\\end{figure}\n\nAs shown in Fig.\\ \\ref{fluidParams}(e), the electrical conductivity decreases with increasing viscosity, which limits the experimental range of the viscosity of these mixtures with the electrical method. \nFor a fixed, dilute concentration of NaCl, the relationship would obey: $\\sigma\\propto \\mu^{-1}$. \nThis expression comes from combining the Nernst-Einstein law (which relates conductivity to the ionic diffusion coefficients, $D$, at low ionic concentration: $\\sigma\\propto D$) with the Stokes-Einstein equation ($D\\propto \\mu^{-1}$). \nThe conductivity falls off slightly faster than $\\mu^{-1}$, which is consistent with the lower concentration of NaCl in the mixtures as the glycerol fraction is increased.\n(There is another, smaller correction because the mixtures are not at low concentration, which has the opposite effect on the scaling.) \n\n\n\n\\subsection{High-speed imaging}\n\nA high-speed camera (Phantom v12, Vision Research) was used to observe other aspects of the coalescence dynamics, and to measure the neck radius versus time for silicone oils, which are non-conductive. 
\nThe drops were precisely aligned with respect to the line-of-sight of the camera.\nNeck radii were measured using an edge-locating analysis on the images.\n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.3in]{VideoElectric_rmin.pdf} \n\\end{center}\n\\caption{\nNeck radius versus time for coalescing aqueous NaCl drops ($\\mu=1.88$ mPa s, $\\gamma=88.5$ mN\/m, $\\rho=1180$ kg\/m$^3$). \n(a) Data from the electrical method ($\\bullet$) and high-speed imaging ($\\circ$), where $t_0$ is determined from a simultaneous electrical measurement.\nThe two methods are in good agreement. \nThe electrical data extends to far earlier times. \n(b) Data from the same experiments, on linear-linear axes, showing every camera frame. \nBefore contact, the drop geometry and finite spatial resolution create an apparent neck of radius $110$ $\\mu$m.\nThe earliest imaging point that corresponds to the actual fluid neck is the third frame after $t_0$ ($\\tau=27.0$ $\\mu$s). \n}\n\\label{VideoElectric_rmin}\n\\end{figure}\n\nTo compare electrical measurements with high-speed imaging data, $r_{\\text{min}}$ was measured both electrically and optically for saturated aqueous NaCl drops. \nFor the optical data used in this comparison, $t_0$ was determined from a simultaneous electrical measurement, which was converted to the camera's time-base with a precision of $0.1$ $\\mu$s. \nAs shown in Fig.\\ \\ref{VideoElectric_rmin}(a), the two methods are in good agreement. \nThe comparison serves as a quantitative check on the electrical method, and additionally illustrates the dynamic range gained by the electrical method versus high-speed imaging. \n\nIn the current configuration, the dynamic range of high-speed imaging is determined by spatial resolution, as opposed to timing resolution.\nTo see this, observe that imaging a neck of radius $r_{\\text{min}}$ requires resolving a much smaller feature: the vertical gap between the drops, $r_{\\text{min}}^2\/A$. \nThus, the minimum observable neck radius is set by the condition that $r_{\\text{min}}^2\/A$ is approximately equal to the spatial resolution of the optical setup (i.e., the neck height limits measurements of the neck width). \nFor the experiment in Fig.\\ \\ref{VideoElectric_rmin}, the spatial resolution is 5.3 $\\mu$m\/pixel, so this estimate predicts that $r_{\\text{min}}$ can be seen down to $100$ $\\mu$m, which is consistent with the data. \n(To avoid this optical limitation, one can alternatively image \\textit{through} the neck, as was done in recent drop spreading experiments \\cite{Eddi2013_1}.) \n\nWhen comparing electrical and optical signals, a recent high-speed imaging study of coalescence reported a short delay ($20$ to $60$ $\\mu$s) between the moment of electrical contact (from an electrical trigger for their ultrafast camera) and the first visible motion of the neck \\cite{Thoroddsen2005}. \nThe apparent delay between the electrical signal and visualized motion is now easily accounted for by the period of time when the neck height is smaller than the optical resolution.\nThis explanation is also consistent with those authors' observation that the delay is shorter for smaller drops. \nTo illustrate this point, Fig. \\ref{VideoElectric_rmin}(b) compares electrical and optical measurements of the apparent neck size, $r_{\\text{min}}$. 
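\nThe optical floor implied by the resolution condition above is\n\\begin{equation*}\nr_{\\text{min}} \\approx \\sqrt{A\\,\\delta} \\approx 100~\\mu\\text{m}\n\\end{equation*}\nfor $\\delta = 5.3$ $\\mu$m\/pixel and $A=2$ mm; below this value the apparent neck in the images reflects the unresolved gap between the drops rather than the true fluid neck.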
\nIndeed, the early-time optical data give a constant value of $110$ $\\mu$m, corresponding to the radius of the darkened region where the gap between the drops is smaller than the optical resolution. \n\n\n\n\n\n\\section{Purely viscous (Stokes) and inertial regimes}\n\\label{StokesInertialTheory}\n\nFor purely viscous Stokes flow in two dimensions (2D), an exact analytic solution of coalescence was given by Hopper \\cite{Hopper1984,Hopper1990,Hopper1993a,Hopper1993b}. \nThe shape of the fluid interface at any instant during coalescence is an inverse ellipse, given parametrically by: \n\\begin{subequations}\n \\label{Hopper_contour}\n \\begin{align}\n r(\\theta) & = \\sqrt{2}A\\frac{(1-m^2)(1+m)\\cos \\theta}{\\sqrt{1+m^2}(1+2m \\cos 2\\theta +m^2)}, \\label{Hopper_contour_r} \\\\\n z(\\theta) & = \\sqrt{2}A\\frac{(1-m^2)(1-m)\\sin \\theta}{\\sqrt{1+m^2}(1+2m \\cos 2\\theta +m^2)}, \\label{Hopper_contour_z}\n \\end{align}\n\\end{subequations}\n\n\\noindent where $0\\leq \\theta <2\\pi$, and the parameter $m$ is mapped to a neck radius by: \n\\begin{equation}\nr_{\\text{min}}=A\\sqrt{2}(1-m)\/\\sqrt{1+m^2}. \n\\label{rmin_vs_m}\n\\end{equation}\n\n\\noindent This family of curves interpolates between two kissing circles ($m=1$) and a single circle ($m=0$). \nFor small neck radius, these shapes limit to: \n\\begin{equation}\n(r^2+z^2)^2=4 A^2 z^2+r_{\\text{min}}^2 r^2.\n\\label{inverse_ellipse}\n\\end{equation}\n\nIn the solution, the neck radius is given as a function of time by:\n\\begin{equation}\n\\frac{\\gamma\\tau}{\\mu A} = \\frac{\\pi \\sqrt{2}}{4} \\int_{m^2}^1 \\frac{ds}{s (1+s)^{1\/2} K(s)},\n\\label{Hopper_exact}\n\\end{equation}\n\n\\noindent where $K(s)$ is the complete elliptic integral of the first kind, and $m$ is related to $r_{\\text{min}}$ by Eq.\\ (\\ref{rmin_vs_m}). \nThe asymptotic behavior of Eq.\\ (\\ref{Hopper_exact}) (in the limit that $r_{\\text{min}}\/A \\rightarrow 0$) is given by the simple expression: \n\\begin{equation}\n\\frac{\\gamma\\tau}{\\mu A} = \\frac{\\pi r_{\\text{min}}}{A} \\left| \\ln\\left(\\frac{r_{\\text{min}}}{8 A}\\right) \\right|^{-1}.\n\\label{Hopper_approx}\n\\end{equation}\n\nThe early-time asymptotic form of 2D Stokes coalescence was extended to three dimensions (3D) by Eggers \\textit{et al.}\\ \\cite{Eggers1999}. \nFor asymptotically small neck radius,\n\\begin{equation}\nr_{\\text{min}} = \\frac{\\gamma\\tau}{\\pi\\mu} \\left|\\ln\\left({\\frac{\\gamma \\tau}{\\mu A}}\\right)\\right|,\n\\label{viscScaling}\n\\end{equation}\n\n\\noindent which they report is a reasonable approximation for $r_{\\text{min}}\\lesssim 0.03 A$. \n\n\n\nFor inertially-dominated flows where the fluid viscosity is negligible, a scaling argument \\cite{Eggers1999} predicted that in this regime,\n\\begin{equation}\nr_{\\text{min}} = D_0 \\left( \\frac{\\gamma A}{\\rho} \\right)^{1\/4} \\tau^{1\/2},\n\\label{invScaling}\n\\end{equation}\n\n\\noindent where $D_0$ is a dimensionless prefactor.\nThis scaling was seen in numerical simulations, which report $D_0=1.62$ \\cite{Eggers2003}. \nHigh-speed imaging experiments \\cite{MenchacaRocha2001, Wu2004, Thoroddsen2005, Bonn2005, Fezzaa2008} and other numerical simulations \\cite{MenchacaRocha2001, Lee2006, Baroudi2014} have also observed this scaling regime and all report $D_0\\approx 1$. 
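\nThe form of Eq.\\ (\\ref{invScaling}) can be rationalized by a simple pressure balance: the gap bounding the neck has height of order $r_{\\text{min}}^2\/A$, so the capillary pressure driving the flow is of order $\\gamma A\/r_{\\text{min}}^2$. Equating this to the inertial pressure scale $\\rho\\,(dr_{\\text{min}}\/d\\tau)^2$ gives\n\\begin{equation*}\n\\rho\\left(\\frac{dr_{\\text{min}}}{d\\tau}\\right)^2 \\sim \\frac{\\gamma A}{r_{\\text{min}}^2} \\quad\\Rightarrow\\quad r_{\\text{min}} \\sim \\left(\\frac{\\gamma A}{\\rho}\\right)^{1\/4}\\tau^{1\/2},\n\\end{equation*}\nleaving the dimensionless prefactor $D_0$ undetermined at this level of argument.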
\n\n\n\n\n\n\\section{The inertially limited viscous (ILV) regime}\n\\label{ILVregime}\n\n\n\nRecently, Paulsen \\textit{et al.}\\ \\cite{Paulsen2012} showed that there is a third regime of liquid drop coalescence, which had been missed by previous experiments and was unanticipated by theory. \nThe regime arises because the analytical Stokes solution cannot apply at early times, because it violates a simple force-balance when the neck is small. \nNamely, the macroscopic motion of the drops inherent in the Stokes solution requires a larger force than the vanishingly small neck can provide. \nPaulsen \\textit{et al.}\\ \\cite{Paulsen2012} used simulation and experiment to show that at later times when the neck is larger, the Stokes regime is entered. \n\nPaulsen \\textit{et al.}\\ \\cite{Paulsen2012} called this regime the ``inertially limited viscous\" (ILV) regime, because the inertia of the drops prevents the Stokes solution from applying. \nIn the ILV regime, the neck radius is empirically found to follow: \n\\begin{equation}\nr_{\\text{min}} = C_0 \\frac{\\gamma}{\\mu}\\tau.\n\\label{ILV_Scaling}\n\\end{equation}\n\\noindent Previous experiments had observed this linear growth, but incorrectly assumed the drops to be in the Stokes regime \\cite{Bonn2005,Thoroddsen2005,Burton2007,Paulsen2011,Yokota2011}. \n\nPaulsen \\textit{et al.}\\ \\cite{Paulsen2012} used a force-balance argument to predict that for 3D drops, the Stokes regime is entered when: \n\\begin{equation}\nOh \\propto \\left| \\ln\\left(\\frac{1}{8}\\frac{r_{\\text{min}}}{A}\\right)\\right| \\left(\\frac{r_{\\text{min}}}{A}\\right)^{-1\/2}.\n\\label{phase_boundary_3D}\n\\end{equation} \nFig.\\ \\ref{phaseDiagram}(b) shows the phase diagram for liquid drop coalescence in 3D. \nThe ILV regime occupies an increasingly larger portion of the phase-space as $r_{\\text{min}}\/A\\rightarrow 0$.\nThus, surface tension, inertia, and viscosity combine to form the true asymptotic regime of liquid drop coalescence. \n(However, if the drop viscosity is extremely large or small, the range where the ILV regime occurs may be below atomic scales, and so coalescence will start in the Stokes or the inertial regime.) \n\nIn this section, I provide additional measurements and analysis of the ILV and Stokes regimes. \nThese measurements support the new picture of coalescence developed by Paulsen \\textit{et al.}\\ \\cite{Paulsen2012}. \n\n\n\n\n\n\n\n\n\n\\subsection{Change in velocity scaling}\n\nPaulsen \\textit{et al.}\\ \\cite{Paulsen2012} observed that the transition from the ILV regime to the Stokes regime would be accompanied by a change in the macroscopic velocity scaling of the drops. \nTo observe this macroscopic motion, they used a geometry where two pendant drops hang from nozzles and are translated horizontally to initiate contact on their equators. \nPaulsen \\textit{et al.}\\ \\cite{Paulsen2012} measured the velocity of a point on the back of one drop, $v_{\\text{b.o.d.}}$, as a probe of the global motion of the drops, thus identifying the phase boundary between the ILV and Stokes regimes. \nHere, I measure the center-of-mass velocity of each drop, $v_{\\text{c.o.m.}}$, and show that it gives consistent results. 
\n\nIn the ILV regime, a force balance argument \\cite{Paulsen2012} gives: \n\\begin{equation}\nv_{\\text{c.o.m.}} \\approx \\frac{3\\mu}{4A^3 \\rho} r_{\\text{min}}^2,\n\\label{vcom_ILV}\n\\end{equation}\n\n\\noindent In the Stokes regime, the 2D Stokes solution gives the asymptotic relationship: \n\\begin{equation}\nv_{\\text{c.o.m.}} \\approx \\frac{\\gamma}{2\\pi\\mu} \\left(\\frac{r_{\\text{min}}}{A}\\right) \\left|\\ln\\left(\\frac{1}{8}\\frac{r_{\\text{min}}}{A}\\right)\\right|,\n\\label{vcom_Stokes}\n\\end{equation}\n\\noindent which should apply for 3D drops as well \\cite{Eggers1999}.\n\nHigh-speed movies of the coalescing drops are analyzed to give the position of the center-of-mass of one drop, which is numerically differentiated to give $v_{\\text{c.o.m.}}$, and averaged to suppress noise. \n(Because the movies only give the planar drop contour, I calculate the center-of-mass of the shape that is given by revolving the contour of the bottom half of one drop around the axis passing through the center of both drops.) \nThe neck radius is measured directly from the same movie. \n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=2.8in]{Vcom_vs_rmin.pdf}\n\\end{center}\n\\caption{\n(Color online) \nILV-to-Stokes crossover. \n(a) Rescaled center-of-mass velocity of the drops, $v_{\\text{c.o.m.}}\\mu\/\\gamma$, versus $r_{\\text{min}}\/A$ at several viscosities. \nSolid line: asymptotic result from 2D Stokes theory, Eq.\\ (\\ref{vcom_Stokes}). \nThe data show super-linear growth at early times and merge onto the Stokes solution at late times. \nHigher-viscosity drops enter the Stokes regime at smaller neck radius. \n(b) The center-of-mass motion follows the motion of the back of one drop, here shown for $Oh=3.1$. \n}\n\\label{Vcom_vs_rmin}\n\\end{figure}\n\nFigure \\ref{Vcom_vs_rmin}(a) shows $v_{\\text{c.o.m.}}$ rescaled by the viscous-capillary velocity, $\\gamma\/\\mu$, versus the non-dimensional neck radius, $r_{\\text{min}}\/A$, for several viscosities. \nThe data capture both an early dynamics where the global drop velocity is growing approximately as $r_{\\text{min}}^2$ (as predicted for the ILV regime by Eq.\\ (\\ref{vcom_ILV})), and a late dynamics, where the data merge onto a master curve that is consistent with the Stokes theory, Eq.\\ (\\ref{vcom_Stokes}). \nThe higher the fluid viscosity, the earlier the transition into the Stokes regime. \nIn Fig.\\ \\ref{Vcom_vs_rmin}(b), $v_{\\text{c.o.m.}}$ is shown for one of the viscosities along with $v_{\\text{b.o.d.}}$ obtained from the same movie. \nThe two measurements are in good agreement. \nThis crossover in global drop motion marks the phase boundary between the ILV regime and the Stokes regime, which was reported in ref.\\ \\cite{Paulsen2012}, and is plotted in Fig.\\ \\ref{phaseDiagram}(b). \n\n\n\n\\subsection{Neck shapes}\n\nThe shape of the fluid neck connecting the coalescing drops offers another means of identifying the Stokes regime from the ILV regime. \nThe neck shapes were compared by Paulsen \\textit{et al.}\\ \\cite{Paulsen2012}, and a more detailed comparison is provided here. \n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=3.3in]{DropProfileCurvature.pdf}\n\\end{center}\n\\caption{\n(Color online) \nNeck shape versus $r_{\\text{min}}\/A$ and viscosity. \n(a-c) Neck shape for high viscosity (a) ($\\mu=58600$ mPa s, $Oh=370$) and intermediate viscosity (b,c) ($\\mu=96.6$ mPa s, $Oh=0.62$). 
\nAt high viscosity (a), the data agree with the exact Stokes-flow shapes (solid lines: Eq.\\ (\\ref{Hopper_contour})). \nAt intermediate viscosity (b), the neck is much broader, and the Stokes theory is a poor fit to the data. \nInstead, the data is well described by two spheres joined smoothly with a parabolic neck (c). \n(d) Neck shape similarity solutions versus rescaled coordinates, $\\tilde{r} \\equiv r\/r_{\\text{min}}$, $\\tilde{z} \\equiv z\/(r_{\\text{min}}^2\/A)$. \nDotted line: parabolic neck connected to spherical drops, Eq.\\ (\\ref{simsol_Parabola}). \nSolid line: Stokes solution, Eq.\\ (\\ref{simsol_Hopper}). \n(e) Dimensionless radius of curvature at neck minimum, $1\/\\kappa A$, versus $r_{\\text{min}}\/A$. \nHigh-viscosity data ($\\bullet$ $Oh=370$) agree with the Stokes theory (dashed line), which is approximated by Eq.\\ (\\ref{neck_curve}) with $C=1\/4$ at early times (solid line). \nIntermediate-viscosity data ($\\circ$ $Oh=0.62$) follow the result for a parabolic neck (dotted line: Eq.\\ (\\ref{neck_curve}) with $C=32\/27$). \n(f) Curvature scaling prefactor, $C$, measured at fixed radius ($0.1 0.03 A$, where all of the data lie. \nTherefore, following ref.\\ \\cite{Paulsen2012}, the data are compared with the 2D exact analytic solution.) \nThe data follow the theory for small $r_{\\text{min}}\/A$; only at later times do the curves begin to depart from each other. \nFigure \\ref{NozzleGeomCompare}(b) shows that for intermediate-viscosity drops, the boundary condition has a negligible effect on the dynamics, and the data matches the neck scaling for the ILV regime, Eq.\\ (\\ref{ILV_Scaling}). \nThus, the phase diagram shown in Fig.\\ \\ref{phaseDiagram}(b) applies to coalescing drops that are fixed or free. \n\n\n\n\n\n\\section{Viscous-to-inertial crossover}\n\\label{VIcrossover}\n\nThus far, I have reported coalescence measurements in the Stokes and the ILV regimes, and I have observed the ILV-to-Stokes crossover by measuring the macroscopic motion of the drops. \nThe remaining component of the coalescence phase diagram is the viscous-to-inertial crossover (from the ILV regime to the inertial regime). \n\nRecently, Paulsen \\textit{et al.}\\ \\cite{Paulsen2011} used an ultrafast electrical method to measure this crossover for salt-water drops, and reported a major discrepancy with the theory \\cite{Eggers1999,Eggers2003}. \nWhereas the theory predicts a crossover time between these regimes of $t_c \\approx 0.7$ ns, the experiments show $t_c \\approx 2$ $\\mu$s. \nIn terms of the neck size, the crossover radius was predicted to be $r_c \\approx 30$ nm, whereas experiment showed $r_c \\approx 20$ $\\mu$m.\n\nTo investigate this discrepancy, experiments were carried out where the liquid viscosity was varied over a large range \\cite{Paulsen2011}. \nThe data was found to be consistent with a newly proposed Reynolds number for coalescence, which is based on a smaller length scale for the dominant flow gradients given by the neck height, $L=r_{\\text{min}}^2\/A$. \nHere, I provide additional measurements and analysis of the viscous-to-inertial crossover, which support the conclusions of ref.\\ \\cite{Paulsen2011}. \n\n\n\n\n\\subsection{Collapse of electrical data}\n\\label{PrelimComp}\n\n\n\n\n\nThe inset to Fig.\\ \\ref{muCollapse} shows $r_{\\text{min}}$ versus time for 4 viscosities, ranging from $1.9$ to $82$ mPa s, which were measured electrically. 
\nIn ref.\\ \\cite{Paulsen2011}, these data were rescaled to fall onto a master plot by rescaling the vertical and horizontal axes with free parameters, $r_c$ and $\\tau_c$, at each viscosity to produce the best collapse. \nI perform a different analysis here, in order to demonstrate that the results are not sensitive to the particular way in which the data collapse is obtained. \n\nIn ref.\\ \\cite{Paulsen2011}, after collapsing the data, it was noted that all of the data followed the simple interpolation:\n\\begin{equation}\n\\frac{r_{\\text{min}}}{r_c}=2 \\left(\\frac{1}{\\tau\/\\tau_c} + \\frac{1}{\\sqrt{\\tau\/\\tau_c}}\\right)^{-1}.\n\\label{interpolate}\n\\end{equation}\n\n\\noindent Here, I start with Eq.\\ (\\ref{interpolate}) and use it to collapse the data. \nFor each viscosity, Eq.\\ (\\ref{interpolate}) is fit to the data, where $r_c$ and $\\tau_c$ are fitting parameters. \nThe data is then rescaled by the $r_c$ and $\\tau_c$.\n\n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.2in]{muCollapse.pdf}\n\\end{center}\n\\caption{\n(Color online) \n\\textit{Inset}: Neck radius versus time for glycerol-water-NaCl mixtures of different viscosities, from $1.9$ to $82$ mPa s. \nAt each viscosity, data from 5 or more coalescence events are logarithmically binned and averaged. \n\\textit{Main}: The data is collapsed by rescaling the x- and y-axes. \nRescaling parameters $\\tau_c$ and $r_c$ are obtained for each viscosity by fitting the data to Eq.\\ (\\ref{interpolate}) (solid line).\nThe collapsed data and the fit exhibit asymptotic behavior of $2\\tau\/\\tau_c$ (dotted line) at early times and $2\\sqrt{\\tau\/\\tau_c}$ (dashed line) at late times. \n} \n\\label{muCollapse}\n\\end{figure}\n\nFigure \\ref{muCollapse} shows the collapsed data, which fall cleanly onto a single curve given by Eq.\\ (\\ref{interpolate}). \nThe scaling parameters, $r_c$ and $\\tau_c$, determine the coefficients for the early- and late-time scaling laws, $C_0$ and $D_0$ (defined by eqns.\\ \\ref{ILV_Scaling} and \\ref{invScaling}, respectively). \nFigure \\ref{rcvsOh}(a) and (b) shows these coefficients as a function of dimensionless viscosity, $Oh$. \nThe ILV scaling prefactor, $C_0$, is of order 1 across the entire range of viscosity (although there is a slight increase in $C_0$ as the viscosity is increased). \nThe inertial scaling prefactor, $D_0$, is in good agreement with the value from numerical simulations \\cite{Eggers2003}, $D_0=1.62$. \n\n\\begin{figure}[bt]\n\\centering \n\\begin{center}\n\\includegraphics[width=3.4in]{rcOh.pdf} \n\\end{center}\n\\caption{\n(a,b) Measured dimensionless scaling-law prefactors, $C_0$ and $D_0$, versus $Oh$. \nIn (a), the dashed line is $C_0=1$. \nIn (b), the dashed line is the value from simulation \\cite{Eggers2003}: $D_0=1.62$. \n(c) Rescaled viscous-to-inertial crossover time versus $Oh$. \nThe dashed line shows $\\tau_c\/\\tau_{\\text{v}} = Oh^2$ (where $\\tau_{\\text{v}}= \\mu A\/\\gamma$ is the viscous timescale), as predicted in the literature \\cite{Eggers1999, Eggers2003}. \nClearly this is a poor description of the data. \nThe crossover radius proposed by ref.\\ \\cite{Paulsen2011} (with $\\tau_c\/\\tau_{\\text{v}} \\propto Oh$) is consistent with the data (solid line: Eq.\\ (\\ref{ourtc})). \n(d) Rescaled viscous-to-inertial crossover radius versus $Oh$.\nThe dashed line shows $r_c\/A = Oh^2$, which was proposed in the literature \\cite{Eggers1999, Eggers2003}.\nThis fails to capture the data. 
\nThe crossover radius proposed in this work describes the data well, with $r_c\/A \\propto Oh$ (solid line: Eq.\\ (\\ref{rcformula})). \nIn (a-d), the error bars are determined by the fits to Eq.\\ (\\ref{interpolate}). \n} \n\\label{rcvsOh}\n\\end{figure}\n\nFigure \\ref{rcvsOh}(c) shows the dimensionless crossover time, $\\tau_c\/\\tau_{\\text{v}}$, as a function of $Oh$ (where $\\tau_{\\text{v}}= \\mu A\/\\gamma$ is the viscous timescale). \nClearly, the accepted formula for the crossover time, $\\tau_c\/\\tau_{\\text{v}} \\approx Oh^2$, does not agree with the data. \nThe measurements are better described by a linear dependence on $Oh$. \n\nThe discrepancy between theory and experiment is also evident in the dimensionless crossover radius, $r_c\/A$, versus $Oh$, shown in Fig.\\ \\ref{rcvsOh}(d). \nThe predicted crossover radius is $r_c\/A \\approx Oh^2$, whereas the data follow $r_c\/A \\approx Oh$. \nThis suggests that the conventional Reynolds number for coalescence, $Re = \\rho \\gamma r_{\\text{min}}\/\\mu^2$, is wrong. \n\n\n\n\\subsection{Reynolds number for coalescence}\n\nThe viscous-to-inertial crossover can be estimated by the condition that the dimensionless Reynolds number for the flows, $Re=\\rho U L\/\\mu$, is of order unity (where $U$ and $L$ are characteristic velocity- and length-scales in the flows, respectively). \nAs was argued in ref.\\ \\cite{Paulsen2011}, the dominant flows in the viscous-to-inertial crossover correspond to a different Reynolds number than the one used in the literature \\cite{Eggers1999, Eggers2003, Wu2004, Yao2005, Thoroddsen2005, Bonn2005}. \nInstead of the conventionally used length scale given by the neck radius, $L=r_{\\text{min}}$, a much smaller length scale---the neck height, $r_{\\text{min}}^2\/A$---describes the size of the flow gradients. \n\nPaulsen \\textit{et al.}\\ \\cite{Paulsen2011} gave an estimate for the Reynolds number coming from the inertial side of the crossover. \nThey found that the crossover time, $\\tau_c$, is given by: \n\\begin{equation}\n\\frac{\\tau_c}{\\tau_{\\text{v}}} \\approx \\frac{64}{D_0^6} \\left(\\frac{\\mu}{\\sqrt{\\rho \\gamma A}}\\right) = \\frac{64}{D_0^6} \\mbox{ } Oh,\n\\label{ourtc}\n\\end{equation}\n\\noindent which is written here using the viscous timescale, $\\tau_{\\text{v}}$, and the Ohnesorge number. \nFigure \\ref{rcvsOh}(c) shows that this prediction is consistent with the crossover times measured in this work. \n\n\n\n\n\n\n\nA similar argument can be made coming from the viscous side of the crossover, which is presented here. \nIn the early (ILV) regime, the characteristic speed of the flows is $U=\\gamma\/\\mu$, and the characteristic length-scale is $L=r_{\\text{min}}^2\/(2A)$, since liquid from each drop moves in to advance the neck. \nUsing these scales, the Reynolds number is: $Re=\\rho\\gamma r_{\\text{min}}^2\/(2 A \\mu^2)$. \n\\noindent The dimensionless crossover radius, $r_c\/A$, is obtained by setting $Re=1$: \n\\begin{equation}\n\\frac{r_c}{A}\\approx \\sqrt{2}\\left(\\frac{\\mu}{\\sqrt{\\rho \\gamma A}}\\right)=\\sqrt{2}\\mbox{ }Oh.\n\\label{rcformula}\n\\end{equation}\n\n\\noindent Figure \\ref{rcvsOh}(d) shows that this prediction gives excellent agreement with the data\n\nIn Appendix \\ref{prevCross}, I compare the calculated crossover time, Eq.\\ (\\ref{ourtc}), with previous measurements of the viscous-to-inertial crossover in the literature. 
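\nAs a concrete check of these expressions, consider the saturated aqueous NaCl drops ($\\mu=1.9$ mPa s, $\\gamma=88.5$ mN\/m, $\\rho=1180$ kg\/m$^3$, $A=2$ mm), for which $Oh=\\mu\/\\sqrt{\\rho\\gamma A}\\approx 4\\times 10^{-3}$ and $\\tau_{\\text{v}}=\\mu A\/\\gamma\\approx 40$ $\\mu$s. The scalings proposed in the literature, $r_c\\approx Oh^2 A$ and $\\tau_c\\approx Oh^2\\tau_{\\text{v}}$, give roughly $30$ nm and $0.7$ ns, whereas Eqs.\\ (\\ref{rcformula}) and (\\ref{ourtc}) (with $D_0=1.62$) give\n\\begin{equation*}\nr_c \\approx \\sqrt{2}\\,Oh\\,A \\approx 12~\\mu\\text{m}, \\qquad \\tau_c \\approx \\frac{64}{D_0^6}\\,Oh\\,\\tau_{\\text{v}} \\approx 0.6~\\mu\\text{s},\n\\end{equation*}\nof the same order as the measured crossover for these drops ($r_c\\approx 20$ $\\mu$m, $\\tau_c\\approx 2$ $\\mu$s) quoted above.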
\n\n\n\n\\subsection{High-speed imaging collapse}\n\nHigh-speed videos of coalescence show that these results also capture the dependance of the crossover on surface tension. \nI coalesce glycerol-water-NaCl mixtures with viscosities ranging from 1.9 to 230 mPa s, and silicone oils with viscosities ranging from 0.82 to 97 mPa s. \n(The silicone oils are electrically insulating, and therefore cannot be measured with the electrical method.) \nUsing these liquids, the surface tension is varied by a factor of 5.\nThe liquids have $Oh<1$ so that the behavior should be described by the ILV regime and the inertial regime, but not the Stokes regime. \n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.1in]{videoCollapse.pdf} \n\\end{center}\n\\caption{\n(Color online)\nHigh-speed imaging of coalescence. \n\\textit{Inset:} Neck radius versus time for glycerol-water-NaCl mixtures ($\\mu=1.9$, 30, and 230 mPa s) and silicone oils ($\\mu=0.82$ and 97 mPa s). \nOther parameters are listed in the legend. \n\\textit{Main:} The data is collapsed by rescaling the axes with the crossover radius, $r_c$ (calculated with Eq.\\ (\\ref{rcformula})), and the crossover time, $\\tau_c$ (calculated with Eq.\\ (\\ref{ourtc})). \nThe rescaled data are consistent with Eq.\\ (\\ref{interpolate}) (solid line). \n} \n\\label{videoCollapse}\n\\end{figure}\n\nThe inset of Fig.\\ \\ref{videoCollapse} shows $r_{\\text{min}}$ versus $\\tau$ for these liquids. \nWhen the axes are rescaled with $r_c$ (given by Eq.\\ (\\ref{rcformula})) and $\\tau_c$ (given by Eq.\\ (\\ref{ourtc})), the data collapse to a master curve, shown in Fig.\\ \\ref{videoCollapse}. \nThe collapsed data follow Eq.\\ (\\ref{interpolate}), and therefore fall on the electrical data collapse, Fig.\\ \\ref{muCollapse}. \nThese experiments further solidify the new phase diagram for coalescence, shown in Fig.\\ \\ref{phaseDiagram}(b). \n\n\n\n\n\n\\section{Dynamics of drops during approach}\n\\label{approach}\n\n\\subsection{Drop deformation}\n\nThe experiments in this work were performed at ambient air pressure. \nBecause the drops approach at finite speed, they can be deformed by the viscous stresses in the air layer between them \\cite{Neitzel2002}. \nThis deformation could affect the subsequent coalescence dynamics. \nPrevious experiments \\cite{Case2008,Case2009} using the same electrical method suggest that deformation may be present for approach speeds as low as $10^{-4}$ m\/s. \n\nHere, aqueous NaCl drops are coalesced in air at an approach speed that is varied over $7$ orders of magnitude down to $17$ nm\/s, to examine the effects of the ambient gas during approach. \nTo achieve constant approach-speeds lower than $10^{-3}$ m\/s, a variable-speed syringe pump was used with a wide range of syringe sizes.\nThe approach speed, $U_{\\text{app}}$, was calculated based on the geometry and the flow rate. \nThe coalescence cell was fixed to a vibration-isolation table to suppress disturbances on the drops. \nFor high approach-speeds, a gravity-fed system was used: the bottom drop was fed by a reservoir held at a variable height above the coalescence cell, so that hydrostatic pressure caused the bottom drop to grow and impact the top drop. \nFor the gravity-fed system, $U_{\\text{app}}$ was measured directly with a high-speed camera. \n\nFigure \\ref{RCRva}(a) shows an image taken within one frame of $t_0$ for $U_{\\text{app}}=8.8\\times 10^{-5}$ m\/s. \nThe drops appear to be undeformed at the moment of contact. 
\nAt much higher approach-speed, the drops visibly deform before they merge, as shown in Fig.\\ \\ref{RCRva}(b), for $U_{\\text{app}}=3.3\\times 10^{-2}$ m\/s. \nThis transient non-coalescence is due to the pressure provided by the lubricating air layer between the drops. \nAlthough the drops appear undeformed in the low approach-speed case shown in Fig.\\ \\ref{RCRva}(a), the image does not rule out the possibility of a small flattened region at the drop tips. \n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=3.2in]{dropsRCRva.pdf} \n\\end{center}\n\\caption{\nOptical and electrical measurements of coalescence at low and high approach-speed. \n(a,b) Visual indications of drop deformation. \nDrops are shown within one frame of $t_0$ at low approach-speed: (a) $U_{\\text{app}}=8.8\\times 10^{-5}$ m\/s, and high approach-speed: (b) $U_{\\text{app}}=3.3\\times 10^{-2}$ m\/s. \nAt low approach-speed, the drops appear to coalesce as undeformed spheres, whereas the high approach-speed drops are flattened. \n(c,d) Electrical measurements corresponding to the experiments shown in (a,b). \n$R_{\\text{CR}}-R_0$ versus $\\tau$, where $R_0=1\/(\\sigma \\pi A)$. \nAt low approach-speed (c), the resistance follows $\\tau^{-1}$ (ILV scaling, dashed line) at early times and $\\tau^{-1\/2}$ (inertial scaling, dotted line) at late times. \nAt high approach-speed (d), the resistance follows $\\tau^{-0.72}$ at early times (solid line). \n}\n\\label{RCRva}\n\\end{figure}\n\nThe electrical method was used to access these small scales. \nFigure \\ref{RCRva}(c) shows electrical measurements of the coalescing drops for the low approach-speed case. \nThe data follow the behavior shown in earlier sections of this work (e.g., Fig.\\ \\ref{Schematic}(d)). \nHowever, for the high approach-speed case, the electrical measurements are qualitatively different, as shown in Fig.\\ \\ref{RCRva}(d). \nAt early times, the resistance appears to follow an approximate power-law with a scaling exponent of $-0.72$.\nAt late times, there is an abrupt crossover out of this scaling.\n\n\\begin{figure}[bt]\n\\centering \n\\begin{center} \n\\includegraphics[width=2.7in]{tauc_Cfinal_Uapp.pdf} \n\\end{center}\n\\caption{\n(a) Crossover time between the early and late electrical-resistance scalings versus approach speed. \nThe crossover time is constant for $U_{\\text{app}}<U_{\\text{app}}^*$. For $U_{\\text{app}}>U_{\\text{app}}^*$ (shaded region), $\\tau_c$ depends on the approach speed, and is delayed as $U_{\\text{app}}$ increases. \n(b) $C_{\\text{final}}$ versus approach speed. \n$C_{\\text{final}}$ is constant for $U_{\\text{app}}<U_{\\text{app}}^*$. For $U_{\\text{app}}>U_{\\text{app}}^*$ (shaded region), $C_{\\text{final}}$ increases with $U_{\\text{app}}$, consistent with an increase in the flattening of the drops before they touch. \nThe data are averaged over 800 samples within the final 100 $\\mu$s before $t_0$, and $V_{\\text{in}} \\leq 275$ mV. \n} \n\\label{Uapp}\n\\end{figure}\n\nA crossover time, $\\tau_c$, is measured by fitting the early- and late-time data to separate power-laws and determining the point of intersection of the fits. \n(This criterion is equivalent to fitting to Eq.\\ (\\ref{interpolate}) if the two scalings are the ILV and inertial scalings.) \nFigure \\ref{Uapp}(a) shows $\\tau_c$ versus approach speed. \nThe crossover time is insensitive to the drop approach-speed for $U_{\\text{app}}<3 \\times 10^{-4}$ m\/s. 
\nFor $U_{\\text{app}}>3 \\times 10^{-4}$ m\/s, the crossover time increases approximately linearly with $U_{\\text{app}}$, which is correlated with an increase in flattening in the high-speed videos. \nA threshold approach-speed, $U_{\\text{app}}^*=(3 \\pm 1)\\times 10^{-4}$ m\/s, separates the two behaviors. \n\nThe capacitance of the drops at the moment of contact, $C_{\\text{final}}\\equiv C_{\\text{CR}}(\\tau=0)$, should be sensitive to the amount of drop deformation as well.\nIn particular, $C_{\\text{final}}$ should grow with the area of the deformed region.\nFig.\\ \\ref{Uapp}(b) shows $C_{\\text{final}}$ versus approach speed. \nThe capacitance shows two behaviors, which fall on either side of the threshold approach-speed, $U_{\\text{app}}^*$. \nAt high approach-speed ($U_{\\text{app}}>U_{\\text{app}}^*$), $C_{\\text{final}}$ increases with $U_{\\text{app}}$ as the drops are increasingly deformed. \nAt low approach-speed ($U_{\\text{app}}<U_{\\text{app}}^*$), $C_{\\text{final}}$ is approximately constant, consistent with the drops coalescing as undeformed spheres. \n\nFigure \\ref{CfinalVa} shows $C_{\\text{final}}$ as a function of the applied voltage, $V_{\\text{in}}$. For $V_{\\text{in}}>0.3$ V, the data are consistent with the drops forming a connecting neck when the intervening electric field exceeds a threshold value, $E_{\\text{thresh}}$. \nThe data are fit well by $E_{\\text{thresh}}=1.2 \\pm 0.2$ MV\/m, using Eq.\\ (\\ref{Ceq}) for the capacitance of the drops as a function of the final separation, $z_0$, and substituting $z_0\\approx V_{\\text{in}}\/E_{\\text{thresh}}$. \nWhile this value is only slightly smaller than the approximate dielectric strength of air at large distances (3 MV\/m), the dielectric strength of air at these short distances is much greater \\cite{Townsend1915}.\n\nHaving argued that dielectric breakdown does not occur, I now address whether the applied voltage deforms the drops. \nA recent study measured the deformation of two nearby drops with an applied DC electric potential difference \\cite{Bird2009}. \nTheir experiments showed that the drops sharpen into cones, and they measured a cone angle of roughly $20^{\\circ}$ for a potential difference of $500$ V (where $0^{\\circ}$ corresponds to no deformation). \nTheir measurements of the cone angle are approximately linear for electric potentials between $0$ and $500$ V, suggesting that the cone angle would be less than $0.08^{\\circ}$ for the applied voltages used in this work ($V_{\\text{in}}\\leq 2$ V). \n(The angle is likely diminished even further since the measurements in this work are AC instead of DC.) \nThus, any deformation of the drops is expected to be on a small scale, although it could contribute to forming the initial microscopic neck for $V_{\\text{in}}>0.3$ V. \n\nFigure \\ref{CfinalVa}(b) shows that at lower voltages, $V_{\\text{in}}<0.3$ V, $C_{\\text{final}}$ is roughly constant within error, and the description invoking a threshold electric field is a poor fit. \nInstead, the data are consistent with a picture where Van der Waals forces initiate coalescence at finite separation when $V_{\\text{in}}$ is small.\nFor the data at low voltages, I measure $C_{\\text{final}}=0.63 \\pm 0.05$ pF, giving a best fit of $z_0=280$ nm. \nBecause the capacitance is logarithmic in drop separation, the experimental error on $z_0$ is large; the data are consistent with $z_0$ ranging from $120$ nm to $650$ nm. \n\n\n\n\\subsection{Initial neck size}\n\nFinally, I address the finite length and width of the liquid neck that is formed at the inception of coalescence. 
\nDue to the finite separation of the drops at the moment of contact, the separation between the drop interfaces at radius $r$ will be given by $r^2\/A+z_0$, instead of simply $r^2\/A$ as was assumed in previous sections. \nHowever, this correction becomes relatively smaller as $r_{\\text{min}}$ grows, and is negligible once $r_{\\text{min}}\\gg\\sqrt{A z_0}$. \nNumerical simulations \\cite{Baroudi2013} where low-viscosity drops initiate contact by forming a small fluid neck at finite separation show that after a short delay, the dynamics converge onto the predicted scaling (i.e. Eq.\\ (\\ref{invScaling})). \n\nPrevious high-speed imaging studies have reported values for the initial finite radius of the fluid neck (referred to as $r_0$) at the inception of liquid drop coalescence in air. \nValues reported were $r_0=50$ $\\mu$m for $U_{\\text{app}} \\lesssim 0.1$ mm\/s (ref.\\ \\cite{Thoroddsen2005}) and $r_0=43.8 \\pm 4.3$ $\\mu$m for $U_{\\text{app}}=6.6$ mm\/s (ref.\\ \\cite{Fezzaa2008}). \nIn contrast, I measure $r_{\\text{min}}$ down to $0.7$ $\\mu$m at $\\tau=50$ ns for aqueous NaCl drops at low approach-speed.\nThis places a significantly smaller upper bound on the initial size of the neck in this system.\n\n\n\n\n\n\\section{Conclusion}\n\nIn summary, I have presented supporting evidence for the new phase diagram for liquid drop coalescence in vacuum or air, developed in refs.\\ \\cite{Paulsen2012, Paulsen2011}. \nThe theoretically unanticipated inertially limited viscous regime was found to be the true asymptotic regime of coalescence for drops of any finite viscosity. \nIn this regime, surface tension, viscosity, and inertia all balance. \nViscous drops ($Oh>1$) transition into the Stokes regime once the neck is sufficiently large to pull the drops towards each other. \nLow-viscosity drops ($Oh<1$) transition into the inertial regime at late times. \n\nIn the inertially limited viscous regime and the Stokes regime, the center-of-mass motion of the drops was found to track with the motion of the backs of the drops, further solidifying the force balance argument that identified the inertially limited viscous regime in ref.\\ \\cite{Paulsen2012}. \nThis work provides similarity solutions for the neck shapes in these two regimes, and the new phase diagram for coalescence was shown to apply for different boundary conditions. \n\nAdditional evidence was provided for the surprisingly late viscous-to-inertial crossover (from the inertially limited viscous regime to the inertial regime), including an alternative method of data collapse, a Reynolds-number argument coming from the viscous side, and high-speed imaging experiments where the surface tension was varied. \nThe agreement of the new coalescence Reynolds number with the data supports the new picture for the flows, which must have a significant gradient on a small axial length scale set by the neck height, $r_{\\text{min}}^2\/A$. \n\nMany of the results are based on electrical measurements, which were shown to have an insignificant effect on the coalescence dynamics reported here. \nAt low approach-speed and low applied-voltage, the drops coalesce at finite separation as undeformed spheres. \n\nWhereas this work has established the behavior of liquid drop coalescence in vacuum or air, further work is needed to determine how an outer fluid with significant density or viscosity alters the coalescence phase diagram. \n\n\n\n\n\n\n\\begin{acknowledgments} \nI thank Sidney Nagel and Justin Burton for their guidance, support, and keen insight throughout this work. 
\nI am also particularly grateful to Santosh Appathurai, Osman Basaran, Sarah Case, Thomas Rosenbaum, Savdeep Sethi, and Wendy Zhang. \nI thank Michelle Driscoll, Efi Efrati, Nathan Keim, and Tom Witten for their assistance and for many illuminating discussions. \nThis work was supported by NSF Grant DMR-1105145 and by NSF MRSEC DMR-0820054. \n\\end{acknowledgments}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Int}\n\nExoplanets -- planets orbiting stars other than the Sun -- are most often identified through indirect means. We observe a star with periodic behaviour that would otherwise be unexpected and conclude that the best explanation is the presence of one (or more) planets. We direct the interested reader to \\cite{2018Perryman} for a summary of the various exoplanet detection techniques. By piecing together the observations of the unexpected behaviour it is possible to constrain, to some degree, the orbit and physical nature of the planets in question. Such inference is, however, not perfect -- particularly when the planets in question have been detected through observations of the ``wobble'' of their host star, as is the case for planets found using the radial velocity technique \\citep[e.g. ][]{51peg,47UMa,HarpsM,114613,3167c}, or candidate planets claimed on the basis of binary star eclipse timing variability \\citep[e.g. ][]{hwvir,nnser,silly1,uzfor}.\n\nThe accurate determination of the (minimum) masses and orbits of newly discovered exoplanets provides the key data by which we can understand the variety of outcomes of the planet formation process. As such, it behooves us to ensure that exoplanet catalogues contain information which is as accurate and realistic as possible. Such accurate solutions do not just enable us to properly ascertain the distribution of planets at the current epoch -- they also provide an important window into the history of the planetary systems we discover \\citep[e.g. ][]{2014Ford,2015PuWu,2017Fulton,2019Wu}, and allow us to predict and plan follow up observations through population synthesis models \\citep[e.g. ][]{2013Hasegawa,2018Mordasini,2020Dulz}. For example, the migration and mutual gravitational interaction of planets have been identified as being of critical importance to both the observed architectures and predicted long-term stability of the menagerie of known multi-planet systems heretofore identified through radial velocity and transit surveys \\citep[e.g.][]{prototrap,2012cWittenmyer,chain,trappist,2017Mustill,2017Hamers,2019Childs}. \n\nHowever, the accuracy of orbital parameters of the planetary companions presented in discovery works is frequently limited by the time period covered by the observations that led to the discovery, which are often enough to claim detection and little more \\citep{JupiterAnalogues,CoolJupiters}. Long term follow-up of known planet host systems is therefore desirable to refine the orbital parameters for known companions, to infer the presence of additional companions at lower masses and\/or larger semi-major axes \\citep[e.g.][]{2017BeckerAdams,EightSingle,RVDItech,181433,2019Denham,Rick19}, and to disentangle the complex signals produced by planets on resonant \\citep[e.g.][]{Ang10,2012cWittenmyer,2014aWittenmyer,2016Wittenmyer} or eccentric orbits \\citep[e.g.][]{2013Wittenmyer,2017cWittenmyer}. 
Equally, due to the relative paucity of data on which planet discoveries are often based, it is possible for those initial solutions to change markedly as more data are acquired. The ultimate extension of this is that, on occasion, the process by which planetary solutions are fit to observational data can yield false solutions -- essentially finding local minima in the phase space of all possible orbital solutions that represent a good theoretical fit to the data whilst being unphysical. It is therefore important to check the dynamical feasibility of multi-planet solutions that appear to present a good fit to observational data -- particularly in those cases where such solutions invoke planets on orbits that offer the potential for close encounters between the candidate planets.\n\nFollowing this logic, we have in the past tested the stability of multi-planet systems in a variety of environments including around main sequence \\citep{2010Marshall,2012bWittenmyer,2015Wittenmyer,2017bWittenmyer}, evolved \\citep{2017aWittenmyer,2017cWittenmyer,2019Marshall}, and post-main sequence stars \\citep{2011Horner,2012Horner,QSVir,2012aWittenmyer,2013Mustill}. In some cases, our results confirmed that the proposed systems were dynamically feasible as presented in the discovery work, whilst in others, our analysis demonstrated that alternative explanations must be sought for the observed behaviour of the claimed ``planet-host'' star \\citep[e.g.][]{2011Horner,QSVir}. To ensure that our own work remains robust, we have incorporated such analysis as a standard part of our own exoplanet discovery papers. We test all published multi-planet solutions for dynamical stability before placing too great a confidence in a particular outcome. As an extension to this approach, we presented a revised Bayesian method to the previously adopted frequentist stability analysis in \\cite{2019Marshall}, and demonstrated the consistency between these approaches. \n\nRather than using direct dynamical simulations, the stability of a planetary system can also be inferred from a criterion derived from the planetary masses, semi-major axes, and conservation of the angular momentum deficit \\citep[AMD,][]{2000Laskar,2017Laskar}. AMD can be interpreted as measuring the degree of excitation of planetary orbits, with less excited orbits implying greater stability. The definition of AMD stability has been revised to account for the effect of mean motion resonances and close encounters on orbital stability \\citep{2017Petit,2018Petit}. Of the systems examined in this work, HD~110014 has been identified as being weakly stable, whilst HD~67087 and HD~133131A are both considered unstable according to AMD \\citep[see Figs. 6 and 7,][]{2017Laskar}. In our previous dynamical studies, we find good agreement between the stability inferred from AMD and our dynamical simulations with 13 systems in common between them, of which nine were classified unstable and four stable, one marginally so \\citep{2012Horner,2012Robertson,2012aWittenmyer,2012bWittenmyer,2012cWittenmyer,2014aWittenmyer,2014bWittenmyer,2015Wittenmyer,2016Wittenmyer,2016Endl}. In this paper we examine the dynamical stability of the three multi-planet systems, HD~67087, HD~110014, and HD~133131A, as a critical examination of their stability and a further test of the reliability of AMD for the identification of instability in exoplanet systems. 
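For reference, in its simplest form (neglecting terms of order the planet-to-star mass ratio), the AMD of a system of $N$ planets of mass $m_k$, semi-major axis $a_k$, eccentricity $e_k$ and inclination $i_k$ to the invariable plane, orbiting a star of mass $M_{\\star}$, may be written following \\citet{2000Laskar} as\n\\begin{equation}\nC = \\sum_{k=1}^{N} m_k \\sqrt{G M_{\\star} a_k}\\left(1 - \\sqrt{1 - e_k^2}\\cos i_k\\right),\n\\end{equation}\nwhich vanishes for coplanar, circular orbits and grows as the orbits become more eccentric or mutually inclined.\n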
\n\nHD~67087, observed as part of the Japanese Okayama Planet Search programme \\citep{2005Sato}, was discovered to host a pair of exoplanets by \\cite{2015Harakawa}. The candidate planets are super-Jupiters, with $m~\\sin i$ of 3.1\\,$M_{\\rm Jup}$\\ and 4.9\\,$M_{\\rm Jup}$, respectively. They move on orbits with (${a,e}$) of ({1.08, 0.17}) and ({3.86, 0.76}), respectively, which would place the outer planet amongst the most eccentric Jovian planets identified thus far. The authors noted that the orbit and mass of the outer planet are poorly constrained. \n\nHD\\,110014 was found to host a planet by \\citet{2009deMedeiros}; the second companion was identified through re-analysis of archival spectra taken by the FEROS instrument \\citep{1998KauferPasquini} looking to derive an updated orbit for planet b \\citep{2015Soto}. The two candidate planets have super-Jupiter masses, and \\citet{2015Soto} cautioned that the proposed second planet was worryingly close in period to the typical rotation period of K giant stars. However, their analysis of the stellar photometry was inconclusive in identifying its activity as the root cause for the secondary signal. \n\nHD~133131A's planetary companions were reported in \\citet{2016Teske}, based on precise radial velocities primarily from the Magellan Planet Finder Spectrograph \\citep{2006Crane,2008Crane,2010Crane}. Their data supported the presence of two planets, where the outer planet is poorly constrained due to its long period. \\citet{2016Teske} ran a single dynamical stability simulation on the adopted solution and found it to remain stable for the full $10^5$ yr duration. The authors presented both a low- and high- eccentricity solution, reasoning that in a formal sense, the two solutions were essentially indistinguishable. They favoured the low-eccentricity model ($e_2=0.2$) for dynamical stability reasons. There is precedent in the literature for this choice, since it does happen that the formal best fit can be dynamically unfeasible whilst a slightly worse fit pushes the system into a region of stability \\citep[e.g.][]{2013Mustill, 2014Trifonov, 2017bWittenmyer}. \n\nThe remainder of the paper is laid out as follows. We present a brief summary of the radial velocity observations and other data (e.g. stellar parameters) used for our reanalysis in Sect. \\ref{sec:Obs} along with an explanation of our modelling approach. The results of the reanalyses for each target are shown in Sect. \\ref{sec:Res}. A brief discussion of our findings in comparison to previous work on these systems is presented in Sect. \\ref{sec:Dis}. Finally, we present our conclusions in Sect. \\ref{sec:Con}.\n\n\\section{Observations and methods}\n\\label{sec:Obs}\n\n\\subsection{Radial Velocity Data}\n\nWe compiled radial velocity values from the literature for the three systems examined in this work; the origin of these data are summarised in Table \\ref{tab:rvs}. \n\n\\begin{table}\n\\centering\n\\caption{Table of references for the radial velocities data used in this work. \\label{tab:rvs}}\n\\begin{tabular}{ll}\n \\hline\n Target & References \\\\\n \\hline\\hline\n HD~67087 & \\cite{2015Harakawa} \\\\\n HD~110114 & \\cite{2009deMedeiros} \\\\\n HD~133131A & \\cite{2016Teske} \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table*}\n\\centering\n\\caption{Planetary orbital parameters based on \\textit{Systemic} fits to radial velocity data. Semi-major axes were calculated using measured orbital periods and stellar masses taken from the NASA Exoplanet Archive. 
\\label{tab:systemic}}\n\\begin{tabular}{lcccccc}\n \\hline\n & HD\\,67087 b & HD\\,67087 c & HD\\,110014 b & HD\\,110014 c & HD\\,133131A b & HD\\,133131A c \\\\\n \\hline \\hline\n Amplitude [\\mbox{m\\,s$^{-1}$}] & 74.0~$\\pm$~3.0 & 54.0~$\\pm$~4.0 & 135.315$\\pm$3.640 & 64.660$\\pm$3.966 & 36.949$\\pm$0.750 & 5.956$\\pm$1.617 \\\\\n Period [days] & 352.3$\\pm$1.7 & 2380$^{+167}_{-141}$ & 877.5$\\pm$5.2 & 130.125$\\pm$0.096 & 648.$\\pm$3 & 5342$^{+7783}_{-2009}$ \\\\\n Mean anomaly [deg] & 35$^{+20}_{-16}$ & 94$^{+28}_{-35}$ & 155$\\pm$4 & 231$\\pm$3 & 265$\\pm$11 & 188$^{+104}_{-123}$ \\\\\n Longitude [deg] & 281$^{+18}_{-15}$ & 256 (fixed) & 41$\\pm$3 & 302$\\pm$4 & 16.6$^{+4.7}_{-4.5}$ & 110$^{+25}_{-43}$ \\\\\n Eccentricity & 0.18$^{+0.07}_{-0.06}$ & 0.51 (fixed) & 0.259$\\pm$0.017 & 0.410$\\pm$0.022 & 0.340$\\pm$0.032 & 0.63$^{+0.25}_{-0.20}$ \\\\\n $M$ sin $i$ [$M_{\\rm Jup}$] & 3.10$^{+0.15}_{-0.14}$ & 3.73$^{+0.47}_{-0.45}$ & 10.61$\\pm$0.25 & 3.228$\\pm$0.098 & 1.418$\\pm$0.036 & 0.52$^{+0.45}_{-0.17}$ \\\\\n Semi-major Axis [au] & 1.08 & 3.87 & 2.32 & 0.65 & 1.44 & 5.88 \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\centering\n\\caption{Results from \\textsc{Astroemperor} exploration of parameter space around \\textit{Systemic} nominal best-fit values for planetary companions to HD~133131A and HD~110014. \\label{tab:mcmc}}\n\\begin{tabular}{lcccc}\n \\hline\n & HD\\,133131A b & HD\\,133131A c & HD\\,110014 b & HD\\,110014 c \\\\\n \\hline \\hline\n Amplitude [\\mbox{m\\,s$^{-1}$}] & 36.949$\\pm$0.750 & 5.956$\\pm$1.617 & 135.315$\\pm$3.640 & 64.660$\\pm$3.966 \\\\\n Period [days] & 647.816$\\pm$1.575 & 3205.648$\\pm$948.063 & 865.206$\\pm$6.170 & 132.431$\\pm$0.279 \\\\\n Phase [deg] & 261.620$\\pm$4.850 & 31.734$\\pm$98.433 & 43.753$\\pm$72.179 & 341.373$\\pm$64.346 \\\\\n Longitude [deg] & 18.550$\\pm$2.165 & 113.777$\\pm$81.302 & 146.633$\\pm$17.229 & 236.903$\\pm$16.452 \\\\\n Eccentricity & 0.341$\\pm$0.021 & 0.263$\\pm$0.145 & 0.011$\\pm$0.015 & 0.294$\\pm$0.076 \\\\\n $M$ sin $i$ [$M_{\\rm Jup}$] & 1.428$\\pm$0.099 & 0.388$\\pm$0.124 & 10.622$\\pm$0.757 & 2.581$\\pm$0.247 \\\\\n Semi-major Axis [au] & 1.435$\\pm$0.046 & 4.153$\\pm$0.800 & 2.350$\\pm$0.075 & 0.668$\\pm$0.023 \\\\\n \\hline\n Jitter [\\mbox{m\\,s$^{-1}$}] & 3.557$\\pm$1.254 & 0.466$\\pm$0.419 & 6.060$\\pm$1.856 & 13.350$\\pm$1.492 \\\\\n Offset [\\mbox{m\\,s$^{-1}$}] & -9.333$\\pm$4.787 & 12.321$\\pm$7.543 & 52.575$\\pm$4.737 & 72.198$\\pm$4.541 \\\\\n MA coefficient & 0.714$\\pm$0.531 & 0.466$\\pm$0.419 & 0.697$\\pm$0.214 & 13.350$\\pm$1.492 \\\\\n MA Timescale [days] & 4.158$\\pm$2.815 & 12.321$\\pm$7.543 & 9.793$\\pm$2.488 & 72.198$\\pm$4.541 \\\\\n Acceleration [\\mbox{m\\,s$^{-1}$}\/yr] & -1.435 & & -21.620 & \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\subsection{Modelling}\n\nTo test the dynamical stability of these proposed planetary systems, we follow the updated dynamical methodology outlined in our previous work \\citep[][]{2017bWittenmyer,2019Marshall}.\n\nIn brief, we perform a fit to the published velocity data using the \\textit{Systemic Console} \\citep{2009Meschiari}. We then use the MCMC tool within \\textit{Systemic} to explore the parameter space about the best fit. The MCMC chain runs for $10^7$ steps, discarding the first 10,000, and we then draw the trial solutions for our dynamical stability simulations from these posteriors. 
Using these data, we populate three ``annuli'' in $\\chi^2$ space corresponding to the ranges $0-1\\sigma$, $1-2\\sigma$, and $2-3\\sigma$ from the best fit. Each annulus contains 42,025 unique realisations drawn from the MCMC chain. The innermost annulus was drawn from the lowest 68.3 per cent of all $\\chi^2$ values, the middle annulus contained the next best 27.2 per cent of values, and the outer annulus contained the worst 4.5 per cent of solutions (i.e. those falling $2-3\\sigma$ away from the best fit). The result is a set of ``clones'' which fall within $3\\sigma$ of the best-fit solution, thus representing a reasonable region of parameter space within which we explore the dynamical stability of the proposed planetary system, using the constraints afforded by the existing observational data.\n\nWe then proceed to perform lengthy dynamical simulations of each of the 126,075 solutions generated by this method. We used the Hybrid integrator within the $n$-body dynamics package \\textit{Mercury} \\citep{1999Chambers} to integrate the solutions forwards in time for a period of 100 Myr. The simulations are brought to a premature end if either of the planets being simulated is ejected from the system, is flung into the central star, or if the two planets collide with one another. When such events occur, the time at which the collision or ejection occurred is recorded, giving us the lifetime for that particular run. As such, our suite of simulations yields 126,075 tests of the candidate planetary system, allowing us to study how its stability varies as a function of the particular details of the solution chosen to explain the observational data.\n\nWe determine the best-fit parameters and uncertainties for each system using the code Exoplanet Mcmc Parallel tEmpering Radial velOcity fitteR\\footnote{\\href{https:\/\/github.com\/ReddTea\/astroEMPEROR}{https:\/\/github.com\/ReddTea\/astroEMPEROR}} ({\\sc astroEMPEROR}), which uses thermodynamic methods combined with MCMC. Our approach has previously been established and described in \\cite{2019Marshall} and \\cite{EightSingle}. We summarise the input values and constraints used in the fitting presented in this work for the sake of reproducibility. Given that our goal was to test the feasibility of the exoplanetary systems as presented in the literature, we restricted {\\sc astroEMPEROR} to consider zero, one, or two planetary signals in the radial velocity data; dynamical configurations with additional planetary companions in orbits that could mimic a single planetary companion, e.g. two resonant planets looking like a single eccentric planet (for a total of three planetary companions), were not considered in this analysis. The planetary fitting parameters were the orbital period ($P$), line-of-sight mass ($M\\sin i$), orbital eccentricity ($e$), longitude of periastron ($\\omega$), and mean anomaly ($M$). We also include an additional jitter term when fitting the data. We initialised the locations of the walkers in the MCMC fitting at their best-fit values from the \\textit{Systemic} console fit, plus a small random scatter. The priors on each parameter were flat and unbounded, i.e. with uniform probability between $\\pm\\infty$, except for the orbital eccentricities, which had folded Gaussian priors, and the jitter term, which was a Jeffreys function (but still unbounded between $\\pm\\infty$). 
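To make the clone-selection step above concrete, a minimal sketch is given here (in Python; the arrays \\texttt{samples} and \\texttt{chi2}, holding one row per posterior sample and the corresponding goodness-of-fit values, are illustrative placeholders rather than the actual \\textit{Systemic} output). \n\\begin{verbatim}\nimport numpy as np\n\ndef select_clones(samples, chi2, n_per_annulus=42025, seed=0):\n    # Split the posterior samples into the 0-1, 1-2 and 2-3 sigma\n    # annuli of the chi^2 distribution (the lowest 68.3, next 27.2\n    # and worst 4.5 per cent of chi^2 values) and draw n_per_annulus\n    # clones from each annulus (assumes each annulus is large enough).\n    rng = np.random.default_rng(seed)\n    edges = np.percentile(chi2, [0.0, 68.3, 95.5, 100.0])\n    clones = []\n    for lo, hi in zip(edges[:-1], edges[1:]):\n        annulus = np.where((chi2 >= lo) & (chi2 <= hi))[0]\n        clones.append(samples[rng.choice(annulus, n_per_annulus, replace=False)])\n    return np.vstack(clones)   # 3 x 42,025 = 126,075 trial solutions\n\\end{verbatim}\n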
The parameter space was surveyed by 150 walkers at five temperatures over 15\\,000 steps, with the first 5\\,000 steps being discarded as the burn-in phase.\n\n\\section{Results}\n\\label{sec:Res}\n\n\\subsection{HD~67087}\n\nThe HD~67087 system is catastrophically unstable, as illustrated by the results of our stability analysis in Fig. \\ref{fig:HD67087_stability}. In this plot it is clear that the most stable solutions cluster toward the largest ratios of semi-major axes, and the smallest eccentricities. Even in this limit, the longest-lived solutions that plausibly represent the observations are still only stable for 10$^{6}$ yrs, out of a total integration time of 10$^{8}$ yrs. This leads us to the interpretation that the HD~67087 system, as inferred from the available radial velocity data, is dynamically infeasible. Given this high degree of instability, we do not attempt to determine a global best-fit solution for the system parameters.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD67087_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD67087_MRatio_MaxEcc_200_45.png}\n \\caption{Visualisation of the dynamical stability of the HD~67087 planetary system. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~67087b and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of the largest initial eccentricity fit and the mass ratio between HD~67087b and c. The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. We find no stable solutions that last the full 100 Myr duration of the dynamical simulations close to the nominal best-fit orbital solution for the planets, with the only stable solutions lying at the extreme edges of the parameter space toward low eccentricities, large separations and low mass ratios. \\label{fig:HD67087_stability}}\n\\end{figure*} \n\n\\subsection{HD~110014}\n\nThe HD~110014 system is found to be dynamically stable, with a broad swathe of parameter space centred on the nominal solution producing system architectures that last for the full 10$^{8}$ yrs of our dynamical integrations. We show the results of the stability analysis, sampling the 3-$\\sigma$ parameter space around the nominal orbital solution determined from the radial velocities in Fig. \\ref{fig:HD110014_stability}. The results of the Bayesian analysis, showing what we infer to be the global best-fit parameters for the system, are presented in Fig. \\ref{fig:HD110014_confusogram}.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD110014_MaxEcc_ARatio_1800_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD110014_MRatio_MaxEcc_1800_45.png}\n \\caption{Visualisation of the dynamical stability of the HD~110014 planetary system. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~110014b and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of the largest initial eccentricity fit and the mass ratio between HD~110014b and c. The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. We find stable solutions that last the full 100 Myr duration of the dynamical simulations close to the nominal best-fit orbital solution for the planets. 
\\label{fig:HD110014_stability}}\n\\end{figure*}\n\n\\begin{figure*}[h]\n\t\\includegraphics[width=\\textwidth]{HD110014Triangle.pdf}\n\t\\caption{Bayesian posterior distributions of HD~110014 b and HD~110014 c's orbital parameters derived from {\\sc astroemperor}. From left to right (top to bottom), the parameters are $K_{b}$, $P_{b}$, $\\omega_{b}$, $\\phi_{b}$, $e_{b}$, $K_{c}$, $P_{c}$, $\\omega_{c}$, $\\phi_{c}$ and $e_{c}$. Credible intervals are denoted by the solid contours with increments of 1-$\\sigma$.}\n \\label{fig:HD110014_confusogram}\n\\end{figure*}\n\n\\subsection{HD~133131A}\n\nThe HD~133131A system shows a very complex parameter space in the stability plots. As one would expect, the stability of the system generally increases towards lower orbital eccentricities and lower mass ratios between the two planetary components. The overall stability appears to be insensitive to the ratio of the semi-major axes for the planets, with long-lived solutions possible across the full range of values probed for this parameter. Interestingly, we demonstrate that stable architectures for the planetary system exist in both the high and low orbital eccentricity scenarios for the system. We show the results of the stability analysis, sampling the 3-$\\sigma$ parameter space around the nominal orbital solution determined from the radial velocities in Fig. \\ref{fig:HD133131A_stability}. The results of the Bayesian analysis, showing what we infer to be the global best-fit parameters for the system, are presented in Fig. \\ref{fig:HD133131A_confusogram}. Further observations to refine the planet properties of this system will be required to definitively characterise its dynamical stability.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD133131AH_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AH_MRatio_MaxEcc_200_45.png}\\\\\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AL_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AL_MRatio_MaxEcc_200_45.png}\n \\caption{Plots of the dynamical stability of the HD~133131A planetary system for both the high eccentricity (top) and low eccentricity (bottom) orbital solutions. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~133131Ab and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of of the largest initial eccentricity fit and the mass ratio between HD~133131Ab and c. The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. The stability revealed by our dynamical simulations is complex, with regions of both extreme stability (log(lifetime) $\\sim$ 100 Myrs) and instability (log(lifetime) $\\sim$ 100 yrs) lying within the 3-$\\sigma$ reach of the nominal best-fit orbital parameters. \\label{fig:HD133131A_stability}}\n\\end{figure*} \n\n\\begin{figure*}\n\t\\includegraphics[width=\\textwidth]{HD133131ATriangle.pdf}\n\t\\caption{Bayesian posterior distributions of HD~133131A b and HD~133131A c's orbital parameters derived from {\\sc astroemperor}. From left to right (top to bottom), the parameters are $K_{b}$, $P_{b}$, $\\omega_{b}$, $\\phi_{b}$, $e_{b}$, $K_{c}$, $P_{c}$, $\\omega_{c}$, $\\phi_{c}$ and $e_{c}$. 
Credible intervals are denoted by the solid contours with increments of 1-$\\sigma$.}\n \\label{fig:HD133131A_confusogram}\n\\end{figure*}\n\n\\section{Discussion}\n\\label{sec:Dis}\n\nThe results of our dynamical modelling for the three systems considered in this work, HD~67087, HD~110014 and HD~133131A show three distinctly different outcomes. For the first system tested, HD~67087, we find no orbital solutions that exhibit long-term dynamical stability. As a result, we are forced to conclude that, if the planets proposed to orbit that star are real, they must move on orbits significantly different from those proposed in the discovery work, and sampled in our simulations. It seems likely that new radial velocity observations of HD~67087, extending the temporal baseline over which the star has been observed, will yield fresh insights to the system -- either significantly constraining and altering the proposed orbit for the outermost planet, or even revealing that that eccentric solution is in fact the result of multiple unresolved planets at large orbital radii. Such an outcome is far from unusual -- and indeed, it is often the case that, with more data, a single eccentric planet seen in RV data is resolved to actually be two planets moving on near circular orbits \\citep[e.g.][]{2013Wittenmyer,EightSingle}. For now, however, we can do no more than to call the existence of HD~67087~c into question, pending the acquisition of such additional data.\n\nIn contrast to the instability of HD~67087, our simulations of the HD~110014 system reveal that the best-fit solution for that two-planet system lies in a broad region of strong dynamical stability. In this case, our simulations simply reveal that the system, as proposed in the discovery work, is dynamically feasible -- and in a sense, the simulations add little beyond that.\n\nThe case of HD~133131A is somewhat more interesting. Here, our simulations reveal that solutions that fit the observational data can exhibit both strong dynamical stability, and extreme instability (with dynamical lifetimes of just a few years). Both the high- and low-eccentricity solutions considered in \\citet{2016Teske} can produce scenarios that are stable for the full 100 Myr of our simulations. In both the high- and low-eccentricity cases, the stable solutions cluster around the least eccentric available scenarios. The more widely separated the two planets, the more eccentric their orbits can be before instability occurs -- a natural result of the stability being driven by the minimum separation between the planets, rather than their orbital semi-major axes. The more widely the semi-major axes of the orbits are spaced, the more eccentric they must be to bring the planets into close proximity. These results show once again the benefits inherent to such dynamical analysis -- reminding us how studying the dynamical evolution of a given system can help to provide stronger constraints on the orbits of the planets contained therein than is possible by studying the observational data on their own.\n\nA comparison of our results to the analysis of the AMD stability criterion presented in \\cite{2017Laskar} shows agreement between the two different techniques for the dynamical stability of the three systems. Whilst HD~67087 and HD~110014 are respectively very clear cut cases of an unstable and a stable system, HD~133131A exhibits a more complex behaviour. 
HD~133131A may be dynamically stable, but the inferred lifetime for the planetary system as proposed is sensitive to the chosen initial conditions; this system therefore represents an edge case of stability where limitations of available data and the respective analyses provide no clear answer to the veracity of the previously inferred planetary system.\n\nCombining these new results with our previous dynamical analyses, as summarised in the introduction, we may consider that the AMD criterion is a reliable estimator of stability for planetary systems. There are 13 systems (out of 131 considered in that work) from \\cite{2017Laskar} that have had dynamical modelling of their stability. In \\cite{2017Laskar}, a planetary system is considered strongly stable if all planet-pairs have $\\beta$ values less than 1, such that collisions are impossible whilst weakly stable planetary systems are those in which the inner-most planet might collide with the star without disrupting the remainder of the planetary system. In five systems, both the AMD criterion and dynamical modelling agree on their dynamical stability (HD~142, HD~159868, NN Ser (AB), GJ~832, and HD~110014); the planets in each of these systems are dynamically well separated and therefore not strongly interacting \\citep[][this work]{2011Horner,2012bWittenmyer,2014bWittenmyer}. Six systems are unstable according to the AMD criterion with values of $\\beta$ in the range 1 to 5 for the planet pair (HD~155358, 24 Sex, HD~200964, HD~73526, HD~33844, HD~47366), but all are in mean motion resonances and have been demonstrated to be dynamically stable through $n$-body simulations \\citep{2012Robertson,2012cWittenmyer,2014aWittenmyer,2016Wittenmyer,2019Marshall}. The remaining two systems (HD~67087, HD~133131A) are dynamically unstable in both the AMD and dynamical analysis (this work). However, dynamical analysis of the HD~133131A system reveals regions of dynamical stability consistent with the observed radial velocities, prompting the need for further investigation of this system and its architecture. Neither of these two unstable planetary systems have $\\beta$ values radically different from those of the planetary systems in resonance, or each other, such that determining their stability can only be carried out using dynamical simulations. The existence of such systems in the known planet population as demonstrated in our analysis therefore showcases the necessity of performing long duration dynamical analyses of proposed planetary system architectures to reveal the complex dynamical interplay between high mass planets, the evolution of their orbital elements, and determine what constraints this places on the available parameter space for the endurance of the proposed planetary system over its lifetime.\n\n\\section{Conclusions}\n\\label{sec:Con}\n\nWe re-analysed the dynamical stability of the exoplanet systems around HD~67087, HD~110014, and HD~133131A, using available radial velocity data. These three planetary systems have poorly constrained orbital parameters, and had previously been identified as being potentially unstable. We combine a determination of the best-fit orbital parameters from least-squares fitting to the data with $n$-body simulations to determine the global best-fit solution for the planetary system architectures, and thereafter determine the probability distribution of the orbital solutions through Bayesian inference. 
\n\nOur dynamical analysis confirms that the published planetary system parameters for HD~67087bc are dynamically unstable on very short timescales, and we must conclude that the system, as published, is dynamically unfeasible. As more data are collected for the HD~67087 system, it seems likely that the true nature of the candidate planets therein will be revealed, and that future planetary solutions for that system will veer towards dynamical stability as the planetary orbits become better constrained. \n\nIn the case of HD~110014 bc we demonstrate that the system parameters can be dynamically stable for the full duration of our 100 Myr integrations. The third system, HD~133131A , exhibits much more complex behaviour, with HD~133131A bc being strongly unstable over much of the parameter space exhibited in this work including the region encompassing the nominal best-fit to the orbital parameters. In agreement with previous analysis of this system, we strongly disfavour a high eccentricity orbital solution for planet c. Additional observations of this system will be required to more precisely determine the planetary properties for HD~133131A bc and thereby categorically rule on the plausibility of the proposed planetary system.\n\nThese results demonstrate the complementarity of various techniques to deduce the stability of planetary systems, with good agreement between the results of our various works, and that of the AMD approach. We highlight the appropriateness of dynamical simulations for determination the long-term stability of planetary systems in the presence of strongly interacting planets, which although costly in a computing sense capture the full essence of planetary interaction in such systems which is not possible with other techniques. We finally assert that the orbital parameters for these three systems which have been determined in this work (as summarised in Table 3) should be the accepted values adopted by exoplanet archives or elsewhere. This work is thus one additional thread in the tapestry of cross-checking of published results through various means that ensures the reliability of archival information on planetary properties and the architectures of planetary systems which are essential to inform models of the formation and evolution of the exoplanet population \\citep[e.g. ][]{2019Childs,2019Denham,2020He,2020VolkMalhotra}.\n\n\\section*{Acknowledgements}\n\nThis is a pre-copyedited, author-produced PDF of an article accepted for publication in MNRAS following peer review. 
The version of record Marshall et al., 2020, MNRAS, 494, 2, 2280--2288 is available online \\href{https:\/\/academic.oup.com\/mnras\/article\/494\/2\/2280\/5819459}{here}.\n\nWe thank the anonymous referee for their comments which helped to improve the article.\n\nThis research has made use of NASA's Astrophysics Data System and the SIMBAD database, operated at CDS, Strasbourg, France.\n\nJPM acknowledges research support by the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and Academia Sinica under grant AS-IA-106-M03.\n\n\\textit{Software}: This research has made use of the following Python packages: \\textsc{matplotlib} \\citep{2007Hunter}; \\textsc{numpy} \\citep{2006Oliphant}; \\textsc{pygtc} \\citep{2016Bocquet}; \\textsc{emcee} \\citep{2013ForemanMackey}; \\textsc{corner} \\citep{2016ForemanMackey}; \\textsc{mercury} \\citep{1999Chambers}.\n\n\n\n\n\n\n\\bibliographystyle{mnras}\n
Long term follow-up of known planet host systems is therefore desirable to refine the orbital parameters for known companions, to infer the presence of additional companions at lower masses and\/or larger semi-major axes \\citep[e.g.][]{2017BeckerAdams,EightSingle,RVDItech,181433,2019Denham,Rick19}, and to disentangle the complex signals produced by planets on resonant \\citep[e.g.][]{Ang10,2012cWittenmyer,2014aWittenmyer,2016Wittenmyer} or eccentric orbits \\citep[e.g.][]{2013Wittenmyer,2017cWittenmyer}. Equally, due to the relative paucity of data on which planet discoveries are often based, it is possible for those initial solutions to change markedly as more data are acquired. The ultimate extension of this is that, on occasion, the process by which planetary solutions are fit to observational data can yield false solutions -- essentially finding local minima in the phase space of all possible orbital solutions that represent a good theoretical fit to the data whilst being unphysical. It is therefore important to check the dynamical feasibility of multi-planet solutions that appear to present a good fit to observational data -- particularly in those cases where such solutions invoke planets on orbits that offer the potential for close encounters between the candidate planets.\n\nFollowing this logic, we have in the past tested the stability of multi-planet systems in a variety of environments including around main sequence \\citep{2010Marshall,2012bWittenmyer,2015Wittenmyer,2017bWittenmyer}, evolved \\citep{2017aWittenmyer,2017cWittenmyer,2019Marshall}, and post-main sequence stars \\citep{2011Horner,2012Horner,QSVir,2012aWittenmyer,2013Mustill}. In some cases, our results confirmed that the proposed systems were dynamically feasible as presented in the discovery work, whilst in others, our analysis demonstrated that alternative explanations must be sought for the observed behaviour of the claimed ``planet-host'' star \\citep[e.g.][]{2011Horner,QSVir}. To ensure that our own work remains robust, we have incorporated such analysis as a standard part of our own exoplanet discovery papers. We test all published multi-planet solutions for dynamical stability before placing too great a confidence in a particular outcome. As an extension to this approach, we presented a revised Bayesian method to the previously adopted frequentist stability analysis in \\cite{2019Marshall}, and demonstrated the consistency between these approaches. \n\nRather than using direct dynamical simulations, the stability of a planetary system can also be inferred from a criterion derived from the planetary masses, semi-major axes, and conservation of the angular momentum deficit \\citep[AMD,][]{2000Laskar,2017Laskar}. AMD can be interpreted as measuring the degree of excitation of planetary orbits, with less excited orbits implying greater stability. The definition of AMD stability has been revised to account for the effect of mean motion resonances and close encounters on orbital stability \\citep{2017Petit,2018Petit}. Of the systems examined in this work, HD~110014 has been identified as being weakly stable, whilst HD~67087 and HD~133131A are both considered unstable according to AMD \\citep[see Figs. 6 and 7,][]{2017Laskar}. 
In our previous dynamical studies, we find good agreement between the stability inferred from AMD and our dynamical simulations with 13 systems in common between them, of which nine were classified unstable and four stable, one marginally so \\citep{2012Horner,2012Robertson,2012aWittenmyer,2012bWittenmyer,2012cWittenmyer,2014aWittenmyer,2014bWittenmyer,2015Wittenmyer,2016Wittenmyer,2016Endl}. In this paper we examine the dynamical stability of the three multi-planet systems, HD~67087, HD~110014, and HD~133131A, as a critical examination of their stability and a further test of the reliability of AMD for the identification of instability in exoplanet systems. \n\nHD~67087, observed as part of the Japanese Okayama Planet Search programme \\citep{2005Sato}, was discovered to host a pair of exoplanets by \\cite{2015Harakawa}. The candidate planets are super-Jupiters, with $m~\\sin i$ of 3.1\\,$M_{\\rm Jup}$\\ and 4.9\\,$M_{\\rm Jup}$, respectively. They move on orbits with (${a,e}$) of ({1.08, 0.17}) and ({3.86, 0.76}), respectively, which would place the outer planet amongst the most eccentric Jovian planets identified thus far. The authors noted that the orbit and mass of the outer planet are poorly constrained. \n\nHD\\,110014 was found to host a planet by \\citet{2009deMedeiros}; the second companion was identified through re-analysis of archival spectra taken by the FEROS instrument \\citep{1998KauferPasquini} looking to derive an updated orbit for planet b \\citep{2015Soto}. The two candidate planets have super-Jupiter masses, and \\citet{2015Soto} cautioned that the proposed second planet was worryingly close in period to the typical rotation period of K giant stars. However, their analysis of the stellar photometry was inconclusive in identifying its activity as the root cause for the secondary signal. \n\nHD~133131A's planetary companions were reported in \\citet{2016Teske}, based on precise radial velocities primarily from the Magellan Planet Finder Spectrograph \\citep{2006Crane,2008Crane,2010Crane}. Their data supported the presence of two planets, where the outer planet is poorly constrained due to its long period. \\citet{2016Teske} ran a single dynamical stability simulation on the adopted solution and found it to remain stable for the full $10^5$ yr duration. The authors presented both a low- and high- eccentricity solution, reasoning that in a formal sense, the two solutions were essentially indistinguishable. They favoured the low-eccentricity model ($e_2=0.2$) for dynamical stability reasons. There is precedent in the literature for this choice, since it does happen that the formal best fit can be dynamically unfeasible whilst a slightly worse fit pushes the system into a region of stability \\citep[e.g.][]{2013Mustill, 2014Trifonov, 2017bWittenmyer}. \n\nThe remainder of the paper is laid out as follows. We present a brief summary of the radial velocity observations and other data (e.g. stellar parameters) used for our reanalysis in Sect. \\ref{sec:Obs} along with an explanation of our modelling approach. The results of the reanalyses for each target are shown in Sect. \\ref{sec:Res}. A brief discussion of our findings in comparison to previous work on these systems is presented in Sect. \\ref{sec:Dis}. Finally, we present our conclusions in Sect. 
\\ref{sec:Con}.\n\n\\section{Observations and methods}\n\\label{sec:Obs}\n\n\\subsection{Radial Velocity Data}\n\nWe compiled radial velocity values from the literature for the three systems examined in this work; the origin of these data are summarised in Table \\ref{tab:rvs}. \n\n\\begin{table}\n\\centering\n\\caption{Table of references for the radial velocities data used in this work. \\label{tab:rvs}}\n\\begin{tabular}{ll}\n \\hline\n Target & References \\\\\n \\hline\\hline\n HD~67087 & \\cite{2015Harakawa} \\\\\n HD~110114 & \\cite{2009deMedeiros} \\\\\n HD~133131A & \\cite{2016Teske} \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table*}\n\\centering\n\\caption{Planetary orbital parameters based on \\textit{Systemic} fits to radial velocity data. Semi-major axes were calculated using measured orbital periods and stellar masses taken from the NASA Exoplanet Archive. \\label{tab:systemic}}\n\\begin{tabular}{lcccccc}\n \\hline\n & HD\\,67087 b & HD\\,67087 c & HD\\,110014 b & HD\\,110014 c & HD\\,133131A b & HD\\,133131A c \\\\\n \\hline \\hline\n Amplitude [\\mbox{m\\,s$^{-1}$}] & 74.0~$\\pm$~3.0 & 54.0~$\\pm$~4.0 & 36.949$\\pm$0.750 & 5.956$\\pm$1.617 & 135.315$\\pm$3.640 & 64.660$\\pm$3.966 \\\\\n Period [days] & 352.3$\\pm$1.7 & 2380$^{+167}_{-141}$ & 877.5$\\pm$5.2 & 130.125$\\pm$0.096 & 648.$\\pm$3 & 5342$^{+7783}_{-2009}$ \\\\\n Mean anomaly [deg] & 35$^{+20}_{-16}$ & 94$^{+28}_{-35}$ & 155$\\pm$4 & 231$\\pm$3 & 265$\\pm$11 & 188$^{+104}_{-123}$ \\\\\n Longitude [deg] & 281$^{+18}_{-15}$ & 256 (fixed) & 41$\\pm$3 & 302$\\pm$4 & 16.6$^{+4.7}_{-4.5}$ & 110$^{+25}_{-43}$ \\\\\n Eccentricity & 0.18$^{+0.07}_{-0.06}$ & 0.51 (fixed) & 0.259$\\pm$0.017 & 0.410$\\pm$0.022 & 0.340$\\pm$0.032 & 0.63$^{+0.25}_{-0.20}$ \\\\\n $M$ sin $i$ [$M_{\\rm Jup}$] & 3.10$^{+0.15}_{-0.14}$ & 3.73$^{+0.47}_{-0.45}$ & 10.61$\\pm$0.25 & 3.228$\\pm$0.098 & 1.418$\\pm$0.036 & 0.52$^{+0.45}_{-0.17}$ \\\\\n Semi-major Axis [au] & 1.08 & 3.87 & 2.32 & 0.65 & 1.44 & 5.88 \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}\n\\centering\n\\caption{Results from \\textsc{Astroemperor} exploration of parameter space around \\textit{Systemic} nominal best fit values for planetary companions to HD~133131A and HD~110014. 
\\label{tab:mcmc}}\n\\begin{tabular}{lcccc}\n \\hline\n & HD\\,133131A b & HD\\,133131A c & HD\\,110014 b & HD\\,110014 c \\\\\n \\hline \\hline\n Amplitude [\\mbox{m\\,s$^{-1}$}] & 36.949$\\pm$0.750 & 5.956$\\pm$1.617 & 135.315$\\pm$3.640 & 64.660$\\pm$3.966 \\\\\n Period [days] & 647.816$\\pm$1.575 & 3205.648$\\pm$948.063 & 865.206$\\pm$6.170 & 132.431$\\pm$0.279 \\\\\n Phase [deg] & 261.620$\\pm$4.850 & 31.734$\\pm$98.433 & 43.753$\\pm$72.179 & 341.373$\\pm$64.346 \\\\\n Longitude [deg] & 18.550$\\pm$2.165 & 113.777$\\pm$81.302 & 146.633$\\pm$17.229 & 236.903$\\pm$16.452 \\\\\n Eccentricity & 0.341$\\pm$0.021 & 0.263$\\pm$0.145 & 0.011$\\pm$0.015 & 0.294$\\pm$0.076 \\\\\n M sin $i$ [$M_{\\rm Jup}$]& 1.428$\\pm$0.099 & 0.388$\\pm$0.124 & 10.622$\\pm$0.757 & 2.581$\\pm$0.247 \\\\\n Semimajor Axis [au] & 1.435$\\pm$0.046 & 4.153$\\pm$0.800 & 2.350$\\pm$0.075 & 0.668$\\pm$0.023 \\\\\n \\hline\n Jitter [\\mbox{m\\,s$^{-1}$}] & 3.557$\\pm$1.254 & 0.466$\\pm$0.419 & 6.060$\\pm$1.856 & 13.350$\\pm$1.492 \\\\\n Offset [\\mbox{m\\,s$^{-1}$}] & -9.333$\\pm$4.787 & 12.321$\\pm$7.543 & 52.575$\\pm$4.737 & 72.198$\\pm$4.541 \\\\\n MA coefficient & 0.714$\\pm$0.531 & 0.466$\\pm$0.419 & 0.697$\\pm$0.214 & 13.350$\\pm$1.492 \\\\\n MA Timescale [days] & 4.158$\\pm$2.815 & 12.321$\\pm$7.543 & 9.793$\\pm$2.488 & 72.198$\\pm$4.541 \\\\\n Acceleration [\\mbox{m\\,s$^{-1}$}\/yr] & -1.435 & & -21.620 & \\\\\n \\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\subsection{Modelling}\n\nTo test the dynamical stability of these proposed planetary systems, we follow the updated dynamical methodology outlined in our previous work \\citep[][]{2017bWittenmyer,2019Marshall}.\n\nIn brief, we perform a fit to the published velocity data using the \\textit{Systemic Console} \\citep{2009Meschiari}. We then use the MCMC tool within \\textit{Systemic} to explore the parameter space about the best fit. The MCMC chain runs for $10^7$ steps, discarding the first 10,000, and we then draw the trial solutions for our dynamical stability simulations from these posteriors. Using this data, we populate three ``annuli'' in $\\chi^2$ space corresponding to the ranges $0-1\\sigma$, $1-2\\sigma$, and $2-3\\sigma$ from the best fit. Each annulus contains 42,025 unique realisations drawn from the MCMC chain. The innermost annulus was drawn from the lowest 68.3 per cent of all $\\chi^2$ values, the middle annulus contained the next best 27.2 per cent of values, and the outer annulus contained the worst 4.5 per cent of solutions (i.e. those falling $2-3\\sigma$ away from the best fit). The result is a set of ``clones'' which fall within $3\\sigma$ of the best-fit solution, thus representing a reasonable region of parameter space within which we explore the dynamical stability of the proposed planetary system, using the constraints afforded by the existing observational data.\n\nWe then proceed to perform lengthy dynamical simulations of each of the 126,075 solutions generated by this method. We used the Hybrid integrator within the $n$-body dynamics package \\textit{Mercury} \\citep{1999Chambers} to integrate the solutions forwards in time for a period of 100 Myr. The simulations are brought to a premature end if either of the planets being simulated is ejected from the system, is flung in to the central star, or if the two planets collide with one another. When such events occur, the time at which the collision or ejection occurred is recorded, giving us the lifetime for that particular run. 
As such, our suite of simulations yield 126,075 tests of the candidate planetary system, allowing us to study how its stability varies as a function of the particular details of the solution chosen to explain the observational data.\n\nWe determine the best-fit parameters and uncertainties for each system using the code Exoplanet Mcmc Parallel tEmpering Radial velOcity fitteR\\footnote{\\href{https:\/\/github.com\/ReddTea\/astroEMPEROR}{https:\/\/github.com\/ReddTea\/astroEMPEROR}} ({\\sc astroEMPEROR}), which uses thermodynamic methods combined with MCMC. Our approach has previously been established and described in \\cite{2019Marshall} and \\cite{EightSingle}. We summarise the input values and constraints used in the fitting presented in this work for the sake of reproducibility. Given that our goal was to test the feasibility of the exoplanetary systems as presented in the literature, we restricted {\\sc astroEMPEROR} to consider zero, one, or two planetary signals in the radial velocity data; dynamical configurations with additional planetary companions in orbits that could mimic a single planetary companion, e.g. two resonant planets looking like a single eccentric planet (for a total of three planetary companions), were not considered in this analysis. The planetary fitting parameters were the orbital period ($P$), line-of-sight mass ($M\\sin i$), orbital eccentricity ($e$), longitude of periastron ($\\omega$), and mean anomaly ($M$). We also include an additional jitter term when fitting the data. We initialised the locations of the walkers in the MCMC fitting at their best-fit values from the \\textit{Systemic} console fit, plus a small random scatter. The priors on each parameter were flat and unbounded i.e. with uniform probability between $\\pm\\infty$, except for the orbital eccentricities which had folded Gaussian priors, and the jitter term, which was a Jeffries function (but still unbound between $\\pm\\infty$). The parameter space was surveyed by 150 walkers at five temperatures over 15\\,000 steps, with the first 5\\,000 steps being discarded as the burn-in phase.\n\n\\section{Results}\n\\label{sec:Res}\n\n\\subsection{HD~67087}\n\nThe HD~67087 system is catastrophically unstable, as illustrated by the results of our stability analysis in Fig. \\ref{fig:HD67087_stability}. In this plot it is clear that the most stable solutions cluster toward the largest ratios of semi-major axes, and the smallest eccentricities. Even in this limit, the longest-lived solutions that plausibly represent the observations are still only stable for 10$^{6}$ yrs, out of a total integration time of 10$^{8}$ yrs. This leads us to the interpretation that the HD~67087 system, as inferred from the available radial velocity data, is dynamically infeasible. Given this high degree of instability, we do not attempt to determine a global best-fit solution for the system parameters.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD67087_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD67087_MRatio_MaxEcc_200_45.png}\n \\caption{Visualisation of the dynamical stability of the HD~67087 planetary system. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~67087b and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of of the largest initial eccentricity fit and the mass ratio between HD~67087b and c. 
The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. We find no stable solutions that last the full 100 Myr duration of the dynamical simulations close to the nominal best-fit orbital solution for the planets, with the only stable solutions lying at the extreme edges of the parameter space toward low eccentricities, large separations and low mass ratios. \\label{fig:HD67087_stability}}\n\\end{figure*} \n\n\\subsection{HD~110014}\n\nThe HD~110014 system is found to be dynamically stable, with a broad swathe of parameter space centred on the nominal solution producing system architectures that last for the full 10$^{8}$ yrs of our dynamical integrations. We show the results of the stability analysis, sampling the 3-$\\sigma$ parameter space around the nominal orbital solution determined from the radial velocities in Fig. \\ref{fig:HD110014_stability}. The results of the Bayesian analysis, showing what we infer to be the global best-fit parameters for the system, are presented in Fig. \\ref{fig:HD110014_confusogram}.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD110014_MaxEcc_ARatio_1800_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD110014_MRatio_MaxEcc_1800_45.png}\n \\caption{Visualisation of the dynamical stability of the HD~110014 planetary system. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~110014b and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of of the largest initial eccentricity fit and the mass ratio between HD~110014b and c. The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. We find stable solutions that last the full 100 Myr duration of the dynamical simulations close to the nominal best-fit orbital solution for the planets. \\label{fig:HD110014_stability}}\n\\end{figure*}\n\n\\begin{figure*}[h]\n\t\\includegraphics[width=\\textwidth]{HD110014Triangle.pdf}\n\t\\caption{Bayesian posterior distributions of HD~110014 b and HD~110014 c's orbital parameters derived from {\\sc astroemperor}. From left to right (top to bottom), the parameters are $K_{b}$, $P_{b}$, $\\omega_{b}$, $\\phi_{b}$, $e_{b}$, $K_{c}$, $P_{c}$, $\\omega_{c}$, $\\phi_{c}$ and $e_{c}$. Credible intervals are denoted by the solid contours with increments of 1-$\\sigma$.}\n \\label{fig:HD110014_confusogram}\n\\end{figure*}\n\n\\subsection{HD~133131A}\n\nThe HD~133131A system shows a very complex parameter space in the stability plots. As one would expect, the stability of the system generally increases towards lower orbital eccentricities and lower mass ratios between the two planetary components. The overall stability appears to be insensitive to the ratio of the semi-major axes for the planets, with long-lived solutions possible across the full range of values probed for this parameter. Interestingly, we demonstrate that stable architectures for the planetary system exist in both the high and low orbital eccentricity scenarios for the system. We show the results of the stability analysis, sampling the 3-$\\sigma$ parameter space around the nominal orbital solution determined from the radial velocities in Fig. \\ref{fig:HD133131A_stability}. The results of the Bayesian analysis, showing what we infer to be the global best-fit parameters for the system, are presented in Fig. \\ref{fig:HD133131A_confusogram}. 
Further observations to refine the planet properties of this system will be required to definitively characterise its dynamical stability.\n\n\\begin{figure*}\n \\includegraphics[width=0.45\\textwidth]{plot_HD133131AH_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AH_MRatio_MaxEcc_200_45.png}\\\\\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AL_MaxEcc_ARatio_200_45.png}\n\t\\includegraphics[width=0.45\\textwidth]{plot_HD133131AL_MRatio_MaxEcc_200_45.png}\n \\caption{Plots of the dynamical stability of the HD~133131A planetary system for both the high eccentricity (top) and low eccentricity (bottom) orbital solutions. On the left we show the log(lifetime) as a function of the largest initial eccentricity fit to HD~133131Ab and c and the ratio of their orbital semi-major axes, whilst on the right we show the log(lifetime) as a function of of the largest initial eccentricity fit and the mass ratio between HD~133131Ab and c. The colour bar shows the goodness of fit ($\\chi^{2}$) of each solution tested. The stability revealed by our dynamical simulations is complex, with regions of both extreme stability (log(lifetime) $\\sim$ 100 Myrs) and instability (log(lifetime) $\\sim$ 100 yrs) lying within the 3-$\\sigma$ reach of the nominal best-fit orbital parameters. \\label{fig:HD133131A_stability}}\n\\end{figure*} \n\n\\begin{figure*}\n\t\\includegraphics[width=\\textwidth]{HD133131ATriangle.pdf}\n\t\\caption{Bayesian posterior distributions of HD~133131A b and HD~133131A c's orbital parameters derived from {\\sc astroemperor}. From left to right (top to bottom), the parameters are $K_{b}$, $P_{b}$, $\\omega_{b}$, $\\phi_{b}$, $e_{b}$, $K_{c}$, $P_{c}$, $\\omega_{c}$, $\\phi_{c}$ and $e_{c}$. Credible intervals are denoted by the solid contours with increments of 1-$\\sigma$.}\n \\label{fig:HD133131A_confusogram}\n\\end{figure*}\n\n\\section{Discussion}\n\\label{sec:Dis}\n\nThe results of our dynamical modelling for the three systems considered in this work, HD~67087, HD~110014 and HD~133131A show three distinctly different outcomes. For the first system tested, HD~67087, we find no orbital solutions that exhibit long-term dynamical stability. As a result, we are forced to conclude that, if the planets proposed to orbit that star are real, they must move on orbits significantly different from those proposed in the discovery work, and sampled in our simulations. It seems likely that new radial velocity observations of HD~67087, extending the temporal baseline over which the star has been observed, will yield fresh insights to the system -- either significantly constraining and altering the proposed orbit for the outermost planet, or even revealing that that eccentric solution is in fact the result of multiple unresolved planets at large orbital radii. Such an outcome is far from unusual -- and indeed, it is often the case that, with more data, a single eccentric planet seen in RV data is resolved to actually be two planets moving on near circular orbits \\citep[e.g.][]{2013Wittenmyer,EightSingle}. For now, however, we can do no more than to call the existence of HD~67087~c into question, pending the acquisition of such additional data.\n\nIn contrast to the instability of HD~67087, our simulations of the HD~110014 system reveal that the best-fit solution for that two-planet system lies in a broad region of strong dynamical stability. 
In this case, our simulations simply reveal that the system, as proposed in the discovery work, is dynamically feasible -- and in a sense, the simulations add little beyond that.\n\nThe case of HD~133131A is somewhat more interesting. Here, our simulations reveal that solutions that fit the observational data can exhibit both strong dynamical stability, and extreme instability (with dynamical lifetimes of just a few years). Both the high- and low-eccentricity solutions considered in \\citet{2016Teske} can produce scenarios that are stable for the full 100 Myr of our simulations. In both the high- and low-eccentricity cases, the stable solutions cluster around the least eccentric available scenarios. The more widely separated the two planets, the more eccentric their orbits can be before instability occurs -- a natural result of the stability being driven by the minimum separation between the planets, rather than their orbital semi-major axes. The more widely the semi-major axes of the orbits are spaced, the more eccentric they must be to bring the planets into close proximity. These results show once again the benefits inherent to such dynamical analysis -- reminding us how studying the dynamical evolution of a given system can help to provide stronger constraints on the orbits of the planets contained therein than is possible by studying the observational data on their own.\n\nA comparison of our results to the analysis of the AMD stability criterion presented in \\cite{2017Laskar} shows agreement between the two different techniques for the dynamical stability of the three systems. Whilst HD~67087 and HD~110014 are respectively very clear cut cases of an unstable and a stable system, HD~133131A exhibits a more complex behaviour. HD~133131A may be dynamically stable, but the inferred lifetime for the planetary system as proposed is sensitive to the chosen initial conditions; this system therefore represents an edge case of stability where limitations of available data and the respective analyses provide no clear answer to the veracity of the previously inferred planetary system.\n\nCombining these new results with our previous dynamical analyses, as summarised in the introduction, we may consider that the AMD criterion is a reliable estimator of stability for planetary systems. There are 13 systems (out of 131 considered in that work) from \\cite{2017Laskar} that have had dynamical modelling of their stability. In \\cite{2017Laskar}, a planetary system is considered strongly stable if all planet-pairs have $\\beta$ values less than 1, such that collisions are impossible whilst weakly stable planetary systems are those in which the inner-most planet might collide with the star without disrupting the remainder of the planetary system. In five systems, both the AMD criterion and dynamical modelling agree on their dynamical stability (HD~142, HD~159868, NN Ser (AB), GJ~832, and HD~110014); the planets in each of these systems are dynamically well separated and therefore not strongly interacting \\citep[][this work]{2011Horner,2012bWittenmyer,2014bWittenmyer}. Six systems are unstable according to the AMD criterion with values of $\\beta$ in the range 1 to 5 for the planet pair (HD~155358, 24 Sex, HD~200964, HD~73526, HD~33844, HD~47366), but all are in mean motion resonances and have been demonstrated to be dynamically stable through $n$-body simulations \\citep{2012Robertson,2012cWittenmyer,2014aWittenmyer,2016Wittenmyer,2019Marshall}. 
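To make the quantities entering this comparison concrete, the short sketch below evaluates the angular momentum deficit (AMD) of a two-planet system, i.e. the numerator of the $\beta$ coefficient of \cite{2017Laskar}, for placeholder masses and orbital elements. The critical AMD in the denominator depends on the pair's mass and semi-major axis ratios through an implicit collision condition and is deliberately left here as a user-supplied input rather than reproduced; the sketch is only an illustration of the bookkeeping, not of the full AMD-stability machinery.
\\begin{verbatim}
# Minimal sketch: angular momentum deficit (AMD) of a two-planet system.
# The AMD-stability coefficient of Laskar & Petit (2017) is
# beta = AMD / AMD_critical; the critical AMD requires solving an implicit
# collision condition and is NOT reproduced here -- 'amd_critical' below
# is a user-supplied placeholder.
import numpy as np

G = 4.0 * np.pi**2   # AU^3 / (Msun yr^2)

def amd(mstar, masses, a, e, inc):
    """AMD = sum_k Lambda_k * (1 - sqrt(1 - e_k^2) * cos(i_k)),
    with Lambda_k ~ m_k * sqrt(G * Mstar * a_k); masses in Msun, a in AU."""
    masses, a, e, inc = map(np.asarray, (masses, a, e, inc))
    Lam = masses * np.sqrt(G * mstar * a)
    return np.sum(Lam * (1.0 - np.sqrt(1.0 - e**2) * np.cos(inc)))

def beta(mstar, masses, a, e, inc, amd_critical):
    """beta < 1 <-> AMD-stable pair, for the appropriate critical AMD."""
    return amd(mstar, masses, a, e, inc) / amd_critical

# Illustrative (hypothetical) coplanar two-planet configuration:
print(beta(1.0, [0.003, 0.002], [1.0, 2.5], [0.3, 0.1], [0.0, 0.0],
           amd_critical=1.0e-3))
\\end{verbatim}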
The remaining two systems (HD~67087, HD~133131A) are dynamically unstable in both the AMD and dynamical analysis (this work). However, dynamical analysis of the HD~133131A system reveals regions of dynamical stability consistent with the observed radial velocities, prompting the need for further investigation of this system and its architecture. Neither of these two unstable planetary systems have $\\beta$ values radically different from those of the planetary systems in resonance, or each other, such that determining their stability can only be carried out using dynamical simulations. The existence of such systems in the known planet population as demonstrated in our analysis therefore showcases the necessity of performing long duration dynamical analyses of proposed planetary system architectures to reveal the complex dynamical interplay between high mass planets, the evolution of their orbital elements, and determine what constraints this places on the available parameter space for the endurance of the proposed planetary system over its lifetime.\n\n\\section{Conclusions}\n\\label{sec:Con}\n\nWe re-analysed the dynamical stability of the exoplanet systems around HD~67087, HD~110014, and HD~133131A, using available radial velocity data. These three planetary systems have poorly constrained orbital parameters, and had previously been identified as being potentially unstable. We combine a determination of the best-fit orbital parameters from least-squares fitting to the data with $n$-body simulations to determine the global best-fit solution for the planetary system architectures, and thereafter determine the probability distribution of the orbital solutions through Bayesian inference. \n\nOur dynamical analysis confirms that the published planetary system parameters for HD~67087bc are dynamically unstable on very short timescales, and we must conclude that the system, as published, is dynamically unfeasible. As more data are collected for the HD~67087 system, it seems likely that the true nature of the candidate planets therein will be revealed, and that future planetary solutions for that system will veer towards dynamical stability as the planetary orbits become better constrained. \n\nIn the case of HD~110014 bc we demonstrate that the system parameters can be dynamically stable for the full duration of our 100 Myr integrations. The third system, HD~133131A , exhibits much more complex behaviour, with HD~133131A bc being strongly unstable over much of the parameter space exhibited in this work including the region encompassing the nominal best-fit to the orbital parameters. In agreement with previous analysis of this system, we strongly disfavour a high eccentricity orbital solution for planet c. Additional observations of this system will be required to more precisely determine the planetary properties for HD~133131A bc and thereby categorically rule on the plausibility of the proposed planetary system.\n\nThese results demonstrate the complementarity of various techniques to deduce the stability of planetary systems, with good agreement between the results of our various works, and that of the AMD approach. We highlight the appropriateness of dynamical simulations for determination the long-term stability of planetary systems in the presence of strongly interacting planets, which although costly in a computing sense capture the full essence of planetary interaction in such systems which is not possible with other techniques. 
We finally assert that the orbital parameters for these three systems which have been determined in this work (as summarised in Table 3) should be the accepted values adopted by exoplanet archives or elsewhere. This work is thus one additional thread in the tapestry of cross-checking of published results through various means that ensures the reliability of archival information on planetary properties and the architectures of planetary systems which are essential to inform models of the formation and evolution of the exoplanet population \\citep[e.g. ][]{2019Childs,2019Denham,2020He,2020VolkMalhotra}.\n\n\\section*{Acknowledgements}\n\nThis is a pre-copyedited, author-produced PDF of an article accepted for publication in MNRAS following peer review. The version of record Marshall et al., 2020, MNRAS, 494, 2, 2280--2288 is available online \\href{https:\/\/academic.oup.com\/mnras\/article\/494\/2\/2280\/5819459}{here}.\n\nWe thank the anonymous referee for their comments which helped to improve the article.\n\nThis research has made use of NASA's Astrophysics Data System and the SIMBAD database, operated at CDS, Strasbourg, France.\n\nJPM acknowledges research support by the Ministry of Science and Technology of Taiwan under grants MOST104-2628-M-001-004-MY3 and MOST107-2119-M-001-031-MY3, and Academia Sinica under grant AS-IA-106-M03.\n\n\\textit{Software}: This research has made use of the following Python packages: \\textsc{matplotlib} \\citep{2007Hunter}; \\textsc{numpy} \\citep{2006Oliphant}; \\textsc{pygtc} \\citep{2016Bocquet}; \\textsc{emcee} \\citep{2013ForemanMackey}; \\textsc{corner} \\citep{2016ForemanMackey}; \\textsc{mercury} \\citep{1999Chambers}.\n\n\n\n\n\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\subsection{Motivation}\n\\IEEEPARstart{T}{he} emerging of many dazzling innovative technologies such as ultra-massive MIMO, large-scale intelligent reflective surfaces (IRS) and wireless artificial intelligence (AI) \\cite{Saad,Yang}, etc., boosts the rapid development of broadband wireless communications. On the other hand, in the foreseeable future, many revolutionary and challenging application scenarios such as smart city \\cite{Zanella}, autonomous driving \\cite{Toutouh} and unmanned aerial vehicle (UAV) positioning \\cite{Bor}, etc, may call for not only broadband connections but also accurate environment information which include but are not limited to, the locations, shapes, status and electromagnetic characteristic of the stationary or moving objects and\/or the background scatterers, within that environment.\n\nSuch kind of environment sensing has long been accomplished by traditional Radar technology which has been regarded as a related but separate field from communication. However, the channel state information (CSI) obtained during the communication process usually contains certain knowledge of the environment \\cite{Sen, Rao}, and likewise, the environment sensing result also helps improve the accuracy of channel estimation and enhance the performance of communication \\cite{Jiaoicc,Jiaotwc}. As the wireless network operates in higher frequency with wider bandwidth and deploys denser base stations with a larger amount of antennas, its longly-neglected sensing capability inherited from the intrinsic nature of electromagnetic wave propagation becomes even more explorable. The signal processing principles employed in these two fields also tend to converge. 
As visioned in \\cite{Wild}, this may give rise to a promising new technology of JCAS or sensing-communication integration.\n\nOne major challenge in JCAS lies in the potentially large amount of unknown variables brought by the environment, even many more than those contained in the statistical channel models, which may make the problem rank-deficient and eventually insolvable. As a result, certain sparsity should be exploited. Fortunately, the targeted environment itself usually does possess certain sparsity, which in turn makes the communication channel sparse. For example, in a cellular communication network, buildings are sparsely located within the wireless network coverage, and in the indoor scenario, furniture and other items are sparsely distributed in the entire room. \n\nAs a matter of fact, either in conventional communication or in conventional sensing, sparsity has been fully exploited to achieve better and lower-complexity solutions under the compressed sensing (CS) framework. As in wireless communications, the CS-based channel estimation approaches exploiting the channel sparsity usually exhibit superiority in signaling overhead and computational complexity \\cite{Rao} compared to the conventional channel estimation approaches \\textit{et al.}\\cite{Sen}. Likewise, in the broad area of Radar sensing or computational imaging, the utilization of the intrinsic sparsity of objects or scatterers within an environment is key to their effective detection. Such problems are often modeled as sparse signal recovery problems based on pixel division according to the CS theory \\cite{Donoho, Candes}, and solved by the widely used methods like Sparse Bayesian Learning (SBL) \\cite{Zhang}, orthogonal matching pursuit (OMP) \\cite{Cai}, and Generalized Approximate Message Passing (GAMP) \\cite{Rangan}, and so on. This has been showcased by recent works in \\cite{taoy, yaojj}, where innovative microwave computational imaging methods with the aid of intelligent reflecting surfaces (IRS) are proposed based on a fast block sparse Bayesian learning (BSBL) algorithm. However, how to explore and exploit the sparsity in the JCAS scenario still lacks adequate study.\n\n\\subsection{Related Works}\nSo far, there have been different sorts of attempts to implement joint environment sensing and communication. \n\nTo list a few, in the Radar-Communication Coexistence (RCC) sort of approaches \\cite{wang2008, Saruthirathanaworakun}, effective interference cancellation and management mechanism are designed to achieve flexible coexistence between the radar and the communication systems. In contrast to the RCC system, the second sort of approach, i.e., Dual-Functional Radar-Communication (DFRC), aim to achieve an integration of radar and communication through sharing a common hardware platform, with improved sensing and communication performance through collaborative operation \\cite{Paul, Blunt}. In such systems, separate sensing and communication operations with shared radio resources have also been extensively studied. For instance, a multi-beam scheme is proposed in \\cite{ZhangAndrew}, which uses an analog array to generate multiple beams for simultaneous communication and radar scanning. \n\nThe third sort of works mainly concentrates on the purpose of environment sensing by pure usage of the conventional communication signals or by proper joint design of the radio signals, which can be found in \\cite{AndrewEnabling} and the references therein. 
As in one of such works on environment sensing with the aid of real deployed communication systems, \\cite{Tan} identified the behavior of a human body by extracting the Doppler frequency shift from the CSI conveyed by the WIFI communication signals in the indoor scenario. In \\cite{Daniels}, the authors used the orthogonal frequency division multiplexing communication waveform as a radar signal to achieve joint communication and environment sensing between vehicles. As an illustration of joint sensing and communication signaling, the authors in \\cite{ChenPerformance} designed a cooperative sensing unmanned aerial vehicle network (CSUN) with joint sensing and communication beams based on a common transceiver device. Considering the non-ideal factors of the channel, \\cite{Shahi} analyzed the communication channel capacity under the joint effect of Gaussian random noise and non-Gaussian radar sensing interference. In \\cite{Ahmadipour}, the authors theoretically analyzed the performance of the JCAS system under the condition of the memoryless broadcast channel. The related systematic design rules and methodologies for signaling and processing in such a JCAS system have raised increasing research effort recently, resulting in some interesting JCAS implementations based on machine learning \\cite{Aoudia}, joint data sensing and fusion \\cite{Schmitt}, and time-frequency-space full dimension utilization \\cite{Gaudio}, etc. \n\n\\subsection{Main Ideas and Contributions}\n\nIn this paper, we exploit the sparsity of both the structured multi-user signals and the unstructured environment to design a low-complexity joint multi-user communication and environment sensing scheme based on microwave computational imaging.\nDifferent from the above-mentioned application scenarios, we aim at designing a system with integrated sensing and communication capability based on existing wireless communication systems, which is capable of accomplishing the environment sensing (imaging) using the multi-user transmission signals and in turn, assisting the multi-user information detection with the channel information derived from the sensing results. To the best of our knowledge, there still lacks sufficient study on the design of such a JCAS system in the literature.\n\nIn order to achieve these goals, we employ the Sparse Code Multiple Access (SCMA) protocol \\cite{Nikopour,Taherzadeh} for multi-user uplink access, and employ an IRS \\cite{Garcia} to assist signal propagation and collection. SCMA is an elegant code-domain non-orthogonal multiple access (C-NOMA) method, which has extracted extensive research effort due to its superior performance and low detection complexity. In the SCMA scheme, the codebook for users to send data is sparse, and each user occupies a few but not all subcarriers. The sparsity of the user codebook effectively enhances the decoding performance of the received data. IRS is a promising technology to manipulate the electromagnetic environment with low-cost passive reflective elements by adjusting the phase of incident signals, which has been extensively used in wireless communications \\cite{Huang, Garcia, Wu, Chenw}. Based on the idea of computational imaging, IRS can cause known and diverse changes to electromagnetic signals, and such changes are beneficial to environment sensing. Therefore, IRS also exhibited great potential in environment sensing as recently described in \\cite{yaojj, taoy}. 
But in these cases, the base station is only used as an environment sensing device instead of a communication signal transmitter. It is noteworthy that, in this work, the environment sensing is just accomplished by making use of the IRS' reflection characteristics rather than by actively manipulating it. Our method not only enables environment sensing to obtain the help of IRS but also retains the ability of IRS to assist communication.\n\nOur design is depicted as follows. First, in the multiple access part, the user uses the SCMA protocol to communicate with the wireless AP. The signals are reflected by the IRS and then arrive at the AP. With the limited channel information obtained by an initial pilot sequence, we use the proposed SCMA-IRS-MPA algorithm (see Section. \\ref{mainalgm} for details) to conduct multi-user detection based on the sparse codebook of the transmitted signals. Then a sliding-window-based environment sensing algorithm is proposed to accomplish the environment sensing (imaging) with the received signal and recovered users' data, again based on the CS principle. Note that the proposed multiple user detection algorithm requires the environment (channel) knowledge, and the proposed environment sensing algorithm also needs to know the decoded data, so the two processes, i.e., multiple access and environment sensing, rely on each other. Therefore, finally, we propose an iterative and incremental algorithm to jointly recover the users' data and accomplish environment sensing at the same time with significantly reduced pilot overhead.\n\nThe main contributions of this paper are summarized as follows:\n\n\\begin{itemize}\n \\item We design a joint communication and environment sensing scheme, which exploits the sparsity of both the structured multi-user signals and the unstructured environment to achieve the integration of multiple access and environment sensing. \n \n \\item We develop a low-complexity iterative algorithm based on CS and generalized message passing theory to conduct the multi-user information detection and environment sensing (imaging). It is sliding-window based and runs alternately between the states of sensing with the decoded multi-user data and data decoding with the sensing results. This way, the overall system performance can be incrementally improved. \n \n \\item We analyze the decoding error and sensing accuracy performances as well as the computational complexity of the proposed algorithm, and investigate the impact of access user number on system performances, base on which, we approximate the optimal operating point. Extensive simulation results verify the convergence and effectiveness of the proposed algorithm.\n \n\\end{itemize}\n\nThe rest of this paper is organized as follows. Section \\uppercase\\expandafter{\\romannumeral2} presents the environment setting and system model in the uplink communication scenario. Section \\uppercase\\expandafter{\\romannumeral3} proposes the multiple access method based on the SCMA scheme. Section \\uppercase\\expandafter{\\romannumeral4} proposes the environment sensing method base on the CS theory. Section \\uppercase\\expandafter{\\romannumeral5} proposes the iterative and incremental algorithm based on low-density pilots jointly recovers the users' data and achieves environment sensing. In section \\uppercase\\expandafter{\\romannumeral6}, we discuss the trade-off relationship between the number of access users and system performance. 
Finally, section \\uppercase\\expandafter{\\romannumeral7} presents the numerical results, and section \\uppercase\\expandafter{\\romannumeral8} concludes the paper.\n\n\\textit{Notation}: Fonts $a$ and $\\mathbf{A}$ represent scalars and matrices, respectively.\n$\\mathbf{A}^{\\rm{T}}$ and $\\|\\mathbf{A}\\|_F$ denote the transpose and Frobenius norm of $ \\mathbf{A} $, respectively.\n$[\\mathbf{A}](i,j)$ represents $\\mathbf{A}$'s $(i,j)$-th element.\n$|\\cdot|$ and $[\\cdot]$ denote the modulus and the concatenation of matrices, respectively.\n$\\odot $ represents the Hadamard product between two matrices.\nFinally, notation ${\\rm diag}(\\mathbf{a})$ represents a diagonal matrix with the entries of $\\mathbf{a}$ on its main diagonal, and $\\delta(\\cdot)$ is the Dirac delta function.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=3in]{fig1.eps}\n \\caption{The uplink multi-user communication scenario.}\n \\label{figsetting}\n \\end{figure}\n\n\\section{Environment Setting and System Model}\nAs shown in Fig. \\ref{figsetting}, a millimeter-wave multi-antenna AP and multiple single-antenna user equipments (UEs) are deployed in an indoor scenario. An IRS is deployed near the AP to assist the users' communication, and there are some target objects (serving as scatterers) in the scenario. We consider the uplink regime, i.e., multiple users simultaneously send data to the AP via a shared channel. Our goal is to accomplish the environment sensing while reliably obtaining the communication data, that is, to sense the positions and scattering coefficients of the scatterers in the environment and obtain the communication data of all the users at the same time.\n\n\\subsection{Environment Setting}\nLet the number of users in the environment be $N_{\\rm{u}}$ and the number of AP receiving antennas be $N_{\\rm{R}}$. In the uplink communication scenario, the channel from the user to the AP is mainly composed of three parts: the first part is the line-of-sight (LOS) path from the user directly to the AP. The second part is the path from the user to the IRS and then from the IRS to the AP. The third part is the multipath propagation path, in which the user signal is scattered by the scatterers and then reflected to the AP by the IRS. They are denoted as ${{\\bf{H}}^{{\\rm{LOS}}}} \\in \\mathbb{C}^{{N_{\\rm{u}}} \\times {N_{\\rm{R}}}}$, ${{\\bf{H}}^{{\\rm{IRS}}}} \\in \\mathbb{C}^{{N_{\\rm{u}}} \\times {N_{\\rm{R}}}}$, and ${{\\bf{H}}^{{\\rm{s}}}} \\in \\mathbb{C}^{{N_{\\rm{u}}} \\times {N_{\\rm{R}}}}$ respectively. Let the number of reflective elements of the IRS be $N_{\\rm{I}}$, and each element can set its amplitude reflection coefficient and phase shift independently, thereby controlling the relationship between the reflected signal and the incident signal. 
The reflection characteristic matrix of IRS is expressed as\n\\begin{equation}\n{\\bf{\\Theta }} = {\\rm{diag}}\\left( {{\\theta _1}, \\cdots, {\\theta _{{N_{\\rm{I}}}}}} \\right) \\in \\mathbb{C}^{{N_{\\rm{I}}} \\times {N_{\\rm{I}}}}, \\label{eq1}\n\\end{equation}\nwhere ${\\theta _{{n_{\\rm{I}}}}} = {\\rho _{{n_{\\rm{I}}}}}{e^{j{\\varphi _{{n_{\\rm{I}}}}}}}$ represents the reflection characteristic of the $n_{\\rm{I}}$ element of the IRS, ${\\rho _{{n_{\\rm{I}}}}} \\in \\left[ {0,1} \\right]$ and ${\\varphi _{{n_{\\rm{I}}}}} \\in \\left[ {{\\rm{0}},2\\pi } \\right]$ represents the amplitude reflection coefficient and phase shift of the $n_{\\rm{I}}$ element respectively, and the diag function represents the construction of a diagonal matrix.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=2in]{fig14.eps}\n \\caption{The discretized targeted environment object.}\n \\label{pixel}\n \\end{figure}\n\nWe discretize the room environment information and regard the environment information of the entire room as a point cloud. Each point in the point cloud represents the environment information of small cubes with sizes $l_{\\rm{s}}$, $w_{\\rm{s}}$, and $h_{\\rm{s}}$ around this point. These small cubes are called pixels. Assuming that the length, width, and height of the room are $L_{\\rm{s}}$, $W_{\\rm{s}}$, and $H_{\\rm{s}}$ respectively, the number of point clouds in the space is ${N_{\\rm{s}}} = {{{L_{\\rm{s}}}} \\mathord{\\left\/\n{\\vphantom {{{L_{\\rm{s}}}} {{l_{\\rm{s}}}}}} \\right.\n\\kern-\\nulldelimiterspace} {{l_{\\rm{s}}}}} \\times {{{W_{\\rm{s}}}} \\mathord{\\left\/\n{\\vphantom {{{W_{\\rm{s}}}} {{w_{\\rm{s}}}}}} \\right.\n\\kern-\\nulldelimiterspace} {{w_{\\rm{s}}}}} \\times {{{H_{\\rm{s}}}} \\mathord{\\left\/\n{\\vphantom {{{H_{\\rm{s}}}} {{h_{\\rm{s}}}}}} \\right.\n\\kern-\\nulldelimiterspace} {{h_{\\rm{s}}}}}$. The inside of each pixel may be empty, or there may be scatterers. We use a scattering coefficient ${x_{{n_{\\rm{s}}}}}$ to represent the scattering coefficient of the pixel where the $n_{\\rm{s}}$-th point cloud point is located. If the inside of the small cube is empty, then ${x_{{n_{\\rm{s}}}}} = 0$. As shown in the Fig. \\ref{pixel}, the target scatterer in Fig. \\ref{figsetting} is discretized. Therefore, the environmental information of the entire room can be expressed as\n\\begin{equation}\n{\\bf{x}} = {\\left[ {{x_1},{x_2}, \\cdots ,{x_{{N_{\\rm{s}}}}}} \\right]^{\\rm{T}}}. \\label{eq2}\n\\end{equation}\n\n\\subsection{System Model}\nMultiple users in the space share the same time-frequency resources. The frequency resources of communication are divided into $R$ orthogonal resource elements (OREs), and $N_u$ users send data on all OREs. 
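As a concrete illustration of the quantities just defined, the minimal sketch below builds the diagonal IRS reflection matrix of (\ref{eq1}) and a sparse voxelised scattering vector as in (\ref{eq2}). The room dimensions, voxel size, number of IRS elements and occupancy pattern are illustrative placeholders rather than the values used in our simulations.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# IRS with N_I elements: amplitude rho in [0,1], phase phi in [0, 2*pi)
N_I = 64
rho = np.ones(N_I)                        # unit-amplitude reflection
phi = rng.uniform(0.0, 2.0 * np.pi, N_I)
Theta = np.diag(rho * np.exp(1j * phi))   # eq. (1)

# Room of size L_s x W_s x H_s metres, voxel (pixel) size l_s x w_s x h_s
L_s, W_s, H_s = 6.0, 4.0, 3.0
l_s = w_s = h_s = 0.5
N_s = int(L_s / l_s) * int(W_s / w_s) * int(H_s / h_s)

# Sparse scattering-coefficient vector x (eq. (2)): most voxels are empty
x = np.zeros(N_s)
occupied = rng.choice(N_s, size=10, replace=False)
x[occupied] = rng.uniform(0.2, 1.0, size=10)
\\end{verbatim}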
Therefore, on the $r$-th ORE, the channel from the user to the AP can be expressed as\n\\begin{equation}\n\\begin{array}{l}\n {{\\bf{H}}_r} = {\\bf{H}}_r^{{\\rm{LOS}}} + {\\bf{H}}_r^{{\\rm{IRS}}} + {\\bf{H}}_r^{\\rm{s}}\\\\\n \\quad = {\\bf{H}}_r^{{\\rm{LOS}}} + {\\bf{H}}_r^{{\\rm{IRS1}}}{\\bf{\\Theta H}}_r^{\\rm{s1}}\\\\\n \\quad + \\left[ {\\begin{array}{*{20}{c}}\n {{\\bf{H}}_r^{\\rm{s}}\\left(1\\right)} & \\cdots & {{\\bf{H}}_r^{\\rm{s}}\\left(n_{\\rm{u}}\\right)}& \\cdots &{{\\bf{H}}_r^{\\rm{s}}\\left(N_{\\rm{u}}\\right)}\n \\end{array}} \\right]\n \\end{array}, \\label{eq3}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\bf{H}}_r^{\\rm{s}}\\left(n_{\\rm{u}}\\right) = {{\\bf{x}}^{\\rm{T}}}{\\rm{diag}}\\left( {{\\bf{H}}_r^{\\rm{s3}}}\\left(n_{\\rm{u}}\\right) \\right){\\bf{H}}_r^{\\rm{s2}}{\\bf{\\Theta H}}_r^{\\rm{s1}} \\in \\mathbb{C}^{{\\rm{1}} \\times {N_{\\rm{R}}}} \\label{eq4}\n\\end{equation}\nrepresents the channel coefficient from the $n_u$-th user to the AP after being scattered by the scatterers, and\n${\\bf{H}}_r^{{\\rm{LOS}}} = {\\bm{\\alpha }}_r^{{\\rm{LOS}}}\\odot {e^{j{\\bm{\\varphi }}_r^{{\\rm{LOS}}}}} \\in \\mathbb{C}^{{{N_{\\rm{u}}}} \\times {N_{\\rm{R}}}}$ represents the LOS channel coefficient from the user directly to the AP, with ${\\bm{\\alpha }}_r^{{\\rm{LOS}}}$ denoting the amplitude of the channel and ${e^{j{\\bm{\\varphi }}_r^{{\\rm{LOS}}}}}$ its phase shift. Similarly, ${\\bf{H}}_r^{{\\rm{IRS1}}} = {\\bm{\\alpha }}_r^{{\\rm{IRS1}}}\\odot {e^{j{\\bm{\\varphi }}_r^{{\\rm{IRS1}}}}} \\in \\mathbb{C}^{{{N_{\\rm{u}}}} \\times {N_{\\rm{I}}}}$, ${{\\bf{H}}_r^{{\\rm{s1}}}} = {\\bm{\\alpha }}_r^{\\rm{s1}}\\odot {e^{j{\\bm{\\varphi }}_r^{\\rm{s1}}}} \\in \\mathbb{C}^{{{N_{\\rm{I}}}} \\times {N_{\\rm{R}}}}$, ${\\bf{H}}_r^{\\rm{s3}}\\left(n_{\\rm{u}}\\right) = {\\bm{\\alpha }}_r^{\\rm{s3}}\\left(n_{\\rm{u}}\\right)\\odot {e^{j{\\bm{\\varphi }}_r^{{\\rm{s3}}}\\left(n_{\\rm{u}}\\right)}} \\in \\mathbb{C}^{{\\rm{1}} \\times {N_{\\rm{s}}}}$, ${\\bf{H}}_r^{{\\rm{s2}}} = {\\bm{\\alpha }}_r^{{\\rm{s2}}}\\odot {e^{j{\\bm{\\varphi }}_r^{{\\rm{s2}}}}} \\in \\mathbb{C}^{{{N_{\\rm{s}}}} \\times {N_{\\rm{I}}}}$ represent the LOS channel coefficient from the user to the IRS, from the IRS to the AP, from the user to the spatial point cloud location, and from the spatial point cloud location to the IRS, respectively.\n\nLet the $n_{\\rm{u}}$-th user's transmitted symbols on $R$ OREs be ${{\\bf{s}}_{{n_{\\rm{u}}}}} \\in \\mathbb{C}^{{{R}} \\times {1}}$, then at the $n_{\\rm{R}}$-th AP receiving antenna, the received data on all OREs can be expressed as\n\\begin{equation}\n{{\\bf{y}}_{{n_{\\rm{R}}}}} = \\sum\\limits_{{n_{\\rm{u}}} = 1}^{{N_{\\rm{u}}}} {{\\rm{diag}}\\left\\{ {{{\\bf{H}}\\left({n_{\\rm{u}}},{n_{\\rm{R}}}\\right)}} \\right\\}{{\\bf{s}}_{{n_{\\rm{u}}}}}} + {\\bf{w}}, \\label{eq5}\n\\end{equation}\nwhere ${{\\bf{H}}\\left({n_{\\rm{u}}},{n_{\\rm{R}}}\\right)} = \\left[ {\\begin{array}{*{20}{c}}\n {{{\\bf{H}}_1}\\left( {{n_u},{n_R}} \\right)}& \\cdots &{{{\\bf{H}}_R}\\left( {{n_u},{n_R}} \\right)}\n \\end{array}} \\right]$ , ${\\bf{H}}_r\\left(n_{\\rm{u}},n_{\\rm{R}}\\right) $ represents the $n_{\\rm{R}}$ column and $n_{\\rm{u}}$ row of ${{\\bf{H}}_r}$ calculated in (\\ref{eq3}), and $\\bf{w}$ the Gaussian white noise.\n\n\\section{The Multiple Access Scheme}\nIn order to recover multi-user communication data accurately, the wireless access algorithm can be used to achieve the separation and detection of multi-user transmission symbols under the premise of environmental prior 
information.\n\n\\subsection{Sparse Code Multiple Access}\nSCMA is an efficient code-domain non-orthogonal multiple access technology. It is based on low-density spectrum spreading. That means, a single user does not completely occupy all OREs, but only a few of them, which greatly reduces the difficulty of signal decoding. In the uplink multi-user SCMA communication scenario considered in this article, the $N_{\\rm{u}}$ users use the SCMA protocol to send their data to the AP and $N_{\\rm{u}}$ users share $R$ OREs. Each user has a total of $M$ input possibilities, and each user accesses the channel by using a unique sparse codebook ${{\\bf{C}}_{{n_{\\rm{u}}}}} \\in \\mathbb{C}^{{{R}} \\times {M}}$. Therefore, each user's codebook contains $M$ codewords. Let ${{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m\\right)} \\in \\mathbb{C}^{{{R}} \\times {1}}$ represent the $m$-th codeword of user $n_{\\rm{u}}$. Then the (\\ref{eq5}) can be expressed as\n\\begin{equation}\n{{\\bf{y}}_{{n_{\\rm{R}}}}} = \\sum\\limits_{{n_{\\rm{u}}} = 1}^{{N_{\\rm{u}}}} {{\\rm{diag}}\\left\\{ {{{\\bf{H}}\\left({{n_{\\rm{u}}},{n_{\\rm{R}}}}\\right)}} \\right\\}{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m\\right)}} + {\\bf{w}}, \\label{eq6}\n\\end{equation}\nwhere ${{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m\\right)}$ represents the sending symbols, it contains $d_{\\rm{v}}$ non-zero elements, that is, each user will only transmit on the OREs represented by $d_{\\rm{v}}$ non-zero elements. $N_{\\rm{u}}$ users perform overload transmission on all OREs, and the number of users $d_{\\rm{f}}$ transmitted on each ORE is constant. Since the codewords $\\bf{C}$ is sparse, not all users' codewords will collide on a single ORE. Fig. \\ref{figSCMA} shows an example of an uplink SCMA system, in which 6 users transmit on 4 OREs, thus $N_{\\rm{u}}$ = 6, $R$ = 4. Each user has its own codebook, and the codebook determines the OREs occupied by the user. In Fig. \\ref{figSCMA}, user 1 transmits on ORE 1 and 2, and user 2 transmits on ORE 3 and 4. The user's bitstream is mapped to the codewords by SCMA encoder after channel-coded, then transmitted to the receiver through the channel, and finally separated and decoded by the SCMA detector.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=6in]{fig2.eps}\n \\caption{The uplink SCMA system ($N_{\\rm{u}} = 6$, $R = 4$).}\n \\label{figSCMA}\n \\end{figure*}\n\nCodebook design is an important physical layer technology in the SCMA system. The sparsity of the codebook makes it possible for the SCMA receiver to use message passing algorithm (MPA) to decode. As mentioned above, the process of SCMA encoding is the process of mapping the binary bit stream to the complex domain. The codebook of each user is an ${R}\\times{M}$-dimensional matrix. Therefore, the SCMA encoder can be defined as: $f:\\mathbb{B}^{{{\\log }_2}M} \\to {\\cal X}$, where. ${\\cal X} \\subset \\mathbb{C} ^R,\\left| {\\cal X} \\right| = M$. Let $\\bf{b}$ represent the user's input bits, the corresponding codeword output can be expressed as ${\\cal X} = f\\left( {\\bf{b}} \\right)$, the codeword ${\\cal X}$ is an $R$-dimensional sparse complex vector, and the vector contains $N_{\\rm{c}} < R$ non-zero elements. 
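The resource-occupancy pattern behind this sparsity can be summarised by a binary ORE--user indicator matrix, as in the minimal sketch below. Only the assignments of users 1 and 2 follow the example of Fig. \ref{figSCMA} described above; the remaining columns are one arbitrary completion satisfying $d_{\rm{v}}=2$ and $d_{\rm{f}}=3$, so the matrix is illustrative rather than the codebook actually used.
\\begin{verbatim}
# One possible ORE-user indicator matrix F (rows = OREs, columns = users)
# for N_u = 6 users, R = 4 OREs, d_v = 2 OREs per user, d_f = 3 users per ORE.
import numpy as np

F = np.array([[1, 0, 1, 0, 1, 0],    # ORE 1
              [1, 0, 0, 1, 0, 1],    # ORE 2
              [0, 1, 1, 0, 0, 1],    # ORE 3
              [0, 1, 0, 1, 1, 0]])   # ORE 4

d_v = F.sum(axis=0)   # OREs occupied by each user  -> [2 2 2 2 2 2]
d_f = F.sum(axis=1)   # users colliding on each ORE -> [3 3 3 3]
print(d_v, d_f)

# A user's R x M codebook has non-zero rows only where its column of F is 1,
# so at most d_f of the N_u users interfere on any given ORE.
\\end{verbatim}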
Since SCMA encoding process combines bit-to-constellation mapping and spreading spectrum, the bit-to-constellation mapping can be expressed as: $g:\\mathbb{B} ^{{{\\log }_2}M} \\to {\\cal C},\\;\\;{\\cal C} \\subset \\mathbb{C} ^{N_{\\rm{c}}}$, where ${\\cal C}$ represents the constellation point of the $N_{\\rm{c}}$-dimensional complex constellation, so the SCMA encoder can also be expressed as : $f = {\\bf{V}}g$, where ${\\cal C} = f\\left( {\\bf{b}} \\right)$ and ${\\bf{V}} \\in \\mathbb{B} ^{R \\times N}$ is a binary mapping matrix, and the mapping matrix can map $N_{\\rm{c}}$-dimensional constellation points to $R$-dimensional SCMA codewords. Meanwhile, the mapping matrix of each user is different, and contains $R-N_{\\rm{c}} $ all-zero rows.\n\nDefine the codebook structure of SCMA as ${\\cal S}\\left( {{\\cal V},{\\cal G};{N_{\\rm{u}}},M,N_{\\rm{c}},R} \\right)$, where ${\\cal V}: = \\left[ {{{\\bf{V}}_{{n_{\\rm{u}}}}}} \\right]_{{n_{\\rm{u}}} = 1}^{{N_{\\rm{u}}}}$, ${\\cal G}: = \\left[ {{g_{{n_{\\rm{u}}}}}} \\right]_{{n_{\\rm{u}}} = 1}^{{N_{\\rm{u}}}}$. Therefore, the SCMA codebook design problem can be expressed as\n\\begin{equation}\n{{\\cal V}^*},{{\\cal G}^*} = \\arg \\mathop {\\max }\\limits_{{\\cal V},{\\cal G}} {\\cal M}\\left( {{\\cal S}\\left( {{\\cal V},{\\cal G};{N_{\\rm{u}}},M,N_{\\rm{c}},R} \\right)} \\right), \\label{eq7}\n\\end{equation}\nwhere ${\\cal M}$ is a codebook design standard. Since there is no unified design standard at present, there are many methods for SCMA codebook design problems, such as rearranging the real and imaginary parts of the constellation points and designing codebooks based on theories such as constellation interleaving and rotation. These methods can achieve a suboptimal solution to the SCMA codebook design problem.\n\n\\subsection{SCMA-IRS-MPA Decoder}\\label{mainalgm}\nIn the above-mentioned SCMA-IRS uplink transmission scheme, there are a total of ${M^{{N_{\\rm{u}}}}}$ combinations of user codewords. The Maximum Likelihood (ML) decoder can provide the theoretically optimal symbol error rate (SER) performance by performing a traversal search on all codewords combinations. The estimated transmit codewords of all users by the ML decoder can be expressed as\n\\begin{equation}\n\\begin{array}{l}\n {{{\\bf{\\hat C}}}_{{\\rm{ML}}}} = \\arg \\mathop {\\min }\\limits_{j \\in {M^{{n_{\\rm{u}}}}}} \\\\\n \\quad \\quad {\\left\\| {{{\\bf{y}}_{{n_{\\rm{R}}}}} - \\sum\\limits_{{n_{\\rm{u}}} = 1}^{{N_{\\rm{u}}}} {\\left( {{\\rm{diag}}\\left( {{\\bf{H}}\\left( {{n_{\\rm{u}}},{n_{\\rm{R}}}} \\right)} \\right){{\\bf{C}}_{{n_{\\rm{u}}}}}\\left( {{\\bf{m}}\\left( j \\right)} \\right)} \\right)} } \\right\\|^2}\n \\end{array}, \\label{eq8}\n\\end{equation}\nwhere ${{\\bf{\\hat C}}_{\\rm{ML}}} = \\left[ {{{{\\bf{\\hat c}}}_{1}}, \\cdots ,{{{\\bf{\\hat c}}}_{{N_{\\rm{u}}}}}} \\right] \\in \\mathbb{C} ^{R \\times {N_{\\rm{u}}}}$, ${\\bf{m}}(j)$ represents the value of the $j$-th combination among ${M^{{N_{\\rm{u}}}}}$ user codewords combinations. Although the ML decoder can provide the theoretical optimal value, it uses an exhaustive method to search for the optimal solution, which is impractical in the actual implementation process. MPA decoder is an iterative decoder, which can nearly achieve the performance of an ML decoder while requiring an achievable computational complexity. 
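To make the cost of the exhaustive search in (\ref{eq8}) explicit, the toy sketch below enumerates all $M^{N_{\rm{u}}}$ codeword combinations for a single receive antenna. The ``codebooks'' and per-ORE channels are dense random placeholders rather than actual SCMA codebooks, and the example is kept deliberately small because the candidate set grows exponentially with the number of users.
\\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)
N_u, R, M = 4, 4, 4                      # kept small: M**N_u candidates!

C = rng.standard_normal((N_u, R, M)) + 1j * rng.standard_normal((N_u, R, M))
H = rng.standard_normal((N_u, R)) + 1j * rng.standard_normal((N_u, R))

true_m = rng.integers(0, M, size=N_u)    # transmitted codeword indices
y = sum(H[u] * C[u, :, true_m[u]] for u in range(N_u))
y = y + 0.05 * (rng.standard_normal(R) + 1j * rng.standard_normal(R))

# Brute-force ML in the spirit of (8): minimise the residual norm over
# every combination of per-user codewords.
best = min(itertools.product(range(M), repeat=N_u),
           key=lambda m: np.linalg.norm(
               y - sum(H[u] * C[u, :, m[u]] for u in range(N_u)))**2)
print(best, tuple(true_m))
\\end{verbatim}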
And the MPA decoder obtains the corresponding user codewords by calculating the maximum joint message probability.\n\\begin{figure}\n \\centering\n \\includegraphics[width=7cm]{fig3.eps}\n \\caption{The MPA decoder factor graph ($N_{\\rm{u}} = 6$, $R = 4$).}\n \\label{figFG}\n \\end{figure}\n\nMPA is a belief propagation algorithm that uses a factor graph model to solve probabilistic reasoning problems. The proposed SCMA-IRS-MPA uses the factor graph method shown in the Fig. \\ref{figFG}, where the function nodes (FNs) represent OREs, the variable nodes (VNs) represent the users, and the connection between the FN and VN represents the user transmitting data on the corresponding ORE. The MPA decoder achieves decoding by iteratively updating the message probability between FNs and VNs, and let the MPA decoder stop after ${K_{\\rm{it}}}$ iterations. In order to estimate the transmission codewords in the SCMA-IRS-MPA scheme, we modified the traditional SCMA-MPA. In our method, multiple antennas at the AP perform independent and parallel decoding. For the $n_{\\rm{R}}$-th receiving antenna, we use $p_{{v_u} \\to {f_r}}^{\\left( {{k_{\\rm{it}}}} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)$ to denote the probability of transmitting a message from the $n_{\\rm{u}}$-th VN to the $r$-th FN, and use $p_{{f_r} \\to {v_u}}^{\\left( {{k_{\\rm{it}}}} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)$ to denote the probability of transmitting a message from the $r$-th FN to the $n_{\\rm{u}}$-th VN. The above all represent the probability in the ${K_{\\rm{it}}}$ round iteration, ${k_{\\rm{it}}} = 1,2, \\cdots ,{K_{\\rm{it}}}$. Assuming that at the beginning, in the first iteration, all messages sent from VN to FN have the same probability,\n\\begin{equation}\np_{{v_u} \\to {f_r}}^{\\left( 0 \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right) = \\frac{1}{M},\\left( {\\forall {n_{\\rm{u}}},\\forall r,\\forall m} \\right). \\label{eq9}\n\\end{equation}\n\nTherefore, $p_{{f_r} \\to {v_u}}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)$ can be expressed as\n\\begin{equation}\n\\begin{array}{l}\n p_{{f_r} \\to {v_u}}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right){\\rm{ = }}\\\\\n \\quad \\quad \\sum\\limits_{\\psi \\left( i \\right),i \\in {{\\bf{\\Lambda }}_r}\\backslash {n_{\\rm{u}}}} {\\left\\{ {p\\left( {{\\bf{y}}|\\psi \\left( i \\right),\\psi \\left( u \\right) = {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}}} \\right)} \\right.} \\\\\n \\quad \\quad \\left. 
{ \\times \\prod\\limits_{i \\in {{\\bf{\\Lambda }}_r}\\backslash {n_{\\rm{u}}}} {p_{{v_i} \\to {f_r}}^{\\left( {{k_{\\rm{it}}}} \\right)}} \\left( {\\psi \\left( i \\right)} \\right)} \\right\\},\\left( {\\forall m,\\forall r,{n_{\\rm{u}}} \\in {{\\bf{\\Lambda }}_r}} \\right)\n \\end{array}, \\label{eq10}\n\\end{equation}\nwhere ${{\\bf{\\Lambda }}_r}$ represents a set of user indexes sharing the $r$-th ORE, ${{\\bf{\\Lambda }}_r}\\backslash {n_{\\rm{u}}}$ represents ${{\\bf{\\Lambda }}_r}$ except for the $n_{\\rm{u}}$-th user, and\n\\begin{equation}\n\\begin{array}{l}\n p\\left( {{\\bf{y}}|{{\\bf{\\Psi }}_r}} \\right) = \\frac{1}{{\\sqrt {2\\pi } \\sigma }}\\exp \\left( { - \\left| {{{\\bf{y}}_r} - \\sum\\nolimits_{{n_{\\rm{u}}} \\in {{\\bf{\\Lambda }}_r}} {\\left( {{\\bf{H}}_r^{{\\rm{LOS}}}\\left( {{n_{\\rm{u}}},{n_{\\rm{R}}}} \\right)} \\right.} } \\right.} \\right.\\\\\n \\quad \\left. {{{{{\\left. {\\left. { + {\\bf{H}}_r^{{\\rm{IRS}}}\\left( {{n_{\\rm{u}}},{n_{\\rm{R}}}} \\right) + {\\bf{H}}_r^{\\rm{s}}\\left( {{n_{\\rm{u}},n_{\\rm{R}}}} \\right)} \\right){{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right|}^2}} \/ {\\left( {2{\\sigma ^2}} \\right)}}} \\right)\n \\end{array}, \\label{eq11}\n\\end{equation}\nwhere ${{\\bf{\\Psi }}_r}$ represents the possible codewords of all users sharing the $r$-th ORE, then $p_{{v_u} \\to {f_r}}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)$ is updated to,\n\\begin{equation}\n\\begin{array}{l}\n p_{{v_u} \\to {f_r}}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right) = \\gamma _{{v_u},r}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\\\\n \\quad \\quad \\quad \\times \\prod\\limits_{j \\in {\\Omega _u}\\backslash r} {p_{{f_r} \\to {v_u}}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)} ,\\forall m,\\forall {n_{\\rm{u}}},r \\in {{\\bf{\\Omega }}_u}\n \\end{array}, \\label{eq12}\n\\end{equation}\nwhere ${{\\bf{\\Omega }}_u}$ represents the ORE index corresponding to the $d_{\\rm{v}}$ non-zero element positions of the codeword of the $n_{\\rm{u}}$-th user, ${{\\bf{\\Omega }}_u}\\backslash r$ represents ${{\\bf{\\Omega }}_u}$ except for the $r$-th ORE, and $\\gamma _{{v_u},r}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}$ can be expressed as\n\\begin{equation}\n\\gamma _{{v_u},r}^{\\left( {{k_{\\rm{it}}} + 1} \\right)}{\\rm{ = }}{\\left( {\\sum\\limits_{m = 1}^M {p_{{v_u} \\to {f_r}}^{\\left( {{k_{\\rm{it}}}} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)} } \\right)^{ - 1}}. \\label{eq13}\n\\end{equation}\n\nAfter $K_{\\rm{it}}$ iterations, the estimated transmission codewords of the $n_{\\rm{u}}$-th user can be expressed as\n\\begin{equation}\n{{{\\bf{\\hat C}}}_{{n_{\\rm{u}}}} ^{\\left( {{k_{it}}} \\right)}} = \\arg \\mathop {\\max }\\limits_{m = 1, \\cdots M} \\prod\\limits_{j \\in {\\Omega _u}} {p_{{f_j} \\to {v_u}}^{\\left( {{k_{\\rm{it}}}} \\right)}\\left( {{{\\bf{C}}_{{n_{\\rm{u}}}}\\left(m,r\\right)}} \\right)} ,\\forall n_{\\rm{u}}. \\label{eq14}\n\\end{equation}\n\nThe set of all user transmission codewords obtained by using the SCMA-IRS-MPA decoder is\n\\begin{equation}\n{{\\bf{\\hat C}}_{{\\rm{MPA}}}} = \\left\\{ {{{ {{{{\\bf{\\hat c}}}_1}} }^{\\left( {{k_{\\rm{it}}}} \\right)}}, \\cdots ,{{ {{{{\\bf{\\hat c}}}_{{N_{\\rm{u}}}}}} }^{\\left( {{k_{\\rm{it}}}} \\right)}}} \\right\\}. 
\\label{eq15}\n\\end{equation}\n\nThe above is the MPA decoder of the SCMA-IRS scheme. We express the decoding computational complexity of the MPA decoder according to the number of addition operations and the number of multiplication operations. Therefore, the number of additions and multiplications required by the MPA detector are $R{d_{\\rm{f}}}\\left( {{M^{{d_{\\rm{f}}}}}\\left( {4{d_{\\rm{f}}} + {K_{\\rm{it}}} + 1} \\right) + N - {K_{\\rm{it}}}} \\right) + 1$ and $R{d_{\\rm{f}}}\\left( {{M^{{d_{\\rm{f}}}}}\\left( {4{d_{\\rm{f}}} + {K_{\\rm{it}}}{d_{\\rm{f}}} + 3} \\right) + N + M{K_{\\rm{it}}}\\left( {{d_{\\rm{v}}} - 1} \\right)} \\right) + {N_{\\rm{u}}}M\\left( {{d_{\\rm{v}}} - 1} \\right)$ respectively.\n\n\\section{Environment Sensing}\nContrary to the multiple access process, the algorithm proposed in this section can sense the environmental information with the data sent by the user has been decoded correctly. Since the distribution of scatterers in the environment is sparse, sensing environmental information is essential to solve the CS reconstruction problem. As shown in (\\ref{eq5}), on the $r$-th ORE, the solution of environmental information can be expressed as\n\\begin{equation}\n{\\bf{\\hat x}} = \\arg \\mathop {\\min }\\limits_{\\bf{x}} {\\left\\| {\\bf{x}} \\right\\|_1}\\quad \\quad {\\rm{s}}{\\rm{.t}}{\\rm{.}}\\quad {\\left\\| {{{\\bf{y}}_r} - {{\\bf{s}}_r}{{\\bf{H}}_r}} \\right\\|_2} \\le {\\varepsilon _{\\rm{x}}}, \\label{eq16}\n\\end{equation}\nwhere $\\varepsilon _{\\rm{x}}$ is the slack variable, ${{\\bf{y}}_r} \\in \\mathbb{C}^{{N_{\\rm{T}}} \\times {N_{\\rm{R}}}}$ is the symbol sequence received by the AP receiving antennas, $N_{\\rm{T}}$ is the time sequence length, and ${{\\bf{s}}_r} \\in \\mathbb{C} ^{{N_{\\rm{T}}} \\times {N_{\\rm{u}}}}$ is the transmitted symbol sequence of $N_{\\rm{u}}$ users. In the received signal model, when both the transmitted data ${\\bf{s}}_r$ and the received data ${\\bf{y}}_r$ are known, as shown in (\\ref{eq6}), the channel coefficients ${{\\bf{H}}\\left({{n_{\\rm{u}}},{n_R}}\\right)}$ can be obtained by simply solving the linear equations. After performing the same analysis on all the receiving antennas of the AP, the channel coefficient ${\\bf{H}}_r$ on the $r$-th ORE is solved.\n\nAs shown in (\\ref{eq3}), the ${\\bf{H}}_r^{{\\rm{LOS}}}$ and ${\\bf{H}}_r^{{\\rm{IRS}}}$ in the channel coefficient ${\\bf{H}}_r$ are composed of LOS channels. Meanwhile, the reflection characteristic matrix $\\bf{\\Theta} $ of the IRS used for assist communication is given, and only ${\\bf{H}}_r^{\\rm{s}}$ contains unknown environmental information. The $n_{\\rm{u}}$-th row of ${\\bf{H}}_r^{\\rm{s}}$ is expressed as\n\\begin{equation}\n{\\bf{H}}_r^{\\rm{s}}\\left(n_{\\rm{u}}\\right) = {{\\bf{x}}^{\\rm{T}}}{\\rm{diag}}\\left( {{\\bf{H}}_r^{{\\rm{s3}}}} \\left(n_{\\rm{u}}\\right) \\right){\\bf{H}}_r^{{\\rm{s2}}}{\\bf{\\Theta H}}_r^{{\\rm{s1}}}, \\label{eq17}\n\\end{equation}\n\\begin{equation}\n{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(n_{\\rm{u}}\\right)\\right)^{\\rm{T}}} = {{\\bf{A}}_r^{\\rm{s}}}\\left(n_{\\rm{u}}\\right){\\bf{x}}, \\label{eq18}\n\\end{equation}\nwhere ${{\\bf{A}}_r^{\\rm{s}}}\\left(n_{\\rm{u}}\\right) \\in \\mathbb{C}^{{N_{\\rm{R}}} \\times {N_{\\rm{s}}}}$ is the known channel coefficient, which is also called the measurement matrix in the CS problem. 
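The assembly of this measurement matrix from the known sub-channels in (\ref{eq17})--(\ref{eq18}) can be checked numerically with the following minimal sketch, in which all dimensions and channel entries are random placeholders.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N_s, N_I, N_R = 200, 64, 8

def crandn(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

h_s3 = crandn(N_s)            # user -> voxels (1 x N_s row in the text)
H_s2 = crandn(N_s, N_I)       # voxels -> IRS
Theta = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N_I)))
H_s1 = crandn(N_I, N_R)       # IRS -> AP
x = np.zeros(N_s); x[rng.choice(N_s, 5, replace=False)] = 1.0

A = (np.diag(h_s3) @ H_s2 @ Theta @ H_s1).T        # (N_R x N_s), eq. (18)
H_s_row = x @ np.diag(h_s3) @ H_s2 @ Theta @ H_s1  # eq. (17), length N_R
print(np.allclose(A @ x, H_s_row))                 # True
\\end{verbatim}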
For $N_{\\rm{u}}$ users, the (\\ref{eq18}) is expressed as the matrix form of the CS problem,\n\\begin{equation}\n{\\left[ {\\begin{array}{*{20}{c}}\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(1\\right) \\right)}^{\\rm{T}}}}\\\\\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(2\\right) \\right)}^{\\rm{T}}}}\\\\\n \\vdots \\\\\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(N_{\\rm{u}}\\right) \\right)}^{\\rm{T}}}}\n \\end{array}} \\right]_{{N_{\\rm{u}}}{N_{\\rm{R}}} \\times 1}} = {\\left[ {\\begin{array}{*{20}{c}}\n {{{\\bf{A}}_r^{\\rm{s}}}}\\left(1\\right)\\\\\n {{{\\bf{A}}_r^{\\rm{s}}}}\\left(2\\right)\\\\\n \\vdots \\\\\n {{{\\bf{A}}_r^{\\rm{s}}}}\\left(N_{\\rm{u}}\\right)\n \\end{array}} \\right]_{{N_{\\rm{u}}}{N_{\\rm{R}}} \\times {N_{\\rm{s}}}}}{\\left[ {\\bf{x}} \\right]_{{N_{\\rm{s}}} \\times 1}}\n \\end{equation} \n\\begin{equation}\n\\Rightarrow {{\\bf{\\tilde{H}}}_r^{\\rm{s}}} = {\\bf{\\tilde{A}}}_r^{\\rm{s}}{\\bf{x}}. \\label{eq19}\n\\end{equation}\n\n\\subsection{Generalized Approximate Message Passing}\nThe GAMP Algorithm\\cite{Rangan} solves the problem of CS sparse reconstruction by iterative decomposition. The above problem formula (\\ref{eq19}) is abbreviated as ${\\bf{y}} = {\\bf{\\Phi x}} + {\\bf{w}}$, where ${\\bf{\\Phi }} \\in \\mathbb{C} ^{{M_\\phi } \\times {N_\\phi }}$ is the CS measurement matrix and ${\\bf{w}} \\sim {\\cal C}{\\cal N}\\left( {0,{\\sigma ^{\\rm{w}}}} \\right)$ represents noise. In this article, we assume that the distribution of environmental scatterers information as a Bernoulli-Gaussian distribution in a limited interval which probability density function is expressed as\n\\begin{equation}\n\\begin{array}{l}\n {p_{X{\\rm{|}}{\\bf{Q}}}}\\left( {x|{\\bf{q}}} \\right) = \\left( {1 - \\lambda + \\alpha } \\right)\\delta \\left( x \\right)\\\\\n \\quad \\quad \\quad + \\lambda {\\cal N}\\left( {x|\\theta ,{\\sigma ^{\\rm{x}}}} \\right)\\left[ {u\\left( x \\right) - u\\left( {x - 1} \\right)} \\right]\n \\end{array}, \\label{eq20}\n\\end{equation}\nwhere all parameters be expressed as ${\\bf{q}} \\buildrel \\Delta \\over = \\left[ {\\lambda ,\\alpha ,\\theta ,{\\sigma ^{\\rm{x}}}} \\right]$, $\\delta \\left( \\cdot \\right)$ is the Dirac function, $\\lambda $ is the sparsity coefficient, $\\alpha {\\rm{ = }}\\int_{x \\in \\left( { - \\infty ,0} \\right] \\cup \\left[ {1, + \\infty } \\right)} {\\lambda {\\cal N}\\left( {x|\\theta ,{\\sigma ^{\\rm{x}}}} \\right)} dx$. $\\theta \\in \\left[ {{\\rm{0}},{\\rm{1}}} \\right]$ and ${\\sigma ^{\\rm{x}}}$ represent the mean and variance of the environmental scatterers information distribution respectively.\n\nThe GAMP algorithm has defined two parameterized functions ${g_{\\rm{in}}}\\left( \\cdot \\right)$ and ${g_{\\rm{out}}}\\left( \\cdot \\right)$ and the specific algorithm is shown in Algorithm 1. 
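A simple way to visualise the prior in (\ref{eq20}) is to sample from it: with probability $\lambda$ a Gaussian draw is proposed, and any mass falling outside $\left[0,1\right]$ (the quantity $\alpha$) is folded back into the spike at zero. The short sketch below does exactly this with placeholder parameter values.
\\begin{verbatim}
import numpy as np

def sample_prior(n, lam=0.1, theta=0.5, sigma_x=0.05,
                 rng=np.random.default_rng(3)):
    x = np.zeros(n)
    active = rng.random(n) < lam               # Bernoulli(lambda) support
    g = rng.normal(theta, np.sqrt(sigma_x), size=n)
    keep = active & (g > 0.0) & (g < 1.0)      # reject the alpha-mass
    x[keep] = g[keep]                          # rejected draws stay at zero
    return x

x = sample_prior(10_000)
print((x == 0).mean(), x[x > 0].mean())        # empirical sparsity and mean
\\end{verbatim}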
At this point, we will show how to specify the parameterized functions ${g_{\\rm{in}}}\\left( \\cdot \\right)$, ${g_{\\rm{out}}}\\left( \\cdot \\right)$, ${g'_{\\rm{in}}}\\left( \\cdot \\right)$ and ${g'_{\\rm{out}}}\\left( \\cdot \\right)$, based on the maximum posterior probability (MAP) estimation, the input function can be written as\n\\begin{equation}\n{g_{\\rm{in}}}\\left( {\\hat v,{\\sigma ^{\\rm{v}}},{\\bf{q}}} \\right) = \\arg \\mathop {\\max }\\limits_x {F_{\\rm{in}}}\\left( {x,\\hat v,{\\sigma ^{\\rm{v}}},{\\bf{q}}} \\right), \\label{eq22}\n\\end{equation}\n\\begin{equation}\n{F_{\\rm{in}}}\\left( {x,\\hat v,{\\sigma ^{\\rm{v}}},{\\bf{q}}} \\right) = \\log {p_{X|{\\bf{Q}}}}\\left( {x|{\\bf{q}}} \\right) - \\frac{1}{{2{\\sigma ^{\\rm{v}}}}}{\\left( {\\hat v - x} \\right)^2}, \\label{eq23}\n\\end{equation}\n\\begin{equation}\n{g'_{\\rm{in}}}\\left( {\\hat v,{\\sigma ^{\\rm{v}}},{\\bf{q}}} \\right) = \\frac{\\partial {g_{\\rm{in}}}\\left( {\\hat v,{\\sigma ^{\\rm{v}}},{\\bf{q}}} \\right)}{\\partial \\hat v} = \\frac{1}{{1 - {\\sigma ^{\\rm{v}}}\\frac{\\partial ^2}{\\partial x^2} {\\rm log}\\left[{p_{X|{\\bf{Q}}}}\\left( {x|{\\bf{q}}} \\right)\\right]}}, \\label{eq24}\n\\end{equation}\nthe output function can be expressed as\n\\begin{equation}\n{g_{\\rm{out}}}\\left( {y,\\hat p,{\\sigma ^{\\rm{z}}}} \\right) = \\frac{{y - \\hat p}}{{{\\sigma ^{\\rm{w}}}} + {\\sigma ^{\\rm{z}}}}, \\label{eq25}\n\\end{equation}\n\\begin{equation}\n{g'_{\\rm{out}}}\\left( {y,\\hat p,{\\sigma ^{\\rm{z}}}} \\right) = \\frac{\\partial {g'_{\\rm{out}}}\\left( {y,\\hat p,{\\sigma ^{\\rm{z}}}} \\right)}{\\partial y} = - \\frac{1}{{{\\sigma ^{\\rm{w}}}} + {\\sigma ^{\\rm{z}}}}. \\label{eq26}\n\\end{equation}\n\n\\begin{algorithm}[htb]\n \\caption{The GAMP Algorithm\\cite{Rangan}}\n \n \\begin{algorithmic}[1]\n \\REQUIRE\n Given measurement matrix ${\\bf{\\Phi }} \\in {\\mathbb{C}^{{M_\\phi } \\times {N_\\phi }}}$ and sequence of measurement value ${\\bf{y}} \\in \\mathbb{C} ^{{M_\\phi } \\times 1}$.\n \\STATE\n \\textbf{Initialization}: Set environment prior parameter $\\bf{q}$. Defined ${g_{\\rm{in}}}\\left( \\cdot \\right)$ and ${g_{\\rm{out}}}\\left( \\cdot \\right)$ from (\\ref{eq22}), (\\ref{eq24}). 
Set $t_i = 0$, ${\\bf{\\hat s}}\\left( { - 1} \\right) = 0$, ${\\hat x_{{n_\\phi }}}\\left( {{t_i}} \\right) > 0$, $\\sigma _{{n_\\phi }}^{\\rm{x}}\\left( {{t_i}} \\right) > 0$.\n\n \\WHILE {$\\sum\\limits_{{m_\\phi }} {\\left( {{y_{{m_\\phi }}} - {{\\hat z}_{{m_\\phi }}}\\left( {{t_i}} \\right)} \\right)} > \\varepsilon_{\\rm{t}} $, where $\\varepsilon_{\\rm{t}} $ is a given error tolerance value}\n \\STATE\n For each $m_\\phi $:\n\n $\\sigma _{{m_\\phi }}^{\\rm{z}}\\left( {{t_i}} \\right) = \\sum\\limits_{{n_\\phi }} {\\Phi _{{m_\\phi },{n_\\phi }}^2} \\sigma _{{n_\\phi }}^{\\rm{x}}\\left( {{t_i}} \\right),$\n\n ${\\hat p_{m_\\phi }}\\left( {t_i} \\right) = \\sum\\limits_{n_\\phi } {\\Phi _{{m_\\phi },{n_\\phi }}}{{\\hat x}_{n_\\phi }}\\left( {t_i} \\right) - \\sigma_{m_\\phi }^{\\rm{z}}\\left( t \\right) {\\hat s_{{m_\\phi }}}\\left( {{t_i} - 1} \\right),$\n\n ${\\hat z_{{m_\\phi }}}\\left( {{t_i}} \\right){\\rm{ = }}\\sum\\limits_{{n_\\phi }} {{\\Phi _{{m_\\phi },{n_\\phi }}}} {\\hat x_{{n_\\phi }}}\\left( {{t_i}} \\right).$\n \\STATE\n For each $m_\\phi $:\n\n ${\\hat s_{{m_\\phi }}}\\left( {{t_i}} \\right) = {g_{\\rm{out}}}\\left( {{t_i},{y_{{m_\\phi }}},{{\\hat p}_{{m_\\phi }}}\\left( {{t_i}} \\right),\\sigma _{{m_\\phi }}^{\\rm{z}}\\left( {{t_i}} \\right)} \\right),$\n\n $\\sigma _{{m_\\phi }}^{\\rm{s}}\\left( {{t_i}} \\right) = - {g'_{\\rm{out}}}\\left( {{t_i},{y_{{m_\\phi }}},{{\\hat p}_{{m_\\phi }}}\\left( {{t_i}} \\right),\\sigma _{{m_\\phi }}^{\\rm{z}}\\left( {{t_i}} \\right)} \\right).$\n \\STATE\n For each $n_\\phi $:\n\n $\\sigma _{{n_\\phi }}^{\\rm{v}}\\left( {{t_i}} \\right) = {\\left[ {\\sum\\limits_{{n_\\phi }} {\\Phi _{{m_\\phi },{n_\\phi }}^2\\sigma _{{n_\\phi }}^{\\rm{s}}\\left( {{t_i}} \\right)} } \\right]^{ - 1}},$\n\n ${\\hat v_{{n_\\phi }}}\\left( {{t_i}} \\right) = {\\hat x_{{n_\\phi }}}\\left( {{t_i}} \\right) + \\sigma _{{n_\\phi }}^{\\rm{v}}\\left( {{t_i}} \\right)\\sum\\limits_{{m_\\phi }} {{\\Phi _{{m_\\phi },{n_\\phi }}}{{\\hat s}_{{m_\\phi }}}\\left( {{t_i}} \\right)}.$\n \\STATE\n For each $n_\\phi $:\n\n ${\\hat x_{{n_\\phi }}}\\left( {{t_i}{\\rm{ + 1}}} \\right) = {g_{\\rm{in}}}\\left( {{t_i},{{\\hat v}_{{n_\\phi }}}\\left( {{t_i}} \\right),\\sigma _{{n_\\phi }}^{\\rm{v}}\\left( {{t_i}} \\right),{\\bf{q}}} \\right),$\n\n $\\sigma _{{n_\\phi }}^{\\rm{x}}\\left( {{t_i}{\\rm{ + 1}}} \\right) = \\sigma _{{n_\\phi }}^{\\rm{v}}\\left( {{t_i}} \\right){g'_{\\rm{in}}}\\left( {{t_i},{{\\hat v}_{{n_\\phi }}}\\left( {{t_i}} \\right),\\sigma _{{n_\\phi }}^r\\left( {{t_i}} \\right),{\\bf{q}}} \\right).$\n \\STATE\n ${t_i} = {t_i} + 1.$\n \\ENDWHILE\n \\ENSURE\n Estimated sparse vector ${\\hat x_{{n_\\phi }}}\\left( {{t_i}} \\right)$ and $\\sigma _{{n_\\phi }}^{\\rm{x}}\\left( {{t_i}} \\right)$.\n \\end{algorithmic}\n \\end{algorithm}\n\n\\subsection{Proposed Environment Sensing Algorithm}\nIn the process of solving the CS sparse reconstruction problem, the product $N_{\\rm{u}}N_{\\rm{R}}$ of the number of users and the number of receiving antennas, and the number of spatial pixels $N_{\\rm{s}}$ are orders of magnitude different, that is ${N_{\\rm{u}}}{N_{\\rm{R}}} \\ll {N_{\\rm{s}}}$, so that the number of columns in the CS measurement matrix ${\\bf{\\tilde{A}}}_r^{\\rm{s}}$ is much larger than the number of rows, and the compression ratio is too high. Therefore, the environmental information $\\bf{x}$ cannot be recovered accurately. We improve the above algorithm to adapt to the continuous data stream sent from users in the proposed scenario. 
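Before turning to that extension, Algorithm 1 with the AWGN output channel of (\ref{eq25})--(\ref{eq26}) can be transcribed compactly as below. The sketch is real-valued, runs for a fixed number of iterations instead of the residual-based stopping rule, and uses a simple soft-thresholding pair as a stand-in for the MAP input denoiser of (\ref{eq22})--(\ref{eq24}); it is therefore illustrative of the update structure rather than the exact estimator used in this work, and in practice damping may be needed for stable convergence.
\\begin{verbatim}
import numpy as np

def gamp(Phi, y, sigma_w, thresh=0.1, n_iter=50):
    M_phi, N_phi = Phi.shape
    Phi2 = np.abs(Phi) ** 2
    x_hat = np.zeros(N_phi)          # \hat{x}(0)
    sigma_x = np.ones(N_phi)         # \sigma^x(0) > 0
    s_hat = np.zeros(M_phi)          # \hat{s}(-1) = 0

    def g_in(v_hat, sigma_v):        # soft threshold (stand-in denoiser)
        t = thresh * np.sqrt(sigma_v)
        return np.sign(v_hat) * np.maximum(np.abs(v_hat) - t, 0.0)

    def g_in_prime(v_hat, sigma_v):
        return (np.abs(v_hat) > thresh * np.sqrt(sigma_v)).astype(float)

    for _ in range(n_iter):
        # output (measurement) side
        sigma_z = Phi2 @ sigma_x
        p_hat = Phi @ x_hat - sigma_z * s_hat       # uses s_hat(t-1)
        s_hat = (y - p_hat) / (sigma_w + sigma_z)   # g_out, eq. (25)
        sigma_s = 1.0 / (sigma_w + sigma_z)         # -g_out', eq. (26)
        # input (signal) side
        sigma_v = 1.0 / (Phi2.T @ sigma_s)
        v_hat = x_hat + sigma_v * (Phi.T @ s_hat)
        x_hat = g_in(v_hat, sigma_v)
        sigma_x = sigma_v * g_in_prime(v_hat, sigma_v)
    return x_hat
\\end{verbatim}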
We use multiple data packets to recover the environmental information after multiple observations. The CS problem is redefined as\n\\begin{equation}\n{\\bf{H}}_r^{\\rm{s}}\\left(n_{\\rm{u}},k\\right) = {{\\bf{x}}^{\\rm{T}}}{\\rm{diag}}\\left( {{\\bf{H}}_r^{{\\rm{s3}}}}\\left(n_{\\rm{u}}\\right) \\right){\\bf{H}}_r^{{\\rm{s2}}}{{\\bf{\\Theta }}\\left(k\\right)}{\\bf{H}}_r^{{\\rm{s1}}}, \\label{eq27}\n\\end{equation}\n\\begin{equation}\n{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(n_{\\rm{u}},k\\right) \\right)^{\\rm{T}}} = {\\bf{A}}_r^{\\rm{s}}\\left(n_{\\rm{u}},k\\right){\\bf{x}}, \\label{eq28}\n\\end{equation}\n\\begin{equation}\n{\\left[ {\\begin{array}{*{20}{c}}\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(1,1\\right) \\right)}^{\\rm{T}}}}\\\\\n \\vdots \\\\\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(n_{\\rm{u}},k\\right) \\right)}^{\\rm{T}}}}\\\\\n \\vdots \\\\\n {{{\\left( {{\\bf{H}}_r^{\\rm{s}}}\\left(N_{\\rm{u}},K\\right) \\right)}^{\\rm{T}}}}\n \\end{array}} \\right]} = {\\left[ {\\begin{array}{*{20}{c}}\n {{\\bf{A}}_r^{\\rm{s}}}\\left(1,1\\right)\\\\\n \\vdots \\\\\n {{\\bf{A}}_r^{\\rm{s}}}\\left(n_{\\rm{u}},k\\right)\\\\\n \\vdots \\\\\n {{\\bf{A}}_r^{\\rm{s}}}\\left(N_{\\rm{u}},K\\right)\n \\end{array}} \\right]} \\left[{\\bf{x}}\\right] , \\label{eq29}\n\\end{equation}\n\\begin{equation}\n\\Rightarrow \\left[{{\\bf{\\tilde{H}}}_r^{\\rm{s}}}\\left(K\\right)\\right]_{{N_{\\rm{u}}}{N_{\\rm{R}}}K \\times 1} = \\left[{\\bf{\\tilde{A}}}_r^{\\rm{s}}\\left(K\\right)\\right]_{{N_{\\rm{u}}}{N_{\\rm{R}}}K \\times {N_{\\rm{s}}}}\\left[{\\bf{x}}\\right]_{{N_{\\rm{s}}}\\times 1}, \\label{eq30}\n\\end{equation}\nwhere ${\\bf{\\Theta }}\\left(k\\right)$ represents the IRS reflection characteristic matrix when the $k$-th data packet is received, and $K$ is the number of data packets. After multiple observations, the difference between the number of rows of the observation matrix $N_{\\rm{u}}N_{\\rm{R}}K$ and the number of columns $N_{\\rm{s}}$ is relatively small, and environmental information can be sensed more accurately.\n\nSince data packets are continuously transmitted during the communication process, and the amount of data is very large, it is impossible to store all the data packets $K$ sent at all times. We set a time sliding-window with a length of $n_{\\rm{f}}$, store the received data packet $k$ at the current moment to the $n_{\\rm{f}}$ data packets previously received, and use the data in the sliding-window for environment sensing.\n\nCompared with the communication data that exists all the time, the data in the sliding-window is very limited. Therefore, a large amount of communication data transmitted earlier will be wasted in the process of environment sensing. To solve this problem, we propose a ``momentum-mode'', which combines the sensing results calculated at the previous moments to calculate the current environment sensing results and makes the current sensing result contain part of the information outside the sliding-window. 
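Before formalizing the ``momentum-mode'', the bookkeeping described above (stacking one measurement block per received packet as in (\\ref{eq29})--(\\ref{eq30}) and keeping only the most recent $n_{\\rm{f}}$ packets) can be sketched as follows; the helper names and the call to the sparse solver are illustrative placeholders rather than the interface of our simulator.
\\begin{verbatim}
from collections import deque
import numpy as np

n_f = 10                        # sliding-window length (number of stored packets)
window = deque(maxlen=n_f)      # packets older than n_f are discarded automatically

def on_packet(A_block, h_block):
    # A_block: the (Nu*NR) x Ns rows of the measurement matrix for this packet,
    # h_block: the corresponding (Nu*NR) measured values, one row block of eq. (29)
    window.append((A_block, h_block))

def sense_environment(sparse_solver):
    # stack everything currently inside the window, cf. eq. (30), and reconstruct x
    A = np.vstack([A for A, _ in window])
    h = np.concatenate([h for _, h in window])
    return sparse_solver(A, h)   # e.g. the GAMP sketch above: lambda A, h: gamp(A, h)[0]
\\end{verbatim}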
According to (\\ref{eq16}), (\\ref{eq18}), the ``momentum-mode'' can be expressed as\n\\begin{equation}\n{{{\\bf{\\hat x}}}_k}{\\rm{ = }}\\arg \\mathop {\\min }\\limits_{\\bf{x}} \\left( {{{\\left\\| {\\bf{x}} \\right\\|}_1}} \\right){\\rm{ + }}\\mu {{{\\bf{\\hat x}}}_{k{\\rm{ - 1}}}}, \\label{eq31}\n\\end{equation}\n\\begin{equation}\n{\\rm{s}}{\\rm{.t}}{\\rm{.}}\\quad {\\left\\|{ {{\\bf{\\tilde{H}}}_r^{\\rm{s}}}\\left(k\\right) - {\\bf{\\tilde{A}}}_r^{\\rm{s}}\\left(k\\right)}{\\bf{x}} \\right\\|_2} \\le {\\varepsilon _{\\rm{x}}}, \\label{eq32}\n\\end{equation}\nwhere $\\mu $ is the momentum coefficient. The larger the momentum coefficient $\\mu $, the more previous data information the current sensing result depends on. Therefore, the setting of the momentum coefficient $\\mu $ should be set according to the actual system and will be further analyzed in section \\uppercase\\expandafter{\\romannumeral7}.\n\n\\section{Joint Multi-User Detection and Environment Sensing Algorithm}\nAs mentioned in Section \\uppercase\\expandafter{\\romannumeral3} and Section \\uppercase\\expandafter{\\romannumeral4}, the accurate decoding of user communication data requires the knowledge of the environment, and if the data sent by the user is not recovered, the environmental information cannot be sensed accurately. Although a sufficient number of pilots can be used to implement the proposed environment sensing algorithm, an excessive number of pilots will cause a decrease in communication efficiency. To tackle this issue, we propose an iterative and incremental algorithm based on low-density pilots to jointly recover users' communication data and environmental information. Let ${{\\bf{s}}_k}\\left( {0 < k \\le K} \\right)$ denote the $k$-th data packets of all users. We insert a pilot $\\bf{P}$ before each $K$ data packets, and the AP can obtain the received data ${\\bf{y}}_{\\rm{p}}$. According to the previous section, based on the pilot $\\bf{P}$, the environmental information ${{\\bf{\\hat x}}_{\\rm{p}}}$ can be roughly estimated. Meanwhile, we need to use subsequent communication data packets to further improve the environment sensing results. Therefore, we use the pilot $\\bf{P}$, the received data ${\\bf{y}}_{\\rm{p}}$ and the estimated environmental information ${{\\bf{\\hat x}}_{\\rm{p}}}$ as initial terms to start the iterative algorithm.\n\n\\subsection{The Proposed Iterative Algorithm}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=6in]{fig4.eps}\n \\caption{The proposed iterative algorithm.}\n \\label{figIT}\n \\end{figure*}\n\nAs shown in Fig. \\ref{figIT}, after receiving the $k$-th packet data ${\\bf{y}}_k$, the proposed iterative algorithm is divided into three parts:\n\n1. Forward propagation: First, use the estimated environment information ${{\\bf{\\hat x}}_{k - 1}}$ after receiving the $(k-1)$-th data packet to estimate the current channel ${{\\bf{\\hat H}}_k}$, and then the decoder decodes the received data ${\\bf{y}}_k$ to send data ${{\\bf{\\hat s}}_k}$ based on the estimation of current channel ${{\\bf{\\hat H}}_k}$. Finally, based on current ${\\bf{y}}_k$, ${{\\bf{\\hat s}}_k}$, the received data ${{\\bf{y}}_{k - {n_{\\rm{f}}} - 1}}, \\cdots ,{{\\bf{y}}_{k - 1}}$ and the decoded data ${{\\bf{\\hat s}}_{k - {n_{\\rm{f}}} - 1}}, \\cdots ,{{\\bf{\\hat s}}_{k - 1}}$ in the previous $n_{\\rm{f}}$ data packets, the current environmental information ${{\\bf{\\hat x}}_k}$ can be estimated more accurately. 
In the initial stage of the proposed algorithm, the results have not converged. Therefore, a certain number of iterations are required to make the system performance converge to a relatively accurate estimation of the transmitted data and environmental information, when $\\left\\lVert {{{{\\bf{\\hat x}}}_k} - {{{\\bf{\\hat x}}}_{k - 1}}} \\right\\rVert _2 < {\\varepsilon _{\\rm{k}}}$, the ``momentum-mode'' is enabled.\n\n2. Self-iteration: First, estimate the current environment ${{\\bf{\\hat x}}_k}$ by using the channel estimated ${{\\bf{\\hat H}}_k}$ in the forward propagation part. Then, the decoder decodes the currently transmitted data ${{\\bf{\\hat s}}_k}$ again and estimates the current environment information ${{\\bf{\\hat x}}_k}$ again. \nFinally, iterate $K_{\\rm{s}}$ times to obtain more accurate transmission data and environmental information. $K_s$ can be set to a fixed value, or it can be gradually reduced as the algorithm converges, especially when $\\left\\lVert {{{{\\bf{\\hat x}}}_k} - {{{\\bf{\\hat x}}}_{k - 1}}} \\right\\rVert_2 < {\\varepsilon _{\\rm{k}}}$, $K_s$ should be set to a small value. Enable the ``momentum-mode'' when $\\left\\lVert {{{{\\bf{\\hat x}}}_k} - {{{\\bf{\\hat x}}}_{k - 1}}} \\right\\rVert_2 < {\\varepsilon _{\\rm{k}}}$.\n\n3. Feedback: feedback the estimated environmental information ${{\\bf{\\hat x}}_k}$ in the $k$-th data packet to the previous $n_{\\rm{b}}$ data packets, and based on the more accurate environmental information ${{\\bf{\\hat x}}_k}$ estimated by $k$-th data packet, estimate the received signal ${{\\bf{\\hat s}}_{k - {n_{\\rm{b}}} - 1}}, \\cdots ,{{\\bf{\\hat s}}_{k - 1}}$ again to improve the accuracy of decoding. Stop feedback when $\\left\\lVert {{{{\\bf{\\hat x}}}_k} - {{{\\bf{\\hat x}}}_{k - n_{\\rm{b}} - 1}}} \\right\\rVert_2 < {\\varepsilon _{\\rm{k}}}$.\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=6in]{fig15.eps}\n \\caption{The Factor graph representation of the proposed iterative algorithm. }\n \\label{figITGH}\n \\end{figure*}\n\n\\begin{algorithm}[t]\n \\caption{The proposed iterative algorithm}\n \\label{alg2}\n \\begin{algorithmic}[1] \n \\REQUIRE\n Calculated LOS channel matrix ${\\bf{H}}_r^{{\\rm{LOS}}}$, ${\\bf{H}}_r^{{\\rm{IRS1}}}$, ${\\bf{H}}_r^{{\\rm{s1}}}$, ${\\bf{H}}_r^{{\\rm{s2}}}$and ${\\bf{H}}_r^{{\\rm{s3}}}$. Given IRS reflection characteristic control matrix $\\bf{\\Theta}$.\n \\STATE\n \\textbf{Initialization}: Set pilot $\\bf{P}$. Defined sliding-window length $n_{\\rm{f}}$, $n_{\\rm{b}}$. Set ${\\varepsilon _{\\rm{k}}} > 0$, $0 < \\mu < 1$, $K_{\\rm{s}} > 0$.\n \\STATE\n Estimate ${{\\bf{\\hat x}}_{\\rm{p}}}$ from $\\bf{P}$ and ${\\bf{y}}_{\\rm{p}}$ by GAMP. 
Let ${{\\bf{\\hat x}}_{\\rm{0}}} = {{\\bf{\\hat x}}_{\\rm{p}}}$, ${\\bf{y}}_0 = {\\bf{y}}_{\\rm{p}}$, ${{\\bf{\\hat s}}_0} = {\\bf{P}}$.\n \\FOR {$k = 1, 2, \\cdots ,K$}\n\n \n \\STATE\n Estimate ${{\\bf{\\hat H}}_k}$ from ${{\\bf{\\hat x}}_{k - 1}}$ according to (\\ref{eq3}).\n \\STATE\n Estimate ${{\\bf{\\hat s}}_k}$ from ${\\bf{y}}_k$ and ${{\\bf{\\hat H}}_k}$ based on SCMA-IRS-MPA decoder.\n \\STATE\n Estimate ${{\\bf{\\hat x}}_k}$ from ${{\\bf{y}}_{k - {n_{\\rm{f}}} - 1}}, \\cdots ,{{\\bf{y}}_{k - 1}}$ and ${{\\bf{\\hat s}}_{k - {n_{\\rm{f}}} - 1}}, \\cdots ,{{\\bf{\\hat s}}_{k - 1}}$ according to formula (\\ref{eq31}).\n \\STATE\n Replace ${{\\bf{\\hat x}}_{k - 1}}$ with ${{\\bf{\\hat x}}_k}$, Repeat steps 3 to 5 $K_{\\rm{s}}$ times.\n \n \\STATE\n Estimate ${{\\bf{\\hat H}}_{k - 1}}, \\cdots ,{{\\bf{\\hat H}}_{k - {n_{\\rm{b}}} - 1}}$ from ${{\\bf{\\hat x}}_k}$ according to (\\ref{eq3}).\n \\STATE\n Estimate ${{\\bf{\\hat s}}_{k - 1}}, \\cdots ,{{\\bf{\\hat s}}_{k - {n_{\\rm{b}}} - 1}}$ from ${{\\bf{y}}_{k - 1}}, \\cdots ,{{\\bf{y}}_{k - {n_{\\rm{b}}} - 1}}$ and ${{\\bf{\\hat H}}_{k - 1}}, \\cdots ,{{\\bf{\\hat H}}_{k - {n_{\\rm{b}}} - 1}}$ according to (\\ref{eq31}).\n \\STATE\n If $\\left\\lVert {{{{\\bf{\\hat x}}}_k} - {{{\\bf{\\hat x}}}_{k - 1}}} \\right\\rVert_2 < {\\varepsilon _{\\rm{k}}}$, start ``momentum-mode'', else set $\\mu = 0$.\n \\ENDFOR\n \\ENSURE\n Estimated environment information ${{\\bf{\\hat x}}_k}$ and data packet ${{\\bf{\\hat s}}_1}, \\cdots ,{{\\bf{\\hat s}}_K}$.\n \\end{algorithmic}\n \\end{algorithm}\n\nWe summarize the proposed iterative algorithm in Algorithm \\ref{alg2}. During the execution process of the algorithm, the forward propagation process can be executed every time a new data packet is received. The self-iteration process can be executed as many times as necessary at any time, and the feedback process should be executed after the self-iteration process. Since the cached data cannot be too much, the forward propagation sliding-window size $n_{\\rm{f}}$ and the feedback window size $n_{\\rm{b}}$ need to be adjusted according to the actual system ability.\n\nThe effectiveness of the proposed algorithm can be explained by the message passing theory. Fig. \\ref{figITGH} shows the factor graph of the proposed iterative algorithm. The decoding algorithm and the sensing algorithm use each other's solution results as side information to achieve their performance. As the number of iterations increases, the environmental information and data information contained in the received data are separated and recovered. The components of the proposed iterative algorithm also reflect this idea: forward propagation passes the previously sensed environmental information and received data information to the next time slot so that the algorithm can incrementally optimize performance based on the continuously received data. The self-iteration process is executed repeatedly and iteratively based on the existing data, and the environmental information in the received data is fully obtained. 
The feedback process passes more accurate environmental information to the previous time slot and reduces the error caused by inaccurate decoding and sensing in the initial stage of the proposed algorithm.\n\n\\subsection{Computational Complexity Analysis}\nThe computational complexity of the proposed algorithm is mainly composed of two parts: \n\\begin{itemize}\n \\item [(1)] \n From the perspective of data decoding, we use the MPA decoder, whose computational complexity is $\\mathcal{O} \\left(Rd_{\\rm{f}}M^{d_{\\rm{f}}}+N_{\\rm{u}}M\\right)$. Compared with the ML decoder whose computational complexity is $\\mathcal{O}\\left(RC^{N_{\\rm{u}}}\\right)$, the computational complexity of the MPA decoder is lower. For example, when there are more users, the computational complexity of the MPA-based decoder just increases linearly. \n \\item [(2)] \n From the perspective of environment sensing, we use the GAMP algorithm, whose computational complexity is $\\mathcal{O}\\left(N_{\\rm{u}}N_{\\rm{R}}KN_{\\rm{s}}\\right)$. Compared with the OMP algorithm, whose computational complexity is $\\mathcal{O}(N_{\\rm{u}}N_{\\rm{R}}KN_{\\rm{s}} + (\\lambda N_{\\rm{s}})^3 )$, it can be seen that the GAMP algorithm is a relatively low-complexity CS reconstruction algorithm.\n \n \n\\end{itemize}\n\nIn summary, during the execution of the algorithm, replace $K$ with the sliding window sizes $n_{\\rm{f}}$ and $n_{\\rm{b}}$.\nThe computational complexity of the proposed iterative algorithm is $\\mathcal{O}( {R{d_{\\rm{f}}}{M^{{d_{\\rm{f}}}}}} $ $+ {N_{\\rm{u}}}M + {N_{\\rm{u}}}{N_{\\rm{R}}}{n_{\\rm{f}}}{n_{\\rm{b}}}{N_{\\rm{s}}} )$, where $n_{\\rm{f}}$ and $n_{\\rm{b}}$ can be controlled according to the convergence of the algorithm to save computing resources.\nIt can be seen that the computational complexity of the proposed algorithm is mainly determined by the number of users $N_{\\rm{u}}$ and the SCMA codebook parameter $d_{\\rm{f}}$. In contrast, if the OMP algorithm and the ML decoder are used to design the iterative algorithm, then its computational complexity will be $\\mathcal{O}( R{C^{{N_{\\rm{u}}}}} + {N_{\\rm{u}}}{N_{\\rm{R}}}{n_{\\rm{f}}}{n_{\\rm{b}}}{N_{\\rm{s}}} + (\\lambda N_{\\rm{s}})^3 )$, which is much higher than our algorithm.\n\nIn addition, the low complexity of the proposed iterative algorithm is also reflected in the use of low-density pilots, which effectively reduces the time-frequency resources and computing resources consumed by the pilots.\n\n\\section{System Performance Analysis}\nIn this section, we analyze the influence of the number of users on the decoding results of communication data and the accuracy of environment sensing. 
After receiving the $k$-th data packet, we calculate the mean square error (MSE) between the estimated environmental information ${{\\bf{\\hat x}}_k}$ and the actual environmental information $\\bf{x}$ to evaluate the accuracy of the environment sensing,\n\\begin{equation}\n{\\rm{MSE}} = \\frac{1}{{{N_{\\rm{s}}}}}\\left\\| {{{{\\bf{\\hat x}}}_k} - {\\bf{x}}} \\right\\|_2^2, \\label{eq33}\n\\end{equation}\nwhere $N_{\\rm{s}}$ is the total number of points in the point cloud describing the environment, and the SER between the decoding results ${{\\bf{\\hat s}}_k}$ and the original transmission data ${{\\bf{s}}_k}$ is calculated to evaluate the accuracy of data decoding.\n\nBased on the received data ${{\\bf{y}}_k}$ and decoded data ${{\\bf{\\hat s}}_k}$, the essence of estimating the environmental information ${{\\bf{\\hat x}}_k}$ is to solve the CS sparse reconstruction problem, and (\\ref{eq16}) can be expressed as\n\\begin{equation}\n{{{\\bf{\\hat x}}}_k} = \\arg \\mathop {\\min }\\limits_{\\bf{x}} {\\left\\| {\\bf{x}} \\right\\|_1}, \\label{eq34}\n\\end{equation}\n\\begin{equation}\n\\begin{array}{l}\n {\\rm{s}}{\\rm{.t}}{\\rm{.}}\\quad {\\left\\| {{{\\bf{y}}_{k-{n_{\\rm{f}}},k}} - {{{\\bf{\\hat s}}}_{k-{n_{\\rm{f}}},k}}{{\\bf{x}}^{\\rm{T}}} {{\\bf{\\tilde{A}}}^{\\rm{s}}\\left(\\left[k-n_{\\rm{f}},k\\right]\\right)} } \\right\\|_2}\\\\\n \\quad\\quad \\le {\\left\\| { {\\left( {{{\\bf{s}}_{k-{n_{\\rm{f}}},k}} - {{{\\bf{\\hat s}}}_{k-{n_{\\rm{f}}},k}}} \\right){{\\bf{x}}^{\\rm{T}}}{{\\bf{\\tilde{A}}}^{\\rm{s}}\\left(\\left[k-n_{\\rm{f}},k\\right]\\right)}} } \\right\\|_2} + {\\varepsilon _{\\rm{x}}}\n \\end{array}, \\label{eq35}\n\\end{equation}\nwhere ${{\\bf{y}}_{k-{n_{\\rm{f}}},k}}$, ${{\\bf{\\hat s}}_{k-{n_{\\rm{f}}},k}}$, and ${{\\bf{\\tilde{A}}}^{\\rm{s}}\\left(\\left[k-n_{\\rm{f}},k\\right]\\right)}$ represent the received packets, the decoding results, and the measurement matrix in the forward propagation window of size $n_{\\rm{f}}$, respectively, and ${\\rm{SER}} \\propto {\\left\\| { {\\left( {{{\\bf{s}}_{k-{n_{\\rm{f}}},k}} - {{{\\bf{\\hat s}}}_{k-{n_{\\rm{f}}},k}}} \\right){{\\bf{x}}^{\\rm{T}}}{{\\bf{\\tilde{A}}}^{\\rm{s}}\\left(\\left[k-n_{\\rm{f}},k\\right]\\right)}} } \\right\\|_2}$. Therefore, when the decoding error rate (SER) increases, the constraint conditions of the sparse reconstruction problem become more slack, and the estimated environmental information error (MSE) increases. According to the theory of CS \\cite{Donoho}, the theoretical upper bound of environment sensing accuracy is,\n\\begin{equation}\n{\\left\\| {{\\bf{x}} - {{{\\bf{\\hat x}}}_k}} \\right\\|_2} \\le c \\cdot {R_p} \\cdot {\\left( {\\frac{{{N_{\\rm{u}}} \\cdot {N_{\\rm{R}}} \\cdot {n_{\\rm{f}}}}}{{\\log {N_{\\rm{s}}}}}} \\right)^{1\/2 - 1\/p}}, \\label{eq36}\n\\end{equation}\nwhere $c > 0$ is a constant, ${\\left\\| {\\bf{x}} \\right\\|_p} \\le {R_p}$ $\\left( {0 < p < 2} \\right)$ is the sparsity condition, and ${\\rm{MSE}} \\propto {\\left\\| {{\\bf{x}} - {{{\\bf{\\hat x}}}_k}} \\right\\|_2}$. Therefore, when the number of users $N_{\\rm{u}}$ increases, the rank of the measurement matrix in the sparse reconstruction problem increases, and the environment sensing error (MSE) decreases. 
In addition, when the number of non-zero elements in the environment increases, $R_p$ increases, and the environment sensing error (MSE) increases.\n\nWhen using MPA decoder to decode the received signal based on the ML theory, for a single ORE, let $C = N_{\\rm{u}} \\times M$ be the number of symbols in the codebook, then $N_{\\rm{u}}$ users at each moment have ${C^{{N_{\\rm{u}}}}}$ ways to send symbols. The theoretical upper bound of the average decoding error rate (SER) is,\n\\begin{equation}\n{\\rm{SER}} \\le \\frac{1}{{{C^{{N_{\\rm{u}}}}}}}\\sum\\limits_{{{\\bf{s}}_{\\rm{a}}}} {\\sum\\limits_{{{\\bf{s}}_{\\rm{b}}},{{\\bf{s}}_{\\rm{a}}} \\ne {{\\bf{s}}_{\\rm{b}}}} {P\\left( {{{\\bf{s}}_{\\rm{a}}} \\to {{\\bf{s}}_{\\rm{b}}}} \\right)} }, \\label{eq37}\n\\end{equation}\nwhere $P\\left( {{{\\bf{s}}_{\\rm{a}}} \\to {{\\bf{s}}_{\\rm{b}}}} \\right)$ represents the pairwise error probability (PEP) that the symbol ${\\bf{s}}_{\\rm{a}}$ is incorrectly decoded to ${\\bf{s}}_{\\rm{b}}$. In (\\ref{eq37}), there are a total of $\\left( {C - 1} \\right){C^{2{N_{\\rm{u}}} - 1}}$ items for summation. Therefore, when other conditions are the same, SER increases as the number of users $N_{\\rm{u}}$ increases. Due to the random distribution of scatterers in the environment, the channels from the users to the AP can be expressed as Rayleigh fading channels. According to the ML theory, the decoding process in (\\ref{eq8}) can be expressed as\n\\begin{equation}\n {{{\\bf{\\hat s}}}_k} = \\arg \\mathop {\\min }\\limits_{j \\in {C^{{n_{\\rm{u}}}}}}{\\left\\| {{{\\bf{y}}_k} - {{{\\bf{\\hat H}}_k}{{\\bf{s}}_k}\\left( j \\right)} } \\right\\|^2}, \\label{eq38}\n\\end{equation}\n\\begin{equation}\n {\\rm{s}}{\\rm{.t}}{\\rm{.}}\\quad {{\\bf{y}}_k} = {{{\\bf{\\hat H}}}_k}{{\\bf{s}}_k} + \\left( {{{\\bf{H}}_k} - {{{\\bf{\\hat H}}}_k}} \\right){{\\bf{s}}_k} + {\\bf{w}}, \\label{eq39}\n\\end{equation}\nwhere $\\bf{w}$ represents Gaussian white noise with variance $N_0$. Due to the random distribution of scatterers in the environment, the interference caused by channel estimation errors is also Gaussian. Then the PEP in (\\ref{eq37}) can be expressed as\n\\begin{equation}\nP\\left( {{{\\bf{s}}_{\\rm{a}}} \\to {{\\bf{s}}_{\\rm{b}}}} \\right) = {{\\mathbb{E}}_{{{{\\bf{\\hat H}}}_k}}}\\left[ {Q\\left( {\\sqrt {\\frac{{{{\\left\\| {{{{\\bf{\\hat H}}}_k}\\left( {{{\\bf{s}}_{\\rm{a}}} - {{\\bf{s}}_{\\rm{b}}}} \\right)} \\right\\|}^2}}}{{{N_0} + \\mathbb{D} \\left( {\\left( {{{\\bf{H}}_k} - {{{\\bf{\\hat H}}}_k}} \\right){{\\bf{s}}_k}} \\right)}}} } \\right)} \\right], \\label{eq40}\n\\end{equation}\nWhere $Q$ is the error function. Therefore, the channel estimation error caused by inaccurate environmental information estimation will lead to an increase in the decoding symbol error rate (SER).\n\\begin{figure}\n \\centering\n \\includegraphics[width=7.5cm]{fig5.eps}\n \\caption{The trade-off relationship between the number of users and system performance.}\n \\label{figtrade}\n \\end{figure}\n\nAs shown in Fig. \\ref{figtrade}, the solid line indicates that the increase in the number of users $N_{\\rm{u}}$ promotes the accuracy of environment sensing, and the decrease of MSE reduces the channel prior information error $\\Delta {{\\bf{\\hat H}}_k}$ for decoding, so the SER also decreases and the decoding becomes more accurate. 
On the other hand, the dotted line indicates that an increase in the number of users $N_{\\rm{u}}$ increases the error of decoding (SER), and the decoded data error $\\Delta {{\\bf{\\hat s}}_k}$ also increases, which increases the environment sensing error (MSE). Therefore, firstly, when the number of users decreases, the environment sensing error (MSE) increases. Secondly, when the number of users increases, the SCMA decoding error (SER) increases. Finally, because the proposed iterative algorithm repeatedly executes environment sensing and data decoding, their performance affects each other, resulting in the same trend of SER and MSE, the number of users should be a compromise choice. The number of users $N_{\\rm{u}}$ has a trade-off relationship with system performance (MSE\/SER) because the number of users $N_{\\rm{u}}$ is traded for system performance. Finally, the optimal operating point could be estimated as\n\\begin{equation}\n{\\tilde N_{\\rm{u}}} = \\arg \\mathop {\\min }\\limits_{{N_{\\rm{u}}}} \\quad {a_1} \\cdot {\\rm{MSE}} + {a_2} \\cdot {\\rm{SER}}. \\label{eq41}\n\\end{equation}\n\nWhen the number of users is ${\\tilde N_{\\rm{u}}}$, the system performance MSE and SER reach the best. In practical applications, we need to adjust the coefficients $a_1$ and $a_2$ to suit the system's requirements for communication and environment sensing performance. And select the appropriate SCMA codebook and system parameters according to the number of users in the scene to ensure that the number of actual users is within the optimal working range of the system.\n\n\\section{Numerical Results}\nIn this section, we simulated the performance of the algorithm and all simulations are conducted in MATLAB 2017b on a computing server with a Xeon E5-2697 v3 processer and 128GB memory. The simulation scenario is set in a room with a size of $4\\rm{m} \\times 4m \\times 4m$, and the point cloud with a size of $8 \\times 8 \\times 8$ is used to represent the environmental information. The transmission signal frequency is set to 28 - 30GHz, and the bandwidth is 2GHz. A $20 \\times 20$ IRS is used for assist communication. In order to meet the actual system design, we set the IRS amplitude reflection coefficient $\\rho_{{n_{\\rm{I}}}} = 1$, the phase shift $\\varphi_{{n_{\\rm{I}}}} = 0$ or $\\pi $. The position of the scatterers distributed in the space is random, and the scattering coefficient ${\\bf{x}}_{{n_{\\rm{s}}}} \\in \\left[ {0,1} \\right]$. As shown in Fig. \\ref{figanswer}, small cubes are used to indicate the position distribution and scattering coefficient of the point cloud in space. The lower the transparency of the small cube, the larger the scattering coefficient of the point.\n\\begin{figure*}[t]\n \\centering\n \\subfigure[The original environment scatterer distribution.]{\n \\includegraphics[width=0.4\\textwidth]{fig6.eps}}\n \\subfigure[The sensing result when $\\rm{E_b\/N_0}=0dB$.]{\n \\includegraphics[width=0.4\\textwidth]{fig7.eps}}\n \\subfigure[The sensing result when $\\rm{E_b\/N_0}=5dB$.]{\n \\includegraphics[width=0.4\\textwidth]{fig8.eps}}\n \\subfigure[The sensing result when $\\rm{E_b\/N_0}=10dB$.]{\n \\includegraphics[width=0.4\\textwidth]{fig9.eps}}\n \\caption{The original environment scatterer distribution and the sensing results under different SNR conditions.}\n \\label{figanswer}\n \\end{figure*}\n\nThe distribution of the environment scatterer is shown in Fig. 
\\ref{figanswer}(a), and the system parameters are set to the number of users $N_{\\rm{u}}$ = 6, the number of OREs $R$ = 4, and $d_{\\rm{v}}$ = 2. According to the convergence of the algorithm, we set $K_{\\rm{s}} = 5$ in this section. After the proposed algorithm is iterated to convergence, the intuitive sensing results are shown in Fig. \\ref{figanswer}(b), Fig. \\ref{figanswer}(c), and Fig. \\ref{figanswer}(d), when the signal-to-noise ratio (SNR, $\\rm{E_b\/N_0}$) are 0dB, 5dB, and 10dB respectively. It can be seen that the sensing result is very blurred when $\\rm{E_b\/N_0}=0dB$, and the shape of the target object can only be barely distinguished. As the SNR condition becomes better, the number of misidentified scatterers gradually decreases until $\\rm{E_b\/N_0}=10dB$ can clearly distinguish the shape of the target. This is because the SNR conditions affect the accuracy of communication data decoding, and therefore affect the accuracy of the sensing results.\n\nAs shown in Fig. \\ref{IT-MSE} and Fig. \\ref{IT-SER}, the system parameters are set to the number of users $N_{\\rm{u}}$ = 6, the number of OREs $R$ = 4, $d_{\\rm{v}}$ = 2, and the sparsity of randomly generated environmental scatterers is 1.5\\%. Set the forward propagation window size $n_{\\rm{f}}$ = 10 and the feedback window size $n_{\\rm{b}}$ = 1. We use MSE to evaluate the environmental sensing accuracy, when the number of data packets increases, the iterative algorithm converged, and the environment sensing result gradually becomes accurate. We use SER to evaluate the accuracy of transmission data decoding. As the number of data packets increases, the iterative algorithm converged, and the transmission data decoding results become more accurate. \nAt the same time, due to the existence of feedback, as the number of data packets increases, the data packets at the previous time are decoded based on the more accurate environmental information at the later time. The feedback process improves the decoding accuracy during convergence.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8cm]{fig17.eps}\n \\caption{The relationship between the data packet number (number of iterations) and MSE.}\n \\label{IT-MSE}\n \\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8cm]{fig16.eps}\n \\caption{The relationship between the data packet number (number of iterations) and SER.}\n \\label{IT-SER}\n \\end{figure}\n\nAfter the proposed algorithm iterates to convergence, the relationship between the number of users and system performance is shown in Fig. \\ref{UE-MSE} and Fig. \\ref{UE-SER}. We set a tough condition and the system parameters are set to the number of OREs $R$ = 7, $d_{\\rm{v}}$ = 2, $\\rm{E_b\/N_0}=10dB$, and the sparsity of randomly generated environmental scatterers is 3\\%. \nAs analyzed in the section \\uppercase\\expandafter{\\romannumeral6}, there is a trade-off relationship between the number of users and the system performance indicators SER and MSE. \nAs the number of users changes, the environment sensing performance and the multi-user communication performance cannot reach the best at the same time.\nAs shown in Fig. \\ref{UE-MSE} and Fig. \\ref{UE-SER}, \nwhen the number of users is small, as the number of users increases, the sensing accuracy is significantly improved (as analyzed in \\eqref{eq36}), and therefore the decoding accuracy is improved, until the optimal operating point ${\\tilde N_{\\rm{u}}} = 12$ is reached under simulation conditions. 
When the sensing of the environment is sufficiently accurate, a further increase in the number of users causes a decrease in the accuracy of decoding (as analyzed in \\eqref{eq37} and \\eqref{eq40}), and therefore the accuracy of environment sensing also decreases slightly.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8cm]{fig12.eps}\n \\caption{The trade-off relationship between the number of users and MSE.}\n \\label{UE-MSE}\n \\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8cm]{fig13.eps}\n \\caption{The trade-off relationship between the number of users and SER.}\n \\label{UE-SER}\n \\end{figure}\n\nFig. \\ref{UE-MSE} and Fig. \\ref{UE-SER} also show the impact of the ``momentum-mode'' on system performance. The ``momentum-mode'' uses data information outside the sliding-window for environment sensing. As analyzed in Section \\uppercase\\expandafter{\\romannumeral6}, the environment sensing accuracy is low when the number of users is small, so the ``momentum-mode'' accumulates errors and cannot improve system performance. However, the ``momentum-mode'' can improve system performance when the number of users is higher than the optimal operating point. It can be seen from the simulation results that for a large momentum coefficient, $\\mu = 0.9$, the MSE and SER are greatly improved when the number of users is large, but the SER is negatively affected when there are few users. For a moderate momentum coefficient, $\\mu = 0.5$, the MSE and SER are slightly improved when the number of users is large, with no significant impact on system performance when there are few users. For a small momentum coefficient, $\\mu = 0.1$, there is no significant impact on system performance. We recommend using a larger momentum coefficient when the number of users is large, and a moderate or small momentum coefficient when there are few users.\n\nFinally, in Table \\ref{tab1}, we provide the run time of using the $k$-th received data packet for data decoding and environment sensing (steps 4 to 10 in Algorithm \\ref{alg2}). The simulation parameter settings are the same as those in Fig. \\ref{UE-MSE} and Fig. \\ref{UE-SER}, and the ``momentum-mode'' is not enabled. As shown in Table \\ref{tab1}, the measured run time confirms that the computational complexity of the algorithm increases with the number of users.\n\\begin{table}[h]\n \\centering\n \\caption{The run time of the proposed algorithm.}\n \\begin{tabular}{|p{1in}|p{0.3in}|p{0.3in}|p{0.3in}|p{0.3in}|}\n \\arrayrulecolor{black}\n \\hline \n \\makecell[l]{Number of users $N_{\\rm{u}}$} & \\makecell[c]{5} & \\makecell[c]{10} & \\makecell[c]{15} & \\makecell[c]{20}\\\\\n \\hline \n \\makecell[l]{Run time (s)} & \\makecell[c]{4.95} & \\makecell[c]{5.11} & \\makecell[c]{5.33} & \\makecell[c]{5.64}\\\\\n \\hline \n \\end{tabular}\n \\label{tab1}\n \\end{table}\n\n\\section{Conclusion}\nIn the diverse wireless communication application scenarios of the future, environment sensing will be an important component of the wireless communication system. In the scenario of IRS-assisted indoor uplink communication, we design a multiple access method and an environment sensing method. The multiple access method is based on SCMA: with the assistance of the IRS, and based on the sparse codebook of the transmitted signals, the SCMA-IRS-MPA decoder is used. 
The environment sensing algorithm is based on the CS theory, including time sliding-window and ``momentum-mode'' which keep on sensing the environment while continuously receiving the data stream sent by the user. In this paper, the proposed multiple access algorithm and the proposed environment sensing algorithm rely on each other. Therefore, we propose a novel iterative algorithm based on low-density pilots to jointly solve the multiple access and environment sensing problems and achieve the integration of environment sensing and communication. Finally, numerical simulation has verified the convergence and effectiveness of the iterative and incremental algorithm and analyzed the trade-off relationship between the number of users and system performance. We also give a system parameter configuration method. The sensing-communication integration ideas and algorithms proposed in this paper will provide references for the development of new wireless communication technologies in the future.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbyee b/data_all_eng_slimpj/shuffled/split2/finalzzbyee new file mode 100644 index 0000000000000000000000000000000000000000..6d897c508c14de7675032fd12f345b9f4f6ae543 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbyee @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nCan the holographic dictionary of AdS\/CFT be generalized to gravitational theories defined on a finite patch of spacetime? This question has recently attracted renewed attention due to the discovery of a new class of solvable irrelevant deformations of two-dimensional conformal field theory, known as the $T\\bar{T}$ deformation \\cite{Smirnov:2016lqw, Zamolodchikov:2004ce, Cavaglia:2016oda}. It was conjectured in \\cite{McGough:2016lol} (see also \\cite{Kraus:2018xrn}) that the $T\\bar{T}$ deformed CFT can be interpreted as the holographic dual of a finite patch of asymptotically AdS$_3$ spacetime. A key piece of evidence in support of this conjectured duality is that the conformal Ward identify of the CFT gets deformed into a second order functional differential equation that formally matches with the Wheeler-DeWitt equation of AdS${}_3$ gravity. This relationship is akin to the familiar duality between Chern-Simons field theory and Wess-Zumimo-Witten conformal field theory \\cite{Witten:1988hf}, and points to the possible identification between the wave functionals of gravitational theories in $D+1$-dimensions and partition functions of a special class of $D$-dimensional QFTs.\\footnote{For studies of higher-dimensional generalisations of the $T\\bar{T}$ deformation and their potential holographic interpretation, see \\cite{Hartman:2018tkw, Taylor:2018xcy}.} If such a precise relationship could indeed be established, it would be an important generalization of the standard holographic dictionary that would open up a new avenue for studying gravitational physics in the bulk space-time.\n\nHowever, while the new duality AdS${}_3$ gravity and the $T\\bar{T}$ deformed CFTs passes several non-trivial checks, the precise status of the correspondence is still unclear. In particular, it was found that for large enough energies and fixed deformation parameter, the energy spectrum of the boundary QFT complexifies, indicating a possible breakdown of unitary. 
This apparent breakdown has a natural interpretation from the point of view of the bulk: it corresponds to a physical cut-off on the spectrum of black hole states, that removes all black holes with Schwarzschild radii that would extend beyond the finite radial cutoff. Nonetheless, the existence of this cut-off in the energy spectrum raises several conceptual questions, that require better understanding of the UV properties of the boundary QFT.\n\nTo circumvent the complications of field theory, in this paper we turn to analyzing the problem in one dimension lower. In particular we will consider the finite cutoff version of two-dimensional Jackiw-Teitelboim (JT) gravity with a negative cosmological constant and its dual formulation in terms of a deformed version of Schwarzian quantum mechanics proposed in \\cite{Gross:2019ach, Gross:2019uxi}. Traditionally, the JT path integral is computed with Dirichlet boundary conditions in the limit where the proper boundary length and boundary value for the dilaton become very large \\cite{Maldacena:2016upp}. In that limit, JT gravity reduces to a soluble 1D quantum theory, the Schwarzian theory \\cite{Bagrets:2016cdf, Stanford:2017thb, Mertens:2017mtv}.\\footnote{For further investigations of Schwarzian quantum mechanics and JT gravity, see \\cite{kitaevTalks, Maldacena:2016hyu, Jensen:2016pah, Engelsoy:2016xyb, Bagrets:2016cdf, Mertens:2017mtv, Lam:2018pvp, Goel:2018ubv, Kitaev:2018wpr, Yang:2018gdb, Saad:2019lba, Mertens:2019tcm, Iliesiu:2019xuh, Suh:2019uec}.} At finite cut-off, however, the boundary theory is expected to become highly non-linear and the computation of the JT partition function in this regime has thus far been an open problem. In the following, we will discuss two different ways to compute it, relying on either canonical quantization or path integral quantization. \n\nIn the canonical approach one foliates the spacetime with a certain, usually time-like, coordinate and parametrizes the metric in an ADM decomposition \\cite{PhysRev.160.1113}. As a result of diffeomorphism invariance, the quantum mechanical wave functionals satisfy a set of local Wheeler-DeWitt constraints, that uniquely determine their dependence on local data defined on the chosen foliation. In general these constraints are difficult to solve, except possibly in the so-called mini-superspace approximation. Luckily, for two-dimensional dilaton gravity theories, the constraints reduce to two first order functional differential equations that can be solved exactly \\cite{Henneaux:1985nw} \\cite{LouisMartinez:1993eh} in the form of explicit diffeomorphism invariant wavefunctionals of the boundary metric and dilaton profile. The WDW wavefunctionals relevant for our analysis are defined through radial quantisation in Euclidean signature. This provides a way to compute the path integral at finite cutoff.\n\nIn the second approach, we compute the Euclidean path integral directly. Again, the analysis at finite proper boundary length becomes more intricate as some of the gravitational modes, that were frozen in the large volume limit, now become dynamical. After integrating out the dilaton, the JT path integral localizes to one over a boundary action given by the extrinsic curvature $K$ of the boundary. By using constraints from the $SL(2,\\mathbb{R})$ isometry of AdS$_2$, we manage to express $K$ in an expansion containing solely powers of the Schwarzian derivative and its derivatives. 
This greatly facilitates our computations and allows us to express the partition function as the expectation value of an operator in the Schwarzian theory. Using integrability properties of the Schwarzian theory, we manage to exactly compute the partition function to all orders in a perturbative expansion in the cutoff. \n\nThe canonical and path integral approaches use widely different techniques to compute the finite cutoff partition function, yet, as expected from the equivalence between the two quantisation procedures, the results agree. Moreover, we find perfect agreement between the results obtained via the two approaches and a proposed deformation of the Schwarzian partition function, analogous to the $T\\bar T$ deformation for $2D$ CFTs \\cite{Gross:2019ach, Gross:2019uxi}. Before diving into the computations, we first review (for completeness and later reference) this one-dimensional analog of $T\\bar{T}$ and then present a more detailed summary of our results.\n\n\\subsection{Review $1d$ $T\\bar{T}$}\n\nIn previous work \\cite{Gross:2019ach}, a particular deformation of the Schwarzian quantum mechanics\nwas shown to be classically equivalent to JT gravity with Dirichlet boundary conditions for the metric and dilaton. The deformation of the Schwarzian theory follows from a dimensional reduction of the $T\\bar T$ deformation in $2D$ CFTs \\footnote{This reduction is valid in the classical limit and should be seen as a motivation for the proposed deformation. It would be interesting to extend it to a precise statement using the methods of \\cite{Ghosh:2019rcj}.}. Explicitly, the deformation involves a flow of the action $S$ of the quantum mechanical theory, \n\\begin{eqnarray}\\label{1dTTbar}\n\\partial_{\\l} S = \\int_0^1 d\\theta \\frac{T^2}{1\/2 - 2\\l T} \n\\end{eqnarray}\nwhere $T$ is the trace of the stress-`scalar' of the quantum mechanical theory and $\\l$ is the deformation parameter. By going from the Lagrangian to the Hamiltonian formulation, we can write an equivalent flow for the Hamiltonian instead of $S$ and find the flow of the energy eigenvalues,\\footnote{Since the deformation is a function of the Hamiltonian, the eigenfunctions do not change under the flow.}\n\\begin{eqnarray}\\label{deformedE}\n\\partial_{\\l}H = \\frac{H^2}{1\/2-2\\l H}\\quad \\Rightarrow\n\\quad \\mathcal{E}_{\\pm}(\\l) = \\frac{1}{4\\l}\\left(1 \\mp \\sqrt{1- 8 \\l E}\\right).\n\\end{eqnarray}\nHere $E$ are the energy levels of the undeformed theory, and matching onto the original spectrum as $\\l \\to 0$ results in picking the minus sign in front of the root in \\eqref{deformedE}. In section \\ref{sec:nonpert} we will see that the other branch of the root will also make its appearance. In the case of the Schwarzian theory, which has a partition function that can be exactly computed \\cite{Stanford:2017thb},\\footnote{In the gravitational theory $C$ is equal to $\\phi_r$, the renormalised boundary value of the dilaton. Furthermore, here we picked a convenient normalisation of the partition function.}\n\\begin{eqnarray}\\label{SchZ}\nZ(\\b) = \\int_0^{\\infty} dE\\; \\frac{\\sinh(2\\pi \\sqrt{2 C E})}{\\sqrt{2C\\pi^3}} e^{-\\b E} = \\frac{e^{2C\\pi^2\/\\b}}{\\b^{3\/2}} ,\n\\end{eqnarray}\nthe deformed partition function is,\n\\begin{eqnarray}\\label{deformedZ}\nZ_{\\l}(\\b) = \\int_0^{\\infty} dE\\; \\frac{\\sinh(2\\pi \\sqrt{2 C E})}{\\sqrt{2C\\pi^3}} e^{-\\b \\mathcal{E}_+(\\l)}.\n\\end{eqnarray}\nLet us make two observations. 
First, the integral over $E$ runs over the full positive real axis and therefore will also include complex energies $\\mathcal{E}_{+}(\\l)$ when $\\l > 0$, i.e., for $E > 1\/(8\\l)$ the deformed spectrum complexifies. This violates unitarity and needs to be dealt with. We will come back to this issue in section \\ref{sec:nonpert}. Second, given that there is a closed form expression of the original Schwarzian partition function, one can wonder whether this is also the case for the deformed partition function. This turns out to be the case. For the moment, let us assume $\\l < 0$ so that there are no complex energies; it was then shown in \\cite{Gross:2019ach} that the deformed partition function is given by an integral transform of the original one, analogous to the result of \\cite{Dubovsky_2018} in $2D$. The integral transform reads\n\\begin{eqnarray}\\label{intTransform}\nZ_{\\l}(\\b) = \\frac{\\b}{\\sqrt{-8\\pi \\l}} \\int_0^{\\infty} \\frac{d\\b'}{\\b'^{3\/2}} e^{\\frac{(\\b-\\b')^2}{8\\l \\b'}} Z(\\b').\n\\end{eqnarray}\nPlugging \\eqref{SchZ} into this expression and performing the integral over $\\b'$ yields,\n\\begin{eqnarray}\\label{K2}\nZ_{\\l}(\\b) = \\frac{ \\b e^{-\\frac{\\b}{4\\l}}}{\\sqrt{-2 \\pi \\l}(\\b^2 + 16 C \\pi^2 \\l)}K_2\\left( -\\frac{1}{4\\l}\\sqrt{\\b^2 + 16 C \\pi^2 \\l} \\right),\n\\end{eqnarray}\nwith the associated density of states given by\n\\begin{eqnarray}\\label{deformedrho}\n\\rho_\\l(E) = \\frac{1-4\\l E}{\\sqrt{2\\pi^3 C}} \\sinh\\left(2\\pi \\sqrt{2CE(1-2\\l E)} \\right).\n\\end{eqnarray}\nAlthough we have derived this formula assuming that $\\l < 0$, we will simply analytically continue to $\\l > 0$ to obtain the partition function of the deformed Schwarzian theory that describes JT gravity at finite cutoff. One might be worried that this would not yield the same as \\eqref{deformedZ}, and indeed there are a few subtleties involved in doing that analytic continuation, as discussed at the end of section \\ref{sec:JT-gravity-review} and in section \\ref{sec:nonpert}.\n\n\\subsection{Summary of results and outline}\n\nThe purpose of this paper is to give two independent bulk computations that reproduce the partition function \\eqref{deformedZ}. In section \\ref{sec:WdW-wavefunctional-main} we present a derivation of the partition function of JT gravity (with negative cosmological constant) at finite cutoff by computing the radial Wheeler-DeWitt (WdW) wavefunctional. Thanks to the work of Henneaux, it has been known since the 1980s that the constraints of $2D$ dilaton gravity can be solved exactly in the full quantum theory \\cite{Henneaux:1985nw}. We will review this computation and fix the solution by imposing Hartle-Hawking boundary conditions. In particular we find that\n\\begin{equation}\\label{eq:AdSfinitecutoffJTintro}\n\\Psi_{\\rm HH}[\\phi_b(u),L] = \\int_0^\\infty dM\\; \\sinh(2\\pi \\sqrt{M}) ~e^{ \\int_0^L du \\left[ \\sqrt{\\phi^2_b-M-( \\partial_u \\phi_b)^2} - \\partial_u \\phi_b \\tan^{-1} \\left(\\sqrt{\\frac{\\phi_b^2-M}{(\\partial_u \\phi_b)^2}-1}\\right) \\right]}.\n\\end{equation}\nThis wavefunction is computed in a basis of fixed dilaton $\\phi_b(u)$, where $u$ corresponds to the proper length along the boundary, and $L$ the total proper length of the boundary. 
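(As a quick numerical aside to the review above: for $\\l < 0$ the spectral representation \\eqref{deformedZ} and the closed form \\eqref{K2} can be checked against each other with a few lines of code. The values of $C$, $\\l$ and $\\b$ below are arbitrary illustrative choices with no physical significance.)
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

C, lam, beta = 0.5, -0.05, 3.0        # illustrative values only; lam < 0

def E_def(E):                          # deformed energy, minus branch of the root
    return (1.0 - np.sqrt(1.0 - 8.0 * lam * E)) / (4.0 * lam)

def integrand(E):                      # sinh split into exponentials to avoid overflow
    a = 2.0 * np.pi * np.sqrt(2.0 * C * E)
    b = beta * E_def(E)
    return (np.exp(a - b) - np.exp(-a - b)) / (2.0 * np.sqrt(2.0 * C * np.pi ** 3))

Z_spectral, _ = quad(integrand, 0.0, 5.0e3, limit=500)

x = -np.sqrt(beta ** 2 + 16.0 * C * np.pi ** 2 * lam) / (4.0 * lam)
Z_closed = (beta * np.exp(-beta / (4.0 * lam))
            / (np.sqrt(-2.0 * np.pi * lam) * (beta ** 2 + 16.0 * C * np.pi ** 2 * lam))
            * kv(2, x))

print(Z_spectral, Z_closed)            # the two should agree for lam < 0
\\end{verbatim}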
The above results obtained through the WdW constraint are non-perturbative in both $L$ and $\\phi_b(u)$.\n\n When considering a constant dilaton profile $\\phi_b(u) = \\phi_b$, the wavefunction \\eqref{eq:AdSfinitecutoffJTintro} reproduces the $T\\bar T$ partition function in \\eqref{deformedZ}, with the identification \n\\begin{equation}\nM \\to 2 C E,\\quad \\phi^2_b \\to \\frac{C}{4\\l},\\quad L \\to \\frac{\\b}{\\sqrt{4C\\l}}. \n\\end{equation}\nIn terms of these variables, \\eqref{eq:AdSfinitecutoffJTintro} matches with $T\\bar{T}$ up to a shift in the ground state energy, which can be accounted for by a boundary counterterm $e^{-I_{\\rm ct}} = e^{- \\phi_b L}$ added to the gravitational theory. An important aspect that this analysis emphasizes is the fact that, for JT gravity, studying boundary conditions with a constant dilaton is enough. As we explain in section \\ref{minisuper}, if the wavefunction for a constant dilaton is known, the general answer \\eqref{eq:AdSfinitecutoffJTintro} is fixed by the constraints and does not contain any further dynamical information \\cite{MTY}.\n\n\n\n\nThe partition function \\eqref{deformedZ} is also directly computed from the path integral in JT gravity at finite cutoff in section \\ref{sec:JT-gravity-review}. We will impose dilaton and metric Dirichlet boundary conditions, in terms of $\\phi_b$ and the total proper length $L$. For the reasons explained in the previous paragraph, it is enough to focus on the case of a constant dilaton. It is convenient to parametrize these quantities in the following way \n\\begin{eqnarray}\n\\label{eq:Dirichlet-bdy-conditions}\n \\phi_b = \\frac{\\phi_{r}}{\\e},~~~~~L = \\frac{\\beta}{\\e}, \n\\end{eqnarray}\nin terms of a renormalized length $\\beta$ and dilaton $\\phi_r$. We will refer to $\\e$ as the cutoff parameter \\footnote{In Poincar\\'e coordinates $\\e$ corresponds semiclassically to the bulk coordinate of the cutoff surface.}. When comparing with the $T\\bar{T}$ approach this parameter is $\\e = \\sqrt{2\\lambda}$ (in units for which we set $\\phi_r \\to 1\/2 $). In order to compare to the asymptotically $AdS_2$ case previously studied in the literature \\cite{Maldacena:2016upp, Saad:2019lba}, we need to take $\\phi_b, L \\to \\infty$ with a fixed renormalized length $L\/\\phi_b$. In terms of the cutoff parameter, this limit corresponds to $\\e \\to 0$, keeping $\\phi_r$ and $\\beta$ fixed. \n\n\nWe will solve this path integral perturbatively in the cutoff $\\e$, to all orders. We integrate out the dilaton and reduce the path integral to a boundary action comprised of the extrinsic curvature $K$ and possible counter-terms. We find an explicit form of the extrinsic curvature valid to all orders in perturbation theory in $\\e$. A key observation in obtaining this result is the realisation of a (local) $SL(2,\\mathbb{R})$ invariance of $K$ in terms of lightcone coordinates $z = \\tau - i x$, $\\bar{z} = \\tau + i x$:\n\\begin{eqnarray}\nK[z,\\bar{z}] = K\\left[ \\frac{a z + b}{c z + d}, \\frac{a \\bar{z} + b}{c \\bar{z} + d} \\right].\n\\end{eqnarray}\nSolving the Dirichlet boundary condition for the metric allows us to write $K$ as a functional of the Schwarzian derivative of the coordinate $z$.\\footnote{This generalizes the computation of \\cite{Maldacena:2016upp} which found the relation between the extrinsic curvature and the Schwarzian derivative in the infinite cutoff limit. 
} As we will explain in detail, the remaining path integral can be computed exactly using integrability properties in the Schwarzian theory to all orders in $\\e^2$.\n\nThus, by the end of section \\ref{sec:JT-gravity-review}, we find agreement between the WdW wavefunctional, the Euclidean partition function and the $T\\bar T$ partition function from \\eqref{deformedZ}:\n\\begin{eqnarray}\n e^{-I_{\\rm ct}}\\Psi_{\\rm HH}[\\phi_b,\\, L] \\overset{{\\rm non-pert.}}{=} Z_\\l(\\b) \\overset{{\\rm pert.}}{=} Z_{\\rm JT}[\\phi_b,\\, L] \\,.\n\\end{eqnarray}\nHere we emphasize again that we show that the first equality is true non-perturbatively in $\\e$ (respectively in $\\l$), whereas we prove the second equality to all orders in perturbation theory.\n\nIn section \\ref{sec:nonpert} we discuss various extensions of the deformed partition function including further corrections. In particular we discuss two types of corrections in the path integral and in the integral over energies in \\eqref{deformedZ}: first, we analyze non-perturbative terms in $\\e$ coming from contributions that cannot be written as a path integral on the disk (the contracting branch of the wavefunction) and second, we speculate about non-perturbative corrections coming from the genus expansion. Related to the first kind of ambiguity, given the exact results we obtained for the wavefunctional and partition function, we explore how the complexification of the energy levels (that we mentioned above) can be cured. In particular, we propose that it requires the inclusion of the other branch of the root in \\eqref{deformedE}, but still results in a negative density of states. The structure of the negative density of states suggests that the (unitary) partition function is not an ordinary one, but one with a chemical potential turned on. Related to the second type, we compute the partition function of the finite cutoff ``trumpet'' which is a necessary ingredient when constructing higher genus hyperbolic surfaces. Finally, we speculate about the range of the remaining Weil-Petersson integral which is needed in order to compute the finite cutoff partition function when including the contribution of surfaces with arbitrary topology. \n\nSection \\ref{sec:desitter} applies the computation from section \\ref{sec:WdW-wavefunctional-main} to the case of JT gravity with a positive cosmological constant and finds the wavefunctional on a de Sitter time-slice at finite time. This wavefunctional has some interesting behaviour, similar to the Hagedorn divergence present in \\eqref{K2}. We finish with a discussion of our results and future directions in section \\ref{sec:discussion}.\n\\vspace{0.25cm}\n\n\\textbf{Note:} While this work was in progress, we became aware of a closely related project by D. Stanford and Z. Yang \\cite{SY}. They analyze finite cutoff JT gravity from yet a different perspective, finding different results. We leave understanding how these approaches are related for future work, but we believe they correspond to different ways to regularize (and therefore define) the theory. \n\n\n\\section{Wheeler-DeWitt wavefunction}\n\\label{sec:WdW-wavefunctional-main}\n\nIn this section, we will start by reviewing the canonical quantization of $2D$ dilaton-gravity following the approach of \\cite{Henneaux:1985nw, LouisMartinez:1993eh}. In these references, the authors find the space of exact solutions for both the momentum and Wheeler-DeWitt constraints. 
Later, in subsections \\ref{sec:WdW-finite-cutoff-JT} and \\ref{sec:HH-boundary-conditions-JT-wavefunctional}, we will focus on JT gravity, and we will explain how to impose the Hartle-Hawking condition appropriately to pick a solution corresponding to finite cutoff AdS$_2$. \n\nLet us consider the more general two dimensional dilaton gravity in Lorentzian signature,\n\\begin{equation}\n\\label{eq:action-dilaton-general}\nI = \\frac{1}{2}\\int_{M} d^2 x \\sqrt{g} [\\phi R - U(\\phi)] + \\int_{\\partial M} du \\sqrt{\\g_{uu}}\\;\\phi K,\n\\end{equation}\nwith an arbitrary potential $U(\\phi)$. $g$ is the two-dimensional space-time metric on $M$ and $\\g$ the induced metric on its boundary $\\partial M$. The boundary term in \\eqref{eq:action-dilaton-general} is necessary in order for the variational principle to be satisfied when imposing Dirichlet boundary conditions for the metric and dilaton. In \\eqref{eq:action-dilaton-general} we could also add the topological term $ \\frac{1}{2}\\int_{M} d^2 x \\sqrt{g} \\,\\phi_0 R + \\int_{\\partial M} du \\sqrt{\\g_{uu}}\\;\\phi_0 K = 2\\pi \\phi_0$ which will be relevant in section \\ref{sec:nptopo}.\n\nIt will be useful to define also the prepotential $W(\\phi)$ by the relation $\\partial_\\phi W(\\phi)=U(\\phi)$. In the case of JT gravity with negative (or positive) cosmological constant we will pick $U(\\phi) = -2 \\phi$ (or $U(\\phi)=2\\phi$) and $W(\\phi)=-\\phi^2$ ($W(\\phi)=\\phi^2$), which has as a metric solution AdS$_2$ (dS$_2$) space with unit radius.\n\nWe will assume the topology of space to be a closed circle, and will use the following ADM decomposition of the metric \n\\begin{equation}\\label{ADMdecomp}\nds^2 = - N^2 dt^2 + h (dx + N_{\\perp} dt)^2,~~~~h=e^{2\\sigma}\n\\end{equation}\nwhere $N$ is the lapse, $N_{\\perp}$ the shift, $h$ the boundary metric (which in this simple case is an arbitrary function of $x$) and we identify $x\\sim x+1$. After integrating by parts and using the boundary terms, the action can then be written as \n\\begin{eqnarray}\nI &=& \\int d^2x ~e^{\\sigma} \\Big[ \\frac{\\dot{\\phi}}{N} \\left( N_{\\perp} \\partial_x\\sigma+\\partial_x N_{\\perp} -\\dot{\\sigma}\\right) \\nonumber\\\\\n&&~~~~+ \\frac{\\partial_x\\phi}{N}\\left(\\frac{N\\partial_xN}{e^{2\\sigma}} - N_{\\perp} \\partial_xN_{\\perp}+N_{\\perp} \\dot{\\sigma} - N_{\\perp}^2 \\partial_x \\sigma \\right) - \\frac{1}{2}N U(\\phi) \\Big]\n\\end{eqnarray}\nwhere the dots correspond to derivatives with respect to $t$. As usual the action does not involve time derivatives of fields $N$ and $N_{\\perp}$ and therefore \n\\begin{equation}\n\\Pi_{N} = \\Pi_{N_{\\perp}} =0,\n\\end{equation}\nwhich act as primary constraints. The momenta conjugate to the dilaton and scale factor are \n\\begin{equation}\n\\Pi_\\phi = \\frac{e^\\sigma}{N}(N_{\\perp} \\partial_x \\sigma+\\partial_x N_{\\perp} -\\dot{\\sigma}),~~~~\\Pi_\\sigma = \\frac{e^\\sigma}{N}(N_{\\perp}\\partial_x\\phi-\\dot{\\phi}) .\n\\end{equation}\nWith these equations we can identify the momentum conjugate to the dilaton with the extrinsic curvature $\\Pi_\\phi \\sim K$, and the momentum of $\\sigma$ with the normal derivative of the dilaton $\\Pi_\\sigma \\sim \\partial_n \\phi$. 
The classical Hamiltonian then becomes \n\\begin{equation}\nH = \\int dx \\left[ N_{\\perp} \\mathcal{P} + e^{-\\sigma}N \\mathcal{H}_{\\text{\\tiny{WdW}}} \\right]\n\\end{equation}\nwhere \n\\begin{eqnarray}\\label{constraintP}\n\\mathcal{P} &\\equiv & \\Pi_\\sigma \\partial_x \\sigma + \\Pi_\\phi \\partial_x \\phi -\\partial_x \\Pi_\\sigma, \\\\[2mm] \\label{constraintH}\n\\mathcal{H}_{\\text{\\tiny{WdW}}} &\\equiv & - \\Pi_\\phi \\Pi_\\sigma + \\frac{1}{2}e^{2\\sigma}U(\\phi) + \\partial_x^2\\phi- \\partial_x\\phi \\partial_x\\sigma,\n\\end{eqnarray}\nand classically the momentum and Wheeler-DeWitt constraints are respectively $\\mathcal{P}=0$ and $\\mathcal{H}_{\\text{\\tiny{WdW}}}=0$. \n\nSo far the discussion has been classical. Now we turn to quantum mechanics by promoting the fields to operators. We will be interested in wavefunctions obtained from path integrals over the metric and dilaton, and we will write them in configuration space. The state will be described by a wave functional $\\Psi[\\phi,\\sigma]$ and the momentum operators are replaced by \n\\begin{equation}\n\\hat{\\Pi}_\\sigma = - i \\frac{\\delta}{\\delta \\sigma(x)},~~~\\hat{\\Pi}_\\phi = - i \\frac{\\delta}{\\delta \\phi(x)}.\n\\end{equation}\nThe physical wavefunctions will only depend on the boundary dilaton profile and metric. \n\nUsually, when quantizing a theory, one needs to be careful with the measure and whether it can contribute Liouville terms to the action. Such terms only appear when working in conformal gauge, which is not the gauge we are using here. Actually, the ADM decomposition \\eqref{ADMdecomp} captures a general metric and is merely a parametrization of all $2D$ metrics and so we have not fixed any gauge. The quantum theory is thus defined through the quantum mechanical version of the classical constraints \\eqref{constraintP} and \\eqref{constraintH} \\footnote{From the path integral perspective, we are assuming an infinite range of integration over the lapse. Different choices for the contour of integration can drastically modify the constraints after quantization. We thank S. Giddings for discussions on this point.}. As a result, we do not need to include any Liouville term in our action in the case of pure gravity. If matter were present, there could be Liouville terms coming from integrating out the matter, but that is beyond the scope of this paper.\n\n\n\\subsection{Solution} \n\nIn references \\cite{Henneaux:1985nw, LouisMartinez:1993eh}, the physical wavefunctions that solve the dilaton gravity constraints are constructed as follows. The key step is to notice that the constraints $\\mathcal{P}$ and $\\mathcal{H}_{\\rm WdW}$ are simple enough that we can solve for $\\Pi_{\\sigma}$ and $\\Pi_{\\phi}$ separately. For instance, by combining $\\Pi_{\\sigma}\\mathcal{P}$ with the WdW constraint, we get \n\\begin{eqnarray}\n\\label{solPirho}\n\\partial_x (e^{-2\\sigma}\\Pi_{\\sigma}^2) = \\partial_x (e^{-2\\sigma}(\\partial_x \\phi)^2 + W(\\phi)) ~\\Rightarrow~\\Pi_{\\sigma} = \\pm \\sqrt{(\\partial_x \\phi)^2 + e^{2\\sigma}[M+W(\\phi)]},\n\\end{eqnarray}\nwith $M$ an integration constant that is proportional to the ADM mass of the system as we will see momentarily. It is then straightforward to plug this into the WdW constraint to find an expression for $\\Pi_{\\phi}$. 
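Explicitly, this last step just amounts to solving $\\mathcal{H}_{\\text{\\tiny{WdW}}}=0$ in \\eqref{constraintH} for $\\Pi_{\\phi}$ after inserting \\eqref{solPirho},\n\\begin{equation}\n\\Pi_{\\phi} = \\frac{\\frac{1}{2}e^{2\\sigma}U(\\phi) + \\partial_x^2\\phi- \\partial_x\\phi \\partial_x\\sigma}{\\Pi_{\\sigma}} = \\pm \\frac{\\frac{1}{2}e^{2\\sigma}U(\\phi) + \\partial_x^2\\phi- \\partial_x\\phi \\partial_x\\sigma}{\\sqrt{(\\partial_x \\phi)^2 + e^{2\\sigma}[M+W(\\phi)]}},\n\\end{equation}\nwhich is precisely the ratio denoted $g\/Q$ in what follows.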
Quantum mechanically, we want the physical wavefunction to satisfy, \n\\begin{equation} \\label{pis}\n\\hat{\\Pi}_\\sigma \\Psi_{\\rm phys} = \\pm Q[M;\\phi,\\sigma] \\Psi_{\\rm phys},~~~~\\hat{\\Pi}_\\phi \\Psi_{\\rm phys} = \\pm \\frac{g[\\phi,\\sigma]}{Q[M;\\phi,\\sigma]}\\Psi_{\\rm phys},\n\\end{equation}\nwhere we defined the functions \n\\begin{equation}\\label{eq:Qdef}\nQ[E;\\phi,\\sigma] \\equiv \\sqrt{(\\partial_x \\phi)^2 + e^{2\\sigma}[M+W(\\phi)]},~~~~g[\\phi,\\sigma]\\equiv \\frac{1}{2}e^{2\\sigma}U(\\phi) + \\partial_x^2\\phi- \\partial_x\\phi \\partial_x\\sigma.\n\\end{equation}\nWavefunctions that solve these constraints also solve the momentum and Wheeler-DeWitt constraints as explained in \\cite{Henneaux:1985nw, LouisMartinez:1993eh}. In particular they solve the following WdW equation with factor ordering,\\footnote{Here we think of $\\hat{Q}$ as well as $\\hat{M}$ as operators. The physical wavefunctions can be written as linear combinations of eigenfunctions of the operator $\\hat{M}$ with eigenvalue $M$.}\n\\begin{eqnarray}\n\\left(g - \\hat{Q} \\hat{\\Pi}_{\\phi} \\hat{Q}^{-1} \\hat{\\Pi}_{\\sigma} \\right) \\Psi_{\\rm phys} = 0 \n\\end{eqnarray}\nThe most general solution can be written as \n\\begin{equation}\n\\Psi =\\Psi_+ + \\Psi_- ,~~~\\Psi_{\\pm}= \\int dM \\rho_\\pm(M) \\Psi_\\pm(M),\n\\end{equation}\nwhere we will distinguish the two contributions\n\\begin{equation}\\label{eq:psinoninv}\n \\Psi_\\pm(M) = \\exp{\\left[ \\pm i \\int dx \\left( Q[M;\\phi,\\sigma] - \\partial_x \\phi \\tanh^{-1} \\left(\\frac{Q[M;\\phi,\\sigma]}{2\\partial_x\\phi} \\right)\\right)\\right]},\n\\end{equation}\nwith the function $Q$ defined in \\eqref{eq:Qdef} which depends on the particular dilaton potential. We will refer in general to $\\Psi_+$ ($\\Psi_-$) as the expanding (contracting) branch.\n\nThis makes explicit the fact that solutions to the physical constraints reduce the naive Hilbert space from infinite dimensional to two dimensional with coordinate $M$ (and its conjugate). The most general solution of the Wheeler-DeWitt equation can then be expanded in the base $\\Psi_\\pm(M)$ with coefficients $\\rho_\\pm (M)$. The new ingredient in this paper will be to specify appropriate boundary conditions to pick $\\rho_\\pm(M)$ and extract the full Hartle-Hawking wavefunction. We will see this is only possible for JT gravity for reasons that should will be clear in the next section. \n\n\nIt will be useful to write the physical wavefunction in terms of diffeomorphism invariant quantities. This is possible thanks to the fact that we are satisfying the momentum constraints. In order to do this we will define the proper length $u$ of the spacelike circle as \n\\begin{equation}\ndu = e^{\\sigma} dx,~~~~L\\equiv \\int_0^1 e^{\\sigma}dx,\n\\end{equation}\nwhere $L$ denotes the total length. The only gauge invariant data that the wavefunction can depend on is then $L$ and $\\phi(u)$, a dilaton profile specified as a function of proper length along the boundary. The wavefunction \\eqref{eq:psinoninv} can be rewritten as \n\\begin{equation}\n\\Psi_\\pm(M) = e^{\\pm i \\int_0^L du \\left[ \\sqrt{W(\\phi)+M+( \\partial_u \\phi)^2} - \\partial_u \\phi\\, \\tanh^{-1} \\left(\\sqrt{1+ \\frac{W(\\phi)+M}{(\\partial_u \\phi)^2}}\\right) \\right]},\n\\end{equation}\nwhich is then manifestly diffeomorphism invariant.\n\nThe results of this section indicate the space of physical states that solve the gravitational constraints is one dimensional, labeled by $M$. 
In the context of radial quantization of $AdS_2$ that we will analyze in the next section, this parameter corresponds to the ADM mass of the state, while in the case of $dS_2$, it corresponds to the generator of rotations in the spatial circle. Phase space is even-dimensional, and the conjugate variable to $E$ is given by \n\\begin{equation}\n\\Pi_M = - \\int dx \\frac{e^{2\\sigma} \\Pi_\\rho}{\\Pi_\\rho^2 - 2 (\\partial_x \\phi)^2} \n\\end{equation}\nsuch that $[M , \\Pi_M] =i$.\\footnote{The simplicity of the phase space of dilaton gravity theories was also noted in \\cite{Thiemann:1992jj}.}\n\n\n\\subsection{Phase space reduction}\n\\label{minisuper}\n\nHaving the full solution to the WdW equation, we now study the minisuperspace limit. In this limit, the dilaton $\\phi$ and boundary metric $e^{2\\s}$ are taken to be constants. In a general theory of gravity, minisuperspace is an approximation. In JT gravity, as we saw above, the physical phase space is finite-dimensional (two dimensional to be precise). Therefore giving the wavefunction in the minisuperspace regime encodes all the dynamical information of the theory, while the generalization to varying dilaton is fixed purely by the constraints. In this section, we will directly extract the equation satisfied by the wavefunction as a function of constant dilaton and metric, from the more general case considered in the previous section. \n\nIf we start with the WdW equation and fix the dilaton and metric to be constant, the functional derivatives then become ordinary derivatives and the equation reduces to \n\\begin{eqnarray}\n\\label{WdWmini}\n\\left(\\frac{1}{2}e^{2\\s} U(\\phi) - \\hat{Q}\\partial_\\phi \\hat{Q}^{-1}\\partial_\\s \\right) \\Psi(\\phi,\\sigma) = 0.\n\\end{eqnarray}\nwith $\\hat{Q} = (\\hat{M} + W(\\phi))^{1\/2}$. Due to the factor ordering, this differential equation still depends on the operator $\\hat{M}$, which is a bit unsatisfactory. Fortunately, we know that a $\\s$ derivative acting on $\\Psi$ is the same as acting with $Q^2\/g \\partial_\\phi$. In the minisuperspace limit, we can therefore write \\eqref{WdWmini} as \n\\begin{eqnarray}\n\\label{minisupereq}\n(L U(\\phi) - 2L \\partial_L(L^{-1} \\partial_\\phi) ) \\Psi(\\phi,L) = 0,\n\\end{eqnarray}\nwhere $L$ is the total boundary length. This equation is the exact constraint that wavefunctions with a constant dilaton should satisfy even though it was derived in a limit. We can explicitly check this by using \\eqref{eq:psinoninv} and noticing that any physical wavefunction, evaluated in the minisuperspace limit, will satisfy precisely this equation. \n\nThis equation differs from the one obtained in \\cite{MTY} by $\\Psi_{\\rm here} = L \\Psi_{\\rm there}$ and, therefore, changes the asymptotics of the wavefunctions, something we will analyze more closely in the next subsection. \n\n\n\n\\subsection{Wheeler-DeWitt in JT gravity: radial quantization}\n\\label{sec:WdW-finite-cutoff-JT} \n\nIn this section, we will specialize the previous discussion to JT gravity with a negative cosmological constant. We fix units such that $U(\\phi)=-2\\phi$. We will analytically continue the results of the previous section to Euclidean space and interpret them in the context of radial quantization, such that the wavefunction is identified with the path integral in a finite cutoff surface. 
Then, we will explain how to implement Hartle-Hawking boundary conditions, obtaining a proposal for the exact finite cutoff JT gravity path integral that can be compared with results for the analog of the $T\\bar{T}$ deformation in $1d$ \\cite{Gross:2019ach, Gross:2019uxi}. \n\nLets begin by recalling some small changes that appear when going from Lorenzian to Euclidean radial quantization. The action we will work with is \n\\begin{equation}\\label{IJT}\nI_{\\rm JT}=- \\frac{1}{2}\\int_{M} \\sqrt{g} \\phi( R+2) - \\int_{\\partial M} \\sqrt{\\g}\\phi K,\n\\end{equation}\nand the ADM decomposition of the metric we will use is \n\\begin{equation}\nds^2 = N^2 dr^2 + h (d\\theta + N_{\\perp} dr)^2,~~~~h=e^{2\\sigma}\\,,\n\\end{equation}\nwhere $r$ is the radial direction while $\\theta \\sim \\theta+1$ corresponds to the angular direction that we will interpret as Euclidean time. We show these coordinates in figure \\ref{fig:ads2rq}. In terms of holography we will eventually interpret $\\theta$ as related to the Euclidean time of a boundary quantum mechanical theory. \n\\begin{figure}[t!]\n\\centering\n\\begin{tikzpicture}[scale=0.9]\n\\pgftext{\\includegraphics[scale=0.5]{ads2.pdf}} at (0,0);\n\\draw (2.6,0) node { $\\theta$ };\n\\draw (1,0.5) node { $r$ };\n\\draw (0,-3.5) node { $(a)$ };\n\\end{tikzpicture}\n\\hspace{2cm}\n\\begin{tikzpicture}[scale=0.9]\n\\pgftext{\\includegraphics[scale=0.5]{ads22.pdf}} at (0,0);\n\\draw (3.5,0) node { $r$ };\n\\draw (-0.4,-3) node { $(b)$ };\n\\end{tikzpicture}\n\\caption{\\small (a) We show the slicing we use for Euclidean JT gravity in asymptotically $AdS_2$, which has disk topology (but not necessarily rigid hyperbolic metric). (b) Frame where the geometry is rigid $EAdS_2$ with $r$ increasing upwards and a wiggly boundary denoted by the blue curve. \\label{fig:ads2rq}}\n\\end{figure}\n\nAs shown in figure \\ref{fig:ads2rq}, and as we will explicitly show in section \\ref{sec:JT-gravity-review}, the radial quantization wavefunction is identified with the gravitational path integral at a finite cutoff (inside the black circle) with Dirichlet boundary conditions\n\\begin{equation}\n\\Psi[\\phi_b(u),\\sigma(u)] = \\int \\mathcal{D}g \\mathcal{D} \\phi ~e^{- I_{\\rm JT}[\\phi,g]},\\qquad {\\rm with} \\qquad \\phi|_{\\partial} = \\phi_b(u),\\qquad g|_\\partial = \\gamma_{uu}=e^{2\\sigma(u)}.\n\\end{equation}\n The geometry inside the disk in figure \\ref{fig:ads2rq} is asymptotically $EAdS_2$. From this path integral we can derive the WDW and momentum constraints and therefore solving the latter with the appropriate choice of state should be equivalent to doing the path integral directly. 
\n\nThe result of previous section implies that this path integral is given by a linear combination of\n\\begin{eqnarray}\n\\label{expanding}\n\\hspace{-1cm}&&\\text{Expanding branch:}~~~\\hspace{0.05cm}\\Psi_+(M) = e^{ \\int_0^L du \\left[ \\sqrt{\\phi_b^2-M-( \\partial_u \\phi_b)^2} - \\partial_u \\phi_b \\tan^{-1} \\left(\\sqrt{\\frac{\\phi_b^2-M}{(\\partial_u \\phi_b)^2}-1}\\right) \\right]},\\\\\n\\label{contracting} \\hspace{-1cm}&&\\text{Contracting branch:}~~\\Psi_-(M) = e^{- \\int_0^L du \\left[ \\sqrt{\\phi_b^2-M-( \\partial_u \\phi_b)^2} - \\partial_u \\phi_b \\tan^{-1} \\left(\\sqrt{\\frac{\\phi_b^2-M}{(\\partial_u \\phi_b)^2}-1}\\right) \\right]}.\n\\end{eqnarray}\n\nWe will focus on the purely expanding branch of the solution \\eqref{expanding}, as proposed in \\cite{Freidel:2008sh} and \\cite{MTY} to correspond to the path integral in the disk and therefore set $\\rho_-(M)=0$. We will go back to possible effects coming from turning on this term later. Thus, we will study the solutions\n\\begin{equation}\n\\Psi_{\\rm disk}[\\phi_b(u),\\sigma(u)] = \\int dM \\rho(M) ~e^{ \\int_0^L du \\left[ \\sqrt{\\phi_b^2-M-( \\partial_u \\phi_b)^2} - \\partial_u \\phi_b \\tan^{-1} \\left(\\sqrt{\\frac{\\phi_b^2-M}{(\\partial_u \\phi_b)^2}-1}\\right) \\right]}\\,.\n\\end{equation}\nTo make a choice of boundary conditions that fix the boundary curve very close to the boundary of the disk we will eventually take the limit of large $L$ and $\\phi_b$.\n\n\\subsection{Hartle-Hawking boundary conditions and the JT wavefunctional} \n\\label{sec:HH-boundary-conditions-JT-wavefunctional}\n\nTo determine the unknown function $\\rho(M)$, we will need to impose a condition that picks the Hartle-Hawking state. For this, one usually analyses the limit $L\\to0$ \\cite{PhysRevD.28.2960}. Such a regime is useful semiclassically but not in general. From the no-boundary condition, $L\\to0$ should reproduce the path integral over JT gravity inside tiny patches deep inside the hyperbolic disk; performing such a calculation is difficult. Instead, it will be simpler to impose the Hartle-Hawking condition at large $L\\to \\infty$. In this case, we know how to do the path integral directly using the Schwarzian theory. The derivation of the Schwarzian action from \\cite{Maldacena:2016upp} explicitly uses the no-boundary condition, so we will take this limit instead, which will be enough to identify a preferred solution of the WdW equation.\n\nTo match the wavefunction with the partition function of the Schwarzian theory, it is enough to consider the case of constant dilaton and metric. Then, the wavefunction simplifies to\\footnote{It is interesting to note that this partition function first appeared in \\cite{Kunstatter:1997my}.}\n\\begin{equation}\n\\Psi[\\phi_b,\\,\\sigma] = \\int dM \\rho(M) ~e^{ \\int_0^1 d\\theta e^{\\sigma} \\sqrt{\\phi_b^2-M}} =\\int dM \\rho(M) ~e^{ \\int_0^L du \\sqrt{\\phi_b^2-M}}\n\\end{equation}\nwith $\\phi_b$ and $\\sigma$ constants. Expanding the root at large $\\phi_b$ and large $L=e^{\\sigma}$ gives,\n\\begin{equation}\\label{eq:schlimwdw}\n\\Psi[\\phi_b,\\sigma] =e^{L\\phi_b} \\int dM \\rho(M) ~e^{ -L \\frac{M}{2\\phi_b}+\\ldots}\n\\end{equation}\nWe find the usual divergence for large $L$ and $\\phi$, which can be removed by adding to \\eqref{IJT} the counter term, $I_{\\rm ct} = \\int_0^L du \\,\\phi_b$. In fact, we will identify the JT path integral with this counter term as computing the thermal partition function at a temperature specified by the boundary conditions. 
At large $L$ and $\\phi_b$ we know that the gravity partition function is given by the Schwarzian theory:\n\\begin{equation}\n\\int \\mathcal{D}g \\mathcal{D} \\phi ~e^{- I_{\\rm JT}[\\phi,g]} \\to e^{L\\phi_b} \\int \\frac{\\mathcal{D}f}{SL(2, \\mR)}~e^{ \\phi_{b} \\int_0^L du \\hspace{0.05cm}{\\rm Sch}(\\tan \\frac{\\pi}{L} f,u) },\n\\end{equation}\nwhere ${\\rm Sch}(F(u),u) \\equiv \\frac{F'''}{F'} - \\frac{3}{2} \\left( \\frac{F''}{F'} \\right)^2 $. By rescaling time we can see the path integral only depends on $L\/\\phi_b$ which we will sometimes refer to as renormalized length. This result can be derived by first integrating over the dilaton over an imaginary contour, localizing the geometry to rigid $AdS_2$. Then the remaining degree of freedom is the shape of the boundary curve, from which the Schwarzian theory arises.\n\nThe Schwarzian partition function can be computed exactly and gives \n\\begin{equation}\nZ_{\\rm Sch} (\\ell) \\equiv \\int \\frac{\\mathcal{D}f}{SL(2, \\mR)}e^{\\int_0^\\ell du\\hspace{0.05cm} {\\rm Sch}(\\tan \\frac{\\pi}{\\ell} f,u) } = \\left(\\frac{\\pi}{\\ell}\\right)^{3\/2} e^{\\frac{2\\pi^2}{\\ell}}= \\int dk^2 \\sinh(2\\pi k) e^{- \\ell k^2\/2}\n\\end{equation}\nApplying this result to the JT gravity path integral with the replacement $\\ell \\to L\/\\phi_b$ gives the partition function directly in the form of equation \\eqref{eq:schlimwdw} where we can straightforward identify the Schwarzian density of states with the function of $M$ as\n\\begin{equation}\n\\rho_{\\rm HH}(M) = \\sinh(2 \\pi \\sqrt{M}),\n\\end{equation}\nwhere the subscript indicates that we picked the Hartle-Hawking state. It is important that we are able to compute the path integral of JT gravity for $\\phi_b,\\, L\\to \\infty$ but fixed $L\/\\phi_b$. This involves an exact treatment of the Schwarzian mode since otherwise we would only obtain $\\rho_{\\rm HH}(M)$ in some limits. This ingredient was missing in \\cite{Henneaux:1985nw, LouisMartinez:1993eh} making them unable to identify the HH state from the full space of physical states. \n\nTo summarize, the solution of the gravitational constraints gives the finite cutoff JT gravity path integral as \n\\begin{equation}\\label{eq:AdSfinitecutoffJT}\n\\Psi_{\\rm HH}[\\phi_b(u),L] =\\hspace{-0.1cm} \\int_0^\\infty \\hspace{-0.3cm}dM\\; \\hspace{-0.1cm} \\sinh(2\\pi \\sqrt{M}) ~e^{ \\int_0^L du \\left[ \\sqrt{\\phi^2_b-M-( \\partial_u \\phi_b)^2} - \\partial_u \\phi \\tan^{-1} \\left(\\sqrt{\\frac{\\phi^2-M}{(\\partial_u \\phi)^2}-1}\\right) \\right]}.\n\\end{equation}\nBy construction, this matches the Schwarzian limit when $\\phi_b$ and $\\sigma$ are constant. \n\nWhen the dilaton is constant but $\\sigma(u)$ is not, it is clear that we can simply go to coordinates $d\\tilde{\\theta} = e^{\\s}d\\theta$ in both the bulk path integral and the WdW wavefunction and see that they give the same result. Since we can always choose time-slices with a constant value for the dilaton, this situation will suffice for comparing our result to the analog of the $T\\bar T$ deformation in the next subsection. \n\nThe more non-trivial case is for non-constant dilaton profiles. 
We provide a further check of our result in appendix \\ref{sec:vardil}, where we compare the wavefunctional \\eqref{eq:AdSfinitecutoffJT} to the partition function of JT gravity with a non-constant dilaton profile when the cutoff is taken to infinity.\n\n\\subsection{Comparison to $T\\bar{T}$}\n\nLet us now compare the wavefunctional \\eqref{eq:AdSfinitecutoffJT} to the partition function obtained from the $1D$ analog of the $T\\bar T$ deformation \\eqref{deformedZ}. First of all, let us consider configurations of constant $\\phi_b$, so $\\partial_u \\phi_b = 0$. This will simplify $\\Psi_{\\rm HH}$ to \n\\begin{eqnarray}\n\\Psi_{\\rm HH}[\\phi_b,\\,L] = \\int_0^{\\infty} dM\\; \\sinh(2\\pi \\sqrt{M})e^{\\phi_b L \\sqrt{1- M\/\\phi^2_b}}.\n\\end{eqnarray}\nThe partition function is then obtained by multiplying this wavefunction by $e^{-I_{\\rm ct}} = e^{- L \\phi_b}$. The resulting partition function agrees with \\eqref{deformedZ} with identifications:\n\\begin{eqnarray}\nM \\to 2 C E,\\quad \\phi^2_b \\to \\frac{C}{4\\l},\\quad L \\to \\frac{\\b}{\\sqrt{4C\\l}},\n\\end{eqnarray}\nup to an unimportant normalization. In fact, we can say a little more than just mapping solution onto each other. In section \\ref{minisuper} we showed that in the minisuperspace approximation the wave\\emph{functions} satisfy \\eqref{minisupereq}. With the identifications made above and the inclusion of the counter term, the partition function $Z_{\\l}(\\b)$ satisfies \n\\begin{eqnarray}\\label{diffeqZ}\n\\left[4 \\l \\partial_{\\l} \\partial_{\\b} + 2 \\b \\partial_\\b^2 - \\left(\\frac{4\\l}{\\b} - 1\\right) \\partial_\\l \\right] Z_{\\l}(\\b) = 0.\n\\end{eqnarray}\nThis is now purely written in terms of field theory variables and is precisely the flow equation as expected from \\eqref{1dTTbar}, i.e. solutions to this differential equation have the deformed spectrum \\eqref{deformedE}. This is also the flow of the partition function found in two dimensions in \\cite{Aharony:2018bad}, specialised to purely imaginary modular parameter of the torus. We will analyze the associated non-perturbative ambiguities associated to this flow in section \\ref{sec:nonpert}.\n\nLet us summarise. We have seen that the partition function of the deformed Schwarzian theory is mapped to the exact dilaton gravity wavefunctions for constant $\\phi_b$ and $\\g_{uu}$. In fact, \\emph{any} quantum mechanics theory that is deformed according to \\eqref{1dTTbar} will obey the quantum WdW equation (for constant $\\phi_b$ and $\\s$). This principle can be thought of as the two-dimensional version of \\cite{Freidel:2008sh}. It is only the boundary condition at $\\l \\to 0$ (or large $\\phi_b L$), where we know the bulk JT path integral gives the Schwarzian theory, that tells us that the density of states is $\\sinh(2\\pi \\sqrt{M})$. Next, we will show that the wavefunction for constant $\\phi_b$ and $\\gamma_{uu}$ can be reproduced by explicitly computing the Euclidean path integral in the bulk, at finite cutoff. 
\n\n\n\n\\section{The Euclidean path integral}\n\\label{sec:JT-gravity-review}\n\nWe will once again consider the JT gravity action, \\eqref{IJT}, and impose Dirichlet boundary conditions for the dilaton field $\\phi|_{\\partial M_2} \\equiv \\phi_b \\equiv \\phi_{r}\/\\e$, boundary metric $\\g_{uu}$, and proper length $L \\equiv \\b\/\\e$ and with the addition the counter-term, \n\\begin{eqnarray}\nI_{\\rm ct} = \\int du \\sqrt{\\g} \\phi\\,,\n\\end{eqnarray}\nwhose addition leads to an easy comparison between our results and the infinite cutoff results in JT gravity. As in the previous section we will once again focus on disk topologies. \n\nAs discussed in section \\ref{sec:HH-boundary-conditions-JT-wavefunctional}, the path integral over the dilaton $\\phi$ yields a constrain on the curvature of the space, with $R = -2$. Therefore, in the path integral we are simply summing over different patches of $AdS_2$, which we parametrize in Euclidean signature using Poincar\\'e coordinates as $ds^2 = (d\\tau^2 + dx^2)\/x^2$. To describe the properties which we require of the boundary of this patch we choose a proper boundary time $u$, with a fixed boundary metric $\\g_{uu}= 1\/\\e^2$ (related to the fix proper length $L = \\int_0^\\b du \\sqrt{\\g_{uu}})$. Fixing the intrinsic boundary metric to a constant, requires: \n\\begin{eqnarray}\n\\label{eq:Poincare-bdy-metric}\n\\frac{\\tau'^2 + x'^2 }{x^2} = \\frac{1}{\\e^2 }\\,, \\qquad \\frac{-t'^2 + x'^2 }{x^2} = \\frac{1}{\\e^2 } \\,, \\qquad \\tau = -i t\\,.\n\\end{eqnarray}\nIf choosing some constant $\\e \\in \\mR$ then we require that the boundary has the following properties: \n\\begin{itemize}\n\\item If working in Euclidean signature, the boundary should never self-intersect. Consequently if working on manifolds with the topology of a disk this implies that the Euler number $\\chi(M_2) = 1$. \n\\item If working in Lorentzian signature, the boundary should always remain time-like since \\eqref{eq:Poincare-bdy-metric} implies that $-(t')^2 + (x')^2 = (x'-t')(t'+x')>0$.\\footnote{While fixing the metric $\\g_{uu}$ to be a constant is not diffeomorphism invariant, the notion of the boundary being time-like ($\\text{sgn}\\, \\g_{uu}$) is in fact diffeomorphism invariant.} From now on we will assume without loss of generality that $t' >0$. \n\\end{itemize}\n Both conditions are important constraints which we should impose at the level of the path integral. Such conditions are not typical if considering the boundary of the gravitational theory as the worldline of a particle moving on $H^2$ or $AdS_2$: in Euclidean signature, the worldline could self-intersect, while in Lorentzian signature the worldline could still self-intersect but could also become space-like. These are the two deficiencies that \\cite{Kitaev:2018wpr, Yang:2018gdb} encountered in their analysis, when viewing the path integral of JT gravity as that of a particle moving in an imaginary magnetic field on $H^2$. \n\nFor the purposes of this paper it will also prove convenient to introduce the light-cone coordinates (with $z =- ix+\\tau\\,, \\bar z = i x+ \\tau$), for which fixing the intrinsic boundary metric implies: \n\\begin{eqnarray}\n\\label{eq:lightcone-bdy-metric}\n-\\frac{4z' \\bar z'}{(z-\\bar z)^2 } =\\frac{1}{\\e^2} \\,.\n\\end{eqnarray}\nIn Euclidean signature $z = \\bar z^*$, while in Lorentzian signature $z$, $\\bar z \\in i\\mR$. 
The constraint that the boundary is time-like implies that $i z'>0$ and $i \\bar z'<0$ (alternatively, if assuming $t' <0$, $i z'<0$ and $i \\bar z'>0$). In order to solve the path integral for the remaining boundary fluctuations in the 1D system it will prove convenient to use light-cone coordinates and require that the path integral obeys the two properties described above. \n\n\\subsection{Light-cone coordinates and $SL(2, \\mR)$ isometries in $AdS_2$}\n\\label{sec:light-cone-coordinates-SL(2,R)}\n\nAs is well known, $AdS_2$, even at finite cutoff, exhibits an $SL(2, \\mR)$ isometry. This isometry becomes manifest when considering the coordinate transformations:\n\\begin{eqnarray}\n\\label{eq:SL2-isometry}\n&\\text{E \\& L} :\\qquad z \\to \\frac{a z+ b}{c z+d}\\,, \\hspace{3.9cm} \\bar z \\to \\frac{a \\bar z+ b}{c \\bar z+d}\\,,\\nonumber \\\\[2mm] &\\text{E}:\\qquad x+i \\tau \\to \\frac{a (x+i \\tau)+ b}{c (x+i \\tau )+d}\\,, \\quad \\hspace{0.5cm} \\text{L}:\\quad t+ x \\to \\frac{a (t+ x)+ b}{c (t+ x)+d}\\,,\n\\end{eqnarray}\nIt is straightforward to check that under such transformations the boundary metrics, \\eqref{eq:Poincare-bdy-metric} and \\eqref{eq:lightcone-bdy-metric}, both remain invariant. The same is true of the extrinsic curvature, which is the light-cone parametrization of the boundary degrees of freedom can be expressed as\n\\begin{eqnarray}\n\\label{eq:K-lightcone}\nK[z(u), \\bar z(u)] = \\frac{2 z'^2 \\bar z' + ( \\bar z-z )\\bar z' z'' + z'(2\\bar z'^2 + (z- \\bar z)\\bar z'')}{4(z' \\bar z')^{3\/2}} \\,.\n\\end{eqnarray}\nConsequently, invariance under $SL(2, \\mR)$ transformations gives: \n\\begin{eqnarray}\nK[z, \\bar z] = K\\left[\\frac{a z+ b}{c z + d}, \\frac{a \\bar z+ b}{c \\bar z + d}\\right]\\,,\n\\end{eqnarray}\nTherefore, upon solving for $\\bar z[ z(u)]$ (as a functional of $z(u)$) we will find that \n\\begin{eqnarray}\n\\bar z[z(u)] \\qquad \\Rightarrow \\qquad K[z] = K\\left[\\frac{a z+ b}{c z + d}\\right]\n\\end{eqnarray}\nAs we will see in the next subsection, such a simple invariance under $SL(2, \\mR)$ transformations will be crucial to being able to relate the path integral of the boundary fluctuations to that of some deformation of the Schwarzian theory. An important related point is that when solving for $\\tau[x(u)]$ as a functional of $x(u)$, the resulting extrinsic curvature is not invariant under the $SL(2, \\mR)$ transformations, $\\tau \\to \\frac{a \\tau + b}{c\\tau+d}$. Rather this is only a valid symmetry in the $\\e \\to 0$ limit, for which $x \\to 0$, while $\\tau$ is kept finite. It is only in the asymptotically $AdS_2$ limit that the transformation in the second line of \\eqref{eq:SL2-isometry} can be identified with $\\tau \\to \\frac{a \\tau + b}{c\\tau+d}$. If keeping track of higher orders in $\\e$, the transformation on $\\tau$ would involve a growing number of derivatives on the $\\tau$ field which should be proportional to the order of the $\\e$-expansion.\n\n\\subsection{Restricting the extrinsic curvature}\n\\label{sec:restricting-extrinsic-curvature}\n\nNext, we discuss the expansion of the extrinsic curvature $K[z]$ to all orders in perturbation in $\\e$: \n\\begin{eqnarray}\nK[z] = \\sum_{n=0}^\\infty \\e^n K_n[z]\\,, \\qquad K_n[z] = K_n\\left[\\frac{a z+b}{c z + d}\\right]\\,,\n\\end{eqnarray}\nWe could in principle explicitly solve for $\\bar z[z(u)]$ to first few orders in perturbation theory in $\\e$ and then plug the result into \\eqref{eq:K-in-terms-of-z}. 
The first few orders in the expansion can be solved explicitly and yield:\n\\begin{eqnarray} \n\\label{eq:K3-K4}\nK_0[z] &= 1, \\qquad K_1[z]=0, \\qquad K_2[z]= \\Sch(z, u), \\nonumber\\\\ K_3[z] &= -i\\, \\partial_u \\Sch(z, u) \\,, \\qquad K_4[z]= -\\frac{1}2\\Sch(z, u)^2+\\partial_{u}^2 \\,\\Sch(z, u) \\,.\n\\end{eqnarray}\nThe fact that all orders in $K_n[z(u)]$ solely depend on the Schwarzian and its derivatives is not a coincidence. In fact, one generally finds that: \n\\begin{eqnarray}\nK_n[z] = \\cK_n[\\Sch(z, u),\\, \\partial_u ]\\,.\n\\end{eqnarray}\n\nThe reason for this is as follows. $K_n[z]$ is a local function of $z(u)$ since solving for $\\bar z[z(u)]$ involves only derivatives of $z(u)$. The Schwarzian can be written as the Casimir of the $\\mathfrak{sl}(2, \\mR)$ transformation, $z \\to \\frac{a\\,z+b}{c\\,z+d}$ \\cite{Maldacena:2016upp}. Because the rank of the $\\mathfrak{sl}(2, \\mR)$ algebra is $1$, higher-order Casimirs of $\\mathfrak{sl}(2, \\mR)$ can all be expressed as a polynomial (or derivatives of powers) of the quadratic Casimir. Since local functions in $u$ that are $SL(2, \\mR)$ invariant, can also only be written in terms of the Casimirs of $\\mathfrak{sl}(2, \\mR)$ this implies that they should also be linear combinations of powers (or derivatives of powers) of the quadratic Casimir, which is itself the Schwarzian. \n\nAlternatively, we can prove that $K_n[z(u)]$ is a functional of the Schwarzian by once again noting that $K_n[z(u)]$ only contains derivatives of $z(u)$ up to some finite order. Then we can check explicitly how each infintesimal $SL(2,\\mR)$ transformation constrains $K_n[z(u)]$. For instance, translation transformations $z \\to z + b$ imply that $K_n$ solely depends on derivatives of $z(u)$. The transformation $z(u) \\to a z(u)$ implies that $K_n[z(u)]$ depends solely on ratios of derivatives with a matching order in $z$ between the numerator and denominator of each ratio, of the type ${(\\prod_k z^{(k_i)})}\/{(\\prod_k z^{(\\tilde k_i)})}$. Finally considering all possible linear combinations between ratios of derivatives of the type ${(\\prod_k z^{(k_i)})}\/{(\\prod_k z^{(\\tilde k_i)})}$ and requiring invariance under the transformation $z(u) \\to 1\/z(u)$, fixes the coefficients of the linear combination to those encountered in arbitrary products of Schwarzians and of its derivatives. \n\nOnce again, we emphasize that this does not happen when using the standard Poincar\\'e parametrization \\eqref{eq:Poincare-bdy-metric} in $\\tau$ and $x$. When solving for $\\tau[x]$ and plugging into $K[\\tau(u)]$, since we have that $K[\\tau(u)] \\neq K[a\\tau(u)+b\/(c \\tau(u)+d)]$ and consequently $K[\\tau(u)]$ is not a functional of the Schwarzian; it is only a functional of the Schwarzian at second-order in $\\e$. This can be observed by going to fourth order in the $\\e$-expansion, where\n\\begin{eqnarray}\nK_4[\\tau(u)] = \\frac{\\tau^{(3)}(u)^2}{\\tau'(u)^2}+\\frac{27 \\tau''(u)^4}{8 \\tau'(u)^4}+\\frac{\\tau^{(4)}(u)\n \\tau''(u)}{\\tau'(u)^2}-\\frac{11 \\tau^{(3)}(u) \\tau''(u)^2}{2 \\tau'(u)^3}\\,,\n\\end{eqnarray}\nwhich cannot be written in terms of $\\Sch(\\tau(u), u)$ and of its derivatives.\n\n\n\\subsection{Finding the extrinsic curvature: perturbative terms in $K[z(u)]$ }\n\\label{sec:finding-extrinsic-curvature}\n\n\nThe previous subsection identified the abstract dependence of the extrinsic curvature as a function of the Schwarzian. To quantize the theory, we need to find the explicit dependence of $K_n$ on the Schwarzian. 
To do this, we employ the following trick. Consider the specific configuration for $z(u)$:\\footnote{While \\eqref{eq:specific-configuration-z(u)} is, in fact, a solution to the equation of motion for the Schwarzian theory it is not necessarily a solution to the equation of motion in the theory with finite cutoff. }\n\\begin{eqnarray}\n\\label{eq:specific-configuration-z(u)}\nz(u) = \\exp( a u)\\,, \\qquad \\qquad \\Sch(z, u) = -\\frac{ a^2}{2}\\,.\n\\end{eqnarray}\n\nSince $K[z(u)]$ is a functional of the $\\Sch(z, u)$ and of its derivatives to all orders in perturbation theory in $\\e$, then $K_n[z(u) = \\exp(au)] = \\cK_n[\\Sch(z, u),\\, \\partial_u ] = \\cK_n[a]$. On the other hand, when using a specific configuration for $z(u)$ we can go back to the boundary metric constraint \\eqref{eq:lightcone-bdy-metric} and explicitly solve for $\\bar z(u)$. Plugging-in this solution together with \\eqref{eq:specific-configuration-z(u)} into the formula for the extrinsic curvature $K[z(u), \\bar z(u)]$ \\eqref{eq:K-lightcone}, we can find $\\cK_n[a]$ and, consequently, find the powers of the Schwarzian in $\\cK_n[\\Sch(z, u), \\, \\partial_u]$. \n\nThe metric constraint involves solving the first order differential equation\n\\begin{eqnarray}\n\\label{eq:metric-a-barZ-config}\n - \\frac{4 \\,a\\, e^{a u}\\bar z'}{\\left(e^{a u} - \\,\\bar z\\right)^2} = \\frac{1}{\\e^2}\\,,\n\\end{eqnarray}\nwhose solution, to all orders in perturbation theory in $\\e$, is given by \n\\begin{eqnarray} \n\\label{eq:bar-z-diff-eq-sol}\n\\bar z(u) = e^{a u }\\left(1-2a^2 \\e^2 -2a \\e \\sqrt{-1+a^2 \\e^2}\\right)\\,.\n\\end{eqnarray}\n\nWe can plug this solution for $\\bar z(u)$ together with the configuration $z(u) = \\exp(a u)$ to find that \n\\begin{eqnarray}\n\\label{eq:K-in-terms-of-a}\nK\\left[z(u) = \\exp(a u)\\right] &= \\sqrt{1- \\e^2 a^2}\\,.\n\\end{eqnarray}\nDepending on the choice of branch one can reverse the sign of \\eqref{eq:K-in-terms-of-a} to find that $K\\left[z(u) = \\exp(a u)\\right] = -\\sqrt{1- \\e^2 a^2}$ which corresponds to the considering the exterior of an $AdS_2$ patch as our surface (instead of a regular $AdS_2$ patch). This is analogous to the contracting branch in of the WDW functional in \\eqref{contracting}. \n\nConsequently, it follows that in a perturbative series in $\\e$ we find:\\footnote{The terms containing derivatives of the Schwarzian are not necessarily total derivatives and thus we need to explain why they do not contribute to the path integral. }\n\\begin{eqnarray}\n\\label{eq:K-in-terms-of-z}\nK_\\pm[z(u)] &= \\pm \\left(\\sqrt{1+ 2{\\e^2}\\,\\Sch(z, u)} \\,\\,\\,+ \\,\\,\\, \\text{derivatives of Sch.}\\right) \\,,\n\\end{eqnarray}\nwhere we find that the quadratic term in $\\e$ for the $+$ branch of \\eqref{eq:K-in-terms-of-z} agrees with the expansion of $K$ in terms of $\\e$ in JT gravity in asymptotic $AdS_2$ \\cite{Maldacena:2016upp} (which found that $K[z(u)] = 1+\\e^2 \\Sch(z, u) + \\dots$). The $+$ branch in \\eqref{eq:K-in-terms-of-z} corresponds to compact patches of $AdS_2$ for which the normal vector points outwards; the $-$ branch corresponds to non-compact surfaces (the complement of the aforementioned $AdS_2$ patches) for which the normal vector is pointing inwards. While the $+$ branch has a convergent path integral for real values of $\\phi_r$, for a normal choice of countour for $z(u)$, the path integral of the $-$ branch will be divergent. 
Even for a potential contour choice for which the path integral were convergent, the $-$ branch is non-perturbatively suppressed by $O(e^{-\\int_0^\\b du\\, \\phi_b\/\\e}) = O(e^{-1\/\\e^2})$. Therefore, for now, we will ignore the effect of this different branch ($-$) and set $K[z(u)] \\equiv K_+[z(u)]$; we will revisit this problem in section \\ref{sec:nonpert} when studying non-perturbative corrections in $\\e$.\n\n \n\nIn principle, one can also solve for the derivative of the Schwarzian in \\eqref{eq:K-in-terms-of-z} following a similar strategy to that outlined above. Namely, it is straightforward to find that when $\\Sch(z, u)= a u^n$, for some $n \\in \\mathbb Z$, then $z(u)$ is related to a Bessel function. Following the steps above, and using the fact that $\\partial^{n+1} \\Sch(z, u) = 0$ for such configurations, one can then determine all possible terms appearing in the extrinsic curvature. However, since we are interested in quantizing the theory in a constant dilaton configuration, we will shortly see that we can avoid this more laborious process. \n\nTherefore, the JT action that we are interested in quantizing is given by:\n\\begin{eqnarray}\n\\label{eq:JT-simplified-action}\nI_{JT} = -\\int_0^\\b \\frac{du}{\\e^2} \\phi_{ r} \\bigg(\\sqrt{1+ 2\\e^2\\,\\Sch(z, u)} - 1 + \\text{derivatives of Sch.} \\bigg) \\,,\n\\end{eqnarray}\nwhere we have added the correct counter-term needed in order to cancel the $1\/\\e^2$ divergence in the $\\e \\to 0$ limit.\n\nWhile we have found $K[z(u)]$ and $I_{JT}$ to all orders in perturbation theory in $\\e$, we have not yet studied other non-perturbative pieces in $\\e$ (that do not come from the $-$ branch in \\eqref{eq:K-in-terms-of-z}). Such corrections could contain non-local terms in $u$ since all terms containing a finite number of derivatives in $u$ are captured by the $\\e$-perturbative expansion. The full solution of \\eqref{eq:metric-a-barZ-config} provides clues that such non-perturbative corrections could exist and are, indeed, non-local (as they will not be a functional of the Schwarzian). The full solution to \\eqref{eq:metric-a-barZ-config} is\n\\begin{eqnarray}\n\\label{eq:bar-z-diff-eq-sol}\n\\bar z(u) = e^{a u }\\left(1-2a^2 \\e^2 +2a \\e \\left(\\sqrt{-1+a^2 \\e^2}-\\frac{2\\e}{\\frac{\\e}{\\sqrt{-1+a^2 \\e^2}}+\\cC_1 e^{\\frac{u}{\\e}\\sqrt{-1+a^2 \\e^2}}} \\right)\\right)\\,,\n\\end{eqnarray} \nfor some integration constant $\\cC_1$. When $\\cC_1 \\neq 0$, note that the correction to $\\bar z(u)$ in \\eqref{eq:bar-z-diff-eq-sol} are exponentially suppressed in $1\/\\e$ and do not contribute to the series expansion $\\cK_n$. However, when taking $\\cC_1 \\neq 0$, \\eqref{eq:bar-z-diff-eq-sol} there is no way of making $\\bar z(u)$ periodic (while it is possible to make $z(u)$ periodic). While we cannot make sure that every solution has the feature that non-perturbative corrections are inconsistent with the thermal boundary conditions, for the remainder of this section we will only focus on the perturbative expansion of $K[z(u)]$ with the branch choice for the square root given by \\eqref{eq:K-in-terms-of-z}. We will make further comments about the nature of non-perturbative corrections in section \\ref{sec:nonpert}. \n\n\n\n\\subsection{Path integral measure}\n\\label{sec:path-integral-measure}\n\nBefore we proceed by solving the path integral of \\eqref{eq:JT-simplified-action}, it is important to discuss the integration measure and integration contour for $z(u)$. 
Initially, before imposing the constraint \\eqref{eq:lightcone-bdy-metric} on the boundary metric, we can integrate over both $z(u)$ and $\\bar z(u)$, with the two variables being complex conjugates in Euclidean signature. However, once we integrate out $\\bar z(u)$ we are free to choose an integration contour consistent with the constraint \\eqref{eq:lightcone-bdy-metric} and with the topological requirements discussed at the beginning of this section. Thus, for instance if we choose $z(u) \\in \\mR$ then the constraint \\eqref{eq:lightcone-bdy-metric} would imply that $z'(u) >0$ (or $z'(u) <0$); this, in turn, implies that we solely need to integrate over strictly monotonic functions $z(u)$. The boundary conditions for $z(u)$ should nevertheless be independent of the choice of contour; therefore we will impose that $z(u)$ is periodic, $z(0) = z(\\b)$. Of course, this implies that $z(u)$ has a divergence. In order to impose that the boundary is never self-intersecting we will impose that this divergence occurs solely once.\\footnote{All this is also the case in the Schwarzian theory whose classical solution is $\\tau(u) = \\tan(\\pi u\/\\b)$. \\cite{Maldacena:2016upp} has found that if considering solutions where $\\tau(u)$ diverges multiple times ($\\tau(u) = \\tan(n\\pi u\/\\b)$ with $n \\in \\mathbb Z$) then the fluctuations around such solutions are unbounded, and the path integral is divergent (one can still make sense of this theory though, as explained in \\cite{Mertens:2019tcm}). } \nSuch a choice of contour therefore satisfies the following two criteria: \n\\begin{itemize}\n\\item That the boundary is not self-intersecting.\n\\item The boundary is time-like when going to Lorentzian signature. This is because redefining $z(u) \\to z^\\text{Lor.}(u) = -i z(u) \\in \\mR $ leaves the action invariant and describes the boundary of a Lorentzian manifold. Since $i (z^\\text{Lor.})' > 0$, it then follows that the boundary would be time-like. \n\\end{itemize} \n\n\nFurthermore, while we have chosen a specific diffeomorphism gauge which fixes $\\g_{uu}=1\/\\e^2$, the path integral measure (as opposed to the action) should be unaffected by this choice of gauge and should rather be diffeomorphism invariant. The only possible local diffeomorphism invariant path integral measure is that encountered in the Schwarzian theory \\cite{Alekseev:1988ce, Bagrets:2016cdf, Stanford:2017thb} and, in JT gravity at infinite cutoff \\cite{Saad:2019lba}: \n\\begin{eqnarray} \n\\label{eq:path-integral-measure}\nD\\mu[z] = \\prod_{z \\in [0, \\b)} \\frac{dz(u)}{z'(u)}\\,.\n\\end{eqnarray} \nIn principle, one should also be able to derive \\eqref{eq:path-integral-measure} by considering the symplectic form for JT gravity obtained from an equivalent $\\mathfrak{sl}(2, \\mR)$ BF-theory. In \\cite{Saad:2019lba} this symplectic form (which in turn yields the path integral measure \\eqref{eq:path-integral-measure}) was derived in the limit $\\e \\to 0$. It would however be interesting to rederive the result of \\cite{Saad:2019lba} at finite $\\e$ in order to find a more concrete derivation of \\eqref{eq:path-integral-measure}.\n\nTo summarize, we have therefore argued that both the path integration measure, as well as the integration contour, in the finite-$\\e$ theory, can be taken to be the same as those in the pure Schwarzian theory. 
\n\n\n\n\\subsection{Finite cutoff partition function as a correlator in the Schwarzian theory} \n\\label{sec:correlators-and-Schwarzian}\n\nThe path integral which we have to compute is given by \n\\begin{eqnarray}\n\\label{eq:JT-path-integral-what-to-solve}\nZ_{JT}[\\phi_b, L] = \\int_{z'(u)>0} D\\mu[z]\\exp\\bigg[\\int_0^\\b \\frac{du}{\\e^2} \\phi_{ r} &\\bigg(\\sqrt{1+ 2{\\e^2} \\Sch(z, u)} - 1 + \\nonumber \\\\ &+ \\,\\,\\,\\, \\text{derivatives of Sch.} \\,\\bigg)\\bigg] \\,,\n\\end{eqnarray}\nOf course, due to the agreement of integration contour and measure, we can view \\eqref{eq:JT-path-integral-what-to-solve} as the expectation value of the operator in the pure Schwarzian theory with coupling $\\phi_{r}$:\n\\begin{eqnarray}\n\\label{eq:expectation-val-op}\n&Z_{JT}[\\phi_b, L] = \\< \\cO_\\defo\\> \\equiv \\\\ \n &\\equiv \\bigg\\<\\exp\\bigg[\\int_0^\\b \\frac{du}{\\e^2} \\phi_{r}\\,\\bigg(\\sqrt{1+ 2{\\e^2} \\Sch(z, u)} - 1 - \\e^2 \\,\\Sch(z, u) + \\text{derivatives of Sch.} \\bigg) \\,\\bigg] \\bigg\\>\\,.\\nonumber\n\\end{eqnarray}\nA naive analysis (whose downsides will be mention shortly) would conclude that, since in the pure Schwarzian theory, the Schwarzian can be identified with the Hamiltonian of the theory ($-\\frac{H}{2\\phi_{r}^2} = \\Sch(z, u)$), then computing \\eqref{eq:expectation-val-op} amounts to computing the expectation value for some function of the Hamiltonian and of its derivatives. In the naive analysis, one can use that the Hamiltonian is conserved and therefore all derivatives of the Schwarzian in \\eqref{eq:expectation-val-op} can be neglected. The conservation of the Hamiltonian would also imply that the remaining terms in the integral in the exponent \\eqref{eq:expectation-val-op} are constant. Therefore, the partition function simplifies to\n\\begin{eqnarray} \nZ_{JT}[\\phi_b, L] \\, =_\\text{naive} \\bigg\\<\\exp\\bigg[ \\frac{\\b\\phi_{r}}{\\e^2} &\\bigg(\\sqrt{1- \\frac{\\e^2}{\\phi_{r}^2} H} - 1 + \\frac{\\e^2}{2\\phi_{r}^2} \\,H\\bigg)\\bigg] \\bigg\\>\\,.\n\\end{eqnarray} \nwhich can be conveniently rewritten in terms of the actual boundary value of the dilaton $\\phi_b = \\phi_{r}\/\\e$ and the proper length $L = \\b\/\\e$ as\n\\begin{eqnarray} \nZ_{JT}[\\phi_b, L]\\, =_\\text{naive} \\bigg\\<\\exp\\bigg[ L \\phi_b &\\bigg(\\sqrt{1- \\frac{H}{\\phi_b^2}} - 1 + \\frac{H}{\\phi_b}\\,\\bigg] \\bigg\\>\\,.\n\\end{eqnarray} \nThe result for this expectation value in the Schwarzian path integral is given by \n\\begin{eqnarray}\n\\label{eq:final-naive-result}\nZ_{JT}[\\phi_b, L] \\, = _\\text{naive} \\int ds \\,s \\sinh(2\\pi s)e^{L \\phi_b \\left(\\sqrt{1-\\frac{s^2}{\\phi^2_b}} - 1 \\right)}\n\\end{eqnarray}\nwhere we have identified the energy of the Schwarzian theory in terms of the $\\mathfrak{sl}(2, \\mR)$ Casimir for which (for the principal series) $E = C_2(\\l = i s+\\frac{1}2) + \\frac{1}4 = s^2$ (see \\cite{Kitaev:2017hnr, Kitaev:2018wpr, Yang:2018gdb, Iliesiu:2019xuh}). The result \\eqref{eq:final-naive-result} agrees with both the result for the WDW wavefunctional presented in section \\ref{sec:WdW-wavefunctional-main} (up to an overall counter-term) and with the results of \\cite{Gross:2019ach,Gross:2019uxi} (reviewed in the introduction), obtained by studying an analogue of the $T\\bar T$ deformation in $1d$.\\footnote{We identify the deformation parameter $\\l = \\frac{\\e^2}{4\\phi_{ r}}$ in \\cite{Gross:2019ach,Gross:2019uxi}.} \n\nAs previously hinted, the argument presented above is incomplete. 
Namely, the problem appears because correlation functions of the $\\Sch(z, u)$ are not precisely the same as those of a quantum mechanical Hamiltonian. While at separated points correlation functions of the Schwarzian are constant (just like those of $1d$ Hamiltonians), the problem appears at identical points where contact-terms are present. Therefore, the rest of this section will be focused on a technical analysis of the contribution of these contact-terms, and we will show that the final result \\eqref{eq:final-naive-result} is indeed correct even when including such terms.\n\n\\subsubsection*{The generating functional}\n\nTo organize the calculation we will first present a generating functional for the Schwarzian operator in the undeformed theory. This generating functional is defined by \n\\begin{equation}\nZ_{\\text{Sch}}[j(u)] \\equiv \\int \\frac{D\\mu[z]}{SL(2, \\mR)}\\, e^{ \\int_0^\\b du j(u) \\Sch(z(u), u)} ,\n\\end{equation}\nfor an arbitrary function $j(u)$ which acts as a source for Schwarzian insertions. This path integral can be computed repeating the procedure in \\cite{Stanford:2017thb}, which we also review in appendix \\ref{sec:vardil}. The final answer is given by \n\\begin{eqnarray}\n\\label{eq:generating-functional}\nZ_\\text{Sch}[j(u)] \\sim e^{\\int_0^\\b {du}\\frac{j'(u) ^2}{2j(u)}} \\int ds \\,{s} \\sinh(2\\pi s) e^{-\\frac{s^2}2 \\int_0^\\b \\frac{du}{j(u)} }\\,.\n\\end{eqnarray}\nWe will use \\eqref{eq:generating-functional} to evaluate the integrated correlator \\eqref{eq:expectation-val-op}, by rewriting it as \n\\begin{eqnarray} \n\\label{eq:correlator-to-functional-derivative}\n\\<\\cO_\\defo \\> =\\bigg[\\exp\\left(\\int_0^\\b \\frac{du}{\\e^2} \\phi_{r} : \\left(\\sqrt{1+ 2{\\e^2} \\frac{\\delta}{\\delta j(u)}} - 1 + \\cK\\left[\\partial_u \\frac{\\delta}{\\delta j(u)}\\right] \\right):\\,\\right) \\nonumber \\\\ \\times \\,Z_\\text{Sch}[j(u)]\\bigg] \\bigg|_{j(u) = 0}\\,,\n\\end{eqnarray}\nwhere $\\cK\\left[\\partial_u \\frac{\\delta}{\\delta j(u)}\\right]$ is a placeholder for terms containing derivative terms of the Schwarzian and, equivalently, for terms of the from $\\dots \\partial_u \\frac{\\delta}{\\delta j(u)} \\dots $. Finally, $:\\cO:$ is a point-splitting operation whose role we will clarify shortly. \n\n\n\n\\subsubsection*{Computing the full path integral}\n\nTo understand the point splitting procedure necessary in \\eqref{eq:new-expectation-value}, we start by analyzing the structure of correlators when taking functional derivatives of $Z_{JT}[j(u)]$. Schematically, we have that\n\\begin{eqnarray}\n\\label{eq:example-generating-functional}\n \\bigg(\\frac{\\delta}{\\delta j(u_1)} \\dots \\frac{\\delta}{\\delta j(u_n)} Z_\\text{Sch}[j(u)]\\bigg)\\bigg|_{j(u) = \\phi_{r}} = a_1 + a_2[\\delta(u_{ij})] + a_3[\\partial_u \\delta(u_{ij})] + \\dots\\,,\n\\end{eqnarray}\nwhere $a_1$ is a constant determined by the value of the coupling constant $\\phi_{r}$ and $a_2[\\delta(u_{ij})]]$ captures terms which have $\\delta$-functions in the distances $u_{ij} = u_i - u_j$, while $a_3[\\partial_u \\delta(u_{ij})]$ contains terms with at least one derivative of the same $\\delta$-functions for each term.\\footnote{For example, when $n=2$ the exact structure of \\eqref{eq:example-generating-functional} is computed in \\cite{Stanford:2017thb} and is reviewed in appendix \\ref{app:Kexplicit}. 
} The $\\dots$ in \\eqref{eq:example-generating-functional} capture potential higher-derivative contact-terms.\n\nIf in the expansion of the square root in the exponent of \\eqref{eq:correlator-to-functional-derivative} one takes the functional derivative $\\delta\/\\delta j(u)$ at identical points then the contact terms in \\eqref{eq:example-generating-functional} become divergent (containing $\\delta(0)$, $\\delta'(0)$, $\\dots$). An explicit example about such divergences is given in appendix \\ref{app:Kexplicit} when evaluating the contribution of $K_4[z]$ in the perturbative series. In order to eliminate such divergences we define the point-splitting procedure\n\\begin{eqnarray}\n\\label{eq:ordering-prescription}\n:\\frac{\\delta^n}{\\delta j(u)^n} : \\,\\equiv \\, \\lim_{(u_1,\\, \\dots,\\, u_n) \\to u}\\, \\frac{\\delta}{\\delta j(u_1)} \\dots \\frac{\\delta}{\\delta j(u_n)} \\,.\n\\end{eqnarray}\nSuch a procedure eliminates the terms containing $\\delta(0)$ or its derivatives since we first evaluate the functional derivatives in the expansion of \\eqref{eq:new-expectation-value} at separated points. \n\nThe structure of the generating functional also suggests that when integrating the correlator \\eqref{eq:example-generating-functional} the contribution of the derivatives of $\\delta(u_{ij})$ vanish after integration by parts since we will be evaluating \\eqref{eq:new-expectation-value} for constant dilaton values. As we explain in more detail in appendix \\ref{app:Kexplicit}, the origin of the derivatives of $\\delta(u_{ij})$ is two-fold: they either come by taking functional derivatives $\\delta\/\\delta j(u)$ of the term $\\exp\\left({\\int_0^\\b {du}\\frac{j'(u) ^2}{2j(u)}}\\right)$ in $Z_{Sch}[j(u)]$, or they come from the contribution of the derivative terms $\\cK\\left[\\partial_u \\frac{\\delta}{\\delta j(u)}\\right]$. In either case, both sources only contribute terms containing derivatives of $\\delta$-functions (no constant terms or regular $\\delta$-functions). Thus, since such terms vanish after integration by parts, neither $\\cK\\left[\\partial_u \\frac{\\delta}{\\delta j(u)}\\right]$ nor $\\exp\\left({\\int_0^\\b {du}\\frac{j'(u) ^2}{2j(u)}}\\right)$ contribute to the partition function. Consequently, we have to evaluate\n\\begin{eqnarray} \n\\label{eq:new-expectation-value}\n&\\<\\cO_\\defo\\>\\nonumber \\\\ &=\\bigg(\\int ds\\, s \\sinh(2\\pi s)\\,\\exp\\bigg[\\int_0^\\b \\frac{du}{\\e^2} \\phi_{ r} \\bigg(:\\sqrt{1+ 2{\\e^2}\\frac{\\delta}{\\delta j(u)}}: - 1 \\bigg)\\bigg] \\, e^{-\\frac{s^2}2 \\int_0^\\b du \\frac{1}{j(u)}}\\bigg)\\bigg|_{j(u) = 0}\\,.\n\\end{eqnarray}\nTo avoid having to deal with the divergences eliminated by the point-splitting discussed in the continuum limit, we proceed by discretizing the thermal circle into $\\b\/\\delta$ units of length $\\delta$ (and will ultimately consider the limit $\\delta \\to 0$).\\footnote{Sums and products of the type $\\sum_{u \\in [0, \\b)}$ and $\\prod_{u \\in [0, \\b)}$ will iterate over all $\\b\/\\delta $ intervals. } Divergent terms containing $\\delta$ in the final result correspond to terms that contain $\\delta(0)$ in the continuum limit and thus should be eliminated by through the point-splitting procedure \\eqref{eq:ordering-prescription}. Therefore, once we obtain the final form of \\eqref{eq:new-expectation-value}, we will select the universal diffeomorphism invariant $\\delta$-independent term. 
\n\nTo start, we can use that \n \\begin{eqnarray} \n \\label{eq:Laplce-transf-1}\ne^{-\\frac{s^2\\delta}{2 j(u)}} = \\frac{1}{2\\pi i} \\int_{-c - i\\oo}^{-c + i \\oo} d\\a_u \\left[-\\frac{\\pi Y_1(2\\sqrt{\\a_u})}{\\sqrt{\\a_u}}\\right] e^{-\\frac{2\\a_u j_u}{s^2 \\delta}}\n\\end{eqnarray}\nwhere we have introduced a Lagrange multiplier $\\a_u$ for each segment in the thermal circle. The integration contours for all $\\a_u$ are chosen along the imaginary axis for some real constant $c$. The next step is to apply the differential operator in the exponent in \\eqref{eq:new-expectation-value} to \\eqref{eq:Laplce-transf-1},\n\\begin{eqnarray} \n\\label{eq:action-of-diff-ops-on-exp}\n&\\,\\,\\,\\,\\,\\,\\,\\left(e^{\\int_0^{\\b} du \\frac{\\phi_{ r}}{\\e^2}\\,\\left(:\\sqrt{1+2\\e^2\\frac{\\delta}{\\delta j(u)}}:-1\\right)} \\right)\\prod_{u \\in [0, \\b) }e^{-\\frac{2\\a_u j_u}{ s^2 \\delta}} \\bigg|_{j_u=0} = \\nonumber \\\\ \n&= \\left(e^{\\int_0^{\\b} du \\frac{\\phi_{ r}}{\\e^2}\\,\\left(:\\sqrt{1+2\\e^2\\frac{\\delta}{\\delta j(u)}}:-1\\right)}\\right) e^{- \\int_0^\\b du \\,\\frac{2 \\a_u j_u}{s^2 \\delta^2}} \\bigg|_{j_u=0}\\nonumber \\\\ &= \\,: \\exp\\left[\\sum_{u \\in [0, \\b)} \\frac{\\delta\\phi_{r}}{\\e^2} \\left(\\sqrt{1-\\frac{4\\a_u\\e^2}{s^2 \\delta^2}}-1\\right)\\right]:\\,,\n\\end{eqnarray}\nwhere $:\\dots:$ indicates that we will be extracting the part independent of the UV cutoff, $\n\\delta$, when taking the limit $\\delta \\to 0$. Thus, we now need to compute \n\\begin{eqnarray} \nZ_{JT}\\left[\\phi_b,\\, L\\right]=\\frac{1}{2\\pi i} :\\int_0^\\infty ds \\,{s} \\sinh(2\\pi s)&\\int_{-c - i\\oo}^{-c + i \\oo} \\left(\\prod \\,d\\a_u \\right)\\left[-\\frac{\\pi Y_1(2\\sqrt{\\a_u})}{\\sqrt{\\a_u}}\\right] \\nonumber \\\\ &\\times e^{\\sum_{u \\in [0, \\b)} \\frac{\\delta\\phi_{r}}{\\e^2} \\left(\\sqrt{1-\\frac{4\\a_u\\e^2}{s^2 \\delta^2}}-1\\right)}\\,:.\n\\end{eqnarray}\nIn order to do these integrals we introduce an additional field $\\s_u$, such that\n\\begin{eqnarray}\n\\label{eq:Laplce-transf-2}\ne^{ \\frac{\\delta\\phi_{r}}{\\e^2} \\left(\\sqrt{1-\\frac{4\\a_u\\e^2}{s^2 \\delta^2}}-1\\right)} = \\int_0^\\infty \\frac{d \\sigma_u}{\\sigma_u^{3\/2}} \\,\\sqrt{-\\frac{\\delta \\phi_{r}}{2\\pi \\e^2}}\\, e^{- \\frac{2\\sigma_u \\a_u \\phi_{r}}{s^2 \\delta} + \\frac{\\delta \\phi_{r}}{2\\sigma_u \\e^2}(1-\\sigma_u)^2}\\,,\n\\end{eqnarray}\nwhere in order for the integral \\eqref{eq:Laplce-transf-2} to be convergent, we can analytically continue $\\phi_{r}$ to complex values. \nWe can now perform the integral over $\\a_u$ using \\eqref{eq:Laplce-transf-1}, since $\\a_u$ now appears once again in the numerator of the exponent:\n\\begin{eqnarray}\nZ_{JT}\\left[\\phi_b,\\,L\\right]=&\\,:\\int_0^\\infty ds \\,{s} \\sinh(2\\pi s) \\nonumber \\\\ & \\times \\int_0^\\infty \\left( \\prod_{u\\in[0, \\b) }\\frac{d \\sigma_u}{\\sigma_u^{3\/2}} \\,\\sqrt{-\\frac{\\delta \\phi_{r}}{2\\pi \\e^2}} \\right)\\, e^{\\sum_{u\\in[0, \\b) }\\left[- \\frac{s^2 \\delta}{2\\sigma_u \\phi_{r}}+ \\frac{\\delta \\phi_{r}}{2\\sigma_u \\e^2}(1-\\sigma_u)^2\\right]}:\\,.\n\\end{eqnarray}\nWe now change variable in the equation above from $\\sigma_u \\to 1\/\\tilde \\sigma_u$ and perform the Laplace transform, once again using \\eqref{eq:Laplce-transf-2}. We finally find that (when keeping the finite terms in $\\delta$) the partition function is given by:\\footnote{Once again to integrate over $\\tilde \\sigma_u$ we have to analytically continue $\\phi_{r}$ to complex values. 
Finally, to perform the integral over $s$ in \\eqref{eq:JT-analytic-final-result} we analytically continue back to real values of $\\phi_{r}$ and, equivalently, $\\phi_b$. } \n\\begin{eqnarray}\n\\label{eq:JT-analytic-final-result}\n Z_{JT}\\left[\\phi_b, \\,L\\right] &\\sim\\int_0^\\infty ds \\,{s} \\sinh(2\\pi s)\\, e^{\\frac{\\beta \\phi_{r}}{\\e^2}\\left(\\sqrt{1-\\frac{s^2\\e^2}{\\phi_{r}^2}}-1 \\right)}\n\\nonumber \\\\ \n&\\sim\\int_0^\\infty ds\\,{s} \\sinh(2\\pi s)\\, e^{\\frac{\\b}{4\\l}\\left(\\sqrt{1-4\\l s^2\/\\phi_{r}}-1 \\right)}\\,,\n\\end{eqnarray}\nwhere we defined $\\l = \\e^2\/(4\\phi_{r})$. This partition function agrees with the naive result \\eqref{eq:final-naive-result} obtained by replacing the Schwarzian with the Hamiltonian of the pure theory. Consequently, we arrive to the previously mentioned matching between the Euclidean partition function, the WDW wavefunctional and the partition function of the $T\\bar T$ deformed Schwarzian theory,\n\\begin{eqnarray}\ne^{-I_{\\rm ct}}\\Psi_{HH}[\\phi_b,\\, L] = Z_{\\l=\\e^2\/(4\\phi_{r})}(\\b) = Z_{JT}[\\phi_b,\\, L] \\,.\n\\end{eqnarray}\n\nAs a final comment, the Euclidean path integral approach hides two ambiguities. First, as we briefly commented in section \\ref{sec:finding-extrinsic-curvature}, the finite cutoff expansion of the extrinsic curvature might involve terms that are non-perturbatively suppressed in $\\varepsilon$. As we have mentioned before, such terms can either come from considering non-local terms in the extrinsic curvature $K[z(u)]$ or by considering the contribution of the negative branch in \\eqref{eq:K-in-terms-of-z}. Second, even if these terms would vanish, the perturbative series is only asymptotic. Performing the integral \\eqref{eq:JT-analytic-final-result} over energies explicitly gives a finite cutoff partition function \n\\begin{equation}\\label{eqn:k2fullanswer}\nZ_{JT}[\\phi_b, L] = \\frac{L \\phi^2_b e^{-L \\phi_b} }{L^2 +4\\pi^2} K_2\\Big( -\\sqrt{\\phi^2_b(L^2 + 4\\pi^2)} \\Big).\n\\end{equation} \nThis formal result is not well defined since the Bessel function is evaluated at a branch cut \\footnote{This can be tracked to the fact that we are sitting at a Stokes line. It is curious that this explicit answer gives a complex function even though the perturbative terms we found from the path integral are all real (this phenomenon also happens in more familiar setups like WKB \\cite{WKB}).}. The ambiguity related to the presence of this branch cut can be regulated by analytic continuation; for example, in $L \\to L e^{i \\epsilon}$, and the $\\epsilon \\to 0$ limit we find different answers depending on the sign of $\\epsilon$. The ambiguity given by the choice of analytic continuation can be quantified by the discontinuity of the partition function ${\\rm Disc}~Z$ for real $\\phi$ and $L$. \n\nA similar effect is reproduced by the contracting branch of the wavefunction from the canonical approach, there are two orthogonal solutions to the gravitational constraint $\\Psi_\\pm$, defined by their small cutoff behavior $\\Psi_\\pm \\sim e^{\\pm \\phi L}Z_\\pm$, where $Z_\\pm$ is finite. In the language of the Euclidean path integral, the different choice of wavefunctionals correspond to different choices for the square root in the extrinsic curvature \\eqref{eq:K-in-terms-of-z}. Imposing Hartle-Hawking boundary conditions fixes $\\Psi_+$, which matches the perturbative expansion of the Euclidean path integral. 
The corrections to the partition function from the other branch are exponentially suppressed $\\Psi_- \/ \\Psi_+ \\sim e^{-\\frac{1}{\\varepsilon^2}}$. \n\nAs previously hinted, contributions from turning on $\\Psi_-$ are not only related to the choice of branch for $K[z(u)]$, but are in fact the same as the branch-cut ambiguity mentioned above for \\eqref{eqn:k2fullanswer}. To see this, we can notice that ${\\rm Disc}~Z$ is a difference of two functions that separately satisfy the WDW equation, and that it goes to zero at small cutoff. Therefore it has to be of the same form as the $\\Psi_-$ branch given in \\eqref{contracting}.\n\n\\section{The contracting branch and other topologies}\n\\label{sec:nonpert}\nIn this section, we will analyze two different kinds of non-perturbative corrections to the partition function. First we will study corrections that are non-perturbative in the cutoff parameter $\\e$ in sections \\ref{sec:npco} and \\ref{sec:npco2}, which come from turning on the contracting branch of the wavefunction.\nThen, we will comment on non-perturbative corrections coming from non-trivial topologies in section \\ref{sec:nptopo}.\n\n\\subsection{Unitarity at finite cutoff}\n\\label{sec:npco}\n\nGiven the exact form of the wavefunction for general cutoff surfaces, we can study some of the more detailed questions about $T\\bar{T}$ in $AdS_2$. One such question is whether the theory can be corrected to become unitary. As can be seen from the expression for the dressed energy levels \\eqref{deformedE}, the energies go complex whenever $\\l > 1\/(8E)$. This is unsatisfactory if we want to interpret the finite cutoff JT gravity partition function as being described by a $0+1$ dimensional theory, just like the Schwarzian theory describes the full $AdS_2$ bulk of JT gravity. There are a few ways in which one can get around this complexification. \n\nFirstly, we can truncate the spectrum of the initial theory so that $E$ is smaller than some $E_{\\rm max}$. This is totally acceptable, but if we want to have an initial theory that describes the full $AdS_2$ geometry, we cannot do that without making the flow irreversible. In other words, the truncated Schwarzian partition function is not enough to describe the entire JT bulk. The second option is to accept that there are complex energies along the flow but truncate the spectrum to real energies after one has flowed in the bulk. In $1D$ this was emphasized in \\cite{Gross:2019ach} (and in \\cite{McGough:2016lol, Smirnov:2016lqw} for $2D$ CFTs). The projection operator that achieves such a truncation will then depend on $\\l$ and, in general, will not solve the flow equation \\eqref{diffeqZ} of the partition function. A third option is that we use the other branch of the deformed energy levels $\\mathcal{E}_{-}$ (see \\eqref{deformedE}) to make the partition function real. In doing so, we will be guaranteed a solution to the Wheeler-de-Witt equation. Let us pursue option three in more detail and show that we can write down a real partition function $Z_{\\l}(\\b)$ with the correct (Schwarzian) boundary condition at $\\l \\to 0$. \n\nThe solution to the $T\\bar{T}$ flow equation \\eqref{diffeqZ} that takes the form of a partition function is, \n\\begin{eqnarray}\\label{solZ}\n{Z}_{\\l}^{\\rm non-pert.}(\\b) = \\int_{0}^{\\infty} dE \\rho_+(E) e^{-\\b \\mathcal{E}_+(E,\\l)} + \\int_{-\\infty}^{\\infty} dE \\rho_-(E) e^{-\\b \\mathcal{E}_-(E,\\l)}.\n\\end{eqnarray}\nHere, we took the ranges of $E$ to be such that $\\mathcal{E}_{\\pm}$ are bounded from below. 
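For orientation, the two branches take the explicit form\n\\begin{eqnarray}\n\\mathcal{E}_{\\pm}(E,\\l) = \\frac{1}{4\\l}\\left(1 \\mp \\sqrt{1-8\\l E}\\right),\n\\end{eqnarray}\nwhich is the $k=0$ case of \\eqref{deformedE3d} below and reproduces both the complexification threshold $E = 1\/(8\\l)$ quoted above and the value $\\mathcal{E}_-(0,\\l) = 1\/(2\\l)$ used in what follows; the precise normalization in terms of $C$ is the one fixed in \\eqref{deformedE}. 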
As $\\l \\to 0$, we see that the first term goes to some constant (as we already saw previously), but the second term goes to zero non-perturbatively in $\\l$ as $e^{-\\b\/(2\\l)}$. From the boundary condition $\\l \\to 0$ we can therefore not fix the general solution, but only $\\rho_+(E) = \\sinh(2\\pi \\sqrt{2CE})$. If we demand the partition function to be real, then both integrals over $E$ in \\eqref{solZ} should be cut off at $E = 1\/(8\\l)$, and it will therefore not be a solution to \\eqref{diffeqZ} anymore, because the derivatives with respect to $\\l$ can then act on the integration limit. However, by picking\n\\begin{eqnarray}\\label{rho}\n\\rho_- = \\left\\{\\begin{array}{cc}\n-\\sinh(2\\pi \\sqrt{2CE}) & 0 < E < \\frac{1}{8\\l}\\\\\n\\hat{\\rho}(E) & E < 0\n\\end{array}\\right.,\n\\end{eqnarray}\nwith $\\hat{\\rho}(E)$ an arbitrary function of $E$, the boundary terms cancel and we obtain a valid solution to \\eqref{diffeqZ}, and the associated wavefunction $\\Psi = e^{L\\phi_b}Z$ will solve the WDW equation \\eqref{minisupereq}. The final partition function is then given by (see appendix \\ref{app:flowsol} for details),\n\\begin{eqnarray}\n\\label{Znp}\n{Z}^{\\rm non-pert.}_{\\l}(\\b) = \\frac{\\pi \\b e^{-\\frac{\\b}{4\\l}}}{\\sqrt{2 \\l}(\\b^2 + 16 C \\pi^2 \\l)}I_2\\left( \\frac{1}{4\\l}\\sqrt{\\b^2 + 16 C \\pi^2 \\l} \\right) + \\int_{-\\infty}^0 dE \\hat{\\rho}(E) e^{-\\b \\mathcal{E}_-(E,\\l)}.\n\\end{eqnarray}\nNotice that when we redefine $E$ such that we have the canonical Boltzmann weight in the second term of \\eqref{Znp}, the support of $\\hat{\\rho}$ is for $E > \\frac{1}{2\\l}$, because for this redefined energy $E = 0$ maps to $\\frac{1}{2\\l}$. Let us comment on this partition function. First, because of the sign in \\eqref{rho}, the first part of \\eqref{Znp} has a negative density of states and turns out to be equal to \\eqref{deformedrho} with support between $0 \\leq E \\leq \\frac{1}{2\\l}$, see Fig. \\ref{fig:DOS}.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=1]{PlotDOS.pdf}\n\\caption{\\label{fig:DOS} In orange, we show the undeformed density of states $\\sinh(2\\pi \\sqrt{E})$ of JT gravity at infinite cutoff. In dashed black, we show the density of states of the theory with just the branch of the root, $\\mathcal{E}_-$, that connects to the undeformed energies, until the energy complexifies. In blue, we show the density of states $\\rho_{\\l}(E)$ of the deformed partition function \\eqref{Znp} which includes non-perturbative corrections in $\\l$. Above we have set $\\hat{\\rho}(E)=0$, $\\l = 1\/4$ and $C = 1\/2$; the black line therefore ends at $E = \\frac{1}{4\\l} = 1$. The vertical dashed line indicates the energy beyond which $\\hat{\\rho}$ has support. }\n\\end{figure}\nSecond, there is a whole function's worth of non-perturbative ambiguities coming from the second term in \\eqref{Znp} that cannot be fixed by the Schwarzian boundary condition. From the Euclidean path integral approach, assuming that the extrinsic curvature does not receive non-perturbative corrections, we could fix $\\hat{\\rho}(E)=0$ by choosing an appropriate analytic continuation in $L$ when defining the partition function.\n \n\n\\subsection{Relation to $3D$ gravity}\\label{sec:npco2}\n\nThe analysis in the previous section can be repeated in the context of $3D$ gravity and $T\\bar{T}$ deformations of $2D$ CFTs on a torus with parameters $\\tau$ and $\\bar{\\tau}$. 
The deformed partition function satisfies an equation similar to \\eqref{diffeqZ} derived in \\cite{Aharony:2018bad}. This is given by\n\\begin{eqnarray}\n-\\partial_{\\l} Z_{\\l} = \\left[8 \\t_2 \\partial_{\\t}\\partial_{\\bar{\\t}} + 4\\left(i (\\partial_{\\t} - \\partial_{\\bar{\\t}}) - \\frac{1}{\\t_2} \\right) \\l \\partial_{\\l} \\right]Z_{\\l}\n\\end{eqnarray}\nThe solutions of this equation, in the form of a deformed partition function, can be written as\n\\begin{eqnarray}\nZ(\\t,\\bar{\\t},\\l) = \\sum_{\\pm, \\,k} \\int_{E_0}^{\\infty} dE \\rho_\\pm(E) e^{-\\t_2 \\mathcal{E}_\\pm(E,k) + 2\\pi i k \\t_1}\n\\end{eqnarray}\n where $\\tau = \\t_1+i \\t_2$ and $\\bar \\tau = \\t_1 - i \\t_2$. \nHere we have set the radius to one and\n\\begin{eqnarray}\\label{deformedE3d}\n\\mathcal{E}_\\pm(E,k) = \\frac{1}{4\\l}\\left(1 \\mp \\sqrt{ 1 - 8 \\l E + 64 \\pi^2 k^2 \\l^2 }\\right).\n\\end{eqnarray}\nAs usual we pick the minus sign of the root as that connects to the undeformed energy levels at $\\l = 0$. The energy levels of the deformed partition function complexify when $E$ exceeds $E_c = \\frac{1}{8\\l} + 8 \\pi^2 k^2 \\l$. So we would like to cut off the integral there. Similarly, a hard cutoff in the energy will not solve the above differential flow equation anymore. We can resolve this by subtracting the same partition function but with the other sign of the root in \\eqref{deformedE3d}. This is again a solution, but (again) with a negative density of states. \n\n\\subsection{Comments about other topologies}\\label{sec:nptopo}\n\nFinally, we discuss the contribution to the path integral of manifolds with different topologies. The contribution of such surfaces is non-perturbatively suppressed by $e^{-\\phi_0 \\chi(M)}$, where $\\chi(M)$ is the Euler characteristic of the manifold. \n\nWe start with surfaces with two boundaries of zero genus, where one boundary has the Dirichlet boundary conditions \\eqref{eq:Dirichlet-bdy-conditions} and the other ends on a closed geodesic with proper length $b$. The contribution of such surfaces to the partition function, referred to as ``trumpets'', has been computed in the infinite cutoff limit in \\cite{Saad:2019lba}. We can repeat the method of section \\ref{sec:HH-boundary-conditions-JT-wavefunctional} for a spacetime with a geodesic hole of length $b$ by applying the WDW constraints to the boundary on which we have imposed the Dirichlet boundary conditions. This constraint gives the trumpet finite cutoff partition function\n\\begin{equation}\n\\label{eq:trumpet-part-function}\nZ_{\\rm trumpet} [\\phi_b, L, b] = \\frac{\\phi L}{\\sqrt{L^2 - b^2}} K_1\\Big(-\\sqrt{\\phi_b^2(L^2-b^2)} \\Big)\\,.\n\\end{equation}\nThe partition function diverges as $L \\to b$, indicating that the boundary with Dirichlet boundary conditions overlaps with the geodesic boundary. \n\nIn order to construct higher genus surfaces or surfaces with more Dirichlet boundaries one can naively glue the trumpet to either a higher genus Riemann bordered surface or to another trumpet. In order to recover the contribution to the partition function of such configurations we have to integrate over the closed geodesic length $b$ using the Weil-Petersson measure, $d\\mu[b] = db \\,b$. However, when integrating over $b$ in the range from $0$ to $\\infty$ for a fixed value of $L$, we encounter the divergence at $L=b$. 
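The nature of this divergence follows from the standard small-argument asymptotics $K_1(z) \\simeq 1\/z$ as $z \\to 0$: up to the overall phase implied by the negative argument in \\eqref{eq:trumpet-part-function}, the trumpet behaves as\n\\begin{eqnarray}\n\\left|Z_{\\rm trumpet} [\\phi_b, L, b]\\right| \\simeq \\frac{\\phi L}{\\phi_b \\left(L^2 - b^2\\right)} \\qquad \\text{as} \\quad L \\to b\\,,\n\\end{eqnarray}\nso the Weil-Petersson integrand $b\\, Z_{\\rm trumpet}$ develops a non-integrable $\\sim 1\/(L-b)$ singularity at the endpoint $b = L$. 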
\n\n One way to resolve the appearance of this divergence is to once again consider the non-perturbative corrections in $\\e$ discussed in section \\ref{sec:npco} for the trumpet partition function \\eqref{eq:trumpet-part-function}. We can repeat the same procedure as in \\ref{sec:npco} by accounting for the other WDW branch thus making the density of states of the ``trumpet'' real. Accounting for the other branch we find that\n\\begin{eqnarray} \n\\label{eq:trumpet-part-function-1}\nZ_{\\rm trumpet}^{\\rm non-pert.} [\\phi_b, L, b] = \\frac{2\\pi\\phi L}{\\sqrt{L^2 - b^2}} I_1\\Big(\\sqrt{\\phi_b^2(L^2-b^2)} \\Big) \\,,\n\\end{eqnarray}\nwhere we set the density of states for negative energies for the contracting branch to $0$. Interestingly, the partition function \\eqref{eq:trumpet-part-function-1} no longer has a divergence at $L=b$ which was present in \\eqref{eq:trumpet-part-function} and precluded us previously from performing the integral over $b$. We could now integrate \\footnote{Alternatively, one might hope to directly use WDW together with the results of \\cite{Saad:2019lba} for arbitrary genus to directly compute the partition function at finite cutoff. However, as pointed out in \\cite{Giddings:1988wv}, the WDW framework is insufficient for such a computation; instead, computing the full partition function requires a third-quantized framework which greatly complicates the computation. } \n\\begin{eqnarray} \n\\label{eq:cylinder-partition-function}\nZ_{\\rm cyl.}^{\\rm non-pert.}[\\phi_{b_1}, L_1, \\phi_{b_2}, L_2] =_\\text{naive} \\int_0^\\infty db\\, b\\,Z_{\\rm trumpet}^{\\rm non-pert.} [\\phi_{b_1}, L_1, b] Z_{\\rm trumpet}^{\\rm non-pert.} [\\phi_{b_2}, L_2, b]\\,,\n\\end{eqnarray}\n to obtain a potential partition function for the cylinder.\\footnote{ While unfortunately we cannot compute the integral over $b$ exactly it would be interesting to check whether the partition function for the cylinder can be reproduced by a matrix integral whose leading density of states is given by the one found from the disk contribution. \n} \n\nBesides the ambiguity related to the non-perturbative corrections, there is another issue with the formula for the cylinder partition function \\eqref{eq:cylinder-partition-function}. 
Specifically, for any value of the proper length $L_1$ and $L_2$ and for a closed geodesic length $b$ (with $b0$ with\n\\begin{equation}\\label{equivalence}\nc_{2,\\ell}^{-1}\\|x\\|_\\ell\\le\\|x\\|\n\\le c_{\\ell,2}\\|x\\|_\\ell\\quad\\forall\\,x\\in\\mathbbm{R}^d,\n\\end{equation}\nwhere $\\|\\cdot\\|:\\mathbbm{R}^d\\to\\mathbbm{R}_+$ denotes the Euclidean norm.\n\nWe denote the space of all nonempty compact subsets of $\\mathbbm{R}^d$ by $\\mathcal{K}(\\mathbbm{R}^d)$\nand the space of all nonempty compact and convex subsets of $\\mathbbm{R}^d$ \nby $\\mathcal{K}_c(\\mathbbm{R}^d)$.\nThe Hausdorff semi-distance $\\dist:\\mathcal{K}(\\mathbbm{R}^d)\\times\\mathcal{K}(\\mathbbm{R}^d)\\to\\mathbbm{R}_+$ \nand the corresponding symmetric Hausdorff distance \n$\\dist_H:\\mathcal{K}(\\mathbbm{R}^d)\\times\\mathcal{K}(\\mathbbm{R}^d)\\to\\mathbbm{R}_+$ \nare defined by\n\\begin{align*}\n&\\dist(X,X')=\\sup_{x\\in X}\\inf_{x'\\in X'}\\|x-x'\\|,\\\\\n&\\dist_H(X,X')=\\max\\{\\dist(X,X'),\\dist(X',X)\\}.\n\\end{align*}\nFor any $X\\in\\mathcal{K}(\\mathbbm{R}^d)$ and $R>0$, we write\n\\[B_R(X):=\\{x\\in\\mathbbm{R}^d:\\dist(x,X)\\le R\\}\\quad\\text{and}\\quad \n\\|X\\|:=\\sup_{x\\in X}\\|x\\|.\\]\nIdentical notation with a subscript or superscript $\\ell$ will be used\nwhen the underlying norm is $\\|\\cdot\\|_\\ell:\\mathbbm{R}^d\\to\\mathbbm{R}_+$.\nNote that for any $X\\in\\mathcal{K}_c(\\mathbbm{R}^d)$ and $R>0$, the property\n$B_R^\\ell(X)\\in\\mathcal{K}_c(\\mathbbm{R}^d)$ still holds in this non-Euclidean\ngeometry by triangle inequality.\n\nThe support function of a set $X\\in\\mathcal{K}_c(\\mathbbm{R}^d)$ is a mapping\n\\[\\sigma_X:\\mathbbm{R}^d\\to\\mathbbm{R},\\quad\\sigma_X(p):=\\max_{x\\in X}p^Tx.\\]\nFor a set-valued map $F:\\mathbbm{R}^d\\to\\mathcal{K}(\\mathbbm{R}^d)$ and $X\\in\\mathcal{K}(\\mathbbm{R}^d)$,\nwe denote the image and the preimage of $X$ by\n\\[F(X):=\\cup_{x\\in X}F(x)\\quad\\text{and}\\quad\nF^{-1}(X):=\\{x\\in\\mathbbm{R}^d:F(x)\\cap X\\neq\\emptyset\\}.\\]\nThe vector $\\mathbbm{1}\\in\\mathbbm{R}^N$ is the vector with the number $1$ \nin all $N$ components, and the vector $e_i$ is the $i$-th unit vector.\nFor any convex set $X\\subset\\mathbbm{R}^d$, the set of extreme points of $X$\nis denoted $\\ext(X)$, and its interior is denoted $\\interior(X)$.\n\n\n\n\n\\section{Preliminaries and auxiliary results} \\label{PAR}\n\nThe fact that all norms on $\\mathbbm{R}^d$ are equivalent is reflected by a similar\nstatement for Hausdorff semi-distances and Hausdorff distances on $\\mathcal{K}(\\mathbbm{R}^d)$.\n\n\\begin{lemma} \\label{equivalent:Hausdorff}\nThe Hausdorff semi-distances $\\dist$ and $\\dist^\\ell$ as well as the \nHausdorff distances $\\dist_H$ and $\\dist_H^\\ell$ are equivalent.\n\\end{lemma}\n\n\\begin{proof}\nFor any $X,X'\\in\\mathcal{K}(\\mathbbm{R}^d)$, we compute\n\\begin{align*}\n&c_{2,\\ell}^{-1}\\dist^\\ell(X,X')\n=c_{2,\\ell}^{-1}\\sup_{x\\in X}\\inf_{x'\\in X'}\\|x-x'\\|_\\ell\n\\le \\sup_{x\\in X}\\inf_{x'\\in X'}\\|x-x'\\|\n=\\dist(X,X'),\\\\\n&\\dist(X,X')=\\sup_{x\\in X}\\inf_{x'\\in X'}\\|x-x'\\|\n\\le c_{\\ell,2}\\sup_{x\\in X}\\inf_{x'\\in X'}\\|x-x'\\|_\\ell\n=c_{\\ell,2}\\dist^\\ell(X,X'),\n\\end{align*}\nand hence\n\\[c_{2,\\ell}^{-1}\\dist_H^\\ell(X,X')\\le\\dist_H(X,X')\\le c_{\\ell,2}\\dist_H^\\ell(X,X').\\]\n\\end{proof}\n\nGiven a matrix $A\\in\\mathbbm{R}^{N\\times d}$, we define a space of polyhedra \nby setting 
\n\\[\\mathcal{G}_A:=\\{Q_{A,b}:b\\in\\mathbbm{R}^N\\}\\setminus\\{\\emptyset\\},\\quad\nQ_{A,b}:=\\{x\\in\\mathbbm{R}^d: Ax\\le b\\}.\\]\nThis space has been explored in depth in the paper \\cite{Rieger:17}.\nWe recapitulate the relevant facts as briefly as possible\nand refer to \\cite{Rieger:17} for technical details.\n\nThroughout the rest of the paper, we require the following assumptions.\n\n\\begin{assumption} \\label{A:ass}\nThe matrix $A\\in\\mathbbm{R}^{N\\times d}$ has the following properties.\n\\begin{itemize}\n\\item [a)] It consists of pairwise distinct rows $a_1^T,\\ldots,a_N^T$ \nsatisfying $a_i\\in\\mathbbm{R}^d$ and $\\|a_i\\|_2=1$ for $i=1,\\ldots,N$.\n\\item [b)] We have $Q_{A,0}=\\{0\\}$.\n\\end{itemize}\n\\end{assumption}\n\nAssumption \\ref{A:ass}b) holds whenever the rows of $A$ \nare reasonably dense in the sphere, see Theorem 16 in \\cite{Rieger:17},\nand by Corollary 17 in \\cite{Rieger:17}, it\nguarantees that the space $\\mathcal{G}_A$ consists of (bounded) polytopes. \nBy Theorem 13 in \\cite{Rieger:17}, the mapping $b\\mapsto Q_{A,b}$\nis bi-Lipschitz w.r.t.\\ Hausdorff distance.\n\n\\medskip\n\nIntersections of polytopes can be expressed as the componentwise infimum\nof their representations.\n\n\\begin{lemma} \\label{intersections}\nLet $\\mathcal{B}\\subset\\mathbbm{R}^N$ be a subset with $\\cap_{b\\in\\mathcal{B}}Q_{A,b}\\neq\\emptyset$,\nand let $b^*\\in\\mathbbm{R}^N$ be given by $b^*_i:=\\inf_{b\\in\\mathcal{B}}b_i$.\nThen $Q_{A,b*}=\\cap_{b\\in\\mathcal{B}}Q_{A,b}$.\n\\end{lemma}\n\n\\begin{proof}\nIf $x\\in Q_{A,b^*}$, then $a_i^Tx\\le b^*_i\\le b_i$ for all $b\\in\\mathcal{B}$\nand $i\\in\\{1,\\ldots,N\\}$, so $x\\in\\cap_{b\\in\\mathcal{B}}Q_{A,b}$.\nIf, on the other hand, we have $x\\notin Q_{A,b^*}$, then there exists\n$i\\in\\{1,\\ldots,N\\}$ with $b_i^*0$.\nThen we have $F(B^\\ell_{\\ell^{-1}R}(X))\\subset B^\\ell_R(X)$.\nThe same statement holds for the mapping $F_\\varepsilon$.\n\\end{lemma}\n\n\\begin{proof}\nThis follows from the computation\n\\begin{align*}\n&\\dist^\\ell(F(B^\\ell_{\\ell^{-1}R}(X)),X)\n\\le\\dist^\\ell(F(B^\\ell_{\\ell^{-1}R}(X)),F(X))\\\\\n&\\le\\ell\\dist^\\ell(B^\\ell_{\\ell^{-1}R}(X),X)\n\\le R.\n\\end{align*}\nSince $F_\\varepsilon$ shares all properties of $F$, the same proof applies to $F_\\varepsilon$.\n\\end{proof}\n\n\nWe construct an approximation to $X^*$ by solving an optimization problem, \nwhich uses the property proved in Proposition \\ref{inf:reach} part e) \nas a constraint.\n\n\\begin{proposition}\\label{vacuum:approximation}\nThe optimization problem\n\\begin{equation}\\left.\\begin{aligned} \n&\\min_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\mathcal{B}_{X^*}\\\\\n&\\mathcal{B}_{X^*}:=\\{b\\in\\mathbbm{R}^N:F(Q_{A,b})\\subset Q_{A,b}\\neq\\emptyset\\}\n\\end{aligned}\\right\\}\\label{op1}\\end{equation}\npossesses a unique solution $b^*\\in\\mathbbm{R}^N$. 
\nThis $b^*$ is given by \n\\[b^*_i=\\inf_{b\\in\\mathcal{B}_{X^*}}b_i\\quad\\text{for}\\quad i\\in\\{1,\\ldots,N\\}\\]\nand satisfies the error bound\n\\[X^*\\subset Q_{A,b^*}\\subset B^\\ell_{\\ell^{-1}R_A^{X^*}}(X^*)\\]\nwith $R_A^{X^*}$ as in Lemma \\ref{not:worse:than:ell}.\n\\end{proposition}\n\n\\begin{remark}\na) Note that Problem \\eqref{op1} selects the smallest polytope\nin the collection $\\{Q_{A,b}:b\\in\\mathcal{B}_{X^*}\\}$ with respect to inclusion.\nThe setup of the problem also guarantees that the vector $b^*$ is a particularly \nnice representation of the polytope $Q_{A,b^*}$, see Section 2.2 \nof \\cite{Rieger:17}.\n\nb) The number $R_A^{X^*}$ is defined in Lemma \\ref{not:worse:than:ell} and\nbounded by the a-priori estimate in Proposition \\ref{inf:reach} part f).\n\nc) It is at this stage not obvious that Problem \\eqref{op1} is a\ndisjunctive program.\nThis will be established in the next section.\n\\end{remark}\n\n\\begin{proof}[Proof of Proposition \\ref{vacuum:approximation}]\nWe clearly have $\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*))\\neq\\emptyset$.\nLemma \\ref{not:worse:than:ell} implies\n\\[\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*))\\subset B^\\ell_{\\ell^{-1}R_A^{X^*}}(X^*),\\]\nand by Proposition \\ref{inf:reach} part c), Lemma \\ref{nested:lemma} \nand Theorem \\ref{projector}, we have\n\\[F(\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*)))\n\\subset F(B^\\ell_{\\ell^{-1}{R_A^{X^*}}}(X^*))\n\\subset B_{R_A^{X^*}}^\\ell(X^*)\n\\subset\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*)).\\]\nIn particular, we find\n\\[\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*))\\in\\{Q_{A,b}:b\\in\\mathcal{B}_{X^*}\\},\\]\nso $\\mathcal{B}_{X^*}\\neq\\emptyset$.\nAccording to Proposition \\ref{inf:reach} part e), we have\n$X^*\\subset(\\cap_{b\\in\\mathcal{B}_{X^*}}Q_{A,b})$,\nso by Lemma \\ref{intersections}, the vector $b^*\\in\\mathbbm{R}^N$ given by \n$b^*_i:=\\inf_{b\\in\\mathcal{B}}b_i$ satisfies\n\\[X^*\\subset(\\cap_{b\\in\\mathcal{B}_{X^*}}Q_{A,b})=Q_{A,b^*}.\\]\nFrom the definition of $\\mathcal{B}_{X^*}$, we obtain\n\\begin{align*}\n&F(Q_{A,b^*})\n=F(\\cap_{b\\in\\mathcal{B}_{X^*}}Q_{A,b})\n=C(\\cap_{b\\in\\mathcal{B}_{X^*}}Q_{A,b})+V\n\\subset(\\cap_{b\\in\\mathcal{B}_{X^*}}(CQ_{A,b})+V)\\\\\n&\\subset(\\cap_{b\\in\\mathcal{B}_{X^*}}(CQ_{A,b}+V))\n=\\cap_{b\\in\\mathcal{B}_{X^*}}F(Q_{A,b})\n\\subset(\\cap_{b\\in\\mathcal{B}_{X^*}} Q_{A,b})\n=Q_{A,b^*},\n\\end{align*}\nso we have $b^*\\in\\mathcal{B}_{X^*}$ as well.\nBy the above and by Lemma \\ref{not:worse:than:ell}, we conclude \n$b^*=\\argmin_{b\\in\\mathcal{B}_{X^*}}\\mathbbm{1}^Tb$ and\n\\[X^*\\subset Q_{A,b^*}\\subset\\pi_{\\mathcal{G}_A}(B_{R_A^{X^*}}^\\ell(X^*))\n\\subset B^\\ell_{\\ell^{-1}R_A^{X^*}}(X^*).\\]\n\\end{proof}\n\nThe unique minimizers of the perturbed problems approximate\nthe unique minimizer of the original problem.\n\n\\begin{proposition}\\label{eps:converge}\nFor any $\\varepsilon>0$, the optimization problem\n\\begin{equation}\\left.\\begin{aligned}\n&\\min_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\mathcal{B}_{X^*_\\varepsilon}\\\\\n&\\mathcal{B}_{X^*_\\varepsilon}:=\\{b\\in\\mathbbm{R}^N:F_\\varepsilon(Q_{A,b})\\subset Q_{A,b}\\neq\\emptyset\\}\n\\end{aligned}\\right\\}\\label{op1eps}\\end{equation}\npossesses a unique solution $b^*_\\varepsilon\\in\\mathbbm{R}^N$.\nThis $b^*_\\varepsilon$ is given by \n\\[b^*_{\\varepsilon,i}=\\inf_{b\\in\\mathcal{B}_{X^*_\\varepsilon}}\nb_i\\quad\\text{for}\\quad i\\in\\{1,\\ldots,N\\},\\]\nand we 
have\n\\[\\lim_{\\varepsilon\\searrow 0}b_\\varepsilon^*=b^*.\\]\n\\end{proposition}\n\n\\begin{proof}\nSince $F_\\varepsilon$ shares all properties of $F$ for every $\\varepsilon>0$, we can apply \nProposition \\ref{vacuum:approximation} to the mapping $F_\\varepsilon$\nto obtain existence and uniqueness of the solutions $b^*_\\varepsilon$.\nIt remains to show the convergence statement.\n\nThe inclusion\n$F(Q_{A,b_\\varepsilon^*})\\subset F_\\varepsilon(Q_{A,b_\\varepsilon^*})\\subset Q_{A,b_\\varepsilon^*}$\nimplies $b_\\varepsilon^*\\in\\mathcal{B}_{X^*}$, so $b^*\\le b_\\varepsilon^*$ holds by \nProposition \\ref{vacuum:approximation}, and hence \n$Q_{A,b^*}\\subset Q_{A,b_\\varepsilon^*}$.\nLet $R_1=R_1(\\varepsilon):=\\frac{c_{2,\\ell}\\ell\\varepsilon}{1-\\ell}$ and define\n$X(\\varepsilon):=B^\\ell_{\\ell^{-1}R_1}(Q_{A,b^*})$. \nLemma \\ref{nested:lemma} gives\n\\begin{align*}\n&F_\\varepsilon(X(\\varepsilon))=F_\\varepsilon(B^\\ell_{\\ell^{-1}R_1}(Q_{A,b^*}))\n=F(B^\\ell_{\\ell^{-1}R_1}(Q_{A,b^*}))+B_\\varepsilon(0)\\\\\n&\\subset B_{R_1}^\\ell(Q_{A,b^*})+B_{c_{2,\\ell}\\varepsilon}^\\ell(0)\n=B_{R_1+c_{2,\\ell}\\varepsilon}^\\ell(Q_{A,b^*})\n\\subset B^\\ell_{\\ell^{-1}R_1}(Q_{A,b^*})\n=X(\\varepsilon).\n\\end{align*}\nNow let $R_2=R_2(\\varepsilon):=R_A^{X(\\varepsilon)}$ with notation as in \nLemma \\ref{not:worse:than:ell}.\nUsing Lemma \\ref{not:worse:than:ell}, Lemma \\ref{nested:lemma}\nand Theorem \\ref{projector}, we obtain\n\\begin{align*}\nF_\\varepsilon(\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon))))\n\\subset F_\\varepsilon(B_{\\ell^{-1}R_2}^\\ell(X(\\varepsilon)))\n\\subset B_{R_2}^\\ell(X(\\varepsilon))\n\\subset\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon))),\n\\end{align*}\nso $\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon)))\\in\\{Q_{A,b}:b\\in\\mathcal{B}_{X^*_\\varepsilon}\\}$,\nand by minimality of $b_\\varepsilon^*$, we have\n\\[Q_{A,b^*}\\subset Q_{A,b^*_\\varepsilon}\\subset\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon))).\\]\nLet $L_A>0$ be the Lipschitz constant of the mapping $\\pi_{\\mathcal{G}_A}$.\nBy the above, and since $Q_{A,b^*}\\in\\mathcal{G}_A$, we obtain\n\\begin{align*}\n&\\dist^\\ell(Q_{A,b^*_\\varepsilon},Q_{A,b^*})\n\\le\\dist^\\ell(\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon))),Q_{A,b^*})\\\\\n&=\\dist^\\ell(\\pi_{\\mathcal{G}_A}(B_{R_2}^\\ell(X(\\varepsilon))),\\pi_{\\mathcal{G}_A}(Q_{A,b^*}))\n\\le L_A\\dist^\\ell(B_{R_2}^\\ell(X(\\varepsilon)),Q_{A,b^*}),\n\\end{align*}\nwhich implies\n\\begin{align*}\n&\\lim_{\\varepsilon\\searrow 0}\\dist^\\ell(Q_{A,b^*_\\varepsilon},Q_{A,b^*})=0\n\\end{align*}\nand hence the desired convergence statement.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Disjunctive programs} \\label{CC}\n\nWe assume that the values $\\sigma_V(a_i)$ for $i\\in\\{1,\\ldots,N\\}$ \nof the support function of the sets $V$ are available.\nThis is not a strong requirement, because in many applications, \nthe set $V$ has a very simple shape.\nWe use the notation\n\\begin{align*}\n&P_0:=\\{p\\in\\mathbbm{R}^N:A^Tp=0,\\ \\mathbbm{1}^Tp=1,\\ p\\ge 0\\},\\\\\n&P_i:=\\{p\\in\\mathbbm{R}^N,\\ A^Tp=C^Ta_i,\\ p\\ge 0\\},\\quad i\\in\\{1,\\ldots,N\\},\n\\end{align*}\nand we develop representation of the sets $\\mathcal{B}_{X^*_\\varepsilon}$\nwhich is accessible to linear optimization techniques.\nIf $\\varepsilon=0$, then $F_\\varepsilon=F$, $B_\\varepsilon(V)=V$ and $\\mathcal{B}_{X^*_\\varepsilon}=\\mathcal{B}_{X^*}$.\n\n\\begin{proposition} \\label{lpc}\nConsider an arbitrary vector 
$b\\in\\mathbbm{R}^N$ and $\\varepsilon\\ge 0$.\n\\begin{itemize}\n\\item [a)] For $X\\in\\mathcal{K}(\\mathbbm{R}^d)$, the inclusion \n$X\\subset Q_{A,b}$ holds if and only if we have $\\sigma_{X}(a_i)\\le b_i$ \nfor all $i\\in\\{1,\\ldots,N\\}$.\n\\item [b)] The following statements are equivalent:\n\\begin{itemize}\n\\item [i)] $Q_{A,b}\\neq\\emptyset$;\n\\item [ii)] $p^Tb\\ge 0$ for all $p\\in\\mathbbm{R}^N$ with $A^Tp=0$ and $p\\ge 0$;\n\\item [iii)] $p^Tb\\ge 0$ for all $p\\in P_0$.\n\\item [iv)] $p^Tb\\ge 0$ for all $p\\in\\ext(P_0)$.\n\\end{itemize}\n\\item [c)] If we have $Q_{A,b}\\neq\\emptyset$, then the following statements\nare equivalent:\n\\begin{itemize}\n\\item [i)] $F_\\varepsilon(Q_{A,b})\\subset Q_{A,b}$;\n\\item [ii)] $\\max\\{(C^Ta_i)^Tx:x\\in Q_{A,b}\\}\\le b_i-\\sigma_{B_\\varepsilon(V)}(a_i)$\n$\\forall\\,i\\in\\{1,\\ldots,N\\}$;\n\\item [iii)] $\\min\\{(p-e_i)^Tb:p\\in P_i\\}\\le-\\sigma_{B_\\varepsilon(V)}(a_i)$ \n$\\forall\\,i\\in\\{1,\\ldots,N\\}$;\n\\item [iv)] $\\min\\{(p-e_i)^Tb:p\\in\\ext(P_i)\\}\\le-\\sigma_{B_\\varepsilon(V)}(a_i)$\n$\\forall\\,i\\in\\{1,\\ldots,N\\}$.\n\\end{itemize}\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\nStatement a) is obvious.\n\nb) The equivalence between i) and ii) is the version of the Farkas lemma \ngiven in Proposition 1.7 of \\cite{Ziegler}.\nElementary arguments show that statement ii) is equivalent with statement iii).\nSince the set $P_0$ is a compact polytope, statement iii) is equivalent \nwith statement iv).\n\nc) We have $F_\\varepsilon(Q_{A,b})\\subset Q_{A,b}$ if and only if \n\\[a_i^T(Cx+v)\\le b_i\\quad\\forall\\,x\\in Q_{A,b},\\ \\forall\\,v\\in B_\\varepsilon(V),\\\n\\forall\\,i\\in\\{1,\\ldots,N\\},\\]\nwhich can be rewritten as\n\\[(C^Ta_i)^Tx\\le b_i-a_i^Tv\\quad\\forall\\,x\\in Q_{A,b},\\ \\forall\\,v\\in B_\\varepsilon(V),\\\n\\forall\\,i\\in\\{1,\\ldots,N\\}.\\]\nThis establishes the equivalence of statements i) and ii).\nSince $Q_{A,b}$ is nonempty and bounded, the strong duality theorem \nfor linear programming as presented in Theorem 4.13 of \\cite{Theobald:13}\nguarantees that \n\\begin{align*}\n&\\max\\{(C^Ta_i)^Tx:x\\in Q_{A,b}\\}=\\min\\{p^Tb:p\\in P_i\\}\n=\\min\\{p^Tb:p\\in\\ext(P_i)\\}\n\\end{align*}\nis finite.\nHence statement ii) is equivalent with statements iii) and iv).\n\\end{proof}\n\nWe state Problems \\eqref{op1} and \\eqref{op1eps} as a disjunctive programs \nto highlight their structural properties.\n\n\\begin{theorem}\nProblems \\eqref{op1} and \\eqref{op1eps} are equivalent with the problem\n\\begin{equation}\\label{LP:ext:1}\n\\left.\\begin{aligned}\n&\\min_b\\,\\mathbbm{1}^Tb&&\\\\\n&\\text{subject to}&0&\\le p^Tb\\quad\\forall\\,p\\in P_0\\\\\n&&\\min\\{(p-e_1)^Tb:p\\in\\ext(P_1)\\}&\\le-\\sigma_{B_\\varepsilon(V)}(a_1),\\\\\n&&&\\vdotswithin{\\le}\\\\\n&&\\min\\{(p-e_N)^Tb:p\\in\\ext(P_N)\\}&\\le-\\sigma_{B_\\varepsilon(V)}(a_N)\n\\end{aligned}\\right\\}\n\\end{equation}\nwith $\\varepsilon=0$ in the case of Problem \\eqref{op1}.\n\\end{theorem}\n\nDisjunctive programs are, in general, hard to solve, see \\cite{Balas}.\nTo compute the desired solution $b^*$, we will construct a dual-type\nproblem that is just an ordinary linear program.\n\n\n\n\n\n\n\n\n\\section{Perturbed dual LPs}\n\nIn the following, we will formulate a linear program, which is related\nto the dual of Problem \\eqref{LP:ext:1} with $\\varepsilon>0$ in the sense of \\cite{Balas}.\n\n\\begin{remark}\nIt is, in general, not possible to compute the solution $b^*$ of \nProblem \\ref{LP:ext:1} with 
$\\varepsilon=0$ by following \\cite{Balas} directly:\n\na) If $\\interior(X^*)\\neq\\emptyset$, then $\\interior(Q_{A,b^*})\\neq\\emptyset$,\nwhich implies $p^Tb^*>0$ for all $p\\in P_0$ by Proposition 36 in \\cite{Rieger:17}.\nThe dual as defined in \\cite{Balas} and similar works involves a constraint\nof type $p^Tb\\le 0$ for some $p\\in P_0$, which means that $b^*$ is dual infeasible \nin this setting.\n\nb) The dual problem from \\cite{Balas} may have more than one maximizer, \nso it is not obvious how to recover $b^*$ from a dual solution.\n\\end{remark}\n\nThese facts motivate us to construct a dual problem following the general idea from\n\\cite{Balas}, but omitting the constraints $p^Tb\\ge 0$ for all $p\\in P_0$, and to\nuse a perturbation argument to show uniqueness of the dual maximizer.\n\n\\begin{proposition}\\label{locmin}\nFor any $\\varepsilon>0$, the global minimizer $b^*_\\varepsilon\\in\\mathbbm{R}^N$ \nof Problem \\eqref{op1eps} satisfies\n\\begin{align}\n&\\max\\{(C^Ta_i)^Tx:x\\in Q_{A,b^*_\\varepsilon}\\}=b_{\\varepsilon,i}^*-\\sigma_{B_\\varepsilon(V)}(a_i)\\quad\n\\forall\\,i\\in\\{1,\\ldots,N\\},\\label{primal:eq}\\\\\n&\\min\\{(p-e_i)^Tb^*_\\varepsilon:p\\in\\ext(P_i)\\}=-\\sigma_{B_\\varepsilon(V)}(a_i)\\quad\\forall\\,i\\in\\{1,\\ldots,N\\},\n\\label{dual:eq}\n\\end{align}\nand the global minimizer $b^*\\in\\mathbbm{R}^N$ of Problem \\eqref{op1} satisfies\n\\[\\min\\{(p-e_i)^Tb^*:p\\in\\ext(P_i)\\}=-\\sigma_{V}(a_i)\\quad\\forall\\,i\\in\\{1,\\ldots,N\\}.\\]\nIn particular, we have $b^*_\\varepsilon\\in\\Omega_\\varepsilon$ and $b^*\\in\\Omega$, where\n\\begin{align*}\n&\\Omega_\\varepsilon:=\\{b\\in\\mathbbm{R}^N:(e_i-p)^Tb\\le\\sigma_{B_\\varepsilon(V)}(a_i)\\ \\forall\\,p\\in\\ext(P_i),\\ i\\in\\{1,\\ldots,N\\}\\},\\\\\n&\\Omega:=\\{b\\in\\mathbbm{R}^N:(e_i-p)^Tb\\le\\sigma_{V}(a_i)\\ \\forall\\,p\\in\\ext(P_i),\\ i\\in\\{1,\\ldots,N\\}\\}.\n\\end{align*}\n\\end{proposition}\n\n\\begin{proof}\nSince $b^*_\\varepsilon\\in\\mathcal{B}_{X^*_\\varepsilon}$, \nwe have $Q_{A,b^*_\\varepsilon}\\neq\\emptyset$, \nand hence $F_\\varepsilon(Q_{A,b^*_\\varepsilon})\\neq\\emptyset$, \nas well as $F_\\varepsilon(Q_{A,b^*_\\varepsilon})\\subset Q_{A,b^*_\\varepsilon}$, which is,\nby Proposition \\ref{lpc} part c), equivalent with\n\\begin{equation}\\label{forall}\n\\max\\{(C^Ta_i)^Tx:x\\in Q_{A,b^*_\\varepsilon}\\}\\le b^*_{\\varepsilon,i}-\\sigma_{B_\\varepsilon(V)}(a_i)\\quad\n\\forall\\,i\\in\\{1,\\ldots,N\\}.\n\\end{equation}\nAssume that there exist $j\\in\\{1,\\ldots,N\\}$ and $\\delta>0$ with\n\\begin{equation}\\label{forj}\n\\max\\{(C^Ta_j)^Tx:x\\in Q_{A,b^*_\\varepsilon}\\}\\le b^*_{\\varepsilon,j}-\\sigma_{B_\\varepsilon(V)}(a_j)-\\delta.\n\\end{equation}\nThen for $b^\\delta:=b^*_\\varepsilon-\\delta e_j$, inequalities \\eqref{forall} \nand \\eqref{forj} yield\n\\begin{align*}\n\\sigma_{F_\\varepsilon(Q_{A,b^*_\\varepsilon})}(a_i)\n&=\\max\\{(C^Ta_i)^Tx:x\\in Q_{A,b^*_\\varepsilon}\\}+\\sigma_{B_\\varepsilon(V)}(a_i)\\\\\n&\\le b^\\delta_i\\quad\\forall\\,i\\in\\{1,\\ldots,N\\},\n\\end{align*}\nso $\\emptyset\\neq F_\\varepsilon(Q_{A,b^*_\\varepsilon})\\subset Q_{A,b^\\delta}$ follows from\nProposition \\ref{lpc} part a).\nBy monotonicity, we have\n\\[\\sigma_{F_\\varepsilon(Q_{A,b^\\delta})}(a_i)\n\\le\\sigma_{F_\\varepsilon(Q_{A,b^*_\\varepsilon})}(a_i)\n\\le b_i^\\delta\\quad\\forall\\,i\\in\\{1,\\ldots,N\\},\\]\nwhich is equivalent with $F_\\varepsilon(Q_{A,b^\\delta})\\subset Q_{A,b^\\delta}$.\nHence $b^\\delta\\in\\mathcal{B}_{X^*_\\varepsilon}$, but we 
have \n$\\mathbbm{1}^Tb^\\delta<\\mathbbm{1}^Tb^*_\\varepsilon$,\nwhich is a contradiction.\nAll in all, we have proved equation \\eqref{primal:eq},\nand equation \\eqref{dual:eq} follows from the strong duality theorem \nof linear programming.\nEquation \\eqref{dual:eq}, in turn, implies that\n\\[(p-e_i)^Tb^*_\\varepsilon\\ge-\\sigma_{B_\\varepsilon(V)}(a_i)\\quad\\forall\\,p\\in\\ext(P_i),\\ \n\\forall\\,i\\in\\{1,\\ldots,N\\},\\]\nwhich shows that $b^*_\\varepsilon\\in\\Omega_\\varepsilon$.\nThe same arguments work in the case $\\varepsilon=0$.\n\\end{proof}\n\n\nThe following result shows that the set $Q_{A,b^*_\\varepsilon}$ can be computed \nby solving a linear programming problem for $b^*_\\varepsilon$.\n\n\\begin{proposition} \\label{perturbed:unique}\nFor any $\\varepsilon>0$, the unique solution $b^*_\\varepsilon$ of the perturbed disjunctive\nprogram \\eqref{op1eps} is the unique solution of the linear program\n\\[\\max_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\Omega_\\varepsilon.\\]\n\\end{proposition}\n\nThe need to consider an arbitrarily small inflation of the set $V$ \narises from the following proof, in which we need that small perturbations $b$\nof the point $b^*_\\varepsilon$ satisfy $Q_{A,b}\\neq\\emptyset$.\n\n\\begin{proof}\nAssume that there exists $b_*\\in\\Omega_\\varepsilon\\setminus\\{b^*_\\varepsilon\\}$ with \n$\\mathbbm{1}^Tb_*\\ge\\mathbbm{1}^Tb^*_\\varepsilon$. \nSince $b^*_\\varepsilon\\in\\mathcal{B}_{X^*_\\varepsilon}$, we have\n\\[\\emptyset\\neq\\interior B_\\varepsilon(V)\n\\subset\\interior F_\\varepsilon(Q_{A,b^*_\\varepsilon})\\subset\\interior Q_{A,b^*_\\varepsilon},\\]\nand by Proposition 37 in \\cite{Rieger:17}, there exists \n$\\delta>0$ such that $p^Tb^*_\\varepsilon\\ge\\delta$ for all $p\\in\\ext(P_0)$. 
\nBy H\\\"older inequality, the vector \n$\\bar{b}:=b^*_\\varepsilon+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}(b^*_\\varepsilon-b_*)$\nsatisfies\n\\[p^T\\bar{b}\n=p^Tb^*_\\varepsilon+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}p^T(b^*_\\varepsilon-b_*)\n\\ge\\delta-\\delta\\|p\\|_1\n=0\\quad\\forall p\\in\\ext(P_0),\\]\nso $Q_{A,\\bar{b}}\\neq\\emptyset$ by part b) of Proposition \\ref{lpc}.\nSince $b_*\\in\\Omega_\\varepsilon$ and by Proposition \\ref{locmin}, for every $i\\in\\{1,\\ldots,N\\}$,\nthere exists $p\\in\\ext(P_i)$ such that\n\\[(e_i-p)^Tb_*\\le\\sigma_{B_\\varepsilon(V)}(a_i)\\quad\\text{and}\\quad\n(p-e_i)^Tb^*_\\varepsilon=-\\sigma_{B_\\varepsilon(V)}(a_i).\\]\nFrom this we conclude that\n\\begin{align*}\n(p-e_i)^T\\bar{b}\n&=(p-e_i)^T(b^*_\\varepsilon+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}(b^*_\\varepsilon-b_*))\\\\\n&=(1+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty})(p-e_i)^Tb^*_\\varepsilon-\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}(p-e_i)^Tb_*\\\\\n&\\le-(1+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty})\\sigma_{B_\\varepsilon(V)}(a_i)\n+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}\\sigma_{B_\\varepsilon(V)}(a_i)\n=-\\sigma_{B_\\varepsilon(V)}(a_i),\n\\end{align*}\nand hence that $F_\\varepsilon(Q_{A,\\bar{b}})\\subset Q_{A,\\bar{b}}$ according to\nProposition \\ref{lpc} part c).\nAll in all, we have $\\bar{b}\\in\\mathcal{B}_{X^*_\\varepsilon}$, but \n\\[\\mathbbm{1}^T\\bar{b}\n=\\mathbbm{1}^T(b^*_\\varepsilon+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}(b^*_\\varepsilon-b_*))\n=(1+\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty})\\mathbbm{1}^Tb^*_\\varepsilon\n-\\tfrac{\\delta}{\\|b^*_\\varepsilon-b_*\\|_\\infty}\\mathbbm{1}^Tb_*\n\\le\\mathbbm{1}^Tb^*_\\varepsilon,\\]\nwhich is impossible, because the point $b^*_\\varepsilon$ is the unique global minimum \nof Problem \\eqref{op1}.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{The unperturbed dual LP}\n\nNow we conclude that the approximation $Q_{A,b^*}$ to $X^*$ we wish to compute\nis indeed given by the unique solution of the unperturbed dual linear program.\n\n\\begin{theorem}\nThe unique solution $b^*$ of the disjunctive program \\eqref{op1} \nis the unique solution of the linear program\n\\[\\max_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\Omega.\\]\n\\end{theorem}\n\n\\begin{proof}\nBy Proposition \\ref{perturbed:unique}, for any $\\varepsilon>0$, the unique solution \n$b^*_\\varepsilon$ of the disjunctive program \\eqref{op1eps} is the unique solution \nof the linear program\n\\begin{equation} \\label{loc:1}\n\\max_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\Omega_\\varepsilon.\n\\end{equation}\nBy Proposition \\ref{locmin}, we have $b^*\\in\\Omega$, so the linear program\n\\begin{equation} \\label{loc:2}\n\\max_b\\,\\mathbbm{1}^Tb\\quad\\text{subject to}\\quad b\\in\\Omega\n\\end{equation}\nis feasible. 
\nSince $\\Omega\\subset\\Omega_\\varepsilon$, the value of Problem \\eqref{loc:2} is bounded \nby the value of Problem \\eqref{loc:1} with $\\varepsilon=1$, so \n$\\argmax_{b\\in\\Omega}\\mathbbm{1}^Tb\\neq\\emptyset$.\nCorollary 3.1 from \\cite{Robinson} yields that if \n$\\tilde{b}\\in\\argmax_{b\\in\\Omega}\\mathbbm{1}^Tb$ and there exists $\\varepsilon_0>0$ with\n$\\argmax_{b\\in\\Omega_\\varepsilon}\\mathbbm{1}^Tb\\neq\\emptyset$ for $\\varepsilon\\in(0,\\varepsilon_0]$, then\nthere exist $\\tilde{b}_\\varepsilon\\in\\argmax_{b\\in\\Omega_\\varepsilon}\\mathbbm{1}^Tb$ \nwith\n\\[\\tilde{b}=\\lim_{\\varepsilon\\searrow 0}\\tilde{b}_\\varepsilon.\\]\nIn the present situation, we have $b^*_\\varepsilon=\\argmax_{b\\in\\Omega_\\varepsilon}\\mathbbm{1}^Tb$,\nso using Proposition \\ref{eps:converge}, we conclude that\n\\[\\argmax_{b\\in\\Omega}\\mathbbm{1}^Tb=\\lim_{\\varepsilon\\searrow 0}b^*_\\varepsilon=b^*.\\]\n\\end{proof}\n\n\n\n\n\n\\subsection*{Acknowledgement}\nWe thank Andrew Eberhard for pointing us towards the term \\emph{disjunctive\nprogramming}, which we would otherwise never have found in the maze of mathematical\nliterature.\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently, the sequence-to-sequence (S2S) approach in automatic speech recognition (ASR) has received a considerable amount of interest, due to the ability to jointly train all components towards a common goal which reduces complexity and error propagation compared to traditional hybrid systems. \nTraditional systems divide representation into different levels in the acoustic model, in particular separating global features (such as channel and speaker characteristics) and local features (on the phoneme level). \nThe language model and acoustic model are trained with different loss functions and then combined during decoding.\nIn contrast, neural S2S models perform a direct mapping from audio signals to text sequences based on dynamic interactions between two main model components, an encoder and a decoder, which are jointly trained towards maximizing the likelihood of the generated output sequence. \nThe neural encoder reads the audio features into high-level representations, which are then fed into an auto-regressive decoder which attentively generates the output sequences~\\cite{bahdanau2014neural,bahdanau2016end}.\n\nIn this context, we aim at reconsidering acoustic modeling within end-to-end models. Previous approaches in general had long short-term memory neural networks (LSTM)~\\cite{hochreiter1997long} or time-delay neural networks~\\cite{waibel1989tdnn} operating on top of frame-level features to learn sequence-level representation. These neural networks are able to capture long range and local dependencies between different timesteps. \n\nRecently, self-attention has been shown to efficiently represent different structures including text~\\cite{bahdanau2014neural}, images~\\cite{velivckovic2017graph}, and even acoustic signals~\\cite{sperber2018self} with impressive results. The Transformer model using self-attention achieved the state-of-the-art in mainstream NLP tasks~\\cite{vaswani2017attention}. The attractiveness of self-attention networks originates from the ability to establish a direct connection between any element in the sequence. Self-attention is able to scale with the length of the input sequence without any limiting factor such as, e.g., the kernel size of CNNs, or the vanishing gradient problem of LSTMs. 
Moreover, the self-attention network is also computationally advantageous compared to recurrent structures because intermediate states are no longer connected recurrently, leading to more efficient batched operations. As a result, self-attention networks can be reasonably trained with many layers, leading to state-of-the-art performance in various tasks~\\cite{devlin2018bert}.\nSelf-attention and the Transformer have been exploratorily applied to ASR, but so far with unsatisfactory results. \\cite{sperber2018self} found that self-attention in the encoder (acoustic model) was not effective, but combined with an LSTM brought marginal improvement and greater interpretability, while\n\\cite{dong2018speech} did not find any notable improvement using the Transformer in which the encoder combines self-attention with convolution\/LSTM compared to other model architectures.\\\\\nIn this work, we show that the Transformer requires little modification to adapt to the speech recognition task. Specifically, we exploit the advantages of self-attention networks for ASR such that both our acoustic encoder and character-generating decoder are constructed without any recurrence or convolution. To the best of our knowledge, this is the first attempt to propose this system architecture, and we show that a competitive end-to-end ASR model can be achieved solely using standard training techniques from general S2S systems.\n\nOur contributions are as follows. First, we show that depth is an important factor in acquiring competitive end-to-end ASR models with the Transformer. Second, in order to facilitate training of very deep configurations, we propose a variation of stochastic depth for the Transformer inspired by the Stochastic Residual Network for image classification~\\cite{huang2016deep}. \n\nWe discovered that its ability to regularize is the key contribution to obtaining the state-of-the-art result among end-to-end ASR models for the standard 300h Switchboard (SWB) benchmark. \nThis result is achieved using a total of 48 Transformer layers across the encoder and decoder.~\\footnote{Our source code and final model are available at \\textit{https:\/\/github.com\/quanpn90\/NMTGMinor\/tree\/audio-encoder\/}}\n\n\\begin{figure}[htb]\n\\vspace{-1em}\n\\centering\n\\includegraphics[width = 0.45\\textwidth]{figures\/transformer_speech.PNG}\n\\caption{\\label{fig:model} A diagram of transformation from acoustic features to character-level transcriptions. The red connections represent the residual connections, which are rescaled according to Equation~\\ref{eq:stochastic_res} for stochastic Transformers.}\n\\end{figure}\n\n\\section{Model Description}\n\n\\subsection{Encoder-Decoder with Attention}\nThe main components of the model include an encoder, which consumes the source sequence and then generates a high-level representation, and a decoder generating the target sequence. \nThe decoder models the data as a conditional language model - the probability of the sequence of discrete tokens is decomposed into an ordered product of distributions conditioned on both the previously generated tokens and the encoder representation.\n\nBoth encoder and decoder are neural networks and require neural components that are able to learn the relationship between the time steps in the input and output sequence. The decoder also requires a mechanism to condition on specific components of the encoder representation. 
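Concretely, for an input feature sequence $x$ and an output character sequence $y = (y_1,\\ldots,y_T)$, this conditional decomposition can be written (in generic notation used here only to make the factorization explicit) as\n\\begin{equation}\nP(y \\mid x) = \\prod_{t=1}^{T} P\\left(y_t \\mid y_{<t}, \\mathrm{Enc}(x)\\right),\n\\end{equation}\nwhere $\\mathrm{Enc}(x)$ denotes the encoder representation that the decoder attends to. 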
\nFor the Transformer, attention, or its common variation multi-head attention, is the core of the model in place of recurrence.\n\n\\subsection{Multi-Head Attention}\nFundamentally, attention refers to the method of using a content-based information extractor from a set of queries $Q$, keys $K$ and values $V$. The retrieval function is based on similarities~\\cite{luong2015effective} between the queries and the keys, and in turn returns the weighted sum of the values as follows:\n\\begin{equation}\n \\text{Attention}(Q, K, V) = \\text{softmax}(QK^T)V\n\\end{equation}\nRecently, \\cite{vaswani2017attention} improves dot-product attention by scaling the queries beforehand and introducing sub-space projections of keys, queries and values into $n$ parallel heads, in which $n$ attention operations are performed with the corresponding heads. The result is the concatenation of the attention outputs from each head.\nNotably, unlike recurrent connections, which transfer data through a single gated state, or convolutional connections, which linearly combine local states within a limited kernel size, self-attention aggregates the information from \\textit{all} time-steps without any intermediate transformation.\n\n\\subsection{Layer Architecture}\n\nThe overall architecture is demonstrated in Figure~\\ref{fig:model}. The encoder and decoder of the Transformer are constructed from layers, each of which contains self-attentional sub-layers coupled with feed-forward neural networks. \n\n\nTo adapt the encoder to long speech utterances, we follow the reshaping practice from~\\cite{sperber2018self} by grouping consecutive frames into one step. \nSubsequently, we combine the input features with sinusoidal positional encoding~\\cite{vaswani2017attention}. While directly adding acoustic features to the positional encoding is harmful, potentially leading to divergence during training~\\cite{sperber2018self}, we resolved that problem by simply projecting the concatenated features to a higher dimension before adding ($512$, as for the other hidden layers in the model). In the case of speech recognition specifically, the positional encoding offers a clear advantage compared to learnable positional embeddings~\\cite{gehring2017convolutional}, because the speech signals can be arbitrarily long with a higher variance compared to text sequences. \n\nThe Transformer encoder passes the input features to a self-attention layer followed by a feed-forward neural network with 1 hidden layer with the ReLU activation function. Before these sub-modules, we follow the original work and include residual connections, which establish short-cuts between the lower-level representation and the higher layers. The presence of the residual layer massively increases the magnitude of the neuron values, which is then alleviated by the layer-normalization~\\cite{ba2016layer} layers placed after each residual connection.\n\nThe decoder is the standard Transformer decoder from recent translation systems~\\cite{vaswani2017attention}. The notable difference between the decoder and the encoder is that, to maintain the auto-regressive nature of the model, the self-attention layer of the decoder must be masked so that each state has access only to the past states. Moreover, an additional attention layer using the target hidden layers as queries and the encoder outputs as keys and values is placed between the self-attention and the feed-forward layers. 
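All of these attention sub-layers use the multi-head mechanism described above. As a minimal illustration (not our actual implementation: the dimensions, head count and random weights below are chosen only for the example, and the per-head column slicing of shared projection matrices is one simple convention), a NumPy sketch reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef softmax(x):\n    e = np.exp(x - x.max(axis=-1, keepdims=True))\n    return e \/ e.sum(axis=-1, keepdims=True)\n\ndef multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo, n_heads=4):\n    # Q: (tq, d); K, V: (tk, d); Wq, Wk, Wv, Wo: (d, d) learned projections.\n    d = Q.shape[-1]\n    dh = d \/\/ n_heads\n    heads = []\n    for h in range(n_heads):\n        s = slice(h * dh, (h + 1) * dh)\n        q, k, v = Q @ Wq[:, s], K @ Wk[:, s], V @ Wv[:, s]\n        scores = (q \/ np.sqrt(dh)) @ k.T      # scaled dot product per head\n        heads.append(softmax(scores) @ v)     # weighted sum of the values\n    return np.concatenate(heads, axis=-1) @ Wo  # concatenate heads, project\n\n# toy usage: 7 query steps attending over 9 key\/value steps, model size 32\nrng = np.random.default_rng(0)\nd = 32\nQ, K, V = rng.normal(size=(7, d)), rng.normal(size=(9, d)), rng.normal(size=(9, d))\nWq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))\nprint(multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo).shape)  # (7, 32)\n\\end{verbatim}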
Residual and layer-normalization are set up identically to the encoder.\n\nThis particular design of the Transformer has various advantages compared to previously proposed RNN and CNN networks. First, computation of each layer and sub-module can be efficiently parallelized over both mini-batch and time dimensions of the input. Second, the combination of residual and layer normalization is the key to enabling deeper configurations to be trainable, which is the main reason for the performance breakthrough in recent works in both MT and natural language processing~\\cite{devlin2018bert,pham2018wmt}. \n\n\\subsection{Stochastic Layers}\nThe high density of residual connections is the reason why the Transformer is favourably trained with many layers. However, deep models in general suffer from overfitting due to more complex architectures as well as from optimization difficulty~\\cite{bapna2018training}. Studies of residual networks have shown that during training the network consists of multiple sub-networks taking different paths through shortcut connections~\\cite{veit2016residual}, and thus there are redundant layers. Motivated by the previous work of~\\cite{huang2016deep}, we propose to apply stochastic residual layers in our Transformers. The method resembles Dropout~\\cite{srivastava2014dropout}, in which the key idea is that layers are randomly dropped during training.\nThe original residual connection of an input $x$ and its corresponding neural layer $F$ has the following form:\n\\begin{equation}\nR(x) = \\text{LayerNorm}(F(x) + x)\n\\label{eq:res}\n\\end{equation}\nIn equation~\\ref{eq:res}, the inner function $F$ is either self-attention, feed-forward layers or even decoder-encoder attention. The layer normalization as in~\\cite{ba2016layer} keeps the magnitude of the hidden layers from growing large. Stochastic residual connections fundamentally apply a mask $M$ on the function $F$, as follows:\n\\begin{equation}\nR(x) = \\text{LayerNorm}(M * F(x) + x)\n\\end{equation}\nThe mask $M$ only takes $0$ or $1$ as values, generated from a Bernoulli distribution similar to dropout~\\cite{srivastava2014dropout}. When $M=1$, the inner function $F$ is activated, while it is skipped when $M=0$. These stochastic connections enable more sub-network configurations to be created during training, while during inference the full network is present, causing the effect of ensembling different sub-networks, as analyzed in~\\cite{veit2016residual}. \nIt is non-trivial to set the parameter $p$ for dropping layers, since the number of residual connections in the Transformer is considerable. \n\nIntuitively, the lower the layer is, the lower its drop probability should be. As a result, $p$ values are set with the following policy:\n\\begin{itemize}\n \n \\item Sub-layers inside each encoder or decoder layer share the same mask, so each mask decides to drop or to keep the whole layer (including the sub-layers inside). This way we have one hyper-parameter $p$ for each layer.\n \\item As suggested by~\\cite{huang2016deep}, the lower layers of the networks handle raw-level acoustic features on the encoder side, and the character embeddings on the decoder side. 
Therefore, lower layers $l$ have lower probability linearly scaled by their depth according to equation~\\ref{eq:p}, where $p$ is the global-level parameter and $L$ is the total number of layers.~\\footnote{Our early experiments with a constant $p$ for all connections provide evidence that dropping lower-level representations is less tolerable than dropping higher-level representations.} \n\\end{itemize}\nLastly, since the layers are selected with probability $1-p_l$ during training and are always present during inference, we scale the layers' output by $\\frac{1}{1-p_l}$ whenever they are not skipped. Therefore, each stochastic residual connection has the following form during \\textit{training} (the scaling factor is removed during testing):\n\\begin{equation}\np_l = \\frac{l}{L} (1 - p)\n\\label{eq:p}\n\\end{equation}\n\\begin{equation}\nR(x) = \\text{LayerNorm}(M * F(x) * \\frac{1}{1-p_l} + x)\n\\label{eq:stochastic_res}\n\\end{equation}\n\n\n\\section{Experimental Setup}\n\n\\subsection{Data}\nOur experiments were conducted on the Switchboard-1 Release 2 (LDC97S62) corpus, which contains over 300 hours of speech. The Hub5'00 evaluation data (LDC2002S09) was used as our test set. All the models were trained on 40 log mel filter-bank features, which are extracted and normalized per conversation. We also adopted a simple down-sampling method in which we stacked 4 consecutive feature vectors to reduce the length of input sequences by a factor of 4. Besides the filter-bank features, we did not employ any auxiliary features. We followed the approach from \\cite{ko2015audio} to generate a speed perturbation training set. Extra experiments are also conducted on the TED-LIUM 3~\\cite{hernandez2018ted} dataset, which is more challenging due to longer sequences.\n\n\\subsection{Implementation Details}\nOur hyperparameter search revolves around the~\\textit{Base} configuration of the machine translation model in the original Transformer paper~\\cite{vaswani2017attention}. For all of our experiments in this work, the embedding dimension $d$ is set to $512$ and the size of the hidden state in the feed-forward sub-layer is $1024$. The mini-batch size is set so that we can fit our model in the GPU, and we accumulate the gradients and update every $25000$ characters. We use Adam~\\cite{kingma2014adam} with a learning rate that is adapted over the training progress:\n\\begin{equation}\n lr = init\\_lr * d^{-0.5} * min (step^{-0.5}, step * warmup^{-1.5})\n\\end{equation}\nin which the init\\_lr is set to $2$, and we warm up the learning rate for $8000$ steps. Dropout~\\cite{srivastava2014dropout} (applied before residual connection and on the attention weights) is set at $0.2$. 
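For clarity, the learning rate schedule above corresponds to the following minimal sketch (the standalone function form and its name are ours; the default constants mirror the settings just quoted):\n\\begin{verbatim}\ndef learning_rate(step, d=512, init_lr=2.0, warmup=8000):\n    # Noam-style schedule for step >= 1: linear warm-up,\n    # then inverse square-root decay; peaks at step == warmup\n    # (roughly 1e-3 with these settings).\n    return init_lr * d ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)\n\\end{verbatim}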
We also apply character dropout~\\cite{gal2016theoretically} with $p=0.1$ and label smoothing~\\cite{szegedy2016rethinking} with $\\epsilon=0.1$.\n\\section{Results}\n\\begin{table}[ht]\n\\caption{The performance of deep self-attention networks with and without stochastic layers on Hub5'00 test set with 300h SWB training set.}\n\t\\label{tab:swb}\n\t\\vspace{-0.2cm}\t\n\t\\setlength{\\tabcolsep}{6pt}\n\t\\centering\n\t\\begin{tabular}{lccc}\n\t\t\\toprule\n \\textbf{Layers} & \\textbf{\\#Param} & \\textbf{SWB} & \\textbf{CH}\\\\\n \\midrule\n 04Enc-04Dec & 21M & 20.8 & 33.2 \\\\ \n 08Enc-08Dec & 42M & 14.8 & 25.5 \\\\ \n 12Enc-12Dec & 63M & 13.0 & 23.9 \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 13.1 & 23.6 \\\\ \n 24Enc-24Dec & 126M & 12.1 & 23.0 \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 11.7 & 21.5 \\\\ \n \\;\\; \\textit{+Speed Perturbation} & & 10.6 & 20.4 \\\\\n 48Enc-48Dec & 252M & - & - \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 11.6 & 20.9 \\\\\n 48Enc-48Dec (half-size) & 63M & - & - \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 12.5 & 22.9\\\\ \n \\midrule\n 08Enc-08Dec (big) & 168M & 13.8 & 25.1 \\\\\n \\midrule\n 24Enc-12Dec & 113M & 13.3 & 23.7 \\\\ \n \\;\\; \\textit{+Stochastic Layers} & & 11.9 & 21.6\\\\\n 36Enc-8Dec & 113M & 12.4 & 22.6 \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 11.5 & 20.6\\\\\n 36Enc-12Dec & 113M & 12.4 & 22.6 \\\\\n \\;\\; \\textit{+Speed Perturbation} & & 11.2 & 20.6 \\\\\n \\;\\; \\textit{+Stochastic Layers} & & 11.3 & 20.7\\\\\n \\;\\; \\textit{+Both} & & \\textbf{10.4} & \\textbf{18.6} \\\\\n 40Enc-8Dec & 109M & -- & --\\\\\n \\;\\; \\textit{+Stochastic Layers} & & 11.9 & 21.4\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.0cm}\n\\end{table}\n\n\\begin{table}[ht]\n\\caption{Comparing our best model to other hybrid and end-to-end systems reporting on Hub5'00 test set with 300h SWB training set. }\n\\label{tab:swb2}\n\t\\vspace{-0.2cm}\t\n\t\\setlength{\\tabcolsep}{4pt}\n\t\\centering\n\t\\begin{tabular}{lccc}\n\t\t\\toprule\n \\textbf{Hybrid\/End-to-End Models} & \\textbf{Tgt Unit} & \\textbf{SWB} & \\textbf{CH}\\\\\n \\midrule\n TDNN~~~+LFMMI \\cite{povey2016purely} & Phone & 10.0 & 20.1 \\\\\n BLSTM +LFMMI \\cite{povey2016purely} & Phone & \\textbf{9.6} & 19.3 \\\\\n \\midrule\n CTC+CharLM \\cite{maas2015lexicon} & Char & 21.4 & 40.2 \\\\\n LSTM w\/attention~\\cite{bahdanau2014neural} & Char & 15.8 & 36.0 \\\\\n Iterated-CTC +LSTM-LM \\cite{zweig2017advances} & Char & 14.0 & 25.3 \\\\\n \n Seq2Seq ~~~~~~~~+LSTM-LM \\cite{zeyer2018improved} & BPE & 11.8 & 25.7 \\\\\n Seq2Seq ~~~~+Speed Perturbation \\cite{weng2018improving} & Char & 12.2 & 23.3 \\\\\n CTC-A2W +Speed Perturbation \\cite{yu2018multistage} & Word & 11.4 & 20.8 \\\\\n \\midrule\n \n \n 36Enc-12Dec (Ours) & Char & 10.4 & 18.6 \\\\\n 48Enc-12Dec (Ours) & Char & 10.7 & 19.4 \\\\\n 60Enc-12Dec (Ours) & Char & 10.6 & 19.0 \\\\\n Ensemble & & 9.9 & \\textbf{17.7} \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.0cm}\n\\end{table}\n\nThe experiment results on SWB testsets are shown in Table~\\ref{tab:swb}. A shallow configuration (i.e $4$ layers) is not sufficient for the task, and the WER reduces from $20.8\\%$ to $12.1\\%$ on the SWB test as we increase the depth from $4$ to $24$. \nThe improvement is less significant between $12$ and $24$ (only $5\\%$ relative WER), which seems to be a symptom of overfitting.\n\n\n\nOur suspicion of overfitting is confirmed by the addition of stochastic networks. 
At $12$ layers, the stochastic connections only improve the CH performance by a small margin, while the improvement is substantially greater in the $24$-layer setting. Following this trend, the stochastic $48$-layer model keeps improving on the CH test set, showing the model's ability to generalize better. \n\nArguably, the advantage of deeper models is to offer more parameters, as shown in the second column. We performed a contrastive experiment using a shallow model of 8 layers, but doubling the model size so that its parameter count is larger than that of the deep $24$-layer model. The performance of this model is considerably worse than the $24$-layer model, demonstrating that deeper networks with smaller layers are more beneficial than a wider yet shallower configuration. Conversely, we found that the 48-layer model with half size is only on par with the $12$-layer variant, possibly due to over-regularization.~\\footnote{We did not change the dropout values for this model, so each layer's hidden units are dropped more severely.}\n\nOur second discovery is that the Transformer requires a deeper encoder than decoder. This is in line with previous work~\\cite{zhang2017very}, which increased the depth of the CNN encoder.\nAs shown above, the encoder has to learn representations starting from audio features, while the decoder handles the generation of character sequences, conditioned on the encoder representation. \nThe difference in modalities suggests different configurations. \nHolding the total number of layers at $48$, we shift depth to the encoder.\nOur result with a much shallower decoder of only $8$ layers, but with $40$ encoder layers, is as good as the $24$-layer configuration. \nMore strikingly, we were able to obtain our best result with the $36-12$ configuration at $20.6\\%$ WER, which is competitive with the best previous end-to-end work using data augmentation.\n\nThird, the combination of our regularization techniques (dropout, label smoothing and stochastic networks) proved complementary to data augmentation, which further improved our result to $18.6\\%$ with the $36-12$ setup. This model, as far as we know, establishes the state-of-the-art result for the SWB benchmark among end-to-end ASR models, as shown in table~\\ref{tab:swb2}. Compared to the best hybrid models with similar data constraints, our models outperform on the CH test set while remaining competitive on the SWB test set, without any additional language-model training data. This result suggests the strong generalizability of the Stochastic Transformer. \n\nFinally, the experiments with similar depth suggest that self-attention performs competitively compared to LSTMs~\\cite{hochreiter1997long} or TDNNs~\\cite{waibel1989tdnn}. The former benefits strongly from building deep residual networks, and our main finding is that depth is crucial for using self-attention in the regime of ASR. \n\n\\subsection{On TED-LIUM dataset}\n\n\\begin{table}[ht]\n\\caption{Transformer results on the TED-LIUM test set using the TED-LIUM 3 training set.}\n\\label{tab:ted}\n\t\\vspace{-0.2cm}\t\n\t\\setlength{\\tabcolsep}{4pt}\n\t\\centering\n\t\\begin{tabular}{lc}\n\t\t\\toprule\n \\textbf{Models} & \\textbf{Test WER} \\\\\n \\midrule\n CTC~\\cite{hernandez2018ted} & 17.4 \\\\\n CTC\/LM + speed perturbation~\\cite{hernandez2018ted} & 13.7 \\\\\n \\midrule\n 12Enc-12Dec (Ours) & 14.2 \\\\\n Stc. 12Enc-12Dec (Ours) & 12.4 \\\\\n Stc. 24Enc-24Dec (Ours) & 11.3 \\\\\n Stc. 
36Enc-12Dec (Ours) & \\textbf{10.6} \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.0cm}\n\\end{table}\n\nTable~\\ref{tab:ted} shows our results on the TED-LIUM (version 3) dataset. \nUsing a configuration similar to the SWB models, with 36 encoder layers and 12 decoder layers, we outperformed a strong baseline that uses both speed perturbation and an external language model trained on more data than the available transcriptions. This result continues the trend that these models benefit from a deeper encoder; together with the stochastic residual connections, we further improved the WER from $14.2\\%$ to $10.6\\%$, a relative reduction of roughly $25\\%$. \nGiven the potential of the models~\\footnote{We did not have enough time for a thorough hyper-parameter search by the time of submission.}, we expect that better results can be obtained by further hyper-parameter optimization. \n\n\\section{Related Work}\nThe idea of using self-attention as the main component of ASR models has been investigated in various forms. \\cite{sperber2018self} combines self-attention with LSTMs, while \\cite{salazar2019self} uses self-attention as an alternative within CTC models. A variation of the Transformer has been applied to ASR with additional TDNN layers to downsample the acoustic signal~\\cite{dong2018speech}. Though self-attention has provided various benefits, such as training speed and model interpretability, previous works have not demonstrated a clear improvement in recognition performance. Our work provides a self-attention-only model and shows that, with high capacity and strong regularization, such a network can exceed previous end-to-end models and approach the performance of hybrid systems. \n\\section{Conclusion}\nDirectly mapping from acoustics to text transcriptions is a challenging task for S2S models. \nIn principle, self-attention can be used as an alternative to TDNNs or LSTMs for acoustic modeling, and here we are the first to demonstrate that the Transformer can be effective for ASR, the key being to set up very deep stochastic models. We achieve state-of-the-art results among end-to-end models on $2$ standard benchmarks, and our networks are among the deepest configurations for ASR.\nFuture work will involve developing the framework under more realistic and challenging conditions, such as real-time recognition, in which latency and streaming are crucial. \n\\section{Acknowledgements}\nThe work leading to these results has received funding from the European Union under grant agreement N\\textsuperscript{\\underline{o}}~825460 and the Federal Ministry of Education and Research (Germany) \/ DLR Projekttr\u00e4ger Bereich Gesundheit under grant agreement N\\textsuperscript{\\underline{o}}~01EF1803B. We are also grateful to Elizabeth Salesky for her very useful comments. \n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction} \\label{sec:intro}\n\nOne of the most fundamental attributes of deep networks, and a driving reason behind their empirical success, is the ``Depth Efficiency'' result, which states that deeper models are exponentially more expressive than shallower models of similar size. 
Formal studies of Depth Efficiency include the early work on\nboolean or thresholded circuits~\\citep{Sipser83,Yao89,Hastad91,Hajnal:1993kl},\nand the more recent studies covering the types of networks used in practice\n\\citep{Pascanu:2013ue,Montufar:2014tb,Eldan:2015uc,expressive_power,\ngeneralized_decomp,Telgarsky:2016wk,Safran:2016te,Raghu:2016wn,BenPoole:2016vy}.\nWhat makes the Depth Efficiency attribute so desirable, is that it brings\nexponential increase in expressive power through merely a polynomial change in\nthe model, i.e. the addition of more layers. Nevertheless, depth is merely one\namong many architectural attributes that define modern networks. The deep\nnetworks used in practice consist of architectural features defined by various\nschemes of connectivity, convolution filter defined by size and stride, pooling\ngeometry and activation functions. Whether or not those relate to expressive\nefficiency, as depth has proven to be, remains an open question.\n\nIn order to study the effect of network design on expressive efficiency we\nshould first define \"efficiency\" in broader terms. Given two network\narchitectures~$A$ and~$B$, we say that architecture~$A$ is expressively\nefficient with respect to architecture~$B$, if the following two conditions\nhold: \\emph{(i)}~any function ${\\mathbf h}$ realized by~$B$ of size~$r_B$ can be realized\n(or approximated) by $A$ with size $r_A \\in{\\mathcal O}(r_B)$; \\emph{(ii)}~there exist a\nfunction ${\\mathbf h}$ realized by~$A$ with size $r_A$, that cannot be realized (or\napproximated) by~$B$, unless $r_B \\in\\Omega(f(r_A))$ for some super-linear\nfunction~$f$. The exact definition of the sizes $r_A$ and $r_B$ depends on the\nmeasurement we care about, e.g. the number of parameters, or the number of\n``neurons''. The nature of the function~$f$ in condition~\\emph{(ii)} determines\nthe type of efficiency taking place~--~if~$f$ is exponential then\narchitecture~$A$ is said to be exponentially efficient with respect to\narchitecture~$B$, and if~$f$ is polynomial so is the expressive efficiency.\nAdditionally, we say $A$ is \\emph{completely efficient} with respect to $B$, if\ncondition (ii) holds not just for some specific functions (realizable by $A$),\nbut for all functions other than a negligible set.\n\nIn this paper we study the efficiency associated with the architectural\nattribute of convolutions, namely the size of convolutional filters (receptive\nfields) and more importantly its proportion to their stride. We say that a\nnetwork architecture is of the \\emph{non-overlapping} type when the size of the\nlocal receptive field in each layer is equal to the stride. In that case, the\nsets of pixels participating in the computation of each two neurons in the same\nlayer are completely separated. When the stride is smaller than the receptive\nfield we say that the network architecture is of the \\emph{overlapping} type. In\nthe latter case, the \\emph{overlapping degree} is determined by the \\emph{total}\nreceptive field and stride projected back to the input layer~--~the implication\nbeing that for the overlapping architecture the total receptive field and stride\ncan grow much faster than with the non-overlapping case.\n\nAs several studies have shown, non-overlapping convolutional networks do have\nsome theoretical merits. Namely, non-overlapping networks are\nuniversal~\\citep{expressive_power,generalized_decomp}, i.e. 
they can approximate\nany function given sufficient resources, and in terms of optimization, under\nsome conditions they actually possess better convergence guaranties than\noverlapping networks. Despite the above, there are only few instances of\nstrictly non-overlapping networks used in practice (e.g. \\citet{tmm,\nvan2016wavenet}), which raises the question of \\textbf{why are non-overlapping\narchitectures so uncommon?} Additionally, when examining the kinds of\narchitectures typically used in recent years, which employ a mixture of both\noverlapping and non-overlapping layers, there is a trend of using ever smaller\nreceptive fields, as well as non-overlapping layers having an ever increasing\nrole~\\citep{NiN,Springenberg:2014tx,Szegedy:2014tb}. Hence, the most common\nnetworks used practice, though not strictly non-overlapping, are increasingly\napproaching the non-overlapping regime, which raises the question of \\textbf{why\nhaving just slightly overlapping architectures seems sufficient for most tasks?}\n\nIn the following sections, we will shed some light on these questions by \nanalyzing the role of overlaps through a surrogate class of convolutional\nnetworks called Convolutional Arithmetic\nCircuits~(ConvACs)~\\citep{expressive_power}~--~instead of non-linear activations\nand average\/max pooling layers, they employ linear activations and product\npooling. ConvACs, as a theoretical framework to study ConvNets, have been the\nfocused of several works, showing, amongst other things, that many of the\nresults proven on this class are typically transferable to standard ConvNets as\nwell~\\citep{generalized_decomp,inductive_bias}. Though prior works on ConvACs\nhave only considered non-overlapping architectures, we suggest a natural\nextension to the overlapping case that we call Overlapping ConvACs. In our\nanalysis, which builds on the known relation between ConvACs and tensor\ndecompositions, we prove that overlapping architectures are in fact completely\nand exponentially more efficient than non-overlapping ones, and that their\nexpressive capacity is directly related to their \\emph{overlapping degree}.\nMoreover, we prove that having even a limited amount of overlapping is\nsufficient for attaining this exponential separation. To further ground our\ntheoretical results, we demonstrate our findings through experiments with\nstandard ConvNets on the CIFAR10 image classification dataset.\n\n\\section{Overlapping Convolutional Arithmetic Circuits}\n\\label{sec:overlapping_convac}\n\nIn this section, we introduce a class of convolutional networks referred to as\nOverlapping Convolutional Arithmetic Circuits, or Overlapping ConvACs for short.\nThis class shares the same architectural features as standard ConvNets,\nincluding some that have previously been overlooked by similar attempts to model\nConvNets through ConvACs, namely, having any number of layers and unrestricted\nreceptive fields and strides, which are crucial for studying overlapping\narchitectures. For simplicity, we will describe this model only for the case of\ninputs with two spatial dimensions, e.g. 
color images, and limiting the\nconvolutional filters to the shape of a square.\n\n\\begin{wrapfigure}{r}{0.5\\textwidth} \n\\centering\n\\includegraphics[width=\\linewidth]{figures\/gc_layer_new}\n\\caption{An illustration of a GC Layer.}\n\\label{fig:gc_layer}\n\\end{wrapfigure}\n\nWe begin by presenting a broad definition of a Generalized Convolutional~(GC)\nlayer as a fusion of a $1{\\times}1$ linear operation with a pooling\nfunction~--~this view of convolutional layers is motivated by the\nall-convolutional architecture~\\citep{Springenberg:2014tx}, which replaces all\npooling layers with convolutions with stride greater than 1. The input to a\nGC layer is a 3-order tensor (multi-dimensional array), having width\nand height equal to $H^{(\\text{in})} \\in {\\mathbb N}$ and depth $D^{(\\text{in})} \\in {\\mathbb N}$,\nalso referred to as channels, e.g. the input could be a 2D image with RGB color\nchannels. Similarly, the output of the layer has width and height equal to\n$H^{(\\text{out})} \\in {\\mathbb N}$ and $D^{(\\text{out})} \\in {\\mathbb N}$ channels, where\n$H^{(\\text{out})} = \\frac{H^{(\\text{in})}}{S}$ for $S \\in {\\mathbb N}$ that is referred\nto as the \\emph{stride}, and has the role of a sub-sampling operation. Each\nspatial location $(i,j)$ at the output of the layer corresponds to a 2D window\nslice of the input tensor of size $R \\times R \\times D^{(\\text{in})}$, extended\nthrough all the input channels, whose top-left corner is located exactly at\n$(i\\cdot S, j\\cdot S)$, where $R \\in {\\mathbb N}$ is referred to as its \\emph{local\nreceptive field}, or filter size. For simplicity, the parts of window slices\nextending beyond the boundaries have zero value. Let\n${\\mathbf y} \\in {\\mathbb R}^{D^{(\\text(out)}}$ be a vector representing the channels at some\nlocation of the output, and similarly, let\n${\\mathbf x}^{(1)},\\ldots,{\\mathbf x}^{(R^2)} \\in {\\mathbb R}^{D^{(\\text{in})}}$ be the set of vectors\nrepresenting the slice, where each vector represents the channels at its\nrespective location inside the $R \\times R$ window, then the operation of a\nGC layer is defined as follows:\n\\begin{equation*}\n {\\mathbf y} = g(W^{(1)}{\\mathbf x}^{(1)}+{\\mathbf b}^{(1)}, \\ldots, W^{(R^2)}{\\mathbf x}^{(R^2)}+{\\mathbf b}^{(R^2)}),\n\\end{equation*}\nwhere $W^{(1)},\\ldots,W^{(R^2)} \\in {\\mathbb R}^{D^{(out)} \\times D^{(in)}}$\nand ${\\mathbf b}^{(1)}, \\ldots, {\\mathbf b}^{(R^2)} \\in {\\mathbb R}^{D^{(out)}}$ are referred to as the\nweights and biases of the layer, respectively, and\n$g:{\\mathbb R}^{D^{(out)}} \\times \\cdots \\times {\\mathbb R}^{D^{(out)}} \\to {\\mathbb R}^{D^{(out)}}$ is\nsome point-wise pooling function. See fig.~\\ref{fig:gc_layer} for an\nillustration of the operation a GC layer performs.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.75\\linewidth]{figures\/gc_network}\n\\caption{An illustration of a Generalized Convolutional Network.}\n\\label{fig:gc_network}\n\\end{figure}\n\nWith the above definitions, a GC network is simply a sequence of $L$ GC layers,\nwhere for $l \\in [L] \\equiv \\{1,\\ldots,L\\}$, the $l$'th layer is specified by a\nlocal receptive field $R^{(l)}$, a stride $S^{(i)}$, $D^{(l)}$ output channels,\nparameters $\\theta^{(l)}$, and a pooling function $g^{(l)}$. For classification\ntasks, the output of the last layer of the network typically has $1{\\times}1$\nspatial dimensions, i.e. 
a vector, where each output channel\n$y \\in [Y] \\equiv [D^{(L)}]$ represents the score function of the $y$'th class,\ndenoted by ${\\mathbf h}_y$, and inference is perform by $y^* = \\arg\\max_y h_y(X)$.\nOftentimes, it is common to consider the output of the very first layer of a\nnetwork as a low-level feature representation of the input, which is motivated\nby the observation that these learned features are typically shared across\ndifferent tasks and datasets over the same domain (e.g. edge and Gabor filters\nfor natural images). Hence, we treat this layer as a separate fixed\n``zeroth'' convolutional layer referred to as the \\emph{representation} layer,\nwhere the operation of the layer can be depicted as applying a set of fixed\nfunctions $\\{f_d:{\\mathbb R}^s\\to{\\mathbb R}\\}_{d=1}^M$ to the window slices denoted by\n${\\mathbf x}_1,\\ldots,{\\mathbf x}_N \\in {\\mathbb R}^s$, i.e. the entries of the output tensor of this layer\nare given by $\\{f_d({\\mathbf x}_i)\\}_{d \\in [M], i\\in [N]}$. With these notations, the\noutput of a GC network can be viewed as a function ${\\mathbf h}_y({\\mathbf x}_1,\\ldots,{\\mathbf x}_N)$. The\nentire GC network is illustrated in fig.~\\ref{fig:gc_network}.\n\nGiven a non-linear\npoint-wise activation function $\\sigma(\\cdot)$ (e.g. ReLU), then setting all\npooling functions to average pooling followed by the activation, i.e.\n$g({\\mathbf x}^{(1)},\\ldots,{\\mathbf x}^{(R^2)})_c=\\sigma\\left(\\sum_{i=1}^{R^2} x^{(i)}_c\\right)$\nfor $c \\in [D^{(\\text{out})}]$,\ngive rise to the common all-convolutional network with $\\sigma(\\cdot)$\nactivations, which served as the initial motivation for our formulation.\nAlternatively, choosing instead a product pooling function, i.e.\n$g({\\mathbf x}^{(1)},\\ldots,{\\mathbf x}^{(R^2)})_c = \\prod_{i=1}^{R^2} x^{(i)}_c$ for\n$c \\in [D^{(\\text{out})}]$, results in an Arithmetic Circuit, i.e.\na circuit containing just product and sum operations, hence it is referred to\nas a Convolutional Arithmetic Circuit, or ConvAC. It is important to emphasize\nthat ConvACs, as originally introduced by \\citet{expressive_power}, are\ntypically described in a very different manner, through the language of tensor\ndecompositions (see app.~\\ref{app:convac} for background). Since vanilla ConvACs\ncan be seen as an alternating sequence of $1{\\times}1$ convolutions and\nnon-overlapping product pooling layers, then the two formulations coincide when\nall GC layers are non-overlapping, i.e. for all $l \\in [L]$, $R^{(l)}=S^{(l)}$.\nIf, however, some of the layers are overlapping, i.e. there exists $l \\in [L]$\nsuch that $R^{(l)} > S^{(l)}$, then our formulation through GC layers diverges,\nand give rise to what we call \\emph{Overlapping ConvACs}.\n\nGiven that our model is an extension of the ConvACs framework, it inherits many\nof its desirable attributes. First, it shares most of the same traits as modern\nConvNets, i.e. locality, sharing and pooling. Second, it can be shown to form a\nuniversal hypotheses space~\\citep{expressive_power}. Third, its underlying\noperations lend themselves to mathematical analysis based on measure theory and\ntensor analysis~\\citep{expressive_power}. Forth, through the concept of\ngeneralized tensor decompositions~\\citep{generalized_decomp}, many of the\ntheoretical results proven on ConvACs could be transferred to standard ConvNets\nwith ReLU activations. Finally, from an empirical perspective, they tend to work\nwell in many practical settings, e.g. 
for optimal classification with missing\ndata~\\citep{tmm}, and for compressed networks~\\citep{simnets2}.\n\nWhile we have just established that the non-overlapping GC Network with a\nproduct pooling function is equivalent to vanilla ConvACs, one might wonder if\nusing overlapping layers instead could diminish what these overlapping networks\ncan represent. We show that not only is it not the case, but prove the more\ngeneral claim that a network of a given architecture can realize exactly the\nsame functions as networks using smaller local receptive fields, which includes\nthe non-overlapping case.\n\\begin{proposition}\\label{prop:nothing_to_lose}\n Let $A$ and $B$ be two GC Networks with a product pooling function. If\n the architecture of $B$ can be derived from $A$ through the removal of\n layers with $1{\\times}1$ stride, or by decreasing the local receptive field\n of some of its layers, then for any choice of parameters for $B$, there\n exists a matching set of parameters for $A$, such that the function\n realized by $B$ is exactly equivalent to~$A$. Specifically, $A$ can\n realize any non-overlapping network with the same order of strides\n (excluding $1{\\times}1$ strides).\n\\end{proposition}\n\\begin{proof}[Proof sketch]\n This follows from two simple claims: (i)~a GC layer can produce an output\n equivalent to that of a GC layer with a smaller local receptive field, by\n ``zeroing'' its weights beyond the smaller local receptive field; and (ii)\n GC layers with $1{\\times}1$ receptive fields can be set such that\n their output is equal to their input, i.e. realize the identity function.\n With these claims, the local receptive fields of $A$ can be effectively\n shrank to match the local receptive fields of $B$, and any additional layers\n of $A$ with stride $1{\\times}1$ could be set such that they are realizing\n the identity mapping, effectively ``removing'' them from $A$. See\n app.~\\ref{app:proofs:nothing_to_lose} for a complete proof.\n\\end{proof}\nProposition~\\ref{prop:nothing_to_lose} essentially means that overlapping\narchitectures are just as expressive as non-overlapping ones of similar\nstructure, i.e. same order of non-unit strides. As we recall, this satisfies the\nfirst condition of the efficiency property introduced in sec.~\\ref{sec:intro},\nand does so regardless if we measure the size of a network as the number of\nparameters, or the number of ``neurons''\\footnote{We take here the broader\ndefinition of a ``neuron'', as any one of the scalar values comprising the\noutput array of an arbitrary layer in a network. In the case the output array is\nof width and height equal to $H$ and $C$ channels, then the number of such\n``neurons'' for that layer is $H^2 \\cdot C$.}. 
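To make the above construction concrete, the following minimal NumPy-style sketch (the function and variable names are ours and purely illustrative) implements a single GC layer for an arbitrary point-wise pooling function $g$, including the product pooling that yields a ConvAC:\n\\begin{verbatim}\nimport numpy as np\n\ndef gc_layer(X, W, b, R, S, g):\n    # X: input of shape (H_in, H_in, D_in); W: (R*R, D_out, D_in); b: (R*R, D_out).\n    # g maps a list of R*R vectors in R^{D_out} to one vector in R^{D_out}.\n    # Window slices extending beyond the boundary are zero-padded.\n    H_in, _, D_in = X.shape\n    D_out = W.shape[1]\n    H_out = H_in // S\n    Xp = np.zeros((H_in + R, H_in + R, D_in))\n    Xp[:H_in, :H_in, :] = X\n    Y = np.empty((H_out, H_out, D_out))\n    for i in range(H_out):\n        for j in range(H_out):\n            win = Xp[i*S:i*S + R, j*S:j*S + R, :].reshape(R*R, D_in)\n            pre = [W[k] @ win[k] + b[k] for k in range(R*R)]\n            Y[i, j] = g(pre)\n    return Y\n\nproduct_pool = lambda vs: np.prod(np.stack(vs), axis=0)   # ConvAC pooling\nrelu_sum_pool = lambda vs: np.maximum(np.sum(np.stack(vs), axis=0), 0.0)\n\\end{verbatim}\nStacking such layers with \\texttt{product\\_pool} reproduces the vanilla non-overlapping ConvAC whenever $R^{(l)}=S^{(l)}$ for every layer, and an overlapping ConvAC otherwise.\n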
In the following section we will\ncover the preliminaries required to show that overlapping networks actually lead\nto an increase in expressive capacity, which under some settings results in an\nexponential gain, proving that the second condition of expressive efficiency\nholds as well.\n\n\\section{Analyzing Expressive Efficiency Through Grid Tensors}\n\\label{sec:efficiency_analysis}\n\nIn this section we describe our methods for analyzing the expressive efficiency\nof overlapping ConvACs that lay the foundation for stating our theorems.\nA minimal background on tensor analysis required to follow our work can be found\nin sec.~\\ref{sec:efficiency_analysis:pre}, followed by presenting our\nmethods in sec.~\\ref{sec:efficiency_analysis:bounds}.\n\n\\subsection{Preliminaries}\n\\label{sec:efficiency_analysis:pre}\n\nIn this sub-section we cover the minimal background on tensors analysis required\nto understand our analysis. A tensor ${\\mathcal A} \\in{\\mathbb R}^{M_1 \\otimes \\cdots \\otimes M_N}$\nof order $N$ and dimension $M_i$ in each mode $i \\in [N] \\equiv \\{1,\\ldots,N\\}$,\nis a multi-dimensional array with entries ${\\mathcal A}_{d_1,\\ldots,d_N} \\in {\\mathbb R}$ for all\n$i \\in [N]$ and $d_i \\in [M_i]$. For simplicity, henceforth we assume that all\ndimensions are equal, i.e. $M \\equiv M_1 = \\ldots = M_N$. One of the central\nconcepts in tensor analysis is that of \\emph{tensor matricization}, i.e.\nrearranging its entries to the shape of a matrix. Let\n$P \\mathbin{\\mathaccent\\cdot\\cup} Q=[N]$ be a disjoint partition of its indices, such that\n$P = \\{p_1,\\ldots,p_{|P|}\\}$ with $p_1< \\ldots< p_{|P|}$,\nand $Q = \\{q_1, \\ldots, q_{|Q|}\\}$ with $q_1 < \\ldots < q_{|Q|}$. The\nmatricization of ${\\mathcal A}$ with respect to the partition\n$P \\mathbin{\\mathaccent\\cdot\\cup} Q$, denoted by $\\mat{{\\mathcal A}}_{P,Q}$, is the $M^{|P|}$-by-$M^{|Q|}$ matrix\nholding the entries of ${\\mathcal A}$, such that for all $i \\in [N]$ and $d_i \\in [M]$ the \nentry $A_{d_1, \\ldots, d_N}$ is placed in row index\n${1 + \\sum_{t=1}^{|P|} (d_{p_t} - 1) M^{|P| - t}}$ and column index\n${1 + \\sum_{t=1}^{|Q|} (d_{q_t} - 1) M^{|Q| - t}}$. Lastly, the tensors we\nstudy in this article originate by examining the values of some given function\nat a set of predefined points and arranging them in a tensor referred to as\nthe \\emph{grid tensor} of the function. Formally, let\n$f:{\\mathbb R}^s \\times \\ldots \\times {\\mathbb R}^s \\to {\\mathbb R}$ be a function, and let\n$\\{{\\mathbf x}^{(1)}, \\ldots, {\\mathbf x}^{(M)} \\in {\\mathbb R}^s\\}$ be a set of vectors called\n\\emph{template vectors}, then the grid tensor of $f$ is denoted by\n${\\mathcal A}(f) \\in {\\mathbb R}^{M \\otimes \\ldots \\otimes M}$ and defined by\n${\\mathcal A}(f)_{d_1,\\ldots,d_N} = f({\\mathbf x}^{(d_1)}, \\ldots, {\\mathbf x}^{(d_N)})$ for all\n$d_1,\\ldots,d_N \\in [M]$.\n\n\\subsection{Bounding the Size of Networks via Grid Tensors}\n\\label{sec:efficiency_analysis:bounds}\n\nWe begin with a discussion on how to have a well-defined measure of efficiency.\nWe wish to compare the efficiency of non-overlapping ConvACs to overlapping\nConvACs, for a fixed set of $M$ representation functions (see\nsec.~\\ref{sec:overlapping_convac} for definitions). 
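For concreteness, the matricization and grid-tensor constructions of sec.~\\ref{sec:efficiency_analysis:pre} can be computed directly; the following short NumPy-style sketch (0-based indices, illustrative names) is the reference we have in mind for the comparisons below:\n\\begin{verbatim}\nimport numpy as np\n\ndef matricize(A, P, Q):\n    # [A]_{P,Q}: P and Q are sorted, disjoint lists of (0-based) modes covering\n    # all modes of A; with row-major reshaping this reproduces the row/column\n    # index formula of the definition above.\n    M = A.shape[0]\n    return np.transpose(A, axes=list(P) + list(Q)).reshape(M**len(P), M**len(Q))\n\ndef grid_tensor(f, templates, N):\n    # A(f)[d1,...,dN] = f(x^(d1), ..., x^(dN)) over M template vectors.\n    M = len(templates)\n    A = np.empty((M,) * N)\n    for idx in np.ndindex(*A.shape):\n        A[idx] = f(*[templates[d] for d in idx])\n    return A\n\\end{verbatim}\nThe rank of a matricized grid tensor can then be read off with \\texttt{numpy.linalg.matrix\\_rank}.\n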
While all functions\nrealizable by non-overlapping ConvACs with shared representation functions lay\nin the same function subspace (see \\citet{expressive_power}), this is not the\ncase for overlapping ConvACs, which can realize additional functions outside the\nsub-space induced by non-overlapping ConvACs. We cannot therefore compare both\narchitectures directly, and need to compare them through an auxiliary objective.\nFollowing the work of \\citet{generalized_decomp}, we instead compare\narchitectures through the concept of grid tensors, and specifically, the grid\ntensor defined by the output of a ConvAC, i.e. the tensor ${\\mathcal A}({\\mathbf h})$ for\n${\\mathbf h}({\\mathbf x}_1,\\ldots,{\\mathbf x}_N)$. Unlike with the ill-defined nature of directly comparing\nthe functions of realized by ConvACs, \\citet{generalized_decomp} proved that\nassuming the fixed representation functions are linearly independent, then there\nexists template vectors ${\\mathbf x}^{(1)},\\ldots,{\\mathbf x}^{(M)}$, for which any\nnon-overlapping ConvAC architecture could represent all possible grid tensors\nover these templates, given sufficient number of channels at each layer. More\nspecifically, if $F_{ij} = f_i({\\mathbf x}^{(j)})$, then these template vector are\nchosen such that $F$ is non-singular. Thus, once we fix a set of linearly\nindependent representation functions, we can compare different ConvACs, whether\noverlapping or not, on the minimal size required for them to induce the same\ngrid tensor, while knowing such a finite number always exists.\n\nOne straightforward direction for separating between the expressive efficiency\nof two network architectures A and B is by examining the ranks of their\nrespective matricized grid tensors. Specifically, Let ${\\mathcal A}({\\mathbf h}^{(A)})$ and\n${\\mathcal A}({\\mathbf h}^{(B)})$ denote the grid tensors of A and B, respectively, and let $(P,Q)$\nbe a partition of $[N]$, then we wish to find an upper-bound on the rank of\n$\\mat{{\\mathcal A}({\\mathbf h}^{(A)})}_{P,Q}$ as a function of its size on one hand, while showing\non the other hand that $\\rank{\\mat{{\\mathcal A}({\\mathbf h}^{(B)})}_{P,Q}}$ can be significantly\ngreater. One benefit of studying efficiency through a matrix rank is that not\nonly we attain separation bounds for exact realization, but also immediately\ngain access to approximation bounds by examining the singular values of the\nmatricized grid tensors. This brings us to the following lemma, which connects\nupper-bounds that were previously found for non-overlapping\nConvACs~\\citep{inductive_bias}, with the grid tensors induced by them (see\napp.~\\ref{app:proofs:preliminaries} for proof):\n\\begin{lemma} \\label{lemma:mat_rank_bound}\n Let $h_y({\\mathbf x}_1,\\ldots,{\\mathbf x}_N)$ be a score function of a non-overlapping\n ConvAC with a fixed set of $M$ linearly independent and continuous\n representation functions, and $L$ GC layers. Let $(P,Q)$ be a partition\n dividing the spatial dimensions of the output of the representation layer\n into two equal parts, either along the horizontal or vertical axis, referred\n to as the ``left-right'' and ``top-bottom'' partitions, respectively. 
Then,\n for any template vectors such that $F$ is non-singular and for any choice of\n the parameters of the network, it holds that\n $\\rank{\\mat{{\\mathcal A}({\\mathbf h}_y)}_{P,Q}} \\leq D^{(L-1)}$.\n\\end{lemma}\n\nLemma~\\ref{lemma:mat_rank_bound} essentially means that it is sufficient to show\nthat overlapping ConvACs can attain ranks super-polynomial in their size to\nprove they are exponentially efficient with respect to non-overlapping ConvACs.\nIn the next section we analyze how the overlapping degree is related to the\nrank, and under what cases it leads to an exponentially large rank.\n\n\\section{The Expressive Efficiency of Overlapping Architectures}\n\\label{sec:overlaps_efficiency}\n\nIn this section we analyze the expressive efficiency of overlapping\narchitectures. We begin by defining our measures\nof the overlapping degree that will used in our claims, followed by presenting\nour main results in sec.~\\ref{sec:main_results}. For the sake of\nbrevity, an additional set of results, in light of the recent work by\n\\citet{inductive_bias} on ``Pooling Geometry'', is deferred to\napp.~\\ref{app:pooling_geometry}.\n\n\\subsection{The Overlapping Degree of a Network}\n\\label{sec:overlapping_degree}\n\n\\begin{wrapfigure}{r}{0.5\\textwidth} \n\\vspace{-3mm}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/total_properties}\n\\caption{Illustrating the total receptive field and total stride\n attributes for the $L$'th layer, which could be seen as the projected\n receptive field and stride with respect to the input layer. Together,\n they capture the overlapping degree of a network.}\n\\label{fig:total_properties}\n\\vspace{-3mm}\n\\end{wrapfigure}\n\n\nTo analyze the efficiency of overlapping architectures, we will first formulate\nmore rigorously the measurement of the overlapping degree of a given\narchitecture. As mentioned in sec.~\\ref{sec:intro}, we do so by defining the\nconcepts of the \\emph{total receptive field} and \\emph{total stride} of a given\nlayer $l\\in[L]$, denoted by $T_R^{(l)}$ and $T_S^{(l)}$, respectively.\nBoth measurements could simply be thought of as projecting the accumulated local\nreceptive fields (or strides) to the the first layer, as illustrated in\nfig.~\\ref{fig:total_properties}, which represent a type of global statistics\nof the architecture. However, note that proposition~\\ref{prop:nothing_to_lose}\nentails that a given architecture could have a smaller \\emph{effective} total\nreceptive field, for some settings of its parameters. This leads us to define\nthe $\\alpha$-minimal total receptive field, for any $\\alpha\\in{\\mathbb R}_+$, as the\nsmallest effective total receptive field still larger than $\\alpha$, which we\ndenote by $T_R^{(l,\\alpha)}$. 
The exact definitions of the above concepts\nare formulated as follows:\n\\begin{align}\n \\label{eq:total_stride}\n T_S^{(l)} \\equiv T_S^{(l)}(S^{(1)},\\ldots,S^{(l)})\n &\\equiv \\begin{cases}\n \\prod_{i=1}^l S^{(i)} & l \\geq 1\\\\\n 1 & l = 0\n \\end{cases} \\\\\n \\label{eq:total_receptive}\n T_R^{(l)} \\equiv T_R^{(l)}(R^{(1)},S^{(1)},\\ldots,R^{(l)},S^{(l)})\n &\\equiv R^{(l)} \\cdot T_S^{(l-1)} +\n \\sum\\nolimits_{k=1}^{l-1}\\left(R^{(k)}-S^{(k)}\\right)\\cdot T_S^{(k-1)} \\\\\n \\label{eq:minimal_receptive}\n T_R^{(l,\\alpha)}\n \\equiv T_R^{(l,\\alpha)}(R^{(1)},S^{(1)},\\ldots,R^{(l)},S^{(l)})\n &\\equiv \\smashoperator[r]{\\argmin_{\\substack{\n \\forall i\\in[l], S^{(i)} \\leq t_i \\leq R^{(i)} \\\\\n T_R^{(l)}(t_1,S^{(1)},\\ldots,t_l,S^{(l)}) > \\alpha}}}\n \\quad\\,\\,T_R^{(l)}(t_1,S^{(1)},\\ldots,t_l,S^{(l)})\n\\end{align}\nwhere we omitted the arguments of $T_S^{(l-1)}$ and $T_S^{(k-1)}$ for\nthe sake of visual compactness.\n\nNotice that for non-overlapping networks the total receptive field always\nequals the total stride, and that only at the end of the network, after the\nspatial dimension collapses to $1{\\times}1$, does the the total receptive field\ngrow to encompass the entire size of the representation layer. For overlapping\nnetworks this is not the case, and the total receptive field could grow much\nfaster. Intuitively, this means that values in regions of the input layer that\nare far apart would be combined by non-overlapping networks only near the last\nlayers of such networks, and thus non-overlapping networks are effectively\nshallow in comparison to overlapping networks. Base on this intuition, in the\nnext section we analyze networks with respect to the point at which their total\nreceptive field is large enough.\n\n\\subsection{Main Results}\\label{sec:main_results}\n\nWith all the preliminaries in place, we are ready to present our main result:\n\\begin{theorem}\\label{thm:main_overlaps}\n Assume a ConvAC with a fixed representation layer having $M$ output channels\n and both width and height equal to $H$, followed by $L$ GC layers, where the\n $l$'th layer has a local receptive field $R^{(l)}$, a stride $S^{(l)}$, and\n $D^{(l)}$ output channels. Let $K \\in [L]$ be a layer with a total receptive\n field $T_R^{(K)} \\equiv T_R^{(K)}(R^{(1)},S^{(1)},\n \\ldots,R^{(K)},S^{(K)})$, such that $T_R^{(K)}>\\frac{H}{2}$.\n Then, for any choice\n of parameters, except a null set (with respect to the Lebesgue\n measure), and for any template vectors such that $F$ is non-singular, the\n following equality holds:\n \\begin{align}\\label{eq:lower_bound}\n \\rank{\\mat{{\\mathcal A}({\\mathbf h}_y)}_{P,Q}} \\geq D^{\\left\\lfloor \\frac{H - T_R^{(K, \\left\\lfloor \\nicefrac{H}{2} \\right\\rfloor)}}{T_S^{(K)}} +1 \\right\\rfloor \\cdot \\left\\lceil \\frac{H}{T_S^{(K)}} \\right\\rceil}\n \\end{align}\n where $(P,Q)$ is either the ``left-right'' or the ``top-bottom'' partitions\n and ${D \\equiv \\min\\{M,D^{(K)},\\frac{1}{2} \\min_{1\\leq l\\leq K} D^{(l)}\\}}$.\n\\end{theorem}\n\\begin{proof}[Proof sketch]\n Because the entries of the matricized grid tensors are polynomials in the\n parameters, then according to a lemma by \\citet{tmm}, if there is a single\n example that attains the above lower-bound on the rank, then it occurs\n almost everywhere with respect to the Lebesgue measure on the Euclidean\n space of the parameters.\n\n Given the last remark, the central part of our proof is simply the\n construction of such an example. 
First we find a set of parameters for the\n simpler case where the first GC layer is greater than a quarter of the\n input, satisfying the conditions of the theorem. The motivation behind the\n specific construction is the pairing of indices from each side of\n the partition, such that they are both in the same local receptive field,\n and designing the filters such that the output of each local application of\n them defines a mostly diagonal matrix of rank $D$, with respect to\n these two indices. The rest of the parameters are chosen such that the\n output of the entire network results in a product of the entries of these\n matrices. Under matricization, this results in a matrix who is\n equivalent\\footnote{Two matrices are equivalent if one could be converted\n to the other by elementary row or column operations.} to a\n Kronecker product of mostly diagonal matrices. Thus, the matricization rank\n is equal to the product of the ranks of these matrices, which results in the\n exponential form of eq.~\\ref{eq:lower_bound}.\n Finally, we extend the above example to the general case, by realizing the\n operation of the first layer of the above example through multiple layers\n with small local receptive fields. See app.~\\ref{app:proofs:preliminaries}\n for the definitions and lemmas we rely on, and see\n app.~\\ref{app:proofs:main_overlaps} for a complete proof.\n\\end{proof}\nCombined with Lemma~\\ref{lemma:mat_rank_bound}, it results in the following\ncorollary:\n\\begin{corollary}\n Under the same setting as theorem~\\ref{thm:main_overlaps}, and for all\n choices of parameters of an overlapping ConvAC, except a negligible set,\n any non-overlapping ConvAC that realizes (or approximates) the same grid\n tensor must be of size at least:\n \\begin{equation*}\n D^{\\left\\lfloor \\frac{H - T_R^{(K, \\left\\lfloor \\nicefrac{H}{2} \\right\\rfloor)}}{T_S^{(K)}} +1 \\right\\rfloor \\cdot \\left\\lceil \\frac{H}{T_S^{(K)}} \\right\\rceil} .\n \\end{equation*}\n\\end{corollary}\n\nWhile the complexity of the generic lower-bound above might seem\nincomprehensible at first, its generality gives us the tools to analyze\npractically any kind of feed-forward architecture. As an example, we can analyze\nthe lower bound for the well known GoogLeNet\narchitecture~\\citep{Szegedy:2014tb}, for which the lower bound equals $32^{98}$,\nmaking it clear that using a non-overlapping architecture for this case is\ninfeasible. Next, we will focus on specific cases for which we can derive more\nintelligible lower bounds.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\linewidth]{figures\/start_with_big_conv}\n\\caption{A network architectures beginning with large local receptive fields\n greater than $\\nicefrac{N}{2}$ and at least $M$ output channels. According\n to theorem~\\ref{thm:main_overlaps}, for almost all choice of parameters\n we obtain a function that cannot be approximated by a non-overlapping\n architecture, if the number of channels in its next to last layer is\n less than $M^{\\frac{H^2}{2}}$.}\n\\label{fig:start_with_big_conv}\n\\end{figure}\n\nAccording to theorem~\\ref{thm:main_overlaps}, the lower bound depends on the\nfirst layer for which its total receptive field is greater than a quarter of the \ninput. 
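For reference, the total stride and total receptive field of eq.~\\ref{eq:total_stride} and eq.~\\ref{eq:total_receptive}, and hence the layer entering the condition of theorem~\\ref{thm:main_overlaps}, can be computed mechanically. A minimal Python sketch is given below (names are illustrative; the $\\alpha$-minimal receptive field of eq.~\\ref{eq:minimal_receptive}, which involves a further minimization, is not included):\n\\begin{verbatim}\ndef total_stride(strides, l):\n    # T_S^(l): product of the strides of layers 1..l (T_S^(0) = 1).\n    out = 1\n    for s in strides[:l]:\n        out *= s\n    return out\n\ndef total_receptive_field(rf, strides, l):\n    # T_R^(l) = R^(l) * T_S^(l-1) + sum_{k<l} (R^(k) - S^(k)) * T_S^(k-1).\n    tr = rf[l-1] * total_stride(strides, l-1)\n    for k in range(1, l):\n        tr += (rf[k-1] - strides[k-1]) * total_stride(strides, k-1)\n    return tr\n\ndef first_layer_covering_half(rf, strides, H):\n    # First layer K with T_R^(K) > H/2, i.e. covering more than a quarter\n    # of the H x H input as required by the theorem; None if no such layer.\n    for l in range(1, len(rf) + 1):\n        if total_receptive_field(rf, strides, l) > H / 2:\n            return l\n    return None\n\\end{verbatim}\n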
As mentioned in the previous section, for non-overlapping networks this\nonly happens after the spatial dimension collapses to $1{\\times}1$, which\nentails that both the total receptive field and total stride would be equal to\nthe width $H$ of the representation layer, and substituting this values in\neq.~\\ref{eq:lower_bound} results simply in $D$~--~trivially meaning that to\nrealize one non-overlapping network by another non-overlapping network, the next\nto last layer must have at least half the channels of the target network.\n\nOn the other extreme, we can examine the case where the first GC layer has a\nlocal receptive field $R$ greater than a quarter of its input, i.e.\n$R > \\nicefrac{H}{2}$. Since the layers following the first GC layer do not\naffect the lower bound in this case, it applies to any arbitrary sequence of\nlayers as illustrated in fig.~\\ref{fig:start_with_big_conv}. For simplicity\nwe will also assume that the stride $S$ is less than $\\nicefrac{H}{2}$, and that\n$\\frac{H}{2}$ is evenly divided by $S$. In this case the $\\frac{H}{2}$-minimal\nreceptive field equals to $\\frac{H}{2} + 1$, and thus the lower bound results in\n$D^{\\frac{H^2}{2S}}$. Consider the case of $D = M$ and $S=1$, then a\nnon-overlapping architecture that satisfies this lower bound is of the order of\nmagnitude at which it could already represent any possible grid tensor. This\ndemonstrate our point from the introduction, that through a a polynomial change\nin the architecture, i.e. increasing the receptive field, we get an exponential\nincrease in expressivity.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figures\/alt_conv_pool_net}\n\\caption{The common network architecture of alternating $B{\\times}B$ ``conv''\n and $2{\\times}2$ ``pooling'' layers. If $B \\leq \\nicefrac{H}{5}{+}1$ and\n $D^{(l)} \\geq 2M$ for all $1 \\leq l < L$, then the lower bound of\n theorem~\\ref{thm:main_overlaps} for this network results in\n $M^{\\frac{(2B - 1)^2}{4}}$.\n}\n\\label{fig:alt_conv_pool_net}\n\\end{figure}\n\nThough the last example already demonstrates that a polynomially sized\noverlapping architecture could lead to an exponential separation, in practice,\nemploying such large convolutions is very resource intensive. The common best\npractice is to use multiple small local receptive fields of size\n$B \\times B$, where the typical values are $B=3$ or $B=5$, separated by a\n$2 \\times 2$ ``pooling'' layers, i.e. layers with both stride and local\nreceptive field equal to $2 \\times 2$. For simplicity, we assume that $H = 2^L$\nfor some $L \\in {\\mathbb N}$. See fig.~\\ref{fig:alt_conv_pool_net} for an illustration of\nsuch a network. Analyzing the above network with theorem~\\ref{thm:main_overlaps}\nresults in the following proposition:\n\\begin{proposition}\\label{prop:common_case}\n Consider a network comprising a sequence of GC blocks, each block begins\n with a layer whose local receptive field is $B {\\times} B$ and its stride\n $1{\\times}1$, followed by a layer with local receptive field $2 {\\times} 2$\n and stride $2 {\\times} 2$, where the output channels of all layers are at\n least $2M$, and the spatial dimension of the representation layer is\n $H {\\times} H$ for $H{=}2^L$. 
Then, the lower bound describe by\n eq.~\\ref{eq:lower_bound} for the above network is greater than or equal to:\n \\begin{align*}\n \\tau(B,H) &\\equiv M^{\\frac{(2B-1)^2}{2} \\cdot \\left(1+\\frac{2B-2}{H}\\right)^{-2}} \n = M^{\\frac{H^2}{2}\\cdot \\left(1 + \\frac{H-1}{2B-1} \\right)^{-2}},\n \\end{align*}\n whose limits are $\\lim_{B\\to\\infty} \\tau(B,H) = M^\\frac{H^2}{2}$ and\n $\\lim_{H\\to\\infty} \\tau(B,H) = M^\\frac{(2B-1)^2}{2}$. Finally, assuming\n $B \\leq \\frac{H}{5} + 1$, then $\\tau(B,H) \\geq M^{\\frac{(2B-1)^2}{4}}$.\n\\end{proposition}\n\\begin{proof}[Proof sketch]\n We first find a closed-form expression for the total receptive field and\n stride of each of the $B{\\times}B$ layers in the given network. We\n then show that for layers whose total receptive field is greater than\n $\\frac{H}{2}$, its $\\alpha$-minimal total receptive field, for\n $\\alpha{=}\\frac{H}{2}$, is equal to $\\frac{H}{2}{+}1$. We then use the above\n to find the first layer who satisfies the conditions of\n theorem~\\ref{thm:main_overlaps}, and then use our closed-forms expressions\n to simplify the general lower bound for this case. See\n app.~\\ref{app:proofs:common_case} for a complete proof.\n\\end{proof}\nIn particular, for the typical values of $M=64$, $B=5$, and $H \\geq 20$, the\nlower bound is at least $64^{20}$, which demonstrates that even having a small\namount of overlapping already leads to an exponential separation from the\nnon-overlapping case. When $B$ grows in size, this bound approaches the earlier\nresult we have shown for large local receptive fields encompassing more than a\nquarter of the image. When $H$ grows in size, the lower bound is dominated\nstrictly by the local receptive fields. Also notice that based on\nproposition~\\ref{prop:common_case}, we could also derive a respective lower\nbound for a network following VGG style architecture~\\citep{Simonyan:2014ws},\nwhere instead of a single convolutional layer before every ``pooling'' layer, we\nhave $K$ layers, each with a local receptive field of $C \\times C$. Under this\ncase, it is trivial to show that the bound from\nproposition~\\ref{prop:common_case} holds for $B = K \\cdot (C-1) + 1$, and under\nthe typical values of $C=3$ and $K=2$ it once again results in a lower bound of\nat least $64^{20}$.\n\n\\section{Experiments}\\label{sec:exp}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\linewidth]{figures\/exp_channels}~~~~\\includegraphics[width=0.49\\linewidth]{figures\/exp_params}\n\n\\includegraphics[width=0.49\\linewidth]{figures\/exp_color_channels}~~~~\\includegraphics[width=0.49\\linewidth]{figures\/exp_color_params}\n\n\\caption{Training accuracies of standard ConvNets on CIFAR-10 with data\n augmentations, where the results of spatial augmentations presented at the\n top row, and color augmentations at the bottom row. Each network follows the\n architecture of proposition~\\ref{prop:common_case}, with with receptive\n field $B$ and using the same number of channels across all layers, as\n specified by the horizontal axis of left plot. We plot the same results with\n respect to the total number of parameters in the right plot.}\n\\label{fig:exp}\n\\end{figure}\n\nIn this section we show that the theoretical results of\nsec.~\\ref{sec:main_results} indeed hold in practice. 
In other words, there\nexists tasks that require the highly expressive power of overlapping\narchitectures, on which non-overlapping architectures would have to grow\nby an exponential factor to achieve the same level of performance. We\ndemonstrate this phenomenon on standard ConvNets with ReLU activations that\nfollow the same architecture that was outlined in\nproposition~\\ref{prop:common_case}, while varying the number of channels and the\nsize of the receptive field of the $B{\\times}B$ ``conv'' layers. The only change\nwe made, was to replace the $2{\\times}2$-``pooling'' layers of the convolutional\ntype, with the standard $2{\\times}2$-max-pooling layers, and using the same\nnumber of channels across all layers. This was done for the purpose of having\nall the learned parameters located only at the (possibly) overlapping layers.\nMore specifically, the network has 5 blocks, each starting with a $B{\\times}B$\nconvolution with $C$ channels, stride $1 {\\times} 1$, and ReLU activation, and\nthen followed by $2 {\\times} 2$ max-pooling layer. After the fifth\n``conv-pool'', there is a final dense layer with 10 outputs and softmax\nactivations.\n\nWe train each of these networks for classification over the\nCIFAR-10 dataset, with two types of data augmentation schemes: (i) spatial\naugmentations, i.e. randomly translating (up to 3 pixels in each direction) and\nhorizontally flipping each image, and (ii) color augmentations following\n\\citet{Dosovitskiy:2014tu}, i.e. randomly adding a constant shift (at most\n$\\pm 0.3$) to the hue, saturation, and luminance, for each attribute separately,\nand in addition randomly sampling a multiplier (in the range $[0.5, 1.5]$) just\nto the saturation and luminance. Though typically data augmentation is only used\nfor the purpose of regularization, we employ it for the sole purpose of raising\nthe hardness of the regular CIFAR-10 dataset, as even small networks can already\noverfit and effectively memorize its small dataset. We separately test both\nthe spatial and color augmentation schemes to emphasize that our empirical\nresults cannot be explained simply by spatial-invariance type arguments.\nFinally, the training itself is carried out for 300 epochs with\nADAM~\\citep{Kingma:2014us} using its standard hyper-parameters, at which point\nthe loss of the considered networks have stopped decreasing. We report the\ntraining accuracy over the augmented dataset in fig.~\\ref{fig:exp}, where for\neach value of the receptive field $B$, we plot its respective training\naccuracies for variable number of channels $C$. The source code for reproducing\nthe above experiments and plots can be found at \\githuburl{OverlapsAndExpressiveness}.\n\nIt is quite apparent that the greater $B$ is chosen, the less channels are\nrequired to achieve the same accuracy. Moreover, for the non-overlapping case of\n$B{=}1$, more than 2048 channels are required to reach the same performance of\nnetworks with $B {>} 2$ and just 64 channels under the spatial\naugmentations~--~which means effectively exponentially more channels were\nrequired. Even more so, under the color augmentations, we were not able to train\nnon-overlapping networks to reach even the smallest overlapping network\n($B=2$ and $C=16$). 
In terms of total number of parameters, there is a clear\nseparation between the overlapping and the non-overlapping types, and we once\nagain see more than an order of magnitude increase in the number of parameters\nbetween an overlapping and non-overlapping architectures that achieve similar\ntraining accuracy. As a somewhat surprising result, though based only on our\nlimited experiments, it appears that for the same number of parameters, all\noverlapping networks attain about the same training accuracy, suggesting perhaps\nthat having the smallest amount of overlapping already attain all the benefits\noverlapping provides, and that increasing it further does not affect the\nperformance in terms of expressivity.\n\nAs final remark, we also wish to acknowledge the limitations of drawing\nconclusions strictly from empirical experiments, as there could be alternative\nexplanations to these observations, e.g. the effects overlapping has on the\noptimization process. Nevertheless, our theoretical results suggests this is\nless likely the case.\n\n\\section{Discussion} \\label{sec:discussion}\n\nThe common belief amongst deep learning researchers has been that depth is one\nof the key factors in the success of deep networks~--~a belief formalized\nthrough the depth efficiency conjecture. Nevertheless, depth is\none of many attributes specifying the architecture of deep networks, and each\ncould potentially be just as important. In this paper, we studied the effect\noverlapping receptive fields have on the expressivity of the network, and found\nthat having them, and more broadly denser connectivity, results in an\nexponential gain in the expressivity that is orthogonal to the depth.\n\nOur analysis sheds light on many trends and practices in contemporary design\nof neural networks. Previous studies have shown that non-overlapping\narchitectures are already universal~\\citep{expressive_power}, and even have\ncertain advantages in terms of optimization~\\citep{Brutzkus:2017wp}, and yet,\nreal-world usage of non-overlapping networks is scarce. Though there could be\nmultiple factors involved, our results clearly suggest that the main culprit is\nthat non-overlapping networks are significantly handicapped in terms of\nexpressivity compared to overlapping ones, explaining why the former are so\nrarely used. Additionally, when examining the networks that are commonly used in\npractice, where the majority of the layers are of the convolutional type with\nvery small receptive field, and only few if any fully-connected\nlayers~\\citep{Simonyan:2014ws,Springenberg:2014tx,He:2016ib},\nwe find that though they are obviously overlapping, their overlapping degree is\nrather low. We showed that while denser connectivity can increase\nthe expressive capacity, even in the most common types of modern architectures\nalready exhibit exponential increase in expressivity, without relying on\nfully-connected layers. This could partly explain that somewhat surprising\nobservation, as it is probable that such networks are sufficiently expressive\nfor most practical needs simply because they are already in the exponential\nregime of expressivity. 
Indeed, our experiments seem to suggest the same: we saw that further increases in the overlapping degree beyond the most limited overlapping case seem to have insignificant effects on performance~--~a conjecture not quite proven by our current work, but one we wish to investigate in the future.\n\nThere are relatively few other works which have studied the role of receptive fields in neural networks. Several empirical works~\\citep{Li:2005kc,Coates:2011wo,Krizhevsky:2012wl} have demonstrated similar behavior, showing that the classification accuracy of networks can sharply decline as the degree of overlaps is decreased, while also showing that gains from using very large local receptive fields are insignificant compared to the increase in computational resources. Other works studying the receptive fields of neural networks have mainly focused on how to learn them from the data~\\citep{Coates:2011tl,Jia:2012uz}. While our analysis has no direct implications for those specific works, it does lay the groundwork for potentially guiding architecture design, by quantifying the expressivity of any given architecture. Lastly, \\citet{Luo:2016vj} studied the \\emph{effective total receptive field} of different layers, a property of a similar nature to our total receptive field, where they measure the degree to which each input pixel affects the output of each activation. They show that under common random initialization of the weights, the effective total receptive field has a Gaussian shape and is much smaller than the maximal total receptive field. They additionally demonstrate that during training the effective total receptive field grows in size, and suggest that weights should be initialized such that the initial effective receptive field is large. Their results strengthen our theory, by showing that trained networks tend to maximize their effective receptive field, taking full advantage of their expressive capacity.\n\nTo conclude, we have shown both theoretically and empirically that overlapping architectures have an expressive advantage compared to non-overlapping ones. Our theoretical analysis is grounded in the framework of ConvACs, which we extend to overlapping configurations. Though our proofs are limited to this specific case, previous studies~\\citep{generalized_decomp} have already shown that such results could be transferred to standard ConvNets as well, using most of the same mathematical machinery. 
While adapting our analysis accordingly is left for\nfuture work, our experiments on standard ConvNets~(see sec.~\\ref{sec:exp})\nalready suggest that the core of our results should hold in this case as well.\nFinally, an interesting outcome of moving from non-overlapping architectures to\noverlapping ones is that the depth of a network is no longer capped at\n$\\log_2 \\left(\\textit{input size} \\right)$, as has been the case in the models\ninvestigated by \\citet{expressive_power}~--~a property we will examine in future\nworks\n\n\\newcommand{This work is supported by Intel grant ICRI-CI \\#9-2012-6133, by ISF Center grant 1790\/12 and by the European Research Council (TheoryDL project).}{This work is supported by Intel grant ICRI-CI \\#9-2012-6133, by ISF Center grant 1790\/12 and by the European Research Council (TheoryDL project).}\n\\ifdefined\\COLT\n\t\\acks{This work is supported by Intel grant ICRI-CI \\#9-2012-6133, by ISF Center grant 1790\/12 and by the European Research Council (TheoryDL project).}\n\\else\n\t\\ifdefined\n\t\t\\subsubsection*{Acknowledgments}\n\t\tThis work is supported by Intel grant ICRI-CI \\#9-2012-6133, by ISF Center grant 1790\/12 and by the European Research Council (TheoryDL project).\n\t\\fi\n\\fi\n\n\\small{\n\\ifdefined\n\\bibliographystyle{plainnat}\n\\fi\n\\ifdefined\\NIPS\n\\bibliographystyle{plainnat}\n\\fi\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\nThe launch of the {\\em Submillimeter Wave Astronomy Satellite} \\citep[SWAS;][]{2000ApJ...539L..87M} made it possible to observe emission from the ground-state transition ($\\mathrm{1_{10}}$$\\rightarrow$$\\mathrm{1_{01}}$) of ortho-$\\mathrm{H_2^{16}O}$ and its isotopomer ortho-$\\mathrm{H_2^{18}O}$, to determine the abundance of water vapour, i.e., column density $\\mathrm{N_{o-H_2O}}$, in various regions in the interstellar medium, e.g., dense and diffuse interstellar gas clouds, circumstellar envelopes, planetary atmospheres, and comets \\citep[e.g.,][]{2000ApJ...539L..87M, 2000ApJ...539L.101S, 2000ApJ...539L..93S, 1995Icar..117..162E, 2000ApJ...539L.147B, 2000ApJ...539L.143G, 2000ApJ...539L..87M}. In the future, the {\\em Heterodyne Instrument for the Far Infrared} (HIFI) on {\\em Herschel} will observe even more transitions of ortho- and para-$\\mathrm{H_2O}$ (o\/p-$\\mathrm{H_2O}$) like $\\mathrm{2_{12}}$$\\to$$\\mathrm{1_{01}}$ (1669.904 GHz), $\\mathrm{2_{21}}$$\\to$$\\mathrm{2_{12}}$ (1661.015 GHz), $\\mathrm{3_{03}}$$\\to$$\\mathrm{2_{12}}$ (1716.774 GHz), $\\mathrm{3_{12}}$$\\to$$\\mathrm{3_{03}}$ (1097.357 GHz), $\\mathrm{3_{21}}$$\\to$$\\mathrm{3_{12}}$ (1162.910 GHz), $\\mathrm{1_{11}}$$\\to$$\\mathrm{0_{00}}$ (1113.342 GHz), $\\mathrm{2_{02}}$$\\to$$\\mathrm{1_{11}}$ (967.924 GHz), $\\mathrm{2_{11}}$$\\to$$\\mathrm{2_{02}}$ (752.029 GHz) and $\\mathrm{2_{20}}$$\\to$$\\mathrm{2_{11}}$ (1228.801 GHz); some in absorption while others in emission.\\\\\nOne of the main goals of the SWAS mission is to determine where and whether $\\mathrm{H_2O}$ and $\\mathrm{O_2}$ are the major reservoirs of oxygen through the interstellar medium. 
SWAS observations have determined the gaseous water abundance in warm dense gas (T $\\gtrsim$ 300 K and n($\\mathrm{H_2}$) $\\gtrsim$ $10^3$ \\mbox{$\\mathrm{cm}^{-3}$}) to be $10^{-5}$ relative to $\\mathrm{H_2}$, in good agreement with chemical models for such conditions.\nHowever, in cold (T $\\lesssim$ 30 K), dense clouds the abundance of gaseous water is $\\sim$100 to 1000 times below the predictions of cold-cloud gas-phase chemical models. It has been suggested that -- toward cold clouds -- gaseous $\\mathrm{H_2O}$ exists only near the cloud surface. Indeed, closer to the surface than an $\\mathrm{A_V}$ of a few mag, $\\mathrm{H_2O}$ is photo-dissociated by the ambient galactic UV field. Deeper into the cloud, i.e., $\\mathrm{A_V}$ of 4 $\\sim$8 mag (depending on density and UV intensity), $\\mathrm{H_2O}$ may rapidly deplete onto dust grains \\citep{2000ApJ...539L.129B, 2001A&A...378.1024C, 2001A&A...370..557V}. Although the derivation of the column density from {\\em absorption} observations is straightforward (column density is simply proportional to the optical depth in the line, see \\cite{2004ApJ...605..247P}) this is not the case for {\\em emission} observations. The analysis to determine the $\\mathrm{H_2O}$ abundance now crucially depends on the physical properties of the gas through the collisional rate coefficients. Therefore, accurate constraints on the gas densities and temperatures are needed.\\\\% This is nearly the case.\\\\\nExtensive SWAS observations of the Orion A molecular cloud show that gaseous $\\mathrm{H_2O}$ correlates with CN, a surface tracer, rather than with $\\mathrm{C^{18}O}$, a volume tracer \\citep{2005AdSpR..36.1027M}. This result has been interpreted as evidence that gaseous water resides only near the surface. However, caution is needed when relying purely on single transition observations to draw such a conclusion in view of the complex rotational level structure of the $\\mathrm{H_2O}$ molecule. In particular, there is a fundamental difference between an optically thin and an effectively optically thin line. The latter case implies a strong coupling between line photons and water molecules. A full radiative transfer calculation is needed to address this problem, since the observed intensities of molecular emission depend on a complex competition between radiative and collisional processes. Moreover, the excitation of $\\mathrm{H_2O}$ also differs from that of other molecules, since both collisions and infrared radiation from warm dust influence the level populations \\citep{1983ApJ...275..145T}.\\\\\nThe intent of this work is to show that it is not straightforward to retrieve accurate information, e.g., column density, from single transition observations of $\\mathrm{H_2O}$ due to the complex level structure of this molecule.\n\n\\section{\\label{sec:}Basic model description}\nThe results presented here were obtained by application of the numerical code of \\cite{2005A&A...440..559P}, described further in \\cite{2006A&A...453..615P}. The interested reader is referred to these papers for a description of the underlying algorithms. The radiative transfer of o\/p-$\\mathrm{H_2O}$ is solved by means of a multi-zone escape probability method in three dimensions. By using a multi-zone formalism, the medium is divided into different zones, i.e., gridcells, each with a value for the abundance of the species (e.g., $\\mathrm{H_2O}$), the density of the medium, and the temperature of gas and dust. 
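In code, one can picture each zone as a small record of these local quantities; a minimal Python sketch (purely illustrative, with hypothetical names, not taken from the actual code of \\cite{2005A&A...440..559P}) is:\n\\begin{verbatim}\nfrom dataclasses import dataclass\n\n@dataclass\nclass Zone:              # one grid cell of the multi-zone model\n    x_h2o: float         # H2O abundance relative to H2\n    n_h2: float          # H2 density [cm^-3]\n    t_gas: float         # gas temperature [K]\n    t_dust: float        # dust temperature [K]\n\n# a homogeneous spherical model is then simply a list of identical zones\nzones = [Zone(x_h2o=1e-8, n_h2=1e4, t_gas=50.0, t_dust=15.0)\n         for _ in range(50)]\n\\end{verbatim}\nIn an inhomogeneous model each entry would simply carry its own local values. 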
Besides this, the cloud is characterized by a fixed total column density. The statistical equilibrium equations for a multilevel system are solved in the escape probability formalism, in which the de-excitation terms for the transitions $i\\rightarrow j$ ($i>j$) reduce to\n\\begin{equation}\n\\sum_{i>j}\\left(n_iA_{ij}~+~nn_i\\gamma_{ij}~+~(n_iB_{ij}~-~n_jB_{ji})\\bar{J}_{ij}\\right)~=~\\sum_{i>j}\\left(n_iA_{ij}\\beta(\\tau_{ij})~+~nn_i\\gamma_{ij}\\right),\n\\label{eq:approx}\n\\end{equation}\nwhere the mean radiation field in the line is $\\bar{J}_{ij}~=~[1~-~\\beta(\\tau_{ij})]\\,S(\\nu_{ij})$, so that the number of line photons absorbed locally is $\\sum_{i>j}n_i[1~-~\\beta(\\tau_{ij})]A_{ij}$, and $\\beta(\\tau)$ is the probability that a photon formed at optical depth $\\tau$ in a certain direction escapes the cloud along that direction. Therefore, the effective critical density becomes\n\\begin{equation}\nn_{cr}~=~\\frac{\\sum_{i>j}\\beta(\\tau_{ij})A_{ij}}{\\sum_{i>j}\\gamma_{ij}}.\n\\label{Eq:ncr}\n\\end{equation}\nNote that because of the large Einstein A coefficients of the water molecule, critical densities are of the order of $10^8$-$10^9$ \\mbox{$\\mathrm{cm}^{-3}$} in the optically thin case.\\\\ \nThe background radiation field P($\\nu_{ij}$) in the code consists of two terms: the 2.7K microwave background and the infrared emission of dust at a temperature $T_{\\mathrm{d}}$. This is,\n\\begin{equation}\nP(\\nu_{ij}) = B(\\nu_{ij}, T=2.7K)\\ +\\ (1-e^{-{\\tau_{dust}}})B(\\nu_{ij},T_d)\n\\end{equation}\nThe intensity of transition i$\\rightarrow$j (i$>$j) is then given by\n\\begin{equation}\nI_{ij,\\mathrm{total}} = {1\\over 4\\pi}\\int_0^r\\Lambda_{ij,\\mathrm{local}}dr,\n\\label{eq:int_total}\n\\end{equation}\nwith \n\\begin{equation}\n\\Lambda_{ij,\\mathrm{local}} = n_iA_{ij}h\\nu_{ij}\\beta(\\tau_{ij})\\{[S(\\nu_{ij})-P(\\nu_{ij})]\/S(\\nu_{ij})\\},\n\\end{equation} \nwhere S($\\nu_{ij}$) is the source function at frequency $\\nu_{ij}$. \\\\\nCollisional rate coefficients for inelastic collisions between o\/p-$\\mathrm{H_2O}$ and He \\citep{1993ApJS...85..181G}, and for collisions between o\/p-$\\mathrm{H_2O}$ and both o-$\\mathrm{H_2}$ and p-$\\mathrm{H_2}$ \\citep{1996ApJS..107..467P} are adopted. We adopt the expression for the ortho-to-para ratio (OPR) of $\\mathrm{H_2}$, in thermal equilibrium, defined by \n\\begin{equation}\n\\mathrm{OPR}= {{(2I_\\mathrm{o} + 1)\\sum(2J + 1)\\exp\\left(-{E_\\mathrm{o}(J,K_\\mathrm{a},K_\\mathrm{c})\\over kT}\\right)}\\over{(2I_\\mathrm{p} + 1)\\sum(2J + 1)\\exp\\left(-{E_\\mathrm{p}(J,K_\\mathrm{a},K_\\mathrm{c})\\over kT}\\right)}}\\,, \n\\label{eq:OPR}\n\\end{equation}\nwhere $I_\\mathrm{o}$ and $I_\\mathrm{p}$ are the total nuclear spin, corresponding to whether the hydrogen nuclear spins are parallel ($I_\\mathrm{o}$ = 1, $\\uparrow$$\\uparrow$) or anti-parallel ($I_\\mathrm{p}$ = 0, $\\uparrow$$\\downarrow$). The sum in the numerator (denominator) extends over all ortho (para) levels $({J},K_\\mathrm{a},K_\\mathrm{c})$, \\citet{1987A&A...187..419M}. The code has been tested extensively against (analytical) benchmark problems presented at the radiative transfer workshop held in Leiden (2004), see \\citet{2006A&A...453..615P}. 
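The interplay between $\\beta(\\tau)$ and the critical density in Eq. \\ref{Eq:ncr} can be made concrete with a short numerical sketch; we stress that the simple $\\beta(\\tau)=(1-e^{-\\tau})\/\\tau$ form and the single representative collisional rate adopted below are illustrative assumptions only, and not the prescriptions used in the code:\n\\begin{verbatim}\nimport math\n\nA_557 = 3.5e-3     # approximate Einstein A of the o-H2O 1_10 -> 1_01 line [s^-1]\nGAMMA = 2.0e-11    # representative downward collisional rate [cm^3 s^-1]\n                   # (order of magnitude only; the code itself uses the\n                   #  state-to-state rates cited above)\n\ndef beta(tau):\n    # simple escape probability, assumed here purely for illustration\n    return 1.0 if tau < 1e-8 else (1.0 - math.exp(-tau)) \/ tau\n\ndef n_crit_eff(tau):\n    # effective critical density of a single transition, cf. Eq. (Eq:ncr)\n    return beta(tau) * A_557 \/ GAMMA\n\nfor tau in (0.1, 1.0, 100.0, 1000.0):\n    print('tau = %6.1f   n_cr ~ %.1e cm^-3' % (tau, n_crit_eff(tau)))\n\\end{verbatim}\nFor $\\tau \\ll 1$ this reproduces the optically thin value of order $10^8$ \\mbox{$\\mathrm{cm}^{-3}$} quoted above, while for $\\tau \\sim 10^2$-$10^3$ the effective critical density drops to $\\sim$$10^5$-$10^6$ \\mbox{$\\mathrm{cm}^{-3}$}, a point we return to in Sect. \\ref{sec:discussion}. 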
It is found that the level populations are completely consistent with the solutions of other Monte Carlo and ALI codes, as presented in \\citet{2005dmu..conf..431V}\\\\\n\\begin{table\n\\begin{minipage}[b]{\\columnwidth}\n\\renewcommand{\\footnoterule}{}\n\\caption{Model parameters}\n\\label{tab:model}\n\\centering\n\\begin{tabular}{lcccccc}\n\\hline\\hline \nModel & $\\mathrm{n(H_2)}$ & size & ${\\mathrm{X(H_2O)}^{a}}$ & ${\\mathrm{N({H_2})}^{b}}$ & $\\mathrm{T_{gas}}$ & $\\mathrm{T_{dust}}$\\\\\n & $[$\\mbox{$\\mathrm{cm}^{-3}$}$]$ & $[$\\mbox{pc}$]$ & &$[$\\mbox{$\\mathrm{cm}^{-2}$}$]$ & $[$K$]$&$[$K$]$\\\\\n\\hline\nI & $10^4$-$10^6$ & 0.002-0.2 & $10^{-10}$-$10^{-6}$ & 6$\\times$$10^{21}$ & 50 & no \\\\\nII & $10^4$-$10^6$ & 0.002-0.2 & $10^{-10}$-$10^{-6}$ & 6$\\times$$10^{21}$ & 50 & 15\\\\\nIII & $10^4$-$10^6$ & 0.002-0.2 & $10^{-10}$-$10^{-6}$ & 6$\\times$$10^{21}$ & 50 & 25\\\\\nIV & $10^4$-$10^6$ & 0.002-0.2 & $10^{-10}$-$10^{-6}$ & 6$\\times$$10^{21}$ & 50 & 50\\\\\nV & $10^4$-$10^6$ & 0.002-0.2 & $10^{-10}$-$10^{-6}$ & 6$\\times$$10^{21}$ & 30 & 15\\\\\n\\hline\n\\end{tabular}\n\\end{minipage}\n$^{a}$ Abundance of $\\mathrm{H_2O}$; $^{b}$ Total column density through the centre\n\\end{table}\n\\section{Model results}\n\n\\begin{figure\n\\includegraphics[width=9cm]{N6e21_nodustCMB.ps}\n\\caption{The intensity of the ortho-$\\mathrm{H_2O}$ ground state transition for a homogeneous sphere with $\\mathrm{H_2}$ densities of $10^4$({\\em top}), $10^5$({\\em middle}), $10^6$({\\em bottom}) $\\mathrm{cm^{-3}}$ and temperature of the gas of 50 K. The dust emission as well as the CMB radiation are ignored ({\\em i.e., model I}). In every case, the total column density ($\\mathrm{H_2}$) is kept constant. Lines are plotted for an abundance of $\\mathrm{H_2O}$, relative to $\\mathrm{H_2}$, from $10^{-10}$ ({\\em upper line\/red}) to $10^{-6}$ ({\\em lower line\/light blue}) as function of $\\mathrm{H_2}$ column density along the line of sight, where 2 $\\times$ $10^{19}$ $\\mathrm{cm^{-2}}$ is at the edge, and 6 $\\times$ $10^{21}$ $\\mathrm{cm^{-2}}$ through the center of the cloud. $Y$-axis in units of \\mbox{$\\mathrm{erg\\ s^{-1}\\ sr^{-1}}$}. The position of the $\\tau$=1 surface is displayed with a cross for X($\\mathrm{H_2O}$)=$10^{-9}$-$10^{-6}$.}\n\\label{fig:N6e21_nodustCMB}\n\\end{figure}\n\n\n\\begin{figure\n\\includegraphics[width=9cm]{N6e21_1e4.ps}\n\\caption{The intensity of the ortho-$\\mathrm{H_2O}$ ground state transition for a homogeneous sphere with density ($\\mathrm{H_2}$) of $10^4$ $\\mathrm{cm^{-3}}$ for model II ({\\em top}), III ({\\em middle}) and IV ({\\em bottom}). Lines are plotted for an abundance of $\\mathrm{H_2O}$, relative to $\\mathrm{H_2}$, from $10^{-10}$ ({\\em upper line\/red}) to $10^{-6}$ ({\\em lower line\/light blue}) as function of $\\mathrm{H_2}$ column density along the line of sight, where 2 $\\times$ $10^{19}$ $\\mathrm{cm^{-2}}$ is at the edge, and 6 $\\times$ $10^{21}$ $\\mathrm{cm^{-2}}$ through the center of the cloud. $Y$-axis in units of \\mbox{$\\mathrm{erg\\ s^{-1}\\ sr^{-1}}$}.}\n\\label{fig:N6e21_1e4.ps}\n\\end{figure}\n\n\n\n\n\\begin{figure\n\\includegraphics[width=9cm]{N6e21_Tg30Td15.ps}\n\\caption{The intensity of the ortho-$\\mathrm{H_2O}$ ground state transition in case of a homogeneous sphere with densities ($\\mathrm{H_2}$) of $10^4$({\\em top}), $10^5$({\\em middle}), $10^6$({\\em bottom}) $\\mathrm{cm^{-3}}$ and temperatures for gas and dust of 30 K and 15 K, respectively ({\\em i.e., model V}). 
In every case, the total column density ($\\mathrm{H_2}$) is kept constant. Lines are plotted for an abundance of $\\mathrm{H_2O}$, relative to $\\mathrm{H_2}$, from $10^{-10}$ ({\\em upper line\/red}) to $10^{-6}$ ({\\em lower line\/light blue}) as function of $\\mathrm{H_2}$ column density along the line of sight, where 2 $\\times$ $10^{19}$ $\\mathrm{cm^{-2}}$ is at the edge, and 6 $\\times$ $10^{21}$ $\\mathrm{cm^{-2}}$ through the center of the cloud. $Y$-axis in units of \\mbox{$\\mathrm{erg\\ s^{-1}\\ sr^{-1}}$}.}\n\\label{fig:N6e21_Tg30Td15}\n\\end{figure}\n\n\n\\begin{figure\n\\includegraphics[width=9cm]{l2.ps}\n\\caption{Shown here is the population of the $\\mathrm{1_{01}}$ level of o-$\\mathrm{H_2O}$ as function of radius. Top, middle, bottom panel are the results in case X($\\mathrm{H_2O}$) = $10^{-10}$, $10^{-9}$, $10^{-8}$, respectively. Dashed, dotted and dashed-dotted lines represent the results of model I, III and IV, respectively.}\n\\label{fig:levelpop}\n\\end{figure}\n\n\\begin{figure\n\\includegraphics[width=9cm]{mean.ps}\n\\caption{Conversion of the results of model I ({\\em top}) and IV ({\\em bottom}) into surface area weighted values. In each panel the dashed-dotted, dotted, and dashed curve represent the results in case the density n($\\mathrm{H_2}$) is $10^6$, $10^5$ and $10^4$ \\mbox{$\\mathrm{cm}^{-3}$}, respectively. $Y$-axis in units of \\mbox{$\\mathrm{erg\\ s^{-1}\\ sr^{-1}}$}.}\n\\label{fig:mean}\n\\end{figure}\nWe calculate, in the case of a homogeneous sphere and as a function of impact parameter, the surface brightness for the $\\mathrm{1_{10}}$$\\rightarrow$$\\mathrm{1_{01}}$ ground-state transition of ortho-$\\mathrm{H_2O}$. The density and abundance of $\\mathrm{H_2O}$ are the main parameters of the model. All the models have a constant total column density N($\\mathrm{H_2}$) of 6 $\\times$ $10^{21}$ \\mbox{$\\mathrm{cm}^{-2}$} through the center of the cloud, corresponding to a total $A_V$ of $\\sim$3 mag. As a result, the physical size of the cloud is inversely proportional to the density of the medium. The density ranges from $10^4$ to $10^6$ \\mbox{$\\mathrm{cm}^{-3}$} covering the relevant range for dense molecular clouds such as the Orion ridge. \nThe temperatures, ranging from 30 to 50K for the gas and from 15 to 50K for the dust, were chosen to represent the mean observed temperatures towards star forming molecular clouds such as the Orion ridge which have been the focus of the SWAS effort. In model I we ignore the emission from dust and CMB, whereas in model IV a dust temperature of 50K is assumed, in order to assess the influence of the dust and temperature, respectively, on the excitation of the water molecule. The temperatures in model V are a factor of $\\sim$2 lower than in the other models since to maintain a temperature of the gas of 50K throughout the cloud one needs a strong UV radiation field, which is not always the case. Note that the gas and dust temperatures are independent of cloud depth. The intent of the paper is to illustrate the excitation and radiative transfer effects assuming a 'simple' cloud model, not to model a realistic cloud. A Galactic dust-to-gas ratio of $10^{-2}$ by mass is assumed. We adopt the dust opacities of \\cite{1994A&A...291..943O} (Column 5 of their Table 1). 
In all the models, except for model I, the dust optical depth $\\tau_{\\mathrm{dust}}$ through the center of the cloud at the frequency of the ground state transition of o-$\\mathrm{H_2O}$, i.e., $\\mathrm{\\tau_{\\nu=556.936GHz}}$ or $\\mathrm{\\tau_{\\lambda=538\\mu m}}$, is $10^{-3}$. Within each model the water abundance, X($\\mathrm{H_2O}$), ranges from $10^{-10}$ to $10^{-6}$. The parameters for the different models are shown in Table \\ref{tab:model}. Throughout the models, a velocity dispersion of 1 \\mbox{$\\mathrm{km\\ s^{-1}}$} is adopted, typical for a cloud with moderate turbulence. Note that the results presented in this paper depend on the adopted velocity dispersion. A higher (lower) velocity dispersion will decrease (increase) the optical depth for a given transition, hence having an impact on the excitation of the molecule.\\\\\nFig. \\ref{fig:N6e21_nodustCMB}--\\ref{fig:N6e21_Tg30Td15} present the basic results of this work. We plot, as a function of N($\\mathrm{H_2}$), i.e., different impact parameters, the $\\mathrm{1_{10}}$$\\rightarrow$$\\mathrm{1_{01}}$ line intensity above the continuum per unit column density of o-$\\mathrm{H_2O}$, see Eq. \\ref{eq:int_total}. The results for model I, in which we ignore the contribution of dust and CMB on the excitation of the water molecule, are displayed in Fig. \\ref{fig:N6e21_nodustCMB}. The position of the $\\tau$=1 surface is denoted by a cross for X($\\mathrm{H_2O}$)=$10^{-9}$-$10^{-6}$. Models II, III and IV behave in a similar way to model I in the case n($\\mathrm{H_2}$)$\\ge$$10^5$$\\mathrm{cm^{-3}}$. For this reason, we only plot the outcome in the low density case for models II, III and IV in Fig. \\ref{fig:N6e21_1e4.ps}. The results for model V are plotted in Fig. \\ref{fig:N6e21_Tg30Td15}. \\\\\nThe following trends can be identified, which will be discussed in Sect. \\ref{sec:discussion}. \\\\\nFirst, in the case of n($\\mathrm{H_2}$) $\\ge$ $10^5$ $\\mathrm{cm^{-3}}$ and X($\\mathrm{H_2O}$) $\\lesssim$ $10^{-8}$, i.e., $\\tau$$<$10, a linear relationship holds between the number of photons escaping the cloud and the impact parameter, i.e., I\/$\\mathrm{N_{H_2O}}$ is constant. However, this relationship breaks down at high optical depth, i.e., X($\\mathrm{H_2O}$) $>$ $10^{-8}$, for all the models. Second, in all the models, except in model I, absorption occurs in the low density case when abundances are low, i.e., X($\\mathrm{H_2O}$) $<$ $10^{-8}$, which becomes more apparent as the dust temperature increases. However, the amount of absorption is moderate as the self-reversal in the center of the line is small. Third, when the $\\mathrm{H_2O}$ abundance exceeds $10^{-7}$, in all the models, the ratio of the intensity to the column density decreases near the edge of the cloud (N($\\mathrm{H_2}$)$\\sim$5$\\times$$10^{20}$$\\mathrm{cm^{-2}}$). Fourth, for high optical depth, i.e., X($\\mathrm{H_2O}$)$\\gtrsim$$10^{-7}$, and n($\\mathrm{H_2}$)=$10^6$ $\\mathrm{cm^{-3}}$, I\/$\\mathrm{N_{H_2O}}$ decreases with increasing column density. Fifth, lowering the gas and dust temperatures by a factor of $\\sim$2 (model V) does not lead to significant differences in the shapes of the curves. 
That is, models II-IV and model V experience similar complicating radiative transfer effects.\n\n\\section{\\label{sec:discussion}Discussion}\nThe asymmetry of the water molecule causes the rotational levels to split into a number of different ladders, so-called 'K-ladders', characterized by different values of the projection of the angular momentum onto the principal axes of the molecule ($J_{K_-K_+}$). Radiative transitions occur rapidly between levels in each ladder but are much slower between levels in different ladders. This leads to a spectrum more complex than that of linear or symmetric top molecules, e.g., CO, $\\mathrm{NH_3}$.\nHence, it is not straightforward to disentangle the different processes that contribute to the observed spectrum. We now describe the different effects that play a role in the interpretation of the figures. In this, the 'edge' and 'centre' of the cloud refer to an impact parameter of 1 and 0, respectively. Note that the use of spherical models leads to non-trivial angular re-distribution of line photons. The same holds for continuum photons within the line profile frequency range. \\\\\nFirst, one can see in Figs. \\ref{fig:N6e21_nodustCMB} and \\ref{fig:N6e21_Tg30Td15} that the curves as a function of total column density ($\\mathrm{H_2}$) are constant for X($\\mathrm{H_2O}$) $\\lesssim$ $10^{-8}$ and n($\\mathrm{H_2}$)$\\ge$$10^5$$\\mathrm{cm^{-3}}$. In this limit, collisional de-excitation and scattering effects are negligible. Eventually every photon produced in the cloud will escape the cloud with few interactions with the surrounding medium. Note that the number of scatterings $N$ to escape depends on the optical depth. In this regime $\\tau$ $\\ll$ 1, therefore few photons are scattered and $N$ $\\approx$ $\\tau$. With increasing optical depth, i.e., X($\\mathrm{H_2O}$) $\\gtrsim$ $10^{-8}$, more effects have to be taken into account. All models show a drop near the edge of the cloud.\n Because of the increasing optical depth, line-scattering effects become important. Thus, line photons then tend to escape in the direction with the lowest optical depth rather than tangentially to the cloud surface, causing the dip near the edge. However, towards the centre of the cloud, the optical depth increases by orders of magnitude. The photons will undergo numerous scatterings for $\\tau$ $\\gg$ 1, with $N$ $\\approx$ $\\tau^2$, and eventually will escape in the line wings.\\\\\nSecond,\nat densities as low as $10^4$ $\\mathrm{cm^{-3}}$, and abundances not exceeding $10^{-8}$ (modest optical depth), the line is strongly subthermally excited, and radiatively colder than the dust background. Hence, the line appears in absorption. The decrease in intensity\/N($\\mathrm{H_2O}$) shown in Fig. \\ref{fig:N6e21_1e4.ps} (red and green curves) now indicates that lines of sight through the cloud center are no longer contributing evenly to the emissivity across their entire column. The line is not strictly in absorption yet, but it has developed an intensity dip around line center. Thus, the presence of dust causes the trend in the intensity per column in this regime to decrease \\citep{1983ApJ...275..145T}. This behaviour is not seen in case n($\\mathrm{H_2}$) $\\gtrsim$ $10^5$ $\\mathrm{cm^{-3}}$, as in this regime collisions are the dominant process in the excitation of the water molecule, thereby nullifying the effect of dust emission.\n The influence of dust on the excitation\/level populations of water is plotted in Fig. 
\\ref{fig:levelpop} where the relative population of the $\\mathrm{1_{10}}$ level is displayed. One notices that for warmer dust the $\\mathrm{1_{10}}$ level is more populated. In essence, dust continuum emission will tend to drive the level populations towards a Boltzmann distribution at the temperature of the dust. For a given density, the effects of radiative excitation by dust continuum emission are more pronounced for higher dust temperatures (e.g., higher continuum intensities).\\\\\nThird, the effect of photon trapping is to lower the density at which LTE is approached, i.e., after each absorption, the gas has a chance to collisionally de-excite the species and return the excitation energy to the thermal bath of the gas, see Eq. \\ref{Eq:ncr}. For optically thin gas, the critical density of the ground state transition at 557 GHz of o-$\\mathrm{H_2O}$ is $\\sim$$10^8$ \\mbox{$\\mathrm{cm}^{-3}$} at 50 K. The optical depth, through the centre of the cloud, varies from 0.1 to $10^3$ when the abundance rises from $10^{-10}$ to $10^{-6}$ in all the models. Hence, for high abundances, i.e., X($\\mathrm{H_2O}$) $\\gtrsim$$10^{-7}$, the effective critical density drops to $10^5$-$10^6$ \\mbox{$\\mathrm{cm}^{-3}$}, since for high optical depth $\\beta(\\tau)$$\\sim$$1\/\\tau$. Collisional de-excitation processes then become important in the regime where n($\\mathrm{H_2}$) $\\gtrsim$ $10^5$ \\mbox{$\\mathrm{cm}^{-3}$}, and X($\\mathrm{H_2O}$) $\\gtrsim$ $10^{-7}$. It is seen in Fig. \\ref{fig:N6e21_nodustCMB} and \\ref{fig:N6e21_Tg30Td15} that for n($\\mathrm{H_2}$)=$10^6$$\\mathrm{cm^{-3}}$ and X($\\mathrm{H_2O}$)=$10^{-6}$ the I\/$\\mathrm{N_{H_2O}}$ drops as a function of impact parameter. In this part of parameter space collisional de-excitation is important and the probability that line photons are lost to the thermal bath through collisional de-excitation during one of the many scattering events is high.\\\\% Note that the optical depth for the $\\mathrm{2_{12}}$$\\to$$\\mathrm{1_{01}}$ transition at 1669.904 GHz is comparable with $\\tau_{\\mathrm{1_{10}} \\to \\mathrm{1_{01}}}$ due to the larger Einstein A coefficient. The optical depth of the higher transitions are orders of magnitude lower.\\\\\nFourth, calculations are performed for model V (Fig. \\ref{fig:N6e21_Tg30Td15}) with gas temperatures a factor of 2 lower relative to the temperatures used in model II. We find that the shapes of the curves are not affected by such a change in gas temperature. However, it affects the distribution of the level populations and thus the absolute intensity in the lines. Hence, temperature variations cannot dispense with the radiative transfer effects studied in this work.\\\\\nTo summarize, we plot in Fig. \\ref{fig:mean}, as a function of abundance, the average intensity emanating from the cloud for models I and IV, i.e., \n\\begin{equation}\n{{\\int I_{{1_{10}}\\rightarrow{1_{01}}}(b)2\\pi b\\ db}\\over{\\int N_{\\mathrm{H_2O}}(b)2\\pi b\\ db}}, \n\\end{equation}\nwith b the impact parameter. One notices a drop by a factor of $\\sim$2--5 in case n($\\mathrm{H_2}$) is $10^5$--$10^6$ \\mbox{$\\mathrm{cm}^{-3}$}, respectively. 
Note that with the assumption of an effectively optically thin line one would underestimate the water column by these same factors.\\\\\n\n\\section{Astrophysical implications}\nThe intensity of the ground-state transition of o-$\\mathrm{H_2O}$ is driven by a combination of the ambient gas and dust temperatures on the one side and by the density of the surrounding medium on the other side. It is this interplay, together with the complex structure of the molecule, that drives the level populations. To interpret existing SWAS and future HIFI data, a clear sense of the information content of the water\n lines is needed. \\\\\nSWAS observations of the lowest rotational transition of o-$\\mathrm{H_2^{16}O}$ of the Orion A molecular cloud show that gaseous water correlates much better with the near surface tracer CN than with the volume tracer $\\mathrm{C^{18}O}$, as presented in \\citet{2005AdSpR..36.1027M}. Through these observations -- in which it is assumed that the ground-state transition of ortho-$\\mathrm{H_2O}$ is effectively optically thin -- one concludes that water is a surface tracer. This is plausible from a chemical point of view in which photo-dissociation destroys the water molecule near the surface. Further inward in the cloud the water abundance reaches its equilibrium value through photodesorption of $\\mathrm{H_2O}$-ice and photodestruction of $\\mathrm{H_2O}$-gas until it freezes out onto dust grains deeper into the cloud. However, we have shown, as seen in Fig. \\ref{fig:mean}, that for $\\tau$$>$10 and n$>$$10^5$$\\mathrm{cm^{-3}}$ the effectively optically thin assumption no longer holds. Hence, under these conditions one is limited to observing the '$\\tau$=10' surface and cannot use water to trace the cloud's volume, even when it is present, i.e., not frozen out (Cernicharo, private communication). Therefore, as CN is a surface tracer and the water intensity originates from a layer of gas with an optical thickness of 1 -- depending on local excitation conditions this layer is a surface layer -- the CN intensity correlates much better with the $\\mathrm{H_2O}$ intensity and not with the volume tracer $\\mathrm{C^{18}O}$. Thus, the anti-correlation of $\\mathrm{H_2O}$ with $\\mathrm{C^{18}O}$ is partly due to optical depth effects, and is not necessarily a result of chemical changes. As a consequence, the presence of water past the $\\tau$=1 surface cannot be ruled out. \\\\\nWe would also like to point out here that the most interesting aspect of the correlation of the water line intensity with the CN line intensity is the fact that both are observed to vary by a factor $\\sim$100. Theoretically, the CN abundance is expected to scale with density squared \\citep{2005ApJ...632..302B}, indicating the importance of density variations over the Orion molecular cloud. Given the results presented in this paper, we surmise that these density variations will hamper the interpretation of the water observations.\\\\ \nIn order to deduce the total water column along the line of sight, additional information is needed from other -- effectively optically thin -- lines, which will be observed with future missions such as Herschel\/HIFI.\n\n\\begin{acknowledgements}\nWe are grateful to Ted Bergin and Gary Melnick for sending an early version of the manuscript. 
We also thank Floris van der Tak for helpful discussions and suggestions which have improved the paper and the anonymous referee for his\/her constructive comments.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzddfg b/data_all_eng_slimpj/shuffled/split2/finalzzddfg new file mode 100644 index 0000000000000000000000000000000000000000..f4202e3d7865dcfc6b2096652858226e7ff22617 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzddfg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sect:intro}\n\nSpectral measures for the representation graphs for the irreducible representations of the exceptional compact Lie group $G_2$ and its maximal torus $\\mathbb{T}^2$, and for nimrep graphs associated to the $G_2$ modular invariants, were studied in \\cite{evans\/pugh:2012i}. Here we consider the McKay graphs for finite subgroups of $G_2$ and study their spectral measures.\n\nThe spectral measure of a self-adjoint operator $a$ in a unital $C^{\\ast}$-algebra $A$ with state $\\varphi$ is a compactly supported probability measure $\\nu_a$ on the spectrum $\\sigma(a) \\subset \\mathbb{R}$ of $a$, uniquely determined by its moments\n\\begin{equation} \\label{eqn:moments_sa_operator}\n\\varphi(a^m) = \\int_{\\sigma(a)} x^m \\mathrm{d}\\nu_a (x),\n\\end{equation}\nfor all non-negative integers $m$.\n\nThe self-adjoint operators we consider here are the adjacency matrices of the McKay graphs for finite subgroups of $G_2$. The characters of the rank two Lie group $G_2$ are functions on $\\mathbb{T}^2$, and it is convenient to write the spectral measures for these operators as measures $\\varepsilon$ over the torus $\\mathbb{T}^2$. However, $\\mathbb{T}^2$ has dimension one greater than $\\sigma(a) \\subset \\mathbb{R}$, so that there is an infinite family of pullback measures $\\varepsilon$ over $\\mathbb{T}^2$ for any spectral measure $\\nu_a$. The details of the relation between the measures $\\varepsilon$ and $\\nu_a$ are given in Section \\ref{sect:measures-different_domains}.\n\nIn order to remove this ambiguity, we also consider joint spectral measures, that is, measures over the joint spectrum $\\sigma(a,b) \\subset \\sigma(a) \\times \\sigma(b) \\subset \\mathbb{R}^2$ of commuting self-adjoint operators $a$ and $b$. The abelian $C^{\\ast}$-algebra $B$ generated by $a$, $b$ and the identity 1 is isomorphic to $C(X)$, where $X$ is the spectrum of $B$. Then the joint spectrum is defined as $\\sigma(a,b) = \\{ (a(x), b(x)) | \\, x \\in X \\}$. 
In fact, one can identify the spectrum $X$ with its image $\\sigma(a,b)$ in $\\mathbb{R}^2$, since the map $x \\mapsto (a(x), b(x))$ is continuous and injective, and hence a homeomorphism since $X$ is compact \\cite{takesaki:2002}.\nIn the case where the operators $a$, $b$ act on a finite-dimensional Hilbert space, this is the set of all pairs of real numbers $(\\lambda_a,\\lambda_b)$ for which there exists a vector $\\phi$, $||\\phi||=1$, such that $a\\phi = \\lambda_a \\phi$, $b\\phi = \\lambda_b \\phi$.\nThen there exists a compactly supported probability measure $\\widetilde{\\nu}_{a,b}$ on $\\sigma(a,b)$, which is uniquely determined by its cross moments\n\\begin{equation} \\label{eqn:cross_moments_sa_operators}\n\\varphi(a^m b^n) = \\int_{\\sigma(a,b)} x^m y^n \\mathrm{d}\\widetilde{\\nu}_{a,b} (x,y),\n\\end{equation}\nfor all non-negative integers $m$, $n$.\nSuch joint spectral measures specialize to the spectral measures $\\nu_a$ (respectively $\\nu_b$) by integrating over all $y$ for which $(\\lambda_a,y) \\in \\sigma(a,b)$ (respectively all $x$ for which $(x,\\lambda_b) \\in \\sigma(a,b)$).\nAs discussed in Section \\ref{sect:measures-different_domains}, such a measure uniquely defines a measure over $\\mathbb{T}^2$ invariant under an action of the Weyl group of $G_2$. In this paper we determine the joint spectral measure for each finite subgroup $\\Gamma$ of $G_2$ for all non-conjugate embeddings of $\\Gamma$ into the fundamental representations of $G_2$.\n\nThe paper is organised as follows. In Section \\ref{sect:preliminaries} we present some preliminary material, including a discussion on the relation between spectral measures over certain different domains in Section \\ref{sect:measures-different_domains} and a summary of relevant results for $G_2$ and its nimreps from \\cite{evans\/pugh:2012i}.\n\nIn Section \\ref{sect:subgroupsG2} we discuss the finite subgroups of $G_2$, including their embeddings into the fundamental representations of $G_2$. We also give a general discussion of their spectral measures. In Sections \\ref{sect:II1}-\\ref{sect:IP5} we discuss each finite subgroup of $G_2$ individually, including determining all non-conjugate embeddings into the fundamental representations of $G_2$. 
For each such embedding we construct the corresponding McKay graphs, some of which have appeared before in \\cite{he:2003}, and we determine their joint spectral measures.\n\nSpectral measures associated to the compact Lie groups $A_1 = SU(2)$ and $A_2 = SU(3)$ and their maximal tori, nimrep graphs associated to the $SU(2)$ and $SU(3)$ modular invariants, and the McKay graphs for finite subgroups of $SU(2)$ and $SU(3)$ were studied in \\cite{banica\/bisch:2007, evans\/pugh:2009v, evans\/pugh:2010i}.\nSpectral measures associated to the compact Lie group $C_2$ are studied in \\cite{evans\/pugh:2012iii}, whilst spectral measures associated to other compact rank two Lie groups and their maximal tori are studied in \\cite{evans\/pugh:2012iv}.\n\n\n\n\\section{Preliminaries} \\label{sect:preliminaries}\n\\subsection{Spectral measures over different domains} \\label{sect:measures-different_domains}\n\nThe Weyl group of $G_2$ is the dihedral group $D_{12}$ of order 12.\nAs a subgroup of $GL(2,\\mathbb{Z})$, $D_{12}$ is generated by matrices $T_2$, $T_6$, of orders 2, 6 respectively, given by\n\\begin{equation} \\label{T2,T6}\nT_2 = \\left( \\begin{array}{cc} 0 & -1 \\\\ -1 & 0 \\end{array} \\right), \\qquad T_6 = \\left( \\begin{array}{cc} 0 & 1 \\\\ -1 & 1 \\end{array} \\right),\n\\end{equation}\nwhere the action of $D_{12}$ on $\\mathbb{T}^2$ is given by $T(\\omega_1,\\omega_2) = (\\omega_1^{a_{11}}\\omega_2^{a_{12}},\\omega_1^{a_{21}}\\omega_2^{a_{22}})$, for $T = (a_{il}) \\in D_{12}$. This action leaves $\\chi_{\\mu}(\\omega_1,\\omega_2)$ invariant, for any $\\mu \\in P_{++} = \\{ (\\mu_1,\\mu_2) \\in \\mathbb{N}^2 | \\, \\mu_1 \\geq \\mu_2 \\}$, the interior of the Weyl alcove for $G_2$.\nAny $D_{12}$-invariant measure $\\varepsilon_{\\mu}$ on $\\mathbb{T}^2$ yields a pushforward probability measure $\\nu_{\\mu}$ on $I_{\\mu} = \\chi_{\\mu}( \\mathbb{T}^2)\\subset \\mathbb{R}$ by\n\\begin{equation} \\label{eqn:measures-T2-Ij_G2}\n\\int_{I_{\\mu}} \\psi(x) \\mathrm{d}\\nu_{\\mu}(x) = \\int_{\\mathbb{T}^2} \\psi(\\chi_{\\mu}(\\omega_1,\\omega_2)) \\mathrm{d}\\varepsilon_{\\mu}(\\omega_1,\\omega_2),\n\\end{equation}\nfor any continuous function $\\psi:I_{\\mu} \\rightarrow \\mathbb{C}$, where $\\mathrm{d}\\varepsilon_{\\mu}(\\omega_1,\\omega_2) = \\mathrm{d}\\varepsilon_{\\mu}(g(\\omega_1,\\omega_2))$ for all $g \\in D_{12}$.\nThere is a loss of dimension here, in the sense that the integral on the right hand side is over the two-dimensional torus $\\mathbb{T}^2$, whereas on the left hand side it is over the interval $I_{\\mu}$. 
Thus there is an infinite family of pullback measures $\\varepsilon_{\\mu}$ over $\\mathbb{T}^2$ for any measure $\\nu_{\\mu}$ on $I_{\\mu}$, that is, any $\\varepsilon_{\\mu}$ such that $\\varepsilon_{\\mu}(I_{\\mu}^{-1}[x]) = \\nu_{\\mu}(x)$ for all $x \\in I_{\\mu}$ will yield the probability measure $\\nu_{\\mu}$ on $I_{\\mu}$ as a pushforward measure by (\\ref{eqn:measures-T2-Ij_G2}).\nAs in \\cite{evans\/pugh:2012i}, we instead work with an intermediate probability measure $\\widetilde{\\nu}_{\\lambda,\\mu}$ which lives over the joint spectrum $\\mathfrak{D}_{\\lambda,\\mu} \\subset I_{\\lambda} \\times I_{\\mu} \\subset \\mathbb{R}^2$, for $\\lambda,\\mu \\in P_{+}$, where there is no loss of dimension.\n\nA fundamental domain $C$ of $\\mathbb{T}^2$ under the action of the dihedral group $D_{12}$ is illustrated in Figure \\ref{fig:fund_domain-G2inT2}, where the axes are labelled by the parameters $\\theta_1$, $\\theta_2$ in $(e^{2 \\pi i \\theta_1},e^{2 \\pi i \\theta_2}) \\in \\mathbb{T}^2$, which is a quotient of the fundamental domain of $\\mathbb{T}^2\/S_3$ illustrated in Figure \\ref{fig:fund_domain-A2inT2} (see \\cite{evans\/pugh:2009v}) by the $\\mathbb{Z}_2$-action given by the matrix -1.\nNote that in Figure \\ref{fig:fund_domain-G2inT2}, the lines $\\theta_1=0$ and $\\theta_2=0$ are also boundaries of copies of the fundamental domain $C$ under the action of $D_{12}$, whereas in Figure \\ref{fig:fund_domain-A2inT2} they are not boundaries of copies of the fundamental domain under the action of $S_3$. The torus $\\mathbb{T}^2$ contains 12 copies of $C$, so that\n\\begin{equation} \\label{eqn:measureT2=12C}\n\\int_{\\mathbb{T}^2} \\phi(\\omega_1,\\omega_2) \\mathrm{d}\\varepsilon(\\omega_1,\\omega_2) = 12 \\int_{C} \\phi(\\omega_1,\\omega_2) \\mathrm{d}\\varepsilon(\\omega_1,\\omega_2),\n\\end{equation}\nfor any $D_{12}$-invariant function $\\phi:\\mathbb{T}^2 \\rightarrow \\mathbb{C}$ and $D_{12}$-invariant measure $\\varepsilon$ over $\\mathbb{T}^2$. The only fixed point of $\\mathbb{T}^2$ under the action of $D_{12}$ is the point $(1,1)$.\n\n\\begin{figure}[tb]\n\\begin{minipage}[t]{7.9cm}\n\\begin{center}\n \\includegraphics[width=55mm]{Fig-fund_domain-A2inT2.eps}\\\\\n \\caption{\\small A fundamental domain of $\\mathbb{T}^2\/S_3$.} \\label{fig:fund_domain-A2inT2}\n\\end{center}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{7.9cm}\n\\begin{center}\n \\includegraphics[width=55mm]{Fig-fund_domain-G2inT2.eps}\\\\\n \\caption{\\small A fundamental domain $C$ of $\\mathbb{T}^2\/D_{12}$.} \\label{fig:fund_domain-G2inT2}\n\\end{center}\n\\end{minipage}\n\\end{figure}\n\nLet $x_{\\lambda} = \\chi_{\\lambda}(\\omega_1,\\omega_2)$ and let $\\Psi_{\\lambda,\\mu}$ be the map $(\\omega_1,\\omega_2) \\mapsto (x_{\\lambda},x_{\\mu})$. 
We denote by $\\mathfrak{D}_{\\lambda,\\mu}$ the image of $\\Psi_{\\lambda,\\mu}(C) (= \\Psi_{\\lambda,\\mu}(\\mathbb{T}^2))$ in $\\mathbb{R}^2$.\nNote that we can identify $\\mathfrak{D}_{\\lambda,\\mu}$ with $\\mathfrak{D}_{\\mu,\\lambda}$ by reflecting about the line $x_{\\lambda} = x_{\\mu}$.\nThen the joint spectral measure $\\widetilde{\\nu}_{\\lambda,\\mu}$ is the measure on $\\mathfrak{D}_{\\lambda,\\mu}$ uniquely determined by its cross-moments as in (\\ref{eqn:cross_moments_sa_operators}).\nThen there is a unique $D_{12}$-invariant pullback measure $\\varepsilon$ on $\\mathbb{T}^2$ such that\n\\begin{equation} \\label{eqn:measures-T2-D_G2}\n\\int_{\\mathfrak{D}_{\\lambda,\\mu}} \\psi(x_{\\lambda},x_{\\mu}) \\mathrm{d}\\widetilde{\\nu}_{\\lambda,\\mu}(x_{\\lambda},x_{\\mu}) = \\int_{\\mathbb{T}^2} \\psi(\\chi_{\\lambda}(\\omega_1,\\omega_2),\\chi_{\\mu}(\\omega_1,\\omega_2)) \\mathrm{d}\\varepsilon_{\\lambda,\\mu}(\\omega_1,\\omega_2),\n\\end{equation}\nfor any continuous function $\\psi:\\mathfrak{D}_{\\lambda,\\mu} \\rightarrow \\mathbb{C}$.\n\nAny probability measure on $\\mathfrak{D}_{\\lambda,\\mu}$ yields a probability measure on the interval $I_{\\lambda}$, given by the pushforward $(p_{\\lambda})_{\\ast}(\\widetilde{\\nu}_{\\lambda,\\mu})$ of the joint spectral measure $\\widetilde{\\nu}_{\\lambda,\\mu}$ under the orthogonal projection $p_{\\lambda}$ onto the spectrum $\\sigma(\\lambda)$ (see \\cite{evans\/pugh:2012i} for more details).\nSince the spectral measure $\\nu_{\\lambda}$ over $I_{\\lambda}$ is also uniquely determined by its (one-dimensional) moments $\\widetilde{\\varsigma}_m = \\int_{I_{\\lambda}} x_{\\lambda}^m \\mathrm{d}\\nu_{\\lambda}(x_{\\lambda})$ for all $m \\in \\mathbb{N}$, one could alternatively consider the moments in (\\ref{eqn:cross_moments_sa_operators}) with $n=0$ to determine the measure $\\nu_{\\lambda}$ over $I_{\\lambda}$.\n\nLet\n\\begin{equation} \\label{def:Dl}\nC_k^W = \\{ (e^{2 \\pi i q_1\/3(k+4)}, e^{2 \\pi i q_2\/3(k+4)}) \\in \\mathbb{T}^2 | \\; q_1,q_2 = 0, 1, \\ldots, 3k+11; \\, q_1 + q_2 \\equiv 0 \\textrm{ mod } 3 \\}\n\\end{equation}\nwhich is the support (over $\\mathbb{T}^2$) of the spectral measure of the nimrep graph $\\mathcal{A}_k(G_2)$ associated to the trivial $G_2$ modular invariant at level $k$.\nThe following $G_2$-invariant measures will be useful later, c.f. \\cite{evans\/pugh:2010i}.\n\\begin{Def} \\label{def:4measures}\nLet $\\omega = e^{2 \\pi i\/3}$, $\\tau = e^{2 \\pi i\/n}$. 
We define the following measures on $\\mathbb{T}^2$:\n\\begin{itemize}\n\\item[(1)] $\\mathrm{d}_m \\times \\mathrm{d}_n$, where $\\mathrm{d}_k$ is the uniform measure on the $k^{\\mathrm{th}}$ roots of unity, for $k \\in \\mathbb{N}$.\n\\item[(2)] $\\mathrm{d}^{(n)}$, the uniform measure on $C_n^W$ for $n \\in \\mathbb{N}$.\n\\item[(3)] $\\mathrm{d}^{((n))}$, the uniform measure on the $S_3$-orbit of the points $(\\tau, \\tau)$, $(\\overline{\\omega} \\, \\overline{\\tau}, \\omega)$, $(\\omega, \\overline{\\omega} \\, \\overline{\\tau})$, for $n \\in \\mathbb{Q}$, $n \\geq 2$.\n\\item[(4)] $\\mathrm{d}^{(n,k)}$, the uniform measure on the $S_3$-orbit of the points $(\\tau \\, e^{2 \\pi i k}, \\tau)$, $(\\tau, \\tau \\, e^{2 \\pi i k})$, $(\\overline{\\omega} \\, \\overline{\\tau}, \\omega \\, e^{2 \\pi i k})$, $(\\omega \\, e^{2 \\pi i k}, \\overline{\\omega} \\, \\overline{\\tau})$, $(\\overline{\\omega} \\, \\overline{\\tau} \\, e^{-2 \\pi i k}, \\omega \\, e^{-2 \\pi i k})$, $(\\omega \\, e^{-2 \\pi i k}, \\overline{\\omega} \\, \\overline{\\tau} \\, e^{-2 \\pi i k})$, for $n,k \\in \\mathbb{Q}$, $n > 2$, $0 \\leq k \\leq 1\/n$.\n\\end{itemize}\n\\end{Def}\n\nThe sets $\\mathrm{Supp}(\\mathrm{d}^{((n))})$, $\\mathrm{Supp}(\\mathrm{d}^{(n,k)})$ are illustrated in Figures \\ref{fig:poly-15}, \\ref{fig:poly-16} respectively, where $\\mathrm{Supp}(\\mathrm{d}\\mu)$ denotes the set of points $(\\theta_1,\\theta_2) \\in [0,1]^2$ such that $(e^{2 \\pi i \\theta_1}, e^{2 \\pi i \\theta_2})$ is in the support of the measure $\\mathrm{d}\\mu$. The white circles in Figure \\ref{fig:poly-16} denote the points given by the measure $\\mathrm{d}^{((n))}$. The cardinality $|\\mathrm{Supp}(\\mathrm{d}_m \\times \\mathrm{d}_n)|$ of $\\mathrm{Supp}(\\mathrm{d}_m \\times \\mathrm{d}_n)$ is $mn$, whilst $|\\mathrm{Supp}(\\mathrm{d}^{(n)})| = |D_n| = 3n^2$ was shown in \\cite[Section 7.1]{evans\/pugh:2009v}. For $n > 2$ and $0 < k < 1\/n$, $|\\mathrm{Supp}(\\mathrm{d}^{((n))})| = 18$, whilst $|\\mathrm{Supp}(\\mathrm{d}^{(n,k)})| = 36$. 
The cardinalities of the other sets are $|\\mathrm{Supp}(\\mathrm{d}^{(n,0)})| = |\\mathrm{Supp}(\\mathrm{d}^{(n,1\/n)})| = 18$ for $n > 2$, and $|\\mathrm{Supp}(\\mathrm{d}^{((2))})| = 9$.\nSome relations between these measures are given in \\cite[Section 2]{evans\/pugh:2010i}.\n\n\\begin{figure}[tb]\n\\begin{minipage}[t]{7.5cm}\n\\begin{center}\n \\includegraphics[width=55mm]{fig-poly-15.eps}\\\\\n \\caption{$\\mathrm{Supp}(\\mathrm{d}^{((n))})$} \\label{fig:poly-15}\n\\end{center}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{7.5cm}\n\\begin{center}\n \\includegraphics[width=55mm]{fig-poly-16.eps}\\\\\n \\caption{$\\mathrm{Supp}(\\mathrm{d}^{(n,k)})$} \\label{fig:poly-16}\n\\end{center}\n\\end{minipage}\n\\end{figure}\n\n\n\\subsection{Spectral measures for $G_2$} \\label{sect:spec_measure-G2}\n\nHere we review the results, determined in \\cite{evans\/pugh:2012i}, for the spectral measures for $G_2$.\nLet $\\rho_1$, $\\rho_2$ denote the fundamental representations of $G_2$ of dimensions 7, 14 respectively.\nThe restrictions of the characters $\\chi_{\\rho_j}$ of the fundamental representations of $G_2$ to $\\mathbb{T}^2$ yield maps from the torus to the interval $I_j = \\chi_{\\rho_j}(\\mathbb{T}^2) \\subset \\mathbb{R}$:\n\\begin{align*}\n\\chi_{\\rho_1}(\\omega_1,\\omega_2) & = 1 + \\omega_1 + \\omega_1^{-1} + \\omega_2 + \\omega_2^{-1} + \\omega_1\\omega_2^{-1} + \\omega_1^{-1}\\omega_2 \\\\\n& = 1 + 2\\cos(2\\pi\\theta_1) + 2\\cos(2\\pi\\theta_2) + 2\\cos(2\\pi(\\theta_1-\\theta_2)), \\\\\n\\chi_{\\rho_2}(\\omega_1,\\omega_2) & = \\chi_{\\rho_1}(\\omega_1,\\omega_2) + 1 + \\omega_1\\omega_2 + \\omega_1^{-1}\\omega_2^{-1} + \\omega_1^2\\omega_2^{-1} + \\omega_1^{-2}\\omega_2 + \\omega_1\\omega_2^{-2} + \\omega_1^{-1}\\omega_2^2 \\\\\n= \\chi_{\\rho_1}&(\\omega_1,\\omega_2) + 1 + 2\\cos(2\\pi(\\theta_1+\\theta_2)) + 2\\cos(2\\pi(2\\theta_1-\\theta_2)) + 2\\cos(2\\pi(\\theta_1-2\\theta_2)),\n\\end{align*}\nwhere $\\omega_j = e^{2\\pi i \\theta_j} \\in \\mathbb{T}$ for $\\theta_j \\in [0,1]$, $j=1,2$.\n\nLet\n\\begin{equation} \\label{eqn:x,y-G2}\nx := \\chi_{\\rho_1}(\\omega_1,\\omega_2), \\qquad y := \\chi_{\\rho_2}(\\omega_1,\\omega_2),\n\\end{equation}\nand denote by $\\Psi$ the map $\\Psi_{(1,0),(1,1)}: (\\omega_1,\\omega_2) \\mapsto (x,y)$.\nThe image $\\mathfrak{D} = \\Psi(C)$ of the fundamental domain $C$ of $\\mathbb{T}^2\/D_{12}$ (illustrated in Figure \\ref{fig:fund_domain-G2inT2}) under $\\Psi$ is illustrated in Figure \\ref{fig:DomainD-G2}, where the boundaries of $C$ given by $\\theta_1 = 2\\theta_2$, $\\theta_1 = -\\theta_2$, $\\theta_1 = \\theta_2$ yield the curves $c_1$, $c_2$, $c_3$ respectively. 
These curves are given by \\cite{uhlmann\/meinel\/wipf:2007}\n\\begin{align*}\nc_1: && y & = -5(x+1)+2(x+2)^{3\/2}, \\qquad x \\in [-2,7], \\\\\nc_2: && y & = -5(x+1)-2(x+2)^{3\/2}, \\qquad x \\in [-2,-1], \\\\\nc_3: && 4y & = x^2+2x-7, \\hspace{28mm} x \\in [-1,7].\n\\end{align*}\nThe fixed point $(1,1)$ of $\\mathbb{T}^2$ under the action of $D_{12}$ maps to 7, 14 in the intervals $I_1$, $I_2$ respectively.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=60mm]{Fig-DomainD2-G2.eps}\\\\\n \\caption{The domain $\\mathfrak{D} = \\Psi(C)$.} \\label{fig:DomainD-G2}\n\\end{center}\n\\end{figure}\n\nUnder the change of variables (\\ref{eqn:x,y-G2}) the Jacobian is given by \\cite{evans\/pugh:2012i}\n\\begin{equation} \\label{eqn:J[theta]-D12}\n\\begin{split}\nJ & = 8 \\pi^2 (\\cos(2 \\pi (2\\theta_1 + \\theta_2)) + \\cos(2 \\pi (\\theta_1 - 3\\theta_2)) + \\cos(2 \\pi (3\\theta_1 - 2\\theta_2)) \\\\\n& \\qquad - \\cos(2 \\pi (\\theta_1 + 2\\theta_2)) - \\cos(2 \\pi (3\\theta_1 - \\theta_2)) - \\cos(2 \\pi (2\\theta_1 - 3\\theta_2))).\n\\end{split}\n\\end{equation}\nThe Jacobian is real and vanishes in $\\mathbb{T}^2$ only on the boundaries of the images of the fundamental domain $C$ under $D_{12}$.\nAgain, $J^2$ can be written in terms of the $D_{12}$-invariant elements $x$, $y$ as $J^2 = (4x^3-x^2-2x-10xy-y^2-10y+7)(x^2+2x-7-4y)$ (see also \\cite{uhlmann\/meinel\/wipf:2007}), which is non-negative since $J$ is real. We write $J$ in terms of $x$ and $y$ as\n\\begin{eqnarray} \\label{eqn:J[x,y]-D12}\n|J| & = & 4 \\pi^2 \\sqrt{(4x^3-x^2-2x-10xy-y^2-10y+7)(x^2+2x-7-4y)}.\n\\end{eqnarray}\n\n\n\n\n\n\n\n\n\\subsection{Spectral measures for nimrep graphs associated to $G_2$ modular invariants} \\label{sect:spec_measure-nimrepsG2}\n\nSuppose $G$ is the nimrep associated to a $G_2$ braided subfactor at some finite level $k$ with vertex set $G_0$.\nWe define a state $\\varphi$ on $\\ell^2(G_0)$ by $\\varphi( \\, \\cdot \\, ) = \\langle \\,\\cdot \\, e_{\\ast}, e_{\\ast} \\rangle$, where $e_{\\ast}$ is the basis vector in $\\ell^2(G_0)$ corresponding to the distinguished vertex $\\ast$ with lowest Perron-Frobenius weight.\n\nIf we consider the nimrep graphs $G_{\\lambda}$, $G_{\\mu}$, which have joint spectrum $\\mathfrak{D}_{\\lambda,\\mu}$, then the $m,n^{\\mathrm{th}}$ cross moment $\\varsigma_{m,n} = \\varphi(G_{\\lambda}^m G_{\\mu}^n) = \\int_{\\mathfrak{D}_{\\lambda,\\mu}} x^m y^n \\mathrm{d}\\widetilde{\\nu}(x,y)$, where $x=x_{\\lambda}$, $y=x_{\\mu}$, is given by $\\langle G_{\\lambda}^m G_{\\mu}^n e_{\\ast}, e_{\\ast} \\rangle$.\nLet $\\beta_{\\lambda}^{(\\nu)} = \\chi_{\\lambda}(t_{\\nu})$ be the eigenvalues of $G_{\\lambda}$, where $t_{\\nu}=(\\exp(\\xi(\\nu_1+1)),\\exp(-3\\xi (\\nu_2+1)))$ for $\\xi = 6\\pi i\/(k+4)$, with corresponding eigenvectors $\\psi^{(\\nu)}$ (note that the eigenvectors of $G_{\\lambda}$ are the same for all $\\lambda$). Each eigenvalue $\\beta_{\\lambda}^{(\\mu)}$ is also given by a ratio of the $S$-matrix for $G_2$ at level $k$, $\\beta_{\\lambda}^{(\\mu)} = S_{\\lambda\\mu}\/S_{0\\mu}$, where $\\mu \\in \\mathrm{Exp}(G) \\subset P^k_{+} = \\{ (\\lambda_1,\\lambda_2) | \\, \\lambda_1,\\lambda_2 \\geq 0; \\lambda_1 + 2\\lambda_2 \\leq k \\}$ are given by the modular invariant $Z$. 
Then $G_{\\lambda}^m G_{\\mu}^n = \\mathcal{U} \\Lambda_{\\lambda}^m \\Lambda_{\\mu}^n \\mathcal{U}^{\\ast}$, where $\\Lambda_{\\lambda}$ is the diagonal matrix with the eigenvalues $\\beta_{\\lambda}^{(\\nu)}$ on the diagonal, and $\\mathcal{U}$ is the matrix whose columns are given by the eigenvectors $\\psi^{(\\nu)}$, so that\n\\begin{equation}\\label{eqn:moments-nimrep-G2}\n\\varsigma_{m,n} \\;\\; = \\;\\; \\langle \\mathcal{U} \\Lambda_{\\lambda}^m \\Lambda_{\\mu}^n \\mathcal{U}^{\\ast} e_{\\ast}, e_{\\ast} \\rangle \\;\\; = \\;\\; \\langle \\Lambda_{\\lambda}^m \\Lambda_{\\mu}^n \\mathcal{U}^{\\ast} e_{\\ast}, \\mathcal{U}^{\\ast} e_{\\ast} \\rangle \\;\\; = \\;\\; \\sum_{\\nu} (\\beta_{\\lambda}^{(\\nu)})^m (\\beta_{\\mu}^{(\\nu)})^n |\\psi^{(\\nu)}_{\\ast}|^2,\n\\end{equation}\nwhere $\\psi^{(\\nu)}_{\\ast} = \\mathcal{U}^{\\ast} e_{\\ast}$ is the entry of the eigenvector $\\psi^{(\\nu)}$ corresponding to the distinguished vertex $\\ast$.\nThen there is a $D_{12}$-invariant measure $\\varepsilon$ over $\\mathbb{T}^2$ such that\n$$\\varsigma_{m,n} = \\int_{\\mathbb{T}^2} \\chi_{\\lambda}(\\omega_1,\\omega_2)^m \\chi_{\\mu}(\\omega_1,\\omega_2)^n \\mathrm{d}\\varepsilon(\\omega_1,\\omega_2),$$\nfor all $\\lambda$, $\\mu$.\n\nNote from (\\ref{eqn:moments-nimrep-G2}) that the measure $\\varepsilon$ is a discrete measure which has weight $|\\psi^{(\\nu)}_{\\ast}|^2$ at the points $g(t_{\\nu}) \\in \\mathbb{T}^2$ for $g \\in D_{12}$, $\\nu \\in \\mathrm{Exp}(G)$, and zero everywhere else. Thus the measure $\\varepsilon$ does not depend on the choice of $\\lambda$, $\\mu$, so that the spectral measure over $\\mathbb{T}^2$ is the same for any pair $(G_{\\lambda},G_{\\mu})$, even though the corresponding measures over $\\mathfrak{D}_{\\lambda,\\mu} \\subset \\mathbb{R}^2$, and indeed the subsets $\\mathfrak{D}_{\\lambda,\\mu}$ themselves, are different for each such pair. The same result holds for the spectral measure over $\\mathbb{T}^2$ of a finite subgroup of $G_2$.\n\n\n\n\n\n\n\\section{The finite subgroups of $G_2$} \\label{sect:subgroupsG2}\n\nThe classification of finite subgroups of $G_2$ is due to \\cite{wales:1970, cohen\/wales:1983} (see also \\cite{greiss:1995, he:2003}).\n\nThe reducible (i.e. block-diagonalizable) finite subgroups of $G_2$ are the finite discrete subgroups of $SU(2) \\times SU(2)$ and $SU(3)$ \\cite{wales:1970}. These subgroups are thus well known, and the corresponding spectral measures can be obtained from \\cite{banica\/bisch:2007, evans\/pugh:2009v, evans\/pugh:2010i}.\n\nThe irreducible finite subgroups of $G_2$, of which there are seven up to conjugacy in $G_2$ (or equivalently, up to conjugacy in $GL(V)$, where $V$ is the natural 7-dimensional module for $O(7,\\mathbb{C})$ \\cite[Corollary 1]{greiss:1995}), can be further classified into two types, primitive and imprimitive, where a linear group $\\Gamma \\subset GL(V)$ is imprimitive if there is a non-trivial decomposition $V=\\bigoplus_i V_i$ such that $\\Gamma$ permutes the $V_i$. 
There are two imprimitive finite subgroups and five primitive ones.\nThese finite subgroups are listed in Table \\ref{Table:subgroupsG2}, where type denotes whether an irreducible subgroup is primitive (P) or imprimitive (I).\n\n\\renewcommand{\\arraystretch}{1}\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|} \\hline\nSubgroup $\\Gamma \\subset G_2$ & Type & $|\\Gamma|$ \\\\\n\\hline\\hline finite subgroups of $SU(2) \\times SU(2)$, $SU(3)$ & - & - \\\\\n\\hline $PSL(2;7) \\cong GL(3;2) \\cong \\Sigma (168) \\subset SU(3)$ & I & 168 \\\\\n\\hline $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ & I & 1344 \\\\\n\\hline $PGL(2;7)$ & P & 336 \\\\\n\\hline $PSL(2;8)$ & P & 504 \\\\\n\\hline $PSL(2;13)$ & P & 1092 \\\\\n\\hline $PU(3;3) \\cong G_2(2)'$ & P & 6048 \\\\\n\\hline $G_2(2)$ & P & 12096 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Finite subgroups of $G_2$.} \\label{Table:subgroupsG2}\n\\end{center}\n\\end{table}\n\nThe McKay graph $\\mathcal{G}^{\\rho}_{\\Gamma}$ is the fusion graph of the irreducible representation $\\rho$ of $\\Gamma$ acting on the irreducible representations of $\\Gamma$.\nThis graph determines the Bratteli diagram for the tower of relative commutants of the subfactor $P^{\\Gamma} \\subset (M_n \\otimes P)^{\\Gamma}$, where $n$ is the dimension of the representation $\\rho$ and $P$ is the type $\\mathrm{II}_1$ factor $\\bigotimes_{n=1}^{\\infty} M_n$ \\cite[$\\S$VI]{wassermann:1988}. This graph is not however the principal graph of this subfactor as it is not bipartite. The principal graph is rather an unfolded version of the McKay graph $\\mathcal{G}^{\\rho}_{\\Gamma}$, with adjacency matrix given by $\\left( \\begin{array}{cc} 0 & \\Delta^{\\rho}_{\\Gamma} \\\\ \\Delta^{\\rho}_{\\Gamma} & 0 \\end{array} \\right)$, where $\\Delta^{\\rho}_{\\Gamma}$ is the adjacency matrix of the (folded) graph $\\mathcal{G}^{\\rho}_{\\Gamma}$.\n\nWe will consider the (joint) spectral measure for the McKay graphs $\\mathcal{G}^j_{\\Gamma} := \\mathcal{G}^{\\varrho_j}_{\\Gamma}$ associated to a finite subgroup $\\Gamma \\subset G_2$, where $\\varrho_j$ are the restrictions of the fundamental representations $\\rho_j$ of $G_2$ to $\\Gamma$, $j=1,2$.\nWe will consider all possible embeddings of the subgroup in $G_2$. Any two such embeddings are conjugate in $G_2$ if and only if they afford the same character on the seven-dimensional representation $\\rho_1$ \\cite[Corollary 1]{greiss:1995}. In some cases there is more than one non-conjugate embedding of the subgroup in $G_2$.\nIn these cases the restricted representation $\\varrho_1$ is not necessarily irreducible.\nIn all cases, even for irreducible $\\varrho_1$, the restriction $\\varrho_2$ of the 14-dimensional representation is not necessarily irreducible.\n\nWe use the following methods to determine embeddings $\\varrho_1$ of $\\Gamma$ in $G_2$.\nFirst, take a seven-dimensional (not necessarily irreducible) representation $\\gamma_1$ of $\\Gamma$.\nThe Kronecker square of $\\rho_1$ decomposes into irreducible representations of $G_2$ as $\\rho_1^2 = \\mathrm{id}_{G_2} + \\rho_1 + \\rho_2 + \\lambda_{(2,0)}$, where $\\lambda_{(2,0)}$ has dimension 27. The Kronecker square of $\\varrho_1$ is obtained by restricting this decomposition to $\\Gamma$, and we see that $\\varrho_1$ appears in the decomposition of $\\varrho_1^2$ into irreducible representations of $\\Gamma$ (this containment test is easily automated from the character table; see the sketch below). 
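The multiplicities needed for this test are the usual character inner products (the $a_{\\lambda}$ recalled in the text that follows); the Python sketch below is purely illustrative, with a toy two-class character table standing in for that of an actual subgroup $\\Gamma$:\n\\begin{verbatim}\ndef multiplicity(class_sizes, chi, chi_irr, order):\n    # <chi, chi_irr> = (1\/|G|) sum_C |C| chi(C) conj(chi_irr(C))\n    s = sum(n * a * b.conjugate()\n            for n, a, b in zip(class_sizes, chi, chi_irr))\n    return (s \/ order).real\n\ndef contains(class_sizes, irr_table, chi_big, chi_small, order):\n    # does every irreducible occur in chi_big at least as often as in chi_small?\n    return all(multiplicity(class_sizes, chi_big, chi_irr, order) + 1e-9\n               >= multiplicity(class_sizes, chi_small, chi_irr, order)\n               for chi_irr in irr_table)\n\n# toy example: the group Z_2, irreducible characters (1,1) and (1,-1);\n# gamma_1 = 4(trivial) + 3(sign) has character (7, 1)\nclasses = [1, 1]\nirr = [[1, 1], [1, -1]]\ngamma1 = [7, 1]\ngamma1_sq = [a * a for a in gamma1]   # character of the Kronecker square\nprint(contains(classes, irr, gamma1_sq, gamma1, order=2))   # True\n\\end{verbatim}\nThe same routine, applied to a genuine character table, implements the elimination criterion described next. 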
Thus, if $\\gamma_1$ is not contained in the decomposition of $\\gamma_1^2$ into irreducible representations of $\\Gamma$, we can eliminate $\\gamma_1$ as a possible restriction $\\varrho_1$ of $\\rho_1$. The decomposition of $\\gamma_1^2$ into irreducible representations can be obtained using the character table for $\\Gamma$, by decomposing the character $\\chi_{\\gamma_1^2} = \\chi_{\\gamma_1}^2$ of $\\gamma_1^2$ into the characters of the irreducible representations $\\lambda$ of $\\Gamma$: $\\chi_{\\gamma_1}^2 = \\sum_{\\lambda} a_{\\lambda} \\chi_{\\lambda}$, where $a_{\\lambda} = \\langle \\gamma_1^2, \\lambda \\rangle\/|\\Gamma| = \\sum_{g \\in \\Gamma} \\chi_{\\gamma_1^2}(g)\\chi_{\\lambda}(g)\/|\\Gamma|$.\n\nWe next consider the eigenvalues of the representation matrices of $\\Gamma$. If the elements in a conjugacy class $C_n$ of $\\Gamma$ have order $n$, then $(C_n)^n = Z(\\Gamma)$, where $Z(\\Gamma)$ is the center of $\\Gamma$. If the center is trivial, then the eigenvalues $\\xi$ of the matrices representing these elements must satisfy $\\xi^n = 1$ (this is the case for $PSL$, $PGL$ and $PU(3;3)$). Since $\\chi_{\\lambda}(\\Gamma_j)$ is the sum of the eigenvalues $\\xi$, it is usually possible to write down the complete set of eigenvalues from the information provided by the character table of $\\Gamma$ and the fact that the eigenvalues must be powers of $n^{\\mathrm{th}}$ roots of unity.\nWhere there is some ambiguity, we can pin down the correct choice for the set of eigenvalues from the following considerations. Suppose there is ambiguity regarding the eigenvalues of the conjugacy class $C_{mn}$ whose elements have order $mn$, $m,n \\in \\mathbb{N}$. If there is only one conjugacy class $C_n$ whose elements have order $n$, then for $g \\in C_{mn}$, $g^m \\in C_n$, and since $g, g^m$ commute, their corresponding representation matrices can be simultaneously diagonalised, and thus the eigenvalues of $C_n$ must be $m^{\\mathrm{th}}$ powers of those for $C_{mn}$. Suppose now that there is more than one conjugacy class $C_n^{(j)}$ whose elements have order $n$. Since $g^m$ are all conjugate for conjugate $g$, we see that there exists a $j$ such that $g^m \\in C_n^{(j)}$ for all $g \\in C_{mn}$. It turns out in all the cases considered here that there is only one consistent choice of $j$ such that the eigenvalues of $C_n^{(j)}$ are $m^{\\mathrm{th}}$ powers of those for $C_{mn}$ for all (irreducible) representations.\n\nAs was shown in \\cite[$\\S$4]{evans\/pugh:2009v}, the eigenvalues of the representation matrices of $\\Gamma$ can be written in the form $\\chi_{\\varrho_j}(C) = \\mathrm{Tr}(\\varrho_j(g))$, where $g$ is any element of the conjugacy class $C$ of $\\Gamma$.\nEvery element $g \\in \\Gamma$ is conjugate to an element $d$ in the maximal torus of $G_2$, i.e. 
$\\varrho_j(h^{-1}gh) = \\varrho_j(d) = (\\varrho_j|_{\\mathbb{T}^2})(t_1,t_2)$ for some $(t_1,t_2) \\in \\mathbb{T}^2$, for $j=1,2$, where $\\varrho_j|_{\\mathbb{T}^2}$ is given by \\cite{evans\/pugh:2012i}\n\\begin{align}\n(\\rho_1|_{\\mathbb{T}^2})&(t_1,t_2) = \\textrm{diag}(D(t_1), D(t_2^{-1}), D(t_1^{-1}t_2), 1), \\label{eqn:restrict_rho1G2_to_T2} \\\\\n(\\rho_2|_{\\mathbb{T}^2})&(t_1,t_2) \\nonumber \\\\\n&= \\textrm{diag}(D(t_1), D(t_2^{-1}), D(t_1^{-1}t_2), D(1), D(t_1t_2), D(t_1^2t_2^{-1}), D(t_1^{-1}t_2^2)), \\label{eqn:restrict_rho2G2_to_T2}\n\\end{align}\nwhere $D(t_i) = \\left( \\begin{array}{cc} \\mathrm{Re}(t_i) & -\\mathrm{Im}(t_i) \\\\ \\mathrm{Im}(t_i) & \\mathrm{Re}(t_i) \\end{array} \\right)$ for $t_i \\in \\mathbb{T}$.\nNow $\\mathrm{Tr}(\\varrho_j(g)) = Tr(\\varrho_j(d)) = \\Phi_j(t_1,t_2)$, thus the eigenvalues of $\\varrho_j(g)$ are all of the form $\\Phi_j(\\omega_1,\\omega_2)$ for $\\omega_1,\\omega_2 \\in \\mathbb{T}$, and hence its spectrum is contained in the interval $I_j$.\nAs shown in \\cite[Sections 3,4]{evans\/pugh:2012i} the spectrum of the fundamental representation $\\rho_j$ of $G_2$, and its restriction to $\\mathbb{T}^2$, is the whole of the interval $I_j = \\chi_{\\rho_j}(\\mathbb{T}^2)$, for $j=1,2$.\nThus the support of the spectral measure $\\mu_{\\Delta_j}$ of $\\Delta_j = \\Delta_{\\mathcal{G}^j_{\\Gamma}}$, the adjacency matrix of $\\mathcal{G}^j_{\\Gamma}$, is contained in $I_j$ when $\\Gamma$ is $G_2$ or one of its finite subgroups.\nThen for $\\Gamma \\subset G_2$, the eigenvalues of every group element in $\\varrho_1$ are necessarily of the form $\\mathcal{E}_{t_1,t_2} := \\{ 1,t_1,t_1^{-1},t_2,t_2^{-1},t_1t_2^{-1},t_1^{-1}t_2 \\}$, where $t_i \\in \\mathbb{T}$. Thus, by \\cite[Proposition]{king\/toumazet\/wybourne:1999}, if the eigenvalues of the group elements in the representation $\\gamma_1$ have this form then $\\gamma_1$ is a restriction $\\varrho_1$ of the seven-dimensional fundamental representation $\\rho_1$ of $G_2$ to $\\Gamma$.\n\nWe now turn to consider the possible restrictions $\\varrho_2$ of the fourteen-dimensional fundamental representation $\\rho_2$ of $G_2$ to $\\Gamma$, for fixed $\\varrho_1$.\nBy dimension considerations one can determine the possible candidates for $\\varrho_2$ from the Kronecker square of $\\varrho_1$.\nLet $\\gamma_2$ be such a candidate, i.e. a fourteen-dimensional representation such that $\\varrho_1^2 = \\mathrm{id}_{\\Gamma} + \\varrho_1 + \\gamma_2 + \\lambda$, where $\\lambda$ is (necessarily) some 27-dimensional representation of $\\Gamma$.\nWe make a choice of pair $(t_1^{C},t_2^{C})$ from the set $X_C$ of eigenvalues of group elements (from the conjugacy class $C$) in $\\varrho_1$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$.\nNote that the choice of $(t_1^C,t_2^C)$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$ is not unique. However, any other pair $(\\tilde{t}_1^C,\\tilde{t}_2^C)$ such that $\\mathcal{E}_{\\tilde{t}_1^C,\\tilde{t}_2^C} = X_C$ will appear in the orbit of $(t_1^C,t_2^C)$ under the action of the Weyl group $D_{12}$ of $G_2$, where the action of $D_{12}$ on $\\mathbb{T}^2$ is given in Section \\ref{sect:measures-different_domains}. 
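This orbit statement is easily verified numerically: the following Python sketch (illustrative only) generates the twelve elements of $D_{12}$ from the matrices $T_2$, $T_6$ of (\\ref{T2,T6}) and checks that the character $\\chi_{\\rho_1}$ of Section \\ref{sect:spec_measure-G2} takes a single value on a generic orbit:\n\\begin{verbatim}\nimport cmath\nimport numpy as np\n\nT2 = np.array([[0, -1], [-1, 0]])\nT6 = np.array([[0, 1], [-1, 1]])\n\ndef weyl_group():\n    # close {identity} under right multiplication by the generators T2, T6\n    elems, frontier = [np.eye(2, dtype=int)], [np.eye(2, dtype=int)]\n    while frontier:\n        g = frontier.pop()\n        for h in (T2, T6):\n            gh = g @ h\n            if not any(np.array_equal(gh, e) for e in elems):\n                elems.append(gh)\n                frontier.append(gh)\n    return elems\n\ndef act(T, w1, w2):\n    # T(w1, w2) = (w1^a11 w2^a12, w1^a21 w2^a22)\n    return (w1**T[0, 0] * w2**T[0, 1], w1**T[1, 0] * w2**T[1, 1])\n\ndef chi1(w1, w2):\n    # character of the seven-dimensional representation restricted to T^2\n    return 1 + w1 + 1\/w1 + w2 + 1\/w2 + w1\/w2 + w2\/w1\n\nw1 = cmath.exp(2j * cmath.pi * 0.13)     # a generic point of the torus\nw2 = cmath.exp(2j * cmath.pi * 0.31)\norbit = [act(T, w1, w2) for T in weyl_group()]\nprint(len(orbit))                                        # 12\nprint({round(chi1(a, b).real, 10) for a, b in orbit})    # a single value\n\\end{verbatim}\nThe same check applies verbatim to $\\chi_{\\rho_2}$. 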
Thus $\\Phi_j(\\tilde{t}_1^C,\\tilde{t}_2^C) = \\Phi_j(t_1^C,t_2^C)$ for $j=1,2$.\nThen one checks that $\\Phi_2(t_1^C,t_2^C) = \\chi_{\\gamma_2}(C)$ for each conjugacy class $C$, in which case $\\gamma_2$ is indeed a restriction $\\varrho_2$ of the fourteen-dimensional fundamental representation $\\rho_2$ of $G_2$ to $\\Gamma$.\n\n\\subsection{Spectral measures for finite subgroups of $G_2$} \\label{sect:spec_measure-subgroupsG2}\n\nThere is an $S$-matrix, with rows, columns labelled by the irreducible characters, conjugacy classes respectively of $\\Gamma$, which simultaneously diagonalizes the representations of $\\Gamma$ \\cite{kawai:1989}. The entries of this matrix for the trivial representation 0 are given by $S_{0,C} = \\sqrt{|C|} \\chi_0(C)\/\\sqrt{|\\Gamma|} = \\sqrt{|C|}\/\\sqrt{|\\Gamma|}$, for conjugacy class $C$.\nThen the $m,n^{\\mathrm{th}}$ moment $\\varsigma_{m,n}$ is given over $\\mathfrak{D}$ by (c.f. \\cite[Section 4]{evans\/pugh:2009v} for the case of finite subgroups of $SU(2)$)\n\\begin{equation} \\label{eqn:moments-subgroupG2}\n\\varsigma_{m,n} \\; = \\; \\int_{\\mathfrak{D}} x^m y^n \\mathrm{d}\\nu(x,y) \\; = \\; \\sum_C \\frac{|C|}{|\\Gamma|} \\chi_{\\varrho_1} (C)^m \\chi_{\\varrho_2} (C)^n.\n\\end{equation}\nThere is an analogous statement to (\\ref{eqn:moments-subgroupG2}) for the joint spectral measure $\\nu_{\\lambda,\\mu}$ over $\\mathfrak{D}_{\\lambda,\\mu}$ for any irreducible representations $\\lambda$, $\\mu$ of $\\Gamma$. The weight on the right hand side will again be $|C|\/|\\Gamma|$, since the same $S$-matrix simultaneously diagonalises all the representations of $\\Gamma$. Since this weight does not depend on the representations $\\lambda$, $\\mu$, we see that the $D_{12}$-invariant pullback measure $\\varepsilon$ over $\\mathbb{T}^2$ will be the same for any joint spectral measure $\\nu_{\\lambda,\\mu}$. This is analogous to the situation for nimrep graphs discussed in Section \\ref{sect:spec_measure-nimrepsG2}.\n\nWe wish to compute `inverse' maps $\\widetilde{\\Psi}: \\mathfrak{D} \\rightarrow \\mathbb{T}^2$ such that $\\Psi \\circ \\widetilde{\\Psi} = \\mathrm{id}$.\nThe following equation can easily be checked by substituting in $x=\\Phi_1(\\omega_1,\\omega_2)$, $y=\\Phi_2(\\omega_1,\\omega_2)$:\n$$(\\omega_j+\\omega_j^{-1})^3 + (1-x)(\\omega_j+\\omega_j^{-1})^2 + (y-2)(\\omega_j+\\omega_j^{-1}) +2y-x^2+2x-1 = 0,$$\nwhere $j=1,2$.\nSolving this cubic in $\\omega_j+\\omega_j^{-1} = 2\\cos(\\vartheta_j)$, we obtain for $l \\in \\{ 0,1,2 \\}$\n$$\\vartheta_j(l) = \\cos^{-1}\\left( \\frac{1}{6} \\left( x-1 + 2^{-1\/3} \\epsilon_l P + 2^{1\/3} \\overline{\\epsilon_l} (x^2-2x+7-3y)P^{-1} \\right) \\right)$$\nwhere $2^{1\/3}$ takes a real value, $\\epsilon_l = e^{2 \\pi i l\/3}$ and $P = (2x^3+21x^2-30x+7-45y-9xy + \\sqrt{(4x^3-x^2-2x-10xy-y^2-10y+7)(x^2+2x-7-4y)} \\, )^{1\/3}$. We note that for the roots of a cubic equation it does not matter whether the square root in $P$ is taken to be positive or negative.\nThen we set $\\Psi_{l,l'}(x,y) = (e^{\\vartheta_1(l)i},e^{\\vartheta_2(l')i})$, and we have that $\\Psi(\\Psi_{l,l'}(x,y)) = (x,y)$ for some $l,l' \\in \\{ 0,1,2 \\}$. The particular choice of pair $l,l'$ such that the equality $\\Psi \\circ \\Psi_{l,l'} = \\mathrm{id}$ is satisfied depends on $x,y$, but it is easy to check (eg. 
using Mathematica) whether a given choice satisfies this equality for any of the examples we consider.\nWe present in Table \\ref{Table:subgroupsG2-orbits(theta1,theta2)} the values of the eigenvalues $(\\chi_{\\varrho_1}(C), \\chi_{\\varrho_2}(C)) = (x,y) \\in \\mathfrak{D}$ which will appear for the finite subgroups of $G_2$, and (the orbits under $D_{12}$ of) the corresponding points $(\\theta_1,\\theta_2) \\in [0,1]^2$ such that $\\Psi(e^{2\\pi i \\theta_1},e^{2\\pi i \\theta_2}) = (x,y)$.\n\n\\renewcommand{\\arraystretch}{1.4}\n\n\\begin{table}[tbp]\n\\begin{center}\n\\begin{tabular}{|c|c|} \\hline\n$(x,y) \\in \\mathfrak{D}$ & Orbit of $(\\theta_1,\\theta_2) \\in [0,1]^2$ \\\\\n\\hline $(7,14)$ & $(0,0)$ \\\\\n\\hline $(-2,5)$ & $\\left(\\frac{1}{3},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{1}{3}\\right)$ \\\\\n\\hline $(-1,-2)$ & $\\left(0,\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{1}{2}\\right), \\left(\\frac{1}{2},0\\right)$ \\\\\n\\hline $(1,-1)$ & $\\left(0,\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{1}{3}\\right), \\left(\\frac{1}{3},0\\right), \\left(0,\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{2}{3}\\right), \\left(\\frac{2}{3},0\\right)$ \\\\\n\\hline $(3,2)$ & $\\left(0,\\frac{1}{4}\\right), \\left(\\frac{1}{4},\\frac{1}{4}\\right), \\left(\\frac{1}{4},0\\right), \\left(0,\\frac{3}{4}\\right), \\left(\\frac{3}{4},\\frac{3}{4}\\right), \\left(\\frac{3}{4},0\\right)$ \\\\\n\\hline $(-1,2)$ & $\\left(\\frac{1}{4},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{1}{4}\\right), \\left(\\frac{1}{4},\\frac{3}{4}\\right), \\left(\\frac{3}{4},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{3}{4}\\right), \\left(\\frac{3}{4},\\frac{1}{4}\\right)$ \\\\\n\\hline $(2,1)$ & $\\left(\\frac{1}{6},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{1}{6}\\right), \\left(\\frac{1}{6},\\frac{5}{6}\\right), \\left(\\frac{5}{6},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{5}{6}\\right), \\left(\\frac{5}{6},\\frac{1}{6}\\right)$ \\\\\n\\hline $(-1,1)$ & $\\left(\\frac{1}{6},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{5}{6}\\right), \\left(\\frac{5}{6},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{1}{6}\\right),$ \\\\\n& $\\left(\\frac{1}{2},\\frac{1}{6}\\right), \\left(\\frac{1}{3},\\frac{1}{2}\\right), \\left(\\frac{5}{6},\\frac{1}{3}\\right), \\left(\\frac{1}{2},\\frac{5}{6}\\right), \\left(\\frac{2}{3},\\frac{1}{2}\\right), \\left(\\frac{1}{6},\\frac{2}{3}\\right)$ \\\\\n\\hline $(0,0)$ & $\\left(\\frac{1}{7},\\frac{3}{7}\\right), \\left(\\frac{3}{7},\\frac{2}{7}\\right), \\left(\\frac{2}{7},\\frac{6}{7}\\right), \\left(\\frac{6}{7},\\frac{4}{7}\\right), \\left(\\frac{4}{7},\\frac{5}{7}\\right), \\left(\\frac{5}{7},\\frac{1}{7}\\right),$ \\\\\n& $\\left(\\frac{3}{7},\\frac{1}{7}\\right), \\left(\\frac{2}{7},\\frac{3}{7}\\right), \\left(\\frac{6}{7},\\frac{2}{7}\\right), \\left(\\frac{4}{7},\\frac{6}{7}\\right), \\left(\\frac{5}{7},\\frac{4}{7}\\right), \\left(\\frac{1}{7},\\frac{5}{7}\\right)$ \\\\\n\\hline $(1,0)$ & $\\left(\\frac{1}{8},\\frac{3}{8}\\right), \\left(\\frac{3}{8},\\frac{1}{4}\\right), \\left(\\frac{1}{4},\\frac{7}{8}\\right), \\left(\\frac{7}{8},\\frac{5}{8}\\right), \\left(\\frac{5}{8},\\frac{3}{4}\\right), \\left(\\frac{3}{4},\\frac{1}{8}\\right),$ \\\\\n& $\\left(\\frac{3}{8},\\frac{1}{8}\\right), \\left(\\frac{1}{4},\\frac{3}{8}\\right), \\left(\\frac{7}{8},\\frac{1}{4}\\right), \\left(\\frac{5}{8},\\frac{7}{8}\\right), 
\\left(\\frac{3}{4},\\frac{5}{8}\\right), \\left(\\frac{1}{8},\\frac{3}{4}\\right)$ \\\\\n\\hline $(-1,0)$ & $\\left(\\frac{1}{8},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{3}{8}\\right), \\left(\\frac{3}{8},\\frac{7}{8}\\right), \\left(\\frac{7}{8},\\frac{1}{2}\\right), \\left(\\frac{1}{2},\\frac{5}{8}\\right), \\left(\\frac{5}{8},\\frac{1}{8}\\right),$ \\\\\n& $\\left(\\frac{1}{2},\\frac{1}{8}\\right), \\left(\\frac{3}{8},\\frac{1}{2}\\right), \\left(\\frac{7}{8},\\frac{3}{8}\\right), \\left(\\frac{1}{2},\\frac{7}{8}\\right), \\left(\\frac{5}{8},\\frac{1}{2}\\right), \\left(\\frac{1}{8},\\frac{5}{8}\\right)$ \\\\\n\\hline $(-p,p+q+1)$ & $\\left(\\frac{1}{9},\\frac{4}{9}\\right), \\left(\\frac{4}{9},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{8}{9}\\right), \\left(\\frac{8}{9},\\frac{5}{9}\\right), \\left(\\frac{5}{9},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{1}{9}\\right),$ \\\\\n& $\\left(\\frac{4}{9},\\frac{1}{9}\\right), \\left(\\frac{1}{3},\\frac{4}{9}\\right), \\left(\\frac{8}{9},\\frac{1}{3}\\right), \\left(\\frac{5}{9},\\frac{8}{9}\\right), \\left(\\frac{2}{3},\\frac{5}{9}\\right), \\left(\\frac{1}{9},\\frac{2}{3}\\right)$ \\\\\n\\hline $(-q,1-p)$ & $\\left(\\frac{1}{9},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{2}{9}\\right), \\left(\\frac{2}{9},\\frac{8}{9}\\right), \\left(\\frac{8}{9},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{7}{9}\\right), \\left(\\frac{7}{9},\\frac{1}{9}\\right),$ \\\\\n& $\\left(\\frac{1}{3},\\frac{1}{9}\\right), \\left(\\frac{2}{9},\\frac{1}{3}\\right), \\left(\\frac{8}{9},\\frac{2}{9}\\right), \\left(\\frac{2}{3},\\frac{8}{9}\\right), \\left(\\frac{7}{9},\\frac{2}{3}\\right), \\left(\\frac{1}{9},\\frac{7}{9}\\right)$ \\\\\n\\hline $(p+q,1-q)$ & $\\left(\\frac{2}{9},\\frac{5}{9}\\right), \\left(\\frac{5}{9},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{7}{9}\\right), \\left(\\frac{7}{9},\\frac{4}{9}\\right), \\left(\\frac{4}{9},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{2}{9}\\right),$ \\\\\n& $\\left(\\frac{5}{9},\\frac{2}{9}\\right), \\left(\\frac{1}{3},\\frac{5}{9}\\right), \\left(\\frac{7}{9},\\frac{1}{3}\\right), \\left(\\frac{5}{9},\\frac{7}{9}\\right), \\left(\\frac{2}{3},\\frac{4}{9}\\right), \\left(\\frac{2}{9},\\frac{2}{3}\\right)$ \\\\\n\\hline $(0,-1)$ & $\\left(\\frac{1}{12},\\frac{5}{12}\\right), \\left(\\frac{5}{12},\\frac{1}{3}\\right), \\left(\\frac{1}{3},\\frac{11}{12}\\right), \\left(\\frac{11}{12},\\frac{7}{12}\\right), \\left(\\frac{7}{12},\\frac{2}{3}\\right), \\left(\\frac{2}{3},\\frac{1}{12}\\right),$ \\\\\n& $\\left(\\frac{5}{12},\\frac{1}{12}\\right), \\left(\\frac{1}{3},\\frac{5}{12}\\right), \\left(\\frac{11}{12},\\frac{1}{3}\\right), \\left(\\frac{7}{12},\\frac{11}{12}\\right), \\left(\\frac{2}{3},\\frac{7}{12}\\right), \\left(\\frac{1}{12},\\frac{2}{3}\\right)$ \\\\\n\\hline $\\left(\\frac{1+\\sqrt{13}}{2},1\\right)$ & $\\left(\\frac{1}{13},\\frac{4}{13}\\right), \\left(\\frac{4}{13},\\frac{3}{13}\\right), \\left(\\frac{3}{13},\\frac{12}{13}\\right), \\left(\\frac{12}{13},\\frac{9}{13}\\right), \\left(\\frac{9}{13},\\frac{10}{13}\\right), \\left(\\frac{10}{13},\\frac{1}{13}\\right),$ \\\\\n& $\\left(\\frac{4}{13},\\frac{1}{13}\\right), \\left(\\frac{3}{13},\\frac{4}{13}\\right), \\left(\\frac{12}{13},\\frac{3}{13}\\right), \\left(\\frac{9}{13},\\frac{12}{13}\\right), \\left(\\frac{10}{13},\\frac{9}{13}\\right), \\left(\\frac{1}{13},\\frac{10}{13}\\right)$ \\\\\n\\hline $\\left(\\frac{1-\\sqrt{13}}{2},1\\right)$ & $\\left(\\frac{2}{13},\\frac{7}{13}\\right), 
\\left(\\frac{7}{13},\\frac{5}{13}\\right), \\left(\\frac{5}{13},\\frac{11}{13}\\right), \\left(\\frac{11}{13},\\frac{6}{13}\\right), \\left(\\frac{6}{13},\\frac{8}{13}\\right), \\left(\\frac{8}{13},\\frac{2}{13}\\right),$ \\\\\n& $\\left(\\frac{7}{13},\\frac{2}{13}\\right), \\left(\\frac{5}{13},\\frac{7}{13}\\right), \\left(\\frac{11}{13},\\frac{5}{13}\\right), \\left(\\frac{6}{13},\\frac{11}{13}\\right), \\left(\\frac{8}{13},\\frac{6}{13}\\right), \\left(\\frac{2}{13},\\frac{8}{13}\\right)$ \\\\\n\\hline\n\\end{tabular}\n\\caption{$(x,y) \\in \\mathfrak{D}$ and (the orbits of) the corresponding points $(\\theta_1,\\theta_2) \\in [0,1]^2$. Here $p=2\\cos(4\\pi\/9)$, $q=2\\cos(8\\pi\/9)$} \\label{Table:subgroupsG2-orbits(theta1,theta2)}\n\\end{center}\n\\end{table}\n\n\\renewcommand{\\arraystretch}{1}\n\n\n\\section{Group $PSL(2;7) \\cong GL(3;2) \\cong \\Sigma (168)$} \\label{sect:II1}\n\nThe subgroup $PSL(2;7)$ of $G_2$ is an irreducible imprimitive group of order 168 which is isomorphic to the group $GL(3;2)$, and also to the subgroup $\\Sigma(168)$ of $SU(3)$ which was considered in \\cite{evans\/pugh:2010i}.\n\nThe group $PSL(2;7)$ has irreducible real representations $\\Sigma_d$ of dimensions $d = 1,6,7,8$, and two complex conjugate irreducible representations $\\Sigma_3, \\Sigma_3^{\\ast}$ of dimension 3. Its character table is given in Table \\ref{table:Character_table-II1} \\cite{littlewood:1934}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $(C_7,C_7^2,C_7^4)$ & $(C_7^3,C_7^5,C_7^6)$ & $(C_4,C_4^3)$ & $(C_3,C_3^2)$ \\\\\n\\hline $|C|$ & 1 & 21 & 24 & 24 & 42 & 56 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_3$ & 3 & -1 & $w$ & $\\overline{w}$ & 1 & 0 \\\\\n\\hline $\\Sigma_3^{\\ast}$ & 3 & -1 & $\\overline{w}$ & $w$ & 1 & 0 \\\\\n\\hline $\\Sigma_6$ & 6 & 2 & -1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_7$ & 7 & -1 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_8$ & 8 & 0 & 1 & 1 & 0 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for group $PSL(2;7)$, where $w = \\eta + \\eta^2 + \\eta^4 = (-1+i\\sqrt{7})\/2$, $\\eta = e^{2\\pi i\/7}$.} \\label{table:Character_table-II1}\n\\end{center}\n\\end{table}\n\nThere are two non-conjugate embeddings of $PSL(2;7)$ in $G_2$ \\cite{king\/toumazet\/wybourne:1999}, given by $\\varrho_1^{(1)} = \\Sigma_7$ and $\\varrho_1^{(2)} = \\Sigma_1 + \\Sigma_3 + \\Sigma_3^{\\ast}$.\nThe McKay graph $\\mathcal{G}^{\\varrho_1^{(1)}}_{PSL(2;7)}$ for $\\varrho_1^{(1)}$ is given in \\cite[Figure 1]{he:2003}. We reproduce it in Figure \\ref{Fig-McKay_Graph-II1-rho1} for completeness, along with the McKay graph $\\mathcal{G}^{\\varrho_1^{(2)}}_{PSL(2;7)}$ for $\\varrho_1^{(2)}$. 
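The adjacency matrices of these two McKay graphs can be read off directly from Table \\ref{table:Character_table-II1}: the $(i,j)$ entry is the multiplicity of $\\Sigma_j$ in $\\varrho_1^{(k)} \\otimes \\Sigma_i$. The following short Python sketch (ours, not part of the original paper; the character data is simply copied from the table above) illustrates this standard computation.\n\\begin{verbatim}\nimport numpy as np\n\n# Character table of PSL(2,7); columns ordered as in the table above,\n# class sizes 1, 21, 24, 24, 42, 56; w = eta + eta^2 + eta^4.\nw = 0.5 * (-1 + 1j * np.sqrt(7))\nsizes = np.array([1, 21, 24, 24, 42, 56])\nchars = np.array([                 # rows: Sigma_1, 3, 3*, 6, 7, 8\n    [1,  1, 1, 1, 1, 1],\n    [3, -1, w, w.conjugate(), 1, 0],\n    [3, -1, w.conjugate(), w, 1, 0],\n    [6,  2, -1, -1, 0, 0],\n    [7, -1, 0, 0, -1, 1],\n    [8,  0, 1, 1, 0, -1]], dtype=complex)\n\ndef mckay_adjacency(chi_rho):\n    # entry (i,j) = multiplicity of Sigma_j in chi_rho tensor Sigma_i\n    m = np.zeros((6, 6))\n    for i in range(6):\n        for j in range(6):\n            m[i, j] = np.average(chi_rho * chars[i] * chars[j].conj(),\n                                 weights=sizes).real\n    return np.round(m).astype(int)\n\nprint(mckay_adjacency(chars[4]))                        # varrho_1^(1) = Sigma_7\nprint(mckay_adjacency(chars[0] + chars[1] + chars[2]))  # varrho_1^(2)\n\\end{verbatim}\nThe two integer matrices produced in this way are the adjacency matrices of the graphs shown in Figure \\ref{Fig-McKay_Graph-II1-rho1}.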
We use the notation $n$, $n^{\\ast}$ to label the vertices corresponding to the irreducible representations $\\Sigma_n$, $\\Sigma_n^{\\ast}$ respectively.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=110mm]{Fig-McKay_Graph-II1-rho1}\\\\\n \\caption{The McKay graphs $\\mathcal{G}^{\\varrho_1^{(i)}}_{PSL(2;7)}$, $i=1,2$.} \\label{Fig-McKay_Graph-II1-rho1}\n\\end{center}\n\\end{figure}\n\nThe eigenvalues of the representation matrices are given in \\cite[Tables 4a,b]{king\/toumazet\/wybourne:1999}.\nThe decomposition of the Kronecker square of $\\varrho_1^{(i)}$ into irreducibles is given by\n$$(\\varrho_1^{(1)})^2 = \\mathrm{id} + \\varrho_1^{(1)} + \\Sigma_3 + \\Sigma_3^{\\ast} + 2\\Sigma_6 + \\Sigma_7 + 2 \\Sigma_8, \\qquad\n(\\varrho_1^{(2)})^2 = \\mathrm{id} + \\varrho_1^{(2)} + \\Sigma_1 + 2\\Sigma_3 + 2\\Sigma_3^{\\ast} + 2\\Sigma_6 + 2 \\Sigma_8,$$\nwhere $\\mathrm{id} = \\Sigma_1$.\nFrom dimension considerations there are thus two candidates for the fourteen-dimensional representation $\\varrho_2^{(i)}$, which are given by $\\Sigma_3 + \\Sigma_3^{\\ast} + \\Sigma_8$ and $\\Sigma_6 + \\Sigma_8$ for both $i=1,2$.\nHowever, as discussed in Section \\ref{sect:subgroupsG2}, since $\\chi_{\\varrho_2}(C) = \\Phi_2(t_1^C,t_2^C)$, where $(t_1^{C},t_2^{C})$ is a pair from the set of eigenvalues of group elements from the conjugacy class $C$, from knowledge of the eigenvalues from \\cite{king\/toumazet\/wybourne:1999} we see that the decomposition of the fundamental fourteen-dimensional representation into irreducible representations of $PSL(2;7)$ is given by $\\varrho_2 := \\varrho_2^{(i)} = \\Sigma_3 + \\Sigma_3^{\\ast} + \\Sigma_8$ for both $i=1,2$.\nThe values of $x^{(i)} = \\chi_{\\varrho_1^{(i)}}(C) \\in [-2,7]$, $y = \\chi_{\\rho_2}(C) \\in [-2,14]$ for $PSL(2;7)$ are given in Table \\ref{table:(x,y)-II1}, along with the values of $J^2\/64\\pi^4$ for the corresponding pairs $(\\theta_1,\\theta_2) \\in [0,1]^2$ obtained from Table \\ref{Table:subgroupsG2-orbits(theta1,theta2)}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $(C_7,C_7^2,C_7^4)$ & $(C_7^3,C_7^5,C_7^6)$ & $(C_4,C_4^3)$ & $(C_3,C_3^2)$ \\\\\n\\hline $\\chi_{\\varrho_1^{(1)}}(C) \\in [-2,7]$ & 7 & -1 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\chi_{\\varrho_1^{(2)}}(C) \\in [-2,7]$ & 7 & -1 & 0 & 0 & 3 & 1 \\\\\n\\hline $\\chi_{\\varrho_2}(C) \\in [-2,14]$ & 14 & -2 & 0 & 0 & 2 & -1 \\\\\n\\hline $J^2(\\theta_1,\\theta_2)\/64\\pi^4$ & 0 & 0 & 49\/4 & 49\/4 & 0 & 0 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{$\\chi_{\\varrho_j}(C)$ for group $PSL(2;7)$, $j=1,2$.} \\label{table:(x,y)-II1}\n\\end{center}\n\\end{table}\n\nLet $\\Omega(\\theta_1,\\theta_2) := \\Phi_1(e^{2\\pi i \\theta_1},e^{2\\pi i \\theta_2})^m \\Phi_2(e^{2\\pi i \\theta_1},e^{2\\pi i \\theta_2})^n \\in \\mathfrak{D}$ and $\\Omega^W(\\theta_1,\\theta_2)$ its orbit under $W=D_{12}$, $\\Omega^W(\\theta_1,\\theta_2) := \\sum_{g \\in D_{12}} \\Omega(g(\\theta_1,\\theta_2))\/12$.\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:Character_table-II1}, \\ref{table:(x,y)-II1} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n$$\\varsigma_{m,n} = \\frac{1}{168} \\Omega^W(0,0) + \\frac{21}{168} \\Omega^W(0,1\/2) + \\frac{56}{168} \\Omega^W(0,1\/3) + \\frac{42}{168} \\Omega' + \\frac{24+24}{168} \\Omega^W(1\/7,3\/7),$$\nwhere $\\Omega'$ is $\\Omega^W(1\/4,1\/2)$ for $\\varrho_1^{(1)}$, and $\\Omega^W(0,1\/4)$ for $\\varrho_1^{(2)}$.\nIt is easy to see that $\\Omega^W(0,0) 
= \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\mathrm{d}_1 \\times \\mathrm{d}_1$ and $3\\Omega^W(0,1\/2) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (4 \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 - \\mathrm{d}_1 \\times \\mathrm{d}_1)$.\nNow $12\\Omega^W(1\/7,3\/7) = 4 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}_7 \\times \\mathrm{d}_7$, as illustrated in Figure \\ref{Fig-OmegaWII1}$(a)$ since the Jacobian $J=0$ along the boundaries of the orbit of the fundamental domain, whilst $J^2(g(1\/7,3\/7)\/64\\pi^4) = 49\/4$ for all $g \\in D_{12}$.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=135mm]{Fig-OmegaWII1}\\\\\n \\caption{The orbits of $(a)$ $(1\/7,3\/7)$, \\mbox{$(b)$ $(0,1\/4) \\, \\bullet$ and $(1\/4,1\/2) \\, \\ast$,} $(c)$ $(0,1\/3)$.} \\label{Fig-OmegaWII1}\n\\end{center}\n\\end{figure}\n\nThe orbits of $(0,1\/4)$, $(1\/4,1\/2)$ are illustrated in Figure \\ref{Fig-OmegaWII1}$(b)$, represented by $\\bullet$, $\\ast$ respectively. Both orbits lie on the boundary of the fundamental domains of $\\mathbb{T}^2\/D_{12}$, however, only the orbit of $(1\/4,1\/2)$ lies on the boundary of the fundamental domains of $\\mathbb{T}^2\/S_3$, illustrated in Figure \\ref{fig:fund_domain-A2inT2}. Then $6K(0,1\/4)\\,\\Omega^W(0,1\/4) = 16 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) K(\\theta_1,\\theta_2) \\, \\mathrm{d}_4 \\times \\mathrm{d}_4$ where $K$ is an $S_3$-invariant function on $\\mathbb{T}^2$ which is zero on the boundaries of the fundamental domains of $\\mathbb{T}^2\/S_3$. Such an $S_3$-invariant function is given by the square $\\widetilde{J}^2$ of the Jacobian which appeared for the $A_2$ spectral measures in \\cite{evans\/pugh:2009v, evans\/pugh:2010i}, which is given by $\\widetilde{J}(\\theta_1,\\theta_2) = 4\\pi^2(\\sin(2\\pi(\\theta_1+\\theta_2))-\\sin(2\\pi(2\\theta_1-\\theta_2))-\\sin(2\\pi(2\\theta_2-\\theta_1)))$. Now $|\\widetilde{J}(0,1\/4)| = 8\\pi^2$, thus we take $K = 16\\widetilde{J}^2\/64\\pi^4 = \\widetilde{J}^2\/4\\pi^4$, and we have $K(0,1\/4) = 16$. 
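As a quick numerical sanity check of this normalisation (ours, not part of the original argument), one can verify $|\\widetilde{J}(0,1\/4)| = 8\\pi^2$, and hence $K(0,1\/4) = \\widetilde{J}(0,1\/4)^2\/4\\pi^4 = 16$, directly:\n\\begin{verbatim}\nimport numpy as np\n\ndef Jtilde(t1, t2):\n    # the A_2 (SU(3)) Jacobian used above, as a function of (theta_1, theta_2)\n    return 4 * np.pi**2 * (np.sin(2*np.pi*(t1 + t2))\n                           - np.sin(2*np.pi*(2*t1 - t2))\n                           - np.sin(2*np.pi*(2*t2 - t1)))\n\nprint(abs(Jtilde(0, 0.25)) * np.pi**(-2))       # 8.0, i.e. |Jtilde| = 8 pi^2\nprint(0.25 * Jtilde(0, 0.25)**2 * np.pi**(-4))  # 16.0, i.e. K(0, 0.25) = 16\n\\end{verbatim}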
We can thus also obtain an expression for $\\Omega^W(1\/4,1\/2)$, where from Figure \\ref{Fig-OmegaWII1}$(b)$ we see that $6\\Omega^W(1\/4,1\/2) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( (16 - K(\\theta_1,\\theta_2)) \\, \\mathrm{d}_4 \\times \\mathrm{d}_4 - 4 \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 \\right)$.\n\nFinally, the orbit of $(\\theta_1,\\theta_2)=(0,1\/3)$ is illustrated in Figure \\ref{Fig-OmegaWII1}$(c)$, and thus $6\\Omega^W(0,1\/3) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\, (9 \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 - 3 \\, \\mathrm{d}^{(1)})$, and we obtain\n\n\\begin{Thm} \\label{thm:measureII1}\nThe joint spectral measure (over $\\mathbb{T}^2$) for the non-conjugate embeddings of the projective special linear group $PSL(2;7)$ over the finite field $\\mathbb{F}_7$ into the fundamental representations of $G_2$ is\n\\begin{equation} \\label{eqn:measureII1}\n\\mathrm{d}\\varepsilon = \\frac{1}{672\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{24} K' \\, \\mathrm{d}_4 \\times \\mathrm{d}_4 + \\frac{1}{2} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 - \\frac{1}{28} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 - \\frac{1}{6} \\, \\mathrm{d}^{(1)},\n\\end{equation}\nwhere $K' = 16-K$ for the embedding of $PSL(2;7)$ in $G_2$ given by $\\varrho_1^{(1)} = \\Sigma_7$ and $K' = K$ for the embedding given by $\\varrho_1^{(2)} = \\Sigma_1 + \\Sigma_3 + \\Sigma_3^{\\ast}$, where $K(\\theta_1,\\theta_2) = 4(\\sin(2\\pi(\\theta_1+\\theta_2))-\\sin(2\\pi(2\\theta_1-\\theta_2))-\\sin(2\\pi(2\\theta_2-\\theta_1)))^2 = \\widetilde{J}(\\theta_1,\\theta_2)^2\/4\\pi^4$ as above, $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity and $\\mathrm{d}^{(k+4)}$ is the uniform measure on the points in $C_k^W$.\n\\end{Thm}\n\n\\begin{Rem}\nNote that the measure in Theorem \\ref{thm:measureII1} for the second embedding $\\varrho_1^{(2)}$ of $PSL(2;7)$ in $G_2$ is precisely that for $\\Sigma (168) \\subset SU(3)$ given in \\cite[Theorem 16]{evans\/pugh:2010i}. However, (\\ref{eqn:measureII1}) has a neater expression than that given in \\cite{evans\/pugh:2010i}, because here we were able to use the Jacobian $J$ for $G_2$ which is also 0 along the diagonal, whereas the Jacobian for $SU(3)$ (essentially $K$ in Theorem \\ref{thm:measureII1}) is non-zero along the diagonal.
\\end{Rem}\n\n\n\\section{Group $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$} \\label{sect:II2}\n\nThe subgroup $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ of $G_2$ is an irreducible imprimitive group of order 1344.\nIt has eleven irreducible representations (nine real and two complex conjugate representations) and its character table is given in Table \\ref{table:Character_table-II2} (see \\cite{littlewood:1934, he:2003}),\nwhere elements in $C_4$, $C_4^{\\prime}$, $C_4^{\\prime}$, $C_7$, $C_7^{\\prime}$, $C_6$, $C_3$ are of cycle type $(C_4,C_4^3)$, $(C_4^{\\prime},C_4^{\\prime3})$, $(C_4^{\\prime\\prime},C_4^{\\prime\\prime3})$, $(C_7,C_7^2,C_7^4)$, $(C_7^3,C_7^5,C_7^6)$, $(C_6,C_6^5)$, $(C_3,C_3^2)$ respectively.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $C_2^{\\prime}$ & $C_2^{\\prime\\prime}$ & $C_4$ & $C_4^{\\prime}$ & $C_4^{\\prime\\prime}$ & $C_7$ & $C_7^{\\prime}$ & $C_6$ & $C_3$ \\\\\n\\hline $|C|$ & 1 & 7 & 42 & 42 & 84 & 168 & 168 & 192 & 192 & 224 & 224 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_3$ & 3 & 3 & -1 & -1 & -1 & 1 & 1 & $w$ & $\\overline{w}$ & 0 & 0 \\\\\n\\hline $\\Sigma_3^{\\ast}$ & 3 & 3 & -1 & -1 & -1 & 1 & 1 & $\\overline{w}$ & $w$ & 0 & 0 \\\\\n\\hline $\\Sigma_6$ & 6 & 6 & 2 & 2 & 2 & 0 & 0 & -1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_7^{(1)}$ & 7 & -1 & -1 & 3 & -1 & -1 & 1 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_7^{(1)\\prime}$ & 7 & -1 & 3 & -1 & -1 & 1 & -1 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_7^{(2)}$ & 7 & 7 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & 1 & 1 \\\\\n\\hline $\\Sigma_8$ & 8 & 8 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\\\\n\\hline $\\Sigma_{14}$ & 14 & -2 & 2 & 2 & -2 & 0 & 0 & 0 & 0 & 1 & -1 \\\\\n\\hline $\\Sigma_{21}$ & 21 & -3 & -3 & 1 & 1 & 1 & -1 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{21}'$ & 21 & -3 & 1 & -3 & 1 & -1 & 1 & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$, where $w = \\eta + \\eta^2 + \\eta^4 = (-1+i\\sqrt{7})\/2$, $\\eta = e^{2\\pi i\/7}$.} \\label{table:Character_table-II2}\n\\end{center}\n\\end{table}\n\nThere are five non-conjugate seven-dimensional representations, $\\gamma_1^{(1)} = \\Sigma_1 + \\Sigma_3 + \\Sigma_3^{\\ast}$, $\\gamma_1^{(2)} = \\Sigma_1 + \\Sigma_6$, $\\gamma_1^{(3)} = \\Sigma_7^{(1)}$, $\\gamma_1^{(4)} = \\Sigma_7^{(1)\\prime}$ and $\\gamma_1^{(5)} = \\Sigma_7^{(2)}$.\nThese all satisfy the condition that $\\gamma_1^{(i)}$ appears in the decomposition of $(\\gamma_1^{(i)})^2$.\nWe thus consider the eigenvalues of the representation matrices to determine which of the $\\gamma_1^{(i)}$ are embeddings of $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ in $G_2$.\nThese eigenvalues are given in Table \\ref{table:evalues-II2} for representations of dimension less than or equal to 7. As described in Section \\ref{sect:subgroupsG2}, these eigenvalues can be determined from the character table of $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$. 
The additional information that is needed is to note that the eigenvalues for group elements in $(C_4,C_4^3)$ square to those for elements in $C_2$, those for $(C_4^{\\prime},C_4^{\\prime3})$ square to those for $C_2^{\\prime}$, those for $(C_4^{\\prime\\prime},C_4^{\\prime\\prime3})$ square to those for $C_2^{\\prime\\prime}$, whilst those for $(C_6,C_6^5)$ square to those for $(C_3,C_3^2)$ and also cube to those for $C_2$.\nThese observations follow from \\cite{littlewood:1934} and the fact that, for example, it is impossible to choose eigenvalues for group elements in $(C_6,C_6^5)$ which cube to those for elements in $C_2^{\\prime}$ or $C_2^{\\prime\\prime}$ for all irreducible representations.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|} \\hline\n & $\\Sigma_1$ & $\\Sigma_3$ & $\\Sigma_3^{\\ast}$ & $\\Sigma_6$ \\\\\n\\hline \\hline $C_1$ & 1 & $(1,1,1)$ & $(1,1,1)$ & $(1,1,1,1,1,1)$ \\\\\n\\hline $C_2$ & 1 & $(1,1,1)$ & $(1,1,1)$ & $(1,1,1,1,1,1)$ \\\\\n\\hline $C_2^{\\prime}$ & 1 & $(1,-1,-1)$ & $(1,-1,-1)$ & $(1,1,1,1,-1,-1)$ \\\\\n\\hline $C_2^{\\prime\\prime}$ & 1 & $(1,-1,-1)$ & $(1,-1,-1)$ & $(1,1,1,1,-1,-1)$ \\\\\n\\hline $C_4$ & 1 & $(1,-1,-1)$ & $(1,-1,-1)$ & $(1,1,1,1,-1,-1)$ \\\\\n\\hline $C_4^{\\prime}$ & 1 & $(1,i,-i)$ & $(1,i,-i)$ & $(1,1,-1,-1,i,-i)$ \\\\\n\\hline $C_4^{\\prime\\prime}$ & 1 & $(1,i,-i)$ & $(1,i,-i)$ & $(1,1,-1,-1,i,-i)$ \\\\\n\\hline $C_7$ & 1 & $(\\eta,\\eta^2,\\eta^4)$ & $(\\eta^3,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_7^{\\prime}$ & 1 & $(\\eta^3,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^2,\\eta^4)$ & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_6$ & 1 & $(1,\\mu^2,\\mu^4)$ & $(1,\\mu^2,\\mu^4)$ & $(1,1,\\mu^2,\\mu^2,\\mu^4,\\mu^4)$ \\\\\n\\hline $C_3$ & 1 & $(1,\\omega,\\omega^2)$ & $(1,\\omega,\\omega^2)$ & $(1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ \\\\\n\\hline\n\\end{tabular} \\\\\n$\\;$ \\\\\n\\begin{tabular}{|c||c|c|c|} \\hline\n & $\\Sigma_7^{(1)}$ & $\\Sigma_7^{(1)\\prime}$ & $\\Sigma_7^{(2)}$ \\\\\n\\hline \\hline $C_1$ & $(1,1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1)$ \\\\\n\\hline $C_2$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,1,1,1,1)$ \\\\\n\\hline $C_2^{\\prime}$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,1,1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ \\\\\n\\hline $C_2^{\\prime\\prime}$ & $(1,1,1,1,1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ \\\\\n\\hline $C_4$ & $(1,-1,-1,i,i,-i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ & $(1,1,1,-1,-1,i,-i)$ \\\\\n\\hline $C_4^{\\prime}$ & $(1,-1,-1,i,i,-i,-i)$ & $(1,1,1,-1,-1,i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ \\\\\n\\hline $C_4^{\\prime\\prime}$ & $(1,1,1,-1,-1,i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ \\\\\n\\hline $C_7$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_7^{\\prime}$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_6$ & $(1,-1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(1,-1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(1,1,1,\\mu^2,\\mu^2,\\mu^4,\\mu^4)$ \\\\\n\\hline $C_3$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Eigenvalues of group elements in each conjugacy class of $PSL(2;7) \\rtimes 
\\mathbb{Z}_2^3$ for irreducible representations of dimension $\\leq 7$, where $\\omega = e^{2\\pi i\/3}$, $\\mu = e^{2\\pi i\/6}$ and $\\eta = e^{2\\pi i\/7}$.} \\label{table:evalues-II2}\n\\end{center}\n\\end{table}\n\nFrom considering the set of eigenvalues $X_C$ for group elements in $C$ in the representation $\\gamma_1^{(i)}$, we see that there is no choice of $(t_1^C,t_2^C) \\in X_C$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$ for $i=2,3$ when $C = C_2^{\\prime}$ and for $i=4$ when $C = C_2^{\\prime\\prime}$. However, such a choice does exist for $i=1,5$ for all conjugacy classes $C$, thus we set $\\varrho_1^{(1)} = \\gamma_1^{(5)}$, $\\varrho_1^{(2)} = \\gamma_1^{(1)}$. We present one such choice of eigenvalues $(t_1^C,t_2^C)$ in Table \\ref{table:t1,t2-II2}.\nThe McKay graphs $\\mathcal{G}^{\\varrho_1^{(1)}}_{PSL(2;7) \\rtimes \\mathbb{Z}_2^3}$, $\\mathcal{G}^{\\varrho_1^{(2)}}_{PSL(2;7) \\rtimes \\mathbb{Z}_2^3}$ for $\\varrho_1^{(2)}$ for $\\varrho_1^{(1)}$, $\\varrho_1^{(2)}$ are given in Figure \\ref{Fig-McKay_Graph-II2-rho1}. We use the notation $n$, $n^{\\ast}$, $n^{(i)\\prime}$ to label the vertices corresponding to the irreducible representations $\\Sigma_n$, $\\Sigma_n^{\\ast}$, $\\Sigma_n^{(i)\\prime}$ respectively. Since both McKay graphs are not connected, we see that $\\varrho_1^{(i)}$ is not a faithful representation, $i=1,2$.\nNote that the McKay graph for $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ given in \\cite[Figure 1]{he:2003} is not the McKay graph for a restriction of the fundamental seven-dimensional representation, as claimed there, but rather for the irreducible representation $\\Sigma_7^{(1)}$ (or equivalently the representation $\\Sigma_7^{(1)\\prime}$).\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=115mm]{Fig-McKay_Graph-II2-rho1}\\\\\n \\caption{The McKay graphs $\\mathcal{G}^{\\varrho_1^{(i)}}_{PSL(2;7) \\rtimes \\mathbb{Z}_2^3}$, $i=1,2$.} \\label{Fig-McKay_Graph-II2-rho1}\n\\end{center}\n\\end{figure}\n\nThe decomposition of the Kronecker square of $\\varrho_1^{(i)}$ into irreducibles is given by\n$$(\\varrho_1^{(1)})^2 = \\mathrm{id} + \\varrho_1^{(1)} + \\Sigma_3 + \\Sigma_3^{\\ast} + 2\\Sigma_6 + \\Sigma_7^{(2)} + 2 \\Sigma_8, \\qquad\n(\\varrho_1^{(2)})^2 = \\mathrm{id} + \\varrho_1^{(2)} + \\Sigma_1 + 2\\Sigma_3 + 2\\Sigma_3^{\\ast} + 2\\Sigma_6 + 2 \\Sigma_8,$$\nwhere $\\mathrm{id} = \\Sigma_1$.\nFrom dimension considerations there are thus two candidates for the fourteen-dimensional representation $\\varrho_2^{(i)}$, which are given by $\\gamma_2^{(1)} = \\Sigma_3 + \\Sigma_3^{\\ast} + \\Sigma_8$ and $\\gamma_2^{(2)} = \\Sigma_6 + \\Sigma_8$ for both $i=1,2$.\nHowever, since $\\chi_{\\varrho_2^{(i)}}(C) = \\Phi_2(t_1^C,t_2^C)$, we see from Table \\ref{table:t1,t2-II2} that the decomposition of the fundamental fourteen-dimensional representation into irreducible representations of $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ is given by $\\varrho_2 = \\gamma_2^{(1)} = \\Sigma_3 + \\Sigma_3^{\\ast} + \\Sigma_8$ for both $i=1,2$.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|} \\hline\n& \\multicolumn{2}{|c|}{$\\varrho_1^{(1)}$} & \\multicolumn{2}{|c|}{$\\varrho_1^{(2)}$} & & \\\\\n$C$ & $(t_1^C,t_2^C)$ & $\\chi_{\\varrho_2^{(1)}}(C)$ & $(t_1^C,t_2^C)$ & $\\chi_{\\varrho_2^{(2)}}(C)$ & $\\chi_{\\gamma_2^{(1)}}(C)$ & $\\chi_{\\gamma_2^{(2)}}(C)$ \\\\\n\\hline \\hline $C_1$ & $(1,1)$ & 14 & $(1,1)$ & 14 & 14 & 14 \\\\\n\\hline $C_2$ & $(1,1)$ & 14 & $(1,1)$ & 14 & 14 & 14 \\\\\n\\hline $C_2^{\\prime}$ & $(1,-1)$ & 
-2 & $(1,-1)$ & -2 & -2 & 2 \\\\\n\\hline $C_2^{\\prime\\prime}$ & $(1,-1)$ & -2 & $(1,-1)$ & -2 & -2 & 2 \\\\\n\\hline $C_4$ & $(1,-1)$ & -2 & $(1,-1)$ & -2 & -2 & 2 \\\\\n\\hline $C_4^{\\prime}$ & $(-1,i)$ & 2 & $(1,i)$ & 2 & 2 & 0 \\\\\n\\hline $C_4^{\\prime\\prime}$ & $(-1,i)$ & 2 & $(1,i)$ & 2 & 2 & 0 \\\\\n\\hline $C_7$ & $(\\eta,\\eta^5)$ & 0 & $(\\eta,\\eta^5)$ & 0 & 0 & 0 \\\\\n\\hline $C_7^{\\prime}$ & $(\\eta^2,\\eta^3)$ & 0 & $(\\eta^2,\\eta^3)$ & 0 & 0 & 0 \\\\\n\\hline $C_6$ & $(1,\\mu^2)$ & -1 & $(1,\\mu^2)$ & -1 & -1 & -1 \\\\\n\\hline $C_3$ & $(1,\\omega)$ & -1 & $(1,\\omega)$ & -1 & -1 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Choice of eigenvalues $(t_1^C,t_2^C)$ for $\\varrho_1^{(i)}$, $i=1,2$, and corresponding values of $\\chi_{\\varrho_2^{(i)}}(C)$.} \\label{table:t1,t2-II2}\n\\end{center}\n\\end{table}\n\nThe values of $x^{(i)} = \\chi_{\\varrho_1^{(i)}}(C) \\in [-2,7]$, $y = \\chi_{\\rho_2}(C) \\in [-2,14]$ for $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ are given in Table \\ref{table:(x,y)-II1}, along with the values of $J^2\/64\\pi^4$ for the corresponding pairs $(\\theta_1,\\theta_2) \\in [0,1]^2$ obtained from Table \\ref{Table:subgroupsG2-orbits(theta1,theta2)}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $C_2^{\\prime}$ & $C_2^{\\prime\\prime}$ & $C_4$ & $C_4^{\\prime}$ & $C_4^{\\prime\\prime}$ & $C_7$ & $C_7^{\\prime}$ & $C_6$ & $C_3$ \\\\\n\\hline $\\chi_{\\varrho_1^{(1)}}(\\Gamma_j) \\in [-2,7]$ & 7 & 7 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & 1 & 1 \\\\\n\\hline $\\chi_{\\varrho_1^{(2)}}(\\Gamma_j) \\in [-2,7]$ & 7 & 7 & -1 & -1 & -1 & 3 & 3 & 0 & 0 & 1 & 1 \\\\\n\\hline $\\chi_{\\varrho_2}(\\Gamma_j) \\in [-2,14]$ & 14 & 14 & -2 & -2 & -2 & 2 & 2 & 0 & 0 & -1 & -1 \\\\\n\\hline $J^2(\\theta_1,\\theta_2)\/64\\pi^4$ & 0 & 0 & 0 & 0 & 0 & 8 & 8 & 49\/4 & 49\/4 & 9 & 0 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{$\\chi_{\\varrho_j}(C)$ for group $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$, $j=1,2$.} \\label{table:(x,y)-II2}\n\\end{center}\n\\end{table}\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:Character_table-II2}, \\ref{table:(x,y)-II2} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1+7}{1344} \\Omega^W(0,0) + \\frac{42+42+84}{1344} \\Omega^W(0,1\/2) + \\frac{224+224}{1344} \\Omega^W(0,1\/3) \\\\\n& \\quad + \\frac{168+168}{1344} \\Omega' + \\frac{192+192}{1344} \\Omega^W(1\/7,3\/7),\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1}, and $\\Omega'$ is $\\Omega^W(1\/4,1\/2)$ for $\\varrho_1^{(1)}$, and $\\Omega^W(0,1\/4)$ for $\\varrho_1^{(2)}$. 
Thus the joint moments are precisely those for the two embeddings of $PSL(2;7)$ in $G_2$, and we obtain:\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for the non-conjugate embeddings of $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ into the fundamental representations of $G_2$ is given by (\\ref{eqn:measureII1}),\nwhere $K' = 16-K$ for the embedding of $PSL(2;7) \\rtimes \\mathbb{Z}_2^3$ in $G_2$ given by $\\varrho_1^{(1)} = \\Sigma_7^{(2)}$ and $K' = K$ for the embedding given by $\\varrho_1^{(2)} = \\Sigma_1 + \\Sigma_3 + \\Sigma_3^{\\ast}$, and where $K$ is again given by $-4K(\\omega_1,\\omega_2) = (\\omega_1\\omega_2 - \\omega_1^{-1}\\omega_2^{-1} - \\omega_1^2\\omega_2^{-1} + \\omega_1^{-2}\\omega_2 + \\omega_1\\omega_2^{-2} - \\omega_1^{-1}\\omega_2^2)^2$ for $\\omega_1,\\omega_2\\in\\mathbb{T}$.\n\\end{Thm}\n\n\n\n\n\\section{Group $PGL(2;7)$}\n\nThe subgroup $PGL(2;7)$ of $G_2$ is an irreducible primitive group of order 336.\nIt has nine irreducible representations, all real, and its character table is given in Table \\ref{table:Character_table-IP3} \\cite{collins:1990}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $C_2^{\\prime}$ & $C_8$ & $C_8^{\\prime}$ & $C_4$ & $C_7$ & $C_6$ & $C_3$ \\\\\n\\hline $|C|$ & 1 & 21 & 28 & 42 & 42 & 42 & 48 & 56 & 56 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_1^{\\prime}$ & 1 & 1 & -1 & -1 & -1 & 1 & 1 & -1 & 1 \\\\\n\\hline $\\Sigma_6^{(1)}$ & 6 & -2 & 0 & 0 & 0 & 2 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_6^{(2)}$ & 6 & 2 & 0 & $\\sqrt{2}$ & $-\\sqrt{2}$ & 0 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_6^{(2)\\prime}$ & 6 & 2 & 0 & $-\\sqrt{2}$ & $\\sqrt{2}$ & 0 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_7$ & 7 & -1 & 1 & -1 & -1 & -1 & 0 & 1 & 1 \\\\\n\\hline $\\Sigma_7^{\\prime}$ & 7 & -1 & -1 & 1 & 1 & -1 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_8$ & 8 & 0 & 2 & 0 & 0 & 0 & 1 & -1 & -1 \\\\\n\\hline $\\Sigma_8^{\\prime}$ & 8 & 0 & -2 & 0 & 0 & 0 & 1 & 1 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $PGL(2;7)$.} \\label{table:Character_table-IP3}\n\\end{center}\n\\end{table}\n\nThere are eight non-conjugate seven-dimensional representations, $\\gamma_1^{(1)} = \\Sigma_1 + \\Sigma_6^{(1)}$, $\\gamma_1^{(2)} = \\Sigma_1 + \\Sigma_6^{(2)}$, $\\gamma_1^{(3)} = \\Sigma_1 + \\Sigma_6^{(2)\\prime}$, $\\gamma_1^{(4)} = \\Sigma_1^{\\prime} + \\Sigma_6^{(1)}$, $\\gamma_1^{(5)} = \\Sigma_1^{\\prime} + \\Sigma_6^{(2)}$, $\\gamma_1^{(6)} = \\Sigma_1^{\\prime} + \\Sigma_6^{(2)\\prime}$, $\\gamma_1^{(7)} = \\Sigma_7$ and $\\gamma_1^{(8)} = \\Sigma_7^{\\prime}$.\nThe decomposition of the Kronecker squares of the $\\gamma_1^{(i)}$ are given by:\n\\begin{align*}\n(\\gamma_1^{(1)})^2 & = \\mathrm{id} + \\gamma_1^{(1)} + \\Sigma_1^{\\prime} + 2\\Sigma_6^{(1)} + \\Sigma_6^{(2)} + \\Sigma_6^{(2)\\prime} + \\Sigma_8 + \\Sigma_8^{\\prime}, \\\\\n(\\gamma_1^{(2)})^2 & = \\mathrm{id} + \\gamma_1^{(2)} + 2\\Sigma_6^{(2)} + \\Sigma_6^{(2)\\prime} + \\Sigma_7^{\\prime} + \\Sigma_8 + \\Sigma_8^{\\prime}, \\\\\n(\\gamma_1^{(4)})^2 & = \\mathrm{id} + \\gamma_1^{(4)} + \\Sigma_1 + 2\\Sigma_6^{(1)} + \\Sigma_6^{(2)} + \\Sigma_6^{(2)\\prime} + \\Sigma_8 + \\Sigma_8^{\\prime}, \\\\\n(\\gamma_1^{(5)})^2 & = \\mathrm{id} + \\Sigma_1 + 3\\Sigma_6^{(2)} + \\Sigma_6^{(2)\\prime} + \\Sigma_7^{\\prime} + \\Sigma_8 + \\Sigma_8^{\\prime}, \\\\\n(\\gamma_1^{(7)})^2 & = \\mathrm{id} + \\gamma_1^{(7)} + \\Sigma_1 + \\Sigma_6^{(1)} + \\Sigma_6^{(2)} + 
\\Sigma_6^{(2)\\prime} + \\Sigma_7^{\\prime} + \\Sigma_8 + \\Sigma_8^{\\prime}, \\\\\n(\\gamma_1^{(8)})^2 & = \\mathrm{id} + \\gamma_1^{(8)} + \\Sigma_1 + \\Sigma_6^{(1)} + \\Sigma_6^{(2)} + \\Sigma_6^{(2)\\prime} + \\Sigma_7 + \\Sigma_8 + \\Sigma_8^{\\prime},\n\\end{align*}\nwhere $\\mathrm{id} = \\Sigma_1$.\nNote, we have omitted the decompositions for $\\gamma_1^{(3)}$, $\\gamma_1^{(6)}$, since from the character table we see that these representations are essentially the same as $\\gamma_1^{(2)}$, $\\gamma_1^{(5)}$ respectively.\nThen we see that $\\gamma_1^{(i)}$ does not appear in the decomposition of $(\\gamma_1^{(i)})^2$ for $i=5,6$, therefore they are not embeddings of $PGL(2;7)$ in $G_2$.\nFrom dimension considerations, we see that candidates $\\gamma_2^{(j)}$ for $\\varrho_2$ are $\\gamma_2^{(1)}=\\Sigma_6^{(1)}+\\Sigma_8$, $\\gamma_2^{(2)}=\\Sigma_6^{(1)}+\\Sigma_8^{\\prime}$, $\\gamma_2^{(3)}=\\Sigma_6^{(2)}+\\Sigma_8$, $\\gamma_2^{(4)}=\\Sigma_6^{(2)}+\\Sigma_8^{\\prime}$, $\\gamma_2^{(5)}=\\Sigma_6^{(2)\\prime}+\\Sigma_8$ and $\\gamma_2^{(6)}=\\Sigma_6^{(2)\\prime}+\\Sigma_8^{\\prime}$, where $1 \\leq j \\leq 6$ for $\\gamma_1^{(i)}$ when $i=1,4,7,8$, and $3 \\leq j \\leq 6$ when $i=2$.\nThen with $(x_i^C,y_j^C) = (\\chi_{\\gamma_1^{(i)}}(C),\\chi_{\\gamma_2^{(j)}}(C))$, we see that $(x_i^C,y_j^C) \\not \\in \\mathfrak{D}$ for any candidate $\\gamma_2^{(j)}$ in the cases $i=1,2,3,7$ when $C=C_2^{\\prime}$. Thus $\\gamma_1^{(i)}$ cannot define an embedding of $PGL(2;7)$ in $G_2$ for $i=1,2,3,7$.\nWe also see that $(x_8^C,y_j^C) \\not \\in \\mathfrak{D}$ for $j=3,4,5,6$.\n\nThus we have candidates $(\\gamma_1^{(i)},\\gamma_2^{(j)})$ for $(\\varrho_1,\\varrho_2)$ when $i=4,8$, where $1 \\leq j \\leq 6$ for $i=4$ and $j \\in \\{ 1,2 \\}$ for $i=8$.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|} \\hline\n & $\\Sigma_1$ & $\\Sigma_1^{\\prime}$ & $\\Sigma_6^{(1)}$ & $\\Sigma_6^{(2)}$ & $\\Sigma_7$ \\\\\n\\hline \\hline $C_1$ & 1 & 1 & $(1,1,1,1,1,1)$ & $(1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1)$ \\\\\n\\hline $C_2$ & 1 & 1 & $(1,1,-1,-1,-1,-1)$ & $(1,1,1,1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ \\\\\n\\hline $C_2^{\\prime}$ & 1 & -1 & $(1,1,1,-1,-1,-1)$ & $(1,1,1,-1,-1,-1)$ & $(1,1,1,1,-1,-1,-1)$ \\\\\n\\hline $C_8$ & 1 & -1 & $(1,-1,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu,\\nu^7)$ & $(-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ \\\\\n\\hline $C_8^{\\prime}$ & 1 & -1 & $(1,-1,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu^3,\\nu^5)$ & $(-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ \\\\\n\\hline $C_4$ & 1 & 1 & $(1,1,i,i,-i,-i)$ & $(1,1,-1,-1,i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ \\\\\n\\hline $C_7$ & 1 & 1 & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_6$ & 1 & -1 & $(1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(1,1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ \\\\\n\\hline $C_3$ & 1 & 1 & $(1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\begin{tabular}{|c||c|c|c|} \\hline\n & $\\Sigma_7^{\\prime}$ & $\\Sigma_8$ & $\\Sigma_8^{\\prime}$ \\\\\n\\hline \\hline $C_1$ & $(1,1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1,1)$ \\\\\n\\hline $C_2$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,1,-1,-1,-1,-1)$ & $(1,1,1,1,-1,-1,-1,-1)$ \\\\\n\\hline $C_2^{\\prime}$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,1,1,1,1,-1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1,-1)$ 
\\\\\n\\hline $C_8$ & $(1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ \\\\\n\\hline $C_8^{\\prime}$ & $(1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ \\\\\n\\hline $C_4$ & $(1,-1,-1,i,i,-i,-i)$ & $(1,1,-1,-1,i,i,-i,-i)$ & $(1,1,-1,-1,i,i,-i,-i)$ \\\\\n\\hline $C_7$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ \\\\\n\\hline $C_6$ & $(1,-1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(1,-1,\\mu,\\mu^2,\\mu^2,\\mu^4,\\mu^4,\\mu^5)$ & $(1,-1,\\mu,\\mu,\\mu^2,\\mu^4,\\mu^5,\\mu^5)$ \\\\\n\\hline $C_3$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,\\omega,\\omega,\\omega,\\omega^2,\\omega^2,\\omega^2)$ & $(1,1,\\omega,\\omega,\\omega,\\omega^2,\\omega^2,\\omega^2)$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Eigenvalues of group elements in each conjugacy class of $PGL(2;7)$, where $\\omega = e^{2\\pi i\/3}$, $\\mu = e^{2\\pi i\/6}$ and $\\eta = e^{2\\pi i\/7}$.} \\label{table:evalues-IP3}\n\\end{center}\n\\end{table}\n\nWe now consider the eigenvalues of the representation matrices to determine which of the remaining $\\gamma_1^{(i)}$ are embeddings of $PSL(2;7)$ in $G_2$.\nThese eigenvalues are given in Table \\ref{table:evalues-IP3}. As described in Section \\ref{sect:subgroupsG2}, these eigenvalues can be determined from the character table of $PGL(2;7)$. The additional information that is needed is to note that the eigenvalues for group elements in $C_4$ square to those for elements in $C_2$, those for $C_8$, $C_8^{\\prime}$ square to those for $C_4$, those for $C_6$ square to those for $C_3)$ and also cube to those for $C_2^{\\prime}$.\nThese observations follow from the fact that, for example, it is impossible to choose eigenvalues for group elements in $C_4$ which square to those for elements in $C_2^{\\prime}$ for all irreducible representations.\nNote also that we have omitted the eigenvalues for matrices in the irreducible representation $\\Sigma_6^{(2)\\prime}$. These eigenvalues are identical to those for $\\Sigma_6^{(2)}$, except for elements in the conjugacy classes $C_8$, $C_8^{\\prime}$, where for $\\Sigma_6^{(2)\\prime}$ the eigenvalues for elements in $C_8$, $C_8^{\\prime}$ respectively are given by those for elements in $C_8^{\\prime}$, $C_8$ respectively in the representation $\\Sigma_6^{(2)}$.\n\nFrom considering the set of eigenvalues $X_C$ for group elements in $C$ in the representation $\\gamma_1^{(i)}$, $i=4,8$, we see that there is a choice of $(t_1^C,t_2^C) \\in X_C$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$, for all conjugacy classes $C$. We present one such choice of eigenvalues $(t_1^C,t_2^C)$ in Table \\ref{table:t1,t2-IP3}. 
Thus we set $\\varrho_1^{(1)} = \\Sigma_7^{\\prime}$, $\\varrho_1^{(2)} = \\Sigma_1^{\\prime} + \\Sigma_6^{(1)}$.\nThe McKay graphs $\\mathcal{G}^{\\varrho_1^{(1)}}_{PGL(2;7)}$, $\\mathcal{G}^{\\varrho_1^{(2)}}_{PGL(2;7)}$ for $\\varrho_1^{(1)}$, $\\varrho_1^{(2)}$ are given in Figure \\ref{Fig-McKay_Graph-IP3-rho1}, where we use the same notation as previously.\nNote that the graph given in \\cite[Figure 2]{he:2003} is not the McKay graph for the restriction $\\varrho_1$ of the fundamental seven-dimensional representation of $G_2$ as claimed in \\cite{he:2003}, but is rather the McKay graph for the seven-dimensional representation $\\Sigma_7$, which as shown above does not define an embedding of $PGL(2;7)$ in $G_2$.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=110mm]{Fig-McKay_Graph-IP3-rho1}\\\\\n \\caption{The McKay graphs $\\mathcal{G}^{\\varrho_1^{(i)}}_{PGL(2;7)}$, $i=1,2$.} \\label{Fig-McKay_Graph-IP3-rho1}\n\\end{center}\n\\end{figure}\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|} \\hline\n& \\multicolumn{2}{|c|}{$\\varrho_1^{(1)}$} & \\multicolumn{2}{|c|}{$\\varrho_1^{(2)}$} & & & & \\\\\n$C$ & $(t_1^C,t_2^C)$ & $\\chi_{\\varrho_2^{(1)}}(C)$ & $(t_1^C,t_2^C)$ & $\\chi_{\\varrho_2^{(2)}}(C)$ & $\\chi_{\\gamma_2^{(1)}}(C)$ & $\\chi_{\\gamma_2^{(2)}}(C)$ & $\\chi_{\\gamma_2^{(3)}}(C)$ & $\\chi_{\\gamma_2^{(4)}}(C)$ \\\\\n\\hline \\hline $C_1$ & $(1,1)$ & 14 & $(1,1)$ & 14 & 14 & 14 & 14 & 14 \\\\\n\\hline $C_2$ & $(1,-1)$ & -2 & $(1,-1)$ & -2 & -2 & -2 & 2 & 2 \\\\\n\\hline $C_2^{\\prime}$ & $(1,-1)$ & -2 & $(1,-1)$ & -2 & 2 & -2 & 2 & -2 \\\\\n\\hline $C_8$ & $(i,\\nu^7)$ & 0 & $(-1,\\nu)$ & 0 & 0 & 0 & $\\sqrt{2}$ & $\\sqrt{2}$ \\\\\n\\hline $C_8^{\\prime}$ & $(i,\\nu^5)$ & 0 & $(-1,\\nu)$ & 0 & 0 & 0 & $-\\sqrt{2}$ & $-\\sqrt{2}$ \\\\\n\\hline $C_4$ & $(-1,i)$ & 2 & $(1,i)$ & 2 & 2 & 2 & 0 & 0 \\\\\n\\hline $C_7$ & $(\\eta,\\eta^5)$ & 0 & $(\\eta,\\eta^5)$ & 0 & 0 & 0 & 0 & 0 \\\\\n\\hline $C_6$ & $(-1,\\mu)$ & 1 & $(-1,\\mu)$ & 1 & -1 & 1 & -1 & 1 \\\\\n\\hline $C_3$ & $(1,\\omega)$ & -1 & $(1,\\omega)$ & -1 & -1 & -1 & -1 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Choice of eigenvalues $(t_1^C,t_2^C)$ for $\\varrho_1^{(i)}$, $i=1,2$, and corresponding values of $\\chi_{\\varrho_2^{(i)}}(C)$.} \\label{table:t1,t2-IP3}\n\\end{center}\n\\end{table}\n\nSince $\\chi_{\\varrho_2^{(i)}}(C) = \\Phi_2(t_1^C,t_2^C)$, we see from Table \\ref{table:t1,t2-II2} that the decomposition of the fundamental fourteen-dimensional representation into irreducible representations of $PGL(2;7)$ is given by $\\varrho_2 = \\gamma_2^{(2)} = \\Sigma_6^{(1)} + \\Sigma_8^{\\prime}$ for both $i=1,2$.\nThe values of $x^{(i)} = \\chi_{\\varrho_1^{(i)}}(C) \\in [-2,7]$, $y = \\chi_{\\varrho_2}(C) \\in [-2,14]$ for $PGL(2;7)$ are given in Table \\ref{table:(x,y)-IP3}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_2$ & $C_2^{\\prime}$ & $C_8$ & $C_8^{\\prime}$ & $C_4$ & $C_7$ & $C_6$ & $C_3$ \\\\\n\\hline $\\chi_{\\varrho_1^{(1)}}(C) \\in [-2,7]$ & 7 & -1 & -1 & 1 & 1 & -1 & 0 & -1 & 1 \\\\\n\\hline $\\chi_{\\varrho_1^{(2)}}(C) \\in [-2,7]$ & 7 & -1 & -1 & -1 & -1 & 3 & 0 & -1 & 1 \\\\\n\\hline $\\chi_{\\varrho_2}(C) \\in [-2,14]$ & 14 & -2 & -2 & 0 & 0 & 2 & 0 & 1 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{$\\chi_{\\varrho_j}(C)$ for group $PGL(2;7)$, $j=1,2$.} \\label{table:(x,y)-IP3}\n\\end{center}\n\\end{table}\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables 
\\ref{table:Character_table-IP3}, \\ref{table:(x,y)-IP3} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1}{336} \\Omega^W(0,0) + \\frac{21+28}{336} \\Omega^W(0,1\/2) + \\frac{56}{336} \\Omega^W(0,1\/3) \\\\\n& \\quad + \\frac{56}{336} \\Omega^W(1\/6,1\/2) + \\frac{48}{336} \\Omega^W(1\/7,3\/7) + \\frac{42}{336} \\Omega' + \\frac{42+42}{336} \\Omega'',\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1} and $(\\Omega',\\Omega'')$ is $(\\Omega^W(1\/4,1\/2),\\Omega^W(1\/8,3\/8))$ for $\\varrho_1^{(1)}$, and $(\\Omega^W(0,1\/4),\\Omega^W(1\/8,1\/2))$ for $\\varrho_1^{(2)}$.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=135mm]{Fig-OmegaWIP3}\\\\\n \\caption{The orbits of $(a)$ $(1\/6,1\/2)$, $(b)$ $(1\/8,3\/8) \\, \\bullet$ and $(1\/8,1\/2) \\, \\ast$.} \\label{Fig-OmegaWIP3}\n\\end{center}\n\\end{figure}\n\nNow $12\\Omega^W(1\/6,1\/2) = 4 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}_6 \\times \\mathrm{d}_6$, as illustrated in Figure \\ref{Fig-OmegaWIP3}$(a)$ since the Jacobian $J=0$ along the boundaries of the orbit of the fundamental domain, whilst $J^2(g(1\/6,1\/2)\/64\\pi^4) = 9$ for all $g \\in D_{12}$.\n\nThe orbits of $(1\/8,3\/8)$, $(1\/8,1\/2)$ are illustrated in Figure \\ref{Fig-OmegaWIP3}$(b)$, represented by $\\bullet$, $\\ast$ respectively.\nNeither orbit gives a linear combination of the measures in Definition \\ref{def:4measures}, thus we have $12\\Omega^W(1\/8,3\/8) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( \\sum_{g \\in D_{12}} \\delta_{g(e^{\\pi i\/4},e^{3\\pi i\/4})} \\right)$, $12\\Omega^W(1\/8,1\/2) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( \\sum_{g \\in D_{12}} \\delta_{g(e^{\\pi i\/4},-1)} \\right)$.\n\nThe measures $J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7$, $K' \\, \\mathrm{d}_4 \\times \\mathrm{d}_4$, $\\mathrm{d}_3 \\times \\mathrm{d}_3$, $\\mathrm{d}_2 \\times \\mathrm{d}_2$ and $\\mathrm{d}_1 \\times \\mathrm{d}_1$ supported by the other points have all appeared in the previous sections, so we obtain:\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for the non-conjugate embeddings of $PGL(2;7)$ into the fundamental representations of $G_2$ is\n\\begin{equation}\n\\begin{split}\n\\mathrm{d}\\varepsilon & = \\frac{1}{1344\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{1152\\pi^4} J^2 \\, \\mathrm{d}_6 \\times \\mathrm{d}_6 + \\frac{1}{24} K' \\, \\mathrm{d}_4 \\times \\mathrm{d}_4 + \\frac{1}{4} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 \\\\\n& \\quad + \\frac{1}{9} \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 - \\frac{23}{504} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 + \\frac{1}{48} \\sum_{g \\in D_{12}} \\delta_{g(e^{\\pi i\/4},t)},\n\\end{split}\n\\end{equation}\nwhere $K' = 16-K$ for the embedding of $PGL(2;7)$ in $G_2$ given by $\\varrho_1^{(1)} = \\Sigma_7^{\\prime}$ and $K' = K$ for the embedding given by $\\varrho_1^{(2)} = \\Sigma_1^{\\prime} + \\Sigma_6$, with $K$ as in Theorem \\ref{thm:measureII1}, whilst $t = e^{3\\pi i\/4}$ for the embedding of $PGL(2;7)$ in $G_2$ given by $\\varrho_1^{(1)}$ and $t=-1$ for the embedding given by $\\varrho_1^{(2)}$,\nand where $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity and $\\delta_x$ is the Dirac measure at the point $x$.\n\\end{Thm}\n\n\n\n\n\\section{Group $PSL(2;8)$}\n\nThe subgroup $PSL(2;8)$ of $G_2$ is an irreducible primitive group of order 504.\nIt has nine 
irreducible representations, all real, five of which have dimension less than or equal to 7. The character table for $PSL(2;8)$ is given in\nTable \\ref{table:Character_table-IP2} \\cite{james\/liebeck:2001} (the orders of the elements in each conjugacy class can be obtained from \\cite{lopez_pena\/majid\/rietsch:2010}).\nThe spectral measure for the first three seven-dimensional representations are equal, since the conjugacy classes $C_{9}$, $C_{9}^{\\prime}$, $C_{9}^{\\prime\\prime}$ each have the same order.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_3$ & $C_9$ & $C_9^{\\prime}$ & $C_9^{\\prime\\prime}$ & $C_2$ & $C_7$ & $C_7^{\\prime}$ & $C_7^{\\prime\\prime}$ \\\\\n\\hline $|C|$ & 1 & 56 & 56 & 56 & 56 & 63 & 72 & 72 & 72 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_7^{(1)}$ & 7 & 1 & $-p$ & $-q$ & $p+q$ & -1 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_7^{(1)\\prime}$ & 7 & 1 & $p+q$ & $-p$ & $-q$ & -1 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_7^{(1)\\prime\\prime}$ & 7 & 1 & $-q$ & $p+q$ & $-p$ & -1 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_7^{(2)}$ & 7 & -2 & 1 & 1 & 1 & -1 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_8$ & 8 & -1 & -1 & -1 & -1 & 0 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_9$ & 9 & 0 & 0 & 0 & 0 & 1 & $2\\cos(2\\pi\/7)$ & $2\\cos(4\\pi\/7)$ & $2\\cos(6\\pi\/7)$ \\\\\n\\hline $\\Sigma_9^{\\prime}$ & 9 & 0 & 0 & 0 & 0 & 1 & $2\\cos(4\\pi\/7)$ & $2\\cos(6\\pi\/7)$ & $2\\cos(2\\pi\/7)$ \\\\\n\\hline $\\Sigma_9^{\\prime\\prime}$ & 9 & 0 & 0 & 0 & 0 & 1 & $2\\cos(6\\pi\/7)$ & $2\\cos(2\\pi\/7)$ & $2\\cos(4\\pi\/7)$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $PSL(2;8)$, where $p=2\\cos(4\\pi\/9)$, $q=2\\cos(8\\pi\/9)$.} \\label{table:Character_table-IP2}\n\\end{center}\n\\end{table}\n\nWe thus have four candidates for the restriction $\\varrho_1$ of the fundamental representation $\\rho_1$ of $G_2$ to $PSL(2;8)$. The Kronecker squares of $\\Sigma_7^{(1)}$, $\\Sigma_7^{(2)}$ decompose into irreducibles as\n$$(\\Sigma_7^{(1)})^2 = \\mathrm{id} + \\Sigma_7^{(1)} + \\Sigma_7^{(1)\\prime} + \\Sigma_7^{(2)} + \\Sigma_9 + \\Sigma_9^{\\prime} + \\Sigma_9^{\\prime\\prime}, \\qquad\n(\\Sigma_7^{(2)})^2 = \\mathrm{id} + \\Sigma_7^{(1)} + \\Sigma_7^{(1)\\prime} + \\Sigma_7^{(1)\\prime\\prime} + \\Sigma_9 + \\Sigma_9^{\\prime} + \\Sigma_9^{\\prime\\prime},$$\nwhere $\\mathrm{id} = \\Sigma_1$. The irreducible representation $\\Sigma_7^{(2)}$ does not appear in the decomposition of its Kronecker square into irreducibles, and therefore does not give an embedding of $PSL(2;8)$ into $G_2$.\nThe other three seven-dimensional representations give non-conjugate embeddings of $PSL(2;8)$ into $G_2$. 
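This criterion can be checked numerically from Table \\ref{table:Character_table-IP2}. The following minimal Python sketch (ours, not from the original paper; the character values are copied from the table, and by the symmetry permuting the order-9 classes only $\\Sigma_7^{(1)}$ and $\\Sigma_7^{(2)}$ need to be tested) computes the multiplicity of each seven-dimensional representation in its own Kronecker square:\n\\begin{verbatim}\nimport numpy as np\n\n# class sizes of PSL(2,8); columns ordered C_1, C_3, C_9, C_9', C_9'', C_2, C_7, C_7', C_7''\nsizes = np.array([1, 56, 56, 56, 56, 63, 72, 72, 72])\np = 2 * np.cos(np.deg2rad(80))    # p = 2cos(4*pi over 9)\nq = 2 * np.cos(np.deg2rad(160))   # q = 2cos(8*pi over 9)\nchi71 = np.array([7, 1, -p, -q, p + q, -1, 0, 0, 0])   # Sigma_7^(1)\nchi72 = np.array([7, -2, 1, 1, 1, -1, 0, 0, 0])        # Sigma_7^(2)\n\ndef mult_in_own_square(chi):\n    # multiplicity of chi in chi*chi (all characters here are real-valued)\n    return np.average(chi**3, weights=sizes)\n\nprint(round(mult_in_own_square(chi71)))   # 1: Sigma_7^(1) appears in its square\nprint(round(mult_in_own_square(chi72)))   # 0: Sigma_7^(2) does not\n\\end{verbatim}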
We will fix $\\varrho_1 = \\Sigma_7$ for the remainder of this section.\n\nThe McKay graph for $\\varrho_1$ is given in Figure \\ref{Fig-McKay_Graph-IP2-rho1}, where we use the same notation as previously.\nNote that the graph given in \\cite[Figure 2]{he:2003} is not the McKay graph for the restriction $\\varrho_1$ of the fundamental seven-dimensional representation of $G_2$ as claimed in \\cite{he:2003}, but is rather the McKay graph for the seven-dimensional representation $\\Sigma_7^{(2)}$.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=55mm]{Fig-McKay_Graph-IP2-rho1}\\\\\n \\caption{The McKay graph $\\mathcal{G}^{\\varrho_1}_{PSL(2;8)}$.} \\label{Fig-McKay_Graph-IP2-rho1}\n\\end{center}\n\\end{figure}\n\nFrom the decomposition of the Kronecker square of $\\varrho_1$ and dimension considerations there is only one possibility for the fourteen-dimensional representation $\\varrho_2$, that is, $\\varrho_2 = \\Sigma_7^{(1)\\prime} + \\Sigma_7^{(2)}$.\nThe values of $x = \\chi_{\\varrho_1}(C) \\in [-2,7]$, $y = \\chi_{\\varrho_2}(C) \\in [-2,14]$ for $PSL(2;8)$ are given in Table \\ref{table:(x,y)-IP2}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_3$ & $C_9$ & $C_9^{\\prime}$ & $C_9^{\\prime\\prime}$ & $C_2$ & $C_7$ & $C_7^{\\prime}$ & $C_7^{\\prime\\prime}$ \\\\\n\\hline $\\chi_{\\varrho_1}(C) \\in [-2,7]$ & 7 & 1 & $-p$ & $-q$ & $p+q$ & -1 & 0 & 0 & 0 \\\\\n\\hline $\\chi_{\\varrho_2}(C) \\in [-2,14]$ & 14 & -1 & $p+q+1$ & $1-p$ & $1-q$ & -2 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{$\\chi_{\\varrho_j}(C)$ for group $PSL(2;8)$, $j=1,2$.} \\label{table:(x,y)-IP2}\n\\end{center}\n\\end{table}\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:Character_table-IP2} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1}{504} \\Omega^W(0,0) + \\frac{56}{504} \\Omega^W(0,1\/3) + \\frac{63}{504} \\Omega^W(0,1\/2) + \\frac{72+72+72}{504} \\Omega^W(1\/7,3\/7) \\\\\n& \\quad + \\frac{56}{504} \\Omega^W(1\/9,4\/9) + \\frac{56}{504} \\Omega^W(1\/9,1\/3) + \\frac{56}{504} \\Omega^W(2\/9,5\/9),\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1}.\nNow $J^2(g(1\/9,4\/9)\/64\\pi^4) = 9(3+2\\cos(\\pi\/9)+2\\cos(2\\pi\/9))\/4 =: a_1$, $J^2(g(1\/9,1\/3)\/64\\pi^4) = 9(3-2\\cos(2\\pi\/9)+2\\sin(\\pi\/18))\/4 =: a_2$ and $J^2(g(2\/9,5\/9)\/64\\pi^4) = 9(3-2\\cos(\\pi\/9)-2\\sin(\\pi\/18))\/4 =: a_3$, for all $g \\in D_{12}$.\nThus $a_1 \\Omega^W(1\/9,4\/9) = 18 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}^{((9\/2))}$, $a_2 \\Omega^W(1\/9,1\/3) = 18 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}^{((9\/4))}$ and $a_3 \\Omega^W(2\/9,5\/9) = 18 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}^{((9))}$, as illustrated in Figure \\ref{Fig-OmegaWIP2} since the Jacobian $J=0$ along the boundaries of the orbit of the fundamental domain.\nThe measures $J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7$, $\\mathrm{d}_3 \\times \\mathrm{d}_3$, $\\mathrm{d}_2 \\times \\mathrm{d}_2$, $\\mathrm{d}_1 \\times \\mathrm{d}_1$ and $\\mathrm{d}^{(1)}$ supported by the other points above have all appeared in the previous sections, so we obtain:\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=55mm]{Fig-OmegaWIP2}\\\\\n \\caption{The orbits of $(1\/9,4\/9)\\textcolor{red}{\\scriptscriptstyle{\\bullet}}$, 
$(1\/9,1\/3)\\textcolor{blue}{\\scriptscriptstyle{\\bullet}}$, $(2\/9,5\/9)\\textcolor{green}{\\scriptscriptstyle{\\bullet}}$.} \\label{Fig-OmegaWIP2}\n\\end{center}\n\\end{figure}\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for all embeddings of $PSL(2;8)$ into the fundamental representations of $G_2$ is\n\\begin{equation}\n\\begin{split}\n\\mathrm{d}\\varepsilon & = \\frac{1}{448\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{6} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 + \\frac{1}{6} \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 - \\frac{5}{126} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 - \\frac{1}{18} \\, \\mathrm{d}^{(1)} \\\\\n& \\quad + \\frac{1}{384\\pi^4} a_3^{-1} J^2 \\, \\mathrm{d}^{((9))} + \\frac{1}{384\\pi^4} a_1^{-1} J^2 \\, \\mathrm{d}^{((9\/2))} + \\frac{1}{384\\pi^4} a_2^{-1} J^2 \\, \\mathrm{d}^{((9\/4))},\n\\end{split}\n\\end{equation}\nwhere $\\mathrm{d}^{((n))}$ is as in Definition \\ref{def:4measures}, $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity and $\\mathrm{d}^{(k+4)}$ is the uniform measure on the points in $C_k^W$.\n\\end{Thm}\n\n\n\n\\section{Group $PSL(2;13)$}\n\nThe subgroup $PSL(2;13)$ of $G_2$ is an irreducible primitive group of order 1092. It has nine irreducible representations, all real, and its character table is given in Table \\ref{table:Character_table-IP1} \\cite{cohen:1998}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|} \\hline\n$C$ & $C_1$ & $C_{13}$ & $C_{13}^{\\prime}$ & $C_2$ & $C_7$ & $C_7^{\\prime}$ & $C_7^{\\prime\\prime}$ & $C_6$ & $C_3$ \\\\\n\\hline $|C|$ & 1 & 84 & 84 & 91 & 156 & 156 & 156 & 182 & 182 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_7$ & 7 & $p_+$ & $p_-$ & -1 & 0 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_7^{\\prime}$ & 7 & $p_-$ & $p_+$ & -1 & 0 & 0 & 0 & -1 & 1 \\\\\n\\hline $\\Sigma_{12}$ & 12 & -1 & -1 & 0 & $r_1$ & $r_2$ & $r_3$ & 0 & 0 \\\\\n\\hline $\\Sigma_{12}^{\\prime}$ & 12 & -1 & -1 & 0 & $r_2$ & $r_3$ & $r_1$ & 0 & 0 \\\\\n\\hline $\\Sigma_{12}^{\\prime\\prime}$ & 12 & -1 & -1 & 0 & $r_3$ & $r_1$ & $r_2$ & 0 & 0 \\\\\n\\hline $\\Sigma_{13}$ & 13 & 0 & 0 & 1 & -1 & -1 & -1 & 1 & 1 \\\\\n\\hline $\\Sigma_{14}$ & 14 & 1 & 1 & -2 & 0 & 0 & 0 & 1 & -1 \\\\\n\\hline $\\Sigma_{14}^{\\prime}$ & 14 & 1 & 1 & 2 & 0 & 0 & 0 & -1 & -1 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $PSL(2;13)$, where $p_{\\pm} = (1\\pm\\sqrt{13})\/2$, $r_j=-2\\cos(2j\\pi\/7)$.} \\label{table:Character_table-IP1}\n\\end{center}\n\\end{table}\n\nOnly three of the irreducible representations have dimension less than or equal to 7. These are the identity representation and the two seven-dimensional representations $\\Sigma_7$, $\\Sigma_7^{\\prime}$, whose character values differ only on the two conjugacy classes $C_{13}$, $C_{13}^{\\prime}$ whose elements have order 13.
Here $\\chi_{\\Sigma_7}(g) = p,q$ for $g \\in C_{13},C_{13}^{\\prime}$ respectively, whilst $\\chi_{\\Sigma_7^{\\prime}}(g) = q,p$ respectively, where $p=(1+\\sqrt{13})\/2=1+\\zeta+\\zeta^3+\\zeta^4+\\zeta^9+\\zeta^{10}+\\zeta^{12}$, $q=(1-\\sqrt{13})\/2=1+\\zeta^2+\\zeta^5+\\zeta^6+\\zeta^7+\\zeta^8+\\zeta^{11}$, for $\\zeta = e^{2\\pi i\/13}$.\nThus the spectral measure for both seven-dimensional representations are equal, since $C_{13}$, $C_{13}^{\\prime}$ both have the same order, and both give embeddings of $PSL(2;13)$ in $G_2$.\nThe McKay graph for the fundamental seven-dimensional representation is given in \\cite[Figure 2]{he:2003}, and is reproduced (twice) in Figure \\ref{Fig-McKay_Graph-IP1-rho1} for completeness, where we use the same notation as previously. The figure on the right hand side illustrates the resemblance of $\\mathcal{G}^{\\varrho_1}_{PSL(2;13)}$ with the McKay graph of $G_2$ itself.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=60mm]{Fig-McKay_Graph-IP1-rho1} \\hspace{15mm} \\includegraphics[width=60mm]{Fig-McKay_Graph-IP1-rho1-2}\\\\\n \\caption{Two presentations of the McKay graph $\\mathcal{G}^{\\varrho_1}_{PSL(2;13)}$.} \\label{Fig-McKay_Graph-IP1-rho1}\n\\end{center}\n\\end{figure}\n\nThe set $X_C$ of eigenvalues of group elements in each conjugacy class for $\\varrho_1 = \\Sigma_7$ are given in Table \\ref{table:evalues-IP1}, along with a choice of $(t_1^C,t_2^C) \\in X_C$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c|c|c|c|} \\hline\n$C$ & $X_C$ & $(t_1^C,t_2^C)$ & $\\chi_{\\varrho_1}(C)$ & $\\chi_{\\varrho_2}(C)$ & $\\chi_{\\Sigma_{14}}(C)$ & $\\chi_{\\Sigma_{14}^{\\prime}}(C)$ \\\\\n\\hline \\hline $C_1$ & $(1,1,1,1,1,1,1)$ & $(1,1)$ & 7 & 14 & 14 & 14 \\\\\n\\hline $C_{13}$ & $(1,\\zeta,\\zeta^3,\\zeta^4,\\zeta^9,\\zeta^{10},\\zeta^{12})$ & $(\\zeta,\\zeta^4)$ & $(1+\\sqrt{13})\/2$ & 1 & 1 & 1 \\\\\n\\hline $C_{13}^{\\prime}$ & $(1,\\zeta^2,\\zeta^5,\\zeta^6,\\zeta^7,\\zeta^8,\\zeta^{11})$ & $(\\zeta^2,\\zeta^7)$ & $(1-\\sqrt{13})\/2$ & 1 & 1 & 1 \\\\\n\\hline $C_2$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,-1)$ & -1 & -2 & -2 & 2\\\\\n\\hline $C_7$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^5)$ & 0 & 0 & 0 & 0 \\\\\n\\hline $C_7^{\\prime}$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^5)$ & 0 & 0 & 0 & 0 \\\\\n\\hline $C_7^{\\prime\\prime}$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^5)$ & 0 & 0 & 0 & 0 \\\\\n\\hline $C_6$ & $(1,-1,-1,\\mu,\\mu^2,\\mu^4,\\mu^5)$ & $(-1,\\mu)$ & -1 & 1 & 1 & -1 \\\\\n\\hline $C_3$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,\\omega)$ & 1 & -1 & -1 & -1\\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Choice of eigenvalues $(t_1^C,t_2^C)$ for $\\varrho_1$ and corresponding values of $\\chi_{\\varrho_2}(C)$, where $\\omega = e^{2\\pi i\/3}$, $\\mu = e^{2\\pi i\/6}$, $\\eta = e^{2\\pi i\/7}$ and $\\zeta = e^{2\\pi i\/13}$.} \\label{table:evalues-IP1}\n\\end{center}\n\\end{table}\n\nThe decomposition of the Kronecker square of $\\varrho_1$ into irreducibles is given by\n$$\\varrho_1^2 = \\mathrm{id} + \\varrho_1 + \\Sigma_{13} + \\Sigma_{14} + \\Sigma_{14}^{\\prime},$$\nwhere as before the notation $\\Sigma_n$ denotes an irreducible representation of $PSL(2;13)$ of dimension $n$.\nFrom dimension considerations there are thus two candidates for the fourteen-dimensional representation $\\varrho_2$, which are given by the two fourteen-dimensional irreducible 
representations.\nHowever, since $\\chi_{\\varrho_2}(C) = \\Phi_2(t_1^C,t_2^C)$, we see from Table \\ref{table:evalues-IP1} that the decomposition of the fundamental fourteen-dimensional representation into irreducible representations of $PSL(2;13)$ is given by $\\varrho_2 = \\Sigma_{14}$, not $\\Sigma_{14}^{\\prime}$.\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:evalues-IP1} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1}{1092} \\Omega^W(0,0) + \\frac{91}{1092} \\Omega^W(0,1\/2) + \\frac{182}{1092} \\Omega^W(0,1\/3) + \\frac{182}{1092} \\Omega^W(1\/6,1\/2) \\\\\n& \\quad + \\frac{156+156+156}{1092} \\Omega^W(1\/7,3\/7) + \\frac{84}{1092} \\Omega^W(1\/13,4\/13) + \\frac{84}{1092} \\Omega^W(2\/13,7\/13),\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1}.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=55mm]{Fig-OmegaWIP1}\\\\\n \\caption{The orbits of $(1\/13,4\/13)$ and $(2\/13,7\/13)$.} \\label{Fig-OmegaWIP1}\n\\end{center}\n\\end{figure}\n\nThe orbits of the points $(1\/13,4\/13)$, $(2\/13,7\/13)$, illustrated in Figure \\ref{Fig-OmegaWIP1}, do not give a linear combination of the measures in Definition \\ref{def:4measures}, thus we have $12\\Omega^W(1\/13,4\/13) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( \\sum_{g \\in D_{12}} \\delta_{g(\\zeta,\\zeta^4)} \\right)$ and $12\\Omega^W(2\/13,7\/13) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( \\sum_{g \\in D_{12}} \\delta_{g(\\zeta^2,\\zeta^7)} \\right)$.\nThe measures $J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7$, $J^2 \\, \\mathrm{d}_6 \\times \\mathrm{d}_6$, $\\mathrm{d}_3 \\times \\mathrm{d}_3$, $\\mathrm{d}_2 \\times \\mathrm{d}_2$, $\\mathrm{d}_1 \\times \\mathrm{d}_1$ and $\\mathrm{d}^{(1)}$ supported by the other points above have all appeared in the previous sections, so we obtain:\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for all embeddings of $PSL(2;13)$ into the fundamental representations of $G_2$ is\n\\begin{equation}\n\\begin{split}\n\\mathrm{d}\\varepsilon & = \\frac{1}{448\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{1152\\pi^4} J^2 \\, \\mathrm{d}_6 \\times \\mathrm{d}_6 + \\frac{1}{4} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 + \\frac{1}{9} \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 \\\\\n& \\quad - \\frac{22}{819} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 - \\frac{1}{12} \\, \\mathrm{d}^{(1)} + \\frac{7}{192} \\sum_{g \\in D_{12}} (\\delta_{g(\\zeta,\\zeta^4)} + \\delta_{g(\\zeta^2,\\zeta^7)}),\n\\end{split}\n\\end{equation}\nwhere $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity, $\\mathrm{d}^{(k+4)}$ is the uniform measure on the points in $C_k^W$, $\\delta_x$ is the Dirac measure at the point $x$ and $\\zeta = e^{2 \\pi i\/13}$.\n\\end{Thm}\n\n\n\\section{Group $PU(3;3) \\cong G_2(2)'$}\n\nThe subgroup $PU(3;3)$ of $G_2$ is an irreducible primitive group of order 6048.\nIt has fourteen irreducible representations (seven real representations, one quaternionic representation $\\Sigma_6$, and three pairs of complex conjugate representations) and its character table is given in Table \\ref{table:Character_table-IP4} \\cite{conway\/curtis\/norton\/parker\/wilson:1985}.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|@{\\hspace{1.5mm}}c@{\\hspace{1.5mm}}||@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c
@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c @{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c @{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|} \\hline\n$C$ & $C_1$ & $C_3$ & $C_2$ & $C_4$ & $C_4^{\\prime}$ & $C_4^{\\prime\\prime}$ & $C_{12}$ & $C_{12}^{\\prime}$ & $C_6$ & $C_3^{\\prime}$ & $C_8$ & $C_8^{\\prime}$ & $C_7$ & $C_7^{\\prime}$ \\\\\n\\hline $|C|$ & 1 & 56 & 63 & 63 & 63 & 378 & 504 & 504 & 504 & 672 & 756 & 756 & 864 & 864 \\\\\n\\hline \\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_6$ & 6 & -3 & -2 & -2 & -2 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & -1 & -1 \\\\\n\\hline $\\Sigma_7$ & 7 & -2 & 3 & $-1-2i$ & $-1+2i$ & 1 & $-1-i$ & $-1+i$ & 0 & 1 & $-i$ & $i$ & 0 & 0 \\\\\n\\hline $\\Sigma_7^{\\ast}$ & 7 & -2 & 3 & $-1+2i$ & $-1-2i$ & 1 & $-1+i$ & $-1-i$ & 0 & 1 & $i$ & $-i$ & 0 & 0 \\\\\n\\hline $\\Sigma_7^{\\prime}$ & 7 & -2 & -1 & 3 & 3 & -1 & 0 & 0 & 2 & 1 & -1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_{14}$ & 14 & 5 & -2 & 2 & 2 & 2 & -1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{21}$ & 21 & 3 & 1 & $-3-2i$ & $-3+2i$ & -1 & $-i$ & $i$ & 1 & 0 & $i$ & $-i$ & 0 & 0 \\\\\n\\hline $\\Sigma_{21}^{\\ast}$ & 21 & 3 & 1 & $-3+2i$ & $-3-2i$ & -1 & $i$ & $-i$ & 1 & 0 & $-i$ & $i$ & 0 & 0 \\\\\n\\hline $\\Sigma_{21}^{\\prime}$ & 21 & 3 & 5 & 1 & 1 & 1 & 1 & 1 & -1 & 0 & -1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_{27}$ & 27 & 0 & 3 & 3 & 3 & -1 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\\\\n\\hline $\\Sigma_{28}$ & 28 & 1 & -4 & $-4i$ & $4i$ & 0 & $i$ & $-i$ & -1 & 1 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{28}^{\\ast}$ & 28 & 1 & -4 & $4i$ & $-4i$ & 0 & $-i$ & $i$ & -1 & 1 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{32}$ & 32 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & $\\frac{1+i\\sqrt{7}}{2}$ & $\\frac{1-i\\sqrt{7}}{2}$ \\\\\n\\hline $\\Sigma_{32}^{\\ast}$ & 32 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & $\\frac{1-i\\sqrt{7}}{2}$ & $\\frac{1+i\\sqrt{7}}{2}$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $PU(3;3)$.} \\label{table:Character_table-IP4}\n\\end{center}\n\\end{table}\n\nThere are two non-conjugate real seven-dimensional representations, $\\gamma_1^{(1)} = \\Sigma_1 + \\Sigma_6$ and $\\gamma_1^{(2)} = \\Sigma_7^{\\prime}$.\nThese both satisfy the condition that $\\gamma_1^{(i)}$ appears in the decomposition of $(\\gamma_1^{(i)})^2$.\nWe thus consider the eigenvalues of the representation matrices to determine which of the $\\gamma_1^{(i)}$ are embeddings of $PU(3;3)$ in $G_2$.\nThese eigenvalues are given in Table \\ref{table:evalues-IP4} for the representations $\\Sigma_1$, $\\Sigma_6$ and $\\Sigma_7^{\\prime}$. As described in Section \\ref{sect:subgroupsG2}, these eigenvalues can be determined from the character table of $PU(3;3)$. 
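To illustrate this on a single class (a routine check using only the character values in Table \\ref{table:Character_table-IP4}): the elements of $C_3$ have order $3$, so the six eigenvalues of $\\Sigma_6$ on $C_3$ are cube roots of unity; writing $a$, $b$, $c$ for the multiplicities of $1$, $\\omega$, $\\omega^2$, where $\\omega = e^{2\\pi i\/3}$, the conditions\n$$a+b+c=6, \\qquad a+b\\omega+c\\omega^2=\\chi_{\\Sigma_6}(C_3)=-3,$$\nforce $a=0$, $b=c=3$, that is, the eigenvalues $(\\omega,\\omega,\\omega,\\omega^2,\\omega^2,\\omega^2)$ recorded in Table \\ref{table:evalues-IP4} below.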
The additional information that is needed is to note that the eigenvalues for group elements in $C_4$, $C_4^{\\prime}$ and $C_4^{\\prime\\prime}$ all square to those for elements in $C_2$, those for $C_8$, $C_8^{\\prime}$ square to those for $C_4$, $C_4^{\\prime}$ respectively, those for $C_6$ square to those for $C_3$ and also cube to those for $C_2$, those for $C_{12}$ square to those for $C_6$ and cube to those for $C_4$ whilst those for $C_{12}^{\\prime}$ also square to those for $C_6$ but cube to those for $C_4^{\\prime}$ (see \\cite{conway\/curtis\/norton\/parker\/wilson:1985}).\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|c||c|c|c||c|} \\hline\n & $\\Sigma_1$ & $\\Sigma_6$ & $\\Sigma_7^{\\prime}$ & $(t_1^C,t_2^C)$ \\\\\n\\hline \\hline $C_1$ & 1 & $(1,1,1,1,1,1)$ & $(1,1,1,1,1,1,1)$ & $(1,1)$ \\\\\n\\hline $C_3$ & 1 & $(\\omega,\\omega,\\omega,\\omega^2,\\omega^2,\\omega^2)$ & $(1,\\omega,\\omega,\\omega,\\omega^2,\\omega^2,\\omega^2)$ & $(\\omega,\\omega^2)$ \\\\\n\\hline $C_2$ & 1 & $(1,1,-1,-1,-1,-1)$ & $(1,1,1,-1,-1,-1,-1)$ & $(1,-1)$ \\\\\n\\hline $C_4$ & 1 & $(-1,-1,i,i,-i,-i)$ & $(1,1,1,i,i,-i,-i)$ & $(1,i)$ \\\\\n\\hline $C_4^{\\prime}$ & 1 & $(-1,-1,i,i,-i,-i)$ & $(1,1,1,i,i,-i,-i)$ & $(1,i)$ \\\\\n\\hline $C_4^{\\prime\\prime}$ & 1 & $(1,1,i,i,-i,-i)$ & $(1,-1,-1,i,i,-i,-i)$ & $(-1,i)$ \\\\\n\\hline $C_{12}$ & 1 & $(\\xi,\\xi^2,\\xi^5,\\xi^7,\\xi^{10},\\xi^{11})$ & $(1,\\xi,\\xi^4,\\xi^5,\\xi^7,\\xi^8,\\xi^{11})$ & $(\\xi,\\xi^5)$ \\\\\n\\hline $C_{12}^{\\prime}$ & 1 & $(\\xi,\\xi^2,\\xi^5,\\xi^7,\\xi^{10},\\xi^{11})$ & $(1,\\xi,\\xi^4,\\xi^5,\\xi^7,\\xi^8,\\xi^{11})$ & $(\\xi,\\xi^5)$ \\\\\n\\hline $C_6$ & 1 & $(\\mu,\\mu,\\mu^2,\\mu^4,\\mu^5,\\mu^5)$ & $(1,\\mu,\\mu,\\mu^2,\\mu^4,\\mu^5,\\mu^5)$ & $(\\mu,\\mu^2)$ \\\\\n\\hline $C_3^{\\prime}$ & 1 & $(1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,1,1,\\omega,\\omega,\\omega^2,\\omega^2)$ & $(1,\\omega)$ \\\\\n\\hline $C_8$ & 1 & $(i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,-1,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(-1,\\nu)$ \\\\\n\\hline $C_8^{\\prime}$ & 1 & $(i,-i,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(1,-1,-1,\\nu,\\nu^3,\\nu^5,\\nu^7)$ & $(-1,\\nu)$ \\\\\n\\hline $C_7$ & 1 & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^5)$ \\\\\n\\hline $C_7^{\\prime}$ & 1 & $(\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(1,\\eta,\\eta^2,\\eta^3,\\eta^4,\\eta^5,\\eta^6)$ & $(\\eta,\\eta^5)$ \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Eigenvalues of group elements in each conjugacy class of $PU(3;3)$ for the irreducible representations $\\Sigma_1$, $\\Sigma_6$ and $\\Sigma_7^{\\prime}$, where $\\omega = e^{2\\pi i\/3}$, $\\mu = e^{2\\pi i\/6}$, $\\eta = e^{2\\pi i\/7}$, $\\nu = e^{2\\pi i\/8}$ and $\\xi = e^{2\\pi i\/12}$} \\label{table:evalues-IP4}\n\\end{center}\n\\end{table}\n\nFrom considering the set of eigenvalues $X_C$ for group elements in $C$ in the representation $\\gamma_1^{(i)}$, we see that there is no choice of $(t_1^C,t_2^C) \\in X_C$ such that $\\mathcal{E}_{t_1^C,t_2^C} = X_C$ for $i=1$ when $C=C_{12},C_{12}^{\\prime}$. However, such a choice does exist for all conjugacy classes $C$ for $i=2$, thus we have $\\varrho_1 = \\gamma_1^{(2)}$. 
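The failure for $i=1$ can also be seen directly, using the standard fact that the restriction of $\\rho_1$ to a maximal torus of $G_2$ has eigenvalue set of the form $\\{1,\\lambda_1^{\\pm 1},\\lambda_2^{\\pm 1},\\lambda_3^{\\pm 1}\\}$ with $\\lambda_1\\lambda_2\\lambda_3=1$ (e.g. via $\\rho_1|_{SU(3)} = 1 \\oplus 3 \\oplus \\overline{3}$): on $C_{12}$ the representation $\\gamma_1^{(1)}=\\Sigma_1+\\Sigma_6$ has eigenvalue set $\\{1,\\xi^{\\pm 1},\\xi^{\\pm 2},\\xi^{\\pm 5}\\}$, and\n$$\\pm 1 \\pm 2 \\pm 5 \\not\\equiv 0 \\pmod{12}$$\nfor every choice of signs, so no such $\\lambda_j$ exist.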
We present one such choice of eigenvalues $(t_1^C,t_2^C)$ in the final column of Table \\ref{table:evalues-IP4}.\nThe McKay graph $\\mathcal{G}^{\\varrho_1}_{PU(3;3)}$ is given in \\cite[Figure 2]{he:2003}, and we reproduce it here in Figure \\ref{Fig-McKay_Graph-IP4-rho1} for completeness.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=70mm]{Fig-McKay_Graph-IP4-rho1}\\\\\n \\caption{The McKay graphs $\\mathcal{G}^{\\varrho_1}_{PU(3;3)}$.} \\label{Fig-McKay_Graph-IP4-rho1}\n\\end{center}\n\\end{figure}\n\nThe decomposition of the Kronecker square of $\\varrho_1$ into irreducibles is given by\n$$\\varrho_1^2 = \\mathrm{id} + \\varrho_1 + \\Sigma_{14} + \\Sigma_{27},$$\nwhere $\\mathrm{id} = \\Sigma_1$.\nThus the fourteen-dimensional representation $\\varrho_2$ must be $\\Sigma_{14}$, and we note that $\\chi_{\\varrho_2}(C) = \\Phi_2(t_1^C,t_2^C)$ as required.\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:Character_table-IP4} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1}{6048} \\Omega^W(0,0) + \\frac{56}{6048} \\Omega^W(1\/3,2\/3) + \\frac{63}{6048} \\Omega^W(0,1\/2) + \\frac{672}{6048} \\Omega^W(0,1\/3) \\\\\n& \\quad + \\frac{63+63}{6048} \\Omega^W(0,1\/4) + \\frac{378}{6048} \\Omega^W(1\/4,1\/2) + \\frac{504}{6048} \\Omega^W(1\/6,1\/3) \\\\\n& \\quad + \\frac{864+864}{6048} \\Omega^W(1\/7,3\/7) + \\frac{756+756}{6048} \\Omega^W(1\/8,1\/2) + \\frac{504+504}{6048} \\Omega^W(1\/12,5\/12),\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1}.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=135mm]{Fig-OmegaWIP4}\\\\\n \\caption{The orbits of $(a)$ $(1\/6,1\/3)$, $(b)$ $(1\/12,5\/12)$.} \\label{Fig-OmegaWIP4}\n\\end{center}\n\\end{figure}\n\nNow $2\\Omega^W(1\/3,2\/3) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\, (3 \\mathrm{d}^{(1)} - \\mathrm{d}_1 \\times \\mathrm{d}_1)$.\nThe points $\\circ$ in Figure \\ref{Fig-OmegaWIP4}$(a)$ give the measure $3 \\, \\mathrm{d}^{(1)}$ whilst the points $\\diamond$ in Figure \\ref{Fig-OmegaWIP4}$(a)$ give the measure $4 \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 - \\mathrm{d}_1 \\times \\mathrm{d}_1$, thus we see that $6\\Omega^W(1\/6,1\/3) = \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) \\left( 12 \\, \\mathrm{d}^{(2)} - 3 \\, \\mathrm{d}^{(1)} - 4 \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 + \\mathrm{d}_1 \\times \\mathrm{d}_1 \\right)$.\n\nFinally, $12\\Omega^W(1\/12,5\/12) = 18 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}^{(4)}$, as illustrated in Figure \\ref{Fig-OmegaWIP4}$(b)$ since the Jacobian $J=0$ along the boundaries of the orbit of the fundamental domain, whilst $J^2(g(1\/12,5\/12)\/64\\pi^4) = 12$ for all $g \\in D_{12}$.\n\nThe measures $J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7$, $(24-K) \\, \\mathrm{d}_4 \\times \\mathrm{d}_4$, $\\mathrm{d}_3 \\times \\mathrm{d}_3$ and $\\sum_{g \\in D_{12}} \\delta_{g(e^{\\pi i\/4},-1)}$ supported by the other points have all appeared in the previous sections, so we obtain:\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for all embeddings of $PU(3;3)$ into the fundamental representations of $G_2$ is\n\\begin{equation}\n\\begin{split}\n\\mathrm{d}\\varepsilon & = \\frac{1}{672\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{144} (24-K) \\, \\mathrm{d}_4 \\times \\mathrm{d}_4 + \\frac{1}{6} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 - \\frac{1}{12} \\, \\mathrm{d}_2 
\\times \\mathrm{d}_2 \\\\\n& \\quad + \\frac{1}{168} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 + \\frac{1}{1152\\pi^4} J^2 \\, \\mathrm{d}^{(4)} + \\frac{1}{6} \\, \\mathrm{d}^{(2)} - \\frac{1}{12} \\, \\mathrm{d}^{(1)} + \\frac{1}{48} \\sum_{g \\in D_{12}} \\delta_{g(e^{\\pi i\/4},-1)},\n\\end{split}\n\\end{equation}\nwhere $K(\\theta_1,\\theta_2) = (\\sin(2\\pi(\\theta_1+\\theta_2))-\\sin(2\\pi(2\\theta_1-\\theta_2))-\\sin(2\\pi(2\\theta_2-\\theta_1)))^2$, $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity, $\\mathrm{d}^{(k+4)}$ is the uniform measure on the points in $C_k^W$, and $\\delta_x$ is the Dirac measure at the point $x$.\n\\end{Thm}\n\n\n\\section{Group $G_2(2)$} \\label{sect:IP5}\n\nThe subgroup $G_2(2) = G_2(\\mathbb{F}_2)$ of $G_2 = G_2(\\mathbb{C})$ is an irreducible primitive group of order 12096. It is the group $G_2$ defined over the Galois field $\\mathbb{F}_2$.\nIt has sixteen irreducible representations (fourteen real and two complex conjugate representations), and its character table is given in\n\\cite{he:2003} (the orders of the elements in each conjugacy class can be obtained from \\cite{koca\/koc:1994}).\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{|@{\\hspace{1.5mm}}c@{\\hspace{1.5mm}}||@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|@{\\hspace{1mm}}c@{\\hspace{1mm}}|} \\hline\n$C$ & $C_1$ & $C_3$ & $C_2$ & $C_4$ & $C_2^{\\prime}$ & $C_4^{\\prime}$ & $C_4^{\\prime\\prime}$ & $C_6$ & $C_3^{\\prime}$ & $C_{12}$ & $C_{12}^{\\prime}$ & $C_{12}^{\\prime\\prime}$ & $C_8$ & $C_8^{\\prime}$ & $C_7$ & $C_6^{\\prime}$ \\\\\n\\hline $|C|$ & 1 & 56 & 63 & 126 & 252 & 252 & 378 & 504 & 672 & 1008 & 1008 & 1008 & 1512 & 1512 & 1728 & 2016 \\\\\n\\hline $\\Sigma_1$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline $\\Sigma_1'$ & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\\\\n\\hline $\\Sigma_6$ & 6 & -3 & -2 & -2 & 0 & 0 & 2 & 1 & 0 & $i\\sqrt{3}$ & $-i\\sqrt{3}$ & 1 & 0 & 0 & -1 & 0 \\\\\n\\hline $\\Sigma_6^{\\ast}$ & 6 & -3 & -2 & -2 & 0 & 0 & 2 & 1 & 0 & $-i\\sqrt{3}$ & $i\\sqrt{3}$ & 1 & 0 & 0 & -1 & 0 \\\\\n\\hline $\\Sigma_7$ & 7 & -2 & -1 & 3 & -1 & 3 & -1 & 2 & 1 & 0 & 0 & 0 & 1 & -1 & 0 & -1 \\\\\n\\hline $\\Sigma_7'$ & 7 & -2 & -1 & 3 & 1 & -3 & -1 & 2 & 1 & 0 & 0 & 0 & -1 & -1 & 0 & 1 \\\\\n\\hline $\\Sigma_{14}$ & 14 & 5 & -2 & 2 & -2 & 2 & 2 & 1 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 1 \\\\\n\\hline $\\Sigma_{14}'$ & 14 & -4 & 6 & -2 & 0 & 0 & 2 & 0 & 2 & 0 & 0 & -2 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{14}''$ & 14 & 5 & -2 & 2 & 2 & -2 & 2 & 1 & -1 & 1 & 1 & -1 & 0 & 0 & 0 & -1 \\\\\n\\hline $\\Sigma_{21}$ & 21 & 3 & 5 & 1 & 3 & -1 & 1 & -1 & 0 & -1 & -1 & 1 & 1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_{21}'$ & 21 & 3 & 5 & 1 & -3 & 1 & 1 & -1 & 0 & 1 & 1 & 1 & -1 & -1 & 0 & 0 \\\\\n\\hline $\\Sigma_{27}$ & 27 & 0 & 3 & 3 & 3 & 3 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & -1 & 0 \\\\\n\\hline $\\Sigma_{27}'$ & 27 & 0 & 3 & 3 & -3 & -3 & -1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & 0 \\\\\n\\hline $\\Sigma_{42}$ & 42 & 6 & 2 & -6 & 0 & 0
& -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{56}$ & 56 & 2 & -8 & 0 & 0 & 0 & 0 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n\\hline $\\Sigma_{64}$ & 64 & -8 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n\\hline\n\\end{tabular} \\\\\n\\caption{Character table for $G_2(2)$.} \\label{table:Character_table-IP5}\n\\end{center}\n\\end{table}\n\nThe group $G_2(2)$ has only two seven-dimensional real representations, both of which are irreducible. Of these two, only $\\Sigma_7$ has character values in $[-2,7]$ for all $g \\in G_2(2)$, thus this is the restriction $\\varrho_1$ of the seven-dimensional fundamental representation $\\rho_1$ of $G_2$ to $G_2(2)$.\nThe McKay graph for $\\varrho_1$ is given in Figure \\ref{Fig-McKay_Graph-IP5-rho1}.\nThis graph is a $\\mathbb{Z}_2$-orbifold of the McKay graph $\\mathcal{G}^{\\varrho_1}_{PU(3;3)}$ for $PU(3;3)$.\nNote that the McKay graph given in \\cite[Figure 2]{he:2003} is not that for $\\varrho_1$, but rather for the other irreducible seven-dimensional representation $\\Sigma_7'$ of $G_2(2)$.\n\n\\begin{figure}[tb]\n\\begin{center}\n \\includegraphics[width=70mm]{Fig-McKay_Graph-IP5-rho1}\\\\\n \\caption{The McKay graphs $\\mathcal{G}^{\\varrho_1}_{G_2(2)}$.} \\label{Fig-McKay_Graph-IP5-rho1}\n\\end{center}\n\\end{figure}\n\nThe decomposition of the Kronecker square of $\\varrho_1$ into irreducibles is given by\n$$\\varrho_1^2 = \\mathrm{id} + \\varrho_1 + \\Sigma_{14} + \\Sigma_{27},$$\nthus the fourteen-dimensional representation $\\varrho_2$ is given by the irreducible representation $\\Sigma_{14}$.\nWe note that the eigenvalues of the representation matrices for $C_4, C_4^{\\prime}, C_4^{\\prime\\prime}$ all square to those for $C_2$, those for $C_6$ square to those for $C_3$, those for $C_6^{\\prime}$ square to those for $C_3^{\\prime}$, those for $C_8$ square to those for $C_4$, those for $C_8^{\\prime}$ square to those for $C_4^{\\prime\\prime}$, and those for $C_{12}, C_{12}^{\\prime}, C_{12}^{\\prime\\prime}$ all square to those for $C_6$.\n\n\nThen from (\\ref{eqn:moments-subgroupG2}) and Tables \\ref{table:Character_table-IP5} and \\ref{Table:subgroupsG2-orbits(theta1,theta2)}, we see that\n\\begin{align*}\n\\varsigma_{m,n} & = \\frac{1}{12096} \\Omega^W(0,0) + \\frac{56}{12096} \\Omega^W(1\/3,2\/3) + \\frac{63+252}{12096} \\Omega^W(0,1\/2) + \\frac{672}{12096} \\Omega^W(0,1\/3) \\\\\n& \\quad + \\frac{126+252}{12096} \\Omega^W(0,1\/4) + \\frac{378}{12096} \\Omega^W(1\/4,1\/2) + \\frac{504}{12096} \\Omega^W(1\/6,1\/3) \\\\\n& \\quad + \\frac{1728}{12096} \\Omega^W(1\/7,3\/7) + \\frac{1512}{12096} \\Omega^W(1\/8,1\/2) + \\frac{1512}{12096} \\Omega^W(1\/8,3\/8) \\\\\n& \\quad + \\frac{1008+1008+1008}{12096} \\Omega^W(1\/12,5\/12) + \\frac{2016}{12096} \\Omega^W(1\/6,1\/2),\n\\end{align*}\nwhere $\\Omega^W(\\theta_1,\\theta_2)$ is as in Section \\ref{sect:II1}.\nThe measures given by these points have all appeared in the previous sections.\nNote however that $12(\\Omega^W(1\/8,1\/2) + \\Omega^W(1\/8,3\/8)) = 8 \\int_{\\mathbb{T}^2} \\Omega(\\theta_1,\\theta_2) (J^2\/64\\pi^4) \\, \\mathrm{d}_8 \\times \\mathrm{d}_8$, since the Jacobian $J=0$ along the boundaries of the orbit of the fundamental domain, whilst $J^2(g(1\/8,1\/2))\/64\\pi^4 = J^2(g(1\/8,3\/8))\/64\\pi^4 = 8$ for all $g \\in D_{12}$.\nThus we obtain:\n\n\\begin{Thm}\nThe joint spectral measure (over $\\mathbb{T}^2$) for all embeddings of $G_2(2)$ into the fundamental representations of $G_2$ 
is\n\\begin{equation}\n\\begin{split}\n\\mathrm{d}\\varepsilon & = \\frac{1}{768\\pi^4} J^2 \\, \\mathrm{d}_8 \\times \\mathrm{d}_8 + \\frac{1}{1344\\pi^4} J^2 \\, \\mathrm{d}_7 \\times \\mathrm{d}_7 + \\frac{1}{1152\\pi^4} J^2 \\, \\mathrm{d}_6 \\times \\mathrm{d}_6 + \\frac{1}{12} \\, \\mathrm{d}_4 \\times \\mathrm{d}_4 \\\\\n& \\quad + \\frac{1}{12} \\, \\mathrm{d}_3 \\times \\mathrm{d}_3 - \\frac{1}{72} \\, \\mathrm{d}_2 \\times \\mathrm{d}_2 - \\frac{1}{252} \\, \\mathrm{d}_1 \\times \\mathrm{d}_1 + \\frac{1}{768\\pi^4} J^2 \\, \\mathrm{d}^{(4)} + \\frac{1}{12} \\, \\mathrm{d}^{(2)} - \\frac{1}{24} \\, \\mathrm{d}^{(1)},\n\\end{split}\n\\end{equation}\nwhere $\\mathrm{d}_m$ is the uniform measure over $m^{\\mathrm{th}}$ roots of unity and $\\mathrm{d}^{(k+4)}$ is the uniform measure on the points in $C_k^W$.\n\\end{Thm}\n\n\n\n\n\n\n\n\n\n\\bigskip \\bigskip\n\n\\begin{footnotesize}\n\\noindent{\\it Acknowledgement.}\n\nThe second author was supported by the Coleg Cymraeg Cenedlaethol.\n\\end{footnotesize}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $S$ be a complex manifold and $D$ be a (reduced) hypersurface $D$, referred to as a \\emph{divisor} in the sequel. \nIn the landmark paper \\cite{Sai80}, Kyoji Saito introduced the sheaves of $\\O_S$-modules of logarithmic differential forms and logarithmic vector fields on $S$ along $D$.\nLogarithmic vector fields are tangent to $D$ at any smooth point of $D$; logarithmic differential forms have simple poles and form a complex under the usual differential.\nSaito's clean algebraically flavored definition encodes deep geometric, topological, and representation theoretic information on the singularities that is yet only partly understood.\nThe precise target for his theory was the Gau\\ss-Manin connection on the base $S$ of the semiuniversal deformation of isolated hypersurface singularities, a logarithmic connection along the discriminant $D$.\nSaito developed mainly three aspects of his logarithmic theory in loc.~cit.: Free divisors, logarithmic stratifications, and logarithmic residues.\nMany fascinating developments grew out of Saito's paper, a few of which we highlight in the following brief overview.\n\nA divisor is called free if the sheaf of logarithmic vector fields, or its dual, the sheaf of logarithmic $1$-forms, is a vector bundle; in particular, normal crossing divisors are free.\nNot surprisingly, discriminants of isolated hypersurface singularities are free divisors (see \\cite[(3.19)]{Sai80}). \nSimilar results were shown for isolated complete intersection singularities (see \\cite[\\S6]{Loo84}) and space curve singularities (see \\cite{vST95}).\nBoth the reflection arrangements and discriminants associated with finite unitary reflection groups are free divisors (see \\cite{Ter80b}).\nMore recent examples include discriminants in certain prehomogeneous vector spaces (see \\cite{GMS11}) whose study led to new constructions such as a chain rule for free divisors (see \\cite[\\S4]{BC12}).\nFree divisors can be seen as the extreme case opposite to isolated singularities: Unless smooth, free divisors have Cohen--Macaulay singular loci of codimension $1$.\nThe freeness property is closely related to the complement of the divisor being a $K(\\pi,1)$-space (see \\cite[(1.12)]{Sai80}, \\cite{Del72}), although these two properties are not equivalent (see \\cite{ER95}). \nEven in special cases, such as that of hyperplane arrangements, freeness is not fully understood yet. 
\nFor instance, Terao's conjecture on the combinatorial nature of freeness for arrangements is one of the central open problems in arrangement theory.\n\nSaito's second topic, the so-called logarithmic stratification of $S$, consists of immersed integral manifolds of logarithmic vector fields along $D$.\nContrary to what the terminology suggests, the resulting decomposition of $S$ is not locally finite, in general.\nSaito attached the term holonomic to this additional feature: a point in $S$ is holonomic if a neighborhood meets only finitely many logarithmic strata.\nAlong any logarithmic stratum, the pair $(D,S)$ is analytically trivial, which turns holonomicity into a property of logarithmic strata.\nThe logarithmic vector fields are tangent to the strata of the canonical Whitney stratification; the largest codimension up to which all Whitney strata are (necessarily holonomic) logarithmic strata is called the holonomic codimension (see \\cite[p.~221]{DM91}).\nHolonomic free divisors were later called Koszul--free divisors.\n\nIn case of a normal crossing divisor, the complex of logarithmic differential forms computes the cohomology of the complement of $D$ in $S$, an ingredient of Deligne's mixed Hodge structure (see \\cite{Del71}).\nThe natural question, for which free divisors the same holds true for the complex of logarithmic differential forms, is referred to as the logarithmic comparison theorem, or, for short, the LCT (see \\cite{Tor07} for a survey).\nFor free divisors, this property turned out to be related to homogeneity properties of the singularities. \nIndeed, an explicit class of hypersurfaces for which the LCT holds true is that of (weakly) locally quasihomogeneous divisors (see \\cite{CNM96,Nar08}).\nMoreover, it is conjectured that the LCT implies strong Euler homogeneity of $D$, which has been proved only for Koszul--free divisors, and in dimension $\\dim S\\le3$ (see \\cite{CMNC02,GS06}).\nFor strongly Koszul--free divisors $D$ (see \\cite[Def.~7.1]{GS10}), the logarithmic comparison theorem holds true exactly if $D$ is strongly Euler homogeneous and $-1$ is the minimal integer root of all local $b$-functions (see \\cite[Cor.~4.3]{CN05} and \\cite[Cor.~1.8]{Tor04}).\nFor isolated quasihomogeneous singularities, the LCT is equivalent to the vanishing of certain graded parts of the Milnor algebra (see \\cite{HM98}); there are related Hodge-theoretic properties in the non-quasihomogeneous case (\\cite{Sch10}).\nThe study of the LCT led to a variant of $D$-module theory over the ring of logarithmic differential operators along a free divisor (see \\cite{CU02,CN05,Nar08}).\nA key player in this context is the $D$-module $M^{\\log D}$ defined by the ideal of logarithmic vector fields, considered as differential operators of order one (see \\cite{CU02}); Saito-holonomicity of $D$ implies holonomicity of $M^{\\log D}$ in the $D$-module sense; but the converse is false.\n\nMuch less attention has been devoted to Saito's logarithmic residues, the main topic in this paper.\nIt was Poincar\\'e who first defined a residual $1$-form of a rational differential $2$-form on $\\mathds{C}^2$ (see \\cite{Poi87}).\nLater, the concept was generalized by de Rham and Leray to residues of closed meromorphic $p$-forms with simple poles along a smooth divisor $D$; these residues are holomorphic $(p-1)$-forms on $D$ (see \\cite{Ler59}).\nThe construction of Deligne's mixed Hodge structure uses, again holomorphic, residues of logarithmic differential forms along normal crossing divisors (see
\\cite{Del71}).\nNotably, in Saito's generalization to arbitrary singular divisors $D$, the residue of a logarithmic $p$-form becomes a \\emph{meromorphic} $(p-1)$-form on $D$, or on its normalization $\\widetilde D$. \nUsing work of Barlet~\\cite{Bar78}, Aleksandrov linked Saito's construction to Grothendieck duality theory: The image of Saito's logarithmic residue map is the module of regular differential forms on $D$ (see \\cite[\\S4, Thm.]{Ale88} and \\cite{Bar78}).\nWith Tsikh, he suggested a generalization for complete intersection singularities based on multilogarithmic differential forms depending on a choice of generators of the defining ideal (see \\cite{AT01}). \nRecently, he approached the natural problem of describing the mixed Hodge structure on the complement of an LCT divisor in terms of logarithmic differential forms (see \\cite{Ale12}).\nIn Dolgachev's work, one finds a different sheaf of logarithmic differential forms which is a vector bundle exactly for normal crossing divisors and whose reflexive hull is Saito's sheaf of logarithmic differential forms (see \\cite{Dol07}).\nAlthough his approach to logarithmic residues using adjoint ideals has a similar flavor to ours, he does not reach the conclusion of our main Theorem~\\ref{10} (see Remark~\\ref{49b}).\n\n\\medskip\n\nWhile most constructions in Saito's logarithmic theory and its generalizations have a dual counterpart, a notion of a dual logarithmic residue associated to a vector field was not known to the authors.\nThe main motivation and fundamental result of this article is the construction of a dual logarithmic residue (see Section~\\ref{13}).\nThis turned out to have surprising applications including a proof of a conjecture of Saito which had been open for more than 30 years.\nSaito's conjecture is concerned with comparing logarithmic residues of $1$-forms, that is, certain meromorphic functions on $\\widetilde D$, with holomorphic functions on $\\widetilde D$.\nThe latter can also be considered as weakly holomorphic functions on $D$, that is, functions on the complement of the singular locus $Z$ of $D$ that are locally bounded near points of $Z$.\nWhile any such weakly holomorphic function is the residue of some logarithmic $1$-form, the image of the residue map can contain functions which are not weakly holomorphic.\nThe algebraic condition of equality was related by Saito to a geometric and a topological property as follows (see \\cite[(2.13)]{Sai80}).\n\n\\begin{thm}[Saito]\\label{28}\nFor a divisor $D$ in a complex manifold $S$, consider the following conditions:\n\\begin{enumerate}[(A)]\n\\item\\label{28a} The local fundamental groups of the complement $S\\backslash D$ are Abelian.\n\\item\\label{28b} In codimension $1$, that is, outside of an analytic subset of codimension at least $2$ in $D$, $D$ is normal crossing.\n\\item\\label{28c} The residue of any logarithmic $1$-form along $D$ is a weakly holomorphic function on $D$.\n\\end{enumerate}\nThen the implications \\eqref{28a} $\\Rightarrow$ \\eqref{28b} $\\Rightarrow$ \\eqref{28c} hold true.\n\\end{thm}\n\nSaito asked whether the converse implications in Theorem~\\ref{28} hold true.\nThe first one was later established by L\\^e and Saito~\\cite{LS84}; it generalizes the Zariski conjecture for complex plane projective nodal curves proved by Fulton and Deligne (see \\cite{Ful80,Del81}).\n\n\\begin{thm}[L\\^e--Saito]\nThe implication \\eqref{28a} $\\Leftarrow$ \\eqref{28b} in Theorem~\\ref{28} holds true.\n\\end{thm}\n\nOur duality of logarithmic residues turns
out to translate condition \\eqref{28c} in Theorem~\\ref{28} into the more familiar equality of the Jacobian ideal and the conductor ideal.\nA result of Ragni Piene~\\cite{Pie79} proves that such an equality forces $D$ to have only smooth components if it has a smooth normalization. \nThis is a technical key point which leads to a proof of the missing implication in Theorem~\\ref{28}.\n\n\\begin{thm}\\label{10}\nThe implication \\eqref{28b} $\\Leftarrow$ \\eqref{28c} in Theorem~\\ref{28} holds true: If the residue of any logarithmic $1$-form along $D$ is a weakly holomorphic function on $D$ then $D$ is normal crossing in codimension $1$.\n\\end{thm}\n\n\\begin{rmk}\\label{49b}\nSaito~\\cite[(2.11)]{Sai80} proved Theorem~\\ref{10} for plane curves.\nIf $D$ has holonomic codimension at least $1$ (as defined above), this yields the general case by analytic triviality along logarithmic strata (see \\cite[\\S3]{Sai80}).\nUnder this latter hypothesis, Theorem~\\ref{10} follows also from a result of Dolgachev (see \\cite[Cor.~2.2]{Dol07}).\nHowever, for example, the equation $xy(x+y)(x+yz)=0$ defines a well-known free divisor with holonomic codimension $0$.\n\\end{rmk}\n\nThe preceding results and underlying techniques serve to address two natural questions:\nThe algebraic characterization of condition~\\eqref{28c} through Theorem~\\ref{10} raises the question about the algebraic characterizations of normal crossing divisors.\nEleonore Faber was working on this question at the same time as the results presented here were developed.\nShe considered freeness as a first approximation for being normal crossing and noticed that normal crossing divisors satisfy an extraordinary condition:\nThe ideal of partial derivatives of a defining equation is radical. \nShe proved the following converse implications (see \\cite{Fab11,Fab12}).\n\n\\begin{thm}[Faber]\nConsider the following condition:\n\\begin{enumerate}[(A)]\\setcounter{enumi}{3}\n\\item\\label{28e} At any point $p\\in D$, there is a local defining equation $h$ for $D$ such that the ideal $\\mathcal{J}_h$ of partial derivatives is radical.\n\\item\\label{28f} $D$ is normal crossing.\n\\end{enumerate}\nThen the following holds:\n\\begin{asparaenum}\n\\item If $D$ is free then condition~\\eqref{28e} decends to all irreducible components of $D$.\n\\item Conditions~\\eqref{28e} and \\eqref{28f} are equivalent if $D$ is locally a plane curve or\na hyperplane arrangement, or \nif its singular locus is Gorenstein.\n\\end{asparaenum}\n\\end{thm}\n\nMotivated by Faber's problem we prove the following \n\n\\begin{thm}\\label{16}\nExtend the list of conditions in Theorem~\\ref{28} as follows:\n\\begin{enumerate}[(A)]\\setcounter{enumi}{5}\n\\item\\label{28d} The Jacobian ideal $\\mathcal{J}_D$ of $D$ is radical.\n\\item\\label{28g} The Jacobian ideal $\\mathcal{J}_D$ of $D$ equals the conductor ideal $\\mathcal{C}_D$ of the normalization $\\tilde D$.\n\\end{enumerate}\nThen condition~\\eqref{28d} implies condition~\\eqref{28b}.\nIf $D$ is a free divisor then conditions~\\eqref{28b}, \\eqref{28d} and \\eqref{28g} are equivalent.\n\\end{thm}\n\n\\begin{rmk}\\label{9}\nNote that $\\mathcal{J}_h$ is an $\\O_S$-ideal sheaf depending on a choice of local defining equation whereas its image $\\mathcal{J}_D$ in $\\O_D$ is intrinsic to $D$.\nIn particular, condition~\\eqref{28e} implies condition~\\eqref{28d}.\n\\end{rmk}\n\nWe obtain the following algebraic characterization of normal crossing divisors.\n\n\\begin{thm}\\label{38}\nFor a free divisor with smooth 
normalization, any one of the conditions~\\eqref{28a}, \\eqref{28b}, \\eqref{28c}, \\eqref{28d}, or \\eqref{28g} implies condition \\eqref{28f}.\n\\end{thm}\n\n\\begin{rmk}\nThe implication \\eqref{28d} $\\Rightarrow$ \\eqref{28f} in Theorem~\\ref{38} improves Theorem A in \\cite{Fab12} (see Remark~\\ref{9}), which is proved using \\cite{Pie79} like in the proof of our main result.\nProposition~C in \\cite{Fab12} is the implication \\eqref{28c} $\\Rightarrow$ \\eqref{28f} in Theorem~\\ref{38}, for the proof of which Faber uses our arguments.\n\\end{rmk}\n\nAs remarked above, free divisors are characterized by their singular loci being (empty or) maximal Cohen--Macaulay.\nIt is natural to ask when the singular locus of a divisor is Gorenstein.\nThis question is answered by the following\n\n\\begin{thm}\\label{40}\nA divisor $D$ has Gorenstein singular locus $Z$ of codimension $1$ if and only if $D$ is locally the product of a quasihomogeneous plane curve and a smooth space.\nIn particular, $D$ is locally quasihomogeneous and $Z$ is locally a complete intersection.\n\\end{thm}\n\n\\begin{rmk}\nTheorem~\\ref{40} complements a result of Kunz--Waldi \\cite[Satz~2]{KW84} saying that a Gorenstein algebroid curve has Gorenstein singular locus if and only if it is quasihomogeneous.\n\\end{rmk}\n\n\\section{Logarithmic modules and fractional ideals}\\label{30}\n\nIn this section, we review Saito's logarithmic modules, the relation of freeness and Cohen--Macaulayness of the Jacobian ideal, and the duality of maximal Cohen--Macaulay fractional ideals.\nWe switch to a local setup for the remainder of the article.\n \nLet $D$ be a reduced effective divisor defined by $\\mathcal{I}_D=\\O_S\\cdot h$ in the smooth complex analytic space germ $S=(\\mathds{C}^n,0)$.\nDenote by $h\\colon S\\to T=(\\mathds{C},0)$ a function germ generating the ideal $\\mathcal{I}_D=\\O_S\\cdot h$ of $D$.\nWe abbreviate by\n\\[\n\\Theta_S:=\\Der_\\mathds{C}(\\O_S)=\\Hom_{\\O_S}(\\Omega^1_S,\\O_S)\n\\]\nthe $\\O_S$-module of vector fields on $S$.\nRecall Saito's definition \\cite[\\S1]{Sai80} of the $\\O_S$-modules of logarithmic differential forms and of logarithmic vector fields.\n\n\\begin{dfn}[Saito]\\label{33}\n\\begin{align*}\n\\Omega^p(\\log D)&:=\\{\\omega\\in\\Omega^p_S(D)\\mid d\\omega\\in\\Omega_S^{p+1}(D)\\}\\\\\n\\Der(-\\log D)&:=\\{\\delta\\in\\Theta_S\\mid dh(\\delta)\\in\\mathcal{I}_D\\}\n\\end{align*}\n\\end{dfn}\n\nThese modules are stalks of analogously defined coherent sheaves of $\\O_S$-modules (see \\cite[(1.3),(1.5)]{Sai80}).\nIt is obvious that each of these sheaves $\\L$ is torsion free and normal, and hence reflexive (see \\cite[Prop.~1.6]{Har80}).\nMore precisely, $\\Omega^1(\\log D)$ and $\\Der(-\\log D)$ are mutually $\\O_S$-dual (see \\cite[(1.6)]{Sai80}).\nNormality of a sheaf $\\L$ means that $\\L=i_*i^*\\L$ where $i\\colon S\\setminus Z\\hookrightarrow S$ denotes the inclusion of the complement of the singular locus of $D$.\nIn case of $\\L=\\Der(-\\log D)$, this means that $\\delta\\in\\Der(-\\log D)$ if and only if $\\delta$ is tangent to $D$ at all smooth points.\nIn addition, $\\Omega^\\bullet(\\log D)$ is an exterior algebra over $\\O_S$ closed under exterior differentiation and $\\Der(-\\log D)$ is closed under the Lie bracket.\n\n\\begin{dfn}\nA divisor $D$ is called free if $\\Der(-\\log D)$ is a free $\\O_S$-module.\n\\end{dfn}\n\nThe definition of $\\Der(-\\log D)$ can be rephrased as a short exact sequence of 
$\\O_S$-modules\n\\begin{equation}\\label{1}\n\\SelectTips{cm}{}\\xymatrix{\n0&\\mathcal{J}_D\\ar[l]&\\Theta_S\\ar[l]_-{dh}&\\Der(-\\log D)\\ar[l]&0\\ar[l]\n}\n\\end{equation}\nwhere the Jacobian ideal $\\mathcal{J}_D$ of $D$ is defined as the Fitting ideal \n\\[\n\\mathcal{J}_D:=\\mathcal{F}^{n-1}_{\\O_D}(\\Omega_D^1)=\\ideal{\\frac{\\partial h}{\\partial x_1},\\dots,\\frac{\\partial h}{\\partial x_n}}\\subset{\\O_D}.\n\\]\nNote that $\\mathcal{J}_D$ is an ideal in $\\O_D$ and pulls back to $\\ideal{h,\\frac{\\partial h}{\\partial x_1},\\dots,\\frac{\\partial h}{\\partial x_n}}$ in $\\O_S$.\nWe shall consider the singular locus $Z$ of $D$ equipped with the structure defined by $\\mathcal{J}_D$, that is,\n\\begin{equation}\\label{21}\n\\O_Z:=\\O_D\/\\mathcal{J}_D.\n\\end{equation}\nNote that $Z$ might be non-reduced.\nThere is the following intrinsic characterization of free divisors in terms of their singular locus (see \\cite[\\S1 Thm.]{Ale88} or \\cite[Prop.~2.4]{Ter80a}).\n\n\\begin{thm}\\label{19}\nThe following are equivalent:\n\\begin{enumerate}\n\\item\\label{19a} $D$ is a free divisor.\n\\item\\label{19b} $\\mathcal{J}_D$ is a maximal Cohen--Macaulay $\\O_D$-module.\n\\item\\label{19c} $D$ is smooth or $Z$ is Cohen--Macaulay of codimension $1$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nIf $dh(\\Theta_S)$ does not minimally generate $\\mathcal{J}_D$, then $D\\cong D'\\times(\\mathds{C}^k,0)$, $k>0$, by the triviality lemma~\\cite[(3.5)]{Sai80}.\nBy replacing $D$ by $D'$, we may therefore assume that \\eqref{1} is a minimal resolution of $\\mathcal{J}_D$ as $\\O_S$-module.\nThus, the equivalence of \\eqref{19a} and \\eqref{19b} is due to the Auslander--Buchsbaum formula.\nBy Lemma~\\ref{25} below, $\\mathcal{J}_D$ has height at least $1$ and the equivalence~\\eqref{19b} $\\Leftrightarrow$ \\eqref{19c} is proved in \\cite[Satz~4.13]{HK71}.\n\\end{proof}\n\n\\begin{cor}\\label{27}\nAny $D$ is free in codimension $1$.\n\\end{cor}\n\n\\begin{proof}\nBy Theorem~\\ref{19}, the non-free locus of $D$ is contained in $Z$ and equals\n\\[\n\\{z\\in Z\\mid\\depth\\O_{Z,z}<\\dim\\O_{Z,z}\\},\n\\]\nwhich is an analytic subset of codimension at least $2$ in $D$.\n\\end{proof}\n\n\\begin{lem}\\label{35}\nLet $\\phi\\colon Y\\to X$ be a morphism of germs of complex analytic spaces such that $\\Omega^1_{Y\/X}=0$.\nThen $\\phi$ is a closed embedding.\n\\end{lem}\n\n\\begin{proof}\nChoose closed embeddings of $Y$ and $X$ into germs of smooth spaces $T$ and $S$ and a lift $\\Phi\\colon T\\to S$ of $\\phi$ fitting into a commutative diagram\n\\[\n\\SelectTips{cm}{}\\xymatrix{\nY\\ar@{^(->}[r]\\ar[d]^-\\phi&T\\ar[d]^-\\Phi\\\\\nX\\ar@{^(->}[r]&S.\n}\n\\]\nSetting $\\Phi_i=x_i\\circ\\Phi$ and $\\phi_i=\\Phi_i+\\mathcal{I}_Y$ for coordinates $x_1,\\dots,x_n$ on $S$ and $\\mathcal{I}_Y$ the defining ideal of $Y$ in $T$, we can write $\\Phi=(\\Phi_1,\\dots,\\Phi_n)$ and $\\phi=(\\phi_1,\\dots,\\phi_n)$ and hence\n\\begin{equation}\\label{26}\n\\Omega^1_{Y\/X}=\\frac{\\Omega^1_Y}{\\sum_{i=1}^n\\O_Yd\\phi_i}=\\frac{\\Omega^1_T}{\\O_Td\\mathcal{I}_Y+\\sum_{i=1}^n\\O_Td\\Phi_i}.\n\\end{equation}\nWe may choose $T$ of minimal dimension so that $\\mathcal{I}_Y\\subseteq\\mathfrak{m}_T^2$ and hence $d\\mathcal{I}_Y\\subseteq\\mathfrak{m}_T\\Omega^1_T$.\nNow \\eqref{26} and the hypothesis $\\Omega^1_{Y\/X}=0$ show that $\\Omega^1_T=\\sum_{i=1}^n\\O_Td\\Phi_i+\\mathfrak{m}_T\\Omega^1_T$ which implies that $\\Omega^1_T=\\sum_{i=1}^n\\O_Td\\Phi_i$ by Nakayama's Lemma.\nBut then $\\Phi$ and hence $\\phi$ is a closed embedding as claimed.\n\\end{proof}\n\n\\begin{lem}\\label{7}\nIf $\\mathcal{J}_D=\\mathcal{C}_D$ and $\\widetilde D$ is smooth then $D$ has smooth irreducible components.\n\\end{lem}\n\n\\begin{proof}\nBy definition, the ramification ideal of $\\pi$ is the Fitting ideal\n\\[\n\\mathcal{R}_\\pi:=\\mathcal{F}^0_{\\O_{\\widetilde D}}(\\Omega^1_{\\widetilde D\/D}).\n\\]\nAs a special case of a result of Ragni Piene~\\cite[Cor.~1, Prop.~1]{Pie79} (see also \\cite[Cor.~2.7]{OZ87}),
\n\\[\n\\mathcal{C}_D\\mathcal{R}_\\pi=\\mathcal{J}_D\\O_{\\widetilde D}\n\\]\nBy hypothesis, this becomes\n\\[\n\\mathcal{C}_D\\mathcal{R}_\\pi=\\mathcal{C}_D\n\\]\nsince $\\mathcal{C}_D$ is an ideal in both $\\O_D$ and $\\O_{\\widetilde D}$.\nBy Nakayama's lemma, it follows that that $\\mathcal{R}_\\pi=\\O_{\\widetilde D}$ and hence that $\\Omega^1_{\\widetilde D\/D}=0$.\n\nSince $\\widetilde D$ is normal, irreducible and connected components coincide.\nBy localization to a connected component $\\widetilde D_i$ of $\\widetilde D$ and base change to $D_i=\\pi(\\widetilde D_i)$ (see \\cite[Ch.~II, Prop.~8.2A]{Har77}), we obtain $\\Omega^1_{\\widetilde D_i\/D_i}=0$.\nThen the normalization $\\widetilde D_i\\to D_i$ is an immersion by Lemma~\\ref{35} and hence $D_i=\\widetilde D_i$ is smooth.\n\\end{proof}\n\nWe are now ready to prove our main results.\n\n\\begin{proof}[Proof of Theorem~\\ref{10}]\nIn codimension $1$, $D$ is free by Corollary~\\ref{27} and hence $\\mathcal{J}_D=\\mathcal{C}_D$ by Corollary~\\ref{42} and our hypothesis.\nMoreover, $\\widetilde D$ is smooth in codimension $1$ by normality. \nBy our language convention, this means that there is an analytic subset $A\\subset D$ of codimension at least $2$ such that, for $p\\in D\\setminus A$, $\\mathcal{J}_{D,p}=\\mathcal{C}_{D,p}$ and $\\widetilde D$ is smooth above $p$.\nFrom Lemma~\\ref{7} we conclude that the local irreducible components $D_i$ of the germ $(D,p)$ are smooth. \nThe hypothesis $\\mathcal{R}_D=\\O_{\\widetilde D}$ at $p$ then reduces to the equality $\\mathcal{R}_{D,p}=\\bigoplus\\O_{D_i}$.\nThus, the implication \\eqref{60d} $\\Leftarrow$ \\eqref{60c} in Theorem~\\ref{60} yields the claim. \n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{16}]\nIn order to prove that \\eqref{28d} implies \\eqref{28b}, we may assume that $Z$ is smooth and reduce to the case of a plane curve as in the proof of Theorem~\\ref{40}. \nThen the Mather--Yau theorem~\\cite{MY82} applies (see \\cite[Prop.~9]{Fab12} for details).\n\nNow assume that $D$ is free and normal crossing in codimension $1$. 
\nBy the first assumption and Theorem~\\ref{19}, $Z$ is Cohen--Macaulay of codimension $1$ and, in particular, satisfies Serre's condition $S_1$.\nBy the second assumption, $Z$ also satisfies Serre's condition $R_0$.\nThen $Z$ is reduced, and hence $\\mathcal{J}_D$ is radical, by Serre's reducedness criterion.\nThis proves that \\eqref{28b} implies \\eqref{28d} for free $D$.\n\nThe last equivalence then follows from Theorems~\\ref{28} and \\ref{10} and Corollary~\\ref{42}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{38}]\nBy Theorems~\\ref{28}, \\ref{10}, and \\ref{16}, we may assume that $\\mathcal{J}_D=\\mathcal{C}_D$.\nThen Lemma~\\ref{7} shows that the irreducible components $D_i=\\{h_i=0\\}$, $i=1,\\dots,m$, of $D$ are smooth, and hence normal.\nIt follows that \n\\[\n\\mathcal{R}_D=\\O_{\\widetilde D}=\\bigoplus_{i=1}^m\\O_{\\widetilde D_i}=\\bigoplus_{i=1}^m\\O_{D_i}.\n\\]\nBy the implication \\eqref{60a} $\\Leftarrow$ \\eqref{60c} in Theorem~\\ref{60}, this is equivalent to \n\\begin{equation}\\label{51}\n\\Omega^1(\\log D)=\\sum_{i=1}^m\\O_S\\frac{dh_i}{h_i}+\\Omega^1_S.\n\\end{equation}\nOn the other hand, Saito's criterion \\cite[(1.8) i)]{Sai80} for freeness of $D$ reads\n\\begin{equation}\\label{52}\n\\bigwedge^n\\Omega^1(\\log D)=\\Omega^n_S(D).\n\\end{equation}\nCombining \\eqref{51} and \\eqref{52}, it follows immediately that $D$ is normal crossing (see also \\cite[Prop.~B]{Fab12}):\n\nAs $\\O_S$-module and modulo $\\Omega_S^n$, the left hand side of \\eqref{52} is, due to \\eqref{51}, generated by expressions \n\\begin{gather}\\label{53}\n\\frac{d h_{i_1}\\wedge\\dots\\wedge d h_{i_k}\\wedge dx_{j_1}\\wedge\\dots\\wedge dx_{j_{n-k}}}{h_{i_1}\\cdots h_{i_k}},\\\\ \n\\nonumber1\\le i_1<\\cdots$223\\\\\n RefineNet \\cite{RefineNet} & Dilated-ResNet152 & 47.3 & - & $>$223\\\\\n MSCI \\cite{MSCI} & Dilated-ResNet152 & 50.3 & - & $>$223\\\\\n PSPNet \\cite{zhao2017pyramid} & Dilated-ResNet101 & - & 43.29 & $>$223 \\\\\n SAC \\cite{zhang2017scale} & Dilated-ResNet101 & - & 44.30 & $>$223 \\\\\n EncNet \\cite{Zhang_2018_CVPR} & Dilated-ResNet101 & 51.7 & 44.65 & 234\\\\\n DANet \\cite{fu2019dual} & Dilated-ResNet101 & 52.6 & - & $>$223 \\\\ \n APCNet \\cite{he2019adaptive} & Dilated-ResNet101 & 54.7 & 45.38 & 245\\\\ \n CFNet \\cite{Zhang_2019_CVPR} & Dilated-ResNet101 & 54.0 & 44.89 & $>$223 \\\\ \n ACNet \\cite{Fu_2019_ICCV} & Dilated-ResNet101 & 54.1 & \\textbf{45.90} & $>$223\\\\ \n APNB \\cite{zhu2019asymmetric} & Dilated-ResNet101 & 52.8 & 45.24 & $>$223 \\\\ \n DMNet \\cite{he2019dynamic} & Dilated-ResNet101 & 54.4 & 45.50 & 242\\\\\n\\hline\n Ours & ResNet101 & \\textbf{55.3} & {45.28} & \\textbf{70} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{comment}\n\\begin{table}[!t]\n\\caption{Segmentation results of state-of-the-art methods on ADE20K validation set.}\n\\label{table:ade20k}\n\\centering\n\\begin{tabular}{lll}\n\\hline\n\\textbf{Method} & \\textbf{Backbone} & \\textbf{mIoU\\%} \\\\\\hline\\hline\nPSPNet \\cite{zhao2017pyramid} & Dilated-ResNet101 & 43.29 \\\\\nEncNet \\cite{Zhang_2018_CVPR} & Dilated-ResNet101 & 44.65 \\\\\nSAC \\cite{zhang2017scale} & Dilated-ResNet101 & 44.30 \\\\\nDSSPN \\cite{DSSPN} & Dilated-ResNet101 & 43.68 \\\\\nAPCNet \\cite{he2019adaptive} & Dilated-ResNet101 & 45.38 \\\\\nCFNet \\cite{Zhang_2019_CVPR} & Dilated-ResNet101 & 44.89 \\\\\nDMNet \\cite{he2019dynamic} & Dilated-ResNet101 & 45.50 \\\\ \nACNet \\cite{Fu_2019_ICCV} & Dilated-ResNet101 & 45.90 \\\\ \nAPCNet \\cite{he2019adaptive} & Dilated-ResNet101 & 45.38 
\\\\\nCCNet \\cite{Huang_2019_ICCV} & Dilated-ResNet101 & 45.22 \\\\ \nAPNB \\cite{zhu2019asymmetric} & Dilated-ResNet101 & 45.24 \\\\ \n\\hline\nOurs & ResNet101 & \\textbf{45.28} \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\end{comment}\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\begin{center}\n\\begin{tabular}{C{2.2cm}C{2.2cm}C{2.2cm}C{2.2cm}C{2.2cm}}\n \\includegraphics[width=2.20cm]{figs\/exp1_fig0_a} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig0_b} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig0_c} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig0_d} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig0_e} \\\\\n \\includegraphics[width=2.20cm]{figs\/exp1_fig1_a} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig1_b} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig1_c} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig1_d} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig1_e} \\\\\n \\includegraphics[width=2.20cm]{figs\/exp1_fig2_a} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig2_b} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig2_c} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig2_d} &\n \\includegraphics[width=2.20cm]{figs\/exp1_fig2_e} \\\\\n \n \n \n \n \n \\centering (a) & (b) & (c) & (d) & (e) \\\\\n\\end{tabular}\n\\end{center}\n\\caption{(a) Input images from the PASCAL Context and ADE20K dataset. (b-e) Different weighting maps $\\tilde{A}_i$ for creating the holistic codewords.}\n\\label{fig:weighting_maps}\n\n\\begin{center}\n\\begin{tabular}{C{2.50cm}C{2.50cm}C{2.50cm}C{2.50cm}}\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004877_img} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004877_gt} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004877_fcn} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004877_our} \\\\\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2008_004203_img} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2008_004203_gt} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2008_004203_fcn} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2008_004203_our} \\\\\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004980_img} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004980_gt} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004980_fcn} &\n \\includegraphics[width=2.50cm]{figs\/figs_pcontext\/2010_004980_our} \\\\\n \\centering (a) Image & (b) GT & (c) Baseline & (d) EfficientFCN \\\\\n\\end{tabular}\n\\end{center}\n\\caption{Visualization results from the PASCAL Context dataset.}\n\\label{fig:vis}\n\\end{figure}\n\n\\noindent \\textbf{Number of holistic codewords.} We also conduct experiments to survey the\neffectiveness of the number of codewords in our predicted semantic codebook for feature upsampling.\nAs shown in Table \\ref{table:ablation_n_codewords}, as the number of the semantic codewords\nincreases from 32 to 512, the performance improves 1\\% in terms of mIoU on PASCAL Context.\nHowever, when the number of the semantic codewords further increases from 512 to 1024, the performance\nhas a slight drop, which might be caused by the additional parameters. The larger model capacity\nmight cause model to overfit the training data. In addition, since the assembly coefficients of the\nsemantic codewords are predicted from the OS=8 multi-scale fused feature $m_8$, the increased number\nof the semantic codewords also leads to significantly more extra computational cost. 
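To make this cost dependence explicit (in schematic notation introduced only for this remark, not the exact symbols used elsewhere in the paper), write the $n$ holistic codewords as $c_1,\\dots,c_n\\in\\mathbb{R}^{C}$ and let $w_{p,i}$ denote the assembly coefficient predicted from $m_8$ for codeword $c_i$ at an OS=8 location $p$; the upsampled feature is then\n$$u_p=\\sum_{i=1}^{n}w_{p,i}\\,c_i,$$\nso both predicting the $n$ coefficients and forming this sum scale linearly in $n$ at each of the $\\frac{H}{8}\\times\\frac{W}{8}$ locations.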
Thus, to balance\nperformance and efficiency, we set the number of holistic codewords to 256 for the\nPASCAL Context and PASCAL VOC 2012 datasets. Since PASCAL Context has only 60 classes, we observe\nthat the number of codewords needed is approximately four times the number of classes. We therefore set\nthe number of codewords to 600 for ADE20K, which has 150 classes.\n\n\\noindent \\textbf{Importance of the codeword information transfer for accurate assembly coefficient estimation.} \nThe key operation of our proposed HGD is to linearly assemble holistic codewords at each spatial location\nto form high-resolution upsampled feature maps based on the feature maps $m_8$. In our HGD, although\nthe OS=8 features maintain structural image information well, we argue that directly using OS=8 features to predict codeword assembly coefficients is less effective, since these features carry no information about the codewords. \nWe propose to transfer the codeword information as the average codeword basis,\nwhich is added location-wise to the OS=8 feature maps. To verify this\nargument, we design an experiment that removes the additive information\ntransfer, and only utilizes two $1\\times 1$ convolutions with the same output\nchannels on the OS=8 feature maps $m_8$ for directly predicting assembly\ncoefficients. The mIoU of this implementation is 54.2\\%, a clear performance drop, showing that the codeword information transfer\nfrom the codeword generation branch to the codeword coefficient prediction branch is essential.\n\n\n\\noindent \\textbf{Visualization of the weighting maps and example results.} \nTo better interpret the obtained holistic codewords, we visualize the weighting maps $\\tilde{A}$ for\ncreating the holistic codewords in Fig.~\\ref{fig:weighting_maps}, where each column shows one\nweighting map $\\tilde{A}_i$ for generating one holistic codeword. Some weighting maps focus on summarizing foreground objects or regions to create holistic codewords, while others pay attention to background contextual regions or objects. The visualization shows that the learned codewords implicitly capture different global contexts from the scenes.\nIn Fig.~\\ref{fig:vis}, we also visualize some predictions by the baseline\nDilatedFCN-8s and by our EfficientFCN, where our model with the proposed HGD significantly improves the segmentation results.\n\n\n\n\n\\begin{table*}[!t]\n\\small\n\\centering\n\\caption{Results of each category on PASCAL VOC 2012 test set. Our\n EfficientFCN obtains 85.4\\% without MS COCO dataset pre-training and 87.6\\% with MS COCO dataset pre-training. (For each\n column, the best two entries are filled in gray color. 
)}\n\\label{table:pascal_voc_2012}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{l|cccccccccccccccccccc|c}\n\\hline\n\\textbf{Method} & \\textbf{aero} & \\textbf{bike} & \\textbf{bird} & \\textbf{boat} & \\textbf{bottle} & \\textbf{bus} & \\textbf{car} & \\textbf{cat} & \\textbf{chair} & \\textbf{cow} & \\textbf{table} & \\textbf{dog} & \\textbf{horse} & \\textbf{mbike} & \\textbf{person} & \\textbf{plant} & \\textbf{sheep} & \\textbf{sofa} & \\textbf{train} & \\textbf{tv} & \\textbf{mIoU\\%} \\\\ \\hline\\hline\n\\textbf{FCN} \\cite{long2015fully} & 76.8 & 34.2 & 68.9\n& 49.4 & 60.3 & 75.3 & 74.7 & 77.6\n& 21.4 & 62.5 & 46.8 & 71.8 & 63.9\n& 76.5 & 73.9 & 45.2 & 72.4 & 37.4\n& 70.9 & 55.1 & 62.2 \\\\\n\\textbf{DeepLabv2} \\cite{chen2017deeplab} & 84.4 & 54.5 & 81.5\n& 63.6 & 65.9 & 85.1 & 79.1 & 83.4\n& 30.7 & 74.1 & 59.8 & 79.0 & 76.1 & 83.2 & 80.8 & 59.7 & 82.2 & 50.4 & 73.1 & 63.7 & 71.6 \\\\\n\\textbf{CRF-RNN} \\cite{CRF-RNN} & 87.5 & 39.0 & 79.7 & 64.2 & 68.3 & 87.6 & 80.8 & 84.4 & 30.4 & 78.2 & 60.4 & 80.5 & 77.8 & 83.1 & 80.6 & 59.5 & 82.8 & 47.8 & 78.3 & 67.1 & 72.0 \\\\\n\\textbf{DeconvNet} \\cite{DeconvNet} & 89.9 & 39.3 & 79.7 & 63.9 & 68.2 & 87.4 & 81.2 & 86.1 & 28.5 & 77.0 & 62.0 & 79.0 & 80.3 & 83.6 & 80.2 & 58.8 & 83.4 & 54.3 & 80.7 & 65.0 & 72.5 \\\\\n\\textbf{DPN} \\cite{DPN} & 87.7 & 59.4 & 78.4 & 64.9 & 70.3 & 89.3 & 83.5 & 86.1 & 31.7 & 79.9 & 62.6 & 81.9 & 80.0 & 83.5 & 82.3 & 60.5 & 83.2 & 53.4 & 77.9 & 65.0 & 74.1 \\\\\n\\textbf{Piecewise} \\cite{Piecewise} & 90.6 & 37.6 & 80.0\n& 67.8 & 74.4 & 92 & 85.2 & 86.2\n& 39.1 & 81.2 & 58.9 & 83.8 & 83.9\n& 84.3 & 84.8 & 62.1 & 83.2 & 58.2\n& 80.8 & 72.3 & 75.3 \\\\\n\\textbf{ResNet38} \\cite{ResNet38} & 94.4 & 72.9 & 94.9\n& 68.8 & 78.4 & 90.6 & 90.0 & 92.1\n& 40.1 & 90.4 & 71.7 & 89.9 & 93.7\n& \\bgGray 91.0 & 89.1 & 71.3 & 90.7 & 61.3 & 87.7 & 78.1 & 82.5 \\\\\n\\textbf{PSPNet} \\cite{zhao2017pyramid} & 91.8 & 71.9 & 94.7 & 71.2 & 75.8 & 95.2 & 89.9 & 95.9 & 39.3 & 90.7 & 71.7 & 90.5 & 94.5 & 88.8 & 89.6 & 72.8 & 89.6 & \\bgGray {64.0} & 85.1 & 76.3 & 82.6 \\\\\n\\textbf{EncNet} \\cite{Zhang_2018_CVPR} & 94.1 & 69.2 & \\bgGray\\textbf{96.3} & \\bgGray 76.7 & \\bgGray \\textbf{86.2} & 96.3 & 90.7 & 94.2 & 38.8 & 90.7 & 73.3 & 90.0 & 92.5 & 88.8 & 87.9 & 68.7 & 92.6 & 59.0 & 86.4 & 73.4 & 82.9 \n \\\\\n \\textbf{APCNet} \\cite{he2019adaptive} & 95.8 &\\bgGray 75.8 & 84.5 & 76.0 & 80.6 &\n \\bgGray 96.9 & 90.0 & 96.0 & \\bgGray\\textbf{42.0} & \\bgGray 93.7\n &\\bgGray 75.4 & 91.6 & 95.0 & 90.5 &\n 89.3 & 75.8 & 92.8 & 61.9 & 88.9 & \\bgGray 79.6 & 84.2\n \\\\\n \\textbf{CFNet} \\cite{Zhang_2019_CVPR} & 95.7 & 71.9 &\\bgGray\n 95.0 &\\bgGray 76.3 & \\bgGray 82.8 &\n 94.8 & 90.0 & 95.9 & 37.1 & 92.6 & 73.0 & \\bgGray 93.4 & 94.6 & 89.6 &\n 88.4 & 74.9 & \\bgGray \\textbf{95.2} & 63.2 & \\bgGray \\textbf{89.7} & 78.2 & 84.2\n \\\\\n \\textbf{DMNet} \\cite{he2019dynamic} & \\bgGray 96.1 &\n \\bgGray\\textbf{77.3} & 94.1 & 72.8 & 78.1 & \n \\bgGray\\textbf{97.1} & \\bgGray \\textbf{92.7} & \\bgGray 96.4 & 39.8 & 91.4 & \\bgGray 75.5 & 92.7 & \\textbf{95.8} &\n \\bgGray {91.0} & \\bgGray {90.3} & \\bgGray {76.6} & \\bgGray 94.1 & 62.1 & 85.5 & 77.6 & \\bgGray 84.4 \n \\\\ \\hline\n \\textbf{Ours} & \\bgGray \\textbf{96.4} & {74.1} &\n 92.8 & \\bgGray 75.6 & 81.9 &\\bgGray 96.9 \n & \\bgGray {92.6} & \\bgGray \\textbf{97.1} & \\bgGray 41.6 & \\bgGray \\textbf{95.4} \n & 72.9 & \\bgGray \\textbf{93.9} & \\bgGray \\textbf{95.9} \n & {90.6} &\\bgGray \\textbf{ 90.6} &\\bgGray \\textbf{77.2} & 94.0 &\n 67.5 &\\bgGray 89.3 &\n 
\\bgGray \\textbf{79.8} & \\bgGray \\textbf{85.4} \\\\\n \\hline\n \\multicolumn{22}{c}{\\textbf{With COCO Pre-training}}\\\\\n \\hline\n \n \\textbf{CRF-RNN}~\\cite{CRF-RNN} & 90.4 & 55.3 & 88.7 & 68.4 & 69.8 & 88.3 & 82.4 & 85.1 & 32.6 & 78.5 & 64.4 & 79.6 & 81.9 & 86.4 & 81.8 & 58.6 & 82.4 & 53.5 & 77.4 & 70.1 & 74.7 \\\\\n \n \\textbf{Piecewise}~\\cite{Piecewise} & 94.1 & 40.7 & 84.1 & 67.8 & 75.9 & 93.4 & 84.3 & 88.4 & 42.5 & 86.4 & 64.7 & 85.4 & 89.0 & 85.8 & 86.0 & 67.5 & 90.2 & 63.8 & 80.9 & 73.0 & 78.0 \\\\\n \n \n \\textbf{DeepLabv2}~\\cite{chen2017deeplab} & 92.6 & 60.4 & 91.6 & 63.4 & 76.3 & 95.0 & 88.4 & 92.6 & 32.7 & 88.5 & 67.6 & 89.6 & 92.1 & 87.0 & 87.4 & 63.3 & 88.3 & 60.0 & 86.8 & 74.5 & 79.7 \\\\\n \\textbf{RefineNet}\\cite{RefineNet} & 95.0 & 73.2 & 93.5 & 78.1 & 84.8 & 95.6 & 89.8 & 94.1 & 43.7 & 92.0 & 77.2 & 90.8 & 93.4 & 88.6 & 88.1 & 70.1 & 92.9 & 64.3 & 87.7 & 78.8 & 84.2 \\\\\n \\textbf{ResNet38}\\cite{ResNet38} & 96.2 & 75.2 & \\cellcolor[gray]{0.92} \n 95.4 & 74.4 & 81.7 & 93.7 & 89.9 & 92.5 & \\cellcolor[gray]{0.92} 48.2 & 92.0 & 79.9\n & 90.1 & 95.5 & 91.8 & 91.2 & 73.0 & 90.5 & 65.4 & 88.7 & 80.6 & 84.9\\\\\n \\textbf{PSPNet}~\\cite{zhao2017pyramid} & {95.8} & {72.7} & {95.0}\n & {78.9} & {84.4} & 94.7 & \\cellcolor[gray]{0.92} {92.0} & {95.7} & {43.1} &\n {91.0} & \\bf \\cellcolor[gray]{0.85}{80.3} & {91.3} & {96.3} & {92.3} & {90.1} & {71.5}\n & \\cellcolor[gray]{0.92} {94.4} & \\cellcolor[gray]{0.92} {66.9} & {88.8} & \\bf \\cellcolor[gray]{0.85}{82.0} & {85.4} \\\\\n \\textbf{DeepLabv3}\\cite{chen2017rethinking} & \\cellcolor[gray]{0.92} 96.4 & 76.6\n & 92.7 & 77.8 & \\cellcolor[gray]{0.92} {87.6} & 96.7 & 90.2 & 95.4 & 47.5 &\n \\cellcolor[gray]{0.92} 93.4 & 76.3 & 91.4 & \\bf \\cellcolor[gray]{0.85}{97.2} & 91.0 & \\bf \\cellcolor[gray]{0.85}{92.1} &\n 71.3 & 90.9 & \\cellcolor[gray]{0.92} {68.9} & \\cellcolor[gray]{0.92} {90.8} & 79.3 & 85.7 \\\\ \n \\textbf{EncNet}\\cite{Zhang_2018_CVPR} & 95.3 & 76.9 & 94.2 &\n \\cellcolor[gray]{0.92} 80.2 & 85.2 & 96.5 & 90.8 & 96.3 & 47.9\n & 93.9 & \\cellcolor[gray]{0.92} 80.0 & 92.4 & \\cellcolor[gray]{0.92} 96.6 & 90.5 & 91.5 & 70.8 & 93.6 & 66.5 & 87.7 & 80.8 & 85.9 \n \n \\\\ \n \\textbf{CFNet} \\cite{Zhang_2019_CVPR} &\\bf \\cellcolor[gray]{0.85} 96.7 &\\cellcolor[gray]{0.92} 79.7 &\n 94.3 & 78.4 & 83.0 & \\bf \\cellcolor[gray]{0.85} 97.7 & 91.6 &\\cellcolor[gray]{0.92} 96.7 &\\bf \\cellcolor[gray]{0.85} 50.1\n &\\cellcolor[gray]{0.92} 95.3 & 79.6 & \\bgGray 93.6 &\\bf \\cellcolor[gray]{0.85} 97.2 &\\cellcolor[gray]{0.92} \n 94.2 &\\cellcolor[gray]{0.92} 91.7 & \\bgGray {78.4} &\\bf \\cellcolor[gray]{0.85} 95.4 & \\bgGray\n \\textbf{69.6} & 90.0 & 81.4 & \\cellcolor[gray]{0.92} 87.2\n \\\\ \\hline\n \\textbf{Ours} & \\bgGray {96.6} & \\bf \\cellcolor[gray]{0.85} {80.6} &\n \\bf \\cellcolor[gray]{0.85} 96.1 & \\bf \\cellcolor[gray]{0.85}{82.3} &\\bf \\cellcolor[gray]{0.85} 87.8 &\\bf \\cellcolor[gray]{0.85} 97.7 \n & \\bf \\cellcolor[gray]{0.85} {94.4} & \\bf \\cellcolor[gray]{0.85} {97.3} & 47.1 & \\bgGray \\textbf{96.3} \n & {77.9} & \\bgGray \\textbf{94.8} & \\bgGray\n \\textbf{97.2} \n &\\bgGray \\textbf{94.3} & 91.1 & \\bf \\cellcolor[gray]{0.85} 81.0 & 94.3 & 61.5\n &\\bf \\cellcolor[gray]{0.85} 91.6 &\\bf \\cellcolor[gray]{0.85} {83.5} & \\bgGray \\textbf{87.6}\n \\\\ \\hline\n\\end{tabular}}\n\\end{table*}\n\n\\noindent\n\\textbf{Comparison with state-of-the-art methods.} \nTo further demonstrate the effectiveness of our proposed EffectiveFCN with the holistically-guided decoder, the comparisons with 
state-of-the-art methods are shown in Table \\ref{table:pascal-context}. The dilatedFCN-based methods dominate semantic segmentation. However, without using any dilated convolution, our work still achieves the best results compared with the dilatedFCN-based methods on the PASCAL Context validation set, with significantly less computational cost. Because of the efficient design of our HGD, our EfficientFCN has only 1\/3 of the computational cost of state-of-the-art methods but can still achieve the best performance. \n\n\\subsection{Results on PASCAL VOC}\nThe original PASCAL VOC 2012 dataset consists of 1,464 images for training, 1,449 for validation, and 1,456 for testing, and is a major benchmark dataset for semantic object segmentation. It includes 20 foreground object classes and one background class. The augmented training set of 10,582 images, namely train-aug, is adopted as the training set, following the experimental setting in \\cite{Zhang_2019_CVPR}.\nTo further demonstrate the effectiveness of our proposed HGD, we adopt all the best strategies of the HGD design and compare it with state-of-the-art methods on the test set of PASCAL VOC 2012, which is evaluated on the official online server. As shown in Table \\ref{table:pascal_voc_2012}, the dilatedFCN-based methods dominate the top performances on the PASCAL VOC benchmark. However, our EfficientFCN, with a backbone having no dilated convolution, can still achieve the best results among all the ResNet101-based methods. \n\\subsection{Results on ADE20K}\nThe ADE20K dataset consists of 20K images for training, 2K images for\nvalidation, and 3K images for testing, which were used for the ImageNet Scene\nParsing Challenge 2016. This dataset is more complex and challenging, with 150 labeled classes and more diverse scenes. As shown in Table \\ref{table:pascal-context}, our\nEfficientFCN achieves performance competitive with the dilatedFCN-based\nmethods while requiring only 1\/3 of their computational cost.\n\\section{Conclusions}\nIn this paper, we propose the EfficientFCN model with the holistically-guided decoder for achieving efficient and accurate semantic segmentation. The novel decoder is able to reconstruct high-resolution, semantic-rich feature maps from the multi-scale feature maps of the encoder. \nBecause of the superior feature upsampling performance of the HGD, our EfficientFCN, with much fewer parameters and less computational cost, achieves competitive or even better performance compared with state-of-the-art dilatedFCN-based methods. \n\n\n\\subsection*{Acknowledgements}\nThis work is supported in part by SenseTime Group Limited, in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants CUHK 14202217 \/ 14203118 \/ 14205615 \/ 14207814 \/ 14213616 \/ 14208417 \/ 14239816, in part by CUHK Direct Grant.\n\n\\clearpage\n\n\n\n{\\small\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\\section{Introduction}\n\nThe discovery and characterization of primeval galaxies constitute some of the biggest challenges in current observational and theoretical cosmology\\footnote{In the following we assume cosmological parameters compatible with \\emph{Planck} results, i.e. 
a $\\Lambda$CDM model with total matter, vacuum and baryonic densities in units of the critical density $\\Omega_{\\Lambda}= 0.692$, $\\Omega_{m}= 0.308$, $\\Omega_{b}= 0.0481$, Hubble constant $\\rm H_0=100\\,{\\rm h}\\,{\\rm km}\\,{\\rm s}^{-1}\\,{\\rm Mpc}^{-1}$ with ${\\rm h}=0.678$, spectral index $n=0.967$, $\\sigma_{8}=0.826$ \\citep[][]{planck:2013_xvi_parameters}.}.\n\nDeep optical\/near infrared (IR) surveys \\citep{Dunlop13,Madau14,Bouwens:2015} have made impressive progresses in identifying galaxies well within the Epoch of Reionization ($z\\simeq6$). Such surveys yield key information about the star formation (SF) of hundreds of galaxies in the early Universe. They also allow to statistically characterize galaxies in terms of their UltraViolet (UV) luminosity up to $z\\sim10$ \\citep{Bouwens:2015}. However -- using these surveys broad band alone -- little can be learned about other properties as their gas and dust content, metallicity, interactions with the surrounding environment \\citep[e.g.][]{Barnes:2014PASP}, feedback \\citep[e.g.][]{Dayal14}, and outflows \\citep{gallerani:2016outflow}. \n\nTo obtain a full picture of these systems, optical\/IR surveys must be complemented with additional probes. Information on the metal content and energetics of the interstellar medium (ISM) can be obtained with observations of Far IR (FIR) fine structure lines, and in particular the \\hbox{[C~$\\scriptstyle\\rm II $]}~{\\small$\\left(^{2}P_{3\/2} \\rightarrow\\,^{2}P_{1\/2}\\right)$} line at 157.74~$\\mu$m. The \\hbox{[C~$\\scriptstyle\\rm II $]}~line is the dominant coolant of the ISM being excited in different ISM phases, as the diffuse cold neutral medium (CNM), warm neutral medium (WNM), high density photodissociation regions (PDRs), and -- to a lower extent -- ionized gas \\citep[][]{Tielens:1985ApJ,Wolfire:1995ApJ,Abel:2006MNRAS,Vallini:2013MNRAS}. As \\hbox{[C~$\\scriptstyle\\rm II $]}~emission can be enhanced by shocks, it has been suggested as a good outflow tracer\\\\ (e.g. \\citealt{maiolino:2012,kreckel:2014apj,cicone:2015aa,janssen:2016arxiv}), and can thus in general be used to study feedback processes in galaxies.\n\nObservationally, the \\hbox{[C~$\\scriptstyle\\rm II $]}~line is a promising probe as it is often the brightest among FIR emission lines, accounting for up to $\\sim1\\%$ of the total IR luminosity of galaxies \\citep[e.g.][]{Crawford:1985ApJ,Madden:1997ApJ}. It has been successfully used to probe the low-$z$ ISM \\citep[e.g.][]{delooze:2014aa}. The unprecedented sensitivity of the Atacama Large Millimeter\/Submillmeter Array (ALMA) makes it possible for the first time to use \\hbox{[C~$\\scriptstyle\\rm II $]}~emission to characterize high-$z$ galaxies. Before the ALMA advent, in fact, detections were limited to a handful of QSO host galaxies, and rare galaxies with extreme SF rates \\citep[$SFR\\simeq10^3{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}$, e.g.][]{maiolino:2005AA,debreuck:2011,Carilli:2013ARA&A,gallerani:2012aa,cicone:2015aa}.\n\nHowever, for \\quotes{normal} star forming galaxies ($\\lsim10^{2}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}$) at $z\\sim 6-7$ early ALMA searches for \\hbox{[C~$\\scriptstyle\\rm II $]}~lines have mostly yielded upper limits (e.g. \\citealt{ouchi2013} \\citealt{kanekar2013}; \\citealt{ota:2014apj,schaerer:2015}). The situation has changed recently with a number of robust \\hbox{[C~$\\scriptstyle\\rm II $]}~detections (e.g. 
\\citealt{maiolino:2015arxiv,capak:2015arxiv}; \\citealt{Willott:2015arXiv15,knudsen:2016arxiv}).\n\nIn many cases the high-$z$ \\hbox{[C~$\\scriptstyle\\rm II $]}~line luminosity is fainter than expected from the \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$ relation found in local galaxies \\citep{delooze:2014aa}. To explain such \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$~\\emph{deficit}, some efforts have been devoted to model the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission from high-$z$ galaxies \\citep{nagamine:2006ApJ,Vallini:2013MNRAS,munoz:2014MNRAS,vallini:2015,olsen:2015apj}. In brief, these theoretical works show that the \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$~deficit can be ascribed to different effects:\n\\begin{itemize}\n\\item[(a)] Lower metallicity of high-$z$ galaxies \\citep{Vallini:2013MNRAS,munoz:2014MNRAS,vallini:2015}, in particular supported by observations of lensed galaxies \\citep{knudsen:2016arxiv}.\n\\item[(b)] Suppression of \\hbox{[C~$\\scriptstyle\\rm II $]}~line around star forming regions \\citet{Vallini:2013MNRAS}, typically observed as a displacement of the \\hbox{[C~$\\scriptstyle\\rm II $]}~ with respect to the UV emitting region, as seen e.g. in BDF3299 \\citep{maiolino:2015arxiv} and in some of the \\citet{capak:2015arxiv} galaxies. This would be a signature of stellar feedback heating\/ionizing the putative \\hbox{[C~$\\scriptstyle\\rm II $]}-emitting gas.\n\\item[(c)] Suppression of \\hbox{[C~$\\scriptstyle\\rm II $]}~line by the increased CMB temperature in the WNM\/CNM component \\citep[][]{pallottini:2015_cmb,vallini:2015}, similarly to what observed for dust emission \\citep{dacunha:2013apj}.\n\\end{itemize}\n\nSimulating the ISM of early galaxies at sufficient resolution and including feedback effects might shed light on these questions. Feedback prescriptions are particularly important as such process regulates the amount of (dense) gas likely to radiate most of the power detected with FIR lines. Several studies have explored optimal strategies to include feedback in galaxy simulations.\n\nFor some works, the interest is in the comparison between different kind of stellar feedback prescription, as modelled via thermal and\/or kinetic energy deposition in the gas from supernovae (SN), winds \\citep[][]{agertz:2012arxiv,fire:2014mnras,barai:2015mnras,agertz:2015apj}, and radiation pressure \\citep[][]{wise:2012radpres,ceverino:2014}; other analyses focus on implementing complex chemical networks in simulations \\citep{tomassetti:2015MNRAS,maio:2015,bovino:2015arxiv,richings:2016,grassi_dust:2016}, radiative transfer effect \\citep{petkova:2012mnras,roskar:2014,rosdahl:2015mnras,maio:2016mnras}, or aim at removing tensions between different coding approaches \\citep[][]{agora:2013arxiv}.\n\nThus, we can improve galaxy simulations by providing theoretical expectations for \\hbox{[C~$\\scriptstyle\\rm II $]}~that should be compared with state-of-the-art data. Such a synergy between theory and observations, in turn, can guide the interpretation of upcoming ALMA data and drive future experiments of large\nscale \\hbox{[C~$\\scriptstyle\\rm II $]}~mapping \\citep{Gong:2012ApJ,silva:2015apj,bin:2015mapping, pallottini:2015_cmb}, which would led to a statistical characterization of the high-$z$ galaxy population. In the present work we simulate a $z\\sim6$ galaxy typically detected in \\hbox{[C~$\\scriptstyle\\rm II $]}~with ALMA current observations.\n\nThe paper is structured as follows. In Sec. 
\\ref{sec_numerical} we detail the numerical model used to set-up the zoom-in simulation, and describe the adopted ${\\rm {H_2}}$~star formation prescription (Sec. \\ref{sec_model_sf}), mass and energy inputs from the stellar populations (Sec. \\ref{sec_stellar_inputs}) and feedback (including SN, winds and radiation pressure Sec. \\ref{sezione_blast} -- see also App. \\ref{app_rad_press} and App. \\ref{app_blastwave}). The results are discussed in Sec. \\ref{sec_result}, where we analyze star formation history and feedback effects in relation to ISM thermodynamics (Sec. \\ref{sec_sfr_result}) and its structural properties. The expected \\hbox{[C~$\\scriptstyle\\rm II $]}~emission and other observational properties of high-$z$ galaxies are discussed in Sec. \\ref{sec_final_results}. Conclusions are given in Sec. \\ref{sec_conclusioni}.\n\n\n\\section{Numerical simulations}\\label{sec_numerical}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{ccccccc}\n\\hline\n~ & $m_{dm}$ & $m_{b}$ & $\\Delta_{x}^{\\rm max}$ & $\\Delta_{x}^{\\rm min}$ & $\\Delta_{x}^{\\rm min}$ at $z=6$\\\\\n~ & \\multicolumn{2}{c}{${\\rm M}_{\\odot}\/{\\rm h}$} & \\multicolumn{2}{c}{${\\rm kpc}\/{\\rm h}$} & pc\\\\\n\\hline\n{\\tt cosmo} & $3.4\\times 10^{7}$ & $-$ & $78.1$ & $78.1$ & $2.5\\times10^3$\\\\\n{\\tt zoom} & $6.7\\times 10^{4}$ & $1.2\\times 10^{4}$ & $9.7$ & $0.1$ & $32.1$\\\\\n\\end{tabular}\n\\caption{Resolution set-up for the cosmological run ({\\tt cosmo}) and subsequent zoom-in ({\\tt zoom}) simulation. $m_{dm}$ and $m_{b}$ are in units of ${\\rm M}_{\\odot}\/{\\rm h}$ and indicate the dark matter (DM) and baryon mass resolution, respectively; $\\Delta_{x}^{\\rm max}$ and $\\Delta_{x}^{\\rm min}$ indicate the coarse grid and minimum available refinement scale, respectively. Both scales are reported in comoving ${\\rm kpc}\/{\\rm h}$. For $\\Delta_{x}^{\\rm min}$ we also report also the physical pc scale at $z=6$. For the {\\tt cosmo} run, no refinement is used, and for the {\\tt zoom}, we indicate the increased resolution of the zoomed halo due to the multi-mass approach and the AMR.\n\\label{tagella_res}}\n\\end{table}\n\nWe carry out our simulation using a customized version of the adaptive mesh refinement (AMR) code \\textlcsc{ramses} \\citep[][]{Teyssier:2002}. \\textlcsc{ramses} is an octree-based code that uses Particle Mesh N-body solver for the dark matter (DM) and an unsplit 2nd-order MUSCL\\footnote{MUSCL: Monotone Upstream-centred Scheme for Conservation Laws} scheme for the baryons. Gravity is accounted by solving the Poisson equation on the AMR grid via a multi-grid scheme with Dirichlet boundary conditions on arbitrary domains \\citep{guillet:2011Jcoph}. For the present simulation we choose a refinement based on a Lagrangian mass threshold-based criterion.\n\nChemistry and heating\/cooling processes of the baryons are implemented with \\textlcsc{grackle} 2.1\\footnote{See also \\url{https:\/\/grackle.readthedocs.org\/}} \\citep{bryan:2014apjs}, the standard library of thermo-chemical processes of the {\\tt AGORA} project \\citep{agora:2013arxiv}. Via \\textlcsc{grackle}, we follow the \\hbox{H}~and \\hbox{He}~primordial network and tabulated metal cooling and photo-heating rates calculated with \\textlcsc{cloudy} \\citep{cloudy:2013}. Cooling includes also inverse Compton off the cosmic microwave background (CMB), and heating from a redshift-dependent ionizing UV background \\citep[][UVB]{Haardt:2012}. 
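As a rough illustration of how such tabulated rates enter the thermal update, the cooling time of a cell follows from the net cooling function $\\Lambda_{\\rm net}(T,Z)$; the sketch below is purely schematic (it is not the \\textlcsc{grackle} interface) and the numerical values are order-of-magnitude assumptions.
\\begin{verbatim}
# Schematic estimate of the cooling time of a cell from a tabulated net
# cooling rate Lambda_net(T, Z) in erg cm^3 s^-1 (values are illustrative;
# grackle performs this internally through its own interface).
k_B = 1.380649e-16                      # Boltzmann constant [erg/K]

def cooling_time(n_H, T, Lambda_net):
    # t_cool = (3/2) n_tot k_B T / (n_H^2 Lambda_net)
    n_tot = 1.1 * n_H                   # crude H+He particle density (assumption)
    return 1.5 * n_tot * k_B * T / (n_H**2 * Lambda_net)

Myr = 3.156e13                          # seconds
# e.g. a diffuse cell: n_H = 1e-2 cm^-3, T = 3e4 K, Lambda_net ~ 1e-23 erg cm^3/s
print(cooling_time(1e-2, 3e4, 1e-23) / Myr)   # ~2 Myr for these values
\\end{verbatim}
The $t_{\\rm cool}\\propto n^{-1}$ scaling at fixed temperature and cooling function is the one invoked below when discussing the accreting filaments.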
Since ${\\rm {H_2}}$~gas phase formation is not accounted for, we do not include the cooling contribution of such species.\n\nBecause of stellar feedback (Sec \\ref{sec_stellar_inputs} and \\ref{sezione_blast}), the gas can acquire energy both in thermal and kinetic form. The distinction is considered by following the gas evolution of the standard thermal energy and a \\quotes{non-thermal} energy \\citep{agertz:2012arxiv}. Such approach is one of the possible scheme used to solve the over-cooling problem that affect galaxy-scale simulations \\citep[see][ and references therein]{dale:2015new}. The non-thermal energy mimics turbulence, i.e. it is not affected by cooling. The non-thermal energy variation is due to gas advection ($v\\nabla v$), work ($PdV$), and dissipation \\citep{agertz:2015apj}. Following \\citet{maclow1999turb} we assume a dissipation time scale proportional to the size of the cell (injection scale) and inversely proportional to the Mach number\\footnote{While the distinction in thermal and non-thermal is similar to previous works \\citep[e.g.][]{agertz:2015apj}, we note that usually the time scale for dissipation is fixed to $10\\,\\rm Myr$.}. Since the dynamical time is essentially set by the free-fall time, the dissipation time can be written as $t_{\\rm diss} = 9.785 (l_{\\rm cell}\/100\\,{\\rm pc})\/(v_{\\rm turb}\/10\\,{\\rm km}\\,{\\rm s}^{-1}) \\rm Myr$. Then, the non-thermal energy loss due to dissipation can be written as $\\dot{e}_{\\rm nth} = -e_{\\rm nth}\/t_{\\rm diss}$ \\citep[][see eq. 2]{teyssier:2013mnras}. As noted in \\citet{teyssier:2013mnras}, such scheme for non-thermal energy and its dissipation gives results qualitatively similar to a delayed cooling approach \\citep{stinson:2006mnras}.\n\n\\subsection{Initial conditions}\n\nThe initial conditions (IC) used for the suite are generated with \\textlcsc{music} \\citep{hahn:2011mnras}. \\textlcsc{music} produces IC on nested grid using a real-space convolution approach \\citep[cf.][]{bertschinger:1995astro}. The adopted Lagrangian perturbation theory scheme is perfectly suited to produce IC for multi-mass simulations and -- in particular -- zoom simulations. To generate the ICs, the transfer functions are taken from \\citep{eisenstein:1998apj}.\n\nTo set-up the zoom-in simulation, we start by carrying out a cosmological DM-only run. The simulation evolves a volume $V^{\\rm cosmo}=(20\\,{\\rm Mpc}\/{\\rm h})^{3}$ from $z=100$ to $z=6$ with DM mass resolution of $m_{dm}^{\\rm cosmo} = 3.4\\times 10^{7} \/{\\rm h}\\,{\\rm M}_{\\odot}$. The resolution of the coarse grid is $\\Delta x^{\\rm cosmo} = 78.1 \/{\\rm h}\\,{\\rm kpc}$, and we do not include additional levels of refinement. Using \\textlcsc{hop} \\citep{eisenstein_hop_1998apj} we find the DM halo catalogue at $z=6$. The cumulative halo mass function extracted from the catalogue is in agreement with analytical expectations \\citep[e.g.][]{sheth:1999mnras}, within the precision of halo-finder codes \\citep[e.g.][]{knebe:2013arxiv}.\n\nFrom the catalogue we select a halo with DM mass $M_{\\rm h} \\simeq 10^{11}\/{\\rm h}\\,{\\rm M}_{\\odot}$ (resolved by $\\simeq5\\times10^{4}$ DM particles), whose virial radius is $r_{\\rm vir}\\simeq 15\\,{\\rm kpc}$ at $z=6$. Using \\textlcsc{hop} we select the minimum ellipsoid enveloping $10\\,r_{\\rm vir}$, and trace it back to $z=100$. 
As noted in \\citet[][]{onorbe:2014mnras}, this is usually sufficient to avoid contamination\\footnote{A posteriori, we have checked that the halos in the zoom-in region have a contamination level $\\lsim0.1\\%$.}. At $z=100$ the trace back volume is $V^{\\rm zoom}\\simeq(2.1\\,{\\rm Mpc}\/{\\rm h})^{3}$. Using \\textlcsc{music} we recalculate the ICs, by generating 3 additional level of refinement. For such multi-mass set-up, the finer DM resolution is $m_{dm}^{\\rm zoom} = 6.7\\times 10^{4} \/{\\rm h}\\,{\\rm M}_{\\odot}$, that corresponds to a spatial resolution of $\\Delta x^{\\rm zoom} = 9.7 \/{\\rm h}\\,{\\rm kpc}$. We note that because of the traced back volume, our simulation is expected to probe not only the target halo, but also its satellites and environment, similar to other works (e.g. \\citealt{fiacconi:2015}, where the target halo is chosen at $z\\simeq 3$).\n\nIn the zoom-in simulation $\\Delta x^{\\rm zoom}$ corresponds to our coarse grid resolution, and we allow for 6 additional refinement levels, based on a Lagrangian mass threshold-based criterion. At $z=6$, the baryonic component of the selected halo has a mass resolution of $m_{b} = 1.8\\times 10^{4}{\\rm M}_{\\odot}$ and a physical resolution of $\\Delta x^{\\rm min} = 31.9\\,{\\rm pc}$. For convenience, a summary of the resolution outline can be found in Tab. \\ref{tagella_res}. Note that the refined cell of our simulations have mass and size typical of molecular clouds \\citep[MC, e.g.][]{gorti:2002apj,federrath:2013}.\n\nIn the present paper we refer to metallicity ($Z$) as the sum of all the heavy element species without differentiating among them, and assume solar abundance ratios \\citep{asplund:2009ara&a}. In the IC, the gas is characterized by a mean molecular weight $\\mu = 0.59$, and has metallicity floor $Z=Z_{\\rm floor}>0$. The metallicity floor mimics the pre-enrichment of the halo at high-$z$, when we do not have the resolution to follow precisely star formation and gas enrichment. We set $Z_{\\rm floor}=10^{-3}{\\rm Z}_{\\odot}$, a level that is compatible with the metallicity found at high-$z$ in cosmological simulations for diffuse enriched gas \\citep{dave:2011mnras,pallottini:2014_sim,maio:2015}. Note that such low metallicity only marginally affects the gas cooling time, but is above the critical metallicity for formation of Population III stars. Additionally, a posteriori, we have found that the metallicity floor contribute for only $\\lsim 0.2\\%$ of the total metal mass produced by stars by $z=6$ in the refined region.\n\n\\subsection{Star formation model}\\label{sec_model_sf}\n\nWe model star formation (SF) by assuming a ${\\rm {H_2}}$~dependent Schmidt-Kennicutt relation \\citep{schmidt:1959apj,kennicutt:1998apj}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/kmt09_test.pdf}\n\\caption{\n${\\rm {H_2}}$~fraction ($f_{\\rm H2}$) as a function of gas density ($n$) obtained using the \\citetalias{krumholz:2009apj} model (eqs. \\ref{eq_fh2_full}). Different solid lines correspond to different metallicity ($Z$) of the gas. Horizontal dotted grey lines mark $f_{\\rm H2}$ values of $0.5$ and $1$. Vertical dashed lines indicate the critical density $n_{c}$ where $f_{\\rm H2}=0.5$ for different $Z$; these critical density values are obtained as a fit (eq. \\ref{eq_critical_density}) to the \\citetalias{krumholz:2009apj} model (eq. \\ref{eq_fh2_full}). 
See the text for details.\n\\label{fig_kmt_test}}\n\\end{figure}\n\\begin{subequations}\\label{eq_sfr_tot}\n\\be\\label{eq_sfr1}\n\\dot{\\rho}_{\\star}=f_{\\rm H2} \\rho\/t_{\\rm sf}\\,\n\\ee\nwhere $\\dot{\\rho}_{\\star}$ is the local SF rate ($SFR$) density, $f_{\\rm H2}$ the molecular hydrogen fraction, $\\rho$ the gas density and $t_{\\rm sf}$ the SF time scale. In eq. \\ref{eq_sfr1} we assume the SF time scale to be proportional to the free-fall time, i.e.\n\n\\be\\label{eq_sfr2}\nt_{\\rm sf} = \\zeta_{\\rm sf}^{-1} \\sqrt{3\\pi\/(32\\,G\\rho)} \\,,\n\\ee\n\\end{subequations}\nwhere $\\zeta_{\\rm sf}$ describes the SF efficiency and it is treated as a parameter in the present work \\citep[cf.][see discussion in Sec. \\ref{sez_sfr_efficiency}]{semenov:2015}. To calculate $f_{\\rm H2}$ we adopt the \\citetalias{krumholz:2009apj} model \\citep{krumholz:2008apj,krumholz:2009apj,mckee:2010apj}. Such model considers ${\\rm {H_2}}$~formation on dust grains by computing radiative transfer on a idealized MC and assumes equilibrium between formation and dissociation rate of ${\\rm {H_2}}$. The solution for $f_{\\rm H2}$ can be approximated as\n\n\\begin{subequations}\\label{eq_fh2_full}\n\\begin{align}\nf_{\\rm H2} &= \\left(1 -0.75\\,s\/(1+0.25\\,s) \\right)\\Theta(2-s)\\\\\ns &= \\ln\\left(1+0.6\\,\\chi +0.01\\chi^{2}\\right) \/0.6\\,\\tau_{\\rm uv}\\label{eq_dust_optical_depth}\\\\\n\\chi &= 71\\, \\left(\\sigma_{d,21}\/\\mathcal{R}_{-16.5}\\right)\\,\\left((G\/G_{0})\/(n\/{\\rm cm}^{-3})\\right)\\,,\\label{eq_chi_full}\n\\end{align}\nwhere $\\Theta$ is the Heaviside function, $\\tau_{\\rm uv}$ the dust optical depth of the cloud, $\\sigma_{d}^{-21}=\\sigma_{d}\/10^{-21}{\\rm cm}^{-2}$ is the dust absorption cross section \\citep{li_draine:2001apj}, $\\mathcal{R}\/10^{-16.5}{\\rm cm}^{3}\\,{\\rm s}^{-1} $ is the formation rate coefficient of ${\\rm {H_2}}$~on dust grains \\citep{wolfire:2008apj}, $G$ is the FUV flux in the Habing band ($6-13.6\\,{\\rm eV}$) normalized to the average Milky Way (MW) value $G_{0}$ \\citep{habing:1968,draine:1978apjs}, and $n$ is the hydrogen number density. As in \\citetalias{krumholz:2009apj}, we calculate the dust optical depth by linearly rescaling the MW value, i.e. $\\tau_{\\rm uv} = 10^{-21}{\\rm cm}^{-2} N_{H}\\, Z\/{\\rm Z}_{\\odot} \/\\mu$, where $N_{H}$ is the hydrogen column density and $\\mu$ the mean molecular weight. In the simulation, the column density is calculated as $N_{H}= n\\,l_{\\rm cell}$; because of the mass threshold-based criterion used as a refinement in AMR, we expect $l_{\\rm cell} \\propto n^{-1\/3}$, thus $N_{H} \\propto n^{2\/3}$.\n\nNote that both $\\sigma_{d}$ and $\\mathcal{R}$ are proportional to the dust mass, that we assume to be proportional to the metallicity. Then the ratio between $\\sigma_{d}$ and $\\mathcal{R}$ is independent of $Z$. Additionally, eq. \\ref{eq_fh2_full} can be simplified by assuming pressure equilibrium between the CNM and WNM. In this case, eq. \\ref{eq_chi_full} turns out to be independent on $G\/G_{0}$ and can be written as \\citep{krumholz:2009apj}\n\\be\\label{eq_sfr_last}\n\\chi = 0.75\\,\\left(1+3.1\\,(Z\/{\\rm Z}_{\\odot})^{0.365}\\right)\\,.\n\\ee \n\\end{subequations}\nAs shown in \\citep{krumholz:2011apj}, for $Z\\gsim10^{-2}{\\rm Z}_{\\odot}$ such approximation gives ${\\rm {H_2}}$~fractions compatible with those resulting from a full non-equilibrium radiative transfer calculations.\n\nIn Fig. 
\\ref{fig_kmt_test} we plot $f_{\\rm H2}$ from the \\citetalias{krumholz:2009apj} model as a function of the gas density. Different solid lines refer to different metallicity. At a fixed metallicity, the molecular fraction as a function of density vanishes for low values of $n$; it steeply rises up to $f_{\\rm H2} \\sim 0.8$ in one density dex and asymptotically reaches $f_{\\rm H2} = 1$. The critical density where the gas can be considered molecular ($f_{\\rm H2}=0.5$) is roughly inversely proportional to the metallicity, i.e. $n_{c} \\sim 25 (Z\/{\\rm Z}_{\\odot})^{-1}{\\rm cm}^{-3}$ \\citep[see also][]{agertz:2012arxiv}. We note that when detailed chemistry calculations are performed, such critical density depends on the chemical network and the assumptions regarding gas shielding from external radiation and clumpiness. As a consequence, the actual critical density can be higher that the one predicted by the \\citetalias{krumholz:2009apj} model \\citep[e.g.][]{bovino:2015arxiv}.\n\nBecause of the particular shape of the $f_{\\rm H2}(n)$ relation, the adopted SF law (eqs. \\ref{eq_sfr_tot}--\\ref{eq_fh2_full}) is roughly equivalent to a prescription based on a density threshold criterion:\n\\begin{subequations}\\label{eqs_sfr_equivalence}\n\\be\n\\dot{\\rho}_{\\star}=\\Theta(n - n_{c}) m_p\\,n \/t_{\\rm sf}\\,,\n\\ee\nwhere $m_p$ is the proton mass and the critical density\n\\be\\label{eq_critical_density}\nn_{c} \\simeq 26.45 \\, (Z\/{\\rm Z}_{\\odot})^{-0.87} {\\rm cm}^{-3}\\,\n\\ee\n\\end{subequations}\nis calculated as a fit to the $f_{\\rm H2}$ \\citetalias{krumholz:2009apj} model. In Fig. \\ref{fig_kmt_test}, we show $n_c$ for various metallicities (dashed vertical lines).\n\nEqs. \\ref{eqs_sfr_equivalence} are not used to calculate the $SFR$ in the simulation. However, being simpler, such formulation can be used to enhance our physical intuition of the adopted SF law\\footnote{As a consequence of the rough equivalence, it is not necessary to manually prevent SF in underdense regions, by imposing that an overdensity $\\Delta>200$ is needed to form stars. At the start of the simulation ($z=100$), the mean density of the gas is $\\sim 0.1\\,m_p\\,{\\rm cm}^{-3}$, while the \\quotes{effective} SF threshold would be $n_{c} \\sim 10^4{\\rm cm}^{-3}$ for gas at $Z=Z_{\\rm floor}$.\\label{footnote_sfr_equivalence}} in analyzing the results. As noted in \\citet[][]{hopkins:2013arxiv}, the morphology of a galaxy is very sensitive to the minimum density of the cells that are able to form star.\n\nDuring the simulation, eqs. \\ref{eq_sfr_tot} are solved stochastically, by drawing the mass of the new star particles from a Poisson distribution \\citep{rasera:2006,dubois:2008,pallottini:2014_sim}. We impose that no more than half of a cell mass can be converted into a star particle in each event. This prescription ensures the numerical stability of the code \\citep{dubois:2008}. This is also consistent with the picture that nearly half of the mass in a MC is Jeans unstable \\citep{federrath:2013}.\n\nWe allow SF only if the mass of a new star particle is at least equal to the baryon mass resolution. This avoids numerical errors for the star particle dynamics and enables us to treat the particle as a stellar population with a well sampled initial mass function (IMF). Additionally, the SF law is driven by ${\\rm {H_2}}$~formation on dust grains, we do not allow gas to form stars if the dust temperature is larger than $\\simeq2\\times 10^{3}$, because of dust sublimation (see Sec. 
\\ref{sezione_blast} and App. \\ref{app_rad_press} for the details on the dust prescriptions).\n\nFor the present work we assume a SF efficiency $\\zeta_{\\rm sf}=10\\%$, in accordance with the average values inferred from MC observations \\citep[][see also \\citealt{agertz:2012arxiv}]{murray:2011apj}. Note that varying the parameters for the SF law should lead to similar $SFR$ once feedback are properly included, although the galaxy morphology can be different \\citep{hopkins:2013arxiv}.\n\n\\subsection{Mass and energy inputs from stars}\\label{sec_stellar_inputs}\n\nBecause of the finite mass resolution, it is necessary to introduce (according to eqs. \\ref{eq_sfr_tot}--\\ref{eq_sfr_last}) ``star particles'' to represent stellar populations. To this aim, we adopt a \\citet{kroupa:2001} IMF\n\\begin{subequations}\n\\begin{align}\n\\Phi(m)\\propto & \\left[m^{-\\alpha_{1}} \\Theta(m_{1}-m)\\right.\\label{eq_imf}\\\\\n+& \\left. m^{-\\alpha_{2}} \\Theta(m-m_{1}) m_{1}^{\\alpha_{2}-\\alpha_{1}} \\right]\\,,\\nonumber\n\\end{align}\nwhere $\\alpha_{1}= 1.3$, $\\alpha_{2}= 2.3$, $m_{1} = 0.5\\,{\\rm M}_{\\odot}$, and $m$ is in the range $[10^{-1}-10^{2}]{\\rm M}_{\\odot}$. The proportionality constant is chosen such that\n\\be\n\\int_{ 0.1\\,{\\rm M}_{\\odot}}^{100\\,{\\rm M}_{\\odot}} m\\Phi\\,{\\rm d}m=1\\, .\n\\ee\n\\end{subequations}\n\nOnce formed, stars affect the environment with chemical, mechanical and radiative feedback. These stellar inputs are parameterized by the cumulative fraction of the returned gas mass, metals and energy \\citep[e.g.][]{salvadori:2008mnras,debennassuti2014mnras,salvadori:2015}. Mass and energy inputs are conveniently expressed per unit stellar mass formed ($M_{\\star}$).\n\nChemical feedback depends on the return fraction ($R$) and the yield ($Y$):\n\\begin{subequations}\\label{eqs_stellar_inputs}\n\\begin{align}\\label{eqs_def_R_Y}\n R(t_{\\star})\t=&\\int_{m(t_{\\star}) }^{100\\,{\\rm M}_{\\odot}} (m-w) \\Phi\\,{\\rm d}m\\\\\n Y(t_{\\star})\t=&\\int_{m(t_{\\star}) }^{100\\,{\\rm M}_{\\odot}} m_{Z} \\Phi\\,{\\rm d}m\\,,\n\\end{align}\nwhere $w(m,Z_{\\star})$ and $m_{Z}(m,Z_{\\star})$ are the stellar remnant and the metal mass produced for a star of mass $m$ and metallicity $Z_{\\star}$ \\citep[e.g.][]{woosley:1995apjs,vandenhoek:1997a&as}, and $m(t_{\\star})$ is the minimum stellar mass with lifetime\\footnote{Stellar lifetimes are roughly independent of metallicity for $Z_{\\star}>10^{-4}{\\rm Z}_{\\odot}$ \\citep[][see eq. 3]{raiteri:1996eq3}.} shorter than $t_{\\star}$, the time elapsed from the creation of the stellar particle (i.e. the \\quotes{burst age}).\n\nThis approach is used both in zoom galaxy simulations \\citep[e.g.][]{agora:2013arxiv} and cosmological simulations \\citep[e.g.][hereafter \\citetalias{pallottini:2014_sim}]{pallottini:2014_sim}. Compared to cosmological simulations, though, zoom simulations have typically a better spatial and -- consequently -- time resolution (e.g. $\\Delta t\\sim 10^{-2}\\,\\rm Myr$ vs $\\Delta t\\sim \\rm Myr$). 
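As a quick sanity check on eq.~\\ref{eq_imf}, and on the size of the SN budget entering the next subsections, the normalization constant and the mass locked in $8$--$40\\,{\\rm M}_{\\odot}$ SN progenitors can be evaluated numerically; the snippet below is only an illustrative estimate.
\\begin{verbatim}
# Check of the Kroupa IMF normalization and of the SN progenitor budget.
from scipy.integrate import quad

a1, a2, m1 = 1.3, 2.3, 0.5

def phi(m):                                   # un-normalized Kroupa IMF
    return m**-a1 if m < m1 else m1**(a2 - a1) * m**-a2

A = 1.0 / quad(lambda m: m * phi(m), 0.1, 100.0, points=[m1])[0]

n_sn = A * quad(phi, 8.0, 40.0)[0]                  # SN per Msun of stars formed
f_sn = A * quad(lambda m: m * phi(m), 8.0, 40.0)[0] # mass fraction in 8-40 Msun
print(n_sn, f_sn)   # ~0.010 (one SN per ~100 Msun formed) and ~0.15
\\end{verbatim}
With the $\\Delta t\\sim 10^{-2}\\,\\rm Myr$ time stepping of the zoom run, the cumulative fractions of eqs. \\ref{eqs_stellar_inputs} are therefore sampled finely in stellar age.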
Thus, here we can follow the gradual release of both gas and metals in the ISM.\n\nThe mechanical energy input includes SN explosions and winds, either by OB or AGB stars in young ($< 40\\,\\rm Myr$) or evolved stellar populations:\n\\begin{align}\\label{eqs_def_mec_energy}\n \\epsilon_{\\rm sn}(t_{\\star}) =&\\int_{m(t_{\\star})>8\\,{\\rm M}_{\\odot}}^{40\\,{\\rm M}_{\\odot} }\te_{\\rm sn}\\Phi\\,{\\rm d}m,\\\\\n \\epsilon_{\\rm w}(t_{\\star}) =&\\int_{m(t_{\\star})}^{100\\,{\\rm M}_{\\odot} }\t \te_{\\rm w}\\Phi\\,{\\rm d}m\\,,\n\\end{align}\nwhere $e_{\\rm sn}=e_{\\rm sn}(m,Z)$ and $e_{\\rm w}=e_{\\rm w}(m,Z)$ are the energy released by SN and stellar winds in units of $10^{51}{\\rm erg}\\equiv{\\rm 1 foe}$; we have further assumed that only stars with $8 \\leq m\/{\\rm M}_{\\odot}\\leq40$ can explode as SN.\n\nRadiative energy inputs can be treated within a similar formalism. The cumulative energy $\\epsilon_{12}$ associated to the spectral range $(\\lambda_{1}, \\lambda_{2})$ can be written as\n\\begin{align}\\label{eqs_def_rad_energy}\n \\epsilon_{\\rm 12}(t_{\\star}) \t=&\\int_0^{t_{\\star}}\\int_{m(t)}^{100\\,{\\rm M}_{\\odot} } L_{12}\\Phi\\,{\\rm d}m\\,{\\rm d}t\\\\\n L_{\\rm 12}(t) \t\t=& \\int_{\\lambda_{1}}^{\\lambda_{2}}L_{\\lambda}{\\rm d}{\\lambda}\\,,\n\\end{align}\n\\end{subequations}\nwhere $L_{\\lambda}=L_{\\lambda}(m,Z_{\\star})$ is the luminosity per unit wavelength and mass. For convenience, we express the radiation energy in units of ${\\rm foe}$, as for the mechanical energy (eqs. \\ref{eqs_def_mec_energy}). In the following we specify $\\epsilon_{\\rm 12}$ in eq. \\ref{eqs_def_rad_energy}, by separately considering ionizing radiation ($\\lambda_{1}=0$, $\\lambda_{2}=912\\,\\textrm{A\\kern -1.3ex\\raisebox{0.6ex}{$^\\circ$}}$) denoted by $\\epsilon_{\\rm ion}$, and the soft UV band, $\\epsilon_{\\rm uv}$, defined as the range ($\\lambda_{1}=912\\,\\textrm{A\\kern -1.3ex\\raisebox{0.6ex}{$^\\circ$}}$, $\\lambda_{2}=4000\\,\\textrm{A\\kern -1.3ex\\raisebox{0.6ex}{$^\\circ$}}$).\n\nIn eqs. \\ref{eqs_stellar_inputs}, the quantities $w$, $m_{Z}$, $e_{\\rm sn}$, $e_{\\rm w}$, and $L_{\\lambda}$ can be calculated from stellar evolutionary models. We adopt the {\\tt padova} \\citep{padova:1994} stellar tracks for metallicities $Z_{\\star}\/{\\rm Z}_{\\odot} = 0.02,\\, 0.2,\\, 0.4,{\\rm and}\\, 1$ to compute the chemical\\footnote{Similarly to \\citet{agora:2013arxiv}, when computing the yields in eq. \\ref{eqs_def_R_Y}, we assume that the metal mass is linked to the oxygen and iron masses via $m_{Z}= 2.09\\,m_{\\rm O} + 1.06\\,m_{\\rm Fe}$, as appropriate for \\citet{asplund:2009ara&a} abundances.}, mechanical and radiative inputs using \\textlcsc{starburst99} \\citep{starburst99:1999,starburst99:2010apjs}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/stellar_inputs.pdf}\n\\caption{\nStellar inputs (cumulative fraction) as a function of stellar age ($t_{\\star}$). 
Shown are the return fraction ($R$), metal yield ($Y$), SN mechanical energy ($\\epsilon_{\\rm sn}$), wind mechanical energy ($\\epsilon_{\\rm w}$), ionizing radiation energy ($\\epsilon_{\\rm ion}$), and UV radiation energy ($\\epsilon_{\\rm uv}$).\nThe fractions are given per unit stellar mass formed; energies are expressed in units of $10^{51}{\\rm erg}\\equiv{\\rm foe}$.\nCumulative fractions are indicated with different colours, as shown in the legend: the shaded regions cover the $0.02\\leq Z_{\\star}\/{\\rm Z}_{\\odot}\\leq1$ metallicity range; dark lines denote single metallicity {\\tt padova} stellar tracks \\citep{padova:1994}.\nTo guide the eye, the SN explosion period is bracketed by vertical dashed lines; on the upper axis we report the value of $m(t_{\\star})$, the minimum stellar mass corresponding to the stellar lifetime $t_{\\star}$. For definitions, see eqs. \\ref{eqs_stellar_inputs}.\n\\label{fig_gamete_tables}}\n\\end{figure}\n\nIn Fig. \\ref{fig_gamete_tables} we plot $R$, $Y$, $\\epsilon_{\\rm sn}$, $\\epsilon_{\\rm w}$, $\\epsilon_{\\rm ion}$ and $\\epsilon_{\\rm uv}$ as a function of $t_{\\star}$. For each curve the shaded regions denote the $0.02\\leq Z_{\\star}\/{\\rm Z}_{\\odot}\\leq1$ metallicity range; single $Z_{\\star}$ tracks are indicated with dark lines. The time interval during which massive stars can explode as SN ($0.8 \\lsim \\log t_{\\star}\/\\rm Myr\\lsim 1.6$) is highlighted with vertical dashed lines, and the upper axis is labelled with the corresponding stellar mass.\n\nNote that the OB star contribution ($\\log t_{\\star}\/\\rm Myr\\lsim 0.8$) to $\\epsilon_{\\rm w}$, $Y$ and $R$ is roughly proportional to $t_{\\star}$ and $Z_{\\star}$ (see also \\citealt{agertz:2012arxiv}, in particular eqs. 4). As the metallicity floor in the simulation is set to $Z_{\\rm floor}=10^{-3}{\\rm Z}_{\\odot}$, we slightly overestimate the wind contribution for low $Z_{\\star}$.\n\nFinally, note that the change of behavior of $\\epsilon_{\\rm uv}$ at $\\log t_{\\star}\/\\rm Myr\\lsim 2$ is due to the suppression of ionizing ($\\lambda\\leq\\,912\\,\\textrm{A\\kern -1.3ex\\raisebox{0.6ex}{$^\\circ$}}$) photon production. At late times ($\\log t_{\\star}\/\\rm Myr\\gsim 1.6$), AGB stars give a negligible mechanical energy contribution ($\\epsilon_{\\rm w}\\simeq{\\rm constant}$) but return mass and metals to the gas ($R$, $Y$).\n\n\\subsection{Stellar feedback}\\label{sezione_blast}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/feedback_fractions.pdf}\n\\caption{\nExample of the adopted feedback model. Fractional energy evolution for a single SN explosion ($E_{0}=1\\,{\\rm foe}$) in a gas characterized by $n=1\\,{\\rm cm}^{-3}$ and $Z=10^{-3}{\\rm Z}_{\\odot}$ as a function of the time interval from the explosion $\\Delta t$. We plot the total ($f=f_{\\rm th}+f_{\\rm kn}$), thermal ($f_{\\rm th}$), and kinetic ($f_{\\rm kn}$) energy fraction acquired by the gas (see eq. \\ref{sn_energy_gas_equation}) with solid black, dashed red and dotted blue lines, respectively.\nShaded regions indicate different stages of the SN evolution, i.e. the energy conserving Sedov-Taylor (ST) stage, shell formation (SF) stage, and pressure driven snowplow (PDS).\nIn the adopted formalism, the initial energy $E_{0}$ is a function of the stellar input (see Sec. \\ref{sec_stellar_inputs} and eq. \\ref{eqs_def_mec_energy}), e.g. $E_{0}=[\\epsilon_{\\rm sn}(t_{\\star}+\\Delta t)-\\epsilon_{\\rm sn}(t_{\\star})] M_{\\star}$. 
The full model is presented in App. \\ref{app_blastwave}.\n\\label{fig_blastwave_sketch}}\n\\end{figure}\n\nEqs. \\ref{eqs_stellar_inputs} provide us with the energy produced by stars in different forms. The next step is to understand what fraction of that energy is eventually deposited in the ISM. Consider a stellar population of initial mass $M_{\\star}$, metallicity $Z_{\\star}$ and age $t_{\\star}$ residing in a gas cell with volume $V_{\\rm cell}$. In our scheme, when the simulation evolves for a time $\\Delta t$, the chemical feedback act as follows:\n\n\\begin{subequations}\n\\begin{align}\n \\rho \t&= \\rho + \\left[R(t_{\\star}+\\Delta t)-R(t_{\\star})\\right] M_{\\star}\/V_{\\rm cell}\\\\\n Z \t&= Z \t+ \\left[Y(t_{\\star}+\\Delta t)-Y(t_{\\star})\\right] M_{\\star}\/V_{\\rm cell}\\,,\n\\end{align}\nwhere $\\rho$ and $Z$ are the the gas density and metallicity and $R$ and $Y$ are taken from eqs. \\ref{eqs_def_R_Y}. Note that chemical enrichment is due both to the SN and AGB winds.\n\n\\subsubsection{Supernova explosions}\n\nFor the mechanical feedback, let us first consider the case of SNe. At each SN event the specific energy of the gas changes as\n\n\\begin{align}\\label{sn_energy_gas_equation}\n e_{\\rm th} &= e_{\\rm th}+ f_{\\rm th} \\left[\\epsilon_{\\rm sn}(t_{\\star}+\\Delta t)-\\epsilon_{\\rm sn}(t_{\\star})\\right] M_{\\star}\/V_{\\rm cell}\\\\\n e_{\\rm nth}&= e_{\\rm nth}+ f_{\\rm kn} \\left[\\epsilon_{\\rm sn}(t_{\\star}+\\Delta t)-\\epsilon_{\\rm sn}(t_{\\star})\\right] M_{\\star}\/V_{\\rm cell}\\,,\n\\end{align}\n\\end{subequations}\nwhere $e_{\\rm th}$ and $e_{\\rm nth}$ are the thermal and non-thermal energy densities, and $f_{\\rm th}$ and $f_{\\rm kn}$ are the fractions of thermal and kinetic energy deposited in the ISM. Thus, $e_{\\rm nth}$ accounts for the momentum injection by SN and $e_{\\rm th}$ for the thermal pressure part.\n\nIn the present work, we have developed a novel method to compute such quantities. The method derives $f_{\\rm th}$ and $f_{\\rm kn}$ from a detailed modelling of the subgrid blastwave evolution produced by the SN explosion. We calculate $f_{\\rm th}$ and $f_{\\rm kn}$ by evaluating the shock evolution at time $\\Delta t$, the time step of the simulation\\footnote{The underlying assumption is that the shock fronts exit the cell in $\\lsim\\Delta t$. This is quite consistent because the shock is expected to be supersonic, and the sound crossing time is larger or comparable with the simulation time step $\\Delta t$, dictated by the Courant-Friedrichs-Lewy conditions.}.\n\nThe adopted blastwave model is based on \\citet[][hereafter \\citetalias{ostriker:1988rvmp}]{ostriker:1988rvmp}, and it accounts for the evolution of the blast through its different evolutionary stages (energy conserving, momentum conserving, etc.). While each stage is self-similar, the passage from one stage to the next is determined by the cooling time. Thus, $f_{\\rm th}$ and $f_{\\rm kn}$ depends on the blastwave evolutionary stage. The latter, in turn depends on the gas density, cooling time, and the initial energy of the blast ($E_{0}=[\\epsilon_{\\rm sn}(t_{\\star}+\\Delta t)-\\epsilon_{\\rm sn}(t_{\\star})] M_{\\star}$, in eq. \\ref{eqs_def_mec_energy}).\n\nThe model details are presented in App. \\ref{app_blastwave}. As an example, in Fig. \\ref{fig_blastwave_sketch}, we show the energy evolution for a single SN explosion ($E_{0}=1\\,{\\rm foe}$) in a gas characterized by $n=1\\,{\\rm cm}^{-3}$ and $Z=10^{-3}{\\rm Z}_{\\odot}$. 
The total energy $E(t)$ is constant in the Sedov-Taylor (ST) stage, it decrease down to $0.5\\,E_{0}$ during the shell formation (SF) stage, and it evolves as $\\Delta t^{-2\/7}$ in the pressure driven snowplow (PDS) stage (see eq. \\ref{eq_energy_shock}). In the ST stage most of the energy is thermal, i.e. $f_{\\rm kn}\/f_{\\rm th}\\simeq 0.4$; however, in the SF stage $f_{\\rm kn}$ increases, since part of the thermal energy is radiated away and some is converted into kinetic form\\citep[e.g.][]{cox:1972apj,cioffi:1988apj}. Finally, during the PDS stage the ratio of thermal to kinetic is $f_{\\rm kn}\/f_{\\rm th}\\simeq 2$ (see eqs. 6.14 in \\citetalias{ostriker:1988rvmp}).\n\nIn this particular example -- a $1\\,{\\rm foe}$ SN exploding in a $n=1\\,{\\rm cm}^{-3}$ cell -- by assuming a simulation time step of $\\Delta t\\simeq 10^{-2}\\rm Myr$, we find that the blastwave is in the PDS stage, and the gas receives (via eqs. \\ref{sn_energy_gas_equation}) a fraction of energy $f_{\\rm th} \\simeq 8\\%$ and $f_{\\rm kn} \\simeq 16\\%$ in thermal and kinetic form, respectively. During $\\Delta t$, about $\\simeq 75$\\% of the initial SN energy has been either radiated away or lost to work done by the blastwave to exit the cell. The model is in broad agreement with other more specific numerical studies \\citep[e.g.][]{cioffi:1988apj,walch:2015mnras,martizzi:2015mnras}.\n\n\\subsubsection{Stellar winds}\n\nStellar winds are implemented in a manner paralleling the above scheme for SNe. The energy variation can be calculated via eq. \\ref{eqs_def_mec_energy}, where $\\epsilon_{\\rm sn}$ is substituted with $\\epsilon_{\\rm w}$, given in eqs. \\ref{sn_energy_gas_equation}. Then, $f_{\\rm th}$ and $f_{\\rm kn}$ for winds are calculated via a stage scheme similar to SN. The main difference in the efficiency factors calculation depends on the mode of energy production, i.e. impulsive for SNe, continuous for winds. The complete scheme is detailed in App. \\ref{app_blastwave}.\n\nThe efficiency of SN is greatly increased when the gas is pre-processed by stellar winds \\citep{walch:2015mnras,fierlinger:2016}, since the energy loss process is highly non-linear \\citep[][see Fig. 8]{fierlinger:2016}. For example, when a SN explodes in the lower density bubble produced by the stellar progenitor wind, the adiabatic phase lasts longer and consequently $f_{\\rm kn}$ and $f_{\\rm th}$ increase considerably. \n\n\\subsubsection{Radiation pressure}\\label{sec_rad_press}\n\nFinally, we account for radiation pressure from stars. The coupling of the gas with the radiation can be expressed in terms of $\\dot{p}_{rad}$, the rate of momentum injection \\citep{krumholz:2009radpress,hopkins:2011mnras,krumholz:2012radpress,wise:2012radpres,agertz:2012arxiv}, and accounts for the contribution from ionization, and from dust UV heating and IR-trapping\n\n\\begin{subequations}\n\\begin{align}\\label{eq_rad_moment_injection}\n\\dot{p}_{rad} =& (L_{\\rm ion}\/c)(1-\\exp(-\\tau_{\\rm ion})) \\\\\n +& (L_{\\rm uv}\/c)((1-\\exp(-\\tau_{\\rm uv})) +f_{\\rm ir} )\\,,\\nonumber \n\\end{align}\nwhere $c$ is the speed of light, $\\tau_{\\rm ion}$ the hydrogen optical depth to ionizing radiation, and $f_{\\rm ir}$ is the term accounting for the IR-trapping. $L_{\\rm ion}$ and $L_{\\rm uv}$ are calculated by integration of the stellar tracks (eqs \\ref{eqs_def_rad_energy}). The calculation of $\\tau_{\\rm uv}$ is modelled in Sec. \\ref{sec_model_sf} (eq. \\ref{eq_dust_optical_depth} and related text). 
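For clarity, eq. \\ref{eq_rad_moment_injection} is simple enough to be transcribed directly; in the snippet below $\\tau_{\\rm ion}$, $\\tau_{\\rm uv}$ and $f_{\\rm ir}$ are treated as inputs (their actual evaluation is described next and in App. \\ref{app_rad_press}) and the numerical values are purely illustrative.
\\begin{verbatim}
# Direct transcription of the radiation-pressure momentum injection rate:
# tau_ion, tau_uv and f_ir are inputs here; example numbers are arbitrary.
import numpy as np

c = 2.998e10                                  # speed of light [cm/s]

def pdot_rad(L_ion, L_uv, tau_ion, tau_uv, f_ir):
    return (L_ion / c) * (1.0 - np.exp(-tau_ion)) \
         + (L_uv / c) * ((1.0 - np.exp(-tau_uv)) + f_ir)

# e.g. L_ion = L_uv = 1e43 erg/s on an optically thick cloud, no IR trapping:
print(pdot_rad(1e43, 1e43, 10.0, 10.0, 0.0))  # ~6.7e32 g cm s^-2
\\end{verbatim}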
We compute $\\tau_{\\rm ion}$ and $f_{\\rm ir}$ according to the physical properties of the gas, as detailed in App. \\ref{app_rad_press}. Note that we do not assume, as sometimes done, $\\tau_{\\rm ion}\\sim \\tau_{\\rm uv}\\gg1$, i.e. we allow for the possibility that some LyC photons can escape.\n\nIn smoothed particle hydrodynamics (SPH) codes, radiation pressure (eq. \\ref{eq_rad_moment_injection}) can be implemented as a \\quotes{kick} \\citep[e.g.][]{hopkins:2011mnras,barai:2015mnras}. Namely, a velocity $\\Delta v = \\dot{p}_{rad}\\Delta t\/m_b$ is directly added to some of the SPH particles of mass $m_b$ near the photon source. The particles that receive kicks are statistically chosen according to a probability $\\mathcal{P}_{\\rm kick}$, and with kick direction $\\hat{v}$ that is sampled from a random distribution. Considering the specific kinetic energy of the SPH particles, we would have\n\\be\ne_k = 0.5\\,\\left\\langle m_b (\\mathbf{v} + \\Delta v \\mathcal{P}_{\\rm kick}\\mathbf{\\hat{v}} )^{2} \\right\\rangle\/V_{cell}\n\\ee\nwhere $\\mathbf{v}$ is the original particle velocity, the $\\langle\\,\\rangle$ operator indicates the particles sum weighted by the SPH kernel, and $V_{cell}$ is the kernel volume. Thus, because of the kick, the increase of energy density would be\\footnote{In eq. \\ref{eq_red_press_energy_increase}, when going from the first to the second line, note that first terms gives a null contribution, as $\\mathbf{v}$ is ordered motion, while the kicks are randomly oriented via $\\mathbf{\\hat{v}}$, and that, by definition, $\\langle \\mathcal{P}_{\\rm kick}\\rangle = 1$.}\n\n\\begin{align}\\label{eq_red_press_energy_increase}\n\\Delta e_k &= \\langle m_b\\, \\mathcal{P}_{\\rm kick} (\\Delta v\\mathbf{v}\\mathbf{\\hat{v}} + 0.5(\\Delta v)^{2} \\rangle\/V_{cell} \\nonumber\\\\\n &= 0.5\\, m_b \\, (\\Delta v)^2\/V_{cell} \\\\\n &= 0.5\\,(\\dot{p}_{rad}\\Delta t)^2\/(m_b\\,V_{cell}) \\nonumber\\,,\n\\end{align}\n\\end{subequations}\nwhere $\\dot{p}_{rad}$ can be calculated via eq. \\ref{eq_rad_moment_injection}, and eq. \\ref{eq_red_press_energy_increase} can be directly cast into the AMR formalism. Additionally, because of our approximate treatment of IR-trapping (see App. \\ref{app_rad_press}), we force energy conservation: $V_{cell} \\Delta e_k \\leq \\Delta t\\,(L_{\\rm ion} + L_{\\rm uv})$, i.e. the deposited energy must not exceed the radiative input energy. Finally, we recall here that non-thermal energy is dissipated with a time scale $t_{\\rm diss}$, as described in the beginning of Sec. \\ref{sec_numerical}.\n\n\\section{Results}\\label{sec_result}\n\nAt $z=6$ ($t \\simeq 920\\, \\rm Myr$), the simulated zoom-in region contains a group of 15 DM haloes that host galaxies. We target the most massive halo ($M_{\\rm h} = 1.8\\times 10^{11}{\\rm M}_{\\odot}$) that hosts \\quotes{\\emph{Dahlia}}, which is a galaxy characterized by a stellar mass of $M_{\\star}=1.6\\times 10^{10}{\\rm M}_{\\odot}$, therefore representative of a typical LBG galaxy at that epoch. {Dahlia} has 14 satellites located within $\\simeq 100 \\,{\\rm kpc}$ from its centre. The six largest ones have a DM mass in the range $M_{\\rm h} = 2.5\\times 10^{9}{\\rm M}_{\\odot} - 1.2\\times 10^{10}{\\rm M}_{\\odot}$, and they host stars with total mass $M_{\\star}\\lsim 10^{9}{\\rm M}_{\\odot}$. 
Additionally, there are eight smaller satellites ($M_{\\rm h} \\simeq 10^{7}{\\rm M}_{\\odot}$), with $M_{\\star}\\simeq 10^{5}{\\rm M}_{\\odot}$.\n\n\\subsection{Overview}\\label{sec_res_barions}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.99\\textwidth,height=0.33\\textwidth]{plots_pdf\/maps\/landscape_completo_type3_zoomed_rtdensmap_27.pdf}\n\\vspace{.3pt}\n\n\\includegraphics[width=0.99\\textwidth,height=0.33\\textwidth]{plots_pdf\/maps\/landscape_completo_type3_zoomed_rttemp_27.pdf}\n\\vspace{.3pt}\n\n\\includegraphics[width=0.99\\textwidth,height=0.33\\textwidth]{plots_pdf\/maps\/landscape_completo_type3_zoomed_rtpress_27.pdf}\n\\vspace{.3pt}\n\n\\includegraphics[width=0.99\\textwidth,height=0.33\\textwidth]{plots_pdf\/maps\/landscape_completo_type3_zoomed_rtmetal_27.pdf}\n\\caption{\n(Caption next page.) %\n\\label{fig_mappe_hydro}\n}\n\\end{figure*}\n\\addtocounter{figure}{-1}\n\\begin{figure*}\n\\caption{(Previous page.) %\nMaps of the simulated galaxy {Dahlia} $z=6$. From left to right we plot subsequent zooms on the galaxy. From top to bottom we plot the density ($n$), temperature ($T$), pressure ($P$) and metallicity ($Z$). Each map is obtained\\textsuperscript{\\ref{footnote_pymses}} by mass-averaging the physical quantity along the line of sight piercing the field of view and centred on {Dahlia}. In all panels the physical scale is indicated as an inset. Movies of {Dahlia} can be found at \\url{https:\/\/www.researchgate.net\/profile\/Andrea_Pallottini}.\n}\n\\end{figure*}\n\nWe start by looking at the overall properties of {Dahlia} on decreasing scales. In the following we refer to Fig. \\ref{fig_mappe_hydro}, which shows the simulated density ($n$), temperature ($T$), total (thermal+kinetic) pressure ($P$), and metallicity ($Z$) maps\\footnote{Most of the maps of this paper are obtained with a customized version of \\textlcsc{pymses} \\citep{labadens:2012aspc}, a visualization software that implements optimized techniques for the AMR grid of \\textlcsc{ramses}.\\label{footnote_pymses}}\nat $z=6$.\n\n\\subsubsection{Environment (scale $\\simeq 160$~kpc)}\n\n{Dahlia} sits at the centre of a cosmic web knot and accretes mass from the intergalactic medium (IGM) mainly via 3 filaments of length $\\simeq 100\\,{\\rm kpc}$, confirming previous findings \\citep[][]{dekel:2009nat}. These overdense filaments ($n\\simeq 10^{-2}{\\rm cm}^{-3}$) are slightly colder ($T\\simeq10^{3.5}{\\rm K}$) than the IGM ($\\langle T \\rangle\\simeq10^{4.5}{\\rm K}$) as a consequence of their shorter radiative cooling time ($t_{\\rm cool}\\propto n^{-1}$). Along these cold streams, pockets of shock-heated ($T\\gsim10^{4.5}{\\rm K}$) gas produced by both structure formation and feedback (SN and winds) are visible.\n\nThe galaxy locations can be pinpointed from the metallicity map, showing a dozen of metal-polluted regions. The size of the metal bubbles ranges from $\\simeq20$ kpc in the case of {Dahlia} to a few ${\\rm kpc}$ for the satellites. Bubble sizes increase with the total stellar mass (see \\citetalias{pallottini:2014_sim}, in particular Fig. 13), and age of the galaxy stellar population. \n\nOn these scales, the pressure is dominated by the thermal component ($P\\simeq P_{\\rm th}\\sim 10^4{\\rm K}\\,{\\rm cm}^{-3}$); higher values of pressure, associated to non-thermal feedback effects (e.g. 
gas bulk motion), are confined around star forming regions, again traced by the metallicity distribution.\n\n\\subsubsection{Circumgalactic medium (scale $\\simeq 50$~kpc)}\\label{sec_CGM}\n\nTo investigate the circumgalactic medium (CGM), we zoom in a region within $\\sim 3\\, r_{\\rm vir} = 47.5$ kpc from {Dahlia}'s centre. On these scales, we can appreciate the presence of several {Dahlia}'s satellites, i.e. extended (few ${\\rm kpc}$) structures that are $\\sim 100$ times denser than the filament in which they reside. Two of these density structures are particularly noticeable. These are located at a distance of $\\sim10~{\\rm kpc}$ from the centre in the upper left and lower left part of the map, respectively. By looking at the metallicity distribution, we find that both satellites reside within their own metal bubble, which is separated from Dahlia's one. This clearly indicates an in-situ star formation activity.\n\nAdditionally, the density map shows about $20$ smaller ($\\sim 10-100\\,{\\rm pc}$) overdense clumps ($n\\gsim 10\\,{\\rm cm}^{-3}$). The ones within {Dahlia}'s metal bubble are enriched to $Z\\simeq {\\rm Z}_{\\odot}$. This high $Z$ value is indicative of in-situ self-pollution, which possibly follows an initial pre-enrichment phase from {Dahlia}. Clumps outside {Dahlia} metal bubble have on average an higher density ($n\\sim 10^{2}{\\rm cm}^{-3}$). Since these clumps are unpolluted, they have not yet formed stars, as the effective density threshold for star formation is $\\sim 25\/(Z\/{\\rm Z}_{\\odot}){\\rm cm}^{-3}$ (see eq. \\ref{eq_critical_density} and Sec. \\ref{sec_model_sf}). Such clumps represent molecular cloud complexes caught in the act of condensing as the gas streams through the CGM \\citep{ceverino:2016MNRAS}. Such clumps have gas mass in the range $10^5 - 10^6 {\\rm M}_{\\odot}$, and are not DM-confined, as the DM density field is flat on their location.\n\nStar forming regions are surrounded by an envelope of hot ($T\\simeq 10^{5.5}{\\rm K}$), diffuse ($n\\gsim 10^{-2}\\,{\\rm cm}^{-3}$) and mildly enriched ($Z\\sim 10^{-2}{\\rm Z}_{\\odot}$) gas produced by SN explosions and winds. In the centre of star forming regions, instead, the gas can cool very rapidly due to the high densities\/metallicities. Nevertheless, these regions are highly pressurized due to bulk motions mostly driven by radiation pressure (see Fig. \\ref{fig_feedback_vs_time}).\n\n\\subsubsection{ISM (scale $ \\simeq 10$~kpc)}\\label{sec_small_scale}\n\nThe structure of Dahlia's ISM emerges once we zoom in a region $\\sim 0.5\\, r_{\\rm vir}$ from its centre. In the inner region ($\\simeq 2\\,{\\rm kpc}$), a counterclockwise disk spiral pattern is visible, since the field of view is perpendicular to the rotation plane of the galaxy (see \\citealt{gallerani:2016outflow} for the analysis of the velocity field of {Dahlia}). The presence of disks in these early systems has already been suggested by other studies. For example, \\citet{feng:2015apj} show that already at $z \\sim 8$ nearly $70\\%$ of galaxies with $M_{\\star}\\simeq 10^{10}{\\rm M}_{\\odot}$ have disks (see also Sec. \\ref{sec_final_results}).\n\nThe spiral central region and the spiral arms are dense ($n\\simeq 10^{2}{\\rm cm}^{-3}$) and cold ($T\\simeq 10^3{\\rm K}$), and the active SF produces a large in-situ enrichment ($Z\\simeq {\\rm Z}_{\\odot}$). 
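As a simple order-of-magnitude check (our estimate, using only the values quoted in the text), gas with $n\\simeq 10^{2}{\\rm cm}^{-3}$ and $T\\simeq 10^{3}{\\rm K}$ has a thermal pressure of only $P_{\\rm th}\/k\\simeq n\\,T\\simeq 10^{5}\\,{\\rm K}\\,{\\rm cm}^{-3}$; the much larger total pressures quoted below must therefore be supplied by non-thermal, radiation-driven bulk motions.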
Winds and shocks from SN have no effect in the inner part of the galaxy, because of the high density and short cooling time of the gas; this implies that metals remain confined within $\\sim 2\\,{\\rm kpc}$.\n Within spiral arms radiation pressure induced bulk motions largely dominate the total pressure, which reaches values as high as $P\\gsim 10^{6.5}{\\rm K}\\,{\\rm cm}^{-3}$. The imprint of SN shocks is evident in the temperature map in regions with $T\\gsim 10^5{\\rm K}$. Shock driven outflows originated in spiral arms travel outward in the CGM, eventually reaching the IGM if outflow velocities exceed the escape velocity ($\\sim 100\\,{\\rm km}\\,{\\rm s}^{-1}$, see Fig. 4 in \\citealt{gallerani:2016outflow}).\n\nOutflows are either preferentially aligned with the galaxy rotation axis, or they start at the edge of the disk. However, when spherically averaged, infall and outflow rates are nearly equal ($\\sim 30\\,{\\rm M}_{\\odot}\/{\\rm yr}$ at $z\\sim6$, \\citealt{gallerani:2016outflow}), and the system seems to self-regulate \\citep[see also][]{dekel:2014}.\n\nOutside the disk, clumps with density $n\\simeq 10^{2}{\\rm cm}^{-3}$ are also present and are actively producing stars. These isolated star forming MCs are located at a distance $\\gsim 2\\,{\\rm kpc}$ from the centre, and show up as spots of high pressure ($P\\gsim 10^7{\\rm K}\\,{\\rm cm}^{-3}$); some of this MCs are completely disrupted by internal feedback and they can be recognized by the low metallicity ($Z\\sim 10^{-3}{\\rm Z}_{\\odot}$): this is consistent with the outcome of numerical simulations of multiple SN explosions in single MC \\citep[e.g.][]{kortgen:2016}.\n\n\\subsubsection{Radial profiles}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.485\\textwidth]{plots_pdf\/eos\/profile_sph_profile_dahlia_27.pdf}\n\\caption{\nDensity ($n$, blue), metallicity ($Z$, red) and molecular hydrogen density ($n_{\\rm H2}$, yellow) radial profile ($r$) with respect to {Dahlia} centre. The profiles are spherically averaged, as indicated by the $\\langle\\,\\rangle_{V}$ operator, and the upper axis shows the radial distance $r$ as a function of the virial radius of {Dahlia} ($r_{\\rm vir}$).\n\\label{fig_sph_profile}\n}\n\\end{figure}\n\nFig. \\ref{fig_sph_profile} shows spherically averaged density, metallicity, and ${\\rm {H_2}}$~density profiles for the gas. The density profile rapidly decreases from $n\\sim 30\\,{\\rm cm}^{-3}$ at $r\\sim 0$ to $n\\sim 0.1\\,{\\rm cm}^{-3}$ at $r\\sim 6\\,{\\rm kpc} (\\sim 0.5\\,r_{\\rm vir})$, and then flattens at larger distances. Such profile is consistent with the average profile of $z=4$ galaxies presented in \\citetalias{pallottini:2014_sim}. There we claimed that the density profile is universal once rescaled to the halo virial radius (see also \\citealt{liang:2016}). Superposed to the mean density profile, local peaks are clearly visible: they result from individual clumps\/satellites, as discussed above. \n\nThe central metallicity is close to the solar value, but by $r\\sim12\\,{\\rm kpc}\\sim r_{\\rm vir}$ it has already dropped to $Z=Z_{floor}$. Within $0\\lsim r\/{\\rm kpc}\\lsim6$, the metallicity gradient closely tracks the density profile, while for $6\\lsim r\/{\\rm kpc}\\lsim15$ the decrease is steeper. \\citet{pallottini:2014cgmh} find that the metallicity profile is not universal, however it usually extend up to few virial radii, as for {Dahlia}; further insights can be obtained by analyzing the $n$-$Z$ relation (Sec. \\ref{sec_eos}).\n\nIn Fig. 
\\ref{fig_sph_profile} we note that the $Z$ gradient found in {Dahlia} at $z = 6$ is slightly steeper than the one inferred from observations of $z\\sim 3$ galaxies: i.e. we find $\\Delta Z\/r \\sim -0.1\\, {\\rm dex}\/{\\rm kpc}$ while the observed ones are $\\sim 0\\, {\\rm dex}\/{\\rm kpc}$ \\citep{wuyts:2016} and $\\sim +0.1\\, {\\rm dex}\/{\\rm kpc}$ \\citep{troncoso:2013arxiv1311}. This suggests that the metallicity profile evolve with cosmic time and that the flattening is likely caused by stellar feedback, which in our Dahlia may occur in the following Gyr of the evolution. However, to prove such claim we should evolve the simulation to $z\\sim3$.\n\nThe ${\\rm {H_2}}$~profile is spiky, and each peak marks the presence of a distinct SF region\\footnote{We remind that the profiles are volume-weighted, thus the plotted $n_{\\rm H2}$ accounts for the fact that ${\\rm {H_2}}$~is present only in a fraction of the gas at a given radius.}. In {Dahlia} ${\\rm {H_2}}$~is mainly concentrated within $r\\lsim 0.5\\,{\\rm kpc}$ and it is distributed in the disk-like structure seen in Fig. \\ref{fig_mappe_hydro} (see Sec. \\ref{sec_final_results}). The location of the other peaks correspond to the satellites, which are mostly co-located with metallicity peaks. With increasing metallicity, in fact, lower densities are needed to form ${\\rm {H_2}}$~(eq. \\ref{eq_critical_density}).\n\n\\subsection{Star formation and feedback history}\\label{sec_sfr_result}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/sameplot_dahlia_27.pdf}\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/sfr_dahlia_7.pdf}\n\\caption{\n{\\bf Left}: {Dahlia} and satellites cumulative stellar masses ($M_{\\star}$, upper left panel) and star formation rates ($SFR$, lower left panel) as a function of cosmic time ($t$). For each galaxy, individual $M_{\\star}$ and $SFR$ are plotted with a solid line, coloured accordingly to the total dark matter mass ($M_{\\rm h}$) of the host halo at $z=6$. For both $M_{\\star}$ and $SFR$, {Dahlia}'s tracks are plotted with a blue line, and the totals ({Dahlia}+satellites) are in black. {\\bf Right:} $SFR$ as a function of cosmic time, with individual galaxies defined by the merger history up to $z\\simeq8.5$. Note the different $M_{\\rm h}$ colourbar scale with respect to the left panel. \n\\label{fig_sfr_smf_energy}\n}\n\\end{figure*}\n\nWe analyze the SF history of {Dahlia} and its major satellites by plotting in Fig. \\ref{fig_sfr_smf_energy} the cumulative stellar mass ($M_{\\star}$) and star formation rate ($SFR$) vs. time\\footnote{The $SFR$ is averaged in steps of $\\simeq 3\\,\\rm Myr$. We have checked that smaller steps do not alter the following analysis.}.\n\nFor the whole galaxy sample, the time averaged ($\\pm$ r.m.s.) specific star formation is $\\langle{\\rm sSFR}\\rangle= (16.6 \\pm 32.8)\\,{\\rm Gyr}^{-1}$. This mean value is comparable to that obtained by previous simulations of high-$z$ galaxies \\citep{wise:2012radpres} and broadly in agreement with $z\\sim 7$ observations \\citep{Stark:2013ApJ}. At early times the $sSFR$ reaches a maximum of $\\sim 100\\,{\\rm Gyr}^{-1}$, while a minimum of $3.0\\,{\\rm Gyr}^{-1}$ is found during the late time evolution. Both the large ${\\rm sSFR}$ range and maximum at early times are consistent with simulations by \\citet{shen:2014}. 
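The time-averaged sSFR and its r.m.s. quoted above can be obtained directly from the star formation histories; a minimal sketch (the synthetic tracks below are placeholders for the curves of Fig. \\ref{fig_sfr_smf_energy}, not simulation data) is:\n\\begin{verbatim}\nimport numpy as np\n\n# placeholder tracks sampled every ~3 Myr\nt = np.arange(200.0, 920.0, 3.0)            # Myr\nsfr = 50.0 + 40.0 * np.sin(t \/ 50.0)        # Msun\/yr\nmstar = 1e7 + np.cumsum(sfr) * 3e6          # Msun, SFR integrated over 3 Myr steps\n\nssfr = sfr \/ mstar * 1e9                    # Gyr^-1\nprint(ssfr.mean(), ssfr.std())\n\\end{verbatim}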
At late times, the $sSFR$ is in agreement with analytical calculation \\citep{behroozi:2013apj}, and with $z=7$ observations \\citep{gonzalez:2010}, although we note {Dahlia} has a larger stellar mass with respect to the galaxies in the sample ($M_{\\star}\\simeq 5\\times 10^9{\\rm M}_{\\odot}$).\n\nAt all times, {Dahlia} dominates both the stellar mass and star formation rate, whose mean value is $\\langle SFR\\rangle \\simeq (35.3 \\pm 32.7)\\,{\\rm M}_{\\odot}\/{\\rm yr}$. Its stellar mass grows rapidly, and it reaches $M_{\\star}\\sim 10^{9}{\\rm M}_{\\odot}$ by $t\\simeq 400\\,{\\rm Myr}$ ($z=11$), i.e. after $\\simeq 120\\,{\\rm Myr}$ from the first star formation event. Such rapid mass build-up is due to merger-induced SF, that plays a major role at high-$z$ \\citep{poole:2016MNRAS,behroozi:2013apj,salvadori2010MNRAS}. The $SFR$ is roughly constant from $z\\sim 11$ to $z\\sim 8.5$ and reaches a maximum of $\\simeq 130\\, {\\rm M}_{\\odot}\/{\\rm yr}$ at $z\\sim 6.7$. With respect to observations of $z\\sim6$ LBG galaxies \\citep[e.g.][]{stanway:2003MNRAS,stark:2009apj} the $SFR$ and $M_{\\star}$ of {Dahlia} are above the mean values, but still consistent within one sigma. Additionally, the combination of $SFR$, $M_{\\star}$, and $Z_{\\star}$ for {Dahlia} are compatible with the fundamental mass metallicity relation observed in local galaxies \\citep{mannucci:2010mnras}.\n\nThe total stellar mass in satellites is $M_{\\star}\\sim 10^{9}{\\rm M}_{\\odot}$. Typically, SF starts with a burst, generating $\\sim 10^{7.5} {\\rm M}_{\\odot}$ of stars during the first $\\simeq 20\\,\\rm Myr$. Then the $SFR$ exponentially declines and becomes intermittent with a bursty duty cycle of $\\sim100\\,\\rm Myr$. This process can be explained as follows. As an halo forms, at its centre the density of the gas slowly rises. When the density is higher than the critical density of ${\\rm {H_2}}$~formation (eq. \\ref{eq_critical_density}), the gas in the inner region is converted into stars in few free-fall times. Then feedback, and in particularly coherent SN explosions ($t_{\\star}\\gsim 10\\,\\rm Myr$, see Fig. \\ref{fig_gamete_tables}), quenches $SFR$, and the star formation activity becomes self-regulated. As mergers supply fresh gas, the $SFR$ suddenly goes out of equilibrium and becomes bursty again. Note that self-regulation is possible only for major satellites, since smaller ones ($M_{\\rm h}\\lsim 10^8{\\rm M}_{\\odot}$) cannot retain a large fraction of their gas following feedback events due to their shallow potential wells (see \\citetalias{pallottini:2014_sim}).\n\nNote that the duty cycle and the amplitude of the burst are fairly in agreement with observations of $M_{\\star}\\sim10^8-10^{10}{\\rm M}_{\\odot}$ galaxies at $z\\lsim0.3$ \\citep{kauffmann:2014mnras}. Furthermore, in our satellites we find that the typical behavior of the burst phases -- starburst - quiescent - post-starburst -- is qualitatively similar to what found by \\citet{read:2016mnras}, that simulate the evolution of a $M_{\\star}\\simeq 10^9{\\rm M}_{\\odot}$ galaxy for $\\simeq 1\\, {\\rm Gyr}$ (see also \\citealt[][]{teyssier:2013mnras,read:2016mnras_b} for further specific studies on the bursty nature of this kind of galaxies).\n\nSince individual galaxies are defined as group of star particles in the same DM halo at $z=6$, the SF history accounts for the sum of all the stars that formed in different progenitors of the considered halo. For comparison, in the right panel of Fig. 
\\ref{fig_sfr_smf_energy} we plot the $SFR$ of individual halos defined by their merger history at $z=8.7$. Galaxies with active SF at $300-550$ Myr merge into {Dahlia} at a later time, thus they do not appear individually in the left panel of Fig. \\ref{fig_sfr_smf_energy}.\n\nSuperimposed to the global trend, the SF history of {Dahlia} and its satellites fluctuates on time scales of $\\sim 10\\,\\rm Myr$, corresponding to the time scale of energy deposition by feedback \\citep[see e.g.][]{torrey:2016arxiv}.\n\n\\subsubsection{Star formation efficiency}\\label{sez_sfr_efficiency}\n\n\\begin{figure}\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/eos\/std_eos_ele_semenov_dahlia_27.pdf}\n\\caption{Effective star formation efficiency ($\\zeta_{\\rm sf}\\,f_{\\rm H2}$) vs density ($n$) at $z=6$. The distribution is ${\\rm {H_2}}$~mass weighted; we consider gas within $3\\, r_{\\rm vir} = 47.5$ kpc from {Dahlia} centre.\n\\label{fig_cfr_semenov}}\n\\end{figure}\n\n$\\zeta_{\\rm sf}\\,f_{\\rm H2}$ represents the quantity of gas converted in stars within a free-fall time (see eq. \\ref{eq_sfr_tot}). In Fig. \\ref{fig_cfr_semenov} we plot the effective star formation efficiency ($\\zeta_{\\rm sf}\\,f_{\\rm H2}$) as a function of gas density, weighted by the ${\\rm {H_2}}$~mass fraction at $z=6$. Most of the ${\\rm {H_2}}$~is contained in the range $n=10-100 {\\rm cm}^{-3}$, and the effective efficiency $\\zeta_{\\rm sf}\\,f_{\\rm H2}$ varies from $10^{-3}$ to $10^{-1}$. Since $\\zeta_{\\rm sf}= \\mathrm{const.} =0.1$, the spread is purely due to the dependence of $f_{\\rm H2}$ on density and metallicity (see Fig. \\ref{fig_kmt_test}). Note that by construction $\\zeta_{\\rm sf}\\,f_{\\rm H2}\\leq0.1$, and the plot does not show values very close to such limit, since gas with higher effective efficiency is converted into stars within a few free-fall times (eq. \\ref{eq_sfr2}).\n\nInterestingly, our ${\\rm {H_2}}$-based star formation criterion is reminiscent of a density threshold one, as below $n \\simeq 3\\, {\\rm cm}^{-3}$ the efficiency drops abruptly (eqs. \\ref{eqs_sfr_equivalence}). However, an important difference remains, i.e. in the present model at any given density the efficiency varies considerably as a result of the metallicity dependence. The relation between efficiency and density is also similar to that found by \\citet{semenov:2015} (\\citetalias{semenov:2015}). This is striking because these authors use a star formation efficiency that depends on the turbulent velocity dispersion of the gas, with no notion of the local metallicity. This comparison is discussed further in Sec. \\ref{sec_conclusioni}.\n\n\\subsubsection{Feedback energy deposition}\\label{sec_feedback_res}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/otf_ist_dahlia.pdf}\n\\caption{Rate of energy deposition in the gas, ${\\rm d}E\/{\\rm d}t$, by feedback processes as a function of cosmic time. Different contributions (SN, wind and radiation) are plotted with a different colour, and we additionally distinguish between the kinetic (thick lines) and thermal (thin lines) energy variation. By definition, radiation pressure has no thermal contribution. Note the jump at $t\\simeq500\\,\\rm Myr$ due to the onset of radiation pressure by AGB stars. The upper axis indicates the corresponding redshift.\n\\label{fig_feedback_vs_time}\n}\n\\end{figure}\n\nAs discussed in Sec. \\ref{sezione_blast}, only a small fraction of the available energy produced by stars can couple to the gas. 
During the simulation, we find that the time-averaged efficiency of the conversion is $f \\sim 0.1\\%$, regardless of the feedback type. These low efficiencies imply that energy is mostly dissipated within MCs where the stars reside and produce it. For SN and winds, such a small efficiency is a consequence of the short cooling times in MCs (see also App. \\ref{app_blastwave}). For radiation pressure the efficiency is limited by the relatively small dust optical depths (see also App. \\ref{app_rad_press}).\n\nNote that, typically in simulations \\citep[e.g.][]{wise:2012radpres,agertz:2012arxiv}, energy from stars is directly deposited in the gas, and then dissipation (mostly by radiative losses) occurs during the hydrodynamical time step. Within our scheme, instead, the deposited energy is already dissipated within high density cells, where cooling is important. Nevertheless, this does not appear to determine major differences in, e.g., $SFR$ history and ISM thermodynamics, as discussed in Sec. \\ref{sec_eos}.\n\nIn Fig. \\ref{fig_feedback_vs_time} we plot the energy deposition rate in the gas by various feedback processes as a function of time. Most evidently, \\emph{radiation dominates the energy budget at all times}: $\\dot E_{rad} \\simeq 10^{2} \\dot E_{SN}\\simeq 10^3 \\dot E_{w}$. The ratios of these energy rates somewhat reflect the stellar inputs shown in Fig. \\ref{eqs_stellar_inputs}, although this is not a trivial finding, given that the interplay among different feedback types is a highly non-linear process.\n\nAs expected, the energy deposition rate behaves as $\\dot E \\propto SFR^q$, with $q \\simgt 1$, apart from fluctuations and jumps such as the one at $t\\simeq 500\\,\\rm Myr$. The scaling can be understood by simple dimensional arguments. Assume that most of the energy is deposited by radiation pressure. In the optically thick limit, we can combine eqs. \\ref{eq_red_press_energy_increase} and \\ref{eq_rad_moment_injection} to write $\\dot E_{rad} \\Delta t \\simeq (L_{\\rm uv} \\Delta t)^2 \/ (M_{g}\\,c^2)$, where $M_{g}$ is the gas mass accelerated by radiation, and we neglect ionizing radiation. Then, using eqs. \\ref{eqs_def_rad_energy}, we can write $\\dot E_{rad} \\propto SFR\\,(M_{\\star}\/M_{g})$. Initially, $M_{g}\\simeq M_{\\star}$, thus $\\dot E_{rad} \\propto SFR$. Once the gas mass is expelled from the star forming region or converted into stars, $M_{g}\\ll M_{\\star}$. Thus the deposition rate increases faster than the $SFR$ and it is very sensitive to the amount of gas mass around the sources.\n\nThe previous argument holds as long as the gas remains optically thick. This is warranted by AGB metal\/dust production, which becomes important for stellar ages $t_{\\star}\\sim 100\\,\\rm Myr$ (see Fig. \\ref{fig_gamete_tables}). When combined with the parallel increase of UV photons by the same sources, it is easy to interpret the rapid increase of the radiative feedback efficiency at $t\\simeq500\\,\\rm Myr$, i.e. after $\\simeq 200 \\,\\rm Myr$ from the first star formation events in {Dahlia}. We checked this interpretation by looking at the IR-trapping recorded on the fly during the simulation. We find that on average $f_{\\rm ir}\\simeq 10^{-2}$ for $t\\lsim 500\\, \\rm Myr$, and $f_{\\rm ir}\\simeq 0.1$ at later times, thus confirming our hypothesis.\n\nThe energy deposition rates for different feedback types are highly correlated in time (Pearson coefficients $\\gsim 0.7$). 
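The correlation quoted above can be measured with a standard Pearson estimator on the deposition-rate time series; a minimal sketch (the file names are hypothetical placeholders for the on-the-fly outputs) is:\n\\begin{verbatim}\nimport numpy as np\n\n# hypothetical time series of energy deposition rates (erg\/s), common time sampling\ne_dot_sn = np.loadtxt('edot_sn.txt')\ne_dot_w = np.loadtxt('edot_wind.txt')\ne_dot_rad = np.loadtxt('edot_rad.txt')\n\ncorr = np.corrcoef([e_dot_sn, e_dot_w, e_dot_rad])\nprint(corr)  # off-diagonal entries are the pairwise Pearson coefficients\n\\end{verbatim}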
Such a correlation is partially due to the fact that the same stellar population injects wind, radiation and supernova energy into the gas. Additionally, as we have just seen in the case of AGB stars, different types of feedback are mutually dependent. For example, radiation pressure is more effective when the gas is metal and dust enriched by SN and AGB stars; winds and SN can more efficiently couple with low density gas (longer cooling time).\n\nNote that short and intense peaks in energy deposition rate correspond to the complete disruption of multiple MCs. This occurs following strong SF events in small satellites ($M_{\\rm h} \\sim 10^{7}{\\rm M}_{\\odot}$) that cannot retain the gas and sustain a continuous star formation activity.\n\nFinally, we recall that, when compared with observational\/analytical constraints, the $SFR$ and $M_{\\star}$ of {Dahlia} are higher than the mean, but still consistent within one sigma. We caution that this might imply a somewhat weak feedback prescription.\n\n\\subsubsection{Feedback effects on ISM thermodynamics}\\label{sec_eos}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.485\\textwidth]{plots_pdf\/eos\/eos_dahlia_27.pdf}\n\\includegraphics[width=0.485\\textwidth]{plots_pdf\/eos\/eos_pressure_dahlia_27.pdf}\n\n\\includegraphics[width=0.485\\textwidth]{plots_pdf\/eos\/eos_metal_dahlia_27.pdf}\n\\includegraphics[width=0.485\\textwidth]{plots_pdf\/eos\/metal_vs_density_dahlia_27.pdf}\n\\caption{\nEquation of State of the gas within $\\simeq47.5\\,{\\rm kpc}$ $(3\\, r_{\\rm vir})$ from {Dahlia} centre at $z=6$. Each EOS consists of a mass- or metal-weighted probability distribution function (PDF) as specified by the colourbar. We plot the PDF in the $n$-$T$ plane ({\\bf upper left panel}), in the $n$-$P$ plane ({\\bf upper right panel}), the metal-mass weighted PDF in the $n$-$T$ plane ({\\bf lower left panel}), and the mass-weighted relation between gas $n$ and $Z$ ({\\bf lower right panel}). Mean relations and r.m.s. dispersions are overplotted with solid black and dashed lines, respectively. On the upper horizontal axis of each panel, we indicate the overdensity ($\\Delta$) corresponding to $n$. The density ranges of the rarefied, diffuse and dense phases used in the text are indicated. For the panels on the left, the rarefied gas is additionally divided into \\emph{photo-ionized} ($T<10^{4.5}{\\rm K}$) and \\emph{shock-heated} ($T\\geq10^{4.5}{\\rm K}$). See Tab. \\ref{tagella_eos_riassunto} for a summary of the total values.\n\\label{fig_eos_1}\n}\n\\end{figure*}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lcccc}\n\\hline\\hline\n& mass & rarefied & diffuse & dense \\\\\n\\hline\nGas & $1.3 \\times 10^{10}{\\rm M}_{\\odot}$ & $44\\%$ & $34\\%$ & $22\\%$ \\\\\nMetals & $ 4.2\\times 10^5{\\rm M}_{\\odot}$ & $5\\%$ & $25\\%$ & $70\\%$\\\\\n${\\rm {H_2}}$ & $ 3.6\\times 10^8{\\rm M}_{\\odot}$ & $0\\%$ & $1\\%$ & $99\\%$\\\\\n\\hbox{C~$\\scriptstyle\\rm II $} & $ 2.2\\times 10^5{\\rm M}_{\\odot}$ & $4\\%$ & $22\\%$ & $74\\%$\\\\\n\\hline\n\\end{tabular}\n\\caption{\nSummary of the gas masses for total, metal, \\hbox{C~$\\scriptstyle\\rm II $}, and ${\\rm {H_2}}$~within $\\sim 47.5\\,{\\rm kpc}$ $(3\\, r_{\\rm vir})$ from {Dahlia} centre. In the table, we also report the fraction that is contained in different gas phases\\textsuperscript{\\ref{footnote_phases}}: \\emph{rarefied} ($\\log(n\/{\\rm cm}^3)\\leq -1$), \\emph{diffuse} ($-1<\\log(n\/{\\rm cm}^3)\\leq 1$) and \\emph{dense} ($\\log(n\/{\\rm cm}^3)> 1$). Discussion about gas and metal mass is found in Sec. 
\\ref{sec_eos}; analysis of ${\\rm {H_2}}$~and \\hbox{C~$\\scriptstyle\\rm II $}~is in Sec. \\ref{sec_final_results} (see also App. \\ref{sez_cloudy_model} for \\hbox{C~$\\scriptstyle\\rm II $}~calculation).\n\\label{tagella_eos_riassunto}}\n\\end{table}\n\nFeedback leaves clear imprints in the ISM thermodynamics. For convenience, we classify ISM phases according to their density: we define the gas to be in the \\emph{rarefied}, \\emph{diffuse}, and \\emph{dense} phase if $n \\leq 0.1\\,{\\rm cm}^{-3}$, $0.1 \\leq n\/{\\rm cm}^{-3}\\leq 10$, $n > 10\\,{\\rm cm}^{-3}$, respectively\\footnote{Compared to the definitions used in \\citet{klessen:2014review}, the rarefied corresponds to the warm and hot ionized medium, the diffuse phase to the cold and warm neutral medium and the dense phase to the molecular gas.\\label{footnote_phases}}.\n\nWe focus at $z=6$ and consider the gas in a region within $\\simeq 47.5\\,{\\rm kpc}$ $(3\\, r_{\\rm vir})$ from {Dahlia}'s centre, essentially the scale of the CGM described in Sec. \\ref{sec_CGM}. This region contains a total gas mass of $1.3\\times 10^{10} {\\rm M}_{\\odot}$, and metal mass of $4.2\\times 10^5{\\rm M}_{\\odot}$ (additional data in Tab. \\ref{tagella_eos_riassunto}).\n\nFig. \\ref{fig_eos_1} shows the Equation of State (EOS, or phase diagram) of the gas. The fraction of gas in the rarefied, diffuse and dense phases is $44\\%$, $34\\%$ and $22\\%$; these phases contain $5\\%$, $25\\%$ and $70\\%$ of the metals, respectively. Thus, while the gas mass is preferentially located in the lower density phases, metals are mostly found in dense gas, i.e. star forming regions\/MC. Additionally only $\\sim 30\\%$ of the considered volume shows $Z>10^{-3}{\\rm Z}_{\\odot}=Z_{\\rm floor}$, i.e. it has been polluted by stars in the simulation. We note that the EOS in the $n$-$T$ plane is fairly consistent with the one found in other high-$z$ galaxy simulations \\citep[e.g. see Fig. 5 in][]{wise:2012radpres}. Comparison between the EOS in the $n$-$T$ and $n$-$P$ plane highlights the relative importance of different feedback types.\n\nThe \\emph{rarefied} gas is characterized by long cooling times. Thus, once engulfed by shocks, such phase becomes mildly enriched ($\\langle Z\\rangle \\sim10^{-2}{\\rm Z}_{\\odot}$) and remains hot ($T\\sim10^{6}{\\rm K}$). The enriched rarefied gas preferentially populates the $n\\simeq 10^{-3}{\\rm cm}^{-3}$ and $T\\simeq10^{6.5}{\\rm K}$ region of the phase diagram. However, part of the rarefied gas has $T\\simeq10^{4}{\\rm K}$. This gas component has a temperature set by the equilibrium between adiabatic cooling and the photo-heating by the UV background; it feeds the accretion onto Dahlia, but it is not affected by stellar feedback. As such it is not central in the present analysis.\n\nThe \\emph{dense} gas is mostly unaffected by shocks and it is concentrated in the disk. Typically, such gas has $n\\sim 10^2 {\\rm cm}^{-3}$ and $T\\sim10^{2}{\\rm K}$, thus a thermal pressure $P_{\\rm th}\/k \\sim 10^4 {\\rm cm}^{-3}\\,{\\rm K}$ is expected. However, the total gas pressure is $P\/k \\sim 10^7 \\,{\\rm cm}^{-3}$ K (see the $P$-$n$ EOS). The extra contribution is provided in kinetic form by radiation pressure, thanks to the strong coupling with the gas allowed by the high optical depth of this phase. This leads to the important implication that the central structure of {Dahlia} is radiation-supported (see also Sec. 
\\ref{sec_final_results}).\n\nThe \\emph{diffuse} gas acts as an interface between the dense disk gas and the rarefied gas envelope. Diffuse gas is found both in hot ($T\\sim10^{5}{\\rm K}$) and cold ($T\\sim10^{3}{\\rm K}$) states. The cold part has a sufficiently high mean metallicity, $Z\\sim 0.1{\\rm Z}_{\\odot}$, to allow an efficient cooling of the gas. This is highlighted by the metal-weighted EOS, where we can see that most of the metals present in the diffuse phase are cold.\n\nNote that the phase diagram also shows evidence for the classical 2-phase medium shape for pressures around $P\/k \\sim 10^3 {\\rm cm}^{-3}\\,{\\rm K}$, while at higher (and lower) pressures only one stable phase is allowed; nevertheless, at any given pressure a range of densities can be supported. Such a situation, though, is highly dynamic and does not correspond to a true thermal equilibrium.\n\nA final remark is that by $z=6$ an $n-Z$ correlation is already in place, although considerable scatter is present. The relation gets steeper at large densities, and at the same time the scatter decreases. Such a relation arises from the superposition of the analogous relation for metal bubbles of individual galaxies ({Dahlia} and satellites). The scatter instead results from the fact that the slope of the $n-Z$ relation depends on the $SFR$ history (for an in-depth analysis see \\citetalias{pallottini:2014_sim}). The average $n-Z$ relation found is consistent with the results from $z\\simeq3$ galaxies \\citep{shen:2014}.\n\n\\subsection{Additional ISM properties}\\label{sec_final_results}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_rt_surf_dens_kmt_H2_27.pdf}\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_rt_surf_dens_CII_27.pdf}\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_rt_emission_CII_map_27.pdf}\n\\vspace{.5pt}\n\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_edgeon_rt_surf_dens_kmt_H2_27.pdf}\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_edgeon_rt_surf_dens_CII_27.pdf}\n\\includegraphics[width=0.32\\textwidth]{plots_pdf\/maps\/landscape_c_edgeon_rt_emission_CII_map_27.pdf}\n\\caption{\nFace-on ({\\bf upper panels}) and edge-on ({\\bf lower panels}) $z=6$ {Dahlia} surface maps for ${\\rm {H_2}}$~density ($\\Sigma_{\\rm H2}\/(\\msun\\,{\\rm kpc}^{-2})$, {\\bf left panels}), \\hbox{C~$\\scriptstyle\\rm II $}~density ($\\Sigma_{\\rm CII}\/(\\msun\\,{\\rm kpc}^{-2})$, {\\bf middle panels}), and \\hbox{[C~$\\scriptstyle\\rm II $]}~brightness ($S_{\\rm [CII]}\/(\\lsun\\,{\\rm kpc}^{-2})$, {\\bf right panels}). The scale is $10$~kpc, as in the right-most panels of Fig. \\ref{fig_mappe_hydro} (Sec. \\ref{sec_small_scale}). Note that lower limits for the maps are drawn for visualization purposes ($\\log(\\Sigma_{\\rm H2}\/(\\msun\\,{\\rm kpc}^{-2}))\\simeq \\log(S_{\\rm [CII]}\/(\\lsun\\,{\\rm kpc}^{-2})) \\simeq 5$, $\\log(\\Sigma_{\\rm CII}\/(\\msun\\,{\\rm kpc}^{-2}))\\simeq 2$). Additionally, an average of the maps is plotted in Fig. \\ref{fig_mappe_results_profili}.\n\\label{fig_mappe_tutte}\n}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{plots_pdf\/dahlia_profiles.pdf}\n\\caption{\nRadially-averaged profiles for face-on (full colour) and edge-on (transparent and hatched) views of Dahlia at $z=6$. 
{\\bf Upper left:} \n${\\rm {H_2}}$~surface density; {\\bf upper right:} stellar surface density; {\\bf lower left:} \\hbox{C~$\\scriptstyle\\rm II $}~surface density; {\\bf lower right:} \\hbox{[C~$\\scriptstyle\\rm II $]}~surface brightness.\n\\label{fig_mappe_results_profili}\n}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lccc}\n\\hline\n~ & \\multicolumn{2}{c}{$r_{1\/2}\/{\\rm kpc}$} & approximate\\\\\n~ & face-on & edge-on & value at $r_{1\/2}$\\\\\n\\hline\t\t\t\n${\\rm {H_2}}$\t& 0.59 & 0.36 &\t$10^{8.23}{\\rm M}_{\\odot}$\\\\\n\\hbox{C~$\\scriptstyle\\rm II $}\t& 0.64 & 0.38 &\t$10^{5.14}{\\rm M}_{\\odot}$\\\\\nstars\t& 0.37 & 0.23 &\t$10^{9.89}{\\rm M}_{\\odot}$\\\\\n\\hbox{[C~$\\scriptstyle\\rm II $]}\t& 0.60 & 0.36 &\t$10^{7.25}{\\rm L}_{\\odot}$\\\\\n\\end{tabular}\n\\caption{\nSummary of the effective radius ($r_{1\/2}$) for the ${\\rm {H_2}}$, \\hbox{[C~$\\scriptstyle\\rm II $]}, \\hbox{C~$\\scriptstyle\\rm II $}~and stellar component in {Dahlia} at $z=6$ for the face-on and edge-on case and corresponding values for the mass\/luminosity. For each entry, $r_{1\/2}$ is defined as the radius including half of the mass\/luminosity. The quoted radii have a reference error of $\\pm 0.015\\,{\\rm kpc}$. Note that the approximate values of masses\/luminosity at $r_{1\/2}$ are insensitive to the orientation of the projection (face-on\/edge-on). The full profiles are shown in Fig. \\ref{fig_mappe_results_profili}.\n\\label{tagella_halflightradius}}\n\\end{table}\n\nWe conclude our analysis by inspecting the distribution of two key ISM species, molecular hydrogen and \\hbox{C~$\\scriptstyle\\rm II $}, along with the expected surface brightness of the corresponding $158\\mu$m \\hbox{[C~$\\scriptstyle\\rm II $]}~line. The surface maps of these quantities in Dahlia ($z=6$) are shown in Fig. \\ref{fig_mappe_tutte} for the face-on and edge-on view cases. For reference, in Fig. \\ref{fig_mappe_results_profili} we additionally plot the radially-averaged profiles of the same quantities, and in Tab. \\ref{tagella_halflightradius} we give their typical radial scales.\n\n\\subsubsection{Molecular Hydrogen}\n\n{Dahlia} has a total ${\\rm {H_2}}$~mass $M_{\\rm H2}\\simeq 3.6\\times 10^8{\\rm M}_{\\odot}$, that is mainly concentrated in a disk-like structure of radius $\\simeq 0.6\\,{\\rm kpc}$ and scale height $\\simeq 200\\,{\\rm pc}$, with a sharp cut off beyond these scales\\footnote{Such scales are calculated by using the principal component analysis of the ${\\rm {H_2}}$~distribution around the galaxy.}. The disk has mean surface density $\\langle\\Sigma_{\\rm H2}\\rangle \\simeq 10^{7.5}\\msun\\,{\\rm kpc}^{-2}$, that is approximately constant with radius and presents perturbed spiral arms along which the density is enhanced by a factor $\\simeq 3$. The spiral arms are less pronounced than in a more massive, MW-like galaxy (see \\citetalias{semenov:2015} and \\citealt{ceverino:2015}). This trend with mass has already been pointed out by \\citet{ceverino:2010MNRAS}.\n\nThe disk is composed by dense ($n\\gsim 25\\,{\\rm cm}^{-3}$), enriched ($Z\\simeq 0.5\\,{\\rm Z}_{\\odot}$), radiation-pressure supported gas, as already discussed. It is fed by frequent mergers driving fresh gas to the centre, and supports a star formation rate per unit area of $\\simeq 15\\,{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\,{\\rm kpc}^{-2}$, i.e. more than 1000 times the Milky Way value. 
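For reference (our estimate, not a value from the text): adopting a Milky Way star formation rate of $\\sim 1$--$2\\,{\\rm M}_{\\odot}\/{\\rm yr}$ spread over a star forming disk of radius $\\sim 10\\,{\\rm kpc}$ gives $\\Sigma_{\\rm SFR, MW}\\sim {\\rm few}\\times 10^{-3}\\,{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\,{\\rm kpc}^{-2}$, so the quoted contrast of more than $10^{3}$ follows directly.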
Fragmentation of the disk is relatively weak \\citep[cfr.][]{mayer:2016}, as indicated by smooth surface density map, and also paralleling the flat metallicity profile in the inner $\\simeq\\,{\\rm kpc}$.\nFor the fragmentation of the ${\\rm {H_2}}$~component, we caution that this result has been obtained assuming a uniform UV interstellar field; stronger fragmentation in the ${\\rm {H_2}}$~distribution may occur when accounting for local radiation sources: Lyman-Werner photons from these sources might in fact locally dissociate the ${\\rm {H_2}}$~by generating pockets of \\HI~in the distribution.\n\nWhile most of the ${\\rm {H_2}}$~gas resides in the disk, we can clearly distinguish 3 clumps of molecular gas both in the face-on and edge-on maps. These clumps are located few kpc away from the centre, and are characterized by sizes of $\\sim 150\\,{\\rm pc}$ and $M_{\\rm H2} \\sim 5\\times10^6 {\\rm M}_{\\odot}$. Such clumps are Jeans-unstable and form stars as they infall and stream through the CGM, as it can be appreciated by comparing ${\\rm {H_2}}$~and stellar mass profiles (Fig. \\ref{fig_mappe_results_profili}). \nThe stellar mass profiles also highlight the presence of 3 stellar clumps at $r\\sim 1\\,{\\rm kpc}$ with no associated ${\\rm {H_2}}$. These \\quotes{older} clumps share the same nature of the previous ones, but the ${\\rm {H_2}}$~has been already consumed and\/or dispersed by the star formation activity that produced the stars present at $z=6$.\n\n\\subsubsection{Singly ionized Carbon}\n\nThe \\hbox{C~$\\scriptstyle\\rm II $}~abundance is calculated by post-processing the simulation outputs with the photoionization code \\textlcsc{cloudy} (\\citealt{cloudy:2013}, and see App. \\ref{sez_cloudy_model}). The result is shown in Fig. \\ref{fig_mappe_tutte}. {Dahlia} contains a \\hbox{C~$\\scriptstyle\\rm II $}~mass of $M_{\\rm CII} = 2.2\\times 10^5 {\\rm M}_{\\odot}$, accounting for $\\sim 50\\%$ of the total metals produced. About 74\\% of the \\hbox{C~$\\scriptstyle\\rm II $}~mass is located in the dense phase, $22\\%$ in the diffuse phase, $4\\%$ in the rarefied phase. Note that the \\hbox{C~$\\scriptstyle\\rm II $}~mass phase distribution differs only for $\\lsim 10\\%$ from the $Z$ distribution (see Tab. \\ref{tagella_eos_riassunto}). The difference arises because shock-heated gas can be collisionally excited to higher ionization states. Thus, to first order, we expect the \\hbox{C~$\\scriptstyle\\rm II $}~spatial distribution to follow the metallicity one.\n\nThe face-on \\hbox{C~$\\scriptstyle\\rm II $}~surface density has a central maximum ($\\Sigma_{\\rm CII} \\sim 10^5\\msun\\,{\\rm kpc}^{-2}$), it gradually decreases to up to $\\simeq 1.2\\,{\\rm kpc}$, and drastically drops to $\\Sigma_{\\rm CII} \\lsim 10^2\\msun\\,{\\rm kpc}^{-2}$ beyond that radius (see also Fig. \\ref{fig_mappe_results_profili}). Thus, most of the \\hbox{C~$\\scriptstyle\\rm II $}~is located into the disk, but a more extended envelope containing a sizable fraction of mass exists. On top of this smooth distribution, there are \\hbox{C~$\\scriptstyle\\rm II $}~enhancements corresponding to the ${\\rm {H_2}}$~clumps described above. \n\nThe \\hbox{C~$\\scriptstyle\\rm II $}~profile is similar for edge-on and face-on case. However, the edge-on has a higher \\hbox{C~$\\scriptstyle\\rm II $}~central density and a steeper slope. While the higher central value is obviously due to the larger column density encountered along the disk, the sharp drop is related to metal transport. 
As most of the star formation activity is located in the disk, metals above it can be only brought by outflows which become progressively weaker with distance. Metal outflows originating from the centre are preferentially aligned with the rotation axis, and the pollution region starting from the edge is stretched by the disk rotation and by tidal interaction with satellites.\n\n\\subsubsection{Emission from singly ionized carbon}\n\nWe finally compute the expected \\hbox{[C~$\\scriptstyle\\rm II $]}~line emission using the same prescriptions of \\citet{Vallini:2013MNRAS,vallini:2015}, as detailed in App. \\ref{sez_cloudy_model}. Note that for the present work we assume uniform UV interstellar radiation. This approximation is valid in the MW, where variations around the mean field value are limited to a factor of 3. The results are plotted in Fig. \\ref{fig_mappe_tutte}.\n\nWithin $1\\,{\\rm kpc}$ from the centre the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission structure closely follows the \\hbox{C~$\\scriptstyle\\rm II $}~distribution, and we find $S_{\\rm [CII]}\/{\\rm L}_{\\odot} \\simeq 200\\, \\Sigma_{\\rm CII}\/{\\rm M}_{\\odot}$. At larger radii the \\hbox{[C~$\\scriptstyle\\rm II $]}~surface brightness suddenly drops, although the peaks associated with ${\\rm {H_2}}$~clumps are preserved. This result holds both for the face-on and edge-on cases. \n\nSuch behavior can be understood as follows. Take a typical MC with $n = 10^2{\\rm cm}^{-3}$, $Z={\\rm Z}_{\\odot}$, and total mass $M$. Its \\hbox{[C~$\\scriptstyle\\rm II $]}~\nluminosity is $L_{[\\rm CII]}\/{\\rm L}_{\\odot} \\simeq 0.1 (M\/{\\rm M}_{\\odot})$ \\citep[]{vallini:2016a,goicoechea:2015apj}. Also, the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission is $\\propto Z\\,n$ for $n\\lsim 10^3$, i.e. the critical density for \\hbox{C~$\\scriptstyle\\rm II $}~collisional excitation by H atoms \\citep{Vallini:2013MNRAS}. Then, \n\\be\\label{eq_stima_luminosita}\nL_{[\\rm CII]} \\simeq 0.1\\, \\left({n\\over 100\\, {\\rm cm}^{-3}}\\right) \\left({Z\\over {\\rm Z}_{\\odot}}\\right) \\left({M\\over {\\rm M}_{\\odot}}\\right)\\,{\\rm L}_{\\odot}\\,.\n\\ee\nIn the central kpc, where $n \\simeq 10^2{\\rm cm}^{-3}$ and $Z\\simeq {\\rm Z}_{\\odot}$, the luminosity depends only on the molecular mass contained in the disk, and the same holds even for ${\\rm {H_2}}$~clumps outside the disk. The envelope is instead more diffuse ($n\\lsim 10{\\rm cm}^{-3}$) and only mildly enriched ($Z\\lsim10^{-1}{\\rm Z}_{\\odot}$). As a result, its \\hbox{[C~$\\scriptstyle\\rm II $]}~luminosity per unit mass is lower. \n\nThe emission from this diffuse component is further suppressed by the CMB \\citep[][]{dacunha:2013apj,pallottini:2015_cmb,vallini:2015}. Namely, for gas with $n\\lsim 0.1\\,{\\rm cm}^{-3}$, the upper levels of the \\hbox{[C~$\\scriptstyle\\rm II $]}~transition cannot be efficiently populated through collisions, thus the spin temperature of the transition approaches the CMB one, and to a first order the gas cannot be observed in emission.\n\nIn summary, $\\simeq 95\\%$ of {Dahlia} \\hbox{[C~$\\scriptstyle\\rm II $]}~emission comes from dense gas located in the ${\\rm {H_2}}$~disk. Indeed, the \\hbox{[C~$\\scriptstyle\\rm II $]}~half light radius coincides with the ${\\rm {H_2}}$~half mass radius, i.e. $0.59\\,{\\rm kpc}$ ($0.36\\,{\\rm kpc}$) in the face-on (edge-on) case (see also Tab. \\ref{tagella_halflightradius}). 
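The estimate of eq. \\ref{eq_stima_luminosita} is simple enough to be applied directly to individual clumps or to the gas within the half-light radius quoted above; a one-line implementation (the clump values in the example are illustrative) is:\n\\begin{verbatim}\ndef l_cii(n, Z, M):\n    # [CII] luminosity in Lsun: n in cm^-3 (valid for n <~ 1e3), Z in Zsun, M in Msun\n    return 0.1 * (n \/ 100.0) * Z * M\n\n# e.g. a typical MC as quoted in the text: n = 1e2 cm^-3, Z = Zsun, M = 1e6 Msun\nprint(l_cii(1e2, 1.0, 1e6))  # ~1e5 Lsun\n\\end{verbatim}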
Within such radius, the molecular gas has a mass $M_{\\rm H_{2}}\\simeq 1.69\\times10^8{\\rm M}_{\\odot}$ and the luminosity is $L_{\\rm CII}\\simeq 1.78\\times10^7{\\rm L}_{\\odot}$, i.e. a \\hbox{[C~$\\scriptstyle\\rm II $]}-${\\rm {H_2}}$~scaling ratio ($L_{\\rm [CII]}\/M_{\\rm H2}\\simeq 0.105\\,{\\rm L}_{\\odot}\/{\\rm M}_{\\odot}$) consistent to within $15\\%$ with the simple estimate in eq. \\ref{eq_stima_luminosita}.\n\nDahlia has a total \\hbox{[C~$\\scriptstyle\\rm II $]}~luminosity $L_{\\rm CII} \\simeq 3.5\\times 10^{7}{\\rm L}_{\\odot}$; this is fainter than expected on the basis of the local \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$ relation \\citep[$L_{\\rm CII}\\sim 10^8 - 10^9 {\\rm L}_{\\odot}$, i.e.][]{delooze:2014aa}. However, at high-$z$, such a relation seems to hold only for a small subset of the observed galaxies \\citep[e.g.][]{capak:2015arxiv,Willott:2015arXiv15}. The majority of the observed galaxies show a strong \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$~deficit, when considering both detections \\citep[e.g. BDF3299, A383-5.1][]{maiolino:2015arxiv,knudsen:2016arxiv} and upper limits \\citep[e.g. Himiko, IOK1, MS0451-H][]{ouchi2013,ota:2014apj,knudsen:2016arxiv}.\n\nFor {Dahlia}, the \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$~deficit depends on multiple factors. The main contribution to the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission comes from the ${\\rm {H_2}}$~disk, which on average has $\\langle Z\\rangle\\simeq 0.5 {\\rm Z}_{\\odot}$, i.e. slightly lower than solar. Additionally, the gas in the disk is efficiently converted into stars ($SFR\\simeq100{\\rm M}_{\\odot}\/{\\rm yr}$) and has $\\langle n\\rangle\\simeq 25\\,{\\rm cm}^{-3}$, thus the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission is hindered (eq. \\ref{eq_stima_luminosita}). Finally, there is a marginal contribution to \\hbox{[C~$\\scriptstyle\\rm II $]}~from the diffuse and rarefied phases: $\\simeq30\\%$ of the \\hbox{C~$\\scriptstyle\\rm II $}~is locked in low density, low metallicity gas that gives a negligible contribution to the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission, particularly because of CMB suppression.\n\n\\section{Summary and discussion}\\label{sec_conclusioni}\n\nWith the aim of characterizing the internal properties of high-$z$ galaxies, we have performed an AMR zoom-in simulation of \\quotes{Dahlia}, a $z\\simeq6$ galaxy with a stellar mass of $M_{\\star}=1.6\\times10^{10}{\\rm M}_{\\odot}$, therefore representative of LBGs at that epoch. We follow the zoom-in region with a gas mass resolution of $10^{4}{\\rm M}_{\\odot}$ and a spatial resolution of $30\\,{\\rm pc}$.\n\nThe simulation contains a rich set of physical processes. We use a star formation prescription based on an ${\\rm {H_2}}$-dependent Schmidt-Kennicutt relation. The ${\\rm {H_2}}$~abundance is computed from the \\citetalias{krumholz:2009apj} model (Fig. \\ref{fig_kmt_test}). Using stellar evolutionary models \\citep{padova:1994,starburst99:1999}, we include chemical, radiative and mechanical energy inputs, accounting for their time evolution and metallicity dependence on the stellar population properties (Fig. \\ref{fig_gamete_tables}). We include feedback from SN, winds and radiation pressure with a novel, physically motivated coupling scheme between gas and stars. 
We also compute \\hbox{C~$\\scriptstyle\\rm II $}~abundance and the $158\\mu$m \\hbox{[C~$\\scriptstyle\\rm II $]}~emission, by post-processing the outputs with \\textlcsc{cloudy} \\citep{cloudy:2013}, and a FIR~emission model drawn from radiative transfer numerical simulations \\citep{Vallini:2013MNRAS,vallini:2015}.\n\nThe main results can be summarized as follows:\n\n\\begin{itemize}\n\n\\item[\\bf 1.] {Dahlia} sits at the centre of a cosmic web knot, and accretes mass from the intergalactic medium mainly via 3 filaments of length $\\simeq 100\\,{\\rm kpc}$ (Fig. \\ref{fig_mappe_hydro}). Dahlia has $\\sim 6$ major satellites ($M_{\\star}\\lsim 10^{9}{\\rm M}_{\\odot}$) and is surrounded by $\\sim 10$ minor ones ($M_{\\star}\\sim 10^{5}{\\rm M}_{\\odot}$). The latter represent molecular cloud (MC) complexes caught in the act of condensing as the gas streams through the circumgalactic medium (Fig. \\ref{fig_sph_profile}). {Dahlia} dominates both the stellar mass ($M_{\\star}\\sim 10^{10}{\\rm M}_{\\odot}$) and the SFR of the galaxy ensemble ($SFR\\simeq 100\\,{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}$, Fig. \\ref{fig_sfr_smf_energy}).\n\n\\item[\\bf 2.] Only a small fraction of the available energy produced by stars couples to the gas, as energy is mostly dissipated within MCs where the stars reside. Radiation dominates the feedback energy budget by a factor $> 100$ (Fig. \\ref{fig_feedback_vs_time}). \n\n\\item[\\bf 3.] By $z=6$ {Dahlia} forms a ${\\rm {H_2}}$~disk of mass of $M_{\\rm H2}= 3.6\\times 10^{8}{\\rm M}_{\\odot}$, effective radius $0.6\\,{\\rm kpc}$, and scale height $200\\,{\\rm pc}$ (Fig. \\ref{fig_mappe_tutte}). The disk is dense ($n\\gsim 25\\,{\\rm cm}^{-3}$), enriched ($Z\\simeq 0.5\\,{\\rm Z}_{\\odot}$), and it is fed by frequent mergers driving fresh gas to the centre, and supports a star formation rate per unit area of $\\simeq 15\\,{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\,{\\rm kpc}^{-2}$. \n\n\\item[\\bf 4.] The disk is mostly unaffected by SN shocks, and it is pressure-supported by radiation. SN\/winds drive hot metal outflows (Fig. \\ref{fig_eos_1}), that are either preferentially aligned with the galaxy rotation axis, or start at the edge of the disk.\n\n\\item[\\bf 5.] The total \\hbox{[C~$\\scriptstyle\\rm II $]}~luminosity of {Dahlia} is $10^{7.55}{\\rm L}_{\\odot}$, and $\\simeq 95\\%$ of the emission is co-located with the ${\\rm {H_2}}$~disk (Fig. \\ref{fig_mappe_results_profili}). The diffuse, enriched material surrounding {Dahlia} contains $30\\%$ of the \\hbox{C~$\\scriptstyle\\rm II $}~mass, but it negligibly contributes to the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission (Fig. \\ref{fig_mappe_tutte}) due to its low density ($n\\simeq 10\\,{\\rm cm}^{-3}$) and metallicity ($Z\\simeq10^{-1}{\\rm Z}_{\\odot}$). {Dahlia} is under-luminous with respect to the local \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$ relation; however, its luminosity is consistent with upper limits derived for most $z\\sim6$ galaxies. \n\\end{itemize}\n\nWe find clear indications that the SF subgrid prescription might considerably affect the \\hbox{[C~$\\scriptstyle\\rm II $]}-$SFR$ relation and the ISM structure, as noted also by \\citep{hopkins:2013arxiv}. This is because stars form in gas of different densities depending on the chosen prescription. \nIn our simulation gas is converted into stars with an efficiency $\\zeta_{\\rm sf}\\,f_{\\rm H2}$, where the ${\\rm {H_2}}$~fraction is computed from the \\citetalias{krumholz:2009apj} model and we set $\\zeta_{\\rm sf}=0.1$. 
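A schematic sketch of how such an ${\\rm {H_2}}$-based prescription can be evaluated on a per-cell basis is given below (assuming the standard free-fall time and a mean molecular weight $\\mu\\simeq 1.22$; here $f_{\\rm H2}$ is treated as an input, whereas in the simulation it follows the \\citetalias{krumholz:2009apj} model):\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-8              # gravitational constant, cgs\nMU_MP = 1.22 * 1.673e-24  # assumed mean molecular weight times proton mass, g\n\ndef sfr_density(n, f_h2, zeta_sf=0.1):\n    # star formation rate density (g cm^-3 s^-1): zeta_sf * f_H2 * rho \/ t_ff\n    rho = MU_MP * n                                  # g cm^-3, with n in cm^-3\n    t_ff = np.sqrt(3.0 * np.pi \/ (32.0 * G * rho))   # free-fall time, s\n    return zeta_sf * f_h2 * rho \/ t_ff\n\n# e.g. disk-like gas: n ~ 25 cm^-3 with f_H2 ~ 0.5 (illustrative numbers)\nprint(sfr_density(25.0, 0.5))\n\\end{verbatim}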
In \\citetalias{semenov:2015} the SF follows a \\textit{total} (i.e. not molecular) density Schmidt-Kennicutt relation. Further, the SF efficiency depends on the free-fall time and the turbulent eddy turnover time. The SF relation is derived from an empirical fit to MC simulations \\citep{padoan:2012}, with no notion of the local metallicity. \n\nInterestingly, although the approaches are considerably different, the resulting efficiencies are compatible: in \\citetalias{semenov:2015} the bulk of the star forming gas has $n\\sim 10^{1.5}{\\rm cm}^{-3}$, as in {Dahlia} (Fig. \\ref{fig_cfr_semenov}). However, with respect to \\citetalias{semenov:2015}, Dahlia misses part of the very dense, star forming gas, and its corresponding contribution to \\hbox{[C~$\\scriptstyle\\rm II $]}~from $Z\\sim{\\rm Z}_{\\odot}$ MCs with $n\\sim10^{3}{\\rm cm}^{-3}$. These MCs are expected to have high \\hbox{[C~$\\scriptstyle\\rm II $]}~fluxes (see eq. \\ref{eq_stima_luminosita}), but their abundance might be low \\citep{padoan:2012}.\nFurther investigation is needed before we draw any solid conclusion. To this aim, we plan to upgrade our simulations to a more sophisticated non-equilibrium ${\\rm {H_2}}$~evolution model. This is because the chemical equilibrium assumed in \\citetalias{krumholz:2009apj} does not hold in low-metallicity regimes. \n\nAnother important caveat is that we have assumed a uniform UV background. Instead, discrete sources (stellar clusters) might have a strong impact on star formation. For example, Lyman-Werner photons might locally dissociate the ${\\rm {H_2}}$~by generating pockets of \\HI~in the gas distribution. Thus, unshielded (low dust column density) gas in the disk would contribute only marginally to the SFR.\n\nFurthermore, a uniform UVB assumption likely leads to an inaccurate computation of the ISM thermodynamic state. We find that $Z\\simeq 10^{-3}{\\rm Z}_{\\odot}$ gas with $n\\gsim 10^{2}\\,{\\rm cm}^{-3}$ has $T\\simeq 10^{4}\\,{\\rm K}$ (Fig. \\ref{fig_eos_1}), with the temperature being set by the UVB heating. However, such gas should likely be able to self-shield from the impinging UVB, whereas internal radiation sources could still play a role \\citep[e.g.][]{gnedin:2010}. \n\nFinally, local FUV flux variations can change the \\hbox{[C~$\\scriptstyle\\rm II $]}~emission from individual regions of the galaxy. Also, very high FUV fluxes can photoevaporate MCs on short time scales ($\\lsim t_{\\rm ff}$ for gas with $Z\\sim 10^{-2}{\\rm Z}_{\\odot}$, \\citealt{vallini:2016a}). This effect is particularly important, as it might be responsible for the displacement between the \\hbox{[C~$\\scriptstyle\\rm II $]}~and the UV emitting region observed in BDF3299 \\citep{maiolino:2015arxiv}, and in some of the \\citet{capak:2015arxiv} galaxies. To solve these problems, a multi-frequency radiative transfer computation must be coupled to the present simulations. This work is ongoing and will be presented elsewhere. \n\n\n\n\\section*{Acknowledgments}\nWe are grateful to the participants of \\emph{The Cold Universe} program held in 2016 at the KITP, UCSB, for discussions during the workshop. \nWe acknowledge the {\\tt AGORA} project members and the {\\tt DAVID} group for stimulating discussion. \nWe thank the authors and the community of \\textlcsc{pymses} for their work. \nWe thank B. Smith for support in implementing \\textlcsc{grackle}. \nThis research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.\nS.S. 
was supported by the European Research Council through a Marie-Skodolowska-Curie Fellowship, project PRIMORDIAL-700907.\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\n\n Support Vector Machines (SVMs) are one of the most widely used algorithms for classification problems. Originally proposed in the works of Boser et al~\\cite{boser1992training} and Cortes and Vapnik~\\cite{cortes1995support}, they can be defined as learning machines which construct an $n$-dimensional decision boundary surface (also called a hyperplane) that optimally separates data into positive and negative classes by maximizing the margin of separation between them. \n \n Contrary to artificial neural networks, which provide a local minimum solution to the optimization problem, SVMs provide a unique globally optimal solution for the margin separation problem, which is addressed through the application of a kernel-based learning method. In this context, a kernel is understood as a similarity function that is applied to each data point to map the original non-linear observations into a higher dimensional space where the observations may become linearly separable. A wide range of different kernels have been proposed in the literature, targeting specific classification problems~\\cite{Haykin08}. The Gaussian kernel (also referred to as the Radial Basis Function kernel (RBF)), is probably the most widely used kernel demonstrating 'state of the art' performance in a variety of classification problems~\\cite{Bhattacharyya11}. \\\\\n\\indent More recently, new mappings between non-linear separable observations and higher dimensional feature spaces have been proposed with the purpose of extending the capabilities of SVMs towards theoretically feasible quantum-inspired machine learning algorithms~\\cite{schuld2019quantum,schuld2019machine,havlivcek2019supervised,mehta2019high,killoran2019strawberry,killoran2018continuous,srinivasan2018learning,adhikary2019supervised,bartkiewicz2019experimental}. For instance, under a quantum theoretical perspective, by mapping data into coherent states which are a superposition of eigen-functions of a quantum harmonic oscillator with minimum Heisenberg uncertainty, the RBF kernel can be understood as the inner product of two coherent states \\cite{kubler2019quantum}. The coherent state of this harmonic oscillator comprises the following properties:\n\\begin{enumerate*}[label=({\\it \\roman*})]\n \\item It is obtained by the displacement operators on the ground state;\n \\item It is an eigenfunction of the annihilation operator;\n \\item It satisfies the minimum uncertainty relation, i.e., $\\Delta(\\mathbf{x})=\\Delta(\\mathbf{p})=\\sigma\/\\sqrt{2}$, in which $\\Delta(\\mathbf{x})$ and $\\Delta(\\mathbf{p})$ are respectively the variance of the position and momentum of the harmonic oscillator;\n \\item It is over-complete.\n\\end{enumerate*}\n\\\\\n\\indent \nThe over-completeness property implies an arbitrary function can be expressible as a linear combination of kernel functions in a \"reproducing Hilbert space\" \\cite{combescure2012coherent}. Any of the first three above-mentioned properties lead to a definition of generalized coherent states, although property ({\\it iv}) is necessary for the definition of coherent states. 
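As a concrete illustration of the statement above that the RBF kernel arises as a coherent-state inner product (a standard result for canonical, i.e. Glauber, coherent states, not specific to the present work), one has\n\\[\n\\langle z|z^{\\prime}\\rangle=\\exp\\left(-\\tfrac{1}{2}|z|^{2}-\\tfrac{1}{2}|z^{\\prime}|^{2}+\\bar{z}z^{\\prime}\\right),\n\\]\nwhich, for a real-valued encoding $z=x\/\\sigma$, reduces to $\\exp\\left(-(x-x^{\\prime})^{2}\/2\\sigma^{2}\\right)$, i.e. the Gaussian kernel.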
For example, while a Gazeau-Klauder coherent state is defined by property ({\\it ii}) and fulfils property ({\\it iii}), displacement-type coherent states are obtained by displacement operators on reference states \\cite{ali2000coherent}.\n\\indent \nRecently, Schuld and Killoran proposed to map data from the original space to a feature space by using squeezed coherent states \\cite{schuld2019quantum}. Squeezed states are coherent states that saturate the Heisenberg uncertainty principle in such a way that the variance of position and momentum depend on a so-called squeezed parameter. Therefore, the reduced uncertainty is one of its quadrature components, while increased uncertainty is the latter, i.e., $\\Delta(\\mathbf{x})=\\exp{\\zeta}\/\\sqrt{2}$ and $\\Delta(\\mathbf{p})=\\exp{-\\zeta}\/\\sqrt{2}$, where $\\zeta$ is the squeezing parameter. The squeezing parameter controls uncertainty via a quadrature component, while the third property of coherent states are preserved.\\\\\n\\indent \nGiven the large number of kernel functions currently being proposed in the literature, the question naturally arises as to which kernel function to apply. In SVM-based classification problems, the appropriate choice of a kernel is fundamental, however, the current 'trial-and-error' nature of selecting the best kernel poses significant challenges, especially when one considers kernels that can support both classical and quantum-inspired machine learning algorithms, which renders the kernel choice problem an open research question \\cite{Ali06}.\\\\\n\\indent To address this problem (and taking as basis the work of Schuld and Killoran on squeezed coherent states~\\cite{schuld2019quantum}), we propose a generalised meta-kernel from which the RBF kernel (and other kernels) can be derived by using a deformed Weyl-Heisenberg (dW-H) algebra, dependent on a parameter $\\alpha \\in \\mathbb{R}$. By applying the associated displacement operator on the reference state, the non-linear coherent state is generated by considering the specific value of the parameter, i.e. $\\alpha=2$, $\\alpha=0$, and $\\alpha=-2$, $SU(2)$, W-H, and $SU(1,1)$-coherent state are respectively recovered. \\cite{dehdashti2013coherent,dehdashti2015realization,castillo2019polynomial}. \nThe choice of $\\alpha$ allows a specific kernel function to be defined. Therefore, the theory of coherent states can be seen as providing a meta-kernel from which kernel functions can be derived. To the best of our knowledge, no such theory of meta-kernels presently exists.\n \n By means of a feature mapping, data is mapped into the feature space, represented by the deformed coherent space. Schematically, this is illustrated in Figure \\ref{fig1}. Geometrically, the feature space constructed by the dW-H coherent state is a surface of revolution with constant curvature, i.e., the surfaces associated with $\\alpha-SU(2)$ and $\\alpha-SU(1,1)$ are respectively a positive compact surface, and negative surface, while $\\alpha=0$ produces a flat surface. Therefore, a kernel function defined in any one of the configurations is an inner product of two elements on the related surface. 
Through this process, our dW-H algebra acts like a meta-theory from which a new class of two parameter non-linear kernel functions can be derived.\n\\begin{figure}[t]\n \\centering\n \n \\includegraphics[scale=.55]{fig1.pdf}\n \\caption{Schematic representation of the SVM method based on the Weyl-Heisenberg algebra, showing the mapping of data into the feature space, represented by the dW-H coherent state.}\n \\label{fig1}\n\\end{figure}\n\n\nThe paper will proceed as follows. In Section \\ref{coherent_states}, we provide a brief introduction to dW-H coherent states which are then expressed in kernel functions. We also describe the geometric properties of the feature spaces in which these kernel functions are defined. In Section \\ref{test_design}, a test design is formulated for an illustrative evaluation of the introduced kernel functions, from the standpoint of enrichment of Gaussian strategies in SVM classification. \nAccompanying empirical results are presented, along with visualisations that aid descriptions of relevant observations. Section \\ref{discussion} discusses the benefits of the algebra within SVM classification based on the results of the empirical evaluation.\n\\section{Deriving kernel functions from deformed Coherent States}\n\\label{coherent_states}\nA supervised machine learning (ML) classification problem can be formalised in the following way. Given a set of $N$ training examples $\\{ (x_{1}, y_{1}), (x_{2}, y_{2}), \\cdots, (x_{N}, y_{N})\\}$, where $x_{i}$ corresponds to the $ith$ training example, each training example is represented by a set of input features, and $y_{i}$ corresponds to the 'ground truth' label of the training example $x_{i}$, the objective of ML is to learn a model, $h(x)$, that represents the training set. Ideally, the outcome is to generate the model that is most capable of correctly predicting the class labels of unseen instances. \n\n\nOne way of predicting unseen examples is through the application of a similarity function, a \\textit{kernel}, between the unseen input instance ${x^{\\prime}}$ and each of the training inputs, ${x_i}$, learned during the training phase.\n\nKernel methods, $K(x, x^{\\prime})$, use the inner product between any two inputs $x, x^{\\prime} \\in \\mathcal{X}$, as distance measures in order to construct models that capture the properties of a data distribution. These distance measures can be defined in a feature space $\\mathcal{X}$, depending on whether the data is linear, or non-linearly separable.\n\nThe left side of Figure \\ref{fig1} schematically represents this process. One can define a complex Hilbert space as the feature space, where a feature mapping $\\phi: \\mathcal{X}\\rightarrow \\mathcal{F}$, in which $\\mathcal{F}$ is a complex Hilbert state, $\\phi: x \\rightarrow |\\phi(x) \\rangle $, implies a kernel function can be defined as $K(x, x^{\\prime} ) =\n \\langle\\phi(x), \\phi(x^{\\prime}) \\rangle$. By operating a dW-H algebra on the reference state, the feature space is constructed with deformed coherent states. These coherent states depend on the attributed sign of the parameter $\\alpha$, meaning `positivity' or `negativity' defines an $\\alpha-SU(2)$ coherent state or $\\alpha-SU(1,1)$ coherent state respectively; while $\\alpha=0$ stands for a harmonic oscillator coherent state ( see Figure \\ref{fig1}). For the sake of simplicity, we define dW-H coherent states for positive and negative values, separately. 
The first is titled the $\\alpha-SU(2)$ coherent state, defined as follows:\n \\begin{eqnarray}\\label{eq3}\n|x;z,\\alpha\\rangle\n&=&\\left[1+\\tan^{2}\\left(z\\sqrt{\\frac{\\alpha}{2}}\\right)\\right]^{-k}\\sum_{m=0}^{2k} (-1)^{m} e^{-imx}\\nonumber\\\\\n&\\times&\\tan^{m}\\left(z\\sqrt{\\frac{\\alpha}{2}}\\right) \\sqrt{\\frac{(2k)!}{m!(2k-m)!}}|k,m\\rangle,\n\\end{eqnarray} \n in which $\\alpha \\geq 0$, and $k \\in N$. The second is named the $\\alpha-SU(1,1)$ coherent state:\n \\begin{eqnarray}\\label{eq2}\n|x;z,\\alpha\\rangle&=&\\left[1-\\tanh^{2}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right)\\right]^{k}\\sum_{m=0}^{\\infty} (-1)^{m} e^{-imx}\\nonumber\\\\\n&\\times&\\tanh^{m}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right) \\sqrt{\\frac{\\Gamma[2k+m]}{m!\\Gamma[2k]}}|k,k+m\\rangle,\n\\end{eqnarray} \nwhere $\\alpha\\leq 0$, and $k\\in N$. \n Note that in the case of $\\alpha = \\pm 2$, coherent states (\\ref{eq3}) and (\\ref{eq2}), respectively reduce $SU(2)$ and $SU(1,1)$ coherent states. It was also shown that if $\\alpha $ approaches zero, both coherent states (\\ref{eq3}) and (\\ref{eq2}) reduce to the harmonic oscillator coherent states \\cite{dehdashti2013coherent} , i.e.,\n \\begin{eqnarray}\n |z\\rangle=e^{-|z|^{2}}\\sum_{m=0}^{\\infty} \\frac{z^{m} }{\\sqrt{m!}}|m\n \\rangle.\n \\end{eqnarray}\n \\indent Hence, by considering a multi-dimensional input set in a data set of vectors $\\mathbf{x} = (x_{1},\\cdots,x_{N})^{T} \\in \\mathbb{R}^{N}$, one can define the joint state of $N$ deformed coherent states,\n \\begin{align*}\n & \\phi: (x_{1},\\cdots,x_{N}) \\rightarrow \\\\ & |x_{1},z,\\alpha\\rangle\\otimes |x_{2},z,\\alpha\\rangle \\otimes \\cdots \\otimes |x_{N},z,\\alpha\\rangle. \\numberthis\n \\end{align*}{}\n Therefore, the kernel is defined as the following:\n \\begin{eqnarray}\n K(\\mathbf{x},\\mathbf{x^{\\prime}})= \\prod_{i=1}^{N}\\langle x_{i};z,\\alpha | \n x_{i}^{\\prime};z,\\alpha \\rangle.\n \\end{eqnarray}\nIn the case of $\\alpha-SU(2)$ feature space, the kernel is obtained as follows:\\\\\n\\begin{tcolorbox}[boxsep=2pt,left=2pt,right=2pt,top=4pt,bottom=4pt]\n\\bf{$\\bf{\\alpha-SU(2)}$ Kernel Function}\n\\begin{eqnarray}\nK(\\mathbf{x},\\mathbf{x^{\\prime}})= \\prod_{i=1}^{N} \\left[\\frac{1+\\tan^{2}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right)e^{i(x_{i}-x^{\\prime}_{i})}}{1+\\tan^{2}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right)}\\right]^{2k},\n\\end{eqnarray}\n\\end{tcolorbox}\nMoreover, in the case of $\\alpha-SU(1,1)$ feature space, the kernel is given as follows:\n\\begin{tcolorbox}[boxsep=2pt,left=2pt,right=2pt,top=4pt,bottom=4pt]\n\\bf{$\\bf{\\alpha-SU(1,1)}$ Kernel Function}\n\\begin{eqnarray}\nK(\\mathbf{x},\\mathbf{x^{\\prime}})= \\prod_{i=1}^{N} \\left[\\frac{1-\\tanh^{2}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right)}{1-\\tanh^{2}\\left(z\\sqrt{\\frac{|\\alpha|}{2}}\\right)e^{i(x_{i}-x^{\\prime}_{i})}}\\right]^{2k}.\n\\end{eqnarray}\n\\end{tcolorbox}\n\\indent For understanding the role of $\\alpha$ and $k$, we study the geometrical properties of the above-mentioned feature spaces. 
We can define the line element of the feature space by using the Fubini\u2013Study metric \cite{bengtsson2017geometry}, that is,\n\begin{eqnarray}\nds^{2}=\| d|x;z,\alpha\rangle\|^{2}-|\langle x;z,\alpha|d|x;z,\alpha\rangle|^{2},\n\end{eqnarray}\n By using the above definition, the metric of the $\alpha-SU(2)$ feature space is obtained as\n\begin{eqnarray}\label{eq11}\nds^{2}=k\alpha dz^{2}+\frac{k}{2} \sin^{2} \left(z\sqrt{2\alpha}\right)dx^{2},\n\end{eqnarray}\nwhich describes a positive constant curvature with the scalar Ricci $R=4\/k$. This is in fact a surface of revolution conforming with a sphere \cite{carinena2005central}.\nBy using the same method, the metric of the $\alpha-SU(1,1)$ feature space is given by\n\begin{eqnarray}\label{eq10}\nds^{2}=k|\alpha| dz^{2}+\frac{k}{2} \sinh^{2} \left(z\sqrt{2|\alpha|}\right)dx^{2},\n\end{eqnarray}\nwhich describes a negative constant curvature with the scalar Ricci $R=-4\/k$, conformal with pseudo-spheres \cite{carinena2005central}. Figure \ref{parametric_feature_spaces} shows the topological categories of feature spaces associated with $\alpha-SU(2)$ and $\alpha-SU(1,1)$ coherent states.\n\begin{figure}[t]\n \centering\n \n \includegraphics[scale=0.6]{fig2.pdf}\n \caption{Spatial definition of feature spaces for the kernel functions $\alpha-SU(2)$ \textbf{(A.)} and $\alpha-SU(1,1)$ \textbf{(B.)} under the parametric configuration $\alpha = 2$ and $k = 1$.}\n \label{parametric_feature_spaces}\n\end{figure}\nAs seen above, two differing categories of the deformed coherent states are defined with different topologies: one ($\alpha-SU(2)$) is constructed on a truncated Hilbert space, forming a compact constant curvature feature space, conforming with a sphere. The other ($\alpha-SU(1,1)$) is built on an infinite Hilbert space that leads to a feature space conforming with a pseudo-sphere, with negative constant curvature.\n\n\nIn the next section, we illustrate the empirical effectiveness of the meta-theoretically derived kernel functions in different experimental settings.\n\n\section{Empirical evaluation of the kernel functions}\label{test_design}\n\nIn order to assess the effectiveness of the proposed kernel functions, we conducted a series of experiments using well-known synthetic datasets from the literature, namely 1) the circles and moons datasets from Python's \textit{scikit-learn} library, and 2) the iris dataset. \nThe circles and moons datasets are binary classification problems with two input features each. The iris dataset is a multiclass classification task with three classes and four features.\n\n\nSince the goal of SVMs is to find the maximum-margin hyperplane, a set of parameters is needed to control the error between these margins (Figure~\ref{fig:svm_hyper} shows this optimization in the high-dimensional feature space). For the RBF kernel, two parameters play a major role in this optimization process:\n\begin{itemize}\n\item Hyperparameter $C$: a regularization parameter that controls the trade-off between the decision boundary and the mis-classification term, i.e. how many mis-classifications are tolerated during the optimization.\n\item $\gamma$, which controls the non-linearity of the decision boundary. It defines how far the influence of a single training example reaches when computing the separating boundary. 
A low $\\gamma$ takes into consideration far away points to influence the decision boundary; a high $\\gamma$ considers only points that are close to the decision boundary.\n\\end{itemize}\n\\begin{figure}[t]\n \n \n \\centering\n \\includegraphics[scale=.6]{fig3.pdf}\n \\caption{Kernel Trick. A kernel is applied to each data point to map the original non-linear observations into a higher dimensional space where the observations may become linearly separable through a hyperplane.}\n \\label{fig:svm_hyper}\n\\end{figure}\nWhen considering the proposed meta-kernel, we have a set of new parameters that extend the current RBF kernel with a set of new non-linear functions that are based on the dW-H algebra. These extra parameters also enable the construction of a feature space over a surface of revolution with constant curvature. Theoretically, this could lead to significant improvements when the dataset is distributed along revolution surfaces. The parameters for both $SU(1,1)$ and $SU(2)$ kernels are the following:\n\\begin{itemize}\n \\item Parameter $k$ is related to the curvature of the feature surface and controls the non-linearity of the decision boundary in such way that high values of the parameter consider points that are near to the decision boundary whilst low values cause points further away to influence the decision boundary. Figures \\ref{vis_su11} and \\ref{fig:vis_su2} illustrate the rule of the parameter $k$. \n \\item By considering a fixed curvature, i.e., $k=const.$, the product of parameters $z$ and $\\sqrt{\\alpha}$, as an extra parameter $z\\sqrt{\\alpha}$, controls the decision boundary as well. Figure \\ref{vis_su11} and \\ref{fig:vis_su2} indicates a schematic behaviour of parameters $\\alpha$ and $k$, for $\\alpha-SU(1,1)$ and $\\alpha-SU(2)$ respectively. \n\\end{itemize}\n\\begin{figure}[t]\n \n \n \\centering\n \\includegraphics[scale=0.55]{fig4.pdf}\n \\caption{Shape of the kernel function obtained by $\\alpha-SU(1,1)$-coherent state for different strength hyperparameters $\\alpha$ and $k$, while $z=1$. The input $x$ is fixed at $(0, 0)$ and $x^{\\prime}$ is varied.}\n \\label{vis_su11}\n\\end{figure}\n\\begin{figure}[t]\n \n \n\\centering\n \\includegraphics[scale=.55]{fig5.pdf}\n \\caption{Shape of the kernel function obtained by $\\alpha-SU(2)$-coherent state for different strength hyperparameters $\\alpha$ and $k$, while $z=1$. The input $x$ is fixed at $(0, 0)$ and $x^{\\prime}$ is varied.}\n \\label{fig:vis_su2}\n\\end{figure}\n\nFigure~\\ref{fig:svc_classification} illustrates different decision boundaries that can be computed using the different kernels. One can see the different non-linearity properties of the $\\alpha-SU(2)$ and $\\alpha-SU(1,1)$ kernels compared to the standard RBF kernel. \n\n\n\\begin{figure}[t]\n \n\\centering\n \\includegraphics[scale=.7]{fig6.pdf}\n \\caption{Decision boundaries computed using different kernels for the synthetic datasets \\textit{Moons} and \\textit{Circles}.}\n \\label{fig:svc_classification}\n\\end{figure}\n\nFor evaluation purposes, the parameters that provided the best results in terms of precision were found using a grid search approach (for more details on the evaluation code and experimental setup. For the synthetic datasets, Moons and Circles, 1000 samples were generated, where 70\\% were used for the training process and 30\\% were used for the evaluation task. To make the classification task more challenging, we applied noise factors of 0.3 and 0.1 to these datasets, respectively. 
Learning curves were analysed to ensure unbiased results and no overfitting. Table~\ref{tab:results} summarises the results.\n\n\begin{figure}[]\n \n\centering\n \includegraphics[scale=.7]{table.pdf}\n \caption{Results obtained using the proposed kernels $\alpha$-$SU(1,1)$ and $\alpha$-$SU(2)$ for different datasets. Note that for the Moons and Circles synthetic datasets, we generated 1000 samples of which $70\%$ were used for training and the remaining $30\%$ for test. These datasets were generated using a noise factor of 0.3 and 0.1, respectively.}\n\label{tab:results}\n\end{figure}\n\n\n The proposed meta-kernels $\alpha$-SU(1,1) and $\alpha$-SU(2) exhibit state-of-the-art performance when compared with the RBF kernel. \n These kernels provide a significant advantage for data points distributed over curved surfaces. Given that it is hard to find benchmark datasets with those characteristics, the results presented in Table~\ref{tab:results} suggest a cautiously encouraging first step.\n\n\n\n\section{Discussion} \label{discussion}\n\n\nIn SVM-based classification problems, the appropriate choice of a kernel is fundamental to achieve high classification performance. \nHowever, the current 'trial-and-error' nature of selecting the best kernel poses significant challenges, especially when one considers kernels that can support both classical and quantum-inspired machine learning algorithms~\cite{Ali06}. \nIn this section, a visual analysis is provided of how different kernels are derived from the $\alpha$ and $k$ parameters of both the $\alpha-SU(1,1)$ and $\alpha-SU(2)$ kernels in order to promote a clear discrimination between kernel functions. \n\nThe $\alpha$ parameter determines which specific kernel function is derived from the deformed Weyl-Heisenberg (dW-H) algebra. \nA value of $\alpha=0$ corresponds to a flat feature surface. When $\alpha$ is small and $k$ is kept low, the surface remains almost flat. On the other hand, when $\alpha$ is high, it squeezes the kernel function towards its center. Figure~\ref{vis_su11} shows the impact of the parameter $\alpha$ on the $\alpha-SU(1,1)$ kernel.\nRegarding the $\alpha-SU(2)$ kernels, the parameter $\alpha$ generates a kernel that maps the non-linear observations into a higher dimensional space that `folds' the data in the feature space. High values of $\alpha$ and $k$ squeeze these folds towards the center of the distribution of the geodesic distances between the data points, as visualized in Figure~\ref{fig:vis_su2}.\n\n\n\n\n\n\nIn terms of the empirical evaluation of the kernels, Table~\ref{tab:results} indicates that the best results were obtained with low values of $\alpha = 0.1$, $1.0 \leq k \leq 2.0$, and with the $z$ parameter in the range $ 0.2 \leq z \leq 4.6$. \n\n\n\section{Conclusion}\nIn this paper, \nby using the theory of non-linear coherent states, we put forward a meta-kernel approach for deriving kernel functions for use in ML. \nMore specifically, data is mapped into a feature space which is defined by a deformed coherent state generated by a deformed Weyl-Heisenberg algebra.\nThis algebra unifies the well-known $SU(2)$, Weyl-Heisenberg, and $SU(1,1)$ groups, through a common parameter $\alpha$.\nIn addition, by studying the geometrical properties of the feature space constructed on the dW-H coherent state, we showed that the meta-kernel function uses surfaces of revolution of constant curvature as the feature spaces identified with the non-linear coherent states.
\nAn empirical investigation compares the $\\alpha-SU(2)$ and $\\alpha-SU(1,1)$ kernels derived from the meta-kernel which shows performance similar to the Radial Basis kernel.\n\nKernel functions drive developments in the field of machine learning and the meta-kernel function presented in this paper opens new theoretical avenues for the definition and exploration of kernel functions.\\\\\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgement}\nThis research was supported by the Asian Office of Aerospace Research and Development (AOARD) grant: FA2386-17-1-4016.\n\n\\appendices\n\\section{Geometrical Properties of Feature surfaces}\nThe Christoffel symbols of the second kind according to definition are give by\n\\begin{eqnarray}\n\\Gamma_{ij}^{k}=\\frac{1}{2}g^{kl}\\left[\\partial_{i}g_{jl}+\\partial_{j}g_{il}-\\partial_{l}g_{ij}\\right]\n\\end{eqnarray}\nin which $g_{ij}$ is the $(i,j)$th component of the metric, $g^{ij}=g_{ij}^{-1}$ and $\\partial_{i}$ is an abbreviation of $\\frac{\\partial}{\\partial x_{i}}$. Also, according to standard notation, the Einstein summation convention is applied, i.e., summation over a set of indexed terms in a formula, e.g. $g^{ij}g_{jk}=g^{i1}g_{1k}+g^{i2}g_{2k}$. By using the metric (\\ref{eq11}), non-zero components of \nthe Christoffel symbols of the second kind are respectively given by:\n\\begin{eqnarray}\n\\Gamma_{xz}^{x}&=&\\sqrt{2\\alpha}\\cot (\\sqrt{2\\alpha}\\ z)\\nonumber\\\\\n\\Gamma_{xx}^{z}&=&-\\frac{1}{2\\sqrt{2\\alpha}}\\sin (2\\sqrt{2\\alpha}\\ z)\n\\end{eqnarray}\nAlso, according to definition, the Ricci tensor is given by\n\\begin{eqnarray}\nR_{ij}=\\partial_{k}\\Gamma_{ij}^{k}-\\partial_{i}\\Gamma_{kj}^{k}+\n\\Gamma_{ij}^{k}\\Gamma_{kl}^{l}-\n\\Gamma_{ik}^{l}\\Gamma_{lj}^{k}.\n\\end{eqnarray}{}\nHence,\nthe non-zero Ricci tensors are given by:\n\\begin{eqnarray}\nR_{xx}= \\sin^{2} \\left[\\sqrt{2\\alpha} z\\right], R_{zz}=2\\alpha. \n\\end{eqnarray}\nThe Ricci scalar, which gives the curvature, is obtained by\n\\begin{eqnarray}\nR=\\frac{4}{k}\n\\end{eqnarray}\n\\indent In the case of metric (\\ref{eq10}), non-zero components are given by\n\\begin{eqnarray}\n\\Gamma_{xz}^{x}&=&\\sqrt{2\\alpha}\\coth (\\sqrt{2\\alpha}\\ z)\\nonumber\\\\\n\\Gamma_{xx}^{z}&=&\\frac{-1}{2\\sqrt{2\\alpha}}\\sinh (2\\sqrt{2\\alpha}\\ z)\n\\end{eqnarray}\nand the Ricci tensor are given by\n\\begin{eqnarray}\nR_{xx}=\\sinh^{2}(z\\sqrt{2\\alpha}), R_{zz}=-2\\alpha \n\\end{eqnarray}\nThe Ricci scalar, which gives the curvature, is obtained by\n\\begin{eqnarray}\nR=-\\frac{4}{k}\n\\end{eqnarray}\n\n\n\n\n\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdvlr b/data_all_eng_slimpj/shuffled/split2/finalzzdvlr new file mode 100644 index 0000000000000000000000000000000000000000..5864da97d72659ff149d2e35a8feac744b24c2da --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdvlr @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nWaves can exhibit wavefront dislocations or vortices, i.e. phase \nsingularities of the complex wavefuntion, where the modulus of the wave\nvanishes, and around which the phase of the wave changes by a multiple of \n2$\\pi$~\\cite{Berry:royal:1974}. 
\nIn~\\cite{Irvine:NatPhys:2008}, it has been shown that Maxwell's equations in fact admit solutions where \nthese lines of dislocation or vortex lines are tied into knots embedded in light fields.\nHere, the term ``knot'' refers to an embedding of a circle $S^1$ into a three-sphere $S^3$~\\cite{knot_book}.\nBoth theoretical~\\cite{Shabtay:OC:2003} as well as \nexperimental advances~\\cite{Padgett:NJP:2005,Whyte:NJP:2005,Dennis:Nature:2010,Shanblatt:OE:11} in three-dimensional light shaping allowed to \nthe experimental realization of knotted vortex lines embedded in optical fields. \nApart from optics, the study of knotted topological defect lines and their dynamics has fascinated scientists from diverse settings, including\nclassical fluid dynamics~\\cite{Moffatt:JFM:1969,Moffatt:nature:1990}, excitable media~\\cite{Paul:PRE:2003,Sutcliffe:PRL:2016}, \nchiral nematic colloids~\\cite{Tkalec:science:2011} to semiconductors~\\cite{Babaev:PRL:2002,Babaev:PRB:2009}.\n\nLord Kelvin speculated in 1867~\\cite{Kelvin:1867} that atoms can be described by vortex tubes in the aether.\nIn fact, almost two decades ago it was suggested that (nontrivial) knots might exist as stable solitons or Hopfions\nin three-dimensional field theories~\\cite{Faddeev:Nature:1997}. A Skyrme model served as an example for a field theory \nwhich admits stable knot solitons~\\cite{Paul:PRL:1998}. \n\nIn the context of spinor Bose-Einstein condensates (BECs)~\\cite{TinLun:PRL:98,Machida:JphysJpn:1998}, the\nexistence of knots with nonzero Hopf charge was studied~\\cite{Niemi:PRB:2002,Ueda:PRL:08,Ueda:PTPS:2010} and recently realized\nexperimentally~\\cite{Mottonen:NatPhys:2016}. However, there is work~\\cite{Speight:JGP:2010} that casts doubt on existence of\nstable Hopfions in two-component Ginzburg-Landau type systems. \n\nContrasting these studies, \nnumerical investigations on the robustness and centre of mass motion of {\\em knotted vortex lines} embedded in a {\\em single}\ncomponent BEC have been undertaken~\\cite{Barenghi:PRE:2012,Barenghi:JOP:2014}. \nSuch formations are typically unstable, since there is no topological stabilization mechanism and reconnections of \nvortex lines are allowed due to the occurrence of the quantum stress tensor in the hydrodynamic formulation of the Gross-Pitaevskii Equation. \nNucleation~\\cite{Frisch:PRL:1992} and reconnection of vortex lines~\\cite{Koplik:PRL:1993,Bewley:PNAS:2008,Barenghi:PhysFluid:2012} and vortex line bundles~\\cite{Barenghi:PRL:2008} \nand subsequent emission of sound waves~\\cite{Leadbeater:PRL:2001} have been theoretically studied extensively. \n\nIn this paper, we follow up on the idea formulated in~\\cite{Ruostekoski:PRA:2005}, and \ndiscuss a general experimental scheme to create a two-component BEC which contains a knotted vortex line in one \nof its components. \nWe suggest using a light field containing a knotted vortex line as probe field of a Raman-pulse that drives a \ncoherent two-photon Raman transition of three-level atoms with $\\Lambda$-level configuration [cf.~\\reffig{fig:lambda_scheme}(a)]. \nPreviously, similar methods have been used to create dark solitons and vortices~\\cite{Wright:PRL:1997,Dum:prl:98,Ruostekoski:PRL:2004,Ruostekoski:PRA:2005,Andersen:PRL:2006} \nand have been experimentally realized using microwave~\\cite{Cornell:PRL::1999_2,Cornell:PRL::1999} and more recently optical coupling~\\cite{Schmiegelow:ARXIV:2015}. 
\nThe pump beam will be the mentioned knotted (stationary) light beam, whereas \nthe control beam is a co-propagating plane wave to remove fast oscillations in the $z$-direction~\\cite{Ruostekoski:PRA:2005}.\nThe large controllability of the pulse parameters (i.e. strength and duration of the Rabi-pulse) \nallows for a large controllability of the excitation. \nThe dynamics of two-component BECs can be monitored in real time via in situ measurements without ballistic expansion~\\cite{Cornell:PRL:2000}. \nUsing numerical methods, we study excitation and subsequent dynamics of specific examples of knotted matter waves for experimentally feasible parameters. \n\nThis paper thus paves the way for experimentally accessing many of the \nphenomena discussed only theoretically in, e.g.~\\cite{Barenghi:PRE:2012,Barenghi:JOP:2014,Irvine:arXiv:2015},\nand if such system were realized experimentally, it would give controlled experimental access to reconnection of vortex lines, \nsubsequent emission of sound waves and more generally quantum turbulence. \n\n\\section{Model}\\label{sec:artificial_light}\n\\subsection{Equations of motion for light-matter wave coupling}\n\nConsider a three-level atom in $\\Lambda$-configuration with ground states $\\ket{a}$ and $\\ket{b}$, which \nare off-resonantly coupled to an excited state $\\ket{e}$ with detuning $\\Delta$ and spatially dependent Rabi-frequencies\n$\\Omega_a({\\bf r})$ and $\\Omega_b(z)$. Assuming $\\Delta\\gg\\Omega_a,\\Omega_b$, the excited state can be \nadiabatically eliminated, and the dynamics of the condensate wave function confined in a trap $V({\\bf r})$ \ncan be described~\\cite{Dum:prl:98,Ruostekoski:PRA:2005} using~\\refeqs{eq:eqmo1}{eq:eqmo2}\n\\begin{eqnarray}\n\\fl i\\hbar\\partial_t\\psi_a&=\\left(-\\frac{\\hbar^2}{2m}\\nabla^2 + V({\\bf r}) + Ng_{aa} |\\psi_a|^2+Ng_{ab}|\\psi_b|^2\\right)\\psi_a+\\frac{\\hbar\\Omega_a({\\bf r})\\Omega_b^*(z)}{8\\Delta}\\psi_b\\label{eq:eqmo1}\\\\\n\\fl i\\hbar\\partial_t\\psi_b&=\\left(-\\frac{\\hbar^2}{2m}\\nabla^2 + V({\\bf r}) + Ng_{ba} |\\psi_a|^2+Ng_{bb}|\\psi_b|^2 -\\hbar\\delta \\right)\\psi_b+\\frac{\\hbar\\Omega_a^*({\\bf r})\\Omega_b(z)}{8\\Delta}\\psi_a\n \\label{eq:eqmo2}\n\\end{eqnarray}\nwith $g_{kj}=4\\pi\\hbar^2 a_{kj}\/m$, $a_{kj}$ denoting the scattering length between the species and \n\\begin{equation}\nV({\\bf r})= \\frac{1}{2}m\\omega^2 r_\\perp^2+\\frac{1}{2}m\\omega_z^2 z^2.\n\\end{equation}\n\\reffig{fig:lambda_scheme}(a) depicts the level scheme of three-level atoms in $\\Lambda$-type configuration.\n\\begin{figure}\\begin{center}\n\\includegraphics[width=\\textwidth]{fig1.eps}\n\\caption{\n(color online) (a) Schematic sketch of a Raman-type transition with $\\Lambda$-configuration. The state $\\ket{a}$ is\noff-resonantly coupled to state $\\ket{b}$ with two-photon detuning $\\delta$.\nThe knotted light field $\\mathcal{E}$ proportional to $\\Omega_a({\\bf r})$ off-resonantly couples state $\\ket{a}$ to $\\ket{e}$ with detuning $\\Delta$. \nThe final state $\\ket{b}$ then reflects the involved structure of the light field $\\mathcal{E}$ associated with $\\Omega_a({\\bf r})$.\n(b) depicts an isosurface of the intensity (purple) of the light-field $\\mathcal{E}$ at a low isointensity value and a slice of its \nphase in the $(x,y)$-plane. 
Additional to the vortex lines in the center of the beam, the usual diffraction cones become visible.\n(c) Illustration of the situation at $t=0$ for the parameters discussed in~\\ref{sec:trefoil}.\nThe isodensity of the cigar-shaped wave function $\\psi_a$ (yellow) is subject to the knotted light field (purple) from Fig. (b).\n\\label{fig:lambda_scheme}}\n\\end{center}\\end{figure}\nThe Rabi-frequencies $\\Omega_k$, $k=a,b$ relate to their corresponding electric fields via\n\\begin{equation}\n \\Omega_k=\\frac{dE_k}{\\hbar},\n \\label{eq:Omega_a}\n\\end{equation}\nwhere the transition dipole element $d$ has been introduced. \nAs electric fields, we consider a monochromatic, quasi-linearly polarized light field with \nwavevector $k_0$. \nThen, its slowly varying envelope~$\\mathcal{E}$ is defined by \n\\begin{equation}\n E({\\bf r},t)=\\sqrt{\\frac{\\omega_0\\mu_0}{2k_0}}\\mathcal{E}({\\bf r})e^{i\\left(k_0z-\\omega_0t\\right)}+{\\rm c.c.},\n \\label{eq:sve}\n\\end{equation}\nwhere ${\\rm c.c}$ denotes the complex conjugate. \nWithin the paraxial approximation, the slowly varying envelope $\\mathcal{E}$ of the beam is governed by \n\\begin{equation}\n2ik_0\\partial_z \\mathcal{E}({\\bf r}_{\\perp},z) = -\\nabla^2_{\\perp}\\mathcal{E}({\\bf r}_{\\perp},z),\n\\label{eq:paraxial}\n\\end{equation}\nwhere ${\\bf r}_\\perp=(x,y)$, and $\\nabla^2_\\perp$ denotes the transverse Laplacian.\n\nSuch a system can be realized by considering the two hyperfine ground states $\\ket{a}=\\ket{S_{1\/2},F=2,M_F=-1}$, $\\ket{b}=\\ket{S_{1\/2},F=2,M_F=1}$ \nof ${}^{87}$Rb, that are off-resonantly coupled to an excited state manifold. \nThen, the scattering lengths between the species $a_{kj}$ are given by $a_{ba}=5.5$nm, and the ratio $a_{aa}$:$a_{ba}$:$a_{bb}$ is given by $1.03$:$1$:$0.97$~\\cite{Cornell:PRL:1998}. \nThe motivation for assuming cylindrical symmetry of the trapping potential $V({\\bf r})$ is associated with \nthe specific form of the paraxial wave equation~\\refeq{eq:paraxial}. \nWhereas we use different beams for trapping and Raman transition, the aspect ratio $\\omega_z\/\\omega$ \nfor our otherwise independent cigar shaped trap cannot be chosen arbitrarily.\nInstead, in order to accommodate the optical knot in the BEC (see~\\reffig{fig:lambda_scheme}(c)), \nwe have to make sure that extent of the BEC in the z-direction is large enough. \nThe aspect ratio of the optical beam (and the optical knot) can be expressed using the ratio between the Rayleigh \nlength $z_r=k_0\\sigma^2$ and the width $\\sigma$ of the light field in the \n$(x,y)$-plane:\n\\begin{equation}\n \\frac{z_r}{\\sigma}=\\frac{k_0\\sigma^2}{\\sigma}=k_0\\sigma.\n \\label{eq:aspect_ratio}\n\\end{equation}\nThe basic idea towards imprinting knotted vortex lines into BECs is depicted in~\\reffig{fig:lambda_scheme}(a).\nConsider the case where all atoms are initially in state $\\ket{a}$ [$\\psi_b(t=0)=0$].\nThen, assuming $\\Omega_i\\ll \\Delta$ and $\\delta\\approx 0$ the \nRabi pulse coherently transfers population from $\\ket{a}$ to $\\ket{b}$ as \nillustrated in according to~\\refeqs{eq:eqmo1}{eq:eqmo2}. 
The light field is depicted in~\\reffig{fig:lambda_scheme}(b) and together with the density of $\\psi_a$ in~\\reffig{fig:lambda_scheme}(c).\nThe laser associated with $\\Omega_b$ is chosen to be a simple plane wave, which coherently co-propagates with $\\mathcal{E}$, \nso that the fast variations $\\sim e^{ik_0z}$ in~\\refeq{eq:sve} are canceled whenever products $\\Omega_a\\Omega_b^*$, as in~\\refeqs{eq:eqmo1}{eq:eqmo2}, occur.\nThe explanation as to why such a setup allows to imprint the phase of the structured light field $\\Omega_a$ onto the condensate is the following.\nFor short time and two-photon detuning $\\delta=0$, the small change $\\delta\\psi_b$ to $\\psi_b$ is \ngiven by \n\\begin{equation}\n \\delta\\psi_b \\approx-\\frac{i}{\\hbar}\\frac{\\hbar\\Omega_a^*({\\bf r})\\Omega_b(z)}{8\\Delta}\\psi_a(t=0)\\delta t.\n \\label{eq:simple_argument}\n\\end{equation}\nThus, we may conclude that at least for small times, it should be possible to populate state $\\ket{b}$ \nwith a given phase profile using our light field. \nOnce the phase profile $\\theta$ has been imprinted ($\\mathcal{E}=|\\mathcal{E}|e^{i\\theta}$) onto \nthe condensate, the atoms will display motion according to ${\\bf v}=\\hbar\\nabla\\theta\/m$~\\cite{Ruostekoski:PRA:2005}. \nTo realize such a scenario, the intensity of the involved \nlaser beams must be sufficiently strong, such that the time-scales of the imprinting are small compared to\ntime-scales of the dynamics of the condensate.\nWe will use numerical methods and realistic experimental \nparameters to extend this simple idea beyond the perturbative limit and \nstudy its subsequent dynamics. \n\n\n\\subsection{Knotted Light Field}\\label{sec:knotted_light_field}\n\nLaguerre-Gaussian (LG) functions $\\mathrm{LG}_{l,p}$ \nform a basis set for solutions for the paraxial wave equation~\\refeq{eq:paraxial}, and are given by the expression\n\\begin{eqnarray}\n \\mathrm{LG}_{l,p}^{\\sigma,z_r}({\\bf r}_\\perp,z)=&\\sqrt{\\frac{p!}{\\pi (|l|+p)!}}\\frac{r_\\perp^{|l|}e^{il\\varphi}}{\\sigma^{|l|+1}}\\frac{(1-iz\/z_r)^p}{(1+iz\/z_r)^{p+|l|+1}}\\nonumber\\\\\n & \\times e^{-r_{\\perp}^{2}\/2\\sigma^2(1+iz\/z_r)} L^{|l|}_p \\left( \\frac{r_{\\perp}^{2}}{\\sigma^2\\left[1+\\left(z\/z_r\\right)^2\\right]} \\right)\n \\label{eq:LG_modes}\n\\end{eqnarray}\nwith $r_{\\perp}^2=x^2+y^2$ and $L^{|l|}_p$ being the associated Laguerre polynomials. \n\nWe seek a linear superposition of LG-modes, $\\sum_{l,p}a_{lp}\\mathrm{LG}_{l,p}$, \nthat describe light-beams containing knotted vortex lines. \nTo this end, we will review the method proposed in~\\cite{Dennis:Nature:2010}, which uses Milnor polynomials~\\cite{Milnor_book} as an ansatz for complex light fields\nin the shape of torus knots to determine appropriate amplitudes $a_{lp}$, and rescale for application to our setup.\n\nThe basic idea~\\cite{King:thesis,Dennis:Nature:2010} is to parametrize an $N$-strand braid as the roots of the polynomial \n\\begin{equation}\n p_h^{N,n}(u)=\\prod_{j=0}^{N-1} \\left[u-s_j(h)\\right]\n\\end{equation}\nHere, $h$ denotes the height of the periodic braid and $N$ denotes the number of strands or roots of $p_h(u)$ in $u$.\nLet us choose $N=2$ and~\\cite{King:thesis} \n\\begin{equation}\n s_j(h)=\\cos(h_j)+i\\sin(h_j), \\quad h_j=(h-2\\pi j\/n)n\/2.\n\\end{equation}\nThe projection of the braid onto the $(h=0)$-plane leads to a circle. The parameter $n$ represents the number of braid crossings. \nWe will consider the cases $n=2,3$ in the following. 
\nA small computation allows us to find an explicit expression for the Milnor polynomial $p_h$\nin the variables $u$ and $\\exp(ih)=:v$,\n\\begin{equation}\n p_h^{2,n}=u^2-v^n\n \\label{eq:milnor}\n\\end{equation}\nWe can now imagine a cylinder containing the braid, and ``glue'' top and bottom surfaces together to obtain a knot. In fact, one can show~\\cite{Alexander:PNAS:1923} that\nany knot can be represented as a closure of a braid. \nAn easy way to deform our cylinder into a torus and to make it explicitly dependent on ${\\bf{r}^\\prime}=(x^\\prime,y^\\prime,z^\\prime)$ \nis to write $u$ and $v$ as an inverse \nstereographic projection from three-dimensional space to a \nunit three-sphere, $\\mathbb{R}^3\\rightarrow\\mathbb{S}^3$, i.e.\n\\begin{eqnarray}\nu&=\\frac{r^{\\prime 2}-1+2iz^{\\prime}}{r^{\\prime 2}+1},\\\\\nv&=\\frac{2(x^{\\prime}+iy^{\\prime})}{r^{\\prime 2}+1}\\label{eq:stereo}\n\\end{eqnarray}\nHere, $r^\\prime=\\sqrt{x^{\\prime 2}+y^{\\prime 2}+z^{\\prime 2}}$ and the units $\\bf{r}^\\prime$ are non-dimensional. \nOne can easily show that $|u|^2+|v|^2=\\Re(u)^2+\\Im(u)^2+\\Re(v)^2+\\Im(v)^2=1$, \nand thus~\\refeq{eq:stereo} indeed represents a parametrisation of a three-sphere. \nSince we aim to describe an actual light field, i.e. a solution to the paraxial wave equation, \ninstead of considering the Milnor polynomial~\\refeq{eq:milnor} $p_h$ as it is, it is reasonable to get rid of the denominator in \n$p_h$ and to consider~\\cite{Dennis:Nature:2010,King:thesis} \n\\begin{eqnarray}\n\\xi_a({\\bf r}^\\prime)&=\\left(u^2-v^n\\right)\\left(r^{\\prime 2}+1\\right)^n.\n\\label{eq:ansatz_light}\n\\end{eqnarray}\nFor $n=2,{\\ }3$,~\\refeq{eq:ansatz_light} \ndescribes a Hopf link and a trefoil knot, respectively, which \nrepresent the simplest non-trivial examples of a link and a knot, respectively.\nThe latter two will be used as exemplary fields in the following. \n\nIn order to use~\\refeq{eq:ansatz_light} to find appropriate coefficients $a_{l,p}$, \nlet us rescale the light field~\\refeq{eq:LG_modes} by $R_s$ to nondimensional units $(x^\\prime,y^\\prime,z^\\prime)$, such that \n\\begin{equation}\n {\\bf r}_\\perp={\\bf r}^\\prime_\\perp R_s,\\quad\n \\frac{z}{z_r}=\\frac{z\/R_s}{k_0R_s\\sigma^2\/R_s^2}=\\frac{z^\\prime}{(k_0R_s) (\\sigma^2\/R_s^2)},\n\\end{equation}\nto equate the light field to our knot in the same dimensionless units. \nEffectively, $R_s$ scales the abstract unit sphere to have a transverse extent of approximately $R_s$ with respect to the laser beam.\nHence, there are two length scales of our system, $R_s$ and $\\sigma$, that are associated with the nodal lines of the knot and the transverse extent of the beam. \nAs we will see, the ratio $w=\\sigma\/R_s$ will play a crucial role as an important degree of freedom, that allows us to change the width of the \nbeam relative to the positions of the nodal lines of the knot. 
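As an illustrative aside (this numerical sketch is ours and not part of the original construction), the field $\xi_a$ defined above can be evaluated directly on a grid in the dimensionless coordinates, and its nodal lines, which trace out the Hopf link for $n=2$ and the trefoil knot for $n=3$, are the zeros of the complex field:
\begin{verbatim}
import numpy as np

def xi_a(x, y, z, n=3):
    # Milnor-type ansatz (u^2 - v^n)(r'^2 + 1)^n in dimensionless coordinates
    r2 = x**2 + y**2 + z**2
    u = (r2 - 1.0 + 2.0j * z) / (r2 + 1.0)
    v = 2.0 * (x + 1.0j * y) / (r2 + 1.0)
    return (u**2 - v**n) * (r2 + 1.0) ** n

# coarse grid; the vortex (nodal) lines are where |xi_a| vanishes
grid = np.linspace(-2.0, 2.0, 101)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
print("min |xi_a| on the grid (trefoil):", np.abs(xi_a(X, Y, Z, n=3)).min())
\end{verbatim}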
\nTo find appropriate superpositions of ${\\mathrm{LG}}$-modes, it is sufficient to restrict considerations to the $(x,y)$-plane only.\nLet us equate~\\refeq{eq:ansatz_light} with a superposition of the above-mentioned rescaled version of~\\refeq{eq:LG_modes}: \n\\begin{eqnarray}\n\\xi_a({\\bf r}_\\perp^\\prime,z^\\prime=0) = \\sum_{l,p} a_{l,p}(w)\\mathrm{LG}_{l,p}^{w,k_0R_sw^2}\\left({\\bf r}_\\perp^\\prime,z^\\prime=0\\right)\\sqrt{\\pi}e^{r_{\\perp}^{\\prime 2}\/2w^2}w.\n\\label{eq:determine_alp}\n \\end{eqnarray}\nComparing different powers in $r_\\perp^\\prime$ allows us to determine the finite number of coefficients $a_{l,p}$ uniquely,\nwhich depend only on the real number $w$. Whereas this ansatz can be used for some knots, there are counterexamples, and \nthere is no rigorous general proof as to why and in which cases this ansatz leads to success. \nA large value of $w$ ensures that the vortex lines \nare actually embedded in the beam, and not chopped off.\nFurthermore, for finite $w$, additional vortex lines in the shape of hairpins appear for larger $z$-values, which leads to the fact that \n$w$ must be chosen large enough. On the other hand, if $w$ is too large, \nthe polynomial increase in the polynomials describing the knots will not be attenuated quickly enough\nby the Gaussian, and thus intensity variations become huge, which is undesirable for our setup. \nIt is possible~\\cite{Dennis:Nature:2010} to further optimize these coefficients to separate vortices with regions of \nlarger intensity. \nOnce appropriate coefficients $a_{l,p}$ have been found, the light field can be \nwritten down as superposition of LG modes by rescaling back\ninto physical units. For the sake of clarity, let us introduce an auxiliary function $f$ defined as \n$f(r_\\perp\/\\sigma,z\/z_r)\/\\sigma:=\\mathrm{LG}_{l,p}^{\\sigma,z_r}({\\bf r}_\\perp,z)$. Then, we find \n\\begin{eqnarray}\n \\mathcal{E}_a({\\bf r}_\\perp,z)&=\\frac{A}{R_s}\\sum_{l,p}a_{l,p}(w)\\mathrm{LG}_{l,p}^{w,k_0R_sw^2}({\\bf r}_\\perp^\\prime,z^\\prime),\\\\\n &=\\frac{A}{R_s}\\sum_{l,p}a_{l,p}(w)\\frac{1}{w}f\\left(\\frac{r_\\perp^\\prime}{w},\\frac{z^\\prime}{k_0R_sw^2}\\right),\\\\\n &=A\\sum_{l,p}a_{l,p}(w)\\frac{1}{\\sigma}f\\left(\\frac{r_\\perp}{\\sigma},\\frac{z}{z_r}\\right)\\\\\n &=A\\sum_{l,p}a_{l,p}(w)\\mathrm{LG}_{l,p}^{\\sigma,z_r}({\\bf r}_\\perp,z).\n \\label{eq:superpos}\n\\end{eqnarray}\nHere, the amplitude $A$ of the light field has been introduced, which gives the intensity of the beam the right value in \nappropriate units. \nNote, that~\\refeq{eq:superpos} is no longer dependent on the choice of $R_s$. 
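For readers who wish to reproduce the beam profiles, a minimal Python sketch (ours) of this reconstruction is given below. The mode normalisation follows the expression for $\mathrm{LG}_{l,p}$ given above; purely as an example we use the Hopf-link coefficient set $a_{l,p}(w)$ quoted in the Hopf-link section below, and the amplitude $A$ and wavenumber $k_0$ are placeholders:
\begin{verbatim}
import numpy as np
from math import factorial, pi
from scipy.special import genlaguerre

def lg_mode(l, p, r_perp, phi, z, sigma, zr):
    # Laguerre-Gaussian mode LG_{l,p} in the normalisation used above
    q = 1.0 + 1j * z / zr
    norm = np.sqrt(factorial(p) / (pi * factorial(abs(l) + p)))
    radial = r_perp ** abs(l) * np.exp(1j * l * phi) / sigma ** (abs(l) + 1)
    prop = (1.0 - 1j * z / zr) ** p / q ** (p + abs(l) + 1)
    gauss = np.exp(-r_perp**2 / (2.0 * sigma**2 * q))
    lag = genlaguerre(p, abs(l))(r_perp**2 / (sigma**2 * (1.0 + (z / zr) ** 2)))
    return norm * radial * prop * gauss * lag

def knotted_beam(r_perp, phi, z, coeffs, sigma, k0, A=1.0):
    # superposition A * sum_{l,p} a_{l,p} LG_{l,p}^{sigma, z_r}, with z_r = k0 sigma^2
    zr = k0 * sigma**2
    return A * sum(c * lg_mode(l, p, r_perp, phi, z, sigma, zr)
                   for (l, p), c in coeffs.items())

# example: Hopf-link coefficients a_{l,p}(w) for w = 1.5 (see the Hopf-link section)
w = 1.5
hopf = {(0, 0): 1 - 2 * w**2 + 2 * w**4, (0, 1): 2 * w**2 - 4 * w**4,
        (0, 2): 2 * w**4, (2, 0): -4 * np.sqrt(2) * w**2}
print(abs(knotted_beam(0.5, 0.0, 0.0, hopf, sigma=0.675, k0=1.0)))
\end{verbatim}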
\n\n\\subsection{Rescaling}\n\nLet us rescale~\\refeqs{eq:eqmo1}{eq:eqmo2} by \nintroducing spatial $r^\\prime$ and time $t^\\prime$ coordinates rescaled \nby oscillator length $a_0=\\sqrt{\\hbar\/m\\omega}$ and trapping frequency $\\omega$, respectively,\n\\begin{eqnarray}\n t^\\prime&=\\omega t,\\\\\n r^\\prime&=\\frac{r}{a_0},\\\\\n \\psi^\\prime&=a_0^{3\/2}\\psi.\n\\end{eqnarray}\nThen, after multiplying~\\refeqs{eq:eqmo1}{eq:eqmo2} by $1\/m\\omega^2\\sqrt{a_0}$, ~\\refeqs{eq:eqmo1}{eq:eqmo2} become\n\\begin{eqnarray}\n \\fl i\\partial_{t^\\prime}\\psi_a^\\prime&=\\left(-\\frac{1}{2}\\nabla^{\\prime 2} + \\frac{r_\\perp^{\\prime 2}}{2} + \\gamma_z^2 \\frac{z^{\\prime 2}}{2} + \\kappa_{aa} |\\psi_a^\\prime|^2+\\kappa_{ab}|\\psi_b^\\prime|^2\\right)\\psi_a^\\prime\n +\\frac{\\Omega_a({\\bf r}^\\prime)\\Omega_b^*(z^\\prime)}{8\\omega\\Delta}\\psi_b^\\prime\\label{eq:rescaled_eqmo1}\\\\\n \\fl i\\partial_{t^\\prime}\\psi_b^\\prime&=\\frac{\\Omega_a^*({\\bf r}^\\prime)\\Omega_b(z^\\prime)}{8\\omega\\Delta}\\psi_a^\\prime+ \n \\left(-\\frac{1}{2}\\nabla^{\\prime 2} + \\frac{r_\\perp^{\\prime 2}}{2} + \\gamma_z^2 \\frac{z^{\\prime 2}}{2} + \\kappa_{ba} |\\psi_a^\\prime|^2+\\kappa_{bb}|\\psi_b^\\prime|^2-\\frac{\\delta}{\\omega}\\right)\\psi_b^\\prime\n \\label{eq:rescaled_eqmo2}\n\\end{eqnarray}\nwhere we left away the prime for our non-dimensional time and space variables and introduced $\\gamma_z=\\omega_z\/\\omega$, and $\\kappa_{kj}=4\\pi a_{kj}N\/a_0$.\nWe can express as $\\kappa_{kj}=a_0^2\/2a_h^2$ using the healing length $a_h=\\sqrt{8\\pi a_{kj} N\/a_0^3}$. \nSince the aspect ratio $\\gamma_z$ of the trapping frequencies is typically small, it is more convenient for numerical studies to \nrescale the elongated $z$-axis as follows:\n\\begin{eqnarray}\n z^{\\prime\\prime}&=\\gamma_z z^\\prime,\\\\\n \\psi^{\\prime\\prime}_i&=\\frac{1}{\\sqrt{\\gamma_z}}\\psi_i^\\prime.\n\\end{eqnarray}\nThis rescaling finally yields our equations of motion:\n\\begin{eqnarray}\n\\fl i\\partial_t\\psi_a&=\\left(-\\frac{1}{2}\\left[\\nabla^2_\\perp + \\gamma_z^2\\partial_z^2 \\right] + \\frac{r^2}{2} + \\gamma_z\\kappa_{ja} |\\psi_a|^2+\\gamma_z\\kappa_{jb}|\\psi_b|^2\\right)\\psi_a\n +\\frac{\\Omega_a({\\bf r})\\Omega_b^*(z)}{8\\omega\\Delta}\\psi_b\\label{eq:rescaled_eqmo_final1}\\\\\n\\fl i\\partial_t\\psi_b&=\\left(-\\frac{1}{2}\\left[\\nabla^2_\\perp + \\gamma_z^2\\partial_z^2 \\right] + \\frac{r^2}{2} + \\gamma_z\\kappa_{ja} |\\psi_a|^2+\\gamma_z\\kappa_{jb}|\\psi_b|^2-\\frac{\\delta}{\\omega}\\right)\\psi_b\n +\\frac{\\Omega_a^*({\\bf r})\\Omega_b(z)}{8\\omega\\Delta}\\psi_a\n \\label{eq:rescaled_eqmo_final2}\n\\end{eqnarray}\nwhere we have dropped the primes for convenience. \nFurthermore, applying the same rescaling to the paraxial wave equation~[\\refeq{eq:paraxial}], we find the following expression in \nour rescaled units:\n\\begin{equation}\n2ik_0\\partial_z \\mathcal{E}({\\bf r}_{\\perp},z) = -\\nabla^2_{\\perp}\\mathcal{E}({\\bf r}_{\\perp},z).\n\\label{eq:paraxial_rescaled}\n\\end{equation}\nHere, $k_0$ has been redefined and stands for $k_0=2\\pi a_0\\gamma_z\/\\lambda$. 
\nThen, the functions defined in~\\refeq{eq:LG_modes} remain solutions to~\\refeq{eq:paraxial_rescaled} in the new coordinates, \nand using our modified $k_0$, we can still write the Rayleigh length as $z_r=k_0\\sigma^2$, where \n$z_r$ is measured in units of $a_0\/\\gamma_z$ and $\\sigma$ in units of $a_0$.\n\nWe have $m=1.44\\times 10^{-25}$kg for ${}^{87}$Rb, so that a trapping frequency of $\\omega=2\\pi\\times 10$Hz leads to \nan order of magnitude of $a_0=3.4\\mu$m for the typical transverse extent of the BEC. \nAn order of magnitude estimate for the off-diagonal coupling terms is given by \n\\begin{equation}\n \\frac{\\Omega_a({\\bf r})\\Omega_b^*(z)}{8\\omega\\Delta}\\approx \\frac{\\Omega_a({\\bf r})\\Omega_b^*(z)}{\\Delta}0.016 {\\mathrm s} \\approx \\Range{2d2}{2d6}\n\\end{equation}\nfor $\\Omega_i\\approx \\Range{1}{10}$MHz and $\\Omega_i\/\\Delta=\\Range{0.01}{0.1}$. \nThe required tight focusing of the beam leads to the fact, that we only need \nmoderate beam powers to achieve adequate Rabi-frequencies. \nIn the following, we set the two-photon detuning $\\delta=0$ to achieve optimal transfer. \n\nFinally, we need to find an estimate for the Rabi pulse duration $t_d$. \nTo this end, consider the simple case, where $\\Omega_a$ and $\\Omega_b$ describe driving by plane waves. \nWe seek to find the optimal time for maximal transfer of atoms from $\\ket{a}$ to $\\ket{b}$. \nGiven that the ``off-diagonal'' terms in the coupled equations~\\refeqs{eq:rescaled_eqmo_final1}{eq:rescaled_eqmo_final2} are much larger than the diagonals, we can approximate \nthe population $N_b$ in state $\\ket{b}$ as \n\\begin{equation}\nN_b\\sim 1-\\cos^2\\left(\\frac{\\Omega_a\\Omega_b^*}{8\\Delta \\omega}t\\right).\n\\end{equation}\nThis expression allows us to find the optimal (non-dimensional) time duration of the Rabi pulse $t_d$ that leads to a complete transfer of population:\n\\begin{equation}\n t_d=\\frac{\\pi\/2}{\\Omega_a\\Omega_b^*\/8\\Delta \\omega}.\n \\label{eq:t_d}\n\\end{equation}\nOn the other hand, this consideration does not apply to our case due to the spatial dependence of the Rabi frequency $\\Omega_a({\\bf r})$. \nDue to spatial dependence of the intensity variations, we do not really have an optimal pulse duration \n$t_d$, and our choice of $t_d$ is a compromise between on one hand \nhaving sufficiently large transfer of atoms to state $\\ket{b}$\nand on the other hand avoiding population being transferred back from\nstate $\\ket{b}$ to state $\\ket{a}$ in positions where the intensity is large and thus the transfer \ndynamics is faster.\nHence, we have to choose shorter pulse durations than what~\\refeq{eq:t_d} suggests. \nFurthermore, due to the nodal lines in the pump beam, it is never possible to use this arrangement for \na total conversion of atomic population from $\\ket{a}$ to $\\ket{b}$. \nWith these aspects kept in mind, we may still use~\\refeq{eq:t_d} as a rough estimate for the order of magnitude for our choice of $t_d$.\n\n\\section{Numerical Investigation}\n\nIn the previous section, we elaborated on how to inscribe complex knotted vortex lines into matter waves. Whereas the \nsimple argument of~\\refeq{eq:simple_argument} allows us to conclude that our setup should work at least in the perturbative regime for \nshort time-scales and small amounts of atoms in state $\\ket{b}$, we cannot infer what happens beyond this perturbative regime. 
\nIn this section, we aim to numerically study excitation and decay of our highly excited states into more elementary unknots or \nvanishing of the latter by collision and annihilation of vortex lines. \nWe will do so by considering specific examples. However, these examples are by no means an exhaustive treatment of the\nthe huge potential manifold of realizations possible. To thoroughly understand the dynamics remains \nan elusive goal, which goes beyond the scope of this paper. \n\n\\subsection{Hopf-link}\n\nOne of the simplest torus knots is the so-called Hopf-link, which corresponds to setting $n=2$ in~\\refeq{eq:ansatz_light} and is \ndepicted e.g. by the green isodensity surface in~\\reffig{fig:hopf_projection}(b). Using the method outlined in~\\ref{sec:knotted_light_field} and~\\refeq{eq:determine_alp}, we find the following superposition \ncoefficients of Laguerre-Gaussian polynomials for a Hopf-link:\n\\begin{eqnarray}\n \\frac{\\Omega_a\\Omega_b^*}{8\\omega\\Delta} &= A[(1-2w^2+2w^4)\\mathrm{LG}_{00}+(2w^2-4w^4)\\mathrm{LG}_{01}+2w^4\\mathrm{LG}_{02}\\nonumber\\\\\n &-4\\sqrt{2}w^2\\mathrm{LG}_{20}]\\Theta(t_{d}-t).\n\\end{eqnarray}\nTo this end, let us consider the dynamics for $\\kappa_{ab}=4\\pi a_{ab} N\/a_0=1887$, $\\gamma_z=0.125$. \nWe set the two-photon detuning to $\\delta=0$.\nThis can be achieved by using typical parameters of ${}^{87}$Rb with $\\omega=2\\pi\\times 10 $Hz, $\\omega_z=2.5\\pi $Hz, $N=93000$, \n$a_{ab}=5.5$nm, $\\lambda=780$nm, which amounts to typical \nunits of length scales of $a_0=3.4\\mu$m in the $(x,y)$-plane and $a_0\/\\gamma_z=27.2\\mu m$ in the $z$-direction.\nThomas-Fermi like dynamics, where the nonlinearity is large compared to the broadening due to the Laplacian, \nis preferable to ensure robustness of the vortex cores and avoid the trivial broadening of the latter. \nFor that reason, we have to choose a sufficiently large value for $\\kappa_{ij}$.\nFor the other $a_{ij}$, we assume the ratio $a_{aa}$:$a_{ab}$:$a_{bb}$ to be given by $1.03$:$1$:$0.97$~\\cite{Cornell:PRL:1998}.\nWith respect to our Rabi pulse, we choose \n$A=450$, $w=1.5$, $\\sigma=0.675$ corresponding to $2.3\\mu$m, \nand a pulse duration of $t_{d}=0.002$ corresponding to $31.8\\mu$s.\n\nWe use the common Fourier split-step method~\\cite{Agrawal_Kivshar:2003} and adaptive Runge-Kutta algorithm to fourth order for the time-step for numerical computation. \nParts of the code were written using~\\cite{XMDS}. To find the appropriate state $\\psi_a(t=0)$, we used imaginary time-evolution (e.g.~\\cite{Tosi:PRE:2000}).\nIt is crucial to use a variable time-step, since the dynamics during the time duration of the Rabi-pulse $t_d$ has to be temporally \nfully resolved, and thus the time-step has to be much smaller than $t_d$ within $t_d$. After $t>t_d$, the time-step can be chosen \nmuch larger, since it only needs to resolve the characteristic time-scale of the BEC dynamics. We used time-steps varying between $\\Delta t=\\Range{d-8}{5d-4}$ and\na spatial resolution of $\\Delta x=0.05-0.06$ with $220^3$ points.\n\n\\reffig{fig:hopf_projection}(a-d) depict an isodensity surface of the condensate wave function $\\psi_b$ with small value, \nwhich serves as an illustration of the knotted vortex core, and its phase profile in different ways, shortly after the imprinting occurred.\nHere, the initial wave function $\\psi_a(t=0)$ was computed as the ground state of the trap and $\\psi_b(t=0)=0$. 
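For orientation, the propagation scheme summarised above can be illustrated by the following minimal split-step sketch (ours; it is not the production code used for the results shown here, and the grids, time steps and handling of the spatially dependent Rabi coupling are simplified). The kinetic step is applied in Fourier space, the trap, mean-field and detuning terms in real space, and the linear Rabi coupling is treated as an exact two-level rotation at each step:
\begin{verbatim}
import numpy as np

def split_step(psi_a, psi_b, coupling, dt, steps, k2, x2, kappa, gamma_z, delta=0.0):
    # one first-order splitting cycle per step for the rescaled two-component GPE
    kin = np.exp(-0.5j * dt * k2)                    # k2 = kx^2 + ky^2 + (gamma_z*kz)^2
    cc = np.abs(coupling)
    cosc, sinc = np.cos(cc * dt), np.sin(cc * dt)
    phase = np.divide(coupling, cc, out=np.zeros_like(coupling), where=cc > 0)
    for _ in range(steps):
        # trap + mean-field (+ two-photon detuning) step in real space
        va = 0.5 * x2 + gamma_z * (kappa[0, 0] * abs(psi_a)**2 + kappa[0, 1] * abs(psi_b)**2)
        vb = 0.5 * x2 + gamma_z * (kappa[1, 0] * abs(psi_a)**2 + kappa[1, 1] * abs(psi_b)**2) - delta
        psi_a, psi_b = psi_a * np.exp(-1j * dt * va), psi_b * np.exp(-1j * dt * vb)
        # exact rotation generated by the (complex, spatially dependent) Rabi coupling
        psi_a, psi_b = (cosc * psi_a - 1j * phase * sinc * psi_b,
                        cosc * psi_b - 1j * np.conj(phase) * sinc * psi_a)
        # kinetic step in Fourier space
        psi_a = np.fft.ifftn(kin * np.fft.fftn(psi_a))
        psi_b = np.fft.ifftn(kin * np.fft.fftn(psi_b))
    return psi_a, psi_b
\end{verbatim}
During the pulse ($t<t_d$) the time step has to be chosen much smaller than $t_d$, as discussed above, and the coupling array is set to zero for $t>t_d$.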
\n\nA basic qualitative anticipation of the dynamics can be found by \nprojecting the knot orthogonally to the beam- or $z$-axis and considering the momentum associated with \nthe (unit-)areas, as elaborated in~\\cite{Ricca:Proc:2013}. Consider the projection in the $(x,y)$-plane, as \nshown in~\\reffig{fig:hopf_projection}(c). \nThe index $I_j$ assigned to each of the areas $R_j$ enclosed by the vortex lines $\\gamma$ can be found by evaluating the sum~\\cite{Ricca:Proc:2013}\n\\begin{equation}\n I_j=\\sum_{{\\bf \\rho}\\cap {\\bf \\gamma}} \\mathrm{sign} \\left( {\\bf e_z} {\\bm \\rho} \\times {\\bf t} \\right),\n \\label{eq:I_j}\n\\end{equation}\nwhich runs over all intersections of the chosen vector ${\\bf\\rho}$ with the projected vortex line $\\gamma$ for a given region $R_j$. \nHere, ${\\bm \\rho}$ denotes an arbitrary vector pointing from the inside to the outside of the enclosed area \nand $\\mathrm{sign}$ is the usual sign-function taking the values $\\mathrm{sign}(\\ast)=\\pm 1$. \nFurthermore, the vector $\\bf{t}$ is a tangent vector to the vortex line curve (denoted as $\\gamma$) at the point where ${\\bm \\rho}$ crosses $\\gamma$. \nThe direction of $\\bf t$ is given by the orientation of the arcs, which is determined by the phase~[see~\\reffig{fig:hopf_projection}(c)].\nThe values assigned to the regions in~\\reffig{fig:hopf_projection}(c) correspond to the values computed by~\\refeq{eq:I_j}.\nThe associated momenta of a region $R_j$ can be found by~\\cite{Ricca:Proc:2013}\n\\begin{equation}\n \\left({\\bf p}_z\\right)_j=\\oint_{R_j} {\\bm \\omega} d^2r I_j,\n\\end{equation}\nwhere $\\bm \\omega$ denotes the vorticity. \n\n\\begin{figure}\\begin{center}\n\\includegraphics[width=\\columnwidth]{fig2.eps}\n\\caption{\n(color online) (a) Illustration of the phase of $\\psi_b$ in the $(x,y)$-plane after short time evolution. (b) Additional to the phase in the $(x,y)$-plane from Fig. (a), \nthe green surface represents a low-value isodensity surface of $|\\psi_b|^2$ around the \nvortex line imprinted on the BEC. (c) Projection of Fig (b) into the $(x,y)$-plane and associated indices according to~\\refeq{eq:I_j}. Expected dynamics should thus be, that \nthe central region moves faster in the $z$-direction relative to its neighbouring regions. \n(d) shows again the isosurface from Fig. (b), however, the coloring of the isosurface illustrates the value of the phase at each position of the isosurface. \n\\label{fig:hopf_projection}}\n\\end{center}\\end{figure}\n\nWhat one can deduce from these arguments is a movement in the positive $z$-direction with the central region traveling fastest [\\reffig{fig:hopf_projection}(c)]. \nLet us now look at the numerically computed dynamics. \nSnapshots of the latter are shown in~\\reffig{fig:hopf_to_unknot} (see also movies \\href{hopf_1.mp4}{\\textit{hopf\\_1.mp4}} and \\href{hopf_2.mp4}{\\textit{hopf\\_2.mp4}}).\nClearly, there is an overall center-of-mass mass motion towards the positive $z$-direction, as expected. \nHowever, reconnection of vortex lines as well as finite size effects of the condensate leading to sharp gradients in the density lead to a very involved dynamics, \nwhich we were not able to predict or understand in simple terms.\nSurprisingly, the formation decays into two unknots that propagate into the positive and negative $y$-direction, respectively. 
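To make the evaluation of the sum in~\refeq{eq:I_j} concrete, we note that for a closed, oriented projected curve the signed crossing count is equivalent to the winding number of the curve around a point of the region; the following short sketch (ours, with this reformulation as the only assumption) computes it from a sampled curve:
\begin{verbatim}
import numpy as np

def region_index(point, curve):
    # winding number of the oriented projected vortex curve around `point`;
    # `curve` is an (N, 2) array of (x, y) samples ordered along the tangent t
    rel = curve - np.asarray(point)
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    dang = np.diff(np.concatenate([ang, ang[:1]]))      # close the loop
    dang = (dang + np.pi) % (2 * np.pi) - np.pi          # wrap to (-pi, pi]
    return int(np.round(dang.sum() / (2 * np.pi)))

# example: a counter-clockwise unknot (circle); inside -> 1, outside -> 0
s = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(s), np.sin(s)], axis=1)
print(region_index((0.0, 0.0), circle), region_index((2.0, 0.0), circle))
\end{verbatim}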
\n\n\\begin{figure}\\begin{center}\n\\includegraphics[width=\\columnwidth]{fig3.eps}\n\\caption{\n(color online) Dynamics of the condensate wavefunction $\\psi_b$, whose vortex lines form a Hopf-link. Both upper and lower row show the dynamics in timesteps of \n$\\Delta t=0.1$ (corresponding to $\\Delta t=1.6$ms) starting from $t=0.05$ (corresponding to $t=0.8$ms). \nBoth rows use the same isosurface for $|\\psi_b|^2$, however, in the lower row we cropped the box at a smaller value, so that the ``boundary'' of\nthe condensate does not obfuscate the dynamics of the vortex lines. Additional to that, we used the phase to color the isosurface in the lower row. \nWe can see an overall drift the positive $z$-direction, which can be understood \nfrom~\\reffig{fig:hopf_projection}. \nSee also the movies \\href{hopf_1.mp4}{\\textit{hopf\\_1.mp4}} and \\href{hopf_2.mp4}{\\textit{hopf\\_2.mp4}} for the full propagation dynamics.\n\\label{fig:hopf_to_unknot}}\n\\end{center}\\end{figure}\n\n\\subsection{Trefoil}\\label{sec:trefoil}\n\nIn this section we will study the dynamics of a trefoil imprinted on a BEC. The dynamics is generic, quite different initial conditions lead to similar results,\nthe basics of which can be understood in the simple terms described before. \n\nIn this case, let $w=1.2$. Instead of using the prefactors $a_{l,p}(w)$ from~\\refeq{eq:determine_alp}, \nwe use the optimized prefactors given in~\\cite{King:thesis,Dennis:Nature:2010} of the Laguerre-Gaussian modes for the light field. \n\\begin{equation}\n \\fl \\frac{\\Omega_a\\Omega_b^*}{8\\omega\\Delta} = A( 1.51\\mathrm{LG}_{00} - 5.06\\mathrm{LG}_{10} + 7.23\\mathrm{LG}_{20}- 2.03\\mathrm{LG}_{30} - 3.97\\mathrm{LG}_{03} )\\Theta(t_{d}-t).\n\\end{equation}\nTo this end, let us use $\\sigma=0.78$ corresponding to $2.65\\mu$m and $A=700$. \nFurthermore, let us choose $\\kappa = 3753$, $\\gamma_z = 0.05$. \nThis can be realized using again $\\omega=2\\pi\\times 10 $Hz, $\\omega_z=1\\pi $Hz,\n$N=1.85\\times10^5$, $a_{ab}=5.5$nm and $\\lambda=780$nm. \nWith respect to our Rabi pulse, we use\na pulse duration of $t_{d}=0.004$ corresponding to $63.7\\mu$s.\n\n\\reffig{fig:trefoil_projection}(a) illustrates the phase of $\\psi_b$ and (b) its vortex core shortly after imprinting. \nProjecting onto the $(x,y)$-plane and assigning values according to~\\refeq{eq:I_j} leads to~\\reffig{fig:trefoil_projection}(c).\nAgain, we can expect that the overall knot will propagate in the positive $z$-direction, and the central part carries the largest momentum.\n\\reffig{fig:trefoil_projection}(d) illustrates the expected reconnection dynamics according to the rules found in e.g.~\\cite{Koplik:PRL:1993}.\nThe arrows indicate the direction of ${\\bf t}$, which is again determined by the phase. The inset (grey dashed box) illustrates how \nthe vortex lines should reconnect. If we apply this simple rule to all three crossings, we can expect that the green vortex line evolves into what is depicted by the solid \nblack line, i.e. a decay into two unkots.\n\n\\begin{figure}\\begin{center}\n\\includegraphics[width=\\columnwidth]{fig4.eps}\n\\caption{\n(color online) (a) Illustration of the phase of $\\psi_b$ in the $(x,y)$-plane after short time evolution. (b) Additional to the phase in the $(x,y)$-plane from Fig. (a), \nthe green line represents a low-value isodensity surface of $|\\psi_b|^2$ around the \nvortex line imprinted on the BEC. (c) Projection of Fig (b) into the $(x,y)$-plane and associated indices. 
\nSince the indices reflect the momenta of the regions, we expect that \nthe central region moves faster in the $z$-direction relative to its neighboring regions. \n(d) shows the expected breakup of the green vortex lines \ninto the solid black lines. The black arrows illustrate the orientation, which is determined by the phase. The grey dashed inset illustrates \nthe expected product of collision and reconnection of the vortex lines according to, e.g.~\\cite{Koplik:PRL:1993}.\n\\label{fig:trefoil_projection}}\n\\end{center}\\end{figure}\n\nLet us now consider the dynamics, which is shown in~\\reffig{fig:vortex_expelled} (see also the movie \n\\href{trefoil_1.mp4}{\\textit{trefoil\\_1.mp4}} and \\href{trefoil_2.mp4}{\\textit{trefoil\\_2.mp4}} for full dynamics).\nWe see that the vortex lines of this specific trefoil knot first reconnect [see~\\reffig{fig:vortex_expelled}], and the \ncentral regions travels fastest into the positive $z$-direction according to our expectation~\\reffig{fig:trefoil_projection}(b--c).\nAfter that, the central region expels an unknot or vortex ring which decouples from the rest of the knot, as shown in~\\reffig{fig:vortex_expelled}. \nThe single unknot propagates to the positive $z$-direction faster than the remaining part of the knot. \nInterestingly, our dynamics differs from the usual dynamics \nof a clear breakup into two unknots as expected from [\\reffig{fig:trefoil_projection}(c), solid black line] \nand what has been found in~\\cite{Barenghi:PRE:2012,Irvine:arXiv:2015}. \nDifferences to previous observations are due to finite size effects of the cloud, that in our case \nthe knot has a large aspect ratio, which with our rescaling basically means that the dynamics in the $z$-direction is much slower, smallness of the knot, and finally that we \nactually consider two coupled fields (dynamics of $\\psi_a$ not shown). \n\n\\begin{figure}\\begin{center}\n\\includegraphics[width=\\textwidth]{fig5.eps}\n\\caption{\n(color online) Dynamics of the condensate wavefunction $\\psi_b$, whose vortex lines form a trefoil knot. Both upper and lower row show the same dynamics in timesteps of \n$\\Delta t=0.1$ (corresponding to $\\Delta t=0.8$ms) starting from $t=0.05$ (corresponding to $t=0.8$ms). \nBoth rows use the same isosurface for $|\\psi_b|^2$, however, in the lower row we cropped the box at a smaller value, so that the ``boundary'' of\nthe condensate does not obfuscate the dynamics of the vortex lines. Additional to that, we used the phase to color the isosurface in the lower row. \nUpon evolution, vortex cores reconnect, which leads to a decay into a vortex ring being expelled from \nthe central region, which can be understood from~\\reffig{fig:trefoil_projection}).\nSee also the movie \\href{trefoil_1.mp4}{\\textit{trefoil\\_1.mp4}} and \\href{trefoil_2.mp4}{\\textit{trefoil\\_2.mp4}} for the full propagation dynamics.\n\\label{fig:vortex_expelled}}\n\\end{center}\\end{figure}\n\n\\section{Conclusions}\nRecently there has been a growing interest in the dynamics of knotted vortex lines in BECs. \nHowever, thus far there has been no actual proposal on how to excite such matter waves in a controlled fashion. \nIn this work, we have presented a setup with physically realistic parameters to create such knotted vortex lines in ultracold matter waves. \nWe combined recent theoretical and experimental results from complex light shaping which allowed us to create knotted vortex lines embedded in light fields. 
We \nuse the latter as a probe field for three-level atoms in a $\\Lambda$-type setup to inscribe the nodal lines to BECs. \nThe setup is quite generic, and allows investigation of a large variety of potential dynamics. \nThe finite time span to populate state $\\psi_b$ reduces production of soundwaves. \nThe finiteness of the condensate, the smallness of the knot as well as the reconnections of the vortex lines give\nrise to a very involved dynamics. \nThe velocity in the $z$-direction of the vortex knot should not be large. \nThis velocity can be controlled by the ratio between the trapping frequencies $\\gamma_z=\\omega_z\/\\omega$ relative to the condensate. \nThe probe field which determines $\\Omega_a$ should not be too spatially broad, otherwise the value of $\\gamma_z$ required to fully embed the knotted vortex line \nbecomes impractical. We have assumed that the light field can be treated within the paraxial approximation, although we note that, in principle, \nthis condition can be relaxed to non-paraxial knotted light fields~\\cite{Dennis:OL:11}. \nDue to the tight focusing of the optical beam, only moderate powers of the light fields that determine the Rabi-frequencies $\\Omega_i$ are required. \nThus, the setup should work generically for a large range of parameters. \nWe have shown two illustrative examples of a trefoil knot and a Hopf-link, and discussed the dynamics using already well-established techniques. \nThe data presented in this paper is available online at~\\cite{Maucher:data}.\n\n\\section{Acknowledgements} \nThis work was funded by the Leverhulme Trust Research Programme Grant RP2013-K-009, SPOCK: Scientific Properties Of Complex Knots. \nWe would like to thank P. M. Sutcliffe, J. L. Helm, T. P. Billam, D. Sugic and M. Dennis for stimulating discussions. \n\n\\section*{References}\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDetailed understanding of non-contact friction and energy transfer processes in nanostructures is of great importance, both \nfrom the conceptual and practical viewpoints.\nExisting theoretical studies, starting with the seminal paper by Pendry \\cite{Pendry1}, mostly consist of calculations of friction coefficients, i.e. friction force between two parallel dielectric plates (e.g. supported graphenes) in uniform relative motion which is experimentally not easily measured (e.g. current drag in one graphene caused by current flow in another \none) \\cite{Theory1,Theory2,Theory3,Theory4,Theory5,Theory6,Theory7}.\n\nWhile the experiments with two slabs in parallel relative motion with constant velocity are difficult to perform, we suggest here \nthat for the same systems experiments with slabs in relative oscillatory motion with fixed or variable frequency might be easier to \nperform, and could lead to new and interesting observations. \nRecently a similar approach has been realized experimentally \\cite{QF_exp_osc1,QF_exp_theory_osc,QF_exp_osc2,QF_exp_osc3}.\nIn these experiments the system (usually an AFM tip above the surface) oscillates at some characteristic \nfrequency. These oscillations are then, because of various dissipation mechanisms (which includes quantum friction), damped. \nOur model is based on a slightly different concept; one of the slabs, e.g. the AFM tip, is driven with variable frequency. This means that the friction can \nbe deduced from the energy dissipated in one oscillating cycle. 
\nIn this paper we provide a general theoretical description of such processes, expecting that this method might become a useful tool to study dynamical properties of \nlow-dimensional systems \\cite{Munez}.\n\n\nThe main objective of this paper is therefore a theoretical description of these phenomena in systems consisting \nof two non-touching polarizable media, specifically of the conservative (van der Waals or Casimir) and \ndissipative forces (quantum friction) between two quasi-two-dimensional (q2D) crystals in \nrelative parallel and oscillatory motion. While the case of slabs in parallel uniform motion has been extensively studied \\cite{Pendry1,Persson1,Volokitin1,\nVolokitin2,Theory4,Pendry2,Philbin}, here we develop an analogous theory describing the interaction of atomically thick slabs (q2D crystals) in oscillatory motion. \n\nIn Sec.\\ref{Sec2} the expressions for the van der Waals and dissipative energies and forces are derived for such a q2D system in a very general case,\nfor variable slab temperatures and dynamical properties characterized by their surface response functions $D_1$ and \n$D_2$, and for variable oscillating frequencies and amplitudes. \nWe assume 2D translational invariance and neglect retardation for the slab distances in consideration. \nFor the sake of clarity and comparison, in Appendix~\\ref{AppA} we derive analogous results for the \ncase of parallel uniform motion, recovering but also generalizing some earlier results \\cite{trenjePRB,PFK}. \n\nIn Sec.\\ref{DerofD} we derive general expressions for the surface response functions $D_i$ of multilayer slabs, later to \nbe specified for monolayers of a substance like graphene or silicene adsorbed on dielectric substrates. The surface response functions \n$D_1$ and $D_2$ will be the key ingredients in the expressions describing dissipative and \nreactive processes in Sec.\\ref{Sec2} and Sec.\\ref{DerofD}. \nIn Sec.\\ref{DerofD} we also show how to calculate the surface response functions $D_i$ for the specific case of q2D crystals on a dielectric substrate.\nThe expression for the surface excitation propagator of a system of \ntwo coupled slabs is also derived.\n\nIn Sec.\\ref{nabnak} we present the models used to describe the q2D crystal and substrate dynamical response. \nWe study the specific case of a graphene monolayer on a dielectric substrate, which is chosen to be the ionic \ncrystal SiO$_2$. \nThe substrate is considered as a homogeneous semi-infinite ionic crystal SiO$_2$ with the appropriate dielectric function in \nthe long-wavelength limit. The graphene monolayer dynamical response is determined from \nfirst principles. Some computational details are also specified. \n\nIn Sec.\\ref{resuuult} the general expressions of the previous sections are applied to the system of \ntwo slabs, where each slab represents a graphene($E_{Fi}$)\/SiO$_2$ system, and where the graphene doping is characterized \nby the Fermi energy $E_F$ relative to the Dirac point.\n\nIn Sec.\\ref{cmspe} we demonstrate how the spectra of electronic excitations in one slab \nand in two coupled slabs depend on the graphene doping $E_F$. \nThe form of these coupled excitations is responsible for the behaviour of the attractive forces and dissipation.\nWe first discuss in Sec.\\ref{weakvdW} the modification of the van der Waals force for oscillating slabs in comparison \nwith static ones. The van der Waals energies depend on two factors: they increase with increased \ngraphene doping, but are reduced for asymmetric doping, when the excitations in the two slabs are off-resonance. 
\nThe dynamical vdW energy shows unusual behavior: it starts as a plateau and then decreases.\nThis is because, for low driving frequencies $\\omega_0<\\omega_p$, the fast Dirac plasmon in one slab still perfectly follows the Doppler shifted charge density fluctuations in the other slab. \nFor larger driving frequencies this is not the case and the vdW energy decreases. \nFinally, for small or zero doping the $\\pi\\rightarrow\\pi^*$ and $\\pi\\rightarrow\\sigma$ excitations \ncause a linear weakening of the dynamical vdW energy. \n \nIn Sec.\\ref{DISsub} we calculate and discuss how the dissipated power depends on various parameters: \nthe driving amplitude $\\rho_0$ and frequency $\\omega_0$, the separation $a$ between the slabs, and the \nsubstrate. We find a simple $\\rho^2_0$ dependence, while the $\\omega_0$ dependence is determined \nby the intensity of the resonant coupling between hybridized Dirac plasmons and substrate TO phonons.\nWe find that in realistic graphenes the dissipated power peak is strongly reduced and red shifted in comparison with the Drude model, in which the excitation of undamped Dirac plasmons produces \nan unrealistically strong $2\\omega_p$ peak in the dissipated power.\nWe also explain why the substrate substantially reduces the dissipated power peak. \nFor larger separations $a$ additional peaks appear in the dissipated power, originating from the excitations of hybridized substrate phonons. \n\nIn Sec.\\ref{DISdop} we explore how the dissipated power depends on the graphene dopings.\nWe show that if one graphene is pristine ($E_F=0$) the strong $2\\omega_p$ peak \nin the dissipated power disappears. Moreover, for larger separations the doping causes shifts, as well as the appearance and disappearance, of many peaks originating \nfrom the resonant coupling between hybridized substrate phonons and Dirac plasmons.\n\nIn Sec.\\ref{Sec5} we present the conclusions.\n\n\n\\section{General theory: Oscillating slabs}\n\\label{Sec2}\n\n\n\\subsection{Van der Waals energy and force}\nIn Appendix \\ref{vdWenergyforce} we have derived the van der Waals energy and force between two slabs \nin uniform relative motion in some detail because it will help us to treat the \nsimilar problem of two oscillating slabs. \n\nWe shall later assume that the slabs consist of graphene monolayers with variable doping, deposited on dielectric slabs of thickness $\\Delta$ \ndescribed by local dielectric functions $\\epsilon(\\omega)$, as shown in Fig.\\ref{Fig1}. The left slab mechanically oscillates with frequency $\\omega_0$ and \namplitude ${\\hbox{\\boldmath $\\rho$}}_0$ relative to the right slab. Again we calculate the diagram in Fig.\\ref{FigA1} as in Appendix \\ref{vdWenergyforce}, but now the slab parallel coordinates change in time \nas \n\\begin{equation}\n{\\hbox{\\boldmath $\\rho$}}-{\\hbox{\\boldmath $\\rho$}}_1\\rightarrow{\\hbox{\\boldmath $\\rho$}}-{\\hbox{\\boldmath $\\rho$}}_1-{\\hbox{\\boldmath $\\rho$}}_0(\\sin\\omega_0t-\\sin\\omega_0t_1)\n\\label{kukuli}\n\\end{equation}\nso that instead of (\\ref{nabij3}) we have\n\\begin{equation}\n\\begin{array}{c}\nE_{c}=\\int^{\\infty}_{-\\infty}dt_1\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\ne^{-i{\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0(\\sin\\omega_0t-\\sin\\omega_0t_1)}\n\\nonumber\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty} dzdz_1dz_2dz_3 \nS_1({\\bf Q},z,z_1,t-t_1)V({\\bf Q},z,z_3)\n\\nonumber\\\\\n\\nonumber\\\\\nD_2({\\bf Q},z_3,z_2,t-t_1)V({\\bf Q},z_2,z_1). 
\n\\end{array}\n\\label{bijem3}\n\\end{equation}\n\n\\begin{figure}[t]\n\\includegraphics[width=0.9\\columnwidth]{Fig1.pdf}\n\\caption{Geometry of the system.}\n\\label{Fig1}\n\\end{figure} \nIf we use \n\\[\ne^{iz\\sin\\phi}=\\sum^{\\infty}_{m=-\\infty}J_m(z)e^{im\\phi} \n\\]\nwhere $J_m$ are Bessel functions, after Fourier transformation in $\\omega$ space, using expressions (\\ref{Def1}--\\ref{Def3}), (\\ref{impsv}) and \nintegration over $z$ coordinates we obtain \n\\begin{equation}\n\\begin{array}{c}\nE_{c}=\\hbar\\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}\n\\sum^{\\infty}_{m,m'=-\\infty}J_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)J_{m'}({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\hspace{3cm}\n\\nonumber\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}\n\\left[2n_1(\\omega)+1\\right]e^{i(m-m')\\omega_0t}\\hspace{3cm}\n\\nonumber\\\\\n\\nonumber\\\\\nImD_1({\\bf Q},\\omega)ReD_2({\\bf Q},\\omega+m\\omega_0).\\hspace{3cm}\n\\end{array}\n\\end{equation}\nHere we have also used the fact that $ImD_2({\\bf Q},\\omega)$ is an antisymmetric function of \n$\\omega$ and does not contribute to integration. \nWe see that the energy oscillates in time with frequencies $(m-m')\\omega_0$. \nIf we assume to measure energies on a time scale $\\Delta t>T$, where $T=\\frac{2\\pi}{\\omega_0}$ is the maximal \nduration of one cycle, then we can average over $T$\n\\begin{equation}\n\\frac{1}{T}\\int^{T}_0dte^{i(m-m')\\omega_0t}=\\delta_{mm'},\n\\label{Timeaverage}\n\\end{equation}\nand find the result independent of time: \n\\[\n\\begin{array}{c}\nE_{c}=\\frac{\\hbar}{2}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}\n\\sum^{\\infty}_{m=0}(2-\\delta_{m0})J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\n\\nonumber\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}\\ \\left\\{[2n_1(\\omega)+1]\nImD_1({\\bf Q},\\omega)ReD_2({\\bf Q},\\omega+m\\omega_0)+\\right.\n\\nonumber\\\\\n\\nonumber\\\\\n\\left.[2n_2(\\omega)+1]ImD_2({\\bf Q},\\omega)ReD_1({\\bf Q},\\omega+m\\omega_0)\\right\\},\n\\end{array}\n\\]\nwhere the expression in curly brackets is fully analogous to the one in (\\ref{jura66}), but now $\\omega'\\rightarrow\\omega_m=\\omega+m\\omega_0$.\nInclusion of higher order processes follows the same procedure as for the parallel motion in \\ref{vdWenergyforce}. 
After \nintegration over the coupling constant, we obtain the result analogous to (\\ref{jujujuju7}) \n\\begin{eqnarray}\nE_{c}=\\frac{\\hbar}{2}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\\sum^{\\infty}_{m=0}(2-\\delta_{m0})J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\times\n\\label{vdwFIN}\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}A({\\bf Q},\\omega,\\omega_m)\\hspace{3cm}\n\\nonumber\n\\end{eqnarray}\nwhere $A$ is given by (\\ref{jalko}) and (\\ref{jujucka}), with $\\omega_m=\\omega+m\\omega_0$.\n\nAgain, the limiting cases can be obtained from Sec.\\ref{vdWenergyforce}.\nFor $\\omega_0=0\\ (\\omega'=\\omega)$ and ${\\hbox{\\boldmath $\\rho$}}_0=0$ we find the well known result for van der \nWaals interaction when the slabs are at rest \\cite{vdW2007,Pedro}: \n\\begin{eqnarray}\nE_{c}(a)=\\frac{\\hbar}{2}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\\int^{\\infty}_{0}\\frac{d\\omega}{2\\pi}\\ sgn\\omega\\times\\hspace{2cm} \n\\nonumber\\\\\n\\nonumber\\\\\n\\hspace{3cm}Im\\ln\\left[1-e^{-2Qa}D_1({\\bf Q},\\omega)D_2({\\bf Q},\\omega)\\right]\n\\nonumber\n\\end{eqnarray}\nFor finite frequency $\\omega_0$ and $D_1=D_2=D$ we find: \n\\begin{eqnarray}\nE_{c}(a)=\\frac{\\hbar}{2}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\n\\sum^{\\infty}_{m=0}(2-\\delta_{m0})J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\n\\nonumber\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}sgn\\omega\\ \nIm \\ln\\left[1-e^{-2Qa}D({\\bf Q},\\omega)D({\\bf Q},\\omega_m)\\right].\n\\nonumber\n\\end{eqnarray}\nWe notice that the frequency integrals are the same as \nin (\\ref{jujujuju7}--\\ref{sinko}). Also, the attractive van der Waals force \nbetween two oscillating slabs is given by \n\\begin{eqnarray}\nF_{\\perp}(a)=-\\frac{dE_{c}(a)}{da}=\\hspace{3cm}\n\\nonumber\\\\\n\\nonumber\\\\\n\\hbar\\int\\frac{d{\\bf Q}}{(2\\pi)^2}Qe^{-2Qa}\n\\sum^{\\infty}_{n=0}(2-\\delta_{m0})J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\times\n\\nonumber\\\\\n\\nonumber\\\\\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}B({\\bf Q},\\omega,\\omega_m) \n\\end{eqnarray}\nwhere the function $B$ is given by (\\ref{prcko}) and (\\ref{prdf99}). \nThe same holds for the $\\omega_0\\rightarrow 0$ or $D_1=D_2=D$ limits when the \nexpressions for $B$ become (\\ref{jurec}) or (\\ref{kurec}), respectively.\n\\subsection{Dissipated power}\nWe can perform the calculation of the dissipated power for two slabs oscillating parallel to each \nother with amplitude ${\\hbox{\\boldmath $\\rho$}}_0$ and frquency $\\omega_0$ in analogy with the previous treatment of \ntwo slabs in uniform relative motion in Sec.\\ref{jurniga}. Again, we have to transform the parallel \ncoordinates in the left slabs as in (\\ref{kukuli}).\nThen (\\ref{losse5}), after integration over $t_1$ becomes \n\\begin{equation}\n\\begin{array}{c}\nP_{12}(t)=-i\\hbar\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\\int\\frac{d\\omega}{2\\pi}\\sum^{\\infty}_{m,m'=-\\infty}\n\\\\\n\\\\\n(-1)^{m+m'}e^{i(m'-m)\\omega_0t}(m'\\omega_0-\\omega)\\ \nJ_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)J_m'({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\n\\\\\n\\\\\nS_1({\\bf Q},|\\omega|,z,z_1)\\otimes V({\\bf Q},z,z_3)\\otimes \n\\\\\n\\\\\nD_2({\\bf Q},m'\\omega_0-\\omega,z_3,z_2)\n\\otimes V({\\bf Q},z_2,z_1) \n\\end{array}\n\\end{equation}\nWe see that the energy transfer rate is time dependent and oscillates \nwith frequency $(m'-m)\\omega_0$. 
Again, from (\\ref{Timeaverage}) we see \nthat for time intervals large with respect to the oscillation period $T$ the \nterms $m\\ne m'$ do not contribute and the energy transfer rate is\n\\begin{equation}\n\\begin{array}{c}\nP_{12}=\n-i\\hbar\\int\\frac{d{\\bf Q}}{(2\\pi)^2}\\int\\frac{d\\omega}{2\\pi}\\sum^{\\infty}_{m=-\\infty}(m\\omega_0-\\omega)\\ \nJ^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\n\\\\\n\\\\\nS_1({\\bf Q},|\\omega|,z,z_1)\\otimes V({\\bf Q},z,z_3)\\otimes \n\\\\\n\\\\\nD_2({\\bf Q},m\\omega_0-\\omega,z_3,z_2)\n\\otimes V({\\bf Q},z_2,z_1) \n\\end{array}\n\\label{loss11}\n\\end{equation} \nIf we now use (\\ref{Def1}), the definitions (\\ref{Def2}) and (\\ref{Def3}) \nof the surface correlation function and the surface excitation propagator, respectively, and the connection (\\ref{impsv}) between the surface \ncorrelation function and the imaginary part of \nsurface excitation propagator, equation (\\ref{loss11}) can be written as\n\\begin{eqnarray}\nP_{12}=-\\frac{\\hbar}{\\pi}\\sum^{\\infty}_{m=-\\infty}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\hspace{3cm} \n\\nonumber\\\\\n\\label{loss12}\\\\\n\\int\\frac{d\\omega}{2\\pi}\\ \\omega_m\\ [2n_1(\\omega)+1]\\ ImD_1({\\bf Q},\\omega) ImD_2({\\bf Q},\\omega_m).\\hspace{1cm} \n\\nonumber\n\\end{eqnarray} \nEvaluating (\\ref{loss12}) we have used the fact that the real part of the function under summation and integration is \nodd and the imaginary part is an even function of $n$ and $\\omega$.\n$P_{12}$ is the energy transferred from the left to the right slab.\nNow we have to repeat the discussion in Sec.\\ref{jurniga} and substract the part of this energy which \nwill be reversibly returned to the left slab. The same arguments, leading to (\\ref{losse17}), will give \nthis energy to be\n\\begin{eqnarray}\nP'_{12}=\\hbar\\sum^{\\infty}_{n=-\\infty}\\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_n({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\hspace{2cm} \n\\nonumber\\\\\n\\label{loss13}\\\\\n\\int\\frac{d\\omega}{2\\pi}\\ \\omega\\ [2n_1(\\omega)+1]\\ Im D_1({\\bf Q},\\omega) Im D_2({\\bf Q},\\omega_n). \n\\nonumber\n\\end{eqnarray} \nExpression (\\ref{loss13}) represents the energy transferred from the \nleft to right but which will be reversibly returned, as shown in Fig.\\ref{FigA3}b. Therefore the energy \nwhich is irreversibly transferred from the left to the right, i.e. the dissipated power, is \n\\begin{equation}\n\\begin{array}{c}\nP_{1}=P_{12}-P'_{12}=\n2\\hbar\\sum^{\\infty}_{m=1}m\\omega_0\\ \\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0) \n\\\\\n\\\\\n\\int\\frac{d\\omega}{2\\pi}\\ [2n_1(\\omega)+1] ImD_1({\\bf Q},\\omega) ImD_2({\\bf Q},\\omega_m).\n\\end{array}\n\\label{diss1}\n\\end{equation}\nAnalogous calculation would give the energy dissipated in the process where the charge \nfluctuation in the right slab induces fluctuations in the left slab. We have to \nexchange $1$ and $2$ in (\\ref{diss1}) and replace $m\\rightarrow-m$. 
\nRepeating the steps in (\\ref{jutro}) the final result becomes: \n\n\n\\begin{eqnarray}\nP=P_1+P_2=\\hspace{3cm}\n\\nonumber\\\\\n\\nonumber\\\\\n4\\hbar\\sum^{\\infty}_{m=1}m\\omega_0\\ \\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}\\hspace{2cm}\n\\nonumber\\\\\n\\label{njunja}\\\\\n\\left[n_1(\\omega)-n_2(\\omega_m)\\right]ImD_1({\\bf Q},\\omega)ImD_2({\\bf Q},\\omega_m).\n\\nonumber\n\\end{eqnarray}\nThis expression is analogous to (\\ref{jutro}). For $T=0$ $2n(\\omega)+1\\rightarrow sgn\\omega$ and \n(\\ref{njunja}) can be written as \n\\begin{eqnarray}\nP=\\hspace{5cm}\n\\label{njunja1}\\\\\n\\nonumber\\\\\n4\\hbar\\sum^{\\infty}_{m=1}m\\omega_0\\ \\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\n\\int^{m\\omega_0}_{0}\\frac{d\\omega}{2\\pi}\\hspace{2cm}\n\\nonumber\\\\\n\\nonumber\\\\\nImD_1({\\bf Q},\\omega)ImD_2({\\bf Q},m\\omega_0-\\omega).\n\\nonumber\n\\end{eqnarray}\nAdding higher order terms (\\ref{Eka1},\\ref{Eka2}) we obtain the energy \ndissipated per unit time: \n\\begin{eqnarray}\nP=2\\hbar\\sum^{\\infty}_{m=1}m\\omega_0\\ \\int\\frac{d{\\bf Q}}{(2\\pi)^2}e^{-2Qa}J^2_m({\\bf Q}{\\hbox{\\boldmath $\\rho$}}_0)\\times\n\\nonumber\\\\\n\\label{losshop}\n\\\\\n\\hspace{3cm}\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}C({\\bf Q},\\omega,\\omega_m)\n\\nonumber\n\\end{eqnarray}\nwhere $C$ is given by (\\ref{kaka}). Limiting casses \nare also obtained from (\\ref{losshop}). For $\\omega_0=0$ and\/or for $\\rho_0=0$ \nobviously $P=0$.\n\\section{Derivation of the slab surface excitation propagators $D_{1,2}({\\bf Q},\\omega)$}\n\\label{DerofD}\nThe main quantities which appear in the formula for van der Waals interaction $E_c$ or \ndissipated power $P$ are the surface excitation propagators $D_{1}({\\bf Q},\\omega)$ and $D_{2}({\\bf Q},\\omega)$ of the left (first) and right (second) slab, respectively.\nThe derivation of $D_1$ and $D_2$ is analogous for both slabs, so here we shall derive just one surface \nexcitation propagator $D$. The structure of the monolayer-substrate composite (e.g. graphene on \nSiO$_2$) is shown in Fig.\\ref{Fig2}. The slab consists of the graphene monolayer adsorbed at some small distance $h$ (e.g. $h=0.4$nm) above the substrate of macroscopic thickness $\\Delta$. The \ndielectric, e.g. the SiO$_2$ slab is placed in the region $-\\Delta-h\\le z\\le-h$ and the graphene \nlayer occupies $z=0$ plane. The same model system is used in Refs.\\cite{Ivan1,Ivan2} where \nthe authors explore plasmon-phonon hybridization, stopping power and wake effect produced \nby the proton moving parallel to the composite. The unit cell for such huge nanostructure would \nconsist of hundreds of atoms, so it is impossible to perform full {\\em ab initio} ground state and structure optimization calculation. Moreover, an {\\em ab initio} calculation of the response function would be even more demanding so we need an approximation for the response function calculation. The easiest (and probably the best) approximation is to treat \nthe SiO$_2$ slab as a homogeneous dielectric described by some local dielectric function $\\epsilon_S(\\omega)$ and to consider graphene as a purely 2D system described \nby the response function $R({\\bf Q},\\omega)$, as sketched in Fig.\\ref{Fig2}. 
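Before turning to the formal derivation it is useful to recall the elementary limit that the composite propagator must reproduce (a textbook check, stated here only for orientation). Without the graphene sheet, a semiinfinite local dielectric screens two unit charges sitting at its surface according to $W({\\bf Q},\\omega,0,0)=v_Q\\left[1+\\frac{1-\\epsilon_S(\\omega)}{1+\\epsilon_S(\\omega)}\\right]$, so the corresponding surface excitation propagator is the classical image-charge factor $(1-\\epsilon_S)\/(1+\\epsilon_S)$. The general expressions derived below indeed reduce to this form when the graphene response is switched off and $\\Delta\\rightarrow\\infty$, $h\\rightarrow 0$.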
\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=4.0cm,height=7cm]{Fig2.pdf}\n\\caption{(color online) Simplified model where the SiO$_2$ substrate is shown as a homogenous dielectric slab described by the local dielectric function $\\epsilon_S(\\omega)$ and graphene is described by 2D response function $R({\\bf Q},\\omega)$. $D({\\bf Q},\\omega)$ is the surface excitation propagator of the substrate\/graphene composite.}\n\\label{Fig2}\n\\end{figure}\nIn order to derive the surface excitation \npropagator $D({\\bf Q},\\omega)$ we start from its definition:\n\\begin{eqnarray}\nD({\\bf Q},\\omega)=v_Q\\int^0_{-\\infty}\\ dzdz' e^{-Q(z+z')}R({\\bf Q},\\omega,z,z')= \n\\nonumber\\\\\n\\label{defD}\\\\\n\\frac{1}{v_Q}\\left\\{W({\\bf Q},\\omega,z=0,z'=0)-v_Q\\right\\};\\ i=1,2.\n\\nonumber\n\\end{eqnarray}\nwhich connects the surface excitation propagator with the screened Coulomb interaction $W({\\bf Q},\\omega,z=0,z'=0)$ at $z=z'=0$ surface.\nHere $R({\\bf Q},\\omega,z,z')$ represents the nonlocal dielectric function of graphene\/dielectric composite which we assume occupies the region $z,z'\\le 0$. \n\nIt is well known \\cite{gr2D1,gr2D2,gr2D3,gr2D4} that physical properties of a graphene \nmonolayer in the low ($Q,\\omega$) region can be described to a very good approximation \nassuming the monolayer to be strictly twodimensional, so that the nonlocal independent \nelectron response function can be written as\n\\begin{equation}\nR^{0}({\\bf Q},\\omega,z,z')=R^{0}({\\bf Q},\\omega)\\delta(z)\\delta(z') \n\\end{equation}\nwhere we assume that the graphene lies in the $z=0$ plane and the response function \n$R^{0}({\\bf Q},\\omega)$ can be derived from first \nprinciples, as decribed in Sec.\\ref{nabnak}. \nDynamically screened response function $R({\\bf Q},\\omega)$ in RPA is given as \na series of terms\n\\begin{equation}\nR({\\bf Q},\\omega)=R^{0}+R^{0}v_QR^{0}+...=\\frac{R^0({\\bf Q},\\omega)}{1-v_QR^0({\\bf Q},\\omega)}.\n\\label{RPARR}\n\\end{equation} \nIf we assume for the moment that there is no dielectric in the system (e.g. 
$\\epsilon_S(\\omega)=1$) then the screened Coulomb \ninteraction is simply given by \n\\begin{equation}\nW({\\bf Q},\\omega,z=0,z'=0)=v_Q+v_QR({\\bf Q},\\omega)v_Q.\n\\label{scrCint}\n\\end{equation}\nUsing the definition (\\ref{defD}) the surface excitation propagator becomes\n\\begin{equation}\nD({\\bf Q},\\omega)=v_QR({\\bf Q},\\omega).\n\\end{equation}\nWhen the dielectric slab is introduced, the external charges and charge density fluctuations in the graphene layer do not interact via the bare Coulomb \ninteraction $v_Q$ but via the Columob interaction modified by the presence of \nthe dielectric slab \\cite{Laplace} \n\\begin{equation}\nv_Q\\rightarrow\\tilde{v}_Q(\\omega)=v_Q\\left[1+D_S({\\bf Q},\\omega)\\right],\n\\label{tildeV11}\n\\end{equation}\nwhere the substrate surface excitation propagator is\n\\begin{equation}\nD_S({\\bf Q},\\omega)=D_S(\\omega)\\frac{1-e^{-2Q\\Delta}}{1-D^2_S(\\omega)e^{-2Q\\Delta}}e^{-2Qh}\n\\label{labina}\n\\end{equation}\nand\n\\begin{equation}\nD_S(\\omega)=\\frac{1-\\epsilon_S(\\omega)}{1+\\epsilon_S(\\omega)}\n\\end{equation} \nrepresents the surface excitation propagator of a semiinfinite \n($\\Delta\\rightarrow\\infty$, $h=0$) dielectric.\nThis causes that the screened Colulomb interaction (\\ref{scrCint}) becomes the function \nof $\\tilde{v}_Q(\\omega)$\n\\begin{equation}\nW\\rightarrow\\tilde{W}=\\tilde{v}_Q(\\omega)+\\tilde{v}_Q(\\omega)\\tilde{R}({\\bf Q},\\omega)\\tilde{v}_Q(\\omega),\n\\label{modifiedW}\n\\end{equation}\nwhere, because charge density fluctuations inside graphene also interact via $\\tilde{v}_Q(\\omega)$, the screened response function is modified as\n\\begin{equation}\n\\tilde{R}({\\bf Q},\\omega)=\\frac{R^0({\\bf Q},\\omega)}{1-\\tilde{v}_Q(\\omega)R^0({\\bf Q},\\omega)}.\n\\label{curoni}\n\\end{equation}\nFinally, after inserting (\\ref{modifiedW}) into (\\ref{defD}) we obtain the surface excitation propagator in the presence of \nthe dielectric\n\\begin{eqnarray}\nD({\\bf Q},\\omega)=\\frac{1}{v_Q}\\left\\{\\tilde{v}_Q(\\omega)\\tilde{R}_i({\\bf Q},\\omega)\\tilde{v}_Q(\\omega)+\\right.\n\\label{newD}\\\\\n\\hspace{3cm}\\left.\\tilde{v}_Q(\\omega)-v_Q\\right\\}.\n\\nonumber\n\\end{eqnarray}\nwhich can be rewritten in a more transparent form as\n\\begin{eqnarray}\nD({\\bf Q},\\omega)=\\hspace{5cm}\n\\nonumber\\\\\n\\nonumber\\\\\n\\frac{D_S({\\bf Q},\\omega)+v_QR({\\bf Q},\\omega)+2v_QR({\\bf Q},\\omega)D_S({\\bf Q},\\omega)}{1-v_QR({\\bf Q},\\omega)D_S({\\bf Q},\\omega)}.\n\\end{eqnarray}\nThe spectrum of coupled excitations in a single slab can be calculated from\n\\begin{equation}\nS({\\bf Q},\\omega)=-\\frac{1}{\\pi}ImD({\\bf Q},\\omega).\n\\end{equation}\nFor the coupled slabs described by their surface excitations propagators \n$D_1$ and $D_2$, separated by the distance $a$, in a similar way we can derive \nthe propagator $\\tilde{D}$ for the coupled system \n\\begin{equation}\n\\tilde{D}({\\bf Q},\\omega)=\n\\frac{D_1({\\bf Q},\\omega)+D_2({\\bf Q},\\omega)+2D_1({\\bf Q},\\omega)D_2({\\bf Q},\\omega)}{1-e^{-2Qa}D_1({\\bf Q},\\omega)D_2({\\bf Q},\\omega)}\n\\end{equation}\nand the excitation spectrum of this system is\n\\begin{equation}\n\\tilde{S}({\\bf Q},\\omega)=-\\frac{1}{\\pi}Im\\tilde{D}({\\bf Q},\\omega).\n\\end{equation}\n\\section{Description of substrate and graphene dynamical response}\n\\label{nabnak}\nThe results in Sec.\\ref{DerofD} are quite general and can be applied to a monolayer \nof any material on any dielectric substrate.\nNow we shall specify the dielectric substrate to be the homogenous 
\nlayer of ionic crystal SiO$_2$.\n\nDielectric properties (or dynamical response) of bulk ionic crystals in the long-wavelength limit can \nbe described in terms of their optical phonons at the $\\Gamma$ point. More complex polar crystals such as SiO$_2$ possess a multitude of different optical \nphonons of different symmetries and polarizations. However, here we suppose that \nSiO$_2$ posses two well-defined, non-dispersing transverse\noptical (TO) phonon modes at the frequencies $\\omega_{TO1}$\nand $\\omega_{TO2}$ with the corresponding damping rates\n$\\gamma_{TO1}$ and $\\gamma_{TO2}$, giving rise to a \ndielectric function of the form \\cite{Ivan1,Ivan2}\n\\begin{eqnarray}\n\\epsilon_S(\\omega)=\\epsilon_{\\infty}+(\\epsilon_i-\\epsilon_{\\infty})\\frac{\\omega^2_{TO2}}{\\omega^2_{TO2}-\\omega^2-i\\omega\\gamma_{TO2}}+\n\\nonumber\\\\\n(\\epsilon_0-\\epsilon_i)\\frac{\\omega^2_{TO1}}{\\omega^2_{TO1}-\\omega^2-i\\omega\\gamma_{TO1}} ,\n\\label{dielectric}\n\\end{eqnarray}\nwhere $\\epsilon_0$, $\\epsilon_i$, and $\\epsilon_{\\infty}$ represent \nthe dielectric constant for SiO$_2$ at the zero, intermediate, and very large\nfrequencies. This dielectric function will be inserted in the expression (\\ref{labina}) for the substrate surface excitation propagator $D_S({\\bf Q},\\omega)$. \n\nThe graphene response function $R({\\bf Q},\\omega)$ is given by (\\ref{curoni}) in terms of the noninteracting response function\n\\begin{equation}\nR^{0}({\\bf Q},\\omega)=L\\ R^{0}_{{\\bf G}=0{\\bf G}'=0}({\\bf Q},\\omega)\n\\label{Chi02D}\n\\end{equation}\nwhere the 3D Fourier transform of independent electron response function is given \nby \\cite{PRB13} \n\\begin{eqnarray}\nR^{0}_{{\\bf G}{\\bf G}'}({\\bf Q},\\omega)=\\hspace{5cm}\n\\nonumber\\\\\n\\frac{2}{\\Omega}\\sum_{{\\bf K}\\in S.B.Z.}\\sum_{n,m}\\ \\frac{f_n({\\bf K})-f_m({\\bf K}+{\\bf Q})}\n{\\hbar\\omega+i\\eta+E_n({\\bf K})-E_m({\\bf K}+{\\bf Q})}\\times\n\\label{Resfun0}\\\\\n\\rho_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf G})\\ \\rho^*_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf G'}),\n\\nonumber\n\\end{eqnarray} \nwhere $f_{n{\\bf K}}=[e^{(E_{n{\\bf K}}-E_F)\/kT}+1]^{-1}$ is the Fermi-Dirac distribution at \ntemperature $T$. The charge vertices in (\\ref{Resfun0}) have the form \n\\begin{equation}\n\\rho_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf G})=\n\\int_\\Omega\\ d{\\bf r}e^{-i({\\bf Q}+{\\bf G}){\\bf r}}\\ \\phi^*_{n{\\bf K}}({\\bf r})\\phi_{n{\\bf K}+{\\bf Q}}({\\bf r})\n\\label{Matrel}\n\\end{equation}\nwhere ${\\bf Q}$ is the momentum transfer vector parallel to the $x-y$ plane, ${\\bf G}=({\\bf G}_\\parallel,G_z)$ are $3D$ reciprocal lattice vectors and \n${\\bf r}=({\\hbox{\\boldmath $\\rho$}},z)$ is a $3D$ position vector. Integration in (\\ref{Matrel}) is performed over the normalization volume $\\Omega=S\\times L$, where $S$ is the \nnormalization surface and $L$ is the superlattice constant in $z$ direction (separation between graphene layers is superlattice arrangement). 
\nPlane wave expansion of the wave function has the form \n\\[\n\\phi_{n{\\bf K}}({\\hbox{\\boldmath $\\rho$}},z)=\\frac{1}{\\sqrt{\\Omega}}e^{i{\\bf K}{\\hbox{\\boldmath $\\rho$}}}\\ \\sum_{\\bf G}C_{n{\\bf K}}({\\bf G})e^{i{\\bf G}{\\bf r}},\n\\]\nwhere the coefficients $C_{n{\\bf K}}$ are obtained by solving the Local Density Approximation-Kohn Sham (LDA-KS) equations selfconsistently as will be discussed below.\nHowever, this straightforward calculation of graphene response functions $R({\\bf Q},\\omega)$ is not \nsufficient if we want to investigate the hybridization between the Dirac plasmon and Fuchs-Kliewer (FK) phonons at dielectric surfaces. Namely, due to \nthe very low energy of FK phonons ($\\sim 50$meV) the crossing of their dispersion relations with Dirac plasmon occurs for very small wave vectors ($Q<0.001$a.u.). \nOn the other hand even for very dense $K$-point mesh sampling, as for example $601\\times 601\\times 1$ used in this calculation, \nthe minimum transfer wave vector $Q$ which can be reached (e.g. $Q=0.0026$a.u.$^{-1}$ in this calculation) is still bigger than FK phonon-Dirac \nplasmon crossing wave vector. Therefore we have to find the way how to calculate $R({\\bf Q},\\omega)$ for a denser Q-point mesh \nin the optical $Q\\approx 0$ limit. One possible way is that instead of calculating response function $R^{0}({\\bf Q},\\omega)$ we calculate the optical \n($Q=0$) conductivity $\\sigma(\\omega)$. The optical conductivity in graphene can be written as \\cite{gr2D3} \n\\begin{equation}\n\\sigma(\\omega)=\\sigma^{\\mathrm{intra}}(\\omega)+\\sigma^{\\mathrm{inter}}(\\omega), \n\\label{curren1}\n\\end{equation}\nwhere \n\\begin{equation}\n\\sigma^{\\mathrm{intra}}(\\omega)=\\frac{i\\rho_0}{\\omega+i\\eta_{\\mathrm{intra}}}\n\\label{curren2}\n\\end{equation}\nis intraband or Drude conductivity and where \n\\begin{equation}\n\\rho_0=-\\frac{2}{\\Omega}\\sum_{{\\bf K},n}\\frac{\\partial f^i_n({\\bf K})}{\\partial E_n({\\bf K})}\n|j^{x}_{n{\\bf K},n{\\bf K}}({\\bf G}=0)|^2\n\\label{curren3}\n\\end{equation}\nrepresents the effective number of charge carriers. 
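For orientation we note, as a standard estimate that is not part of the {\\em ab initio} scheme used here, that in the Dirac-cone approximation the effective number of charge carriers reduces to $\\rho_0\\approx e^2E_F\/(\\pi\\hbar^2)$. Keeping only the intraband term (\\ref{curren2}) for a free-standing graphene sheet, the zero of the corresponding two-dimensional RPA dielectric function $1+2\\pi i\\sigma(\\omega)Q\/\\omega$ (Gaussian units, no substrate screening) then yields the familiar square-root Dirac-plasmon dispersion
\\[
\\omega_p(Q)\\approx\\sqrt{2\\pi\\rho_0 Q}=\\sqrt{2e^2E_FQ}\/\\hbar,
\\]
which identifies the plasmon-like branch that will appear in the spectra of Sec.\\ref{resuuult}.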
\nThe interband conductivity is \n\\begin{eqnarray}\n\\sigma^{\\mathrm{inter}}(\\omega)=\n\\frac{-2i}{\\omega\\Omega}\\sum_{{\\bf K},n\\neq m}\\ \\frac{\\hbar\\omega}{E_n({\\bf K})-E_m({\\bf K})}\\times\n\\nonumber\\\\\n\\frac{f^i_n({\\bf K})-f^i_m({\\bf K})}\n{\\hbar\\omega+i\\eta_{\\mathrm{inter}}+E_n({\\bf K})-E_m({\\bf K})}\\times\n\\label{curren4}\\\\\nj^{x}_{n{\\bf K},m{\\bf K}}({\\bf G}=0)\\ [j^{x}_{n{\\bf K},m{\\bf K}}({\\bf G}'=0)]^*\n\\nonumber\n\\end{eqnarray}\nwhere the current vertices are given by\n\\begin{equation}\nj^{\\mu}_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf G})=\n\\int_\\Omega\\ d{\\bf r}e^{-i({\\bf Q}+{\\bf G}){\\bf r}}\\ \nj^{\\mu}_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf r}),\n\\label{curren5}\n\\end{equation}\nand \n\\begin{eqnarray}\nj^{\\mu}_{n{\\bf K},m{\\bf K}+{\\bf Q}}({\\bf r})=\n\\frac{\\hbar e}{2im}\n\\left\\{\\phi_{n{\\bf K}}^*({\\bf r})\\partial_\\mu\\phi_{m{\\bf K}+{\\bf Q}}({\\bf r})\\right.\\hspace{2cm}\n\\label{curren6}\\\\\n\\hspace{2cm}-\\left.[\\partial_\\mu\\phi_{n{\\bf K}}^*({\\bf r})]\\phi_{m{\\bf K}+{\\bf Q}}({\\bf r})\\right\\}.\n\\nonumber\n\\end{eqnarray}\nIn the optical $Q\\approx 0$ limit the independent electron response function can be written in terms of optical \nconductivities (\\ref{curren1}) as \\cite{Zoran}\n\\begin{equation}\nR^0({\\bf Q}\\approx 0,\\omega)=L\\ \\frac{Q^2}{i\\omega}\\sigma(\\omega).\n\\label{chi-sigma}\n\\end{equation}\nFinally, the RPA or screened response function $R({\\bf Q},\\omega)$ can be obtained from (\\ref{chi-sigma}) \nusing (\\ref{RPARR}). \n\nIn the calculation of Sec.\\ref{resuuult} we shall assume the \ngraphene response to be isotropic in the small $({\\bf Q},\\omega)$ limit. This means \nthat the graphene response functions and the corresponding surface excitation functions are functions of \n$Q$ and not of ${\\bf Q}$. \n\n\\subsection{Computational details}\n\\label{Comp}\nThe first part of the calculation consists of determining the KS ground state of the single layer graphene and the corresponding wave functions \n$\\phi_{n{\\bf K}}({\\hbox{\\boldmath $\\rho$}},z)$ and energies $E_n({\\bf K})$. For graphene unit cell constant we use the experimental value of $a=4.651\\ \\mathrm{a.u.}$ \\cite{lattice}, and superlattice unit cell constant (separation of graphene layers) is $L=5a$. For calculating KS wave functions and energies we use a plane-wave self-consistent field DFT code (PWSCF) within the QUANTUM ESPRESSO (QE) package \\cite{QE}. The core-electron interaction was approximated by the norm-conserving pseudopotentials \\cite{normcon}, and the exchange correlation (XC) potential by the Perdew-Zunger local density approximation (LDA) \\cite{lda1}. To calculate the ground state electronic density we use $21\\times21\\times1$ Monkhorst-Pack K-point \nmesh \\cite{MPmesh} of the first Brillouin zone (BZ) and for the plane-wave cut-off energy we choose 50 Ry. \nThe second part of calculation consists of determining the independent electron response function (\\ref{Resfun0}) and conductivity \n(\\ref{curren1}--\\ref{curren4}). In order to achieve better resolution in the long wavelength ($Q\\approx 0$) and low energy ($\\omega\\approx 0$) \nlimit the response function (\\ref{Resfun0},\\ref{Matrel}) and conductivity (\\ref{curren1}--\\ref{curren6}) are evaluated from the wave functions $\\phi_{n{\\bf K}}({\\bf r})$ and energies $E_n({\\bf K})$ calculated for the $601\\times601\\times1$ Monkhorst-Pack K-point mesh which coresponds to 361801 K-points in the first Brillouin zone (1BZ). 
Band summations ($n,m$) in (\\ref{Resfun0}), (\\ref{curren3}) and (\\ref{curren4}) are performed over 30 bands. In the calculation we use two kinds of damping parameters: $\\eta_{\\mathrm{intra}}=10$meV for transitions within the same band ($n\\leftrightarrow n$), and $\\eta_{\\mathrm{inter}}=50$meV for transitions between different bands ($n\\leftrightarrow m$). \nFor the bulk SiO$_2$ dielectric function given by (\\ref{dielectric}) we use the following \nparameters: $\\epsilon_0=3.9$, $\\epsilon_i=3.05$, $\\epsilon_{\\infty}=2.5$, \n$\\omega_{TO1}=55.6$ meV, $\\omega_{TO2}=138.1$ meV, $\\gamma_{TO1}=5.368$ meV and $\\gamma_{TO2}=8.947$ meV, taken from Ref.\\cite{Dielectricpar}. \nFor the gap between graphene and the SiO$_2$ surface we take $h=4$ \\AA\\ ($7.55$ a.u.) \\cite{height}.\n\n\n\\section{Results for graphene monolayers on SiO$_2$ substrates}\n\\label{resuuult}\nThe theoretical expressions derived in Sec.\\ref{Sec2} (and in Appendix \\ref{AppA}) are quite \ngeneral, i.e. they are valid for any pair of crystal slabs described by their response \nfunctions, while the corresponding surface excitation functions derived in \nSec.\\ref{DerofD} are valid for any adsorbed 2D monolayer on any dielectric \nsubstrate. In this section we shall apply these results to calculate the reactive and \ndissipative response of various combinations of slabs consisting of graphene monolayers with variable doping on a SiO$_2$ substrate, using the dynamical surface response functions of \nthese materials given in Sec.\\ref{DerofD}. \n\nBefore proceeding with detailed calculations a few general comments are \nin order. Though the derived expressions for the van der Waals energy and the dissipated \npower, (\\ref{vdwFIN}) and (\\ref{losshop}) respectively, include the temperature \ndependence, in the systems studied here the inclusion of finite temperature has practically \nno effect; therefore all results will be reported for $T=0$. \nThe dependence of these two physical properties on the two parameters, the distance \nbetween the slabs $a$ and the oscillation amplitude $\\rho_0$, can be analyzed if \nwe recognize in the expressions (\\ref{vdwFIN}) and (\\ref{losshop}) the function \n\\begin{equation}\nf_m(x)=\\int^{2\\pi}_0\\frac{d\\phi}{2\\pi}J^2_m(x\\cos\\phi),\n\\label{fm}\n\\end{equation}\nwhich is possible because of the assumed isotropy of the graphene response. The function \n$f_m(x)$ is shown in Fig.\\ref{Fig3} for the first four $m$'s, where $x=Q\\rho_0$. \n\\begin{figure}[t]\n\\includegraphics[width=6cm,height=5cm]{Fig3.pdf}\n\\caption{Function $f_m(x)$ for $m=0$ (blue solid line), $m=1$ (black solid line), $m=2$ (black dashed line) and $m=3$ (black dashed-dotted line). The vertical dashed line denotes the maximum argument $x_{cut}$ defined by the parameters ($a$ and $\\rho_0$) used in the calculation.}\n\\label{Fig3}\n\\end{figure} \nAnother important factor in (\\ref{vdwFIN}) and (\\ref{losshop}) is $e^{-2Qa}$, which defines the \ncutoff wave vector $Q_c$, depending on the slab separation $a$. \nThe separations we shall consider in this calculation are $a=10-50$nm, which defines the cutoff wave \nvector $Q_c\\approx 0.05$ a.u. On the other hand, the amplitudes which will be considered are \n$\\rho_0\\approx 0.1-1$nm. This finally provides the maximum argument $x=Q\\rho_0$ of the functions (\\ref{fm}), which is $x_{cut}\\approx 1$. From Fig.\\ref{Fig3} it is obvious that up to $x_{cut}$ only the $m=0$ \nand $m=1$ terms will contribute. Moreover, for $x<1$ the Bessel functions can be expanded to lowest order, and therefore \n\\begin{equation}\nf_0(x)\\approx 1-\\frac{x^2}{4};\\ \\ \\ f_1(x)\\approx \\frac{x^2}{8}. 
\n\\label{approx}\n\\end{equation}\nIn Fig.\\ref{Fig3} we see that the approximation (\\ref{approx}) is valid almost up \nto $x_{cut}$.\n\n\\subsection{Spectra of coupled modes}\n\\label{cmspe}\n \nIn this section we shall first discuss the spectra of the coupled plasmon\/phonon excitations \nin one and in two graphene\/SiO$_2$ slabs separated by a distance $a$, in order to understand \nthe dominant dissipation mechanisms.\n\nFig.\\ref{Fig4}(a) shows the spectrum of surface excitations $S(Q,\\omega)=-\\frac{1}{\\pi}Im D(Q,\\omega)$ \nin a graphene(200meV)\/SiO$_2$ slab (as shown in Fig.\\ref{Fig2}) and Fig.\\ref{Fig4}(b) in the system \nwhich consists of two graphene\/SiO$_2$ slabs (as shown in Fig.\\ref{Fig1}) separated by a distance $a=5$nm. In the long-wavelength limit the SiO$_2$ surface supports two surface polar (FK) TO phonons with \nflat dispersions, and the doped graphene contains a Dirac plasmon with a square-root dispersion. \nCoupling between these modes results in three branches, as shown in Fig.\\ref{Fig4}(a). For larger \n$Q$ the first and second flat branches are phonon-like, i.e. their induced \nelectrical fields mostly come from polarization modes on the dielectric \nsurface. On the other hand, the third, square-root branch is \nplasmon-like, i.e. its induced electrical field mostly comes from charge density \noscillations localised in the graphene layer. However, in the $Q\\rightarrow 0$ limit \nstrong hybridization (avoided crossings) between these modes occurs and they \npossess a mixed plasmon-phonon character. When another slab is brought into the vicinity, the \nthree modes in each slab interact, which results in mode splitting and the formation of \nsix coupled modes, as shown in Fig.\\ref{Fig4}(b). \nFigure \\ref{Fig4}(c) shows the spectrum of surface excitations in the \ngraphene($0$meV)\/SiO$_2$ slab. Because pristine graphene does not support a Dirac plasmon, \nthe spectrum consists of just two weak phonon branches $\\omega_{TO1}$ and $\\omega_{TO2}$ damped \nby $\\pi\\rightarrow\\pi^*$ excitations. The spectrum of surface excitations in two equal \ngraphene(0meV)\/SiO$_2$ slabs separated by 5nm (not shown here) is very similar to the one \nshown in Fig.\\ref{Fig4}(c), which indicates a weak interaction between the phonons \nin the two slabs. This could be the consequence of the strong screening of the FK phonons \nby the graphene adlayers, which reduces the range of their induced electrical \nfield. Figure \\ref{Fig4}(d) shows the spectrum in the system which consists of two different \nslabs, graphene($0$meV)\/SiO$_2$ and graphene($200$meV)\/SiO$_2$, separated by $5$nm. \nOne can notice an interesting hybridization between the Dirac plasmon and the two phonons in one slab and the two phonons in the other slab, giving five branches. 
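The qualitative structure of these hybridized spectra is easy to reproduce from the analytic building blocks derived above. The following minimal Python sketch (illustrative only) evaluates the single-slab loss function $-\\frac{1}{\\pi}ImD(Q,\\omega)$ from (\\ref{dielectric}), (\\ref{labina}), (\\ref{tildeV11}), (\\ref{curoni}) and (\\ref{newD}), using the SiO$_2$ parameters of Sec.\\ref{Comp}; graphene is represented here only by its intraband (Drude) sheet response with $\\rho_0\\approx E_F\/\\pi$ in atomic units, so the sketch captures the hybridized Dirac-plasmon\/FK-phonon branches only qualitatively and not the full {\\em ab initio} spectra of Fig.\\ref{Fig4}:
\\begin{verbatim}
import numpy as np

# illustrative parameters (Hartree atomic units)
meV = 1e-3 / 27.2114
E_F, eta = 200 * meV, 10 * meV          # graphene doping, Drude broadening
eps0, epsi, epsinf = 3.9, 3.05, 2.5     # SiO2 dielectric constants
wTO1, wTO2 = 55.6 * meV, 138.1 * meV    # TO phonon energies
gTO1, gTO2 = 5.368 * meV, 8.947 * meV   # TO phonon damping rates
h, Delta = 7.55, 1.0e4                  # graphene-substrate gap, slab thickness

def eps_S(w):
    # two-oscillator SiO2 dielectric function, Eq. (dielectric)
    return (epsinf
            + (epsi - epsinf) * wTO2**2 / (wTO2**2 - w**2 - 1j * w * gTO2)
            + (eps0 - epsi) * wTO1**2 / (wTO1**2 - w**2 - 1j * w * gTO1))

def D_sub(Q, w):
    # substrate surface excitation propagator, Eq. (labina)
    ds = (1.0 - eps_S(w)) / (1.0 + eps_S(w))
    x = np.exp(-2.0 * Q * Delta)
    return ds * (1.0 - x) / (1.0 - ds**2 * x) * np.exp(-2.0 * Q * h)

def D_slab(Q, w):
    # one graphene/SiO2 slab; graphene modelled by its Drude sheet response
    vQ = 2.0 * np.pi / Q
    sigma = 1j * (E_F / np.pi) / (w + 1j * eta)    # intraband conductivity
    R0 = Q**2 * sigma / (1j * w)                   # Eq. (chi-sigma), sheet form
    vt = vQ * (1.0 + D_sub(Q, w))                  # Eq. (tildeV11)
    Rt = R0 / (1.0 - vt * R0)                      # Eq. (curoni)
    return (vt * Rt * vt + vt - vQ) / vQ           # Eq. (newD)

Q = 0.005                                          # a.u.
w = np.linspace(1.0, 250.0, 2000) * meV
S = -np.imag(D_slab(Q, w)) / np.pi                 # single-slab loss function
print(w[np.argmax(S)] / meV)                       # strongest hybrid mode (meV)
\\end{verbatim}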
\n\\begin{figure*}[tt]\n\\includegraphics[width=1.0\\columnwidth]{Fig4a.jpg}\n\\includegraphics[width=1.0\\columnwidth]{Fig4c.jpg}\n\\includegraphics[width=1.0\\columnwidth]{Fig4b.jpg}\n\\includegraphics[width=1.0\\columnwidth]{Fig4d.jpg}\n\\caption{(Color online) The spectra of surface excitations in (a) graphene(200meV)\/SiO$_2$ single \nslab (as shown in Fig.\\ref{Fig2}), (b) in the system consisting of two equal graphene(200meV)\/SiO$_2$ slabs, (as shown in Fig.\\ref{Fig1}) separated by distance 5nm, (c) single \ngraphene(0meV)\/SiO$_2$ slab and (d) in the system consisting of two unequal slabs, \ngraphene(200meV)\/SiO$_2$ and graphene(0meV)\/SiO$_2$, separated by distance 5nm.} \n\\label{Fig4}\n\\end{figure*}\nIn the next section we shall explore how particular plasmon-phonon modes contribute to the \ndissipated power in two oscillating slabs. \n\n\\subsection{Modification of van der Waals force}\n\\label{weakvdW}\nVan der Waals energy and attractive force are usually calculated and measured for static \nobjects. Here we show how their relative oscillating motion can reduce this attraction, \nwhich can be relevant not only from the theoretical standpoint but \nalso in some experimental situations and applications. This phenomenon is present \nalso in the case of parallel motion, as shown in the Appendix \\ref{AppA}, but \nthis situation would be more difficult to realize in practice. \n\nMaking use of the approximation (\\ref{approx}) for the lowest order \nterms of the functions $f_0$ and $f_1$ given by (\\ref{fm}) we can \nrewrite the expression (\\ref{vdwFIN}) for the van der Waals energy as \n\\begin{eqnarray}\nE_{c}(a)=\\frac{\\hbar}{2}\\int\\frac{QdQ}{2\\pi}\n\\int^{\\infty}_{-\\infty}\\frac{d\\omega}{2\\pi}\\hspace{3cm}\n\\label{java}\\\\\n\\nonumber\\\\\n\\left\\{ \\left[1-\\frac{Q^2\\rho_0^2}{4}\\right]A(Q,\\omega,\\omega)+\\frac{1}{4}Q^2\\rho_0^2\\ A(Q,\\omega,\\omega-\\omega_0)\\right\\}\n\\nonumber\n\\end{eqnarray}\nwhere $A$ is given by (\\ref{jalko}) and (\\ref{jujucka}). \nIn the $T\\rightarrow 0$ limit and neglecting higher order terms $A$ reduces to\n\\begin{eqnarray}\nA(Q,\\omega,\\omega')=\\hspace{5cm}\n\\nonumber\\\\\ne^{-2Qa}sgn\\omega\\left\\{ImD_1(Q,\\omega)ReD_2(Q,\\omega')+(1\\leftrightarrow 2)\\right\\}\n\\nonumber\n\\end{eqnarray}\nWe see that for $\\rho_0\\rightarrow 0$ the van der Waals energy reduces \nto the standard result for the static case, and for $\\rho_0\\ne 0$ and \n$\\omega_0\\ne 0$ the lowest order corrections scale with $\\rho^2_0$. \nFrom (\\ref{java}), and also from (\\ref{njunja2}), we see that the slab separation $a$ (because of exponential \nfactor $e^{-2Qa}$) reduces the wave vector range to $Q<1\/2a$.\n\nFig.\\ref{Fig5} shows van der Waals energies $E_c$ of two variously doped, unsupported full conductivity \n(\\ref{curren1}--\\ref{curren4}) graphenes as functions of the driving frequency $\\omega_0$. \nThe driving amplitude is $\\rho_0=20$nm and separation between slabs is $a=10$nm. \nFor the case of two heavily and equally doped graphenes $1-1$eV (thick black solid line)\nthe 'static' ($\\omega_0=0$) van der Waals energy is the largest in comparison with other doping combinations. This is reasonable considering that \nthen except of $\\pi$ and $\\pi+\\sigma$ plasmons (and corresponding electron-hole \nexcitations) the graphenes support strong Dirac plasmons which are all in resonance. 
\nTherefore, the charge density fluctuation in one slab $ImD_1(\\omega)$ resonantly induces electrical field in another slab \n$ReD_2(\\omega)$ to which it couples, and \nvice versa. \nAs the driving frequency $\\omega_0$ increases the fluctuation and the induced field do not match any more, i.e. $ImD_1(\\omega)$ and $ReD_2(\\omega+n\\omega_0)$ become \nDoppler shifted and vdW energy is expected to decrease. \nHowever, the vdW energy first exhibits a wide plateau until $\\omega_0<50$THz. \nWe performed a separate vdW energy calculation for two unsupported \nDrude (\\ref{curren1},\\ref{curren2}) graphenes (not shown here) and noticed that it shows the same \nfeatures as presented in Fig.\\ref{Fig5}. This suggests that Dirac plasmons are responsible for all characteristic \nfeatures in vdW energy (for larger dopings).\nTherefore, the plateau arises probably because the Dirac plasmon fluctuation in one slab, e.g. at $\\omega_p$, can be efficiently screened by induced \nplasmon field in another slab which is not necessarily at the same frequency $\\omega_p$. \nMoreover, graphene, regardless of doping, exhibits perfect screening $ReD(Q\\approx 0,\\omega\\approx 0)\\approx -1$ \\cite{Duncan2012} causing that \nthe static point charge feels image potential. This causes that $E_c$ shows almost identical plateau \nfor the case of differently doped graphenes $1-0.2$eV (black solid line) and $1-0eV$ \n(thin black solid line). \nAs the doping difference increases plateau energy decreases which is \nreasonable because of plasmon resonance breakdown. \nFor larger $\\omega_0>50$THz the Dirac plasmon in one slab does not match any more the \nperfect screening regime in another one, resulting in a rapid decrease or weakening of vdW energy.\nIn the case of weakly doped graphenes, such as the combinations $0.2-0.2$eV (red dashed line) and $0.2-0$eV (thin red dashed \nlines), the 'static' $\\omega_0\\approx 0$ van der Waals energy reduces in comparison with the heavy\ndoping (combinations with $1$eV) cases. This is reasonable considering that Dirac plasmon spectral weight decreases with \ndoping. Additionally, it can be noted that for lower doping the vdW plateau shifts to $\\omega_0<25$THz. \nThis is because the perfect screening frequency region can be roughly estimated as \n$ReD(\\omega<\\omega_p)\\approx-1$, so, as the plasmon energy decreases the frequency interval whithin which fluctuations are perfectly screened becomes narrower. \nIt is interesting to notice that for some frequencies (e.g. $\\omega_0>100$THz) the resonant but low doping vdW energy (e.g. $0.2-0.2$eV case) \novercomes the heavily doped but off resonance vdW energy (such as the cases $1-0.2$eV and $1-0$eV). \nThe static $\\omega_0=0$ vdW energy of pristine graphenes $0-0$eV (blue dashed dotted line) is the weakest and shows smooth decreasing, almost \nlinear behaviour. In this case there are no Dirac plasmons in the graphenes spectra. Therefore, only resonant coupling between $\\pi\\rightarrow\\pi^*$ electron-hole \nexcitations, $\\pi$ and $\\pi+\\sigma$ plasmons contribute to the vdW energy. As the frequency $\\omega_0$ increases the overlap between these \nelectronic excitations decreases causing smooth and linear vdW energy weakening. 
\nThe same linear behaviour (for $\\omega_0>50$THz) can be noticed for doping combinations $0.2-0.2$eV and $0.2-0$eV \nwhich proves that for lower dopings the dominant vdW energy weakening mechanism becomes off-resonant coupling \nbetween $\\pi\\rightarrow\\pi^*$ electron-hole excitations, $\\pi$ and $\\pi+\\sigma$ plasmons. \n\nIt should be noted here that such designed (graphene based) slabs might enable modification of attraction between slabs, e.g. controlled 'sticking' \nand 'un-sticking' of two slabs. For example, two heavily doped graphenes ($1-1$eV case in Fig.\\ref{Fig5}) are strongly bound, however \nbinding energy between pristine graphenes ($0-0$eV case achieved, e.g. simply by electrostatic gating) is reduced more than \ntwice. Moreover, for larger $\\omega_0$ (and fixed doping) the dynamical binding energy is substantially reduced, leading to 'un-sticking' \nof two slabs, and vice versa, their 're-sticking' by reducing the driving frequency. \n\n\n\\begin{figure}[t]\n\\includegraphics[width=1.0\\columnwidth]{Fig5.pdf}\n\\caption{(Color online). Van der Waals energies $E_c$ of two variously doped, unsupported full conductivity \n(\\ref{curren1}--\\ref{curren4}) graphenes as functions of driving frequency $\\omega_0$.\nThe left-right graphene dopings are $1-1$eV (thick black solid), $1-0.2$eV (black solid), 1-1eV (thin black \nsolid), $0.2-0.2$eV (red dashed), $0.2-0$eV (thin red dashed), $0-0$eV (blue dashed-dotted), as also denoted in the figure.\nSeparation between graphenes is $a=10$nm and oscillating amplitude is $\\rho_0=20$nm. } \n\\label{Fig5}\n\\end{figure}\n\n\n\\subsection{Dissipated power - substrate dependence}\n\\label{DISsub}\nIn this section we shall explore how the dissipation power in two oscillating \nslabs depends on the conductivity model we use to describe graphene and how \nsubstrate influences the dissipation power. \n\n\nIn order to facilitate the analysis of the results we shall again use the \napproximation (\\ref{approx}). The lowest order term \nwhich contributes in (\\ref{losshop}) is $f_1$, and from Fig.\\ref{Fig3} it is \nobvious that, for $x0.5$ epochs) and a sufficient number of labels ($>20\\%$ of the data is labeled). The latter finding is surprising since the addition of an unsupervised learning algorithm depends on the presence of labels in order to deliver marginal benefits over gradient descent.\n\nThe underlying form of the learned rule that makes \\textbf{HAT} successful is still a mystery; we find that while the meta-learner may learn a useful update rule during training, the meta-learner does not converge to this useful rule in the long run and instead devolves into a linear function \\textbf{Converged-Rule}. This converged function preserves fully-converged weights by reinforcing incoming weights for neurons with high activations.\n\n\\subsection{Future Work}\n\nThe discovery that \\textbf{HAT} does not stably converge to a function makes analysis quite difficult. However, there is potential for future work to do more subtle analyses.\n\nImagine a time $t$ during training in which the meta-learner $M$ has converged to a useful function, but the learner $L$ has not yet finished training. A follow-up to this thesis might be to discover whether there such a time $t$ exists, what the structure of $M$ at time $t$ is, and how $M$ changes the weights of $L$ at time $t$. 
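For concreteness, the kind of local rule discussed throughout can be written as $w_{ij}\\leftarrow w_{ij}+\\eta f(v_i,w_{ij},v_j)$. The sketch below is purely illustrative: the linear form of $f$, the learning rate and the array shapes are assumptions made for demonstration and are not the rule actually learned by \\textbf{HAT}.
\\begin{verbatim}
import numpy as np

def local_update(W, v_pre, v_post, eta=0.01, a=1.0, b=0.0):
    """One application of a local rule w_ij <- w_ij + eta*f(v_i, w_ij, v_j).
    The linear choice f = v_j*(a*w_ij + b*v_i) mimics 'reinforce the
    incoming weights of highly active units'; it is illustrative only."""
    f = v_post[None, :] * (a * W + b * v_pre[:, None])
    return W + eta * f

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights from 4 pre- to 3 post-neurons
v_i, v_j = rng.random(4), rng.random(3)
W = local_update(W, v_i, v_j)
\\end{verbatim}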
One potential methodology might be to observe the function $f$ not as a 3-dimensional function in $(v_i, w_{ij}, v_j)$ but rather as a 4-dimensional function in $(v_i, w_{ij}, v_j, t)$. Observing the function along the $t$-axis and checking for phase changes would shed light on whether a single useful update rule is learned during training or whether \\textbf{HAT}'s learning is truly transient and continuous. If this follow-up were to succeed, then we could have an a priori rule to apply without having to metalearn update rules.\n\nExtracting the local rules from multiple domains could either find that \\textbf{HAT} learns a universal rule or that functional distance between two rules describes the ``difference'' between their originating domains.\n\\vspace*{-1mm}\n\\begin{itemize}\n \\itemsep-0.4em \n \\item Suppose we always metalearn the same rule, regardless of problem domain. \\textbf{Optimal-Hebb} is then a universal learning rule.\n \\item Suppose \\textbf{Optimal-Hebb} is not universal for all problems. For local rules $R_A, R_B$ on problems $A,B$, integrating $\\int_{\\mathbb{R}^3} (R_A-R_B)\\cdot dF(v_i, w_{ij}, v_j)$ for input distribution $F$ gives an explicit measure for how similar $A$ and $B$ are. This provides a systematic way to identify pairs of learning problems that are good candidates for transfer learning.\n\\end{itemize}\n\n\\newpage\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIt is a property of fundamental importance within statistical physics that generic and realistic thermodynamic systems exhibit one particular state -- thermal equilibrium -- which is always approached, irrespective of the initial condition. Yet the important question of which\nmicroscopic conditions are necessary or sufficient for the thermalization of a closed quantum many-body system is still largely unanswered~\\cite{Polkovnikov11}. This is of particular importance, especially because there exists a specific class of isolated quantum systems, termed integrable, for which relaxation to thermal states is prevented due to the presence of an extensive number of (quasi-)local conservation laws~\\cite{Rigol06,Rigol07,Kollar11}.\nSuch particular systems often represent isolated points in the parameter space of physical many-body systems and demand a precise tuning of the microscopic parameters. Nevertheless, these models are very valuable because they often represent fixed points of renormalization group theories and as such contain the low-temperature equilibrium properties of a much wider class of systems. This directly leads to an apparent dilemma in quantum many-body theory which has attracted a lot of interest recently. In particular, beyond equilibrium these integrable models become nongeneric as they fail to thermalize. Instead, they are trapped in extended prethermal states described by nonthermal generalized Gibbs ensembles~\\cite{Polkovnikov11,Rigol06,Rigol07,Rigol08,Kollar11,cazalilla06,cardycalabrese06,Barthel08}. Resolving this dilemma is one of the major challenges for the understanding of the coherent dynamics of quantum many-body systems.\n\nIn this work we address this question for a paradigmatic low-energy model: the Luttinger liquid~\\cite{haldane81,haldane81b,giamarchi04}, representing the fixed point theory of systems of interacting fermionic particles in one dimension at low temperatures. 
The Luttinger liquid is an integrable theory failing to thermalize but rather exhibiting a description in terms of a generalized Gibbs ensemble~\\cite{cazalilla06,Iucci09,Iucci10}. Here, we will be interested in the nonequilibrium dynamics in the presence of a weak fermionic band curvature, which represents a generic perturbation, irrelevant in the low-energy equilibrium limit, but relevant on intermediate to long time scales in order to drive the crossover towards thermalization. \n\nThe increasing number of cold atom experiments performed under out of equilibrium conditions~\\cite{Greiner2002ux,kinoshita,Hofferberth06,trupke13,schmiedmayer12,nagerl13,meinert14,preiss15,hild14,cheneau12} has driven significant interest in the theoretical understanding of the non-equilibrium dynamics in quantum many-body systems. Importantly, these experiments share a remarkable isolation from the environment, thereby probing the purely coherent unitary time evolution on the experimentally relevant time scales. This has paved the way to experimentally study the constrained relaxational dynamics of quantum systems close to integrability~\\cite{kinoshita,Langen14,agarwal14,schmiedmayernphys12}, showing unconventional properties due to the anticipated (quasi-)local conservation laws. Although the inherent integrability breaking terms, resulting from, e.g., imperfections in the particle-particle interactions or higher orbital modes, are considered to be weak, they are believed to eventually cause relaxation to thermal states on long-time scales. Yet a full understanding of this process has not been achieved so far.\nWithin the current understanding, however, the thermalization dynamics of quantum many-body systems with weak integrability-breaking perturbations is expected to occur via a two-stage process. Initially, the dynamics of local observables at transient and intermediate time scales are controlled by the corresponding integrable theory\nserving as a metastable attractor for the non-integrable dynamics~\\cite{Moeckel,Kollar11,Stark13}. \nThis trapping in a metastable state has been termed prethermalization~\\cite{berges_pretherm,Moeckel} and is expected to exist for several non-integrable models and models close to integrability~\\cite{Moeckel,Eckstein09,Essler14,Kollar11,Rosch08,Marcuzzi13,Nessi15,Fagotti14,Fagotti15,Babadi2015,Bertini15}. \nIn the quasi-particle picture, prethermalization is associated with the initial formation of well-defined excitations \\cite{Moeckel} which leads to a dephasing of all terms that are not diagonal in quasi-particle modes, i.e. to a projection of the initial density matrix onto the diagonal ensemble in the quasi-particle basis. After this intermediate quasi-particle formation, the dynamics eventually crosses over to the thermalization regime, where weak quasi-particle scattering leads to a slow redistribution of energy and establishes detailed balance between the different modes. This causes asymptotic thermalization on long time scales compatible with the Eigenstate-Thermalization-Hypothesis~\\cite{Srednicki94,Deutsch91,Rigol08,Gibbs,Biroli10}.\n\nIn equilibrium, the fermionic band curvature in the Luttinger liquid, because irrelevant in the renormalization group sense, does not modify static correlation functions, which are well described by the quadratic Luttinger theory. Importantly, however, the curvature has a strong impact on frequency-resolved fermionic quantities. 
This has been observed in Coulomb drag experiments \\cite{Debray01,Debray02}, which could not be explained in terms of a quadratic Luttinger theory. In a hydrodynamic representation, the band curvature describes resonant scattering processes between the elementary phononic excitations of the system, such that perturbation theory is plagued by divergences due to the resonant nature of the interactions. Important first approaches to the interacting Luttinger liquid applied a self-consistent Born approximation in order to determine the phonon self-energy on the mass-shell \\cite{andreev80,samokhin98,zwerger06}. However, these works were unable to explain the frequency-dependence of the self-energy, which appeared to be non-negligible for dynamic observables. Using a combination of bosonization and subsequent refermionization a general theory has been developed which has been very successful in determining spectral equilibrium properties such as the dynamic structure factor and the fermionic spectral function in thermal equilibrium \\cite{pustilnik03,pustilnik07,imambekov09,imambekov09a}. Importantly for the scope of the present work, however, it has not yet been possible to generalize this methodology to systems out of equilibrium. Only recently, these equilibrium results have been recovered by a quantum hydrodynamic approach \\cite{lamacraft15,lamacraft15a}, showing that hydrodynamics is also capable of controlling the resonant phonon interactions.\n\nThe theoretical finding of these works is that the elementary excitations are no longer described in terms of bosonic quasi-particles with exact energy-momentum relation $\\omega=u|q|$ but dissolve into a continuum of excitations. This continuum, however, is energetically confined between two well-defined excitation branches $\\epsilon^-_q<\\omega<\\epsilon^+_q$ (with $\\epsilon^{\\pm}_q\\rightarrow 0$ as $q\\rightarrow0$) at which the spectral weight of the bosonic excitations features algebraic divergences, reflected in corresponding divergences of the dynamical structure factor. This fine structure in the bosonic spectral weight, and equivalently self-energy, makes the development of a general kinetic theory for \\emph{frequency-resolved} observables a very demanding task, which has not yet found a satisfactory solution. However, as will be shown in this work, static properties and their time evolution are nevertheless accessible.\n\nThe goal of this work is to study \\changed{the escape out of the prethermalization regime and the crossover towards thermalization} in Luttinger liquids with \\changed{quadratic} fermionic dispersion on the basis of a hydrodynamic description. Specifically, we aim at formulating a kinetic theory for the momentum distribution of the phononic degrees of freedom \\changed{taking into account the leading nonlinear corrections due to the quadratic dispersion. While in this way we are able to describe the escape out of the prethermalization regime in a controlled way, the final asymptotic thermalization of the system might be modified by the more subleading, off-resonant contributions which we do not consider here.}\nThe kinetic equation describes the time-evolution of the phonon momentum distribution and is\nsuitable in the long-wavelength limit and for weak quenches but still goes beyond the regime of linear response. 
In turn this kinetic theory gives a valid description for the fermionic occupation distribution in the vicinity of the Fermi points where the anticipated fine-structure of the bosonic spectral weight only gives subleading contributions.\nThis \"semi-static\" -- and as a consequence tractable -- description, covers the forward time evolution of any static, i.e. frequency independent, observable.\nWe show that the dynamics of precisely these frequency independent observables depend only on the time evolution of the momentum distribution of excitations $n_q$ and can be captured within a kinetic theory. \nThe justification for this approach is the subleading width of the excitation spectrum $|\\epsilon^+_q-\\epsilon^-_q|\\ll u|q|$ compared to the phonon energy for all relevant $q$ (below the Luttinger liquid cutoff), which is equivalent to the statement that even in the presence of the non-linearity the continuum of excitations in the hydrodynamic description is tightly bound to the mass-shell. This condition replaces the common quasi-particle criterion \\cite{kamenevbook} and enables a thorough kinetic description.\n\n\\mh{The applicability of the kinetic equation requires the preformation of well-defined quasi-particles out of the bare particles which occurs during the process of prethermalization before the quasi-particle scattering sets in. We, however, find that close to the integrable point of vanishing fermionic interactions quasi-particle formation becomes very slow shifting the applicability of the theory for weakly interacting fermions to long time scales and far distances. We give quantitative estimates of the corresponding spatio-temporal scales of the breakdown of the kinetic theory. Not too close to the noninteracting point, however, the kinetic equation is well justified and allows us to study the escape out of the prethermalization regime towards thermalization.}\nIn the regime of applicability, the kinetic equation leads in the asymptotic long-time limit to a linearized quantum Boltzmann equation whose attractor is the desired thermal Gibbs state.\nWe find that the thermalization dynamics out of the prethermal state is triggered by short wavelength modes and afterwards progressing algebraically slowly towards longer wavelengths. \nWhether this is a generic feature of weakly-perturbed integrable theories, is an important and interesting question for future work. \n\nThe main result of this work is a spatio-temporal decomposition of correlations in the studied nonlinear Luttinger Liquid, which is illustrated in Fig.~\\ref{fig:QuenchDiag}. \nBy analyzing the equal-time fermionic Green's function $G\\sub{t,x}^{<}$, the Fourier transform of the fermionic occupation distribution, we find three regimes which we term prequench, prethermal, and thermal and which are separated by two crossover scales $x\\sub{th}(t)$ and $x\\sub{pt}(t)$ obeying $x\\sub{th}(t) < x\\sub{pt}(t)$. The crossover scale $x\\sub{pt}(t)=2ut$ sets the light cone~\\cite{cardycalabrese06} with $u$ the sound velocity of the elementary bosonic excitations of the integrable theory. Causality implies that for distances $x \\gg x\\sub{pt}(t)$ the system's properties are not yet influenced by the nonequilibrium protocol, but are rather given by the initial state yielding the notion of the prequench regime. 
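For orientation, the decomposition into the three regimes can be summarized by the following minimal sketch, in which the thermal crossover scale $x\\sub{th}(t)$ is left as a user-supplied function; its explicit, quench-dependent form is given in Sec.~\\ref{sec:summaryofresults}, and all numerical values below are purely illustrative.
\\begin{verbatim}
def regime(x, t, u, x_th):
    # Classify a space-time point (x, t) by the two crossover scales:
    # the light cone x_pt(t) = 2*u*t and the thermal scale x_th(t) < x_pt(t).
    x_pt = 2.0 * u * t
    if abs(x) > x_pt:
        return "prequench"
    if abs(x) > x_th(t):
        return "prethermal"
    return "thermal"

# illustrative example; the power law used for x_th(t) is a placeholder,
# its actual quench-dependent form is given in Eq. (eq:crossoverScales)
u = 1.0
x_th = lambda t: 0.1 * t ** 0.6
for x in (0.05, 0.5, 5.0):
    print(x, regime(x, t=1.0, u=u, x_th=x_th))
\\end{verbatim}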
Inside the light cone for distances $x2ut, x>x\\sub{th}$, the Green's function is determined by the quasi-particles of the initial state and feature algebraic decay in real-space corresponding to the pre-quench state of the system, modulated by a amplitude decaying as a stretched exponential in time. In the intermediate regime $2ut0 \\end{array}\\right.\n}\nand both the quadratic Hamiltonian as well as the non-linearity are modified by this interaction change.\nThe eigenbasis of $H\\sub{LL}$, which is expressed in terms of the physically more transparent phononic creation and annihilation operators $\\creo{a}{q}, \\anno{a}{q}$ according to the canonical Bogoliubov transformation \n\\eq{Bog1}{\n\\theta_x &=& \\theta_0 +\\frac{i}{2}\\int_q\\left(\\frac{2\\pi}{|q|K}\\right)^{1\/2} e^{-iqx-\\frac{|q|}{\\Lambda}}\\left(\\creo{a}{q}-\\anno{a}{-q}\\right),\\\\\n\\phi_x&=&\\phi_0-\\frac{i}{2}\\int_{q}\\left(\\frac{2\\pi K}{|q|}\\right)^{1\/2}\\mbox{sgn}(q)\\ e^{-iqx-\\frac{|q|}{\\Lambda}}\\left(\\creo{a}{q}+\\anno{a}{-q}\\right)\\label{Bog2},} is therefore obviously transformed by the quench. This transformation depends on the interaction via the Luttinger parameter $K$. \n\nThe state of the system before the quench corresponds in general no longer to an equilibrium state after the quench, and the system will consequently undergo a nontrivial time evolution according to the new Hamiltonian. The occupations of bosonic modes after the quench can be computed via the above Bogoliubov transformation. Before the quench, the interacting system is in equilibrium at zero temperature, such that $G^K_{q,t=0}=\\langle \\{\\anno{a}{q},\\creo{a}{q}\\}\\rangle=1$ in the prequench basis. This yields the postquench occupations\n\\begin{eqnarray}\nn_{t=0,q}&=&\\langle \\creo{a}{q}\\anno{a}{q}\\rangle_{t=0}=\\frac{1}{2}\\left[\\frac{\\lambda^2+1}{\\lambda}n_{i,q}+\\frac{\\left(\\lambda-1\\right)^2}{2\\lambda}\\right],\\nonumber\\\\\nm_{t=0,q}&=&\\langle\\creo{a}{q}\\creo{a}{-q}\\rangle_{t=0}=\\frac{1-\\lambda^2}{4\\lambda}\\left(2n_{i,q}+1\\right), \\mbox{ with } \\lambda=\\frac{K\\sub{f}}{K\\sub{i}}.\\ \\ \\ \\ \\ \\ \\ \\ \\ \\label{Occ}\n\\end{eqnarray}\nHere, $n_{i,q}$ is the initial occupation of the bosonic modes and $\\lambda=\\frac{K\\sub{f}}{K\\sub{i}}$ the ratio between the final $K\\sub{f}=\\sqrt{1+\\frac{g\\sub{f}}{\\pi v\\sub{F}}}$ and the initial $K\\sub{i}=\\sqrt{1+\\frac{g\\sub{i}}{\\pi v\\sub{F}}}$ Luttinger parameter. \nIn this work, we focus on a zero temperature initial state, $n_{i,q}=0$ for all $q$. The phonon density after the quench $n_{t,q}> 0$ is always larger than the density before the quench, resulting in a nonzero excitation energy $\\Delta E=\\langle H\\sub{f}\\rangle-\\langle H\\sub{i}\\rangle>0$ generated by the quench. Non-zero off-diagonal occupations $m_{t,q}\\neq0$ indicate that the correlations are not diagonal in the post-quench quasi-particle basis and in order to relax to an equilibrium state, $m_{t,q}$ must decay to zero. 
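As an illustration of Eq.~\\eqref{Occ}, the following short Python sketch evaluates the post-quench occupations for a momentum-independent initial occupation; the interaction strengths used in the example are arbitrary.
\\begin{verbatim}
import numpy as np

def postquench_occupations(g_i, g_f, v_F=1.0, n_i=0.0):
    # Post-quench phonon occupations of Eq. (Occ) for an interaction quench
    # g_i -> g_f with (momentum-independent) initial occupation n_i.
    K_i = np.sqrt(1.0 + g_i / (np.pi * v_F))   # pre-quench Luttinger parameter
    K_f = np.sqrt(1.0 + g_f / (np.pi * v_F))   # post-quench Luttinger parameter
    lam = K_f / K_i                            # quench parameter lambda
    n0 = 0.5 * ((lam ** 2 + 1.0) / lam * n_i + (lam - 1.0) ** 2 / (2.0 * lam))
    m0 = (1.0 - lam ** 2) / (4.0 * lam) * (2.0 * n_i + 1.0)
    return lam, n0, m0

# example: quench from weak to stronger repulsion (illustrative numbers)
lam, n0, m0 = postquench_occupations(g_i=0.5, g_f=2.0)
print(lam, n0, m0)
\\end{verbatim}
For the zero-temperature initial state considered here, $n_i=0$, this reduces to $n_{t=0,q}=(\\lambda-1)^2/(4\\lambda)$ and $m_{t=0,q}=(1-\\lambda^2)/(4\\lambda)$.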
In the present setting, we choose $m_{t,q}=e^{-2iu|q|t}\\langle\\creo{a}{q}\\creo{a}{-q}\\rangle_{t}$, such that the off-diagonal occupations remain always real, being either positive or negative, depending on the quench.\n\nIn the phonon basis, \n\\eq{Eq5a}{\nH\\hspace{-1mm}=\\hspace{-1.2mm}\\int_q \\hspace{-1.2mm} u|q|\\creo{a}{q}\\anno{a}{q}\\hspace{-0.5mm}+\\hspace{-1mm}\\int_{q,k}\\hspace{-3mm}\\sqrt{|qk(k+q)|}\\ v(k,q) \\left(\\creo{a}{q+k}\\anno{a}{q}\\anno{a}{k}+\\mbox{h.c.}\\right),}\nwith the vertex function $v(k,q)=v\\left(\\frac{q}{|q|},\\frac{k}{|k|},\\frac{k+q}{|k+q|}\\right)$, which depends on the signs of the in- and outgoing momenta. In the interaction representation the phonon scattering Hamiltonian is\n\\eq{Eq5b}{\nH\\hspace{-1mm}_I(t)=\\hspace{-1mm}\\int_{q,k}\\hspace{-3mm}\\sqrt{|qk(k+q)|}\\ v(k,q)\\left(\\creo{a}{q+k}\\anno{a}{q}\\anno{a}{k}e^{iut(|q+k|-|q|-|k|)}+\\mbox{h.c.}\\right).\n}\nInstead of solving the full problem, we aim at extracting the dominant contributions of the nonlinearity which are relevant for intermediate and large times and which drive the crossover towards thermalization.\nIn view of Eq.~\\eqref{Eq5b}, off-resonant processes, for which $|q|+|k| \\not= |k+q|$, will dephase and as a consequence become negligible for the intermediate and long-time evolution of the system~\\cite{zwerger06}. Resonant processes on the other hand, here set by $|q|+|k| = |k+q|$, will at intermediate and long times become relevant in the renormalization group sense, as discussed in Ref.~\\cite{Heyl2015nd}. \n\\changed{The off-resonant processes can be eliminated perturbatively \\cite{Heyl2015nd}, yielding subleading corrections for intermediate and large times, which we will neglect in the following. For the asymptotic thermalization process, these subleading corrections will yield non-universal corrections (i.e. observable in microscopic constants and prefactors). For instance, the presence of off-resonant scattering events will eventually lower the asymptotic temperature compared to a system with purely resonant scattering events. The influence of off-resonant interactions on the decay rate of the bosonic and fermionic quasi-particles has been investigated in Ref.~\\cite{Proto14a}. The decay rate extracted from this computation is orders of magnitude lower than the rate due to purely resonant scattering processes. Furthermore, it has a subleading scaling behavior $\\sim Tq^4$ compared to $\\sim \\sqrt{q^3T}$ for resonant scattering processes at small momenta $q$ \\cite{zwerger06, andreev80}. Consequently, it is thus no influence on the leading order long time behavior. This allows us for the present purpose to restrict the phonon scattering to the resonant processes alone:}\n\\eq{Eq5}{\nH\\hspace{-1mm}=\\hspace{-1.2mm}\\int_q \\hspace{-1.2mm} u|q|\\creo{a}{q}\\anno{a}{q}\\hspace{-0.5mm}+\\hspace{-1mm}v_0\\int_{q,k}'\\hspace{-3mm}\\sqrt{|qk(k+q)|} \\left(\\creo{a}{q+k}\\anno{a}{q}\\anno{a}{k}+\\mbox{h.c.}\\right),}\nwhere the integral $\\int_{q,k}'$ is performed for momenta $|q+k|=|q|+|k|$ and $v_0=v(1,1)=\\frac{3}{m}\\sqrt{\\frac{\\pi}{K}}$ is the strength of the nonlinearity at resonance \\cite{buchholdmethod,zwerger06}.\n\nAs we are interested in fermionic correlation functions, we switch from an operator based formalism to a field theoretical formulation on the Keldysh contour, which is explained in the appendix \\ref{appendix2}, see also Ref.~\\cite{buchholdmethod}. This allows us to treat both spatial and temporal forward time correlations on an equal footing. 
We will focus our analysis on the so-called fermionic lesser Green's function \n\\eq{Green}{G^{<}_{t,x}=-i\\langle\\cre{\\psi}{t,x}\\anno{\\psi}{t,0}\\rangle} at equal forward times $t$ from which all fermionic equal time correlations can be deduced. Especially, in terms of a physical interpretation it is the Fourier transform of the fermionic momentum distribution \n\\eq{Mom}{\nn^{\\mbox{\\tiny F}}_{t,q}=i\\int_x e^{iqx}G^<_{t,x}.\n}\nIn the field theory representation, the bosonized fermionic lesser Green's function at equal times is\n\\eq{GF1}{\nG^{<}_{\\eta,t,x}=-i\\langle\\cre{\\psi}{\\eta,-,t,x}\\anno{\\psi}{\\eta,+,t,0}\\rangle=-i\\Lambda\\frac{e^{-i\\eta k\\sub{F}x}}{2\\pi}e^{-\\frac{i}{2}\\mathcal{G}^<_{\\eta,t,x}}.\n}\nHere, $\\cre{\\psi}{\\nu},\\ann{\\psi}{\\nu}$ label Grassmann fields with the index $\\nu=(\\eta,\\gamma,t,x)$ representing right and left movers ($\\eta=\\pm$), the contour variables on the Keldysh plus and minus contour ($\\gamma=\\pm$), the forward time coordinate $t$ and the relative spatial distance $x$. The corresponding lesser exponent $\\mathcal{G}^<$ is defined as \n\\eq{GF2}{\n\\mathcal{G}^<_{\\eta, t,x}=2i\\log\\left\\langle e^{i\\left(\\eta\\phi_{+,t,0}-\\theta_{+,t,0}-\\eta\\phi_{-,t,x}+\\theta_{-,t,x}\\right)} \\right\\rangle.\n}\nThe extra index $(\\pm)$ of the Luttinger fields labels position on the plus-minus contour, see appendix. Combining Eq.~\\eqref{GF2} and the Bogoliubov transformation above, one finds that $\\mathcal{G}^{<}_{-\\eta,t,x}=\\mathcal{G}^{<}_{\\eta,t,-x}$. The Green's function of the left movers is the spatially mirrored Green's function of the right movers, and it is sufficient to consider only the Green's function of the right movers \n\\eq{GF3}{\nG^{<}_{t,\\eta x}\\equiv G^{<}_{+,t,\\eta x}=G^{<}_{\\eta,t,x}\n}\nand equivalently for the exponent $\\mathcal{G}^<$. According to the linked cluster theorem, the logarithm in Eq.~\\eqref{GF2} is defined as the sum of all connected diagrams in an expansion of the exponent. As a consequence, it can be expressed to leading order in terms of the full Green's functions, with the next non-vanishing correction being proportional to the equal-time one-particle irreducible four-point vertex, which is zero in the microscopic theory. Its effective correction remains negligibly small. In particular, the four-point vertex will only contribute to $\\mathcal{O}[(um)^{-4}]$ which is two orders of magnitude smaller than the desired accuracy and its contribution can be safely neglected. The static one-particle irreducible four-point vertex represents a negligible correction for any equilibrium problem since it can only be generated via multiple concatenation of subleading three-point vertices. Especially it is not responsible for the modifications of the dynamic structure factor reported in Refs.~\\cite{pustilnik07,imambekov09,lamacraft15}, since at zero temperature vertex corrections vanish exactly due to causality \\cite{buchholdmethod,forsternelson76}. Consequently, the modifications of the dynamic structure factor happen entirely on the basis of the irreducible two-point vertex, i.e. the phonon self-energy. In the present case, the four-point vertex is exactly zero before the quench since this state corresponds to a zero temperature state as well as immediately after the quench, since a flat quasi-particle distribution in Eq.~\\eqref{Occ} leads to a vanishing vertex correction. 
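We note in passing that, once the exponent $\\mathcal{G}^<_{t,x}$ is known, the momentum distribution of Eq.~\\eqref{Mom} follows from Eq.~\\eqref{GF1} by a single numerical Fourier transform. A minimal sketch reads as follows, where the exponent is chosen, purely for illustration, as the zero-temperature form of the quadratic theory derived below.
\\begin{verbatim}
import numpy as np

def occupation_from_exponent(G_exp, q_vals, k_F=1.0, Lam=10.0, L=200.0, N=2 ** 14):
    # n^F_q = i * int dx e^{i q x} G^<_x  [Eq. (Mom)], with the bosonized form
    # G^<_x = -i (Lam / 2 pi) e^{-i k_F x} e^{-(i/2) G_exp(x)}  [Eq. (GF1)].
    # G_exp(x) must return the (complex) exponent at the fixed forward time.
    x = np.linspace(-L, L, N)
    G_less = (-1j * Lam / (2.0 * np.pi) * np.exp(-1j * k_F * x)
              * np.exp(-0.5j * G_exp(x)))
    return np.array([1j * np.trapz(np.exp(1j * q * x) * G_less, x) for q in q_vals])

# illustrative input: the zero-temperature exponent of the quadratic theory
K, Lam = 1.2, 10.0
G0 = lambda x: (-1j * (K ** 2 + 1.0) / (2.0 * K) * np.log(1.0 + Lam ** 2 * x ** 2)
                + 2.0 * np.arctan(Lam * x))
nF = occupation_from_exponent(G0, q_vals=np.linspace(0.8, 1.2, 5), Lam=Lam)
\\end{verbatim}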
\nIn terms of the Luttinger fields and apart from four-point vertex corrections, the exponent for the fermionic Green's function is\n\\eq{GF4}{\n\\mathcal{G}^<_{t,x}=\\sum_{\\alpha,\\beta=\\theta,\\phi}\\left(2\\delta_{\\alpha\\beta}\\hspace{-0.1cm}-\\hspace{-0.1cm}1\\right)\\left[G^K_{\\alpha\\beta,t,0}\\hspace{-0.1cm}\n-\\hspace{-0.1cm}G^K_{\\alpha\\beta,t,x}\\hspace{-0.1cm}+\\hspace{-0.1cm}G^A_{\\alpha\\beta,t,x}\\hspace{-0.1cm}-\\hspace{-0.1cm}\nG^R_{\\alpha\\beta,t,x}\\right]\n,}\nwhere $G^{R\/A}_{\\alpha\\beta}$ is the retarded, advanced Green's function for $\\alpha,\\beta=\\theta,\\phi$ and $G^K_{\\alpha\\beta}$ is the corresponding Keldysh Green's function, i.e. $G^R_{\\alpha\\beta,t,x}=-i\\langle \\alpha_{q,x,t}\\beta_{c,0,t}\\rangle$.\nApplying the Bogoliubov transformation to the phonon basis, the equal time exponent becomes\n\\begin{widetext}\n\\eq{GF5}{\n\\mathcal{G}^<_{t,x}=i\\int_q\\left[\\mbox{$\\frac{\\pi e^{-\\frac{|q|}{\\Lambda}}}{|q|}$}(\\cos(qx)-1)\\left[\\mbox{$\\frac{K^2+1}{K}$}(2n_{t,q}+1)+2\\mbox{$\\frac{K^2-1}{K}$}\\cos(2u|q|t)m_{t,q}\\right]\\right]+2\\arctan(\\Lambda x)+4i\\int_q\\left[\\frac{\\pi e^{-\\frac{|q|}{\\Lambda}}}{|q|}\\sin(|q|x)\\sin(2u|q|t)m_{t,q}\\right].\n}\\end{widetext}\n Here, $n_{t,q}=\\langle \\creo{a}{t,q}\\ann{a}{t,q}\\rangle$ and $m_{t,q}=|\\langle\\ann{a}{t,-q}\\ann{a}{t,q}\\rangle|$ are the equal time normal and anomalous phonon densities, which evolve in time due to phonon scattering. The absence of the quasi-particle self-energy in this expression is caused by the equal time properties of the Green's function and underlines the fact that time-local, i.e. static, observables, even if explicitly forward-time dependent, are not modified by the frequency resolved fine structure of the self-energies once the time dependent distribution $n_{t,q}$ is known. In the remainder of this paper, we will analyze the time evolution of the exponent \\eqref{GF5} after the interaction quench and its implications for the fermionic Green's function \\eqref{GF1}.\n\nConcerning the relevance of the interacting Luttinger model, before closing the section, we would like to mention that only recently pioneering experiments in ultra-cold gases both in and out of equilibrium explored the transient and prethermalization dynamics of systems~\\cite{Hofferberth06,trupke13,schmiedmayer12,Langen14,nagerl13,meinert14,preiss15,hild14,cheneau12,agarwal14,schmiedmayernphys12,schmiedmayernjp13,bloch13,Guan2013} effectively described by a quadratic Luttinger model, the bosonic theory of the Hamiltonian in Eq.~\\eqref{Eq2}. In particular, in Refs.~\\cite{Hofferberth06,trupke13,schmiedmayer12,Langen14} prethermal states in the relative phase of a suddenly split condensate have been identified that have been stable on the experimentally accessible time scales. For the latter experiments, the cubic nonlinearity studied in the present work constitutes the leading order correction to the quadratic theory in a gradient expansion. Therefore, the framework developed in the subsequent sections to describe the relaxation dynamics in the system, is of direct experimental relevance once the time scales are experimentally accessible to study the escape out of the prethermalization plateau. It is, however, important to note that the concrete experimental setup of the suddenly split condensate requires a further but straightforward extension of the considered model system to include two species of coupled bosonic fields. 
Moreover, let us emphasize that these experimental systems do not simulate the Luttinger liquid of interacting fermions -- our initial motivation -- but directly the effective bosonic low-energy theory. In this way, it might be possible to obtain experimental access to the dynamics of the bosonic occupation distributions, governed by the kinetic theory formulated below, via time-of-flight imaging.\n\n\\section{Summary of main results}\n\\label{sec:summaryofresults}\n\nBefore formulating and solving the kinetic theory for the interacting Luttinger liquid in detail, we briefly summarize the main results reported in this work. In the subsequent sections, we will then present the detailed calculations. Specifically, the known results on the purely integrable system are reformulated within the present framework in Sec.~\\ref{sec:PT}. The kinetic equation, used to address the presence of the nonlinear phonon scattering, is derived in Sec.~\\ref{sec:kinetic_equation}. This kinetic equation is then solved in Sec.~\\ref{sec:thermalization_dynamics}.\n\nIt is the aim of this work to study the thermalization dynamics of the fermionic equal time Green's function \\eqref{Green}, which is the Fourier transform of the fermionic momentum distribution \\eqref{Mom} and contains the information on quadratic equal time fermion observables.\nWithout loss of generality, we focus on the distribution of the right-movers, i.e., $\\eta=+$. In the presence of phonon scattering, we determine the time-evolution of $G^<_{t,x}$ via a set of kinetic equations derived later in Sec.~\\ref{sec:kinetic_equation}.\n\nWe find that $G^<_{t,x}$ features two distinct spatio-temporal crossover scales $x\\sub{th}(t)$ and $x\\sub{pt}(t)$, separating three regimes with distinct scaling behavior:\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{center}\n\\begin{tabular}{l l c }\n\t1. & prequench: $\\quad$\t&\t$x\\sub{pt}(t) \\ll |x|$, \\\\\n\t2. & prethermal:\t&\t$x\\sub{th}(t) \\ll |x| \\ll x\\sub{pt}(t)$, \\\\\n\t3. & thermal:\t\t&\t$|x| \\ll x\\sub{th}(t)$. \\\\\n\\end{tabular}\n\\end{center}\n\\renewcommand{\\arraystretch}{1}\nWe find for the associated crossover scales $x_\\mathrm{pt}(t)$ and $x_\\mathrm{th}(t)$:\n\\eq{eq:crossoverScales}{\nx_\\mathrm{pt}(t) = 2ut, \\qquad \\qquad x_\\mathrm{th}(t) =\\frac{x_{\\lambda}}{\\Lambda} \\left( v_0 \\Lambda^2 t \\right)^{\\alpha_\\lambda}.\n}\nThe first crossover at $x\\sub{pt}(t)$ determines the light cone~\\cite{cardycalabrese06} set by the sound velocity $u$ of the phononic elementary excitations and is known from the non-interacting Luttinger model. Two space points a distance $x \\gg x\\sub{pt} (t)$ apart from each other have not been able to exchange information after the quench due to causality. Therefore, the properties at such distances are solely given by the initial condition before the quench such that we term this regime ``prequench''. \\changed{For distances $xx_{\\text{th}}$. In the intermediate regime $x_c(t)1\/2$ generally) is not surprising. The non-linearity in the Luttinger model introduces a microscopic energy scale $v_0\\Lambda^2$ which represents the characteristic time scale of the dynamics induced by the non-linearity, i.e. in the present case the thermalization dynamics beyond the quadratic theory. Additionally, the non-linearity breaks the scale invariance of the quadratic model, which is responsible for the fact that all microscopic scales can be eliminated from macroscopic observables in that case. 
In the absence of scale invariance, however, the microscopic length scale $\\Lambda$ will appear in certain observables, expressing that their explicit value depends on model specific details.\n\nAs we show in our detailed analysis below, we find that this separation into three spatio-temporal regimes -- prequench, prethermal, and thermal -- reflects itself in a remarkable factorization property of the Green's function\n\\eq{eq:factorization2}{\nG^<_{t,x}=G^<_{0,x}Z\\sub{pt}(s\\sub{pt})Z\\sub{th}(s\\sub{th}),}\nwhich holds everywhere except in the vicinity of the crossover scales $x_\\mathrm{th}(t)$ and $x_\\mathrm{pt}(t)$. Here, we have introduced the following short-hand notations:\n\\begin{equation}\ns\\sub{pt}=\\left\\{\\begin{array}{cl} x &\\mbox{ for } xx_\\mathrm{pt}(t)\\end{array}\\right. , \\quad s\\sub{th}=\\left\\{\\begin{array}{cc}x&\\mbox{ for } xx\\sub{th}(t)\\end{array}\\right. .\n\\end{equation}\nWhile the factorization into $G^<_{0,x}$ and $Z\\sub{pt}$ has been already known for the exact solution of the integrable model~\\cite{cazalilla06}, here, we show that the influence of the nonlinearity can be captured by a further factor in terms of $Z\\sub{th}$. The thermal contribution $Z\\sub{th}(s\\sub{th})$ exhibits interesting spatio-temporal dynamics in particular in the long-time regime $ut \\gg x\\sub{th}(t)$. It is defined as\n\\begin{equation}\n Z_\\mathrm{th} (s_\\mathrm{th}) = \\exp\\left(-\\frac{K^2+1}{K} \\frac{\\pi \\tilde T_t|s_\\mathrm{th}|}{u}\\right)\n\\end{equation}\nand features two different spatio-temporal regimes.\n\n\n\\emph{(i) thermalized regime:}\nDeep in the thermalized region $|x|\\ll x\\sub{th}(t)$ where $s\\sub{th}=x$, $Z\\sub{th}=\\exp(-|x|\/\\xi_{\\tilde{T}_t})$ exhibits the conventional exponential decay with distance that the system experiences in thermal states with an associated thermal length\n\\begin{equation}\\label{tlength}\n\t\\xi_{\\tilde T_t} = \\frac{K}{1+K^2} \\frac{u}{\\pi \\tilde T_t}.\n\\end{equation}\nThe effective temperature $\\tilde T_t$, however, entering this equation remains a dynamical quantity with\n\\begin{equation}\n \\tilde T_t=T+u \\Lambda \\Delta_\\lambda(v_0 \\Lambda^2 t)^{-\\mu},\n\\end{equation}\napproaching the temperature $T$ of the final thermal ensemble algebraically slowly. We find that the numerical simulations of the kinetic equation are consistent with an analytical estimate for the exponent $\\mu=2\/3$. Thus, the system in this spatial region appears to be hotter than in the final asymptotic thermal state. The associated excess energy stored at short distances has to be transported to larger distances which, however, is an algebraically slow process since this energy transport in the presence of detailed balance is carried out by dynamical slow modes, emerging as a consequence of exact conservation laws \\cite{Lux13}.\n\n\\emph{(ii) prethermal and prequench regime:} Within the prethermal and prequench region $x\\sub{th}(t) \\ll x$, the amplitude $Z\\sub{th}(s\\sub{th})=Z\\sub{th}(x\\sub{th}(t))$ approaches a space-independent but time-dependent constant quantifying the temporal decay of the prethermal correlations:\n\\begin{equation}\n Z\\sub{th}(x\\sub{th}(t)) = \\exp[-x\\sub{th}(t)\/\\xi_{\\tilde T_t}].\n\\end{equation}\nBecause $x\\sub{th}(t) \\propto (v_0 \\Lambda^2 t)^{\\alpha_\\lambda}$, we have, remarkably, that this amplitude decays in stretched exponential form. 
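The stretched-exponential suppression just described is conveniently summarized by a short numerical sketch of $Z\\sub{th}$; all amplitudes and exponents entering below are quench dependent and are treated here as illustrative input values.
\\begin{verbatim}
import numpy as np

def Z_th(x, t, K, u, T, Lam, v0, Delta_lam, x_lam, alpha_lam, mu=2.0 / 3.0):
    # Thermal factor with the slowly decaying effective temperature
    #   T_tilde(t) = T + u * Lam * Delta_lam * (v0 * Lam^2 * t)^(-mu)
    # and s_th = min(|x|, x_th(t)), x_th(t) = (x_lam/Lam) * (v0 * Lam^2 * t)^alpha_lam.
    T_tilde = T + u * Lam * Delta_lam * (v0 * Lam ** 2 * t) ** (-mu)
    x_th_t = (x_lam / Lam) * (v0 * Lam ** 2 * t) ** alpha_lam
    s_th = min(abs(x), x_th_t)
    return np.exp(-(K ** 2 + 1.0) / K * np.pi * T_tilde * s_th / u)

# purely illustrative parameter values (amplitudes and exponents are quench dependent)
print(Z_th(x=5.0, t=10.0, K=1.3, u=1.0, T=0.1,
           Lam=1.0, v0=0.05, Delta_lam=0.5, x_lam=1.0, alpha_lam=0.6))
\\end{verbatim}
For $|x|\\ll x\\sub{th}(t)$ this reproduces the exponential decay with the thermal length of Eq.~\\eqref{tlength}, while for $|x|\\gg x\\sub{th}(t)$ it reduces to the space-independent, stretched-exponentially decaying amplitude.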
This decay is sub-exponential and thus inherently nonperturbative in nature, highlighting the capabilities of our present approach.\n\n\\section{Dynamics in the absence of phonon scattering}\nIn order to systematically understand the effect of phonon scattering on the relaxation dynamics after the interaction quench, we first determine the dynamics of the exponent $\\mathcal{G}^<_{t,x}$ in the absence of scattering, i.e. for $\\frac{1}{m},v_0\\rightarrow 0$. This quench scenario has been extensively discussed in \\cite{Karrasch12,Kennes13,cazalilla06,Iucci09,Iucci10}, and we will only briefly list the known results in the present formalism in order to make contact to the relaxation dynamics in the presence of phonon scattering, which are discussed subsequently.\n\\subsection{Ground state properties}\nFor a system in the ground state, $n_{t,q}=m_{t,q}=0$ and the exponent evaluates to\n\\eq{GF6}{\n\\mathcal{G}^<_{t,x}=-i\\frac{K^2+1}{2K}\\log(1+\\Lambda^2x^2)+2\\arctan(\\Lambda x),}\nwhich leads to a time-independent fermionic Green's function\n\\eq{GF7}{\nG^<_{t,x}=-\\frac{i\\Lambda}{2\\pi}e^{-ik\\sub{F}x-i\\arctan(\\Lambda x)}\\sqrt{1+\\Lambda^2x^2}^{-\\frac{K^2+1}{2K}}\n,}\nwell known from the literature \\cite{haldane81,giamarchi04}. It features an algebraic decay in space $\\sim x^{-\\frac{K^2+1}{2K}}$ and a power law singularity of the fermionic momentum distribution close to the Fermi momentum $n^{\\mbox{\\tiny F}}_q\\sim|q-k\\sub{F}|^{-\\frac{(K-1)^2}{2K}}$ \\cite{giamarchi04}. \n\\subsection{Quench from the ground state}\\label{sec:PT}\nInitializing the fermions in the ground state and performing an interaction quench leads to constant non-zero phonon densities in the post-quench basis, according to Eq.~\\eqref{Occ}. In the absence of scattering, the phonon densities are constants of motion and remain time independent, $n_{t,q}=n_{0,0}\\equiv n$ and $m_{t,q}=m_{0,0}\\equiv m$. In this situation, only dephasing of the off-diagonal Green's functions induces relaxation and the exponent is\n\\begin{widetext}\\begin{eqnarray}\n\\label{GF8}\n\\mathcal{G}^<_{t,x}&=&2\\arctan(\\Lambda x)-i\\mbox{$\\frac{K^2+1}{2K}$}(2n+1)\\log(1+\\Lambda^2x^2)+im\\log\\left(\\mbox{$\\frac{1+\\Lambda^2(x-2ut)^2}{1+\\Lambda^2(x+2ut)^2}$}\\right)-i\\mbox{$\\frac{K^2-1}{2K}$}m\\left[\\log\\left(\\mbox{$\\frac{1+\\Lambda^2(x-2ut)^2}{1+4u^2t^2\\Lambda^2}$}\\right)+\\log\\left(\\mbox{$\\frac{1+\\Lambda^2(x+2ut)^2}{1+4u^2t^2\\Lambda^2}$}\\right)\\right]\\nonumber\\\\\n&=&\\mathcal{G}^<_{0,x}+im\\log\\left(\\mbox{$\\frac{1+\\Lambda^2(x-2ut)^2}{1+\\Lambda^2(x+2ut)^2}$}\\right)-i\\mbox{$\\frac{K^2-1}{2K}$}m\\log\\left[\\mbox{$\\frac{(1+\\Lambda^2(x+2ut)^2)(1+\\Lambda^2(x-2ut)^2)}{(1+4u^2t^2\\Lambda^2)^2(1+x^2\\Lambda^2)^2}$}\\right].\\end{eqnarray}\\end{widetext}\nHere, $\\mathcal{G}^<_{0,x}$ is the exponent corresponding to the prequench state, i.e. the ground state of interacting fermions with the prequench Luttinger parameter $K_i$. Consequently the fermion Green's function \\eqref{GF1} factorizes\n\\eq{GF9}{\nG^<_{t,x}=G^<_{0,x}\\tilde{Z}\\sub{pt}(x,t).\n}\nThe factor $\\tilde{Z}\\sub{pt}$ is defined by Eqs.~\\eqref{GF8} and \\eqref{GF1} and describes the time-dependent modification of the initial zero temperature Green's function due to the quench. In view of the following discussion it is useful to investigate this factor on distances away from the light cone $x=2ut$. 
For distances $|x|\\ll 2ut$, the temporal factors in Eq.~\\eqref{GF8} cancel each other and $\\tilde{Z}\\sub{pt}(t,x)\\overset{|x|\\ll2ut}{\\rightarrow} Z\\sub{pt}(x)$ looses its time dependence. On the other hand, for distances $|x|\\gg2ut$, the spatial dependence drops out and $\\tilde{Z}\\sub{pt}(t,x)\\overset{|x|\\gg2ut}{\\rightarrow} Z\\sub{pt}(2ut)$. This defines the prethermal amplitude\n\\eq{GF10}{\nZ\\sub{pt}(s)=\\left(\\sqrt{1+\\Lambda^2s^2}\\right)^{\\frac{K^2-1}{2K}\\frac{1-\\lambda^2}{4\\lambda}}.\n}\nThe process associated with the crossover of $Z\\sub{pt}(s)$ from a temporal to a spatial dependence as a function of time is the formation of quasi-particles corresponding to the post-quench Hamiltonian. This is the typical prethermalization scenario in the absence of quasi-particle scattering. For short times, the properties of the system are dominated by the initial state of the system, and the fermion Green's function is only modified by a global amplitude but has the same spatial scaling behavior as for the initial state. The effect of the quadratic Hamiltonian in the time evolution is the dephasing of all terms, which are not diagonal in the basis of the post-quench quasi-particles, leading to a diagonal ensemble in the quasi-particles with a non-equilibrium phonon density. This non-equilibrium distribution of phonons induces a scaling behavior of the fermion Green's function in real space, which is different from the zero and finite temperature cases.\n\nIn the absence of phonon scattering, the diagonal phonon densities $n_{t,q}$ are constants of motion and do not relax, the density matrix $\\rho$ therefore does not approach a Gibbs state but is rather described in the asymptotic limit $t\\rightarrow\\infty$ by a generalized Gibbs ensemble (GGE), which respects the constants of motion and maximizes the entropy. It is given by\n\\eq{GF11}{\n\\rho\\sub{GGE}=Z\\sub{GGE}^{-1}e^{-\\int_q \\nu_q \\hat{n}_q},}\nwhere the Lagrange parameters $\\nu_q=2\\log\\left(\\frac{\\lambda+1}{|\\lambda-1|}\\right)$ depend on the quench parameter and $Z\\sub{GGE}$ is the normalization factor.\n\nThe fermion Green's function for the two different regimes is then\n\\eq{GF12}{\nG^<_{t,x}=G^<_{0,x}\\times\\left\\{\\begin{array}{ll} Z\\sub{pt}(2ut) &\\mbox{ for } |x|\\gg 2ut\\\\\nZ\\sub{pt}(x)& \\mbox{ for } |x|\\ll 2ut\\end{array}\\right. ,\n}\nwith the non-equilibrium scaling behavior \n\\eq{GF13}{G^<_{t,x}\\overset{t\\rightarrow\\infty}{\\sim} |x|^{-\\gamma\\sub{Eq}\\gamma\\sub{GGE}},\n}\nwhere $\\gamma\\sub{Eq}=\\frac{K^2+1}{2K}$ is the equilibrium exponent and $\\gamma\\sub{GGE}=\\frac{\\lambda^2+1}{2\\lambda}=2n+1$ (see, Eq.~\\eqref{Occ}) is the non-equilibrium correction resulting from a non-thermal quasi-particle distribution.\n\\section{Phonon scattering and the kinetic equation}\n\\label{sec:kinetic_equation}\n\nIn the previous sections, we have demonstrated that the forward time evolution of the fermionic equal-time Green's function can be determined solely from the momentum dependent excitation distributions $n_{t,q}, m_{t,q}$. All quadratic, equal-time observables on the other hand can be computed from the fermionic equal-time Green's function via a unitary transformation, such that the knowledge of $n_{t,q}$ and $m_{t,q}$ gives access to the forward time evolution of all the frequency independent quadratic fermion observables. 
Therefore the time-evolution of this specific set of observables can be captured by the time evolution of the frequency independent and well-defined quantities $n_{t,q}, m_{t,q}$, which does not necessitate the frequency resolved fine structure in the fermionic spectrum. \nIn order to determine the time-evolution of the phonon densities, we derive kinetic equations for the excitation distribution function \\cite{kamenevbook} in the limit of well defined excitations, closely following the steps in Ref.~\\cite{buchholdmethod} and briefly discussing the approximations. \n\nBefore we start with the explicit derivation, we review very briefly the known results for nonlinear Luttinger liquids (c.f. \\cite{imambekov12}) and place the present approach into this context. At zero temperature and without band curvature, long wavelength physics of the interacting fermion model can be exactly mapped to the quadratic Luttinger model and therefore has well-defined, sharp phononic excitations, expressed by a spectral function of the phonons $\\mathcal{A}_{q,\\omega}=i(G^R_{q,\\omega}-G^A_{q,\\omega})=2\\pi \\delta(\\omega-u|q|)$. In the presence of band curvature, however, the phonons themselves interact via a resonant three-point scattering vertex, which leads to a broadening of the spectral function around the mass-shell $\\omega=u|q|$. This broadening can be described in terms of two excitations branches at frequencies $\\omega=\\epsilon^{\\pm}_q$, where $\\epsilon^-_qu|q|$ labels a phononic branch (such that $|\\epsilon^{+}_q-\\epsilon^-_q|\/q\\rightarrow 0$ for $q\\rightarrow0$)\\cite{imambekov12,lamacraft15}. The spectral weight of the excitations in the nonlinear Luttinger liquid is distributed continuously between these two branches. Whereas the solitonic branch represents an exact boundary (i.e. no spectral weight is located at frequencies $\\omega<\\epsilon_q^-$), featuring a power law singularity for frequencies above $\\epsilon^-_q$, the phononic branch represents an algebraically sharp boundary (i.e. the spectral weight for frequencies $\\omega>\\epsilon_q^+$ is strongly algebraically suppressed), featuring a power law singularity from both sides \\cite{imambekov12}. While the power-law singularities at the edges of the spectral weight obviously cannot be explained by a frequency independent self-energy, the characteristic width of the spectral weight $\\delta\\omega_q=\\epsilon^+_q-\\epsilon^-_q=\\frac{q^2}{m^*}$ can be captured by an imaginary part of the on-shell value of the self-energy $\\Sigma^R_{q,\\omega=u|q|}$, which determines the renormalized mass $m^*$ \\cite{zwerger06,aristov,lamacraft15,imambekov12}. These results hold for the zero temperature limit of the problem. At finite temperature $T>0$, however, a self-consistent Born-approximation for the on-shell self-energy predicts a scaling of the spectral weight $\\delta\\omega_q\\sim \\sqrt{|q|^3T}$ \\cite{lamacraft13,gangardt13}, which has been also observed in numerical simulations of interacting one-dimensional bosons \\cite{lamacraft13}. For $\\delta\\omega_q\\ll u|q|$, i.e. the width of the spectral weight of the excitations being much smaller than the average excitation energy, the spectral weight is still sharply concentrated at the mass-shell and one can still think (physically) of well defined excitations although the fine structure of the spectral weight is very different from what one is used to for weakly interacting quasi-particles. 
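As a simple consistency check of this criterion, the ratio $\\delta\\omega_q/(u|q|)$ can be estimated from the scaling forms quoted above, suppressing nonuniversal prefactors of order one and keeping, at finite temperature, the thermally dominated small-momentum form:
\\begin{verbatim}
import numpy as np

def width_over_energy(q, u, m_star, T):
    # Ratio delta_omega_q / (u |q|) controlling the validity of the kinetic
    # description; delta_omega_q ~ q^2 / m* at T = 0 and ~ sqrt(|q|^3 T) at
    # finite temperature (O(1) prefactors suppressed).
    width = abs(q) ** 2 / m_star if T == 0.0 else np.sqrt(abs(q) ** 3 * T)
    return width / (u * abs(q))

# illustrative check for momenta well below the cutoff
u, m_star, T = 1.0, 5.0, 0.05
for q in (0.01, 0.1, 0.5):
    print(q, width_over_energy(q, u, m_star, T))
\\end{verbatim}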
As a consequence, it is possible to derive a kinetic equation for the excitation densities in this regime, applying the common quasi-particle and local time approximations, and we will implement this approach below. It neglects the specific structure of the spectral weight of nonlinear Luttinger liquids, which is valid for \"static\" variables in the quasi-particle limit $\\delta\\omega_q\\ll u|q|$.\nWe begin by introducing the interaction picture for the Heisenberg operators \n\\eq{Eq21}{\n\\cre{a}{t,q}\\rightarrow \\cre{a}{t,q}e^{-iu|q|t},}\nwhich leaves the Hamiltonian \\eqref{Eq5} unmodified but shifts the spectral weight of diagonal modes to zero frequency and eliminates the phase $\\sim e^{i2u|q|t}$ of off-diagonal correlation functions \\cite{buchholdmethod}. \n The Green's functions in the interaction representation are labeled with a tilde. The Keldysh Green's function in Nambu space is\n\\eq{Eq22}{\ni\\tilde{G}^K_{t,q,\\delta}=\\left(\\hspace{-0.15cm}\\begin{array}{ll}\\langle\\{\\ann{a}{t+\\frac{\\delta}{2},q},\\cre{a}{t-\\frac{\\delta}{2},q}\\}\\rangle & \\langle\\{\\ann{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},-q}\\}\\rangle\\\\\n\\langle\\{\\cre{a}{t+\\frac{\\delta}{2},-q},\\cre{a}{t-\\frac{\\delta}{2},q}\\}\\rangle&\\langle\\{\\cre{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},q}\\}\\rangle\n\\end{array}\\hspace{-0.15cm}\\right),\\ \\ \\ \n}\nwhere $\\{\\cdot,\\cdot\\}$ is the anti-commutator and we introduced an additional relative time shift $\\delta$ associated with spectral properties of the system. The retarded Green's function is\n\\eq{Eq23}{\ni\\tilde{G}^R_{t,q,\\delta}&=&\\theta(\\delta)\\left(\\hspace{-0.15cm}\\begin{array}{ll}\\langle[\\ann{a}{t+\\frac{\\delta}{2},q},\\cre{a}{t-\\frac{\\delta}{2},q}]\\rangle & \\langle[\\ann{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},-q}]\\rangle\\\\\n\\langle[\\cre{a}{t+\\frac{\\delta}{2},-q},\\cre{a}{t-\\frac{\\delta}{2},q}]\\rangle&\\langle[\\cre{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},q}]\\rangle\n\\end{array}\\hspace{-0.15cm}\\right)\\nonumber\\\\\n&=&\\theta(\\delta)\\left(\\hspace{-0.15cm}\\begin{array}{cc}\\langle[\\ann{a}{t+\\frac{\\delta}{2},q},\\cre{a}{t-\\frac{\\delta}{2},q}]\\rangle & 0\\\\\n0&\\langle[\\cre{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},q}]\\rangle\n\\end{array}\\hspace{-0.15cm}\\right).}\nThe off-diagonal retarded and advanced Green's functions are exactly zero. This is a consequence of the Hamiltonian, which does not introduce a coupling between the modes $q$ and $-q$, such that the commutator $[\\ann{a}{t+\\frac{\\delta}{2},q},\\ann{a}{t-\\frac{\\delta}{2},-q}]=0$ for all times $t,\\delta$. The anti-hermitian Keldysh Green's function is parametrized according to \\cite{kamenevbook,buchholdmethod}\n\\eq{Eq24}{\n\\tilde{G}^K_{t,q,\\delta}=\\left(\\tilde{G}^R\\circ\\sigma_z\\circ F-F\\circ\\sigma_z\\circ\\tilde{G}^A\\right)_{t,q,\\delta}}\nin terms of the time-dependent, hermitian quasi-particle distribution function $F$ and the Pauli matrix $\\sigma_z$, the latter preserving the symplectic structure of bosonic Nambu space. The $\\circ$ represents matrix multiplication with respect to momentum space and convolution with respect to time. 
Switching to Wigner coordinates by Fourier transforming the Keldysh Green's function with respect to relative time\n\\eq{Eq25}{\n\\tilde{G}^K_{t,q,\\omega}=\\int_{\\delta}\\tilde{G}^K_{t,q,\\delta}\\ \\ e^{i\\omega\\delta}\n}\nand applying the Wigner approximation, which, due to the RG-irrelevant interactions, is justified in the same regime for which the Luttinger description is applicable \\cite{buchholdmethod,Heating}, we find\n\\eq{Eq26}{\n\\tilde{G}^K_{t,q,\\omega}=\\tilde{G}^R_{t,q,\\omega}\\sigma_zF_{t,q,\\omega}-F_{t,q,\\omega}\\sigma_z\\tilde{G}^A_{t,q,\\omega},} \nwhich is diagonal in momentum and frequency space. Inverting Eq.~\\eqref{Eq26} by multiplying it with $\\left(\\tilde{G}^R\\right)^{-1}$ from the left and $\\left(\\tilde{G}^A\\right)^{-1}$ from the right, yields the kinetic equation for the distribution function\n\\eq{Eq27}{\ni\\partial_tF_{t,q,\\omega}\\hspace{-0.1cm}=\\hspace{-0.1cm}\\sigma_z\\Sigma^R_{t,q,\\omega}F_{t,q,\\omega}\\hspace{-0.1cm}-\\hspace{-0.1cm}F_{t,q,\\omega}\\Sigma^A_{t,q,\\omega}\\sigma_z\\hspace{-0.1cm}-\\hspace{-0.1cm}\\sigma_z\\Sigma^K_{t,q,\\omega}\\sigma_z.\\ \\ \\ \\ \\ \\ \n}\nThe retarded, advanced self-energies $\\Sigma^{R\/A}_{t,q,\\omega}$ are diagonal in Nambu space, while the Keldysh self-energy $\\Sigma^K_{t,q,\\omega}$ consists of non-vanishing diagonal and off-diagonal entries due to the initial off-diagonal occupations $m_{0,q}\\neq0$.\n\nThe kinetic equation for the phonon occupations is obtained by multiplying Eq.~\\eqref{Eq27} on both sides with the spectral function $\\tilde{\\mathcal{A}}_{t,q,\\omega}=i\\left(\\tilde{G}^R_{t,q,\\omega}-\\tilde{G}^A_{t,q,\\omega}\\right)$ and integrating over frequency space. For interacting Luttinger Liquids, the spectral function $\\tilde{\\mathcal{A}}_{t,q,\\omega}$ is very narrowly peaked at the mass shell and the kinetic equation is essentially locked onto $\\omega=0$ in this way (in the interaction picture, the mass shell is at $\\omega=0$). As a consequence, one finds kinetic equations for the diagonal densities\n\\eq{Eq28}{\n\\partial_tn_{t,q}=-\\sigma^R_{t,q}(2n_{t,q}+1)+\\sigma^K_{t,q}\n}\nand the off-diagonal densities\n\\eq{Eq29}{\n\\partial_tm_{t,q}=-2\\sigma^R_{t,q}m_{t,q}-\\Gamma^K_{t,q}.\n}\nThey can be expressed in terms of the imaginary part of the retarded on-shell self-energy \\eq{Nun1}{\\sigma^R_{t,q}=\\frac{1}{2}\\int_{\\omega}\\tilde{\\mathcal{A}}_{t,q,\\omega}\\left(\\Sigma^R_{t,q,\\omega}-\\Sigma^A_{t,q,\\omega}\\right)\\approx\\frac{1}{2}\\left(\\Sigma^R_{t,q,\\omega=0}-\\Sigma^A_{t,q,\\omega=0}\\right)} and the Keldysh on-shell self-energies \\eq{Nun2}{\\sigma^K_{t,q}=\\frac{i}{2}\\int_{\\omega}\\tilde{\\mathcal{A}}_{t,q,\\omega}\\left(\\Sigma^K_{t,q,\\omega}\\right)_{11}\\approx\\frac{i}{2}\\left(\\Sigma^K_{t,q,\\omega=0}\\right)_{11}} and \\eq{Nun3}{\\Gamma^K_{t,q}=\\frac{i}{2}\\int_\\omega\\mathcal{A}_{t,q,\\omega}\\left(\\Sigma^K_{t,q,\\omega}\\right)_{12}\\approx\\frac{i}{2}\\left(\\Sigma^K_{t,q,\\omega=0}\\right)_{12}.} The Keldysh self-energy is always anti-hermitian and therefore purely imaginary in frequency and momentum space, such that Eqs.~\\eqref{Eq28}, \\eqref{Eq29} are real. 
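The structure of Eqs.~\\eqref{Eq28}, \\eqref{Eq29} is conveniently summarized by an explicit time-stepping sketch in which the on-shell self-energies enter as a user-supplied function; in the actual computation they follow from the nonperturbative Dyson-Schwinger scheme described below, while the toy input used here merely illustrates that $n_{t,q}$ relaxes towards a Bose function and $m_{t,q}$ decays.
\\begin{verbatim}
import numpy as np

def evolve_occupations(n, m, self_energies, dt, n_steps):
    # Explicit Euler integration of Eqs. (Eq28), (Eq29):
    #   d_t n_q = -sigma_R_q (2 n_q + 1) + sigma_K_q
    #   d_t m_q = -2 sigma_R_q m_q - Gamma_K_q
    # self_energies(n, m) must return the on-shell arrays
    # (sigma_R, sigma_K, Gamma_K) computed from the current distributions.
    for _ in range(n_steps):
        sigma_R, sigma_K, Gamma_K = self_energies(n, m)
        n = n + dt * (-sigma_R * (2.0 * n + 1.0) + sigma_K)
        m = m + dt * (-2.0 * sigma_R * m - Gamma_K)
    return n, m

# toy relaxation-type self-energies (illustrative only, NOT the self-consistent
# Dyson-Schwinger result used in the text): n relaxes towards the Bose
# function n_eq and m decays to zero.
q = np.linspace(0.01, 1.0, 100)
n_eq = 1.0 / np.expm1(q / 0.1)
toy = lambda n, m: (0.05 * q, 0.05 * q * (2.0 * n_eq + 1.0), np.zeros_like(q))
n0 = np.full_like(q, 0.3)
m0 = np.full_like(q, 0.2)
n_t, m_t = evolve_occupations(n0, m0, toy, dt=0.1, n_steps=200)
\\end{verbatim}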
Since the criterion $|\\epsilon^+_q-\\epsilon^-_q|\\ll u|q|$ is equivalent to $\\sigma^R_{t,q}\\ll u|q|$ at zero and finite temperature equilibrium, we also apply the latter criterion for the present out-of-equilibrium situation in order to estimate the validity of our approach.\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1\\linewidth]{Diag}\n \\caption{Diagrammatic illustration of the Dyson-Schwinger equations up to cubic order. Here, $G$ represents the full Green's function, $S^{(3)}$ the bare three-body vertex and $\\Gamma^{(3)}$ the full three-body vertex. For convenience, this displays only the topology of the diagrams, which has not been extended to Keldysh space.}\n \\label{fig:Diag}\n\\end{figure}\n\nThe phonon scattering terms in Eq.~\\eqref{Eq5} are resonant, i.e. they describe scattering between a continuum of energetically degenerate states, and as a consequence, perturbation theory diverges. In order to determine the self-energies $\\sigma^R_{t,q}, \\sigma^K_{t,q}, \\Gamma^K_{t,q}$, we apply non-perturbative Dyson-Schwinger equations, which are truncated at cubic order. This takes into account renormalization effects of the cubic vertex and yields non-perturbative self-energies. The topology of the corresponding diagrams is shown in Fig.~\\ref{fig:Diag}.\n If we neglect the cubic vertex correction, the Dyson-Schwinger equations reduce to the self-consistent Born approximation \\cite{buchholdmethod}. For an initial state with constant phonon density, as it is the case for the present setup, the vertex correction has been shown to be exactly zero \\cite{buchholdmethod,forsternelson76}, however it obtains a non-zero value in the time-evolution of the system. The kinetic equations \\eqref{Eq28}, \\eqref{Eq29} are solved iteratively, starting at a certain time $t$, the self-energies and vertex correction are computed as functions of the distributions $n_{t,q}, m_{t,q}$. Subsequently $\\partial_tn_{t,q}, \\partial_tm_{t,q}$ are determined, and used in turn to compute the distributions $n_{t+\\Delta,q}, m_{t+\\Delta,q}$ for an infinitesimally later time. This procedure is repeated in order to determine the time-evolution of the phonon densities and self-energies. A more detailed, technical derivation of the iterative solution for the kinetic equation, self-energies and vertex correction can be found in \\cite{buchholdmethod}.\n\n\\section{Thermalization and Prethermalization Dynamics}\n\\label{sec:thermalization_dynamics}\n\nAs one can see from the kinetic equations in Eq.~\\eqref{Eq28} and Eq.~\\eqref{Eq29}, the diagonal and off-diagonal phonon densities are no longer constants of motion in the presence of phonon scattering and energy is redistributed between the different momentum modes. On a general level, when the system thermalizes, as we will show below, the steady state of the dynamics in the presence of a cubic scattering as in Eq.~\\eqref{Eq5}, is solely determined by the associated temperature $T$ and independent of any further details of the initial nonequilibrium state. Specifically, the diagonal modes acquire a Bose-Einstein distribution $n_{\\infty,q} = n_{t\\rightarrow\\infty,q}=\\left(e^{ u|q|\/T}-1\\right)^{-1}$ whereas the off-diagonal distributions $m_{q}=0$ have to vanish.\n\n\nImportantly, in the resonant approximation, the final temperature $T$ ($k\\sub{B}=1$ in the following) can be computed directly from the initial state as will be shown now. In a closed system, the total energy is conserved. 
Moreover, the conservation of the kinetic energy is an additional exact feature of the derived kinetic equation. As a consequence, also the interaction energy itself is individually conserved. The latter is not an artifact of the kinetic equation but a feature of the resonant nature of the interactions, which, by definition of resonance, commute with the quadratic part of the Hamiltonian \\eqref{Eq5} already on an operator level. \nThis implies that the relaxation dynamics due to the interactions takes place in closed subsets of degenerate eigenstates of the quadratic Hamiltonian, which would in the absence of phonon scattering only acquire a global phase and were not able to thermalize. \nConsequently, the kinetic energy of the initial ($e_0$) and final state ($e_f$) have to be equal, which yields:\n\\eq{Eq20}{\ne_{0}=un_{\\lambda}\\Lambda^2=\\int_q \\hspace{-0.1cm}u|q| n_{0,q}\\overset{!}{=}\\int_q u |q|n_{\\infty,q}=\\frac{T^2_{\\lambda}\\pi^2}{3u}=e_{f}.\\ \\ \\ \n}\nHere, $n_{0,q}$ is the initial momentum distribution, see Eq.~\\eqref{Occ}, and $n_{\\infty,q} = \\left(e^{\\beta u|q|}-1\\right)^{-1}$ is the final, thermal distribution. This gives:\n\\eq{temperature}{T_{\\lambda}=\\frac{u\\Lambda}{\\pi}\\sqrt{3n_{\\lambda}},} which depends on the details of the quench only through the quench parameter $\\lambda$ such that we denote the temperature via $T\\sub{$\\lambda$}$ in the following. Importantly, this temperature yields a criterion for the applicability of the Luttinger theory for the present quench scenario, since Luttinger theory is only well-defined for temperatures lower than the cutoff $T_{\\lambda}1$.\n\n\\begin{figure*}\n \\includegraphics[width=1\\linewidth]{OccFull}\n \\caption{Simulation of the time-evolution of the diagonal phonon density $n_{\\tau,q}$ (left column) and off-diagonal density $m_{\\tau,q}$ (right column) for different quench parameters $\\lambda$. In each row, the individual lines correspond to different times $\\tau=(0,1,2,3,4,5)$. \nLeft column: The total phonon density increases in time (from light to dark green) and the dotted lines represent the corresponding asymptotic density in the limit $\\tau\\rightarrow\\infty$, which is a Bose distribution with the quench dependent temperature $T_{\\lambda}=(0.035, 0.124, 0.24)u\\Lambda$ (from the top to the bottom row). The distribution function is separated into two regimes according to Eq.~\\eqref{Eq37}, with a linear increase in momentum for small momenta and a corresponding thermal distribution for larger momenta. The crossover momentum separating the two regimes is marked with a dot.\nRight column: The off-diagonal phonon density is decreasing in time (from light to dark red), displaying two distinct momentum regimes: For momenta larger than the crossover, $q>q\\sub{th}$, the off-diagonal occupation decreases exponentially in momentum, while it remains close to its initial value $m_{0,q}=m_{\\lambda}$ for momenta smaller than the crossover. While any momentum mode $n_{\\tau,q>0}$ will thermalize at a finite time $\\tau<\\infty$, the zero momentum mode remains pinned to its initial value $n_{\\tau,q=0}=n_{\\tau=0,q=0}$. The latter is not an artifact of the approximation but a consequence of exact fermionic particle number conservation, as outlined in the main text.\n}\n \\label{fig:OccFull}\n\\end{figure*}\n\nThe time evolution of the phonon densities for three different quench parameters $\\lambda$ is shown in Fig.~\\ref{fig:OccFull}. 
It features two characteristic regimes, which are separated by a time-dependent crossover momentum $q\\sub{th}(\\tau)$, which turns out to be the inverse thermal length scale $x\\sub{th}(\\tau)=1\/q\\sub{th}(\\tau)$. According to the numerical simulations, $q\\sub{th}(\\tau)$ can be parametrized as $q\\sub{th}(\\tau)=Q_{\\lambda}\\tau^{\\alpha_{\\lambda}}$, where the exponent $\\alpha_{\\lambda}$ and the amplitude $Q_{\\lambda}$ are monotonic functions of the quench parameter (for $\\lambda>1$). According to Fig.~\\ref{fig:OccFull}, away from the crossover, the phonon distribution can be written as\n\\eq{Eq37}{\nn_{\\tau,q}=\\left\\{\\begin{array}{cl}n_{\\lambda}+c_{\\tau,\\lambda}|q|& \\mbox{ for } |q|q\\sub{th}(\\tau)\\end{array}\\right. .\n}\nFor small momenta $|q|q\\sub{th}$ fast quasi-particle scattering events have established a local equilibrium and the phonon density is well described by a Bose distribution function $n\\sub{B}(u|q|,\\tilde{T}_{\\tau,\\lambda})=\\left(e^{u|q|\/\\tilde{T}_{\\tau,\\lambda}}-1\\right)^{-1}$, which can be approximated by a classical Rayleigh-Jeans distribution, as in Eq.~\\eqref{Eq37}, for intermediate momenta $q\\sub{th}||$\\eqref{Eq39b}$|$, which leads to the condition on the distance $xq\\sub{th}$ are described by a single, well defined temperature $\\tilde{T}_{t,\\lambda}$ such that $n_{t,q}=n_{\\mbox{\\tiny B}}(u|q|,T)\\approx T\/(u|q|)$. For momenta $qq\\sub{th}$, the modes are described by the same temperature, indicating the presence of local detailed balance in the momentum regime larger than the crossover. In this regime, the temperature decays algebraically, revealing energy transport from the thermalized to the non-thermalized region, carried by dynamical slow modes. The inset shows the decay of the effective temperature for large times, allowing for numerical estimate $\\mu=2\/3$, which corresponds to the red, dotted line.\n}\n \\label{fig:MomTherm}\n\\end{figure*}\n\nIn order to determine the asymptotic dynamics in the thermalized regime, we define a momentum and time dependent temperature by inverting the on-shell Bose distribution function \n\\eq{Eq41}{\n\\tilde{T}_{t,\\lambda,q}=\\frac{u|q|}{\\log\\left(\\frac{n_{t,q}+1}{n_{t,q}}\\right).}\n}\nThe time evolution of $\\tilde{T}_{t,\\lambda,q}$ is shown in Fig.~\\ref{fig:MomTherm}. For momenta $qq\\sub{th}$, $\\tilde{T}_{t,\\lambda,q}$ becomes momentum independent and a global property of the high momentum modes. The decay of $\\tilde{T}_{t,\\lambda}=\\tilde{T}_{t,q>q\\sub{th},\\lambda}$ follows a power law in time, which can be expressed\n\\eq{Eq42}{\n\\tilde{T}_{t,\\lambda}=T_{\\lambda}+u\\Lambda\\Delta_{\\lambda}\\left(v_0\\Lambda^2t\\right)^{-\\mu},\n}\nwhere $\\mu$ is the relaxation exponent associated with the dynamical slow modes. For a one-dimensional system with energy and momentum conserving dynamics $\\mu=2\/3$, since this behavior corresponds to the Kardar-Parisi-Zhang (KPZ) universality class \\cite{KPZ,spohn04,lamacraft13,vanBeijeren,spohn15}. Performing a single parameter fit from the numerical simulations, we find that for large times $\\mu=2\/3$ agrees very well with the numerical data for various different quench scenarios. However, for intermediate times, we find scaling behavior with $\\mu>2\/3$ for some quenches, which might be traced back to the presence of subleading correction terms due to couplings to other diffusive modes \\cite{Lux13,narayan02,Mukerjee}. 
Numerically, distinguishing these possible scaling contributions would require simulation times spanning multiple decades, such that we cannot exclude a different exponent $\\mu<2\/3$ at the largest times \\cite{Lux13}; such behavior is, however, not observed in our simulations.\n\n\nWhile the establishment of local detailed balance, which leads to effective thermalization and thermal-like fermionic correlation functions, is an effect of local quasi-particle scattering, the asymptotic thermalization dynamics describing energy transport over large distances in momentum space is determined by macroscopic diffusive modes in the system. This is observable as a temperature that decays algebraically towards the final temperature of the system, $T_{\\lambda}$. The discussion of the dynamical slow modes remains valid even in the presence of off-resonant scattering processes, and therefore the universal properties of the asymptotic thermalization process remain unmodified. However, non-universal properties, such as the final temperature and the relaxation rate, will be modified by the off-resonant processes. Their precise computation would be a task for numerical simulations.\n\n\\section{Conclusion}\nIn this work, we have analyzed the relaxation dynamics of interacting Luttinger liquids, microscopically represented by one-dimensional interacting fermions with band curvature, after a sudden quench in the fermionic interaction. The theoretical analysis is based on quantum kinetic equations for the phonon distribution function and non-perturbative Dyson-Schwinger equations, which are both well suited to determine the time evolution of static observables for interacting Luttinger liquids with resonant, cubic interactions, and applicable in a broad parameter regime within the Luttinger framework. The central result is a two-step thermalization procedure including a spatio-temporal prethermalized regime for intermediate distances and times, which leads to fermionic correlation functions described by a generalized Gibbs state on these distances, and corresponds to fast quasi-particle formation after the quench. On smaller distances, a thermalized regime occurs due to the scattering and associated redistribution of energy between the quasi-particle modes. This regime is described by thermal correlation functions with a characteristic thermal correlation length and a thermal quasi-particle distribution with an effective temperature that decays algebraically in time towards its asymptotic value.\n\n This work shows in which way thermalization and prethermalization occur and spread in space for RG-irrelevant, and in this sense weak, integrability-breaking interactions. In this setup both thermalization and prethermalization occur locally in space. While the prethermalized region spreads ballistically in space, the thermalized region spreads sub-ballistically due to the subleading, RG-irrelevant nature of the interactions. This allows for a well-defined prethermal regime in time and space, which would not be possible for a constant, momentum-independent scattering vertex, for which thermalization would occur immediately on all length scales.\nThis underpins the statement that typical candidates for clearly observable prethermalized regimes within generic thermalization dynamics are quasi-particle theories with RG-irrelevant interactions.\n\n\n\\begin{acknowledgments}\nWe acknowledge valuable discussions with Alessio Recati. 
This research was supported by the\nGerman Research Foundation (DFG) through the Institutional\nStrategy of the University of Cologne within the German\nExcellence Initiative (ZUK 81) and the European Research\nCouncil (ERC) under the European Unions Horizon\n2020 research and innovation programme (grant agreement\nNo 647434)\nas well as the\nDeutsche Akademie der Naturforscher Leopoldina under grant numbers LPDS 2013-07 and LPDR 2015-01.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:Introduction}}\n\nWe develop an empirical framework to identify and estimate the heterogeneous\neffects of treatments on outcomes of interest, where the treatments\nare the result of agents' interaction (e.g., bargaining,\noligopolistic entry, decisions in the presence of peer effects or\nstrategic effects). Treatments are determined as an equilibrium of\na game and these strategic decisions of players endogenously affect\ncommon or player-specific outcomes. For example, one may be interested\nin the effects of entry of newspapers on local political behavior,\nentry of carbon-emitting companies on local air pollution and health\noutcomes, the presence of potential entrants in nearby markets on\npricing or investment decisions of incumbents, the exit decisions\nof large supermarkets on local health outcomes, or the provision of\nlimited resources when individuals make participation decisions under\npeer effects and their own gains from the treatment.\\footnote{The entry and pollution is our leading example introduced in Section\n\\ref{sec:stylized_ex}; the other examples are discussed in detail\nin Appendix \\ref{sec:Examples}.} As reflected in some of these examples, our framework allows us to\nstudy the \\textit{externalities of strategic decisions}, such as societal\noutcomes resulting from firm behavior. Ignoring strategic interaction\nin the treatment selection process may lead to biased, or at least\nless informative, conclusions about the effects of interest.\n\nWe consider a model in which agents play a discrete game of complete\ninformation, whose equilibrium actions (i.e., a profile of binary\nendogenous treatments) determine a post-game outcome in a nonseparable\nmodel with endogeneity. We are interested in the various treatment\neffects of this model. In recovering these parameters, the setting\nof this study poses several challenges. First, the first-stage game\nposits a structure in which binary dependent variables are simultaneously\ndetermined in threshold crossing models, thereby, making the model,\nas a whole, \\textit{incomplete}. This is related to the problem of\nmultiple equilibria in the game. Second, due to this simultaneity,\nthe selection process for each treatment in the profile does not exhibit\nthe conventional monotonic property \\`a la \\citet{imbens1994identification}.\nFurthermore, we want to remain flexible with other components of the\nmodel. That is, we make no assumptions on the joint distributions\nof the unobservables nor parametric restrictions on the player's payoff\nfunction and how treatments affect the outcome. In addition, we do\nnot impose any arbitrary equilibrium selection mechanism to deal with\nthe multiplicity of equilibria, nor require that players be symmetric.\nIn nonparametric models with multiplicity and\/or endogeneity, identification\nmay be achieved using excluded instruments with large support. 
Although\nsuch a strong requirement can be met in practice, estimation and inference\ncan still be problematic (\\citet{andrews1998semiparametric}, \\citet{khan2010irregular}).\nThus, we avoid such assumptions for instruments and other exogenous\nvariables.\n\nThe first contribution of this study is to establish that under strategic\nsubstitutability, regions that predict the equilibria of the treatment\nselection process in the first-stage game can present a monotonic\npattern in terms of the number of treatments selected.\\footnote{To estimate payoff parameters, \\citet{berry1992estimation} partly\ncharacterizes equilibrium regions. To calculate the bounds on these\nparameters, \\citet{CT09} simulate their moment inequalities model\nthat are implied by the shape of these regions, especially the regions\nfor multiple equilibria. While their approaches are sufficient for\ntheir analyses, full analytical results are critical for the identification\nanalysis in this current study.} The second contribution of this study is to show, after restoring\nthe \\textit{generalized monotonicity} in the selection process, how\nthe model structure and the data can provide information about treatment\nparameters, such as the average treatment effects (ATEs). We first\nestablish the bounds on the ATE and other related parameters with\npossibly discrete instruments. We also show that tighter bounds on\nthe ATE can be obtained by introducing (possibly discrete) exogenous\nvariables excluded from the first-stage game. This is especially motivated\nwhen the outcome variable is affected by externalities generated by\nthe players. We can derive sharp bounds as long as the outcome variable\nis binary. To deal with the multiple equilibria problem in our analysis,\nwe assume that instruments vary sufficently to offset the effect of\nstrategic substitutability. We provide a simple testable implication\nfor the existence of such instrument variation in the case of mutually\nindependent payoff unobservables. This requirement of variation is\nqualitatively different and substantially weaker than a typical large\nsupport assumption. A marked feature of our analyses is that for the\nsharp bounds on the ATE, player-specific instruments are not necessary.\n\nOur bound analysis\nbuilds on \\citet{VY07} and \\citet{SV11}, which consider point and partial identification in single-agent nonparametric triangular models\nwith binary endogenous variables. Unlike them, however, we allow for multi-agent strategic interaction\nas a key component of the model. Some studies have extended a single-treatment\nmodel to a multiple-treatment setting (e.g., \\citet{heckman2006understanding},\n\\citet{jun2011tighter}), but their models maintain monotonicity in\nthe selection process and none of them allow simultaneity among the\nmultiple treatments resulting from agents' interaction, as we do in\nthis study.\n\nIn interesting recent work, \\citet{heckman2015unordered}, and \\citet{lee2016identifying}\nextend the monotonicity of the selection process in multi-valued treatments\nsettings. \\citet{heckman2015unordered} introduce unordered monotonicity,\nwhich is a different type of treatment selection mechanisms than ours.\n\\citet{lee2016identifying} consider more general non-monotonicity\nand do mention entry games as one example of the treatment selection\nprocesses they allow. 
However, they assume known payoffs and bypass\nthe multiplicity of equilibria by assuming a threshold-crossing equilibrium\nselection mechanism, both of which we do not assume in this study.\nIn addition, \\citet{lee2016identifying}'s focus is on the identification\nof marginal treatment effects with continuous instruments. In another\nrelated work, \\citet{chesher2014generalized} consider a class\nof generalized instrumental variable models in which our model may\nfall and propose a systematic method of characterizing sharp identified\nsets for admissible structures. This present study's characterization\nof the identified sets is analytical, which helps investigate how the\nidentification is related to exogenous variation in the model and\nto the equilibrium characterization in the treatment selection. Also,\ncalculating the bounds on the treatment parameters using their approach\ninvolves projections of identified sets that may require parametric\nrestrictions. Lastly, \\citet{Han18,Han19c} consider identification\nof dynamic treatment effects and optimal treatment regimes in a nonparametric\ndynamic model, in which the dynamic relationship causes non-monotonicity\nin the determination of each period's outcome and treatment.\n\nWithout triangular structures, \\citet{manski1997monotone}, \\citet{MP00}\nand \\citet{Man13} also propose bounds on the ATE with multiple treatments\nunder various monotonicity assumptions, including an assumption on\nthe sign of the treatment response. We take an alternative approach\nthat is more explicit about treatments interaction while remaining\nagnostic about the direction of the treatment response. Our results\nsuggest that provided there exist exogenous variation excluded from\nthe selection process, the bounds calculated from this approach can\nbe more informative than those from their approach.\n\nIdentification in models for binary games with complete information\nhas been studied in \\citet{Tam03}, \\citet{CT09}, and \\citet{bajari2010identification},\namong others.\\footnote{See also \\citet{galichon2011set} and \\citet{beresteanu2011sharp}\nfor a more general setup that includes complete information games\nas an example.} This present study contributes to this literature by considering post-game outcomes that are often not of players' direct concern.\nAs related work that considers post-game outcomes, \\citet{ciliberto2015market}\nintroduce a model in which firms make simultaneous decisions of entry\nand pricing upon entry. Consequently, their model can be seen as a\nmulti-agent extension of a sample selection model. On the other hand,\nthe model considered in this study is a multi-agent extension of a\nmodel for endogenous treatments. At a more general level, our approach is an attempt to bridge the treatment effect literature and the industrial organization (IO) literature. We are interested in the evaluation of treatments that are the result of agents' strategic interaction, an aspect that is key in the IO literature. To conduct the counterfactual analysis, however, we closely follow the treatment effect literature, instead of the structural approach of the IO literature. For example, \\citet{CT09} and \\citet{ciliberto2015market}\nimpose economic structure and parametric assumptions to recover model primitives for policy analyses. In contrast, our parameters of interest are\ntreatment effects as functionals of the primitives (but excluding\nthe game parameters), and thus, allow our model to remain nonparametric. 
In addition, as the goal is different, we employ a different approach to partial identification under the multiplicity of equilibria than theirs.\\footnote{Even if we are willing to assume a known distribution for the unobserved payoff types, their approach to multiplicity is not applicable to the particular setting of this study.}\n\n\n\nTo demonstrate the applicability of our method, we take the bounds we propose to data on airline market structure and air pollution in cities in the U.S. Aircraft and airport land operations are a major source of emissions, and thus, quantifying the causal effect of air transport on pollution is of importance to policy makers. We explicitly allow market structure to be determined endogenously as the outcome of an entry game in which airlines behave strategically to maximize their profits and where the resulting pollution in this market is not internalized by the firms. Additionally, we do not impose any structure on how airline competition affects pollution and allow for heterogeneous effects across firms. In other words, not only do we allow the effect of the number of firms in the market on pollution to be nonlinear and unrestricted, but we also distinguish the identity of the firms. The latter is important if we believe that post-entry behavior differs across airlines. For example, different airlines might serve the market with higher frequency or with different types of airplanes, hence affecting pollution differently.\nTo implement our application, we combine data from two sources. The first contains airline information from the Department of Transportation, which we use to construct a dataset of airlines' presence in each market. We then merge it with air pollution data at each airport from air monitoring stations compiled by the Environmental Protection Agency. In our preferred specification, our outcome variable is a binary measure of the level of particulate matter in the air.\n\nWe consider three sets of ATE exercises to investigate different aspects of the relationship between market structure and pollution in equilibrium. The first simply quantifies the effects of each airline operating as a monopolist compared to a situation in which the market is not served by any airline. We find that the effect of each airline on pollution is positive and statistically significant. We also find evidence of heterogeneity in the effects across different airlines. The second set of exercises examines the ATEs of all potential market structures on pollution. We find that the probability of high pollution is increasing with the number of airlines in the market, but at a decreasing rate. Finally, the third set of exercises quantifies the ATE of a single airline under all potential configurations of the market in terms of its rivals. We observe that in all cases, Delta entering a market has a positive effect on pollution and this effect is decreasing with the number of rivals. The results from the last two sets of exercises are consistent with the results of a Cournot-competition oligopolistic model in which incumbents \\emph{accommodate} new entrants by reducing the quantity they produce.\n\nThis paper is organized as follows. Section \\ref{sec:stylized_ex} summarizes the analysis of this study using a stylized example. 
Section\n\\ref{sec:General-Theory} presents a general theory: Section \\ref{subsec:Model}\nintroduces the model and the parameters of interest; Section \\ref{subsec:Geometry}\npresents the generalized monotonicity for equilibrium regions for\nmany players; and Section \\ref{subsec:Partial-Identification} delivers\nthe partial identification results of this study. Section \\ref{sec:Monte-Carlo-Studies}\npresents a numerical illustration and Section \\ref{sec:Empirical-Application}\nthe empirical application on airlines and pollution. In the Appendix,\nSection \\ref{sec:Examples} provides more examples to which our setup\ncan be applied. Section \\ref{sec:Extensions} contains four extensions\nof our main results. Finally, Section \\ref{sec:Proofs} collects the\nproofs of theorems and lemmas.\n\n\\section{A Stylized Example\\label{sec:stylized_ex}}\n\nWe first illustrate the main results of this study with a stylized\nexample. Suppose we are interested in the effects of airline competition\non local air quality (or health). Let $Y_{i}$ denote the binary indicator\nof air pollution in market $i$. For illustration, we assume there\nare two potential airlines. In the next section, we present a general\ntheory with more than two players. Let $D_{1,i}$ and $D_{2,i}$ be\nbinary variables that indicate the decisions to enter market $i$\nby Delta and United, respectively. We allow the decisions $D_{1,i}$\nand $D_{2,i}$ to be correlated with some unobserved characteristics\nof the local market that affect $Y_{i}$. Moreover, since $D_{1,i}$\nand $D_{2,i}$ are equilibrium outcomes of the entry game, we allow\nthem to be outcomes from multiple equilibria. The endogeneity and\nthe presence of multiple equilibria are our key challenges in this\nstudy.\n\nLet $Y_{i}(d_{1},d_{2})$ be the potential air quality had Delta and\nUnited's decisions been $(D_{1},D_{2})=(d_{1},d_{2})$; for example,\n$Y_{i}(1,1)$ is the potential air quality from duopoly, $Y_{i}(1,0)$\nis with Delta being a monopolist, and so on. Let $X_{i}$ be a vector\nof market characteristics that affect $Y_{i}$. Our parameter of interest\nis the ATE, $E[Y_{i}(d_{1},d_{2})-Y_{i}(d_{1}',d_{2}')|X_{i}=x]$,\nwhich captures the effect of market structure on pollution. One interesting\nATE is $E[Y_{i}(1,d_{2})-Y_{i}(0,d_{2})|X_{i}=x]$ for each $d_{2}$,\nwhere we can learn the interaction effects of treatments, e.g., how\nmuch the average effect of Delta's entry is affected by United's entry:\n$E\\left[Y_{i}(1,1)-Y_{i}(0,1)\\right]-E\\left[Y_{i}(1,0)-Y_{i}(0,0)\\right]$\n(suppressing $X_{i}$). In our empirical application (Section \\ref{sec:Empirical-Application}),\nwe consider this and other related parameters in a more realistic\nmodel, where there are more than two airlines.\n\nWe show how we overcome the problems of endogeneity and multiple equilibria\nand how to construct bounds on the ATE using the excluded instruments\nand other exogenous variables. Let $Z_{1,i}$ and $Z_{2,i}$ be cost\nshifters for Delta and United, respectively, which serve as instruments.\nAs a benchmark, we first consider naive bounds analogous to \\citet{manski1990nonparametric}\nusing excluded instruments which satisfy \n\\begin{align}\nY_{i}(d_{1},d_{2}) & \\perp(Z_{1,i},Z_{2,i})|X_{i}\\label{as:manski_IV}\n\\end{align}\nfor all $(d_{1},d_{2})$. To simplify notation, we suppress the index\n$i$ henceforth, let $\\boldsymbol{D}\\equiv(D_{1},D_{2})$ and $\\boldsymbol{Z}\\equiv(Z_{1},Z_{2})$,\nand write $E[\\cdot|w]\\equiv E[\\cdot|W=w]$ for a generic r.v. 
$W$.\nAs an illustration, we focus on calculating bounds on $E[Y(1,1)|X=x]$.\nNote that \n\\begin{align}\nE[Y(1,1)|x]=E[Y(1,1)|\\boldsymbol{z},x] & =E[Y|\\boldsymbol{D}=(1,1),\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=(1,1)|\\boldsymbol{z},x]\\nonumber \\\\\n & +\\sum_{\\boldsymbol{d}^{\\prime}\\neq(1,1)}E[Y(1,1)|\\boldsymbol{D}=\\boldsymbol{d}^{\\prime},\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{\\prime}|\\boldsymbol{z},x],\\label{eq:Manski_expand-1}\n\\end{align}\nwhere the first equality is by \\eqref{as:manski_IV}. Manski-type\nbounds can be obtained by observing that the counterfactual term $E[Y(1,1)|\\boldsymbol{D}=\\boldsymbol{d}^{\\prime},\\boldsymbol{z},x]=\\Pr[Y(1,1)=1|\\boldsymbol{D}=\\boldsymbol{d}^{\\prime},\\boldsymbol{z},x]$\nis bounded above by one and below by zero. By further using the variation\nin $\\boldsymbol{Z}$, which is excluded from $Y(1,1)$, the lower\nand upper bounds on $E[Y(1,1)\\vert x]$ can be written as \n\\begin{align*}\nL_{Manski}(x) & \\equiv\\sup_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x],\\\\\nU_{Manski}(x) & \\equiv\\inf_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]+1-\\Pr[\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z}]\\right\\} .\n\\end{align*}\nThe goal of our analysis is to derive tighter bounds than $L_{Manski}(x)$\nand $U_{Manski}(x)$ by introducing further assumptions motivated\nby economic theory.\n\nTo illustrate, we introduce the following semi-triangular model with\nlinear indices. In the next section, we generalize this model by\nintroducing fully nonparametric models that allow continuous $Y$. All the assumptions and results illustrated in the current section are formally stated and proved in the next section. Consider\n\\begin{align}\nY & =1[\\mu_{1}D_{1}+\\mu_{2}D_{2}+\\beta X\\ge\\epsilon],\\label{eq:model_ex1}\\\\\nD_{1} & =1[\\delta_{2}D_{2}+\\gamma_{1}Z_{1}\\ge U_{1}],\\label{eq:model_ex2}\\\\\nD_{2} & =1[\\delta_{1}D_{1}+\\gamma_{2}Z_{2}\\ge U_{2}],\\label{eq:model_ex3}\n\\end{align}\nwhere $(\\epsilon,U_{1},U_{2})$ are continuously distributed unobservables\nthat can be arbitrarily correlated, $(U_{1},U_{2})$ are uniform,\nand assume \n\\begin{align}\n & (\\epsilon,U_{1},U_{2})\\perp(Z_{1},Z_{2})|X,\\label{eq:my_IV}\\\\\n & \\delta_{1}<0\\text{ and }\\delta_{2}<0,\\label{eq:strategic_sub}\\\\\n & sgn(\\mu_{1})=sgn(\\mu_{2}).\\label{eq:mono}\n\\end{align}\nNote that \\eqref{eq:my_IV} replaces \\eqref{as:manski_IV}, \\eqref{eq:strategic_sub}\nassumes strategic substitutability, and \\eqref{eq:mono} is plausible\nin the current example of air quality and entry. Owing to the first\nstage simultaneity,\nthe model \\eqref{eq:model_ex1}--\\eqref{eq:model_ex3} is \\textit{incomplete}, i.e., the model primitives and the\ncovariates do not uniquely predict $(Y,\\boldsymbol{D})$. In this\nmodel, we are \\textit{not} interested in the players' payoff parameters\n$(\\delta_{-s},\\gamma_{s})$ for $s=1,2$, individual parameters $(\\mu_{1},\\mu_{2},\\beta)$\nthat generate the outcome, nor distributional parameters. 
Instead,\nwe are interested in the ATE as a function of $(\\mu_{1},\\mu_{2},\\beta)$.\nThis is in contrast to \\citet{ciliberto2015market}, where payoff\nand pricing parameters are direct parameters of interest, and thus,\nour identification question and strategy (especially how we deal with\nmultiple equilibria) are different from theirs.\n\nTypically, a standard approach that utilizes instrumental variables\ncompares the reduced-form relationship between the outcome and treatment\nwith the reduced-form relationship between the treatment and instrument.\nWe apply the same idea here by changing the values of $Z_{1}$ and\n$Z_{2}$ and measure the change in $Y$ relative to the change in\n$D_{1}$ and $D_{2}$. To this end, for two realizations $\\boldsymbol{z},\\boldsymbol{z}'$\nof $\\boldsymbol{Z}$, say low and high entry cost for both airlines,\nwe introduce reduced-form objects directly recovered from the data:\n\\begin{align}\nh(\\boldsymbol{z},\\boldsymbol{z}',x) & \\equiv\\Pr[Y=1|\\boldsymbol{z},x]-\\Pr[Y=1|\\boldsymbol{z}',x],\\label{eq:h(zzx)-1}\\\\\nh_{\\boldsymbol{d}}(\\boldsymbol{z},\\boldsymbol{z}',x) & \\equiv\\Pr[Y=1,\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z},x]-\\Pr[Y=1,\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z}',x]\\label{eq:hj-1}\n\\end{align}\nfor $d\\in\\{(0,0),(1,0),(0,1),(1,1)\\}\\equiv\\mathcal{D}$. We show that\n\\eqref{eq:h(zzx)-1}--\\eqref{eq:hj-1} deliver useful information\nabout the outcome index function ($\\mu_{1}D_{1}+\\mu_{2}D_{2}+\\beta X$),\nwhich in turn is helpful in constructing bounds on the ATE. Note that\n\\begin{align}\nh(\\boldsymbol{z},\\boldsymbol{z}',x) & =h_{11}(\\boldsymbol{z},\\boldsymbol{z}',x)+h_{10}(\\boldsymbol{z},\\boldsymbol{z}',x)+h_{01}(\\boldsymbol{z},\\boldsymbol{z}',x)+h_{00}(\\boldsymbol{z},\\boldsymbol{z}',x)\\nonumber \\\\\n & =\\Pr[Y=1,\\boldsymbol{D}=(1,1)|\\boldsymbol{z},x]-\\Pr[Y=1,\\boldsymbol{D}=(1,1)|\\boldsymbol{z}',x]\\nonumber \\\\\n & +\\Pr[Y=1,\\boldsymbol{D}=(1,0)|\\boldsymbol{z},x]-\\Pr[Y=1,\\boldsymbol{D}=(1,0)|\\boldsymbol{z}',x]\\nonumber \\\\\n & +\\Pr[Y=1,\\boldsymbol{D}=(0,1)|\\boldsymbol{z},x]-\\Pr[Y=1,\\boldsymbol{D}=(0,1)|\\boldsymbol{z}',x]\\nonumber \\\\\n & +\\Pr[Y=1,\\boldsymbol{D}=(0,0)|\\boldsymbol{z},x]-\\Pr[Y=1,\\boldsymbol{D}=(0,0)|\\boldsymbol{z}',x],\\label{eq:h_derive0}\n\\end{align}\nwhere $\\boldsymbol{D}=(1,0)$ and $(0,1)$ are the airlines' decisions\nthat may arise as multiple equilibria. The increase in cost (from\n$\\boldsymbol{z}$ to $\\boldsymbol{z}'$) will make the operation of\nthese airlines less profitable in some markets, depending on the values\nof the unobservables $\\boldsymbol{U}=(U_{1},U_{2})$. This will result\nin a change in the market structure in those markets. Specifically,\nmarkets ``on the margin'' may experience one of the following changes\nin structure as cost increases: (a) from duopoly to Delta-monopoly;\n(b) from duopoly to United-monopoly; (c) from Delta-monopoly to no\nentrant; (d) from United-monopoly to no entrant; and (e) from duopoly\nto no entrant. 
These changes are depicted in Figure \\ref{fig:As_EQ-1},\nwhere each $R_{d_{1},d_{2}}(\\boldsymbol{z})$ denotes the maximal\nregion that predicts $(d_{1},d_{2})$, given $\\boldsymbol{Z}=\\boldsymbol{z}$.\\footnote{See Section \\ref{subsec:notation} in the Appendix for a formal definition.\nThe figure is drawn in a way that $\\gamma_{1}$ and $\\gamma_{2}$\nare negative.} \n\\begin{figure*}[t]\n\\centering \\begin{subfigure}[t]{0.35\\textwidth} \\centering \\begin{tikzpicture}[scale=0.33]\n\\draw[step=1cm,gray,very thin] (-3,-3) rectangle (5,5); \\draw (-3,-3) node[anchor=north east] {0}; \\draw (5,-3) node[anchor=north] {1}; \\draw (-3,5) node[anchor=east] {1};\n\n\\path [draw=none, fill=gray, opacity=0.3] (3.5,1) rectangle (-3,5); \\path [draw=none, fill=gray, opacity=0.3] (1.7,-3) rectangle (5,3);\n\n\\draw[thick,->] (1.7,3) -- (5,3); \\draw[thick,->] (1.7,3) -- (1.7,-3); \\draw[thick,->] (3.5,1) -- (-3,1); \\draw[thick,->] (3.5,1) -- (3.5,5); \n\n\\node [below right, black] at (-3.2,5) {$R_{10}(\\boldsymbol{z})$};\n\\node [below right, black] at (1.4,-0.5) {$R_{01}(\\boldsymbol{z})$};\n\\node [below right, black] at (3.2,5) {$R_{00}(\\boldsymbol{z})$}; \\node [below right, black] at (-3.2,-0.5) {$R_{11}(\\boldsymbol{z})$};\n\n\\draw (-4,6) node[anchor=east] {$U_2$};\n\\draw (6.3,-4) node[anchor=north] {$U_1$};\n\\draw (-3,1) node[anchor=east] {\\small{$\\delta_1+\\gamma_2 z_2$}};\n\\draw (1.7,-3) node[anchor=north] {\\small{$\\delta_2+\\gamma_1 z_1$}};\n\\draw (5,6) node[anchor=east] {\\small{$\\gamma_1 z_1$}};\n\\draw (6.5,3.5) node[anchor=north] {\\small{$\\gamma_2 z_2$}};\n\n\\end{tikzpicture} \\caption{When $\\boldsymbol{Z}=\\boldsymbol{z}$}\n\\end{subfigure\n~ \\begin{subfigure}[t]{0.35\\textwidth} \\centering \\begin{tikzpicture}[scale=0.33]\n\\draw[step=1cm,gray,very thin] (-3,-3) rectangle (5,5); \\draw (-3,-3) node[anchor=north east] {0}; \\draw (5,-3) node[anchor=north] {1}; \\draw (-3,5) node[anchor=east] {1};\n\\path [draw=none, fill=red, opacity=0.3] (1,-1) rectangle (-3,5); \\path [draw=none, fill=red, opacity=0.3] (-1,-3) rectangle (5,0);\n\n\\draw[thick,->] (1.7,3) -- (5,3); \\draw[thick,->] (1.7,3) -- (1.7,-3); \\draw[thick,->] (3.5,1) -- (-3,1); \\draw[thick,->] (3.5,1) -- (3.5,5); \n\n\\draw[thick,red,dashed,->] (-1,0) -- (5,0); \n\\draw[thick,red,dashed,->] (-1,0) -- (-1,-3); \n\\draw[thick,red,dashed,->] (1,-1) -- (-3,-1); \n\\draw[thick,red,dashed,->] (1,-1) -- (1,5); \n\n\\node [below right, red] at (-3.2,3) {$R_{10}(\\boldsymbol{z}')$};\n\\node [below right, red] at (0.3,-1) {$R_{01}(\\boldsymbol{z}')$};\n\\node [below right, red] at (1.1,3) {$R_{00}(\\boldsymbol{z}')$}; \\node [below right, red] at (-4,-1) {$R_{11}(\\boldsymbol{z}')$};\n\n\\draw (-4,6) node[anchor=east] {$U_2$};\n\\draw (6.3,-4) node[anchor=north] {$U_1$};\n\\draw[red] (-3,-0.6) node[anchor=east] {\\small{$\\delta_1+\\gamma_2 z'_2$}};\n\\draw[red] (-0.5,-3) node[anchor=north] {\\small{$\\delta_2+\\gamma_1 z'_1$}};\n\\draw[red] (2.3,6) node[anchor=east] {\\small{$\\gamma_1 z'_1$}};\n\\draw[red] (6.5,1) node[anchor=north] {\\small{$\\gamma_2 z'_2$}};\n\n\\end{tikzpicture}\n\n\\caption{When $\\boldsymbol{Z}=\\boldsymbol{z}'$}\n\\end{subfigure}\\caption{Change in Equilibrium Regions with Compensating Strategic Substitutability.\\label{fig:As_EQ-1}}\n\\end{figure*}\n\nThese changes (a)--(e) are a consequence of the monotonic pattern\nof equilibrium regions, which we formally establish in a general setting\nof more than two players in Theorem \\ref{thm:mono_pattern} of 
Section\n\\ref{subsec:Geometry}.\n\nIn general, besides these five scenarios, there may be markets that\nused to be Delta-monopoly but become United-monopoly and vice versa,\ni.e., markets that exhibit \\textit{non-monotonic} behaviors; see Remark\n\\ref{rem:nonmonotone} below for details. Owing to possible multiple\nequilibria, we are agnostic about these latter types of changes except\nin extreme cases, where one equilibrium is selected with probability\none. We generally do not know the equilibrium selection mechanism\nin play, much less about how such mechanism changes as cost $\\boldsymbol{Z}$\nchanges. The key idea in this study is to overcome the non-monotonicity\nby shifting the cost sufficiently so that there is no market that\nswitches from one monopoly to another. We show that the shift in cost\nthat compensates the strategic substitutability does just that, as\nis depicted in Figure \\ref{fig:As_EQ-1}. In this figure, we assume\n$\\delta_{2}+\\gamma_{1}z_{1}>\\gamma_{1}z_{1}'$ and $\\delta_{1}+\\gamma_{2}z_{2}>\\gamma_{2}z_{2}'$.\nIn other words, we assume \n\\begin{align}\n\\left|\\gamma_{s}(z_{s}'-z_{s})\\right| & \\ge\\left|\\delta_{-s}\\right|\\text{ for }s=1,2.\\label{eq:EQ_S2}\n\\end{align}\nImportantly, we do not require infinite variation in $\\boldsymbol{Z}$.\\footnote{Of course, changing each $Z_{s}$ from $-\\infty$ to $\\infty$ will\ntrivially achieve our requirement of having no market that switches\nfrom one monopoly to another.} In fact, we show that the compensating strategic substitutability\n\\eqref{eq:EQ_S2} is implied by the following condition, which can\nbe tested using the data: there exist $\\boldsymbol{z},\\boldsymbol{z}'\\in\\mathcal{Z}$\nsuch that \n\\begin{align}\n\\Pr[\\boldsymbol{D}=(0,0)|\\boldsymbol{z}]+\\Pr[\\boldsymbol{D}=(1,1)|\\boldsymbol{z}'] & >2-\\sqrt 2.\\label{eq:asy2-1}\n\\end{align}\n\nSuppose $\\boldsymbol{z},\\boldsymbol{z}'$ satisfy \\eqref{eq:EQ_S2}.\nThen, by \\eqref{eq:my_IV}, we can derive from \\eqref{eq:h_derive0}\nthat (suppressing $X=x$ for simplicity) \n\\begin{align}\nh(\\boldsymbol{z},\\boldsymbol{z}') & =\\Pr[\\epsilon\\leq\\mu_{1}+\\mu_{2},\\boldsymbol{U}\\in\\Delta_{a}\\cup\\Delta_{b}\\cup\\Delta_{e}]\\nonumber \\\\\n & -\\Pr[\\epsilon\\leq\\mu_{1},\\boldsymbol{U}\\in\\Delta_{a}]+\\Pr[\\epsilon\\leq\\mu_{1},\\boldsymbol{U}\\in\\Delta_{c}]\\nonumber \\\\\n & -\\Pr[\\epsilon\\leq\\mu_{2},\\boldsymbol{U}\\in\\Delta_{b}]+\\Pr[\\epsilon\\leq\\mu_{2},\\boldsymbol{U}\\in\\Delta_{d}]\\nonumber \\\\\n & -\\Pr[\\epsilon\\leq0,\\boldsymbol{U}\\in\\Delta_{c}\\cup\\Delta_{d}\\cup\\Delta_{e}],\\label{eq:h_derive}\n\\end{align}\nwhere $\\Delta_{i}$ ($i\\in\\{a,...,e\\}$) are disjoint and each $\\Delta_{i}$\ncharacterizes those markets on the margin described above: $\\Delta_{a}$\ncorresponds to the set of $\\boldsymbol{U}$'s that experience (a),\n$\\Delta_{b}$ corresponds to (b), and so on. Once \\eqref{eq:h_derive}\nis derived, it is easy to see that \n\\begin{align}\nsgn\\{h(\\boldsymbol{z},\\boldsymbol{z}',x)\\} & =sgn(\\mu_{1})=sgn(\\mu_{2}),\\label{eq:sign_match}\n\\end{align}\nwhich is formally shown in Lemma \\ref{lem:asy_to_asy_star}(i). See\nSection \\ref{subsec:proof_S=00003D00003D2} in the Appendix for a\nproof in this specific two-player case, which simplifies the argument\nin the general proof. The result \\eqref{eq:sign_match} is helpful\nfor our bound analysis. Again, focus on $E[Y(1,1)|x]$ and suppose\n$h(\\boldsymbol{z},\\boldsymbol{z}',x)>0$. 
Then, $\\mu_{1}>0$ and $\\mu_{2}>0$,\nand thus, we can derive the lower bound on, e.g., $E[Y(1,1)|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x]$\nin \\eqref{eq:Manski_expand-1} as \n\\begin{align}\nE[Y(1,1)|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x] & =\\Pr[\\epsilon\\le\\mu_{1}+\\mu_{2}+\\beta x|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x]\\nonumber \\\\\n & \\ge\\Pr[\\epsilon\\le\\mu_{1}+\\beta x|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x]\\label{eq:ex_tigher_bound-1-1}\\\\\n & =E[Y|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x],\\nonumber \n\\end{align}\nwhich is larger than zero, the previous naive lower bound. Similarly,\nwe can calculate the lower bounds on all $E[Y(1,1)|\\boldsymbol{D}=\\boldsymbol{d},\\boldsymbol{z},x]$\nfor $\\boldsymbol{d}\\neq(1,1)$. Consequently, by \\eqref{eq:Manski_expand-1},\nwe have \n\\begin{align*}\nE[Y(1,1)|x] & \\ge\\Pr[Y=1|\\boldsymbol{z},x],\n\\end{align*}\ni.e., the lower bound on $E[Y(1,1)|x]$ is \n\\begin{align*}\n\\tilde{L}(x) & \\equiv\\sup_{\\boldsymbol{z}}\\Pr[Y=1|\\boldsymbol{z},x].\n\\end{align*}\nNote that $\\tilde{L}(x)\\ge L_{Manski}(x)$. In this case, $\\tilde{U}(x)=U_{Manski}(x)$.\nIn Section \\ref{subsec:Partial-Identification}, we show that $\\tilde{L}(x)$\nand $\\tilde{U}(x)$ are sharp under \\eqref{eq:model_ex1}--\\eqref{eq:mono}.\n\nWe can further tighten the bounds if we have exogenous variables that\nare excluded from the entry decisions, i.e., from the $D_{1}$ and\n$D_{2}$ equations. The existence of such variables is not necessary\nbut helpful in tightening the bounds, and can be motivated by the\nnotion of externalities. That is, there can exist factors that affect\n$Y$ but do not enter the players' first-stage payoff functions. Modify\n\\eqref{eq:my_IV} and assume \n\\begin{align}\n(\\epsilon,U_{1},U_{2}) & \\perp(Z_{1},Z_{2},X),\\label{eq:my_IV2}\n\\end{align}\nwhere conditioning on other (possibly endogenous) covariates is suppressed.\nHere, $X$ can be the characteristics of the local market that\ndirectly affect pollution or health levels, such as weather shocks\nor the share of pollution-related industries in the local economy.\nWe assume that conditional on other covariates, these factors affect\nthe outcome but do not enter the payoff functions, since the airlines\ndo not take them into account in their decisions.\n\nTo exploit the variation in $X$ (in addition to the variation in\n$\\boldsymbol{Z}$), let $(x,\\tilde{x},\\tilde{\\tilde{x}})$ be (possibly\ndifferent) realizations of $X$, and define \n\\begin{equation}\n\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x,\\tilde{x},\\tilde{\\tilde{x}})\\equiv h_{00}(\\boldsymbol{z},\\boldsymbol{z}',x)+h_{10}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{x})+h_{01}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{x})+h_{11}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\tilde{x}}).\\label{eq:h_tilde-1}\n\\end{equation}\nUnder \\eqref{eq:my_IV2} and analogous to \\eqref{eq:sign_match},\nwe can show that if \n\\begin{align}\nsgn\\{\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x',x',x)\\} & =sgn(-\\mu_{1})=sgn(-\\mu_{2})\\label{eq:sign_match_x}\n\\end{align}\nis positive (negative), then $sgn\\{\\mu_{1}+\\beta(x-x')\\}=sgn\\{\\mu_{2}+\\beta(x-x')\\}$\nis positive (negative). This is formally shown in Lemma \\ref{lem:asy_to_asy_star}(ii).\n\nAs before, suppose $h(\\boldsymbol{z},\\boldsymbol{z}',x)>0$, and thus,\n$\\mu_{1}>0$ and $\\mu_{2}>0$ by \\eqref{eq:sign_match}. 
Now, if $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x',x',x)<0$,\nthen $\\mu_{1}+\\beta x<\\beta x'$ and $\\mu_{2}+\\beta x<\\beta x'$.\nTherefore, we can derive \n\\begin{align*}\nE[Y(1,1)|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x]= & \\Pr[\\epsilon\\le\\mu_{1}+\\mu_{2}+\\beta x|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x]\\\\\n\\le & \\Pr[\\epsilon\\le\\mu_{1}+\\beta x'|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x']\\\\\n= & \\Pr[Y=1|\\boldsymbol{D}=(1,0),\\boldsymbol{z},x'],\n\\end{align*}\nwhere the second equality also uses \\eqref{eq:my_IV2} and \\eqref{eq:model_ex2}--\\eqref{eq:model_ex3}.\nSimilarly, we have $E[Y(1,1)|\\boldsymbol{D}=(0,1),\\boldsymbol{z},x]\\le\\Pr[Y=1|\\boldsymbol{D}=(0,1),\\boldsymbol{z},x']$,\nand consequently, the upper bound on $E[Y(1,1)\\vert x]$ becomes \n\\[\nU(x)\\equiv\\inf_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]+\\Pr[Y=1,\\boldsymbol{D}\\in\\{(1,0),(0,1)\\}\\vert\\boldsymbol{z},x']+\\Pr[\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},x]\\right\\} \n\\]\nby \\eqref{eq:Manski_expand-1}, and the lower bound is $L(x)=\\tilde{L}(x)$.\nNote that we can further take infimum over $x'$ such that $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x',x',x)<0$.\n\nTo summarize our illustration, by using two values of $\\boldsymbol{Z}$\nthat satisfy \\eqref{eq:EQ_S2} and $h(\\boldsymbol{z},\\boldsymbol{z}',x)>0$\nand two values of $X$ that satisfy $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x',x',x)<0$,\nour lower and upper bounds, $L(x)$ and $U(x)$, on $E[Y(1,1)\\vert x]$\nachieve \n\\begin{align*}\nL(x) & =\\tilde{L}(x)\\ge L_{Manski}(x),\\\\\nU(x) & \\ge\\tilde{U}(x)=U_{Manski}(x),\n\\end{align*}\nwhere the inequalities are strict if $\\sum_{\\boldsymbol{d}\\neq(1,1)}\\Pr[Y=1,\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z},x]>0$\nand $\\Pr[Y=0,\\boldsymbol{D}\\in\\{(1,0),(0,1)\\}\\vert\\boldsymbol{z},x']>0$.\nWe discuss the sharpness of $L(x)$ and $U(x)$ in the next section.\nSimilarly, we can derive lower and upper bounds on other $E[Y(\\boldsymbol{d})\\vert x]$'s\nfor $\\boldsymbol{d}\\neq(1,1)$, and eventually construct bounds on\nany ATE. The gain from our approach is also exhibited in Figure \\ref{fig:sim1}\nin Section \\ref{sec:Monte-Carlo-Studies}, where we use the same data\ngenerating process as in this section and calculate different bounds\non the ATE, $E[Y(1,1)\\vert x]-E[Y(0,0)\\vert x]$.\n\n\\begin{remark}[Point Identification of the ATE]\\label{rem:Identification-under-Full}\nWhen there exist player-specific excluded instruments with large support,\nwe point identify the ATEs. To invoke an identification-at-infinity\nargument, the following assumptions are instead needed to hold: \n\\begin{align}\n & \\gamma_{1}\\text{ and }\\gamma_{2}\\text{ are nonzero},\\label{eq:infinity1}\\\\\n & Z_{1}|(X,Z_{2})\\text{ and }Z_{2}|(X,Z_{1})\\text{ has an everywhere positive Lebesgue density}.\\label{eq:infinity2}\n\\end{align}\nThese assumptions impose a player-specific exclusion restriction and\nlarge support. 
Under \\eqref{eq:infinity1}--\\eqref{eq:infinity2},\nwe can easily show that the ATE in \\eqref{eq:ATE} is point identified.\nIn this case, the structure we impose, especially on the outcome function\n(such as the threshold-crossing structure, or more generally Assumption\nM in Section \\ref{subsec:Assumptions} below) is not needed.\n\nThe identification strategy is to exploit the large variation of player\nspecific instruments based on \\eqref{eq:infinity1}--\\eqref{eq:infinity2},\nwhich simultaneously solves the multiple equilibria and the endogeneity\nproblems. For example, to identify $E[Y(1,1)|x]$, consider \n\\begin{align*}\n & E[Y|\\boldsymbol{D}=(1,1),\\boldsymbol{z},x]=E[Y(1,1)|\\boldsymbol{D}=(1,1),\\boldsymbol{z},x]\\\\\n & =E[Y(1,1)|\\delta_{2}+\\gamma_{1}z_{1}\\geq U_{1},\\delta_{1}+\\gamma_{2}z_{2}\\geq U_{2},x]\\rightarrow E[Y(1,1)|x],\n\\end{align*}\nwhere the second equation is by \\eqref{eq:my_IV} and $Y(1,1)=\\mu_{1}+\\mu_{2}+\\beta X$,\nand the convergence is by \\eqref{eq:infinity1}--\\eqref{eq:infinity2}\nwith $z_{1}\\rightarrow\\infty$ and $z_{2}\\rightarrow\\infty$. The\nidentification of $E[Y(0,0)|x]$, $E[Y(1,0)|x]$ and $E[Y(0,1)|x]$\ncan be achieved by similar reasoning. Note that $\\boldsymbol{D}=(1,0)$\nor $\\boldsymbol{D}=(0,1)$ can be predicted as an outcome of multiple\nequilibria. However, when either $(z_{1},z_{2})\\rightarrow(\\infty,-\\infty)$\nor $(z_{1},z_{2})\\rightarrow(-\\infty,\\infty)$ occurs, a unique equilibrium\nis guaranteed as a dominant strategy, i.e., $\\boldsymbol{D}=(1,0)$\nor $\\boldsymbol{D}=(0,1)$, respectively.\\end{remark}\n\n\\begin{remark}[Non-Monotonicity of Treatment Selection]\\label{rem:nonmonotone}In\nthe case of a single binary treatment, the standard selection equation\nexhibits monotonicity that facilitates various identification strategies\n(e.g., \\citet{imbens1994identification}, \\citet{heckman2005structural},\n\\citet{VY07} to name a few). Relatedly, \\citet{vytlacil2002independence}\nshows the equivalence between imposing the selection equation with\nthreshold-crossing structure and assuming the local ATE (LATE) monotonicity.\nThis equivalence (and thus, previous identification strategies) is\ninapplicable to our setting due to the simultaneity in the first stage\n\\eqref{eq:main_model2}. To formally state this, let $\\boldsymbol{D}(\\boldsymbol{z})$\nbe a potential treatment vector, had $\\boldsymbol{Z}=\\boldsymbol{z}$\nbeen realized. When cost $\\boldsymbol{Z}=(Z_{1},Z_{2})$ increases\nfrom $\\boldsymbol{z}$ to $\\boldsymbol{z}'$, it may be that some\nmarkets witness Delta entering and United going out of business (i.e.,\n$\\boldsymbol{D}(\\boldsymbol{z})=(0,1)$ and $\\boldsymbol{D}(\\boldsymbol{z}')=(1,0)$),\nwhile other markets witness the opposite (i.e., $\\boldsymbol{D}(\\boldsymbol{z})=(1,0)$\nand $\\boldsymbol{D}(\\boldsymbol{z}')=(0,1)$). 
The direction of monotonicity\nis reversed in the two groups of markets, and thus, $\\Pr[\\boldsymbol{D}(\\boldsymbol{z})\\ge\\boldsymbol{D}(\\boldsymbol{z}')]\\neq1$\nand $\\Pr[\\boldsymbol{D}(\\boldsymbol{z})\\le\\boldsymbol{D}(\\boldsymbol{z}')]\\neq1$\nwhere the inequality for vectors is pair-wise inequalities, which\nviolates the LATE monotonicity.\\footnote{The same argument applies with a scalar multi-valued treatment $\\tilde{D}\\in\\{1,2,3,4\\}$,\nwhich has a one-to-one map with $\\boldsymbol{D}\\in\\{(0,0),(0,1),(1,0),(1,1)\\}$.\nThen, some markets can experience $\\tilde{D}(\\boldsymbol{z})=2$ and\n$\\tilde{D}(\\boldsymbol{z}')=3$ while others experience $\\tilde{D}(\\boldsymbol{z})=3$\nand $\\tilde{D}(\\boldsymbol{z}')=2$, and thus, it is possible to have\n$\\Pr[\\tilde{D}(\\boldsymbol{z})\\ge\\tilde{D}(\\boldsymbol{z}')]\\neq1$\nand $\\Pr[\\tilde{D}(\\boldsymbol{z})\\le\\tilde{D}(\\boldsymbol{z}')]\\neq1$.} Despite this non-monotonic pattern, Theorem \\ref{thm:mono_pattern}\nbelow restores generalized monotonicity, i.e., monotonicity in terms of\nthe algebra of sets. This generalized monotonicity, combined with\nthe compensating strategic substitutability \\eqref{eq:EQ_S2}, allows\nus to use a strategy analogous to the single-treatment case for our\nbound analysis. This also suggests that we can introduce a generalized\nversion of the LATE parameter in the current framework, although we\ndo not pursue it in this study.\n\nRelated to our study, \\citet{lee2016identifying} introduce a framework\nfor treatment effects with general non-monotonicity of selection,\nand consider the simultaneous treatment selection as one of the examples.\nAlthough they engage in a similar discussion on non-monotonicity,\ntheir approach to gain tractability for identification is different\nfrom ours. When they allow the identity of players being observed\nas in our setting, they show that their treatment measurability condition\n(Assumption 2.1) introduced to restore monotonicity is satisfied,\nprovided they assume a threshold-crossing equilibrium selection mechanism.\nIn contrast, we avoid making assumptions on equilibrium selection,\nbut require compensating variation of instruments. In addition, for\nthis particular example, they assume the first-stage is known (i.e.,\npayoff functions are known), and focus on point identification of\nthe MTE with continuous instruments.\\end{remark}\n\n\\section{General Theory\\label{sec:General-Theory}}\n\n\\subsection{Setup\\label{subsec:Model}}\n\nLet $\\boldsymbol{D}\\equiv(D_{1},...,D_{S})\\in\\mathcal{D}\\subseteq\\{0,1\\}^{S}$\nbe an $S$-vector of observed binary treatments and $\\boldsymbol{d}\\equiv(d_{1},...,d_{S})$\nbe its realization, where $S$ is fixed. We assume that $\\boldsymbol{D}$\nis predicted as a pure strategy Nash equilibrium of a complete information\ngame with $S$ players who make entry decisions or individuals who\nchoose to receive treatments.\\footnote{While this study does not consider mixed strategy equilibria, it may\nbe possible to extend the setup to incorporate mixed strategies, following\nthe argument in \\citet{CT09}.} Let $Y$ be an observed post-game outcome that results from profile\n$\\boldsymbol{D}$ of endogenous treatments. It can be an outcome common\nto all players or an outcome specific to each player. Let $(X,Z_{1},...,Z_{S})$\nbe observed exogenous covariates. 
We consider a model of a semi-triangular\nsystem: \n\\begin{align}\nY & =\\theta(\\boldsymbol{D},X,\\epsilon_{\\boldsymbol{D}}),\\label{eq:main_model1}\\\\\nD_{s} & =1\\left[\\nu^{s}(\\boldsymbol{D}_{-s},Z_{s})\\geq U_{s}\\right],\\mbox{\\qquad}s\\in\\{1,...,S\\},\\label{eq:main_model2}\n\\end{align}\nwhere $s$ is indices for players or interchangeably for treatments,\nand $\\boldsymbol{D}_{-s}\\equiv(D_{1},...,D_{s-1},D_{s+1},...,D_{S})$.\nWithout loss of generality, we normalize the scalar $U_{s}$ to be\ndistributed as $Unif(0,1)$, and $\\nu^{s}:\\mathbb{R}^{S-1+d_{z_{s}}}\\rightarrow(0,1]$\nand $\\theta:\\mathbb{R}^{S+d_{x}+d_{\\epsilon}}\\rightarrow\\mathbb{R}$\nare unknown functions that are nonseparable in their arguments. We\nallow the unobservables $(\\epsilon_{\\boldsymbol{D}},U_{1},...,U_{S})$\nto be arbitrarily dependent on one another. Although the notation\nsuggests that the instruments $Z_{s}$'s are player\/treatment-specific,\nthey are not necessarily required to be so for the analyses in this\nstudy; see Appendix \\ref{subsec:Common_Z} for a discussion. The exogenous\nvariables $X$ are variables excluded from all the equations for $D_{s}$.\nThe existence of $X$ is not necessary but useful for the bound analysis\nof the ATE. There may be covariates $W$ common to all the equations\nfor $Y$ and $D_{s}$, which is suppressed for succinctness. Implied\nfrom the complete information game, player $s$'s decision $D_{s}$\ndepends on the decisions of all others $\\boldsymbol{D}_{-s}$ in $\\mathcal{D}_{-s}$,\nand thus, $\\boldsymbol{D}$ is determined by a simultaneous system.\nAs before, the model \\eqref{eq:main_model1}--\\eqref{eq:main_model2}\nis incomplete because of the simultaneity in the first stage, and\nthe conventional monotonicity in the sense of \\citet{imbens1994identification}\nis not exhibited in the selection process because of simultaneity.\nThe unit of observation, a market or geographical region, is indexed\nby $i$ and is suppressed in all the expressions.\n\nThe potential outcome of receiving treatments $\\boldsymbol{D}=\\boldsymbol{d}$\ncan be written as \n\\begin{align*}\nY(\\boldsymbol{d}) & =\\theta(\\boldsymbol{d},X,\\epsilon_{\\boldsymbol{d}}),\\mbox{\\qquad}\\boldsymbol{d}\\in\\mathcal{D},\n\\end{align*}\nand $\\epsilon_{\\boldsymbol{D}}=\\sum_{\\boldsymbol{d}\\in\\mathcal{D}}1[\\boldsymbol{D}=\\boldsymbol{d}]\\epsilon_{\\boldsymbol{d}}$.\nWe are interested in the ATE and related parameters. Using the average\nstructural function (ASF) $E[Y(\\boldsymbol{d})|x]$, the ATE can be\nwritten as \n\\begin{align}\nE[Y(\\boldsymbol{d})-Y(\\boldsymbol{d}^{\\prime})|x] & =E[\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})-\\theta(\\boldsymbol{d}^{\\prime},x,\\epsilon_{\\boldsymbol{d}^{\\prime}})],\\label{eq:ATE}\n\\end{align}\nfor $\\boldsymbol{d},\\boldsymbol{d}'\\in\\mathcal{D}$. 
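To fix ideas, the following minimal simulation sketch illustrates the ATE in \\eqref{eq:ATE} for $S=2$, using the linear-index specification of Section \\ref{sec:stylized_ex} as the data generating process; the parameter values, the Gaussian dependence between $\\epsilon$ and $(U_{1},U_{2})$, and the equilibrium selection rule in the multiplicity region are hypothetical choices made only for illustration, and the exercise is not the bound construction itself but a reminder of why a naive observational contrast differs from the ATE under endogeneity and multiple equilibria.
\\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500_000

# Hypothetical two-player design (all numbers made up for illustration).
mu1, mu2, beta = 0.6, 0.4, 0.3      # outcome indices
delta1, delta2 = -0.3, -0.3         # strategic substitutability (negative)
g1z1, g2z2 = 0.8, 0.8               # gamma_s * z_s, held fixed here
x = 1.0

# Correlated unobservables: epsilon correlated with (U1, U2) -> endogenous treatments.
cov = np.array([[1.0, 0.5, 0.5],
                [0.5, 1.0, 0.2],
                [0.5, 0.2, 1.0]])
eps, v1, v2 = rng.multivariate_normal(np.zeros(3), cov, size=n).T
U1, U2 = norm.cdf(v1), norm.cdf(v2)  # entry unobservables normalized to Unif(0,1)

# Pure-strategy equilibria of the entry game, draw by draw.
eq = {}
for d1 in (0, 1):
    for d2 in (0, 1):
        best1 = (delta2 * d2 + g1z1 >= U1) == bool(d1)
        best2 = (delta1 * d1 + g2z2 >= U2) == bool(d2)
        eq[(d1, d2)] = best1 & best2

# Arbitrary selection rule in the multiplicity region {(1,0),(0,1)}: pick (1,0).
D1 = np.where(eq[(1, 1)], 1, np.where(eq[(1, 0)], 1, 0))
D2 = np.where(eq[(1, 1)], 1, np.where(eq[(1, 0)], 0, np.where(eq[(0, 1)], 1, 0)))

Y = (mu1 * D1 + mu2 * D2 + beta * x >= eps).astype(int)

true_ate = norm.cdf(mu1 + mu2 + beta * x) - norm.cdf(beta * x)  # E[Y(1,1)-Y(0,0)|x]
naive = Y[(D1 == 1) & (D2 == 1)].mean() - Y[(D1 == 0) & (D2 == 0)].mean()
print(f"true ATE = {true_ate:.3f}, naive observational contrast = {naive:.3f}")
\\end{verbatim}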
Another parameter\nof interest is the average treatment effects on the treated or the untreated:\n$E[Y(\\boldsymbol{d})-Y(\\boldsymbol{d}^{\\prime})|D=\\boldsymbol{d}'',z,x]$\nfor $\\boldsymbol{d}''\\in\\{ \\boldsymbol{d},\\boldsymbol{d}' \\}$.\\footnote{Technically, $\\boldsymbol{d}''$ does not necessarily have to be equal\nto $\\boldsymbol{d}$ or $\\boldsymbol{d}'$, but can take another value.} One might also be interested\nin the sign of the ATE, which in this multi-treatment case is essentially\nestablishing an ordering among the ASF's.\n\nAs an example of the ATE, we may choose $\\boldsymbol{d}=(1,...,1)$\nand $\\boldsymbol{d}'=(0,...,0)$ to measure the cancelling-out effect\nor more general nonlinear effects. Another example would be choosing\n$\\boldsymbol{d}=(1,\\boldsymbol{d}_{-s})$ and $\\boldsymbol{d}'=(0,\\boldsymbol{d}_{-s})$\nfor given $\\boldsymbol{d}_{-s}$, where we use the notation $\\boldsymbol{d}=(d_{s},\\boldsymbol{d}_{-s})$\nby switching the order of the elements for convenience. Sometimes,\nwe instead want to focus on learning about complementarity between\ntwo treatments, while averaging over the remaining $S-2$ treatments.\nThis can be examined in a more general framework of defining the ASF\nand ATE by introducing a partial potential outcome; this is discussed\nin Appendix \\ref{subsec:Partial-ATE}.\n\nIn identifying these treatment parameters, suppose we attempt to recover\nthe effect of a single treatment $D_{s}$ in model \\eqref{eq:main_model1}--\\eqref{eq:main_model2}\n\\textit{conditional on} $\\boldsymbol{D}_{-s}=\\boldsymbol{d}_{-s}$,\nand then recover the effects of multiple treatments by transitively\nusing these effects of single treatments. This strategy is not valid\nsince $\\boldsymbol{D}_{-s}$ is a function of $D_{s}$ and also because\nof multiplicity. Therefore, the approaches in the literature with\nsingle-treatment, single-agent triangular models are not directly\napplicable and a new theory is necessary in this more general setting.\n\n\\subsection{Monotonicity in Equilibria\\label{subsec:Geometry}}\n\nAs an important step in the analyses in this study, we establish that\nthe equilibria of the treatment selection process in the first-stage\ngame present a monotonic pattern when the instruments move. Specifically,\nwe consider the regions in the space of the unobservables that predict\nequilibria and establish a monotonic pattern of these regions in terms\nof instruments. The analytical characterization of the equilibrium\nregions when there are more than two players ($S>2$) can generally\nbe complicated (\\citet[p. 1800]{CT09}); however, under a mild uniformity\nassumption (Assumption M1), our result is obtained under strategic\nsubstitutability. 
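Before stating the formal assumptions, the monotone pattern of the equilibrium regions can be previewed numerically in a two-player example with strategic substitutes; the sketch below uses a hypothetical parametrization $\\nu^{s}(d_{-s},z_{s})=b(z_{s})+\\delta_{-s}d_{-s}$, chosen only for illustration, and checks that the set of $(U_{1},U_{2})$ values supporting some equilibrium with at most $j$ entrants only grows as the instruments move toward higher cost, for each $j$; see Theorem \\ref{thm:mono_pattern} below for the formal statement.
\\begin{verbatim}
import numpy as np

# Hypothetical two-player game with strategic substitutes, for illustration:
# nu^s(d_-s, z_s) = b(z_s) + delta_-s * d_-s, same intercept b for both players.
delta1, delta2 = -0.3, -0.3
b_z, b_zp = 0.8, 0.45   # b(z) and b(z'); z is the "low cost" value

grid = np.linspace(0.001, 0.999, 400)
U1, U2 = np.meshgrid(grid, grid)   # grid over the entry unobservables

def regions(b):
    """Regions of (U1, U2) supporting a pure-strategy equilibrium with j entrants."""
    eq = {}
    for d1 in (0, 1):
        for d2 in (0, 1):
            best1 = ((delta2 * d2 + b) >= U1) == bool(d1)
            best2 = ((delta1 * d1 + b) >= U2) == bool(d2)
            eq[(d1, d2)] = best1 & best2
    return {0: eq[(0, 0)], 1: eq[(1, 0)] | eq[(0, 1)], 2: eq[(1, 1)]}

R_low, R_high = regions(b_z), regions(b_zp)

for j in (0, 1, 2):
    # region supporting some equilibrium with at most j entrants, at z and at z'
    R_le_z = np.any([R_low[k] for k in range(j + 1)], axis=0)
    R_le_zp = np.any([R_high[k] for k in range(j + 1)], axis=0)
    print(j, bool(np.all(~R_le_z | R_le_zp)))   # True: inclusion holds on the grid
\\end{verbatim}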
Let $\\mathcal{Z}_{s}$ be the support of $Z_{s}$.\nWe make the following assumptions on the first-stage nonparametric\npayoff function for each $s\\in\\{1,...,S\\}$.\n\n\\begin{asSS}For every $z_{s}\\in\\mbox{\\ensuremath{\\mathcal{Z}}}_{s}$,\n$\\nu^{s}(\\boldsymbol{d}_{-s},z_{s})$ is strictly decreasing in each\nelement of $\\boldsymbol{d}_{-s}$.\\end{asSS}\n\n\\begin{asM1}For any given $z_{s},z_{s}'\\in\\mathcal{Z}_{s}$, either\n$\\nu^{s}(\\boldsymbol{d}_{-s},z_{s})\\geq\\nu^{s}(\\boldsymbol{d}_{-s},z_{s}')$\n$\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$, or $\\nu^{s}(\\boldsymbol{d}_{-s},z_{s})\\leq\\nu^{s}(\\boldsymbol{d}_{-s},z_{s}')$\n$\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$.\\end{asM1}\n\nAssumption SS asserts that the agents' treatment decisions are produced\nin a game with strategic substitutability. The strictness of the monotonicity\nis not important for our purpose but convenient in making statements\nabout the equilibrium regions. In the language of \\citet{CT09}, we\nallow for heterogeneity in the \\textit{fixed competitive effects}\n(i.e., how each of other entrants affects one's payoff), as well as\nheterogeneity in how each player is affected by other entrants, which\nis ensured by the nonseparability between $\\boldsymbol{d}_{-s}$ and\n$z_{s}$ in $\\nu^{s}(\\boldsymbol{d}_{-s},z_{s})$; this heterogeneity\nis related to the \\textit{variable competitive effects}. Assumption\nM1 is required in this multi-agent setting, and the uniformity is\nacross $\\boldsymbol{d}_{-s}$. Note that this assumption is weaker\nthan a conventional monotonicity assumption that $\\nu^{s}(\\boldsymbol{d}_{-s},z_{s})$\nis either non-decreasing or non-increasing in $z_{s}$ for all $\\boldsymbol{d}_{-s}$.\nAssumption M1 is justifiable, especially when $z_{s}$ is chosen to\nbe of the same kind for all players. For example, in an entry game,\nif $z_{s}$ is chosen to be each player's cost shifters, the payoffs\nwould decrease in their costs for any given opponents.\n\nAs the first main result of this study, we establish the geometric\nproperty of the equilibrium regions. For $j=0,...,S$, let $\\boldsymbol{R}_{j}(\\boldsymbol{z})\\subset\\mathcal{U}\\equiv(0,1]^{S}$\ndenote the region that predicts all equilibria with $j$ treatments\nselected or $j$ entrants, defined as a subset of the space of the\nentry unobservables $\\boldsymbol{U}\\equiv(U_{1},...,U_{S})$; see\nSection \\ref{subsec:notation} in the Appendix for a formal definition.\nThen, define the region of all equilibria with \\textit{at most} $j$\nentrants as \n\\begin{align*}\n\\boldsymbol{R}^{\\le j}(\\boldsymbol{z}) & \\equiv\\bigcup_{k=0}^{j}\\boldsymbol{R}_{k}(\\boldsymbol{z}).\n\\end{align*}\nAlthough this region is hard to express explicitly in general, it\nhas a simple feature that serves our purpose. For given $j$, choose\n$z_{s},z_{s}'\\in\\mathcal{Z}_{s}$ such that \n\\begin{align}\n\\Pr[\\boldsymbol{D}=(1,...,1)|\\boldsymbol{Z}=(z_{s},\\boldsymbol{z}_{-s})] & >\\Pr[\\boldsymbol{D}=(1,...,1)|\\boldsymbol{Z}=(z_{s}',\\boldsymbol{z}_{-s})]\\label{eq:ps_condi}\n\\end{align}\nfor all $s$. This condition is to merely fix $\\boldsymbol{z},\\boldsymbol{z}'$\nthat change the \\textit{joint propensity score}, and the direction\nof change is without loss of generality. 
Such $\\boldsymbol{z},\\boldsymbol{z}'$\nexist by the relevance of the instruments, which is assumed below.\nLet $\\mathcal{Z}$ be the support of $\\boldsymbol{Z}\\equiv(Z_{1},...,Z_{S})$.\n\n\\begin{theorem}\\label{thm:mono_pattern}Under Assumptions SS and\nM1 and for $\\boldsymbol{z},\\boldsymbol{z}'\\in\\mathcal{Z}$ that satisfy\n\\eqref{eq:ps_condi}, we have \n\\begin{equation}\n\\boldsymbol{R}^{\\le j}(\\boldsymbol{z})\\subseteq\\boldsymbol{R}^{\\le j}(\\boldsymbol{z}')\\text{ }\\forall j.\\label{eq:R^j(z)_vs_R^j(z')}\n\\end{equation}\n\n\\end{theorem}\n\nTheorem \\ref{thm:mono_pattern} establishes a generalized version\nof monotonicity in the treatment selection process. This theorem plays\na crucial role in calculating the bounds on the treatment parameters\nand in showing the sharpness of the bounds. Relatedly, \\citet{berry1992estimation}\nderives the probability of the event that the number of entrants is\nless than a certain value, which can be written as $\\Pr[\\boldsymbol{U}\\in\\boldsymbol{R}^{\\le j}(\\boldsymbol{z})]$\nusing our notation. However, his result is not sufficient for our\nstudy and relies on stronger assumptions, such as restricting the\npayoff functions to only depend on the number of opponents.\n\n\\subsection{Main Assumptions\\label{subsec:Assumptions}}\n\nTo characterize the bounds on the treatment parameters, we make the\nfollowing assumptions. Unless otherwise noted, the assumptions hold\nfor each $s\\in\\{1,...,S\\}$.\n\n\\begin{asIN}\\label{as:IN}$(X,\\boldsymbol{Z})\\perp(\\epsilon_{\\boldsymbol{d}},\\boldsymbol{U})$\n$\\forall\\boldsymbol{d}\\in\\mathcal{D}$.\\end{asIN}\n\n\\begin{asE} The distribution of $(\\epsilon_{\\boldsymbol{d}},\\boldsymbol{U})$\nhas strictly positive density with respect to Lebesgue measure on\n$\\mathbb{R}^{S+1}$ $\\forall\\boldsymbol{d}\\in\\mathcal{D}$. \\end{asE}\n\n\\begin{asEX}For each $\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$, $\\nu^{s}(\\boldsymbol{d}_{-s},Z_{s})|X$\nis nondegenerate.\\end{asEX}\n\nAssumptions IN, EX and all the following analyses can be understood\nas conditional on $W$, the common covariates in $X$ and $\\boldsymbol{Z}=(Z_{1},...,Z_{S})$.\nAssumption EX is related to the exclusion restriction and the relevance\ncondition of the instruments $Z_{s}$.\n\nWe now impose a shape restriction on the outcome function $\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})$\nvia restrictions on \n\\[\n\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})\\equiv E[\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})|\\boldsymbol{U}=\\boldsymbol{u}]\n\\]\na.e. $\\boldsymbol{u}$. This restriction on the conditional mean is\nweaker than the one directly imposed on $\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})$.\nLet $\\mathcal{X}$ be the support of $X$. Recall that we use the\nnotation $\\boldsymbol{d}=(d_{s},\\boldsymbol{d}_{-s})$ by switching\nthe order of the elements for convenience.\n\n\\begin{asM}For every $x\\in\\mathcal{X}$, either $\\vartheta(1,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})\\geq\\vartheta(0,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})$\na.e. $\\boldsymbol{u}$ $\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$\n$\\forall s$ or $\\vartheta(1,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})\\leq\\vartheta(0,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})$\na.e. $\\boldsymbol{u}$ $\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$\n$\\forall s$. 
Also, $Y\\in[\\underline{Y},\\overline{Y}]$.\\end{asM}\n\nAssumption M holds in, but is not restricted to, the leading case\nof binary $Y$ with a threshold crossing model that satisfies uniformity.\n\n\\begin{asM2}(i) $\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})=1[\\mu(\\boldsymbol{d},x)\\geq\\epsilon_{\\boldsymbol{d}}]$\nwhere $\\epsilon_{\\boldsymbol{d}}$ is scalar and $F_{\\epsilon_{\\boldsymbol{d}}|\\boldsymbol{U}}=F_{\\epsilon_{\\boldsymbol{d}'}|\\boldsymbol{U}}$\nfor any $\\boldsymbol{d},\\boldsymbol{d}'\\in\\mathcal{D}$; (ii) for\nevery $x\\in\\mathcal{X}$, either $\\mu(1,\\boldsymbol{d}_{-s},x)\\geq\\mu(0,\\boldsymbol{d}_{-s},x)$\n$\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$ $\\forall s$ or $\\mu(1,\\boldsymbol{d}_{-s},x)\\le\\mu(0,\\boldsymbol{d}_{-s},x)$\n$\\forall\\boldsymbol{d}_{-s}\\in\\mathcal{D}_{-s}$ $\\forall s$.\\end{asM2}\n\nAssumption M$^{*}$ implies Assumption M. The second statement in\nAssumption M is satisfied with binary $Y$.\\footnote{Another example would be when $Y\\in[0,1]$, as in Example \\ref{example3}.}\nThe first statement in Assumption M can be stated in two parts, corresponding\nto (i) and (ii) of Assumption M$^{*}$: (a) for every $x$ and $\\boldsymbol{d}_{-s}$,\neither $\\vartheta(1,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})\\geq\\vartheta(0,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})$\na.e. $\\boldsymbol{u}$, or $\\vartheta(1,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})\\leq\\vartheta(0,\\boldsymbol{d}_{-s},x;\\boldsymbol{u})$\na.e. $\\boldsymbol{u}$; (b) for every $x$, each inequality statement\nin (a) holds for all $\\boldsymbol{d}_{-s}$. For an outcome function\nwith a scalar index, $\\theta(\\boldsymbol{d},x,\\epsilon_{\\boldsymbol{d}})=\\tilde{\\theta}(\\mu(\\boldsymbol{d},x),\\epsilon_{\\boldsymbol{d}})$,\npart (a) is implied by $\\epsilon_{\\boldsymbol{d}}=\\epsilon_{\\boldsymbol{d}'}=\\epsilon$\n(or more generally, $F_{\\epsilon_{\\boldsymbol{d}}|\\boldsymbol{U}}=F_{\\epsilon_{\\boldsymbol{d}'}|\\boldsymbol{U}}$)\nfor any $\\boldsymbol{d},\\boldsymbol{d}'\\in\\mathcal{D}$ and $E[\\tilde{\\theta}(t,\\epsilon_{\\boldsymbol{d}})|\\boldsymbol{U}=\\boldsymbol{u}]$\nbeing strictly increasing (decreasing) in $t$ a.e. $\\boldsymbol{u}$.\\footnote{A single-treatment version of the latter assumption appears in \\citet{VY07}\n(Assumption A-4), which is weaker than assuming $\\tilde{\\theta}(t,\\epsilon)$\nis strictly increasing (decreasing) a.e. $\\epsilon$; see \\citet{VY07}\nfor related discussions.} Functions that satisfy the latter assumption include strictly monotonic\nfunctions, such as transformation models $\\tilde{\\theta}(t,\\epsilon)=r(t+\\epsilon)$\nwith $r(\\cdot)$ being possibly unknown strictly increasing, or their\nspecial case $\\tilde{\\theta}(t,\\epsilon)=t+\\epsilon$, allowing continuous\ndependent variables; and functions that are not strictly monotonic,\nsuch as models for limited dependent variables, $\\tilde{\\theta}(t,\\epsilon)=1[t\\ge\\epsilon]$\nor $\\tilde{\\theta}(t,\\epsilon)=1[t\\ge\\epsilon](t-\\epsilon)$. However,\nthere can be functions that violate the latter assumption but satisfy\npart (a). For example, consider a threshold crossing model with a\nrandom coefficient: $\\theta(\\boldsymbol{d},x,\\epsilon)=1[\\phi(\\epsilon)\\boldsymbol{d}\\beta^{\\top}\\geq x\\gamma^{\\top}]$,\nwhere $\\phi(\\epsilon)$ is nondegenerate. 
When $\\beta_{s}\\geq0$,\nthen $E[\\theta(1,\\boldsymbol{d}_{-s},x,\\epsilon)-\\theta(0,\\boldsymbol{d}_{-s},x,\\epsilon)|\\boldsymbol{U}=\\boldsymbol{u}]=\\Pr\\left[\\frac{x\\gamma^{\\top}}{\\beta_{s}+\\boldsymbol{d}_{-s}\\beta_{-s}^{\\top}}\\leq\\phi(\\epsilon)\\leq\\frac{x\\gamma^{\\top}}{\\boldsymbol{d}_{-s}\\beta_{-s}^{\\top}}|\\boldsymbol{U}=\\boldsymbol{u}\\right]$,\nand thus, nonnegative a.e. $\\boldsymbol{u}$, and vice versa. Part\n(a) also does not impose any monotonicity of $\\theta$ in $\\epsilon_{\\boldsymbol{d}}$\n(e.g., $\\epsilon_{\\boldsymbol{d}}$ can be a vector).\n\nPart (b) of Assumption M imposes uniformity, as we deal with more\nthan one treatment. Uniformity is required across different values\nof $\\boldsymbol{d}_{-s}$ and $s$. For instance, in the empirical\napplication of this study, this assumption seems reasonable, since\nan airline's entry is likely to increase the expected pollution regardless\nof the identity or the number of existing airlines. On the other hand,\nin Example \\ref{example3} in the Appendix regarding media and political\nbehavior, this assumption may rule out the ``over-exposure'' effect\n(i.e., too much media exposure diminishes the incumbent's chance\nof being re-elected). In any case, knowledge on the direction of the\nmonotonicity is not necessary in this assumption, unlike \\citet{manski1997monotone}\nor \\citet{Man13}, where the semi-monotone treatment response is assumed\nfor possible multiple treatments.\n\nLastly, we require that there exists variation in $\\boldsymbol{Z}$\nthat offsets the effect of strategic substitutability. Similar as\nbefore, using the notation $\\boldsymbol{d}_{-s}=(d_{s'},\\boldsymbol{d}_{-(s,s')})$\nwhere $\\boldsymbol{d}_{-(s,s')}$ is $\\boldsymbol{d}$ without $s$-th\nand $s'$-th elements, note that Assumption SS can be restated as\n$\\nu^{s}(0,\\boldsymbol{d}_{-(s,s')},z_{s})>\\nu^{s}(1,\\boldsymbol{d}_{-(s,s')},z_{s})$\nfor every $z_{s}$. Given this, we assume the following.\n\n\\begin{asEQ}There exist $\\boldsymbol{z},\\boldsymbol{z}'\\in\\mathcal{Z}$,\nsuch that $\\nu^{s}(0,\\boldsymbol{d}_{-(s,s')},z_{s}')\\le\\nu^{s}(1,\\boldsymbol{d}_{-(s,s')},z_{s})$\n$\\forall\\boldsymbol{d}_{-(s,s')}$ $\\forall s,s'$.\\end{asEQ}\n\nFor example, in an entry game with $Z_{s}$ being cost shifters, Assumption\nEQ may hold with $z_{s}'>z_{s}$ $\\forall s$. In this example, players\nmay become less profitable with an increase in cost from government\nregulation. In particular, players' decreased profits cannot be overturned\nby the market being less competitive, as one player is absent due\nto unprofitability. Recall that Assumption EQ is illustrated in Figure\n\\ref{fig:As_EQ-1} with $\\nu^{s}(0,z_{s}')=\\gamma_{s}z_{s}'<\\nu^{s}(1,z_{s})=\\delta_{-s}+\\gamma_{s}z_{s}$\nfor $s=1,2$. Assumption EQ is key for our analysis. To see this,\nlet $R_{j}^{M}(\\cdot)$ denote the region that predicts multiple equilibria\nwith $j$ treatments selected or $j$ entrants. 
In the proof of a\nlemma that follows, we show that Assumption EQ holds if and only if\n$R_{j}^{M}(\\boldsymbol{z})\\cap R_{j}^{M}(\\boldsymbol{z}')=\\emptyset$.\nThat is, we can at least ensure that there is no market where firms'\ndecisions change from one realization of multiple equilibria to another\nrealization of multiple equilibria with the same number of entrants.\nTo the extent of our analysis, this liberates us from concerns about\nthe regions of multiple equilibria and about a possible change in\nequilibrium selection when changing $\\boldsymbol{Z}$.\\footnote{In Section \\ref{subsec:Group}, we discuss an assumption, partial\nconditional symmetry, which can be imposed alternative to Assumption\nEQ.} Assumption EQ has a simple testable sufficient condition, provided\nthat the unobservables in the payoffs are mutually independent.\n\n\\begin{asEQ2}There exist $\\boldsymbol{z},\\boldsymbol{z}'\\in\\mathcal{Z}$,\nsuch that \n\\begin{align}\n\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\boldsymbol{z}]+\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j-2}|\\boldsymbol{z}'] & >2-\\sqrt{2}.\\label{eq:EQ_suff}\n\\end{align}\nfor all $\\boldsymbol{d}^{j}\\in\\mathcal{D}^{j}$, $\\boldsymbol{d}^{j-2}\\in\\mathcal{D}^{j-2}$\nand $2\\le j\\le S$.\n\n\\end{asEQ2}\n\nWhen $S=2$, the condition is stated as $\\Pr[\\boldsymbol{D}=(1,1)|\\boldsymbol{z}]+\\Pr[\\boldsymbol{D}=(0,0)|\\boldsymbol{z}']>2-\\sqrt{2}$.\nAs is detailed in the proof, this essentially restricts the sum of\nradii of two circular isoquant curves to be less than the length of\nthe diagonal of $\\mathcal{U}$: $(1-\\Pr[\\boldsymbol{D}=(1,1)|\\boldsymbol{z}])+(1-\\Pr[\\boldsymbol{D}=(0,0)|\\boldsymbol{z}'])<\\sqrt{2}$.\nThis ensures the required variation in Assumption EQ.\n\n\\begin{lemma}\\label{lem:asy_to_asy_star}Under Assumptions SS, M1,\nand $U_{s}\\perp U_{t}$ for all $s\\neq t$, Assumption EQ$^{*}$ implies\nAssumption EQ.\n\n\\end{lemma}\n\nThe mutual independence of $U_{s}$'s (conditional on $W$) is useful\nin inferring the relationship between players' interaction and instruments\nfrom the observed choices of players. The intuition for the sufficiency\nof Assumption EQ$^{*}$ is as follows. As long as there is no dependence\nin unobserved types, \\eqref{eq:EQ_suff} dictates that the variation\nof $\\boldsymbol{Z}$ is large enough to offset strategic substitutability,\nbecause otherwise, the payoffs of players cannot move in the same\ndirection, and thus, will not result in the same decisions. The requirement\nof $\\boldsymbol{Z}$ variation in \\eqref{eq:EQ_suff} is significantly\nweaker than the large support assumption invoked for an identification\nat infinity argument to overcome the problem of multiple equilibria.\n\n\\subsection{Partial Identification of the ATE\\label{subsec:Partial-Identification}}\n\nUnder the above assumptions, we now present a generalized version\nof the sign matching results \\eqref{eq:sign_match} and \\eqref{eq:sign_match_x}\nin Section \\ref{sec:stylized_ex}. We need to introduce additional\nnotation. Let $\\boldsymbol{d}^{j}\\in\\mathcal{D}^{j}$ denote an equilibrium\nprofile with $j$ treatments selected or $j$ entrants, i.e., a vector\nof $j$ ones and $S-j$ zeros, where $\\mathcal{D}^{j}$ is a set of\nall equilibrium profiles with $j$ treatments selected. 
For realizations\n$x$ of $X$ and $\\boldsymbol{z},\\boldsymbol{z}'$ of $\\boldsymbol{Z}$,\ndefine \n\\begin{align}\nh(\\boldsymbol{z},\\boldsymbol{z}',x) & \\equiv E[Y|\\boldsymbol{z},x]-E[Y|\\boldsymbol{z}',x],\\label{eq:h(zzx)}\\\\\nh_{\\boldsymbol{d}^{j}}(\\boldsymbol{z},\\boldsymbol{z}',x) & \\equiv E[Y|\\boldsymbol{D}=\\boldsymbol{d}^{j},\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\boldsymbol{z}]\\nonumber \\\\\n & -E[Y|\\boldsymbol{D}=\\boldsymbol{d}^{j},\\boldsymbol{z}',x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\boldsymbol{z}'].\\label{eq:hj}\n\\end{align}\nSince $\\sum_{j=0}^{S}\\sum_{\\boldsymbol{d}^{j}\\in\\mathcal{D}^{j}}\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\cdot]=1$,\n$h(\\boldsymbol{z},\\boldsymbol{z}',x)=\\sum_{j=0}^{S}\\sum_{\\boldsymbol{d}^{j}}h_{\\boldsymbol{d}^{j}}(\\boldsymbol{z},\\boldsymbol{z}',x)$.\nLet $\\tilde{\\boldsymbol{x}}=(x_{0},...,x_{S})$ be an $(S+1)$-dimensional\narray of (possibly different) realizations of $X$, i.e., each $x_{j}$\nfor $j=0,...,S$ is a realization of $X$, and define \n\\[\n\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})\\equiv\\sum_{j=0}^{S}\\sum_{\\boldsymbol{d}^{j}\\in\\mathcal{D}^{j}}h_{\\boldsymbol{d}^{j}}(\\boldsymbol{z},\\boldsymbol{z}',x_{j}).\n\\]\nFor $1\\le k\\le j$, define a \\textit{reduction} of $\\boldsymbol{d}^{j}=(d_{1}^{j},...,d_{S}^{j})$\nas $\\boldsymbol{d}^{j-k}=(d_{1}^{j-k},...,d_{S}^{j-k})$, such that\n$d_{s}^{j-k}\\le d_{s}^{j}$ $\\forall s$. Symmetrically, for $1\\le k\\le S-j$,\ndefine an \\textit{extension} of $\\boldsymbol{d}^{j}$ as $\\boldsymbol{d}^{j+k}=(d_{1}^{j+k},...,d_{S}^{j+k})$,\nsuch that $d_{s}^{j+k}\\ge d_{s}^{j}$ $\\forall s$. For example, given\n$\\boldsymbol{d}^{2}=(1,1,0)$, a reduction $\\boldsymbol{d}^{1}$ is\neither $(1,0,0)$ or $(0,1,0)$ but not $(0,0,1)$, a reduction $\\boldsymbol{d}^{0}$\nis $(0,0,0)$, and an extension $\\boldsymbol{d}^{3}$ is $(1,1,1)$.\nLet $\\mathcal{D}^{<}(\\boldsymbol{d}^{j})$ and $\\mathcal{D}^{>}(\\boldsymbol{d}^{j})$\nbe the set of all reductions and extensions of $\\boldsymbol{d}^{j}$,\nrespectively, and let $\\mathcal{D}^{\\le}(\\boldsymbol{d}^{j})\\equiv\\mathcal{D}^{<}(\\boldsymbol{d}^{j})\\cup\\{\\boldsymbol{d}^{j}\\}$\nand $\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})\\equiv\\mathcal{D}^{>}(\\boldsymbol{d}^{j})\\cup\\{\\boldsymbol{d}^{j}\\}$.\nRecall $\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})\\equiv E[\\theta(\\boldsymbol{d},x,\\epsilon)|\\boldsymbol{U}=\\boldsymbol{u}]$.\nNow, we state the main lemma of this section.\n\n\\begin{lemma}\\label{lem:sign_match_gen}In model \\eqref{eq:main_model1}--\\eqref{eq:main_model2},\nsuppose Assumptions SS, M1, IN, E, EX, and M hold, and $h(\\boldsymbol{z},\\boldsymbol{z}',x)$\nand $h(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})$ are\nwell-defined. For $\\boldsymbol{z},\\boldsymbol{z}'$ such that \\eqref{eq:ps_condi}\nand Assumption EQ hold, and for $j=1,...,S$, it satisfies that\\\\\n (i) $sgn\\{h(\\boldsymbol{z},\\boldsymbol{z}',x)\\}=sgn\\left\\{ \\vartheta(\\boldsymbol{d}^{j},x;\\boldsymbol{u})-\\vartheta(\\boldsymbol{d}^{j-1},x;\\boldsymbol{u})\\right\\} $\na.e. 
$\\boldsymbol{u}$ $\\forall\\boldsymbol{d}^{j-1}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})$;\\\\\n (ii) for $\\iota\\in\\{-1,0,1\\}$, if $sgn\\{\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})\\}=sgn\\{-\\vartheta(\\boldsymbol{d}^{k},x_{k};\\boldsymbol{u})+\\vartheta(\\boldsymbol{d}^{k-1},x_{k-1};\\boldsymbol{u})\\}=\\iota$\n$\\forall\\boldsymbol{d}^{k-1}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{k})$\n$\\forall k\\neq j$ ($k\\ge1$), then $sgn\\{\\vartheta(\\boldsymbol{d}^{j},x_{j};\\boldsymbol{u})-\\vartheta(\\boldsymbol{d}^{j-1},x_{j-1};\\boldsymbol{u})\\}=\\iota$\na.e. $\\boldsymbol{u}$ $\\forall\\boldsymbol{d}^{j-1}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})$.\\end{lemma}\n\nParts (i) and (ii) parallel \\eqref{eq:sign_match} and \\eqref{eq:sign_match_x},\nrespectively. Using Lemma \\ref{lem:sign_match_gen}, we can learn\nabout the ATE. First, note that the sign of the ATE is identified\nby Lemma \\ref{lem:sign_match_gen}(i), since $E[Y(\\boldsymbol{d})|x]=E[\\vartheta(\\boldsymbol{d},x;\\boldsymbol{U})]$.\nNext, we establish the bounds on $E[Y(\\boldsymbol{d}^{j})|x]$ for\ngiven $\\boldsymbol{d}^{j}$ for some $j=0,...,S$.\n\nWe first present the bounds using the variation in $\\boldsymbol{Z}$\nonly, i.e., by using Lemma \\ref{lem:sign_match_gen}(i). To this end,\nwe fix $X=x$ and suppress it in all relevant expressions. To gain\nefficiency we define the integrated version of $h$ as \n\\begin{align}\nH(x) & \\equiv E\\left[h(\\boldsymbol{Z},\\boldsymbol{Z}',x)\\left|(\\boldsymbol{Z},\\boldsymbol{Z}')\\in\\mathcal{Z}_{EQ,j}\\text{ }\\forall j=0,...,S-1\\right.\\right],\\label{eq:H}\n\\end{align}\nwhere $\\mathcal{Z}_{EQ,j}$ is the set of $(\\boldsymbol{z},\\boldsymbol{z}')$\nthat satisfy \\eqref{eq:ps_condi} and Assumption EQ given $j$, and\n$h(\\boldsymbol{z},\\boldsymbol{z}',x)=0$ whenever it is not well-defined.\nWe focus on the case $H(x)>0$; $H(x)<0$ is symmetric and $H(x)=0$\nis straightforward. 
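Although the analysis in this section is at the population level, the objects $h$, $h_{\\boldsymbol{d}^{j}}$, and $H$ are simple functionals of conditional cell means and cell probabilities, so with discrete $(\\boldsymbol{Z},X)$ they can be estimated by sample counterparts. The following sketch is purely illustrative (the data layout, column names, and helper functions are our own assumptions rather than part of the formal analysis):
\\begin{verbatim}
import numpy as np
import pandas as pd

# Assumed layout (for illustration only): df["Y"] outcome, df["X"] discrete
# covariate, df["Z"] and df["D"] tuples of length S holding the instrument
# vector and the treatment (entry) vector for each observation.

def h_hat(df, z, z_prime, x):
    """Sample analog of h(z,z',x) = E[Y|Z=z,X=x] - E[Y|Z=z',X=x]."""
    at_z  = df["Z"].apply(lambda t: t == z) & (df["X"] == x)
    at_zp = df["Z"].apply(lambda t: t == z_prime) & (df["X"] == x)
    return df.loc[at_z, "Y"].mean() - df.loc[at_zp, "Y"].mean()

def h_d_hat(df, d, z, z_prime, x):
    """Sample analog of h_d(z,z',x) as defined in the text."""
    def piece(zz):
        at_z = df["Z"].apply(lambda t: t == zz)
        p_d  = df.loc[at_z, "D"].apply(lambda t: t == d).mean()  # Pr[D=d|Z=zz]
        cell = at_z & df["D"].apply(lambda t: t == d) & (df["X"] == x)
        return df.loc[cell, "Y"].mean() * p_d                    # E[Y|d,zz,x] Pr[D=d|zz]
    return piece(z) - piece(z_prime)

def H_hat(df, pairs, x):
    """Sample analog of H(x): average of h over pairs (z,z') in the set Z_EQ,
    with pairs where h is not well-defined (empty cells) set to zero."""
    vals = [h_hat(df, z, zp, x) for z, zp in pairs]
    vals = [0.0 if np.isnan(v) else v for v in vals]
    return float(np.mean(vals)) if vals else np.nan
\\end{verbatim}
In particular, whether $H(x)>0$ or $H(x)<0$ holds can be gauged directly from such sample counterparts.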
Using Lemma \\ref{lem:sign_match_gen}(i), one\ncan readily show that $L_{\\boldsymbol{d}^{j}}(x)\\le E[Y(\\boldsymbol{d}^{j})|x]\\le U_{\\boldsymbol{d}^{j}}(x)$\nwith \n\\begin{align}\nU_{\\boldsymbol{d}^{j}}(x) & \\equiv\\inf_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Biggl\\{\\Pr[Y=1,\\boldsymbol{D}\\in\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})|\\boldsymbol{z},x]+\\Pr[\\boldsymbol{D}\\in\\mathcal{D}\\backslash\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})|\\boldsymbol{z},x]\\Biggr\\},\\label{eq:U_dj}\\\\\nL_{\\boldsymbol{d}^{j}}(x) & \\equiv\\sup_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Biggl\\{\\Pr[Y=1,\\boldsymbol{D}\\in\\mathcal{D}^{\\le}(\\boldsymbol{d}^{j})|\\boldsymbol{z},x]\\Biggr\\}.\\label{eq:L_dj}\n\\end{align}\nWe can simplify these bounds and show that they are sharp under the\nfollowing assumption.\n\n\\begin{asC}(i) $\\mu_{\\boldsymbol{d}}(\\cdot)$ and $\\nu_{\\boldsymbol{d}_{-s}}(\\cdot)$\nare continuous; (ii) $\\mathcal{Z}$ is compact.\\end{asC}\n\nUnder Assumption C, for given $\\boldsymbol{d}^{j}$, there exist vectors\n$\\bar{\\boldsymbol{z}}\\equiv(\\bar{z}_{1},...,\\bar{z}_{S})$ and $\\underline{\\boldsymbol{z}}\\equiv(\\underline{z}_{1},...,\\underline{z}_{S})$\nthat satisfy \n\\begin{equation}\n\\begin{array}{c}\n\\bar{\\boldsymbol{z}}=\\arg\\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\max_{\\boldsymbol{d}\\in\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})}\\Pr[\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z}],\\\\\n\\underline{\\boldsymbol{z}}=\\arg\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\min_{\\boldsymbol{d}\\in\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})}\\Pr[\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z}].\n\\end{array}\\label{eq:max_min_pM}\n\\end{equation}\nThe following is the first main result of this study, which establishes\nthe sharp bounds on $E[Y(\\boldsymbol{d}^{j})|x]$, where $X=x$ is\nfixed in the model.\n\n\\begin{theorem}\\label{thm:sharp}Given model \\eqref{eq:main_model1}--\\eqref{eq:main_model2}\nwith fixed $X=x$, suppose Assumptions SS, M1, IN, E, EX, M$^{*}$,\nEQ and C hold. In addition, suppose $H(x)$ is well-defined and $H(x)\\ge0$.\nThen, the bounds $U_{\\boldsymbol{d}^{j}}$ and $L_{\\boldsymbol{d}^{j}}$\nin \\eqref{eq:U_dj} and \\eqref{eq:L_dj} simplify to \n\\begin{align*}\nU_{\\boldsymbol{d}^{j}}(x) & =\\Pr[Y=1,\\boldsymbol{D}\\in\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})|\\bar{\\boldsymbol{z}},x]+\\Pr[\\boldsymbol{D}\\in\\mathcal{D}\\backslash\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})|\\bar{\\boldsymbol{z}},x],\\\\\nL_{\\boldsymbol{d}^{j}}(x) & =\\Pr[Y=1,\\boldsymbol{D}\\in\\mathcal{D}^{\\le}(\\boldsymbol{d}^{j})|\\underline{\\boldsymbol{z}},x],\n\\end{align*}\nand these bounds are sharp.\n\n\\end{theorem}\n\nWith binary $Y$ (Assumption M$^{*}$), sharp bounds on the mean treatment\nparameters can be obtained, which is reminiscent of the findings of\nstudies that consider single-treatment models. However, our analysis\nis substantially different from earlier studies. In a single treatment\nmodel, \\citet{SV11} use the propensity score as a scalar conditioning\nvariable, which summarizes all the exogenous variation in the selection\nprocess and is convenient for simplifying the bounds and proving sharpness.\nHowever, in the context of our paper this approach is invalid, since\n$\\Pr[D_{s}=1|Z_{s}=z_{s},\\boldsymbol{D}_{-s}=\\boldsymbol{d}_{-s}]$\ncannot be written in terms of a propensity score of player $s$ as\n$\\boldsymbol{D}_{-s}$ is endogenous. 
We instead use the vector $\\boldsymbol{Z}$ as conditioning variables and establish a partial ordering for the relevant conditional probabilities (that define the lower and upper bounds) with respect to the joint propensity score $\\Pr[\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{Z}=\\boldsymbol{z}]$ $\\forall\\boldsymbol{d}\\in\\mathcal{D}^{\\ge}(\\boldsymbol{d}^{j})$.

When the variation of $X$ is additionally exploited in the model, the bounds will be narrower than the bounds in Theorem \\ref{thm:sharp}. We now proceed with this case, utilizing Lemma \\ref{lem:sign_match_gen} (i) and (ii). First, analogous to \\eqref{eq:H}, we define the integrated version of $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})$ as 
\\begin{align*}
\\tilde{H}(\\tilde{\\boldsymbol{x}}) & \\equiv E\\left[\\tilde{h}(\\boldsymbol{Z},\\boldsymbol{Z}',\\tilde{\\boldsymbol{x}})\\left|(\\boldsymbol{Z},\\boldsymbol{Z}')\\in\\mathcal{Z}_{EQ,j}\\text{ }\\forall j=0,...,S-1\\right.\\right],
\\end{align*}
where $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})=0$ whenever it is not well-defined. Then, we define the following sets of two consecutive elements $(x_{j},x_{j-1})$ of $\\tilde{\\boldsymbol{x}}$ that satisfy the conditions in Lemma \\ref{lem:sign_match_gen}: for $j=1,...,S$, define $\\mathcal{X}_{j,j-1}^{0}(\\iota)\\equiv\\{(x_{j},x_{j-1}):sgn\\{\\tilde{H}(\\tilde{\\boldsymbol{x}})\\}=\\iota,x_{0}=\\cdots=x_{S}\\}$ and for $t\\ge1$, 
\\begin{align*}
\\mathcal{X}_{j,j-1}^{t}(\\iota) & \\equiv\\{(x_{j},x_{j-1}):sgn\\{\\tilde{H}(\\tilde{\\boldsymbol{x}})\\}=\\iota,(x_{k},x_{k-1})\\in\\mathcal{X}_{k,k-1}^{t-1}(-\\iota)\\mbox{ \\ensuremath{\\forall}}k\\neq j\\}\\cup\\mathcal{X}_{j,j-1}^{t-1}(\\iota),
\\end{align*}
where the sets are understood to be empty whenever $\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',\\tilde{\\boldsymbol{x}})$ is not well-defined. Let $\\mathcal{X}_{j,j-1}(\\iota)\\equiv\\bigcup_{t\\ge0}\\mathcal{X}_{j,j-1}^{t}(\\iota)$. Then, for $x\\in\\mathcal{X}$ and $\\boldsymbol{d}'=\\boldsymbol{d}^{j'}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})\\cup\\mathcal{D}^{>}(\\boldsymbol{d}^{j})$, define 
\\begin{align}
\\mathcal{X}_{\\boldsymbol{d}^{j}}^{L}(x;\\boldsymbol{d}') & \\equiv\\left\\{ x_{j'}:(x_{k},x_{k-1})\\in\\mathcal{X}_{k,k-1}(-1)\\cup\\mathcal{X}_{k,k-1}(0)\\mbox{ for }j'+1\\leq k\\leq j,x_{j}=x\\right\\} \\nonumber \\\\
 & \\cup\\left\\{ x_{j'}:(x_{k},x_{k-1})\\in\\mathcal{X}_{k,k-1}(1)\\cup\\mathcal{X}_{k,k-1}(0)\\mbox{ for }j+1\\leq k\\leq j',x_{j}=x\\right\\} ,\\label{eq:script_X_L}\\\\
\\mathcal{X}_{\\boldsymbol{d}^{j}}^{U}(x;\\boldsymbol{d}') & \\equiv\\left\\{ x_{j'}:(x_{k},x_{k-1})\\in\\mathcal{X}_{k,k-1}(1)\\cup\\mathcal{X}_{k,k-1}(0)\\mbox{ for }j'+1\\leq k\\leq j,x_{j}=x\\right\\} \\nonumber \\\\
 & \\cup\\left\\{ x_{j'}:(x_{k},x_{k-1})\\in\\mathcal{X}_{k,k-1}(-1)\\cup\\mathcal{X}_{k,k-1}(0)\\mbox{ for }j+1\\leq k\\leq j',x_{j}=x\\right\\} .\\label{eq:script_X_U}
\\end{align}
The following theorem summarizes our results:

\\begin{theorem}\\label{thm:main} In model \\eqref{eq:main_model1}--\\eqref{eq:main_model2}, suppose the assumptions of Lemma \\ref{lem:sign_match_gen} hold.
Then\nthe sign of the ATE is identified, and the upper and lower bounds\non the ASF and ATE with $\\boldsymbol{d},\\tilde{\\boldsymbol{d}}\\in\\mathcal{D}$\nare \n\\begin{align*}\nL_{\\boldsymbol{d}}(x) & \\leq E[Y(\\boldsymbol{d})|x]\\leq U_{\\boldsymbol{d}}(x)\n\\end{align*}\nand $L_{\\boldsymbol{d}}(x)-U_{\\tilde{\\boldsymbol{d}}}(x)\\leq E[Y(\\boldsymbol{d})-Y(\\tilde{\\boldsymbol{d}})|x]\\leq U_{\\boldsymbol{d}}(x)-L_{\\tilde{\\boldsymbol{d}}}(x)$,\nwhere for any given $\\boldsymbol{d}^{j}\\in\\mathcal{D}^{j}\\subset\\mathcal{D}$\nfor some $j$, \n\\begin{align*}\nU_{\\boldsymbol{d}^{j}}(x) & \\equiv\\inf_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Biggl\\{ E[Y|\\boldsymbol{D}=\\boldsymbol{d}^{j},\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\boldsymbol{z}]+\\Pr[\\boldsymbol{D}\\in\\mathcal{D}^{j}\\backslash\\{\\boldsymbol{d}^{j}\\}|\\boldsymbol{z}]\\overline{Y}\\\\\n & +\\sum_{\\boldsymbol{d}^{\\prime}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})\\cup\\mathcal{D}^{>}(\\boldsymbol{d}^{j})}\\inf_{x'\\in\\mathcal{X}_{\\boldsymbol{d}^{j}}^{U}(x;\\boldsymbol{d}')}E[Y|\\boldsymbol{D}=\\boldsymbol{d}',\\boldsymbol{z},x']\\Pr[\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{z}]\\Biggr\\},\\\\\nL_{\\boldsymbol{d}^{j}}(x) & \\equiv\\sup_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Biggl\\{ E[Y|\\boldsymbol{D}=\\boldsymbol{d}^{j},\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{j}|\\boldsymbol{z}]+\\Pr[\\boldsymbol{D}\\in\\mathcal{D}^{j}\\backslash\\{\\boldsymbol{d}^{j}\\}|\\boldsymbol{z}]\\underline{Y}\\\\\n & +\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})\\cup\\mathcal{D}^{>}(\\boldsymbol{d}^{j})}\\sup_{x'\\in\\mathcal{X}_{\\boldsymbol{d}^{j}}^{L}(x;\\boldsymbol{d}')}E[Y|\\boldsymbol{D}=\\boldsymbol{d}',\\boldsymbol{z},x']\\Pr[\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{z}]\\Biggr\\}.\n\\end{align*}\n\\end{theorem}\n\nSee Sections \\ref{sec:Monte-Carlo-Studies} and \\ref{sec:Empirical-Application}\nfor concrete examples of the expression of $U_{\\boldsymbol{d}^{j}}(x)$\nand $L_{\\boldsymbol{d}^{j}}(x)$. The terms $\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{'}|\\boldsymbol{z}]\\overline{Y}$\nand $\\Pr[\\boldsymbol{D}=\\boldsymbol{d}^{'}|\\boldsymbol{z}]\\underline{Y}$\nappear in the expression of the bounds because Lemma \\ref{lem:sign_match_gen}\ncannot establish an order between $\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})$'s\nfor $\\boldsymbol{d}\\in\\mathcal{D}^{j}$, which is related to the complication\ndue to multiple equilibria, which occurs for $\\boldsymbol{d}\\in\\mathcal{D}^{j}$.\nWhen the variation in $\\boldsymbol{Z}$ is only used in deriving the\nbounds, $\\mathcal{X}_{k,k-1}(\\iota)$ should simply reduce to $\\mathcal{X}_{k,k-1}^{0}(\\iota)$\nin the definition of $\\mathcal{X}_{\\boldsymbol{d}^{j}}^{L}(x;\\boldsymbol{d}')$\nand $\\mathcal{X}_{\\boldsymbol{d}^{j}}^{U}(x;\\boldsymbol{d}')$. When\n$Y$ is binary with no $X$, such bounds are equivalent to \\eqref{eq:U_dj}\nand \\eqref{eq:L_dj}. The variation in $X$ given $\\boldsymbol{Z}$\nyields substantially narrower bounds than the sharp bounds established\nin Theorem \\ref{thm:sharp} under Assumption C. 
However, the resulting\nbounds are not automatically implied to be sharp from Theorem \\ref{thm:sharp},\nsince they are based on a different DGP and the additional exclusion\nrestriction.\n\n\\begin{remark}\\label{rem:mourifie}Maintaining that $Y$ is binary,\nsharp bounds on the ATE with variation in $X$ can be derived assuming\nthat the signs of $\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})-\\vartheta(\\boldsymbol{d}',x';\\boldsymbol{u})$\nare identified for $\\boldsymbol{d}\\in\\mathcal{D}$ and $\\boldsymbol{d}'\\in\\mathcal{D}^{<}(\\boldsymbol{d})$\nand $x,x'\\in\\mathcal{X}$ via Lemma \\ref{lem:sign_match_gen}. To\nsee this, define \n\\begin{align*}\n\\mathcal{\\tilde{X}}_{\\boldsymbol{d}}^{U}(x;\\boldsymbol{d}') & \\equiv\\left\\{ x':\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})-\\vartheta(\\boldsymbol{d}',x';\\boldsymbol{u})\\leq0\\text{ a.e. }\\boldsymbol{u}\\right\\} ,\\\\\n\\mathcal{\\tilde{X}}_{\\boldsymbol{d}}^{L}(x;\\boldsymbol{d}') & \\equiv\\left\\{ x':\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})-\\vartheta(\\boldsymbol{d}',x';\\boldsymbol{u})\\geq0\\text{ a.e. }\\boldsymbol{u}\\right\\} ,\n\\end{align*}\nwhich are identified by assumption. Then, by replacing $\\mathcal{X}_{\\boldsymbol{d}}^{i}(x;\\boldsymbol{d}')$\nwith $\\mathcal{\\tilde{X}}_{\\boldsymbol{d}}^{i}(x;\\boldsymbol{d}')$\n(for $i\\in\\{U,L\\}$) in Theorem \\ref{thm:main}, we may be able to\nshow that the resulting bounds are sharp. Since Lemma \\ref{lem:sign_match_gen}\nimplies that $\\mathcal{X}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')\\subset\\mathcal{\\tilde{X}}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')$\nbut not necessarily $\\mathcal{X}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')\\supset\\mathcal{\\tilde{X}}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')$,\nthese modified bounds and the original bounds in Theorem \\ref{thm:main}\ndo not coincide. This contrasts with the result of \\citet{SV11} for\na single-treatment model, and the complication lies in the fact that\nwe deal with an incomplete model with a vector treatment. When there\nis no $X$, Lemma \\ref{lem:sign_match_gen}(i) establishes equivalence\nbetween the two signs, and thus, $\\mathcal{X}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')=\\mathcal{\\tilde{X}}_{\\boldsymbol{d}^{j}}^{i}(x;\\boldsymbol{d}')$\nfor $i\\in\\{U,L\\}$, which results in Theorem \\ref{thm:sharp}. Relatedly,\nwe can also exploit variation in $W$, namely, variables that are\ncommon to both $X$ and $\\boldsymbol{Z}$ (with or without exploiting\nexcluded variation of $X$). This is related to the analysis of \\citet{Chi10}\nand \\citet{mourifie2015sharp} in a single-treatment setting. One\ncaveat of this approach is that, similar to these papers, we need\nto additionally assume that $W\\perp(\\epsilon,\\boldsymbol{U})$.\\end{remark}\n\n\\begin{remark}When $X$ does not have enough variation, we can calculate\nthe bounds on the ATE. To see this, suppose we do not use the variation\nin $X$ and suppose $H(x)\\ge0$. Then $\\vartheta(\\boldsymbol{d}^{j},x;\\boldsymbol{u})\\ge\\vartheta(\\boldsymbol{d}^{j-1},x;\\boldsymbol{u})$\n$\\forall\\boldsymbol{d}^{j-1}\\in\\mathcal{D}^{<}(\\boldsymbol{d}^{j})$\n$\\forall j=1,...,S$ by Lemma \\ref{lem:sign_match_gen}(i) and by\ntransitivity, $\\vartheta(\\boldsymbol{d}^{'},x;\\boldsymbol{u})\\ge\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})$\nwith $\\boldsymbol{d}'$ being an extension of $\\boldsymbol{d}$. 
Therefore,\nwe have \n\\begin{align}\nE[Y(\\boldsymbol{d})|x] & \\le E[Y|\\boldsymbol{D}=\\boldsymbol{d},\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z}]+\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}^{>}(\\boldsymbol{d})}E[Y|\\boldsymbol{D}=\\boldsymbol{d}',\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{z}]\\nonumber \\\\\n & +\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}\\backslash\\mathcal{D}^{\\ge}(\\boldsymbol{d})}E[Y(\\boldsymbol{d}^{j})|\\boldsymbol{D}=\\boldsymbol{d}',\\boldsymbol{z},x]\\Pr[\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{z}].\\label{eq:interim_bd}\n\\end{align}\nWithout using variation in $X$, we can bound the last term in \\eqref{eq:interim_bd}\nby $Y\\in[\\underline{Y},\\overline{Y}]$. This is done above with $\\theta(\\boldsymbol{d},x,\\epsilon)=1[\\mu(\\boldsymbol{d},x)\\geq\\epsilon_{\\boldsymbol{d}}]$\nand $\\vartheta(\\boldsymbol{d},x;\\boldsymbol{u})=F_{\\epsilon|\\boldsymbol{U}}(\\mu(\\boldsymbol{d},x)|\\boldsymbol{u})$.\n\n\\end{remark}\n\n\\section{Numerical Study\\label{sec:Monte-Carlo-Studies}}\n\nTo illustrate the main results of this study in a simulation exercise,\nwe calculate the bounds on the ATE using the following data generating\nprocess: \n\\begin{align*}\nY_{\\boldsymbol{d}} & =1[\\tilde{\\mu}_{\\boldsymbol{d}}+\\beta X\\ge\\epsilon],\\\\\nD_{1} & =1[\\delta_{2}D_{2}+\\gamma_{1}Z_{1}\\ge V_{1}],\\\\\nD_{2} & =1[\\delta_{1}D_{1}+\\gamma_{2}Z_{2}\\ge V_{2}],\n\\end{align*}\nwhere $(\\epsilon,V_{1},V_{2})$ are drawn, independent of $(X,\\boldsymbol{Z})$,\nfrom a joint normal distribution with zero means and each correlation\ncoefficient being $0.5$. We draw $Z_{s}$ ($s=1,2$) and $X$ from\na multinomial distribution, allowing $Z_{s}$ to take two values,\n$\\mathcal{Z}_{s}=\\{-1,1\\}$, and $X$ to take either three values,\n$\\mathcal{X}=\\{-1,0,1\\}$, or fifteen values, $\\mathcal{X}=\\{-1,-\\frac{6}{7},-\\frac{5}{7},...,\\frac{5}{7},\\frac{6}{7},1\\}$.\nBeing consistent with Assumption M, we choose $\\tilde{\\mu}_{11}>\\tilde{\\mu}_{10}$\nand $\\tilde{\\mu}_{01}>\\tilde{\\mu}_{00}$. Let $\\tilde{\\mu}_{10}=\\tilde{\\mu}_{01}$.\nWith Assumption SS, we choose $\\delta_{1}<0$ and $\\delta_{2}<0$.\nWithout loss of generality, we choose positive values for $\\gamma_{1}$,\n$\\gamma_{2}$, and $\\beta$. Specifically, $\\tilde{\\mu}_{11}=0.25$,\n$\\tilde{\\mu}_{10}=\\tilde{\\mu}_{01}=0$ and $\\tilde{\\mu}_{00}=-0.25$.\nFor default values, $\\delta_{1}=\\delta_{2}\\equiv\\delta=-0.1$, $\\gamma_{1}=\\gamma_{2}\\equiv\\gamma=1$\nand $\\beta=0.5$.\n\nIn this exercise, we focus on the ATE $E[Y(1,1)-Y(0,0)\\vert X=0]$,\nwhose true value is $0.2$ given our choice of parameter values. For\n$h(\\boldsymbol{z},\\boldsymbol{z}',x)$, we consider $\\boldsymbol{z}=(1,1)$\nand $\\boldsymbol{z}'=(-1,-1)$. Note that $H(x)=h(\\boldsymbol{z},\\boldsymbol{z}',x)$\nand $\\tilde{H}(x,x',x'')=\\tilde{h}(\\boldsymbol{z},\\boldsymbol{z}',x,x',x'')$,\nsince $Z_{s}$ is binary. 
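For concreteness, the following sketch simulates this design. The error variances, the marginal distributions used to draw $Z_{s}$ and $X$, and the equilibrium selection rule on the region of multiple equilibria are not pinned down by the description above; the choices below (unit variances, uniform draws, and random selection between the two monopoly equilibria) are assumptions made purely for illustration.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

mu = {(1, 1): 0.25, (1, 0): 0.0, (0, 1): 0.0, (0, 0): -0.25}
delta, gamma, beta = -0.1, 1.0, 0.5   # default parameter values

def simulate(n):
    cov = np.array([[1.0, 0.5, 0.5],
                    [0.5, 1.0, 0.5],
                    [0.5, 0.5, 1.0]])   # unit variances assumed
    eps, v1, v2 = rng.multivariate_normal(np.zeros(3), cov, size=n).T
    z1, z2 = rng.choice([-1, 1], size=(2, n))
    x = rng.choice([-1.0, 0.0, 1.0], size=n)

    d1 = np.empty(n, dtype=int)
    d2 = np.empty(n, dtype=int)
    for i in range(n):
        # collect all pure-strategy equilibria of the 2x2 entry game
        eq = []
        for a in (0, 1):
            for b in (0, 1):
                br1 = int(delta * b + gamma * z1[i] >= v1[i])  # best response of 1 to b
                br2 = int(delta * a + gamma * z2[i] >= v2[i])  # best response of 2 to a
                if (a, b) == (br1, br2):
                    eq.append((a, b))
        # under strategic substitutes, multiplicity involves only (1,0) and (0,1);
        # we select between them at random (an assumed selection rule)
        d1[i], d2[i] = eq[rng.integers(len(eq))]

    y = (np.array([mu[(a, b)] for a, b in zip(d1, d2)]) + beta * x >= eps).astype(int)
    return y, d1, d2, z1, z2, x
\\end{verbatim}
Sample analogs of $h(\\boldsymbol{z},\\boldsymbol{z}',x)$ in \\eqref{eq:h(zzx)} and of $H(x)$ in \\eqref{eq:H} can then be formed directly from the simulated draws.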
Then, we can derive the sets $\\mathcal{X}_{\\boldsymbol{d}}^{U}(0;\\boldsymbol{d}')$\nand $\\mathcal{X}_{\\boldsymbol{d}}^{L}(0;\\boldsymbol{d}')$ for each\n$\\boldsymbol{d}\\in\\{(1,1),(0,0)\\}$ and $\\boldsymbol{d}'\\neq\\boldsymbol{d}$\nin Theorem \\ref{thm:main}.\n\nBased on our design, $H(0)>0$, and thus, the bounds when we use $Z$\nonly are, with $x=0$, \n\\[\n\\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1,\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},x]\\le\\Pr[Y(0,0)=1\\vert x]\\le\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1\\vert\\boldsymbol{z},x],\n\\]\nand \n\\[\n\\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1\\vert\\boldsymbol{z},x]\\le\\Pr[Y(1,1)=1\\vert x]\\le\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]+1-\\Pr[\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]\\right\\} .\n\\]\nUsing both $\\boldsymbol{Z}$ and $X$, we obtain narrower bounds.\nFor example, when $\\left|\\mathcal{X}\\right|=3$, with $\\tilde{H}(0,-1,-1)<0$,\nthe lower bound on $\\Pr[Y(0,0)=1\\vert X=0]$ becomes \n\\[\n\\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},0]+\\Pr[Y=1,\\boldsymbol{D}\\in\\{(1,0),(0,1)\\}\\vert\\boldsymbol{z},-1]\\right\\} .\n\\]\nWith $\\tilde{H}(1,1,0)<0$, the upper bound on $\\Pr[Y(1,1)=1\\vert X=0]$\nbecomes \n\\[\n\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},0]+\\Pr[Y=1,\\boldsymbol{D}\\in\\{(1,0),(0,1)\\}\\vert\\boldsymbol{z},1]+\\Pr[\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},0]\\right\\} .\n\\]\nFor comparison, we calculate the bounds in \\citet{manski1990nonparametric}\nusing $\\boldsymbol{Z}$. These bounds are given by \n\\begin{align*}\n & \\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1,\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},x]\\le\\Pr[Y(0,0)=1\\vert x]\\\\\n & \\le\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z},x]+1-\\Pr[\\boldsymbol{D}=(0,0)\\vert\\boldsymbol{z}]\\right\\} ,\n\\end{align*}\nand \n\\begin{align*}\n & \\max_{\\boldsymbol{z}\\in\\mathcal{Z}}\\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]\\le\\Pr[Y(1,1)=1\\vert x]\\\\\n & \\le\\min_{\\boldsymbol{z}\\in\\mathcal{Z}}\\left\\{ \\Pr[Y=1,\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z},x]+1-\\Pr[\\boldsymbol{D}=(1,1)\\vert\\boldsymbol{z}]\\right\\} .\n\\end{align*}\nWe also compare the estimated ATE using a standard linear IV model\nin which the nonlinearity of the true DGP is ignored:\n\\begin{align*}\nY & =\\pi_{0}+\\pi_{1}D_{1}+\\pi_{2}D_{2}+\\beta X+\\epsilon,\\\\\n\\left(\\begin{array}{c}\nD_{1}\\\\\nD_{2}\n\\end{array}\\right) & =\\left(\\begin{array}{c}\n\\gamma_{10}\\\\\n\\gamma_{20}\n\\end{array}\\right)+\\left(\\begin{array}{cc}\n\\gamma_{11} & \\gamma_{12}\\\\\n\\gamma_{21} & \\gamma_{22}\n\\end{array}\\right)\\left(\\begin{array}{c}\nZ_{1}\\\\\nZ_{2}\n\\end{array}\\right)+\\left(\\begin{array}{c}\nV_{1}\\\\\nV_{2}\n\\end{array}\\right).\n\\end{align*}\nHere, the first stage is the reduced-form representation of the linear\nsimultaneous equations model for strategic interaction. Under this\nspecification, the ATE becomes $E[Y(1,1)-Y(0,0)\\vert X=0]=\\pi_{1}+\\pi_{2}$,\nwhich is estimated via two-stage least squares (TSLS).\n\nThe bounds calculated for the ATE are shown in Figures \\ref{fig:sim1}--\\ref{fig:sim4}.\nFigure \\ref{fig:sim1} shows how the bounds on the ATE change, as\nthe value of $\\gamma$ changes from $0$ to $2.5$. 
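Before turning to the figures, we note that all of the bounds displayed above are simple functionals of conditional cell probabilities. Continuing the simulation sketch above, and presuming $H(0)>0$ as in our design, they can be computed as follows (again an illustrative sketch, not the code used to produce the figures):
\\begin{verbatim}
def bounds_at_x(y, d1, d2, z1, z2, x, x0=0.0):
    """Sample analogs of the Z-only bounds and the Manski bounds on
    Pr[Y(1,1)=1|X=x0] and Pr[Y(0,0)=1|X=x0], and the implied ATE bounds."""
    cells = [(z1 == za) & (z2 == zb) & (x == x0)
             for za in (-1, 1) for zb in (-1, 1)]
    cells = [m for m in cells if m.any()]
    def pr(event, m):                    # empirical Pr[event | cell m]
        return event[m].mean()
    d11 = (d1 == 1) & (d2 == 1)
    d00 = (d1 == 0) & (d2 == 0)
    y1 = (y == 1)
    # proposed bounds using Z only (valid when H(0) > 0)
    lb11 = max(pr(y1, m) for m in cells)
    ub11 = min(pr(y1 & d11, m) + 1 - pr(d11, m) for m in cells)
    lb00 = max(pr(y1 & d00, m) for m in cells)
    ub00 = min(pr(y1, m) for m in cells)
    # Manski-type bounds
    m_lb11 = max(pr(y1 & d11, m) for m in cells)
    m_ub11 = min(pr(y1 & d11, m) + 1 - pr(d11, m) for m in cells)
    m_lb00 = max(pr(y1 & d00, m) for m in cells)
    m_ub00 = min(pr(y1 & d00, m) + 1 - pr(d00, m) for m in cells)
    return {"ATE_proposed": (lb11 - ub00, ub11 - lb00),
            "ATE_manski":   (m_lb11 - m_ub00, m_ub11 - m_lb00)}
\\end{verbatim}
For example, \\texttt{bounds\\_at\\_x(*simulate(200000))} should return two intervals that bracket the true ATE of $0.2$, with the proposed interval weakly narrower than the Manski interval, mirroring Figure \\ref{fig:sim1}.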
The larger $\\gamma$ is, the stronger the instrument $\\boldsymbol{Z}$ is. The first conspicuous result is that the TSLS estimate of the ATE is biased because of the problem of misspecification. Next, as expected, Manski's bounds and our proposed bounds converge to the true value of the ATE as the instrument becomes stronger. Overall, our bounds, with or without exploiting the variation of $X$, are much narrower than Manski's bounds.\\footnote{Although we do not make a rigorous comparison of the assumptions here, note that the bounds by \\citet{MP00} under the semi-MTR are expected to be similar to ours. However, their bounds require assuming the direction of the monotonicity.} Notice that the sign of the ATE is identified in the whole range of $\\gamma$, as predicted by the first part of Theorem \\ref{thm:main}, in contrast to Manski's bounds. Using the additional variation in $X$ with $\\left|\\mathcal{X}\\right|=3$ decreases the width of the bounds, particularly through the smaller upper bounds on the ATE in this simulation design. Figure \\ref{fig:sim2} depicts the bounds using $X$ with $\\left|\\mathcal{X}\\right|=15$, which yields narrower bounds than when $\\left|\\mathcal{X}\\right|=3$, and substantially narrower than those only using $\\boldsymbol{Z}$.

Figure \\ref{fig:sim3} shows how the bounds change as the value of $\\beta$ changes from $0$ to $1.5$, where a larger $\\beta$ corresponds to a stronger exogenous variable $X$. The jumps in the upper bound are associated with the sudden changes in the signs of $\\tilde{H}(-1,0,-1)$ and $\\tilde{H}(0,1,1)$. At least in this simulation design, the strength of $X$ is not a crucial factor for obtaining narrower bounds. In fact, based on other simulation results (omitted in the paper), we conclude that the number of values $X$ can take matters more than the dispersion of $X$ (unless we pursue point identification of the ATE).

Finally, Figure \\ref{fig:sim4} shows how the width of the bounds is related to the extent to which the opponents' actions $D_{-s}$ affect one's payoff, captured by $\\delta$. We vary the value of $\\delta$ from $-2$ to $0$, and when $\\delta=0$, the players solve a single-agent optimization problem. Thus, heuristically, the bounds at this point would be similar to those that can be obtained when \\citet{SV11} is extended to a multiple-treatment setting with no simultaneity. In the figure, as the value of $\\delta$ becomes smaller, the bounds get narrower.

\\section{Empirical Application: Airline Markets and Pollution\\label{sec:Empirical-Application}}

Aircraft are a major source of emissions, and thus, quantifying the causal effect of air transport on pollution is of importance to policy makers. Therefore, in this section, we take the bounds proposed in Section \\ref{subsec:Partial-Identification} to data on airline market structure and air pollution in cities in the U.S.

In 2013, aircraft were responsible for about 3 percent of total U.S. carbon dioxide emissions and nearly 9 percent of carbon dioxide emissions from the U.S. transportation sector, and they are among the fastest-growing sources.\\footnote{See \\texttt{https:\/\/www.c2es.org\/content\/reducing-carbon-dioxide-emissions-from-aircraft\/7\/}} Airplanes remain the single largest source of carbon dioxide emissions within the U.S. transportation sector that is not yet subject to greenhouse gas regulations.
In addition to aircraft, airport land operations are also a major source of pollution, making airports one of the major sources of air pollution in the U.S. For example, 43 of the 50 largest airports are in ozone non-attainment areas and 12 are in particulate matter non-attainment areas.\\footnote{Ozone is not emitted directly but is formed when nitrogen oxides and hydrocarbons react in the atmosphere in the presence of sunlight. In United States environmental law, a non-attainment area is an area considered to have air quality worse than the National Ambient Air Quality Standards as defined in the Clean Air Act.}

There is a growing literature on the effects of air pollution on various health outcomes (see \\citet{schlenker2015airports}, \\citet{cg2003}, \\citet{kms2011}). In particular, \\citet{schlenker2015airports} show that the causal effect of airport pollution on the health of local residents---using a clever instrumental variable approach---is sizable. Their study focuses on the 12 major airports in California and implicitly assumes that the level of competition (or market structure) is fixed. Using high-frequency data, they exploit weather shocks on the East Coast---which might affect airport activity in California through network effects---to quantify the effect of airport pollution on respiratory and cardiovascular health complications. In contrast, we take the link between airport pollution and health outcomes as given and are interested in quantifying the effects of different market structures of the airline industry on air pollution.\\footnote{In this section, we refer to market structure as the particular configuration of airlines present in the market. In other words, market structure not only refers to the number of firms competing in a given market but to the actual identities of the firms. Thus, we will regard a market in which, say, United and American operate as different from a market in which Southwest and Delta operate, despite both markets having two carriers.} We explicitly allow market structure to be determined endogenously, as the outcome of an entry game in which airlines behave strategically to maximize their profits, while the resulting pollution in the market is not internalized by the firms. Understanding these effects can then help inform the policy discussion on pollution regulation. Given that we treat market structure as endogenous, one cannot simply run a regression of a measure of pollution on market structure (or the number of airlines present in a market) to obtain the causal effect, if there are unobserved market characteristics that affect both firm competition and pollution outcomes. For example, if at both ends of a city-pair there are firms from a high-polluting industry that engage in a lot of business travel, this would drive both pollution and the entry of airlines in the market. Therefore, we use the method presented in Section \\ref{subsec:Partial-Identification}.

In each market, we assume that a set of airlines chooses to be in or out as part of a simultaneous entry game of perfect information, as introduced in Section \\ref{subsec:Model}. Therefore, we treat market structure as our endogenous treatment.
We then model air pollution as a function of the airline market structure as in equation (\\ref{eq:main_model1}), where $Y$ is a measure of air pollution at the airport level (including both aircraft and land operation pollution), the vector $\\boldsymbol{D}$ represents the market structure, and $X$ includes market-specific covariates that affect pollution directly (i.e., not through airline activity), such as weather shocks or the share of pollution-related activity in the local economy.\\footnote{Note that our definition of market is a city-pair; hence, all of our variables are, in fact, weighted averages over the two cities.} Additionally, we allow for market-level covariates, $\\boldsymbol{W}$, which affect both the participation decisions and pollution (e.g., the size of the market as measured by population or the level of economic activity). As instruments, $Z_{s}$, we consider a firm-market proxy for cost introduced in \\citet{CT09}. We discuss the definition and construction of the variables in detail below.

Our objective is to estimate the effect of a change in market structure on air pollution, $E[Y(\\boldsymbol{d})-Y(\\boldsymbol{d}')]$. For example, we might be interested in the average effect on pollution of moving from two airlines operating in the market to three, or how the pollution level changes on average when Delta is a monopolist versus a situation in which Delta faces competition from American. Following entry, firms compete by choosing their pricing, frequency, and which airplanes to operate. Different market structures will have different impacts on these variables, which, in turn, affect the level of pollution. Note that the effects might be asymmetric. That is, for a given number of entrants, their identities are important in determining the pollution level. For example, when comparing the effect of a monopoly on pollution, we find that the identity of the operating airline plays a role.

To illustrate our estimation procedure, we consider three types of ATE exercises. The first examines the effects on pollution from a monopolist airline vis-\\`a-vis a market that is not served by any airline. The second set of exercises examines the total effect of the industry on pollution under all possible market configurations. Finally, the third type of exercise examines how the (marginal) effect of a given airline changes when the firm faces different levels of competition. Notice that regardless of the exercise we run, we quantify ``reduced-form'' effects, in that they summarize structural effects resulting from a given market structure. The idea is that, given the market structure, prices are determined and, given demand, the frequency of flights in the market is ultimately determined, which is what causes pollution.

In the rest of this section, we first describe our data sources, then show results for three different ATE exercises, and conclude with a brief discussion relating our results to potential policy recommendations.

\\subsection{Data}

For our analysis, we combine data spanning the period 2000--2015 from two sources: airline data from the U.S. Department of Transportation and pollution data from the Environmental Protection Agency (EPA).

\\textbf{Airline Data.} Our first data source contains airline information and combines publicly available data from the Department of Transportation's Origin and Destination Survey (DB1B) and Domestic Segment (T-100) database.
These datasets have been used extensively in the literature\nto analyze the airline industry (see, e.g., \\citet{borenstein89},\n\\citet{berry1992estimation}, \\citet{CT09}, and more recently, \\citet{robertssweeting2013}\nand \\citet{ciliberto2015market}). The DB1B database is a quarterly\nsample of all passenger domestic itineraries. The dataset contains\ncoupon-specific information, including origin and destination airports,\nnumber of coupons, the corresponding operating carriers, number of\npassengers, prorated market fare, market miles flown, and distance.\nThe T-100 dataset is a monthly census of all domestic flights broken\ndown by airline, and origin and destination airports.\n\nOur time-unit of analysis is a quarter and we define a market as the\nmarket for air connection between a pair of airports (regardless of\nintermediate stops) in a given quarter.\\footnote{In cities that operate more than one airport, we assume that flights\nto different airports in the same metropolitan area are in separate\nmarkets.} We restrict the sample to include the top 100 metropolitan statistical\nareas (MSA's), ranked by population at the beginning of our sample\nperiod. We follow \\citet{berry1992estimation} and \\citet{CT09} and\ndefine an airline as actively serving a market in a given quarter,\nif we observe at least 90 passengers in the DB1B survey flying with\nthe airline in the corresponding quarter.\\footnote{This corresponds to approximately the number of passengers that would\nbe carried on a medium-size jet operating once a week.} We exclude from our sample city pairs in which no airline operates\nin the whole sample period. Notice that we do include markets that\nare temporarily not served by any airline. This leaves us with 181,095\nmarket-quarter observations.\n\nIn our analysis, we allow for airlines to have a heterogeneous effect\non pollution, and to simplify computation, in each market we allow\nfor six potential participants: American (AA), Delta (DL), United\n(UA), Southwest (WN), a medium-size airline, and a low-cost carrier.\\footnote{That is, to limit the number of potential market structures, we lump\ntogether all the low cost carriers into one category, and Northwest,\nContinental, America West, and USAir under the medium airline type.} The latter is not a bad approximation to the data in that we rarely\nobserve more than one medium-size or low-cost in a market but it assumes\nthat all low-cost airlines have the same strategic behavior, and so\ndo the medium airlines. Table \\ref{tab:marketstructure} shows the\nnumber of firms in each market broken down by size as measured by\npopulation. 
As the table shows, market size alone does not explain\nmarket structure, a point first made by \\citet{CT09}.\n\n\\begin{table}[t!]\n\\caption{Distribution of the Number of Carriers by Market Size}\n\\label{tab:marketstructure} \\centering{}%\n\\begin{tabular}{lrrrr}\n\\hline \n & \\multicolumn{3}{c}{\\rule{0ex}{2.5ex}Market size} & \\tabularnewline\n\\hline \n\\rule{0ex}{2.5ex}\\# firms & Large & Medium & Small & Total\\tabularnewline\n\\hline \n\\rule{0ex}{2.5ex}0 & 7.96 & 8.20 & 8.62 & 8.18\\tabularnewline\n1 & 41.18 & 22.53 & 20.58 & 30.30\\tabularnewline\n2 & 28.14 & 23.41 & 21.25 & 25.04\\tabularnewline\n3 & 12.65 & 20.00 & 16.67 & 16.05\\tabularnewline\n4 & 7.65 & 14.72 & 15.17 & 11.51\\tabularnewline\n5 & 1.98 & 9.90 & 16.48 & 7.80\\tabularnewline\n6+ & 0.52 & 1.23 & 2.21 & 1.12\\tabularnewline\n\\hline \n\\rule{0ex}{2.5ex}\\# markets & 79,326 & 64,191 & 37,578 & 181,095\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\nIn our application, we consider two instruments for the entry decisions.\nThe first is the \\emph{airport presence} of an airline proposed by\n\\citet{berry1992estimation}. For a given airline, this variable is\nconstructed as the number of markets it serves out of an airport as\na fraction of the total number of markets served by all airlines out\nof the airport. A hub-and-spoke network allows firms to exploit demand-side\nand cost-side economies, which should affect the firm's profitability.\nWhile \\citet{berry1992estimation} assumes that an airline's airport\npresence only affects its own profits (and hence, is excluded from\nrivals' profits), \\citet{CT09} argue that this may not be the case\nin practice, since airport presence might be a measure of product\ndifferentiation, rendering it likely to enter the profit function\nof all firms through demand. While an instrument that enters all of\nthe profit functions is fine in our context (see Appendix \\ref{subsec:Common_Z}),\nwe also consider the instrument proposed by \\citet{CT09}, which captures\nshocks to the fixed cost of providing a service in a market. This\nvariable, which they call \\emph{cost}, is constructed as the percentage\nof the nonstop distance that the airline must travel in excess of\nthe nonstop distance, if the airline uses a connecting instead of\na nonstop flight.\\footnote{Mechanically, the variable is constructed as the difference between\nthe sum of the distances of a market's endpoints and the closest hub\nof an airline, and the nonstop distance between the endpoints, divided\nby the nonstop distance.} Arguably, this variable only affects its own profits and is excluded\nfrom rivals' profits.\n\n\\begin{table}[t!]\n\\caption{Airline Summary Statistics}\n\\label{tab:airlinessumstat} \\centering{}%\n\\begin{tabular}{llccccccc}\n\\hline \n\\rule{0ex}{2.5ex} & & American & Delta & United & Southwest & medium & low-cost & \\tabularnewline\n\\hline \n\\rule{0ex}{2.5ex}Market presence (0\/1) & mean & 0.44 & 0.57 & 0.28 & 0.25 & 0.56 & 0.17 & \\tabularnewline\n & sd & 0.51 & 0.51 & 0.46 & 0.44 & 0.51 & 0.38 & \\tabularnewline\nAirport presence (\\%) & mean & 0.43 & 0.56 & 0.27 & 0.25 & 0.39 & 0.10 & \\tabularnewline\n & sd & 0.17 & 0.18 & 0.16 & 0.18 & 0.14 & 0.08 & \\tabularnewline\nCost (\\%) & mean & 0.71 & 0.41 & 0.76 & 0.29 & 0.22 & 0.04 & \\tabularnewline\n & sd & 1.56 & 1.28 & 1.43 & 0.83 & 0.60 & 0.17 & \\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:airlinessumstat} presents the summary statistics of\nthe airline related variables. 
Of the leading airlines, we see that\nAmerican and Delta are present in about half of the markets, while\nUnited and Southwest are only present in about a quarter of the markets.\nAmerican and Delta tend to dominate the airports in which they operate\nmore than United and Southwest. From the cost variable, we see that\nboth American and United tend to operate a hub-and-spoke network,\nwhile Southwest (and to a lesser extent Delta) operates most markets\nnonstop.\n\n\\textbf{Pollution Data.} The second component of our dataset is the\nair pollution data. The EPA compiles a database of outdoor concentrations\nof pollutants measured at more than 4,000 monitoring stations throughout\nthe U.S., owned and operated mainly by state environmental agencies.\nEach monitoring station is geocoded, and hence, we are able to merge\nthese data with the airline dataset by matching all the monitoring\nstations that are located within a 10km radius of each airport in\nour first dataset.\n\nThe principal emissions of aircraft include the greenhouse gases carbon\ndioxide ($\\text{CO}_{2}$) and water vapor ($\\text{H}_{2}\\text{O}$),\nwhich have a direct impact on climate change. Aircraft jet engines\nalso produce nitric oxide ($\\text{NO}$) and nitrogen dioxide ($\\text{NO}_{2}$)\n(which together are termed nitrogen oxides ($\\text{NO}_{\\text{x}}$)),\ncarbon monoxide (CO), oxides of sulphur ($\\text{SO}_{\\text{x}}$),\nunburned or partially combusted hydrocarbons (also known as volatile\norganic compounds or VOC's), particulates, and other trace compounds\n(see, \\citet{FAA2015}). In addition, ozone ($\\text{O}_{3}$) is formed\nby the reaction of VOC's and $\\text{NO}_{\\text{x}}$ in the presence\nof heat and sunlight. The set of pollutants other than $\\text{CO}_{2}$\nare more pernicious in that they can harm human health directly and\ncan result in respiratory, cardiovascular, and neurological conditions.\nResearch to date indicates that fine particulate matter (PM) is responsible\nfor the majority of the health risks from aviation emissions, although\nozone has a substantial health impact too.\\footnote{See \\citet{FAA2015}.}\nTherefore, as our measure of pollution, we will consider both.\n\nOur measure of ozone is a quarterly mean of daily maximum levels in\nparts per million. In terms of PM, as a general rule, the smaller\nthe particle the further it travels in the atmosphere, the longer\nit remains suspended in the atmosphere, and the more risk it poses\nto human health. PM that measure less than 2.5 micrometer can be readily\ninhaled, and thus, potentially pose increased health risks. The variable\nPM2.5 is a quarterly average of daily averages and is measured in\nmicrograms\/cubic meter. For each airport in our sample, we take an\naverage (weighted by distance to the airport) of the data from all\nair monitoring stations within a 10km radius. The top panel of Table\n\\ref{tab:marketsumstats} shows the summary statistics of the pollution\nmeasures.\n\n\\begin{table}[t!]\n\\caption{Market-level Summary Statistics}\n\\label{tab:marketsumstats} \\centering{}%\n\\begin{tabular}{lcc}\n\\hline \n\\rule{0ex}{2.5ex} & Mean & Std. Dev.\\tabularnewline\n\\hline \n\\rule{0ex}{2.5ex}Pollution & & \\tabularnewline\n\\hspace{2ex}Ozone ($\\text{O}_{3}$) & .0477 & .0056\\tabularnewline\n\\hspace{2ex}Particulate matter (PM2.5) & 8.3881 & 2.5287 \\tabularnewline\n\\rule{0ex}{2.5ex}Other controls & & \\tabularnewline\n\\hspace{2ex}Market size (pop.) 
& 2307187.8 & 1925533.4\\tabularnewline\n\\hspace{2ex}Income (per capita) & 34281.6 & 4185.5\\tabularnewline\n\\rule{0ex}{2.5ex}\\# of markets & 181,095 & \\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\textbf{Other Market-Level Controls.} We also include in our analysis\nmarket-level covariates that may affect both market structure and\npollution levels. In particular, we construct a measure of market\nsize by computing the (geometric) mean of the MSA populations at the\nmarket endpoints and a measure of economic activity by computing the\naverage per capita income at the market endpoints, using data from\nthe Regional Economic Accounts of the Bureau of Economic Analysis.\n\nFinally, as we mentioned in Section \\ref{subsec:Partial-Identification},\nhaving access to data on a variable that affects pollution but is\nexcluded from the airline participation decisions can greatly help\nin calculating the bounds of the ATE. Therefore, we construct a variable\nthat measures the economic activity of pollution related industries\n(manufacturing, construction, and transportation other than air transportation)\nin a given market (MSA) as a fraction of total economic activity in\nthat market, again, using data from the Regional Economic Accounts\nof the Bureau of Economic Analysis.\nOur implicit assumption is as follows. The size of the market, among\nother things, determines whether a firm might enter it, but not the\ntype of economic activity in the cities.\nThe idea is that conditional on the market GDP, a market with a higher\nshare of polluting industries will have a higher level of pollution\nbut this share would not affect the airline market structure.\n\n\\subsection{Estimation and Results}\n\n\nTo simplify the estimation, we discretize all continuous variables\ninto binary variables (taking a value of 0 (1) if the corresponding\ncontinuous variable is below (above) its median). Using the notation\nfrom Section \\ref{subsec:Model}, let the elements of the treatment\nvector $\\boldsymbol{d}=(d_{\\text{DL}},d_{\\text{AA}},d_{\\text{UA}},d_{\\text{WN}},d_{\\text{med}},d_{\\text{low}})$\nbe either 0 or 1, indicating whether each firm is active in the market.\nWe compute the upper and lower bounds on the ATE using the result\nfrom Theorem \\ref{thm:main} and the fact that our $Y$ variable is\nbinary. 
Specifically, given two treatment vectors $\\boldsymbol{d}$\nand $\\tilde{\\boldsymbol{d}}$ we can bound the ATE \n\\begin{align*}\nL(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w) & \\leq E[Y(\\boldsymbol{d})-Y(\\tilde{\\boldsymbol{d}})|x,w]\\leq U(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w)\n\\end{align*}\nwhere the upper bound can be characterized by \n\\begin{align*}\nU(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w) & \\equiv\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{z},x,w]+\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}^{j}\\backslash\\{\\boldsymbol{d}\\}}\\Pr[\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{Z}=\\boldsymbol{z},W=w]\\\\\n & \\quad+\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}^{<}(\\boldsymbol{d})\\cup\\mathcal{D}^{>}(\\boldsymbol{d})}\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{Z}=\\boldsymbol{z},X=x'(\\boldsymbol{d}'),W=w]\\\\\n & \\quad-\\text{Pr}[Y=1,\\boldsymbol{D}=\\tilde{\\boldsymbol{d}}|\\boldsymbol{Z}=\\boldsymbol{z},X=x,W=w]\\\\\n & \\quad-\\sum_{\\boldsymbol{d}''\\in\\mathcal{D}^{<}(\\tilde{\\boldsymbol{d}})\\cup\\mathcal{D}^{>}(\\tilde{\\boldsymbol{d}})}\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}''|\\boldsymbol{Z}=\\boldsymbol{z},X=x''(\\boldsymbol{d}''),W=w]\n\\end{align*}\nfor every $\\boldsymbol{z}$, $x'(\\boldsymbol{d}')\\in\\mathcal{X}_{\\boldsymbol{d}}^{U}(x;\\boldsymbol{d}')$\nfor $\\boldsymbol{d}'\\neq\\boldsymbol{d}$, and $x''(\\boldsymbol{d}'')\\in\\mathcal{X}_{\\tilde{\\boldsymbol{d}}}^{L}(x;\\boldsymbol{d}'')$\nfor $\\boldsymbol{d}''\\neq\\tilde{\\boldsymbol{d}}$. Similarly, the\nlower bound can be characterized by \n\\begin{align*}\nL(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w) & \\equiv\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}|\\boldsymbol{Z}=\\boldsymbol{z},X=x,W=w]\\\\\n & \\quad+\\sum_{\\boldsymbol{d}'\\in\\mathcal{D}^{<}(\\boldsymbol{d})\\cup\\mathcal{D}^{>}(\\boldsymbol{d})}\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{Z}=\\boldsymbol{z},X=x'(\\boldsymbol{d}'),W=w]\\\\\n & \\quad-\\text{Pr}[Y=1,\\boldsymbol{D}=\\tilde{\\boldsymbol{d}}|\\boldsymbol{Z}=\\boldsymbol{z},X=x,W=w]-\\sum_{\\boldsymbol{d}''\\in\\mathcal{D}^{j}\\backslash\\{\\tilde{\\boldsymbol{d}}\\}}\\Pr[\\boldsymbol{D}=\\boldsymbol{d}''|\\boldsymbol{Z}=\\boldsymbol{z},W=w]\\\\\n & \\quad-\\sum_{\\boldsymbol{d}''\\in\\mathcal{D}^{<}(\\tilde{\\boldsymbol{d}})\\cup\\mathcal{D}^{>}(\\tilde{\\boldsymbol{d}})}\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}''|\\boldsymbol{Z}=\\boldsymbol{z},X=x''(\\boldsymbol{d}''),W=w]\n\\end{align*}\nfor every $\\boldsymbol{z}$, $x'(\\boldsymbol{d}')\\in\\mathcal{X}_{\\boldsymbol{d}}^{L}(x;\\boldsymbol{d}')$\nfor $\\boldsymbol{d}'\\neq\\boldsymbol{d}$, and $x''(\\boldsymbol{d}'')\\in\\mathcal{X}_{\\tilde{\\boldsymbol{d}}}^{U}(x;\\boldsymbol{d}'')$\nfor $\\boldsymbol{d}''\\neq\\tilde{\\boldsymbol{d}}$. We estimate the\npopulation objects above using their sample counterparts. We experimented\nwith both measures of pollution discussed earlier and obtain qualitatively\nand quantitatively similar results in all cases, which is not surprising\ngiven that the two pollution measures are highly correlated. In order\nto save space, we only show results using PM2.5 as our outcome variable.\nWe also experimented with several specifications of the covariates,\n$\\boldsymbol{X}$ and $\\boldsymbol{W}$, and instruments, $\\boldsymbol{Z}$.\nIn particular, we tried different discretizations of each variable\n(including allowing for more than two points in their supports and\ndifferent cutoffs). 
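All of the population objects entering $U(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w)$ and $L(\\boldsymbol{d},\\tilde{\\boldsymbol{d}};x,w)$ are conditional cell probabilities, so with the discretized variables they reduce to simple frequency estimators, and the optimization over $\\boldsymbol{z}$ and over the admissible $x'(\\boldsymbol{d}')$ and $x''(\\boldsymbol{d}'')$ becomes a minimum or maximum over finitely many cells. A minimal sketch of such a frequency estimator (column names and data layout are hypothetical) is:
\\begin{verbatim}
import pandas as pd

def cell_prob(df, y_val=None, d=None, z=None, x=None, w=None):
    """Frequency estimate of Pr[Y=y_val, D=d | Z=z, X=x, W=w]; arguments left
    as None are not conditioned on (or not included in the event)."""
    cond = pd.Series(True, index=df.index)
    if z is not None:
        cond &= df["Z"].apply(lambda t: t == z)
    if x is not None:
        cond &= (df["X"] == x)
    if w is not None:
        cond &= (df["W"] == w)
    cell = df.loc[cond]
    if len(cell) == 0:
        return float("nan")
    event = pd.Series(True, index=cell.index)
    if d is not None:
        event &= cell["D"].apply(lambda t: t == d)
    if y_val is not None:
        event &= (cell["Y"] == y_val)
    return event.mean()
\\end{verbatim}
Each term in the displayed bounds is one such call; for instance, $\\text{Pr}[Y=1,\\boldsymbol{D}=\\boldsymbol{d}'|\\boldsymbol{Z}=\\boldsymbol{z},X=x'(\\boldsymbol{d}'),W=w]$ corresponds to \\texttt{cell\\_prob(df, 1, d\\_prime, z, x\\_prime, w)}.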
Clearly, there is a limit to how finely we can\ncut the data even with a large sample size such as ours. The coarsest\ndiscretization occurs when each covariate (and instrument) is binary\nand it seems to produce reasonable results; hence, we stick with this\ndiscretization in all of our exercises. Again, aiming at the most\nparsimonious model, and after some experimentation, we obtained reasonable\nresults when both $\boldsymbol{X}$ and $\boldsymbol{W}$ are scalars\n(share of pollution related industries in the market and total GDP\nin the market, respectively).\n\nWe also compute confidence sets by deriving unconditional moment inequalities\nfrom our conditional moment inequalities and implementing the Generalized\nMoment Selection test proposed by \citet{as2010}. The confidence\nsets are obtained by inverting the test.\footnote{For details of this procedure, see \citet{dm2016}.}\n\n\begin{figure*}[!t]\n\begin{centering}\n\includegraphics[scale=0.7]{fig2_all_cs_monop_CROP.pdf}\caption{Effect of a Monopolistic Market Structure}\n\label{fig:emp_monop} \n\par\end{centering}\n\medskip{}\n\n\begin{small} This plot shows the ATEs of a change in market structure\nfrom no airline serving a market to a monopolist serving it. The solid\nblack intervals are our estimates of the identified sets and the thin\nred lines are the 95\% confidence sets. \end{small} \n\end{figure*}\n\n\textbf{Monopoly Effects.} Here we examine a very simple ATE of a\nchange in market structure from no airline serving a market to a monopolist\nserving it. Intuitively, we want to understand the change in the probability\nof being a high-pollution market when an airline starts operating\nin it. Recall that we allow each firm to have different effects on\npollution; hence, we estimate the effects of each one of the six firms\nin our data becoming a monopolist. Thus, we are interested in the\nATEs of the form \n\[\nE[Y(\boldsymbol{d}_{\text{monop}})-Y(\boldsymbol{d}_{\text{noserv}})|X,W]\n\]\nwhere $\boldsymbol{d}_{\text{monop}}$ is one of the six vectors in\nwhich only one element is 1 and the rest are 0's, and $\boldsymbol{d}_{\text{noserv}}$\nis a vector of all 0's. The results are shown in Figure \ref{fig:emp_monop},\nwhere the solid black intervals are our estimates of the identified\nsets and the thin red lines are the 95\% confidence sets. We see that\nall ATEs are positive and statistically significantly different from\n0, except for those of the medium-size carriers. While there are no major differences\nin the effects of the leader carriers, with the exception of Delta,\nwhich seems to induce a larger increase in the probability of high\npollution, the medium and low-cost carriers induce a smaller effect.\n\n\begin{figure*}[!t]\n\begin{centering}\n\includegraphics[scale=0.7]{fig1_all_cs_num_entrants_CROP.pdf}\caption{Total Market Structure Effect}\n\label{fig:emp_numentrants} \n\par\end{centering}\n\medskip{}\n\n\begin{small} This plot shows the ATEs of the airline industry under\nall possible market configurations. The solid black intervals are\nour estimates of the identified sets and the thin red lines are the\n95\% confidence sets. The bars in each cluster correspond to all possible\nmarket configurations, respectively. \end{small} \n\end{figure*}\n\n\textbf{Total Market Structure Effect.} We now turn to our second\nset of exercises. Here, we quantify the effect of the airline industry\non the likelihood of a market having high levels of pollution. 
To\ndo so, we estimate ATEs of the form \n\[\nE[Y(\boldsymbol{d})-Y(\boldsymbol{d}_{\text{noserv}})|X,W]\n\]\nfor all potential market configurations $\boldsymbol{d}$, and where,\nas before, $\boldsymbol{d}_{\text{noserv}}$ is a vector of all 0's.\nFigure \ref{fig:emp_numentrants} depicts the results. The left-most\nset of intervals corresponds to the 6 different monopolistic market\nstructures, and, by construction, coincides with those from Figure \ref{fig:emp_monop}.\nThe next set corresponds to all possible duopolistic structures, of which\nthere are 15 possibilities, and so on. Not surprisingly, we observe that\nthe effect on the probability of being a high-pollution market is\nincreasing in the number of firms operating in the market. More interesting\nis the non-linearity of the effect: the effect increases at a decreasing\nrate. This would be consistent with a model in which firms \emph{accommodate}\nnew entrants by decreasing their frequency, which is analogous to\nthe prediction of a Cournot competition model, as we increase the\nnumber of firms. To further investigate this point, in the next set\nof exercises, we examine the effect of one firm as we change the competition\nit faces.\n\n\begin{figure*}[!t]\n\begin{centering}\n\includegraphics[scale=0.7]{fig3_all_cs_compet_effect_DL_CROP.pdf}\caption{Marginal Effect of Delta under Different Market Structures}\n\label{fig:emp_competDL} \n\par\end{centering}\n\medskip{}\n\n\begin{small} This plot shows the ATEs of Delta entering the market\ngiven all possible rival market configurations. The solid black intervals\nare our estimates of the identified sets and the thin red lines are\nthe 95\% confidence sets. The bars in each cluster correspond to all\npossible market configurations, respectively. \end{small} \n\end{figure*}\n\n\textbf{Marginal Carrier Effect.} In our last set of exercises, we\nare interested in investigating how the marginal effect (i.e., the\neffect of introducing one more firm into the market) changes under\ndifferent configurations of the market structure. Say we are interested\nin the effect of Delta entering the market on pollution, given that\nthe current market structure (excluding Delta) is $\boldsymbol{d}_{\text{--DL}}=(d_{\text{AA}},d_{\text{UA}},d_{\text{WN}},d_{\text{med}},d_{\text{low}})$.\nThen, we want to estimate \n\[\nE[Y(1,\boldsymbol{d}_{\text{--DL}})-Y(0,\boldsymbol{d}_{\text{--DL}})|X,W].\n\]\n\nFigure \ref{fig:emp_competDL} shows the identified sets and confidence\nsets of the marginal effect of Delta on the probability of high pollution\nunder all possible market configurations for Delta's rivals. We obtain\nqualitatively similar results when estimating the marginal effects\nof the other five carriers, and hence, we omit the graphs to save\nspace. In the Figure, the left-most exercise is the effect of Delta\nas a monopolist, and coincides, by construction, with the left-most\nexercise in Figure \ref{fig:emp_monop}. The second exercise (from\nthe left) corresponds to the additional effect of Delta on pollution\nwhen there is already one firm operating in the market, which yields\nfive different possibilities. The next exercise shows the effect of\nDelta when there are two firms already operating in the market, yielding\n10 possibilities, and so on. Again, the estimated marginal ATEs in\nall cases are positive and statistically significant. 
Interestingly,\nalthough we cannot entirely reject the null hypothesis that all the\neffects are the same, it seems that the marginal effect of Delta is\ndecreasing in the number of rivals it faces. Intuitively, this suggests\na situation in which Delta enters the market and operates with a frequency\nthat is decreasing with the number of rivals (again, as we would expect\nin a Cournot competition model) and is consistent with the findings\nin our previous set of exercises.\n\nThe conclusions from the \emph{total market} and \emph{marginal} ATEs\nare also interesting from a policy perspective. For example, a merger\nof two airlines in which duplicate routes are eliminated would imply\na decrease in total pollution in the affected markets, but by less\nthan what one would have naively anticipated from removing one airline\nwhile keeping everything else constant.\footnote{Note, however, that to the extent that a merger alters the way the merged firm behaves post entry, the treatment effects we estimate will not be informative. In other words, our model can only speak to the effects of a merger on pollution that operate through entry.} In other words, there are\ntwo effects of removing an airline from a market. The first is a direct\neffect: pollution decreases by the amount of pollution emitted by the carrier\nthat is no longer present in the market. However, the remaining firms\nin the market will react strategically to the new market structure.\nIn our exercises, we find that this indirect effect implies an increase\nin pollution. The overall effect is a net decrease in pollution. Moreover,\ngiven the non-linearities of the ATEs we estimate, it appears that the\noverall effect, while negative, might be negligible in markets with\nfour or more competitors. While it is unclear that merger analysis, which is typically concerned\nwith price increases post-merger or cost savings of the merging firms,\nshould also consider externalities such as pollution, (social) welfare\nanalysis should. Hence, our findings may serve as guidance to policy\ndiscussion on air traffic regulation.\n\n\medskip{}\n\n \bibliographystyle{ecta}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhmqv b/data_all_eng_slimpj/shuffled/split2/finalzzhmqv new file mode 100644 index 0000000000000000000000000000000000000000..e8f7d7c0ec9679ee990ef6e8551b6f27398a4650 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhmqv @@ -0,0 +1,5 @@ +{"text":"\section{Introduction}\nThe COVID-19 pandemic has exacerbated the already poor educational experiences of millions of students in Africa who were grappling with educational challenges like poor access to computers, the internet, and teachers. In 2018, the average student-teacher ratio in Sub-Saharan Africa was 35:1, which is high compared to 14:1 in Europe \cite{UNESCO2020}. In this context, students struggle to get answers to their questions. Hence, offering quick and accurate answers, outside of the classroom, could improve their overall learning experience. However, it is difficult to scale this support with human teachers.\n\nIn 2020, we developed Kwame \cite{boateng2021b}, a bilingual AI teaching assistant that provides answers to students' coding questions in English and French for SuaCode, a smartphone-based online coding course \cite{boateng2019,boateng2021}. 
Kwame is a deep learning-based question answering system that finds the paragraph most semantically similar to the question via cosine similarity with a Sentence-BERT model. We extended Kwame to work for science education and deployed it as a web app. Specifically, Kwame for Science \\footnote{\\href{http:\/\/kwame.ai\/}{http:\/\/kwame.ai\/}} answers questions of students based on the Integrated Science subject of the West African Senior Secondary Certificate Examination (WASSCE). This is a core subject that covers various aspects of science such as biology, chemistry, physics, earth science, and agricultural science. It is mandatory for senior high school students in the West African Education Council (WAEC) member countries (Ghana, Nigeria, Sierra Leone, Liberia, and The Gambia).\n\nThere are virtual teaching assistants (TA) such as Jill Watson \\cite{goel2016,goel2020}, Rexy \\cite{benedetto2019}, and a physics course TA \\cite{zylich2020} and Curio SmartChat (for K-12 science) \\cite{raamadhurai2019} (see \\cite{boateng2020} for a detailed description of related work). These works are focused on answering logistical questions, except Curio SmartChat. In comparison to Curio SmartChat which is the closest work to ours, our work uses a state-of-the-art language model (Sentence-BERT) relative to theirs (Universal Sentence Encoder). Also, our work is the first to be developed and deployed in the context of high school science education in West Africa.\n\n\\section{Kwame for Science System Architecture}\nKwame for Science is a Sentence-BERT-based question-answering web app that displays 3 paragraphs as answers along with a confidence score which represents the similarity score in response to science questions (Figure \\ref{fig:kwame4science}). Additionally, it displays the top 5 related past exam questions and their answers in addition to the 3 paragraphs. We used a Sentence-BERT (SBERT) model that was pretrained on a large and diverse set of question-answer pairs. We used the SBERT model as it was with plans for fine-tuning after real-world data collection, especially since exploratory evaluation for our science use case showed it had decent performance.\n\nWhen a user types a question in the web app, our system computes an embedding of the question using the SBERT model. Next, it computes cosine similarity scores with a bank of answers (which are paragraphs from our knowledge source), retrieves, and returns the top 3 answers along with a confidence score and any figures or images referenced in that paragraph to the web app. Additionally, it computes cosine similarity scores with a bank of past exam questions, retrieves, and returns the top 5 related questions and their answers, along with confidence scores. The web app then displays the answers and the related past exam questions that are above a preset similarity score threshold. If no answer is above the threshold, a message is shown saying the question could not be answered using the knowledge source of that subject. We precomputed embeddings for fast real-time retrieval and saved them as indices in ElasticSearch which we hosted on Google Cloud Platform. 
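\n\nAs an illustration of this retrieval step, the following is a minimal sketch using the \texttt{sentence-transformers} library. The particular pretrained checkpoint, the example paragraphs and the similarity threshold shown here are assumptions for illustration, not the deployed configuration.\n\begin{verbatim}\nfrom sentence_transformers import SentenceTransformer, util\n\n# a publicly available SBERT checkpoint trained on question-answer\n# pairs (illustrative choice)\nmodel = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')\n\nparagraphs = [\n    'Photosynthesis is the process by which plants use sunlight, '\n    'water and carbon dioxide to make their own food.',\n    'An atom consists of a nucleus of protons and neutrons '\n    'surrounded by electrons.',\n]  # in deployment these come from the indexed knowledge source\ncorpus_emb = model.encode(paragraphs, convert_to_tensor=True)\n\ndef answer(question, top_k=3, threshold=0.45):\n    q_emb = model.encode(question, convert_to_tensor=True)\n    hits = util.semantic_search(q_emb, corpus_emb, top_k=top_k)[0]\n    # keep only paragraphs whose cosine-similarity confidence\n    # passes the preset threshold\n    return [(paragraphs[h['corpus_id']], h['score'])\n            for h in hits if h['score'] >= threshold]\n\nprint(answer('How do plants make their own food?'))\n\end{verbatim}\nThe same pattern is applied to the bank of past exam questions, except that the top 5 related questions and their answers are returned.\n\n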
\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{kwame4science.png}\n\\caption{Screenshots of Kwame for Science} \n\\label{fig:kwame4science}\n\\end{figure}\n\n\n\\section{Dataset Curation and Preprocessing}\nGiven that our goal was for Kwame to provide answers based on the Integrated Science subject of the WASSCE exam, our training data and knowledge source had to cover the topics in the WASSCE Integrated Science curriculum. We sought to use one of the approved textbooks in Ghana. Unfortunately, their copyrights did not permit such use and the publishers were unwilling to partner with us. Consequently, we searched for free and open-source books and datasets that fulfilled our needs. We came across a middle school science dataset \u2014 Textbook Questions Answering (TQA) \\cite{kembhavi2017} which was curated from the free and open-source textbook, CK-12. Our exploration of the dataset revealed that though it covered several of the WASSCE Integrated Science topics, it lacked others, particularly those related to agricultural science. Consequently, we additionally used a dataset based on Simple Wikipedia to cover those gaps. We used Simple Wikipedia since its explanations were simple and better suited for middle school and high school students compared to regular Wikipedia.\n\nWe parsed the JSON files of the dataset into paragraphs. We also extracted figures that were referenced in the paragraphs so they could be returned to students along with the answers. We then split the paragraphs into groups of 3 sentences, computed embeddings, and indexed them using ElasticSearch to enable fast retrieval and run time. These constituted the answers returned for questions. Furthermore, we augmented our question-answering with curriculum-specific content. In particular, we created question-answer pairs using WASSCE questions that cover exams from 2000 to 2020. The exam has three parts, objectives (multiple-choice), theory, and practicals. Similar to the paragraphs, we computed embeddings of the questions and indexed them using ElasticSearch. These constituted the related past questions (with answers) returned when a question is asked.\n\n\\section{Preliminary Evaluation and Results}\nWe launched the web app in beta on 10th June 2022. Users could provide feedback by upvoting or downvoting answers in response to the question \"Was this helpful?.\" To evaluate Kwame for Science, we used the metrics top 1 and top 3 accuracies. Top 1 accuracy quantifies performance assuming only one answer was returned and voted on. Top 3 accuracy refers to the performance where for each question that received a vote, at least one answer was rated as helpful out of the 3 answers that were returned. The statistics for the deployment between 10th June 2022 and 27th June 2022 (2.5 weeks) are 190 users across 11 countries (6 in Africa), 433 questions with the metrics 71.8\\% top 1 accuracy (n=117 answers), and 87.5\\% top 3 accuracy (n=56 questions). The top 3 accuracy result is good, showing that Kwame for Science has a high chance of giving at least one useful answer among the 3. Some challenging cases occurred when there were typos in the spelling of scientific words and the questions were related to topics outside the scope of the knowledge source. 
Also, some unhelpful answers were cases the returned paragraph was incomplete due to issues with the dataset.\n\n\section{Conclusion}\nIn this work, we developed and evaluated Kwame for Science which provides instant answers to the Science questions of students across West Africa. Our future work will fine-tune the SBERT model to improve its accuracy. Also, we will make Kwame for Science available in local languages across Africa, and available via offline channels such as SMS, USSD, and toll-free calling. Kwame for Science will enable the delivery of scalable, cost-effective, and quality remote education to millions of people across Africa. \n
\section{Acknowledgement}\nThis work was supported with grants from ETH for Development (ETH4D) and the MTEC Foundation, both at ETH Zurich.\n\n\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\label{sec:intro}\n\n\n\n\n\n\n\n\n\nMachine learning models, especially neural networks, are becoming\nubiquitous in various real-life applications. For example, they are used in medical diagnosis~\cite{kononenko2001machine}, self-driving cars~\cite{bojarski2016end} and criminal sentencing~\cite{compas2016data}. \nMeanwhile, more and more attention has been paid to the fairness issues of these machine learning models~\cite{verma2018fairness,angell2018themis,ghosh2020justicia,zhang2020white,tse2021adf,ruoss2020learning,salimi2019interventional,buolamwini2018gender,galhotra2017fairness} as discrimination has been discovered in many applications~\cite{gianfrancesco2018potential,ruoss2020learning,udeshi2018automated,zafar2017fairness}. For instance, machine learning models were used to predict recidivism risk for suspected criminals by computing the likelihood of committing a future crime~\cite{compas2016data}. \nAnalysis results show that the prediction model was more likely to mislabel black defendants as high recidivism risk and mislabel white defendants as low risk. 
To minimize such ethical risks, it is crucial to systematically test the fairness of machine learning models, especially neural networks where such issues are typically `hidden' due to the lack of interpretability~\\cite{pei2017deepxplore, tian2018deeptest}.\n\nRecently, multiple efforts have been made in the testing community to first search for (and then guide mitigating) discrimination of machine learning models spanning from traditional ones to neural networks \\cite{zhang2020white,tse2021adf,udeshi2018automated,zafar2017fairness,galhotra2017fairness}. For instance, state-of-the-art fairness testing work utilizes gradient information of the input sample to accelerate search\/generation of discriminative samples~\\cite{zhang2020white,tse2021adf,sg}. Despite being effective, existing research has mostly focused on \\emph{individual discrimination}, i.e., identifying or generating individual discriminatory instances of a machine learning model~\\cite{zhang2020white,tse2021adf,udeshi2018automated,zafar2017fairness,galhotra2017fairness}.\n\\emph{Group discrimination}, which characterizes a model's discrimination against a certain group (whose sensitive features\\footnote{we use ``feature''\/``attribute'' interchangeably} satisfy certain conditions), is another concerning type of discrimination, which has been widely studied~\\cite{galhotra2017fairness, zafar2017fairness, tramer2017fairtest, kleinberg2016inherent}. However, testing against group discrimination has been much less studied so far. Compared to testing of individual discrimination, testing a machine learning model against group discrimination imposes new challenges. First, it is highly non-trivial to effectively enumerate all combinations of sensitive features (especially when the sensitive features have multiple or even continuous values). Second, \\emph{group discrimination can be hidden, i.e., there might be `subtle' group discrimination against\nthose groups whose sensitive features satisfy certain unknown conditions, e.g., male-white of certain age group}. While a prior work~\\cite{kearns2018preventing} similarly addresses discrimination against subgroups defined over conjunctions of protected features in the learning phase, we propose an automatic testing approach to systematically identify such subgroups using interpretable rules and measure such discrimination before model deployment. \n\n\n\\begin{figure*}[t] \\small\n\\centering \n\\includegraphics[width=0.75\\linewidth]{testsgd_overview.pdf}\n\\caption{An Overview of \\tool{TestSGD}.} \n\\label{fig:tool} \n\\end{figure*}\n\nSpecifically, in this work, we develop an effective method to systematically \\emph{test} a given machine learning model against such hidden \\emph{s}ubtle \\emph{g}roup \\emph{d}iscrimination, namely \\tool{TestSGD}. An overview of \\tool{TestSGD } is shown in Figure~\\ref{fig:tool}, which consists of three main phases: 1) candidate rule set generation, 2) group fairness identification, and 3) discrimination mitigation. In the first phase, \\tool{TestSGD } will automatically generate a candidate set of rules concerning multiple sensitive features. Note that we only consider frequent rule set with sufficient support (which characterize a sufficiently large group). In the second phase, the rule set \\emph{R} effectively partitions the samples into two groups, i.e., $samples_r$ which satisfies the rules and $samples_{\\neg r}$ which does not. 
The key intuition behind is to develop effective criteria to automatically mine interpretable rules which are practical and relevant in the real-world applications. Then we measure if the model suffers from group discrimination (against the groups partitioned by the rule set) by measuring the group fairness score. \nNote that, solely relying on the training samples might not be enough to accurately measure such a score. We thus propose to apply a standard data augmentation method, i.e., imposing minor perturbation on the available seed samples to generate new samples, and obtain an accurate estimation of the group fairness score (with bounded errors). The testing results of the first two phases are thus the identified subtle group discrimination (characterized by the rules) and their corresponding group fairness score (with bounded errors). For example, we test the model trained on the \\textbf{Crime}~\\cite{crime2009dataset} dataset which predicts whether the violent crimes per population in a specific community is high. The interpretable rule set found by \\tool{TestSGD } shows that it discriminates against communities in which the percentage of Caucasian population is lower than 80\\% and the percentage of females who are divorced is higher than 40\\%, with a 60.7\\% group fairness score, i.e., it is 60.7\\% more likely to predict high crime rate for such a community. In the last phase (optional depending whether the identified discrimination is considered to be harmful), \\tool{TestSGD } leverages the testing results to mitigate the identified subtle group discrimination. That is, to improve group fairness, we generate new samples according to the condition under which discrimination exists and retrain the original model. \n\n\n\n\n\n \n \\begin{comment}\n Our method works for both models trained on structured data (such as feed-forward neural networks trained on structured dataset) as well as models trained on text data (such as recurrent neural networks for NLP tasks). Furthermore, we apply perturbation-based and replacement-based generation for structured data and textual data generation respectively so that our method can maintain the original data distribution. \n \\end{comment}\n\n\\tool{TestSGD } is implemented as an open-source software~\\cite{website}. We evaluate our \\tool{TestSGD } on 8 models trained on widely adopted datasets including both structured data and text data. The experimental results show \\tool{TestSGD } is effective in identifying and measuring subtle group discrimination. The results also show that \\emph{subtle group discrimination does exist in all of these 8 models and sometimes to a surprising level which has never been revealed before}. For instance, the model trained on the \\textbf{COMPAS}~\\cite{compas2016data} dataset is much less likely to predict Hispanic males older than 40 years old as criminals with high recidivism risk. \nFurthermore, our experiments show that the testing-guided discrimination mitigation is useful.\nThat is, we can mitigate identified subtle group discrimination for all models without sacrificing the accuracy. \n\nIn a nutshell, we summarize our main contributions as follows.\n\\begin{itemize} [leftmargin=*]\n \\item We propose a method to automatically generate an interpretable rule set to identify \n \n subtle group discrimination in neural networks, applicable for both structured and text data;\n \\item We develop a theoretical bound for accurately sampling and estimating the group fairness score against two groups. 
\n \n \item We show that we can generate samples systematically based on the interpretable rule set to mitigate subtle group discrimination.\n\end{itemize}\n\nThe remainder of this paper is structured as follows. Section~\ref{sec:background} provides the background on input types and fairness definitions. Section~\ref{sec:prob_definition} defines our problem.\nWe present the proposed \tool{TestSGD } framework in Section \ref{sec:methodology}, which is evaluated in Section \ref{sec:evaluation}. Lastly, we review related work in Section~\ref{sec:related work} and conclude in Section~\ref{sec:conclusion}.\n\n\section{Background} \label{sec:background}\nOur goal is to develop a black-box method to identify subtle group discrimination in a user-provided neural network model. Our method supports neural networks trained on two different kinds of data, i.e., structured data and text data. Our method does not require the inner details of the neural network. That is, the neural network is viewed as a function $M: {R}^{p} \to {R}^{q}$ which maps an input $x\in {R}^{p}$ to an output $y \in {R}^{q}$. Furthermore, we focus on deep feed-forward neural networks and recurrent neural networks. \n\n\subsection{Input Type}\nFirst of all, we define the two different kinds of data, i.e., structured data and text data, and their corresponding sensitive features, which are used to evaluate the discrimination of the neural networks. \n\nA sample of a structured dataset is composed of a set of features, i.e., a feature vector. A feature can be categorical (i.e., with a fixed set of values) or continuous (i.e., with a certain value range). We define structured data and the corresponding sensitive features as follows. \n\begin{definition}[Structured Data] \label{def:structured data}\nA structured data instance $x$ contains $N$ features $\{x_1, x_2, \cdots, x_N\}$, where $\forall x_i, x_i \in L_i$, where $L_i$ is a set of feature values. We write $S=\{s_1, s_2, \cdots, s_n\}$ to denote the set of sensitive features in $x$, where $n < N$.\n\end{definition}\n\nSince it is infeasible to enumerate all possible samples, we adopt a sampling approach which guarantees that the estimated group fairness score $f$ satisfies $prob(|f-\hat{f}| > \epsilon) < 1-\delta$, where $\hat{f}$ is the real group fairness score over all possible samples. \n\nAlgorithm~\ref{alg:score} shows how we measure the group fairness score. We maintain two sets of samples, i.e., $samples_r$ which contains samples satisfying $R$ and $samples_{\neg r}$ which contains samples not satisfying $R$. At line 1, we set both $samples_r$ and $samples_{\neg r}$ to be empty, the error margin $\epsilon$ to be infinity and the number of generated samples to 0. During the loop from lines 2 to 16, we keep generating samples and calculating the group fairness score until the error margin $\epsilon$ is no more than the given error threshold $error\_thr$. From lines 3 to 6, we generate new samples for $samples_r$ and $samples_{\neg r}$ respectively using a function $Sample$. We remark that the generated samples should follow the original data distribution (i.e., that of the training dataset). We present details on how we sample on structured and text datasets in the next subsection. \n\nAt line 7, we increase $num$ by 1. After generating a sufficient number of samples, we check the error margin $\epsilon$ from lines 8 to 15. We first calculate the probability of predicting $l$ at lines 9 and 10 for the two sets of samples. Then at line 11, we calculate the error margin $\epsilon$ on the group fairness score. We explain why it is calculated this way below. If $\epsilon$ is less than or equal to $error\_thr$, the stopping criterion is satisfied (as in lines 12 and 13). 
Lastly, at line 17, we return the absolute difference between $\\phi_{r}$ and $\\phi_{\\neg r}$ as the group fairness score. \n\n\\begin{algorithm}[t]\n\\caption{$GroupFairnessScore(D, M, R, sample\\_thr, error\\_thr)$ where $D$ is the training dataset; $M$ is the machine learning model; $R$ is a rule set, $sample\\_thr$ is the number of generated inputs threshold; $error\\_thr$ is error margin threshold}\n\\label{alg:score}\n\\begin{algorithmic}[1]\n\\STATE $samples_r \\gets \\varnothing$, $samples_{\\neg r} \\gets \\varnothing$, $\\epsilon \\gets +\\infty$, $num \\gets 0$\n\\WHILE{$\\epsilon > error\\_thr$}\n\\STATE $x \\gets Sample(D, R)$\n\\STATE $x' \\gets Sample(D, \\neg R)$\n\\STATE $samples_r \\gets samples_r \\cup {x}$\n\\STATE $samples_{\\neg r} \\gets samples_{\\neg r} \\cup {x'}$\n\\STATE $num \\gets num + 1$\n\\IF{$num > sample\\_thr$}\n\\STATE $\\phi_r \\gets \\#\\{i \\in samples_r | M(i)=l\\}\/ num$\n\\STATE $\\phi_{\\neg r} \\gets \\#\\{i \\in samples_{\\neg r} | M(i)=l\\}\/num$\n\\STATE $\\epsilon \\gets z\\times\\sqrt{\\frac{\\phi_r(1-\\phi_r)}{num}} + z\\times\\sqrt{\\frac{\\phi_{\\neg r}(1-\\phi_{\\neg r})}{num}} $\n\\IF{$\\epsilon \\leq error\\_thr$}\n\\STATE break\n\\ENDIF\n\\ENDIF\n\\ENDWHILE\n\\RETURN $f \\gets |\\phi_r-\\phi_{\\neg r}|$\n\\end{algorithmic} \n\\end{algorithm}\n\nIn the above algorithm, we estimate the error margin of the group fairness score based on an estimation of $prob(R,l)$ and $prob(\\neg R,l)$. The complication is that both $prob(R,l)$ and $prob(\\neg R,l)$ carry certain error margin, which may magnify the error margin for the group fairness score. In the following, we prove that line 11 in the above algorithm allows us to conservatively estimate the error margin of the group fairness score.\n\n\\begin{theorem} \\label{theorem:confidence}\nAssume that $\\phi_r$ satisfies the following \t\n\\begin{equation} \\label{equ:pro}\nprob(|\\phi_r-\\hat{\\phi}_r|>\\epsilon_r) < 1-\\delta_r\n\\end{equation}\nwhere $\\epsilon_r$ and $\\delta_r$ are constants. Similarly, $\\phi_{\\neg r}$ satisfies the following.\n\\begin{equation} \\label{equ:unp}\nprob(|\\phi_{\\neg r}-\\hat{\\phi}_{\\neg r}|>\\epsilon_{\\neg r}) < 1-\\delta_{\\neg r}\n\\end{equation}\nThen the following is satisfied. 
\n\\begin{equation} \\label{equ:fair}\nprob(|f-\\hat{f}|>\\epsilon_{r}+\\epsilon_{\\neg r}) < 1-\\delta_{r}\\delta_{\\neg r} \n\\end{equation}\n \\\\\n\\textbf{Proof:} Since $prob(|\\phi_r-\\hat{\\phi_r}|>\\epsilon_r) < 1-\\delta_r$ and $prob(|\\phi_{\\neg r}-\\hat{\\phi_{\\neg r}}|>\\epsilon_{\\neg r}) < 1-\\delta_{\\neg r}$, we have:\n\\begin{equation} \nprob(|\\phi_r-\\hat{\\phi}_r| \\leq \\epsilon_r) \\geq \\delta_r \\nonumber\n\\end{equation}\n\\begin{equation} \nprob(|\\phi_{\\neg r}-\\hat{\\phi}_{\\neg r}| \\leq \\epsilon_{\\neg r}) \\geq \\delta_{\\neg r} \\nonumber\n\\end{equation}\nHence \n\\begin{equation} \n\\begin{aligned}\n&prob(|(\\phi_{r}-\\hat{\\phi_{r}})-(\\phi_{\\neg r}-\\hat{\\phi}_{\\neg r})| \\leq \\epsilon_{r} + \\epsilon_{\\neg r}) \\geq \\\\ \n&prob(|\\phi_r\\!-\\!\\hat{\\phi}_r|\\!\\leq\\!\\epsilon_r) \\cdot prob(|\\phi_{\\neg r}\\!-\\!\\hat{\\phi}_{\\neg r}|\\!\\leq\\!\\epsilon_{\\neg r}) \\geq \\delta_r\\delta_{\\neg r} \\nonumber\n\\end{aligned}\n\\end{equation} \nand \n\\begin{equation} \nprob(|(\\phi_{r}-\\hat{\\phi_{r}})-(\\phi_{\\neg r}-\\hat{\\phi_{\\neg r}})| > \\epsilon_{r} + \\epsilon_{\\neg r}) < 1- \\delta_r\\delta_{\\neg r} \\nonumber\n\\end{equation}\n\\begin{equation} \nprob(|(\\phi_{r}-\\phi_{\\neg r})-(\\hat{\\phi}_{r}-\\hat{\\phi}_{\\neg r})| > \\epsilon_{r} + \\epsilon_{\\neg r}) < 1- \\delta_r\\delta_{\\neg r} \\nonumber\n\\end{equation}\nAccording to Definition~\\ref{def:score}, group fairness score $f=\\phi_{r}-\\phi_{\\neg r}$. Thus \n\\begin{equation} \nprob(|f-\\hat{f}|>\\epsilon_{r}+\\epsilon_{\\neg r}) < 1-\\delta_{r}\\delta_{\\neg r} \\nonumber \n\\end{equation} \\hfill $\\qed$\n\\end{theorem}\nThe above theorem provides a theoretical guarantee on the statistical confidence for the group fairness score estimation. \nThat is, based on the Equation~\\ref{equ:fair}, the fairness level for fairness score $f$ is $\\delta_{r}\\delta_{\\neg r}$ and the margin of error is the sum of two margin of errors as $\\epsilon_{r}+\\epsilon_{\\neg r}$. Each $\\epsilon$ is calculated by:\n\\begin{equation} \\label{equ:error}\n\\epsilon=z\\times\\sqrt{\\frac{\\phi(1-\\phi)}{num}}\n\\end{equation}\nwhere $z$ is the value from the standard normal distribution for a certain confidence level $\\delta$ (e.g., for a 95\\% confidence level, $z=1.96$). So the final margin of error for fairness score $f$ is shown in line 11 of Algorithm~\\ref{alg:score}. \nBased on the result, we derive the stopping criteria, as shown in line 12 and 13 of Algorithm~\\ref{alg:score}. \n\nThe above shows how we compute the group fairness score for one rule set. Given multiple rule sets, we systematically compute the fairness score for each rule set with Algorithm~\\ref{alg:score}, and then rank the rule sets according to the resultant group fairness score. If the group fairness score of certain rule set is more than a given tolerable threshold, we report that discrimination is identified. \\\\\n\n\\noindent \\emph{Example} Take a model trained on the (structured) \\textbf{Census Income} dataset as an example. We fixed the confidence level to 95\\% and the corresponding $z$-value is 1.96. We set the sampling threshold $sample\\_thr$ as 1000 and the error of margin threshold $error\\_thr$ as 0.05. We are given a rule set \n$$\\{gender = Male, race = White, 40 \\leq age < 60\\}$$ \nFirst, we sample 1000 inputs as $samples_{r}$ using $Sample$ function that represents white males who are older than 40 but younger than 60. 
Then we sample another 1000 inputs as $samples_{\\neg r}$ using $Sample$ function that represents the rest individuals. We observe that 283 samples in $samples_{r}$ are labeled as ``True'', while only 91 samples in $samples_{\\neg r}$ are labeled as ``True''. \nSo $\\phi_{r}$ is 28.3\\% and $\\phi_{\\neg r}$ is 9.1\\%. According to Algorithm~\\ref{alg:score}, $\\epsilon_{r}$ is 0.028 and $\\epsilon_{\\neg r}$ is 0.018. So the margin of error $\\epsilon$ for fairness score is 0.046. Since $\\epsilon$ is less than 0.05, we stop sampling. Finally, the group fairness score is computed as 19.2\\% with 90.25\\% confidence. \\hfill $\\qed$\n\n\\subsection{Input Sampling}\\label{sec:generation}\n\n\n\n\nAs discussed above, Algorithm~\\ref{alg:score} requires us to sample inputs with a distribution which is similar to the data distribution of the training dataset. As shown in~\\cite{goodfellow2019research}, modern machine learning models mostly rely on the i.i.d. assumptions. That is, the training and test set are assumed to be generated independently from an identical distribution. It is more likely for machine learning models to predict identically distributed data correctly.\n\nWhile it is impossible to know the actual data distribution, we aim to generate additional samples from a distribution as close as possible to the distribution of the training set. For structured data, instead of generating feature vectors randomly, we generate new samples by adding tiny perturbations on original samples uniformly. The perturbation is added to one randomly selected non-sensitive attribute with randomly selected perturbation direction and the perturbation size is 1 for integer variables or 0.01 for decimal variables. Formally, given the rule set $R$, we first search a seed instance from the dataset $D$ as $seed=\\{x_1, x_2, \\cdots, x_N\\}$, where $\\forall r \\in R.~seed \\vDash r$. Then we randomly select a non-sensitive attribute $x_k$, where $x_k \\notin S$. We perturb $x_k$ as $x_k^{\\prime} = x_k + dir \\cdot s\\_pert$, where $dir \\in [-1, 1]$ and $s\\_pert$ is the perturbation size.\n\nFor text data, we generate new samples by replacing sensitive terms with a different term in the same sensitive term category. For example, when we test the machine learning model trained on the \\textbf{Wikipedia Talk Pages} dataset, given a rule set $\\{``gay\"\\}$, we need to generate additional comments containing the term ``gay''. First, we search all comments containing gender-related sensitive terms such as ``lesbian'' and ``bisexual'', as defined in Table~\\ref{tab:terms}. Then we replace these terms in the original comments with the term ``gay'' to generate new comments. That is, we can generate ``I am a gay'' from an original comment ``I am a lesbian''. The reason why we use text replacement instead of text perturbation, as in the case of structured data, is that perturbing texts with synonyms (as proposed in~\\cite{sato2018interpretable} for adversarial attacks) is ineffective to generate the texts in the desired group. Our text generation method also has the benefit of mitigating the influence of data imbalance which may cause unintended bias~\\cite{dixon2018measuring}. 
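\n\nTo make the replacement-based generation concrete, the following is a minimal sketch of the $Sample$ function for text data. The sensitive term lists shown are a small illustrative subset and the whitespace tokenization is deliberately naive.\n\begin{verbatim}\nimport random\n\n# small illustrative subset of the sensitive term categories\nSENSITIVE = {\n    'gender':   ['gay', 'lesbian', 'bisexual', 'transgender'],\n    'religion': ['taoist', 'protestant', 'buddhist', 'muslim'],\n}\nCATEGORY = {t: c for c, terms in SENSITIVE.items() for t in terms}\n\ndef sample_text(dataset, rule_set):\n    # target term for each category mentioned in the rule set,\n    # e.g. rule_set = {'gay'} gives {'gender': 'gay'}\n    wanted = {CATEGORY[t]: t for t in rule_set}\n    # seed comments containing at least one term of every such category\n    seeds = [c for c in dataset\n             if all(any(t in c.split() for t in SENSITIVE[cat])\n                    for cat in wanted)]\n    seed = random.choice(seeds)\n    # replace matching sensitive terms with the terms in the rule set\n    return ' '.join(wanted.get(CATEGORY.get(w), w) for w in seed.split())\n\ncomments = ['i am a lesbian', 'he is a buddhist editor']\nprint(sample_text(comments, {'gay'}))   # -> 'i am a gay'\n\end{verbatim}\n\n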
Formally, given the rule set $R=\\{r_1, r_2, \\cdots, r_m\\}$, we first search a seed instance from the dataset $D$ as $seed=\\{x_1, x_2, \\cdots, x_N\\}$, where $\\forall r \\in R.~contains(seed,s_r)$, where $s_r$ is the sensitive category referring to $r$ and $contains(d,s_{r})$ is a proposition which is true if and only if $d$ contains at least one term in the category $s_r$. Then we replace the term $x_i$ to term $r_{j}$, for all $r_{j} \\in R$ and $x_i \\in s_{r_{j}}$.\n\n\n\\section{Implementation and Evaluation} \\label{sec:evaluation}\n\nWe have implemented \\tool{TestSGD } as a self-contained software toolkit based on Tensorflow~\\cite{abadi2016tensorflow} with about 6K lines of Python code. \n\\vspace{1mm}\n\n\n\n\\noindent \\emph{Experiment Subjects} Our experiments are based on 8 models trained with the following benchmark datasets. These datasets have been widely used as evaluation subjects in multiple previous studies on fairness~\\cite{zhang2020white, tse2021adf, galhotra2017fairness, dixon2018measuring, ruoss2020learning, ma2020metamorphic}. \n\n\n\\begin{itemize} [leftmargin=*]\n\\item{\\textbf{Census Income}~\\cite{census1996dataset}: \nThe dataset contains more than 30,000 samples and is used to predict whether the income of an adult is above \\$50,000 annually. The attributes $gender$, $race$ and $age$ are sensitive attributes. }\n\\item{\\textbf{Bank Marketing}~\\cite{moro2014data}: \nThe dataset contains 45,000+ samples and is used to train models for predicting whether the client would subscribe a term deposit. Its sensitive attribute is $age$.}\n\\item{\\textbf{German Credit}~\\cite{credit1994dataset}: \nThis is a small dataset with 600 samples. The task is to assess an individual's credit. The sensitive attributes are $gender$ and $age$.}\n\\item{\\textbf{COMPAS}~\\cite{compas2016data}: \nThis dataset contains 7,000+ samples. The task is to predict whether the recidivism risk score for an individual is high. The sensitive attributes are $gender$, $race$ and $age$.}\n\\item{\\textbf{Crime}~\\cite{crime2009dataset}: \nThis dataset contains almost 2,000 data for communities within the US. The task is to predict whether the violent crimes per population in a specific community is high. Since this dataset records population statistics, their sensitive features are shown in multiple attributes with percentage values. Here, we extract all gender\/race\/age related attributes to learn rule sets.}\n\\item{\\textbf{Law School}~\\cite{anthony2003analysis}: \nThis dataset has more than 20,000 application records and is used to predict whether a student passes the bar exam. The attributes, $race$ and $gender$ are sensitive attributes.}\n\\item{\\textbf{Wiki Talk Pages}~\\cite{wulczyn2017ex}: \nThis is a textual dataset containing more than 100,000 Wikipedia TalkPage comments. The task is to predict whether a given comment is toxic. }\n\\item{\\textbf{IMDB}~\\cite{maas-EtAl:2011:ACL-HLT2011}: \nIMDB dataset contains 50,000 movie reviews. The task is to predict whether a given sentence is a positive review.\n}\n\\end{itemize}\n\n\nFor the first six structured datasets, we train a six-layer feed-forward neural network using the exact same configuration as reported in the previous studies~\\cite{zhang2020white,tse2021adf}. For the last two textual datasets, we train a convolutional neural network (CNN) combined with long short-term memory (LSTM). The details of trained models are shown in Table~\\ref{tab:models}. 
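\n\nFor reference, the two kinds of architectures can be sketched as follows; the layer widths, vocabulary size and sequence length below are illustrative assumptions rather than the exact configurations of the prior studies we follow.\n\begin{verbatim}\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\ndef feedforward(input_dim):\n    # six-layer fully-connected classifier for the structured datasets\n    return tf.keras.Sequential([\n        layers.Input(shape=(input_dim,)),\n        layers.Dense(64, activation='relu'),\n        layers.Dense(32, activation='relu'),\n        layers.Dense(16, activation='relu'),\n        layers.Dense(8, activation='relu'),\n        layers.Dense(4, activation='relu'),\n        layers.Dense(1, activation='sigmoid'),\n    ])\n\ndef cnn_lstm(vocab_size=20000, seq_len=200):\n    # CNN combined with an LSTM for the text datasets\n    return tf.keras.Sequential([\n        layers.Input(shape=(seq_len,)),\n        layers.Embedding(vocab_size, 128),\n        layers.Conv1D(64, 5, activation='relu'),\n        layers.MaxPooling1D(4),\n        layers.LSTM(64),\n        layers.Dense(1, activation='sigmoid'),\n    ])\n\end{verbatim}\n\n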
The accuracy of the trained models is expectedly similar to what is reported in the previous studies. Table~\\ref{tab:parameters} shows the value of parameters used in our experiment to run \\tool{TestSGD}.\nAll experiments are conducted on a server running Ubuntu 1804 with 1 Intel Core 3.10GHz CPU, 32GB memory and 2 NVIDIA GV102 GPU. To mitigate the effect of randomness, all the results are the average of 3 runs. \n\nWe aim to answer multiple research questions as follows.\n\\vspace{1mm}\n\n\\begin{table}[t] \\small\n\\caption{Parameters of the Experiments}\n\\begin{tabular}{|c|c|c|}\n\\hline\nParameters & Value & Discription \\\\ \\hline\n$\\theta$ & 5\\% & support threshold \\\\ \\hline\nsample\\_thr & 1000 & sampling threshold \\\\ \\hline\n$\\delta$ & 95\\% & confidence level \\\\ \\hline\nerror\\_thr & 0.05 & error margin threshold \\\\ \\hline\nz & 1.96 & z value \\\\ \\hline\ns\\_pert & 1 & perturbation size for integer variables \\\\ \\hline\ns\\_pert & 0.01 & perturbation size for decimal variables \\\\ \\hline\n\\end{tabular}\n\\label{tab:parameters}\n\\end{table}\n\n\\begin{table}[t]\n\\caption{Dataset and Models of Experiments}\n\\label{tab:models}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Dataset} & \\textbf{Model} & \\textbf{Accuracy} \\\\ \\hline\nCensus Income & Six-layer Fully-connected NN & 86.13\\% \\\\ \\hline\nBank Marketing & Six-layer Fully-connected NN & 91.62\\% \\\\ \\hline\nGerman Credit & Six-layer Fully-connected NN & 100\\% \\\\ \\hline\nCOMPAS & Six-layer Fully-connected NN & 78.99\\% \\\\ \\hline\nCrime & Six-layer Fully-connected NN & 92.52\\% \\\\ \\hline\nLaw School & Six-layer Fully-connected NN & 95.19\\% \\\\ \\hline\nWikipedia Talk Pages & CNN Long Short-term memory network & 93.89\\% \\\\ \\hline\nIMDB & CNN Long Short-term memory network & 86.68\\% \\\\ \\hline\n\\end{tabular}}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\noindent \\emph{RQ1: Is our method effective in identifying subtle group discrimination of a given machine learning model?} \n\\begin{table*}[t]\n\\caption{Rule Sets and Fairness Scores for Neural Networks}\n\\label{tab:results}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{|c|cc|cc|cc|}\n\\hline\n\\multirow{3}{*}{\\textbf{Dataset}} & \\multicolumn{2}{c|}{\\textbf{top 1}} & \\multicolumn{2}{c|}{\\textbf{top 2}} & \\multicolumn{2}{c|}{\\textbf{top 3}} \\\\ \\cline{2-7} \n & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Rule Set}}} & \\textbf{Fairness Score} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Rule Set}}} & \\textbf{Fairness Score} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{Rule Set}}} & \\textbf{Fairness Score} \\\\ \n & \\multicolumn{1}{c|}{} & ($\\phi_{r}, \\phi_{\\neg r}$) & \\multicolumn{1}{c|}{} & ($\\phi_{r}, \\phi_{\\neg r}$) & \\multicolumn{1}{c|}{} & ($\\phi_{r}, \\phi_{\\neg r}$) \\\\ \\hline\n\\multirow{2}{*}{Census Income} & \\multicolumn{1}{c|}{gender=male, 40$\\leq$age\\textless{}80,} & 20.2\\% & \\multicolumn{1}{c|}{gender=male, 40$\\leq$age\\textless{}70,} & 19.4\\% & \\multicolumn{1}{c|}{gender=male, 40$\\leq$age\\textless{}80, race=White,} & 18.4\\% \\\\ \n & \\multicolumn{1}{c|}{race=White or Asian-Pac-Islander} & (29.9\\%, 9.7\\%) & \\multicolumn{1}{c|}{race=White or Amer-Indian-Eskimo} & (28.9\\%, 9.5\\%) & \\multicolumn{1}{c|}{Asian-Pac-Islander or Amer-Indian-Eskimo} & (26.9\\%, 8.5\\%) \\\\ \\hline\n\\multirow{2}{*}{Bank Marketing} & \\multicolumn{1}{c|}{\\multirow{2}{*}{10 $\\leq$ age < 90}} & 38.2\\% & 
\\multicolumn{1}{c|}{\\multirow{2}{*}{10 $\\leq$ age < 70}} & 22.8\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{10 $\\leq$ age < 60}} & 20.5\\% \\\\ \n & \\multicolumn{1}{c|}{} & (3.3\\%, 41.5\\%) & \\multicolumn{1}{c|}{} & (26.6\\%, 3.8\\%) & \\multicolumn{1}{c|}{} & (4.7\\%, 25.2\\%) \\\\ \\hline\n\\multirow{2}{*}{German Credit} & \\multicolumn{1}{c|}{\\multirow{2}{*}{gendre = female, 60$\\leq$age\\textless{}70}} & 21.9\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender = female, 60$\\leq$age\\textless{}80}} & 21.8\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender = male, 40$\\leq$age\\textless{}80}} & 15.5\\% \\\\ \n & \\multicolumn{1}{c|}{} & (72.5\\%, 50.6\\%) & \\multicolumn{1}{c|}{} & (70.5\\%, 48.7\\%) & \\multicolumn{1}{c|}{} & (52.6\\%, 47.1\\%) \\\\ \\hline\n\\multirow{2}{*}{COMPAS} & \\multicolumn{1}{c|}{gender = male, age$\\geq$40,} & 62.4\\% & \\multicolumn{1}{c|}{gender = male, 40$\\leq$age\\textless{}60,} & 62.3\\% & \\multicolumn{1}{c|}{gender = male, 50$\\leq$age\\textless{}60,} & 62.3\\% \\\\\n & \\multicolumn{1}{c|}{race = Hispanic or other race} & (20.7\\%, 83.1\\%) & \\multicolumn{1}{c|}{race = Hispanic or other race} & (20.3\\%, 82.6\\%) & \\multicolumn{1}{c|}{race = Hispanic} & (19.5\\%, 81.8\\%) \\\\ \\hline\n\\multirow{2}{*}{Law School} & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender = male, race = Asian or Black}} & 15.0\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender = female, race = Asian or Black}} & 11.1\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender = female, race = Black}} & 10.2\\% \\\\ \n & \\multicolumn{1}{c|}{} & (84.5\\%, 99.5\\%) & \\multicolumn{1}{c|}{} & (88.8\\%, 99.9\\%) & \\multicolumn{1}{c|}{} & (89.7\\%, 99.9\\%) \\\\ \\hline\n\\multirow{2}{*}{Crime} & \\multicolumn{1}{c|}{\\multirow{2}{*}{FemalePctDiv$\\geq$0.4, racePctWhite$\\leq$0.8}} & 60.7\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{ FemalePctDiv$\\geq$0.5, racePctWhite$\\leq$0.8}} & 59.6\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{ FemalePctDiv$\\geq$0.5, racePctWhite$\\leq$0.6}} & 59.5\\% \\\\ \n & \\multicolumn{1}{c|}{} & (83.8\\%, 23.2\\%) & \\multicolumn{1}{c|}{} & (87.0\\%, 27.4\\%) & \\multicolumn{1}{c|}{} & (94.6\\%, 35.1\\%) \\\\ \\hline\n\\multirow{2}{*}{Wiki Talk Pages} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"gay\", \"taoist\"}} & 6.5\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"gay\", \"protestant\"}} & 5.4\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"gay\", \"african american\"}} & 5.1\\% \\\\ \n & \\multicolumn{1}{c|}{} & (13.0\\%, 6.5\\%) & \\multicolumn{1}{c|}{} & (12.9\\%, 7.5\\%) & \\multicolumn{1}{c|}{} & (12.5\\%, 7.4\\%) \\\\ \\hline\n\\multirow{2}{*}{IMDB} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"european\", \"yong\"}} & 6.6\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"white\", \"older\"}} & 6.6\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{\"lgbtq\"}} & 6.5\\% \\\\ \n & \\multicolumn{1}{c|}{} & (56.0\\%, 49.4\\%) & \\multicolumn{1}{c|}{} & (59.1\\%, 52.6\\%) & \\multicolumn{1}{c|}{} & (7.5\\%, 14.0\\%) \\\\ \\hline\n\\end{tabular}}\n\\end{table*}\nTo answer the question, we systematically apply our approach to the above-mentioned models and measure the results. The results are summarized in Table~\\ref{tab:results}. It shows results on the six models trained on structured data and results on the two models trained on text data. These four columns show datasets, rule sets, group fairness scores and model accuracies respectively. The favorable label is ``True'', the meaning of which can be found in the above introduction on the corresponding dataset. 
Note that for each model, we rank the identified subtle discrimination according to the group fairness score and we report only the top 3 worst cases. \n\nWe can observe that subtle discrimination does exist in these models, which was never revealed in the previous studies~\\cite{zhang2020white,tse2021adf,galhotra2017fairness,dixon2018measuring,ma2020metamorphic}. \nFor instance, the model trained on the \\textbf{Bank Marketing} dataset predicts that only 3.3\\% of the clients who are older than 10 but younger than 90 would subscribe to a term deposit, whilst predicting that 41.5\\% of the clients outside this age range would do so. All of the top 3 testing results show that the model discriminates against younger clients. \nWe remark that although this is unfair according to the definition, there may be underlying reasons for it, and it is still up to human experts to decide whether it constitutes actual discrimination. \n\nThe models trained on the \\textbf{Census Income}, \\textbf{German Credit} and \\textbf{Law School} datasets show relatively mild discrimination. In contrast, the model trained on the \\textbf{COMPAS} dataset shows severe discrimination, with a fairness score of 62.4\\%. That is, for Hispanic or other race male individuals who are older than 40, the model is much less likely to predict the recidivism risk as high. For the remaining individuals, the model predicts that 83.1\\% of them have a high recidivism risk. Top 2 and top 3 test results also show severe discrimination against older Hispanic or other race male individuals. Similarly, the model trained on the \\textbf{Crime} dataset shows a high level of discrimination. \nDifferent from the first five structured datasets, samples in this dataset have 10 different sensitive features, each of which is a decimal ranging from 0.0 to 1.0 that represents the percentage of a certain population group. \nAs shown in the top 1 testing result, when the percentage of divorced females is above 40\\% and the percentage of Caucasians is below 80\\%, the model is much more likely to predict that the number of violent crimes per population in the community is high. All testing results on the model trained on the \\textbf{Crime} dataset suggest that the model discriminates against communities with a high percentage of divorced females and a low percentage of Caucasians. \n\n\\begin{comment}\nOur observation is that there are indeed subtle discrimination in these models, which were never revealed in previous studies~\\cite{zhang2020white,tse2021adf,galhotra2017fairness,dixon2018measuring,ma2020metamorphic}. For instance, the model trained on the \\textbf{Census Income} dataset is more likely to predict the income of an individual above \\$50,000 when this person is a male who is older than 40 years old but less than 60 years old and his race is Caucasian or Asian-Pacific-Islander. The group fairness score 20.2\\%, which means an individual in this group is 20.2\\% more likely to be predicted positively. The model trained on the \\textbf{Bank Marketing} dataset only has one sensitive attribute $age$. This model predicts only 3.3\\% of the clients who are older than 10 but younger than 90 would subscribe a term deposit, whilst predicting 41.5\\% of clients older than 90 would subscribe a term deposit. For the models trained on the \\textbf{German Credit} dataset and the \\textbf{Law School} dataset, they show relatively mild discrimination. In contrast, the model trained on the \\textbf{COMPAS} dataset shows severe discrimination, with a fairness score of 57.7\\%. 
That is, for male individuals who are older than 40 and are Hispanic or other race, the model is much less likely to predict the recidivism risk as high. For the remaining individuals, the model predicts 83.3\\% of them have high recidivism risk. Similarly, the model trained on the \\textbf{Crime} dataset also shows high discrimination. Different from the first five structured datasets, samples in this dataset have 10 different sensitive features, each of which is a decimal ranging from 0.0 to 1.0. There are two gender-related features: $MalePctDiv$ and $FemalePctDiv$, which means the percentage of males\/females who are divorced; four race-related features: $racePctWhite$, $racePctBlack$, $racePctAsian$ and $racePctHisp$, representing the percentage of population that is Caucasian, African American, Asian heritage or Hispanic heritage respectively; and 4 age-related features: $agePct12t21$, $agePct12t29$, $agePct16t24$ and $agePct65up$, each of which represents the percentage of population in a certain age group. As shown in the last row, when the percentage of divorced females is above 50\\% and the percentage of Caucasian is below 70\\%, the model is much more likely to predict that the violent crimes per population in this community is high. \n\nWe remark that while there are indeed discrimination according to the group fairness definition, such discrimination may have its underlying reasons and may or may not be problematic. We consider it to be an orthogonal question on whether the model should be improved to reduce such discrimination and how. At the least, our method provides a way so that we are aware the existence of such discrimination. Furthermore, being able to identify the condition under which discrimination exists allows us to judge whether the discrimination is indeed problematic or not, and could potentially reveal problems in the training set. \n\\end{comment}\n\nIn Table~\\ref{tab:results}, the last two rows show the results for the models trained on text data. In general, we observe that the models trained on text datasets show less discrimination. The maximum fairness score for the model trained on the \\textbf{Wikipedia Talk Pages} dataset is 6.5\\%. That is, the model predicts 13.0\\% of the comments containing both ``gay'' and ``taoist'' as toxic. For the other comments (i.e., those missing at least one of these two terms), the model predicts only 6.5\\% of them as toxic. Top 2 and top 3 testing results show that the model discriminates against comments containing both ``gay'' and ``protestant'' and comments containing both ``gay'' and ``african american'' respectively. The model trained on the \\textbf{IMDB} dataset shows a similar level of discrimination. It is more likely to predict reviews containing ``european'' and ``young'' and reviews containing ``white'' and ``older'' as positive. It also shows discrimination against reviews containing ``lgbtq''.\nOur conjecture on why the level of discrimination is considerably lower for these models is that each sample in these text datasets often has many features and, as a result, the influence of each term (including sensitive terms) is distributed. 
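\nTo make the reported numbers concrete, the following is a minimal sketch (not the exact implementation of \\tool{TestSGD}) of how the group fairness score of a candidate rule set can be estimated, using the parameters of Table~\\ref{tab:parameters}; the helpers \\texttt{sample\\_inputs} and \\texttt{satisfies}, as well as the \\texttt{model.predict} call, are hypothetical placeholders for the input sampler, the rule-set membership check and the model under test.
\\begin{verbatim}
import math

def favorable_rate(model, inputs, favorable_label=True):
    # Fraction of inputs predicted with the favorable label ("True").
    preds = [model.predict(x) for x in inputs]        # hypothetical model API
    return sum(p == favorable_label for p in preds) / len(preds)

def fairness_score(model, rule_set, sample_thr=1000, z=1.96, error_thr=0.05):
    group, rest = [], []
    # Draw random inputs until both the rule-set group and its complement
    # hold at least sample_thr samples (sample_thr = 1000 in the parameters table).
    while min(len(group), len(rest)) < sample_thr:
        x = sample_inputs()                           # hypothetical sampler
        (group if satisfies(x, rule_set) else rest).append(x)
    phi_r = favorable_rate(model, group)
    phi_not_r = favorable_rate(model, rest)
    # Normal-approximation error margin at 95% confidence (z = 1.96); a
    # plausible role of error_thr = 0.05 is to require margin <= error_thr,
    # drawing further samples otherwise.
    margin = max(z * math.sqrt(p * (1 - p) / n)
                 for p, n in ((phi_r, len(group)), (phi_not_r, len(rest))))
    return abs(phi_r - phi_not_r), (phi_r, phi_not_r), margin
\\end{verbatim}
For instance, for the top 1 rule set on \\textbf{Census Income} the estimates are $\\phi_{r} = 29.9\\%$ and $\\phi_{\\neg r} = 9.7\\%$, whose difference gives the reported score of 20.2\\%.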
\n\n\\begin{tcolorbox}[fonttitle = \\bfseries]\n \\textbf{Answer to RQ1:} \\tool{TestSGD } is effective in identifying subtle group discrimination in neural networks.\n\\end{tcolorbox}\n\n\n\n\\begin{comment}\n\\begin{table}[t]\n\\caption{Rule Sets and Fairness Scores for Models Trained on Textual Data}\n\\label{tab:text}\n\\resizebox{.95\\columnwidth}{!}{\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\textbf{Dataset}} & \\multirow{2}{*}{\\textbf{Rule Set}} & \\textbf{Fairness Score} \\\\ \n & & ($\\phi_{r}, \\phi_{\\neg r}$) \\\\ \\hline\n\\multirow{2}{*}{Wiki Talk Pages} & \\multirow{2}{*}{``gay'', ``taoist''} & 3.7\\% \\\\ \n & & (12.4\\%, 8.8\\%) \\\\ \\hline\n\\multirow{2}{*}{IMDB} & \\multirow{2}{*}{``European'', ``young''} & 7.2\\% \\\\\n & & (56.0\\%, 48.8\\%) \\\\ \\hline\n\\end{tabular}}\n\\end{table}\n\\end{comment}\n\n\\noindent \\emph{RQ2: Is our method efficient?} To answer this question, we measure the amount of time required to identify subtle discrimination for each model. The total execution time and the number of tested rule sets are shown in Table~\\ref{tab:time}. For all models, the time required to identify the subtle discrimination is less than 20 hours. Furthermore, models trained on structured datasets take considerably less time than those trained on text datasets. That is, the models trained on the \\textbf{Census Income}, \\textbf{Bank Marketing}, \\textbf{German Credit}, \\textbf{COMPAS} and \\textbf{Law School} datasets take less than 16 minutes each. The exception is the model trained on the \\textbf{Crime} dataset, which takes more than 8 hours. The main reason is that it induces a large number of rule sets, due to its large number of sensitive features (i.e., 10), all of which are continuous. In contrast, both models trained on text data take more than 9 hours to finish. The main reason is that generating additional samples for such datasets takes much more time in general. We remark that the sampling procedure can be easily parallelized, and thus the time could be reduced significantly if it becomes an issue. \n\n\\begin{table}[t] \\small\n\\centering\n\\caption{Time Taken to Identify Subtle Discrimination}\n\\label{tab:time}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Dataset} & \\textbf{Time (seconds)} & \\textbf{\\#rule sets} \\\\ \\hline\nCensus Income & 869.35 & 880 \\\\ \\hline\nBank Marketing & 141.52 & 34 \\\\ \\hline\nGerman Credit & 104.85 & 53 \\\\ \\hline\nCOMPAS & 908.5 & 1590 \\\\ \\hline\nLaw School & 18.46 & 17 \\\\ \\hline\nCrime & 29150.01 & 13282 \\\\ \\hline\nWiki Talk Pages & 34982.28 & 732 \\\\ \\hline\nIMDB & 69125.16 & 876 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nNote that the support threshold $\\theta$ is set to 5\\% in all the above experiments. Intuitively, this means that each rule must be relevant to at least 5\\% of the population (although the rule set, which is a conjunction of multiple rules, may impact a smaller population). This hyper-parameter largely determines how many rule sets we must examine and thus may have an impact on the execution time. We thus conduct additional experiments with different $\\theta$ values, ranging from 1\\% to 50\\%, to evaluate the effect of $\\theta$ on the execution time and the results. 
The results on two models, i.e., the model on \\textbf{Law School} and the model on \\textbf{COMPAS}, are detailed in Table~\\ref{tab:support thr}.\n\n\\begin{table}[t] \\small\n\\caption{Effect of Different $\\theta$}\n\\label{tab:support thr}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\textbf{Dataset}} & \\multirow{2}{*}{\\textbf{$\\theta$}} & \\textbf{Time} & \\multirow{2}{*}{\\textbf{\\#rule sets}} & \\multirow{2}{*}{\\textbf{Rule Set}} & \\textbf{Fairness} \\\\ \n & & \\textbf{(seconds)} & & & \\textbf{Score} \\\\ \\hline\n\\multirow{10}{*}{Law School} & \\multirow{2}{*}{1\\%} & \\multirow{2}{*}{46.71} & \\multirow{2}{*}{59} & gender=male, & \\multirow{2}{*}{16.3\\%} \\\\ \n & & & & race=Black & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{5\\%} & \\multirow{2}{*}{18.46} & \\multirow{2}{*}{17} & gender=male, & \\multirow{2}{*}{15.0\\%} \\\\ \n & & & & race=Asian or Black & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{10\\%} & \\multirow{2}{*}{17.83} & \\multirow{2}{*}{16} & gender=male, & \\multirow{2}{*}{1.0\\%} \\\\ \n & & & & race=Asian or White & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{20\\%} & \\multirow{2}{*}{17.83} & \\multirow{2}{*}{16} & gender=male, & \\multirow{2}{*}{0.9\\%} \\\\ \n & & & & race=Asian or White & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{50\\%} & \\multirow{2}{*}{6.28} & \\multirow{2}{*}{2} & gender=male, & \\multirow{2}{*}{0.3\\%} \\\\ \n & & & & race=other race & \\\\ \\hline\n\\multirow{10}{*}{COMPAS} & \\multirow{2}{*}{1\\%} & \\multirow{2}{*}{1175.79} & \\multirow{2}{*}{2063} & gender=male, age$\\geq$40 & \\multirow{2}{*}{62.4\\%} \\\\ \n & & & & race=Hispanic or other race & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{5\\%} & \\multirow{2}{*}{908.50} & \\multirow{2}{*}{1590} & gender=male, age$\\geq$40 & \\multirow{2}{*}{62.4\\%} \\\\ \n & & & & race=Hispanic or other race & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{10\\%} & \\multirow{2}{*}{676.74} & \\multirow{2}{*}{1180} & gender=male, age$\\geq$20 & \\multirow{2}{*}{43.9\\%} \\\\ \n & & & & race=Hispanic or other race & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{20\\%} & \\multirow{2}{*}{0} & \\multirow{2}{*}{0} & \\multirow{2}{*}{NULL} & \\multirow{2}{*}{NULL} \\\\\n & & & & & \\\\ \\cline{2-6} \n & \\multirow{2}{*}{50\\%} & \\multirow{2}{*}{0} & \\multirow{2}{*}{0} & \\multirow{2}{*}{NULL} & \\multirow{2}{*}{NULL} \\\\\n & & & & & \\\\ \\hline\n\\end{tabular}}\n\\end{table}\n\nThe table shows the execution time, the number of rule sets and the worst group fairness score. We can observe that, the larger a $\\theta$ we set, the fewer rule sets, the less execution time and the smaller group fairness score in general. If the threshold $\\theta$ is too low, e.g., 1\\%, we spend a lot of time on testing a huge number of rule sets, which may not be interesting (one such example is $\\{gender=Male, age \\geq 100\\}$). In contrast, if the threshold $\\theta$ is too high, e.g., 20\\% or 50\\%, there may only exists few or even none rule set (as in the case of the model trained on the \\textbf{COMPAS} dataset). \n\nWe note that different $\\theta$ may result in different discrimination being identified. For the model trained on \\textbf{Law School}, the rule set shows that the model discriminates against black or Asian males the most when $\\theta$ is 5\\%. However, when we set $\\theta$ to be 1\\%, the model is shown to discriminate against black male individuals the most. 
For the model trained on the \\textbf{COMPAS} dataset, the model discriminates against Hispanic or other race males who is older than 40 years old most when we set $\\theta$ to be 5\\%. However, when we set $\\theta$ higher (i.e., 10\\%), the age range is expanded to be over 20 years in the identified rule set. Such a result is expected as a large $\\theta$ requires us to find discrimination against a large group. What is considered to be a reasonable value for $\\theta$ is a complicated question, which should probably be answered by lawmakers.\n\n\\begin{tcolorbox}[fonttitle = \\bfseries]\n \\textbf{Answer to RQ2:} \\tool{TestSGD } is reasonably efficient.\n\\end{tcolorbox}\n\n\n\n\n\n\\begin{table*}[t] \\small\n\\centering\n\\caption{Discrimination Mitigation for Neural Networks}\n\\label{tab:mitigation}\n \\resizebox{0.65\\linewidth}{!}{\n\\begin{tabular}{|c|c|cc|cc|}\n\\hline\n\\multirow{3}{*}{\\textbf{Dataset}} & \\multirow{3}{*}{\\textbf{Rule Set}} & \\multicolumn{2}{c|}{\\textbf{Before}} & \\multicolumn{2}{c|}{\\textbf{After}} \\\\ \\cline{3-6} \n & & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{accuracy}}} & \\textbf{Fairness Score} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{accuracy}}} & \\textbf{Fairness Score} \\\\ \n & & \\multicolumn{1}{c|}{} & ($\\phi_{r}, \\phi_{\\neg r}$) & \\multicolumn{1}{c|}{} & ($\\phi_{r}, \\phi_{\\neg r}$) \\\\ \\hline\n\\multirow{2}{*}{Census Income} & gender=male, 40$\\leq$age<80, & \\multicolumn{1}{c|}{\\multirow{2}{*}{86.1\\%}} & 20.2\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{86.2\\%}} & 10.1\\% \\\\ \n & race=White or Asian-Pac-Islander & \\multicolumn{1}{c|}{} & (29.9\\%, 9.7\\%) & \\multicolumn{1}{c|}{} & (18.9\\%, 8.8\\%) \\\\ \\hline\n\\multirow{2}{*}{Bank Marketing} & \\multirow{2}{*}{10 $\\leq$ age < 90} & \\multicolumn{1}{c|}{\\multirow{2}{*}{91.6\\%}} & 38.2\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{90.6\\%}} & 5.4\\% \\\\ \n & & \\multicolumn{1}{c|}{} & (3.3\\%, 41.5\\%) & \\multicolumn{1}{c|}{} & (6.9\\%, 12.3\\%) \\\\ \\hline\n\\multirow{2}{*}{German Credit} & \\multirow{2}{*}{gender = female, 60$\\leq$age<70} & \\multicolumn{1}{c|}{\\multirow{2}{*}{100.0\\%}} & 21.9\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{100.0\\%}} & 7.3\\% \\\\ \n & & \\multicolumn{1}{c|}{} & (72.3\\%, 50.6\\%) & \\multicolumn{1}{c|}{} & (45.9\\%, 53.2\\%) \\\\ \\hline\n\\multirow{2}{*}{COMPAS} & gender = male, age$\\geq$40, & \\multicolumn{1}{c|}{\\multirow{2}{*}{79.0\\%}} & 62.4\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{78.5\\%}} & 4.2\\% \\\\ \n & race = Hispanic or other race & \\multicolumn{1}{c|}{} & (20.7\\%, 83.1\\%) & \\multicolumn{1}{c|}{} & (80.9\\%, 85.1\\%) \\\\ \\hline\n\\multirow{2}{*}{Law School} & \\multirow{2}{*}{gender = male, race = Black} & \\multicolumn{1}{c|}{\\multirow{2}{*}{95.2\\%}} & 15.0\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{95.1\\%}} & 7.5\\% \\\\\n & & \\multicolumn{1}{c|}{} & (84.5\\%, 99.5\\%) & \\multicolumn{1}{c|}{} & (92.3\\%, 99.8\\%) \\\\ \\hline\n\\multirow{2}{*}{Crime} & \\multirow{2}{*}{FemalePctDiv$\\geq$0.4, racePctWhite$\\leq$0.8} & \\multicolumn{1}{c|}{\\multirow{2}{*}{93.9\\%}} & 60.7\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{98.1\\%}} & 51.4\\% \\\\ \n & & \\multicolumn{1}{c|}{} & (83.8\\%, 23.2\\%) & \\multicolumn{1}{c|}{} & (90.6\\%, 39.2\\%) \\\\ \\hline\n\\multirow{2}{*}{Wiki Talk Pages} & \\multirow{2}{*}{\"gay\", \"taoist\"} & \\multicolumn{1}{c|}{\\multirow{2}{*}{93.9\\%}} & 6.5\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{95.5\\%}} & 0.4\\% \\\\ \n & & \\multicolumn{1}{c|}{} & (13.0\\%, 6.5\\%) 
& \\multicolumn{1}{c|}{} & (8.4\\%, 8.0\\%) \\\\ \\hline\n\\multirow{2}{*}{IMDB} & \\multirow{2}{*}{\"european\", \"yong\"} & \\multicolumn{1}{c|}{\\multirow{2}{*}{86.7\\%}} & 6.6\\% & \\multicolumn{1}{c|}{\\multirow{2}{*}{84. \\%}} & 3.3\\% \\\\ \n & & \\multicolumn{1}{c|}{} & (56.0\\%, 49.4\\%) & \\multicolumn{1}{c|}{} & (43.7\\%, 40.4\\%) \\\\ \\hline\n\\end{tabular}}\n\\end{table*}\n\n\n\\noindent \\emph{RQ3: Can we mitigate subtle discrimination using our testing results?} To further show the usefulness of our approach, we evaluate whether the identified subtle discrimination can be mitigated using our testing results. The idea is to mitigate the discrimination by retraining. We remark that there are alternative approaches for improving fairness as well~\\cite{kearns2018preventing,salimi2019interventional}. Note that we generate additional instances satisfying the rule set with the sampling approach described in Section~\\ref{sec:generation}. We only select those generated instances with the opposite label. For example, the model trained on \\textbf{COMPAS} is more likely to predict the ``False'' label for elderly males who are Hispanic or other race. We can use the $Sample$ function to generate instances satisfying the condition that are labeled as ``True'' according to the original model. Afterward, we retrain the original model with these additional instances and test the subtle discrimination with respect to the same rule set to see the improvement. Note that we gradually increase the number of additional instances from 50 up to 10\\% of the original dataset size to achieve the lowest fairness score without decreasing the accuracy of the retrained model. \n\n\nWe only consider the top 1 worst rule set when mitigating the discrimination. The results are shown in Table~\\ref{tab:mitigation} for the six models retrained on additional structured data and the two models retrained on additional text data. We can observe that all models show reduced subtle discrimination while maintaining almost the same accuracy. The fairness scores of the retrained models on \\textbf{Census Income}, \\textbf{German Credit} and \\textbf{Law School} decrease by about half. The largest improvement is for the model retrained on the \\textbf{COMPAS} dataset, whose fairness score decreases by a factor of more than 10, i.e., from 62.4\\% to 4.2\\%. The fairness score of the model trained on the \\textbf{Crime} dataset decreases from 60.7\\% to 51.4\\%, a relatively modest improvement. We believe this is due to its many continuous sensitive features and its large total number of features (i.e., each input contains more than 100 attributes). That is, it would require a lot more additional data to improve fairness. For the CNN models, the fairness score decreases from 6.5\\% to 0.4\\% for the model retrained on \\textbf{Wikipedia Talk Pages} and from 6.6\\% to 3.3\\% for the model retrained on \\textbf{IMDB}. 
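\nThe retraining procedure described above can be summarized by the following sketch (a simplification, not the exact implementation); \\texttt{sample\\_rule\\_set}, \\texttt{retrain} and \\texttt{evaluate\\_accuracy} are hypothetical stand-ins for the $Sample$ function, the training routine and the accuracy measurement, and \\texttt{fairness\\_score} is the estimator sketched before the answer to RQ1.
\\begin{verbatim}
def mitigate(model, rule_set, train_data, opposite_label=True):
    # Retrain with additional instances that satisfy the rule set but carry
    # the label opposite to the one the model predominantly predicts for the
    # group ("True" in the COMPAS example above).
    base_acc = evaluate_accuracy(model)                 # hypothetical helper
    best_model, best_score = model, fairness_score(model, rule_set)[0]
    n_extra = 50
    # Grow the number of additional instances from 50 up to 10% of the
    # original training set (the doubling schedule here is illustrative).
    while n_extra <= 0.10 * len(train_data):
        extra = []
        while len(extra) < n_extra:
            x = sample_rule_set(rule_set)               # hypothetical Sample()
            if model.predict(x) == opposite_label:      # keep opposite-label instances
                extra.append((x, opposite_label))
        candidate = retrain(model, train_data + extra)  # hypothetical helper
        score = fairness_score(candidate, rule_set)[0]
        # keep the retrained model with the lowest fairness score that does
        # not decrease the accuracy
        if evaluate_accuracy(candidate) >= base_acc and score < best_score:
            best_model, best_score = candidate, score
        n_extra *= 2
    return best_model
\\end{verbatim}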
\n\\begin{tcolorbox}[fonttitle = \\bfseries]\n \\textbf{Answer to RQ3:} \\tool{TestSGD}$ $ is useful in mitigating the identified subtle group discrimination through retraining.\n\\end{tcolorbox}\n\n\n\\noindent \\emph{Comparison with Baselines} We identify the following two baselines from the literature which can potentially identify group discrimination similar to that targeted by our work.\n\\textbf{1) THEMIS~\\cite{galhotra2017fairness}} \ncalculates group discrimination scores over combinations of multiple features (subgroups) by measuring the difference between the maximum and minimum frequencies of two subgroups on randomly generated samples. \nThose subgroups \ncan then be regarded as identified discrimination if the score is higher than a threshold. \\textbf{2) FairFictPlay \\cite{kearns2018preventing}} is an in-processing algorithm aiming to improve subgroup fairness. The subgroups are identified with user-provided constraints in the form of conjunctions of Boolean attributes, linear threshold functions, or bounded degree polynomial threshold functions over multiple protected features. \n\n\n\nIn Table~\\ref{tab:baseline}, we show the group discrimination identified by \\tool{TestSGD}, Themis and FairFictPlay respectively, along with the fairness scores, on the same models trained on structured data (as listed in Table~\\ref{tab:models}). \nWe set the timeout to 24 hours. Note that FairFictPlay uses complex linear functions on all the protected features (which are hard to interpret) to define discriminatory subgroups; thus \nwe do not show the exact linear functions in the table. \nWe have the following observations. 1) Compared to FairFictPlay, \\tool{TestSGD } identifies discrimination with higher scores (i.e., stronger discrimination) while remaining interpretable. Moreover, \\tool{TestSGD } automatically identifies the discriminated subgroups without any prior knowledge. \n2) Similar to \\tool{TestSGD}, Themis is able to identify discriminated subgroups automatically. However, Themis identifies two subgroups which are maximally different (in terms of being predicted favorably), while \\tool{TestSGD } identifies subgroups which are predicted differently from the rest. These two approaches thus produce results that are complementary to each other. Note that Themis does not support text data. \n\n\n\n\\begin{table*}[t]\n\\caption{Comparisons Between \\tool{TestSGD }, Themis and FairFictPlay. 
`-' means timeout.}\n\\label{tab:baseline}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{|c|cc|cc|cc|}\n\\hline\n\\multirow{2}{*}{\\textbf{Dataset}} & \\multicolumn{2}{c|}{\\textbf{\\tool{TestSGD }}} & \\multicolumn{2}{c|}{\\textbf{Themis}} & \\multicolumn{2}{c|}{\\textbf{FairFictPlay}} \\\\ \\cline{2-7} \n & \\multicolumn{1}{c|}{\\textbf{Rule Set}} & \\textbf{Fairness Score} & \\multicolumn{1}{c|}{\\textbf{Sensitive Attributes' values for Max\/Min Proportion}} & \\textbf{Fairness Score} & \\multicolumn{1}{c|}{\\textbf{Subgroup}} & \\textbf{Fairness Score} \\\\ \\hline\n\\multirow{2}{*}{Census Income} & \\multicolumn{1}{c|}{Gender=male, 40$\\leq$age<80,} & \\multirow{2}{*}{20.20\\%} & \\multicolumn{1}{c|}{{[}gender=Female, 60$\\leq$age\\textless{}70, race=Asian-Pac\\_islander{]} -} & \\multirow{2}{*}{26.6\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{13.9\\%} \\\\\n & \\multicolumn{1}{c|}{race=White or Asian-Pac\\_islander} & & \\multicolumn{1}{c|}{{[}gender=Male, 10$\\leq$age\\textless{}20, race=White{]}} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\multirow{2}{*}{Bank Marketing} & \\multicolumn{1}{c|}{\\multirow{2}{*}{10$\\leq$age\\textless{}90}} & \\multirow{2}{*}{38.2\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{{[}60$\\leq$age\\textless{}70{]} - {[}10$\\leq$age\\textless{}20{]}}} & \\multirow{2}{*}{8.4\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{7.6\\%} \\\\\n & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\multirow{2}{*}{German Credit} & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender=feamle, 60$\\leq$age\\textless{}70}} & \\multirow{2}{*}{21.9\\%} & \\multicolumn{1}{c|}{{[}gender=Female, 80$\\leq$age\\textless{}90{]} -} & \\multirow{2}{*}{17.1\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{7.0\\%} \\\\\n & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{{[}gender=Male, 10$\\leq$age\\textless{}20{]}} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\multirow{2}{*}{COMPAS} & \\multicolumn{1}{c|}{gender=male, age$\\geq$40,} & \\multirow{2}{*}{62.4\\%} & \\multicolumn{1}{c|}{{[}gender=Female, 10$\\leq$age\\textless{}20, race=Native American{]} -} & \\multirow{2}{*}{67.3\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{22.4\\%} \\\\\n & \\multicolumn{1}{c|}{race=Hispanic or other race} & & \\multicolumn{1}{c|}{{[}gender=Male, 60$\\leq$age\\textless{}70, race=other race{]}} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\multirow{2}{*}{Law School} & \\multicolumn{1}{c|}{\\multirow{2}{*}{gender=male, race=Asian or Black}} & \\multirow{2}{*}{15.0\\%} & \\multicolumn{1}{c|}{{[}gender=Male, race=White{]} -} & \\multirow{2}{*}{13.5\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{3.7\\%} \\\\\n & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{{[}gender=Female, race=Black{]}} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\multirow{2}{*}{Crime} & \\multicolumn{1}{c|}{\\multirow{2}{*}{FemalePctDiv$\\geq$0.4, racePctWhite$\\leq$0.8}} & \\multirow{2}{*}{60.7\\%} & \\multicolumn{1}{c|}{\\multirow{2}{*}{-}} & \\multirow{2}{*}{-} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Linear Threshold Function}} & \\multirow{2}{*}{38.8\\%} \\\\\n & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{} & & \\multicolumn{1}{c|}{} & \\\\ \\hline\n\\end{tabular}}\n\\end{table*}\n\n\\vspace{-1em}\n\\section{Related Work} \\label{sec:related work}\nMany existing works 
have attempted to test discrimination according to different fairness definitions and measurements~\\cite{dwork2012fairness, calders2010three}. In~\\cite{feldman2015certifying}, Feldman \\emph{et al.} provide a fairness definition which is measured according to the demographic parity of model predictions. It measures how well the sensitive class can be predicted based on classification accuracy. In~\\cite{hardt2016equality}, Hardt \\emph{et al.} present an alternative definition of fairness, equalized odds, which requires the prediction to be independent of the sensitive attribute conditioned on the true outcome. In~\\cite{kusner2017counterfactual}, Kusner \\emph{et al.} define counterfactual fairness, which focuses on single decisions towards an individual. A prediction is counterfactually fair if it is the same in the actual world and in a counterfactual world in which the individual belongs to a different demographic group. In~\\cite{galhotra2017fairness}, Galhotra \\emph{et al.} propose causal discrimination to measure the fraction of inputs for which the model causally discriminates. This definition is similar to counterfactual fairness, but it takes instances of discrimination into account. In~\\cite{kearns2018preventing}, Kearns \\emph{et al.} propose an in-processing algorithm aiming to improve the fairness of given subgroups, where subgroups are defined as conjunctions of attributes, linear threshold functions, or bounded degree polynomial threshold functions over multiple protected features.\nMost existing works~\\cite{galhotra2017fairness, kleinberg2016inherent, biswas2020machine} use the positive classification rate as the fairness measurement. \n \nSubsequently, many works focus on individual discrimination and aim to generate individual discriminatory instances~\\cite{zhang2020white,tse2021adf,agarwal2018automated, huchard2018proceedings}. These works try to generate instances which are classified differently after changing only the sensitive attributes. \nIn~\\cite{agarwal2018automated}, Agarwal \\emph{et al.} present an automated testing approach to generate test inputs to find individual discrimination. In~\\cite{ruoss2020learning}, Ruoss \\emph{et al.} propose a fair representation framework to generalize individual fairness to multiple notions. It learns a mapping from similar individuals to latent representations.\nHowever, testing for individual discrimination cannot provide a statistical measurement of fairness. \n\nSome other existing works attempted to test model discrimination with fairness score measurements. In~\\cite{tramer2017fairtest}, Tramer \\emph{et al.} propose an unwarranted associations framework to detect unfair, discriminatory or offensive user treatment in data-driven applications. It identifies discrimination according to multiple metrics including the CV score, related ratio and associations between outputs and sensitive attributes. In~\\cite{kleinberg2016inherent}, Kleinberg \\emph{et al.} also test multiple discrimination scores and compare different fairness metrics. In~\\cite{galhotra2017fairness}, Galhotra \\emph{et al.} propose a tool called THEMIS to measure software discrimination. It tests discrimination with two fairness definitions, i.e., a group discrimination score and a causal discrimination score. \nIn~\\cite{adebayo2016iterative}, Adebayo \\emph{et al.} try to determine the relative significance of a model's inputs in determining the outcomes and use it to assess the discriminatory extent of the model. \n\nSome prior work has been done on fairness for text classification tasks as well. 
In~\\cite{blodgett2017racial}, Blodgett \\emph{et al.} discuss the impact of unfairness in natural language processing and show how statistical discrimination arises in NLP applications. \nIn~\\cite{bolukbasi2016man}, Bolukbasi \\emph{et al.} show gender bias in word embeddings and provide a methodology for modifying an embedding to remove gender bias. In~\\cite{dixon2018measuring}, Dixon \\emph{et al.} measure discrimination using a set of common demographic identity terms and propose a method to mitigate the unintended bias by balancing the training data. \n\nCompared with all the above-mentioned existing works, we take fairness testing one step further. Instead of measuring only the overall discrimination, our approach systematically identifies and measures subtle discrimination. That is, we not only measure statistical discrimination with a confidence guarantee but also offer interpretable rule sets to represent subtle discrimination. \n\nThis work is remotely related to works on applying rule-based models for model explanation. In~\\cite{yang2017scalable}, Yang \\emph{et al.} present an algorithm for building probabilistic rule lists with a logical IF-THEN structure.\nIn~\\cite{lakkaraju2016interpretable}, Lakkaraju \\emph{et al.} propose interpretable decision sets to interpret model predictions with high accuracy and high interpretability. Our work leverages such rule-based interpretable structures to present subtle discrimination in models. \n\n\\vspace{-1em}\n\\section{Conclusion} \\label{sec:conclusion}\nIn this work, we focus on testing neural network models against subtle group discrimination and propose a framework to systematically identify interpretable subtle group discrimination based on a group fairness measurement with a given confidence. \nOur extensive evaluation demonstrates that subtle group discrimination in neural networks is surprisingly common. \nWe also show that it is possible to mitigate such discrimination by utilizing our testing results to generate more data for retraining. \n\n\n\\balance\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFirst-passage percolation (FPP) was introduced by Hammersley and Welsh \\cite{HW} as a model for fluid flow through a porous medium. However, it has since developed into a field of its own, serving for instance as a model for growing interfaces (see \\cite{KS} and connections to other models \\cite{HH}) and competing infections (see \\cite{BS, DH, GM, HP, Hoffman1}). For a survey of recent results, see \\cite{GK}.\n\nWe consider FPP on $(\\ensuremath{\\mathbb{Z}^2},\\ensuremath{\\mathcal{E}^2}),$ the two-dimensional square lattice. $\\mathbb{P}$ will denote a probability measure on the space $\\Omega = \\mathbb{R}^{\\ensuremath{\\mathcal{E}^2}}$ (satisfying some conditions outlined in the next section). An element $\\omega \\in \\Omega$ represents an edge-weight configuration; the passage time across the edge $e$ is denoted $\\omega_e = \\omega(e)$. The passage time between two sites $x,y$ will be called\n\\[\n\\tau(x,y) = \\inf_{\\gamma:x \\to y} \\tau(\\gamma)\\ ,\n\\]\nwhere the infimum is over all (finite) lattice paths from $x$ to $y$ and $\\tau(\\gamma) = \\sum_{e \\in \\gamma} \\omega_e$. \n\n\nIn this paper we study geodesics, (typically self-avoiding) paths in $\\mathbb{Z}^2$ which are everywhere time-minimizing. 
Precisely, define a finite geodesic from $x$ to $y$ to be a finite lattice path $\\gamma$ from $x$ to $y$ such that $\\tau(\\gamma) = \\tau(x,y)$. Define an infinite geodesic to be an infinite path such that each finite subpath is a finite geodesic. In the mid 90's, Newman \\cite{Newman95} and Licea-Newman \\cite{LN}, along with Wehr \\cite{Wehr} began the rigorous study of infinite geodesics. This was in part motivated by connections between ``bigeodesics'' in FPP and ground states of disordered ferromagnetic spin models \\cite{FLN, newmanbook}. The main questions involve existence of infinite geodesics with asymptotic directions, uniqueness and coalescence of such geodesics, and absence of bigeodesics. After considerable progress on lattice FPP, Howard and Newman gave an essentially complete description for a continuum variant, called Euclidean FPP \\cite{HN}.\n\nThe main theorems proved to date require heavy assumptions on the model, for instance strong moment bounds and so-called curvature inequalities (the establishment of which provides a major open problem in FPP). The main goals of this paper are to prove versions of the current geodesic theorems under minimal assumptions necessary to guarantee their validity. Because the methods of Newman and collaborators involve curvature bounds and concentration inequalities (the latter of which cannot hold under low moment assumptions), we are forced to develop completely new techniques. \n\nOur analysis centers on Busemann functions, which were used and analyzed in papers of Hoffman \\cite{Hoffman1, Hoffman2}. His work was one of the first (along with Garet-Marchand \\cite{GM}) to assert existence of multiple disjoint infinite geodesics under general assumptions, finding at least four almost surely. The methods are notable in their ability to extract any information without knowing the existence of limits for Busemann functions. Indeed, proving the existence of such limits, corresponding to\n\\[\n\\lim_{n \\to \\infty} \\left[ \\tau(x,x_n) - \\tau(y,x_n) \\right]\n\\]\nfor fixed $x,y$ and a deterministic sequence of vertices $(x_n)$ growing to infinity along a ray, provides a major open problem and appears to be an impediment to further analysis of geodesics in the model. Incidentally, in an effort to describe the microstructure of the limiting shape for the model, Newman \\cite{Newman95} was able to show that under strong assumptions, this limit exists in Lebesgue-almost every direction. \n\nOne main aim of the present paper is to develop a framework to overcome the existence of the above limit. We will analyze distributional limits of Busemann functions and relate these back to the first-passage model. The relationship between Busemann functions and geodesics will be preserved in the limit and will provide information about directional geodesics, coalescence, and the structure of geodesic graphs, the latter of which gives nonexistence of certain types of bigeodesics.\n\n\n\n\n\n\\subsection{Main results}\n\nWe will make one of two main assumptions on the passage time distribution. These relate to the degree of independence in the model. The first deals with i.i.d. 
passage times:\n\\begin{enumerate}\n\\item[{\\bf A1}] $\\mathbb{P}$ is a product measure whose common distribution satisfies the criterion of Cox and Durrett \\cite{coxdurrett}: if $e_1, \\ldots, e_4$ are the four edges touching the origin,\n\\begin{equation}\\label{eq: jackisherenow}\n\\mathbb{E} \\left[ \\min_{i=1,\\ldots, 4} \\omega_{e_i} \\right]^2 < \\infty\\ .\n\\end{equation}\nFurthermore we assume $\\mathbb{P}(\\omega_e= 0) < p_c=1\/2$, the bond percolation threshold for $\\mathbb{Z}^2$.\n\\end{enumerate}\nCondition \\eqref{eq: jackisherenow} is implied by, for example, the assumption $\\mathbb{E} \\omega_e < \\infty$.\n\nThe other assumption is on distributions that are only translation-invariant. Condition (d) below deals with the limit shape, which is defined in the next paragraph.\n\\begin{enumerate}\n\\item[{\\bf A2}] $\\mathbb{P}$ is a measure satisfying the conditions of Hoffman \\cite{Hoffman2}:\n\\begin{enumerate}\n\\item $\\mathbb{P}$ is ergodic with respect to translations of $\\mathbb{Z}^2$;\n\\item $\\mathbb{P}$ has all the symmetries of $\\mathbb{Z}^2$;\n\\item $\\mathbb{E} \\omega_e^{2+\\varepsilon} < \\infty$ for some $\\varepsilon>0$;\n\\item the limit shape for $\\mathbb{P}$ is bounded.\n\\end{enumerate}\n\\end{enumerate}\nSome of the conditions here can be weakened. For instance, the $2+\\varepsilon$ moment condition can be replaced with a condition of a finite Lorentz-type norm; see \\cite{boivin} for details.\n\nIn each of these settings, a ``shape theorem'' has been proved \\cite{boivin, coxdurrett} for the set of sites accessible from 0 in time $t$. For $x,y \\in \\mathbb{R}^2$ we set $\\tau(x,y) = \\tau(\\tilde x, \\tilde y)$, where $\\tilde x$ and $\\tilde y$ are the unique points in $\\mathbb{Z}^2$ such that $x \\in \\tilde x + [-1\/2,1\/2)^2$ and $y \\in \\tilde y + [-1\/2,1\/2)^2$. For any $t\\geq 0$ write $B(t)$ for the set of $x$ in $\\mathbb{R}^2$ such that $\\tau(0,x) \\leq t$ and $B(t)\/t = \\{x\/t : x \\in B(t)\\}$. There exists a deterministic compact convex set $\\mathcal{B}$, symmetric about the axes and with nonempty interior such that for each $\\varepsilon>0$,\n\\[\n\\mathbb{P}\\left( (1-\\varepsilon) \\mathcal{B} \\subseteq B(t)\/t \\subseteq (1+\\varepsilon) \\mathcal{B} \\text{ for all large } t\\right) = 1\\ .\n\\]\nThe statement that $\\mathcal{B}$ has nonempty interior is not explicitly proved in \\cite{boivin} but follows from the maximal lemma stated there.\n\n\\subsubsection{Directional results}\n\nOur first results deal with asymptotic directions for infinite geodesics. Much is known about such questions under various strong assumptions (for instance uniformly positive curvature of $\\mathcal{B}$, exponential moments for $\\mathbb{P}$; see Section~\\ref{sec: global} for a more precise discussion). However, under only {\\bf A1} or {\\bf A2}, very little is known. After initial results by H\\\"aggstr\\\"om-Pemantle \\cite{HP}, Garet-Marchand \\cite{GM} and Hoffman \\cite{Hoffman1}, it was proved by Hoffman \\cite{Hoffman2} that under {\\bf A2}, there exist at least 4 infinite geodesics that are pairwise disjoint almost surely. Nothing is known about the directions of the geodesics; for instance, Hoffman's results do not rule out the case in which the geodesics spiral around the origin.\n\nBelow we will show that under {\\bf A1} or {\\bf A2} there are geodesics that are asymptotically directed in sectors of aperture no bigger than $\\pi\/2$. 
Under a certain directional condition on the boundary of the limit shape (see Corollary~\\ref{cor: exposed}) we show existence of geodesics with asymptotic direction. To our knowledge, the only work of this type so far \\cite[Theorem~2.1]{Newman95} requires a global curvature assumption to show the existence of geodesics in even one direction.\n\nTo describe the results, we endow $[0,2\\pi)$ with the distance of $S^1$: say that $dist(\\theta_1,\\theta_2) < r$ if there exists an integer $m$ such that $|\\theta_1-\\theta_2 - 2\\pi m| < r$. For $\\Theta \\subseteq [0,2\\pi)$ we say that a path $\\gamma = x_0, x_1, \\ldots$ is {\\it asymptotically directed in} $\\Theta$ if for each $\\varepsilon>0$, $\\arg x_k \\in \\Theta_\\varepsilon \\text{ for all large } k$, where $\\Theta_\\varepsilon = \\{\\theta: dist(\\theta,\\phi) < \\varepsilon \\text{ for some } \\phi \\in \\Theta\\}$. For $\\theta \\in [0,2\\pi)$, write $v_\\theta$ for the unique point of $\\partial \\mathcal{B}$ with argument $\\theta$. Recall that a supporting line $L$ for $\\mathcal{B}$ at $v_\\theta$ is one that touches $\\mathcal{B}$ at $v_\\theta$ such that $\\mathcal{B}$ lies on one side of $L$. If $\\theta$ is an angle such that $\\partial \\mathcal{B}$ is differentiable at $v_\\theta$ (and therefore has a unique supporting line $L_\\theta$ (the tangent line) at this point), we define an interval of angles $I_\\theta$:\n\\begin{equation}\\label{eq: last_night}\nI_\\theta = \\{\\theta' : v_{\\theta'} \\in L_\\theta\\}\\ .\n\\end{equation}\n\n\n\\begin{thm}\\label{thm: sectors}\nAssume either {\\bf A1} or {\\bf A2}. If $\\partial \\mathcal{B}$ is differentiable at $v_\\theta$, then with probability one there is an infinite geodesic containing the origin which is asymptotically directed in $I_\\theta$.\n\\end{thm}\n\nThe meaning of the theorem is that there is a measurable set $\\mathcal{A}$ with $\\mathbb{P}(\\mathcal{A})=1$ such that if $\\omega \\in \\mathcal{A}$, there is an infinite geodesic containing the origin in $\\omega$ which is asymptotically directed in $I_\\theta$. This also applies to any result we state with the phrases ``with probability one there is an infinite geodesic'' or ``with probability one there is a collection of geodesics.''\n\nWe now state two corollaries. A point $x \\in \\partial \\mathcal{B}$ is {\\it exposed} if there is a line that touches $\\mathcal{B}$ only at $x$.\n\n\\begin{cor}\\label{cor: exposed}\nAssume either {\\bf A1} or {\\bf A2}. Suppose that $v_\\theta$ is an exposed point of differentiability of $\\partial \\mathcal{B}$. With probability one there exists an infinite geodesic containing the origin with asymptotic direction $\\theta$.\n\\end{cor}\n\\begin{proof}\nApply Theorem~\\ref{thm: sectors}, noting that $I_\\theta = \\{\\theta\\}$.\n\\end{proof}\n\nIn the next corollary we show that there are infinite geodesics asymptotically directed in certain sectors. Because the limit shape is convex and compact, it has at least 4 extreme points. Angles corresponding to the arcs connecting these points can serve as the sectors.\n\n\\begin{cor}\\label{cor: extreme}\nAssume either {\\bf A1} or {\\bf A2}. Let $\\theta_1\\neq \\theta_2$ be such that $v_{\\theta_1}$ and $v_{\\theta_2}$ are extreme points of $\\mathcal{B}$. 
If $\\Theta$ is the set of angles corresponding to some arc of $\\partial \\mathcal{B}$ connecting $v_{\\theta_1}$ to $v_{\\theta_2}$, then with probability one there exists an infinite geodesic containing the origin which is asymptotically directed in $\\Theta$.\n\\end{cor}\n\n\\begin{proof}\n\nChoose $\\theta_3 \\in \\Theta$ such that $\\theta_1 \\neq \\theta_3 \\neq \\theta_2$ and $\\mathcal{B}$ has a unique supporting line $L_{\\theta_3}$ at $v_{\\theta_3}$ (this is possible since the boundary is differentiable almost everywhere). \nLet $C$ be the closed arc of $\\partial \\mathcal{B}$ from $v_{\\theta_1}$ to $v_{\\theta_2}$ that contains $v_{\\theta_3}$ and write $D$ for its open complementary arc. We claim $D \\subseteq I_{\\theta_3}^c$. This will prove the corollary after applying Theorem~\\ref{thm: sectors} with $\\theta = \\theta_3$.\n\nFor a contradiction, suppose that $L_{\\theta_3}$ intersects $D$ at some point $v_{\\phi}$ and write $S$ for the segment of $L_{\\theta_3}$ between $v_{\\theta_3}$ and $v_\\phi$. Since $L_{\\theta_3}$ is a supporting line, the set $\\mathcal{B}$ lies entirely on one side of it. On the other hand, since $\\mathcal{B}$ is convex and $v_{\\theta_3}, v_\\phi \\in \\mathcal{B}$, $S \\subseteq \\mathcal{B}$. Therefore $S \\subseteq \\partial \\mathcal{B}$ and must be an arc of the boundary. It follows that one of $v_{\\theta_1}$ or $v_{\\theta_2}$ is in the interior of $S$, contradicting the fact that these are extreme points of $\\mathcal{B}$.\n\\end{proof}\n\n\\begin{rem}\nIf $\\mathbb{P}$ is a product measure with $\\mathbb{P}(\\omega_e=1) = \\vec p_c \\text{ and } \\mathbb{P}(\\omega_e <1) = 0$, where $\\vec p_c$ is the critical value for directed percolation, \\cite[Theorem~1]{AD11} implies that $(1\/2,1\/2)$ is an exposed point of differentiability of $\\mathcal{B}$. Corollary~\\ref{cor: exposed} then gives a geodesic in the direction $\\pi\/4$. Though all points of $\\partial \\mathcal{B}$ (for all measures not in the class of Durett-Liggett \\cite{durrettliggett}) should be exposed points of differentiability, this is the only proven example.\n\\end{rem}\n\n\\begin{rem}\nFrom \\cite[Theorem~1.3]{HM}, for any compact convex set $\\mathcal{C}$ which is symmetric about the axes with nonempty interior, there is a measure $\\mathbb{P}$ satisfying {\\bf A2} (in fact, with bounded passage times) which has $\\mathcal{C}$ as a limit shape. Taking $\\mathcal{C}$ to be a Euclidean disk shows that there exist measures for which the corresponding model obeys the statement of Corollary~\\ref{cor: exposed} in any deterministic direction $\\theta$.\n\\end{rem}\n\n\n\n\\subsubsection{Global results}\\label{sec: global}\n\nIn this section we use the terminology of Newman \\cite{Newman95}. Call $\\theta$ a {\\it direction of curvature} if there is a Euclidean ball $B_\\theta$ with some center and radius such that $\\mathcal{B} \\subseteq B_\\theta$ and $\\partial B_\\theta \\cap \\mathcal{B} = \\{v_\\theta\\}$. 
We say that $\\mathcal{B}$ has {\\it uniformly positive curvature} if each direction is a direction of curvature and there exists $M< \\infty$ such that the radius of $B_\\theta$ is bounded by $M$ for all $\\theta$.\n\nIn \\cite[Theorem~2.1]{Newman95}, Newman has shown that under the assumptions (a) $\\mathbb{P}$ is a product measure with $\\mathbb{E} e^{\\beta \\omega_e}< \\infty$ for some $\\beta>0$, (b) the limit shape $\\mathcal{B}$ has uniformly positive curvature and (c) $\\omega_e$ is a continuous variable, two things are true with probability one.\n\\begin{enumerate}\n\\item For each $\\theta \\in [0,2\\pi)$, there is an infinite geodesic with asymptotic direction $\\theta$.\n\\item Every infinite geodesic has an asymptotic direction.\n\\end{enumerate}\nAs far as we know, there has been no weakening of these assumptions.\n\nBelow we improve on Newman's theorem. We first reduce the moment assumption on $\\mathbb{P}$ to that of {\\bf A1}. Next we extend the theorem to non-i.i.d. measures. Newman's proof uses concentration inequalities of Kesten \\cite{Kesten} and Alexander \\cite{Alexander}, which require exponential moments on the distribution (and certainly independence). So to weaken the moment assumptions we need to use a completely different method, involving Busemann functions instead.\n\nTo state the theorem, we make slightly stronger hypotheses:\n\\begin{enumerate}\n\\item[{\\bf A1'}] $\\mathbb{P}$ satisfies {\\bf A1} and the common distribution of $\\omega_e$ is continuous.\n\\item[{\\bf A2'}] $\\mathbb{P}$ satisfies {\\bf A2} and $\\mathbb{P}$ has unique passage times.\n\\end{enumerate}\nThe phrase ``unique passage times'' means that for all paths $\\gamma$ and $\\gamma'$ with distinct edge sets, $\\mathbb{P}(\\tau(\\gamma)=\\tau(\\gamma'))=0$.\n\n\\begin{thm}\\label{thm: newman}\nAssume either {\\bf A1'} or {\\bf A2'} and that $\\mathcal{B}$ has uniformly positive curvature.\n\\begin{enumerate}\n\\item With $\\mathbb{P}$-probability one, for each $\\theta$ there is an infinite geodesic with direction $\\theta$.\n\\item With $\\mathbb{P}$-probability one, every infinite geodesic has a direction.\n\\end{enumerate}\n\\end{thm}\n\nThe same method of proof shows the following. \n\n\\begin{cor}\\label{cor: newman2}\nAssume either {\\bf A1'} or {\\bf A2'} and suppose $v_\\theta$ is an exposed point of differentiability of $\\partial \\mathcal{B}$ for all $\\theta$. Then the conclusions of Theorem~\\ref{thm: newman} hold.\n\\end{cor}\n\n\\begin{rem}\nThe proofs of the above two results only require that the set of extreme points of $\\mathcal{B}$ is dense in $\\partial \\mathcal{B}$. In fact, a similar result holds for a sector in which extreme points of $\\mathcal{B}$ are dense in the arc corresponding to this sector.\n\\end{rem}\n\n\n\n\n\n\\subsubsection{Coalescence for geodesics}\\label{subsec: coalesce}\n\nIn this section we describe results for coalescence of infinite geodesics. For this we need some notation. 
For $S\\subseteq \\mathbb{R}^2$ define the point-to-set passage time \n\\[\n\\tau(x,S) = \\inf_{y \\in S} \\tau(x,y) \\text{ for } x \\in \\mathbb{R}^2\\ .\n\\]\nBy the subadditivity property $\\tau(x,y) \\leq \\tau(x,z) + \\tau(z,y)$ we find\n\\begin{equation}\\label{eq: subadditivity}\n\\tau(x,S) \\leq \\tau(x,y) + \\tau(y,S) \\text{ for } x,y \\in \\mathbb{R}^2\\ .\n\\end{equation}\nA path $\\gamma$ from a point $x \\in \\ensuremath{\\mathbb{Z}^2}$ to a point in \n\\begin{equation}\\label{eq: hatS}\n\\hat S = \\{y \\in \\ensuremath{\\mathbb{Z}^2} : y + [-1\/2,1\/2)^2 \\cap S \\neq \\varnothing\\}\n\\end{equation}\nis called a {\\it geodesic from $x$ to $S$} if $\\tau(\\gamma) = \\tau(x,S)$. Under assumptions {\\bf A1} or {\\bf A2}, one can argue from the shape theorem and boundedness of the limit shape that a geodesic from $x$ to $S$ exists $\\mathbb{P}$-almost surely. However, it need not be unique. In the case, though, that we assume {\\bf A1'} or {\\bf A2'}, there is almost surely exactly one geodesic from $x$ to $S$. Note that if $\\gamma$ is a geodesic from $x$ to $S$ and $y\\in \\gamma$, then the piece of $\\gamma$ from $x$ to $y$ is a geodesic from $x$ to $y$ and the piece of $\\gamma$ from $y$ to $S$ is a geodesic from $y$ to $S$.\n\nThe set $S$ gives a directed geodesic graph $\\mathbb{G}_S = \\mathbb{G}_S(\\omega)$: $\\langle x,y \\rangle$ is an edge of $\\mathbb{G}_S$ if it is in some geodesic from a point to $S$ and $\\tau(x,S) \\geq \\tau(y,S)$ (we will explain more about this graph in Section~\\ref{subsec: GG}). We say that a sequence of directed graphs $G_n = (\\mathbb{Z}^2,E_n)$ converges to a directed graph $G=(\\mathbb{Z}^2,E)$ if each edge $\\langle x,y \\rangle$ is in only finitely many of the symmetric differences $E_n \\Delta E$. If $x$ and $y$ are vertices of a directed graph $G$, write $x \\to y$ if there is a directed path from $x$ to $y$ in $G$. Last, we say that two infinite directed paths $\\Gamma$ and $\\Gamma'$ {\\it coalesce} if their (edge) symmetric difference is finite.\n\nFor the main theorems on coalescence we need an extra assumption in the case {\\bf A2'}. It allows us to apply ``edge modification'' arguments. Write $\\omega = (\\omega_e,\\check{\\omega})$, where $\\check{\\omega}_f = (\\omega)_{f\\neq e}$. \n\n\\begin{df}\nWe say that $\\ensuremath{\\mathbb{P}}$ has the {\\it upward finite energy property} if for each $\\lambda > 0$ such that $\\mathbb{P}(\\omega_e \\geq \\lambda)>0$,\n\t\t\\begin{equation}\n\t\t\\label{finite_energy_def}\n\t\t\\ensuremath{\\mathbb{P}}\\left(\\omega_e \\geq \\lambda \\, \\big|\\, \\check{\\omega} \\right) >0 \\quad \\text{almost surely}\\ .\n\t\t\\end{equation}\n\\end{df}\nNote that if $\\mathbb{P}$ is a product measure, it has the upward finite energy property.\n\n\\begin{thm}\\label{thm: random_hyperplanes}\nAssume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property. Let $v \\in \\mathbb{R}^2$ be any nonzero vector and for $\\beta \\in \\mathbb{R}$ define\n\\[\nL_\\beta(v) = \\{y \\in \\mathbb{R}^2 : y \\cdot v = \\beta\\}\\ .\n\\]\nThere exists an event $\\mathcal{A}$ with $\\mathbb{P}(\\mathcal{A}) = 1$ such that for each $\\omega \\in \\mathcal{A}$, the following holds. 
There exists an ($\\omega$-dependent) increasing sequence $(\\alpha_k)$ of real numbers with $\\alpha_k \\to \\infty$ such that $\\mathbb{G}_{L_{\\alpha_k}(v)}(\\omega) \\to G(\\omega)$, a directed graph with the following properties.\n\\begin{enumerate}\n\\item Viewed as an undirected graph, $G$ has no circuits.\n\\item Each $x \\in \\mathbb{Z}^2$ has out-degree 1 in $G$.\n\\item (All geodesics coalesce.) Write $\\Gamma_x$ for the unique infinite path in $G$ from $x$. If $x,y \\in \\mathbb{Z}^2$ then $\\Gamma_x$ and $\\Gamma_y$ coalesce.\n\\item (Backward clusters are finite.) For all $x \\in \\mathbb{Z}^2$, the set $\\{y \\in \\mathbb{Z}^2: y \\to x \\text{ in } G\\}$ is finite.\n\\end{enumerate} \n\\end{thm}\n\nOur last theorem deals with coalescence and asymptotic directions. Before stating it, we discuss some previous results. In 1995, Licea and Newman \\cite{LN} proved that given $\\theta \\in [0,2\\pi)$, all directional geodesics almost surely coalesce except in some deterministic (Lebesgue-null) set $D \\subseteq [0,2\\pi)$. Specifically they showed that under the assumptions (a) $\\mathbb{P}$ is a product measure whose one-dimensional marginals are continuous with finite exponential moments and (b) uniformly positive curvature of $\\mathcal{B}$,\n\\begin{equation}\\label{eq: liceanewman}\n\\text{there exists }D \\subseteq [0,2\\pi) \\text{ with Lebesgue measure zero such that if } \\theta \\in [0,2\\pi) \\setminus D\\ ,\n\\end{equation}\n\\begin{enumerate}\n\\item almost surely, there exists a collection of infinite geodesics $\\{\\gamma_x : x \\in \\mathbb{Z}^2\\}$ such that each $\\gamma_x$ has asymptotic direction $\\theta$ and for all $x,y$, the paths $\\gamma_x$ and $\\gamma_y$ coalesce and \n\\item almost surely, for each $x$, there is a unique infinite geodesic containing $x$ with asymptotic direction $\\theta$.\n\\end{enumerate}\nSince \\cite{LN} it has been an open problem to show that $D$ can be taken to be empty. Zerner \\cite[Theorem~1.5]{newmanbook} proved that $D$ can be taken to be countable. In a related exactly solvable model (directed last-passage percolation, using exponential weights on sites), Coupier has proved \\cite[Theorem~1(3)]{coupier}, building on work of Ferrari-Pimentel \\cite{FP}, that $D$ can be taken to be empty. These results rely on a mapping to the TASEP particle system.\n\nIn part 2 of the next theorem, we improve on \\eqref{eq: liceanewman} in the general case. The result reduces the set $D$ to be empty for existence of coalescing geodesics (item 1 above). It however does not address uniqueness. We reduced the moment condition of \\cite{LN}, extended to non-i.i.d. measures and replaced the global curvature assumption with a directional condition. Without this condition, part 3 gives the existence of coalescing geodesics directed in sectors. For the statement, recall the definition of $I_\\theta$ in \\eqref{eq: last_night}.\n\n\\begin{thm}\\label{thm: exceptional_set}\nAssume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property. 
Let $\\theta \\in [0,2\\pi)$.\n\\begin{enumerate}\n\\item If $\\partial \\mathcal{B}$ is differentiable at $v_\\theta$ then with probability one there exists a collection $\\{\\gamma_x:x \\in \\mathbb{Z}^2\\}$ of infinite geodesics in $\\omega$ such that \n\\begin{enumerate}\n\\item each $x$ is a vertex of $\\gamma_x$;\n\\item each $\\gamma_x$ is asymptotically directed in $I_\\theta$;\n\\item for all $x,y \\in \\mathbb{Z}^2$, $\\gamma_x$ and $\\gamma_y$ coalesce and\n\\item each $x$ is on $\\gamma_y$ for only finitely many $y$.\n\\end{enumerate}\n\\item If $v_\\theta$ is an exposed point of differentiability of $\\mathcal{B}$ then the above geodesics all have asymptotic direction $\\theta$.\n\\item Suppose $\\theta_1 \\neq \\theta_2$ are such that $v_{\\theta_1}$ and $v_{\\theta_2}$ are extreme points of $\\mathcal{B}$. If $\\Theta$ is the set of angles corresponding to some arc of $\\partial \\mathcal{B}$ connecting $v_{\\theta_1}$ to $v_{\\theta_2}$ then the above geodesics can be taken to be asymptotically directed in $\\Theta$.\n\\end{enumerate}\n\\end{thm}\n\nTheorems~\\ref{thm: random_hyperplanes} and \\ref{thm: exceptional_set} follow from a stronger result. In Sections~\\ref{sec: GG} and \\ref{sec: coalesceG}, we prove that any subsequential limit $\\mu$ defined as in Section~\\ref{sec: mudef} is supported on geodesic graphs with properties 1-4 of Theorem~\\ref{thm: random_hyperplanes}.\n\n\\begin{rem}\nThe finiteness of backward clusters in the graphs produced in the previous two theorems (see item 4 of the first and item 1(d) of the second) is related to nonexistence of bigeodesics. It shows that when constructing infinite geodesics using a certain limiting procedure, it is impossible for doubly infinite paths to arise.\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Notation}\n\nWe denote the standard orthonormal basis vectors for \\ensuremath{\\mathbb{R}^2} by ${\\mathbf{e}}_1$ and ${\\mathbf{e}}_2.$ The translation operators $T_{{\\mathbf{e}}_i},~ i = 1,2$\nact on a configuration $\\omega$ as follows: \n$\\left(T_{{\\mathbf{e}}_i} (\\omega) \\right)_{e'} = \\omega_{e'-{\\mathbf{e}}_i}.$ Under any of the assumptions laid out above, the measure $\\mathbb{P}$ is invariant under these translations. Furthermore the passage times have a certain translation-covariance: for $i=1,2$,\n\\begin{equation}\\label{eq: transcov}\n\\tau(x,S)(T_{{\\mathbf{e}}_i}\\omega) = \\tau(x-{\\mathbf{e}}_i,S-{\\mathbf{e}}_i)(\\omega)\\ ,\n\\end{equation}\nwhere $S-{\\mathbf{e}}_i = \\{ x-{\\mathbf{e}}_i : x \\in S\\}$.\n\nWe shall need a function $g:\\mathbb{R}^2 \\to \\mathbb{R}$ which describes the limiting shape $\\mathcal{B}$. It is the norm whose closed unit ball is $\\mathcal{B}$. There are many ways to define it; for instance one can use $g(x) = \\inf\\{\\lambda > 0: x\/\\lambda \\in \\mathcal{B}\\}$. It follows from the shape theorem that under {\\bf A1} or {\\bf A2},\n\\[\n\\lim_{n \\to \\infty} \\tau(0,nx)\/n = g(x) \\text{ for all } x \\in \\mathbb{R}^2, ~\\mathbb{P}\\text{-almost surely}\\ .\n\\]\nFurthermore, there is convergence in $L^1$:\n\\[\n\\lim_{n \\to \\infty} \\mathbb{E}\\tau(0,nx)\/n = g(x) \\text{ for all } x \\in \\mathbb{R}^2\\ .\n\\]\nIn the case of {\\bf A1} this follows from \\cite[Lemma~3.2]{coxdurrett} and under {\\bf A2} it can be derived from the shape theorem and \\cite[Lemma~2.6]{Hoffman2} (the reader can also see a derivation in the appendix of \\cite{gouere}). 
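\nFor the reader's convenience, here is a sketch of how the almost sure radial limit follows from the shape theorem (the references above contain complete arguments, including the $L^1$ statement). Fix $x \\neq 0$ and $\\varepsilon \\in (0,1)$, and work on the almost sure event that $(1-\\varepsilon) \\mathcal{B} \\subseteq B(t)\/t \\subseteq (1+\\varepsilon) \\mathcal{B}$ for all large $t$. Taking $t_n = n g(x)\/(1-\\varepsilon)$, the first inclusion gives $nx \\in B(t_n)$, that is, $\\tau(0,nx) \\leq t_n$, for all large $n$; the second inclusion, applied at time $\\tau(0,nx)$ (which tends to infinity because $B(t)$ is bounded for each large $t$), gives $g(nx) \\leq (1+\\varepsilon) \\tau(0,nx)$ for all large $n$. Hence\n\\[\n\\frac{g(x)}{1+\\varepsilon} \\leq \\liminf_{n \\to \\infty} \\frac{\\tau(0,nx)}{n} \\leq \\limsup_{n \\to \\infty} \\frac{\\tau(0,nx)}{n} \\leq \\frac{g(x)}{1-\\varepsilon}\\ ,\n\\]\nand letting $\\varepsilon \\to 0$ yields the almost sure convergence.\n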
We denote the $\\ell^1$ norm on $\\mathbb{R}^2$ by $\\|\\cdot\\|_1$ and the $\\ell^2$ norm by $\\| \\cdot \\|_2 .$ Since the limit shape is bounded and has nonempty interior, there are constants $0 < C_1, C_2 < \\infty$ such that\n\\begin{equation}\\label{eq: normequivalence}\nC_1 \\|x\\|_2 \\leq g(x) \\leq C_2 \\|x\\|_2 \\text{ for all } x \\in \\mathbb{R}^2\\ .\n\\end{equation}\n\nWe recall the fact that under {\\bf A1} or {\\bf A2},\n\\begin{equation}\\label{eq: finitesecondmoment}\n\\mathbb{E} \\tau(x,y)^2 < \\infty \\text{ for all } x,y \\in \\mathbb{R}^2\\ .\n\\end{equation}\nThis was proved in \\cite[Lemma~3.1]{coxdurrett} assuming {\\bf A1} and in the other case it follows directly from the fact that $\\mathbb{E} \\omega_e^{2+\\varepsilon}<\\infty$ for some $\\varepsilon>0$.\n\nWe write $x \\cdot y$ for the standard dot product between $x$ and $y$ in $\\mathbb{R}^2$. \n\n\\begin{center}\n{\\bf For the rest of the paper we assume A1 or A2.}\n\\end{center}\n\n\n\n\\subsection{Structure of the paper}\nIn the next section, we give basic properties of Busemann functions and geodesic graphs. In Section~\\ref{sec: BID} we introduce Busemann increment configurations and construct probability measures on them. Next we reconstruct Busemann functions, and in Section~\\ref{sec: limits} we prove a shape theorem for the reconstruction. Section~\\ref{sec: GG} begins the study of distributional limits $\\mathbb{G}$ of geodesic graphs, where we show that all paths are asymptotically directed in a sector given by the reconstructed Busemann function. In Section~\\ref{sec: coalesceG} we show coalescence of all paths in $\\mathbb{G}$. We use all of these tools in Section~\\ref{sec: proofs} to prove the main results of the paper.\n\n\n\n\n\n\n\n\n\\section{Busemann functions and geodesic graphs}\n\nIn this section we will give basic properties of Busemann functions and geodesic graphs. These will be carried over through weak limits to a space introduced in the next section. \n\n\n\\subsection{Busemann functions}\n\nFor any $S \\subseteq \\mathbb{R}^2$ and configuration $\\omega$, we define the Busemann function $B_S : \\ensuremath{\\mathbb{Z}^2}\\times \\ensuremath{\\mathbb{Z}^2} \\to \\mathbb{R}$ as\n\\[\nB_S(x,y) = \\tau(x,S) - \\tau(y,S)\\ ,\n\\]\nThis function measures the discrepancy between travel times from $x$ and $y$ to $S$. We list below some basic properties of Busemann functions. One of the most interesting is the additivity property 1. It is the reason that the asymptotic shape for the Busemann function is a half space whereas the asymptotic shape for $\\tau$ is a compact set.\n\n\\begin{prop}\\label{prop: busemannprop1}\nLet $S \\subseteq \\mathbb{R}^2$. 
The Busemann function $B_S$ satisfies the following properties $\\mathbb{P}$-almost surely for $x,y,z \\in \\ensuremath{\\mathbb{Z}^2}$:\n\\begin{enumerate}\n\\item (Additivity)\n\\begin{equation}\\label{eq: busemannadditivity}\nB_S(x,y) = B_S(x,z) + B_S(z,y)\\ .\n\\end{equation}\n\\item for $i=1,2$,\n\\begin{equation}\\label{eq: busemanntranscov}\nB_S(x,y)(T_{{\\mathbf{e}}_i} \\omega) = B_{S-{\\mathbf{e}}_i}(x-{\\mathbf{e}}_i,y-{\\mathbf{e}}_i)(\\omega)\\ .\n\\end{equation}\nTherefore the finite-dimensional distributions of $B_S$ obey a translation invariance:\n\\[\n\\left( B_S(x,y) \\right) \\underset{d}{=} \\left(B_{S-{\\mathbf{e}}_i}(x-{\\mathbf{e}}_i,y-{\\mathbf{e}}_i) \\right)\\ .\n\\]\n\\item\n\\begin{equation}\\label{eq: bboundtau}\n|B_S(x,y)| \\leq \\tau(x,y)\\ .\n\\end{equation}\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nThe first property follows from the definition. The third is a consequence of subadditivity \\eqref{eq: subadditivity} of $\\tau(y,S)$. The second item follows from the statement \\eqref{eq: transcov} for passage times.\n\\end{proof}\n\nThe last property we need regards the relation between geodesics and Busemann functions. Though it is simple, it will prove to be important later.\n\\begin{prop}\\label{prop: busemannpassagetime}\nLet $S \\subseteq \\mathbb{R}^2$ and $x \\in \\ensuremath{\\mathbb{Z}^2}$. If $\\gamma$ is a geodesic from $x$ to $S$ and $y$ is a vertex of $\\gamma$ then $B_S(x,y) = \\tau(x,y)$.\n\\end{prop}\n\n\\begin{proof}\nWrite $\\tau_\\gamma(x,y)$ for the passage time along $\\gamma$ between $x$ and $y$. Since every segment of a geodesic itself a geodesic, $\\tau(x,S)-\\tau(y,S) = \\tau_\\gamma(x,S) - \\tau_\\gamma(y,S) = \\tau_\\gamma(x,y) = \\tau(x,y)$.\n\\end{proof}\n\nUsing this proposition and additivity of the Busemann function we can relate $B_S(x,y)$ to coalescence. If $\\gamma_x$ and $\\gamma_y$ are geodesics from $x$ and $y$ to $S$ (respectively) and they meet at a vertex $z$ then $B_S(x,y) = \\tau(x,z)-\\tau(y,z)$. This is a main reason why Busemann functions are useful for studying coalescence of geodesics.\n\n\n\n\n\n\n\n\n\n\\subsection{Geodesic graphs}\\label{subsec: GG}\n\nFor any $S \\subseteq \\ensuremath{\\mathbb{Z}^2}$ and configuration $\\omega$, we denote the set of edges in all geodesics from a point $v \\in \\ensuremath{\\mathbb{Z}^2}$ to $S$ as $G_S(v)$. We regard each geodesic in $G_S(v)$ as a directed path, giving orientation $\\langle x,y \\rangle$ to an edge if $\\tau(x,S) \\geq \\tau(y,S)$ (the direction in which the edge is crossed), and set $\\vec{G}_S(v)$ to be the union of these directed edges. Let $\\mathbb{G}_S(\\omega)$ be the directed graph induced by the edges in $\\cup_v \\vec{G}_S(v)$. Last, define the configuration $\\eta_S(\\omega)$ of directed edges by\n\\[\n\\eta_S(\\omega)(\\langle x,y \\rangle) = \\begin{cases}\n1 & \\text{if }\\langle x,y \\rangle \\in \\vec{G}_S(v) \\text{ for some } v \\\\\n0 & \\text{otherwise}\n\\end{cases}\\ .\n\\]\nFor $S \\subseteq \\mathbb{R}^2$ we define $\\eta_S(\\omega)$ and $\\mathbb{G}_S(\\omega)$ using $\\hat S$ as in \\eqref{eq: hatS}.\n\n\\begin{prop}\\label{prop: firstGG}\nLet $S \\subseteq \\ensuremath{\\mathbb{R}^2}$. The graph $\\mathbb{G}_S$ and the collection $(\\eta_S)$ satisfy the following properties $\\mathbb{P}$-almost surely.\n\\begin{enumerate}\n\\item Every finite directed path is a geodesic. 
It is a subpath of a geodesic ending in $S$.\n\\item If there is a directed path from $x$ to $y$ in $\\mathbb{G}_S$ then $B_S(x,y) = \\tau(x,y)$.\n\\item For $i=1,2$,\n\\begin{equation}\\label{eq: GGtranscov}\n\\eta_S(e)(T_{{\\mathbf{e}}_i}\\omega) = \\eta_{S-{\\mathbf{e}}_i}(e-{\\mathbf{e}}_i)(\\omega)\\ .\n\\end{equation}\nTherefore the finite dimensional distributions of $\\eta_S$ obey a translation invariance:\n\\[\n(\\eta_S(e)) \\underset{d}{=} (\\eta_{S-{\\mathbf{e}}_i}(e-{\\mathbf{e}}_i))\\ .\n\\]\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nThe third property follows from translation covariance of passage times \\eqref{eq: transcov}. The second property follows from the first and Proposition~\\ref{prop: busemannpassagetime}.\n\nTo prove the first, let $\\gamma$ be a directed path in $\\mathbb{G}_S$ and write the edges of $\\gamma$ in order as $e_1,\\ldots, e_n$. Write $J \\subseteq \\{1, \\ldots, n\\}$ for the set of $k$ such that the path $\\gamma_k$ induced by $e_1, \\ldots, e_k$ is a subpath of a geodesic from some vertex to $S$. We will show that $n \\in J$. By construction of $\\mathbb{G}_S$, the edge $e_1$ is in a geodesic from some point to $S$, so $1 \\in J$. Now suppose that $k \\in J$ for some $k < n$; we will show that $k+1 \\in J$. Take $\\sigma$ to be a geodesic from a point $z$ to $S$ which contains $\\gamma_k$ as a subpath. Write $\\sigma'$ for the portion of the path from $z$ to the far endpoint $v_k$ of $e_k$ (the vertex to which $e_k$ points). The edge $e_{k+1}$ is also in $\\mathbb{G}_S$ so it is in a geodesic from some point to $S$. If we write $\\hat \\sigma$ for the piece of this geodesic from $v_k$ of $e_k$ to $S$, we claim that the concatenation of $\\sigma'$ with $\\hat \\sigma$ is a geodesic from $z$ to $S$. To see this, write $\\tau_{\\tilde \\gamma}$ for the passage time along a path $\\tilde \\gamma$:\n\\[\n\\tau(z,S) = \\tau_\\sigma(z,v_k) + \\tau_\\sigma(v_k,S) = \\tau_{\\sigma'}(z,v_k) + \\tau_{\\hat \\sigma}(v_k,S)\\ .\n\\]\nThe last equality holds since both the segment of $\\hat \\sigma$ from $v_k$ to $S$ and the segment of $\\sigma$ from $v_k$ to $S$ are geodesics, so they have equal passage time. Hence $k+1 \\in J$ and we are done.\n\\end{proof}\n\nNote that each vertex $x \\notin \\hat S$ has out-degree at least 1 in $\\mathbb{G}_S$. Furthermore it is possible to argue using part 1 of the previous proposition and the shape theorem that there are no infinite directed paths in $\\mathbb{G}_S$. Since we will not use this result later, we omit the proof. Once we take limits of measures on such graphs later, infinite paths will appear.\n\nIf $\\mathbb{P}$ has unique passage times, we can say more about the structure of $\\mathbb{G}_S$. \n\\begin{prop}\\label{prop: secondGG}\nAssume {\\bf A1'} or {\\bf A2'}. The following properties hold $\\mathbb{P}$-almost surely.\n\\begin{enumerate}\n\\item Each vertex $x \\notin \\hat S$ has out-degree 1. Here $\\hat S$ is defined as in \\eqref{eq: hatS}.\n\\item Viewed as an undirected graph, $\\mathbb{G}_S$ has no circuits.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nFor the first property note that every vertex $x\\notin \\hat S$ has out-degree at least 1 because there is a geodesic from the vertex to $S$ and the first edge is directed away from $x$. Assuming $x$ has out-degree at least 2 then we write $e_1$ and $e_2$ for two such directed edges. By the previous proposition, there are two geodesics $\\gamma_1$ and $\\gamma_2$ from $x$ to $S$ such that $e_i \\in \\gamma_i$ for $i=1,2$. 
If either of these paths returned to $x$ then it would contain a nontrivial finite path from $x$ to itself with passage time equal to 0. By the ergodic theorem there would then be infinitely many distinct paths with passage time 0 (with positive probability), contradicting unique passage times. This implies that $\\gamma_1$ and $\\gamma_2$ have distinct edge sets. However, they have the same passage time, again contradicting unique passage times.\n\nFor the second property suppose that there is a circuit in the undirected version of $\\mathbb{G}_S$. Each vertex has out-degree 1, so this is actually a directed circuit and thus a geodesic. But then it has passage time zero, giving a contradiction as above.\n\\end{proof}\n\nProperty 2 implies that $\\mathbb{G}_S$, viewed as an undirected graph, is a forest. It has more than one component if and only if $\\hat S$ has size at least 2. We will see later that after taking limits of measures on these graphs, the number of components will reduce to 1.\n\n\n\n\\section{Busemann increment distributions}\\label{sec: BID}\n\nWe are interested in taking limits of measures on Busemann functions and geodesic graphs. \nWe will choose a one-parameter family of lines $L_\\alpha = L + \\alpha \\mathbf{v}$ for $\\mathbf{v}$ a normal vector to $L$ and consider the Busemann functions $B_{L_\\alpha}(x,y)$. The main question is whether or not the limit\n\\begin{equation}\\label{eq: newmanlimit}\n\\lim_{\\alpha \\to \\infty} B_{L_\\alpha}(x,y)\n\\end{equation}\nexists for $x,y \\in \\mathbb{Z}^2$. If one could show this, then one could prove many results about FPP, for instance, that infinite geodesics with an asymptotic direction always exist. Under an assumption of uniformly positive curvature of the limit shape $\\mathcal{B}$ and exponential moments for the common distribution of the $\\omega_e$'s (in the case that $\\mathbb{P}$ is a product measure), Newman \\cite{Newman95} has shown the existence of this limit for Lebesgue-almost every unit vector $\\mathbf{v}$.\n\nWe will circumvent the difficulty of proving existence of the limit \\eqref{eq: newmanlimit} by enlarging the space and working with subsequential limits in a systematic way. This technique is inspired by work \\cite{AD, ADNS} on ground states of short-range spin glasses.\n\n\n\\subsection{Definition of $\\mu$}\\label{sec: mudef}\n\nWe begin by assigning a space for our passage times. Let $\\Omega_1 = \\mathbb{R}^{\\mathbb{Z}^2}$ be a copy of $\\Omega$. As before, we write $\\omega$ for a sample point in $\\Omega_1$. Our goal is to enhance this space to keep track of Busemann functions and geodesic graphs. We will take limits in a fixed direction, so for the remainder of this section, let $\\varpi\\in \\partial \\mathcal{B}$ and let $g_\\varpi$ be any linear functional on $\\mathbb{R}^2$ that takes its maximum on $\\mathcal{B}$ at $\\varpi$ with $g_\\varpi(\\varpi)=1$. The nullspace of $g_\\varpi$ is then a translate of a supporting line for $\\mathcal{B}$ at $\\varpi$.\nFor $\\alpha \\in \\mathbb{R}$, define \n\\[\nL_\\alpha = \\left\\{ x \\in \\mathbb{R}^2: g_\\varpi(x) = \\alpha \\right\\}\\ .\n\\]\nFor future reference, we note the inequality\n\\begin{equation}\\label{eq: gvarpibound}\n\\text{for all } x \\in \\mathbb{R}^2,~ g_\\varpi(x) \\leq g(x)\\ .\n\\end{equation}\nIt clearly holds if $x = 0$.
Otherwise since $x\/g(x) \\in \\mathcal{B}$, $1 \\geq g_\\varpi(x\/g(x)) = g_\\varpi(x)\/g(x)$.\n\n\nGiven $\\alpha \\in \\mathbb{R}$ and $\\omega \\in \\Omega_1$, write $B_\\alpha(x,y)(\\omega) = B_{L_\\alpha}(x,y)(\\omega)$. Define the space $\\Omega_2 = (\\mathbb{R}^2)^{\\mathbb{Z}^2}$ with the product topology and Borel sigma-algebra and the {\\it Busemann increment configuration} $B_\\alpha(\\omega) \\in \\Omega_2$ as\n\\begin{align*}\nB_\\alpha(\\omega) = \\big( \\, B_\\alpha(v, v+{\\mathbf{e}}_1),\\, B_\\alpha(v,v+{\\mathbf{e}}_2)\\, \\big)_{v \\in \\ensuremath{\\mathbb{Z}^2}}\\ .\n\\end{align*}\n\nWe also consider directed graphs of geodesics. These are points in a directed graph space $\\Omega_3 = \\{0,1\\}^{\\vec{\\mathcal{E}}^2}$, where $\\vec{\\mathcal{E}}^2$ is the set of oriented edges $\\langle x,y \\rangle$ of $\\mathbb{Z}^2$, and we use the product topology and Borel sigma-algebra. For $\\eta \\in \\Omega_3$, write $\\mathbb{G} = \\mathbb{G}(\\eta)$ for the directed graph induced by the edges $e$ such that $\\eta(e) = 1$. Using the definition from the last section, set\n\\[\n\\eta_\\alpha(\\omega) = \\eta_{L_\\alpha}(\\omega) \\in \\Omega_3 \\text{ and } \\mathbb{G}_\\alpha(\\omega) = \\mathbb{G}(\\eta_\\alpha(\\omega)) \\text{ for } \\alpha \\in \\mathbb{R}\\ .\n\\]\n\n\nSet $\\widetilde \\Omega = \\Omega_1 \\times \\Omega_2 \\times \\Omega_3$, equipped with the product topology and Borel sigma-algebra; \n\\[\n(\\omega, \\Theta,\\eta) = (\\omega(e), \\theta_1(x),\\theta_2(x),\\eta(f) : e \\in \\mathcal{E}^2, x \\in \\ensuremath{\\mathbb{Z}^2},~f \\in \\vec{\\mathcal{E}}^2)\n\\] \ndenotes a generic element of the space $\\widetilde \\Omega.$ Define the map\n\\begin{equation}\\label{eq: phidef}\n\\Phi_\\alpha: \\Omega_1 \\longrightarrow \\widetilde \\Omega \\text{ by } \\omega \\mapsto (\\omega, B_\\alpha(\\omega),\\eta_\\alpha(\\omega))\\ .\n\\end{equation}\nBecause $\\Phi_\\alpha$ is measurable, we can use it to push forward the distribution $\\ensuremath{\\mathbb{P}}$ to a probability measure $\\mu_\\alpha$ on $\\widetilde \\Omega$. Given the family $(\\mu_\\alpha)$ and $n \\in \\mathbb{N}$, we define the empirical average\n\\begin{equation}\\label{eq: munstar}\n\\mu_n^*\\left( \\cdot \\right) := \\frac{1}{n} \\int_{0}^n \\mu_\\alpha \\left( \\cdot \\right) \\mathrm{d} \\alpha. \n\\end{equation}\nTo prove that this defines a probability measure, one must show that for each measurable $A \\subseteq \\widetilde \\Omega$, the map $\\alpha \\mapsto \\mu_\\alpha(A)$ is Lebesgue-measurable. The proof is deferred to Appendix~\\ref{sec: appendix}.\n\nFrom $B_\\alpha(x,y) \\leq \\tau(x,y)$, the sequence $\\left( \\mu_n^* \\right)_{n=1}^{\\infty}$ is seen to be tight and thus has a subsequential weak limit $\\mu.$ We will call the marginal of $\\mu$ on $\\Omega_2$ a {\\it Busemann increment distribution} and the marginal on $\\Omega_3$ a {\\it geodesic graph distribution}. It will be important to recall the Portmanteau theorem, a basic result about weak convergence. The following are equivalent if $(\\nu_k)$ is a sequence of Borel probability measures on a metric space $X$:\n\\begin{align}\n\\lim_{k \\to \\infty} \\nu_k &\\to \\nu \\text{ weakly } \\nonumber \\\\\n\\limsup_{k \\to \\infty} \\nu_k(A) &\\leq \\nu(A) \\text{ if } A \\text{ is closed} \\label{eq: kallenbergclosed}\\\\\n\\liminf_{k \\to \\infty} \\nu_k(A) &\\geq \\nu(A) \\text{ if } A \\text{ is open} \\label{eq: kallenbergopen}\\ .\n\\end{align}\n(See, for example, \\cite[Theorem~3.25]{Kallenberg}.) 
Because $\\widetilde \\Omega$ is metrizable, these statements apply.\n\nIn this section and the next, we prove general properties about the measure $\\mu$ and focus on the marginal on $\\Omega_2$. In Sections~\\ref{sec: GG} and \\ref{sec: coalesceG} we study the marginal on $\\Omega_3$ and in Section~\\ref{sec: proofs} relate results back to the original FPP model. It is important to remember that $\\mu$ depends among other things not only on $\\varpi$, but on the choice of the linear functional $g_\\varpi$. We will suppress mention of $\\varpi$ in the notation. Furthermore we will use $\\mu$ to represent the measure and also its marginals. For instance, if we write $\\mu(A)$ for an event $A \\subseteq \\Omega_2$ we mean $\\mu(\\Omega_1 \\times A \\times \\Omega_3)$.\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Translation invariance of $\\mu$.}\nWe will show that $\\mu$ inherits translation invariance from \\ensuremath{\\mathbb{P}}. The natural translations $\\tilde{T}_m,~m=1,2$ act on $\\widetilde \\Omega$ as follows:\n\\[\n\\left[ \\tilde{T}_m (\\omega,\\Theta,\\eta) \\right](e,x,f) = \\left( \\omega_{e-{\\mathbf{e}}_m}, \\theta_1 (x - {\\mathbf{e}}_m), \\theta_2 (x - {\\mathbf{e}}_m), \\eta(f-{\\mathbf{e}}_m) \\right)\\ .\n\\]\nHere, for example, we interpret $e-{\\mathbf{e}}_m$ for the edge $e= (y,z)$ as $(y-{\\mathbf{e}}_m,z-{\\mathbf{e}}_m)$.\n\n\n\n\\begin{lem}\n\\label{translate_1}\nFor any $\\alpha \\in \\mathbb{R}$ and $m=1,2$, $\\mu_\\alpha \\circ \\tilde T_m = \\mu_{\\alpha + g_{\\varpi}({\\mathbf{e}}_m)}$.\n\n\\begin{proof}\nLet $A$ be a cylinder event for the space $\\widetilde \\Omega$ of the form\n\\[\nA = \\left\\{ \\omega_{e_i} \\in \\mathbf{B}_i, \\theta_{r_j}(x_j) \\in \\mathbf{C}_j, \\eta(f_k) = a_k : i = 1, \\ldots, l, ~j = 1, \\ldots, m, ~k=1, \\ldots, n \\right\\}\\ ,\n\\]\nwhere each $\\mathbf{B}_i, \\mathbf{C}_j$ is a (real) Borel set with $a_k \\in \\{0,1\\}$, each $r_j \\in \\{1,2\\}$, and each $e_i \\in \\mathcal{E}^2, x_j \\in \\mathbb{Z}^2$ and $f_k \\in \\vec{\\mathcal{E}}^2$. We will show that for $m=1,2$,\n\\begin{equation}\\label{eq: chazisagooddog}\n\\mu_\\alpha \\left(\\tilde{T}_m^{-1} A\\right) = \\mu_{\\alpha + g_{\\varpi}({\\mathbf{e}}_m)}(A)\\ .\n\\end{equation}\nSuch $A$ generate the sigma-algebra so this will imply the lemma. For $m \\in \\{1,2\\}$,\n\\begin{equation}\n\\label{cylindershift}\n\\tilde{T}_m^{-1}(A) = \\left\\{\\omega_{e_i - {\\mathbf{e}}_m} \\in \\mathbf{B}_i, \\theta_{r_j}(x_j-{\\mathbf{e}}_m) \\in \\mathbf{C}_j, \\eta(f_k-{\\mathbf{e}}_m) = a_k \\right\\}\\ . \\nonumber\n\\end{equation}\nRewriting $\\mu_\\alpha(\\cdot) = \\mathbb{P}(\\Phi_\\alpha^{-1}(\\cdot))$ and using the definition of $\\Phi_\\alpha$ \\eqref{eq: phidef}, \n\\[\n\\mu_\\alpha(\\tilde T_m^{-1}(A)) = \\ensuremath{\\mathbb{P}} \\left( \\omega_{e_i-{\\mathbf{e}}_m} \\in \\mathbf{B}_i, B_\\alpha(x_j - {\\mathbf{e}}_m, x_j - {\\mathbf{e}}_m + {\\mathbf{e}}_{r_j} ) \\in \\mathbf{C}_j, \\eta_\\alpha(f_k-{\\mathbf{e}}_m)(\\omega) = a_k \\right)\\ .\n\\]\nNote that translation invariance of $\\ensuremath{\\mathbb{P}}$ allows to shift the translation by ${\\mathbf{e}}_m$ from the arguments of $\\omega$, $B_\\alpha$ and $\\eta_\\alpha$ to the position of the line $L_\\alpha$. 
We have equality in distribution:\n\\[\n\\omega_{e-{\\mathbf{e}}_m} \\underset{d}{=} \\omega_e,~ B_\\alpha(x - {\\mathbf{e}}_m, y - {\\mathbf{e}}_m) \\underset{d}{=} B_\\beta(x,y) \\text{ and } \\eta_\\alpha(e-{\\mathbf{e}}_m) \\underset{d}{=} \\eta_\\beta(e)\\ ,\n\\]\nwhere $\\beta=\\alpha + g_\\varpi ({\\mathbf{e}}_m)$. In fact, using the translation covariance statements \\eqref{eq: transcov}, \\eqref{eq: busemanntranscov} and \\eqref{eq: GGtranscov}, equality of the above sort holds for the joint distribution of the $\\omega$'s, Busemann increments and graph variables appearing in the event $A.$ This proves \\eqref{eq: chazisagooddog}.\n\\end{proof}\n\\end{lem}\n\n\n\n\n\n\n\\begin{prop}\n\t$\\mu$ is invariant under the translations $\\tilde{T}_m$, $m=1,2$.\n\\end{prop}\n\\begin{proof}\n\tLet $f$ be a continuous function (bounded by $D \\geq 0$) on the space $\\widetilde \\Omega,$ and fix $\\epsilon > 0.$ Choose an increasing sequence $(n_k)$ such that $\\mu_{n_k}^* \\to \\mu$ weakly as $k \\to \\infty$. We can then find $k_0$ such that\n$|\\mu(f) - \\mu_{n_k}^*(f)| < \\epsilon\/3$ for $k > k_0.$ \nBy Lemma~\\ref{translate_1}, $\\mu_\\alpha \\circ \\tilde T_m = \\mu_{\\alpha + g_{\\varpi}({\\mathbf{e}}_m)}$ for $m=1,2$. Therefore \n\\begin{align*}\n\t\\left[\\mu_{n_k}^* \\circ \\tilde{T}_m\\right] \\left(f\\right) &= \\frac{1}{n_k}\\int_{g_{\\varpi}({\\mathbf{e}}_m)}^{n_k+g_{\\varpi}({\\mathbf{e}}_m)} \\mu_\\alpha \\left(f \\right) \\mathrm{d} \\alpha\\\\\n\t\\Rightarrow \\left| \\left[\\mu_{n_k}^* \\circ \\tilde{T}_m\\right] \\left(f\\right) - \\mu_{n_k}^* \\left(f\\right) \\right| &\\leq\n\t\t\\frac{1}{n_k}\\left| \\int_{0}^{g_{\\varpi}({\\mathbf{e}}_m)} \\mu_\\alpha \\left(f \\right) \\mathrm{d} \\alpha \\right| + \\frac{1}{n_k} \\left| \\int_{n_k}^{n_k+g_{\\varpi}({\\mathbf{e}}_m)} \\mu_\\alpha \\left(f \\right) \\mathrm{d} \\alpha \\right|\\\\\n\t&\\leq \\frac{2 g_{\\varpi}({\\mathbf{e}}_m)D}{n_k} \\rightarrow 0 \\text{ as } k \\to \\infty\\ .\n\\end{align*}\nAs $\\tilde T_m$ is a continuous on $\\widetilde \\Omega$, $(\\mu_{n_k}^* \\circ \\tilde{T}_m) $ converges weakly to $\\mu \\circ \\tilde{T}_m,$ so there exists $k_1>k_0$ such that \n$|\\mu\\circ \\tilde T_m(f) - \\mu_{n_k}^* \\circ \\tilde T_m (f)| < \\epsilon\/3$ for all $k > k_1,$ and $k_2>k_1$ with $2g_{\\varpi}({\\mathbf{e}}_m)D\/n_{k_2} < \\epsilon\/3.$ So $|\\mu(f) - \\mu \\circ \\tilde{T}_m (f)| < \\epsilon \\text{ for all } \\varepsilon>0$, giving $\\mu = \\mu \\circ \\tilde T_m$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Reconstructed Busemann functions}\nWe wish to reconstruct an ``asymptotic Busemann function\" $f: \\ensuremath{\\mathbb{Z}^2} \\rightarrow \\mathbb{R}$ by summing the Busemann increments of $\\Theta \\in \\Omega_2$. That $\\Theta$ is almost surely curl-free allows the construction to proceed independent of the path we sum over. 
For this we need some definitions.\n\nGiven $\\Theta \\in \\Omega_2$, $x \\in \\mathbb{Z}^2$ and $z \\in \\mathbb{Z}^2$ with $\\|z\\|_1=1$ we set $\\theta(x,z) = \\theta(x,z)(\\Theta)$ equal to\n\\[\n\\theta(x,z) = \\begin{cases}\n\\theta_1(x) & z = {\\mathbf{e}}_1 \\\\\n\\theta_2(x) & z= {\\mathbf{e}}_2 \\\\\n-\\theta_1(x-{\\mathbf{e}}_1) & z=-{\\mathbf{e}}_1 \\\\\n-\\theta_2(x-{\\mathbf{e}}_2) & z = -{\\mathbf{e}}_2\n\\end{cases}\\ .\n\\]\nFor any finite lattice path $\\gamma$ we write its vertices in order as $x_1, \\ldots, x_n$ and set\n\\[\nf(\\gamma) = f(\\gamma)(\\Theta) = \\sum_{i=1}^{n-1} \\theta(x_i, x_{i+1}-x_i)\\ .\n\\]\n\n\n\\begin{lem}\\label{lem: curlfree}\nWith $\\mu$-probability one, $f$ vanishes on all circuits:\n\\[\n\\mu \\left( f(\\gamma)=0 \\text{ for all circuits } \\gamma \\right) = 1\\ .\n\\]\n\\end{lem}\n\n\\begin{proof}\nPick a circuit $\\gamma$ and let $A\\subseteq \\widetilde \\Omega_2$ denote the event $\\{ \\Theta : f(\\gamma) = 0\\}$. Choose an increasing sequence $(n_k)$ such that $\\mu_{n_k}^* \\to \\mu$ weakly. For fixed $\\gamma$, $f(\\gamma)$ is a continuous function on $\\widetilde \\Omega$, so the event $A$ is closed, giving $\\mu(A) \\geq \\limsup_{k} \\mu_{n_k}^*(A)$ by \\eqref{eq: kallenbergclosed} . However, for each $\\alpha$, by additivity of $B_\\alpha(\\cdot,\\cdot)$ (see \\eqref{eq: busemannadditivity}),\n\\[\n\\mu_\\alpha(A) = \\mathbb{P}\\left(\\sum_{i=1}^n B_\\alpha(x_i,x_{i+1}) = 0\\right) = 1\\ .\n\\]\nThus $\\mu_n^*(A) = 1$ for all $n$ and $\\mu(A)=1$. There are countably many $\\gamma$'s so we are done.\n\\end{proof}\n\nUsing the lemma we may define the reconstructed Busemann function. Fix a deterministic family of finite paths $\\{\\gamma_{x,y}\\}$, one for each pair $(x,y) \\in \\mathbb{Z}^2$ and define \n\\[\nf(x,y) = f(x,y)(\\Theta) := f(\\gamma_{x,y})\\ .\n\\] \nAlthough we use fixed paths $\\gamma_{x,y}$, this is only to ensure that $f$ is a continuous function on $\\widetilde \\Omega$. Actually, for any $\\Theta$ in the $\\mu$-probability one set of Lemma~\\ref{lem: curlfree} and vertices $x,y \\in \\mathbb{Z}^2$ we could equivalently define $f(x,y) = f(\\gamma)$, where $\\gamma$ is any finite lattice path from $x$ to $y$. To see that it would then be well-defined (that is, only a function of $x,y$ and the configuration $\\Theta$) is a standard argument. If we suppose that $\\gamma_1$ and $\\gamma_2$ are finite lattice paths from $x$ to $y$ and $\\Theta$ is given as above, the concatenation of $\\gamma_1$ with $\\gamma_2$ (traversed in the opposite direction) is a circuit and thus has $f$-value zero. However, by definition, this is the difference of $f(\\gamma_1)$ and $f(\\gamma_2)$ and proves the claim.\n\n\nWe now give some properties about asymptotic Busemann functions that come over from the original model. The third says that $f$ retains translation covariance. This will allow us to prove the existence of almost-sure limits using the ergodic theorem in the next section.\n\\begin{prop}\n\\label{bd_f_by_tau_prop}\nThe reconstructed Busemann function satisfies the following properties for $x,y,z \\in \\ensuremath{\\mathbb{Z}^2}$.\n\\begin{enumerate}\n\\item \n\\begin{equation}\\label{eq: additivity}\nf(x,y) + f(y,z) = f(x,z) ~\\mu\\text{-almost surely}\\ . 
\n\\end{equation}\n\\item For $m=1,2$ \n\\begin{equation}\\label{eq: ftranslation}\nf(x,y)(\\tilde T_m \\Theta) = f(x-{\\mathbf{e}}_m,y-{\\mathbf{e}}_m)(\\Theta) ~\\mu\\text{-almost surely}\\ .\n\\end{equation}\n\\item \\begin{equation}\\label{eq: continuity}\nf(x,y):\\widetilde \\Omega \\to \\mathbb{R} \\text{ is continuous}\\ .\n\\end{equation}\n\\item $f$ is bounded by $\\tau$:\n\\begin{equation}\\label{eq: fboundtau}\n|f(x,y)| \\leq \\tau(x,y) ~\\mu\\text{-almost surely}\\ .\n\\end{equation}\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nThe first two properties follow from path-independence of $f$ and the third holds because $f$ is a sum of finitely many Busemann increments, each of which is a continuous function. We show the fourth property. For $x,y \\in \\ensuremath{\\mathbb{Z}^2}$, the event\n\\[\n\\{(\\omega,\\Theta) : |f(x,y)(\\Theta)| - \\tau(x,y)(\\omega) \\leq 0\\}\n\\]\nis closed because $|f(x,y)|-\\tau(x,y)$ is continuous. For every $\\alpha$, \\eqref{eq: bboundtau} gives $|B_\\alpha(x,y)| \\leq \\tau(x,y)$ with $\\mathbb{P}$-probability one, so the above event has $\\mu_\\alpha$-probability one. Taking limits and using \\eqref{eq: kallenbergclosed}, $\\mu(|f(x,y)(\\Theta)| \\leq \\tau(x,y)(\\omega)) = 1$.\n\\end{proof}\n\n\n\\subsection{Expected value of $f$}\nIn this section we compute $\\mathbb{E}_\\mu f(0,x)$ for all $x \\in \\mathbb{Z}^2$. The core of our proof is an argument of Hoffman \\cite{Hoffman2}, which was developed using an averaging argument due to Garet-Marchand \\cite{GM}. The presentation we give below is inspired by that of Gou\\'er\\'e \\cite[Lemma~2.6]{gouere}. In fact, the proof shows a stronger statement: without passing to a subsequence,\n\\[\n\\mathbb{E}_{\\mu_{n}^*}f(0,x) \\to g_\\varpi(x)\\ .\n\\]\n\n\\begin{thm}\\label{thm: expected_value}\nFor each $x \\in \\mathbb{Z}^2$, $\\mathbb{E}_\\mu f(0,x) = g_\\varpi(x)$.\n\\end{thm}\n\n\\begin{proof}\nWe will use an elementary lemma that follows from the shape theorem.\n\\begin{lem}\\label{lem: pointtoplane}\nThe following convergence takes place almost surely and in $L^1(\\mathbb{P})$:\n\\[\n\\frac{\\tau(0,L_\\alpha)}{\\alpha} \\to 1 \\text{ as } \\alpha \\to \\infty\\ .\n\\]\n\\end{lem}\n\n\\begin{proof}\nSince $\\alpha \\varpi \\in L_\\alpha$,\n\\[\n\\limsup_{\\alpha \\to \\infty} \\frac{\\tau(0,L_\\alpha)}{\\alpha} \\leq \\lim_{\\alpha \\to \\infty} \\frac{\\tau(0,\\alpha \\varpi)}{\\alpha} = 1\\ .\n\\]\nOn the other hand, given $\\varepsilon>0$ and any $\\omega$ for which the shape theorem holds, we can find $K$ such that for all $x \\in \\mathbb{R}^2$ with $\\|x\\|_1 \\geq K$, $\\tau(0,x) \\geq g(x)(1-\\varepsilon)$.
So if $\\alpha$ is large enough that all $x \\in L_\\alpha$ have $\\|x\\|_1 \\geq K$, then we can use \\eqref{eq: gvarpibound}:\n\\[\n\\tau(0,L_\\alpha) = \\min_{x \\in L_\\alpha} \\tau(0,x) \\geq (1-\\varepsilon) \\min_{x \\in L_\\alpha} g(x) \\geq (1-\\varepsilon)\\alpha\\ .\n\\]\nConsequently, $\\liminf_{\\alpha \\to \\infty} \\tau(0,L_\\alpha)\/\\alpha \\geq 1$, giving almost sure convergence in the lemma.\n\nFor $L^1$ convergence, note $0 \\leq \\tau(0,L_\\alpha)\/\\alpha \\leq \\tau(0,\\alpha \\varpi)\/\\alpha$, so the dominated convergence theorem and $L^1$ convergence of point to point passage times completes the proof.\n\\end{proof}\n\nFor any $x \\in \\mathbb{Z}^2$ and integer $n \\geq 1$, use the definition of $\\mu_n^*$ to write \n\\[\n\\mathbb{E}_{\\mu_n^*}(f(-x,0)) = \\frac{1}{n} \\left[ \\int_0^n \\mathbb{E} \\tau(-x,L_\\alpha)~\\mathrm{d} \\alpha - \\int_0^n \\mathbb{E} \\tau(0, L_\\alpha) ~\\mathrm{d} \\alpha \\right]\\ .\n\\]\nUsing translation covariance of passage times,\n\\[\n\\int_0^n \\mathbb{E} \\tau(-x,L_\\alpha)~\\mathrm{d} \\alpha = \\int_0^n \\mathbb{E} \\tau(0,L_{\\alpha + g_\\varpi(x)})~\\mathrm{d} \\alpha = \\int_{g_\\varpi(x)}^{n+g_\\varpi(x)} \\mathbb{E} \\tau(0,L_\\alpha) ~\\mathrm{d} \\alpha\\ .\n\\]\nTherefore\n\\begin{equation}\\label{eq: almostformula}\n\\mathbb{E}_{\\mu_n^*}(f(-x,0)) = \\frac{1}{n} \\left[ \\int_{n}^{n+g_\\varpi(x)} \\mathbb{E} \\tau(0,L_\\alpha) ~\\mathrm{d} \\alpha - \\int_0^{g_\\varpi(x)} \\mathbb{E} \\tau(0,L_\\alpha) ~\\mathrm{d} \\alpha \\right] \\ .\n\\end{equation}\n\n\n\nChoose $(n_k)$ to be an increasing sequence such that $\\mu_{n_k}^* \\to \\mu$ weakly. We claim that\n\\begin{equation}\\label{eq: momentconvergence}\n\\mathbb{E}_{\\mu_{n_k}^*} f(-x,0) \\to \\mathbb{E}_\\mu f(-x,0)\\ .\n\\end{equation}\nTo prove this, note that for any $R>0$, if we define the truncated variable\n\\[\nf_R(-x,0) = \\text{ sgn} f(-x,0) \\min\\{R, |f(-x,0)|\\}\\ ,\n\\]\nthen continuity of $f$ on $\\widetilde \\Omega$ gives $\\mathbb{E}_{\\mu_{n_k}^*} f_R(-x,0) \\to \\mathbb{E}_\\mu f_R(-x,0)$. To extend this to \\eqref{eq: momentconvergence}, it suffices to prove that for each $\\varepsilon>0$, there exists $R>0$ such that\n\\begin{equation}\\label{eq: limsupcondition}\n\\limsup_{k \\to \\infty} \\mathbb{E}_{\\mu_{n_k}^*} |f(-x,0)| I(|f(-x,0)| \\geq R) < \\varepsilon\\ ,\n\\end{equation}\nwhere $I(A)$ is the indicator of the event $A$. Because $\\mathbb{E}_{\\mu_{n_k}^*} f(-x,0)^2 \\leq \\mathbb{E} \\tau(-x,0)^2 < \\infty$ for all $k$ by \\eqref{eq: finitesecondmoment}, condition \\eqref{eq: limsupcondition} follows from the Cauchy-Schwarz inequality. This proves \\eqref{eq: momentconvergence}. 
\n\nCombining \\eqref{eq: almostformula} and \\eqref{eq: momentconvergence}, we obtain the formula\n\\begin{equation}\\label{eq: almostdoneagain}\n\\mathbb{E}_\\mu f(-x,0) = \\lim_{k \\to \\infty} \\frac{1}{n_k} \\int_{n_k}^{n_k+g_\\varpi(x)} \\mathbb{E} \\tau(0,L_\\alpha) ~\\mathrm{d} \\alpha = \\lim_{k \\to \\infty} \\int_0^{g_\\varpi(x)} \\frac{\\mathbb{E} \\tau(0,L_{\\alpha + n_k})}{n_k} ~\\mathrm{d} \\alpha\\ .\n\\end{equation}\nBy Lemma~\\ref{lem: pointtoplane}, for each $\\alpha$ between $0$ and $g_\\varpi(x)$,\n\\[\n\\lim_{k \\to \\infty} \\frac{\\mathbb{E} \\tau(0,L_{\\alpha + n_k})}{n_k} = \\lim_{k \\to \\infty} \\frac{\\mathbb{E} \\tau(0,L_{\\alpha + n_k})}{\\alpha + n_k} \\cdot \\frac{\\alpha + n_k}{n_k} = 1\\ .\n\\]\nSo using $\\mathbb{E} \\tau(0,L_{\\alpha + n_k}) \\leq \\mathbb{E} \\tau(0,L_{2n_k})$ for large $k$, we can pass the limit under the integral in \\eqref{eq: almostdoneagain} to get $\\mathbb{E}_\\mu f(0,x) = \\mathbb{E}_\\mu f(-x,0) = g_\\varpi(x)$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Limits for reconstructed Busemann functions}\\label{sec: limits}\n\nIn this section we study the asymptotic behavior of the reconstructed Busemann function $f$. We will see that $f$ is asymptotically a projection onto a line and if the boundary of the limit shape is differentiable at $\\varpi$, we give the explicit form of the hyperplane. Without this assumption we show that the line is a translate of a supporting line for $\\mathcal{B}$ at $\\varpi$.\n\nOne of the advantages of constructing $f$ from our measure $\\mu$ is that we can use the ergodic theorem and translation invariance to show the existence of limits. This gives us almost as much control on the Busemann function as we would have if we could show existence of the limit in \\eqref{eq: newmanlimit}. If we knew this, we would not need differentiability at $\\varpi$ to deduce the form of the random hyperplane for $f$; we could derive it from ergodicity and symmetry. \n\n\n\n\\subsection{Radial limits}\n\nIn this section we will prove the existence of radial limits for $f$. This is the first step to deduce a shape theorem, which we will do in the next section. We extend the definition of $f$ to all of $\\mathbb{R}^2 \\times \\mathbb{R}^2$ in the usual way: $f(x,y)$ is defined as $f(\\tilde x, \\tilde y)$ where $\\tilde x$ and $\\tilde y$ are the unique points in $\\ensuremath{\\mathbb{Z}^2}$ such that $x \\in \\tilde x + [-1\/2,1\/2)^2$ and $y \\in \\tilde y + [-1\/2,1\/2)^2$.\n\n\\begin{prop}\\label{prop: rationallimits}\nLet $q \\in \\mathbb{Q}^2$. Then\n\\[\n\\rho_q := \\lim_{n \\to \\infty} \\frac{1}{n} f(0,nq) \\text{ exists } \\mu\\text{-almost surely}\\ .\n\\]\n\\end{prop}\n\n\\begin{proof}\nChoose $M \\in \\mathbb{N}$ such that $Mq \\in \\mathbb{Z}^2$. We will first show that\n\\begin{equation}\\label{eq: firstrationallimit}\n\\lim_{n \\to \\infty} \\frac{1}{Mn} f(0,nMq) \\text{ exists } \\mu\\text{-almost surely}\\ .\n\\end{equation}\nTo do this, we note that since $\\tau(0,Mq) \\in L^2(\\mu)$ (from \\eqref{eq: finitesecondmoment}), it is also in $L^1$. Using \\eqref{eq: fboundtau}, $f(0,Mq) \\in L^1(\\mu)$ as well. Define the map $\\tilde T_q$ on $\\Omega_2$ as\n\\[\n\\left[ \\tilde T_q \\Theta \\right] (x) = (\\theta_1(x-Mq),\\theta_2(x-Mq))\\ .\n\\]\nThis is a composition of maps $\\tilde T_m$, $m=1,2$, so it is measure-preserving. 
By \\eqref{eq: additivity} and \\eqref{eq: ftranslation},\n\\[\nf(0,nMq)(\\Theta) = \\sum_{i=1}^n f((i-1)Mq,iMq)(\\Theta) = \\sum_{i=0}^{n-1} f(0,Mq)(\\tilde T_q^{-i} (\\Theta))\\ .\n\\]\nApplying the ergodic theorem finishes the proof of \\eqref{eq: firstrationallimit}.\n\nTo transform \\eqref{eq: firstrationallimit} into the statement of the proposition we need to ``fill in the gaps.'' Choose $M$ as above and for any $n$ pick $a_n \\in \\mathbb{Z}$ such that $a_nM \\leq n < (a_n+1)M$. Then\n\\[\n\\left| \\frac{f(0,nq)}{n} - \\frac{f(0,a_nMq)}{a_nM} \\right| \\leq \\left| \\frac{f(0,a_nMq)}{a_nM} \\right| \\left| 1 - \\frac{a_nM}{n} \\right| + \\frac{1}{n} \\left| f(0,a_nMq) - f(0,nq) \\right|\\ . \n\\]\nThe first term on the right converges to 0. To show the same for the second term we use the fact that $f(x,y) \\in L^1(\\mu,\\Omega_2)$ for all $x,y \\in \\mathbb{R}^2$. Indeed, the difference $f(0,a_nMq)-f(0,nq)$ is equal to $f(nq, a_nMq)$, which has the same distribution as $f(0,(a_nM-n)q)$. For each $\\varepsilon>0$,\n\\[\n\\sum_{n \\geq 1} \\mu(|f(0,(n-a_nM)q)| \\geq \\varepsilon n) \\leq \\frac{1}{\\varepsilon} \\sum_{i=1}^M \\| f(0,-iq)\\|_{L^1(\\mu)} < \\infty\\ .\n\\]\nSo only finitely many of the events $\\{|f(0,a_nMq) - f(0,nq)| \\geq \\varepsilon n\\}$ occur and we are done.\n\\end{proof}\n\n\n\nThe last proposition says that for each $q$ there exists a random variable $\\rho_q = \\rho(q,\\Theta)$ such that $\\mu$-almost surely, the above limit equals $\\rho_q$. Assume now that we fix $\\Theta$ such that this limit exists for all $q \\in \\mathbb{Q}^2$. We will consider $\\rho_q$ as a function of $q$. The next theorem states that $\\rho_q$ represents a random projection onto a vector $\\varrho$.\n\\begin{thm}\\label{thm: pizzapie}\nThere exists a random vector $\\varrho = \\varrho(\\Theta)$ such that\n\\[\n\\mu\\left( \\rho_q = \\varrho \\cdot q \\text{ for all } q \\in \\mathbb{Q}^2 \\right) = 1\\ .\n\\]\nFurthermore $\\varrho$ is translation invariant:\n\\[\n\\varrho(\\tilde T_m \\Theta) = \\varrho(\\Theta) \\text{ for } m=1,2\\ .\n\\]\n\\end{thm}\n\n\\begin{proof}\nWe will show that $q \\mapsto \\rho_q$ is a (random) linear map on $\\mathbb{Q}^2$. Specifically, writing an arbitrary $q \\in \\mathbb{Q}^2$ as $(q_1,q_2)$, we will show that\n\\begin{equation}\\label{eq: blackboard2}\n\\mu\\left(\\rho_q = q_1 \\rho_{{\\mathbf{e}}_1} + q_2 \\rho_{{\\mathbf{e}}_2} \\text{ for all } q \\in \\mathbb{Q}^2\\right) = 1\\ .\n\\end{equation}\nThen, setting $\\varrho = (\\rho_{{\\mathbf{e}}_1},\\rho_{{\\mathbf{e}}_2})$, we will have proved the theorem.\n\nThe first step is to show translation invariance of $\\rho_q$. Given $q \\in \\mathbb{Q}^2$, let $M \\in \\mathbb{N}$ be such that $Mq \\in \\mathbb{Z}^2$. 
For $m=1,2$, translation covariance implies\n\\begin{eqnarray*}\n|f(0,nMq)(\\tilde T_m \\Theta) - f(0,nMq)(\\Theta)| &=& |f(-{\\mathbf{e}}_m, nMq-{\\mathbf{e}}_m)(\\Theta) - f(0,nMq)(\\Theta)| \\\\\n&\\leq& |f(-{\\mathbf{e}}_m,0)(\\Theta)| + |f(nMq-{\\mathbf{e}}_m,nMq)(\\Theta)|\\ .\n\\end{eqnarray*}\nFurthermore, given $\\delta>0$,\n\\[\n\\sum_n \\mu\\left( |f(nMq-{\\mathbf{e}}_m, nMq)| > \\delta n\\right) \\leq \\sum_n \\mu\\left( |f(0, {\\mathbf{e}}_m)| > \\delta n\\right) \\leq \\frac{1}{\\delta} \\|f(0,{\\mathbf{e}}_m)\\|_{L^1(\\mu)} < \\infty\\ .\n\\]\nTherefore only finitely many of the events $\\{|f(nMq - {\\mathbf{e}}_m,nMq)| > \\delta n\\}$ occur and \n\\[\n\\rho_q(\\tilde T_m\\Theta) = \\lim_{n \\to \\infty} \\frac{f(0,nMq)(\\tilde T_m \\Theta)}{nM} = \\lim_{n \\to \\infty} \\frac{f(0,nMq)(\\Theta)}{nM} = \\rho_q(\\Theta) \\text{ almost surely}\\ .\n\\]\n\nTo complete the proof we show that $q \\mapsto \\rho_q$ is almost surely additive. Over $\\mathbb{Q}$, this suffices to show linearity and thus \\eqref{eq: blackboard2}. Let $q_1,q_2 \\in \\mathbb{Q}^2$ and choose $M \\in \\mathbb{N}$ with $Mq_1,Mq_2 \\in\\mathbb{Z}^2$. By Proposition~\\ref{prop: rationallimits}, for $\\varepsilon>0$, we can pick $N$ such that if $n \\geq N$ then the following hold:\n\\begin{enumerate}\n\\item $\\mu\\left( |(1\/nM)f(0,nMq_1) - \\rho_{q_1} | > \\varepsilon\/2 \\right) < \\varepsilon\/2$ and\n\\item $\\mu\\left( |(1\/nM)f(0,nMq_2) - \\rho_{q_2}| > \\varepsilon\/2 \\right) < \\varepsilon\/2$.\n\\end{enumerate}\nWriting $\\tilde T_{-q}(\\Theta)(x) = \\Theta(x+Mq)$ and using translation invariance of $\\rho_{q_2}$,\n\\begin{eqnarray*}\n&& f(0,nM(q_1+q_2))(\\Theta) - nM\\rho_{q_1}(\\Theta) - nM\\rho_{q_2}(\\Theta) \\\\\n&=& f(0,nMq_1)(\\Theta) - nM\\rho_{q_1}(\\Theta) + f(0,nMq_2)(\\tilde T_{-q_1}^n \\Theta) - nM\\rho_{q_2}(\\tilde T_{-q_1}^n \\Theta)\\ .\n\\end{eqnarray*}\nSo by translation invariance of $\\mu$ and items 1 and 2 above,\n\\begin{eqnarray*}\n&& \\mu(|(1\/nM)f(0,nM(q_1+q_2)) - (\\rho_{q_1}+\\rho_{q_2})| > \\varepsilon) \\\\\n&\\leq& \\mu(|(1\/nM)f(0,nMq_1) - \\rho_{q_1}| > \\varepsilon\/2) + \\mu(|(1\/nM)f(0,nMq_2) - \\rho_{q_2}| > \\varepsilon\/2) < \\varepsilon\\ .\n\\end{eqnarray*}\nThus $(1\/nM)f(0,nM(q_1+q_2))$ converges in probability to $\\rho_{q_1} + \\rho_{q_2}$. By Proposition~\\ref{prop: rationallimits}, this equals $\\rho_{q_1+q_2}$.\n\\end{proof}\n\n\n\\subsection{A shape theorem} \n\nWe will now upgrade the almost-sure convergence in each rational direction, from Proposition~\\ref{prop: rationallimits}, to a sort of shape theorem for the Busemann function $f$. The major difference is that, unlike in the usual shape theorem of first-passage percolation, the limiting shape of $f$ is allowed to be random.\n\n\\begin{thm}\n\\label{shapetheorem}\nFor each $\\delta>0$,\n\\begin{equation}\n\\label{shape_condition}\n\\mu\\left(|f(0,x) - x \\cdot \\varrho| < \\delta \\|x\\|_1 \\text{ for all } x \\text{ with } \\|x\\|_1 \\geq M \\text{ and all large } M \\right) = 1.\n\\end{equation}\n\\end{thm}\n\nAs in the proofs of the usual shape theorems, we will need a lemma which allows us to compare $f$ in different directions. A result showing that with positive probability, $f(0,x)$ grows at most linearly in $\\|x\\|_1$ will be sufficient for our purposes.
The fourth item of Proposition \\ref{bd_f_by_tau_prop} allows us to derive such a bound by comparison with the usual passage time $\\tau(0,x).$\n\n\\begin{lem}\nThere exist deterministic $K < \\infty$ and $p_g > 0$ depending only on the passage time distribution such that\n\\[\\ensuremath{\\mathbb{P}}\\left( \\sup_{\\substack{x \\in \\ensuremath{\\mathbb{Z}^2}\\\\x \\neq 0}} \\frac{\\tau(0,x)}{\\|x\\|_1} \\leq K\\right) = p_g > 0. \\]\n\\end{lem}\n\\begin{proof}\nBy the first-passage shape theorem, there exists $\\lambda < \\infty$ and $T, p_g > 0$ such that\n\\[\n\\ensuremath{\\mathbb{P}}\\left(\\forall t \\geq T, \\,B(t)\/t \\supseteq [-\\lambda,\\lambda]^2\\,\\right) = p_g\\ .\n\\]\n(Here we are using \\eqref{eq: normequivalence}.) Choosing $K = T + 2 \/ \\lambda$ completes the proof.\n\n\\end{proof}\n\nThe development of the shape theorem from this point is similar to that of the usual first-passage shape theorem for ergodic passage time distributions. \n\nWe will say that $z \\in \\mathbb{Z}^2$ is ``good\" for a given outcome if\n\\[ \n\\sup_{\\substack{x \\in \\ensuremath{\\mathbb{Z}^2}\\\\x \\neq z}} \\frac{\\tau(z,x)}{\\|x - z\\|_1} \\leq K\\ . \n\\]\nNote that $\\ensuremath{\\mathbb{P}}(z \\text{ is good}) = p_g>0$ for all $z \\in \\mathbb{Z}^2.$\n\n\\begin{lem}\n\\label{cheesy_lemma}\nLet $\\zeta$ be a nonzero vector with integer coordinates, and let $z_n =n\\zeta.$ Let $(n_k)$ denote the increasing sequence of integers such that $z_{n_k}$ is good. $\\ensuremath{\\mathbb{P}}$-almost surely, $(n_k)$ is infinite and $\\lim_{k \\rightarrow \\infty} (n_{k+1}\/n_k) = 1$.\n\\end{lem}\n\\begin{proof}\nThe ergodic theorem shows that $(n_k)$ is a.s. infinite. Let $B_i$ denote the event that $z_i$ is good.\nBy another application of the ergodic theorem,\n\\begin{equation} \n\\label{cheesier_still}\n\\frac{k}{n_k} = \\frac{1}{n_k} \\sum_{i=1}^{n_k} \\mathbf{1}_{B_i} \\longrightarrow p_g \\quad \\text{a.s.}\n\\end{equation}\nThus,\n\\[\\frac{n_{k+1}}{n_k} = \\left(\\frac{n_{k+1}}{k+1}\\right) \\left(\\frac{k}{n_k}\\right) \\left(\\frac{k+1}{k}\\right) \\longrightarrow 1 \\quad \\text{a.s.},\\]\nsince the first and second factors converge to $p_g$ and $p_g^{-1}$ by (\\ref{cheesier_still}).\n\n\\end{proof}\n\nIn what follows, we will use the fact that there is a positive density of good sites to show convergence of $f(0,z) \/ \\|z\\|_1$ in all directions. Given the convergence of $f(0,nq) \/ n$ for each rational $q,$ we will find enough good sites along lines close to $nq$ to let us to bound the difference $|f(0,nq) - f(0,z)|.$ To describe this procedure, we need to make several definitions. Call a vector $\\zeta$ satisfying the a.s. event of Lemma \\ref{cheesy_lemma} a good direction. We will extend this definition to $\\zeta \\in \\mathbb{Q}^2$: such a $\\zeta$ will be called a good direction if $m\\zeta$ is, where $m$ is the smallest natural number such that $m\\zeta \\in \\mathbb{Z}^2$. 
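\nThe mechanism in Lemma~\\ref{cheesy_lemma} uses nothing about the events $B_i$ beyond their stationarity and the positive density guaranteed by the ergodic theorem. As a purely illustrative toy computation (the independence below is an assumption made only for this sketch; the actual indicators $\\mathbf{1}_{B_i}$ are not independent), the following Python lines replace $\\mathbf{1}_{B_i}$ by i.i.d. Bernoulli($p_g$) flags and check numerically that $k\/n_k \\to p_g$ and $n_{k+1}\/n_k \\to 1$:\n\\begin{verbatim}\nimport random\n\nrandom.seed(1)\np_g = 0.3               # stand-in for the density of good sites; arbitrary\nflags = [random.random() < p_g for _ in range(200000)]\n\ngood = [i + 1 for i, b in enumerate(flags) if b]   # the sequence (n_k)\nk = len(good)\nprint('k \/ n_k at the end:', k \/ good[-1])         # close to p_g\n\nratios = [good[j + 1] \/ good[j] for j in range(k - 1)]\nprint('max of n_{k+1} \/ n_k over the last half:',\n      max(ratios[len(ratios) \/\/ 2:]))              # close to 1\n\\end{verbatim}\nIn the proof itself the indicators are only stationary and ergodic, and the ergodic theorem plays exactly the role that the law of large numbers plays in this toy version.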
\n\nBy countability, there exists a probability one event $\\Omega''$ on which each $\\zeta \\in \\mathbb{Q}^2$ is a good direction.\nFor each integer $M \\geq 1,$ let $V_M = \\left\\{ x\/M: x \\in \\ensuremath{\\mathbb{Z}^2}\\right\\},$ and let $V = \\cup_{M \\geq 1} V_M.$\nSet $B = \\{z \\in \\mathbb{R}^2: z \\in V, \\, \\|z\\|_1 = 1\\}$ and note that $B$ is dense in the unit sphere of $\\mathbb{R}^2$ (with norm $\\|\\cdot\\|_1$).\nBy Theorem \\ref{thm: pizzapie}, we can find a set $\\hat{\\Omega} \\subseteq \\Omega_2$ with $\\mu (\\hat{\\Omega}) = 1$ such that, for all $\\Theta \\in \\hat{\\Omega},$\n\\begin{equation}\\label{eq: planetolondon}\n\\lim_{n \\rightarrow \\infty} \\frac{1}{n} f(0,n z_0) (\\Theta) = z_0 \\cdot \\varrho(\\Theta) \\text{ for all } z_0 \\in B\\ .\n\\end{equation}\n\n\\begin{proof}[Proof of Theorem \\ref{shapetheorem}]\nAssume that there exist $\\delta > 0$ and an event $D_\\delta$ with $\\mu(D_\\delta)>0$ such that, for every outcome in $D_\\delta,$ there are infinitely many vertices $x \\in \\ensuremath{\\mathbb{Z}^2}$ with $\\left|f(0,x) - x \\cdot \\varrho\\right| \\geq \\delta \\|x\\|_1.$ Then $D_\\delta \\cap \\hat{\\Omega} \\cap \\Omega''$ is nonempty and so it contains some outcome $(\\omega,\\Theta,\\eta)$. We will derive a contradiction by showing that $(\\omega,\\Theta,\\eta)$, by way of its membership in these three sets, has contradictory properties.\n\nBy compactness of the $\\ell^1$ unit ball, we can find a sequence $\\{x_n\\}$ in $\\ensuremath{\\mathbb{Z}^2}$ with $\\|x_n\\|_1 \\rightarrow \\infty$ and $y \\in \\mathbb{R}^2$ with $\\|y\\|_1 = 1$ such that $x_n\/\\|x_n\\|_1 \\to y$ and \n\\begin{equation}\n\\label{non_converging_pizza}\n\\left| \\frac{f(0,x_n)(\\Theta)}{\\|x_n\\|_1} - y \\cdot \\varrho(\\Theta) \\right| > \\frac{\\delta}{2} \\text{ for all } n\\ .\n\\end{equation}\nLet $\\delta' > 0$ be arbitrary (we will ultimately take it to be small). Our first goal is the approximation of $x_n$ by multiples of some element of $B.$\nChoose $z \\in B$ such that $\\|z - y\\|_1 <\\delta'$ and let $\\{n_k\\}$ denote the increasing sequence of integers such that $n_k z$ is good. (Here if $z \\notin \\mathbb{Z}^2$, then $z$ being good means that $Mz$ is good, where $M$ was chosen after Lemma~\\ref{cheesy_lemma} so that $Mz \\in \\mathbb{Z}^2$. Therefore $(n_k)$ would then be of the form $(M l_k)$ for some increasing sequence $l_k$.)
Note that $n_{k+1}\/n_k \\to 1$ by Lemma \\ref{cheesy_lemma} so we are able to choose a $K > 0$ such that \n\\begin{equation}\\label{eq: planetolondon2}\nn_{k+1} < (1+\\delta') n_k \\text{ and } \\left|\\frac{f(0,n_k z)}{n_k} - \\varrho \\cdot z\\right| \\leq \\delta' \\text{ for all } k > K\\ .\n\\end{equation}\n\nBy the triangle inequality, the left-hand side of (\\ref{non_converging_pizza}) is bounded above by\n\\begin{align}\n\\label{pizza_telescope}\n \\left|\\frac{f(0,x_n)}{\\|x_n\\|_1} - \\frac{f(0,n_k z)}{\\|x_n\\|_1} \\right| + \\left|\\frac{f(0,n_k z)}{\\|x_n\\|_1} - \\frac{f(0,n_k z)}{n_k}\\right| + \\left| \\frac{f(0,n_k z)}{n_k} - \\varrho \\cdot z \\right| + \\left| \\varrho \\cdot z - \\varrho \\cdot y\\right|\n\\end{align}\nfor arbitrary $n$ and $n_k.$ Choose some $N_0$ such that\n$\\|x_n - \\|x_n\\|_1\\,y\\|_1 \\leq \\delta' \\|x_n\\|_1$ for all $n > N_0,$ and note that\n\\begin{equation}\\label{ex_to_zee}\n \\left\\|x_n - \\|x_n\\|_1 z\\right\\|_1 \\leq \\left\\|x_n - \\|x_n\\|_1 y \\right\\|_1 + \\|x_n\\|_1\\left\\| y - z \\right\\|_1 \\leq 2 \\|x_n\\|_1 \\delta' \\text{ for } n > N_0\\ .\n\\end{equation}\nFor any $n$, let $k=k(n)$ be the index such that $n_{k+1} \\geq \\|x_n\\|_1 > n_k.$ If $n$ is so large that $k(n) > K$, then $\\|\\, \\|x_n\\|_1 z - n_k z\\|_1 < \\delta' \\|x_n\\|_1.$ Combining this observation with (\\ref{ex_to_zee}) gives\n\\begin{equation}\n\\label{ex_to_zee_2}\n\\|x_n - n_k z \\|_1 \\leq 3 \\delta' \\|x_n\\|_1 \\text{ for } \\|x_n\\|_1 \\in (n_k,n_{k+1}] \\text{ when } k = k(n) > K\\ .\n\\end{equation}\n\n\nFor the remainder of the proof, fix any $n > N_0$ such that $k = k(n) > K$, so that (\\ref{ex_to_zee_2}) holds. We will now control the terms in (\\ref{pizza_telescope}), working our way from right to left. The rightmost term may be bounded by noting\n\\[ \n| \\varrho \\cdot z - \\varrho \\cdot y| = | \\varrho \\cdot (z - y) | \\leq \\|z-y\\|_2 \\|\\varrho\\|_2 \\leq \\delta' \\|\\varrho\\|_2\\ .\n\\]\nThe second term from the right is bounded above by $\\delta'$ by \\eqref{eq: planetolondon2}. To bound the third term from the right, note that $n_k < \\|x_n\\|_1 \\leq n_{k+1}$, so by \\eqref{eq: planetolondon2},\n\\begin{align*}\n\\left|\\frac{f(0,n_k z)}{\\|x_n\\|_1} - \\frac{f(0,n_k z)}{n_k}\\right| &= \\left|\\frac{f(0,n_k z)}{n_k}\\right| \\left(1 - \\frac{n_k}{\\|x_n\\|_1} \\right) \\,\\\\\n&\\leq \\left[\\left| \\varrho \\cdot z\\right| +\\delta' \\right] \\left( 1- \\frac{1}{1 + \\delta'}\\right)\\ .\n\\end{align*}\nIt remains to bound the first term of (\\ref{pizza_telescope}). To do this, note that by \\eqref{ex_to_zee_2},\n\\[\n|f(0,x_n) - f(0,n_k z)| = |f(n_k z, x_n)| \\leq \\tau(n_k z, x_n) \\leq K \\|x-n_k z\\|_1 \\leq 3K \\delta' \\|x_n\\|_1\\ .\n\\]\nSo \n\\[ \n\\left|\\frac{f(0,x_n)}{\\|x_n\\|_1} - \\frac{f(0,n_k z)}{\\|x_n\\|_1}\\right| \\leq 3K \\delta'\\ .\n\\]\n\nApplying our estimates for each term in (\\ref{pizza_telescope}) to the left side of (\\ref{non_converging_pizza}) gives\n\\[\n\\frac{\\delta}{2} \\leq 3K \\delta' + (|\\varrho \\cdot z|+\\delta') \\left( 1- \\frac{1}{1 + \\delta'}\\right) + \\delta' + \\delta' \\|\\varrho\\|_2\\ .\n\\]\nBecause this holds for all $\\delta' >0,$ and because the right-hand side goes to zero as $\\delta' \\rightarrow 0,$ we have derived a contradiction and proved the theorem.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection{General properties of $\\varrho$}\n\nIn this short section we study the random vector $\\varrho$. 
In the case that $\\partial \\mathcal{B}$ is differentiable at $\\varpi$, the vector $\\varrho$ is deterministic and we give the explicit form. \n\nThe main theorem of the section is below. It says that the line\n\\[\nL_\\varrho : = \\{x \\in \\mathbb{R}^2 : \\varrho \\cdot x =1\\}\n\\]\nis $\\mu$-almost surely a supporting line for $\\mathcal{B}$ at $\\varpi$. \n\n\\begin{thm}\n\\label{rational_directions}\nWith $\\mu$-probability one, $\\varrho \\cdot \\varpi = 1$ and $\\varrho \\cdot x \\leq 1$ for all $x \\in \\mathcal{B}$. Thus $L_\\varrho$ is a supporting line for $\\mathcal{B}$ at $\\varpi$.\n\\end{thm}\n\nThis theorem has an important corollary. It follows directly from the fact that there is a unique supporting line for $\\mathcal{B}$ at points of differentiability of $\\partial \\mathcal{B}$.\n\\begin{cor}\n\\label{cor: diff}\nIf $\\partial \\mathcal{B}$ is differentiable at $\\varpi$ then\n\\[ \n\\mu\\big( \\varrho = (g_\\varpi({\\mathbf{e}}_1),g_\\varpi({\\mathbf{e}}_2)) \\big) = 1\\ .\n\\]\n\\end{cor}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{rational_directions}]\n\nUsing Theorem~\\ref{thm: expected_value}, we first find the expected value of $\\varrho \\cdot y$ for $y \\in \\mathbb{R}^2$. We simply apply the dominated convergence theorem with the bound $|f(0,my)| \\leq \\tau(0,my)$. Letting $y_m \\in \\mathbb{Z}^2$ be such that $my \\in y_m + [-1\/2,1\/2)^2$,\n\\[\n\\mathbb{E}_\\mu (\\varrho \\cdot y) = \\lim_{m \\to \\infty} \\frac{1}{m}\\mathbb{E}_\\mu f(0,my) = \\lim_{m \\to \\infty} g_\\varpi(y_m\/m) = g_\\varpi(y)\\ .\n\\]\nThe theorem follows from this statement and\n\\begin{equation}\\label{eq: nachosgrande}\n\\mu\\left(x \\cdot \\varrho \\leq g(x) \\text{ for all } x \\in \\mathcal{B}\\right)=1\\ .\n\\end{equation}\nIndeed, assuming this, we have\n\\[\n\\mu(\\varrho\\cdot \\varpi \\leq 1)=1 \\text{ and } \\mathbb{E}_\\mu(\\varrho \\cdot \\varpi) = g_\\varpi(\\varpi) = 1\\ ,\n\\]\ngiving $\\varrho \\cdot \\varpi = 1$ with $\\mu$-probability one. To prove \\eqref{eq: nachosgrande}, first take $x \\in \\mathbb{Q}^2 \\cap \\mathcal{B}$. Then by \\eqref{eq: fboundtau}, for all $n$, $f(nx) \\leq \\tau(nx)$ with $\\mu$-probability one. Dividing by $n$ and taking limits with Proposition~\\ref{prop: rationallimits} and the shape theorem we get $x \\cdot \\varrho \\leq g(x)$. For non-rational $x \\in \\mathcal{B}$ we extend the inequality by almost sure continuity of both sides in $x$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Geodesic graphs}\\label{sec: GG}\n\nIn this section we study the behavior of $\\mu$ on $\\Omega_3$. Given $\\eta \\in \\Omega_3$ recall from Section~\\ref{sec: mudef} the definition of the geodesic graph $\\mathbb{G}$ of $\\eta$ as the directed graph induced by the edges $e$ for which $\\eta(e)=1$. In this section we prove a fundamental property about infinite directed paths in this graph which relates them to the asymptotic Busemann function constructed from $\\Theta$.\n\n\n\\subsection{Basic properties}\n\nWe begin by showing that properties of $\\eta_\\alpha$ from Section~\\ref{subsec: GG} carry over to $\\eta$. We use some new notation. We say that $y\\in \\mathbb{Z}^2$ is connected to $z \\in \\mathbb{Z}^2$ in $\\mathbb{G}$ (written $y \\to z$) if there exists a sequence of vertices $y=y_0, y_1, \\ldots, y_n = z$ such that $\\eta(\\langle y_k,y_{k+1}\\rangle) = 1$ for all $k=0, \\ldots, n-1$. 
We say that a path in $\\mathbb{G}$ is a geodesic (for the configuration $(\\omega,\\Theta,\\eta)$) if it is a geodesic in $\\omega$.\n\n\\begin{prop}\\label{prop: firstGG2}\nWith $\\mu$-probability one, the following statements hold for $x,y,z \\in \\ensuremath{\\mathbb{Z}^2}$.\n\\begin{enumerate}\n\\item Each directed path in $\\mathbb{G}$ is a geodesic.\n\\item If $x \\to y$ in $\\mathbb{G}$ then $f(x,y) = \\tau(x,y)$.\n\\item If $x \\to z$ and $y \\to z$ in $\\mathbb{G}$ then $f(x,y) = \\tau(x,z) - \\tau(y,z)$.\n\\item There exists an infinite self-avoiding directed path starting at $x$ in $\\mathbb{G}$.\n\\end{enumerate}\n\\end{prop}\n\n\n\\begin{proof}\nThe third item follows directly from the second and additivity of $f$ (from \\eqref{eq: additivity}). For the first item, if $\\gamma$ is a deterministic finite directed path, write $A_\\gamma$ for the event that all edges of $\\gamma$ are edges of $\\mathbb{G}$ and\n\\[\nB_\\gamma = A_\\gamma^c \\cup \\left( A_\\gamma \\cap \\{\\gamma \\text{ is a geodesic}\\}\\right)\\ .\n\\]\nThe event in question equals the intersection over all finite $\\gamma$'s of $B_\\gamma$, so it suffices to show that for each $\\gamma$, $\\mu(B_\\gamma)=1$.\n\nBy part 1 of Proposition~\\ref{prop: firstGG}, for all $\\alpha \\in \\mathbb{R}$ the $\\mathbb{P}$-probability that all directed paths in $\\mathbb{G}_\\alpha(\\omega)$ are geodesics is 1. By pushing forward to $\\widetilde \\Omega$, for each $\\alpha,~ \\mu_\\alpha(B_\\gamma) = 1$ and thus $\\mu_n^*(B_\\gamma) = 1$ for all $n$. Once we show that $B_\\gamma$ is a closed event, we will be done, as we can then apply \\eqref{eq: kallenbergclosed}. To show this we note that the event that a given finite path is a geodesic is a closed event. Indeed, letting $\\gamma_1$ and $\\gamma_2$ be finite paths, the function $\\tau(\\gamma_1) - \\tau(\\gamma_2)$ is continuous on $\\widetilde \\Omega$. Therefore the event $\\{\\omega \\in \\Omega_1 : \\tau(\\gamma_1) \\leq \\tau(\\gamma_2)\\}$ is closed. We then write\n\\[\n\\{\\gamma_1 \\text{ is a geodesic}\\} = \\bigcap_{\\gamma_2} \\{\\tau(\\gamma_1) \\leq \\tau(\\gamma_2)\\}\\ ,\n\\]\nwhere the intersection is over all finite paths $\\gamma_2$ with the same endpoints as those of $\\gamma_1$. Thus $\\{\\gamma_1 \\text{ is a geodesic}\\}$ is closed. Since $A_\\gamma$ depends on finitely many edge variables $\\eta(e)$, it is closed and its complement is closed. Therefore $B_\\gamma$ is closed and we are done.\n\nFor item 2, we write $\\gamma_{xy}$, any path from $x$ to $y$ in $\\mathbb{G}$, in order as $x=x_0, x_1, \\ldots, x_n = y$ and use additivity of $f$:\n\\[\nf(x,y) = \\sum_{i=0}^{n-1} f(x_i,x_{i+1})\\ .\n\\]\nFor each $i$, $x_i \\to x_{i+1}$, and by item 1, $\\gamma_{xy}$ is a geodesic. This means that we only need to show that if $x$ and $y$ are neighbors such that $\\eta(\\langle x,y \\rangle) = 1$ then $f(x,y) = \\omega_{\\langle x,y \\rangle}$, the passage time of the edge between $x$ and $y$. By part 2 of Proposition~\\ref{prop: firstGG}, for each $\\alpha$, with $\\mathbb{P}$-probability one, if $\\eta_\\alpha(\\langle x,y \\rangle) = 1$ then $B_\\alpha(x,y) = \\omega_{\\langle x,y \\rangle}$. By similar reasoning to that in the last item,\n\\[\n\\{\\eta(\\langle x,y \\rangle) = 0\\} \\cup \\left( \\{ \\eta(\\langle x,y \\rangle) = 1\\} \\cap \\{f(x,y) = \\omega_{\\langle x,y \\rangle} \\} \\right)\n\\]\nis closed and since it has $\\mu_\\alpha$-probability 1 for all $\\alpha$, it also has $\\mu$-probability one.\n\nWe now argue for item 4. 
By translation-invariance we can just prove it for $x=0$. For $n \\geq 1$ let $A_n \\subseteq \\Omega_3$ be the event that there is a self-avoiding directed path starting at 0 in $\\mathbb{G}$ that leaves $[-n,n]^2$. We claim that $\\mu(A_n) = 1$ for all $n$. Taking $n \\to \\infty$ will prove item 4. \n\nFor each $\\alpha>0$ so large that $[-n,n]^2$ is contained on one side of $L_\\alpha$, let $\\gamma$ be a geodesic from $0$ to $L_\\alpha$. This path is contained in $\\mathbb{G}_\\alpha$. We may remove loops from $\\gamma$ so that it is self-avoiding, and still a geodesic. It will also be directed in the correct way: as we traverse the path from 0, each edge will be directed in the direction we are traveling. So for all large $\\alpha>0$, with $\\mathbb{P}$-probability one, there is a self-avoiding directed path starting at 0 in $\\mathbb{G}_\\alpha$ that leaves $[-n,n]^2$. Thus $\\mu_\\alpha(A_n) = 1$ for all large $\\alpha$ and $\\mu_{n_k}^*(A_n) \\to 1$ as $k \\to \\infty$. The indicator of $A_n$ is continuous on $\\widetilde \\Omega$, as $A_n$ depends on $\\eta(f)$ for finitely many edges $f$, so $\\mu(A_n)=1$.\n\\end{proof}\n\n\n\n\n\\begin{prop}\\label{prop: secondGG2}\nAssume {\\bf A1'} or {\\bf A2'}. With $\\mu$-probability one, the following statements hold.\n\\begin{enumerate}\n\\item Each vertex in $\\ensuremath{\\mathbb{Z}^2}$ has out-degree 1 in $\\mathbb{G}$. Consequently from each vertex $x$ emanates exactly one infinite directed path $\\Gamma_x$.\n\\item Viewed as an undirected graph, $\\mathbb{G}$ has no circuits. \n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nFor $x \\in \\mathbb{Z}^2$, let $A_x\\subseteq \\widetilde \\Omega$ be the event that $\\eta(\\langle x,y\\rangle) = 1$ for only one neighbor $y$ of $x$. Note that the indicator of $A_x$ is a bounded continuous function, so since $\\mu_\\alpha(A_x) = 1$ for all $\\alpha$ such that $x$ is not within Euclidean distance $1$ of $L_\\alpha$ (from part 1 of Proposition~\\ref{prop: secondGG} -- here $\\hat S$ is contained in the set of vertices within distance 1 of $L_\\alpha$) it follows that $\\mu(A_x)=1$. For each $z$ that is not a neighbor of $x$, $\\eta(\\langle x,z \\rangle)=0$ with $\\mu_\\alpha$-probability one for all $\\alpha$. This similarly implies that in $\\mathbb{G}$ with $\\mu$-probability one, there is no edge between $x$ and such a $z$.\n\nTo prove the second statement, fix any circuit $\\mathcal{C}$ in $\\mathbb{Z}^2$ and let $A_\\mathcal{C}$ be the event that each edge of $\\mathcal{C}$ is in $\\mathbb{G}$. Because there are no circuits in $\\mathbb{G}_\\alpha$ with $\\mathbb{P}$-probability one, we have $\\mu_n^*(A_\\mathcal{C})=0$ for all $n$. The indicator of $A_\\mathcal{C}$ is a continuous function on $\\widetilde \\Omega$, so we may take limits and deduce $\\mu(A_\\mathcal{C})=0$. There are a countable number of circuits, so we are done. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Asymptotic directions}\nRecall the definition $L_\\varrho = \\{x \\in \\mathbb{R}^2: x \\cdot \\varrho = 1\\}$ for the vector $\\varrho = \\varrho(\\Theta)$ of Theorem~\\ref{thm: pizzapie}. Set \n\\begin{equation}\\label{eq: on_skype}\nJ_\\varrho = \\{ \\theta : L_\\varrho \\text{ touches } \\mathcal{B} \\text{ in direction } \\theta\\}\\ .\n\\end{equation}\nThe main theorem of this subsection is as follows.\n\\begin{thm}\\label{thm: nachostheorem}\nWith $\\mu$-probability one, for all $x \\in \\ensuremath{\\mathbb{Z}^2}$, the following holds. 
Each directed infinite self-avoiding path in $\\mathbb{G}$ which starts at $x$ is asymptotically directed in $J_{\\varrho}$.\n\\end{thm}\n\n\n\\begin{proof}\nWe will prove the theorem for $x=0$. Assuming we do this, then using translation invariance of $\\mu$ and $\\varrho$ it will follow for all $x$.\n\nLet $\\varepsilon_k = 1\/k$ for $k \\geq 1$ and $\\delta>0$. We will show that if $S_0= \\{x \\in \\mathbb{Z}^2 : 0 \\to x$ in $\\mathbb{G}\\}$ then\n\\begin{equation}\\label{eq: nachos1}\n\\text{ for each } k \\geq 1,~ \\mu(\\arg x \\in (J_\\varrho)_{\\varepsilon_k} \\text{ for all but finitely many } x \\in S_0) > 1-\\delta\\ .\n\\end{equation}\nHere we write $(J_\\varrho)_{\\varepsilon_k}$ for all angles $\\theta$ with $dist(\\theta,\\theta')<\\varepsilon_k$ for some $\\theta' \\in J_\\varrho$. The line $L_\\varrho$ only touches $\\mathcal{B}$ in directions in $J_\\varrho$ so by convexity, $v_\\theta \\cdot \\varrho < 1$ for all $\\theta \\notin J_\\varrho$. Since the set of angles not in $(J_\\varrho)_{\\varepsilon_k}$ is compact in $[0,2\\pi)$ (using the metric $dist$), we can find a random $a \\in (0,1)$ with $v_\\theta \\cdot \\varrho < 1-a$ for all $\\theta \\notin (J_\\varrho)_{\\varepsilon_k}$. We can then choose $a$ to be deterministic such that\n\\begin{equation}\\label{eq: nachos2}\n\\mu\\left( v_\\theta \\cdot \\varrho < 1-a \\text{ for all } \\theta \\notin (J_\\varrho)_{\\varepsilon_k} \\right) > 1-\\delta\/3\\ .\n\\end{equation}\n\nBy the shape theorem there exists $M_0$ such that $M \\geq M_0$ implies\n\\[\n\\mathbb{P}( \\tau(0,x) \\geq g(x)(1-a\/2) \\text{ for all } x \\text{ with } \\|x\\|_1 \\geq M) > 1-\\delta\/3\\ .\n\\]\nThe marginal of $\\mu$ on $\\Omega_1$ is $\\mathbb{P}$ so this holds with $\\mu$ in place of $\\mathbb{P}$. By part 2 of Proposition~\\ref{prop: firstGG2}, \n\\begin{equation}\\label{eq: nachos3}\n\\mu( f(x) \\geq g(x)(1-a\/2) \\text{ for all } x \\text{ with } \\|x\\|_1 \\geq M \\text{ and } 0 \\to x) > 1-\\delta\/3\\ .\n\\end{equation}\nChoose $C>0$ such that $\\|x\\|_1 \\leq Cg(x)$ for all $x \\in \\mathbb{R}^2$. This is possible by \\eqref{eq: normequivalence}. By Theorem~\\ref{shapetheorem}, there exists $M_1 \\geq M_0$ such that $M \\geq M_1$ implies\n\\[\n\\mu\\left( |f(x) - x\\cdot\\varrho|< \\frac{a}{2C} \\|x\\|_1 \\text{ for all } x \\text{ with } \\|x\\|_1 \\geq M \\right) > 1-\\delta\/3\\ .\n\\]\nThis implies that for $M \\geq M_1$,\n\\begin{equation}\\label{eq: nachos4}\n\\mu\\left( |f(x) - x\\cdot\\varrho| < \\frac{a}{2} g(x) \\text{ for all } x \\text{ with } \\|x\\|_1 \\geq M \\right) > 1-\\delta\/3\\ .\n\\end{equation}\n\nWe claim that the intersection of the events in \\eqref{eq: nachos2}, \\eqref{eq: nachos3} and \\eqref{eq: nachos4} implies the event in \\eqref{eq: nachos1}. Indeed, take a configuration in the intersection of the three events for some $M \\geq M_1$. For a contradiction, assume there is an $x \\in S_0$ with $\\arg x \\notin (J_\\varrho)_{\\varepsilon_k}$ and $\\|x\\|_1 \\geq M$. Then \n\\[\n(x\/g(x)) \\cdot \\varrho < 1-a \\text{ by \\eqref{eq: nachos2}}\\ .\n\\]\nHowever, since the event in \\eqref{eq: nachos3} occurs and $\\|x\\|_1 \\geq M$,\n\\[\nf(0,x) \\geq g(x)(1-a\/2)\\ .\n\\]\nLast, as the event in \\eqref{eq: nachos4} occurs,\n\\[\nf(0,x) < x \\cdot \\varrho + \\frac{a}{2} g(x)\\ .\n\\]\nCombining these three inequalities,\n\\[\ng(x)(1-a\/2) \\leq x \\cdot \\varrho + (a\/2)g(x) < g(x)(1-a) + (a\/2)g(x)\\ ,\n\\]\nor $g(x)(1-a\/2) < g(x)(1-a\/2)$, a contradiction. 
This completes the proof.\n\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Coalescence in $\\mathbb{G}$}\\label{sec: coalesceG}\n\nIn this section we prove that all directed infinite paths coalesce in $\\mathbb{G}$. Recall that under either {\\bf A1'} or {\\bf A2'}, for $x \\in \\mathbb{Z}^2$, $\\Gamma_x$ is the unique infinite directed path in $\\mathbb{G}$ starting at $x$.\n\n\\begin{thm}\\label{thm: Gcoalescethm}\nAssume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property. With $\\mu$-probability one, for each $x,y \\in \\mathbb{Z}^2$, the paths $\\Gamma_x$ and $\\Gamma_y$ coalesce.\n\\end{thm}\n\nThe proof will be long, so we first explain the main ideas. We apply the technique of Licea-Newman \\cite{LN}, whose central tool is a Burton-Keane type argument \\cite{burtonkeane}. We proceed by contradiction, so suppose there are vertices $x, y$ such that $\\Gamma_x$ and $\\Gamma_y$ do not coalesce. By results of the last section, they cannot even intersect. We show in Sections~\\ref{sec: building_blocks} and~\\ref{sec: Bprime} that there are many triples of non-intersecting paths $\\Gamma_{x_1}, \\Gamma_{x_2}$ and $\\Gamma_{x_3}$ such that $\\Gamma_{x_2}$ is ``shielded'' from all other infinite paths in $\\mathbb{G}$. To do this, we must use the information in Theorem~\\ref{thm: nachostheorem} about asymptotic directions. A contradiction comes in Section~\\ref{sec: contradiction} from translation invariance because when $\\Gamma_{x_2}$ is shielded, the component of $x_2$ in $\\mathbb{G}$ has a unique least element in a certain lexicographic-like ordering of $\\mathbb{Z}^2$. This is a different concluding argument than that given in \\cite{LN}, where these shielded paths are used for a Burton-Keane ``lack of space'' proof.\n\n\nWe now give the proof. For the entirety we will assume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property.\n\n\n\n\n\n\n\n\n\n\\subsection{Constructing ``building blocks''}\\label{sec: building_blocks}\nAssume for the sake of contradiction that there are disjoint $\\Gamma_x$'s in $\\mathbb{G}$. Then for some vertex $z_0,$ the event $A_0(z_0)\\subseteq \\widetilde \\Omega$ has positive $\\mu$-probability, where \n\\[\nA_0(z_0) = \\{\\Gamma_{z_0} \\text{ and } \\Gamma_0 \\text{ share no vertices}\\}\\ .\n\\] \nWe begin with a geometric lemma. It provides a (random) line such that with probability one, any path that is asymptotically directed in $J_\\varrho$ (from \\eqref{eq: on_skype}) intersects this line finitely often. We will need some notation which is used in the rest of the proof.\n\nLet $\\varpi'$ be a vector with \n\\begin{equation}\\label{eq: varpiprimedef}\n\\arg \\varpi' \\in \\{j \\pi \/ 4,\\, j = 0, \\ldots,\\,7\\} \\text{ and } \\|\\varpi'\\|_\\infty = 1\\ ,\n\\end{equation}\nwhere $\\|\\cdot\\|_\\infty$ is the $\\ell^\\infty$ norm. (A precise value of $j$ will be fixed shortly.)\nDefine (for $N \\in \\mathbb{N}$) $L'_N = \\{z \\in \\mathbb{R}^2: \\, \\varpi' \\cdot z = N\\}.$ For such an $N$ and for $x \\in \\ensuremath{\\mathbb{Z}^2},$ write $x \\prec L'_N$ if $\\varpi' \\cdot x < N$ and $x \\succ L'_N$ if $\\varpi' \\cdot x > N.$ The symbols $\\preceq$ and $\\succeq$ are interpreted in the obvious way. We use the terms ``far side of $L'_N$\" and ``near side of $L'_N$\" for the sets of $x \\in \\mathbb{R}^2$ with $x \\succ L'_N$ and $x \\prec L'_N,$ respectively. 
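For orientation, here is a concrete instance of this notation (purely illustrative; any admissible $\\varpi'$ behaves the same way): if $\\varpi' = {\\mathbf{e}}_1$ and $N = 3$, then\n\\[\nL'_3 = \\{z \\in \\mathbb{R}^2 : z \\cdot {\\mathbf{e}}_1 = 3\\}\\ ,\n\\]\nthe near side of $L'_3$ consists of the points whose first coordinate is less than $3$, and the far side of those whose first coordinate exceeds $3$. 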
Note that any lattice path $\\gamma$ intersecting both sides of $L'_N$ contains a vertex $z \\in L'_N.$\n\n\n\n\\begin{lem}\\label{lem: varpilemma}\nThere is a measurable choice of $\\varpi'$ as in \\eqref{eq: varpiprimedef} such that with $\\mu$-probability one, the following holds. For each vertex $x$ and each integer $N$, \n\\[\n\\Gamma_x \\cap \\{z \\in \\mathbb{Z}^2 : z \\preceq L'_N\\} \\text{ is finite}\\ .\n\\] \nIn other words, $\\Gamma_x$ eventually lies on the far side of $L'_N$ for all $x$ and $N$.\n\\end{lem}\n\n\\begin{proof}\nThe limit shape $\\mathcal{B}$ is convex and compact, so it has an extreme point $p$. Because it is symmetric with respect to the rotation $R$ of $\\mathbb{R}^2$ by angle $\\pi\/2$, the points $p_i = R^i p$, $i=1, \\ldots, 3$ are all extreme points of $\\mathcal{B}$. $J_\\varrho$ is an interval of angles corresponding to points of contact between $\\mathcal{B}$ and one of its supporting lines, so it is connected (in the topology induced by $dist$) and must lie between (inclusively) $\\arg p_i$ and $\\arg p_{i+1}$ for some $i=0, \\ldots, 3$ (here we identify $p_4=p_0$). Therefore $\\mathrm{diam} ~J_\\varrho \\leq \\pi\/2$ almost surely and contains at most three elements of the set $\\{j\\pi\/4 : j = 0, \\ldots, 7\\}$ (and they must be consecutive). Choose five of the remaining elements to be consecutive and label them $j_1\\pi\/4, \\ldots, (j_1+4)\\pi\/4$. The interval $[j_1\\pi\/4,(j_1+4)\\pi\/4]$ defines a half-plane $H$ in $\\mathbb{R}^2$ and since the distance between this interval and $J_\\varrho$ is positive (measured with $dist$), for all sufficiently small $\\varepsilon>0$, the sector\n\\[\n\\{x \\in \\mathbb{R}^2 : x \\neq 0 \\text{ and } dist(\\arg x, \\phi) < \\varepsilon \\text{ for some } \\phi \\in J_\\varrho\\}\n\\]\nis contained in $H^c$. This implies the statement of the lemma for a (random) $\\varpi'$ equal to the normal to $H$. Since $\\varpi'$ can be chosen as a measurable function of $\\varrho$ (which is clearly Borel measurable on $\\widetilde \\Omega$), we are done.\n\\end{proof}\n\n\nFor the rest of the proof, fix a deterministic $\\varpi'$ as in \\eqref{eq: varpiprimedef} that satisfies Lemma~\\ref{lem: varpilemma} with positive probability on the event $A_0(z_0)$. (This is possible because there are only eight choices for $\\varpi'$.) Let $A_0'(0,z_0)$ be the intersection of $A_0(z_0)$ and the event in the lemma. On $A_0'(0,z_0)$, $\\Gamma_0$ and $\\Gamma_{z_0}$ eventually cease to intersect $L'_0.$ In particular, they each have a last intersection with $L'_0.$ Since there are only countably many possible pairs of such last intersections, we see that some pair $(y, y')$ in $L'_0$ occurs with positive probability; that is, $\\mu(A(y,y'))>0$, where $A(y,y')$ is defined by the conditions\n\\begin{enumerate}\n\\item[I.] $\\Gamma_y \\cap \\Gamma_{y'} = \\varnothing;$\n\\item[II.] $\\Gamma_y$ intersects $L'_0$ only at $y$; $\\Gamma_{y'}$ intersects $L'_0$ only at $y'$ and\n\\item[III.] $\\Gamma_u \\cap L_N'$ is nonempty and bounded for $u=y,y'$ and all integers $N \\geq 0$.\n\\end{enumerate}\n(Note that condition III follows directly from the preceding lemma because $\\Gamma_u$ contains infinitely many vertices.) By translation invariance, there exists $z \\in L_0'$ with $\\mu(A(0,z))>0$.\n\nFix \n\\begin{equation}\\label{eq: chicken_alfredo}\n\\varsigma = \\text{ a nonzero vector with the smallest integer coordinates normal to } \\varpi'\n\\end{equation}\n(it will be a rotation of either (0,1) or (1,1) by a multiple of $\\pi\/2$). 
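For concreteness (an illustrative choice of ours, not used in the argument): if $\\varpi' = {\\mathbf{e}}_1$ then one may take\n\\[\n\\varsigma = (0,1)\\ ,\n\\]\nwhile $\\varpi' = (1,1)$ gives $\\varsigma = (1,-1)$; in both cases $\\varsigma \\cdot \\varpi' = 0$ and $\\|\\varsigma\\|_\\infty = 1$. 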
Defining $\\tilde T_\\varsigma : \\widetilde \\Omega \\to \\widetilde \\Omega$ as the translation by $\\varsigma$ (that is, $\\tilde T_1^{a_1} \\circ \\tilde T_2^{a_2}$, where $\\varsigma = a_1{\\mathbf{e}}_1 + a_2 {\\mathbf{e}}_2$),\n\\[ \n\\mathbf{1}_{A(0,z)}\\left((\\omega,\\Theta,\\eta)\\right) = \\mathbf{1}_{A(\\varsigma,z+\\varsigma)}\\left(\\widetilde{T}_{\\varsigma}(\\omega,\\Theta,\\eta)\\right)\\ .\n\\]\nSince $\\mu$ is invariant under the action of $\\widetilde{T}_{\\varsigma},$ the ergodic theorem implies\n\\begin{equation}\n\\label{poincare}\n\\frac{1}{N}\\sum_{j=0}^{N-1}\\mathbf{1}_{A(j\\varsigma,z+j\\varsigma)}\\left((\\omega,\\Theta,\\eta)\\right) = \\frac{1}{N}\\sum_{j=0}^{N-1}\\mathbf{1}_{A(0,z)}\\left(\\widetilde{T}_{\\varsigma}^j (\\omega,\\Theta,\\eta)\\right) \\rightarrow g(\\omega,\\Theta,\\eta),\n\\end{equation}\nwhere $g$ is a function in $L^1(\\mu);$ the convergence is both $\\mu$-almost sure and in $L^1(\\mu),$ so $\\int g \\, \\mathrm{d} \\mu = \\mu(A(0,z))>0.$ Using this in (\\ref{poincare}) gives infinitely many $j$ with \n\\begin{equation}\n\\label{two_pair}\n\\mu\\left(A(0,z) \\cap A(j\\varsigma, z + j\\varsigma) \\right) > 0.\n\\end{equation}\nWe fix $j > \\|z\\|_1$ to ensure $\\Gamma_{j\\varsigma}$ and $\\Gamma_{z + j\\varsigma}$ are outside the region bounded by $L'_0,$ $\\Gamma_0,$ and $\\Gamma_z.$\n\nWhat is the significance of the event in (\\ref{two_pair})? When it occurs, we are guaranteed that there is a line $L_0'$ and four directed paths remaining on its far side apart from their initial vertices. We claim that at least three of them never intersect. Indeed, ordering the paths using the direction of $\\varsigma$, we are guaranteed that the ``first two\" paths do not intersect each other, nor do the ``last two.\" But if the middle two paths ever intersect, they would merge beyond that point and the three remaining paths could not touch.\n\nFor $x_1,x_2 \\in L_0'$, let $B(0,x_1,x_2)$ be the event that $\\Gamma_0, \\Gamma_{x_1}$ and $\\Gamma_{x_2}$ (a) never intersect, (b) stay on the far side of $L_0'$ except for their initial vertices and (c) intersect $L_N'$ in a bounded set for each $N \\geq 1$. Then the above implies \n\\[\nB(0,z,j\\varsigma) \\cup B(0,z,z+j\\varsigma) \\supseteq A(0,z) \\cap A(j\\varsigma,z + j\\varsigma)\\ .\n\\]\nTherefore we may choose $x_1,x_2 \\in L_0'$ such that the portion of $L_0'$ from 0 to $x_2$ contains $x_1$ and so that $\\mu(B(0,x_1,x_2)) > 0$. The vertices $x_1$ and $x_2$ are fixed for the rest of the proof.\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Constructing $B'$}\\label{sec: Bprime}\n\nOur next step is to refine $B(0,x_1,x_2)$ to a positive probability subevent $B'(x^*;N,R)$ on which no paths $\\Gamma_z$ with $z \\preceq L'_N$ (outside of some large polygon) merge with $\\Gamma_{x_1}.$ We will need to pull events back from $\\widetilde \\Omega$ to $\\Omega_1$ to do an edge modification and this will present a considerable difficulty. Our strategy is reminiscent of that in \\cite{AD}. In the first subsection we give several lemmas that we will need. In the next subsection we will define $B'$ and show it has positive probability.\n\n\n\n\n\n\\subsubsection{Lemmas for $B'$}\n\nWe wish to construct a barrier of high-weight edges on the near side of some $L'_N$. 
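The height of such a barrier is limited by how heavy the edge weights can be made; as an illustration (an example of ours, not an assumption of the proof), if the passage time $\\omega_e$ is uniformly distributed on $[1,2]$ then\n\\[\n\\mathbb{P}(\\omega_e \\geq 2-t) = t \\quad \\text{for } 0 < t \\leq 1, \\qquad \\mathbb{P}(\\omega_e \\geq 2) = 0\\ ,\n\\]\nso the quantity $\\lambda_0^+$ defined next equals $2$ in this example, whereas any distribution with unbounded support gives $\\lambda_0^+ = \\infty$. 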
Set\n\\[ \n\\lambda_0^+ = \\sup \\left\\{ \\lambda > 0: \\, \\ensuremath{\\mathbb{P}}\\left( \\omega_e \\in [\\lambda,\\infty)\\right) > 0 \\right\\}\\ .\n\\] \nBecause we do not wish to assume $\\lambda_0^+ = \\infty,$ our barrier will occupy some wide polygon (in the case that $\\lambda_0^+ = \\infty,$ many of the complications which we address below can be neglected; we direct the interested reader to \\cite{LN}).\nTo control the exit of our directed paths from the polygon, we will need a lemma about weak angular concentration of paths:\n\n\\begin{lem}\n\\label{path_concentration}\nFor $x_1$, $x_2$, and $\\varpi'$ as above, define $B_G(0,x_1,x_2)$ to be the subevent of $B(0,x_1,x_2)$ on which, for all $\\varepsilon > 0,$ there are infinitely\nmany values of $N \\in \\mathbb{N}$ such that the first intersections $\\zeta_N$ and $\\zeta'_N$ of $\\Gamma_0$ and $\\Gamma_{x_2}$ (respectively) with $L'_N$ satisfy $dist(\\arg \\zeta_N, \\arg \\zeta'_N) < \\varepsilon.$\nThen $\\mu\\left(B_G (0,x_1,x_2) \\mid \\, B(0,x_1,x_2)\\right) = 1.$\n\\end{lem}\n\\begin{proof}\nAssume for the sake of contradiction that \n\\begin{equation}\n\\label{paths_spreading}\n\\mu\\left(B_G^c(0,x_1,x_2) \\cap B(0,x_1,x_2)\\right) > 0.\n\\end{equation}\nFor $z \\in \\mathbb{Z}^2$, denote by $\\zeta_N(z)$ the first point of intersection of $\\Gamma_z$ with $L'_N.$ On the event in (\\ref{paths_spreading}), for all but finitely many $N \\in \\mathbb{N}$, we have $dist(\\arg \\zeta_N(0), \\arg \\zeta_N(x_2)) > \\varepsilon \/ 2.$ Taking $\\varsigma$ as before (fixed in \\eqref{eq: chicken_alfredo}) and translating the event in (\\ref{paths_spreading}) by multiples of $\\varsigma$, we see by the ergodic theorem that with positive $\\mu$-probability infinitely many such translates occur.\n\nSo given any finite $b > 0,$ we can find an event of positive $\\mu$-probability on which we have at least $b$ directed paths in $\\mathbb{G}$ which never return to $L'_0$ and such that the first intersections of neighboring paths with lines $L'_N$ stay at least an angle $\\varepsilon$ apart. This is in contradiction with the fact that all directed infinite paths are asymptotically confined to a sector.\n\\end{proof}\n\nThe next lemma is a modification of the usual first-passage shape theorem.\n\\begin{lem}\n\\label{l1_shape}\nThere exists a deterministic $c^+ < \\lambda_0^+$ such that, $\\ensuremath{\\mathbb{P}}$-a.s., \n\\[\n\\lim_{M \\to \\infty} \\sup_{\\|x\\|_1 \\geq M} \\tau(0,x)\/\\|x\\|_1 < c^+\\ .\n\\]\n\\end{lem}\n\n\\begin{proof}\nBecause either {\\bf A1'} or {\\bf A2'} hold, $\\mathbb{E}(\\tau_e) < \\lambda_0^+$. For any $z \\in \\mathbb{Z}^2$, choose a deterministic path $\\gamma_z$ with number of edges equal to $\\|z\\|_1$. For $x \\in \\mathbb{Q}^2$ and $n \\geq 1$ with $nx \\in \\mathbb{Z}^2$,\n\\[\n\\mathbb{E} \\tau(0,nx) \\leq \\mathbb{E} \\tau(\\gamma_{nx}) = n \\|x\\|_1 \\mathbb{E}\\tau_e\\ , \\text{ so } g(x) \\leq \\|x\\|_1 \\mathbb{E} \\tau_e\\ .\n\\] \nThis extends to all $x \\in \\mathbb{R}^2$ by continuity, so the shape theorem gives the result.\n\\end{proof}\n\nWe need a lemma to pull events back from $\\widetilde \\Omega$ to $\\Omega_1$. Fix an increasing sequence $(n_k)$ such that $\\mu^*_{n_k} \\to \\mu$ weakly.\n\\begin{lem}\n\\label{toalphas}\nLet $E \\subseteq \\widetilde \\Omega$ be open with $\\mu(E) > \\beta$. 
There exists $C_\\beta>0$ and $K_0$ such that for $k\\geq K_0$, the Lebesgue measure of the set $\\{ \\alpha \\in [0,n_k]: \\, \\mu_\\alpha (E) > \\beta\/2 \\}$ is at least $C_\\beta\\, n_k$.\n\\end{lem}\n\\begin{proof}\nCall the Lebesgue measure of the above set $\\lambda.$ Since $E$ is open, \\eqref{eq: kallenbergopen} allows us to pick $K_0$ such that if $k \\geq K_0$ then $\\mu^*_{n_k}(E) > \\beta$. For such $k$, we can write\n\\[\n\\frac{1}{n_k} \\left( \\lambda + (n_k - \\lambda) \\beta\/2\\right) \\geq \\mu_{n_k}^*(E) > \\beta,~ \\text{giving } \\lambda > \\frac{n_k \\beta}{2 (1 - \\beta\/2)}\\ .\n\\]\nSetting $C_\\beta := \\beta (2 - \\beta)^{-1}$ completes the proof.\n\\end{proof}\n\n\nThe last lemma is based on \\cite[Lemma~3.4]{AD} and will be used in the edge-modification argument. To push the upward finite energy property forward from $\\Omega_1$ to $\\widetilde \\Omega$ we need concrete lower bounds for probabilities of modified events. We write a typical element of $\\Omega_1$ as $\\omega = (\\omega_e, \\check{\\omega}),$ where $\\check{\\omega} = (\\omega_f)_{f \\neq e}.$ We say an event $A \\subseteq \\Omega_1$ is {\\it $e$-increasing} if, for all $(\\omega_e,\\check{\\omega}) = \\omega \\in A$ and $r > 0,$ $(\\omega_e + r, \\check{\\omega}) \\in A.$\n\n\\begin{lem}\n\\label{lem: edge_modification}\nLet $\\lambda > 0$ be such that $\\mathbb{P}\\left(\\omega_e \\geq \\lambda \\right) > 0.$ For each $\\vartheta>0$ there exists $C = C(\\vartheta,\\lambda) > 0$ such that for all edges $e$ and all $e$-increasing events $A$ with $\\mathbb{P}(A) \\geq \\vartheta$,\n\\[\n\\mathbb{P}\\left( A, ~\\omega_e \\geq \\lambda\\right) \\geq C ~\\mathbb{P}\\left(A\\right)\\ .\n\\]\n\\end{lem}\n\n\\begin{proof}\nIf $\\mathbb{P}(A,~ \\omega_e < \\lambda) \\leq (1\/2) \\mathbb{P}(A)$ then \n\\begin{equation}\\label{eq: on_skype_again}\n\\mathbb{P}(A,~\\omega_e \\geq \\lambda) \\geq (1\/2) \\mathbb{P}(A)\\ .\n\\end{equation}\nOtherwise, we assume that\n\\begin{equation}\\label{eq: newassumption}\n\\mathbb{P}(A,~ \\omega_e < \\lambda) \\geq (1\/2) \\mathbb{P}(A)\\ .\n\\end{equation}\n\nWe then need to define an extra random variable. Let $\\omega_e'$ be a variable such that, given $\\check{\\omega}$ from $\\omega \\in \\Omega_1$, it is an independent copy of the variable $\\omega_e$. 
In other words, letting $\\mathbb{Q}$ be the joint distribution of $(\\omega, \\omega_e')$ on the space $\\Omega_1 \\times \\mathbb{R}$, for $\\mathbb{Q}$-almost every $\\check{\\omega}$,\n\\begin{itemize}\n\\item $\\omega_e'$ and $\\omega_e$ are conditionally independent given $\\check{\\omega}$ and\n\\item the distributions $\\mathbb{Q}(\\omega_e \\in \\cdot \\mid \\check{\\omega})$ and $\\mathbb{Q}(\\omega_e' \\in \\cdot \\mid \\check{\\omega})$ are equal.\n\\end{itemize}\n(This can be defined, for instance, by setting $\\mathbb{Q}(A \\times B) = \\int_A \\mathbb{P}(\\omega_e \\in B \\mid \\check{\\omega}) ~\\mathrm{d} \\mathbb{P}(\\omega)$ for Borel sets $A \\subseteq \\Omega_1$ and $B \\subseteq \\mathbb{R}$.)\n\n\nWe now write $\\mathbb{P}(A, \\omega_e \\geq \\lambda)$ as\n\\begin{align}\n\\mathbb{Q} [ (\\omega_e, \\check{\\omega})\\in A,\\, \\omega_e \\in [\\lambda,\\infty)] & \\geq \\mathbb{Q}\\left[(\\omega_e,\\check{\\omega}) \\in A,\\, \\omega_e \\in [\\lambda,\\infty),\\,\\omega_e' \\in [0,\\lambda)\\right]\\nonumber\\\\\n&=\\mathbb{E}_{\\mathbb{Q}} \\left[ \\mathbf{1}_{(\\omega_e,\\check{\\omega}) \\in A}\\, \\mathbf{1}_{ \\omega_e \\in [\\lambda,\\infty)}\\, \\mathbf{1}_{\\omega_e' \\in [0,\\lambda)}\\right]\\nonumber\\\\\n&\\geq\\mathbb{E}_{\\mathbb{Q}} \\left[ \\mathbf{1}_{(\\omega_e',\\check{\\omega}) \\in A}\\, \\mathbf{1}_{ \\omega_e \\in [\\lambda,\\infty)}\\, \\mathbf{1}_{\\omega_e' \\in [0,\\lambda)}\\right]\\label{switcheroo}\\\\\n&=\\mathbb{E}_{\\mathbb{Q}} \\left[ \\mathbf{1}_{(\\omega_e',\\check{\\omega})\\in A}\\, \\mathbf{1}_{\\omega_e'\\in [0,\\lambda)}\\, \\mathbb{E}_{\\mathbb{Q}}\\left(\\mathbf{1}_{\\omega_e \\in [\\lambda,\\infty)}\\, \\mid \\check{\\omega},\\omega_e' \\right)\\right]. \\label{condintready}\n\\end{align}\nIn (\\ref{switcheroo}), we have used that $A$ is $e$-increasing. Using conditional independence in (\\ref{condintready}),\n\\begin{equation}\\label{eq: something_something}\n\\ensuremath{\\mathbb{P}}(A, ~\\omega_e \\geq \\lambda) \\geq \\mathbb{E}_{\\mathbb{Q}} \\left[ \\mathbf{1}_{(\\omega_e',\\check{\\omega})\\in A}\\, \\mathbf{1}_{\\omega_e' \\in [0,\\lambda)}\\, \\mathbb{E}_{\\mathbb{Q}}\\left(\\mathbf{1}_{\\omega_e \\in [\\lambda,\\infty)}\\, \\mid \\check{\\omega}\\right)\\right]\\ .\n\\end{equation}\nBy the upward finite energy property, \n\\[\n\\mathbb{E}_{\\mathbb{Q}}(\\mathbf{1}_{\\omega_e \\in [\\lambda,\\infty)} \\mid \\check{\\omega}) = \\mathbb{E}(1_{\\omega_e \\in [\\lambda,\\infty)}\\mid \\check{\\omega}) > 0 ~~\\mathbb{Q} \\text{-almost surely}\\ ,\n\\]\nso choose $c>0$ such that \n\\[\n\\mathbb{Q}\\left[ \\mathbb{E}_{\\mathbb{Q}}( \\mathbf{1}_{\\omega_e \\in [\\lambda,\\infty)} \\mid \\check{\\omega}) \\geq c \\right] \\geq 1-(\\vartheta\/4)\\ .\n\\]\nNote that this choice of $c$ depends only on $\\lambda$ and $\\vartheta$. By \\eqref{eq: newassumption} and the assumption $\\mathbb{P}(A) \\geq \\vartheta$, the right side is at least $1-(1\/2)\\mathbb{P}(A,~\\omega_e< \\lambda)$, implying\n\\[\n\\mathbb{Q}\\left[ (\\omega_e',\\check{\\omega}) \\in A, ~\\omega_e'\\in [0,\\lambda),~ \\mathbb{E}_{\\mathbb{Q}}(\\mathbf{1}_{\\omega_e \\in [\\lambda,\\infty)} \\mid \\check{\\omega}) \\geq c\\right] \\geq (1\/2) \\mathbb{P}(A,~\\omega_e < \\lambda)\\ .\n\\]\nCombining with \\eqref{eq: something_something}, we find $\\mathbb{P}(A,~\\omega_e\\geq \\lambda) \\geq (c\/2)\\mathbb{P}(A,~\\omega_e < \\lambda)$. 
We finish the proof by writing\n\\[\n\\ensuremath{\\mathbb{P}}(A) = \\ensuremath{\\mathbb{P}}(A, \\omega_e < \\lambda) + \\ensuremath{\\mathbb{P}}(A, \\omega_e \\geq \\lambda)\\\\\n\\leq \\left[\\frac{2}{c}+1 \\right] \\ensuremath{\\mathbb{P}}(A,\\omega_e \\geq \\lambda)\\ .\n\\]\nObserving this inequality and \\eqref{eq: on_skype_again}, we set $C = \\min\\{1\/2, c\/(2+c))\\}$.\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Defining $B'$}\n\nWe begin with the definition of the ``barrier event'' $B'$. For an integer $R > N,$ let \n\\[\nS(R,N) = \\{y \\in \\mathbb{Z}^2 : 0 \\leq y \\cdot \\varpi' \\leq N,~ |y \\cdot \\varsigma| \\leq R\\}\\ .\n\\] \nFor any vertex $x^* \\in S(R,N) \\cap L_N'$, define $B'(x^*;R,N)$ by the condition\n\\begin{equation}\\label{eq: B_prime_def}\n\\text{for all } z \\in \\mathbb{Z}^2 \\setminus S(R,N) \\text{ with } z \\preceq L_N',~ \\Gamma_z \\cap \\Gamma_{x^*} = \\varnothing\\ .\n\\end{equation}\n\n\\begin{prop}\n\\label{B_prime}\nThere exist values of $R,N$ and $x^*$ such that $\\mu(B'(x^*;R,N)) > 0.$\n\\end{prop}\nOur strategy is to pull back cylinder approximations of $B(0,x_1,x_2)$ to $\\Omega_1$ to find events that depend on $\\mathbb{G}$ in the vicinity of $0, x_1$ and $x_2.$ We will find a subevent which is monotone increasing in the weights of edges lying in $S(R,N)$ between the pulled-back versions of $\\Gamma_0$ and $\\Gamma_{x_2}.$ When we look at the subevent on which all of these weights are large (``edge modification\"), the pullback of $\\Gamma_{x_1}$ will be unchanged (past $S(R,N)$), and no pullback of any $\\Gamma_z$ can intersect it if $z \\preceq L'_N$ and $z \\notin S(R,N).$ We will then choose $x^*$ to be a certain point on $\\Gamma_{x_1} \\cap L'_N$. The constants $N$ and $R$ will be chosen to guarantee that the pullback of $\\Gamma_{x_1}$ is so isolated. Pushing forward the subevent to $\\widetilde{\\Omega}$ will complete the proof.\n\n\\begin{proof}\nWe will first fix some parameters to prepare for the main argument. Recall the definition of $c^+$ from Lemma~\\ref{l1_shape} and let\n\\[ \n\\lambda^+ := \\min\\{\\lambda_0^+, \\, 2 c^+\\}\\ ,\n\\]\nand put $\\delta^+ := \\lambda^+ - c^+>0$ (giving $\\lambda^+ = 2c^+$ when $\\lambda_0^+=\\infty$). Choose once and for all some \n\\begin{equation}\\label{eq: epsilondef}\n\\varepsilon < \\frac{\\delta^+}{16\\lambda^+},\n\\end{equation}\nsuch that also\n\\begin{equation}\\label{eq: pizzapie34}\n\\limsup_{\\|x\\|_1 \\rightarrow \\infty}\\,\\, \\sup_{y: \\, \\|y - x\\|_1 \\leq \\varepsilon \\|x\\|_1} \\frac{\\tau(0,y)}{\\|x\\|_1} < \\lambda^+ - \\frac{7 \\delta^+}{8}\\quad \\mu\\text{-a.s.}\n\\end{equation}\nThis follows from Lemma~\\ref{l1_shape} because if $\\|y\\|_1$ is large, $\\|y-x\\|_1 \\leq \\varepsilon \\|x\\|_1$ gives $\\tau(0,y)\/\\|x\\|_1 \\leq (\\tau(0,y)\/\\|y\\|_1)(1+\\varepsilon) < c^+(1+\\varepsilon)$. Fix $\\beta > 0$ with $\\mu(B(0,x_1,x_2)) > \\beta.$ \n\nThe majority of the proof will consist of defining a few events in sequence, the second of which we will pull back to the space $\\Omega_1$ to do the edge modification. We will need to choose further parameters to ensure that each of these events has positive probability. For an arbitrary outcome in $\\widetilde{\\Omega}$ and $N\\geq 0$, denote by $r_0(N)$ and $r_2(N)$ the segments of $\\Gamma_0$ and $\\Gamma_{x_2}$ up to their first intersections with $L'_N$ (if they exist) and let $w_N$ denote the midpoint of the segment of $L'_N$ lying between these first intersections. 
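To fix ideas, here is an illustrative choice of constants (ours only; the proof does not depend on it): if $c^+ = 1$ and $\\lambda_0^+ = \\infty$, then\n\\[\n\\lambda^+ = 2c^+ = 2, \\qquad \\delta^+ = \\lambda^+ - c^+ = 1, \\qquad \\frac{\\delta^+}{16\\lambda^+} = \\frac{1}{32}\\ ,\n\\]\nso \\eqref{eq: epsilondef} asks for $\\varepsilon < 1\/32$, and condition 4 of the event defined next requires the segments $r_0(N)$ and $r_2(N)$ to have passage time less than $(\\lambda^+ - 7\\delta^+\/8)\\|w_N\\|_1 = \\tfrac{9}{8}\\|w_N\\|_1$. 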
The first event $B^\\circ(R,N,\\varepsilon)$ is defined by the conditions (for $R,N \\geq 1$)\n\\begin{enumerate}\n\\item $\\Gamma_0, \\Gamma_{x_1}$ and $\\Gamma_{x_2}$ never intersect,\n\\item they stay on the far side of $L'_0$ except for their initial vertices,\n\\item $\\Gamma_0$ and $\\Gamma_{x_2}$ intersect $L'_N$ and their first intersection points are within $\\ell^1$ distance $\\varepsilon N$ of each other,\n\\item for $i=0,2$, $\\tau(r_i(N)) < (\\lambda^+-7\\delta^+\/8)\\|w_N\\|_1$ and\n\\item $\\Gamma_0$ and $\\Gamma_{x_2}$ do not touch any $x \\preceq L'_N$ with $x \\notin S(R,N)$.\n\\end{enumerate}\nSee Figure~\\ref{fig: b_circ} for a depiction of the event $B^\\circ(R,N,\\varepsilon)$.\n\n\\begin{figure}[h]\n\\caption{The event $B^\\circ(R,N,\\varepsilon).$ The solid dots represent the first intersection points of $\\Gamma_0$ and $\\Gamma_{x_2}$ with $L'_N$. They are within $\\ell^1$ distance $\\varepsilon N$ of each other.}\n\\label{fig: b_circ}\n\\centering\n\\includegraphics[scale=0.65]{B_circ_9_11.pdf}\n\\end{figure}\n\nWe claim that there exists $N_0$ and $R_0$ such that \n\\begin{equation}\\label{eq: pizzapie44}\n\\mu(B^\\circ (R_0,N_0,\\varepsilon)) > 0\\ .\n\\end{equation}\nWe also need $N_0$ to satisfy a technical requirement. It will be used at the end of the proof:\n\\begin{equation}\\label{eq: technical_condition}\n\\|x_2\\|_1 \\leq \\varepsilon N_0\\ .\n\\end{equation}\n\nTo pick $N_0$, first choose $N_1>0$ so large that if $N \\geq N_1$ then\n\\begin{equation}\\label{eq: pizzapie45}\n\\ensuremath{\\mathbb{P}}\\left( \\forall z,z' \\text{ with } \\|z\\|_1 \\geq N, \\text{ and } \\frac{\\|z-z'\\|_1}{\\|z\\|_1} \\leq \\varepsilon,~ \\frac{\\tau(0,z')}{\\|z\\|_1} < (\\lambda^+ - \\frac{7 \\delta^+}{8}) \\right) > 1 - \\beta \/ 4\\ ,\n\\end{equation}\nand $\\|x_2\\|_1 \\leq \\varepsilon N$. This is possible by \\eqref{eq: pizzapie34}. Write $E_0(N)$ for the event in \\eqref{eq: pizzapie45} and $E_{x_2}(N)$ for $E_0(N)$ translated so that $0$ is mapped to $x_2$. Then $\\mathbb{P}(B(0,x_1,x_2) \\cap E_0(N) \\cap E_{x_2}(N)) > \\beta\/2$. By Lemma~\\ref{path_concentration}, we can then choose $N_0 \\geq N_1$ such that\n\\begin{equation}\\label{eq: pizzapie46}\n\\mu(B(0,x_1,x_2) \\cap E_0(N_0) \\cap E_{x_2}(N_0) \\cap C(0,x_2;N_0)) > 0\\ ,\n\\end{equation}\nwhere $C(0,x_2;N_0)$ is the event that $\\Gamma_0$ and $\\Gamma_{x_2}$ intersect $L_{N_0}'$ and their first intersection points are within $\\ell^1$ distance $\\varepsilon N_0$ of each other. On the event in \\eqref{eq: pizzapie46}, the endpoints of the $r_i(N_0)$'s are within distance $\\varepsilon N_0$ of $w_{N_0}$ and since they are on $L'_{N_0}$, their $\\ell^1$ distance from $0$ or $x_2$ is at least $N_0$. Therefore $\\tau(r_i(N_0)) < (\\lambda^+ - 7 \\delta^+\/8)\\|w_{N_0} \\|_1$ for $i=0,2$. This shows that the intersection of four of the five events in the definition of $B^\\circ(R,N_0,\\varepsilon)$ occurs with positive probability. For the fifth, recall that on $B(0,x_1,x_2)$, the paths $\\Gamma_0$, $\\Gamma_{x_1}$ and $\\Gamma_{x_2}$ contain only finitely many vertices $z \\preceq L'_{N_0}$. Thus we can choose $R_0$ large enough (depending on $N_0$) to satisfy condition 5 and complete the proof of \\eqref{eq: pizzapie44}.\n\nFix these $R=R_0$ and $N=N_0$ from now on. The next event we define is a cylinder approximation of the first event. It will be needed to pull back to $\\Omega_1$. 
For $M>0$ and $x \\in \\ensuremath{\\mathbb{Z}^2},$ let $\\Gamma^M_x$ be the finite path formed by starting at $x$ and then passing along out-edges of $\\mathbb{G}$ until we first reach a vertex of $\\mathbb{R}^2 \\setminus (-M,M)^2$. (Note that by this definition, $\\Gamma_x^M = \\{x\\}$ whenever $x \\notin (-M,M)^2$.) We define $B^\\circ_M(R,N,\\varepsilon)$ with the same conditions as $B^\\circ(R,N,\\varepsilon),$\nexcept replacing the paths $\\Gamma_{(\\cdot)}$ by the segments $\\Gamma_{(\\cdot)}^M$. In addition, however, we impose the restriction that, writing\n\\[\n\\partial M = [-M,M]^2 \\setminus (-M,M)^2\\ ,\n\\]\nwe have\n\\begin{equation}\\label{eq: neweq}\n\\Gamma_y^M \\cap \\partial M \\subseteq \\{ z \\in \\mathbb{R}^2 : z \\succ L'_N\\},~ y=0,x_2\\ .\n\\end{equation}\nOf course, if $\\Gamma_0^M$ (etc.) does not intersect $L'_N,$ then $B^\\circ_M$ does not occur. Then $B^\\circ_M(R,N,\\varepsilon)$ is open for all $M$ and we claim that\n\\begin{equation}\\label{eq: cylinderapprox}\nB^\\circ (R,N,\\varepsilon) = \\cup_{M_0 = 1}^{\\infty} \\cap_{M=M_0}^{\\infty} B^\\circ_M(R,N,\\varepsilon)\\ .\n\\end{equation}\nAssuming we show this, then there exists some $M_0$ such that $\\mu(\\cap_{M=M_0}^{\\infty} B^\\circ_M(R,N,\\varepsilon)) > 0$ and so there is some $\\beta'$ with\n\\begin{equation}\\label{eq: aftercylinder}\n\\mu(B^\\circ_M(R,N,\\varepsilon)) > \\beta' \\text{ for all }M \\geq M_0\\ .\n\\end{equation}\n\nTo prove \\eqref{eq: cylinderapprox}, note that the right side is the event that $B^\\circ_M(R,N,\\varepsilon)$ occurs for all $M$ bigger than some random $M_0$. Suppose that an outcome is in the left side. Then the paths $\\Gamma_0$, $\\Gamma_{x_1}$ and $\\Gamma_{x_2}$ are disjoint and remain on the far side of $L_0'$ (except for their first vertices), so the same is true for each $\\Gamma_{(\\cdot)}^M$ for all $M \\geq 1$. Also $\\Gamma_0^M$ and $\\Gamma_{x_2}^M$ do not touch any $x \\preceq L_N'$ with $x \\notin S(R,N)$ for all $M \\geq 1$. Because $\\Gamma_0$ and $\\Gamma_{x_2}$ intersect $L_N'$, so do $\\Gamma_0^M$ and $\\Gamma_{x_2}^M$ for all $M$ bigger than some random $M_1$. Their first intersection points are the same as those of $\\Gamma_0$ and $\\Gamma_{x_2}$, so for $M \\geq M_1$, their first intersection points with $L'_N$ are within $\\ell^1$ distance $\\varepsilon N$ of each other. Further, the passage times of the segments up to $L'_N$ are strictly bounded above by $(\\lambda^+-7\\delta^+\/8)\\|w_N\\|_1$. Last, because $\\Gamma_0$ and $\\Gamma_{x_2}$ do not touch any $x \\preceq L'_N$ with $x \\notin S(R,N)$, they share only finitely many vertices with $\\{z \\in \\mathbb{Z}^2 : z \\preceq L'_N\\}$ and so must eventually lie on the far side of $L'_N$. This allows us to further increase $M_1$ to an $M_0$ such that if $M \\geq M_0$ then in addition \\eqref{eq: neweq} holds.\n\nSuppose conversely that the right side of \\eqref{eq: cylinderapprox} occurs. Then for all $M$ bigger than some random $M_0$, the six events comprising $B^\\circ_M(R,N,\\varepsilon)$ occur. In particular, the paths $\\Gamma_0$, $\\Gamma_{x_1}$ and $\\Gamma_{x_2}$ are disjoint and stay on the far side of $L_0'$ except for their first vertices (parts 1 and 2 of $B^\\circ(R,N,\\varepsilon)$). Furthermore $\\Gamma_0$ and $\\Gamma_{x_2}$ cannot touch any $x \\preceq L_N'$ with $x \\notin S(R,N)$ (part 5). 
For $M \\geq M_0$, the paths $\\Gamma_0^M$ and $\\Gamma_{x_2}^M$ intersect $L_N'$, with their first intersection points within distance $\\varepsilon N$ of each other (with passage time strictly bounded above by $(\\lambda^+ - 7\\delta^+\/8)\\|w\\|_1$). These are the same first intersection points as $\\Gamma_0$ and $\\Gamma_{x_2}$, so parts 3 and 4 of $B^\\circ(R,N,\\varepsilon)$ occur.\n\nWe now pull the cylinder approximation $B^\\circ_M(R,N,\\varepsilon)$ back to $\\Omega_1$ using Lemma~\\ref{toalphas}. Because this is an open event and satisfies \\eqref{eq: aftercylinder} for $M\\geq M_0$, we can find an $M$-dependent number $K_0$ such that if $k \\geq K_0$, then there is a set $\\Lambda_{M,k}$ of values of $\\alpha \\in [0,n_k]$ which has Lebesgue measure at least $C_{\\beta'} n_k$, on which $\\mu_\\alpha(B^\\circ_M(R,N,\\varepsilon)) > \\beta'\/2$. Pull back to $\\Omega_1,$ setting $B_M^\\alpha:= \\Phi_\\alpha^{-1}(B^\\circ_M(R,N,\\varepsilon)),$ where $\\Phi_\\alpha$ was defined in \\eqref{eq: phidef}. (Here we have suppressed mention of $R,N,\\varepsilon$ in the notation, as they are fixed for the remainder of the proof.) Then \n\\begin{equation}\\label{eq: pizzapie77}\n\\ensuremath{\\mathbb{P}}(B_M^\\alpha) > \\beta'\/2 \\text{ for all } \\alpha \\in \\Lambda_{M,k} \\text{ if } M \\geq M_0 \\text{ and } k \\geq K_0(M)\\ .\n\\end{equation}\nWe henceforth restrict to values of $M,$ $\\alpha$ and $k$ such that (\\ref{eq: pizzapie77}) holds. In the end of the proof we will take $k \\to \\infty$ and then $M \\to \\infty$. In particular then we will be thinking of \n\\[\n\\alpha \\gg M \\gg N\\ , \n\\]\nthe latter of which is fixed. Some of the remaining definitions will only make sense for such $\\alpha$, $M$ and $N$ but this does not affect the argument.\n\nNext we define the third of our four events, now working on $\\Omega_1$. Let $s^\\alpha_{y}$ be the geodesic from $y \\in \\ensuremath{\\mathbb{Z}^2}$ to $L_\\alpha$ (recall this was defined for $\\varpi$ and not $\\varpi'$), and $s^\\alpha_y(M)$ the path $s^\\alpha_y$ up to its first intersection with $\\mathbb{R}^2 \\setminus (-M,M)^2$. If $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ intersect $L_N'$ then write $r_i^\\alpha(M),$ $i=0,2$ for the portions up to the first intersection point. As before, let $w^\\alpha_N$ be the midpoint of the segment of $L_N'$ between these two intersection points. Let $\\mathcal{R}_1^\\alpha(M)$ be the closed connected subset (in $\\mathbb{R}^2$) of $\\{x \\in \\mathbb{R}^2: x \\succeq L'_0\\}$ with boundary curves $s_0^\\alpha(M)$, $s_{x_2}^\\alpha(M)$, $L_0'$ and $\\partial M$. Similarly let $\\mathcal{R}_2^\\alpha(M)$ be the closed connected subset of $\\mathcal{R}_1^\\alpha(M)$ with the following boundary curves: the portions of $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ after their last intersections with $L_N'$, the segment of $L_N'$ between these intersections and last, $\\partial M$. Note that when \\eqref{eq: neweq} holds, $\\mathcal{R}_2^\\alpha(M)$ is contained in $\\{z \\in \\mathbb{R}^2: z \\succeq L'_N\\}$. See Fig.~\\ref{my_first_table} for an illustration of these definitions.\n\n\\begin{figure}[h]\n\\caption{The regions $\\mathcal{R}_1^\\alpha(M)$ and $\\mathcal{R}_2^\\alpha(M)$. The left figure shows $\\mathcal{R}_1^\\alpha(M)$ in green. It has boundary curves $L'_0$, $\\partial M$, $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$. The right figure shows $\\mathcal{R}_2^\\alpha(M) \\subseteq \\mathcal{R}_1^\\alpha(M)$ in green. 
It has boundary curves $L'_N$, $\\partial M$, and the pieces of $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ from their last intersections with $L'_N$. Note that $\\mathcal{R}_2^\\alpha(M)$ is contained in the far side of $L'_N$ by \\eqref{eq: neweq}.}\n\\label{my_first_table}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.40]{R_region_1_9_11.pdf}\n\\includegraphics[scale=0.40]{R_region_2.pdf}\n\\end{tabular}\n\\end{figure}\n\nThe event $\\hat B_M^\\alpha \\subseteq \\Omega_1$ is then defined by the following conditions:\n\\begin{itemize}\n\\item $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ intersect $L'_0$ only once, are disjoint, and do not touch any $y \\preceq L'_N$ with $y \\notin S(R,N)$.\n\\item $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ intersect $L'_N$ and their first intersection points are within $\\ell^1$ distance $\\varepsilon N$ of each other; the paths $r_i^\\alpha(M)$ satisfy $\\tau(r_i^\\alpha(M)) < (\\lambda^+ -7 \\delta^+\/8) \\|w^\\alpha_N\\|_1,$ for $i=0,2$.\n\\item $s_y^\\alpha(M) \\cap \\partial M \\subseteq \\{z \\in \\mathbb{R}^2 : z \\succ L'_N\\}$ for $y=0,x_2$,\n\\item there is a vertex $X^* \\in L'_N \\cap S(R,N)$ such that $s_{X^*}^\\alpha(M)$ is disjoint from $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ but is contained in $\\mathcal{R}_2^\\alpha(M)$, and \n\\item the portions of $s_0^\\alpha,$ $s_{X^*}^\\alpha$ and $s_{x_2}^\\alpha$ beyond $[-M,M]^2$ do not contain a vertex of $S(R,N)$;\n\\end{itemize}\n\nWe claim there is an $M_0' \\geq M_0$ such that\n\\begin{equation}\\label{eq: pizzapie99}\n\\mathbb{P}(\\hat B_M^\\alpha) > \\beta'\/4 \\text{ for all } M \\geq M_0'\\ .\n\\end{equation}\nVerifying this requires us to define an auxiliary event. Let $H_{M} \\subseteq \\Omega_1$ denote the event that no geodesic from any point in $S(R,N)$ returns to $S(R,N)$ after its first intersection with $\\partial M.$ Then $\\ensuremath{\\mathbb{P}}(H_M) \\rightarrow 1$ as $M \\rightarrow \\infty.$ So for any $M$ larger than some $M_0' \\geq M_0$, $\\mathbb{P}(H_M) > 1-\\beta'\/4$, giving\n\\[ \n\\ensuremath{\\mathbb{P}}(B_M^\\alpha \\cap H_M) > \\beta'\/4 \\text{ for all } M \\geq M_0'\\ .\n\\]\nTo finish the proof of \\eqref{eq: pizzapie99} we show that $B_M^\\alpha \\cap H_M \\subseteq \\hat B_M^\\alpha$. Note that the first three conditions of $\\hat B_M^\\alpha$ are immediately implied by $B_M^\\alpha$; they are the analogues on $\\Omega_1$ of the conditions that make up $B_M^\\circ(N,R,\\varepsilon)$ (each $\\Gamma_{(\\cdot)}^M$ is replaced by $s_{(\\cdot)}^\\alpha(M)$). For the fourth condition, note that when $B_M^\\alpha$ occurs, $s_0^\\alpha(M)$, $s_{x_1}^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ stay on the far side of $L_0'$ (aside from their initial vertices) and stop when they touch $\\partial M$. Therefore by planarity, $s_{x_1}^\\alpha(M)$ is contained in $\\mathcal{R}_1^\\alpha(M)$. In particular, if we choose $X^*$ to be the last intersection point of $s_{x_1}^\\alpha(M)$ with $L'_N$, then $s_{X^*}^\\alpha(M)$ is trapped in $\\mathcal{R}_2^\\alpha(M)$. We can see this as follows. The last vertex of $s_{X^*}^\\alpha(M)$ is clearly in this region because it must be in $\\mathcal{R}_1^\\alpha(M) \\cap \\partial M$ and this equals $\\mathcal{R}_2^\\alpha(M) \\cap \\partial M$. Proceeding backward along $s_{X^*}^\\alpha(M)$ from this final vertex, the path can only leave $\\mathcal{R}_2^\\alpha(M)$ if it (a) leaves $[-M,M]^2$ (b) crosses $s_0^\\alpha(M)$ or $s_{x_2}^\\alpha(M)$ or (c) crosses $L'_N$. 
Because none of these can happen, the fourth condition holds. As for the fifth, it is implied by $H_M$, so we have proved \\eqref{eq: pizzapie99}.\n\nOur fourth and final event will fix some random objects to be deterministic so that we can apply the edge modification lemma. On the event $\\hat B_M^\\alpha$, let $U$ denote the (random) closed connected subset of $[-M,M]^2$ with boundary curves $L'_0$, $L'_N$, $r_0^\\alpha(M)$ and $r_2^\\alpha(M)$. Note that $U \\subseteq S(R,N)$. Furthermore we note that on $\\hat B_M^\\alpha$, $U \\cap \\mathcal{R}_2^\\alpha(M)$ is contained in $L_N'$. This is because $\\mathcal{R}_2^\\alpha(M) \\subseteq \\{z : z \\succeq L'_N\\}$, whereas $U\\subseteq \\{z : z \\preceq L'_N\\}$. Last, define $U_\\mathcal{E}$ to be the random set of edges with both endpoints in $U$ and which are not edges in $s_0^\\alpha(M),s_{x_2}^\\alpha(M), L_0'$ or $L_N'$. See Figure~\\ref{fig: bddcase} for an illustration of these definitions. \n\n\\begin{figure}[h]\n\\caption{Illustration of definitions on $\\hat B_M^\\alpha$. The region $U$ is in blue and is contained in $S(R,N)$ (not pictured). It is bounded by curves $L'_0$, $L'_N$, $r_0^\\alpha(M)$ and $r_2^\\alpha(M)$. The path $s_{x^*}$ begins at the final intersection point of the dotted path with $L'_N$.}\n\\label{fig: bddcase}\n\\centering\n\\includegraphics[scale=0.65]{bddcase2.pdf}\n\\end{figure}\n\nOn $\\hat{B}_M^\\alpha,$ there are at most $2^{64NR}$ possibilities for $U$ and $U_\\mathcal{E}$ and at most $2R$ choices for $X^*.$ So there exist some deterministic $U',$ $U_{\\mathcal{E}}',$ and $x^*$ such that, if we define\n\\[\n\\tilde{B}_M^\\alpha:= \\hat{B}^\\alpha_M \\cap \\{U = U', \\, U_\\mathcal{E} = U'_{\\mathcal{E}}\\}\\cap \\{X^* = x^*\\}\\ , \n\\]\nthen\n\\begin{equation}\\label{eq: nachos_bellgrande}\n\\ensuremath{\\mathbb{P}}(\\tilde{B}_M^\\alpha) > 2^{-2 - 64NR} \\beta' \/ 2R \\text{ for } M \\geq M_0' \\text{ and }\\alpha \\in \\Lambda_{M,k}\\ . \n\\end{equation}\nThe meaning of the event $\\{X^*=x^*\\}$ is that the deterministic point $x^*$ satisfies the conditions in the fourth and fifth items of the description of $\\hat B_M^\\alpha$.\n\nIn the rest of the proof we perform the edge modification and push forward to $\\widetilde \\Omega$. To apply Lemma~\\ref{lem: edge_modification} we need to verify that $\\tilde B_M^\\alpha$ is $e$-increasing for all $e \\in U'_{\\mathcal{E}}$. For this purpose, suppose that $\\omega \\in \\tilde B_M^\\alpha$ and that $\\omega'$ is another configuration such that $\\omega_e' \\geq \\omega_e$ for some fixed $e \\in U'_{\\mathcal{E}}$ but $\\omega_f' = \\omega_f$ for all other $f \\neq e$. By construction, $e$ is not an edge of $s_0^\\alpha(M)$, $s_{x^*}^\\alpha(M)$ or $s_{x_2}^\\alpha(M)$ ($e \\notin s_{x^*}^\\alpha(M)$ since $e$ is contained in $U_\\mathcal{E}$, which does not meet $L_N'$, so is not in $\\mathcal{R}_2^\\alpha(M) \\supseteq s_{x^*}^\\alpha(M)$). Furthermore because $s_0^\\alpha$, $s_{x^*}^\\alpha$ and $s_{x_2}^\\alpha$ do not re-enter $S(R,N)$ after leaving $[-M,M]^2$ and all edges of $U'_{\\mathcal{E}}$ have both endpoints in $S(R,N)$, $e$ cannot be on these paths either. This means that \n\\[\ns_y^\\alpha(\\omega) = s_y^\\alpha(\\omega') \\text{ for } y=0,x^*,x_2 \\text{ and } U(\\omega) = U(\\omega'),~U_{\\mathcal{E}}(\\omega) = U_{\\mathcal{E}}(\\omega')\\ .\n\\]\nSo the fifth condition of $\\hat B_M^\\alpha$ occurs in $\\omega'$. 
The paths $s_y^\\alpha(M)$ are then equal in $\\omega$ and $\\omega'$, so conditions 1, the first part of 2, and 3 and 4 hold in $\\omega'$. As $e$ is not on any of these paths, their passage times are the same in $\\omega'$. This gives the second part of condition 2 of $\\hat B_M^\\alpha$ and shows that $\\tilde B_M^\\alpha$ is $e$-increasing. \n\n\n\n\nNow we conclude the proof in a slightly different manner depending on whether or not $\\lambda_0^+$ is finite; we focus first on the case that $\\lambda_0^+ < \\infty.$ We will use Lemma~6.6, but several times in sequence, appending events onto $\\hat B_M^\\alpha$. Precisely we note for reference that if $e_1, \\ldots, e_j$ are edges and $a_1, \\ldots, a_j \\in \\mathbb{R}$ then\n\\[\n\\hat B_\\alpha^M \\cap \\left[ \\cap_{i=1}^j \\{ \\omega_{e_i} \\geq a_i \\} \\right] \\text{ is } e\\text{-increasing for } e \\in U'_{\\mathcal{E}}\\ .\n\\]\nUsing Lemma~\\ref{lem: edge_modification} once for each edge $e \\in U'_{\\mathcal{E}}$ and the upper bound $|U'_{\\mathcal{E}}| \\leq 32 N R$, we can find some constant $C_{N,R}$ such that, defining\n\\[ \nB_M'^\\alpha := \\tilde{B}_M^\\alpha \\cap \\left\\{ \\forall e \\in U'_{\\mathcal{E}}, \\, \\omega_e \\geq \\lambda^+ - \\delta^+ \/ 4 \\right\\}\\ ,\n\\]\nwe have\n\\[ \n\\ensuremath{\\mathbb{P}} \\left( B_M'^\\alpha \\right) > C_{N,R} > 0 \\text{ for all }M \\geq M_0' \\text{ and } \\alpha \\in \\Lambda_{M,k} \\text{ when } k \\geq K_0(M)\\ .\n\\]\n(For the first application of the lemma we use $\\vartheta = 2^{-2-64NR}\\beta'\/2R$, for the second, a smaller $\\vartheta$, and so on.)\n\n\nWe claim that on $B_M'^\\alpha,$ no $z \\in \\ensuremath{\\mathbb{Z}^2} \\cap [-M,M]^2$ with $z \\preceq L'_N$ and $z \\notin S(R,N)$ has $s_z^\\alpha(M) \\cap s_{x^*}^\\alpha(M) \\neq \\varnothing$. We argue by first estimating the passage time between vertices from $L_0'$ to $L_N'$ in $U'$. For any outcome in $B_M'^\\alpha,$ given vertices $x \\in U' \\cap L'_0$ and $y \\in U' \\cap L'_N,$ there is a path from $x$ to $y$ formed by moving along $L'_0$ to $0,$ taking $r_0^\\alpha$ to $L'_N,$ and moving similarly along $L'_N$ to $y.$ This gives\n\\begin{equation}\n\\label{fastish_path}\n\\tau(x,y) < (\\lambda^+ - 7\\delta^+\/8)\\|w_N^\\alpha\\|_1+ (N\\varepsilon + \\|x_2\\|_1)\\lambda^+ .\n\\end{equation}\nUsing the choice of $\\varepsilon$ from \\eqref{eq: epsilondef} and condition \\eqref{eq: technical_condition} to bound the right side of (\\ref{fastish_path}),\n\\begin{equation}\n\\label{whoa_so_fast}\n\\tau(x,y) \\leq (\\lambda^+ - 3 \\delta^+ \/ 4) \\|w_N^\\alpha\\|_1.\n\\end{equation}\nSuppose now that a point $z$ exists as in the claim. Since $s_0^\\alpha(M)$ and $s_{x_2}^\\alpha(M)$ do not touch any $y \\notin S(R,N)$ with $y \\preceq L'_N$ (see item 1 in the definition of $\\hat B_M^\\alpha$), \n\\[\n\\mathcal{R}_1^\\alpha(M) \\cap \\{y : y \\preceq L'_N\\} \\subseteq S(R,N)\\ .\n\\]\nThis implies $z \\notin \\mathcal{R}_1^\\alpha(M)$, whereas $x^* \\in \\mathcal{R}_1^\\alpha(M)$. 
As $s_z^\\alpha(M)$ cannot touch $s_0^\\alpha(M)$ or $s_{x_2}^\\alpha(M)$ (else it would merge with one of them) it would have to enter $\\mathcal{R}_1^\\alpha(M)$ through $L_0'$ and pass through all of $U'$ from $L'_0$ to $L'_N$, thus taking only edges of $U'_{\\mathcal{E}}.$ The portion $\\gamma'$ of $\\gamma$ from its first intersection with $L_0'$ to its first intersection with $L_N'$ would then satisfy\n\\begin{align*}\n\\tau(\\gamma') &\\geq \\left(\\lambda^+ - \\delta^+ \/ 4 \\right) \\left[ \\|w^\\alpha_N\\|_1 - \\|x_2\\|_1 - N \\varepsilon\\right]\\\\\n\t&\\geq (\\lambda^+ - \\delta^+ \/ 4) \\|w^\\alpha_N\\|_1 - 2 \\|w^\\alpha_N\\|_1 \\varepsilon \\lambda^+\\\\\n\t&\\geq (\\lambda^+ - 3 \\delta^+ \/ 8) \\|w^\\alpha_N\\|_1,\n\\end{align*}\nin contradiction with the estimate of (\\ref{whoa_so_fast}). This establishes the claim.\n\nFor the final step in the case that $\\lambda_0^+<\\infty$, note that by the previous claim, the pushforward, $\\Phi_\\alpha (B_M'^\\alpha)$, is a sub-event of $B_M'= B_M'(x^*;R,N),$ defined exactly as the event $B'=B'(x^*;R,N)$ in \\eqref{eq: B_prime_def} except with $\\Gamma_{x^*}$ and $\\Gamma_z$ replaced by the truncated paths $\\Gamma_{x^*}^M$ and $\\Gamma_z^M$ and considering only $z \\in [-M,M]^2$. Thus \n\\[\n\\mu_\\alpha(B_M') \\geq C_{N,R} \\text{ for all } M \\geq M_0',~ k \\geq K_0(M) \\text{ and } \\alpha \\in \\Lambda_{M,k}\\ ,\n\\]\nwith $\\Lambda_{M,k} \\subseteq [0,n_k]$ of Lebesgue measure at least $C_{\\beta'}n_k$. As the indicator of $B_M'$ is continuous,\n\\[\n\\mu(B_M') = \\lim_{k \\to \\infty} \\mu_{n_k}^*(B_M') \\geq C_{N,R} C_{\\beta'}\\ .\n\\]\nLast, \n\\[\n\\mu(B') = \\mu(B_M' \\text{ for infinitely many }M) \\geq C_{N,R} C_{\\beta'} > 0\\ ,\n\\]\ncompleting the proof in the case $\\lambda_0^+ < \\infty$.\n\nIf $\\lambda_0^+ = \\infty,$ we are no longer guaranteed the estimate (\\ref{whoa_so_fast}), since the passage time of a path taking $N \\varepsilon$ steps along $L'_N$ is not necessarily bounded above by $N \\varepsilon \\lambda^+.$ However, writing $\\tilde E$ for the set of edges with an endpoint within $\\ell^1$ distance 1 of $U'$ but not in $U'_{\\mathcal{E}}$ and noting\n\\[\nA_C := \\{\\text{for all } e \\in \\tilde E,~ \\tau_e \\leq C\\}\n\\]\nsatisfies $\\mathbb{P}(A_C) \\to 1$ as $C \\to \\infty$ independently of $k$ and $M$, we can choose $C_{\\text{big}}$ such that \n\\[\n\\mathbb{P}(\\tilde B_M^\\alpha \\cap A_{C_{\\text{big}}}) > 0\n\\]\nindependently of $k$ and $M$. This event is still monotone increasing in the appropriate edge variables. In particular, we can modify the edges in $U'_{\\mathcal{E}}$ to be each larger than $2C_{\\text{big}} |\\tilde E|$ and the rest of the proof follows as in the case $\\lambda_0^+< \\infty$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Deriving a contradiction}\\label{sec: contradiction}\n\n\nGiven that the event $B'(x^*;R,N)$ of the preceding section has positive probability, we now derive a contradiction, proving that all paths in $\\mathbb{G}$ must merge. The next lemma is an example of a mass-transport principle. 
(See \\cite{BLS, Haggstrom1, Haggstrom2} for a more comprehensive treatment.)\n\n\\begin{lem}\n\\label{mass_transport}\nLet $m: \\ensuremath{\\mathbb{Z}^2} \\times \\ensuremath{\\mathbb{Z}^2} \\rightarrow [0,\\infty)$ be such that $m(x,y) = m(x+z,y+z)$ for all $x,y,z \\in \\mathbb{Z}^2.$\nThen\n\\[\n\\forall x \\in \\ensuremath{\\mathbb{Z}^2}, \\quad \\sum_{y \\in \\ensuremath{\\mathbb{Z}^2}} m(x,y) = \\sum_{y \\in \\ensuremath{\\mathbb{Z}^2}} m(y,x)\\ .\n\\]\n\\end{lem}\n\\begin{proof}\nWrite\n\\begin{align*}\n\\sum_{y \\in \\ensuremath{\\mathbb{Z}^2}} m(x,y) = \\sum_{z \\in \\ensuremath{\\mathbb{Z}^2}} m(x,x+z) = \\sum_{z \\in \\ensuremath{\\mathbb{Z}^2}} m(x-z,x) = \\sum_{y \\in \\ensuremath{\\mathbb{Z}^2}} m(y,x)\\ .\n\\end{align*}\n\\end{proof}\nGiven a realization of $\\mathbb{G}$ and $x \\in \\mathbb{Z}^2$, order the set \n\\begin{equation}\\label{eq: backclusterdef}\nC_x = \\{ y \\in \\ensuremath{\\mathbb{Z}^2}: y \\to x \\text{ in } \\mathbb{G}\\}\n\\end{equation}\nusing a dictionary-type ordering where $y$ precedes $y'$ if either $\\varpi' \\cdot y < \\varpi' \\cdot y'$ or if both $\\varpi' \\cdot y = \\varpi' \\cdot y'$ and $y \\cdot \\varsigma < y' \\cdot \\varsigma$ (where $\\varsigma$ was fixed in \\eqref{eq: chicken_alfredo}); clearly this defines a total ordering. If there is a least element $y$ under this ordering, we will call $y$ the progenitor of $x$ (relative to $\\mathbb{G}$).\nWe define the $\\mathbb{G}$-dependent function $m_\\mathbb{G}$ on pairs of vertices $x,y$ by\n\\[ m_{\\mathbb{G}}(x,y) = \\begin{cases}\n\t1 &\\text{if $y$ is the progenitor of $x$}\\\\\n\t0 &\\text{otherwise},\n\t\\end{cases} \\]\nand let $m(x,y):= \\mathbb{E}_\\mu(m_{\\mathbb{G}}(x,y)).$ Note that $m(x,y) = m(x+z,y+z)$ by the fact that $\\mathbb{G}$ has a translation-invariant distribution.\n\nSince each $x$ can have at most one progenitor, \n\\begin{equation}\n\\label{small_mass}\n\\sum_{y\\in \\mathbb{Z}^2} m(x,y) \\leq 1 \\text{ for all } x\\in \\mathbb{Z}^2\\ .\n\\end{equation}\nOn the other hand, if $B'(x^*; R,N)$ occurs, then $\\Gamma_z$ cannot intersect $\\Gamma_{x^*}$ if $z \\preceq L'_N$ and $z \\notin S(R,N).$ Therefore, on this event, there is some vertex $y \\in S(R,N)$ which is the progenitor of infinitely many vertices of $\\Gamma_{x^*}.$ In particular,\n\\begin{equation}\n\\label{big_mass}\n\\sum_{y \\in \\mathbb{Z}^2} m(y,x) = \\infty.\n\\end{equation}\nThe contradiction implied by (\\ref{small_mass}), (\\ref{big_mass}) and Lemma \\ref{mass_transport} gives $\\mu(B'(x^*;R,N))=0$. However this contradicts the previous section and completes the proof of Theorem~\\ref{thm: Gcoalescethm}.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Absence of backward infinite paths}\n\n\nIn this section, we move on from Theorem~\\ref{thm: Gcoalescethm} to show that because all paths in $\\mathbb{G}$ coalesce, all paths in the ``reverse\" direction terminate. That is, recalling the definition of $C_x$ in \\eqref{eq: backclusterdef},\n\n\\begin{thm}\n\\label{no_back_path}\nFor each $x \\in \\ensuremath{\\mathbb{Z}^2},$ $|C_x| < \\infty$ with $\\mu$-probability one.\n\\end{thm}\n\n\\begin{rem}\nThe proof below applies to the following general setting. 
Suppose $\\nu$ is a translation-invariant probability measure on directed subgraphs of $\\mathbb{Z}^2$ and there is a line $L \\subseteq \\mathbb{R}^2$ such that $\\nu$-almost surely (a) each $x$ has exactly one forward path and it is infinite (b) all forward paths coalesce and (c) each forward infinite path emanating from a vertex on $L$ intersects it finitely often. Then all backward clusters are finite $\\nu$-almost surely.\n\\end{rem}\n\nWe assume that, contrary to the theorem, there exists $x \\in \\ensuremath{\\mathbb{Z}^2}$ with\n$\\mu(|C_x|=\\infty)>0$ for the remainder of this section to derive a contradiction. Using Lemma~\\ref{lem: varpilemma}, choose a deterministic $\\varpi'$ with argument in $\\{j \\pi\/4 : j = 0, \\ldots, 7\\}$ such that with positive $\\mu$-probability on $\\{|C_x| = \\infty\\}$, each $\\Gamma_z$ eventually lies on the far side of each $L'_N$. Note that this event is translation-invariant, so by conditioning on it, we may assume that it occurs with probability 1 (and $\\mu$ is still translation-invariant).\n\n\\begin{clam}\nThere exist vertices $z \\neq z'$ in $L'_0$ such that\n\\begin{equation}\n\\label{triple_pt}\n \\mu\\left(|C_z| = \\infty,\\, |C_{z'}| = \\infty, \\, \\Gamma_z \\cap L'_0 = \\{z\\}, \\, \\Gamma_{z'} \\cap L'_0 = \\{z'\\}\\right) > 0\\ .\n\\end{equation}\n\\end{clam}\n\\begin{proof}\nBy translation-invariance, we may assume that the $x$ with $\\mu(|C_x|=\\infty)>0$ satisfies $x \\prec L'_0.$ $\\mu$-almost surely, $\\Gamma_x$ has a last intersection with $L'_0.$ There are countably many choices for such a last intersection, so there exists a vertex $z \\in L'_0$ such that\n\\[\n\\mu\\left( |C_z| = \\infty, \\, \\Gamma_z \\cap L'_0 = \\{z\\}\\right) > 0\\ .\n\\]\nTranslating by $\\varsigma$ (chosen from \\eqref{eq: chicken_alfredo}), the ergodic theorem gives $z, z'$ satisfying (\\ref{triple_pt}).\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{no_back_path}.]\nGiven an outcome in the event in (\\ref{triple_pt}), $\\Gamma_{z}$ and $\\Gamma_{z'}$ almost surely merge. So there is some random $z_{\\mathbb{G}} \\in \\ensuremath{\\mathbb{Z}^2}$ which is the first intersection point of $\\Gamma_{z}$ and $\\Gamma_{z'}$ (``first\" in the sense of both the ordering in $\\Gamma_z$ and in the ordering of $\\Gamma_{z'}$). Again $z_{\\mathbb{G}}$ can take only countably many values, and so there is a $z_0$ which occurs with positive probability; call the intersection of the event in (\\ref{triple_pt}) with the event $\\{z_{\\mathbb{G}} = z_0\\}$ by the name $B.$\n\nWe now consider the graph $\\mathbb{G}$ as an undirected graph, in which vertices $x$ and $y$ are adjacent if $\\langle x,y \\rangle$ or $\\langle y,x \\rangle$ are in $\\mathbb{G}$ (we abuse notation by using the same symbol for both the directed and undirected versions of $\\mathbb{G}$). We define an encounter point of the undirected $\\mathbb{G}$ to be a vertex whose removal splits $\\mathbb{G}$ into at least three infinite components. Note that $B \\subseteq \\{z_0$ is an encounter point$\\}$; by translation invariance, we see that there is a uniform $c_t > 0$ such that the probability of any fixed vertex to be an encounter point is at least $c_t.$\n\nWe are now in the setting of Burton-Keane \\cite{burtonkeane}. To briefly synopsize, the number of points on the boundary of $[-M,M]^2$ must be at least the number of encounter points within. In particular, the number of encounter points is surely bounded above by $8M$. 
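To spell out the count (a short verification of ours): the boundary in question contains\n\\[\n\\left| \\left( [-M,M]^2 \\setminus (-M,M)^2 \\right) \\cap \\ensuremath{\\mathbb{Z}^2} \\right| = 4(2M+1) - 4 = 8M\n\\]\nlattice points. 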
But since each point within has probability at least $c_t$ to be an encounter point, the expected number of encounter points within $[-M,M]^2$ is at least $c_t M^2.$ This is a contradiction for large $M.$\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proofs of main theorems}\\label{sec: proofs}\n\n\\subsection{Proof of Theorem~\\ref{thm: sectors}}\\label{sec: sectors}\nSuppose that $\\partial \\mathcal{B}$ is differentiable at $v_\\theta = \\varpi$ and construct the measure $\\mu$ as in Section~\\ref{sec: mudef}. Using the notation of Theorem~\\ref{thm: nachostheorem}, we set \n\\[\nL_\\varrho = \\{x \\in \\mathbb{R}^2 : x \\cdot \\varrho = 1\\}\\ .\n\\]\nFrom the theorem, we deduce that with $\\mu$-probability 1, $\\Gamma_0$ is asymptotically directed in $J_\\varrho$. But by the assumption of differentiability, $J_\\varrho = I_\\theta$ with $\\mu$-probability 1 and thus\n\\begin{equation}\\label{eq: ptonboyz1}\n\\mu\\left( \\Gamma_0 \\text{ is asymptotically directed in } I_\\theta \\right) = 1\\ .\n\\end{equation}\nBy Proposition~\\ref{prop: firstGG2}, each finite piece of $\\Gamma_0$ is a geodesic, so $\\Gamma_0$ is an infinite geodesic. Define $\\hat \\Omega \\subseteq \\Omega_1$ as the set\n\\[\n\\hat \\Omega = \\{\\omega \\in \\Omega_1 : \\mu(\\Gamma_0 \\text{ is asymptotically directed in }I_\\theta \\mid \\omega) = 1\\}\\ .\n\\]\nThe inner probability measure is the regular conditional probability measure. The set $\\hat \\Omega$ is measurable and because the marginal of $\\mu$ on $\\Omega_1$ is $\\mathbb{P}$, it satisfies $\\mathbb{P}(\\hat \\Omega)=1$. Further, for each $\\omega \\in \\hat \\Omega$ there is an infinite geodesic from 0 which is asymptotically directed in $I_\\theta$.\n\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm: newman}}\\label{sec: newman}\n\nIn this section we assume either {\\bf A1'} or {\\bf A2'}. Assume that the limit shape $\\mathcal{B}$ has uniformly positive curvature. Then the boundary $\\partial \\mathcal{B}$ cannot contain any straight line segments. This implies that the extreme points $ext(\\mathcal{B})$ are dense in $\\partial \\mathcal{B}$. Choose some countable set $D \\subseteq ext(\\mathcal{B})$ that is dense in $\\partial \\mathcal{B}$. For any $\\theta_1$ and $\\theta_2$ with $0 < dist(\\theta_i,\\theta) < \\pi$ for $i=1,2$, write $\\theta_1 >_\\theta \\theta_2$ if $I(\\theta_1,\\theta)$ contains $\\theta_2$. Because $D$ is dense in $\\partial \\mathcal{B}$, we can find two sequences $(\\theta_n^1)$ and $(\\theta_n^2)$ such that (a) $0 < dist(\\theta_n^i,\\theta)<\\pi$ for all $n$ and $i$, (b) for $i=1,2$, $dist(\\theta_n^i,\\theta) \\to 0$ as $n \\to \\infty$ and (c) for each $i=1,2$ and $n$, $\\theta_n^i >_\\theta \\theta_{n+1}^i$. Let $v_n$ be the point $nv_\\theta$ and let $\\gamma_n$ be the geodesic from $0$ to $v_n$. Define $\\gamma$ as any subsequential limit of $(\\gamma_n)$. By this we mean a path $\\gamma$ such that for each finite subset $E$ of $\\mathbb{R}^2$, the intersection $\\gamma_n \\cap E$ equals $\\gamma \\cap E$ for all large $n$. We claim that $\\gamma$ has asymptotic direction $\\theta$.\n\nLet $\\varepsilon>0$ and choose $N$ such that $dist(\\theta,\\theta_N^j)<\\varepsilon$ for $j=1,2$. Because $\\omega \\in \\Omega'$, for $j=1,2$, we can choose an infinite geodesic $\\gamma_N^j$ containing 0 with asymptotic direction in $I(\\theta_N^j,\\theta_{N+1}^j)$. Write $P$ for the union of $\\gamma_N^1$ and $\\gamma_N^2$. The complement of $P$ in $\\mathbb{R}^2$ consists of two open connected components (as $P$ cannot contain a circuit). 
Because both paths are directed away from $\\theta$, exactly one of these two components contains all but finitely many of the $nv_\\theta$'s. Let $C_1$ be the union of $P$ with this component and let $C_2$ be the other component.\n\nChoose $N_0$ so that $nv_\\theta \\in C_1$ for all $n \\geq N_0$. We claim now that each finite geodesic $\\gamma_n$ for $n \\geq N_0$ is contained entirely in $C_1$. If this were not true, $\\gamma_n$ would contain a vertex $z$ in $C_2$ and therefore it would cross $P$ to get from $z$ to $v_n$. Then if $w$ is any vertex on $\\gamma_n \\cap P$ visited by $\\gamma_n$ after $z$, then there would be two different geodesics from $0$ to $w$ and this would contradict unique passage times. Therefore, as $\\gamma_n$ is contained in $C_1$ for all large $n$, so must $\\gamma$. This implies that $\\gamma$ is asymptotically directed in the set of angles within distance $\\varepsilon$ of $\\theta$ (for each $\\varepsilon>0$) and therefore has asymptotic direction $\\theta$.\n\nTo prove the second statement choose $\\omega \\in \\Omega'$ and let $\\gamma$ be an infinite geodesic. If $\\gamma$ does not have an asymptotic direction then, writing $x_n$ for the $n$-th vertex of $\\gamma$, we can find an angle $\\phi \\in [0,2\\pi)$ such that $\\phi$ is a limit point of $\\{\\arg x_n : n \\geq 1\\}$ (under the metric $dist$) but $(\\arg x_n)$ does not converge to $\\phi$. So there exists a number $\\varepsilon$ with $0<\\varepsilon<\\pi$ and a subsequence $(x_{n_k})$ of $(x_n)$ such that for each $m$, $dist(\\arg x_{n_{2m}}, \\phi) < \\varepsilon\/2$ but $dist(\\arg x_{n_{2m+1}}, \\phi) > \\varepsilon$. By the first part of the theorem we can find infinite geodesics $\\gamma_1$ and $\\gamma_2$ from $0$ such that $\\gamma_1$ has asymptotic direction $\\phi + 3\\varepsilon\/4$ and $\\gamma_2$ has asymptotic direction $\\phi - 3\\varepsilon\/4$. Now it is clear that if we write $P$ for the union of $\\gamma_1$ and $\\gamma_2$ then $\\gamma$ must both contain infinitely many vertices of $P$ and infinitely many vertices of $P^c$. This again contradicts unique passage times.\n\n\n\n\\begin{proof}[Proof of Corollary~\\ref{cor: newman2}]\nIf $\\theta$ is an exposed point of differentiability then by Corollary~\\ref{cor: exposed}, with probability one there exists an infinite geodesic from 0 in each rational direction. Then the proof above goes through with minor modifications.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm: random_hyperplanes}}\n\nAssume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property. Let $v \\in \\mathbb{R}^2$ be nonzero and $\\varepsilon>0$. We will prove that the statement of the theorem holds with probability at least $1-\\varepsilon$. Choose $\\varpi \\in \\partial \\mathcal{B}$ to be parallel to $v$ and construct a measure $\\mu$ as in Section~\\ref{sec: mudef}. Let $(n_k)$ be an increasing sequence such that $\\mu_{n_k}^* \\to \\mu$ weakly.\n\nWe will define a double sequence of cylinder events that approximate the events in the theorem. For $m \\leq n$, a configuration $\\eta \\in \\Omega_3$ and $x,y \\in [-m,m]^2 \\cap \\mathbb{Z}^2$, we say that $x$ is $n$-connected to $y$ ($x \\to_n y$) if there exists a directed path from $x$ to $y$ whose vertices stay in $[-n,n]^2$. We say that $x$ and $y$ are $n$-connected $(x \\leftrightarrow_n y)$ if there is an undirected path connecting $x$ and $y$ in $[-n,n]^2$. 
For $m \\leq n$ write $A_{m,n} \\subseteq \\Omega_3$ for the event that\n\\begin{enumerate}\n\\item all vertices $v \\in [-m,m]^2$ have exactly one forward neighbor in $\\mathbb{G} \\cap [-n,n]^2$,\n\\item there is no undirected circuit contained in $[-m,m]^2$,\n\\item for all vertices $v,w \\in [-m,m]^2$, there exists $z \\in [-n,n]^2$ such that $v \\to_n z$ and $w \\to_n z$ and\n\\item for all vertices $v \\in [-m,m]^2$ there is no $z \\in [-n,n]^2 \\setminus (-n,n)^2$ such that $z \\to_n v$.\n\\end{enumerate}\n\nWe claim that for any $m$ there exists $n(m) \\geq m$ such that $\\mu(A_{m,n(m)}) > 1-\\varepsilon\/4^{m+2}$. To prove this, let $\\hat \\Omega \\subseteq \\widetilde \\Omega$ be the event that (a) all vertices have one forward neighbor in $\\mathbb{G}$, (b) $\\mathbb{G}$ has no undirected circuits, (c) for all $x,y \\in \\mathbb{Z}^2$, $\\Gamma_x$ and $\\Gamma_y$ coalesce and (d) $|C_x| < \\infty$ for all $x \\in \\mathbb{Z}^2$. By Proposition~\\ref{prop: secondGG2}, Theorem~\\ref{thm: Gcoalescethm} and Theorem~\\ref{no_back_path}, the $\\mu$-probability of $\\hat \\Omega$ is 1. Therefore conditions 1 and 2 above have probability 1 for all $m$ and $n$. For any configuration in $\\hat \\Omega$ and $m \\geq 1$ we can then choose a random and finite $N(m) \\geq m$ to be minimal so that conditions 3 and 4 hold for all $n \\geq N(m)$. Taking $n(m)$ so large that $\\mu(N(m) \\geq n(m)) \\leq \\varepsilon\/4^{m+1}$ completes the proof of the claim.\n\nWe now pull $A_{m,n(m)}$ back to $\\Omega_1$, using the fact that it is a cylinder event in $\\Omega_3$ and thus its indicator function is continuous. There is an $m$-dependent number $K_0(m)$ such that if $k \\geq K_0(m)$ then $\\mu_{n_k}^*(A_{m,n(m)}) > 1-\\varepsilon\/4^{m+2}$. By definition of $\\mu_{n_k}^*$ in \\eqref{eq: munstar} and $\\Phi_\\alpha$ in \\eqref{eq: phidef}, the set $\\Lambda_{m,k}$ of values of $\\alpha \\in [0,n_k]$ such that $\\mathbb{P}(\\Phi_\\alpha^{-1}(A_{m,n(m)})) > 1-\\varepsilon\/2^{m+2}$ has Lebesgue measure at least $n_k(1-2^{-(m+2)})$. \n\nThe next step is to construct a deterministic sequence $(a_m)_{m \\geq 1}$ of real numbers such that\n\\begin{equation}\\label{eq: clambake}\na_m \\to \\infty \\text{ and } \\mathbb{P}\\left( \\cap_{j=1}^m \\Phi_{a_m}^{-1} (A_{j,n(j)}) \\right) \\geq 1-\\varepsilon\/2 \\text{ for all } m\\ .\n\\end{equation}\nWe do this by induction on $m$. For $m=1$, let $a_1$ be any number in the set $\\Lambda_{1,K_0(1)}$. By definition then $\\mathbb{P}(\\Phi_{a_1}^{-1}(A_{1,n(1)})) \\geq 1-\\varepsilon\/2$. Assuming that we have fixed $a_1, \\ldots, a_m$, we now define $a_{m+1}$. Let $k$ be such that $k \\geq \\max \\{K_0(1), \\ldots, K_0(m+1)\\}$ and $n_k \\geq 3a_m$ and consider $\\Lambda_{1,k}, \\ldots, \\Lambda_{m+1, k}$ as above. The intersection of these sets has Lebesgue measure at least $3n_k\/4$ so choose $a_{m+1}$ as any element of the nonempty set $(3a_m\/2,n_k] \\cap \\left[ \\cap_{i=1}^{m+1} \\Lambda_{i,k}\\right]$. 
For this choice,\n\\[\n1- \\mathbb{P}\\left(\\cap_{j=1}^{m+1} \\Phi_{a_{m+1}}^{-1}(A_{j,n(j)}) \\right) \\leq \\sum_{j=1}^\\infty \\varepsilon \/2^{j+2} = \\varepsilon\/4\\ .\n\\]\nAs $a_{m+1} \\geq 3a_m\/2$, the condition $a_m \\to \\infty$ holds and we are done proving \\eqref{eq: clambake}.\n\nFrom \\eqref{eq: clambake}, we deduce $\\mathbb{P}(A) \\geq 1-\\varepsilon\/2$, where\n\\[\nA = \\{\\cap_{j=1}^m \\Phi_{a_m}^{-1}(A_{j,n(j)}) \\text{ occurs for infinitely many } m\\}\\ .\n\\]\nWe complete the proof by showing that the statement of the theorem holds for any $\\omega\\in A$. Fix such an $\\omega$ and a random subsequence $(a_{m_k})$ of $(a_m)$ such that $\\omega \\in \\cap_{j=1}^{m_k}\\Phi_{a_{m_k}}^{-1}(A_{j,n(j)})$ for all $k$. By extracting a further subsequence, we may assume that $\\mathbb{G}_{L_{a_{m_k}}(\\varpi)}$ converges to some graph $G$. The event $\\Phi_{\\alpha}^{-1}(A_{j,n(j)})$ is exactly that the graph $\\mathbb{G}_{L_\\alpha(\\varpi)}$ satisfies the conditions of $A_{j,n(j)}$ above, so in particular, it has no undirected circuits in $[-j,j]^2$, all directed paths starting in $[-j,j]^2$ coalesce before leaving $[-n(j),n(j)]^2$, no directed paths connect $[-n(j),n(j)]^2 \\setminus (-n(j),n(j))^2$ to $[-j,j]^2$, and all vertices in $[-j,j]^2$ have one forward neighbor in $[-n(j),n(j)]^2$. On the subsequence $(a_{m_k})$, the events $\\Phi_{a_{m_k}}^{-1}(A_{1,n(1)})$ occur for all $k$, so $G$ must satisfy the conditions of $A_{1,n(1)}$ as well. The same is true for $A_{j,n(j)}$ for all $j$, so $G$ satisfies the conditions of the theorem.\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm: exceptional_set}}\n\nThis theorem follows directly from results of the previous sections. Assume either {\\bf A1'} or both {\\bf A2'} and the upward finite energy property. For the first part of the theorem, suppose that $\\partial \\mathcal{B}$ is differentiable at $v_\\theta$. Choose $\\varpi = v_\\theta$ and construct the measure $\\mu$ as in Section~\\ref{sec: mudef}. Given $(\\omega,\\Theta,\\eta) \\in \\widetilde \\Omega$, let $\\mathbb{G}(\\eta)$ be the geodesic graph associated to $\\eta$. By Theorems~\\ref{thm: nachostheorem}, \\ref{thm: Gcoalescethm} and \\ref{no_back_path}, with $\\mu$-probability one, all directed paths in $\\mathbb{G}$ are asymptotically directed in $I_\\theta$, they coalesce, and no vertex $x$ has $|C_x|$ infinite. Call this event $A$ and define\n\\[\n\\hat \\Omega = \\{\\omega \\in \\Omega_1 : \\mu(A \\mid \\omega) = 1\\}\\ .\n\\]\n$\\mu(\\cdot \\mid \\omega)$ is the regular conditional probability measure. $\\hat \\Omega$ is a measurable set and satisfies $\\mathbb{P}(\\hat \\Omega)=1$ since the marginal of $\\mu$ on $\\Omega_1$ is $\\mathbb{P}$. Further, for each $\\omega \\in \\hat \\Omega$, the theorem holds.\n\nFor the other two parts of the theorem we simply argue as in the proof of Corollaries~\\ref{cor: exposed} and \\ref{cor: extreme}. In the former case we just notice that if $v_\\theta$ is also exposed, then $I_\\theta = \\{\\theta\\}$. In the latter case, we find a point $v_\\theta$ on the arc joining $v_{\\theta_1}$ to $v_{\\theta_2}$ at which $\\partial \\mathcal{B}$ is differentiable. 
The set $I_\\theta$ contains only angles associated to points on the arc and we are done.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\t\n\nHigh-resolution satellite images from most of the Earth's surface have been available for over a decade, and in recent years advanced deep learning methods have been applied to them. Mnih et. al. started this field with a classifier that detects roads \\cite{mnih}. More recently, deep learning models and satellite imagery have been used for more complicated tasks such as crop yield prediction in the U.S. \\cite{cropyield} and poverty prediction in Africa \\cite{jean}. \\par\n\nWe believe there are still many unexplored questions and potential applications for this field of study.\nOne such problem is using satellite images to enhance existing house price prediction models, which historically only used hand-selected features (e.g. number of rooms or floor area). We show that a vision-based model can learn important relevant features from the neighborhood of a house (e.g. distance from green fields or highways) by processing a single satellite image, and then use those features to improve the estimation accuracy.\n\nAlthough high-resolution satellite images contain an abundance of information that might be correlated with house prices, such data are highly unstructured and thus challenging to extract meaningful insights from. Although deep learning models such as convolutional neural networks (CNN) could in principle be trained to directly estimate prices from satellite imagery, the scarcity of training data makes the application of these techniques challenging. Therefore, we use transfer learning techniques to overcome this problem in that we will use knowledge gained while solving one problem and apply it to our different but related problem. We will start with a CNN model that has been trained on ImageNet \\cite{imagenet}, a large image classification data set, to identify low-level image features such as edges and corners that are common to many vision tasks. Next, we will build on the knowledge gained from this image classification task and fine-tune the model on a new task.\n\nWe combine a dataset of house prices (from LA County's property assessment dataset \\footnote{\\href{https:\/\/data.lacounty.gov\/Parcel-\/Assessor-Parcels-Data-2017\/vak5-2hqh}{https:\/\/data.lacounty.gov\/Parcel-\/Assessor-Parcels-Data-2017\/vak5-2hqh}}) and a dataset of satellite images (collected via Google Static Maps API \\footnote{\\href{https:\/\/developers.google.com\/maps\/documentation\/static-maps\/intro}{https:\/\/developers.google.com\/maps\/documentation\/static-maps\/intro}}) to accomplish this objective.\nThe input to our model is a $640\\times640$ image (with the house located at the center) and a feature vector (number of bedrooms, etc.). The output is a single number, the value of the house in dollars. We feed the image to the pretrained CNN whose last two layers have been deleted after pretraining, and get a feature vector for the image. We then combine the two feature vectors using several fully-connected layers and get the output.\n\nFinally, we evaluate our estimation performance using $R^2$ and mean-squared error (MSE) metrics. 
The results will be compared to two baseline models that only use hand-selected features.\n\nThe inspiration for this project came from Zillow's Home Value Prediction competition \\footnote{\\href{https:\/\/www.kaggle.com\/c\/zillow-prize-1}{https:\/\/www.kaggle.com\/c\/zillow-prize-1}}, which is challenging the data science community to help push the accuracy of the house price estimations further by utilizing any additional source of information \\footnote{We even tried reducing Zillow's model's estimation error (which is what the competition is about) using satellite images, but it proved to be a challenging task. Zillow's model already performs exceptionally well, and improving a blackbox model without having access to its details is difficult. We are not sure if Zillow's model uses any vision-based features or not.}. \n\n\\section{Related work}\nTraditionally, house price prediction models used hand-selected features (e.g. number of rooms or floor area). Feed-forward neural networks and various decision-tree-based models \\cite{dectree} outperformed other approaches. However, as \\cite{Chopra} mentions, the parameters that determine the price of a house are twofold: one that is intrinsic to the house and one that indicates the \"desirability\" of the house. Traditional models can only learn the former, but the latter usually depends on the location of the house \\cite{Kockelman}. In particular, social and economic metrics such as crime rate, pollution levels and distance from important locations (e.g. train station, hospitals) can affect the housing price \\cite{Bency}. Therefore, a model that is able to capture the resulting spatial correlations can potentially outperform other models. Spatial auto-regressive (SAR) models are one approach to capturing the existing spatial correlation in housing prices. These models rely on a spatial contiguity matrix that is usually hand-designed with the help of a domain expert, but as \\cite{Chopra} showed, can also be learned by algorithms. Non of these models use vision-related features. More recently, \\cite{Bency} proposed a model that uses deep neural networks to extract information from satellite images, combined them with an SAR model and other house features. This approach resulted in a $57\\%$ reduction in RMSE compared to a model with SAR but without satellite images.\n\nIt is worth noting that these papers use different datasets which in most cases are gathered by the authors of the paper. This makes it difficult to compare their results in a completely fair manner \\footnote{We also use a dataset that we collected.} However, it can be seen that the focus of the research has been moving towards models that can capture features from outside the building itself, which has generally improved the accuracy of estimators.\n\n\\section{Dataset and Features}\n\nOur dataset consists of two main parts: 1. real estate properties, and 2. high-resolution satellite images. Details of each part are explained in this section.\n\n\\subsection{Real Estate Properties}\nWe collected properties data from LA County's property assessment data which is available online\\footnote{\\href{https:\/\/data.lacounty.gov\/Parcel-\/Assessor-Parcels-Data-2017\/vak5-2hqh}{https:\/\/data.lacounty.gov\/Parcel-\/Assessor-Parcels-Data-2017\/vak5-2hqh}}. This dataset consists of more than 2 million real-estate properties in LA county assessed between 2006 and 2017. 
Each entry had 50 features, floor area, number of bedrooms, year built, use type, latitude and longitude to name a few. Preprocessing was done on the data, details of which will be discussed later in this paper.\n\n\\subsection{High-Resolution Satellite Images}\nTo better predict the price error, we combined our main dataset with satellite images of houses, collected via Google Static Maps API \\footnote{\\href{https:\/\/developers.google.com\/maps\/documentation\/static-maps\/intro}{https:\/\/developers.google.com\/maps\/documentation\/static-maps\/intro}} using the longitude and latitude of each house. This API gives access to different zoom levels between 1 (the Earth) and 24 (the most detailed). In this study, we set the zoom level to 19, and retrieved images with size 640x640 which is the highest resolution available for the free tier users\\footnote{We planned to try different zoom levels, but the limit of 25,000 images per day for the free tier made image gathering very time-consuming}. Figure \\ref{fig:sample} shows a few examples. We believe this zoom level provides a suitable amount of information about the neighborhood while showing the shape of the building itself.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{sample}\n\\centering\n\\caption{Examples of satellite images. Each image is $640\\times640$ pixels.}\n\\label{fig:sample}\n\\end{figure}\n\n\\subsection{Data Preprocessing}\n\nBefore beginning analysis, the LA County dataset was preprocessed. Eight columns contained unrelated information such as assessor's ID and parcel ID, which were removed. One redundant column was also removed. With each property, came three price values: total land value, personal property value, and total value. In addition to residential properties, the data included rows for non-residential properties as well as empty lots. Since our aim is to study housing price estimations, we focused our analysis on entries for which personal property value was non-zero, and used this column as our labels. Consequently, non-residential rows were removed. Finally, we ended up with 40 features per property, among which 23 were categorical parameters (e.g. use type). We replaced categories with integer values. Each column in the dataset was then normalized to have zero mean and standard deviation of $1$. Table \\ref{table:dataset} shows a summary of the size of our dataset.\n\n\\begin{table}{}\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nTrain Set Size & Validation Set Size&Test Set Size&Number of Features \\\\\n\\hline\n48,548&2,698&2,698&40\\\\\n\\hline\n\\end{tabular}\n\\vspace{6px}\n\\caption{Train\/validation\/test size of the dataset}\n\\label{table:dataset}\n\\end{table}\n\n\\section{Methods}\n\n\\subsection{Baseline Models}\n\nWe started our analysis by building two baseline models: decision-tree-based models and neural networks. For the former, we analyzed multiple tree-based estimators. Extra tree regressor \\cite{extratrees} demonstrated the best performance and was chosen as our baseline. Moreover, we trained a neural network on our features dataset. Performance of these models are provided in the Experiments and Results section.\n\n\n\\subsection{Inception-v3}\n\nSince we do not have enough data to train vision-based convolutional neural networks, we use pretrained models to encode images. We use Inception-v3 model \\cite{inception}, trained on ImageNet, to construct 2048-dimensional vectors for our images. The process is as follows. 
First, JPEG satellite images are converted into RGB matrices, resized to $299\\times299\\times3$ and normalized to have values in $[0, 1]$ to match the expected input of Inception-v3. Feeding the data into the model, we then save the resulting vectors in a binary file to be used later in our network. This time-consuming process generated around 8 gigabytes of processed data. We do this in order to make the training process faster and to be able to iterate more quickly in the hyperparameter tuning part of the project.\n\n\\subsection{Cost Function}\nWe use mean-squared error (MSE) as our cost function:\n$\nJ= \\frac{1}{m} \\sum_{i=1}^{m} L(\\hat{y}^{(i)}, y^{(i)})=\\frac{1}{m} \\sum_{i=1}^{m}(\\hat{y}^{(i)}-y^{(i)})^2\n$. \nNote that we report our results using $R^2$ (not MSE), but the two metrics are directly related, in that minimizing MSE is equivalent to maximizing $R^2$:\n$\nR^2=1-\\frac{MSE}{S}\n$\nwhere $S$ is the variance of the test set's labels.\n\n\n\\section{Experiments and Results}\n\n\\subsection{Experiments}\nAll numerical results are provided in table \\ref{table:results}.\nWe first studied whether images alone are able to estimate prices. The image encodings are fed into fully-connected neural network layers and trained to minimize the mean squared error (MSE) loss. \n\nFrom these results, it is apparent that images by themselves are not able to predict prices. This can be attributed to the difficulty of estimating house size from images: it is not possible for a model to decide on the size of a house by just looking at a satellite picture, as it may include several other buildings too.\n\nThe next step was to train models on both features and images to see if combining the two can improve our estimations. We feed our two feature vectors to two separate dense networks and we concatenate the results in the final layer. We will describe the details of this model in the Network Architecture section.\n\n\\begin{table}{}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\nModel& Train $R^2$ & Dev $R^2$& Test $R^2$& Train MSE& Dev MSE& Test MSE\\\\\n\\hline\nExtra Tree (baseline) & 0.99 & 0.86 & 0.71 & 0.000 & 0.112 & 0.002\\\\\n\\hline\nNeural Net F (baseline) & 0.96 & 0.84 & 0.85 & 0.037 & 0.136 & 0.001\\\\\n\\hline\nNeural Net I & 0.000 & -0.0001 & -0.038 & 1.062 & 0.859 & 0.010\\\\\n\\hline\nNeural Net F+I & 0.98 & 0.94 & 0.93 & 0.011 & 0.044 & 0.001\\\\\n\\hline\n\\end{tabular}\n \\vspace{0.6em}\n\\caption{Summary of results. MSE (mean-square error) is calculated using normalized labels. F is features, I is satellite images.}\n\\label{table:results}\n\\end{table}\n\\subsection{Interpretation of the Results}\nWe see that the F+I model outperforms all other models by at least $10\\%$ in the $R^2$ metric. Intuitively, the cost of a house is approximately floor area $\\times$ price per square foot. While the features provide the model with the first factor, the images help it improve its estimate of the second. We ranked data points based on how much F+I improves the estimation compared to the F model. 
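\n\nTo make this ranking criterion concrete, the following short sketch shows one way such an improvement score could be computed. It is only an illustration under our own assumptions: the use of NumPy, the array names, and the choice of ranking by the change in absolute error are not an exact description of our pipeline.\n\\begin{verbatim}\nimport numpy as np\n\ndef rank_by_improvement(y_true, y_pred_f, y_pred_fi):\n    # Positive score: the F+I prediction is closer to the true price\n    # than the F prediction; negative score: it is further away.\n    improvement = np.abs(y_true - y_pred_f) - np.abs(y_true - y_pred_fi)\n    # Indices ordered from largest improvement to smallest.\n    return np.argsort(-improvement)\n\\end{verbatim}\nThe sign of the accompanying change in the prediction itself (up or down) then distinguishes corrected underestimates from corrected overestimates.\n\n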
The top 5 positive improvements (where F+I corrected an underestimate) and the top 4 negative ones (where F+I corrected an overestimate) are shown in figures \\ref{fig:top} and \\ref{fig:bottom}, respectively.\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{top}\n\\centering\n\\caption{Baselines underestimated the price of these houses and F+I corrected them.}\n\\label{fig:top}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{bottom}\n\\centering\n\\caption{Baselines overestimated the price of these houses and F+I corrected them.}\n\\label{fig:bottom}\n\\end{figure}\n\nAn interesting observation is that our dataset (despite all the preprocessing that we have done) contained a few empty lots with no buildings\\footnote{Perhaps due to changes that occurred between the time LA County assessed the properties and the time Google captured their images}. The fact that these data points are present in figures \\ref{fig:top} and \\ref{fig:bottom} shows that our model is more robust to noise in the data.\nEvaluating figure \\ref{fig:top}, we think it shows that our model learns that cars are an indicator of higher prices (since they suggest higher accessibility or a short distance from the city center), so it adds to the estimated price.\n\n\\subsection{Network Architecture}\nWe have tried many different architectures and hyperparameters, and in this section we only describe the best results. Due to lack of space, we only describe the F+I model.\nWe use the architecture described in figure \\ref{fig:architecture}. The hyperparameters are listed in table \\ref{table:hyperparameters}. Our model, especially on the image side, is very prone to over-fitting, so we use L2 regularization and dropout to prevent that. The learning rate (lr) is decreased after each batch is processed, using the following formula:\n$lr = \\frac{lr}{1 + \\alpha \\times t}$. We use a relatively big batch size because we were training on a GPU\\footnote{NVIDIA Tesla K80}. Larger batch sizes (but smaller than the available memory size) help increase the utilization of the GPU and generally speed up the training process.\n\n\\begin{table}{}\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\nL2 Regularization Parameter & $0.1$\\\\\n\\hline\nFirst Dropout (drop probability)&$0.3$ \\\\\n\\hline\nSecond Dropout (drop probability)&$0.2$ \\\\\n\\hline\nActivation Function & ReLU\\\\\n\\hline\nBatch Size& $1024$ \\\\\n\\hline\nOptimizer& Adam(learning rate=$0.0005$, $\\beta_1=0.9$, $\\beta_2=0.999$)\\\\\n\\hline\nEpochs& $200$\\\\\n\\hline\nLearning Rate Decay &$\\alpha=0.0001$ \\\\\n\\hline\n\\end{tabular}\n \\vspace{0.6em}\n\\caption{Hyperparameters for F+I model}\n\\label{table:hyperparameters}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics[width=0.6\\textwidth]{architecture}\n\\centering\n\\caption{F+I neural network architecture}\n\\label{fig:architecture}\n\\end{figure}\n\n\\section{Conclusion\/Future Work}\nTo sum up, by adding satellite images to our prediction model, we were able to improve $R^2$ by $\\sim$10\\% and reduce MSE by $\\sim$60\\% compared to the baseline. The results are promising. 
We believe the following future steps are worth further analysis.\n\n\n Using different image zoom levels or a combination of them: Lower zoom levels will be able to capture more details about the neighborhood as they cover a wider range around the property, while a higher zoom level may estimate based on more details from the house, for example the materials used to build it.\n Fine tuning last layers of the Inception-v3 network: Inception-v3 encodes images of size 299x299x3 to 2048-dimensional vectors. Thus, valuable information may be omitted by this process. As a future step, we can take outputs of earlier layers of the pretrained model or fine tune parameters of the last few convolutional layers. This will allow us to capture more relevant features from the images.\n Adding data from different locations to the dataset to test how generalizable the model is from one location to another: Since public data of property transactions are widely available today, analysis can continue on larger datasets consisting of more counties. This will add to inputs' variance, and potentially make the model more robust. We can then examine if our approach can be generalized.\n\n\n\\medskip\n\n\\bibliographystyle{abbrvnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Hubbard model is a cornerstone of condensed matter physics. As a paradigmatic model of strongly-correlated electrons \\cite{ANDERSON1196}, it is simple to formulate yet rich in behavior. In two dimensions (relevant e.g. to cuprate superconductors) observed behaviors include, but are not limited to, antiferromagnetism, unconventional metallic behavior characterized by a pseudogap and deviations from Fermi liquid theory~\\cite{tremblay2006review,Alloul2014,Gunnarsson2015,Wu2017,Schafer}, as well as stripe orders~\\cite{PhysRevB.93.035126,Zheng1155} closely competing with superconducting states at low temperatures~\\cite{PhysRevX.10.031016}. \nBut despite decades of effort, a comprehensive understanding of the phase diagram of the two-dimensional Hubbard model has not yet been fully reached.\nTherefore any solutions of the Hubbard model, whether obtained analytically or by accurate and controlled numerical techniques, are of great value.\n\nThe most reliable and comprehensive solutions of the Hubbard model obtained so far have been mainly in (quasi-) one dimension \\cite{PhysRevLett.20.1445} and in infinite dimensions (infinite lattice coordination number)~\\cite{PhysRevB.45.6479,PhysRevLett.62.324,RevModPhys.68.13}.\nIn one dimension, solutions can be obtained by integrability~\\cite{1D_Hubbard_book} and bosonisation methods~~\\cite{Giamarchi_book}, \nas well as numerically with matrix product state (MPS) tensor network methods \\cite{PhysRevLett.69.2863,SCHOLLWOCK201196}. \nThe latter can also reliably treat quasi-one-dimensional ladder or cylindrical geometries with a small transverse size. \nIn the limit of infinite dimensions the Hubbard model is again numerically tractable due to the fact that dynamical mean-field theory (DMFT) becomes exact \\cite{RevModPhys.68.13}. 
Using this technique, the phase diagram of the infinite-dimensional Hubbard model can be\nmapped out and a detailed understanding of the interaction-driven Mott insulator to metal transition \nhas been established (for reviews, see Refs.~\\cite{RevModPhys.68.13,RevModPhys.78.865,\nDMFT@25,georges_lectures,rozenberg_lectures}).\n\nWhile they provide useful insights into the physics of the two-dimensional Hubbard model, these limiting cases also have peculiarities which limit the generality of the \nconclusions that can be drawn from their study. In the limit of infinite dimensions, the metallic state is found to be a Fermi liquid, with interactions affecting one-particle properties in a local \n(momentum-independent) manner only. Hence, in this limit, the feedback of long-wavelength collective modes or even short-range spatial correlations \non quasiparticle properties is entirely absent. \nThese effects are important, in particular close to a critical point. In two dimensions, they are, for example, responsible \nfor the formation of a pseudogap \\cite{Preuss1997,Macridin2006,Kyung2006,Gunnarsson2015,Scheurer2018}.\nIn contrast, in one dimension the low-energy excitations consist {\\it only} of bosonic collective modes, associated with charge \nand spin degrees of freedom.\nThere is no notion of a Fermi liquid and the metallic behavior of the Hubbard model is a Luttinger liquid which lacks coherent quasiparticles \nand displays spin-charge separation~\\cite{Giamarchi_book}. \n\nIn this work we perform a controlled and accurate numerical study of the ground state of the Hubbard model on the Bethe lattice with a finite coordination number $z$, focusing on the case $z=3$. This is an infinite lattice that has a tree structure, where every site is connected to the same number of other sites ($z$) but there are no loops. We show a portion of this lattice in Fig. \\ref{fig:BL intro}.\n\\begin{figure}\n\t\\includegraphics[scale=0.3]{BL_large_4_gen}\n\t\\caption{A portion of the infinite Bethe lattice with coordination number $z = 3$. The figure depicts four ``generations\" of the tree structure, with a particular site chosen to be the ``center\" site. Note that in the actual Bethe lattice there is no special site, and generations can be counted from any of the sites.}\n\t\\label{fig:BL intro}\n\\end{figure}\nThis lattice provides an intermediate case between one dimension ($z=2$) and infinite dimensions ($z = \\infty$), with the key virtue that it admits controlled solutions via tensor network methods, including away from half filling and in the presence of strong interactions. \n\nExact solutions of models on the Bethe lattice have a long history in statistical mechanics~\\cite{Baxter_book}, \nstarting with the pioneering article of Hans Bethe~\\cite{Bethe_1935}. \nSolutions on the finite coordination number Bethe lattice provide a better approximation to thermodynamic quantities than the mean-field approximation \n(corresponding to the infinite dimensional limit)~\\cite{Baxter_book,Bethe_1935,PhysRevLett.74.809}. 
\nModels studied on the Bethe lattice include classical and quantum spin models~\\cite{PhysRevB.80.144415,PhysRevB.77.214431,NAGY2012542,PhysRevB.86.195137,PhysRevB.88.035138,PhysRevB.89.054426,LIU20141}, spin glass systems \\cite{Parisi_BL_spin_glass,PhysRevLett.56.1082,doi:10.1002\/pssb.200541282,PhysRevB.78.134424}, the Bose Hubbard model \\cite{PhysRevB.80.014524}, and models of Anderson localization \\cite{Abou_Chacra_1973,Mirlin_1991,PhysRevLett.78.2803,PhysRevB.100.094201,dupont2020dirty}. The fermionic Hubbard model on the finite version of the $z=3$ Bethe lattice (known as a Cayley tree) has also been studied previously using a variant of the density matrix renormalization group (DMRG) algorithm \\cite{Lepetit2000}, but only the case of half filling was studied (which is a charge insulator) and only local ground state quantities given (energy, staggered magnetization and its fluctuations, and neighboring spin correlations). Since that time, there have been significant advances in DMRG and related algorithms for infinite one-dimensional systems, which we generalize here to the Bethe lattice and use to obtain our results. Notably, there has been no previous study of metallic states on the finite connectivity Bethe lattice, to the best of our knowledge. \n\nWe determine the full phase diagram of the fermionic Hubbard model on the $z=3$ Bethe lattice, allowing for a two-site unit cell, and establish \nthe nature of the doping-driven Mott insulator to metal transition (MIT). We find that this transition is first-order, and that for every value of interaction strength there is a region of forbidden density. Therefore, in the interaction-density plane the model exhibits phase separation at low doping levels. \nWe find that, for all allowed values of the density, the doped metallic ground-state does not display magnetic long-range order. \n\nImportantly, we also demonstrate that the doped metal hosts coherent quasiparticles at all studied values of the interaction strength, $U$, from weak to strong coupling, and determine the behavior of the quasiparticle weight as a function of $U$. This answers in the affirmative the question of whether Fermi liquid behaviour \napplies as soon as the peculiar kinematic constraints of one dimension are alleviated, and also provides a concrete description of a Fermi liquid ground state with tensor networks, both of which are key motivations for our work. Generally, it is difficult for tensor networks to accurately describe interacting metallic states above one dimension, although much progress has been made in this direction \\cite{PhysRevB.81.165104,PhysRevB.93.045116,PhysRevB.100.195141,mortier2020resolving}. Our work provides an alternate route to this agenda, avoiding the computational challenges of two dimensional tensor networks, while going beyond the restrictions of one dimensional physics. \n\nWe obtain our results by generalizing a recently developed MPS method, variational uniform MPS (VUMPS) \\cite{PhysRevB.97.045145}, to tree tensor network (TTN) states \\cite{PhysRevA.74.022320,PhysRevB.82.205105,doi:10.1063\/1.4798639}, which we dub the variational uniform tree state algorithm (VUTS). We further introduce the fermionic version of the VUTS algorithm, using the swap-gate method of Refs. \\cite{PhysRevB.81.165104,Orus_review}. The VUTS algorithm works directly in the thermodynamic limit, which is important in the study of models on the Bethe lattice. The alternative is to study the finite Cayley tree and perform a finite-size scaling analysis. 
However, the number of boundary sites on the Cayley tree is always more than half of the total, and therefore finite-size effects are unusually strong and can even lead to conclusions that do not hold on the Bethe lattice \\cite{Baxter_book,PhysRevB.87.085107,OSTILLI20123417}. Working directly in the infinite-size limit is therefore important for models on trees. All previous works studying quantum models on the (infinite) Bethe lattice using tensor networks have used a variant of the infinite time-evolving block decimation (iTEBD) algorithm \\cite{PhysRevLett.98.070201}. In the one-dimensional case, the VUMPS algorithm has been found to be much more efficient than other methods that work in the thermodynamic limit, such as iTEBD or earlier infinite DMRG algorithms \\cite{PhysRevB.97.045145}, and indeed we find its extension in the form of the VUTS algorithm we develop to be very efficient. Our method scales as ${\\mathcal O}(\\chi^{z+1})$ where $\\chi$ is the bond dimension of the tensor network being optimized. For $z=3$ this scaling is significantly better than that of the most modern and accurate projected entangled pair states (PEPS) algorithms that scale as ${\\mathcal O}(\\chi^{10})$ for PEPS bond dimension $\\chi$ (assuming the boundary MPS bond dimension scales as $\\chi^2$) \\cite{PhysRevB.92.035142,PhysRevB.94.035133,PhysRevB.94.155123,PhysRevB.98.235148,PhysRevX.9.031041,PhysRevB.100.195141}, but is more challenging than the ${\\mathcal O}(\\chi^3)$ scaling of DMRG. However, the steeper scaling is mitigated by the fact that the typical bond dimension required to reach an accurate solution generally decreases as one goes to higher dimensions and larger coordination numbers due to the monogamy of entanglement and more mean-field-like properties of the wavefunction.\nThe accuracy of tensor network methods is often measured by the \\emph{truncation error}, which measures the typical loss of fidelity incurred during the truncation step of the optimization algorithm. In this work, for a bond dimension of $\\chi = 100$, we are able to achieve a truncation error of less than $10^{-3}$ in the most computationally challenging part of the phase diagram (most entangled ground state), and a truncation error of less than $10^{-7}$ in the best cases. This level of accuracy allows us to measure long-distance correlation functions well enough to extract information about quasiparticle coherence in the metallic phase, which demonstrates that our method can be used to reliably study critical phases of matter on the Bethe lattice.\n\nThis work suggests promising\nfuture directions for studying the behavior of strongly correlated electrons in a controlled setup. Because Fermi liquid behavior is a rather generic feature of metallic states, the present study allows to establish a controlled platform which can be used to study how Fermi liquid behaviour can be broken by further perturbations to the model considered here, or in other fermionic models. In the concluding section of this article, we discuss possible routes towards achieving this goal. If successful, tensor network solutions of correlated electrons on the $z=3$ Bethe lattice could provide a new platform for studying non-Fermi liquids~\\cite{RevModPhys.73.797,doi:10.1146\/annurev-conmatphys-031016-025531} in a controlled and accurate manner. 
Other potential applications are the study of the interaction-driven MIT in frustrated fermionic systems, and the study of fermions on closely related tree-like lattices, such as the Husimi cactus on which the Heisenberg model and other spin models have been shown to display spin liquid phases~\\cite{Chandra_1994}. We elaborate on all these directions and others at the end of the paper.\n\nThis paper is organized as follows. In section \\ref{sec:VUTS} we describe the general VUTS method, applicable to generic Hamiltonians. In section \\ref{sec: Hubbard model} we define the Hubbard model on the Bethe lattice and show the phase diagram obtained from the VUTS solution. Section~\\ref{sec:quasiparticles} discusses the calculation of the quasiparticle weight from the occupation function, as well as Luttinger's theorem. Finally, in Sec. \\ref{sec:discussion} we summarize and discuss future directions. \n\n\n\n\n\n\n\\section{Variational uniform tree state algorithm}\n\\label{sec:VUTS}\n\nIn this section we introduce the variational uniform tree state algorithm (VUTS), a generalization of the variational uniform matrix product state algorithm (VUMPS) \\cite{PhysRevB.97.045145}, for optimizing infinite tree tensor network (TTN) states. We start with a Bethe lattice of quantum degrees of freedom. For simplicity we focus on the algorithm for coordination number $z = 3$, which is the value for the model studied in this paper, but the extension to general $z$ is straightforward. We use an infinite TTN as our ansatz to approximate quantum states on the Bethe lattice. For $z = 3$, the infinite TTN with a 1-site unit cell consists of an order 4 tensor $A \\in \\mathbb{C}^{\\chi \\times \\chi \\times \\chi \\times d}$, with one physical leg ($s$) which runs over the physical degrees of freedom $1,...,d$, and three virtual legs ($l_0,l_1,l_2)$ that run over virtual degrees of freedom $1,...,\\chi$. The virtual legs of neighboring tensors connect to each other, forming the same geometry as the Bethe lattice, as shown in the tensor network diagram in Fig. \\ref{fig:TTN Bethe lattice unlabeled}.\n\\begin{figure}\n\t\\includegraphics[scale=0.25]{tree_TN_Bethe_lattice_unlabeled}\n\t\\caption{A finite portion of the infinite tree tensor network (TTN) state describing the many-body wave function on the Bethe lattice. The physical legs (green dashed lines) form the nodes of the lattice, while the virtual legs (straight black lines) form the edges. In this case, a single tensor $A$ comprises the state, and the unit cell is just a single site.}\n\t\\label{fig:TTN Bethe lattice unlabeled}\n\\end{figure}\n\\\\\n\\indent \nThe Hamiltonians we will focus on here are isotropic and have an equivalence between all sites (the analog of translational invariance for hypercubic lattices). The ground state will potentially break this isotropy completely, and break the site equivalence down to a non-trivial unit cell. In this paper, we allow for the state to be fully anisotropic between the different directions emanating from a given site, but for simplicity we focus exclusively on the case when the unit cell consists of no more than two sites (generalizing to arbitrary unit cells is straightforward). The infinite TTN state we study therefore has a 2-site unit cell, and is parameterized by a set of $2z$ tensors $A_{i,m}$, where $i = 0,1$ labels the location in the unit cell, and $m = 1, \\dots, z$ labels the direction of the gauge (defined below). 
Again, each tensor has one physical index $s_i = 1, \\dots, d$ and three virtual ``link\" indices $l_0,l_1,l_2$.\n\\\\\n\\indent\nBecause the TTN has no loops, it is straightforward to work in the \\emph{canonical gauge}, i.e. the gauge where the tensors are constrained to be orthonormal bases when viewed as a matrix from two link indices $l_n,l_k$ and the physical index $s_i$ to the remaining link index $l_m$. This constraint on the tensors is very useful for making the variational optimization faster and more stable, and is standard in a wide variety of tensor network algorithms, particularly in 1D algorithms like VUMPS and DMRG. The constraint on the tensors is written as\n\\begin{equation}\n\\displaystyle\\sum_{s_i,l_n,l_k} \\bar{A}^{s_i,l'_m,l_n,l_k}_{i,m} A^{s_i,l_m,l_n,l_k}_{i,m} = \\mathbb{1}^{l'_m,l_m}_{i,m},\n\\label{eq:canonical gauge}\n\\end{equation}\nwhere we have introduced the notation $\\bar 0 = 1, \\bar 1 = 0$ for the unit cell indices and use $\\bar{A}$ to denote the complex conjugation of $A$. The matrices $\\mathbb{1}_{i,m}$ are identities. Diagrammatically, Eq. (\\ref{eq:canonical gauge}) is equivalent to Fig.~\\ref{fig:canonical gauge}. The arrows on the links denote the gauge of the $A$ tensors (the outgoing link is the direction of the gauge). Any TTN can be brought into the form where the tensors obey Eq. (\\ref{eq:canonical gauge}) (or equivalently Fig. \\ref{fig:canonical gauge}) by inserting a particular set of ``gauge transformations\"\nonto the link degrees of freedom, i.e. inserting a particular set of resolutions of the identity $X X^{-1}$ (where $X$ is an invertible matrix) onto the links of the TTN. Note that the gauge transformation does not affect the observables of the system, and any TTN can be transformed into the canonical gauge efficiently.\n\\begin{figure}\n \\includegraphics[scale=0.25]{gauge_condition_1}\n \\caption{Diagrammatic version of the gauge conditions of Eq. (\\ref{eq:canonical gauge}). The bonds labelled $k,n,m$ have link indices $l_k,l_n,l_m$ respectively (and the uncontracted $m$ bond on the ket has link index $l_m'$). The unlabelled dashed bond is the physical degree of freedom with index $s_i$.}\n\t\\label{fig:canonical gauge}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.45]{tree_TN_Bethe_lattice_with_centers_2}\n\t\\caption{The same portion of the Bethe lattice as in Fig. \\ref{fig:TTN Bethe lattice unlabeled}, but with the bonds labeled with $m = 0,1,2$, which have link indices $l_0,l_1,l_2$ respectively. Additionally, each tensor is labeled with subscripts $i,m$, where $i = 0,1$ is the unit cell index and $m$ is the direction of the gauge. In the top diagram, the gauge center $C_2$ is shown on a bond labelled by $2$. The next equality shows that the $C_2$ tensor can be absorbed into the $A_{1,2}$ tensor to put the gauge center on the site tensor, creating $A_{1,C}$.}\n\t\\label{fig:TTN Bethe lattice with gauge centers}\n\\end{figure}\n\\\\\n\\indent\nExamples of the TTN state with a 2-site unit cell in the canonical gauge are shown in Fig. \\ref{fig:TTN Bethe lattice with gauge centers}. In the top diagram, the gauge center is $C_2$. Here, $C_2$ represents the projection of the infinite wavefunction of the system onto the finite-sized Hilbert space of that link of the network. The center matrices $C_m$ constitute invertible gauge transformations relating the $A$ tensors to each other via $A_{i,m} C_m = A_{i,n} C_{n}$, where $i = 0,1$ and $n \\neq m$. 
The center matrices $C_m$ also contain important information like the entanglement spectrum between the two infinite halves of the system split by that link. Additionally, the gauge center can be absorbed onto an $A$ tensor, defining the center site tensors $A_{i,C} = A_{i,m} C_m$ for any $m$. This is shown in the lower diagram of Fig. \\ref{fig:TTN Bethe lattice with gauge centers}.\n\\\\\n\\indent\nWe describe the algorithm for the case of $H = \\sum_{\\<i,j\\>} h_{i,j}$, where $h_{i,j}$ is a two-site operator that acts on nearest neighbors only. The case of longer-range operators is treatable using techniques like those described in Appendix~C of Ref. \\cite{PhysRevB.97.045145}. The total energy is given by $E = \\sum_{\\<i,j\\>} \\<\\psi|h_{i,j}|\\psi\\>$, and we want to minimize $E$, treating the tensor elements of our TTN as variational parameters. As in VUMPS (and many related tensor network ground state methods), VUTS proceeds in three main steps that are iterated until convergence:\n\\begin{enumerate}\n \\item Compute the projected Hamiltonians (the Hamiltonian projected into the basis corresponding to the virtual degrees of freedom of the network) to turn the global optimization into a local optimization problem.\n \\label{alg:projected Hamiltonian}\n \\item Find the optimized tensors by minimizing the energy of the projected Hamiltonian.\n \\label{alg:optimize tensors}\n \\item Update the tensor network with the new optimized tensors.\n \\label{alg:update network}\n\\end{enumerate}\n\n\\indent\nTo begin, say we are interested in optimizing a 1-site projected wavefunction $A_{i,C}$, as defined in Fig. \\ref{fig:TTN Bethe lattice with gauge centers}. Step \\ref{alg:projected Hamiltonian} requires computing infinite sums of local Hamiltonian terms, projected into the basis of our gauged TTN (defined by the tensors $A_{i,m}$), for each of the $z=3$ infinite subtrees connected to $A_{i,C}$. In order to perform the infinite sum, we focus on summing the energy contributions of a single subtree. An example for the series that needs to be summed for the $m=2$ direction in order to optimize the $A_{0,C}$ tensor is shown in Fig. \\ref{fig:H_{1,2} series}. We define the results of these summations as the matrices $H_{i,m}$. The summation can be carried out by making use of the fact that the sum is a geometric series. However, care has to be taken to project out infinite energy contributions to keep the series convergent (i.e. keep the norm of the solution $H_{i,m}$ from diverging). The procedure of performing the summation and projecting out the infinite energy contributions is a generalization of the one in Appendix D of Ref. \\cite{PhysRevB.97.045145}, and we discuss it in more detail in Appendix \\ref{subsec:summing Hamiltonian terms}.\n\\begin{figure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=1\\linewidth]{H_1_2_series_example}\n \\caption{}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.38\\textwidth}\n \\includegraphics[width=1\\linewidth]{h_tensor}\n \\caption{}\n \\label{fig:h_i_m}\n \\end{subfigure}\n\t\\caption{(a) The first few terms of the series for $H_{1,2}$, the projected Hamiltonian contribution for one of the three branches of the infinite Bethe lattice connected to the center site tensor $A_{0,C}$ (on the $i=0$ sublattice). 
(b) Definition of the $h_{i,m}$ tensors used in (a), which are a sum of two local Hamiltonian environment tensors sitting on two branches of the Bethe lattice. Note that the tensor labeled $h$ represents the two-site operator term $h_{i,j}$, which the Hamiltonian is made up of.}\n\t\\label{fig:H_{1,2} series}\n\\end{figure}\n\\\\\n\\indent \nOnce the environment tensors are found, we can proceed to step \\ref{alg:optimize tensors} of the algorithm, which we begin by optimizing $A_{i,C}$. This is done by finding the ground state of the Hamiltonian projected onto the sublattice site $i$, a standard procedure in VUMPS and DMRG. The eigenvalue equation for $A_{i,C}$ is shown diagrammatically in Fig. \\ref{fig:H_Ac}, and is solved iteratively (using a Hermitian eigensolver such as Lanczos). To find the TTN ground state, we obtain the eigenvector with the smallest eigenvalue. As in the VUMPS algorithm, in addition to optimizing $A_{i,C}$, we also optimize $C_m$. $A_{i,C}$ and $C_m$ are then used to solve for $A_{i,m}$, which make up the updated infinite TTN state (see next paragraph). The eigenvalue equation for $C_m$ is shown diagrammatically in Fig. \\ref{fig:H_C}.\n\\begin{figure}\n\t\\includegraphics[scale=0.35]{Ac_eig_eqn.pdf}\n\t\\caption{The eigenvalue equation for $A_{i,C}$. The sum over $m$ is a sum over the contributions from each leg of the $A_{i,C}$ tensor.}\n\t\\label{fig:H_Ac}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.45]{C_eig_eqn.pdf}\n\t\\caption{The eigenvalue equation for $C_{m}$.}\n\t\\label{fig:H_C}\n\\end{figure}\n\\\\\n\\indent\nFinally, once $A_{i,C}, C_m$ for all $i = 0,1, m = 0,1,2$ are optimized, we can proceed to step \\ref{alg:update network} of the algorithm and solve for our new $A_{i,m}$ tensors by minimizing\n\\begin{equation}\n\\epsilon_{i,m}\n=\n\\min_{A_{i,m}^{\\dagger} A_{i,m} \\, = \\, \\mathbb{1}_{i,m}} || A_{i,C} - A_{i,m} C_m ||. \n\\label{eq:new A tensors}\n\\end{equation}\nThis minimization problem can be solved optimally using techniques described in Eqs. (18)-(22) of Ref. \\cite{PhysRevB.97.045145}. The new $A_{i,m}$ we obtain constitute our updated TTN, and steps \\ref{alg:projected Hamiltonian}-\\ref{alg:update network} are repeated until convergence. Convergence is achieved when the largest error found in Eq. (\\ref{eq:new A tensors}), $\\epsilon_{\\text{prec}} \\equiv \\max\\{\\epsilon_{i,m}\\}$, falls below a chosen threshold (e.g. $\\epsilon_{\\text{prec}} < 10^{-12}$).\n\\\\\n\\indent\nThe VUTS algorithm with a 1-site update, as we describe here, scales as $O(\\chi^{z+1})$ which becomes $O(\\chi^4)$ for the $z=3$ Bethe lattice and $O(\\chi^3)$ for $z=2$, thus reducing to the scaling of VUMPS in the $z=2$ case. Additionally, a 2-site update can be formulated, analogous to the 2-site DMRG algorithm which is commonly used. This requires a slight modification of the algorithm where ground states of 2-site and 1-site projected Hamiltonians are computed (as opposed to 1-site and 0-site projected Hamiltonians in the version of the algorithm described above). This can lead to improved convergence since a larger local Hilbert space is explored, but has a higher computational cost of ${\\mathcal O}(\\chi^5)$ for $z=3$. We use this technique at lower bond dimensions in more challenging parts of the phase diagram (near the phase transition), and switch to the 1-site algorithm later in the calculation to reach higher bond dimensions. 
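\n\\\\\n\\indent\nAs a side remark, the linear-algebra content of the update in Eq.~(\\ref{eq:new A tensors}) can be illustrated with a minimal dense-tensor sketch. This is only an illustration under our own assumptions (NumPy, random dense tensors, small hypothetical dimensions), using the standard polar-decomposition solution of this type of minimization; it is not meant to reproduce the exact procedure of Ref. \\cite{PhysRevB.97.045145}.\n\\begin{verbatim}\nimport numpy as np\n\nd, chi = 4, 10        # physical and bond dimensions (illustrative values)\n# Center-site tensor A_{i,C}, grouped as a (s, l_n, l_k) x l_m matrix,\n# and the center matrix C_m on the bond in direction m.\nA_C = np.random.rand(d * chi * chi, chi)\nC = np.random.rand(chi, chi)\n\n# The isometry A minimizing || A_C - A C || (subject to A^dag A = 1) is the\n# isometric factor of the polar decomposition of A_C C^dag, obtained via an SVD.\nU, _, Vh = np.linalg.svd(A_C @ C.conj().T, full_matrices=False)\nA_new = U @ Vh        # new A_{i,m}, to be reshaped back into (s, l_n, l_k, l_m)\n\nepsilon = np.linalg.norm(A_C - A_new @ C)   # the error epsilon_{i,m}\n\\end{verbatim}\nIn an actual calculation the tensors are complex and, when symmetries are used, block-sparse, but the structure of the update step is the same.\n\\\\\n\\indent\n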
To dynamically change the bond dimensions, we use a generalization of the subspace expansion procedure described in Appendix B of Ref. \\cite{PhysRevB.97.045145}.\n\\\\\n\\indent \nFor fermionic models like the Hubbard model, we need to use a fermionic version of VUTS. We use the method outlined in Refs. \\cite{PhysRevB.81.165104,Orus_review}. Every tensor is now endowed with a fermion parity $Z_2$ quantum number and is parity-preserving. When two tensor legs cross on a planar projection of a tensor diagram, a fermionic swap gate is placed at the crossing. In order to employ this method, we need to use a fixed ordering convention for the legs of the tensors $A_{i,m}, A_{i,C}, C_m$, which must be kept consistent in all of the diagrams in the calculation. We address the associated subtleties and details of this approach in Appendix \\ref{subsec:fermions}. Other symmetries beyond the $Z_2$ parity can also be used, such as $U(1)$ particle number conservation (to fix the filling), $U(1)$ spin projection symmetry in the z-direction, and also the spin $SU(2)$ symmetry. The inclusion of these symmetries makes the tensors more sparse and should therefore make the tensor operations more efficient, allowing us to reach larger bond dimensions. In this work, we only employ parity quantum numbers, and leave the use of additional symmetries for future work.\n\n\n\n\n\n\n\n\n\\section{Model and phase diagram} \n\\label{sec: Hubbard model}\n\nThe Hubbard Hamiltonian is given by\n\\begin{align}\nH = - t \\sum_{\\langle i,j \\rangle} \\sum_{\\sigma = \\uparrow, \\downarrow}c_{i, \\sigma}^{\\dagger} c_{j, \\sigma} + U \\sum_i n_{i, \\uparrow} n_{i, \\downarrow} - \\mu \\sum_{i, \\sigma} n_{i, \\sigma},\n\\label{eq:interacting Hamiltonian}\n\\end{align}\nwhere the site index $i$ now runs over the Bethe lattice and $n_{i,\\sigma} = c_{i, \\sigma}^{\\dagger} c_{i, \\sigma}$ is the on-site density for an electron of spin $\\sigma$. We set $t = 1$ and vary $U \\geq 0$ and $\\mu$. Since the model is particle-hole symmetric, we only need to consider $\\delta \\mu \\equiv \\mu - \\frac{U}{2} \\geq 0$. To obtain the phase diagram, we compute the ground state of the Hubbard model using the fermionic VUTS algorithm for several values of the bond dimension, $\\chi$. The details of the numerical calculation are given in Appendix \\ref{sec:numerical details}. We perform extrapolations in $\\chi$, the results of which we describe below. The additional plots of the finite $\\chi$ results and details of the extrapolations are in Appendix \\ref{sec:phase diagram more plots}. \n\\\\\n\\indent \nAt half filling, $\\delta \\mu = 0$, the system is a charge insulator with antiferromagnetic order for all $U > 0$, as in one dimension. To illustrate this we compute the staggered magnetization of the bipartite sublattice, $m_s \\equiv \\abs{(\\< \\vec{S}_A \\> - \\< \\vec{S}_B \\>)\/2}$. The extrapolated values $m_s(\\chi\\to\\infty)$ are shown in Fig. \\ref{fig:ms extrapolation}. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{ms_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the staggered magnetization $m_s$ of the insulating state at half filling, plotted as a function of $U$. Note the logarithmic scale used for $m_s$. 
The error bars are the discrepancy in the extrapolation with and without the last data point (for the points where they are absent they are smaller than the data points).}\n\t\\label{fig:ms extrapolation}\n\\end{figure}\nWe can see that for any $U>0$ the magnetization is non-zero, tending to zero as $U \\rightarrow 0$. From general mean-field theory considerations we expect $m_s \\sim e^{-\\frac{c}{U}}$ for small $U$. \nHowever, we did not attempt to confirm this functional form numerically by systematic calculations at very small values of $U$.\n\\\\\n\\indent \nTo illustrate the charge gap at half filling, we compute the on-site occupation $\\< n_{i} \\> = \\sum_{\\sigma} \\< n_{i,\\sigma} \\>$ as a function of $\\delta \\mu$. We show this in Fig. \\ref{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50} for a single value of $U = 6$ and $\\chi = 50$. All other $U$ and $\\chi$ look qualitatively similar.\n\\begin{figure}\n\t\\begin{subfigure}[c]{0.45\\textwidth}\n \\includegraphics[width=0.85\\linewidth]{occ_vs_delmu_both_branches_U_6_chi_50}\n \\caption{$U = 6, \\chi = 50$.}\n \\end{subfigure}\n\t\\begin{subfigure}[c]{0.45\\textwidth}\n \\includegraphics[width=0.85\\linewidth]{energy_vs_delmu_both_branches_U_6_chi_50}\n \\caption{$U = 6, \\chi = 50$.}\n \\end{subfigure}\n\t\\caption{(a) The occupation versus $\\delta \\mu$ for $U = 6$ and $\\chi = 50$ for the both metallic and insulating branches. All other values of $U,\\chi$ look similar. (b) The energy per site of both branches in (a). \n\tTheir crossing point, $\\delta \\mu_c$ is indicated by a dashed line in both (a) and (b).}\n\t\\label{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50}\n\\end{figure}\nWe see that there are two branches of VUTS solutions: insulating ($\\< n_{i} \\> = 1$) and metallic ($\\< n_{i} \\> > 1$). The insulating branch exists for $\\delta\\mu\\leq\\delta\\mu_1$, and the metallic branch for $\\delta\\mu\\geq\\delta\\mu_2$: these \ntwo values, $\\delta\\mu_{1}$ and $\\delta\\mu_{2}$ (with $\\delta\\mu_{2} < \\delta\\mu_{1}$), are spinodal values limiting the meta-stability of the insulating and metallic solutions, respectively (see Appendix \\ref{sec:phase diagram more plots} for details).\nFor each value of $\\delta \\mu$ the ground state is the branch with the lower energy. The energies of the two branches cross at a particular value, which we define to be $\\delta\\mu_c (\\chi)$. At $\\delta\\mu_c (\\chi)$, the ground state changes from insulating for $\\delta\\mu < \\delta \\mu_c$ to metallic for $\\delta\\mu > \\delta \\mu_c$. The occupation undergoes a finite jump, $\\delta n(\\delta \\mu_c, \\chi) = \\< n_i(\\delta \\mu_c, \\chi) \\> - 1$, indicating that this is a first order metal-insulator transition. We estimate the size of the jump in the real system by the $\\chi \\to \\infty$ extrapolated values, which are shown in Fig. \\ref{fig:jump in occupation extrapolation}. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{jump_in_n_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the jump in density $\\delta n_c$ at the first-order transition occuring at $\\delta \\mu_c$ plotted as a function of $U$. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points.}\n\t\\label{fig:jump in occupation extrapolation}\n\\end{figure}\nWe can see they remain finite for all $U$, meaning that this is a true first-order transition, and not an artifact of finite bond dimension. 
Using a derivation based on the Maxwell construction (detailed in Appendix~\\ref{sec:phase diagram more plots}), \nit can be shown that the total charge gap\nis given by $\\Delta_c (\\chi) = 2 \\, \\delta\\mu_c (\\chi)$. In order to obtain the charge gap $\\Delta_c$ for the real system, we extrapolate $\\Delta_c (\\chi)$ in $\\chi$. The result is shown in Fig. \\ref{fig:charge gap extrapolation}.\n\\begin{figure}\n\t\\includegraphics[scale=0.8]{charge_gap_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the charge gap $\\Delta_c$ at half filling plotted as a function of $U$. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points.}\n\t\\label{fig:charge gap extrapolation}\n\\end{figure}\nWe can see that the extrapolated gap shows an exponential-like behavior at small $U$ similar to the one-dimensional case, though the precise behavior is difficult to extract numerically. At large $U$ the gap crosses over to a more linear dependence on $U$. \n\\\\\n\\indent \nThe first-order transition we observe implies that for every $U$ there is a range of forbidden density. Hence, in the $(U,n)$ plane the model exhibits phase separation. Our numerical method works in the grand canonical ensemble (we do not fix particle number per unit cell), so we cannot observe this phase separation directly. However, VUTS, similar to other variational tensor network methods like DMRG and VUMPS, can get ``stuck\" in local minima. We use this fact to find both branches of solutions near the transition point, even when they are meta-stable (i.e. not the lowest energy states), by way of hysteresis in the numerical algorithm (see Appendix \\ref{sec:phase diagram more plots} for a detailed explanation). The resulting branches, shown in Fig. \\ref{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50}, can be continued to find the spinodal values of the first-order transition (see Appendix \\ref{sec:phase diagram more plots}).\n\\\\\n\\indent \nThe metallic ground-state has no magnetic order for any value of density: we find that the staggered magnetization vanishes once $\\delta \\mu$ crosses $\\delta \\mu_c$. To illustrate the typical magnetization behavior we observe, in Fig. \\ref{fig:ms of metallic phase} we show the staggered magnetization $m_s$ as a function of $\\delta \\mu$ across $\\delta \\mu_c$ for $U = 6$ at a large but fixed $\\chi = 90$.\n\\begin{figure}\n\t\\includegraphics[scale=0.8]{ms_vs_delmu_U_6}\n\t\\caption{The staggered magnetization $m_s$ versus the chemical potential $\\delta \\mu$ for $U = 6$ and $\\chi = 90$. The critical point $\\delta \\mu_c$ is indicated by a dashed line. We see the drop from $m_s > 0$ to $m_s = 0$, indicating the first-order transition from the antiferromagnetic insulator to the paramagnetic metal.}\n\t\\label{fig:ms of metallic phase}\n\\end{figure}\nOnce $\\delta \\mu$ becomes large enough to drive the system metallic, the staggered magnetization $m_s$ immediately drops to a very small value, which is zero within our error tolerance. All other values of $U$ and $\\chi$ behave similarly. As $\\chi$ increases, the small value of magnetization in the metal decreases further, although the behavior is not monotonic, as shown in Appendix \\ref{sec:phase diagram more plots}. 
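For completeness, we note that the relation $\\Delta_c(\\chi) = 2\\,\\delta\\mu_c(\\chi)$ quoted at the beginning of this discussion can be understood directly from particle-hole symmetry; this is only a sketch of the argument, while the full Maxwell-construction derivation is given in Appendix \\ref{sec:phase diagram more plots}. The density first exceeds one when $\\mu$ exceeds $\\mu_{+} = U\/2 + \\delta\\mu_c$, and under the particle-hole transformation $\\delta\\mu \\to -\\delta\\mu$, $\\< n_i \\> \\to 2 - \\< n_i \\>$, it first drops below one when $\\mu$ falls below $\\mu_{-} = U\/2 - \\delta\\mu_c$, so that
\\begin{equation}
\\Delta_c(\\chi) = \\mu_{+} - \\mu_{-} = 2\\,\\delta\\mu_c(\\chi).
\\end{equation}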
Also, in Appendix \\ref{sec:numerical details} we describe the strategy we use to make sure we do not bias the magnetization of the metallic solution with our ansatzes.\n\\\\\n\\indent \nIt is interesting to compare our results on the $z=3$ Bethe lattice to those established for the doping-driven MIT \nin the $z=\\infty$ limit where DMFT becomes exact. \nOnly a few studies~\\cite{camjayi_rozenberg_2006,wang_millis_2009,fratino_tremblay_prb_2017} consider this transition while also taking into account phases with magnetic long-range order. As in our results, an antiferromagnetic insulator is found at half-filling for a range of chemical potentials, as well as a non-magnetic metallic solution which can be stabilized for values of the chemical potential above a spinodal value $\\delta \\mu_{2}$. \nFurthermore, a magnetic metallic solution is found to exist in a narrow range of chemical potentials, which connects the magnetic insulator and the non-magnetic metal. \nThis may appear to differ from our findings, but it should be emphasized that all these studies consider only non-zero temperatures. As temperature is lowered, it is reported in Refs.~\\cite{camjayi_rozenberg_2006,wang_millis_2009} that the magnetic metallic solution appears to exist only in an increasingly narrow interval of chemical potentials, and Ref.~\\cite{camjayi_rozenberg_2006} suggested that at low temperature the MIT is a first-order transition between the magnetic insulator and the non-magnetic metal, with a forbidden range of density corresponding to phase separation. Although, to the best of our knowledge, this has not yet been fully established directly at $T=0$ for $z=\\infty$, this conclusion is consistent with our findings on the finite coordination number lattice. In contrast, on fully frustrated $z=\\infty$ lattices, which do not allow for long-range magnetic order (e.g. on the fully connected lattice with random hopping), it is established that the \ndoping-driven MIT is second order at $T=0$ and becomes first-order only at finite temperature~\\cite{RevModPhys.68.13,moeller_1995,kotliar_2002,werner_2007}. \n\\\\\n\\indent \nTo conclude this section, we mention briefly the numerical accuracy of the data presented here. As noted in the Introduction, the numerical accuracy in tensor networks is generally measured by the truncation error, denoted by $\\epsilon_{\\rho}$. We show here the scaling of $\\epsilon_{\\rho}$ in the metallic state, which is the state with the largest entanglement and therefore the most computationally challenging for tensor networks. \nIn Fig. \\ref{fig:truncation error metal n = 1.2} we plot our estimate of $\\epsilon_\\rho$ at a fixed density of $\\< n_i \\> = 1.2$ as a function of $\\chi$ for various $U$, and also as a function of $U$ at the largest values $\\chi = 90,100$.\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{truncation_error_all_U_vs_chi_metal_n_1_2}\n \\caption{}\n \\end{subfigure}\\hspace{0.01\\textwidth}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{truncation_error_at_chi_90_and_100_vs_U_metal_at_n_1_2}\n \\caption{}\n \\end{subfigure}\n\t\\caption{Our estimate for the truncation error $\\epsilon_\\rho$ in the metallic phase for $\\< n_i \\> = 1.2$ (a) as a function of $\\chi$ and (b) as a function of $U$ for the two largest $\\chi$. 
Note that since (a) is a log-log plot the linear form indicates an algebraic relationship.}\n\t\\label{fig:truncation error metal n = 1.2}\n\\end{figure}\nWe can see that the decay with $\\chi$ is algebraic, as expected for a gapless system. As a function of $U$, $\\epsilon_\\rho$ increases initially and then potentially saturates, although the large $U$ behavior is undetermined. Notably, we can see that $\\epsilon_\\rho < 10^{-3}$ for all $U$ and $\\chi = 100$, which is a high level of accuracy. We discuss the behavior of $\\epsilon_\\rho$ at other points in the phase diagram in Appendix \\ref{sec:numerical details}. \n\n\n\n\n\n\n\n\n\n\\section{Quasiparticles}\n\\label{sec:quasiparticles}\n\nIn this section we address the existence of quasiparticles in the system. We do this by computing the quasiparticle weight $Z$ from the ``momentum\" distribution function of the ground state, which in turn is obtained from real-space correlation functions. \nOne peculiarity of the Bethe lattice, is that correlations between any two degrees of freedom sitting on individual nodes of the lattice have a maximal finite correlation length, even at criticality, due to the geometry of the lattice. However, algebraically-decaying correlations reappear after a change of basis to single-particle states that are weighted sums of all nodes of a given generation emanating from a chosen center site. These bases of states unveil the traditional long-range criticality present in gapless states on the Bethe lattice. Below, we introduce a subset of these weighted states called the \\emph{symmetric} states, which we focus on in the rest of this section. From these symmetric states we define a quantum number which plays an analogous role on the Bethe lattice to quasi-momentum on the hypercubic lattice, despite the absence of conventional translation invariance.\n\n\n\\subsection{Single-particle basis of symmetric states}\n\nFor $U = 0$, the free particle Hamiltonian was diagonalized in Ref. \\cite{PhysRevB.63.155110}, using the \\emph{symmetric} set of single-particle states. These are given as follows. Choose any site to be labeled as the origin, with site label $0$. Then consider all permutations of the nodes at each generation $l$ from the center. The symmetric states are those which are invariant under all such permutations. Their creation operators are given by\n\\begin{equation}\n\\tilde{c}^{\\dagger}_{0,\\sigma} \\; \\equiv \\; c^{\\dag}_{0,\\sigma}\n\\label{eq:single particle states l 0}\n\\end{equation}\nand \n\\begin{equation}\n\\tilde{c}^{\\dagger}_{l,\\sigma} \\; \\equiv \\; \\frac{1}{\\sqrt{z(z-1)^{l-1}}} \\displaystyle\\sum_{\\eta_1 = 0}^{z-1} \\; \\displaystyle\\sum_{\\eta_2 \\neq \\eta_1 } \\dots \\displaystyle\\sum_{\\eta_l \\neq \\eta_{l-1}} c^{\\dag}_{\\eta_1 + \\eta_2 + \\dots + \\eta_l,\\sigma}\n\\label{eq:single particle states}\n\\end{equation}\nfor $l > 0$. The collection of $\\eta_i$ denotes a unique path from the origin to the $l$-th generation (this is the usual notation for nodes on the Bethe lattice). The state $\\tilde{c}^{\\dagger}_{l,\\sigma} |\\text{vacuum}\\>$ is the symmetric combination of all the singly-occupied spin-$\\sigma$ states of the $l$-th generation of the tree. These states form an orthonormal subset of all the states on the Bethe lattice, but for $U = 0$ they are the only relevant ones. 
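To make the path labelling and the normalization $1\/\\sqrt{z(z-1)^{l-1}}$ above concrete, the following short Python sketch (purely illustrative, not part of our calculations) enumerates the paths $(\\eta_1,\\dots,\\eta_l)$ for $z = 3$ and checks that generation $l$ contains $z(z-1)^{l-1}$ sites:
\\begin{verbatim}
# Illustration only: count the sites of generation l of the z = 3 Bethe lattice
# by enumerating the paths (eta_1, ..., eta_l) with eta_i != eta_{i-1}.
z = 3

def generation_paths(l):
    if l == 0:
        return [()]
    paths = [(eta,) for eta in range(z)]
    for _ in range(l - 1):
        paths = [p + (eta,) for p in paths for eta in range(z) if eta != p[-1]]
    return paths

for l in range(1, 7):
    n_l = len(generation_paths(l))
    assert n_l == z * (z - 1) ** (l - 1)
    print(l, n_l)  # the symmetric state spreads weight 1/sqrt(n_l) over these sites
\\end{verbatim}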
In the symmetric state basis, the free particle Hamiltonian maps onto fermions hopping on an infinite half-chain, with the first hopping amplitude equal to $\\sqrt{z}$ and all the rest equal to $\\sqrt{z-1}$ (remember $t$ has been set to one). The conjugate variable that replaces momentum is an angle $\\theta \\in \\[0,\\pi\\]$, and the band energy is given by $\\epsilon(\\theta) = 2 \\sqrt{z-1} \\, \\cos\\theta$. Note that the energy of a regular one-dimensional band is obtained by replacing $\\theta$ with a momentum $k$ and $z$ with $2$. The single-particle wavefunctions $\\psi_l(\\theta)$ that diagonalize the Hamiltonian are given by\n\\begin{align}\n\\begin{split}\n\\psi_0(\\theta) &= \\sqrt{\\frac{2}{\\pi}} \\frac{\\sqrt{z(z-1)}\\sin(\\theta)}{\\sqrt{z^2 - 4(z-1) \\cos^2(\\theta)}},\n\\\\ \n\\psi_{l \\neq 0}(\\theta) &= \\sqrt{\\frac{2}{\\pi}} \\sin(l \\cdot \\theta + \\gamma(z,\\theta)),\n\\\\\n\\gamma(z,\\theta) &= \n\\begin{cases}\n\\arcsin\\left(\\frac{z \\sin (\\theta )}{\\sqrt{z^2-4 (z-1) \\cos ^2(\\theta )}}\\right), & \\hspace{-2mm} \\theta \\in \\[0,\\frac{\\pi}{2}\\) \\\\\n\\pi -\\arcsin\\left(\\frac{z \\sin (\\theta )}{\\sqrt{z^2-4 (z-1) \\cos ^2(\\theta)}}\\right), & \\hspace{-2mm} \\theta \\in \\[\\frac{\\pi}{2},\\pi\\].\n\\end{cases}\n\\end{split}\n\\label{eq:real space wavefunctions n U=0 exact}\n\\end{align}\n\\\\\n\\indent \nOnce $U \\neq 0$, the symmetric states are no longer enough to describe the system. Indeed, if we simply consider a Mott-like state with one particle sitting on each of the sites at generation $l = 1$ away from the center site, that state cannot be written using only the single-particle symmetric states associated with the same center site. \nIn general, an arbitrary multi-particle Fock state cannot be constructed as a tensor product of the symmetric single-particle states. Therefore, to construct it one must employ states from other symmetry sectors. The interacting ground states we find numerically in this work therefore contain states from various symmetry sectors. However, excitations above the ground state can occur in any of these sectors, and we do not have to consider all of them. In order to tractably answer the question of existence of quasiparticles, we choose to focus on excitations in the symmetric sector. How quasiparticles in different symmetry sectors are related to each other is an interesting question for future work \\cite{Eckstein_thanks}.\n\n\n\\subsection{$\\theta$-distribution function for $U = 0$}\n\nThe $\\theta$-distribution function is calculated from the equal-time correlation functions of symmetric single-particle excitations, $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma} \\>$, where $0$ labels a chosen center site and $l$ labels the generation away from the center site. These can be computed as\n\\begin{equation}\n\\begin{split}\n\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\> \n& = \\int\\displaylimits_{- \\infty}^{0} d{\\omega} ~ \\mathcal{A}_{0l}(\\omega)\n\\\\ & = \\int\\displaylimits_{-2 \\sqrt{z-1}}^{\\mu} d\\varepsilon \\; \\sqrt{\\frac{z}{2 \\pi}} \\frac{\\psi_l\\(\\arccos(-\\frac{\\varepsilon}{2 \\sqrt{z-1}})\\)}{\\sqrt{z^2 - \\varepsilon^2}},\n\\end{split}\n\\label{eq:correlation function U=0 exact}\n\\end{equation}\nwhere $\\mathcal{A}_{0l}(\\omega)$ is the probability of inserting an electron with frequency ${\\omega}$ at the center site and observing it at generation $l$ at the same frequency. 
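As an illustration of how Eq. (\\ref{eq:correlation function U=0 exact}) below can be evaluated in practice, the following Python sketch (illustrative only; the choice of numerical integrator and all variable names are ours) computes the exact non-interacting correlator for $z = 3$ at half filling, $\\mu = 0$, using the wavefunctions $\\psi_l(\\theta)$ quoted above:
\\begin{verbatim}
# Minimal sketch (illustration only): evaluate the exact U = 0 correlator
# <c~†_0 c~_l> by numerically integrating over the filled part of the band,
# using the single-particle wavefunctions psi_l(theta) given in the text.
import numpy as np
from scipy.integrate import quad

z = 3

def gamma(theta):
    s = np.arcsin(z * np.sin(theta) / np.sqrt(z**2 - 4*(z-1)*np.cos(theta)**2))
    return s if theta < np.pi / 2 else np.pi - s

def psi(l, theta):
    if l == 0:
        return np.sqrt(2/np.pi) * np.sqrt(z*(z-1)) * np.sin(theta) \
               / np.sqrt(z**2 - 4*(z-1)*np.cos(theta)**2)
    return np.sqrt(2/np.pi) * np.sin(l * theta + gamma(theta))

def corr(l, mu=0.0):
    band = 2 * np.sqrt(z - 1)
    def integrand(eps):
        theta = np.arccos(-eps / band)
        return np.sqrt(z / (2*np.pi)) * psi(l, theta) / np.sqrt(z**2 - eps**2)
    val, _ = quad(integrand, -band, mu, limit=200)
    return val

for l in range(6):
    print(l, corr(l))   # half filling corresponds to mu = 0
\\end{verbatim}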
The occupation function in $\\theta$-space is defined as\n\\begin{equation}\nn_{\\sigma}(\\theta, \\theta') \\equiv \\< \\tilde{c}_{\\theta,\\sigma}^{\\dag} \\tilde{c}_{\\theta',\\sigma}\\>,\n\\label{eq:occupation function definition}\n\\end{equation}\nwhere \n\\begin{equation}\n\\td c_{\\theta, \\sigma} = \\lim\\limits_{L \\to \\infty}\\sqrt{\\frac{\\pi}{L+1}} \\sum_{d = 0}^{L} \\psi_d(\\theta) \\, \\td c_{d,\\sigma} \n\\label{eq:theta transform c operators}\n\\end{equation}\nare the $\\theta$-transforms of the symmetric state operators $\\td c_{d,\\sigma}$. The key difference with the usual calculation in hypercubic lattices is that $\\< \\td c^{\\dag}_{d,\\sigma} \\td c_{d',\\sigma}\\>$ is not simply a function of $\\abs{d-d'}$, due to the fact that the symmetric states are defined relative to a chosen center site. This is illustrated in Fig. \\ref{fig:BL 4 generations}, where we show that $\\< \\td c^{\\dag}_{2,\\sigma} \\td c_{4,\\sigma}\\>$ contains correlations of length $2,4$ and $6$.\n\\begin{figure}\n\t\\includegraphics[scale=0.4]{BL_large_4_gen_colored}\n\t\\caption{Four generations of the $z = 3$ infinite Bethe lattice emanating from a given center site. The inner shell is at generation $d = 2$, and the outer shell is at generation $d' = 4$. Choosing a given site on the inner shell (colored red), there are three different groups of sites on the outer shell (colored orange, green and purple) that contribute different-length correlations.}\n\t\\label{fig:BL 4 generations}\n\\end{figure}\nTherefore, $n_{\\sigma}(\\theta, \\theta')$ is not diagonal in $\\theta$. One way to understand this is to think of the entire symmetric sector parametrized by $\\theta$ as the $k = 0$ space on the hypercubic lattice, i.e. the one that is fully symmetric under translations. However, within this sector there is no additional symmetry of the Bethe lattice that requires the total $\\theta$ to be conserved in a scattering process, and therefore $n_{\\sigma}(\\theta, \\theta')$ is not diagonal \\cite{Eckstein_thanks}. Carefully inserting Eq. (\\ref{eq:theta transform c operators}) into Eq. (\\ref{eq:occupation function definition}) and rewriting the result in terms of $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ gives\n\\begin{equation}\n\\begin{split}\n& n_{\\sigma}(\\theta, \\theta') = \n\\lim\\limits_{L \\to \\infty}\\frac{\\pi}{L+1} \\sum_{d,d' = 0}^{L} \\psi_d(\\theta) \\psi_{d'}(\\theta') \\, \n\\frac{z-2}{\\sqrt{z (z-1)}}\n\\\\ &\n\\(\\frac{\\sqrt{z}(z-2)}{(z-1)^{3\/2}}\\)^{\\delta_{\\min(d,d'),0}} \n\\sum_{r=0}^{\\min(d,d')} \\(\\frac{z-1}{z-2}\\)^{\\delta_{r,0} + \\delta_{r,\\min(d,d')}} \n\\\\ & \n\\sqrt{\\frac{z}{z-1}}^{\\delta_{d,d'} \\delta_{r,0} - \\delta_{d,0} \\delta_{d',0}} \n\\, \\< \\td c^{\\dag}_{0,\\sigma} \\td c_{\\abs{d-d'}+2r,\\sigma} \\>. \n\\end{split}\n\\label{eq:n_theta final}\n\\end{equation}\nDetails of the derivation of Eq. (\\ref{eq:n_theta final}) are in Appendix \\ref{sec:occupation function}. We plot the exact $n_{\\sigma}(\\theta, \\theta')$ for $U = 0$ at half-filling in Fig. 
\\ref{fig:n_theta_theta_prime_U_0_delmu_0_exact}.\n\\begin{figure}\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n \\includegraphics[scale=0.9]{n_theta_theta_prime_U_0_n_i_1_exact}\n \\caption{}\n\t \\label{fig:n_theta_theta_prime_U_0_delmu_0_exact}\n \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{n_theta_U_0_exact}\n \\caption{}\n \\label{fig:n_theta_U_0_exact}\n \\end{subfigure}\n\t\\caption{(a) The occupation function $n_{\\sigma}(\\theta, \\theta')$ for the half-filled case at $U = 0$. Aside from the expected step-function along the diagonal, there is non-trivial off-diagonal structure. (b) The diagonal component $n_{\\sigma}(\\theta)$ for $U = 0$ and densities $\\< n_i \\> = 1,1.2$, corresponding to values of $\\theta_F = \\pi\/2$ and $\\theta_F \\approx 1.81$, respectively. Calculating the occupation from Eq. (\\ref{eq:n_theta final}) requires a large distance cutoff $L$, which is chosen here to be (a) $L = 100$ and (b) $L = 200$. This introduces an artificial correlation length, causing the step function to be (slightly) smoothed out.}\n\t\\label{fig:n_2D_and_1D_U_0_exact}\n\\end{figure}\n\\\\\n\\indent\nIn this work, we focus exclusively on the diagonal component of the occupation function, $n_{\\sigma}(\\theta) \\equiv n_{\\sigma}(\\theta, \\theta)$. This tells us the occupation of excitations that preserve total $\\theta$ when scattering with each other. We leave the detailed study of the full occupation function $n_{\\sigma}(\\theta, \\theta')$ for future work. We plot the exact $n_{\\sigma}(\\theta)$ in Fig. \\ref{fig:n_theta_U_0_exact} for $U = 0$ and densities $\\< n_i \\> = 1, 1.2$. We can see the expected behavior of $n_{\\sigma} = \\Theta(\\theta_F - \\theta)$, where $\\Theta(x)$ is the Heaviside step function and $\\theta_F$ is the $\\theta$-analog of the ``Fermi momentum\" (the step function in Fig. \\ref{fig:n_theta_U_0_exact} is slightly smoothed out due to a finite $L$ in Eq. (\\ref{eq:n_theta final})).\n\\\\\n\\indent \nIn order to compute the correlation function $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ from the VUTS numerical solution, \nwe use the fact that even in the interacting ground state, $\\< c^{\\dag}_{i,\\sigma} c_{j,\\sigma} \\>$ is still only a function of the distance $\\abs{i-j}$ (and $\\sigma$). We can then compute $\\< c^{\\dag}_{0,\\sigma} c_{l,\\sigma} \\>$ for an arbitrary branch to write\n\\begin{equation}\n\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>\n= \\sqrt{z^{1-\\delta_{l,0}} (z-1)^{l-1 + \\delta_{l,0}}} \\< c^{\\dag}_{0,\\sigma} c_{l,\\sigma}\\>. \n\\label{eq:correlation function symmetric states}\n\\end{equation}\nNote that the difference here from the previous considerations of this paragraph is that one of the reference points has been set to the center site. We plot $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ measured with VUTS in Fig. \\ref{fig:corr function U=0 real space} for $U = 0$ and densities $\\< n_i \\> = 1, 1.2$, along with the exact results. 
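In code, the rescaling of Eq. (\\ref{eq:correlation function symmetric states}) amounts to a single line; the sketch below is illustrative only, with the array corr_branch standing in for the measured single-branch correlators $\\< c^{\\dag}_{0,\\sigma} c_{l,\\sigma}\\>$, which are here replaced by placeholder numbers:
\\begin{verbatim}
# Illustration only: rescale single-branch correlators <c†_0 c_l> into the
# symmetric-state correlators <c~†_0 c~_l>. corr_branch is a placeholder
# array standing in for the VUTS measurements.
import numpy as np

z = 3
L = 50
corr_branch = 0.5 ** np.arange(L + 1)     # placeholder for <c†_0 c_l>

l = np.arange(L + 1)
prefactor = np.sqrt(float(z) ** (1 - (l == 0)) * (z - 1.0) ** (l - 1 + (l == 0)))
corr_symmetric = prefactor * corr_branch  # <c~†_0 c~_l>
print(corr_symmetric[:5])
\\end{verbatim}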
\n\\begin{figure}[!htbp]\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{corr_func_U_0_n_1}\n \\caption{$U = 0, \\< n_i \\> = 1$.}\n \\label{fig:corr function U=0 n = 1}\n \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{corr_func_U_0_n_1_2}\n \\caption{$U = 0, \\< n_i \\> = 1.2$.}\n \\label{fig:corr function U=0 n = 1.2}\n \\end{subfigure}\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{xi_vs_chi_U_0}\n \\caption{}\n \\label{fig:xi vs chi}\n \\end{subfigure}\n\t\\caption{The correlation function $\\< \\td c^{\\dag}_{0,\\uparrow} \\td c_{r,\\uparrow}\\>$ ($\\sigma = \\downarrow$ gives the same) for densities (a) $\\< n_i \\> = 1$ and (b) $\\< n_i \\> = 1.2$ for the non-interacting case $U=0$. We show the $\\chi = 50,100$ results along with the exact solution of Eq. (\\ref{eq:correlation function U=0 exact}). Also shown are the functions $\\frac{1}{r} \\, \\psi_r(\\theta_F) \\, \\psi_0(\\theta_F)$ with (a) $\\theta_F = \\pi\/2$ and (b) $\\theta_F \\approx 1.81$, which are excellent fits beyond a short distance scale, showing Friedel oscillations due to the Fermi surface singularity. The insets show that the exact solution follows the $1\/r$ decay out to much larger distances, while the finite $\\chi$ solutions cross over to an exponential decay at large distances.\n\t\t(c) The correlation length extracted from the long distance behavior of $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{r,\\sigma}\\>$, plotted versus $1\/\\chi$. The slope for both densities is $\\xi(\\chi) \\sim \\chi^{0.84}$.}\n\t\\label{fig:corr function U=0 real space}\n\\end{figure}\nThe finite $\\chi$ results are close to the exact ones, but, as expected, the correlation function at large enough distances decays exponentially with a finite correlation length $\\xi(\\chi)$. We can measure $\\xi(\\chi)$ by fitting $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ to an exponential at large $l$. The results are shown in Fig. \\ref{fig:xi vs chi}, where the power-law fit gives $\\xi(\\chi) \\sim \\chi^{0.84}$ for both densities.\n\\\\\n\\indent \nWhen calculating $n_{\\sigma}(\\theta)$ from Eq. (\\ref{eq:n_theta final}), in practice we must choose a finite value of $L$. As long as we take $L$ large enough, the correlation length $\\xi(\\chi)$ will act as the long-distance cutoff and the value of $L$ will not have any effect. Our results are obtained using bond dimensions up to $\\chi = 100$, for which the induced correlation length is $\\xi(\\chi) \\lesssim 40$. We find that a value of $L \\sim 400$ is large enough for all $\\chi$ we study. The finite $\\xi(\\chi)$ smooths out the step function in $n_\\sigma(\\theta)$, so we estimate $\\theta_F$ from the location of the maximum of $\\abs{n'_{\\sigma}(\\theta)}$. In Fig. \\ref{fig:n_theta near theta_F at U=0 and n=1.2} we show $n_{\\sigma}(\\theta)$ for $\\< n_i \\> = 1.2$ near $\\theta_F$. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{n_theta_near_theta_F_U_0_n_1_2}\n\t\\caption{Plots of $n_{\\sigma}(\\theta)$ near $\\theta_F$ for $U = 0$ at density $\\< n_i \\> = 1.2$ and a range of $\\chi$. The value of $L$ used here from Eq. (\\ref{eq:n_theta final}) is $L = 400$.}\n\t\\label{fig:n_theta near theta_F at U=0 and n=1.2}\n\\end{figure}\nThe finite slope at $\\theta_F$ diverges as a power law in $\\chi$, as we show in Fig. \\ref{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}. 
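The two fitting steps just described can be sketched as follows (illustration only, with placeholder arrays in place of the measured data): $\\xi(\\chi)$ is obtained from a linear fit of the logarithm of the correlator tail at large $l$, and $\\theta_F$ from the maximum of $\\abs{n'_{\\sigma}(\\theta)}$ computed by finite differences.
\\begin{verbatim}
# Minimal sketch with placeholder data: extract the induced correlation length
# xi(chi) from an exponential fit to the correlator tail, and estimate theta_F
# from the maximum of |d n_sigma / d theta| via finite differences.
import numpy as np

# placeholder stand-ins for measured data
l = np.arange(200)
corr = np.exp(-l / 37.0)                                 # fake exponential tail
theta = np.linspace(0.0, np.pi, 2001)
n_theta = 1.0 / (1.0 + np.exp((theta - 1.81) * 40.0))    # fake smoothed step near theta_F ~ 1.81

# correlation length: log|corr| ~ -l / xi + const at large l
tail = l > 100
slope, _ = np.polyfit(l[tail], np.log(np.abs(corr[tail])), 1)
xi = -1.0 / slope

# theta_F from the maximum of |n'(theta)|
dn = np.gradient(n_theta, theta)
theta_F = theta[np.argmax(np.abs(dn))]

print(f"xi ~ {xi:.1f}, theta_F ~ {theta_F:.3f}")
\\end{verbatim}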
\n\\begin{figure}\n\t\\includegraphics[scale=0.9]{nprime_theta_at_theta_F_vs_chi_all_U_n_1_2}\n\t\\caption{The value of $\\abs{n'_{\\sigma}(\\theta_F)}$ vs $1\/\\chi$ for various $U$ at density $\\< n_i \\> = 1.2$. The straight lines on the log-log plot are fit by $\\abs{n'_{\\sigma}(\\theta_F)} \\sim \\chi ^{{\\alpha}}$, with $0.5 < {\\alpha} < 1$.}\n\t\\label{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}\n\\end{figure}\nThis indicates that $\\xi(\\chi)$ is the only low-energy scale in the problem, and the state is truly gapless in the $\\chi \\to \\infty$ limit.\n\\\\\n\\indent\nIn summary, in this section we show how to compute the diagonal part of the occupation function in the non-interacting case, using VUTS and \na careful extrapolation in the bond dimension, and achieve excellent agreement with the analytical result.\n\n\n\n\\subsection{Quasiparticles in the interacting system}\n\nNow we turn on interactions. In Fig. \\ref{fig:n_theta near theta_F all U and n=1.2} we plot $n_{\\sigma}(\\theta)$ near $\\theta_F$ for $\\< n_i \\> = 1.2$ and $U = 5$ at various $\\chi$ as well as for various $U$ at $\\chi = 70$.\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{n_theta_near_theta_F_U_5_n_1_2}\n \\caption{$\\< n_i \\> = 1.2, U = 5$.}\n \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \\includegraphics[scale=0.8]{n_theta_near_theta_F_all_U_n_1_2}\n \\caption{$\\< n_i \\> = 1.2, \\chi = 70$.}\n \\end{subfigure}\n\t\\caption{Plots of $n_{\\sigma}(\\theta)$ near $\\theta_F$ at density $\\< n_i \\> = 1.2$, showing the dependence on (a) $\\chi$ for fixed $U = 5$, and on (b) $U$ for fixed $\\chi = 70$. The value of $L$ from Eq. (\\ref{eq:n_theta final}) is $L = 400$.}\n\t\\label{fig:n_theta near theta_F all U and n=1.2}\n\\end{figure}\nThe occupation shows a form similar to that of the free case, albeit with a reduced size of the step at $\\theta_F$. The value of the slope at $\\theta_F$ diverges for all $U$, as we show in Fig. \\ref{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}. The interacting system is therefore also gapless in the $\\chi \\to \\infty$ limit, as expected.\n\\\\\n\\indent \nThe quasiparticle weight, $Z$, of the symmetric state excitations is defined as \n\\begin{equation}\nZ \\equiv \\(\\lim\\limits_{\\theta \\to \\theta_F^-} - \\lim\\limits_{\\theta \\to \\theta_F^+} \\) n_{\\sigma}(\\theta).\n\\label{eq:Z definition}\n\\end{equation}\nFor a Fermi liquid $n_{\\sigma}(\\theta)$ has a step at $\\theta_F$ and $Z > 0$, while for a Luttinger liquid the occupation function has a higher order non-analyticity that scales as $n_{\\sigma}(\\theta - \\theta_F) \\sim \\abs{\\theta - \\theta_F}^{\\gamma} \\sign(\\theta - \\theta_F)$ for some $\\gamma < 1$ and therefore $Z = 0$. Our goal is to find the true thermodynamic value of $Z$ to distinguish between these two scenarios. Of course, for a finite $\\chi$ Eq. (\\ref{eq:Z definition}) will always give zero. However, we can define a quantity $Z(\\chi)$ whose limit will give $Z$ in the $\\chi \\to \\infty$ limit. We define this as\n\\begin{equation}\nZ(\\chi) \\equiv n_{\\sigma}\\(\\theta_F - \\frac{\\pi}{2 \\, \\xi (\\chi)} \\) - n_{\\sigma}\\(\\theta_F + \\frac{\\pi}{2 \\, \\xi (\\chi)} \\),\n\\label{eq:Z finite chi definition}\n\\end{equation}\nwhich satisfies the desired property because $\\xi(\\chi)\\rightarrow \\infty$ with increasing $\\chi$. 
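In practice, $Z(\\chi)$ can be evaluated by interpolating $n_{\\sigma}(\\theta)$ at $\\theta_F \\pm \\pi\/(2\\xi(\\chi))$ and then extrapolating linearly in $1\/\\chi$, as in the figures that follow. The sketch below is illustrative only, with placeholder inputs standing in for the measured $n_{\\sigma}(\\theta)$ and $\\xi(\\chi)$:
\\begin{verbatim}
# Minimal sketch (placeholder data): evaluate the finite-chi estimator Z(chi)
# defined above and extrapolate it linearly in 1/chi.
import numpy as np

theta = np.linspace(0.0, np.pi, 2001)
theta_F = 1.81

def Z_of_chi(n_theta, xi):
    d = np.pi / (2.0 * xi)
    return np.interp(theta_F - d, theta, n_theta) - np.interp(theta_F + d, theta, n_theta)

chis = np.array([50, 60, 70, 80, 90, 100])
xis = 0.9 * chis ** 0.84                      # placeholder xi(chi) scaling
Zs = [Z_of_chi(1.0 / (1.0 + np.exp((theta - theta_F) * xi / 2.0)), xi) for xi in xis]

coeffs = np.polyfit(1.0 / chis, Zs, 1)        # linear extrapolation in 1/chi
print("extrapolated Z ~", np.polyval(coeffs, 0.0))
\\end{verbatim}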
We choose a spacing of $\\Delta \\theta = \\pi\/\\xi(\\chi)$ around $\\theta_F$ because that is roughly the resolution one expects from a finite correlation length, and therefore the convergence in $1\/\\chi$ should be fastest. We plot $Z(\\chi)$ vs $\\chi$ for $\\< n_i \\> = 1.2$ and a range of $U$ in Fig. \\ref{fig:Z vs chi for all U at n=1.2}. \n\\begin{figure}\n\t\\includegraphics[scale=0.9]{Z_vs_chi_all_U_n_1_2}\n\t\\caption{$Z(\\chi)$ at density $\\< n_i \\> = 1.2$, shown with linear fits.}\n\t\\label{fig:Z vs chi for all U at n=1.2}\n\\end{figure}\nThe results show that the extrapolated $Z$ is (a) very close to the expected value of $Z = 1$ for the free theory and (b) finite for all $U$ we study. We plot $Z$ as a function of $U$ in Fig. \\ref{fig:Z vs U for all n}, where we can see that it decreases as a function of $U$, but seems to saturate to a finite value. The saturation value is an increasing function of doping $\\< n_i \\> - 1$, as \nillustrated by Fig.~\\ref{fig:Z at U =20 vs n-1} where we plot the value of $Z(U=20)$ as a function \nof doping. \n\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \t\\includegraphics[scale=0.8]{Z_vs_U_all_n}\n \\caption{}\n\t \\label{fig:Z vs U for all n}\n \\end{subfigure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \t\\includegraphics[scale=0.8]{Z_U_20_vs_n_minus_1}\n \t\\caption{}\n\t \\label{fig:Z at U =20 vs n-1}\n \\end{subfigure}\n \t\\caption{(a) The extrapolated values of $Z$ at densities $\\< n_i \\> = 1.2,1.3,1.4$, plotted as a function of $U$. The first two blue points are covered by the orange ones. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points. We can see that the curves are close to saturation at $U = 20$. (b) The values of $Z$ at $U = 20$, which seem close to the saturated values, as a function of $\\< n_i \\> - 1$.}\n\\end{figure}\nWe also address the question of Luttinger's theorem for $n_{\\sigma}(\\theta)$. We check that $\\theta_F$ is independent of $U$ (the dependence on $\\chi$ is negligible) for various densities in the range $\\< n_i \\> \\in (1.1,1.4)$. From this we conclude that Luttinger's theorem holds for all values of the density and interaction strength. \n\\\\\n\\indent\nThe decrease of $Z$ with increasing $U$ as well as the saturation at large $U$ to a value which increases with doping are both qualitatively consistent with results established in the $z=\\infty$ (DMFT) limit and with slave boson approaches (for general lattices)~\\cite{RevModPhys.68.13,4bosons}.\nA distinctive aspect of these theories, however, is that the effective mass of \nquasiparticles is related to $Z$ by $m^*\/m=1\/Z$. On the technical level, this is due to the locality of the self-energy, while physically this reflects the inability of these approaches to capture the feedback of short-range order and collective modes \ninto the physics of quasiparticles. \nIn contrast, in the present case, since the connectivity is kept finite, we would expect this feedback to be present. \nIt is therefore an outstanding question for future work to explore whether the dispersion of quasiparticles is \nrenormalized in a different manner than $Z$ itself, and in particular whether it is affected by \nshort-range antiferromagnetic correlations at low doping level. 
This is left for future work since it requires \nan extension of our algorithm to the study of excited states.\n\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nIn summary, we have introduced a new numerical algorithm,\n(fermionic) VUTS, to study quantum (fermionic) models on the Bethe lattice. \nWe apply it to the Hubbard model for coordination number $z = 3$, allowing for a two-site unit cell, obtain the $T = 0$ phase diagram and study the doping-induced Mott transition. \nWe find an antiferromagnetic insulating phase at half filling and a paramagnetic metallic phase for the doped system, which are separated by a first-order insulator to metal phase transition. \nThe model displays phase separation at low doping, with a range of forbidden densities. These conclusions were reached by allowing for a two-site unit cell. We cannot exclude that phases with more complex charge or \nmagnetic ordering exist when allowing for a larger unit cell, which we leave as an open question for future work. By studying the diagonal component of the occupation function for momenta of the symmetric single-particle sector, we find that the quasiparticle weight is non-zero, \nconsistent with the existence of a Fermi liquid ground state for fermions on the Bethe lattice. We find that this Fermi liquid state obeys Luttinger's theorem.\n\nAn interesting direction in which to extend this work would be to further characterize this Fermi liquid state. \nOne would like to know, for example, what happens near $\\theta_F$ to the off-diagonal $n_\\sigma(\\theta,\\theta')$ when interactions are turned on. \nIt is also interesting to look at some of the other symmetry sectors, say the ones that leave each of the $z$ sub-trees connected to the center site invariant, \nand see whether they have quasiparticles and how the quasiparticle weight depends on the sector.\nThese questions are the Bethe lattice version of the important physical question of `momentum dependence' of quasiparticle properties on the Fermi surface of hypercubic lattices.\n\nIn the one-dimensional VUMPS algorithm, it has been shown that low-lying excitations above the ground-state can be accurately computed~\\cite{PhysRevB.97.235155}. \nAn obvious question is how to extend these ideas to VUTS (this is another potential advantage of this method over imaginary time evolution).\nExtension of the algorithm to excited states would allow to characterize the effective mass (dispersion) of the quasiparticles, paving the road for a study of \nhow short-range (e.g. antiferromagnetic) correlations affect quasiparticle properties. This is a very important question for the physics of strongly correlated electron systems. \nA breakdown of the $m^*\/m=1\/Z$ relation would signal that this feedback is indeed present, in contrast to the infinite connectivity limit. \nFinally, studying the energy dependence of the quasiparticle lifetime, as well as the interactions between quasiparticles would be a comprehensive study of Landau \nFermi liquid theory on the Bethe lattice. \n\nIt would also be interesting to look at the entanglement structure of the Fermi liquid state, as compared to a Luttinger liquid. Since the entanglement spectrum is easily obtained on the one-dimensional and Bethe lattices from the singular value decomposition of a single bond tensor, this question could in principle be easily answered. 
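As a minimal illustration of the last remark, and assuming the network is in the canonical gauge so that the singular values of the bond (center) matrix are the Schmidt coefficients of the corresponding bipartition, the entanglement spectrum and entropy follow from a single SVD; the matrix below is a random placeholder rather than an actual $C_m$:
\\begin{verbatim}
# Minimal sketch: entanglement spectrum across a bond from the SVD of the
# center matrix C on that bond (C here is a random placeholder).
import numpy as np

chi = 100
rng = np.random.default_rng(0)
C = rng.normal(size=(chi, chi))
C /= np.linalg.norm(C)                  # normalize the state

s = np.linalg.svd(C, compute_uv=False)  # Schmidt coefficients
p = s**2                                # entanglement spectrum (reduced density matrix eigenvalues)
entropy = -np.sum(p * np.log(p))
print("largest Schmidt weights:", p[:5], " entanglement entropy:", entropy)
\\end{verbatim}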
\n\nWe also propose that the finite $z$ Bethe lattice can be used as a computationally tractable platform for the study of how quasiparticles can be destroyed \nand Fermi liquid behaviour breaks down when considering other fermionic hamiltonians on this lattice. \nOne route to explore this, which connects to the feedback of long-wavelength collective modes or short-range spatial correlations on quasiparticle properties, is to study the vicinity of a quantum critical point. \nAnother route is to study microscopic models that are tailor-engineered to have incoherent excitations, such as the one of Ref.~\\cite{PhysRevB.86.045128}, or the multi-channel Kondo lattice. \n\nStudying the interplay between frustration and strong correlations is another promising direction for future research. \nIntroducing a frustrating next-nearest-neighbor hopping term has been found with DMFT to yield an interaction-driven metal-insulator transition on the infinite $z$ Bethe lattice~\\cite{rozenberg1994,RevModPhys.68.13}. An interesting question is whether this transition can also be found at finite $z$. \nThe VUTS algorithm could also be extended to other lattices with a tree-like structure, such as the Husimi cactus. \nThe study of spin models on such lattices have revealed spin-liquid ground-states\\cite{Chandra_1994,PhysRevB.93.075154} (see also Ref.~\\cite{Udagawa_2019} for considerations on the spin-ice model), \nopening the question of how these models behave upon doping. \n\nFinally, increasing the temperature to a non-zero value is another interesting direction. Intuitively, some finite-temperature properties may be less sensitive to the differences between tree lattices and two- or three-dimensional hypercubic lattices. This could be done using the purification method \\cite{PhysRevLett.93.207205,PhysRevLett.93.207204,PhysRevB.72.220401} which has been formulated on the Bethe lattice in Ref. \\cite{PhysRevB.100.125121}.\n\n\n\n\n\n\\section*{Acknowledgments}\n\nWe thank Julien Agier, Martin Claassen, Michel Ferrero, Gabriel Kotliar, Chris Laumann, Sung-Sik Lee, Roderich Moessner, Marcelo Rozenberg, Steve White and in particular Martin Eckstein and Riccardo Rossi for useful discussions. We thank Ruben Verresen for comments on the manuscript. All tensor network calculations and the VUTS code were implemented with the ITensor Library (C++ version 3.1) \\cite{itensor}. The Flatiron Institute is a division of the Simons Foundation.\n\n\n\n\n\\bibliographystyle{apsrev4-1}\n\n\\section{Introduction}\n\nThe Hubbard model is a cornerstone of condensed matter physics. As a paradigmatic model of strongly-correlated electrons \\cite{ANDERSON1196}, it is simple to formulate yet rich in behavior. In two dimensions (relevant e.g. to cuprate superconductors) observed behaviors include, but are not limited to, antiferromagnetism, unconventional metallic behavior characterized by a pseudogap and deviations from Fermi liquid theory~\\cite{tremblay2006review,Alloul2014,Gunnarsson2015,Wu2017,Schafer}, as well as stripe orders~\\cite{PhysRevB.93.035126,Zheng1155} closely competing with superconducting states at low temperatures~\\cite{PhysRevX.10.031016}. 
\nBut despite decades of effort, a comprehensive understanding of the phase diagram of the two-dimensional Hubbard model has not yet been fully reached.\nTherefore any solutions of the Hubbard model, whether obtained analytically or by accurate and controlled numerical techniques, are of great value.\n\nThe most reliable and comprehensive solutions of the Hubbard model obtained so far have been mainly in (quasi-) one dimension \\cite{PhysRevLett.20.1445} and in infinite dimensions (infinite lattice coordination number)~\\cite{PhysRevB.45.6479,PhysRevLett.62.324,RevModPhys.68.13}.\nIn one dimension, solutions can be obtained by integrability~\\cite{1D_Hubbard_book} and bosonisation methods~~\\cite{Giamarchi_book}, \nas well as numerically with matrix product state (MPS) tensor network methods \\cite{PhysRevLett.69.2863,SCHOLLWOCK201196}. \nThe latter can also reliably treat quasi-one-dimensional ladder or cylindrical geometries with a small transverse size. \nIn the limit of infinite dimensions the Hubbard model is again numerically tractable due to the fact that dynamical mean-field theory (DMFT) becomes exact \\cite{RevModPhys.68.13}. Using this technique, the phase diagram of the infinite-dimensional Hubbard model can be\nmapped out and a detailed understanding of the interaction-driven Mott insulator to metal transition \nhas been established (for reviews, see Refs.~\\cite{RevModPhys.68.13,RevModPhys.78.865,\nDMFT@25,georges_lectures,rozenberg_lectures}).\n\nWhile they provide useful insights into the physics of the two-dimensional Hubbard model, these limiting cases also have peculiarities which limit the generality of the \nconclusions that can be drawn from their study. In the limit of infinite dimensions, the metallic state is found to be a Fermi liquid, with interactions affecting one-particle properties in a local \n(momentum-independent) manner only. Hence, in this limit, the feedback of long-wavelength collective modes or even short-range spatial correlations \non quasiparticle properties is entirely absent. \nThese effects are important, in particular close to a critical point. In two dimensions, they are, for example, responsible \nfor the formation of a pseudogap \\cite{Preuss1997,Macridin2006,Kyung2006,Gunnarsson2015,Scheurer2018}.\nIn contrast, in one dimension the low-energy excitations consist {\\it only} of bosonic collective modes, associated with charge \nand spin degrees of freedom.\nThere is no notion of a Fermi liquid and the metallic behavior of the Hubbard model is a Luttinger liquid which lacks coherent quasiparticles \nand displays spin-charge separation~\\cite{Giamarchi_book}. \n\nIn this work we perform a controlled and accurate numerical study of the ground state of the Hubbard model on the Bethe lattice with a finite coordination number $z$, focusing on the case $z=3$. This is an infinite lattice that has a tree structure, where every site is connected to the same number of other sites ($z$) but there are no loops. We show a portion of this lattice in Fig. \\ref{fig:BL intro}.\n\\begin{figure}\n\t\\includegraphics[scale=0.3]{BL_large_4_gen}\n\t\\caption{A portion of the infinite Bethe lattice with coordination number $z = 3$. The figure depicts four ``generations\" of the tree structure, with a particular site chosen to be the ``center\" site. 
Note that in the actual Bethe lattice there is no special site, and generations can be counted from any of the sites.}\n\t\\label{fig:BL intro}\n\\end{figure}\nThis lattice provides an intermediate case between one dimension ($z=2$) and infinite dimensions ($z = \\infty$), with the key virtue that it admits controlled solutions via tensor network methods, including away from half filling and in the presence of strong interactions. \n\nExact solutions of models on the Bethe lattice have a long history in statistical mechanics~\\cite{Baxter_book}, \nstarting with the pioneering article of Hans Bethe~\\cite{Bethe_1935}. \nSolutions on the finite coordination number Bethe lattice provide a better approximation to thermodynamic quantities than the mean-field approximation \n(corresponding to the infinite dimensional limit)~\\cite{Baxter_book,Bethe_1935,PhysRevLett.74.809}. \nModels studied on the Bethe lattice include classical and quantum spin models~\\cite{PhysRevB.80.144415,PhysRevB.77.214431,NAGY2012542,PhysRevB.86.195137,PhysRevB.88.035138,PhysRevB.89.054426,LIU20141}, spin glass systems \\cite{Parisi_BL_spin_glass,PhysRevLett.56.1082,doi:10.1002\/pssb.200541282,PhysRevB.78.134424}, the Bose Hubbard model \\cite{PhysRevB.80.014524}, and models of Anderson localization \\cite{Abou_Chacra_1973,Mirlin_1991,PhysRevLett.78.2803,PhysRevB.100.094201,dupont2020dirty}. The fermionic Hubbard model on the finite version of the $z=3$ Bethe lattice (known as a Cayley tree) has also been studied previously using a variant of the density matrix renormalization group (DMRG) algorithm \\cite{Lepetit2000}, but only the case of half filling was studied (which is a charge insulator) and only local ground state quantities given (energy, staggered magnetization and its fluctuations, and neighboring spin correlations). Since that time, there have been significant advances in DMRG and related algorithms for infinite one-dimensional systems, which we generalize here to the Bethe lattice and use to obtain our results. Notably, there has been no previous study of metallic states on the finite connectivity Bethe lattice, to the best of our knowledge. \n\nWe determine the full phase diagram of the fermionic Hubbard model on the $z=3$ Bethe lattice, allowing for a two-site unit cell, and establish \nthe nature of the doping-driven Mott insulator to metal transition (MIT). We find that this transition is first-order, and that for every value of interaction strength there is a region of forbidden density. Therefore, in the interaction-density plane the model exhibits phase separation at low doping levels. \nWe find that, for all allowed values of the density, the doped metallic ground-state does not display magnetic long-range order. \n\nImportantly, we also demonstrate that the doped metal hosts coherent quasiparticles at all studied values of the interaction strength, $U$, from weak to strong coupling, and determine the behavior of the quasiparticle weight as a function of $U$. This answers in the affirmative the question of whether Fermi liquid behaviour \napplies as soon as the peculiar kinematic constraints of one dimension are alleviated, and also provides a concrete description of a Fermi liquid ground state with tensor networks, both of which are key motivations for our work. 
Generally, it is difficult for tensor networks to accurately describe interacting metallic states above one dimension, although much progress has been made in this direction \\cite{PhysRevB.81.165104,PhysRevB.93.045116,PhysRevB.100.195141,mortier2020resolving}. Our work provides an alternate route to this agenda, avoiding the computational challenges of two dimensional tensor networks, while going beyond the restrictions of one dimensional physics. \n\nWe obtain our results by generalizing a recently developed MPS method, variational uniform MPS (VUMPS) \\cite{PhysRevB.97.045145}, to tree tensor network (TTN) states \\cite{PhysRevA.74.022320,PhysRevB.82.205105,doi:10.1063\/1.4798639}, which we dub the variational uniform tree state algorithm (VUTS). We further introduce the fermionic version of the VUTS algorithm, using the swap-gate method of Refs. \\cite{PhysRevB.81.165104,Orus_review}. The VUTS algorithm works directly in the thermodynamic limit, which is important in the study of models on the Bethe lattice. The alternative is to study the finite Cayley tree and perform a finite-size scaling analysis. However, the number of boundary sites on the Cayley tree is always more than half of the total, and therefore finite-size effects are unusually strong and can even lead to conclusions that do not hold on the Bethe lattice \\cite{Baxter_book,PhysRevB.87.085107,OSTILLI20123417}. Working directly in the infinite-size limit is therefore important for models on trees. All previous works studying quantum models on the (infinite) Bethe lattice using tensor networks have used a variant of the infinite time-evolving block decimation (iTEBD) algorithm \\cite{PhysRevLett.98.070201}. In the one-dimensional case, the VUMPS algorithm has been found to be much more efficient than other methods that work in the thermodynamic limit, such as iTEBD or earlier infinite DMRG algorithms \\cite{PhysRevB.97.045145}, and indeed we find its extension in the form of the VUTS algorithm we develop to be very efficient. Our method scales as ${\\mathcal O}(\\chi^{z+1})$ where $\\chi$ is the bond dimension of the tensor network being optimized. For $z=3$ this scaling is significantly better than that of the most modern and accurate projected entangled pair states (PEPS) algorithms that scale as ${\\mathcal O}(\\chi^{10})$ for PEPS bond dimension $\\chi$ (assuming the boundary MPS bond dimension scales as $\\chi^2$) \\cite{PhysRevB.92.035142,PhysRevB.94.035133,PhysRevB.94.155123,PhysRevB.98.235148,PhysRevX.9.031041,PhysRevB.100.195141}, but is more challenging than the ${\\mathcal O}(\\chi^3)$ scaling of DMRG. However, the steeper scaling is mitigated by the fact that the typical bond dimension required to reach an accurate solution generally decreases as one goes to higher dimensions and larger coordination numbers due to the monogamy of entanglement and more mean-field-like properties of the wavefunction.\nThe accuracy of tensor network methods is often measured by the \\emph{truncation error}, which measures the typical loss of fidelity incurred during the truncation step of the optimization algorithm. In this work, for a bond dimension of $\\chi = 100$, we are able to achieve a truncation error of less than $10^{-3}$ in the most computationally challenging part of the phase diagram (most entangled ground state), and a truncation error of less than $10^{-7}$ in the best cases. 
This level of accuracy allows us to measure long-distance correlation functions well enough to extract information about quasiparticle coherence in the metallic phase, which demonstrates that our method can be used to reliably study critical phases of matter on the Bethe lattice.\n\nThis work suggests promising\nfuture directions for studying the behavior of strongly correlated electrons in a controlled setup. Because Fermi liquid behavior is a rather generic feature of metallic states, the present study allows to establish a controlled platform which can be used to study how Fermi liquid behaviour can be broken by further perturbations to the model considered here, or in other fermionic models. In the concluding section of this article, we discuss possible routes towards achieving this goal. If successful, tensor network solutions of correlated electrons on the $z=3$ Bethe lattice could provide a new platform for studying non-Fermi liquids~\\cite{RevModPhys.73.797,doi:10.1146\/annurev-conmatphys-031016-025531} in a controlled and accurate manner. Other potential applications are the study of the interaction-driven MIT in frustrated fermionic systems, and the study of fermions on closely related tree-like lattices, such as the Husimi cactus on which the Heisenberg model and other spin models have been shown to display spin liquid phases~\\cite{Chandra_1994}. We elaborate on all these directions and others at the end of the paper.\n\nThis paper is organized as follows. In section \\ref{sec:VUTS} we describe the general VUTS method, applicable to generic Hamiltonians. In section \\ref{sec: Hubbard model} we define the Hubbard model on the Bethe lattice and show the phase diagram obtained from the VUTS solution. Section~\\ref{sec:quasiparticles} discusses the calculation of the quasiparticle weight from the occupation function, as well as Luttinger's theorem. Finally, in Sec. \\ref{sec:discussion} we summarize and discuss future directions. \n\n\n\n\n\n\n\\section{Variational uniform tree state algorithm}\n\\label{sec:VUTS}\n\nIn this section we introduce the variational uniform tree state algorithm (VUTS), a generalization of the variational uniform matrix product state algorithm (VUMPS) \\cite{PhysRevB.97.045145}, for optimizing infinite tree tensor network (TTN) states. We start with a Bethe lattice of quantum degrees of freedom. For simplicity we focus on the algorithm for coordination number $z = 3$, which is the value for the model studied in this paper, but the extension to general $z$ is straightforward. We use an infinite TTN as our ansatz to approximate quantum states on the Bethe lattice. For $z = 3$, the infinite TTN with a 1-site unit cell consists of an order 4 tensor $A \\in \\mathbb{C}^{\\chi \\times \\chi \\times \\chi \\times d}$, with one physical leg ($s$) which runs over the physical degrees of freedom $1,...,d$, and three virtual legs ($l_0,l_1,l_2)$ that run over virtual degrees of freedom $1,...,\\chi$. The virtual legs of neighboring tensors connect to each other, forming the same geometry as the Bethe lattice, as shown in the tensor network diagram in Fig. \\ref{fig:TTN Bethe lattice unlabeled}.\n\\begin{figure}\n\t\\includegraphics[scale=0.25]{tree_TN_Bethe_lattice_unlabeled}\n\t\\caption{A finite portion of the infinite tree tensor network (TTN) state describing the many-body wave function on the Bethe lattice. The physical legs (green dashed lines) form the nodes of the lattice, while the virtual legs (straight black lines) form the edges. 
In this case, a single tensor $A$ comprises the state, and the unit cell is just a single site.}\n\t\\label{fig:TTN Bethe lattice unlabeled}\n\\end{figure}\n\\\\\n\\indent \nThe Hamiltonians we will focus on here are isotropic and have an equivalence between all sites (the analog of translational invariance for hypercubic lattices). The ground state will potentially break this isotropy completely, and break the site equivalence down to a non-trivial unit cell. In this paper, we allow for the state to be fully anisotropic between the different directions emanating from a given site, but for simplicity we focus exclusively on the case when the unit cell consists of no more than two sites (generalizing to arbitrary unit cells is straightforward). The infinite TTN state we study therefore has a 2-site unit cell, and is parameterized by a set of $2z$ tensors $A_{i,m}$, where $i = 0,1$ labels the location in the unit cell, and $m = 1, \\dots, z$ labels the direction of the gauge (defined below). Again, each tensor has one physical index $s_i = 1, \\dots, d$ and three virtual ``link\" indices $l_0,l_1,l_2$.\n\\\\\n\\indent\nBecause the TTN has no loops, it is straightforward to work in the \\emph{canonical gauge}, i.e. the gauge where the tensors are constrained to be orthonormal bases when viewed as a matrix from two link indices $l_n,l_k$ and the physical index $s_i$ to the remaining link index $l_m$. This constraint on the tensors is very useful for making the variational optimization faster and more stable, and is standard in a wide variety of tensor network algorithms, particularly in 1D algorithms like VUMPS and DMRG. The constraint on the tensors is written as\n\\begin{equation}\n\\displaystyle\\sum_{s_i,l_n,l_k} \\bar{A}^{s_i,l'_m,l_n,l_k}_{i,m} A^{s_i,l_m,l_n,l_k}_{i,m} = \\mathbb{1}^{l'_m,l_m}_{i,m},\n\\label{eq:canonical gauge}\n\\end{equation}\nwhere we have introduced the notation $\\bar 0 = 1, \\bar 1 = 0$ for the unit cell indices and use $\\bar{A}$ to denote the complex conjugation of $A$. The matrices $\\mathbb{1}_{i,m}$ are identities. Diagrammatically, Eq. (\\ref{eq:canonical gauge}) is equivalent to Fig.~\\ref{fig:canonical gauge}. The arrows on the links denote the gauge of the $A$ tensors (the outgoing link is the direction of the gauge). Any TTN can be brought into the form where the tensors obey Eq. (\\ref{eq:canonical gauge}) (or equivalently Fig. \\ref{fig:canonical gauge}) by inserting a particular set of ``gauge transformations\"\nonto the link degrees of freedom, i.e. inserting a particular set of resolutions of the identity $X X^{-1}$ (where $X$ is an invertible matrix) onto the links of the TTN. Note that the gauge transformation does not affect the observables of the system, and any TTN can be transformed into the canonical gauge efficiently.\n\\begin{figure}\n \\includegraphics[scale=0.25]{gauge_condition_1}\n \\caption{Diagrammatic version of the gauge conditions of Eq. (\\ref{eq:canonical gauge}). The bonds labelled $k,n,m$ have link indices $l_k,l_n,l_m$ respectively (and the uncontracted $m$ bond on the ket has link index $l_m'$). The unlabelled dashed bond is the physical degree of freedom with index $s_i$.}\n\t\\label{fig:canonical gauge}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.45]{tree_TN_Bethe_lattice_with_centers_2}\n\t\\caption{The same portion of the Bethe lattice as in Fig. \\ref{fig:TTN Bethe lattice unlabeled}, but with the bonds labeled with $m = 0,1,2$, which have link indices $l_0,l_1,l_2$ respectively. 
Additionally, each tensor is labeled with subscripts $i,m$, where $i = 0,1$ is the unit cell index and $m$ is the direction of the gauge. In the top diagram, the gauge center $C_2$ is shown on a bond labelled by $2$. The next equality shows that the $C_2$ tensor can be absorbed into the $A_{1,2}$ tensor to put the gauge center on the site tensor, creating $A_{1,C}$.}\n\t\\label{fig:TTN Bethe lattice with gauge centers}\n\\end{figure}\n\\\\\n\\indent\nExamples of the TTN state with a 2-site unit cell in the canonical gauge are shown in Fig. \\ref{fig:TTN Bethe lattice with gauge centers}. In the top diagram, the gauge center is $C_2$. Here, $C_2$ represents the projection of the infinite wavefunction of the system onto the finite-sized Hilbert space of that link of the network. The center matrices $C_m$ constitute invertible gauge transformations relating the $A$ tensors to each other via $A_{i,m} C_m = A_{i,n} C_{n}$, where $i = 0,1$ and $n \\neq m$. The center matrices $C_m$ also contain important information like the entanglement spectrum between the two infinite halves of the system split by that link. Additionally, the gauge center can be absorbed onto an $A$ tensor, defining the center site tensors $A_{i,C} = A_{i,m} C_m$ for any $m$. This is shown in the lower diagram of Fig. \\ref{fig:TTN Bethe lattice with gauge centers}. In this case, $A_{i,C}$ would represent the infinite wavefunction of the system projected onto a single site (and again, different spectra of that tensor relate to entanglement spectra of different bipartitions of the lattice).\n\\\\\n\\indent\nWe describe the algorithm for the case of $H = \\sum_{\\langle i,j \\rangle} h_{i,j}$, where $h_{i,j}$ is a two-site operator that acts on nearest neighbors only. The case of longer-range operators is treatable using techniques like those described in Appendix~C of Ref. \\cite{PhysRevB.97.045145}. The total energy is given by $E = \\sum_{\\langle i,j \\rangle} \\<\\psi|h_{i,j}|\\psi\\>$, and we want to minimize $E$, treating the tensor elements of our TTN as variational parameters. As in VUMPS (and many related tensor network ground state methods), VUTS proceeds in three main steps that are iterated until convergence:\n\\begin{enumerate}\n \\item Compute the projected Hamiltonians (the Hamiltonian projected into the basis corresponding to the virtual degrees of freedom of the network) to turn the global optimization into a local optimization problem.\n \\label{alg:projected Hamiltonian}\n \\item Find the optimized tensors by minimizing the energy of the projected Hamiltonian.\n \\label{alg:optimize tensors}\n \\item Update the tensor network with the new optimized tensors.\n \\label{alg:update network}\n\\end{enumerate}\n\\indent\nTo begin, say we are interested in optimizing a 1-site projected wavefunction $A_{i,C}$, as defined in Fig. \\ref{fig:TTN Bethe lattice with gauge centers}. Step \\ref{alg:projected Hamiltonian} requires computing infinite sums of local Hamiltonian terms, projected into the basis of our gauged TTN (defined by the tensors $A_{i,m}$), for each of the $z=3$ infinite subtrees connected to $A_{i,C}$. In order to perform the infinite sum, we focus on summing the energy contributions of a single subtree. An example for the series that needs to be summed for the $m=2$ direction in order to optimize the $A_{0,C}$ tensor is shown in Fig. \\ref{fig:H_{1,2} series}. We define the results of these summations as the matrices $H_{i,m}$. The summation can be carried out by making use of the fact that the sum is a geometric series. 
However, care has to be taken to project out infinite energy contributions to keep the series convergent (i.e. keep the norm of the solution $H_{i,m}$ from diverging). The procedure of performing the summation and projecting out the infinite energy contributions is a generalization of the one in Appendix D of Ref. \\cite{PhysRevB.97.045145}, and we discuss it in more detail in Appendix \\ref{subsec:summing Hamiltonian terms}.\n\\begin{figure}\n\t\\begin{subfigure}[b]{0.45\\textwidth}\n \\includegraphics[width=1\\linewidth]{H_1_2_series_example}\n \\caption{}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.38\\textwidth}\n \\includegraphics[width=1\\linewidth]{h_tensor}\n \\caption{}\n \\label{fig:h_i_m}\n \\end{subfigure}\n\t\\caption{(a) The first few terms of the series for $H_{1,2}$, the projected Hamiltonian contribution for one of the three branches of the infinite Bethe lattice connected to the center site tensor $A_{0,C}$ (on the $i=0$ sublattice). (b) Definition of the $h_{i,m}$ tensors used in (a), which are a sum of two local Hamiltonian environment tensors sitting on two branches of the Bethe lattice. Note that the tensor labeled $h$ represents the two-site operator term $h_{i,j}$, which the Hamiltonian is made up of.}\n\t\\label{fig:H_{1,2} series}\n\\end{figure}\n\\\\\n\\indent \nOnce the environment tensors are found, we can proceed to step \\ref{alg:optimize tensors} of the algorithm, which we begin by optimizing $A_{i,C}$. This is done by finding the ground state of the Hamiltonian projected onto the sublattice site $i$, a standard procedure in VUMPS and DMRG. The eigenvalue equation for $A_{i,C}$ is shown diagrammatically in Fig. \\ref{fig:H_Ac}, and is solved iteratively (using a Hermitian eigensolver such as Lanczos). To find the TTN ground state, we obtain the eigenvector with the smallest eigenvalue. As in the VUMPS algorithm, in addition to optimizing $A_{i,C}$, we also optimize $C_m$. $A_{i,C}$ and $C_m$ are then used to solve for $A_{i,m}$, which make up the updated infinite TTN state (see next paragraph). The eigenvalue equation for $C_m$ is shown diagrammatically in Fig. \\ref{fig:H_C}.\n\\begin{figure}\n\t\\includegraphics[scale=0.35]{Ac_eig_eqn.pdf}\n\t\\caption{The eigenvalue equation for $A_{i,C}$. The sum over $m$ is a sum over the contributions from each leg of the $A_{i,C}$ tensor.}\n\t\\label{fig:H_Ac}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[scale=0.45]{C_eig_eqn.pdf}\n\t\\caption{The eigenvalue equation for $C_{m}$.}\n\t\\label{fig:H_C}\n\\end{figure}\n\\\\\n\\indent\nFinally, once $A_{i,C}, C_m$ for all $i = 0,1, m = 0,1,2$ are optimized, we can proceed to step \\ref{alg:update network} of the algorithm and solve for our new $A_{i,m}$ tensors by minimizing\n\\begin{equation}\n\\epsilon_{i,m}\n=\n\\min_{A_{i,m}^{\\dagger} A_{i,m} \\, = \\, \\mathbb{1}_{i,m}} || A_{i,C} - A_{i,m} C_m ||. \n\\label{eq:new A tensors}\n\\end{equation}\nThis minimization problem can be solved optimally using techniques described in Eqs. (18)-(22) of Ref. \\cite{PhysRevB.97.045145}. The new $A_{i,m}$ we obtain constitute our updated TTN, and steps \\ref{alg:projected Hamiltonian}-\\ref{alg:update network} are repeated until convergence. Convergence is achieved when the largest error found in Eq. (\\ref{eq:new A tensors}), $\\epsilon_{\\text{prec}} \\equiv \\max\\{\\epsilon_{i,m}\\}$, falls below a chosen threshold (e.g. 
$\\epsilon_{\\text{prec}} < 10^{-12}$).\n\\\\\n\\indent\nThe VUTS algorithm with a 1-site update, as we describe here, scales as $O(\\chi^{z+1})$ which becomes $O(\\chi^4)$ for the $z=3$ Bethe lattice and $O(\\chi^3)$ for $z=2$, thus reducing to the scaling of VUMPS in the $z=2$ case. Additionally, a 2-site update can be formulated, analogous to the 2-site DMRG algorithm which is commonly used. This requires a slight modification of the algorithm where ground states of 2-site and 1-site projected Hamiltonians are computed (as opposed to 1-site and 0-site projected Hamiltonians in the version of the algorithm described above). This can lead to improved convergence since a larger local Hilbert space is explored, but has a higher computational cost of ${\\mathcal O}(\\chi^5)$ for $z=3$. We use this technique at lower bond dimensions in more challenging parts of the phase diagram (near the phase transition), and switch to the 1-site algorithm later in the calculation to reach higher bond dimensions. To dynamically change the bond dimensions, we use a generalization of the subspace expansion procedure described in Appendix B of Ref. \\cite{PhysRevB.97.045145}.\n\\\\\n\\indent \nFor fermionic models like the Hubbard model, we need to use a fermionic version of VUTS. We use the method outlined in Refs. \\cite{PhysRevB.81.165104,Orus_review}. Every tensor is now endowed with a fermion parity $Z_2$ quantum number and is parity-preserving. When two tensor legs cross on a planar projection of a tensor diagram, a fermionic swap gate is placed at the crossing. In order to employ this method, we need to use a fixed ordering convention for the legs of the tensors $A_{i,m}, A_{i,C}, C_m$, which must be kept consistent in all of the diagrams in the calculation. We address the associated subtleties and details of this approach in Appendix \\ref{subsec:fermions}. Other symmetries beyond the $Z_2$ parity can also be used, such as $U(1)$ particle number conservation (to fix the filling), $U(1)$ spin projection symmetry in the z-direction, and also the spin $SU(2)$ symmetry. The inclusion of these symmetries makes the tensors more sparse and should therefore make the tensor operations more efficient, allowing us to reach larger bond dimensions. In this work, we only employ parity quantum numbers, and leave the use of additional symmetries for future work.\n\n\n\n\n\n\n\n\n\\section{Model and phase diagram} \n\\label{sec: Hubbard model}\n\nThe Hubbard Hamiltonian is given by\n\\begin{align}\nH = - t \\sum_{\\langle i,j \\rangle} \\sum_{\\sigma = \\uparrow, \\downarrow}c_{i, \\sigma}^{\\dagger} c_{j, \\sigma} + U \\sum_i n_{i, \\uparrow} n_{i, \\downarrow} - \\mu \\sum_{i, \\sigma} n_{i, \\sigma},\n\\label{eq:interacting Hamiltonian}\n\\end{align}\nwhere the site index $i$ now runs over the Bethe lattice and $n_{i,\\sigma} = c_{i, \\sigma}^{\\dagger} c_{i, \\sigma}$ is the on-site density for an electron of spin $\\sigma$. We set $t = 1$ and vary $U \\geq 0$ and $\\mu$. Since the model is particle-hole symmetric, we only need to consider $\\delta \\mu \\equiv \\mu - \\frac{U}{2} \\geq 0$. To obtain the phase diagram, we compute the ground state of the Hubbard model using the fermionic VUTS algorithm for several values of the bond dimension, $\\chi$. The details of the numerical calculation are given in Appendix \\ref{sec:numerical details}. We perform extrapolations in $\\chi$, the results of which we describe below. 
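For completeness, the particle-hole mapping invoked above can be sketched explicitly (this is the standard argument for a bipartite lattice): under $c_{i, \\sigma} \\to \\eta_i \\, c_{i, \\sigma}^{\\dagger}$ with $\\eta_i = \\pm 1$ on the two sublattices of the (bipartite) Bethe lattice, the hopping term is unchanged, $n_{i,\\sigma} \\to 1 - n_{i,\\sigma}$, and\n\\begin{equation}\nU n_{i, \\uparrow} n_{i, \\downarrow} - \\mu \\, ( n_{i, \\uparrow} + n_{i, \\downarrow} ) \\;\\to\\; U n_{i, \\uparrow} n_{i, \\downarrow} - (U - \\mu) ( n_{i, \\uparrow} + n_{i, \\downarrow} ) + \\text{const},\n\\end{equation}\nso that $\\mu \\leftrightarrow U - \\mu$, i.e. $\\delta \\mu \\to -\\delta \\mu$ and $\\< n_{i} \\> \\to 2 - \\< n_{i} \\>$, which is why only $\\delta \\mu \\geq 0$ needs to be considered. 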
The additional plots of the finite $\\chi$ results and details of the extrapolations are in Appendix \\ref{sec:phase diagram more plots}. \n\\\\\n\\indent \nAt half filling, $\\delta \\mu = 0$, the system is a charge insulator with antiferromagnetic order for all $U > 0$, as in one dimension. To illustrate this we compute the staggered magnetization of the bipartite sublattice, $m_s \\equiv \\abs{(\\< \\vec{S}_A \\> - \\< \\vec{S}_B \\>)\/2}$. The extrapolated values $m_s(\\chi\\to\\infty)$ are shown in Fig. \\ref{fig:ms extrapolation}. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{ms_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the staggered magnetization $m_s$ of the insulating state at half filling, plotted as a function of $U$. Note the logarithmic scale used for $m_s$. The error bars are the discrepancy in the extrapolation with and without the last data point (for the points where they are absent they are smaller than the data points).}\n\t\\label{fig:ms extrapolation}\n\\end{figure}\nWe can see that for any $U>0$ the magnetization is non-zero, tending to zero as $U \\rightarrow 0$. From general mean-field theory considerations we expect $m_s \\sim e^{-\\frac{c}{U}}$ for small $U$. \nHowever, we did not attempt to confirm this functional form numerically by systematic calculations at very small values of $U$.\n\\\\\n\\indent \nTo illustrate the charge gap at half filling, we compute the on-site occupation $\\< n_{i} \\> = \\sum_{\\sigma} \\< n_{i,\\sigma} \\>$ as a function of $\\delta \\mu$. We show this in Fig. \\ref{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50} for a single value of $U = 6$ and $\\chi = 50$. All other $U$ and $\\chi$ look qualitatively similar.\n\\begin{figure}\n\t\\begin{subfigure}[c]{0.45\\textwidth}\n \\includegraphics[width=0.85\\linewidth]{occ_vs_delmu_both_branches_U_6_chi_50}\n \\caption{$U = 6, \\chi = 50$.}\n \\end{subfigure}\n\t\\begin{subfigure}[c]{0.45\\textwidth}\n \\includegraphics[width=0.85\\linewidth]{energy_vs_delmu_both_branches_U_6_chi_50}\n \\caption{$U = 6, \\chi = 50$.}\n \\end{subfigure}\n\t\\caption{(a) The occupation versus $\\delta \\mu$ for $U = 6$ and $\\chi = 50$ for the both metallic and insulating branches. All other values of $U,\\chi$ look similar. (b) The energy per site of both branches in (a). \n\tTheir crossing point, $\\delta \\mu_c$ is indicated by a dashed line in both (a) and (b).}\n\t\\label{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50}\n\\end{figure}\nWe see that there are two branches of VUTS solutions: insulating ($\\< n_{i} \\> = 1$) and metallic ($\\< n_{i} \\> > 1$). The insulating branch exists for $\\delta\\mu\\leq\\delta\\mu_1$, and the metallic branch for $\\delta\\mu\\geq\\delta\\mu_2$: these \ntwo values, $\\delta\\mu_{1}$ and $\\delta\\mu_{2}$ (with $\\delta\\mu_{2} < \\delta\\mu_{1}$), are spinodal values limiting the meta-stability of the insulating and metallic solutions, respectively (see Appendix \\ref{sec:phase diagram more plots} for details).\nFor each value of $\\delta \\mu$ the ground state is the branch with the lower energy. The energies of the two branches cross at a particular value, which we define to be $\\delta\\mu_c (\\chi)$. At $\\delta\\mu_c (\\chi)$, the ground state changes from insulating for $\\delta\\mu < \\delta \\mu_c$ to metallic for $\\delta\\mu > \\delta \\mu_c$. 
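As a purely illustrative sketch of how the crossing point $\\delta\\mu_c(\\chi)$ can be located in practice (the simple linear interpolation between grid points and all function and variable names below are assumptions made for illustration, not a description of the actual implementation), one can proceed as follows:\n\\begin{verbatim}\nimport numpy as np\n\ndef crossing_point(delmu, e_ins, e_met):\n    # delta_mu_c where the insulating and metallic branch energies\n    # cross, from linear interpolation between grid points\n    diff = e_ins - e_met            # negative where the insulator is lower\n    k = np.where(np.diff(np.sign(diff)) != 0)[0][0]\n    frac = diff[k] \/ (diff[k] - diff[k + 1])\n    return delmu[k] + frac * (delmu[k + 1] - delmu[k])\n\n# toy branch energies (made up, for illustration only)\ndelmu = np.linspace(2.0, 3.0, 11)\ne_ins = -1.0 * np.ones_like(delmu)\ne_met = -0.8 - 0.3 * (delmu - 2.0)\nprint(crossing_point(delmu, e_ins, e_met))   # close to 2.67\n\\end{verbatim}\nThe ground state at each $\\delta \\mu$ is then the branch with the lower energy, as described above. 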
The occupation undergoes a finite jump, $\\delta n(\\delta \\mu_c, \\chi) = \\< n_i(\\delta \\mu_c, \\chi) \\> - 1$, indicating that this is a first-order metal-insulator transition. We estimate the size of the jump in the real system by the $\\chi \\to \\infty$ extrapolated values, which are shown in Fig. \\ref{fig:jump in occupation extrapolation}. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{jump_in_n_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the jump in density $\\delta n_c$ at the first-order transition occurring at $\\delta \\mu_c$ plotted as a function of $U$. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points.}\n\t\\label{fig:jump in occupation extrapolation}\n\\end{figure}\nWe can see that they remain finite for all $U$, meaning that this is a true first-order transition, and not an artifact of finite bond dimension. Using a derivation based on the Maxwell construction (detailed in Appendix~\\ref{sec:phase diagram more plots}), \nit can be shown that the total charge gap\nis given by $\\Delta_c (\\chi) = 2 \\, \\delta\\mu_c (\\chi)$. In order to obtain the charge gap $\\Delta_c$ for the real system, we extrapolate $\\Delta_c (\\chi)$ in $\\chi$. The result is shown in Fig. \\ref{fig:charge gap extrapolation}.\n\\begin{figure}\n\t\\includegraphics[scale=0.8]{charge_gap_extrapolated_vs_U}\n\t\\caption{The extrapolated values of the charge gap $\\Delta_c$ at half filling plotted as a function of $U$. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points.}\n\t\\label{fig:charge gap extrapolation}\n\\end{figure}\nWe can see that the extrapolated gap shows an exponential-like behavior at small $U$ similar to the one-dimensional case, though the precise behavior is difficult to extract numerically. At large $U$ the gap crosses over to a more linear dependence on $U$. \n\\\\\n\\indent \nThe first-order transition we observe implies that for every $U$ there is a range of forbidden density. Hence, in the $(U,n)$ plane the model exhibits phase separation. Our numerical method works in the grand canonical ensemble (we do not fix particle number per unit cell), so we cannot observe this phase separation directly. However, VUTS, similar to other variational tensor network methods like DMRG and VUMPS, can get ``stuck\" in local minima. We use this fact to find both branches of solutions near the transition point, even when they are meta-stable (i.e. not the lowest energy states), by way of hysteresis in the numerical algorithm (see Appendix \\ref{sec:phase diagram more plots} for a detailed explanation). The resulting branches, shown in Fig. \\ref{fig:n and energy vs delmu for the two branches at U = 6 and chi = 50}, can be continued to find the spinodal values of the first-order transition (see Appendix \\ref{sec:phase diagram more plots}).\n\\\\\n\\indent \nThe metallic ground state has no magnetic order for any value of density: we find that the staggered magnetization vanishes once $\\delta \\mu$ crosses $\\delta \\mu_c$. To illustrate the typical magnetization behavior we observe, in Fig. 
\\ref{fig:ms of metallic phase} we show the staggered magnetization $m_s$ as a function of $\\delta \\mu$ across $\\delta \\mu_c$ for $U = 6$ at a large but fixed $\\chi = 90$.\n\\begin{figure}\n\t\\includegraphics[scale=0.8]{ms_vs_delmu_U_6}\n\t\\caption{The staggered magnetization $m_s$ versus the chemical potential $\\delta \\mu$ for $U = 6$ and $\\chi = 90$. The critical point $\\delta \\mu_c$ is indicated by a dashed line. We see the drop from $m_s > 0$ to $m_s = 0$, indicating the first-order transition from the antiferromagnetic insulator to the paramagnetic metal.}\n\t\\label{fig:ms of metallic phase}\n\\end{figure}\nOnce $\\delta \\mu$ becomes large enough to drive the system metallic, the staggered magnetization $m_s$ immediately drops to a very small value, which is zero within our error tolerance. All other values of $U$ and $\\chi$ behave similarly. As $\\chi$ increases, the small value of magnetization in the metal decreases further, although the behavior is not monotonic, as shown in Appendix \\ref{sec:phase diagram more plots}. Also, in Appendix \\ref{sec:numerical details} we describe the strategy we use to make sure we do not bias the magnetization of the metallic solution with our ansatzes.\n\\\\\n\\indent \nIt is interesting to compare our results on the $z=3$ Bethe lattice to those established for the doping-driven MIT \nin the $z=\\infty$ limit where DMFT becomes exact. \nOnly a few studies~\\cite{camjayi_rozenberg_2006,wang_millis_2009,fratino_tremblay_prb_2017} consider this transition while also taking into account phases with magnetic long-range order. As in our results, an antiferromagnetic insulator is found at half-filling for a range of chemical potentials, as well as a non-magnetic metallic solution which can be stabilized for values of the chemical potential above a spinodal value $\\delta \\mu_{2}$. \nFurthermore, a magnetic metallic solution is found to exist in a narrow range of chemical potentials, which connects the magnetic insulator and the non-magnetic metal. \nThis may appear to differ from our findings, but it should be emphasized that all these studies consider only non-zero temperatures. As temperature is lowered, it is reported in Refs~\\cite{camjayi_rozenberg_2006,wang_millis_2009} that the magnetic metallic solution appears to exist only in an increasingly narrow interval of chemical potentials, and Ref.\\cite{camjayi_rozenberg_2006} suggested that at low temperature the MIT is a first-order transition between the magnetic insulator and the non-magnetic metal, with a forbidden range of density corresponding to phase separation. Although, to the best of our knowledge, this has not yet been fully established directly at $T=0$ for $z=\\infty$, this conclusion is consistent with our findings on the finite coordination number lattice. In contrast, on fully frustrated $z=\\infty$ lattices, which do not allow for long-range magnetic order (e.g. on the fully connected lattice with random hopping), it is established that the \ndoping-driven MIT is second order at $T=0$ and becomes first-order only at finite temperature~\\cite{RevModPhys.68.13,moeller_1995,kotliar_2002,werner_2007}. \n\\\\\n\\indent \nTo conclude this section, we mention briefly the numerical accuracy of the data presented here. As noted in the Introduction, the numerical accuracy in tensor networks is generally measured by the truncation error, denoted by $\\epsilon_{\\rho}$. 
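For orientation, a commonly quoted convention (given here only as an illustration; it is not necessarily the exact estimator used for the data below) defines the truncation error on a bond through the discarded Schmidt weights,\n\\begin{equation}\n\\epsilon_{\\rho} = \\frac{\\sum_{\\alpha > \\chi} s_{\\alpha}^2}{\\sum_{\\alpha} s_{\\alpha}^2},\n\\end{equation}\nwhere the $s_{\\alpha}$ are the singular values of the center matrix on that bond, ordered by decreasing magnitude. 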
We show here the scaling of $\\epsilon_{\\rho}$ in the metallic state, which is the state with the largest entanglement and therefore the most computationally challenging for tensor networks. \nIn Fig. \\ref{fig:truncation error metal n = 1.2} we plot our estimate of $\\epsilon_\\rho$ at a fixed density of $\\< n_i \\> = 1.2$ as a function of $\\chi$ for various $U$, and also as a function of $U$ at the largest values $\\chi = 90,100$.\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{truncation_error_all_U_vs_chi_metal_n_1_2}\n    \\caption{}\n    \\end{subfigure}\\hspace{0.01\\textwidth}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{truncation_error_at_chi_90_and_100_vs_U_metal_at_n_1_2}\n    \\caption{}\n    \\end{subfigure}\n\t\\caption{Our estimate for the truncation error $\\epsilon_\\rho$ in the metallic phase for $\\< n_i \\> = 1.2$ (a) as a function of $\\chi$ and (b) as a function of $U$ for the two largest $\\chi$. Note that since (a) is a log-log plot the linear form indicates an algebraic relationship.}\n\t\\label{fig:truncation error metal n = 1.2}\n\\end{figure}\nWe can see that the decay with $\\chi$ is algebraic, as expected for a gapless system. As a function of $U$, $\\epsilon_\\rho$ increases initially and then potentially saturates, although the large $U$ behavior is undetermined. Notably, we can see that $\\epsilon_\\rho < 10^{-3}$ for all $U$ and $\\chi = 100$, which is a high level of accuracy. We discuss the behavior of $\\epsilon_\\rho$ at other points in the phase diagram in Appendix \\ref{sec:numerical details}. \n\n\n\n\n\n\n\n\n\\section{Quasiparticles}\n\\label{sec:quasiparticles}\n\nIn this section we address the existence of quasiparticles in the system. We do this by computing the quasiparticle weight $Z$ from the ``momentum\" distribution function of the ground state, which in turn is obtained from real-space correlation functions. \nOne peculiarity of the Bethe lattice is that correlations between any two degrees of freedom sitting on individual nodes of the lattice have a maximal finite correlation length, even at criticality, due to the geometry of the lattice. However, algebraically-decaying correlations reappear after a change of basis to single-particle states that are weighted sums of all nodes of a given generation emanating from a chosen center site. These bases of states unveil the traditional long-range criticality present in gapless states on the Bethe lattice. Below, we introduce a subset of these weighted states called the \\emph{symmetric} states, which we focus on in the rest of this section. From these symmetric states we define a quantum number which plays an analogous role on the Bethe lattice to quasi-momentum on the hypercubic lattice, despite the absence of conventional translation invariance.\n\n\n\\subsection{Single-particle basis of symmetric states}\n\nFor $U = 0$, the free particle Hamiltonian was diagonalized in Ref. \\cite{PhysRevB.63.155110}, using the \\emph{symmetric} set of single-particle states. These are given as follows. Choose any site to be labeled as the origin, with site label $0$. Then consider all permutations of the nodes at each generation $l$ from the center. The symmetric states are those which are invariant under all such permutations. 
Their creation operators are given by\n\\begin{equation}\n\\tilde{c}^{\\dagger}_{0,\\sigma} \\; \\equiv \\; c^{\\dag}_{0,\\sigma}\n\\label{eq:single particle states l 0}\n\\end{equation}\nand \n\\begin{equation}\n\\tilde{c}^{\\dagger}_{l,\\sigma} \\; \\equiv \\; \\frac{1}{\\sqrt{z(z-1)^{l-1}}} \\displaystyle\\sum_{\\eta_1 = 0}^{z-1} \\; \\displaystyle\\sum_{\\eta_2 \\neq \\eta_1 } \\dots \\displaystyle\\sum_{\\eta_l \\neq \\eta_{l-1}} c^{\\dag}_{\\eta_1 + \\eta_2 + \\dots + \\eta_l,\\sigma}\n\\label{eq:single particle states}\n\\end{equation}\nfor $l > 0$. The collection of $\\eta_i$ denotes a unique path from the origin to the $l$-th generation (this is the usual notation for nodes on the Bethe lattice). The state $\\tilde{c}^{\\dagger}_{l,\\sigma} |\\text{vacuum}\\>$ is the symmetric combination of all the singly-occupied spin-$\\sigma$ states of the $l$-th generation of the tree. These states form an orthonormal subset of all the states on the Bethe lattice, but for $U = 0$ they are the only relevant ones. In the symmetric state basis, the free particle Hamiltonian maps onto fermions hopping on an infinite half-chain, with the first hopping amplitude equal to $\\sqrt{z}$ and all the rest equal to $\\sqrt{z-1}$ (remember $t$ has been set to one). The conjugate variable that replaces momentum is an angle $\\theta \\in \\[0,\\pi\\]$, and the band energy is given by $\\epsilon(\\theta) = 2 \\sqrt{z-1} \\, \\cos\\theta$. Note that the energy of a regular one-dimensional band is obtained by replacing $\\theta$ with a momentum $k$ and $z$ with $2$. The single-particle wavefunctions $\\psi_l(\\theta)$ that diagonalize the Hamiltonian are given by\n\\begin{align}\n\\begin{split}\n\\psi_0(\\theta) &= \\sqrt{\\frac{2}{\\pi}} \\frac{\\sqrt{z(z-1)}\\sin(\\theta)}{\\sqrt{z^2 - 4(z-1) \\cos^2(\\theta)}},\n\\\\ \n\\psi_{l \\neq 0}(\\theta) &= \\sqrt{\\frac{2}{\\pi}} \\sin(l \\cdot \\theta + \\gamma(z,\\theta)),\n\\\\\n\\gamma(z,\\theta) &= \n\\begin{cases}\n\\arcsin\\left(\\frac{z \\sin (\\theta )}{\\sqrt{z^2-4 (z-1) \\cos ^2(\\theta )}}\\right), & \\hspace{-2mm} \\theta \\in \\[0,\\frac{\\pi}{2}\\) \\\\\n\\pi -\\arcsin\\left(\\frac{z \\sin (\\theta )}{\\sqrt{z^2-4 (z-1) \\cos ^2(\\theta)}}\\right), & \\hspace{-2mm} \\theta \\in \\[\\frac{\\pi}{2},\\pi\\].\n\\end{cases}\n\\end{split}\n\\label{eq:real space wavefunctions n U=0 exact}\n\\end{align}\n\\\\\n\\indent \nOnce $U \\neq 0$, the symmetric states are no longer enough to describe the system. Indeed, if we simply consider a Mott-like state with one particle sitting on each of the sites at generation $l = 1$ away from the center site, that state cannot be written using only the single-particle symmetric states associated with the same center site. \nIn general, an arbitrary multi-particle Fock state cannot be constructed as a tensor product of the symmetric single-particle states. Therefore, to construct it one must employ states from other symmetry sectors. The interacting ground states we find numerically in this work therefore contain states from various symmetry sectors. However, excitations above the ground state can occur in any of these sectors, and we do not have to consider all of them. In order to tractably answer the question of existence of quasiparticles, we choose to focus on excitations in the symmetric sector. 
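To make the counting argument above concrete, consider $z = 3$ (a simple spin-polarized illustration): the first generation around the chosen center consists of three sites $a,b,c$, and the symmetric basis retains only the combination\n\\begin{equation}\n\\tilde{c}^{\\dagger}_{1,\\sigma} = \\frac{1}{\\sqrt{3}} \\( c^{\\dag}_{a,\\sigma} + c^{\\dag}_{b,\\sigma} + c^{\\dag}_{c,\\sigma} \\),\n\\end{equation}\nwhile the two orthogonal combinations of $a,b,c$ are discarded. The configuration $c^{\\dag}_{a,\\sigma} c^{\\dag}_{b,\\sigma} c^{\\dag}_{c,\\sigma} |\\text{vacuum}\\>$ occupies those two discarded orbitals as well, and therefore cannot be built from the symmetric single-particle states alone. 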
How quasiparticles in different symmetry sectors are related to each other is an interesting question for future work \\cite{Eckstein_thanks}.\n\n\n\\subsection{$\\theta$-distribution function for $U = 0$}\n\nThe $\\theta$-distribution function is calculated from the equal-time correlation functions of symmetric single-particle excitations, $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma} \\>$, where $0$ labels a chosen center site and $l$ labels the generation away from the center site. These can be computed as\n\\begin{equation}\n\\begin{split}\n\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\> \n& = \\int\\displaylimits_{- \\infty}^{0} d{\\omega} ~ \\mathcal{A}_{0l}(\\omega)\n\\\\ & = \\int\\displaylimits_{-2 \\sqrt{z-1}}^{\\mu} d\\varepsilon \\; \\sqrt{\\frac{z}{2 \\pi}} \\frac{\\psi_l\\(\\arccos(-\\frac{\\varepsilon}{2 \\sqrt{z-1}})\\)}{\\sqrt{z^2 - \\varepsilon^2}},\n\\end{split}\n\\label{eq:correlation function U=0 exact}\n\\end{equation}\nwhere $\\mathcal{A}_{0l}(\\omega)$ is the probability of inserting an electron with frequency ${\\omega}$ at the center site and observing it at generation $l$ at the same frequency. The occupation function in $\\theta$-space is defined as\n\\begin{equation}\nn_{\\sigma}(\\theta, \\theta') \\equiv \\< \\tilde{c}_{\\theta,\\sigma}^{\\dag} \\tilde{c}_{\\theta',\\sigma}\\>,\n\\label{eq:occupation function definition}\n\\end{equation}\nwhere \n\\begin{equation}\n\\td c_{\\theta, \\sigma} = \\lim\\limits_{L \\to \\infty}\\sqrt{\\frac{\\pi}{L+1}} \\sum_{d = 0}^{L} \\psi_d(\\theta) \\, \\td c_{d,\\sigma} \n\\label{eq:theta transform c operators}\n\\end{equation}\nare the $\\theta$-transforms of the symmetric state operators $\\td c_{d,\\sigma}$. The key difference with the usual calculation in hypercubic lattices is that $\\< \\td c^{\\dag}_{d,\\sigma} \\td c_{d',\\sigma}\\>$ is not simply a function of $\\abs{d-d'}$, due to the fact that the symmetric states are defined relative to a chosen center site. This is illustrated in Fig. \\ref{fig:BL 4 generations}, where we show that $\\< \\td c^{\\dag}_{2,\\sigma} \\td c_{4,\\sigma}\\>$ contains correlations of length $2,4$ and $6$.\n\\begin{figure}\n\t\\includegraphics[scale=0.4]{BL_large_4_gen_colored}\n\t\\caption{Four generations of the $z = 3$ infinite Bethe lattice emanating from a given center site. The inner shell is at generation $d = 2$, and the outer shell is at generation $d' = 4$. Choosing a given site on the inner shell (colored red), there are three different groups of sites on the outer shell (colored orange, green and purple) that contribute different-length correlations.}\n\t\\label{fig:BL 4 generations}\n\\end{figure}\nTherefore, $n_{\\sigma}(\\theta, \\theta')$ is not diagonal in $\\theta$. One way to understand this is to think of the entire symmetric sector parametrized by $\\theta$ as the $k = 0$ space on the hypercubic lattice, i.e. the one that is fully symmetric under translations. However, within this sector there is no additional symmetry of the Bethe lattice that requires the total $\\theta$ to be conserved in a scattering process, and therefore $n_{\\sigma}(\\theta, \\theta')$ is not diagonal \\cite{Eckstein_thanks}. Carefully inserting Eq. (\\ref{eq:theta transform c operators}) into Eq. 
(\\ref{eq:occupation function definition}) and rewriting the result in terms of $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ gives\n\\begin{equation}\n\\begin{split}\n& n_{\\sigma}(\\theta, \\theta') = \n\\lim\\limits_{L \\to \\infty}\\frac{\\pi}{L+1} \\sum_{d,d' = 0}^{L} \\psi_d(\\theta) \\psi_{d'}(\\theta') \\, \n\\frac{z-2}{\\sqrt{z (z-1)}}\n\\\\ &\n\\(\\frac{\\sqrt{z}(z-2)}{(z-1)^{3\/2}}\\)^{\\delta_{\\min(d,d'),0}} \n\\sum_{r=0}^{\\min(d,d')} \\(\\frac{z-1}{z-2}\\)^{\\delta_{r,0} + \\delta_{r,\\min(d,d')}} \n\\\\ & \n\\sqrt{\\frac{z}{z-1}}^{\\delta_{d,d'} \\delta_{r,0} - \\delta_{d,0} \\delta_{d',0}} \n\\, \\< \\td c^{\\dag}_{0,\\sigma} \\td c_{\\abs{d-d'}+2r,\\sigma} \\>. \n\\end{split}\n\\label{eq:n_theta final}\n\\end{equation}\nDetails of the derivation of Eq. (\\ref{eq:n_theta final}) are in Appendix \\ref{sec:occupation function}. We plot the exact $n_{\\sigma}(\\theta, \\theta')$ for $U = 0$ at half-filling in Fig. \\ref{fig:n_theta_theta_prime_U_0_delmu_0_exact}.\n\\begin{figure}\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n    \\includegraphics[scale=0.9]{n_theta_theta_prime_U_0_n_i_1_exact}\n    \\caption{}\n\t \\label{fig:n_theta_theta_prime_U_0_delmu_0_exact}\n    \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{n_theta_U_0_exact}\n    \\caption{}\n    \\label{fig:n_theta_U_0_exact}\n    \\end{subfigure}\n\t\\caption{(a) The occupation function $n_{\\sigma}(\\theta, \\theta')$ for the half-filled case at $U = 0$. Aside from the expected step-function along the diagonal, there is non-trivial off-diagonal structure. (b) The diagonal component $n_{\\sigma}(\\theta)$ for $U = 0$ and densities $\\< n_i \\> = 1,1.2$, corresponding to values of $\\theta_F = \\pi\/2$ and $\\theta_F \\approx 1.81$, respectively. Calculating the occupation from Eq. (\\ref{eq:n_theta final}) requires a large distance cutoff $L$, which is chosen here to be (a) $L = 100$ and (b) $L = 200$. This introduces an artificial correlation length, causing the step function to be (slightly) smoothed out.}\n\t\\label{fig:n_2D_and_1D_U_0_exact}\n\\end{figure}\n\\\\\n\\indent\nIn this work, we focus exclusively on the diagonal component of the occupation function, $n_{\\sigma}(\\theta) \\equiv n_{\\sigma}(\\theta, \\theta)$. This tells us the occupation of excitations that preserve total $\\theta$ when scattering with each other. We leave the detailed study of the full occupation function $n_{\\sigma}(\\theta, \\theta')$ for future work. We plot the exact $n_{\\sigma}(\\theta)$ in Fig. \\ref{fig:n_theta_U_0_exact} for $U = 0$ and densities $\\< n_i \\> = 1, 1.2$. We can see the expected behavior of $n_{\\sigma} = \\Theta(\\theta_F - \\theta)$, where $\\Theta(x)$ is the Heaviside step function and $\\theta_F$ is the $\\theta$-analog of the ``Fermi momentum\" (the step function in Fig. \\ref{fig:n_theta_U_0_exact} is slightly smoothed out due to a finite $L$ in Eq. (\\ref{eq:n_theta final})).\n\\\\\n\\indent \nIn order to compute the correlation function $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ from the VUTS numerical solution, \nwe use the fact that even in the interacting ground state, $\\< c^{\\dag}_{i,\\sigma} c_{j,\\sigma} \\>$ is still only a function of the distance $\\abs{i-j}$ (and $\\sigma$). 
We can then compute $\\< c^{\\dag}_{0,\\sigma} c_{l,\\sigma} \\>$ for an arbitrary branch to write\n\\begin{equation}\n\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>\n= \\sqrt{z^{1-\\delta_{l,0}} (z-1)^{l-1 + \\delta_{l,0}}} \\< c^{\\dag}_{0,\\sigma} c_{l,\\sigma}\\>. \n\\label{eq:correlation function symmetric states}\n\\end{equation}\nNote that the difference here from the previous considerations of this paragraph is that one of the reference points has been set to the center site. We plot $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ measured with VUTS in Fig. \\ref{fig:corr function U=0 real space} for $U = 0$ and densities $\\< n_i \\> = 1, 1.2$, along with the exact results. \n\\begin{figure}[!htbp]\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{corr_func_U_0_n_1}\n    \\caption{$U = 0, \\< n_i \\> = 1$.}\n    \\label{fig:corr function U=0 n = 1}\n    \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{corr_func_U_0_n_1_2}\n    \\caption{$U = 0, \\< n_i \\> = 1.2$.}\n    \\label{fig:corr function U=0 n = 1.2}\n    \\end{subfigure}\n\t\\begin{subfigure}[c]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{xi_vs_chi_U_0}\n    \\caption{}\n    \\label{fig:xi vs chi}\n    \\end{subfigure}\n\t\\caption{The correlation function $\\< \\td c^{\\dag}_{0,\\uparrow} \\td c_{r,\\uparrow}\\>$ ($\\sigma = \\downarrow$ gives the same) for densities (a) $\\< n_i \\> = 1$ and (b) $\\< n_i \\> = 1.2$ for the non-interacting case $U=0$. We show the $\\chi = 50,100$ results along with the exact solution of Eq. (\\ref{eq:correlation function U=0 exact}). Also shown are the functions $\\frac{1}{r} \\, \\psi_r(\\theta_F) \\, \\psi_0(\\theta_F)$ with (a) $\\theta_F = \\pi\/2$ and (b) $\\theta_F \\approx 1.81$, which are excellent fits beyond a short distance scale, showing Friedel oscillations due to the Fermi surface singularity. The insets show that the $1\/r$ decay is fit perfectly over a larger distance by the exact solution, while the finite $\\chi$ solutions display an exponential decay at large distances.\n\t\t(c) The correlation length extracted from the long distance behavior of $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{r,\\sigma}\\>$, plotted versus $1\/\\chi$. The slope for both densities is $\\xi(\\chi) \\sim \\chi^{0.84}$.}\n\t\\label{fig:corr function U=0 real space}\n\\end{figure}\nThe finite $\\chi$ results are close to the exact ones, but, as expected, the correlation function at large enough distances decays exponentially with a finite correlation length $\\xi(\\chi)$. We can measure $\\xi(\\chi)$ by fitting $\\< \\td c^{\\dag}_{0,\\sigma} \\td c_{l,\\sigma}\\>$ to an exponential at large $l$. The results are shown in Fig. \\ref{fig:xi vs chi}, where the polynomial fit gives $\\xi(\\chi) \\sim \\chi^{0.84}$ for both densities.\n\\\\\n\\indent \nWhen calculating $n_{\\sigma}(\\theta)$ from Eq. (\\ref{eq:n_theta final}), in practice we must choose a finite value of $L$. As long as we take $L$ large enough, the correlation length $\\xi(\\chi)$ will act as the long-distance cutoff and the value of $L$ will not have any effect. Our results are obtained using bond dimensions up to $\\chi = 100$, for which the induced correlation length is $\\xi(\\chi) \\lesssim 40$. We find that a value of $L \\sim 400$ is large enough for all $\\chi$ we study. The finite $\\xi(\\chi)$ smoothes out the step function in $n_\\sigma(\\theta)$, so we estimate $\\theta_F$ from the location of the maximum of $\\abs{n'_{\\sigma}(\\theta)}$. In Fig. 
\\ref{fig:n_theta near theta_F at U=0 and n=1.2} we show $n_{\\sigma}(\\theta)$ for $\\< n_i \\> = 1.2$ near $\\theta_F$. \n\\begin{figure}\n\t\\includegraphics[scale=0.8]{n_theta_near_theta_F_U_0_n_1_2}\n\t\\caption{Plots of $n_{\\sigma}(\\theta)$ near $\\theta_F$ for $U = 0$ at density $\\< n_i \\> = 1.2$ and a range of $\\chi$. The value of $L$ used here from Eq. (\\ref{eq:n_theta final}) is $L = 400$.}\n\t\\label{fig:n_theta near theta_F at U=0 and n=1.2}\n\\end{figure}\nThe finite slope at $\\theta_F$ diverges as a power law in $\\chi$, as we show in Fig. \\ref{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}. \n\\begin{figure}\n\t\\includegraphics[scale=0.9]{nprime_theta_at_theta_F_vs_chi_all_U_n_1_2}\n\t\\caption{The value of $\\abs{n'_{\\sigma}(\\theta_F)}$ vs $1\/\\chi$ for various $U$ at density $\\< n_i \\> = 1.2$. The straight lines on the log-log plot are fit by $\\abs{n'_{\\sigma}(\\theta_F)} \\sim \\chi ^{{\\alpha}}$, with $0.5 < {\\alpha} < 1$.}\n\t\\label{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}\n\\end{figure}\nThis indicates that $\\xi(\\chi)$ is the only low-energy scale in the problem, and the state is truly gapless in the $\\chi \\to \\infty$ limit.\n\\\\\n\\indent\nIn summary, in this section we show how to compute the diagonal part of the occupation function in the non-interacting case, using VUTS and \na careful extrapolation in the bond dimension, and achieve excellent agreement with the analytical result.\n\n\n\n\\subsection{Quasiparticles in the interacting system}\n\nNow we turn on interactions. In Fig. \\ref{fig:n_theta near theta_F all U and n=1.2} we plot $n_{\\sigma}(\\theta)$ near $\\theta_F$ for $\\< n_i \\> = 1.2$ and $U = 5$ at various $\\chi$ as well as for various $U$ at $\\chi = 70$.\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{n_theta_near_theta_F_U_5_n_1_2}\n    \\caption{$\\< n_i \\> = 1.2, U = 5$.}\n    \\end{subfigure}\\hspace{0.01\\textwidth}%\n\t\\begin{subfigure}[]{0.49\\textwidth}\n    \\includegraphics[scale=0.8]{n_theta_near_theta_F_all_U_n_1_2}\n    \\caption{$\\< n_i \\> = 1.2, \\chi = 70$.}\n    \\end{subfigure}\n\t\\caption{Plots of $n_{\\sigma}(\\theta)$ near $\\theta_F$ at density $\\< n_i \\> = 1.2$, showing the dependence on (a) $\\chi$ for fixed $U = 5$, and on (b) $U$ for fixed $\\chi = 70$. The value of $L$ from Eq. (\\ref{eq:n_theta final}) is $L = 400$.}\n\t\\label{fig:n_theta near theta_F all U and n=1.2}\n\\end{figure}\nThe occupation shows a form similar to that of the free case, albeit with a reduced size of the step at $\\theta_F$. The value of the slope at $\\theta_F$ diverges for all $U$, as we show in Fig. \\ref{fig:nprime_theta_at_theta_F_vs_chi_all_U_and_n1_2}. The interacting system is therefore also gapless in the $\\chi \\to \\infty$ limit, as expected.\n\\\\\n\\indent \nThe quasiparticle weight, $Z$, of the symmetric state excitations is defined as \n\\begin{equation}\nZ \\equiv \\(\\lim\\limits_{\\theta \\to \\theta_F^-} - \\lim\\limits_{\\theta \\to \\theta_F^+} \\) n_{\\sigma}(\\theta).\n\\label{eq:Z definition}\n\\end{equation}\nFor a Fermi liquid $n_{\\sigma}(\\theta)$ has a step at $\\theta_F$ and $Z > 0$, while for a Luttinger liquid the occupation function has a higher order non-analyticity that scales as $n_{\\sigma}(\\theta - \\theta_F) \\sim \\abs{\\theta - \\theta_F}^{\\gamma} \\sign(\\theta - \\theta_F)$ for some $\\gamma < 1$ and therefore $Z = 0$. Our goal is to find the true thermodynamic value of $Z$ to distinguish between these two scenarios. Of course, for a finite $\\chi$ Eq. 
(\\ref{eq:Z definition}) will always give zero. However, we can define a quantity $Z(\\chi)$ whose limit will give $Z$ in the $\\chi \\to \\infty$ limit. We define this as\n\\begin{equation}\nZ(\\chi) \\equiv n_{\\sigma}\\(\\theta_F - \\frac{\\pi}{2 \\, \\xi (\\chi)} \\) - n_{\\sigma}\\(\\theta_F + \\frac{\\pi}{2 \\, \\xi (\\chi)} \\),\n\\label{eq:Z finite chi definition}\n\\end{equation}\nwhich satisfies the desired property because $\\xi(\\chi)\\rightarrow \\infty$ with increasing $\\chi$. We choose a spacing of $\\Delta \\theta = \\pi\/\\xi(\\chi)$ around $\\theta_F$ because that is roughly the resolution one expects from a finite correlation length, and therefore the convergence in $1\/\\chi$ should be fastest. We plot $Z(\\chi)$ vs $\\chi$ for $\\< n_i \\> = 1.2$ and a range of $U$ in Fig. \\ref{fig:Z vs chi for all U at n=1.2}. \n\\begin{figure}\n\t\\includegraphics[scale=0.9]{Z_vs_chi_all_U_n_1_2}\n\t\\caption{$Z(\\chi)$ at density $\\< n_i \\> = 1.2$, shown with linear fits.}\n\t\\label{fig:Z vs chi for all U at n=1.2}\n\\end{figure}\nThe results show that the extrapolated $Z$ is (a) very close to the expected value of $Z = 1$ for the free theory and (b) finite for all $U$ we study. We plot $Z$ as a function of $U$ in Fig. \\ref{fig:Z vs U for all n}, where we can see that it decreases as a function of $U$, but seems to saturate to a finite value. The saturation value is an increasing function of doping $\\< n_i \\> - 1$, as \nillustrated by Fig.~\\ref{fig:Z at U =20 vs n-1} where we plot the value of $Z(U=20)$ as a function \nof doping. \n\n\\begin{figure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \t\\includegraphics[scale=0.8]{Z_vs_U_all_n}\n    \\caption{}\n\t \\label{fig:Z vs U for all n}\n    \\end{subfigure}\n\t\\begin{subfigure}[]{0.49\\textwidth}\n \t\\includegraphics[scale=0.8]{Z_U_20_vs_n_minus_1}\n \t\\caption{}\n\t \\label{fig:Z at U =20 vs n-1}\n    \\end{subfigure}\n \t\\caption{(a) The extrapolated values of $Z$ at densities $\\< n_i \\> = 1.2,1.3,1.4$, plotted as a function of $U$. The first two blue points are covered by the orange ones. The error bars, which are the discrepancy in the extrapolation with and without the last data point, are smaller than the data points. We can see that the curves are close to saturation at $U = 20$. (b) The values of $Z$ at $U = 20$, which seem close to the saturated values, as a function of $\\< n_i \\> - 1$.}\n\\end{figure}\nWe also address the question of Luttinger's theorem for $n_{\\sigma}(\\theta)$. We check that $\\theta_F$ is independent of $U$ (the dependence on $\\chi$ is negligible) for various densities in the range $\\< n_i \\> \\in (1.1,1.4)$. From this we conclude that Luttinger's theorem holds for all values of the density and interaction strength. \n\\\\\n\\indent\nThe decrease of $Z$ with increasing $U$ as well as the saturation at large $U$ to a value which increases with doping are both qualitatively consistent with results established in the $z=\\infty$ (DMFT) limit and with slave boson approaches (for general lattices) ~\\cite{RevModPhys.68.13,4bosons}.\nA distinctive aspect of these theories, however, is that the effective mass of \nquasiparticles is related to $Z$ by $m^*\/m=1\/Z$. On the technical level, this is due to the locality of the self-energy, while physically this reflects the inability of these approaches to capture the feedback of short-range order and collective modes \ninto the physics of quasiparticles. \nIn contrast, in the present case, since the connectivity is kept finite, we would expect this feedback to be present. 
\nIt is therefore an outstanding question for future work to explore whether the dispersion of quasiparticles is \nrenormalized in a different manner than $Z$ itself, and in particular whether it is affected by \nshort-range antiferromagnetic correlations at low doping level. This is left for future work since it requires \nan extension of our algorithm to the study of excited states.\n\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nIn summary, we have introduced a new numerical algorithm,\n(fermionic) VUTS, to study quantum (fermionic) models on the Bethe lattice. \nWe apply it to the Hubbard model for coordination number $z = 3$, allowing for a two-site unit cell, obtain the $T = 0$ phase diagram and study the doping-induced Mott transition. \nWe find an antiferromagnetic insulating phase at half filling and a paramagnetic metallic phase for the doped system, which are separated by a first-order insulator to metal phase transition. \nThe model displays phase separation at low doping, with a range of forbidden densities. These conclusions were reached by allowing for a two-site unit cell. We cannot exclude that phases with more complex charge or \nmagnetic ordering exist when allowing for a larger unit cell, which we leave as an open question for future work. By studying the diagonal component of the occupation function for momenta of the symmetric single-particle sector, we find that the quasiparticle weight is non-zero, \nconsistent with the existence of a Fermi liquid ground state for fermions on the Bethe lattice. We find that this Fermi liquid state obeys Luttinger's theorem.\n\nAn interesting direction in which to extend this work would be to further characterize this Fermi liquid state. \nOne would like to know, for example, what happens near $\\theta_F$ to the off-diagonal $n_\\sigma(\\theta,\\theta')$ when interactions are turned on. \nIt is also interesting to look at some of the other symmetry sectors, say the ones that leave each of the $z$ sub-trees connected to the center site invariant, \nand see whether they have quasiparticles and how the quasiparticle weight depends on the sector.\nThese questions are the Bethe lattice version of the important physical question of `momentum dependence' of quasiparticle properties on the Fermi surface of hypercubic lattices.\n\nIn the one-dimensional VUMPS algorithm, it has been shown that low-lying excitations above the ground-state can be accurately computed~\\cite{PhysRevB.97.235155}. \nAn obvious question is how to extend these ideas to VUTS (this is another potential advantage of this method over imaginary time evolution).\nExtension of the algorithm to excited states would allow to characterize the effective mass (dispersion) of the quasiparticles, paving the road for a study of \nhow short-range (e.g. antiferromagnetic) correlations affect quasiparticle properties. This is a very important question for the physics of strongly correlated electron systems. \nA breakdown of the $m^*\/m=1\/Z$ relation would signal that this feedback is indeed present, in contrast to the infinite connectivity limit. \nFinally, studying the energy dependence of the quasiparticle lifetime, as well as the interactions between quasiparticles would be a comprehensive study of Landau \nFermi liquid theory on the Bethe lattice. \n\nIt would also be interesting to look at the entanglement structure of the Fermi liquid state, as compared to a Luttinger liquid. 
Since the entanglement spectrum is easily obtained on the one-dimensional and Bethe lattices from the singular value decomposition of a single bond tensor, this question could in principle be readily answered. \n\nWe also propose that the finite $z$ Bethe lattice can be used as a computationally tractable platform for the study of how quasiparticles can be destroyed \nand Fermi liquid behavior breaks down when considering other fermionic Hamiltonians on this lattice. \nOne route to explore this, which connects to the feedback of long-wavelength collective modes or short-range spatial correlations on quasiparticle properties, is to study the vicinity of a quantum critical point. \nAnother route is to study microscopic models that are tailor-engineered to have incoherent excitations, such as the one of Ref.~\\cite{PhysRevB.86.045128}, or the multi-channel Kondo lattice. \n\nStudying the interplay between frustration and strong correlations is another promising direction for future research. \nIntroducing a frustrating next-nearest-neighbor hopping term has been found with DMFT to yield an interaction-driven metal-insulator transition on the infinite $z$ Bethe lattice~\\cite{rozenberg1994,RevModPhys.68.13}. An interesting question is whether this transition can also be found at finite $z$. \nThe VUTS algorithm could also be extended to other lattices with a tree-like structure, such as the Husimi cactus. \nThe study of spin models on such lattices has revealed spin-liquid ground states~\\cite{Chandra_1994,PhysRevB.93.075154} (see also Ref.~\\cite{Udagawa_2019} for considerations on the spin-ice model), \nopening the question of how these models behave upon doping. \n\nFinally, increasing the temperature to a non-zero value is another interesting direction. Intuitively, some finite-temperature properties may be less sensitive to the differences between tree lattices and two- or three-dimensional hypercubic lattices. This could be done using the purification method \\cite{PhysRevLett.93.207205,PhysRevLett.93.207204,PhysRevB.72.220401} which has been formulated on the Bethe lattice in Ref. \\cite{PhysRevB.100.125121}.\n\n\n\n\n\\section*{Acknowledgments}\n\nWe thank Julien Agier, Martin Claassen, Michel Ferrero, Gabriel Kotliar, Chris Laumann, Sung-Sik Lee, Roderich Moessner, Marcelo Rozenberg, Steve White and in particular Martin Eckstein and Riccardo Rossi for useful discussions. We thank Ruben Verresen for comments on the manuscript. All tensor network calculations and the VUTS code were implemented with the ITensor Library (C++ version 3.1) \\cite{itensor}. The Flatiron Institute is a division of the Simons Foundation.\n\n\n\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjqxx b/data_all_eng_slimpj/shuffled/split2/finalzzjqxx new file mode 100644 index 0000000000000000000000000000000000000000..c8277fa8bb783d21ffc3bbfcc5e36c3f1cebd823 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjqxx @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe ATLAS experiment \\cite{ATLAS} at CERN was equipped with the Insertable B-Layer (IBL) \\cite{IBL} between the existing inner pixel layer and a new beam pipe during Long Shutdown 1 to improve the tracking performance. The planar and 3D sensors of the IBL are exposed to a high flux of ionizing radiation because of their close position to the interaction point. 
The planar sensors are designed to withstand a fluence of $5\\cdot 10^{15}\\,\\text{n}_\\text{eq}\/\\text{cm}^{2}$. They are produced in n$^+$-in-n sensor technology and have a pixel pitch of $250\\,$\\textmu m $\\times$ $50\\,$\\textmu m \\cite{sensor1, sensor2}. The 80 columns and 336 rows of pixels are read out with the Front-End-I4 readout chip \\cite{FE-I4}.\n\nWhile operating the detector, the radiation damage of the sensors will increase their leakage current and depletion voltage, whereas the charge collection efficiency decreases. In order to continue to obtain a high particle detection efficiency, the bias voltage needs to be increased gradually. Voltages up to $1000\\,$V can be applied to the sensors causing an increase in power consumption. The higher power consumption produces heat which needs to be dissipated by the detectors cooling system to prevent thermal runaway of the sensors' leakage current. Therefore, a higher detection efficiency at lower bias voltage is desirable.\n\nNew pixel implantation shapes were designed in Dortmund to achieve electrical field strength maxima in the pixel and thus increase charge collection and particle detection efficiency at lower voltages after irradiation \\cite{REINER}.\n\nIn the proceeding \\emph{Lab and test beam results of irradiated silicon sensors with modified {ATLAS} pixel implantations} \\cite{meinPaper}, results of these proton and neutron irradiated modules were presented and showed incongruent results. This paper now examines the hypothesis that an annealing process caused the observed differences. \n\n\n\\section{Design of the Pixel Cell}\nThe baseline for the new designs is the IBL pixel design (see figure \\ref{fig:design}, label V0). In the \\mbox{$250\\,$\\textmu m $\\times$ $50$ \\textmu m} pixel cell, the standard n$^+$-implantation is located centrally with rounded corners to create a homogeneous electrical field in the pixel. Moderated p-spray is applied to isolate neighboring pixel cells. In the upper region of the pixel cell the bump bond pad is visible which is the connection to the readout chip. The bias dot with connection to the bias grid is positioned at the other end.\n\nDifferent n$^+$-implantation shapes are realized in the REINER$^2$\\note[2]{\\textbf{RE}designed, \\textbf{IN}novativ, \\textbf{E}xciting and \\textbf{R}ecognizable} pixel designs V1 to V6 (see figure \\ref{fig:design}): For the pixel designs V1 and V4, the n$^+$-implantation is divided in four uniform segments isolated with p-spray. The corners of the n$^+$- implantation of pixel design V1 are rounded, while they are rectangular for pixel design V4. Further division of the n$^+$-implantation are used for pixel designs V2 and V3. No isolation between the segments is used here due to reduced space. A narrowed shape of the metal layer and the n$^+$-implantation are realized in the pixel designs V5 and V6. While in V6 the region for moderated p-spray is the same as for the standard pixel, the section is increased to the inner part of the pixel cell for design V5.\n\nThe connection to the bias dot and bias grid is the same for all pixel designs and causes a loss in efficiency as mentioned in \\cite{effVerlust1, effVerlust2}. This will not be taken into account in this proceeding as it focuses on the influence of the different pixel shapes. \n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.6\\textwidth]{PixelLayoutQuer.png}\n\\caption{\\label{fig:design} Schematic drawing of the pixel structure and the shape of its components. 
The $\\text{n}^+$-implantation is shown in blue, the metalization in grey. The areas with high p-spray dose correspond to the nitride-openings as indicated in green. The bump bond pad is depicted in orange \\cite{meinPaper}.}\n\\end{figure}\n\n\\section{Sensors and Modules}\nThe REINER pixel sensor permits measurement of all the described pixel types on one sensor. It is produced in n$^+$-in-n sensor technology with a bulk thickness of $200\\,$\\textmu m. The pixel matrix of 80 columns and 336 rows is read out with the FE-I4 readout chip via bump bonds.\n\nThe sensor is divided in eight structures consisting of ten columns and 336 rows of the same pixel design. The outermost structures on the sensor comprise the standard IBL pixel design (labeled as 05 and V0). In between these standard structures, structures with modified pixel designs (V1 to V6) are placed (see figure \\ref{fig:sensor}, left). Each structure has its own p$^+$-implant and two HV pads. All structures are surrounded by thirteen guard rings beneath the second to last pixel column and the last pixel column (see figure \\ref{fig:sensor}, right). In this way, measurements of the structures independent from each other are possible.\n\nTo compare the performance of the different pixel designs, several sensors are bump bonded to FE-I4 readout chips and tested before and after irradiation in irradiation facilities. The results presented in this work are obtained with modules irradiated with neutrons at the TRIGA reactor in Ljubljana \\cite{Ljubljana} and at the Sandia Annular Core Research Reactor$^4$\\note[4]{https:\/\/www.sandia.gov\/research\/facilities\/annular\\_core\\_research\\_reactor.html}. They are irradiated with neutrons to target fluences of $1\\cdot10^{15}\\,\\text{n}_\\text{eq}\/\\text{cm}^2$ and $5\\cdot10^{15}\\,\\text{n}_\\text{eq}\/\\text{cm}^2$.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=1.0\\textwidth]{P-Seite2.png}\n\\caption{\\label{fig:sensor} Left: P-side of a REINER pixel sensor with six modified structures and two IBL standard structures (05 and V0). Right: Magnification of the area between two structures. The second to last and last column of each structure are beneath guard rings \\cite{meinPaper}.}\n\\end{figure}\n\n\\section{Test Beam Measurements}\nThe sensors are investigated in test beam measurements at the SPS-Beam Line H6 at CERN with a pion beam energy of $120\\,$GeV and at the beam line 22 at DESY \\cite{DESYPaper} with an electron beam energy of $5\\,\\text{GeV}$. With the EUDET-type beam telescopes ACONITE at CERN and DURANTA at DESY the hit detection efficiency can be measured with a spatial resolution of less than $5\\,$\\textmu m (CERN) and $10\\,$\\textmu m (DESY) \\cite{resolution}. A telescope consists of two arms, each equipped with three Mimosa 26 modules \\cite{mimosa26}. In between the two arms, a cooled box holds the Devices Under Test (DUTs). The box is flushed with nitrogen and cooled using a chiller (CERN) or dry ice (DESY). A non-irradiated IBL-type module is used as a timing reference sensor.\n\nWith this setup the REINER modules are investigated at different positions, bias voltages and tunings of the readout chip. For irradiated REINER modules high efficiencies around $97\\,\\%$ can be reached for all pixel designs with a high bias voltage. 
Therefore, operation with a lower bias voltage where differences in the performance of the pixel designs occur is desirable.\n\nThe traversing particles produce hits in the telescope modules and the DUTs which are reconstructed to tracks using the software EUTelescope$^5$\\note[5]{http:\/\/eutelescope.web.cern.ch\/}. To get unbiased tracks the hit information of the DUTs is not used for track fitting. About $200\\,\\text{k}$ to $500\\,\\text{k}$ events are collected in one \"run\".\n\nThe hit detection efficiency, as one part of the analysis of the performance of the DUTs, is computed with the software tool TBMon2$^6$\\note[6]{https:\/\/gitlab.cern.ch\/tbmon2}.\n\nAll reconstructed particles measured by the reference sensor and not in a masked, edge or noisy region of the investigated sensor are used for efficiency calculation and called \"tracks\". If a \"track\" is also measured by the investigated sensor, it is defined as a \"hit\". The hit detection efficiency of a sensor $\\epsilon$ is calculated by the quotient of \"hits\" and \"tracks\":\n\n\\begin{equation}\n\\epsilon = \\frac{n_\\text{hits}}{n_\\text{tracks}}, \\quad \\sigma_\\epsilon = \\sqrt{\\frac{\\epsilon \\cdot (1-\\epsilon)}{n_\\text{tracks}}}\n\\end{equation}\n\nTo summarize the efficiencies for runs taken under the same conditions the mean efficiency weighted with the number of tracks is calculated. The fluctuation of the efficiency from run to run is determined with the Clopper-Pearson confidence interval \\cite{clopper-pearson} with a confidence level of $\\gamma=95\\,\\%$. For the lower interval limit the $\\frac{1-\\gamma}{2}$-quantile and for the upper limit the $\\frac{1+\\gamma}{2}$-quantile are calculated \\cite{Andreas}.\n\nFor the following section the efficiency is calculated by using only the four innermost pixel columns of every structure to neglect the influence of guard rings.\n\n\\section{Previous Results}\nThe measurements presented in \\cite{meinPaper} showed that for non-irradiated modules the hit detection efficiency for the different pixel designs is consistent. But for modules irradiated with neutrons or protons the hit detection efficiencies are not consistent. Even the two neutron irradiated sensors R1, irradiated at Sandia, and R3, irradiated in Ljubljana, to the same target fluence of $5\\cdot 10^{15}\\,\\text{n}_\\text{eq}\/\\text{cm}^2$ and measured at the same voltage and tuning ($3200\\,$e threshold and a ToT response of $6\\,$ at a reference charge of $20\\,$ke) show dissimilar results (see figure \\ref{fig:bild1}, left). For the module R1, irradiated at Sandia, the pixel designs with narrowed n$^+$-implantation V5 and V6 reach the highest efficiencies at $400\\,$V. No efficiency of the pixel design V0 was measured for module R1 as the beam spot was focused on the left side of the module and did not cover pixel design V0.\n\nFor the Ljubljana irradiated module R3, the pixel design with narrowed n$^+$-implantation V6 has the lowest efficiency while all other pixel designs have similar efficiencies of approx. $50\\,\\%$.\n\nThese diverging results might have been caused by the different neutron energy spectra of the two irradiation facilities while another hypothesis is a different temperature of the modules during irradiation leading to annealing of defects. 
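As a side note on the statistics defined in the previous section, the following minimal sketch (illustrative only; it is not the TBMon2 implementation, and the function names are our own) shows how the hit detection efficiency, its binomial uncertainty and the Clopper-Pearson interval with $\\gamma=95\\,\\%$ can be evaluated:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import beta\n\ndef efficiency(n_hits, n_tracks, gamma=0.95):\n    # efficiency, binomial error and Clopper-Pearson interval\n    eff = n_hits \/ n_tracks\n    sigma = np.sqrt(eff * (1.0 - eff) \/ n_tracks)\n    a = 1.0 - gamma\n    lo = 0.0 if n_hits == 0 else beta.ppf(a \/ 2, n_hits, n_tracks - n_hits + 1)\n    hi = 1.0 if n_hits == n_tracks else beta.ppf(1 - a \/ 2, n_hits + 1, n_tracks - n_hits)\n    return eff, sigma, lo, hi\n\ndef weighted_mean_efficiency(hits, tracks):\n    # mean efficiency of several runs, weighted by the number of tracks\n    return np.sum(hits) \/ np.sum(tracks)\n\nprint(efficiency(970, 1000))\n\\end{verbatim}\n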
In Ljubljana the maximum temperature of the modules was $45^\\circ$C, while they reached about $100^\\circ$C during the irradiation at Sandia (see figure \\ref{fig:bild1}, right).\n\nTo test the annealing hypothesis, the modules R3 and R9, which were irradiated with neutrons in Ljubljana to different target fluences, are now annealed in several steps at $80^\\circ$C.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.48\\textwidth]{R1_R3_comparison_3200_iWORID.png}\n\\includegraphics[width=.48\\textwidth]{TempertureProfileIrradiation_IWORID.png}\n\\caption{\\label{fig:bild1} Left: Efficiencies of the different pixel designs for two sensors irradiated with neutrons to a target fluence of $5\cdot 10^{15}\,\\text{n}_\\text{eq}\/\\text{cm}^2$ at $400\,$V (Tuning: $3200\,$e threshold, $6\,$ToT at $20\,$ke). R1 is irradiated at Sandia while R3 is irradiated in Ljubljana. Right: Temperature profile during the neutron irradiation of the module R1 at Sandia.}\n\\end{figure}\n\n\\section{First Annealing Results}\nFor the annealing procedure, the climate chamber is preheated to $80^\\circ$C before the module is inserted. The temperature of the module is monitored, and the module is not biased during the annealing. Afterwards, the module cools down at room temperature and is put back into the freezer for storage. After each annealing step, IV scans and test beam measurements are performed at DESY.\n \nTo determine whether differences in the efficiency occur with annealing at all, the intention was to anneal a module for a long time. Therefore, the first annealing step of module R3 lasted three hours at $80^\\circ$C. The following annealing steps lasted for two hours each.\nThe results of the hit detection efficiency for the different pixel designs for module R3 after several annealing steps are presented in figure \\ref{fig:bild2}. For these measurements at $300\,$V, the module was tuned to a threshold of $1600\,$e and a ToT response of $6\,$ at a reference charge of $20\,$ke. The standard pixel designs 05 and V0 reach the highest efficiencies in the non-annealed case, while the pixel designs with narrowed n$^+$-implantation, V5 and V6, are less efficient than all other designs. This behavior appears to be independent of the threshold: in figure \\ref{fig:bild1} (left) the sensor was measured at a threshold of $3200\,$e, while in figure \\ref{fig:bild2} the module was tuned to a threshold of $1600\,$e.\n\nWith the first annealing step of three hours, the efficiency drops for all pixel designs except for the designs with a narrowed n$^+$-implantation: for the standard pixel design the hit detection efficiency drops by $8\,\\%$, while for pixel design V5 the efficiency increases by more than $12\,\\%$. After this first annealing step, the efficiency of pixel design V5 is even higher than the non-annealed results of the standard designs. 
With further annealing steps, the efficiency of the standard pixel designs stays almost constant, while for pixel designs V5 and V6 the efficiency increases.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.7\\textwidth]{Efficiency_R3_vergleich_0h_3h_5h_7h_80C_1600e_300V_IWORID.png}\n\\caption{\\label{fig:bild2}\nEfficiencies of the different pixel designs for the sensor R3 after various annealing steps at $300\,$V (Tuning: $1600\,$e threshold, $6\,$ToT at $20\,$ke).}\n\\end{figure}\n\n\\newpage\n\nThese results confirm the hypothesis that annealing caused the narrowed pixel designs to be more efficient than the standard pixel design. To determine whether this effect depends on the particle fluence and to see when the effect becomes relevant, the module R9 was irradiated with neutrons in Ljubljana to a target fluence of $1\cdot10^{15}\,\\text{n}_\\text{eq}\/\\text{cm}^2$ and measured with a shorter annealing step time of five minutes at $80^\\circ$C. The results of the hit detection efficiency for the module R9 at $100\,$V, tuned to a threshold of $1600\,$e and a ToT response of $6\,$ at a reference charge of $20\,$ke, are presented in \\mbox{figure \\ref{fig:bild3}}.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.7\\textwidth]{Efficiency_R9_vergleich_0min_5min_10min_80C_1600e_100V_IWORID.png}\n\\caption{\\label{fig:bild3} Efficiencies of the different pixel designs for the sensor R9 after various annealing steps at $100\,$V (Tuning: $1600\,$e threshold, $6\,$ToT at $20\,$ke).}\n\\end{figure}\n\nFor the non-annealed measurement, the pixel designs with narrowed n$^+$-implantation have the lowest efficiency. Because the beam spot is focused on the left part of the sensor, there is no data point for the pixel design V0. After the first annealing step, an increase in efficiency is visible for the pixel designs with a narrowed n$^+$-implantation and the standard pixel design. However, after the second annealing step the efficiency decreases for all designs. Compared to the non-annealed results, the efficiency after the second annealing step is still higher for the standard pixel design and the designs with narrowed n$^+$-implantation.\n\nIt is known that annealing on a short time scale can reduce the effective doping concentration in the sensor material, so that more signal can be measured at the same voltage. This process is called beneficial annealing. In contrast, reverse annealing degrades sensor properties in the long term. For more information on annealing effects see \\cite{Moll}. Beneficial annealing can explain the higher efficiencies after the first annealing step of the module R9 with its shorter annealing time. With the second step, however, reverse annealing becomes dominant and the efficiency drops for all designs. In the case of the module R3, only reverse annealing can be observed because of the long-term annealing of three hours at $80^\\circ$C.\n\nThe observed changes of the hit detection efficiency with annealing are caused by at least two different effects. With annealing, the depletion voltage and the charge collection efficiency change. Thus, if one of these effects depends on the pixel design, different hit detection efficiencies might be observed. 
Charge multiplication could also explain the higher efficiencies; it was also observed in \\cite{Milovanoivi,Kramberger1}, where neutron-irradiated n-in-p micro-strip sensors showed higher electric fields and therefore charge multiplication close to the strips.\n\n\\section{Summary and Outlook}\nThe results of the Ljubljana neutron-irradiated module R3 indicate that long-term annealing caused a higher hit detection efficiency for the pixel designs with narrowed n$^+$-implantation compared to all other designs. As a result of this, the higher detection efficiency observed for pixel designs with narrowed n$^+$-implantation on the neutron-irradiated sensor R1 at Sandia (see \\cite{meinPaper}) is explained as being due to the higher temperature during the irradiation.\n\nAs this effect was not visible for the module R9, which was irradiated to a lower fluence and annealed in shorter steps, it is not clear whether it depends on the neutron fluence. Therefore, the module R9 needs to be further investigated with additional annealing steps.\n\nFor the module R3, further annealing steps will be performed to find the maximum efficiency of pixel design V5 and to study the performance after the maximum efficiency has been reached.\n\nThe higher hit detection efficiencies might be explained by charge multiplication, which is observed in n-in-p micro-strip detectors after neutron irradiation and long-term annealing \\cite{Milovanoivi,Kramberger1}. To understand whether the higher efficiency is caused by charge multiplication, which was the intention of the REINER pixel designs, TCT measurements at different voltages and annealing steps are planned for irradiated modules. With this method, the pixel designs can be investigated on the \\textmu m scale to see which parts of the pixel might cause charge multiplication.\n\n\\acknowledgments\nThe authors would like to thank the team at the Sandia Annular Core Research Reactor, especially M. Hoeferkamp and S. Seidel, and the team at the TRIGA reactor in Ljubljana, especially V. Cindro, for their help with the irradiation of the sensors.\n\nMany thanks to all participants of the ATLAS ITk pixel test beam campaigns, especially those who develop and maintain the corresponding hardware and software.\n\nThe presented work is carried out within the framework of Forschungsschwerpunkt FSP 103 and supported by the Bundesministerium f\\\"ur Bildung und Forschung BMBF under grant 05H15PECA9.\n\nThis project has received funding from the European Union's Horizon 2020 Research and Innovation programme under Grant Agreement no. 
654168.\n\nThe measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\hspace{-0.6cm}.\\hspace{0.3cm}#1}} %\n\\newcommand{\\zd}{z^{\\delta}} %\n\\newcommand{\\be}{B_{\\Omega}(z;X)} %\n\\newcommand{\\ca}{C_{\\Omega}(z;X)} %\n\\newcommand{\\ko}{K_{\\Omega}(z;X)} %\n\\newcommand{\\tzd}{\\tilde z^{\\delta}} %\n\\newcommand{\\tjd}{\\tilde \\zeta^{\\delta}} %\n\\newcommand{\\jd}{\\zeta^{\\delta}} %\n\\newcommand{\\ld}{\\lambda_{\\delta}} %\n\\newcommand{\\tj}{\\tilde \\zeta_1} %\n\\newcommand{\\tz}{\\tilde z_1} %\n\\newcommand{\\Cb}{C_b(z_0,z^{\\delta_0})} %\n\\newcommand{\\jdb}{(d\\delta^{\\frac 1\\eta},0,- b\\delta\/2)} %\n\\newcommand{\\dbar}{\\overline{\\partial}} %\n\\newcommand{\\oo}{\\overline \\Omega} %\n\\newcommand{\\ct}{\\Bbb{C}^3} %\n\\newcommand{\\al}{{\\alpha}} %\n\\newcommand{\\ov}{\\overline} %\n\\newcommand{\\dd}{\\partial \\overline{\\partial}} %\n\\newcommand{\\de}{\\delta^{\\frac 1\\eta}} %\n\\newcommand{\\dee}{\\delta^{-\\frac 2\\eta}} %\n\\newcommand{\\bo}{b\\Omega} %\n\\newcommand{\\od}{\\Omega_\\delta} %\n\\newcommand{\\gd}{g_{\\tz,\\delta}} %\n\\newcommand{\\gdd}{g_\\delta} %\n\\newcommand{\\qd}{Q_{\\gamma\\delta}(\\tz)} %\n\\newcommand{\\qad}{Q_{a\\delta}(\\tz)} %\n\\newcommand{\\qdd}{Q_{\\gamma\\delta}(\\tzd)} %\n\\newcommand{\\rd}{R_{\\gamma\\delta}(\\tz)} %\n\\newcommand{\\rbd}{R_{b\\delta}(\\tzd)} %\n\\newcommand{\\qbd}{Q_{b\\delta}(\\tzd)} %\n\\newcommand{\\rdd}{R_{\\gamma\\delta}(\\tjd)} %\n\\newcommand{\\rcd}{R_{c\\delta}(\\jd)} %\n\\newcommand{\\ol}{\\overline L} %\n\\newcommand{\\Pd}{\\Phi_{\\tzd}} %\n\\newcommand{\\td}{\\tau(\\tzd,\\delta)} %\n\n\\newtheorem{thm}{Theorem}[section]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{rem}[thm]{Remark}\n\\newtheorem{cor}[thm]{Corollary}\n\\newtheorem{defn}[thm]{Definition}\n\n\\numberwithin{equation}{section}\n\n\n\\begin{document}\n\n\\title {Necessary conditions for H\\\"older regularity gain of $\\dbar$ equation in $\\mathbb C^3$}\n\n\\author{Young Hwan You \\thanks{Department of Mathematics, Indiana University East, IN 47374, USA. E-mail:youy@iue.edu\n\\newline 2010 \\textit{Mathematics Subject Classification} Primary 32F45; Secondary 32T25.}}\n\n\\date{}\n\n\\maketitle\n\n\\begin{abstract}\n\\noindent Suppose that a smooth holomorphic curve $V$ has order of contact $\\eta$ at a point $w_0$ in the boundary of a pseudoconvex domain $\\Omega$ in $\\mathbb{C}^3.$\nWe show that the maximal gain in H\\\"older regularity for solutions of the $\\bar{\\partial}$-equation is at most $\\frac{1}{\\eta}.$ \n\\end{abstract}\n\n\\section{Introduction}\\label{sec1}\n\n\nLet $\\Omega$ be a given domain in $\\mathbb{C}^n$ and $\\alpha$ be a $\\bar{\\partial}$-closed form of type $(0,1)$ in $\\Omega$. 
The $\\bar{\\partial}$-problem consists of finding a solution $u$ of $\\bar{\\partial} u = \\alpha$ that satisfies certain boundary regularity estimates as measured by either $L^2$ or $L^p$ norms or in H\\\"older norms.\n\n\nWhen $\\Omega$ is strongly pseudoconvex, in the $L^2$-sense, Kohn \\cite{FK,K1,K2} showed that for any $s\\geq 0$, there is a canonical solution of $\\bar{\\partial}u = \\alpha$ such that \n\\begin{equation}\\label{kohnestimate}\n|||u|||_{s+\\epsilon} \\leq C \\left\\|\\alpha\\right\\|_s \\quad \\mbox{and} \\quad \\ u \\perp A(\\Omega) \\cap L^2(\\Omega),\n\\end{equation}\nwith $\\epsilon = \\frac{1}{2}.$ \n(We say $u$ is the canonical solution if $u \\perp A(\\Omega) \\cap L^2(\\Omega).$)\nHere, $\\left\\|\\cdot\\right\\|^2_s$ is the $L^2$-Sobolev norm of order $s$ and the norm $|||\\cdot |||_{s+\\epsilon}$ measures tangential derivatives near the boundary of order $s+\\epsilon$ in the tangential directions. Kohn showed that if $U$ satisfies\n$\\square U = (\\bar{\\partial}\\bar{\\partial}^* + \\bar{\\partial}^*\\bar{\\partial})U = \\alpha,$ and if $\\bar{\\partial}\\alpha = 0,$ then\n$u = \\bar{\\partial}^*U$ is the canonical solution of $\\bar{\\partial}u = \\alpha.$\nTo prove regularity for this solution, Kohn proved the a priori estimate\n\\begin{equation}\\label{subellipticestimate}\n|||\\phi|||_{\\epsilon}^2 \\leq C(\\left\\|\\bar{\\partial} \\phi \\right\\|^2 + \\left\\|\\bar{\\partial}^* \\phi\\right\\|^2 + \\left\\|\\phi\\right\\|^2)\n\\end{equation} \nwith $\\epsilon = \\frac{1}{2}.$\nHere, $\\phi \\in C_{(0, 1)}^{\\infty}(W) \\cap \\mbox{Dom}(\\bar{\\partial}) \\cap \\mbox{Dom}(\\bar{\\partial}^*) $ is compactly supported in the neighborhood $W$ of the boundary point $w_0.$ Using this estimate and a bootstrap argument, Kohn proved (\\ref{kohnestimate}). Stein and Greiner \\cite{GS} later extended (\\ref{kohnestimate}) to similar estimates in $L^p$ and H\\\"older spaces. 
For example, if $\\left\\|\\cdot\\right\\|_{\\Lambda^s (\\Omega)}$ is the H\\\"older norm of degree $s$, then Stein and Greiner proved that $u$ satisfies \n\\begin{equation} \\label{holder_estimate}\n\\left\\|u\\right\\|_{\\Lambda^{s+\\epsilon}(\\Omega)} \\leq C \\left\\|\\alpha\\right\\|_{\\Lambda^s (\\Omega)}, \n\\end{equation}\nwith $\\epsilon = \\frac{1}{2}.$\n\n Kohn extended his $L^2$ results to when $\\Omega$ is a regular finite 1-type pseudoconvex domain in $\\mathbb{C}^2.$ To define a regular finite 1-type, we measure the order of contact of a given holomorphic curve at $w_0 \\in b\\Omega.$ Let $V$ be a one-dimensional {\\it smooth} variety parametrized by $\\zeta \\rightarrow \\gamma(\\zeta) = (\\gamma_1(\\zeta), \\cdots, \\gamma_n(\\zeta)),$ where $\\gamma(0) = w_0$ and $\\gamma'(0) \\neq 0.$ We define the order of contact of the curve by $\\nu_o(R\\circ\\gamma),$ where $R$ is a defining function of $\\Omega$ and $\\nu_o(g)$ is just the order of vanishing (an integer at least equal to $2$) of $g$ at $0.$ We then define the type, $T_{\\Omega}^{reg}(w_0) = \\sup\\{\\nu_o (R \\circ \\gamma) ; \\mbox{all} \\ \\gamma \\ \\mbox{with} \\ \\gamma(0) = w_0, \\gamma'(0) \\neq 0\\}.$ Further, we can define the regular type of $\\Omega$ by \n$T^{reg}(\\Omega) = \\sup \\{T_{\\Omega}^{reg}(w_0) ; w_0 \\in b\\Omega\\}.$\n Kohn \\cite{K} proved that if $\\Omega$ is a regular finite 1-type pseudoconvex domain in $\\mathbb{C}^2$, then (\\ref{kohnestimate}) holds for $\\epsilon = \\frac{1}{T^{reg}(\\Omega)}.$ Similarly, Nagel-Rosay-Stein-Wainger \\cite{NRSW} showed that (\\ref{holder_estimate}) also holds for the same $\\epsilon$.\n\n\nIn order to discuss similar estimates in $\\mathbb{C}^n,$ it is important to consider the order of contact of {\\it singular curves}. We define the order of contact of a holomorphic curve parametrized by $\\zeta \\rightarrow \\gamma(\\zeta),$ with $\\gamma(0) = w_0,$ by $C_{\\Omega}(\\gamma, w_0) = \\frac{\\nu_o (R \\circ \\gamma)}{\\nu_o(\\gamma)},$ where $\\nu_o(\\gamma)=\\min\\{\\nu_o (\\gamma_k); k =1, \\cdots, n\\}.$ Define the type of point $w_0$ by $T_{\\Omega}(w_0)= \\sup \\{C_{\\Omega}(\\gamma, w_0); \\mbox{all} \\ \\gamma \\ \\mbox{with} \\ \\gamma(0)= w_0 \\}$ and finally, the type of $\\Omega$ is $T_{\\Omega}= \\sup \\{T_{\\Omega}(w_0); w_0 \\in b\\Omega \\}.$ In the case of the $L^2$-norm, Catlin \\cite{C3} showed that if there is a curve $V$ parametrized by $\\gamma$ through $w_0 \\in b\\Omega$, where $\\Omega \\subset \\mathbb{C}^n$ and (\\ref{subellipticestimate}) holds, then $\\epsilon \\leq \\frac{1}{C_{\\Omega}(\\gamma, w_0)}.$ In H\\\"older norms, McNeal \\cite{Mc} proved that if, with an additional assumption, $\\Omega$ admits a holomorphic support function at $w_0 \\in b\\Omega$ and (\\ref{holder_estimate}) holds, then $\\epsilon \\leq \\frac{1}{C_{\\Omega}(\\gamma, w_0)}$. \n\n\n\n\nThere is the third notion of type, the ``Bloom-Graham\" type, $T_{BG}(w_0).$ It turns out that $T_{BG}(w_0)$ is the maximal order of contact of smooth $(n-1)$-dimensional complex submanifold. 
Thus, it follows that for any $w_0 \\in b\\Omega,$ $T_{BG}(w_0) \\leq T_{\\Omega}^{reg}(w_0) \\leq T_{\\Omega}(w_0).$ Krantz \\cite{Kr} showed that if $T_{BG}(w_0) = m,$ then $\\epsilon \\leq \\frac{1}{m}.$ \\\\\n\n\nIn this paper we present geometric conditions that must hold if a H\\\"older estimate of order $\\epsilon$ is valid in a neighborhood of $w_0 \\in b\\Omega$ in $\\mathbb{C}^3.$ The main result is the following theorem: \n\n\\begin{thm} \\label{main_theorem}\nLet $\\Omega = \\{R(w) < 0\\}$ be a smoothly bounded pseudoconvex domain in $\\mathbb{C}^3.$ Suppose that there is a $1$-dimensional smooth analytic variety $V$ passing through $w_0$ such that for all $w \\in V$, $w$ sufficiently close to $w_0$, $$|R(w)| \\leq C|w-w_0|^\\eta,$$\nwhere $\\eta >0.$\n{\\it If there exists a neighborhood $W$ of $w_0$ so that for all $\\alpha \\in L_{\\infty}^{0,1} ({\\Omega})$ with $\\bar{\\partial}\\alpha = 0$, there is a $u \\in \\Lambda^{\\epsilon} (W \\cap \\overline{\\Omega})$ and $C>0$ such that $\\bar{\\partial}u =\\alpha$ and $$ {\\lVert u \\rVert}_{{\\Lambda^\\epsilon}(W \\cap \\overline{\\Omega})} \\leq C{\\lVert \\alpha \\rVert}_{L_{\\infty}(\\Omega)},$$\nthen $\\epsilon \\leq \\frac{1}{\\eta}$}.\n\\end{thm}\n\n\n\\begin{cor} \n$\\epsilon \\leq \\frac{1}{T_{\\Omega}^{reg}(w_0)}.$\n\\end{cor}\n\n\n\\begin{rem} \\\n\\begin{enumerate}[\\normalfont i)]\n \\item If $T_{BG}(w_0) = +\\infty$, Krantz's result \\cite{Kr} holds for any $m > 0$ and we conclude \\\\ $\\epsilon \\leq \\frac{1}{m} \\leq \\frac{1}{\\eta}$ for large $m$. Thus we can assume $T_{BG}(w_0) = m < \\infty.$ Furthermore, since $\\epsilon \\leq \\frac{1}{m}$, we can assume $m < \\eta$ in the rest of this paper.\n \\item Theorem \\ref{main_theorem} improves the results by Krantz \\cite{Kr} and McNeal \\cite{Mc} in the sense that we obtain a sharper result, since $\\eta > m$, and we do not assume the existence of a holomorphic support function. Note that the existence of a holomorphic support function is only guaranteed for restricted classes of domains (see the Kohn-Nirenberg domain \\cite{KN}).\n \n\\end{enumerate}\n\n\\end{rem}\n\n\nTo prove Theorem \\ref{main_theorem}, the key components are the complete analysis of the local geometry near $w_0 \\in b\\Omega$ (Section \\ref{special coordinates}) and the construction of a bounded holomorphic function with a large nontangential derivative near the boundary point (Section \\ref{Sec4}). In Section \\ref{special coordinates}, we construct special holomorphic coordinates about $w_0$ which are adapted to both the Bloom-Graham type and the order of contact of $V$. Then, we use the truncation technique developed in \\cite{C} to deal with two-dimensional slices of the domain. In Section \\ref{Sec4}, by using the holomorphic function constructed by Catlin \\cite{C2} on a two-dimensional slice, we construct a bounded holomorphic function $f$ with a large nontangential derivative defined locally up to the boundary in $\\mathbb{C}^3$. 
Finally, in Section \\ref{Sec5}, we prove Theorem \\ref{main_theorem} by using the constructed holomorphic function.\n\\\\\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Special coordinates}\\label{special coordinates}\n\n\n Let $\\Omega$ be a smoothly bounded pseudoconvex domain in $\\mathbb C^3$ with a smooth defining function $R$ and let $w_0 \\in b\\Omega.$ Since $dR(w_0) \\neq 0$, clearly we can assume that $\\frac{\\partial R}{\\partial w_3}(w) \\neq 0$ for all $w$ in a small neighborhood $W$ about $w_0.$ Furthermore, we may assume that $w_0 = 0.$ In Theorem \\ref{special coordinate}, we construct a special coordinate system near $w_0$ which maps the given smooth holomorphic curve to the $z_1$ axis and has a nonzero term along the $z_2$ axis when $z_1 = 0.$\n \n\n\n\n\n\\begin{thm} \\label{special coordinate}\n\nLet $\\Omega = \\{ w ; R(w)< 0 \\}$ be a smoothly bounded pseudoconvex domain in $\\mathbb{C}^3$ and let $T_{BG}(0)= m$, where $0 \\in b \\Omega $. Suppose that there is a smooth $1$-dimensional complex analytic variety $V$ passing through $0$ such that for all $w \\in V, w $ sufficiently close to $0$, \n \\begin{equation}\n |R(w)| \\leq C|w|^{\\eta}, \\label{condition of order of contact}\n \\end{equation}\nwhere $\\eta > 0$. Then there is a holomorphic coordinate system $(z_1, z_2, z_3)$ about $0$ with $w = \\Psi(z)$ such that \n\n\\begin{enumerate}[{\\normalfont (i)}]\n \n \\item $r(z)= R \\circ \\Psi(z) = \\mbox{\\normalfont{Re}}{z_3} + \\sum\\limits_ {\\substack{|\\alpha|+|\\beta| = m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta a_{\\alpha, \\beta}{z'}^{\\alpha} {\\bar z}'^{\\beta} + \\mathcal{O}(|z_3||z|+|z'|^{\\eta +1}),$ \\label{special coordinate 1}\n\n \\item $|r(t,0,0)| \\lesssim |t|^{\\eta}$ \\label{changed order of contact}\n \n \n \\item $ {a_{0, \\alpha_2, 0,\\beta_2}} \\neq 0$ with $\\alpha_2+\\beta_2 = m$ for some $\\alpha_2 > 0, \\beta_2 > 0,$ \\label{nonzero term}\n\\end{enumerate}\nwhere $z' = (z_1, z_2),$ and $z = (z_1, z_2, z_3).$\n\n\\end{thm} \n\n\n\n\n\n\nNote that $\\eta$ is a positive integer since $V$ is a smooth 1-dimensional complex analytic variety. \nTo construct the special coordinate in Theorem \\ref{special coordinate}, we start with a similar coordinate about $0$ in $\\mathbb{C}^3$ as in Proposition 1.1 in \\cite{C2}. 
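\n\nFor orientation, the following simple local model (not taken from the argument in this paper, and used only to fix ideas) illustrates the normalizations in Theorem \\ref{special coordinate}. Suppose that near $0$\n\\begin{equation*}\n r(z) = \\mbox{Re}\\,z_3 + |z_2|^2 + |z_1|^{2k}, \\qquad k \\geq 2,\n\\end{equation*}\nand let $V$ be the $z_1$ axis. Then the Bloom-Graham type is $m = 2$ and the term $|z_2|^2$ realizes (\\ref{nonzero term}), while $r(t,0,0) = |t|^{2k}$, so (\\ref{changed order of contact}) holds with $\\eta = 2k > m$; in this model the bound of Theorem \\ref{main_theorem} at $0$ becomes $\\epsilon \\leq \\frac{1}{2k}$.\n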
\n\n\n\n\n\n\n\\begin{prop} \\label{proposition 1}\n\nLet $T_{BG}(0) = m$ and $\\Omega = \\{ w \\in \\mathbb{C}^3 ; R(w)< 0 \\}$.\nThen there is a holomorphic coordinate system $ u = (u_1, u_2, u_3)$ with $ w = \\widetilde{\\Psi}(u)$ such that the function $\\tilde{R}$, given by $\\widetilde{R}(u) = R\\circ \\widetilde{\\Psi}(u)$, satisfies \n\n \\begin{equation}\\label{coordinate 1} \n \\widetilde{R}(u) = \\mbox{{\\normalfont Re}}{u_3} + \\sum_ {\\substack{ |\\alpha|+|\\beta| = m \\\\ | \\alpha| > 0, |\\beta| > 0}}^\\eta b_{\\alpha, \\beta} {u'}^{\\alpha} {\\bar u}'^{\\beta} + \\mathcal{O}(|u_3||u|+|u'|^{\\eta +1}), \n \\end{equation}\nwhere $u'=(u_1,u_2),$ and where $b_{\\alpha, \\beta} \\neq 0$ for some $\\alpha, \\beta$ with $|\\alpha| + |\\beta| = m.$\n\n\\end{prop}\n\n\n\\begin{proof} \nBloom and Graham \\cite{BG} showed that $T_{BG}(w_0) = m$ if and only if there exists coordinate with $w_0$ equal to the origin in $\\mathbb{C}^3$ and $b_{\\alpha, \\beta} \\neq 0$ for some $\\alpha, \\beta$ with $|\\alpha| + |\\beta| = m$ such that \n\n\\begin{equation*}\nR(w) = \\mbox{Re}{w_3} + \\sum\\limits_ {\\substack{|\\alpha|+|\\beta| = m \\\\ |\\alpha| > 0, |\\beta| > 0}} b_{\\alpha, \\beta} {w'}^{\\alpha} {\\bar w}'^{\\beta} + \\mathcal{O}(|w_3||w|+|w'|^{m+1}),\n\\end{equation*}\nwhere $\\alpha = (\\alpha_1, \\alpha_2), \\beta = (\\beta_1, \\beta_2) \\ \\mbox{and} \\ w' = (w_1, w_2).$ \n\nNow assume that we have defined $\\phi^{l} : \\mathbb{C}^3 \\rightarrow \\mathbb{C}^3 $ so that there exist numbers $b_{\\alpha, \\beta}$ for $|\\alpha|, |\\beta|>0$ \nand $|\\alpha|+|\\beta| < l + 1$ with $l > m$ so that $ R_l = R\\circ \\phi^{l}$ satisfies\n\n\\begin{equation} \\label{coordinate 2}\n R_l(v) = \\mbox{{\\normalfont {Re}}}{v_3} + \\sum\\limits_{\\substack{|\\alpha|+|\\beta| = m \\\\ |\\alpha| > 0, |\\beta| > 0}}^{l} b_{\\alpha, \\beta} \n {v'}^{\\alpha}\\bar{v}'^{\\beta} + \\mathcal{O}(|v_3||v|+|v'|^{l+1}), \n\\end{equation}\nwhere $v' = (v_1, v_2)$ and $v = (v_1, v_2, v_3).$\n\nIf we define $$\\phi^{l+1} (u) = \\biggl( u_1, u_2, u_3-\\sum_{|\\alpha|=l+1} {\\frac{2}{\\alpha!}}{\\frac{\\partial^{l+1} R_l}{\\partial {v'}^\\alpha}}(0){u'}^\\alpha \\biggr),$$ then $R_{l+1} = {R_l}\\circ \\phi^{l+1} = R \\circ \\phi^l \\circ \\phi^{l+1}$ satisfies the similar form of (\\ref{coordinate 2}) with $l$ replaced by $l+1$. Therefore, if we take $\\widetilde{\\Psi} = \\phi^l \\circ \\cdots \\circ \\phi^{\\eta}$, then $\\widetilde{R} = R\\circ \\widetilde{\\Psi}$ satisfies \n$$ \\widetilde{R}(u) = \\mbox{Re}{u_3} + \\sum\\limits_{\\substack{ |\\alpha|+|\\beta| =m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta b_{\\alpha, \\beta} {u'}^{\\alpha} {\\bar u}'^{\\beta} + \\mathcal{O}(|u_3||u|+|u'|^{\\eta +1}).$$ \n\\end{proof}\n\n\n\n\nFrom now on, without loss of generality, we may assume that $\\widetilde{R}$ is $R$ by Proposition \\ref{proposition 1}.\n\n\n\n\n\\begin{lem} \\label{parametrization lemma}\nLet $\\gamma = (\\gamma_1,\\gamma_2,\\gamma_3) :\\mathbb{C} \\to V$ be a local parametrization of a one-dimensional smooth complex analytic variety $V$.\nIf $|R(w)| \\lesssim |w|^{\\eta}$ for $w \\in V$, then we can assume\n $\\gamma =(\\gamma_1,\\gamma_2, 0) $ (i.e., $\\gamma_3$ vanishes to order at least $\\eta$).\n\\end{lem} \n\n\n\\begin{proof}We show $\\gamma_3 $ vanishes to order at least $\\eta$.\nSince $\\gamma(0)= 0$, we know $\\gamma_3$ vanishes to some order $l$.\nIf we suppose $l<\\eta$, then $\\gamma_3(t)= a_l {t}^l+\\mathcal{O}(t^{l+1})$, where $a_l \\neq 0$. 
Then\n\\begin{align*}\n R(\\gamma(t))&= \\mbox{\\normalfont Re}{\\gamma_3} + \\sum_ {\\substack{|\\alpha|+|\\beta|=m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta b_{\\alpha,\\beta}\\gamma_1^{\\alpha_1} {{\\bar \\gamma}_1}^{\\beta_1}\\gamma_2^{\\alpha_2} {{\\bar \\gamma}_2}^{\\beta_2}+\\mathcal{O}(|\\gamma_3||\\gamma|+|\\gamma|^{\\eta +1}) \\\\ \n &= \\biggl( {\\frac{a_l}{2}}{t^l}+{\\frac{\\bar a_l}{2}}{\\bar{t}^l} \\biggr) + \\biggl( \\sum_ {\\substack{j + k = m \\\\ j > 0, k > 0}}^\\eta c_{jk} t^j \\bar{t}^k \\biggr) + \\mathcal{O}(|t|^{l+1}).\n\\end{align*}\n\nNote that the first parenthesis consists of order $l$ pure terms and the summation part consists of the mixed terms. The first one is essentially $|t|^l$ with $l<\\eta$, so if we want to improve on the order of contact, then some terms of the summation part must cancel it. However, it is impossible because the summation part has all mixed terms. This contradicts our assumption $|r \\circ \\gamma(t)| \\lesssim |t|^\\eta$. Therefore, $\\gamma_3$ vanishes to order at least $\\eta.$ \n\n\\end{proof}\n\n\nLet $ A(u_1, u_2) = \\sum\\limits_{\\substack {|\\alpha|+|\\beta| = m \\\\ |\\alpha| > 0, |\\beta| > 0}} b_{\\alpha, \\beta} {u}'^{\\alpha} {\\bar u}'^{\\beta}$ be the homogeneous polynomial part of order $m$ in the summation part of (\\ref{coordinate 1}). In the following lemma, we show that there is some nonzero mixed term along some direction in $\\mathbb{C}^2$.\n\n\n\\begin{lem} \\label{z_2 direction} \nConsider $A(hz, z)$ for all $h, z \\in \\mathbb{C}.$ Then there is some $h \\in \\mathbb{C}$ such that\n\\begin{equation*}\n\\frac{\\partial^m A}{{\\partial z^{j}}{\\partial {{\\bar z}^k}}}(0,0) \\neq 0, \\ \\text{ for }\\\nj, k >0.\n\\end{equation*} \n\\end{lem}\n\n\n\n\\begin{proof}\nSuppose that for all $h,$ $A(hz, z) = P(h)z^m + \\overline{P(h)z^m}.$ Since $A(hz, z)$ is a polynomial in $z, {\\bar z}, h$ and ${\\bar h}$ and $\\frac{\\partial^m A}{\\partial z^m} = m! P(h)$, $P(h)$ is a polynomial.\nLet $P(h) = \\sum a_{j,k} h^j {\\bar h}^k.$ Now, we have $A(hz, z)= \\sum a_{j,k} h^j {\\bar h}^k z^m + \\sum {\\bar a}_{j,k} {\\bar h}^j {h}^k {\\bar z}^m.$ Since $u_1 = hz$ and $u_2 = z,$ we have $h = \\frac{u_1}{u_2}$ and $z = u_2$. Therefore, $A(u_1, u_2) = \\sum a_{j,k} (\\frac{u_1}{u_2})^j ({\\frac{{\\bar{u}_1}}{{\\bar{u}_2}}})^k u_2^m + \\sum \\bar{a}_{j,k} ({\\frac{\\bar{u}_1}{{\\bar u}_2}})^j ({\\frac{u_1}{u_2}})^k {\\bar u_2}^m.$ This forces $j$ and $k$ to be $0$ because $A(u_1, u_2)$ is a polynomial. Therefore, we have $A(hz, z)= a_{0,0}z^m + \\bar{a}_{0, 0} \\bar{z}^m.$ This means $A(u_1, u_2)= a_{0,0}{u_2} ^m + \\bar{a}_{0, 0} {\\bar u_2}^m.$\nHowever, this contradicts $b_{\\alpha, \\beta} \\neq 0 $ for some $\\alpha, \\beta$ with $|\\alpha|, |\\beta| > 0 $ and $|\\alpha|+|\\beta|=m $ in (\\ref{coordinate 1}). \n\\end{proof}\n\n\nNow, we prove Theorem \\ref{special coordinate}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{special coordinate}]\nWe may assume ${\\gamma_1}' (0) \\neq 0$, and hence, after reparametrization, we can write $\\gamma(t) = (t, \\gamma_2(t), 0)$. 
Now, define \n\\begin{equation*}\n u = \\Psi_1 (v) = (v_1, v_2 + \\gamma_2(v_1), v_3).\n\\end{equation*}\nSince $\\gamma_2(t) = \\mathcal{O}(|t|)$ is holomorphic, (\\ref{coordinate 1}) means \n\\begin{align*}\n r_1 (v) &= R \\circ \\Psi_1 (v) \n = \\mbox{Re}{v_3}+\\sum_ {\\substack{|\\alpha|+|\\beta|=m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta b_{\\alpha, \\beta} v_1^{\\alpha_1}{\\bar v}_1^{\\beta_1}(v_2+\\gamma_2(v_1))^{\\alpha_2} {\\overline {(v_2+\\gamma_2(v_1)) }}^{\\beta_2} \n + E_1 (v) \\\\\n &=\\mbox{Re}{v_3}+\\sum_ {\\substack{|\\alpha|+|\\beta|=m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta c_{\\alpha, \\beta} v_1^{\\alpha_1}{\\bar v_1}^{\\beta_1}v_2^{\\alpha_2} {\\bar v_2}^{\\beta_2}+ E_1(v), \\ \\text{ where }\\ E_1(v) = \\mathcal{O}(|v_3||v|+|v'|^{\\eta +1}).\n\\end{align*}\nNote that $T_{BG} = m$ means $c_{\\alpha, \\beta} \\neq 0$ for some $\\alpha, \\beta > 0$ with $|\\alpha|+|\\beta| = m.$ Now, we fix $h$ in Lemma \\ref{z_2 direction} and define \n\n\\begin{equation*}\n v = \\Psi_2 (z) = (z_1 + h z_2, z_2, z_3).\n\\end{equation*}\nThen, we have \n \\begin{align} \n r(z) &= r_1 \\circ \\Psi_2(z) = R \\circ \\Psi_1 \\circ \\Psi_2 (z) \\nonumber \\\\\n &= \\mbox{Re}z_3 +\\sum_ {\\substack{|\\alpha|+|\\beta|=m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta c_{\\alpha, \\beta} (z_1+hz_2)^{\\alpha_1}{\\overline {(z_1+hz_2)}}^{\\beta_1}z_2^{\\alpha_2} {\\bar z_2}^{\\beta_2}+ E_1(z), \\label{eqn2} \\\\\n &=\\mbox{Re}z_3 + \\sum_ {\\substack{|\\alpha|+|\\beta|=m \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta a_{\\alpha, \\beta} z_1^{\\alpha_1}{\\bar z_1}^{\\beta_1}z_2^{\\alpha_2} {\\bar z_2}^{\\beta_2}+ E_1(z), \\label{eqn3}\n\\end{align}\nwhere $a_{\\alpha, \\beta}$ is a polynomial of $h$ and ${\\bar h},$ and where $E_1(z) = \\mathcal{O}(|z_3||z|+|z'|^{\\eta +1}).$\nLet $\\Psi = \\Psi_1 \\circ \\Psi_2.$ Then we have $r(z) = R \\circ \\Psi$ and (\\ref{eqn3}) shows (\\ref{special coordinate 1}) of Theorem \\ref{special coordinate}. Furthermore, since $|r(t, 0, 0)| = |R \\circ \\Psi(t, 0, 0)|= |R(\\gamma(t))| \\lesssim |t|^\\eta,$ this proves part (\\ref{changed order of contact}). For (\\ref{nonzero term}), if we consider $r(0, z_2, 0)$ and (\\ref{eqn2}), we have\n\\begin{equation*} \n r(0, z_2, 0) = A(hz_2, z_2) + \\sum\\limits_ {\\substack{|\\alpha|+|\\beta| = m +1 \\\\ |\\alpha| > 0, |\\beta| > 0}}^\\eta c_{\\alpha,\\beta}{(hz_2)}^{\\alpha_1}{\\overline{(hz_2)}}^{\\beta_1}z_2^{\\alpha_2} {\\bar z}_2^{\\beta_2} +\\mathcal{O}(|z_2|^{\\eta +1}).\n\\end{equation*}\nThen Lemma \\ref{z_2 direction} means\n\\begin{equation*}\n \\frac{\\partial^m r }{{\\partial {z_2}^{\\alpha_2}}{\\partial {\\bar z}_2}^{\\beta_2}}(0) = \\frac{\\partial^m A }{{\\partial {z_2}^{\\alpha_2}}{\\partial {\\bar z}_2}^{\\beta_2}}(0,0) \\neq 0\n\\end{equation*} \nfor some $\\alpha_2, \\beta_2 > 0$ with $\\alpha_2 + \\beta_2 = m .$\nSince $\\frac{\\partial^m r }{{\\partial {z_2}^{\\alpha_2}}{\\partial {\\bar z}_2}^{\\beta_2}}(0) = \\alpha_2 ! \\beta_2 ! a_{0, \\alpha_2, 0, \\beta_2}$ in (\\ref{eqn3}), this completes the proof. \n\\end{proof}\n\n\n\n\nCatlin \\cite{C2} constructed a bounded holomorphic function with a large derivative near a finite type point in the boundary of a pseudoconvex domain in $\\mathbb{C}^2$. To construct a similar function in $\\mathbb{C}^3$, we will use the function constructed by Catlin. In order to achieve this goal, as a first step, we need to consider the two-dimensional slice with respect to the $z_2$ and $z_3$ variables when $z_1$ is fixed at some point. 
For this, we consider the representative terms in the summation part of (\\ref{special coordinate 1}) of Theorem \\ref{special coordinate}.\n\nLet \n\\begin{align*}\n \\Gamma &= \\{(\\alpha, \\beta); a_{\\alpha, \\beta} \\neq 0, m \\leq |\\alpha|+|\\beta| \\leq \\eta \\ \\mbox{and} \\ |\\alpha|, |\\beta| > 0 \\} \\\\\n S &= \\{(p, q); \\alpha_1 + \\beta_1 = p, \\alpha_2 + \\beta_2 = q \\ \\mbox{for some} \\ (\\alpha, \\beta) \\in \\Gamma \\} \\cup \\{(\\eta, 0)\\}.\n\\end{align*}\nThen there is an positive integer $N$ such that $(p_\\nu, q_\\nu) \\in S$ for $\\nu = 0,\\cdots, N $ and $\\eta_\\nu, \\lambda_\\nu > 0$ for $\\nu = 1, \\cdots, N$ satisfying\n \n\\begin{enumerate}[(1)]\n\t\\item $(p_0, q_0)=(\\eta, 0), (p_{N}, q_{N})= (0, m) , \\lambda_N = m, \\eta_1 = \\eta,$ \\label{condition1}\n\t\\item $p_0 > p_1 > \\cdots > p_{N}$ and $ q_0 < q_1 < \\cdots < q_{N},$ \\label{condition2}\n\t\\item $\\lambda_1 < \\lambda_2 <\\cdots < \\lambda_N$ and $\\eta_1 > \\eta_2 > \\cdots > \\eta_N,$ \\label{condition3}\n\t\\item $\\frac{p_{\\nu-1}}{\\eta_\\nu} + \\frac{q_{{\\nu-1}}}{\\lambda_\\nu} = 1$ and $\\frac{p_{\\nu}}{\\eta_\\nu} + \\frac{q_{\\nu}}{\\lambda_\\nu}=1$ \\label{condition4} and\n\t\\item $a_{\\alpha, \\beta}=0$ if $\\frac{\\alpha_1 + \\beta_1}{\\eta_\\nu}+ \\frac{\\alpha_2 + \\beta_2}{\\lambda_\\nu} < 1$ for each $\\nu = 1,\\cdots,N.$\\label{condition5}\n\\end{enumerate}\n\nNote that if $1 \\leq l \\leq m,$ then $q_{\\nu - 1} < l \\leq q_\\nu$ for some $\\nu = 1, \\cdots, N.$\nLet $L_\\nu$ be the line segment from $(p_{\\nu - 1}, q_{\\nu - 1})$ to $(p_{\\nu}, q_{\\nu})$ for each $\\nu = 1, \\cdots, N$ and set $L =\\ L_1 \\cup L_2 \\cup \\cdots \\cup L_{N}.$ Define\n\n\\begin{itemize}\n\t\\item $\\Gamma_L = \\{(\\alpha, \\beta) \\in \\Gamma; \\alpha+\\beta \\in L \\}.$\n\t\\item $t_l = \\begin{cases}\n\t \\eta & \\text{if $l = 0$} \\\\\n\t \\eta_\\nu \\biggl(1 - \\frac{l}{\\lambda_\\nu} \\biggr) & \\text{if $q_{\\nu-1} < l \\leq q_\\nu$ for some $\\nu$.} \n\t \\end{cases}$\n\\end{itemize}\nNote that $(p_{\\nu-1}, q_{\\nu-1}), (t_l, l)$ and $(p_\\nu, q_\\nu)$ are collinear points in the first quadrant of the plane and $\\eta_\\nu$ and $\\lambda_\\nu$ are the $x$, $y$-intercepts of the line. \\\\\n\nNow, we want to show that for each element $(p_\\nu, q_\\nu)$ with $\\nu = 1, \\cdots, N$, there is some $(\\alpha, \\beta)$ allowing a mixed term in the $z_2$ variable. To show this, we need to use a variant of the notations and the results from Lemma 4.1 and Proposition 4.4 in \\cite{C}.\nFor $t$ with $0 0, \\beta_2^\\nu > 0$\nand $\\alpha^\\nu + \\beta^\\nu = (p_\\nu, q_\\nu).$\n\n\\end{lem}\n\n\n\\begin{proof}\nConsider ${\\widetilde r}^\\nu$, which is plurisubharmonic. Now, consider $${\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1} = \\lim\\limits_{t \\to 0} t^{-1}({H_t^{\\nu+1}}^* {\\widetilde r^\\nu }).$$ This is also plurisubharmonic. Since $(p_\\nu, q_\\nu)$ is the unique point with $L_\\nu \\cap L_{\\nu + 1}$ (i.e., $\\frac{p_{\\nu}}{\\eta_\\nu} + \\frac{q_{{\\nu}}}{\\lambda_\\nu} = 1$ and $\\frac{p_{\\nu}}{\\eta_{\\nu+1}} + \\frac{q_{\\nu}}{\\lambda_{\\nu+1}}=1$), we have \n \n\\begin{equation}\\label{formoftrucated}\n {\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1} = \\mbox{Re}{z_3} + \\sum_ {\\substack{\\alpha + \\beta = (p_\\nu, q_\\nu) \\\\ (\\alpha, \\beta) \\in \\Gamma_L}} a_{\\alpha_1,\\alpha_2 ,\\beta_1,\\beta_2} z_1^{\\alpha_1} {\\bar z}_1^{\\beta_1}z_2^{\\alpha_2} {\\bar z}_2^{\\beta_2}. 
\n \n\\end{equation}\nIn particular, $(\\alpha, \\beta) \\in \\Gamma_L$ means $|\\alpha|, |\\beta| > 0.$\nSuppose that ${\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}$ has no terms with both $\\alpha_2 > 0$ and $\\beta_2>0$ in (\\ref{formoftrucated}) (i.e., no mixed terms in $z_2$ variable). Thus\n$${\\widetilde{(\\widetilde{r^\\nu})}}^{\\nu+1} = \\mbox{Re}{z_3} + P_{q_\\nu}(z_1){z_2}^{q_\\nu} +\\overline{P_{q_\\nu}(z_1){z_2}^{q_\\nu}}$$ \nwhere $P_{q_\\nu}(z_1) = \\sum\\limits_{\\alpha_1 +\\beta_1 = p_\\nu} c_{\\alpha_i,\\beta_i} {z_1}^{\\alpha_1} {\\bar z_1}^{\\beta_1}$ with $\\beta_1 > 0$. \nBy the plurisubharmonicity of ${\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}$, \n $${\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}_{11} {\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}_{22}- {\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}_{12}{\\widetilde {({\\widetilde r}^\\nu)}}^{\\nu+1}_{21} = - \\lvert {q_\\nu} \\frac{\\partial {P_{q_\\nu}}}{\\partial{\\bar z_1}}(z_1) {z_2}^{q_\\nu -1} \\rvert^2 \\geq 0,$$\nwhere ${{\\widetilde{(\\widetilde{r^\\nu})}}^{\\nu+1}}_{ij} = \\frac{\\partial^{2}{\\widetilde{({\\widetilde r}^\\nu)}}^{\\nu+1}}{{\\partial z_i}{\\partial \\bar{z_j}}} $ for $i, j = 1, 2.$ \nTherefore, we have $\\frac{\\partial {P_{q_\\nu}}}{\\partial{\\bar{z_1}}}(z_1) = 0.$ This means $P_{q_\\nu} (z_1)$ is holomorphic. \nThis contradicts the fact that $P_{q_\\nu}(z_1) = \\sum\\limits_{\\alpha_1 +\\beta_1 = p_\\nu} c_{\\alpha_i,\\beta_i} {z_1}^{\\alpha_1} {{\\bar {z_1}}^{\\beta_1}}$ with $\\beta_1 > 0$. \n\\end{proof} \n\n\n\nNow, we define these special terms with respect to the $z_2$ variable. Let\n\\begin{equation*}\n\\Lambda = \\{(\\alpha, \\beta) \\in \\Gamma_L ; \\alpha+\\beta = (p_\\nu, q_\\nu), \\alpha_2 > 0, \\beta_2 > 0, \\nu = 1, \\cdots, N \\}.\n\\end{equation*} \nThen we represent the expression of $r$ in terms of these terms. \\\\\n\n\n\n\\begin{prop}\\label{final expression of r}\n The defining function $r$ can be expressed as \n\\begin{equation} \\label{repre of r}\n r(z) = \\mbox{\\normalfont Re}{z_3} + \\sum_ {\\Gamma_L - \\Lambda} a_{\\alpha, \\beta} {z'}^{\\alpha} {\\bar z}'^{\\beta} + {\\sum_{\\nu = 1}^N}\\sum_{\\substack{\\alpha_2 + \\beta_2= q_\\nu \\\\ \\alpha_2 > 0, \\beta_2 > 0}} M_{\\alpha_2, \\beta_2}(z_1) {z_2}^{\\alpha_2} {\\bar z}_2^{\\beta_2}+E_2(z),\n\\end{equation} \nwhere $M_{\\alpha_2, \\beta_2}(z_1) = \\sum\\limits_{\\alpha_1+\\beta_1 = p_\\nu} a_{\\alpha, \\beta} z_1^{\\alpha_1} \\bar{z}_1^{\\beta_1}$ and $E_2(z) = \\mathcal{O}(|z_3||z|+\\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} |z_1|^{[t_l]+1}|z_2|^l+|z_2|^{m+1}).$\n\n\\end{prop}\n\n\n\n\\begin{proof}\nBy theorem \\ref{special coordinate}, we have \n\\begin{equation} \\label{rform}\n r(z) = {\\mbox{Re}}{z_3}+ \\sum_{\\Gamma_L} a_{\\alpha, \\beta} {z'}^{\\alpha}{\\bar z}'^{\\beta}+ \\sum_{\\Gamma-\\Gamma_L} a_{\\alpha, \\beta} {z'}^{\\alpha}{\\bar z}'^{\\beta}+\\mathcal{O}(|z_3||z|+|z'|^{\\eta +1}). 
\n\\end{equation}\nSuppose that $(k, l) = (\\alpha_1 + \\beta_1, \\alpha_2 + \\beta_2)$ for some $(\\alpha, \\beta)\\in \\Gamma-\\Gamma_L.$ Then, we consider two cases; $1\\leq l \\leq m$ and $m < l < \\eta.$ If $1\\leq l \\leq m$, there is a unique $\\nu =1, \\cdots, N$ so that $q_{\\nu-1} < l \\leq q_\\nu$ and $t_l = \\eta_\\nu \\biggl(1 - \\frac{l}{\\lambda_\\nu} \\biggr).$ Since $(k, l) = (\\alpha_1 + \\beta_1, \\alpha_2 + \\beta_2)$ for some $(\\alpha, \\beta) \\in \\Gamma -\\Gamma_L,$ $ \\frac{k}{\\eta_\\nu}+\\frac{l}{\\lambda_\\nu} > 1.$ This gives $t_l = \\eta_\\nu \\biggl(1 - \\frac{l}{\\lambda_\\nu}\\biggr) < k.$ Since $k$ is an integer, $[t_l]+1 \\leq k.$ Thus, we have $|z_1|^k|z_2|^l \\leq |z_1|^{[t_l]+1}|z_2|^l$ for each $l = 1, \\cdots, m.$ \nOn the other hand, if $(k, l)=(\\alpha_1 + \\beta_1, \\alpha_2 + \\beta_2)$ for some $(\\alpha, \\beta)\\in \\Gamma-\\Gamma_L$ and $m < l < \\eta,$ then $|z_1|^k|z_2|^l \\leq |z_1|^k|z_2|^{m+1} \\leq |z_2|^{m+1}$ for small $z_1$ and $z_2.$\nSince $|z'|^{\\eta +1} \\approx |z_1|^{\\eta +1} + |z_2|^{\\eta +1},$ it follows that\n$\\sum_{\\Gamma -\\Gamma_L} a_{\\alpha, \\beta} {z'}^{\\alpha}{\\bar{z'}}^{\\beta}+\\mathcal{O}(|z_3||z|+|z'|^{\\eta +1})= \\mathcal{O}(|z_3||z|+\\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} |z_1|^{[t_l]+1}|z_2|^l+|z_2|^{m+1}).$ Therefore, $r(z)$ in (\\ref{rform}) is represented as \n\\begin{equation} \\label{r form 2}\n{\\mbox{Re}}{z_3}+ \\sum_{\\Gamma_L} a_{\\alpha, \\beta} {z'}^{\\alpha}{\\bar z}'^{\\beta} + \\mathcal{O}(|z_3||z|+\\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} |z_1|^{[t_l]+1}|z_2|^l+|z_2|^{m+1}).\n\\end{equation}\nNow, apply $\\Gamma_L = (\\Gamma_L - \\Lambda) \\cup \\Lambda$ for the second part of summation in (\\ref{rform}). \n\\end{proof}\n\n\n\\begin{rem}\\label{sizeofM} \\\n\\begin{enumerate}[\\normalfont i)] \n\t\\item $M_{\\alpha_2, \\beta_2}(z_1)$ is not identically zero for $\\alpha_2 + \\beta_2 = q_\\nu$ and the homogeneous polynomial is of order $p_\\nu$ for each $\\nu = 1, \\cdots, N-1.$\n\t\\item If $\\nu = N,$ then $|M_{\\alpha_2, \\beta_2}(z_1)|$ is a nonzero constant for all $\\alpha_2, \\beta_2 > 0$ with $\\alpha_2+\\beta_2 = m = q_N$ since $ p_N = 0.$\n\t\\item Since $M_{\\alpha_2, \\beta_2}(z_1)$ is a homogeneous polynomial of order $p_\\nu, \\nu =1, \\cdots, N,$ in $z_1$-variable, there are $\\theta_0 \\in [0, 2\\pi]$ and a small constant $c > 0$ such that $|M_{\\alpha_2, \\beta_2}(\\tau e^{i\\theta})| \\neq 0$ for all $|\\theta - \\theta_0| < c$ and $0 < \\tau \\leq 1.$ \n\t In particular, if we take $d = e^{i\\theta_0}$ and $\\tau = \\delta^{\\frac{1}{\\eta}}$ we have $|M_{\\alpha_2, \\beta_2}(d \\delta^{\\frac{1}{\\eta}})| \\approx \\delta^{\\frac{p_\\nu}{\\eta}}$ for all $\\alpha_2+ \\beta_2 = q_\\nu$ with all $\\nu = 1, \\cdots, N.$\n\\end{enumerate}\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{The construction of bounded holomorphic function with large derivative near the boundary}\\label{Sec4}\n\nLet $z_1 = d\\delta^{\\frac{1}{\\eta}}.$ Then, we get a complex two dimensional slice. After the holomorphic coordinate change as Proposition 1.1 in \\cite{C2}, we can define a bounded holomorphic function with a large nontangential derivative as in \\cite{C2} on the slice. In this section, first, we construct a holomorphic coordinate system in $\\mathbb{C}^3$ to exactly fit the holomorphic coordinate system as in proposition 1.1 of \\cite{C2} when $z_1$ is fixed as $d\\delta^{\\frac{1}{\\eta}}$. 
Second, we show that the holomorphic function defined on the slice is also well-defined on a family of slices along the small neighborhood of $z_1 = d\\delta^{\\frac{1}{\\eta}}.$ To show the well-definedness of the holomorphic function up to boundary in $\\mathbb{C}^3,$ we need the estimates of derivatives. Let's denote $ U \\big|_{z_1 = d\\delta^{\\frac{1}{\\eta}}} = U \\cap \\{(d\\delta^{\\frac{1}{\\eta}}, z_2, z_3)\\}$ and let $ \\widetilde{e}_\\delta = (d\\delta^{\\frac{1}{\\eta}}, 0, e_\\delta)$ satisfy $r(\\widetilde{e}_\\delta) = 0.$ Since $\\frac{\\partial r}{\\partial z_3}(0) \\neq 0,$ clearly $\\frac{\\partial r}{\\partial z_3}(\\widetilde{e}_\\delta) \\neq 0.$ We start with the similar argument as Proposition 1.1 in \\cite{C2}.\\\\\n\n\n\\begin{prop} \\label{coordinatechangeinc2}\nFor $ \\widetilde{e}_\\delta \\in U \\big|_{z_1 = d\\delta^{\\frac{1}{\\eta}}},$ there exists a holomorphic coordinate system $ (z_2, z_3)= \\Phi_{\\widetilde{e}_\\delta}(\\zeta '') = (\\zeta_2 , \\Phi_3 (\\zeta'')))$ such that in the new coordinate $\\zeta'' = (\\zeta_2, \\zeta_3)$ defined by\n\\begin{equation}\n\\Phi_{\\widetilde{e}_\\delta}(\\zeta '')= \\biggl(\\zeta_2, \\ e_\\delta + \\biggl(\\frac{\\partial r}{\\partial z_3}(\\widetilde{e_\\delta}) \\biggr)^{-1} \\biggl(\\frac{\\zeta_3}{2}- \\sum_{l=2}^m c_l({\\widetilde e}_\\delta) \\zeta_2^l - \\frac{\\partial r}{\\partial z_2} ({\\widetilde e}_\\delta){\\zeta_2} \\biggr) \\biggr), \\label{catlin version coordinate chanage}\n\\end{equation}\nthe function $\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') = r(d\\delta^{\\frac{1}{\\eta}}, z'') \\circ \\Phi_{\\widetilde{e}_\\delta}(\\zeta'')$ satisfies \n\\begin{equation} \\label{catlin_defining_expression}\n\\rho(d\\delta^{\\frac{1}{\\eta}},\\zeta'')= \\mbox{\\normalfont{Re}}\\zeta_3 + \\sum\\limits_{\\substack{j+k=2\\\\j, k > 0}}^{m} a_{j,k}(\\widetilde{e_\\delta}) \\zeta_2^j {\\bar \\zeta_2}^k + \\mathcal{O}(|\\zeta_3||\\zeta''|+|\\zeta_2|^{m+1} ),\n\\end{equation}\nwhere $z'' = (z_2, z_3)$.\n\\end{prop} \n\n\n\\begin{proof} For $ {\\widetilde e}_\\delta \\in U \\big|_{z_1 = d\\delta^{\\frac{1}{\\eta}}},$ define\n\\begin{equation}\\label{rho2changeofcoordinate}\n \\Phi_{\\widetilde{e}_\\delta}^1(w'') =\\biggl( w_2, \\ e_\\delta + \\biggl(\\frac{\\partial r}{\\partial z_3}({\\widetilde e}_\\delta) \\biggr)^{-1}\\biggl(\\frac{w_3}{2} - \\frac{\\partial r}{\\partial z_2} ({\\widetilde e}_\\delta)w_2 \\biggr)\\biggr). 
\n\\end{equation}\nThen we have \n\\begin{equation}\\label{rho2expression}\n\\rho_2(d\\delta^{\\frac{1}{\\eta}}, w'') = r(d\\delta^{\\frac{1}{\\eta}}, z'') \\circ \\Phi_{\\widetilde{e}_\\delta}^1(w'') = \\mbox{Re} w_3 + \\mathcal{O}(|w''|^2),\n\\end{equation} \nwhere $w'' = (w_2, w_3).$\nNow assume that we have defined $\\Phi_{\\widetilde{e}_\\delta}^{l-1} : \\mathbb{C}^2 \\rightarrow \\mathbb{C}^2 $ so that there exist numbers $a_{j, k}$ for $j, k > 0$ and $j+k < l$ so that $\\rho_l(d\\delta^{\\frac{1}{\\eta}}, w'') = r(d\\delta^{\\frac{1}{\\eta}}, z'')\\circ \\Phi_{\\widetilde{e}_\\delta}^{l-1}(w'')$ satisfies\n$$\\rho_l(d\\delta^{\\frac{1}{\\eta}}, w'') = \\mbox{Re}w_3 + \\sum_{\\substack{j+k=2\\\\j, k > 0}}^{l-1} a_{j,k}({\\widetilde e}_\\delta) w_2^j {\\bar w_2}^k + \\mathcal{O}(|w_3||w''|+|w_2|^l), $$\nwhere $w'' = (w_2, w_3).$\nIf we define $\\Phi_{\\widetilde{e}_\\delta}^l = \\Phi_{\\widetilde{e}_\\delta}^{l-1}\\circ \\phi^l,$ where \n\\begin{equation}\\label{lthchangeofvariable}\n\\phi^l(\\zeta'') = \\biggl(\\zeta_2, \\ \\zeta_3 - \\frac{2}{l!} \\frac{\\partial^l \\rho_l}{\\partial w_2^l}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\zeta_2^l \\biggr).\n\\end{equation} \nthen \n\\begin{equation}\\label{rholexpression}\n\\rho_{l+1}(d\\delta^{\\frac{1}{\\eta}}, \\zeta'')= \\rho_l \\circ \\phi^l (\\zeta'') = r(d\\delta^{\\frac{1}{\\eta}}, z'')\\circ \\Phi_{\\widetilde{e}_\\delta}^l(\\zeta'')\n\\end{equation} satisfies\n$$\\rho_{l+1}(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') = \\mbox{Re}\\zeta_3 + \\sum\\limits_{\\substack{j+k=2\\\\j, k > 0}}^{l} a_{j,k}({\\widetilde e}_\\delta) \\zeta_2^j {\\bar \\zeta_2}^k + \\mathcal{O}(|\\zeta_3||\\zeta''|+|\\zeta_2|^{l+1}),$$ where $\\zeta'' = (\\zeta_2, \\zeta_3).$\nTherefore, if we choose $\\Phi_{\\widetilde{e}_\\delta} = \\Phi_{\\widetilde{e}_\\delta}^m = \\Phi_{\\widetilde{e}_\\delta}^{m-1} \\circ \\phi^m = \\cdots = \\Phi_{\\widetilde{e}_\\delta}^1 \\circ \\phi^2 \\circ \\cdots \\circ \\phi^m,$ then $\\rho = \\rho_{m+1} = \\rho_m \\circ \\phi^m = r \\circ \\Phi_{\\widetilde{e}_\\delta}.$ \nThis shows (\\ref{catlin version coordinate chanage}) and (\\ref{catlin_defining_expression}), where $ c_l({\\widetilde e}_\\delta)$ is defined by\n\\begin{equation}\\label{clexpression}\n c_l({\\widetilde e}_\\delta) = \\frac{1}{l!}\\frac{\\partial^l \\rho_l}{\\partial w_2^l}(d\\delta^{\\frac{1}{\\eta}}, 0, 0). \n\\end{equation}\n\\end{proof}\n\nAs in \\cite{C2}, we set\n\\begin{equation}\\label{def of Al}\nA_l ({\\widetilde e}_\\delta) = \\mbox{max} \\{|a_{j,k}({\\widetilde e}_\\delta)|; j+k = l \\}, \\hspace{0.3in} l=2, \\cdots, m \n\\end{equation}\nand \n\\begin{equation}\\label{taudef}\n\\tau({\\widetilde e}_\\delta, \\delta) = \\min \\biggl\\{\\biggl(\\frac{\\delta}{A_l({\\widetilde e}_\\delta)} \\biggr)^{1\/l}; 2 \\leq l \\leq m \\biggr\\}\n\\end{equation}\nAs we will see later (Remark \\ref{Amnonzero}), we have $A_m ({\\widetilde e}_\\delta) \\neq 0$ since $ |A_m ({\\widetilde e}_\\delta)| \\geq c_m > 0,$ where $\\delta > 0$ is sufficiently small. 
This means $$\\tau({\\widetilde e}_\\delta, \\delta) \\lesssim \\delta^{\\frac{1}{m}}.$$\nDefine\n\\begin{equation}\\label{rbox}\n R_\\delta ({\\widetilde e}_\\delta)= \\{\\zeta'' \\in \\mathbb{C}^2; |\\zeta_2| < \\tau({\\widetilde e}_\\delta, \\delta), |\\zeta_3| < \\delta \\}.\n\\end{equation} \\\\\n\n\nBefore estimating the derivative of $r$, we estimate the size of $e_\\delta$.\nSince $r({\\widetilde e}_\\delta)= 0,$ Taylor's theorem in $z_3$ about $e_\\delta$ gives\n$$r(d\\delta^{\\frac{1}{\\eta}}, 0, z_3)= 2\\mbox{Re}\\biggl( \\frac{\\partial r}{\\partial z_3}(d\\delta^{\\frac{1}{\\eta}}, 0, e_\\delta)(z_3 - e_\\delta) \\biggr)+ \\mathcal{O}(|z_3 - e_\\delta|^2).$$\nIf we take $z_3 = 0,$ then $|r(d\\delta^{\\frac{1}{\\eta}}, 0, 0)|= \\left|2\\mbox{Re}\\biggl( \\frac{\\partial r}{\\partial z_3}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)(- e_\\delta) \\biggr)+ \\mathcal{O}(|e_\\delta|^2)\\right| \\approx |e_\\delta|$ since $|e_\\delta| \\ll 1$ and $|\\frac{\\partial r}{\\partial z_3}| \\approx 1$ near $0.$ Therefore \\ref{changed order of contact}) of Theorem \\ref{special coordinate} means $|e_\\delta| \\lesssim \\delta.$ \\\\\n\n\n\n\n\n\n\n\n\n\n\n\\begin{lem}\\label{rderivative}\nLet $ l = 1, 2, \\cdots, m$ and let $\\alpha_2^\\nu$ and $\\beta_2^\\nu$ be positive numbers as given in Lemma \\ref{existence of mixed term in z_2} for $\\nu = 1, \\cdots, N.$ Then the function $r$ satisfies\n\\begin{enumerate}[\\normalfont (i)]\n\t\\item $\\biggl|\\frac{\\partial^l r}{{\\partial z_2^{\\alpha_2}}{\\partial {\\bar{z}_2}}^{\\beta_2}}({\\widetilde e}_\\delta)\\biggr| \\lesssim \\delta^{\\frac{t_l}{\\eta}},$\n\t \\quad \\text{where $\\alpha_2, \\beta_2 \\geq 0.$} \\label{rderivative1}\n\t\\item $\\biggl| \\frac{\\partial^{q_\\nu} r }{{\\partial {z_2}^{\\alpha_2^\\nu}}{\\partial {\\bar z_2}^{\\beta_2^\\nu}} }({\\widetilde e}_\\delta)\\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}},$ \\quad \\text{where $\\alpha_2^\\nu > 0$ and $\\beta_2^\\nu > 0.$}\\label{rderivative2}\n\\end{enumerate}\n\\end{lem}\n\n\n\\begin{proof}\nBy (\\ref{r form 2}) and $t_l < [t_l]+1$, we have \n\\begin{equation}\\label{rdifferentiation}\n \\biggl|\\frac{\\partial^l r}{{\\partial z_2^{\\alpha_2}}{\\partial {\\bar{z}_2}}^{\\beta_2}}({\\widetilde e}_\\delta)\\biggr| \\lesssim \\delta^{\\frac{t_l}{\\eta}} + |e_\\delta| + \\delta^{\\frac{[t_l]+1}{\\eta}} \\lesssim \\delta^{\\frac{t_l}{\\eta}}.\n\\end{equation} \nFor (\\ref{rderivative2}), note that if $l = q_\\nu,$ then $t_l = p_\\nu$. 
Therefore,\n(\\ref{repre of r}) gives\n\\begin{equation*}\n |M_{\\alpha_2^\\nu, \\beta_2^\\nu}(d\\delta^{\\frac{1}{\\eta}})| - C_1(|e_\\delta| + \\delta^{\\frac{p_\\nu + 1}{\\eta}}) \\leq \\biggl|\\frac{1}{{\\alpha_2^\\nu}!{\\beta_2^\\nu}!}\\frac{\\partial^{q_\\nu} r }{{\\partial {z_2}^{\\alpha_2^\\nu}}{\\partial {\\bar z_2}^{\\beta_2^\\nu}} }(\\widetilde{e_\\delta})\\biggr| \\leq |M_{\\alpha_2^\\nu, \\beta_2^\\nu}(d\\delta^{\\frac{1}{\\eta}})| + C_1(|e_\\delta| + \\delta^{\\frac{{p_\\nu} + 1}{\\eta}})\n\\end{equation*}\nfor some constant $C_1.$\nSince Remark \\ref{sizeofM} means $|M_{\\alpha_2^\\nu, \\beta_2^\\nu}(d\\delta^{\\frac{1}{\\eta}})| \\approx \\delta^{\\frac{p_\\nu}{\\eta}},$ we have $$\\biggl|\\frac{\\partial^{q_\\nu} r }{{\\partial {z_2}^{\\alpha_2^\\nu}}{\\partial {\\bar z_2}^{\\beta_2^\\nu}} }({\\widetilde e}_\\delta)\\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}}.$$ \n\\end{proof}\n\n\n\n\n\n\\begin{lem} \\label{rhoderivative}\nLet $\\rho_l, \\phi^l$ and $\\Phi^l$ be given as in (\\ref{rho2changeofcoordinate})-(\\ref{rholexpression}) for $l=2, \\cdots, m+1$ and $\\alpha_2^\\nu$ and $\\beta_2^\\nu$ be positive numbers as given in Lemma \\ref{existence of mixed term in z_2} for $\\nu = 1, \\cdots, N.$ Then\n\\begin{enumerate}[\\normalfont (i)] \n \\item $\\biggl|\\frac{\\partial^k \\rho_l}{{\\partial \\zeta_2^{\\alpha_2}}{\\partial {{\\bar \\zeta}_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\delta^{\\frac{t_k}{\\eta}} \\quad \\text{for each } \\ k = 1, \\cdots, m.$\n \\item $\\biggl|\\frac{\\partial^{q_\\nu} \\rho_l }{{\\partial {\\zeta_2}^{\\alpha_2^\\nu}}{\\partial {\\bar \\zeta_2}^{\\beta_2^\\nu}} }(d\\delta^{\\frac{1}{\\eta}}, 0, 0) \\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}} \\quad \\text{for each } \\ \\nu = 1, \\cdots, N.$\n\\end{enumerate} \n \nIn particular, $|c_l({\\widetilde e}_\\delta)| \\lesssim \\delta^{\\frac{t_l}{\\eta}},$ where $c_l({\\widetilde e}_\\delta)$ is given in (\\ref{clexpression}).\n\\end{lem} \n\n\n\\begin{proof}\nBy induction, we prove both (i) and (ii). 
For part (i), let $l = 2.$ Since $\\rho_2(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') = r(d\\delta^{\\frac{1}{\\eta}}, z'') \\circ \\Phi_{\\widetilde{e}_\\delta}^1 (\\zeta''),$ by chain rule and Lemma \\ref{rderivative}, we have\n\\begin{equation*}\n\\biggl|\\frac{\\partial^k \\rho_2}{{\\partial \\zeta_2^{\\alpha_2}}{\\partial \\bar{\\zeta_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \n \\lesssim \\biggl|\\frac{\\partial^k r}{{\\partial z_2^{\\alpha_2}}{\\partial {\\bar z_2}^{\\beta_2}}}({\\widetilde e}_\\delta)\\biggr| \n + \\biggl|\\frac{\\partial r}{\\partial z_2}({\\widetilde e}_\\delta) \\biggr| \n \\lesssim \\delta^{\\frac{t_k}{\\eta}}+\\delta^{\\frac{t_1}{\\eta}} \\lesssim \\delta^{\\frac{t_k}{\\eta}}.\n\\end{equation*}\nfor all $ k = 1, \\cdots, m.$\nThis proves for the case $l = 2.$ \nNow, by induction, we assume\n$$\\biggl|\\frac{\\partial^k \\rho_l}{{\\partial \\zeta_2^{\\alpha_2}}{\\partial \\bar{\\zeta_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\delta^{\\frac{t_k}{\\eta}}$$ for all $k = 1, \\cdots, m$ and $l = 2, \\cdots, j.$ \\\nNote that\n\\begin{equation}\\label{rholrealexpression}\n\\rho_{j+1}(d\\delta^{\\frac{1}{\\eta}}, \\zeta_2, \\zeta_3) = \\rho_j(d\\delta^{\\frac{1}{\\eta}},\\ \\zeta_2,\\ \\zeta_3 - 2 c_j({\\widetilde e}_\\delta)\\zeta_2^j).\n\\end{equation}\nIf $k < j,$ the inductive assumption gives\n$$\\biggl|\\frac{\\partial^k \\rho_{j+1}}{{\\partial \\zeta_2^{\\alpha_2}}{\\partial {\\bar \\zeta_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| = \\biggl| \\frac{\\partial^k \\rho_{j}}{{\\partial w_2^{\\alpha_2}}{\\partial {\\bar w_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\delta^{\\frac{t_k}{\\eta}}.$$\nNow, let $k = j.$ If $\\alpha_2 > 0$ and $\\beta_2 > 0,$ we have the same result as the previous one. 
Otherwise, $\\frac{\\partial^j \\rho_{j+1}}{\\partial \\zeta_2^j}(d\\delta^{\\frac{1}{\\eta}}, 0, 0) = \\frac{\\partial^j \\rho_{j}}{\\partial w_2^{j}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0) - 2j!c_j({\\widetilde e}_\\delta)\\frac{\\partial \\rho_j}{\\partial w_3}(d\\delta^{\\frac{1}{\\eta}}, 0, 0) = 0.$ \nIf $k > j,$ the inductive assumption gives\n\\begin{equation*}\n\\biggl|\\frac{\\partial^k \\rho_{j+1}}{{\\partial \\zeta_2^{\\alpha_2}}{\\partial \\bar{\\zeta_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\biggl| \\frac{\\partial^k \\rho_{j}}{{\\partial w_2^{\\alpha_2}}{\\partial \\bar{w_2}^{\\beta_2}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| + |c_j({\\widetilde e}_\\delta)| \\lesssim \\delta^{\\frac{t_k}{\\eta}}+\\delta^{\\frac{t_j}{\\eta}} \\lesssim \\delta^{\\frac{t_k}{\\eta}}.\n\\end{equation*} \n\n\nFor part (ii), let $l = 2$ and apply the chain rule again to $\\rho_2,$ we have \n\\begin{equation*}\n\\biggl|\\frac{\\partial^{q_\\nu} r}{{\\partial z_2^{\\alpha_2^\\nu}}{\\partial \\bar{z_2}^{\\beta_2^\\nu}}}(\\widetilde{e_\\delta})\\biggl| - C\\biggl|\\frac{\\partial r}{\\partial z_2}(\\widetilde{e_\\delta}) \\biggr| \\leq \\biggl|\\frac{\\partial^{q_\\nu} \\rho_2}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\leq \\biggl|\\frac{\\partial^{q_\\nu} r}{{\\partial z_2^{\\alpha_2^\\nu}}{\\partial \\bar{z_2}^{\\beta_2^\\nu}}}(\\widetilde{e_\\delta})\\biggl| + C\\biggl|\\frac{\\partial r}{\\partial z_2}(\\widetilde{e_\\delta}) \\biggr| \n\\end{equation*}\nfor some constant $C.$ Then, Lemma \\ref{rderivative} means \n\\begin{equation}\\label{lowerbound2}\n\\delta^{\\frac{p_\\nu}{\\eta}} - \\delta^{\\frac{t_1}{\\eta}} \\lesssim \\biggl|\\frac{\\partial^{q_\\nu} \\rho_2}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\delta^{\\frac{p_\\nu}{\\eta}} + \\delta^{\\frac{t_1}{\\eta}}.\n\\end{equation}\nSince $1 < q_\\nu$ for each $\\nu = 1, \\cdots, N,$ it gives $p_\\nu = t_{q_\\nu} < t_1.$ Therefore, we have \n\\begin{equation*}\n\\biggl|\\frac{\\partial^{q_\\nu} \\rho_2}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}}.\n\\end{equation*}\nThis proves the statement for the case $l = 2.$ By induction, assume\n$\\biggl|\\frac{\\partial^{q_\\nu} \\rho_l}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}}.$\nFirst, consider the case when $q_\\nu \\leq l.$ Since $\\alpha_2^\\nu > 0$ and $\\beta_2^\\nu > 0,$ by the similar argument as in the proof of (i) and the by inductive assumption, we have $$\\biggl|\\frac{\\partial^{q_\\nu} \\rho_{l+1}}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| = \\biggl| \\frac{\\partial^{q_\\nu} \\rho_{l}}{{\\partial w_2^{\\alpha_2^\\nu}}{\\partial {{\\bar w}_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\approx \\delta^{\\frac{t_{q_\\nu}}{\\eta}} = \\delta^{\\frac{{p_\\nu}}{\\eta}} .$$ \nNow, consider the case when $q_\\nu > l,$ If we take the derivative of $\\rho_{l+1}$ in (\\ref{rholrealexpression}) about $\\zeta_2,$ the derivative related to the third component involves $c_l(\\widetilde{e_\\delta}).$ Therefore, we have\n\\begin{align*}\n\\biggl| 
\\frac{\\partial^{q_\\nu} \\rho_{l}}{{\\partial w_2^{\\alpha_2^\\nu}}{\\partial \\bar{w_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| - C'|c_l(\\widetilde{e_\\delta})| \\leq \\biggl|\\frac{\\partial^{q_\\nu} \\rho_{l+1}}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| & \\leq \\biggl| \\frac{\\partial^{q_\\nu} \\rho_{l}}{{\\partial w_2^{\\alpha_2^\\nu}}{\\partial \\bar{w_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\\\ \n& + C'|c_l(\\widetilde{e_\\delta})|\n\\end{align*}\nfor some constant $C'$. \nTherefore, the inductive assumption and part (i) imply \n\\begin{equation}\\label{lowerboundl}\n\\delta^{\\frac{p_\\nu}{\\eta}} - \\delta^{\\frac{t_l}{\\eta}} \\lesssim \\biggl|\\frac{\\partial^{q_\\nu} \\rho_{l+1}}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\lesssim \\delta^{\\frac{p_\\nu}{\\eta}} + \\delta^{\\frac{t_l}{\\eta}}.\n\\end{equation}\nSince $q_\\nu > l$, we have $p_\\nu = t_{q_\\nu} < t_l.$ Thus, we have \n$\\biggl|\\frac{\\partial^{q_\\nu} \\rho_{l+1}}{{\\partial \\zeta_2^{\\alpha_2^\\nu}}{\\partial \\bar{\\zeta_2}^{\\beta_2^\\nu}}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)\\biggr| \\approx \\delta^{\\frac{p_\\nu}{\\eta}}.$\n\n\\end{proof}\n\nFinally, we show that the derivatives of $\\rho$ can be bounded from below.\n\n\n\n\\begin{rem}\\label{Amnonzero}\nTake $\\nu = N.$ Since $\\biggl|\\frac{\\partial^{q_\\nu} \\rho }{{\\partial {\\zeta_2}^{\\alpha_2^\\nu}}{\\partial {\\bar \\zeta_2}^{\\beta_2^\\nu}} }(d\\delta^{\\frac{1}{\\eta}}, 0, 0) \\biggr| \\approx |A_m ({\\widetilde e}_\\delta)|,$ Lemma \\ref{rhoderivative} implies $|A_m (\\widetilde{e_\\delta})| \\approx 1.$\n\\end{rem} \n\n\nNow, we recall some facts from \\cite{C2} before showing that the holomorphic function defined in the complex two-dimensional slice (i.e., $z_1$ is fixed) is well-defined when we move $z_1$ in a small neighborhood of $z_1 = d\\delta^{\\frac{1}{\\eta}}.$\n\n\\begin{thm}[\\bf Catlin]\\label{existenceofholomorphic}\nSuppose the defining function $\\rho$ of a pseudoconvex domain $\\Omega \\subset \\mathbb{C}^2$ has the following form: $$\\rho(\\zeta) = \\mbox{\\normalfont{Re}}\\zeta_2 + \\sum_{\\substack{j+k=2 \\\\ j,k > 0 }}^m a_{j,k}{\\zeta_1}^j{\\bar{\\zeta_1}}^k + \\mathcal{O}(|\\zeta_2||\\zeta|+|\\zeta_1|^{m+1}).$$\nSet $$A_l = \\max \\{|a_{j, k}| ; j+k =l \\}, \\qquad l = 2, \\cdots, m,$$ and $$J_\\delta(\\zeta) = (\\delta^2 + |\\zeta_2|^2 + \\sum\\limits_{k=2}^m (A_k)^2 |\\zeta_1|^{2k} )^{\\frac{1}{2}}.$$\nDefine\n$$\\Omega_{a, \\delta}^{\\epsilon_0} = \\{\\zeta ; |\\zeta_1| < a , |\\zeta_2| < a \\ \\mbox{and} \\ \\rho(\\zeta) < {\\epsilon_0} J_\\delta(\\zeta) \\}\\quad \\mbox{where $a, \\epsilon_0 > 0.$}$$\nIf we have $|A_m| \\geq c_m > 0$ for some positive constant $c_m,$\nthen there exist small constants $a, \\epsilon_0 > 0$ so that for any sufficiently small $\\delta>0,$ there is an $L^2$ holomorphic function $f \\in A(\\Omega_{a, \\delta}^{\\epsilon_0})$ satisfying $\\biggl|\\frac{\\partial f}{\\partial\\zeta_2}(0, -\\frac{b\\delta}{2}) \\biggr| \\geq \\frac{1}{2\\delta}$ for some small constant $b$. Moreover, the values $a$ and $\\epsilon_0$ depend only on the constant $c_m$ and $C_{m+1} = \\left\\| \\rho \\right\\|_{C^{m+1}(U)},$ where $U$ is a small neighborhood of $0.$\n\\end{thm}\n \nThe result stated in \\cite{C2} applies to a more restricted situation, but a careful examination of the proof actually implies the above result. 
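\n\nAs a simple illustration of the quantities appearing in Theorem \\ref{existenceofholomorphic} (this model case is ours and is not taken from \\cite{C2}), let $m = 2k$ be even and consider the model defining function with no error term, $$\\rho(\\zeta) = \\mbox{\\normalfont{Re}}\\zeta_2 + |\\zeta_1|^{m} = \\mbox{\\normalfont{Re}}\\zeta_2 + {\\zeta_1}^k{\\bar{\\zeta_1}}^k,$$ so that $a_{k,k} = 1$ is the only nonzero coefficient. Then $A_l = 0$ for $l < m$, $A_m = 1$, and $J_\\delta(\\zeta) = (\\delta^2 + |\\zeta_2|^2 + |\\zeta_1|^{2m})^{\\frac{1}{2}}$. The hypothesis $|A_m| \\geq c_m > 0$ holds with $c_m = 1$, so for every sufficiently small $\\delta > 0$ the theorem produces an $L^2$ holomorphic function $f \\in A(\\Omega_{a, \\delta}^{\\epsilon_0})$ with $\\bigl|\\frac{\\partial f}{\\partial\\zeta_2}(0, -\\frac{b\\delta}{2})\\bigr| \\geq \\frac{1}{2\\delta}$.\n\n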
To apply theorem \\ref{existenceofholomorphic} to the complex two dimensional slice, we consider the pushed out domain about ${\\widetilde e}_\\delta.$ Let $\\Phi_{\\widetilde{e}_\\delta}$ be the map associated with ${\\widetilde e}_\\delta$ as in (\\ref{coordinatechangeinc2}). Set ${U'' \\big|}_{z_1 = d\\delta^{\\frac{1}{\\eta}}} = \\{\\zeta''=(\\zeta_2, \\zeta_3) ; \\Phi_{\\widetilde{e}_\\delta} (\\zeta'') \\in U \\big|_{z_1 = d\\delta^{\\frac{1}{\\eta}}}\\}.$ For all small $\\delta,$ define \n\\begin{equation} \\label{def of Jdelta}\n J_\\delta (\\zeta'') = \\biggl(\\delta^2 + |\\zeta_3|^2 + \\sum_{k = 2}^{m} (A_k ({\\widetilde e}_\\delta))^2 |\\zeta_2|^{2k} \\biggr)^{\\frac{1}{2}}\n\\end{equation} \nand the pushed-out domain with respect to the slice \n\\begin{equation} \\label{working domain} \n \\Omega_{a, \\delta}^{\\epsilon_0} = \\{(\\zeta_2, \\zeta_3) ; |\\zeta_2| < a, |\\zeta_3| < a \\ \\mbox{and} \\ \\rho (d\\delta^{\\frac{1}{\\eta}}, \\zeta'') < {\\epsilon_0} J_\\delta(\\zeta'') \\}. \n\\end{equation} \nBy Theorem \\ref{existenceofholomorphic}, we have a $L^2$ holomorphic function $f$ in $ \\Omega_{a, \\delta}^{\\epsilon_0}$ satisfying\n\\begin{equation}\\label{large derivative}\n \\biggl| \\frac{\\partial f}{\\partial \\zeta_3} ( 0, -\\frac{b\\delta}{2})\\biggr| \\geq \\frac{1}{2\\delta}.\n\\end{equation}\nIn order to show the well-definedness of the holomorphic function $f$ when $z_1$ moves in a small neighborhood of $z_1 = d\\delta^{\\frac{1}{\\eta}}$, we use $\\Phi_{\\widetilde{e}_\\delta} $ given as in (\\ref{coordinatechangeinc2}) and define\n\\begin{equation*}\n\\Phi(\\zeta_1, \\zeta_2, \\zeta_3) = (\\zeta_1, \\zeta_2, \\Phi_3 (\\zeta)), \n\\end{equation*}\nwhere $\\Phi_3 (\\zeta)$ is defined by\n\n\\begin{equation}\\label{phitobeused}\n \\Phi_3 (\\zeta) = e_\\delta + \\biggl(\\frac{\\partial r}{\\partial z_3}({\\widetilde e}_\\delta) \\biggr)^{-1}\\biggl(\\frac{\\zeta_3}{2}- \\sum_{l=2}^m c_l({\\widetilde e}_\\delta)\\zeta_2^l - \\frac{\\partial r}{\\partial z_2} ({\\widetilde e}_\\delta){\\zeta_2} \\biggr) \n\\end{equation}\nand define\n\n\\begin{equation}\\label{rhotobeused} \n \\rho(\\zeta_1, \\zeta_2, \\zeta_3) = r(z_1, z_2, z_3)\\circ \\Phi(\\zeta_1, \\zeta_2, \\zeta_3).\n\\end{equation}\nIn particular, when we fix $z_1 = d\\delta^{\\frac{1}{\\eta}},$ we have the holomophic function $f$ defined in the slice $\\Omega_{a, \\delta}^{\\epsilon_0}$ satisfying (\\ref{large derivative}). Now, we consider the domain given by the family of the pushed out domains of the slice along with $\\zeta_1$ axis and the domain in the new coordinate of $\\Omega$ by $\\Phi$. 
\nDefine $$\\Omega_{a, \\delta, \\zeta_1}^{\\epsilon_0} = \\{\\zeta \\in \\mathbb{C}^3; |\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}, |\\zeta_2| < a, |\\zeta_3| < a \\ \\mbox{and} \\ \\rho (d \\delta^{\\frac{1}{\\eta}}, \\zeta'') < {\\epsilon_0} J_\\delta(\\zeta'') \\}$$ and \n$${\\Omega}_{a, \\delta, \\zeta_1} = \\{\\zeta \\in \\mathbb{C}^3; |\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}, |\\zeta_2| < a, |\\zeta_3| < a \\ \\mbox{and} \\ \\rho (\\zeta_1, \\zeta'') < 0 \\} $$ for some small $c > 0$ only depending on $\\epsilon_0.$ \nSince the holomorphic function $f(\\zeta_2, \\zeta_3)$ defined in $\\Omega_{a, \\delta}^{\\epsilon_0}$ is independent of $\\zeta_1,$ $f$ is the well-defined holomophic function in $\\Omega_{a, \\delta, \\zeta_1}^{\\epsilon_0}.$ \nWe want to show $f$ is well-defined holomorphic function in ${\\Omega}_{a, \\delta, \\zeta_1}.$ Therefore, it is enough to show ${\\Omega}_{a, \\delta, \\zeta_1} \\subset \\Omega_{a, \\delta, \\zeta_1}^{\\epsilon_0}$ for the well-definedness of $f$ in ${\\Omega}_{a, \\delta, \\zeta_1}$. More specifically, \n\\begin{align*}\n{\\Omega}_{a, \\delta, \\zeta_1} \\subset \\Omega_{a, \\delta, \\zeta_1}^{\\epsilon_0}\n \n &\\ \\Leftrightarrow \\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') - \\rho (\\zeta_1, \\zeta'') < {\\epsilon_0}J_\\delta(\\zeta''),\n\\end{align*}\nwhere $\\zeta'' = (\\zeta_2, \\zeta_3)$ and $|\\zeta_1-d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}, |\\zeta_2| < a \\ \\mbox{and} \\ |\\zeta_3| < a$. \n\n\n\n\\begin{prop} \\label{well-defined property}\nGiven any small $\\epsilon \\leq {\\epsilon_0},$ there is a small $c > 0$ such that if $|\\zeta_1-d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}, |\\zeta_2| < a \\ \\mbox{and} \\ |\\zeta_3| < a,$ then\n $$|\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') - \\rho (\\zeta_1, \\zeta'')| \\lesssim {\\epsilon}J_\\delta (\\zeta'').$$ \n\\end{prop}\n\n\nBefore proving Proposition \\ref{well-defined property}, we note that from the standard interpolation method, we have the following fact: Let $(p_1, q_1), (p, q)$ and $(p_2, q_2)$ be collinear points in the first quadrant of the plane, and $ p_1 \\leq p \\leq p_2, q_2 \\leq q \\leq q_1.$ Then, we have\n$$|\\zeta_1|^{p}|\\zeta_2|^q \\leq |\\zeta_1|^{p_1}|\\zeta_2|^{q_1} + |\\zeta_1|^{p_2}|\\zeta_2|^{q_2}$$ for sufficiently small $\\zeta_1 , \\zeta_2 \\in \\mathbb{C}$. In particular, this means that if $(\\alpha, \\beta) \\in \\Gamma_L,$ then \n\\begin{equation} \\label{interpolation}\n|\\zeta_1|^{\\alpha_1 + \\beta_1} |\\zeta_2|^{\\alpha_2 + \\beta_2} \\lesssim |\\zeta_1|^{p_{\\nu-1}} |\\zeta_2|^{q_{\\nu-1}} + |\\zeta_1|^{p_{\\nu}} |\\zeta_2|^{q_{\\nu}} \n\\end{equation} for some $\\nu = 1, \\cdots, N.$ \n \n\\begin{proof}[Proof of Proposition \\ref{well-defined property}]\nDefine \n $${J_\\delta}^\\nu(\\zeta'') = \\delta + |\\zeta_3| + \\sum_{\\nu = 1}^N {\\delta^{\\frac{p_\\nu}{\\eta}}}|\\zeta_2|^{q_\\nu}.$$\n\nIn order to show the proposition, it is enough to show ${J_\\delta}^\\nu(\\zeta'') \\lesssim J_\\delta (\\zeta'')$ and $|\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta_2, \\zeta_3) - \\rho (\\zeta_1, \\zeta_2, \\zeta_3)| \\lesssim {\\epsilon}{J_\\delta}^\\nu (\\zeta''),$ where $|\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}, |\\zeta_2| < a \\ \\mbox{and} \\ |\\zeta_3| < a.$ \nBy (\\ref{def of Al}) and $a_{j,k}(\\widetilde{e_\\delta}) = j!k! 
\\frac{\\partial^{j+k} \\rho}{{\\partial {\\zeta_2}^j}{\\partial {\\bar{\\zeta_2}}^k}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0),$ we have $$|\\frac{\\partial^{j+k} \\rho}{{\\partial {\\zeta_2}^j}{\\partial {\\bar{\\zeta_2}}^k}}(d\\delta^{\\frac{1}{\\eta}}, 0, 0)| \\lesssim |A_l(\\widetilde{e_\\delta})| $$ for $ j+k = l$ with $l =2,\\cdots,m.$\nTherefore, Lemma \\ref{rhoderivative} means that \n$$ \\delta^{\\frac{p_\\nu}{\\eta}} \\approx \\biggl|\\frac{\\partial^{q_\\nu} \\rho }{{\\partial {\\zeta_2}^{\\alpha_2^\\nu}}{\\partial {\\bar \\zeta_2}^{\\beta_2^\\nu}} }(d\\delta^{\\frac{1}{\\eta}}, 0, 0) \\biggr| \\lesssim |A_{q_\\nu}(\\widetilde{e_\\delta})|,$$\nwhere $\\alpha_2^\\nu + \\beta_2^\\nu = q_\\nu, \\alpha_2^\\nu \\ \\mbox{and} \\ \\beta_2^\\nu > 0.$ This shows ${J_\\delta}^\\nu(\\zeta'') \\lesssim J_\\delta (\\zeta'').$ \\\\\n\nLet's estimate $|\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') - \\rho (\\zeta_1, \\zeta'')|$. Let $D_1$ denote the differential operator either $\\frac{\\partial}{\\partial \\zeta_1}$ or $\\frac{\\partial}{\\partial {\\overline \\zeta}_1}.$ Then,\n\\begin{equation} \\label{originalestimate}\n|\\rho (\\zeta_1, \\zeta'')- \\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'')| \n \\leq c \\delta^{\\frac{1}{\\eta}} \\max\\limits_{|\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}} |D_1 \\rho (\\zeta_1, \\zeta'')|.\n\\end{equation}\nLet's estimate $ D_1 \\rho (\\zeta_1, \\zeta'').$ By (\\ref{r form 2}), (\\ref{phitobeused}) and (\\ref{rhotobeused}), we know\n\\begin{align*}\n\\rho(\\zeta_1, \\zeta'') &= \\mbox{Re}(\\Phi_3 (\\zeta) ) + \\sum_{\\Gamma_L} a_{\\alpha, \\beta}{\\zeta_1}^{\\alpha_1}{\\bar{\\zeta}_1}^{\\beta_1}{\\zeta_2}^{\\alpha_2}{\\bar{\\zeta}_2}^{\\beta_2} + \\mathcal{O}(|\\Phi_3(\\zeta)||(\\zeta_1, \\zeta_2, \\Phi_3(\\zeta))| \\\\\n &\\ \\qquad +\\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} |\\zeta_1|^{[t_l]+1}|\\zeta_2|^l + |\\zeta_2|^{m+1}).\n\\end{align*}\nSince $|\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}| < c\\delta^{\\frac{1}{\\eta}}$ and $\\Phi_3$ is independent of $\\zeta_1$, we have\n\\begin{equation} \\label{rhozeta1derivative}\n |{D_1 \\rho}(\\zeta_1, \\zeta'')|\n \\lesssim \\sum_{\\Gamma_L}{\\delta}^{\\frac{\\alpha_1 + \\beta_1 -1}{\\eta}} |\\zeta_2|^{\\alpha_2 + \\beta_2} + |\\Phi_3 (\\zeta)| + \\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} \\delta^{\\frac{[t_l]}{\\eta}}|\\zeta_2|^l . 
\n\\end{equation}\nCombining (\\ref{originalestimate}) with (\\ref{rhozeta1derivative}), we obtain\n\\begin{equation}\\label{finalestimatetobeused} \n|\\rho (\\zeta_1, \\zeta'')- \\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'')| \\lesssim c\\biggl( \\sum_{\\Gamma_L}{\\delta}^{\\frac{\\alpha_1 + \\beta_1}{\\eta}}|\\zeta_2|^{\\alpha_2 + \\beta_2} + |\\Phi_3 (\\zeta)| + \\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} \\delta^{\\frac{[t_l]+1}{\\eta}}|\\zeta_2|^l \\biggr). \n\\end{equation}\n\n\n\\noindent With $\\zeta_1 = d\\delta^{\\frac{1}{\\eta}}$, (\\ref{interpolation}) gives $\\sum\\limits_{\\Gamma_L}{\\delta}^{\\frac{\\alpha_1 + \\beta_1}{\\eta}}|\\zeta_2|^{\\alpha_2 + \\beta_2} \\lesssim {J_\\delta}^\\nu (\\zeta'').$ \nAlso, (\\ref{clexpression}) and Lemma \\ref{rhoderivative} give $|\\Phi_3(\\zeta)| \\lesssim |e_\\delta| + |\\zeta_3| + \\sum_{l = 1}^m |c_l(\\widetilde{e_\\delta})||\\zeta_2|^l \\lesssim \\delta + |\\zeta_3| + \\sum_{l = 1}^m \\delta^{\\frac{t_l}{\\eta}}|\\zeta_2|^l.$ Since $(t_l, l ) \\in L_\\nu$ for some $\\nu = 1, \\cdots, N,$ again, (\\ref{interpolation}) gives $|\\Phi_3(\\zeta)| \\lesssim {J_\\delta}^\\nu (\\zeta'').$\nFurthermore, since $\\delta^{\\frac{[t_l]+1}{\\eta}}|\\zeta_2|^l \\lesssim \\delta^{\\frac{t_l}{\\eta}}|\\zeta_2|^l$, the same argument as before gives $\\sum_{\\nu = 1}^N \\sum_{l = q_{\\nu - 1}}^{q_\\nu} \\delta^{\\frac{[t_l]+1}{\\eta}}|\\zeta_2|^l \\lesssim {J_\\delta}^\\nu (\\zeta'').$ \n\\end{proof} \n\n\\vspace{0.5cm}\n\nNow, we know that there is a holomorphic function $f(\\zeta_1, \\zeta_2, \\zeta_3)= f(\\zeta_2, \\zeta_3)$ defined on ${\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0}$ such that\n\\begin{enumerate}[i)]\n\t\\item ${\\Omega}_{a, \\delta, \\zeta_1} \\subset {\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0}$\n\n\t\\item $\\biggl| \\frac{\\partial f}{\\partial \\zeta_3} (0, -\\frac{b\\delta}{2})\\biggr| \\geq \\frac{1}{2\\delta}$ for a small constant $b > 0.$ \n\\end{enumerate}\n\nWithout loss of generality, we can assume ${\\Omega}_{a, \\delta, \\zeta_1} \\subset {\\Omega}_{a, \\delta, \\zeta_1}^{\\frac{\\epsilon_0}{2}} \\subset {\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0}.$ For the boundedness of $f$ in ${\\Omega}_{\\frac{a}{2}, \\delta, \\zeta_1}^{\\frac{\\epsilon_0}{2}},$ we follow the same argument as Chapter 7 (p 462) in \\cite{C2}. Before showing the boundedness, we define a polydisc $P_{a_1} ({\\zeta''_0})$ by\n$$P_{a_1} (\\zeta''_0) = \\{\\zeta'' = (\\zeta_2, \\zeta_3); |\\zeta_2 - {\\zeta_2^0}| < \\tau (\\widetilde{e_\\delta}, a_1 J_\\delta (\\zeta''_0)) \\ \\mbox{and} \\\n |\\zeta_3 - {\\zeta_3^0}| < a_1 J_\\delta (\\zeta''_0)\\}, $$ \nwhere $\\zeta''_0 = (\\zeta_2^0, \\zeta_3^0)$ and $a_1 > 0.$\n\n\\begin{thm}\\label{boundedholomorphicfunction}\n$f$ is a bounded holomorphic function in ${\\Omega}_{\\frac{a}{2}, \\delta, \\zeta_1}^{\\frac{\\epsilon_0}{2}}$ such that \n\\begin{equation}\\label{largederivative}\n\\biggl| \\frac{\\partial f}{\\partial \\zeta_3} \\biggl(0, -\\frac{b\\delta}{2}\\biggr)\\biggr| \\geq \\frac{1}{2\\delta} \\ \\text{for a small constant $b > 0$}.\n\\end{equation}\n\\end{thm}\n\n\\begin{proof}\nSince $f$ is an $L^2$ holomorphic function in ${\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0}$ with (\\ref{largederivative}), it is enough to show $f$ is bounded in ${\\Omega}_{\\frac{a}{2}, \\delta, \\zeta_1}^{\\frac{\\epsilon_0}{2}}$. 
Let $(\\zeta_2^0, \\zeta_3^0) \\in \\{\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') = \\frac{\\epsilon_0}{2}J_\\delta(\\zeta''), |\\zeta_2| < \\frac{3a}{4}, |\\zeta_3|<\\frac{3a}{4}\\} \\subset {\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0}.$ By a property analogous to (iii) of Proposition 4.3 in \\cite{C2}, if $\\zeta''_0 = (\\zeta_2^0, \\zeta_3^0) \\in \\{\\rho(d\\delta^{\\frac{1}{\\eta}}, \\zeta'') = \\frac{\\epsilon_0}{2}J_\\delta(\\zeta''), |\\zeta_2| < \\frac{3a}{4}, |\\zeta_3|<\\frac{3a}{4}\\}$, then\n$$ P_{a_1} (\\zeta''_0) \\subset {\\Omega}_{a, \\delta, \\zeta_1}^{\\epsilon_0},$$\nfor some small constant $a_1 > 0.$ We can apply the same argument as Chapter 7 (p 462) in \\cite{C2} to obtain $|f(\\zeta_2^0, \\zeta_3^0)| \\lesssim 1$. \nFor all other points on the boundary and in the interior of ${\\Omega}_{\\frac{a}{2}, \\delta, \\zeta_1}^{\\frac{\\epsilon_0}{2}}$, we can choose a polydisc with fixed radius which is contained in ${\\Omega}_{{a}, \\delta, \\zeta_1}^{{\\epsilon_0}}$ and apply the same argument as Chapter 7 in \\cite{C2}.\n\\end{proof}\n\n\n\\section{Proof of Theorem 1.1} \\label{Sec5}\n\nIn this section, we prove our main theorem. Before proving the theorem, let us recall the notation for the H\\\"older norm and the H\\\"older space. For $U \\subset \\mathbb{C}^n$, we denote by ${\\lVert u \\rVert}_{L_{\\infty}(U)}$ the essential supremum of $u \\in L_{\\infty}(U)$ in $U$. For a real $0 < \\epsilon < 1$, set\n \n $${\\lVert u \\rVert}_{\\Lambda^{\\epsilon}(U)} = {\\lVert u \\rVert}_{L_{\\infty}(U)} + \\mbox{sup}_{z,w \\in U} \\frac{|u(w)-u(z)|}{|w-z|^\\epsilon}, $$ \n $$ \\Lambda^{\\epsilon} (U) = \\{ u : {\\lVert u \\rVert}_{\\Lambda^{\\epsilon}(U)} < \\infty \\}. $$\nHere, ${\\lVert u \\rVert}_{\\Lambda^{\\epsilon}(U)}$ denotes the H\\\"older norm of order $\\epsilon$. \n\nBy Theorem \\ref{special coordinate}, we can assume $\\Omega = \\{z \\in \\mathbb{C}^3; r(z)< 0\\}$ and restate Theorem \\ref{main_theorem}: \n\n\n\\begin{thm}\nLet $\\Omega = \\{ r(z) < 0\\}$ be a smoothly bounded pseudoconvex domain in $\\mathbb{C}^3,$ where $r$ is given by Theorem \\ref{special coordinate}. Furthermore,\nif there exists a neighborhood $U$ of $0$ so that for all $\\alpha \\in L_{\\infty}^{0,1} ({\\Omega})$ with $\\bar{\\partial}\\alpha = 0$, there is a $u \\in \\Lambda^{\\epsilon} (U \\cap \\overline{\\Omega})$ and $C>0$ such that $\\bar{\\partial}u =\\alpha$ and \n\n\\begin{equation} \\label{holder estimate in z}\n{\\lVert u \\rVert}_{\\Lambda^{\\epsilon}(U \\cap \\overline{\\Omega})} \\leq C{\\lVert \\alpha \\rVert}_{L_{\\infty}(\\Omega)},\n\\end{equation}\nthen $\\epsilon \\leq \\frac{1}{\\eta}$.\n\\end{thm}\n\n\n\n\n\n\\begin{proof}Let us consider $U' = \\{(\\zeta_1, \\zeta_2, \\zeta_3) ; \\Phi(\\zeta_1, \\zeta_2, \\zeta_3) \\in U \\}$ and $\\rho = r \\circ \\Phi$ as in (\\ref{phitobeused}) and (\\ref{rhotobeused}). 
Let's choose $\\beta =\\bar{\\partial}(\\phi(\\frac{|\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}|}{c\\delta^{\\frac{1}{\\eta}}})\\phi(\\frac{|\\zeta_2|}{a\/2})\\phi(\\frac{|\\zeta_3|}{a\/2})f(\\zeta_2, \\zeta_3))$, where \n\n\\begin{displaymath}\n\\phi (t) = \\left \\{\n \\begin{array}{lr}\n 1 & , |t| \\leq \\frac{1}{2}\\\\\n 0 & , |t| \\geq \\frac{3}{4}\n \\end{array}\n \\right.\n\\end{displaymath} \nNote that $f$ is the well-defined bounded holomorphic function in ${\\Omega}_{\\frac{a}{2}, \\delta, \\zeta_1}^{\\frac{\\epsilon}{2}}$ by Theorem \\ref{boundedholomorphicfunction}.\nIf we define $\\alpha = (\\Phi^{-1})^* \\beta,$ then $\\bar{\\partial}(\\Phi^* u) = \\Phi^* \\bar{\\partial} u = \\Phi^* \\alpha = \\beta$. Therefore, if we set $U_1 = \\Phi^* u =u\\circ \\Phi$, (\\ref{holder estimate in z}) means \n\\begin{equation} \\label{holder estimate in zeta}\n{\\lVert U_1 \\rVert}_{\\Lambda^{\\epsilon}(U' \\cap \\overline{\\Omega})} \\leq C{\\lVert \\beta \\rVert}_{L_{\\infty}} \n\\end{equation}\nIn here, we note that the definition of $\\beta$ means\n\\begin{equation}\\label{supnorminofbeta}\n{\\lVert \\beta \\rVert}_{L^\\infty} \\lesssim \\delta^{-\\frac{1}{\\eta}}\n\\end{equation}\nNow, let $h(\\zeta_1, \\zeta_2, \\zeta_3) = U_1(\\zeta_1, \\zeta_2, \\zeta_3) - \\phi(\\frac{|\\zeta_1 - d\\delta^{\\frac{1}{\\eta}}|}{c\\delta^{\\frac{1}{\\eta}}})\\phi(\\frac{|\\zeta_2|}{a\/2})\\phi(\\frac{|\\zeta_3|}{a\/2})f(\\zeta_2, \\zeta_3).$ Then $\\bar{\\partial} U_1 = \\beta$ means $h$ is holomorphic.\nSet $ q_1^\\delta(\\theta)= (d\\delta^{\\frac{1}{\\eta}}+\\frac{4}{5}c\\delta^{\\frac{1}{\\eta}} e^{i\\theta}, 0, -\\frac{b\\delta}{2}) \\ \\mbox{and} \\ q_2^\\delta(\\theta) = ( d\\delta^{\\frac{1}{\\eta}}+\\frac{4}{5}c\\delta^{\\frac{1}{\\eta}}e^{i\\theta}, 0, -b\\delta)$, where $\\theta \\in \\mathbb{R}$.\nFrom now on, we estimate the lower bound and upper bound of the integral \n\n\\begin{equation*}\n H_{\\delta} = \\biggl| \\frac{1}{2\\pi} \\int_0^{2\\pi} [h(q_1^\\delta(\\theta))-h(q_2^\\delta(\\theta))] d\\theta \\biggr|. \n\\end{equation*}\nFrom the definition of $\\phi,$ (\\ref{holder estimate in zeta}), and (\\ref{supnorminofbeta}) we have \n\\begin{equation} \\label{upperbound} \n H_{\\delta} = \\biggl|\\frac{1}{2\\pi} \\int_0^{2\\pi} [U_1(q_1^\\delta (\\theta))-U_1(q_2^\\delta (\\theta))] d\\theta \\biggr| \\lesssim \\delta^{\\epsilon} {\\lVert \\beta \\rVert}_{L^\\infty} \\lesssim \\delta^{\\epsilon-\\frac{1}{\\eta}} \n\\end{equation} \n\n\n\n\nOn the other hand, for the lower bound estimate, we start with an estimate of the holomorphic function $f$ with a large nontangential derivative we constructed in theorem \\ref{boundedholomorphicfunction}. The Taylor's theorem of $f$ in $\\zeta_3$ and Cauchy's estimate means \n\n $$f(0, \\zeta_3) = f(0, -\\frac{b\\delta}{2}) + \\frac{{\\partial{f}}}{{\\partial{\\zeta_3}}}(0, -\\frac{b\\delta}{2})(\\zeta_3 + \\frac{b\\delta}{2})\n + \\mathcal{O}(|\\zeta_3 + \\frac{b\\delta}{2}|^2). 
$$\nNow, if we take $\\zeta_3 = -b\\delta$, we have\n $$ f(0, -b\\delta) - f(0, -\\frac{b\\delta}{2})= \\frac{{\\partial{f}}}{{\\partial{\\zeta_3}}}(0, -\\frac{b\\delta}{2})(-\\frac{b\\delta}{2})\n + \\mathcal{O}(\\delta^2).$$ \nSince $|\\frac{\\partial{f}}{\\partial{z_3}} (0, -\\frac{b\\delta}{2} )| \\geq \\frac{1}{2\\delta},$ we know\n\\begin{equation} \\label{contradictionequation}\n |f(0, -b\\delta) - f(0, -\\frac{b\\delta}{2})| = \\biggl|\\frac{{\\partial{f}}}{{\\partial{\\zeta_3}}}(0, -\\frac{b\\delta}{2})(-\\frac{b\\delta}{2})\n + \\mathcal{O}(\\delta^2)\\biggr| \\gtrsim 1 \n\\end{equation} \nfor all sufficiently small $\\delta > 0$.\nReturning to the lower bound estimate of $H_{\\delta},$ the Mean Value Property, (\\ref{holder estimate in zeta}), (\\ref{supnorminofbeta}), and (\\ref{contradictionequation}) give\n\\begin{align}\n H_{\\delta} &= \\biggl| \\frac{1}{2\\pi} \\int_0^{2\\pi} [h(q_1^\\delta (\\theta))) \n -h(q_2^\\delta (\\theta)) ]d\\theta \\biggr| = \\left|h(d\\delta^{\\frac{1}{\\eta}}, 0, -\\frac{b\\delta}{2} )- h(d\\delta^{\\frac{1}{\\eta}}, 0, -b\\delta) )\\right| \\nonumber \\\\\n &= \\left|U_1(d\\delta^{\\frac{1}{\\eta}}, 0, -\\frac{b\\delta}{2}) - f(0, -\\frac{b\\delta}{2})- U_1(d\\delta^{\\frac{1}{\\eta}}, 0, -b\\delta) + f(0, -b\\delta)\\right| \\nonumber \\\\\n &\\geq \\left|f(0, -b\\delta)-f(0, -\\frac{b\\delta}{2})| -|U_1(d\\delta^{\\frac{1}{\\eta}}, 0, -\\frac{b\\delta}{2})-U_1(d\\delta^{\\frac{1}{\\eta}}, 0, -b\\delta)\\right| \\nonumber \\\\\n &\\gtrsim 1 - \\delta^{\\epsilon-\\frac{1}{\\eta}} \\label{lowerbound} \n \\end{align}\nIf we combine (\\ref{upperbound}) with (\\ref{lowerbound}), we have \n \n\\begin{equation} \\label{last_estimate}\n 1 \\lesssim \\delta^{\\epsilon-\\frac{1}{\\eta}}. \n\\end{equation}\nIf we assume $\\epsilon > \\frac{1}{\\eta}$ and $\\delta \\rightarrow 0$, (\\ref{last_estimate}) will be a contradiction. Therefore, $\\epsilon \\leq \\frac{1}{\\eta}.$ \n\\\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nModern advancement of machine learning is heavily driven by the advancement of computational power and techniques. Nowadays, it is not unusual to train a single model using hundreds of computational devices such as GPUs. As a result, scaling up training algorithms in the distributed setting has attracted intensive interests over the years. One important direction is communication efficient distributed training, which enhances the scalability of the training system by reducing the communication cost. Example techniques include quantization~\\citep{pmlr-v70-zhang17e,Wangni2018-ux}, decentralization~\\citep{Lian2017-ni, Koloskova*2020Decentralized, NIPS2018_8028}, and asynchronous communication~\\citep{DBLP:journals\/corr\/ZhengMWCYML16, NIPS2015_6031}.\n\nOne widely used strategy for alleviating the communication overhead is gradient compression, Before communication, the original gradient $\\bm{g}$ will be compressed into $\\mathcal{C}_{\\omega}[\\bm{g}]$, where $\\mathcal{C}_{\\omega}[\\cdot]$ {\\footnote{$\\mathcal{C}_{\\omega}[\\cdot]$ could also include randomness.}} is the compress operator. As a result the communication volume could be greatly reduced. However, this gradient compression could slow down the convergence speed because important information might get lost during the compression. 
To recover this lost information, the error-compensated compression strategy was proposed: instead of compressing the gradient at the $t$-th iteration directly, we first add back the compression error from the last step and then do the compression. Recent studies \\citep{martinmemory}\nobserved that by using error-compensated compression, the asymptotic convergence speed remains unchanged for \\textbf{SGD} even when using 1-bit compression.\n\n\nOn the other hand, many state-of-the-art models have to be trained using a more complicated variant, \\textbf{Adam} \\citep{adam}. For example, to train models such as BERT, one has to resort to the \\textbf{Adam} optimizer, since training it with vanilla\/momentum \\textbf{SGD} has been shown to be less effective. Unfortunately, we find that error-compensated compression does not work for \\textbf{Adam}, because \\textbf{Adam} is non-linearly dependent on the gradient, which affects the error compensation mechanism (see Section~\\ref{sec:moti-convergence} and~\\ref{intuition:why_adam_fails} for more details).\n\nIn this paper, we first analyze the limitation of directly applying existing compression techniques to \\textbf{Adam}. One of our key findings is that Adam's variance (the non-linear term) becomes stable at an early stage of training (Section~\\ref{sec:moti-variance}). This motivates us to design a new 2-stage algorithm, {{{\\textbf{1-bit Adam}}}}, which uses \\textbf{Adam} (warmup stage) to ``pre-condition'' a communication-compressed momentum \\textbf{SGD} algorithm (compression stage). We provide theoretical analysis on communication-compressed momentum \\textbf{SGD}, which is the core component of {{{\\textbf{1-bit Adam}}}}. We design a custom collective primitive using MPI to translate the $5\\times$ communication volume reduction (achieved by our algorithm) into actual runtime speedup, which is hard to accomplish using existing DL framework libraries. Experiments with BERT-Base, BERT-Large, SQuAD 1.1 and ResNet-18 training tasks on up to 256 GPUs show that {{{{\\textbf{1-bit Adam}}}}} converges as fast as uncompressed \\textbf{Adam}, and runs up to $3.3\\times$ faster than uncompressed algorithms.\n\n{\\bf (Contributions)}\nWe make the following contributions:\n\\vspace{-0.3cm}\n\\begin{itemize}\n\\item We propose a new algorithm, {{{\\textbf{1-bit Adam}}}}, a communication-efficient momentum \\textbf{SGD} algorithm pre-conditioned with the \\textbf{Adam} optimizer, which to the best of our knowledge is the first work that applies a pre-conditioning strategy to compressed momentum \\textbf{SGD}. We present theoretical analysis on the convergence of {{{\\textbf{1-bit Adam}}}}, and show that it admits the same asymptotic convergence rate as the uncompressed one.\n\n\\item We conduct experiments on large-scale ML tasks that are currently challenging for \\textbf{SGD} to train. We show that on BERT pre-training, SQuAD fine-tuning and ResNet-18, {{{\\textbf{1-bit Adam}}}} is able to achieve the same convergence behaviour and final accuracy as \\textbf{Adam}, together with up to $5\\times$ less communication volume and $3.3\\times$ faster end-to-end throughput (including the full-precision warmup stage). To the best of our knowledge, this is the first distributed learning algorithm with communication compression that can train a model as demanding as BERT.\n\n\\item We implement a custom collective communication primitive using Message Passing Interface (MPI) to provide a scalable and efficient communication system for 1-bit Adam. 
The communication primitive as well as the 1-bit Adam optimizer have been open-sourced in a deep learning optimization library called DeepSpeed\\footnote{https:\/\/github.com\/microsoft\/DeepSpeed, https:\/\/www.deepspeed.ai\/}.\n\\end{itemize}\n\n\\section{Related Work}\n\\paragraph{Communication-efficient distributed learning:}\nTo further reduce the communication overhead, one promising direction is to compress the variables that are sent between different workers ~\\citep{NIPS2019_8694,NIPS2019_9473}. Previous work has applied a\nrange of techniques such as quantization,\nsparsification, and sketching\n~\\citep{Alistarh2017-yh,Agarwal2018-hg,Spring2019-ep,Ye2018-mf}.\nThe compression is mostly assumed to be unbiased ~\\citep{Wangni2018-ux,pmlr-v80-shen18a,pmlr-v70-zhang17e,NIPS2017_6749,NIPS2018_7519}.\nA general theoretical analysis of centralized compressed parallel \\textbf{SGD} can be found in ~\\citet{Alistarh2017-yh}. Beyond this, some biased compression methods have also been proposed and proven to be quite effective in reducing the communication cost. One example is \\textbf{1-bit SGD} ~\\citep{1-bitexp}, which compresses the entries of the gradient vector into $\\pm 1$ depending on their signs. \n\n\\paragraph{Error-compensated compression:}\nThe idea of using error compensation for compression was proposed in ~\\citet{1-bitexp}, where the authors find that by using error compensation the training can still achieve a very good speed even with $1$-bit compression. A recent study indicates that this strategy admits the same asymptotic convergence rate as the uncompressed one~\\citep{martinmemory}, which means that the influence of compression is negligible. More importantly, by using error compensation, it has been proved that we can use almost any compression method~\\citep{martinmemory}, whereas naive compression only converges when the compression is unbiased (the expectation of the compressed tensor is the same as the original). This method can be combined with decentralized training \\citep{ec_decentralize}, local SGD \\citep{ec_local}, and accelerated algorithms \\citep{ec_linearly}. Due to the promising efficiency of this method, error compensation has been applied in many related areas ~\\citep{NIPS2019_9321,9051706,NIPS2019_8694,8884924,NIPS2019_9473,NIPS2019_8598,NIPS2019_9610,NIPS2019_9571} in order to reduce the communication cost. \n\n\\paragraph{\\textbf{Adam}:} \\textbf{Adam}~\\citep{Kingma2015AdamAM} has shown\npromising speed for many deep learning tasks, and also admits very good robustness to the choice of hyper-parameters, such as the learning rate. \nIt can be viewed as an adaptive method that scales the learning rate with the magnitude of the gradients on each coordinate when running \\textbf{SGD}. Beyond \\textbf{Adam}, many other strategies that share the same idea of changing the learning rate dynamically have been studied. For example, \\textbf{Adagrad}~\\citep{JMLR:v12:duchi11a} and \\textbf{RMSprop}~\\citep{rmsprop} use the gradient, instead of the momentum, for updating the parameters; \\textbf{Adadelta}~\\citep{DBLP:journals\/corr\/abs-1212-5701} changes the variance term of \\textbf{Adam} into a non-decreasing updating rule; \\citet{luo2018adaptive} proposed \\textbf{AdaBound}, which gives both an upper and a lower bound for the variance term. In \\citet{adam_theoretical,adam_liu2020adam}, the authors develop a novel analysis for the convergence rate of \\textbf{Adam}. 
\n\n\\section{Motivation and Insights}\n\\subsection{Communication overhead affects the efficiency of distributed training}\n\\label{sec:moti-profile}\nTo demonstrate the opportunity for communication compression, we conduct performance profiling experiments that measures the impact of communication time with respect to the total training time per step. Here we use BERT-Large pre-training task as an example (sequence length 128, detailed training parameters can be found at Section~\\ref{sec:bert-eval}), since BERT and transformer models in general are the state-of-the-art approaches in natural language processing and many other areas. We evaluate two different kinds of clusters: the first cluster has 4 NVIDIA Tesla V100 GPUs per node, and different nodes are connected by 40 Gigabit Ethernet (effective bandwidth is 4.1 Gbps based on iperf benchmark); the second cluster has 8 V100 GPUs per node, and different nodes are connected by 100 Gigabit InfiniBand EDR (effective bandwidth is close to theoretical peak based on microbenchmark). We perform BERT-Large pre-training using the two clusters with different number of nodes and GPUs, batch sizes, and gradient accumulation steps. And we measure the average latency of forward, backward (allreduce and everything else), and step function calls. Table~\\ref{table_comm_overhead} presents the profiling results.\n\nResults show that allreduce communication contributes to a great portion of the training time per step, up to 94\\% and 75\\% for our experiments on two different kinds of inter-node networks. As expected, communication overhead is proportionally larger when the number of nodes is larger, when the batch size\/gradient accumulation step is smaller, and when the network bandwidth is lower. These are the situations where communication compression could provide the most benefit.\n\n\\begin{table*}\n \\footnotesize\n \\caption{BERT-Large pre-training sequence 128 profiling results.}\\label{table_comm_overhead}\n \\centering\n \\begin{tabular}{rrrrrrrrrrr}\n \\hline\n Cluster& Num.& Num.& Batch& Batch& Grad& Forward& Backward& Backward& Step& allreduce\\% \\\\\n Network& node& GPU& size per& size& accum.& (ms)& allreduce& everything& (ms)& \\\\\n Type& & & GPU& & step& & (ms)& else (ms)& & \\\\\n \\hline\n Ethernet& 16& 64& 1& 64& 1& 36.65& 2205.86& 33.63& 74.96& \\textbf{94\\%} \\\\\n Ethernet& 16& 64& 16& 1024& 1& 35.71& 2275.43& 60.81& 75.59& 93\\% \\\\\n Ethernet& 16& 64& 16& 4096& 4& 137.80& 2259.36& 243.72& 74.92& 83\\% \\\\\n Ethernet& 8& 32& 16& 512& 1& 37.91& 2173.35& 60.71& 75.63& 93\\% \\\\\n Ethernet& 4& 16& 16& 256& 1& 36.94& 2133.24& 62.82& 76.85& 92\\% \\\\\n Ethernet& 2& 8& 16& 128& 1& 34.95& 1897.21& 61.23& 75.26& 92\\% \\\\\n Ethernet& 1& 4& 16& 64& 1& 35.99& 239.76& 59.95& 74.21& 58\\% \\\\\n \\hline\n InfiniBand& 8& 64& 1& 64& 1& 25.36& 316.18& 23.25& 58.49& \\textbf{75\\%} \\\\\n InfiniBand& 8& 64& 16& 1024& 1& 32.81& 336.40& 59.99& 57.79& 69\\% \\\\\n InfiniBand& 8& 64& 16& 4096& 4& 131.04& 339.52& 237.92& 56.91& 44\\% \\\\\n InfiniBand& 4& 32& 16& 512& 1& 33.45& 297.28& 56.81& 57.98& 67\\% \\\\\n InfiniBand& 2& 16& 16& 256& 1& 32.86& 183.74& 56.49& 58.60& 55\\% \\\\\n InfiniBand& 1& 8& 16& 128& 1& 32.74& 28.18& 59.73& 57.29& 16\\% \\\\\n \\hline\n \\end{tabular}\\vspace{-0.1cm}\n\\end{table*}\n\n\\subsection{Basic compression affects \\textbf{Adam}'s convergence}\n\\label{sec:moti-convergence}\nGiven the great opportunity for communication compression, we investigate whether existing Error-Compensated gradient compression 
strategy can be applied to \\textbf{Adam}, an important optimization algorithm for large model distributed training. We implement a basic compression strategy for \\textbf{Adam} based on the compression-based \\textbf{SGD} approach~\\citep{martinmemory}, where we perform error-compensated 1-bit compression over the gradient, and update both the momentum and variance based on the compressed gradient. We compare the BERT-Large pre-training (sequence 128) training loss when using vanilla \\textbf{Adam} and \\textbf{Adam} with our basic compression strategy in Figure~\\ref{fig:moti_loss}.\n\nResults show that basic compression based on existing work greatly affects the convergence speed for Adam. The main reason is that \\textbf{Adam} is non-linearly dependent to the gradients (see Section~\\ref{intuition:why_adam_fails} for more details). This motivates us to look for new compression strategy that overcomes the non-linear gradient dependency challenge, and at the same time achieves the same convergence speed as \\textbf{Adam}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{moti_loss.pdf}\n\\caption{Training loss for BERT-Large pre-training using vanilla Adam and Adam with error compensated gradient compression.}\\label{fig:moti_loss}\\vspace{-0.1cm}\n\\end{figure}\n\n\\subsection{\\textbf{Adam}'s variance becomes stable during training}\n\\label{sec:moti-variance}\nUnlike \\textbf{SGD}, which directly uses the gradient $\\bm{g}$ to update the model $\\bm{x}$, \\textbf{Adam} uses two auxiliary variables $\\bm{m}$ and $\\bm{v}$ for the update. The mathematical updating rule of original \\textbf{Adam} can be summarized as:\n\\begin{align*}\n\\bm{m}_{t+1} =& \\beta_1\\bm{m}_t + (1-\\beta_1)\\bm{g}_t\\\\\n\\bm{v}_{t+1} =& \\beta_2\\bm{v}_t + (1-\\beta_2)(\\bm{g}_t)^2,\\numberthis\\label{alg:v}\\\\\n\\bm{\\bm{x}}_{t+1} =& \\bm{x}_t - \\gamma\\frac{\\bm{m}_{t+1}}{\\sqrt{\\bm{v}_{t+1} + \\eta}}\n\\end{align*}\nHere $\\bm{x}_t$ is the model at $t$-iteration, $\\bm{g}_t = \\nabla F(\\bm{x}_t;\\bm{\\zeta}_t)$ is the stochastic gradient, $\\gamma$ is the learning rate, $\\eta$ usually is a very small constant, $\\beta_1$ and $\\beta_2$ are decaying factor that controls the speed of forgetting history information. Notice here we disable the bias correction term in the original \\textbf{Adam}, which is consistent with exact optimizer for training BERT \\citep{bert}.\n\nHere we refer $\\bm{m}_t$ as the momentum term and $\\bm{v}_t$ as the variance term. Notice that when $\\bm{v}_t$ is changed into a constant $\\bm{v}$, then \\textbf{Adam} becomes equivalent to \\textbf{Momentum SGD} under a coordinate-dependent learning rate $\\frac{\\gamma}{\\sqrt{\\bm{v}} + \\eta}$.\n\n\nTo investigate the non-linear gradient dependency challenge, we analyze \\textbf{Adam}'s variance during BERT-Large pre-training (sequence 128). At each step, we fuse the variance of all parameters, and calculate the norm of the fused variance. Figure~\\ref{fig:moti_var_norm_log} presents this fused variance norm at each step. Results show that the variance norm becomes stable after around $23K$ steps. This motivates our approach {{{\\textbf{1-bit Adam}}}} to ``freeze'' the Adam variance after it becomes stable, and then use it as a precondition during 1-bit compression stage.\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{moti_var_norm_log.pdf}\n\\caption{Norm of fused variance for BERT-Large pre-training using vanilla Adam. 
The y-axis is in log scale.}\\label{fig:moti_var_norm_log}\\vspace{-0.1cm}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{{{{\\textbf{1-bit Adam}}}} Algorithm}\nIn this section, we start with some background introduction for error compensated compression and why it is incompatible with \\textbf{Adam}. Then we give full description of {{{\\textbf{1-bit Adam}}}}.\n\n\\paragraph*{Problem setting} In this paper, we focus on the following optimization task and rely on the following notions and definitions:\n\\vspace{-0.3cm}\n\\begin{equation}\n\\min_{\\bm{x}\\in\\mathcal{R}^d}\\quad f(\\bm{x}) = \\frac{1}{n} \\sum_{i=1}^n \\underbrace{\\mathbb{E}_{\\bm{\\zeta}^{(i)}\\sim\\mathcal{D}_i}F(\\bm{x}; \\bm{\\bm{\\zeta}}^{(i)})}_{:=f_i(\\bm{x})},\\label{eq:main}\n\\end{equation}\nwhere $d$ is the dimension of the input model $\\bm{x}$, $\\mathcal{D}_i$ is the data distribution of individual data sample $\\bm{\\zeta}^{(i)}$ on the $i$-th worker, $F(\\bm{x};\\bm{\\zeta})$ is the loss function.\n\n\\paragraph{Notations and definitions}\nThroughout this paper, we use the following notations:\n\\begin{itemize}\n\\item $\\nabla f(\\cdot)$ denotes the gradient of a function $f$.\n\\item $f^{*}$ denotes the optimal value of the minimization problem \\eqref{eq:main}.\n\\item $f_i(\\bm{x}) := \\mathbb{E}_{\\bm{\\zeta}^{(i)}\\sim\\mathcal{D}_i}F(\\bm{x}; \\bm{\\zeta}^{(i)})$.\n\\item $\\|\\cdot\\|$ denotes the $\\ell_2$ norm for vectors and the spectral norm for matrices.\n\\item $\\|X\\|_A:=\\text{Tr}(X^{\\top}AX)$.\n\\item $\\bm{C}_{\\omega}(\\cdot)$ denotes the randomized compressing operator, where $\\omega$ denotes the random variable. One example is the randomized quantization operator, for example, $\\bm{C}_{\\omega}(0.7) = 1$ with probability $0.7$ and $\\bm{C}_{\\omega}(0.7) = 0$ with probability $0.3$. \n\\item { $\\sqrt{\\cdot}$ denotes the square root of the argument. In this paper if the argument is a vector, then it returns a vector taking the element-wise square root.}\n\\item $(\\bm{x})^2$ denotes the element-wise square operation if $\\bm{x}$ is a vector.\n\\item $\\frac{\\bm{a}}{\\bm{b}}$ or $\\bm{a}\/\\bm{b}$ denotes the element-wise division operation if both $\\bm{a}$ and $\\bm{b}$ are vectors and their dimension matches.\n\\end{itemize}\n\n\n\\subsection{Why error compensation works for \\textbf{SGD}}\nFor \\textbf{SGD} , since the update is linearly dependent to the gradient, using error compensation could potentially remove the side-effect of the history compression error. The updating rule of \\textbf{vanilla SGD} follows\n\\begin{align*}\n\\bm{x}_{t+1} =& \\bm{x}_t - \\gamma \\bm{g}_t = \\bm{x}_0 - \\gamma\\sum_{s=0}^t\\bm{g}_s.\\numberthis\\label{intuition:sgd_eq1}\n\\end{align*}\nWhen directly compressing the gradient without error compensation, the updating rule becomes\n\\begin{align*}\n\\bm{x}_{t+1} =& \\bm{x}_t - \\gamma C_\\omega[\\bm{g}_t] = \\bm{x}_t - \\gamma (\\bm{g}_t-\\bm{\\delta}_t)\\\\\n= &\\bm{x}_0 - \\gamma\\sum_{s=0}^t\\bm{g}_s + \\underbrace{\\gamma\\sum_{s=0}^t \\bm{\\delta}_s}_{\\text{history compression error}}.\\numberthis\\label{intuition:sgd_eq2}\n\\end{align*}\nAs we can see in \\eqref{intuition:sgd_eq2}, the history compression error would get accumulated and therefore slow down the convergence rate. Moreover, previous work \\citep{Alistarh2017-yh} indicates that when using biased compression operator, the training convergence cannot be guaranteed. 
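\n\nTo make the role of the accumulated error term above concrete, the following toy sketch (ours, for illustration only and written in plain NumPy; it is not the implementation evaluated in this paper) runs the uncompressed update, the naive compressed update, and the error-compensated update that is formalized in the next display, using a simple scaled sign compressor:\n\\begin{verbatim}\nimport numpy as np\n\ndef scaled_sign(x):\n    # toy 1-bit compressor: keep the sign, rescale so the\n    # output has the same mean magnitude as the input\n    return np.sign(x) * np.mean(np.abs(x))\n\nrng = np.random.default_rng(0)\nd, steps, lr = 4, 200, 0.1\nx_plain = np.zeros(d)   # uncompressed SGD\nx_naive = np.zeros(d)   # naive compression: errors pile up\nx_ec = np.zeros(d)      # error-compensated compression\ndelta = np.zeros(d)     # compression error memory\n\nfor _ in range(steps):\n    g = rng.normal(loc=1.0, size=d)  # toy stochastic gradient\n    x_plain -= lr * g\n    x_naive -= lr * scaled_sign(g)\n    c = scaled_sign(g + delta)  # add back last error, compress\n    delta = g + delta - c       # store new compression error\n    x_ec -= lr * c\n\n# x_ec stays within lr*||delta|| of the uncompressed model,\n# while x_naive drifts by the accumulated compression error\nprint(np.linalg.norm(x_plain - x_naive),\n      np.linalg.norm(x_plain - x_ec))\n\\end{verbatim}\n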
\n\nNow if we apply error compensation at each compression step, the updating rule becomes\n\\begin{align*}\n\\bm{x}_{t+1} =& \\bm{x}_t - \\gamma C_\\omega[\\bm{g}_t + \\bm{\\delta}_{t-1}] = \\bm{x}_t - \\gamma (\\bm{g}_t-\\underbrace{\\bm{\\delta}_t +\\bm{\\delta}_{t-1}}_{\\text{error cancellation}})\\\\\n=& \\bm{x}_0 - \\gamma\\sum_{s=0}^t\\bm{g}_s + \\gamma\\sum_{s=0}^t(\\bm{\\delta}_s - \\bm{\\delta}_{s-1})\\\\\n=& \\bm{x}_0 - \\gamma\\sum_{s=0}^t\\bm{g}_s + \\gamma\\bm{\\delta}_t.\\numberthis\\label{intuition:sgd_eq3}\n\\end{align*}\n\n\nThis demonstrates that by using error compensation, each step's compression error would get cancelled in the next step instead of getting accumulated over steps. To make the error compensation work correctly, it is necessary that we ensure an error cancellation term $\\bm{\\delta}_t +\\bm{\\delta}_{t-1}$ in the updating rule. Below we are going to see that this cannot be achieved for \\textbf{Adam}.\n\n\n\\subsection{Why \\textbf{Adam} cannot be combined with error compensation}\\label{intuition:why_adam_fails}\nAs we can see, \\textbf{Adam} is non-linearly dependent to the gradient, and this non-linearity is widely believed to be essential for the superiority of \\textbf{Adam}. Below we are going to first intuitively explain why error compensation works well for \\textbf{SGD}, and then discuss two major reasons why this non-linearity makes \\textbf{Adam} incompatible with error compensation.\n\n\n\n\n\\paragraph{Difficulty for estimating the variance term $\\bm{v}$.} Notice that for \\textbf{Adam}, it is necessary to communicate the gradient $\\bm{g}_t$ or momentum $\\bm{m}_t$, and the variance term can be updated using $\\bm{g}_t$. However, when using error-compensated gradient to update $\\bm{v}_t$, the updating rule follows:\n\\begin{align*}\n\\bm{v}_{t+1} = &\\beta_2 \\bm{v}_t + (1-\\beta_2)\\left(C_\\omega[\\bm{g}_t + \\bm{\\delta}_{t-1}] \\right)^2\\\\\n=& \\beta_2 \\bm{v}_t + (1-\\beta_2)\\left(\\bm{g}_t + \\bm{\\delta}_{t-1} - \\bm{\\delta}_t \\right)^2\\\\\n= & \\beta_2 \\bm{v}_t + (1-\\beta_2)\\left(\\bm{g}_t \\right)^2 + \\underbrace{\\left( \\bm{\\delta}_{t-1} - \\bm{\\delta}_t \\right)^2}_{\\text{non-linear error correction}} + 2 \\langle \\bm{g}_t,\\bm{\\delta}_{t-1} - \\bm{\\delta}_t\\rangle.\n\\end{align*}\nHere the quadratic term $\\left( \\bm{\\delta}_{t-1} - \\bm{\\delta}_t \\right)^2$ cannot be cancelled by itself, therefore it will be hard to get an accurate estimation of $\\bm{v}_t$ with history error being cancelled.\n\\paragraph{Difficulty for setting the correction factor.} Another problem is that for \\textbf{SGD} , when applying error compensation under a time varying learning rate $\\gamma_t$, we need to compensate the history error using\n\\begin{align*}\nC\\left[ \\bm{g}_t + \\frac{\\gamma_t}{\\gamma_{t-1}}\\bm{\\delta}_{t-1}\\right],\n\\end{align*} instead of adding back $\\bm{\\delta}_{t-1}$ directly. 
In this case,\nif we view $\\frac{\\gamma}{\\sqrt{\\bm{v}_t} + \\eta}$ as a coordinate-dependent learning rate, which makes \\textbf{Adam} equivalent to \\textbf{Momentum SGD} with time-varying learning rate, we need to apply the scale factor according to \n\\begin{align*}\n\\bm{m}_{t+1} = C_\\omega\\left[\\beta_1\\bm{m}_t + (1-\\beta_1)\\bm{g}_t + \\frac{\\sqrt{\\bm{v}_{t-1}} + \\eta}{\\sqrt{\\bm{v}_{t}} + \\eta}\\bm{\\delta}_{t-1}\\right].\n\\end{align*}\nThe problem is that we cannot get the value of $\\bm{v}_{t}$ after the compression, which makes it impossible to set the scale factor for error compensation.\n\n\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[\\scriptsize \\textbf{Gather step}: Each worker sends its $i$-th chunk to worker $i$.]{\n\\begin{minipage}[t]{0.3\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{gather.pdf}\n\\end{minipage}\n}\\quad\n\\subfigure[\\scriptsize \\textbf{Average step}: Each worker averages all chunks it receives.]{\n\\begin{minipage}[t]{0.3\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{average.pdf}\n\\end{minipage}%\n}\\quad\n\\subfigure[\\scriptsize \\textbf{Scatter step}: Each worker receives the $i$-th chunk from worker $i$.]{\n\\begin{minipage}[t]{0.3\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{scatter.pdf}\n\\end{minipage}%\n}%\n\\centering\n\\caption{Efficient system design for communication (compressed\\_allreduce)}\\label{allreduce}\\vspace{-0.1cm}\n\\label{fig:allreduce}\n\\end{figure*}\n\n\\subsection{{{{\\textbf{1-bit Adam}}}}}\\label{alg:description}\nBased on our findings (Section~\\ref{sec:moti-variance}) that \\textbf{Adam}'s variance term becomes stable at an early stage, we propose {{{\\textbf{1-bit Adam}}}} summarized in Algorithm \\ref{alg:de_ec}. First we use vanilla \\textbf{Adam} for a few epochs as a warm-up. After the warm-up stage, the compression stage starts and we stop updating the variance term $\\bm{v}$ and use it as a fixed precondition. At the compression stage, we communicate based on the momentum applied with error-compensated 1-bit compression. The momentums are quantized into 1-bit representation (the sign of each element). Accompanying the vector, a scaling factor is computed as $\\frac{\\text{magnitude of compensated gradient}}{\\text{magnitude of quantized gradient}}.$ This scaling factor ensures that the compressed momentum has the same magnitude as the uncompressed momentum. This 1-bit compression could reduce the $97\\%$ communication cost of the original for float32 type training and $94\\%$ for float16 type training. 
\n\n\n\n\n\n\n\n\\begin{algorithm}[t!]\\caption{{{{\\textbf{1-bit Adam}}}}}\n\\begin{algorithmic}[1]\n\\footnotesize\n\\STATE {\\bfseries Initialize}: $\\bm{x}_0$, learning rate $\\gamma$, initial error $\\bm{\\delta} = \\boldsymbol{0}$, $\\bm{m}_0 = \\boldsymbol{0}$, $\\bm{v}_0 = \\boldsymbol{0}$, number of total iterations $T$, warm-up steps $T_{w}$, two decaying factor $\\beta_1$ and $\\beta_2$ for \\textbf{Adam}.\n\n\\STATE Running the original \\textbf{Adam} for $T_{w}$ steps, then store the variance term (defined as $\\bm{v}_t$ in \\eqref{alg:v}) $\\bm{v}_{_{ T_w}}$.\n\\FOR {$t=T_w,\\ldots,T$}\n\n\\STATE \\textbf{(On $i$-th node)}\n\\STATE Randomly sample $\\bm{\\xi}_t^{(i)}$ and compute local stochastic gradient $\\bm{g}_t^{(i)} := \\nabla F_i(\\bm{x}_t^{(i)}, \\bm{\\xi}_t^{(i)})$.\n\n\\STATE Update the local momentum variable $\\bm{m}_{t-1}$ according to\n$\n\\bm{m}_t^{(i)} = \\beta_1\\bm{m}_{t-1} + (1 - \\beta_1)\\bm{g}_t^{(i)}.\n$\n\\STATE Compress $\\bm{m}_t^{(i)}$ into $\\hat{\\bm{m}}_t^{(i)} = \\bm{C}_\\omega\\left[\\bm{m}_t^{(i)} + \\bm{\\delta}_{t-1}^{(i)}\\right]$, and update the compression error by $\\bm{\\delta}_t^{(i)} = \\bm{m}_t^{(i)} + \\bm{\\delta}_{t-1}^{(i)} - \\hat{\\bm{m}}_t^{(i)}$.\n\\STATE Send the $\\hat{\\bm{m}}_t^{(i)}$ to the server.\n\\STATE \\textbf{(On server)}\n\\STATE Take the average over all $\\hat{\\bm{m}}_t^{(i)}$ it receives and compress it into\n$\n\\overline{\\bm{m}}_t =\\bm{C}_\\omega\\left[ \\frac{1}{n}\\sum_{i=1}^n\\hat{\\bm{m}}_t^{(i)} + \\overline{\\bm{\\delta}}_{t-1}\\right],\n$\n and update the compression error accordingly by $\\overline{\\bm{\\delta}}_t = \\frac{1}{n}\\sum_{j=1}^n \\bm{C}_\\omega\\left[\\bm{m}_t^{(i)}\\right] + \\overline{\\bm{\\delta}}_{t-1} - \\overline{\\bm{m}}_t$.\n \\STATE Send $\\overline{\\bm{m}}_t$ to all the workers.\n \\STATE \\textbf{(On $i$-th node)}\n \\STATE Set $\\bm{m}_t = \\overline{\\bm{m}}_t$ , and update local model $\\bm{x}_{t+1} = \\bm{x}_t - \\gamma \\bm{m}_t\/\\sqrt{\\bm{v}_{_{\\tiny T_w}}}$.\n\\ENDFOR\n\\STATE {\\bfseries Output}: $\\bm{x}$.\n\\end{algorithmic}\\label{alg:de_ec}\n\\end{algorithm}\n\n\n\\section{Theoretical Analysis}\nNotice that for {{{\\textbf{1-bit Adam}}}}, we only use original \\textbf{Adam} at warm-up, and then we essentially run error-compensated momentum \\textbf{SGD} with coordinate-dependent learning rate $\\frac{\\gamma}{\\sqrt{\\bm{v}_{_{T_w}}}}$. Therefore here we consider the \\textbf{Adam}-based warm-up phase as a way to find a good precondition variance term $\\bm{v}_{_{T_w}}$ to be used in the compression phase. Below we are going to introduce the convergence rate for the compression phase after warm-up. 
We first introduce some necessary assumptions, then we present the theoretical guarantee of the convergence rate for {{{\\textbf{1-bit Adam}}}}.\n\n\n\n\n\n\n\n\n\n\\begin{assumption}\\label{ass:global}\nWe make the following assumptions:\n\\begin{enumerate}\n\\item \\textbf{Lipschitzian gradient:} $f(\\cdot)$ is assumed to be with $L$-Lipschitzian gradients, which means\n \\begin{align*}\n \\|\\nabla f(\\bm{x}) - \\nabla f(\\bm{y}) \\| \\leq L \\|\\bm{x} - \\bm{y} \\|,\\quad \\forall \\bm{x},\\forall \\bm{y},\n \\end{align*}\n \\item\\label{ass:var} \\textbf{Bounded variance:}\nThe variance of the stochastic gradient is bounded\n\\begin{align*}\n\\mathbb E_{\\bm{\\zeta}^{(i)}\\sim\\mathcal{D}_i}\\|\\nabla F(\\bm{x};\\bm{\\zeta}^{(i)}) - \\nabla f(\\bm{x})\\|^2 \\leq \\sigma^2,\\quad\\forall \\bm{x},\\forall i.\n\\end{align*}\n\\item \\textbf{Bounded magnitude of error for $\\mathcal{C}_{\\omega}[\\cdot]$:}\nThe magnitude of worker's local errors $\\bm{\\delta}_t^{(i)}$ and the server's global error $\\overline{\\bm{\\delta}}_t$, are assumed to be bounded by a constant $\\epsilon$\n\\begin{align*}\n\\sum_{k=1}^n\\mathbb E_{\\omega} \\left\\|\\bm{\\delta}_t^{(i)}\\right\\|\\leq \\frac{\\epsilon}{2},\\quad\n\\sum_{i=1}^n\\mathbb E_{\\omega}\\left\\|\\overline{\\bm{\\delta}}_t\\right\\|\\leq \\frac{\\epsilon}{2},\\quad\\forall t,\\forall i.\n\\end{align*}\n\\end{enumerate}\n\\end{assumption}\n\n\nNext we present the main theorem for {{{\\textbf{1-bit Adam}}}}.\n\\begin{theorem}\\label{theo:global}\n Under Assumption~\\ref{ass:global}, for {{{\\textbf{1-bit Adam}}}}, we have the following convergence rate\n \\begin{align*}\n &\\left(1-\\frac{\\gamma L}{v_{\\min}} - \\frac{2\\gamma^2 L^2}{(1-\\beta)^2v_{\\min}^2} \\right)\\sum_{t=0}^T \\mathbb E\\|\\nabla f(\\bm{x}_t)\\|^2_{V}\\\\\n \\leq & \\frac{2\\mathbb E f(\\bm{x}_{0}) - 2\\mathbb Ef(\\bm{x}^*)}{\\gamma} + \\frac{6\\gamma^2L^2\\epsilon^2 T}{(1-\\beta)^2v_{\\min}^3} + \\frac{L\\gamma \\sigma^2T}{nv_{\\min}} + \\frac{2\\gamma^2L^2\\sigma^2 T}{n(1-\\beta)^2v_{\\min}^2},\\numberthis\\label{main:theo:eq}\n\\end{align*}\nwhere $V= \\text{diag}\\left(1\/\\bm{v}_{T_w}^{(1)},1\/\\bm{v}_{T_w}^{(2)},\\cdots,1\/\\bm{v}_{T_w}^{(d)}\\right)$ is a diagonal matrix spanned by $\\bm{v}_{_{T_w}}$ and $v_{\\min} = \\min\\{\\bm{v}_{T_w}^{(1)},\\bm{v}_{T_w}^{(2)},\\cdots,\\bm{v}_{T_w}^{(d)}\\}$ is the mimimum value in $\\bm{v}_{T_w}$\n\\end{theorem}\n\nGiven the generic result in Theorem~\\ref{theo:global}, we obtain the convergence rate for {{{\\textbf{1-bit Adam}}}} with appropriately chosen learning rate $\\gamma$.\n\n\n\\begin{corollary}\\label{coro:global}\nUnder Assumption~\\ref{ass:global}, for {{{\\textbf{1-bit Adam}}}}, choosing\n$\n\\gamma = \\frac{1}{4L(v_{\\min})^{-1} + \\sigma\\sqrt{\\frac{ T}{n}} + \\epsilon^{\\frac{2}{3}} T^{\\frac{1}{3}}(v_{\\min})^{-1} },\n$\nwe have the following convergence rate\n\\begin{align*}\n\\frac{1}{Tv_{\\min}}\\sum_{t=0}^{T-1}\\mathbb{E}\\|\\nabla f(\\bm{x}_t)\\|^2_V \\lesssim \\frac{\\sigma}{\\sqrt{nT}} + \\frac{\\epsilon^{\\frac{2}{3}}}{T^{\\frac{2}{3}}} + \\frac{1}{ T},\n\\end{align*}\nwhere we treat $f(\\bm{x}_1) - f^*$, $\\beta$ and $L$ as constants.\n\\end{corollary}\n\n\nThis result suggests that: {{{\\textbf{1-bit Adam}}}} essentially admits the same convergence rate as distributed \\textbf{SGD} in the sense that both of them admit the asymptotical convergence rate $O(1\/\\sqrt{nT})$, which means we can still achieve linear speedup w.r.t. 
the number of workers $n$.\n\n\n\\section{Efficient system design for compressed communication}\nNVIDIA NCCL is an efficient and widely used communication library that has been tightly integrated in DL frameworks like PyTorch and TensorFlow. However, NCCL library cannot be used directly for performing communication based on 1-bit compression. This is because the collective communication primitives like Allreduce and Allgather are at a higher level of abstraction and can only perform data movement and\/or simple operations like sum, min, max etc. In addition, NCCL library (before v2.7) did not expose either an Alltoall primitive or any point-to-point (send\/recv) communication primitives that can be used to implement an Alltoall. Thus for {{{\\textbf{1-bit Adam}}}}, we designed a custom collective primitive using Message Passing Interface (MPI). We call it ``compressed allreduce'' and it has three phases as shown in Figure~\\ref{fig:allreduce}: 1) The gather step, which we have implemented using the MPI\\_Alltoall (personalized exchange) primitive, 2) The average step, where {{{\\textbf{1-bit Adam}}}} computes the average of compressed local momentums, and 3) The scatter step, which we implement using MPI\\_Allgather. We develop two versions of compressed allreduce: 1) CUDA-Aware version that exploits GPUDirect features and requires CUDA-Aware libraries like MVAPICH2-GDR and 2) Basic version that can be used with any MPI library but copies data between GPU and CPU buffers. The CUDA-Aware version works only on systems with InfiniBand whereas the basic version can run on any system with Ethernet interconnect. \n\n\\section{Experiments}\n\nWe evaluate {{{{\\textbf{1-bit Adam}}}}} and existing approaches using BERT-Base, BERT-Large, SQuAD 1.1 and ResNet-18 training tasks on up to 256 GPUs. We show that {{{{\\textbf{1-bit Adam}}}}} converges as fast as uncompressed \\textbf{Adam}, and runs up to 3.3 times faster than uncompressed algorithms under limited bandwidth.\n\n \n \n\n\\subsection{BERT pre-training and fine-tuning}\n\\label{sec:bert-eval}\n\\paragraph{Dataset and models} We evaluate the convergence and performance of {{{\\textbf{1-bit Adam}}}} and uncompressed \\textbf{Adam} for BERT-Base ($L=12$, $H=768$, $A=12$, $110M$ params) and BERT-Large ($L=24$, $H=1024$, $A=16$, $340M$ params) pre-training tasks. We use the same dataset as \\citet{bert}, which is a concatenation of Wikipedia and BooksCorpus with $2.5B$ and $800M$ words respectively. We use the GLUE fine-tuning benchmark\\citep{glue} to evaluate the convergence of the BERT models trained by \\textbf{Adam} and {{{\\textbf{1-bit Adam}}}}.\n\nIn addition, we also evaluate the convergence and performance of {{{\\textbf{1-bit Adam}}}} for SQuAD 1.1 fine-tuning task\\footnote{https:\/\/rajpurkar.github.io\/SQuAD-explorer\/} using a pre-trained BERT model checkpoint from HuggingFace\\footnote{https:\/\/github.com\/huggingface\/transformers}.\n\n\\paragraph{Hardware} We use the two clusters described in Section~\\ref{sec:moti-profile}. We use up to 256 GPUs for pre-training tasks and up to 32 GPUs for fine-tuning tasks.\n\n\\paragraph{Training parameters} For BERT pre-training, the learning rate linearly increases to $4\\times 10^{-4}$ as a warmup in the first $12.5K$ steps, then decays into $0.99$ of the original after every $520$ steps. We set the two parameters in Algorithm~\\ref{alg:de_ec} as $\\beta_1 = 0.9$ and $\\beta_2 = 0.999$ for {{{\\textbf{1-bit Adam}}}} and \\textbf{Adam}. 
For the convergence tests, we set the total batch size to $4K$ for BERT-Base and BERT-Large. For the performance tests, we use different batch sizes. Table~\\ref{table_bert_steps} summarizes the total number of steps for the BERT sequence length 128 and 512 phases, together with the number of warmup steps for {{{\\textbf{1-bit Adam}}}}.\n\nFor the GLUE benchmarks we use the original \\textbf{Adam} optimizer and perform single-task training on the dev set. We search over the hyperparameter space with batch sizes $\\in\\{8,16\\}$ and learning rates $\\in\\{1\\times 10^{-5},3\\times 10^{-5},5\\times 10^{-5},8\\times 10^{-5}\\}$. Other settings are the same as in the pre-training tasks.\n\nFor SQuAD fine-tuning we use the same parameters as published by HuggingFace (batch size = $24$, learning rate = $3\\times 10^{-5}$, dropout = $0.1$, 2 epochs), except that we increase the batch size to $96$ (using $32$ GPUs). The first $400$ steps out of the total $1848$ steps are used as the warmup stage for {{{\\textbf{1-bit Adam}}}}.\n\n\\begin{table}[t]\n \\footnotesize\n \\caption{Number of steps for BERT pre-training tasks.}\\label{table_bert_steps}\n \\centering\n \\begin{tabular}{lll}\n \\hline\n & Seqlen 128& Seqlen 512\\\\\n & (warmup)& (warmup)\\\\\n \\hline\n BERT-Base \\textbf{Adam}& $118K$ (N\/A)& $22K$ (N\/A) \\\\\n BERT-Base {{{\\textbf{1-bit Adam}}}}& $118K$ ($16K$)& $22K$ ($1.5K$) \\\\\n BERT-Large \\textbf{Adam}& $152K$ (N\/A)& $10K$ (N\/A) \\\\\n BERT-Large {{{\\textbf{1-bit Adam}}}}& $152K$ ($23K$)& $10K$ ($1.5K$) \\\\\n \\hline\n \\end{tabular}\\vspace{-0.1cm}\n\\end{table}\n\n\\paragraph{Convergence results}\nFigure~\\ref{fig:bert} presents the sample-wise convergence results. We use the BertAdam \\citep{bert} optimizer as the uncompressed baseline. For both BERT-Base and BERT-Large, and for both sequence length phases, we find that {{{\\textbf{1-bit Adam}}}} provides the same convergence speed as the baseline, while the communication volume is reduced to $6\\%$ of the original during the compression stage.\n\nTable~\\ref{table1} presents the GLUE results using the checkpoints from our pre-training experiments. {{{\\textbf{1-bit Adam}}}} achieves accuracy similar to the uncompressed baseline and to the numbers reported in previous work.\n\nFor the SQuAD 1.1 fine-tuning task using the checkpoint from HuggingFace, {{{\\textbf{1-bit Adam}}}} achieves an F1 score (93.32) similar to the score reported by HuggingFace (93.33) using the same number of samples and training parameters. \n \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{large_loss.pdf}\n\\caption{Epoch-wise convergence speed for BERT-Large pre-training sequence length 128. {{{\\textbf{1-bit Adam}}}} and \\textbf{Adam} also achieve the same convergence speed for BERT-Base pre-training.}\\label{fig:bert}\\vspace{-0.1cm}\n\\end{figure}\n\n\\begin{table*}[t]\n\\footnotesize\n \\caption{GLUE development set results. BERT-Base\/Large (original) results are from \\citet{bert}. BERT-Base\/Large (uncompressed) results use the full-precision \\textbf{BertAdam} with the same training parameters as the {{{\\textbf{1-bit Adam}}}} case. BERT-Base\/Large (compressed) are the results using {{{\\textbf{1-bit Adam}}}}. 
The scores are the median scores over 10 runs.}\\label{table1}\n \\centering\n \\begin{tabular}{lccccccc}\n \\hline \n \\textbf{Model}& RTE& MRPC& CoLA & SST-2& QNLI& QQP& MNLI-(m\/mm) \\\\\n \\hline \n BERT-Base (original) & 66.4 & 84.8 & 52.1 & 93.5 & 90.5& 89.2& 84.6\/83.4\\\\\n BERT-Base (uncompressed) & 68.2 & 84.8 & 56.8 & 91.8 & 90.9& 90.9& 83.6\/83.5\\\\\n BERT-Base (compressed) & 69.0& 84.8 & 55.6 & 91.6 & 90.8& 90.9& 83.6\/83.9\\\\\n \\hline\n BERT-Large (original) & 70.1& 85.4 & 60.5 & 94.9 & 92.7& 89.3& 86.7\/85.9\\\\\n BERT-Large (uncompressed) & 70.3& 86.0 & 60.3 & 93.1 & 92.2& 91.4& 86.1\/86.2\\\\\n BERT-Large (compressed) & 70.4& 86.1 & 62.0 & 93.8 & 91.9& 91.5& 85.7\/85.4\\\\\n \\hline\n \\end{tabular}\\vspace{-0.1cm}\n\\end{table*}\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Bert-Large pre-training, batch size = number of GPUs $\\times$ 16]{\n\\begin{minipage}[t]{0.33\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{eval_bert_accum1.pdf}\\label{fig:e2e-1}\n\\end{minipage}\n}\n\\subfigure[Bert-Large pre-training, batch size = 4K]{\n\\begin{minipage}[t]{0.33\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{eval_bert_bsz4k.pdf}\\label{fig:e2e-2}\n\\end{minipage}\n}\n\\subfigure[SQuAD fine-tuning, batch size = number of GPUs $\\times$ 3]{\n\\begin{minipage}[t]{0.28\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{eval_squad_accum1.pdf}\\label{fig:e2e-3}\n\\end{minipage}%\n}\n\\centering\n\\caption{Scalability of {{{\\textbf{1-bit Adam}}}} for BERT-Large pre-training sequence length 128 and SQuAD 1.1 fine-tuning on V100 GPUs. \\textbf{Adam} lines represent the throughput at {{{\\textbf{1-bit Adam}}}}'s warmup stage (i.e., baseline \\textbf{Adam}'s throughput). {{{\\textbf{1-bit Adam}}}} lines represent the throughput at compression stage. Annotations represent the highest speedup achieved in each figure. Note that this is the speedup between warmup and compression stage. The end-to-end speedup also depends on the percentage of warmup.}\\label{fig:e2e}\\vspace{-0.1cm}\n\\end{figure*}\n\n\n\\paragraph{Performance results}\nComputed as 1\/(warmup ratio + (1 - warmup ratio)\/16) for FP16 training, {{{\\textbf{1-bit Adam}}}} offers up to 5x less end-to-end communication volume for BERT-Base and BERT-Large. This leads to to 3.3x higher throughput for BERT-Large sequence length 128 pre-training and up to 2.9x higher throughput for SQuAD fine-tuning. This end-to-end throughput improvement is enabled by the 5.48x (Figure~\\ref{fig:e2e-1}) and 6.17x (Figure~\\ref{fig:e2e-3}) speedup observed during the compression stage. Figure~\\ref{fig:e2e-2} shows that {{{\\textbf{1-bit Adam}}}} also provides better scalability: \\textbf{Adam}'s throughput reaches peak at 32 GPUs on Ehternet, while {{{\\textbf{1-bit Adam}}}}'s throughput keeps increasing until 128 GPUs. 
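To make the communication-volume estimate above concrete, take the BERT-Large sequence-length-128 numbers from Table~\\ref{table_bert_steps}: the warmup ratio is $23K\/152K\\approx 0.15$, and the $1\/16$ factor in the formula reflects 1-bit compression of 16-bit values during the compression stage, so the end-to-end reduction is roughly\n\\[\n\\frac{1}{0.151+(1-0.151)\/16}\\approx 4.9,\n\\]\nof the same order as the up to $5\\times$ figure quoted above (this is our own illustrative evaluation of the stated formula, not an additional measurement).\n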
It is also worth mentioning that {{{\\textbf{1-bit Adam}}}} on Ethernet (4.1 Gbps effective bandwidth, 4 GPUs per node) is able to achieve throughput comparable to \\textbf{Adam} on InfiniBand (near 100 Gbps effective bandwidth, 8 GPUs per node), which demonstrates {{{\\textbf{1-bit Adam}}}}'s efficiency given the hardware differences.\n\n\n\n\\subsection{ResNet on CIFAR10}\\label{resnet}\n\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[Training loss]{\n\\begin{minipage}[t]{0.35\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{resnet_loss.pdf}\n\\end{minipage}\n}\n\\subfigure[Testing accuracy]{\n\\begin{minipage}[t]{0.25\\linewidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{resnet_acc.pdf}\n\\end{minipage}\n}\n\\centering\n\\caption{Epoch-wise convergence speed for ResNet-18.}\\label{fig:resnet}\\vspace{-0.1cm}\n\\end{figure}\n\nTo further evaluate the convergence speed of {{{\\textbf{1-bit Adam}}}} and related works, we train ResNet-18~\\citep{7780459} on CIFAR10. The dataset has a training set of 50000 images and a test set of 10000 images, where each image is assigned one of 10 labels. We run the experiments on $8$ 1080Ti GPUs, where each GPU is used as one worker. The batch size on each worker is $128$ and the total batch size is $1024$.\n\nWe evaluate five implementations for comparison: 1) Original \\textbf{SGD}. 2) Original \\textbf{Adam} \\citep{adam}. 3) {{{\\textbf{1-bit Adam}}}}, where we use $13$ out of $200$ epochs as warmup. 4) {{{\\textbf{1-bit Adam}}}}\\textbf{(32-bits)}, where we do not compress the momentum while still freezing the variance. 5) \\textbf{Adam(1-bit Naive)}, where we compress the gradient instead of the momentum and do not freeze the variance. We set the learning rate to $1\\times 10^{-1} $ for \\textbf{SGD} and $1\\times 10^{-4}$ for the other four cases. For all five cases, the learning rate is decayed to $10\\%$ of the original after every $100$ epochs.\n\nAs illustrated in Figure~\\ref{fig:resnet}, {{{\\textbf{1-bit Adam}}}} achieves a convergence speed similar to that of \\textbf{Adam} and {{{\\textbf{1-bit Adam}}}}\\textbf{(32-bits)}. \\textbf{SGD} has a slightly slower convergence speed, while \\textbf{Adam(1-bit Naive)} is much worse. This and Section~\\ref{sec:moti-convergence} demonstrate that existing compression methods do not work for \\textbf{Adam}. In the supplementary materials we further compare {{{\\textbf{1-bit Adam}}}} with other related works using ResNet-18.\n\n\\section{Conclusions}\nIn this paper, we propose an error-compensated \\textbf{Adam} preconditioned momentum SGD algorithm, {{{\\textbf{1-bit Adam}}}}, which provides both communication efficiency and \\textbf{Adam}'s convergence speed. Our theoretical analysis demonstrates that {{{\\textbf{1-bit Adam}}}} admits a linear speedup w.r.t. the number of workers in the network, and is robust to any compression method. We validate the performance of {{{\\textbf{1-bit Adam}}}} empirically on BERT, SQuAD and ResNet training tasks on up to 256 GPUs. 
Results show that {{{\\textbf{1-bit Adam}}}} converges as fast as uncompressed \\textbf{Adam}, reduces communication volume by up to 5x, and runs up to 3.3 times faster than uncompressed algorithms.\n\n\n\\bibliographystyle{abbrvnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n An American Mathematical Monthly problem posed within relatively recent memory [{\\bf{1}}] sought the\nevaluation\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi\/2}\\left\\{\\rule{0mm}{4mm}\\log(\\,2\\sin(x)\\,)\\right\\}^{2}dx\\,=\\,\\frac{\\,\\,\\,\\pi^{\\,3}\\,}{\\,24\\,}\\;.\n\\end{equation}\nOne mode of solution depended upon integration of an analytic function around the periphery $\\,\\Omega\\,$ of a semi-infinite\nvertical strip with no singularities enclosed, the quadrature having thus a null outcome\\footnote{Both\ncontour $\\,\\Omega\\,,$ a vertical rectangle of unlimited height, and the notion of integrating an analytic function thereon so\nas to obtain a null result, imitate a similar ploy utilized in [{\\bf{2}}] on behalf of (2) and still further attributed there\nto Ernst Lindel{\\\"{o}}f.} known in advance on the strength of Cauchy's theorem.\nEvaluation (1) emerged automatically by setting to zero the real part of that integral,\\footnote{Two solutions for (1) were submitted by\nthe undersigned, one involving contour integration in the manner suggested, and the other based upon a Fourier series.\nThe Bernoulli recurrence (5) emerged as a spontaneous by-product of an ancillary, null-quadrature calculation upon\nthat same contour $\\,\\Omega\\,,$ initially aimed only at evaluating the log-sine integrals (4). This note embodies the content\nof that collateral calculation, slightly rephrased so as to highlight the newly recovered Bernoulli number sum identity.}\nwhereas the complementary requirement that the imaginary part likewise vanish brought into play, and successfully so, the known quadratures\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}\\log(\\,\\sin(x)\\,)\\,dx\\,=\\,-\\,\\pi\\log(2)\n\\end{equation}\nand\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}x\\log(\\,\\sin(x)\\,)\\,dx\\,=\\,-\\frac{\\,\\,\\,\\pi^{\\,2}\\,}{\\,2\\,}\\log(2)\\;.\n\\end{equation}\nWith (2) and (3) in plain view, a temptation arose to provide for them, too, an {\\em{ab initio}} verification, and,\nmore even than that, to evaluate the entire hierarchy of log-sine integrals\\footnote{If that were the only goal then we should\nassuredly stop dead in our tracks, simply because, on the one hand, {\\bf{MATHEMATICA}} provides all such evaluations\non demand, with great aplomb, and this even in its symbolic mode, while, on the other, a relatively painless\nderivation can be based upon a Fourier series, one which emerges in its turn from the power series for $\\,\\log(1-z)\\,$\nwhen argument $\\,z\\,$ is forced to lie upon the unit circle. This Fourier series underlies in addition an essentially zinger\nverification of (1). All such manifold benefits of the Fourier option are sketched in an Appendix. 
Moreover, it goes without\nsaying that both contour-based (14) and Fourier-based (27) evaluations of (4), even though they may be of secondary interest\nin the present context, do stand in complete agreement.} \n\\begin{equation}\nI_{n}\\,=\\,\\int_{\\,0}^{\\,\\pi}x^{n}\\log(\\,\\sin(x)\\,)\\,dx\n\\end{equation}\nas the power of $\\,x\\,$ roams over all non-negative integers $\\,n\\,=\\,0,\\,1,\\,2,\\,3,\\,\\ldots\\,{\\rm{ad\\;inf}}\\,.$\nNot only was this fresh ambition, digressive and self-indulgent though it may have been, easy to satisfy via quadrature\non the same contour as before, but it also exposed to view once more the fundamental Bernoulli number recurrence\n\\begin{equation}\n\\sum_{k\\,=\\,0}^{n-1} \\left(\\!\\!\\begin{array}{c}\n n \\\\\n k\n \\end{array}\\!\\!\\right)\nB_{k}\\,=\\,0\n\\end{equation}\nwhich is valid for $\\,n\\geq 2\\,$ and, together with the initial condition $\\,B_{0}\\,=\\,1\\,$ and the self-consistent choice\n$\\,B_{1}\\,=\\,-1\/2\\,,$ is adequate to populate the entire Bernoulli ladder, complete with null entries at all odd\nindices beyond $\\,k\\,=\\,1\\,,$ {\\em{viz.,}} $\\,B_{2l+1}\\,=\\,0\\,$ whenever $\\,l\\geq1\\,.$ Source material on the\nBernoulli numbers and the related Bernoulli polynomials is ubiquitous, and can be sampled, for example, in [\\textbf{3-5}].\nReferences [\\textbf{6,7}] provide a valuable overview all at once of their\nmathematical properties and historical genesis in com-\n\\newpage\n\\mbox{ }\n\\newline\n\\newline\n\\newline\nputing sums of finite progressions of successive integers raised to\nfixed positive powers. Equally valuable is online Reference [{\\bf{8}}], which cites a rich literature and\ncovers besides a vast panorama of diverse mathematical knowledge.\n\n\nBernoulli identity (5), which is the principal object of our present concern, emerges thus by setting to\nzero the imaginary part of the analytic quadrature (6), below, around contour $\\,\\Omega\\,,$\nwith the corresponding null value requirement on its real part providing an evaluation of the general term from sequence (4),\nlisted in (14). No claim whatsoever is made here as to any ultimate novelty in outcome (14), which is available\nin symbolic form at any desired index $\\,n\\,$ through routine demand from \\textbf{MATHEMATICA}. Outcome (14),\nexpressed here as a finite sum of Riemann zeta functions at odd integer arguments, continues to attract the attention\nof contemporary research focused upon polylogarithms [\\textbf{9-12}]. But the formulae thus\nmade available are subordinated in [\\textbf{11,12}] and elsewhere to the task of evaluating a variety of dissimilar\nquantities, and appear to be tangled in thickets of notation. From this\nstandpoint, formula (14) (and its identical twin (27) derived in an even more elementary fashion)\nmay perhaps still provide the modest service of a stand-alone, encapsulated result, easily derived and\neasily surveyed. 
In particular, the canonical method of derivation evolved in [\\textbf{9}] and repeatedly\nalluded to in [\\textbf{11,12}] requires rather strenuous differentiations of Gamma function ratios,\nand results finally in a recurrence on the individual $\\,I_{n}\\,$ (or else an equivalent generating function).\nTo be sure, while the work in [\\textbf{9}] is immensely elegant, it is at the same time immensely more intricate\nthan either of our independent derivations culminating in (14) and (27).\n\n\n On the other hand, it does appear to have escaped previous notice that the Bernoulli recurrence (5), which\nis ancient and foundational in its own right, should likewise re\\\"{e}merge (via (16)) from the same\nquadrature around contour $\\,\\Omega\\,$ when one insists that the\ncorresponding imaginary part also vanish. And, just as is the case with (14), formula (16), too, emerges\nfrom a contact with Riemann zeta functions, but evaluated this time at even integer arguments, which latter\ncircumstance, by virtue of the celebrated Euler connection, opens the portal to entry by the similarly\nindexed Bernoulli numbers. It is of course none of our purpose here to compete with, let alone to supplant\nin any way the standard derivations of (5). Rather, we seek merely to highlight its re\\\"{e}mergence in what surely\nmust be conceded to be an unexpected setting. \n\n We round out this note with an appendix wherein contour integration cedes place to the more elementary\nsetting of a Fourier series on whose basis (14) is recovered yet again (as (27)) through repeated integration by\nparts. That same Fourier series provides moreover an exceedingly short and simple confirmation of (1),\ncomplementary to the contour integral method, an option to which allusion has already been made in Footnote 3.\nOf course, at this point, no further light can, nor need be shed upon (5) \\textit{per se}.\n\\vspace{-4mm}\n\n\n \n\\section{Null Quadratures on Contour $\\,\\Omega$}\n\\vspace{-2mm}\n\n Guided by the cited example in [{\\bf{2}}], we consider for $\\,n\\,\\geq\\,0\\,$ the sequence of numbers\n\\begin{equation}\nK_{n}\\,=\\,\\int_{\\,\\Omega}z^{n}\\log\\left(1-e^{\\,2\\,i\\,z}\\right)\\,dz\\,=\\,0\\;\\,,\n\\end{equation}\nall of them annulled by virtue of closed contour $\\,\\Omega\\,$ being required to\nlie within a domain of analyticity for $\\,\\log\\left(1-e^{\\,2\\,i\\,z}\\right)\\,$\nin the plane of complex $z=x+iy.$ Save for quarter-circle indentations of vanishing radius\n$\\,\\delta\\,$ around $\\,z\\,=\\,0\\,$ and $\\,z\\,=\\,\\pi\\,,$ contour $\\,\\Omega\\,$\nbounds a semi-infinite vertical strip, with a left leg having $\\,x\\,=\\,0\\,$\n\\newpage\n\\mbox{ }\n\\newline\n\\newline\n\\newline\nfixed and descending from $\\,y\\,=\\,\\infty\\,$ to $\\,y\\,=\\,\\delta\\,$ (quadrature\ncontribution $\\,L_{n}\\,$), and a right leg at a fixed $\\,x\\,=\\,\\pi\\,$ ascending from\n$\\,y\\,=\\,\\delta\\,$ to $\\,y\\,=\\,\\infty\\,$ (quadrature contribution $\\,R_{\\,n}\\,$),\nlinked at their bottom by a horizontal segment with $\\,y\\,=\\,0\\,$ and\n$\\,\\delta\\,\\leq\\,x\\,\\leq\\,\\pi-\\,\\delta\\,$ (quadrature contribution $\\,H_{n}\\,$).\nIn what follows it will be readily apparent that the limit $\\,\\delta\\!\\downarrow\\!0+\\,$\nmay be enforced with full impunity, a gesture whose {\\em{fait accompli}} status will\nbe taken for granted. 
Likewise passed over without additional\ncomment will be the fact that no contribution is to be sought from contour completion\nby a retrograde horizontal segment $\\,\\pi\\,\\geq\\,x\\,\\geq\\,0\\,$ at\ninfinite remove, $\\,y\\,\\rightarrow\\,\\infty\\,.$\n\n\n We now find\n\\begin{equation}\nL_{n}\\,=\\,-\\,i^{\\,n+1}\\int_{\\,0}^{\\,\\infty}y^{\\,n}\\log\\left(1-e^{-2\\,y}\\right)\\,dy\\;\\,,\n\\end{equation}\n\\begin{equation}\nR_{\\,n}\\,=\\,+\\,i\\int_{\\,0}^{\\,\\infty}\\left(\\,\\pi\\,+\\,i\\,y\\,\\right)^{n}\\log\\left(1-e^{-2\\,y}\\right)\\,dy\\;\\,,\n\\end{equation}\nand\n\\begin{eqnarray}\nH_{n} & = & \\int_{\\,0}^{\\,\\pi}x^{n}\\left[\\rule{0mm}{4mm}\\log(2)\\,-\\,\n \\frac{\\,i\\,\\pi\\,}{\\,2\\,}\\,+\\,i\\,x\\,+\\,\\log(\\,\\sin(x)\\,)\\,\\right]\\,dx \\nonumber \\\\\n & = & \\frac{\\,\\pi^{\\,n+1}\\,}{\\,n\\,+\\,1\\,}\\log(2)\\,-\\,i\\,\\frac{\\,\\pi^{\\,n+2}\\,}{\\,2\\,(\\,n\\,+\\,1\\,)\\,}\\,+\\,\ni\\,\\frac{\\,\\pi^{\\,n+2}\\,}{\\,\\,n\\,+\\,2\\,}\\,+\\,\\int_{\\,0}^{\\,\\pi}x^{n}\\log(\\,\\sin(x)\\,)\\,dx\\;\\,.\n\\end{eqnarray}\nSeries expansion of the logarithm further gives\n\\begin{equation}\nL_{n}\\, = \\, +\\,i^{\\,n+1}\\sum_{l\\,=\\,1}^{\\infty}\\frac{\\,1\\,}{\\,l\\,}\\int_{\\,0}^{\\,\\infty}y^{\\,n}e^{-2\\,l\\,y}\\,dy \\,=\\,+\\,\ni^{\\,n+1}\\,\\frac{\\,n\\,!\\,}{\\,2^{\\,n+1}\\,}\\sum_{l\\,=\\,1}^{\\infty}\\frac{\\,1\\,}{\\,l^{\\,n+2}\\,} \\,,\n\\end{equation}\nthe interchange in summation and integration being legitimated by Beppo Levi's monotone convergence theorem, and similarly\n\\begin{equation}\nR_{\\,n}\\, = \\, -\\,i\\sum_{k\\,=\\,0}^{n}\\left(\\!\\!\\begin{array}{c}\n n \\\\\n k\n \\end{array}\\!\\!\\right)\\pi^{\\,n-k}\\,i^{\\,k}\n\\frac{\\,k\\,!\\,}{\\,2^{\\,k+1}\\,}\\sum_{l\\,=\\,1}^{\\infty}\\frac{\\,1\\,}{\\,l^{\\,k+2}\\,}\\;\\,,\n\\end{equation}\nin both of which there insinuates itself the Riemann zeta function \n\\begin{equation}\n\\zeta\\,(s)\\,=\\,\\sum_{l\\,=\\,1}^{\\infty}\\frac{\\,1\\,}{\\,l^{\\,s}\\,}\n\\end{equation}\nat a variety of its argument values $\\,s.$\\footnote{This canonical\ndefinition implies a guarantee of series convergence, assured by the requirement that $\\,\\Re\\,s\\,>\\,1\\,.$\nA robust arsenal of knowledge exists for continuing $\\,\\zeta(s)\\,$ across the entire plane of\ncomplex variable $\\,s\\,=\\,\\sigma\\,+\\,i\\,t\\,,$ with a simple pole emerging at $\\,s\\,=\\,1\\,.$}\nSo armed, we proceed next to set\n\\begin{equation}\nK_{n}\\,=\\,L_{n}\\,+\\,H_{n}\\,+\\,R_{\\,n}\\,=\\,0\n\\end{equation}\nand remark that, regardless of the parity of index $\\,n\\,,$ $\\,L_{n}\\,$ {\\em{per se}}\nis always absorbed by the contribution from the highest power $\\,y^{\\,n}\\,$ within the\nintegrand for $\\,R_{\\,n}\\,.$ This circumstance accounts for the imminent appearance of\nthe floor function affecting the highest value\nof summation index $\\,k\\,$ in Eqs. 
(14)-(16) and (19) below.\n\\newpage\n\\mbox{ }\n\\newline\n\n\n A requirement that the real part of (13) vanish provides now the following string of valuable\nlog-sine quadrature formulae\n\\begin{eqnarray}\n\\int_{\\,0}^{\\,\\pi}x^{n}\\log(\\,\\sin(x)\\,)\\,dx & = & -\\,\\frac{\\,\\pi^{\\,n+1}\\,}{\\,n\\,+\\,1\\,}\\log(2)\\,+ \\nonumber \\\\\n & & \\rule{-2.3cm}{0mm} +\\,\\frac{\\,n\\,!\\,}{\\,2^{\\,n+1}}\\!\\sum_{\\,k\\,=\\,1}^{\\lfloor n\/2 \\rfloor}\\,(-1)^{\\,k}\\, \n \\frac{\\,(2\\pi)^{n\\,-\\,2k\\,+\\,1}\\,} {\\,(n\\,-\\,2k\\,+\\,1\\,)\\,!\\,}\\,\\zeta\\,(\\,2k+1\\,) \\;\\,,\n\\end{eqnarray}\nof which the first two, at $\\,n\\,=\\,0\\,$ and $\\,n\\,=\\,1\\,,$ with the sum on the\nright missing, validate (2) and (3), and are in any event widely tabulated. And again, as was\nfirst stated in Footnote 3, Eq. (14) is consistently reaffirmed by {\\bf{MATHEMATICA}},\neven when harnessed in its symbolic mode. We note in passing the self-evident fact that,\nunlike the corresponding prescriptions found in [\\textbf{9,10}],\nformula (14) is fully explicit, needing to rely neither upon a generating function nor\na recurrence, even though, naturally, such recurrence arrives at a final rendezvous with identically\nthe same result.\n\n\n A close prelude to identity (5) follows next from the co\\\"{e}xisting requirement that\nthe imaginary part of (13) vanish. This requirement takes the initial form\n\\begin{eqnarray}\n-\\,\\frac{\\,\\pi^{\\,n+2}\\,}{\\,2\\,(\\,n\\,+\\,1\\,)\\,}\\,+\\,\\frac{\\,\\pi^{\\,n+2}\\,}{\\,\\,n\\,+\\,2\\,}\\,- \\rule{5.2cm}{0mm} & & \\nonumber \\\\\n-\\,\\sum_{k\\,=\\,0}^{\\lfloor \\frac{n-1}{2} \\rfloor }\\left(\\!\\!\\begin{array}{c}\n n \\\\\n 2k\n \\end{array}\\!\\!\\right)\\pi^{\\,n-2k}(-1)^{\\,k}\n\\frac{\\,(2k)\\,!\\,}{\\,2^{\\,2k\\,+\\,1}\\,}\\,\\zeta\\,(\\,2k+2\\,) \\rule{0.0cm}{0mm} & = & 0 \\rule{8mm}{0mm} \n\\end{eqnarray}\nand is subsequently moulded into the shape\n\\begin{equation}\n\\sum_{k\\,=\\,0}^{\\lfloor \\frac{n-1}{2} \\rfloor }\\left(\\!\\!\\begin{array}{c}\n n \\\\\n 2k\n \\end{array}\\!\\!\\right)\n\\frac{\\,B_{2k\\,+\\,2}\\,}{\\,(\\,k\\,+\\,1\\,)(\\,2k\\,+\\,1\\,)\\,}\\,=\\,\\frac{\\,n\\,}{\\,(\\,n\\,+\\,1\\,)(\\,n\\,+\\,2\\,)\\,} \n\\end{equation}\non taking note of Euler's\ncelebrated connection [{\\bf{3-8}}]\n\\begin{equation}\n\\zeta\\,(2k)\\,=\\,(-1)^{\\,k\\,+\\,1}(2\\,\\pi)^{2k}\\frac{\\,B_{2k}\\,}{\\,2\\,(2k)\\,!\\,} \\;\\;\\;(\\,k\\,=\\,1\\,,\\,2\\,,\\,3\\,,\\,\\ldots\\,)\n\\end{equation}\nallowing us to displace attention from the even-argument values of Riemann's zeta\nto the correspondingly indexed Bernoulli numbers $\\,B_{2k}\\,.$\n\\parindent=0.25in\n\n\n\\section{Recurrence Reduction}\n\n Recurrence (16) is not quite yet in the desired form (5), but it is easily steered\ntoward this goal. 
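Before embarking upon that reduction, it is perhaps reassuring to confirm (16) at its lowest orders, a small check appended here merely for illustration and relying only on the familiar values $\\,B_{2}\\,=\\,1\/6\\,$ and $\\,B_{4}\\,=\\,-1\/30\\,$: at $\\,n\\,=\\,2\\,$ the left side of (16) reduces to the single term $\\,B_{2}\\,=\\,1\/6\\,,$ matching the right side $\\,2\/(3\\cdot 4)\\,=\\,1\/6\\,,$ while at $\\,n\\,=\\,3\\,$ one finds\n\\[\nB_{2}\\,+\\,3\\,\\frac{\\,B_{4}\\,}{\\,2\\cdot 3\\,}\\,=\\,\\frac{\\,1\\,}{\\,6\\,}\\,-\\,\\frac{\\,1\\,}{\\,60\\,}\\,=\\,\\frac{\\,3\\,}{\\,20\\,}\\,=\\,\\frac{\\,3\\,}{\\,4\\cdot 5\\,}\\,,\n\\]\nin agreement with $\\,n\/[(n+1)(n+2)]\\,$ there.\n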
That process begins by noting that\n\\begin{equation}\n\\left(\\!\\!\\begin{array}{c}\n n \\\\\n 2k\n \\end{array}\\!\\!\\right)\n\\frac{\\,1\\,}{\\,(\\,k\\,+\\,1\\,)(\\,2k\\,+\\,1\\,)\\,}\\,=\\,\n\\left(\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 2k+2\n \\end{array}\\!\\!\\right)\\frac{\\,2\\,}{\\,(\\,n\\,+\\,1\\,)(\\,n\\,+\\,2\\,)\\,}\\;,\n\\end{equation}\n\\newpage\n\\mbox{ }\n\\newline\n\\newline\n\\newline\nwhereupon (16) becomes\n\\begin{equation}\n\\sum_{k\\,=\\,0}^{\\lfloor \\frac{n-1}{2} \\rfloor}\\left(\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 2k+2\n \\end{array}\\!\\!\\right)B_{2k\\,+\\,2}\\,=\\,\\frac{\\,n\\,}{\\,2\\,}\\;.\n\\end{equation}\nNow the advance of index $\\,2k\\,$ in steps of two means that it reaches a maximum value $\\,M=n-1\\,$ when $\\,n\\,$ is odd,\nand one offset instead by two below $n,$ $M=n-2,$ when $\\,n\\,$ is even. At the same time the accepted {\\mbox{null value of odd-index}}\nBernoulli numbers starting with $\\,B_{3}=0\\,$ means that we are free, and self-consistently so, to intercalate all missing indices in steps of one\nand to entertain a common maximum $\\,M=n-1\\,,$ regardless of the parity of $n.$ Altogether then, (19) re{\\\"{e}}merges as\n\\begin{equation}\n\\sum_{k\\,=\\,2}^{n+1}\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n k\n \\end{array}\\!\\!\\right)B_{k}\\,=\\,\\frac{\\,n\\,}{\\,2\\,}\\;,\n\\end{equation}\nor else\n\\begin{equation}\n\\sum_{k\\,=\\,0}^{n+1}\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n k\n \\end{array}\\!\\!\\right)B_{k}\\,=\\,\\frac{\\,n\\,}{\\,2\\,}\\,+\\,\\left\\{\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 0\n \\end{array}\\!\\!\\right)B_{0}\\,+\\,\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 1\n \\end{array}\\!\\!\\right)B_{1}\\right\\}\\;.\n\\end{equation}\nBut now we find that\n\\begin{equation}\n\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 0\n \\end{array}\\!\\!\\right)B_{0}\\,+\\,\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n 1\n \\end{array}\\!\\!\\right)B_{1}\\,=\\,1\\,-\\,\\frac{\\,n+2\\,}{\\,2\\,}\\,=\\,-\\frac{\\,n\\,}{\\,2\\,}\\,,\n\\end{equation}\nwith the effect of reducing (21) to just\n\\begin{equation}\n\\sum_{k\\,=\\,0}^{n+1}\\left(\\!\\!\\!\\begin{array}{c}\n \\,n+2 \\\\\n k\n \\end{array}\\!\\!\\right)B_{k}\\,=\\,0\\;,\n\\end{equation}\nwhich is nothing other than (5).\n\n\n\\section{Appendix: A Fourier Series Grace Note}\n\n\n A somewhat more pedestrian derivation of (14) rests upon consideration of the power series\n\\begin{equation}\n\\log\\,(\\,1-z\\,)\\,=\\,-\\,\\sum_{l\\,=\\,1}^{\\infty}\\,\\frac{\\,z^{\\,l}\\,}{\\,l\\,}\n\\end{equation}\nalong the unit circle $\\,z\\,=\\,e^{\\,i\\,\\vartheta}.$\nSeparation into real and imaginary parts emerges as a pair of Fourier series\n\\begin{equation}\n\\log\\left(\\rule{0mm}{5mm}2\\left|\\,\\sin\\,\\left\\{\\frac{\\,\\vartheta\\,}{\\,2\\,}\\right\\}\\right|\\right)\\,=\\,\n-\\sum_{l\\,=\\,1}^{\\infty}\\,\\frac{\\,\\cos\\,(l\\,\\vartheta)\\,}{\\,l\\,}\n\\end{equation}\nand\n\\begin{equation}\n\\left\\{\\rule{0mm}{5mm}\\frac{\\,\\vartheta-\\pi\\,}{\\,2\\,}\\,,\\,{\\rm{mod}}\\,2\\,\\pi\\right\\}\\,\n=\\,-\\sum_{l\\,=\\,1}^{\\infty}\\,\\frac{\\,\\sin\\,(l\\,\\vartheta)\\,}{\\,l\\,}\\;\\;,\n\\end{equation}\nof which the second is of no interest {\\em{vis-\\`{a}-vis}} our immediate objective. 
We repress all scruples\nhenceforth as to the divergence of series (25) whenever $\\,\\vartheta\\,=\\,0\\;\\,{\\rm{mod}}\\;2\\,\\pi\\,.$\n\n\n\\newpage\n\\mbox{ }\n\\newline\n\n\n Repeated integration by parts {\\em{vis-\\`{a}-vis}} the first of these Fourier series,\nwhen multiplied by the argument power $\\,\\vartheta^{\\,n}\\,,$\nadvances by $\\,\\cos\\rightarrow\\sin\\rightarrow\\cos\\,$ couplets, with end-point contributions\narising only on the second beat, and the argument powers falling\nin steps of two.\\footnote{In particular, this quadrature cadence provides a\nmotivation, alternative to that previously given,\nas to why it is that the floor function affects the upper index cutoff\n$\\,\\lfloor n\/2 \\rfloor\\,$ in both (14) and (27), allowing for unit growth in that\ncutoff only when $\\,n\\,$ {\\em{per se}} advances by two.} One assembles in this manner the general formula\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}\\,\\vartheta^{\\,n}\\log(\\,\\sin(\\vartheta)\\,)\\,d\\,\\vartheta \\, = \\, \n-\\,\\frac{\\,\\pi^{\\,n\\,+\\,1}\\,}{\\,n\\,+\\,1\\,}\\,\\log(2)\\,+\\,\\frac{\\,n\\,!\\,}{\\,2^{\\,n\\,+\\,1}\\,}\n\\,\\sum_{\\,k\\,=\\,1}^{\\lfloor n\/2 \\rfloor}\\,(-\\,1)^{\\,k}\\, \n\\frac{\\,(\\,2\\,\\pi\\,)^{\\,n\\,-\\,2k\\,+\\,1}\\,}\n{\\,(\\,n\\,-\\,2k\\,+\\,1\\,)\\,!\\,}\\,\\zeta\\,(\\,2k\\,+\\,1\\,\n\\end{equation}\nholding good unrestrictedly for $\\,n\\,$ even or odd, and agreeing in every respect with (14). The only\nwrinkle to notice, perhaps, is that the sequence of integrations by parts which underlies (27) terminates,\nat each summation index $\\,l\\,$ in (25), with a term proportional to either\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}\\cos(2l\\vartheta)\\,d\\,\\vartheta\\,=\\,0\n\\end{equation}\nin the event that $\\,n\\,$ is even, or\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}\\vartheta \\cos(2l\\vartheta)\\,d\\,\\vartheta\\,=\\,0\n\\end{equation} \notherwise. Equation (28) is of course obvious whereas (29), while equally true and\nwelcome as such, is, at first blush, mildly surprising. All in all the derivation which underlies (14) is\nfar smoother and less apt to inflict bookkeeping stress, even if it is (27) which seems to rest\non a more elementary underpinning.\n\n It would be truly disappointing were we not able to utilize (25) so as to give an\nessentially one-line, zinger-style proof of (1). This anticipation is readily met simply\nby squaring both sides of (25), with summation indices $\\,l\\,$ and $\\,l\\,'\\,$ figuring\nnow on its right, and noting that when, as here, both $\\,l\\,\\geq\\,1\\,$ and\n$\\,l\\,'\\,\\geq\\,1\\,,$\n\\begin{equation}\n\\int_{\\,0}^{\\,\\pi}\\cos(\\,2\\,l\\,\\vartheta\\,)\\cos(\\,2\\,l\\,'\\vartheta\\,)\\,d\\,\\vartheta\\,=\\,\\frac{\\,\\pi\\,}{\\,2\\,}\\,\n\\delta^{\\,l}_{\\,l\\,'}\\;\\,,\n\\end{equation}\nwith $\\,\\delta^{\\,l}_{\\,l\\,'}\\,$ being the Kronecker delta, unity when its indices match, and zero otherwise.\nIt follows immediately that\n\\begin{equation}\nI\\,=\\,\\frac{\\,1\\,}{\\,2\\,}\n\\int_{\\,0}^{\\,\\pi}\\left\\{\\rule{0mm}{4mm}\\,\\log(\\,2\\,\\sin(\\vartheta)\\,)\\,\\right\\}^{\\,2}\\!d\\vartheta\\,=\\,\n\\frac{\\,\\pi\\,}{\\,4\\,}\\sum_{l=1}^{\\infty}\\frac{\\,1\\,}{\\,l^{\\,2}\\,}\\,=\\,\\frac{\\;\\;\\pi^{\\,3}\\,}{\\,24\\,}\\;\\,,\n\\end{equation}\nand we are done.\n\n\n\\parindent=0.0in\n\n\n\\section{References}\n\n1.\tOmran Kouba, Problem No. 11639, {\\bf{The American Mathematical Monthly}}, Vol. 119, No. 4, April 2012, p. 345.\n\n2.\tLars V. 
Ahlfors, {\\bf{Complex Analysis}}, McGraw-Hill Book Company, Inc., New York, 1953, pp. 130-131.\n\\newpage\n\\mbox{ }\n\\newline\n\\newline\n\\newline\n3.\tTom M. Apostol, {\\bf{Introduction to Analytic Number Theory}}, Springer-Verlag, New York, 1976, p. 266.\n\n4.\tHans Rademacher, {\\bf{Topics in Analytic Number Theory}}, Springer-Verlag, New York, 1973, p. 16.\n\n5.\tHerbert S. Wilf, {\\bf{Mathematics for the Physical Sciences}}, John Wiley \\& Sons, Inc., New York, 1962, pp. 114-116.\n\n6.\tTom M. Apostol, {\\bf{A Primer on Bernoulli Numbers and Polynomials}}, Mathematics Magazine, Vol. 81, No. 3, June 2008, pp. 178-190.\n\n7.\tOmran Kouba, {\\bf{Lecture Notes: Bernoulli Polynomials and Applications}}, arXiv:1309.7560v1 [math.CA] 29 Sep 2013.\n\n8.\tEric W. Weisstein, {\\bf{Bernoulli Number}}, from {\\em{MathWorld}}--A Wolfram Web Resource available\n@ {\\bf{http:\/\/mathworld.wolfram.com\/BernoulliNumber.html}}\n\n9.\tL. Lewin, {\\bf{On the evaluation of log-sine integrals}}, {\\em{The Mathematical Gazette}}, Vol. 42, 1958, pp. 125-128.\n\n10.\tL. Lewin, {\\bf{Polylogarithms and associated functions}}, North Holland, 1981.\n\n11.\tJonathan M. Borwein and Armin Straub, {\\bf{Special values of generalized log-sine integrals}},\n{\\em{Proceedings of ISSAC 2011 (36th International Symposium on Symbolic and Algebraic Computation)}}, 2011, pp. 43-50.\n\n\n12.\tJonathan M. Borwein and Armin Straub, {\\bf{Mahler measures, short walks and log-sine integrals}}, {\\em{Theoretical Computer Science\n(Special issue on Symbolic and Numeric Computation)}}, Vol. 479, No. 1, 2013, pp. 4-21.\n\n\n\n\n\\end{document}\n \t\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\n\n\\tableofcontents\n\n\\section{Introduction}\n\nA large class of interesting and useful asymptotically locally anti-de Sitter (AlAdS) spacetimes have been constructed by starting with AdS in Poincar\\'e coordinates, in which the spacetime is foliated by slices on which the metric is conformal to the Minkowski metric $\\eta_{ab}$, and replacing $\\eta_{ab}$ with any Ricci-flat metric $\\gamma_{ab}$. Thus~$(D-1)$-dimensional vacuum solutions to Einstein's equations straightforwardly give rise to new $D$-dimensional solutions to Einstein's equation with a negative cosmological constant. For example, taking $\\gamma_{ab}$ to be the Schwarzchild black hole yields a black cigar \\cite{Chamblin:1999by}, and $\\gamma_{ab}$ was taken to be a vacuum $pp$-wave in \\cite{Chamblin:1999cj} to construct a wave in the far field of an AdS-brane spacetime. Furthermore, the AlAdS solutions so generated are of particular interest in light of the AdS\/CFT correspondence~\\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, since they are dual to a large-$N$, strongly coupled conformal field theory (CFT) that lives on the spacetime~$\\gamma_{ab}$ (or a conformally rescaled version thereof). For instance, \\cite{Engelhardt:2013jda,Engelhardt:2014mea} took~$\\gamma_{ab}$ to be a vacuum Kasner metric in order to study a cosmological singularity by computing the entanglement entropy and Wightman functions of the CFT\\footnote{In fact, in~\\cite{Engelhardt:2014mea} the CFT lived on a singularity-free conformally rescaled version of Kasner.}.\n\nIn this paper we generalize this construction and show how non-vacuum $(D-1)$-dimensional spacetimes can be used to give AlAdS spacetimes with nonzero stress-energy. 
Specifically, if $\\gamma_{ab}$ is a solution to the Einstein equations in $(D-1)$-dimensions with stress-energy tensor $\\widehat{T}_{ab}$, we show that replacing $\\eta_{ab}$ on the Poincar\\'e slices with $\\gamma_{ab}$ gives a $D$-dimensional AdS solution with stress-energy $T_{ab}$ that satisfies\n\\be\nT_{\\mu\\nu} = \\widehat{T}_{\\mu\\nu},\n\\ee\nwhere~$\\mu,\\nu = 0,\\ldots,D-1$. By appropriate choice of~$\\widehat{T}_{ab}$, we use this construction to find new AlAdS spacetimes that have a physically sensible stress-energy. We also show that these spacetimes can be ``solitonized'' \\cite{Haehl:2012tw} by adding a compact dimension that shrinks smoothly to zero in the AdS bulk. A variation of this non-vacuum construction was performed in \\cite{Cvetic:2000gj,Lu:2000xc,Park:2001jh}, which studied getting supergravity gauge fields ``on the brane'' by doing a Kaluza-Klein reduction of a supergravity theory in the higher dimensional AdS spacetime. A related\nconstruction starting with supergravity fields in ten dimensions was used to explore properties of time dependent boundaries in references\n \\cite{Das:2006dz,Das:2006pw,Awad:2007fj,Awad:2008jf}.\n The relation between their higher and lower dimensional matter theories differs from what we find here, as will be clarified in Section~\\ref{derive}. \n\nAs a special example, we will take~$\\gamma_{ab}$ to be a Friedman-Robertson-Walker (FRW) cosmology\\footnote{From the CFT side, such solutions can be thought of as a generalization of those in~\\cite{Koyama:2001rf}, which took the boundary metric to be a conformally flat FRW geometry.}. The cosmological stress-energy is an isotropic perfect fluid with energy density $\\tilde{\\rho} (t)$ and pressure $ \\tilde{p} (t) $, which are related by an equation of state $\\tilde{p} (t) = w \\tilde{\\rho} (t) $. We will show that in the AdS spacetime, the fluid is still isotropic on the cosmological slices with the same equation of state $p= w\\rho $ and that the pressure in the AdS radial direction is given by $p_y= (3w - 1)\\rho \/2 $. The pressures and density decay towards the AdS boundary as well as in time as the universe expands. A case of special interest is a free, massless scalar field which in a $(D-1)$-dimensional FRW spacetime has the equation of state $\\tilde{p} = \\tilde{\\rho} $, that is, $w=1$. Hence the scalar field generates a $D$-dimensional AdS cosmology which is isotropic in all spatial directions and has corresponding equation of state $p=\\rho$. According to the AdS\/CFT prescription, such a scalar field in the bulk AdS is dual to to a scalar operator in the CFT with vanishing expectation value but nonzero source.\n \nWe will focus on FRW metrics with negatively curved spatial slices, in which case~$\\gamma_{ab}$ approaches the future Milne wedge of Minkowski space at late time, so long as $w>-1\/3$. The resulting AlAdS solution therefore approaches either the Poincar\\'e patch of AdS or the AdS soliton at late times, which we interpret as an approach to equilibrium. We use these solutions to perturbatively study the approach to equilibrium of the boundary stress tensor and the ADM charges. \nInterestingly, we find that the latter \\textit{decrease} to their equilibrium values at late times, with the time dependent correction proportional to the dimensionless density parameter of the universe $\\Omega$. 
\nFor example, the mass of the solitonized cosmology decays as\n\\be\n {\\cal M} = \\left( 1-{1\\over 2} \\Omega \\right) {\\cal M}^{(0)} + \\cdots,\n \\ee\n where ${\\cal M}^{(0)} $ is the mass of the static soliton and~$\\cdots$ stands for subleading terms at late times.\nIn the context of spacetimes approaching the AdS soliton, this result is consistent with the energy conjecture of \\cite{Horowitz:1998ha} that the AdS soliton is the lowest energy spacetime with the prescribed asymptotic structure.\n\nA second application of our cosmological AdS solutions will be to compute the behavior of the entanglement entropy $S$ of a spherical region in the CFT as the spacetime evolves to equilibrium. We use the covariant prescription of~\\cite{Hubeny:2007xt}, which is a generalization of the static prescription~\\cite{Ryu:2006ef}. This states that the entanglement entropy~$S_\\mathcal{R}$ of a region~$\\mathcal{R}$ of a holographic CFT is related to the area of a special bulk surface~$\\Sigma$. In general,~$S_\\mathcal{R}$ is UV-divergent, but it can be regulated and the behavior of this regulated entropy~$S_\\mathrm{ren}$ is studied. We find that at late times,~$S_\\mathrm{ren}$ decays as a power law in the proper time of an asymptotically static observer.\n\nOur results add to and complement the substantial body of work in the literature on vacuum AlAdS spacetimes in which the metric on the AdS boundary is time dependent. For instance,~\\cite{Fischler:2013fba} constructed an elegant solution in which the metric on each Poincar\\'e slice is a de Sitter cosmology. Several studies in the general category of holographic cosmology apply coordinate transformations to AdS black holes to produce cosmological boundaries \\cite{Lidsey:2009xz,Erdmenger:2012yh,Ghoroku:2012vi,Banerjee:2012dw,\nCompere:2008us,Binetruy:1999hy,Kajantie:2008hh,Apostolopoulos:2008ru}, and resulting metrics have been analyzed as describing an expanding boost-invariant plasma \\cite{Janik:2005zt,Janik:2006ft,Kajantie:2008jz,Culetu:2009xm,Pedraza:2014moa}.\nSignificant analytical work has also been done on out of equilibrium thermal properties of field theories using various AdS black hole spacetimes, including \\cite{Janik:2010we,Lamprou:2011sa,Figueras:2009iu,\nTetradis:2009bk,Heller:2011ju,Fischetti:2012ps,Fischetti:2012vt,Beuf:2009cx}. Discussion and further references can be found in \\cite{Marolf:2013ioa}. Our work adds a set of new \nnon-vacuum AlAdS spacetimes which allow a wide range of boundary metrics.\n \nThis paper is organized as follows. Section \\ref{derive} contains the derivation of the new AlAdS solutions, as well as an analysis of the scalar field and perfect fluid cases. In section \\ref{sec:bst}, the leading time dependent corrections to the boundary stress tensor and the ADM charges for a solitonic cosmology in an open universe are found. In section \\ref{entropy} the perturbation to the entanglement entropy is calculated, and section \\ref{conclusion} contains discussion and concluding remarks. Unless otherwise specified, we take Newton's constant~$G_N = 1$. \n\n\n\n\\section{AdS and AdS soliton cosmologies}\\label{derive}\n\nWe start by considering AlAdS spacetimes of the general form\n\\be\n\\label{metric}\nds_D^2 = dy^2 + e^{2y\/l}\\gamma_{\\mu\\nu } (x^\\alpha ) dx^\\mu dx^\\nu,\n\\ee\nwhere~$l$ is the AdS length and as before~$\\mu,\\nu = 0,1,\\dots, D-1$. 
For $\\gamma_{\\mu\\nu} = \\eta_{\\mu\\nu}$ this is AdS in Poincar\\'e coordinates with cosmological constant given by \n\\be\n\\Lambda = -{(D-1)(D-2)\\over 2l^2},\n\\ee\nso we will refer to hypersurfaces~$y = \\mathrm{const.}$ as ``Poincar\\'e slices''. As mentioned above, it is well known that the Einstein equations with cosmological constant $\\Lambda$ are still satisfied for any Ricci-flat $\\gamma_{\\mu\\nu}(x^\\rho)$. In the particular case where~$\\gamma_{\\mu\\nu}$ is a cosmological metric, we refer to \\eqref{metric} as an AdS cosmology. \n\nIn general, spacetimes of the form~\\eqref{metric} with~$\\gamma_{\\mu\\nu} \\neq \\eta_{\\mu\\nu}$ suffer from a singularity at the Poincar\\'e horizon~$y \\to -\\infty$. This singularity can be resolved by introducing an additional compact direction~$v$:\n\\be\n\\label{solmetric}\nds_D^2 = {dy^2 \\over F(y) } + e^{2y\/l}\\left( F(y) dv^2 + \\gamma_{\\mu\\nu } (x^\\rho ) dx^\\mu dx^\\nu \\right),\n\\ee\nwhere $F(y) = 1- e^{ -(D-1)(y-y_+)\/l }$, and now~$\\mu,\\nu= 0,1,\\dots,D-2$. The metric~\\eqref{solmetric} is capped off at~$y = y_+$, so that the Poincar\\'e horizon (and its possibly singular behavior) is removed. Regularity at this cap fixes the period of~$v$ to be\n\\be\\label{period}\nv \\sim v + \\frac{4\\pi l e^{-y_+\/l}}{D-1}.\n\\ee\nNow, if $\\gamma_{\\mu\\nu } = \\eta_{\\mu\\nu}$, then~\\eqref{solmetric} is the usual AdS soliton metric~\\cite{Horowitz:1998ha}\\footnote{Although this is no longer Poincar\\'e AdS, we will continue to call surfaces of~$y = \\mathrm{const.}$,~$v = \\mathrm{const.}$ Poincar\\'e slices.}. However, it was noted in \\cite{Haehl:2012tw} that the Einstein equations with negative cosmological constant $\\Lambda$ will still be satisfied for any Ricci-flat~$\\gamma_{\\mu\\nu}$. In analogy with~\\eqref{metric}, if~$\\gamma_{\\mu\\nu}$ is a cosmological metric, we will refer to~\\eqref{solmetric} as an AdS soliton cosmology.\n\nThe solutions~\\eqref{metric} and~\\eqref{solmetric} provide a simple construction of AlAdS spacetimes with any desired Ricci-flat boundary metric~$\\gamma_{\\mu\\nu}$\\footnote{Technically, the boundary of~\\eqref{solmetric} is~$\\gamma_{\\mu\\nu}$ cross the circle direction~$v$.}. Our goal is to generalize the above results to isotropic FRW cosmological metrics $\\gamma_{\\mu\\nu}$; as such cosmologies are not (in general) Ricci-flat, we will require the introduction of matter fields.\n\n\n\n\\subsection{Massless Scalar Field}\n\nWe obtain a direct analogue of the results for vacuum metrics by considering a free massless scalar field $\\phi$ in the AdS and AdS soliton spacetimes above. The full $D$-dimensional Einstein-massless scalar equations are\n\\be\n\\label{einstein}\nG_{ab}=-\\Lambda g_{ab}+8\\pi T_{ab},\\qquad \\nabla^2\\phi=0,\n\\ee\nwhere~$G_{ab}$ is the Einstein tensor and \n\\be\nT_{ab}={1\\over 8\\pi}[(\\nabla_a\\phi)\\left(\\nabla_b\\phi\\right)-{1\\over 2}g_{ab}g^{cd}(\\nabla_c\\phi)\\left(\\nabla_d\\phi\\right)]\n\\ee\nis the stress-energy of a free massless scalar field in any dimension. \nConsider a lower-dimensional metric $\\gamma_{\\mu\\nu}(x^\\rho)$ and scalar field configuration $\\phi(x^\\mu)$ that solve the Einstein-scalar equations\n\\be\n\\label{slice}\n{\\widehat G}_{\\mu\\nu} = 8\\pi{\\widehat T}_{\\mu\\nu}, \\qquad \\widehat\\nabla^2\\phi=0,\n\\ee\nwhere hatted objects are computed with respect to the metric~$\\gamma_{\\mu\\nu}$. 
Furthermore, let\n\\be\\label{smetricdef}\ns_{\\mu\\nu}= e^{2y\/l} \\gamma_{\\mu\\nu}\n\\ee\nbe the induced metric on a Poincar\\'e slice. Finally, we pause to note that the scalar field stress energy satisfies the important property that from the full~$D$-dimensional point of view, the induced stress tensor on each Poincar\\'e slice is equal to the lower-dimensional stress tensor of the scalar field on~$\\gamma_{\\mu\\nu}$:\n\\be\n\\label{Tcondition}\nT_{\\mu\\nu} = \\widehat{T}_{\\mu\\nu}.\n\\ee\n\nNow, consider first the metric (\\ref{metric}). The $D$-dimensional Ricci tensor for $g_{ab}$ is related to the Ricci tensor of $s_{\\mu\\nu}$ by\n %\n\\be\n\\label{ricci}\nR_{\\mu\\nu} [g] = R_{\\mu\\nu} [s] - \\frac{D-1}{l^2} \\, s_{\\mu\\nu}, \\quad R_{yy} = \\frac{D-1}{l^2}.\n\\ee\n When these components are assembled into the $D$-dimensional Einstein tensor and substituted into the left hand side of the Einstein field equation \\eqref{einstein}, one sees that the terms which do not involve the curvature of $s_{\\mu\\nu}$ are equal to the cosmological constant term on the right hand side. If $\\gamma_{\\mu\\nu}$ is Ricci flat, then the metric (\\ref{metric}) is a solution with $T_{ab}=0$. If instead $\\gamma_{\\mu\\nu}$ is a solution to (\\ref{slice})\n with nonzero ${\\widehat T}_{\\mu\\nu}$,\n then it is then straightforward to show that the metric~\\eqref{metric} constructed from~$\\gamma_{\\mu\\nu}$ will satisfy the full equations of motion \\eqref{einstein}, with the full bulk scalar field taken to be~$\\phi(x^\\mu)$ (which, in particular, is independent of~$y$). The additional nonzero component of the\n stress-energy tensor is $8\\pi T_{yy}=-{1\\over 2} s^{\\mu\\nu} \\nabla_\\mu \\phi \\nabla_\\nu \\phi$. The construction with the solitonized metric~\\eqref{solmetric} proceeds in a similar way, and one finds that $T_{yy}$ is the same and $T_{vv} = g_{vv} T_{yy}$.\n\nOne may naturally ask if such a straightforward foliation can be extended to other types of matter as well. For instance, one might hope to replace the scalar field with a Maxwell field and obtain multi-black hole solutions analogous to those of~\\cite{Kastor:1992nn}. This is not the case: a key ingredient in the proof was the property \\eqref{Tcondition} of the scalar field stress-energy. \nThis property holds for the massless scalar field stress tensor but not {\\it e.g.} for Maxwell fields, or even for a scalar field with nonzero potential $V(\\phi)$.\n\nA massless scalar field that depends only on time can serve as the source for an FRW cosmology on the Poincar\\'e slices of either \\eqref{metric} or the soliton metric \\eqref{solmetric}. For instance, setting $d\\hat s^2=\\gamma_{\\mu\\nu}dx^\\mu dx^\\nu$ and specializing to $4$-dimensional cosmologies with flat spatial sections we have\n\\be\nd\\hat s^2= -dt^2 + \\left({t\\over t_0}\\right)^{2\/3}(dx^2+dy^2+dz^2),\\quad \\phi=-\\sqrt{{2\\over 3}} \\, \\ln\\left({t\\over t_0}\\right),\n\\ee\nwith corresponding stress tensor equal to that of a perfect fluid obeying the stiff matter equation of state $\\tilde{p}=\\tilde{\\rho}$. From the holographic perspective, the AdS\/CFT dictionary tells us that the bulk scalar field is dual to a scalar operator in the CFT. 
To be specific, the near-boundary behavior of a massless scalar field in AdS takes the form\n\\be\n\\phi(y) = \\left(\\phi_0 + \\cdots\\right) + e^{-(D-1)y\/l} \\left(\\phi_{(D-1)} + \\cdots \\right),\n\\ee\nwhere~$\\phi_0$ and~$\\phi_{(D-1)}$ are independent parameters that are fixed by the boundary conditions, and~$\\cdots$ represent subleading terms in~$e^{-y\/l}$. The coefficient~$\\phi_0$ should be interpreted as the source of a scalar operator~$\\mathcal{O}$ of dimension~$D-1$, whose expectation value is~$\\left\\langle \\mathcal{O} \\right\\rangle = \\phi_{(D-1)}$. Our solutions correspond to the special case~$\\phi_{(D-1)} = 0$. Note that this is unconventional: the operator~$\\mathcal{O}$ is being sourced, but nevertheless has a zero expectation value.\n\n\n\\subsection{Perfect Fluid Matter}\n\nNoting that property~\\eqref{Tcondition} of the scalar field stress tensor was the key element in the above construction, we may extend the range of AdS and AdS soliton cosmologies by considering more general types of stress-energy that satisfy this condition. We will shortly focus on perfect fluids, but we begin by assuming just that the metric $\\gamma_{\\mu\\nu}$ satisfies Einstein's equation on a Poincar\\'e slice with some stress-energy ${\\widehat T}_{\\mu\\nu}$. We can then analyze the content of the full $D$ dimensional Einstein equations,\n beginning with the AdS type metrics (\\ref{metric}), in the following way. \n\nUsing the relations between the components of the Ricci tensor (\\ref{ricci}) as in the previous subsection, we find that\n the AdS-type metric \\eqref{metric} solves the Einstein equation~\\eqref{einstein} with stress-energy given by\n\\be\n\\label{fullads}\nT_{\\mu\\nu} = {\\widehat T}_{\\mu\\nu},\\qquad T_{yy}={1\\over D-3}\\, T ,\n\\ee\nwhere $T=s^{\\mu\\nu}\\,T_{\\mu\\nu}$. \n A similar analysis for the AdS soliton-type metric \\eqref{solmetric} shows that the Einstein equation \\eqref{einstein} is solved with stress-energy given by\n\\be\n\\label{fullsoliton}\nT_{\\mu\\nu} = {\\widehat T}_{\\mu\\nu},\\qquad T_{yy}={1\\over D-4} \\, g_{yy}T, \\qquad T_{vv}={1\\over D-4} \\, g_{vv}T.\n\\ee\nFor example, one could embed a textbook example of a four-dimensional spherical static star into AdS. According to\n(\\ref{fullsoliton}) the pressures in the radial AdS and compact soliton directions of this cigar-star will be equal to each other,\nbut different from the radial pressure in the Poincar\\'e plane.\n\nWe now specialize to the case of AdS and AdS soliton cosmologies, taking\nthe metric $\\gamma_{\\mu\\nu}$ to have the FRW form\n\\be\n\\label{cosmometric}\nd\\hat s^2 = -dt^2 + a^2 (t)\\, d\\Sigma _k ^2,\n\\ee\nwhere $d\\Sigma _k ^2 $ is a metric on a space with constant curvature $k=0,\\pm 1$. We also restrict our attention to $4$-dimensional Poincar\\'e slices, so that the AdS cosmologies \\eqref{metric} have overall dimension $D=5$ and the AdS soliton cosmologies \\eqref{solmetric} have dimension $D=6$. \n Finally, we assume that the stress-energy ${\\widehat T}_{\\mu\\nu}$ on the slice has the perfect fluid form\n\\be\n{\\widehat T}_{\\mu\\nu} = (\\hat\\rho+\\hat p)\\hat{u}_\\mu \\hat{u}_\\nu +\\hat p \\gamma_{\\mu\\nu},\n\\ee\nwith $\\gamma^{\\mu\\nu}\\hat{u}_\\mu \\hat{u}_\\nu=-1$ and equation of state $\\hat p=w\\hat\\rho$. Note that the strong energy condition requires~$w \\geq -1\/3$. 
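With this fluid form it is straightforward to make the radial pressure explicit, a small consistency check spelled out here for convenience: since $T_{\\mu\\nu}=\\widehat{T}_{\\mu\\nu}$ and $s^{\\mu\\nu}=e^{-2y\/l}\\gamma^{\\mu\\nu}$, the trace entering \\eqref{fullads} and \\eqref{fullsoliton} is $T=s^{\\mu\\nu}T_{\\mu\\nu}=e^{-2y\/l}\\left(3\\hat p-\\hat\\rho\\right)=(3w-1)\\,e^{-2y\/l}\\hat\\rho$, and dividing by $D-3=2$ for the $D=5$ AdS cosmology, or by $D-4=2$ for the $D=6$ soliton cosmology, gives in both cases the radial pressure $(3w-1)\\,e^{-2y\/l}\\hat\\rho\/2$ quoted below.\n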
Important special cases are $w=0$ for dust, $w=1\/3$ for radiation, and $w=1$ for the massless free scalar field; indeed, note that such a stress tensor obeys the condition~\\eqref{Tcondition}\\footnote{Values of $w$ different from~1 could be obtained from an interacting scalar field, but as such interactions would require the introduction of a scalar potential, they are not compatible with our ansatz. We will keep $w$ general, with the understanding that this is a bulk, hydrodynamic description.}.\n\nThe cosmological scale factor on the Poincar\\'e slices evolves according to the Friedmann equations\n\\be\n\\label{frweqs}\nd(\\hat \\rho\\, a^3 ) = - \\hat p\\, d(a^3 ) , \\qquad\n\\left( {\\dot a\\over a}\\right)^2 ={8\\pi \\hat\\rho\\over 3}-{k\\over a^2}.\n\\ee\nThe full stress energy tensor $T_{ab}$ for AdS cosmologies, given by \\eqref{fullads}, now has the form of an anisotropic fluid, with a distinct equation of state parameter for the pressure in the $y$-direction. Moreover, the energy density and pressures depend on the radial coordinate~$y$, as well as on time. One finds that the energy density is given by $\\rho= e^{-2y\/l}\\hat\\rho$, while the pressures tangent to the Poincar\\'e slices satisfy an equation of state in the $D$-dimensions of the same form as in $(D-1)$, namely $p=w\\rho$. In the $y$-direction one finds that $p_y = w_y\\rho$ with\n\\be\nw_y ={ (3w -1)\\over 2}.\n\\ee\nWith a soliton, equation \\eqref{fullsoliton} implies that the pressure in the compact $v$-direction is equal to $p_y$, so also\n$p_v= w_y\\rho$. To summarize, the stress-energy for the AdS soliton cosmology is\n\\be\n\\rho= e^{-2y\/l}\\hat\\rho (t), \\quad p=w\\rho, \\quad p_y =p_v = { (3w -1)\\over 2} \\, \\rho.\n\\ee\nSome observations are as follows. For $w=1$, which corresponds to the massless scalar field discussed above, $w_y=1$ as well so\nthe pressure in the full spacetime is isotropic. For radiation ($w=1\/3$), the stress-energy on the Poincar\\'e slices is traceless and the pressure orthogonal to the slices vanishes, so the stress tensor remains traceless. For $w<1\/3$, the orthogonal pressure is negative.\n\n\n\\subsection{Open AdS and AdS Soliton Cosmologies}\n\nWe will be particularly interested in AdS and AdS soliton cosmologies with open ($k=-1$) FRW universes on the Poincar\\'e slices. In this case, provided that the equation of state parameter is in the range $w>-1\/3$ (that is, that the strong energy condition holds), the energy density $\\hat\\rho$ will fall off faster than $1\/a^2$ and at late times the scale factor will grow linearly in time. At sufficiently late times, the metric $\\gamma_{\\mu\\nu}$ on the Poincare slices then approaches\n\\be\nd\\hat s^2_\\mathrm{late} = -dt^2 + t^2\\, d\\Sigma _{-1} ^2,\n\\ee\nwhich is flat spacetime in Milne coordinates.\nThe full AdS and AdS soliton cosmological metrics \\eqref{metric} and \\eqref{solmetric} then respectively approach the AdS or AdS soliton metrics at late times. The late-time behavior of these cosmologies can therefore be thought of as an approach to equilibrium; in particular, the CFT dual can be thought of as an expanding isotropic plasma equilibrating at late time. 
The solutions~\\eqref{metric} and~\\eqref{solmetric} correspond to the plasma being in a deconfined or confined phase, respectively.\n\n\n\n\\section{ADM Mass and Boundary Stress Tensor for AdS Soliton Cosmologies}\n\\label{sec:bst} \n\nFrom the field theoretic side, the late-time behavior of the AdS cosmology with open spatial slices described above is interpreted as a relaxation of the CFT to the vacuum state. This relaxation can be studied by computing the late-time behavior of CFT observables. \nAs a first examination of the properties of these AdS cosmologies, we look at how the cosmological expansion impacts the boundary stress tensor and the\nADM mass and tensions of the AdS soliton (which corresponds to the confined phase of the dual field theory; see e.g.~\\cite{Mateos:2007ay}). In general we lack a definition of the ADM charges that will apply at the boundary $y=\\infty$ with a time dependent boundary metric. However, as we will see the special case of an open cosmology with matter obeying the strong energy condition~$w>-1\/3$ allows for a perturbative computation of how the ADM charges of the soliton approach their static values at late times.\n\nThe static AdS soliton has negative ADM mass, reflecting the negative Casimir energy of the boundary field theory with a compact direction, and is conjectured to be the lowest energy solution among spacetimes with these asymptotics \\cite{Horowitz:1998ha}.\nIn addition to its mass, the AdS soliton has nonzero ADM tensions \\cite{El-Menoufi:2013pza,El-Menoufi:2013tca}. The tension along the compact $v$-direction in (\\ref{solmetric}) is found to be large and positive, while the other three spatial tensions have negative values, such that the trace of the ADM charges (sum of the mass and the tensions) vanishes. One can think of the static soliton solution as an equilibrium configuration. In this section we will compute the approach to equilibrium of the boundary stress tensor, as well as the mass and tensions for an open AdS soliton cosmology. We will see that the mass decreases to the static soliton value, a result that is consistent with the minimum mass conjecture with matter obeying the strong energy condition.\n\nAs noted above, the FRW boundary metric does not have a time-translation symmetry and therefore the ADM mass is not defined in the usual sense. However, at late times the FRW cosmologies with negatively curved spatial slices approach Minkowski spacetime. We can then define a time dependent ADM mass in this late time limit by writing the metric as static AdS plus time dependent perturbations that decay to zero. These perturbations to the metric determine the late time corrections to the asymptotic constant value of the mass of the soliton. \n\n\nConsider an AdS soliton cosmology \\eqref{solmetric} with an open FRW metric \n\\be\n\\label{openfrw}\nd\\hat s^2 = - dt^2 + a^2 (t) \\left( d\\chi ^2 + \\sinh ^2 \\chi \\, d\\Omega_{(2)} ^2 \\right)\n\\ee\non the Poincar\\'e slices. We assume that $w>-1\/3$, so that in the late time limit $a(t)\\simeq t$. 
Define new coordinates on the slices according to\n\\be\n\\label{coordtrans}\nT =a(t ) \\cosh \\chi, \\quad R= a(t) \\sinh \\chi.\n\\ee\nNote that since $R\/T =\\tanh \\chi$ it follows that $R\/T \\leq 1$ with equality when $\\chi \\rightarrow \\infty$.\nIn terms of these new coordinates the AdS soliton cosmology has the form\n\\be\n\\label{latemetric}\nds^2 = {dy^2 \\over F(y) } + e^{2y\/l} \\left[\\, F(y)\\, dv^2 -dT^2 (1-\\delta \\tilde{g}_{TT} ) +dR^2 (1+ \\delta \\tilde{g}_{RR} ) + 2 \\, \\delta \\tilde{g}_{TR}\\, dR\\, dT +\nR^2 d\\Omega_{(2)} ^2 \\right]\n\\ee\nwhere \n$F(y)$ is given in (\\ref{solmetric}) and the functions $\\delta \\tilde{g}_{TT}$, $\\delta \\tilde{g}_{RR}$ and $\\delta \\tilde{g}_{TR}$, which give the deviance of the metric on the Poincar\\'e slices from flat, may be written as\n\\be\n\\label{fndef}\n\\delta \\tilde{g}_{TT} = \\Omega \\, \\frac{1}{1-(R\/T)^2}, \\quad\n \\delta \\tilde{g}_{RR} = \\Omega \\, \\frac{(R\/T)^2}{1-(R\/T)^2}, \\quad \\delta \\tilde{g}_{TR} = \\Omega \\, \\frac{R\/T}{1-(R\/T)^2}.\n\\ee\nHere $\\Omega$ is the dimensionless density parameter of the open FRW metric,\n\\be\\label{omegadef}\n\\Omega = {8\\pi \\hat \\rho \\over 3 H^2} = \\left(1- {1\\over \\dot{a} ^2 } \\right),\n\\ee\nand $H=\\dot a\/a$ is the Hubble parameter. For an open universe $\\Omega<1$ and approaches zero in the far future. Hence, the metric \\eqref{latemetric} approaches the AdS soliton at late times. We emphasize that the expressions in \\eqref{fndef} are exact up to this point.\n\n\nTo proceed further, the density parameter $\\Omega$ must be expressed in terms of the asymptotically Minkowski coordinates $(T,R)$, which requires the expression for the scale factor $a(t)$ at late times. To obtain this expression, first we substitute the equation of state~$p=w \\rho$ into the Friedman equations \\eqref{frweqs}, which allows the energy density to be solved for in terms of the scale factor, giving\n\\be\\label{laterho}\n\\hat\\rho (t)= {3\\bar{\\Omega}_* H_*^2\\over 8\\pi (H_*a(t))^{3(1+w)}} \\ , \\mbox{ where } \\bar{\\Omega}_* \\equiv {\\Omega_* \\over (1- \\Omega_* )^{3(w+1)\/2 }}\n\\ee\nand $H_*$ and $\\Omega_*$ are the Hubble and density parameters evaluated at a fiducial time $t=t_*$.\nThe density and scale factor evaluated at $t_*$ are given by $\\hat{ \\rho}_* = 3\\Omega_* H_*^2\/8\\pi$ and $a_* = 1\/ ( H_* \\sqrt{1-\\Omega_* }) $ respectively. The equation for the scale factor then reduces to\n\\be\n\\dot a^2 = 1 +{ \\bar\\Omega_*H_*^2 a_*^{3(1+w)}\\over a^{1+3w}}.\n\\ee\nFor $w>-1\/3$, this reduces in the limit of large scale factor to $\\dot a^2 \\simeq1$, giving $a(t)\\simeq t$ in the late time limit. 
Including a subleading correction of the form~$a(t) \\simeq t + \\alpha t^\\beta$ yields\n\\begin{subequations}\n\\label{latea}\n\\bea\na(t) &\\simeq t - {\\bar\\Omega_* \\over 6wH_* ( H_* t ) ^{3w } }, \\quad w\\neq 0, \\\\\na(t) &\\simeq t + {\\bar\\Omega_* \\over 2 H_* } \\ln \\left( {t \\over H_* } \\right) ,\\quad w = 0,\n\\eea\n\\end{subequations}\nwhich by~\\eqref{omegadef} yield\n\\be\\label{omegat}\n\\Omega (t) \\simeq {\\bar\\Omega_*\\over (H_* t) ^{(1+3w )}}.\n\\ee\n\nThe expressions~\\eqref{latea} can be inverted and combined with the transformation to~$(R,T)$ coordinates to yield the coordinate transformation from~$t$ to~$(R,T)$, valid at late times, including terms up to order $R^2 \/ T^2$,\n\\begin{subequations}\n\\bea\nt&\\simeq T+{\\bar\\Omega_*\\over 6wH_*(H_*T)^{3w}}-{R^2\\over 2T},\\qquad w\\neq 0, \\\\\nt&\\simeq T -{\\bar\\Omega_*\\over 2H_*} \\ln\\left( {T \\over H_* } \\right) -{R^2\\over 2T},\\qquad w= 0,\n\\eea\n\\end{subequations}\nwhich then give $\\Omega$ as a function of $T$ and $R$:\n\\begin{subequations}\n\\label{lateomega}\n\\bea\n\\Omega &\\simeq {\\bar\\Omega_*\\over ( H_* T) ^{3w+1 } } \n\\left(1- {(1+3w)\\bar\\Omega_*\\over 6w (H_* T ) ^{3w +1 } } + {(1+3w ) R^2 \\over 2T^2 } \\right), \\quad w\\neq 0, \\\\\n\\Omega &\\simeq {\\bar\\Omega_*\\over H_* T} \\left( 1 +\n {\\hat\\Omega_* \\over 2 H_* T } \\ln\\left( {T \\over H_* } \\right) +{ R^2 \\over 2 T^2} \\right), \\quad w = 0.\n\\eea\n\\end{subequations}\nThis is our desired result.\n\n\nWe are now prepared to compute the leading late time corrections to the boundary stress tensor density, which we will denote by\n $\\tau _{\\mu\\nu}$. Let $K _{\\mu\\nu}$ be the extrinsic curvature of the AdS boundary. In the boundary stress tensor formalism a boundary\n action is defined that includes an integral over $K$ plus\n geometrical counterterms that are constructed from the metric on the boundary $s_{\\mu\\nu}$, defined in equation \\eqref{smetricdef}.\n These terms include a cosmological constant,\n the scalar curvature of $s_{\\mu\\nu}$, and potentially higher derivative counter terms as needed. \n The stress tensor density results from \nvarying the boundary action with respect to $s^{\\mu\\nu}$. The coefficients of the counter terms are\n chosen to cancel divergences that occur in $\\tau_{\\mu\\nu}$ and are dimension dependent. One finds the result \\cite{Balasubramanian:1999re,Myers:1999psa,de Haro:2000xn}\n\\be\n\\label{bst}\n8\\pi \\tau_{\\mu\\nu} = \\sqrt{-s} \\left( K _{\\mu\\nu} - Ks_{\\mu\\nu} +{ D-2 \\over l} s_{\\mu\\nu} +{1\\over D-3} G _{\\mu\\nu} [s] + \\cdots \\right),\n\\ee\nwhere the $\\cdots$ indicate higher derivative terms in the Riemann tensor of $s_{\\mu\\nu}$, which we will show are subdominant at late times.\nWe work with the boundary stress tensor density because the volume element of the late time metric changes at leading order, and \nalso because this is the appropriate quantity to integrate to get the ADM charges.\n\n In the case of the static AdS soliton, the metric on the boundary is flat and the terms in (\\ref{bst})\n depending on the curvature of $s_{\\mu\\nu}$ all vanish. This is no longer true for\n the cosmological AdS spacetimes. \n The Einstein tensor term in (\\ref{bst}) \n contributes a time-dependent piece to $\\tau_{\\mu\\nu}$ which goes to zero at late times like the energy density $\\hat\\rho$ in (\\ref{laterho}). 
Additional time dependence\n in $\\tau_{\\mu\\nu}$ comes from the volume element in (\\ref{bst}) which goes like\n %\n \\be\\label{volchange}\n \\sqrt{-s} = \\left(1-{1\\over 2} \\, \\Omega \\right) \\sqrt{-s_{(0)} },\n\\ee\nwhere $\\sqrt{-s_{(0)} }$ denotes the volume element in the static AdS soliton, and the late time behavior of the density parameter\n$\\Omega$ is given in (\\ref{omegat}).\nComparing the decay rates of $\\hat\\rho$ and $\\Omega$, one finds that the \ncontribution of $G_{\\mu\\nu}$ to the boundary stress tensor density is subdominant at late times compared to that of the volume element. The contributions of higher derivative terms in (\\ref{bst}) will decay\n even more rapidly. The leading contributions to the boundary stress tensor density are then readily found by combining\n the results for the static AdS soliton in \\cite{El-Menoufi:2013pza} with equation \\eqref{volchange} giving\n \n %\n \\be\\label{solitonbst}\n \\tau_{\\mu\\nu } \n = {e^{5y_+\/l} \\over 16\\pi l} \\left( 1-{1\\over 2} \\, \\Omega \\right) \\mathrm{diag} (-1, 1 ,1, 1, -4),\n \\ee\n %\n where the coordinates are ordered according to $(t, x_1, x_2 , x_3 , v)$. \nHence the decaying time dependent corrections to the static values of $\\tau_{\\mu\\nu } $ are simply proportional to $\\Omega$, the density parameter of the cosmology.\n \n \nWe now use the results above to determine the ADM mass and tensions for the AdS soliton cosmologies. Comparison with \\cite{El-Menoufi:2013pza} shows that the integrands of the ADM charges in AdS coincide with the first three terms in the boundary stress tensor in equation (\\ref{bst}), and the components of $\\tau_{\\mu\\nu} $ above then just need to be integrated to obtain the ADM charges. In the static coordinates, the density parameter $\\Omega$ depends on $R$ as well as $T$, so the integrand is not a constant. This does not mean that $R=0$ is a special point, since any location in the homogeneous open cosmology could equally well be chosen as the origin. For the static AdS soliton, the ADM charges are made finite by taking the planar geometry to be periodically identified, with {\\ $-L_j \/2 \\leq x^j \\leq L_j \/2$. For notational brevity let the asymptotic volume be $V= L_1 L_2 L_3 L_v$ where $L_v$ is the range of compact coordinate $v$ given in (\\ref{period}). In the limit that the plane is infinite, the relevant energy is the mass per unit volume obtained by dividing the total mass by $V$, and similarly for the spatial tensions.\n \nFinally, it is important to note that the static radial coordinate $R$ has the range $0\\leq R\\leq T$, with the upper limit corresponding to $\\chi \\rightarrow \\infty$ in the coordinate transformation (\\ref{coordtrans}).\nThe integrals for the ADM charges are then over a box of length $L< T$, and at the end we divide out the volume of the box.\nDefine the spatial average of the density parameter $\\Omega$ at time $T$ by\n\\be\n\\label{avomega}\n\\ev{\\Omega} \n= {\\bar\\Omega_*\\over V ( H_* T) ^{3w+1 } } \\int dx_1 dx_2 dx_3 dv \\, \\Omega (R , T).\n\\ee\nIn the late time limit, we substitute the approximate expression for~$\\Omega$ given in (\\ref{lateomega}).\nFor the general case $w\\neq 0$, this yields\n\\be\\label{intomega}\n\\ev{\\Omega} = {\\bar\\Omega_*\\over V ( H_* T) ^{3w+1 } } \\left(1- {c_1\\over T ^{3w +1 } } + { c_2L^2 \\over T^2 }\\right),\n\\ee \nwhere the coefficients $c_1 , c_2$ can be read off of the expansion of $\\Omega$ in equation (\\ref{lateomega}). 
One sees that\nthe terms proportional to $c_1$ and $c_2$ make increasingly small contributions and so will be dropped in subsequent formulae. This also allows us to treat the cases~$w = 0$,~$w \\neq 0$ simultaneously, since the leading-order term in~\\eqref{intomega} is identical to that obtained in the special case\n $w=0$.\n\nFollowing the conventions of past work ({\\it e.g.} \\cite{El-Menoufi:2013pza}), we give the ADM tension rather than a pressure, where tension is simply minus the pressure\\footnote{This convention\nis natural in asymptotically flat static spacetimes where the gravitational tension can be shown to be positive \\cite{Traschen:2003jm}.}.\n Assembling the pieces, at late times the mass and tensions of the soliton in the metric \\eqref{latemetric} are\n\\begin{subequations}\n\\label{admcharges}\n\\bea\n{\\cal M} = {\\cal T} _j & = -{ V \\over 16 \\pi l } e^{5y_+\/l}\\left( 1 -{1\\over 2}\\ev{\\Omega} \\right) \\ , \\quad j=1,2,3, \\\\\n{\\cal T}_v &= {4V \\over 16 \\pi l }e^{5y_+\/l} \\left( 1 -{1\\over 2} \\ev{\\Omega} \\right).\n\\eea\n\\end{subequations}\nThe expressions for the ADM charges have the same structure as the components of the boundary stress tensor, relaxing to the \nequilibrium values like $\\ev{\\Omega} $. \nSince~$\\ev{\\Omega} > 0$, the mass of the AdS soliton cosmology decreases as $ \\ev{\\Omega} $ goes to zero, approaching \nits negative static value at late times, consistent with the energy bound conjectured in \\cite{Horowitz:1998ha}. The tension ${\\cal T}_v$ around the compact dimension increases to its static positive value, while the trace ${\\cal M} + {\\cal T}_v +\\Sigma _j {\\cal T}_j $ vanishes\nthroughout the relaxation process.\n\n\n\n\\section{Entanglement Entropy}\n\\label{entropy}\n\nThe new AdS cosmological solutions allow us to compute how the entanglement entropy of a region in the dual CFT approaches equilibrium. To perform the computation, we use the holographic prescription \\cite{Ryu:2006ef,Hubeny:2007xt}, which proposes that the entanglement entropy of a region~$\\mathcal{R}$ (called the entangling region) in the boundary CFT is equal to\n\\be\n\\label{sa}\nS_\\mathcal{R} = \\frac{\\mathrm{Area}\\left[\\Sigma\\right]}{4G_N},\n\\ee\nwhere~$\\Sigma$ (referred to as the entangling surface) is the minimal-area extremal surface in the bulk spacetime anchored to~$\\partial\\mathcal{R}$ and homologous to~$\\mathcal{R}$. Note that in this section we have restored Newton's constant $G_N$. We will also keep~$w$ general, though we emphasize that only the case~$w = 1$ (wherein the bulk matter is a scalar field) has a well-understood CFT dual.\n\nParametrizing~$\\Sigma$ as~$X^a(\\sigma^i)$, with~$\\sigma^i$ coordinates on~$\\Sigma$, $i=1,..., D-2$, the area functional is\n\\be\n\\label{eq:area}\nA = \\int \\sqrt{h} \\, d^{D-2} \\sigma,\n\\ee\nwhere~$h$ is the determinant of the induced metric on the surface\n\\be\n\\label{eq:inducedh}\nh_{ij} = g_{ab} \\partial_i X^a \\partial_j X^b.\n\\ee\n\nIn general, extremizing~\\eqref{eq:area} to obtain the entangling surface is difficult to accomplish analytically, and the AdS soliton cosmologies\nare no exception. However, we can make progress by working in the non-solitonized AdS cosmology~\\eqref{metric} and noting that the calculations performed there should approximate those in the AdS soliton cosmology, as long as the relevant surfaces do not extend too deeply into the spacetime. 
The boundary metric is then\n\\be\n\\label{eq:bndryflat}\nds^2_\\partial = -dT^2 (1-\\delta \\tilde{g}_{TT}) + dR^2 (1+\\delta \\tilde{g}_{RR}) - 2\\delta \\tilde{g}_{TR} \\, dR \\, dT + R^2 d\\Omega_{(2)}^2,\n\\ee\nand the full metric is given in (\\ref{metric}) with $\\gamma_{\\mu\\nu}$ equal to $ds^2_\\partial$.\nWorking in pure AdS has the significant advantage that the extremal surface is known for a spherical entangling region on the boundary \\cite{Ryu:2006ef}. This allows us to use perturbative techniques to compute the time dependent correction to the area as the metric approaches the static AdS spacetime in the future.\n\nIn order to compute the late-time behavior of the entanglement entropy we work to first order in powers of $R\/T$ in~$\\delta \\tilde{g}_{TT}$,~$\\delta \\tilde{g}_{RR}$, and~$\\delta \\tilde{g}_{TR}$, given in equations \\eqref{fndef} and \\eqref{lateomega}. We take the boundary of the entangling region to be a sphere of radius~$R_0$ at some time~$T_0$; the corresponding entangling surface~$\\Sigma$ in pure AdS was found in \\cite{Ryu:2006ef}. We may then perturb off of this solution to compute the leading correction to the area. There are two natural options for how this sphere should evolve in time: (i) the sphere can be of fixed proper size in the asymptotically static coordinates so that~$R_0$ is held constant as~$T_0$ advances; or (ii) the sphere can be comoving, so\nthat fluid elements on the boundary of the sphere follow geodesics, and~$R_0$ grows like~$a(t)$. We will discuss both choices below.\n\n\n\\subsection{Zeroth Order Solutions}\n\nAt zeroth order, the boundary metric~\\eqref{eq:bndryflat} is just Minkowski space. Parametrizing the surface by~$z \\equiv l e^{-y\/l}$ and the coordinates on the sphere, the area functional~\\eqref{eq:area} is\n\\be\nA = 4\\pi l^3 \\int _\\epsilon ^1 dx \\, {(1-x^2 )^{1\/2} \\over x^3 },\n\\ee\nwhere $\\epsilon = z_\\mathrm{cut} \/ R_0 $ and $z_\\mathrm{cut}$ is a UV cutoff to regulate the integral. The corresponding entangling surfaces were calculated in~\\cite{Ryu:2006ef} and are given by \n\\be\n\\label{zerosurf}\n\\Sigma_{0}: \\quad z^2 + R^2 = R_0 ^2 \\ , \\quad T = T_0,\n\\ee\nwith area \n\\be\n\\label{zeroarea}\nA^{(0)} = l^3 \\left[ {A_\\mathrm{static} \\over 2 z_\\mathrm{cut} ^2} - \\pi \\ln\\left( \\frac{A_\\mathrm{static} }{ \\pi z_\\mathrm{cut}^2 }\\right)-\\pi \\right],\n\\ee\nwhere~$A_\\mathrm{static} = 4\\pi R_0 ^2$ is the area of~$\\partial\\Sigma _0 $. The first term in the above expression denotes the usual area law growth of the entanglement entropy, while the coefficient of the logarithmically divergent term provides a UV-independent measure of the entanglement entropy.\n\n\n\\subsection{First Order Corrections: Approach to Equilibrium}\n\nNow, consider corrections to~\\eqref{zeroarea} which arise both from perturbations to the metric and to the surface~$X^a$. Write each as a zeroth order piece plus a perturbation,\n\\be\n\\label{realdeal}\ng_{ab} = g_{ab}^{(0)} + \\delta g_{ab} \\ , \\quad \\ X^a (\\sigma_i ) = X^a _{(0)} + \\delta X^a .\n\\ee\nTo first order the volume element on the surface becomes\n\\be\n\\label{deltaa}\nh = h^{(0)} \\left( 1 + \\mathrm{Tr}\\left[ \\delta g_{ab}\\partial_i X_{(0)}^a \\partial_j X_{(0)}^b + 2 g_{ab} ^{(0)} \\partial_i \\delta X^a \\partial_j X_{(0)}^b\\right]\\right).\n\\ee\nHowever, the second term in the trace is a variation of the surface in the background metric, and so this integrates to zero since the background surface is extremal. 
Thus the first-order change in the area is governed by the perturbation to the metric:\n\\be\n\\label{deltaatwo}\n\\delta A ={1\\over 2} \\int \\sqrt{h^{(0)} } \\, \\mathrm{Tr}\\left[ \\delta g_{ab}\\partial_i X_{(0)}^a \\partial_j X_{(0)}^b\\right] \\, d^{d-1} \\sigma.\n\\ee\nThe final step is to substitute the expressions for the metric perturbations (\\ref{fndef}) into the metric equations (\\ref{metric}), (\\ref{eq:bndryflat}).\n Using $R^\\prime (z) = -z\/R $ on\nthe zeroth order surface \\eqref{zerosurf}, the induced metric in the perturbed spacetime is given by\n\\be\n\\label{latethree}\n(g_{ab} ^{(0)} +\\delta g_{ab} ) \\partial_i X_{(0)}^a \\partial_j X_{(0)}^b d\\sigma^i d\\sigma^j|_{\\Sigma_0} \n= {l^2 \\over z^2 } \\left\\{ \\left( {R_0^2 \\over R^2 } + {z^2\\Omega \\over T_0^2} \\right) dz^2\n+ R^2 d\\Omega_{(2)} ^2 \\right\\},\n\\ee\nwhere $R=\\sqrt{ R_0 ^2 -z^2 }$. Using this expression in (\\ref{deltaatwo}) and substituting $\\Omega$ from (\\ref{lateomega}) gives\n the first-order correction to the area of the entangling surface\n\\bea\n\\label{latearea}\n\\delta A &= { 4\\pi l^3 \\bar{\\Omega}_* \\over ( H_* T_0 ) ^{3w+1} } \\left( { R_0 ^2 \\over 2T_0^2} \\right) \\int _\\epsilon ^1 dx {(1-x^2 )^{3\/2} \\over x } \\\\\n\t\t &= { l^3 \\bar{\\Omega}_* \\over 4 (H_* T_0 ) ^{3w+3}} H_*^2 A_\\mathrm{static} \\left( \\ln\\left( {A_\\mathrm{static}\\over \\pi z_\\mathrm{cut} ^2}\\right) -{4\\over 3} \\right).\n\\eea\nThis result is valid at sufficiently late times such that $H_* T_0 \\gg 1$ and $T_0 \\gg R_0 $. \n\nThe entanglement entropy, including the leading late time contribution, follows from substituting\n(\\ref{latearea}) and (\\ref{zeroarea}) into the entropy-area relation in equation (\\ref{sa}). \n The conversion from area to entropy contains the prefactor $l^3\/G^{(5)}_N$, which can be translated into the parameters of the dual CFT. According to the AdS\/CFT correspondence, the solutions~\\eqref{metric} in~$D = 5$ are dual to an~$\\mathcal{N} = 4$ supersymmetric Yang-Mills theory on the FRW spacetime \\eqref{cosmometric}. Following the discussion in \\cite{Ryu:2006ef}, we consider ${\\cal N} =4 \\ SU(N)$ SYM theory on AdS$_5 \\times S^5$, \n in which case the AdS radius, the ten dimensional Newton's constant, and the five dimensional Newton's constant are identified with the string coupling, string tension, and $N$ according to $l^4 = 4\\pi g_s (\\alpha^\\prime)^2 N$, $G^{(10)}_N = 8\\pi^6 g_s^2 (\\alpha^\\prime)^4 $, and \n$G^{(5)}_N =G_N^{(10)} \/ l^5$. This gives~$l^3\/G^{(5)}_N = 2N^2\/\\pi $, and the entanglement entropy is then\n\\be\\label{eq:Stot}\nS = \\frac{N^2 }{2\\pi }\\left[\\frac{A_\\mathrm{static}}{2z_\\mathrm{cut}^2} - \n\\pi \\ln \\left( {A_\\mathrm{static}\\over \\pi z_\\mathrm{cut} ^2}\\right) -\\pi\n+ \\frac{ \\bar{\\Omega}_* H_*^2 A_\\mathrm{static}}{4( H_* T )^{3w+3} }\\left( \\ln \\left( { 2A_\\mathrm{static}\\over \\pi z_\\mathrm{cut} ^2}\\right) - {4\\over 3} \\right) + \\cdots\n\\right],\n\\ee\nwhere $\\cdots$ denotes terms that are subleading at late time, and the subscript on $T$ has been dropped for simplicity. The coefficient of the logarithmic term is invariant under rescalings of the cutoff, so it serves as a regularized measure~$S_\\mathrm{ren}$ of the entanglement entropy. One finds\n\\be\n\\label{eq:Sren}\n\\delta S_\\mathrm{ren} \\simeq \\frac{N^2 }{8\\pi} \\frac{ \\bar{\\Omega}_* H_*^2 A_\\mathrm{static}}{( H_* T )^{3w+3}}.\n\\ee\nNote that this is positive, which means that~$S$ \\textit{decreases} to its equilibrium value. 
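As a quick check on the sign and decay rate in \\eqref{eq:Sren}, the script below simply evaluates that closed-form correction for a few values of $w$ and $T$ and reads off the log-log slope. It is only an illustration of the formula, with every parameter value ($N$, $\\bar{\\Omega}_*$, $H_*$, $A_\\mathrm{static}$) chosen arbitrarily, and is not an independent computation of the extremal surface.
\\begin{verbatim}
# Evaluate delta S_ren = (N^2/8pi) * Omega_bar_* H_*^2 A_static / (H_* T)^(3w+3)
# for arbitrary illustrative parameters and confirm it is positive and decays
# with the power -(3w+3): -3 for dust, -4 for radiation, -6 for the scalar field.
import numpy as np

N, Omega_bar, H_star, A_static = 10.0, 0.1, 1.0, 4.0 * np.pi

def delta_S_ren(T, w):
    return (N**2 / (8.0 * np.pi)) * Omega_bar * H_star**2 * A_static / (H_star * T) ** (3.0 * w + 3.0)

T = np.array([1.0e2, 1.0e3, 1.0e4])
for w in (0.0, 1.0 / 3.0, 1.0):
    dS = delta_S_ren(T, w)
    slope = np.diff(np.log(dS)) / np.diff(np.log(T))
    print(f"w = {w:0.2f}:  dS_ren > 0: {bool(np.all(dS > 0))},  "
          f"slope = {slope[0]:.2f} (expected {-(3.0 * w + 3.0):.2f})")
\\end{verbatim}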
This behavior differs markedly from that of quenches in CFTs~\\cite{Calabrese:2004eu,AbajoArrastia:2010yt,Albash:2010mv,\nBalasubramanian:2010ce,Caceres:2012em,\nAlishahiha:2014cwa,Alishahiha:2014jxa}, wherein the entanglement entropy grows until it saturates. There is a temptingly simple and compelling physical reason for the decrease of $S$ found in this calculation: in the Cartesian coordinates~$T,R$ there is a nonzero radial flux proportional to $g_{TR}$, so the decrease of the entropy in the ball $R \\leq R_0$ can be interpreted as due to an energy flow out of the ball. In particular, in the quasi-particle picture of entanglement entropy propagation~\\cite{Calabrese:2004eu}, entanglement is carried by entangled particle pairs; a net flow of such particles out of the entangling ball~$R \\leq R_0$ leading to a decrease in entanglement entropy is consistent with this picture. Alternatively, note that at late times, our bulk solutions approach the Minkowski vacuum, and therefore the CFT evolves from an excited state to the zero-temperature vacuum state. We would therefore naturally expect probes of correlation (such as entanglement entropy) to decay in the late time limit\\footnote{We thank Juan Pedraza for this observation.}.\n\nThe time dependent contribution decays as a power law, and the time scale for the decay is set by the Hubble parameter $H_*$. The power depends on the equation of state. For example, for dust the correction goes to zero like $T^{-3} $, and for a free massless scalar field like $T^{-6} $. The time dependence in $\\delta S$ is analogous to the result of \\cite{Engelhardt:2013jda}, in which the entropy of a strip in a vacuum-Kasner AdS spacetime was found to have a power law behavior, in both cases a reflection of the time evolution of the cosmology.\n\nTurning to the amplitude of $\\delta S_\\mathrm{ren}$, we see that this is set by an interesting combination of factors. At a sufficiently late fiducial time $t_*$, $\\Omega_* \\ll 1$, so that $H_* ^2 \\bar{\\Omega}_* \\simeq 8 \\pi G_N ^{(5)}\\hat\\rho_* \/ 3 $.\nHence the dimensionless combination $H_* ^2 \\bar{\\Omega}_* A_\\mathrm{static}$ has the interpretation of the non-vacuum energy, measured in Planck units, that is\ncontained in a shell of Planck-length width surrounding the sphere. That is, the entangling modes of the perturbation \nact like they are concentrated on the boundary of the sphere. This is a reflection of the fact that the change in the area of the \n extremal surface comes from the metric perturbations near the surface.\n\n \n\\subsection{The Cosmological View}\n\nAn alternative way to interpret the time evolution of the entanglement entropy is to take the boundary sphere to be comoving, so that\npoints on the boundary sphere follow geodesics. \n In the cosmological coordinates \\eqref{openfrw} this means that the sphere is at a fixed coordinate $\\chi= \\chi _0$. The extremal surface $\\Sigma_0$ does not lie within a slice of constant cosmological time, but it does intersect the boundary at a constant time, as can be seen by \nevaluating \\eqref{coordtrans} at $z=0$.
\nTransforming the zeroth order surface (\\ref{zerosurf}) to the cosmological coordinates gives\n %\n \\be\\label{cosmosurf}\n \\Sigma_0 : \\quad a(t) \\cosh \\chi = a( t_b ) \\cosh \\chi_0 \\ , \\quad z^2 + \\cosh^2 \\chi_0 \\tanh ^2 \\chi = a(t_b )^2 \\sinh ^2 \\chi _0.\n \\ee\nLet\n\\be\nA_\\mathrm{geod}(t_b ) = 4\\pi a( t_b )^2 \\sinh ^2 \\chi _0\n\\ee\nbe the proper area of the comoving sphere on the boundary at $t_b$.\n Then in terms of the cosmological coordinates the zeroth order area (\\ref{zeroarea}) becomes\n\\be\\label{areacosmo}\nA^{(0)} (t_b ) = l^3 \\left[ { A_\\mathrm{geod}\\over 2 z_\\mathrm{cut} ^2} - \\pi \\ln \\left({ A_\\mathrm{geod} \\over \\pi z_\\mathrm{cut}^2 } \\right) -\\pi \n \\right] \n\\ee\nand the time dependent correction (\\ref{latearea}) is\n\\be\\label{dacosmo}\n\\delta A = l^3 \\bar{\\Omega}_* H_*^2 (4\\pi a_*^2 \\sinh ^2 \\chi _0 ) \\left( {a_* \\over a(t_b ) } \\right)^{3w+1} \n \\left( \\ln\\left( { A_\\mathrm{geod} \\over 2\\pi z_\\mathrm{cut}^2 }\\right) -{2\\over 3} \\right).\n\\ee\nHence the time dependent piece redshifts to zero as \n $(1+z_\\mathrm{b} )^{-(3w+1)} $ where $1+z_b = a(t_b ) \/ a_* $ is the cosmological redshift.\nThe power in the decaying term is different than for the static sphere (\\ref{latearea}) because $A_\\mathrm{geod} $ increases\n as $a^2$. \n \n So far the expressions for the area (\\ref{areacosmo}) and (\\ref{dacosmo}) are just translations from\nthe asymptotically static coordinates to the cosmological coordinates. \nThe difference \nfrom the previous section, in which the area of the boundary sphere is held constant,\n comes when one follows the time evolution by considering increasing values of the boundary time $t_b$.\nThe area of the boundary co-moving sphere increases like $a^2 (t_b )$, so \n although (the UV-independent part of) $\\delta A$ is positive and decreasing to zero, the total entropy increases with $t_b$. This brings up the important issue of \n the range of validity of the expressions in cosmological time. As discussed in\n section \\ref{entropy}, if the results are to be good approximations to the results in a solitonized spacetime, one needs to\n restrict to surfaces that do not penetrate too deeply into the bulk, which precludes taking~$R \\sim t_b \\sinh \\chi _0$ too large. This means\n that the validity of (\\ref{latearea}) is restricted to times that are not so large that the proper radius of the boundary sphere\n approaches the length scale set by the soliton, that is, we need $ t_b \\sinh \\chi _0 \\ll l e^{-y_+ \/l}$. The situation here\n is similar to that in \\cite{Engelhardt:2014mea}.\n \n \n\n\\section{Discussion}\n\\label{conclusion}\n\nIn this paper, we have shown how to construct AdS cosmologies that satisfy the Einstein equations with a nonzero stress tensor and negative cosmological constant. Our solutions were built as foliations of lower-dimensional solutions; the induced metric on each of these hypersurfaces itself satisfies Einstein's equations with a nonzero stress tensor. This construction has the advantage that the boundary metric of the AlAdS solution is just (conformal to) the induced metric on each hypersurface. 
Therefore, this construction offers us significant freedom in constructing AlAdS spacetimes with a boundary metric of our choosing.\n\nThe particular AdS cosmologies that we have constructed take the stress tensor to be that of a perfect fluid obeying the strong energy condition; for the equation of state~$w = 1$, the fluid is sourced by a massless noninteracting scalar field. Moreover, we have focused on the specific case in which the spatial slices of the FRW cosmologies are negatively curved, as such FRW cosmologies then approach the Milne patch of Minkowski space at late times. The AdS cosmology constructed from these slices therefore approaches the Poincar\\'e patch of AdS at late times, while the AdS soliton cosmology approaches the static AdS soliton.\n\nSuch solutions are especially interesting because they allow us to perturbatively calculate the behavior of physically relevant quantities at late times. For instance, we have calculated the late-time perturbation to the ADM mass of the AdS soliton cosmology, and have found this perturbation to be\n\\be\n\\label{deltam}\n\\delta{\\cal M} = -{ \\Omega {\\cal M}\\over 2},\n\\ee\nwhere~${\\cal M}$ is the unperturbed mass and~$\\Omega$ is the dimensionless density parameter of the FRW cosmology, which goes to zero at late times in the solutions we are interested in. Since ${\\cal M}$ is negative for the soliton this implies that the mass \\textit{decreases} to the mass of the static AdS soliton. Hence this result is consistent with the energy conjecture of \\cite{Horowitz:1998ha} that the AdS soliton is the lowest energy spacetime with the prescribed asymptotic structure. We found the ADM tensions to be modified in a similar manner to~${\\cal M}$.\n\nMoreover, our solutions also have immediate applicability to large-$N$, strongly coupled CFTs via the AdS\/CFT correspondence. Indeed, our AdS cosmologies are dual to CFTs living on an FRW cosmology, while the massless, noninteracting scalar field in the bulk is dual to a scalar operator in the CFT with zero expectation value but nonzero source. This atypical behavior can be tied to the fact that our AdS cosmologies are singular at the Poincar\\'e horizon. This singularity is removed by ``solitonizing'': introducing a compactified direction in the bulk that caps off the geometry. This cap amounts to putting the CFT in a confined phase. \n\nAs a probe of the behavior of the CFT on these FRW spacetimes, we study the entanglement entropy~$S$ of a sphere of constant radius. Using perturbative techniques to find the leading time dependent correction to the entropy, we find that the regulated entanglement entropy $S_\\mathrm{ren}$ decays as a power law to its equilibrium value. The power depends on the equation of state of the fluid. Note that this decay to equilibrium is starkly different from the behavior of entanglement entropy after a quench, when the entanglement entropy \\textit{grows} to its equilibrium (thermal) value. However, note that our solutions at late time approach Poincar\\'e AdS, which has zero temperature; this is drastically different from the end state of a quench, which in the bulk is usually modelled by the injection of energy, forming a black hole of finite temperature.\n\nSeveral issues and questions are raised by these examples. First, can these solutions be generalized to include planar black holes in the bulk spacetimes? 
Such solutions could conceivably be used to model the approach of a CFT on an FRW cosmology to thermal equilibrium with a nonzero temperature~$T$, much as here we have modeled the approach to equilibrium at~$T = 0$.\nSecond, for reasons of tractability our entropy calculations have been perturbative analyses in the AdS cosmology. It would also be interesting to study the entanglement entropy of the CFT at all times, in both the AdS cosmologies and especially in the AdS soliton cosmologies, to see if there is any behavior that was not captured by our perturbative methods. Such calculations would most likely be numerical, so we leave them for the future. \n\n\n\n\\section*{Acknowledgements}\n\nThe authors with to thank Netta Engelhardt and Don Marolf for useful discussions. We also thank Juan Pedraza for useful comments on an earlier version of this paper. This project was supported in part by the National Science Foundation under Grant No PHY11-25915, by FQXi grant FRP3-1338, and by funds from the University of California.\n\n\n\n\n\\bibliographystyle{JHEP}\n\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzlefc b/data_all_eng_slimpj/shuffled/split2/finalzzlefc new file mode 100644 index 0000000000000000000000000000000000000000..93cf6748f7935b00cfeeb96ce4362291b89e54e7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzlefc @@ -0,0 +1,5 @@ +{"text":"\\section{Omitted Proofs in \\Cref{sec:composition-theorems}}\n\\label{app:CLT}\n\n\n\n\n\nWe first collect the basic properties of $\\otimes$ listed in \\Cref{sub:composition_theorem}.\n\\begin{proposition}\\label{prop:tensor_properties}\n\tThe product $\\otimes$ defined in \\Cref{def:product} has the following properties:\n\t\\begin{enumerate}\n\t\\setlength\\itemsep{0.15em}\n\t\\setcounter{enumi}{-1}\n\t\\item The product $\\otimes$ is well-defined.\n\t\\item The product $\\otimes$ is commutative and associative.\n\t\\item If $g_1\\geqslant g_2$, then $f\\otimes g_1 \\geqslant f\\otimes g_2$.\n\t\\item $f \\otimes \\Id = \\Id \\otimes f = f$.\n\n\t\\item $(f\\otimes g)^{-1} = f^{-1}\\otimes g^{-1}$.\n\n\n\t\\item For GDP, $G_{\\mu_1} \\otimes G_{\\mu_2} \\otimes \\cdots \\otimes G_{\\mu_n} = G_{\\mu}$, where $\\mu = \\sqrt{\\mu_1^2+\\cdots+\\mu_n^2}$.\n\t\\end{enumerate}\n\\end{proposition}\nProperty 0 and 2 are already proved in \\Cref{app:self}. So we only prove 1,3,4,5 here.\n\\begin{proof}[Proof of Properties (1,3,4,5)]\n\tWe will assume $f=T(P,P'), g =T(Q,Q')$ in the entire proof. The upshot is that\n\\[T(P,P')\\otimes T(Q,Q') = T(P\\times Q,P'\\times Q').\\]\n\\begin{enumerate}\n\t\\item Commutativity:\n\t\t$$f\\otimes g = T(P,P')\\otimes T(Q,Q') = T(P\\times Q,P'\\times Q') \\stackrel{(a)}{=} T(Q\\times P,Q'\\times P') =T(Q,Q') \\otimes T(P,P') = g\\otimes f.$$\n\t\tIn step $(a)$, we switch the order of the components of the product, which obviously keeps the trade-off function unchanged.\n\n\t\tAssociativity:\n\t\tLet $h=T(R,R')$.\n\t\t\\begin{align*}\n\t\t\t(f\\otimes g)\\otimes h &= T(P\\times Q,P'\\times Q') \\otimes T(R,R') = T(P\\times Q\\times R,P'\\times Q'\\times R')\\\\\n\t\t\tf\\otimes (g\\otimes h) &= T(P,P')\\otimes T(Q\\times R,Q'\\times R') = T(P\\times Q\\times R,P'\\times Q'\\times R')\n\t\t\\end{align*}\n\t\tSo $(f\\otimes g)\\otimes h = f\\otimes (g\\otimes h)$.\n\t\\item Let $R$ be an arbitrary degenerate distribution, i.e. $R$ puts mass 1 on a single point. 
Then $\\Id = T(R,R)$ and\n\t\t\\[f\\otimes \\Id = T(P\\times R, P'\\times R) = T(P,P') = f.\\]\n\t\\item\n\t\tBy \\Cref{lem:symmetry}, taking the inverse amounts to flipping the arguments of $T(\\cdot,\\cdot)$.\n\t\t\\begin{align*}\n\t\t\t(f\\otimes g)^{-1} = T(P'\\times Q',P\\times Q) = T(P',P)\\otimes T(Q',Q) = f^{-1}\\otimes g^{-1}.\n\t\t\\end{align*}\n\t\\item Let $\\bmu=(\\mu_1,\\mu_2)\\in\\R^2$ and $I_2$ be the $2\\times 2$ identity matrix. Then\n\t\t\\begin{align*}\n\t\t\tG_{\\mu_1}\\otimes G_{\\mu_2} &= T\\big(\\N(0,1),\\N(\\mu_1,1)\\big)\\otimes T\\big(\\N(0,1),\\N(\\mu_2,1)\\big)\\\\\n\t\t\t&= T\\big(\\N(0,1)\\times \\N(0,1),\\N(\\mu_1,1)\\times \\N(\\mu_2,1)\\big)\\\\\n\t\t\t&= T\\big(\\N(0,I_2),\\N(\\bmu,I_2)\\big)\n\t\t\\end{align*}\n\t\tAgain we use the invariance of trade-off functions under invertible transformations. $\\N(0,I_2)$ is rotation invariant, so we can rotate $\\N(\\bmu,I_2)$ so that the mean is $(\\sqrt{\\mu_1^2+\\mu_2^2},0)$. Continuing the calculation,\n\t\t\\begin{align*}\n\t\t\tG_{\\mu_1}\\otimes G_{\\mu_2} &=T\\big(\\N(0,I_2),\\N(\\bmu,I_2)\\big)\\\\\n\t\t\t&= T\\big(\\N(0,1)\\times \\N(0,1),\\N(\\sqrt{\\mu_1^2+\\mu_2^2},1)\\times \\N(0,1)\\big)\\\\\n\t\t\t&= T\\big(\\N(0,1),\\N(\\sqrt{\\mu_1^2+\\mu_2^2},1)\\big)\\otimes T\\big(\\N(0,1),\\N(0,1)\\big)\\\\\n\t\t\t&= G_{\\sqrt{\\mu_1^2+\\mu_2^2}}\\otimes \\Id\\\\\n\t\t\t&= G_{\\sqrt{\\mu_1^2+\\mu_2^2}}.\n\t\t\\end{align*}\n\\end{enumerate}\n\\end{proof}\n\n\n\n\t\n\nThe following proposition explains why our central limit theorems need $f_n$ to approach $\\Id$.\n\n\\begin{proposition} \\label{prop:trivial_limit}\n\tFor any trade-off function $f$ that is not $\\Id$,\n\t$$\\lim_{n\\to+\\infty} f^{\\otimes n}(\\alpha)=0, \\quad\\forall\\alpha\\in(0,1].$$\n\tIn fact, the convergence is exponentially fast.\n\\end{proposition}\n\\begin{proof}[Proof of \\Cref{prop:trivial_limit}]\n\tFor any trade-off function $f$, let $P,Q$ be probability measures such that $\\F(P,Q)=f$. The existence is guaranteed by \\Cref{prop:trade-off}. It is well-known that $1-\\mathrm{TV}(P,Q)$ is the minimum sum of type I and type II error, namely,\n\t$$1-\\mathrm{TV}(P,Q) = \\min_{\\alpha\\in[0,1]} \\alpha+f(\\alpha).$$\n\tWe claim that the following limit suffices to prove the theorem:\n\t\\begin{equation}\\label{eq:dtv}\n\t\t\\lim_{n\\to\\infty} \\mathrm{TV}(P^n,Q^{ n})=1.\n\t\\end{equation}\n\tTo see why it suffices, recall that by definition $\\F(P^n,Q^{ n})=f^{\\otimes n}$. Hence\n\t$$1-\\mathrm{TV}(P^n,Q^{ n}) = \\min_{\\alpha\\in[0,1]} \\alpha+f^{\\otimes n}(\\alpha).$$\n\tLet $\\alpha_n$ be the type I error that achieves the minimum in the above equation, i.e.\n\t$$\\alpha_n+f^{\\otimes n}(\\alpha_n) = 1-\\mathrm{TV}(P^n,Q^{ n}).$$\n\tThe total variation limit \\eqref{eq:dtv} implies $\\alpha_n\\to0$ and $f^{\\otimes n}(\\alpha_n)\\to0$. For each $n$, consider the piecewise linear function that interpolates $(0,1),(\\alpha_n,f^{\\otimes n}(\\alpha_n))$ and $(1,0)$, which will be denoted by $h_n$. By the convexity of $f^{\\otimes n}$ we know that $f^{\\otimes n}\\leqslant h_n$ in $[0,1]$. It suffices to show that $h_n(\\alpha)\\to 0,\\forall \\alpha\\in(0,1]$. Since $\\alpha_n\\to0$, for large enough\t$n$, $h_n(\\alpha)$ is evaluated on the lower linear segment of $h_n$. So $h_n(\\alpha)\\leqslant h_n(\\alpha_n)\\leqslant f^{\\otimes n}(\\alpha_n) \\to 0$.
This yields the desired limit of $f^{\\otimes n}$.\\\\\n\tNow we use Hellinger distance $H^2(P,Q) := \\E_Q\\big[(1-\\sqrt{\\frac{P}{Q}})^2\\big]$ to show the total variation limit \\eqref{eq:dtv}.\\\\\n\tAn elementary inequality relating total variation and Hellinger distance is\n\t$$\\frac{1}{2}H^2(P,Q)\\leqslant \\mathrm{TV}(P,Q)\\leqslant H(P,Q).$$\n\tAnother nice property of Hellinger distance is it tensorizes in the following sense:\n\t$$1-\\frac{H^2(P^n,Q^{ n})}{2} = \\Big(1-\\frac{H^2(P,Q)}{2}\\Big)^n.$$\n\t$f$ is not the diagonal $\\alpha\\mapsto1-\\alpha$, so $P\\neq Q$. Hence $\\mathrm{TV}(P,Q)> 0$. By the second inequality in the sandwich bound, $H^2(P,Q)>0$. By the tensorization property, $H^2(P^n,Q^{ n})\\to 2$. By the first inequality in the sandwich bound and that $\\mathrm{TV}$ is bounded by 1 we have\n\t$$ \\frac{1}{2} H^2(P^n,Q^{ n})\\leqslant \\mathrm{TV}(P^n,Q^{ n})\\leqslant1.$$This shows $\\mathrm{TV}(P^n,Q^{ n})\\to 1$ and completes the proof.\n\\end{proof}\n\n\n\n\nNow we set out the journey to prove the Berry-Esseen style central limit theorem \\ref{thm:Berry}. We first restate the theorem.\n\\berryrep*\nOur approach is to consider the log-likelihood ratio between the distributions of the composition mechanism on neighboring datasets. This log-likelihood ratio can be reduced to the sum of \\textit{independent} components that each correspond to the log-likelihood ratio of a trade-off function in the tensor product. This reduction allows us to carry over the classical Berry--Esseen bound to Theorem~\\ref{thm:Berry}.\n\nAs the very first step, let's better understand the functionals $\\mathrm{kl},\\kappa_2$ and $\\bar{\\kappa}_3$ used in the statement of the theorem. We focus on symmetric $f$ with $f(0)=1$, although some of the following discussion generalizes beyond that subclass. Recall that\n\\begin{align*}\n\t\\mathrm{kl}(f) &= -\\int_0^1\\log |f'(x)|\\diff x\\\\\n\n\t\\kappa_2(f)&=\\int_0^1\\big(\\log |f'(x)|\\big)^2\\diff x\\\\% \\int_0^1\\big(\\log g(x)-\\mu\\big)^2\\diff x\\\\\n\t\\bar{\\kappa}_3(f)&=\\int_0^1\\big|\\log |f'(x)|+\\mathrm{kl}(f)\\big|^3\\diff x\n\n\n\\end{align*}\nFirst we finish the argument mentioned in \\Cref{sub:a_berry_esseen_type_of_clt} that these functionals are well-defined and take values in $[0,+\\infty]$. For $\\kappa_2$ and $\\bar{\\kappa}_3$, as well as the non-central version $\\kappa_3$, the argument is easy because the integrands are non-negative.\n\nFor $\\mathrm{kl}$, the only possible singularities of the integrand is 0 and 1. If 1 is singular then $\\log |f'(x)|\\to -\\infty$ near 1. This is okay because the functionals are allowed to take value $+\\infty$. We need to rule out the case when 0 is a singularity and $\\int^{\\epsilon}_0\\log |f'(x)|\\diff x=+\\infty$. That cannot happen because $\\log |f'(x)|\\leqslant |f'(x)|-1$ and $|f'(x)| = -f'(x)$ is integrable in $[0,1]$ as it is the derivative of $-f$, an absolute continuous function. \nNon-negativity of $\\mathrm{kl}$ follows from Jensen's inequality.\n\nIn the discussion of \\Cref{prop:fdiv}, we showed that $\\mathrm{kl}(T(P,Q)) = D_{\\mathrm{KL}}(P\\|Q)$. This explains the name of this functional. In fact, $\\kappa_2$ also corresponds to a divergence called \\textit{exponential divergence} (\\cite{eguchi1985differential}).\n\nWe introduce a notation that will be useful in the calculation below. 
For a trade-off function $f$, let $Df$ be a function with the following expression:\n\\[Df(x) = |f'(1-x)| = -f'(1-x).\\]\nIn fact, this is the density introduce in the proof of \\Cref{prop:trade-off}.\n\nBy a simple change of variable, the three functionals can be re-written as\n\\begin{align*}\n\t\\mathrm{kl}(f) &= -\\int_0^1\\log Df(x)\\diff x\\\\\n\n\t\\kappa_2(f)&=\\int_0^1\\big(\\log Df(x)\\big)^2\\diff x\\\\% \\int_0^1\\big(\\log g(x)-\\mu\\big)^2\\diff x\\\\\n\t\\bar{\\kappa}_3(f)&=\\int_0^1\\big|\\log Df(x)+\\mathrm{kl}(f)\\big|^3\\diff x\n\n\n\\end{align*}\n\nThe following ``shadows'' of the above functionals will appear in the proof:\n\\begin{align*}\n\t\\mathrm{lk}(f) &:= \\int_0^1Df(x)\\log Df(x)\\diff x\\\\\n\t\\tilde{\\kappa}_2(f)&:=\\int_0^1Df(x)\\big(\\log Df(x)\\big)^2\\diff x\\\\%\\int_0^1\\big(\\log g(x)-\\tilde{\\mu}\\big)^2g(x)\\diff x\\\\\n\t\\tilde{\\kappa}_3(f)&:=\\int_0^1Df(x)\\big|\\log Df(x)-\\mathrm{lk}(f)\\big|^3\\diff x\n\\end{align*}\nThese functionals are also well-defined on $\\T$ and take values in $[0,+\\infty]$. The argument is similar to that of $\\mathrm{kl}, \\kappa_2$ and $\\bar{\\kappa}_3$.\n\n\nThe following calculations turn out to be useful in the proof.\n\\begin{proposition}\\label{prop:functionals}\n\tSuppose $f\\in\\T^S$ and $f(0)=1$. Then\n\t\\begin{align*}\n\t\t\\mathrm{kl}(f) &= \\mathrm{lk}(f)\\\\\n\t\n\t\t\\kappa_2(f)&= \\tilde{\\kappa}_2(f)\\\\\n\t\t\\bar{\\kappa}_3(f)&=\\tilde{\\kappa}_3(f).\n\t\\end{align*}\n\\end{proposition}\n\\begin{proof}\n\tOur approach, taking $\\kappa_2$ as example, is to show $\\kappa_2(f^{-1}) = \\tilde{\\kappa}_2(f)$. By definition of symmetry, $f^{-1} = f$ and hence the desired result follows. First observe for $f\\in\\T^S$ with $f(0)=1$, $f^{-1}$ agrees with the ordinary function inverse, hence we can apply calculus rule as follows:\n\t\\[Df^{-1}(x) = -\\frac{\\diff f^{-1}}{\\diff x}(1-x) = \\frac{-1}{f'(f^{-1}(1-x))}.\\]\n\tWe only prove $\\kappa_2(f^{-1})= \\tilde{\\kappa}_2(f)$ here and the other two identities can be proved similarly.\n\t\\begin{align*}\n\t\t\\kappa_2(f^{-1}) &= \\int_0^1\\big(\\log Df^{-1}(x)\\big)^2\\diff x \\\\\n\t\t&= \\int_0^1\\big(-\\log\\big[-f'(f^{-1}(1-x))\\big]\\big)^2\\diff x\\\\\n\t\t&= \\int_0^1\\log^2\\big[-f'(f^{-1}(1-x))\\big]\\diff x\n\t\\end{align*}\n\tLet $y=f^{-1}(1-x)$, then $f'(y)\\diff y=-\\diff x$, and $x=0$ corresponds to $y=0$, $x=1$ corresponds to $y=1$.\n\t\\begin{align*}\n\t\t\\kappa_2(f^{-1})\n\t\t&= \\int_0^1\\log^2 [-f'(y)]\\cdot\\big(-f'(y)\\big)\\diff y&& (y=f^{-1}(1-x))\\\\\n\t\t&= \\int_0^1\\log^2 [-f'(1-z)]\\cdot\\big(-f'(1-z)\\big)\\diff z&& (z=1-y)\\\\\n\t\t&= \\int_0^1Df(z)\\big(\\log Df(z)\\big)^2\\diff z\\\\\n\t\t&= \\tilde{\\kappa}_2(f).\n\t\\end{align*}\n\\end{proof}\nWe remark that by properly extending the definition of the shadow functionals, identities like $\\mathrm{kl}(f^{-1}) = \\mathrm{lk}(f)$ holds for general trade-off function $f$.\n\n\n\n\n\nBefore we finally start the proof, let's recall Berry-Esseen theorem for random variables. Suppose we have $n$ independent random variables $X_1,\\ldots, X_n$ with $\\E X_i = \\mu_i, \\Var X_i = \\sigma_i^2, \\E|X_i-\\mu_i|^3 = \\rho_i^3$. Consider the normalized random variable\n$$S_n := \\frac{\\sum_{i=1}^n X_i-\\mu_i}{\\sqrt{\\sum_{i=1}^n \\sigma^2_i}}.$$\nDenote its cdf by $F_n$. 
Then\n\\begin{theorem}[Berry-Esseen]\\label{thm:BerryRV}\nThere exists a universal constant $C>0$ such that\n\t\\[\\sup_{x\\in\\R}|F_n(x)-\\Phi(x)|\\leqslant C\\cdot \\frac{\\sum_{i=1}^n \\rho_i^3}{\\big(\\sum_{i=1}^n\\sigma_i^2\\big)^{\\frac{3}{2}}}.\\]\n\\end{theorem}\nTo the best of our knowledge, the best $C$ is 0.5600 due to \\cite{shevtsova2010improvement}.\n\n\\bigskip\n\nNow we proceed to the proof of \\Cref{thm:Berry}.\n\\begin{proof}[Proof of \\Cref{thm:Berry}]\nFor simplicity let\n\\[\\boldsymbol{f} := f_{1}\\otimes f_{2} \\otimes \\cdots \\otimes f_{n}.\\]\n\nFirst let's find distributions $P_0$ and $P_1$ such that $\\F(P_0,P_1)=\\boldsymbol{f}$.\n\nBy symmetry, if $f_i(0)<1$, then $f_i'(x)=0$ in some interval $[1-\\ep,1]$ for some $\\ep>0$, which yields $\\mathrm{kl}(f_i)=+\\infty$. So we can assume $f_i(0)=1$ for all $i$.\n\nRecall that $Df_{i}(x) = -f'_{i}(1-x)$. Let $P$ be the uniform distribution on $[0,1]$ and $Q_{i}$ be the distribution supported on $[0,1]$ with density $Df_{i}$. These are the distributions constructed in the proof of \\Cref{prop:trade-off}. Since $f_i$ are all symmetric and $f_i(0)=1$, the supports of $P$ and all $Q_{i}$ are all exactly $[0,1]$, and we have $\\F(P,Q_{i})=f_{i}$. Hence by definition $\\boldsymbol{f} = \\F(P^{ n},Q_{1}\\times\\cdots\\times Q_{n})$.\n\nNow let's study the hypothesis testing problem $P^{ n}$ vs $Q_{1}\\times\\cdots\\times Q_{n}$. Let\n\\[L_{i}(x) := \\log \\frac{\\diff Q_{i}}{\\diff P}(x) = \\log Df_{i}(x)\\]\nbe the log likelihood ratio. \nSince both hypotheses are product distributions, Neyman-Pearson lemma implies that the optimal rejection rules of this testing problem must be threshold functions of the quantity $\\sum_{i=1}^n L_{i}$.\nWe need to study $\\sum_{i=1}^n L_{i}(x_i)$ under both the null and the alternative hypothesis, i.e. when $(x_1,\\ldots, x_n)$ comes from $P^{ n}$ and $Q_{1}\\times\\cdots\\times Q_{n}$. From here we implement the following plan: first find the quantities that exhibit central limit behavior, then express $\\alpha$ and $\\boldsymbol{f}(\\alpha)$ in terms of these quantities.\n\nFor further simplification, let\n$$T_n := \\sum_{i=1}^n L_{i}.$$\nAs we drop the $x_i$ notation, we should bear in mind that $T_n$ has different distributions under $P^{ n}$ and $Q_{1}\\times\\cdots\\times Q_{n}$, but it is an independent sum in both cases.\n\nIn order to find quantities with central limit behavior, it suffices to normalize $T_n$ under both distributions. The functionals introduced above are specifically designed for this purpose.\n\\begin{align*}\n\t\\E_P[L_{i}] &= \\int_0^1 \\log Df_{i}(x_i)\\diff x_i = -\\mathrm{kl}(f_{i}),\\\\\n\t\\E_{Q_{i}}[L_{i}] &= \\int_0^1 Df_{i}(x_i)\\log Df_{i}(x_i)\\diff x_i = \\mathrm{lk}(f_{i}) = \\mathrm{kl}(f_{i}).\n\\end{align*}\nIn the last step we used \\Cref{prop:functionals}.
With the bold vector notation,\n\\begin{align*}\n\t\\E_{P^n}[T_n] &= \\sum_{i=1}^n -\\mathrm{kl}(f_{i}) = -\\|\\boldsymbol{\\mathrm{kl}}\\|_1,\\\\\n\t\\E_{Q_{1}\\times\\cdots\\times Q_{n}}[T_n] &= \\sum_{i=1}^n \\mathrm{kl}(f_{i})= \\|\\boldsymbol{\\mathrm{kl}}\\|_1.\n\\end{align*}\nSimilarly for the variances:\n\\begin{align*}\n\n\t\\Var_{P}[L_{i}] &=\\E_P[L_{i}^2] - \\big(\\E_P[L_{i}]\\big)^2 = \\kappa_2(f_{i}) - \\mathrm{kl}^2(f_{i})\n\t\n\t,\\\\\n\n\t\\Var_{Q_{i}}[L_{i}] &= \\E_{Q_{i}}[L_{i}^2] - \\big(\\E_{Q_{i}}[L_{i}]\\big)^2 = \\tilde{\\kappa}_2(f_{i}) - \\mathrm{lk}^2(f_{i})= \\kappa_2(f_{i}) - \\mathrm{kl}^2(f_{i}).\n\t\n\\end{align*}\n\\[\n\t\\Var_{P^n}[T_n] =\\Var_{Q_{1}\\times\\cdots\\times Q_{n}}[T_n] = \\sum_{i=1}^n \\kappa_2(f_{i}) - \\mathrm{kl}^2(f_{i}) = \\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2.\n\\]\nIn order to apply Berry-Esseen theorem (for random variables) we still need the centralized third moments:\n\\begin{align*}\n\t\\E_P|L_i-\\E_P[L_i]|^3 &=\\int_0^1\\big|\\log Df_i(x)+\\mathrm{kl}(f_i)\\big|^3\\diff x = \\bar{\\kappa}_3(f_i),\\\\\n\t\\E_{Q_i}|L_i-\\E_{Q_i}[L_i]|^3 &=\\int_0^1Df_i(x)\\big|\\log Df_i(x)-\\mathrm{lk}(f_i)\\big|^3\\diff x = \n\t\\tilde{\\kappa}_3(f_i)= \\bar{\\kappa}_3(f_i).\n\\end{align*}\nLet $F_n$ be the cdf of $\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}$ under $P^{ n}$, and $\\tilde{F}^{(n)}$ be the cdf of $\\frac{T_n-\\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1-\\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}$ under $Q_{1}\\times\\cdots\\times Q_{n}$. By Berry-Esseen Theorem \\ref{thm:BerryRV},\n\\begin{align}\\label{eq:closetonormal}\n\t\\sup_{x\\in\\R}|F_n(x)-\\Phi(x)|\n\t&\\leqslant C\\cdot\\frac{\\|\\boldsymbol{\\bar{\\kappa}_3}\\|_1}{\\big(\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2\\big)^{\\frac{3}{2}}}=\\gamma\n\n\\end{align}\nand similarly $\\sup_{x\\in\\R}|\\tilde{F}^{(n)}(x)-\\Phi(x)|\\leqslant\\gamma$.\n\n\nSo we find the quantities that exhibit central limit behavior. Now let's relate them with $\\boldsymbol{f}$. Consider the testing problem $(P^{ n},Q_{1}\\times\\cdots\\times Q_{n})$. For a fixed $\\alpha\\in[0,1]$, let the optimal rejection rule (potentially randomized) at level $\\alpha$ be $\\phi$.\nBy Neyman-Pearson lemma, $\\phi$ must be a thresholding on $T_n$. 
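Before writing this thresholding down exactly, it may help to see the central limit behavior numerically. The following sketch is purely illustrative and is not used in the proof: it specializes to the arbitrary choice $f_1=\\cdots=f_n=f_{\\ep,0}$, for which the construction above makes $\\log Df_i$ a two-valued random variable equal to $\\pm\\ep$, simulates the normalized statistic under $P^{ n}$, and compares a few of its quantiles with those of a standard normal; the values of $\\ep$, $n$ and the number of replications are arbitrary.
\\begin{verbatim}
# Monte Carlo sketch for f_i = f_{eps,0} (an arbitrary illustrative choice).
# With P = U[0,1] and Q_i of density Df_i, the log-likelihood ratio L_i takes
# the value +eps with null probability 1/(1+e^eps) and -eps otherwise, so
# kl(f_i) = eps*(e^eps-1)/(e^eps+1) and kappa_2(f_i) = eps^2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
eps, n, reps = 0.1, 1000, 200_000

kl = eps * (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
kappa2 = eps**2
sd = np.sqrt(n * (kappa2 - kl**2))

p_plus = 1.0 / (1.0 + np.exp(eps))          # P[L_i = +eps] under the null
K = rng.binomial(n, p_plus, size=reps)      # number of +eps terms in T_n
T = eps * (2 * K - n)                       # T_n under P^n
Z = (T + n * kl) / sd                       # normalized statistic

for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"q = {q:.2f}:  empirical {np.quantile(Z, q):+.3f}   N(0,1) {norm.ppf(q):+.3f}")
print("mu =", round(2 * n * kl / sd, 3))    # the mu of the theorem statement
\\end{verbatim}
For these illustrative choices the empirical quantiles track the standard normal quantiles closely, which is exactly the kind of agreement that the Berry-Esseen bound quantifies.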
An equivalent form of the rejection rule $\\phi$, which highlights the central limit behavior, is the following:\n$$\\phi=\n\\left\\{\n\\begin{array}{ll}\n1, \t\t& \\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}>t, \\\\\np, & \\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}=t,\\\\\n0, & \\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}<t.\n\\end{array}\n\\right.\n$$\nHere the threshold $t$ and the randomization probability $p\\in[0,1]$ are determined by the level $\\alpha$. The type I error of $\\phi$ is\n\\begin{align*}\n\t\\alpha = \\E_{P^n}[\\phi] &= P^n\\Big[\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}>t\\Big] + p \\cdot P^n\\Big[\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}=t\\Big]\\\\\n\t&= 1-F_n(t) + p \\cdot [F_n(t)-F_n(t^-)].\n\\end{align*}\nHere $F_n(t^-)$ is the left limit of the function $F_n$ at $t$.\nSimple algebra yields\n\\[\n\t1-\\alpha = 1- \\E_{P^n}[\\phi]= (1-p)F_n(t)+pF_n(t^-)\n\\]\nand consequently the inequality\n\\[\n\tF_n(t^-)\\leqslant 1-\\alpha\\leqslant F_n(t).\n\\]\nFor $\\E_{Q_{1}\\times\\cdots\\times Q_{n}}[\\phi]$ it is helpful to introduce the shifted threshold $\\tau := t-\\mu$. In the theorem statement $\\mu$ was defined to be $\\frac{2\\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}$\nso we have the equivalence\n\\begin{equation}\\label{eq:equiv}\n\t\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}>t\n\t\\Leftrightarrow \\frac{T_n-\\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1-\\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}\n\t>\n\t\\tau.\n\\end{equation}\nWith this extra notation we have\n\\begin{align*}\n\t1-\\boldsymbol{f}(\\alpha)\n\t=&\\,\\, \\E_{Q_{1}\\times\\cdots\\times Q_{n}}[\\phi]\\\\\n\t=&\\,\\, Q_{1}\\times\\cdots\\times Q_{n}\\Big[\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}>t\\Big]+\\\\\n\t&\\,\\, p \\cdot Q_{1}\\times\\cdots\\times Q_{n}\\Big[\\frac{T_n + \\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}=t\\Big]\n\t&& \\text{(Def.
of $\\phi$)}\\\\\n\t=&\\,\\, Q_{1}\\times\\cdots\\times Q_{n}\\Big[\\frac{T_n-\\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1-\\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}>\\tau\\Big]+\\\\\n\t&\\,\\, p \\cdot Q_{1}\\times\\cdots\\times Q_{n}\\Big[\\frac{T_n-\\|\\boldsymbol{\\mathrm{kl}}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}\\|_1-\\|\\boldsymbol{\\mathrm{kl}}\\|_2^2}}=\\tau\\Big]\n\t&& \\text{ (By \\eqref{eq:equiv}) }\\\\\n\t=&\\,\\, 1-\\tilde{F}^{(n)}(\\tau) + p \\cdot [\\tilde{F}^{(n)}(\\tau)-\\tilde{F}^{(n)}(\\tau^-)].\n\\end{align*}\nSimilar algebra as before yields\n\\[\\boldsymbol{f}(\\alpha) = (1-p)\\tilde{F}^{(n)}(\\tau)+p\\tilde{F}^{(n)}(\\tau^-)\\]\nand hence\n\\[\n\\tilde{F}^{(n)}(\\tau^-)\\leqslant \\boldsymbol{f}(\\alpha)\\leqslant \\tilde{F}^{(n)}(\\tau).\n\\]\nSo far we have\n\\begin{align}\n\tF_n(t^-)\\leqslant 1-\\alpha\\leqslant F_n(t),\\\\\n\\tilde{F}^{(n)}(\\tau^-)\\leqslant \\boldsymbol{f}(\\alpha)\\leqslant \\tilde{F}^{(n)}(\\tau).\\label{eq:sana}\n\\end{align}\nIn \\eqref{eq:closetonormal} we show $F_n$ and $\\tilde{F}^{(n)}$ are within distance $\\gamma$ to the cdf of standard normal, so\n\\[\n\t\\Phi(t)-\\gamma\\leqslant F_n(t^-)\\leqslant 1-\\alpha\\leqslant F_n(t)\\leqslant \\Phi(t)+\\gamma\n\\]\nand hence\n\\begin{equation}\\label{eq:mina}\n\t\\Phi^{-1}(1-\\alpha-\\gamma)\\leqslant t\\leqslant \\Phi^{-1}(1-\\alpha+\\gamma).\n\\end{equation}\nUsing \\eqref{eq:sana} and \\eqref{eq:mina}, \n\\begin{align*}\n\t\\boldsymbol{f}(\\alpha)&\\leqslant\\tilde{F}^{(n)}(\\tau)\\\\\n\t&\\leqslant \\Phi(\\tau)+\\gamma\\\\\n\t&= \\Phi(t-\\mu)+\\gamma\\\\\n\t&\\leqslant\\Phi(\\Phi^{-1}(1-\\alpha+\\gamma)-\\mu)+\\gamma\\\\\n\t&=G_{\\mu}(\\alpha-\\gamma)+\\gamma\n\\end{align*}\nSimilarly we can show that $\\boldsymbol{f}(\\alpha)\\geqslant G_{\\mu}(\\alpha+\\gamma)-\\gamma$. The proof is now complete.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNext we prove the asymptotic version. Recall that our goal is\n\\asymprep*\n\\begin{proof}[Proof of \\Cref{thm:CLT}]\n\tWe will first construct pointwise convergence $f_{n1}\\otimes f_{n2} \\otimes \\cdots \\otimes f_{nn}\\to G_{2K\/s}$ and then conclude uniform convergence from a general theorem.\n\n\tApply Berry-Esseen Theorem \\ref{thm:Berry} to the $n$-th row of the triangular array and we have\n\t\\begin{equation*}\\label{eq:sandwich}\n\t\tG_{\\mu_n}(\\alpha+\\gamma_n)-\\gamma_n\\leqslant f_{n1}\\otimes f_{n2} \\otimes \\cdots \\otimes f_{nn}(\\alpha)\\leqslant G_{\\mu_n}(\\alpha-\\gamma_n)+\\gamma_n.\n\t\\end{equation*}\n\tHere $\\mu_n$ and $\\gamma_n$ are the counterparts of $\\mu$ and $\\gamma$ defined in \\Cref{thm:Berry} when applied to $f_{n1},\\ldots,f_{nn}$. Namely,\n\t\\begin{align*}\n\t\t\\mu_n=&\\,\\,\\frac{2\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_1}{\\sqrt{\\|\\boldsymbol{\\kappa_2}^{(n)}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2}},\\\\\n\t\t\\gamma_n=&\\,\\,0.56\\cdot\\frac{\\|\\boldsymbol{\\bar{\\kappa}_3}^{(n)}\\|_1}{\\big(\\|\\boldsymbol{\\kappa_2}^{(n)}\\|_1 - \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2\\big)^{\\frac{3}{2}}}\n\t\\end{align*}\n\tHere the bold vector notation with a superscript $(n)$ denotes the vector for the $n$-th row. 
For example, $\\boldsymbol{\\mathrm{kl}}^{(n)} = \\big(\\mathrm{kl}(f_{n1}),\\ldots, \\mathrm{kl}(f_{nn})\\big)$.\n\n\tBy the sandwich inequality, pointwise convergence of $f_{n1}\\otimes f_{n2} \\otimes \\cdots \\otimes f_{nn}$ follows from the two limits\n\t\\begin{equation}\\label{eq:wendy}\n\t\tG_{\\mu_n}(\\alpha+\\gamma_n)-\\gamma_n\\to G_{2K\/s}(\\alpha), \\quad G_{\\mu_n}(\\alpha-\\gamma_n)+\\gamma_n\\to G_{2K\/s}(\\alpha).\n\t\\end{equation}\n\tTo prove these, let's first show $\\gamma_n\\to 0$ and $\\mu_n\\to2K\/s$.\n\n\n\n\tReformulating the assumptions in bold vector notations, we have\n\t\\[||\\boldsymbol{\\mathrm{kl}}^{(n)}||_1\\to K,\\quad ||\\boldsymbol{\\mathrm{kl}}^{(n)}||_\\infty\\to 0,\\quad ||\\boldsymbol{\\kappa_2}^{(n)}||_1\\to s^2,\\quad ||\\boldsymbol{\\kappa_3}^{(n)}||_1\\to 0.\\]\n\tIn addition to these, it suffices to show\n\t\\begin{equation}\\label{eq:ning}\n\t\t\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2\\to0 ~~\\text{ and }~~ \\|\\boldsymbol{\\bar{\\kappa}_3}^{(n)}\\|_1\\to0.\n\t\\end{equation}\n\tFor the first half, notice that $\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2=\\langle \\boldsymbol{\\mathrm{kl}}^{(n)}, \\boldsymbol{\\mathrm{kl}}^{(n)}\\rangle\\leqslant \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty\\cdot \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_1\\to0$. In fact, $\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty\\to0$ is not only sufficient but also necessary, because $\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty\\leqslant\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2$.\n\n\tNext we use the assumptions to show $\\|\\boldsymbol{\\bar{\\kappa}_3}^{(n)}\\|_1\\to0$.\n\tWe need a lemma\n\t\\begin{lemma} \\label{lem:cubicmoments}\n\tFor a trade-off function $f$,\n\t\t$$\\bar{\\kappa}_3(f)\\leqslant \\kappa_3(f) + 3\\mathrm{kl}(f)\\cdot\\kappa_2(f)+3\\mathrm{kl}^2(f)\\cdot \\sqrt{\\kappa_2(f)}+\\mathrm{kl}^3(f).$$\n\t\\end{lemma}\n\t\\begin{proof}[Proof of \\Cref{lem:cubicmoments}]\n\t\t\\begin{align*}\n\t\t\t\\bar{\\kappa}_3(f)=&\\phantom{+}\\int_0^1\\big|\\log Df(x)+\\mathrm{kl}(f)\\big|^3\\diff x\\\\\n\t\t\t\\leqslant&\\phantom{+} \\int_0^1\\big(\\big|\\log Df(x)\\big|+\\big|\\mathrm{kl}(f)\\big|\\big)^3\\diff x\\\\\n\t\t\t\\leqslant&\\phantom{+} \\int_0^1\\big|\\log Df(x)\\big|^3\\diff x+3\\mathrm{kl}(f)\\cdot\\int_0^1\\big|\\log Df(x)\\big|^2\\diff x\\\\\n\t\t\t&+3\\mathrm{kl}^2(f)\\cdot\\int_0^1\\big|\\log Df(x)\\big|\\diff x + \\mathrm{kl}^3(f)\\\\\n\t\t\t\\leqslant&\\phantom{+} \\kappa_3(f) + 3\\mathrm{kl}(f)\\cdot\\kappa_2(f)+3\\mathrm{kl}^2(f)\\cdot \\sqrt{\\kappa_2(f)}+\\mathrm{kl}^3(f).\n\t\t\\end{align*}\n\t\tIn the last step we used Jensen's inequality.\n\t\\end{proof}\n\tApply \\Cref{lem:cubicmoments} to each $f_{ni}$ and sum them up:\n\t\\begin{align*}\n\t\t\\|\\boldsymbol{\\bar{\\kappa}_3}^{(n)}\\|_1 \n\t\t&\\leqslant \\|\\boldsymbol{\\kappa_3}^{(n)}\\|_1 + 3\\textstyle\\sum_i \\mathrm{kl}(f_{ni})\\cdot \\kappa_2(f_{ni})+ 3\\textstyle\\sum_i \\mathrm{kl}(f_{ni})\\cdot \\sqrt{\\kappa_2(f_{ni})} \\cdot \\mathrm{kl}(f_{ni}) + \\textstyle\\sum_i \\mathrm{kl}(f_{ni})\\cdot \\mathrm{kl}^2(f_{ni}).\n\t\\end{align*}\n\tUsing $|\\sum a_i b_i|\\leqslant |\\sum a_i| \\cdot \\max |b_i|$ and Cauchy-Schwarz inequality yields\n\t\\begin{align*}\n\t\t\\|\\boldsymbol{\\bar{\\kappa}_3}^{(n)}\\|_1 \n\t\t&\\leqslant \\|\\boldsymbol{\\kappa_3}^{(n)}\\|_1 + 3 \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty\\cdot \\|\\boldsymbol{\\kappa_2}^{(n)}\\|_1 + 3\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty \\cdot \\Big(\\textstyle\\sum_i \\sqrt{\\kappa_2(f_{ni})} \\cdot 
\\mathrm{kl}(f_{ni})\\Big) + \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty^2 \\cdot \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_1\\\\\n\t\t&\\leqslant \\|\\boldsymbol{\\kappa_3}^{(n)}\\|_1 + 3 \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty\\cdot \\|\\boldsymbol{\\kappa_2}^{(n)}\\|_1 + 3\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty \\cdot \\sqrt{\\|\\boldsymbol{\\kappa_2}^{(n)}\\|_1 \\cdot \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2} + \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_\\infty^2 \\cdot \\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_1.\n\t\\end{align*}\n\tBy the assumptions\n\t\\[||\\boldsymbol{\\mathrm{kl}}^{(n)}||_1\\to K,\\quad ||\\boldsymbol{\\mathrm{kl}}^{(n)}||_\\infty\\to 0,\\quad ||\\boldsymbol{\\kappa_2}^{(n)}||_1\\to s^2,\\quad ||\\boldsymbol{\\kappa_3}^{(n)}||_1\\to 0\\]\n\tand $\\|\\boldsymbol{\\mathrm{kl}}^{(n)}\\|_2^2\\to0$ which we just proved,\n\tit's easy to see that all four terms goes to 0 as $n$ goes to infinity.\n\n\n\n\n\n\tThe two limits \\eqref{eq:ning} we have just proved imply $\\mu_n\\to2K\/s$ and $\\gamma_n\\to 0$. Given these, convergence \\eqref{eq:wendy} is easy once we notice that $G_{\\mu}(\\alpha)=\\Phi(\\Phi^{-1}(1-\\alpha)-\\mu)$ is continuous in both $\\alpha$ and $\\mu$.\n\n\tIf the readers are concerned with $1-\\alpha-\\gamma_n$ exceeding $[0,1]$, then observe that when $\\alpha\\in(0,1)$, $1-\\alpha-\\gamma_n$ eventually ends up in $(0,1)$ where $\\Phi^{-1}$ is well-defined and continuous. So the only concern is at 0 and 1. If $\\alpha=0$, $\\Phi^{-1}(1-\\alpha-\\gamma_n)\\to+\\infty$ so $G_{\\mu_n}(0+\\gamma_n)-\\gamma_n\\to1 = G_{2K\/s}(0)$. A similar argument works for $\\alpha=1$.\n\n\tAnyway, we have shown pointwise convergence. Uniform convergence is again a direct consequence of \\Cref{lem:uniform}.\n\tThe proof is now complete.\n\\end{proof}\n\n\n\nNext we explain the effect of tensoring $f_{0,\\delta}$.\n\n\t\\begin{equation}\n\tf\\otimes f_{0,\\delta}(\\alpha) =\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t(1-\\delta)\\cdot f(\\frac{\\alpha}{1-\\delta}), \t\t& 0\\leqslant \\alpha \\leqslant 1-\\delta \\\\\n\t\t0, & 1-\\delta\\leqslant \\alpha\\leqslant 1.\n\t\t\\end{array}\n\t\t\\right.\\tag{\\ref{prop:ruibbit}}\n\t\\end{equation}\n\\begin{proof}[Proof of \\Cref{prop:ruibbit}]\n\tFirst, $f_{0,\\delta}$ is the trade-off function of two uniform distributions $f_{0,\\delta} = T\\big(U[0,1],U[\\delta,1+\\delta]\\big)$.\n\tTo see this, observe that any optimal test $\\phi$ for $U[0,1]$ vs $U[\\delta,1+\\delta]$ \n\tmust have the following form:\n\t$$\\phi(x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t1, \t\t& x\\in(1,1+\\delta]\\\\\n\tp, & x\\in[\\delta,1],\\\\\n\t0, & x\\in[0,\\delta)\n\t\\end{array}\n\t\\right.\n\t$$\n\tThat is, we know it must be from $U[0,1]$ if we see something in $[0,\\delta)$, and must be from $U[\\delta,1+\\delta]$ if we see something in $(1,1+\\delta]$. Otherwise the only thing we can do is random guessing. It's easy to see that the errors of such $\\phi$ linearly interpolates between $(0,1-\\delta)$ and $(1-\\delta,0)$, i.e. type I and type II error add up to $1-\\delta$. On the other hand, by definition, $f_{0,\\delta}(\\alpha) = \\max{\\{1-\\delta-\\alpha,0\\}}$. So they indeed agree with each other.\n\n\tNow suppose $f=T(P,Q)$. By definition of tensor\tproduct, $f\\otimes f_{0,\\delta} = T(P\\times U[0,1], Q\\times U[\\delta,1+\\delta])$. 
If the optimal test for $P$ vs $Q$ at level $\\alpha$ is $\\phi_\\alpha$, then an optimal test for $P\\times U[0,1]$ vs $Q\\times U[\\delta,1+\\delta]$ must be of the following form:\n\t$$\\tilde{\\phi}_\\alpha(\\omega, x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t1, \t\t& x\\in(1,1+\\delta]\\\\\n\t\\phi_\\alpha(\\omega), & x\\in[\\delta,1],\\\\\n\t0, & x\\in[0,\\delta)\n\t\\end{array}\n\t\\right.\n\t$$\n\tThe errors are\n\t\\begin{align*}\n\t\t\\E_{P\\times U[0,1]}[\\tilde{\\phi}_\\alpha] &= P\\big[x\\in(1,1+\\delta]\\big]+ P\\big[x\\in[\\delta,1]\\big]\\cdot \\E_{P}[\\phi_\\alpha(\\omega)] \\\\\n\t\t&= 0+(1-\\delta)\\alpha=(1-\\delta)\\alpha\\\\\n\t\t1-\\E_{Q\\times U[\\delta,1+\\delta]}[\\tilde{\\phi}_\\alpha] &= 1-P\\big[x\\in(1,1+\\delta]\\big]- P\\big[x\\in[\\delta,1]\\big]\\cdot \\E_{Q}[\\phi_\\alpha(\\omega)] \\\\\n\t\t&=1- \\delta- (1-\\delta)\\big(1-f(\\alpha)\\big) = (1-\\delta)f(\\alpha)\n\t\\end{align*}\n\tThis completes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\\DPCLTrep*\n\\begin{proof}[Proof of \\Cref{thm:DPCLT}]\nAs in the main body, we first apply rules $f_{\\ep,\\delta} = f_{\\ep,0}\\otimes f_{0,\\delta}$ and $f_{0,\\delta_1}\\otimes f_{0,\\delta_2} = f_{0,1-(1-\\delta_1)(1-\\delta_2)}$ to get\n\\begin{align*}\n\tf_{\\ep_{n1},\\delta_{n1}}\\otimes \\cdots \\otimes f_{\\ep_{nn},\\delta_{nn}}\n\t&= \\big(f_{\\ep_{n1},0}\\otimes \\cdots \\otimes f_{\\ep_{nn},0}\\big)\\otimes \\big(f_{0,\\delta_{n1}}\\otimes \\cdots \\otimes f_{0,\\delta_{nn}}\\big)\\\\\n\t&= \\big(\\underbrace{f_{\\ep_{n1},0}\\otimes \\cdots \\otimes f_{\\ep_{nn},0}}_{f^{(n)}}\\big)\\otimes f_{0,\\delta^{(n)}}\n\\end{align*}\nwith $\\delta^{(n)} = 1-\\prod_{i=1}^n(1-\\delta_{ni})$.\nFor the second factor, let's first prove the limit $\\delta^{(n)}\\to 1-\\mathrm{e}^{-\\delta}$.\nChanging the product into sum, we have\n\\[\\log (1-\\delta^{(n)}) = \\textstyle\\sum_{i=1}^n\\log(1-\\delta_{ni})\\]\nThe limit almost follows from the Taylor expansion $\\log (1+x) = x+o(x)$, but we need to be a little more careful as the number of summation terms also goes to infinity. \nSince $\\max_{1\\leqslant i\\leqslant n} \\delta_{ni}\\to 0$, we can assume for large $n$, $\\delta_{ni}hq(\\omega), \\\\\n\tc, \t\t& \\text{if } p(\\omega)=hq(\\omega), \\\\\n\t0, \t\t& \\text{if } p(\\omega)h_n\\\\\nc_n, & \\frac{p_1(\\omega)}{p_0(\\omega)}=h_n\\\\\n0, & \\frac{p_1(\\omega)}{p_0(\\omega)}t] = 1-\\Phi(t),\\quad \\beta(t) = \\P[X+\\mu\\leqslant t] = \\Phi(t-\\mu).\\]\n\tSolving $\\alpha$ from $t$, $t=\\Phi^{-1}(1-\\alpha)$. So\n\t\\[G_\\mu(\\alpha) = \\beta(\\alpha) = \\Phi\\big(\\Phi^{-1}(1-\\alpha)-\\mu\\big).\\]\n\\end{proof}\n\n\\postrep*\n\\begin{proof}[Proof of Lemma~\\ref{lem:post}]\nThe idea is that whatever can be done with the processed outcome can also be done with the original outcome. Formally, if an optimal test $\\phi:Z\\to[0,1]$ for the problem $\\mathrm{Proc} (P)$ vs $\\mathrm{Proc} (Q)$ at level $\\alpha$ can achieve type II error $\\beta = \\F\\big(\\mathrm{Proc} (P),\\mathrm{Proc} (Q)\\big)(\\alpha)$, then it is easy to verify that $\\phi\\circ\\mathrm{Proc} :Y\\to[0,1]$ has the same errors $\\alpha,\\beta$ for the problem $P$ vs $Q$. 
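Indeed, writing $\\mathrm{Proc}(P)$ for the distribution of the processed output when the input is drawn from $P$, the type I error of $\\phi\\circ\\mathrm{Proc}$ is $\\E_P[\\phi\\circ\\mathrm{Proc}] = \\E_{\\mathrm{Proc}(P)}[\\phi]=\\alpha$ and its type II error is $1-\\E_Q[\\phi\\circ\\mathrm{Proc}] = 1-\\E_{\\mathrm{Proc}(Q)}[\\phi]=\\beta$.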
The optimal error $T(P,Q)(\\alpha)$ can only be smaller than $\\beta$.\n\\end{proof}\n\n\n\n\n\n\n\nThe next result is a generalization of \\Cref{eq:Gmu}, together with the interesting inverse.\n\nLet $P$ be a probability distribution on $\\R$ with density $p$, cdf $F:\\R\\to[0,1]$, quantile $F^{-1}:[0,1]\\to[-\\infty,+\\infty]$ and $\\xi$ be a random variable from the distribution $P$. Then we have\n\\begin{proposition}\\label{prop:logconcave}\n\t$\\F(\\xi,t+\\xi)(\\alpha) = F(F^{-1}(1-\\alpha)-t)$ holds for every $t>0$ if and only if the density $p$ is log-concave.\n\\end{proposition}\nIn particular, normal density is log-concave, so the expression of $G_\\mu$ is a special case.\n\\begin{proof}[Proof of \\Cref{prop:logconcave}]\n\tFor convenience let\n\t\\begin{equation*}\\label{eq:logconcave}\n\t\tf_t(\\alpha) := F(F^{-1}(1-\\alpha)-t).\n\t\\end{equation*}\n\t``if'': This is the easier direction. Fix $t>0$ and consider the log likelihood ratio of $\\xi$ and $t+\\xi$:\n\t$$\\mathrm{llk} = \\log p(x-t)-\\log p(x).$$\n\tllk is increasing in $x$ because of log-concavity, so according to Neyman-Pearson lemma, the optimal rejection rule must have the form $1_{[h,+\\infty)}$. Hence by a similar calculation as of Gaussian case, the trade-off function indeed has the form $f_t$.\n\n\t``only if'': We are given that $\\F(\\xi,t+\\xi) = f_t$ holds for every $t>0$, and we want to show that $p$ is log-concave. Now that $f_t$ is a trade-off function for every $t>0$, it must be convex. By chain rule$$f_t'(\\alpha) = (-1)\\cdot\\frac{p(F^{-1}(1-\\alpha)-t)}{p(F^{-1}(1-\\alpha))}.$$\n\tFix any $t>0$, convexity implies $f_t'(\\alpha)$ is increasing in $\\alpha$ for any $\\alpha\\in[0,1]$. Setting $x = F^{-1}(1-\\alpha)$, we know $\\frac{p(x-t)}{p(x)}$ is increasing in $x$ for all $x\\in\\R$, hence also $\\log p(x-t)-\\log p(x)$.\n\n\n\n\n\n\n\n\tFor convenience let $g=\\log p$. We know $g(x-t)-g(x)$ is increasing in $x, \\forall t>0$. Equivalently, $g'(x-t)-g'(x)>0, \\forall x, \\forall t>0$, which means $g'(x)$ is decreasing, i.e. $g=\\log p$ is concave. The proof is complete.\n\\end{proof}\n\n\nNext we prove results presented in \\Cref{sub:a_primal_dual_connection_with_}.\n\\ftoDPrep*\n\\begin{proof}[Proof of \\Cref{prop:ftoDP}]\n The tangent line of $f$ with slope $k$ has equation $y=kx-f^*(k)$, so when $k=-\\mathrm{e}^\\ep$ the equation is\n \\[y = -\\mathrm{e}^\\ep x-f^*(-\\mathrm{e}^\\ep).\\]\n Compare it to $f_{\\ep,\\delta}$, we see $1-\\delta = -f^*(-\\mathrm{e}^\\ep)$. By symmetry, the collection $\\{f_{\\ep,1+f^*(-\\mathrm{e}^{\\ep})}\\}_{\\ep\\geqslant0}$ envelopes the function $f$.\n\\end{proof}\n\\GDPtoDPrep*\n\\begin{proof}[Proof of \\Cref{corr:GDPtoDP}]\n By \\Cref{prop:ftoDP}, $\\mu$-GDP is equivalent to $(\\ep,1+G_\\mu^*(-\\mathrm{e}^{\\ep}))$-DP, so it suffices to compute the expression of $G_\\mu^*(-\\mathrm{e}^\\ep)$.\n\n Recall that $G_\\mu(x) = \\Phi\\big(\\Phi^{-1}(1-x)-\\mu\\big)$.\n By definition,\n \\[G_\\mu^*(y) = \\sup_{x\\in[0,1]} yx-\\Phi\\big(\\Phi^{-1}(1-x)-\\mu\\big).\\]\n Let $t = \\Phi^{-1}(1-x)$. Equivalently, $x = 1-\\Phi(t)=\\Phi(-t)$. Do the change of variable and we have\n \\[G_\\mu^*(y) = \\sup_{t\\in\\R} ~y\\Phi(-t)-\\Phi(t-\\mu).\\]\n From the shape of $G_\\mu$ we know the supremum must be achieved at the unique critical point. 
Setting the derivative of the objective function to be zero yields\n \\begin{align*}\n \\frac{\\diff}{\\diff t} \\big[y\\Phi(-t)-\\Phi(t-\\mu)\\big] &= 0\\\\\n -y\\varphi(-t) - \\varphi(t-\\mu) &=0\\\\\n y\\mathrm{e}^{-\\frac{1}{2}t^2}+\\mathrm{e}^{-\\frac{1}{2}(t-\\mu)^2}&=0\\\\\n y+\\mathrm{e}^{\\mu t-\\frac{1}{2}\\mu^2}&=0\n \\end{align*}\n So $t = \\frac{\\mu}{2}+\\frac{1}{\\mu}\\log(-y)$. Plug this back in the expression of $G_\\mu^*$ and we have\n \\[G_\\mu^*(y) = y\\Phi\\Big(-\\frac{\\mu}{2}-\\frac{1}{\\mu}\\log(-y)\\Big)-\\Phi\\Big(-\\frac{\\mu}{2}+\\frac{1}{\\mu}\\log(-y)\\Big).\\]\n When $y=-\\mathrm{e}^\\ep$,\n \\[G_\\mu^*(-\\mathrm{e}^\\ep) = -\\mathrm{e}^\\ep\\Phi\\Big(-\\frac{\\mu}{2}-\\frac{\\ep}{\\mu}\\Big)-\\Phi\\Big(-\\frac{\\mu}{2}+\\frac{\\ep}{\\mu}\\Big).\\]\n $1+G_\\mu^*(-\\mathrm{e}^{\\ep})$ agrees with the stated formula in \\Cref{corr:GDPtoDP}. The proof is complete.\n\\end{proof}\n\n\n\n\nThe rest of the section is devoted to group privacy results. The main theorem is\n\n\\groupthm*\n\nFor convenience we define an operation $\\mathbin{\\hat{\\circ}}$, which is function composition with a slight twist. For $f,g\\in\\T$,\n\\[f\\mathbin{\\hat{\\circ}} g (x) := f\\big(1-g(x)\\big).\\]\n$f^{\\mathbin{\\hat{\\circ}} k}$ is defined iteratively:\n\\[f^{\\mathbin{\\hat{\\circ}} k} = \\underbrace{f\\mathbin{\\hat{\\circ}}\\cdots\\mathbin{\\hat{\\circ}} f}_k.\\]\nNotice that $f\\mathbin{\\hat{\\circ}} g = 1-(1-f)\\circ(1-g)$, so $f^{\\mathbin{\\hat{\\circ}} k} = 1-(1-f)^{\\circ k}$.\n\\begin{lemma}\n\tThe operation $\\mathbin{\\hat{\\circ}}$ has the following properties for $f,g\\in\\T$:\n\t\\begin{enumerate}\n\t\t\\item[(a)] $f\\mathbin{\\hat{\\circ}} g \\in\\T$.\n\t\t\\item[(b)] $(f\\mathbin{\\hat{\\circ}} g)^{-1}=(g^{-1})\\mathbin{\\hat{\\circ}} (f^{-1}) $. In particular, if $f\\in\\T^S$, then $f^{\\mathbin{\\hat{\\circ}} k}\\in\\T^S$.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\\begin{enumerate}\n\t\\item[(a)]\n\tBy \\Cref{prop:trade-off}, it suffices to check the four properties for $f\\mathbin{\\hat{\\circ}} g$. Monotonicity and continuity are obvious. Convexity follows by the well-known fact that decreasing convex function composed with a concave function is convex. Finally, because $f(x)\\leqslant 1-x,g(x)\\leqslant 1-x$, we have\n\t\\[f\\mathbin{\\hat{\\circ}} g (x) = f\\big(1-g(x)\\big)\\leqslant 1-\\big(1-g(x)\\big) = g(x)\\leqslant 1-x.\\]\n\t\\item[(b)]\n\tRecall that $f^{-1}(y) = \\inf\\{x\\in[0,1]:f(x)\\leqslant y\\}$. 
We have\n\t\\begin{align*}\n\t\t\\big[(g^{-1})\\mathbin{\\hat{\\circ}} (f^{-1})\\big](y)\n\t\t=g^{-1}\\big(1-f^{-1}(y)\\big)=\\inf\\{x\\in[0,1]:g(x)\\leqslant 1-f^{-1}(y)\\}.\n\t\\end{align*}\n\tFor any two numbers $x,y\\in[0,1]$, we have the following equivalence chain:\n\t$$g(x)\\leqslant 1-f^{-1}(y) \\Leftrightarrow f^{-1}(y)\\leqslant 1-g(x) \\Leftrightarrow f(1-g(x))\\leqslant y\\Leftrightarrow f\\mathbin{\\hat{\\circ}} g (x)\\leqslant y.$$\n\tSo\n\t\\begin{align*}\n\t\t\\big[(g^{-1})\\mathbin{\\hat{\\circ}} (f^{-1})\\big](y)\n\t\t&=\\inf\\{x\\in[0,1]:f\\mathbin{\\hat{\\circ}} g (x)\\leqslant y\\} = (f\\mathbin{\\hat{\\circ}} g)^{-1}(y).\n\t\\end{align*}\n\tThat is, $(g^{-1})\\mathbin{\\hat{\\circ}} (f^{-1}) = (f\\mathbin{\\hat{\\circ}} g)^{-1}$.\n\tThe proof is complete.\n\\end{enumerate}\n\\end{proof}\n\n\n\n\n\\Cref{thm:group} is an immediate consequence of the following lemma:\n\\begin{lemma} \\label{lem:group}\n\tSuppose $\\F(P,Q)\\geqslant f,\\F(Q,R)\\geqslant g$, then $\\F(P,R)\\geqslant g\\mathbin{\\hat{\\circ}} f$.\n\\end{lemma}\n\\begin{proof}\n\tFix $\\alpha\\in[0,1]$. Suppose $\\phi$ is the optimal testing rule for the problem $P$ vs $R$ at level $\\alpha$. Then we know the type I error $\\E_P[\\phi]=\\alpha$ and the type II error achieves the optimal value, i.e.\n\t\\[1-\\E_R[\\phi]=\\F(P,R)(\\alpha).\\]\n\n\t$\\phi$ is suboptimal as a testing rule for the problem $Q$ vs $R$, so the type I and II errors must be above the trade-off function $g$. That is,\n\t\\[1-\\E_R[\\phi]\\geqslant \\F(Q,R)(\\E_Q[\\phi]) \\geqslant g(\\E_Q[\\phi]).\\]\n\tSimilarly, $\\phi$ is also suboptimal for the problem $P$ vs $Q$. So $1-\\E_Q[\\phi]\\geqslant f(\\E_P[\\phi]) = f(\\alpha)$. Equivalently,\n\t\\[\\E_Q[\\phi]\\leqslant 1-f(\\alpha).\\]\n\tPutting them together,\n\t\\begin{align*}\n\t\\F(P,R)(\\alpha) &= 1-\\E_R[\\phi]\\\\\n\t&\\geqslant g(\\E_Q[\\phi])\\\\\n\t&\\geqslant g\\big(1-f(\\alpha)\\big) \\quad\\quad(g \\text{ is decreasing})\\\\\n\t&=g\\mathbin{\\hat{\\circ}} f(\\alpha).\n\t\\end{align*}\n\tThis completes the proof.\n\\end{proof}\n\n\n\\begin{proof}[Proof of \\Cref{thm:group}]\n\tSuppose $S$ and $S'$ are $k$-neighbors, i.e. there exist datasets $S = S_0, S_1, \\ldots, S_k = S'$ such that $S_i$ and $S_{i+1}$ are neighboring or identical for all $i = 0, \\ldots, k-1$. By privacy of $M$, we know $T\\big(M(S_i),M(S_{i+1})\\big)\\geqslant f$. Iteratively apply \\Cref{lem:group} and we have\n\\[ T\\big(M(S),M(S_2)\\big)\\geqslant f\\mathbin{\\hat{\\circ}} f, \\quad T\\big(M(S),M(S_3)\\big)\\geqslant f^{\\mathbin{\\hat{\\circ}} 3}, \\quad \\ldots, \\quad T\\big(M(S),M(S')\\big)\\geqslant f^{\\mathbin{\\hat{\\circ}} k}.\\]\nWe know that $f^{\\mathbin{\\hat{\\circ}} k} = 1-(1-f)^{\\circ k}$, so the $f$-DP part of the claim is done.\n\nThe GDP part of the claim follows by an easy formula: $G_\\mu\\mathbin{\\hat{\\circ}} G_{\\mu'} = G_{\\mu+\\mu'}$.\nTo see this, recall that $G_\\mu(\\alpha)=\\Phi(\\Phi^{-1}(1-\\alpha)-\\mu)$.\n\\begin{align*}\n\tG_\\mu\\mathbin{\\hat{\\circ}} G_{\\mu'}(\\alpha) = G_\\mu\\big(1-G_{\\mu'}(\\alpha)\\big) = \\Phi\\big(\\Phi^{-1}(G_{\\mu'}(\\alpha))-\\mu\\big) = \\Phi\\big(\\Phi^{-1}(1-\\alpha)-\\mu-\\mu'\\big) = G_{\\mu+\\mu'}(\\alpha).\n\\end{align*}\n\\end{proof}\nIn fact, a similar conclusion holds for any log-concave noise. See \\Cref{prop:logconcave}.\n\n\\grouplimit*\nWhat makes it even more interesting is that the convergence occurs already at very small $k$. In \\Cref{fig:group} we set $\\ep=0.5$ and $f = 1-(1-f_{\\ep,0})^{\\circ 2}$. 
So the blue curve in the last panel is $1-(1-f)^{\\circ 2} = 1-(1-f_{\\ep,0})^{\\circ 4}$. Next we set $\\mu=k\\ep = 4\\cdot 0.5 = 2$. It turns out these numbers are good enough for the condition $k\\ep\\to\\mu$, because the predicted limit $\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)$ (orange curve in the last panel) is almost indistinguishable from the blue curve $1-(1-f_{\\ep,0})^{\\circ 4}$.\n\\begin{figure}[!htp]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.8\\linewidth]{.\/figures\/group.pdf}\n\t\\end{center}\n\t\\captionof{figure}{Group privacy corresponds to function composition. Here $f = 1-(1-f_{\\ep,0})^{\\circ 2}$ with $\\ep=0.5$, so the blue curve in the last panel is $1-(1-f)^{\\circ 2} = 1-(1-f_{\\ep,0})^{\\circ 4}$. Orange curve is the predicted limit $T\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(2,1)\\big)$. The distinction is almost invisible even when $k$ is only 4.}\n\t\\label{fig:group}\n\\end{figure}\n\n\n\n\\begin{lemma} \\label{lem:lap}\n\tThe trade-off function between Laplace distributions has expression\n\\begin{align*}\n\t\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)(\\alpha)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t1-\\mathrm{e}^{\\mu}\\alpha, \t\t& \\alpha < \\mathrm{e}^{-\\mu}\/2, \\\\\n\t\t\\mathrm{e}^{-\\mu}\/4\\alpha, & \\mathrm{e}^{-\\mu}\/2\\leqslant\\alpha\\leqslant1\/2,\\\\\n\t\t\\mathrm{e}^{-\\mu}(1-\\alpha), & \\alpha>1\/2.\n\t\\end{array}\n\t\\right.\n\\end{align*}\n\\end{lemma}\n\n\\begin{figure}[!htp]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.7\\linewidth]{.\/figures\/Laplace_app.pdf}\n\t\\end{center}\n\t\\captionof{figure}{\n\tGraph of $\n\t\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)$ with $\\mu=1$. It agrees with the reciprocal function in the middle.\n\t}\n\t\\label{fig:laplace}\n\\end{figure}\nThe graph of this function with $\\mu = 1$ is illustrated in \\Cref{fig:laplace}. In general, it consists of two symmetric line segments: $(0,1)$ connecting $(\\mathrm{e}^{-\\mu}\/2, 1\/2)$ and $(1\/2,\\mathrm{e}^{-\\mu}\/2)$ connecting $(1,0)$. Then $(\\mathrm{e}^{-\\mu}\/2, 1\/2)$ is connected to $(1\/2,\\mathrm{e}^{-\\mu}\/2)$ by the reciprocal function. It is easy to check that this function is $C^1$, i.e. has continuous derivative.\n\\begin{proof}[Proof of \\Cref{lem:lap}]\nLet $F$ be the cdf of $\\mathrm{Lap}(0,1)$. By \\Cref{prop:logconcave},\n$$\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)(\\alpha) = F\\big(F^{-1}(1-\\alpha)-\\mu\\big).$$\nEasy calculation yields\n\\begin{align*}\n\tF(x)=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\\mathrm{e}^{x}\/2, \t\t& x \\leqslant 0, \\\\\n\t1-\\mathrm{e}^{-x}\/2, & x > 0.\n\t\\end{array}\n\t\\right.\n\\end{align*}\nSo we must expect to divide into several categories. We will refer to the above two expressions as negative and positive regimes.\n\nWhen $\\alpha>1\/2$, we are in negative regime. Solving $\\mathrm{e}^{x}\/2 = 1-\\alpha$ gives us $F^{-1}(1-\\alpha) = \\log 2(1-\\alpha)<0$. An additional $-\\mu$ keeps us in negative regime, so\n\\[F\\big(F^{-1}(1-\\alpha)-\\mu\\big) = \\exp\\big(F^{-1}(1-\\alpha)-\\mu\\big)\/2 = \\mathrm{e}^{\\log 2(1-\\alpha)-\\mu}\/2 = \\mathrm{e}^{-\\mu}(1-\\alpha).\\]\nWhen $\\alpha\\leqslant1\/2$, solving $1-\\mathrm{e}^{-x}\/2 = 1-\\alpha$ gives us $F^{-1}(1-\\alpha) = -\\log 2\\alpha\\geqslant0$. If $-\\log 2\\alpha - \\mu \\leqslant 0$, i.e. 
$\\mathrm{e}^{-\\mu}\/2\\leqslant \\alpha$, we are in negative regime and\n\\[F\\big(F^{-1}(1-\\alpha)-\\mu\\big) = \\mathrm{e}^{-\\log 2\\alpha-\\mu}\/2 = \\mathrm{e}^{-\\mu}\/4\\alpha.\\]\nIf $-\\log 2\\alpha - \\mu > 0$, i.e. $\\alpha<\\mathrm{e}^{-\\mu}\/2$, we are in positive regime and\n\\[F\\big(F^{-1}(1-\\alpha)-\\mu\\big) = 1-\\mathrm{e}^{\\log 2\\alpha+\\mu}\/2 = 1-\\mathrm{e}^{\\mu}\\alpha.\\]\nThe proof is complete.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:group_limit}]\nFor simplicity assume $\\mu=1$. All arguments carry over for general $\\mu$.\n\nLet $f_n = 1-f_{\\ep,0} = 1-f_{1\/n,0}$. Fix $x_0$ and let $x_{n,k} = f_n^{\\circ k}(x_0) = (1-f_{\\ep,0})^{\\circ k}(x_0)$. We are interested in showing\n\\[\\lim_{n\\to\\infty}1-x_{n,n} = \\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(1,1)\\big)(x_0).\\]\nFirst we make a general observation: the sequence $\\{x_{n,k}\\}$ is increasing in $k$ for any $n$. This is because $f_{\\ep,0}(x) \\geqslant 1-x$ and hence $f_n(x)\\geqslant x$.\n\nLet $\\theta_n = \\frac{1}{1+\\mathrm{e}^{\\frac{1}{n}}}$. By the expression of $f_{\\ep,0}$, we obtain the following two dynamics:\n\\begin{align*}\n\\begin{array}{rcll}\n\tf_n(x) &=& \\mathrm{e}^{\\frac{1}{n}}x, & \\text{ if } x\\leqslant \\theta_n,\\\\\n\t1-f_n(x) &=& \\mathrm{e}^{-\\frac{1}{n}}(1-x),& \\text{ if } x\\geqslant \\theta_n.\n\\end{array}\n\\end{align*}\nThe sequence $\\{x_{n,k}\\}$ evolves according to one of the two formulas, potentially a different one for each $k$. We will refer to the $x\\leqslant \\theta_n$ case as \\textit{linear dynamics} and the $x\\geqslant \\theta_n$ case as \\textit{flip linear dynamics} for evident reasons.\nFor any $x_0$ and $n$, since $\\{x_{n,k}\\}$ is increasing in $k$, there exists a moment such that linear dynamics governs before and flip linear dynamics governs after. In the extreme cases, one of the two dynamics governs from $k=0$ to $n$.\nWe divide the analysis into three cases depending on the initial location $x_0$:\n\\begin{itemize}\n\t\\item[(a)] $x_0< \\frac{1}{2\\mathrm{e}}$. In this case, for large enough $n$, the linear dynamics governs all the time. To see this, notice that $\\theta_n$ increases to $\\frac{1}{2}$ as $n\\to\\infty$. So for large enough $n$, $x_0< \\frac{1}{\\mathrm{e}}\\cdot \\theta_n$. It's easy to see that $x_{n,k}$ never exceeds $\\theta_n$. Hence $x_{n,n} = \\mathrm{e} x_0$.\n\t\\item[(b)] $x_0\\geqslant\\frac{1}{2} = \\sup_n \\theta_n$. $x_{n,k}$ is born above the threshold and remains above forever. Flip linear dynamics governs all the $n$ steps, so $1-x_{n,n} =\\mathrm{e}^{-1}(1-x_0)$.\n\t\\item[(c)] $\\frac{1}{2\\mathrm{e}}\\leqslant x_0 < \\frac{1}{2}$. Let $t$ be the time of dynamics change. More precisely,\n\t\\begin{equation}\\label{eq:group_threshold}\n\t\tt-1 = \\max\\{k:\\mathrm{e}^{\\frac{k}{n}}x_0\\leqslant \\theta_n\\}\n\t\\end{equation}\n\tand\n\t\\[x_{n,t} = \\mathrm{e}^{\\frac{1}{n}}x_{n,t-1}=\\mathrm{e}^{\\frac{t}{n}}x_0,\\quad 1-x_{n,n} = \\mathrm{e}^{-\\frac{n-t}{n}}(1-x_{n,t}).\\]\n\tTaking $n\\to\\infty$ in \\eqref{eq:group_threshold} (using $\\liminf$ and $\\limsup$ when necessary), we know $\\mathrm{e}^{\\frac{t}{n}}\\to\\frac{1}{2x_0}$. 
So\n\t\\[\\lim_{n\\to\\infty}1-x_{n,n} = \\lim_{n\\to\\infty}\\mathrm{e}^{\\frac{t}{n}-1}(1-x_{n,t}) = \\lim_{n\\to\\infty}\\mathrm{e}^{\\frac{t}{n}-1}(1-\\mathrm{e}^{\\frac{t}{n}}x_0) = \\mathrm{e}^{-1}\\cdot\\frac{1}{2x_0}(1-\\frac{1}{2}) = \\mathrm{e}^{-1}\\cdot\\frac{1}{4x_0}.\\]\nCollecting all three cases, we have\n\\begin{align*}\n\t\\lim_{n\\to\\infty}1-x_{n,n} =\n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t1-\\mathrm{e} x_0, \t\t& x_0< \\frac{1}{2\\mathrm{e}}, \\\\\n\t\t\\mathrm{e}^{-1}\\cdot\\frac{1}{4x_0}, & \\frac{1}{2\\mathrm{e}}\\leqslant x_0<\\frac{1}{2},\\\\\n\t\t\\mathrm{e}^{-1}(1-x_0), & x_0\\geqslant\\frac{1}{2}.\n\t\\end{array}\n\t\\right.\n\\end{align*}\nBy \\Cref{lem:lap}, this agrees with $\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(1,1)\\big)$.\nUniform convergence comes for free for trade-off functions once we have pointwise convergence. This is a direct consequence of \\Cref{lem:uniform} below, which will be used multiple times in this paper.\n\\end{proof}\n\t\\begin{lemma} \\label{lem:uniform}\n\t\tLet $f_n:[a,b]\\to\\R$ be a sequence of non-increasing functions. If $f_n$ has pointwise limit $f:[a,b]\\to\\R$ where $f$ is continuous on $[a,b]$, then the limit is uniform.\n\t\\end{lemma}\n\n\tThis is an easy variant of P\\'{o}lya's theorem (\\cite{polya1920zentralen}; see also Theorem 2.6.1 in \\cite{lehmann2004elements}). For completeness, we provide a proof.\n\t\\begin{proof}[Proof of \\Cref{lem:uniform}]\n\tWe are going to show that for every $\\ep>0$, there exists $N$ such that\n\t$$|f_n(x)-f(x)|< \\ep, \\quad\\forall x\\in[a,b], \\forall n\\geqslant N.$$\n\tSince $f$ is continuous on a closed interval, it is uniformly continuous. So for a fixed $\\ep>0$, we can find $\\delta>0$ such that whenever $x,y\\in[a,b]$ satisfy $|x-y|<\\delta$, we have $|f(x)-f(y)|<\\ep\/2$.\\\\\n\tThen we can divide $[a,b]$ into small intervals $a=x_0<x_1<\\cdots<x_m=b$ with $x_{j+1}-x_j<\\delta$ for every $j$. Since there are only finitely many grid points, pointwise convergence provides an $N$ such that $|f_n(x_j)-f(x_j)|<\\ep\/2$ for all $j$ and all $n\\geqslant N$. Now take any $x\\in[a,b]$ and pick $j$ with $x\\in[x_j,x_{j+1}]$. Since $f_n$ is non-increasing and both $|x-x_j|$ and $|x-x_{j+1}|$ are smaller than $\\delta$,\n\t\\[f_n(x)\\leqslant f_n(x_j)< f(x_j)+\\ep\/2< f(x)+\\ep,\\]\n\tand similarly $f_n(x)\\geqslant f_n(x_{j+1})> f(x_{j+1})-\\ep\/2> f(x)-\\ep$. Hence $|f_n(x)-f(x)|< \\ep$ for all $x\\in[a,b]$ and $n\\geqslant N$, as desired.\n\t\\end{proof}\n\nNext we prepare tools for the proof of \\Cref{thm:mixtureSGD}. Recall that $\\chi^2_+$ denotes the $F$-divergence associated with\n$F(t) = (t-1)_+^2=\n\t\\left\\{\n\t\\begin{array}{ll}\n\t0, & 0<t\\leqslant 1,\\\\\n\t(t-1)^2, & t>1.\n\t\\end{array}\n\t\\right.$\nAs in \\Cref{app:relation}, let $z_f = \\inf\\{x\\in[0,1]:f(x)=0\\}$ be the first zero of $f$. \n\\begin{proposition} \\label{prop:chiplus}\n\tFor a pair of distributions $P$ and $Q$ such that $T(P,Q)=f$ is a symmetric trade-off function with $f(0)=1$,\n\t\\[\\chi^2_+(P\\|Q) = \\chi^2_+(f).\\]\n\\end{proposition}\n\\begin{proof}[Proof of \\Cref{prop:chiplus}]\n\tBy \\Cref{prop:fdiv}, when $f=T(P,Q)$, $\\chi^2_+(P\\|Q)$ can be computed via the following expression:\n$$\\chi^2_+(P\\|Q)=\\int_0^{z_f} \\big({\\big|f'(x)\\big|}^{-1}-1\\big)_+^2\\cdot \\big|f'(x)\\big| \\diff x + F(0)\\cdot(1-f(0))+\\tau_F\\cdot (1-z_f)$$\nwhere $F(0) = \\lim_{t\\to 0^+} {F(t)} = 0, \\tau_F=\\lim_{t\\to+\\infty} \\frac{F(t)}{t} = +\\infty$. Since we assume $f$ is symmetric, $z_f= f(0)=1$. This also implies that $f^{-1}$ is the ordinary function inverse, i.e. $f(f(x))=x$. \nLet $y = f^{-1}(x) = f(x)$. Then $\\diff y = f'(x) \\diff x$. On the other hand, $x=f(y),\\diff x = f'(y)\\diff y$. 
$x=1$ corresponds to $y=0$ and $x=0$ corresponds to $y=1$, so\n\\begin{align*}\n\t\\chi^2_+(P\\|Q)\n\t&=\\int_0^{1} \\big({\\big|f'(x)\\big|}^{-1}-1\\big)_+^2\\cdot \\big|f'(x)\\big| \\diff x\\\\\n\t&=\\int_0^{1} \\Big({\\Big|\\frac{\\diff y}{\\diff x}\\Big|}^{-1}-1\\Big)_+^2\\cdot \\big|f'(x)\\big| \\diff x\\\\\n\t&=-\\int_1^{0} \\big({\\big|f'(y)\\big|}-1\\big)_+^2\\diff y\\\\\n\t&=\\int_0^{1} \\big({\\big|f'(y)\\big|}-1\\big)_+^2\\diff y\\\\\n\t&=\\chi^2_+(f).\n\\end{align*}\n\\end{proof}\nWe need some more calculation tools to prove \\Cref{thm:mixtureSGD}.\n\\begin{lemma}\\label{lem:functionals2}\n\tLet $f\\in\\T^S$ with $f(0)=1$ and $x^*$ be its unique fixed point. Then\n\t\\begin{align*}\n\t\t\\chi^2_+(f)&= \\int_0^{x^*}(f'(x)+1)^2\\diff x\\\\\n\t\t\\mathrm{kl}(f) &= \\int_0^{x^*}\\big(|f'(x)|-1\\big)\\log |f'(x)|\\diff x \\\\\n\t\n\t\t\\kappa_2(f)&= \\int_0^{x^*}\\big(|f'(x)|+1\\big)\\big(\\log |f'(x)|\\big)^2\\diff x \\\\\n\t\t\\bar{\\kappa}_3(f)&=\\int_0^{x^*}\\big|\\log |f'(x)|+\\mathrm{kl}(f)\\big|^3+|f'(x)|\\cdot\\big|\\log |f'(x)|-\\mathrm{kl}(f)\\big|^3\\diff x\\\\\n\t\t\\kappa_3(f)&=\\int_0^{x^*}\\big(|f'(x)|+1\\big)\\big(\\log |f'(x)|\\big)^3\\diff x.\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of \\Cref{lem:functionals2}]\n\tFirst we observe that $f'(x)\\leqslant -1$ for $x\\leqslant x^*$ and $f'(x)\\geqslant -1$ for $x\\geqslant x^*$. This means the integrand involved in $\\chi^2_+$ is 0 in $[x^*,1]$ and hence proves the first identity.\n\n\tThe rest of the proof is entirely based on a trick we used above.\n\tLet $y = f^{-1}(x) = f(x)$. Then $x=f(y),\\diff x = f'(y)\\diff y$. Since $x^*$ is the fixed point of $f$, $x=x^*$ corresponds to $y=x^*$. $x=1$ corresponds to $y=0$ and $x=0$ corresponds to $y=1$.\n\t\\begin{align*}\n\t\t-\\int_{x^*}^1\\log |f'(x)|\\diff x\n\t\t&=\\int_{x^*}^1\\log |f'(x)|^{-1}\\diff x\\\\\n\t\t&=\\int_{x^*}^0\\log \\Big|\\frac{\\diff x}{\\diff y}\\Big| \\cdot f'(y)\\diff y\\\\\n\t\t&=\\int^{x^*}_0\\log |f'(y)| \\cdot |f'(y)|\\diff y.\n\t\\end{align*}\n\tSo\n\t\\begin{align*}\n\t\t\\mathrm{kl}(f) &= -\\int_0^{1}\\log |f'(x)|\\diff x \\\\\n\t\t&=-\\int^{x^*}_0\\log |f'(x)|\\diff x-\\int_{x^*}^1\\log |f'(x)|\\diff x\\\\\n\t\t&=-\\int^{x^*}_0\\log |f'(x)|\\diff x+\\int^{x^*}_0\\log |f'(x)| \\cdot |f'(x)|\\diff x\\\\\n\t\t&= \\int_0^{x^*}\\big(|f'(x)|-1\\big)\\log |f'(x)|\\diff x.\n\t\\end{align*}\n\tThe rest of identities can be proved in exactly the same way.\n\\end{proof}\n\n\n\\begin{lemma} \\label{lem:CpMoments}\n\tSuppose $f\\in\\T^S$ and $f(0)=1$. $x^*$ is its unique fixed point. Let $g(x) = -f'(x)-1 = |f'(x)|-1$. Then\n\t\\begin{align*}\n\t\t\\mathrm{kl}(C_p(f)) &= p\\int_0^{x^*}g(x)\\log \\big(1+pg(x)\\big)\\diff x \\\\\n\t\t\\kappa_2(C_p(f))&= \\int_0^{x^*}\\big(2+pg(x)\\big)\\big[\\log \\big(1+pg(x)\\big)\\big]^2\\diff x \\\\\n\t\t\\kappa_3(C_p(f))&=\\int_0^{x^*}\\big(2+pg(x)\\big)\\big[\\log \\big(1+pg(x)\\big)\\big]^3\\diff x.\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of \\Cref{lem:CpMoments}]\n\tWe prove for kl and the rest are similar. Let $x_p^*$ be the fixed point of $C_p(f)$. By \\Cref{lem:functionals2},\n\t\\[\n\t\t\\mathrm{kl}(C_p(f)) = \\int_0^{x^*_p}\\big(|C_p(f)'(x)|-1\\big)\\log |C_p(f)'(x)|\\diff x.\n\t\\]\n\tFrom the expression of $C_p(f)$ \\eqref{eq:Cp_expression} we know $\\log |C_p(f)'(x)|=0$ in the interval $[x^*,x^*_p]$, and $C_p(f) = f_p = pf+(1-p)\\Id$ in the interval $[0,x^*]$. 
So\n\t\\begin{align*}\n\t\t\\mathrm{kl}(C_p(f)) \n\t\t&= \\int_0^{x^*}\\big(|f_p'(x)|-1\\big)\\log |f_p'(x)|\\diff x.\n\t\\end{align*}\n\tIn the interval $[0,x^*]$, $g(x)=|f'(x)|-1\\geqslant 0$. $f_p'(x) = pf'(x)+(1-p)(-1) = p(f'(x)+1)-1 = -pg(x)-1$, so $|f_p'(x)| = pg(x)+1$. When plugged in to the expression above, we have\n\t\\[\n\t\\mathrm{kl}(C_p(f)) = p\\int_0^{x^*}g(x)\\log \\big(1+pg(x)\\big)\\diff x.\n\t\\]\n\\end{proof}\n\\begin{proof}[Proof of \\Cref{thm:mixtureSGD}]\n\tIt suffices to compute the limits in \\Cref{thm:CLT}, namely\n\t$$T\\cdot \\mathrm{kl}(C_p(f)), \\,\\,T\\cdot \\kappa_2(C_p(f)) \\text{ and } T\\cdot \\kappa_3(C_p(f)).$$\n\tSince $T\\sim p^{-2}$, we can consider $p^{-2}\\mathrm{kl}(C_p(f))$ and so on.\n\n\tAs in \\Cref{lem:CpMoments}, let $x^*$ be the unique fixed point of $f$ and $g(x) = -f'(x)-1 = |f'(x)|-1$. Note that $g(x)\\geqslant 0$ for $x\\in[0,x^*]$. The assumption expressed in terms of $g$ is simply\n\t$$\\int_0^1 g(x)^4\\diff x<+\\infty.$$\n\tIn particular, it implies $g(x)^k$ are integrable in $[0,x^*]$ for $k=2,3,4$. In addition, $\\chi^2_+(f) = \\int_0^{x^*}g(x)^2\\diff x$ by \\Cref{lem:functionals2}.\n\n\n\n\tFor the functional $\\mathrm{kl}$, by \\Cref{lem:CpMoments},\n\t\\begin{align}\n\t\t\\lim_{p\\to0^+}\\frac{1}{p^2}\\,\\mathrm{kl}(C_p(f))\n\t\t&= \\lim_{p\\to0^+}\\int_0^{x^*}g(x)\\cdot \\frac{1}{p}\\log \\big(1+pg(x)\\big)\\diff x\\tag{$*$}\\label{eq:sgd1}\\\\\n\t\t&= \\int_0^{x^*}g(x)\\cdot \\lim_{p\\to0^+}\\frac{1}{p}\\log \\big(1+pg(x)\\big)\\diff x\\nonumber\\\\\n\t\t&=\\int_0^{x^*}g(x)^2\\diff x=\\chi^2_+(f)\\nonumber\n\t\\end{align}\n\tChanging the order of the limit and the integral in \\eqref{eq:sgd1} is approved by {dominated convergence theorem}. To see this, notice that $\\log(1+x)\\leqslant x$.\n\n\n\n\n\n\n\n\tThe integrand in \\eqref{eq:sgd1} satisfies\n\t\\begin{align*}\n\t\t0\\leqslant g(x)\\cdot \\frac{1}{p}\\log \\big(1+pg(x)\\big)\\leqslant g(x)^2.\n\t\\end{align*}\n\tWe already argued that $g(x)^2$ is integrable, so it works as a dominating function and the limit is justified. When $p\\sqrt{T}\\to p_0$, we have\n\t\\[T\\cdot \\mathrm{kl}(C_p(f))\\to p_0^2\\cdot\\chi^2_+(f).\\]\n\tSo the constant $K$ in \\Cref{thm:CLT} is $p_0^2\\cdot\\chi^2_+(f)$.\n\n\tFor the functional $\\kappa_2$ we have\n\t\\begin{align*}\n\t\t\\frac{1}{p^2}\\kappa_2(C_p(f)) &= \\int_0^{x^*}\\big(2+pg(x)\\big)\\Big[\\frac{1}{p}\\log \\big(1+pg(x)\\big)\\Big]^2\\diff x.\n\t\\end{align*}\n\tBy a similar dominating function argument,\n\t\\begin{align*}\n\t\t\\lim_{p\\to0^+}\\frac{1}{p^2}\\,\\kappa_2(C_p(f)) = 2\\int_0^{x^*}g(x)^2\\diff x=2\\chi^2_+(f).\n\t\\end{align*}\n\tAdding in the limit $p\\sqrt{T}\\to p_0$, we know $s^2$ in \\Cref{thm:CLT} is $2p_0^2\\cdot\\chi^2_+(f)$. Once again, we have $s^2 = 2K$.\n\n\tThe same argument involving $g(x)^4$ applies to the functional $\\kappa_3$ and yields\n\t$$\\lim_{p\\to0^+}\\frac{1}{p^3}\\,\\kappa_3(C_p(f)) = 2\\int_0^{x^*}g(x)^3\\diff x.$$\n\tNote the different power in $p$ in the denominator. It means $\\kappa_3(C_p(f)) = o(p^2)$ and hence $T\\cdot \\kappa_3(C_p(f))\\to 0$ when $p\\sqrt{T}\\to p_0$.\n\n\tHence all the limits in \\Cref{thm:CLT} check and we have a $G_\\mu$ limit where\n\t\\[\\mu = 2K\/s = s = \\sqrt{2p_0^2\\cdot\\chi^2_+(f)} = p_0\\cdot\\sqrt{2\\chi^2_+(f)}.\\]\n\tThis completes the proof.\n\\end{proof}\n\\chirep*\n\\begin{proof}[Proof of \\Cref{lem:chi2GDP}]\n\tWe use \\Cref{prop:chiplus} as the tool. Obviously $P=\\N(\\mu,1)$ and $Q=\\N(0,1)$ satisfy the conditions there. 
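Indeed, $T\\big(\\N(\\mu,1),\\N(0,1)\\big) = G_\\mu$ is a symmetric trade-off function, and $G_\\mu(0) = \\Phi\\big(\\Phi^{-1}(1)-\\mu\\big) = 1$.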
So it suffices to compute $\\chi^2_+(\\N(\\mu,1)\\|\\N(0,1))$. Recall that $\\chi^2_+$ is the $F$-divergence with $F(t) = (t-1)_+^2$, so $\\chi^2_+(P\\|Q) = \\E_Q\\big[( \\frac{P}{Q}-1)_+^2\\big]$. Let $\\varphi$ and $\\Phi$ be the density function and cdf of the standard normal. We have\n\t\\begin{align*}\n\t\t\\chi^2_+(G_\\mu) &= \\chi^2_+(\\N(\\mu,1)\\|\\N(0,1))\\\\\n\t\t&= \\E_{x\\sim \\N(0,1)}\\Big[\\Big(\\frac{\\varphi(x-\\mu)}{\\varphi(x)}-1\\Big)_+^2\\Big]\\\\\n\t\t&= \\int_{\\mu\/2}^{+\\infty}\\Big(\\frac{\\varphi(x-\\mu)}{\\varphi(x)}-1\\Big)^2\\cdot \\varphi(x)\\diff x\\\\\n\t\t&= \\int_{\\mu\/2}^{+\\infty}\\Big(\\frac{\\varphi(x-\\mu)}{\\varphi(x)}\\Big)^2\\cdot \\varphi(x)\\diff x -2 \\int_{\\mu\/2}^{+\\infty}\\varphi(x-\\mu)\\diff x+\\int_{\\mu\/2}^{+\\infty}\\varphi(x)\\diff x\\\\\n\t\t&= \\underbrace{\\int_{\\mu\/2}^{+\\infty}e^{2\\mu x-\\mu^2}\\cdot \\varphi(x)\\diff x}_{I}-2(1-\\Phi(\\mu\/2))+\\Phi(-\\mu\/2)\\\\\n\t\t&=I+3\\Phi(-\\mu\/2)-2.\n\t\\end{align*}\n\tFor the integral $I$,\n\t\\begin{align*}\n\t\tI &= \\int_{\\mu\/2}^{+\\infty}e^{2\\mu x-\\mu^2}\\cdot \\varphi(x)\\diff x\\\\\n\t\t&= \\int_{\\mu\/2}^{+\\infty}\\frac{1}{\\sqrt{2\\pi}}\\cdot e^{2\\mu x-\\mu^2-x^2\/2}\\diff x\\\\\n\t\t&= \\int_{\\mu\/2}^{+\\infty}\\frac{1}{\\sqrt{2\\pi}}\\cdot e^{-(x-2\\mu)^2\/2}\\cdot e^{\\mu^2}\\diff x\\\\\n\t\t&= e^{\\mu^2}\\cdot P[\\N(2\\mu,1)\\geqslant \\mu\/2]\\\\\n\t\t&= e^{\\mu^2}\\cdot\\Phi(3\\mu\/2)\n\t\\end{align*}\n\tThis completes the proof.\n\\end{proof}\n\\SGDlimitrep*\n\\begin{proof}[Proof of \\Cref{thm:SGDlimit}]\n\tCombining \\Cref{thm:sgdcompo,thm:mixtureSGD} and \\Cref{lem:chi2GDP}, it suffices to check $\\int_0^1(f'(x)+1)^4\\diff x<+\\infty$ when $f(x) = G_a(x) = \\Phi(\\Phi^{-1}(1-x)-a)$. Let $y = \\Phi^{-1}(1-x)$. We have $\\varphi(y)\\diff y = -\\diff x$. Hence\n\t\\[G_a'(x) = \\varphi(y-a) \\cdot \\frac{\\diff y}{\\diff x} = -\\frac{\\varphi(y-a)}{\\varphi(y)} = -\\mathrm{e}^{ay-\\frac{a^2}{2}}.\\]\n\tThe integral is\n\t\\begin{align*}\n\t\t\\int_0^1(G_a'(x)+1)^4\\diff x\n\t\t&=\\int_{-\\infty}^{+\\infty}(-\\mathrm{e}^{ay-\\frac{a^2}{2}}+1)^4\\varphi(y)\\diff y,\n\t\\end{align*}\n\twhich is just a linear combination of moment generating functions of the standard normal and hence finite.\n\\end{proof}\n\\functionalGmurep*\n\\begin{proof}[Proof of \\Cref{lem:functionalGmu}]\n\tWe will use \\Cref{lem:CpMoments}. It's easy to show the fixed point of $G_\\mu$ is $x^* = \\Phi(-\\mu\/2)$. So\n\t\\begin{align*}\n\t\t\\mathrm{kl}\\big(C_p(G_\\mu)\\big) &=\n\t\tp\\int_0^{\\Phi(-\\mu\/2)}\\big(-G_\\mu'(x)-1\\big)\\log \\big(1+p(-G_\\mu'(x)-1)\\big)\\diff x\n\t\\end{align*}\n\tUsing the same change of variable $y = \\Phi^{-1}(1-x) = -\\Phi^{-1}(x)$, we have\n\t\\begin{align*}\n\t\t\\mathrm{kl}\\big(C_p(G_\\mu)\\big) &=\n\t\tp\\int^{+\\infty}_{\\mu\/2}\\Big(\\frac{\\varphi(y-\\mu)}{\\varphi(y)}-1\\Big)\\log \\Big(1+p\\Big(\\frac{\\varphi(y-\\mu)}{\\varphi(y)}-1\\Big)\\Big)\\varphi(y)\\diff y\\\\\n\t\t&=p\\int_{\\mu\/2}^{+\\infty} Z(y)\\cdot\\big(\\varphi(y-\\mu)-\\varphi(y)\\big)\\diff y.\n\t\\end{align*}\n\tThe rest can be proved similarly.\n\\end{proof}\n\\SGDBerryrep*\n\\begin{proof}[Proof of \\Cref{thm:SGDBerry}]\n\tFollows from plugging in the expressions above into \\Cref{thm:Berry}.\n\\end{proof}\n\\section{Conversion from $f$-DP to Divergence Based DP}\n\\label{app:relation}\nAs the title suggests, the central question of this section is the conversion from $f$-DP to divergence based DP. 
It boils down to the conversion from trade-off functions to various divergences. We first introduce the most general tool, and then give explicit formula for a large class of divergences, including {R\\'enyi} divergence. At the end we prove the claim we made in \\Cref{sec:conn-with-blackw} that privacy notion based on R\\'enyi divergence does not induce a strictly larger order than the Blackwell order.\n\n\nSuppose we have a ``divergence'' $D(\\cdot\\|\\cdot)$, which takes in a pair of probability distributions on a common measurable space and outputs a number. We say $D$ satisfies data processing inequality if $D\\big(\\mathrm{Proc}(P)\\|\\mathrm{Proc}(Q)\\big)\\geqslant D(P\\|Q)$ for any post-processing Proc.\n\\begin{proposition} \\label{thm:functional}\n\tIf $D(\\cdot\\|\\cdot)$ satisfies data processing inequality, then there exists a functional $l_D:\\T\\to\\R$ that computes $D$ through the trade-off function:\n\t$$D(P\\|Q) = l_D\\big(\\F(P,Q)\\big).$$\n\\end{proposition}\n\\begin{proof}\n\tIt's almost immediate from the following\n\t\\begin{lemma} \\label{lem:onlyonce}\n\tIf $\\F(P',Q')\\geqslant\\F(P,Q)$, then $D(P'\\|Q') \\leqslant D(P\\|Q)$. In particular, $\\F(P,Q)=\\F(P',Q')$ implies $D(P\\|Q) = D(P'\\|Q')$.\n\t\\end{lemma}\n\tTo see why the lemma holds, notice by Blackwell's theorem, $\\F(P',Q')\\geqslant\\F(P,Q) $ implies that there is a Proc such that $P'=\\mathrm{Proc}(P),Q'=\\mathrm{Proc}(Q)$, and by data processing inequality, $D(P'\\|Q') \\leqslant D(P\\|Q)$.\n\n\tThe lemma implies the existence of $l_D$ because we can define $l_D(f) = D(P\\|Q)$ through any pair $P,Q$ such that $T(P,Q)=f$. This definition is independent of the choice of $P$ and $Q$.\n\\end{proof}\nAn immediate corollary is\n\\begin{corollary}\\label{cor:wellbeing}\n\tIf two trade-off functions $f,g$ satisfy $f\\geqslant g$, then $l_D(f)\\le l_D(g)$.\n\\end{corollary}\n\\paragraph{Example: $F$-divergence}\nLet $P,Q$ be a pair of distributions with density $p$ and $q$ with respect to some common dominating measure. For a convex function $F:(0,+\\infty)\\to\\R$ such that $F(1) = 0$, the $F$-divergence $D_F(P\\|Q)$ is defined as (see \\cite{liese2006divergences})\n\\begin{align*}\n\tD_F(P\\|Q) &= \\int_{\\{pq>0\\}} F\\Big(\\frac{p}{q}\\Big) \\diff Q + F(0)Q[p=0]+\\tau_F P[q=0]\n\\end{align*}\nwhere $F(0) = \\lim_{t\\to 0^+} {F(t)}$ and $\\tau_F:=\\lim_{t\\to+\\infty} \\frac{F(t)}{t}$.\nWe further set the rules $F(0) \\cdot 0 = \\tau_F\\cdot 0=0$ even if $F(0)=+\\infty$ or $\\tau_F=+\\infty$.\n\\begin{proposition} \\label{prop:fdiv}\nLet $z_f = \\inf\\{x\\in[0,1]:f(x)=0\\}$ be the first zero of $f$. The functional $l_F:\\T\\to\\R$ that computes $F$-divergence has expression\n$$l_F(f) = \\int_0^{z_f} F\\big({\\big|f'(x)\\big|}^{-1}\\big)\\cdot \\big|f'(x)\\big| \\diff x + F(0)\\cdot(1-f(0))+\\tau_F\\cdot (1-z_f).$$\nIn particular, when $f\\in\\T^S$ and $f(0)=1$, we have\n\\begin{equation}\\label{eq:fdiv}\n\tl_F(f) = \\int_0^1 F\\big({\\big|f'(x)\\big|}^{-1}\\big)\\cdot \\big|f'(x)\\big| \\diff x.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}[Proof of \\Cref{prop:fdiv}]\n\tFor a given trade-off function $f$, in order to determine $l_F(f)$, it suffices to find $P,Q$ such that $f=T(P,Q)$ and then use the property $l_F(f) = D_F(P\\|Q)$. Such a pair is constructed in the proof of \\Cref{prop:trade-off}: $P$ is the uniform distribution on $[0,1]$ and $Q$ has density $|f'(1-x)|$ on $[0,1)$ and an atom at $1$ with $Q[\\{1\\}] = 1-f(0)$. 
When we set the dominating measure $\\mu$ to be Lebesgue in $[0,1)$ and have an atom at 1 with measure 1, the densities $p$ and $q$ have expressions\n\t\\[p(x) = \n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t1, \t\t& x\\in[0,1), \\\\\n\t\t0, & x=1.\n\t\\end{array}\n\t\\right.\n\t\\quad\\text{ and }\\quad q(x) = \n\t\\left\\{\n\t\\begin{array}{ll}\n\t\t|f'(1-x)|, \t\t& x\\in[0,1), \\\\\n\t\t1-f(0), & x=1.\n\t\\end{array}\n\t\\right.\\]\n\tReaders should keep in mind that the value at 1 matters because the base measure $\\mu$ has an atom there. For a trade-off function $f$, its derivative $f'(x)$ never vanishes before $f$ hits zero, i.e. $|f'(x)|>0$ for $x<z_f$ and $f'(x)=0$ for $x>z_f$. Since $q(x) = |f'(1-x)|$ on $[0,1)$, we have $\\{q>0\\} = (1-z_f,1]$ and $\\{q=0\\} = [0,1-z_f]$.\n\tSo\n\t\\begin{align*}\n\t\tD_F(P\\|Q)\n\t\t&= \\int_{\\{pq>0\\}} F\\Big(\\frac{p}{q}\\Big) \\diff Q + F(0)Q[p=0]+\\tau_F P[q=0]\\\\\n\t\t&= \\int_{1-z_f}^1 F(|f'(1-x)|^{-1}) \\cdot |f'(1-x)| \\diff x+ F(0)\\cdot(1-f(0)) + \\tau_F\\cdot (1-z_f)\\\\\n\t\t&=\\int^{z_f}_0 F(|f'(x)|^{-1}) \\cdot |f'(x)| \\diff x+ F(0)\\cdot(1-f(0)) + \\tau_F\\cdot (1-z_f).\n\t\\end{align*}\n\tStarting from the second line, the integral is a Lebesgue integral. Now the proof is complete.\n\\end{proof}\nBecause of the generality of $F$-divergence, \\Cref{eq:fdiv} has broad applications. Many important divergences can be computed via a simple formula. Below are some of the examples.\n\\begin{itemize}\n\t\\item \\textbf{Total variation distance} corresponds to $F(t) = \\frac{1}{2}|t-1|$. Easy calculation yields\n\t\\[l_{\\mathrm{TV}}(f) = \\frac{1}{2}\\int_0^1 \\big|1+f'(x)\\big|\\diff x.\\]\n\t\\item \\textbf{KL divergence} corresponds to $F(t) = t\\log t$. We have\n\t\\[l_{\\mathrm{KL}}(f) =-\\int_0^1 \\log\\big|f'(x)\\big|\\diff x.\\]\n\tThis functional plays an important role in our central limit theorem. We call it $\\mathrm{kl}(f)$ there.\n\t\\item \\textbf{{Power} divergence} of order $\\alpha$ corresponds to $F_\\alpha(t) =\\frac{t^\\alpha-\\alpha(t-1)-1}{\\alpha(\\alpha-1)}$. 
The corresponding functional is\n\t\\begin{align*}\n\t\tl_{F_\\alpha}(f) = \n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t\t\\frac{1}{\\alpha(\\alpha-1)}\\Big(\\int_0^1 \\big|f'(x)\\big|^{1-\\alpha}\\diff x - 1\\Big), \t\t& z_f = 1, \\\\\n\t\t\t+\\infty, & z_f<1.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{align*}\n\t\\item \\textbf{{R\\'enyi} divergence} of order $\\alpha$ is defined as\n\t\\[D_\\alpha(P\\|Q) = \\tfrac{1}{\\alpha-1}\\log \\big(\\E_P(\\tfrac{p}{q})^{\\alpha-1}\\big)=\\tfrac{1}{\\alpha-1}\\log\\int p^\\alpha q^{1-\\alpha}.\\]\n\tIt is related to power divergence of order $\\alpha$ by\n\t\\begin{equation}\\label{eq:renyipower}\n\t\tD_\\alpha(P\\|Q) = \\frac{1}{\\alpha-1}\\cdot \\log\\big(\\alpha(\\alpha-1)D_{F_\\alpha}(P\\|Q)+1\\big).\n\t\\end{equation}\n\tSo the corresponding functional, which we denote by $l_\\alpha^{\\text{R{\\'e}nyi}}$, has expression\n\t\\begin{equation}\\label{eq:renyi_functional}\n\t\tl_\\alpha^{\\text{R{\\'e}nyi}}(f)=\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t\t\\frac{1}{\\alpha-1}\\log \\int_0^1 \\big|f'(x)\\big|^{1-\\alpha}\\diff x, \t\t& z_f = 1, \\\\\n\t\t\t+\\infty, & z_f<1.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\n\\end{itemize}\n\\begin{proof}[Proof of \\Cref{eq:renyipower}]\n\t\\begin{align*}\n\t\tD_{F_\\alpha}(P\\|Q) &= \\int q\\cdot F_\\alpha\\Big(\\frac{p}{q}\\Big)\\\\\n\t\t&= \\int q\\cdot \\frac{(\\frac{p}{q})^\\alpha - \\alpha(\\frac{p}{q}-1)-1}{\\alpha(\\alpha-1)}\\\\\n\t\t&=\\frac{1}{\\alpha(\\alpha-1)}\\cdot \\int p^\\alpha q^{1-\\alpha} + 0 -\\frac{1}{\\alpha(\\alpha-1)}\\\\\n\t\t&=\\frac{1}{\\alpha(\\alpha-1)}\\Big(e^{(\\alpha-1)D_\\alpha(P\\|Q)}-1\\Big).\n\t\\end{align*}\n\tSolving for $D_\\alpha(P\\|Q)$ yields \\eqref{eq:renyipower}.\n\\end{proof}\nIntroduced in \\cite{renyi}, a mechanism $M$ is said to be $(\\alpha, \\epsilon)$-{R\\'enyi} differentially private (RDP) if\nfor all neighboring pairs $S, S'$ it holds that\n\\begin{equation}\\label{eq:RDP}\nD_\\alpha(M(S) \\| M(S')) \\le \\epsilon,\n\\end{equation}\nA few other DP definitions, including zero concentrated differential privacy (zCDP) \\cite{concentrated2} and truncated concentrated differential privacy (tCDP) \\cite{tcdp}, are defined through imposing bounds in the form of \\eqref{eq:RDP} with certain collections of $\\alpha$. The following proposition provides the general conversion from $f$-DP to RDP via \\eqref{eq:renyi_functional}.\n\\begin{proposition} \\label{lem:}\n\tIf a mechanism is $f$-DP, then it is $\\big(\\alpha,l_\\alpha^{\\text{R{\\'e}nyi}}(f)\\big)$-RDP for any $\\alpha>1$.\n\\end{proposition}\n\nSpecializing to the most important subclass, we have a simple expression.\n\\begin{corollary}\n\tIf a mechanism is $\\mu$-GDP, then it is $(\\alpha,\\frac{1}{2}\\mu^2\\alpha)$-RDP for any $\\alpha>1$.\n\\end{corollary}\n\\begin{proof}\nBy the property of $l_\\alpha^{\\text{R{\\'e}nyi}}$, we know $l_\\alpha^{\\text{R{\\'e}nyi}}(G_\\mu) = D_\\alpha\\big(\\N(0,1)\\|\\N(\\mu,1)\\big)$.\n\tEasy calculation shows $D_\\alpha\\big(\\N(0,1)\\|\\N(\\mu,1)\\big) = \\frac{1}{2}\\mu^2\\alpha$. Readers can refer to Proposition 7 in \\cite{renyi} for a detailed derivation.\n\\end{proof}\n\n\nThe functional $l_D$ allows a consistent, easy conversion from an $f$-DP guarantee to all divergence based DP guarantees. The above conversion to RDP is, among all, the most useful example. On the other hand, conversion from divergence, either to trade-off function or to other divergences, often requires case by case analysis, sometimes significantly non-trivial. 
What's worse is that it is often hard to tell whether a given conversion between divergences is improvable or already lossless. For conversion between $F$-divergences, a systematic approach called joint range is developed in \\cite{harremoes2011pairs}, but it is still significantly more complicated than \\Cref{eq:fdiv}. On the other hand, \\Cref{thm:functional} means conversion from trade-off to divergence is lossless and unimprovable.\n\nThis fine-grainedness of trade-off function (see also \\Cref{sec:conn-with-blackw}) is somewhat expected: it summarizes the indistinguishability of a pair of distribution by a \\textit{function}, which is an infinite dimensional object. In contrast, divergences usually just summarize by a number, which is obviously less informative by a function.\n\nConnecting back to informativeness, we argue that, even when we consider $\\{D_\\alpha(P\\|Q):\\alpha>1\\}$ as an infinite dimensional object, it still does not induce Blackwell's order. In the language of \\Cref{sec:conn-with-blackw}, $\\mathrm{Ineq}({\\preceq_{\\textup{R\\'enyi}}})\\supsetneq \\mathrm{Ineq}({\\preceq_{\\mathrm{Blackwell}}})$. In other words, there are two pairs of distributions, one easier to distinguish than the other in the R\\'enyi sense, but not in the Blackwell sense.\n\n\nLet $P_\\ep$ and $Q_{\\ep}$ denote Bernoulli distributions with success probabilities $\\frac{\\mathrm{e}^\\ep}{1+\\mathrm{e}^\\ep}$ and $\\frac{1}{1+\\mathrm{e}^\\ep}$, respectively.\n\\begin{proposition}\\label{prop:renyi_fail}\nThere exists $\\ep>0$ such that the following two statements are both true:\t\n\\begin{enumerate}\n\\item[(a)]\nFor all $\\alpha>1$, $D_\\alpha(P_\\ep \\| Q_\\ep) \\leqslant D_\\alpha\\big(\\N(0,1) \\| \\N(\\ep,1)\\big)$;\n\\item[(b)] $\\mathrm{TV}(P_\\ep, Q_\\ep)> \\mathrm{TV}\\big(\\N(0,1), \\N(\\ep,1)\\big)$.\n\\end{enumerate}\n\\end{proposition}\nSurprisingly, although the whole collection of {R\\'enyi} divergences asserts that the pair $\\big(\\N(0,1) ,\\N(\\ep,1)\\big)$ is easier to distinguish than $(P_\\ep , Q_\\ep)$, one can nevertheless achieve smaller summed type I and type II errors when trying to distinguish $P_\\ep, Q_\\ep$.\nFact (a) equivalently says that $\\big(\\N(0,1) ,\\N(\\ep,1)\\big)\\preceq_{\\text{R\\'enyi}} (P_\\ep \\| Q_\\ep)$, while fact (b) excludes the possibility that $\\big(\\N(0,1) ,\\N(\\ep,1)\\big)\\preceq_{\\mathrm{Blackwell}} (P_\\ep , Q_\\ep)$, since otherwise data processing inequality of the total variation distance would imply $\\mathrm{TV}(P_\\ep, Q_\\ep)\\leqslant \\mathrm{TV}\\big(\\N(0,1), \\N(\\ep,1)\\big)$.\n\nWe point out that (a) in \\Cref{{prop:renyi_fail}} in fact holds for all $\\ep\\geqslant0$, which is proved in \\cite{concentrated2}, partially based on numerical evidence. Our proof is entirely analytical.\n\\begin{proof}[Proof of \\Cref{prop:renyi_fail}]\n\t\\begin{align*}\n\t\tD_\\alpha(P_\\ep\\|Q_\\ep)\n\t\t&= \\frac{1}{\\alpha-1}\\log (p^\\alpha q^{1-\\alpha}+q^\\alpha p^{1-\\alpha})\\\\\n\t\t&= \\frac{1}{\\alpha-1}\\log\\, \\frac{\\mathrm{e}^{\\ep\\alpha}+\\mathrm{e}^{\\ep(1-\\alpha)}}{1+\\mathrm{e}^\\ep}.\\\\\n\t\t&= \\frac{1}{\\alpha-1}\\log\\, \\frac{\\mathrm{e}^{\\ep(\\alpha-\\frac{1}{2})}+\\mathrm{e}^{\\ep(\\frac{1}{2}-\\alpha)}}{\\mathrm{e}^{-\\frac{\\ep}{2}}+\\mathrm{e}^{\\frac{\\ep}{2}}}\\\\\n\t\t&= \\frac{1}{\\alpha-1}\\log\\, \\frac{\\cosh \\ep(\\alpha-\\frac{1}{2})}{\\cosh \\frac{\\ep}{2}}\n\t\\end{align*}\n\tNow we claim that $\\cosh x \\cdot \\mathrm{e}^{-\\frac{1}{2}x^2}$ is monotone decreasing for $x\\geqslant0$. 
To see this, simply take the derivative\n\t$$\\big(\\cosh x \\cdot \\mathrm{e}^{-\\frac{1}{2}x^2}\\big)' = \\sinh x\\cdot \\mathrm{e}^{-\\frac{1}{2}x^2} + \\cosh x \\cdot (-x) \\cdot \\mathrm{e}^{-\\frac{1}{2}x^2} = (\\tanh x - x)\\cdot\\cosh x\\cdot \\mathrm{e}^{-\\frac{1}{2}x^2}.$$\n\tIt is easy to show $\\tanh x\\leqslant x$ for $x\\geqslant0$. Hence the derivative is always non-positive, which justifies the claimed monotonicity.\n\tSince $\\alpha>1,\\ep\\geqslant0$, we have $\\ep(\\alpha-\\frac{1}{2})\\geqslant\\frac{\\ep}{2}\\geqslant 0$. By the monotonicity,\n\t\\begin{align*}\n\t\t\\cosh \\ep(\\alpha-\\frac{1}{2}) \\cdot \\mathrm{e}^{-\\frac{1}{2}\\ep^2(\\alpha-\\frac{1}{2})^2} &\\leqslant \\cosh \\frac{\\ep}{2} \\cdot \\mathrm{e}^{-\\frac{1}{2}\\cdot (\\frac{\\ep}{2})^2}.\n\t\\end{align*}\n\tThat is,\n\t\\[\n\t\t\\frac{\\cosh \\ep(\\alpha-\\frac{1}{2})}{\\cosh \\frac{\\ep}{2}} \\leqslant \\mathrm{e}^{\\frac{1}{2}\\ep^2\\alpha(\\alpha-1)}.\n\t\\]\n\tSo for any $\\ep\\geqslant0$,\n\t\\begin{align*}\n\t\tD_\\alpha(P_\\ep\\|Q_\\ep) = \\frac{1}{\\alpha-1}\\cdot \\log\\, \\frac{\\cosh \\ep(\\alpha-\\frac{1}{2})}{\\cosh \\frac{\\ep}{2}} \\leqslant \\frac{1}{2}\\ep^2\\alpha = D_\\alpha\\big(\\N(0,1)\\|\\N(\\ep,1)\\big).\n\t\\end{align*}\n\tFor the second part, easy calculation and Taylor expansion yields\n\t\\begin{align*}\n\t\t\\mathrm{TV}(P_\\ep,Q_\\ep) &= \\frac{\\mathrm{e}^\\ep-1}{\\mathrm{e}^\\ep+1} = \\tanh \\frac{\\ep}{2} = \\frac{\\ep}{2} +o(\\ep^2),\\\\\n\t\t\\mathrm{TV}\\big(\\N(0,1),\\N(\\ep,1)\\big) &= 1-2\\Phi(-\\frac{\\ep}{2}) = \\int_{-\\frac{\\ep}{2}}^{\\frac{\\ep}{2}}\\varphi(x)\\diff x \\\\\n\t\t&= \\varphi(0)\\cdot\\ep+o(\\ep^2) = \\frac{1}{\\sqrt{2\\pi}}\\cdot\\ep+o(\\ep^2).\n\t\\end{align*}\n\tSince $\\frac{1}{\\sqrt{2\\pi}}<\\frac{1}{2}$, for small enough $\\ep$, $\\mathrm{TV}\\big(\\N(0,1),\\N(\\ep,1)\\big)< \\mathrm{TV}(P_\\ep,Q_\\ep)$.\n\\end{proof}\n\nIn summary, \\Cref{thm:functional} and \\Cref{prop:fdiv} provide general tools to losslessly convert a trade-off function to divergences and hence justifies the fine-grainedness of trade-off functions. This is complementary to the informativeness argument in \\Cref{sec:conn-with-blackw}.\n\n\n\\section{Proof of \\Cref{thm:fast}}\n\\label{app:fast}\nThis section is devoted to the proof of \\Cref{thm:fast}. \nSince we always assume $\\delta=0$, it is dropped from the subscript and we use $f_\\ep$ to denote $f_{\\ep,0}$. As in the proof of \\Cref{thm:Berry}, the first step is to express $f_\\ep^{\\otimes n}$ in the form\n$$1-f_\\ep^{\\otimes n}(\\alpha) = F_n\\big[x_n-F_n^{-1}(1-\\alpha)\\big]$$\nwith $F_n\\to\\Phi$ and $x_n\\to 1$. Then we show both convergences have rate $1\/n$. \n\\begin{proof}[Proof of \\Cref{thm:fast}]\n\tLet's find $F_n$ first. Fix $\\ep$ and let $p = \\tfrac{1}{1+e^\\ep}, q = 1-p = p\\cdot e^\\ep$. 
Recall that $f_\\ep^{\\otimes n} = T\\big(B(n,p),B(n,q)\\big)$ and we know that it is the linear interpolation of points given by binomial tails.\n\tThe main goal here is to avoid the linear interpolation.\n\t\n\tFor the simple hypothesis testing problem $B(n,p)$ vs $B(n,q)$, we know via Neyman-Pearson that every optimal rejection rule $\\phi$ must have the following form:\n\t$$\n\t\t\\phi(x)=\\left\\{\n\t\t\\begin{array}{ll}\n\t\t1, \t\t& \\text{if } x>k, \\\\\n\t\t1-c, \t\t& \\text{if } x=k, \\\\\n\t\t0, \t\t& \\text{if } x<k\n\t\t\\end{array}\n\t\t\\right.\n\t$$\n\tfor some integer $k$ and some constant $c\\in[0,1]$. Let $X\\sim B(n,p)$ and let $Y\\sim U[0,1]$ be independent of $X$; note that $n-X\\sim B(n,q)$ and that $\\phi(x) = \\P[x+Y>k+c]$. The type I and type II errors of such a rule are\n\t\\begin{align*}\n\t\t\\alpha_{(k,c)} &= \\E_{x\\sim B(n,p)}[\\phi(x)] = \\P[X>k] + (1-c)\\P[X=k] \\\\\n\t\t&= \\P[X>k] + \\P[Y>c]\\cdot \\P[X=k] \\\\\n\t\t&= \\P[X+Y>k+c]\\\\\n\t\t\\beta_{(k,c)}&= \\E_{x\\sim B(n,q)}[1-\\phi(x)]= \\E[1-\\phi(n-X)]\\\\\n\t\t&=\\P[n-X<k] + c\\cdot\\P[n-X=k]\\\\\n\t\t&= \\P[X>n-k]+\\P[Y>1-c]\\cdot \\P[X = n- k]\\\\\n\t\t&= \\P[X+Y>n+1-k-c]\n\t\\end{align*}\n\t$X+Y$ is supported on $[0,n+1]$ and has a piecewise constant density. As a consequence, the cdf $F_{X+Y}$ is a bijection between $[0,n+1]$ and $[0,1]$. So for a fixed type I error $\\alpha\\in[0,1]$, the optimal testing rule $(k,c)$ is uniquely determined by the formula\n\t$$k+c = F_{X+Y}^{-1} (1-\\alpha).$$\n\tAnd we have for the trade-off function:\n\t$$1-f_\\ep^{\\otimes n}(\\alpha) = F_{X+Y}\\big( n+1 - F_{X+Y}^{-1} (1-\\alpha)\\big).$$\n\tNow we proceed to write $F_{X+Y}$ in a form that reveals its central limit behavior. First notice $\\E[X+Y] = np+\\tfrac{1}{2}, \\Var[X+Y] = \\Var[X] + \\Var[Y] = npq+\\tfrac{1}{12}$. For simplicity denote this variance by $\\sigma^2$. Let $F_n$ be the normalized cdf of $X+Y$, i.e.\n\t$$F_n(x) = P\\Big[\\tfrac{X+Y - \\E[X+Y]}{\\sqrt{\\Var[X+Y]}}\\leqslant x\\Big] = F_{X+Y}\\Big[np+\\tfrac{1}{2} + x\\sigma\\Big].$$\n\tSimple algebra yields\n\t\\begin{equation}\\label{eq:fast1}\n\t1-f_\\ep^{\\otimes n}(\\alpha) = F_n\\Big[\\tfrac{n(q-p)}{\\sigma}-F_n^{-1}(1-\\alpha)\\Big].\n\t\\end{equation}\n\tIt's easy to show that $\\tfrac{n(q-p)}{\\sigma}\\to 1$ and $F_n\\to\\Phi$ pointwise. However, we need to show that the convergence rates are both $1\/n$, which is technically involved, especially for the convergence of $F_n$. 
In view of this, we pack the conclusions into the following lemmas, and provide the proofs later:\n\t\\begin{lemma} \\label{lem:pqlimit}\n\tWith $\\ep = 1\/\\sqrt{n}$ and $p,q,\\sigma$ defined as above,\n\t\n\t\\[\\frac{n(q-p)}{\\sigma} = 1-\\frac{1}{8n}+o(n^{-1}).\\]\n\t\\end{lemma}\n\tAs a consequence, there exists $C>0$ such that\n\t\\begin{equation}\\label{eq:fast2}\n\t\\big|\\tfrac{n(q-p)}{\\sigma}-1\\big|\\leqslant\\tfrac{C}{n}.\n\t\\end{equation}\n\t\\begin{lemma}\\label{prop:ch.f.}\n\t\tThere is a positive number $C$ such that $|F_n(x)-\\Phi(x)|\\leqslant \\tfrac{C}{n}$ holds for $n\\geqslant 2$.\n\t\\end{lemma}\n\tSince $\\Phi(x)\\geqslant F_n(x)-\\frac{C}{n}$, setting $x = F_n^{-1}(1-\\alpha)$ yields\n\t\\[\\Phi\\big(F_n^{-1}(1-\\alpha)\\big)\\geqslant F_n\\big(F_n^{-1}(1-\\alpha)\\big)-\\tfrac{C}{n} = 1-\\alpha-\\tfrac{C}{n}.\\]\n\tHence\n\t\\begin{equation}\\label{eq:fast3}\n\t\tF_n^{-1}(1-\\alpha)\\geqslant \\Phi^{-1}\\big(1-\\alpha-\\tfrac{C}{n}\\big).\n\t\\end{equation}\n\tWith (\\ref{eq:fast1}--\\ref{eq:fast3}) and \\Cref{prop:ch.f.} we have\n\t\\begin{align*}\n\t\t1-f_\\ep^{\\otimes n}(\\alpha) &= F_n\\Big[\\tfrac{n(q-p)}{\\sigma}-F_n^{-1}(1-\\alpha)\\Big]\\\\\n\t\t&\\leqslant \\Phi\\Big[\\tfrac{n(q-p)}{\\sigma}-F_n^{-1}(1-\\alpha)\\Big]+\\tfrac{C}{n}\\\\\n\t\t&\\leqslant \\Phi\\Big[1+\\tfrac{C}{n}-F_n^{-1}(1-\\alpha)\\Big]+\\tfrac{C}{n}\\\\\n\t\t&\\leqslant \\Phi\\Big[1+\\tfrac{C}{n}-\\Phi^{-1}(1-\\alpha-\\tfrac{C}{n})\\Big]+\\tfrac{C}{n}.\n\t\\end{align*}\n\tThe function $\\Phi$ is $\\frac{1}{\\sqrt{2\\pi}}$-Lipschitz, so\n\t\\[1-f_\\ep^{\\otimes n}(\\alpha)\\leqslant \\Phi\\Big[1-\\Phi^{-1}(1-\\alpha-\\tfrac{C}{n})\\Big]+\\tfrac{1}{\\sqrt{2\\pi}}\\cdot\\tfrac{C}{n}+\\tfrac{C}{n}.\\]\n\tBy blowing up the current $C$ and using the symmetry of standard normal, we have\n\t\\begin{align*}\n\t\tf_\\ep^{\\otimes n}(\\alpha) &\\geqslant 1-\\Phi\\Big[1-\\Phi^{-1}(1-\\alpha-\\tfrac{C}{n})\\Big]-\\tfrac{C}{n}\\\\\n\t\t&= \\Phi\\Big[\\Phi^{-1}(1-\\alpha-\\tfrac{C}{n}) - 1 \\Big]- \\tfrac{C}{n}\\\\\n\t\t& = G_1(\\alpha+\\tfrac{C}{n})-\\tfrac{C}{n}.\n\t\\end{align*}\n\tSimilarly, we can show the upper bound\n\t\\[f_\\ep^{\\otimes n}(\\alpha)\\leqslant G_1(\\alpha-\\tfrac{C}{n})+\\tfrac{C}{n}.\\]\n\tThe proof is now complete.\n\\end{proof}\nNext we show \\Cref{lem:pqlimit} and \\Cref{prop:ch.f.}.\n\\begin{proof}[Proof of \\Cref{lem:pqlimit}]\n\tThe proof is basically careful Taylor expansion. We will frequently use the assumption that $\\ep = 1\/\\sqrt{n}$. 
First we factor the objective as\n\t\\begin{align*}\n\t\t\\frac{n(q-p)}{\\sigma} &= 2\\sqrt{n}(q-p)\\cdot\\frac{\\sqrt{n}}{2\\sigma} = \\frac{2(q-p)}{\\ep}\\cdot\\frac{\\sqrt{n}}{2\\sigma}\n\t\\end{align*}\n\tand consider Taylor expansions of the two factors separately.\n\n\tFor the first factor, recall that\n\t\\[q-p = \\frac{e^\\ep-1}{e^\\ep+1} = \\frac{e^{\\tfrac{\\ep}{2}}-e^{-\\tfrac{\\ep}{2}}}{e^{\\tfrac{\\ep}{2}}+e^{-\\tfrac{\\ep}{2}}} = \\tanh \\tfrac{\\ep}{2}.\\]\n\tUsing the Taylor expansion $\\tanh x = x-x^3\/3+o(x^4)$, we have\n\t\\begin{equation}\\label{eq:pq1}\n\t\t\\tfrac{2(q-p)}{\\ep} = \\tanh \\tfrac{\\ep}{2} \\,\\big\/\\, \\tfrac{\\ep}{2} = 1-\\tfrac{1}{3}(\\tfrac{\\ep}{2})^2+o(\\ep^3) = 1-\\tfrac{1}{12n} + o(n^{-3\/2}).\n\t\\end{equation}\n\tFor the second one, since $p+q=1$, we have $4pq=(p+q)^2-(p-q)^2 = 1-(q-p)^2$.\n\tA shorter expansion shows $q-p = \\tanh\\tfrac{\\ep}{2} = \\tfrac{\\ep}{2} +o(\\ep^2)$, and hence\n\t\\begin{align*}\n\t\t4pq= 1-\\big(\\tfrac{\\ep}{2}+o(\\ep^2)\\big)^2 = 1-\\tfrac{\\ep^2}{4}+o(\\ep^3)= 1-\\tfrac{1}{4n}+o(n^{-3\/2}).\n\t\\end{align*}\n\tRecall that $\\sigma$ is defined to be $\\sqrt{npq+\\frac{1}{12}}$. Using the above expansion of $4pq$, we have\n\t\\begin{align*}\n\t\t\\frac{\\sqrt{n}}{2\\sigma} &= \\sqrt{\\frac{n}{4\\sigma^2}} = \\sqrt{\\frac{n}{4npq+\\tfrac{1}{3}}} = \\big(4pq+\\tfrac{1}{3n}\\big)^{-1\/2} = \\big(1+\\tfrac{1}{12n}+o(n^{-3\/2})\\big)^{-1\/2}\n\t\\end{align*}\n\tSince $(1+x)^{-1\/2} = 1-\\tfrac{1}{2}x+o(x)$, we have\n\t\\begin{equation}\\label{eq:pq2}\n\t\t\\frac{\\sqrt{n}}{2\\sigma} = 1-\\tfrac{1}{2}\\big(\\tfrac{1}{12n}+o(n^{-3\/2})\\big)+o(n^{-1}) = 1-\\tfrac{1}{24n}+o(n^{-1}).\n\t\\end{equation}\n\tCombining the expansions \\eqref{eq:pq1} and \\eqref{eq:pq2},\n\t\\begin{align*}\n\t\t\\frac{n(q-p)}{\\sigma} &= \\frac{2(q-p)}{\\ep}\\cdot\\frac{\\sqrt{n}}{2\\sigma}\\\\\n\t\t&=\\Big(1-\\frac{1}{12n}+o(n^{-3\/2})\\Big)\\cdot\\Big(1-\\frac{1}{24n}+o(n^{-1})\\Big)\\\\\n\t\t&=1-\\frac{1}{8n}+o(n^{-1}).\n\t\\end{align*}\n\tThe proof is complete.\n\\end{proof}\nThen we move on to the more challenging \\Cref{prop:ch.f.}.\n\\begin{proof}[Proof of \\Cref{prop:ch.f.}]\n\tThe proof is inspired by Problem 6 on page 305 of \\cite{uspensky1937introduction}. Though involved, the idea is not hard: reduce the bound on cdfs to a bound on characteristic functions (ch.f. for short) by an appropriate Fourier inversion, then control the ch.f. by careful Taylor expansion.\n\n\tRecall that $\\ep,p,q,\\sigma$ depend on $n$ via\n\t\\[\\ep = \\frac{1}{\\sqrt{n}}, \\quad p = \\frac{1}{1+e^\\ep}, \\quad q=\\frac{e^\\ep}{1+e^\\ep},\\quad \\sigma = \\sqrt{npq+\\tfrac{1}{12}}.\\]\n\tRandom variables $X\\sim B(n,p),Y\\sim U[0,1]$. $F_n$ is the normalized cdf of $X+Y$. More precisely, since\n\t$$\\E[X+Y] = np+\\frac{1}{2}, \\quad \\Var[X+Y] = \\Var\\, X + \\Var\\, Y = npq+\\tfrac{1}{12} = \\sigma^2,$$\n\t$F_n$ is the cdf of $\\sigma^{-1}(X+Y-\\frac{1}{2}-np)$. Our goal is to show that $\\sup_{x\\in\\R}|F_n(x)-\\Phi(x)|=O(\\frac{1}{n})$.\n\n\tFirst let's compute the characteristic function (ch.f. for short) $\\varphi_n$ of the distribution $F_n$.\n\t\\begin{align*}\n\t\t\\varphi_n(t)\n\t\t&= \\E[e^{it\\sigma^{-1}(X+Y-\\frac{1}{2}-np)}]\\\\\n\t\t&= e^{-i{np}t\/\\sigma}\\cdot\\E[e^{it\/\\sigma(X+Y-\\frac{1}{2})}]\\\\\n\t\t&= e^{-i{np}t\/\\sigma}\\cdot \\varphi_{X}(t\/\\sigma)\\cdot \\varphi_{Y-\\frac{1}{2}}(t\/\\sigma).\n\t\\end{align*}\n\tEasy calculation shows that the ch.f. 
of $X$ is $(pe^{it}+q)^n$ and that of $Y-\\tfrac{1}{2}$ is $\\tfrac{\\sin t\/2}{t\/2}$. So\n\t\\begin{align*}\n\t\t\\varphi_n(t)\n\t\t&= e^{-i{np}t\/\\sigma}\\cdot(pe^{it\/\\sigma}+q)^n \\cdot \\tfrac{\\sin t\/2\\sigma}{t\/2\\sigma}\\\\\n\t\t&= (pe^{iqt\/\\sigma}+qe^{-ipt\/\\sigma})^n\\cdot \\tfrac{\\sin t\/2\\sigma}{t\/2\\sigma}.\n\t\\end{align*}\n\tThe base $pe^{iqt\/\\sigma}+qe^{-ipt\/\\sigma}$ is a convex combination of two complex numbers on the unit circle, so we have $|\\varphi_n(t)|\\leqslant \\tfrac{|\\sin t\/2\\sigma|}{|t\/2\\sigma|} \\leqslant \\min\\{\\tfrac{2\\sigma}{|t|}, 1\\}$.\n\n\tNow let's connect back to cdf. We need some form of Fourier inversion formula. \n\tLet $\\varphi(t) = e^{-t^2\/2}$ be the ch.f. of the standard normal.\n\t\\begin{lemma} \\label{lem:chf}\n\t\tWe have the following inversion formula\n\t\t$$F_n(x) - \\Phi(x) = -\\frac{1}{2\\pi i}\\int_{-\\infty}^{+\\infty}e^{-itx}\\cdot\\frac{\\varphi_n(t)-\\varphi(t)}{t} \\diff t.$$\n\t\\end{lemma}\n\tThe integrand is integrable over $\\R$ because: (1) At infinity $|\\varphi_n(t)|=O(\\frac{1}{t})$, so the integrand has modulus $O(\\frac{1}{t^2})$; (2) When $t\\to0$,\n\t$$\\tfrac{\\varphi_n(t)-\\varphi(t)}{t} = \\tfrac{\\varphi_n(t)-1-\\varphi(t)+1}{t}=\\tfrac{\\varphi_n(t)-\\varphi_n(0)}{t}-\\tfrac{\\varphi(t)-\\varphi(0)}{t}\\to\\varphi_n'(0)-\\varphi'(0) = \\E_{Z\\sim F_n} [Z]$$\n\tis a finite number. So the integrand is continuous at 0.\n\n\t\\Cref{lem:chf} makes it possible to control $F_n(x) - \\Phi(x)$ by controlling $\\varphi_n(t)-\\varphi(t)$.\n\t\\begin{align*}\n\t\t2\\pi|F_n(x) - \\Phi(x)| \\leqslant&\\phantom{+}\\int^{+\\infty}_{-\\infty}\\frac{|\\varphi_n(t)-\\varphi(t)|}{|t|}\\diff t\\\\\n\t\t\\leqslant&\\phantom{+}\n\t\t\\int_{|t|\\leqslant r\\sigma}\\frac{|\\varphi_n(t)-\\varphi(t)|}{|t|} \\diff t &&(I_1)\\\\\n\t\t&+\\int_{|t|>r\\sigma}\\frac{|\\varphi_n(t)|}{|t|} \\diff t &&(I_2)\\\\\n\t\t&+\\int_{|t|>r\\sigma}\\frac{|\\varphi(t)|}{|t|} \\diff t &&(I_3)\n\t\\end{align*}\n\tIt suffices to find some constant $r$ such that all three integrals are $O(\\frac{1}{n})$. This is done via the following three lemmas.\n\t\\begin{lemma} \\label{lem:I1}\n\t\tThere exist universal constants $r>0, C>0$ such that when $|t|\\leqslant r\\sigma$,\n\t\\[|\\varphi_n(t)-\\varphi(t)|\\leqslant Ce^{-\\tfrac{t^2}{8}}\\cdot\\big(\\tfrac{t^2}{n}+\\tfrac{|t|^3}{n}+\\tfrac{t^4}{n}\\big).\\]\n\t\\end{lemma}\n\tConsequently,\n\t\\begin{align*}\n\t\tI_1 &= \\int_{|t|\\leqslant r\\sigma}\\frac{|\\varphi_n(t)-\\varphi(t)|}{|t|} \\diff t\\\\\n\t\t&\\leqslant \\int_\\R Ce^{-\\tfrac{t^2}{8}}\\cdot\\big(\\tfrac{|t|}{n}+\\tfrac{t^2}{n}+\\tfrac{|t|^3}{n}\\big)\\diff t=O(\\tfrac{1}{n}).\n\t\\end{align*}\n\t\\begin{lemma} \\label{lem:I2}\n\t\tFor $r<\\pi$, we have\n\t\t$I_2\\leqslant (2+\\frac{48}{r^2})\\cdot\\frac{1}{n}$.\n\t\\end{lemma}\n\t\\begin{lemma} \\label{lem:I3}\n\t\tFor $n\\geqslant 2$, $I_3\\leqslant\\frac{10}{r^2}\\cdot \\frac{1}{n}\\cdot e^{-0.1r^2n}$ holds for any positive $r$.\n\t\\end{lemma}\n\tSo we can select a small enough $r$ such that all three estimates hold, which implies $I_1=O(\\frac{1}{n}), I_2=O(\\frac{1}{n})$ and $I_3\\ll\\frac{1}{n}$. In summary,\n\t\\[|F_n(x) - \\Phi(x)|\\leqslant \\frac{1}{2\\pi}(I_1+I_2+I_3) = O(\\frac{1}{n}).\\]\n\tAssuming correctness of \\Cref{lem:chf,lem:I1,lem:I2,lem:I3}, the proof of \\Cref{thm:fast} is complete.\n\\end{proof}\nThe rest is to prove \\Cref{lem:chf,lem:I1,lem:I2,lem:I3}. 
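Before doing so, the $O(\tfrac{1}{n})$ rate itself can be illustrated numerically. The sketch below (assuming NumPy and SciPy are available; it is not used anywhere in the proofs) evaluates $F_n$ directly from its definition, using $\Pr[X+Y\leqslant t]=\sum_k \Pr[X=k]\cdot\min\{\max\{t-k,0\},1\}$ for $Y\sim U[0,1]$, and reports $n\cdot\sup_x|F_n(x)-\Phi(x)|$ over a grid of $x$:
\begin{verbatim}
# Numerical illustration of |F_n - Phi| = O(1/n); not part of the proof.
import numpy as np
from scipy.stats import binom, norm

def sup_gap(n):
    eps = 1 / np.sqrt(n)
    p = 1 / (1 + np.exp(eps))
    q = 1 - p
    sigma = np.sqrt(n * p * q + 1 / 12)
    k = np.arange(n + 1)
    pmf = binom.pmf(k, n, p)
    xs = np.linspace(-5, 5, 2001)
    t = sigma * xs + n * p + 0.5              # undo the standardization
    # P(X + Y <= t) = sum_k P(X = k) * clip(t - k, 0, 1) since Y ~ U[0,1]
    Fn = (pmf[None, :] * np.clip(t[:, None] - k[None, :], 0.0, 1.0)).sum(axis=1)
    return np.max(np.abs(Fn - norm.cdf(xs)))

for n in (10, 100, 1000):
    print(n, n * sup_gap(n))                  # stays roughly constant
\end{verbatim}
The printed values stay roughly constant in $n$, which is consistent with \Cref{prop:ch.f.}.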
We deal with the three integrals first, and then come back to inversion formula.\n\\begin{proof}[Proof of \\Cref{lem:I1}]\n\tLet $w=pe^{itq\/\\sigma}+qe^{-itp\/\\sigma}$. Then $\\varphi_n(t) = w^n\\cdot \\frac{\\sin t\/2\\sigma}{t\/2\\sigma}$. We have\n\t\\begin{align}\\label{eq:twoterms}\n\t\t|\\varphi_n(t)-\\varphi(t)| &= |w^n\\cdot \\frac{\\sin t\/2\\sigma}{t\/2\\sigma} - e^{-\\frac{1}{2}t^2}|\\notag\\\\\n\t\t&\\leqslant |w^n- e^{-\\frac{1}{2}t^2}| + |w|^n \\cdot \\Big|1-\\frac{\\sin t\/2\\sigma}{t\/2\\sigma}\\Big|\n\t\\end{align}\n\tAll we need is a positive $r$ such that when $|t|\\leqslant r\\sigma$, both of the above terms are small. We are going to shrink $r$ as we need from time to time.\n\n\tFirst, on the disk $|z|\\leqslant r$ we have Taylor expansion \n\t$e^z=1+z+\\tfrac{1}{2}z^2+\\tfrac{1}{6}z^3+O(|z|^4)$.\n\tSo for $|t|\\leqslant r\\sigma$ we have\n\t\\begin{align}\\label{eq:taylor}\n\t\tpe^{itq\/\\sigma} &= p\\big(1+\\tfrac{itq}{\\sigma}+\\tfrac{1}{2}\\big(\\tfrac{itq}{\\sigma}\\big)^2+\\tfrac{1}{6}\\big(\\tfrac{itq}{\\sigma}\\big)^3+O(\\tfrac{t^4}{\\sigma^4})\\big)\\notag\\\\\n\t\tqe^{-itp\/\\sigma} &= q\\big(1-\\tfrac{itp}{\\sigma}+\\tfrac{1}{2}\\big(\\tfrac{itp}{\\sigma}\\big)^2-\\tfrac{1}{6}\\big(\\tfrac{itp}{\\sigma}\\big)^3+O(\\tfrac{t^4}{\\sigma^4})\\big)\\notag\\\\\n\t\tw &= 1-\\tfrac{t^2}{2\\sigma^2}\\cdot (pq^2+qp^2) + \\tfrac{t^3}{6\\sigma^3}\\big(-ipq^3-qp^3(-i)\\big)+O(\\tfrac{t^4}{\\sigma^4})\\notag\\\\\n\t\t&=1-\\tfrac{pq}{\\sigma^2}\\cdot\\tfrac{t^2}{2} + \\tfrac{t^3}{6\\sigma^3}\\cdot ipq(p-q)+O(\\tfrac{t^4}{\\sigma^4}).\n\t\\end{align}\n\tObviously this implies $w = 1-\\tfrac{pq}{\\sigma^2}\\cdot\\tfrac{t^2}{2} + o(\\tfrac{t^2}{\\sigma^2})$ (we will return to the more delicate \\eqref{eq:taylor} soon).\n\tSince $npq \\geqslant pq \\geqslant \\frac{1}{4} - \\frac{1}{16n} \\geqslant \\frac{3}{16}$, we have\n\t\\[\\tfrac{pq}{\\sigma^2} = \\tfrac{pq}{npq+\\frac{1}{12}}=\\tfrac{npq}{npq+\\frac{1}{12}}\\cdot \\tfrac{1}{n}\\geqslant \\tfrac{1}{n} \\cdot \\tfrac{3}{16}\/(\\tfrac{3}{16}+\\tfrac{1}{12}) = \\tfrac{9}{13n}>\\tfrac{2}{3n}.\\]\n\tThat is, the quadratic term is more than $\\tfrac{t^2}{3n}$. We can tune $r$ so that the little $o$ remainder is even smaller, i.e. $|w-1+\\tfrac{pq}{\\sigma^2}\\cdot\\tfrac{t^2}{2}|<\\tfrac{t^2}{12n}$. This implies\n\t$$|w|<1-\\tfrac{t^2}{3n}+\\tfrac{t^2}{12n} = 1-\\tfrac{t^2}{4n}\\leqslant e^{-\\tfrac{t^2}{4n}}.$$\n\tOne consequence is we can bound the second term in \\eqref{eq:twoterms}. By Taylor expansion again, $\\frac{\\sin x}{x} = 1+O(x^2)$, so\n\t\\begin{equation}\\label{eq:first}\n\t\t|w|^n \\cdot \\big|1-\\tfrac{\\sin t\/2\\sigma}{t\/2\\sigma}\\big| \\leqslant e^{-\\tfrac{t^2}{4}} \\cdot O(\\tfrac{t^2}{\\sigma^2}) = e^{-\\tfrac{t^2}{4}} \\cdot O(\\tfrac{t^2}{n}). \n\t\\end{equation}\n\tThe first term in \\eqref{eq:twoterms} requires a more careful analysis. Let $z = e^{-\\tfrac{t^2}{2n}}$ and $\\gamma = e^{-\\tfrac{t^2}{4n}}$. Our goal is $|w^n-z^n|$. We have proved $|w|<\\gamma$, while $|z|=\\gamma^2<\\gamma$ is obviously true. We have\n\t\\[|w^n-z^n|\\leqslant |w^n-w^{n-1}z|+\\cdots +|wz^{n-1}-z^n|\\leqslant n|w-z|\\cdot \\gamma^{n-1}.\\]\n\tWithout loss of generality assume $n\\geqslant 2$, then $\\gamma^{n-1} = e^{-\\frac{t^2}{4}\\cdot \\frac{n-1}{n}}\\leqslant e^{-\\tfrac{t^2}{8}}$. That is,\n\t\\begin{equation}\\label{eq:a}\n\t\t|w^n- e^{-\\frac{1}{2}t^2}| \\leqslant n|w-e^{-\\frac{t^2}{2n}}|\\cdot e^{-\\frac{1}{8}t^2}.\n\t\\end{equation}\n\n\tFor $n|w-e^{-\\frac{t^2}{2n}}|$ we need \\eqref{eq:taylor} again. 
First decompose it as
	\begin{align}\label{eq:b}
		n|w-e^{-\frac{t^2}{2n}}| \leqslant n\big|w-1+\tfrac{t^2}{2n}\big| + n\big|e^{-\frac{t^2}{2n}} - 1+\tfrac{t^2}{2n}\big|.
	\end{align}
	Since $|p-q|=\tfrac{e^\ep-1}{e^\ep+1}\leqslant e^\ep-1 = O(\ep) = O(\tfrac{1}{\sqrt{n}})$ and $\sigma^{-1} = O(\frac{1}{\sqrt{n}})$, we have
	\[w = 1-\tfrac{pq}{\sigma^2}\cdot\tfrac{t^2}{2} + \tfrac{t^3}{6\sigma^3}\cdot ipq(p-q)+O(\tfrac{t^4}{\sigma^4}) = 1-\tfrac{pq}{\sigma^2}\cdot\tfrac{t^2}{2} +O(\tfrac{t^3}{n^2})+O(\tfrac{t^4}{n^2}).\]
	Note that neither of the two ``big $O$'' terms dominates the other, because $t$ can be as small as 0 and as large as $r\sigma = O(\sqrt{n})$.
	Using the more delicate expansion of $w$ and that $\sigma^2 = npq+\frac{1}{12}$, we have
	\begin{align}\label{eq:c}
		n\big|w-1+\tfrac{t^2}{2n}\big| &= n\big|\tfrac{t^2}{2n}-\tfrac{pq}{\sigma^2}\cdot\tfrac{t^2}{2} +O(\tfrac{t^3}{n^2})+O(\tfrac{t^4}{n^2})\big|\notag\\
		&= \big|\tfrac{t^2}{2}-\tfrac{npq}{\sigma^2}\cdot\tfrac{t^2}{2} +O(\tfrac{t^3}{n})+O(\tfrac{t^4}{n})\big|\notag\\
		&= \big|\tfrac{t^2}{2}\cdot \tfrac{1}{12\sigma^2} +O(\tfrac{t^3}{n})+O(\tfrac{t^4}{n})\big|\notag\\
		&=O(\tfrac{t^2}{n})+O(\tfrac{t^3}{n})+O(\tfrac{t^4}{n}).
	\end{align}
	In the last step we used $pq\geqslant\tfrac{3}{16}$, so that $\sigma^2\geqslant\tfrac{3n}{16}$ and hence $\tfrac{t^2}{2}\cdot\tfrac{1}{12\sigma^2}=O(\tfrac{t^2}{n})$.
	By Taylor expansion again, we can tune $r$ so that when $|t|\leqslant r\sigma$, we have
	\begin{equation}\label{eq:d}
		n\big|e^{-\frac{t^2}{2n}} - 1+\tfrac{t^2}{2n}\big| = n\cdot O(\tfrac{t^4}{n^2}) = O(\tfrac{t^4}{n}).
	\end{equation}
	Now plug \eqref{eq:c} and \eqref{eq:d} back into \eqref{eq:b}, and then into \eqref{eq:a} to get
	\[|w^n- e^{-\frac{1}{2}t^2}| \leqslant e^{-\frac{1}{8}t^2}\cdot\big(O(\tfrac{t^2}{n})+O(\tfrac{|t|^3}{n})+O(\tfrac{t^4}{n})\big).\]
	This ends the analysis of the first term of \eqref{eq:twoterms}. Combining with the estimate \eqref{eq:first} of the second term, we have
	\[|\varphi_n(t)-\varphi(t)| \leqslant e^{-\frac{1}{8}t^2}\cdot\big(O(\tfrac{t^2}{n})+O(\tfrac{|t|^3}{n})+O(\tfrac{t^4}{n})\big),\]
	which is exactly the bound claimed in \Cref{lem:I1}.
\end{proof}
\begin{proof}[Proof of \Cref{lem:I2}]
	For this integral we only care about the modulus of $\varphi_n(t)$. Let's simplify it first.
	\begin{align*}
		|\varphi_n(t)| &= |pe^{iqt/\sigma}+qe^{-ipt/\sigma}|^n \cdot \Big|\tfrac{\sin t/2\sigma}{t/2\sigma}\Big|
	\end{align*}
	Let $\theta = t/\sigma$. Using $|z|^2 = z\bar{z}$, we have
	\begin{align*}
		|pe^{iq\theta}+qe^{-ip\theta}|^2
		&= \big(pe^{iq\theta}+qe^{-ip\theta}\big) \cdot \big(pe^{-iq\theta}+qe^{ip\theta}\big)\\
		&= p^2+q^2+pq(e^{i\theta}+e^{-i\theta})\\
		&= 1-2pq+2pq \cos \theta\\
		&= 1-4pq\sin^2 \tfrac{\theta}{2}.
	\end{align*}
	So
	\[|\varphi_n(t)| = \big(1-4pq\sin^2 \tfrac{t}{2\sigma}\big)^{n/2} \cdot \Big|\tfrac{\sin t/2\sigma}{t/2\sigma}\Big|.\]
	We see from this expression that the integrand of $I_2$ is an even function.
Therefore,\n\t\\begin{align*}\n\t\\frac{1}{2}I_2 =& \\int_{r\\sigma}^{+\\infty}\\tfrac{|\\varphi_n(t)|}{t} \\diff t\\\\\n\t\t=& \\int_{r\\sigma}^{+\\infty}\\tfrac{1}{t} \\big(1-4pq\\sin^2 \\tfrac{t}{2\\sigma}\\big)^{n\/2} \\cdot \\Big|\\tfrac{\\sin t\/2\\sigma}{t\/2\\sigma}\\Big|\\diff t\\\\\n\t\t=& \\int_{r\/2}^{+\\infty}\\tfrac{1}{t^2} \\big(1-4pq\\sin^2 t\\big)^{n\/2} \\cdot |\\sin t|\\diff t\n\t\\end{align*}\n\tIn the last step we do a change of variable $s = t\/2\\sigma$ and rename $s$ to $t$. Next, we break down the integral at $k\\pi$, and upper bound the $\\frac{1}{t^2}$ factor by its value at the left end of the interval, so that the rest of the integrand is periodic.\n\t\\begin{align*}\n\t\t\\frac{1}{2}I_2\n\t\t\\leqslant&\\phantom{+}\n\t\t\\int_{r\/2}^{\\pi}\\tfrac{1}{t^2} \\big(1-4pq\\sin^2 t\\big)^{n\/2} \\cdot |\\sin t|\\diff t\\\\\n\t&+\\sum_{k=1}^{+\\infty}\\int_{k\\pi}^{(k+1)\\pi}\\tfrac{1}{t^2} \\big(1-4pq\\sin^2 t\\big)^{n\/2} \\cdot |\\sin t|\\diff t\\\\\n\t\t\\leqslant&\\phantom{+} \\Big(\\tfrac{4}{r^2}+\\sum_{k=1}^{+\\infty}\\tfrac{1}{k^2\\pi^2}\\Big)\\underbrace{\\int_{0}^{\\pi} \\big(1-4pq\\sin^2 t\\big)^{n\/2} \\cdot \\sin t\\diff t}_{J}\n\t\\end{align*}\n\tThe integral $J$ can be estimated as follows:\n\n\n\n\n\n\t\\begin{align*}\n\t\n\t\n\t\tJ\n\t\t&=\\int_{0}^{\\pi} \\big(1-4pq\\sin^2 t\\big)^{n\/2} \\cdot \\sin t\\diff t\\\\\n\t\t&= -\\int_{0}^{\\pi} \\big(1-4pq(1-\\cos^2 t)\\big)^{n\/2} \\diff\\cos t\\\\\n\t\t&= \\int_{-1}^{1} \\big(1-4pq(1-x^2)\\big)^{n\/2} \\diff x\\\\\n\t\n\t\t&= 2\\int_{0}^{1} (1-4pq + 4pqx^2)^{n\/2} \\diff x.\n\t\\end{align*}\n\tWe have seen that $1-4pq = (p-q)^2 = \\tanh^2 \\frac{\\ep}{2}$. It is easy to show that $\\tanh x \\leqslant x$ for $x\\geqslant0$. So\n\t$$\n\t\tpq = \\tfrac{1}{4}(1-\\tanh^2 \\tfrac{\\ep}{2})\\geqslant \\tfrac{1}{4}(1-\\tfrac{\\ep^2}{4}) = \\tfrac{1}{4} - \\tfrac{1}{16n}.\n\t$$\n\tSince $0\\leqslant x \\leqslant 1$, we have \n\t\\[1-4pq + 4pqx^2 \\leqslant \\tfrac{1}{4n}+(1- \\tfrac{1}{4n})x^2.\\]\n\tHence\n\t\\begin{align*}\n\t\tJ \\leqslant 2\\int_{0}^{1} \\Big(\\tfrac{1}{4n}+(1-\\tfrac{1}{4n})x^2\\Big)^{n\/2} \\diff x.\n\t\\end{align*}\n\tIt's easy to check that $\\tfrac{1}{4n-1}$ and 1 are the two roots of the quadratic equation $\\tfrac{1}{4n}+(1-\\tfrac{1}{4n})x^2 = x$. So we have $\\tfrac{1}{4n}+(1-\\tfrac{1}{4n})x^2\\leqslant x$ between the two roots, i.e. for $x\\in[\\tfrac{1}{4n-1},1]$. For the rest of the interval, we upper bound the integrand by 1. That is,\n\t\\begin{align*}\n\t\t\\int_{0}^{1} \\Big(\\tfrac{1}{4n}+(1-\\tfrac{1}{4n})x^2\\Big)^{n\/2} \\diff x\n\t\t\\leqslant\\int_{0}^{\\tfrac{1}{4n-1}} 1\\diff x + \\int_{\\tfrac{1}{4n-1}}^1 x^{n\/2}\\diff x\\leqslant \\tfrac{1}{4n-1} +\\tfrac{1}{n\/2+1} \\leqslant \\tfrac{3}{n}.\n\t\\end{align*}\n\tSo we have $J\\leqslant \\frac{6}{n}$. 
Returning to $I_2$, with the well-known identity $\\sum_{k=1}^{+\\infty}\\tfrac{1}{k^2} = \\frac{\\pi^2}{6}$, we have\n\t\\begin{align*}\n\t\tI_2&\\leqslant 2\\Big(\\tfrac{4}{r^2}+\\sum_{k=1}^{+\\infty}\\tfrac{1}{k^2\\pi^2}\\Big)\\cdot J\\\\\n\t\t&= \\big(\\tfrac{8}{r^2}+\\tfrac{\\pi^2}{6}\\cdot\\tfrac{2}{\\pi^2}\\big)\\cdot\\tfrac{6}{n}\\\\\n\t\t&= \\big(2+\\tfrac{48}{r^2}\\big)\\cdot\\tfrac{1}{n}\n\t\\end{align*}\n\tThe estimate of $I_2$ is complete.\n\\end{proof}\n\\begin{proof}[Proof of \\Cref{lem:I3}]\n\tFirst notice the following simple facts:\n\t\\begin{enumerate}\n\t \t\\item When $t>r\\sigma$, we have $\\frac{1}{t}\\leqslant t\\cdot \\frac{1}{r^2\\sigma^2}$.\n\t \t\\item $\\sigma^2>0.2n$ for any $n$.\n\t\\end{enumerate}\n\tThe second follows from a bound we derive in the proof of \\Cref{lem:I2}: $pq\\geqslant \\tfrac{1}{4} - \\tfrac{1}{16n}$. In fact,\n\t\\[\\sigma^2 = npq+\\tfrac{1}{12} \\geqslant \\tfrac{n}{4} - \\tfrac{1}{16}+\\tfrac{1}{12} = \\tfrac{n}{4} - \\tfrac{1}{48} > \\tfrac{n}{5}.\\]\n\tUsing these two facts, we can bound $I_3$ as follows:\n\t\\begin{align*}\n\t\tI_3 \n\t\t&= \\int_{|t|>r\\sigma}\\frac{|\\varphi(t)|}{|t|} \\diff t\\\\\n\t\t& = 2\\int_{r\\sigma}^{+\\infty}\\frac{1}{t}e^{-\\frac{t^2}{2}}\\diff t\\\\\n\t\t&\\leqslant \\frac{2}{r^2\\sigma^2}\\cdot\\int_{r\\sigma}^{+\\infty}te^{-\\frac{t^2}{2}}\\diff t\\\\\n\t\t&= \\frac{2}{r^2\\sigma^2}\\cdot e^{-\\frac{t^2}{2}}\\Big|^{r\\sigma}_{+\\infty}\\\\\n\t\t&= \\frac{2}{r^2\\sigma^2}\\cdot e^{-\\frac{r^2\\sigma^2}{2}}\\\\\n\t\t&\\leqslant \\frac{10}{r^2}\\cdot \\frac{1}{n}\\cdot e^{-0.1r^2n}\n\t\\end{align*}\n\tThe estimate of $I_3$ is complete.\n\\end{proof}\n\nWe are done with the three integrals. Before we dive into the proof of the inversion formula \\ref{lem:chf}, we make a few observations.\n\nFirst, one cannot hope to obtain this lemma by showing \n$$F_n(x)=\\tfrac{1}{2\\pi}\\int_{-\\infty}^{+\\infty}-e^{-itx}\\cdot\\tfrac{\\varphi_n(t)}{it} \\diff t$$\nand a similar expression for $\\Phi(x)$ separately because this alternative integrand is not even integrable. To see this, notice $\\varphi_n(0)=1$, so the integrand $\\approx \\frac{1}{t}$ around 0.\n\nInversion formula \\ref{lem:chf} has the same form as Lemma 3.4.19 of \\cite{durrett2019probability}. However, the ch.f.s are assumed to be (absolutely) integrable there, while $\\varphi_n$ is not. To see this, recall that Fourier inversion tells us that if the ch.f. is absolutely integrable, then the probability distribution has continuous density (see e.g. \\cite{durrett2019probability}, Theorem 3.3.14). This is not true for $X+Y$ because its density is piecewise constant. So $\\varphi_n$ cannot be in $L^1(\\R)$. There seems to be no shortcut, so let's work out our own proof.\n\\begin{proof}[Proof of \\Cref{lem:chf}]\n\tApplying the general inversion formula (see e.g. \\cite{durrett2019probability} Theorem 3.3.11) to $F_n$, we have\n\t\\begin{align*}\n\tF_n(x)-F_n(a)&=\\frac{1}{2\\pi}\\lim_{T\\to+\\infty}\\int^T_{-T}\\frac{e^{-ita}-e^{-itx}}{it}\\cdot\\varphi_n(t)\\diff t\n\t\\end{align*}\n\t$\\varphi_n$ is continuous and decays in the rate $\\tfrac{1}{|t|}$, so the integrand is dominated by $O(t^{-2}\\wedge 1)$ and hence the limit on $T$ is equal to the Lebesgue integral. 
That is,\n\t\\begin{equation}\\label{eq:FnInversion}\n\t\tF_n(x)-F_n(a)=\\frac{1}{2\\pi}\\int^{+\\infty}_{-\\infty}\\frac{e^{-ita}-e^{-itx}}{it}\\cdot\\varphi_n(t)\\diff t.\n\t\\end{equation}\n\tSimilarly,\n\t\\[\n\t\\Phi(x)-\\Phi(a)=\\frac{1}{2\\pi}\\int^{+\\infty}_{-\\infty}\\frac{e^{-ita}-e^{-itx}}{it}\\cdot\\varphi(t)\\diff t.\n\t\\]\n\tNote that in \\eqref{eq:FnInversion}, we cannot let $a\\to-\\infty$ and use Riemann-Lebesgue lemma because $\\frac{\\varphi_n(t)}{t}$ is not integrable, as discussed before the proof. However, subtracting the two formula yields\n\t\\begin{equation}\\label{eq:ligoat}\n\t\t\\big(F_n(x)-\\Phi(x)\\big)-\\big(F_n(a)-\\Phi(a)\\big)=\\frac{1}{2\\pi}\\int^{+\\infty}_{-\\infty}\\frac{e^{-ita}-e^{-itx}}{it}\\cdot\\big(\\varphi_n(t)-\\varphi(t)\\big)\\diff t\n\t\\end{equation}\n\tConsider the part involving $a$\n\t\\[\\int^{+\\infty}_{-\\infty}e^{-ita}\\cdot\\frac{\\varphi_n(t)-\\varphi(t)}{it}\\diff t.\\]\n\tWe argued right after introducing \\Cref{lem:chf} that $\\frac{\\varphi_n(t)-\\varphi(t)}{it} \\in L^1(\\R)$, so by Riemann-Lebesgue lemma we have the limit\n\t\\[\\lim_{a\\to-\\infty}\\int^{+\\infty}_{-\\infty}e^{-ita}\\cdot\\frac{\\varphi_n(t)-\\varphi(t)}{it}\\diff t = 0.\\]\n\tTake the limit $a\\to-\\infty$ on both sides of \\eqref{eq:ligoat} and we have\n\t\\begin{align*}\n\t\tF_n(x)-\\Phi(x)&=\\frac{1}{2\\pi}\\int^{+\\infty}_{-\\infty}\\frac{-e^{-itx}}{it}\\cdot\\big(\\varphi_n(t)-\\varphi(t)\\big)\\diff t\\\\\n\t\t&=-\\frac{1}{2\\pi i}\\int_{-\\infty}^{+\\infty}e^{-itx}\\cdot\\frac{\\varphi_n(t)-\\varphi(t)}{t} \\diff t.\n\t\\end{align*}\n\tThe proof is now complete.\n\\end{proof}\n\\section{Omitted Details in \\Cref{sec:subsampling}}\n\\label{app:property}\nWe begin this appendix with a small example showing our subsampling theorem is generically unimprovable.\n\\paragraph{Tightness}\n\\label{par:example_and_tightness}\n\n\nConsider the mechanism $\\widetilde{M}$ that randomly releases one individual's private information in the dataset. The privacy analysis is easy: without loss of generality we can assume two neighboring datasets differ in the first individual. Effectively we are trying to distinguish uniform distributions over $\\{1,2,\\ldots, n\\}$ and $\\{1',2,\\ldots, n\\}$. It's not hard to see that the trade-off function of these two uniform distributions is $f_{0,1\/n}$, i.e. $(\\ep,\\delta)$-DP with $\\ep=0, \\delta=1\/n$. This is exact --- the adversary has tests that achieve every point on the curve.\n\n\nOur \\cref{thm:subsample} yields the same result, showing its tightness. To see this, let $M$ be the identity map that takes in one individual and outputs his\/her entire private information. Then $\\widetilde{M} = M\\circ\\mathtt{Sample}_{\\frac{1}{n}}$. Privacy of $M$ is described by $f\\equiv 0$. By \\Cref{thm:subsample}, $\\widetilde{M}$ is $C_{1\/n}(f)$-DP. \\Cref{fig:subsample} shows that $C_{1\/n}(f)=f_{0,1\/n}$.\n\n\n\nNext we show the following two equations:\n\t\\begin{align*}\n\t\t\\ep'&=\\log(1-p + pe^\\ep),\\\\\n\t\t\\delta'&=p\\big(1+f^*(-e^{\\ep})\\big)\n\t\\end{align*}\n\tcan be re-parameterized into \n\t\\begin{equation}\n\t\t\\delta'=1+f_p^*(-e^{\\ep'})\\tag{\\ref{eq:fp}}\n\t\\end{equation}\n\twhere $f_p = pf+(1-p)\\Id$.\n\\begin{proof}[Proof of \\Cref{eq:fp}]\n\t\n\tSince $\\ep\\mapsto\\log(1-p + pe^\\ep)$ maps $[0,+\\infty)$ to $[0,+\\infty)$ monotonically, we can solve $\\ep$ from $\\ep'$ and plug into $\\delta'$. 
We have\n\t\\begin{align*}\n\t\t\\frac{1}{p}(1-e^{\\ep'}) = 1-e^{\\ep} \\quad\\text{ and }\\quad\n\t\t\\delta'=p\\big(1+f^*(-e^{\\ep})\\big)\n\t\t&=p\\big(1+f^*(\\tfrac{1}{p}(1-e^{\\ep'})-1)\\big).\n\t\\end{align*}\n\tLet $y = -e^{\\ep'}$ and it suffices to show for any $y\\leqslant-1$,\n\t\\begin{equation}\\label{eqn:ning}\n\t\t1+f_p^*(y) = p\\big(1+f^*(\\tfrac{1}{p}(1+y)-1)\\big).\n\t\\end{equation}\n\tTo see this, expand $f_p^*$ as follows\n\t\\begin{align*}\n\t\tf_p^*(y) &= \\sup_{x} yx-f_p(x)\\\\\n\t\t&=\\sup_{x} yx-pf(x)-(1-p)(1-x)\\\\\n\t\t&=p-1 + \\sup_{x} (y+1-p)x -pf(x)\\\\\n\t\t&=p-1 + p\\cdot\\sup_{x} (\\tfrac{1}{p}(1+y)-1)x -f(x)\\\\\n\t\t&=p-1 +pf^*(\\tfrac{1}{p}(1+y)-1)\n\t\\end{align*}\n\t\\eqref{eqn:ning} follows directly.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\nNext we provide the general tool mentioned in \\Cref{sub:proof_of_subsample_theorems} that convert collections of $(\\ep,\\delta)$-DP guarantee in the form of \\eqref{eq:fp} to some $f$-DP.\n\nThe symmetrization operator $\\mathrm{Symm}:\\T\\to\\T^S$ maps a general trade-off function to a symmetric trade-off function. It's defined as follows:\n\\begin{definition} \\label{def:symm}\n\tFor $f\\in\\T$, let $\\bar{x} = \\inf\\{x\\in[0,1]:-1\\in\\partial f(x)\\}$. The symmetrization operator $\\mathrm{Symm}:\\T\\to\\T^S$ is defined as\n\t\\[\\mathrm{Symm}(f):= \\left\\{\n\t\\begin{array}{ll}\n\t\\min\\{f,f^{-1}\\}^{**}, &\\text{ if }\\,\\, \\bar{x}\\leqslant f(\\bar{x}),\\\\\n\t\\max\\{f,f^{-1}\\}, &\\text{ if }\\,\\, \\bar{x}>f(\\bar{x}).\n\t\\end{array}\n\t\\right.\\]\n\\end{definition}\n\n\\begin{proposition}\\label{prop:asymm_env}\n\tLet $f\\in\\T$, not necessarily symmetric. Suppose a mechanism is $(\\ep,1+f^*(-e^{\\ep}))$-DP for all $\\ep\\geqslant 0$, then it is $\\mathrm{Symm}(f)$-DP.\n\\end{proposition}\n\n\nRecall from basic convex analysis that double convex conjugate $f^{**}$ is the greatest convex lower bound of $f$. If $f$ itself is convex then $f^{**}=f$. For $f$ symmetric , $f=f^{-1}$. By convexity of $f$, we have $\\mathrm{Symm}(f)=f$ in both cases. So \\Cref{prop:ftoDP} is a special case of \\Cref{prop:asymm_env}. The first half of \\Cref{prop:asymm_env} is \\Cref{prop:asymm_for_proof}, the part we used in the proof of our subsampling theorem.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.7\\linewidth]\n{.\/figures\/Symm.pdf}\n \\captionof{figure}{Action of Symm. Left panel: $\\bar{x}\\leqslant f(\\bar{x})$. Right panel: $\\bar{x} > f(\\bar{x})$. For both panels the effective parts (red bars on $x$-axes) are $[0,\\bar{x}]$ of $f$ and $[f(\\bar{x}),1]$ of $f^{-1}$. No overlap in the left panel since $\\bar{x} < f(\\bar{x})$, so interpolate with straight line; overlap in the right panel so the max is taken.}\n \\label{fig:symm}\n\\end{figure}\n\nFrom \\Cref{fig:symm} it's not hard to see that\n\\begin{align*}\n\t\\min\\{f,f^{-1}\\}^{**}(x) =\n\t\\left\\{\n\t\\begin{array}{ll}\n\tf(x),&x\\in[0,\\bar{x}], \\\\\n\t\\bar{x}+f(\\bar{x})-x, & x\\in[\\bar{x},f(\\bar{x})], \\\\\n\tf^{-1}, & x\\in[f(\\bar{x}),1].\n\t\\end{array}\n\t\\right.\n\\end{align*}\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:asymm_env}]\n\t$M$ being $\\big(\\ep,\\delta(\\ep)\\big)$-DP means that for any neighboring datasets $S$ and $S'$,\n\t\\[T\\big(M(S),M(S')\\big)(x)\\geqslant-e^\\ep x+1-\\delta(\\ep).\\]\n\tFix $x\\in[0,1]$. Since the DP condition holds for all $\\ep\\geqslant 0$, the lower bound still holds when we take the supremum over $\\ep\\geqslant 0$. 
In other words, $M$ is ${f}_\\mathrm{env}$-DP with\n\t$${f}_\\mathrm{env}(x) = \\max\\{0,\\,\\,\\sup_{\\ep\\geqslant0}1-\\delta(\\ep)-e^\\ep x\\}.$$\n\tBy \\Cref{prop:symmetry} $M$ is also $\\max\\{{f}_\\mathrm{env},{f}_\\mathrm{env}^{-1}\\}$-DP. The proof will be complete if we can show $\\max\\{{f}_\\mathrm{env},{f}_\\mathrm{env}^{-1}\\} = \\mathrm{Symm}(f)$.\n\n\t\\begin{figure}[h]\n\t \\centering\n\t \\includegraphics[width=0.7\\linewidth]\n\t{.\/figures\/Symm2.pdf}\n\t \\captionof{figure}{Symm explained. Left panel: $\\bar{x}\\leqslant f(\\bar{x})$. Right panel: $\\bar{x} > f(\\bar{x})$. }\n\t \\label{fig:symm2}\n\t\\end{figure}\n\n\tWe achieve this by first showing:\n\t\\[{f}_\\mathrm{env}(x) =\n\t\\left\\{\n\t\\begin{array}{ll}\n\tf(x),&x\\in[0,\\bar{x}], \\\\\n\t\\bar{x}+f(\\bar{x})-x, & x\\in[\\bar{x},\\bar{x}+f(\\bar{x})], \\\\\n\t0, & x\\in[\\bar{x}+f(\\bar{x}),1].\n\t\\end{array}\n\t\\right.\\]\n\tFrom \\Cref{fig:symm2} it is almost obvious. We still provide the argument below.\n\n\tPlug in $\\delta(\\ep) = 1+f^*(-e^{\\ep})$ and change the variable $y=-e^\\ep$:\n\t\\begin{align*}\n\t\t\\sup_{\\ep\\geqslant0}[-e^\\ep x+1-\\delta(\\ep)]\n\t\t&=\\sup_{\\ep\\geqslant0}[-e^\\ep x-f^*(-e^{\\ep})]\\\\\n\t\t&=\\sup_{y\\leqslant-1}[y x-f^*(y)]\n\t\\end{align*}\n\tFrom convex analysis we know if $y\\in\\partial f(x)$ then $yx = f(x)+f^*(y)$. By definition of $\\bar{x}$, if $x\\leqslant\\bar{x}$, then at least one subgradient $y\\in\\partial f(x)$ is no greater than $-1$. So this specific $y x-f^*(y) = f(x)$ is involved in the supremum, i.e. $\\sup_{y\\leqslant-1}[y x-f^*(y)] = f(x)$. This justifies the expression for the first segment.\n\n\tWhen $x>\\bar{x}$, the supremum is always attained at $y=-1$. In fact, if we let $l_y(x) = y x-f^*(y)$, then\n\t\\begin{lemma} \\label{lem:ning}\n\t \t$l_y(x)\\leqslant l_{-1}(x)$ when $y\\leqslant -1$ and $x>\\bar{x}$.\n\t\\end{lemma}\n\t\\begin{proof}[Proof of \\Cref{lem:ning}]\n\t$l_y$ is the supporting linear function of $f$ with slope $y$. It suffices to show that $l_y(x)$ is monotone increasing in $y$. To see this, change the variable from the slope $y$ to the supporting location $u$. As $f$ is convex, $y=f'(u)$ is increasing in $u$. In terms of $u$, $l_y(x) = f(u) + f'(u)(x-u)$. Taking derivative with respect to $u$:\n\t\\[\\frac{\\partial}{\\partial u }\\, l_y(x) = f'(u) + f''(u)(x-u) + f'(u)\\cdot(-1) = f''(u)(x-u).\\]\n\t$y\\leqslant -1$ corresponds to location $u\\leqslant \\bar{x}$ and hence $u\\alpha$ is worse than $A=\\alpha$, because the convex combination equality requires $\\bar{A}<\\alpha$. $B$ has to increase anyway. Both $A$ and $B$ increase, so the objective $A+B$ also increases.\n\t\t\t\\item If $A$ is decreased from $\\alpha$ by some amount, $B$ has to increase by $\\mathrm{e}^\\ep$ times that amount, which is not worth it.\n\t\t\\end{enumerate}\n\t\tSo the minimum of the relaxed linear program is achieved at $A=\\bar{A}=\\alpha, B=k\\alpha+b,\\bar{B} = 1-\\alpha$, thereby inducing the claimed lower bound.\n\t\\end{proof}\n\n\tSo $\\beta\\geqslant p(k\\alpha+b)+(1-p)(1-\\alpha)$. 
Changing back to $\\ep,\\delta$, we have\n\t\\[\\beta\\geqslant p(-e^\\ep\\alpha+1-\\delta)+(1-p)(1-\\alpha) = -[p\\mathrm{e}^\\ep+1-p]\\alpha+1-p\\delta = -\\mathrm{e}^{\\ep'}\\alpha+1-\\delta'.\\]\n\tBy \\Cref{thm:privacy_testing}, $\\widetilde{M}$ is $(\\ep',\\delta')$-DP.\n\n\n\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{A Self-contained Proof of the Composition Theorem}\n\\label{app:self}\nIn this section we prove the well-definedness of $\\otimes$ and Composition \\Cref{thm:n_steps}.\n\n\nWe begin with the setting of the key lemma, which compares indistinguishability of two pairs of \\textit{randomized algorithms}. Let $K_1, K_1': Y \\to Z_1$ and $K_2, K_2': Y \\to Z_2$ be two pairs of randomized algorithms. Suppose the following is true for these four algorithms: for each fixed input $y\\in Y$, testing problem $K_1(y) $ vs $ K_1'(y)$ is harder than $K_2(y) $ vs $ K_2'(y)$. In mathematical language, let $f_i^y = T\\big(K_i(y), K_i'(y)\\big)$ (See the left panel of \\Cref{fig:comparison}). The above assumption amounts to saying $f_1^y\\geqslant f_2^y$. So far we have fixed the input $y$. In the two pairs of testing problems, if the input of the null comes from $P$ and the input of the alternative comes from $P'$, then intuitively both testing problems become easier than when inputs are fixed, because now the inputs also provide information. Formally, the observation comes from input-output joint distribution $\\big(P, K_i(P)\\big)$ or $\\big(P', K_i'(P')\\big)$ (with a little abuse of notation). Let $f_i = T\\big( (P, K_i(P)), (P', K_i'(P')) \\big), i=1,2$ be the trade-off functions of the joint testing problems (See the right panel of \\Cref{fig:comparison}). As discussed, we expect that $f_1\\leqslant f_1^y, f_2\\leqslant f_2^y$ for all $y$. But what about $f_1$ and $f_2$? Which joint testing problem is harder? The following lemma answers the question.\n\n\\begin{lemma} \\label{lem:comparison}\n\tIf $f_1^y\\geqslant f_2^y$ for all $y\\in Y$, then $f_1\\geqslant f_2$.\n\\end{lemma}\n\n\\input{figures\/comparisonTikZ}\n\nLet's first use the lemma to show the well-definedness of $\\otimes$ and the composition theorem. Its own proof comes afterwards.\nRecall that in \\Cref{def:product}, $f\\otimes g$ is defined as $T(P\\times P',Q\\times Q')$ if $f=T(P,Q), g = T(P',Q')$. To show this definition does not depend on the choice of $P,Q$ and $P',Q'$, it suffices to verify that when $f = T(P,Q) = T(\\tilde{P},\\tilde{Q})$, we have $T(P\\times P', Q \\times Q') = T(\\tilde{P} \\times P', \\tilde{Q} \\times Q')$. The following lemma is slightly stronger than what we need, but will be useful later.\n\\begin{lemma} \\label{lem:welldefined}\nIf $T(P,Q) \\geqslant T(\\tilde{P},\\tilde{Q})$, then\n\\[ T(P\\times P', Q \\times Q') \\geqslant T(\\tilde{P} \\times P', \\tilde{Q} \\times Q').\\]\nAs a consequence, if the assumption holds with an equality, then so does the conclusion.\n\\end{lemma}\n\\input{figures\/ordering}\n\\begin{proof}\n\tIn order to fit it into the setting of \\Cref{lem:comparison}, let the algorithms output a random variable independent of the input $y$. See \\Cref{fig:well-definedness}. 
The input-output joint distributions are just product distributions, so by the comparison \\cref{lem:comparison},\n\n\\[ T(P\\times P', Q \\times Q') \\geqslant T(\\tilde{P} \\times P', \\tilde{Q} \\times Q').\\]\n\n\tWhen $T(P,Q) = T(\\tilde{P},\\tilde{Q})$, we can apply the lemma in both directions and conclude that \n\\[ T(P\\times P', Q \\times Q') = T(\\tilde{P} \\times P', \\tilde{Q} \\times Q').\\]\n\tThe proof is complete.\n\\end{proof}\n\n\nNow that we have justified the definition of the composition tensor $\\otimes$, \\cref{lem:welldefined} can be written in a concise way:\n\\begin{equation}\\label{eq:ordering}\n\tg_1\\geqslant g_2\\Rightarrow f\\otimes g_1\\geqslant f\\otimes g_2.\n\\end{equation}\nThis is actually the second property we listed after the definition of $\\otimes$.\n\nFor composition theorem, we prove the following two steps version:\n\\begin{lemma} \\label{lem:two_steps}\n\tSuppose in a two-step composition, the two components $M_1:X\\to Y, M_2:X\\times Y \\to Z$ satisfy\n\t\t\\begin{enumerate}\n\t\t\t\\item $M_1$ is $f$-DP;\n\t\t\t\\item $M_2(\\cdot,y):X\\to Z$ is $g$-DP for each fixed $y\\in Y$.\n\t\t\\end{enumerate}\n\t\tThen the composition $M:X\\to Y\\times Z$ is $f\\otimes g$-DP.\n\\end{lemma}\n\\input{figures\/comparison_proof}\n\\begin{proof}[Proof of \\Cref{lem:two_steps}]\n\tLet $Q,Q'$ be distributions such that $g = \\F(Q,Q')$. Fix a pair of neighboring datasets $S$ and $S'$ and set everything as in \\Cref{fig:comparison_proof}. The input $y$ is an element in the output space of $M_1$. Arrows to the left correspond to the mechanism $M_2$, while arrows to the right ignore the input $y$ and output $Q,Q'$ respectively.\n\n\n\tHere $f_1^y$ in \\Cref{lem:comparison} is $T\\big(M_2(S,y),M_2(S',y)\\big)\\geqslant g$, so the condition in \\Cref{lem:comparison} checks. Consequently,\n\\begin{align*}\n\t\\F\\big(M(S),M(S')\\big)&\\geqslant \\F\\big(M_1(S)\\times Q,M_1(S')\\times Q'\\big) &&\\text{(\\Cref{lem:comparison})}\\\\\n\t&= \\F\\big(M_1(S),M_1(S')\\big) \\otimes \\F(Q,Q')&&\\text{(Def. of $\\otimes$)}\\\\\n\t&= \\F\\big(M_1(S),M_1(S')\\big) \\otimes g\\\\\n\t&\\geqslant f\\otimes g&&\\text{(Privacy of $M_1$ and \\eqref{eq:ordering})}\n\\end{align*}\nThe proof is complete.\n\\end{proof}\n\nNow we prove \\Cref{lem:comparison}. The proof is basically careful application of Neyman-Pearson Lemma \\ref{thm:NPlemma}.\n\n\\begin{proof}[Proof of \\Cref{lem:comparison}]\n\tIn order to further simplify the notations, for $i=1,2$, let $\\mu_i$ and $\\mu_i'$ be the joint distributions $\\big(P,K_i(P)\\big)$ and $\\big(P',K_i'(P')\\big)$ respectively. Then $f_1 = \\F(\\mu_1,\\mu_1'), f_2 = \\F(\\mu_2,\\mu_2')$ and we need to show that the testing problem $\\mu_1$ vs $\\mu_1'$ is harder than $\\mu_2$ vs $\\mu_2'$.\n\n\tConsider the testing problem $\\mu_1$ vs $\\mu_1'$. For $\\alpha\\in[0,1]$, let $\\phi_1:Y\\times Z_1\\to[0,1]$ be the optimal rejection rule at level $\\alpha$. 
By definition of trade-off function, the power of this test is $1-f_1(\\alpha)$.\n\tFormally,\n\t\\[\\E_{\\mu_1}[\\phi_1] = \\alpha, \\quad \\E_{\\mu_1'}[\\phi_1] = 1-f_1(\\alpha).\\]\n\tIt suffices to construct a rejection rule $\\phi_2:Y\\times Z_2\\to[0,1]$ for the problem $\\mu_2$ vs $\\mu_2'$, at the same level $\\alpha$ but with greater power, i.e.\n\t$$\\E_{\\mu_2} [\\phi_2] = \\alpha~~ \\text{ and }~~ \\E_{\\mu_2'} [\\phi_2]\\geqslant \\E_{\\mu_1'}[\\phi_1] = 1-f_1(\\alpha).$$\n\tIf such $\\phi_2$ exists, then by the sub-optimality of $\\phi_2$ for the problem $\\mu_2$ vs $\\mu_2'$,\n\t\\[1-f_2(\\alpha)\\geqslant \\E_{\\mu_2'} [\\phi_2] \\geqslant 1-f_1(\\alpha),\\]\n\twhich is what we want.\n\n\tFor $y\\in Y$, let $\\phi_1^y:Z_1\\to[0,1]$ be the slice of $\\phi_1$ at $y$, i.e. $\\phi_1^y(z_1) = \\phi_1(y,z_1)$. This is a rejection rule for the problem $K_1(y)$ vs $K_1'(y)$, sub-optimal in general. The type I error is\n\t\\[\\alpha^y := \\E_{z_1\\sim K_1(y)}[\\phi_1^y(z_1)].\\]\n\tThe power is\n\t\\[\\E_{z_1\\sim K_1'(y)}[\\phi_1^y(z_1)]\\leqslant 1-f_1^y(\\alpha^y).\\]\n\tThe last inequality holds because $f_1^y=T\\big(K_1(y),K_1'(y)\\big)$ and that $\\phi_1^y$ is sub-optimal for this problem.\n\tLet $\\phi_2^y:Z_2\\to[0,1]$ be the optimal rejection rule for the testing $K_2(y)$ vs $K_2'(y)$ at level $\\alpha^y$. Construction of $\\phi_2:Y\\times Z_2\\to[0,1]$ is simply putting together these slices $\\phi_2^y$. Formally, $\\phi_2(y,z_2) = \\phi_2^y(z_2)$. Its level is $\\alpha$ because $\\alpha^y$ are averaged in terms of the same distribution $P$. More precisely,\n\t\\begin{align*}\n\t\t\\E_{\\mu_2}[\\phi_2] &= \\E_{y\\sim P}\\big[\\E_{z_2\\sim K_2(y)}[\\phi_2^y(z_2)]\\big] &&\\text{(Construction of $\\phi_2$)}\\\\\n\t\t&= \\E_{y\\sim P}[\\alpha^y]&&\\text{($\\phi_2^y$ has level $\\alpha^y$)}\\\\\n\t\t&= \\E_{y\\sim P}\\big[\\E_{z_1\\sim K_1(y)}[\\phi_1^y(z_1)]\\big]&&\\text{(Def. of $\\alpha^y$)}\\\\\n\t\t&= \\E_{\\mu_1}[\\phi_1] = \\alpha.\n\t\\end{align*}\n\tLet's compute its power:\n\t\\begin{align*}\n\t\t\\E_{\\mu_2'}[\\phi_2] &= \\E_{y\\sim P'}\\big[\\E_{z_2\\sim K_2'(y)}[\\phi_2^y(z_2)]\\big]\\\\\n\t\t&= \\E_{y\\sim P'}\\big[1-f^y_2(\\alpha^y)\\big] &&\\text{($\\phi_2^y$ is optimal)}\\\\\n\t\t&\\geqslant \\E_{y\\sim P'}\\big[1-f^y_1(\\alpha^y)\\big]&&\\text{($f^y_1\\geqslant f^y_2$)}\\\\\n\t\t&\\geqslant\\E_{y\\sim P'}\\big[\\E_{z_1\\sim K_1'(y)}[\\phi_1^y(z_1)]\\big] &&\\text{($\\phi_1^y$ is sub-optimal)}\\\\\n\t\t&=\\E_{\\mu_1'}[\\phi_1] = 1-f_1(\\alpha).&&\\text{(Optimality of $\\phi_1$ for $\\mu_1$ vs $\\mu_1'$)}\n\t\\end{align*}\n\tSo $\\phi_2$ constructed this way does have the desired level and power. The proof is complete.\n\\end{proof} \n\\subsection{Central Limit Theorems for Composition}\n\\label{sub:a_berry_esseen_type_of_clt}\n\n\n\n\\newcommand{\\boldsymbol{\\mathrm{kl}}}{\\boldsymbol{\\mathrm{kl}}}\n\\newcommand{\\boldsymbol{\\kappa_2}}{\\boldsymbol{\\kappa_2}}\n\\newcommand{\\boldsymbol{\\kappa_3}}{\\boldsymbol{\\kappa_3}}\n\\newcommand{\\boldsymbol{\\bar{\\kappa}_3}}{\\boldsymbol{\\bar{\\kappa}_3}}\n\n\nIn this subsection, we identify a central limit theorem type phenomenon of composition in the $f$-DP framework. Our main results (\\Cref{thm:Berry} and \\Cref{thm:CLT}), roughly speaking, show that trade-off functions corresponding to small privacy leakage accumulate to $G_\\mu$ for some $\\mu$ under composition. Equivalently, the privacy of the composition of many ``very private'' mechanisms is best measured by GDP in the limit. 
This identifies GDP as the focal privacy definition among the family of $f$-DP privacy guarantees, including $(\\epsilon,\\delta)$-DP. More precisely, \\emph{all} privacy definitions that are based on a hypothesis testing formulation of ``indistinguishability'' converge to the guarantees of GDP in the limit of composition. We remark that \\cite{sommer2018privacy} proved a conceptually related central limit theorem for random variables corresponding to the privacy loss. This theorem is used to reason about the non-adaptive composition for $(\\epsilon,\\delta)$-DP. In\ncontrast, our central limit theorem is concerned with the optimal hypothesis testing trade-off functions for the composition theorem. Moreover, our theorem is applicable in the setting of composition, where each mechanism is informed by prior interactions with the same database.\n\n\nFrom a computational viewpoint, these limit theorems yield an efficient method of approximating the composition of general $f$-DP mechanisms. This is very appealing for analyzing the privacy properties of algorithms that are comprised of many building blocks in a sequence. For comparison, the exact computation of privacy guarantees under composition can be computationally hard \\cite{complexity} and, thus, tractable approximations are important. Using our central limit theorems, the computation of the exact overall privacy guarantee $f_1\\otimes\\cdots\\otimes f_n$ in \\Cref{thm:n_steps} can be reduced to the evaluation of a single mean parameter $\\mu$ in a GDP guarantee. We give an exemplary application of this powerful technique in \\Cref{sec:application_in_sgd}.\n\n\n\n\n\nExplicitly, the mean parameter $\\mu$ in the approximation depends on certain functionals of the trade-off functions\\footnote{Although the trade-off function satisfies $f'(x) \\le 0$ almost everywhere on $[0, 1]$, we prefer to use $|f'(x)|$ instead of $-f'(x)$ for aesthetic reasons.}:\n\\begin{align*}\n\t\\mathrm{kl}(f) &:= -\\int_0^1\\log |f'(x)|\\diff x\\\\\n\n\t\\kappa_2(f)&:=\\int_0^1\\log^2 |f'(x)| \\diff x\\\\% \\int_0^1\\big(\\log g(x)-\\mu\\big)^2\\diff x\\\\\n\t\\kappa_3(f)&:=\\int_0^1\\big|\\log |f'(x)|\\big|^3\\diff x\\\\% \\int_0^1\\big(\\log g(x)-\\mu\\big)^2\\diff x\\\\\n\t\\bar{\\kappa}_3(f)&:=\\int_0^1\\big|\\log |f'(x)|+\\mathrm{kl}(f)\\big|^3\\diff x.\n\n\n\\end{align*}\nAll of these functionals take values in $[0,+\\infty]$, and the last is defined for $f$ such that $\\mathrm{kl}(f) < \\infty$. In essence, these functionals are calculating moments of the log-likelihood ratio of $P$ and $Q$ such that $f=T(P,Q)$. In particular, all of these functionals are 0 if $f(x) = \\Id(x) = 1 - x$, which corresponds to zero privacy leakage. As its name suggests, $\\mathrm{kl}(f)$ is the Kullback--Leibler (KL) divergence of $P$ and $Q$ and, therefore, $\\mathrm{kl}(f) \\ge 0$. Detailed elaboration on these functionals is deferred to \\Cref{app:CLT}.\n\n\n\nIn the following theorem, $\\boldsymbol{\\mathrm{kl}}$ denotes the vector $\\big(\\mathrm{kl}(f_{1}),\\ldots, \\mathrm{kl}(f_{n})\\big)$ and $\\boldsymbol{\\kappa_2}, \\boldsymbol{\\kappa_3},\\boldsymbol{\\bar{\\kappa}_3}$ are defined similarly; in addition, $\\|\\cdot\\|_1$ and $\\|\\cdot\\|_2$ are the $\\ell_1$ and $\\ell_2$ norms, respectively.\n\\begin{restatable}{theorem}{berryrep} \\label{thm:Berry}\nLet $f_1,\\ldots, f_n$ be symmetric trade-off functions such that $\\kappa_3(f_i) < \\infty$ for all $1 \\le i \\le n$. 
Denote
\[
\mu:= \frac{2\|\boldsymbol{\mathrm{kl}}\|_1}{\sqrt{\|\boldsymbol{\kappa_2}\|_1 - \|\boldsymbol{\mathrm{kl}}\|_2^2}}\,\, ~ \text{and} \,\, ~ \gamma:=\frac{0.56\|\boldsymbol{\bar{\kappa}_3}\|_1}{\big(\|\boldsymbol{\kappa_2}\|_1 - \|\boldsymbol{\mathrm{kl}}\|_2^2\big)^{3/2}}
\]
and assume $\gamma < \frac12$. Then, for all $\alpha\in[\gamma,1-\gamma]$, we have\footnote{We can extend $G_{{\mu}}$ to be 1 in $(-\infty,0)$ and 0 in $(1,+\infty)$ so that the assumption that $\alpha\in[\gamma,1-\gamma]$ can be removed.}
\begin{equation}\label{eq:lower_upper}
G_\mu(\alpha+\gamma)-\gamma\leqslant f_{1}\otimes f_{2} \otimes \cdots \otimes f_{n}(\alpha)\leqslant G_\mu(\alpha-\gamma)+\gamma.
\end{equation}
\end{restatable}

Loosely speaking, the lower bound in \eqref{eq:lower_upper} shows that the composition of $f_i$-DP mechanisms for $i = 1, \ldots, n$ is approximately $\mu$-GDP and, in addition, the upper bound demonstrates that the tightness of this approximation is specified by $\gamma$. In the case where all $f_i$ are equal to some $f\neq \Id$, the theorem reveals that the composition becomes blatantly non-private as $n \to \infty$ because $\mu \asymp \sqrt{n} \to \infty$. More interesting applications of the theorem, however, are cases where each $f_i$ is close to the ``perfect privacy'' trade-off function $\Id$ such that collectively $\mu$ is convergent and $\gamma$ vanishes as $n \to \infty$ (see the example in \Cref{sec:application_in_sgd}). For completeness, we note that the condition $\kappa_3(f_i) < \infty$ (which implies that the other three functionals are also finite), required for the use of this theorem, excludes the case where $f_i(0) < 1$ and, in particular, $f_{\epsilon,\delta}$ in $(\epsilon, \delta)$-DP with $\delta > 0$. We introduce an easy and general technique in \Cref{sub:_ep_0_dp_} to deal with this issue.

From a technical viewpoint, Theorem~\ref{thm:Berry} can be thought of as a Berry--Esseen type central limit theorem.
The detailed proof, as well as that of \Cref{thm:CLT}, is provided in \Cref{app:CLT}.

Next, we present an asymptotic version of \Cref{thm:Berry} for the composition of $f$-DP mechanisms. In analogy with classical central limit theorems, below we consider a triangular array of mechanisms $\{M_{n1},\ldots, M_{nn}\}_{n=1}^{\infty}$, where $M_{ni}$ is $f_{ni}$-DP for $1 \le i \le n$.

\begin{restatable}{theorem}{asymprep} \label{thm:CLT}
Let $\{f_{ni}: 1\leqslant i \leqslant n\}_{n=1}^{\infty}$ be a triangular array of symmetric trade-off functions and assume the following limits for some constants $K \ge 0$ and $s > 0$ as $n \to \infty$:
	\begin{enumerate}
		\item[\textup{1.}] $\sum_{i=1}^n \mathrm{kl}(f_{ni})\to K$;
		\item[\textup{2.}] $\max_{1\leqslant i\leqslant n} \mathrm{kl}(f_{ni}) \to 0$;
		\item[\textup{3.}] $\sum_{i=1}^n \kappa_2(f_{ni})\to s^2$;
		\item[\textup{4.}] $\sum_{i=1}^n \kappa_3(f_{ni})\to 0$.
	\end{enumerate}
Then, we have
	$$\lim_{n\to \infty} f_{n1}\otimes f_{n2} \otimes \cdots \otimes f_{nn} (\alpha) = G_{2K/s}(\alpha)$$
uniformly for all $\alpha \in [0,1]$.
\end{restatable}

Taken together, this theorem and \Cref{thm:n_steps} amount to saying that the composition $M_{n1} \otimes \ldots \otimes M_{nn}$ is asymptotically ${2K/s}$-GDP.
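As a concrete sanity check of \Cref{thm:Berry}, take every $f_{i}=G_{\mu_0}$ with $\mu_0=1/\sqrt{n}$, for which the composition is exactly $G_1$. The sketch below (assuming SciPy; purely illustrative and not used in any proof) evaluates the functionals by quadrature, using the standard fact that for $f=G_{\mu_0}$ and $t=\Phi^{-1}(1-x)$ one has $\log|f'(x)|=\mu_0 t-\mu_0^2/2$ with $t\sim\N(0,1)$; the resulting $\mu$ equals $\sqrt{n}\,\mu_0=1$ and $\gamma$ is of order $1/\sqrt{n}$.
\begin{verbatim}
# Sanity check of the Berry--Esseen-type bound for f_i = G_{mu0}, mu0 = 1/sqrt(n).
# Illustrative only; assumes SciPy is available.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def functionals(mu0):
    # With t = Phi^{-1}(1-x), log|f'(x)| = mu0*t - mu0^2/2 and t ~ N(0,1),
    # so the integrals over x in [0,1] become expectations over t ~ N(0,1).
    loglr = lambda t: mu0 * t - mu0 ** 2 / 2
    E = lambda g: quad(lambda t: g(t) * norm.pdf(t), -10, 10)[0]
    kl = -E(loglr)                        # = mu0^2 / 2
    kappa2 = E(lambda t: loglr(t) ** 2)   # = mu0^2 + mu0^4 / 4
    kbar3 = E(lambda t: abs(loglr(t) + kl) ** 3)
    return kl, kappa2, kbar3

n = 100
mu0 = 1 / np.sqrt(n)
kl, k2, kb3 = functionals(mu0)
var = n * k2 - n * kl ** 2                # ||kappa2||_1 - ||kl||_2^2
mu = 2 * n * kl / np.sqrt(var)
gamma = 0.56 * n * kb3 / var ** 1.5
print(mu, np.sqrt(n) * mu0, gamma)        # mu matches sqrt(n)*mu0 = 1; gamma = O(1/sqrt(n))
\end{verbatim}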
In fact, this asymptotic version is a consequence of Theorem~\\ref{thm:Berry} as one can show $\\mu \\to 2K\/s$ and $\\gamma \\to 0$ for the triangular array of symmetric trade-off functions. This central limit theorem implies that GDP is the \\emph{only} parameterized family of trade-off functions that can faithfully represent the effects of composition. In contrast, neither $\\epsilon$- nor $(\\epsilon,\\delta)$-DP can losslessly be tracked under composition---the parameterized family of functions $f_{\\epsilon,\\delta}$ cannot represent the trade-off function that results from the limit under composition.\n\n\n\n\n\n\nThe conditions for use of this theorem are reminiscent of Lindeberg's condition in the central limit theorem for independent random variables. The proper scaling of the trade-off functions is that both $\\mathrm{kl}(f_{ni})$ and $\\kappa_2(f_{ni})$ are of order $O(1\/n)$ for most $1 \\le i \\le n$. As a consequence, the cumulative effects of the moment functionals are bounded. Furthermore, as with Lindeberg's condition, the second condition in Theorem~\\ref{thm:CLT} requires that no single mechanism has a significant contribution to the composition in the limit.\n\n\n\n\nIn passing, we remark that $K$ and $s$ satisfy the relationship $s = \\sqrt{2K}$ in all examples of the application of \\Cref{thm:CLT} in this paper, including \\Cref{thm:DPCLT} and \\Cref{thm:mixtureSGD} as well as their corollaries. As such, the composition is asymptotically $s$-GDP. A proof of this interesting observation or the construction of a counterexample is left for future work.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Composition and Limit Theorems}\n\\label{sec:composition-theorems}\n\n\n\\newcommand{\\mathrm{kl}}{\\mathrm{kl}}\n\\newcommand{\\mathrm{lk}}{\\mathrm{lk}}\n\\renewcommand{\\mathrm{Proc}}{\\mathrm{Proc}}\n\nImagine that an analyst performs a sequence of analyses on a private dataset, in which each analysis is informed by prior analyses on the same dataset. Provided that every analysis alone is private, the question is whether all analyses collectively are private, and if so, how the privacy degrades as the number of analyses increases, namely under composition. It is essential for a notion of privacy to gracefully handle composition, without which the privacy analysis of complex algorithms would be almost impossible.\n\n\nNow, we describe the composition of two mechanisms. For simplicity, this section writes $X$ for the space of datasets and abuse notation by using $n$ to refer to the number of mechanisms in composition\\footnote{As will be clear later, the use of $n$ is consistent with the literature on central limit theorems.}. Let $M_1: X \\to Y_1$ be the first mechanism and $M_2: X \\times Y_1 \\to Y_2$ be the second mechanism. In brief, $M_2$ takes as input the output of the first mechanism $M_1$ in addition to the dataset. With the two mechanisms in place, the joint mechanism $M: X \\to Y_1 \\times Y_2$ is defined as\n\\begin{equation}\\label{eq:M_2_comp}\nM(S) = (y_1, M_2(S, y_1)),\n\\end{equation}\nwhere $y_1 = M_1(S)$.\\footnote{Alternatively, we can write $M(S) = (M_1(S), M_2(S, M_1(S)))$, in which case it is necessary to specify that $M_1$ should be run only once in this expression.} Roughly speaking, the distribution of $M(S)$ is constructed from the marginal distribution of $M_1(S)$ on $Y_1$ and the conditional distribution of $M_2(S, y_1)$ on $Y_2$ given $M_1(S) = y_1$. The composition of more than two mechanisms follows recursively. 
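To make the adaptivity in \eqref{eq:M_2_comp} concrete, here is a minimal sketch of a two-step composition in which the second query depends on the first output. The two mechanisms are hypothetical Gaussian-noise placeholders chosen only for illustration; they are not the mechanisms analyzed elsewhere in the paper.
\begin{verbatim}
# Minimal sketch of the two-step composition M(S) = (y1, M2(S, y1)).
import numpy as np

rng = np.random.default_rng(0)

def M1(S):
    # first mechanism: a noisy count of the dataset
    return len(S) + rng.normal(0.0, 1.0)

def M2(S, y1):
    # second mechanism: its query (a noisy count above a threshold) depends on y1
    threshold = y1 / 2
    return sum(x > threshold for x in S) + rng.normal(0.0, 1.0)

def M(S):
    # the composed mechanism M : X -> Y1 x Y2
    y1 = M1(S)
    return (y1, M2(S, y1))

print(M([3.1, 4.7, 0.2, 5.5]))
\end{verbatim}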
In general, given a sequence of mechanisms $M_i: X \\times Y_1 \\times \\cdots \\times Y_{i-1} \\to Y_i$ for $i=1,2,\\ldots, n$, we can recursively define the joint mechanism as their composition:\n\\[\nM: X \\to Y_1 \\times \\cdots \\times Y_n.\n\\]\nPut differently, $M(S)$ can be interpreted as the trajectory of a Markov chain whose initial distribution is given by $M_1(S)$ and the transition kernel $M_i(S, \\cdots)$ at each step.\n\n\nUsing the language above, the goal of this section is to relate the privacy loss of $M$ to that of the $n$ mechanisms $M_1, \\ldots, M_n$ in the $f$-DP framework. In short, Section~\\ref{sub:composition_theorem} develops a general composition theorem for $f$-DP. In Sections~\\ref{sub:a_berry_esseen_type_of_clt}, we identify a central limit theorem phenomenon of composition in the $f$-DP framework, which can be used as an approximation tool, just like we use the central limit theorem for random variables. This approximation is extended to and improved for $(\\epsilon, \\delta)$-DP in Section~\\ref{sub:_ep_0_dp_}.\n\n\n\n\n\n\n\n\n\n\\subsection{A General Composition Theorem}\n\\label{sub:composition_theorem}\n\n\nThe main thrust of this subsection is to demonstrate that the composition of private mechanisms is closed and tight\\footnote{\\Cref{sub:group_privacy} shows that $f$-DP is ``closed and tight'' in a similar sense, in terms of the guarantees of group privacy.} in the $f$-DP framework. This result is formally stated in Theorem~\\ref{thm:n_steps}, which shows that the composed mechanism remains $f$-DP with the trade-off function taking the form of a certain product. To define the product, consider two trade-off functions $f$ and $g$ that are given as $f = \\F(P, Q)$ and $g = \\F(P', Q')$ for some probability distributions $P, P', Q, Q'$.\n\n\n\n\n\\begin{definition} \\label{def:product}\nThe tensor product of two trade-off functions $f = \\F(P, Q)$ and $g = \\F(P', Q')$ is defined as\n$$f\\otimes g := \\F(P\\times P', Q\\times Q').$$\n\\end{definition}\nThroughout the paper, write $f \\otimes g (\\alpha)$ for $(f \\otimes g) (\\alpha)$, and denote by $f^{\\otimes n}$ the $n$-fold tensor product of $f$. The well-definedness of $f^{\\otimes n}$ rests on the associativity of the tensor product, which we will soon illustrate.\n\nBy definition, $f\\otimes g$ is also a trade-off function. Nevertheless, it remains to be shown that the tensor product is well-defined: that is, the definition is independent of the choice of distributions used to represent a trade-off function. More precisely, assuming $f = \\F(P, Q) = \\F(\\tilde P, \\tilde Q)$ for some distributions $\\tilde P, \\tilde Q$, we need to ensure that\n\\begin{equation*}\\label{eq:tensor_welldef}\nT(P\\times P', Q \\times Q') = T(\\tilde{P} \\times P', \\tilde{Q} \\times Q').\n\\end{equation*}\nWe defer the proof of this intuitive fact to \\Cref{app:self}. Below we list some other useful properties\\footnote{These properties make the class of trade-off functions a \\textit{commutative monoid}. 
Informally, a monoid is a group without the inverse operator.} of the tensor product of trade-off functions, whose proofs are placed in \\Cref{app:CLT}.\n\\begin{enumerate}\n\\setlength\\itemsep{0.15em}\n\\item The product $\\otimes$ is commutative and associative.\n\\item If $g_1\\geqslant g_2$, then $f\\otimes g_1 \\geqslant f\\otimes g_2$.\n\\item $f \\otimes \\Id = \\Id \\otimes f = f$, where the identity trade-off function $\\Id(x)=1-x$ for $0 \\le x \\le 1$.\n\\item $(f\\otimes g)^{-1} = f^{-1}\\otimes g^{-1}$. See the definition of inverse in \\eqref{eq:inver_f}.\n\\end{enumerate}\nNote that $\\Id$ is the trade-off function of two identical distributions. Property 4 implies that when $f,g$ are symmetric trade-off functions, their tensor product $f\\otimes g$ is also symmetric.\n\n\n\n\n\n\nNow we state the main theorem of this subsection. Its proof is given in \\Cref{app:self}.\n\\begin{theorem} \\label{thm:n_steps}\nLet $M_i(\\cdot,y_1,\\cdots,y_{i-1})$ be $f_i$-DP for all $y_1 \\in Y_1, \\ldots, y_{i-1}\\in Y_{i-1}$. Then the $n$-fold composed mechanism $M:X\\to Y_1\\times\\cdots\\times Y_n$ is $f_1\\otimes\\cdots\\otimes f_n$-DP.\n\\end{theorem}\n\n\nThis theorem shows that the composition of mechanisms remains $f$-DP or, put differently, composition is closed in the $f$-DP framework. Moreover, the privacy bound $f_1\\otimes\\cdots\\otimes f_n$ in Theorem~\\ref{thm:n_steps} is \\textit{tight} in the sense that it cannot be improved in general. To see this point, consider the case where the second mechanism completely ignores the output of the first mechanism. In that case, the composition obeys\n$$\n\\begin{aligned}\nT\\big(M(S),M(S')\\big) &= T\\big(M_1(S)\\times M_2(S),M_1(S')\\times M_2(S')\\big) \\\\\n&= T\\big(M_1(S),M_1(S')\\big)\\otimes T\\big(M_2(S),M_2(S')\\big).\n\\end{aligned}\n$$\nNext, taking neighboring datasets such that $T\\big(M_1(S),M_1(S')\\big) = f_1$ and $T\\big(M_2(S),M_2(S')\\big) = f_2$, one concludes that $f_1 \\otimes f_2$ is the tightest possible bound on the two-fold composition. For comparison, the advanced composition theorem for $(\\ep, \\delta)$-DP does not admit a single pair of optimal parameters $\\epsilon, \\delta$ \\cite{boosting}. In particular, no pair of $\\ep,\\delta$ can exactly capture the privacy of the composition of $(\\ep,\\delta)$-DP mechanisms. See \\Cref{sub:_ep_0_dp_} and \\Cref{fig:comp} for more elaboration.\n\n\n\n\n\n\n\nIn the case of GDP, composition enjoys a simple and convenient formulation due to the identity\n\\[\nG_{\\mu_1} \\otimes G_{\\mu_2} \\otimes \\cdots \\otimes G_{\\mu_n} = G_{\\mu},\n\\]\nwhere $\\mu = \\sqrt{\\mu_1^2+\\cdots+\\mu_n^2}$. This formula is due to the rotational invariance of Gaussian distributions with identity covariance. We provide the proof in \\Cref{app:CLT}. The following corollary formally summarizes this finding.\n\n\n\n\\begin{corollary}\\label{cor:gdp_comp}\nThe $n$-fold composition of $\\mu_i$-GDP mechanisms is $\\sqrt{\\mu_1^2+\\cdots+\\mu_n^2}$-GDP.\n\\end{corollary}\n\nOn a related note, the pioneering work \\cite{KOV} is the first to take the hypothesis testing viewpoint in the study of privacy composition and to use Blackwell's theorem as an analytic tool therein. In particular, the authors offered a composition theorem for $(\\ep,\\delta)$-DP that improves on the advanced composition theorem \\cite{boosting}. Following this work, \\cite{complexity} provided a self-contained proof by essentially proving the ``$(\\ep,\\delta)$ special case'' of Blackwell's theorem. 
In contrast, our novel proof of \\Cref{thm:n_steps} only makes use of the Neyman--Pearson lemma, thereby circumventing the heavy machinery of Blackwell's theorem. This simple proof better illuminates the essence of the composition theorem.\n\n\n\n\n\n\\input{CLT\/Berry}\n\\input{CLT\/e0CLT}\n\n\n\n\n\n\n\n\\subsection{Composition of $(\\ep,\\delta)$-DP: Beating Berry--Esseen}\n\\label{sub:_ep_0_dp_}\n\n\\newcommand{\\mathrm{Bern}}{\\mathrm{Bern}}\n\n\n\nNow, we extend central limit theorems to $(\\ep,\\delta)$-DP. As shown by \\Cref{thm:privacy_testing}, $(\\ep, \\delta)$-DP is equivalent to $f_{\\epsilon, \\delta}$-DP and, therefore, it suffices to approximate the trade-off function $f_{\\ep_{1},\\delta_{1}}\\otimes \\cdots \\otimes f_{\\ep_{n},\\delta_{n}}$ by making use of the composition theorem for $f$-DP mechanisms. As pointed out in Section~\\ref{sub:a_berry_esseen_type_of_clt}, however, the moment conditions required in the two central limit theorems (Theorems~\\ref{thm:Berry} and \\ref{thm:CLT}) exclude the case where $\\delta_i > 0$.\n\nTo overcome the difficulty caused by a nonzero $\\delta$, we start by observing the useful fact that\n\\begin{equation}\\label{eq:decomposition}\nf_{\\ep,\\delta} = f_{\\ep,0}\\otimes f_{0,\\delta}.\n\\end{equation}\nThis decomposition, along with the commutative and associative properties of the tensor product, shows\n$$f_{\\ep_{1},\\delta_{1}}\\otimes \\cdots \\otimes f_{\\ep_{n},\\delta_{n}} = \\big(f_{\\ep_{1},0}\\otimes \\cdots \\otimes f_{\\ep_{n},0}\\big)\\otimes \\big(f_{0,\\delta_{1}}\\otimes \\cdots \\otimes f_{0,\\delta_{n}}\\big).$$\nThis identity allows us to work on the $\\epsilon$ part and $\\delta$ part separately. In short, the $\\epsilon$ part $f_{\\ep_{1},0}\\otimes \\cdots \\otimes f_{\\ep_{n},0}$ now can be approximated by $G_{\\sqrt{\\ep_1^2+\\cdots+\\ep_n^2}}$ by invoking Theorem~\\ref{thm:CLT}. For the $\\delta$ part, we can iteratively apply the rule\n\\begin{equation}\\label{eq:delta}\nf_{0,\\delta_1}\\otimes f_{0,\\delta_2} = f_{0,1-(1-\\delta_1)(1-\\delta_2)}\n\\end{equation}\nto obtain $f_{0,\\delta_{1}}\\otimes \\cdots \\otimes f_{0,\\delta_{n}} = f_{0, 1 - (1 - \\delta_1)(1 - \\delta_2) \\cdots (1 - \\delta_n)}$. This rule is best seen via the interesting fact that $f_{0,\\delta}$ is the trade-off function of shifted uniform distributions $T\\big(U[0,1],U[\\delta,1+\\delta]\\big)$.\n\n\nNow, a central limit theorem for $(\\epsilon, \\delta)$-DP is just a stone's throw away. In what follows, the privacy parameters $\\ep$ and $\\delta$ are arranged in a triangular array $\\{(\\ep_{ni},\\delta_{ni}):1\\leqslant i\\leqslant n\\}_{n=1}^{\\infty}$.\n\\begin{restatable}{theorem}{DPCLTrep}\\label{thm:DPCLT}\nAssume\n$$\\sum_{i=1}^n \\ep_{ni}^2 \\to \\mu^2, \\quad \\max_{1\\leqslant i\\leqslant n} \\ep_{ni}\\to 0, \\quad \\sum_{i=1}^n \\delta_{ni} \\to \\delta, \\quad \\max_{1\\leqslant i\\leqslant n} \\delta_{ni}\\to 0$$\nfor some nonnegative constants $\\mu,\\delta$ as $n \\to \\infty$. Then, we have\n$$f_{\\ep_{n1},\\delta_{n1}}\\otimes \\cdots \\otimes f_{\\ep_{nn},\\delta_{nn}} \\to G_{\\mu}\\otimes f_{0, 1 - \\mathrm{e}^{-\\delta}}$$\nuniformly over $[0, 1]$ as $n \\to \\infty$.\n\\end{restatable}\n\\begin{remark}\nA formal proof is provided in \\Cref{app:CLT}. The assumptions concerning $\\{\\delta_{ni}\\}$ give rise to $1 - (1 - \\delta_{n1})(1 - \\delta_{n2}) \\cdots (1 - \\delta_{nn}) \\to 1 - \\mathrm{e}^{-\\delta}$. 
In general, tensoring with $f_{0,\\delta}$ is equivalent to scaling the graph of the trade-off function $f$ toward the origin by a factor of $1-\\delta$. This property is specified by the following formula, and we leave its proof to \\Cref{app:CLT}:\n\t\\begin{equation}\\label{prop:ruibbit}\n\tf\\otimes f_{0,\\delta}(\\alpha) =\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\t(1-\\delta)\\cdot f(\\frac{\\alpha}{1-\\delta}), \t\t& 0\\leqslant \\alpha \\leqslant 1-\\delta \\\\\n\t\t0, & 1-\\delta\\leqslant \\alpha\\leqslant 1.\n\t\t\\end{array}\n\t\t\\right.\n\t\\end{equation}\nIn particular, $f\\otimes f_{0,\\delta}$ is symmetric if $f$ is symmetric. Note that \\eqref{eq:decomposition} and \\eqref{eq:delta} can be deduced by the formula above.\n\\end{remark}\n\nThis theorem interprets the privacy level of the composition using Gaussian and uniform distributions. Explicitly, the theorem demonstrates that, based on the released information of the composed mechanism, distinguishing between any neighboring datasets is at least as hard as distinguishing between the following two bivariate distributions:\n\\[\n\\N(0, 1)\\times U[0, 1] \\text{ versus } \\N(\\mu, 1)\\times U[1 - \\mathrm{e}^{-\\delta}, 2 - \\mathrm{e}^{-\\delta}].\n\\]\nWe note that for small $\\delta$, $\\mathrm{e}^{-\\delta}\\approx 1-\\delta$. So $U[1 - \\mathrm{e}^{-\\delta}, 2 - \\mathrm{e}^{-\\delta}]\\approx U[\\delta,1+\\delta]$.\n\nThis approximation of the tensor product $f_{\\ep_{n1},\\delta_{n1}}\\otimes \\cdots \\otimes f_{\\ep_{nn},\\delta_{nn}}$ using simple distributions is important from the viewpoint of computational complexity. Murtagh and Vadhan \\cite{complexity} showed that, given a collection of $\\{(\\ep_i,\\delta_i)\\}_{i=1}^n$, finding the smallest $\\ep$ such that $f_{\\ep,\\delta}\\leqslant f_{\\ep_{1},\\delta_{1}}\\otimes \\cdots \\otimes f_{\\ep_{n},\\delta_{n}}$ is \\#P-hard\\footnote{\\#P is a complexity class that is ``even harder than'' NP (i.e.~a polynomial time algorithm for any \\#P-hard problem would imply P=NP). See, e.g., Ch. 9.~of \\cite{arora2009computational}.} for any $\\delta$. From the dual perspective (see \\Cref{sub:a_primal_dual_connection_with_}), this negative result is equivalent to the\n\\#P-hardness of evaluating the convex conjugate $\\big(f_{\\ep_{1},\\delta_{1}}\\otimes \\cdots \\otimes f_{\\ep_{n},\\delta_{n}}\\big)^*$ at any point. For completeness, we remark that \\cite{complexity} provided an FPTAS\\footnote{An approximation algorithm is called a fully polynomial-time approximation scheme (FPTAS) if its running time is polynomial in both the input size and the inverse of the relative approximation error. See, e.g., Ch.~8.~of \\cite{vazirani2013approximation}.} to approximately find the smallest $\\epsilon$ in $O(n^3)$ time for a \\textit{single} $\\delta$. In comparison, Theorem~\\ref{thm:DPCLT} offers a \\textit{global} approximation of the tensor product in $O(n)$ time using a closed-form expression, subsequently enabling an analytical approximation of the smallest\n$\\epsilon$ for each $\\delta$.\n\n\n\n\n\n\\begin{figure}[!htp]\n\\centering\n \\includegraphics[width=0.75\\linewidth]{.\/figures\/edCLT.pdf}\n \\captionof{figure}{Left: Tensoring with $f_{0,\\delta}$ scales the graph towards the origin by a factor of $1-\\delta$. Right: 10-fold composition of $(1\/\\sqrt{10},0)$-DP mechanisms, that is, $f_{\\ep,0}^{\\otimes n}$ with $n=10, \\ep=1\/\\sqrt{n}.$ The dashed curve corresponds to $\\ep=2.89,\\delta = 0.001$. 
These values are obtained by first setting $\\delta = 0.001$ and finding the smallest $\\ep$ such that the composition is $(\\ep,\\delta)$-DP. Note that the central limit theorem approximation to the true trade-off curve is almost perfect, whereas the tightest possible approximation via $(\\ep,\\delta)$-DP is substantially looser.}\n \\label{fig:comp}\n\\end{figure}\n\n\n\n\nThat being said, \\Cref{thm:DPCLT} remains silent on the approximation error in applications with a moderately large number of $(\\epsilon, \\delta)$-DP mechanisms. Alternatively, we can apply \\Cref{thm:Berry} to obtain a non-asymptotic normal approximation to $f_{\\ep_{1},0}\\otimes \\cdots \\otimes f_{\\ep_{n},0}$ and use $\\gamma$ to specify the approximation error. It can be shown that $\\gamma = O(1\/\\sqrt{n})$ under mild conditions (\\Cref{cor:e0Berry}). This bound, however, is not sharp enough for tight privacy guarantees if $n$ is not too large (note that $1\/\\sqrt{n} \\approx 0.14$ if $n = 50$, for which exact computation is already challenging, if possible at all). Surprisingly, the following theorem establishes a $O(1\/n)$ bound, thereby ``beating'' the classical Berry--Esseen bound.\n\\begin{theorem} \\label{thm:fast}\nFix $\\mu > 0$ and let $\\ep = \\mu\/\\sqrt{n}$. There is a constant $c > 0$ that only depends on $\\mu$ satisfying\n\\[\nG_{\\mu}\\left( \\alpha+\\tfrac{c}{n} \\right) - \\tfrac{c}{n} \\leqslant f_{\\ep,0}^{\\otimes n}(\\alpha)\\leqslant G_{\\mu}\\left( \\alpha-\\tfrac{c}{n} \\right) + \\tfrac{c}{n}\n\\]\nfor all $n\\geqslant 1$ and $c\/n \\le \\alpha \\le 1 - c\/n$.\n\\end{theorem}\n\n\n\nAs with \\Cref{thm:DPCLT}, this theorem can be extended to approximate DP ($\\delta \\ne 0$) by making use of the decomposition \\eqref{eq:decomposition}. Our simulation studies suggest that $c \\approx 0.1$ for $\\mu = 1$, which is best illustrated in the right panel of \\Cref{fig:comp}. Despite a fairly small $n = 10$, the difference between $G_1$ and its target $f_{\\ep,0}^{\\otimes n}$ is less than 0.013 in the pointwise sense. Interestingly, numerical evidence suggests the same $O(1\/n)$ rate in the inhomogeneous composition provided that $\\ep_1, \\ldots, \\ep_n$ are roughly the same size. A formal proof, or even a quantitative statement of this observation, constitutes an interesting problem for future investigation.\n\n\nIn closing this section, we highlight some novelties in the proof of \\Cref{thm:fast}. Denoting $p_{\\epsilon} = \\frac{1}{1+\\mathrm{e}^\\ep}$ and $q_{\\epsilon} = \\frac{\\mathrm{e}^\\ep}{1+\\mathrm{e}^\\ep}$, \\cite{KOV} presented a very useful expression (rephrased in our framework):\n\\[\nf_{\\ep,0}^{\\otimes n} = \\F\\big(B(n,p_{\\ep}),B(n, q_{\\ep})\\big),\n\\]\nwhere $B(n, p)$ denotes the binomial distribution with $n$ trials and success probability $p$. However, directly approximating $f_{\\ep,0}^{\\otimes n}$ through these two binomial distributions is unlikely to yield \nan $O(1\/n)$ bound because the Berry--Esseen bound is rate-optimal for binomial distributions. Our analysis, instead, rests crucially on a certain smoothing effect that comes for free in testing between the two distributions. It is analogous to the continuity correction for normal approximations to binomial probabilities. 
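To make the binomial expression concrete, the following short Python sketch (our own illustration rather than part of the formal development; it assumes \texttt{numpy} and \texttt{scipy}) evaluates the exact composition $f_{\ep,0}^{\otimes n}$ by thresholding the binomial count, which is optimal because the likelihood ratio is monotone in the count, and compares the resulting vertices with the Gaussian limit $G_\mu$:
\begin{verbatim}
# Exact n-fold composition of (eps, 0)-DP versus its Gaussian limit.
# Our own illustrative sketch (assumes numpy and scipy are available).
import numpy as np
from scipy.stats import binom, norm

def exact_composition_vertices(eps, n):
    """Vertices (alpha, beta) of T(B(n, p_eps), B(n, q_eps)).

    The likelihood ratio is increasing in the binomial count k, so the
    optimal tests threshold on k; the full trade-off function is the
    linear interpolation of these vertices.
    """
    p = 1.0 / (1.0 + np.exp(eps))           # p_eps
    q = np.exp(eps) / (1.0 + np.exp(eps))   # q_eps
    ks = np.arange(n + 2)
    alpha = binom.sf(ks - 1, n, p)          # P(K >= k) under B(n, p_eps)
    beta = binom.cdf(ks - 1, n, q)          # P(K <  k) under B(n, q_eps)
    return alpha, beta

def G(mu, alpha):
    """Gaussian trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

n, mu = 10, 1.0
alpha, beta = exact_composition_vertices(mu / np.sqrt(n), n)
# Largest gap (at the vertices) between the exact composition and G_mu.
print(np.max(np.abs(beta - G(mu, alpha))))
\end{verbatim}
For $n = 10$ and $\mu = 1$, the printed gap stays below the pointwise bound of $0.013$ quoted above, consistent with the right panel of \Cref{fig:comp}.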
See the technical details in \\Cref{app:CLT}.\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nIn this paper, we have introduced a new framework for private data analysis that we refer to as $f$-differential privacy, which generalizes $(\\ep, \\delta)$-DP and has a number of attractive properties that escape the difficulties of prior work. This new privacy definition uses trade-off functions of hypothesis testing as a measure of indistinguishability of two neighboring datasets rather than a few parameters as in prior differential privacy relaxations. Our $f$-DP retains an interpretable hypothesis testing semantics and is expressive enough to losslessly reason about composition, post-processing, and group privacy by virtue of the informativeness of trade-off functions. Moreover, $f$-DP admits a central limit theorem that identifies a simple and single-parameter family of privacy\ndefinitions as focal: Gaussian differential privacy. Precisely, all hypothesis testing based definitions of privacy converge to Gaussian differential privacy in the limit under composition, which implies that Gaussian differential privacy is the unique such definition that can tightly handle composition. The central limit theorem and its Berry--Esseen variant give a tractable analytical approach to tightly analyzing the privacy cost of iterative methods such as SGD. Notably, $f$-DP is \\emph{dual} to $(\\ep, \\delta)$-DP in a constructive sense, which gives the ability to import results proven for $(\\epsilon,\\delta)$-DP. This powerful perspective allows us to obtain an easy-to-use privacy amplification by subsampling theorem for $f$-DP, which in particular significantly improves on the\nstate-of-the-art counterpart in the $(\\ep, \\delta)$-DP setting. \n\n\n\n\nWe see several promising directions for future work using and extending the $f$-DP framework. First, \\Cref{thm:fast} can possibly be extended to the inhomogeneous case where trade-off functions are different from each other in the composition. Such an extension would allow us to apply the central limit theorem for privacy approximation with strong finite-sample guarantees to a broader range of problems. Second, it would be of interest to investigate whether the privacy guarantee of the subsampled mechanism in \\Cref{thm:subsample} can be improved for some trade-off functions. Notably, we have shown in \\Cref{app:property} that this bound is tight if the trade-off function $f = 0$, that is, the original mechanism is blatantly non-private. Third, the notion of $f$-DP naturally has a \\textit{local} realization where the obfuscation of the sensitive information is applied at the individual record level. In this setting, what are the fundamental limits of estimation with local $f$-DP\nguarantees \\cite{duchi2018minimax}? In light of \\cite{duchi2018right}, what is the correct complexity measure in local $f$-DP estimation? If it is not the Fisher information, can we identify an alternative to the Fisher information for some class of trade-off functions? Moreover, we recognize that an adversary in differentially private learning may set different pairs of target type I and type II errors. For example, an adversary that attempts to control type I and II errors at 10\\% and 10\\%, respectively, can behave very differently from one who aims to control the two errors at 0.1\\% and 99\\%, respectively. 
An important question is to address, in the framework of $f$-DP, the trade-offs between resources such as privacy and statistical efficiency on the one hand and the target type I and type II errors on the other.

Finally, we wish to remark that $f$-DP can possibly offer a mathematically tractable and flexible framework for minimax estimation under privacy constraints
(see, for example, \cite{cai2019cost,bun2018fingerprinting,dwork2015robust}). Concretely, given a candidate estimator satisfying $(\epsilon, \delta)$-DP appearing in the upper bound and a possibly loose lower bound under the $(\ep, \delta)$-DP constraint, we can replace the $(\epsilon, \delta)$-DP constraint by the $f$-DP constraint where $f$ is the tightest trade-off function characterizing the estimation procedure. As is clear, the $f$-DP constraint is more stringent than the $(\ep, \delta)$-DP constraint by recognizing the primal-dual conversion (see \Cref{prop:ftoDP}). While the upper bound remains the same as the estimator continues to satisfy the new privacy constraint, the lower bound can possibly be improved owing to the more stringent constraint. It would be of great interest to investigate to what extent this $f$-DP based approach can reduce the gap between upper and lower bounds in minimax estimation under privacy constraints.

Ultimately, the test of a privacy definition lies not just in its power and semantics, but also in its ability to usefully analyze diverse algorithms. In this paper, we have given convincing evidence that $f$-DP is up to the task. We leave the practical evaluation of this new privacy definition to future work.


\subsection{Gaussian Differential Privacy}
\label{sub:gaussian_differential_privacy}

This subsection introduces a parametric family of $f$-DP guarantees, where $f$ is the trade-off function of two normal distributions. We refer to this specialization as Gaussian differential privacy (GDP). GDP enjoys many desirable properties that lead to its central role in this paper. Among other things, the trade-off function can now be specified precisely by a single parameter. To define this notion, let
\[
G_\mu:=\F\big(\N(0,1), \N(\mu,1)\big)
\]
for $\mu \ge 0$. An explicit expression for the trade-off function $G_\mu$ reads
\begin{equation}\label{eq:Gmu}
	G_\mu(\alpha) = \Phi\big(\Phi^{-1}(1-\alpha)-\mu\big),
\end{equation}
where $\Phi$ denotes the standard normal CDF. For completeness, we provide a proof of \eqref{eq:Gmu} in \Cref{app:fDP}. This trade-off function is decreasing in $\mu$ in the sense that $G_\mu \leqslant G_{\mu'}$ if $\mu \ge \mu'$. We now define GDP:
\begin{definition} \label{def:GDP}
A mechanism $M$ is said to satisfy $\mu$-Gaussian Differential Privacy ($\mu$-GDP) if it is $G_\mu$-DP. That is,
\[
T\big(M(S),M(S')\big) \ge G_\mu
\]
for all neighboring datasets $S$ and $S'$.
\end{definition}
GDP has several attractive properties. First, this privacy definition is fully described by the single mean parameter of a unit-variance Gaussian distribution, which makes it easy to describe and interpret the privacy guarantees. For instance, one can see from the right panel of \Cref{fig:DPvsGDP} that $\mu \le 0.5$ guarantees a reasonable amount of privacy, whereas if $\mu \geqslant 6$, almost nothing is being promised. Second, loosely speaking, GDP occupies a role among all hypothesis testing based notions of privacy that is similar to the role that the Gaussian distribution has among general probability distributions.
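Since \eqref{eq:Gmu} is available in closed form, these guarantees are easy to evaluate numerically. The following minimal Python sketch (our illustration; it only assumes \texttt{scipy}) tabulates the smallest achievable type II error at a fixed type I error for the values of $\mu$ displayed in the right panel of \Cref{fig:DPvsGDP}:
\begin{verbatim}
# Evaluating the Gaussian trade-off function (our illustration; needs scipy).
from scipy.stats import norm

def G(mu, alpha):
    """G_mu(alpha): minimal type II error when testing N(0,1) vs N(mu,1)
    with type I error at most alpha."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

for mu in (0.5, 1.0, 3.0, 6.0):
    print(f"mu = {mu}: minimal type II error at alpha = 0.05 is {G(mu, 0.05):.4f}")
\end{verbatim}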
We formalize this important point by proving central limit theorems for $f$-DP in \\Cref{sec:composition-theorems}, which, roughly speaking, says that $f$-DP converges to GDP under composition in the limit. Lastly, as shown in the remainder of this subsection, GDP \\emph{precisely} characterizes the Gaussian mechanism, one of the most fundamental building blocks of differential privacy.\n\n\n\n\n\n\n\n\n\nConsider the problem of privately releasing a univariate statistic $\\theta(S)$ of the dataset $S$. Define the sensitivity of $\\theta$ as\n\\[\n\\mathrm{sens}(\\theta) = \\sup_{S, S'} | \\theta(S) - \\theta(S') |,\n\\]\nwhere the supremum is over all neighboring datasets. The Gaussian mechanism adds Gaussian noise to the statistic $\\theta$ in order to obscure whether $\\theta$ is computed on $S$ or $S'$. The following result shows that the Gaussian mechanism with noise properly scaled to the sensitivity of the statistic satisfies GDP.\n\\begin{theorem}\\label{thm:g_mech}\nDefine the Gaussian mechanism that operates on a statistic $\\theta$ as $M(S) = \\theta(S) + \\xi$, where $\\xi \\sim \\N(0, \\mathrm{sens}(\\theta)^2\/\\mu^2)$. Then, $M$ is $\\mu$-GDP.\n\\end{theorem}\n\\begin{proof}[Proof of Theorem~\\ref{thm:g_mech}]\nRecognizing that $M(S), M(S')$ are normally distributed with means $\\theta(S), \\theta(S')$, respectively, and common variance $\\sigma^2 = \\mathrm{sens}(\\theta)^2\/\\mu^2$, we get\n\\begin{align*}\n\\F\\big(M(S),M(S')\\big) = \\F\\big( \\N(\\theta(S), \\sigma^2), \\N(\\theta(S'), \\sigma^2)\\big) = G_{|\\theta(S)-\\theta(S')|\/\\sigma}.\n\\end{align*}\nBy the definition of sensitivity, $|\\theta(S)-\\theta(S')|\/\\sigma \\le \\mathrm{sens}(\\theta)\/\\sigma = \\mu$. Therefore, we get\n\\[\n\\F\\big(M(S),M(S')\\big) = G_{|\\theta(S)-\\theta(S')|\/\\sigma} \\geqslant G_\\mu.\n\\]\nThis completes the proof.\n\n\n\n\n\n\\end{proof}\n\n\n\nAs implied by the proof above, GDP offers the tightest possible privacy bound of the Gaussian mechanism. More precisely, the Gaussian mechanism in \\Cref{thm:g_mech} satisfies\n\\begin{equation}\\label{eq:f_inf}\nG_{\\mu}(\\alpha) = \\inf_{\\text{neighboring }S, S'} \\, \\F \\big(M(S),M(S') \\big)(\\alpha),\n\\end{equation}\nwhere the infimum is (asymptotically) achieved at the two neighboring datasets such that $|\\theta(S) - \\theta(S')| = \\mathrm{sens}(\\theta)$ \\textit{irrespective} of the type I error $\\alpha$. As such, the characterization by GDP is precise in the pointwise sense. In contrast, the right-hand side of \\eqref{eq:f_inf} in general is not necessarily a convex function of $\\alpha$ and, in such case, is not a trade-off function according to \\Cref{prop:trade-off}. This nice property of Gaussian mechanism is related to the log-concavity of Gaussian distributions. See \\Cref{prop:logconcave} for a detailed treatment of log-concave distributions.\n\n\n\n\n\n\n\n\\subsection{A Primal-Dual Perspective}\n\\label{sub:a_primal_dual_connection_with_}\n\n\nIn this subsection, we show that $f$-DP is equivalent to an infinite \\textit{collection} of $(\\ep,\\delta)$-DP guarantees via the convex conjugate of the trade-off function. As a consequence of this, we can view $f$-DP as the \\textit{primal} privacy representation and, accordingly, its \\textit{dual} representation is the collection of $(\\ep,\\delta)$-DP guarantees. Taking this powerful viewpoint, many results from the large body of $(\\epsilon, \\delta)$-DP work can be carried over to $f$-DP in a seamless fashion. 
In particular, this primal-dual perspective is crucial to our analysis of ``privacy amplification by subsampling'' in \\Cref{sec:subsampling}. All proofs are deferred to \\Cref{app:fDP}.\n\n\n\nFirst, we present the result that converts a collection of $(\\epsilon, \\delta)$-DP guarantees into an $f$-DP guarantee.\n\\begin{proposition}[Dual to Primal] \\label{prop:DPtof}\nLet $I$ be an arbitrary index set such that each $i\\in I$ is associated with $\\ep_i\\in[0, \\infty)$ and $\\delta_i\\in[0,1]$. A mechanism is $(\\ep_i,\\delta_i)$-DP for all $i\\in I$ if and only if it is $f$-DP with\n$$f = \\sup_{i\\in I} f_{\\ep_i,\\delta_i}.$$\n\\end{proposition}\nThis proposition follows easily from the equivalence of $(\\ep,\\delta)$-DP and $f_{\\ep,\\delta}$-DP. We remark that the function $f$ constructed above remains a symmetric trade-off function.\n\nThe more interesting direction is to convert $f$-DP into a collection of $(\\ep,\\delta)$-DP guarantees. Recall that the convex conjugate of a function $g$ defined on $(-\\infty, \\infty)$ is defined as\n\\begin{equation}\\label{eq:conjugate}\ng^*(y) = \\sup_{-\\infty < x < \\infty} y x - g(x).\n\\end{equation}\nTo define the conjugate of a trade-off function $f$, we extend its domain by setting $f(x) = \\infty$ for $x < 0$ and $x > 1$. With this adjustment, the supremum is effectively taken over $0 \\le x \\le 1$.\n\n\n\\begin{restatable}[Primal to Dual]{proposition}{ftoDPrep} \\label{prop:ftoDP}\n\tFor a symmetric trade-off function $f$, a mechanism is $f$-DP if and only if it is $\\big(\\ep,\\delta(\\ep)\\big)$-DP for all $\\ep\\geqslant 0$ with $\\delta(\\ep)=1+f^*(-\\mathrm{e}^{\\ep})$.\n\\end{restatable}\n\n\\begin{figure}[!ht]\n\\centering\n \\includegraphics[width=0.6\\linewidth]\n{.\/figures\/envelope.pdf}\n \\captionof{figure}{Each $(\\ep,\\delta(\\ep))$-DP guarantee corresponds to two supporting linear functions (symmetric to each other) to the trade-off function describing the complete $f$-DP guarantee. In general, characterizing a privacy guarantee using only a subset of $(\\ep,\\delta)$-DP guarantees (for example, only those with small $\\delta$) would result in information loss.}\n \\label{fig:envelope}\n\\end{figure}\n\n\nFor example, taking $f=G_\\mu$, the following corollary provides a lossless conversion from GDP to a collection of $(\\ep,\\delta)$-DP guarantees. This conversion is exact and, therefore, any other $(\\ep,\\delta)$-DP guarantee derived for the Gaussian mechanism is implied by this corollary. See \\Cref{fig:envelope} for an illustration of this result.\n\\begin{restatable}{corollary}{GDPtoDPrep}\\label{corr:GDPtoDP}\n A mechanism is $\\mu$-GDP if and only if it is $\\big(\\ep,\\delta(\\ep)\\big)$-DP for all $\\ep\\geqslant0$, where\n \\[\n \\delta(\\ep)= \\Phi\\Big( -\\frac{\\varepsilon}{\\mu} +\\frac{\\mu}{2} \\Big)-\n \\mathrm{e}^{\\varepsilon}\\Phi\\Big(- \\frac{\\varepsilon}{\\mu} - \\frac{\\mu}{2} \\Big).\n \\]\n\\end{restatable}\nThis corollary has appeared earlier in \\cite{balle2018improving}. Along this direction, \\cite{balle2018privacy} further proposed ``privacy profile'', which in essence corresponds to an infinite collection of $(\\ep,\\delta)$. The notion of privacy profile mainly serves as an analytical tool in \\cite{balle2018privacy}.\n\nThe primal-dual perspective provides a useful tool through which we can bridge the two privacy definitions. 
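As a small illustration of this bridge, the following Python sketch (ours; it assumes \texttt{numpy} and \texttt{scipy}) evaluates the conversion of \Cref{corr:GDPtoDP} and prints a few points on the curve of $(\ep, \delta(\ep))$-DP guarantees implied by a $\mu$-GDP guarantee:
\begin{verbatim}
# Primal-to-dual conversion for GDP (our illustration of the corollary above):
# a mechanism is mu-GDP iff it is (eps, delta(eps))-DP for every eps >= 0.
import numpy as np
from scipy.stats import norm

def delta_of_eps(mu, eps):
    """delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2)."""
    return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

mu = 1.0
for eps in (0.5, 1.0, 2.0, 3.0):
    print(f"{mu}-GDP implies ({eps}, {delta_of_eps(mu, eps):.2e})-DP")
\end{verbatim}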
In some cases, it is easier to work with $f$-DP by leveraging the interpretation and informativeness of trade-off functions, as seen from the development of composition theorems for $f$-DP in \Cref{sec:composition-theorems}. Meanwhile, $(\ep,\delta)$-DP is more convenient to work with in the cases where the lower complexity of two parameters $\epsilon,\delta$ is helpful, for example, in the proof of the privacy amplification by subsampling theorem for $f$-DP. In short, our approach in \Cref{sec:subsampling} is to first work in the dual world and use existing subsampling theorems for $(\ep,\delta)$-DP, and then convert the results back to $f$-DP using a slightly more advanced version of \Cref{prop:ftoDP}.


\section{$f$-Differential Privacy and Its Basic Properties}
\label{sec:fDP}

In Section~\ref{sub:trade_off_function}, we give a formal definition of $f$-DP. Section~\ref{sub:gaussian_differential_privacy} introduces Gaussian differential privacy, a special case of $f$-DP. In Section~\ref{sec:conn-with-blackw}, we highlight some appealing properties of this new privacy notion from an information-theoretic perspective. Next, Section~\ref{sub:a_primal_dual_connection_with_} offers a profound connection between $f$-DP and $(\epsilon, \delta)$-DP. Finally, we discuss the group privacy properties of $f$-DP.

Before moving on, we first establish several key pieces of notation from the differential privacy literature.
\begin{itemize}
\item
{\bf Dataset.} A dataset $S$ is a collection of $n$ records, each corresponding to an individual. Formally, we write the dataset as $S = (x_1,\ldots, x_n)$, and an individual $x_i \in X$ for some abstract space $X$. Two datasets $S' = (x'_1,\ldots, x'_n)$ and $S$ are said to be \textit{neighbors} if they differ in exactly one record, that is, there exists an index $j$ such that $x_i = x_i'$ for all $i \ne j$ and $x_j \ne x_j'$.

\item {\bf Mechanism.} A mechanism $M$ refers to a randomized algorithm that takes as input a dataset $S$ and releases some (randomized) statistics $M(S)$ of the dataset in some abstract space $Y$. For example, a mechanism can release the average salary of individuals in the dataset plus some random noise.

\end{itemize}

\input{fDP/tradeoff}
\input{fDP/GDP}
\input{fDP/post}
\input{fDP/dual}
\input{fDP/group}


\subsection{Group Privacy}\label{sub:group_privacy}

The notion of $f$-DP can be extended to address the privacy of a \textit{group} of individuals, and a question of interest is to quantify how privacy degrades as the group size grows. To set up the notation, we say that two datasets $S, S'$ are $k$-neighbors (where $k \ge 2$ is an integer) if there exist datasets $S = S_0, S_1, \ldots, S_k = S'$ such that $S_i$ and $S_{i+1}$ are neighboring or identical for all $i = 0, \ldots, k-1$. Equivalently, $S,S'$ are $k$-neighbors if they differ by at most $k$ individuals. Accordingly, a mechanism $M$ is said to be $f$-DP for \textit{groups of size $k$} if
\[ T\big(M(S),M(S')\big)\geqslant f\]
for all $k$-neighbors $S$ and $S'$.

In the following theorem, we use $h^{\circ k}$ to denote the $k$-fold iterative composition of a function $h$.
For example, $h^{\\circ 1} = h$ and $h^{\\circ 2}(x) = h(h(x))$.\n\\begin{restatable}{theorem}{groupthm} \\label{thm:group}\nIf a mechanism is $f$-DP, then it is $\\left[1-(1-f)^{\\circ k}\\right]$-DP for groups of size $k$. In particular, if a mechanism is $\\mu$-GDP, then it is $k\\mu$-GDP for groups of size $k$.\n\\end{restatable}\nFor completeness, $1-(1-f)^{\\circ k}$ is a trade-off function and, moreover, remains symmetric if $f$ is symmetric. These two facts and \\Cref{thm:group} are proved in \\Cref{app:fDP}. As revealed in the proof, the privacy bound $1-(1-f)^{\\circ k}$ in general cannot be improved, thereby showing that the group operation in the $f$-DP framework is \\textit{closed} and \\textit{tight}. In addition, it is easy to see that $1 - (1-f)^{\\circ k} \\leqslant 1 - (1-f)^{\\circ (k-1)}$ by recognizing that the trade-off function $f$ satisfies $1-f(x)\\geqslant x$. This is consistent with the intuition that detecting changes in groups of $k$ individuals becomes easier as the group size increases.\n\nAs an interesting consequence of \\Cref{thm:group}, the group privacy of $\\epsilon$-DP in the limit corresponds to the trade-off function of two Laplace distributions. Recall that the density of $\\mathrm{Lap}(\\mu,b)$ is $\\frac1{2b}\\mathrm{e}^{-|x-\\mu|\/b}$.\n\\begin{restatable}{proposition}{grouplimit}\n\\label{prop:group_limit}\nFix $\\mu \\ge 0$ and set $\\epsilon = \\mu\/k$. As $k \\to \\infty$, we have\n\\[\n1-(1-f_{\\ep,0})^{\\circ k} \\to \\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big).\n\\]\nThe convergence is uniform over $[0,1]$.\n\\end{restatable}\n\nTwo remarks are in order. First, $\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)$ is not equal to $f_{\\ep,\\delta}$ for any $\\ep,\\delta$ and, therefore, $(\\ep,\\delta)$-DP is not expressive enough to measure privacy under the group operation. Second, the approximation in this theorem is very accurate even for small $k$. For example, for $\\mu=1,k=4$, the function $1-(1-f_{\\ep,0})^{\\circ k}$ is within $0.005$ of $\\F\\big(\\mathrm{Lap}(0,1),\\mathrm{Lap}(\\mu,1)\\big)$ uniformly over $[0, 1]$. The proof of \\Cref{prop:group_limit} is deferred to \\Cref{app:fDP}.\n\n\n\n\n\\subsection{Post-processing, Blackwell's Information Ordering, and Connection with Divergence-Based Definitions}\n\n\\subsection{Post-Processing and the Informativeness of $f$-DP}\n\\label{sec:conn-with-blackw}\n\nIntuitively, a data analyst cannot make a statistical analysis more disclosive only by processing the output of the mechanism $M$. This is called the post-processing property, a natural requirement that any notion of privacy, including our definition of $f$-DP, should satisfy.\n\n\n\n\nTo formalize this point for $f$-DP, denote by $\\mathrm{Proc}: Y \\to Z$ a (randomized) algorithm that maps the input $M(S) \\in Y$ to some space $Z$, yielding a new mechanism that we denote by $\\mathrm{Proc} \\circ M$. The following result confirms the post-processing property of $f$-DP.\n\\begin{proposition}\\label{prop:post_process}\nIf a mechanism $M$ is $f$-DP, then its post-processing $\\mathrm{Proc}\\circ M$ is also $f$-DP.\n\\end{proposition}\nProposition~\\ref{prop:post_process} is a consequence of the following lemma. Let $\\mathrm{Proc}(P)$ be the probability distribution of $\\mathrm{Proc}(\\zeta)$ with $\\zeta$ drawn from $P$. 
Define $\\mathrm{Proc}(Q)$ likewise.\n\\begin{restatable}{lemma}{postrep} \\label{lem:post}\nFor any two distributions $P$ and $Q$, we have\n$$\\F\\big(\\mathrm{Proc}(P),\\mathrm{Proc}(Q)\\big)\\geqslant \\F(P,Q).$$\n\\end{restatable}\n\nThis lemma means that post-processed distributions can only become more difficult to tell apart than the original distributions from the perspective of trade-off functions. While the same property holds for many divergence based measures of indistinguishability such as the {R\\'enyi} divergences\\footnote{See \\Cref{app:relation} for its definition and relation with trade-off functions.} used by the concentrated differential privacy family of definitions \\cite{concentrated,concentrated2,renyi,tcdp}, a consequence of the following theorem is that trade-off functions offer the most informative measure among all. This remarkable inverse of \\Cref{lem:post} is due to Blackwell (see also Theorem 2.5 in \\cite{KOV}).\n\\begin{theorem}[\\cite{blackwell1950comparison}, Theorem 10]\\label{thm:blackwell}\nLet $P,Q$ be probability distributions on $Y$ and $P',Q'$ be probability distributions on $Z$. The following two statements are equivalent:\n\\begin{enumerate}\n\\item[(a)] $\\F(P,Q)\\leqslant \\F(P',Q')$.\n\\item[(b)] There exists a randomized algorithm $\\mathrm{Proc}: Y \\to Z$ such that $\\mathrm{Proc}(P)=P',\\mathrm{Proc}(Q)=Q'$.\n\\end{enumerate}\n\\end{theorem}\n\\newcommand{\\mathrm{Ineq}}{\\mathrm{Ineq}}\nTo appreciate the implication of this theorem, we begin by observing that post-processing induces an order\\footnote{This is in general not a partial order.} on pairs of distributions, which is called the Blackwell order (see, e.g., \\cite{raginsky2011shannon}). Specifically, if the above condition (b) holds, then we write $(P,Q)\\preceq_{\\mathrm{Blackwell}}(P',Q')$ and interpret this as ``$(P,Q)$ is easier to distinguish than $(P',Q')$ in the Blackwell sense''. Similarly, when $\\F(P,Q)\\leqslant \\F(P',Q')$, we write $(P,Q)\\preceq_{\\mathrm{tradeoff}}(P',Q')$ and interpret this as ``$(P,Q)$ is easier to distinguish than $(P',Q')$ in the testing sense''. In general, any privacy measure used in defining a privacy notion induces an order $\\preceq$ on pairs of distributions. Assuming the post-processing property for the privacy notion, the induced order $\\preceq$ must be consistent with $\\preceq_{\\mathrm{Blackwell}}$. Concretely, we\ndenote by $\\mathrm{Ineq}(\\preceq) = \\{(P,Q; P', Q'): (P, Q) \\preceq (P', Q')\\}$ the set of all comparable pairs of the order $\\preceq$. As is clear, a privacy notion satisfies the post-processing property if and only if the induced order $\\preceq$ satisfies $\\mathrm{Ineq}({\\preceq})\\supseteq \\mathrm{Ineq}({\\preceq_{\\mathrm{Blackwell}}})$. \n\n\nTherefore, for any reasonable privacy notion, the set $\\mathrm{Ineq}({\\preceq})$ must be large enough to contain $\\mathrm{Ineq}({\\preceq_{\\mathrm{Blackwell}}})$. However, it is also desirable to have a not too large $\\mathrm{Ineq}({\\preceq})$. For example, consider the privacy notion based on a trivial divergence $D_0$ with $D_0(P\\|Q) \\equiv 0$ for any $P,Q$. Note that $\\mathrm{Ineq}({\\preceq_{D_0}})$ is the largest possible and, meanwhile, it is not informative at all in terms of measuring the indistinguishability of two distributions.\n\n\nThe argument above suggests that going from the ``minimal'' order $\\mathrm{Ineq}({\\preceq_{\\mathrm{Blackwell}}})$ to the ``maximal'' order $\\mathrm{Ineq}({\\preceq_{D_0}})$ would lead to information loss. 
Remarkably, $f$-DP is the most informative differential privacy notion from this perspective because its induced order $\preceq_{\mathrm{tradeoff}}$ satisfies $\mathrm{Ineq}({\preceq_{\mathrm{tradeoff}}}) = \mathrm{Ineq}({\preceq_{\mathrm{Blackwell}}})$. In stark contrast, this is not true for the order induced by other popular privacy notions such as R\'enyi differential privacy and $(\epsilon,\delta)$-DP. We prove this claim in \Cref{app:relation} and further justify the informativeness of $f$-DP by providing general tools that can losslessly convert $f$-DP guarantees into divergence based privacy guarantees.


\subsection{Trade-off Functions and $f$-DP}
\label{sub:trade_off_function}

All variants of differential privacy informally require that it be hard to \textit{distinguish} any pair of neighboring datasets based on the information released by a private mechanism $M$. From an attacker's perspective, it is natural to formalize this notion of ``indistinguishability'' as a hypothesis testing problem for two neighboring datasets $S$ and $S'$:
\[
H_0: \text{the underlying dataset is } S \quad\text{ versus }\quad H_1: \text{the underlying dataset is } S'.
\]
The output of the mechanism $M$ serves as the basis for performing the hypothesis testing problem. Denote by $P$ and $Q$ the probability distributions of the mechanism applied to the two datasets, namely $M(S)$ and $M(S')$, respectively. The fundamental difficulty in distinguishing the two hypotheses is best delineated by the \textit{optimal} trade-off between the achievable type I and type II errors. More precisely, consider a rejection rule $0 \le \phi \le 1$, with type I and type II error rates defined as\footnote{A rejection rule takes as input the released results of the mechanism. We flip a coin and reject the null hypothesis with probability $\phi$.}
\[
\alpha_\phi = \E_{P}[\phi], \quad \beta_\phi = 1 - \E_{Q}[\phi],
\]
respectively. The two errors are well known to satisfy, for example, the constraint
\begin{equation}\label{eq:tv_norm}
\alpha_{\phi} + \beta_{\phi} \ge 1 - \mathrm{TV}(P, Q),
\end{equation}
where the total variation distance $\mathrm{TV}(P, Q)$ is the supremum of $|P(A) - Q(A)|$ over all measurable sets $A$. Instead of this rough constraint, we seek to characterize the fine-grained trade-off between the two errors. Explicitly, fixing the type I error at \textit{any} level, we consider the minimal achievable type II error. This motivates the following definition.
\begin{definition}[trade-off function] \label{def:trade-off}
For any two probability distributions $P$ and $Q$ on the same space, define the trade-off function $\F(P, Q): [0, 1] \rightarrow [0, 1]$ as
\[
\F(P, Q)(\alpha) = \inf \left\{ \beta_\phi: \alpha_\phi \leqslant \alpha \right\},
\]
where the infimum is taken over all (measurable) rejection rules.
\end{definition}
The trade-off function serves as a clear-cut boundary of the achievable and unachievable regions of type I and type II errors, rendering itself the \textit{complete} characterization of the fundamental difficulty in testing between the two hypotheses. In particular, the greater this function is, the harder it is to distinguish the two distributions.
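To see the definition in action, the following toy Python sketch (our own illustration; it assumes \texttt{numpy} and uses a made-up pair of distributions on four outcomes) computes the vertices of $\F(P,Q)$ by rejecting outcomes in decreasing order of the likelihood ratio; the full trade-off function is the linear interpolation of these vertices, obtained by randomizing between neighboring threshold tests.
\begin{verbatim}
# Toy computation of a trade-off function on a finite outcome space.
# Our own sketch (requires numpy); the distributions P and Q are illustrative.
import numpy as np

def tradeoff_vertices(p, q):
    """Vertices (alpha, beta) of T(P, Q) for strictly positive pmfs p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    order = np.argsort(-q / p)          # reject the most "Q-like" outcomes first
    alpha = np.concatenate(([0.0], np.cumsum(p[order])))       # type I error
    beta = np.concatenate(([1.0], 1.0 - np.cumsum(q[order])))  # minimal type II error
    return alpha, beta

P = [0.4, 0.3, 0.2, 0.1]
Q = [0.1, 0.2, 0.3, 0.4]
for a, b in zip(*tradeoff_vertices(P, Q)):
    print(f"alpha = {a:.2f}  ->  beta = {b:.2f}")
\end{verbatim}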
For completeness, we remark that the minimal $\\beta_{\\phi}$ can be achieved by the likelihood ratio test---a fundamental result known as the Neyman--Pearson lemma, which we state in the appendix as \\Cref{thm:NPlemma}.\n\n\n\n\nA function is called a trade-off function if it is equal to $T(P, Q)$ for some distributions $P$ and $Q$. Below we give a necessary and sufficient condition for $f$ to be a trade-off function. This characterization reveals, for example, that $\\max\\{f, g\\}$ is a trade-off function if both $f$ and $g$ are trade-off functions.\n\\begin{restatable}{proposition}{tradeoffthm}\\label{prop:trade-off}\nA function $f: [0, 1] \\rightarrow [0, 1]$ is a trade-off function if and only if $f$ is convex, continuous\\footnote{Convexity itself implies continuity in $(0,1)$ for $f$. In addition, $f(\\alpha) \\ge 0$ and $f(\\alpha) \\le 1-\\alpha$ implies continuity at 1. Hence, the continuity condition only matters at $x = 0$.}, non-increasing, and $f(x) \\leqslant 1-x$ for $x \\in [0,1]$.\n\\end{restatable}\n\n\n\n\n\n\nNow, we propose a new generalization of differential privacy built on top of trade-off functions. Below, we write $g \\ge f$ for two functions defined on $[0, 1]$ if $g(x) \\ge f(x)$ for all $0 \\le x \\le 1$, and we abuse notation by identifying $M(S)$ and $M(S')$ with their corresponding probability distributions. Note that if $T(P,Q) \\ge T(\\widetilde P, \\widetilde Q)$, then in a very strong sense, $P$ and $Q$ are harder to distinguish than $\\widetilde P$ and $\\widetilde Q$ at \\textit{any} level of type I error.\n\n\\begin{definition}[$f$-differential privacy] \\label{def:kstable}\nLet $f$ be a trade-off function. A mechanism $M$ is said to be $f$-differentially private if\n\\[\nT\\big(M(S), M(S')\\big) \\ge f\n\\]\nfor all neighboring datasets $S$ and $S'$.\n\\end{definition}\n\n\n\nA graphical illustration of this definition is shown in Figure~\\ref{fig:intro_def}. Letting $P$ and $Q$ be the distributions such that $f = \\F(P, Q)$, this privacy definition amounts to saying that a mechanism is $f$-DP if distinguishing any two neighboring datasets based on the released information is at least as difficult as distinguishing $P$ and $Q$ based on a single draw. In contrast to existing definitions of differential privacy, our new definition is parameterized by a function, as opposed to several real valued parameters (e.g.~$\\epsilon$ and $\\delta$). This functional perspective offers a complete characterization of ``privacy'', thereby avoiding the pitfall of summarizing statistical information too early. This fact is crucial to the development of a composition theorem for $f$-DP in \\Cref{sec:composition-theorems}. Although this completeness comes at the cost of increased complexity, as we will see in Section~\\ref{sub:gaussian_differential_privacy}, a simple family of trade-off functions can often closely capture privacy loss in many scenarios.\n\n\\begin{figure}[!htp]\n\\centering\n\\includegraphics[width=.6\\linewidth]{.\/figures\/nonfDP.pdf}\n\\caption{Three different examples of $T\\big(M(S),M(S')\\big)$. Only the dashed line corresponds to a trade-off function satisfying $f$-DP.}\n\\label{fig:intro_def}\n\\end{figure}\n\n\n\n\n\n\n\nNaturally, the definition of $f$-DP is symmetric in the same sense as the neighboring relationship, which by definition is symmetric. Observe that this privacy notion also requires\n\\[\nT\\big(M(S'), M(S)\\big) \\ge f\n\\]\nfor any neighboring pair $S, S'$. 
Therefore, it is desirable to restrict our attention to ``symmetric'' trade-off functions. Proposition~\\ref{prop:symmetry} shows that this restriction does not lead to any loss of generality.\n\\begin{restatable}{proposition}{symmrep}\\label{prop:symmetry}\nLet a mechanism $M$ be $f$-DP. Then, $M$ is $f^{\\mathrm{S}}$-DP with $f^{\\mathrm{S}} = \\max\\{f, f^{-1}\\}$, where the inverse function is defined as\\footnote{\n\\Cref{eq:inver_f} is the standard definition of the left-continuous inverse of a decreasing function. When $f$ is strictly decreasing and $f(0)=1$ and hence bijective as a mapping, \\eqref{eq:inver_f} corresponds to the inverse function in the ordinary sense, i.e. $f(f^{-1}(x)) = f^{-1}(f(x)) = x$. However, this is not true in general.}\n\\begin{equation}\\label{eq:inver_f}\nf^{-1}(\\alpha):=\\inf\\{t \\in [0,1]:f(t) \\leqslant \\alpha\\}\n\\end{equation}\nfor $\\alpha \\in[0,1]$.\n\\end{restatable}\n\nWriting $f = \\F(P,Q)$, we can express the inverse as $f^{-1} = \\F(Q, P)$, which therefore is also a trade-off function. As a consequence of this, $f^{\\mathrm{S}}$ continues to be a trade-off function by making use of \\Cref{prop:trade-off} and, moreover, is \\textit{symmetric} in the sense that \n\\[\nf^{\\mathrm{S}} = (f^{\\mathrm{S}})^{-1}.\n\\] \nImportantly, this symmetrization gives a tighter bound in the privacy definition since $f^{\\mathrm{S}} \\geqslant f$. In the remainder of the paper, therefore, trade-off functions will always be assumed to be symmetric unless otherwise specified. We prove \\Cref{{prop:symmetry}} in \\Cref{app:fDP}.\n\n\n\nWe conclude this subsection by showing that $f$-DP is a generalization of $(\\ep,\\delta)$-DP. This foreshadows a deeper connection between $f$-DP and $(\\epsilon, \\delta)$-DP that will be discussed in Section~\\ref{sub:a_primal_dual_connection_with_}. Denote\n\\begin{equation}\\label{eq:fed}\nf_{\\ep, \\delta}(\\alpha) = \\max\\left\\{ 0,1 - \\delta - \\mathrm{e}^\\ep \\alpha, \\mathrm{e}^{-\\ep}(1-\\delta-\\alpha) \\right\\}\n\\end{equation}\nfor $0 \\le \\alpha \\le 1$, which is a trade-off function. \\Cref{fig:DPvsGDP} shows the graph of this function and its evident symmetry. The following result is adapted from \\cite{wasserman_zhou}.\n\\begin{proposition}[\\cite{wasserman_zhou}] \\label{thm:privacy_testing}\nA mechanism $M$ is $(\\ep, \\delta)$-DP if and only if $M$ is $f_{\\ep, \\delta}$-DP.\n\\end{proposition}\n\n\n\n\\begin{figure}[!htp]\n\\centering\n \\includegraphics[width=.75\\linewidth]\n{.\/figures\/DPvsGDP.pdf}\n \\captionof{figure}{Left: $f_{\\ep,\\delta}$ is a piecewise linear function and is symmetric with respect to the line $y = x$. It has (nontrivial) slopes $-\\mathrm{e}^{\\pm\\ep}$ and intercepts $1-\\delta$. Right: Trade-off functions of unit-variance Gaussian distributions with different means. The case of $\\mu=0.5$ is reasonably private, $\\mu=1$ is borderline private, and $\\mu=3$ is basically non-private: an adversary can control type I and type II errors simultaneously at only 0.07. In the case of $\\mu=6$ (almost coincides with the axes), the two errors both can be as small as 0.001.}\n \\label{fig:DPvsGDP}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\n\n\nModern statistical analysis and machine learning are overwhelmingly applied to data concerning \\emph{people}. 
Valuable datasets generated from personal devices and online behavior of billions of individuals contain data on location, web search histories, media consumption, physical activity, social networks, and more. This is on top of continuing large-scale analysis of traditionally sensitive data records, including those collected by hospitals, schools, and the Census. This reality requires the development of tools to perform large-scale data analysis in a way that still protects the \\emph{privacy} of individuals represented in the data.\n\nUnfortunately, the history of data privacy for many years consisted of ad-hoc attempts at ``anonymizing'' personal information, followed by high profile de-anonymizations. This includes the release of AOL search logs, de-anonymized by the \\textit{New York Times} \\cite{aol}, the Netflix Challenge dataset, de-anonymized by Narayanan and Shmatikov \\cite{netflix}, the realization that participants in genome-wide association studies could be identified from aggregate statistics such as minor allele frequencies that were publicly released \\cite{gwas}, and the reconstruction of individual-level census records from aggregate statistical releases \\cite{census}.\n\nThus, we urgently needed a rigorous and principled privacy-preserving framework to prevent breaches of personal information in data analysis. In this context, \\textit{differential privacy} has put private data analysis on firm theoretical foundations \\cite{DMNS06,approxdp}. This definition has become tremendously successful: in addition to an enormous and growing academic literature, it has been adopted as a key privacy technology by Google \\cite{rappor}, Apple \\cite{apple}, Microsoft \\cite{microsoft}, and the US Census Bureau \\cite{census}. The definition of this new concept involves privacy parameters $\\epsilon \\ge 0$ and $0 \\le \\delta \\le 1$.\n\\begin{definition}[\\cite{DMNS06,approxdp}]\\label{def:dpintro}\nA randomized algorithm $M$ that takes as input a dataset consisting of individuals is $(\\ep, \\delta)$-differentially private (DP) if for any pair of datasets $S, S'$ that differ in the record of a single individual, and any event $E$,\n\\begin{equation}\\label{eq:dp_ine}\n\\P\\left[ M(S)\\in E \\right] \\leqslant \\mathrm{e}^\\ep \\P \\left[ M(S')\\in E \\right] + \\delta.\n\\end{equation}\nWhen $\\delta = 0$, the guarantee is simply called $\\epsilon$-DP.\n\\end{definition}\n\nIn this definition, datasets are \\textit{fixed} and the probabilities are taken \\textit{only} over the randomness of the mechanism\\footnote{A randomized algorithm $M$ is often referred to as a mechanism in the differential privacy literature.}. In particular, the event $E$ can take any measurable set in the range of $M$. To achieve differential privacy, a mechanism is necessarily randomized. Take as an example the problem of privately releasing the average cholesterol level of individuals in the dataset $S = (x_1, \\ldots, x_n)$, each $x_i$ corresponding to an individual. A privacy-preserving mechanism may take the form\\footnote{Here we identify the individual $x_i$ with his\/her cholesterol level.}\n\\[\nM(S) = \\frac1n (x_1 + \\cdots + x_n) + \\text{noise}.\n\\]\nThe level of the noise term has to be sufficiently large to mask the \\textit{characteristics} of any individual's cholesterol level,\nwhile not being too large to distort the population average for accuracy purposes. 
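As a toy instantiation of this recipe (our own sketch; the discussion above does not commit to a particular noise distribution), one may add Laplace noise scaled to the sensitivity of the average, which is the classical route to an $\epsilon$-DP release:
\begin{verbatim}
# Toy noisy-mean mechanism (our sketch; values and bounds are illustrative).
# Adding Laplace noise with scale sensitivity/eps yields (eps, 0)-DP.
import numpy as np

rng = np.random.default_rng(0)

def noisy_mean(values, upper_bound, eps):
    """Release the average of values clipped to [0, upper_bound] with eps-DP."""
    x = np.clip(np.asarray(values, dtype=float), 0.0, upper_bound)
    sensitivity = upper_bound / len(x)   # one record changes the mean by at most this
    return x.mean() + rng.laplace(scale=sensitivity / eps)

cholesterol = [180.0, 210.0, 195.0, 240.0, 170.0, 205.0]   # hypothetical data
print(noisy_mean(cholesterol, upper_bound=300.0, eps=1.0))
\end{verbatim}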
Consequently, the probability distributions of $M(S)$ and $M(S')$ are close to each other for any datasets $S, S'$ that differ in only one individual record.\n\n\n\n\nDifferential privacy is most naturally defined through a hypothesis testing problem from the perspective of an attacker who aims to distinguish $S$ from $S'$ based on the output of the mechanism. This statistical viewpoint was first observed by \\cite{wasserman_zhou} and then further developed by \\cite{KOV}, which is a direct inspiration for our work. In short, consider the hypothesis testing problem\n\\begin{equation}\\label{eq:hyp_into}\nH_0: \\text{the underlying dataset is } S \\quad\\text{ versus }\\quad H_1: \\text{the underlying dataset is } S'\n\\end{equation}\nand call Alice the only individual that is in $S$ but not $S'$. As such, rejecting the null hypothesis corresponds to the detection of absence of Alice, whereas accepting the null hypothesis means to detect the presence of Alice in the dataset. Using the output of an $(\\ep, \\delta)$-DP mechanism, the power\\footnote{The power is equal to 1 minus the type II error.} of any test at significance level $0 < \\alpha < 1$ has an upper bound\\footnote{A more precise bound is given in \\Cref{thm:privacy_testing}.} of $\\mathrm{e}^\\ep\\alpha+\\delta$. This bound is only slightly larger than $\\alpha$ provided that $\\epsilon,\\delta$ are small and, therefore, \\textit{any} test is essentially powerless. Put differently, differential privacy with small privacy parameters protects against any inferences of the presence of Alice, or any other individual, in the dataset.\n\n\nDespite its apparent success, there are good reasons to want to relax the original definition of differential privacy, which has led to a long line of proposals for such relaxations. The most important shortcoming is that $(\\ep,\\delta)$-DP does not tightly handle composition. Composition concerns how privacy guarantees degrade under repetition of mechanisms applied to the same dataset, rendering the design of differentially private algorithms \\textit{modular}. Without compositional properties, it would be near impossible to develop complex differentially private data analysis methods. Although it has been known since the original papers defining differential privacy \\cite{DMNS06,approxdp} that the composition of an $(\\epsilon_1,\\delta_1)$-DP mechanism and an $(\\epsilon_2,\\delta_2)$-DP mechanism yields an $(\\epsilon_1+\\epsilon_2,\\delta_1+\\delta_2)$-DP mechanism, the corresponding upper bound $\\mathrm{e}^{\\epsilon_1 + \\epsilon_2}\\alpha + \\delta_1 + \\delta_2$ on the power of any test at\nsignificance level $\\alpha$ no longer tightly characterizes the trade-off between significance level and power for the testing between $S$ and $S'$. In \\cite{boosting}, Dwork, Rothblum, and Vadhan gave an improved composition theorem, but it fails to capture the correct hypothesis testing trade-off. This is for a fundamental reason: $(\\epsilon,\\delta)$-DP is mis-parameterized in the sense that the guarantees of the composition of $(\\epsilon_i,\\delta_i)$-DP mechanisms cannot be characterized by any pair of parameters $(\\epsilon,\\delta)$. Worse, given any $\\delta$, finding the parameter $\\ep$ that most tightly approximates the correct trade-off between significance level and type II error for a composition of a\nsequence of differentially private algorithms is computationally hard \\cite{complexity}, and so in practice, one must resort to approximations. 
Given that composition and modularity are first-order desiderata for a useful privacy definition, these are substantial drawbacks and often continue to push practical algorithms with meaningful privacy guarantees out of reach.\n\n\nIn light of this, substantial recent effort has been devoted to developing relaxations of differential privacy for which composition can be handled exactly. This line of work includes several variants of ``concentrated differential privacy'' \\cite{concentrated,concentrated2}, ``R\\'enyi differential privacy'' \\cite{renyi}, and ``truncated concentrated differential privacy'' \\cite{tcdp}. These definitions are tailored to be able to exactly and easily track the ``privacy cost'' of compositions of the most basic primitive in differential privacy, which is the perturbation of a real valued statistic with Gaussian noise. \n\nWhile this direction of privacy relaxation has been quite fruitful, there are still several places one might wish for improvement. First, these notions of differential privacy no longer have hypothesis testing interpretations, but are rather based on studying divergences that satisfy a certain information processing inequality. There are good reasons to prefer definitions based on hypothesis testing. Most immediately, hypothesis testing based definitions provide an easy way to interpret the guarantees of a privacy definition. More fundamentally, a theorem due to Blackwell (see \\Cref{thm:blackwell}) provides a formal sense in which a tight understanding of the trade-off between type I and type II errors for the hypothesis testing problem of distinguishing between $M(S)$ and $M(S')$ contains only more information than any divergence between the distributions $M(S)$ and $M(S')$ (so long as the divergence satisfies the information processing inequality).\n\nSecond, certain simple and fundamental primitives associated with differential privacy---most notably, \\emph{privacy amplification by subsampling} \\cite{KLNRS}---either fail to apply to the existing relaxations of differential privacy, or require a substantially complex analysis \\cite{wang2018subsampled}. This is especially problematic when analyzing privacy guarantees of stochastic gradient descent---arguably the most popular present-day optimization algorithm---as subsampling is inherent to this algorithm. At best, this difficulty arising from using these relaxations could be overcome by using complex technical machinery. For example, it necessitated Abadi et al.~\\cite{deep} to develop the numerical \\emph{moments accountant} method to sidestep the issue.\n\n\\subsection{Our Contributions}\n\\label{sec:our-contributions-1}\nIn this work, we introduce a new relaxation of differential privacy that avoids these issues and has other attractive properties. Rather than giving a ``divergence'' based relaxation of differential privacy, we start fresh from the hypothesis testing interpretation of differential privacy, and obtain a new privacy definition by allowing the \\textit{full} trade-off between type I and type II errors in the simple hypothesis testing problem \\eqref{eq:hyp_into} to be governed by some function $f$. The functional privacy parameter $f$ is to this new definition as $(\\ep, \\delta)$ is to the original definition of differential privacy. Notably, this definition that we term $f$-differential privacy ($f$-DP)---which captures $(\\ep, \\delta)$-DP as a special case---is accompanied by a powerful and elegant toolkit\nfor reasoning about composition. 
Here, we highlight some of our contributions:


\smallskip
\noindent {\bf An Algebra for Composition.} We show that our privacy definition is \emph{closed} and \textit{tight} under composition, which means that the trade-off between type I and type II errors that results from the composition of an $f_1$-DP mechanism with an $f_2$-DP mechanism can always be \emph{exactly} described by a certain function $f$. This function can be expressed via $f_1$ and $f_2$ in an algebraic fashion, thereby allowing for lossless reasoning about composition. In contrast, $(\epsilon,\delta)$-DP or any other privacy definition artificially restricts itself to a small number of parameters. By allowing for a \emph{function} to keep track of the privacy guarantee of the mechanism, our new privacy definition avoids the pitfall of premature summarization\footnote{To quote Susan Holmes \cite{evil}, ``premature summarization is the root of all evil in statistics.''} in intermediate steps and, consequently, yields a comprehensive delineation of the overall privacy guarantee. See more details in \Cref{sec:composition-theorems}.

\smallskip

\noindent{\bf A Central Limit Phenomenon.} We define a single-parameter family of $f$-DP guarantees that uses the type I and type II error trade-off in distinguishing the standard normal distribution $\N(0,1)$ from $\N(\mu,1)$ for $\mu \ge 0$. This is referred to as Gaussian differential privacy (GDP). By relating to the hypothesis testing interpretation of differential privacy \eqref{eq:hyp_into}, the GDP guarantee can be interpreted as saying that determining whether or not Alice is in the dataset is at least as difficult as telling apart $\N(0,1)$ and $\N(\mu,1)$ based on one draw. Moreover, we show that GDP is a ``canonical'' privacy guarantee in a fundamental sense: for any privacy definition that retains a hypothesis testing interpretation, we prove that the privacy guarantee of composition with an appropriate scaling converges to GDP in the limit. This central limit theorem type of result is remarkable not only because of its profound theoretical implication, but also for providing a computationally tractable tool for analytically approximating the privacy loss under composition. Figure~\ref{fig:contribution} demonstrates that this tool yields surprisingly accurate approximations to the exact trade-off in testing the hypotheses \eqref{eq:hyp_into} or substantially improves on the existing privacy guarantee in terms of type I and type II errors. See \Cref{sub:gaussian_differential_privacy} and \Cref{sec:composition-theorems} for a thorough discussion.




\begin{figure}[!htp]
\centering
 \includegraphics[width=0.85\linewidth]{./figures/contribution.pdf}
 \captionof{figure}{Left: Our central limit theorem based approximation (in blue) is very close to the composition of just $10$ mechanisms (in red). The tightest possible approximation via an $(\ep,\delta)$-DP guarantee (in black) is substantially looser. See \Cref{fig:comp} for parameter setup. Right: Privacy analysis of stochastic gradient descent used to train a convolutional neural network on MNIST \cite{lecun-mnisthandwrittendigit-2010}. The $f$-DP framework yields a privacy guarantee (in red) for this problem that is significantly better than the optimal $(\epsilon,\delta)$-DP guarantee (in black) that is derived from the moments accountant (MA) method \cite{deep}.
Put simply, our analysis shows that stochastic gradient descent releases less sensitive information than expected in the literature. See \Cref{sec:application_in_sgd} for more plots and details.}
 \label{fig:contribution}
\end{figure}



\smallskip
\noindent {\bf A Primal-Dual Perspective.} We show a general duality between $f$-DP and infinite collections of $(\epsilon,\delta)$-DP guarantees. This duality is useful in two ways. First, it allows one to analyze an algorithm in the framework of $f$-DP, and then convert back to an $(\epsilon,\delta)$-DP guarantee at the end, if desired. More fundamentally, this duality provides an approach to import techniques developed for $(\epsilon,\delta)$-DP to the framework of $f$-DP. As an important application, we use this duality to show how to reason simply about privacy amplification by subsampling for $f$-DP, by leveraging existing results for $(\epsilon,\delta)$-DP. This is in contrast to divergence based notions of privacy, in which reasoning about amplification by subsampling is difficult.

\smallskip

Taken together, this collection of attractive properties renders $f$-DP a mathematically coherent, computationally efficient, and versatile framework for privacy-preserving data analysis. To demonstrate the practical use of this hypothesis testing based framework, we give a substantially sharper analysis of the privacy guarantees of noisy stochastic gradient descent, improving on previous special-purpose analyses that reasoned about divergences rather than directly about hypothesis testing \cite{deep}. This application is presented in \Cref{sec:application_in_sgd}.


\iffalse
Because zCDP does not support amplification by subsampling, the tight analysis of algorithms like stochastic gradient descent requires complex and computationally intensive workarounds, like the ``moments accountant method'' of \\cite{accountant}. In part to correct this issue, Bun et al. \\cite{BDRS18} introduced ``truncated concentrated differential privacy'' (tCDP), which informally requires that the privacy random variable be sub-gaussian up to a set boundary specified by a certain number of standard deviations, but can be less concentrated than a Gaussian beyond this boundary (The formal definition is given in terms of Renyi divergences, as with zCDP \\cite{BS16}). Mironov gave a related definition he termed Renyi Differential Privacy \\cite{RDP}. Truncated concentrated differential privacy and Renyi differential privacy can both be used to analyze privacy amplification under subsampling \\cite{RDP-subsample,BRDS18}, but the analysis is complex. This series of definitional modifications has also led to a definition with reasonable properties, but one that is complicated to understand: for example, the canonical perturbation distribution for tCDP is no longer Guassian as it is for CDP --- it is ``sinh-Normal''.\n\n\n\n\\subsection{Our Contributions}\n\\label{sec:our-contributions}\n\nWe give a simple relaxation of differential privacy that begins from Definition \\ref{def:hyptest} --- the hypothesis testing definition of differential privacy, rather than the privacy loss definition. Informally, our definition, ``Gaussian Differential Privacy'' requires that the testing region $K_{M(x),M(x')}$ that results from running a mechanism $M$ on two neighboring datasets lie within the testing region corresponding to the problem of distinguishing two unit variance Gaussians with shifted means (We give all formal definitions in Section~\\ref{sec:main-results}.). We show that this simple relaxation has many desirable properties:\n\\begin{enumerate}\n\\item Like CDP\/zCDP, it has a simple linear adaptive composition rule, and can be used to describe the privacy of the Gaussian mechanism without nay loss of parameters It also offers a ``group privacy'' guarantee. Like zCDP it is closed under post-processing.\n\\item Like tCDP and Renyi differential privacy (but unlike CDP\/zCDP), it is compatible with privacy amplification by subsampling.\n\\item Like approximate differential privacy (but unlike the CDP family of definitions, as far as we are aware), there is a simple, direct \\emph{transfer theorem} \\cite{DFHPRR15a,DFHPRR15b,BNSSSU16} which states that algorithms which privately and accurately estimate expectations \\emph{in sample} also provide accurate estimates \\emph{out of sample} (i.e. on the distribution from which the data were drawn). Together with the adaptive composition theorem, this provides a general, robust method of performing certain kinds of post-selection inference.\n\\item \\ar{More?}\n\\end{enumerate}\n\n\n\\wjs{define GDP in an informal way as a preview for the reader.}\n\\djs{We should define GDP and $f$-DP informally in the intro. An attempt is}\n\n... 
the idea of ``differential'' privacy lies in preventing the adversary's ability to do the following:\n\\begin{quote}\n\tGiven the random output of $M$, tell whether the input is $x$ or $x'$\n\\end{quote}\n\\begin{definition}[Informal] \\label{def:GDP_informal}\n\tA mechanism\/randomized algorithm $M$ is $\\mu$-GDP if the above question is harder than telling apart $N(0,1)$ and $N(\\mu,1)$ from a single sample.\n\\end{definition}\nIn principle, the pair of distributions $N(0,1)$ and $N(\\mu,1)$ could be replaced by any pair of distributions. This leads to the general definition of $f$-DP ...\n\n\n\n\nWe derive most of our results in a more general framework, for a family of privacy definitions that can be parameterized by an \\emph{arbitrary} hypothesis testing region --- and Gaussian differential privacy falls out as a special case. Moreover, we show that in a strong sense, Gaussian differential privacy is the inevitable definition that we must end up with if we start with a hypothesis-testing-region definition of privacy (like differential privacy), and consider its behavior under composition. We prove a ``central limit theorem'' that shows that under general conditions, hypothesis testing definitions of privacy converge in the limit under sequential composition to gaussian differential privacy.\n\\fi\n\n\n\\subsection*{Acknowledgements}\n\\label{sec:acknowledgements}\n\nThis work was supported in part by NSF grants CCF-1763314 and CNS-1253345, and by the Sloan Foundation, and by a sub-contract for the DARPA Brandeis program.\n\n\n{\\small\n\\bibliographystyle{alpha}\n\n\\section{Application: Privacy Analysis of Stochastic Gradient Descent}\n\\label{sec:application_in_sgd}\nOne of the most important algorithms in machine learning and optimization is stochastic gradient descent (SGD). This is an iterative optimization method used to train a wide variety of models, for example, deep neural networks. SGD has also served as an important benchmark in the development of private optimization: as an iterative algorithm, the tightness of its privacy analysis crucially depends on the tightness with which composition can be accounted for. The analysis also crucially requires a privacy amplification by subsampling argument. \n\nThe first asymptotically optimal analysis of differentially private SGD was given by \\cite{bassily2014private}. Because of the inherent limits of $(\\epsilon,\\delta)$-DP, however, this original analysis did not give meaningful privacy bounds for realistically sized datasets. This is in part what motivated the development of divergence based relaxations of differential privacy. Unfortunately, these relaxations cannot be directly applied to the analysis of SGD due to the lack of a privacy amplification by subsampling theorem. In response, Abadi et al.~\\cite{deep} circumvented this challenge by developing the moments accountant---a numeric technique tailored specifically to repeated application of subsampling, followed by a Gaussian mechanism---to give privacy bounds for SGD that are strong enough to give non-trivial guarantees when training deep neural networks on real datasets. But this analysis is ad-hoc in the sense that it uses a tool designed specifically for the analysis of SGD. \n\nIn this section, we use the general tools we have developed so far to give a simple and improved analysis of the privacy of SGD. 
In particular, the analysis rests crucially on the compositional and subsampling properties of $f$-DP.\n\n\n\\subsection{Stochastic Gradient Descent and Its Privacy Analysis}\n\\label{sub:noisy_sgd}\nThe private variant of the SGD algorithm is described in \\Cref{alg:dpsgd}. As we will see, from the perspective of its privacy analysis, it can simply be viewed as a repeated composition of Gaussian mechanisms operating on subsampled datasets.\n\\begin{algorithm}[htb]\n\t\\caption{\\texttt{NoisySGD}}\\label{alg:dpsgd}\n\t\\begin{algorithmic}[1]\n\t\\State{\\bf Input:} Dataset $S = (x_1,\\ldots,x_n)$, loss function $L(\\theta, x)$.\n\t\\Statex \\hspace{1.35cm}Parameters: initial state $\\theta_0$, learning rate $\\eta_t$, batch size $m$, time horizon $T$,\n\t\\Statex \\hspace{3.55cm}noise scale $\\sigma$, gradient norm bound $C$.\n\t\n\t\t\\For{$t = 1, \\ldots, T$}\n\t\t\\State {\\bf Subsampling:}\n\t\t\\Statex {\\hspace{0.55cm}Take a uniformly random subsample $I_t\\subseteq \\{1, \\ldots, n\\}$ with batch size $m$}\n\t\t\\Comment{$\\mathtt{Sample}_m$ in \\Cref{sec:subsampling}}\n\t\t\\For{$i\\in I_t$}\n\t\t\t\\State {\\bf Compute gradient:}\n\t\t\t\\Statex \\hspace{1.1cm} {$v_t^{(i)} \\gets \\nabla_{\\theta} L(\\theta_t, x_i)$}\t\t\n\t\t\t\\State {\\bf Clip gradient:}\n\t\t\t\\Statex \\hspace{1.1cm} {$\\bar{v}_t^{(i)} \\gets v_t^{(i)} \/ \\max\\big\\{1, \\|v_t^{(i)}\\|_2\/C\\big\\}$}\n\t\t\n\t\t\n\t\t\\EndFor\n\t\t\\State {\\bf Average, perturb, and descend:}\n\t\t\\Statex \\hspace{0.55cm} {$\\theta_{t+1} \\gets \\theta_{t} - \\eta_t \\Big(\\frac{1}{m} \\sum_i\\bar{v}_t^{(i)}+\\N(0, \\frac{4\\sigma^2 C^2}{m^2} I)\\Big)$}\n \\Comment{$I$ is an identity matrix}\n\t\t\\EndFor\n\t\t\\State {\\bf Output} $\\theta_T$\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\nTo analyze the privacy of \\texttt{NoisySGD}, we start by building up the privacy properties from the inner loop. Let $V$ be the vector space where parameter $\\theta$ lives in and $M:X^m\\times V\\to V$ be the mechanism that executes lines 4-7 in \\Cref{alg:dpsgd}. Here $m$ denotes the batch size. In effect, what $M$ does in iteration $t$ can be expressed as\n$$M(S_{I_t},\\theta_t) = \\theta_{t+1},$$\nwhere $S_{I_t}$ is the subset of the dataset $S$ indexed by $I_t$. Next, we turn to the analysis of the subsampling step (line 3) and use $\\widetilde{M}$ to denote its composition with $M$, that is, $\\widetilde{M} = M\\circ \\mathtt{Sample}_m $. Taken together, $\\widetilde{M}$ executes lines 3-7 and maps from $X^n\\times V$ to $V$.\n\nThe mechanism we are ultimately interested in\n\\begin{align*}\n\t\\mathrm{\\texttt{NoisySGD}}:X^n&\\to V\\times V\\times\\cdots\\times V\\\\\n\tS&\\mapsto(\\theta_1,\\theta_2,\\ldots, \\theta_T)\n\\end{align*}\nis simply the composition of $T$ copies of $\\widetilde{M}$. To see this fact, note that the trajectory $(\\theta_1,\\theta_2,\\ldots, \\theta_T)$ is obtained by iteratively running\n\\[\n\\theta_{j+1} = \\widetilde{M}(S,\\theta_j)\n\\]\nfor $j = 0, \\ldots, T-1$. Let $M$ be $f$-DP. Straightforwardly, $\\widetilde{M}$ is $C_{m\/n}(f)$-DP by \\Cref{thm:subsample}. Then, from the composition theorem (\\Cref{thm:n_steps}), we can immediately prove that \\texttt{NoisySGD} is $C_{m\/n}(f)^{\\otimes T}$-DP.\n\nHence, it suffices to give a bound on the privacy of $M$. For simplicity, we now focus on a single step and drop the subscript $t$. 
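\n\nFor concreteness, the single-step mechanism $M$, which maps $(S_{I_t},\\theta_t)$ to $\\theta_{t+1}$, can be sketched as follows. This is only an illustrative Python sketch written for this presentation: the function and variable names are ours, and it is not the authors' implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef noisy_sgd_step(theta, batch, grad_fn, eta, C, sigma):\n    # One application of M on an already-subsampled batch of m examples.\n    m = len(batch)\n    clipped = []\n    for x in batch:\n        v = grad_fn(theta, x)                                 # per-example gradient\n        clipped.append(v \/ max(1.0, np.linalg.norm(v) \/ C))  # clip to norm at most C\n    avg = np.mean(clipped, axis=0)   # changing one example moves this by at most 2C\/m\n    noise = np.random.normal(0.0, 2.0 * sigma * C \/ m, size=avg.shape)\n    return theta - eta * (avg + noise)   # deterministic post-processing (descent)\n\\end{verbatim}\nThe added noise has standard deviation $2\\sigma C\/m$ per coordinate, matching the $N(0, \\frac{4\\sigma^2 C^2}{m^2} I)$ perturbation in \\Cref{alg:dpsgd}.\n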
Recognizing that changing one of the $m$ data points only affects one $v^{(i)}$, the sensitivity of $\\frac{1}{m} \\sum_i\\bar{v}_t^{(i)}$ is at most $\\frac{2C}{m}$ due to the clipping operation. Making use of \\Cref{thm:g_mech}, adding Gaussian noise $N(0, \\sigma^2 \\cdot\\frac{4C^2}{m^2} I)$ to the average gradient renders this step $\\frac1{\\sigma}$-GDP. Since the gradient update following the gradient averaging step is deterministic, we conclude that $M$ satisfies $\\frac1{\\sigma}$-GDP.\n\n\nIn summary, the discussion above has proved the following theorem:\n\\begin{theorem} \\label{thm:sgdcompo}\n\\Cref{alg:dpsgd} is $C_{m\/n}(G_{\\sigma^{-1}})^{\\otimes T}$-DP.\n\\end{theorem}\nTo clear up any confusion, we remark that this $C_{m\/n}(G_{\\sigma^{-1}})^{\\otimes T}$-DP mechanism does not release the subsampled indices.\n\nThe use of \\Cref{thm:sgdcompo} requires the evaluation of $C_{m\/n}(G_{\\sigma^{-1}})^{\\otimes T}$. However, numerical computation of this tensor product is cumbersome. As a matter of fact, the moments accountant technique applied to the present problem is essentially equivalent to directly computing $C_{m\/n}(G_{\\sigma^{-1}})^{\\otimes T}$. In contrast, our central limit theorems provide an entirely different tool by analytically approximating $C_{m\/n}(G_{\\sigma^{-1}})^{\\otimes T}$ in a way that becomes nearly exact as $T$ grows. The next two subsections present two such results, corresponding to our two central limit theorems (\\Cref{thm:Berry} and \\Cref{thm:CLT}), respectively. An asymptotic privacy analysis of \\texttt{NoisySGD} is given in \\Cref{sub:asymptotic_analysis_of_noisy_sgd} by developing a general limit theorem for composition of subsampled mechanisms. A Berry--Esseen type analysis is shown in \\Cref{sub:berry_esseen_for_privacy_of_sgd}.\n\n\n\\begin{figure}[!tp]\n\\centering\n \\includegraphics[width=\\linewidth]\n{.\/figures\/compareGoogle.pdf}\n \\captionof{figure}{Comparison of the GDP bounds derived from our method, and the $(\\ep,\\delta)$-DP bounds derived using the moments accountant \\cite{deep}. All three experiments run \\Cref{alg:dpsgd} on the entire MNIST dataset with $n=60,000$ data points, batch size $m=256$, learning rates $\\eta_t$ set to 0.25, 0.15, and 0.25, respectively, and clipping thresholds $C$ set to 1.5, 1.0, 1.5, respectively. The red lines are obtained via \\Cref{thm:SGDlimit}, while the blue dashed lines are produced by the tensorflow\/privacy library. See \\url{https:\/\/github.com\/tensorflow\/privacy} for the details of the experiments.}\n \\label{fig:compareSGD}\n\\end{figure}\n\n\n\n\\subsection{Asymptotic Privacy Analysis}\n\\label{sub:asymptotic_analysis_of_noisy_sgd}\nIn this subsection, we first consider the limit of $C_p(f)^{\\otimes T}$ for a general trade-off function $f$, then plug in $f = G_{\\sigma^{-1}}$ for the analysis of \\texttt{NoisySGD}. The more general approach is useful for analyzing other iterative algorithms. \n\nRecall from \\Cref{sec:subsampling} that a $p$-subsampled $f$-DP mechanism is $C_p(f)$-DP, where $C_p(f)$ is defined as\n\\[\n\tC_p(f)(x)=\n\t\t\\left\\{\n\t\t\\begin{array}{ll}\n\t\tf_p(x),&x\\in[0,x^*] \\\\\n\t\tx^*+f_p(x^*)-x, & x\\in[x^*,f_p(x^*)]\\\\\n\t\tf_p^{-1}(x), &x\\in[f_p(x^*),1],\n\t\t\\end{array}\n\t\t\\right.\n\\]\nwhere $x^*$ is the unique fixed point of $f$. 
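\nFor illustration, this piecewise expression is straightforward to evaluate numerically. The sketch below (Python with NumPy\/SciPy; it is our own illustrative code rather than code accompanying the paper) computes $C_p(f)$ pointwise for the Gaussian trade-off function, using the expression $G_\\mu(\\alpha)=\\Phi\\big(\\Phi^{-1}(1-\\alpha)-\\mu\\big)$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\nfrom scipy.optimize import brentq\n\ndef G(mu):\n    # Gaussian trade-off function G_mu(a) = Phi(Phi^{-1}(1 - a) - mu)\n    return lambda a: norm.cdf(norm.ppf(1.0 - a) - mu)\n\ndef C_p(f, p):\n    # p-sampling operator applied to a symmetric trade-off function f,\n    # following the piecewise expression above.\n    f_p = lambda x: p * f(x) + (1.0 - p) * (1.0 - x)  # f_p = p f + (1 - p) Id\n    x_star = brentq(lambda x: f(x) - x, 0.0, 1.0)     # fixed point f(x*) = x*\n    y_star = f_p(x_star)\n    f_p_inv = lambda y: brentq(lambda x: f_p(x) - y, 0.0, 1.0)  # f_p is decreasing\n    def C_p_f(x):\n        if x <= x_star:\n            return f_p(x)\n        if x <= y_star:\n            return x_star + y_star - x                # linear, symmetrizing piece\n        return f_p_inv(x)\n    return C_p_f\n\ncp = C_p(G(1.8), 0.35)   # e.g. f = G_{1.8}, p = 0.35\nprint([round(cp(a), 4) for a in (0.1, 0.3, 0.5, 0.7, 0.9)])\n\\end{verbatim}\nEvaluating the $T$-fold tensor product $C_p(f)^{\\otimes T}$ exactly is considerably more demanding, and this is precisely the computation that the limit theorem below sidesteps.\n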
We will let the sampling fraction $p$ tend to 0 as $T$ approaches infinity.\nIn the following, $a_+^2$ is a short-hand for $(\\max\\{a,0\\})^2$\n\\begin{restatable}{theorem}{mixturerep} \\label{thm:mixtureSGD}\n\tSuppose $f$ is a symmetric trade-off function such that $f(0)=1$ and $\\int_0^1(f'(x)+1)^4\\diff x<+\\infty$. Furthermore, assume $p\\sqrt{T} \\to p_0$ as $T\\to\\infty$ for some constant $p_0 > 0$. Then we have the uniform convergence\n\t$$C_p(f)^{\\otimes T}\\to G_{p_0\\sqrt{2\\chi^2_+(f)}}$$\nas $T \\to \\infty$, where\n\t\\[\\chi^2_+(f)=\\int_0^1\\big(|f'(x)|-1\\big)_+^2\\diff x.\\]\n\\end{restatable}\nThis theorem has implications for the design of iterative private mechanisms involving subsampling as a subroutine. One way to bound the privacy of such a mechanism is to let the sampling ratio $p$ go to zero as the total number of iterations $T$ goes to infinity. The theorem says that the correct scaling between the two values is $p\\sim 1\/\\sqrt{T}$ and, furthermore, gives an explicit form of the limit.\n\n\n\n\n\nIn order to analyze \\texttt{NoisySGD}, we need to compute the quantity $\\chi^2_+(G_\\mu)$. This can be done by directly working with its definition. In \\Cref{app:SGD}, we provide a different approach by relating $\\chi^2_+(f)$ to $\\chi^2$-divergence.\n\\begin{restatable}{lemma}{chirep} \\label{lem:chi2GDP}\nWe have\t\\[\\chi^2_+(G_\\mu) = \\mathrm{e}^{\\mu^2}\\cdot\\Phi(3\\mu\/2)+3\\Phi(-\\mu\/2)-2.\\]\n\\end{restatable}\nWhen using SGD to train large models, we typically perform a very large number of iterations, so it is reasonable to consider the parameter regime in which $n\\to\\infty,T\\to\\infty$. The batch size can also vary with these quantities. The following theorem is a direct consequence of \\Cref{thm:sgdcompo,thm:mixtureSGD} and \\Cref{lem:chi2GDP}.\n\\begin{restatable}{corollary}{SGDlimitrep} \\label{thm:SGDlimit}\n\tIf $m\\sqrt{T}\/n \\to c$, then \\textup{\\texttt{NoisySGD}} is asymptotically $\\mu$-GDP with\n\t\\[\\mu = \\sqrt{2}c\\cdot\\sqrt{\\mathrm{e}^{\\sigma^{-2}}\\cdot\\Phi(1.5\\sigma^{-1})+3\\Phi(-0.5\\sigma^{-1})-2}.\\]\n\\end{restatable}\nFirst, we remark that the condition in the theorem is consistent with the analysis of private SGD in \\cite{bassily2014private}, which considers $m = 1$ and $T = O(n^2)$. We also note that in the deep learning literature, the quantity $\\frac{m}{n}\\cdot\\sqrt{T}$ is generally quite small. The convention in this literature is to reparameterize the number of gradient steps $T$ by the number of ``epochs'' $E$, which is the number of sweeps of the entire dataset. The relationship between these parameters is that $E = Tm\/n$. In this reparameterization, our assumption is that $Em\/n\\to c^2$. Concretely, the AlexNet \\cite{krizhevsky2012imagenet} sets the parameters as $m=128, E\\approx90$ on the ILSVRC-2010 dataset with $n\\approx 1.2\\times 10^6$, leading to $Em\/n < 0.01$. Many other prominent implementations\\footnote{See the webpage of Gluon CV Toolkit \\cite{he2018bag,zhang2019bag} for a collection of such hyperparameters on computer vision\n tasks.} also lead to a small value of $Em\/n$. \n\n\n\n\n\n\n\n\n\n\\subsection{A Berry--Esseen Privacy Bound}\n\\label{sub:berry_esseen_for_privacy_of_sgd}\nNow, we apply the Berry--Esseen style central limit theorem (\\Cref{thm:Berry}) for the privacy analysis of \\texttt{NoisySGD}, with the advantage of giving sharp privacy guarantees. 
However, the disadvantage is that the expressions it yields are more unwieldy: they are computer evaluable, so usable in implementations, but do not admit simple closed forms.\n\nThe individual components in \\Cref{thm:Berry} have the form $C_p(G_\\mu)$ with $p = m\/n, \\mu = \\sigma^{-1}$. It suffices to evaluate the moment functionals on $C_p(G_\\mu)$. This is done in the following lemma.\n\\begin{restatable}{lemma}{functionalGmurep}\\label{lem:functionalGmu}\n\tLet $Z(x) = \\log(p\\cdot \\mathrm{e}^{\\mu x-\\mu^2\/2}+1-p)$ and $\\varphi(x) = \\frac{1}{\\sqrt{2\\pi}}\\mathrm{e}^{-x^2\/2}$ be the density of the standard normal distribution. Then\n\t\\begin{align*}\n\t\t\\mathrm{kl}\\big(C_p(G_\\mu)\\big) &= p\\int_{\\mu\/2}^{+\\infty} Z(x)\\cdot\\big(\\varphi(x-\\mu)-\\varphi(x)\\big)\\diff x\\\\\n\t\n\t\t\\kappa_2\\big(C_p(G_\\mu)\\big)&= \\int_{\\mu\/2}^{+\\infty} Z^2(x)\\cdot\\big(p\\varphi(x-\\mu)+(2-p)\\varphi(x)\\big)\\diff x\\\\\n\t\t\\bar{\\kappa}_3\\big(C_p(G_\\mu)\\big)&=\\int_{\\mu\/2}^{+\\infty} \\big|Z(x)-\\mathrm{kl}\\big(C_p(G_\\mu)\\big)\\big|^3\\cdot(p\\varphi(x-\\mu)+(1-p)\\varphi(x))\\diff x\\\\\n\t\t&+\\int_{\\mu\/2}^{+\\infty} \\big|Z(x)+\\mathrm{kl}\\big(C_p(G_\\mu)\\big)\\big|^3\\cdot\\varphi(x)\\diff x.\n\t\\end{align*}\n\\end{restatable}\nWe can plug these expressions into \\Cref{thm:Berry} and get\n\\begin{restatable}{corollary}{SGDBerryrep} \\label{thm:SGDBerry}\nLet $p = m\/n, \\mu = \\sigma^{-1}$ and\n\t\\begin{align*}\n\t\t\\tilde{\\mu}=\\frac{2\\sqrt{T}\\cdot\\mathrm{kl}\\big(C_p(G_\\mu)\\big)}{\\sqrt{\\kappa_2\\big(C_p(G_\\mu)\\big) - \\mathrm{kl}^2\\big(C_p(G_\\mu)\\big)}}, \\quad\t\t\\gamma=\\frac{0.56}{\\sqrt{T}}\\cdot \\frac{\\bar{\\kappa}_3\\big(C_p(G_\\mu)\\big)}{\\big(\\kappa_2\\big(C_p(G_\\mu)\\big) - \\mathrm{kl}^2\\big(C_p(G_\\mu)\\big)\\big)^{\\frac{3}{2}}}.\n\t\\end{align*}\n\\textup{\\texttt{NoisySGD}} is $f$-DP with\n\t\\begin{equation*}\n\t\tf(\\alpha) = \\max\\{G_{\\tilde{\\mu}}(\\alpha+\\gamma)-\\gamma,0\\}.\n\t\\end{equation*}\n\\end{restatable}\nWe remark that $G_{\\tilde{\\mu}}$ can be set to 0 in $(1,+\\infty)$ so that $f$ is well-defined when $\\alpha>1-\\gamma$.\n\n\n\n\\section{Amplifying Privacy by Subsampling}\n\\label{sec:subsampling}\n\n\\newcommand{\\mathtt{Sample}}{\\mathtt{Sample}}\n\\newcommand{\\widetilde{M}}{\\widetilde{M}}\n\\newcommand{S\\cup\\{n\\}}{S\\cup\\{n\\}}\n\\newcommand{S}{S}\n\nSubsampling is often used prior to a private mechanism $M$ as a way to \\textit{amplify} privacy guarantees. Specifically, we can construct a smaller dataset $\\tilde S$ by flipping a fair coin for each individual in the original dataset $S$ to decide whether the individual is included in $\\tilde{S}$. This subsampling scheme roughly shrinks the dataset by half and, therefore, we would expect that the induced mechanism applied to $\\tilde S$ is about twice as private as the original mechanism $M$. Intuitively speaking, this privacy amplification is due to the fact that every individual enjoys perfect privacy if the individual is not included in the resulting dataset $\\tilde S$, which happens with probability 50\\%.\n\nThe claim above was first formalized in \\cite{KLNRS} for $(\\ep,\\delta)$-DP. Such a privacy amplification property is, unfortunately, no longer true for the most natural previous relaxations of differential privacy aimed at recovering precise compositions (like concentrated differential privacy (CDP) \\cite{concentrated,concentrated2}). 
Further modifications such as truncated CDP \\cite{tcdp} have been introduced primarily to remedy this deficiency of CDP---but at the cost of extra complexity in the definition. Other relaxations like {R\\'enyi} differential privacy \\cite{renyi} can be shown to satisfy a form of privacy amplification by subsampling, but both the analysis and the statement are complex \\cite{wang2018subsampled}.\n\n\nIn this section, we show that these obstacles can be overcome by our hypothesis testing based relaxation of differential privacy. Explicitly, our main result is a simple, general, and easy-to-interpret subsampling theorem for $f$-DP. Somewhat surprisingly, our theorem significantly improves on the classical subsampling theorem for privacy amplification in the $(\\ep,\\delta)$-DP framework \\cite{jon}. Note that this classical theorem continues to use $(\\epsilon,\\delta)$-DP to characterize the subsampled mechanism. However, $(\\epsilon,\\delta)$-DP is simply not expressive enough to capture the amplification of privacy.\n\n\n\n\n\\subsection{A Subsampling Theorem}\n\\label{sec:subsample_theorems}\n\n\n\n\nGiven an integer $1 \\le m \\le n$ and a dataset $S$ of $n$ individuals, let $\\mathtt{Sample}_m(S)$ be a subset of $S$ that is chosen uniformly at random among all the $m$-sized subsets of $S$. For a mechanism $M$ defined on $X^m$, we call $M\\big(\\mathtt{Sample}_m(S)\\big)$ the subsampled mechanism, which takes as input an $n$-sized dataset. Formally, we use $M \\circ \\mathtt{Sample}_m$ to denote this subsampled mechanism. To clear up any confusion, note that intermediate result $\\mathtt{Sample}_m(S)$ is not released and, in particular, this is different from the composition in \\Cref{sec:composition-theorems}.\n\n\n\nIn brief, our main theorem shows that the privacy bound of the subsampled mechanism in the $f$-DP framework is given by an operator acting on trade-off functions. To introduce this operator, write the convex combination $f_p:= pf + (1-p)\\Id$ for $0 \\le p \\le 1$, where $\\Id(x) = 1-x$. Note that the trade-off function $f_p$ is asymmetric in general.\n\\begin{definition} \\label{def:Cp}\nFor any $0 \\le p \\le 1$, define the operator $C_p$ acting on trade-off functions as\n\\[C_p(f) := \\min\\{f_p, f_p^{-1}\\}^{**}.\\]\nWe call $C_p$ the $p$-sampling operator.\n\\end{definition}\nAbove, the inverse $f_p^{-1}$ is defined in \\eqref{eq:inver_f}. The biconjugate $\\min\\{f_p, f_p^{-1}\\}^{**}$ is derived by applying the conjugate as defined in \\eqref{eq:conjugate} twice to $\\min\\{f_p, f_p^{-1}\\}$. For the moment, take for granted the fact that $C_p(f)$ is a symmetric trade-off function.\n\n\nNow, we present the main theorem of this section.\n\n\\begin{theorem}\\label{thm:subsample}\nIf $M$ is $f$-DP on $X^m$, then the subsampled mechanism $M\\circ\\mathtt{Sample}_m$ is $C_p(f)$-DP on $X^n$, where the sampling ratio $p=\\frac{m}{n}$.\n\\end{theorem}\n\n\nAppreciating this theorem calls for a better understanding of the operator $C_p$. In effect, $C_p$ performs a two-step transformation: symmetrization (taking the minimum of $f_p$ and its inverse $f_p^{-1}$) and convexification (taking the largest convex lower envelope of $\\min\\{f_p, f_p^{-1}\\}$). The convexification step is seen from convex analysis that the biconjugate $h^{**}$ of any function $h$ is the greatest convex lower bound of $h$. As such, $C_p(f)$ is convex and, with a bit more analysis, \\Cref{prop:trade-off} ensures that $C_p(f)$ is indeed a trade-off function. 
As an aside, $C_p(f) \\le \\min\\{f_p, f_p^{-1}\\} \\le f_p$. See \\Cref{fig:subsample} for a graphical illustration.\n\n\n\n\nNext, the following facts concerning the $p$-sampling operator qualitatively illustrate this privacy amplification phenomenon.\n\\begin{enumerate}\n\\item If $0\\leqslant p\\leqslant q\\leqslant 1$ and $f$ is symmetric, we have $f=C_1(f)\\leqslant C_q(f)\\leqslant C_p(f)\\leqslant C_0(f)= \\Id$. That is, as the sampling ratio declines from 1 to 0, the privacy guarantee interpolates monotonically between the original $f$ and the perfect privacy guarantee $\\Id$. This monotonicity follows from the fact that $g \\ge h$ is equivalent to $g^{-1} \\ge h^{-1}$ for any trade-off functions $g$ and $h$.\n\n\\item\nIf two trade-off functions $f$ and $g$ satisfy $f \\ge g$, then $C_p(f) \\ge C_p(g)$. This means that if a mechanism is more private than the other, using the same sampling ratio, the subsampled mechanism of the former remains more private than that of the latter, at least in terms of lower bounds.\n\n\\item For any $0 \\le p \\le 1$, $C_p(\\Id) = \\Id$. That is, perfect privacy remains perfect privacy with subsampling.\n\\end{enumerate}\n\nExplicitly, we provide a formula to calculate $C_p(f)$ for a symmetric trade-off function $f$. Letting $x^*$ be the unique fixed point of $f$, that is $f(x^*)= x^*$, we have\n\\begin{equation}\\label{eq:Cp_expression}\n\t\tC_p(f)(x)=\n\t\t\t\\left\\{\n\t\t\t\\begin{array}{ll}\n\t\t\tf_p(x),&x\\in[0,x^*] \\\\\n\t\t\tx^*+f_p(x^*)-x, & x\\in[x^*,f_p(x^*)]\\\\\n\t\t\tf_p^{-1}(x), &x\\in[f_p(x^*),1].\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\\end{equation}\nThis expression is almost self-evident from the left panel of \\Cref{fig:subsample}. Nevertheless, a proof of this formula is given in \\Cref{app:property}. This formula, together with \\Cref{thm:subsample}, allows us to get a closed-form characterization of the privacy amplification for $(\\epsilon, \\delta)$-DP.\n\\begin{corollary}\\label{cor:ep_d}\nIf $M$ is $(\\epsilon, \\delta)$-DP on $X^m$, then the subsampled mechanism $M\\circ\\mathtt{Sample}_m$ is $C_p(f_{\\epsilon, \\delta})$-DP on $X^n$, where\n\\begin{equation}\\label{eq:ep_d_sump_for}\nC_p(f_{\\epsilon, \\delta})(\\alpha) = \\max\\left\\{f_{\\epsilon', \\delta'}(\\alpha), 1 - p\\delta - p \\, \\frac{\\mathrm{e}^{\\ep} - 1}{\\mathrm{e}^{\\ep} + 1} - \\alpha \\right\\}.\n\\end{equation}\nAbove, $\\epsilon'= \\log(1-p + p \\mathrm{e}^\\ep), \\delta' = p\\delta$, and $p = \\frac{m}{n}$.\n\\end{corollary}\n\n\n\n\n\n\n\\begin{figure}[!htp]\n\\centering\n \\includegraphics[width=.7\\linewidth]\n{.\/figures\/subsample_main.pdf}\n \\captionof{figure}{The action of $C_p$. Left panel: $f=G_{1.8}, p=0.35$. Right panel: $\\ep = 3,\\delta=0.1,p=0.2$. The subsampling theorem \\ref{thm:subsample} results in a significantly tighter trade-off function compared to the classical theorem for $(\\ep,\\delta)$-DP.}\n \\label{fig:subsample}\n\\end{figure}\n\n\nFor comparison, we now present the existing bound on the privacy amplification by subsampling for $(\\ep,\\delta)$-DP. 
To be self-contained, \\Cref{app:property} gives a proof of this result, which primarily follows \\cite{jon} .\n\n\\begin{restatable}[\\cite{jon}]{lemma}{DPsubsamplerep}\\label{lem:DPsubsample}\nIf $M$ is $(\\ep,\\delta)$-DP, then $M\\circ\\mathtt{Sample}_m$ is $(\\ep',\\delta')$-DP with $\\ep'$ and $\\delta'$ defined in \\Cref{cor:ep_d}.\n\\end{restatable}\n\nUsing the language of the $f$-DP framework, \\Cref{lem:DPsubsample} states that $M\\circ\\mathtt{Sample}_m$ is $f_{\\epsilon',\\delta'}$-DP. \\Cref{cor:ep_d} improves on \\Cref{lem:DPsubsample} because, as is clear from \\eqref{eq:ep_d_sump_for},\n\\[\nC_p(f_{\\epsilon, \\delta}) \\ge f_{\\epsilon',\\delta'}.\n\\]\nThe right panel of \\Cref{fig:subsample} illustrates \\Cref{lem:DPsubsample} and our \\Cref{cor:ep_d} for $\\epsilon = 3, \\delta = 0.1$, and $p = 0.2$. In effect, the improvement is captured by the shaded triangle enclosed by $C_p(f_{\\epsilon, \\delta})$ and $f_{\\epsilon',\\delta'}$, revealing that the minimal sum of type I and type II errors in distinguishing two neighboring datasets with subsampling can be significantly lower than the prediction of \\Cref{lem:DPsubsample}. This gain is only made possible by the flexibility of trade-off functions in the sense that $C_p(f_{\\epsilon, \\delta})$ \\textit{cannot} be expressed within the $(\\epsilon, \\delta)$-DP framework. The unavoidable loss in the $(\\epsilon,\\delta)$-DP representation of the subsampled mechanism is compounded when analyzing the composition of many private mechanisms. \n\n\nIn the next subsection, we prove \\Cref{thm:subsample} by making use of \\Cref{lem:DPsubsample}. Its proof implies that \\Cref{thm:subsample} holds for any subsampling scheme for which \\Cref{lem:DPsubsample} is true. In particular, it holds for the subsampling scheme described at the beginning of this section, that is, independent coin flips for every data item. \n\n\n\n\n\n\\subsection{Proof of the Subsampling Theorem}\n\\label{sub:proof_of_subsample_theorems}\nThe proof strategy is as follows. First, we convert the $f$-DP guarantee into an infinite collection of $(\\ep,\\delta)$-DP guarantees by taking a dual perspective that is enabled by \\Cref{prop:ftoDP}. Next, by applying the classical subsampling theorem (that is, \\Cref{lem:DPsubsample}) to these $(\\ep,\\delta)$-DP guarantees, we conclude that the subsampled mechanism satisfies a new infinite collection of $(\\ep,\\delta)$-DP guarantees. Finally, \\Cref{prop:DPtof} allows us to convert these new privacy guarantees back into an $\\tilde{f}$-DP guarantee, where $\\tilde f$ can be shown to coincide with $C_p(f)$.\n\n\n\\begin{proof}[Proof of \\Cref{thm:subsample}]\nProvided that $M$ is $f$-DP, from \\Cref{prop:ftoDP} it follows that $M$ is $\\big(\\ep,\\delta(\\ep)\\big)$-DP with $\\delta(\\ep) = 1+f^*(- \\mathrm{e}^{\\ep})$ for all $\\ep\\geqslant 0$. Making use of \\Cref{lem:DPsubsample}, the subsampled mechanism $M \\circ \\mathtt{Sample}_m$ satisfies the following collection of $(\\ep',\\delta')$-DP guarantees for all $\\ep\\geqslant 0$:\n\\[\n\\ep' =\\log(1 - p + p\\mathrm{e}^\\ep),\\quad \\delta' =p\\big(1+f^*(-\\mathrm{e}^{\\ep})\\big).\n\\]\nEliminating the variable $\\epsilon$ from the two parametric equations above, we can relate $\\ep'$ to $\\delta'$ using\n\\begin{equation}\\label{eq:fp}\n\\delta' = 1+f_p^*(-\\mathrm{e}^{\\ep'}),\n\\end{equation}\nwhich is proved in \\Cref{app:property}. 
The remainder of the proof is devoted to showing that $(\\epsilon', \\delta')$-DP guarantees for all $\\ep' \\ge 0$ is equivalent to the $C_p(f)$-DP guarantee.\n\n\nAt first glance, \\eqref{eq:fp} seems to enable the use of \\Cref{prop:ftoDP}. Unfortunately, that would be invalid because $f_p$ is asymmetric. To this end, we need to extend \\Cref{prop:ftoDP} to general trade-off functions. To avoid conflicting notation, let $g$ be a generic trade-off function, not necessarily symmetric. Denote by $\\bar{x}$ be the smallest point such that $g'(x)=-1$, that is, $\\bar{x} = \\inf\\{x\\in[0,1]:g'(x)=-1\\}$.\\footnote{For simplicity, the proof assumes differentiable trade-off functions. If $g$ is not differentiable, use the definition $\\bar{x} = \\inf\\{x\\in[0,1]: -1 \\in \\partial g(x) \\}$ instead. This adjustment applies to other parts of the proof.} As a special instance of \\Cref{prop:asymm_env} in the appendix, the following result serves our purpose.\n\\begin{proposition}\\label{prop:asymm_for_proof}\nIf $g(\\bar{x})\\geqslant\\bar{x}$ and a mechanism $M$ is $(\\ep,1+g^*(-\\mathrm{e}^{\\ep}))$-DP for all $\\ep\\geqslant 0$, then $M$ is $\\min\\{g,g^{-1}\\}^{**}$-DP.\n\\end{proposition}\n\n\nThe proof of the present theorem would be complete if \\Cref{prop:asymm_for_proof} can be applied to the collection of privacy guarantees in \\eqref{eq:fp}for $f_p$. To use \\Cref{prop:asymm_for_proof}, it suffices to verify the condition $f_p(\\bar{x})\\geqslant \\bar{x}$ where $\\bar{x}$ is the smallest point such that $f_p'(x)=-1$. Let $x^*$ be the (unique) fixed point of $f$. To this end, we collect a few simple facts:\n\t\\begin{itemize}\n\t\n\t\t\\item First, $f'(x^*) = -1$. This is because the graph of $f$ is symmetric with respect to the $45^\\circ$ line passing through the origin.\n\t\t\\item Second, $\\bar{x}\\leqslant x^*$. This is because $f_p'(x^*)=pf'(x^*)+(1-p)\\Id'(x^*)=-1$ and, by definition, $\\bar{x}$ can only be smaller.\n\t\\end{itemize}\nWith these facts in place, we get\n\\[f_p(\\bar{x})\\geqslant f_p(x^*) \\geqslant f(x^*) = x^* \\geqslant \\bar{x}\\]\nby recognizing that $f_p$ is decreasing and $f_p \\ge f$. Hence, the proof is complete.\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:Introduction}\n Since the discovery in 1995 of a planet orbiting 51 Peg \\citep{1995Natur.378..355M}, over 1800 planets in around 1100 planetary systems have been found \\citep[][]{2011A&A...532A..79S}\\footnote{{http:\/\/exoplanet.eu\/}}: this number increases steadily. Furthermore, close to 470 multiple planetary systems have been detected, some of which are highly complex \\citep[e.g.,][]{2013Sci...340..587B}.\n \n The search for exoplanets has been following two different, but complementary, paths: the detection of exoplanets with increasingly\nlower masses, and the characterization of these exoplanets and their atmospheres. On the detection side, the radial velocity {\\citep[e.g.,][]{2011exop.book...27L,2013A&A...549A.109B}} and transit methods {\\citep[e.g., ][]{2010Sci...327..977B,2011exop.book...55W}} have been the most prolific. One of the most important resul?ts of planet detection surveys is the ubiquity of planets around solar-type stars {\\citep[e.g.,][]{2010Sci...330..653H}}.\n \n On the characterization side, the current frontier of exoplanet characterization has been pushed toward the study of exoplanet atmospheres, both from a composition and from a dynamics point of view. 
To overcome this difficult challenge, several indirect techniques have been developed. Transmission spectroscopy relies on observing the host star spectrum as it is filtered by a planet atmosphere during a transit \\citep[e.g.,][]{2002ApJ...568..377C,2014arXiv1403.4602K}. Occultation photometry and spectroscopy measure the occultation (secondary eclipse) depth of the star+planet light curve at different wavelengths to derive the planet's thermal \\citep[e.g.,][]{2005Natur.434..740D,2010A&A...513A..76S,2014arXiv1410.2241S} and reflected \\citep{2009A&A...501L..23A-eprintv1,2013AN....334..188R} signatures. The detection of the exoplanet emission spectra was also possible through high-resolution spectroscopy \\citep{2012Natur.486..502B, 2012ApJ...753L..25R, 2014A&A...561A.150D}. The measurement of phase variations relies on the detection of the flux variation along the planet's orbit as it alternately presents its day and\nnight hemisphere to us \\citep[e.g.,][]{2009ApJ...703..769K-eprintv1,2011ApJ...740...61K}. These techniques represent the current front line of exoplanet characterization and are limited only by flux measurement precision\nthat they impose as a result of the low planet-star flux ratio (e.g., in the most favorable cases, for a Jupiter sized planet with a 3 day period orbit $F_{\\rm Planet}\/F_{Star}\\approx10^{-4}$ in the visible and $F_{\\rm Planet}\/F_{Star}\\approx10^{-3}$ in the IR). \n \n {\\cite{1999ApJ...522L.145C} attempted to recover the reflected spectrum of the giant planet orbiting $\\tau$ Boo, which paved the way for detecting reflected light in the\noptical. To do so, they performed a $\\chi^2$ evaluation of simulated star+planet spectra against high-resolution observations obtained\nwith the HIRES spectrometer at the Keck observatory. Although unable to detect the reflected signal from planet $\\tau$ Boo b, they were able to set an upper limit on the planet's maximum planet-to-star flux ratio of about $5\\times10^{-5}$. In the same year, \\cite{1999Natur.402..751C} made an attempt at detecting the reflected signal of the same planet using a least-squares deconvolution technique, also without conclusive results. More recently, other attempts with alternative methods have been made \\citep[e.g.,][]{2003MNRAS.346L..16L, 2010A&A...514A..23R}, albeit all results have been inconclusive about a definitive detection of a reflected exoplanet signal. Nonetheless, all these attempts are of great scientific values because they allow establishing upper limits on the planet-to-star flux ratio.} \n \n More recently, researchers were able to collect measurements of the albedo of several exoplanets in an effort to constrain current atmosphere models \\citep[e.g.,][]{2011ApJ...729...54C,2014arXiv1405.3798D} and infer their composition \\citep[e.g., the presence of clouds in the atmosphere of HD 189733 b as done by ][]{2014arXiv1403.6664B}. An interesting result is the observation of \\object{HD 189733 b,} where researchers were able to infer the planet's color from albedo measurements \\citep{2013ApJ...772L..16E}. Nonetheless, {several of these} results are still subject to some discussion because it has been shown that the blue excess in the planet's transmission spectrum might also be the result of stellar activity \\citep[][]{2014A&A...568A..99O}. \n \n {The detection of sodium absorption in the transmission spectrum of \\object{HD209458} b \\citep{2002ApJ...568..377C} paved the way for the detection of spectral features in exoplanet atmospheres. 
As new facilities were developed and analysis techniques perfected with time, other elements or even molecules were detected \\cite[e.g.,][]{2008ApJ...673L..87R, 2010Natur.465.1049S, 2013MNRAS.436L..35B, 2014ApJ...791L...9M, 2014arXiv1404.3769B, 2014Natur.509...63S}.}\n \n In this paper we apply the technique described in \\cite{2013MNRAS.436.1215M} to HARPS observations of 51 Peg {to try to retrieve} the reflected spectrum of its planetary companion. \\object{51 Peg} (\\object{HD217014}) is a solar-type star, slightly more massive than our Sun \\cite[${M_{51\\;Peg}}\/{M_{\\odot}}\\approx 1.04$,][]{2013A&A...556A.150S}, at a distance of approximately 15.6 parsec from us \\citep{2007A&A...474..653V}. {With a minimum mass slightly under half the mass of Jupiter} and an orbital period slightly longer than four days, \\object{51 Peg b} is the prototype of the hot Jupiters, giant gas planets in close orbits \\citep{1995Natur.378..355M}. The combination of the host brightness \\citep[$V_{Mag} = 5.46$,][]{2009ApJ...694.1085V}, the giant planet's large dimensions, and the short-period planetary orbit yields a relatively high planet-star flux ratio and makes this planetary system an excellent candidate for testing the method presented in \\cite{2013MNRAS.436.1215M}. \n \n In Sect. \\ref{sec:Principle} we describe the principle behind our method. In Sect. \\ref{sec:Method} we describe the method and its application to our data. The results are presented in Sect. \\ref{sec:Results} and are discussed in Sect. \\ref{sec:Discussion}. We conclude in Sect. \\ref{sec:Conclusions}.\n \n \n \\section{Principle behind the method} \\label{sec:Principle}\n \\cite{2013MNRAS.436.1215M} showed that the cross-correlation function (hereafter CCF) can be used to mathematically enhance the S\/N of our observations to a level where the extremely low S\/N planetary signal can be recovered. The CCF of a spectrum with a binary mask \\citep{1996A&AS..119..373B} has been extensively tested in detecting exoplanets with the radial velocity method. Briefly, this technique corresponds to mapping the degree of similarity between the stellar spectrum and a binary mask (representing the stellar type), which increases the S\/N of the data by a factor proportional to the square root of the number of spectral lines identified in the mask.\n \n As discussed in \\cite{2013MNRAS.436.1215M}, we expect the planet's signature to replicate the stellar signal, scaled down as a result of geometric (e.g., planet size) and atmospheric (e.g., albedo) factors. {The planetary albedo measures the fraction of incident light that is reflected by the planet atmosphere. Several albedo definitions exist \\cite[e.g.,][]{1999ApJ...513..879M,2002MNRAS.330..187C,2010eapp.book.....S}, but in our study we only considered the geometric albedo $A_{\\rm g}$, which is defined as the ratio of a planet's flux measured at opposition ($\\alpha = 0$) to the flux of a Lambertian disk at the same distance and with the same radius as the planet. This allows us to define the planet\/star flux ratio as\n \\begin{equation}\n \\frac{F_{planet}(\\alpha)}{F_{Star}} = A_{\\rm g} \\, g(\\alpha) \\left[\\frac {R_{planet}}{a} \\right ]^{2}\n \\label{eq:FluxesRatio}\n ,\\end{equation}\n where $A_{\\rm g}$ is the planet's geometric albedo, $\\alpha$ the orbital phase, $g(\\alpha)$ the phase function, and $R_{planet}$ and $a$ the planetary radius and orbital semi-major axis.}\n \n \n \n For \\object{51 Peg b}, assuming an albedo of 0.3 and a planetary radius of $1.2\\;\\rm R_{Jup}$, Eq. 
\\ref{eq:FluxesRatio} will yield a maximum planet-star flux ratio of $\\approx\\,2\\times10^{-5}$.\nTo detect the planet's reflected signal under these conditions, we consequently need data with a combined S\/N of at least $165\\,000$ for a 3 $\\rm \\sigma_{noise}$ detection. A typical G2 stellar CCF-mask used in HARPS has over 4000 spectral lines. This means that in principle a spectrum with a S\/N of about 2600 will contain enough information to allow us to build a CCF with the necessary S\/N to detect the light spectrum of 51 Peg reflected on its planet (at 3-$\\sigma$ level). A lower S\/N will suffice if the albedo or the planetary radius are higher (according to Eq. \\ref{eq:FluxesRatio}).\n \n\n \\section{Method}\\label{sec:Method}\n \n \\subsection{Data} \\label{sec:Data}\n Our data were collected with the HARPS spectrograph at ESO's 3.6 m telescope at La Silla-Paranal Observatory as part of the ESO program {091.C-0271}. It consists of {90} spectra observed in seven different nights (\\textit{2013-06-08}, \\textit{2013-06-25}, \\textit{2013-08-02}, \\textit{2013-08-04}, \\textit{2013-09-05}, \\textit{2013-09-09,} and \\textit{2013-09-30}), which amounts to about 12.5 h of observing time. These observations were split into several carefully selected time windows in which the planet could be observed close to superior conjunction (i.e., when the day side of the planet faces us) to maximize the planet's flux (maximum phase). These time windows were computed from the ephemeris provided by \\cite{2006ApJ...646..505B}.\n \n The obtained spectra have a S\/N on the $\\rm 50^{th}$ order ($\\sim$5560\\AA) that varies between 122 and 388. The spectra cover the wavelength range from about 3781\\AA \\;to 6910\\AA. More detailed information can be found in Table \\ref{tab:SNRanges}.\n \n \\begin{table}\n \\caption{Available data for the individual nights.}\n \\centering\\begin{tabular}{c c c c}\n \\hline\\\\[-.5em]\n Night & Number of & Total exposure & S\/N range\\\\\n & Spectra & [seconds] & \\\\\n \\hline\\\\[-.5em]\n 2013-06-08 & 3 & 1360 & 243 - 296 \\\\\n 2013-06-25 & 10 & 5260 & 273 - 351 \\\\\n 2013-08-02 & 2 & 1092 & 145 - 151 \\\\\n 2013-08-04 & 20 & 10000 & 215 - 311 \\\\\n 2013-09-05 & 4 & 2400 & 191 - 248 \\\\\n 2013-09-09 & 13 & 7810 & 122 - 265 \\\\\n 2013-09-30 & 39 & 17100 & 179 - 388 \\\\\n \\hline\\\\[-.5em]\n \\end{tabular}\n \\label{tab:SNRanges}\n \\end{table}\n\n Despite the brightness of the target, some cloudy nights decreased the expected S\/N (e.g., S\/N$\\sim$150 after an exposure of 600s on 2013-08-02, while on 2013-09-30 we managed to obtain $\\sim$390 after 450s).\n \n \\subsection{Data reduction}\n To reduce the data and create the CCFs for each spectrum, the \\textit{HARPS DRS} \\citep{2003Msngr.114...20M} was used. The data were reduced using the default settings and were then fed to the CCF calculation recipe, used with a weighted $G2$ mask \\citep{2002A&A...388..632P}. \n \n Selecting an optimized CCF computation step was of critical importance. During the detection process, the CCFs need to be shifted in radial velocity (see below), which implies an interpolation between consecutive pixels. The errors in this interpolation can be minimized and its precision increased by selecting the smallest possible step. On the other hand, the computing time increases as the step size decreases. Therefore we settled for a {$\\rm 50\\; m\\; s^{-1}$} step, a good compromise between computing time and high precision. 
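\n \n As an illustration of the interpolation involved, shifting a CCF by a given radial velocity on the adopted $\\rm 50\\; m\\; s^{-1}$ grid can be sketched as below. This is only an illustrative Python sketch written for this presentation (the function and variable names are ours); it is not the actual reduction code.\n \\begin{verbatim}\nimport numpy as np\n\ndef shift_ccf(rv_grid, ccf, delta_rv):\n    # Re-sample the CCF on its original velocity grid (km\/s) after a shift:\n    # the new value at velocity v is the original CCF evaluated at v + delta_rv,\n    # obtained by linear interpolation between the 50 m\/s steps.  Velocities\n    # falling outside the original window are clamped to the edge values.\n    return np.interp(rv_grid + delta_rv, rv_grid, ccf)\n\n# Example: re-center a normalized CCF on the planet rest frame by removing\n# the planetary radial velocity rv_planet (in km\/s) at the time of observation:\n# ccf_planet_frame = shift_ccf(rv_grid, ccf_norm, rv_planet)\n\\end{verbatim}\n An operation of this kind is what is required when the CCFs are re-centered on the planetary velocity before being stacked (Sect. \\ref{sec:PlanetRecovery}).\n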
\n \n The CCF width also requires particular attention because we require a window wide enough to cover the planet's orbital path. Since the expected planet semi-amplitude is about 130 $\\rm km\\; s^{-1}$ \\citep[e.g.,][]{2013ApJ...767...27B}, we selected a window of 175 $\\rm km\\; s^{-1}$ on each side of the stellar radial velocity (centred on the stellar CCF). This allowed covering the planet's full orbit while leaving on each side of the corresponding RV a continuum section large enough to estimate the noise level.\n\n \\subsection{Calculating the best orbital solution} \\label{sec:Orbital}\n\n Our initial ephemerides for the orbit of \\object{51 Peg b} were taken from \\cite{2006ApJ...646..505B}. However, the obtained HARPS data allow us to derive precise radial velocities\\footnote{The radial velocities derived with the HARPS DRS pipeline can be found in the online version.} that can be combined with other measurements from the literature, so that we cover a baseline of RV measurements spanning almost 20 years with {different facilities} {(see Table \\ref{tab:Measurements})}.\n\n \\begin{table}\n \\centering\\caption{Radial velocity data for 51 Peg used to derive the orbital parameters. }\n \\centering\\resizebox{\\columnwidth}{!}{%\n \\begin{tabular}{l c c c}\n \\hline\\\\[-.5em]\n Instrument & Number & $RV_{\\rm System}$ & Reference\\\\\n & of points & {[$\\rm km\\; s^{-1}$]} & \\\\\n \\hline\\\\\n ELODIE@OHP & 153 & -33.2516732 & (1)\\\\\n KECK, AAT, Lick & 256 & -0.0020280 &(2)\\\\\n \\hline\n \\end{tabular}\n }\n {\\tablebib{(1) \\citet{2004A&A...414..351N}; (2) \\citet{2006ApJ...646..505B}.}\\tablefoot{$RV_{\\rm system}$ corresponds to the radial velocity of the system as measured by the corresponding instrument.}}\n \\label{tab:Measurements}\n \\end{table}\n\n We have thus re-derived the orbital parameters of 51\\,Peg\\,b. These were computed using the code $YORBIT$ \\citep{2011A&A...535A..54S}. This combined dataset allowed us to derive a precise set of orbital parameters for the star and its planetary companion. The derived orbital parameters can be found in Table \\ref{tab:51PegYORBIT}, where the value of $RV_{\\rm system}$ was set to the HARPS value\\footnote{In the fit a different zero point was fitted to each instrument's radial velocities.}. During the fitting process, the eccentricity was fixed to zero because the obtained value was not statistically significant. \n \n \n \n \\begin{table}\n \\centering\\caption{Basic orbital parameters for \\object{51 Peg b} as fitted by $YORBIT$}\n \\centering\\begin{tabular}{l l}\n \\hline\\\\[-.5em]\n Parameters & Value \\\\\n \\hline\\\\\n $RV_{\\rm system}$ {[$\\rm km\\; s^{-1}$]} & $-33.152$ \\\\\n Period {[days]} & $4.231$ \\\\\n e & $0.0 (fixed)$ \\\\\n a { [AU]} & $0.052$ \\\\\n $k_{Star}$ {[$\\rm m\\; s^{-1}$]} & $55.2$ \\\\\n $m_2\\;\\sin(i)$ $\\rm [M_{\\rm Jup}]$ & $0.450$ \\\\\n $\\omega$ {[degrees]} & $0.0 (fixed)$ \\\\\n $t_{0}$ {[BJD-2400000]} \n& $56021.256$ \\\\[.5em] \n \\hline\\\\[-.5em]\n \\end{tabular}\n \\label{tab:51PegYORBIT}\n \\end{table}\n\n \n \\subsection{Recovery of the planet signal: methodology} \\label{sec:PlanetRecovery}\n We extracted the planet signal from the stellar noise with the technique described in \\cite{2013MNRAS.436.1215M}. 
In brief, after the CCF of each observation has been computed, the signal can be recovered in two steps: \n \\begin{itemize}\n \\item[step 1:]\\label{itm:template} the CCFs are normalized with a stellar template and\n \\item[step 2:]\\label{itm:stacking} the individual CCFs resulting from step 1 are stacked after correcting for planetary\nvelocity.\\\n \\end{itemize}\n {Special care in this process was taken for step 1, which consisted of dividing (normalizing) each of the star+planet CCFs by a carefully built stellar CCF template that represents the stellar signal as accurately as\npossible. This permitted us to remove the stellar signal so that only the planetary signal and noise were left.} For this template we also needed to ensure the highest possible S\/N so that the division by this template would not introduce significant additional noise. To this end, we constructed two different templates from our observations:\n \\begin{description}\n \\item[\\textit{Template \\#1}] This template was constructed by stacking all the CCFs in our sample, after correcting each for the stellar radial velocity induced by the planet. This option implies that the template spectrum also includes a contribution {of the planetary reflected light spectrum}. Despite this fact, since the planet RV varies very rapidly, the planet's minute signal is diluted amidst the noise and can in principle be neglected.\\\n\n \\item[\\textit{Template \\#2}] This template was constructed by stacking the CCFs of the data collected close to the expected inferior conjunction ephemerides (i.e., the position of the planet in its orbit when its night side faces towards Earth, $\\rm 0.9 < \\phi < 0.1${, where $\\phi$ is the orbital phase}). With this template, the contamination of possible reflected light incoming from the planet is minimized. The downside of this selection is that the number of available spectra for constructing the template is more limited than with template \\#1 and therefore might introduce non-negligible noise in the data. For comparison, since only 20 CCFs in our sample are available for template \\#2, its S\/N is $\\sim$40\\% of the S\/N of template \\#1. \n \\end{description}\n\n To stack the CCFs after normalization by the stellar template (step 2), we discarded observations in which the planetary and stellar signals were spectroscopically blended or close in velocity. To avoid this, we only considered observations (after correcting for the planet's RV computed assuming an elliptical orbit with the orbital parameters in Table \\ref{tab:51PegYORBIT}) in which the radial velocity difference between the planet and the star exceeds $\\rm 8 \\times FWHM_{Star}$ (eight times the FWHM of the stellar CCF). For a more detailed explanation see \\citet{2013MNRAS.436.1215M}. Figure \\ref{fig:Phases} shows the orbital phases of our data (black circles) and of the observations used to recover the planetary signal (red stars) {at maximum detection significance.}\n \n \\begin{figure}\n \\includegraphics[width = \\columnwidth]{.\/TemplatePoints_All.pdf}\n \\caption{Orbital phases of our data (black circles) and of the observations used to recover the planetary signal (red stars) {at maximum detection significance}. Phase zero corresponds to the inferior conjunction phase, i.e., the position on the orbit where the planetary night side faces Earth. 
\n \n \\begin{figure}\n \\includegraphics[width = \\columnwidth]{.\/TemplatePoints_All.pdf}\n \\caption{Orbital phases of our data (black circles) and of the observations used to recover the planetary signal (red stars) {at maximum detection significance}. Phase zero corresponds to the inferior conjunction phase, i.e., the position on the orbit where the planetary night side faces Earth. The green dotted line corresponds to the orbital fit from the parameters in Table \\ref{tab:51PegYORBIT}.}\n \\label{fig:Phases}\n \\end{figure}\n \n For each individual CCF, the expected radial velocity of the planet was computed from the Keplerian fit presented in Table \\ref{tab:51PegYORBIT}. The different processed CCFs were then re-centered by subtracting the planetary radial velocity from all of them (i.e., centering them around the velocity of the planet at any given moment) and co-adding them to increase their S\/N. \n \n After the individual CCFs were stacked, a Gaussian curve was fitted and its amplitude was used to compute the detection significance. To discard fits with no physical meaning, {two} constraints were set in place:\n \\begin{itemize}\n \\item $\\rm FWHM_{Planet} > 0.9\\; FWHM_{Star}$ - {This lower limit takes into account noise that might decrease the width of the planet's CCF. For instance, if the star's convective envelope were to be tidally locked to its planetary companion, the planet would ``see'' its host star unaffected by the star's rotation and therefore the planetary CCF might be an unbroadened version of the star's \\citep[for further details we refer to][]{1999ApJ...522L.145C}. This is clearly not the case for the 51 Peg system, as the stellar rotation period \\citep[$\\sim$ 21 days, see][]{2010MNRAS.408.1666S} is much longer than the planetary orbital period ($\\sim$ 4 days), and therefore this effect should be negligible.}\n \\item $\\rm FWHM_{Planet} < 4\\; FWHM_{Star}$ - Planetary rotation might cause the planetary CCF to be broadened relative to the CCF of the host star; this upper limit allows for such broadening while still leaving enough continuum on each side for the noise estimation. The limit was computed from the extreme case of a planet with twice the size of Jupiter but the same rotation period and observed edge-on. Nonetheless, we expect the rotation period of a close-in giant planet to be much longer due to tidal interaction with its star. \n \\end{itemize}\n \n The significance of this detection $D$ is then defined by \n \\begin{equation}\n D = \\left|\\frac{A}{\\rm \\sigma_{noise}}\\right|\n \\label{eq:DetectSig}\n ,\\end{equation}\n where $A$ is the amplitude of a Gaussian fit to the planet's signal and $\\rm \\sigma_{noise}$ is the continuum noise on both sides of the signal. {We define the continuum noise as the standard deviation of the pixel intensity of the stacked CCFs at a separation of more than $\\rm 2\\times FWHM_{Planet}$ from the center of the detected signal.} \n \n As mentioned above, the planet's real mass, and therefore its real orbital velocity semi-amplitude $k_{\\rm Planet}$, is not known. To pinpoint this real semi-amplitude, we computed the detection significance over a range of planetary orbital semi-amplitudes ({75-275 $\\rm km\\; s^{-1}$}) centered on the semi-amplitude ($k_{\\rm Planet}\\sim$ 133 $\\rm km\\; s^{-1}$) calculated from the minimum mass recovered from the radial velocity fit presented in Table \\ref{tab:51PegYORBIT}.\n
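The recipe above can be condensed into the following schematic function, which assumes the normalized CCFs of the previous sketch and returns the detection significance for a single trial $k_{\\rm Planet}$. The Gaussian model, the $\\rm 8\\times FWHM_{Star}$ blending criterion, the FWHM limits, and the $\\rm 2\\times FWHM_{Planet}$ noise window follow the description given above; the implementation details are ours.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

FWHM2SIG = 2.0 * np.sqrt(2.0 * np.log(2.0))   # FWHM = FWHM2SIG * sigma

def gaussian_dip(v, amp, v0, fwhm, cont):
    """Inverted Gaussian on a flat continuum."""
    return cont - amp * np.exp(-0.5 * ((v - v0) / (fwhm / FWHM2SIG)) ** 2)

def detection_significance(norm_ccfs, v_grid, phase, rv_star, gamma,
                           k_planet, fwhm_star=7.43):
    """Stack the normalized CCFs in the planet rest frame for one trial
    k_planet [km/s] and return (D, amplitude, FWHM) of the fitted dip."""
    rv_planet = gamma + k_planet * np.sin(2.0 * np.pi * phase)
    # keep only epochs where planet and star are spectroscopically separated
    ok = np.abs(rv_planet - rv_star) > 8.0 * fwhm_star
    stacked = np.mean([np.interp(v_grid + rv, v_grid, c)   # planet rest frame
                       for c, rv in zip(norm_ccfs[ok], rv_planet[ok])], axis=0)
    # Gaussian fit centered on the expected planet position (v = 0)
    p0 = [1e-4, 0.0, 2.0 * fwhm_star, 1.0]
    (amp, v0, fwhm, cont), _ = curve_fit(gaussian_dip, v_grid, stacked, p0=p0)
    # reject non-physical fits (constraints discussed in the text)
    if not (0.9 * fwhm_star < fwhm < 4.0 * fwhm_star):
        return 0.0, 0.0, fwhm
    # continuum noise: scatter at more than 2 x FWHM from the fitted signal
    sigma_noise = np.std(stacked[np.abs(v_grid - v0) > 2.0 * fwhm])
    return np.abs(amp) / sigma_noise, amp, fwhm
\\end{verbatim}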
\n \\subsection{Characterizing the planet}\n When stacking the CCFs, we can expect that the maximum detection significance occurs when the correct velocity shift is used in each individual CCF that we combine. In particular, this is expected to occur if we are able to input the correct semi-amplitude radial velocity signal of the planet. A significant detection of the planetary signal (and its radial velocity semi-amplitude) will then allow inferring the planet-star mass ratio $q$ from\n \\begin{equation}\n q \\equiv \\frac{M_{\\rm Planet}}{M_{\\rm Star}} = \\frac{k_{\\rm Star}}{k_{\\rm Planet}}\n \\label{eq:MassRatio}\n ,\\end{equation}\n where $k_{\\rm Planet}$ is the semi-amplitude that yields the most significant detection in the analysis of the previous section, and $k_{\\rm Star}$ is the stellar semi-amplitude derived in Sect. \\ref{sec:Orbital}. With $q$, and given the derived value for the stellar mass ($1.04\\,M_\\odot$), we can compute the real mass for the planet. Together with the minimum mass, this value allows deriving the planetary orbital inclination.\n \n \n \\begin{figure*}\n \\hfill\\includegraphics[width = \\columnwidth, page =1 ]{.\/51Peg_All_Final.pdf}\\hfill\n \\includegraphics[width = \\columnwidth, page =1 ]{.\/51Peg_Transit_Final.pdf}\\hfill\n \\caption{Detection significance as a function of $k_{\\rm Planet}$. The red line corresponds to the $k_{\\rm Planet}$ value for maximum detection. The maximum detection occurs for similar $k_{\\rm Planet}$ values for both templates. The significance values set to zero correspond to $k_{\\rm Planet}$ values for which no Gaussian fit satisfying our restrictions could be achieved. \\textit{Left panel:} using template \\#1; \\textit{right panel:} using template \\#2.}\n \\label{fig:AmpK2}\n \\end{figure*} \n \n \n \\section{Results} \\label{sec:Results}\n \n \n \\begin{figure} \n \\includegraphics[width = \\columnwidth, page =2 ]{.\/51Peg_All_Final.pdf}\n \\caption{Detected signals as a function of $k_{\\rm Planet}$ over the selected velocity range. With decreasing distance to the maximum detection, the signal becomes better defined and the continuum noise less dispersed.}\n \\label{fig:DetectionGRid}\n \\end{figure}\n \n\n Following the method described in Sect. \\ref{sec:Method}, we calculated the detection significance {for evenly distributed radial velocity semi-amplitudes of the planet (75 $\\rm km\\; s^{-1}$ $< k_{\\rm Planet} <$ 275 $\\rm km\\; s^{-1}$), with a step of 0.05 $\\rm km\\; s^{-1}$}. For each $k_{\\rm Planet}$ we stacked the individual CCFs after correcting for the corresponding radial velocity\\footnote{{The constraints mentioned in Sect. \\ref{sec:PlanetRecovery} cause the number of spectra used for constructing each combined CCF to vary with the assumed value of $k_{\\rm Planet}$ (it increases with $k_{\\rm Planet}$). However, the total S\/N of the resulting CCF is always above the S\/N threshold for detection of the observed signal (with an amplitude of $(6\\pm0.4) \\times 10^{-5}$).}}. {We then performed a simple Gaussian fit to the resulting stacked CCF, using the restrictions discussed in Sect. \\ref{sec:Method}.} The significance of the detection (D) was then derived using Eq. \\ref{eq:DetectSig}. This process was repeated for the two different template options presented in Sect. \\ref{sec:Method}. \n
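Schematically, this scan is a single loop over trial semi-amplitudes around the significance estimator sketched above; the function below is only illustrative and reuses the hypothetical \\texttt{detection\\_significance()} helper.
\\begin{verbatim}
import numpy as np

def scan_k_planet(norm_ccfs, v_grid, phase, rv_star, gamma,
                  k_min=75.0, k_max=275.0, dk=0.05):
    """Detection significance D(k) on an evenly spaced grid of trial
    semi-amplitudes, reusing the detection_significance() sketch above."""
    k_grid = np.arange(k_min, k_max + dk, dk)
    D = np.array([detection_significance(norm_ccfs, v_grid, phase,
                                         rv_star, gamma, k)[0]
                  for k in k_grid])
    return k_grid, D

# Typical usage:
#   k_grid, D = scan_k_planet(norm_ccfs, v_grid, phase, rv_star, gamma)
#   k_best    = k_grid[np.argmax(D)]    # semi-amplitude of maximum detection
#   above3    = k_grid[D > 3.0]         # conservative interval adopted below
\\end{verbatim}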
\n {Template \\#1 yields a maximum detection significance of 3.7-$\\rm \\sigma_{noise}$ for {$k_{\\rm Planet}$ = 132 $\\rm km\\; s^{-1}$}, while with Template \\#2 a maximum significance of 5.6-$\\rm \\sigma_{noise}$ is computed for {$k_{\\rm Planet}$ = 133 $\\rm km\\; s^{-1}$}. To be conservative, the error bars on the value derived using template \\#2 were computed from the 2-$\\sigma$ uncertainty in the amplitude of the Gaussian fitted to the CCF of the planet at maximum detection significance, yielding $k_{\\rm Planet} = 133^{+19}_{-20}$ {$\\rm km\\; s^{-1}$}. Using the same procedure to compute the error bars for the results using template \\#1 would lead to $k_{\\rm Planet} = 132^{+2}_{-11}$ {$\\rm km\\; s^{-1}$}. However, for this latter case, a simple visual inspection of Fig. \\ref{fig:AmpK2} shows that the significance curve is relatively flat for values between about 120 and 150 $\\rm km\\; s^{-1}$ (but always above 3-$\\rm \\sigma_{noise}$ in this range). We thus decided to adopt all the values of $k_{\\rm Planet}$ with a significance higher than 3.0-$\\rm \\sigma_{noise}$, that is, $k_{\\rm Planet} = 132^{+19}_{-15}$ {$\\rm km\\; s^{-1}$}. This of course lowers the significance of our detection to 3.0-$\\rm \\sigma_{noise}$ (and not 3.7-$\\rm \\sigma_{noise}$ as derived with the maximum value).}\n \n Following the constraints presented in Sect. \\ref{sec:PlanetRecovery}, the planetary CCF at maximum significance was constructed by stacking 25 of the available {90} observations ($\\sim$27\\%). Although template \\#2 yields a higher detection significance, we decided to use template \\#1 for the remainder of the paper since it has a much higher S\/N because many more CCFs were used to construct it. Therefore it will represent the stellar signal more reliably and is less prone to introduce additional noise into the CCF (including correlated noise that is difficult to quantify).\n \n{Table \\ref{tab:GaussParams} shows the parameters for the recovered CCF derived from the Gaussian fit using template \\#1. In this table, the amplitudes of the star (0.48) and planet ($(6\\pm0.4) \\times 10^{-5}$) CCFs correspond to the depth of the CCF when the continuum has been normalized to 1. The planet-to-star flux ratio will then be given by the ratio of these two quantities. The significance of the detection corresponds to the maximum significance for 75 $\\rm km\\; s^{-1}$ $< k_{\\rm Planet} <$ 275 $\\rm km\\; s^{-1}$ and was computed by dividing the amplitude of the planetary CCF by the noise on the wings of the CCF. The FWHM is the full width at half maximum of the fitted Gaussian, 7.43 $\\rm km\\; s^{-1}$ for the star and $\\rm 22.6 \\pm 3.6$ $\\rm km\\; s^{-1}$ for the planet.}\n\n Figure \\ref{fig:DetectionGRid} shows, for several assumed values of $k_{\\rm Planet}$ over the adopted range, a section of the stacked CCFs centered on the radial velocity where the planetary signal would be expected from a Keplerian fit to the observations using the parameters in Table \\ref{tab:51PegYORBIT} and the selected $k_{\\rm Planet}$. {The plot illustrates how the recovered CCF changes when we co-add spectra for different assumed semi-amplitude values. When a value for the semi-amplitude close to 132 $\\rm km\\; s^{-1}$ is assumed, the recovered signal is well defined and its wings are less noisy; this is closer to the expected Gaussian shape of a CCF. However, as the assumed value of the semi-amplitude departs from 132 $\\rm km\\; s^{-1}$, the CCFs of the {individual} observations will still be co-added to each other, albeit imperfectly aligned. Therefore, in these cases a signal is also expected to be detected, but spread out across the {continuum} and of lower significance because the wings additionally contribute to noise. When close enough to the best-fit value of the semi-amplitude, a signal above the noise level can still be seen. This signal might seem to be of similar amplitude to the one detected for the correct value of orbital semi-amplitude, but it will have a lower significance due to the increased noise in the wings. 
This can be seen in the panels for $k_{\\rm Planet}$ = 115 and 120 $\\rm km\\; s^{-1}$, which show signals that might seem to be of similar amplitude to the one in the central panel, but because of the increased noise in the wings, these signals are of lower significance according to Eq. \\ref{eq:DetectSig} (see Fig. \\ref{fig:AmpK2}).} \n \n \\begin{figure} \n \\includegraphics[width = \\columnwidth, page =3 ]{.\/51Peg_All_Final.pdf}\n \\caption{Detected signals as a function of $k_{\\rm Planet}$, but for random sections of the normalized CCF where the planetary signal is not expected to be found.}\n \\label{fig:DetectionGRidNull}\n \\end{figure}\n \n To test whether the observed signal might be a spurious combination of random noise, we repeated the process, but instead of using a Keplerian function to compute the expected radial velocity signal of the planet, we attributed a random radial velocity within the range [-50, 80] $\\rm km\\; s^{-1}$ to each CCF. The results of that analysis can be seen in Fig. \\ref{fig:DetectionGRidNull}. They show that no significant signal is detected, as expected. This test gives us confidence that the detected CCF of the planet is not a mere artifact. {The dips that appear in the panels of Fig. \\ref{fig:DetectionGRidNull} appear at random positions, and their FWHM is always lower than the FWHM of the star. For instance, the center dip in the 115 $\\rm km\\; s^{-1}$ panel of Fig. \\ref{fig:DetectionGRidNull} has a FWHM of only 3.8 $\\rm km\\; s^{-1}$. This strongly suggests that these dips are purely caused by noise, especially as they appear at random positions. The constraints that we specified in Sect. \\ref{sec:Method} ensure that these dips are discarded as nonphysical and are not confused with the planetary CCF.}\n \n {To verify whether our data-analysis procedure was sound, we simulated 100 sets of noiseless CCFs representing idealized observations of the star + planet signal for random values of the planetary semi-amplitude $k_{\\rm Planet}$ in the range [100, 180] $\\rm km\\; s^{-1}$. Each set of simulated observations consisted of {90} star+planet CCFs (the number of observations we have), where the planet radial velocity is computed from one of the randomly selected $k_{\\rm Planet}$ values and the orbital solution in Table \\ref{tab:51PegYORBIT}. Each simulated CCF was built by positioning the stellar Gaussian at the observed velocity of the star at each given moment (obtained from the real CCFs) and co-adding the planet Gaussian with an {amplitude} of $5\\times 10^{-5}$ (a value similar to that expected for our test subject). The goal was to verify whether the injected signal was always fully recovered by the reduction process. The results show that the injected signal was successfully recovered in all simulations. The recovered values of $k_{\\rm Planet}$ always matched the injected ones, with a standard deviation of only 0.11\\%. Similar results were obtained for the {amplitude}, which was always recovered with a standard deviation lower than 0.01\\%. This test shows that the data-analysis pipeline works correctly and can be used safely to retrieve the planetary signal.}
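A minimal version of this sanity check is sketched below. The stellar and planetary Gaussian parameters are the illustrative values quoted in this paper, the planetary FWHM is an arbitrary choice, and \\texttt{recover\\_k} is a stand-in for the normalization, stacking, and grid-scan steps sketched in the previous sections.
\\begin{verbatim}
import numpy as np

FWHM2SIG = 2.0 * np.sqrt(2.0 * np.log(2.0))

def dip(v, amp, v0, fwhm):
    """Gaussian absorption dip of given amplitude, center and FWHM."""
    return amp * np.exp(-0.5 * ((v - v0) / (fwhm / FWHM2SIG)) ** 2)

def simulate_ccfs(v_grid, phase, rv_star, gamma, k_planet,
                  amp_star=0.48, fwhm_star=7.43,
                  amp_planet=5e-5, fwhm_planet=22.6):
    """Noiseless star+planet CCFs: the stellar dip sits at the observed
    stellar velocity and the planetary dip is co-added at the velocity
    implied by k_planet (the planetary FWHM is an illustrative choice)."""
    rv_planet = gamma + k_planet * np.sin(2.0 * np.pi * phase)
    return np.array([1.0 - dip(v_grid, amp_star, vs, fwhm_star)
                         - dip(v_grid, amp_planet, vp, fwhm_planet)
                     for vs, vp in zip(rv_star, rv_planet)])

def injection_recovery(v_grid, phase, rv_star, gamma, recover_k,
                       n_sets=100, k_range=(100.0, 180.0), seed=1):
    """Inject planets with random k_planet and recover them with the supplied
    pipeline; 'recover_k' is a stand-in for the normalization + stacking +
    grid-scan steps sketched above.  Returns the fractional scatter of k."""
    rng = np.random.default_rng(seed)
    injected = rng.uniform(*k_range, size=n_sets)
    recovered = np.array([recover_k(simulate_ccfs(v_grid, phase, rv_star,
                                                  gamma, k))
                          for k in injected])
    return np.std((recovered - injected) / injected)
\\end{verbatim}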
\n \n \\begin{figure} \n \\includegraphics[width = \\columnwidth, keepaspectratio =true, page = 1]{.\/51Peg_Final_CCF.png}\\hfill\n \\caption{Planetary signal at maximum detection significance and Gaussian curve fitted with the Levenberg-Marquardt method.}\n \\label{fig:PlanetCCF}\n \\end{figure}\n\n The planetary CCF with the highest recovered significance is plotted in Fig. \\ref{fig:PlanetCCF} together with a Gaussian fit using a Levenberg-Marquardt algorithm. To estimate the error bars, we fitted the planetary CCF with a Gaussian, subtracted the fit, and worked on the residuals. Into these residuals we injected a Gaussian curve with the same parameters as the detected planetary CCF, but at different radial velocities. This procedure was repeated 10\\,000 times, and in each case we recovered the injected signal by fitting a Gaussian profile. The standard deviations of the recovered FWHM and {amplitude} values were then taken as the 1-$\\sigma$ errors listed in Table\\,\\ref{tab:GaussParams}. \n\n \n \\begin{table}\n \\caption{{Comparison of the stellar and planetary CCF parameters.}}\n \\centering\\begin{tabular}{l c c l}\n \\hline\\\\[-.5em] \n Parameter & Star & {Planet} &\\\\\n & & &\\\\\n \\hline\\\\\n {Amplitude} & 0.48 & 6.0$\\pm$0.4 &\\hspace{-1.em}$\\rm \\times$ $10^{-5}$\\\\\n Significance $[\\rm \\sigma_{noise}]$ &--& 3.7$\\pm$0.2&\\\\\n FWHM {[$\\rm km\\; s^{-1}$]} & 7.43 & \\hspace{-.5em}22.6$\\pm$3.6&\\\\\n \\hline\\\\\n \\end{tabular}\n {\\tablefoot{For the star signal, we present the median value of the amplitude and FWHM of its CCF over all observations. For the planetary CCF, we present the values of the amplitude and FWHM of its Gaussian fit and its detection significance. In both cases, the level of the CCF continuum flux has been set to one.}}\n \\label{tab:GaussParams} \n \\end{table}\n\n\n\n \\section{Discussion} \\label{sec:Discussion}\n {Our results suggest that we were able to successfully detect the light spectrum of 51\\,Peg reflected by its hot Jupiter companion. The results also indicate that the best-fit semi-amplitude of the orbital motion of the planet is $k_{\\rm Planet} = 132^{+19}_{-15}$ $\\rm km\\; s^{-1}$.}\n \n {The real mass of the planet cannot be lower than its minimum mass (the two coincide for an orbital inclination of 90$^{\\circ}$), which places an upper limit on the planetary orbital semi-amplitude of $k_{\\rm Planet} \\sim 133$ $\\rm km\\; s^{-1}$. Combining the detected value of the semi-amplitude with this constraint, Eq. \\ref{eq:MassRatio} yields a real mass for \\object{51 Peg b} of $0.46^{+0.06}_{-0.01}\\;\\rm M_{Jup}$ for a stellar mass of $\\rm 1.04\\; M_{\\odot}$ \\citep{2013A&A...556A.150S}. By comparing this with the derived minimum mass of $m_2 \\sin i = 0.45 \\;\\rm M_{Jup}$ (Table \\ref{tab:51PegYORBIT}), we can infer an orbital inclination of $80^{+10}_{-19}$ degrees. This result is compatible with the results obtained independently by \\citet{2013ApJ...767...27B} - $79.6^{\\circ}