\\section{Introduction}\nThe \\emph{Kepler} space telescope collected continuous photometric data for nearly 3.5 years from a small 0.25\\% patch of the celestial sphere \\citep{koc10,bor16}. This uniform monitoring of 150,000 stars enabled the discovery of over 4700 transiting exoplanet candidates through an automated detection pipeline \\citep{jen10}.\\footnote{\\url{https:\/\/exoplanetarchive.ipac.caltech.edu\/docs\/counts_detail.html}} Removing the human component from the signal vetting process ({\tt Robovetter}; \\citealt{tho18}) enabled the first homogeneous transiting small planet sample suitable for exoplanet population studies. Through full automation, the sample completeness can be measured via transit injection\/recovery procedures \\citep{pet13b,chr15,dres15,chr17,chr20}. Here, artificial transit signals are injected into the raw light curves, enabling quantification of the pipeline's detection efficiency as a function of, for instance, the injection signal strength. In addition, the final \\emph{Kepler} catalog (DR25) also provided a measure of the sample reliability (the rate of false alarm signals) by testing the software's ability to remove systematics from the final catalog \\citep{cou17b}. \n\nWith these catalog measurements available, the underlying planet population can be extracted by correcting for the detection and orbital selection effects. 
Many studies have used \\emph{Kepler} automated planet catalogs to identify a remarkable surplus of sub-Neptunes and super-Earths at short periods (FGK Dwarfs: \\citealt{you11,how12,pet13b,muld18,hsu19,zin19,he19}; M Dwarfs: \\citealt{dres13,mui15,dres15,har19}), despite their absence in the solar system. Several other population features have been discovered using the \\emph{Kepler} catalog. One example is the apparent deficit of planets in the 1.5--$2 R_\\Earth$ range, colloquially known as the ``radius valley'' \\citep{ful17}. This flux-dependent population feature indicates some underlying formation or evolution mechanism must be at play, separating the super-Earth and sub-Neptune populations \\citep{owe17,gup18}. Nearly 500 multi-planet systems were identified in the \\emph{Kepler} data, enabling examination of intra-system trends. Remarkably, very minor dispersion in planet radii is observed within each system \\citep{cia13}. By examining the intra-system mass values from planetary systems that have measurable transit-timing variations (TTVs), a similar uniformity has been identified for planet mass \\citep{mil17}. Moreover, the orbital period spacing of these planets appears far more compact than that of the planets within the solar system. These transiting multi-planet systems also appear to have relatively homogeneous spacing, indicative of an unvaried dynamical history. This combination of intra-system uniformity in orbital spacing and radii is known as the ``peas-in-a-pod'' finding \\citep{wei18}. \n\n\nOne of the main objectives of the \\emph{Kepler} mission was to provide a baseline measurement for the occurrence of Earth analogs ($\\eta_\\Earth$). Despite the poor completeness in this area of parameter space, several studies have extrapolated from more populated regions, providing measurements ranging from 0.1--0.4 Earth-like planets per main-sequence star \\citep{cat11,tra12,pet13b,sil15,bur15,zin19b,kun20,bry20b}. 
Currently, it remains unclear what planet features are important for habitability, making precise measurements unattainable. Moreover, these extrapolations are model dependent and require a more empirical sampling of the Earth-like region of parameter space to reduce our uncertainty. Unfortunately, the continuous photometric data collection of the \\emph{Kepler} field was terminated due to mechanical issues on-board the spacecraft after 3.5 years, limiting the completeness of these long-period small planets.\n\nUpon the failure of two (out of four) reaction wheels on the spacecraft, the telescope was no longer able to collect data from the \\emph{Kepler} field, due to drift from solar radiation pressure. By focusing on fields (Campaigns) along the ecliptic plane, this drift was minimized, giving rise to the \\emph{K2} mission \\citep{how14, cle16}. Each of the 18 Campaigns were observed for roughly 80 days, enabling the analysis of transiting exoplanet populations from different regions of the local Galaxy. However, the remaining solar pressure experienced by the spacecraft still required a telescope pointing adjustment every $\\sim$6 hours. This drift and correction led to unique systematic features in the light curves, which the \\emph{Kepler} automated detection pipeline was not designed to overcome. As a result, all transiting \\emph{K2} planet candidates to date have required some amount of visual assessment. Despite this barrier, nearly 1000 candidates have been identified in the \\emph{K2} light curves (e.g., \\citealt{van16b,bar16,ada16,cro16,pop16,dres17,pet18,liv18,may18,ye18,kru19,zin19c}). However, this assortment of planet detections lacks the homogeneity necessary for demographics research. Attempts have been made to replicate the automation of the \\emph{Kepler} pipeline for \\emph{K2} planet detection \\citep[e.g.,][]{kos19,dat19}, but these procedures still required some amount of visual inspection. 
\\citet{zin20a} developed the first fully automated \\emph{K2} planet detection pipeline, using the {\\tt EDI-Vetter} suite of vetting metrics to combat the unique \\emph{K2} systematics. This pilot study was carried out on a single \\emph{K2} field (Campaign 5; C5 henceforth) and detected 75 planet candidates. The automation enabled injection\/recovery analysis of the stellar sample, providing an assessment of the planet catalog completeness. Additionally, the reliability (rate of false alarms) was quantified by passing inverted light curves---nullifying existing transit signals---through the automated procedure, analogous to what was done for the \\emph{Kepler} DR25 catalog. Any false candidates identified through this procedure are indicative of the underlying false alarm contamination rate. \n\n\\begin{figure*}\n\\centering \\includegraphics[width=\\textwidth]{kepStell.pdf}\n\\caption{The distribution of \\emph{Kepler} \\citep{ber20} and \\emph{K2} \\citep{hub16,har20} targets as a function of surface gravity ($\\log(g)$), effective stellar temperature ($T_\\textrm{eff}$) and stellar metallicity ([Fe\/H]). \n\\label{fig:HR}}\n\\end{figure*}\n\nWith corresponding measures of sample completeness and reliability for C5, \\cite{zin20b} carried out the first assessment of small transiting planet occurrence outside of the \\emph{Kepler} field, finding a minor reduction in planet occurrence in this metal-poor FGK stellar sample. This provided evidence that stellar metallicity may be linked to the formation of small planets; however, the weak trend detected requires additional data for verification. \n\n\n\nThis paper is a continuation of the Scaling \\emph{K2} series \\citep{har20,zin20a,zin20b}, which aims to leverage \\emph{K2} photometry to expand our exoplanet occurrence rate capabilities and disentangle underlying formation mechanisms. 
The intent of this study is to derive a uniform catalog of \\emph{K2} planets for all 18 Campaigns (with the exception of C9, which observed the crowded galactic plane, and C19, which suffered significantly from low fuel levels), and provide corresponding measurements of the sample completeness and reliability. The underlying target sample and the corresponding stellar properties are discussed in Section \\ref{sec:stellarSample}. \nIn Section \\ref{sec:pipe} we outline the automated pipeline implemented in our uniform planet detection routine. We then procure a homogeneous planet sample and consider the interesting candidates and systems in Section \\ref{sec:catalog}. In Sections \\ref{sec:complete} and \\ref{sec:reli} we provide the corresponding measures of sample completeness and reliability for this catalog of transiting \\emph{K2} planet candidates. Finally, we offer suggestions for occurrence analysis and summarize our findings in Sections \\ref{sec:suggest} and \\ref{sec:summary}. \n\n\n\\section{Target Sample}\n\\label{sec:stellarSample}\n\n\n\nWe first downloaded the most up-to-date raw target pixel files (TPFs) from MAST\\footnote{\\href{https:\/\/archive.stsci.edu\/k2\/}{https:\/\/archive.stsci.edu\/k2\/}}, which recently underwent a full reprocessing using a uniform procedure \\citep{cad20}. To ensure consistency among our data set, we used the final data release (V9.3) for all available campaigns considered in this paper.\\footnote{C8, C12, and C14 were released after our analysis was performed. Thus, a previous version (V9.2) was used for this catalog. Overall, we expect the differences to be minor and leave inclusion of these updated TPFs for future iterations of the catalog.} We began our search using the entire EPIC target list (381,923 targets; \\citealt{hub16}). Of this list, we found that 212 targets had FITS file issues our software could not rectify. 
For all remaining targets we used the {\\tt K2SFF} aperture \\#15 \\citep{van16b}, which is derived from the \\emph{Kepler} pixel response function and varies in size in accordance with the target's brightness. Circular apertures exceeding $10\\arcsec$ radii are prone to significant contamination from nearby sources. Our vetting software, {\\tt EDI-Vetter}, is able to account for this additional flux, but the software's accuracy begins to decay beyond a radius of $20\\arcsec$. Additionally, large apertures have a tendency to contain multiple bright sources, providing further complications. To address this issue, we put an upper limit on the target aperture size (79 pixels; $\\sim20\\arcsec$ radius). This cut removed 4548 targets that have apertures we deemed too large. Of these rejected targets, 492 are in Campaign 11, which contains the crowded galactic bulge field. After applying these cuts, 377,163 targets remained and were then used as our baseline stellar sample.\n\n\n\\begin{figure*}\n\\centering \\includegraphics[height=7cm]{K2_Galaxy.png}\n\\caption{The galactic distribution of \\emph{K2} target stars. The disk structure follows the interpretation provided by \\cite{hay15}. The thick disk consists of stars with a heightened abundance of $\\alpha$-chain elements (O, Ne, Mg, Si, S, Ar, Ca, and Ti) compared to the stars in the thin disk \\citep{wal62}. Additionally, the $\\alpha$-poor disk begins to flare up beyond the solar neighborhood. The galactic coordinates presented here have been calculated using \\emph{Gaia} DR2 \\citep{gai18}.\n\\label{fig:galaxy}}\n\\end{figure*}\n\nTo parameterize this list of targets we relied on the values derived by \\cite{har20}, which used a random forest classifier, trained on LAMOST spectra \\citep{su04}, to derive stellar parameters from photometric data (222,088 unique targets). 
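As a quick check on the aperture cut above, the 79-pixel limit can be converted to an on-sky radius by approximating the aperture as a circle and adopting a \\emph{Kepler} plate scale of $\\sim3.98\\arcsec$ per pixel (an assumed value, not a number taken from this catalog):

```python
import math

PLATE_SCALE = 3.98  # arcsec per pixel (assumed Kepler/K2 plate scale)

def aperture_radius_arcsec(n_pixels):
    """Effective radius of a circular aperture built from n_pixels
    square pixels: n_pixels * (plate scale)^2 = pi * r^2."""
    return math.sqrt(n_pixels / math.pi) * PLATE_SCALE

# The paper's 79-pixel upper limit corresponds to roughly 20 arcsec.
print(round(aperture_radius_arcsec(79), 1))
```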
Additionally, the available \\emph{Gaia} DR2 parallax information was incorporated to significantly improve our understanding of the stellar radii. From this catalog, additional measurements of stellar mass, metallicity, effective temperature ($T_{\\text{eff}}$), and surface gravity ($\\log(g)$) were provided and used in this study. In cases where stellar parameters were not available in \\cite{har20} (typically due to their absence in the \\emph{Gaia} DR2 catalog or not meeting strict photometric selection criteria), we used the stellar parameters derived for the EPIC catalog \\citep{hub16} (94,769 targets). Finally, we used solar values for targets that lack parameterization in both catalogs (41,061 targets).\\footnote{\\cite{zin20b} showed that systematic differences between stellar catalogs are minor. Additionally, we emphasize our pipeline's agnosticism to stellar parameterization.}\n\n\nExcluding the 41,061 targets without stellar parameters, we isolated 223,075 stars that appear to be main-sequence dwarfs based on their surface gravity ($\\log(g)>4$). Within this subset we identified 48,702 targets as M dwarfs ($T_{\\text{eff}}<4000K$), 164,569 as FGK dwarfs ($4000K \\leq T_{\\text{eff}} \\leq 6500K$), and the remaining 9,804 as hotter dwarfs ($T_{\\text{eff}}>6500K$). This abundance of M dwarfs is nearly 17 times that of the \\emph{Kepler} sample (2808 M dwarfs; \\citealt{ber20}) and can be clearly seen in Figure \\ref{fig:HR}. Another noteworthy feature of the \\emph{K2} stellar sample is the wide range of galactic latitudes covered by the 18 campaigns. This enabled \\emph{K2} to probe different regions of the galactic sub-structure. In contrast, a majority of the \\emph{Kepler} stellar sample was bounded within the thin disk (see Figure 1 of \\citealt{zin20b} for a comparison). In Figure \\ref{fig:galaxy} we show this galactic sub-structure span for \\emph{K2}. Making broad cuts in galactic radius ($R_g$) and height ($b$) we can distinguish thick disk ($R_g<8$kpc and $|b|>0.5$kpc) from thin disk stars ($|b|<0.5$kpc). 
Overall, we found 191,002 dwarfs located in the thin disk, while 17,123 dwarfs reside in the thick disk. This sample distinction is interesting and may warrant further consideration in future occurrence studies.\n\n\nThe underlying stellar sample is important because it is the population from which the planet candidates are drawn. However, the stellar parameters are subject to change as more comprehensive data and more precise measurements become available (e.g., the upcoming \\emph{Gaia} DR3). To ensure our catalog remains relevant upon improved stellar parameterization, we take an agnostic approach to the available stellar features throughout our pipeline. In other words, each light curve is treated consistently regardless of the underlying target parameters. The only caveat to this claim is our treatment of the transit limb-darkening. Our transit model fitting routine requires quadratic limb-darkening parameters. We derived the appropriate values using the ATLAS model coefficients for the \\emph{Kepler} bandpasses \\citep{cla12}, in concert with the available stellar parameters. This minor reliance on our measured stellar values will have negligible effects on the presented catalog. In the case where a signal is transiting with a high impact parameter, this limb-darkening choice can be important, modifying the inferred radius ratio on the order of 1\\%. However, this boundary case is well within the uncertainty of our measured radius ratio values ($\\sim4\\%$) and therefore does not significantly impact our inferred radius measurements. All other detection and vetting metrics in our pipeline are independent of this parameterization. \n\n\n\n\\section{The Automated Pipeline}\n\\label{sec:pipe}\nOur automated light curve analysis pipeline consists of four major components: pre-processing, detrending, signal detection, and signal vetting. 
In Figure \\ref{fig:diagram} we provide a visual overview of this procedure, and now briefly summarize the execution of each step.\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{figureDiagramsmall.pdf}\n\\caption{ An overview schematic of the automated detection pipeline used in the current study to identify \\emph{K2} planet candidates. This follows the same procedure described in \\cite{zin20a}. The effects of pre-processing and detrending have also been depicted for EPIC 211422469, illustrating the importance of the corresponding steps. \n\\label{fig:diagram}}\n\\end{figure}\n\n\\subsection{Pre-Processing}\nThe raw flux measurements from \\emph{K2} are riddled with systematic noise components due to the thruster firing approximately every six hours (required to re-align the telescope pointing) and the momentum dumps every two days. This spacecraft movement smears the target across several different pixels, all with unique noise and sensitivity properties, leading to a significant increase in the overall light curve noise. We passed all raw light curves through the {\\tt EVEREST} software \\citep{lug16,lug18}, which minimizes this flux dispersion issue using pixel-level decorrelation (PLD) to fit and remove noise attributed to the spacecraft roll. In Figure \\ref{fig:diagram} we show how {\\tt EVEREST} reduces the noise, as measured by the root mean-square (RMS), of the EPIC 211422469 light curve by a factor of four. However, this pre-processing also has the ability to reduce transit signals or remove them completely. Thus, it is essential that any injection\/recovery tests address this concern by injecting signals into the data before this pre-processing, as we later discuss in Section \\ref{sec:complete}.\n\n\n\n\\subsection{Detrending}\nIn the example light curve for EPIC 211422469 (Figure \\ref{fig:diagram}) there is a clear long-term trend remaining in the data after pre-processing. 
The goal of detrending is to remove this red-noise component and any stellar variability, producing a clean, white-noise-dominated time series. We used two Gaussian process (GP) models, with ``rotation'' kernels, to remove these long- and medium-term trends. These kernels use a series of harmonics to match and remove periodic and red-noise trends in the photometry. The first pass GP looked for general flux drifting (periods $>10$ days), subtracting the appropriate model. The second pass GP identified and removed medium-term trends (5 days $<$ period $<10$ days) often associated with stellar variability. In testing, we found these two GP passes were effective in removing red-noise from the data without significantly impacting transit signals. The Ljung-Box test (a portmanteau test for all autocorrelation lags; \\citealt{lju78}) found that $67\\%$ of our processed light curve residuals produced p-values greater than 0.001, indicating a lack of statistically significant short-term correlated structure. However, stellar harmonics with short periods (usually $<0.5$ days) continued to contaminate the remaining 33\\% of our light curves. To address these trends we fit and removed sine waves with periods less than $0.5$ days. These signals all fall below the 0.5 day minimum period we require for planet candidacy, but still have the ability to contaminate the light curve. In an effort to minimize over-fitting, we required the signal amplitude to be at least $10\\sigma$ in strength. If this requirement was not met, the harmonic removal was not applied. \n\nThe period limits used for both the GP models and the harmonic fitter attempted to preserve transit signals, avoiding periods which are prone to transit removal (0.5 days $<$ period $<5$ days). Unfortunately, some stellar variability exists in this forbidden period range, making it difficult to address every unique situation. 
Using an auto-regressive integrated moving average algorithm (as suggested by \\citet{cac19a,cac19b}) may reduce some of these remaining correlated residuals, but this methodology also has limitations in dealing with stellar variability and is beyond the scope of this paper.\n\nIn addition, these detrending mechanisms are capable of reducing or (in the case of very deep transits) removing the signal altogether (e.g., the \\emph{Kepler} harmonic removal highlighted by \\citealt{chr13}). We acknowledge these costs in performing our automated detrending procedure, knowing the effects will be accounted for in our catalog completeness measurements (see Section \\ref{sec:complete}). \n\n\\subsection{Signal Detection}\nOnce the light curve has been scrubbed with our pre-processing and detrending routine, the flux measurements can be examined for transit signals with {\tt TERRA} \\citep{pet13b}. This algorithm uses a box shape to look for dips in the light curve, enabling a quick examination of each target. To measure the signal strength we rely on the same metric used for the \\emph{Kepler} transiting planet search (TPS), the multiple event statistic \\citep[MES;][]{je02}, which assumes a linear ephemeris to indicate the strength of the whitened signal. For our detection threshold we require a signal MES value greater than $8.68\\sigma$. This threshold, which is higher than that of the \\emph{Kepler} TPS ($7.1\\sigma$), was selected to reduce the total signal count to 20\\% of our total target sample. We also bound the period search of {\tt TERRA}, ranging from 0.5 days to a period where three transits could be detected (nominally 40 days, varying slightly from campaign to campaign). Any signal above our MES threshold and between these period ranges is given the label of threshold-crossing event (TCE). 
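To make the detection step concrete, the sketch below folds a toy light curve at a trial period and computes a MES-like signal-to-noise ratio for a box-shaped dip. This is a schematic illustration only, not the {\tt TERRA} implementation; the folding, noise estimate, and all toy values are our own simplifying assumptions:

```python
import math
import random

def folded_depth_snr(time, flux, period, t0, duration):
    """MES-like detection statistic: the mean in-transit flux deficit,
    after folding at a trial period, divided by its standard error."""
    in_t, out_t = [], []
    for t, f in zip(time, flux):
        phase = (t - t0 + period / 2.0) % period - period / 2.0
        (in_t if abs(phase) < duration / 2.0 else out_t).append(f)
    if len(in_t) < 3 or len(out_t) < 3:
        return 0.0
    base = sum(out_t) / len(out_t)
    depth = base - sum(in_t) / len(in_t)
    noise = math.sqrt(sum((f - base) ** 2 for f in out_t) / (len(out_t) - 1))
    return depth / (noise / math.sqrt(len(in_t)))

# Toy light curve: 60 days of ~30 min cadences with white noise,
# plus a 0.5% deep transit every 3.0 days (illustrative values).
random.seed(1)
time = [i * 0.02 for i in range(3000)]
flux = [1.0 + random.gauss(0.0, 1e-3) for _ in time]
for i, t in enumerate(time):
    if abs((t - 1.0 + 1.5) % 3.0 - 1.5) < 0.1:
        flux[i] -= 5e-3
print(folded_depth_snr(time, flux, 3.0, 1.0, 0.2) > 8.68)
```

Folding at the correct period concentrates the dip and pushes the statistic well past the $8.68\sigma$ threshold, while a wrong trial period smears it out.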
\n\n\n\\subsubsection{Multi-planet systems}\n\nMulti-planet systems are rich in information, but require careful consideration when attempting to identify these asynchronous signals. Ideally, a model of the first detected signal would be fit and subtracted from the light curve before re-searching the data for additional signals. However, real data are noisy, making such a task difficult to automate. Should the signal be incorrectly fit, the model subtraction will leave significant residuals, which will continuously trigger a detection upon reexamination. Moreover, false positives that do not fit the transit model will also leave significant residuals. Finally, astrophysical transit timing variations (TTVs) are not easily accounted for in an automated routine \\citep{wu13,had14,had17}. Thus, deviations from simple periodicity will leave significant residuals. Dealing with these complex signals, without loss of data, remains a point of continuous discussion (e.g., \\citealt{sch17}).\n\nAs was done in many previous pipelines, we relied on masking of the signal after each iterative TCE detection \\citep{jen02,dres15,sin16,kru19,zin20a}. This method has its own faults, as it requires some photometry to be discarded upon each signal detection. \\citet{zin19} showed that masking $3\\times$ the transit duration can make multi-planet systems more difficult to detect. To mitigate this loss of data, we used a mask of $2.5\\times$ the transit duration ($1.25\\times$ the transit duration on either side of the transit midpoint) after each signal was detected. At this point the light curve was reexamined for an additional signal. 
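A minimal sketch of this duration-based masking, with hypothetical function and variable names (not the pipeline's actual code):

```python
def transit_mask(time, period, t0, duration):
    """Return True for cadences to KEEP: points farther than 1.25
    transit durations from the nearest transit midpoint, i.e., a
    2.5x-duration mask centered on each transit."""
    half_window = 1.25 * duration
    keep = []
    for t in time:
        phase = (t - t0 + period / 2.0) % period - period / 2.0
        keep.append(abs(phase) > half_window)
    return keep

time = [i * 0.02 for i in range(1500)]  # 30 days of ~30 min cadences
keep = transit_mask(time, period=3.0, t0=1.0, duration=0.2)
# Fraction of data discarded per detection: 2.5 * 0.2 / 3.0, about 17%.
print(round(1.0 - sum(keep) / len(keep), 2))
```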
This process was repeated six times or until the light curve lacked a TCE, enabling this pipeline to detect up to six planets per system.\n\n\\subsubsection{Skye Excess TCE Identification}\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{Skyplot_C04.pdf}\n\\caption{ An illustration of the Skye excess TCE identification process for Campaign 4. The expected white noise range for this number of cadences is plotted in green ($3.63\\sigma$). The median number of TCEs is shown by a central blue dotted line. We highlight the rejected cadences with a red x. These 16 cadences exceed the Skye excess limit and are masked in our final signal search. \n\\label{fig:sky}}\n\\end{figure}\n\nCertain cadences are prone to triggering TCEs within the population of light curves. These can usually be attributed to spacecraft issues and are likely not astrophysical. Inspired by the ``Skye'' metric used in \\citet{tho18}, we minimized this source of contamination by considering the total number of TCEs with transits that fall on each cadence. The median value and expected white noise range were then calculated using:\n\n\\begin{equation}\n\\sqrt{2}\\; \\text{erfcinv}(1\/N_{\\text{cad}}) \\; \\sigma,\n\\label{eq:sky}\n\\end{equation}\nwhere erfcinv is the inverse complementary error function, $N_{\\text{cad}}$ is the total number of cadences in the campaign, and $\\sigma$ is the median absolute deviation in the number of TCEs detected at each cadence. This calculated value represents the largest deviation expected from this number of cadences under the assumption of perfect Gaussian noise. Any cadences that exceed this limit are likely faulty and warrant masking. In Figure \\ref{fig:sky} we provide an example of this procedure for Campaign 4. 
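Equation \\ref{eq:sky} can be evaluated with the Python standard library by noting that $\\sqrt{2}\\,\\text{erfcinv}(1\/N_{\\text{cad}})$ equals the $(1-1\/(2N_{\\text{cad}}))$ quantile of a standard normal distribution. The cadence count used below is illustrative, not the exact Campaign 4 value:

```python
from statistics import NormalDist

def skye_threshold(n_cadences, sigma):
    """Largest deviation expected from pure Gaussian noise over
    n_cadences draws: sqrt(2) * erfcinv(1 / n_cadences) * sigma,
    rewritten via the standard normal quantile function."""
    return NormalDist().inv_cdf(1.0 - 0.5 / n_cadences) * sigma

# A few thousand cadences lands near the 3.63-sigma limit quoted
# for Campaign 4 in the figure above (cadence count assumed).
print(round(skye_threshold(3700, 1.0), 2))
```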
To further assist future studies we made the Skye mask for each campaign publicly available.\\footnote{\\href{http:\/\/www.jonzink.com\/scalingk2.html}{http:\/\/www.jonzink.com\/scalingk2.html}}\n\nOnce established, the Skye mask was used to reanalyze all light curves, removing problematic cadences. The resulting list of significant signals was then given the official TCE label and allowed to continue through our pipeline. Overall, we found 140,046 TCEs from 52,192 targets. However, the vast majority of these signals are false positives and required thorough vetting.\n\n\n\\subsection{Signal Vetting}\nTo parse through the 140,046 TCEs and identify the real planet candidates, we employed our vetting software {\tt EDI-Vetter} \\citep{zin20a}. This routine builds upon the metrics developed for the \\emph{Kepler} TPS ({\tt RoboVetter}; \\citealt{tho18}), with additional diagnostics created to address \\emph{K2}-specific issues (i.e., the systematics caused by the spacecraft). These metrics attempt to replicate human vetting by looking for specific transit features used to discern false positive signals. We now briefly discuss our planet candidacy requirements, but encourage readers to refer to \\citet{zin20a} for a more thorough discussion. \n\n\n\n\\subsubsection{Previous Planet Check}\nThis test looks to identify duplicate signals in the light curve (as originally discussed in Section 3.2.2 of \\citealt{cou16}). This test was only applied if the light curve produced more than one TCE. In such cases, the period and ephemeris were tested to ensure the second signal was truly unique. If not, this repeat identification was labeled a false positive. The goal of this test was to remove detections of the previous signal's secondary eclipse and signals that were not properly masked, leading to their re-detection.\n\n\\subsubsection{Binary Blending}\n\\label{sec:bblend}\nThe goal of this metric is to remove eclipsing binary (EB) contaminants from our catalog. 
There are two attributes that make EB events distinct from planet transits: a deep transit depth and a high impact parameter ($b$). Leveraging these two features, we used the formula first derived by \\citet{bat13} and then modified by \\cite{tho18} to identify these contaminants, while remaining agnostic to the underlying stellar parameters:\n\n\\begin{equation}\n\\frac{R_{pl}}{R_\\star} + b\\le 1.04,\n\\label{eq:EB}\n\\end{equation}\nwhere $R_{pl}\/R_{\\star}$ represents the ratio of the transiting planet and the stellar host radii. Should this metric be exceeded, the TCE would be flagged as a false positive, removing most EBs from our sample.\n\nHowever, this diagnostic assumes the transit depth provides an accurate measure of $R_{pl}\/R_{\\star}$. If an additional source is within the target photometric aperture, the depth of the transit can be diluted by this additional flux, leading to an underestimation of $R_{pl}\/R_{\\star}$. To address this potential flux blending issue, we cross-referenced our target list against the \\emph{Gaia} DR2 catalog \\citep{gai18}. If an additional source is in or near the aperture, we calculated the expected flux contamination using available photometry from \\emph{Gaia} and \\emph{2MASS} \\citep{skr06}, correcting the $R_{pl}\/R_{\\star}$ value accordingly. If Equation \\ref{eq:EB} was not satisfied, the TCE was labeled as a false positive.\n\nOur inferred flux dilution correction assumed the signal in question originated from the brightest source encased by the target aperture.\\footnote{This assumption was enforced by our pipeline, which selected the brightest \\emph{Gaia} target within the aperture.} Therefore, transits emanating from a dimmer background star would experience a greater flux dilution and could evade this metric. 
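As a concrete sketch of Equation \\ref{eq:EB} combined with the dilution correction, one can assume the observed depth equals the true depth scaled by $F_{\\text{target}}\/(F_{\\text{target}}+F_{\\text{contam}})$ and that $R_{pl}\/R_{\\star}$ is approximately the square root of the true depth; the function names and toy values below are ours:

```python
import math

def corrected_radius_ratio(observed_depth, f_target, f_contam=0.0):
    """Undo flux dilution from a blended source in the aperture: the
    true depth is observed_depth * (1 + f_contam / f_target), and the
    radius ratio is approximately the square root of that depth."""
    return math.sqrt(observed_depth * (1.0 + f_contam / f_target))

def passes_binary_blend_check(observed_depth, b, f_target, f_contam=0.0):
    """Equation 2: reject as a likely EB when the (dilution-corrected)
    radius ratio plus the impact parameter exceeds 1.04."""
    return corrected_radius_ratio(observed_depth, f_target, f_contam) + b <= 1.04

# A 1% deep, near-central transit passes ...
print(passes_binary_blend_check(0.01, b=0.2, f_target=1.0))   # True
# ... but the same depth grazing the limb with 50% blended flux fails.
print(passes_binary_blend_check(0.01, b=0.99, f_target=1.0, f_contam=0.5))  # False
```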
We provide further discussion of this potential contamination rate in Section \\ref{sec:astrFP}.\n\n\n\\subsubsection{Transit Outliers}\nThis diagnostic was developed to deal with systematics specific to \\emph{K2}. We expect real candidates to produce dips in the stellar flux, but retain comparable light curve noise properties during the eclipse. In contrast, systematic events can produce a dip with noise properties independent of the light curve, producing a heightened RMS during the event. By measuring the RMS in and out of transit, we can identify significant changes and flag signals that have false positive-like RMS properties.\n\n\\subsubsection{Individual Transit Check}\n\\label{sec:ITC}\nUsing the formalism developed by \\citet{mul16} (the ``Marshall'' test), we look to identify individual transit events that appear problematic. If one of the transits is either dominating the signal strength or does not fit a transit profile appropriately, it is indicative of a systematic false alarm. By fitting each individual transit signal with four common systematic models (Flat, Logistic, Logistic-exponential, or Double-logistic), we looked for events that did not have an astrophysical origin. If an individual transit fit a systematic model better, that transit was masked and the light curve was re-analyzed to ensure the signal MES still remained above the $8.68\\sigma$ threshold before proceeding. In addition, we made sure the total signal strength was spread evenly across the observed transits. In cases where a single event dominated the signal strength, the TCE was labeled as a false positive. \n\n\\subsubsection{Even\/Odd Transit Test}\nThis vetting test looks for EBs that produce a strong secondary eclipse (SE). In some cases these SEs can be deep enough to trigger a signal detection at half the true period, folding the SE on top of the transit. To identify such contaminants, we separated every other individual transit into odd and even groups. 
We then re-fit the transit depth within each group and looked for significant discrepancies. In cases where the disparity is greater than $5\\sigma$, the signal was labeled as a false positive.\n\n\\subsubsection{Uniqueness Test}\nThis diagnostic is based on the ``model-shift uniqueness test'' \\citep{row15, mul15, cou16, tho18} and compares the noise profile of the folded light curve with that of the TCE in question. If the folded light curve contains several transit-like features, it is indicative of a systematic false alarm. We compared the strength of the next largest dip, beyond the initial signal and any potential secondary eclipse, in the phase-folded data to the transit in question, assessing the uniqueness and significance of the periodic event. If another similar magnitude dip existed in the folded time series, it likely originated from light curve noise and the TCE was deemed a false positive. \n\n\n\n\\subsubsection{Check for Secondary Eclipse}\nFollowing the methodology presented in Section A.4.1.2 and A.4.1.3 of \\citet{tho18}, we examined the transit signal for a secondary eclipse (SE). As previously noted, deep SE signals are a notable signature of an EB. However, some hot Jupiters are also capable of producing a SE, therefore detection of a SE is not in itself justification for exclusion. TCEs with a meaningful SE must exhibit a transit impact parameter of less than 0.8 or the SE must be less than 10\\% of the transit depth. If neither of these criteria were satisfied, the TCE was labeled as a false positive.\n\n\\subsubsection{Ephemeris Wandering}\nHarmonic signals have the ability to falsely trigger a TCE detection, but their inability to match the transit model leads to significant movement of the measured ephemeris. If the signal's transit mid-point, as detected by {\\tt TERRA}, changed by more than half the transit duration when optimized with our MCMC routine, the TCE earned a false positive label. 
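Several of the depth-comparison tests above, such as the even\/odd check, reduce to a simple significance estimate. The sketch below assumes Gaussian, independent depth uncertainties (the pipeline itself fits full transit models); all names and values are illustrative:

```python
import math

def depth_discrepancy_sigma(d_even, err_even, d_odd, err_odd):
    """Significance (in sigma) of the difference between the transit
    depths fit to the even and odd transit groups; values above 5
    suggest an EB detected at half its true orbital period."""
    return abs(d_even - d_odd) / math.sqrt(err_even ** 2 + err_odd ** 2)

# Consistent depths: passes the 5-sigma even/odd test.
print(depth_discrepancy_sigma(0.0100, 0.0004, 0.0103, 0.0004) < 5.0)  # True
# Primary and secondary eclipses folded together: flagged.
print(depth_discrepancy_sigma(0.0100, 0.0004, 0.0060, 0.0004) > 5.0)  # True
```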
\n\n\\subsubsection{Harmonic Test}\nIn addition to the ephemeris wandering metric, we also attempted to fit the light curve with a sine wave at the period of the detected TCE. If the amplitude of this harmonic signal was comparable to the TCE depth, or the strength of the harmonic signal was greater than $50\\sigma$, the TCE received a false positive label. To avoid misclassification due to period aliasing, we examined periods of $2\\times$ and $\\frac{1}{2}\\times$ the TCE period. Additionally, harmonic signals tend to trigger TCEs with long transit durations, which often correspond to the sine wave period. Therefore, we also tested harmonics with periods equaling $1\\times$, $2\\times$, and $3\\times$ the transit duration. If any of the examined phases exceeded our harmonic metric threshold, the TCE received a false positive label.\n\n\n\n\\subsubsection{Phase Coverage Test}\nIt is important that the transit signal has good phase coverage with the available \\emph{K2} data. In certain cases, a few outlier points, folded near an integer multiple of the \\emph{K2} cadence, can trigger a TCE. These limited-data signals are not meaningful and warrant a false positive label. In addition, the masking applied by our automated pipeline may remove large fractions of the TCE. To ensure we have good signal phase coverage, we examined the gap sizes in the phase-folded light curve. If large portions (more than roughly 30 minutes) of photometry are missing during ingress or egress, we labeled these signals as false positives. \n\n\n\n\\subsubsection{Period and Transit Duration Limits}\nWe limit the {\tt TERRA} software to period ranges beyond 0.5 days. 
However, this search algorithm is still able to identify periodic signals just below this user-defined threshold.\\footnote{This phenomenon is due to the method in which {\\tt TERRA} steps through period space, moving in integer numbers of cadences until it exceeds the period limits.} These boundary-case signals are usually astrophysical false positives and are difficult to distinguish from real planet candidates. Thus, we removed any TCEs with periods less than 0.5 days. Furthermore, we imposed a strict limit on the transit duration. If the TCE transit duration was greater than 10\\% of the period of the signal, we deemed it a false positive, as these signals are often misidentified harmonics. Moreover, a transiting signal with a duration exceeding this limit would correspond to an object orbiting within three stellar radii of its host, as expected for a short-period eclipsing binary. Thus, removing these signals improves the purity of our sample.\n\n\n\\subsubsection{Period Alias Check}\nMis-folding a real transiting signal on an integer multiple of the true signal period will trigger many of the previously described false positive flags. To avoid this misclassification, we tested period factors of $2\\times$, $\\frac{1}{2}\\times$, $3\\times$, and $\\frac{1}{3}\\times$ against the original signal. If any of the alternative models produced a likelihood value greater than 1.05 times that of the original period fit, we reran the entire vetting analysis using the corrected period.\n\n\\subsubsection{Ephemeris Match}\nDeep transit signals from bright sources (the parent) can pollute neighboring target apertures (the child) and produce transit artifacts. These signals can be identified by their nearly identical period and ephemeris measurements. 
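A minimal sketch of such a match test follows (a simplified stand-in for the full matching statistic; the tolerance value is illustrative, not the pipeline's):

```python
def shares_ephemeris(p1, t01, p2, t02, rel_tol=0.001):
    """Return True when two signals have nearly identical periods and
    their epochs coincide modulo the period---the signature of flux
    contamination from a neighboring (parent) source."""
    # periods must agree to within the fractional tolerance
    if abs(p1 - p2) / max(p1, p2) > rel_tol:
        return False
    # epochs must line up modulo the period (allowing wrap-around)
    dt = abs(t01 - t02) % p1
    return min(dt, p1 - dt) < rel_tol * p1
```

Two signals with identical 3-day periods and epochs offset by exactly three cycles would match; a one-third-phase epoch offset would not.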
We followed the procedures of \\citet{cou14b} and Section A.6 of \\citet{tho18} to identify these false positives, testing our candidate list against itself and our full TCE sample, ensuring previously rejected deep eclipsing binary signals were considered.\n\nWhen an ephemeris match is found, it is important to deduce the signal's true origin, since both the parent and child targets will be identified by the matching algorithm. It is expected that the child signal will be significantly polluted by stray starlight, reducing the expected transit depth. Thus, we assigned parenthood to the target with the largest transit depth signal and labeled all additional matches as false positives.\n\nOverall, the implementation of the ephemeris match vetting metric only reduced our catalog by $\\sim2\\%$, considerably less than the $\\sim6\\%$ reduction found for the \\emph{Kepler} DR25. This discrepancy can be attributed to the \\emph{Kepler} field's higher target density, which was dominated by faint stars that are more susceptible to this type of false positive. Therefore, the increased average brightness of \\emph{K2} targets and the reduced target density explain this reduction in ephemeris matches. \n\n\\begin{figure*}\n\\centering \\includegraphics[height=6.5cm]{hostStarBright.pdf}\n\\caption{The brightness and planet radius distributions of the \\emph{Kepler} and \\emph{K2} host stars, colored by host star effective temperature. The left panel shows the \\emph{Kepler} candidates as described in \\cite{ber20} and the right panel shows the catalog presented in this paper. \\emph{K2} targets tend to be slightly brighter---with larger planets, due to the reduced completeness---than \\emph{Kepler} targets, making them better candidates for follow-up surveys. 
\n\\label{fig:host}}\n\\end{figure*}\n\n\\subsubsection{Consistency Score}\nFinally, if the TCE passed all of the described vetting diagnostics without receiving a false positive flag, the light curve was reanalyzed---including detrending and signal detection---50 times. This final test measures the stochastic nature of the detrending and the MCMC parameter estimation. Any signal near one of the vetting thresholds would likely be pushed over during these reexaminations. The consistency score was then calculated as the fraction of reexaminations in which a given TCE passed all the vetting metrics. Any TCE with a consistency score greater than 50\\% was granted planet candidacy. Of the 1046 TCEs that initially passed all the {\\tt EDI-Vetter} thresholds, 806 met this final consistency requirement. \n\n\n\n\\section{The Planet Catalog}\n\\label{sec:catalog}\nUsing our fully automated pipeline, we produced a sample of \\emph{K2} transiting planet candidates suitable for demographic analysis. Within this catalog we found:\n\\begin{itemize}\n \\item 806 transit signals\n \\item 747 unique planet candidates\n \\item 57 multi-planet candidate systems (113 candidates)\n \\item 366 newly detected planet candidates\n \\item 18 newly identified multi-planet candidate systems (38 candidates).\n\\end{itemize}\nThe majority of new candidates were found in campaigns not exhaustively searched (C10--C18), with a few new candidates identified with low MES in early campaigns. Of the 806 transiting signals identified, 51 were detected in multiple overlapping campaigns. In the subsequent sections we discuss how the planet parameters were derived and describe the unique systems and candidates found in this catalog. \n\n\\subsection{Planet Parameterization}\nTo maintain the homogeneity of our sample, we used an automated routine to fit and estimate the planet and orbital features. 
As discussed in Section \\ref{sec:stellarSample}, we adopted the stellar parameters derived by \\citet{har20}, but found that 130 of the 747 detected planets did not have stellar characterization available. Fortunately, our pipeline is agnostic to changes in stellar features, enabling subsequent parameter revision. To avoid the assumption of solar values and provide the most current stellar parameters, we used the \\citet{har20} methodology, equipped with APASS (DR9; \\citealt{hen16}) and SkyMapper \\citep{onk19} photometry, to characterize 59 of the remaining host stars. The final 71 targets, which did not have the necessary photometric coverage and\/or parallax measurements for the aforementioned classification, were parameterized using the {\\tt isochrones} stellar modeling package \\citep{mor15},\\footnote{In order to maintain stellar parameter uniformity, we ran the other 628 planet hosts through {\\tt isochrones}. For these targets we fit a linear offset between our parameters and the {\\tt isochrones} parameters, and applied these offsets to the 71 {\\tt isochrones}-only targets.} ensuring all planets in our catalog have corresponding stellar measurements. Using the {\\tt emcee} software package \\citep{goo10, for13}, we measured the posterior distribution of the transiting planet parameters: the ephemeris, the radius ratio, the transit impact parameter, the period, the semi-major axis to stellar radius ratio, the transit duration, and the vetting consistency score. These values were all derived under the assumption of circular orbits. Furthermore, we provide diagnostic plots for each candidate. In Figure \\ref{fig:newExample} we show an example plot for a new sub-Neptune (EPIC 211679060.01), enabling swift visual inspection. However, it is important to clarify that all planets in our sample have been detected through our fully automated pipeline. These plots are only meant to help prioritize follow-up efforts. 
Figure \\ref{fig:host} shows how the distribution of \\emph{K2} host targets is skewed toward brighter stars, compared to the \\emph{Kepler} candidate hosts, enabling follow-up efforts for a majority of our catalog. \n\n\nWith careful consideration of the transit radius ratio ($R_{fit}$), the planet radius can be extracted. $R_{fit}$ is directly measured by the MCMC routine, but estimating the true planet radius ($R_{pl}$) required that we account for the stellar radius measurement ($R_{\\star}$) and potential contamination from nearby sources ($\\frac{F_{total}}{F_{\\star}}$). To attain our best estimate we used the following relation:\n\\begin{equation}\nR_{pl} = R_{\\textrm{fit}}\\; R_{\\star}\\; \\sqrt{\\frac{F_{\\textrm{total}}}{F_{\\star}}},\n\\end{equation}\n where $\\frac{F_{total}}{F_{\\star}}$ was calculated using the \\emph{Gaia} DR2 stellar catalog to identify contaminants in and near the photometric aperture, with a Gaussian point-spread function used to estimate the corresponding magnitude of contamination.\\footnote{We do not impose an upper $R_{pl}$ limit in our catalog, retaining our stellar parameter agnosticism. Therefore, 28 candidates exceed $30R_\\Earth$. We provide suggestions for dealing with these candidates in Section \\ref{sec:summary}.} In addition to the aforementioned flux complications, \\cite{zin20a} showed that the required detrending of \\emph{K2} photometry underestimates the radius ratio by a median value of 2.3\\%. We did not adjust our estimates of the planet radius to reflect this tendency, as the mode of this distribution indicated a majority of the measured radius ratios are accurate (see Figure 14 of \\citealt{zin20a}). However, we increased the uncertainty in our planet radius measurements to account for this additional detrending complication. 
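The dilution correction above can be evaluated directly; a minimal sketch (the function name and the solar-to-Earth radius conversion factor are our additions):

```python
import math

R_SUN_IN_R_EARTH = 109.2  # approximate solar-to-Earth radius conversion (assumed)

def planet_radius(r_fit, r_star, f_total_over_f_star):
    """R_pl = R_fit * R_star * sqrt(F_total / F_star), with R_star in
    solar radii and the result in Earth radii. The flux ratio corrects
    for dilution by contaminating sources in the photometric aperture."""
    return r_fit * r_star * R_SUN_IN_R_EARTH * math.sqrt(f_total_over_f_star)
```

An uncontaminated aperture ($F_{\textrm{total}}/F_{\star}=1$) leaves the geometric estimate unchanged; any contamination inflates the inferred radius.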
The overall planet radius uncertainty ($\\sigma_R$) was calculated by assuming parameter independence and adding all of these relevant factors in quadrature,\n\\begin{equation}\n\\begin{aligned}[t]\n\\sigma_R &= \\sqrt{\\sigma_{\\textrm{fit}}^2+\\sigma_{\\star}^2+\\sigma_F^2+\\sigma_{\\textrm{Off}}^2} \\;,& \\textrm{where}\\\\\n\\sigma_{\\textrm{Off}} & = 0.023\\; R_{pl} & \\textrm{and}\\\\\n\\sigma_F & = R_{pl}\\;\\frac{F_{\\textrm{total}}-F_{\\star}}{3.76\\; F_{\\textrm{total}}}\\;. &\n\\end{aligned}\n\\end{equation}\nHere, $\\sigma_{\\textrm{Off}}$ and $\\sigma_F$ represent the uncertainty due to detrending and flux contamination, respectively (see Section 8.2 of \\citealt{zin20a} for a thorough explanation and derivation of these parameters). Despite these additional contributions, the majority of the uncertainty stems from the radius ratio fit ($\\sigma_{\\textrm{fit}}$; $\\sim4\\%$) and the stellar radius measurement ($\\sigma_{\\star}$; $\\sim6\\%$). For most targets $\\sigma_F$ contributes an uncertainty of order $10^{-3}\\%$; however, the most extreme candidate (EPIC 247384685.01) had a measured $F_{\\star}$ that is only 70\\% of $F_{\\textrm{total}}$, contributing an additional $7\\%$ uncertainty to the radius measurement.\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{211679060.pdf}\n\\caption{A sample diagnostic plot for EPIC 211679060.01; the remaining candidate plots are available \\href{http:\/\/www.jonzink.com\/scalingk2.html}{online}. This planet is a new sub-Neptune found in C18. The grey points represent the light curve data used to extract the signal. The purple points show the binned average (with a bin width equal to 1\/6 the transit duration). \\label{fig:newExample}}\n\\end{figure}\n\n\n\n\\begin{deluxetable*}{lcc}\n\\tablecaption{The homogeneous catalog of \\emph{K2} planet candidates and their associated planet and stellar parameters. 
The visual inspection flags were manually assigned to help prioritize follow-up efforts. These indicators had no impact on the analysis performed by this pipeline. The known planet (KP) flag indicates a confirmed or validated planet. The planet candidate (PC) flag identifies an unconfirmed candidate and the low priority planet candidate (LPPC) flag designates a weak or more difficult to validate candidate. Finally, the false positive (FP) flag specifies candidates that are likely not planets. \\label{tab:catalog}}\n\\tablehead{\\colhead{Column} & \\colhead{Units} & \\colhead{Explanation}} \n\\startdata\n1 & --- & EPIC Identifier \\\\\n2 & --- & Campaign \\\\\n3 & --- & Candidate ID\\\\\n4 & --- & Found in Multiple Campaigns Flag \\\\\n5 & --- & Consistency Score \\\\\n6 & d & Orbital Period \\\\\n7 & d & Lower Uncertainty in Period \\\\\n8 & d & Upper Uncertainty in Period \\\\\n9 & --- & Planetary to Stellar Radii Ratio \\\\\n10 & --- & Lower Uncertainty in Ratio \\\\\n11 & --- & Upper Uncertainty in Ratio \\\\\n12 & $R_\\earth$ & Planet radius ($R_{pl}$) \\\\\n13 & $R_\\earth$ & Lower Uncertainty in $R_{pl}$ \\\\\n14 & $R_\\earth$ & Upper Uncertainty in $R_{pl}$ \\\\\n15 & d & Transit ephemeris ($t_0$) \\\\\n16 & d & Lower Uncertainty in $t_0$ \\\\\n17 & d & Upper Uncertainty in $t_0$ \\\\\n18 & --- & Impact parameter (b) \\\\\n19 & --- & Lower Uncertainty in b \\\\\n20 & --- & Upper Uncertainty in b \\\\\n21 & --- & Semi-major Axis to Stellar Radii Ratio ($a\/R_\\star$) \\\\\n22 & --- & Lower Uncertainty in $a\/R_\\star$ \\\\\n23 & --- & Upper Uncertainty in $a\/R_\\star$ \\\\\n24 & d & Transit Duration \\\\\n25 & $R_\\sun$ & Stellar Radius ($R_\\star$) \\\\\n26 & $R_\\sun$ & Lower Uncertainty in $R_\\star$ \\\\\n27 & $R_\\sun$ & Upper Uncertainty in $R_\\star$ \\\\\n28 & $M_\\sun$ & Stellar Mass ($M_\\star$) \\\\\n29 & $M_\\sun$ & Lower Uncertainty in $M_\\star$ \\\\\n30 & $M_\\sun$ & Upper Uncertainty in $M_\\star$ \\\\\n31 & K & Stellar Effective Temperature 
($T_\\textrm{eff}$) \\\\\n32 & K & Uncertainty in $T_\\textrm{eff}$ \\\\\n33 & dex & Stellar Surface Gravity (log(g)) \\\\\n34 & dex & Uncertainty in log(g) \\\\\n35 & dex & Stellar Metallicity [Fe\/H] \\\\\n36 & dex & Uncertainty in [Fe\/H] \\\\\n37 & --- & Stellar Spectral Classification \\\\\n38 & --- & Visual Inspection Classification \\\\\n\\enddata\n\\tablecomments{This table is available in its entirety in machine-readable form.}\n\\end{deluxetable*}\n\nWe provide our list of planetary parameters for this catalog with corresponding measurements of uncertainty in Table \\ref{tab:catalog}. A plot of the resulting planet period and radius distribution is provided in Figure \\ref{fig:catPl}. The deficit of planets near $2R_\\earth$ is in alignment with the radius gap identified in \\emph{Kepler} data \\citep{ful17} and with previously discovered \\emph{K2} candidates \\citep{har20}. This gap appears to be indicative of some planetary formation or evolution mechanism, the exact origin of which remains unclear. One theory suggests stellar photoevaporation removes the envelope of weakly bound atmospheres in this region of parameter space \\citep{owe17}, separating the super-Earths from the sub-Neptunes. Alternatively, the hot cores of young planets may retain enough energy to expel the atmosphere for planets within this gap (core-powered mass-loss; \\citealt{gup18}). However, determining which of these two mechanisms is the dominant source of this valley requires additional demographic data. The catalog derived here provides additional planets and the necessary sample homogeneity needed to examine this feature in greater detail, but such a task is beyond the scope of the current work.\n\n\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{CatalogPlanet.pdf}\n\\caption{The planet sample detected through our fully automated pipeline. The round markers show the new planet candidates (PCs) and + markers show the previously known PCs. 
The new candidates are uniformly distributed throughout the plot because many of them come from campaigns not previously examined. The markers have been colored by consistency score to show that most candidates consistently passed the vetting metrics. \n\\label{fig:catPl}}\n\\end{figure}\n\n\\subsection{Multi-Planet Yield}\n\\label{sec:mult}\n\nMulti-planet systems provide a unique opportunity to understand the underlying system architecture and test intra-system formation mechanisms (e.g., \\citealt{owe19}). Furthermore, these planets are more reliable, due to the low probability of identifying two false positives in a single light curve \\citep{lis14, sin16}. In our sample we detected 57 unique multi-planet systems, with three systems independently identified in more than one campaign (EPIC 211428897, 212012119, 212072539). In Figure \\ref{fig:multHist} we show the total observed multiplicity distribution for our catalog. Within this sample we did not find any multi-planet systems around A stars, but we identified 17 M dwarf systems and 40 FGK dwarf systems. In consideration of galactic latitude, we found 21 systems that lie more than $40\\degr$ above or below the galactic plane. EPIC 206135682, 206209135, and 248545986 all host three-planet systems, while the remaining 18 systems host only two. This multi-planet sample provides unique coverage of galactic substructure.\n\n\\subsubsection{Unique Systems}\n\nOur highest multiplicity system is EPIC 211428897, with four Earth-sized planets orbiting an M dwarf (system first identified by \\citealt{dre17}). Our pipeline found this system independently in two of the overlapping fields (C5 and C18), further strengthening its validity. In addition, \\citet{kru19} identified a fifth candidate with a period of 3.28 days. 
Despite the clear abundance of planet candidates, the \\emph{K2} pixels span $3.98\\arcsec$, and \\emph{Gaia} DR2, which {\\tt EDI-Vetter} uses to identify flux contamination, can only resolve binaries down to $1\\arcsec$ for $\\Delta \\mathrm{mag}\\lesssim3$ \\citep{fur17}. Thus, high-resolution imaging is necessary for system validation. \\citet{dre17} observed this system using Keck NIRC2 and Gemini DSSI, and found a companion star within $0.5\\arcsec$. Since these planets are small, the likely $\\sim \\sqrt{2}$ radius increase, due to flux contamination, will not invalidate their candidacy \\citep{cia17,fur17}. However, it remains unclear whether all of the planets are orbiting one of the stars or some mixture of the two. This will require further follow-up, and remains the subject of future work.\n\n\n\nThrough our analysis we identified 18 new multi-planet systems. EPIC 211502222 had been previously identified as hosting a single sub-Neptune with a period of 22.99 days by \\citet{ye18}. Our pipeline discovered an additional super-Earth at 9.40 days, promoting this G dwarf to a multi-planet host. The remaining 17 systems are entirely new planet candidate discoveries, existing in campaigns not exhaustively searched (C12--C18). Notably, the K dwarf EPIC 249559552 hosts two sub-Neptunes that appear to be in a 5:2 mean-motion resonance. These resonant systems are important because of the potential detection of additional planets through transit-timing variations (e.g., \\citealt{hol16}). EPIC 249731291 is also interesting since it is an early-type F dwarf (or sub-giant) system with two short-period gas giants.\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{multEdit.pdf}\n\\caption{ A histogram showing the observed system multiplicity distributions of our catalog (Scaling \\emph{K2}) and the \\emph{Kepler} DR25 \\citep{tho18}. The longer data span and reduced noise properties enabled \\emph{Kepler} to identify higher multiplicity systems. 
In addition, we provide the distributions as a function of stellar spectral type (M: $T_{\\text{eff}}<4000K$; FGK: $4000\\le T_{\\text{eff}}\\le 6500K$; A: $T_{\\text{eff}}>6500K$). \n\\label{fig:multHist}}\n\\end{figure}\n\n\\subsection{Low-Metallicity Planet Host Stars}\nThe core-accretion model indicates a link between stellar metallicity and planet formation \\citep{pol96}. Observational evidence of this connection was first identified with gas giants \\citep{san04, fis05}. However, the recent findings of \\citet{tes19} complicate this narrative by showing a lack of correlation between planetary residual metallicity and stellar metallicity. Additionally, a direct comparison of metal-rich and metal-poor planet hosts in the \\emph{Kepler} super-Earth and sub-Neptune populations found no clear difference in planet occurrence \\citep{pet18}. This is further indicative of a complex formation process. Thus, identification of low-metallicity planet hosts enables us to test our understanding of formation theories and to identify subtle features. Within our sample of planets, we found four candidates orbiting host stars with abnormally low stellar metallicity ([Fe\/H]$<-0.75$). Upon further examination, two of these candidates (EPIC 212844216.01 and 220299658.01) have radii greater than $30R_\\Earth$, indicative of an astrophysical false positive. The remaining two super-Earth sized planets orbit a low stellar metallicity M dwarf (EPIC 210897587; [Fe\/H]$=-0.831\\pm0.051$). This system has previously been validated using WIYN\/NESSI high resolution speckle imaging \\citep{hir18} and appears in tension with the expectations of core-accretion in-situ formation, providing constraints for planet formation models.\n\n\n\n\n\\section{Measuring Completeness}\n\\label{sec:complete}\n\nAny catalog of planets will inherit some selection effects due to the methodology of detection, limitations of the instrument, and stellar noise. 
These biases will affect the sample completeness and must be accounted for when conducting a demographic analysis. The selection of transiting planets can be addressed using analytic arguments, but the instrument and stellar noise contributions to the sample completeness are dependent on the stellar sample and the specifics of the instrument. With an automated detection pipeline, this detection efficiency mapping can be achieved through the implementation of an injection\/recovery test. Here, artificial signals are injected into the raw photometry and run through the automated software to test the pipeline's recovery capabilities, directly measuring the impact of instrument and stellar noise on the catalog. Many previous studies have used this technique on \\emph{Kepler} data, yielding meaningful completeness measurements \\citep{pet13b,chr13,chr15,chr20,bur17}. \n\n\n\\subsection{Measuring CDPP}\n\\label{sec:cdpp}\nThe first step in computing the detection efficiency is to establish a noise profile for each target light curve. With these values available, the signal strength can be estimated given the transit parameters, enabling an estimate of the detection likelihood for a given signal. \\emph{Kepler} used the combined differential photometric precision (CDPP) metric to quantify the expected target stellar variability and systematic noise, as described in \\citet{chr12}. For all targets we provide CDPP measurements for the following transit durations: 1, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, and 10 hours, spanning the range of occultation timescales expected for \\emph{K2} planet candidates.\n\nTo compute these values we randomly injected a weak ($3\\sigma$) transit signal, with the appropriate transit duration, into each detrended light curve. The strength of the recovered signal, normalized by the injected signal depth, provided a measure of the target CDPP. 
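As a rough illustration of the quantity being estimated, a duration-binned rms gives a comparable noise proxy (a crude stand-in of our own construction, not the injection-based estimator the pipeline actually uses):

```python
import math

def cdpp_proxy(flux, cadence_min=29.4, duration_hr=8.0):
    """Approximate a duration-timescale CDPP (in ppm) as the rms of the
    normalized flux after averaging in transit-duration-long bins."""
    # number of cadences spanning one transit duration
    n = max(1, int(round(duration_hr * 60.0 / cadence_min)))
    # non-overlapping bin means on the duration timescale
    bins = [sum(flux[i:i + n]) / n for i in range(0, len(flux) - n + 1, n)]
    mean = sum(bins) / len(bins)
    var = sum((b - mean) ** 2 for b in bins) / len(bins)
    return 1e6 * math.sqrt(var)
```

A perfectly flat light curve yields zero by construction, while any duration-timescale variability raises the proxy.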
This injection process was carried out 450 times to ensure a thorough and robust examination of the light curve, sampling the full light curve for four-hour transits ($\\sim$20 day period) and 25\\% of the light curve for one-hour transits ($\\sim$0.5 day period). To measure the impact of differing transit durations, this process was executed for each respective CDPP timescale. For a detailed account of this procedure see Section 4 of \\citet{zin20a}. \n\nIn Figure \\ref{fig:cdpp} we show the measured 8-hour CDPP for the targets within this sample. Overall, there is a clear correlation with photometric magnitude, demonstrating our ability to quantify the light curve noise properties. Moreover, despite the introduction of additional systematic noise from the spacecraft pointing, the detrended \\emph{K2} photometry is still near the theoretical noise limit at the brighter magnitudes. The complete set of measurements is available in Table \\ref{tab:cdpp}.\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{cdppMag.pdf}\n\\caption{The light curve noise (the 8-hour CDPP measurements) for our target sample as a function of the \\emph{Kepler} broadband magnitude. The median CDPP markers represent the median within a one-magnitude bin and the corresponding bin standard deviation. The \\emph{Kepler} noise floor represents the shot and read noise expected from the detector alone \\citep{jen10b}.\n\\label{fig:cdpp}}\n\\end{figure}\n\n\\begin{deluxetable*}{lcc}\n\\tablecaption{Description of the CDPP measurements of each stellar target. \\label{tab:cdpp}}\n\\tablehead{\\colhead{Column} & \\colhead{Units} & \\colhead{Explanation}} \n\\startdata\n1 & ... & EPIC identifier\\\\\n2 & ... 
& Campaign\\\\\n3 & ppm & CDPP RMS Value for Transit of 1.0 hr \\\\\n4 & ppm & CDPP RMS Value for Transit of 1.5 hr \\\\\n5 & ppm & CDPP RMS Value for Transit of 2.0 hr \\\\\n6 & ppm & CDPP RMS Value for Transit of 2.5 hr \\\\\n7 & ppm & CDPP RMS Value for Transit of 3.0 hr \\\\\n8 & ppm & CDPP RMS Value for Transit of 4.0 hr \\\\\n9 & ppm & CDPP RMS Value for Transit of 5.0 hr \\\\\n10 & ppm & CDPP RMS Value for Transit of 6.0 hr \\\\\n11 & ppm & CDPP RMS Value for Transit of 7.0 hr \\\\\n12 & ppm & CDPP RMS Value for Transit of 8.0 hr \\\\\n13 & ppm & CDPP RMS Value for Transit of 9.0 hr \\\\\n14 & ppm & CDPP RMS Value for Transit of 10.0 hr \\\\\n\\enddata\n\\tablecomments{This table is available in its entirety in machine-readable form.}\n\\end{deluxetable*}\n\n\n\\subsection{Injection\/Recovery}\n\nThere are several points along the pipeline at which the signal can be injected. Ideally, injections would be made on the rawest form of photometry (at the pixel level), but doing so is computationally expensive and provides a marginal gain in completeness accuracy (see \\citealt{chr17} for the effects on the \\emph{Kepler} data set). Moving just one step downstream, the injections can be more easily made at the light-curve level. Here, the artificial signal is introduced into the aperture-integrated flux measurements, followed by pre-processing, detrending, and signal detection. Finally, the most accessible, but least accurate, method is to inject signals after pre-processing (e.g., \\citealt{clo20}). Since the pre-processed {\\tt EVEREST} light curves are readily available, this method requires minimal computational overhead. However, it fails to capture the impact of pre-processing on the sample completeness. These effects are especially important for \\emph{K2} photometry, which undergoes significant modification before being searched. 
Following the procedures of our previous study \\citep{zin20a}, we injected our artificial signals into the aperture-integrated light curves (before pre-processing; see Figure \\ref{fig:diagram}). In the next few paragraphs we briefly outline our methodology, but suggest interested readers reference Section 5 of our previous work for a more detailed account.\n\nUsing the {\\tt batman} Python package \\citep{kre15}, we created and injected artificial transits into the raw flux data. For each target, we uniformly drew a period from [0.5, 40] days and an $R_{pl}\/R_{\\star}$ from a log-uniform distribution with a range [0.01,0.1]. The ephemeris was uniformly selected, with the requirement that at least three transits reside within the span of the light curve, and the impact parameter was uniformly drawn from [0,1].\\footnote{In the current iteration of this pipeline we have removed the eclipsing binary impact parameter limit mentioned in Section 5 of \\citet{zin20a}. In doing so, we provide a more accurate accounting of the impact of grazing transits.} All injections were assumed to have zero eccentricity. This assumption is motivated by the short period range of detectable \\emph{K2} planets, many of which have likely undergone tidal circularization. In addition, eccentricity only affects the transit duration, making its impact on completeness minor. The limb-darkening parameters for the artificial transits were dictated by the stellar parameters discussed in Section \\ref{sec:stellarSample}. Using the ATLAS model coefficients for the \\emph{Kepler} bandpasses \\citep{cla12}, we derived the corresponding quadratic limb-darkening parameters from each target's stellar attributes. In cases where stellar parameters did not exist, we assumed solar values.\n\nWe expect the pipeline's recovery capabilities to scale with the signal strength. In order to quantify this effect, one must have a measure of the expected injection signal strength (MES). 
Equipped with our CDPP measurements, this value is directly related to the transit depth (depth) and can be analytically found using:\n\\begin{equation}\n\\text{MES}=C\\;\\frac{\\text{depth}}{\\text{CDPP}_{\\text{t}_{\\text{dur}}}}\\;\\sqrt{N_{\\text{tr}}},\n\\label{eq:MES}\n\\end{equation}\nwhere $\\text{CDPP}_{\\text{t}_{\\text{dur}}}$ represents the target's CDPP measure for the given transit duration (achieved through interpolation of the measured CDPP values discussed in Section \\ref{sec:cdpp}) and $N_{\\text{tr}}$ is the number of available transits within the data span. The $C$ value is a global correction factor that renormalizes the analytic equation to match the detected signal values. For this data set we found $C=0.9488$. Using this equation, we calculated the expected MES values for all injections and measured the sample completeness.\n\nOnce our injections were performed, we passed this altered photometry through the software pipeline, testing our recovery capabilities. We considered a planet successfully recovered if it met the following criteria: the detected signal period and ephemeris were within $3\\sigma$ of the injected values and the signal passed all of the vetting metrics. The results of this test can be seen in the completeness map in Figure \\ref{fig:TCEcomplete}. To quantify our detection efficiency as a function of injected MES, we used a uniform kernel density estimator (KDE; width of 0.25 MES) to measure our software's recovery fraction. These values were then fit with a logistic function of the form:\n\\begin{equation}\nf(x)=\\frac{a}{1+e^{-k(x-l)}}.\n\\end{equation}\nThe best fit values of $a$, $k$, and $l$ are listed in Table \\ref{tab:complete}.\n\nWhile MES is closely related to completeness, additional signal parameters can play a role. 
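Equation \ref{eq:MES} and the logistic completeness model can be evaluated directly; a minimal sketch (the function names are ours):

```python
import math

def expected_mes(depth, cdpp_tdur, n_transits, c=0.9488):
    """Analytic expected MES: C * depth / CDPP_tdur * sqrt(N_tr),
    with depth and CDPP in the same units (e.g., ppm)."""
    return c * (depth / cdpp_tdur) * math.sqrt(n_transits)

def detection_efficiency(mes, a, k, l):
    """Logistic completeness model f(x) = a / (1 + exp(-k (x - l)));
    a is the asymptotic completeness at high MES, l the half-maximum point."""
    return a / (1.0 + math.exp(-k * (mes - l)))
```

For example, a 1000 ppm transit on a 100 ppm (at the transit duration) target with four transits yields an expected MES of about 19, and the efficiency at $x=l$ is exactly $a/2$.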
\\citet{chr20} tested the effects of stellar effective temperature ($T_{\\text{eff}}$), period, $N_{\\text{tr}}$, and photometric magnitude on the completeness of the analogous \\emph{Kepler} injections, finding the strongest effect linked to $N_{\\text{tr}}$. In Figure \\ref{fig:TCEcomplete} we show two of these completeness features for the \\emph{K2} injections: spectral class and $N_{\\text{tr}}$. \\citet{chr15} noted a significant drop ($\\sim4\\%$) in detection efficiency for cooler M dwarfs, which exhibit higher stellar variability. Remarkably, we did not find a significant difference between the AFGK dwarfs ($T_{\\text{eff}}>4000K$) and the M dwarfs ($T_{\\text{eff}}<4000K$). It is likely that the increased systematic noise of \\emph{K2} blurs this completeness feature, making the two populations indistinguishable. We also considered the additional noise contributions expected for young stars. Looking at 8033 targets associated with young star clusters, as indicated by {\\tt BANYAN $\\Sigma$} \\citep{gag18}, we could not identify a significant completeness difference. As with the \\emph{Kepler} TPS, we found that $N_{\\text{tr}}$ has the strongest effect on our \\emph{K2} planet sample completeness. Here, a significant drop in detection efficiency was expected for signals near the minimum transit threshold. In light of the pipeline's three-transit requirement, these marginal signals were more susceptible to systematics and vetting misclassification. In other words, if any of the three transits were discarded by the vetting, the signal would immediately receive a false positive label. We also parsed the data into larger $N_{\\text{tr}}$ value bins, but found little difference in completeness between $N_{\\text{tr}}$ equal to four, five, and six; the vetting was unlikely to discard more than one meaningful transit. Furthermore, we considered the effects of the signal period. 
While this parameter is strongly correlated with $N_{\\text{tr}}$, it has the potential to describe period-dependent detrending and data processing issues. Separating the injected signals at a period of 26 days (see Table \\ref{tab:complete}), we found a loss of completeness ($\\Delta a\\sim0.20$) that was comparable to the $N_{\\text{tr}}$ partition ($\\Delta a\\sim0.21$). This similarity indicates a strong correlation between parameters, suggesting either of the two features would appropriately account for the reduced completeness in this region of parameter space. Since $N_{\\text{tr}}$ provides a slightly larger deficit, we suggest using this function for future demographic analysis of our catalog. \n\nCompleteness measurements provide a natural test of {\\tt EDI-Vetter}'s classification capabilities. The introduction of telescope systematics makes signal vetting more difficult than in the \\emph{Kepler} TPS, requiring more stringent vetting metrics. This could lead to significant misclassification, discarding an abundance of meaningful planet candidates. Fortunately, in Figure \\ref{fig:TCEcomplete} we identify a $\\sim13\\%$ loss of completeness due to the vetting metrics, which is comparable to the \\emph{Kepler} {\\tt Robovetter} ($\\sim10\\%$ loss; \\citealt{cou17b}). Ideally, this difference would be zero, but such a minimal loss is acceptable and, more importantly, quantifiable. \n\nWe have provided the completeness parameters necessary for a demographic analysis of this catalog. However, the injections carried out here are computationally expensive and hold significant value for research beyond the scope of this catalog. 
Therefore, we provide, in addition, the injected {\tt EVEREST}-processed light curves and a summary table of the injection\/recovery test.\footnote{\href{http:\/\/www.jonzink.com\/scalingk2.html}{http:\/\/www.jonzink.com\/scalingk2.html}} These data products enable users to set custom completeness limits and to test their own vetting software with significantly reduced overhead. \n\n\n\\begin{figure*}\n\\centering \\includegraphics[width=\\textwidth{}]{completenessTable.pdf}\n\\caption{Plots of the measured completeness, resulting from our injection\/recovery test, for the presented planet sample. To show the effects of characteristic stellar noise, the number of transits, and the loss of planets due to our vetting software, we provide several slices of the data. The corresponding logistic function parameters are available in Table \\ref{tab:complete}. The heat map shows the overall vetted completeness as a function of planet radius ratio and the transit period. For our stellar sample, radius ratios of 0.01, 0.03, and 0.1 correspond to median planet radii of 1.1, 3.4, and 11.3 $R_\\Earth$, respectively. \n\\label{fig:TCEcomplete}}\n\\end{figure*}\n\n\n\n\n\\begin{deluxetable}{lccc}\n\\tablecaption{The logistic parameters for the corresponding completeness functions shown in Figures \\ref{fig:TCEcomplete} \\& \\ref{fig:complete_lat}. Additionally, we include the completeness parameters for signals partitioned at a period of 26 days. It is important to highlight that $a$ represents the maximum completeness for high MES signals. 
\\label{tab:complete}}\n\\tablehead{\\colhead{Model} & \\colhead{a} & \\colhead{k} & \\colhead{l}} \n\\startdata\n\\textbf{Unvetted} & 0.7407 & 0.6859 & 9.7407 \\\\\n\\textbf{Vetted} & 0.6093 & 0.6369 & 10.8531 \\\\\n\\\\\n\\textbf{>3 Transits} & 0.6868 & 0.6347 & 10.9473\\\\\n\\textbf{=3 Transits} & 0.4788 & 0.6497 & 10.5598\\\\\n\\\\\n\\textbf{<26d Periods} & 0.6619 & 0.6231 & 10.9072\\\\\n\\textbf{>26d Periods} & 0.4635 & 0.6607 & 10.5441\\\\\n\\\\\n\\textbf{AFGK Dwarf} & 0.6095 & 0.6088 & 10.8986\\\\\n\\textbf{M Dwarf} & 0.6039 & 0.8455 & 10.5636 \\\\\n\\\\\n\\textbf{C1} & 0.3923 & 0.7654 & 11.3914\\\\\n\\textbf{C2} & 0.6430 & 0.7173 & 10.8544 \\\\\n\\textbf{C3} & 0.7462 & 0.6689 & 10.5701 \\\\\n\\textbf{C4} & 0.6734 & 0.6344 & 11.1443 \\\\\n\\textbf{C5} & 0.4425 & 0.5923 & 11.3923 \\\\\n\\textbf{C6} & 0.7654 & 0.5759 & 10.8772 \\\\\n\\textbf{C7} & 0.3941 & 0.6052 & 11.7002 \\\\\n\\textbf{C8} & 0.6669 & 0.5726 & 10.0560 \\\\\n\\textbf{C10} & 0.5572 & 0.6469 & 10.0056 \\\\\n\\textbf{C11} & 0.2171 & 0.4759 & 12.3882 \\\\\n\\textbf{C12} & 0.6192 & 0.7341 & 10.6272 \\\\\n\\textbf{C13} & 0.6853 & 0.5698 & 11.3878 \\\\\n\\textbf{C14} & 0.7505 & 0.6596 & 10.9776 \\\\\n\\textbf{C15} & 0.6067 & 0.6480 & 10.4673 \\\\\n\\textbf{C16} & 0.6809 & 0.7256 & 10.5453 \\\\\n\\textbf{C17} & 0.5848 & 0.6633 & 10.3635 \\\\\n\\textbf{C18} & 0.6116 & 0.4676 & 11.5783 \\\\\n\\enddata\n\n\\end{deluxetable}\n\n\\subsection{Completeness and Galactic Latitude}\n\\label{sec:comLat}\nEach \\emph{K2} campaign probed a different region along the ecliptic. These fields correspond to unique galactic latitudes, where distinct noise features may spawn differences in inter-campaign completeness. To address these potential variations, we consider the completeness as a function of campaign in Figure \\ref{fig:complete_lat}.\n\nOverall, there is significant scatter among the lower absolute galactic latitude campaigns ($\\mid b \\mid<40\\degr$). 
This trend is indicative of target crowding near the galactic plane \citep{gil15}. In these low latitude fields the photometric apertures are more contaminated by background sources, contributing additional noise, variability, and flux dilution, making transit detection more difficult. This is highlighted by Campaign 11, which is the closest field to the galactic plane ($b\sim9\degr$) that we analyzed and which accordingly provides the lowest completeness of all campaigns ($a=0.22$). Conversely, Campaign 6 with $b\sim48\degr$ has the highest completeness ($a=0.77$).\n\n\\begin{figure*}\n\\centering \\includegraphics[width=\\textwidth{}]{latCom.pdf}\n\\caption{The calculated vetted completeness for the low (left; $\\mid b \\mid<40\\degr$) and high (right; $\\mid b \\mid>40\\degr$) galactic latitude Campaigns. The corresponding logistic function parameters are available in Table \\ref{tab:complete}. \n\\label{fig:complete_lat}}\n\\end{figure*}\n\nIn the high galactic latitude campaigns ($\\mid b \\mid>40\\degr$), this crowding effect is less salient, reducing the inter-campaign completeness scatter. Furthermore, these well-isolated targets provide high-quality photometry and yield the highest completeness of all \\emph{K2} campaigns ($a\\sim 0.70$).\n\nThese inter-campaign differences are meaningful; however, the limited number of targets (and synthetic transit injections) within each campaign subjects these completeness measurements to additional uncertainty. 
Therefore, the values provided for the global completeness assessment (for Figure \ref{fig:TCEcomplete}) are more robust and should be used for full catalog occurrence analysis.\n\n\n\n\n\n\\section{Measuring Reliability}\n\\label{sec:reli}\nDespite efforts to remove problematic cadences, instrument systematics pollute the light curves, creating artificial dips that can be erroneously characterized as a transit signal.\nTo measure the reliability in a homogeneous catalog of planets, the rate of these false alarms (FAs) must be quantified. \\citet{bry19} showed that proper accounting of the sample FA rate is essential in extracting meaningful and consistent planet occurrence measurements. \n\nThe main goal of {\\tt EDI-Vetter} (and its predecessor {\\tt Robovetter}) is to parse through all TCEs and remove FAs without eliminating true planet candidates. However, this process is difficult to automate and requires a method of testing the software's capability to achieve this goal. Accomplishing such a task necessitates an equivalent data set that captures all the unique noise properties that contribute to FAs, without containing any true astrophysical signals. With such data available, the light curves can be processed through the detection pipeline. If the vetting algorithm worked perfectly, nothing would be identified as a planet candidate. Therefore, any signals capable of achieving planet candidacy would be authentic FAs and provide insight into the software's capabilities. \n\n\\citet{cou17b} explored two methods for simulating this necessary data using the existing light curves: data scrambling and light curve inversion. The first method takes large portions of the real light curve data and randomizes the order, retaining all of the noise properties while scrambling out true periodic planet signatures. Using this method, \\citet{tho18} was able to replicate the real long period TCE distribution of the \\emph{Kepler} DR25 catalog. 
The second method inverts the light curve flux measurements. Upon this manipulation, the existing real transit signals will now be seen as flux brightening events, rendering them unidentifiable by the transit search algorithm. Under the assumption that many systematic issues are symmetric upon a flux inversion, the photometry will retain its noise properties without containing any real transit signals. This method was used by \citet{tho18} to replicate the real short period TCE distribution of the \emph{Kepler} DR25 catalog, while maintaining the quasi-sinusoidal features, like the rolling bands that repeat due to the spacecraft's temperature fluctuations (see Section 6.7.1 of \citealt{van16} for further detail on this effect).\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{FullCat.pdf}\n\\caption{The distribution of TCEs from the real light curves and the distribution of inverted TCEs from the FA simulation. This shows consistency between the two distributions with a minor surplus of inverted TCEs at longer periods and a deficiency at the 3rd harmonic of the thruster firing (18 hours).\n\\label{fig:TCEreliability}}\n\\end{figure}\n\n\nSince the spacecraft also underwent quasi-periodic roll motion during the \\emph{K2} mission, leading to cyclical instrument systematics, we chose the light curve inversion method. In Figure \\ref{fig:TCEreliability} we assess our ability to capture the noise features using this method. By comparing the distribution of TCEs from the real light curves with those of the inverted light curves, we can identify regions of parameter space where the inversions over- or under-represent systematic noise properties. 
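A light-curve inversion of the kind described above can be as simple as reflecting each flux series about its median, so that transit dips become brightening events while symmetric systematics are preserved. The sketch below is illustrative only and not necessarily the pipeline's exact implementation.

```python
import statistics

def invert_light_curve(flux):
    """Reflect a flux series about its median; a minimal sketch of
    light-curve inversion, not the pipeline's actual procedure."""
    m = statistics.median(flux)
    return [2.0 * m - f for f in flux]

# Toy series with a "transit" dip at 0.95:
flux = [1.00, 0.99, 1.00, 1.01, 0.95, 1.00]
inv = invert_light_curve(flux)  # the dip becomes a 1.05 brightening event
```

The inverted series has the same median and scatter as the original, which is what makes it a useful noise-only proxy.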
Overall, the two distributions (108,379 TCEs and 110,548 inverted TCEs)\footnote{The individual transit check described in Section \ref{sec:ITC} was the dominant vetting metric for both the TCEs and the inverted TCEs, discarding non-transit-shaped signals.} are well aligned, providing an adequate simulation of the data set's noise characteristics. However, the inverted TCEs are slightly over-represented at long periods and under-represented at the 3rd harmonic of the 6 hour thruster firing. While we acknowledge these minor discrepancies, their impact on the overall measure of the catalog reliability will be small. \n\nAfter running the inverted light curves through our pipeline we identify 77 FA signals as planet candidates (806 candidates were identified in the real light curves). We used this information, alongside the formalism described in Section 4.1 of \\citet{tho18}, to quantify our sample reliability. The first step in achieving this measurement requires an understanding of the vetting routine's FA removal efficiency ($E$). From the inverted light curve test we can estimate $E$ as\n\\begin{equation}\nE\\approx\\frac{N_{FP_{\\mathrm{inv}}}}{T_{TCE_{\\mathrm{inv}}}},\n\\end{equation}\nwhere $N_{FP_{\\mathrm{inv}}}$ is the number of TCEs that were accurately flagged as false positives and $T_{TCE_{\\mathrm{inv}}}$ is the total number of TCEs found in the inversion test. For the total data set, we found {\\tt EDI-Vetter} has a 99.9\\% efficiency in removing FA signals. However, this high efficiency must be weighed against the abundance of TCEs found by our pipeline. \n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{Reliability_94.1.pdf}\n\\caption{The calculated reliability of our planet sample as a function of orbital period and planet radius. The reliability percent and the number of candidates have been listed in each corresponding box. 
The white regions represent areas of parameter space where the number of candidates and FAs are sparse, making accurate measurements of reliability unachievable. \n\\label{fig:reliability}}\n\\end{figure}\n\nWe can determine the reliability fraction ($R$) of our catalog using the number of TCEs flagged as false positives ($N_{FP}$) in the real light curves and the number of planet candidates ($N_{PC}$):\n\\begin{equation}\nR=1-\\frac{N_{FP}}{N_{PC}}\\Bigg(\\frac{1-E}{E}\\Bigg).\n\\label{eq:reli}\n\\end{equation}\nOverall, we found the planet catalog provided here is 91\\% reliable. This is slightly lower than the 97\\% reliability of the \\emph{Kepler} DR25 catalog \\citep{tho18}. However, this \\emph{Kepler} value benefits greatly from the mission's extensive data baseline. A more appropriate comparison would consider a period range with a comparable number of transits (\\emph{Kepler} candidates with periods greater than 10 days have a similar number of transits). In this region of parameter space the {\\tt Robovetter} has a reliability of 95\\%, which is closer to the value reported for {\\tt EDI-Vetter}. Moreover, these broad summary statistics fail to capture the complexity of this metric.\n\n\\begin{figure*}\n\\centering \\includegraphics[height=7cm]{latReli.pdf}\n\\caption{The calculated reliability for the low and high galactic latitude Campaigns. The reliability percent and the number of candidates have been listed in each corresponding box. The white regions represent areas of parameter space where the number of candidates and FAs are sparse, making accurate measurements of reliability unachievable. \n\\label{fig:reliability_lat}}\n\\end{figure*}\n\nWe found that 8 of the FAs detected in our inversion simulation are hosted by sub-giant and giant stars, which have notably more active stellar surfaces. 
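As a concrete check, the catalog-level efficiency and reliability quoted above follow directly from the TCE counts reported in the text (110,548 inverted TCEs with 77 survivors; 108,379 real TCEs with 806 candidates), under the assumption that every non-surviving TCE was flagged as a false positive. The function names are illustrative.

```python
def fa_removal_efficiency(n_tce_inv, n_survivors_inv):
    """E ~ N_FP_inv / T_TCE_inv, where the flagged false positives are
    the inverted TCEs that did not survive vetting."""
    return (n_tce_inv - n_survivors_inv) / n_tce_inv

def reliability(n_fp, n_pc, e):
    """R = 1 - (N_FP / N_PC) * ((1 - E) / E), as in Equation above."""
    return 1.0 - (n_fp / n_pc) * ((1.0 - e) / e)

e = fa_removal_efficiency(110_548, 77)   # ~0.999, the quoted 99.9% efficiency
r = reliability(108_379 - 806, 806, e)   # ~0.91, the quoted catalog reliability
```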
By excluding all giants with $\log(g)$ less than 4, we focus on the dwarf star population (in Figure \ref{fig:reliability}) and consider how reliability changes as a function of period and planet radius. At longer periods the reliability drops. With fewer transits available for a given candidate, systematic features can more easily line up and create an FA signal. Smaller planet radius regions are also more susceptible to FAs due to their weak signal strength, which can be replicated by noise within the data set. When accounting for this period and radius dependence we expect 94\% of our dwarf host candidates to be real astrophysical signals.\n\n\\subsection{Reliability and Galactic Latitude}\n\\label{sec:galRelib}\n\n\nLike completeness, reliability may also exhibit inter-campaign differences. However, the number of FAs detected by inversion is not sufficient to enable a thorough campaign-by-campaign analysis. Since Section \\ref{sec:comLat} showed that a majority of field differences could be attributed to galactic latitude, we considered differences in reliability for high ($\\mid b \\mid>40\\degr$) and low ($\\mid b \\mid<40\\degr$) absolute galactic latitudes in Figure \\ref{fig:reliability_lat}. Overall, we found that both low and high absolute galactic latitude campaigns produce very similar reliabilities, 95\\% and 94\\% respectively. Upon examination of these two stellar populations, no significant differences were identified; thus, these changes in reliability are likely due to statistical fluctuations from the reduced number of FAs (34 in low latitudes and 35 in high latitudes). Therefore, we encourage future population analysis to use the full reliability calculation provided in Figure \\ref{fig:reliability}.\n\n\n\\subsection{Astrophysical False Positives}\n\\label{sec:astrFP}\nIt is important to highlight that reliability is a measure of the systematic FA contamination rate. 
There exist non-planetary astrophysical sources capable of producing a transit signal. For example, dim background eclipsing binaries (EBs) may experience significant flux dilution from the primary target, manifesting as a shallow transit. This planetary signal mimicry may lead to candidate misclassification \citep{fre13,san16,mor16,mat18}. The \emph{Kepler} DR25 relied on the centroid offset test \citep{mul17} to identify these contaminants. This test considered the TCE's flux difference in and out of transit for each aperture pixel. Larger differences near the edge (or away from the center) of the aperture indicated a non-target star origin, enabling the identification of contaminants to within $1\arcsec$. Although it is possible that such signals were of planetary nature \citep{bry13}, the expected flux dilution makes classification difficult. Thus, candidates with centroid offsets were removed from the DR25 candidate catalog. Employing a similar test for \emph{K2} would be difficult given the spacecraft's roll motion. The {\tt DAVE} algorithm \citep{kos19} considered each individual cadence (instead of the entire transit) to carry out a similar procedure for \emph{K2} and was able to identify 96 centroid offsets from the list of known candidates. While this method is helpful, it lacks the statistical strength of the original centroid test. Instead, we choose to leverage the \emph{Gaia} DR2 catalog to identify these sub-pixel background sources. Doing so, we are able to identify potential contaminants to within $1\arcsec$ of the target star \citep{zie18}, the equivalent limit of the \emph{Kepler} DR25 centroid offset test.\footnote{\emph{Gaia's} spatial resolution reduces to $2\arcsec$ for magnitude differences greater than 5.} \n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{seperation.pdf}\n\\caption{The number of \\emph{Gaia} DR2 sources within a one arcminute radius of each planet candidate. 
This plot shows that \emph{K2} candidate hosts (green) generally occupy more isolated fields than \emph{Kepler} candidate hosts (blue). This characteristic aligns with the fact that $92\%$ of our \emph{K2} candidates are observed at absolute galactic latitudes greater than the edge of the \emph{Kepler} field ($\mid b \mid>22\degr$), where stellar density subsides.\n\\label{fig:seperation}}\n\\end{figure}\n\nTo estimate the rate of contamination in our sample, we consider the \\emph{Kepler} certified false positive (CFP) table \\citep{bry17} alongside the \\emph{Kepler} DR25 candidate catalog. The \\emph{Kepler} CFP table represents a sample of 3,590 \\emph{Kepler} signals that were not granted candidacy and were thoroughly investigated to ensure a non-planetary origin. Applying all of our vetting procedures (including the contaminant identification from \\emph{Gaia} DR2) to both the CFP table and the DR25 candidate list, we would expect to find 169 CFPs and 3,470 DR25 candidates (a $\\sim4.6\\%$ contamination rate). However, this rate is likely an upper limit. The astrophysical false positive rate for background EBs should recede with increased distance from the galactic plane, given the expected change in aperture crowding as a function of galactic latitude. Overall, the \\emph{Kepler} target stars are closer to the galactic plane ($b \\lesssim 20\\degr$) compared to the \\emph{K2} targets, which largely occupy a higher absolute galactic latitude ($\\mid b \\mid\\gtrsim 20\\degr$). This apparent density distinction is shown in Figure \\ref{fig:seperation}, where we present the number of \\emph{Gaia} sources within a one arcminute radius of each candidate host. The \\emph{Kepler} candidates clearly occupy a more crowded field than a majority of our \\emph{K2} candidates, subjecting \\emph{Kepler} targets to a heightened occurrence of background contaminating sources. 
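For reference, the quoted $\sim4.6\%$ rate follows from treating the 169 surviving CFPs and 3,470 surviving DR25 candidates as the full set of signals passing our vetting:

```python
cfp_pass, candidates_pass = 169, 3_470

# Fraction of passing signals that are certified false positives:
contamination = cfp_pass / (cfp_pass + candidates_pass)
print(f"{contamination:.1%}")  # -> 4.6%
```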
Overall, the CFP table is unique to the \emph{Kepler} prime mission, and differences in instrument performance, catalog construction, and stellar fields limit our ability to make precise contamination rate estimates for our \emph{K2} catalog.\n\n\n\\begin{figure}\n\\centering \\includegraphics[width=\\columnwidth{}]{fluxDilution.pdf}\n\\caption{The expected Equation \\ref{eq:EB} values for the candidates within our catalog, conservatively assuming that the signals originated from the dimmest aperture-encased sources (Diluted). These quantities are plotted against the catalog values, which assume the signal host was the brightest aperture-encased source (Catalog). The yellow regions, which include 18 candidates, indicate signals that could be astrophysical false positives ($R_{\\mathrm{pl}}\/R_{\\star}+b>1.04$) with their transit signal diluted. We also found an additional 11 candidates that would have produced an $R_{\\mathrm{pl}}\/R_{\\star}$ greater than 0.3 under this worst-case scenario assumption, presenting EB-like properties and warranting rejection from our catalog. Signals that exceed either of these thresholds have been colored in green to indicate a potential background EB signal.\n\\label{fig:fluxDilution}}\n\\end{figure}\n\n\nTo further assess our ability to remove background EBs, we can perform a worst-case scenario test. The \\emph{Gaia} DR2 source catalog is essentially complete\\footnote{Visual companions that are not spatially resolved are an exception to this completeness claim.} for photometric $G$-band (comparable to the $Kepler$-band) sources brighter than 17 magnitudes, reducing to $\\sim80\\%$ completeness for $G=20$ targets \\citep{bou20}. Correspondingly, Figure \\ref{fig:cdpp} shows that our CDPP values increase as a function of magnitude, reducing our pipeline's ability to detect signals around these dim targets. 
For example, a Kep.$=17$ star requires an occultation that exceeds $1\%$ to qualify as a TCE ($8.68\sigma$), increasing to $40\%$ for targets with Kep.$=20$. When these dim stars are within the aperture of a brighter source, the photometric noise floor is further elevated by the noise contribution from both stars. By combining the low probability of greater than $40\%$ eclipses and the $80\%$ stellar completeness at $G=20$, \emph{Gaia} provides sufficient coverage of the parameter space where background EBs could be hidden, yet still detectable by our pipeline. Thus, we can bound our contamination rate by making a conservative assumption that all of our candidate signals originate from the dimmest aperture-encased \emph{Gaia} DR2 identified source. Currently, our candidate catalog assumes each transit signal corresponds to the brightest star within the aperture; however, such an assumption may misidentify diluted background EBs. Our worst-case scenario test helps quantify the magnitude of this contamination, and the corresponding results are displayed in Figure \ref{fig:fluxDilution}. We found 74 of our candidate targets contain additional aperture-encased \emph{Gaia} sources. Nine of these targets exhibit transits that can only be physically explained by a signal from the brightest star. In other words, the transit brightness reduction exceeds that of the entire flux contribution from the background stars. We found 17 candidates that would exhibit $R_{\mathrm{pl}}\/R_{\star}+b$ values greater than our catalog threshold (1.04) and an additional 13 candidates that would not meet our $R_{\mathrm{pl}}\/R_{\star}\le0.3$ requirement. If we assume all 30 of these signals originated from a background source, we establish a background EB contamination rate of $4.0\%$. However, the true parent source of these candidates remains unclear; thus, this estimate is again an upper limit. 
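The worst-case dilution test can be sketched as follows: if a transit actually originates from a fainter aperture-encased source, the true depth is the observed depth amplified by the ratio of total to faint-source flux, with fluxes estimated from the \emph{Gaia} $G$ magnitudes. The helper below is illustrative, not the pipeline's implementation; a required depth exceeding unity corresponds to the physically impossible cases noted above.

```python
import math

def diluted_radius_ratio(depth_obs, g_mags, faint_index):
    """Radius ratio implied if the signal belongs to the faint source.

    depth_obs: transit depth measured against the combined aperture flux.
    g_mags: Gaia G magnitudes of all aperture-encased sources.
    Returns None when the required depth exceeds the faint star's total flux,
    i.e., only the brighter star can physically host the signal.
    """
    fluxes = [10 ** (-0.4 * m) for m in g_mags]  # relative fluxes suffice
    depth_true = depth_obs * sum(fluxes) / fluxes[faint_index]
    if depth_true > 1.0:
        return None
    return math.sqrt(depth_true)

# A 1000 ppm transit with a background star 3 mag fainter than the target:
ratio = diluted_radius_ratio(1e-3, [12.0, 15.0], faint_index=1)  # ~0.13
```

A candidate would then be flagged when this worst-case ratio violates the catalog's radius-ratio thresholds.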
In addition, we acknowledge that our detection metrics could still be evaded by small radius-ratio non-grazing EBs, but we expect such cases to be rare. Overall, we expect contamination from background EBs to make up less than 5\% of our candidate list.\n\n\n\n\\section{Occurrence Rate Recommendations}\n\\label{sec:suggest}\nAll exoplanet demographic analyses require defining an underlying stellar sample of interest. Our catalog used nearly the entire \\emph{K2} target sample, which may be too broad for future analysis. The completeness measurements given here may not accurately reflect those of a reduced target list. Therefore, we recommend users select their own stellar sample and consult the injection\/recovery summary table to assess the completeness of their selected targets for the highest degree of accuracy.\\footnote{\\label{noteWeb}\\url{www.jonzink.com\/scalingk2.html}} However, the completeness parameters provided in Table \\ref{tab:complete} are robust to minor sample selection modifications. Evidence of this claim is provided by the fact that the M dwarf and AFGK dwarf samples provide consistent results (see Figure \\ref{fig:TCEcomplete}), despite known differences between these populations. 
We suggest users implement the most up-to-date stellar parameters along with the injection\/recovery summary table to evaluate sample completeness.\n\nUpon close inspection, the \\emph{K2} population appears more stochastic along the main sequence (see Figure \\ref{fig:HR}). This is largely attributed to the guest observing selection process for the \\emph{K2} fields, where individual proposals each applied their own target selection criteria to construct target lists that addressed their specific science goals. This could lead, for instance, to situations where the G dwarfs in a given campaign represent a distinct population from the G dwarfs in another campaign (e.g. probing different ranges of stellar metallicity, which is known to impact planet occurrence rates), or from a given population of field G dwarfs. \\citet{zin20b} looked at the stellar population around Campaign 5 and found this latter selection effect did not provide a biased sample of FGK dwarfs for C5. However, for occurrence rate calculations, similar inspection of other campaigns should be carried out to ensure each campaign provides a uniform representation of their respective region of the sky. Where that does not appear to be true, users of this catalog are encouraged to independently select a set of targets from the full set of available targets (using, for instance, \\emph{Gaia} properties) that more clearly represents an unbiased sample of the desired population.\n\n\nIncorporating \\emph{Gaia} DR2 into our pipeline enabled us to provide more accurate planet radii measurements. \\citet{cia17} showed that non-transiting stellar multiplicity can artificially reduce transiting planet depths, leading to an overestimation in the occurrence of Earth-sized planets by 15-20\\%. Our pipeline used the \\emph{Gaia} DR2 to account for neighboring flux contamination, improving the precision of our radii measurements and the accuracy of future occurrence estimates. 
However, planet radius is markedly dependent on the underlying stellar radius measurements and our catalog is derived independently of such parameterization. Therefore, we did not impose any strict upper limits on planet radius and found 28 of our candidates have planet radii exceeding $30R_\Earth$. These candidates are likely astrophysical false positives. We suggest users consider an upper radius bound when carrying out occurrence analyses. Users may also consider using the \emph{Gaia} renormalized unit weight error (RUWE) values to further purify their sample of interest, as suggested by \citet{bel20}. \n\nIdeally, the reliability would also be updated as additional information on stellar and planetary parameters is made available. This is possible using the reliability summary table, but given the small number of FAs found, we expect very minor changes to occur.\n\nIt is important to note that our completeness measurements do not address the window function, which requires that three transits occur within the available photometry. All injections were required to have at least three transits occurring within this window, removing this detection probability from the calculated completeness. In testing, we found most light curves follow the expected analytic probability formula ($prob$):\n\n\\begin{equation}\n\\label{eq:win}\n\\begin{aligned}[t]\nprob & =1; & P&<t_{\\text{span}}\/3,\\\\\nprob & =t_{\\text{span}}\/P-2; & t_{\\text{span}}\/3\\le P&\\le t_{\\text{span}}\/2,\\\\\nprob & =0; & P&>t_{\\text{span}}\/2,\n\\end{aligned}\n\\end{equation}\nwhere $P$ is the signal period and $t_{\\text{span}}$ is the total span of the data (see Figure 11 of \\citealt{zin20a}). However, intra-campaign data gaps exist and should be carefully considered in any occurrence rate calculations.\n\n\\section{Summary}\n\\label{sec:summary}\nWe provide a catalog of transiting exoplanet candidates using \\emph{K2} photometry from Campaigns 1--8 and 10--18, derived using a fully automated detection pipeline. This demographic sample includes 747 unique planets, 366 of which were previously unidentified. 
Additionally, we found 57 multi-planet candidate systems, of which 18 are newly identified. These discovered systems include a K dwarf (EPIC 249559552) hosting two sub-Neptune candidates in a 5:2 mean-motion resonance, and an early-type F dwarf (EPIC 249731291) with two short-period gas giant candidates, providing an interesting constraint on formation and migration mechanisms. Follow-up observations and validation of these and a number of the other new candidates presented in this catalog are currently underway (Christiansen et al., in prep.).\n\nWe employed an automated detection routine to produce this catalog, enabling measurements of sample completeness and reliability. By injecting artificial transit signals before {\\tt EVEREST} pre-processing, we provide the most accurate measurements of \\emph{K2} sample completeness. Additionally, we used the inverted light curves to measure our vetting software's ability to remove systematic false alarms from our catalog of planets, providing a quantitative assessment of sample reliability. Using this planet sample, and the corresponding completeness and reliability measurements, exoplanet occurrence rate calculations can now be performed using \\emph{K2} planet candidates, which will be the subject of the next papers in the Scaling K2 series. With careful consideration of each data set's unique window functions, the \\emph{Kepler} and \\emph{K2} planet samples can now be combined, maximizing our ability to measure transiting planet occurrence rates throughout the local galaxy.\n\n\\section{Acknowledgements}\n\nWe thank the anonymous referee for their thoughtful feedback. This work made use of the gaia-kepler.fun crossmatch database created by Megan Bedell. The simulations described here were performed on the UCLA Hoffman2 shared computing cluster and using the resources provided by the Bhaumik Institute. 
This research has made use of the NASA Exoplanet Archive and the Exoplanet Follow-up Observation Program website, which are operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This paper includes data collected by the \emph{Kepler} mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the \emph{Kepler} mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5--26555. J. Z. acknowledges funding from NASA ADAP grant 443820HN21811. K. H-U and J. C. acknowledge funding from NASA ADAP grant 80NSSC18K0431.\n\n\n\\software{{\\tt EVEREST} \\citep{lug16,lug18}, {\\tt TERRA} \\citep{pet13b}, {\\tt EDI-Vetter} \\citep{zin20a}, {\\tt PyMC3} \\citep{sal15}, {\\tt Exoplanet} \\citep{for19}, {\\tt RoboVetter} \\citep{tho18}, {\\tt batman} \\citep{kre15}, {\\tt emcee} \\citep{for13}}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe game of Cops and Robbers (defined, along with all the standard notation, later in this section) is usually studied in the context of the {\\em cop number}, the minimum number of cops needed to ensure a winning strategy. The cop number is often challenging to analyze; establishing upper bounds for this parameter is the focus of Meyniel's conjecture that the cop number of a connected $n$-vertex graph is $O(\\sqrt{n}).$ For additional background on Cops and Robbers and Meyniel's conjecture, see the book~\\cite{bonato}.\n\nA number of variants of Cops and Robbers have been studied. 
For example, we may allow a cop to capture the robber from a distance $k$, where $k$ is a non-negative integer~\cite{bonato4}, play on edges~\cite{pawel}, allow one or both players to move with different speeds~\cite{NogaAbbas, fkl} or to teleport, allow the robber to capture the cops~\cite{bonato0}, make the robber invisible or drunk~\cite{drunk1,drunk2}, or allow at most one cop to move in any given round~\cite{oo, hypercube, lazy_gnp}. See Chapter~8 of~\cite{bonato} for a non-comprehensive survey of variants of Cops and Robbers.\n\n\bigskip\n\nIn this paper, we consider a variant of the game of Cops and Robbers, called \emph{Containment}, introduced recently by Komarov and Mackey~\cite{komarov}. In this version, cops move from edge to adjacent edge, while the robber moves as in the classic game, from vertex to adjacent vertex (but cannot move along an edge occupied by a cop). Formally, the game is played on a finite, simple, and undirected graph. There are two players, a set of \emph{cops} and a single \emph{robber}. The game is played over a sequence of discrete time-steps or \emph{turns}, with the cops going first on turn $0$ and then playing on alternate time-steps. A \emph{round} of the game is a cop move together with the subsequent robber move. The cops occupy edges and the robber occupies vertices; for simplicity, we often identify the player with the vertex\/edge they occupy. When the robber is ready to move in a round, she can move to a neighbouring vertex but cannot move along an edge occupied by a cop; cops can move to an edge that is incident to their current location. Players can always \emph{pass}, that is, remain on their own vertices\/edges. Observe that any subset of cops may move in a given round. The cops win if, after some finite number of rounds, all edges incident with the robber are occupied by cops. This is called a \emph{capture}. The robber wins if she can evade capture indefinitely. 
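To make the rules above concrete, here is a minimal Python sketch of the Containment capture condition (the graph representation and function names are our own illustration, not from the paper): cops occupy edges, stored as frozensets, and the robber is captured once every edge incident to her vertex carries a cop.

```python
# Minimal model of the Containment capture condition (illustrative only).
# A graph is an adjacency dict; cops occupy edges, the robber a vertex.

def incident_edges(adj, v):
    """All edges incident to vertex v, as frozensets {v, u}."""
    return {frozenset((v, u)) for u in adj[v]}

def is_captured(adj, robber, cop_edges):
    """The cops win once every edge at the robber's vertex is occupied."""
    return incident_edges(adj, robber) <= cop_edges

def legal_robber_moves(adj, robber, cop_edges):
    """The robber may move to a neighbour, but not along a cop-occupied edge."""
    return [u for u in adj[robber] if frozenset((robber, u)) not in cop_edges]

# Example: the path 0-1-2; cops on both edges at vertex 1 trap a robber there.
adj = {0: [1], 1: [0, 2], 2: [1]}
cops = {frozenset((0, 1)), frozenset((1, 2))}
print(is_captured(adj, 1, cops))         # True: robber at 1 is contained
print(legal_robber_moves(adj, 1, cops))  # []: no legal moves remain
```

With only one of the two edges occupied, the robber at vertex 1 is not captured and can still escape along the free edge.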
A \\emph{winning strategy for the cops} is a set of rules that if followed, result in a win for the cops. A \\emph{winning strategy for the robber} is defined analogously. As stated earlier, the original game of \\emph{Cops and Robbers} is defined almost exactly as this one, with the exception that all players occupy vertices.\n\nIf we place a cop at each edge, then the cops are guaranteed to win. Therefore, the minimum number of cops required to win in a graph $G$ is a well-defined positive integer, named the \\emph{containability number} of the graph $G.$ Following the notation introduced in~\\cite{komarov}, we write $\\xi(G)$ for the containability number of a graph $G$ and $c(G)$ for the original \\emph{cop-number} of $G$.\n\n\\bigskip\n\nIn~\\cite{komarov}, Komarov and Mackey proved that for every graph $G$, \n$$\nc(G) \\le \\xi(G) \\le \\gamma(G) \\Delta(G), \n$$\nwhere $\\gamma(G)$ and $\\Delta(G)$ are the domination number and the maximum degree of $G$, respectively. It was conjectured that the upper bound can be strengthened and, in fact, the following holds.\n\n\\begin{conjecture}[\\cite{komarov}]\\label{con:komarov}\nFor every graph $G$, $\\xi(G) \\le c(G) \\Delta(G)$. \n\\end{conjecture}\n\n\\noindent Observe that, trivially, $c(G) \\le \\gamma(G)$ so this would imply the previous result. This seems to be the main question for this variant of the game at the moment. By investigating expansion properties, we provide asymptotically almost sure bounds on the containability number of binomial random graphs ${\\mathcal{G}}(n,p)$ for a wide range of $p=p(n)$, proving that the conjecture holds for some ranges of $p$ (or holds up to a constant or an $O(\\log n)$ multiplicative factors for some other ranges of $p$). 
However, before we state the result, let us introduce the probability space we deal with and mention a few results for the classic cop-number that will be needed to examine the conjecture (since the corresponding upper bound is a function of the cop number).\n\n\bigskip\n\nThe \emph{random graph} ${\mathcal{G}}(n,p)$ consists of the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is the set of all graphs with vertex set $\{1,2,\dots,n\}$, $\mathcal{F}$ is the family of all subsets of $\Omega$, and for every $G \in \Omega$,\n$$\n\mathbb{P}(G) = p^{|E(G)|} (1-p)^{{n \choose 2} - |E(G)|} \,.\n$$\nThis space may be viewed as the set of outcomes of ${n \choose 2}$ independent coin flips, one for each pair $(u,v)$ of vertices, where the probability of success (that is, adding edge $uv$) is $p.$ Note that $p=p(n)$ may (and usually does) tend to zero as $n$ tends to infinity. All asymptotics throughout are as $n \rightarrow \infty $ (we emphasize that the notations $o(\cdot)$ and $O(\cdot)$ refer to functions of $n$, not necessarily positive, whose growth is bounded). We say that an event in a probability space holds \emph{asymptotically almost surely} (or \emph{a.a.s.}) if the probability that it holds tends to $1$ as $n$ goes to infinity.\n\n\bigskip\n\nLet us now briefly describe some known results on the (classic) cop-number of ${\mathcal{G}}(n,p)$. Bonato, Wang, and the author of this paper investigated such games in ${\mathcal{G}}(n,p)$ random graphs and in generalizations used to model complex networks with power-law degree distributions (see~\cite{bpw}). From their results it follows that if $2 \log n \/ \sqrt{n} \le p < 1-\eps$ for some $\eps>0$, then a.a.s.\ we have that\n\begin{equation*}\nc({\mathcal{G}}(n,p))= \Theta(\log n\/p),\n\end{equation*}\nso Meyniel's conjecture holds a.a.s.\ for such $p$. In fact, for $p=n^{-o(1)}$ we have that a.a.s.\ $c({\mathcal{G}}(n,p))=(1+o(1)) \log_{1\/(1-p)} n$. 
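The coin-flip description of ${\mathcal{G}}(n,p)$ above translates directly into code; this small sampler (our own illustration) flips one biased coin per vertex pair.

```python
import random
from itertools import combinations

def sample_gnp(n, p, seed=None):
    """Sample G(n,p): each of the C(n,2) vertex pairs becomes an edge
    independently with probability p, as in the coin-flip description."""
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

# With n = 100 and p = 0.3 the expected edge count is C(100,2) * 0.3 = 1485.
edges = sample_gnp(100, 0.3, seed=1)
print(len(edges))  # concentrated around 1485, by Chernoff's bound
```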
A simple argument using dominating sets shows that Meyniel's conjecture also holds a.a.s.\\ if $p$ tends to 1 as $n$ goes to infinity (see~\\cite{p} for this and stronger results). Bollob\\'as, Kun and Leader~\\cite{bkl} showed that if $p(n) \\ge 2.1 \\log n \/n$, then a.a.s.\n$$\n\\frac{1}{(pn)^2}n^{ 1\/2 - 9\/(2\\log\\log (pn)) } \\le c({\\mathcal{G}}(n,p))\\le 160000\\sqrt n \\log n\\,.\n$$\nFrom these results, if $np \\ge 2.1 \\log n$ and either $np=n^{o(1)}$ or $np=n^{1\/2+o(1)}$, then a.a.s.\\ $c({\\mathcal{G}}(n,p))= n^{1\/2+o(1)}$. Somewhat surprisingly, between these values it was shown by \\L{}uczak and the author of this paper~\\cite{lp2} that the cop number has more complicated behaviour. It follows that a.a.s.\\ $\\log_n c({\\mathcal{G}}(n,n^{x-1}))$ is asymptotic to the function $f(x)$ shown in Figure~\\ref{fig1} (denoted in blue).\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.3in]{zig-zag}\n\\end{center}\n\\caption{The ``zigzag'' functions representing the ordinary cop number (blue) and the containability number (red).}\\label{fig1}\n\\end{figure}\n\nFormally, the following result holds for the classic game.\n\n\\begin{theorem}[\\cite{lp2, bpw}]\\label{thm:zz}\nLet $0<\\alpha<1$ and $d=d(n)=np=n^{\\alpha+o(1)}$.\n\\begin{enumerate}\n\\item If $\\frac{1}{2j+1}<\\alpha<\\frac{1}{2j}$ for some integer $j\\ge 1$, then a.a.s.\\\n$$\nc({\\mathcal{G}}(n,p))= \\Theta(d^j)\\,.\n$$\n\\item If $\\frac{1}{2j}<\\alpha<\\frac{1}{2j-1}$ for some integer $j\\ge 2$, then a.a.s.\\\n\\begin{eqnarray*}\nc({\\mathcal{G}}(n,p)) &=& \\Omega \\left( \\frac{n}{d^j} \\right), \\text{ and } \\\\\nc({\\mathcal{G}}(n,p)) &=& O \\left( \\frac{n \\log n}{d^j} \\right)\\,.\n\\end{eqnarray*}\n\\item If $1\/2 < \\alpha < 1$, then a.a.s.\\\n$$\nc({\\mathcal{G}}(n,p)) = \\Theta \\left( \\frac {n \\log n}{d} \\right).\n$$\n\\end{enumerate}\n\\end{theorem}\n\nThe above result shows that Meyniel's conjecture holds a.a.s.\\ for random graphs except perhaps when 
$np=n^{1\/(2k)+o(1)}$ for some $k \in {\mathbb N}$, or when $np=n^{o(1)}$. The author of this paper and Wormald showed recently that the conjecture holds a.a.s.\ in ${\mathcal{G}}(n,p)$~\cite{PW_gnp} as well as in random $d$-regular graphs~\cite{PW_gnd}.\n\n\bigskip\n\nFinally, we are able to state the result of this paper.\n\n\begin{theorem}\label{thm:main}\nLet $0<\alpha<1$ and $d=d(n)=np=n^{\alpha+o(1)}$.\n\begin{enumerate}\n\item If $\frac{1}{2j+1}<\alpha<\frac{1}{2j}$ for some integer $j\ge 1$, then a.a.s.\\n$$\n\xi({\mathcal{G}}(n,p))= \Theta(d^{j+1}) = \Theta(c({\mathcal{G}}(n,p)) \cdot \Delta( {\mathcal{G}}(n,p) ) )\,.\n$$\nHence, a.a.s.\ Conjecture~\ref{con:komarov} holds (up to a multiplicative constant factor).\n\item If $\frac{1}{2j}<\alpha<\frac{1}{2j-1}$ for some integer $j\ge 2$, then a.a.s.\\n\begin{eqnarray*}\n\xi({\mathcal{G}}(n,p)) &=& \Omega \left( \frac{n}{d^{j-1}} \right), \text{ and } \\\n\xi({\mathcal{G}}(n,p)) &=& O \left( \frac{n \log n}{d^{j-1}} \right) = O(c({\mathcal{G}}(n,p)) \cdot \Delta( {\mathcal{G}}(n,p) ) \cdot \log n )\,.\n\end{eqnarray*}\nHence, a.a.s.\ Conjecture~\ref{con:komarov} holds (up to a multiplicative $O(\log n)$ factor).\n\item If $1\/2 < \alpha < 1$, then a.a.s.\\n$$\n\xi({\mathcal{G}}(n,p)) = \Theta(n) = \Theta (c({\mathcal{G}}(n,p)) \cdot \Delta( {\mathcal{G}}(n,p) ) \/ \log n ) \le c({\mathcal{G}}(n,p)) \cdot \Delta( {\mathcal{G}}(n,p) ).\n$$\nHence, a.a.s.\ Conjecture~\ref{con:komarov} holds.\n\end{enumerate}\n\end{theorem}\n\nIt follows that a.a.s.\ $\log_n \xi({\mathcal{G}}(n,n^{x-1}))$ is asymptotic to the function $g(x)$ shown in Figure~\ref{fig1} (denoted in red). The fact that the conjecture holds is associated with the observation that $g(x) - f(x) = x$, which is equivalent to saying that a.a.s.\ the ratio $\xi({\mathcal{G}}(n,p)) \/ c({\mathcal{G}}(n,p)) = d n^{o(1)} = \Delta({\mathcal{G}}(n,p)) \cdot n^{o(1)}$. 
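The two zigzag functions of Figure~\ref{fig1} can be written down piecewise from the theorems above: writing $x = \alpha$, the classic cop number has exponent $f(x)$ and the containability number has exponent $g(x)$, and on each linear piece $g(x) - f(x) = x$. The short numeric sketch below is our own check of this identity at sample points (it ignores the excluded boundary points $x = 1/k$ and all logarithmic factors).

```python
def f(x):
    """Exponent of log_n c(G(n, n^{x-1})), read off from the zigzag theorem."""
    if x > 1/2:
        return 1 - x                         # c ~ n log n / d
    j = 1
    while True:
        if 1/(2*j + 1) < x < 1/(2*j):        # c = Theta(d^j)
            return j * x
        if 1/(2*j + 2) < x < 1/(2*j + 1):    # c ~ n / d^{j+1}
            return 1 - (j + 1) * x
        j += 1

def g(x):
    """Exponent of log_n xi(G(n, n^{x-1})), read off from the main theorem."""
    if x > 1/2:
        return 1.0                           # xi = Theta(n)
    j = 1
    while True:
        if 1/(2*j + 1) < x < 1/(2*j):        # xi = Theta(d^{j+1})
            return (j + 1) * x
        if 1/(2*j + 2) < x < 1/(2*j + 1):    # xi ~ n / d^j
            return 1 - j * x
        j += 1

# The two zigzags differ by exactly x, matching xi/c = d^{1+o(1)} = Delta^{1+o(1)}.
for x in (0.15, 0.3, 0.45, 0.7):
    assert abs(g(x) - f(x) - x) < 1e-12
print("g(x) - f(x) = x verified at sample points")
```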
Moreover, let us mention that Theorem~\\ref{thm:main} implies that the conjecture is best possible (again, up to a constant or an $O(\\log n)$ multiplicative factors for corresponding ranges of $p$).\n\n\\bigskip\n\nNote that in the above result we skip the case when $np=n^{1\/k+o(1)}$ for some positive integer $k$ or $np=n^{o(1)}$. It is done for a technical reason: an argument for the lower bound for $\\xi({\\mathcal{G}}(n,p))$ uses a technical lemma from~\\cite{lp2} that, in turn, uses Corollary 2.6 from~\\cite{Vu} which is stated only for $np=n^{\\alpha+o(1)}$, where $\\alpha \\neq 1\/k$ for any positive integer $k$. Clearly, one can repeat the argument given in~\\cite{Vu}, which is a very nice but slightly technical application of the polynomial concentration method inequality by Kim and Vu. However, in order to make the paper easier and more compact, a ready-to-use lemma from~\\cite{lp2} is used and we concentrate on the ``linear'' parts of the graph of the zigzag function. Nonetheless, similarly to the corresponding result for $c({\\mathcal{G}}(n,p))$, one can expect that, up to a factor of $\\log^{O(1)}n$, the result extends naturally also to the case $np=n^{1\/k+o(1)}$ as well. \n\nOn the other hand, there is no problem with the upper bound so the case when $np=n^{1\/k+o(1)}$ for some positive integer $k$ is also investigated (see below for a precise statement). Moreover, some expansion properties that were used to prove that Meyniel's conjecture holds for ${\\mathcal{G}}(n,p)$~\\cite{PW_gnp} are incorporated here to investigate sparser graphs. \n\nThe rest of the paper is devoted to prove Theorem~\\ref{thm:main}.\n\n\\section{Proof of Theorem~\\ref{thm:main}}\n\n\\subsection{Typical properties of ${\\mathcal{G}}(n,p)$ and useful inequalities}\n\nLet us start by listing some typical properties of ${\\mathcal{G}}(n,p)$. 
These observations are part of folklore and can be found in many places, so we will usually skip proofs, pointing to corresponding results in existing literature. Let $N_i(v)$ denote the set of vertices at distance $i$ from $v$, and let $N_i[v]$ denote the set of vertices within distance $i$ of $v$, that is, $N_i[v] = \\bigcup_{0 \\le j \\le i} N_j(v)$. For simplicity, we use $N[v]$ to denote $N_1[v]$, and $N(v)$ to denote $N_1(v)$. Since cops occupy edges but the robber occupies vertices, we will need to investigate the set of edges at ``distance'' $i$ from a given vertex $v$ that we denote by $E_i(v)$. Formally, $E_i(v)$ consists of edges between $N_{i-1}(v)$ and $N_i(v)$, and within $N_{i-1}(v)$. In particular, $E_1(v)$ is the set of edges incident to $v$. Finally, let $P_i(v,w)$ denote the number of paths of length $i$ joining $v$ and $w$.\n\n\\bigskip \n\nLet us start with the following lemma. \n\n\\begin{lemma}\\label{lem:elem1}\nLet $d=d(n) = p(n-1) \\ge \\log^3 n$. Then, there exists a positive constant $c$ such that a.a.s.\\ the following properties hold in ${\\mathcal{G}}(n,p) = (V,E)$.\n\\begin{enumerate}\n\\item Let $S \\subseteq V$ be any set of $s=|S|$ vertices, and let $r \\in {\\mathbb N}$. Then\n$$\n\\left| \\bigcup_{v \\in S} N_r[v] \\right| \\ge c \\min\\{s d^r, n \\}.\n$$\nMoreover, if $s$ and $r$ are such that $s d^r < n \/ \\log n$, then\n$$\n\\left| \\bigcup_{v \\in S} N_r[v] \\right| = (1+o(1)) s d^r.\n$$\n\\item ${\\mathcal{G}}(n,p)$ is connected.\n\\item Let $r = r(n)$ be the largest integer such that $d^r \\le \\sqrt{n \\log n}$. Then, for every vertex $v \\in V$ and $w \\in N_{r+1}(v)$, the number of edges from $w$ to $N_r(v)$ is at most $b$, where \n$$\nb = \n\\begin{cases}\n250 & \\text{ if } d \\le n^{0.49} \\\\\n\\frac {3 \\log n}{\\log \\log n} & \\text{ if } n^{0.49} < d \\le \\sqrt{n}.\n\\end{cases}\n$$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe proof of part (i) can be found in~\\cite{PW_gnp}. 
The fact that ${\\mathcal{G}}(n,p)$ is connected is well known (see, for example,~\\cite{JLR}). In fact, the (sharp) threshold for connectivity is $p = \\log n \/ n$ so this property holds for even sparser graphs. \n\nFor part (iii), let us first expose the $r$th neighbourhood of $v$. By part (i), we may assume that $|N_r[v]|=(1+o(1)) d^{r} < 2 d^r$. For any $w \\in V \\setminus N_r[v]$, the probability that there are at least $b$ edges joining $w$ to $N_r(v)$ is at most\n$$\nq := {2 d^r \\choose b} p^b \\le \\left( \\frac{2ed^r}{b} \\right)^b \\left( \\frac dn \\right)^b = \\left( \\frac{2ed^{r+1}}{bn} \\right)^b.\n$$\nIf $d \\le n^{0.49}$, then\n$$\nq \\le \\left( \\frac{2e d \\sqrt{n \\log n}}{bn} \\right)^b \\le n^{-0.005 b} = o(n^{-2}),\n$$\nprovided that $b$ is large enough (say, $b = 250$). For $n^{0.49} < d \\le \\sqrt{n}$ (and so $r=1$), we observe that\n$$\nq \\le \\left( \\frac{2e}{b} \\right)^b = \\exp \\left( - (1+o(1)) b \\log b \\right) = o(n^{-2}),\n$$\nprovided $b = 3 \\log n \/ \\log \\log n$. The claim follows by the union bound over all pairs $v, w$. The proof of the lemma is finished.\n\\end{proof}\n\n\\bigskip \n\nThe next lemma can be found in~\\cite{lp2}. (See also~\\cite{lazy_gnp} for its extension.)\n\n\\begin{lemma}\\label{lem:elem2}\nLet $\\eps$ and $\\alpha$ be constants such that $0<\\eps<0.1$, $\\eps<\\alpha<1-\\eps$, and let $d=d(n)=p(n-1)=n^{\\alpha+o(1)}$. Let $\\ell \\in {\\mathbb N}$ be the largest integer such that $\\ell < 1\/\\alpha$. 
Then, a.a.s.\\ for every vertex $v$ of ${\\mathcal{G}}(n,p)$ the following properties hold.\n\\begin{enumerate}\n\\item [(i)] If $w\\in N_i[v]$ for some $i$ with $2\\le i \\le \\ell$, then $P_i(v,w) \\le \\frac {3}{1-i\\alpha}$.\n\\item [(ii)] If $w\\in N_{\\ell+1}[v]$ and $d^{\\ell+1} \\ge 7 n \\log n$, then $P_{\\ell+1}(v,w) \\le \\frac{6}{1-\\ell \\alpha}\\frac{d^{\\ell+1}}{n}$.\n\\item [(iii)] If $w\\in N_{\\ell+1}[v]$ and $d^{\\ell+1} < 7 n \\log n$, then $P_{\\ell+1}(v,w) \\le \\frac{42}{1-\\ell \\alpha} \\log n$.\n\\end{enumerate}\nMoreover, a.a.s.\\\n\\begin{enumerate}\n\\item [(iv)] Every edge of ${\\mathcal{G}}(n,p)$ is contained in at most $\\eps d$ cycles of length at most $\\ell+2$.\n\\end{enumerate}\n\\end{lemma}\n\n\\bigskip\n\nWe will also use the following variant of Chernoff's bound (see, for example,~\\cite{JLR}):\n\\begin{lemma}[\\textbf{Chernoff Bound}]\nIf $X$ is a binomial random variable with expectation $\\mu$, and $0<\\delta<1$, then \n$$\n\\Pr[X < (1-\\delta)\\mu] \\le \\exp \\left( -\\frac{\\delta^2 \\mu}{2} \\right)\n$$ \nand if $\\delta > 0$,\n$$\n\\Pr [X > (1+\\delta)\\mu] \\le \\exp \\left(-\\frac{\\delta^2 \\mu}{2+\\delta} \\right).\n$$\n\\end{lemma}\n\n\\subsection{Upper bound}\n\nFirst, let us deal with dense graphs that correspond to part (iii) of Theorem~\\ref{thm:main}. In fact, we are going to make a simple observation that the containability number is linear if $G$ has a perfect or a near-perfect matching. The result will follow since it is well-known that for $p=p(n)$ such that $pn - \\log n \\to \\infty$, ${\\mathcal{G}}(n,p)$ has a perfect (or a near-perfect) matching a.a.s. (As usual, see~\\cite{JLR}, for more details.)\n\n\\begin{lemma}\\label{lem:perfect}\nSuppose that $G$ on $n$ vertices has a perfect matching ($n$ is even) or a near-perfect matching ($n$ is odd). Then, $\\xi(G) \\le n$.\n\\end{lemma}\n\\begin{proof}\nSuppose first that $n$ is even. 
The cops start on the edges of a perfect matching; two cops occupy any edge of the matching for a total of $n$ cops. All vertices of $G$ can be associated with unique cops. The robber starts on some vertex $v$. One edge incident to $v$ (the edge $vv'$ that belongs to the perfect matching used) is already occupied by a cop (in fact, by two cops, associated with $v$ and $v'$). Moreover, the remaining cops can move so that all edges incident to $v$ are protected and the game ends. Indeed, for each edge $vu$, the cop associated with $u$ moves to $vu$. \n\nThe case when $n$ is odd is also very easy. Two cops start on each edge of a near-perfect matching which matches all vertices but $u$. If $u$ is isolated, we may simply remove it from $G$ and arrive back to the case when $n$ is even. (Recall that the cops win if all edges incident with the robber are occupied by cops. As this property is vacuously true when the robber starts on an isolated vertex, we may assume that she does not start on $u$.) Hence, we may assume that $u$ is not isolated. We introduce one more cop on some edge incident to $u$. The total number of cops is at most $2 \\cdot \\frac {n-1}{2} + 1 = n$; again, each vertex of $G$ can be associated with a unique cop and the proof goes as before.\n\\end{proof}\n\n\\bigskip\n\nNow, let us move to the following lemma that yields part (i) of Theorem~\\ref{thm:main}. We combine and adjust ideas from both~\\cite{lp2} and~\\cite{PW_gnp} in order to include much sparser graphs. Cases when $\\alpha = 1\/k$ for some positive integer $k$ are also covered. \n\n\\begin{lemma}\nLet $d=d(n) = p(n-1) \\ge \\log^3 n$. 
Suppose that there exists a positive integer $r=r(n)$ such that \n$$\n(n \\log n)^{\\frac {1}{2r+1}} \\le d \\le (n \\log n)^{\\frac {1}{2r}}.\n$$\nThen, a.a.s.\\ \n$$\n\\xi({\\mathcal{G}}(n,p)) = O(d^{r+1}).\n$$\n\\end{lemma}\n\n\\begin{proof}\nSince our aim is to prove that the desired bound holds a.a.s.\\ for ${\\mathcal{G}}(n,p)$, we may assume, without loss of generality, that a graph $G$ the players play on satisfies the properties stated in Lemma~\\ref{lem:elem1}. A team of cops is determined by independently choosing each edge of $e \\in E(G)$ to be occupied by a cop with probability $Cd^r\/n$, where $C$ is a (large) constant to be determined soon. It follows from Lemma~\\ref{lem:elem1}(i) that $G$ has $(1+o(1)) dn\/2$ edges. Hence, the expected number of cops is equal to\n$$\n(1+o(1)) \\frac {dn}{2} \\cdot \\frac {Cd^r}{n} = (1+o(1)) \\frac {Cd^{r+1}}{2} .\n$$\nIt follows from Chernoff's bound that the total number of cops is $\\Theta(d^{r+1})$ a.a.s.\n\nThe robber appears at some vertex $v \\in V(G)$. Let $X \\subseteq E(G)$ be the set of edges between $N_r(v)$ and $N_{r+1}(v)$. It follows from Lemma~\\ref{lem:elem1}(i) that \n$$\n|X| \\le (1+o(1)) d |N_r(v)| \\le 2 d^{r+1}.\n$$\nOur goal is to show that with probability $1-o(n^{-1})$ it is possible to assign distinct cops to all edges $e$ in $X$ such that a cop assigned to $e$ is within distance $(r+1)$ of $e$. (Note that here, the probability refers to the randomness in distributing the cops; the graph $G$ is fixed.) If this can be done, then after the robber appears these cops can begin moving straight to their assigned destinations in $X$. Since the first move belongs to the cops, they have $(r+1)$ steps, after which the robber must still be inside $N_r[v]$, which is fully occupied by cops. She is ``trapped'' inside $N_r[v]$, so we can send an auxiliary team of, say, $2 d^{r+1}$ cops to go to every edge in the graph induced by $N_r[v]$, and the game ends. 
Hence, the cops will win with probability $1-o(n^{-1})$, for each possible starting vertex $v \\in V(G)$. It will follow that the strategy gives a win for the cops a.a.s. \n\nLet $Y$ be the (random) set of edges occupied by cops. Instead of showing that the desired assignment between $X$ and $Y$ exists, we will show that it is possible to assign $b(u)$ distinct cops to all vertices $u$ of $N_{r+1}(v)$, where $b(u)$ is the number of neighbours of $u$ that are in $N_r(v)$ (that is, the number of edges of $X$ incident to $u$) and such that each cop assigned to $u$ is within distance $(r+1)$ from $u$. \n(Note that this time ``distance'' is measured between vertex $u$ and edges which is non-standard. In this paper, we define it as follows: edge $e$ is at distance at most $(r+1)$ from $u$ if $e$ is at distance at most $r$ from some edge adjacent to $u$.)\nIndeed, if this can be done, assigned cops run to $u$, after $r$ rounds they are incident to $u$, and then spread to edges between $u$ and $N_r(v)$; the entire $X$ is occupied by cops. In order to show that the required assignment between $N_{r+1}(v)$ and $Y$ exists with probability $1-o(n^{-1})$, we show that with this probability, $N_{r+1}(v)$ satisfies Hall's condition for matchings in bipartite graphs. \n\nSuppose first that $d \\le n^{0.49}$ and fix $b=250$. It follows from Lemma~\\ref{lem:elem1}(iii) that $b(u) \\le b$ for every $u \\in N_{r+1}(v)$. Set \n$$\nk_0 = \\max \\{ k : k d^{r} < n \\}.\n$$ \nLet $K \\subseteq N_{r+1}(v)$ with $|K|=k \\le k_0$. We may apply Lemma~\\ref{lem:elem1}(i) to bound the size of $\\bigcup_{u \\in K} N_r[u]$ and the number of edges incident to each vertex. 
It follows that the number of edges of $Y$ that are incident to some vertex in $\bigcup_{u \in K} N_r[u]$ can be stochastically bounded from below by the binomial random variable ${\rm Bin}(\lfloor c k d^r \cdot (d\/3) \rfloor, Cd^r\/n)$, whose expected value is asymptotic to $(Cc\/3) k d^{2r+1} \/ n \ge (Cc\/3) k \log n$. Using Chernoff's bound we get that the probability that there are fewer than $bk$ edges of $Y$ incident to this set of vertices is less than $\exp(-4k \log n)$ when $C$ is a sufficiently large constant. Hence, the probability that the sufficient condition in the statement of Hall's theorem fails for at least one set $K$ with $|K|\le k_0$ is at most\n$$\n\sum_{k=1}^{k_0} {|N_{r+1}(v)| \choose k} \exp( - 4 k \log n) \le \sum_{k=1}^{k_0} n^k \exp( - 4 k \log n) = o(n^{-1}).\n$$\n\nNow consider any set $K \subseteq N_{r+1}(v)$ with $k_0 < |K| = k \le |N_{r+1}(v)| \le 2 d^{r+1}$ (if such a set exists). Lemma~\ref{lem:elem1}(i) implies that the size of $\bigcup_{u \in K} N_r[u]$ is at least $cn$, so we expect at least $cn \cdot (d\/3) \cdot Cd^r\/n = (Cc\/3) d^{r+1}$ edges of $Y$ incident to this set. Again using Chernoff's bound, we deduce that the number of edges of $Y$ incident to this set is at least $2 b d^{r+1} \ge b |N_{r+1}(v)| \ge bk$ with probability at least $1-\exp(- 4 d^{r+1})$, by taking the constant $C$ to be large enough. Since\n$$\n\sum_{k=k_0+1}^{|N_{r+1}(v)|} {|N_{r+1}(v)| \choose k} \exp( - 4 d^{r+1} ) \le 2^{2 d^{r+1}} \exp( - 4 d^{r+1} ) = o(n^{-1}),\n$$\nthe necessary condition in Hall's theorem holds with probability $1 - o(n^{-1})$. \n\nFinally, suppose that $d > n^{0.49}$. Since Lemma~\ref{lem:perfect} implies that the result holds for $d > \sqrt{n}$, we may assume that $d \le \sqrt{n}$. (In fact, for $d > \sqrt{n}$ we get a better bound of $n$ rather than $O(d^2)$ that we aim for.) This time, set $b = 3 \log n \/ \log \log n$ to make sure $b(u) \le b$ for all $u \in N_{r+1}(v)$. 
The proof is almost the same as before. For small sets of size at most $k_0 = \\Theta(n\/d)$, we expect $(Cc\/3) k d^{3} \/ n \\ge (Cc\/3) k n^{0.47}$ edges, much more than we actually need, namely, $b k$. For large sets of size more than $k_0$, we modify the argument slightly and instead of assigning $b$ cops to each vertex of $N_{r+1}(v)$, we notice that the number of cops needed to assign is equal to $\\sum_{u \\in K} b(u) \\le |X| \\le 2 d^{r+1}$. (There might be some vertices of $N_{r+1}(v)$ that are incident to $b$ edges of $X$ but the total number of incident edges to $K$ is clearly at most $|X|$.) The rest is not affected and the proof is finished.\n\\end{proof}\n\n\\bigskip\n\nThe next lemma takes care of part (ii) of Theorem~\\ref{thm:main}.\n\n\\begin{lemma}\nLet $d=d(n) = p(n-1) \\ge \\log^3 n$. Suppose that there exists an integer $r=r(n) \\ge 2$ such that \n$$\n(n \\log n)^{\\frac {1}{2r}} \\le d \\le (n \\log n)^{\\frac {1}{2r-1}}.\n$$\nThen, a.a.s.\\ \n$$\n\\xi({\\mathcal{G}}(n,p)) = O \\left( \\frac {n \\log n}{d^{r-1}} \\right).\n$$\n\\end{lemma}\n\\begin{proof}\nWe mimic the proof of the previous lemma so we skip details focusing only on differences. A team of cops is determined by independently choosing each edge of $e \\in E(G)$ to be occupied by a cop with probability $C \\log n \/ d^r$, for the total number of cops $\\Theta(n \\log n \/ d^{r-1})$ a.a.s.\n\nThe robber appears at some vertex $v \\in V(G)$. This time, $X \\subseteq E(G)$ is the set of edges between $N_{r-1}(v)$ and $N_{r}(v)$ and $|X| \\le 2 d^r$. We show that it is possible to assign $b=250$ distinct cops to all vertices $u$ of $N_r(v)$ such that a cop assigned to $u$ is within ``distance'' $r$ from $u$. The definition of $k_0$ has to be adjusted. Set \n$$\nk_0 = \\max \\{ k : k d^{r-1} < n \\}.\n$$ \nLet $K \\subseteq N_{r}(v)$ with $|K|=k \\le k_0$. 
The expected number of edges of $Y$ that are incident to some vertex in $\\bigcup_{u \\in K} N_{r-1}[u]$ is at least $(ckd^{r-1}) (d\/3) (C \\log n \/ d^r) = (Cc\/3) k \\log n$, and the rest of the argument is not affected. Now consider any set $K \\subseteq N_{r}(v)$ with $k_0 < |K| = k \\le |N_{r}(v)| \\le 2 d^{r}$ (if such a set exists). The size of $\\bigcup_{u \\in K} N_{r-1}[u]$ is at least $cn$, so we expect at least \n$$\n(cn) \\left(\\frac {d}{3} \\right) \\left( \\frac {C \\log n}{d^r} \\right) = \\frac {Cc d^r n \\log n}{3 d^{2r-1}} \\ge \\frac {Cc d^r }{3} \n$$ \nedges of $Y$ incident to this set. Hence, the number of edges of $Y$ incident to this set is at least $2 b d^{r} \\ge b |N_{r}(v)| \\ge bk$ with probability at least $1-\\exp(- 4 d^{r})$, by taking the constant $C$ to be large enough. The argument we had before works again, and the proof is finished.\n\\end{proof}\n\n\\subsection{Lower bound}\n\nThe proof of the lower bound is an adaptation of the proof used for the classic cop number in~\\cite{lp2}. The two bounds, corresponding to parts (i) and (ii) in Theorem~\\ref{thm:main}, are proved independently in the following two lemmas. \n\n\\begin{lemma}\\label{lemma:lowerbound1}\nLet $\\frac{1}{2j+1}<\\alpha<\\frac{1}{2j}$ for some integer $j\\ge 1$, $c=c(j,\\alpha)=\\frac{3}{1-2j \\alpha}$, and $d=d(n)=np=n^{\\alpha+o(1)}$. Then, a.a.s.\\\n$$\n\\xi({\\mathcal{G}}(n,p)) > K := \\left( \\frac{d}{30c(2j+1)} \\right)^{j+1}\\,.\n$$\n\\end{lemma}\n\n\\begin{proof}\nSince our aim is to prove that the desired bound holds a.a.s.\\ for ${\\mathcal{G}}(n,p)$, we may assume, without loss of generality, that a graph $G$ the players play on satisfies the properties stated in Lemmas~\\ref{lem:elem1} and~\\ref{lem:elem2}. Suppose that the robber is chased by $K$ cops. Our goal is to provide a winning strategy for the robber on $G$. 
For vertices $x_1,x_2,\\dots,x_s$, let $\\textrm{C}^{x_1,x_2,\\dots,x_s}_i(v)$ denote the number of cops in $E_i(v)$ (that is, at distance $i$ from $v$) in the graph $G \\setminus \\{x_1,x_2, \\dots,x_s\\}$.\n\nRight before the robber makes her move, we say that the vertex $v$ occupied by the robber is \\emph{safe}, if for some neighbour $x$ of $v$ we have $\\textrm{C}^x_1(v) \\le\\frac{d}{30c(2j+1)}$, and\n$$\n\\textrm{C}_{2i}^x(v), \\textrm{C}_{2i+1}^x(v)\\le \\left( \\frac{d}{30c(2j+1)} \\right)^{i+1}\n$$\nfor $i=1,2,\\dots,j-1$ (such a vertex $x$ will be called a \\emph{deadly neighbour} of $v$). The reason for introducing deadly neighbours is to deal with a situation that many cops apply a greedy strategy and always decrease the distance between them and the robber. As a result, there might be many cops ``right behind'' the robber but they are not so dangerous unless she makes a step ``backwards'' by moving to a vertex she came from in the previous round, a deadly neighbour! Moreover, note that a vertex is called safe for a reason: if the robber occupies a safe vertex, then the game is definitely not over since the condition for $C_1^x(v)$ guarantees that at most a small fraction of incident edges are occupied by cops.\n\nSince a.a.s.\\ $G$ is connected (see Lemma~\\ref{lem:elem1}(ii)), without loss of generality we may assume that at the beginning of the game all cops begin at the same edge, $e$. Subsequently, the robber may choose a vertex $v$ so that $e$ is at distance $2j+2$ from $v$ (see Lemma~\\ref{lem:elem1}(i) applied with $r=2j+1$ to see that almost all vertices are at distance $2j+1$ from both endpoints of $e$). Hence, even if all cops will move from $e$ to $E_{2j+1}(v)$ after this move, $v$ will remain safe as no bound is required for $\\textrm{C}_{2j+1}^x(v)$. (Of course, again, without loss of generality we may assume that all cops pass for the next round and stay at $e$ before starting applying their best strategy against the robber.) 
Hence, in order to prove the lemma, it is enough to show that if the robber's current vertex $v$ is safe, then she can move along an unoccupied edge to a neighbour $y$ so that no matter how the cops move in the next round, $y$ remains safe.\n\nFor $0 \\le r \\le 2j$, we say that a neighbour $y$ of $v$ is {\\em $r$-dangerous} if\n\\begin{itemize}\n\\item [(i)] an edge $vy$ is occupied by a cop (for $r=0$)\\,, or\n\\item [(ii)] $\\textrm{C}^{v,x}_r(y) \\ge \\frac 13 \\left( \\frac{d}{30c(2j+1)} \\right)^{i}$ (for $r=2i$ or $r=2i-1$, where $i = 1,2, \\ldots, j$)\\,,\n\\end{itemize}\nwhere $x$ is a deadly neighbour of $v$. We will check that for every $r \\in \\{0, 1, \\ldots, 2j\\}$, the number of $r$-dangerous neighbours of $v$, which we denote by $\\textrm{dang}(r)$, is smaller than $\\frac {d}{2(2j+1)}$. Clearly, since $v$ is safe, \n$$\n\\textrm{dang}(0) \\le \\textrm{C}^x_1(v) \\le \\frac{d}{30c(2j+1)} \\le \\frac {d}{2(2j+1)}.\n$$\nSuppose then that $r=2i$ or $r=2i-1$ for some $i \\in \\{1,2,\\ldots, j\\}$. Every $r$-dangerous neighbour of $v$ has at least $\\frac 13 \\left( \\frac{d}{30c(2j+1)} \\right)^i$ cops occupying $E_{\\le (r+1)}(v)$. On the other hand, every edge from $E_{\\le (r+1)}(v)$ is incident to at most $2$ vertices at distance at most $r$ from $v$. Moreover, Lemma~\\ref{lem:elem2}(i) implies that there are at most $c$ paths between $v$ and any $w \\in N_{\\le r}(v)$. Finally, by the assumption that $v$ is safe, we have $\\textrm{C}^{x}_{2i}(v), \\textrm{C}^{x}_{2i+1}(v) \\le \\left( \\frac{d}{30c(2j+1)} \\right)^{i+1}$, provided that $i \\le j-1$; the corresponding conditions for $\\textrm{C}_{2j}^x(v)$ and $\\textrm{C}_{2j+1}^x(v)$ are trivially true, since both can be bounded from above by $K$, the total number of cops. 
Combining all of these yields\n\begin{eqnarray*}\n\frac 13 \left( \frac{d}{30c(2j+1)} \right)^i \cdot \textrm{dang}(r) &\le& 2c \cdot \textrm{C}_{\le (r+1)}^x(v) \le 2c \cdot (2+o(1)) \textrm{C}_{r+1}^x(v) \\\n&\le& 5 c \cdot \left( \frac{d}{30c(2j+1)} \right)^{i+1},\n\end{eqnarray*}\nand consequently $\textrm{dang}(r) \le \frac {d}{2(2j+1)}$, as required. Thus, at most $d\/2$ of the neighbours of $v$ are $r$-dangerous for some $r \in \{0,1,\dots,2j\}$. \n\nSince we have $(1+o(1)) d$ neighbours to choose from (see Lemma~\ref{lem:elem1}(i)), there are plenty of neighbours of $v$ which are not $r$-dangerous for any $r=0,1,\dots,2j$ and the robber might want to move to one of them. However, there is one small issue we have to deal with. In the definition of being dangerous, we consider the graph $G \setminus \{v,x\}$ whereas in the definition of being safe we want to use $G \setminus \{v\}$ instead. Fortunately, Lemma~\ref{lem:elem2}(iv) implies that we can find a neighbour $y$ of $v$ that is not only not dangerous but also such that $x$ does not belong to the $2j$-neighbourhood of $y$ in $G\setminus \{v\}$. It follows that $vy$ is not occupied by a cop and $\textrm{C}^{v}_r(y) < \frac 13 \left( \frac{d}{30c(2j+1)} \right)^{i}$ for $r=2i$ or $r=2i-1$, where $i = 1,2, \ldots, j$. We move the robber to $y$.\n\nNow, it is time for the cops to make their move. Because of our choice of the vertex $y$, we can ensure that the desired upper bound for $\textrm{C}_r^v(y)$ required for $y$ to be safe will hold for $r \in \{1,2,\dots,2j-1\}$. Indeed, the best that the cops can do to try to violate the condition for $\textrm{C}_r^v(y)$ is to move all cops at distance $r-1$ and $r+1$ from $y$ to the $r$-neighbourhood of $y$, and to make cops at distance $r$ stay put, but this would not be enough. 
Thus, regardless of the strategy used by the cops, $y$ is safe and the proof is finished.\n\end{proof}\n\n\begin{lemma}\label{lemma:lowerbound2}\nLet $\frac{1}{2j}<\alpha<\frac{1}{2j-1}$ for some integer $j\ge 1$, $\bar c=\bar c(\alpha)=\frac{6}{1-(2j-1) \alpha}$ and $d=d(n)=np=n^{\alpha+o(1)}$. Then, a.a.s.\\\n$$\n\xi({\mathcal{G}}(n,p))\ge \bar{K} := \left( \frac{d}{30 \bar c (2j+1)} \right)^{j+1} \frac{n}{d^{2j}}\,.\n$$\n\end{lemma}\n\n\begin{proof}\nThe proof is very similar to that of Lemma~\ref{lemma:lowerbound1}. The only difference is that checking the desired bounds for $\textrm{dang}(2j-1)$ and $\textrm{dang}(2j)$ is slightly more complicated. As before, we do not control the number of cops in $E_{2j}(v)$ and $E_{2j+1}(v)$; clearly, $\textrm{C}^x_{2j}(v)$ and $\textrm{C}^x_{2j+1}(v)$ are bounded from above by $\bar{K}$, the total number of cops. We get\n\begin{eqnarray*}\n\frac 13 \left( \frac{d}{30\bar c(2j+1)} \right)^j \cdot \textrm{dang}(2j-1) &\le& 2\bar c \cdot \textrm{C}_{\le (2j)}^x(v) \le 2\bar c \cdot (2+o(1)) \bar{K} \\\n&\le& 5\bar c \cdot \left( \frac{d}{30\bar c(2j+1)} \right)^{j+1},\n\end{eqnarray*}\nand consequently $\textrm{dang}(2j-1) \le \frac {d}{2(2j+1)}$, as required. (Note that we have room to spare here, but we cannot take advantage of it, so we do not modify the definition of being $(2j-1)$-dangerous.)\n\nLet us now notice that a cop at distance $2j+1$ from $v$ can contribute to the ``dangerousness'' of more than $\bar c$ neighbours of $v$. However, the number of paths of length $2j$ joining $v$ and $w$ is bounded from above by $\bar c d^{2j}\/n$ (see Lemma~\ref{lem:elem2}(ii) and note that $d^{2j} = n^{2j \alpha + o(1)} \ge 7 n \log n$, since $2j \alpha > 1$). 
Hence,\n\begin{eqnarray*}\n\frac 13 \left( \frac{d}{30\bar c(2j+1)} \right)^j \cdot \textrm{dang}(2j) &\le& \frac {2 \bar c d^{2j}}{n} \cdot \textrm{C}_{\le (2j+1)}^x(v) \le \frac {2 \bar c d^{2j}}{n} \cdot (2+o(1)) \bar{K} \\\n&\le& 5 \bar c \cdot \left( \frac{d}{30\bar c(2j+1)} \right)^{j+1},\n\end{eqnarray*}\nand, as desired, $\textrm{dang}(2j)\le \frac{d}{2(2j+1)}$. Apart from this modification, the argument remains essentially the same.\n\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
See \\cite{Sarnak2004Morawetz, Rudnick2005, GhoshReznikovSarnak2013, JorgensonKramer2004}.\n\nIf $X$ is compact, the most general upper bound is due to Sarnak \\cite{Sarnak2004Morawetz}:\n\\begin{equation}\\label{sa}\n{\\|\\phi\\|}_{\\infty} \\ll_X \\lambda_{\\phi}^{(\\dim X - \\rk X)\/4}{\\|\\phi\\|}_2,\n\\end{equation}\na bound which does not use the Hecke property and is in fact sharp (for general $X$) under these weaker assumptions. Sarnak derives this bound from asymptotics of spherical functions. A slightly different but ultimately related argument proceeds via a pre-trace inequality that bounds ${\\|\\phi\\|}_{\\infty}^2$ by a sum of an automorphic kernel over $\\gamma \\in \\Gamma$. If the test function is an appropriate Paley--Wiener function, only the identity contributes to this sum, and one obtains as a (``trivial'') upper bound for ${\\|\\phi\\|}_{\\infty}$ the square-root of the spectral density as given in terms of the Harish-Chandra $\\textbf{c}$-function. If the Langlands parameters of $\\phi$ are in generic position, this coincides with \\eqref{sa}.\n\nTo go beyond \\eqref{sa}, one uses a test function that localizes not only the archimedean Langlands parameters, but in addition the parameters at a large number of finite places (where ``large'' means a function tending to infinity as a small and carefully chosen power of $\\lambda_{\\phi}$). This is called the amplification technique and leads, after estimating the automorphic kernel, to a problem in the geometry of numbers: count the elements of $G$ which appear in Hecke correspondences and lie in regions of $G$ according to the size of the kernel (such as counting rescaled integer matrices lying close to $K$). 
It has been implemented successfully in a variety of cases, see e.g.\\ \\cite{IwaniecSarnak1995, HarcosTemplier2013, BlomerPohl2016, BlomerMaga2016, Marshall2014a, Templier2015, Saha2017a, BlomerHarcosMagaMilicevic2020} and the references therein.\n\n\\subsection{Automorphic forms with $K$-types}\nIn this paper we want to open a new perspective on the sup-norm problem and propose a version of higher complexity. The sup-norm problem makes perfect sense not only on the level of symmetric spaces, but also on the level of groups, and a priori there is no reason why one should restrict to spherical, i.e.\\ right $K$-invariant automorphic forms. Let $\\tau$ be an irreducible unitary representation of $K$ on some finite-dimensional complex vector space $V^{\\tau}$, and consider the homogeneous vector bundle over $G\/K$ defined by $\\tau$. A cross-section may then be identified with a vector-valued function $f:G\\to V^{\\tau}$ which transforms on the right by $K$ with respect to $\\tau$:\n\\[ f(gk) = \\tau(k^{-1})f(g),\\qquad g\\in G,\\quad k \\in K. \\]\nIt is now an interesting question to bound the sup-norm of $f$ or, more delicately, its components as the \\emph{dimension} of $V^{\\tau}$ gets large. Such a situation cannot be realized in the classical case $G = \\SL_2(\\RR)$, since $K = \\SO_2(\\RR)$ is abelian, hence each $V^{\\tau}$ is one-dimensional. In this paper, we offer a detailed investigation of the first nontrivial case $G = \\SL_2(\\CC)$. For concreteness, we choose the congruence lattice $\\Gamma = \\SL_2(\\ZZ[i])$, although our results extend to more general arithmetic quotients of $G$ using the techniques in \\cite{BlomerHarcosMagaMilicevic2020}.\n\nNontrivial irreducible unitary representations of $G$ are principal series representations parametrized by certain pairs $(\\nu, p) \\in \\mfa^{\\ast}_{\\CC} \\times \\frac{1}{2}\\ZZ$, where as usual $\\mfa$ is the Lie algebra of the subgroup of positive diagonal matrices; see \\S\\ref{SL2C-subsec}. 
(By a small abuse of notation we will later interpret $\\nu$ simply as a complex number.) Each representation space $V$ of $G$ decomposes as a Hilbert space direct sum\n\\begin{equation}\\label{decomp}\nV = \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}} V^{\\ell}\n= \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}} \\ \\bigoplus_{\\substack{|q| \\leq \\ell\\\\q\\equiv\\ell\\one}}V^{\\ell,q},\n\\end{equation}\nwhere $V^{\\ell,q}$ is one-dimensional. Here and later, $\\ell\\in\\frac{1}{2}\\ZZ_{\\geq 0}$ parametrizes the $K$-type, i.e.\\ the $(2\\ell+1)$-dimensional representation $\\tau_{\\ell}$ of $K$, and the diagonal matrix $\\diag(e^{i\\varrho}, e^{-i\\varrho})\\in K$ acts on $V^{\\ell,q}$ by $e^{2qi\\varrho}$. (The upper index $\\ell$ in $V^\\ell$ should not be mistaken for $\\ell$-th power.) \n\nRepresentations occurring in $L^2(\\Gamma\\backslash G)$ consist of even functions on $G$ and have $p\\in\\ZZ$. A representation contains a spherical vector only if $p = 0$. In particular, the forms with $p\\neq 0$ are untouched by any of the spherical sup-norm literature. For $p \\neq 0$, no complementary series exists, so $\\nu \\in i\\mfa^{\\ast}$.\n\n\\subsection{Main results I: vector-valued forms}\\label{main-results-intro-sec}\nAs explained above, we are interested in ``big'' $K$-types which occur for all representation parameters $|p| \\leq \\ell$, but arguably the most interesting case is when the $K$-type is ``new'' and no lower $K$-types appear in the same automorphic representation space. Hence from now on we restrict to $p = \\ell$. The sup-norm problem for large $\\nu$ was studied in detail in \\cite{BlomerHarcosMagaMilicevic2020}, so here we keep $\\nu$ in a fixed compact subset $I\\subset i\\RR$ and let $\\ell$ vary. The spectral density is a constant multiple of $p^2 - \\nu^2$. 
In particular, for a given $K$-type $\\tau_{\\ell}$, there are $\\OO_I(\\ell^2)$ cuspidal automorphic representations $V\\subset L^2(\\Gamma\\backslash G)$ with spectral parameter $\\nu\\in I$ and $p = \\ell$ (see \\cite{dever2020ambient}), and in the light of the trace formula this bound is expected to be sharp. In each of these we consider the $(2\\ell+1)$-dimensional subspace $V^{\\ell}$. Let us choose an orthonormal basis $\\{ \\phi_q : |q| \\leq \\ell\\}$ of $V^{\\ell}$, with $\\phi_q\\in V^{\\ell,q}$ as in \\eqref{decomp}. The function $G\\to\\CC^{2\\ell+1}$ given by\n\\begin{equation}\\label{eq:vectorvalued}\ng\\mapsto\\left(\\phi_{-\\ell}(g), \\dotsc, \\phi_\\ell(g)\\right)^{\\top}\n\\end{equation}\nis a vector-valued automorphic form for the group $\\Gamma$ with spectral parameter $\\nu$ and $K$-type $\\tau_{\\ell}$.\nThe Hermitian norm of this function,\n\\[\\Phi(g):=\\Bigl(\\sum_{|q|\\leq \\ell} |\\phi_q(g)|^2\\Bigr)^{1\/2},\\qquad g\\in G,\\]\nis independent of the choice of the orthonormal basis, and it satisfies ${\\|\\Phi\\|}_2=(2\\ell+1)^{1\/2}$. Let us fix a compact subset $\\Omega\\subset G$. Our remarks on spectral density and dimension suggest that\n\\begin{equation}\\label{3\/2-exp}\n {\\| \\Phi|_{\\Omega} \\|}_{\\infty} :=\\Bigl\\| \\sum_{|q| \\leq \\ell} |\\phi_q|_{\\Omega}|^2\\Bigr\\|^{1\/2}_{\\infty}\\ll_{\\Omega, I} \\ell^{3\/2}\n \\end{equation}\nshould be regarded as the ``trivial'' bound; cf.~Remark~\\ref{non-arith}. Our first result is a power-saving improvement.\n\n\\begin{theorem}\\label{thm1} Let $\\ell\\geq 1$ be an integer, $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact sets.\nLet $V\\subset L^2(\\Gamma\\backslash G)$ be a cuspidal automorphic representation with minimal $K$-type $\\tau_{\\ell}$ and spectral parameter $\\nu_V\\in I$. Then for any $\\eps>0$ we have\n\\[ {\\| \\Phi|_{\\Omega} \\|}_{\\infty} \\ll_{\\eps,I,\\Omega} \\ell^{4\/3+\\eps}. 
\\]\n\\end{theorem}\n\nWe will explain some ideas of the proof in a moment, but we remark already at this point that the exponent is the best possible, given that we sacrifice cancellation of the terms on the geometric side of the pre-trace formula and given our\ncurrent knowledge on the construction of the most efficient amplifier. In other words, under these conditions we solve the arising matrix counting problem in a best possible way.\nSince we trivially have ${\\|\\Phi\\|}_{\\infty}\\gg\\ell^{1\/2}$, the above bound is one-sixth of the way from the trivial down to the best possible exponent (absent the possibility of some escape of mass into a cusp). This matches (after a renormalization) the original and still the best available subconvexity exponent $5\/24$ of Iwaniec--Sarnak~\\cite{IwaniecSarnak1995} for the sup-norms of spherical Maa{\\ss} forms of large Laplace eigenvalue on arithmetic hyperbolic surfaces.\n\n\\subsection{Main results II: individual vectors}\nIt is a much more subtle endeavour to investigate the sup-norm of the individual basis elements $\\phi_q$. Here one must contend with the inherent high multiplicity, a known serious barrier in the sup-norm problem. Indeed, a straightforward construction \\cite{Sarnak2004Morawetz} shows that\nsome scalar-valued $L^2$-normalized form $\\phi\\in V^{\\ell}$ (essentially the projection of the vector-valued form \\eqref{eq:vectorvalued} in the modulus-maximizing direction) has sup-norm on $\\Omega$ as large as ${\\|\\Phi|_{\\Omega}\\|}_{\\infty}$ in Theorem~\\ref{thm1}. However, our natural basis $\\{\\phi_q : |q| \\leq \\ell\\}$ of $V^{\\ell}$ is distinguished by consisting of eigenfunctions under the action of the group $\\{\\diag(e^{i\\theta}, e^{-i\\theta}) : \\theta \\in \\RR\\}$ of diagonal matrices in $K$. This is the classical basis with respect to which the representation $\\tau_\\ell$ is given by the Wigner $D$-matrix. 
By a heuristic reasoning similar to that for \eqref{3\/2-exp}, one might expect that the baseline bound should be ${\| \phi_q |_{\Omega} \|}_{\infty} \ll \ell$. Indeed, we prove this bound up to a factor of $\ell^{\eps}$ (cf.\ Remark~\ref{non-arith} below), noting that it is not ``trivial'' in any sense other than that it does not require arithmeticity. Moreover, as shown by the next theorem, we are in fact able to break this barrier.\n\n\begin{theorem}\label{thm2}\nUnder the assumptions of Theorem~\ref{thm1}, we have\n\[\max_{|q|\leq\ell}\, {\| \phi_q |_{\Omega} \|}_{\infty} \ll_{\eps,I,\Omega} \ell^{26\/27+\eps}.\]\n\end{theorem}\n\nFor special values of $q$ we can improve on the exponent considerably. The central vector $\phi_{0}$ is distinguished as the ``archimedean newvector'' \cite{Popa2008} in the sense that its Whittaker function determines the archimedean $L$-factor of the underlying representation. Another interesting situation is the extreme case of the vector $\phi_{\pm\ell}$.\n\n\begin{theorem}\label{thm3} Keep the assumptions of Theorem~\ref{thm1}. \n\begin{enumerate}[(a)]\n\item\label{thm3-a}\nFor $q = 0$ we have\n\[ {\| \phi_{0} |_{\Omega} \|}_{\infty} \ll_{\eps,I,\Omega} \ell^{7\/8+\eps}.\]\n\item\label{thm3-b}\nSuppose that $V$ lifts to an automorphic representation for $\PGL_2(\ZZ[i])\backslash\PGL_2(\CC)$. For $q = \pm \ell$ we have\n\[{\| \phi_{\pm \ell} |_{\Omega} \|}_{\infty} \ll_{\eps,I,\Omega} \ell^{1\/2+\eps}. \]\n\end{enumerate}\n\end{theorem}\n\nThe strong numerical saving in the case $q = \pm \ell$, going far beyond the Weyl exponent, is quite remarkable, in particular in view of the seemingly weaker saving in Theorem~\ref{thm1}, which might be regarded as an easier case. We will discuss this in \S \ref{KS-intro}. 
The assumption that $V$ is associated to a representation of $\\PGL_2$ rather than $\\SL_2$ is only for technical simplicity and not essential to the method, cf.\\ \\S \\ref{RSSection}. This assumption holds if and only if the elements of $V$ are fixed by the Hecke operator $T_i$ (which is an involution on $L^2(\\Gamma\\backslash G)$).\n\n\\begin{remark} In the case of the spherical sup-norm problem, Sarnak~\\cite{Sarnak2004Morawetz} put forward the purity conjecture that the accumulation points of the set\n\\[ \\left\\{\\frac{\\log {\\|\\psi\\|}_{\\infty}}{\\log \\lambda_{\\psi}}:\\text{$\\psi$ is a joint eigenfunction}\\right\\} \\]\nlie in $\\frac{1}{4}\\ZZ$. It would be very interesting to see if an analogous conjecture may be expected in the $K$-aspect, and even if there may be examples exhibiting different layers of power growth as in \\cite{Milicevic2011,Blomer2020,BrumleyMarshall2020}. In particular, the savings in Theorem~\\ref{thm3} produce already a considerable ``exponent gap''.\n\\end{remark}\n\n\\begin{remark}\\label{non-arith}\nWe record that our essentially best possible estimates on the spherical trace function in \\S\\ref{gen-sph-fun-intro-sec}, which are of purely analytic nature, coupled with the formalism of the pre-trace inequality, yield what might be considered ``trivial'' geometric estimates: for any co-finite Kleinian subgroup $\\Gamma\\leq G$, without any arithmeticity assumption, we have\n\\[ {\\|\\Phi|_{\\Omega}\\|}_{\\infty}\\ll_{I,\\Omega,\\Gamma}\\ell^{3\/2}\\qquad\\text{and}\\qquad\n{\\|\\phi_q|_{\\Omega}\\|}_{\\infty}\\ll_{\\eps,I,\\Omega,\\Gamma}\\ell^{1+\\eps} \\]\nfor any $L^2$-normalized vector-valued Maa{\\ss} eigenform $(\\phi_{-\\ell},\\dots,\\phi_{\\ell})^{\\top}$ with spectral parameter $\\nu\\in I$ and $K$-type $\\tau_{\\ell}$ (with $\\phi_q\\in V^{\\ell,q}$ as before).\n\\end{remark}\n\nOur Theorems~\\ref{thm1}--\\ref{thm3} above, and the non-spherical sup-norm problem in general, come with several novelties of 
representation theoretic, analytic and arithmetic nature that we discuss briefly in the following subsections.\n\n\\subsection{Generalized spherical functions}\\label{gen-sph-fun-intro-sec}\nThe classical pre-trace formula features on the geometric side the Harish-Chandra transform $\\widecheck{h}$ of the test function $h$ on the spectral side. This transform is a bi-$K$-invariant function obtained by integrating $h$ against the elementary spherical functions (which themselves are bi-$K$-invariant, and hence in the case of $G = \\SL_2(\\CC)$ simply a function of one real variable). In typical applications there is no cancellation in this integral, so an asymptotic analysis of spherical functions is the first key step (see \\cite{BlomerPohl2016} for a general result in this direction). Our set-up requires a generalized version for homogeneous vector bundles over $G\/K$. For $G = \\SL_2(\\CC)$, the corresponding \\emph{spherical trace function} equals (see \\S \\ref{section:sphericaltransform} for details)\n\\begin{equation}\\label{spherical-def}\n\\varphi_{\\nu,\\ell}^{\\ell}(g) =\n(2\\ell+1)\\int_K \\psi_{\\ell}(\\kappa(k^{-1} g k))\\,e^{(\\nu-1)\\rho(H(gk))}\\,\\dd k,\n\\end{equation}\nwhere $\\dd k$ is the probability Haar measure on $K$, $\\rho$ is the unique positive root, $\\kappa$ (resp.\\ $H$) is the $KAN$ Iwasawa projection onto $K$ (resp.\\ $\\mfa$), and\n\\begin{equation}\\label{chi-ell}\n\\psi_{\\ell}\\left( \\begin{pmatrix} \\alpha &\\beta \\\\ - \\bar{\\beta} & \\bar{\\alpha} \\end{pmatrix} \\right) := \\bar{\\alpha}^{2\\ell}, \\qquad \\left( \\begin{matrix} \\alpha &\\beta \\\\ - \\bar{\\beta} & \\bar{\\alpha} \\end{matrix} \\right) \\in K.\n\\end{equation}\nThe trivial bound is $|\\varphi_{\\nu,\\ell}^{\\ell}(g)|\\leq 2\\ell+1$, which is sharp for $g = \\pm\\id$, and the key question is how quickly $\\varphi_{\\nu,\\ell}^{\\ell}(g)$ decays, uniformly in $\\ell$, as $g\\in G$ moves away from $\\pm\\id$. 
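For orientation, the trivial bound admits a one-line justification: since $|\psi_{\ell}| \le 1$ on $K$ and $\Re\nu = 0$, the integrand in \eqref{spherical-def} is dominated by the integrand defining the elementary spherical function at the spectral parameter $0$ (in the notation of \eqref{spherical-def} specialized to $\ell = 0$).

```latex
% One-line justification of the trivial bound: for \nu \in i\RR the
% exponential factor has modulus e^{-\rho(H(gk))}, and |\psi_\ell| \le 1
% on K, so
\[
  |\varphi_{\nu,\ell}^{\ell}(g)|
  \;\le\; (2\ell+1)\int_{K} e^{-\rho(H(gk))}\,\dd k
  \;=\; (2\ell+1)\,\varphi_{0,0}^{0}(g)
  \;\le\; 2\ell+1,
\]
% since \varphi_{0,0}^{0} is Harish-Chandra's \Xi-function, a matrix
% coefficient of unit vectors in a unitary representation, hence bounded
% by its value 1 at the identity.
```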
We observe that $\\varphi_{\\nu,\\ell}^{\\ell}(g)$ is invariant under conjugation by $K$, hence it suffices to investigate it for upper triangular matrices $g\\in G$. We shall use the Frobenius norm\n$\\|g\\|:= \\sqrt{\\tr(gg^*)}$, and we note that for $g\\in G$ this is always at least $\\sqrt{2}$. The following bound is new and most likely sharp for fixed $\\nu\\in i\\RR$ (up to factors $\\ell^{\\eps}$ and powers of $\\| g \\|$).\n\n\\begin{theorem}\\label{thm4} Let $\\ell\\geq 1$ be an integer, and let $g = \\left(\\begin{smallmatrix} z & u\\\\ & z^{-1}\\end{smallmatrix}\\right) \\in G$ be upper triangular. Then for any $\\nu\\in i\\RR$, $k \\in K$, $\\eps>0$, we have\n\\[ \\varphi_{\\nu,\\ell}^{\\ell}(k^{-1}gk) \\ll_{\\eps} \n\\min\\left(\\ell, \\frac{\\ell^\\eps\\|g\\|^6}{|z^2 - 1|^2}, \\frac{\\ell^{1\/2+\\eps}\\|g\\|^3}{|u|}\\right). \\]\n\\end{theorem}\n\nThe proof shows that the factor $\\ell^{\\varepsilon}$ can be replaced with a suitable power of $\\log 2\\ell$. The same remark applies to Theorems~\\ref{thm6} and \\ref{thm5} below. \n \nThe spherical trace function $\\varphi_{\\nu,\\ell}^{\\ell}$ can be used to analyze the vector-valued function\n\\eqref{eq:vectorvalued}. It is, unfortunately, unable to identify the individual components $\\phi_q$, and there does not seem to exist a general theory of spherical functions covering such cases. As the components are eigenfunctions of the action of the diagonal elements, we can single out $\\phi_q$ by considering\n\\begin{equation}\\label{spherical-averaged}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g):=\\frac1{2\\pi}\\int_0^{2\\pi}\\varphi_{\\nu,\\ell}^{\\ell}\\left(g\\diag(e^{i\\varrho},e^{-i\\varrho})\\right)\\,e^{-2qi\\varrho}\\,\\dd \\varrho.\n\\end{equation}\nThe function $\\varphi_{\\nu,\\ell}^{\\ell,q}$ is an interesting object that does not seem to have been considered before. 
It is not conjugation invariant anymore, so it needs to be analyzed on the entire $6$-dimensional group $G = \\SL_2(\\CC)$, and little preliminary reduction is possible. When restricted to $K$, it is not hard to see that $\\varphi_{\\nu,\\ell}^{\\ell,q}(k)$, for $k = k[u,v,w]\\in K$ written in terms of Euler angles (cf.\\ \\eqref{decomp-K}), is essentially a Jacobi polynomial in $\\cos 2v$. We refer to \\S \\ref{thm5a-proof-sec} for a more detailed discussion. In particular,\n$\\varphi_{\\nu,\\ell}^{\\ell,q}(\\pm\\id)=1$. Therefore, at least heuristically, a safe baseline bound should be\n\\begin{equation}\\label{trivial-q}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g) \\ll_\\eps \\ell^\\eps.\n\\end{equation}\nUnlike in the bi-$K$-invariant case, where the trivial bound is just an application of the triangle inequality and hence is indeed trivial, the expected baseline bound \\eqref{trivial-q} turns out to be hard to prove. It requires very strong cancellation in the $\\varrho$-integral, along with the decay properties of $\\varphi_{\\nu,\\ell}^{\\ell}$. Taking \\eqref{trivial-q} for granted, we wish to investigate in what directions and with what speed we can identify decay as we move away from $\\pm\\id\\in G$. Interestingly, this is extremely sensitive to the value of $q$.\n\nLet $\\mcD\\subset G$ be the set of diagonal matrices, $\\mcS$ the normalizer of $A$ in $K$ (which consists of the diagonal and the skew-diagonal matrices lying in $K$), and\n\\begin{equation}\\label{adbc}\n\\mcN:=\\left\\{\\begin{pmatrix}a & b\\\\ c & d \\end{pmatrix}\\in G: |a| = |d|, \\ |b| = |c|\\right\\}.\n\\end{equation}\nIt is clear that $\\mcS\\subset K\\subset\\mcN\\subset G$. For $g \\in G$ and non-empty $\\mcH \\subset G$, we shall write $\\dist(g,\\mcH)$ for their distance $\\inf_{h\\in\\mcH}\\|g-h\\|$. 
Note that here $\\|g-h\\|=\\|g^{-1}-h^{-1}\\|$, hence also\n\\begin{equation}\\label{distinvariance}\n\\dist(g,\\mcH)=\\dist(g^{-1},\\mcH^{-1}).\n\\end{equation}\nAs an alternative to $\\dist(g,\\mcN)$, we shall also use\n\\begin{equation}\\label{Dgdef}\nD(g):=\\left||a|^2-|d|^2\\right|+\\left||b|^2-|c|^2\\right|.\n\\end{equation}\nFor orientation, we remark the elementary inequality\n\\[\\dist(g,\\mcN)^2\\leq D(g)\\leq 2\\|g\\|\\dist(g,\\mcN).\\]\nIn the spirit of \\cite[Th.~2]{BlomerPohl2016}, we use a soft argument that provides some decay of $\\varphi_{\\nu,\\ell}^{\\ell,q}(g)$ in generic ranges and considerable uniformity.\n\\begin{theorem}\\label{thm6} Let $\\ell,q\\in\\ZZ$ be such that $\\ell\\geq\\max(1,|q|)$. Let $\\nu\\in i\\RR$ and $g \\in G$.\nThen for any $\\eps>0$ and $\\Lambda>0$, we have\n\\begin{equation}\\label{thm6bound}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g) \\ll_{\\eps,\\Lambda}\\ell^{\\eps}\\min\\left(1,\\frac{\\| g \\|}{\\sqrt{\\ell}\\dist(g,K)^2\\dist(g,\\mcD)}\\right) + \\ell^{-\\Lambda}.\n\\end{equation}\n\\end{theorem}\nIn the special case $q\\in\\{-\\ell,0,\\ell\\}$, we use more elaborate arguments for stronger bounds.\n\n\\begin{theorem}\\label{thm5} Let $\\ell\\geq 1$ be an integer, $\\nu\\in i\\RR$ and $g\\in G$. 
Let $\\eps>0$ and $\\Lambda>0$ be two parameters.\n\\begin{enumerate}[(a)]\n\\item\\label{thm5-a}\nWe have\n\\begin{equation}\\label{thm5boundq=0}\n\\varphi_{\\nu,\\ell}^{\\ell,0}(g) \\ll_{\\eps,\\Lambda}\n\\ell^{\\eps}\\min\\left(1, \\frac{1}{\\sqrt{\\ell} \\dist(g, \\mcS)}\\right)+\\ell^{-\\Lambda}.\n\\end{equation}\nMoreover, $\\varphi_{\\nu,\\ell}^{\\ell,0}(g) \\ll_{\\Lambda} \\ell^{-\\Lambda}$ holds unless $D(g)\\ll_\\Lambda\\|g\\|^2(\\log\\ell)\/\\sqrt{\\ell}$.\n\\item\\label{thm5-b}\nWe have\n\\begin{equation}\\label{thm5boundq=ell}\n\\varphi_{\\nu,\\ell}^{\\ell,\\pm \\ell}(g) \\ll_{\\eps} \\| g \\|^{-2+\\eps} \\ell^{\\eps}.\n\\end{equation}\nMoreover, $\\varphi_{\\nu,\\ell}^{\\ell,\\pm \\ell}(g) \\ll_{\\Lambda} \\ell^{-\\Lambda}$ holds unless\n$\\dist(g, \\mcD) \\ll_\\Lambda\\| g \\|\\sqrt{\\log\\ell}\/\\sqrt{\\ell}$.\n\\end{enumerate}\n\\end{theorem}\n\nWe expect that the bounds in Theorem~\\ref{thm5} are essentially best possible, possibly up to powers of $\\ell^{\\varepsilon}$ and $\\| g \\|$. The proof requires detailed analysis that could in principle be applied to all values of $q$ and would detect, for instance, further Airy-type bumps in certain regions and for certain choices of parameters.\n\n\\begin{remark}\\label{remark3}\nLess precise results but in a more general setting were obtained by Ramacher~\\cite{Ramacher2018} using operator theoretical methods. Combined with an argument of Marshall~\\cite{Marshall2014a}, these were applied by Ramacher--Wakatsuki~\\cite{RamacherWakatsuki2017a} to the sup-norm problem with $K$-types. 
For compact arithmetic quotients of $\\SL_2(\\CC)$, and for $\\phi\\in V^{\\ell}$ as before,\n\\cite[Th.~7.12]{RamacherWakatsuki2017a} yields ${\\| \\phi \\|}_{\\infty} \\ll \\ell^{5\/2 - \\delta}$ with an unspecified\nconstant $\\delta>0$; this does not even recover the baseline bound.\n\\end{remark}\n\n\\subsection{Paley--Wiener theory}\nFor a reductive Lie group $G$, Paley--Wiener theory characterizes the image of $C_c^{\\infty}(G)$ under the Harish-Chandra transform. For bi-$K$-invariant functions, this is a famous result of Gangolli~\\cite{MR289724}: the image consists of entire, Weyl group invariant functions satisfying certain growth conditions. For general $K$-finite functions, the picture is much more complicated: any linear relation that holds for the matrix coefficients of generalized principal series also needs to hold for the matrix coefficients of the operator-valued Fourier transform (and hence for the $\\tau$-spherical transforms for $\\tau\\in\\widehat{K}$). A complete list of these ``Arthur--Campoli relations'' requires a full knowledge of all the irreducible subquotients of the non-unitary principal series, which in general is not available. Arthur~\\cite{MR697608} describes them as a sequence of successive residues of certain meromorphic functions; see also \\cite{Campoli1979}. Needless to say, a good knowledge of available functions on the spectral side is crucial for the quantitative analysis of the pre-trace formula in the sup-norm problem.\n\nFor the case of $G = \\SL_2(\\CC)$, in a somewhat neglected paper, Wang~\\cite{Wang1974} devised an elegant argument to establish a completely explicit Paley--Wiener theorem for the $\\tau_{\\ell}$-spherical transform acting on $C_c^{\\infty}(G)$: in addition to the Weyl group symmetry, we have the additional symmetry $(\\nu, p) \\leftrightarrow (p, \\nu)$ whenever $\\nu \\equiv p\\pmod{1}$ and $|\\nu|, |p| \\leq \\ell$; see Theorem~\\ref{thm10} in \\S \\ref{section:sphericaltransform}. 
The additional symmetry is counter-intuitive at first (the pairs $(\\nu,p)\\neq(0,0)$ satisfying $\\nu\\equiv p\\pmod{1}$ correspond to a discrete set of non-unitary representations), but it enters the picture as it fixes the eigenvalues $\\nu^2+p^2$ and $\\nu p$ of two generators of $Z(\\mathcal{U}(\\mfg))$, and hence the infinitesimal character. See \\cite[Cor.~2]{Wang1974} and its proof. A more conceptual explanation, along the lines of irreducible subquotients, can be found after \\eqref{eq:tau-ell-isotypical-decomposition-algebraic}. Wang's remarkable result is that these are \\emph{all} relations.\n\nThe extra symmetry makes the application of the pre-trace formula more delicate. For instance, it appears impossible to single out an individual value of $p$ by a manageable test function on the spectral side. We circumvent this problem by employing a carefully chosen Gaussian \\eqref{eq:def-gaussian-spectral-weight} that at least asymptotically singles out our preferred value $p=\\ell$. The price to pay for this maneuver is that we lose compact support. As a result of independent interest, we prove a new Paley--Wiener theorem for $K$-finite Schwartz class functions on $G = \\SL_2(\\CC)$. 
For the notation, see \\S \\ref{section:sphericaltransform}.\n\n\\begin{theorem}\\label{thm:pws} For $f\\in\\mcH(\\tau_{\\ell})$, the following two conditions are equivalent (with implied constants depending on $f$).\n\\begin{enumerate}[(a)]\n\\item\\label{pws-a} The function $f(g)$ is smooth, and for any $m\\in\\ZZ_{\\geq 0}$ and $A>0$ we have\n\\begin{equation}\\label{eq:mAbound}\n\\frac{\\partial^m}{\\partial h^m}f(k_1 a_h k_2)\\ll_{m,A} e^{-A|h|},\\qquad h\\in\\RR,\\quad k_1,k_2\\in K.\n\\end{equation}\n\\item\\label{pws-b} The function $\\widehat{f}(\\nu,p)$ extends holomorphically to $\\CC\\times\\tfrac12\\ZZ$ such that\n\\begin{equation}\\label{eq:symmetry}\\widehat{f}(\\nu,p)=\\widehat{f}(p,\\nu),\\qquad\\nu\\equiv p\\one,\\quad|\\nu|,|p|\\leq\\ell,\n\\end{equation}\nand for any $B,C>0$ we have\n\\begin{equation}\\label{eq:BCbound}\n\\widehat{f}(\\nu,p)\\ll_{B,C} (1+|\\nu|)^{-C},\\qquad|\\Re\\nu|\\leq B,\\quad p\\in\\tfrac12\\ZZ.\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\nThe Schwartz space offers a lot more flexibility in applications. A less precise result for more general groups is given in \\cite[Th.~3]{MR1111747}, and we refer the reader to the introduction of that paper for additional discussion and motivation of Paley--Wiener type theorems for rapidly decaying functions.\n\n\\subsection{Beyond the pre-trace formula: a fourth moment}\\label{KS-intro}\nWe still owe an explanation for the sub-Weyl exponent in Theorem~\\ref{thm3}\\ref{thm3-b}, where $q = \\pm \\ell$. The proof of this bound is different from the other results: it is inspired by a brilliant recent idea of Steiner and Khayutin--Steiner~\\cite{St, KhayutinSteiner2020} in the \\emph{weight} aspect for the groups $\\SO_3(\\RR)$ and $\\SL_2(\\RR)$. The starting point is the desire to choose the amplifier so long that it works as self-amplification. 
In this way, the amplifier can be made independent of the well-known but inefficient trick of using the Hecke relation $\lambda_p^2 - \lambda_{p^2} = 1$. A self-amplified second moment is in effect a fourth moment, and the key observation is that it can be realized as the diagonal term in a \emph{double} pre-trace formula. This only has a chance to work if the corresponding geometric side can be analyzed sufficiently accurately, and to this end, two extra features are necessary: a special behaviour of spherical functions with rapid decay conditions (such as, for instance, the Bergman kernel for $\SL_2(\RR)$) and the possibility for a \emph{second moment} count on the geometric side, i.e.\ pairs of matrices, in a best possible way.\n\nFor the proof of Theorem~\ref{thm3}\ref{thm3-b}, we implement this idea for the first time in the context of principal series representations. Our proof proceeds differently from both \cite{St} and \cite{KhayutinSteiner2020}. We avoid the theta correspondence and instead detect the diagonal term in the double pre-trace formula by an argument that is reminiscent of the Voronoi formula for Rankin--Selberg $L$-functions over $\QQ[i]$, cf.\ \S \ref{sec28}. As we lose positivity, we have to use the full power of the pre-trace formula, unlike our other results where the softer pre-trace inequality suffices. The argument is analytically subtle, since we also lose the possibility to choose the test function in the pre-trace formula freely: part of it is now given to us by the gamma kernel in the Voronoi summation formula (a new feature compared to \cite{St} and \cite{KhayutinSteiner2020}). 
At this point we need a very precise understanding of the Harish-Chandra transform in Theorem~\\ref{thm:pws} with complete uniformity in the auxiliary complex parameters, and the reader may observe that in the end only the strong $g$-dependence in \\eqref{thm5boundq=ell} saves the final bound.\n\n\\subsection{Matrix counting}\nHaving discussed some of the analytic and representation-theoretic novelties, we finally comment briefly on the arithmetic part. In all previous instances of the sup-norm problem, the analysis of the geometric side of the pre-trace formula amounts to counting matrices close to $K$, because the elementary spherical function is bi-$K$-invariant and decays away from $K$. Given the results on spherical trace functions in \\S \\ref{gen-sph-fun-intro-sec}, it is clear that from an arithmetic point of view the sup-norm problem with big $K$-types is conceptually very different from the spherical sup-norm problem.\n\nThe localization behaviour of generalized spherical functions has distinct features as reflected by Theorems~\\ref{thm4} and \\ref{thm5}. The spherical trace function $\\varphi_{\\nu,\\ell}^{\\ell}$ concentrates close to the identity. The functions $\\varphi_{\\nu,\\ell}^{\\ell,\\pm\\ell}$ localize sharply around diagonal matrices (but not necessarily within $K$). For $\\varphi_{\\nu,\\ell}^{\\ell,0}$, there is localization on diagonal and skew-diagonal matrices within $K$; then there is a gradual transition to a second layer in a neighbourhood of the 4-dimensional manifold $\\mcN$ defined by \\eqref{adbc}, and outside this neighbourhood we see sharp decay. Theorem~\\ref{thm6} is in some sense a combination of these two extreme cases. Correspondingly, the counting techniques in \\S\\S \\ref{thm1-proof-sec}--\\ref{sec-proof2} are still based on the geometry of numbers, but they differ conceptually and technically from the earlier treatment of the spherical sup-norm problem. 
In particular, as mentioned in \\S \\ref{KS-intro}, for the proof of Theorem~\\ref{thm3}\\ref{thm3-b} we have to achieve a best possible double matrix count, cf.\\ Lemma~\\ref{lemma-ell-count}.\n\n\\subsection{Acknowledgements}\nThis work began during D.M.'s term as Director's Mathematician in Residence at the Budapest Semesters in Mathematics program in the summer of 2018; D.M. would like to thank BSM, the Alfr\\'ed R\\'enyi Institute of Mathematics, as well as the Max Planck Institute for Mathematics for their hospitality and excellent working conditions.\n\n\\section{Preliminaries}\n\n\\subsection{Representations of $\\SU_2(\\CC)$}\\label{SU2-subsec}\nIn this subsection, we review the representation theory of the maximal compact subgroup\n\\[ K=\\SU_2(\\CC)=\\left\\{ k[\\alpha,\\beta]:=\\begin{pmatrix}\\alpha&\\beta\\\\-\\bar{\\beta}&\\bar{\\alpha}\\end{pmatrix}:|\\alpha|^2+|\\beta|^2=1\\right\\} \\]\nof $G=\\SL_2(\\CC)$. We use \\cite[\\S2.1.1,2.2]{Lokvenec-Guleska2004} as a convenient reference.\n\nFor $u,v,w\\in\\RR$, we parametrize $K$ using essentially Euler angles $(2u,2v,2w)$ as follows:\n\\begin{equation}\\label{decomp-K}\nk[u,v,w]:=\\begin{pmatrix}e^{iu}&\\\\&e^{-iu}\\end{pmatrix}\\begin{pmatrix}\\cos v&i\\sin v\\\\i\\sin v&\\cos v\\end{pmatrix}\\begin{pmatrix}e^{iw}&\\\\&e^{-iw}\\end{pmatrix}.\n\\end{equation}\nGenerating an equivalence relation $\\sim$ on $\\RR^3$ by\n\\begin{equation}\\label{angleequiv}\n(u,v,w)\\ \\sim\\ (u+2\\pi,v,w),\\ (u,v,w+2\\pi),\\ (u+\\pi,v+\\pi,w),\\ (u+\\pi\/2,-v,w-\\pi\/2)\n\\end{equation}\nwe may parametrize $\\SU_2(\\CC)$ by $\\RR^3\/\\!\\sim$, or by a specific fundamental domain such as $[0,\\pi)\\times[0,\\pi\/2]\\times[-\\pi,\\pi)$, in which each point of $\\SU_2(\\CC)$ other than those with $v\\in\\frac{\\pi}2\\ZZ$ has exactly one pre-image. 
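As a quick numerical sanity check of \eqref{decomp-K} and \eqref{angleequiv}, one can verify that $k[u,v,w]$ always lies in $\SU_2(\CC)$ and that two of the generating identifications hold pointwise. The following is a sketch; the helper name `k_euler` is ours, not from any library.

```python
import numpy as np

def k_euler(u, v, w):
    """The matrix k[u,v,w] from the Euler-angle decomposition of K = SU(2)."""
    du = np.diag([np.exp(1j*u), np.exp(-1j*u)])
    mv = np.array([[np.cos(v), 1j*np.sin(v)], [1j*np.sin(v), np.cos(v)]])
    dw = np.diag([np.exp(1j*w), np.exp(-1j*w)])
    return du @ mv @ dw

rng = np.random.default_rng(0)
for _ in range(100):
    u, v, w = rng.uniform(-np.pi, np.pi, size=3)
    k = k_euler(u, v, w)
    # k[u,v,w] lies in SU(2): unitary with determinant 1
    assert np.allclose(k @ k.conj().T, np.eye(2))
    assert np.isclose(np.linalg.det(k), 1.0)
    # two of the identifications generating the equivalence relation
    assert np.allclose(k, k_euler(u + np.pi, v + np.pi, w))
    assert np.allclose(k, k_euler(u + np.pi/2, -v, w - np.pi/2))
```

The first identification holds because both the outer diagonal factor and the middle factor flip sign; the second follows from conjugating the middle factor by $\diag(i,-i)$.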
The probability Haar measure on $\\SU_2(\\CC)$ is given by\n\\begin{equation}\\label{dk}\n\\dd k=(2\\pi^2)^{-1}\\sin 2v\\,\\dd u\\,\\dd v\\,\\dd w.\n\\end{equation}\n\nThe irreducible representations of $K=\\SU_2(\\CC)$ are classified as $(2\\ell+1)$-dimensional representations $\\tau_{\\ell}$, for $\\ell\\in\\frac12\\ZZ_{\\geq 0}$, described explicitly as the space $V_{2\\ell}$ of polynomials of degree at most $2\\ell$, with a basis given by $\\{z^{\\ell-q}:|q|\\leq\\ell,\\,q\\equiv\\ell\\pmod{1}\\}$ and $\\SU_2(\\CC)$ action given by\n\\begin{equation}\\label{matrix-coeff}\n\\tau_{\\ell}(k[\\alpha,\\beta])z^{\\ell-q}=(\\alpha z-\\bar{\\beta})^{\\ell-q}(\\beta z+\\bar{\\alpha})^{\\ell+q}=\\sum_{\\substack{|p|\\leq\\ell\\\\p\\equiv\\ell\\one}}\\Phi_{p,q}^{\\ell}(k[\\alpha,\\beta])z^{\\ell-p}.\\end{equation}\nA $K$-invariant scalar product on $V_{2\\ell}$ is given by $(z^{\\ell-q},z^{\\ell-p})=(\\ell-q)!(\\ell+q)!\\delta_{q=p}$, so that $\\Phi_{p,q}^{\\ell}$ are (unnormalized) matrix coefficients of $\\tau_{\\ell}$. Moreover,\n\\[\\left\\{\\Phi_{p,q}^{\\ell}\\,:\\,\\text{$p,q,\\ell\\in\\tfrac12\\ZZ$ and $|p|,|q|\\leq\\ell$ and $p,q\\equiv\\ell\\one$}\\right\\}\\]\nis an orthogonal basis of $L^2(K)$. In harmony with \\cite[\\S4.4.2]{Warner1972a}, we denote by $\\xi_{\\ell}$ the character of $\\tau_{\\ell}$, by $d_{\\ell}=2\\ell+1$ the dimension of $\\tau_\\ell$, and by $\\chi_{\\ell}=d_{\\ell}\\xi_{\\ell}$ the normalized character of $\\tau_\\ell$. 
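The normalization of \eqref{dk} can be tested against Schur orthogonality of the characters $\xi_\ell$. On the conjugacy class with eigenvalues $e^{\pm i\theta}$ one has $\xi_\ell = U_{2\ell}(\cos\theta)$ with $U_n$ the Chebyshev polynomials of the second kind, and by \eqref{decomp-K} the trace of $k[u,v,w]$ is $2\cos v\cos(u+w)$; since the integrand depends on $(u,w)$ only through $s=u+w$ and $w$ runs over a full period, the $u$-integral contributes a factor $\pi$. A numerical sketch (helper names ours):

```python
import numpy as np

def cheb_U(n, x):
    """Chebyshev polynomials of the second kind: U_0 = 1, U_1 = 2x, U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u_cur = np.ones_like(x), 2*x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u_cur = u_cur, 2*x*u_cur - u_prev
    return u_cur

v = np.linspace(0, np.pi/2, 2001)[:, None]                   # v-range of the fundamental domain
s = np.linspace(-np.pi, np.pi, 512, endpoint=False)[None, :] # s = u + w over a full period
c = np.cos(v)*np.cos(s)                                      # cos(theta), where tr k[u,v,w] = 2 cos(theta)
weight = np.pi*np.sin(2*v)/(2*np.pi**2)                      # Haar density; the u-integral gave the factor pi

def pair_integral(twol1, twol2):
    """Numerical value of int_K xi_{l1}(k) xi_{l2}(k) dk, encoding twol = 2*ell."""
    integrand = cheb_U(twol1, c)*cheb_U(twol2, c)*weight
    f_of_v = integrand.mean(axis=1)*2*np.pi                  # periodic rectangle rule in s
    dv = (np.pi/2)/2000
    return np.sum(f_of_v[1:] + f_of_v[:-1])/2 * dv           # trapezoid rule in v

for twol1 in range(4):                                       # ell = 0, 1/2, 1, 3/2
    for twol2 in range(4):
        expected = 1.0 if twol1 == twol2 else 0.0
        assert abs(pair_integral(twol1, twol2) - expected) < 1e-4
```

The case $\ell=\ell'=0$ checks that \eqref{dk} is indeed a probability measure, and the off-diagonal cases check the orthogonality $\int_K\xi_\ell\ov{\xi_{\ell'}}\,\dd k=\delta_{\ell\ell'}$ (the characters here are real-valued).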
Finally, we denote by $\\widehat{K}=\\{\\tau_{\\ell}:\\ell\\in\\frac12\\ZZ_{\\geq 0}\\}$ the unitary dual of $K$.\n\n\\subsection{Representations of $\\SL_2(\\CC)$}\\label{SL2C-subsec}\nFor compatibility with the existing literature, we shall use the Iwasawa decomposition of $G=\\SL_2(\\CC)$ in two forms, $G=NAK$ and $G=KAN$, where $N$ (resp.\\ $A$) is the subgroup of unipotent upper-triangular (resp.\\ positive diagonal) matrices, and $K=\\SU_2(\\CC)$ is the standard maximal compact subgroup.\n\nWe fix a Haar measure on $G$ by setting\n\\[\n\\dd g=|\\dd z|\\frac{\\dd r}{r^5}\\dd k\\quad\\text{for } g=\\begin{pmatrix}1&z\\\\&1\\end{pmatrix}\\begin{pmatrix}r&\\\\&r^{-1}\\end{pmatrix}k, \\quad z\\in\\CC,\\,\\,r>0,\\,\\,k\\in K,\n\\]\nwhere $|\\dd z|=\\dd x\\,\\dd y$ for $z=x+iy$, $x,y\\in\\RR$, and $\\dd k$ is as in \\eqref{dk}.\n\nWe write $\\mfa\\simeq\\RR$ for the Lie algebra of $A$, $\\rho$ for the root on $\\mfa$ mapping\n$\\big(\\begin{smallmatrix}x&\\\\&-x\\end{smallmatrix}\\big)$ to $2x$,\n$\\exp:\\mfa\\to A$ for the exponential map, and $\\kappa:G\\to K$ and $H:G\\to\\mfa$ for\nthe projection and height maps defined by $g\\in\\kappa(g)\\exp(H(g)) N$ for every $g\\in G$.\nThus explicitly, for $g=\\left(\\begin{smallmatrix}a&b\\\\c&d\\end{smallmatrix}\\right)\\in G$ we have\n\\begin{equation}\\label{eq:kappaH}\n\\kappa(g)=\\begin{pmatrix}a\/\\sqrt{|a|^2+|c|^2}&\\ast\\\\c\/\\sqrt{|a|^2+|c|^2}&\\ast\\end{pmatrix},\\quad \\exp(H(g))=\\begin{pmatrix}\\sqrt{|a|^2+|c|^2}&\\\\&1\/\\sqrt{|a|^2+|c|^2}\\end{pmatrix}.\n\\end{equation}\nFinally, let $M\\simeq S^1$ be the centralizer of $A$ in $K$, which consists of diagonal matrices in $K$.\n\nFollowing \\cite[Ch.~III]{GGV}, we introduce for every pair $(\\nu,p)\\in\\CC\\times\\frac12\\ZZ$\nthe (generalized) principal series representation $\\pi_{\\nu,p}$. Let us denote by $C^\\infty(\\CC)$ the set of functions $\\CC\\to\\CC$ that are smooth when regarded as functions $\\RR^2\\to\\CC$. 
The representation space $V_{\\nu,p}$ consists of those functions $v\\in C^\\infty(\\CC)$ for which the transformed functions\n\\begin{equation}\\label{transformedfunctions}\n\\pi_{\\nu,p}\\left(\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\right)v(z)\n=|bz+d|^{2p+2\\nu-2}(bz+d)^{-2p}v\\left(\\frac{az+c}{bz+d}\\right),\\qquad\n\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in G,\n\\end{equation}\nextend to elements of $C^\\infty(\\CC)$. The above display then actually defines the representation $\\pi_{\\nu,p}:G\\to\\GL(V_{\\nu,p})$. The space $V_{\\nu,p}$ is complete with respect to the countable family of seminorms\n\\[\\sup\\bigl\\{\\bigl|v^{(a,b)}(x+yi)\\bigr|+\\bigl|\\widehat{v}^{(a,b)}(x+yi)\\bigr|:x^2+y^2\\leq c\\bigr\\},\\qquad (a,b,c)\\in\\NN^3,\\]\nwhere we abbreviate $\\widehat{v}:=\\pi_{\\nu,p}\\left(\\big(\\begin{smallmatrix}&-1\\\\1&\\end{smallmatrix}\\big)\\right)v$ for $v\\in V_{\\nu,p}$. The action of $G$ is continuous in the topology induced by these seminorms; thus, $\\pi_{\\nu,p}$ is a Fr\\'echet space representation.\n\nUsing the action of $K=\\SU_2(\\CC)$ and its diagonal subgroup $\\left\\{\\diag(e^{i\\varrho},e^{-i\\varrho}):\\varrho\\in\\RR\\right\\}$, we can decompose the $K$-finite part of $V_{\\nu,p}$ into an \\emph{algebraic direct sum} of finite-dimensional subspaces and further into one-dimensional subspaces:\n\\begin{equation}\\label{eq:tau-ell-isotypical-decomposition-algebraic}\nV_{\\nu,p}^{\\text{$K$-finite}} = \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}}V_{\\nu,p}^{\\ell}\n= \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}}\n\\ \\bigoplus_{\\substack{|q|\\leq\\ell\\\\q\\equiv\\ell\\one}}V_{\\nu,p}^{\\ell,q}.\n\\end{equation}\nPrecisely, $V_{\\nu,p}^\\ell$ is a $(2\\ell+1)$-dimensional subspace on which $\\pi_{\\nu,p}|_K$ acts by $\\tau_\\ell\\in\\widehat{K}$.\n\nIf $\\nu\\not\\equiv p\\pmod{1}$ or $|\\nu|\\leq|p|$, then $\\pi_{\\nu,p}\\cong\\pi_{-\\nu,-p}$ is irreducible, and these are all the equivalences 
among the representations $\\pi_{\\nu,p}$. If $\\nu\\equiv p\\pmod{1}$ and $|\\nu|>|p|$, then $\\pi_{\\nu,p}$ and $\\pi_{-\\nu,-p}$ are reducible. Assume $\\nu>0$, say. Then the sum of $V_{\\nu,p}^{\\ell}$ with $|p|\\leq\\ell<\\nu$ is a closed invariant subspace of $V_{\\nu,p}$, and the representation induced on the quotient is irreducible. The closure of the sum of $V_{-\\nu,-p}^{\\ell}$ with $\\ell\\geq\\nu$ is an invariant subspace of $V_{-\\nu,-p}$, and the representation induced on it is irreducible. Both of these representations of $G$ are isomorphic to $\\pi_{p,\\nu}\\cong\\pi_{-p,-\\nu}$. This observation will become relevant in \\eqref{eq:spherical-function-symmetry-2} below.\n\nThe space $V_{\\nu,p}$ has a $G$-invariant Hermitian inner product if and only if $\\nu\\in i\\RR$, or $p=0$ and $\\nu\\in(-1,0)\\cup(0,1)$. In the first case, we say that $\\pi_{\\nu,p}$ belongs to the (tempered) unitary principal series. In the second case, we say that $\\pi_{\\nu,p}$ belongs to the (non-tempered) complementary series. In either case, the Fr\\'echet space representation $\\pi_{\\nu,p}$ induces an irreducible unitary representation on the Hilbert space completion\n$\\widehat{V_{\\nu,p}}$ that we shall still denote by $\\pi_{\\nu,p}$. The only equivalences among these unitary representations are $\\pi_{\\nu,p}\\simeq\\pi_{-\\nu,-p}$. 
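That \eqref{transformedfunctions} defines a homomorphism rests on the cocycle identity $b'z+d'=(b_1z+d_1)\bigl(b_2\,g_1(z)+d_2\bigr)$, where $g(z)=(az+c)/(bz+d)$ and the primed entries are those of $g_1g_2$; since $2p\in\ZZ$, the factor $(bz+d)^{-2p}$ causes no branch issues. A pointwise numerical check that $\pi_{\nu,p}(g_1g_2)v=\pi_{\nu,p}(g_1)\pi_{\nu,p}(g_2)v$ (a sketch; names ours):

```python
import numpy as np

def act(g, nu, p, v):
    """The action (transformedfunctions): returns the function pi_{nu,p}(g) v; 2p must be an integer."""
    a, b, c, d = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    def gv(z):
        j = abs(b*z + d)**(2*p + 2*nu - 2) * (b*z + d)**(-2*p)
        return j * v((a*z + c)/(b*z + d))
    return gv

rng = np.random.default_rng(5)
def rand_sl2c():
    m = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
    return m / np.sqrt(np.linalg.det(m))        # normalize to determinant 1

nu, p = 0.3 + 1.7j, 3/2                          # arbitrary sample parameters with 2p an integer
v = lambda z: z*np.exp(-abs(z)**2)               # a sample smooth vector
g1, g2 = rand_sl2c(), rand_sl2c()
lhs = act(g1 @ g2, nu, p, v)                     # pi(g1 g2) v
rhs = act(g1, nu, p, act(g2, nu, p, v))          # pi(g1) pi(g2) v
for z in rng.normal(size=5) + 1j*rng.normal(size=5):
    assert np.isclose(lhs(z), rhs(z))
```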
The equivalence classes, along with the trivial representation, form the unitary dual $\\widehat{G}$ of $G$.\n\nFor $\\pi\\simeq\\pi_{\\nu,p}\\in\\widehat{G}$ we write\n\\[V_\\pi:=\\widehat{V_{\\nu,p}},\\qquad V_\\pi^\\ell:=V_{\\nu,p}^\\ell,\\qquad V_\\pi^{\\ell,q}:=V_{\\nu,p}^{\\ell,q},\\]\nand then \\eqref{eq:tau-ell-isotypical-decomposition-algebraic} is equivalent to the orthogonal Hilbert space decomposition\n(cf.~\\eqref{decomp}):\n\\[V_\\pi = \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}}V_\\pi^{\\ell}\n= \\bigoplus_{\\substack{\\ell\\geq|p|\\\\\\ell\\equiv p\\one}}\n\\ \\bigoplus_{\\substack{|q|\\leq\\ell\\\\q\\equiv\\ell\\one}}V_\\pi^{\\ell,q}.\\]\nThe projection $V_\\pi\\to V_{\\pi}^{\\ell}$ is realized by the operator\n\\begin{equation}\\label{eq:projectionbychi}\n\\pi(\\ov{\\chi_\\ell}):=\\int_K \\ov{\\chi_\\ell}(k)\\pi(k)\\,\\dd k\\in \\End(V_\\pi),\n\\end{equation}\nwhere $\\End(V_\\pi)$ denotes the Hilbert space of Hilbert--Schmidt operators on $V_\\pi$ endowed with the Hilbert--Schmidt norm. This leads to the ``block matrix decomposition''\n\\begin{equation}\\label{eq:block-decomposition}\n\\End(V_\\pi)=\\bigoplus_{\\substack{m,n\\geq|p|\\\\ m,n\\equiv p\\one}}\\Hom(V_{\\pi}^{m},V_{\\pi}^{n}),\n\\end{equation}\nwhere the direct sum is meant in the Hilbert space sense. 
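Concretely, \eqref{eq:block-decomposition} expresses that the squared Hilbert--Schmidt norm of an operator is the sum of the squared Hilbert--Schmidt norms of its blocks. A toy finite-dimensional illustration (block dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
dims = [2, 4, 6]                        # stand-ins for the dimensions 2m+1 of the blocks V^m
N = sum(dims)
T = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))

hs2_total = np.sum(np.abs(T)**2)        # squared Hilbert-Schmidt norm of T
idx = np.cumsum([0] + dims)
hs2_blocks = sum(np.sum(np.abs(T[idx[i]:idx[i+1], idx[j]:idx[j+1]])**2)
                 for i in range(len(dims)) for j in range(len(dims)))
assert np.isclose(hs2_total, hs2_blocks)
```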
Hence, for $f\\in C_c(G)$, the $(m,n)$-component of the Hilbert--Schmidt operator (cf.\\ \\cite[Th.~2]{GelfandNaimark})\n\\begin{equation}\\label{eq:pif}\n\\pi(f):=\\int_G f(g)\\pi(g)\\,\\dd g\\in\\End(V_\\pi)\n\\end{equation}\nequals\n\\begin{equation}\\label{eq:tau-projection}\n\\pi(\\ov{\\chi_n})\\pi(f)\\pi(\\ov{\\chi_m})=\\pi(\\ov{\\chi_n}\\star f\\star\\ov{\\chi_m})\\in\\Hom(V_{\\pi}^{m},V_{\\pi}^{n}),\n\\end{equation}\nwhere the convolutions are meant over $K$.\n\n\\subsection{Plancherel theorem}\\label{subsec:plancherel}\nIn this subsection, we review the Plancherel theorem for $G=\\SL_2(\\CC)$ pioneered by Gelfand and Naimark, following the original sources \\cite{GelfandNaimark,GelfandNaimark2} and their translations \\cite{GelfandNaimarkTranslated,GelfandNaimark2Translated}. We note that the list of unitary representations given in \\cite{GelfandNaimark2} is incomplete for higher rank groups (cf.\\ \\cite{Stein,Vogan,Tadic}), but this does not affect the results we are quoting. In addition, we warn the reader that the translations contain some misprints not present in the originals, e.g.\\ in the crucial formulae \\cite[(137)--(138)]{GelfandNaimarkTranslated}.\n\nWe identify once and for all (non-canonically) the \\emph{tempered unitary dual} $\\Gtemp$ with the set\n\\[\\left\\{\\pi_{it,p}:(t,p)\\in\\left(\\RR_{>0}\\times\\tfrac12\\ZZ\\right)\\cup\\left(\\{0\\}\\times\\tfrac12\\ZZ_{\\geq 0}\\right)\\right\\},\\]\nwith topology inherited from the standard topology on $\\RR^2$. The \\emph{Plancherel measure} on $\\widehat{G}$ is supported on $\\Gtemp$, and it is given explicitly as\n\\begin{equation}\\label{eq:concrete-plancherel-measure}\n\\dd\\mupl(\\pi_{it,p}):=\\frac{1}{\\pi^2}(t^2+p^2)\\,\\dd t.\n\\end{equation}\nFor $\\pi_{it,p}\\in\\Gtemp$, the underlying Hilbert space $\\widehat{V_{it,p}}$ is independent of the parameters: it equals $\\Vcan:=L^2(\\CC)$. 
On this common representation space, \\eqref{transformedfunctions} defines the unitary action $\\pi_{it,p}:G\\to\\U(\\Vcan)$ that agrees with \\cite[(65)]{GelfandNaimark} for $(n,\\rho)=(2p,2t)$. The \\emph{operator-valued spherical transform} of $f\\in C_c(G)$ is the map $\\Gtemp\\to\\End(\\Vcan)$ given by $\\pi\\mapsto\\pi(f)$ as in \\eqref{eq:pif}. The Plancherel theorem for $G$ concerns the extension of this transform to $L^2(G)$, and characterizes its image.\n\n\\begin{theorem}[Gelfand--Naimark]\\label{thm:abstract-isomorphism}\nThe map given by \\eqref{eq:pif} extends (uniquely) to an $L^2$-isometry\n\\[L^2(G)\\longrightarrow L^2(\\Gtemp\\to\\End(\\Vcan)),\\]\nwhere the operator-valued $L^2$-space on the right-hand side is meant with respect to the Hilbert--Schmidt norm ${\\|\\cdot\\|}_{\\mathrm{HS}}$ on\n$\\End(\\Vcan)$ and the Plancherel measure $\\mupl$ on $\\Gtemp$. In particular, for every $f\\in L^2(G)$, the following Plancherel formula holds:\n\\begin{equation}\\label{eq:abstract-plancherel}\n\\int_G |f(g)|^2\\,\\dd g = \\int_{\\Gtemp} {\\|\\pi(f)\\|}_{\\mathrm{HS}}^2\\,\\dd \\mupl(\\pi).\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nThe theorem follows from \\cite[Th.~5]{GelfandNaimark}; we only need to check that our Plancherel measure corresponds to the one in \\cite[(137)]{GelfandNaimark}. We do this in four steps.\n\\newline\\emph{Step 1.}\nWe observe that the constant $(8\\pi^4)^{-1}$ in \\cite[(137)]{GelfandNaimark} should be $(16\\pi^4)^{-1}$ due to a small oversight in the derivation of \\cite[(130)]{GelfandNaimark} from \\cite[(129)]{GelfandNaimark}. 
The oversight is that the change of variables\n\\[(w_1,w_2,\\lambda)\\mapsto(\\zeta_1,\\zeta_2,\\zeta_3):=(w_2,w_1\\bar\\lambda+w_2\/\\bar\\lambda,w_1)\\]\ncoming from \\cite[(123)]{GelfandNaimark} is not 1-to-1 but 2-to-1.\n\\newline\\emph{Step 2.}\nWe rewrite the corrected right-hand side of \\cite[(137)]{GelfandNaimark} as a sum over $p\\in\\frac{1}{2}\\ZZ$ and an integral over $t>0$, keeping in mind that $(n,\\rho)$ in \\cite{GelfandNaimark} is $(2p,2t)$ in our notation.\n\\newline\\noindent\\emph{Step 3.}\nWe observe that the Haar measure $\\dd\\mu(g)$ used by Gelfand--Naimark is $2\\pi^2\\dd g$. Indeed, applying\n\\cite[(40)]{GelfandNaimark} to a right $K$-invariant test function $f\\in C_c(G)$, we obtain by several changes of variables that\n\\begin{align*}\\int_G f(g)\\,\\dd\\mu(g)\n&=\\int_{\\CC\\times\\CC^\\times\\times\\CC}f\\left(\\begin{pmatrix}w^{-1}&z\\\\&w\\end{pmatrix}\n\\begin{pmatrix}1&\\\\v&1\\end{pmatrix}\\right)|\\dd v|\\,|\\dd w|\\,|\\dd z|\\\\[4pt]\n&=\\int_{\\CC\\times\\CC^\\times\\times\\CC}f\\left(\\begin{pmatrix}w^{-1}&z\\\\&w\\end{pmatrix}\n\\begin{pmatrix}1\/\\sqrt{1+|v|^2}&\\bar v\/\\sqrt{1+|v|^2}\\\\&\\sqrt{1+|v|^2}\\end{pmatrix}\\right)|\\dd v|\\,|\\dd w|\\,|\\dd z|\\\\[4pt]\n&=\\int_{\\CC\\times\\CC^\\times\\times\\CC}f\\left(\\begin{pmatrix}w^{-1}&z\\\\&w\\end{pmatrix}\\right)\n\\frac{|\\dd v|\\,|\\dd w|\\,|\\dd z|}{(1+|v|^2)^2}\\\\[4pt]\n&= \\pi \\int_{\\CC^\\times\\times\\CC}f\\left(\\begin{pmatrix}1&z\\\\&1\\end{pmatrix}\n\\begin{pmatrix}w&\\\\&w^{-1}\\end{pmatrix}\\right)\\frac{|\\dd w|\\,|\\dd z|}{|w|^6} = 2\\pi^2\\int_G f(g)\\,\\dd g.\n\\end{align*}\n\\emph{Step 4.}\nPutting everything together, the corrected version of \\cite[(137)]{GelfandNaimark} yields\n\\[\\int_G |f(g)|^2\\,2\\pi^2\\dd g = \\frac{1}{16\\pi^4}\\sum_p\\int_0^\\infty{\\|2\\pi^2\\pi_{it,p}(f)\\|}_{\\mathrm{HS}}^2\\,(4t^2+4p^2)\\,2\\dd t.\\]\nThis formula is equivalent to \\eqref{eq:abstract-plancherel}, hence we are 
done.\n\\end{proof}\n\n\\begin{remark}\\label{remark:Plancherel} In the proof above, we claimed that the Plancherel measure in \\cite[Th.~5]{GelfandNaimark} is off by a factor of $2$. For double checking this claim, we looked at \\cite[Th.~11.2]{Knapp}, and we found (to our dismay) that the Plancherel measure there is off by a factor of $\\pi$. For example, for the test function $f(g):=1\/\\tr(gg^*)^2$, the Fourier transform given by \\cite[(11.14)]{Knapp} equals $F_f^T(t)=\\pi\/\\tr(tt^*)$, hence in \\cite[(11.17)]{Knapp} the left-hand side is $\\pi^2$, while the right-hand side is $\\pi$. For triple checking our claim, we verified that our Plancherel measure yields the correct inversion formula for the classical spherical transform (for bi-$K$-invariant functions), as in \\cite[\\S 3.3]{FHMM}.\n\\end{remark}\n\n\\begin{theorem}[Gelfand--Naimark]\\label{thm:general-inversion-formula}\nLet $f\\in C_c^\\infty(G)$. For every $\\pi\\in\\Gtemp$, the operator $\\pi(f)\\in\\End(\\Vcan)$ is of trace class, and the following inversion formula holds:\n\\begin{equation}\\label{eq:general-inversion-formula}\nf(g)=\\int_{\\Gtemp} \\tr(\\pi(f)\\pi(g^{-1}))\\,\\dd\\mupl(\\pi).\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nThe theorem follows from \\cite[Th.~19]{GelfandNaimark2} applied to $n=2$ and $x=R(g)f$, or from \\cite[Th.~11.2]{Knapp}, with appropriate correction of the Plancherel measure (cf.\\ Remark~\\ref{remark:Plancherel}).\n\\end{proof}\n\n\\subsection{The \\texorpdfstring{$\\tau_{\\ell}$}{tau-ell}-spherical transform}\\label{section:sphericaltransform}\nFor a given $\\ell\\in\\frac{1}{2}\\ZZ_{\\geq 0}$, it is interesting to see what Theorems~\\ref{thm:abstract-isomorphism} and \\ref{thm:general-inversion-formula} yield for test functions $f\\in L^2(G)$ with the following property: for almost every $\\pi\\in\\Gtemp$, the operator $\\pi(f)$ acts by a scalar on $V_{\\pi}^{\\ell}$ and by zero on its orthocomplement $V_{\\pi}^{\\ell,\\perp}$. 
In the light of \\eqref{eq:block-decomposition}, \\eqref{eq:tau-projection}, \\eqref{eq:abstract-plancherel}, and Schur's lemma, these test functions form the Hilbert subspace $\\mcH(\\tau_{\\ell})\\subset L^2(G)$ defined by the conditions\n\\begin{itemize}\n\\item $f(g)=f(kgk^{-1})$ for almost every $g\\in G$ and $k\\in K$;\n\\item $f=\\ov{\\chi_\\ell} \\star f \\star\\ov{\\chi_\\ell}$.\n\\end{itemize}\n\nLet $\\Gtemp(\\tau_{\\ell})$ be the set of $\\pi\\in\\Gtemp$ whose restriction to $K$ contains $\\tau_{\\ell}$. For $f\\in\\mcH(\\tau_{\\ell})$, the operator-valued function $\\pi\\mapsto\\pi(f)$ is supported on $\\Gtemp(\\tau_{\\ell})$, and there it is simply determined by the scalar-valued function $\\pi\\mapsto\\tr(\\pi(f))$ via\n\\begin{equation}\\label{eq:opfromtr}\n\\pi(f)|_{V_{\\pi}^{\\ell}}= \\frac{\\tr(\\pi(f))}{2\\ell+1}\\cdot \\id_{V_{\\pi}^{\\ell}}\n\\qquad\\text{and}\\qquad\\pi(f)|_{V_{\\pi}^{\\ell,\\perp}}=0.\n\\end{equation}\nIn particular, for $\\pi\\in\\Gtemp(\\tau_{\\ell})$ and $f\\in\\mcH(\\tau_{\\ell})$,\n\\begin{equation}\\label{eq:HSfromtr}\n{\\|\\pi(f)\\|}_{\\mathrm{HS}}^2 = \\tr(\\pi(f)\\pi(f)^*)= \\frac{|\\tr(\\pi(f))|^2}{2\\ell+1}.\n\\end{equation}\nFor $(\\nu,p)\\in i\\RR\\times\\frac12\\ZZ$, the condition $\\pi_{\\nu,p}\\in\\Gtemp(\\tau_{\\ell})$ is equivalent to $|p|\\leq\\ell$ and $p\\equiv\\ell\\pmod{1}$. 
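Identity \eqref{eq:HSfromtr} is elementary linear algebra: an operator acting by a scalar on a $(2\ell+1)$-dimensional block and by zero on its orthocomplement has squared Hilbert--Schmidt norm $|\mathrm{tr}|^2/(2\ell+1)$. A toy check (the dimensions are our arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
d, extra = 5, 7                          # d = 2*ell + 1; `extra` models the orthocomplement
lam = rng.normal() + 1j*rng.normal()     # the scalar tr(pi(f)) / (2*ell + 1)
T = np.zeros((d + extra, d + extra), dtype=complex)
T[:d, :d] = lam*np.eye(d)                # scalar on V^ell, zero on V^{ell,perp}

hs2 = np.sum(np.abs(T)**2)               # squared Hilbert-Schmidt norm of T
assert np.isclose(hs2, np.abs(np.trace(T))**2 / d)
```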
Moreover, for $f\\in L^1(G)\\cap\\mcH(\\tau_{\\ell})$, the trace of $\\pi_{\\nu,p}(f)$ can be expressed\nin terms of the \\emph{$\\tau_\\ell$-spherical trace function}\n\\begin{equation}\\label{eq:def-spherical-function}\n\\varphi_{\\nu,p}^{\\ell}(g):=\\tr(\\pi_{\\nu,p}(\\ov{\\chi_\\ell})\\pi_{\\nu,p}(g)\\pi_{\\nu,p}(\\ov{\\chi_\\ell}))\n\\end{equation}\nas (cf.\\ \\eqref{eq:pif} and \\eqref{eq:tau-projection})\n\\begin{equation}\\label{eq:trpif}\n\\widehat{f}(\\nu,p):=\\tr(\\pi_{\\nu,p}(f)) = \\int_G f(g)\\,\\varphi_{\\nu,p}^{\\ell}(g)\\,\\dd g.\n\\end{equation}\n\nThe function $\\varphi_{\\nu,p}^{\\ell}:G\\to\\CC$ vanishes unless $|p|\\leq\\ell$ and $p\\equiv\\ell\\pmod{1}$, for else $\\tau_{\\ell}$ does not appear in $\\pi_{\\nu,p}$, and $\\varphi_{\\nu,p}^{\\ell}(\\id)=2\\ell+1$ in this latter case. Moreover, we have the integral representation of Harish-Chandra \\cite[Cor.~6.2.2.3]{Warner1972}:\n\\[\\varphi_{\\nu,p}^{\\ell}(g)=\\int_K\\left(\\chi_\\ell\\star\\eta_p\\right)(\\kappa(k^{-1}gk))\\,\ne^{(\\nu-1)\\rho(H(gk))}\\,\\dd k.\\]\nHere, $\\eta_p:M\\simeq S^1\\to\\CC^{\\times}$ is the unitary character $\\eta_p(z)=z^{-2p}$, the convolution is over $M$, and $\\kappa$, $\\rho$, and $H$ are as in \\S\\ref{SL2C-subsec}. 
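The diagonal matrix coefficients $\Phi_{p,p}^{\ell}$ occurring here can be evaluated in two independent ways: by extracting the coefficient of $z^{\ell-p}$ in \eqref{matrix-coeff}, or by the binomial expansion spelled out in the next display. A numerical cross-check for $\ell=3/2$ (a sketch; we encode $2\ell$ and $2p$ as integers, and the helper names are ours):

```python
import numpy as np
from math import comb

def phi_pp_sum(twol, twop, alpha, beta):
    """The binomial expansion of Phi_{p,p}^ell, with twol = 2*ell and twop = 2*p."""
    m, n = (twol - twop)//2, (twol + twop)//2          # m = ell - p, n = ell + p
    return sum((-1)**r * comb(n, r) * comb(m, r)
               * alpha**(m - r) * np.conj(alpha)**(n - r) * abs(beta)**(2*r)
               for r in range(min(m, n) + 1))

def phi_pp_coeff(twol, twop, alpha, beta):
    """Coefficient of z^{ell-p} in (alpha z - conj(beta))^{ell-p} (beta z + conj(alpha))^{ell+p}."""
    m, n = (twol - twop)//2, (twol + twop)//2
    prod = np.array([1 + 0j])
    for _ in range(m):
        prod = np.convolve(prod, [alpha, -np.conj(beta)])
    for _ in range(n):
        prod = np.convolve(prod, [beta, np.conj(alpha)])
    return prod[len(prod) - 1 - m]                     # descending order: z^m sits at index len-1-m

rng = np.random.default_rng(4)
a, b = rng.normal(size=2) + 1j*rng.normal(size=2)
norm = np.sqrt(abs(a)**2 + abs(b)**2)
a, b = a/norm, b/norm                                  # (alpha, beta) on the unit sphere
for twop in (-3, -1, 1, 3):                            # ell = 3/2
    assert np.isclose(phi_pp_sum(3, twop, a, b), phi_pp_coeff(3, twop, a, b))
```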
For computational purposes, we spell out the $\\chi_\\ell\\star\\eta_p$ term explicitly, cf.~\\eqref{matrix-coeff}, \\cite[(10) \\& Lemma~3.2]{Wang1974}, \\cite[Th.~29.18]{HewittRoss}:\n\\[\\begin{aligned}\n\\left(\\chi_\\ell\\star\\eta_p\\right)(k[\\alpha,\\beta])\n&=(2\\ell+1)\\Phi_{p,p}^{\\ell}(k[\\alpha,\\beta])\\\\\n&=(2\\ell+1)\\sum_{r=0}^{\\ell-|p|}(-1)^r\\binom{\\ell+p}{r}\\binom{\\ell-p}{r}\n\\alpha^{\\ell-p-r}{\\bar\\alpha}^{\\ell+p-r}|\\beta|^{2r}.\n\\end{aligned}\\]\nWe collect further useful properties of $\\varphi_{\\nu,p}^{\\ell}:G\\to\\CC$ in the next lemma, where we write\n\\[a_h:= \\diag(e^{h\/2}, e^{-h\/2}),\\qquad h\\in\\RR.\\]\n\n\\begin{lemma} The $\\tau_\\ell$-spherical trace function $\\varphi_{\\nu,p}^{\\ell}(g)$ extends holomorphically to $\\nu\\in\\CC$, and it satisfies the bound\n\\begin{equation}\\label{eq:spherical-function-bound}\n\\bigl|\\varphi_{\\sigma+it,p}^{\\ell}(k_1a_hk_2)\\bigr|\\leq(2\\ell+1)\\frac{\\sinh(\\sigma h)}{\\sigma\\sinh(h)},\n\\qquad \\sigma,t,h\\in\\RR,\\quad k_1,k_2\\in K.\n\\end{equation}\n(For $\\sigma=0$ or $h=0$, the fraction on the right-hand side is understood as $1$.)\nThe extended function has the symmetries\n\\begin{equation}\\label{eq:spherical-function-symmetry}\n\\varphi_{\\nu,p}^{\\ell}(g)=\\ov{\\varphi_{-\\ov{\\nu},p}^{\\ell}(g)}=\\varphi_{\\nu,p}^{\\ell}(g^{-1}),\n\\end{equation}\n\\begin{equation}\\label{eq:spherical-function-symmetry-2}\n\\varphi_{\\nu,p}^{\\ell}(g)=\\varphi_{p,\\nu}^{\\ell}(g),\\qquad\\nu\\equiv p\\one,\\quad|\\nu|,|p|\\leq\\ell.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nThe holomorphic extension of $\\varphi_{\\nu,p}^{\\ell}(g)$ and the bound \\eqref{eq:spherical-function-bound} are\na straightforward generalization of \\cite[Prop.~3.4]{Wang1974} and its proof. 
The identity $\\ov{\\varphi_{-\\ov{\\nu},p}^{\\ell}(g)}=\\varphi_{\\nu,p}^{\\ell}(g^{-1})$ follows from \\eqref{eq:def-spherical-function} and $\\pi(g)^*=\\pi(g^{-1})$ for $\\nu\\in i\\RR$, and then also for $\\nu\\in\\CC$ by the uniqueness of analytic continuation. The identity\n$\\varphi_{\\nu,p}^{\\ell}(g)=\\varphi_{\\nu,p}^{\\ell}(g^{-1})$ is \\cite[Lemma~3.2]{Wang1974}. Finally, the remarkable symmetry \\eqref{eq:spherical-function-symmetry-2} follows from \\cite[Cor.~2]{Wang1974}, or more conceptually from the discussion below \\eqref{eq:tau-ell-isotypical-decomposition-algebraic}.\n\\end{proof}\n\nAs we shall see in Theorem~\\ref{thm:tr-plancherel} below, the \\emph{$\\tau_\\ell$-spherical transform} defined by \\eqref{eq:trpif} is inverted by the following \\emph{inverse $\\tau_\\ell$-spherical transform}. For $h\\in L^1(\\Gtemp(\\tau_{\\ell}))\\cap L^2(\\Gtemp(\\tau_{\\ell}))$ and $g\\in G$, we define\n\\begin{equation}\\label{eq:inverse-tauell-transform}\n\\widecheck{h}(g):=\\frac{1}{(2\\ell+1)\\pi^2}\\sum_{\\substack{|p|\\leq\\ell\\\\p\\equiv\\ell\\one}}\n\\int_{0}^{\\infty} h(it,p)\\,\\varphi_{it,p}^{\\ell}(g^{-1})\\,(t^2+p^2)\\,\\dd t.\n\\end{equation}\n\n\\begin{theorem}\\label{thm:tr-plancherel} The transforms defined by \\eqref{eq:trpif} and \\eqref{eq:inverse-tauell-transform} extend (uniquely) to a pair of inverse Hilbert space isometries\n\\[\\mcH(\\tau_{\\ell})\\longleftrightarrow L^2(\\Gtemp(\\tau_{\\ell})).\\]\nIn particular, for $f\\in \\mcH(\\tau_{\\ell})$, the following Plancherel formula holds:\n\\begin{equation}\\label{eq:tr-plancherel}\n\\int_G |f(g)|^2\\,\\dd g = \\frac{1}{(2\\ell+1)\\pi^2}\n\\sum_{\\substack{|p|\\leq\\ell\\\\p\\equiv\\ell\\one}}\\int_{0}^{\\infty}\n|\\widehat{f}(it,p)|^2\\,(t^2+p^2)\\,\\dd t.\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof} The fact that $\\ \\widehat{}\\ $ extends to a Hilbert space isomorphism $\\mcH(\\tau_{\\ell})\\to L^2(\\Gtemp(\\tau_\\ell))$ follows from Theorem~\\ref{thm:abstract-isomorphism} 
and our discussion above. In particular, \\eqref{eq:tr-plancherel} is a special case of \\eqref{eq:abstract-plancherel} in the light of \\eqref{eq:concrete-plancherel-measure}, \\eqref{eq:HSfromtr}, \\eqref{eq:trpif}. We are left with proving that $\\ \\widecheck{}\\ $ is the inverse of $\\ \\widehat{}\\ $, and for this it suffices to verify that $\\ \\widecheck{}\\ $ applied after $\\ \\widehat{}\\ $ is the identity on the dense subset $C_c^{\\infty}(G)\\cap\\mcH(\\tau_{\\ell})$ of the Hilbert space $\\mcH(\\tau_{\\ell})$. For $f\\in C_c^{\\infty}(G)\\cap \\mcH(\\tau_{\\ell})$, \\eqref{eq:projectionbychi}, \\eqref{eq:pif}, \\eqref{eq:concrete-plancherel-measure}, \\eqref{eq:general-inversion-formula}, \\eqref{eq:opfromtr}, \\eqref{eq:def-spherical-function}, \\eqref{eq:trpif} yield\n\\begin{align*}f(g)\n&=\\int_{\\Gtemp}\\tr(\\pi(f)\\pi(g^{-1}))\\,\\dd\\mupl=\\frac{1}{2\\ell+1}\\int_{\\Gtemp}\\tr(\\pi(f))\\tr(\\pi(\\ov{\\chi_\\ell})\\pi(g^{-1}))\\,\\dd\\mupl\\\\\n&=\\frac{1}{(2\\ell+1)\\pi^2} \\sum_{\\substack{|p|\\leq \\ell \\\\ p\\equiv \\ell\\one}} \\int_{0}^{\\infty} \\widehat{f}(it,p)\\, \\varphi_{it,p}^{\\ell}(g^{-1})\\,(t^2+p^2)\\,\\dd t.\n\\end{align*}\nThe proof is complete.\n\\end{proof}\n\nWang~\\cite{Wang1974} proved an analogue of the Paley--Wiener theorem for the $\\tau_\\ell$-spherical transform, and in particular characterized the image of $\\mcH(\\tau_{\\ell})\\cap C_c^\\infty(G)$ under the transform. The following is \\cite[Prop.~4.5]{Wang1974} and should be compared to Theorem~\\ref{thm:pws} in the introduction.\n\n\\begin{theorem}[Wang]\\label{thm10}\nLet $f\\in\\mcH(\\tau_{\\ell})$ be a test function, and let $R>0$ be a radius. 
Then the following two conditions are equivalent.\n\\begin{enumerate}[(a)]\n\\item The function $f(g)$ is smooth, and\n\\[f(k_1 a_h k_2)=0,\\qquad |h|>R,\\quad k_1,k_2\\in K.\\]\n\\item The function $\\widehat{f}(\\nu,p)$ extends holomorphically to $\\CC\\times\\tfrac12\\ZZ$ such that\n\\[\\widehat{f}(\\nu,p)=\\widehat{f}(p,\\nu),\\qquad\\nu\\equiv p\\one,\\quad|\\nu|,|p|\\leq\\ell,\\]\nand for any $C>0$ we have\n\\[\\widehat{f}(\\nu,p)\\ll_C (1+|\\nu|)^{-C}e^{R|\\Re\\nu|},\\qquad \\nu\\in\\CC,\\quad p\\in\\tfrac12\\ZZ.\\]\n\\end{enumerate}\n\\end{theorem}\n\nWe now prove a Schwartz class version of this result as stated in Theorem~\\ref{thm:pws}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:pws}]\nAssume condition \\ref{pws-a}. The holomorphic extension of $\\widehat{f}(\\nu,p)$ follows from \\eqref{eq:spherical-function-bound} coupled with \\eqref{eq:mAbound} for $m=0$, and then \\eqref{eq:symmetry} is immediate from \\eqref{eq:spherical-function-symmetry-2}. In order to derive \\eqref{eq:BCbound}, we use an alternate representation of $\\widehat{f}(\\nu,p)$. We shall assume that $|p|\\leq\\ell$ and $p\\equiv\\ell\\pmod{1}$, for else $\\widehat{f}(\\nu,p)=0$. 
By the second display on \\cite[p.~621]{Wang1974}, we see that the (unique) holomorphic extension is also provided by\n\\begin{equation}\\label{eq:sphericalviaAbel}\n\\widehat{f}(\\nu,p)=\\frac{2\\ell+1}{2}\\int_{-\\infty}^\\infty\\breve{f}(h,p)\\,e^{\\nu h}\\,\\dd h,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:Abelnew}\n\\breve{f}(h,p):=e^h\\int_K\\int_N f(ka_hn)\\,\\Phi_{p,p}^{\\ell}(k)\\,\\dd k\\,\\dd n.\n\\end{equation}\nWe claim that, for any $m\\in\\ZZ_{\\geq 0}$ and $A>0$, we have\n\\begin{equation}\\label{eq:Abelbound}\n\\frac{\\partial^m}{\\partial h^m}\\breve{f}(h,p)\\ll_{m,A} e^{-A|h|},\\qquad h\\in\\RR,\\quad |p|\\leq\\ell,\\quad p\\equiv\\ell\\one.\n\\end{equation}\nFor $|h|>1$ this follows by writing $a_hn=k_1a_{h'}k_2$ in \\eqref{eq:Abelnew}, and then combining \\eqref{eq:mAbound} with some calculus to keep track of the dependence of $h'\\in\\RR$ and $k_1,k_2\\in K$ on $h\\in\\RR$. For $|h|\\leq 1$ we proceed similarly for the part of the integral in \\eqref{eq:Abelnew} that corresponds to $n=\\big(\\begin{smallmatrix}1&z\\\\& 1\\end{smallmatrix}\\big)$ with $|z|>1$, while we estimate the ($h$-derivatives of the) remaining integral directly by the smoothness of $f(g)$. With \\eqref{eq:Abelbound} at hand, \\eqref{eq:BCbound} follows from \\eqref{eq:sphericalviaAbel} via integration by parts. We proved that \\ref{pws-a} implies \\ref{pws-b}.\n\nAssume condition \\ref{pws-b}. By Theorem~\\ref{thm:tr-plancherel},\n\\[f(g)=\\frac{1}{(2\\ell+1)\\pi^2} \\sum_{\\substack{|p|\\leq \\ell \\\\ p\\equiv \\ell\\one}} \\int_{0}^{\\infty} \\widehat{f}(it,p)\\, \\varphi_{it,p}^{\\ell}(g^{-1})\\,(t^2+p^2)\\,\\dd t.\\]\nLet us restrict, without loss of generality, to $g=k_1a_hk_2$ with $h>0$. 
Using the display below \\cite[(29)]{Wang1974}, we infer\n\\[f(g)=\\frac{1}{4\\pi^2\\sinh(h)}\\sum_{\\substack{|p|,|j|\\leq \\ell \\\\ p,j\\equiv\\ell\\one}}\n\\int_{-h}^h \\widetilde{f}(s,p) \\,\n\\Phi_{-p,j}^{\\ell}(v_{\\theta}^{-1})\\Phi_{j,j}^{\\ell}(k_1k_2) \\Phi_{j,-p}^{\\ell}(v_{\\theta'})\n\\,\\dd s,\\]\nwhere $\\Phi_{-p,j}^{\\ell}(v_{\\theta}^{-1})$ and $\\Phi_{j,-p}^{\\ell}(v_{\\theta'})$ can be explicated using \\cite[(5) \\& (28)]{Wang1974}, and\n\\begin{equation}\\label{eq:def-tilde-f-s-p}\n\\widetilde{f}(s,p):=\\int_{-\\infty}^\\infty\\widehat{f}(it,p)\\,e^{-its}\\,(t^2+p^2)\\,\\dd t,\\qquad s\\in\\RR.\n\\end{equation}\nBy \\eqref{eq:BCbound} and Cauchy's theorem, it follows for any $n\\in\\ZZ_{\\geq 0}$ and $D>0$ that\n\\begin{equation}\\label{eq:nDbound}\n\\frac{\\partial^n}{\\partial s^n}\\widetilde{f}(s,p)\\ll_{n,D} e^{-D|s|},\\qquad s\\in\\RR.\n\\end{equation}\nThe smoothness of $f(g)$ is now straightforward, and this automatically verifies \\eqref{eq:mAbound} for $|h|\\leq 1$. From now on we can assume, without loss of generality, that $h>1$. From \\eqref{eq:symmetry}, \\eqref{eq:nDbound}, and the calculation around \\cite[(38)--(41)]{Wang1974}, we see that\n\\[\\sum_{\\substack{|p|,|j|\\leq \\ell \\\\ p,j\\equiv\\ell\\one}}\n\\int_{-\\infty}^\\infty \\widetilde{f}(s,p) \\,\n\\Phi_{-p,j}^{\\ell}(v_{\\theta}^{-1})\\Phi_{j,j}^{\\ell}(k_1k_2) \\Phi_{j,-p}^{\\ell}(v_{\\theta'})\n\\,\\dd s=0,\\]\nhence in fact\n\\[f(g)=\\frac{-1}{4\\pi^2\\sinh(h)}\\sum_{\\substack{|p|,|j|\\leq \\ell \\\\ p,j\\equiv\\ell\\one}}\n\\left(\\int_{-\\infty}^{-h}+\\int_h^\\infty\\right)\\widetilde{f}(s,p) \\,\n\\Phi_{-p,j}^{\\ell}(v_{\\theta}^{-1})\\Phi_{j,j}^{\\ell}(k_1k_2) \\Phi_{j,-p}^{\\ell}(v_{\\theta'})\n\\,\\dd s.\\]\nFrom here it is straightforward to deduce \\eqref{eq:mAbound} for $h>1$, using \\eqref{eq:nDbound} and the remarks above it. 
We proved that \\ref{pws-b} implies \\ref{pws-a}.\n\\end{proof}\n\nWe shall denote by $\\mcH(\\tau_{\\ell})_{\\infty}$ the set of functions satisfying the equivalent conditions \\ref{pws-a} and \\ref{pws-b} of Theorem~\\ref{thm:pws}. It is clear that $\\mcH(\\tau_{\\ell})_{\\infty}$ is a convolution subalgebra of $L^1(G)\\cap L^2(G)$.\n\n\\begin{remark} The last display combined with the observation $\\widetilde{f}(s,p)=\\widetilde{f}(-s,-p)$ yields the following refinement of \\eqref{eq:mAbound} when $m=0$:\n\\begin{equation}\\label{eq:inverse-spherical-transform-estimate}\n\\bigl|f(k_1a_hk_2)\\bigr|\\leq\\sum_{\\substack{|p|\\leq \\ell \\\\ p\\equiv\\ell\\one}}\n\\int_h^\\infty \\bigl|\\widetilde{f}(s,p)\\bigr|\\,\\dd s,\\qquad h>1,\\quad k_1,k_2\\in K. \n\\end{equation}\n\\end{remark}\n\nWe end this subsection by stating a two-variable version of some of the previous definitions and results. Taking (topological) tensor products of Hilbert spaces, we can identify $\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$ with the space of functions $f \\in L^2(G \\times G)$ satisfying\n\\begin{itemize}\n\\item $f(g_1, g_2) = f(k_1g_1k_1^{-1}, k_2g_2k_2^{-1})$ for almost every\n$g_1, g_2 \\in G$ and $k_1, k_2 \\in K$;\n\\item $(\\ov{\\chi_\\ell},\\ov{\\chi_\\ell}) \\star f \\star (\\ov{\\chi_\\ell},\\ov{\\chi_\\ell}) = f$.\n\\end{itemize}\nThis can be seen by projecting the isomorphism between $L^2(G)\\hat{\\otimes} L^2(G)$ and $L^2(G\\times G)$ (see e.g.\\ \\cite[Cor.~4.11.9]{Simon2015}) to $\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$ and the (closed) subspace of functions in question. 
By Theorem~\\ref{thm:tr-plancherel}, this space is isometrically isomorphic to $L^2(\\Gtemp(\\tau_{\\ell})^2)$ via the obvious extension of the map \\eqref{eq:trpif}:\n\\begin{equation}\\label{doubletransform}\n\\widehat{f}(\\nu_1, p_1, \\nu_2, p_2) := \\int_{G\\times G} f(g_1, g_2) \\,\n\\varphi_{\\nu_1, p_1}^{\\ell}(g_1)\\varphi_{\\nu_2, p_2}^{\\ell}(g_2) \\, \\dd g_1\\dd g_2.\n\\end{equation}\nFor $h\\in L^1(\\Gtemp(\\tau_{\\ell})^2)\\cap L^2(\\Gtemp(\\tau_{\\ell})^2)$, the \ninverse transform is given as in \\eqref{eq:inverse-tauell-transform}:\n\\begin{equation}\\label{inversedoubletransform}\n\\begin{split}\n\\widecheck{h}(g_1, g_2) := \\frac{1}{(2\\ell+ 1)^2\\pi^4}&\n\\sum_{\\substack{|p_1|, |p_2|\\leq\\ell\\\\p_1 \\equiv p_2\\equiv\\ell\\one}}\n\\int_{0}^{\\infty}\\int_{0}^{\\infty} h(it_1,p_1, it_2, p_2)\\\\\n&\\qquad\\varphi_{it_1,p_1}^{\\ell}(g_1^{-1})\\varphi_{it_2,p_2}^{\\ell}(g_2^{-1})\\,(t_1^2+p_1^2)(t_2^2+p_2^2)\\,\\dd t_1\\,\\dd t_2.\n\\end{split}\n\\end{equation}\nIt is straightforward to adapt the proof of Theorem~\\ref{thm:pws} presented above to obtain the following variant for $\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$:\n\\begin{theorem}\\label{thm:pws2} For $f\\in\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$, the following two conditions are equivalent (with implied constants depending on $f$).\n\\begin{enumerate}[(a)]\n\\item\\label{pws2-a} The function $f(g_1,g_2)$ is smooth, and for any $m\\in\\ZZ_{\\geq 0}$ and $A>0$ we have\n\\[\\frac{\\partial^{2m}}{\\partial h_1^m\\partial h_2^m}f(k_1 a_{h_1} k_2,k_3 a_{h_2} k_4)\\ll_{m,A} e^{-A(|h_1|+|h_2|)},\n\\qquad h_1,h_2\\in\\RR,\\quad k_1,k_2,k_3,k_4\\in K.\\]\n\\item\\label{pws2-b} The function $\\widehat{f}(\\nu_1,p_1,\\nu_2,p_2)$ extends holomorphically to $\\CC\\times\\tfrac12\\ZZ\\times\\CC\\times\\tfrac12\\ZZ$ such that\n\\begin{align*}\n\\widehat{f}(\\nu_1,p_1,\\nu_2,p_2) &= \\widehat{f}(p_1,\\nu_1,\\nu_2,p_2),\n\\qquad \\nu_1 \\equiv p_1\\pmod{1},\\quad 
|\\nu_1|, |p_1| \\leq \\ell,\\\\\n\\widehat{f}(\\nu_1,p_1,\\nu_2,p_2) &= \\widehat{f}(\\nu_1,p_1,p_2,\\nu_2),\n\\qquad \\nu_2 \\equiv p_2\\pmod{1},\\quad |\\nu_2|, |p_2| \\leq \\ell,\n\\end{align*}\nand for any $B,C>0$ we have\n\\[\\widehat{f}(\\nu_1,p_1,\\nu_2,p_2) \\ll_{B, C} (1+|\\nu_1|+|\\nu_2|)^{-C},\n\\qquad |\\Re\\nu_1|,|\\Re\\nu_2|\\leq B,\\quad p_1,p_2\\in\\tfrac{1}{2}\\ZZ.\\]\n\\end{enumerate}\n\\end{theorem}\n\n\n\\subsection{Hecke operators}\nThe arithmetic quotient $\\Gamma\\backslash G$ comes equipped with a rich family of Hecke correspondences, which we now describe, referring to \\cite{BlomerHarcosMilicevic2016} for further details and references. For every $n\\in\\ZZ[i]\\setminus\\{0\\}$, consider the set\n\\[ \\Gamma_n:=\\left\\{\\begin{pmatrix} a&b\\\\c&d\\end{pmatrix}\\in\\MM_2(\\ZZ[i]):ad-bc=n\\right\\}. \\]\nIn particular, $\\Gamma_1=\\Gamma$. Then we may define the Hecke operator $T_n$ acting on functions $\\phi:\\Gamma\\backslash G\\to\\CC$ by\n\\begin{equation}\\label{Hecke-def}\n(T_n\\phi)(g):=\\frac1{|n|}\\sum_{\\gamma\\in\\Gamma\\backslash\\Gamma_n}\\phi\\left(\\frac1{\\sqrt{n}}\\gamma g\\right)=\\frac1{4|n|}\\sum_{ad=n}\\sum_{b\\bmod d}\\phi\\left(\\frac1{\\sqrt{n}}\\begin{pmatrix}a&b\\\\0&d\\end{pmatrix}g\\right),\n\\end{equation}\nwhere the result is independent of the choice of the square-root since $\\pm\\id\\in\\Gamma$. In particular, since $\\Gamma_{-1}=\\Gamma\\cdot\\big(\\begin{smallmatrix}-1&\\\\&1\\end{smallmatrix}\\big)$ and $\\frac{1}{i}\\big(\\begin{smallmatrix}-1&\\\\&1\\end{smallmatrix}\\big)=\\big(\\begin{smallmatrix}i&\\\\&-i\\end{smallmatrix}\\big)\\in\\Gamma$, we have $T_{-1}=T_1=\\id$. 
We also observe that, as $\\gamma$ ranges through a set of representatives of $\\Gamma\\backslash\\Gamma_n$, $n\\gamma^{-1}$ ranges through a set of representatives of $\\Gamma_n\/\\Gamma$.\n\nThese Hecke operators are self-adjoint on $L^2(\\Gamma\\backslash G)$, commute with each other and the Laplace operator; thus they act by constants $\\lambda_n(V)$ on each irreducible component $V\\subset L^2(\\Gamma\\backslash G)$, with non-zero vectors in each $V$ being joint Hecke--Maa{\\ss} eigenfunctions. They also satisfy the multiplicativity relation\n\\begin{equation}\\label{hecke-mult}\nT_mT_n=\\sum_{(d)\\mid(m,n)}T_{mn\/d^2},\\qquad m,n\\in\\ZZ[i]\\setminus\\{0\\},\n\\end{equation}\nwhere it is clear that the right-hand side does not depend on the choice of the generator $d$. Finally we have the Rankin--Selberg bound\n\\begin{equation}\\label{RS-bound}\n\\sum_{|n|^2\\leq x}|\\lambda_n(V)|^2\\ll_Vx.\n\\end{equation}\n\n\\subsection{Eisenstein series and spectral decomposition}\\label{Eisenstein-subsec}\nIn this subsection, we review the construction and properties of the (not necessarily spherical) Eisenstein series on $\\Gamma\\backslash G$. The quotient $\\Gamma\\backslash G$ has a unique cusp at $\\infty$. 
For $\\ell\\in\\ZZ_{\\geq 0}$, $p,q\\in\\ZZ$ with $2\\mid p$ and $|p|,|q|\\leq\\ell$, and $\\nu\\in\\CC$ with $\\Re\\nu>1$, we define the Eisenstein series of type $(\\ell,q)$ at $\\infty$ as in \\cite[Def.~3.3.1]{Lokvenec-Guleska2004} by the absolutely and locally uniformly convergent series\n\\begin{equation}\\label{Eisdef}\nE_{\\ell,q}(\\nu,p)(g):=\\sum_{\\gamma\\in\\Gamma_{\\infty}\\backslash\\Gamma}\\phi_{\\ell,q}(\\nu,p)(\\gamma g),\n\\end{equation}\nwhere $\\Gamma_{\\infty}$ is the subgroup of upper-triangular matrices in $\\Gamma$ (the stabilizer of $\\infty$ in $\\Gamma$), and\n\\begin{equation}\\label{eq:phi-ell-q-nu-p}\n\\phi_{\\ell,q}(\\nu,p)\\left(\\begin{pmatrix}r&\\ast\\\\&r^{-1}\\end{pmatrix}k\\right):=r^{2(1+\\nu)}\\Phi_{p,q}^{\\ell}(k),\\qquad r>0,\\quad k\\in K.\n\\end{equation}\nThese Eisenstein series possess a meromorphic continuation to $\\nu\\in\\CC$, which is holomorphic along $i\\RR$ \\cite[\\S5.1]{Lokvenec-Guleska2004}. An easy calculation with \\eqref{Hecke-def} and \\eqref{matrix-coeff} shows that they are also eigenfunctions of the Hecke operators $T_n$ with\n\\begin{equation}\n\\label{Eisenstein-Hecke-eigen}\nT_nE_{\\ell,q}(\\nu,p)=\\lambda_n(E(\\nu,p))E_{\\ell,q}(\\nu,p),\\quad \\lambda_n(E(\\nu,p)):=\\frac{1}{4}\\sum_{n=ad}\\chi_{\\nu,p}(a)\\chi_{-\\nu,-p}(d),\n\\end{equation}\nwhere $\\chi_{\\nu,p}(z):=|z|^{\\nu}(z\/|z|)^{-p}$. 
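As an illustrative aside (not used in the argument), the divisor-sum formula \eqref{Eisenstein-Hecke-eigen} can be tested numerically against the Hecke relation \eqref{hecke-mult}. In the sketch below, the spectral parameter $\nu=0.7i$, the weight $p=2$, and the sample moduli are arbitrary choices made for the test.

```python
# Illustrative numerical check (not part of the argument) of the divisor-sum
# formula lambda_n(E(nu,p)) = (1/4) sum_{n=ad} chi_{nu,p}(a) chi_{-nu,-p}(d),
# where the sum runs over all factorizations n = a*d in Z[i].

def factorizations(n):
    """All pairs (a, d) of Gaussian integers with a*d == n (for n != 0)."""
    pairs = []
    r = int(abs(n)) + 1
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            q = x * x + y * y                    # |a|^2
            if q == 0 or q > abs(n) ** 2 + 0.5:
                continue
            w = n * complex(x, -y)               # n * conj(a) = |a|^2 * d
            dx, dy = w.real / q, w.imag / q
            if dx == round(dx) and dy == round(dy):
                pairs.append((complex(x, y), complex(dx, dy)))
    return pairs

def chi(z, nu, p):
    """The character chi_{nu,p}(z) = |z|^nu * (z/|z|)^(-p)."""
    return abs(z) ** nu * (z / abs(z)) ** (-p)

def lam(n, nu, p):
    """lambda_n(E(nu,p)) as a divisor sum; the 1/4 accounts for the four units."""
    return sum(chi(a, nu, p) * chi(d, -nu, -p) for a, d in factorizations(n)) / 4

nu, p = 0.7j, 2
m, n = 1 + 1j, 3 + 0j                            # coprime in Z[i]
# multiplicativity at coprime moduli, a special case of the Hecke relation:
assert abs(lam(m * n, nu, p) - lam(m, nu, p) * lam(n, nu, p)) < 1e-9
# T_pi T_pi = T_{pi^2} + T_1 at the Gaussian prime pi = 1 + i:
assert abs(lam(m, nu, p) ** 2 - lam(m * m, nu, p) - 1) < 1e-9
# lambda_{in} = (-1)^{p/2} lambda_n, here with p = 2:
assert abs(lam(1j * n, nu, p) + lam(n, nu, p)) < 1e-9
```

The enumeration over actual elements (rather than ideals) counts each ideal factorization four times with equal contributions, since $p$ is even; this is exactly what the prefactor $\tfrac14$ compensates.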
In particular,\n\\begin{equation}\\label{eq:hecke-i-eigenvalue-eisenstein}\n\\lambda_{in}(E(\\nu,p))=(-1)^{p\/2}\\lambda_n(E(\\nu,p)).\n\\end{equation}\nWhile $E_{\\ell,q}(\\nu,p)$ for individual $\\nu\\in i\\RR$ (barely) fail to lie in $L^2(\\Gamma\\backslash G)$, their averages against $C_c(i\\RR)$ weights $f(\\nu)$ comfortably do, and upon taking the Hilbert space closure of their span and orthocomplements one obtains the familiar orthogonal decomposition\n\\begin{equation}\\label{eq:spectral-decomposition}\nL^2(\\Gamma\\backslash G)=\\CC\\cdot 1\\oplus L^2(\\Gamma\\backslash G)_{\\cusp}\\oplus L^2(\\Gamma\\backslash G)_{\\Eis}.\n\\end{equation}\n\nLet $H(\\nu,p)$ be the linear span of all $\\phi_{\\ell,q}(\\nu,p)$ with $|p|,|q|\\leq\\ell$. By \\eqref{eq:phi-ell-q-nu-p}, the functions $f\\in H(\\nu,p)$ satisfy\n\\[f\\left(\\begin{pmatrix}z&\\ast\\\\&z^{-1}\\end{pmatrix}g\\right)=|z|^2\\chi_{\\nu,p}(z^2)f(g),\\qquad z\\in\\CC^\\times,\\quad g\\in G,\\]\nand they are determined by their restriction to $K$. In fact $H(\\nu,p)$ as a $(\\mfg,K)$-module is isomorphic to the \n$K$-finite part of $\\pi_{\\nu,p}$ featured in \\eqref{eq:tau-ell-isotypical-decomposition-algebraic}. That is, the appropriate completion of $H(\\nu,p)$ serves as a model of the Fr\\'echet\/Hilbert space representation $\\pi_{\\nu,p}$, and we shall denote by \n$H^\\infty(\\nu,p)$ the dense subspace of smooth vectors in this completion.\n\nDenoting by $C^K(\\Gamma\\backslash G)$ the space of $K$-finite smooth functions on $\\Gamma\\backslash G$, an automorphic representation of type $(\\nu,p)$ for $\\Gamma\\backslash G$ may be realized as a $(\\mfg,K)$-module homomorphism $T:H(\\nu,p)\\to C^K(\\Gamma\\backslash G)$, cf.\\ \\cite[\\S3.4]{Lokvenec-Guleska2004}. 
Such a $T$ may arise as $T_V$ for a cuspidal constituent $V\\simeq\\pi_{\\nu,p}$ occurring discretely in $L^2(\\Gamma\\backslash G)_{\\cusp}$, or from the Eisenstein series via\n\\[T_{E(\\nu,p)}\\phi_{\\ell,q}(\\nu,p):=E_{\\ell,q}(\\nu,p),\\qquad |p|,|q|\\leq\\ell.\\]\nIndeed, by \\eqref{Eisdef}, the last display defines a $(\\mfg,K)$-module homomorphism for $\\Re\\nu>1$, hence by analytic continuation for all $\\nu\\in\\CC$ where the relevant Eisenstein series have no pole. Following custom, we lighten the notation by denoting a generic automorphic representation of type $(\\nu,p)$, whether of the form $T_V$ or $T_{E(\\nu,p)}$, as $V$, and its associated Hecke eigenvalues as $\\lambda_n(V)$. Finally, we shall use that the above $(\\mfg,K)$-module homomorphism extends uniquely to a $G$-module homomorphism $H^\\infty(\\nu,p)\\to C^\\infty(\\Gamma\\backslash G)$, and its image consists of functions of moderate growth.\n\nNow \\eqref{eq:spectral-decomposition} is explicated by the following two spectral identities. For $f$ in the space $C^{\\infty}_0(\\Gamma\\backslash G)$ of smooth complex-valued functions on $\\Gamma\\backslash G$ with all rapidly decaying derivatives, we have\n\\begin{equation}\\label{eq:spectral-decomposition-EGM}\n\\begin{split}\nf= \\frac{\\langle f, 1\\rangle }{\\vol(\\Gamma \\backslash G)} & + \\sum_{\\text{$V$ cuspidal}} \\sum_{\\substack{q,\\ell\\in \\ZZ \\\\ |p_V|,|q|\\leq \\ell}} \\frac{\\langle f,T_V \\phi_{\\ell,q}(\\nu_V,p_V) \\rangle}{{\\|\\Phi_{p_V,q}^{\\ell}\\|}_K^2} T_V \\phi_{\\ell,q}(\\nu_V,p_V)\\\\ & + \\frac{1}{\\pi i} \\int_{(0)} \\sum_{p\\in 2\\ZZ} \\sum_{\\substack{q,\\ell\\in \\ZZ \\\\ |p|,|q|\\leq \\ell}} \\frac{\\langle f,E_{\\ell,q}(\\nu,p) \\rangle}{{\\|\\Phi_{p,q}\n^{\\ell}\\|}_K^2} E_{\\ell,q}(\\nu,p) \\,\\dd\\nu,\n\\end{split}\n\\end{equation}\nwith the obvious interpretation of $\\langle f,E_{\\ell,q}(\\nu,p) \\rangle$. 
For\n$f_1,f_2\\in C^{\\infty}_0(\\Gamma \\backslash G)$, we have with the same interpretation\n\\begin{equation}\\label{eq:spectral-decomposition-plancherel}\n\\begin{split}\n\\langle f_1,f_2 \\rangle = \\frac{ \\langle f_1, 1\\rangle \\langle 1, f_2\\rangle}{\\vol(\\Gamma \\backslash G)} & + \\sum_{\\text{$V$ cuspidal}} \\sum_{\\substack{q,\\ell\\in \\ZZ \\\\ |p_V|,|q|\\leq \\ell}} \\frac{\\langle f_1,T_V \\phi_{\\ell,q}(\\nu_V,p_V) \\rangle \\langle T_V \\phi_{\\ell,q}(\\nu_V,p_V), f_2 \\rangle}{{\\|\\Phi_{p_V,q}^{\\ell}\\|}_K^2} \\\\ & + \\frac{1}{\\pi i} \\int_{(0)} \\sum_{p\\in 2\\ZZ} \\sum_{\\substack{q,\\ell\\in \\ZZ \\\\ |p|,|q|\\leq \\ell}} \\frac{\\langle f_1,E_{\\ell,q}(\\nu,p) \\rangle \\langle E_{\\ell,q}(\\nu,p),f_2 \\rangle}{{\\|\\Phi_{p,q}\n^{\\ell}\\|}_K^2} \\,\\dd\\nu.\n\\end{split}\n\\end{equation}\nCompare with \\cite[Ch.~6, Th.~3.4]{ElstrodtGrunewaldMennicke1998} and \\cite[Th.~8.1]{Lokvenec-Guleska2004}.\n\nWe shorten the notation in two ways. First, for an automorphic representation $V$ (cuspidal or Eisenstein) of type $(\\nu,p)$ occurring in $L^2(\\Gamma\\backslash G)$, we write\n\\[\\phi_{\\ell,q}^{V}:=\\frac{T_V\\phi_{\\ell,q}(\\nu,p)}{{\\|\\Phi_{p,q}^{\\ell}\\|}_K},\\qquad |p|,|q|\\leq\\ell.\\] \nIn particular, when at least one of two such $V$ and $V'$ is cuspidal, $\\langle\\phi_{\\ell,q}^V,\\phi_{\\ell',q'}^{V'}\\rangle$ equals $\\delta_{(\\ell,q,V)=(\\ell',q',V')}$. 
Second, while the decompositions in \\eqref{eq:spectral-decomposition-EGM} and \\eqref{eq:spectral-decomposition-plancherel} are over all automorphic representations $V$ (cuspidal or Eisenstein) occurring in $L^2(\\Gamma\\backslash G)$, keeping in mind the $\\tau_{\\ell}$-spherical transform of \\S\\ref{section:sphericaltransform}, it will be useful to introduce the shorthand notation $\\int_{[\\ell]}\\,\\dd V$ for the sum-integral over those $V$ of type $(\\nu,p)$ such that $\\pi_{\\nu,p}\\in\\widehat{G}(\\tau_{\\ell})$ (that is, with $|p|\\leq\\ell$ as well as $p\\in 2\\ZZ$ for $V$ Eisenstein). Thus, for example, \\eqref{eq:spectral-decomposition-EGM} may be rewritten in the more compact form\n\\begin{equation}\n\\label{compactform}\nf=\\frac{\\langle f,1\\rangle}{\\vol(\\Gamma \\backslash G)}+\\sum_{\\ell\\geq 0}\\int_{[\\ell]}\\sum_{|q|\\leq\\ell}\\langle f,\\phi_{\\ell,q}^{V}\\rangle\\phi_{\\ell,q}^{V}\\,\\dd V.\n\\end{equation}\n\n\\subsection{Rankin--Selberg convolutions}\\label{RSSection}\nIn this subsection, we review briefly the properties of Rankin--Selberg $L$-functions. We shall restrict to automorphic representations for $\\Gamma\\backslash G$ on which the Hecke operator $T_i$ acts trivially, so that they lift to automorphic representations for $\\PGL_2(\\ZZ[i])\\backslash\\PGL_2(\\CC)$. 
This allows us to refer to the theory of $\\GL_2$.\n\nThe Rankin--Selberg $L$-function of two automorphic representations $V_j$ of type\n$(\\nu_j,p_j)\\in i\\RR\\times \\ZZ$ for $\\Gamma\\backslash G$ is defined by the absolutely convergent series (cf.~\\eqref{RS-bound})\n\\begin{equation}\\label{RSdef}\nL(s, V_1\\times V_2) = \\frac{1}{4}\\zeta_{\\QQ(i)}(2s)\\sum_{n \\in \\ZZ[i] \\setminus \\{0\\}}\n\\frac{\\lambda_n(V_1)\\lambda_n(V_2)}{(|n|^2)^s}, \\qquad \\Re s > 1.\n\\end{equation}\nThis can be verified by matching the Euler factors on the two sides, using \\cite[Th.~15.1]{Jacquet1972}, \\cite[Prop.~3.5]{JacquetLanglands}, \\cite[(3.1.3)]{Tate1977}, and \\cite[Lemma~1.6.1]{Bump1997}.\nIn particular,\n\\[L(s,V\\times E(\\nu,p))=L(s-\\tfrac{1}{2}\\nu,V\\otimes\\chi_p)L(s+\\tfrac{1}{2}\\nu,V\\otimes\\chi_{-p})\\]\nfor $V$ cuspidal and $(\\nu,p)\\in i\\RR\\times 4\\ZZ$ according to \\eqref{eq:hecke-i-eigenvalue-eisenstein}, as well as\n\\[ L\\big(s,E(\\nu_1,p_1)\\times E(\\nu_2,p_2)\\big)=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}L\\big(s+\\tfrac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2),\\chi_{-\\epsilon_1p_1-\\epsilon_2p_2}\\big),\\]\nwith $(\\nu_j,p_j)\\in i\\RR\\times 4\\ZZ$ and $\\chi_p(z):=(z\/|z|)^{-p}$. All $L$-functions are meant over $\\QQ(i)$.\n\nThe Rankin--Selberg $L$-function $L(s,V_1\\times V_2)$ possesses a meromorphic continuation to the entire complex plane with the exception of finitely many possible poles along the line $\\Re s=1$. 
It is in fact entire except as follows (cf.\\ \\cite[Th.~2.2]{GelbartJacquet}):\n\\begin{itemize}\n\\item If $V_1 = V_2\\,(=V)$ is cuspidal of type $(\\nu,p)$ (that is, $(\\nu_1,p_1)=\\pm(\\nu_2,p_2)$), there is a simple pole at $s=1$ with (strictly) positive residue\n\\begin{equation}\\label{RS-res}\n\\mathop{\\mathrm{res}}_{s=1} L(s, V\\times V) = \\frac{\\pi}{4}\\cdot L(1,\\mathrm{ad^2}V) \\gg_{\\eps} \\big((1 + |p|)(1+ |\\nu|)\\big)^{-\\eps},\n\\end{equation}\nwhere the lower bound follows from \\cite[Prop.~3.2]{Maga2013}.\n\\item If $V_1$ and $V_2$ are both Eisenstein series with $p_1 = \\epsilon p_2$ for some $\\epsilon\\in\\{\\pm 1\\}$, there are simple poles at $s = 1+\\eta(\\nu_1 - \\epsilon\\nu_2)\/2$ for $\\eta\\in\\{\\pm 1\\}$ with residue\n\\begin{equation}\n\\label{RS-res-Eis}\n\\mcL_{\\eta} (V_1,V_2):=\\frac{\\pi}{4}\\cdot\\zeta_{\\QQ(i)}(1+\\eta(\\nu_1-\\epsilon\\nu_2))L(1+\\eta \\nu_1,\\chi_{-2\\eta p_1})L(1-\\eta\\epsilon\\nu_2,\\chi_{2\\eta\\epsilon p_2}),\n\\end{equation}\nunless $\\nu_1=\\pm\\nu_2$ or $\\nu_1=0$ or $\\nu_2=0$, in which case, however, the definition still makes sense as a meromorphic function of $\\nu_1,\\nu_2\\in\\CC$.\n\\end{itemize}\n\nFinally, the associated completed $L$-function satisfies the familiar functional equation\n\\begin{equation}\\label{func-eq}\n\\Lambda(s, V_1 \\times V_2) := 16^sL(s, V_1\\times V_2)L_{\\infty} (s, V_1\\times V_2) = \\Lambda(1-s, V_1 \\times V_2),\n\\end{equation}\nwhere the exponential factor $16^s$ coming from the discriminant of $\\QQ(i)$ is included for convenience, and the factor at infinity is given by\n\\begin{align}\n\\notag L_{\\infty}(s, V_1\\times V_2) = \\Gamma(s,\\vec{\\nu},\\vec{p})\n:&=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\nL_\\infty(s,\\chi_{\\epsilon_1\\nu_1,\\epsilon_1p_1}\\cdot\\chi_{\\epsilon_2\\nu_2,\\epsilon_2p_2})\\\\\n\\label{inffactor} &=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 
1\\}}\n\\Gamma_{\\CC}\\left(s+\\textstyle\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)+\\textstyle\\frac{1}{2}|\\epsilon_1p_1+\\epsilon_2p_2|\\right).\n\\end{align}\nHere we used the abbreviations\n\\[\\Gamma_{\\CC}(s):=2(2\\pi)^{-s}\\Gamma(s),\\qquad\\vec{\\nu}:=(\\nu_1,\\nu_2),\\qquad\\vec{p}:=(p_1,p_2).\\]\nIndeed, \\eqref{inffactor} follows from \\cite[Prop.~18.2]{Jacquet1972}, \\cite[\\S 3]{Tate1977}, \\cite[Prop.~6 in \\S VII-2]{Weil1974} and its proof, upon noting that $V_j$ is isomorphic to the principal series representation induced from the pair of characters $(\\chi_{-\\nu_j,-p_j},\\chi_{\\nu_j,p_j})$.\n\n\\begin{lemma}\\label{eis-pos}\nLet $f : i\\RR \\rightarrow \\CC$ be a function decaying as $f(\\nu)\\ll (1+|\\nu|)^{-3}$, and let $p \\in \\ZZ$. Then\n\\[\\int_{(0)}\\int_{(0)}\\sum_{\\eta \\in \\{\\pm 1\\}}\nf(\\nu_1)\\ov{f(\\nu_2)}\\,\\mcL_{\\eta}((\\nu_1,p),(\\nu_2,p))\\,\\frac{\\dd\\nu_1}{\\pi i}\\,\\frac{\\dd\\nu_2}{\\pi i}\\geq 0.\\]\n\\end{lemma}\n\n\\begin{proof} First we note that the $\\eta$-sum cancels the individual poles of $\\mcL_{\\eta}((\\nu_1, p), (\\nu_2, p))$ at $\\nu_1 = \\nu_2$. 
For $\\eps > 0$ and $V_j = (\\nu_j,p)$ with $j\\in\\{1,2\\}$ define\n\\[\\mcL_{\\eta}(V_1, V_2, \\eps):=\\frac{\\pi}{4}\\zeta_{\\QQ(i)}(1+\\eps+\\eta(\\nu_1-\\nu_2))\nL(1+\\eta\\nu_1,\\chi_{-2\\eta p})L(1-\\eta\\nu_2,\\chi_{2\\eta p})\\]\nand\n\\[\\mcI(\\eps):=\\int_{(0)}\\int_{(0)}\\sum_{\\eta\\in\\{\\pm 1\\}}\nf(\\nu_1)\\ov{f(\\nu_2)}\\,\\mcL_{\\eta}(V_1,V_2,\\eps)\\,\\frac{\\dd\\nu_1}{\\pi i}\\,\\frac{\\dd\\nu_2}{\\pi i}.\\]\nThe function $\\mcI$ is continuous at $\\eps = 0$, so it suffices to show $\\mcI(\\eps) \\geq 0$ for $\\eps > 0$.\nInserting the definition and opening the Dedekind zeta function, we see that\n\\[\\mcI(\\eps) = \\frac{\\pi}{16} \\sum_{\\eta \\in \\{\\pm 1\\}} \\sum_{n \\in \\ZZ[i]\\setminus\\{0\\}}\n\\frac{1}{|n|^{2+2\\eps}}\n\\biggl|\\int_{(0)} \\frac{1}{|n|^{2\\eta\\nu}} L(1 + \\eta \\nu, \\chi_{-2\\eta p}) f(\\nu) \\frac{\\dd\\nu}{\\pi i} \\biggr|^2 \\geq 0\\]\nas desired.\n\\end{proof}\n\n\\subsection{Diagonal detection of Voronoi type}\\label{sec28}\nIn this subsection, we prove a Voronoi-type formula that allows us to detect equality of two automorphic representations occurring in $L^2(\\Gamma\\backslash G)$ in terms of a certain weighted orthogonality relation between their Hecke eigenvalues.\nWe shall use that only tempered representations occur in $L^2(\\Gamma\\backslash G)$, e.g.\\ by \\cite[Ch.~7, Prop.~6.2]{ElstrodtGrunewaldMennicke1998}.\n\n\\begin{lemma}\\label{lemma-vor}\nLet $P \\geq 1$ be a parameter. There exists a function\n\\[W_P:\\RR_{>0}\\times\\CC^2\\times\\ZZ^2\\to\\CC,\\]\ngiven explicitly by \\eqref{defW}, with the following properties.\n\\begin{enumerate}[(a)]\n\\item\\label{vor-a} $W_{P}(x,\\vec{\\nu},\\vec{p})$ is an entire function of $\\vec{\\nu}=(\\nu_1,\\nu_2)\\in\\CC^2$, and it is invariant under\n\\[ (\\nu_j,p_j)\\mapsto (-\\nu_j,-p_j)\\quad\\text{as well as}\\quad (\\nu_j,p_j)\\mapsto (p_j,\\nu_j)\\quad (\\nu_j\\in \\ZZ). 
\\]\n\\item\\label{vor-b} Let us abbreviate $\\tilde P:=\\bigl(1+|p_1+p_2|\\bigr)\\bigl(1+|p_1-p_2|\\bigr)$. Then for every\n$A>|\\Re\\nu_1|+|\\Re\\nu_2|$ we have\n\\begin{equation}\\label{W1}\nW_{P}(x,\\vec{\\nu},\\vec{p})\\ll_{A,\\Re\\nu_1,\\Re\\nu_2}\\bigl(1+(\\tilde P\/P)^{2A-2}\\bigr)\\bigl(1+|\\nu_1|+|\\nu_2|\\bigr)^{4A}x^{-A}.\n\\end{equation}\n\\item\\label{vor-c} For every two automorphic representations $V_j$ of type $(\\nu_j,p_j)\\in i\\RR\\times\\ZZ$ for $\\Gamma\\backslash G$ we have\n\\begin{equation}\\label{vor-formula}\n\\begin{split}\n&\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}W_P\\left(\\frac{|n|}{P},\\vec{\\nu},\\vec{p}\\right)\\lambda_n(V_1)\\lambda_n(V_2)\\\\\n&=\\begin{cases}\n\\frac{\\pi}{4}L(1,\\mathrm{ad^2}V_1)P^2,&V_1=V_2\\text{ cuspidal};\\\\\n\\sum_{\\eta\\in\\{\\pm 1\\}}\\mcL_{\\eta}(V_1,V_2)P^{2+\\eta(\\nu_1-\\epsilon\\nu_2)},&V_1,V_2\\text{ Eisenstein, }p_1=\\epsilon p_2,\\,\\,\\epsilon\\in\\{\\pm 1\\};\\\\\n0,&\\text{otherwise,}\n\\end{cases}\n\\end{split}\n\\end{equation}\nwhere $L(1,\\mathrm{ad^2}V_1)$ and $\\mcL_{\\eta}(V_1,V_2)$ are as in \\eqref{RS-res} and \\eqref{RS-res-Eis}.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet $w:\\RR_{>0}\\to\\CC$ be a smooth function supported inside $[1,2]$, and normalized so that its Mellin transform $\\widehat{w}(s)=\\int_0^{\\infty}w(x)x^s\\,\\dd x\/x$ satisfies $\\widehat{w}(1)=1$. We define\n\\begin{equation}\\label{defW}\nW_P(x,\\vec{\\nu},\\vec{p}):=\\frac1{8\\pi i}\\int_{(2)}\\zeta_{\\QQ(i)}(2s)\\left(\\widehat{w}(s)-P^{2-4s}\\frac{16^{2s-1}\\Gamma(s,\\vec{\\nu},\\vec{p})}{\\Gamma(1-s,\\vec{\\nu},\\vec{p})}\\widehat{w}(1-s)\\right)x^{-2s}\\,\\dd s,\n\\end{equation}\nwhere $\\Gamma(s,\\vec{\\nu},\\vec{p})$ is as in \\eqref{inffactor}.\n\nShifting the contour to the far right, we see that $W_P(x,\\vec{\\nu},\\vec{p})$ is entire in $\\vec{\\nu}$. The symmetry with respect to $(\\nu_j, p_j) \\mapsto (-\\nu_j, -p_j)$ is obvious from \\eqref{inffactor}. 
For $r\\in\\frac{1}{2}\\ZZ$ we have the equality\n\\[\\frac{\\Gamma(z+r)}{\\Gamma(1-z+r)}\n= \\frac{\\Gamma(z-r)}{\\Gamma(1-z-r)}\\cdot\\frac{\\sin(\\pi(z-r))}{\\sin(\\pi(z+r))}\n= (-1)^{2r}\\frac{\\Gamma(z-r)}{\\Gamma(1-z-r)}\\]\nof meromorphic functions in $z\\in\\CC$. This shows that (cf.\\ \\eqref{inffactor})\n\\begin{align*}\n\\frac{\\Gamma(s,\\vec{\\nu},\\vec{p})}{\\Gamma(1-s,\\vec{\\nu},\\vec{p})}\n&=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\n\\frac{\\Gamma_{\\CC}\\left(s+\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2) + \\frac{1}{2}|\\epsilon_1p_1+\\epsilon_2p_2|\\right)}\n{\\Gamma_{\\CC}\\left(1-s-\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)+\\frac{1}{2}|\\epsilon_1p_1+\\epsilon_2p_2|\\right)}\\\\\n&=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\n\\frac{\\Gamma_{\\CC}\\left(s+\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2) + \\frac{1}{2}(\\epsilon_1p_1+\\epsilon_2p_2)\\right)}\n{\\Gamma_{\\CC}\\left(1-s-\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)+\\frac{1}{2}(\\epsilon_1p_1+\\epsilon_2p_2)\\right)}\n\\end{align*}\nis symmetric with respect to $(\\nu_j, p_j) \\mapsto (p_j, \\nu_j)$, completing the proof of \\ref{vor-a}.\n\nCombining the first line of the previous display with \\cite[Lemma~3.2]{Harcos2002}, we infer for $\\Re(s)>\\frac{1}{2}|\\Re\\nu_1|+\\frac{1}{2}|\\Re\\nu_2|$ that\n\\begin{align*}\n&\\left|\\frac{\\Gamma(s,\\vec{\\nu},\\vec{p})}{\\Gamma(1-s,\\vec{\\nu},\\vec{p})}\\right|\n=\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\n\\left|\\frac{\\Gamma_{\\CC}\\left(s+\\frac{1}{2}(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2) + \\frac{1}{2}|\\epsilon_1p_1+\\epsilon_2p_2|\\right)}\n{\\Gamma_{\\CC}\\left(1-\\ov{s}-\\frac{1}{2}(\\epsilon_1\\ov{\\nu_1}+\\epsilon_2\\ov{\\nu_2})+\\frac{1}{2}|\\epsilon_1p_1+\\epsilon_2p_2|\\right)}\\right|\\\\\n&\\qquad\\ll_{\\Re s,\\Re\\nu_1,\\Re\\nu_2}\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\n\\left|s+\\tfrac12(\\epsilon_1\\nu_1+\\epsilon_2\\nu_2) + 
\n\\tfrac12|\\epsilon_1p_1+\\epsilon_2p_2|\\right|^{\\Re(2s+\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)-1}\\\\\n&\\qquad\\ll_{\\Re s,\\Re\\nu_1,\\Re\\nu_2}\\prod_{\\epsilon_1,\\epsilon_2\\in\\{\\pm 1\\}}\n\\left(1+|\\epsilon_1p_1+\\epsilon_2p_2|\\right)^{\\Re(2s+\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)-1}\n\\left(|s|+|\\nu_1|+|\\nu_2|\\right)^{\\Re(2s+\\epsilon_1\\nu_1+\\epsilon_2\\nu_2)}\\\\\n&\\qquad=\\left(1+|p_1+p_2|\\right)^{4\\Re s-2}\\left(1+|p_1-p_2|\\right)^{4\\Re s-2}\\left(|s|+|\\nu_1|+|\\nu_2|\\right)^{8\\Re s}.\n\\end{align*}\nTurning back to \\eqref{defW}, the singularity of the integrand at $s = 1\/2$ is removable, so we can shift the contour to $\\Re s = A\/2$. The bound \\eqref{W1} follows upon noting that\n\\begin{itemize}\n\\setlength\\itemsep{3pt}\n\\item $\\widehat{w}(s) \\ll_{C, \\Re s} (1 + |s|)^{-C}$ for all $C > 0$ and $s\\in\\CC$;\n\\item $\\zeta_{\\QQ(i)}(2s) \\ll (1 + |s|)^2$ for $\\Re s > 0$ and $|2s-1|>1$.\n\\end{itemize}\n\nFinally, to show \\ref{vor-c}, we start from the following identity, a consequence of \\eqref{RSdef}:\n\\[\\frac{1}{16}\\sum_{m,n\\in\\ZZ[i]\\setminus\\{0\\}}w\\left(\\frac{|m|^4|n|^2}{P^2}\\right)\\lambda_n(V_1)\\lambda_n(V_2)\n= \\frac{1}{2\\pi i}\\int_{(2)} L(s, V_1 \\times V_2) \\widehat{w}(s) P^{2s} \\,\\dd s.\\]\nWe shift the contour to $\\Re s = -1$; the contribution of the possible poles (on the line $\\Re s = 1$) is recorded on the right-hand side of \\eqref{vor-formula}. 
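The half-integer reflection identity for the Gamma factors used earlier in this proof is classical; as an illustrative numerical spot check (at a few real arguments away from the poles, chosen arbitrarily), it can be tested with the standard library:

```python
# Numerical spot check (illustrative only) of the half-integer reflection identity
#   Gamma(z+r)/Gamma(1-z+r) = (-1)^{2r} * Gamma(z-r)/Gamma(1-z-r),  r in (1/2)Z,
# which follows from the reflection formula Gamma(w)Gamma(1-w) = pi/sin(pi w).
from math import gamma

def lhs(z, r):
    return gamma(z + r) / gamma(1 - z + r)

def rhs(z, r):
    return (-1) ** int(2 * r) * gamma(z - r) / gamma(1 - z - r)

# test points chosen so that no Gamma argument is a non-positive integer
for z in (0.3, 0.41, 1.27):
    for r in (0.5, 1.0, 1.5, 2.0):
        assert abs(lhs(z, r) - rhs(z, r)) < 1e-9 * (1 + abs(lhs(z, r)))
```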
In the remaining integral we apply the functional equation \\eqref{func-eq} and change variables $s \\mapsto 1-s$, obtaining\n$$\\frac{1}{2\\pi i}\\int_{(2)} L(s, V_1 \\times V_2) P^{2-4s} \\frac{16^s\\Gamma(s,\\vec{\\nu},\\vec{p})}{16^{1-s}\\Gamma(1-s,\\vec{\\nu},\\vec{p})}\\widehat{w}(1-s) P^{2s}\\,\\dd s.$$\nMoving this term to the other side, we obtain the desired formula \\eqref{vor-formula}, first for $(\\nu_1, p_1) \\neq \\pm (\\nu_2, p_2)$, and then by analytic continuation everywhere.\nThis completes the proof of \\ref{vor-c}.\n\\end{proof}\n\n\\section{Pre-trace formula and amplification}\n\n\\subsection{Amplified pre-trace formula}\\label{sec31}\nIn this subsection, we prove an amplified pre-trace formula based on the theory of Eisenstein series and the spectral decomposition of $L^2(\\Gamma\\backslash G)$ (see \\S\\ref{Eisenstein-subsec}). This is a familiar identity between spectral and geometric data, and its full force will be needed in the proof of Theorem~\\ref{thm3}\\ref{thm3-b}; in fact, we shall use an even more general version, a double pre-trace formula in two variables (see \\S\\ref{sec:double-pre-trace-formula}). In many situations, however, all the spectral terms are nonnegative, in which case the pre-trace formula is simply used as an inequality. This is the case for the proof of Theorems~\\ref{thm1},~\\ref{thm2}~and~\\ref{thm3}\\ref{thm3-a}. An amplified pre-trace inequality can be derived with less heavy machinery. In the next subsection, as a point of methodology, we derive these inequalities more directly, in a special case that is sufficient for us.\n\nLet $A$ be a bounded operator on $L^2(\\Gamma\\backslash G)$ preserving the subspace $C_0^{\\infty}(\\Gamma\\backslash G)$ of smooth functions with all rapidly decreasing derivatives. 
Assume that for the basis forms $\\phi_{\\ell,q}^{V}$, indexed as in \\eqref{compactform} by $V$ occurring in $L^2(\\Gamma\\backslash G)$ (cuspidal or Eisenstein) and $\\ell,q\\in\\ZZ$ satisfying $\\ell\\geq\\max(|p_V|,|q|)$, there are constants $c_{\\ell,q}^{V}(A)\\in\\CC$ such that\n\\begin{equation}\\label{newconstants}\n\\langle A\\psi,\\phi_{\\ell,q}^{V}\\rangle=c_{\\ell,q}^{V}(A)\\langle \\psi,\\phi_{\\ell,q}^{V}\\rangle,\\qquad\n\\psi\\in C^{\\infty}_0(\\Gamma\\backslash G).\n\\end{equation}\nThen \\eqref{eq:spectral-decomposition-plancherel} yields, for every $\\psi\\in C^{\\infty}_0(\\Gamma\\backslash G)$,\n\\begin{equation}\\label{Apsi-psi}\n\\langle A\\psi,\\psi\\rangle=\\frac{\\langle A\\psi,1\\rangle\\langle 1,\\psi\\rangle}{\\vol(\\Gamma\\backslash G)}+\\sum_{\\ell\\geq 0}\\int_{[\\ell]}\\sum_{|q|\\leq\\ell}c_{\\ell,q}^{V}(A)|\\langle\\psi,\\phi_{\\ell,q}^{V}\\rangle|^2\\,\\dd V.\n\\end{equation}\n\nFor $f\\in C_0(G)$ a rapidly decaying continuous function on $G$, and $\\psi\\in L^2(\\Gamma\\backslash G)$, we may consider the function $R(f)\\psi\\in L^2(\\Gamma\\backslash G)$ defined by\n\\begin{align*}\n(R(f)\\psi)(g)&:=\\int_Gf(h)\\psi(gh)\\,\\dd h=\\int_Gf(g^{-1}h)\\psi(h)\\,\\dd h\\\\\n&=\\int_{\\Gamma\\backslash G}k_f(g,h)\\psi(h)\\,\\dd h,\\qquad k_f(g,h):=\\sum_{\\gamma\\in\\Gamma}f(g^{-1}\\gamma h).\n\\end{align*}\nThus $R(f)$ is a bounded integral operator on $L^2(\\Gamma\\backslash G)$ with kernel $k_f$. 
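The unfolding of the convolution into an integral of the automorphic kernel $k_f$ against $\psi$ is a purely formal rearrangement of the sum over $\Gamma$-cosets. A toy sketch in a finite abelian group, where all specific choices (the group $\ZZ/12\ZZ$, the "lattice" $\{0,4,8\}$, and the test functions) are hypothetical and purely for illustration:

```python
# Toy finite-group analogue of the unfolding step: for G = Z/12Z (additive),
# Gamma = {0, 4, 8}, and a Gamma-periodic psi, the convolution over all of G
# equals the sum of the automorphic kernel k_f(g,h) = sum_{gamma} f(-g+gamma+h)
# against psi over a set of coset representatives of Gamma\G.
import math

G = list(range(12))
Gamma = [0, 4, 8]
reps = [0, 1, 2, 3]                                       # representatives of Gamma\G

f = [math.sin(0.5 * x) * math.exp(-0.1 * x) for x in G]   # arbitrary test function
psi = [math.cos(2 * math.pi * x / 4) for x in G]          # Gamma-periodic: psi(x+4) = psi(x)

def R_f_psi(g):
    """(R(f)psi)(g) = sum_h f(-g+h) psi(h), summed over all of G."""
    return sum(f[(h - g) % 12] * psi[h] for h in G)

def unfolded(g):
    """The same quantity via the kernel k_f(g,h), summed over Gamma\\G only."""
    return sum(sum(f[(-g + gam + h) % 12] for gam in Gamma) * psi[h] for h in reps)

for g in G:
    assert abs(R_f_psi(g) - unfolded(g)) < 1e-12
```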
It is clear that $R(f)$ preserves $C_0^{\\infty}(\\Gamma\\backslash G)$, and its adjoint equals $R(f)^\\ast=R(f^\\ast)$ with\n\\[f^\\ast(g):=\\ov{f(g^{-1})},\\qquad g\\in G.\\]\nFurther, for a finitely supported sequence of complex coefficients $x=(x_n)_{n\\in\\ZZ[i]\\setminus\\{0\\}}$, let $R_{\\fin}(x)$ be the operator on $L^2(\\Gamma\\backslash G)$ given by\n\\begin{equation}\\label{eq:def-amplifier}\nR_{\\fin}(x):=\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}x_nT_n.\n\\end{equation}\nThe adjoint of this operator equals $R_{\\fin}(x)^\\ast=R_{\\fin}(\\ov{x})$.\n\nLet us now fix an integer $\\ell\\geq 1$. Let $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ be such that $f=f^\\ast$, and let $x=(x_n)$ be as above such that $x=\\ov{x}$. Further, let $V$ be a non-identity (cuspidal or Eisenstein) automorphic representation of arbitrary type $(\\nu_V,p_V)$ occurring in $L^2(\\Gamma\\backslash G)$, and let $\\ell',q\\in\\ZZ$ be such that $\\ell'\\geq\\max(|p_V|,|q|)$. For $V$ cuspidal, \\eqref{eq:opfromtr} and \\eqref{eq:trpif} show that\n\\begin{equation}\\label{joint-eigenfunctions}\n\\begin{aligned}\nR(f)\\phi_{\\ell',q}^{V}&=\\delta_{\\ell'=\\ell}\\frac{\\widehat{f}(V)}{2\\ell+1}\\phi_{\\ell',q}^{V}, & & & & & \\widehat{f}(V)&:=\\widehat{f}(\\nu_V,p_V);\\\\\nR_{\\fin}(x)\\phi_{\\ell',q}^{V}&=\\widehat{x}(V)\\phi_{\\ell',q}^{V}, & & & & & \\widehat{x}(V)&:=\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}x_n\\lambda_n(V).\n\\end{aligned}\n\\end{equation}\nFor $V$ Eisenstein, these equations are still valid with the obvious extension of $R(f)$ and $R_{\\fin}(x)$ to functions in $C^\\infty(\\Gamma\\backslash G)$ of moderate growth, as follows from \\eqref{Eisenstein-Hecke-eigen} and the discussion between \\eqref{eq:spectral-decomposition} and \\eqref{eq:spectral-decomposition-EGM}. 
Therefore, using the self-adjointness of $R(f)$ and $R_{\\fin}(x)$ in the usual way, we obtain that $A:=R(f)R_{\\fin}(x)$ satisfies \\eqref{newconstants} with\n\\[ c_{\\ell',q}^{V}(A)=\\delta_{\\ell'=\\ell}\\frac{\\widehat{f}(V)\\widehat{x}(V)}{2\\ell+1}.\\]\nHence \\eqref{Apsi-psi} holds with these coefficients and $\\ell$-summation replaced by $\\ell'$-summation. We note that the coefficients decay rapidly in $\\nu$ by Theorem~\\ref{thm:pws}. Moreover, $A(1)=R(f)(1)$ vanishes by $f=f\\star\\overline{\\chi_{\\ell}}$ and the orthogonality of characters (recalling that $\\ell\\geq 1$).\n\nApplying \\eqref{Apsi-psi} and recalling our observation below \\eqref{Hecke-def} about $n\\gamma^{-1}$ as $\\gamma\\in\\Gamma\\backslash\\Gamma_n$, we obtain for every $\\psi\\in C_0^{\\infty}(\\Gamma\\backslash G)$ that\n\\begin{align*}\n\\int_{[\\ell]}\\sum_{|q|\\leq\\ell}c_{\\ell,q}^{V}(A)|\\langle\\psi,\\phi_{\\ell,q}^{V}\\rangle|^2\\,\\dd V\n&=\\iint_{(\\Gamma\\backslash G)^2}k_f(g,h) \\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}x_n T_n\\psi(h)\\overline{\\psi(g)}\\,\\dd g\\dd h\\\\\n&=\\iint_{(\\Gamma\\backslash G)^2}\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{x_n}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f(g^{-1}\\tilde{\\gamma}h)\\overline{\\psi(g)}\\psi(h)\\,\\dd g\\dd h,\n\\end{align*}\nwhere $\\tilde\\gamma$ abbreviates $\\gamma\/\\sqrt{\\det\\gamma}$. 
Letting $\\psi$ range through smooth, nonnegative, $L^{1}$-normalized functions supported in increasingly small open neighborhoods of a fixed point $\\Gamma g\\in\\Gamma\\backslash G$, and taking limits using the rapid decay of $c_{\\ell,q}^{V}(A)$, we obtain the desired \\emph{amplified pre-trace formula}\n\\begin{equation}\\label{APTF}\n\\int_{[\\ell]}\\frac{\\widehat{f}(V)\\widehat{x}(V)}{2\\ell+1}\\sum_{|q|\\leq\\ell}|\\phi_{\\ell,q}^{V}(g)|^2\\,\\dd V=\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{x_n}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f(g^{-1}\\tilde{\\gamma}g).\n\\end{equation}\n\nThe pre-trace formula \\eqref{APTF} isolates forms $\\phi_{\\ell,q}^{V}$ with a specific value of $\\ell$ (thus, forms in the \nchosen constituent $V^{\\ell}$ in the decomposition \\eqref{decomp} for various $V$'s), a starting point for a proof of Theorem~\\ref{thm1}. To further isolate eigenforms in the specific constituent $V^{\\ell,q}$ (for a fixed $|q|\\leqslant\\ell$), starting from our earlier $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ satisfying $f=f^\\ast$, we define a smooth function $f_q\\in C_0(G)$ by\n\\begin{equation}\\label{fqg-average-2}\nf_q(g):=\\frac1{2\\pi}\\int_0^{2\\pi}f\\big(g\\diag(e^{i\\varrho},e^{-i\\varrho})\\big)\\,e^{2qi\\varrho}\\,\\dd \\varrho =\\frac1{2\\pi}\\int_0^{2\\pi}f\\big(\\diag(e^{i\\varrho},e^{-i\\varrho})g\\big)\\,e^{2qi\\varrho}\\,\\dd \\varrho.\n\\end{equation}\nWe note that $f_q=f_q^\\ast$, but $f_q$ need not lie in $\\mcH(\\tau_{\\ell})_{\\infty}$. By the orthogonality of characters on $\\RR\/\\ZZ$, we have\n\\begin{equation}\\label{projectq}\nR(f_q)=R(f)\\Pi_q=\\Pi_q R(f),\n\\end{equation}\nwhere $\\Pi_q$ is the projection onto the closed subspace consisting of $\\psi\\in L^2(\\Gamma \\backslash G)$ such that $\\psi(g\\diag(e^{i\\varrho},e^{-i\\varrho}))=e^{2qi\\varrho}\\psi(g)$. In particular, $R(f_q)$ is a bounded, self-adjoint operator, which preserves $C_0^{\\infty}(\\Gamma\\backslash G)$. 
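To spell out the orthogonality-of-characters computation behind \\eqref{projectq} (a quick verification, using the convention $R(f)\\psi(g)=\\int_G f(h)\\psi(gh)\\,\\dd h$): write $d(\\varrho):=\\diag(e^{i\\varrho},e^{-i\\varrho})$, and note that $L^2(\\Gamma\\backslash G)$ is the orthogonal sum of the images of the projections $\\Pi_{q'}$, $q'\\in\\ZZ$, by Fourier expansion along $\\varrho$. For $\\psi$ in the image of $\\Pi_{q'}$, the substitution $h\\mapsto hd(\\varrho)^{-1}$ yields\n\\[R(f_q)\\psi=\\frac1{2\\pi}\\int_0^{2\\pi}e^{2(q-q')i\\varrho}\\,\\dd \\varrho\\cdot R(f)\\psi=\\delta_{q'=q}\\,R(f)\\psi,\\]\nin agreement with $R(f_q)=R(f)\\Pi_q=\\Pi_q R(f)$.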
Moreover, by \\eqref{joint-eigenfunctions} and the surrounding discussion,\n\\[R(f_q)\\phi_{\\ell',q'}^{V}= \\delta_{(\\ell',q')=(\\ell,q)}\\frac{\\widehat{f}(V)}{2\\ell+1}\\phi_{\\ell',q'}^{V}\\]\nholds for $V$ cuspidal, and also for $V$ Eisenstein with the obvious extension of $R(f_q)$ to functions in $C^\\infty(\\Gamma\\backslash G)$ of moderate growth. Thus, applying as above \\eqref{Apsi-psi} with $A=R(f_q)R_{\\fin}(x)$, we obtain the following amplified pre-trace formula for individual forms:\n\\begin{equation}\\label{APTF-single-form}\n\\int_{[\\ell]}\\frac{\\widehat{f}(V)\\widehat{x}(V)}{2\\ell+1}|\\phi_{\\ell,q}^{V}(g)|^2\\,\\dd V=\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{x_n}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f_q(g^{-1}\\tilde{\\gamma}g).\n\\end{equation}\n\nWe proved \\eqref{APTF} and \\eqref{APTF-single-form} for every $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ and finitely supported $x=(x_n)$ under the assumption that $f=f^\\ast$ and $x=\\ov{x}$. In fact \\eqref{APTF} and \\eqref{APTF-single-form} hold without this assumption, because both sides are $\\CC$-linear in $f$ and $x$. Alternatively, one can modify the above proof to work without the self-adjointness assumption, starting with the analogue of \\eqref{joint-eigenfunctions} for $R(f^\\ast)\\phi_{\\ell',q}^{V}$ and $R_\\fin(\\ov{x})\\phi_{\\ell',q}^{V}$.\n\n\\subsection{Positivity and amplified pre-trace inequality}\\label{sec31b}\nIf the coefficients on the left hand side of \\eqref{APTF} and \\eqref{APTF-single-form} are nonnegative, then by dropping \ncertain terms, we obtain useful inequalities. In this subsection, we derive these inequalities in a streamlined way, drawing inspiration from \\cite[\\S 3]{BlomerHarcosMagaMilicevic2020}.\n\nLet $A$ be a positive operator on $L^2(\\Gamma\\backslash G)$, and let $\\mfB$ be a finite orthonormal system of eigenfunctions $\\phi$ of $A$ with (not necessarily distinct) eigenvalues $(c_{\\phi}(A))_{\\phi\\in\\mfB}$.
Then, $A$ preserves the orthodecomposition\n\\[L^2(\\Gamma\\backslash G)=\\mathrm{Span}(\\mfB)\\oplus\\mathrm{Span}(\\mfB)^{\\perp},\\]\nand for any $\\psi\\in L^2(\\Gamma\\backslash G)$ the corresponding decomposition $\\psi=\\psi_1+\\psi_2$ with\n\\[\\psi_1:=\\sum_{\\phi\\in\\mfB}\\langle\\psi,\\phi\\rangle\\phi\\qquad \\text{and} \\qquad \\psi_2:=\\psi-\\psi_1\\]\ngives\n\\begin{equation}\\label{positivity}\n\\langle A\\psi,\\psi\\rangle=\\langle A\\psi_1,\\psi_1\\rangle+\\langle A\\psi_2,\\psi_2\\rangle\n\\geq\\langle A\\psi_1,\\psi_1\\rangle=\\sum_{\\phi\\in\\mfB}c_{\\phi}(A)|\\langle\\psi,\\phi\\rangle|^2.\n\\end{equation}\n\nWe will apply this positivity argument to the operators $A=R(f) R_{\\fin}(x)$ and $A=R(f_q) R_{\\fin}(x)$, where $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ and $x=(x_n)$ are as in the previous subsection. Positivity is achieved by making the operators $R(f)$ and $R_{\\fin}(x)$ individually positive, because Hecke operators commute with integral operators, and $\\Pi_q$ in \\eqref{projectq} is a positive operator commuting with $R(f)$. For the positivity of $R(f)$, it suffices that\n\\begin{equation}\\label{positivity-assumption}\nf=u\\star u\\qquad\\text{for some $u\\in\\mcH(\\tau_{\\ell})_{\\infty}$ satisfying $u=u^\\ast$}.\n\\end{equation}\nFor the positivity of $R_{\\fin}(x)$, it suffices that\n\\begin{equation}\n\\label{gen-amplifier}\n\\begin{gathered}\nR_{\\fin}(x)=\\Bigl(\\sum_{l\\in P}y_lT_l\\Bigr) \\star \\Bigl(\\sum_{m\\in P}\\ov{y_m}T_m\\Bigr) +\n\\Bigl(\\sum_{l\\in P}z_lT_{l^2}\\Bigr) \\star \\Bigl(\\sum_{m\\in P}\\ov{z_m}T_{m^2}\\Bigr),\\\\[4pt]\nx_n:=\\sum_{\\substack{l,m\\in P\\\\ (d)|(l,m)\\\\ lm\/d^2=n}}y_l\\overline{y_m}+\n\\sum_{\\substack{l,m\\in P\\\\ (d)|(l^2,m^2)\\\\ l^2m^2\/d^2=n}}z_l\\overline{z_m},\n\\end{gathered}\n\\end{equation}\nwhere $(y_l)_{l\\in P}$ and $(z_l)_{l\\in P}$ are arbitrary complex coefficients supported on a finite set\n$P\\subset\\ZZ[i]\\setminus\\{0\\}$.
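Indeed, writing $B:=\\sum_{l\\in P}y_lT_l$ and $C:=\\sum_{l\\in P}z_lT_{l^2}$, the first line of \\eqref{gen-amplifier} states that $R_{\\fin}(x)=BB^\\ast+CC^\\ast$, whence for every $\\psi\\in L^2(\\Gamma\\backslash G)$,\n\\[\\langle R_{\\fin}(x)\\psi,\\psi\\rangle=\\|B^\\ast\\psi\\|_2^2+\\|C^\\ast\\psi\\|_2^2\\geq 0.\\]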
Here we used that each Hecke operator $T_n$ is self-adjoint.\n\nNow, let $V$ be a cuspidal automorphic representation that occurs in $L^2(\\Gamma\\backslash G)$ and contains $\\tau_{\\ell}$-type vectors. Let $\\mfB=\\{ \\phi_q : |q| \\leq \\ell\\}$ be an orthonormal basis of $V^\\ell$, with $\\phi_q\\in V^{\\ell,q}$. As in the previous subsection, we evaluate the left hand side of \\eqref{positivity} geometrically, and then apply a limit in $\\psi$ to both sides. This way we obtain the following \\emph{amplified pre-trace inequalities} in place of \\eqref{APTF} and \\eqref{APTF-single-form}:\n\\begin{equation}\\label{APTI}\n\\frac{\\widehat{f}(V)\\widehat{x}(V)}{2\\ell+1}\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\\leq\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{x_n}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f(g^{-1}\\tilde{\\gamma}g)\n\\end{equation}\nand\n\\begin{equation}\\label{APTI-single-form}\n\\frac{\\widehat{f}(V)\\widehat{x}(V)}{2\\ell+1}|\\phi_q(g)|^2\\leq\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{x_n}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f_q(g^{-1}\\tilde{\\gamma}g).\n\\end{equation}\n\n\\subsection{Test functions and amplifier}\nThe main idea of the amplified pre-trace inequality \\eqref{APTI} is that it can provide a good upper bound for $\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2$ as long as the test function $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ and the amplifier $x=(x_n)$ in \\S\\ref{sec31b} are chosen so that $\\widehat{f}(V)$ and $\\widehat{x}(V)$ are sizeable while the right-hand side is not too large. In this subsection, we make these choices.\n\nAs in Theorems~\\ref{thm1},~\\ref{thm2}~and~\\ref{thm3}, let $\\ell\\geq 1$ be an integer, $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact sets. Let $V\\subset L^2(\\Gamma\\backslash G)$ be a cuspidal automorphic representation with minimal $K$-type $\\tau_{\\ell}$ and spectral parameter $\\nu_V\\in I$. 
Let us introduce the spectral weights\n\\begin{equation}\\label{eq:def-gaussian-spectral-weight}\nh(\\nu,p):=\\begin{cases} e^{(p^2-\\ell^2+\\nu^2)\/2},\\qquad &\\text{$\\nu\\in\\CC$, \\quad $p\\in \\frac1{2}\\ZZ$, \\quad $|p|\\leq\\ell$,}\\\\ 0,&\\text{$\\nu\\in\\CC$, \\quad $p\\in \\frac1{2}\\ZZ$, \\quad $|p|>\\ell$.}\\end{cases}\n\\end{equation}\nAccording to Theorems~\\ref{thm:tr-plancherel}~and~\\ref{thm:pws}, the inverse $\\tau_\\ell$-spherical transform $f:=\\widecheck{h}$ given by \\eqref{eq:inverse-tauell-transform} belongs to $\\mcH(\\tau_{\\ell})_{\\infty}$, and it satisfies $\\widehat{f}=h$. Moreover, if we set $u:=\\widecheck{v}$ with\n\\[v(\\nu,p):=\\begin{cases} (2\\ell+1)^{1\/2}e^{(p^2-\\ell^2+\\nu^2)\/4},\\qquad &\\text{$\\nu\\in\\CC$, \\quad $p\\in \\frac1{2}\\ZZ$, \\quad $|p|\\leq\\ell$,}\\\\ 0,&\\text{$\\nu\\in\\CC$, \\quad $p\\in \\frac1{2}\\ZZ$, \\quad $|p|>\\ell$,}\\end{cases}\\]\nthen $u\\in\\mcH(\\tau_{\\ell})_{\\infty}$, $u=u^\\ast$ by \\eqref{eq:spherical-function-symmetry} and \\eqref{eq:inverse-tauell-transform}, and $\\widehat{f}=\\widehat{u}^2\/(2\\ell+1)=\\widehat{u\\star u}$. This shows that \\eqref{positivity-assumption} is satisfied. 
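Explicitly, \\eqref{positivity-assumption} forces $R(f)$ to be positive: since $u=u^\\ast$, the operator $R(u)$ is self-adjoint, and $R(u\\star u)=R(u)R(u)$, so that\n\\[\\langle R(f)\\psi,\\psi\\rangle=\\langle R(u)\\psi,R(u)\\psi\\rangle=\\|R(u)\\psi\\|_2^2\\geq 0,\\qquad\\psi\\in L^2(\\Gamma\\backslash G).\\]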
Hence $R(f)$ is the kind of positive operator considered in \\S\\ref{sec31b}, and\nby \\eqref{joint-eigenfunctions} we have\n\\begin{equation}\\label{lower-bd-arch}\n\\widehat{f}(V)=h(\\nu_V,\\ell)\\gg_I 1.\n\\end{equation}\nWith the notation \\eqref{eq:def-tilde-f-s-p}, we have\n\\[\n\\widetilde{f}(s,p)=\\sqrt{2\\pi}(p^2+1-s^2)e^{(p^2-\\ell^2-s^2)\/2},\n\\]\nwhence by \\eqref{eq:inverse-spherical-transform-estimate}, \\eqref{eq:inverse-tauell-transform}, and the trivial bound\n$\\bigl|\\varphi_{\\nu,p}^{\\ell}(g^{-1})\\bigr|\\leq 2\\ell+1$, we have\n\\begin{equation}\\label{better-be-bounded}\nf(g) \\ll \\ell^2e^{-\\log^2\\|g\\|}.\n\\end{equation}\nWe shall also use the following supplement, a consequence of \\eqref{eq:spherical-function-symmetry}\nand \\eqref{eq:inverse-tauell-transform}:\n\\begin{equation}\\label{in-the-bulk}\nf(g)\\ll \\ell\\sup_{\\nu\\in i\\RR}\\,\\bigl|\\varphi_{\\nu,\\ell}^{\\ell}(g)\\bigr|+\\ell^{-50}.\n\\end{equation}\n\nWe now choose our amplifier, which we do as in \\cite[\\S5]{BlomerHarcosMilicevic2016}. 
Let $L\\geq 7$ be a parameter, to be chosen at the very end of the proof of Theorems~\\ref{thm1},~\\ref{thm2}~and~\\ref{thm3}, and set\n\\begin{gather*}\nP(L):=\\left\\{\\text{$l\\in\\ZZ[i]$ prime : $0<\\arg(l)<\\tfrac{\\pi}4$ and $L\\leq |l|^2\\leq 2L$}\\right\\};\\\\\ny_l:=\\sgn(\\lambda_{l}(V)),\\qquad z_l:=\\sgn(\\lambda_{l^2}(V)),\\qquad l\\in P(L).\n\\end{gather*}\nIt follows from the thesis of Erd\\H os~\\cite{Erdos} that $P(L)\\neq\\emptyset$, while\nin \\eqref{eq:def-amplifier} and \\eqref{gen-amplifier} we have\n\\begin{equation}\n\\label{ampl-expand}\nx_n=\\begin{cases}\n\\sum\\nolimits_{l\\in P(L)}(y_l^2+z_l^2)\\ll L\/\\log L,&n=1;\\\\\n(1+\\delta_{l_1\\neq l_2})y_{l_1}y_{l_2}+\\delta_{l_1=l_2}z_{l_1}z_{l_2}\\ll 1,&\\text{$n=l_1l_2$ for some $l_1,l_2\\in P(L)$};\\\\\n(1+\\delta_{l_1\\neq l_2})z_{l_1}z_{l_2}\\ll 1,&\n\\text{$n=l_1^2l_2^2$ for some $l_1,l_2\\in P(L)$};\\\\\n0,&\\text{otherwise}.\n\\end{cases}\n\\end{equation}\nThis formula is the analogue of \\cite[(9.16)]{BlomerHarcosMagaMilicevic2020}, except that the factors $1+\\delta_{l_1\\neq l_2}$ were inadvertently omitted there.\nIn particular, by the inequality $|\\lambda_{l}(V)|+|\\lambda_{l^2}(V)|>1\/2$ that follows from \\eqref{hecke-mult}, we have\n\\begin{equation}\n\\label{lower-bd-non-arch}\n\\widehat{x}(V)=\\Bigl(\\sum_{l\\in P(L)}|\\lambda_{l}(V)|\\Bigr)^2+\n\\Bigl(\\sum_{l\\in P(L)}|\\lambda_{l^2}(V)|\\Bigr)^2\\gg \\frac{L^2}{\\log^2L}.\n\\end{equation}\nIndeed, summing $|\\lambda_{l}(V)|+|\\lambda_{l^2}(V)|>1\/2$ over $l\\in P(L)$ shows that at least one of the two sums is at least $\\#P(L)\/4\\gg L\/\\log L$.\n\nLet $\\mfB$ be an orthonormal basis of $V^\\ell$. Entering the lower bounds \\eqref{lower-bd-arch} and \\eqref{lower-bd-non-arch} into the amplified pre-trace inequality \\eqref{APTI}, we obtain\n\\begin{equation}\\label{pre-APTI-done}\n\\frac{L^{2-\\eps}}{\\ell}\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\\ll_{\\eps,I}\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{|x_n|}{|n|}\\sum_{\\gamma\\in\\Gamma_n}|f(g^{-1}\\tilde\\gamma g)|.\n\\end{equation}\nLet us assume that $g\\in\\Omega$.
A straightforward counting combined with the divisor bound shows that\n\\begin{equation}\\label{straightforward}\n\\#\\left\\{\\gamma\\in\\Gamma_n:\\|g^{-1}\\tilde\\gamma g\\|\\leq R\\right\\}\\ll_{\\eps,\\Omega} R^{4+\\eps}|n|^{2+\\eps},\n\\end{equation}\nso that, splitting into dyadic ranges for $\\|g^{-1}\\tilde\\gamma g\\|$ and using \\eqref{better-be-bounded}, we obtain\n\\[ \\sum_{\\substack{\\gamma\\in\\Gamma_n\\\\\\log\\|g^{-1}\\tilde\\gamma g\\|>8\\sqrt{\\log\\ell}}}\n|f(g^{-1}\\tilde\\gamma g)|\n\\ll_{\\eps,\\Omega} \\ell^{-50}|n|^{2+\\eps}. \\]\nThus from \\eqref{in-the-bulk} and \\eqref{pre-APTI-done} we conclude that\n\\begin{equation}\\label{pre-APTI-done-2}\n\\begin{aligned}\n&\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\\ll_{\\eps,I,\\Omega}L^{-2+\\eps}\\ell^2\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\} \\\\ \\gamma\\in\\Gamma_n\\\\\\log\\|g^{-1}\\tilde\\gamma g\\|\\leq 8\\sqrt{\\log\\ell}}}\\frac{|x_n|}{|n|}\n\\sup_{\\nu\\in i\\RR}|\\varphi_{\\nu,\\ell}^{\\ell}(g^{-1}\\tilde\\gamma g)|+L^{2+\\eps}\\ell^{-48}.\n\\end{aligned}\n\\end{equation}\nThe bound \\eqref{pre-APTI-done-2} explicitly reduces the non-spherical sup-norm problem of estimating $\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2$ via the amplification method to two ingredients:\n\\begin{itemize}\n\\item estimates on $\\varphi_{\\nu,\\ell}^{\\ell}(g^{-1}\\tilde\\gamma g)$ for $g^{-1}\\tilde\\gamma g\\in G$ of moderate size;\n\\item counting $\\gamma\\in\\Gamma_n$ according to the size of $\\varphi_{\\nu,\\ell}^{\\ell}(g^{-1}\\tilde\\gamma g)$.\n\\end{itemize}\n\nWe now also derive a version of \\eqref{pre-APTI-done-2} adapted to estimating a single form $|\\phi_q(g)|^2$ for some $|q|\\leq\\ell$. 
With the specific $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ provided by \\eqref{eq:inverse-tauell-transform} and \\eqref{eq:def-gaussian-spectral-weight}, we obtain by averaging as in \\eqref{fqg-average-2} the test function\n\\[ f_q(g):=\\frac1{(2\\ell+1)\\pi^2}\\sum_{|p|\\leq\\ell}\\int_{0}^{\\infty}e^{(p^2-\\ell^2-t^2)\/2}\\,\\varphi_{it,p}^{\\ell,q}(g^{-1})\\,(t^2+p^2)\\,\\dd t, \\]\nwhere\n\\[\\varphi_{\\nu,p}^{\\ell,q}(g):=\\frac1{2\\pi}\\int_0^{2\\pi}\n\\varphi_{\\nu,p}^{\\ell}\\bigl(g\\diag(e^{i\\varrho},e^{-i\\varrho})\\bigr)\\,e^{-2qi\\varrho}\\,\\dd \\varrho.\\]\nIn particular, this definition generalizes \\eqref{spherical-averaged}, and\nby \\eqref{eq:spherical-function-symmetry} we have the symmetry\n\\begin{equation}\\label{eq:averaged-spherical-function-symmetry}\n\\varphi_{\\nu,p}^{\\ell,-q}(g)=\\ov{\\varphi_{-\\ov{\\nu},p}^{\\ell,q}(g)}=\\varphi_{\\nu,p}^{\\ell,q}(g^{-1}).\n\\end{equation}\nThe analogues of \\eqref{better-be-bounded}--\\eqref{in-the-bulk} clearly hold for the $\\RR\/\\ZZ$-average $f_q$, hence by \\eqref{APTI-single-form} the following analogue of \\eqref{pre-APTI-done-2} holds as well:\n\\begin{equation}\n\\label{pre-APTI-done-2-single-form}\n\\begin{aligned}\n&|\\phi_q(g)|^2\\ll_{\\eps,I,\\Omega} L^{-2+\\eps}\\ell^2\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\} \\\\ \\gamma\\in\\Gamma_n\\\\\\log\\|g^{-1}\\tilde\\gamma g\\|\\leq 8\\sqrt{\\log\\ell}}}\\frac{|x_n|}{|n|}\n\\sup_{\\nu\\in i\\RR}|\\varphi_{\\nu,\\ell}^{\\ell,q}(g^{-1}\\tilde\\gamma g)|+L^{2+\\eps}\\ell^{-48}.\n\\end{aligned}\n\\end{equation}\n\n\\subsection{A double pre-trace formula and a fourth moment}\\label{sec:double-pre-trace-formula}\nLet us fix two integers $\\ell,q\\in\\ZZ$ with $\\ell\\geq\\max(1,|q|)$. Let $n\\in\\ZZ[i]\\setminus\\{0\\}$ and $g\\in G$. 
By \\eqref{APTF-single-form} and the remarks below it, for any $f\\in\\mcH(\\tau_{\\ell})_{\\infty}$ we have\n\\[\\int_{[\\ell]}\\widehat{f}(V)\\lambda_n(V)|\\phi_{\\ell,q}^{V}(g)|^2\\,\\dd V=\\frac{2\\ell+1}{|n|}\\sum_{\\gamma\\in\\Gamma_n}f_q(g^{-1}\\tilde{\\gamma}g).\\]\nIt is straightforward to adapt the proof of this formula to yield the following two-variable version.\nLet $n_1,n_2\\in\\ZZ[i]\\setminus\\{0\\}$ and $g_1,g_2\\in G$. Then for any $f\\in\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$ we have\n\\begin{equation}\\label{two-var}\n\\begin{split}\n&\\int_{[\\ell]}\\int_{[\\ell]} \\widehat{f}(V_1, V_2)\\lambda_{n_1}(V_1)\\lambda_{n_2}(V_2) |\\phi^{V_1}_{\\ell,q}(g_1)|^2|\\phi^{V_2}_{\\ell,q}(g_2)|^2 \\,\\dd V_1 \\,\\dd V_2\\\\\n&=\\frac{(2\\ell+1)^2}{|n_1n_2|}\\sum_{\\gamma_1 \\in \\Gamma_{n_1}}\\sum_{\\gamma_2 \\in \\Gamma_{n_2}}\nf_q(g_1^{-1}\\tilde{\\gamma}_1 g_1, g_2^{-1}\\tilde{\\gamma}_2 g_2),\n\\end{split}\n\\end{equation}\nwhere $\\widehat{f}(V_1, V_2)$ is given by \\eqref{doubletransform} when $V_j$ is of type $(\\nu_j,p_j)\\in i\\RR\\times\\ZZ$, and\n\\[f_{q}(g_1, g_2):=\\frac1{(2\\pi)^2}\\int_0^{2\\pi}\\int_0^{2\\pi} f\\big(g_1\\diag(e^{i\\varrho_1},e^{-i\\varrho_1}), g_2\\diag(e^{i\\varrho_2},e^{-i\\varrho_2})\\big) \\,e^{2q i(\\varrho_1 + \\varrho_2)}\\,\\dd \\varrho_1 \\,\\dd \\varrho_2.\\]\nIn \\eqref{two-var}, we can restrict to pairs $(V_1,V_2)$ satisfying $\\lambda_i(V_j)=1$ by introducing an averaging over $\\{n_1,in_1\\}\\times\\{n_2,in_2\\}$:\n\\begin{equation}\\label{two-var-PGL}\n\\begin{split}\n&\\int_{[\\ell]'}\\int_{[\\ell]'} \\widehat{f}(V_1, V_2)\\lambda_{n_1}(V_1)\\lambda_{n_2}(V_2) |\\phi^{V_1}_{\\ell,q}(g_1)|^2|\\phi^{V_2}_{\\ell,q}(g_2)|^2 \\,\\dd V_1 \\,\\dd V_2\\\\\n&=\\frac{(2\\ell+1)^2}{4|n_1n_2|}\\sum_{\\gamma_1\\in\\Gamma_{n_1}\\cup\\Gamma_{in_1}}\\sum_{\\gamma_2\\in\\Gamma_{n_2}\\cup\\Gamma_{in_2}}\nf_q(g_1^{-1}\\tilde{\\gamma}_1 g_1, g_2^{-1}\\tilde{\\gamma}_2 g_2).\n\\end{split}\n\\end{equation}\nThe 
prime symbol in $[\\ell]'$ indicates that we sum-integrate over automorphic representations with a lift to $\\PGL_2(\\ZZ[i])\\backslash\\PGL_2(\\CC)$, so that the results of \\S\\ref{RSSection} and \\S\\ref{sec28} are applicable.\n\nNow we consider, for any $n\\in\\ZZ[i]\\setminus\\{0\\}$, the spectral weights\n\\[H(V_1,V_2;n):=h(\\nu_1,p_1)h(\\nu_2,p_2)W_\\ell\\left(\\frac{|n|}{\\ell},\\vec{\\nu},\\vec{p}\\right),\\]\nwhere $h$ is as in \\eqref{eq:def-gaussian-spectral-weight} and $W_\\ell$ is as in Lemma~\\ref{lemma-vor}. Combining \nthe Hilbert space isomorphism\n\\[\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})\n\\longleftrightarrow L^2(\\Gtemp(\\tau_{\\ell})\\times \\Gtemp(\\tau_{\\ell}))\\]\ninduced by Theorem~\\ref{thm:tr-plancherel} with Theorem~\\ref{thm:pws2} and parts \\ref{vor-a}--\\ref{vor-b} of Lemma~\\ref{lemma-vor}, we see that the function $(g_1,g_2)\\mapsto\\widecheck{H}(g_1,g_2;n)$ given by \\eqref{inversedoubletransform} belongs to $\\mcH(\\tau_{\\ell}) \\hat{\\otimes} \\mcH(\\tau_{\\ell})$, and its double $\\tau_\\ell$-spherical transform equals $H(V_1,V_2;n)$. 
Therefore, applying \\eqref{two-var-PGL} for $n_1=n_2=n$ and $g_1=g_2=g$, and then summing up over $n$, we arrive at\n\\begin{equation}\\label{eval}\n\\begin{split}\n&\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\int_{[\\ell]'}\\int_{[\\ell]'}\nH(V_1,V_2;n)\\lambda_n(V_1)\\lambda_n(V_2)|\\phi^{V_1}_{\\ell,q}(g)|^2|\\phi^{V_2}_{\\ell,q}(g)|^2\\,\\dd V_1\\,\\dd V_2\\\\\n&=\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{(2\\ell+1)^2}{4|n|^2}\\sum_{\\gamma_1,\\gamma_2\\in\\Gamma_n\\cup\\Gamma_{in}}\n\\widecheck{H}_q(g^{-1}\\tilde{\\gamma}_1 g, g^{-1}\\tilde{\\gamma}_2 g;n).\n\\end{split}\n\\end{equation}\nBy Lemma~\\ref{lemma-vor}\\ref{vor-c}, the left-hand side of \\eqref{eval} equals\n\\[\\frac{\\pi}{4}\\ell^2\\sum_{\\substack{\\text{$V$ cuspidal} \\\\T_i(V)=1,\\ |p_V|\\leq \\ell}}\nh(\\nu_V, p_V)^2 \\,L(1,\\mathrm{ad^2} V)\\,|\\phi^{V}_{\\ell,q}(g)|^4\\ +\\ \\text{Eis},\\]\nwhere the term $\\text{Eis}$ is the contribution of Eisenstein representations:\n\\begin{equation}\\label{eval-cusp}\n\\begin{split}\n\\text{Eis}=\\ell^2\\sum_{\\epsilon,\\eta\\in\\{\\pm 1\\}}\\sum_{\\substack{p\\in 4\\ZZ\\\\|p|\\leq\\ell}}\\int_{(0)}\\int_{(0)}\n&\\ell^{\\eta(\\nu_1-\\epsilon \\nu_2)}\\,h(\\nu_1,\\epsilon p) h(\\nu_2,p)\\,\\mcL_{\\eta}((\\nu_1,\\epsilon p),(\\nu_2,p))\\\\\n&|\\phi^{E(\\nu_1,\\epsilon p)}_{\\ell,q}(g)|^2|\\phi^{E(\\nu_2,p)}_{\\ell,q}(g)|^2\n\\,\\frac{\\dd\\nu_1}{\\pi i}\\,\\frac{\\dd\\nu_2}{\\pi i}.\n\\end{split}\n\\end{equation}\nWe make a change of variable $(\\nu_1,\\nu_2,p)\\mapsto(\\eta\\nu_1,\\eta\\epsilon\\nu_2,\\eta\\epsilon p)$. By invariance, we can replace the resulting pairs $(\\eta\\nu_1,\\eta p)$ and $(\\eta\\epsilon\\nu_2,\\eta\\epsilon p)$ by $(\\nu_1,p)$ and $(\\nu_2,p)$, respectively. 
In this way we see that\n\\[\\begin{split}\n\\text{Eis}=4\\ell^2\\sum_{\\substack{p\\in 4\\ZZ\\\\|p|\\leq\\ell}}\\int_{(0)}\\int_{(0)}\n&\\ell^{\\nu_1-\\nu_2}\\,h(\\nu_1,p) h(\\nu_2,p)\\,\\mcL_{\\eta}((\\nu_1,p),(\\nu_2,p))\\\\\n&|\\phi^{E(\\nu_1,p)}_{\\ell,q}(g)|^2|\\phi^{E(\\nu_2,p)}_{\\ell,q}(g)|^2\n\\,\\frac{\\dd\\nu_1}{\\pi i}\\,\\frac{\\dd\\nu_2}{\\pi i}.\n\\end{split}\\]\nBy Lemma~\\ref{eis-pos}, we conclude that $\\text{Eis}\\geq 0$. In particular, the right-hand side of \\eqref{eval} is real, and it provides an upper bound for the contribution of each cuspidal $V$ in \\eqref{eval-cusp}:\n\\[h(\\nu_V, p_V)^2 \\,L(1,\\mathrm{ad^2} V)\\,|\\phi^{V}_{\\ell,q}(g)|^4\n\\ll\\sum_{n\\in\\ZZ[i]\\setminus\\{0\\}}\\frac{1}{|n|^2}\\sum_{\\gamma_1,\\gamma_2\\in\\Gamma_n\\cup\\Gamma_{in}}\n\\widecheck{H}_q(g^{-1}\\tilde{\\gamma}_1 g, g^{-1}\\tilde{\\gamma}_2 g;n).\\]\nHere we can restrict the $n$-sum to $|n|\\leq\\ell^{1+\\eps}$ at the cost of an error of $\\OO_\\eps(\\ell^{-50})$. Indeed, the contribution of $|n|>\\ell^{1+\\eps}$ on the two sides of \\eqref{eval} are equal, and this contribution is $\\OO_\\eps(\\ell^{-50})$ thanks to the bound $H(V_1, V_2;n)\\ll_A(|n|\/\\ell)^{-A}$ for any $A > 0$ that follows from Lemma~\\ref{lemma-vor}\\ref{vor-b}.\n\nAs before, let $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact subsets. We fix a cuspidal automorphic representation\n$V\\subset L^2(\\Gamma\\backslash G)$ with $\\nu_V\\in I$, $p_V=\\ell$, $\\lambda_i(V)=1$, and we pick a cusp form $\\phi_q\\in V^{\\ell,q}$ with ${\\|\\phi_q\\|}_2=1$. We shall also assume that $g\\in\\Omega$. 
By \\eqref{RS-res} and our findings above,\n\\begin{equation}\\label{almostdone}\n|\\phi_q(g)|^4\\ll_{\\eps,I}\\ell^\\eps\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\\frac{1}{|n|^2}\\sum_{\\gamma_1,\\gamma_2\\in\\Gamma_n\\cup\\Gamma_{in}}\\widecheck{H}_q(g^{-1}\\tilde{\\gamma}_1 g, g^{-1}\\tilde{\\gamma}_2 g;n)+\\ell^{-50}.\n\\end{equation}\nWe estimate $\\widecheck{H}$ (hence also $\\widecheck{H}_q$) in terms of Cartan coordinates using the two-dimensional analogue of \\eqref{eq:inverse-spherical-transform-estimate}:\n\\[\\begin{split}\n\\big|\\widecheck{H}(k_1a_{h_1}k_2,k_3a_{h_2}k_4;n)\\big|\\leq\n& \\sum_{|p_1|,|p_2|\\leq\\ell}\\ \\ \\iint\\limits_{\\substack{s_1>h_1\\\\s_2>h_2}}\\,\n\\Bigg|\\ \\iint\\limits_{\\substack{t_1\\in\\RR\\\\t_2\\in\\RR}}\nW_\\ell\\left(\\frac{|n|}{\\ell},(it_1, it_2),(p_1,p_2)\\right)\\\\\ne^{-\\ell^2+(p_1^2+p_2^2)\/2}\\,& e^{-(t_1^2 + t_2^2)\/2}\\,e^{-it_1s_1-it_2s_2}\\,(t_1^2+p_1^2)(t_2^2+p_2^2)\n\\,\\dd t_1\\dd t_2\\,\\Bigg|\\,\\dd s_1\\dd s_2.\n\\end{split}\\]\nThis estimate holds for $k_j\\in K$ and $h_j>1$. We restrict the sum to $|p_1|=|p_2|=\\ell$ at the cost of an error of $\\OO_\\eps(\\ell^{-80})$. Then we utilize Lemma~\\ref{lemma-vor}\\ref{vor-a} to further restrict the sum to $p_1=p_2=\\ell$ at the cost of extending the outer double integral to $|s_1|,|s_2|>\\ell$. We arrive at\n\\[\\begin{split}\n\\widecheck{H}(k_1a_{h_1}k_2,k_3a_{h_2}k_4;n)\\ll\n&\\iint\\limits_{\\substack{|s_1|>h_1\\\\|s_2|>h_2}}\\,\n\\Bigg|\\ \\iint\\limits_{\\substack{t_1\\in\\RR\\\\t_2\\in\\RR}}\nW_\\ell\\left(\\frac{|n|}{\\ell},(it_1, it_2),(\\ell,\\ell)\\right)\\\\\n& e^{-(t_1^2 + t_2^2)\/2}\\,e^{-it_1s_1-it_2s_2}\\,(t_1^2+\\ell^2)(t_2^2+\\ell^2)\n\\,\\dd t_1\\dd t_2\\,\\Bigg|\\,\\dd s_1\\dd s_2 + \\ell^{-80}.\n\\end{split}\\]\nWe consider the inner double integral,\nand assume without loss of generality $|s_1| \\geq |s_2|$, the other case being analogous. 
Shifting the $t_1$-contour downwards if $s_1 > 0$ and upwards if $s_1 < 0$, we conclude from Lemma~\\ref{lemma-vor}\\ref{vor-b} the bound\n\\[\\ll_{\\eps,B}\n\\ell^4 e^{-B|s_1|} \\left(\\frac{|n|}{\\ell}\\right)^{-B-\\eps} = \n\\ell^4 e^{-B\\max(|s_1|,|s_2|)} \\left(\\frac{|n|}{\\ell}\\right)^{-B-\\eps}\\]\nfor the inner double integral, and so\n\\[\\widecheck{H}(k_1a_{h_1}k_2,k_3a_{h_2}k_4;n) \n\\ll_{\\eps,B} \\ell^4 e^{-B\\max(h_1,h_2)} \\left(\\frac{|n|}{\\ell}\\right)^{-B-\\eps} + \\ell^{-80}\\]\nfor any $\\eps, B > 0$ and $h_1, h_2 > 1$. This estimate remains true for general $h_1, h_2 \\geq 0$, as can be seen by using \\eqref{eq:inverse-tauell-transform} instead of \\eqref{eq:inverse-spherical-transform-estimate} for the respective variable if one or both of $h_1, h_2$ are\nat most $1$. The same bound applies for $\\widecheck{H}_q$, hence in particular\n\\[\\widecheck{H}_q(g_1,g_2;n)\n\\ll_{\\eps,C}\\ell^{4+\\eps}\\left(\\frac{\\sqrt{\\ell\/|n|}}{\\|g_1\\|+\\|g_2\\|}\\right)^C + \\ell^{-80}\\]\nfor any $\\eps, C > 0$ and $g_1,g_2\\in G$. 
So we can refine \\eqref{almostdone} to\n\\[|\\phi_q(g)|^4\\ll_{\\eps,I,\\Omega}\\ell^\\eps\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\n\\sum_{\\substack{\\gamma_1,\\gamma_2\\in\\Gamma_n\\cup\\Gamma_{in}\\\\\\| g^{-1}\\tilde{\\gamma}_j g\\|\\leq\\ell^{\\eps}\\sqrt{\\ell\/|n|}}}\\widecheck{H}_q(g^{-1}\\tilde{\\gamma}_1 g, g^{-1}\\tilde{\\gamma}_2 g;n)+\\ell^{-50}.\\]\n\nIn the last sum, we estimate the terms more directly by \\eqref{inversedoubletransform}, \\eqref{eq:averaged-spherical-function-symmetry}, and Lemma~\\ref{lemma-vor}\\ref{vor-b}:\n\\[\\widecheck{H}_q(g^{-1}\\tilde{\\gamma}_1 g,g^{-1}\\tilde{\\gamma}_2 g;n)\n\\ll_\\eps\\ell^{2+\\eps} F(\\gamma_1)F(\\gamma_2)+\\ell^{-80},\\]\nwhere we abbreviated (suppressing $g$ and $q$ from the notation)\n\\[F(\\gamma):=\\sup_{\\nu\\in i\\RR}|\\varphi_{\\nu,\\ell}^{\\ell,q}(g^{-1}\\tilde{\\gamma}g)|,\\qquad\\gamma\\in\\GL_2(\\CC).\\]\nRecalling also \\eqref{straightforward}, we obtain an inequality of bilinear type:\n\\[|\\phi_q(g)|^4\\ll_{\\eps,I,\\Omega}\\ell^{2+\\eps}\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\n\\sum_{\\substack{\\gamma_1,\\gamma_2\\in\\Gamma_n\\cup\\Gamma_{in}\\\\\\|g^{-1}\\tilde{\\gamma}_j g\\|\\leq\\ell^{\\eps}\\sqrt{\\ell\/|n|}}}F(\\gamma_1)F(\\gamma_2)+\\ell^{-50}.\\]\nIntroducing the notation\n\\[S(n):=\\sum_{\\substack{\\gamma\\in\\Gamma_n\\\\\\| g^{-1}\\tilde{\\gamma} g\\|\\leq\\ell^{\\eps}\\sqrt{\\ell\/|n|}}}F(\\gamma),\\]\nwe arrive at\n\\begin{alignat*}{3}\n|\\phi_q(g)|^4\n&\\ll_{\\eps,I,\\Omega}\\ &&\\ell^{2+\\eps}\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\n\\bigl(S(n)+S(in)\\bigr)^2&&+\\ell^{-50}\\\\\n&\\ll &&\\ell^{2+\\eps}\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\n\\bigl(S(n)^2+S(in)^2\\bigr)&&+\\ell^{-50}.\n\\end{alignat*}\nOf course, the contributions of $S(n)^2$ and $S(in)^2$ are the same. 
In the end, we conclude\n\\begin{equation}\\label{conclude-KS}\n|\\phi_q(g)|^4\\ll_{\\eps,I,\\Omega}\\ell^{2+\\eps}\\sum_{\\substack{n\\in\\ZZ[i]\\setminus\\{0\\}\\\\|n|\\leq\\ell^{1+\\eps}}}\n\\sum_{\\substack{\\gamma_1,\\gamma_2\\in\\Gamma_n\\\\\\|g^{-1}\\tilde{\\gamma}_j g\\|\\leq\\ell^{\\eps}\\sqrt{\\ell\/|n|}}}\n\\prod_{j=1}^2\\sup_{\\nu\\in i\\RR}|\\varphi_{\\nu,\\ell}^{\\ell,q}(g^{-1}\\tilde{\\gamma}_j g)|+\\ell^{-50},\n\\end{equation}\nwhich serves as an analogue of \\eqref{pre-APTI-done-2-single-form}.\n\n\\subsection{Reduction to Diophantine counting}\\label{reduction}\nIn this subsection, we input into the preliminary estimates \\eqref{pre-APTI-done-2}, \\eqref{pre-APTI-done-2-single-form} and \\eqref{conclude-KS} the results of Theorems~\\ref{thm4},~\\ref{thm6}~and~\\ref{thm5}, which provide the desired estimates on spherical trace functions. We shall assume (as we can) that $\\ell$ is sufficiently large in terms of $\\eps$.\n\nWe begin by explicating the estimate \\eqref{pre-APTI-done-2} using \\eqref{ampl-expand} and Theorem~\\ref{thm4}. For $\\mcL\\geq 1$ and $\\vec{\\delta}=(\\delta_1,\\delta_2)\\in\\RR^{2}_{>0}$, let\n\\[ D(L,\\mcL):=\\left\\{n\\in\\ZZ[i]:\n\\text{$\\mcL\\leq |n|^2\\leq 16\\mcL$, $n=1$ or $n=l_1l_2$ or $n=l_1^2l_2^2$ for some $l_1,l_2\\in P(L)$}\\right\\}, \\]\n\\begin{align*}\nM(g,L,\\mcL,\\vec{\\delta}):=\\sum_{n\\in D(L,\\mcL)}\\#\\bigg\\{\\gamma\\in\\Gamma_n:\\ \n&\\text{$g^{-1}\\tilde{\\gamma}g=k\\begin{pmatrix} z&u\\\\&z^{-1}\\end{pmatrix}k^{-1}$ for some $k\\in K$}\\\\\n&\\text{such that $|z|\\geq 1$, $\\min|z\\pm 1|\\leq\\delta_1$, $|u|\\leq\\delta_2$}\\bigg\\}.\n\\end{align*}\n\nTo each $\\gamma$ occurring in \\eqref{pre-APTI-done-2} we may associate a dyadic vector $\\vec{\\delta}=(\\delta_1,\\delta_2)$ (that is, $\\log_{2}\\delta_{j}\\in\\ZZ$) with $1\/\\sqrt{\\ell}\\leq\\delta_j\\leq\\ell^\\eps$, choosing each $\\delta_j$ minimal such that $\\gamma$ is counted in the corresponding $M(g,L,\\mcL,\\vec{\\delta})$.
Therefore, applying \\eqref{ampl-expand} and the estimates of Theorem~\\ref{thm4} in \\eqref{pre-APTI-done-2} leads to the following result.\n\n\\begin{lemma}\\label{APTI-done-lemma}\nLet $\\ell\\geq 1$ be an integer, $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact sets. Let $V\\subset L^2(\\Gamma\\backslash G)$ be a cuspidal automorphic representation with minimal $K$-type $\\tau_{\\ell}$ and spectral parameter $\\nu_V\\in I$. Let $\\mfB$ be an orthonormal basis of $V^\\ell$, and let $g\\in\\Omega$. Then for any $L\\geq 7$ and $\\eps>0$ we have\n\\begin{align*}\n\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\n&\\ll_{\\eps,I,\\Omega}\\ell^{3+\\eps}L^{\\eps}\\sum_{\\substack{\\vec{\\delta}\\textnormal{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta_j\\leq \\ell^{\\eps}}}\\min\\left(\\frac1{\\ell\\delta_1^2},\\frac1{\\sqrt{\\ell}\\delta_2}\\right)\\\\\n&\\qquad\\times\\left(\\frac{M(g,L,1,\\vec{\\delta})}{L}+\\frac{M(g,L,L^2,\\vec{\\delta})}{L^3}+\\frac{M(g,L,L^4,\\vec{\\delta})}{L^4}\\right)+L^{2+\\eps}\\ell^{-48}.\n\\end{align*}\n\\end{lemma}\n\nLemma~\\ref{APTI-done-lemma} is free of any choices of the test function, amplifier, and spherical trace function. It reduces the estimation of $\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2$ to the Diophantine counting problem of estimating $M(g,L,\\mcL,\\vec{\\delta})$ uniformly in $L$, $\\mcL$, and $\\vec{\\delta}$.\n\nNow, we similarly explicate the estimate \\eqref{pre-APTI-done-2-single-form} using \\eqref{ampl-expand} and Theorems~\\ref{thm6}--\\ref{thm5}\\ref{thm5-a}. Recall the sets $\\mcD\\subset G$ and $\\mcS\\subset K\\subset\\mcN\\subset G$ introduced before Theorem~\\ref{thm5}. 
With $D(L,\\mcL)$ as above, we define for $|q|\\leq\\ell$, $\\mcL\\geq 1$, $\\delta>0$, and $\\vec{\\delta}=(\\delta_1,\\delta_2)\\in\\RR_{>0}^{2}$, the matrix counts\n\\begin{align*}\nM^{\\ast}_0(g,L,\\mcL,\\delta)&:=\\sum_{n\\in D(L,\\mcL)}\\#\\left\\{\\gamma\\in\\Gamma_n:\\dist(g^{-1}\\tilde\\gamma g,\\mcS)\\leq\\delta,\\,\\,\\frac{D(g^{-1}\\tilde\\gamma g)}{\\|g^{-1}\\tilde\\gamma g\\|^2}\\ll\\frac{\\log\\ell}{\\sqrt{\\ell}}\\right\\},\\\\\nM^\\ast(g,L,\\mcL,\\vec{\\delta})&:=\\sum_{n\\in D(L,\\mcL)}\\#\\left\\{\\gamma\\in\\Gamma_n:\\dist\\left(g^{-1}\\tilde\\gamma g,K\\right)\\leq\\delta_1,\\,\\,\\dist(g^{-1}\\tilde\\gamma g,\\mcD)\\leq\\delta_2\\right\\},\n\\end{align*}\nwith a sufficiently large implied constant in the definition of $M^{\\ast}_0(g,L,\\mcL,\\delta)$.\n\nFor $q=0$, we estimate the size of $\\varphi_{\\nu,\\ell}^{\\ell,q}(g^{-1}\\tilde\\gamma g)$ in \\eqref{pre-APTI-done-2-single-form} using Theorem~\\ref{thm5}\\ref{thm5-a}. Since there are at most $\\OO_{\\eps,\\Omega}(\\ell^\\eps|n|^{2+\\eps})$ elements $\\gamma\\in\\Gamma_n$ contributing to the right-hand side of \\eqref{pre-APTI-done-2-single-form}, the total contribution of those elements which fail to satisfy $D(g^{-1}\\tilde\\gamma g)\\ll\\|g^{-1}\\tilde\\gamma g\\|^2(\\log\\ell)\/\\sqrt{\\ell}$ with a sufficiently large implied constant may be absorbed into the existing $\\OO_{\\eps,I,\\Omega}(L^{2+\\eps}\\ell^{-48})$ error term. We may thus restrict to $\\gamma\\in\\Gamma_n$ satisfying these conditions. We associate to each remaining $\\gamma$ in \\eqref{pre-APTI-done-2-single-form} the smallest dyadic $1\/\\sqrt{\\ell}\\leq\\delta\\leq\\ell^{\\eps}$ such that $\\gamma$ is counted in the corresponding $M_0^{\\ast}(g,L,\\mcL,\\delta)$. 
For a general $|q|\\leq\\ell$, we associate to each $\\gamma$ in \\eqref{pre-APTI-done-2-single-form} the\nlexicographically smallest dyadic vector $\\vec{\\delta}=(\\delta_1,\\delta_2)$ such that $\\delta_j\\leq\\ell^\\eps$ and $\\delta_1^2\\delta_2\\geq 1\/\\sqrt{\\ell}$ and $\\gamma$ is counted in the corresponding $M_q(g,L,\\mcL,\\vec{\\delta})$. Applying \\eqref{ampl-expand} and the estimates of Theorems~\\ref{thm6}--\\ref{thm5}\\ref{thm5-a} in \\eqref{pre-APTI-done-2-single-form} leads to the following result.\n\n\\begin{lemma}\\label{APTI-done-lemma-single-form}\nLet $\\ell\\geq 1$ be an integer, $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact sets. Let $V\\subset L^2(\\Gamma\\backslash G)$ be a cuspidal automorphic representation with minimal $K$-type $\\tau_{\\ell}$ and spectral parameter $\\nu_V\\in I$. Let $\\phi_q\\in V^{\\ell,q}$ such that ${\\|\\phi_q\\|}_2=1$ and let $g\\in\\Omega$. Then for any $L\\geq 7$ and $\\eps>0$ we have\n\\begin{align*}\n|\\phi_0(g)|^2\n&\\ll_{\\eps,I,\\Omega}\\ell^{2+\\eps}L^{\\eps}\\sum_{\\substack{\\delta\\textnormal{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta\\leq\\ell^{\\eps}}}\\frac1{\\sqrt{\\ell}\\delta}\\\\\n&\\qquad\\times\\left(\\frac{M_0^{\\ast}(g,L,1,\\delta)}{L}+\\frac{M_0^{\\ast}(g,L,L^2,\\delta)}{L^3}+\\frac{M_0^{\\ast}(g,L,L^4,\\delta)}{L^4}\\right)+L^{2+\\eps}\\ell^{-48}.\n\\end{align*}\nMoreover, for $|q|\\leq\\ell$ we have\n\\begin{align*}\n|\\phi_q(g)|^2\n&\\ll_{\\eps,I,\\Omega}\\ell^{2+\\eps}L^{\\eps}\\sum_{\\substack{\\vec{\\delta}\\textnormal{ dyadic},\\,\\,\n\\delta_j\\leq\\ell^{\\eps}\\\\\\delta_1^2\\delta_2\\geq 1\/\\sqrt{\\ell}}}\n \\frac{1}{\\sqrt{\\ell} \\delta_1^2 \\delta_2}\\\\\n&\\qquad\\times\\left(\\frac{M^\\ast(g,L,1,\\vec{\\delta})}{L}+\\frac{M^\\ast(g,L,L^2,\\vec{\\delta})}{L^3}+\\frac{M^\\ast(g,L,L^4,\\vec{\\delta})}{L^4}\\right)+L^{2+\\eps}\\ell^{-48}.\n\\end{align*}\n\\end{lemma}\n\nSimilarly, we explicate \\eqref{conclude-KS} using Theorem~\\ref{thm5}\\ref{thm5-b}. 
Here we introduce the double matrix count\n\\begin{align*}\nQ(g,L,H_1, H_2):=\\sum_{L \\leq |n| \\leq 2L} \\#\\Bigg\\{(\\gamma_1, \\gamma_2) \\in\\Gamma_n^2 :\\ \n&\\|g^{-1}\\tilde\\gamma_jg\\|\\leq \\sqrt{\\frac{H_j}{L}},\\\\\n&\\,\\dist(g^{-1}\\tilde\\gamma_jg,\\mcD)\\ll\\sqrt{\\frac{H_j\\log\\ell}{L\\ell}}\\Bigg\\},\n\\end{align*}\nwith a sufficiently large implied constant in the distance condition.\n\n\\begin{lemma}\\label{q=ell-case}\nLet $\\ell\\geq 1$ be an integer, $I\\subset i\\RR$ and $\\Omega\\subset G$ be compact sets. Let $V\\subset L^2(\\Gamma\\backslash G)$ be a cuspidal automorphic representation with minimal $K$-type $\\tau_{\\ell}$ and spectral parameter $\\nu_V\\in I$. Suppose that $V$ lifts to an automorphic representation for $\\PGL_2(\\ZZ[i])\\backslash\\PGL_2(\\CC)$.\nLet $\\phi_{\\pm \\ell}\\in V^{\\ell,\\pm\\ell}$ such that ${\\|\\phi_{\\pm \\ell}\\|}_2=1$ and let $g\\in\\Omega$. Then for any $\\eps > 0$ we have\n\\[|\\phi_{\\pm \\ell}(g)|^4\\ll_{ \\eps,I,\\Omega}\\ell^{2+ \\eps} \\max_{1 \\leq L, H_1, H_2 \\leq \\ell^{1+\\eps}} \\frac{Q(g,L,H_1, H_2) }{H_1H_2}+\\ell^{-50}.\\]\n\\end{lemma}\n\n\\section{Proof of Theorem~\\ref{thm4}}\nIn this section, we prove Theorem~\\ref{thm4}. It is clear from the definition \\eqref{eq:def-spherical-function}\nthat we can restrict to $k=1$ without loss of generality, and the first bound holds in the stronger form $|\\varphi_{\\nu,\\ell}^{\\ell}(g)|\\leq 2\\ell+1$. In particular, Theorem~\\ref{thm4} is trivial for $\\ell=1$, hence we shall assume (for notational simplicity) that $\\ell\\geq 2$. 
In addition, the exponential factor in \eqref{spherical-def} has absolute value less than $\|g\|^2$ thanks to\n\eqref{eq:kappaH} and the identity\n\[|ad-bc|^2+|a\bar b+c\bar d|^2=(|a|^2+|c|^2)(|b|^2+|d|^2),\]\nhence it suffices to prove that\n\begin{equation}\label{suffices}\n\int_K |\psi_{\ell}(\kappa(k^{-1} g k))|\, \dd k \ll_\eps \ell^\eps\n\min\left(\frac{\|g\|^4}{|z^2 - 1|^2\ell}, \frac{\|g\|}{|u|\sqrt{\ell}}\right).\n\end{equation}\nFinally, we shall use the obvious fact that\n\begin{equation}\label{eq:bounded_z_z(-1)_u}\n|u|,|z|,|z^{-1}|\leq\|g\|.\n\end{equation}\n\nWriting $k=k[\phi,\theta,\psi]$ in Euler angles as in \eqref{decomp-K}, and setting\n\[\nx:= (z^2 - 1)\cos\theta + ie^{-2i\phi} uz \sin\theta,\n\]\none computes\n\[\nk[\phi, \theta, \psi]^{-1} g k[\phi, \theta, \psi] = \left(\begin{matrix} (1 + x \cos\theta )\/z & \ast\\ \ -ie^{2i\psi}x \sin \theta \/z& \ast\end{matrix}\right).\n\]\nOur goal is then to estimate\n\begin{equation}\label{eq:thm4_integral_to_bound}\n\int_0^{\pi} \int_0^{\pi\/2} \int_{-\pi}^{\pi} \left|\psi_{\ell}\left(\kappa\left(\begin{pmatrix} (1 + x \cos\theta )\/z & \ast\\ \ -ie^{2i\psi}x \sin \theta \/z& \ast\end{pmatrix}\right)\right)\right| \sin 2\theta\,\dd \psi\,\dd \theta\,\dd \phi.\n\end{equation}\nWe introduce the notation $\lambda:=\sqrt{\log\ell}$.\n\n\subsection{Small values of the integrand}\nFirst we identify a region where $|\psi_\ell|$ in the integral \eqref{eq:thm4_integral_to_bound} is small. 
Assume that\n\\begin{equation}\\label{tiny}\n\\min\\bigl(\\tan\\theta,|x|\\sin\\theta\\bigr)>\\frac{4\\lambda}{\\sqrt{\\ell}}.\n\\end{equation}\nThen in\n\\[\n\\kappa\\left(\\begin{pmatrix} (1 + x \\cos\\theta )\/z & \\ast\\\\ \\ -ie^{2i\\psi}x \\sin \\theta \/z& \\ast\\end{pmatrix}\\right)=\\begin{pmatrix} \\frac{1 + x \\cos\\theta}{\\sqrt{|1 + x \\cos\\theta|^2+|x\\sin\\theta|^2}}& \\ast \\\\ \\ast & \\ast \\end{pmatrix}\\in K\n\\]\nthe upper left entry has absolute square less than $1-\\lambda^2\/\\ell$, hence\n\\[\\left|\\psi_{\\ell}\\left(\\kappa\\left(\\begin{pmatrix} (1 + x \\cos\\theta )\/z & \\ast\\\\ \\ -ie^{2i\\psi}x \\sin \\theta \/z& \\ast\\end{pmatrix}\\right)\\right)\\right| < \\left(1-\\frac{\\log \\ell}{\\ell}\\right)^{\\ell}<\\frac{1}{\\ell}.\\]\nIn view of \\eqref{eq:bounded_z_z(-1)_u}, this is admissible for \\eqref{suffices}. In the next subsection, we consider the case when \\eqref{tiny} fails.\n\n\\subsection{Large values of the integrand}\nAssume first that $\\tan\\theta\\leq 4\\lambda\/\\sqrt{\\ell}$. Then $\\theta\\leq 4\\lambda\/\\sqrt{\\ell}$, hence the corresponding contribution to \\eqref{eq:thm4_integral_to_bound} is $\\ll\\lambda^2\/\\ell$. This is admissible for \\eqref{suffices} in the light of \\eqref{eq:bounded_z_z(-1)_u}.\n\nNow assume that $|x|\\sin\\theta\\leq 4\\lambda\/\\sqrt{\\ell}$, and decompose the relevant integration domain for $\\theta$ as follows. For any $m,n\\in\\ZZ_{\\geq 0}$ and $\\phi\\in[0,\\pi]$, let\n\\[I(m,n,\\phi):=\\left\\{\\theta\\in\\left(0,\\frac{\\pi}{2}\\right)\\,:\\,\n|x|\\sin\\theta\\leq\\frac{4\\lambda}{\\sqrt{\\ell}},\\\n\\frac{1}{2}<2^m\\sin\\theta\\leq 1,\\ \\frac{1}{2}<2^n\\cos\\theta\\leq 1\\right\\}.\\]\nIf $\\theta\\notin I(m,n,\\phi)$ holds for every $0\\leq m,n\\leq 2\\log\\ell$, then $\\sin 2\\theta=2\\sin\\theta\\cos\\theta\\leq 1\/\\ell$, which is admissible for \\eqref{suffices}. 
Therefore, by \\eqref{eq:bounded_z_z(-1)_u} and \\eqref{eq:thm4_integral_to_bound}, it suffices to prove the bound\n\\begin{equation}\\label{eq:integral_large_psi}\n\\int_0^{\\pi} \\int_{I(m,n,\\phi)} \\sin 2\\theta\\,\\dd\\theta\\,\\dd\\phi \\ll \\min\\left(\\frac{\\lambda^2}{\\ell|z^2-1|^{2}},\\frac{\\lambda}{\\ell^{1\/2}|uz|}\\right)\n\\end{equation}\nfor every $0\\leq m,n\\leq 2\\log\\ell$. We shall assume that $\\min(m,n)=0$, for otherwise $I(m,n,\\phi)=\\emptyset$.\nWe record also that the Lebesgue measure of $I(m,n,\\phi)$ is $\\OO(2^{-m-n})$, because if $n=0$, then $\\sin\\theta\\asymp \\theta$, while if $m=0$, then $\\cos\\theta\\asymp \\pi\/2-\\theta$. Hence, for any $\\phi\\in[0,\\pi]$, we have\n\\[\\int_{I(m,n,\\phi)} \\sin 2\\theta\\,\\dd \\theta = \\int_{I(m,n,\\phi)} 2\\sin\\theta\\cos\\theta\\,\\dd \\theta \\ll 2^{-2m-2n}.\\]\n\nFirst consider the case when in $x=(z^2 - 1)\\cos\\theta + i e ^{-2i\\phi} uz \\sin\\theta$, whose absolute value does not exceed $2^{m+3}\\lambda\/\\sqrt{\\ell}$, neither of the two summands is large:\n\\[\n|z^2 - 1|2^{-n} \\leq 2^{m+6} \\frac{\\lambda}{\\sqrt{\\ell}},\\qquad |uz|2^{-m}\\leq 2^{m+6} \\frac{\\lambda}{\\sqrt{\\ell}}.\n\\]\nRecalling $\\min(m,n)=0$, the previous two displays imply for any $\\phi\\in[0,\\pi]$ that\n\\[\\int_{I(m,n,\\phi)}\\sin 2\\theta\\,\\dd\\theta \\ll \\min\\left(\\frac{\\lambda^2}{\\ell|z^2-1|^{2}},\\frac{\\lambda}{\\ell^{1\/2}|uz|}\\right).\\]\nSo in this case \\eqref{eq:integral_large_psi} is clear.\n\nNow consider the case when in $x=(z^2 - 1)\\cos\\theta + i e ^{-2i\\phi} uz \\sin\\theta$,\nwhose absolute value does not exceed $2^{m+3}\\lambda\/\\sqrt{\\ell}$, the two summands are individually large:\n\\begin{equation}\\label{eq:triangle_large_sides}\n|z^2 - 1|2^{-n}> 2^{m+4} \\frac{\\lambda}{\\sqrt{\\ell}},\\qquad |uz|2^{-m}> 2^{m+4} \\frac{\\lambda}{\\sqrt{\\ell}}, \\qquad |z^2-1|2^{-n}\\asymp |uz|2^{-m}.\n\\end{equation}\nWe claim that this localizes $\\phi$. 
Indeed, setting\n\[\n2\phi_0=\arg(iuz)-\arg(z^2-1),\n\]\nwe see that\n\[\n|z^2-1|\cos \theta + e^{2i(\phi_0-\phi)}|uz|\sin \theta \ll 2^m\frac{\lambda}{\sqrt{\ell}},\n\]\nand comparing the imaginary parts, we have that\n\[\n\sin(2\phi-2\phi_0)\ll\frac{2^{2m}\lambda}{|uz|\sqrt{\ell}},\quad\text{and so}\quad\n\phi\equiv\phi_0+\OO\left(\frac{2^{2m}\lambda}{|uz|\sqrt{\ell}}\right)\!\!\!\pmod{\pi\/2}.\n\]\nAlso, $\theta$ is localized, since\n\[\n|z^2-1|\cos \theta - |uz|\sin \theta \ll 2^m\frac{\lambda}{\sqrt{\ell}},\n\]\nand here the first term is monotonically decreasing while the second is monotonically increasing in $\theta$. We see that $\theta$ is localized to an interval of length $\OO(2^m\lambda\/|uz|\sqrt{\ell})$ for $\sin\theta\leq \cos\theta$ (in which case $n=0$), and to an interval of length $\OO(\lambda\/|z^2-1|\sqrt{\ell})$ for $\cos\theta\leq\sin\theta$ (in which case $m=0$).\n\nWe estimate the left-hand side of \eqref{eq:integral_large_psi} by exploiting the above localizations and all three parts of \eqref{eq:triangle_large_sides}. 
If $\\sin\\theta\\leq\\cos\\theta$, then $n=0$ and $\\sin 2\\theta\\leq 2^{1-m}$, so altogether we obtain a contribution to \\eqref{eq:integral_large_psi} of size\n\\[\\ll 2^{-m} \\cdot \\frac{2^m \\lambda}{|uz|\\sqrt{\\ell}}\\cdot \\frac{2^{2m}\\lambda}{|uz|\\sqrt{\\ell}}\n\\ll \\min\\left(\\frac{\\lambda^2}{|z^2-1|^2\\ell},\\frac{\\lambda}{|uz|\\sqrt{\\ell}}\\right).\\]\nSimilarly, if $\\cos\\theta\\leq\\sin\\theta$, then $m=0$ and $\\sin 2\\theta\\leq 2^{1-n}$, so altogether we obtain a contribution to \\eqref{eq:integral_large_psi} of size\n\\[\\ll2^{-n} \\cdot\\frac{\\lambda}{|z^2-1|\\sqrt{\\ell}}\\cdot\\frac{\\lambda}{|uz|\\sqrt{\\ell}}\n\\ll \\min\\left(\\frac{\\lambda^2}{|z^2-1|^2\\ell},\\frac{\\lambda}{|uz|\\sqrt{\\ell}}\\right).\\]\n\nThe proof of Theorem~\\ref{thm4} is complete.\n\n\\section{Proof of Theorems~\\ref{thm6} and \\ref{thm5}}\nIn this section, we prove Theorems~\\ref{thm6}~and~\\ref{thm5}. We recall that the key player is the function\n\\begin{equation}\\label{varphi-def-proof}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g) := \\frac{1}{2\\pi} \\int_{0}^{2\\pi} \\varphi_{\\nu,\\ell}^{\\ell}\\big(gk[0, 0, \\varrho]\\big) \\,e^{-2qi\\varrho} \\,\\dd\\varrho,\n\\end{equation}\nwhere\n\\[\\varphi_{\\nu,\\ell}^{\\ell}(g):=(2\\ell+1)\n\\int_K \\psi_{\\ell}(\\kappa(k^{-1} g k)) \\,e^{(\\nu-1)\\rho(H(gk))}\\,\\dd k.\\]\nThe function $\\psi_\\ell:K\\to\\CC$ was defined in \\eqref{chi-ell}, but for calculational purposes we extend it now to $\\GL_2(\\CC)$:\n\\begin{equation}\\label{psidef}\\psi_{\\ell}\\left( \\begin{pmatrix} \\alpha &\\beta \\\\ \\gamma & \\delta \\end{pmatrix} \\right) :=\n\\bar{\\alpha}^{2\\ell}, \\qquad \\left( \\begin{matrix} \\alpha &\\beta \\\\ \\gamma & \\delta \\end{matrix} \\right) \\in\\GL_2(\\CC).\n\\end{equation}\n\n\\subsection{Preliminary computations}\\label{sec:preliminary-computations}\nWe write $g$ in Cartan form\n\\begin{equation}\\label{g}\ng=k[u_1,v_1,w_1]\\begin{pmatrix}r & \\\\ & r^{-1}\\end{pmatrix} 
k[u_2,v_2,w_2],\n\end{equation}\nwhere $r\geq 1$, and we allow $u_j,v_j,w_j\in\RR$ to be arbitrary for convenience.\nSpelling out the definitions, and using that the height in the Iwasawa decomposition is left $K$-invariant, we see that $\varphi_{\nu,\ell}^{\ell,q}(g)$ equals\n\[\begin{split}\n\frac{d_{\ell}}{4\pi^3}\int_{\substack{0\leq u\leq \pi \\ 0\leq v\leq \pi\/2 \\ 0\leq w\leq 2\pi \\ 0\leq \varrho\leq 2\pi}} & \psi_{\ell}\left( k[-w,-v,-u]k[u_1,v_1,w_1] \kappa\left(\begin{pmatrix}r&\\&r^{-1}\end{pmatrix} k[u_2,v_2,w_2]k[0,0,\varrho] k[u,v,w] \right)\right)\\\n& \cdot e^{-2iq\varrho} \, e^{(\nu-1)\rho\left(H\left(\left(\begin{smallmatrix}r&\\&r^{-1}\end{smallmatrix}\right) k[u_2,v_2,w_2]k[0,0,\varrho]k[u,v,w]\right)\right)} \sin 2v \,\dd u\,\dd v\,\dd w\,\dd \varrho.\n\end{split}\]\nWith a change of variables $k[u_2,v_2,w_2]k[0,0,\varrho] k[u,v,w] \mapsto k[u,v,w]$ and dropping the normalized $w$-integration (which is legitimate since the conjugation by $k[0,0,w]$ does not alter the $\psi_{\ell}$-value, and the height in the Iwasawa decomposition is also unaffected by right-multiplication by $k[0,0,w]$), we arrive at\n\[\begin{split}\n\frac{d_{\ell}}{2\pi^2}\int_{\substack{0\leq u\leq \pi \\ 0\leq v\leq \pi\/2 \\ 0\leq \varrho\leq 2\pi}} & \psi_{\ell}\left( k[0,-v,-u] k[u_2,v_2,w_2]k[0,0,\varrho] k[u_1,v_1,w_1] \kappa\left(\begin{pmatrix}r&\\&r^{-1}\end{pmatrix}k[u,v,0]\right)\right)\\\n& \cdot e^{-2iq\varrho} \, e^{(\nu-1)\rho\left(H\left(\left(\begin{smallmatrix}r&\\&r^{-1}\end{smallmatrix}\right) k[u,v,0]\right)\right)} \sin 2v \,\dd u\,\dd v\,\dd\varrho.\n\end{split}\]\nThe sum of absolute squares in the first column of $\diag(r, r^{-1})k[u,v,0]$ equals\n\[h(r,v):=r^2\cos^2v+r^{-2}\sin^2v,\]\nhence recalling the definitions \eqref{eq:kappaH} and \eqref{psidef}, we can rewrite the integral 
as\n\\[\\begin{split}\n\\frac{d_{\\ell}}{2\\pi^2}\\int_{\\substack{0\\leq u\\leq \\pi \\\\ 0\\leq v\\leq \\pi\/2 \\\\ 0\\leq \\varrho\\leq 2\\pi}} & \\psi_{\\ell}\\left( k[0,-v,-u] k[u_2,v_2,w_2]k[0,0,\\varrho] k[u_1,v_1,w_1]\n\\begin{pmatrix}r&\\\\&r^{-1}\\end{pmatrix}k[u,v,0]\\right) \\\\\n& \\cdot e^{-2iq\\varrho} \\, h(r,v)^{\\nu-1-\\ell} \\sin 2v \\,\\dd u\\,\\dd v\\,\\dd\\varrho.\n\\end{split}\\]\nReplacing $\\varrho$ by $\\varrho-u_1-w_2$, the integral further simplifies to\n\\[\\begin{split}\n\\frac{d_{\\ell}e^{2iq(u_1+w_2)}}{2\\pi^2}\\int_{\\substack{0\\leq u\\leq \\pi \\\\ 0\\leq v\\leq \\pi\/2 \\\\ 0\\leq \\varrho\\leq 2\\pi}} & \\psi_{\\ell}\\left( \\left(\\begin{matrix} e^{-i\\varrho} I + e^{i\\varrho}J & \\ast \\\\ \\ast & \\ast \\end{matrix}\\right)\\right) e^{-2iq\\varrho} \\, h(r,v)^{\\nu-1-\\ell} \\sin 2v \\,\\dd u\\,\\dd v\\,\\dd \\varrho,\n\\end{split}\\]\nwhere\n\\begin{align*}\nI:=&\\left(r^{-1}e^{-2iu-iw_1}\\sin v\\cos v_1+re^{iw_1}\\cos v\\sin v_1\\right)\n\\left(e^{2iu-iu_2}\\sin v\\cos v_2-e^{iu_2}\\cos v\\sin v_2\\right),\\\\\nJ:=&\\left(-r^{-1}e^{-2iu-iw_1}\\sin v\\sin v_1+re^{iw_1}\\cos v\\cos v_1\\right)\n\\left(e^{2iu-iu_2}\\sin v\\sin v_2+e^{iu_2}\\cos v\\cos v_2\\right).\\end{align*}\nEvaluating the $\\varrho$-integral, we obtain\n\\begin{equation}\\label{post-expansion}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g) = \\frac{d_{\\ell}e^{2iq(u_1+w_2)}}{\\pi}\\binom{2\\ell}{\\ell + q}\n\\int_{\\substack{0\\leq u\\leq \\pi \\\\ 0\\leq v\\leq \\pi\/2 } } \\frac{\\sin 2v}{h(r,v)^{\\ell+1-\\nu}}\n\\,\\bar{I}^{\\ell + q} \\bar{J}^{\\ell - q}\\,\\dd u\\,\\dd v.\n\\end{equation}\nTaking the complex conjugate of the right-hand side, and introducing the new variables $t:=r^{-1}\\tan v$ and $\\phi:=2u$, we get\n\\begin{align*}\n\\bigl|\\varphi_{\\nu,\\ell}^{\\ell,q}(g)\\bigr| = \\frac{d_{\\ell}}{\\pi}&\\binom{2\\ell}{\\ell+q}\n\\bigg|\\int_0^{\\infty}\\int_0^{2\\pi} \\frac{t}{(1 + 
(t\/r)^2)^{\\ell+1+\\nu}(1+(tr)^2)^{\\ell+1-\\nu}}\\\\\n&\\left(e^{-i\\phi-2iw_1}(t\/r)\\cos v_1+\\sin v_1\\right)^{\\ell+q}\n\\left(e^{i\\phi-2iu_2}(tr)\\cos v_2-\\sin v_2\\right)^{\\ell+q}\\\\\n&\\left(e^{-i\\phi-2iw_1}(t\/r)\\sin v_1-\\cos v_1\\right)^{\\ell-q}\n\\left(e^{i\\phi-2iu_2}(tr)\\sin v_2+\\cos v_2\\right)^{\\ell-q}\\,\\dd\\phi\\,\\dd t\\bigg|.\n\\end{align*}\nNow comes the last key step: in the inner $\\phi$-integral, we can remove the $r$'s. This is so because $e^{-i\\phi}$ must be chosen equally many times as $e^{i\\phi}$, and the $r$'s will cancel out in all terms surviving the integration. Another way to see the same thing is to shift the contour as in $\\phi\\mapsto\\phi+i\\log r$ where the boundary terms cancel out by $2\\pi$-periodicity. Either way, using also the opportunity to replace $\\phi\\mapsto\\phi+u_2-w_1$,\nand writing $\\Delta:=u_2+w_1$, we finally obtain\n\\begin{align*}\n\\bigl|\\varphi_{\\nu,\\ell}^{\\ell,q}(g)\\bigr| \\leq \\frac{d_{\\ell}}{\\pi} \\binom{2\\ell}{\\ell+q}\n\\int_{0}^{\\infty}&\\frac{t}{((1 + (t\/r)^2)(1+(tr)^2))^{\\ell+1}}\\\\\n\\times\\int_0^{2\\pi}\n&\\bigl|e^{i\\phi+i\\Delta}t\\cos v_1+\\sin v_1\\bigr|^{\\ell+q}\n\\bigl|e^{i\\phi -i\\Delta}t\\cos v_2-\\sin v_2\\bigr|^{\\ell+q}\\\\\n&\\bigl|e^{i\\phi+i\\Delta}t\\sin v_1-\\cos v_1\\bigr|^{\\ell-q}\n\\bigl|e^{i\\phi -i\\Delta}t\\sin v_2+\\cos v_2\\bigr|^{\\ell-q}\\,\\dd\\phi\\,\\dd t.\n\\end{align*}\n\nWe estimate the inner integrand using the following lemma, which is purely about inequalities. We state it formally so as to clearly separate issues. (In the case $q=\\pm\\ell$, all expressions raised to exponent 0 should simply be omitted.) As in the previous section, we introduce the notation $\\lambda:=\\sqrt{\\log \\ell}$.\n\n\\begin{lemma}\\label{young}\nLet $\\ell,q\\in\\ZZ$ be such that $\\ell\\geq\\max(1,|q|)$. 
Let $X>0$ and $\\Lambda>0$.\n\\begin{enumerate}[label=(\\alph*)]\n\\item\\label{part-a}\nIf $A,B\\geq 0$ satisfy $A^2+B^2=X^2$, then\n\\begin{equation}\n\\label{ineq-two-terms}\n\\left(\\frac{2\\ell}{\\ell+q}\\right)^{(\\ell+q)\/2}\\left(\\frac{2\\ell}{\\ell-q}\\right)^{(\\ell-q)\/2}A^{\\ell+q}B^{\\ell-q}\\leq X^{2\\ell}.\n\\end{equation}\nMoreover, the left-hand side is $\\OO_\\Lambda(X^{2\\ell}\\ell^{-\\Lambda})$ unless\n\\begin{equation}\\label{cases-of-eq}\n\\begin{split}\nA^2&=\\frac{\\ell+q}{2\\ell}X^2+\\OO_{\\Lambda}\\left(X^2\\frac{\\lambda^2+\\lambda\\sqrt{\\ell-|q|}}{\\ell}\\right),\\\\\nB^2&=\\frac{\\ell-q}{2\\ell}X^2+\\OO_{\\Lambda}\\left(X^2\\frac{\\lambda^2+\\lambda\\sqrt{\\ell-|q|}}{\\ell}\\right).\n\\end{split}\n\\end{equation}\n\\item\\label{part-b}\nIf $A,B,C,D\\geq 0$ satisfy $A^2+B^2=C^2+D^2=X^2$, then\n\\[ \\binom{2\\ell}{\\ell+q}A^{\\ell+q}B^{\\ell-q}C^{\\ell+q}D^{\\ell-q} \\ll\\frac{X^{4\\ell}}{ 1+\\sqrt{\\ell-|q|}}. \\]\nMoreover, the left-hand side is $\\OO_\\Lambda(X^{4\\ell}\\ell^{-\\Lambda})$ unless \\eqref{cases-of-eq} and the analogous estimates for $C$, $D$ are satisfied.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet us first assume $|q| < \\ell$. We use Young's inequality\n\\[xy \\leq \\frac{x^a}{a} + \\frac{y^b}{b} , \\qquad \\frac{1}{a} + \\frac{1}{b}=1,\\]\nto conclude with\n\\begin{equation}\\label{xy}\nx := \\left(\\sqrt{\\frac{2\\ell}{\\ell+q}}\\frac{A}{X}\\right)^{\\frac{\\ell+q}{\\ell}}, \\quad\ny := \\left(\\sqrt{\\frac{2\\ell}{\\ell-q}}\\frac{B}{X}\\right)^{\\frac{\\ell-q}{\\ell}}, \\quad\na := \\frac{2\\ell}{\\ell + q}, \\quad\nb := \\frac{2\\ell}{\\ell - q}\n\\end{equation}\nthat\n\\[\n\\left(\\sqrt{\\frac{2\\ell}{\\ell+q}}\\frac{A}{X}\\right)^{\\frac{\\ell+q}{\\ell}}\n\\left(\\sqrt{\\frac{2\\ell}{\\ell-q}}\\frac{B}{X}\\right)^{\\frac{\\ell-q}{\\ell}}\n\\leq \\frac{A^2+B^2}{X^2}=1.\n\\]\nThis is equivalent to \\eqref{ineq-two-terms}. 
We also conclude (still using the notation \eqref{xy}) that the left-hand side of \eqref{ineq-two-terms} is $\OO_\Lambda(X^{2\ell}\ell^{-\Lambda})$ unless\n\begin{equation}\label{ineq}\nxy>1\/2,\qquad xy=1+\OO_\Lambda(\delta),\qquad\delta:=\lambda^2\/\ell.\n\end{equation}\n\nLet us explore the consequences of \eqref{ineq}. First, by \eqref{ineq} and $x^a\/a+y^b\/b=1$ we have\n\[1\/3<x,y<3.\]\nMoreover, writing $y_0:=x^{a\/b}$ for the minimum point of the strictly convex function $y\mapsto y^b\/b-xy$, and $h:=\log(y_0\/y)$, a short computation shows that\n\[\min\bigl(bh^2,|h|\bigr)\ll\frac{x^a}{a}+\frac{y^b}{b}-xy=1-xy\ll_\Lambda\delta.\]\nAssume first that $y^b>1$ or $b\delta<1$. Then $y^b\gg_\Lambda 1$, whence $y-y_0\ll_\Lambda\sqrt{\delta\/b}$ by the previous display. From here and \eqref{ineq} we get the following two approximations for $bxy$:\n\begin{align*}\nbxy&=bxy_0+\OO_\Lambda(\sqrt{b\delta})=bx^a+\OO_\Lambda(\sqrt{b\delta}),\\\nbxy&=b+\OO_\Lambda(b\delta)=(b-1)x^a+y^b+\OO_\Lambda(b\delta).\n\end{align*}\nComparing the right-hand sides, we conclude that\n\begin{equation}\label{eq:xayb}\nx^a-y^b\ll_\Lambda b\delta+\sqrt{b\delta}.\n\end{equation}\nIn the remaining case when $y^b\leq 1$ and $b\delta\geq 1$, the inequality \eqref{eq:xayb} holds automatically in the stronger form $|x^a-y^b|<2\leq 2b\delta$.\n\nWe proved that \eqref{ineq} implies \eqref{eq:xayb} in all ranges. For our specific set-up \eqref{xy}, the inequality \eqref{eq:xayb} says that\n\[aA^2-bB^2\ll_\Lambda X^2(b\delta+\sqrt{b\delta}),\]\nand this is equivalent to \eqref{cases-of-eq} in the light of $A^2+B^2=X^2$.\nThis shows \ref{part-a} under the assumption $0\leq q<\ell$, but it is easily seen to continue to hold also for $q = \ell$, in which case \eqref{cases-of-eq} simply reads $A^2 = X^2 + \OO_\Lambda(X^2\delta)$. The argument for $- \ell \leq q < 0$ is identical.\n\nTurning to \ref{part-b}, we conclude from \ref{part-a} that\n\[ \left(\frac{2\ell}{\ell+q}\right)^{\ell+q}\left(\frac{2\ell}{\ell-q}\right)^{\ell-q}A^{\ell+q}B^{\ell-q}C^{\ell+q}D^{\ell-q}\leq X^{4\ell}. 
\\]\nOn the other hand, using Stirling's formula $n!\\sim(n\/e)^n\\sqrt{2\\pi n}$, we have for $|q|<\\ell$ that\n\\[ \\binom{2\\ell}{\\ell+q}\\asymp\\frac{(2\\ell)^{2\\ell}}{(\\ell+q)^{\\ell+q}(\\ell-q)^{\\ell-q}}\\sqrt{\\frac{2\\ell}{(\\ell+q)( \\ell-q)}}, \\]\nand so combining the two most recent displays we have the announced bound\n \\[ \\binom{2\\ell}{\\ell+q}A^{\\ell+q}B^{\\ell-q}C^{\\ell+q}D^{\\ell-q} \\ll\\frac{X^{4\\ell}}{ 1+\\sqrt{\\ell-|q|}}. \\]\nWe added artificially the $1+$ term in the denominator, so that the inequality also holds for the previously excluded case $|q| = \\ell$ in view of $AC, BD \\leq X^2$ (which follows directly from $A^2 + C^2 = B^2 +D^2 = X^2$). The claim that the left-hand side is negligible unless \\eqref{cases-of-eq} holds for $(A,B)$ and $(C,D)$ is immediate from \\ref{part-a}.\n\\end{proof}\n\nWe now return to the double integral in the upper bound for $\\varphi_{\\nu,\\ell}^{\\ell,q}(g)$.\nWe estimate\nthe inner integral by writing the integrand as $A^{\\ell + q} B^{\\ell - q} C^{\\ell + q} D^{\\ell - q}$ in the obvious way and applying Lemma~\\ref{young}, where\n\\[A^2+B^2=C^2+D^2=X^2=1+t^2,\\]\nand\n\\[ A^2=\\frac{1+t^2}{2}+\\frac{t^2-1}{2}\\cos 2v_1+t\\sin 2v_1\\cos(\\phi+\\Delta), \\]\nwith analogous expressions for $B^2$, $C^2$, and $D^2$. Since\n\\[\\frac{(1 + (t\/r)^2)(1+(tr)^2)}{(1 + t^2)^2} = 1 + \\left(\\frac{r - r^{-1}}{t + t^{-1}}\\right)^2,\\]\nwe conclude that the contribution of the inner integral is $\\OO_\\Lambda(\\ell^{-\\Lambda})$ unless\n\\begin{equation}\\label{r}\n\\min(t,t^{-1})\\ll_\\Lambda\\frac{\\lambda}{(r-1)\\sqrt{\\ell}}.\n\\end{equation}\nFor $r=1$ we treat the right-hand side as infinity. We may then summarize our findings as follows.\n\n\\begin{lemma}\\label{lem1} Let $\\Lambda\\in\\NN$. Let $\\ell,q\\in\\ZZ$ be such that $\\ell\\geq\\max(1,|q|)$, and let $\\nu\\in i\\RR$. Assume that $g\\in\\SL_2(\\CC)$ is given by \\eqref{g}. 
Let us abbreviate $\\Delta:= u_2 + w_1$ and $\\lambda := \\sqrt{\\log \\ell}$. Let $\\mcM = \\mcM(v_1,v_2,\\Delta,r,\\Lambda)$ be the set of $(\\phi, t) \\in [0, 2\\pi] \\times [0, \\infty)$ satisfying \\eqref{r} as well as\n\\begin{equation}\\label{eq}\n\\begin{split}\n2t\\sin 2v_1\\cos(\\phi+\\Delta)&=(1-t^2)\\cos 2v_1+\\frac{q}{\\ell}(1+t^2)+\\OO_{\\Lambda}\\left((1+t^2)\\frac{\\lambda^2+ \\lambda\\sqrt{\\ell-|q|}}{\\ell} \\right),\\\\\n2t\\sin 2v_2\\cos(\\phi-\\Delta)&=(t^2-1)\\cos 2v_2-\\frac{q}{\\ell}(1+t^2)+\\OO_{\\Lambda}\\left((1+t^2)\\frac{\\lambda^2 + \\lambda\\sqrt{\\ell-|q|}}{\\ell} \\right),\\\\\n\\end{split}\n\\end{equation}\nwith a sufficiently large (but fixed) implied constant depending on $\\Lambda$. Then\n\\begin{equation}\\label{keyupperbound}\n\\varphi_{\\nu,\\ell}^{\\ell,q}(g) \\ll_\\Lambda \\frac{\\ell}{1 + \\sqrt{\\ell - |q|}}\n\\int_{\\mcM} \\frac{t}{(1+t^2)^2} \\, \\dd \\phi\\, \\dd t + \\ell^{-\\Lambda}.\n\\end{equation}\n\\end{lemma}\n\n\\subsection{Simplifying assumptions}\\label{simplifying-section}\nFor the proof of Theorems~\\ref{thm6}~and~\\ref{thm5}, we can and we shall assume that $|\\Delta|\\leq\\pi\/4$. Indeed, using the last relation in \\eqref{angleequiv} multiple times, we can choose the coordinates in \\eqref{g} so that this bound is satisfied. 
Moreover, we can replace $g$ by\n\\[g^{-1}=k\\left[\\frac{\\pi}{2}-w_2,v_2-\\frac{\\pi}{2},u_2+\\frac{\\pi}{2}\\right]\n\\begin{pmatrix}r & \\\\ & r^{-1}\\end{pmatrix}\nk\\left[w_1-\\frac{\\pi}{2},v_1-\\frac{\\pi}{2},\\frac{\\pi}{2}-u_1\\right]\\]\nif needed, because the quantities $\\Delta$, $\\|g\\|$, $D(g)$ do not change under this replacement, \n$\\bigl|\\varphi_{\\nu,\\ell}^{\\ell,q}(g)\\bigr|=\\bigl|\\varphi_{\\nu,\\ell}^{\\ell,q}(g^{-1})\\bigr|$ holds by \\eqref{eq:averaged-spherical-function-symmetry}, and\n\\[\\dist(g,\\mcH)=\\dist(g^{-1},\\mcH),\\qquad\\mcH\\in\\{K,\\mcD,\\mcS\\}\\]\nholds by \\eqref{distinvariance}.\n\nWe shall derive (most of) the bounds in Theorems~\\ref{thm6}~and~\\ref{thm5} from \\eqref{keyupperbound}. In Lemma~\\ref{lem1}, the pair $(\\Delta,r)$ does not change under the above discussed replacement $g\\mapsto g^{-1}$, while\nthe corresponding integration domains $\\mcM$ are related by\n\\[(\\phi,t)\\in\\mcM\\left(v_2-\\frac{\\pi}{2},v_1-\\frac{\\pi}{2},\\Delta,r,\\Lambda\\right)\\quad\\Longleftrightarrow\\quad\n(\\phi,t^{-1})\\in\\mcM(v_1,v_2,\\Delta,r,\\Lambda).\\]\nMoreover, the integrand in \\eqref{keyupperbound} is invariant under $t\\mapsto t^{-1}$, hence we can assume that the contribution of $t\\leq 1$ is not smaller than the contribution of $t>1$. So from now on we restrict $\\mcM$ in \\eqref{keyupperbound} to the corresponding subset of $[0, 2\\pi] \\times [0, 1]$. On this subset we have, by \\eqref{r},\n\\begin{equation}\\label{r1}\nt\\in[0,1]\\qquad\\text{and}\\qquad t \\ll_\\Lambda \\frac{\\lambda}{(r-1)\\sqrt{\\ell}}.\n\\end{equation}\n\n\\subsection{Proof of Theorem~\\ref{thm6}}\nThe bound \\eqref{thm6bound} is trivial for $\\ell\\ll_\\Lambda 1$, hence we shall assume that $\\ell$ is sufficiently large in terms of $\\Lambda$. 
With the notation\n\\[\\alpha := \\dist(g, K) \\asymp r-1\\qquad\\text{and}\\qquad\\beta := \\dist(g, \\mcD),\\]\nit follows from \\eqref{keyupperbound} and the previous subsection that it suffices to show\n\\begin{equation}\\label{beginThm6}\n\\frac{\\ell}{1 + \\sqrt{\\ell - |q|}}\\int_{\\mcM} t \\, \\dd \\phi\\, \\dd t \\ll_{\\eps,\\Lambda}\n\\ell^{\\eps}\\min\\left(1,\\frac{\\| g \\|}{\\sqrt{\\ell}\\alpha^2\\beta}\\right),\n\\end{equation}\nwhere $\\mcM$ is now restricted by \\eqref{r1}. In fact our arguments below will show that $\\ell^\\eps$ can be replaced by $(\\log\\ell)^3$.\n\nWe start with the first bound of \\eqref{beginThm6}. With the notation\n\\[\\sigma:=\\lambda^2+\\lambda\\sqrt{\\ell-|q|},\\qquad\n\\mu:=\\frac{q}{\\ell} - \\cos 2v_1,\\qquad \\rho:=\\sin 2 v_1,\\]\nthe first equation in \\eqref{eq} becomes\n\\begin{equation}\\label{tquadratic}\n\\mu t^2 - 2t\\rho\\cos(\\phi+\\Delta) + \\frac{2q}{\\ell}-\\mu+\\OO_\\Lambda\\left(\\frac{\\sigma}{\\ell}\\right)=0.\n\\end{equation}\nWithout loss of generality, $\\mu\\neq 0$, and then we can view \\eqref{tquadratic} as a quadratic equation for $t$. Multiplying by $\\mu$ and completing the square, we obtain the alternative form\n\\begin{equation}\\label{compl}\n\\bigl(\\mu t - \\rho\\cos(\\phi + \\Delta)\\bigr)^2+\\bigl(\\rho\\sin(\\phi + \\Delta)\\bigr)^2\n=1-\\frac{q^2}{\\ell^2}+\\OO_\\Lambda\\left(\\frac{|\\mu|\\sigma}{\\ell}\\right).\n\\end{equation}\nIn particular, the discriminant of \\eqref{tquadratic} equals $4D(\\phi)+\\OO_\\Lambda(|\\mu|\\sigma\/\\ell)$, where\n\\begin{equation}\\label{disc-q}\nD(\\phi) := 1-\\frac{q^2}{\\ell^2}-\\bigl(\\rho\\sin(\\phi + \\Delta)\\bigr)^2\n\\end{equation}\nWe assume first that $|q|\/\\ell\\leq 5\/6$, and decompose $\\mcM$ into two parts $\\mcM^\\pm$ according as\n$|\\rho\\sin(\\phi+\\Delta)|$ exceeds $1\/2$ or not. On $\\mcM^+$, the equation \\eqref{tquadratic} localizes $\\phi$ within $\\ll_\\Lambda\\sigma\/(\\ell t)$ for each given $t\\in[0,1]$. 
On $\\mcM^-$, we have $D(\\phi)\\geq 1\/18$, hence the equation \\eqref{compl} localizes $t$ within $\\ll_\\Lambda\\sigma\/\\ell$ for each given $\\phi\\in[0,2\\pi]$. This shows that\n\\[\\int_\\mcM t\\,\\dd\\phi\\,\\dd t\\ll_\\Lambda\\int_0^1 t\\,\\frac{\\sigma}{\\ell t}\\,\\dd t+\\int_0^{2\\pi}\\frac{\\sigma}{\\ell}\\,\\dd\\phi\n\\ll\\frac{\\sigma}{\\ell},\\]\nhence the first bound of \\eqref{beginThm6} follows in stronger form. From now on we assume that $|q|\/\\ell>5\/6$. We decompose $\\mcM$ into two parts $\\mcM^\\pm$ according as $D(\\phi)$ is positive or not, and we make two initial observations. First, $\\mcM^+$ is clearly empty when $|q|=\\ell$. Second, $|\\mu|>1\/6$ holds for large $\\ell$, because \\eqref{tquadratic} coupled with $t\\in[0,1]$ yields\n\\[\\frac{2|q|}{\\ell}-|\\mu|-\\OO_\\Lambda\\left(\\frac{\\sigma}{\\ell}\\right)\\leq 2t|\\rho|\n\\leq 2\\sqrt{1 - \\left( \\frac{q}{\\ell} - \\mu\\right)^2}.\\]\nIn order to estimate the contribution of $\\mcM^+$ in \\eqref{beginThm6}, we decompose $\\mcM^+$\ninto pieces\n\\[\\mcM^{+}({\\mtD},\\eta):=\\left\\{(\\phi,t)\\in\\mcM^+:\n\\text{$D(\\phi) \\asymp {\\mtD}$ and $|\\cos(\\phi + \\Delta)| \\asymp \\eta$}\n\\right\\}.\\]\nIf $\\eta \\leq \\ell^{-10}$, we can estimate trivially, so there are only $\\OO(\\log \\ell)$ relevant values for $\\eta$. If $\\rho \\geq \\ell^{-10}$, then by the same argument there are only $\\OO(\\log \\ell)$ relevant values for ${\\mtD}$. If $\\rho < \\ell^{-10}$, then ${\\mtD} > \\ell^{-1}$ by $|q|<\\ell$, hence again\nthere are only $\\OO(\\log \\ell)$ relevant values for ${\\mtD}$. So in all cases it suffices to restrict to $\\OO((\\log \\ell)^2)$ pairs $({\\mtD},\\eta)$. Our current assumptions localize $\\sin(\\phi+\\Delta)$ within\n$\\ll\\sqrt{{\\mtD}}\/|\\rho|$, and hence $\\phi$ within $\\ll\\min(1,\\sqrt{\\mtD}\/|\\rho\\eta|)$, independently of $t$. 
On the other hand, given $\\phi$, the equation \\eqref{compl} localizes $t$ within $\\ll_\\Lambda\\min((\\sigma\/\\ell){\\mtD}^{-1\/2},\\sqrt{\\sigma\/\\ell})$.\nSuch $t$ are of size $\\ll_\\Lambda |\\rho\\eta| + \\sqrt{\\mtD} + \\sqrt{\\sigma\/\\ell}$, so that\n\\[\\int_{\\mcM^+({\\mtD},\\eta)} t\\,\\dd\\phi\\,\\dd t \\ll_\\Lambda \\left( |\\rho\\eta|+ \\sqrt{\\mtD} + \\sqrt{\\frac{\\sigma}{\\ell}} \\right)\\min\\left( \\frac{\\sigma}{\\ell \\sqrt{\\mtD} }, \\sqrt{\\frac{\\sigma}{\\ell}}\\right) \\min\\left(1, \\frac{\\sqrt{\\mtD} }{|\\rho \\eta|}\\right)\n\\ll\\frac{\\sigma}{\\ell}.\\]\nThis contribution is admissible for the first bound of \\eqref{beginThm6}.\nIt remains to estimate the contribution of $\\mcM^-$ in \\eqref{beginThm6}. On this set we have\n\\[0\\leq -D(\\phi)\\ll_\\Lambda\\frac{\\sigma}{\\ell}\\]\nby \\eqref{compl}. The argument is similar as for $\\mcM^+$, in fact simpler as we only need $\\OO(\\log \\ell)$ pieces $\\mcM^{-}(\\eta)$ defined by $|\\cos(\\phi+\\Delta)|\\asymp\\eta$. Initially we localize $\\phi$ within $\\ll\\min(1,|\\rho\\eta|^{-1}\\sqrt{\\sigma\/\\ell})$, independently of $t$. The equation \\eqref{compl} localizes $t$ within $\\ll_\\Lambda\\sqrt{\\sigma\/\\ell}$, and such $t$ are of size $\\ll_\\Lambda |\\rho\\eta| + \\sqrt{\\sigma\/\\ell}$. We obtain altogether\n\\[\\int_{\\mcM^-(\\eta)} t\\,\\dd\\phi\\,\\dd t \\ll_\\Lambda \\left(|\\rho \\eta| + \\sqrt{\\frac{\\sigma}{\\ell}}\\right)\n\\sqrt{\\frac{\\sigma}{\\ell}}\\min\\left(1, \\frac{1}{|\\rho\\eta|}\\sqrt{\\frac{\\sigma}{\\ell}}\\right) \\ll \\frac{\\sigma}{\\ell},\\]\nwhich is again admissible for the first bound of \\eqref{beginThm6}.\n\nWe now turn to the second bound of \\eqref{beginThm6}. We shall assume (as we can) that $\\mcM\\neq\\emptyset$ and $\\sqrt{\\ell}\\alpha^2\\beta>\\| g \\|$. We pick an arbitrary point $(\\phi,t)\\in\\mcM$. 
Combining \\eqref{eq} and \\eqref{r1}, we get\n\\[\\cos 2v_j = - \\text{sgn}(q) + \\OO(t)\\sin 2v_j + \\OO_\\Lambda\\left(t^2 +\\frac{\\lambda^2+ \\ell - |q|}{\\ell}\\right),\\]\nwhere for $q=0$ we can replace $\\sgn(q)$ by $1$. After squaring and solving for $\\sin 2v_j$, then feeding back the result into the previous display, we get\n\\[\\sin 2v_j=\\OO_\\Lambda\\left(t +\\frac{\\lambda+ \\sqrt{\\ell - |q|}}{\\sqrt{\\ell}}\\right),\\qquad\n\\cos 2v_j = - \\text{sgn}(q) + \\OO_\\Lambda\\left(t^2 +\\frac{\\lambda^2+ \\ell - |q|}{\\ell}\\right).\\]\nRecalling also \\eqref{g}, and using \\eqref{r1} again, we infer that\n\\[\\beta \\ll_\\Lambda\\|g\\|\\left(\\frac{\\lambda}{\\alpha\\sqrt{\\ell}} +\\frac{\\lambda+ \\sqrt{\\ell - |q|}}{\\sqrt{\\ell}}\\right).\\]\nHence we always have\n\\[1 \\ll_\\Lambda \\frac{\\|g\\|\\lambda}{\\alpha\\beta\\sqrt{\\ell}}\n\\qquad \\text{or} \\qquad 1 \\ll_\\Lambda \\|g\\|\\lambda\\frac{1+\\sqrt{\\ell - |q|}}{\\beta\\sqrt{\\ell}}.\\]\nIn either case, for any $c > 0$, the previous display combined with \\eqref{r1} yields that\n\\begin{align*}\n\\frac{\\ell}{1 + \\sqrt{\\ell - |q|}}\\int_{\\mcM} t \\, \\dd \\phi\\, \\dd t\n&\\ll_{\\Lambda,c}\\frac{\\lambda^2}{(1+\\sqrt{\\ell - |q|})\\alpha^2}\n\\left(\\left(\\frac{\\|g\\|\\lambda}{\\alpha\\beta\\sqrt{\\ell}}\\right)^c+ \\|g \\|\\lambda\\frac{1+\\sqrt{\\ell - |q|}}{\\beta\\sqrt{\\ell}}\\right)\\\\[4pt]\n& \\ll_{\\eps,\\Lambda,c} \\ell^\\eps\\left(\\frac{\\| g \\|^c}{\\ell^{c\/2} \\alpha^{2 + c} \\beta^c}+\\frac{\\| g \\|}{\\sqrt{\\ell} \\alpha^2 \\beta}\\right).\n\\end{align*}\nChoosing $c = 2$, and recalling our initial assumption $\\sqrt{\\ell}\\alpha^2\\beta>\\| g \\|$,\nwe obtain the second bound of \\eqref{beginThm6} in stronger form.\n\nThe proof of Theorem~\\ref{thm6} is complete.\n\n\\subsection{Proof of Theorem~\\ref{thm5}\\ref{thm5-a}}\\label{thm5a-proof-sec}\nThe averaged spherical trace function $\\varphi_{\\nu,\\ell}^{\\ell, q}(g)$ exhibits starkly different behavior depending 
on the value of $-\\ell\\leq q\\leq\\ell$. Some of these features are already visible along $K=\\SU_2(\\CC)$. From \\eqref{varphi-def-proof} and \\eqref{post-expansion} we can see that, in the notation of \\eqref{decomp-K} and \\eqref{matrix-coeff},\n\\[\\varphi_{\\nu,\\ell}^{\\ell,q}(k[u,v,w])=\\Phi_{q,q}^{\\ell}(k[u,v,w])=\ne^{2\\pi i q(u+w)}(\\cos v)^{2q}P_{\\ell-q}^{(0,2q)}(\\cos 2v).\\]\nThe absolute value of the right-hand side exhibits a primary peak at $v\\in\\pi\\ZZ$ of size 1. For $q=\\pm\\ell$, this is followed by a sharp drop to $\\OO_N(\\ell^{-N})$ after a range of length about $\\ell^{-1\/2}$. For a generic $q$, the drop becomes soft through a highly oscillatory range of magnitude $\\ell^{-1\/2}$ (faster and more oscillatory for smaller $q$) and a secondary, Airy-type peak of size about $\\ell^{-1\/3}$ before the delayed sharp drop. For $q=0$, the secondary peak grows to a full peak of size 1 at $v\\in\\frac12\\pi+\\pi\\ZZ$ (corresponding to skew-diagonal matrices in $K$) and the sharp drop disappears. These varying features, which are illustrated in Figure~\\ref{Jacobi-figure}, become vastly more complicated off $K$, where the hard work in Theorems~\\ref{thm6} and \\ref{thm5} lies. Nevertheless, their traces are visible in the hard localization to $\\mcD$ (but none to $K$!) for $q=\\pm\\ell$ and the hard localization to $\\mcN$ with soft localization to $\\mcS\\subset K\\subset\\mcN$ for $q=0$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.24\\textwidth]{Jacobi1.pdf}\n\\includegraphics[width=0.24\\textwidth]{Jacobi2.pdf}\n\\includegraphics[width=0.24\\textwidth]{Jacobi3.pdf}\n\\includegraphics[width=0.24\\textwidth]{Jacobi4.pdf}\n\\caption{Plots of $(\\cos v)^{2q}P_{\\ell-q}^{(0,2q)}(\\cos 2v)$ for $0\\leq v\\leq\\pi$, $\\ell=120$, $q=120$, $q=100$, $q=20$, and $q=0$.}\n\\label{Jacobi-figure}\n\\end{figure}\n\nIn this subsection, we consider in more detail the case $q = 0$. 
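The features just described, and the panels of Figure~\\ref{Jacobi-figure}, are easy to reproduce numerically. The sketch below (the helper names `jacobi` and `profile` are ours, not from the paper) evaluates $(\\cos v)^{2q}P_{\\ell-q}^{(0,2q)}(\\cos 2v)$ via the standard three-term recurrence for Jacobi polynomials with first parameter $0$:

```python
import numpy as np

def jacobi(n, beta, x):
    # P_n^{(0,beta)}(x) by the standard three-term recurrence (first parameter 0)
    p_prev = 1.0
    p = 1.0 + (beta/2.0 + 1.0)*(x - 1.0)
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        a = 2*k*(k + beta)*(2*k + beta - 2)
        b = (2*k + beta - 1)*((2*k + beta)*(2*k + beta - 2)*x - beta**2)
        c = 2*(k - 1)*(k + beta - 1)*(2*k + beta)
        p_prev, p = p, (b*p - c*p_prev)/a
    return p

def profile(ell, q, v):
    # (cos v)^(2q) * P_{ell-q}^{(0,2q)}(cos 2v), the quantity plotted in the figure
    return np.cos(v)**(2*q) * jacobi(ell - q, 2*q, np.cos(2*v))

# primary peak of size 1 at v = 0 for every q
assert abs(profile(120, 120, 0.0) - 1.0) < 1e-9
assert abs(profile(120, 100, 0.0) - 1.0) < 1e-9
# for q = 0, a full peak of size 1 at v = pi/2
assert abs(abs(profile(120, 0, np.pi/2)) - 1.0) < 1e-9
# for q = l, the sharp drop: negligible already at v = 1/2
assert abs(profile(120, 120, 0.5)) < 1e-10
```

Plotting `profile(120, q, v)` over $0\\leq v\\leq\\pi$ for $q\\in\\{120,100,20,0\\}$ reproduces the four panels of Figure~\\ref{Jacobi-figure}.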
Then \\eqref{keyupperbound} simplifies to\n\\begin{equation}\\label{keyupperbound:q=0}\n\\varphi_{\\nu,\\ell}^{\\ell,0}(g) \\ll_\\Lambda\\sqrt{\\ell}\n\\int_{\\mcM} \\frac{t}{(1+t^2)^2} \\, \\dd \\phi\\, \\dd t + \\ell^{-\\Lambda},\n\\end{equation}\nwhere by \\eqref{eq} and the last paragraph of \\S \\ref{sec:preliminary-computations}, the set $\\mcM$ can be described by the constraints given in \\eqref{r1} and\n\\begin{equation}\\label{eq0}\n\\begin{split}\n2t\\sin 2v_1\\cos(\\phi+\\Delta)&=(1-t^2)\\cos 2v_1 +\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell}),\\\\\n2t\\sin 2v_2\\cos(\\phi-\\Delta)&=(t^2-1)\\cos 2v_2 +\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell}).\\\\\n\\end{split}\n\\end{equation}\nWe shall use the notations\n\\begin{align*}\nP(\\phi) &:= \\max(|\\sin 2v_1 \\cos(\\phi + \\Delta)|, |\\sin 2v_2 \\cos(\\phi-\\Delta)|),\\\\\nR & := \\max(|\\cos 2v_1|, |\\cos2v_2|),\\\\\nN & := \\max(|\\sin(2v_1+2v_2) \\cos\\Delta|, |\\sin(2v_1-2v_2)\\sin\\Delta|).\n\\end{align*}\nRecall also the earlier notations \\eqref{adbc} and \\eqref{Dgdef}. As\n\\[|a|^2 - |d|^2 = \\frac{r^2-r^{-2}}{2}(\\cos 2v_1 + \\cos 2v_2), \\qquad\n|b|^2 - |c|^2 = \\frac{r^2-r^{-2}}{2}(\\cos 2v_1 - \\cos 2v_2),\\]\nwe can identify $\\mcN$ as the set of matrices with $r=1$ or $\\cos 2v_1 = \\cos 2v_2 = 0$. More precisely,\nby \\eqref{eq0} and \\eqref{r1} we have\n\\[D(g)\\ll r(r-1)R\\ll_\\Lambda r(r-1)\\left(t +\\frac{\\lambda}{\\sqrt{\\ell}}\\right)\\ll_\\Lambda\\frac{r^2\\lambda}{\\sqrt{\\ell}},\\]\nso that unless $D(g)\\ll_\\Lambda\\|g\\|^2\\lambda \\ell^{-1\/2}$, we have $\\mcM=\\emptyset$, yielding $\\varphi^{\\ell, 0}_{\\nu, \\ell}(g)\\ll_\\Lambda\\ell^{-\\Lambda}$. 
Hence we are left with proving \\eqref{thm5boundq=0}.\n\nIn \\eqref{keyupperbound:q=0}, the contribution of the $t$-integral over the interval $[0,\\ell^{-\\Lambda\/2-1\/4}]$ is negligible, and we split the rest of $\\mcM$ into dyadic ranges $\\mcM(\\delta)$ according to $\\ell^{-\\Lambda\/2-1\/4}<t\\asymp\\delta\\leq 1$. If $\\delta>1\/2$, then for any fixed $t$, \\eqref{eq0} localizes $\\phi$ to a set of measure $\\OO_{\\Lambda}(\\lambda\/\\sqrt{\\ell})$. Otherwise, for any fixed $\\phi$, \\eqref{eq0} localizes $t$ to a set of measure $\\OO_{\\Lambda}(\\lambda\/\\sqrt{\\ell})$. We conclude that\n\\begin{equation}\\label{I1}\n\\meas(\\mcM(\\delta))\\ll_\\Lambda\\lambda\/\\sqrt{\\ell}.\n\\end{equation}\n\nNow we prove the alternative bound\n\\begin{equation}\\label{I2b}\n\\meas(\\mcM(\\delta))\\ll_\\Lambda\\frac{\\lambda^4}{N\\ell}.\n\\end{equation}\nWe shall assume that $N\\ell>1$, for otherwise \\eqref{I2b} follows from \\eqref{I1}. Under this assumption, we have $\\max(|\\sin 2v_1|,|\\sin 2v_2|)\\gg\\ell^{-1}$, which implies that\n\\[\n\\meas(\\{(\\phi,t)\\in \\mcM(\\delta):P(\\phi)\\leq \\ell^{-3}\\}) \\ll \\ell^{-1}.\n\\]\nIndeed, if $\\phi$ changes by at least $\\ell^{-1}$ and at most $\\pi\/4$, then $\\cos(\\phi\\pm\\Delta)$ both change by $\\Omega(\\ell^{-2})$, hence $P(\\phi)$ changes by $\\Omega(\\ell^{-3})$. This implies that $P(\\phi)\\leq \\ell^{-3}$ localizes $\\phi$ to a set of measure $\\OO(\\ell^{-1})$.\nTherefore, the contribution of $\\{(\\phi,t)\\in \\mcM(\\delta):P(\\phi)\\leq \\ell^{-3}\\}$ to the left-hand side of \\eqref{I2b} is\n$\\OO_{\\Lambda}(\\delta\/\\sqrt{\\ell})$, which is admissible by $N\\leq 1$. We decompose the rest of $\\mcM(\\delta)$ into dyadic ranges $\\mcM(\\delta,\\mtP)$ according to $\\ell^{-3}\\leq P(\\phi)\\asymp \\mtP\\leq 1$.
The number of such ranges is $\\OO(\\log \\ell)$, hence in order to verify \\eqref{I2b}, it suffices to prove\n\\[\\meas(\\mcM(\\delta,\\mtP))\\ll_\\Lambda\\frac{\\lambda^2}{N\\ell}.\\]\nThe proof of this estimate immediately reduces to the following two localizations:\n\\begin{equation}\\label{phi-loc}\n\\meas(\\{\\phi\\in[0,2\\pi]:(\\phi,t)\\in\\mcM(\\delta,\\mtP)\\text{ for some $t\\asymp \\delta$}\\}) \\ll_{\\Lambda} \\frac{(\\mtP+R)\\lambda}{N\\sqrt{\\ell}},\n\\end{equation}\nand for any $\\phi\\in[0,2\\pi]$,\n\\begin{equation}\\label{t-loc}\n\\meas(\\{t\\in[0,1]:(\\phi,t)\\in\\mcM(\\delta,\\mtP)\\}) \\ll_{\\Lambda} \\frac{\\lambda}{(\\mtP+R)\\sqrt{\\ell}}.\n\\end{equation}\nNow we prove these localizations.\n\nStarting out from \\eqref{eq0}, we execute two eliminations: one to eliminate the main terms of the right-hand sides, and the other one to eliminate the left-hand sides. Introducing\n\\[\nF(\\phi):= \\cos \\phi \\cos \\Delta \\sin(2v_1+2v_2) - \\sin \\phi \\sin \\Delta \\sin(2v_1-2v_2),\n\\]\nthese give\n\\[\ntF(\\phi) \\ll_{\\Lambda} R \\lambda\/\\sqrt{\\ell}\\qquad\\text{and}\\qquad (1-t^2) F(\\phi) \\ll_{\\Lambda} \\mtP \\lambda \/\\sqrt{\\ell}.\n\\]\nIn particular, we obtain both for $t>1\/2$ and $t\\leq 1\/2$ that\n\\begin{equation}\\label{bound-on-Fphi}\nF(\\phi) \\ll_{\\Lambda} (\\mtP+R) \\lambda \/\\sqrt{\\ell}.\n\\end{equation}\nLetting\n\\[\nN':= \\sqrt{\\sin^2(2v_1+2v_2)\\cos^2(\\Delta) + \\sin^2(2v_1-2v_2)\\sin^2(\\Delta)} \\asymp N,\n\\]\nand choosing $\\psi\\in [0,2\\pi)$ such that\n\\[\n\\cos\\psi = \\frac{\\sin(2v_1+2v_2)\\cos\\Delta}{N'},\\qquad\n\\sin\\psi = - \\frac{\\sin(2v_1-2v_2)\\sin \\Delta}{N'},\n\\]\n\\eqref{bound-on-Fphi} gives rise to\n\\[\n\\cos(\\phi-\\psi) \\ll_{\\Lambda} \\frac{(\\mtP+R) \\lambda }{N\\sqrt{\\ell}}.\n\\]\nThis localizes $\\phi$ to a set of measure $\\OO_{\\Lambda}((\\mtP+R)\\lambda\/N\\sqrt{\\ell})$. 
Indeed, if the right-hand side is very small in terms of the implied constant, then $\\phi-\\psi$ is bounded away from $\\pi\\ZZ$, hence the derivative of $\\cos$ at $\\phi-\\psi$ is bounded away from zero, while otherwise the claimed localization is trivial. This gives \\eqref{phi-loc}. Fixing $\\phi\\in[0,2\\pi]$ and solving, subject to \\eqref{eq0}, the quadratic equation in $t$ with the larger discriminant, we see by \\eqref{disc-lower-bound} that $t$ is localized to a set of measure $\\OO_{\\Lambda}(\\lambda\/(\\mtP+R)\\sqrt{\\ell})$. This gives \\eqref{t-loc}. Altogether, the proof of \\eqref{I2b} is complete.\n\nCombining \\eqref{I1} and \\eqref{I2b}, we obtain\n\\[\\meas(\\mcM(\\delta))\\ll_{\\eps,\\Lambda}\\ell^{\\eps-1}\\mu,\\qquad\\mu:=\\min(\\sqrt{\\ell},N^{-1}).\\]\nWe claim that\n\\begin{equation}\\label{toprovedelta-b}\n\\dist(g,\\mcS)\\ll_{\\Lambda}\\lambda\\delta^{-1}\\mu^{-1},\\qquad \\text{if $\\mcM(\\delta)\\neq\\emptyset$.}\n\\end{equation}\nThis implies the inequality\n\\[\\sqrt{\\ell}\\int_{\\mcM(\\delta)} t\\, \\dd \\phi\\, \\dd t\\ll_{\\eps,\\Lambda}\\ell^{\\eps-1\/2}\\delta\\mu\n\\ll_{\\eps,\\Lambda}\\frac{\\ell^\\eps}{\\sqrt{\\ell}\\dist(g,\\mcS)},\\]\nwhich, summed over the $\\OO(\\log\\ell)$ dyadic ranges for $\\delta$, suffices for the proof of \\eqref{thm5boundq=0}. Note that the bound $\\varphi_{\\nu,\\ell}^{\\ell,q}(g)\\ll_\\eps\\ell^{\\eps}$ is already covered by Theorem~\\ref{thm6}.\n\nTo complete the proof of Theorem~\\ref{thm5}\\ref{thm5-a}, it remains to show \\eqref{toprovedelta-b}. For this final argument, we can and we shall assume that $-\\pi\/8\\leq v_1,v_2\\leq 3\\pi\/8$, because replacing $(u_1,v_1)$ by $(-u_1,v_1+\\pi\/2)$, or $(v_2,w_2)$ by $(v_2+\\pi\/2,-w_2)$, has the effect of multiplying $g$ by $\\left(\\begin{smallmatrix}&i\\\\i&\\end{smallmatrix}\\right)$ from either side without altering $\\Delta$ or the statement \\eqref{toprovedelta-b}.
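As an aside, the two elementary identities driving \\eqref{phi-loc} --- that eliminating the main terms on the right-hand sides of \\eqref{eq0} produces exactly $F(\\phi)$, and that $F(\\phi)=N'\\cos(\\phi-\\psi)$ with the stated $\\psi$ --- can be spot-checked numerically. A minimal sketch (assuming NumPy; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    v1, v2, Delta, phi = rng.uniform(0.0, 2*np.pi, size=4)
    F = (np.cos(phi)*np.cos(Delta)*np.sin(2*v1 + 2*v2)
         - np.sin(phi)*np.sin(Delta)*np.sin(2*v1 - 2*v2))
    # eliminating the right-hand sides of (eq0): the coefficient combination is F(phi)
    combo = (np.sin(2*v1)*np.cos(phi + Delta)*np.cos(2*v2)
             + np.sin(2*v2)*np.cos(phi - Delta)*np.cos(2*v1))
    assert abs(combo - F) < 1e-10
    # the rotation trick: F(phi) = N' * cos(phi - psi)
    Nprime = np.hypot(np.sin(2*v1 + 2*v2)*np.cos(Delta),
                      np.sin(2*v1 - 2*v2)*np.sin(Delta))
    psi = np.arctan2(-np.sin(2*v1 - 2*v2)*np.sin(Delta),
                     np.sin(2*v1 + 2*v2)*np.cos(Delta))
    assert abs(F - Nprime*np.cos(phi - psi)) < 1e-10
```

Here `combo` is the left-hand-side coefficient obtained by multiplying the first equation in \\eqref{eq0} by $\\cos 2v_2$, the second by $\\cos 2v_1$, and adding, so that the main terms on the right-hand sides cancel.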
We fix a pair $(\\phi,t)\\in\\mcM(\\delta)$.\n\nNow, $N\\leq \\mu^{-1}$ implies that\n\\begin{equation}\\label{eq:toprovedelta-c}\nv_1+v_2\\in\\frac{\\pi}2\\ZZ+\\OO\\left(\\frac1{\\mu}\\right)\\qquad\\text{and}\\qquad\nv_1-v_2\\in\\frac{\\pi}2\\ZZ+\\OO\\left(\\frac1{\\mu|\\Delta|}\\right).\n\\end{equation}\nLet us introduce the short-hand notation\n\\[m[v]:=\\begin{pmatrix}\\cos v&i\\sin v\\\\i\\sin v&\\cos v\\end{pmatrix},\\qquad v\\in\\RR.\\]\nKeeping \\eqref{decomp-K} and \\eqref{g} in mind, we observe initially that\n\\begin{equation}\\label{initialdistance}\nm[v_1]\\diag\\left(re^{i\\Delta},r^{-1}e^{-i\\Delta}\\right)m[v_2]=m[v_1+v_2]+\\OO\\bigl(r-1+|\\Delta|\\bigr).\n\\end{equation}\nOn the right-hand side, we have $\\dist(m[v_1+v_2],\\mcS)\\ll\\mu^{-1}$ by \\eqref{eq:toprovedelta-c}, and also\n\\begin{equation}\\label{rbound}\nr-1\\ll_\\Lambda\\frac{\\lambda}{t\\sqrt{\\ell}}\\ll\\frac{\\lambda}{\\delta\\mu}\n\\end{equation}\nby \\eqref{r1} and $\\mu\\leq\\sqrt{\\ell}$. Hence \\eqref{toprovedelta-b} follows from \\eqref{initialdistance} as long as $\\Delta\\ll_\\Lambda\\lambda\\delta^{-1}\\mu^{-1}$. In other words, we can and we shall assume that $|\\Delta|\\gg_\\Lambda\\lambda\\delta^{-1}\\mu^{-1}$ holds with a sufficiently large implied constant depending on $\\Lambda$. In particular, we shall assume that the error terms in \\eqref{eq:toprovedelta-c}, and similar error terms for angles in the rest of this subsection, are less than $\\pi\/8$ in size. Under this assumption, \\eqref{eq:toprovedelta-c} breaks into two cases.\n\n\\emph{Case 1:} $v_1,v_2\\ll\\mu^{-1}|\\Delta|^{-1}$ and $v_1+v_2\\ll\\mu^{-1}$. 
In this case, we refine \\eqref{initialdistance} to\n\\begin{align*}\n&m[v_1]\\diag\\left(re^{i\\Delta},r^{-1}e^{-i\\Delta}\\right)m[v_2]\\\\\n&=m[v_1+v_2]+m[v_1]\\diag\\left(re^{i\\Delta}-1,r^{-1}e^{-i\\Delta}-1\\right)m[v_2]\\\\\n&=m[v_1+v_2]+\\diag\\left(re^{i\\Delta}-1,r^{-1}e^{-i\\Delta}-1\\right)+\\OO\\bigl(r-1+\\mu^{-1}\\bigr)\\\\\n&=\\diag\\left(e^{i\\Delta},e^{-i\\Delta}\\right)+\\OO\\bigl(r-1+\\mu^{-1}\\bigr).\n\\end{align*}\nThe main term $\\diag\\left(e^{i\\Delta},e^{-i\\Delta}\\right)$ lies in $\\mcS$, hence \\eqref{toprovedelta-b} follows by \\eqref{rbound}.\n\n\\emph{Case 2:} $v_1,v_2=\\pi\/4+\\OO(\\mu^{-1}|\\Delta|^{-1})$ and $v_1+v_2=\\pi\/2+\\OO(\\mu^{-1})$. As we shall see, this case does not occur. The assumptions imply that $\\sin 2v_1$ and $\\sin 2v_2$ exceed $1\/2$. We multiply the second equation in \\eqref{eq0} by $\\sin 2v_1$, and the first equation in \\eqref{eq0} by $\\sin 2v_2$. Adding and subtracting the resulting two equations, we obtain\n\\begin{align*}\n4t\\sin 2v_1\\sin 2v_2\\cos\\phi\\cos\\Delta&=(t^2-1)\\sin(2v_1-2v_2)+\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell}),\\\\\n4t\\sin 2v_1\\sin 2v_2\\sin\\phi\\sin\\Delta&=(t^2-1)\\sin(2v_1+2v_2)+\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell}).\n\\end{align*}\nWe infer that\n\\[\\delta\\ll|t\\cos\\phi|+|t\\sin\\phi|\n\\ll_\\Lambda|\\sin(2v_1-2v_2)|+\\frac{|\\sin(2v_1+2v_2)|}{|\\Delta|}+\\frac{\\lambda}{\\sqrt{\\ell}|\\Delta|}\n\\ll_\\Lambda\\frac{\\lambda}{\\mu|\\Delta|}.\\]\nThis contradicts our earlier assumption that $|\\Delta|\\gg_\\Lambda\\lambda\\delta^{-1}\\mu^{-1}$ holds with a sufficiently large implied constant depending on $\\Lambda$.\n\nThe proof of Theorem~\\ref{thm5}\\ref{thm5-a} is complete.\n\n\\subsection{Proof of Theorem~\\ref{thm5}\\ref{thm5-b}}\nWe finally consider the case $q = \\pm \\ell$. By the symmetries \\eqref{distinvariance} and \\eqref{eq:averaged-spherical-function-symmetry}, we can restrict to $q = \\ell$.
We have already shown the bound $\\varphi^{\\ell,\\ell}_{\\nu,\\ell}(g)\\ll_\\eps\\ell^{\\eps}$ in greater generality in Theorem~\\ref{thm6}. As a first step, we complement this with a stronger bound for $r \\geq 2$. To this end, we return to \\eqref{post-expansion}. As $q=\\ell$, the binomial coefficient and the $J$-factor disappear. When $\\overline{I}^{2\\ell}$ is expanded, we see a Laurent polynomial of $e^{2iu}$. When we integrate in $u$ from $0$ to $\\pi$, all the terms but the constant one vanish.\nWe calculate the constant term using the binomial theorem and the original product definition of $I$. This way we see that\n\\begin{align*}\n\\varphi_{\\nu,\\ell}^{ \\ell,\\ell}(g) = \\ & d_{\\ell} e^{2i\\ell(u_1-u_2-w_1+w_2)} r^{2\\ell}\n\\sum_{m=0}^{2\\ell}\\binom{2\\ell}{m}^2 (r^{-2}e^{2iu_2+2iw_1}\\cos v_1\\cos v_2)^m\\\\\n&(-\\sin v_1\\sin v_2)^{2\\ell-m}\n\\int_0^{\\pi\/2} (\\sin^2 v)^m (\\cos^2 v)^{2\\ell-m} \\frac{\\sin 2v}{h(r,v)^{\\ell+1-\\nu}}\\,\\dd v.\n\\end{align*}\nUsing the variable $x:=\\sin^2 v$, we rewrite this as\n\\begin{align*}\n\\varphi_{\\nu,\\ell}^{ \\ell,\\ell}(g) = \\ & d_{\\ell} e^{2i\\ell(u_1-u_2-w_1+w_2)} r^{2\\nu-2}\n\\sum_{m=0}^{2\\ell}\\binom{2\\ell}{m}^2\n(r^{-2}e^{2iu_2+2iw_1}\\cos v_1\\cos v_2)^m\\\\\n&(-\\sin v_1\\sin v_2)^{2\\ell-m}\\int_0^1 \\frac{x^m (1-x)^{2\\ell-m}}{(1-x+r^{-4}x)^{\\ell+1-\\nu}}\\,\\dd x.\n\\end{align*}\nWith the short-hand notation\n\\[U:=r^{-1}e^{iu_2+iw_1}\\sqrt{x\\cos v_1\\cos v_2}\\qquad\\text{and}\\qquad V:=i\\sqrt{(1-x)\\sin v_1\\sin v_2},\\]\nwe obtain finally\n\\begin{align}\\label{eq:phi-ell-ell-ell-bound}\n\\begin{split}\n\\bigl|\\varphi_{\\nu,\\ell}^{\\ell,\\ell}(g)\\bigr| \\leq &\\ \\frac{2\\ell+1}{r^2}\\left|\n\\int_0^1\\sum_{m=0}^{2\\ell}\\binom{2\\ell}{m}^2\\frac{U^{2m}V^{4\\ell-2m}}{(1-x+r^{-4}x)^{\\ell+1-\\nu}}\\,\\dd x \\right|\\\\[4pt]\n=&\\ \\frac{2\\ell+1}{r^2} 
\\left|\\int_0^1\\frac{1}{2\\pi}\\int_0^{2\\pi}\n\\frac{(Ue^{i\\phi}+V)^{2\\ell}(Ue^{-i\\phi}+V)^{2\\ell}}{(1-x+r^{-4}x)^{\\ell+1-\\nu}}\\,\\dd\\phi\\,\\dd x\\right|.\n\\end{split}\n\\end{align}\nUsing that $(Ue^{i\\phi}+V)(Ue^{-i\\phi}+V)=U^2+V^2+2UV\\cos\\phi$ is on the line segment connecting $(U+V)^2$ and $(U-V)^2$, we observe that\n\\[\\frac{|(Ue^{i\\phi}+V)(Ue^{-i\\phi}+V)|^2}{1-x+r^{-4}x}\\leq \\max_\\pm \\frac{|U\\pm V|^4}{1-x+r^{-4}x},\\]\nwhich by the Cauchy--Schwarz inequality can be further upper bounded by\n\\[\\leq\\frac{(1-x+r^{-2}x)^2}{1-x+r^{-4}x}=1-\\frac{x}{1+\\frac{2}{r^2-1}+\\frac{1}{\\left(r^2-1\\right)^2 (1-x)}}.\\]\nHence the contribution to the rightmost expression in \\eqref{eq:phi-ell-ell-ell-bound} of $x\\in[0,1]$ satisfying\n\\[x>\\delta\\left(1+\\frac{2}{r^2-1}+\\frac{1}{\\left(r^2-1\\right)^2 (1-x)}\\right),\\qquad\\delta:=\\frac{\\log\\ell}{\\ell},\\]\nis admissible for \\eqref{thm5boundq=ell}. By $r\\geq 2$, the remaining values $x\\in[0,1]$ satisfy\n\\[x<3\\delta\\qquad\\text{or}\\qquad x(1-x)<\\frac{3\\delta}{(r^2-1)^2},\\]\nhence also $x<3\\delta$ or $1-x<8\\delta\/r^4$. 
So the remaining contribution is\n\\[\\leq\\frac{2\\ell+1}{r^2}\\int_{[0,3\\delta)\\cup(1-8\\delta\/r^4,1]}\\frac{\\dd x}{1-x+r^{-4}x}\\ll\\frac{\\log\\ell}{r^2},\\]\nwhich is again admissible for \\eqref{thm5boundq=ell}.\n\nBy \\eqref{keyupperbound}, it remains to show that\n\\begin{equation}\\label{toprovedelta-a}\n\\dist(g, \\mcD) \\ll_\\Lambda\\| g \\|\\lambda\/\\sqrt{\\ell},\\qquad \\text{if $\\mcM\\neq\\emptyset$.}\n\\end{equation}\nIn the present case $q=\\ell$, the condition \\eqref{eq} simplifies to\n\\begin{equation}\\label{eqell}\n\\begin{split}\n2t\\sin 2v_1\\cos(\\phi+\\Delta)&=(1-t^2)\\cos 2v_1 + (1 + t^2)+\\OO_\\Lambda(\\lambda^2\/\\ell),\\\\\n2t\\sin 2v_2\\cos(\\phi-\\Delta)&=(t^2-1)\\cos 2v_2 - (1 + t^2)+\\OO_\\Lambda(\\lambda^2\/\\ell),\n\\end{split}\n\\end{equation}\nhence for the proof of \\eqref{toprovedelta-a} we can and we shall assume that $|v_1+v_2|\\leq\\pi\/2$. Indeed, replacing $v_1$ by $v_1+\\pi$ has the effect of replacing $g$ by $-g$ without altering $\\Delta$ or the statement \\eqref{toprovedelta-a}. 
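As an aside, the averaging identity behind \\eqref{eq:phi-ell-ell-ell-bound}, namely
\\[\\frac{1}{2\\pi}\\int_0^{2\\pi}(Ue^{i\\phi}+V)^{2\\ell}(Ue^{-i\\phi}+V)^{2\\ell}\\,\\dd\\phi=\\sum_{m=0}^{2\\ell}\\binom{2\\ell}{m}^2U^{2m}V^{4\\ell-2m},\\]
which is valid for arbitrary complex $U,V$, can be spot-checked numerically for small $\\ell$. A minimal sketch (assuming NumPy; the sample values are arbitrary, not from the paper):

```python
import numpy as np
from math import comb

ell = 3
U, V = 0.4 + 0.3j, 0.2 - 0.5j          # arbitrary complex sample values
phi = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
lhs = np.mean((U*np.exp(1j*phi) + V)**(2*ell) * (U*np.exp(-1j*phi) + V)**(2*ell))
rhs = sum(comb(2*ell, m)**2 * U**(2*m) * V**(4*ell - 2*m) for m in range(2*ell + 1))
assert abs(lhs - rhs) < 1e-12
```

Equally spaced sampling integrates trigonometric polynomials of degree below the sample count exactly (here the degree is only $4\\ell$), so the check is limited only by rounding.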
We fix a pair $(\\phi,t)\\in\\mcM$.\n\nThe two equations in \\eqref{eqell} yield readily that\n\\[(\\sin 2v_j)^2 \\leq 2+2\\cos 2v_j\\ll_\\Lambda t^2+t|\\sin 2v_j|+\\lambda^2\/\\ell.\\]\nHence $\\sin 2v_j \\ll_\\Lambda t+\\lambda\/\\sqrt{\\ell}$, that is,\n\\begin{equation}\\label{pi-coset}\nv_1, v_2 \\in \\frac{\\pi}{2}\\ZZ +\\OO_\\Lambda\\left(t+\\frac{\\lambda}{\\sqrt{\\ell}}\\right).\n\\end{equation}\nCombining \\eqref{eqell} with the Cauchy--Schwarz inequality, we also get\n\\[ (1+t^2)^2+\\OO_\\Lambda(\\lambda^2\/\\ell)\\leq (1-t^2)^2+4t^2\\cos^2(\\phi\\pm\\Delta).\\]\nEquivalently,\n\\[\\sin(\\phi\\pm\\Delta)\\ll_\\Lambda \\frac{\\lambda}{t\\sqrt{\\ell}}.\\]\nUsing also our initial assumption $|\\Delta|\\leq\\pi\/4$, we conclude that\n\\begin{equation}\\label{cos-ess-1}\n\\Delta\\ll_\\Lambda\\frac{\\lambda}{t\\sqrt{\\ell}}\\qquad\\text{and}\n\\qquad\\phi\\in\\pi\\ZZ+\\OO_\\Lambda\\left(\\frac{\\lambda}{t\\sqrt{\\ell}}\\right).\n\\end{equation}\nIn particular, $\\cos(\\phi\\pm\\Delta)=\\epsilon+\\OO_\\Lambda(\\lambda^2\/t^2\\ell)$ for some $\\epsilon\\in\\{\\pm 1\\}$. 
Plugging this back into \\eqref{eqell}, and using also \\eqref{pi-coset} along with\n\\[t \\left(t+\\frac{\\lambda}{\\sqrt{\\ell}}\\right) \\min\\left(1, \\frac{\\lambda^2}{t^2\\ell}\\right) \\ll \\frac{\\lambda^2}{\\ell},\\]\nwe obtain\n\\begin{equation}\\label{eqell-rewrite}\n\\begin{split}\n2t\\epsilon\\sin 2v_1&=(1-t^2)\\cos 2v_1 + (1 + t^2)+\\OO_\\Lambda(\\lambda^2\/\\ell),\\\\\n2t\\epsilon\\sin 2v_2&=(t^2-1)\\cos 2v_2 - (1 + t^2)+\\OO_\\Lambda(\\lambda^2\/\\ell).\n\\end{split}\n\\end{equation}\nNow consider the following three unit vectors in $\\RR^2$:\n\\[\\mbv_1:=(\\cos 2v_1,\\sin 2v_1),\\qquad \\mbv_2:=(\\cos 2v_2,-\\sin 2v_2),\\qquad\n\\mbt:=\\left(\\frac{t^2-1}{t^2+1},\\frac{2t\\epsilon}{t^2+1}\\right).\\]\nBy \\eqref{eqell-rewrite}, the scalar products $\\mbv_j\\cdot\\mbt$ are $1+\\OO_\\Lambda(\\lambda^2\/\\ell)$, hence the directed angles $\\arg(\\mbv_j)-\\arg(\\mbt)$ lie in $2\\pi\\ZZ+\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell})$. It follows that\n\\[\\arg(\\mbv_1)-\\arg(\\mbv_2)\\in2\\pi\\ZZ+\\OO_\\Lambda(\\lambda\/\\sqrt{\\ell}),\\]\nand then the assumption $|v_1+v_2|\\leq\\pi\/2$ forces that\n\\begin{equation}\\label{eps}\nv_1+v_2\\ll_\\Lambda\\lambda\/\\sqrt{\\ell}.\n\\end{equation}\n\nWe are now ready to complete the proof of Theorem~\\ref{thm5}\\ref{thm5-b}.
By \\eqref{pi-coset} and \\eqref{eps}, there exists a multiple $v$ of $\\pi\/2$ such that\n\\begin{align*}\nm[v_1]&=m[v]+\\OO_\\Lambda\\bigl(t+\\lambda\/\\sqrt{\\ell}\\bigr),\\\\\nm[v_2]&=m[-v]+\\OO_\\Lambda\\bigl(t+\\lambda\/\\sqrt{\\ell}\\bigr),\\\\\nm[v_1+v_2]&=\\id+\\OO_\\Lambda\\bigl(\\lambda\/\\sqrt{\\ell}\\bigr).\n\\end{align*}\nTherefore, using also \\eqref{r1} and \\eqref{cos-ess-1}, we conclude that\n\\begin{align*}\n&m[v_1]\\diag\\left(re^{i\\Delta},r^{-1}e^{-i\\Delta}\\right)m[v_2]\\\\\n&=m[v_1+v_2]+m[v_1]\\diag\\left(re^{i\\Delta}-1,r^{-1}e^{-i\\Delta}-1\\right)m[v_2]\\\\\n&=m[v_1+v_2]+m[v]\\diag\\left(re^{i\\Delta}-1,r^{-1}e^{-i\\Delta}-1\\right)m[-v]\n+\\OO_\\Lambda\\bigl(r\\lambda\/\\sqrt{\\ell}\\bigr)\\\\\n&=m[v]\\diag\\left(re^{i\\Delta},r^{-1}e^{-i\\Delta}\\right)m[-v]\n+\\OO_\\Lambda\\bigl(r\\lambda\/\\sqrt{\\ell}\\bigr).\n\\end{align*}\nThe main term $m[v]\\diag\\left(re^{i\\Delta},r^{-1}e^{-i\\Delta}\\right)m[-v]$ lies in $\\mcD$, hence \\eqref{toprovedelta-a} follows.\n\nThe proof of Theorem~\\ref{thm5}\\ref{thm5-b} is complete.\n\n\\section{Proof of Theorem~\\ref{thm1}}\\label{thm1-proof-sec}\nIn this section, we prove Theorem~\\ref{thm1}. Lemma~\\ref{APTI-done-lemma}, which results from the amplified pre-trace inequality and estimates on the spherical trace function, provides an estimate on $|\\Phi(g)|^2$ for $g\\in\\Omega$ in terms of the Diophantine counts $M(g,L,\\mcL,\\vec{\\delta})$. We begin with the key remaining step of estimating these counts.\n\nWe allow all implied constants within this section to depend on $\\Omega$, and we drop the subscript from the notation. Moreover, we adopt the notation $A\\preccurlyeq B$ to mean that $|A|\\ll_{\\eps}\\ell^\\eps L^\\eps B$, where $\\eps>0$ is fixed but may be taken as small as desired at each step, and the implied constant is allowed to depend on $\\eps$.
\n\nFor each $\\mcL\\in\\{1,L^2,L^4\\}$ and $\\vec{\\delta}=(\\delta_1,\\delta_2)$ with $0<\\delta_1,\\delta_2\\leq\\ell^\\eps$, we will estimate the count $M(g,L,\\mcL,\\vec{\\delta})$ of matrices\n\\begin{align}\\label{detgamma}\n\\begin{aligned}\n&\\gamma=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in\\MM_2(\\ZZ[i]),\n\\qquad&&\\det\\gamma=n\\in D(L,\\mcL),\\qquad |n|\\asymp\\mcL^{1\/2},\\\\\n&g^{-1}\\tilde{\\gamma}g=k\\begin{pmatrix} z&u\\\\&z^{-1}\\end{pmatrix}k^{-1}\n&&\\text{for some $k\\in K$ such that}\\\\\n&&&\\text{$|z|\\geq 1$, $\\min|z\\pm 1|\\leq\\delta_1$, $|u|\\leq\\delta_2$,}\n\\end{aligned}\n\\end{align}\nwhere as before $\\tilde{\\gamma} = \\gamma\/\\sqrt{n}$. By the symmetry $\\gamma\\leftrightarrow -\\gamma$, we can and we shall assume that $|z-1|\\leq|z+1|$. Then the conditions imply that both $|z-1|$ and $|z^{-1}-1|$ are at most $\\delta_1$, hence\n\\[\\left|\\frac{a+d}{\\sqrt{n}}-2\\right|=|\\tr\\tilde\\gamma-2|=|z+z^{-1}-2|=|z-1||z^{-1}-1|\\leq\\delta_1^2.\\]\nOn the other hand, since $\\|g\\|\\asymp_{\\Omega}1$, we also have that\n\\[ \\|\\tilde{\\gamma}-\\id\\|=\\left\\|gk\\begin{pmatrix}z-1&u\\\\&z^{-1}-1\\end{pmatrix}k^{-1}g^{-1}\\right\\|\n\\ll\\delta_1+\\delta_2. \\]\nSummarizing, we need to estimate the number of matrices $\\gamma$ as in \\eqref{detgamma} such that\n\\begin{equation}\\label{abcd-conditions}\n\\left|a+d-2\\sqrt{n}\\right|\\leq\\delta_1^2\\sqrt{|n|},\\qquad |a-d|,|b|,|c|\\ll(\\delta_1+\\delta_2)\\sqrt{|n|}.\n\\end{equation}\nIn particular, we have $|a+d|\\preccurlyeq\\sqrt{|n|}$ and\n\\begin{equation}\\label{trace-cor}\n(a-d)^2+4bc=(a+d)^2-4n\\preccurlyeq\\delta_1^2|n|.\n\\end{equation}\n\nAs is often the case, parabolic matrices $\\gamma$ (those with trace $\\pm 2\\sqrt{n}$) play a distinctive role in this counting problem, and we split the count accordingly into the parabolic and non-parabolic subcounts as\n\\[ M(g,L,\\mcL,\\vec{\\delta})=M^{\\pp}(g,L,\\mcL,\\vec{\\delta})+M^{\\np}(g,L,\\mcL,\\vec{\\delta}). 
\\]\nWe shall prove the following result using \\eqref{detgamma}, \\eqref{abcd-conditions}, and \\eqref{trace-cor}.\n\n\\begin{lemma}\\label{counting-for-thm1}\nLet $\\Omega\\subset G$ be a compact subset, $L\\geq 1$, and $\\mcL\\in\\{1,L^2,L^4\\}$. For $g\\in\\Omega$ and $\\vec{\\delta}=(\\delta_1,\\delta_2)$ with $0<\\delta_1,\\delta_2\\preccurlyeq 1$, we have the following bounds.\n\\begin{align}\n\\label{Mbound1}M(g,L,1,\\vec{\\delta})&\\preccurlyeq_{\\Omega}1,\\\\\n\\label{Mbound2}M^{\\pp}(g,L,\\mcL,\\vec{\\delta})&\\preccurlyeq_{\\Omega} \\mcL^{1\/2}+\\mcL\\delta_2^2,\\\\\n\\label{Mbound3}M^{\\np}(g,L,L^2,\\vec{\\delta})&\\preccurlyeq_{\\Omega} L^4\\delta_1^4(\\delta_1^2+\\delta_2^2),\\\\\n\\label{Mbound4}M^{\\np}(g,L,L^4,\\vec{\\delta})&\\preccurlyeq_{\\Omega} L^6\\delta_1^4(\\delta_1^2+\\delta_2^2).\n\\end{align}\nMoreover,\n\\begin{equation}\\label{otherwise-vanishes}\nM^{\\np}(g,L,\\mcL,\\vec{\\delta})=0\\qquad\\text{unless}\\qquad\\delta_1\\succcurlyeq\\mcL^{-1\/4}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nThe bound \\eqref{Mbound1} is immediate from \\eqref{abcd-conditions}. We turn to the bound \\eqref{Mbound2}, which counts parabolic matrices $\\gamma$. In this case, we have $(a-d)^2+4bc=0$ and $z=1$, hence in particular \\eqref{abcd-conditions} holds with $0$ in place of $\\delta_1$. If $bc\\neq 0$, then there are $\\ll\\mcL^{1\/2}$ choices for $a+d=2\\sqrt{n}$, and $\\ll\\mcL^{1\/2}\\delta_2^2$ choices for $a-d\\neq 0$. The difference $a-d$ determines the product $bc$ uniquely, hence by the divisor bound, there are $\\preccurlyeq 1$ choices for $(b,c)$. This is admissible for \\eqref{Mbound2}. If $bc=0$, then there are $\\ll\\mcL^{1\/2}$ choices for $a=d=\\sqrt{n}$, and $\\ll 1+\\mcL^{1\/2}\\delta_2^2$ choices for $(b,c)$. 
This is again admissible for \\eqref{Mbound2}.\n\nFrom now on we count non-parabolic matrices $\\gamma$, in which case $(a-d)^2+4bc\\neq 0$.\nThe statement \\eqref{otherwise-vanishes} is immediate from \\eqref{trace-cor}, so we are left with proving \\eqref{Mbound3} and \\eqref{Mbound4}. If $bc\\neq 0$, then there are $\\preccurlyeq\\mcL^{1\/2}$ choices for $a+d$, and $\\ll 1+\\mcL^{1\/2}(\\delta_1^2+\\delta_2^2)$ choices for $a-d$. For given $a-d$, there are $\\preccurlyeq 1+\\mcL\\delta_1^4$ choices for $(b,c)$ by \\eqref{trace-cor} and the divisor bound. This is admissible for \\eqref{Mbound3} in the light of \\eqref{otherwise-vanishes}.\nIf $bc=0$, then there are $\\preccurlyeq\\mcL^{1\/2}$ choices for $a+d$, $\\preccurlyeq\\mcL^{1\/2}\\delta_1^2$ choices for $a-d\\neq 0$ by \\eqref{trace-cor}, and $\\ll 1+\\mcL^{1\/2}(\\delta_1^2+\\delta_2^2)$ choices for $(b,c)$. This is again admissible for \\eqref{Mbound3} in the light of \\eqref{otherwise-vanishes}. In the high range $\\mcL=L^4$, this argument gives the bound\n\\[M^{\\np}(g,L,L^4,\\vec{\\delta})\\preccurlyeq_{\\Omega} L^8\\delta_1^4(\\delta_1^2+\\delta_2^2),\\]\nwhich is weaker than \\eqref{Mbound4} by a factor of $L^2$. However, we can save a factor of $L^2=\\mcL^{1\/2}$ as follows. In the high range, $n=l_1^2l_2^2$ is a square, and the nonzero quantity $(a-d)^2+4bc=(a+d)^2-4n$ factors as $(a+d+2l_1l_2)(a+d-2l_1l_2)$.
Hence the triple $(a-d,b,c)$ determines $a+d$ up to $\\preccurlyeq 1$ possibilities by the divisor bound, while earlier we considered $\\preccurlyeq\\mcL^{1\/2}$ possibilities for $a+d$.\n\\end{proof}\n\nCombining Lemmata~\\ref{APTI-done-lemma} and \\ref{counting-for-thm1}, we obtain that\n\\[\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\\preccurlyeq_{I,\\Omega}\\ell^3\n\\left(\\frac1L+S^{\\pp}(L)+S^{\\np}(L,L^2)+S^{\\np}(L,L^4)\\right)+L^{2}\\ell^{-48},\\]\nwhere\n\\begin{alignat*}{3}\nS^{\\pp}(L)&:=\\sum_{\\substack{\\vec{\\delta}\\text{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta_j\\preccurlyeq 1}}\\frac1{\\sqrt{\\ell}\\delta_2}\\left(\\frac{L+L^2\\delta_2^2}{L^3}+\\frac{L^2+L^4\\delta_2^2}{L^4}\\right)&&\\preccurlyeq\\frac1{L^2}+\\frac1{\\sqrt{\\ell}},\\\\\nS^{\\np}(L,L^2)&:=\\sum_{\\substack{\\vec{\\delta}\\text{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta_j\\preccurlyeq 1}}\n\\frac1{\\ell\\delta_1^2}\\cdot\\frac{L^4\\delta_1^4(\\delta_1^2+\\delta_2^2)}{L^3}&&\\preccurlyeq\\frac{L}{\\ell},\\\\\nS^{\\np}(L,L^4)&:=\\sum_{\\substack{\\vec{\\delta}\\text{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta_j\\preccurlyeq 1}}\\frac1{\\ell\\delta_1^2}\\cdot\\frac{L^6\\delta_1^4(\\delta_1^2+\\delta_2^2)}{L^4}&&\\preccurlyeq\\frac{L^2}{\\ell}.\n\\end{alignat*}\nPutting everything together, we conclude that\n\\[\\sum_{\\phi\\in\\mfB}|\\phi(g)|^2\\preccurlyeq_{I,\\Omega}\\ell^3\n\\left(\\frac1L+\\frac1{\\sqrt{\\ell}}+\\frac{L^2}{\\ell}\\right)+L^2\\ell^{-48}\\ll\\ell^{8\/3}, \\]\nby making the essentially optimal choice $L:=7\\ell^{1\/3}$ (which satisfies our earlier\ncondition $L\\geq 7$).\n\nThe proof of Theorem~\\ref{thm1} is complete.\n\n\\section{Proof of Theorem~\\ref{thm3}}\\label{thm2proof-sec}\nIn this section, we prove Theorem~\\ref{thm3}. 
For $q= 0$, Lemma~\\ref{APTI-done-lemma-single-form} provides an estimate on $|\\phi_q(g)|^2$ for $g\\in\\Omega$ in terms of the Diophantine count $M_0^{\\ast}(g,L,\\mcL,\\delta)$, while for $q = \\pm \\ell$, by Lemma~\\ref{q=ell-case}, we need to analyze $Q(g, L, H_1,H_2)$. We begin by estimating these counts. We keep the notational conventions from \\S\\ref{thm1-proof-sec}.\n\n\\subsection{A comparison lemma}\nThe Diophantine counts in Lemmata~\\ref{APTI-done-lemma-single-form} and \\ref{q=ell-case} involve the position of the matrix $g^{-1}\\tilde{\\gamma}g$ relative to certain special sets, which we now explicate in preparation for a counting argument. Using $g\\in\\Omega$, we may write explicitly\n\\[ g=\\begin{pmatrix} g_1&g_2\\\\g_3&g_4\\end{pmatrix},\\qquad g_j\\ll 1. \\]\nAn explicit calculation shows that\n\\[ g^{-1}\\begin{pmatrix} a&b\\\\c&d\\end{pmatrix}g=\\begin{pmatrix}\\frac{a+d}{2}+L_1&L_2\\\\L_3&\\frac{a+d}{2}-L_1\\end{pmatrix}, \\]\nwhere\n\\begin{equation}\\label{coordinates}\n\\begin{alignedat}{7}\nL_1&=\\hphantom{-}(a-d)\\big(\\tfrac12+g_2g_3\\big)&&+bg_3g_4&&-cg_1g_2, \\\\\nL_2&=\\hphantom{-}(a-d)g_2g_4&&+bg_4^2&&-cg_2^2,\\\\\nL_3&=-(a-d)g_1g_3&&-bg_3^2&&+cg_1^2.\n\\end{alignedat}\n\\end{equation}\nWe record the following simple but effective result, which will be used in both parts of Theorem~\\ref{thm3}.\n\n\\begin{lemma}\\label{l2l3-small}\nLet $\\Omega\\subset G$ be a compact subset, and $g\\in\\Omega$.
Let $a,b,c,d\\in\\CC$ and $\\Delta>0$ be such that\n$L_2,L_3\\ll\\Delta$.\n\\begin{enumerate}[(a)]\n\\item\\label{1213-a}\nFor at least one $s\\in\\{a-d,b,c\\}$, we have\n\\[\\begin{bmatrix} a-d&b&c\\end{bmatrix}^{\\top}\n=\\begin{bmatrix}\\lambda_1&\\lambda_2&\\lambda_3\\end{bmatrix}^{\\top}s+\\OO(\\Delta)\\]\nwith $\\lambda_1,\\lambda_2,\\lambda_3\\ll 1$ depending only on $g$.\n\\item\\label{1213-b}\nFor the same choice of $s\\in\\{a-d,b,c\\}$, we have\n\\[ (a-d)^2+4bc=\\mu s^2+\\OO(\\Delta |s|+\\Delta^2), \\]\nwith $\\mu=\\lambda_1^2+4\\lambda_2\\lambda_3\\gg 1$. If additionally $(a-d)^2+4bc=0$, then $a-d,b,c\\ll\\Delta$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nWe may write the defining equations for $L_2$ and $L_3$ as $\\begin{bmatrix} L_2&L_3\\end{bmatrix}^{\\top}=M\\begin{bmatrix} a-d&b&c\\end{bmatrix}^{\\top}$ for a $2\\times 3$ matrix $M$ whose $2\\times 2$ minors we compute to be\n\\[ \\begin{vmatrix}g_4^2&-g_2^2\\\\-g_3^2&g_1^2\\end{vmatrix}=g_1g_4+g_2g_3,\\quad \\begin{vmatrix} g_2g_4&-g_2^2\\\\-g_1g_3&g_1^2\\end{vmatrix}=g_1g_2,\\quad \\begin{vmatrix} g_2g_4&g_4^2\\\\-g_1g_3&-g_3^2\\end{vmatrix}=g_3g_4. \\]\nAt least one of these minors exceeds $1\/3$ in absolute value, since\n\\[(g_1g_4+g_2g_3)^2-4g_1g_2g_3g_4=1.\\]\nConsider the case when $|g_1g_4+g_2g_3|>1\/3$. Then we may solve the latter two equations in \\eqref{coordinates} for $b$, $c$, which yields\n\\[ \\begin{bmatrix} b\\\\c\\end{bmatrix}=\\begin{bmatrix}-g_1g_2\\\\g_3g_4\\end{bmatrix}\\frac{a-d}{g_1g_4+g_2g_3}+\\OO(\\Delta).\\]\nThis settles the first claim in the lemma with $s=a-d$. The second claim follows from\n\\[ (a-d)^2+4bc=\\frac{(a-d)^2}{(g_1g_4+g_2g_3)^2}+\\OO\\big(\\Delta|a-d|+\\Delta^2\\big). \\]\nThe other cases (of which it suffices to consider one) are similar. 
For example, under $|g_1g_2|>1\/3$ we have\n\\[ \\begin{bmatrix}a-d\\\\c\\end{bmatrix}=\\begin{bmatrix}-g_1g_4-g_2g_3\\\\-g_3g_4\\end{bmatrix}\\frac{b}{g_1g_2}+\\OO(\\Delta),\\quad (a-d)^2+4bc=\\frac{b^2}{(g_1g_2)^2}+\\OO\\big(\\Delta|b|+\\Delta^2\\big), \\]\nfrom which the lemma follows.\n\\end{proof}\n\n\\subsection{Second moment count for $q=\\pm\\ell$}\nWe will now establish an upper bound for the quantity $Q(g,L, H_1, H_2)$ counting pairs of matrices $(\\gamma_1,\\gamma_2)$ such that\n\\begin{equation}\\label{thm2a-KS-conditions}\n\\begin{gathered}\n\\gamma_j=\\begin{pmatrix} a_j&b_j\\\\c_j&d_j\\end{pmatrix}\\in\\MM_2(\\ZZ[i]),\n\\qquad \\det\\gamma_1=\\det\\gamma_2=n,\\qquad L \\leq |n|\\leq 2 L,\\\\\n\\|g^{-1}\\tilde{\\gamma}_jg\\|\\leq\\sqrt{\\frac{H_j}{L}},\\qquad\n\\dist(g^{-1}\\tilde{\\gamma}_jg,\\mcD)\\ll\\sqrt{\\frac{H_j\\log\\ell}{L\\ell}}.\n\\end{gathered}\n\\end{equation}\nWe denote the quantities in \\eqref{coordinates} corresponding to $\\gamma_j$ as\n$L_{1j}$, $L_{2j}$, $L_{3j}$. From \\eqref{coordinates} and \\eqref{thm2a-KS-conditions} we deduce that\n\\begin{equation}\\label{2a-KS-immediate}\n\\|\\gamma_j\\|\\ll\\sqrt{H_j},\\qquad L_{2j},L_{3j}\\preccurlyeq\\sqrt{H_j\/\\ell},\n\\end{equation}\nand\n\\begin{equation}\\label{det-cond}\n(a_1+d_1)^2-(a_2+d_2)^2=(a_1-d_1)^2+4b_1c_1-(a_2-d_2)^2-4b_2c_2.\n\\end{equation}\nWe shall prove the following result using \\eqref{thm2a-KS-conditions}, \\eqref{2a-KS-immediate}, and \\eqref{det-cond}.\n\n\\begin{lemma}\\label{lemma-ell-count}\nLet $\\Omega\\subset G$ be a compact subset and $L\\geq 1$. For $g\\in\\Omega$ and\n$1\\leq H_1, H_2\\preccurlyeq\\ell$, we have\n\\begin{equation}\\label{Qbound}\nQ(g,L,H_1, H_2)\\preccurlyeq_\\Omega H_1H_2.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} We shall use that the entries $a_j,b_j,c_j,d_j\\in\\ZZ[i]$ of each participating $\\gamma_j$ satisfy the conditions of Lemma~\\ref{l2l3-small} with $\\Delta_j\\preccurlyeq 1$ in the role of $\\Delta$. 
Indeed, this follows from \\eqref{2a-KS-immediate} and $H_1, H_2\\preccurlyeq\\ell$.\n\nLet $s_j\\in\\{a_j-d_j,b_j,c_j\\}$ be as in Lemma~\\ref{l2l3-small}\\ref{1213-a}. By Lemma~\\ref{l2l3-small}\\ref{1213-a} and \\eqref{2a-KS-immediate}, for a given pair $(s_1,s_2)$, there are $\\preccurlyeq 1$ choices for the two triples $(a_j-d_j,b_j,c_j)$, which then determine both sides of \\eqref{det-cond}. Using this preliminary observation, we do the counting in two steps.\n\nFirst we count $(\\gamma_1,\\gamma_2)$ satisfying \\eqref{thm2a-KS-conditions} and $(a_1+d_1)^2\\neq(a_2+d_2)^2$. By \\eqref{2a-KS-immediate}, there are $\\ll H_1H_2$ choices for the pair $(s_1, s_2)$, hence $\\preccurlyeq H_1H_2$ choices for the two triples $(a_j-d_j,b_j,c_j)$. Given the triples, by \\eqref{2a-KS-immediate}--\\eqref{det-cond} and the divisor bound, there are $\\preccurlyeq 1$ choices for $(a_1+d_1,a_2+d_2)$. This is admissible for \\eqref{Qbound}.\n\nNow we count $(\\gamma_1,\\gamma_2)$ satisfying \\eqref{thm2a-KS-conditions} and $(a_1+d_1)^2=(a_2+d_2)^2$.\nIn this case, Lemma~\\ref{l2l3-small}\\ref{1213-b} coupled with \\eqref{2a-KS-immediate}--\\eqref{det-cond} shows that\n$s_1^2-s_2^2\\preccurlyeq\\sqrt{H_1}+\\sqrt{H_2}$. Hence, by the divisor bound (separating the case when $s_1^2=s_2^2$), there are $\\preccurlyeq\\max(H_1,H_2)$ choices for the pair $(s_1, s_2)$ and same for the two triples $(a_j-d_j,b_j,c_j)$. Independently of the triples, by \\eqref{2a-KS-immediate}, there are $\\ll\\min(H_1,H_2)$ choices for $(a_1+d_1,a_2+d_2)$. This is again admissible for \\eqref{Qbound}.\n\\end{proof}\n\n\\subsection{Interlude: a first moment count}\\label{thm2a-first-moment-sec}\nFor the proof of Theorem~\\ref{thm2} in \\S \\ref{sec-proof2} below, we need a variation of the previous Diophantine argument that is most conveniently stated and proved at this point. 
\nFor $\\mcL\\in\\{1,L^2,L^4\\}$ and every $0<\\delta\\preccurlyeq 1$, we will establish an upper bound on the quantity\n\\begin{equation}\\label{thm2a-conditions}\nM_\\mcD(g,L,\\mcL,\\eps,\\delta):=\\sum_{n\\in D(L,\\mcL)}\n\\#\\left\\{\\gamma\\in\\Gamma_n:\\|g^{-1}\\tilde\\gamma g\\|\\ll\\ell^{\\eps},\\,\\,\\dist(g^{-1}\\tilde\\gamma g,\\mcD)\\leq\\delta\\right\\},\n\\end{equation}\nwhere the implied constant is absolute. As before, we conclude from the conditions in \\eqref{thm2a-conditions} and the explicit description in \\eqref{coordinates} that\n\\begin{equation}\\label{coeff-bound-2a_l2l3-delta0}\n\\|\\gamma\\|\\preccurlyeq\\mcL^{1\/4}\\qquad\\text{and}\\qquad L_2,L_3\\ll\\mcL^{1\/4}\\delta.\n\\end{equation}\nWe shall prove the following result using \\eqref{coeff-bound-2a_l2l3-delta0} and the identity\n\\begin{equation}\\label{parabolicidentity}\n(a-d)^2+4bc=(a+d)^2-4n.\n\\end{equation}\n\n\\begin{lemma}\\label{first-moment-count-lemma}\nLet $\\Omega\\subset G$ be a compact subset, $L\\geq 1$, and $\\eps>0$. For $g\\in\\Omega$ and $0<\\delta\\preccurlyeq 1$, we have the following bounds.\n\\begin{align}\n\\label{Nbound1}M_{\\mcD}(g,L,1,\\eps,\\delta)&\\preccurlyeq_{\\Omega} 1,\\\\\n\\label{Nbound2}M_{\\mcD}(g,L,L^2,\\eps,\\delta)&\\preccurlyeq_{\\Omega} L^2+L^4\\delta^4,\\\\\n\\label{Nbound3}M_{\\mcD}(g,L,L^4,\\eps,\\delta)&\\preccurlyeq_{\\Omega} L^2+L^6\\delta^4.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nThe bound \\eqref{Nbound1} corresponds to $\\mcL=1$, and it is immediate from \\eqref{coeff-bound-2a_l2l3-delta0}. Hence we focus on the bounds \\eqref{Nbound2}--\\eqref{Nbound3} that correspond to $\\mcL\\in\\{L^2,L^4\\}$. We shall use that the entries $a,b,c,d\\in\\ZZ[i]$ of each participating $\\gamma$ satisfy the conditions of Lemma~\\ref{l2l3-small} with $\\Delta=\\mcL^{1\/4}\\delta$, as follows from \\eqref{coeff-bound-2a_l2l3-delta0}.\n\nFirst we count parabolic matrices $\\gamma$. 
In this case, we have $(a-d)^2+4bc=0$, hence also $a-d,b,c\\ll\\mcL^{1\/4}\\delta$ by Lemma~\\ref{l2l3-small}\\ref{1213-b}. If $bc\\neq 0$, then there are $\\ll\\mcL^{1\/2}$ choices for $a+d=\\pm 2\\sqrt{n}$, and $\\ll\\mcL^{1\/2}\\delta^2$ choices for $a-d\\neq 0$. The difference $a-d$ determines the product $bc$ uniquely, hence by the divisor bound, there are $\\preccurlyeq 1$ choices for $(b,c)$. This is admissible for \\eqref{Nbound2}--\\eqref{Nbound3}. If $bc=0$, then there are $\\ll\\mcL^{1\/2}$ choices for $a=d=\\pm\\sqrt{n}$, and $\\ll 1+\\mcL^{1\/2}\\delta^2$ choices for $(b,c)$. This is again admissible for \\eqref{Nbound2}--\\eqref{Nbound3}.\n\nNow we count non-parabolic matrices $\\gamma$, in which case $(a-d)^2+4bc\\neq 0$. Let $s\\in\\{a-d,b,c\\}$ be as in Lemma~\\ref{l2l3-small}\\ref{1213-a}. There are $\\preccurlyeq\\mcL^{1\/2}$ choices both for $s$ and for $a+d$. For a given $s$, by Lemma~\\ref{l2l3-small}\\ref{1213-a}, there are $\\ll 1+\\mcL\\delta^4$ choices for the triple $(a-d,b,c)$. This is admissible for \\eqref{Nbound2}. In the high range $\\mcL=L^4$, this argument gives the bound $\\preccurlyeq L^4+L^8\\delta^4$ for the relevant count, which is weaker than \\eqref{Nbound3} by a factor of $L^2$. However, we can save a factor of $L^2=\\mcL^{1\/2}$ as follows. In the high range, $n=l_1^2l_2^2$ is a square, so by \\eqref{parabolicidentity} the nonzero quantity $(a-d)^2+4bc=(a+d)^2-4n$ factors as $(a+d+2l_1l_2)(a+d-2l_1l_2)$. 
Hence the triple $(a-d,b,c)$ determines $a+d$ up to $\\preccurlyeq 1$ possibilities by the divisor bound, while earlier we considered $\\preccurlyeq\\mcL^{1\/2}$ possibilities for $a+d$.\n\\end{proof}\n\n\\subsection{Counting setup for $q=0$}\nFor each $\\mcL\\in\\{1,L^2,L^4\\}$ and $0<\\delta\\preccurlyeq 1$, we will establish an upper bound on the quantity $M_0^{\\ast}(g,L,\\mcL,\\delta)$ consisting of matrices\n\\begin{equation}\\label{thm2b-conditions}\n\\begin{gathered}\n\\gamma=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in\\MM_2(\\ZZ[i]),\n\\qquad\\det\\gamma=n\\in D(L,\\mcL),\\qquad |n|\\asymp\\mcL^{1\/2}\\\\\n\\dist(g^{-1}\\tilde{\\gamma}g,\\mcS)\\leq\\delta,\\qquad\\frac{D(g^{-1}\\tilde\\gamma g)}{\\|g^{-1}\\tilde\\gamma g\\|^2}\\ll\\frac{\\log\\ell}{\\sqrt{\\ell}}.\n\\end{gathered}\n\\end{equation}\nFrom the first distance condition in \\eqref{thm2b-conditions} we conclude that\n\\begin{equation}\\label{coeff-bound}\na,b,c,d\\preccurlyeq\\mcL^{1\/4}.\n\\end{equation}\nUsing the description in \\eqref{coordinates}, the distance conditions in \\eqref{thm2b-conditions} imply that\n\\begin{gather}\n\\label{cond1} \\left\\{\\begin{aligned} &L_2,L_3\\ll\\delta\\sqrt{|n|}\\\\ &\\left|\\tfrac{a+d}{2}\\pm L_1\\right|=(1+\\OO(\\delta))\\sqrt{|n|}\\end{aligned}\\right. \\qquad\\text{or}\\qquad\n\\left\\{\\begin{aligned} &a+d,L_1\\ll\\delta\\sqrt{|n|}\\\\ &|L_2|,|L_3|=(1+\\OO(\\delta))\\sqrt{|n|};\\end{aligned}\\right.\\\\\n\\label{cond0} \\left|\\tfrac{a+d}{2}+L_1\\right|^2-\\left|\\tfrac{a+d}{2}-L_1\\right|^2\n\\preccurlyeq\\sqrt{\\mcL\/\\ell}\\qquad\\text{and}\\qquad|L_2|^2-|L_3|^2\\preccurlyeq\\sqrt{\\mcL\/\\ell}.\n\\end{gather}\nAs in \\S\\ref{thm1-proof-sec}, we split the count into the parabolic and non-parabolic subcounts as\n\\[ M_0^{\\ast}(g,L,\\mcL,\\delta)=M_0^{\\ast\\pp}(g,L,\\mcL,\\delta)+M_0^{\\ast\\np}(g,L,\\mcL,\\delta). 
\\]\nWe shall prove the following result using \\eqref{thm2b-conditions}--\\eqref{cond0} and \\eqref{parabolicidentity}.\n\n\\begin{lemma}\\label{counting-for-thm2}\nLet $\\Omega\\subset G$ be a compact subset, $L\\geq 1$, and $\\mcL\\in\\{L^2,L^4\\}$. For $g\\in\\Omega$ and $0<\\delta\\preccurlyeq 1$, we have the following bounds.\n\\begin{align}\n\\label{Rbound1} M_0^{\\ast}(g,L,1,\\delta)&\\preccurlyeq_{\\Omega} 1,\\\\\n\\label{Rbound2} M_0^{\\ast\\pp}(g,L,\\mcL,\\delta)&\\preccurlyeq_{\\Omega}\\mcL^{1\/2}+\\mcL\\delta^2,\\\\\n\\label{Rbound3} M_0^{\\ast}(g,L,L^2,\\delta)&\\preccurlyeq_{\\Omega} L^{3\/2}+L^3\\delta^3+\\frac{L^2+L^{7\/2}\\delta^2}{\\sqrt{\\ell}}+\\frac{L^4\\delta^2}{\\ell},\\\\[4pt]\n\\label{Rbound4} M_0^{\\ast\\np}(g,L,L^4,\\delta)&\\preccurlyeq_{\\Omega} L^3+L^5\\delta^2+\\frac{L^4+L^6\\delta^2}{\\sqrt{\\ell}}.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nThe bound \\eqref{Rbound1} is immediate from \\eqref{coeff-bound}. For the proof of \\eqref{Rbound2}, we observe that, in the parabolic case, \\eqref{cond1} implies $L_2,L_3\\ll\\mcL^{1\/4}\\delta$. Indeed, this is clear when the first half of \\eqref{cond1} holds. Otherwise, the conditions $a+d=\\pm 2\\sqrt{n}$ and $a+d\\ll\\delta\\sqrt{|n|}$ force $\\delta\\gg 1$, so the claimed bound is clear again. Applying Lemma~\\ref{l2l3-small}\\ref{1213-b}, we infer that $a-d,b,c\\ll\\mcL^{1\/4}\\delta$ holds in the parabolic case. From here \\eqref{Rbound2} follows readily, as in the second paragraph of the proof of Lemma~\\ref{first-moment-count-lemma}. Finally, we shall prove \\eqref{Rbound3} and \\eqref{Rbound4} in the next two subsections.\n\\end{proof}\n\n\\subsection{Volume argument}\\label{subsec:volume-argument}\nHere, we present a volume argument that we will use repeatedly to estimate the number of lattice points satisfying \\eqref{thm2b-conditions}--\\eqref{cond0}. 
The symbol $\\vol$ will refer to the Lebesgue measure in $\\CC^m\\simeq\\RR^{2m}$, with $m$ being clear from the context.\n\nThe explicit expressions for the linear forms in \\eqref{coordinates} may be rewritten as \n\\begin{equation}\\label{Ag3}\n\\begin{bmatrix} L_1&L_2&L_3\\end{bmatrix}^{\\top}=A_0(g)\\begin{bmatrix}a-d&b&c\\end{bmatrix}^{\\top},\n\\end{equation}\nwhere $A_0:\\Omega\\to\\GL_3(\\CC)$ is a continuous function.\nIt is straightforward to verify that $\\det A_0(g)=1\/2$ holds identically. \nWe shall also use the $4$-dimensional variant\n\\begin{equation}\\label{Ag4}\n\\begin{bmatrix} a+d&L_1&L_2&L_3\\end{bmatrix}^{\\top}=\\diag(1,A_0(g))\\begin{bmatrix} a+d&a-d&b&c\\end{bmatrix}^{\\top}.\n\\end{equation}\n\nNow, let $m\\geq 1$ be a fixed integer ($m\\in\\{2,3,4\\}$ in our applications), and let $A:\\Omega\\to\\GL_m(\\CC)$ be a fixed continuous function. As $\\Omega$ is compact, there exists a fixed compact subset $K=K(A,\\Omega)\\subset\\CC^m$ such that each $2m$-dimensional lattice $A(g)\\ZZ[i]^m\\subset\\CC^m$ ($g\\in\\Omega)$ has a fundamental parallelepiped lying in $K$ and of volume $\\asymp 1$. It follows by a standard volume argument that for any compact subset $V\\subset\\CC^m$ and $g\\in\\Omega$ we have\n\\begin{equation}\\label{volume-bound}\n\\#\\bigl(V\\cap A(g)\\ZZ[i]^m\\bigr)\\ll\\vol V^\\bullet\\qquad\\text{where}\\qquad V^\\bullet:=V+K.\n\\end{equation}\n\nWe also record for repeated reference a simple volume computation. 
For $r,\\Delta>0$, we define the sets\n\\begin{align*}\nW_1(r,\\Delta)&:=\\bigl\\{(z_1,z_2)\\in\\CC^2:|z_1|,|z_2|\\leq r,\\,\\,\\Re(z_1\\overline{z_2})\\leq\\Delta\\bigr\\},\\\\\nW_2(r,\\Delta)&:=\\bigl\\{(z_1,z_2)\\in\\CC^2:|z_1|,|z_2|\\leq r,\\,\\,\\bigl||z_1|^2-|z_2|^2\\bigr|\\leq\\Delta\\bigr\\}.\n\\end{align*}\nCutting these into two parts according to whether $|z_2|\\leq |z_1|$ or $|z_2|>|z_1|$, we obtain readily by Fubini's theorem that\n\\[\\vol W_j(r,\\Delta)\\ll\\min(r^4,r^2\\Delta).\\]\nOn the other hand, we have\n\\[W_j(r,\\Delta)^\\bullet\\subset W_j\\bigl(r+\\OO(1),\\Delta+\\OO(r+1)\\bigr)\\]\nwith implied constants depending only on $A$ and $\\Omega$, hence\n\\begin{equation}\\label{volW}\n\\vol W_j(r,\\Delta)^{\\bullet}\\ll\\min\\bigl((r+1)^4,(r+1)^2(\\Delta+r+1)\\bigr)\\ll 1+r^2\\Delta+r^3.\n\\end{equation}\n\n\\subsection{Middle and high range for $q=0$}\nWe now estimate the count $M_0^{\\ast}(g,L,\\mcL,\\delta)$ in the ``middle range'' $\\mcL=L^2$ and the ``high range'' $\\mcL=L^4$. In the high range, we shall focus on the non-parabolic contribution $M_0^{\\ast\\np}(g,L,L^4,\\delta)$, since \nwe have already proved \\eqref{Rbound2}, and here we shall profit substantially from the fact that $\\det\\gamma$ is a square.\n\n\\subsubsection{Middle range}\nIn the middle range $\\mcL=L^2$, we estimate the number of choices in $M_0^{\\ast}(g,L,L^2,\\delta)$ as follows.\n\nFor the case when the first half of \\eqref{cond1} holds, we introduce the set\n\\begin{align*}\nV_1(\\delta):=\\big\\{(z_0,z_1,z_2,z_3)\\in\\CC^4:\\ &z_0,z_1\\preccurlyeq\\mcL^{1\/4},\\,\\,\n\\Re(z_0\\overline{z_1})\\preccurlyeq\\sqrt{\\mcL\/\\ell},\\\\\n&z_2,z_3\\ll\\mcL^{1\/4}\\delta,\\,\\,|z_2|^2-|z_3|^2\\preccurlyeq \\sqrt{\\mcL\/\\ell}\\big\\},\n\\end{align*}\nsuppressing from notation the dependence implicit in $\\preccurlyeq$. 
Then we have by \\eqref{volW}\n\\begin{align}\n\\nonumber\\vol V_1(\\delta)^{\\bullet}&\\preccurlyeq\n\\vol W_1(\\mcL^{1\/4},\\sqrt{\\mcL\/\\ell})^{\\bullet}\\cdot \n\\vol W_2(\\mcL^{1\/4}\\delta,\\sqrt{\\mcL\/\\ell})^{\\bullet}\\\\\n\\label{V1bound}&\\ll (\\mcL^{3\/4}+\\mcL\/\\sqrt{\\ell})(1+\\mcL^{3\/4}\\delta^3+\\mcL\\delta^2\/\\sqrt{\\ell}).\n\\end{align}\nFor the case when the second half of \\eqref{cond1} holds, we introduce the set\n\\begin{align*}\nV_2(\\delta)=\\big\\{(z_0,z_1,z_2,z_3)\\in\\CC^4:\\ &z_0,z_1\\ll\\mcL^{1\/4}\\delta,\\,\\,\n\\Re(z_0\\overline{z_1})\\preccurlyeq\\sqrt{\\mcL\/\\ell},\\\\\n&z_2,z_3\\preccurlyeq\\mcL^{1\/4},\\,\\,|z_2|^2-|z_3|^2\\preccurlyeq\\sqrt{\\mcL\/\\ell}\\big\\},\n\\end{align*}\nsuppressing from notation the dependence implicit in $\\preccurlyeq$. Then we have by \\eqref{volW}\n\\begin{align}\n\\nonumber\\vol V_2(\\delta)^{\\bullet}&\\preccurlyeq\n\\vol W_1(\\mcL^{1\/4}\\delta,\\sqrt{\\mcL\/\\ell})^{\\bullet}\\cdot \n\\vol W_2(\\mcL^{1\/4},\\sqrt{\\mcL\/\\ell})^{\\bullet}\\\\\n\\label{V2bound}&\\ll (\\mcL^{3\/4}+\\mcL\/\\sqrt{\\ell})(1+\\mcL^{3\/4}\\delta^3+\\mcL\\delta^2\/\\sqrt{\\ell}).\n\\end{align}\n\nUsing \\eqref{thm2b-conditions}--\\eqref{cond0}, \\eqref{Ag4}--\\eqref{volume-bound}, and \\eqref{V1bound}--\\eqref{V2bound}, we conclude \\eqref{Rbound3} in the form\n\\[ M_0^{\\ast}(g,L,L^2,\\delta)\\preccurlyeq (L^{3\/2}+L^2\/\\sqrt{\\ell})(1+L^{3\/2}\\delta^3+L^2\\delta^2\/\\sqrt{\\ell}). \\]\n\n\\subsubsection{High range}\nAs in the proof of Lemmata~\\ref{counting-for-thm1} and \\ref{first-moment-count-lemma}, in the high range $\\mcL=L^4$, once the triple $(a-d,b,c)$ is determined for a non-parabolic matrix $\\gamma$ (so that \\eqref{parabolicidentity} holds), $a+d$ and along with it $\\gamma$ is determined up to $\\preccurlyeq 1$ choices by the divisor bound, using that $n=l_1^2l_2^2$ is a square. 
We now estimate the number of choices in $M_0^{\\ast\\np}(g,L,L^4,\\delta)$ as follows.\n\nFor the case when the first half of \\eqref{cond1} holds, we introduce the set\n\\[ V_3(\\delta):=\\big\\{(z_1,z_2,z_3)\\in\\CC^3:\nz_1\\preccurlyeq\\mcL^{1\/4},\\,\\,z_2,z_3\\ll\\mcL^{1\/4}\\delta,\\,\\,|z_2|^2-|z_3|^2\\preccurlyeq\\sqrt{\\mcL\/\\ell}\\big\\}, \\]\nsuppressing from notation the dependence implicit in $\\preccurlyeq$. Then we have by \\eqref{volW}\n\\begin{equation}\\label{V3bound}\n\\vol V_3(\\delta)^{\\bullet}\\preccurlyeq\n\\sqrt{\\mcL}\\cdot\\vol W_2(\\mcL^{1\/4}\\delta,\\sqrt{\\mcL\/\\ell})^{\\bullet}\n\\ll \\sqrt{\\mcL}(1+\\mcL\\delta^2\/\\sqrt{\\ell}+\\mcL^{3\/4}\\delta^3).\n\\end{equation}\nFor the case when the second half of \\eqref{cond1} holds, we introduce the set\n\\[ V_4(\\delta):=\\big\\{(z_1,z_2,z_3)\\in\\CC^3:z_1\\ll\\mcL^{1\/4}\\delta,\\,\\,z_2,z_3\\preccurlyeq\\mcL^{1\/4},\\,\\,|z_2|^2-|z_3|^2\\preccurlyeq\\sqrt{\\mcL\/\\ell}\\big\\}, \\]\nsuppressing from notation the dependence implicit in $\\preccurlyeq$. Then we have by \\eqref{volW}\n\\begin{equation}\\label{V4bound}\n\\vol V_4(\\delta)^{\\bullet}\\preccurlyeq\n(1+\\mcL^{1\/4}\\delta)^2\\cdot\\vol W_2(\\mcL^{1\/4},\\sqrt{\\mcL\/\\ell})^{\\bullet}\n\\ll(1+\\sqrt{\\mcL}\\delta^2)(\\mcL^{3\/4}+\\mcL\/\\sqrt{\\ell}).\n\\end{equation}\n\nUsing \\eqref{parabolicidentity}, \\eqref{thm2b-conditions}--\\eqref{cond0}, \\eqref{Ag3}, \\eqref{volume-bound}, and \\eqref{V3bound}--\\eqref{V4bound}, we conclude \\eqref{Rbound4} in the form\n\\[ M_0^{\\ast\\np}(g,L,L^4,\\delta)\\preccurlyeq\nL^2(1+L^4\\delta^2\/\\sqrt{\\ell}+L^3\\delta^3)+(1+L^2\\delta^2)(L^3+L^4\/\\sqrt{\\ell}). 
\\]\n\nThe proof of Lemma~\\ref{counting-for-thm2} is complete.\n\n\\subsection{Proof of Theorem~\\ref{thm3}}\n\nIn the case $q=0$, we combine Lemmata~\\ref{APTI-done-lemma-single-form} and \\ref{counting-for-thm2} to see that\n\\[ |\\phi_0(g)|^2\\preccurlyeq_{I,\\Omega}\\ell^2\n\\left(\\frac1L+S^{\\ast}_0(L,L^2)+S^{\\ast}_0(L,L^4)\\right)+L^{2}\\ell^{-48}, \\]\nwhere\n\\begin{alignat*}{3}\nS_0^{\\ast}(L,L^2)&:=\\sum_{\\substack{\\delta\\text{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta\\preccurlyeq 1}}\n\\frac1{\\sqrt{\\ell}\\delta L^3}\n\\left(L^{3\/2}+L^3\\delta^3+\\frac{L^2+L^{7\/2}\\delta^2}{\\sqrt{\\ell}}+\\frac{L^4\\delta^2}{\\ell}\\right)\n&&\\preccurlyeq\\frac1{L^{3\/2}}+\\frac1{\\sqrt{\\ell}}+\\frac{L}{\\ell^{3\/2}},\\\\\nS_0^{\\ast}(L,L^4)&:=\\sum_{\\substack{\\delta\\text{ dyadic}\\\\1\/\\sqrt{\\ell}\\leq\\delta\\preccurlyeq 1}}\n\\frac1{\\sqrt{\\ell}\\delta L^4}\n\\left(L^3+L^5\\delta^2+\\frac{L^4+L^6\\delta^2}{\\sqrt{\\ell}}\\right)\n&&\\preccurlyeq\\frac1L+\\frac{L}{\\sqrt{\\ell}}+\\frac{L^2}{\\ell}.\n\\end{alignat*}\nPutting everything together, we conclude that\n\\[ |\\phi_0(g)|^2\\preccurlyeq_{I,\\Omega}\\ell^2\\left(\\frac1L+\\frac{L}{\\sqrt{\\ell}}+\\frac{L^2}{\\ell}\\right)+L^2\\ell^{-48}\\ll \\ell^{7\/4}, \\]\nby making the essentially optimal choice $L:=7\\ell^{1\/4}$ (which satisfies our earlier\ncondition $L\\geq 7$).\n\nThe case $q = \\pm \\ell$ is immediate from Lemmata~\\ref{q=ell-case} and \\ref{lemma-ell-count}, hence the proof of Theorem~\\ref{thm3} is complete.\n\n\\section{Proof of Theorem~\\ref{thm2}}\\label{sec-proof2}\nIn this section, we prove Theorem~\\ref{thm2}. 
Here we aim for the softest possible proof, based on the localization properties of the averaged spherical trace function (proved in Theorem~\\ref{thm6} and then encoded in the form of the amplified pre-trace inequality in Lemma~\\ref{APTI-done-lemma-single-form}) and the already available ingredients for the counting problem.\n\nFor each $\\mcL\\in\\{1,L^2,L^4\\}$ and $\\vec{\\delta}=(\\delta_1,\\delta_2)$ with $0<\\delta_1,\\delta_2\\leq\\ell^\\eps$, the count $M^\\ast(g,L,\\mcL,\\vec{\\delta})$ in Lemma~\\ref{APTI-done-lemma-single-form} may be estimated in a split fashion as\n\\[ M^\\ast(g,L,\\mcL,\\vec{\\delta})\\leq \\min\\big(M_K(g,L,\\mcL,\\delta_1),M_\\mcD(g,L,\\mcL,\\eps,\\delta_2)\\big), \\]\nwhere\n\\[M_K(g,L,\\mcL,\\delta):=\\sum_{n\\in D(L,\\mcL)}\\#\\left\\{\\gamma\\in\\Gamma_n:\\dist\\left(g^{-1}\\tilde\\gamma g,K\\right)\\leq\\delta\\right\\},\\]\nand $M_{\\mcD}(g,L,\\mcL,\\eps,\\delta)$ is as in \\eqref{thm2a-conditions}. The quantity $M_K(g,L,\\mcL,\\delta)$ is the classical Diophantine count in the spherical sup-norm problem in the eigenvalue aspect, which in the present context\nwas treated in detail in \\cite{BlomerHarcosMilicevic2016}. In the notation of that paper, we have:\n\\begin{itemize}\n\\item $u(\\tilde\\gamma gK,gK)\\asymp\\dist(g^{-1}\\tilde\\gamma g,K)^2$ in \\cite[(5.3)]{BlomerHarcosMilicevic2016};\n\\item $N=1$, and $r\\asymp_{\\Omega}1$ for $g\\in\\Omega$, in \\cite[(6.2)]{BlomerHarcosMilicevic2016}.\n\\end{itemize}\nThus the count $M_K(g,L,\\mcL,\\delta_1)$ agrees with $M(gK,L,\\mcL,\\OO(\\delta_1^2))$ in \\cite[(5.17)--(5.18)]{BlomerHarcosMilicevic2016}. 
Importing estimates \\cite[(7.1), (7.2), (7.5), (11.1), (11.6)]{BlomerHarcosMilicevic2016}, we conclude that\n\\[M_K(g,L,1,\\delta_1)\\preccurlyeq_{\\Omega} 1,\\quad\nM_K(g,L,L^2,\\delta_1)\\preccurlyeq_{\\Omega} L^2+L^4\\delta_1,\\quad\nM_K(g,L,L^4,\\delta_1)\\preccurlyeq_{\\Omega} L^3+L^6\\delta_1.\\]\n\nThe count $M_{\\mcD}(g,L,\\mcL,\\eps,\\delta)$ was estimated in Lemma~\\ref{first-moment-count-lemma}. Combining everything, we obtain the following lemma.\n\n\\begin{lemma}\\label{soft-counting-for-general-q}\nFor $g\\in\\Omega$, $L>0$, and arbitrary $\\eps>0$ and $\\vec{\\delta}=(\\delta_1,\\delta_2)$ with $0<\\delta_{j}\\preccurlyeq 1$, the quantity $M^\\ast(g,L,\\mcL,\\vec{\\delta})$ in Lemma~\\ref{APTI-done-lemma-single-form} satisfies\n\\begin{align*}\nM^\\ast(g,L,1,\\vec{\\delta})&\\preccurlyeq_{\\Omega} 1,\\\\\nM^\\ast(g,L,L^2,\\vec{\\delta})&\\preccurlyeq_{\\Omega}\\min\\big(L^2+L^4\\delta_1,L^2+L^4\\delta_2^4\\big),\\\\\nM^\\ast(g,L,L^4,\\vec{\\delta})&\\preccurlyeq_{\\Omega}\\min\\big(L^3+L^6\\delta_1,L^2+L^6\\delta_2^4\\big).\n\\end{align*}\n\\end{lemma}\n\nWe are now ready for the proof of Theorem~\\ref{thm2}. 
From Lemma~\\ref{soft-counting-for-general-q}, we have for every pair $\\vec{\\delta}=(\\delta_1,\\delta_2)$ with $0<\\delta_1,\\delta_2\\leq\\ell^{\\eps}$ that\n\\[\\frac{M^\\ast(g,L,1,\\vec{\\delta})}{L}+\\frac{M^\\ast(g,L,L^2,\\vec{\\delta})}{L^3}+\\frac{M^\\ast(g,L,L^4,\\vec{\\delta})}{L^4}\n\\preccurlyeq_{\\Omega} \\left(\\frac1L+L^2\\min\\left(\\delta_1,\\delta_2^4\\right)\\right).\\]\nInserting this into Lemma~\\ref{APTI-done-lemma-single-form}, we find that\n\\begin{align*}\n|\\phi_q(g)|^2&\\preccurlyeq_{I,\\Omega}\\ell^{2}\n\\sum_{\\substack{\\vec{\\delta}\\textnormal{ dyadic},\\,\\,\\delta_j\\preccurlyeq 1\\\\\\delta_1^2\\delta_2\\geq 1\/\\sqrt{\\ell}}}\n\\frac{1}{\\sqrt{\\ell}\\delta_1^2\\delta_2}\n\\left(\\frac1L+ L^2 \\min\\left(\\delta_1,\\delta_2^4\\right)\\right)+L^{2}\\ell^{-48}\\\\\n&\\preccurlyeq \\ell^{2} \\Biggl(\\frac1L+\n\\sum_{\\substack{\\vec{\\delta}\\textnormal{ dyadic},\\,\\,\\delta_j\\preccurlyeq 1\\\\\\delta_1^2\\delta_2\\geq 1\/\\sqrt{\\ell}}}\nL^2 \\min\\left(\\frac{1}{\\sqrt{\\ell}\\delta_1\\delta_2}, \\delta_1, \\delta_2^4\\right)\\Biggr)\n\\preccurlyeq \\ell^{2} \\left(\\frac1L + \\frac{L^2}{\\ell^{2\/9}}\\right),\n\\end{align*}\nwhere we used $\\min(A, B, C) \\leq A^{4\/9}B^{4\/9} C^{1\/9}$ in the last step.\nThe choice $L:=7\\ell^{2\/27}$ is optimal up to a constant, and it satisfies our earlier\ncondition $L\\geq 7$, hence we obtain Theorem~\\ref{thm2} in the form\n\\[{\\|\\phi_q|_{\\Omega}\\|}_\\infty\\preccurlyeq_{I,\\Omega}\\ell^{26\/27}.\\qedhere \\]\n\nThe proof of Theorem~\\ref{thm2} is complete.\n\n\\bibliographystyle{amsalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFrom AlexNet \\citep{krizhevsky2012imagenet}, VGG \\citep{simonyan2014very}, ResNet \\citep{he2016deep}, to SENet \\citep{hu2018squeeze}, tremendous research effort has been put in efficient deep learning model designs, leading to state-of-the-art performance for various machine learning tasks, including image 
recognition, object detection, and image segmentation \\citep{ren2015faster, long2015fully}. The success of deep learning models relies on sufficient training on a vast amount of data with correct labels, which, however, are difficult to acquire in practice. In particular, the privacy issue, which has attracted much attention recently, makes data collection from clients much more challenging \\citep{pmlr-v54-mcmahan17a}.\n\nTo train deep learning models adequately while protecting the privacy of clients, federated learning has emerged as the major driving force to enhance the data security of deep learning methods \\citep{pmlr-v54-mcmahan17a}. Instead of training models on data collected from clients, federated learning requires no data collection: each client trains the machine learning model on its local edge device and then uploads the model parameters to the central server. Since only machine learning models are exchanged over the air, the risk of the client's data leakage becomes much smaller \\citep{truex2019hybrid}. Straightforward as it may seem, integrating machine learning training methods into the framework of federated learning with negligible performance loss is no simple matter. This has triggered recent theoretical research in both machine learning and optimization \\cite{li2020on}. \n\nTherefore, in federated learning, it is crucial to consider how to use automated machine learning to search for neural architectures directly from the clients' data.\nThanks to gradient-based neural architecture search \\citep{liu2018darts, cai2018proxylessnas}, the model search can be represented as updating architecture parameters. \nSome NAS methods search for backbone cells on proxy data to trim down computational cost \\citep{liu2018darts, xu2020pcdarts}. \nHowever, these proxy strategies do not guarantee that the backbone cells have optimal performance on the target data \\citep{yang2019evaluation}. 
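To make the phrase "updating architecture parameters" concrete, the toy sketch below (our illustration with a hypothetical operation set, not the paper's implementation) shows the DARTS-style continuous relaxation: an edge outputs a softmax-weighted mixture of candidate operations, so the architecture parameters receive gradients just like ordinary weights.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate operations acting on a feature vector x.
ops = [
    lambda x: x,                 # identity
    lambda x: np.maximum(x, 0),  # ReLU-like op
    lambda x: 0.5 * x,           # scaling op (stand-in for a conv)
]

def mixed_op(x, alpha):
    """DARTS-style relaxed edge: a softmax(alpha)-weighted sum of all
    candidate operations, so alpha is continuous and can be updated by
    gradient descent alongside the operation weights."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)  # uniform mixture before any architecture updates
y = mixed_op(x, alpha)  # (x + relu(x) + 0.5 * x) / 3
```

After search, ProxylessNAS-style methods sample a single operation per edge instead of evaluating the full mixture, which is what the binarized gates reviewed later implement.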
\nMore importantly, in federated learning, the difference in distribution between proxy data and target data will be larger due to the presence of non-iid data.\nThis motivates us to propose a non-proxy, gradient-based Federated Direct Neural Architecture Search (\\textbf{FDNAS}) that requires no data exchange. The first contribution of this paper is to propose such a scheme. \n\nOn the other hand, due to prevalent human biases, preferences, habits, etc., clients are divided into different groups, in each of which the clients are similar to each other in terms of both data and hardware.\nTherefore, instead of looking for an architecture that is too dense to suit each client, we expect that FDNAS should allow each client to use a lightweight architecture that fits its individual task.\nTo achieve this goal, we treat different clients' models as a large ensemble, in which the models are \\textit{highly diverse and client-specific}.\nThen, our \\textbf{primary question} is: how can we effectively search, in a federated manner, for these clusters' diverse models that are near-optimal on their respective tasks, while their weights are entangled? To exploit the model diversity in such a complex ensemble, we resort to the fundamental idea of meta-learning. \n\nIn particular, in meta-learning, model weights that have already been meta-trained can be adapted to different tasks very efficiently via meta-testing \\citep{chen2019closerfewshot}.\nTherefore, we propose to use the SuperNet in a meta-test-like manner in order to obtain all client-specific neural architectures in federated learning. In the order of ``meta-train\" to ``meta-test\", the SuperNet trained on all clients in the first phase is treated as the meta-training model for the subsequent ``meta-test\" client-specific adaptation. 
Following this idea, we propose the Cluster Federated Direct Neural Architecture Search (\\textbf{CFDNAS}), which divides all clients into groups by data similarity, and each group is trained using the SuperNet from the previous phase. In turn, each group utilizes the previous SuperNet and can adjust the architecture to fit its own client data after only a few rounds of updates, like a meta-test. Consequently, CFDNAS can quickly generate a specific architecture for each client's data in parallel, under the framework of federated learning.\nThe contributions of our FDNAS are summarized below: \n\\begin{itemize}\n\\item[1.] \nWe integrate federated learning with gradient-based, proxy-less NAS. This allows federated learning not only to train weights but also to search model architectures directly from the clients' data.\n\\item[2.] \nInspired by meta-learning, we extend FDNAS to CFDNAS to exploit the model diversity, so that client-specific models can be quickly searched at very low computational cost.\n\\end{itemize}\n\n\\section{Related Work}\n\\textbf{Efficient neural architecture} design is important for practical deployment.\nMobileNetV2 \\citep{sandler2018mobilenetv2} proposes MBConv blocks, which largely reduce the model's FLOPs. 
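As a back-of-the-envelope illustration of the FLOPs saving just mentioned (a standard estimate for depthwise-separable designs such as MBConv, not a number taken from the paper), one can compare multiply-accumulate counts for a k x k convolution on an h x w feature map:

```python
def conv_flops(h, w, c_in, c_out, k):
    # Standard convolution: each output position mixes all input
    # channels through a k x k window.
    return h * w * c_in * c_out * k * k

def dwsep_flops(h, w, c_in, c_out, k):
    # Depthwise k x k convolution (one filter per channel),
    # followed by a 1 x 1 pointwise convolution.
    return h * w * c_in * k * k + h * w * c_in * c_out

# Hypothetical layer shape: 14 x 14 feature map, 128 -> 128 channels, k = 3.
ratio = conv_flops(14, 14, 128, 128, 3) / dwsep_flops(14, 14, 128, 128, 3)
# ratio = c_out * k^2 / (k^2 + c_out) ~ 8.4, close to the ideal k^2 = 9x saving
```

The saving approaches a factor of k^2 whenever the output channel count is much larger than k^2, which is the usual regime in mobile CNNs.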
Besides, efficient neural architecture search has recently gained increasing attention.\nTo speed up NAS, one-shot approaches treat all normal nets as different subnets of the SuperNet and share weights among the operation candidates \\citep{brock2018smash}.\nENAS \\citep{pham2018efficient} uses an RNN controller to sample subnets of the SuperNet and applies the REINFORCE method to obtain approximate architecture gradients.\nDARTS \\citep{liu2018darts} improves the search efficiency by representing each edge as a mixture of candidate operations and optimizing the weights of the operation candidates via continuous relaxation.\nRecently, hardware-aware NAS methods like ProxylessNAS incorporate latency feedback into the search as a joint optimization task, without any expensive manual attempts \\citep{wu2019fbnet, wan2020fbnetv2, cai2018proxylessnas}.\nAt the same time, some NAS methods search for the best backbone cells and transfer them to other target tasks by stacking them layer by layer \\citep{liu2018darts, xu2020pcdarts}. \nHowever, there is a big difference between the proxy and the target here: typically, the distribution of the proxy data differs from that of the target data, and the best block found by the proxy method differs from the optimal normal net after stacking \\citep{yang2019evaluation}.\nTheir motivations and approaches are very different from our FDNAS, whose goal is to search neural architectures directly on clients' data in a privacy-preserving way, without the gap between proxy task and clients' data, and to automatically design a variety of client-specific models that satisfy the clients' diversity.\n\n\\noindent \\textbf{Federated learning} commonly deploys predefined neural architectures on the client. 
They then use FedAvg as a generic algorithm to optimize the model \\citep{pmlr-v54-mcmahan17a}.\nThere are some fundamental challenges associated with the research topic of federated learning \\citep{li2019federated}: communication overhead, statistical heterogeneity of data (non-iid), and client privacy.\nThe communication between the central server and the clients is a bottleneck due to frequent weight exchanges, so some studies aim to design more efficient communication strategies \\citep{konevcny2016federated, 45672, fedpaq19}.\nIn practice, the data is usually non-iid. Hence, the hyper-parameter settings of FedAvg (e.g., learning rate decay) have been analyzed to study their impact on non-iid data \\citep{li2020on}. In addition, a global data-sharing strategy has been proposed to improve the accuracy of the algorithm on non-iid data \\citep{zhao2018federated}.\nOther research efforts have focused on privacy protection \\citep{agarwal2018cpsgd}. For example, differential privacy has been applied to federated training, thus preserving the privacy of the client \\citep{article17eth, 9069945}. Recently, FedNAS searches for a cell architecture and stacks it \\citep{he2020fednas}.\nNote that all of the above techniques deploy pre-defined network architectures or only search for backbone cells to stack using proxy strategies. In contrast, our FDNAS can search the complete model architecture directly from clients' data, allowing the model to better adapt to the clients' data distribution while protecting privacy. Moreover, with CFDNAS, it can provide multiple suitable networks for diverse clients at a very low computational cost.\n\\section{Preliminary}\nTo motivate the idea of this paper, we briefly review the basics of federated learning and ProxylessNAS in this section. 
\n\\subsection{Federated Learning}\n\\label{sec:FedAvg}\n\nAs an emerging privacy-preserving technology, federated learning (FedAvg) enables edge devices to collaboratively train a shared global model without uploading their private data to a central server \\citep{pmlr-v54-mcmahan17a}. In particular, in training round $t+1$, an edge device $k\\in S$ downloads the shared machine learning model $\\mathbf w^{k}_{t}$ (e.g., a CNN model) from the central server and utilizes its local data to update the model parameters. Then, each edge device sends its updated model $\\{\\mathbf w^{k}_{t+1} \\}_{k \\in S}$ to the central server for aggregation. \n\n\n\n\\subsection{ProxylessNAS}\n\\label{sec:plnas}\n\nIn ProxylessNAS, a SuperNet (over-parameterized net) is first constructed, which is represented by a directed acyclic graph (DAG) with $N$ nodes. Each node $x^{(i)}$ represents a latent representation (e.g., a feature map), and each directed edge $e^{(i,j)}$ that connects nodes $x^{(i)}$ and $x^{(j)}$ defines the following operation:\n\\begin{align}\nx^{(j)} = \\sum_{n=1}^N b_n^{(i \\rightarrow j)} o_n \\left[ x^{(i)} \\rightarrow x^{(j)} \\right],\n\\end{align}\nwhere $o_n\\left[x^{(i)} \\rightarrow x^{(j)}\\right] \\in \\mathcal O$ denotes an operation candidate (e.g., convolution, pooling, identity, etc.)
that transforms $x^{(i)}$ to $x^{(j)}$, and the vector $\\mathbf b^{(i \\rightarrow j)} = [b_1^{(i \\rightarrow j)}, \\cdots, b_N^{(i \\rightarrow j)}]$ is a binary gate taking values as a one-hot vector, in which $b_n^{(i \\rightarrow j)}=1$ with probability $p_n^{(i \\rightarrow j)}$ and all other elements are 0 during forwarding.\nRather than computing all the operations in the operation set $\\mathcal O$ \\citep{liu2018darts}, only one operation $o_n \\left[ x^{(i)} \\rightarrow x^{(j)}\\right]$ is utilized to transform each node $x^{(i)}$ to one of its neighbors $x^{(j)}$, sampled with probability $p_n^{(i \\rightarrow j)}$ at each run-time, thus greatly saving memory and computation during training.\nThis opens the door to directly learning the optimal architecture from large-scale datasets without resorting to proxy tasks. Furthermore, ProxylessNAS incorporates a hardware latency loss term into the NAS to reduce the inference latency on devices.\n\n\n\n\n\\section{Federated Neural Architecture Search}\n\\subsection{Motivation and Problem Formulation}\nIn the last section, ProxylessNAS was introduced as an efficient framework of great value for real-world applications, since it is more effective to use models directly searched from diverse everyday data.\nHowever, the training of ProxylessNAS requires collecting all target data in advance, which inevitably limits its practical use due to the privacy concerns of clients. In contrast, FedAvg trains a neural network without exchanging any data and thus can protect the data privacy of clients. However, it usually does not take NAS into account, and therefore requires cumbersome manual architecture design (or hyperparameter tuning) for the best performance. \n\nTo integrate the merits of ProxylessNAS and FedAvg while overcoming their shortcomings, we develop a federated ProxylessNAS algorithm in this section.
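As background for the algorithm developed below, the single-path sampling at the core of ProxylessNAS can be sketched in a few lines of Python. This is a minimal illustration only: the toy operations stand in for the real candidates (convolutions, pooling, identity), and the function names are ours, not from the original implementation.

```python
import random

def sample_binary_gate(probs):
    # Draw the one-hot binary gate b^(i->j): entry n is 1 with
    # probability p_n^(i->j); all other entries are 0.
    n = random.choices(range(len(probs)), weights=probs)[0]
    gate = [0] * len(probs)
    gate[n] = 1
    return gate

def forward_edge(x, candidate_ops, probs):
    # Only the single sampled operation runs at each forward pass,
    # so memory and compute scale with one path, not the full SuperNet.
    gate = sample_binary_gate(probs)
    return candidate_ops[gate.index(1)](x)

# Toy operation set O (illustrative stand-ins for conv/pool/identity).
ops = [lambda x: x, lambda x: 2 * x, lambda x: x + 1]
y = forward_edge(3.0, ops, probs=[0.2, 0.5, 0.3])
```

In the full method the probabilities are derived from the architecture parameters $\pmb \alpha$ and updated by gradient descent, but the memory saving comes entirely from executing a single sampled path as above.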
In particular, we propose a novel problem formulation as follows:\n\\begin{align}\n \\min_{\\pmb {\\alpha}} \\quad & \\sum_{k=1}^K\\mathcal{L}_{val}^{k}(\\mathbf w^*(\\pmb {\\alpha}), \\pmb {\\alpha}) \\label{eq:outerfednasFormulation}\\\\\n \\text{s.t.} \\quad &\\mathbf w^*(\\pmb {\\alpha}) = \\mathrm{argmin}_{\\mathbf w} \\enskip \\sum_{k=1}^K\\mathcal{L}_{train}^{k}(\\mathbf w, \\pmb {\\alpha}), \\label{eq:innerfednasFormulation}\n\\end{align}\nwhere $\\mathcal{L}_{val}^{k}(\\cdot, \\cdot)$ denotes the validation loss function of client $k$, and $\\mathcal{L}_{train}^{k}(\\cdot, \\cdot)$ denotes its training loss function. The vector $\\pmb {\\alpha}$ collects all architecture parameters $\\{\\alpha_n^{i \\rightarrow j}, \\forall i \\rightarrow j, \\forall n\\}$ and $\\mathbf w$ collects all weights $\\{\\mathbf w^{i\\rightarrow j}, \\forall i\\rightarrow j\\}$. This nested bilevel optimization formulation is inspired by those proposed in ProxylessNAS \\citep{cai2018proxylessnas} and FedAvg \\citep{pmlr-v54-mcmahan17a}. More specifically, for each client, the goal is to search for the optimal architecture $\\pmb {\\alpha}$ that gives the best performance on its local validation dataset, with the optimal model weights $\\mathbf w^*(\\pmb {\\alpha})$ learnt from its local training dataset.\n\n\n\\subsection{Federated Algorithm Development}\n\\begin{figure*}[htbp]\n \\centering\n \\includegraphics[width=0.825\\linewidth]{illustration-fed-search.pdf}\n \\caption{Illustration of FDNAS among clients.}\n \\label{fig:fdnas}\n\\end{figure*}\n\nTo optimize $\\pmb {\\alpha}$ and $\\mathbf w$ while protecting the privacy of client data, we develop FDNAS in this subsection. \n\nFollowing the set-up of federated learning, assume that there are $K$ clients in the set $S$ and one central server.
In communication round $t$, each client downloads the global parameters $\\pmb {\\alpha}^g_t$ and $\\mathbf w^g_t$ from the server, and updates these parameters using its local validation dataset and training dataset, respectively. With the global parameters $\\pmb {\\alpha}^g_t$ and $\\mathbf w^g_t$ as initial values, the updates follow the steps in {\\bf Algorithm \\ref{alg:fdnas}}, i.e., \n\\begin{align}\n \\mathbf w_{t+1}^k, \\pmb \\alpha_{t+1}^k \\leftarrow \\text{ProxylessNAS}( \\mathbf w_t^{g}, \\pmb \\alpha_t^{g}), \\forall k \\in S. \\label{eq:w-alpha-from-single-client}\n\\end{align}\nThen, each edge device sends its updated parameters $\\{\\mathbf w^{k}_{t+1},\\pmb \\alpha^{k}_{t+1} \\}_{k \\in S}$ to the central server for aggregation. The central server aggregates these parameters to update the global parameters $\\pmb {\\alpha}^g_{t+1}$ and $\\mathbf w^g_{t+1}$:\n\\begin{align}\n \\mathbf w_{t+1}^{g}, \\pmb \\alpha_{t+1}^{g} \\leftarrow \\sum_{k=1}^K \\frac{N_k}{N} \\mathbf w_{t+1}^k, \\sum_{k=1}^K \\frac{N_k}{N} \\pmb \\alpha_{t+1}^k, \\label{eq:sum-params-from-clients}\n\\end{align}\nwhere $N_k$ is the size of the local dataset of client $k$, and $N$ is the total data size over all clients (i.e., $N = \\sum_{k=1}^K N_k$). \n\nAfter $T$ communication rounds, each client downloads the learnt global parameters $\\pmb {\\alpha}^g_T$ and $\\mathbf w^g_T$ from the server, from which the optimized neural network architecture and model parameters can be obtained, as introduced in the Preliminary section. Furthermore, with the learnt neural network architecture, the weights $\\mathbf w^k$ in each client can be further refined by the conventional FedAvg algorithm for better validation performance.
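The server-side update in Eq. (\ref{eq:sum-params-from-clients}) is a plain data-size-weighted average applied to both $\mathbf w$ and $\pmb \alpha$. A minimal sketch, with scalar stand-ins for the parameter tensors (names are illustrative):

```python
def aggregate(client_params, client_sizes):
    # Data-size-weighted average: client k contributes with weight
    # N_k / N, where N = sum_k N_k; the same rule is applied to the
    # weights w^k and the architecture parameters alpha^k.
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {key: sum(p[key] * n_k / total
                     for p, n_k in zip(client_params, client_sizes))
            for key in keys}

# Toy example: two clients with N_1 = 100 and N_2 = 300 local samples.
w_global = aggregate([{"w": 1.0}, {"w": 2.0}], [100, 300])  # {"w": 1.75}
```

In practice the dictionaries would hold per-layer tensors rather than scalars, but the weighting logic is identical.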
The proposed algorithm is summarized in {\\bf Algorithm \\ref{alg:fdnas}} and illustrated in Figure \\ref{fig:fdnas}.\n\n\n\\begin{algorithm}[!h]\n\\caption{FDNAS: Federated Direct Neural Architecture Search}\n\\label{alg:fdnas}\n\n\\begin{algorithmic}\n\\SUB{Central server:}\n \\STATE Initialize $\\mathbf w_0^{g}$ and $\\pmb \\alpha_0^{g}$.\n \\FOR{each communication round $t = 1, 2, \\dots, T$}\n \n \n \\FOR{each client $k \\in S$ \\textbf{in parallel}}\n \\STATE $\\mathbf w_{t+1}^k, \\pmb \\alpha_{t+1}^k \\leftarrow \\textbf{ClientUpdate}(\\mathbf w_t^{g}, \\pmb \\alpha_t^{g})$\n \\ENDFOR\n \\STATE $\\mathbf w_{t+1}^{g}, \\pmb \\alpha_{t+1}^{g} \\leftarrow \\sum_{k=1}^K \\frac{N_k}{N} \\mathbf w_{t+1}^k, \\sum_{k=1}^K \\frac{N_k}{N} \\pmb \\alpha_{t+1}^k$\n \n \\ENDFOR\n \\STATE\n \n \\SUB{ClientUpdate($\\mathbf w, \\pmb \\alpha$):} \/\/ \\emph{On client platform.}\n \\FOR{each local epoch $i$ from $1$ to $E$}\n \\STATE update $\\mathbf w, \\pmb \\alpha$ by ProxylessNAS\n \\ENDFOR\n \\STATE return $\\mathbf w, \\pmb \\alpha$ to server\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Further Improvement and Insights}\n\\subsubsection{Clustering-aided model compression}\n\\begin{algorithm}[tb]\n\\caption{CFDNAS: Cluster Federated Direct Neural Architecture Search}\n\\label{alg:cfdnas}\n\n\\begin{algorithmic}\n\\SUB{Cluster server:}\n \\STATE load $\\mathbf w_0^{g}, \\pmb \\alpha_0^{g}$ by central server.\n \\STATE $\\{S_{1},\\cdots,S_{P}\\} \\leftarrow$ (split $S_{all}$ into clusters by users' tag.)\n \\FOR {cluster $S_{tag} \\in \\{S_{1},\\cdots,S_{P}\\}$ \\textbf{in parallel}}\n \\FOR{each round $t = 1, 2, \\dots$}\n \n \n \\FOR{each sampled client $k \\in S_{tag}$ \\textbf{in parallel}}\n \\STATE $\\mathbf w_{t+1}^k, \\pmb \\alpha_{t+1}^k \\leftarrow \\text{ClientUpdate}_k(\\mathbf w_t^{g}, \\pmb \\alpha_t^{g})$ \n \\ENDFOR\n \\STATE $\\mathbf w_{t+1}^{g}, \\pmb \\alpha_{t+1}^{g} \\leftarrow \\sum_{k=1}^K \\frac{n_k}{n_{all}} \\mathbf w_{t+1}^k,
\\sum_{k=1}^K \\frac{n_k}{n_{all}} \\pmb \\alpha_{t+1}^k$\n \\ENDFOR\n \\ENDFOR\n \\STATE\n\n\\SUB{ClientUpdate($\\mathbf w, \\pmb \\alpha$):} \/\/ \\emph{On client platform.}\n \\FOR{each local epoch $i$ from $1$ to $E$}\n \\STATE update $\\mathbf w, \\pmb \\alpha$ by ProxylessNAS\n \\ENDFOR\n \\STATE return $\\mathbf w, \\pmb \\alpha$ to server\n\\end{algorithmic}\n\\end{algorithm}\n\nAt the end of the proposed algorithm (i.e., {\\bf Algorithm \\ref{alg:fdnas}}), the neural network architecture is learnt from the datasets of all the clients in a federated way. Although this enables knowledge transfer, accounting for every piece of information might result in structural redundancy for a particular group of clients. More specifically, assume that 10 clients collaborate to train a model for image classification. Some clients have images of \\textit{birds, cats, and deer} and thus can form an ``animal'' group, while other clients in the ``transportation'' group have images of \\textit{airplanes, cars, ships, and trucks}. Although these images all contribute to the neural network structure learning in {\\bf Algorithm \\ref{alg:fdnas}}, the operations tailored for the ``animal'' group might not significantly help the knowledge extraction from the ``transportation'' group. Therefore, an immediate question is: could we further refine the model architecture by utilizing the clustering information of the clients?\n\nTo achieve this, we further incorporate a clustering-aided refinement scheme into the proposed FDNAS. In particular, after $\\pmb \\alpha^{g}$ converges, each client could send a tag describing its data to the server, e.g., \\textit{animal} or \\textit{transportation}, which is not sensitive with respect to its privacy.
Then the server obtains the clustering information about the clients, i.e., which ones have similar data distributions, based on which the client set $S$ can be divided into several groups:\n\\begin{align}\n S \\rightarrow \\{S_{1},\\cdots,S_{P}\\}.\n \\label{eq:s-to-si}\n\\end{align}\nFinally, the clients in the same group can further refine their SuperNet by re-executing the proposed FDNAS algorithm. The proposed clustering-aided refinement scheme, labeled \\textbf{CFDNAS}, is summarized in {\\bf Algorithm \\ref{alg:cfdnas}}. \n\n\\subsubsection{Hardware-tailored model compression}\nClients might deploy models on different devices, such as mobile phones, CPUs, and GPUs. Previous studies show that taking a hardware-aware loss into account can guide the NAS toward the architecture that is most efficient in terms of inference speed. \nFor example, ProxylessNAS uses a hardware-aware loss to \\textit{prevent} $\\pmb\\alpha$ from converging to the heaviest operations at each layer.\nThe proposed FDNAS and CFDNAS can naturally integrate the hardware information into their search scheme. In particular, we could let each device send its hardware information, such as \\textit{GPU} or \\textit{CPU}, to the server. Then, the server divides these clients into several groups according to their hardware type. For the clients in the same group, a hardware-aware loss term is added to the training loss. By incorporating this hardware-aware loss term into the training process, the proposed algorithm will drive the SuperNet to a compact one that gives the fastest inference speed for the particular hardware platform. For the details of the hardware-aware loss term, we refer the reader to ProxylessNAS.\n\n\\begin{table*}[!htbp]\n\\centering\n\\begin{tabular}{lcccc}\n\\toprule\n\\textbf{\\multirow{2}{*}{Architecture}} &\\textbf{\\multirow{2}{*}{Test Acc.
(\\%)}} & \\textbf{Params} & \\textbf{Search Cost} & \\textbf{\\multirow{2}{*}{Method}} \\\\\n\\cmidrule(lr){3-4} &\n & \\textbf{(M)} & \\textbf{(GPU hours)} &\\\\\n\\hline\nDenseNet-BC~ \\citep{densenet} & 94.81 & 15.3 & - & manual \\\\\nMobileNetV2 ~ \\citep{sandler2018mobilenetv2} & 96.05 & 2.5 & - & manual \\\\\n\\hline\nNASNet-A ~ \\citep{zoph2018learning} & 97.35 & 3.3 & 43.2K & RL \\\\\nAmoebaNet-A ~ \\citep{amoebanet} & 96.66 & 3.2 & 75.6K & evolution \\\\\nHierarchical Evolution~ \\citep{liu2018hierarchical} & 96.25 & 15.7 & 7.2K & evolution \\\\\nPNAS~ \\citep{liu2018progressive} & 96.59 & 3.2 & 5.4K & SMBO \\\\\nENAS~ \\citep{pham2018efficient} & 97.11 & 4.6 & 12 & RL \\\\\n\\hline\nDARTS~ \\citep{liu2018darts} & 97.24 & 3.3 & 72 & gradient \\\\\nSNAS ~ \\citep{xie2018snas} & 97.02 & 2.9 & 36 & gradient \\\\\nP-DARTS C10~ \\citep{chen2019progressive} & 97.50 & 3.4 & 7.2 & gradient \\\\\nP-DARTS C100~ \\citep{chen2019progressive} & 97.38 & 3.6 & 7.2 & gradient \\\\\nPC-DARTS ~ \\citep{xu2020pcdarts} & 97.39 & 3.6 & 2.5 & gradient\\\\\nGDAS ~ \\citep{dong2019searching} & 97.07 & 3.4 & 5 & gradient \\\\\n\\hline\n\\textbf{FDNAS(ours)} & \\textbf{78.75$^*$\/97.25$\\dag$} & \\textbf{3.4} & \\textbf{59} & \\textbf{gradient \\& federated} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{CIFAR-10 performance.} $^*$: the federated averaged model's accuracy. $\\dag$: the mean local accuracy, i.e., each client completes its local training epochs after downloading the server model, and the local accuracies obtained by testing on the local test data are then averaged.}\n\\label{tab_ev_cifar}\n\\end{table*}\n\n\\section{Experiments}\n\n\\subsection{Implementation Details}\nWe use PyTorch \\citep{paszke2019pytorch} to implement FDNAS and CFDNAS.\nWe searched on CIFAR-10 \\citep{cifar10} and then trained the normal networks from scratch. CIFAR-10 has 50K training images, 10K test images, and 10 classes of labels. To simulate a federated scenario, we set up 10 clients.
The first 3 classes of images are randomly and evenly assigned to the first 3 clients, the middle 3 classes are randomly and evenly assigned to the middle 3 clients, and the last 4 classes are randomly and evenly assigned to the last 4 clients. \nEach client has 4500 images as a training set for learning $\\mathbf w$ and 500 images as a validation set for learning $\\pmb \\alpha$.\nIn addition, to test the transferability of the architecture found by FDNAS, we also trained it on ImageNet \\citep{5206848}.\n\n\\subsection{Image Classification on CIFAR-10}\n\\subsubsection{Training Settings}\nWe use a total batch size of 256 and set the initial learning rate to $0.05$, decayed according to the cosine rule. When training the SuperNet, we use Adam to optimize $\\pmb \\alpha$ and momentum SGD to optimize $\\mathbf w$. The weight decay of $\\mathbf w$ is $3\\times10^{-4}$, while we do not use weight decay on $\\pmb \\alpha$.\nThe SuperNet has 19 searchable layers, each consisting of MBconv blocks, the same as in ProxylessNAS \\citep{cai2018proxylessnas}.\nWe train the FDNAS SuperNet with 10 clients for a total of 125 rounds, with 5 local epochs per client per round. \nWe assume that all clients are always online during the training procedure.\nAfter that, CFDNAS clusters clients-0, 1, and 2 into the GPU group to train the CFDNAS-G SuperNet, and clients-3, 4, and 5 into the CPU group to train the CFDNAS-C SuperNet. The CFDNAS SuperNets are each searched separately for 25 rounds.\nAfter training the SuperNet, we derive the normal net from the SuperNet and then run 250 rounds of from-scratch training on the clients via FedAvg.\nGPU latency is measured on a TITAN Xp GPU with a batch size of 128 in order to avoid severe underutilization of the GPU due to small batches.
CPU latency is measured on two 2.20 GHz Intel(R) Xeon(R) E5-2650 v4 servers with a batch size of 128.\nIn addition, the random number seeds were set to the same value for all experiments to ensure that the data allocation remained consistent across trainings.\n\n\n\\subsubsection{Results}\nSince our FDNAS is based on FedAvg and protects the privacy of clients' data, we report both the federated averaged model's accuracy and the clients' mean local accuracy in Table~\\ref{tab_ev_cifar}.\nAs demonstrated, our FDNAS normal net achieves $78.75\\%$ federated averaged accuracy and $97.25\\%$ mean local accuracy. \nThe search costs 59 GPU hours; however, the federated learning framework is naturally suited to distributed training, and all clients can be trained simultaneously. \nOur FDNAS outperforms both evolution-based NAS and gradient-based DARTS in terms of search time cost, and our clients' local accuracy is higher than their central accuracy.\nIn addition, we use MobileNetV2 as a predefined, hand-crafted model trained under FedAvg for a fairer comparison with FDNAS. As shown in Table~\\ref{ablation1}, our FDNAS outperforms MobileNetV2 in terms of both federated averaged accuracy and mean local accuracy. See the ablation study in subsection~\\ref{alation-sec} for more analysis and CFDNAS's performance.\n\n\n\\subsection{Image Classification on ImageNet}\n\\begin{table*} [!t]\n\\centering\n\\begin{tabular}{lcccccc}\n\\toprule\n\\textbf{\\multirow{2}{*}{Architecture}} & \\multicolumn{2}{c}{\\textbf{Test Acc.
(\\%)}} & \\textbf{Params} & $\\times+$ & \\textbf{Search Cost} & \\textbf{\\multirow{2}{*}{Search Method}} \\\\\n\\cmidrule(lr){2-3}\n& \\textbf{top-1} & \\textbf{top-5} & \\textbf{(M)} & \\textbf{(M)} & \\textbf{(GPU hours)} &\\\\\n\\hline\nMobileNet~ \\citep{howard2017mobilenets} & 70.6 & 89.5 & 4.2 & 569 & - & manual \\\\\nShuffleNet 2$\\times$ (v2)~ \\citep{ma2018shufflenet} & 74.9 & - & $\\sim$5 & 591 & - & manual \\\\\nMobileNetV2~ \\citep{sandler2018mobilenetv2} & 72.0 & 90.4 & 3.4 & 300 & - & manual \\\\\n\n\\hline\nNASNet-A~ \\citep{zoph2018learning} & 74.0 & 91.6 & 5.3 & 564 & 43.2K & RL \\\\\nNASNet-B~ \\citep{zoph2018learning} & 72.8 & 91.3 & 5.3 & 488 & 43.2K & RL \\\\\nAmoebaNet-A~ \\citep{amoebanet} & 74.5 & 92.0 & 5.1 & 555 & 75.6K & evolution \\\\\nAmoebaNet-B~ \\citep{amoebanet} & 74.0 & 91.5 & 5.3 & 555 & 75.6K & evolution \\\\\nPNAS~ \\citep{liu2018progressive} & 74.2 & 91.9 & 5.1 & 588 & 5.4K & SMBO \\\\\nMnasNet~ \\citep{tan2019mnasnet} & 74.8 & 92.0 & 4.4 & 388 & - & RL \\\\\n\\hline\nDARTS ~ \\citep{liu2018darts} & 73.3 & 91.3 & 4.7 & 574 & 96 & gradient \\\\\nSNAS ~ \\citep{xie2018snas} & 72.7 & 90.8 & 4.3 & 522 & 36 & gradient \\\\\nProxylessNAS ~ \\citep{cai2018proxylessnas} & 75.1 & 92.5 & 7.1 & 465 & 200 & gradient \\\\\nP-DARTS-C10~ \\citep{chen2019progressive} & 75.6 & 92.6 & 4.9 & 557 & 7.2 & gradient \\\\\nP-DARTS-C100 ~ \\citep{chen2019progressive} & 75.3 & 92.5 & 5.1 & 577 & 7.2 & gradient \\\\\nGDAS ~ \\citep{dong2019searching} & 74.0 & 91.5 & 5.3 & 581 & 5 & gradient \\\\\\hline\n\n\\textbf{FDNAS(ours)} & \\textbf{75.3} & \\textbf{92.9} & \\textbf{5.1} & \\textbf{388} & \\textbf{59} & \\textbf{gradient~ \\& federated} \\\\\n\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{ImageNet performance.} $\\times+$ denotes the number of multiply-add operations (FLOPs).}\n\\label{ev_imagenet}\n\\end{table*}\n\\subsubsection{Training Settings}\nTo test its generality on larger image classification tasks, we transferred the FDNAS normal net to
ImageNet for training. Following the general mobile setting \\citep{liu2018darts}, we set the input image size to $224\\times224$ and constrained the model's FLOPs to below 600M.\nWe use an SGD optimizer with a momentum of 0.9. The initial learning rate is 0.4 and decays to 0 by the cosine decay rule. The weight decay is $4\\times10^{-5}$ and the dropout rate is 0.2.\nIn order to fit the FDNAS net to ImageNet's image size, layers 1, 3, 6, 8, and 16 are set as downsampling layers.\n\n\\subsubsection{Results}\nAs shown in Table~\\ref{ev_imagenet}, we achieve SOTA performance compared to other methods. The FDNAS test accuracy is $75.3\\%$, better than GDAS, ProxylessNAS, SNAS, and AmoebaNet. Moreover, our model requires only 388M FLOPs, fewer than these methods. The search cost is only 59 GPU hours, which is lower than that of ProxylessNAS and is practical for real-world deployments.\nAlso, for a fairer comparison, we compare FDNAS to MobileNetV2, since both are composed of MBconv blocks.\nOur FDNAS normal net's $75.3\\%$ accuracy outperforms MobileNetV2 by $3.3\\%$. Moreover, MobileNetV2 (1.4) requires 585M FLOPs, considerably more than FDNAS's 388M, yet FDNAS's accuracy is still $0.6\\%$ higher. \nSumming up the above analysis, the model searched by our FDNAS in a privacy-preserving manner is highly transferable and achieves an outstanding trade-off between accuracy and FLOPs.
These results show that learning the neural architecture from the data can remove the bias introduced by manual design and attain better efficiency.\n\n\n\\subsection{Ablation study}\n\\label{alation-sec}\n\\begin{table*}[!t]\n\\centering\n\\begin{tabular}{lcccccc}\n\\toprule\n\\textbf{\\multirow{2}{*}{Architecture}} & \\multicolumn{2}{c}{\\textbf{Latency (ms)}} & \\textbf{Params} & \\textbf{$\\times+$} & \\textbf{Search Cost} & \\textbf{\\multirow{2}{*}{Test Acc.(\\%)}} \\\\ \n\\cmidrule(lr){2-3}\n& \\textbf{GPU} & \\textbf{CPU} & \\textbf{(M)} & \\textbf{(M)} & \\textbf{(GPU hours)} &\\\\ \\hline\n MobileNetV2 & 52.31 & 890.69 &2.5 &296.5 & - & 68.45$\\ddag$\/96.51$\\dag$\\\\ \\hline\n\n FDNAS & 52.78 & 600.17 & 3.4 &346.6 & 59.00 & 78.75$\\ddag$\/97.25$\\dag$\\\\ \\hline\n CFDNAS-G & 40.33 & 463.86 & 3.3 &318.4 & 3.53 & 73.60$\\ddag$\/ 98.93$\\dag$\\\\\n CFDNAS-C & 31.00 & 186.52 & 2.0 &169.3 & 3.46 & 71.29$\\ddag$\/ 93.01$\\dag$\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{A comparison between MobileNetV2, FDNAS, and CFDNAS on CIFAR-10}: \n$\\ddag$: federated averaged accuracy; $\\dag$: mean local accuracy (cf. Table~\\ref{tab_ev_cifar}).}\n\n\\label{ablation1}\n\\end{table*}\n\\begin{table*}[!t]\n\\centering\n\\begin{tabular}{lccccccc}\n\\toprule\n\\textbf{\\multirow{2}{*}{Architecture}} &\\multirow{2}{*}{\\textbf{Client ID}} & \\textbf{Params} & \\textbf{$\\times+$} &\\textbf{Search Cost} &{\\textbf{Client Local}} &\\textbf{Mean Local} \\\\ \n& & \\textbf{(M)} & \\textbf{(M)} &\\textbf{(GPU hours)}&\\textbf{Acc.(\\%)} & \\textbf{Acc.(\\%)} &\\\\ \\hline\n\nFDNAS \n &0,1,2 & 3.38 & 346.64 &59 & 98.14\/99.35\/98.90 & 98.79 \\\\ \n \nnaive-CFDNAS-G \n &0,1,2 & 3.70 & 356.83 &18 &97.56\/98.11\/97.36 &97.67 \\\\ \n\\textbf{CFDNAS-G} \n &0,1,2 & \\textbf{3.33} & \\textbf{318.44} &\\textbf{3.53} &\\textbf{98.39\/99.02\/99.38} &\\textbf{98.93} \\\\ \\hline\nFDNAS \n &3,4,5 &3.38 &346.64 &59 & 94.20\/91.81\/93.04 & 93.01 \\\\ \n \nnaive-CFDNAS-C \n &3,4,5 & 1.92 & 187.89 &18 &88.61\/89.88\/90.25 &89.58
\\\\ \n\\textbf{CFDNAS-C} \n &3,4,5 & \\textbf{2.03} & \\textbf{169.35} &\\textbf{3.46} &\\textbf{93.21\/92.73\/93.12} &\\textbf{93.02} \\\\\n\\bottomrule\n\n\\end{tabular}\n\\caption{\\textbf{Enhancement by CFDNAS}. ``naive-CFDNAS'' means that the SuperNet for CFDNAS is not inherited from FDNAS but is searched directly within the cluster group from scratch.}\n\\label{ablation3}\n\\end{table*}\n\\begin{table*}[!t]\n\\centering\n\\begin{tabular}{lccccc}\n\\toprule\n\\textbf{\\multirow{2}{*}{Architecture}} & \\textbf{Params} & \\textbf{$\\times+$} &\\textbf{Search Cost} & \\textbf{Test Acc.} \\\\ \n& \\textbf{(M)} & \\textbf{(M)} &\\textbf{(GPU hours)}& \\textbf{(\\%)} \\\\ \\hline\nDNAS(on client-0) & 2.54 & 264.35 &6.4 & 95.83 \\\\ \nDNAS(on client-1) & 3.50 & 330.58 &6.5 & 97.04 \\\\ \nDNAS(on client-2) & 2.85 & 269.76 &8.4 & 97.55 \\\\ \nDNAS(on client-3) & 2.77 & 258.90 &9.9 & 87.33 \\\\ \nDNAS(on client-4) & 2.28 & 233.99 &6.7 & 88.38 \\\\ \nDNAS(on client-5) & 3.27 & 299.63 &7.5 & 87.53 \\\\ \nDNAS(on client-6) & 2.78 & 297.03 &7.7 & 97.83 \\\\ \nDNAS(on client-7) & 3.25 & 336.20 &7.1 & 96.83 \\\\ \nDNAS(on client-8) & 2.68 & 263.89 &8.8 & 97.38 \\\\ \nDNAS(on client-9) & 2.80 & 286.46 &7.8 & 97.47 \\\\ \nmean & 2.87 & 284.08 &7.7 & 94.31 \\\\ \\hline\n\nDNAS(on collected CIFAR-10) & 5.02 & 500.71 &24.5 & 96.71 \\\\ \\hline\nFDNAS(on all clients) & 3.38 & 346.64 &59.0 & 78.75$\\ddag$\/97.25$\\dag$\\\\ \\hline\nCFDNAS-G(on client-0, 1, 2) & 3.33 & 318.44 &3.53 & 73.60$\\ddag$\/98.93$\\dag$\\\\\nCFDNAS-C(on client-3, 4, 5) & 2.03 & 169.35 &3.46 & 71.29$\\ddag$\/93.01$\\dag$\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Compared with conventional DNAS.} $\\ddag$: federated averaged accuracy; $\\dag$: mean local accuracy (cf. Table~\\ref{tab_ev_cifar}).
The DNAS search algorithm is ProxylessNAS, as in FDNAS, but uses centrally collected data rather than federated learning.\n}\n\\label{ablation2}\n\\end{table*}\n\n\\subsubsection{Effectiveness of CFDNAS}\nTable~\\ref{ablation1} also includes CFDNAS for comparison.\nCFDNAS-G (GPU platform) and CFDNAS-C (CPU platform) are trained for 25 rounds each based on the inherited FDNAS SuperNet.\nBoth CFDNAS normal nets have smaller FLOPs than FDNAS, and both also have lower GPU\/CPU inference latencies than FDNAS and the hand-crafted MobileNetV2.\nWe then study the improvement in accuracy brought by CFDNAS in Table~\\ref{ablation3}.\nBenefiting from the clustering approach, for the GPU group (including clients-0, 1, and 2), our CFDNAS-G searched model achieves $98.93\\%$ accuracy. It is more accurate than the original FDNAS on the clients of the same GPU group and requires only 3.53 GPU hours of SuperNet adaptation, which is a negligible additional cost.\nThe ``naive-CFDNAS'' does not inherit the parameters ($\\mathbf w$ and $\\pmb \\alpha$) and lacks the SuperNet's ``meta-test'' adaptation. As a result, the convergence of naive-CFDNAS takes 18 GPU hours, which is 5.1 times that of CFDNAS.
In the same GPU group, our CFDNAS-G is $1.26\\%$ more accurate than naive-CFDNAS, while its FLOPs are smaller.\nFor the CPU group (including clients-3, 4, and 5), our CFDNAS-C also outperforms FDNAS and naive-CFDNAS-C in terms of accuracy and FLOPs.\nThe CPU group's data is harder than the others', but our CFDNAS-C is still more accurate than naive-CFDNAS-C and more stable than FDNAS.\n\nCompared to FDNAS, CFDNAS achieves a better trade-off between accuracy and latency due to the meta-learning mechanism.\nCompared to naive-CFDNAS, CFDNAS achieves better performance because it inherits $\\mathbf w$ and $\\pmb \\alpha$ from the FDNAS SuperNet, which is fully trained with data from all clients.\nAlso, because the ``meta-trained'' FDNAS SuperNet performs the search, the meta-adaptation costs less than for naive-CFDNAS.\n\n\\subsubsection{Contributions of the federated mechanism}\nIn Table~\\ref{ablation2}, we show the effects of using a traditional DNAS with collected data (including all data) and a single-client DNAS (including only local data, with no federated training).\nDNAS searches the model directly on the clients' data but requires data collection in advance, and it achieves a central accuracy of $96.71\\%$. However, our FDNAS still has higher local accuracy than DNAS while protecting data privacy.\nBesides, our FDNAS has smaller FLOPs than the conventional DNAS searched on the collected CIFAR-10. Our federated averaged accuracy is $78.75\\%$, and our local accuracy is $97.25\\%$, which is $2.94\\%$ higher than the single-client local average accuracy. Thanks to the federated mechanism, FDNAS can use data from a wide range of clients to search for more efficient models. At the same time, it trains models with higher accuracy.
Privacy protection and efficiency will be beneficial for the social impact and effectiveness of real-world machine learning deployments.\n\n\\subsubsection{Contributions of direct search}\nFor a fairer comparison, we use MobileNetV2, which is also composed of MBconv blocks, as a predefined hand-crafted neural architecture trained with FedAvg.\nWe train the normal net of FDNAS with the collected data from all clients in a centralized way and obtain an accuracy of $96.03\\%$, a negligible difference compared to the centralized results of MobileNetV2.\nThen we present the FedAvg results for FDNAS and MobileNetV2 in Table~\\ref{ablation1}. \nOur FDNAS federated averaged accuracy and local accuracy are $6.63\\%$ and $1.4\\%$ higher than those of MobileNetV2, respectively.\nAlso, although both have similar GPU latency, FDNAS is faster than MobileNetV2 on the CPU platform. So, compared to FDNAS, MobileNetV2 is not optimal for diverse clients in federated scenarios.\nThe architecture searched by FDNAS from the data is superior to hand-crafted models, demonstrating that the NAS approach can provide a significant improvement over human design in federated learning.\n\n\\section{Discussion and Future Work}\nWe propose FDNAS, a privacy-preserving neural architecture search scheme that directly searches models from clients' data under the framework of federated learning. Different from previous federated learning approaches, the complete neural architecture is found from clients' data automatically without any manual attempts. Extensive numerical results have shown that our FDNAS greatly improves the model's performance and efficiency compared to predefined hand-crafted neural architectures in federated learning.\nOn the other hand, our FDNAS model achieves state-of-the-art results on ImageNet in a mobile setting, which demonstrates the transferability of FDNAS.
Moreover, inspired by meta-learning, CFDNAS, an extension to FDNAS, can discover diverse high-accuracy and low-latency models adapted from a SuperNet of FDNAS at a very low computational cost. In future work, we will extend our FDNAS to search for different tasks such as object detection, semantic segmentation, and model compression, as well as larger-scale datasets with more categories.\n\\newpage\n\\section{Ethical Impact}\nAutoML has been widely used in many fields, such as searching deep neural networks and tuning hyper-parameters in computer vision and natural language processing. \nIn general, NAS brings some important implications for neural network design, and here we focus on the use of FDNAS to provide a NAS solution in a better privacy-preserving way and the use of CFDNAS to provide a suitable neural network architecture for each client. There are many benefits to these schemes, such as exploiting the diversity of models, respecting the diversity of clients, and reducing privacy and security risks. \n\nNowadays, there is a lot of research on federated learning for improving deep learning's fairness, safety, and trustworthiness. \nTo liberate federated learning from expensive manual attempts, we believe that the impacts of FDNAS in real-world scenarios are worthy of attention. In addition, privacy leakage from gradient updates is still an open problem in machine learning. \nIt is worth exploring a more secure AutoML system combined with some potential privacy protection methods such as differential privacy, homomorphic encryption, and secure multi-party computing.\n\\input{ref.bbl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe sources of the majority of cosmic rays (CRs) are believed to be supernovae\n(SNe) and supernova remnants (SNRs), which are capable of accelerating\nparticles to multi-PeV energies.
As CRs propagate through the interstellar\nmedium (ISM) they scatter off magnetic turbulence in a process that on large\nscales can be described with a diffusive transport equation\n\\citep[][]{1964ocr..book.....G,1990acr..book.....B}. The spectrum of\nturbulence controls the value of the diffusion coefficient and its energy\ndependence because the CR particles most efficiently scatter on turbulence that is\ncomparable in size to their gyroradii. The turbulence is\nassumed to initially form at large scales and then cascade down to smaller\nscales, thus affecting particles at all rigidities. The power-law shape of the\nenergy spectrum of turbulence\n\\citep{1941DoSSR..30..301K,1964SvA.....7..566I,1965PhFl....8.1385K} translates\ninto a power-law rigidity dependence of the diffusion coefficient, with the\nindex taking a value between 0.3 and 0.6. Observations of the ratios of stable secondary to\nprimary species in CRs and the abundances of radioactive secondaries are\nusually employed to determine both the power-law index and normalization of\nthe diffusion coefficient averaged over a significant volume of the\ninterstellar space surrounding the Solar system\n\\citep[e.g.,][]{1998ApJ...509..212S,2007ARNPS..57..285S,JohannessonEtAl:2016},\ntypically several kpc in radius. Recent estimates of the power-law index are\nclustered around 0.35 with a normalization of about $4\\times10^{28}$ cm$^2$\ns$^{-1}$ at a rigidity of 4 GV \\citep[e.g.,][]{BoschiniEtAl:2018b}.\n\nPulsars are rapidly spinning and strongly magnetized neutron stars that are at\nthe final stage of stellar evolution. They are formed in SN explosions\nand can often be found inside their associated SNRs. \nPulsars represent a class of CR sources that has not been considered as extensively as the more usual scenario of acceleration in SNR shocks, but the fact that they may produce CRs is well known \\citep{1981IAUS...94..175A,1987ICRC....2...92H,1989ApJ...342..807B}.
However,\nrecent measurements of positrons in CRs \n\\citep{AdrianiEtAl:2009, AckermannEtAl:2012, AguilarEtAl:2014}\nin excess of predictions of propagation models\n\\citep{1982ApJ...254..391P,1998ApJ...493..694M}, made under an assumption of\ntheir entirely secondary production in the ISM, elevated\npulsars to be one of the primary candidate sources responsible for this excess\n\\citep[e.g.,][]{2009JCAP...01..025H,2009PhRvL.103e1101Y,2009PhRvD..80f3005M}.\nThe magnetospheres of rapidly\nrotating neutron stars are capable of producing electrons and positrons in\nsignificant numbers and accelerating them to very high energies resulting in a\npulsar wind nebula (PWN) that is observable from radio to high-energy $\\gamma$-ray{s} \\citep{GaenslerSlane:2006}; an archetypical example of such a source is the Crab pulsar and its PWN. The accelerated particles eventually escape from the PWN into the ISM and some of them can reach the Solar system. Therefore, PWNe can\nbe natural candidates responsible for the puzzling excess in CR positrons\nobserved by several experiments.\n\nRecent observations of the extended TeV emission around Geminga and PSR~B0656+14\nPWN by the High Altitude Water Cherenkov (HAWC) telescope constrain the\ndiffusion coefficient in their vicinities to be about two orders of\nmagnitude smaller than the average value derived from observations of CRs\n\\citep{AbeysekaraEtAl:2017}. Application of such slow diffusion to the local\nMilky Way results in a contradiction with other CR observations, in particular\nobservations of high-energy CR electrons and positrons. 
Fast energy losses of\nTeV particles through inverse Compton (IC) scattering and synchrotron emission\nlimit their lifetime to $\\sim$100~kyr \\citep{1998ApJ...509..212S}.\nIf such slow diffusion is representative of the ISM\nwithin a few hundred pc of the Solar system, the sources of the TeV particles detected at Earth also need to be within a few tens of pc.\n\\citet{ProfumoEtAl:2018} highlighted that such nearby sources have\nnot been identified and proposed a\ntwo-zone diffusion model with the slow diffusion confined to a small region\naround the PWN. Other authors have considered similar scenarios with varying\ndetails \\citep{TangPiran:2018,FangEtAl:2018,EvoliEtAl:2018}. Interestingly, a\nsimilar suppression of CR diffusion was observed in the Large Magellanic\nCloud around the 30 Doradus star-forming region, where an analysis of combined\n$\\gamma$-ray{} \nand radio observations \nyielded a diffusion coefficient, averaged over a region with radius\n200--300 pc, an order of magnitude smaller than the typical value in the Milky\nWay \\citep{2012ApJ...750..126M}. Meanwhile, a strong suppression of the\ndiffusion coefficient around a SNR due to excitation of magnetic\nturbulence by escaping CRs\nwas predicted some time before the HAWC observations were reported \\citep[e.g.,][see also references therein]{2008AdSpR..42..486P,MalkovEtAl:2013,2016PhRvD..94h3003D}. A similar mechanism may also be at work in PWNe.\n\nIn this paper the two-zone diffusion model is explored using the latest\nversion of the GALPROP\\footnote{\\url{http:\/\/galprop.stanford.edu} \\label{url}}\npropagation code \\citep{PorterEtAl:2017,JohannessonEtAl:2018}. The results indicate that such a model is a viable interpretation of the HAWC observations and confirm similar conclusions made by other authors.
Predictions for lower-energy $\\gamma$-ray{} emission that can be observed with the {\\it Fermi} Large Area Telescope\n({\\em Fermi}-LAT) are made, and the contribution of energetic positrons coming from Geminga to the\nobserved CR positron flux in different scenarios is studied. Effects of the\nsize of the slower diffusion zone (SDZ) and the properties of the accelerated electrons\/positrons are taken into\naccount, as is the effect of the proper motion of Geminga. Unless\nboth Geminga and PSR~B0656+14 are special cases, more regions of\nslower diffusion are expected around other PWNe in the Milky Way. The implications that\nsuch inhomogeneity of the diffusion in the ISM can have on the CR distribution\nthroughout the Milky Way are also explored.\n\n\\section{A Model for Geminga}\n\n\\subsection{Physical Setup}\n\nIt is assumed that the Geminga pulsar is injecting accelerated electrons and\npositrons into the ISM in equal numbers with a fraction $\\eta$ of its\nspin-down power converted to pairs. After injection the particles\npropagate via a diffusive process. The pulsar\nparameters used for this paper are identical to those from\n\\citet{AbeysekaraEtAl:2017} and the\nenergy distribution of the injected electrons\/positrons is described with a\nsmoothly joined broken power law\n\\begin{equation}\n \\frac{dn}{dp} \\propto E_k^{-\\gamma_0}\\left[ 1 + \\left( \\frac{E_k}{E_b}\n \\right)^\\frac{\\gamma_1-\\gamma_0}{s} \\right]^{-s}.\n \\label{eq:injectionSpectrum}\n\\end{equation}\nHere $n$ is the number density of electrons\/positrons, $p$ is the particle\nmomentum, $E_k$ is the particle kinetic energy, and $\\gamma_1$ is a\npower-law index at high energies. The smoothness parameter $s=0.5$ is assumed\nconstant, as are the low-energy index $\\gamma_0 = -1$ and the break energy\n$E_b=10$~GeV. This low-energy break effectively truncates the injected particle spectrum, which is not expected to extend unbroken to lower energies \\citep[e.g.,][]{Amato:2014}.
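As a sanity check on Eq.~(\\ref{eq:injectionSpectrum}), the asymptotic slopes of the injection spectrum can be verified numerically. The minimal sketch below (with the parameter values quoted above and the illustrative choice $\\gamma_1=2$, one of the values explored later in the text) confirms that the spectrum rises as $E_k^{+1}$ well below the break and falls as $E_k^{-\\gamma_1}$ well above it:

```python
import math

def injection_spectrum(E_k, gamma0=-1.0, gamma1=2.0, E_b=10.0, s=0.5):
    """Un-normalized dn/dp of Eq. (1): a smoothly joined broken power law.
    E_k in GeV; gamma1 = 2.0 is an illustrative value from the text."""
    return E_k**(-gamma0) * (1.0 + (E_k / E_b)**((gamma1 - gamma0) / s))**(-s)

def log_slope(f, E):
    """Local logarithmic slope d ln f / d ln E via a small finite difference."""
    h = 1e-4
    return (math.log(f(E * (1 + h))) - math.log(f(E * (1 - h)))) / (2 * h)

# Well below the break the spectrum rises as E^{-gamma0} = E^{+1};
# well above it falls as E^{-gamma1} = E^{-2}.
slope_low = log_slope(injection_spectrum, 0.1)     # ~ +1
slope_high = log_slope(injection_spectrum, 1.0e4)  # ~ -2
print(slope_low, slope_high)
```

Changing $\\gamma_1$ alters only the high-energy slope; the low-energy truncation is controlled by $\\gamma_0$ and $E_b$.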
The break is required to keep the value of $\\eta$ below 1 for the largest values of $\\gamma_1$ considered. The truncation occurs at energies below those explored in this paper and has no effect on the results.\nThe injection spectrum is normalized so that the total power injected is given by the expression\n\\begin{equation}\n L(t) = \\eta \\dot{E}_0 \\left( 1 + \\frac{t}{\\tau_0} \\right)^{-2},\n \\label{eq:PulsarPower}\n\\end{equation}\nwhere $\\dot{E}_0$ is the initial spin-down power of the pulsar, and $\\tau_0 =\n13$~kyr. The initial spin-down power is calculated using the current\nspin-down power of $\\dot{E}=3.26\\times10^{34}$ erg s$^{-1}$ assuming that the\npulsar age is $T_p = 340$~kyr. The distance to Geminga has been determined to\nbe 250~pc \\citep{FahertyEtAl:2007}. The spatial grid for the propagation\ncalculations with GALPROP is right-handed with the Galactic centre (GC) at the origin, the Sun\nat $(x,y,z)=(8.5, 0, 0)$~kpc, and the $z$-axis oriented toward the Galactic\nnorth pole. This coordinate system places Geminga at $(8.7407, 0.0651, 0.0186)$~kpc at the current epoch.\n\nPrevious interpretations of the HAWC observations have assumed that the Geminga\npulsar is a stationary object\n\\citep{ProfumoEtAl:2018,TangPiran:2018,FangEtAl:2018,EvoliEtAl:2018}. Only\n\\citet{TangPiran:2018} discussed the effect of the proper motion of Geminga,\nbut concluded that it has little effect on electrons generating the observed\nTeV $\\gamma$-ray{} emission because of their short cooling timescale.\n\\citet{FahertyEtAl:2007} measured the proper motion of Geminga to be 107.5 mas\nyr$^{-1}$ in right ascension and 142.1 mas yr$^{-1}$ in declination.
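These proper-motion components, together with the 250~pc distance, fix the transverse speed of the pulsar. A minimal sketch of the conversion (the constant 4.74 converts 1~arcsec~yr$^{-1}$ at 1~pc into km~s$^{-1}$; the Galactic components are the transformed values discussed in the text):

```python
import math

# Proper motion of Geminga (Faherty et al. 2007) and its distance.
mu_ra, mu_dec = 107.5, 142.1   # mas/yr, equatorial components
mu_l, mu_b = -80.0, 156.0      # mas/yr, Galactic components from the text
d_pc = 250.0

K = 4.740471                   # km/s per (arcsec/yr) at 1 pc

def transverse_speed(mu1_mas, mu2_mas, d_pc):
    """Transverse speed in km/s from two proper-motion components in mas/yr."""
    mu_tot = math.hypot(mu1_mas, mu2_mas) / 1000.0  # arcsec/yr
    return K * mu_tot * d_pc

v_eq = transverse_speed(mu_ra, mu_dec, d_pc)   # ~211 km/s
v_gal = transverse_speed(mu_l, mu_b, d_pc)     # ~208 km/s (rounding differences)
print(v_eq, v_gal)
```

Both values agree, to rounding, with the magnitude of the velocity vector adopted in the text ($\\approx 205$~km~s$^{-1}$), since the velocity is assumed perpendicular to the line of sight.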
Transforming this into\nGalactic coordinates leads to $-80.0$ mas yr$^{-1}$ in longitude and 156.0 mas\nyr$^{-1}$ in latitude.\nAssuming a constant proper velocity that is currently\nperpendicular to the line-of-sight, the corresponding velocity in the spatial\ngrid coordinate system is $(v_x, v_y, v_z) = (24.3, -89.9,\n182.6)$~km s$^{-1}$ with Geminga originally located at $(8.7320, 0.0963, -0.0449)$~kpc. \nIn this case, Geminga was born in a SN explosion of a star that\nhas travelled from the direction of the Orion OB1a association\n\\citep{SmithEtAl:1994}. While the TeV $\\gamma$-ray{} emission is not significantly\naffected by the proper motion, the large traveled distance and proximity of\nGeminga to the Sun should produce a ``trailing'' tail of CRs whose $\\gamma$-ray{} emissions may be observable\nat lower energies with {\\em Fermi}-LAT.\n\nFor the two-zone diffusion model, the diffusion coefficient in a confined\nregion around the pulsar is assumed to be lower than that in the ISM due to\nthe increased turbulence of the magnetic field.\nThis region will hereafter be called a ``slower diffusion zone'' (SDZ). \nIt is also assumed that the stronger turbulence does not change the power\nspectrum and hence the rigidity dependence of the diffusion coefficient is the\nsame for the SDZ and the ISM. Let $r$ be the distance from the center of the\nSDZ, then the spatial dependence of the diffusion coefficient is given by\n\\begin{equation}\n D\\!=\\!\\beta \\left( \\frac{R}{R_0} \\right)^\\delta\n \\begin{cases}\n \\displaystyle D_z, & r < r_z, \\\\\n \\displaystyle D_z \\left[ \\frac{D_0}{D_z} \\right]^{\\frac{r-r_z}{r_t-r_z}}, & r_z \\le r \\le r_t, \\\\\n D_0, & r > r_t. 
\n \\end{cases} \n \\label{eq:diffusion}\n\\end{equation}\nHere $\\beta$ is the particle velocity in units of the speed of light,\n$D_0$ is the normalization of the diffusion coefficient in the general ISM,\n$D_z$ is the normalization of the diffusion coefficient within the SDZ\nwith radius $r_z$, $R$ is the particle rigidity, and $R_0=4$ GV is the normalization (reference) rigidity. In the transitional layer between $r_z$ and $r_t$, the\nnormalization of the diffusion coefficient increases exponentially with $r$\nfrom $D_z$ to the interstellar value $D_0$. Depending on the exact origin of\nthe SDZ, its radius can be time dependent as can its location. To be\nconsistent with the HAWC observations, both $r_z$ and $r_t$ have to be of the\norder of a few tens of pc at the current time.\n\nThe effect of the SDZ around the PWN on the spectra and distribution of\ninjected electrons\/positrons and their emission may depend on its origin. \nTwo general categories distinguished by the origin of the SDZ are\nconsidered. For the first category the SDZ is due to events external to the PWN itself (external SDZ) that may be related to the\nprogenitor SNR, or surrounding environment. It is assumed that the\nparticle propagation time in the zone is much longer than the time necessary\nto generate such a zone itself (``instant'' generation) and its location is\nfixed. For the second category, the SDZ is assumed to be associated with and\ngenerated by the PWN itself (PWN SDZ).\nTherefore, the SDZ is moving with the Geminga pulsar and its size increases\nproportionally to the square root of time. This is in qualitative\nagreement with the evolution of a PWN as determined from simulations\n\\citep[e.g.][]{vanderSwaluw:2003}.\n\nFor the PWN SDZ, the expectation is that there may be a link\nbetween the pulsar that is generating the magnetic turbulence and the surrounding\ninterstellar plasma. 
In this case, the pulsar would transfer a momentum to the medium within the SDZ.\nTo estimate the evolution of the velocity of Geminga a simple model is put forward. It is assumed that\nthe radius of the SDZ region $r_z$ around Geminga is increasing in a diffusive manner so that $r_z(t) = \\mu \\sqrt{t}$, where $\\mu$ is a constant. It is further assumed that a fraction $f$ of the ISM within\n$r_z$ is swept by the PWN and sped up to the velocity $v$ of the pulsar. Using conservation of momentum $dp = -v dM$ gives\n$v=v_0 (M_0\/M)^2$, where $v_0$ and $M_0\\approx M_\\odot$ are the initial velocity and mass of the system, respectively.\nSubstitution of $r_z(t)$ and $v(M)$ into the mass conservation formula $dM = f \\pi \\rho r_z^2 v dt$, yields the time evolution of the velocity of Geminga:\n\\begin{equation}\n v = \\frac{v_0}{\\left(3 A_d v_0 t^2 \/2 + 1\\right)^{2\/3}}.\n \\label{eq:vDiffusion}\n\\end{equation}\nHere $A_d = f \\pi \\rho \\mu^2 \/ M_0$ is the drag coefficient and $\\rho \\approx\n0.03$~$M_\\odot$~pc$^{-3}$ is the mass density of the ISM.\nEq.~(\\ref{eq:vDiffusion}) can be integrated to get the full traveled distance:\n\\begin{equation}\n \\lambda(t) = v_0 t\\ _2F_1\\left( \\frac{1}{2}, \\frac{2}{3}; \\frac{3}{2}; -\\frac{3 A_d v_0 t^2}{2} \\right),\n \\label{eq:lDiffusion}\n\\end{equation}\nwhere $_2F_1$ is the hypergeometric function. Further assuming that $r_z(T_p)\n\\approx 30$~pc and that $f\\approx 10^{-3}$, the distance traveled by Geminga\nis $\\lambda(T_p) \\approx 330$~pc and its initial velocity is $v_0 \\approx\n4300$~km s$^{-1}$. This places the birth of Geminga at the location of Orion\nOB1a, which is at a distance of 330~pc from its current location. \nThis is a very intriguing possibility, but in this case $v_0$ is much higher\nthan the observed velocities of any other neutron stars that reach only up to\n1000~km s$^{-1}$ \\citep[e.g.,][]{HobbsEtAl:2005}. 
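Eqs.~(\\ref{eq:vDiffusion}) and (\\ref{eq:lDiffusion}) are straightforward to verify numerically; instead of evaluating the hypergeometric function, $\\lambda(t)$ can be obtained by integrating $v(t)$ directly. A sketch using $\\mu^2 = r_z^2(T_p)\/T_p$ and, for illustration, $f=10^{-4}$ with $v_0 = 400$~km~s$^{-1}$:

```python
import math

# Model parameters from the text: r_z(T_p) ~ 30 pc, T_p = 340 kyr,
# rho = 0.03 Msun/pc^3, M0 = 1 Msun.  Illustrative values f = 1e-4, v0 = 400 km/s.
T_p = 3.4e5                          # yr
mu2 = 30.0**2 / T_p                  # mu^2 in pc^2/yr, from r_z(t) = mu*sqrt(t)
rho, M0, f = 0.03, 1.0, 1.0e-4
A_d = f * math.pi * rho * mu2 / M0   # drag coefficient, pc^-1 yr^-1

KMS_TO_PCYR = 1.0 / 9.778e5          # 1 km/s in pc/yr
v0 = 400.0 * KMS_TO_PCYR

def v(t):
    """Eq. (4): velocity of the pulsar at time t (pc/yr)."""
    return v0 / (1.5 * A_d * v0 * t**2 + 1.0)**(2.0 / 3.0)

def travelled(t, n=2000):
    """Distance lambda(t) in pc: Simpson integration of v, equivalent to Eq. (5)."""
    h = t / n
    odd = sum(v((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(v(2 * i * h) for i in range(1, n // 2))
    return (v(0.0) + v(t) + 4.0 * odd + 2.0 * even) * h / 3.0

v_now = v(T_p) / KMS_TO_PCYR   # ~200 km/s, close to the observed transverse speed
lam = travelled(T_p)           # ~110 pc
print(A_d, v_now, lam)
```

With these inputs the drag coefficient evaluates to $A_d \\approx 2.5\\times10^{-8}$~pc$^{-1}$~yr$^{-1}$, and the sketch returns a present-day speed of $\\approx 200$~km~s$^{-1}$ and $\\lambda(T_p) \\approx 110$~pc, consistent with the estimates quoted in the text for this choice of $f$.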
Assuming $f\\approx 10^{-4}$\nresults in a more reasonable speed of $v_0 \\approx 400$~km s$^{-1}$ and a\ntotal distance traveled of $\\lambda(T_p) \\approx 100$~pc. A moderate slow\ndown agrees with observations of older pulsars having, on average, smaller velocities than young pulsars \\citep{HobbsEtAl:2005}.\n\n\n\\subsection{Calculation Setup}\n\nThe calculations are made using the latest release of the GALPROP\ncode\\textsuperscript{\\ref{url}} \\citep{PorterEtAl:2017,\nJohannessonEtAl:2018}. The GALPROP code solves the diffusion-advection\nequation in three spatial dimensions allowing for diffusive re-acceleration in\nthe ISM -- see the website and GALPROP team papers for full details. Of major\nrelevance for this paper, the code fully accounts for energy losses due to synchrotron\nradiation and inverse Compton (IC) scattering. The resulting synchrotron and\nIC emission is calculated for an observer located at the position of the Solar\nsystem. The IC calculations \\citep{2000ApJ...528..357M} take into account the\nanisotropy of the interstellar radiation field (ISRF). The current\ncalculations employ the magnetic field model of \\citet{SunEtAl:2008} described\nby their Eq.~(7) and the R12 model for the ISRF developed by\n\\citet{PorterEtAl:2017}. While the turbulent magnetic field\nis expected to be larger in the SDZ, the magnetic field model is not updated\nto account for this. 
The expected increase in the synchrotron cooling rate and\ncorresponding electromagnetic emissions will not significantly affect the results presented below.\n\nThe code has also been enhanced to use\nnon-equidistant grids to allow for increased resolution in particular areas of\nthe Milky Way, in this case around Geminga.\nThis improvement to {\\it GALPROP}\\ is inspired by the Pencil Code\\footnote{See Section 5.4 of\n \\url{http:\/\/pencil-code.nordita.org\/doc\/manual.pdf}}\n\\citep{BrandenburgDobler:2002}, where the use of analytic functions can have advantages in speed and memory usage over purely numerical implementations for non-uniform grid spacing.\nThe current run uses the grid function\n\\begin{equation}\n z(\\zeta) = \\frac{\\epsilon}{a}\\tan\\left[ a\\left( \\zeta - \\zeta_0 \\right) \\right] + z_0\n \\label{eq:gridFunction}\n\\end{equation}\nfor all spatial coordinates $\\zeta = x, y, z$, where $\\epsilon, a, \\zeta_0, z_0$\nare parameters. This function maps from the linear grid $\\zeta$ to the\nnon-linear grid in each of the spatial coordinates $x, y, z$. The equations are solved on the $\\zeta$ grid after the\ndifferential equations have been updated to account for the change in first\nand second derivatives.\n\nThe parameters of the grid function are chosen such that the resolution is 2\npc at a central location that is defined below for each calculation, but grows\nto $0.1$\\,kpc at a distance of $700$\\,pc from the grid center. This setup\nprovides about 0.2 degree resolution on the sky for objects\nlocated at a distance of 250~pc along the line of sight towards the center of the\ngrid. To minimize artificial asymmetry, the grid\nhas the current location of Geminga close to the center of\na pixel and close enough to the center of the non-equidistant grid so that\nthere is little distortion due to the variable pixel size.
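For intuition, the local spacing implied by Eq.~(\\ref{eq:gridFunction}) is $dz\/d\\zeta = \\epsilon\\sec^2\\left[a\\left(\\zeta-\\zeta_0\\right)\\right]$, so $\\epsilon$ sets the central resolution and $a$ the rate of coarsening. The sketch below picks illustrative values (not necessarily those of the production runs) that reproduce the quoted 2~pc central and $\\sim$100~pc outer resolution:

```python
import math

# Illustrative parameters for Eq. (6) with zeta_0 = z_0 = 0:
# the central spacing is eps (per unit zeta); requiring ~100 pc spacing
# at |z| = 700 pc gives a = (eps/700)*sqrt(100/eps - 1).
eps = 2.0                                   # pc, central resolution
a = (eps / 700.0) * math.sqrt(100.0 / eps - 1.0)

def z_of_zeta(zeta):
    """Eq. (6): map from the uniform grid zeta to the physical coordinate z (pc)."""
    return (eps / a) * math.tan(a * zeta)

def spacing(zeta):
    """Local grid spacing dz/dzeta = eps * sec^2(a*zeta)."""
    return eps / math.cos(a * zeta)**2

# Spacing at the center and at the zeta for which z = 700 pc:
zeta_700 = math.atan(700.0 * a / eps) / a
print(spacing(0.0), z_of_zeta(zeta_700), spacing(zeta_700))
```

The mapping is valid for $|a(\\zeta-\\zeta_0)| < \\pi\/2$; beyond the chosen outer radius the boundaries can be pushed far out at little cost, as noted below.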
For the chosen\nparameters the grid size is approximately uniform within a box having a width of\n$\\sim60$~pc, which is not enough to enclose the entire evolution of the\nGeminga PWN. The resolution and size of the grid are bounded by computational cost; the selected parameters are a result of a compromise\nbetween accuracy, memory requirements, and computation speed. This\nnecessarily means that the start and\/or the end of the evolution of Geminga is not fully resolved. Because the diffusion process smooths the distribution of particles as time evolves, the grid is chosen such that the current location of Geminga, and hence the TeV $\\gamma$-ray{} emission, is\nalways well resolved, but sometimes this may come with a small decrease in spatial resolution at its birth place.\n\nThe calculations are performed in a square box with a width of 8~kpc.\nThis is wide enough that the CR propagation calculations in the Solar neighborhood are not affected by the boundary conditions. The non-equidistant grid allows the boundaries to be extended this far without imposing large computational\ncosts. A fixed timestep of 50 years is\nused for the calculations. This is small enough to capture the propagation and\nenergy losses near the upper boundary of the energy grid, which is 1~PeV. The\nupper energy boundary is chosen to be well above that for the particles\nproducing the HAWC data. The lower energy\nboundary is set at 100 MeV, much lower than the cutoff in the injection\nspectrum. The energy grid is logarithmic with 16 bins per decade.\n\nThe values of $D_0 = 4.5 \\times 10^{28}$ cm$^2$ s$^{-1}$ and $\\delta=0.35$ are\nchosen to match the latest AMS-02 data on secondary CR species (see Section~\\ref{sec:MW}). The SDZ diffusion coefficient of $D_z = 1.3 \\times 10^{26}$ cm$^2$ s$^{-1}$ at $R_0=4$ GV corresponds to the value derived from the HAWC observations at higher energies \\citep{AbeysekaraEtAl:2017}.
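With these normalizations, the two-zone profile of Eq.~(\\ref{eq:diffusion}) can be written down directly. The sketch below uses the quoted values together with an SDZ of $(r_z, r_t)=(30,50)$~pc and sets $\\beta=1$, appropriate for the relativistic electrons of interest:

```python
D0 = 4.5e28   # cm^2/s, ISM normalization at R0
Dz = 1.3e26   # cm^2/s, SDZ normalization at R0
delta, R0 = 0.35, 4.0
r_z, r_t = 30.0, 50.0   # pc

def diffusion_coefficient(r, R, beta=1.0):
    """Eq. (3): D(r, R) with an exponential bridge between r_z and r_t."""
    if r < r_z:
        norm = Dz
    elif r <= r_t:
        norm = Dz * (D0 / Dz)**((r - r_z) / (r_t - r_z))
    else:
        norm = D0
    return beta * (R / R0)**delta * norm

print(diffusion_coefficient(10.0, 4.0))    # inside the SDZ: Dz
print(diffusion_coefficient(100.0, 4.0))   # outside: the ISM value D0
print(D0 / Dz)                             # suppression factor, ~350
```

Inside the SDZ the coefficient is thus suppressed by $D_0\/D_z \\approx 350$, the ratio of order 300 referred to later in connection with the numerical stability of the transition layer.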
No attempt is made to independently fit the latter to the HAWC observations because each propagation calculation is\ncomputationally expensive, taking $\\sim$ 24\nhours to complete on a modern 40 core CPU. The calculations also include\ndiffusive re-acceleration with an Alfv{\\'e}n speed $v_A = 17$~km s$^{-1}$, as determined from the fit to the secondary-to-primary data. \nTo save CPU time the calculations are made using electrons only,\nbecause the energy losses and propagation of positrons and electrons are\nidentical. When comparing to the measured positron flux at Earth, the\ncalculated particle flux is simply divided by two. The IC emission is\ncalculated on a HEALPix \\citep{GorskiEtAl:2005} grid having an order of 9\ngiving a resolution of about 0.1 degrees. The IC emission is evaluated on a\nlogarithmic grid in energy from 10~GeV to 40~TeV with 32 energy planes.\n\n\\subsection{Results}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig1a}\\\\\n \\includegraphics[width=0.48\\textwidth]{fig1b}\n \\caption{Surface brightness of the modelled IC emission shown as a function\n of angular distance from the current location of Geminga. The models shown\n here assume Geminga is stationary, $\\gamma_1=2.0$, and the SDZ is\n stationary and centered at Geminga. The top panel shows the energy range\n 8~TeV to 40~TeV compared to the observations of HAWC\n \\citep{AbeysekaraEtAl:2017}. The bottom panel shows the model predictions\n for the energy range 10~GeV to 30~GeV. The different curves represent\ndifferent assumptions about the size of the SDZ, encoded as SDZ($r_z$, $r_t$)\nin the legend.}\n \\label{fig:StationarySB}\n\\end{figure}\n\n\nFor a comparison with results of previous studies the first set of calculations is performed\nassuming that there is no proper motion of Geminga. The index of the injection spectrum is\nfixed to $\\gamma_1 = 2.0$ and the efficiency parameter is $\\eta=0.26$. 
The SDZ\nis centered at the current location of Geminga $(l_G, b_G) = (195.\\!\\!^\\circ14, 4.\\!\\!^\\circ26)$ and is static in size. The spatial grid is also centered\nat the same location. Several combinations of $(r_z,r_t)$ are used,\ncovering a range \nfrom 30~pc to 50~pc for $r_z$ and from 50~pc to 70~pc for $r_t$. \nCalculations are also made using a model with $(r_z,r_t) =\n(\\infty, \\infty)$ for a comparison with the results from\n\\citet{AbeysekaraEtAl:2017}. Figure~\\ref{fig:StationarySB} (top panel) shows the angular\nprofile of the surface brightness of the modeled IC emission evaluated over the\nenergy range from 8~TeV to 40~TeV and compared with the data from HAWC. The\nresulting profile is clearly independent of the size of the SDZ,\nbecause the cooling time limits the diffusion length at the corresponding CR\nelectron energies ($\\gtrsim 100$~TeV) to be $\\lesssim 10$~pc. The same cannot be said for the IC emission evaluated\nfor the energy range from 10~GeV to 30~GeV, which is also shown in\nFigure~\\ref{fig:StationarySB} (bottom panel). Because of the longer cooling timescales the\nemission profile is highly sensitive to the size of the SDZ, via both $r_z$\nand $r_t$. A smaller SDZ\nsize leads to a correspondingly lower number density of CR electrons\/positrons within the zone, which produces a flatter profile and fainter emission in the GeV energy\nrange.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig2a}\\\\\n \\includegraphics[width=0.48\\textwidth]{fig2b}\n \\caption{Spectrum of IC emission averaged over a $10^\\circ$ wide region\n around the current location of Geminga. The models shown here assume\n Geminga is stationary and that the SDZ is stationary and centered at\n Geminga. The top panel shows the spectrum for different values of\n $\\gamma_1$ as indicated in the legend, but using fixed values of $(r_z,\n r_t)=(30,50)$~pc. 
The bottom panel shows the spectrum for models with\n different pairs of $(r_z, r_t)$, but fixed value of $\\gamma_1=2.0$. The\n green shaded region corresponds to HAWC observations assuming a diffusion\n profile for the spatial distribution \\citep{AbeysekaraEtAl:2017} and the\n magenta arrow is the upper limit from observations with the MAGIC telescope\n corrected for the diffusion profile \\citep{TangPiran:2018}.}\n \\label{fig:StationarySpectrum}\n\\end{figure}\n\nCalculations with different values of $\\gamma_1=1.8$ and 2.2 \nare made using $(r_z, r_t)=(30,50)$~pc. The conversion efficiency, $\\eta$, is\ncorrespondingly updated for these models to provide consistency with the HAWC data. The values are\n$\\eta = 0.18$ and $\\eta = 0.75$ for $\\gamma_1=1.8$ and $\\gamma_1=2.2$,\nrespectively. The average spectrum of the IC emission over a circular region\nof $10^\\circ$ in radius around the current location of Geminga is shown in\nFigure~\\ref{fig:StationarySpectrum}. The models are all tuned to agree\nwith the HAWC data and, therefore, the predicted profiles differ significantly at lower energies.\nAlso shown is the prediction made using the isotropic assumption for the IC\ncross section (dotted curve in upper panel), which is a commonly used approximation that ignores the angular dependence of the ISRF.\nUsing the isotropic approximation under-predicts the emission by $\\sim$10\\% at 10~TeV, and $\\sim25$\\% at 10~GeV. The discrepancy between the IC\nemission calculated with the realistic angular distribution of the ISRF and\nwith the isotropic distribution depends on the adopted electron injection spectrum\nand the size of the SDZ, with softer injection spectra and larger SDZs producing larger differences. 
\n\nThe energy dependence of the discrepancy can be understood kinematically: the very highest energy $\\gamma$-ray{s} are produced in head-on collisions with back-scattered soft photons, for which the ultrarelativistic electrons ``see'' the angular distribution of soft photons concentrated in a narrow (head-on) beam. For lower electron energies, the angular distribution of the background photons is considerably more important and more significantly affects the energy of the upscattered $\\gamma$-ray{s}. \nSofter injection spectra contain more low-energy electrons, and larger SDZs confine low-energy electrons more effectively. \nThe surface brightness profile is also slightly affected\nby the isotropic assumption for the IC emission, as shown in Figure~\\ref{fig:StationarySB} (bottom panel).\nThe profile is more peaked\nunder the isotropic assumption; the under-prediction in the wings of the\nprofile is $\\sim$ 10\\% deeper than near the center.\nConsequently, the isotropic approximation can lead to an incorrect determination of the diffusion coefficient from the shape of the profile.
Changing the injection spectrum\nwill not affect the shape of the angular profile while changing the SDZ size does. The angular extent of the emission should therefore\nbe a good indicator of the size of the SDZ.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig3a}\\\\\n \\includegraphics[width=0.48\\textwidth]{fig3b}\n \\caption{\nPredicted flux of positrons at Earth. The models shown here assume Geminga is\nstationary and that the SDZ is stationary and centered at the current location\nof Geminga. The top panel shows the spectrum for different values of\n$\\gamma_1$ as indicated in the legend, but using a fixed SDZ size $(r_z,\nr_t)=(30,50)$~pc. The bottom panel shows the spectrum for models with different\npairs of $(r_z, r_t)$, but fixed value of $\\gamma_1=2.0$. The points are\nAMS-02 data \\citep{AguilarEtAl:2014}.}\n \\label{fig:StationaryPositronSpectrum}\n\\end{figure}\n\nOne of the main conclusions of the study by \\citet{AbeysekaraEtAl:2017} is\nthat the unexpected rise in the positron fraction as measured by PAMELA\n\\citep{AdrianiEtAl:2009}, {\\em Fermi}-LAT{} \\citep{AckermannEtAl:2012}, and AMS-02\n\\citep{2013PhRvL.110n1102A,AguilarEtAl:2014,2014PhRvL.113l1101A} cannot be due\nto the positrons accelerated by the Geminga PWN. This conclusion is based on a\none-zone diffusion model constructed to agree with the HAWC data. For a two-zone diffusion model the\nconclusion can be quite different. 
\\citet{ProfumoEtAl:2018} and\n\\citet{FangEtAl:2018} found that the positron excess could easily be\nreproduced with such a model using a power-law injection spectrum for\npositrons (and electrons) with indices of $2.34$ and $2.2$, respectively.\n\\citet{TangPiran:2018} also used data from the MAGIC telescope to\nconstrain the model and found\nthat a harder particle injection spectrum below a break energy of 30~TeV was\nnecessary, which \nresulted in a calculated positron flux at Earth that did not fully explain the rise in the positron fraction.\n\nFigure~\\ref{fig:StationaryPositronSpectrum} shows the positron flux at Earth as predicted by the models considered in this paper,\nwhere the top and bottom panels illustrate the effect of changing $\\gamma_1$ and of different SDZ sizes, respectively. The results clearly show a strong\nvariation in the expected positron flux at Earth. A smaller SDZ\nleads to a larger flux of positrons at Earth because of the larger\n\\emph{effective} diffusion coefficient. Only if the SDZ extends all the way to\nthe Solar system are the results of \\citet{AbeysekaraEtAl:2017} reproduced. Larger\nvalues of $\\gamma_1$ also result in a larger positron flux at Earth in the observed\nenergy range, while values larger than $\\gamma_1 \\approx 2.2$ are excluded\nas the predicted positron flux would exceed the data for an \nSDZ of size $(r_z,r_t)=(30,50)$~pc.\n\nSo far the calculations have been made considering a\nstationary source located at the current position of Geminga $(l_G, b_G)$.\nHowever, such an approximation is not supported by the observed large proper motion of the pulsar.\nIn the following analysis, the proper motion is taken into account, but different\nassumptions are made about the origin of the SDZ and the value of\nthe drag coefficient $A_d$ introduced in Eq.~(\\ref{eq:vDiffusion}).
The\ndifferent assumptions are referred to as scenarios A to D and detailed below.\nFor scenarios A and B, the SDZ is assumed to be stationary and is unrelated to\nGeminga. In both scenarios, $A_d=0$ and the pulsar velocity is constant.\nScenario A assumes that the SDZ is centered on the current position of\nGeminga by chance,\nand that the parameters of the zone are:\n$(r_z,r_t)=(30,50)$~pc. Scenario B assumes that the SDZ is centered at the\nbirth place of Geminga. In this scenario\nthe SDZ has to be much larger to extend from the birth place of Geminga to its\ncurrent location, $(r_z,r_t)=(90,110)$~pc. The\ncenter of the spatial grid is placed at $(8.7358, 0.0962,\n-0.0124)$~kpc for both these scenarios. This is about half-way between the birth place and the current\nlocation of Geminga for the $x$- and $y$-axes, but only a quarter of the way\nfor the $z$-axis. As described earlier, the location of the center of the grid is set to give the current location of Geminga preference over its birth location.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig4}\n \\caption{\nSurface brightness of the modeled IC emission for scenarios A through D shown as a function of the angular distance from the current location of Geminga. Also shown is the profile for the model with a stationary Geminga PWN with $(r_z,r_t)=(30,50)$~pc and $\\gamma_1=2.0$. The data points show the profile observed by HAWC \\citep{AbeysekaraEtAl:2017}.}\n \\label{fig:ProfilesScenarios}\n\\end{figure}\n\n\\begin{figure*}[tb]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{fig5a}\n \\includegraphics[width=0.49\\textwidth]{fig5b}\\\\\n \\includegraphics[width=0.49\\textwidth]{fig5c}\n \\includegraphics[width=0.49\\textwidth]{fig5d}\n \\caption{\nIC intensity maps evaluated at 10~GeV around the current location of Geminga. Shown are maps for scenarios A (top left), B (top right), C (bottom left), and D (bottom right). 
The maps are displayed using the colormap bgyw from the colorcet package \\citep{Kovesi:2015}.\n }\n \\label{fig:Maps10GeV}\n\\end{figure*}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig6}\n \\caption{\nPredicted positron fluxes at Earth for scenarios A through D. Also shown are the results for a model where Geminga is stationary with $(r_z,r_t)=(30,50)$~pc and $\\gamma_1=2.0$. The points are AMS-02 data \\citep{AguilarEtAl:2014}.}\n \\label{fig:PositronsScenarios}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig7}\n \\caption{\nSynchrotron intensity averaged over a region around the current location of Geminga with a radius of $1^\\circ$ for scenarios A through D. Also shown are the\nresults for a model where Geminga is stationary with $(r_z,r_t)=(30,50)$~pc\nand $\\gamma_1=2.0$.}\n \\label{fig:SynchrotronScenarios}\n\\end{figure}\n\nFor scenarios C and D, the center of the SDZ follows the location of Geminga\nand its size increases proportionally to the square root of time, normalized such that the final size of the SDZ is\n$(r_z,r_t)=(30,50)$~pc. The difference between these two scenarios is the value\nof the drag coefficient, $A_d$. In scenario C, $A_d=0$ and the velocity is\nconstant over time, while in scenario D,\n$A_d=2.636\\times10^{-8}$~pc$^{-1}$~yr$^{-1}$ giving Geminga an initial\nvelocity of $410$~km~s$^{-1}$ at a\nposition of $(8.72775, 0.113017, -0.0787255)$~kpc.\nFor numerical stability the gradient of the diffusion coefficient is limited to be less than one order of magnitude per grid pixel.\nBecause the ratio $D_0\/D_z \\sim 300$, it is necessary to increase $D_z$ initially for scenarios C and D to fulfill this condition\nuntil the difference between $r_t$ and $r_z$ corresponds to the size of 2 pixels in all directions of the grid. 
Correspondingly, $D_z$\nis increased by up to an order of magnitude for the first few thousand years\nonly, but this stability requirement does not significantly affect the results.\nThe center of the spatial grid in scenario C is the same as that in\nscenarios A and B, while in scenario D the center of the grid is at $(8.7300,\n0.1075, -0.0213)$~kpc to better resolve the entire evolution of Geminga.\nAgain the center is about half-way between the origin\nand current location of Geminga for $x$- and $y$-axes, but only a quarter of\nthe way for the $z$-axis. For scenarios A through D the variables $(r_t,\nr_z)$ are fixed to the values provided above and $\\gamma_1=2.0$. \n\nFigure~\\ref{fig:ProfilesScenarios} shows the angular profile of the surface\nbrightness in the energy range 8~TeV to 40~TeV compared to the HAWC\nobservations \\citep{AbeysekaraEtAl:2017}. For comparison the results of the\nmodel with stationary Geminga, $(r_z,r_t)=(30,50)$~pc and $\\gamma_1=2.0$ are\nalso shown. As expected, all models agree well with the data and\nsignificantly overlap. Though it is not seen in the Figure, the emission is\nstill reasonably symmetric around the current location of Geminga. To\n quantify the symmetry, the standard deviation of the distribution of\nintensities of pixels in the calculated HEALPix maps within each angular bin\nis calculated. The\nstandard deviation in each angular bin is of the order of 5\\% to 10\\% only.\nMeanwhile, some differences between models are also noticeable: the largest\nstandard deviation of the emission core is seen in scenario D because of the\nlarger proper velocity of the PWN, while the largest deviation in the tail is\nseen in scenario B with the largest SDZ size.\n\nIn contrast, at lower energies all models produce quite different\ndistributions of the surface brightness. 
In particular, considerable asymmetry in the intensity maps around 10 GeV is seen in\nFigure~\ref{fig:Maps10GeV}, especially for scenario B with the largest SDZ.\nAll scenarios where Geminga is moving show a distinctive tail in the low\nenergy range. In the extreme case of scenario B, the tail is exceptionally\nbroad and the emission of the tail is, in fact, much brighter than that at the\ncurrent location of Geminga. The differences between the other scenarios are small\nand hard to distinguish in these maps. The tail in scenario C is slightly\nmore extended than in scenarios A and D. Observations of the Geminga PWN at\n10 GeV will thus hardly be able to distinguish between these scenarios,\nbut may be able to verify the presence of the tail in scenario B.\n\nDespite their similar $\gamma$-ray{} emission maps (with the exception of\nscenario B), all scenarios predict different positron fluxes at Earth\n(Figure~\ref{fig:PositronsScenarios}). For energies below 1 TeV, the positron\nflux in scenario A is about a factor of 5 larger than that predicted for\nscenarios C and D. The difference between scenarios C and D is much smaller\nand mostly at the highest energies, where scenario D produces a higher flux.\nThis is due to the faster movement of the diffusion zone in scenario D, which\nallows the CR particles to escape quickly once the diffusion zone has left\nthem behind. This spherical-zone picture is somewhat idealized, and it is more likely that the SDZ\nwill have a shape elongated along the direction of motion. For such a case,\nthe results would become closer to scenario B, with a longer IC emission tail\nat low energies and a smaller flux of positrons at low and high energies, but\nwith a larger peak at around 300~GeV.\n\nIn addition to IC emission, GALPROP can calculate the predicted synchrotron emission from the models. 
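As a rough consistency check of the energy scales involved in this synchrotron counterpart, the following sketch uses Thomson-limit inverse-Compton scattering on the CMB and an assumed field of $B = 3\,\mu$G; Klein-Nishina corrections are ignored, and none of the numbers are taken from the fits:

```python
# Order-of-magnitude check: electrons that upscatter CMB photons to
# ~20 TeV (HAWC band) also synchrotron-radiate at ~keV in a
# few-microgauss field. All inputs are illustrative assumptions.
E_gamma = 20e12          # eV, target IC photon energy
eps_cmb = 6.3e-4         # eV, mean CMB photon energy (~2.7 kT at 2.725 K)
m_e = 0.511e6            # eV, electron rest mass

# Thomson limit: E_gamma ~ (4/3) * gamma^2 * eps_cmb
gamma = (3.0 * E_gamma / (4.0 * eps_cmb)) ** 0.5
E_e = gamma * m_e        # parent electron energy, a few tens of TeV

# Synchrotron critical frequency for the same electrons:
# nu_c = (3/2) * gamma^2 * nu_g, with nu_g ~ 2.8 Hz per microgauss
B_uG = 3.0               # assumed field strength in microgauss
nu_c = 1.5 * gamma**2 * 2.8 * B_uG   # Hz
h = 4.136e-15            # eV s, Planck constant
E_sync = h * nu_c        # ~1 keV
```

With these assumptions the parent electrons come out near 80 TeV and the synchrotron photons near 1 keV, consistent with the scales discussed in the text.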
In a magnetic field with a strength of several $\mu$G, as is expected in the vicinity of Geminga, the particles responsible for the TeV $\gamma$-ray{} emission will also emit synchrotron photons with energies of $\sim$ keV. Because of the shape of the electron\/positron spectrum of the injected particles from Geminga, the emission in radio and mm wavelengths is significantly dimmer than that from CR electrons from the Milky Way at large. At keV energies the synchrotron emission follows a profile similar to that of the TeV IC emission, being peaked at the current location of Geminga with a half-width of about a degree. Figure~\ref{fig:SynchrotronScenarios} shows the spectrum of the average intensity of the synchrotron emission within a degree of the current location of Geminga. All the models considered predict a similar intensity level of the emission that approximately follows a power law $I\propto E^{-1.8}$. The calculations are cut off at a few tens of keV because the electron spectrum can only be reliably determined up to the energies corresponding to the HAWC observations; anything beyond that is an extrapolation. This energy is reached already at a few keV in the synchrotron spectrum. The synchrotron emission is close to the level of the diffuse X-ray emission observed in the Milky Way ridge \citep{2000ApJ...534..277V}, but significantly less intense than that observed from the Geminga pulsar and the apparent nearby tails \citep{2010ApJ...715...66P}.\n\n\n\n\section{Implications for Propagation Throughout the Milky Way}\n\label{sec:MW}\n\nIf the results for the two PWNe reported in \citet{AbeysekaraEtAl:2017} are not special cases, then it is likely that there are similar pockets of slow diffusion around many CR sources elsewhere in the Milky Way. The effective diffusion coefficient would thus be smaller in regions where the number density of CR sources is higher. 
\nSecondary CR species with different inelastic production cross sections (e.g., $\bar{p}$ vs.\ B) probe different propagation distances. Interpretations of their data would yield different average diffusion coefficients, as was indeed found by \citet{JohannessonEtAl:2016}. \nStarburst galaxies with large star formation rates are expected to have very slow diffusion and should exhibit an energy-loss-dominated spectrum of $\gamma$-ray{} emission that is much flatter than that observed from galaxies where the leakage of CRs dominates the energy losses. Interestingly, exactly such an evolution of the spectral shape is observed when galaxies with different star formation rates, such as the Magellanic Clouds, Milky Way, M31 vs.\ NGC 253, NGC 4945, M82, NGC 1068, are compared \citep[][and references therein]{2012ApJ...755..164A}.\n\nHere it is assumed that the distribution of SDZs follows that of the CR sources, while the exact origin of these SDZs is not essential. With current CR propagation codes and reasonable resources, resolving the entire Milky Way with a few pc grid size on short time scales is intractable.\nA simpler approach must, therefore, be taken to explore the effects such SDZs have on the propagation of CRs throughout the Milky Way. Assuming that the properties of CR propagation and the distribution of CR sources vary on average very little over the residence time of CRs in the Milky Way, a steady-state model can be used. 
Also assuming that the residence time of CRs in the Milky Way is larger than the active injection time of the CR sources, the CR source distribution can be assumed smooth, with injected power that is approximately constant in time.\nThese two approximations have been extensively used in the past and are the starting point of almost all studies on CR propagation across the Milky Way.\n\nAs was shown in the previous Section, the effect of an SDZ is a local increase of the density of CRs for a certain period of time, leading to an increase of CR reaction rates with the ISM contained within the SDZ. CRs thus spend more time in the vicinity of their sources compared to a model with homogeneous diffusion. This is equivalent to an effective decrease of the diffusion coefficient averaged over a larger volume. Even though such an approximation does not account for the detailed spatial distribution of the individual SDZs, it does account for the increased interaction rate with the ISM, such as cooling and generation of secondary CR particles. Therefore, this approximation should still yield the correct spectra and abundances of CR species. 
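A back-of-the-envelope sketch of this equivalence treats the diffusion times through the slow and fast regions as additive; all numbers below are illustrative assumptions chosen to be Geminga-like, not fitted values:

```python
# Two-zone residence-time estimate: a slow-diffusion zone (SDZ) of size
# x_z embedded in a larger fast-diffusion region of size x_0.
# Illustrative numbers only.
pc = 3.086e18            # cm
D0 = 3.0e28              # cm^2/s, assumed ISM diffusion coefficient
Dz = D0 / 300.0          # SDZ coefficient, using the D0/Dz ~ 300 ratio
x_z = 30.0 * pc          # SDZ size
x_0 = 1000.0 * pc        # size of the surrounding fast region (assumed)

tau = x_0**2 / D0 + x_z**2 / Dz   # total residence time through both zones
D_eff = x_0**2 / tau              # single coefficient giving the same tau
suppression = D0 / D_eff          # algebraically 1 + (x_z/x_0)**2 * D0/Dz
```

Even a slow zone occupying only a small fraction of the volume lowers the effective coefficient noticeably (here by about 27%), because the time spent inside it scales with $D_0/D_z$.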
\n\nAssuming that CRs first propagate through an SDZ with diffusion coefficient $D_z$ and\nradius $x_z$ that is embedded in a region with diffusion\ncoefficient $D_0$ and radius $x_0 \gg x_z$, their residence time in the total volume can be estimated as\n\begin{equation}\n \tau \sim \frac{x_0^2}{D_0} + \frac{x_z^2}{D_z}.\n \label{eq:resTimeTwoZone}\n\end{equation}\nAssuming a single average diffusion coefficient $D$, the residence time can also be expressed as \n\begin{equation}\n \tau \sim \frac{x_0^2}{D}.\n \label{eq:resTimeAv}\n\end{equation}\nCombining the two equations results in\n\begin{equation}\n D \approx D_0 \left[ 1 + \left( \frac{x_z}{x_0} \right)^{2} \frac{D_0}{D_z}\right]^{-1}.\n \label{eq:avDiffusion}\n\end{equation}\n\nAssuming that there is an SDZ around each CR source, the density of SDZs is\nthe same as the CR source number density, $q(\vec{x})$, where $\vec{x}$ is the spatial coordinate. Let $V_z$ be the volume of each SDZ and $\Delta V$ be the volume of an element in the spatial grid used in the calculation. Then for each volume element\n\begin{equation}\n \frac{x_z}{x_0} \approx \left[\frac{V_z}{\Delta V} \int_{\Delta V} q(\vec{x}) dV\right]^{1\/3} \approx \left[q_x V_z\right]^{1\/3},\n \label{eq:xzx0approx}\n\end{equation}\nwhere $q_x$ is the CR source number density at the center of the volume element. The value $ Q=\int_V q(\vec{x}) dV$ is the total number of active CR sources in the Galaxy at any given time, and $QV_z$ is then the combined volume of all SDZs in the Galaxy. \nAssuming that the SN rate is 0.01 year$^{-1}$ and each source continuously accelerates particles for $\sim$$10^5$ years, there are $\sim$$10^3$ active CR sources at any given time. 
Assuming a radius of $x_z\\sim 30$~pc for each source, the combined volume of all SDZs is $QV_z \\approx 0.1$~kpc$^3$.\nThis value is in good agreement with estimates from \\citet{HooperEtAl:2017} and\n\\citet{ProfumoEtAl:2018}.\nThe ratio $x_z\/x_0$ in Eq.~(\\ref{eq:xzx0approx}) depends on the product of the number of SDZs and their sizes.\nBecause of this, equivalent results can be obtained also for, e.g., fewer SDZs of larger sizes, provided that the total occupied volume is the same and their distribution follows $q(\\vec{x})$.\nEq.~(\\ref{eq:avDiffusion}) also depends on the value of $D_0\/D_z$ which is set to 300 in the following calculations, which is similar to the value used in the Geminga calculations. The results described below, therefore, are valid across different model assumptions provided they are consistent with Eq.~(\\ref{eq:avDiffusion}).\n\n\\begin{figure*}[tb]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{fig8a}\n \\includegraphics[width=0.49\\textwidth]{fig8b}\n \\caption{Diffusion coefficient evaluated in the Galactic plane at the normalization rigidity $R_0=4$~GV for a model where the effective diffusion coefficient is given by Eq.~(\\ref{D_distr}). The left panel shows the distribution of the diffusion coefficient for the SA0 model while the right panel for the SA50 model. The location of the Sun is marked as a white star and the GC is at (0,0). 
The maps are displayed using the colormap bgyw from the colorcet package \\citep{Kovesi:2015}.\n }\n \\label{fig:DiffusionPlane}\n\\end{figure*}\n\nThe modified diffusion described above has been implemented in the\nGALPROP code to test its effect on the propagation of CR species in the Milky Way.\nWith a switch in the configuration file it is now possible to change to a diffusion configuration such that\n\\begin{equation}\n D(\\vec{x}) = D_0 \\left\\{1 + \\left[q\\left( \\vec{x} \\right) V_z \\right]^{2\/3} \\frac{D_0}{D_z} \\right\\}^{-1}.\n \\label{D_distr}\n\\end{equation}\nTherefore, even with $D_0$ and $D_z$ fixed the {\\it effective} diffusion coefficient is still spatially varying, where the total volume of SDZs in the Milky Way is a normalization parameter $QV_z\\approx 0.1$ kpc$^{3}$, as explained above. In principle, the number distribution of CR sources can be different from the distribution of their injected power into CRs in the Milky Way, but here they are assumed identical.\nTwo models SA0 and SA50 are used for the CR source density distribution\n\\citep{PorterEtAl:2017}. The SA0 model is a 2D axisymmetric pulsar-like\ndistribution \\citep{2004A&A...422..545Y} for the CR source density in the disk. The\nSA50 model has half of the injected CR luminosity distributed as the SA0\ndensity and the other half distributed as the spiral arms for the R12 ISRF\nmodel. Each CR source model is\npaired with both a constant homogeneous diffusion and the modified diffusion\ndescribed above resulting in a total of four models. The model calculations are performed on a 3D spatial grid that now includes the whole Milky Way using the grid function defined in Eq.~(\\ref{eq:gridFunction}). The parameters of the grid function are chosen such that the resolution in the $x$- and $y$-coordinates is 200~pc at the GC, increasing to 1 kpc at a distance of 20 kpc which is also the boundary of the grid. 
At the distance of the Solar system the resolution is about 350~pc in the $x$-direction and the Sun is located at the center of a volume element. In the $z$-direction the resolution is 50~pc in the plane, increasing to 200~pc at the boundary of the grid at $|z|=6$~kpc. The energy grid is logarithmic, ranging from 3~MeV to 30~TeV with 112 energy planes.\n\nThe distribution of the effective diffusion coefficient $D(\vec{x})$ in the Galactic plane is shown in Figure~\ref{fig:DiffusionPlane}, where the numbers correspond to its value at the normalization rigidity $R_0=4$~GV.\nThe spiral arm structure of the SA50 model is clearly visible.\nThe maximum change in the effective diffusion coefficient is about a factor of 3, which corresponds to a peak value of $q(\vec{x}) V_z \sim 10^{-3}$, i.e., $x_z\/x_0 \sim 0.1$.\nAll models employ the R12 ISRF from \citet{PorterEtAl:2017} and the 3D gas distributions from \citet{JohannessonEtAl:2018}.\nThe same standard procedure for parameter adjustment to match\nthe recent observations of CR species listed in Table~\ref{tab:CRdata} is followed for all source density models.\nSolar modulation is treated using the force-field approximation \citep{1968ApJ...154.1011G}, with one\nmodulation potential value for each observational period. The use of the latter is justified by the available resources and the main objective of this paper, i.e.\ to study the effect of the modified CR diffusion in a self-consistent manner, rather than to update the local interstellar spectra of CR species or to accurately determine the propagation parameters\footnote{For a detailed treatment of heliospheric propagation of CRs, see, e.g.,\n \citet{BoschiniEtAl:2017,BoschiniEtAl:2018b,BoschiniEtAl:2018a} and references therein.}.\nThe procedure for parameter tuning is the same as used by \citet{PorterEtAl:2017} and \citet{JohannessonEtAl:2018}.\nThe propagation parameters are first determined by fitting the models to the observed spectra of CR species, Be, B, C, O, Mg, Ne, and Si. 
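The force-field approximation mentioned above maps an interstellar spectrum $J_{\rm LIS}$ into the modulated spectrum at Earth through a single potential $\Phi$. A minimal sketch for protons follows; the power-law LIS and the value of $\Phi$ are assumptions for illustration, not the fitted GALPROP spectra:

```python
# Force-field approximation (Gleeson & Axford 1968) for CR protons:
# J(E) = J_LIS(E + Phi) * E (E + 2 m_p) / [(E + Phi)(E + Phi + 2 m_p)],
# with E the kinetic energy and Phi the modulation potential (Z/A = 1).
m_p = 0.938      # GeV, proton rest mass
Phi = 0.70       # GeV, assumed; comparable to the fitted ~700 MV values

def j_lis(E_k):
    """Assumed power-law interstellar proton spectrum (arbitrary units)."""
    return E_k ** -2.7

def j_modulated(E_k, phi=Phi):
    E_shift = E_k + phi
    factor = E_k * (E_k + 2.0 * m_p) / (E_shift * (E_shift + 2.0 * m_p))
    return j_lis(E_shift) * factor

# Modulation strongly suppresses the flux near 1 GeV but is nearly
# negligible near 100 GeV, which is why low-energy data constrain Phi.
ratio_1GeV = j_modulated(1.0) / j_lis(1.0)
ratio_100GeV = j_modulated(100.0) / j_lis(100.0)
```

With these assumptions the flux at 1 GeV is suppressed by roughly an order of magnitude, while at 100 GeV the correction is only a few percent.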
These are then\nkept fixed and the injection spectra for electrons, protons, and He fitted\nseparately. To reduce the number of parameters, only the most relevant primary abundances are fitted simultaneously with the propagation parameters while the rest are kept fixed. Because the injection spectra of all CR species are normalized relative to the proton spectrum at normalization energy 100 GeV\/nucleon the procedure is repeated to ensure consistency. One iteration provides a satisfactory accuracy in all cases. The only difference from the procedure described in \\citet{PorterEtAl:2017} and \\citet{JohannessonEtAl:2018} is the inclusion of elemental spectra of Be, C, and O from AMS-02 and the elemental spectra from ACE\/CRIS, while the data from HEAO-C3 and PAMELA has been removed.\n\n\\begin{deluxetable}{lll}[tb!]\n\\tablecolumns{3}\n\\tablewidth{0pc}\n\\tablecaption{Datasets used to derive propagation parameters\n \\label{tab:CRdata}}\n\\tablehead{\n\\multicolumn{1}{l}{Instrument} &\n\\multicolumn{1}{l}{Datasets} &\n\\colhead{Refs.\\tablenotemark{a}}\n}\n\\startdata\nAMS-02 (2011-2016) & Be, B, Be\/B, Be\/C, B\/C, Be\/O, B\/O & I \\\\\nAMS-02 (2011-2016) & C, O, C\/O & II \\\\\nAMS-02 (2011-2013) & e$^-$ & III \\\\\nAMS-02 (2011-2013) & H & IV \\\\\nAMS-02 (2011-2013) & He & V \\\\\nACE\/CRIS (1997-1998) & B, C, O, Ne, Mg, Si & VI \\\\\nVoyager 1 (2012-2015) & H, He, Be, B, C, O, Ne, Mg, Si & VII \n\\enddata\n\\tablenotetext{a}{I: \\citet{AguilarEtAl:2018}, II: \\citet{AguilarEtAl:2017}, III:\n\\citet{AguilarEtAl:2014}, IV: \\citet{AguilarEtAl:2015a}, V:\n\\citet{AguilarEtAl:2015b}, VI: \\citet{GeorgeEtAl:2009}, VII:\n\\citet{CummingsEtAl:2016}}\n\\end{deluxetable}\n\n\\begin{deluxetable}{lcccc}[tb!]\n\\tablecolumns{5}\n\\tablewidth{0pc}\n\\tablecaption{Final propagation model parameters. 
\\label{tab:CRparameters} }\n\\tablehead{\n & \\multicolumn{2}{l}{Homogeneous diffusion} & \\multicolumn{2}{l}{Modified diffusion} \\\\\n\\multicolumn{1}{l}{Parameter} & \n\\colhead{SA0} & \n\\colhead{SA50} &\n\\colhead{SA0} & \n\\colhead{SA50} \n}\n\\startdata\n\\tablenotemark{a}$D_{0,xx}$ [$10^{28}$cm$^2$\\,s$^{-1}$] & $4.36$ & $4.55$ \n & $4.41$ & $4.78$ \\\\\n\\tablenotemark{a}$\\delta$ & $0.354$ & $0.344$ \n & $0.358$ & $0.360$ \\\\\n$v_{A}$ [km s$^{-1}$] & $17.8$ & $18.1$ \n & $15.7$ & $17.6$ \\smallskip\\\\\n\\tablenotemark{b}$\\gamma_0$ & $1.33$ & $1.43$ \n & $1.40$ & $1.47$ \\\\\n\\tablenotemark{b}$\\gamma_1$ & $2.377$ & $2.399$ \n & $2.403$ & $2.381$ \\\\\n\\tablenotemark{b}$R_1$ [GV] & $3.16$ & $3.44$ \n & $3.80$ & $3.82$ \\\\\n\\tablenotemark{b}$\\gamma_{0,p}$ & $1.96$ & $1.99$ \n & $1.92$ & $1.93$ \\\\\n\\tablenotemark{b}$\\gamma_{1,p}$ & $2.450$ & $2.466$ \n & $2.469$ & $2.453$ \\\\\n\\tablenotemark{b}$\\gamma_{2,p}$ & $2.391$ & $2.355$ \n & $2.359$ & $2.321$ \\\\\n\\tablenotemark{b}$R_{1,p}$ [GV] & $12.0$ & $12.2$ \n & $11.3$ & $11.3$ \\\\\n\\tablenotemark{b}$R_{2,p}$ [GV] & $202$ & $266$ \n & $213$ & $371$ \\\\\n$\\Delta_{\\rm He}$ & $0.033$ & $0.035$ \n & $0.039$ & $0.032$ \\smallskip\\\\\n\\tablenotemark{b} $\\gamma_{0,e}$ & $1.62$ & $1.49$ \n & $1.46$ & $1.46$ \\\\\n\\tablenotemark{b} $\\gamma_{1,e}$ & $2.843$ & $2.766$ \n & $2.787$ & $2.762$ \\\\\n\\tablenotemark{b} $\\gamma_{2,e}$ & $2.494$ & $2.470$ \n & $2.506$ & $2.480$ \\\\\n\\tablenotemark{b} $R_{1,e}$ [GV] & $6.72$ & $5.14$ \n & $5.03$ & $5.13$ \\\\\n\\tablenotemark{b} $R_{2,e}$ [GV] & $52$ & $69$ \n & $69$ & $69$ \\smallskip\\\\\n\\tablenotemark{c}$J_p$ & $4.096$ & $4.113$ \n & $4.102$ & $4.099$ \\\\\n\\tablenotemark{c}$J_e$ & $2.386$ & $2.362$ \n & $2.288$ & $2.345$ \\smallskip\\\\\n\\tablenotemark{d}$q_{0,^{4}\\rm He} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 92495$ & $ 91918$ \n & $ 92094$ & $ 93452$ \\\\\n\\tablenotemark{d}$q_{0,^{12}\\rm C} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 2978$ & $ 
2915$ \n & $ 2912$ & $ 2986$ \\\\\n\\tablenotemark{d}$q_{0,^{16}\\rm O} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 3951$ & $ 3852$ \n & $ 3842$ & $ 3956$ \\\\\n\\tablenotemark{d}$q_{0,^{20}\\rm Ne} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 358$ & $ 327$ \n & $ 322$ & $ 359$ \\\\\n\\tablenotemark{d}$q_{0,^{24}\\rm Mg} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 690$ & $ 704$ \n & $ 681$ & $ 744$ \\\\\n\\tablenotemark{d}$q_{0,^{28}\\rm Si} \\hfill [10^{-6}]\\qquad\\qquad$ & $ 801$ & $ 786$ \n & $ 762$ & $ 833$ \\smallskip\\\\\n\\tablenotemark{e}$\\Phi_{\\rm AMS,I}$ [MV] & $ 729$ & $ 741$ \n & $ 709$ & $ 729$ \\\\\n\\tablenotemark{e}$\\Phi_{\\rm AMS,II}$ [MV] & $ 709$ & $ 729$ \n & $ 696$ & $ 729$ \\\\\n\\tablenotemark{e}$\\Phi_{\\rm ACE\/CRIS}$ [MV] & $ 359$ & $ 370$ \n & $ 345$ & $ 354$ \n\\enddata\n\\tablenotetext{a}{$D(R) \\propto \\beta R^{\\delta}$, $D_0$ is the normalization at $R_0=4$~GV. \n}\n\\tablenotetext{b}{The injection spectrum is parameterized as $q(R) \\propto R^{-\\gamma_0}$ for $R \\le R_1$, $q(R) \\propto R^{-\\gamma_1}$ for $R_1 < R \\le R_2$, and $q(r) \\propto R^{-\\gamma_2}$ for $R > R_2$. The spectral shape of the injection spectrum is\nthe same for all species except CR $p$ and He. $R_1$, and $R_2$ are the same for $p$\nand He and $\\gamma_{i,\\rm He} = \\gamma_{i,p}-\\Delta_{\\rm He}$}\n\\tablenotetext{c}{The CR $p$ and e$^-$ fluxes are normalized at the Solar\nlocation at a kinetic energy of 100~GeV. $J_p$ is in units of $10^{-9}$\ncm$^{-2}$ s$^{-1}$ sr$^{-1}$ MeV$^{-1}$ and $J_e$ is in units of $10^{-11}$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$ MeV$^{-1}$.}\n\\tablenotetext{d}{The injection spectra for isotopes are normalized relative to the proton injection spectrum at 100~GeV\/nuc. 
\nThe normalization constants for isotopes not listed here are the same as given in \\citet{JohannessonEtAl:2016}.}\n\\tablenotetext{e}{The force-field approximation is used for calculations of the solar\nmodulation and is determined independently for each model and each\nobserving period. $\\Phi_{\\rm AMS,I}$ and $\\Phi_{\\rm AMS,II}$ correspond\nto the 2011-2016 and 2011-2013 observing periods for the AMS-02\ninstrument, respectively.}\n\\end{deluxetable}\n\nTable~\\ref{tab:CRparameters} lists the best-fit parameters for the four models\nconsidered. The parameters are similar for all models and the modified\ndiffusion and different source distributions only slightly affect the best-fit\nvalues.\nMost interestingly, despite the fact that the value of the effective diffusion coefficient exhibits strong spatial variations in the Galactic plane including at the Solar system location, where it is about a factor of 2 smaller compared to that in the model with homogeneous diffusion (Fig.~\\ref{fig:DiffusionPlane}), the value of $D_0$ is not significantly different from that obtained for the homogeneous model.\nThis is connected with the relatively small volume affected by the modified diffusion regions because they are associated with the CR sources, which have a relatively narrow distribution about the Galactic plane with exponential $z$ scale-height 200~pc.\nThe modified diffusion slightly affects the low-energy part of the spectrum, resulting in smaller\nmodulation potentials and Alfv{\\'e}n speeds.\nOverall the addition of the SDZs does not significantly alter the global diffusive properties of the ISM.\n\n\\begin{figure*}[tb]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{fig9a}\n \\includegraphics[width=0.49\\textwidth]{fig9b}\\\\\n \\includegraphics[width=0.49\\textwidth]{fig9c}\n \\includegraphics[width=0.49\\textwidth]{fig9d}\\\\\n \\includegraphics[width=0.49\\textwidth]{fig9e}\n \\includegraphics[width=0.49\\textwidth]{fig9f}\\\\\n \\caption{A 
comparison between the best-fit model predictions and observations of CR species\nnear the Solar system. The bottom panel of each subfigure shows the relative residuals, i.e., the data subtracted from the model in units of the data flux. References to the data are provided in Table~\ref{tab:CRdata}. For uniformity of the presentation, AMS-02 data have been converted from rigidity to kinetic energy per nucleon units using the isotopic composition of each element as measured by ACE\/CRIS, but the likelihood in the fitting procedure is calculated using rigidity. Different curves represent the four different models considered. SA0 and SA50 cases with homogeneous diffusion are shown as solid and dotted lines, respectively, while SA0 and SA50 cases with the modified diffusion are shown as dashed and dash-dotted lines, respectively. Note that there is significant overlap between the lines for the different models in the top panels of the subfigures.\n }\n \label{fig:CRelements}\n\end{figure*}\n\nA comparison between the calculated spectra of CR species and data is shown in Figure~\ref{fig:CRelements}. The\nmodels agree reasonably well with the data and deviate less than 10\% from the\nmeasurements for most elements. The largest systematic deviations\nare at around 10~GeV for B and C, where the relative residuals are as large as\n$\sim$20\%. The residuals for Be and O are similar (not shown). These\nare likely caused by the lack of freedom in the injection spectra for the\nprimaries, because the deviations are hardly visible in the secondary-to-primary \nratios, of which B\/C is shown in the Figure for illustration. If the deviations were caused by propagation\neffects, they would be stronger in the secondary component and be\ncorrespondingly more\nprominent in the secondary-to-primary ratios. 
Deviations from the H\nand He data are smaller and the feature at 10~GeV is not visible.\nFigure~\\ref{fig:CRelements} also shows that the models agree with each other to\nwithin 5\\% for all elements, and this further illustrates that the addition of the SDZ pockets does not strongly affect predictions for the local spectra of CR species.\n\n\n\\section{Discussion}\n\nObservations of $\\gamma$-ray{} emission from two PWNe made by HAWC \\citep{AbeysekaraEtAl:2017} provide a direct insight into the diffusive properties of the ISM on small scales. Combined with direct measurements of the spectra of CR species in the Solar neighborhood, the HAWC observations point to quite different diffusion coefficients in space surrounding the Geminga and PSR~B0656+14 PWNe compared to the ISM at large, with that associated with the former (SDZs) about two orders of magnitude smaller than the latter. The size of the SDZs is constrained by the upper limits of $\\gamma$-ray{} emission at lower energies obtained by the MAGIC telescope to be $\\lesssim 100$~pc \\citep{AhnenEtAl:2016, TangPiran:2018}. Observations by the {\\em Fermi}-LAT{} may be used to further constrain the properties of the SDZs, although it may be difficult to break a degeneracy between the size of the diffusion zone and the shape of the injection spectra of CR electrons and positrons. This is particularly true for PWNe with a large proper motion, like the case of Geminga, which significantly affects the shape of the IC emission in the energy range of the {\\em Fermi}-LAT.\n\nOne of the main claims of \\citet{AbeysekaraEtAl:2017} is that positrons produced by Geminga provide a negligible fraction of the positrons observed by the AMS-02 instrument \\citep{AguilarEtAl:2014}. This statement is correct only if the diffusion coefficient is small in the entire region between Geminga and the Solar system. 
Using a two-zone model consistent with both HAWC and MAGIC observations of Geminga, the predicted positron flux can vary over a wide range, from a small fraction to almost the entire observed positron flux, depending on the assumed properties of the SDZ and the injection spectrum. Even if the $\gamma$-ray{} emission can be constrained with the {\em Fermi}-LAT{} observations, changing the SDZ properties can result in a variation of the predicted flux by a factor of a few. This is illustrated by scenarios A and C shown in Figures~\ref{fig:Maps10GeV} and~\ref{fig:PositronsScenarios}: while the predicted $\gamma$-ray{} emission differs only marginally, the predicted positron flux differs by about a factor of 5.\n\nIf the results for the two PWNe reported in \citet{AbeysekaraEtAl:2017} are not special cases, then it is likely that there are similar pockets of slow diffusion around many CR sources elsewhere in the Milky Way. In the likely case that the distribution of the SDZs follows the distribution of CR sources, CRs spend more time in the inner Milky Way and generally in the plane than in the outer Milky Way or its halo.\nDespite this, the predicted fluxes of secondary CR species near the Solar system are not strongly affected (see Figure~\ref{fig:CRelements}), and the propagation model parameters obtained with SDZs included are very close to those determined for models using homogeneous diffusion (see Table~\ref{tab:CRparameters}). This is a non-trivial result because the production rate of secondary CR species may also increase in the same regions of slow diffusion provided there is enough interstellar gas there. Compared to a model with homogeneous diffusion, the density of CRs should increase towards the inner Milky Way where the distribution of CR sources peaks. Correspondingly, the interstellar high-energy $\gamma$-ray{} emission (IEM) should also be brighter in the direction of the inner Milky Way. 
At the same time, this differs from the results of the analysis of the {\em Fermi}-LAT{} data, which indicate that the gradient is even smaller than predicted by models with two-dimensional axisymmetric geometry and homogeneous diffusion \citep[e.g.][]{AbdoEtAl:2010,AckermannEtAl:2011}. An updated analysis that uses three-dimensional spatial models and includes localized propagation effects may lead to a new interpretation of the distribution of the diffuse emission and of the {\em Fermi}-LAT{} data.\n\nIn this study the GALPROP framework has been used, which is fully capable of calculating the spectrum and distribution of the expected interstellar diffuse $\gamma$-ray{} emission, but because of the assumptions incorporated into the modified diffusion model, predictions of the diffuse emission would be premature. The sources of CRs are transient in nature and spatially localized, and so are the potential SDZs associated with them. Depending on the exact nature of the SDZs, CR particles may be confined within these regions for a significant fraction of their residence time in the Milky Way, contrary to the assumption made in Eqs.~(\ref{eq:avDiffusion}) and (\ref{eq:xzx0approx}). This may lead to an incorrect brightness distribution of the predicted $\gamma$-ray{} emission, which is sensitive to the details of the CR distribution throughout the Milky Way. Properly accounting for the transient nature of CR sources in the entire Milky Way is beyond the scope of this work, but will be addressed in a forthcoming paper.\n\nOne plausible explanation for the generation of SDZs near CR sources is the self-excitation of Alfv{\'e}n waves by the CRs themselves as they stream out into the ISM. Such streaming instabilities have been discussed since the early 1970s \citep{Skilling:1971} and are reviewed in \citet{AmatoBlasi:2018}. 
The size and magnitude of the effect of streaming depend on the gradient of the CR distribution and the properties of the ISM, in particular the number density of neutrals and the Alfv{\'e}n speed. Numerical studies of CR diffusion near SNRs reveal that lower number densities lead to smaller sizes for the SDZ, but also to a smaller change in the diffusion coefficient \citep{NavaEtAl:2016,NavaEtAl:2019}. It is also expected that the injection spectrum is time dependent and may lead to an energy dependence of the diffusion coefficient in SDZs that is quite different from that in the ISM. Such models for the SDZ around Geminga would be in agreement with the HAWC observation, which only constrains the diffusion at the highest energies, but produce a wide range of predictions for the expected flux of CR positrons near the Solar system as well as for the expected spectrum of IC emission at low energies. In turn, this would result in an increased CR flux in the inner Galaxy that is dominated by the high-energy particles, leading to a hardening of the spectrum that has indeed been observed by the {\em Fermi}-LAT{} \citep{SeligEtAl:2015,AjelloEtAl:2016}. Exploration of these effects is deferred to future work.\n\nTo summarize, observations of the SDZs made by HAWC provide an interesting opportunity to gain insight into the fairly complex details of the CR propagation phenomenon. Extension of these observations to the whole Milky Way \emph{in specialibus generalia quaerimus} is, however, a non-trivial task,\nand further understanding of the origin and properties of the SDZs is necessary to get a correct picture. \n\n\acknowledgements\nT.A.P. and I.V.M. 
acknowledge partial support from NASA grant NNX17AB48G.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbobe b/data_all_eng_slimpj/shuffled/split2/finalzzbobe new file mode 100644 index 0000000000000000000000000000000000000000..0651fb6243dd2421d324211efe43ef4fe4f9bfe7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbobe @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nSolitary waves are an important phenomenon in nonlinear physics and applied mathematics. Solitary waves have been studied in a diverse array of physical models including water waves~\cite{Russell,KdV,Ben:67}, quantum electronic devices (Josephson junctions)~\cite{BEMS:71}, and cosmology~\cite{AOM:91,BV:01}. One of the most important applications is to nonlinear optical communications, where solitary waves have been proposed as information bits in optical fiber transmission systems~\cite{HT} and were produced experimentally about 25 years ago~\cite{MSG:80}. Other solitary wave phenomena in nonlinear optics include gap solitons in Bragg gratings~\cite{AW,CJ} and dispersion managed solitons~\cite{dmsx,dmnls}, which hold promise for eliminating the timing jitter associated with soliton transmission systems.\n\nA single solitary wave propagating through a uniform medium appears particle-like in its coherence and steady propagation. Of great interest are the interaction of multiple solitary waves and the behavior of solitary waves propagating through non-uniform media. Solitary waves of completely integrable equations are known as solitons, and their interactions can be described completely, using multiple-soliton formulas derived via the inverse scattering transform~\cite{AS:81}. 
The infinite set of conservation laws in integrable systems severely constrains the dynamics: collisions are elastic, and the solitons re-emerge from a collision propagating with their initial amplitudes and speeds intact, although their positions will have undergone a finite jump. Solitary wave collisions in non-integrable wave equations can usually not be described in closed form and show a much richer variety of behaviors: the waves may attract or repel each other and, upon collision, the solitary waves may lose their coherence and break apart, merge into a single localized structure, or even oscillate about one another~\\cite{KM:89,TY00,TY:01,TY:01a,CPS:86,CP:86,CSW:83,PC:83}. \n\nIn a soliton-based communications system, the bits are represented by solitons. In the simplest scenario, the presence of a soliton in a given timing window codes a one, and its absence codes a zero. \nCollisions between solitons, coupled with random noise in fiber characteristics, can lead to large perturbations in the solitons' polarizations and to timing jitter~\\cite{MGH:95}. A bit that arrives at the wrong time may be interpreted incorrectly by a receiver, as would a soliton that splits in half or two solitons that merge. Ueda and Kath~\\cite{UK:90} show that such behaviors are possible and cite several additional numerical studies of soliton collisions not included here. We present an approach to the modeling and analysis of these phenomena that, while highly idealized, leads to new insights into these collisions.\n\nInteracting pairs of solitary waves from several distinct (non-integrable) physical models have shown an interesting behavior in common. At high speeds, the solitary waves move right past each other, hardly interacting, while at speeds below some critical velocity, the solitary waves interact strongly and may merge into a single localized state.
Interspersed among the initial velocities that lead to this capture are ``resonance windows'', for which the two waves approach each other, interact with each other for a finite time, and then move apart again; see the second and third graphs in figure~\\ref{fig:tanyang}. This has been explored by Tan and Yang in a system of coupled nonlinear Schr\\\"odinger (CNLS) equations that model nonlinear propagation of light in birefringent optical fibers~\\cite{TY00,TY:01,TY:01a}, and by Campbell and collaborators in kink-antikink interactions in the $\\phi^4$ equations and several other nonlinear Klein-Gordon models~\\cite{CPS:86,CP:86,CSW:83,PC:83}. These windows form a complicated fractal structure that has been described qualitatively and even quantitatively, but for which the underlying mechanism has been poorly understood.\n\nThe same phenomenon was also observed by Fei, Kivshar, and V\\'azquez in the interaction of traveling kink solutions of the sine-Gordon and $\\phi^4$ equations with weak localized defects~\\cite{FKV:92a,FKV:92,KFV:91}. Instead of two solitary waves merging, in this case the soliton could be captured, or pinned, at the location of the defect. Almost all of the described models have been studied using the so-called variational approximation, in which the complex dynamics of the full PDE are modeled by a small, finite-dimensional system of ordinary differential equations.\n\nThe sine-Gordon equation with defect and the birefringent fiber-optic model discussed above feature a small parameter measuring the ``non-integrability'' of the system. In a recent publication, Goodman and Haberman~\\cite{GH:04} exploited this small parameter to construct approximate solutions to the system of ODEs for the sine-Gordon model derived in~\\cite{FKV:92}.
We calculated the critical velocity for defect-induced soliton capture via an energy calculation involving separatrix crossing, and the location of the resonance windows using a quantization condition that occurs in the asymptotic expansion. In the current paper, we apply the same method to derive similar quantitative features in Ueda and Kath's ODE model of solitary wave collisions in coupled nonlinear Schr\\\"odinger equations, and to explain the mechanism underlying the fractal structure of resonance windows.\n\nIn section~\\ref{sec:theproblem}, we\nintroduce the physical model---a coupled system of nonlinear Schr\\\"odinger\nequations---and describe previous results in which the ``two-pass resonance''\nphenomenon has been observed. In section~\\ref{sec:themodel}, we introduce Ueda\nand Kath's finite-dimensional model system that captures the observed\ndynamics, and introduce a simplified model which partially linearizes the\nsystem and renders it amenable to our analysis. We show numerically that this\nsimplification does not qualitatively alter the dynamics. In section~\\ref{sec:DeltaE},\nwe set up the calculation as a singular perturbation problem, and describe the\nunperturbed dynamics. We determine the critical velocity by calculating the\nenergy that is lost to vibrations as the solitons pass each other, employing a\nMelnikov integral. We generalize this calculation slightly for subsequent\ninteractions. In section~\\ref{sec:matching}, we construct approximate\nsolutions using matched asymptotic approximations, incorporating the previously calculated energy\nchanges. Section~\\ref{sec:nonlinearity} contains a\ndiscussion of the differences between the original model and its\nsimplification and presents a weakly nonlinear theory to account for them. We\nconclude in section~\\ref{sec:conclusion} with a physical summary and a discussion of\nthe applicability of these results to other systems displaying similar\nbehaviors.
\n \n\\section{Physical problem and prior results}\n\\label{sec:theproblem}\nFollowing the previously cited works~\\cite{UK:90, TY:01}, we consider the model of polarized light propagation in an optical fiber, given by the system of coupled nonlinear Schr\\\"odinger equations \n\\begin{equation} \n\\begin{split} \n\\label{eq:CNLS}\n i \\partial_t A + \\partial_z^2 A + \\left(\\abs{A}^2 + \\beta \\abs{B}^2 \\right) A &=0 \\\\\n i \\partial_t B + \\partial_z^2 B + \\left(\\abs{B}^2 + \\beta \\abs{A}^2 \\right) B &=0.\n\\end{split}\n\\end{equation}\nThis system replaces the more familiar scalar Schr\\\"odinger equation when polarization is taken into account~\\cite{Agrawal}. The equations may be derived using the slowly varying envelope approximation\nto Maxwell's equations in an optical fiber waveguide. The variables $A$ and $B$ describe the envelopes of wave\npackets in the two polarization directions and $\\beta$ is the nonlinear cross-phase modulation (XPM) coefficient that arises due to cubic ($\\chi^{(3)}$) terms in the dielectric response of the glass. Here we use $z$ as a space-like variable and $t$ as a time-like variable. Of course, in the optics interpretation,\nthe labels $z$ and $t$ are switched, as the signal is defined as a function of\ntime at $z=0$ and the evolution occurs as the pulse moves down the length of\nthe fiber. For mathematical simplicity, we will use $t$ as the evolution\nvariable. \n\nOur interest is in the interaction of solitary waves in the above system. In the cases $\\beta=0$ and $\\beta=1$, system~\\eqref{eq:CNLS} is completely integrable~\\cite{ZS, Manakov}. In the first case it reduces to a pair of uncoupled NLS equations; in the second it is known as the Manakov system. For other values of $\\beta$, the equations are not integrable. Of special interest is the case $\\beta= \\frac{2}{3}$, which corresponds to linear fiber birefringence.
For very small values of $\\beta$, this system models light propagation in a two-mode optical fiber~\\cite{CDP}. In the case $\\beta=0$, the equations are simply a pair of focusing nonlinear Schr\\\"odinger equations, with well-known soliton solutions, first suggested as carriers of optical signals by Hasegawa and Tappert~\\cite{HT}. Yang~\\cite{Yang97} studied these equations in great detail, enumerating several families of solitary waves and determining their stability. Of these, the only stable solitary waves come from a family of symmetric single-humped solutions. \n\nThe simplest solutions of interest to~\\eqref{eq:CNLS} consist of an exponentially-localized soliton in the first component, $A$, and zero in the second component, $B$, or vice versa. A single soliton propagates at constant speed with a fixed spatial profile. An important problem is the collision of two such solitons, as their interactions may lead to errors in a soliton-based transmission system.\n\nTan and Yang numerically studied the interaction of two solitons initialized in orthogonal channels with identical amplitude, headed toward each other with exactly opposite initial velocities~\\cite{TY00,TY:01,TY:01a}. \nFor small values of $\\beta \\approx 0.05$, their simulations show that at initial velocities above a critical velocity $\\vc$, the solitons pass by each other, losing a little bit of speed, but not otherwise showing a complicated interaction. At initial velocities below $\\vc$, the solitons capture each other and merge into a stationary state near their point of collision.
See figure~\\ref{fig:tanyang}, which shows three graphs from~\\cite{TY:01}: the input velocity of the solitons is plotted on the $x$-axis, the solutions are followed until the solitons separate, and their outgoing velocity is plotted on the $y$-axis, with the value zero assigned if they merge.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{fig1a}\n\\includegraphics[width=3in]{fig1b}\n\\includegraphics[width=3in]{fig1c}\n\\caption{The exit velocity as a function of the input velocity for $\\beta=0.05$, $\\beta=0.2$, and $\\beta=0.6$, from Tan and Yang~\\cite{TY:01}, original authors' annotations removed.}\n\\label{fig:tanyang}\n\\end{center}\n\\end{figure}\n\nFor somewhat larger values of $\\beta \\approx 0.2$, they find that, in addition to the above behavior, the capture region is interrupted by a sequence of ``resonant reflection windows.'' Solitons with initial velocities in these resonance windows are reflected instead of being captured. The numerical simulations show that the solitons pass each other once, undergo a finite number of width oscillations, then pass each other a second time. Thus they call this the ``two-pass'' resonance. \n\nFor larger values of $\\beta \\approx 0.6$, they find not only reflection windows, but an intricate fractal-like structure of both reflection and transmission windows. Certain portions of the structure, when properly scaled, look like copies, in some cases even reflected copies, of other portions of the structure, and such features are seen at many different scales. \n\nThe two-bounce resonance in kink-antikink interactions was explained qualitatively in the first papers of the Campbell group~\\cite{CSW:83,PC:83}. As the kinks approach each other, they begin to interact, and, at time $t_1$, energy is transferred into a secondary mode of vibration, with some characteristic frequency $\\omega$.
If the initial velocity is below a critical value, the kinks no longer have enough energy to escape each other's orbit, and turn around to interact a second time at $t_2$. They show numerically that a resonant reflection occurs if $t_2 - t_1 \\approx 2\\pi n\/\\omega +\\delta$. The parameter $\\delta$ is found by a least-squares fit with numerical data. This relation is used to estimate the resonant initial velocities. This reasoning has subsequently been adapted in studies of sine-Gordon kink-defect interactions~\\cite{FKV:92,KFV:91} and of vector soliton collisions~\\cite{TY00,TY:01,TY:01a}, which are the focus of this paper.\n\n\\section{The model equations}\n\\label{sec:themodel}\nIn order to gain further insight into the resonance phenomenon, Tan and Yang examine a model system\nderived by Ueda and Kath~\\cite{UK:90} using the variational method. In the variational method, the solution is assumed to take a certain functional form $A(\\vec p(t))$, $B(\\vec p(t))$, dependent on parameters $\\vec p(t)$ that are allowed to vary as a function of time.
This ansatz is then substituted into the Lagrangian functional for the PDE, which is integrated in space to yield a finite-dimensional effective Lagrangian,\n$$\nL_{\\rm eff} = \\int_{-\\infty}^{\\infty} \\cL(A,A^*,B,B^*) dz,\n$$\nwhose Euler-Lagrange equations describe the evolution of the time-dependent parameters. Equation~\\eqref{eq:CNLS} has the Lagrangian density\n\\begin{equation}\n\\label{eq:CNLS_Laga}\n\\cL = i(A A_t^* - A_t A^*) + i( B B_t^* - B_t B^*) + (\\abs{A_z}^2 -\\abs{A}^4)+( \\abs{B_z}^2-\\abs{B}^4) -2 \\beta\\abs{A}^2\\abs{B}^2.\n\\end{equation}\nMany examples using this method for PDEs arising as Euler-Lagrange equations are given in a recent review by Malomed~\\cite{M:02}.\n\nFollowing~\\cite{UK:90}, we take an ansatz corresponding to two solitons of equal magnitude, separated by a distance $2X$ and heading toward each other with equal speed, \n\\begin{equation}\n\\begin{split}\n\\label{eq:ansatz}\nA & = \\eta \\sech{\\frac{z-X}{w}} \\exp {i \\left(v (z-X) + \\frac{b}{2w}(z-X)^2 +\\sigma\\right)}, \\\\\nB & = \\eta \\sech{\\frac{z+X}{w}} \\exp {i \\left(-v (z+X) + \\frac{b}{2w}(z+X)^2 +\\sigma\\right)}\n\\end{split}\n\\end{equation}\nwhere $\\eta$, $X$, $w$, $v$, $b$, and $\\sigma$ are time-dependent parameters for the amplitude, position, width, velocity, chirp, and phase, whose evolution remains to be determined.
The variational procedure yields a conserved quantity $K = \\eta^2 w$,\nrelated to the conservation of the $L^2$ norm in CNLS, as well as the relations $\\diff{X}{t}=v$ and $\\diff{w}{t}=b$.\nThe evolution is described by the Euler-Lagrange equations\n\\begin{subequations}\n\\begin{align}\n\\diffn{X}{t}{2}&= \\frac{16K\\beta}{w^2}\\frac{d}{d\\alpha} F(\\alpha) \\label{eq:TY1}\\\\\n\\diffn{w}{t}{2}&=\\frac{16}{\\pi^2 w^2}\\left(\\frac{1}{w}-K \n- 3 \\beta K \\frac{d}{d\\alpha}\\left(\\alpha F(\\alpha)\\right)\\right) \\label{eq:TY2}\n\\end{align}\n\\label{eq:TanYang}\n\\end{subequations}\nwhere $\\alpha= X\/w$ and the potential and coupling terms are given by \n\\begin{equation}\n\\label{eq:F}\nF(x) = \\frac{x \\cosh{x} - \\sinh{x}}{\\sinh^3{x}}.\n\\end{equation}\nNote that $-F$, rather than $F$, plays the role of a potential, not a force; we keep this notation for continuity with previous studies.\n\nNumerical simulations show that for small $\\beta$, a solution to~\\eqref{eq:CNLS} with an initial condition of the form given in ansatz~\\eqref{eq:ansatz} will remain close to that form, i.e., the solution will continue to consist of two nearly orthogonally polarized solitons, at least until they merge into a single bound state. Using the symmetries of equation~\\eqref{eq:TanYang}, we may set $K=1$ without loss of generality. Equivalently, the PDE symmetry may be used to set $K=1$ in the ansatz used by the variational method.\n\nThese equations display the two-bounce resonance phenomenon, as shown by Tan and Yang.\nConsider the initial value problem, with ``initial'' conditions describing the behavior as $t \\to -\\infty$:\n$$\nX \\to -\\infty;\\; \n\\diff{X}{t} \\to \\vin>0;\\;\nw \\to 1; \\; \n\\diff{w}{t} \\to 0.\n$$\nThis does not strictly determine a unique solution, since the solution is invariant to time translation. We plot $\\vout$ as a function of $\\vin$ with $\\beta=0.05$ in figure~\\ref{fig:R003}.
These and all other ODE simulations were performed using routines from ODEPACK~\\cite{ODEPACK}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in,angle=-90]{fig2}\n\\caption{The input vs.\\ output velocity of a pair of orthogonally polarized solitons with $\\beta=0.05$}\n\\label{fig:R003}\n\\end{center}\n\\end{figure}\nCompare this figure to the three plots of figure~\\ref{fig:tanyang}. There are key similarities and differences between this graph and the exit velocity graphs of the full PDE simulations. The critical velocity in this figure is about $\\vc=0.19$, close to the value $\\vc=0.1715$ found in~\\cite{TY:01}.\nA noteworthy difference is the complex behavior of solutions with initial velocity below $\\vc$---no such behavior, not even the two-pass windows, was seen in the very careful simulations of Tan and Yang. This should not be surprising, as system~\\eqref{eq:TanYang} is Hamiltonian, and the set of initial conditions leading to unbounded trajectories in backwards time and bounded trajectories in forward time has measure zero, by reasoning similar to Poincar\\'e recurrence, as shown in Proposition~1 of~\\cite{GHW:02}. Localized solutions to~\\eqref{eq:CNLS} may lose energy to radiation modes, a dissipation mechanism not present in the ODE model. 
As a further result of the dissipation, the output speeds of the reflected solutions are much smaller than the input speeds in the PDE solutions, whereas at the very center of the ODE windows, the output speed exactly matches the input speed.\nA more interesting difference can be seen in the presence of the wide reflection windows, which were not found in the PDE simulations with this value of $\\beta$, summarized in figure~\\ref{fig:tanyang}.\n\nIn figure~\\ref{fig:R012}, the exit velocity graph of~\\eqref{eq:TanYang} shows that even at $\\beta=0.2$, the ODE dynamics display a complex fractal-like structure in addition to the reflection windows, which are not seen in the PDE dynamics for such small values of $\\beta$. The numerical value of the critical velocity is $\\vc=0.86$, close to the value $\\vc=0.936$ found in~\\cite{TY:01}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{fig3a}\n\\includegraphics[width=3in]{fig3b}\n\\caption{Left: The exit velocity graph for equation~\\eqref{eq:TanYang} for $\\beta=0.2$, showing reflection windows and a variety of more complex fractal-like structures. Right: The same figure with all but the main resonant reflections removed.}\n\\label{fig:R012}\n\\end{center}\n\\end{figure}\n\nThe numerical solutions of~\\eqref{eq:TanYang} qualitatively explain the resonance windows. In figure~\\ref{fig:BT009}, we plot the $w(t)$ components of the solutions with initial velocity $v$ at the center of the first two resonance windows (actually the points tangent to the line $\\vout=-\\vin$). In the leftmost window, the oscillator $w(t)$ is excited, oscillates about 5 times and then is de-excited. In the next window, $w(t)$ oscillates 6 times. In each of the successive windows, $w(t)$ oscillates one more time before it is extinguished. We will refer to the first window as the 2-5 window and the second window as the 2-6 resonance window.
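The two-pass phenomenology in the ODE model is straightforward to reproduce numerically. The sketch below (our own illustration, not the ODEPACK-based code used in this paper) integrates system~\eqref{eq:TanYang} with $K=1$ using SciPy's `solve_ivp`; the starting position $X=-8$, the integration time, and the large-argument cutoff in $F$ are illustrative choices.

```python
# Sketch: scattering in the ODE model (eq:TanYang) with K = 1, beta = 0.05.
# Not the original ODEPACK code; starting point, cutoffs, and times are
# illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def F(x):
    """F(x) = (x cosh x - sinh x)/sinh^3 x, with its Taylor series near
    x = 0 and F set to 0 for large |x| to avoid overflow."""
    if abs(x) > 25.0:
        return 0.0
    if abs(x) < 1e-4:
        return 1.0 / 3.0 - 2.0 * x**2 / 15.0
    return (x * np.cosh(x) - np.sinh(x)) / np.sinh(x)**3

def Fp(x):
    """dF/dx, with the same small- and large-argument guards."""
    if abs(x) > 25.0:
        return 0.0
    if abs(x) < 1e-4:
        return -4.0 * x / 15.0
    s, c = np.sinh(x), np.cosh(x)
    return x / s**2 - 3.0 * c * (x * c - s) / s**4

def rhs(t, y, beta):
    X, v, w, b = y  # position, velocity, width, chirp
    a = X / w
    dv = 16.0 * beta / w**2 * Fp(a)
    # d/d(alpha) of alpha*F(alpha) is F + alpha*F'
    db = 16.0 / (np.pi**2 * w**2) * (1.0 / w - 1.0
                                     - 3.0 * beta * (F(a) + a * Fp(a)))
    return [v, dv, b, db]

def exit_state(v_in, beta=0.05, t_end=300.0):
    sol = solve_ivp(rhs, (0.0, t_end), [-8.0, v_in, 1.0, 0.0],
                    args=(beta,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]  # final position and velocity

X_fast, v_fast = exit_state(0.4)   # above v_c ~ 0.19: solitons pass
X_slow, v_slow = exit_state(0.05)  # well below v_c: capture (or slow escape)
print(X_fast, v_fast, X_slow)
```

The fast case transmits with an exit speed slightly below the input speed, the deficit having been fed into the width oscillation; the slow case remains near the collision point over this time span (or escapes only at a speed bounded by the input speed, if the chosen velocity happens to sit in one of the fractal windows).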
Recall that no such windows have been found in the PDE dynamics for this value of $\\beta$, but such windows have been found in the ODE dynamics for all values of $\\beta$. Tan and Yang demonstrated a width oscillation in the PDE solutions in analogy with that shown here. The minimum value of $n$ in the 2-$n$ resonance decreases with increasing $\\beta$. There does exist a 2-1 resonance with velocity $v=0.649$ in the ODE dynamics shown in figure~\\ref{fig:R012}, while the first resonance window found in the PDE simulations is the 2-$2$ resonance at about $v=0.9$ in figure~\\ref{fig:tanyang}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4in]{fig4.eps}\n\\caption{Plots of the $w(t)$ component of~\\eqref{eq:TanYang} with initial velocity $v= 0.09988$ (top) and $v=0.13464$ (bottom) and $\\beta=0.05$, showing the 2-5 and 2-6 resonances.}\n\\label{fig:BT009}\n\\end{center}\n\\end{figure}\n\n\\subsection{A further simplified model}\n\nThe model~\\eqref{eq:TanYang} bears a striking resemblance to the system derived in~\\cite{FKV:92} to study the two-pass resonance in the sine-Gordon equation with defect, and analyzed in~\\cite{GH:04}. In that case, however, the situation is much simpler: the term equivalent to $w$ in~\\eqref{eq:TanYang} occurs only linearly, and the potential and coupling terms, equivalent to $F(X\/w)$ here, are functions of $X$ alone. This allows us to solve the analog of~\\eqref{eq:TY2} for this term by variation of parameters and then insert the result into the equivalent of equation~\\eqref{eq:TY1}, a critical step in our analysis. In our numerically-computed solutions displaying the two-bounce resonance for small values of $\\beta$, the width $w$ undergoes only a small oscillation about its initial width $w=1$. Therefore, we may partially linearize system~\\eqref{eq:TanYang}, which allows us to proceed in the same manner as we have for the sine-Gordon system.
We find reasonable agreement, with a few notable differences, between the two ODE systems. We will discuss the linearized theory first and then discuss corrections due to the nonlinearity.\n\nLetting $w=1+W$, where $W$ is considered small, we expand all the terms in $W$ and keep only leading-order terms, arriving at the reduced system:\n\\begin{subequations}\n\\begin{align}\n\\diffn{X}{t}{2}&= 16\\beta\\left(F'(X)+G'(X)W\\right); \\label{eq:simplified_a}\\\\\n\\diffn{W}{t}{2}+ \\frac{16}{\\pi^2}W&=\\frac{48\\beta}{\\pi^2}G(X), \\label{eq:simplified_b}\n\\end{align}\n\\label{eq:simplified}\n\\end{subequations}\nwhere\n$$\nG(X)=-\\left(X F(X) \\right)'.\n$$\nFigure~\\ref{fig:vc} shows that, based on numerical simulation, this simplified system gives an accurate estimate of the critical velocity for small values of $\\beta$. Figure~\\ref{fig:SR002} shows the equivalent of figure~\\ref{fig:R003} with the same value of $\\beta=0.05$ for the simplified equations. It shows that the qualitative picture, chaotic scattering interrupted by resonance windows for $v<\\vc$, is the same, while the actual location of those windows varies greatly. In the case $\\beta=0.05$, the simplified equation has a 2-4 window, while the full equation's first resonance is 2-5.
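The linearization above can be checked symbolically. The following SymPy sketch (our own check, not part of the original analysis) verifies for the $X$ equation that expanding the right-hand side of~\eqref{eq:TY1} about $w=1+W$ gives $O(1)$ and $O(W)$ coefficients $16\beta F'(X)$ and $16\beta G'(X)$ with $G=-(XF)'$; we set $\beta=1$, since it enters only as an overall factor.

```python
# Sketch: verify the O(W) expansion of (16/w^2) F'(X/w) about w = 1 + W,
# i.e. that it equals 16*(F'(X) + G'(X) W) + O(W^2) with G = -(X F)'.
# SymPy is used here only as a check of the hand computation.
import sympy as sp

x, W = sp.symbols('x W')
F = (x * sp.cosh(x) - sp.sinh(x)) / sp.sinh(x)**3
Fp = sp.diff(F, x)
G = -sp.diff(x * F, x)

rhs = 16 / (1 + W)**2 * Fp.subs(x, x / (1 + W))  # beta set to 1
c0 = rhs.subs(W, 0)                  # O(1) coefficient
c1 = sp.diff(rhs, W).subs(W, 0)      # O(W) coefficient

# The differences should vanish; check numerically at a few points.
d0 = sp.lambdify(x, c0 - 16 * Fp)
d1 = sp.lambdify(x, c1 - 16 * sp.diff(G, x))
print([abs(d0(v)) + abs(d1(v)) for v in (0.7, 1.3, 2.4)])
```

The analogous expansion of the $w$ equation, keeping the $O(W)$ restoring term and the $O(\beta)$ forcing term, yields~\eqref{eq:simplified_b} in the same way.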
As $\\beta$ was decreased further, the agreement between the two systems improved.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{fig5}\n\\caption{The critical velocity for capture as a function of the coupling $\\beta$ for the fully nonlinear system~\\eqref{eq:TanYang} (solid) and the simplified system~\\eqref{eq:simplified} (dashed).}\n\\label{fig:vc}\n\\end{center}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in,angle=-90]{fig6}\n\\caption{The exit velocity graph for the simplified system~\\eqref{eq:simplified}, showing qualitative agreement with figure~\\ref{fig:R003}.}\n\\label{fig:SR002}\n\\end{center}\n\\end{figure}\nWe rescale time by allowing $t \\to 4 \\sqrt{\\beta} t$, transforming equations~\\eqref{eq:simplified} to\n\\begin{subequations}\n\\begin{align}\n\\ddot X&= F'(X)+G'(X)W; \\label{eq:scaled_a}\\\\\n\\ddot W+\\lambda^2 W&=\\frac{3}{\\pi^2}G(X), \\label{eq:scaled_b}\n\\end{align}\n\\label{eq:scaled}\n\\end{subequations}\nwith fast frequency $\\lambda$ given by\n\\begin{equation}\n\\label{eq:lambda}\n\\lambda=\\frac{1}{\\pi\\sqrt{\\beta}}.\n\\end{equation}\nThe dot notation will be used for derivatives with respect to the scaled time. The conditions in backward time as $t \\to -\\infty$ become:\n\\begin{equation}\nX \\to -\\infty;\\; \n\\dot X \\to \\Vin>0;\\;\nW \\to 0; \\;\n\\dot W \\to 0.
\\label{eq:init}\n\\end{equation}\nWe will use capital $V$ to represent velocities in the scaled time $t$ and lower-case $v$ for velocities in the physical time.\n\n\\section{Determination of energy change and critical velocity}\\label{sec:DeltaE}\n\\subsection{Setup of Melnikov Integral for $\\Delta E$}\nFirst, note that if $W$ is held equal to zero, equation~\\eqref{eq:scaled_a} has the phase space shown in figure~\\ref{fig:phasespace}, showing three distinct types of orbits: closed orbits, corresponding to a pair of solitons bound together as a breather; unbounded orbits, corresponding to two solitons passing each other by; and orbits heteroclinic to degenerate saddle points at $(X,\\dot X)=(\\pm \\infty,0)$---separatrices---that form a boundary between the two regimes. These orbits correspond to level sets on which the energy \n\\begin{equation}\n\\label{eq:energy}\nE= \\half \\dot X^2 - F(X)\n\\end{equation}\nis negative, positive, or zero, respectively. As $W$ is allowed to vary, solutions may cross the separatrices. We will show below, via the variation of parameters formula~\\eqref{eq:varparams}, that $W$ remains $O(\\sqrt{\\beta})$ and, thus, that perturbation methods are applicable.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4in]{fig7}\n\\caption{The $X$ phase plane, showing trapped (dashed), untrapped (dash-dot), and separatrix (thin solid) orbits, corresponding to level sets of~\\eqref{eq:energy}. Superimposed is the $X-\\dot X$ projection of the 2-6 resonant solution to the fully nonlinear equations~\\eqref{eq:TanYang} with $\\beta=0.05$ (thick solid line).}\n\\label{fig:phasespace}\n\\end{center}\n\\end{figure}\n\nWe wish to asymptotically analyze orbits near the separatrix (see figure~\\ref{fig:phasespace}), since two solitons are initially captured when they cross the separatrix and are reflected or transmitted when they cross it a second time and may escape.
We first determine the energy loss as a soliton goes from $X=-\\infty$ to $X=+\\infty$ by computing an energy integral called a Melnikov integral~\\cite{M:63}. A Melnikov integral is a perturbative device for measuring the change of energy in a given system. It is simply the integral of the time rate-of-change of the energy along some trajectory in the unperturbed problem. A zero of the Melnikov integral is commonly a necessary condition for chaos in low-dimensional dynamical systems~\\cite{GH:83}. In our case, we simply wish to calculate a change in energy.\n\nThe calculation has been simplified significantly from that given in~\\cite{GH:04}, in a manner that yields additional insight into the form of the energy loss. In particular, we do not need to keep track of whether certain functions possess even or odd symmetry, and we find in an elementary way that the change of energy is negative. First, we note that the separatrix is given by the level set $E=0$; therefore, along the separatrix, equation~\\eqref{eq:energy} may be solved for $\\dot X$, giving\n\\begin{equation}\n\\diff{\\Xs}{t}= \\sqrt{2 F(\\Xs)}.\n\\label{eq:deltadot}\n\\end{equation}\nGiven the function $F$ in~\\eqref{eq:F}, it is not possible to find the separatrix orbit $\\Xs(t)$ in closed form. The time-dependent energy exactly satisfies the differential equation\n\\begin{equation}\n\\diff{E}{t} = \\left(\\ddot X - F'(X) \\right) \\dot X = \\dot X G'(X) W= \\left( \\diff{}{t} G(X(t))\\right) W,\n\\label{eq:DeltaESetup}\n\\end{equation}\nwhere we have used equation~\\eqref{eq:scaled_a}. We approximate the change in energy for one nearly heteroclinic orbit along the separatrix (from one saddle at infinity to the next saddle approach) by approximating $X(t)$ in~\\eqref{eq:DeltaESetup} with the known separatrix solution $\\Xs(t)$.
We integrate~\\eqref{eq:DeltaESetup} along the length of the orbit to find the total change in energy:\n\\begin{equation*}\n\\begin{split}\n\\Delta E & = \\intinf \\left( \\diff{}{t} G(\\Xs(t))\\right) W\\, dt \\\\\n&= - \\intinf G(\\Xs(t)) \\diff{W(t)}{t}\\, dt,\n\\end{split}\n\\end{equation*}\nwhere we have integrated by parts. Given the initial condition~\\eqref{eq:init}, with $V=0$ for the separatrix case, we may solve equation~\\eqref{eq:scaled_b} for $W(t)$ using variation of parameters:\n\\begin{equation}\nW(t) = \\frac{-3}{\\pi^2\\lambda} \\cos{\\lambda t} \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\sin{\\lambda \\tau} \\, d\\tau\n+\\frac{3}{\\pi^2\\lambda} \\sin{\\lambda t} \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\cos{\\lambda \\tau} \\, d\\tau,\n\\label{eq:varparams}\n\\end{equation}\n(again approximating $X(t)$ by $\\Xs(t)$) and\n$$\n\\diff{W(t)}{t} = \\frac{3}{\\pi^2} \\sin{\\lambda t} \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\sin{\\lambda \\tau} \\, d\\tau\n+\\frac{3}{\\pi^2} \\cos{\\lambda t} \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\cos{\\lambda \\tau} \\, d\\tau.\n$$\nSetting \n$I_{\\rm s}(t) = \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\sin{\\lambda \\tau} \\, d\\tau$ and\n$I_{\\rm c}(t) = \\int_{-\\infty}^{t} G(\\Xs(\\tau)) \\cos{\\lambda \\tau} \\, d\\tau$, we find that \n\\begin{equation}\n\\begin{split}\n\\Delta E &= - \\frac{3}{\\pi^2}\\intinf I_{\\rm s}(t) I_{\\rm s}'(t) dt \n- \\frac{3}{\\pi^2}\\intinf I_{\\rm c}(t) I_{\\rm c}'(t) dt \\\\\n&= -\\frac{3}{2\\pi^2}( I_{\\rm s}^2(\\infty) + I_{\\rm c}^2(\\infty)).\n\\end{split}\n\\end{equation}\nWriting $I_{\\rm s}(\\infty)$ and $I_{\\rm c}(\\infty)$ out in full, this becomes\n\\begin{equation}\n\\begin{split}\n\\Delta E &= - \\frac{3}{2 \\pi^2} \\left(\n\\left(\\intinf G(\\Xs(\\tau)) \\sin{\\lambda \\tau} \\ d\\tau\\right)^2 +\n\\left(\\intinf G(\\Xs(\\tau)) \\cos{\\lambda \\tau} \\ d\\tau\\right)^2 \\right) \\\\\n& = - \\frac{3}{2\\pi^2} \\left \\lvert\n\\intinf G(\\Xs(\\tau)) e^{i \\lambda \\tau} \\ 
d\\tau\\right \\rvert^2.\n\\end{split}\n\\label{eq:DeltaEfinal}\n\\end{equation}\nThus, the problem is reduced to calculating the integral~\\eqref{eq:DeltaEfinal}. In fact, because in this case $G(\\Xs(t))$ is an even function, \n\\begin{equation}\n\\label{eq:DeltaEFinal2}\n\\Delta E= - \\frac{3}{2\\pi^2} I_{\\rm c}(\\infty)^2.\n\\end{equation}\n Note that this shows the change in energy is generically negative when we assume $W \\to 0$ as $t \\to -\\infty$. In fact, it must be negative, as the system conserves an energy that is positive-definite as $\\abs{X} \\to \\infty$, and no energy resides in the width oscillation initially. Using $\\Delta E = -\\frac{\\vc^2}{2}$, we find\n$$\n\\vc= \\frac{\\sqrt{3}}{\\pi} I_{{\\rm c},\\infty}\n$$\nwhere $ I_{{\\rm c},\\infty}= I_{\\rm c}(\\infty)$. The integral in equation~\\eqref{eq:DeltaEFinal2} may be solved numerically by converting it into a differential equation, which may be integrated simultaneously with equation~\\eqref{eq:deltadot}. Alternatively, we derive closed-form approximations to $\\Delta E$ and $\\vc$ in section~\\ref{sec:complexvariables} below using complex analysis.\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=3in]{fig8}\n\\caption{The critical velocity of the ODE system~\\eqref{eq:simplified} computed via direct numerical simulation (solid line), the first-order asymptotic approximation (dots) and the second-order asymptotic approximation (dashed line) from~\\eqref{eq:vseries}, and via numerical evaluation of integral~\\eqref{eq:DeltaEfinal}.} \n\\label{fig:vseries}\n\\end{center}\n\\end{figure}\n\n\\subsection{A Generalization of the calculation}\nNext, we briefly mention two generalizations of the Melnikov calculation above that will be useful later.
First, suppose that, instead of approaching zero as $t \\to -\\infty$, $W$ satisfies \n$$W \\sim \\frac{3}{\\pi^2\\lambda} I_{{\\rm c},\\infty} \\cW \\sin{\\lambda(t-\\phi)}. $$\nThen the change of energy will be given by \n$$\n\\Delta E = \\frac{3 I_{{\\rm c}, \\infty}^2}{\\pi^2}\\left(-\\half + \\cW \\cos{\\lambda\\phi}\\right).\n$$\nAs $X$ traverses the heteroclinic orbit in the reverse direction, the sign of $G'(\\Xs (t))$ in~\\eqref{eq:DeltaESetup} is reversed, which leads to:\n\\begin{equation}\n\\label{eq:DeltaEreturn}\n\\Delta E = \\frac{3 I_{{\\rm c},\\infty}^2}{\\pi^2}\\left(-\\half - \\cW \\cos{\\lambda\\phi}\\right).\n\\end{equation}\nFor a resonance to occur, the change of energy calculated in the first Melnikov integral must cancel with the energy jump on the return trip. Assume the forward heteroclinic orbit has ``symmetry time'' $t_1$ at which $X=0$, with symmetry time $t_2$ on the return trip. Then, by~\\eqref{eq:varparams}, as $t\\to\\infty$ on the forward heteroclinic orbit, \n\\begin{equation}\nW(t) \\sim \\frac{3}{\\pi^2\\lambda} I_{{\\rm c},\\infty} \\sin{\\lambda(t-t_1)}.\n\\label{eq:Delta_E_general}\n\\end{equation}\nFor an exact resonance to occur, the energy change along the two heteroclinics must cancel, leading to the condition\n$$\n\\Delta E_1 + \\Delta E_2 =- \\frac{3 I_{{\\rm c},\\infty}^2}{2\\pi^2} +\n \\frac{3 I_{{\\rm c},\\infty}^2}{\\pi^2}\\left(-\\half - \\cos{\\lambda(t_2-t_1)}\\right)=0,\n$$\nobtained by combining~\\eqref{eq:DeltaEfinal} and~\\eqref{eq:DeltaEreturn} with $\\cW=1$ and $\\phi=t_2-t_1$. Thus $\\cos{\\lambda(t_2-t_1)}=-1$, or \n\\begin{equation}\n\\label{eq:t2mt1}\nt_2-t_1 = \\frac{(2n+1)\\pi}{\\lambda}.\n\\end{equation}\nThis differs from the equivalent resonance condition in~\\cite{GH:04}, in which $t_2-t_1 = 2\\pi n\/\\lambda$.
The difference arises because in that system the equivalent term to $G(X)$ was an odd function, whereas here $G$ is even.\n\nMany analyses of two-bounce and two-pass resonance phenomena have been based on the assumption that $t_2-t_1= \\frac{2\\pi n}{\\lambda} + \\delta$ for some undetermined $\\delta$, a phase shift that accounts for unidentified physical processes that have not been modeled. Equation~\\eqref{eq:t2mt1} shows that in this case $\\delta = \\frac{\\pi}{\\lambda}$. It is worth computing a linear fit of $t_2-t_1$ vs.\\ $n$ for comparison with earlier studies and, we will see, for comparison with the analogous result for the fully nonlinear ODE~\\eqref{eq:TanYang}. At $\\beta=0.05$, we find the linear fit $t_2-t_1= 4.935\\left(n + \\half\\right) - 0.011$ and for $\\beta=0.2$, we find $t_2-t_1\\approx4.931 \\left(n + \\half\\right)+ 0.139$, whereas $\\frac{2\\pi}{\\lambda}\\approx4.9348$. Therefore, we see that to leading order, relation~\\eqref{eq:t2mt1} holds, and that the agreement improves with decreasing $\\beta$.\n\n\\subsection{Evaluation of critical velocity using complex analysis}\n\\label{sec:complexvariables}\n\nSince $\\lambda$ is large and $G(X)$ is analytic, the calculated change of energy~\\eqref{eq:DeltaEfinal} is exponentially small. In a calculation for a similar system, we were able to calculate the analogous integrals explicitly because $\\Xs(t)$ was known in a simple closed form~\\cite{GH:04}. In the present case, we are forced instead to expand the integrand of~\\eqref{eq:DeltaEfinal} about a certain (branch) pole. Given the form of the potential~\\eqref{eq:F}, $F$ has a pole whenever $\\sinh{X}=0$ and the numerator of $F$ is nonzero or has a zero of order less than 3. The nearest pole to $X=0$ occurs at $X=i\\pi$. Along the separatrix\n\\begin{equation}\n\\label{eq:sep}\n \\diff{X}{t} = \\sqrt{2 F(X)},\n \\end{equation}\nso we let $t^*$ be chosen such that $X(t^*)=i\\pi$ given the initial condition $X(0)=0$. 
This gives the formula:\n\\begin{equation}\n\\label{eq:tstar}\niT \\equiv t^* = \\int_{0}^{t^*} dt = \\int_{0}^{i\\pi}\\sqrt{\\frac{1}{2F(X)}} \\, dX =\n\\frac{i}{\\sqrt{2}} \\int_{0}^{\\pi} \\sqrt{\\frac{\\sin^3 y}{\\sin y - y\\cos y} } dy\n \\approx 2.10392\\, i.\n\\end{equation}\nWe expand~\\eqref{eq:sep} about $X=i\\pi$ and $t=t^*$ and find\n\\begin{align*}\n\\left(\\frac{(-1)^{-1\/4}}{\\sqrt{2\\pi}} (X-i\\pi)^{3\/2} +O((X-i\\pi)^{9\/2})\\right)\\,dX& = \\ dt \\\\\n\\frac{(-1)^{-1\/4}\\sqrt{2}}{5\\sqrt{\\pi}} (X-i\\pi)^{5\/2} +O((X-i\\pi)^{11\/2}) & = t-t^* \n\\end{align*}\nwhich may be inverted to form\n$$\nX-i \\pi = (-1)^{1\/10} \\pi^{1\/5}2^{-1\/5} 5^{2\/5} (t-t^*)^{2\/5} + O((t-t^*)^{8\/5}).\n$$\nBased on the expansion\n$$\nF(X)=i \\pi (X-i\\pi)^{-3} + O(1)\n$$\nwe compute the two leading order terms of the integrand of~\\eqref{eq:DeltaEfinal}\n$$\nG(\\Xs(t))= \\frac{(-1)^{3\/5} \\pi^{6\/5}2^{4\/5}3}{5^{8\/5}} (t-t^*)^{-8\/5} +\n \\frac{(-1)^{1\/5} \\pi^{2\/5}2^{8\/5}} {5^{6\/5}} (t-t^*)^{-6\/5} + O( (t-t^*)^{-2\/5} ).\n $$\n Therefore~\\eqref{eq:DeltaEfinal} involves integrals of the type\n $$\nI(\\lambda,T,p)= \\intinf e^{i \\lambda t} (t-i T)^p \\, dt\n $$ \n with $\\lambda>0$, $T>0$, and $p<0$. Here $iT$ is the branch pole, from which a branch line extends vertically to $i\\infty$. 
By a shift of contour and a change of variables to $z=i \\lambda(t-iT)$, this can be replaced by an integral over the Hankel contour $\\gamma$, which starts at $-\\infty$ below the real axis, circles zero once in the positive direction, and returns to $-\\infty$ along (and above) the real axis~\\cite{CKP}.\n $$\nI(\\lambda,T,p)= \\frac{(-i)^{p+1}}{\\lambda^{p+1}} e^{-\\lambda T} \n\\int_\\gamma e^z z^p dz\n$$\nwhich forms part of a familiar representation of Euler's gamma function and yields the exponentially small term\n\\begin{equation}\nI(\\lambda,T,p)= (-1)^{-p\/2} 2 \\sin{\\left((p+1)\\pi\\right)} \\G{(p+1)} e^{-\\lambda T}\\lambda^{-(p+1)}.\n\\label{eq:Intlambda}\n\\end{equation}\n\nUsing~\\eqref{eq:Intlambda} and standard trigonometric and gamma function identities, we evaluate the integral in~\\eqref{eq:DeltaEfinal} \n\\begin{equation}\n\\intinf G(\\Xs(\\tau)) e^{i \\lambda \\tau} \\ d\\tau =\n\\left(\n\\frac{(-1)^{7\/5} 2^{4\/5} \\pi^{6\/5}}{5^{3\/5}} \\Gamma\\left(\\frac{2}{5}\\right)\\sin{\\frac{2\\pi}{5}}\\lambda^{3\/5} +\n\\frac{(-1)^{4\/5} 2^{8\/5} \\pi^{2\/5}}{5^{1\/5}} \\Gamma\\left(\\frac{4}{5}\\right)\\sin{\\frac{4\\pi}{5}}\\lambda^{1\/5} + \nO(\\lambda^{-3\/5}) \\right) e^{-\\lambda T}.\n\\end{equation}\nAs the integrand is real, we choose the branch $(-1)^{1\/5} = -1$ above. Using that $\\Delta E = -\\frac{\\vc^2}{2}$ and the scaling relation given before~\\eqref{eq:scaled}, we arrive at the expansion for the critical velocity in physical variables:\n\\begin{equation}\n\\vc = \\frac{8\\sqrt{3}}{5} e^{-T\/\\pi\\sqrt{\\beta}}\\left(\\theta(2\/5)\\alpha \\beta^{1\/5} - \\theta(4\/5)\\alpha^2 \\beta^{2\/5} + \\ldots \\right),\n\\label{eq:vseries}\n\\end{equation}\nwhere $\\theta(x)= \\sin{\\pi x}\\G{(x)}$ and $\\alpha= \\pi^{-2\/5} 2^{4\/5} 5^{2\/5}$, obtained using equations~\\eqref{eq:lambda} and~\\eqref{eq:energy}, as well as the two integrals above. 
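Both the singularity time $T$ in~\eqref{eq:tstar} and the two-term series~\eqref{eq:vseries} are straightforward to evaluate numerically. The sketch below is illustrative only: it reads the exponent in~\eqref{eq:vseries} as $e^{-T/(\pi\sqrt{\beta})}$, which is our interpretation of the typeset formula, and should be treated as an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Singularity time T from eq. (tstar):
# T = (1/sqrt(2)) * int_0^pi sqrt(sin^3 y / (sin y - y cos y)) dy ~ 2.10392
f = lambda y: np.sqrt(np.sin(y) ** 3 / (np.sin(y) - y * np.cos(y)))
T = quad(f, 0.0, np.pi)[0] / np.sqrt(2.0)

# theta(x) = sin(pi x) Gamma(x) and alpha, as defined after eq. (vseries)
theta = lambda x: np.sin(np.pi * x) * gamma(x)
alpha = np.pi ** (-0.4) * 2 ** 0.8 * 5 ** 0.4

def v_c(beta):
    """Two-term asymptotic critical velocity; exponent read as -T/(pi*sqrt(beta))."""
    return (8.0 * np.sqrt(3.0) / 5.0) * np.exp(-T / (np.pi * np.sqrt(beta))) * (
        theta(0.4) * alpha * beta ** 0.2 - theta(0.8) * alpha ** 2 * beta ** 0.4)

print(T, v_c(0.05))
```

Under this reading, the critical velocity vanishes rapidly as $\beta \to 0$, consistent with the exponentially small energy transfer.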
\nFigure~\\ref{fig:vseries} shows that the critical velocity is poorly predicted by the first term in this series, but well-predicted up to about $\\beta=0.1$ when the second term is added. The series expansion of the integrand of~\\eqref{eq:DeltaEfinal} about $t=t^*$ contains one more integrable term which does not lead to a visible improvement of the approximation to $\\vc$. In order to improve the approximation, one would have to calculate expansions about the additional singularities of $G(X_S(t))$ further off the imaginary $t$-axis.\n\n\\section{Matched asymptotic construction of solutions}\n\\label{sec:matching}\n\\subsection{The expansion framework}\nIf $\\Vin >\\Vc$, then $\\dot X$ remains positive for all time and $X \\to +\\infty$ monotonically. We can call this a one-pass transmitted solution. A ``pass'' will occur each time $X=0$, when the two solitons pass each other and energy is transferred between the translation and vibration modes. If $\\Vin < \\Vc$, then the energy is negative after one pass, and the solitons reverse direction, setting up the second pass. On the first pass, the change of energy was shown in~\\eqref{eq:DeltaEfinal} to be negative, but on subsequent passes, it may take either sign, by~\\eqref{eq:DeltaEreturn}. On the second, and subsequent, passes the solitons may escape if the energy is positive, or may be reversed again. We will focus primarily on the case that the solitons interact twice before escaping.\n\nFollowing~\\cite{GH:04}, we construct 2-pass solutions by a matched asymptotic expansion. The solution consists of sequences of nearly heteroclinic orbits connected to near saddle approaches at $X=\\pm\\infty$. 
The change in energy from one saddle approach to the next is approximated by the Melnikov integral calculated in section~\\ref{sec:DeltaE}.\nThe 2-pass solution can be constructed from the following 5 pieces:\n\\begin{enumerate}\n\\item A near saddle approach to $X=-\\infty$ with energy $E_0=\\half \\Vin^2$, such that $\\dot X \\to \\Vin < \\Vc$ as $t \\searrow -\\infty$;\n\\item a heteroclinic orbit with $dX\/dt>0$ such that $X(t_1)=0$, with energy change $\\Delta E_1$ given by~\\eqref{eq:DeltaEFinal2};\n\\item a near saddle approach to $X=+\\infty$ with negative energy $E=-\\half M^2$, such that $X$ achieves its maximum at $t=t^{*}$;\n\\item a heteroclinic orbit with $dX\/dt <0$ such that $X(t_2)=0$, with energy change $\\Delta E_2$ given by~\\eqref{eq:DeltaEreturn};\n\\item and a near saddle approach to $X=-\\infty$ with positive energy $E=\\half \\Vout^2$, such that $\\dot X \\to -\\Vout$ as $t \\nearrow \\infty$.\n\\end{enumerate}\nThe times $t_1$, $t_2$, and $t^*$, as well as the energy levels, remain to be determined below. In the language of matched asymptotics, the approximations at steps 1, 3, and 5 are the ``outer solutions'' and steps 2 and 4 are the ``inner solutions.''\n\nA comment about the last step is in order. For general initial velocity $\\Vin$, the energy at step 5 will not match the energy at step 1. If these two energies match exactly, then we say the solution is a 2-pass resonance. If the energy at step 5 is positive but less than $\\Vin^2\/2$, then the solution is in the 2-pass window, and may be called an incomplete resonance. Physically, the solitons reflect off each other, but with reduced speed and with significant energy remaining in their width oscillation. The outer edges of the window will be given by velocities where the energy at step 5 is identically zero. This defines the width of the windows. 
If at step 5 the energy is instead negative, then the solution remains trapped for another step, alternating between negative energy near-saddle approaches to $X=-\\infty$ and $X=\\infty$ until enough energy is returned to $X$ such that $E=\\Vout^2\/2$, and $X\\to \\pm \\infty$. Non-resonant solutions and higher resonances are explained in section~\\ref{sec:generalized}.\n\nFor the simpler sine-Gordon system, we wrote down a general asymptotic formula for $n$-pass solutions, calculated the location of 3-pass windows, and calculated the widths of the 2-bounce windows~\\cite{GH:04}. Analogous results are possible in the present situation and are discussed below in section~\\ref{sec:generalized}, although in less detail than in the previous paper.\n\nWe use the method of matched asymptotic expansions, as in~\\cite{DH:02,DH:03, GH:04}. The heteroclinic orbits along the separatrix are matched (forward and backward in time) to the finite-time singularities associated with the near-saddle approaches. We will not make use of the two positive energy expansions, so we will not compute them. They enter the analysis only when the energy change calculated over the heteroclinic orbits in the section above is used to connect the positive and negative energy expansions. \n\n\\subsection{Asymptotic description of heteroclinic orbit for large $X$}\n\nWe first construct an expansion of the ``inner solution,'' given by the heteroclinic orbit. Along the heteroclinic orbit, ${\\dot X}^2\/2 = F(X)$. Setting $X=0$ at $t=t_1$, the trajectory is given as the solution to\n\\begin{equation}\n\\label{eq:zero_energy}\n\\int_{t_1}^t dt' = \\int_0^X \\frac{dX'}{\\sqrt{2F(X')}} = \\int_0^{X_0} \\frac{dX'}{\\sqrt{2F(X')}} + \\int_{X_0}^X \\frac{dX'}{\\sqrt{2F(X')}} \n\\end{equation}\nfor an arbitrary $O(1)$ constant $X_0 \\frac{3+2\\sqrt{3}}{9}e^{\\sqrt{3}}$, with the larger root relevant. 
Thus, we come to the revised estimate of the resonant velocities\n\\begin{equation}\n\\label{eq:vn_comp2}\nv_n=\\sqrt{\\vc^2 - \\frac{16\\left(1+\\frac{1}{2C_n}\\right)}{\\pi^2 (2n+1)^2}}.\n\\end{equation}\nFigure~\\ref{fig:vn_computed} shows that this does better than our first estimate. In a similar computation, we found that this analysis in a neighborhood of $X=\\infty$ was enough to determine the resonant velocities. We find in the next section that we can do better with a numerical criterion based on~\\eqref{eq:t2mt1}.\n\n\\subsubsection*{A Numerical Condition for Resonance}\nIn all situations where heteroclinic or homoclinic orbits are matched to near-saddle approaches, $t_2-t_1$ equals half the period of~\\eqref{eq:scaled_a} with $W$ set to zero,\n\\begin{equation}\n\\label{eq:XnoW}\n\\ddot X - F'(X) =0.\n\\end{equation} \nLet $P(E)$ be the period of the closed orbit of~\\eqref{eq:XnoW} with energy $E<0$. We may solve for $P(E)$ by evaluating the definite integral\n\\begin{equation}\n\\label{eq:Period}\nP = 2 \\int_{-\\Xmax(E)}^{\\Xmax(E)} \\frac{dX}{\\sqrt{2}\\sqrt{F(X)+E}},\n\\end{equation}\nwhere $\\Xmax(E)$ is the positive root of $F(X)+E=0$. The period $P$ cannot be computed in closed form given the particular potential $-F(X)$ in this problem. Alternatively, one can compute the period simply by integrating equation~\\eqref{eq:XnoW} with initial conditions $X=\\Xmax(E)$, $\\dot X = 0$ until reaching the termination condition $X=-\\Xmax(E)$ at the time $P(E)\/2$.\n\nIn either case, the above calculation must be inverted numerically to yield the energy as a function of the period, using the secant method or some variant. In the scaled variables, we have, from equation~\\eqref{eq:t2mt1}, \n\\begin{equation}\n\\frac{P(E_n)}{2} = \\frac{(2n+1)\\pi}{\\lambda}.\n\\label{eq:nlresonance}\n\\end{equation}\nThis is solved for $E_n = -\\frac{M_n^2}{2}$, and the resonant velocity is found using~\\eqref{eq:VnDeltaEMn}. 
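The procedure just described can be sketched numerically. Equation~\eqref{eq:F} is not reproduced in this excerpt, so the sketch below assumes $F(X)=(X\cosh X-\sinh X)/\sinh^{3}X$, a form consistent with the quadrature in~\eqref{eq:tstar} and with the pole $i\pi(X-i\pi)^{-3}$ quoted earlier; that choice, and the use of bracketing root finding in place of the secant method, are assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def F(x):
    # Assumed form of F(X); even, F(0) = 1/3, decays to zero as |X| -> infinity.
    if abs(x) < 1e-4:
        return 1.0 / 3.0 - 2.0 * x ** 2 / 15.0  # Taylor series, avoids 0/0 at X = 0
    return (x * np.cosh(x) - np.sinh(x)) / np.sinh(x) ** 3

def period(E):
    """P(E) from eq. (Period) for -1/3 < E < 0."""
    x_max = brentq(lambda x: F(x) + E, 1e-9, 50.0)  # positive root of F(X) + E = 0
    # integrable 1/sqrt singularity at the turning point; guard against rounding
    half, _ = quad(lambda x: 1.0 / np.sqrt(max(2.0 * (F(x) + E), 1e-300)),
                   0.0, x_max, limit=200)
    return 4.0 * half  # F even, so P = 2*int_{-Xmax}^{Xmax} = 4*int_0^{Xmax}

def energy_for_period(p_target):
    """Numerical inversion E(P), as required by eq. (nlresonance)."""
    return brentq(lambda E: period(E) - p_target, -0.33, -1e-4)
```

Given $\lambda$ and $n$, `energy_for_period(2 * (2 * n + 1) * np.pi / lam)` then yields $E_n = -M_n^2/2$.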
This and the other two approximations are shown in figure~\\ref{fig:vn_computed}, for all solutions with positive $n$ up to $n=14$. Approximations~\\eqref{eq:vn_comp1} and~\\eqref{eq:vn_comp2} both predict the existence of a 2-3 window, while the numerical calculation~\\eqref{eq:nlresonance} does not, and no such window is found by direct numerical simulation.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4in]{fig9}\n\\caption{The resonant velocities, indexed by the number of complete oscillations of $w(t)$, with $\\beta=0.05$. The thick solid curve at the bottom is the result of direct numerical simulation. From top to bottom, the other curves are the asymptotic results of equations~\\eqref{eq:vn_comp1} and~\\eqref{eq:vn_comp2}, and the value involving numerical calculation of the energy level, given the resonant period~\\eqref{eq:nlresonance}.}\n\\label{fig:vn_computed}\n\\end{center}\n\\end{figure}\n\n\\subsection{Generalization to near-resonances and higher resonances}\n\\label{sec:generalized}\nThe 2-pass resonant solutions are a countable, and thus measure-zero, family of initial conditions. Each 2-pass window has finite width whose left and right edges can be found by imposing the condition that $\\Delta E_2 = M^2\/2$, so that the output energy is identically zero. It can be shown that the window widths scale as $n^{-3}$ for large $n$.\n\nIn between the 2-pass windows there is a complicated structure consisting of many narrower windows. These include 3-pass windows, which can be found as follows. A three-pass resonant solution has three energy jumps. Just as $W(t)$ and $X(t)$ are even functions about $t^*$ in two-pass solutions, in 3-pass solutions, $W(t)$ and $X(t)$ are odd functions about their center time. We can place the three ``center times'' at $t=-t_0$, $t=0$, and $t=t_0$, and notice that if the solution is odd, then $\\Delta E=0$ at $t=0$. 
The change of energy at the second jump is $\\Delta E = -\\frac{3 I_{{\\rm c},\\infty}^2}{\\pi^2} \\left(\\frac{1}{2} + \\cos{\\lambda t_0}\\right)$, which implies \n$$\nt_0 = \\frac{\\left(2n+1 \\pm \\frac{1}{3}\\right) \\pi}{\\lambda},\n$$\nand gives 3-pass resonant solutions with \n$$\nv_{3,n\\pm} = \\sqrt{\\vc^2 - \\frac{16}{\\pi^2 (2n+1\\pm\\frac{1}{3})^2}}.\n$$\nA corrected formula, as in equation~\\eqref{eq:vn_comp2}, and a more accurate numerical condition, as in equation~\\eqref{eq:nlresonance}, may also be derived. A general formula for the locations of higher complete resonances can be derived as in~\\cite{GH:04}, but this equation must be solved numerically. \n\n\\section{The effect of coupling to a weakly nonlinear oscillator}\n\\label{sec:nonlinearity}\nWe briefly discuss the discrepancies between the full ODE model~\\eqref{eq:TanYang} and the simplified model~\\eqref{eq:simplified}, in order to account for the marked difference in window locations between figures~\\ref{fig:R003} and~\\ref{fig:SR002}. The most obvious discrepancy is seen by comparing the two graphs of figure~\\ref{fig:w_vs_W}, where the components $w(t)$ and $1+W(t)$ are both plotted for the 2-5 resonance with $\\beta=0.2$. It is clear that $w(t)$, the fully nonlinear oscillation, has a larger amplitude, a larger period, and the mean about which it oscillates is displaced upward. As done following equation~\\eqref{eq:t2mt1}, we fit $t_2-t_1$ from our numerical simulations and find $t_2-t_1 \\approx 5.00\\left(n+\\half\\right)+0.541$ when $\\beta=0.05$. 
For the case $\\beta=0.2$, careful examination shows that $t_2-t_1$ is not approximated as well by a linear fit, with the growth in $t_2-t_1$ slowing as $n$ increases. We find $t_2-t_1\\approx 6.38\\left(n+\\half\\right) + 0.327$ when the first 10 resonances are used and $t_2-t_1\\approx 6.20 \\left(n+\\half\\right)+ 1.93$ when resonances 11 through 20 are used; the error in these fits is much larger than in the simplified model, especially when the leftmost windows are included. We see then that, in addition to a large correction to the oscillation frequency, a significant phase shift appears in the fully nonlinear dynamics.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{fig10a}\n\\includegraphics[width=3in]{fig10b}\n\\caption{The solution components of the 2-5 resonance: \\textbf{(a)} $1+W(t)$ of the simplified equation~\\eqref{eq:simplified} and \\textbf{(b)} $w(t)$ of the full ODE~\\eqref{eq:TanYang}.}\n\\label{fig:w_vs_W}\n\\end{center}\n\\end{figure}\n\nThis shift in amplitude, frequency, and mean value can be explained using a strained coordinate or Poincar\\'e-Lindstedt expansion~\\cite{KC:96}. We assume that $w=W+1$ satisfies the homogeneous part of~\\eqref{eq:TY2}\n\\begin{equation}\n\\label{eq:nonlin}\n\\diffn{w}{t}{2}=\\frac{\\lambda^2}{w^2}\\left(\\frac{1}{w}-1\\right),\n\\end{equation}\nwhere we have used scaling~\\eqref{eq:lambda} and have again set $K=1$. Expanding this as a power series in $W$, we find\n\\begin{equation}\n\\label{eq:W_expansion}\n\\diffn{W}{t}{2} +\\lambda^2 W = \\lambda^2\\left(3 W^2 - 6 W^3 + O(W^4)\\right).\n\\end{equation}\nWe look for periodic solutions with initial conditions\n\\begin{equation}\nW(0)=\\epsilon \\text{ and } \\dot W (0)=0,\n\\label{eq:W_ic}\n\\end{equation}\nnoting that these initial conditions are somewhat arbitrary, and leaving $\\epsilon$ positive and small, but for now undefined. 
We expand both the function $W$ and the frequency of oscillations $\\Omega$ in powers of $\\epsilon$\n\\begin{equation}\n\\label{eq:W_PL}\n\\begin{split}\nW &= \\sum_{k=1}^{\\infty} \\epsilon^k W_{k-1}(T) \\\\\n\\Omega &= \\sum_{k=0}^{\\infty} \\epsilon^k \\Omega_k \n\\end{split}\n\\end{equation}\nwhere $T=\\Omega t$, and assume that the solution has period $2\\pi$ in $T$. The equation is satisfied at each order in $\\epsilon$, with the $\\Omega_k$ chosen to suppress secular growth terms. We find that \n$$\n\\Omega_0 = \\lambda ; \\; \\Omega_1=0;\\; \\Omega_2=-\\frac{3}{2}\\lambda\n$$\nand\n\\begin{equation}\n\\label{eq:W_k}\n\\begin{split}\nW_0 & = \\cos{\\Omega t}\\\\\nW_1 & = \\frac{3}{2} - \\cos {\\Omega t} -\\half \\cos{2\\Omega t} \\\\\nW_2 &= -3 + \\frac{13}{8} \\cos{\\Omega t} + \\cos{2\\Omega t} + \\frac{3}{8}\\cos{3\\Omega t}. \n\\end{split}\n\\end{equation}\nThus the period of oscillation is increased at larger amplitudes, as found from the least squares fits.\n\nIt remains to determine a suitable value of $\\epsilon$ in~\\eqref{eq:W_ic} and its effect on the resonance. The full ODE model~\\eqref{eq:TanYang} conserves the Hamiltonian \n\\begin{equation}\n\\label{eq:NLEnergy}\nH = \\half{\\dot X}^2 +\\frac{2\\pi^2}{3} \\dot{w}^2 + \\frac{8}{3}\\left(1+ \\frac{1}{w^2}-\\frac{2}{w}\\right) \n- \\frac {16\\beta}{w} F\\left(\\frac{X}{w}\\right),\n\\end{equation}\nnot written here in canonical variables.\nWe see from figure~\\ref{fig:vc} that $\\vc$, and hence $\\Delta E$, is approximately the same for the full and simplified ODE systems. Expanding this in powers of $W=w-1$, we obtain the approximate Hamiltonian\n\\begin{equation}\n\\label{eq:Energy_expansion}\nH \\approx \\half{\\dot X}^2 +\\frac{2\\pi^2}{3} \\dot{W}^2 + \\frac{8}{3}\\left( W^2 - 2W^3 + 3 W^4+\\ldots\\right) \n- 16\\beta F(X) - 16\\beta G(X)W + \\ldots. \n\\end{equation}\n\nAs $t\\to -\\infty$, all of the energy is stored as kinetic energy in the soliton modes: $H=\\half v_0^2$. 
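The frequency correction $\Omega \approx \lambda\left(1-\frac{3}{2}\epsilon^{2}\right)$ can be checked by integrating the truncation of~\eqref{eq:W_expansion} directly and measuring the spacing of turning points; the values $\lambda=1$ and $\epsilon=0.05$ below are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, eps = 1.0, 0.05  # arbitrary test values

def rhs(t, y):
    # Truncation of eq. (W_expansion): W'' + lam^2 W = lam^2 (3 W^2 - 6 W^3)
    w, wdot = y
    return [wdot, lam ** 2 * (-w + 3.0 * w ** 2 - 6.0 * w ** 3)]

turning = lambda t, y: y[1]  # W'(t) = 0 at every half period
sol = solve_ivp(rhs, (0.0, 50.0), [eps, 0.0], events=turning,
                rtol=1e-11, atol=1e-13)
te = sol.t_events[0]
te = te[te > 1e-6]                       # discard the event at t = 0
P_measured = 2.0 * np.mean(np.diff(te))  # successive events are half a period apart
P_predicted = 2.0 * np.pi / (lam * (1.0 - 1.5 * eps ** 2))
print(P_measured, P_predicted)
```

The measured period exceeds $2\pi/\lambda$, confirming that the oscillation slows down at finite amplitude.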
At the symmetry time, which we can set to $t^{*}=0$, $\\dot X=0$ and $\\dot w=0$, and $X=\\Xmax$ is given, if $G(X) W$ is small enough to be ignored, by the solution to $\\half v_0^2 - \\half \\vc^2 = -16 \\beta F(\\Xmax)$.\nPlugging this back into the energy~\\eqref{eq:NLEnergy} and using the expansion~\\eqref{eq:Energy_expansion} only for the coupling $F(X\/w)$ term, we obtain\n$$\n\\half \\vc^2 \\approx \\frac{8}{3}\\left(1+ \\frac{1}{(1+W)^2}-\\frac{2}{1+W}\\right) - 16 \\beta G(\\Xmax)W.\n$$\nFor resonant velocities sufficiently close to $\\vc$ or for small enough $\\beta$, $16\\beta G(\\Xmax)$ is negligibly small, and we can solve the resulting quadratic equation for $W(0)$ and obtain \n$$\nW(0)= \\frac {\\pm \\frac{\\sqrt 3}{4}\\vc}{1\\mp \\frac{\\sqrt 3}{4}\\vc}\\approx\\pm \\frac{\\sqrt 3}{4} \\vc.\n$$\nFor larger values of $16\\beta G(\\Xmax)$, the equation is cubic in $W$ and the roots may be found by a perturbation expansion around the previously found roots. We use this value of $W(0)$ as our value of $\\epsilon$. For $\\beta=0.05$, it is sufficient to use $\\epsilon=\\sqrt{3}\\vc\/4$, which gives a period of $5.00$, as was found from the linear fit. For $\\beta=0.2$, we find that weakly nonlinear theory is not useful, as the first several terms of the expansion of the Hamiltonian in $W$ of~\\eqref{eq:Energy_expansion} are all found numerically to be of about the same order, so that ignoring the $W^4$ and $W^5$ terms in the Poincar\\'e-Lindstedt expansion is invalid.\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have explained many of the phenomena seen in the collision of vector solitons in CNLS. First, we derived and justified a simplified version of a model derived by Ueda and Kath. Using a Melnikov integral, we estimated the critical velocity, and using matched asymptotic expansions near separatrices, we explained how to connect subsequent passes to construct an approximate solution. 
Imposing the condition that the total energy change after two passes is zero allowed us to find the locations of the exact two-pass resonant velocities, the centers of the two-pass windows. It remains to be seen if this phenomenon can be produced in physical experiments, but the experimental setup would appear to be fairly simple.\n\nMore importantly, we have elucidated the mechanism underlying two-pass and two-bounce resonance phenomena in general. The important elements of an ODE model are the following:\n\\begin{itemize}\n\\item a ``position'' mode $Z(t)$ that moves in a potential well $V(Z)$, which is localized near $Z=0$, so that the force approaches zero at large distances;\n\\item a secondary oscillator mode $W(t)$ that acts as a temporary energy reservoir;\n\\item a term $C(Z,W)$ that couples the two modes together, also localized near $Z=0$, so that the coupling decays at large distances.\n\\end{itemize}\nTrapping takes place on the initial interaction if enough energy leaks from the position mode ($Z$) to the energy reservoir ($W$). In that case, the first mode crosses a separatrix curve in its unperturbed phase space. The energy change on subsequent interactions may be positive or negative, depending sensitively on the phase of $W(t)$ at the interaction time, even though $W$ may remain exponentially small. Eventually, in any Hamiltonian model, enough energy will be transferred back to the position mode that it returns to the unbounded portions of phase space and escapes. It is transmitted if it escapes to $+\\infty$ and reflected if it escapes to $-\\infty$. 
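These ingredients can be wired into a minimal toy model, which is our own construction for illustration rather than the Ueda--Kath system: $V(Z)=-\operatorname{sech}^{2}Z$, a reservoir of frequency $\lambda$, and coupling $C(Z,W)=\epsilon W\operatorname{sech}^{2}Z$, with conserved energy $H=\tfrac12\dot Z^{2}+\tfrac12\dot W^{2}+V(Z)+\tfrac12\lambda^{2}W^{2}+C(Z,W)$. A single fast pass over the well then deposits energy in the reservoir, and the position mode exits more slowly than it entered.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, eps, v_in = 1.5, 0.5, 2.0  # illustrative parameters, chosen arbitrarily

sech2 = lambda z: 1.0 / np.cosh(z) ** 2

def rhs(t, y):
    z, zdot, w, wdot = y
    s2 = sech2(z)
    zddot = 2.0 * s2 * np.tanh(z) * (eps * w - 1.0)  # -dV/dZ - dC/dZ
    wddot = -lam ** 2 * w - eps * s2                 # -lam^2 W - dC/dW
    return [zdot, zddot, wdot, wddot]

def energy(y):
    z, zdot, w, wdot = y
    return (0.5 * zdot ** 2 + 0.5 * wdot ** 2 - sech2(z)
            + 0.5 * lam ** 2 * w ** 2 + eps * w * sech2(z))

y0 = [-15.0, v_in, 0.0, 0.0]        # incoming from the left, reservoir quiescent
sol = solve_ivp(rhs, (0.0, 40.0), y0, rtol=1e-10, atol=1e-12)
yf = sol.y[:, -1]
E_reservoir = 0.5 * yf[3] ** 2 + 0.5 * lam ** 2 * yf[2] ** 2
print(yf[0], yf[1], E_reservoir)
```

Lowering the input velocity far enough makes the post-pass translational energy negative, and the particle is temporarily trapped, reproducing the multi-pass phenomenology described above.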
All the models we have seen are Hamiltonian, but it should be possible to carry through much of the analysis in the presence of a simple dissipative term.\n\nWe believe this mechanism to be present in all the systems that have displayed two-bounce resonance phenomena, including those studied in the foundational papers on kink-antikink interactions in nonlinear Klein-Gordon equations~\\cite{CP:86,CPS:86,CSW:83,PC:83}. A finite-dimensional model for kink-antikink interactions in the $\\phi^4$ model is presented by Anninos et al.~\\cite{AOM:91}. The model they derive is essentially of the form described, but the potential and coupling terms are much more complicated than~\\eqref{eq:TanYang}, and it would be quite difficult even to find expansions about poles in the solution, as was done in section~\\ref{sec:DeltaE} of the current paper. What is more, there is no natural small parameter measuring the coupling and the difference in time scales of the two modes, as we have here. The topology is slightly different in the $\\phi^4$ kink-antikink problem: the separatrix is given by an orbit homoclinic to infinity, rather than heteroclinic. This is indeed why the interaction is a ``bounce'' rather than a ``pass.'' We have developed a simple model that displays the same topology as in~\\cite{AOM:91} and are in the process of analyzing it as a next step in understanding the two-bounce case.\n\n\\section*{Acknowledgements}\nWe thank Jianke Yang for useful discussions and use of his figures. 
Roy Goodman was supported by NSF-DMS 0204881 and by an NJIT SBR grant.\n\n\n\n\\section{Introduction}\nAnomaly detection is a binary classification problem to determine whether an input contains an anomaly.\nDetecting anomalies is a critical and long-standing problem faced by the manufacturing and financial industries.\nAnomaly detection is usually formulated as one-class classification because abnormal examples are either inaccessible or insufficient to model their distribution during training.\nWhen concentrating on image data, detected anomalies can also be localized, and the anomaly segmentation problem is to localize the anomalies at the pixel level.\nIn this study, we tackle the problems of image anomaly detection and segmentation.\n\nOne-class support vector machine (OC-SVM)~\\cite{ocsvm} and support vector data description (SVDD)~\\cite{svdd} are classic algorithms used for one-class classification.\nGiven a kernel function, OC-SVM seeks a max-margin hyperplane from the origin in the kernel space.\nLikewise, SVDD searches for a data-enclosing hypersphere in the kernel space.\nThese approaches are closely related, and Vert et al.~\\cite{ocsvm_consistency} showed their equivalence in the case of a Gaussian kernel.\nRuff et al.~\\cite{deepSVDD} proposed a deep learning variant of SVDD, Deep SVDD, by deploying a deep neural network in the place of the kernel function.\nThe neural network was trained to extract a data-dependent representation, removing the need to choose an appropriate kernel function by hand.\nFurthermore, Ruff et al.~\\cite{deep_sad} re-interpreted Deep SVDD from an information-theoretic perspective and applied it to semi-supervised scenarios.\n\n\nIn this paper, we extend Deep SVDD to a patch-wise detection method, thereby proposing Patch SVDD. This extension is rendered nontrivial by the relatively high level of intra-class variation of the patches and is facilitated by self-supervised 
learning.\nThe proposed method enables anomaly segmentation and improves anomaly detection performance.\nFig.~\\ref{fig:example} shows an example of anomalies localized by the proposed method.\nIn addition, the results in previous studies~\\cite{patch_location,lens} show that the features of a randomly initialized encoder might be used to distinguish anomalies.\nWe examine the behavior of random encoders in more depth and investigate the source of the separability in the random features.\n\n\\input{txt\/1_fig_example}\n\n\\section{Background}\n\\subsection{Anomaly detection and segmentation}\n\\subsubsection{Problem formulation} \nAnomaly detection is the problem of making a binary decision as to whether an input is an anomaly.\nThe definition of \\textit{anomaly} ranges from a tiny defect to an out-of-distribution image.\nWe focus here on detecting a defect in an image.\nA typical detection method involves training a scoring function, $\\mathcal{A}_{\\theta}$, which measures the abnormality of an input.\nAt test time, inputs with high $\\mathcal{A}_{\\theta}(\\mathbf{x})$ values are deemed to be anomalies.\nA \\textit{de facto} standard metric for the scoring function is the area under the receiver operating characteristic curve (AUROC), as expressed in Eq.~\\ref{eq:auroc}~\\cite{auroc}.\n\n\\begin{equation} \\label{eq:auroc}\n \\text{AUROC}\\left [ \\mathcal{A}_{\\theta} \\right ] = \\mathbb{P} \\left [ \\mathcal{A}_{\\theta}(\\mathbf{X}_{\\text{normal}}) < \\mathcal{A}_{\\theta}(\\mathbf{X}_{\\text{abnormal}}) \\right ].\n\\end{equation}\nThe anomaly segmentation problem is similarly formulated, with the generation of an anomaly score for every pixel (i.e., an anomaly map) and the measurement of AUROC over the pixels.\n\n\\subsubsection{Auto encoder-based methods}\nEarly deep learning approaches to anomaly detection used auto 
encoders~\\cite{vae_ad,recon_and_detect,ocgan}.\nThese auto encoders were trained with the normal training data and did not provide accurate reconstruction of abnormal images.\nTherefore, the difference between the reconstruction and the input indicated abnormality.\nFurther variants have been proposed to utilize structural similarity indices~\\cite{ssim_ae}, adversarial training~\\cite{recon_and_detect}, negative mining~\\cite{ocgan}, and iterative projection~\\cite{iterative_project}.\nCertain previous works utilized the learned latent feature of the auto encoder for anomaly detection.\nAkcay et al.~\\cite{ganomaly} defined the reconstruction loss of the latent feature as an anomaly score, and Yarlagadda et al.~\\cite{satellite} trained OC-SVM~\\cite{ocsvm} using the latent features.\nMore recently, several methods have made use of factors other than reconstruction loss, such as restoration loss~\\cite{itae} and an attention map~\\cite{ve_vae}.\n\n\n\\input{txt\/3_fig_big_picture.tex}\n\n\n\\subsubsection{Classifier-based methods}\nAfter the work of Golan et al.~\\cite{geom}, discriminative approaches have been proposed for anomaly detection.\nThese methods exploit an observation that classifiers lose their confidence~\\cite{oodbaseline} for the abnormal input images.\nGiven an unlabeled dataset, a classifier is trained to predict synthetic labels.\nFor example, Golan et al.~\\cite{geom} randomly flip, rotate, and translate an image, and the classifier is trained to predict the particular type of transformation performed.\nIf the classifier does not provide a confident and correct prediction, the input image is deemed to be abnormal.\nWang et al.~\\cite{e3outlier} proved that such an approach could be extended to an unsupervised scenario, where the training data also contains a few anomalies.\nBergman et al.~\\cite{goad} adopted an open-set classification method and generalized the method to include non-image data.\n\n\n\\subsubsection{SVDD-based 
methods}\nSVDD~\\cite{svdd} is a classic one-class classification algorithm.\nIt maps all the normal training data into a predefined kernel space and seeks the smallest hypersphere that encloses the data in the kernel space.\nThe anomalies are expected to be located outside the learned hypersphere.\nAs a kernel function determines the kernel space, the training procedure is merely deciding the radius and center of the hypersphere.\n\nRuff et al.~\\cite{deepSVDD} improved this approach using a deep neural network.\nThey adopted the neural network in place of the kernel function and trained it along with the radius of the hypersphere.\nThis modification allows the encoder to learn a data-dependent transformation, thus enhancing detection performance on high-dimensional and structured data.\nTo avoid a trivial solution (i.e., the encoder outputs a constant), they removed the bias terms in the network.\nRuff et al.~\\cite{deep_sad} further applied this method to a semi-supervised scenario.\n\n\\input{txt\/3_fig_map_comparison}\n\n\\subsection{Self-supervised representation learning}\nLearning a representation of an image is a core problem of computer vision.\nA series of methods have been proposed to learn a representation of an image without annotation.\nOne branch of research suggests training the encoder by learning with a \\textit{pretext task}, which is a self-labeled task to provide synthetic learning signals.\nWhen a network is trained to solve the pretext task well, the network is expected to be able to extract useful features.\nThe pretext tasks include predicting relative patch location~\\cite{patch_location}, solving a jigsaw puzzle~\\cite{jigsaw}, colorizing images~\\cite{colorization}, counting objects~\\cite{count}, and predicting rotations~\\cite{rotation}.\n\n\n\\section{Methods}\n\\subsection{Patch-wise Deep SVDD} \\label{sec:patch_svdd}\n\nDeep SVDD~\\cite{deepSVDD} trains an encoder that maps the entire training data to features lying within a small 
hypersphere in the feature space.\nThe encoder, $f_\\theta$, is trained to minimize the Euclidean distances between the features and the center of the hypersphere using the following loss function:\n\n\\begin{equation} \\label{eq:deep_svdd}\n \\mathcal{L}_{\\text{SVDD}} = \\sum_i \\left \\| f_\\theta (\\mathbf{x}_i) - \\mathbf{c} \\right \\| _2,\n\\end{equation}\nwhere $\\mathbf{x}$ is an input image.\nAt test time, the distance between the representation of the input and the center is used as an anomaly score.\nThe center $\\mathbf{c}$ is calculated in advance of the training, as shown in Eq.~\\ref{eq:deep_svdd_center}, where $N$ denotes the number of training samples.\nTherefore, the training pushes the features toward a single center.\n\n\\begin{equation} \\label{eq:deep_svdd_center}\n \\mathbf{c} \\doteq \\frac{1}{N} \\sum_{i=1}^N f_\\theta (\\mathbf{x}_i).\n\\end{equation}\n\nIn this study, we extend this approach to patches; the encoder encodes each patch, not the entire image, as illustrated in Fig.~\\ref{fig:svdd_comparison}.\nAccordingly, inspection is performed for each patch.\nPatch-wise inspection has several advantages.\nFirst, the inspection result is available at each position, and hence we can localize the positions of defects.\nSecond, such fine-grained examination improves overall detection performance.\n\n\nExtending Deep SVDD~\\cite{deepSVDD} to patch-wise inspection is straightforward.\nA patch encoder, $f_\\theta$, is trained using $\\mathcal{L}_{\\text{SVDD}}$ with $\\mathbf{x}$ replaced with a patch, $\\mathbf{p}$.\nThe anomaly score is defined accordingly, and examples of the resulting anomaly maps are provided in Fig.~\\ref{fig:ablation_loss_map}.\nUnfortunately, the detection performance is poor for images with high complexity.\nThis is because patches have high intra-class variation; some patches correspond to the background, while others contain the object.\nAs a result, mapping all the features of dissimilar 
patches to a single center and imposing a uni-modal cluster weaken the connection between the representation and the content.\nTherefore, using a single center $\\mathbf{c}$ is inappropriate, yet deciding on an appropriate number of centers and allocating the patches among them is cumbersome.\n\n\\input{txt\/3_fig_self_supervision}\n\nTo bypass the above issues, we neither explicitly define the centers nor allocate the patches.\nInstead, we train the encoder to gather semantically similar patches by itself.\nThe semantically similar patches are obtained by sampling spatially adjacent patches, and the encoder is trained to minimize the distances between their features using the following loss function:\n\n\\begin{equation} \\label{eq:patch_svdd}\n \\mathcal{L}_{\\text{SVDD'}} = \\sum_{i,i'} \\left \\| f_\\theta (\\mathbf{p}_i) - f_\\theta (\\mathbf{p}_{i'}) \\right \\| _2,\n\\end{equation}\nwhere $\\mathbf{p}_{i'}$ is a patch near $\\mathbf{p}_{i}$.\nFurthermore, to force the representation to capture the semantics of the patch, we add the following self-supervised learning task.\n\n\\subsection{Self-supervised learning} \\label{sec:self_supervised}\nDoersch et al.~\\cite{patch_location} trained an encoder and classifier pair to predict the relative positions of two patches, as depicted in Fig.~\\ref{fig:self_supervision}.\nA well-performing pair implies that the trained encoder extracts useful features for location prediction.\nAside from this particular task, previous research~\\cite{jigsaw,rotation,revisit_ssl} reported that the self-supervised encoder functions as a powerful feature extractor for downstream tasks.\n\nFor a randomly sampled patch $\\mathbf{p}_1$, Doersch et al.~\\cite{patch_location} sampled another patch $\\mathbf{p}_2$ from one of its eight neighboring positions in a 3 $\\times$ 3 grid.\nIf we let the true relative position be $y \\in \\{0, ..., 7\\}$, the classifier $C_{\\phi}$ is trained to predict $y=C_\\phi(f_\\theta(\\mathbf{p}_1), 
f_\\theta(\\mathbf{p}_2))$ correctly.\nThe size of the patch is the same as the receptive field of the encoder.\nTo prevent the classifier from exploiting shortcuts~\\cite{lens} (e.g., color aberration), we randomly perturb the RGB channels of the patches.\nFollowing the approach by Doersch et al.~\\cite{patch_location}, we add a self-supervised learning signal via the following loss term:\n\\begin{equation} \\label{eq:ssl}\n \\mathcal{L}_{\\text{SSL}} = \\texttt{Cross-entropy} \\left (y, C_\\phi \\left ( f_\\theta(\\mathbf{p}_1), f_\\theta(\\mathbf{p}_2) \\right ) \\right ).\n\\end{equation}\nAs a result, the encoder is trained using a combination of the two losses with the scaling hyperparameter $\\lambda$, as presented in Eq.~\\ref{eq:patch_svdd_final}.\nThis optimization is performed using stochastic gradient descent with the Adam optimizer~\\cite{adam}.\n\n\\begin{equation} \\label{eq:patch_svdd_final}\n \\mathcal{L}_{\\text{Patch SVDD}} = \\lambda \\mathcal{L}_{\\text{SVDD'}} + \\mathcal{L}_{\\text{SSL}}.\n\\end{equation}\n\n\n\\input{txt\/3_fig_hierarchical}\n\\subsection{Hierarchical encoding} \\label{sec:hierarchical}\nAs anomalies vary in size, deploying multiple encoders with various receptive fields helps to deal with this variation.\nThe experimental results in Section~\\ref{sec:hierarchical_results} show that enforcing a hierarchical structure on the encoder boosts anomaly detection performance as well.\nFor this reason, we employ a hierarchical encoder that embodies a smaller encoder; the hierarchical encoder is defined as\n\\begin{equation} \\label{eq:hierarchical}\n f_{\\text{big}}(\\mathbf{p})=g_{\\text{big}}(f_{\\text{small}}(\\mathbf{p})).\n\\end{equation}\n\nAn input patch $\\mathbf{p}$ is divided into a 2 $\\times$ 2 grid of sub-patches, and their features are aggregated to constitute the feature of $\\mathbf{p}$, as shown in Fig.~\\ref{fig:hierarchical}.\nEach encoder with receptive field size $K$ is trained with the self-supervised task of patch size 
$K$.\nThroughout the experiment, the receptive field sizes of the large and small encoders are 64 and 32, respectively.\n\n\n\\input{txt\/3_fig_overall_method}\n\n\n\\subsection{Generating anomaly maps}\nAfter training the encoders, the representations from the encoders are used to detect the anomalies.\nFirst, the representation of every normal training patch, $\\left \\{f_\\theta (\\mathbf{p}_{\\text{normal}}) | \\mathbf{p}_{\\text{normal}} \\right \\}$, is calculated and stored.\nGiven a query image $\\mathbf{x}$, for every patch $\\mathbf{p}$ with a stride $S$ within $\\mathbf{x}$, the L2 distance to the nearest normal patch in the feature space is then defined to be its anomaly score using Eq.~\\ref{eq:anomaly_score_patch}.\nTo mitigate the computational cost of the nearest neighbor search, we adopt an approximate algorithm\\footnote{\\url{https:\/\/github.com\/yahoojapan\/NGT}}.\nAs a result, the inspection of a single image from the MVTec AD~\\cite{mvtecad} dataset, for example, requires approximately 0.48 seconds.\n\n\\begin{equation} \\label{eq:anomaly_score_patch}\n \\mathcal{A}_{\\theta}^{\\text{patch}}(\\mathbf{p}) \\doteq \\min_{\\mathbf{p}_{\\text{normal}}} \\left \\| f_\\theta (\\mathbf{p}) - f_\\theta (\\mathbf{p}_{\\text{normal}}) \\right \\| _2.\n\\end{equation}\n\nPatch-wise calculated anomaly scores are then distributed to the pixels.\nAs a consequence, each pixel receives the average anomaly score of every patch to which it belongs, and we denote the resulting anomaly map as $\\mathcal{M}$.\n\nThe multiple encoders discussed in Section~\\ref{sec:hierarchical} constitute multiple feature spaces, thereby yielding multiple anomaly maps.\nWe aggregate the multiple maps using element-wise multiplication, and the resulting anomaly map, $\\mathcal{M}_{\\text{multi}}$, provides the answer to the problem of anomaly segmentation:\n\n\\begin{equation} \\label{eq:anomaly_map_multi}\n \\mathcal{M}_{\\text{multi}} \\doteq \\mathcal{M}_{\\text{small}} \\odot 
\\mathcal{M}_{\\text{big}},\n\\end{equation}\nwhere $\\mathcal{M}_{\\text{small}}$ and $\\mathcal{M}_{\\text{big}}$ are the anomaly maps generated using $f_{\\text{small}}$ and $f_{\\text{big}}$, respectively.\nThe pixels with high anomaly scores in the map $\\mathcal{M}_{\\text{multi}}$ are deemed to contain defects.\n\nIt is straightforward to address the problem of anomaly detection.\nThe maximum anomaly score of the pixels in an image is its anomaly score, as expressed in Eq.~\\ref{eq:anomaly_score_image}.\nFig.~\\ref{fig:overall_method} illustrates the overall flow of the proposed method, and its pseudo-code is provided in Appendix~\\ref{sec:appendix_pseudo_code}.\n\n\\begin{equation} \\label{eq:anomaly_score_image}\n \\mathcal{A}_{\\theta}^{\\text{image}}(\\mathbf{x}) \\doteq \\max_{i,j} \\mathcal{M}_{\\text{multi}}(\\mathbf{x})_{ij}.\n\\end{equation}\n\n\\section{Results and Discussion}\nTo verify the validity of the proposed method, we applied it to the MVTec AD~\\cite{mvtecad} dataset.\nThe dataset consists of industrial images from 15 classes, each class categorized as either an \\textit{object} or a \\textit{texture}.\nTen \\textit{object} classes contain regularly positioned objects, whereas the five \\textit{texture} classes contain repetitive patterns.\nThe implementation details used throughout the study are provided in Appendix~\\ref{sec:appendix_implementation}; please refer to \\cite{mvtecad} for more details on the dataset.\n\n\n\\input{txt\/4_table_auroc.tex}\n\n\\subsection{Anomaly detection and segmentation results}\nFig.~\\ref{fig:anomaly_maps} shows anomaly maps generated using the proposed method, indicating that the defects are properly localized, regardless of their size.\nTable~\\ref{table:anomaly_det_seg} shows the detection and segmentation performances for the MVTec AD~\\cite{mvtecad} dataset, compared with state-of-the-art baselines, in terms of AUROC.\nPatch SVDD provides state-of-the-art performance over powerful baselines, including autoencoder-based and 
classifier-based methods, and outperforms Deep SVDD~\\cite{deepSVDD} by 55.6\\%.\nMore numerical results are provided in Appendix~\\ref{sec:appendix_numerical}.\n\n\n\\input{txt\/4_fig_TSNE_Embedding.tex}\n\n\\subsection{Detailed analysis}\n\\subsubsection{t-SNE visualization}\nFig.~\\ref{fig:tsne} shows t-SNE visualizations~\\cite{tsne} of the learned features of multiple training images.\nPatches located at the points shown in Fig.~\\ref{fig:tsne}(b) are mapped to the points with the same color and size in Fig.~\\ref{fig:tsne}(a) and Fig.~\\ref{fig:tsne}(c).\nIn Fig.~\\ref{fig:tsne}(a), the points with similar color and size form clusters in the feature space.\nSince the images in the cable class are regularly positioned, the patches from the same position have similar content, even if they are from different images.\nLikewise, for the regularly positioned \\textit{object} classes, the points with similar color and size in the t-SNE visualization (i.e., the patches with similar positions) can be regarded as semantically similar.\nBy contrast, features of the leather class in Fig.~\\ref{fig:tsne}(c) show the opposite tendency.\nThis is because the patches in \\textit{texture} classes are analogous, regardless of their position in the image; the positions of the patches are hardly related to their semantics for the \\textit{texture} images.\n\n\n\\input{txt\/4_fig_ablation_loss.tex}\n\\input{txt\/4_fig_ablation_loss_tsne.tex}\n\n\\subsubsection{Effect of self-supervised learning} \\label{sec:ssl_effect}\nPatch SVDD trains an encoder using two losses: $\\mathcal{L}_{\\text{SVDD'}}$ and $\\mathcal{L}_{\\text{SSL}}$, where $\\mathcal{L}_{\\text{SVDD'}}$ is a variant of $\\mathcal{L}_{\\text{SVDD}}$.\nTo compare the roles of the proposed loss terms, we conduct an ablation study.\nTable~\\ref{table:ablation_loss} suggests that the modification of $\\mathcal{L}_{\\text{SVDD}}$ to $\\mathcal{L}_{\\text{SVDD'}}$ and the adoption of $\\mathcal{L}_{\\text{SSL}}$ 
improve the anomaly detection and segmentation performances.\nFig.~\\ref{fig:ablation_loss} shows that the effects of the proposed loss terms vary among classes.\nSpecifically, the \\textit{texture} classes (e.g., tile and wood) are less sensitive to the choice of loss, whereas the \\textit{object} classes, including cable and transistor, benefit significantly from $\\mathcal{L}_{\\text{SSL}}$.\n\nTo investigate the reason behind these observations, we provide (in Fig.~\\ref{fig:ablation_loss_tsne}) t-SNE visualizations of the features of an \\textit{object} class (the transistor) for the encoders trained with $\\mathcal{L}_{\\text{SVDD}}$, $\\mathcal{L}_{\\text{SVDD'}}$, and $\\mathcal{L}_{\\text{SVDD'}} + \\mathcal{L}_{\\text{SSL}}$.\nWhen training is performed with $\\mathcal{L}_{\\text{SVDD}}$ (Fig.~\\ref{fig:ablation_loss_tsne}(a)) or $\\mathcal{L}_{\\text{SVDD'}}$ (Fig.~\\ref{fig:ablation_loss_tsne}(b)), the features form a uni-modal cluster.\nIn contrast, $\\mathcal{L}_{\\text{SSL}}$ results in multi-modal feature clusters on the basis of their semantics (i.e., color and size), as shown in Fig.~\\ref{fig:ablation_loss_tsne}(c).\nThe multi-modal property of the features is particularly beneficial to the \\textit{object} classes, which have high intra-class variation among the patches.\nFeatures of the patches with dissimilar semantics are separated, and hence anomaly inspection using those features becomes more deliberate and accurate.\n\n\\input{txt\/4_fig_ablation_together}\n\nThe intrinsic dimensions (ID)~\\cite{intrinsic_dimension} of the features also indicate the effectiveness of $\\mathcal{L}_{\\text{SSL}}$.\nThe ID is the minimal number of coordinates required to describe the points without significant information loss~\\cite{intrinsic_dimension2}.\nA larger ID denotes that the points are spread in every direction, while a smaller ID indicates that the points lie on low-dimensional manifolds with high separability.\nIn 
Fig.~\\ref{fig:ablation_loss_id}, we show the average IDs of the features in each class trained with the three different losses.\nTraining the encoder with the proposed $\\mathcal{L}_{\\text{Patch SVDD}}$ yields the features with the lowest ID, implying that these features are neatly distributed.\n\n\n\\subsubsection{Hierarchical encoding} \\label{sec:hierarchical_results}\nIn Section~\\ref{sec:hierarchical}, we proposed the use of hierarchical encoders.\nFig.~\\ref{fig:hierarchical_helps} shows that aggregating multi-scale results from multiple encoders improves the inspection performance.\nIn addition, an ablation study with a non-hierarchical encoder shows that the hierarchical structure itself also boosts performance.\nWe postulate that the hierarchical architecture provides regularization for the feature extraction.\nNote that the non-hierarchical encoder has a number of parameters similar to that of its hierarchical counterpart.\n\nWe provide an example of multi-scale inspection results, together with an aggregated anomaly map, in Fig.~\\ref{fig:hierarchical_maps}.\nThe anomaly maps from various scales provide complementary inspection results; the encoder with a large receptive field coarsely locates the defect, whereas the one with a smaller receptive field refines the result.\nTherefore, an element-wise multiplication of the two maps localizes the defect accurately.\n\n\\input{txt\/4_fig_hierarchical_map.tex}\n\n\n\\subsubsection{Hyperparameters}\nAs shown in Eq.~\\ref{eq:patch_svdd_final}, the hyperparameter $\\lambda$ balances $\\mathcal{L}_{\\text{SVDD'}}$ and $\\mathcal{L}_{\\text{SSL}}$.\nA large $\\lambda$ emphasizes gathering of the features, while a small $\\lambda$ promotes their informativeness.\nInterestingly, the most favorable value of $\\lambda$ varies among the classes.\nAnomalies in the \\textit{object} classes are well detected under a smaller $\\lambda$, while the \\textit{texture} classes are well detected with a larger 
$\\lambda$.\nFig.~\\ref{fig:ablation_lambda} shows an example of this difference; the anomaly detection performance for the cable class (\\textit{object}) improves as $\\lambda$ decreases, while the wood class (\\textit{texture}) shows the opposite trend.\nAs discussed in the previous sections, this occurs because the self-supervised learning is more helpful when the patches show high intra-class variation, which is the case for the \\textit{object} classes.\nThe result coincides with that shown in Fig.~\\ref{fig:ablation_loss} because using $\\mathcal{L}_{\\text{SVDD'}}$ as a loss is equivalent to using $\\mathcal{L}_{\\text{Patch SVDD}}$ with $\\lambda \\gg 1$.\n\n\nThe number of feature dimensions, $D$, is another hyperparameter of the encoder.\nThe anomaly inspection performance for varying $D$ is depicted in Fig.~\\ref{fig:ablation_D}(a).\nA larger $D$ yields improved performance---a trend that has also been reported in the self-supervised learning literature~\\cite{revisit_ssl}.\nFig.~\\ref{fig:ablation_D}(b) indicates that the ID of the resulting features increases with increasing $D$.\nThe black dashed line represents $y=x$, which is the upper bound of the ID.\nThe average ID of the features among the classes saturates at $D=64$; therefore, we used a value of $D=64$ throughout our study.\n\n\\input{txt\/4_fig_ablation_D.tex}\n\n\n\\subsubsection{Random encoder}\n\\input{txt\/4_table_random_encoder.tex}\nDoersch et al.~\\cite{patch_location} showed that randomly initialized encoders perform reasonably well in image retrieval; given an image, the nearest images in the random feature space look similar to humans as well.\nInspired by this observation, we examined the anomaly detection performance of random encoders and provide the results in Table~\\ref{table:random_enc}.\nAs in Eq.~\\ref{eq:anomaly_score_patch}, the anomaly score is defined to be the distance to the nearest normal patch, but in the random feature space.\nIn the case of certain classes, the 
features of the random encoder are effective in distinguishing between normal and abnormal images.\nSome results even outperform the trained deep neural network model (L2-AE).\n\nHere, we investigate the reason for the high separability of the random features.\nFor simplicity, let us assume the encoder is a single convolutional layer parametrized by a weight $W \\neq \\mathbf{0}$ and a bias $b$, followed by a nonlinearity, $\\sigma$.\nGiven two patches $\\mathbf{p}_1$ and $\\mathbf{p}_2$, their features $h_1$ and $h_2$ are provided by Eq.~\\ref{eq:random_encoder_define}, where $*$ denotes a convolution operation.\n\n\\begin{equation} \\label{eq:random_encoder_define}\n\\begin{split}\n & h_1 = \\sigma (W * \\mathbf{p}_1 + b) \\\\\n & h_2 = \\sigma (W * \\mathbf{p}_2 + b).\n\\end{split}\n\\end{equation}\n\nAs suggested by Eq.~\\ref{eq:random_encoder}, when the features are close, so are the patches, and vice versa.\nTherefore, retrieving the nearest patch in the feature space is analogous to doing so in the image space.\n\n\n\\begin{equation} \\label{eq:random_encoder}\n\\begin{split}\n \\left \\| h_1 - h_2 \\right \\| _2 \\approx 0 & \\Leftrightarrow (W * \\mathbf{p}_1 + b) - (W * \\mathbf{p}_2 + b) \\approx 0 \\\\\n & \\Leftrightarrow W * (\\mathbf{p}_1 - \\mathbf{p}_2) \\approx 0 \\\\\n & \\Leftrightarrow \\left \\| \\mathbf{p}_1 - \\mathbf{p}_2 \\right \\| _2 \\approx 0.\n\\end{split}\n\\end{equation}\n\n\nIn Table~\\ref{table:random_enc}, we also provide the results for the anomaly detection task using the nearest neighbor algorithm on the raw patches (i.e., $f_\\theta(\\mathbf{p})=\\mathbf{p}$ in Eq.~\\ref{eq:anomaly_score_patch}).\nFor certain classes, the raw patch nearest neighbor algorithm works surprisingly well.\nThe effectiveness of the raw patches for anomaly detection can be attributed to the high similarity among the normal images.\n\nFurthermore, the classes that are well separated by the random encoder are also well separated by the raw patch 
nearest neighbor algorithm, and vice versa.\nTogether with the conclusion of Eq.~\\ref{eq:random_encoder}, this observation implies a strong relationship between a raw image patch and its random feature.\nTo summarize, the random features of anomalies are easily separable because they resemble the raw patches, and the raw patches are easily separable.\n\n\\section{Conclusion}\n\nIn this paper, we proposed Patch SVDD, a method for image anomaly detection and segmentation.\nUnlike Deep SVDD~\\cite{deepSVDD}, we inspect the image at the patch level, and hence we can also localize defects.\nMoreover, additional self-supervised learning improves detection performance.\nAs a result, the proposed method achieved state-of-the-art performance on the MVTec AD~\\cite{mvtecad} industrial anomaly detection dataset.\n\nIn previous studies~\\cite{jigsaw,colorization}, images were featurized prior to the subsequent downstream tasks because of their high-dimensional and structured nature.\nHowever, the results of our analysis suggest that a nearest neighbor algorithm with raw patches often discriminates anomalies surprisingly well.\nMoreover, since the distances in the random feature space are closely related to those in the raw image space, random features can provide distinguishable signals.\n\n\n\\section{Pseudo code} \\label{sec:appendix_pseudo_code}\n\\input{txt2\/1_pseudo_code.tex}\n\\input{txt2\/1_pseudo_code2.tex}\nAlgorithm~\\ref{algo:pseudo} trains a hierarchical encoder using $\\mathcal{L}_{\\text{Patch SVDD}}$.\nAfter the training, sets of features of normal patches are extracted using the trained multi-scale encoders.\nThe outputs of Algorithm~\\ref{algo:pseudo} are the sets of normal features and the trained encoders.\nAlgorithm~\\ref{algo:pseudo2} performs inspection on a query image and outputs the anomaly map and anomaly score.\n\\newpage\n\n\n\\section{Results}\n\\subsection{Numerical results} 
\\label{sec:appendix_numerical}\n\\input{txt2\/2_table_auroc_full.tex}\n\\input{txt2\/2_table_auroc_hier.tex}\n\\newpage\n\n\\subsection{Anomaly maps}\n\\input{txt2\/2_fig_anomaly_maps1.tex}\n\\clearpage\n\\input{txt2\/2_fig_anomaly_maps2.tex}\n\\clearpage\n\\input{txt2\/2_fig_anomaly_maps3.tex}\n\\clearpage\n\n\\section{Implementation details} \\label{sec:appendix_implementation}\n\\subsection{Dataset}\nThe dataset used in the study, MVTec AD~\\cite{mvtecad}, consists of industrial images from 15 classes.\nEach class is categorized as either an \\textit{object}\\footnote{bottle, cable, capsule, hazelnut, metal\\_nut, pill, screw, toothbrush, transistor, and zipper} or a \\textit{texture}\\footnote{carpet, grid, leather, tile, and wood}.\nEach class contains 60 to 390 normal training images and 40 to 167 test images.\nTest images include both normal and abnormal examples, and the defects of the abnormal images are annotated at the pixel level in the form of binary masks.\nWe downsampled every image to a resolution of 256 $\\times$ 256.\nGray-scale images are converted to RGB images by replicating the single channel three times.\nNo data augmentation method (e.g., horizontal flip, rotation) was used for the training.\n\n\\subsection{Networks}\nTwo neural networks are used throughout the study: an encoder and a classifier.\nThe encoder is composed of convolutional layers only.\nThe classifier is a two-layered MLP model with 128 hidden units per layer, and the input to the classifier is the difference between the features of the two patches.\nThe activation function for both networks is a LeakyReLU~\\cite{relu} with $\\alpha=0.1$.\nPlease refer to the code\\footnote{\\url{https:\/\/github.com\/nuclearboy95\/Anomaly-Detection-PatchSVDD-PyTorch}} for the detailed architecture of the networks.\n\nAs proposed in Section 3.3 of the main paper, the encoder has a hierarchical structure.\nThe receptive field of the encoder is $K=64$, and that of the embedded smaller encoder is $K=32$.\nPatch SVDD 
divides the images into patches of size $K$ with a stride $S$.\nThe values of the strides are $S=16$ and $S=4$ for the encoders with $K=64$ and $K=32$, respectively.\n\n\\subsection{Environments}\n\nThe experiments throughout the study were conducted on a machine equipped with an Intel i7-5930K CPU and an NVIDIA GeForce RTX 2080 Ti GPU.\nThe code is implemented in Python 3.7 and PyTorch~\\cite{pytorch}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n Blazars (Flat Spectrum Radio Quasars and BL Lac objects) are the most extreme type of Active Galactic Nuclei (AGN), with\nrelativistic jets pointing toward us. Their emission is non-thermal and is amplified due to the relativistic bulk motion of the jets.\nThe Blazar Spectral Energy Distribution (SED) consists of two broad ``humps'', one spanning from radio to optical-UV (and occasionally X-ray) bands and another one\nthat extends from X-rays to multi-GeV and occasionally to TeV $\\gamma$-rays. The low-frequency component is believed to be due to synchrotron emission by non-thermal electrons, while the higher one is attributed to inverse Compton (IC) scattering of the relativistic electrons on\nsynchrotron or external photons. Relativistic plasma motion and radiative cooling affect the temporal variability of these sources, which helps define the emitting region's size. \n Certain patterns in the spectral features became apparent and are now known as the ``Blazar\nSequence'' \\cite{Foss98}. Blazars become redder with increasing bolometric luminosity\n$L_{\\rm bol}$, in that their synchrotron peak frequency $\\nu_{\\rm pk}^{\\rm syn}$ decreases as\n$L_{\\rm bol}$ increases; at the same time, their Compton dominance (CD; i.e., the ratio of their IC to synchrotron luminosities) increases and so do their $\\gamma$-ray spectral indices, i.e., 
their spectral index becomes steeper.\nThe Blazar Sequence, originally established with 132 objects, of which only 33 were detected in\nhigh-energy $\\gamma$-rays, was supplemented with the launch of \\emph{Fermi} and the discovery\nof more than 5000 $\\gamma$-ray Blazars as recorded in the 4th Fermi Blazar Catalog\n\\cite{4th}. This\ncompilation provided novel correlations that replaced those of the original Blazar Sequence. This result implies that the underlying physics is probably related to fundamental parameters of the AGN phenomenology. \\\\\n\\indent The optically thin Blazar GeV emission suggests its location to be far from the\naccreting black hole (BH), possibly out to a distance as large as $10^6R_S~\\sim$ 10 pc \\cite{Marscher10} (where $R_S$ is the Schwarzschild radius). Furthermore, the AGN tori (dusty, molecular structures of height\/radius ratios\n$z\/D \\simeq 1$) invoked in the unification of the radio-quiet or radio-loud AGN subclasses \\cite{AntonMill} are of similar scales and, as we argue, play a significant role in Blazar physics. To reconcile the discrepancy with the expected torus geometry, \\cite{KK94} proposed that these tori are, in fact, MHD accretion disk winds \\cite{BKM19, CL94}; these are launched across the entire disk, from the BH\nvicinity of a few $R_S$ to the BH influence radius $D$ $\\sim~(c\/\\sigma)^2 R_S \\sim 10^6 R_S \\sim 10~{\\rm pc}$ (for $M_{BH}\\simeq 10^8M_{\\odot}$). Furthermore, the discovery of Warm Absorbers (WA, blue-shifted absorption features) and their successful modeling as photoionized MHD\nwinds that extend to $r \\sim 10^6R_S$ \\cite{Behar09,FKCB}, established the combined AGN WA-torus as a\nsingle entity.\nFinally, modeling the absorbers of the Galactic BH GRO 1655-40 with the same type of winds \\cite{FKCB17}\nsuggests the possibility of their presence in any accreting BH. 
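As a quick back-of-the-envelope check of these scales, the following minimal Python sketch (not taken from the paper; the stellar velocity dispersion $\sigma \approx 300$ km/s is an assumed, illustrative value) evaluates the Schwarzschild radius and the BH influence radius $D \sim (c/\sigma)^2 R_S$ for $M_{BH} = 10^8 M_\odot$:

```python
# Sanity check of the quoted scales: R_S for M_BH = 1e8 M_sun and the
# BH influence radius D ~ (c/sigma)^2 R_S (CGS units throughout).
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm/s]
M_sun = 1.989e33    # solar mass [g]
pc = 3.086e18       # parsec [cm]

M_BH = 1e8 * M_sun
sigma = 3e7         # assumed stellar velocity dispersion ~300 km/s [cm/s]

R_S = 2 * G * M_BH / c**2   # Schwarzschild radius [cm]
D = (c / sigma)**2 * R_S    # BH influence radius [cm]

print(f"R_S ~ {R_S:.2e} cm ({R_S / pc:.1e} pc)")
print(f"D/R_S ~ {D / R_S:.1e}, D ~ {D / pc:.1f} pc")
```

With these numbers, $D/R_S \approx 10^6$ and $D \approx 10$ pc, consistent with the scales quoted in the text.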
\\\\\n\\indent The physical properties of such MHD winds depend on a few parameters, and their presence allowed us to reproduce a theoretical Blazar Sequence by varying only one parameter, namely the mass accretion rate \\cite{BKM19,20BMK}. Here we show the results of a two-zone emission model based on an extension of our previous works. However, we now assume that electrons are accelerated in a zone close to the central engine and then escape and radiate in a larger region, which we will refer to as the cooling zone.\n\\vspace{-0.2cm}\n\\section{Emission model -- Scalings with the mass accretion rate}\n\n\\indent The broader morphology of the non-thermal Blazar SED depends\non the ratio of the magnetic to photon energy densities. To calculate the former, we assume some sort of equipartition with the energy density of the accreting matter. Assuming an accretion power $P_{\\rm acc}= \\dot{m}{\\cal M}L_{\\rm Edd}$, with $\\dot{m}$ the mass accretion rate normalized to the Eddington one and \n${\\cal M}= M_{\\rm BH}\/M_{\\rm \\odot}$, where\n$M_{\\rm BH}$ is the mass of the black hole, one can calculate the magnetic field energy density at position $z$:\n\\begin{equation}\\label{B}\n U_{\\rm B} \\propto \\eta_{\\rm b} \\dot{m}{\\cal{M}}^{-1},\n\\end{equation}\nwhere $\\eta_{\\rm b}$ is a proportionality constant.\nWe assume that the external photon field consists of photons scattered on the accretion disk wind particles \\cite{BKM19}; thus, the external photon field density $U_{ext}$ in the jet frame has the form: \n \\begin{equation}\\label{ext}\n U_{\\rm ext}= \\Gamma^2 {\\rm U}_{\\rm sc}\\propto \\Gamma^2 \\epsilon \\dot{m}^{\\alpha+1}{\\cal M}^{-1}~~(\\alpha =1 ~{\\rm{for}}~\\dot{m}\\geq 0.1 ~~{\\rm{and}}~ \\alpha=2 ~{\\rm{for}}~\\dot{m}<0.1),\n \\end{equation}\n where $\\epsilon$ is the efficiency of the conversion\nof the accreting mass into radiation and $\\Gamma$ is the bulk Lorentz factor of the source. 
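These two scaling relations can be sketched numerically. The following illustrative Python snippet is not part of the model code: the overall normalizations are arbitrary, and the values $\eta_{\rm b}=0.01$, $\epsilon=0.05$, and $\Gamma=30$ are taken from the parameter values quoted later in the figure captions.

```python
# Illustrative sketch of the scaling relations (arbitrary normalization):
# U_B ∝ η_b ṁ / M  and  U_ext ∝ Γ² ε ṁ^(α+1) / M,
# with α = 1 for ṁ >= 0.1 and α = 2 for ṁ < 0.1.

def u_b(mdot, M, eta_b=0.01):
    """Magnetic energy density scaling with the accretion rate."""
    return eta_b * mdot / M

def u_ext(mdot, M, Gamma=30.0, eps=0.05):
    """External photon field scaling, with the broken power-law index."""
    alpha = 1.0 if mdot >= 0.1 else 2.0
    return Gamma**2 * eps * mdot**(alpha + 1) / M

M = 1e8  # black hole mass in solar masses
for mdot in (0.01, 0.1, 1.0):
    ratio = u_ext(mdot, M) / u_b(mdot, M)
    print(f"mdot = {mdot:>5}: U_ext/U_B ~ {ratio:.2e}")
```

The ratio $U_{ext}/U_B$ grows with $\dot{m}$, which is the behavior that drives the increasing Compton dominance along the sequence.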
We have assumed that the disk emits like a black body characterized by a temperature $T_{\\rm disk}$ in order to estimate the spectrum of the scattered photons. As we pointed out in \\cite{BKM19}, all input parameters required for the calculation of the spectrum are scaled with $\\dot{m}$ and ${\\cal{M}}$. Using the above definitions for the physical properties of the source, we obtain the Blazar SED by solving the coupled integro-differential kinetic equations of electrons and photons as described in \\cite{MK95}.\n We emphasize that, according to relations \\ref{B} and \\ref{ext}, the basic parameters of the system of equations depend essentially only on the mass accretion rate, $\\dot{m}$.\n \n \n Therefore, we assume a blob of plasma of radius $R$ where particles are accelerated with a characteristic timescale $t_{acc}$ (e.g., \\cite{KRM98}). In the case of first-order Fermi acceleration we have \\begin{equation}\\label{tacc}\n t_{{acc}_{FI}}\\geq 6\\left(\\frac{c}{u_s}\\right)^2\\frac{\\lambda}{c}\\simeq 6 \\frac{r_g c}{u_s^2},\n\\end{equation}\nwhere \n\\begin{equation}\n r_g=\\frac{\\gamma m c^2}{eB}.\n\\end{equation}\n\n\\begin{figure}[!htbp]\n\\begin{subfigure}{0.6\\textwidth}\n\\includegraphics[width=\\linewidth]{acc.pdf}\n \\label{fig:a}\n \\end{subfigure}\\hspace*{\\fill}\n\\begin{subfigure}{0.5\\textwidth}\n\\includegraphics[width=\\linewidth]{BS_acc.pdf}\n\\label{fig:b}\n\\end{subfigure}\n\\caption{\\textbf{Left:} The dependence of the acceleration timescale on the normalized mass accretion rate. \\textbf{Right:} The calculated Spectral Energy Distribution of BL Lac objects for various values of the mass accretion rate. } \\label{fig:1a}\n\\end{figure}\nIn our study, we re-write Equation~\\ref{tacc} as $t_{acc}\\propto \\gamma A_{acc}(\\dot{m})$, taking into account the dependence on the mass accretion rate. 
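For concreteness, the first-order Fermi bound above can be evaluated directly in CGS units; in this hedged sketch the shock speed $u_s = 0.1c$ and magnetic field $B = 1$ G are illustrative assumptions, not values fitted in this work.

```python
# Evaluate the first-order Fermi bound t_acc >= 6 r_g c / u_s^2 with
# r_g = gamma m_e c^2 / (e B); CGS units, electrons assumed.
m_e = 9.109e-28   # electron mass [g]
c = 2.998e10      # speed of light [cm/s]
e_ch = 4.803e-10  # elementary charge [esu]

def t_acc(gamma, B, u_s=0.1 * 2.998e10):
    """Lower bound on the acceleration timescale for an electron."""
    r_g = gamma * m_e * c**2 / (e_ch * B)  # electron gyroradius [cm]
    return 6.0 * r_g * c / u_s**2          # [s]

# t_acc grows linearly with gamma and falls as 1/B:
print(t_acc(1e4, B=1.0), t_acc(1e5, B=1.0))
```

The linear dependence on $\gamma$ is what the parametrization $t_{acc}\propto \gamma A_{acc}(\dot{m})$ captures, with the $\dot{m}$ dependence entering through $B$.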
By solving the set of equations, we find that an almost linear dependence between $t_{acc}$ and $\\dot{m}$ is required to explain the high-synchrotron-peaked blazars; see Figure~\\ref{fig:1a}.\n\n\\section{Particle escape: A two-zone emission model}\nAs a second step, we study a two-zone model: particles accelerate in a zone closer to the central engine, then escape to a larger volume further away and cool due to synchrotron and inverse Compton losses. \n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{model_2zone.png}\n \\caption{Sketch of the two-zone model. Particles accelerate and radiate in the acceleration zone (red), while a fraction of the accelerated particles escapes into a larger volume (blue), where they cool and radiate.}\n \\label{fig:model_2zone}\n\\end{figure}\nThe kinetic equation of electrons in the first zone (or acceleration zone) has the form: \n \\begin{equation}\n \\frac{\\partial n_{e_I}(\\gamma,t)}{\\partial t}+\\frac{n_{e_I}(\\gamma,t)}{t_{{esc}_I}(\\dot{m},\\gamma)}+\\frac{\\partial}{\\partial \\gamma}\\left[\\frac{\\gamma}{t_{acc}(\\dot{m},\\gamma)}n_{e_I}(\\gamma,t)\\right]=\\mathcal{L}_{syn}(\\gamma,t) + \\mathcal{L}_{ICS}(\\gamma,t).\n \\end{equation}\nThe kinetic equation of relativistic electrons in the second zone (or cooling zone) has the form: \n \\begin{equation}\n \\frac{\\partial n_{e_{II}}(\\gamma,t)}{\\partial t}+\\frac{n_{e_{II}}(\\gamma,t)}{t_{{esc}_{II}}}=Q_{inj}+\\mathcal{L}_{syn}(\\gamma,t) + \\mathcal{L}_{ICS}(\\gamma,t),\n \\end{equation}\nwhere $n_{e_I},~n_{e_{II}}$ are the electron differential distribution functions in the first and second zone, respectively, $t_{{esc}_I},~t_{{esc}_{II}}$ are the electron escape timescales from the first and second zone, respectively, and the term $Q_{inj}=\\frac{n_{e_I}}{t_{{esc}_I}}$ refers to the relativistic electrons of the acceleration zone that escape and are injected into the cooling zone (e.g., \\cite{1998KRM}). 
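To illustrate the structure of the cooling-zone equation, the toy Python sketch below integrates only the injection and escape terms (the synchrotron and inverse Compton loss operators $\mathcal{L}$ are deliberately omitted, so this is not the full integro-differential solver of \cite{MK95}); the density then relaxes to the steady state $n = Q_{inj}\, t_{esc}$:

```python
# Toy illustration of the cooling-zone balance dn/dt = Q_inj - n/t_esc
# (radiative-loss terms omitted); the steady state is n = Q_inj * t_esc.
def evolve(Q_inj, t_esc, t_end, dt=1e-3):
    n = 0.0
    t = 0.0
    while t < t_end:
        n += dt * (Q_inj - n / t_esc)  # explicit Euler step
        t += dt
    return n

Q_inj, t_esc = 5.0, 2.0
n_final = evolve(Q_inj, t_esc, t_end=20 * t_esc)
print(n_final)  # approaches Q_inj * t_esc = 10 after many t_esc
```

In the full model the same balance holds per Lorentz factor $\gamma$, with the loss operators redistributing electrons toward lower energies.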
\nTo calculate synchrotron losses, we assume that the magnetic field decreases with the distance $z$ from the central engine as $B\\propto 1\/z$. Furthermore, the energy density of the external photon field $U_{ext}$ is constant with the distance $z$ \\cite{BKM19}. Under these assumptions, the two classes of blazars show a very distinct behavior: \n\\begin{itemize}\n\\item The emission of BL Lac objects is dominated by the first zone. This results from the fact that the energy density of the magnetic field dominates in both zones. The reason is that in the acceleration zone we have $U_{B_I}\\gg U_{ext}$, as BL Lac objects have a weak external photon field. In the radiation zone\\footnote{The size of the second zone is related to the electron cooling timescale, so that the losses are Compton dominated.}, even though the magnetic energy density $U_B$ decreases with $z$ (see above), \nwe still have $U_{B_{II}} \\gg U_{ext}$ everywhere, where $U_{B_{II}}$ is the magnetic energy density in the second zone; see Figure \\Ref{fig:sed_2}. As a result, the spectrum is dominated by the emission of the acceleration zone. \n\\item The synchrotron emission of FSRQs is mainly produced by the first zone, while the emission of the second zone is dominated by inverse Compton scattering. For the parameter set that we study, in the acceleration zone the magnetic energy density is again more significant than the energy density of the external photon field. However, the second zone is further from the central engine, where the magnetic field decreases, but the external photons' energy density remains constant \\cite{BKM19}. Now we have $U_{ext}\\gg U_{B_{II}}$ and the contribution of the external inverse Compton scattering is more significant than that of the first zone. As a result, in the total flux, the low-frequency component is related to the synchrotron radiation from the acceleration zone and the high-frequency one to the inverse Compton emission from the cooling zone. 
\n\\end{itemize}\nThe above can be illustrated by Figure \\Ref{fig:sed_2}, which depicts our calculations for the two zones in the cases of FSRQs and BL Lac objects. \n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{BS_comp.pdf}\n \\caption{Results for FSRQ and BL Lac objects according to the two-zone model. Solid lines depict the emission from the acceleration zone and dotted lines the emission from the cooling zone.}\n \\label{fig:sed_2}\n\\end{figure}\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{BS_2zone.pdf}\n \\caption{The theoretical Blazar Sequence according to the superposition of the two-zone emission, obtained by varying only the mass accretion rate; see Table \\Ref{tab3} for the values of the input parameters. The acceleration zone is at a distance of $z=0.01~\\rm{pc}$ from the central black hole, which is assumed to have a mass $\\mathcal{M}=10^8$. The external photon field is produced from the isotropic scattering of disk photons on the wind particles between radii $R_1=9\\cdot 10^{14}~\\rm{cm}$ and $R_2=3\\cdot 10^{18}~\\rm{cm}$. The efficiencies of the magnetic field and the external photon field are $\\eta_{\\rm b}=0.01$ and $\\epsilon=0.05$, respectively. The number of the injected electrons in the acceleration process, which is assumed to be of the first-order Fermi type, depends linearly on the mass accretion rate. The bulk Lorentz factor is $\\Gamma=30$ and the Doppler factor is $\\delta=15$. The characteristic temperature of the accretion disk is $T_{\\rm disk}=3 \\cdot 10^3~$K.}\n \\label{fig:sed_2zone}\n\\end{figure}\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{3C273.pdf}\n \\caption{An application of the two-zone model in the case of the FSRQ 3C273. The acceleration zone is at a distance of $z=0.01~\\rm{pc}$. 
The external photon field is produced from the isotropic scattering of disk photons on the wind particles between radii $R_1=9\\cdot 10^{14}~\\rm{cm}$ and $R_2=3\\cdot 10^{18}~\\rm{cm}$. The magnetic field strength in the acceleration zone is $B=1~G$, its radius is $R=5\\cdot 10^{15}$ cm, and the energy density of the external photon field is $U_{ext}=2.5\\cdot10^{-3} ~\\frac{\\rm erg}{\\rm cm^3}$. The bulk Lorentz factor is $\\Gamma=30$ and the Doppler factor is $\\delta=15$. The characteristic temperature of the accretion disk is $T_{\\rm disk}=3 \\cdot 10^4$ K. Data are reproduced from \\cite{GIommi12P}.}\n \\label{fig:3c273}\n\\end{figure}\n\\begin{table} \n\\begin{center}\n{\n \\begin{tabular}{ |c|c | c | c | c |}\n \\hline\n $ \\dot{m}$ & $ B({\\rm G}) $ & $U_{\\rm ext}$ $\\left(\\frac{\\rm erg}{\\rm{cm}^3}\\right)$ & $A_{\\rm acc}$ & Blazar Class\\\\ \\hline\n -0.5 & 1 & -2.6 &-4& FSRQ\\\\ \\hline\n -1.5 & 0 & -5.6 & -5& LBL \\\\ \\hline\n -2.5 & -1 & -8.6 &-6& HBL \\\\ \n \\hline\n \\end{tabular}} \n\\end{center}\n\\caption{The values of the input parameters for different mass accretion rates when particle acceleration is included in the numerical code. All the values are on a logarithmic scale. }\\label{tab3}\n\\end{table} \n\nIn Figure \\Ref{fig:sed_2zone} we present our results for a theoretical Blazar Sequence in the case of the two-zone model; see Table \\ref{tab3} for the values of the input parameters and their dependence on $\\dot{m}$. In both classes of blazars particles escape from the acceleration zone. However, FSRQs present a characteristic signature in $\\gamma$-rays, because particles are injected into the cooling zone and interact with a strong external photon field. In contrast, in BL Lac objects the escaping particles do not interact with a strong external photon field and, as a result, the cooling zone has a lower contribution to the total flux. 
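The log-linear pattern of the table above can be summarized compactly: a decade decrease in the accretion rate shifts the logarithm of B by one decade, that of U_ext by three, and that of A_acc by one. The helper below merely encodes these slopes read off the tabulated rows (a descriptive summary of the table, not the model's derivation):

```python
# Log-linear scalings read off the rows of the input-parameter table:
#   log10 B     = log10(mdot) + 1.5
#   log10 U_ext = 3 * log10(mdot) - 1.1
#   log10 A_acc = log10(mdot) - 3.5
# This is a descriptive fit to the tabulated values, not a derivation.

def table_row(log_mdot):
    return {
        "log_B": log_mdot + 1.5,
        "log_Uext": 3.0 * log_mdot - 1.1,
        "log_Aacc": log_mdot - 3.5,
    }

# Reproduces e.g. the FSRQ row: table_row(-0.5) gives
# log_B = 1.0, log_Uext = -2.6, log_Aacc = -4.0.
```

The steep slope of U_ext relative to B is what drives the transition from Compton-dominated FSRQs to synchrotron-dominated HBLs as the accretion rate drops.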
\n\nIn Figure \\Ref{fig:3c273} we show the results of applying the two-zone model to 3C273, an FSRQ object. We zoom in on the high-energy part of the spectrum (X-rays, $\\gamma$-rays). X-rays are produced by the SSC emission of the acceleration zone and $\\gamma$-rays by the external Compton emission of the cooling zone.\n\n\\section{Conclusion}\nIn this work, we reproduce the theoretical Blazar Sequence based on the model of \\cite{BKM19} by varying only the mass accretion rate, which seems to explain the blazar phenomenology. We solve the electron and photon kinetic equations self-consistently, assuming that electrons are accelerated in a small region and lose energy through synchrotron radiation and inverse Compton scattering, while a fraction of them escapes to a larger volume where they lose energy through the same physical processes. Under this assumption, we produce the theoretical Blazar Sequence by adding up the fluxes of the two zones in both the cases of FSRQs and BL Lac objects. \n\n\\bibliographystyle{JHEP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Shortest Common Superstring (SCS) problem is a classical combinatorial problem on strings with applications in many domains, e.g.~DNA fragment assembly, data compression, etc. (see \\cite{GePi14} for a recent survey). It consists, given a finite set of strings $S$ over an alphabet $\\Sigma$, in finding a shortest string containing as factors (substrings) all the strings in $S$. The decision version of the problem is known to be NP-complete \\cite{MaSt77,GallantPhD,GaMaSt80}, even under several restrictions on the structure of $S$ (see again \\cite{GePi14}). However, a particularly simple greedy algorithm introduced by Gallant in his Ph.D. thesis \\cite{GallantPhD} is widely used in applications since it has very good performance in practice (see for instance \\cite{Ma09} and references therein). 
It consists in repetitively replacing a pair of strings with maximum overlap by the string obtained by overlapping the two strings, until one string remains. The greedy algorithm can be implemented using the Aho-Corasick automaton in $\\Oh(n)$ randomized time (with hashing on the symbols of the alphabet) or $\\Oh(n \\min(\\log m,\\log |\\Sigma|))$ deterministic time (see \\cite{Uk90}), where $n$ is the sum of the lengths of the strings in $S$ and $m$ its cardinality.\n\nThe approximation of the greedy algorithm is usually measured in two different ways: one consists in taking into account the \\emph{approximation ratio} (also known as the \\emph{length ratio}) $k_g\/k_{min}$, where $k_g$ is the length of the output string of greedy and $k_{min}$ the length of a shortest superstring; the other consists in taking into account the \\emph{compression ratio} $(n-k_g)\/(n-k_{min})$. \n\nFor the approximation ratio, Turner \\cite{Tu89} proved that there is no constant $c<2$ such that $k_g\/k_{min}\\leq c$. The \\emph{greedy conjecture} states that this approximation ratio is in fact $2$ \\cite{BlJiLiTrYa94}. The best bound currently known is $3.5$, due to Kaplan and Shafrir \\cite{KaSh05}. Algorithms with better approximation ratios are known; the best one is due to Mucha, with an approximation ratio of $2\\frac{11}{23}$ \\cite{Mu13}. 
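The greedy scheme just described can be sketched in a few lines. This is a quadratic-time illustration operating directly on the strings; it makes no attempt at the linear-time Aho-Corasick implementation discussed above, and the function names are ours:

```python
def ov(u, v):
    """Length of the longest suffix of u that is also a prefix of v."""
    for k in range(min(len(u), len(v)), 0, -1):
        if u.endswith(v[:k]):
            return k
    return 0

def greedy_scs(strings):
    """Classical greedy superstring: repeatedly merge the pair of distinct
    strings with maximum overlap until one string remains."""
    # Discard strings that occur as factors of others (factor-free input).
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    while len(s) > 1:
        # Pick the ordered pair (u, v) maximizing ov(u, v).
        _, u, v = max(((ov(u, v), u, v) for u in s for v in s if u != v),
                      key=lambda t: t[0])
        s.remove(u)
        s.remove(v)
        s.append(u + v[ov(u, v):])   # u "merged with" v
    return s[0]
```

For instance, `greedy_scs(["ab", "bc", "cd"])` merges the unit overlaps into `"abcd"`, a shortest superstring of the three inputs.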
\n\nFor the compression ratio, Tarhio and Ukkonen \\cite{TaUk88} proved that $(n-k_g)\/(n-k_{min})\\geq \\frac12$ and this bound is tight, since it is achieved for the set $S=\\{ab^h,b^ha,b^{h+1}\\}$ when greedy makes the first choice merging the first two strings together.\n\nLet us formally state the SCS problem:\n\n\\bigskip\n\n \\defdsproblem{Shortest Common Superstring ($\\mathit{SCS}$)}{\n \\textbf{Input:} strings $S=\\{s_1,\\ldots,s_m\\}$ of total length $n$.\\\\\n \\textbf{Output:} a shortest string $u$ that contains $s_i$ for each $i=1,\\ldots,m$ as a factor.\n }\n \n\n \\bigskip\n \n Several variations of SCS have been considered in literature. For example, shortest common superstring problem with reverse complements was considered in\n \\cite{DBLP:journals\/algorithmica\/KececiogluM95}. In this setting the alphabet is $\\Sigma=\\{\\mathtt{a,t,g,c}\\}$ and the complement of a string $s$ is $\\bar{s}^{R}$, where $\\, \\bar{}\\, $ is defined by\n $\\bar{\\mathtt{a}}=\\mathtt{t}$,\n $\\bar{\\mathtt{t}}=\\mathtt{a}$,\n $\\bar{\\mathtt{g}}=\\mathtt{c}$,\n $\\bar{\\mathtt{c}}=\\mathtt{g}$,\n and $t^R$ denotes the reversal of $t$, that is the string obtained reading $t$ backwards. In particular, this problem was shown to be NP-complete.\n \n Other variations of the SCS problem can be found in \\cite{Jiang94,cpm,swap,rest}. \n \n In this paper, we address the problem of searching for a string $u$ of minimal length such that for every $s_i\\in S$, $u$ contains as a factor $s_i$ \\emph{or} its reversal $s_i^{R}$.\n \n \\bigskip\n \n \\defdsproblem{Shortest Common Superstring with Reversals ($\\mathit{SCS\\mbox{-}R}$)}{\n \\textbf{Input:} strings $S=\\{s_1,\\ldots,s_m\\}$ of total length $n$.\\\\\n \\textbf{Output:} a shortest string $u$ that contains for each $i=1,\\ldots,m$ at least one of the\n strings $s_i$ or $s^{R}_i$ as a factor.\n }\n \n \\bigskip\nFor example, if $S=\\{aabb, aaac, abbb\\}$, then a solution of SCS-R for $S$ is $caaabbb$. 
Notice that a shortest superstring with reversals can be much shorter than a classical shortest superstring. An extremal example is given by an input set of the form $S=\\{ab^h,cb^h\\}$.\n\n\n The SCS-R problem was already considered by Jiang et al.\\ \\cite{DBLP:journals\/ipl\/JiangLD92}, who observed (without giving a proof) that the problem is still NP-hard. We provide a proof at the end of the paper.\n \n In \\cite{DBLP:journals\/ipl\/JiangLD92}, the authors proposed a greedy 4-approximation algorithm. Here, we show that an adaptation of the classical greedy algorithm can be used for solving the SCS-R problem with the (optimal) compression ratio $\\frac12$, and that this algorithm can be implemented in linear time with respect to the total size of the input set.\n\n \n\n\\section{Basics and Notation}\n\n\nLet $\\Sigma$ be a finite alphabet. We assume that $\\Sigma$ is linearly sortable, e.g., $\\Sigma=\\{0,\\ldots,n^{\\Oh(1)}\\}$. The \\emph{length} of a string $s$ over $\\Sigma$ is denoted by $|s|$. The \\emph{empty string}, denoted by $\\epsilon$, is the unique string of length zero.\nA string $t$ \\emph{occurs} in a string $s$ if $s=vtz$ for some strings $v,z$. In this case we say that $t$ is a \\emph{factor} of $s$. \nIn particular, we say that $t$ is a \\emph{prefix} of $s$ when $v=\\epsilon$ and a \\emph{suffix} of $s$ when $z=\\epsilon$.\nWe say that a factor $t$ is \\emph{proper} if $s \\ne t$.\n\nThe string $s^{R}$ obtained by reading $s$ from right to left is called the \\emph{reversal} (or \\emph{mirror image}) of $s$.\nGiven a set of strings $S=\\{s_1,\\ldots,s_m\\}$, we define the set $S^R = \\{s_1^{R},\\ldots,s_m^{R}\\}$ and the set $\\widetilde{S}=S \\cup S^R$.\n\nGiven two strings $u,v$, we define the (maximum) overlap between $u$ and $v$, denoted by $\\mathit{ov}(u,v)$, as the length of the longest suffix of $u$ that is also a prefix of $v$. 
Sometimes we abuse the notation and also say that the suffix of $u$ of length $\\mathit{ov}(u,v)$ is the overlap of $u$ and $v$.\nIn general $\\mathit{ov}(u,v)$ is not equal to $\\mathit{ov}(v,u)$, but it is readily verified that $\\mathit{ov}(u,v) = \\mathit{ov}(v^{R},u^{R})$.\nAdditionally, we define $\\pr(u,v)$ as the prefix of $u$ obtained by removing the suffix of length $\\mathit{ov}(u,v)$ and denote $u \\otimes v=\\pr(u,v) v$.\nNote that the $\\otimes$ operation is in general neither symmetric nor associative.\n\nA set of strings $S$ is called \\emph{factor-free} if no string in $S$ is a factor of another string in $S$. We say that $S$ is \\emph{reverse-factor-free} if there are no distinct strings $u,v\\in S$ such that $u$ is a factor of $v$ or~$v^R$.\n\n\nGiven a factor-free set of strings $S=\\{s_1,\\ldots,s_m\\}$, the SCS problem for $S$ is known to be equivalent to that of finding a maximum-weight Hamiltonian path $\\pi$ in the \\emph{overlap graph} $G_S$, which is a directed weighted graph $(S,E,w)$ with arcs $E=\\{(s_i,s_j)\\mid i\\neq j\\}$ \nof weights $w(s_i,s_j)=\\mathit{ov}(s_i,s_j)$ (cf.\\ Theorem 2.3 in~\\cite{TaUk88}). \nIn this setting, a path $\\pi=s_{i_1}, \\ldots, s_{i_k}$ corresponds to a string $\\mathit{str}(\\pi):=\\pr(s_{i_1},s_{i_2})\\cdots \\pr(s_{i_{k-1}},s_{i_k})s_{i_k}$.\nBy $\\mathit{ov}(\\pi)$ we denote the total weight of arcs in the path $\\pi$.\n\n\nTo accommodate reversals we extend the notion of an overlap graph to $\\barG{S}=(V,E,w)$.\nHere $V=\\{v_s\\,:\\,s \\in S\\} \\cup \\{v^R_s\\,:\\,s \\in S\\}$ so every $s\\in S$ corresponds to exactly two vertices,\n$v_s$ and $v^R_s$. We define $\\mathit{str}(v_s)=s$ and $\\mathit{str}(v^R_s)=s^R$.\nFor a vertex $\\alpha\\in \\barG{S}$ we define $\\alpha^R$ as $v^R_s$ if $\\alpha=v_s$ for some $s$\nor as $v_s$ if $\\alpha = v^R_s$ for some $s$. 
Note that $\\mathit{str}(\\alpha^R)=\\mathit{str}(\\alpha)^R$.\nFor every $\\alpha,\\beta \\in V$, $\\alpha \\ne \\beta$, we introduce an arc from $\\alpha$ to $\\beta$ with weight $\\mathit{ov}(\\mathit{str}(\\alpha),\\mathit{str}(\\beta))$.\nFor an arc $e=(\\alpha,\\beta)$ we define $e^R = (\\beta^R,\\alpha^R)$. Note that the weight of $e^R$ is the same as the weight of $e$.\n\nFor paths $\\pi$ in $\\barG{S}$ we also use the notions of $\\mathit{str}(\\pi)$ and $\\mathit{ov}(\\pi)$.\nWe say that a path $\\pi$ in $\\barG{S}$ is \\emph{semi-Hamiltonian} if $\\pi$ contains, for every vertex $\\alpha\\in \\barG{S}$, exactly\none of the two vertices $\\alpha$, $\\alpha^R$.\nObserve that a solution to SCS-R problem for a reverse-factor-free set $S$ corresponds to a maximum-weight semi-Hamiltonian path $\\pi$ in the overlap graph $\\barG{S}$.\n\n\\section{Greedy Algorithm and its Linear-Time Implementation}\nWe define an auxiliary procedure \\textsl{\\textsc{Make-Reverse-Factor-Free}}$(S)$ that\nremoves from $S$ all strings $u$ which are contained as a factor in $v$ or $v^R$ for some $v \\in S$, $v \\ne u$.\nNote that the resulting set $S'$ is reverse-factor-free and, moreover, a string is a common superstring with reversals for $S'$ if and only if it is a common superstring with reversals for $S$.\n\n\n\\begin{example}\n Let $S=\\{ab,aaa,aab,baa\\}$. 
Then \\textsl{\\textsc{Make-Reverse-Factor-Free}}$(S)$ produces $S'=\\{aaa,aab\\}$ or $S'=\\{aaa,baa\\}$.\n\\end{example}\n\nThe Greedy-R algorithm works as follows:\nwhile $|S|>1$, choose $u,v \\in \\widetilde{S}$ (excluding $u=v$ and $u=v^{R}$) \nwith largest overlap, insert into $S$ the string $u \\otimes v$,\nand remove from $S$ all strings among $u,v,u^{R},v^{R}$ that belong to $S$;\nsee the pseudocode.\n\n\n\\DontPrintSemicolon\n\\begin{algorithm}\n \\KwSty{Algorithm} \\textsl{Greedy-R}($S$)\\\\\n \\KwIn{a non-empty set of strings $S$}\n \\KwOut{a superstring of $S$ that approximates a solution of SCS-R problem for $S$}\n \\Begin{\n $S$ := \\textsl{\\textsc{Make-Reverse-Factor-Free}}($S$) \\;\n \\While{$|S|>1$}{\n $P:=\\{ (u,v) : u,v\\in \\widetilde{S}, u\\notin \\{v,v^R\\}\\}$ \\;\n $\\{$ $S$ is reverse-factor-free and $|S|>1$, so $|P|\\ge 1$ $\\}$ \\\\\n take $(u,v)\\in P$ with the maximal value of $\\mathit{ov}(u,v)$ \\;\n $S:= S \\cup \\{u \\otimes v$\\} \\;\n $S:=S \\setminus \\{u,v,u^{R},v^{R}\\}$\\; \n }\n \\Return{the only element of $S$}\n }\n \\textbf{end}\n\\end{algorithm}\n\nLet us state two properties of this algorithm useful for its efficient implementation.\n\n\\begin{lemma}\\label{lem:free}\nThe set $S$ stays reverse-factor-free after each iteration of the \\textbf{while} loop.\n\\end{lemma}\n\\begin{proof}\nSuppose that at some point $S$ ceases to be reverse-factor-free.\nThis might only be due to the fact that, when $w=u\\otimes v$ is introduced to $S$, $\\widetilde{S}$ contains a string\n$w' \\notin \\{u,u^R,v,v^R\\}$ such that $w'$ is a factor of $w$ but not of $u$ or $v$.\nThe latter, however, implies $\\mathit{ov}(u,w')>\\mathit{ov}(u,v)$ and $\\mathit{ov}(w',v)>\\mathit{ov}(u,v)$.\nThat contradicts the choice\nof $(u,v)\\in P$ maximizing $\\mathit{ov}(u,v)$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:stays}\nBefore $u\\otimes v$ is inserted to $S$, we have $\\mathit{ov}(w, u\\otimes v)=\\mathit{ov}(w,u)$ and $\\mathit{ov}(u\\otimes 
v,w)=\\mathit{ov}(v,w)$\nfor every $w\\in \\widetilde{S}$.\n\\end{lemma}\n\\begin{proof}\nClearly, $\\mathit{ov}(w, u\\otimes v)\\ge \\mathit{ov}(w,u)$ and $\\mathit{ov}(u\\otimes v,w)\\ge \\mathit{ov}(v,w)$. \nMoreover, one of these inequalities might be strict only if $\\mathit{ov}(w, u\\otimes v)>|u|$ or $\\mathit{ov}(u\\otimes v, w)>|v|$, in particular, only if $w$ contains respectively $u$ or $v$ as a proper factor.\nThis, however, contradicts Lemma~\\ref{lem:free}.\n\\end{proof}\n\n\n\\subsection{Interpretation on the Overlap Graph}\n\n\nTarhio and Ukkonen~\\cite{TaUk88} work with the following interpretation of the greedy algorithm on the overlap graph\n$G_S$: They maintain a set of arcs $F\\subseteq E(G_S)$ forming a collection of disjoint paths and\nat each step they add to $F$ an arc $e\\in E(G_S)\\setminus F$ of maximum weight so that the resulting set still forms a collection of disjoint paths.\nHere, paths correspond to strings in the collection $S$ maintained by the original implementation. 
\nInsertion of an arc $(s_i,s_j)$ to $F$ results in merging two paths into one and this corresponds to replacing strings $u,v\\in S$ with $u\\otimes v$.\nIt turns out that $\\mathit{ov}(u,v)=\\mathit{ov}(s_i,s_j)$ and thus \nboth implementations of the greedy algorithm are equivalent.\n\n\n\\DontPrintSemicolon\n\\begin{algorithm}[h]\n \\KwSty{Algorithm} \\textsl{Greedy-R2}($S$)\\\\\n \\KwIn{a non-empty set of strings $S$}\n \\KwOut{a superstring of $S$ that approximates a solution of SCS-R problem for $S$}\n \\Begin{\n $S$ := \\textsl{\\textsc{Make-Reverse-Factor-Free}}($S$) \\;\n Construct $\\barG{S}$\\;\n $F := \\emptyset$\\;\n \\For{$i := 1$ \\KwSty{to} $|S|-1$}{\n $P:=\\{e \\in E(\\barG{S})\\setminus F : F\\cup\\{e,e^R\\}\\text{ forms a collection of disjoint paths in }\\barG{S}\\}$ \\;\n take $e\\in P$ with the maximal weight $w(e)$ \\;\n $F := F\\cup\\{e,e^R\\}$\\; \n }\n \\Return{$\\mathit{str}(\\pi)$ for one of the maximal paths $\\pi$ formed by $F$ in $\\barG{S}$}\n }\n \\textbf{end}\n\\end{algorithm}\n\nHere, we provide an analogous interpretation of Greedy-R on the overlap graph $\\barG{S}$\nand, for completeness, explicitly prove its equivalence to the original implementation.\nWe also maintain a set of arcs $F\\subseteq E(\\barG{S})$ forming a \ncollection of disjoint paths. 
In each step we extend $F$ with a pair of arcs $\\{e,e^R\\}$ of maximum (common)\nweight so that the resulting set still forms a collection of disjoint paths; see the pseudocode of algorithm Greedy-R2.\nObserve that this way paths formed by $F$ come in pairs $\\pi,\\pi^R$ such that $\\pi=\\alpha_1,\\ldots,\\alpha_p$\nand $\\pi^R=\\alpha_p^R,\\ldots,\\alpha_1^R$ (in particular, $\\mathit{str}(\\pi^R)=\\mathit{str}(\\pi)^R$).\nWe claim that these pairs of paths correspond to strings $s\\in S$ of the original implementation (with $s=\\mathit{str}(\\pi)$ or $s=\\mathit{str}(\\pi^R)$).\nIn particular, each string $s\\in \\widetilde{S}$ corresponds to a single path formed by $F$ unless\n$s=s^R$ when it corresponds to two reverse paths. \n The claim is certainly true at the beginning of the algorithm, so let us argue\nthat single iterations of the main loops in both algorithms perform analogous operations.\n\nFirst, consider an arc $e=(\\alpha,\\beta)$ such that there exists a path $\\pi$ that ends at $\\alpha$ and a path $\\pi'$\nthat starts at $\\beta$. (Otherwise, $F\\cup\\{e,e^R\\}$ does not form a collection of disjoint paths).\nObserve that strings $u=\\mathit{str}(\\pi)$ and $v=\\mathit{str}(\\pi')$ belong to $\\widetilde{S}$ in Greedy-R and that $u\\in \\{v,v^R\\}$\nif and only if $F\\cup\\{e,e^R\\}$ yields a cycle. Thus, both algorithms consider\nessentially the same set of possibilities \n(the only caveat is that if $u$ or $v$ is a palindrome, then several arcs $e$ correspond to the same pair of strings from $\\widetilde{S}$). \nBy Lemma~\\ref{lem:stays} the weight of an arc $(\\alpha,\\beta)$ is $\\mathit{ov}(\\mathit{str}(\\alpha),\\mathit{str}(\\beta))=\\mathit{ov}(u,v)$. Thus, \nthe maximum-weight arcs correspond to pairs $(u,v)\\in P$ with largest overlap.\n\nClearly, setting $F:= F\\cup\\{e,e^R\\}$ results in merging $\\pi'$ with $\\pi$\nand $\\pi^R$ with $\\pi'^R$. 
These paths represent $u\\otimes v$ and $v^R \\otimes u^R$.\nSince the set $S$ in Greedy-R stays reverse-factor-free due to Lemma~\\ref{lem:free} and, in particular,\n$S$ does not contain any pair of strings $v,v^R$ such that $v$ is not a palindrome,\nthe ``$S:=S\\setminus \\{u,u^R,v,v^R\\}$'' instruction always removes exactly two elements of $S$.\nThis means that the bijection between $S$ and pairs of paths formed by $F$ is preserved.\n\nFinally, observe that after exactly $|S|-1$ steps $F$ forms two semi-Hamiltonian paths.\nEither of them can be returned as a solution.\n\n\\subsection{Linear-Time Implementation}\n\nUkkonen \\cite{Uk90} showed that the Greedy algorithm for the original SCS problem\ncan be implemented in linear time based on the overlap-graph interpretation.\nOur linear-time implementation of the Greedy-R2 algorithm is quite similar.\n\nFirst, let us show how to efficiently implement the \\textsl{\\textsc{Make-Reverse-Factor-Free}}\\ operation.\nIt is actually slightly easier to compute it for $\\widetilde{S}$ instead of $S$.\nHowever, this is not an issue: if we substitute $S$ with $\\widetilde{S}$ in the very beginning of the greedy algorithm,\nthen the overlap graph will stay the same.\n\n\\begin{lemma}\\label{lem:mrff}\n The result of \\textsl{\\textsc{Make-Reverse-Factor-Free}}$(\\widetilde{S})$ can be computed in time linear in the total length of strings in $S$.\n\\end{lemma}\n\n\\begin{proof}\n Let us introduce an auxiliary procedure \\textsl{\\textsc{Make-Factor-Free}}$(X)$ \n that removes from $X$ all strings $u$ for which there exists\n a string $v$ in $X$ such that $u$ is a proper factor of $v$.\n \n Ukkonen \\cite{Uk90} applied the following result for the preprocessing phase of the greedy algorithm for the ordinary SCS problem.\n\n \\begin{claim}[\\cite{Uk90}]\n \\textsl{\\textsc{Make-Factor-Free}}$(X)$ can be implemented in time linear in the total length of the strings in $X$.\n \\end{claim}\n \n Observe that in order to compute 
\\textsl{\\textsc{Make-Reverse-Factor-Free}}$(\\widetilde{S})$ it suffices to determine\n $S' = \\,$\\textsl{\\textsc{Make-Factor-Free}}$(\\widetilde{S})$ and then for every pair of strings $u,u^R\\in S'$\n leave exactly one of these strings, e.g., the lexicographically smaller one. \n Note that $u\\in S'$ if and only if $u^R \\in S'$, so for the latter\n it suffices to iterate through $S'$ and report a string $u$ if and only if $u \\le u^R$.\n \n The whole procedure works in linear time in the total length of strings in $\\widetilde{S}$,\n which is at most twice the total length of strings in $S$.\n \\end{proof}\n\nNow, we show the main result.\n\n\\begin{theorem}\n Greedy-R algorithm can be implemented in time linear in the total length of strings in $S$.\n\\end{theorem}\n\\begin{proof}\n By Lemma~\\ref{lem:mrff}, we can make the input set reverse-factor-free in linear time.\n Let $S=\\{s_1,\\ldots,s_m\\}$ be the set of remaining strings and $n$ the total length of strings in $S$.\n \n We actually implement the equivalent algorithm Greedy-R2.\n We cannot store the whole graph $\\barG{S}$, since this would take too much space.\n Therefore we only store its vertex set, whereas we will be considering the edges of the graph in an indirect way.\n \n Denote by $\\Pref(X)$ the set of all different prefixes of strings in $X$.\n Each element of this set can be represented as a state of the Aho-Corasick automaton constructed for $X$, thus\n using $\\Oh(1)$ space per element.\n Further, given a string $w$, denote by $\\PrefSet(w,X)$, $\\SufSet(w,X)$ the sets (represented as lists of identifiers) of strings\n in $X$ having $w$ as a prefix, suffix, respectively.\n Ukkonen \\cite{Uk90} applied the Aho-Corasick automaton for $X$ to show the following fact:\n\n \\begin{claim}[\\cite{Uk90}]\n $\\PrefSet(w,X)$, $\\SufSet(w,X)$ for all $w \\in \\Pref(X)$ can be computed in time\n linear in the total length of the strings in $X$.\n \\end{claim}\n In the implementation we use sets of 
the form $\\Pref(\\widetilde{S})$, $\\PrefSet(w,\\widetilde{S})$ and $\\SufSet(w,\\widetilde{S})$.\n Our implementation actually requires $\\PrefSet$ and $\\SufSet$ sets to consist of vertices $\\alpha\\in V(\\barG{S})$\n instead of strings $\\mathit{str}(\\alpha)$. Thus, we replace every string identifier with one or two vertex identifiers\n if the string is not a palindrome or is a palindrome, respectively.\n\n Instead of simulating the main loop directly, we will consider all overlaps $w$ of $\\mathit{str}(\\alpha)$ and $\\mathit{str}(\\beta)$\n to find the maximum-weight arc $e=(\\alpha,\\beta)\\in E(\\barG{S})$. \n Observe that at subsequent iterations of the loop, the weight $w(e)$ may only decrease.\n Hence, we iterate over all $w \\in \\Pref(\\widetilde{S})$ in decreasing length order\n and for each $w$ check if there exists an appropriate arc $e=(\\alpha,\\beta)$ such that $F\\cup\\{e,e^R\\}$\n forms a collection of disjoint paths. More formally, we seek vertices $\\alpha,\\beta\\in V(\\barG{S})$ such that:\n \\begin{enumerate}[(a)]\n \\item a path formed by $F$ ends in $\\alpha$ and a \\emph{different} path formed by $F$ starts at $\\beta$,\n \\item $\\beta \\ne \\alpha^R$, and\n \\item $w$ is the overlap of $\\mathit{str}(\\alpha)$ and $\\mathit{str}(\\beta)$.\n \\end{enumerate}\n Once we find such an arc $e$, we set $F:=F\\cup\\{e,e^R\\}$. \n\n Let us explain how this approach can be implemented efficiently. 
\n To iterate through all $w \\in \\Pref(\\widetilde{S})$ in decreasing length order we simply traverse all the states of the Aho-Corasick automaton\n in reverse-BFS order, breaking ties arbitrarily.\n To check the conditions (a)-(c), we could iterate through pairs of elements\n $\\beta \\in \\PrefSet(w,\\widetilde{S})$ and $\\alpha \\in \\SufSet(w,\\widetilde{S})$ and verify if they satisfy the conditions.\n However, to avoid repetitively scanning \\emph{redundant} elements, we remove $\\beta\\in \\PrefSet(w,\\widetilde{S})$\n and $\\alpha\\in \\SufSet(w,\\widetilde{S})$ if no path starts in $\\beta$ and no path ends in $\\alpha$, respectively.\n For each vertex we will remember if a path starts or ends there, and, if so, what is the other endpoint of the path.\n Observe that for each $\\alpha \\in \\SufSet(w,\\widetilde{S})$ there are at most two non-redundant elements $\\beta \\in \\PrefSet(w,\\widetilde{S})$\n for which the arc $e=(\\alpha,\\beta)$ is not valid.\n Indeed, these might only be $\\alpha^R$ and the starting vertex of the path ending at $\\alpha$. 
\n \n Hence, with amortized constant-time overhead, either an arc $e=(\\alpha,\\beta)$ satisfying (a)-(c) is found,\n or it can be verified that no such arc exists for this particular string $w$ and then\n we can continue iterating through strings in $\\Pref(\\widetilde{S})$.\n If an arc is found, we start the next search with the same string $w$, since\n there could still be arcs satisfying conditions (a)-(c) for the same string $w$.\n \n\n To conclude: in every step of the simulation, in amortized constant time we either find a pair of arcs to be introduced to $F$\n or discard the given candidate $w \\in \\Pref(\\widetilde{S})$.\n The former situation takes place at most $m-1$ times and the latter happens at most $n$ times.\n The whole algorithm thus works in $\\Oh(n)$ time.\n \\end{proof}\n\n\n\\section{Compression Ratio}\n\nIn this section we prove that the compression ratio of the Greedy-R algorithm is always at least $\\frac12$, and that this value is effectively achieved, so the bound is tight.\n\nLet $S=\\{s_1,\\ldots,s_m\\}$ be the input set of strings.\nWe assume that $S$ is already reverse-factor-free.\nLet $\\opt(S)$ be the length of a longest semi-Hamiltonian path in $G=\\barG{S}$.\nLet $\\mathit{pgreedy}(S)$ denote the length of the semi-Hamiltonian path produced by the Greedy-R algorithm for $S$.\nWe will show that $\\mathit{pgreedy}(S) \\ge \\frac12 \\opt(S)$.\n\nIn the proof we use as a tool the following fact from \\cite{TaUk88} (see Lemma~3.1 in \\cite{TaUk88}):\n\\begin{lemma}\\label{lem:4vert}\n If strings $x_1,x_2,x_3,x_4$ satisfy\n $$\\max(\\mathit{ov}(x_1,x_4),\\mathit{ov}(x_2,x_3)) \\le \\mathit{ov}(x_1,x_3),$$\n then\n $$\\mathit{ov}(x_1,x_4)+\\mathit{ov}(x_2,x_3) \\le \\mathit{ov}(x_1,x_3)+\\mathit{ov}(x_2,x_4).$$\n\\end{lemma}\n\nWe proceed with the following crucial lemma.\n\n\\begin{lemma}\\label{lem:compression_ratio}\n Let $S$ be a set of strings and let $u,v \\in \\widetilde{S}$ be two elements for which $\\mathit{ov}(u,v)$ is 
maximal\n ($u \\notin \\{v,v^R\\}$).\n Set $\\mathit{OV}=\\mathit{ov}(u,v)$.\n Let $\\opt(S)$ be the length of a longest semi-Hamiltonian path in $G$\n and let $\\opt'(S)$ be the length of a longest semi-Hamiltonian path in $G$ that contains the arc $(u,v)$.\n Then:\n $$\\opt'(S) \\ge \\opt(S)-\\mathit{OV}.$$\n\\end{lemma}\n\n\\begin{proof}\n We consider the path $\\pi$ corresponding to $\\opt(S)$ and show how it can be modified\n without losing its semi-Hamiltonicity so that the arc $(u,v)$ occurs in the path and the length of the path decreases by at most $\\mathit{OV}$.\n\n Obviously, if $\\pi$ already contains the arc $(u,v)$, nothing is to be done.\n If both $u$ and $v$ occur in $\\pi$, we perform transformations as in the proof of a similar fact\n from \\cite{TaUk88} related to the ordinary SCS problem.\n If $u$ occurs in $\\pi$ before $v$ then we select $\\pi'$ as in Fig.~\\ref{fig:case_a} and:\n $$\\mathit{ov}(\\pi') \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(u,b)-\\mathit{ov}(c,v)+\\mathit{ov}(u,v) \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(u,b) \\ge \\mathit{ov}(\\pi)-\\mathit{OV}.$$\n Note that both nodes $b, c$ exist (they could be the same node, though).\n If any of the remaining nodes does not exist, it is simply skipped on the path.\n\n \\begin{figure}[htpb]\n \\begin{center}\n \\tikzstyle{mynode} = [circle,draw]\n \\begin{tikzpicture}[scale=0.85]\n \\draw (-1,0) node {$\\pi$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,4\/u,5\/b,8\/c,9\/v,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/u,u\/b,c\/v,v\/d}\n \\draw[-latex] (\\x) -- (\\y);\n\n \\begin{scope}[yshift=-1cm]\n \\draw (-1,0) node {$\\pi'$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,4\/u,5\/b,8\/c,9\/v,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] 
(\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/u,v\/d}\n \\draw[-latex] (\\x) -- (\\y);\n \\draw[-latex] (c) .. controls (5,-2) and (3,-2) .. (s); %\n \\draw[-latex] (u) .. controls (5.5,-1.5) and (7.5,-1.5) .. (v); %\n \\end{scope}\n \\end{tikzpicture}\n \\end{center}\n \\vspace*{-0.4cm}\n \\caption{\\label{fig:case_a}Proof of Lemma~\\ref{lem:compression_ratio}, first case: $u$ occurs in $\\pi$ before $v$.}\n\\end{figure}\n\n If $v$ occurs before $u$, then by applying the inequality of Lemma~\\ref{lem:4vert}\n (with $x_1=u$, $x_2=a$, $x_3=v$, $x_4=d$)\n we have \n $$\\mathit{ov}(u,d)+\\mathit{ov}(a,v) \\le \\mathit{ov}(u,v)+\\mathit{ov}(a,d).$$\n By this inequality, for the path $\\pi'$ defined in Fig.~\\ref{fig:case_b} we have:\n \\begin{align*}\n \\mathit{ov}(\\pi') &\\ge \\mathit{ov}(\\pi)-\\mathit{ov}(a,v)-\\mathit{ov}(u,d)+\\mathit{ov}(u,v)+\\mathit{ov}(a,d)-\\mathit{ov}(v,b)\\\\\n &\\ge \\mathit{ov}(\\pi)-\\mathit{ov}(v,b) \\ge \\mathit{ov}(\\pi)-\\mathit{OV}.\n \\end{align*}\n As before, if any of the depicted nodes does not exist, we simply skip the corresponding part of the path.\n In particular, if any of the nodes $u,v$ is an endpoint of the path $\\pi$, we do not need to use the aforementioned\n inequality to show that $\\mathit{ov}(\\pi') \\ge \\mathit{ov}(\\pi)-\\mathit{OV}$.\n\n \\begin{figure}[htpb]\n \\begin{center}\n \\tikzstyle{mynode} = [circle,draw]\n \\begin{tikzpicture}[scale=0.85]\n \\draw (-1,0) node {$\\pi$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,4\/v,5\/b,8\/c,9\/u,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/v,v\/b,c\/u,u\/d}\n \\draw[-latex] (\\x) -- (\\y);\n\n \\begin{scope}[yshift=-1.5cm]\n \\draw (-1,0) node {$\\pi'$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,4\/v,5\/b,8\/c,9\/u,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in 
{s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {c\/u}\n \\draw[-latex] (\\x) -- (\\y);\n \\draw[-latex] (a) .. controls (5,-1.5) and (8,-1.5) .. (d); %\n \\draw[-latex] (t) .. controls (10,-2) and (8,-2) .. (b); %\n \\draw[-latex] (u) .. controls (7.5,1.5) and (5.5,1.5) .. (v); %\n \\end{scope}\n \\end{tikzpicture}\n \\end{center}\n \\vspace*{-0.4cm}\n \\caption{\\label{fig:case_b}Proof of Lemma~\\ref{lem:compression_ratio}, second case: $u$ occurs in $\\pi$ after $v$.}\n\\end{figure}\n\n In contrast to the original SCS problem considered in \\cite{TaUk88}, it\n might not be the case that $u$ and $v$ are in $\\pi$.\n If neither of them is, then $\\pi$ contains both $u^R$ and $v^R$ and by reversing $\\pi$ (that is, taking the path $\\pi^R$)\n we obtain a semi-Hamiltonian path that contains both $u$ and $v$,\n which was the case considered before.\n Thus, we can assume that $u$ and $v^{R}$ occur in $\\pi$ (the case of $u^{R}$ and $v$ is symmetric).\n Again, we have two cases, depending on which of the two nodes comes first in $\\pi$.\n If $u$ occurs before $v^{R}$ in $\\pi$, then we have (note that $\\mathit{ov}(c,v^{R})=\\mathit{ov}(v,c^{R})$):\n $$\\mathit{ov}(\\pi') \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(u,b)-\\mathit{ov}(v^{R},d)+\\mathit{ov}(u,v) \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(u,b) \\ge \\mathit{ov}(\\pi)-\\mathit{OV},$$\n see also Fig.~\\ref{fig:case_c}.\n \nFinally, if $v^{R}$ occurs before $u$, then (see Fig.~\\ref{fig:case_d}):\n $$\\mathit{ov}(\\pi') \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(v^{R},b)-\\mathit{ov}(u,d)+\\mathit{ov}(u,v) \\ge \\mathit{ov}(\\pi)-\\mathit{ov}(v^{R},b) \\ge \\mathit{ov}(\\pi)-\\mathit{OV}.$$\n\n\\begin{figure}\n \\begin{center}\n \\tikzstyle{mynode} = [circle,draw]\n \\begin{tikzpicture}[scale=0.85]\n \\draw (-1,0) node {$\\pi$};\n \\draw (9,0) node[mynode,inner sep=0pt,minimum size=0.58cm] (vr) {\\scriptsize $v^{R}$};\n \\foreach \\x\/\\c in 
{0.1\/s,3\/a,4\/u,5\/b,8\/c,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/u,u\/b,c\/vr,vr\/d}\n \\draw[-latex] (\\x) -- (\\y);\n\n \\begin{scope}[yshift=-2cm]\n \\draw (-1,0) node {$\\pi'$};\n \\draw (5,0) node[mynode,inner sep=0pt,minimum size=0.58cm] (br) {\\scriptsize $b^{R}$};\n \\draw (8,0) node[mynode,inner sep=0pt,minimum size=0.58cm] (cr) {\\scriptsize $c^{R}$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,4\/u,9\/v,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,cr\/br,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/u,v\/cr}\n \\draw[-latex] (\\x) -- (\\y);\n \\draw[-latex] (u) .. controls (5.5,-1.5) and (7.5,-1.5) .. (v); %\n \\draw[-latex] (br) .. controls (6.5,1.5) and (8.5,1.5) .. (d); %\n \\end{scope}\n \\end{tikzpicture}\n \\end{center}\n \\vspace*{-0.4cm}\n \\caption{\\label{fig:case_c}Proof of Lemma~\\ref{lem:compression_ratio}, third case: $u$ occurs in $\\pi$ before $v^R$.}\n\\end{figure}\n\n\\begin{figure}\n\\bigskip\n \\begin{center}\n \\tikzstyle{mynode} = [circle,draw]\n \\begin{tikzpicture}[scale=0.85]\n \\draw (-1,0) node {$\\pi$};\n \\draw (4,0) node[mynode,inner sep=0pt,minimum size=0.56cm] (vr) {\\scriptsize $v^{R}$};\n \\foreach \\x\/\\c in {0.1\/s,3\/a,5\/b,8\/c,9\/u,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {s\/a,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {a\/vr,vr\/b,c\/u,u\/d}\n \\draw[-latex] (\\x) -- (\\y);\n\n \\begin{scope}[yshift=-2cm]\n \\draw (-1,0) node {$\\pi'$};\n \\draw (0.1,0) node[mynode,inner sep=0pt,minimum size=0.56cm] (sr) {\\scriptsize $s^{R}$};\n \\draw (3,0) node[mynode,inner sep=0pt,minimum size=0.56cm] (ar) {\\scriptsize $a^{R}$};\n \\foreach \\x\/\\c in 
{4\/v,5\/b,8\/c,9\/u,10\/d,13\/t}\n \\draw (\\x,0) node[mynode] (\\c) {\\it\\scriptsize \\c};\n \\foreach \\x\/\\y in {ar\/sr,b\/c,d\/t}\n \\draw[decorate, decoration={snake},-latex] (\\x) -- (\\y);\n \\foreach \\x\/\\y in {v\/ar,c\/u}\n \\draw[-latex] (\\x) -- (\\y);\n \\draw[-latex] (u) .. controls (7.5,1.5) and (5.5,1.5) .. (v); %\n \\draw[-latex] (sr) .. controls (3,-2) and (7,-2) .. (d); %\n \\end{scope}\n \\end{tikzpicture}\n \\end{center}\n \\vspace*{-0.4cm}\n \\caption{\\label{fig:case_d}Proof of Lemma~\\ref{lem:compression_ratio}, fourth case: $u$ occurs in $\\pi$ after $v^R$.}\n\\end{figure}\n\nThis completes the proof of the lemma.\n \\end{proof}\n\nTo conclude the proof of the compression ratio of the Greedy-R algorithm we use an inductive argument.\nAssume that $\\mathit{pgreedy}(S') \\ge \\frac12 \\opt(S')$ for all $S'$ such that $|S'|<|S|$.\nBy Lemma~\\ref{lem:compression_ratio}, for $S'=S \\setminus \\{u,v,u^R,v^R\\} \\cup \\{u \\otimes v\\}$:\n$$\\opt(S)-\\mathit{OV} \\le \\opt'(S) \\le \\opt(S')+\\mathit{OV},$$\nso that $\\opt(S')+2\\mathit{OV} \\ge \\opt(S)$.\nMoreover, $\\mathit{pgreedy}(S)=\\mathit{pgreedy}(S')+\\mathit{OV}$ and, by the inductive hypothesis, $\\mathit{pgreedy}(S') \\ge \\frac12 \\opt(S')$.\nConsequently:\n$$\\mathit{pgreedy}(S) = \\mathit{pgreedy}(S')+\\mathit{OV} \\ge \\frac12 (\\opt(S')+2\\mathit{OV}) \\ge \\frac12 \\opt(S).$$\nWe arrive at the following theorem.\n\n\\begin{theorem}\n Greedy-R has compression ratio $\\frac12$.\n\\end{theorem}\n\nIt turns out that the bound on the compression ratio of the Greedy-R algorithm is tight.\nIt suffices to consider the set of strings $\\{ab^h,b^hc,b^{h+1}\\}$\n(this is actually the same example as from the analysis of the Greedy algorithm \\cite{BlJiLiTrYa94}).\nThe output of Greedy-R can be the string $ab^hcb^{h+1}$ with total overlap $h$,\nwhereas an optimal solution to SCS-R is $ab^{h+1}c$ of total overlap $2h$.\n\n\n\\section{NP-completeness of the SCS-R Problem}\n\n \n Let 
$\\Gamma=\\Sigma\\cup\\{\\$,\\#\\}$, where $ \\$,\\# \\not \\in \\Sigma$.\n Let $h$ be the morphism from $\\Sigma^*$ to $\\Gamma^*$ defined by $h(c)=\\$\\#\\,c$, for every $c \\in \\Sigma$.\n Given $k\\geq 1$, we also define the morphism $g_k(c)=c^k$, for every $c \\in \\Sigma$. \n\n \\begin{observation}\\label{obs:small_ov}\n For all nonempty strings $u,v$, the strings $h(u)$ and $h(v)^{R}$ have an overlap of length at most one.\n \\end{observation}\n\n \\begin{observation}\\label{obs:multiple}\n For all nonempty strings $u,v$, one has\n $$|h(g_k(u))|=3k|u| \\quad\\mbox{and }\\quad \\mathit{ov}(h(g_k(u)),h(g_k(v))) = 3k\\cdot\\mathit{ov}(u,v).$$\n \\end{observation}\n\n In the decision version of the SCS-R problem we need to check if a given set of strings\n admits a common superstring with reversals of length at most $\\ell$, where $\\ell$ is\n an additional input parameter.\n\n \\begin{proposition}\\label{NP}\n The decision version of the SCS-R problem is NP-complete.\n \\end{proposition}\n \\begin{proof}\n Let $s_1,\\ldots,s_m$ be an instance of the SCS problem.\n Set $y_i=h(g_m(s_i))$, for $i=1,\\ldots,m$.\n We have the following claim.\n\n \\begin{claim}\n There exists a common superstring of $s_1,\\ldots,s_m$ of length at most $\\ell$\n if and only if\n there exists a common superstring with reversals of $y_1,\\ldots,y_m$ of length at most $3m\\ell$.\n \\end{claim}\n \\begin{proof}\n $(\\Rightarrow)$\n If $u$ is a common superstring of $s_1,\\ldots,s_m$,\n then $h(g_m(u))$ is a common superstring of $y_1,\\ldots,y_m$.\n Hence, $h(g_m(u))$ is also a common superstring with reversals of $y_1,\\ldots,y_m$.\n If $|u|=\\ell$, then $|h(g_m(u))|=3m\\ell$.\n\n $(\\Leftarrow)$\n Let $u$ be a common superstring with reversals of $y_1,\\ldots,y_m$.\n We will show that, thanks to the special form of the strings, there exists\n a common superstring \\emph{without} reversals of $y_1,\\ldots,y_m$ of length not much greater than $|u|$.\n\n Let 
$\\pi=z_{i_1},\\ldots,z_{i_m}$ be the sequence of nodes on the path in the overlap graph $G$\n that corresponds to $u$;\n here $\\{i_1,\\ldots,i_m\\}=\\{1,\\ldots,m\\}$ and each $z_i$ is either $y_i$ or $y^R_i$. Let us construct a new sequence of nodes $\\pi'$, that first contains all nodes from $\\pi$\n of the form $y_i$ in the same order as in $\\pi$, and then all nodes from $\\pi$\n of the form $y_i^R$, but given in the reverse order and taken without reversal.\n Let $u'$ be the common superstring corresponding to $\\pi'$.\n By Observation~\\ref{obs:small_ov}, $|u'| \\le |u|+m-1$.\n Note that $u'$ is an ordinary common superstring of $y_1,\\ldots,y_m$.\n\n By Observation~\\ref{obs:multiple}, $|u'|$ is a multiple of $3m$ and $u'$\n corresponds to a common superstring $v$ of $s_1,\\ldots,s_m$\n of length $|u'|\/(3m)$.\n If $|u|\\le 3m\\ell$, then\n $$|v|=\\frac{|u'|}{3m} \\le \\left\\lfloor\\frac{|u|+m-1}{3m}\\right\\rfloor \\le \\ell.$$\n \n \\vspace{-0.5cm}\n \\end{proof}\n \n The claim provides a reduction of the decision version of the SCS problem to the decision version of the SCS-R problem.\n This shows that the latter is NP-hard, hence NP-complete, as it is obviously in NP.\n \\end{proof}\n\n\n\n\\section{Acknowledgments}\n\nThe authors thank anonymous referees for a number of helpful comments and remarks.\n\nThis work started during a visit of Gabriele Fici to the University of Warsaw funded by the Warsaw Center of Mathematics and Computer Science. 
\n\nGabriele Fici is supported by the PRIN 2010\/2011 project ``Automi e Linguaggi Formali: Aspetti Matematici e Applicativi'' of the Italian Ministry of Education (MIUR).\nTomasz Kociumaka is supported by Polish budget funds for science in 2013-2017 as a research project under the `Diamond Grant' program.\nJakub Radoszewski, Wojciech Rytter and Tomasz Wale\\'n are supported by the Polish National Science Center, grant no.~2014\/13\/B\/ST6\/00770.\n\n\n\\bibliographystyle{plain}\n\n\\section{Introduction}\n\\label{sec:Introduction}\nThe solution of constitutive equations for viscoelastic fluids involves some important considerations, as for instance, the theoretical issues concerning the existence results~\\cite{Chupin2018,Ervin2003,Lukakova2017,Renardy1991}, and the development of numerical schemes for solving complex fluid flows~\\cite{Dellar2014,Han2000,Lee2004}. \n\\par\nSome forms of viscoelastic constitutive equations can be constructed considering the\nupper-convected time derivative or Oldroyd derivative~\\cite{Oldroyd1950}, which is defined as\n\\begin{equation}\n\t\\uctd{\\zeta} \\coloneqq \\frac{\\partial {\\zeta}}{\\partial t} + \\left( {u} \\cdot \\nabla \\right) {\\zeta} - (\\nabla {u}) {\\zeta} - {\\zeta}(\\nabla {u})^\\top,\n\t\\label{der}\n\t\\end{equation}\nwhere $u(x,t) \\in \\mathbb{R}^d$ is the velocity field of the flow and ${\\zeta}(x,t) \\in \\mathbb{R}^{d\\times d}_{\\rm sym}$ is a tensor representing the non-Newtonian contribution for $d=(1,) 2, 3$.\nRoughly speaking, the derivative form of~\\eqref{der} is generally used for describing responses of viscoelastic fluids, as for instance, the deformation induced by the rate of strain. Therefore, the upper-convected time derivative~\\eqref{der} is employed to formulate the constitutive equations of the\nmost popular models, as for instance the Oldroyd-B, Phan-Thien--Tanner~(PTT), Giesekus, etc.~\\cite{MalPruSkrSul-2018,RenardyBook}. 
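To make the definition~\eqref{der} concrete, the following sketch evaluates each term of the upper-convected time derivative by central finite differences for a steady, manufactured pair $(u,\zeta)$; the chosen fields and the Jacobian convention $(\nabla u)_{ij}=\partial u_i/\partial x_j$ are our own illustrative assumptions, not taken from any of the cited models.

```python
import numpy as np

def u(x, y):
    # illustrative steady velocity field (our assumption, not from a cited model)
    return np.array([y**2, 0.0])

def zeta(x, y):
    # illustrative symmetric tensor field: zeta = r r^T with r = (x, y)
    return np.array([[x * x, x * y], [x * y, y * y]])

def grad_u(x, y, h=1e-5):
    # Jacobian with the convention (grad u)_{ij} = d u_i / d x_j
    J = np.empty((2, 2))
    J[:, 0] = (u(x + h, y) - u(x - h, y)) / (2 * h)
    J[:, 1] = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return J

def upper_convected(x, y, h=1e-5):
    # For steady fields d zeta / d t = 0, so the derivative reduces to
    # (u . grad) zeta - (grad u) zeta - zeta (grad u)^T, as in the definition.
    dzdx = (zeta(x + h, y) - zeta(x - h, y)) / (2 * h)
    dzdy = (zeta(x, y + h) - zeta(x, y - h)) / (2 * h)
    ux, uy = u(x, y)
    G = grad_u(x, y, h)
    Z = zeta(x, y)
    return ux * dzdx + uy * dzdy - G @ Z - Z @ G.T
```

For these particular fields a short hand computation gives $\begin{pmatrix} -2xy^2 & -y^3 \\ -y^3 & 0 \end{pmatrix}$, which the finite-difference evaluation reproduces to high accuracy; note that the result is symmetric, consistent with $\zeta$ taking values in $\mathbb{R}^{d\times d}_{\rm sym}$.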
\n\\par\nIn particular, we are interested in the numerical approximations for model equations based on the classical differential constitutive equation for the Oldroyd-B fluid in a dimensionless form:\n\t\\begin{equation}\n\t{\\zeta} + Wi \\, \\uctd{\\zeta} = 2\\left( 1 - \\beta \\right) {D}({u}),\n\t\\label{eq:OB}\n\t\\end{equation}\nwhere ${D}({u}) = {[\\nabla {u} + (\\nabla {u})^\\top]\/{2}}$ is the strain-rate tensor, and the non-dimensional positive parameters $Wi$ and $\\beta$ are respectively the Weissenberg number and the viscosity ratio~$(\\beta \\in (0,1))$.\n\\par\nThe Weissenberg number \\cite{White:1964} is a parameter related to the memory of the fluid, i.e., for a viscoelastic material, $Wi$ is a dimensionless number which represents the relaxation time of the fluid. From a rheological point of view, the Weissenberg number can be interpreted as a measure of the competition between the elastic and viscous forces underlying viscoelasticity. A simple way to interpret the mathematical effect of this non-dimensional number is to consider $Wi=0$ in Eq.~\\eqref{eq:OB}: in this case, the stress, represented here by $\\zeta$, is given by an explicit relation with the strain-rate tensor $D(u)$. Otherwise, for $Wi\\neq 0$, the relation between the stress and the velocity gradient (rate of strain) is modeled by a differential equation such as Eq.~\\eqref{eq:OB}. Notice that, as the value of the Weissenberg number in Eq.~\\eqref{eq:OB} increases, the convected time derivative assumes a more significant role in the equation, and therefore the numerical treatment of this term needs to be improved in order to obtain a correct approximation of the solution. 
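As a small worked example of the role of $Wi$ (under our illustrative assumption of a steady, homogeneous simple shear $u=(\dot{\gamma}y,0)$, which is not a flow analyzed in this paper), the constitutive equation~\eqref{eq:OB} reduces to a $3\times 3$ linear system for the stress components, whose solution exhibits the normal stress $\zeta_{xx}=2(1-\beta)Wi\,\dot{\gamma}^2$ switched on by the Weissenberg number:

```python
import numpy as np

def steady_shear_oldroyd_b(Wi, beta, gdot):
    # In steady simple shear u = (gdot * y, 0) the fields are homogeneous,
    # so (u . grad) zeta = 0 and the constitutive equation becomes linear in
    # the unknowns (zeta_xx, zeta_xy, zeta_yy):
    #   zeta_xx - 2 Wi gdot zeta_xy = 0
    #   zeta_xy -   Wi gdot zeta_yy = (1 - beta) gdot
    #   zeta_yy                     = 0
    A = np.array([[1.0, -2.0 * Wi * gdot, 0.0],
                  [0.0, 1.0, -Wi * gdot],
                  [0.0, 0.0, 1.0]])
    b = np.array([0.0, (1.0 - beta) * gdot, 0.0])
    return np.linalg.solve(A, b)
```

For $Wi=0$ only the viscous shear stress $\zeta_{xy}=(1-\beta)\dot{\gamma}$ survives, while for $Wi>0$ an elastic normal stress proportional to $Wi$ appears.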
More details concerning the effect of the Weissenberg number on the partial differential equations which describe viscoelastic fluid flows can be found in the works of Renardy \\cite{RenardyAnnual,RenardyBook}.\n\\par\nFrom a numerical point of view, in order to preserve the stability of the solutions, Eulerian frameworks for solving equation~\\eqref{eq:OB} need to apply a high-order spatial discretization for treating the convective terms in~\\eqref{der}. Generally, the methods for dealing with convection-dominant terms of the upper-convected time derivative are based on the explicit and implicit upwind methodologies~\\cite{Baba1981,Harten1984,Schlichting2017}. Considering explicit upwind strategies, many numerical approaches have been proposed in the literature for solving constitutive equations of viscoelastic models based on Eq.~\\eqref{eq:OB}, e.g. the Eulerian schemes using Finite-Element~(FE) \\cite{Castillo,Fortin,Hulsen,Sandri}, Finite-Volume~(FV) \\cite{Alves2003,Darwish,Oliveira,Pimenta}, Finite-Difference~(FD)~\\cite{Franca,Martins2015,Tome}, etc. It is worth noticing that the main drawback of explicit upwind schemes is the severe time-step limitation, a typical example being the so-called CFL condition, and the application of implicit time integrators has been used for developing more robust frameworks \\cite{Breuss2006,Schlichting2017,Yee1996}. However, the construction of fully implicit upwind algorithms is complex, in general resulting in computationally expensive schemes due to the solution of large systems. An additional drawback of implicit upwind schemes for solving convection-dominant problems is the excessive numerical diffusion. 
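To illustrate the time-step restriction discussed above, consider a minimal explicit first-order upwind discretization of the scalar model problem $\zeta_t + a\,\zeta_x = 0$ with constant $a>0$ (an illustration only, not one of the viscoelastic schemes cited above), which is stable only under the CFL condition $a\,\Delta t/\Delta x \le 1$:

```python
import numpy as np

def upwind_step(z, a, dt, dx):
    """One explicit first-order upwind step for z_t + a z_x = 0 with a > 0.

    Stability requires the CFL condition c = a*dt/dx <= 1; for c > 1 the
    scheme amplifies perturbations, which is the time-step limitation
    referred to in the text."""
    c = a * dt / dx
    zn = z.copy()
    zn[1:] = z[1:] - c * (z[1:] - z[:-1])  # upwind (backward) difference
    return zn
```

For $c=1$ the update degenerates to an exact shift of the profile by one grid cell per step, while for $c<1$ the scheme introduces the numerical diffusion mentioned above.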
\n\\par\nIn a different framework, Lagrangian methods combined with the method of characteristics~\\cite{Benitez2012,BermejoGalanSaavedra2012,Douglas1982,NT-2016-M2AN,Sul-1988} for solving viscoelastic fluid flows have been proposed by~\\cite{Baranger1997,Basombrio1991,Hadj1990,Notsu2015,LMNT-Peterlin_Oseen_Part_I,LMNT-Peterlin_Oseen_Part_II,Machmoum2001}. In these schemes, the Eulerian discretization of the convective term in~\\eqref{der}, i.e., $(u\\cdot\\nabla) \\zeta$, is avoided by using a Lagrangian discretization of the material derivative, i.e., $\\partial\\zeta\/\\partial t + (u\\cdot\\nabla) \\zeta$, with the idea of the method of characteristics.\nThe idea is to consider the trajectory of a fluid particle and discretize the material derivative along the trajectory.\nSince this approach is natural from a physical viewpoint and such Lagrangian schemes have several advantages (e.g., the symmetry of the resulting coefficient matrices of the systems of linear equations in the implicit framework, no artificial parameters, and no need of the so-called CFL condition), they are useful for flow problems appearing in the field of scientific computing.\n\\par\nA different approach for avoiding numerical instabilities and obtaining accurate solutions of Eq.~\\eqref{eq:OB} is mathematically rooted in the concept of the generalized Lie derivatives~(GLD)~\\cite{Lee2006,LeeXuZhang2011,LeeBook2011}, which modifies the definition of Eq.~\\eqref{der}. In particular, this elegant methodology was first presented by Lee and Xu~\\cite{Lee2006} (see also a similar idea proposed in~\\cite{Petera2002}). In that pioneering work, the authors reformulated Eq.~\\eqref{eq:OB} using some mathematical properties to define generalized Riccati equations in terms of GLD. In summary, the upper-convected time derivative~\\eqref{der} was rewritten using the concept of the transition matrix. This idea was adopted in the context of the finite element discretization in Lee et al. 
\\cite{LeeBook2011} to numerically solve the Poiseuille flow between two parallel plates around a cylinder, while in~\\cite{Lee2006} the authors presented theoretical results concerning the discretized version of the formulation proposed in~\\cite{LeeBook2011}.\n\\par\nIn spite of the good stability properties observed in the numerical results and the sophisticated theoretical analysis of the works in~\\cite{Lee2006,LeeBook2011}, to the best knowledge of the authors, the application of the GLD for solving equations in the form of~\\eqref{eq:OB} is limited to finite element discretizations, resulting in schemes of (mainly) first-order in time.\nIn~\\cite{Lee2006}, two finite element schemes of second-order in time are presented based on the Crank--Nicolson or the Adams--Bashforth method along the trajectory of a fluid particle. There is, however, no truncation error analysis of second-order in time and there are no numerical results yet, while numerical results by a GLD-based finite element scheme of first-order in time are given in~\\cite{LeeBook2011}.\nTherefore, the main contributions of this work can be summarized as follows: i)~the combination of the GLD strategy with the method of characteristics to develop temporal second-order finite difference schemes for treating the upper-convected time derivative~\\eqref{der}, and ii)~the application of simple stable algorithms avoiding the need to solve large systems, as commonly occurs for implicit upwind schemes.\n\\par\nIn this paper, we present finite difference approximations of the upper-convected time derivative~\\eqref{der} based on GLD, and apply them to simple models.\nThe approximations are of second-order in time, where the truncation error of second-order in time is proved in Theorem~\\ref{thm:sec_order}, and a practical form is given in Corollary~\\ref{cor:sec_order}.\nTo the best knowledge of the authors, the form, cf.~\\eqref{eq:sec_order_n}, in the corollary is new, and there are no proofs of 
truncation error of second-order in time for time-discretized approximations using the GLD approach.\nCombining the approximation with the (bi)linear~($p=1$) and (bi)quadratic~($p=2$) Lagrange interpolations, we present full discretizations of the upper-convected time derivative of second-order in time and $p$-th order in space, i.e., $O(\\Delta t^2+h^p)$, which are proved in Theorem~\\ref{prop:sec_order_p_order}.\nWe present two numerical schemes for simple models in $d$-dimensional spaces~($d=1,2$), cf.~\\eqref{scheme}, which are both explicit.\nThe difference between the schemes is the accuracy in space, i.e., one is of first-order~($p=1$) and the other is of second-order~($p=2$) in space, as (bi)linear and (bi)quadratic Lagrange interpolation operators have been employed, respectively.\nAfter the presentation of the schemes, numerical experiments for simple models in $d$-dimensional spaces~($d=1,2$) are presented.\nThey are consistent with the theoretical accuracies shown in Theorem~\\ref{prop:sec_order_p_order}.\nIn the case of Lagrangian finite element methods (often called Lagrange--Galerkin methods), numerical integration is often employed in actual computation for the integration of a composite function, since it is not easy to compute the integration of a composite function exactly.\nIn fact, a rough numerical integration may cause instability, cf.~\\cite{Tab-2007,TabFuj-2006}, where the robustness of a scheme of second-order in time with a choice of $\\Delta t$ depending on $h$ is discussed.\nOn the other hand, a quadrature-free scheme is proposed by using a mass-lumping technique in~\\cite{PirTab-2010}, and schemes with the exact integration of a composite function are proposed by introducing a linear interpolation of the velocity and implemented in two-dimensional numerical experiments in~\\cite{TabUch-2016-CD,Tabata2018}.\nIn these quadrature-free schemes, there is no discrepancy between the theory and real computation.\nBesides them, to the 
best of our knowledge, it is still a standard technique for the integration of a composite function to employ a high-order quadrature rule, cf., e.g., \\cite{BermejoSaavedra2012,ColeraCarpioBermejo2021,HNY-2016,NT-2015-JSC,NT-2016-M2AN}, whose computation cost depends mainly on the number of quadrature points.\nIn the end, we need to choose a suitable high-order quadrature rule by considering the computation cost and the \\emph{error} depending on the (expected) solution, $\\Delta t$, $h$ and so on.\nIn the case of the Lagrangian finite difference method, however, there is no need to choose a quadrature rule as no integration is used. \nThis is an advantage of the Lagrangian finite difference method, cf.~\\cite{NRT-2013}.\nThe GLD-type Lagrangian finite difference schemes which will be presented in this paper also have this advantage.\n\\par\nThe paper is organized as follows.\nIn Section~\\ref{sec:Preliminaries}, basic concepts for the flow map and the upper-convected time derivative in the framework of the generalized Lie derivative, together with a simple model to be dealt with in this paper, are introduced.\nIn Section~\\ref{sec:FD_discretizations}, finite difference discretizations of the upper-convected time derivative are presented, where truncation errors are proved.\nIn Section~\\ref{nm}, GLD-type numerical schemes of second-order in time and $p$-th order in space for the simple model and their algorithms are presented.\nIn Section~\\ref{numerics}, numerical results by our schemes are presented to examine the experimental orders of convergence.\nIn Section~\\ref{sec:conclusions}, conclusions are given.\nIn the Appendix, properties of GLD introduced in Section~\\ref{sec:Preliminaries} are proved, and the main algorithms of the work are described in detail.\n\\section{Preliminaries}\n\\label{sec:Preliminaries}\nIn this section, we present some basic concepts concerning the flow map and the ideas of the generalized Lie derivatives. 
For these purposes, we need to consider some mathematical statements.\n\\par\nLet $\\Omega \\subset \\mathbb{R}^d~(d=1, 2, 3)$ be a bounded domain and $T$ be a positive constant.\nLet $u:\\Omega\\times (0,T) \\to \\mathbb{R}^d$ be a given velocity with the following hypothesis:\n\\begin{Hyp}\\label{hyp:u}\n\tThe velocity $u$ is sufficiently smooth and satisfies $u_{|\\partial\\Omega}=0$.\n\\end{Hyp}\nLet $\\Delta t>0$ be a time increment, $N_T \\coloneqq \\lfloor T\/\\Delta t \\rfloor$ the total number of time steps, and $t^n \\coloneqq n\\Delta t~(n\\in\\mathbb{Z})$.\nFor a function~$f$ defined in $\\Omega\\times (0,T)$, let $f^n \\coloneqq f(\\cdot,t^n)$ be the function at $n$-th time step.\nWe define two mappings $X_1, \\tilde{X}_1: \\Omega\\times (0,T)\\to \\mathbb{R}^d$ by\n\\[\nX_1(x,t) \\coloneqq x-\\Delta t\\, u(x,t), \\qquad \\tilde{X}_1(x,t) \\coloneqq x-2\\Delta t\\, u(x,t),\n\\]\nwhich are upwind points of $x$ with respect to $u(x,t)$.\n{We introduce a symbol ``$\\circ$'' to represent a composition of functions defined by}\n\\[\n(g \\circ X_1^n) (x) \\coloneqq g( X_1^n (x) ),\n\\]\nfor a function~$g$ defined in~$\\Omega$, where $X_1^n(x) = X_1(x,t^n) = x - \\Delta t\\,u^n(x)$.\nWe prepare a hypothesis for $\\Delta t$:\n\\begin{Hyp}\\label{hyp:dt}\n\tThe time increment $\\Delta t$ satisfies\n\t$\\displaystyle \\Delta t |u|_{C^0([0,T];W^{1,\\infty}(\\Omega)^d)} \\le 1\/8$.\n\\end{Hyp}\n\\begin{Rmk}\\label{rmk:upwind_points}\n\tHypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} ensure that $X_1(\\Omega, t) = \\tilde{X}_1(\\Omega, t) = \\Omega$, and that {\\rm Jacobian}s of the mappings~$X_1(\\cdot,t)$ and $\\tilde{X}_1(\\cdot,t)$ are greater than or equal to $1\/2$, for $t\\in [0,T]$, cf.~\\cite{RuiTab-2002,Tabata2018}.\n\tWe note that Hypothesis~\\ref{hyp:dt} has no relation with the so-called CFL condition as any spatial mesh size is not included in it.\n\\end{Rmk}\n\\subsection{Lagrangian framework and the generalized Lie derivative}\n\\par\nFor a fixed 
$(x,t)\\in \\bar\\Omega\\times [0,T]$, let $X(x,t; s) \\in \\mathbb{R}^d$ be a solution of the following ordinary differential equation with an initial condition:\n\t%\n\t\\begin{subequations}\\label{eqns:ode}\n\t\t\\begin{align}\n\t\t\\prz{}{s} X(x,t; s) & = u(X(x,t; s), s),\\quad s \\in (0, T),\\label{eq:ode1}\\\\\n\t\tX(x,t; t) & = x, \\label{eq:ode2}\n\t\t\\end{align}\n\t\\end{subequations}\n\t%\n\tfor $(x,t) \\in \\Omega\\times (0,T)$.\n\tPhysically, $X(x,t; s)$ gives the position of fluid particle at time~$s$ whose position at time~$t$ is $x$. It is known as a flow map and an illustration of this concept can be seen in Fig.~\\ref{fig:flowmap}.\n\t%\n\t\\begin{figure}[!htbp]\n\t\t\\centering\n\t\n\t\t\\begin{tikzpicture}[thick,scale=3, every node\/.style={scale=1.7}]\n\t\n\t\n\t\t\\draw[thick,->] (-1.0,-1.0) -- (1.1,-1.0);\n\t\t\\draw[thick,->] (-1.0,-1.0) -- (-1.0,1.1);\n\t\t\n\t\n\t\t\\draw[dashed] (-1.0,-0.25) -- (1.0,-0.25);\n\t\t\\draw[dashed] (-1.0,0.5) -- (1.0,0.5);\n\t\t\n\t\t\\draw[blue] (-0.75,-1.0) to[out=90,in=-100] (-0.25,1.0);\n\t\t\\draw[blue] (-0.125,-1.0) to[out=90,in=-100] (0.375,1.0);\n\t\t\\draw[blue] (0.5,-1.0) to[out=90,in=-100] (1.0,1.0);\n\t\t\n\t\n\t\t\\filldraw[fill] (-0.125,-1.0) circle (0.02cm)\n\t\t(-0.01,-0.25) circle (0.02cm)\n\t\t(0.25,0.5) circle (0.02cm)\n\t\t(-0.75,-1.0) circle (0.02cm) \n\t\t(-0.64,-0.25) circle (0.02cm)\n\t\t(-0.375,0.5) circle (0.02cm)\n\t\t;\n\t\t\n\t\n\t\t\\draw (-0.15,-1.2) node{$(x,t)$};\n\t\t\\draw (-0.75,-1.2) node{\\scriptsize $(\\tilde{x},t)$};\n\t\t\\draw (1.25,-1.2) node{$\\mathds{R}$};\n\t\t\\draw (-1.15,-1.0) node{$t$};\n\t\t\\draw (-1.15,-0.25) node{$s$};\n\t\t\\draw (-1.15,0.5) node{$\\tilde{t}$};\n\t\t\\draw (-1.25,1.2) node{$time$};\n\t\t\\draw (-0.6,-0.15) node{\\scriptsize $X(\\tilde{x},t;s)$};\n\t\t\\draw (0.425,-0.15) node{$X(x,t;s)$};\n\t\t\\draw (-0.35,0.6) node{\\scriptsize $X(\\tilde{x},t;\\tilde{t})$};\n\t\t\\draw (0.66,0.6) 
node{$X(x,t;\\tilde{t})$};\n\t\t\n\t\t%\n\t\t\\end{tikzpicture}\n\t\n\t\t\\caption{Sketch of the flow map for $X(x,t; s)$.}\n\t\t\\label{fig:flowmap}\n\t\\end{figure}\n\t%\n\t\\par\n\tFor $(x,t) \\in \\Omega\\times (0,T)$, let us introduce a matrix-valued function~$L(x,t; \\cdot, \\cdot): (0,T)\\times (0,T) \\to \\mathbb{R}^{d\\times d}$ defined by\n\t\\begin{eqnarray}\\label{def:L}\n\tL_{ij} (x,t; t_1, t_2) \\coloneqq \\Bigl[ \\prz{}{z_j} X_i(z,t_1; t_2) \\Bigr]_{{\\displaystyle |}z=X(x,t; t_1)},\\quad i,j = 1, \\ldots, d,\n\t\\end{eqnarray}\n\t%\n\twhich is the so-called deformation gradient.\n\tIt is known that the function~$L$ has the following properties:\n\t%\n\t\\begin{subequations}\\label{eqns:L}\n\t\t\\begin{align}\n\t\tL(x,t;t_1,t_2) L(x,t;t_2,t_1) & = L(x,t;t_1,t_1) = I,\n\t\t\\label{eqns:L1} \\\\\n\t\t\\prz{}{s} L(x,t;t_1,s) & = (\\nabla u) \\bigl( X(x,t; s), s \\bigr) L(x,t;t_1,s), \n\t\t\\label{eqns:L2} \\\\\n\t\t\\prz{}{s} L(x,t;s,t_1) & = - L(x,t;s,t_1) (\\nabla u) \\bigl( X(x,t; s), s \\bigr), \n\t\t\\label{eqns:L3}\n\t\t\\end{align}\n\t\\end{subequations}\n\t%\n\tfor $t_1, t_2 \\in [0, T]$, where $I \\in \\mathbb{R}^{d\\times d}_{\\rm sym}$ is the identity matrix.\n\tAlthough the proofs can be found in, e.g., \\cite{LeeBook2011}, we give the proofs again in Appendix~\\ref{A.subsec:L} under the assumption of unique existence of smooth regular~$L$.\n\t%\n\t\\par\n\tLet $D\/Dt$ be the material derivative defined by\n\t\\[\n\t\\frac{D}{Dt} \\coloneqq \\prz{}{t} + u\\cdot\\nabla.\n\t\\]\n\tFor a function~$\\zeta:\\Omega\\times(0,T)\\to\\mathbb{R}^{d\\times d}$, it is well-known that the material derivative of~$\\zeta$ can be written as\n\t%\n\t\\begin{equation}\n\t\\frac{D\\zeta}{Dt} (x,t) = \\Bigl[ \\prz{\\zeta}{t} + (u\\cdot\\nabla) \\zeta \\Bigr] (x,t) = \\frac{\\partial}{\\partial s} \\zeta \\bigl( X(x,t; s), s \\bigr)_{{\\displaystyle |}s=t}.\n\t\\end{equation}\n\t%\n\tHere, we define the so-called generalized Lie derivative~$\\mathcal{L}_u\\zeta$ 
by\n\t%\n\t\\begin{align}\n\t(\\mathcal{L}_u\\zeta) \\bigl( X(x,t; s), s \\bigr) \n\t& \\coloneqq\n\tL(x,t;t,s) \\prz{}{s}\\Bigl[ L(x,t;s,t) \\zeta \\bigl( X(x,t; s), s \\bigr) L(x,t;s,t)^\\top \\Bigr] L(x,t;t,s)^\\top.\n\t\\label{def:GLD}\n\t\\end{align}\n\t%\n\tFrom~\\eqref{eqns:L}, the upper-convected time derivative can be rewritten by using $\\mathcal{L}_u\\zeta$, i.e.,\n\t%\n\t\\begin{equation}\n\t\\uctd{\\zeta}(x,t) \n\t= (\\mathcal{L}_u\\zeta) (x,t) = (\\mathcal{L}_u\\zeta)\\bigl( X(x,t; s), s \\bigr)_{|s=t},\n\t\\label{eq:UCM_GLD}\n\t\\end{equation}\n\t%\n\twhich is shown in Appendix~\\ref{A.subsec:UCM_GLD}. \n\\subsection{The model equation}\n\\label{sce}\nBased on the above description, we consider a simplified model equation in order to present the application of finite difference schemes for dealing with the generalized Lie derivative. Particularly, based on the Oldroyd-B constitutive equation (\\ref{eq:OB}), the problem is to find $\\zeta: \\Omega\\times (0, T) \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ such that\n\t%\n\t\\begin{subequations}\\label{eqns:prob}\n\t\t\\begin{eqnarray}\n\t\t\\uctd{\\zeta} & = & F \\quad\\ \\mbox{in}\\ \\Omega\\times (0, T), \n\t\t\\label{eq:prob1}\\\\\n\t\t\\zeta & = & \\zeta_\\mathrm{in} \\quad \\mbox{on}\\ \\Gamma_\\mathrm{in}\\times (0, T), \n\t\t\\label{eq:prob2}\\\\\n\t\t\\zeta & = & \\zeta^0 \\quad\\ \\mbox{in}\\ \\Omega,\\ \\mbox{at}\\ t=0, \n\t\t\\end{eqnarray}\n\t\\end{subequations}\n\t%\n\twhere $\\Gamma_\\mathrm{in}$ is an inflow boundary defined by $\\Gamma_\\mathrm{in} \\coloneqq \\{ x\\in\\partial\\Omega;\\ u(x,t)\\cdot n(x) < 0 \\}$ for the outward unit normal vector~$n: \\partial\\Omega\\to\\mathbb{R}^d$, and $F: \\Omega\\times (0,T) \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$, $\\zeta_\\mathrm{in}: \\Gamma_{\\rm in}\\times (0, T) \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ and $\\zeta^0: \\Omega \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ are given functions.\n\t\\begin{Rmk}\n\t\t(i)~From~\\eqref{eq:UCM_GLD}, 
Eq.~\\eqref{eq:prob1} can be reformulated using the generalized {\\rm Lie} derivative resulting in:\n\t\\begin{equation}\n\t\\mathcal{L}_u\\zeta\n\t= F \\quad \\mbox{in}\\ \\Omega\\times (0, T).\n\t\\label{eq:prob1nf}\n\t\\end{equation}\n\t(ii)~In general, the inflow boundary~$\\Gamma_\\mathrm{in}$ depends on time~$t$, i.e., $\\Gamma_\\mathrm{in} = \\Gamma_\\mathrm{in}(t)$, while $\\Gamma_\\mathrm{in}$ is the empty set under Hypothesis~\\ref{hyp:u}.\n\tThroughout the paper, we deal with an inflow boundary~$\\Gamma_\\mathrm{in}$ that is independent of time~$t~(\\in (0,T))$.\n\t\\end{Rmk}\n\\section{Finite difference discretizations}\n\\label{sec:FD_discretizations}\nIn this section, we present the spatial and temporal discretizations. The main results of the numerical analysis of the schemes are also described in detail.\n\\subsection{Space discretizations and interpolation operators}\nIn this subsection, we introduce spatial discretizations and interpolation operators in one and two dimensions.\nBefore introducing them, for an integer~$i$ and a positive number~$\\delta$, we prepare two functions $\\eta_i^{(1)}(\\,\\cdot\\,; \\delta)$ and $\\eta_i^{(2)}(\\,\\cdot\\,; \\delta):~\\mathbb{R} \\to \\mathbb{R}$.\nThe former, $\\eta_i^{(1)}(\\,\\cdot\\,; \\delta)$, is defined by\n\\begin{align*}\n\\displaystyle\n\\eta_i^{(1)}(s; \\delta) & \\coloneqq\n\\left\\{\n\\begin{aligned}\n& \\frac{s}{\\delta} -i + 1 && \\bigl( s\\in [(i-1)\\delta, i\\delta ) \\bigr),\\\\\n& i+1 - \\frac{s}{\\delta} && \\bigl( s\\in [i\\delta, (i+1)\\delta] \\bigr),\\\\\n& \\ \\ \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr),\n\\end{aligned}\n\\right.\n\\end{align*}\nand the latter, $\\eta_i^{(2)}(\\,\\cdot\\,;\\delta)$, is defined by\n\\begin{align*}\n\\intertext{(i) $i$ : even number}\n\\displaystyle\n\\eta_i^{(2)}(s;\\delta) & \\coloneqq\n\\left\\{\n\\begin{aligned}\n& \\Bigl( \\frac{s}{\\delta} - i + 1 \\Bigr) \\Bigl( \\frac{s}{2\\delta} - \\frac{i}{2} + 1 \\Bigr) && 
\\bigl( s\\in [(i-2)\\delta, i\\delta ) \\bigr),\\\\\n&\\Bigl( i+1 - \\frac{s}{\\delta} \\Bigr) \\Bigl( \\frac{i}{2} + 1 - \\frac{s}{2\\delta} \\Bigr) && \\bigl( s\\in [i\\delta, (i+2)\\delta] \\bigr),\\\\\n& \\qquad\\qquad\\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr),\n\\end{aligned}\n\\right.\n\\intertext{(ii) $i$ : odd number}\n\\eta_i^{(2)}(s;\\delta) & \\coloneqq\n\\left\\{\n\\begin{aligned}\n& \\Bigl( \\frac{s}{\\delta} - i + 1\\Bigr) \\Bigl( i+1 - \\frac{s}{\\delta} \\Bigr) && \\bigl( s\\in [(i-1)\\delta, (i+1)\\delta] \\bigr),\\\\\n& \\qquad\\quad\\quad \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr).\n\\end{aligned}\n\\right.\n\\end{align*}\nThe functions~$\\eta_i^{(1)}(\\,\\cdot\\,; \\delta)$ and~$\\eta_i^{(2)}(\\,\\cdot\\,; \\delta)$ are used below for the definitions of (bi)linear and (bi)quadratic interpolation operators $\\Pi_h^{(1)}$ and $\\Pi_h^{(2)}$, respectively.\n\\subsubsection{One-dimensional case $(d=1)$}\nInitially, we consider one spatial dimension, i.e., $d = 1$.\nFor the sake of simplicity, we assume $\\Omega = (0, a)$ for a positive number~$a$. 
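The piecewise definitions of $\eta_i^{(1)}$ and $\eta_i^{(2)}$ above are easy to check numerically. The following Python sketch is our own illustration, not code from the paper (the names `eta1` and `eta2` are ours): it transcribes both definitions and verifies, on a sample cell, the partition of unity and the exact reproduction of quadratics by the basis $\{\eta_i^{(2)}\}$.

```python
def eta1(i, s, delta):
    """Piecewise-linear 'hat' basis eta_i^(1)(s; delta): equals 1 at s = i*delta
    and vanishes outside [(i-1)*delta, (i+1)*delta]."""
    r = s / delta
    if i - 1 <= r < i:
        return r - i + 1
    if i <= r <= i + 1:
        return i + 1 - r
    return 0.0

def eta2(i, s, delta):
    """Piecewise-quadratic Lagrange basis eta_i^(2)(s; delta) on cells of width
    2*delta: even i are cell-endpoint nodes, odd i are cell-midpoint nodes."""
    r = s / delta
    if i % 2 == 0:
        if i - 2 <= r < i:
            return (r - i + 1) * (r / 2 - i / 2 + 1)
        if i <= r <= i + 2:
            return (i + 1 - r) * (i / 2 + 1 - r / 2)
        return 0.0
    if i - 1 <= r <= i + 1:
        return (r - i + 1) * (i + 1 - r)
    return 0.0

# On the cell [0, 2*delta] with nodes i = 0, 1, 2: partition of unity and
# exact reproduction of the quadratic s^2 from its nodal values (i*delta)^2.
s, delta = 0.3, 1.0
weights = [eta2(i, s, delta) for i in (0, 1, 2)]
assert abs(sum(weights) - 1.0) < 1e-12
assert abs(sum((i * delta) ** 2 * w for i, w in zip((0, 1, 2), weights)) - s ** 2) < 1e-12
```

The partition-of-unity identity checked here for the nodal bases is the same one invoked later in the truncation error proof.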
\nLet $N\\in\\mathbb{N}$ be a number, $h\\coloneqq a\/N$ a mesh size, and $x_i \\coloneqq ih~(i\\in\\mathbb{Z})$ lattice points.\nWe define a set of lattice points~$\\bar\\Omega_h$ and a discrete function space~$V_h$, both depending on the number~$N$, by\n\\begin{align*}\n\\bar\\Omega_h & \n\\coloneqq \\{ x_i \\in \\bar\\Omega;\\ i = 0,\\ldots,N \\} \\ (\\subset \\bar\\Omega \\subset \\mathbb{R}^d = \\mathbb{R}), \\\\\nV_h & \\coloneqq \\{ v_h: \\bar\\Omega_h \\to \\mathbb{R}^{d\\times d}_{\\rm sym} \\} = \\{ v_h: \\bar\\Omega_h \\to \\mathbb{R} \\}.\n\\end{align*}\nWe introduce a set of basis functions~$\\{\\varphi_i^{(1)}: \\bar\\Omega \\to \\mathbb{R}; \\ i=0, \\ldots, N \\}$ defined by\n\\[\n\\varphi_i^{(1)}(x) \\coloneqq \\eta_i^{(1)}(x; h),\n\\qquad\ni=0,\\ldots,N.\n\\]\nThe functions~$\\varphi_0^{(1)}$ and $\\varphi_N^{(1)}$ are reduced to\n\\begin{align*}\n\\displaystyle\n\\varphi_0^{(1)}(x) & \\coloneqq\n\\left\\{\n\\begin{aligned}\n& 1 - \\frac{x}{h} && \\bigl( x\\in [x_0, x_1] \\bigr) \\\\\n& \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr)\n\\end{aligned}\n\\right.\n=\n\\left\\{\n\\begin{aligned}\n&\\frac{x_1-x}{h} && \\bigl( x\\in [ x_0, x_1] \\bigr) \\\\\n& \\ \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr)\n\\end{aligned}\n\\right.,\n\\\\\n\\varphi_N^{(1)}(x) & \\coloneqq\n\\left\\{\n\\begin{aligned}\n& \\frac{x}{h} - N + 1 && \\bigl( x\\in [x_{N-1}, x_N ] \\bigr) \\\\\n& \\ \\ \\ \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr)\n\\end{aligned}\n\\right.\n=\n\\left\\{\n\\begin{aligned}\n&\\frac{x-x_{N-1}}{h} && \\bigl( x\\in [ x_{N-1}, x_N ] \\bigr) \\\\\n& \\ \\ \\ \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr)\n\\end{aligned}\n\\right.,\n\\end{align*}\nsince they are defined on $\\bar\\Omega = [x_0, x_N] = [0, a]$.\nLet $\\Pi_h^{(1)}: V_h \\to C^0(\\bar\\Omega)$ be the linear interpolation operator defined by\n\\[\n\\bigl( \\Pi_h^{(1)} v_h \\bigr) (x) \\coloneqq \\sum_{i=0}^N v_h(x_i)\\varphi_i^{(1)}(x).\n\\]\n\\par\nWe now describe the quadratic 
interpolation. Let $N\\in\\mathbb{N}$ be an even number, and $M\\coloneqq N\/2\\in\\mathbb{N}$. For the definition of the quadratic interpolation operator $\\Pi_h^{(2)}$, we define a set of basis functions~$\\{\\varphi_i^{(2)}: \\bar\\Omega \\to \\mathbb{R}; \\ i=0, \\ldots, N \\}$ by\n\\[\n\\varphi_i^{(2)}(x) \\coloneqq \\eta_i^{(2)}(x; h),\n\\qquad\ni=0,\\ldots,N,\n\\]\nwhere $\\varphi_0^{(2)}$ and $\\varphi_N^{(2)} \\ (=\\varphi_{2M}^{(2)})$ are reduced to\n\\begin{align*}\n\\displaystyle\n\\varphi_0^{(2)}(x) & =\n\\left\\{\n\\begin{aligned}\n&\\frac{(x_1-x)(x_2-x)}{2h^2} && \\bigl( x\\in [x_0, x_2] \\bigr),\\\\\n& \\qquad\\quad \\ 0 && \\bigl( {\\rm otherwise} \\bigr),\n\\end{aligned}\n\\right.\n\\\\\n\\varphi_N^{(2)}(x) & =\n\\left\\{\n\\begin{aligned}\n&\\frac{(x-x_{N-1})(x-x_{N-2})}{2h^2} && \\bigl( x\\in [x_{N-2}, x_N ] \\bigr),\\\\\n& \\qquad\\quad\\ \\ \\ \\ \\ 0 && \\bigl( {\\rm otherwise} \\bigr).\n\\end{aligned}\n\\right.\n\\end{align*}\nLet $\\Pi_h^{(2)}: V_h \\to C^0(\\bar\\Omega)$ be the quadratic interpolation operator defined by\n\\[\n\\bigl( \\Pi_h^{(2)} v_h \\bigr) (x) \\coloneqq \\sum_{i=0}^N v_h(x_i)\\varphi_i^{(2)}(x).\n\\]\n\\begin{Rmk}\\label{rmk:upwind_cell_1d}\n\tFor $\\alpha$, $\\beta \\in \\mathbb{R}~(\\alpha<\\beta)$, and $N_0\\in\\mathbb{N}$ with $\\delta_0 = \\delta_0(\\alpha,\\beta,N_0) \\coloneqq (\\beta-\\alpha)\/N_0 > 0$, let $\\mathcal{I}(\\cdot; \\alpha, \\beta, N_0) : \\mathbb{R} \\to \\{0,\\ldots,N_0\\}$ be an integer-valued index indicator function defined by\n\t\\begin{align}\n\t\\label{def:index_func}\n\t\\mathcal{I} (s; \\alpha, \\beta, N_0) \\coloneqq\n\t\\left\\{\n\t\\begin{aligned}\n\t& \\left\\lfloor \\frac{s-\\alpha}{\\delta_0} \\right\\rfloor && \\bigl( s \\in (\\alpha,\\beta) \\bigr), \\\\\n\t& \\quad\\ \\ 0 && (s \\le \\alpha), \\\\\n\t& \\quad\\ N_0 && (s \\ge \\beta).\n\t\\end{aligned}\n\t\\right.\n\t\\end{align}\n\n\tWe note that the integer~$i_0 = \\mathcal{I}(s; \\alpha, \\beta, N_0)$ satisfies 
$i_0\\delta_0+\\alpha \\le s < (i_0+1)\\delta_0+\\alpha$ for $s\\in (\\alpha, \\beta)$, and that, for an even number~$N_0$ with $M_0 = N_0\/2 \\in\\mathbb{N}$, the integer~$k_0 = \\mathcal{I}(s; \\alpha, \\beta, M_0)$ satisfies $2k_0\\delta_0+\\alpha \\le s < 2(k_0+1)\\delta_0+\\alpha$ for $s\\in (\\alpha, \\beta)$, since the spacing associated with $M_0$ is $(\\beta - \\alpha)\/M_0 = 2 (\\beta - \\alpha)\/N_0 = 2\\delta_0$. \n\n\t\\par\n\tFor $d=1$, we introduce two notations of intervals,\n\t\\begin{align*}\n\t&&&& K_{i+1\/2}^{(1)} & \\coloneqq [x_i, x_{i+1}], & i & \\in\\{0,\\ldots,N-1\\}, &&&& \\\\\n\t&&&& K_{2k+1}^{(2)} & \\coloneqq [x_{2k}, x_{2k+2}], & k & \\in\\{0,\\ldots,M-1\\}, &&&&\n\t\\end{align*}\n\twhose measures are $h$ and $2h$, respectively.\n\tLet $x \\in \\mathbb{R}$ be given arbitrarily.\n\tThen, the following are practically useful in computation: \\smallskip\\\\\n\t$(i)$~Let $i_0 \\coloneqq \\mathcal{I}(x; 0, a, N)\\in\\{0,\\ldots,N\\}$.\n\tWhen $x\\in\\Omega$, the integer $i_0$ satisfies $x\\in K_{i_0+1\/2}^{(1)} = [x_{i_0}, x_{{i_0}+1}]$, and we have the two-point representation of~$( \\Pi_h^{(1)} v_h) (x)$,\n\t%\n\t\\begin{equation}\n\t\\bigl( \\Pi_h^{(1)} v_h \\bigr) (x) = v_{i_0} \\varphi_{i_0}^{(1)}(x) + v_{i_0+1} \\varphi_{{i_0}+1}^{(1)}(x),\n\t\\label{int1}\n\t\\end{equation}\n\n\twhere we have used the notation~$v_i = v_h(x_i)$. \\smallskip\\\\\n\t%\n\t$(ii)$~Let $k_0 \\coloneqq \\mathcal{I}(x; 0, a, M)\\in\\{0,\\ldots,M\\}$. \n\tWhen $x\\in\\Omega$, the integer $k_0$ satisfies $x\\in K_{2k_0+1}^{(2)} = [x_{2k_0}, x_{2{k_0}+2}]$, and we have the three-point representation of~$( \\Pi_h^{(2)} v_h) (x)$,\n\n\t\\begin{equation}\n\t\\bigl( \\Pi_h^{(2)} v_h \\bigr) (x) = v_{2k_0} \\varphi_{2k_0}^{(2)}(x) + v_{2k_0+1} \\varphi_{2k_0+1}^{(2)}(x) + v_{2k_0+2} \\varphi_{2k_0+2}^{(2)}(x),\n\t\\label{int2}\n\t\\end{equation}\n\t%\n\tfor $v_i = v_h(x_i)$. 
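The index indicator function and the two-point evaluation above can be sketched in a few lines of Python. This is our own illustration, not code from the paper; the names `index_of` and `interp_linear` are hypothetical. `index_of` implements the clamped floor of $\mathcal{I}(\,\cdot\,; \alpha, \beta, N_0)$, and `interp_linear` evaluates $(\Pi_h^{(1)} v_h)(x)$ from the two nodal values $v_{i_0}$ and $v_{i_0+1}$, falling back to the closest end value when $x \notin \Omega$.

```python
import math

def index_of(s, alpha, beta, n0):
    """Index indicator I(s; alpha, beta, N0): floor((s - alpha)/delta0) with
    delta0 = (beta - alpha)/N0, clamped to 0 for s <= alpha and N0 for s >= beta."""
    if s <= alpha:
        return 0
    if s >= beta:
        return n0
    delta0 = (beta - alpha) / n0
    return int(math.floor((s - alpha) / delta0))

def interp_linear(v, x, a):
    """Evaluate (Pi_h^(1) v_h)(x) on [0, a] from nodal values v[i] = v_h(i*h),
    h = a/N with N = len(v) - 1; outside [0, a] the closest end value is used."""
    n = len(v) - 1
    h = a / n
    if x <= 0.0:
        return v[0]
    if x >= a:
        return v[-1]
    i0 = index_of(x, 0.0, a, n)   # x lies in K_{i0+1/2} = [x_{i0}, x_{i0+1}]
    t = x / h - i0                # local coordinate: phi_{i0} = 1 - t, phi_{i0+1} = t
    return (1.0 - t) * v[i0] + t * v[i0 + 1]
```

For instance, with the nodal values of $x^2$ on $[0,3]$ and $h=1$, `interp_linear([0, 1, 4, 9], 1.5, 3.0)` returns the chord value $2.5$.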
\\smallskip\\\\\n\t%\n\t$(iii)$~If the value~$(\\Pi_h^{(p)} v_h)(x)~(p=1,2)$ is needed for $x\\notin\\Omega$, we can employ, instead of it, the closest end value of $v_h$, i.e., $v_0 = v_h(0)~(x \\le 0)$ or $v_N = v_h(a)~(x \\ge a)$; the value~$v_0$ or~$v_N$ should then be given via~$\\zeta_{\\rm in}$, since $x$ corresponds to an upwind point and $x\\notin\\bar\\Omega$ suggests that an ``inflow'' boundary lies near~$x$.\n\tThe function~$\\mathcal{I}(\\cdot;\\alpha,\\beta,N)$ is, therefore, also useful for $x\\notin\\Omega$ in the sense that $\\mathcal{I}(x;\\alpha,\\beta,N)$ provides the index of the closest lattice point.\n\\end{Rmk}\n\\subsubsection{Two-dimensional case $(d=2)$}\n\nWe consider two spatial dimensions, i.e., $d=2$.\nFor the sake of simplicity, we assume $\\Omega = (0, a_1)\\times (0, a_2)$ for positive numbers~$a_1$ and $a_2$.\nLet $N_i \\in\\mathbb{N}~(i=1,2)$ be numbers, $h_i\\coloneqq a_i\/N_i~(i=1,2)$ mesh sizes in the $x_i$-direction, $h_{\\min} \\coloneqq \\min\\{h_i;\\ i=1,\\ldots,d\\}$ and $h = h_{\\max} \\coloneqq \\max\\{h_i;\\ i=1,\\ldots,d\\}$ minimum and maximum mesh sizes, and $x_{i,j} \\coloneqq (ih_1,jh_2)^\\top~(i,j\\in\\mathbb{Z})$ lattice points.\nWe assume a family of meshes satisfying the next hypothesis:\n\\begin{Hyp}\\label{hyp:mesh}\nThere exist positive constants $h_0$, $\\gamma_1$ and $\\gamma_2$ such that\n\\[\nh \\in (0,h_0], \\quad \\mbox{and} \\quad \\gamma_1 \\le \\frac{h}{h_{\\min}} \\le \\gamma_2.\n\\]\n\\end{Hyp}\n\\begin{Rmk}\nThe hypothesis is essentially for the case $d=2$, since it always holds for $d=1$ with $\\gamma_1 = \\gamma_2 = 1$.\n\\end{Rmk}\n\\par\nWe define a set of lattice points~$\\bar\\Omega_h$ and a discrete function space~$V_h$, both depending on the numbers $N_i \\in\\mathbb{N}~(i=1,2)$, by\n\\begin{align*}\n\\bar\\Omega_h & \n\\coloneqq \\{ x_{i,j} \\in \\bar\\Omega;\\ i = 0,\\ldots,N_1,\\ j = 0,\\ldots,N_2 \\}, \\\\\nV_h &\\coloneqq \\{ v_h: \\bar\\Omega_h \\to \\mathbb{R}^{d\\times d}_{\\rm 
sym} \\},\n\\end{align*}\nwhere it is noted that $\\bar\\Omega_h \\subset \\bar\\Omega \\subset \\mathbb{R}^d~(= \\mathbb{R}^2)$.\nUsing~$\\eta_i^{(1)}(\\,\\cdot\\,; \\delta)$, we introduce a set of basis functions~$\\{\\varphi_{i,j}^{(1)}: \\bar\\Omega \\to \\mathbb{R}; \\ x_{i,j}\\in\\bar\\Omega_h,\\ i,j\\in\\mathbb{Z} \\}$ defined by\n\\begin{displaymath}\n\\displaystyle\n\\varphi_{i,j}^{(1)}(x) = \\varphi_{i,j}^{(1)}(x_1,x_2) \\coloneqq \n\\eta_i^{(1)}(x_1;h_1) \\eta_j^{(1)}(x_2;h_2).\n\\end{displaymath}\nLet $\\Pi_h^{(1)}: V_h \\to C^0(\\bar\\Omega)$ be the bilinear interpolation operator defined by\n\\[\n\\bigl( \\Pi_h^{(1)} v_h \\bigr) (x) \\coloneqq \\sum_{x_{i,j}\\in \\bar\\Omega_h} v_h(x_{i,j})\\varphi_{i,j}^{(1)}(x).\n\\]\n\\par\nThe extension of the above interpolation using the biquadratic interpolation strategy can be defined as follows. Let $N_1, N_2 \\in\\mathbb{N}$ be even numbers, and $M_i\\coloneqq N_i\/2\\in\\mathbb{N}$ for $i=1,2$.\nFor the definition of the biquadratic interpolation operator $\\Pi_h^{(2)}$, we introduce basis functions~$\\{\\varphi_{i,j}^{(2)}: \\bar\\Omega \\to \\mathbb{R}; \\ x_{i,j}\\in\\bar\\Omega_h \\}$ defined by\n\\[\n\\varphi_{i,j}^{(2)}(x) = \\varphi_{i,j}^{(2)}(x_1,x_2) \\coloneqq \\eta_i^{(2)}(x_1;h_1) \\eta_j^{(2)}(x_2;h_2).\n\\]\nLet $\\Pi_h^{(2)}: V_h \\to C^0(\\bar\\Omega)$ be the biquadratic interpolation operator defined by\n\\[\n\\bigl( \\Pi_h^{(2)} v_h \\bigr) (x) \\coloneqq \\sum_{x_{i,j}\\in\\bar\\Omega_h} v_h(x_{i,j})\\varphi_{i,j}^{(2)}(x).\n\\]\n\\begin{Rmk}\\label{rmk:upwind_cell_2d}\nFor $d=2$, we introduce two notations of boxes (cells),\n\\begin{align*}\nK_{i+1\/2,j+1\/2}^{(1)} & \\coloneqq [ih_1, (i+1)h_1] \\times [jh_2, (j+1) h_2], \n&\n(i,j) & \\in \\{0,\\ldots,N_1-1\\} \\times \\{0,\\ldots,N_2-1\\}, \\\\\nK_{2k+1,2l+1}^{(2)} & \\coloneqq [2kh_1, (2k+2)h_1] \\times [2l h_2, (2l+2)h_2], \n& (k,l) & \\in \\{0,\\ldots,M_1-1\\} \\times \\{0,\\ldots,M_2-1\\},\n\\end{align*}\nwhose measures are 
$h_1h_2$ and $4h_1h_2$, respectively.\nLet $x \\in \\mathbb{R}^2$ be given arbitrarily.\nThen, the following are practically useful in computation: \\smallskip\\\\\n$(i)$~Let $i_0 \\coloneqq \\mathcal{I}(x_1; 0, a_1, N_1)\\in\\{0,\\ldots,N_1\\}$ and $j_0 \\coloneqq \\mathcal{I}(x_2; 0, a_2, N_2)\\in\\{0,\\ldots,N_2\\}$.\nWhen $x\\in\\Omega$, the pair of integers $(i_0,j_0)$ satisfies $x\\in K_{i_0+1\/2,j_0+1\/2}^{(1)} = [i_0h_1, ({i_0}+1)h_1] \\times [j_0h_2, ({j_0}+1)h_2]$, and we have the four-point representation of~$( \\Pi_h^{(1)} v_h) (x)$,\n\\begin{align}\n\\label{int3}\n\\bigl( \\Pi_h^{(1)} v_h \\bigr) (x) = \n\\sum_{m,n=0,1} v_{i_0+m,j_0+n} \\, \\varphi_{i_0+m,j_0+n}^{(1)}(x),\n\\end{align}\nwhere we have used the simplified notation~$v_{i,j} = v_h(x_{i,j})$. \\smallskip\\\\\n$(ii)$~Let $k_0 \\coloneqq \\mathcal{I}(x_1; 0, a_1, M_1)\\in\\{0,\\ldots,M_1\\}$ and $l_0 \\coloneqq \\mathcal{I}(x_2; 0, a_2, M_2)\\in\\{0,\\ldots,M_2\\}$.\nWhen $x\\in\\Omega$, the pair of integers $(k_0,l_0)$ satisfies $x\\in K_{2k_0+1,2l_0+1}^{(2)} = [2k_0h_1, (2k_0+2)h_1] \\times [2l_0h_2, (2l_0+2)h_2]$, and we have the nine-point representation of~$( \\Pi_h^{(2)} v_h) (x)$,\n\\begin{equation}\n\\label{int4}\n\t\\bigl( \\Pi_h^{(2)} v_h \\bigr) (x) = \\sum_{m,n=0,1,2} v_{2k_0+m,2l_0+n} \\varphi_{2k_0+m,2l_0+n}^{(2)}(x).\n\\end{equation}\n\t$(iii)$~If the value~$(\\Pi_h^{(p)} v_h)(x)~(p=1,2)$ is needed for $x\\notin\\Omega$, we can employ, instead of it, the closest end value of $v_h$, i.e., one of the values of $v_h(x_{i,j})~(x_{i,j} \\in \\bar\\Omega_h\\cap\\partial\\Omega)$; the value should then be given via~$\\zeta_{\\rm in}$, since $x$ corresponds to an upwind point.\n\\end{Rmk}\n\\begin{Rmk}\\label{rmk:3D}\nWe omit the extension of the interpolation operators~$\\Pi_h^{(p)}~(p=1, 2)$ to the three-dimensional case, i.e., $d=3$, since it is naturally defined by introducing basis functions~$\\varphi_{i,j,k}^{(p)}(x) = \\varphi_{i,j,k}^{(p)}(x_1,x_2,x_3) \\coloneqq \n\\eta_i^{(p)}(x_1;h_1) \\eta_j^{(p)}(x_2;h_2) \\eta_k^{(p)}(x_3;h_3)$ 
for $p = 1, 2$ in a similar manner.\n\\end{Rmk}\n\\subsection{Time discretization: truncation error analysis}\\label{subsec:time_discretization}\nFor the velocity~$u$, let $L_1,\\ \\tilde{L}_1: \\Omega \\times (0,T) \\to \\mathbb{R}^{d\\times d}$ be matrices defined by\n\\begin{align}\nL_1 (x,t) \\coloneqq I + \\Delta t (\\nabla u)(x,t),\n\\quad\n\\tilde{L}_1 (x,t) \\coloneqq I + 2\\Delta t (\\nabla u)(x,t),\n\\label{defs:L1_tilde_L1}\n\\end{align}\nwhich are approximations of $L(x,t;\\, t-\\Delta t,t)$ and $L(x,t;\\, t-2\\Delta t,t)$, respectively, cf.~Lemma~\\ref{lem:L_with_k} below.\nNow, we present a theorem which provides an approximation of the upper-convected time derivative of second-order in time.\n\\begin{Thm}\\label{thm:sec_order}\n\tSuppose that Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} hold true.\n\tLet $\\zeta: \\bar\\Omega\\times [0,T] \\to \\mathbb{R}^{d\\times d}$ be a sufficiently smooth function.\n\tThen, for any $x\\in\\bar\\Omega$ and $t\\in [2\\Delta t, T]$, we have\n\t%\n\t\\begin{align}\n\t\\uctd{\\zeta} (x,t) \n\t= \\frac{1}{2\\Delta t}\\Bigl[\n\t3\\zeta(x,t) \\notag\n\t-4 L_1 (x,t) \\zeta(X_1(x,t),t-\\Delta t) L_1 (x,t)^\\top \\\\\n\t+ \\tilde{L}_1 (x,t) \\zeta(\\tilde{X}_1(x,t),t-2\\Delta t) \\tilde{L}_1 (x,t)^\\top\n\t\\Bigr] + O(\\Delta t^2).\n\t\\label{eq:sec_order}\n\t\\end{align}\n\\end{Thm}\nWe give the proof of Theorem~\\ref{thm:sec_order} after giving a remark and preparing two lemmas.\n\\begin{Rmk}\n(i)~Let us consider $(x,t) \\in \\bar\\Omega\\times [2\\Delta t,T]$ as a fixed point and employ simple notations~$X = X(x,t;\\,\\cdot\\,)$ and~$L(\\,\\cdot\\,,\\,\\cdot\\,) = L(x,t;\\,\\cdot\\,,\\,\\cdot\\,)$.\nThen, an approximation of~$\\uctd{\\zeta} (x,t)$ of first-order in time is obtained as follows:\n\\begin{align*}\n\\uctd{\\zeta}(x,t) & = (\\mathcal{L}_u\\zeta)\\bigl( X(s), s \\bigr)_{{\\displaystyle |}s=t} \n\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad\n \\mbox{\\rm (by~\\eqref{eq:UCM_GLD})} \\\\\n& = 
L(t,s) \\prz{}{s}\\Bigl[ L(s,t) \\zeta \\bigl( X(s), s \\bigr) L(s,t)^\\top \\Bigr] L(t,s)^\\top_{{\\displaystyle |}s=t} \n\\qquad\\ \\, \\mbox{\\rm (by definition~\\eqref{def:GLD})} \\\\\n& = L(t,s) \\frac{1}{\\Delta t} \\Bigl[ L(s,t) \\zeta \\bigl( X(s), s \\bigr) L(s,t)^\\top \\\\\n& \\qquad - L(s-\\Delta t,t) \\zeta \\bigl( X(s-\\Delta t), s-\\Delta t \\bigr) L(s-\\Delta t,t)^\\top \\Bigr] L(t,s)^\\top_{{\\displaystyle |}s=t} + O(\\Delta t) \\\\\n& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad\n \\mbox{\\rm (by the {\\rm Euler} method with respect to~$s$)} \\\\\n& = \\frac{1}{\\Delta t} \\Bigl[ \\zeta \\bigl( X(t), t \\bigr) - L(t-\\Delta t,t) \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) L(t-\\Delta t,t)^\\top \\Bigr] + O(\\Delta t) \\\\\n& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad\n \\mbox{\\rm (by substituting $t$ into $s$ and~\\eqref{eqns:L1})} \\\\\n& = \\frac{1}{\\Delta t} \\Bigl[ \\zeta (x,t) - L_1(x,t) \\zeta \\bigl( X_1(x,t), t-\\Delta t \\bigr) L_1(x,t)^\\top \\Bigr] + O(\\Delta t),\n\\end{align*}\nwhere the last equality holds true from the initial condition~\\eqref{eq:ode2} for $X$, i.e., $X(t)=x$, and the relations,\n\\begin{align*}\nL_1(x,t) &= L(t-\\Delta t,t)+O(\\Delta t^2), &\nX_1(x,t) &= X(t-\\Delta t) + O(\\Delta t^2),\n\\end{align*}\nwhich will be shown in Lemmas~\\ref{lem:L_with_k} and~\\ref{lem:phi_g_with_k} with $k=1$ below, respectively.\n\\smallskip\\\\\n(ii)~Theorem~\\ref{thm:sec_order} presents an approximation of~$\\uctd{\\zeta} (x,t)$ of second-order in time based on the two-step {\\rm Adams--Bashforth} method, i.e., for a smooth function $f: \\mathbb{R}\\to\\mathbb{R}$,\n\\[\nf^\\prime(t) = \\frac{d}{ds}f(s)_{{\\displaystyle |}s=t} = \\frac{1}{2\\Delta t}[ 3f(t) - 4f(t-\\Delta t) + f(t-2\\Delta t)] + O(\\Delta t^2),\n\\]\nin place of the {\\rm Euler} method in~(i).\n\\end{Rmk}\n\\begin{Lem}\\label{lem:L_with_k}\n\tSuppose that Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} hold true.\n\tLet $k = 1$ or 
$2$ be fixed.\n\tThen, for any $x\\in\\bar\\Omega$ and $t\\in [k\\Delta t, T]$, we have\n\t%\n\t\\begin{equation}\n\tL(x,t;t-k\\Delta t, t) = I + k \\Delta t (\\nabla u)(x,t) + \\frac{(k\\Delta t)^2}{2} U(x,t) + O(\\Delta t^3),\n\t\\label{eq:L_with_k}\n\t\\end{equation}\n\t%\n\twhere $U: \\Omega\\times (0, T) \\to \\mathbb{R}^{d\\times d}$ is a function defined by\n\t%\n\t\\[\n\tU \\coloneqq (\\nabla u)^2 - \\frac{D (\\nabla u)}{Dt}.\n\t\\]\n\t%\n\\end{Lem}\n\\begin{proof}\n\tFrom the Taylor expansion, we have\n\t\\begin{align}\n\t& L(x,t; t-k\\Delta t, t) \n\t= L(x,t; s-k\\Delta t, t)_{|s=t} \\notag\\\\\n\t& = \\Bigl[ L(x,t; s, t) - k\\Delta t \\prz{}{s} L(x,t; s, t) + \\frac{(k\\Delta t)^2}{2} \\prz{^2}{s^2} L(x,t; s, t) \\Bigr]_{{\\displaystyle |}s=t} + O(\\Delta t^3) \\notag\\\\\n\t& = \\Bigl[ L(x,t; s, t) - k\\Delta t \\bigl[ - L(x,t; s, t) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\bigr] + \\notag\\\\\n\t& \\quad + \\frac{(k\\Delta t)^2}{2} \\prz{}{s} \\bigl[ - L(x,t; s, t) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\bigr] \\Bigr]_{{\\displaystyle |}s=t} + O(\\Delta t^3) \\qquad \\mbox{(by~\\eqref{eqns:L3})}\\notag\\\\\n\t& = I + k\\Delta t (\\nabla u) (x,t) \n\t- \\frac{(k\\Delta t)^2}{2} \\prz{}{s} \\bigl[ L(x,t; s, t) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\bigr]_{{\\displaystyle |}s=t} + O(\\Delta t^3).\n\t\\label{eq:proof_lem_L_with_k_1}\n\t\\end{align}\n\tWe evaluate $\\prz{}{s} \\bigl[ L(x,t; s, t) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\bigr]_{|s=t}$ as follows:\n\t%\n\t\\begin{align}\n\t& \\prz{}{s} \\bigl[ L(x,t; s, t) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\bigr]_{{\\displaystyle |}s=t} \\notag\\\\\n\t& = \\Bigl[\n\t\\Bigl( \\prz{}{s} L(x,t; s, t) \\Bigr) (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \n\t+ L(x,t; s, t) \\Bigl( \\prz{}{s} (\\nabla u)\\bigl( X(x,t; s), s \\bigr) \\Bigr)\n\t\\Bigr]_{{\\displaystyle |}s=t} \\notag\\\\\n\t& = \\Bigl[\n\t- L(x,t; s, t) (\\nabla u)^2 \\bigl( X(x,t; s), s \\bigr) \n\t+ L(x,t; s, t) \\frac{D (\\nabla u)}{Dt} 
\\bigl( X(x,t; s), s \\bigr)\n\t\\Bigr]_{{\\displaystyle |}s=t} \\notag\\\\\n\t& = \n\t- (\\nabla u)^2 (x,t)\n\t+ \\frac{D (\\nabla u)}{Dt} (x,t)\n\t= - U(x,t).\n\t\\label{eq:proof_lem_L_with_k_2}\n\t\\end{align}\n\t%\n\tCombining~\\eqref{eq:proof_lem_L_with_k_2} with~\\eqref{eq:proof_lem_L_with_k_1}, we obtain~\\eqref{eq:L_with_k}.\n\t%\n\\end{proof}\n\\begin{Lem}\\label{lem:phi_g_with_k}\n\tSuppose that Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} hold true.\n\tLet $k = 1$ or $2$ be fixed.\n\tThen, for any $x\\in\\bar\\Omega$ and $t\\in [k\\Delta t, T]$, we have the following:\\\\\n\t%\n\t(i)~It holds that\n\t%\n\t\\[\n\tX(x,t; t-k\\Delta t) = x - k \\Delta t u(x,t) + \\frac{(k\\Delta t)^2}{2}\\frac{Du}{Dt} (x,t) + O(\\Delta t^3).\n\t\\]\n\t(ii)~Let $\\zeta: \\Omega\\times (0,T) \\to \\mathbb{R}^{d\\times d}$ be a sufficiently smooth function.\n\tIt holds that\n\t%\n\t\\[\n\t\\zeta \\bigl( X(x,t; t-k\\Delta t), t-k\\Delta t \\bigr)\n\t = \\zeta \\bigl( x - k \\Delta t u(x,t), t-k\\Delta t \\bigr) + \\frac{(k\\Delta t)^2}{2} Z(x,t) + O(\\Delta t^3),\n\t\\]\n\t%\n\twhere $Z: \\Omega\\times (0,T) \\to \\mathbb{R}^{d\\times d}$ is a function defined by\n\t%\n\t\\[\n\tZ \\coloneqq \\Bigl( \\frac{Du}{Dt} \\cdot \\nabla \\Bigr) \\zeta.\n\t\\]\n\t%\n\\end{Lem}\n\\begin{proof}\n\tWe prove~(i).\nRecalling that $X(x,t;s)$ is a solution to~\\eqref{eqns:ode} and noting that the following identity,\n\\[\nX(x,t; t-k\\Delta t) = x - \\int_{t-k\\Delta t}^t u\\bigl( X(x,t; s), s \\bigr) ds,\n\\]\nholds true, we have\n\t\\begin{align*}\n\t& X(x,t; t-k\\Delta t) - [ x - k\\Delta t u(x,t)] \\\\\n\t& = x - \\int_{t-k\\Delta t}^t u\\bigl( X(x,t; s), s \\bigr) ds - \\Bigl[ x - \\int_{t-k\\Delta t}^t u\\bigl( X(x,t; t), t \\bigr) ds \\Bigr] \\\\\n\t& = \\int_{t-k\\Delta t}^t \\Bigl[ u \\bigl( X(x,t; t), t \\bigr) - u\\bigl( X(x,t; s), s \\bigr) \\Bigr] ds \n\t= \\int_{t-k\\Delta t}^t ds \\Bigl[ u \\bigl( X(x,t; s_1), s_1 \\bigr) \\Bigr]_{s_1=s}^t \\\\\n\t& = \\int_{t-k\\Delta t}^t ds 
\\int_s^t \\frac{Du}{Dt} \\bigl( X(x,t; s_1), s_1 \\bigr) ds_1 \n\t= \\int_{t-k\\Delta t}^t ds \\int_s^t \\Bigl( \\frac{Du}{Dt} (x,t) + O(\\Delta t) \\Bigr) ds_1 \\\\\n\t& = \\frac{(k\\Delta t)^2}{2} \\frac{Du}{Dt} (x,t) + O(\\Delta t^3),\n\t\\end{align*}\n\t%\n\twhich completes the proof of~(i).\n\t\\par\n\tWe prove~(ii).\n\tFrom (i) and the Taylor expansion, we have\n\t%\n\t\\begin{align*}\n\t& \\zeta \\bigl( X(x,t; t-k\\Delta t), t-k\\Delta t \\bigr) \\\\\n\t& = \\zeta \\biggl( x - k \\Delta t u(x,t) + \\frac{(k\\Delta t)^2}{2}\\frac{Du}{Dt} (x,t), t-k\\Delta t \\biggr) + O(\\Delta t^3) \\\\\n\t& = \\zeta \\bigl( x - k \\Delta t u(x,t), t-k\\Delta t \\bigr) + \\frac{(k\\Delta t)^2}{2} \\Bigl[ \\Bigl( \\frac{Du}{Dt} (x,t) \\cdot \\nabla \\Bigr) \\zeta \\Bigr] \\bigl( x - k \\Delta t u(x,t), t-k\\Delta t \\bigr) + O(\\Delta t^3) \\\\\n\t& = \\zeta \\bigl( x - k \\Delta t u(x,t), t-k\\Delta t \\bigr) + \\frac{(k\\Delta t)^2}{2} Z (x,t) + O(\\Delta t^3),\n\t\\end{align*}\n\t%\n\twhere we have used the relation,\n\t\\[\n\t\\Bigl[ \\Bigl( \\frac{Du}{Dt} (x,t) \\cdot \\nabla \\Bigr) \\zeta \\Bigr] \\bigl( x - k \\Delta t u(x,t), t-k\\Delta t \\bigr) = Z(x,t) + O(\\Delta t),\n\t\\]\n\tfor the last equality.\n\t%\n\\end{proof}\n\\begin{proof}[Proof of Theorem~\\ref{thm:sec_order}]\n\tIn the proof, we often employ simple notations, $L(\\cdot,\\cdot) = L(x,t;\\,\\cdot\\,, \\cdot\\,)$ and $X = X(x,t; \\,\\cdot\\,)$, if there is no confusion, since $(x,t)$ is considered as a fixed position in space and time.\n\tFrom the Adams--Bashforth method, i.e., for a smooth function~$g$ defined in $\\mathbb{R}$, $g^\\prime (s) = \\frac{1}{2\\Delta t} [ 3g(s)-4g(s-\\Delta t)+g(s-2\\Delta t) ] + O(\\Delta t^2)$, we have\n\t\\begin{align}\n\t\\uctd{\\zeta} (x,t) \n\t& = (\\mathcal{L}_u\\zeta) (x,t)\n\t= (\\mathcal{L}_u\\zeta)\\bigl( X(s), s \\bigr)_{|s=t} \n\t= L(t,s) \\prz{}{s}\\Bigl[ L(s,t) \\, \\zeta \\bigl( X(s), s \\bigr) L(s,t)^{ \\top } \\Bigr] L(t,s)^\\top {}_{{\\displaystyle 
|}s=t} \\notag\\\\\n\t& = L(t,s) \\frac{1}{2\\Delta t}\\Bigl[\n\t3 L(s,t) \\, \\zeta \\bigl( X(s), s \\bigr) L(s,t)^{ \\top } \n\t-4 L(s-\\Delta t,t) \\, \\zeta \\bigl( X(s-\\Delta t), s-\\Delta t \\bigr) L(s-\\Delta t,t)^{ \\top } \\notag\\\\\n\t& \\quad + L(s-2\\Delta t,t) \\, \\zeta \\bigl( X(s-2\\Delta t), s-2\\Delta t \\bigr) L(s-2\\Delta t,t)^{ \\top } \\Bigr] L(t,s)^{ \\top } {}_{{\\displaystyle |}s=t} + O(\\Delta t^2) \\notag\\\\\n\t& = \\frac{1}{2\\Delta t}\\Bigl[\n\t3 \\zeta \\bigl( X(x,t; s), s \\bigr) \n\t-4 L(t,s) L(s-\\Delta t,t) \\, \\zeta \\bigl( X(s-\\Delta t), s-\\Delta t \\bigr) L(s-\\Delta t,t)^{ \\top } L(t,s)^{ \\top } \\notag\\\\\n\t& \\quad + L(t,s)L(s-2\\Delta t,t) \\, \\zeta \\bigl( X(s-2\\Delta t), s-2\\Delta t \\bigr) L(s-2\\Delta t,t)^{ \\top } L(t,s)^{ \\top }\n\t\\Bigr] {}_{{\\displaystyle |}s=t} + O(\\Delta t^2) \\quad \\mbox{(by \\eqref{eqns:L1})} \\notag\\\\\n\t& = \\frac{1}{2\\Delta t}\\Bigl[\n\t3 \\zeta (x,t) \n\t-4 L(t-\\Delta t,t) \\, \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) L(t-\\Delta t,t)^{ \\top } \\notag\\\\\n\t& \\quad + L(t-2\\Delta t,t) \\, \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) L(t-2\\Delta t,t)^{ \\top }\n\t\\Bigr] + O(\\Delta t^2) \\quad \\mbox{(by \\eqref{eq:ode2} and \\eqref{eqns:L1})} \\notag\\\\\n\t& = \\frac{1}{2\\Delta t}\\biggl[\n\t3 \\zeta (x,t) -4 \\Bigl[ L_1 + \\frac{\\Delta t^2}{2} U \\Bigr] (x,t) \\, \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) \\Bigl[ L_1 + \\frac{\\Delta t^2}{2} U \\Bigr]^{ \\top }(x,t) \\notag\\\\\n\t& \\quad + \\bigl[ \\tilde{L}_1 + 2 \\Delta t^2 U \\bigr](x,t) \\, \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) \\bigl[ \\tilde{L}_1 + 2 \\Delta t^2 U \\bigr]^{ \\top } (x,t)\n\t\\biggr] + O(\\Delta t^2) \\notag\\\\\n\t& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\mbox{(by Lem.~\\ref{lem:L_with_k} with definitions of $L_1$ and $\\tilde{L}_1$)} \\notag\\\\\n\t& = \\frac{1}{2\\Delta t}\\Bigl[\n\t3 \\zeta (x,t) -4 L_1 (x,t) 
\\, \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) L_1^\\top (x,t) \\notag\\\\\n\t& \\quad + \\tilde{L}_1 (x,t) \\, \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) \\tilde{L}_1^\\top (x,t) \\notag\\\\\n\t& \\quad - 2\\Delta t^2 \\bigl[ \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) - \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) \\bigr] U^{ \\top }(x,t) \\notag\\\\\n\t& \\quad - 2\\Delta t^2 U(x,t) \\bigl[ \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) - \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) \\bigr]\n\t\\Bigr] + O(\\Delta t^2) \\notag\\\\\n\t& = \\frac{1}{2\\Delta t}\\Bigl[\n\t3 \\zeta (x,t) - 4 L_1 (x,t) \\, \\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) L_1^\\top(x,t) \\notag\\\\\n\t& \\quad + \\tilde{L}_1(x,t) \\, \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) \\tilde{L}_1^\\top (x,t)\n\t\\Bigr] + O(\\Delta t^2),\n\t\\label{eq:proof1}\n\t\\end{align}\n\t%\n\twhere the relation,\n\t\\[\n\t\\zeta \\bigl( X(t-\\Delta t), t-\\Delta t \\bigr) - \\zeta \\bigl( X(t-2\\Delta t), t-2\\Delta t \\bigr) = O(\\Delta t),\n\t\\]\n\thas been employed for the last equality.\n\tCombining~Lemma~\\ref{lem:phi_g_with_k}-(ii) with~\\eqref{eq:proof1} and recalling $x-\\Delta t u(x,t) = X_1(x,t)$ and $x-2\\Delta t u(x,t) = \\tilde{X}_1(x,t)$, we obtain\n\t%\n\t\\begin{align*}\n\t\\uctd{\\zeta} (x,t) = \\frac{1}{2\\Delta t}\\Bigl[\n\t3 \\zeta (x,t) \n\t-4 L_1 (x,t) \\zeta \\bigl( X_1(x,t), t-\\Delta t \\bigr) L_1^\\top(x,t) \\\\\n\t+ \\tilde{L}_1 (x,t) \\zeta \\bigl( \\tilde{X}_1(x,t), t-2\\Delta t \\bigr) \\tilde{L}_1^\\top (x,t)\n\t\\Bigr] + O(\\Delta t^2),\n\t\\end{align*}\n\t%\n\twhich completes the proof.\n\\end{proof}\nSubstituting $t^n$ into $t$ in~\\eqref{eq:sec_order}, the discrete form of second-order in time for the upper-convected time derivative is given as follows.\n\\begin{Cor}\\label{cor:sec_order}\nUnder the same assumptions of Theorem~\\ref{thm:sec_order}, we have\n\\begin{align}\n\t\\uctd{\\zeta} (x, t^n) \n\t= 
\\frac{1}{2\\Delta t}\\Bigl[\n\t3\\zeta^n(x) -4 L_1^n (x) \\bigl( \\zeta^{n-1}\\circ X_1^n \\bigr)(x) L_1^n (x)^\\top \n\t+ \\tilde{L}_1^n (x) \\bigl( \\zeta^{n-2}\\circ \\tilde{X}_1^n \\bigr)(x) \\tilde{L}_1^n (x)^\\top \\Bigr] + O(\\Delta t^2) \n\t\\label{eq:sec_order_n}\n\t\\end{align}\nfor $n = 2, \\ldots, N_T$.\n\\end{Cor}\n\\begin{Rmk}\nAlthough the approximation~\\eqref{eq:sec_order_n} of $\\uctd{\\zeta}(x,t^n)$ of second-order in time is combined with the finite difference method in this paper below, one can combine it with other methods, e.g., the finite element method and the finite volume method.\n\\end{Rmk}\n\\subsection{Full discretizations of the upper-convected time derivative}\nSuppose that $\\zeta \\in C([0,T];C(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym}))$ and $\\zeta_h = \\{\\zeta_h^n\\}_{n=0}^{N_T}\\subset V_h$ are given.\nFor $n \\in \\{1, \\ldots, N_T\\}$ and $p\\in\\{1, 2\\}$, let $\\mathcal{A}^n \\zeta: \\bar\\Omega \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ and $\\mathcal{A}_h^{n,(p)} \\zeta_h: \\bar\\Omega_h \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ be functions defined by\n\\begin{align}\n\t[\\mathcal{A}^n \\zeta] (x)\n\t& \\coloneqq \n\t\\left\\{\n\t\\begin{aligned}\n\t\\frac{1}{2\\Delta t}\\Bigl[\n\t3\\zeta^n(x) \n\t-4 L_1^n (x) \\bigl( \\zeta^{n-1}\\circ X_1^n \\bigr)(x) L_1^n (x)^\\top \\qquad \\\\\n\t+ \\tilde{L}_1^n (x) \\bigl( \\zeta^{n-2}\\circ \\tilde{X}_1^n \\bigr)(x) \\tilde{L}_1^n (x)^\\top \\Bigr] \\quad (n \\ge 2), \\\\\n\t\\frac{1}{\\Delta t} \\Bigl[\n\t\\zeta^1(x) \n\t- L_1^1 (x) \\bigl( \\zeta^0 \\circ X_1^1 \\bigr) (x) L_1^1 (x)^\\top \\Bigr] \\quad (n = 1),\n\t\\end{aligned}\n\t\\right. 
\\notag \\\\\n\t\\label{def:An_h}\n\t[\\mathcal{A}_h^{n,(p)} \\zeta_h] (x) \n\t& \\coloneqq \\left\\{\n\t\\begin{aligned}\n\t\\frac{1}{2\\Delta t} \\Bigl[\n\t3\\zeta_h^n (x)\n\t-4 L_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-1}\\bigr) \\circ X_1^n \\bigr] (x) L_1^n (x)^\\top \\qquad \\\\\n\t+ \\tilde{L}_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x) \\tilde{L}_1^n (x)^\\top\n\t\\Bigr] \\quad (n \\ge 2), \\\\\n\t\\frac{1}{\\Delta t} \\Bigl[\n\t\\zeta_h^1 (x)\n\t- L_1^1 (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x) L_1^1 (x)^\\top \\Bigr] \\quad (n = 1),\n\t\\end{aligned}\n\t\\right.\n\\end{align}\nrespectively.\nUsing the notation $\\mathcal{A}^n \\zeta$, we can write Eq.~\\eqref{eq:sec_order_n} as, for $n \\in \\{2,\\ldots,N_T\\}$,\n\\[\n\t\\uctd{\\zeta} (x, t^n) = [\\mathcal{A}^n \\zeta] (x) + O(\\Delta t^2).\n\\]\n\\par\nNow, we present a theorem on the truncation error of our finite difference approximations of the upper-convected time derivative, where the function~$\\mathcal{A}_h^{n,(p)} \\zeta: \\bar\\Omega_h \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ to be used in the theorem is well defined since $\\zeta \\in C([0,T];C(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym}))$ can be regarded as a sequence of functions in $V_h$, i.e., $\\zeta = \\{\\zeta^n\\}_{n=0}^{N_T} \\subset V_h$.\n\\begin{Thm}\\label{prop:sec_order_p_order}\n\tSuppose that Hypotheses~\\ref{hyp:u},~\\ref{hyp:dt} and~\\ref{hyp:mesh} hold true.\n\tLet $\\zeta: \\bar\\Omega\\times [0,T] \\to \\mathbb{R}^{d\\times d}$ be a sufficiently smooth function.\n\tThen, we have\n\t%\n\t\\begin{equation}\n\t\\label{eq:sec_order_p_order}\n\t\\uctd{\\zeta} (x, t^n) = [\\mathcal{A}_h^{n,(p)} \\zeta] (x) + O(\\Delta t^2 + h^p)\n\t\\end{equation}\n\tfor $x\\in\\bar\\Omega_h$, $n \\in \\{2, \\ldots, N_T\\}$ and $p=1, 2$.\n\\end{Thm}\n\\begin{proof}\nSince for $x\\in\\bar\\Omega_h$ we have\n\\begin{align}\n\t\\uctd{\\zeta} (x, t^n)\n\t& = 
[\\mathcal{A}^n \\zeta] (x) + O(\\Delta t^2) \\notag\\\\\n\t& = [\\mathcal{A}_h^{n,(p)} \\zeta] (x) - \\Bigl( [\\mathcal{A}_h^{n,(p)} \\zeta] (x) - [\\mathcal{A}^n \\zeta] (x) \\Bigr) + O(\\Delta t^2) \\notag\\\\\n\t& = [\\mathcal{A}_h^{n,(p)} \\zeta] (x) + \\frac{2}{\\Delta t} L_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta^{n-1} - \\zeta^{n-1}\\bigr) \\circ X_1^n \\bigr] (x) L_1^n (x)^\\top \\notag \\\\\n\t& \\quad - \\frac{1}{2\\Delta t} \\tilde{L}_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta^{n-2} - \\zeta^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x) \\tilde{L}_1^n (x)^\\top + O(\\Delta t^2)\n\t\\label{eq:prop2_proof0}\n\\end{align}\nfrom Corollary~\\ref{cor:sec_order}, it suffices to show the following estimates,\n\\begin{subequations}\n\\label{eq:prop2_proof1}\n\\begin{align}\n\\frac{2}{\\Delta t}\\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta^{n-1} - \\zeta^{n-1}\\bigr) \\circ X_1^n \\bigr] (x) & = O(h^p), \\\\\n\\frac{1}{2\\Delta t}\\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta^{n-2} - \\zeta^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x)\n& = O(h^p),\n\\end{align}\n\\end{subequations}\nwhere the simple estimates~\\eqref{eqns:prop2_proof1_easy} are easily obtained, as shown in Remark~\\ref{rmk:prop2_proof1_easy_interpolation} below, and the key issue is to eliminate the negative power of $\\Delta t$ in~\\eqref{eqns:prop2_proof1_easy} and obtain~\\eqref{eq:prop2_proof1}.\nWe prove the former equality of~\\eqref{eq:prop2_proof1} for $d=2$ only, as the equality for $d=1$ is simpler and the latter one is proved similarly.\nLet $x = x_{i,j} \\in \\bar\\Omega_h$ and $y^n \\coloneqq X_1^n(x) = x - u^n(x)\\Delta t$.\nTo simplify the notation, we omit superscripts ${}^{n-1}$ and ${}^n$ from $\\zeta^{n-1}$ and $y^n$ in the rest of the proof, respectively, if there is no confusion.\n\\par\nLet us start with $p=1$.\nFrom Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt}, we have $y \\in\\bar\\Omega$ and there exists a pair of indexes~$(i_0, j_0)$ such that $y \\in K^{(1)}_{i_0+1\/2,j_0+1\/2} \\, (= 
[i_0h_1, (i_0+1)h_1] \\times [j_0h_2, (j_0+1)h_2])$.\nLet $\\Lambda^{(1)}(y)$ be a set of pairs of indexes of lattice points near $y$ defined by $\\Lambda^{(1)}(y) \\coloneqq \\{ (i_0,j_0), (i_0+1,j_0), (i_0,j_0+1), (i_0+1,j_0+1) \\}$.\nLet $a=(a_1,a_2)^{ \\top } \\coloneqq y-x_{i_0,j_0} = ((i-i_0)h_1-u^n_1(x_{i,j})\\Delta t, (j-j_0)h_2-u^n_2(x_{i,j})\\Delta t)$ and $\\tilde{a}=(\\tilde{a}_1,\\tilde{a}_2)^{ \\top } \\coloneqq x_{i_0+1,j_0+1}-y$.\nWithout loss of generality, we can assume that $u^n_k(x_{i,j}) \\ge 0~(k=1,2)$, $i_0 < i$, $j_0 < j$ and $a_k, \\tilde{a}_k \\ge 0~(k=1,2)$, cf. Fig.~\\ref{fig:a1a2}.\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{a1a2}\n\\caption{Notations in the proof of Theorem~\\ref{prop:sec_order_p_order}}\n\\label{fig:a1a2}\n\\end{figure}\nThen, we have\n\\begin{align}\n& \\Bigl[ \\bigl( \\Pi_h^{(1)}\\zeta - \\zeta \\bigr) \\circ X_1^n \\Bigr] (x)\n= (\\Pi_h^{(1)}\\zeta) (y) - \\zeta (y) \\notag\\\\\n& = \\sum_{(k,l) \\in \\Lambda^{(1)}(y)} \\bigl[ \\zeta (x_{k,l}) - \\zeta (y) \\bigr] \\varphi^{(1)}_{k,l} (y) \n\\qquad \\mbox{(by~$\\sum_{(k,l)\\in\\Lambda^{(1)}(y)} \\varphi^{(1)}_{k,l} (y) = 1$)} \\notag\\\\\n& = \\sum_{(k,l) \\in \\Lambda^{(1)}(y)} \\Bigl[ \\zeta \\bigl( y + s(x_{k,l}-y) \\bigr) \\Bigr]_{s=0}^1 \\, \\varphi^{(1)}_{k,l} (y) \\notag\\\\\n& = \\sum_{(k,l) \\in \\Lambda^{(1)}(y)} \\int_0^1 \\Bigl( [ (x_{k,l}-y)\\cdot\\nabla ] \\zeta \\Bigr) \\bigl( y + s_1(x_{k,l}-y) \\bigr) ds_1 \\, \\varphi^{(1)}_{k,l} (y) \\notag\\\\\n\\label{eq:prop2_proof2}\n& = \\sum_{(k,l) \\in \\Lambda^{(1)}(y)} \\int_0^1ds_1\\int_0^{s_1} \\Bigl( [ (x_{k,l}-y)\\cdot\\nabla ]^2 \\zeta \\Bigr) \\bigl( y + s_2(x_{k,l}-y) \\bigr) ds_2 \\, \\varphi^{(1)}_{k,l} (y) \\\\\n& \\qquad\\qquad\\qquad\\qquad\\qquad \\mbox{(by~$\\sum_{(k,l) \\in \\Lambda^{(1)}(y)} ( [ (x_{k,l}-y)\\cdot\\nabla ] \\zeta ) ( y ) \\varphi^{(1)}_{k,l} (y) = 0$),} \\notag\n\\end{align}\nand, for $(k,l)=(i_0,j_0)$,\n\\begin{align}\n& \\biggl| 
\\int_0^1ds_1\\int_0^{s_1} \\Bigl( [ (x_{k,l}-y)\\cdot\\nabla ]^2 \\zeta \\Bigr) \\bigl( y + s_2(x_{k,l}-y) \\bigr) ds_2 \\, \\varphi^{(1)}_{k,l} (y) \\biggr| \n= \\biggl| \\int_0^1ds_1\\int_0^{s_1} \\bigl( [ a \\cdot\\nabla ]^2 \\zeta \\bigr) ( y - s_2 a ) ds_2 \\, \\frac{\\tilde{a}_1\\tilde{a}_2}{h_1h_2} \\biggr| \\notag\\\\\n& \\le c_1 (a_1+a_2)^2 \\|\\zeta^{n-1}\\|_{C^2(K^{(1)}_{i_0+1\/2,j_0+1\/2};\\mathbb{R}^{d\\times d}_{\\rm sym})} \\frac{\\tilde{a}_1\\tilde{a}_2}{h_1h_2} \n\\le c_1^\\prime (a_1\\tilde{a}_1+a_2\\tilde{a}_2) \\|\\zeta^{n-1}\\|_{C^2(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym})} \\notag\\\\ \n& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\\mbox{(by $a_k, \\tilde{a}_k \\le h_k,\\ k=1, 2$, and Hyp.~\\ref{hyp:mesh})}\n\\label{eq:prop2_proof3}\n\\end{align}\nfor positive constants~$c_1$ and $c_1^\\prime$ independent of $h$ and $\\Delta t$.\n\\par\nWe evaluate $a_1\\tilde{a}_1$.\nLet $U^\\infty \\coloneqq \\|u\\|_{C([0,T];C(\\bar\\Omega;\\mathbb{R}^d))} = \\max\\{ |u_k(x,t)|; x \\in \\bar\\Omega, t \\in [0,T], k=1,2\\}$.\nFrom $y_1 = [x_{i,j} - u^n(x_{i,j})\\Delta t ]_1 \\in [i_0h_1, (i_0+1)h_1]$, it holds that\n\\begin{displaymath}\n (i-i_0-1)h_1 \\le u^n_1(x_{i,j})\\Delta t \\le (i-i_0)h_1.\n\\end{displaymath}\nIn the case of $i-i_0-1 \\in \\mathbb{N}$, from $h_1 \\le \\frac{u^n_1(x_{i,j})\\Delta t}{i-i_0-1} \\le U^\\infty\\Delta t$, we have $a_1\\tilde{a}_1 \\le h_1^2 \\le h_1 U^\\infty \\Delta t$.\nIn the case of $i-i_0-1 = 0$, from $a_1 \\le h_1$ and $\\tilde{a}_1 = u^n_1(x_{i,j})\\Delta t \\le U^\\infty\\Delta t$, we have $a_1\\tilde{a}_1 \\le h_1 U^\\infty \\Delta t$.\nHence, it holds that, for any case,\n\\[\na_1\\tilde{a}_1 \\le h_1 U^\\infty \\Delta t.\n\\]\nSince it holds that $a_2\\tilde{a}_2 \\le h_2 U^\\infty \\Delta t$ similarly, we obtain\n\\begin{equation}\n\\label{eq:prop2_proof4}\n a_1\\tilde{a}_1 + a_2\\tilde{a}_2 \\le 2 h U^\\infty \\Delta t,\n\\end{equation}\nwhere this estimate 
holds also for $(k,l)=(i_0+1,j_0), (i_0,j_0+1), (i_0+1,j_0+1)$ similarly.\nCombining~\\eqref{eq:prop2_proof3} and~\\eqref{eq:prop2_proof4} with~\\eqref{eq:prop2_proof2}, we have, for a positive constant~$c_2$ independent of $h$ and~$\\Delta t$,\n\\[\n \\frac{2}{\\Delta t}\\Bigl[ \\bigl( \\Pi_h^{(1)}\\zeta - \\zeta \\bigr) \\circ X_1^n \\Bigr] (x) \n \\le c_2 U^\\infty h \\|\\zeta\\|_{C([0,T];C^2(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym}))} = O(h),\n\\]\nwhich implies the former equality in~\\eqref{eq:prop2_proof1} with $p=1$, and the latter is obtained similarly.\nThus, we get~\\eqref{eq:sec_order_p_order} with $p=1$.\n\\par\nIn the case of $p=2$, the result, i.e., \\eqref{eq:sec_order_p_order} with $p=2$, is obtained similarly by taking into account the following identity,\n\\begin{align*}\n\\Bigl[ \\bigl( \\Pi_h^{(2)}\\zeta - \\zeta \\bigr) \\circ X_1^n \\Bigr] (x) \n= \\sum_{(k,l) \\in \\Lambda^{(2)}(y)} \\int_0^1ds_1 \\int_0^{s_1}ds_2 \\int_0^{s_2} \\Bigl( [ (x_{k,l}-y)\\cdot\\nabla ]^3 \\zeta \\Bigr) \\bigl( y + s_3(x_{k,l}-y) \\bigr) ds_3 \\, \\varphi^{(2)}_{k,l} (y),\n\\end{align*}\nwhere $\\Lambda^{(2)}(y) \\coloneqq \\{ (2i_{\\ast}+p,2j_{\\ast}+q);\\ p, q = 0, 1, 2\\}$ for $i_{\\ast}\\in \\{ 0, \\ldots, M_1\\}$ and $j_{\\ast} \\in \\{ 0, \\ldots, M_2\\}$ satisfying $y\\in [2i_{\\ast}h_1, 2(i_{\\ast}+1)h_1] \\times [2j_{\\ast}h_2, 2(j_{\\ast}+1)h_2]$.\n\\end{proof}\n\\begin{Rmk}\n\\label{rmk:prop2_proof1_easy_interpolation}\nIt is obvious that\n\\begin{align}\n\\uctd{\\zeta} (x, t^n) = [\\mathcal{A}_h^{n,(p)} \\zeta] (x) + O\\Bigl(\\Delta t^2 + \\frac{h^{p+1}}{\\Delta t}\\Bigr)\n\\label{eq:dt2_h_to_p_plus_1_over_dt}\n\\end{align}\nfor $x\\in\\bar\\Omega_h$, $n \\in \\{2, \\ldots, N_T\\}$ and $p=1, 2$, since $\\Pi_h^{(p)}\\zeta$ has an accuracy of $O(h^{p+1})$.\nIn fact, from the approximation property of $\\Pi_h^{(p)}\\zeta$, we have\n\\begin{subequations}\n\\label{eqns:prop2_proof1_easy}\n\\begin{align}\n\\frac{2}{\\Delta t}\\bigl[ \\bigl( 
\\Pi_h^{(p)}\\zeta^{n-1} - \\zeta^{n-1}\\bigr) \\circ X_1^n \\bigr] (x) & = O \\Bigl( \\frac{h^{p+1}}{\\Delta t} \\Bigr), \\\\\n\\frac{1}{2\\Delta t}\\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta^{n-2} - \\zeta^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x)\n& = O \\Bigl( \\frac{h^{p+1}}{\\Delta t} \\Bigr),\n\\end{align}\n\\end{subequations}\nand the relation~\\eqref{eq:dt2_h_to_p_plus_1_over_dt} is obtained by combining~\\eqref{eqns:prop2_proof1_easy} with~\\eqref{eq:prop2_proof0}.\nTheorem~\\ref{prop:sec_order_p_order} eliminates the negative power of $\\Delta t$ from~\\eqref{eq:dt2_h_to_p_plus_1_over_dt} and ensures that we can take small~$\\Delta t$ even for a fixed mesh size from the viewpoint of accuracy.\n\\end{Rmk}\n\\section{Numerical schemes}\n\\label{nm}\nIn this section, we present finite difference schemes of second-order in time and of first- and second-order in space for problem~\\eqref{eqns:prob} by using the discretization ideas given in Section~\\ref{sec:FD_discretizations}.\n\\par\nSuppose that $u\\in C^0([0,T];C^1(\\bar\\Omega;\\mathbb{R}^d))$ and $\\zeta^0\\in C^0(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym})$ are given, and that Hypotheses~\\ref{hyp:u}, \\ref{hyp:dt} and~\\ref{hyp:mesh} hold true.\nOur schemes are written in a unified form for $d=1, 2(, 3)$ and $p=1, 2$; find $\\{\\zeta_h^n \\in V_h;\\ n=1, \\ldots, N_T\\}$ such that\n\\begin{subequations}\\label{scheme}\n\t\\begin{align}\n\t\\label{scheme_An_h:general_step}\n\t&&&& [\\mathcal{A}_h^{n,(p)} \\zeta_h] (x) & = F^n(x), & x &\\in \\bar\\Omega_h, \\ n\\ge 1, &&&& \\\\\n\t\\label{scheme_An_h:initial_step}\n\t&&&& \\zeta_h^0(x) & = \\zeta^0(x), & x &\\in \\bar\\Omega_h, &&&&\n\t\\end{align}\n\\end{subequations}\nwhich are equivalent to\n\\begin{subequations}\\label{scheme_detail}\n\t\\begin{align}\n\t\\frac{1}{2\\Delta t} \\Bigl[\n\t3\\zeta_h^n(x)\n\t-4 L_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-1}\\bigr) \\circ X_1^n \\bigr] (x) & L_1^n (x)^\\top \\notag\\\\\n\t+ \\tilde{L}_1^n 
(x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x) \\tilde{L}_1^n (x)^\\top\n\t\\Bigr] & = F^n(x), \n\t& x & \\in\\bar\\Omega_h, \\ n\\ge 2, \n\t\\label{scheme:general_step} \\\\\n\t%\n\t\\label{scheme:first_step}\n\t\\frac{1}{\\Delta t} \\Bigl[\n\t\\zeta_h^1(x)\n\t- L_1^1 (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x) L_1^1 (x)^\\top \\Bigr] \n\t& = F^1(x), & x & \\in\\bar\\Omega_h, \\\\\n\t%\n\t\\label{scheme:initial_step}\n\t\\zeta_h^0(x) & = \\zeta^0(x), & x & \\in\\bar\\Omega_h.\n\t\\end{align}\n\\end{subequations}\nThe unified scheme~\\eqref{scheme} (equivalent to~\\eqref{scheme_detail}) includes four schemes, i.e., $p=1$ and $2$ correspond to schemes of first- and second-order in space, respectively, and the spatial dimension~$d~(= 1, 2)$ is implicitly dealt with in the symbols~$\\bar\\Omega_h$ and~$V_h$.\nAn approximate initial value~$\\zeta_h^0 \\in V_h$ is given by~\\eqref{scheme:initial_step}.\nWe find $\\zeta_h^1\\in V_h$ from~\\eqref{scheme:first_step} and $\\zeta_h^n\\in V_h$ for $n \\ge 2$ from~\\eqref{scheme:general_step}.\nHere, we additionally provide a practical form of~\\eqref{scheme}:\n\\begin{subequations}\\label{scheme_explicit}\n\t\\begin{align}\n\t\\zeta_h^n(x) & =\n\t\\frac{4}{3} L_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-1} \\bigr) \\circ X_1^n \\bigr](x) L_1^n (x)^\\top \\notag\\\\\n\t& \\quad \n\t- \\frac{1}{3} \\tilde{L}_1^n (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x) \\tilde{L}_1^n (x)^\\top\n\t+ \\frac{2\\Delta t}{3} F^n(x), & x & \\in \\bar\\Omega_h, \\ n \\ge 2, \\label{scheme_explicit:general_step} \\\\\n\t%\n\t\\label{scheme_explicit:first_step}\n\t\\zeta_h^1(x) & =\n\tL_1^1 (x) \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x) L_1^1 (x)^\\top + \\Delta t F^1(x), &\n\tx & \\in \\bar\\Omega_h, \\\\\n\t%\n\t\\label{scheme_explicit:initial_step}\n\t\\zeta^0_h(x) & = \\zeta^0(x), &\n\tx & \\in 
\\bar\\Omega_h,\n\t\\end{align}\n\\end{subequations}\nwhich imply that scheme~\\eqref{scheme} is explicit.\n\\begin{Rmk}\nFrom Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} and Remark~\\ref{rmk:upwind_points}, we have $\\Gamma_{\\rm in} = \\emptyset$ and $X_1(\\Omega,t)=\\tilde{X}_1(\\Omega,t)=\\Omega~(t\\in [0, T])$, i.e., all upwind points lie in $\\bar\\Omega$.\nHence, the functions~$(\\Pi_h^{(p)}\\zeta_h^{n-1}) \\circ X_1^n~(n \\ge 1)$ and $(\\Pi_h^{(p)}\\zeta_h^{n-2}) \\circ \\tilde{X}_1^n~(n \\ge 2)$ are well defined in $\\bar\\Omega$ for $p=1,2$.\n\\end{Rmk}\n\\begin{Rmk}\nIn scheme~\\eqref{scheme}, we employ the backward {\\rm Euler} method~\\eqref{scheme:first_step} of first-order in time once to find $\\zeta_h^1$ needed in~\\eqref{scheme:general_step} with $n=2$. This is expected to have no influence on the second-order convergence in time, cf. \\cite{Notsu2016}.\n\\end{Rmk}\n\\begin{Rmk}\\label{rmk:symmetry}\nSuppose that Hypotheses~\\ref{hyp:u} and~\\ref{hyp:dt} hold true.\nThen, under $F \\in C(\\bar\\Omega\\times [0,T]; \\mathbb{R}^{d\\times d}_{\\rm sym})$ and $\\zeta^0 \\in C(\\bar\\Omega; \\mathbb{R}^{d\\times d}_{\\rm sym})$, the scheme~\\eqref{scheme} preserves the symmetry, i.e., $\\zeta_h^n(x)^{ \\top } = \\zeta_h^n(x)~(x\\in\\bar\\Omega_h,\\ n=0,\\ldots,N_T)$, as shown in the following.\nFor $d=1$ this is obvious, so let us consider $d = 2(, 3)$.\n$\\zeta_h^0(x)~(x\\in\\bar\\Omega_h)$ is symmetric by the symmetry of~$\\zeta^0$.\nWe show the symmetry of $\\zeta^1_h(x)~(x\\in\\bar\\Omega_h)$.\nNoting~\\eqref{scheme_explicit:first_step} and letting $A(x) = L_1^1(x)$, $B(x) = \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x)$, and $C(x) = \\Delta t F^1(x)$, we have\n\t\\begin{align*}\n\t\\zeta^1_h(x)^{ \\top }\n\t& = \\bigl[ A(x) B(x) A(x)^{ \\top } + C(x) \\bigr]^{ \\top } \n\t= A(x) B(x)^{ \\top } A(x)^{ \\top } + C(x)^{ \\top } \\\\\n\t& = A(x) B(x) A(x)^{ \\top } + C(x)\n\t= \\zeta^1_h(x),\n\t\\end{align*}\nwhich implies 
symmetry of~$\\zeta^1_h(x)$ for~$x\\in\\bar\\Omega_h$, where we have used the fact that $B(x)$ and $C(x)$ are symmetric in the second-to-last equality.\nFor~$n\\ge 2$, the symmetry of~$\\zeta_h^n(x)$ is obtained similarly from~\\eqref{scheme:general_step}.\n\\end{Rmk}\n\\subsection{Schemes in one-dimensional space~$(d=1)$}\nIn this subsection, we rewrite the finite difference scheme~\\eqref{scheme} in a unified form for $d=1$ and $p=1, 2$.\nWe introduce the simplified notations, $\\zeta^n_i \\coloneqq \\zeta_h^n(x_i)$, $u^n_i \\coloneqq u^n(x_i)$, $\\nabla u^n_i \\coloneqq (\\nabla u^n)(x_i)$, $F^n_i \\coloneqq F(x_{i},t^{n})$, $\\Lambda_\\Omega \\coloneqq \\{0,\\ldots,N\\}$, and $\\Lambda_T \\coloneqq \\{1,\\ldots,N_T\\}$.\nThe schemes are to find $\\{\\zeta^n_i \\in \\mathbb{R};\\ i \\in \\Lambda_\\Omega,\\ n \\in \\Lambda_T \\}$ such that\n\\begin{subequations}\\label{scheme_1d}\n\t\\begin{align}\n\t\\zeta^n_i & = \\frac{4}{3} ( 1 + \\Delta t \\nabla u^n_i )^2 \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-1}\\bigr) \\circ X_1^n \\bigr] (x_i) \n\t\\qquad\\qquad\\ \\notag\\\\\n\t& \\quad -\\frac{1}{3} ( 1 + 2\\Delta t \\nabla u^n_i )^2 \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x_i) + \\frac{2\\Delta t}{3} F^n_{i}, \n\t& i & \\in \\Lambda_\\Omega, \\ n \\ge 2, \n\t\\label{scheme_1d:general_step} \\\\\n\t%\n\t\\zeta^1_i & = ( 1 + \\Delta t \\nabla u^1_i )^2 \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x_i) + \\Delta t F^1_i, & i &\\in \\Lambda_\\Omega, \n\t\\label{scheme_1d:first_step} \\\\\n\t%\n\t\\zeta^0_i & = \\zeta^0(x_i), & i & \\in \\Lambda_\\Omega.\n\t\\label{scheme_1d:initial_step}\n\t\\end{align}\n\\end{subequations}\n\\par\nWe give the algorithm as follows:\n\\smallskip\n\\par\n\\textbf{Algorithm~1~($d=1$).}\nSet $\\bar\\Omega_h = \\{ x_i \\in\\bar\\Omega;\\ i \\in \\Lambda_\\Omega \\}$ with $h=a\/N$, and $\\{ \\zeta^0_i;\\ i\\in\\Lambda_\\Omega\\}$ by~\\eqref{scheme_1d:initial_step} to get 
$\\zeta_h^0\\in V_h$, where $N$ is an even number and $M = N\/2$ for $p=2$.\n\t\\smallskip\\\\\n\tSet~$n=1$. \\smallskip\\\\\n\t\\quad\n\tFor each $i \\in \\Lambda_\\Omega$ do:\n\t\t\\begin{enumerate}\n\t\t\\item Compute $F^1_i$, $u^1_i$, $\\nabla u^1_i$, and $y^1_i \\coloneqq X_1^{1} (x_i) = x_{i} - \\Delta t\\,u^1_i$.\n\t\t\\item Compute $Z_i^{1,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^0) \\circ X_1^1 ] (x_i) = ( \\Pi_h^{(p)}\\zeta_h^0) (y_i^1)$ according to~\\eqref{int1} with $i_0 = \\mathcal{I}(y_i^1; 0, a, N)$ for $p=1$, or \\eqref{int2} with $k_0 = \\mathcal{I}(y_i^1; 0, a, M)$ for $p=2$.\n\t\t\\item Compute $\\zeta^1_i$ by~\\eqref{scheme_1d:first_step}, which is equivalent to\n\t\t\\[\n\t\t\\zeta^1_i = ( 1 + \\Delta t \\nabla u^1_i )^2 \\, Z_i^{1,(p)} + \\Delta t F^1_i.\n\t\t\\]\n\t\t\\end{enumerate}\n\t(Here, computation of $\\zeta_h^1 \\in V_h$ is completed.) \\\\\n\tSet~$n=2$. \\smallskip\\\\\n\t\tWhile $n \\le N_T$ do:\\\\\n\t\t\\quad\n\t\tFor each $i\\in \\Lambda_\\Omega$ do:\n\\begin{enumerate}\n\t\\item Compute $F^n_i$, $u^n_i$, $\\nabla u^n_i$, $y^n_i \\coloneqq X_1^{n} (x_i) = x_{i} - \\Delta t\\,u^n_i$, and $\\tilde{y}^n_i \\coloneqq \\tilde{X}_1^{n} (x_i) = x_{i} - 2\\Delta t\\,u^n_i$.\n\t\t\\item Compute $Z_i^{n,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^{n-1}) \\circ X_1^n ] (x_i) = ( \\Pi_h^{(p)}\\zeta_h^{n-1}) (y_i^n)$ according to~\\eqref{int1} with $i_0 = \\mathcal{I}(y_i^n; 0, a, N)$ for $p=1$, or \\eqref{int2} with $k_0 = \\mathcal{I}(y_i^n; 0, a, M)$ for $p=2$.\n\t\tSimilarly, compute $\\tilde{Z}_i^{n,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^{n-2}) \\circ \\tilde{X}_1^n ] (x_i) = ( \\Pi_h^{(p)}\\zeta_h^{n-2}) (\\tilde{y}_i^n)$.\n\t\t\\item Compute $\\zeta^n_i$ by~\\eqref{scheme_1d:general_step}, which is equivalent to\n\t\t\\[\n\t\t\\zeta^n_i = \\frac{4}{3}( 1 + \\Delta t \\nabla u^n_i )^2 \\, Z_i^{n,(p)} - \\frac{1}{3}( 1 + 2\\Delta t \\nabla u^n_i )^2 \\, \\tilde{Z}_i^{n,(p)} + \\frac{2\\Delta t}{3} 
F^n_i.\n\t\t\\]\n\\end{enumerate}\t\t\n\t\\quad\n\t(Computation of $\\zeta_h^n \\in V_h$ is completed.) \\\\\n\t\\quad\n\tSet $n=n+1$.\n\\subsection{Schemes in two-dimensional space~($d=2$)}\\label{nm2}\nSimilarly to the previous subsection, we rewrite the unified finite difference scheme~\\eqref{scheme} for $d=2$ and $p=1,2$.\n\\par\nLet us introduce simplified notations, $\\zeta^n_{i,j} \\coloneqq \\zeta_h^n(x_{i,j})$, $u^n_{i,j} \\coloneqq u^n(x_{i,j})$, $\\nabla u^n_{i,j} \\coloneqq (\\nabla u^n)(x_{i,j})$, $F^n_{i,j} \\coloneqq F(x_{i,j},t^{n})$, and $\\Lambda_\\Omega \\coloneqq \\{(i,j);\\ i = 0,\\ldots,N_1,\\ j=0,\\ldots,N_2\\}$.\nThe schemes are to find $\\{\\zeta^n_{i,j} \\in \\mathbb{R}^{2\\times2}_{\\rm sym};\\ (i,j) \\in \\Lambda_\\Omega,\\ n \\in \\Lambda_T \\}$ such that\n\t\\begin{subequations}\\label{scheme_2d}\n\t\t\\begin{align}\n\t\t\\zeta_{i,j}^n & =\n\t\t\\frac{4}{3} \\bigl[I + \\Delta t (\\nabla u^n_{i,j}) \\bigr] \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-1} \\bigr) \\circ X_1^n \\bigr](x_{i,j}) \\bigl[I + \\Delta t (\\nabla u^n_{i,j}) \\bigr]^{ \\top } \\notag\\\\\n\t\t& \\quad - \\frac{1}{3} \\bigl[I + 2\\Delta t (\\nabla u^n_{i,j}) \\bigr] \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^{n-2} \\bigr) \\circ \\tilde{X}_1^n \\bigr] (x_{i,j}) \\bigl[I + 2\\Delta t (\\nabla u^n_{i,j}) \\bigr]^{ \\top } + \\frac{2\\Delta t}{3} F^n_{i,j}, &\n\t\t(i,j) & \\in \\Lambda_\\Omega, \\ n\\ge 2, \n\t\t\\label{scheme_2d:general_step} \\\\\n\t\t%\n\t\t\\zeta_{i,j}^1 & =\n\t\t\\bigl[I + \\Delta t (\\nabla u^1_{i,j}) \\bigr] \\bigl[ \\bigl( \\Pi_h^{(p)}\\zeta_h^0 \\bigr) \\circ X_1^1 \\bigr] (x_{i,j}) \\bigl[I + \\Delta t (\\nabla u^1_{i,j}) \\bigr]^{ \\top } + \\Delta t F^1_{i,j}, & (i,j) & \\in\\Lambda_\\Omega, \\label{scheme_2d:first_step} \\\\\n\t\t%\n\t\t\\zeta^0_{i,j} & = \\zeta^0(x_{i,j}), & (i,j) & \\in \\Lambda_\\Omega.\n\t\t\\label{scheme_2d:initial_step}\n\t\t\\end{align}\n\t\\end{subequations}\n\t%\n\t%\n\t%\n\t%\n\t%\n\\par\nWe give an algorithm of 
schemes~\\eqref{scheme_2d} for $d=2$ and $p=1,2$, while the construction is analogous to Algorithm~1 for $d=1$.\n\t\\smallskip \\par\n\t\\textbf{Algorithm~2~($d=2$).}\nSet $\\bar\\Omega_h = \\{ x_{i,j} \\in\\bar\\Omega;\\ (i,j) \\in \\Lambda_\\Omega \\}$ with $h_i=a_i\/N_i~(i=1,2)$, and $\\{ \\zeta^0_{i,j};\\ (i,j)\\in\\Lambda_\\Omega\\}$ by~\\eqref{scheme_2d:initial_step} to get $\\zeta_h^0\\in V_h$, where $N_i~(i=1,2)$ are even numbers and $M_i = N_i\/2~(i=1,2)$ for $p=2$.\n\t\\smallskip\\\\\n\tSet~$n=1$. \\smallskip\\\\\n\t\\quad\n\tFor each $(i,j) \\in \\Lambda_\\Omega$ do:\n\t\t\\begin{enumerate}\n\t\t\\item Compute $F^1_{i,j}$, $u^1_{i,j}$, $\\nabla u^1_{i,j}$, and $y^1_{i,j} \\coloneqq X_1^{1} (x_{i,j}) = x_{i,j} - \\Delta t\\,u^1_{i,j}$.\n\t\t\\item Compute $Z_{i,j}^{1,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^0) \\circ X_1^1 ] (x_{i,j}) = ( \\Pi_h^{(p)}\\zeta_h^0) (y_{i,j}^1)$ according to~\\eqref{int3} with $i_0 = \\mathcal{I}((y_{i,j}^1)_1;\\ 0, a_1, N_1)$ and $j_0 = \\mathcal{I}((y_{i,j}^1)_2;\\ 0, a_2, N_2)$ for $p=1$, or \\eqref{int4} with $k_0 = \\mathcal{I}((y_{i,j}^1)_1;\\ 0, a_1, M_1)$ and $l_0 = \\mathcal{I}((y_{i,j}^1)_2;\\ 0, a_2, M_2)$ for $p=2$.\n\t\t\\item Compute $\\zeta^1_{i,j}$ by~\\eqref{scheme_2d:first_step}, which is equivalent to\n\t\t\\[\n\t\t\\zeta_{i,j}^1 =\n\t\t\\bigl[I + \\Delta t (\\nabla u^1_{i,j}) \\bigr] Z_{i,j}^{1,(p)} \\bigl[I + \\Delta t (\\nabla u^1_{i,j}) \\bigr]^{ \\top } + \\Delta t F^1_{i,j}.\n\t\t\\]\n\t\t\\end{enumerate}\n\t(Here, computation of $\\zeta_h^1 \\in V_h$ is completed.) \\\\\n\tSet~$n=2$. 
\\smallskip\\\\\n\t\tWhile $n \\le N_T$ do:\\\\\n\t\t\\quad\n\t\tFor each $(i,j) \\in \\Lambda_\\Omega$ do:\n\\begin{enumerate}\n\t\\item Compute $F^n_{i,j}$, $u^n_{i,j}$, $\\nabla u^n_{i,j}$, $y^n_{i,j} \\coloneqq X_1^{n} (x_{i,j}) = x_{i,j} - \\Delta t\\,u^n_{i,j}$, and $\\tilde{y}^n_{i,j} \\coloneqq \\tilde{X}_1^{n} (x_{i,j}) = x_{i,j} - 2\\Delta t\\,u^n_{i,j}$.\n\t\t\\item Compute $Z_{i,j}^{n,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^{n-1}) \\circ X_1^n ] (x_{i,j}) = ( \\Pi_h^{(p)}\\zeta_h^{n-1}) (y_{i,j}^n)$ according to~\\eqref{int3} with $i_0 = \\mathcal{I}((y_{i,j}^n)_1;\\ 0, a_1, N_1)$ and $j_0 = \\mathcal{I}((y_{i,j}^n)_2;\\ 0, a_2, N_2)$ for $p=1$, or \\eqref{int4} with $k_0 = \\mathcal{I}((y_{i,j}^n)_1; 0, a_1, M_1)$ and $l_0 = \\mathcal{I}((y_{i,j}^n)_2; 0, a_2, M_2)$ for $p=2$.\n\t\tSimilarly, compute $\\tilde{Z}_{i,j}^{n,(p)} \\coloneqq [ ( \\Pi_h^{(p)}\\zeta_h^{n-2}) \\circ \\tilde{X}_1^n ] (x_{i,j}) = ( \\Pi_h^{(p)}\\zeta_h^{n-2}) (\\tilde{y}_{i,j}^n)$.\n\t\t\\item Compute $\\zeta^n_{i,j}$ by~\\eqref{scheme_2d:general_step}, which is equivalent to\n\t\t\\begin{align*}\n\t\t\\zeta_{i,j}^n & =\n\t\t\\frac{4}{3} \\bigl[I + \\Delta t (\\nabla u^n_{i,j}) \\bigr] Z_{i,j}^{n,(p)} \\bigl[I + \\Delta t (\\nabla u^n_{i,j}) \\bigr]^\\top - \\frac{1}{3} \\bigl[I + 2\\Delta t (\\nabla u^n_{i,j}) \\bigr] \\tilde{Z}_{i,j}^{n,(p)} \\bigl[I + 2\\Delta t (\\nabla u^n_{i,j}) \\bigr]^{ \\top } + \\frac{2\\Delta t}{3} F^n_{i,j}.\n\t\t\\end{align*}\n\\end{enumerate}\t\t\n\t\\quad\n\t(Computation of $\\zeta_h^n \\in V_h$ is completed.) 
\\\\\n\t\\quad\n\tSet $n=n+1$.\n\\section{Numerical results}\n\\label{numerics}\nIn this section, numerical results for problems with manufactured solutions are presented to observe the experimental convergence orders of the proposed schemes.\nIn the following, we denote scheme~\\eqref{scheme} with $p=1$ and $p=2$ by (S1) and (S2), respectively.\nFrom Theorem~\\ref{prop:sec_order_p_order}, the expected orders of convergence are of $O(\\Delta t^2+h^p)$ for $p=1,2$.\nTo see the experimental orders of convergence, the efficient choices of $\\Delta t$ for (S1) and (S2) are respectively $\\Delta t = c\\sqrt{h}$ and $\\Delta t = c^\\prime h$, for positive constants~$c$ and $c^\\prime$.\nThe choices of $\\Delta t$ for (S1) and (S2) lead to an expected order of convergence of $O(\\Delta t^2)\\ (=O(h^p))$.\nIn the computations below, as mentioned in Remark~\\ref{rmk:upwind_cell_1d}-(iii) and Remark~\\ref{rmk:upwind_cell_2d}-(iii), we employ the value of $\\zeta_{\\rm in}$ at the lattice point closest to an upwind point~$X_1^n(x)$ or $\\tilde{X}_1^n(x)$ for $x=x_{i}~(d=1)$ or $x_{i,j}~(d=2)$ when the upwind point is outside the domain, where the integer-valued index indicator function~$\\mathcal{I}$ given by~\\eqref{def:index_func} is used.\n\\par\nFor $\\psi_h:\\bar\\Omega_h\\to\\mathbb{R}$ and $\\phi_h = \\{\\phi_h^n:\\bar\\Omega_h\\to\\mathbb{R};\\ n=1,\\ldots,N_T \\}$, let $\\|\\cdot\\|_{\\ell^\\infty(\\bar\\Omega_h)}$ and $\\|\\cdot\\|_{\\ell^\\infty(\\ell^\\infty)}$ be norms defined by\n\\begin{align*}\n\\| \\psi_h \\|_{\\ell^\\infty(\\bar\\Omega_h)} & = \\| \\psi_h \\|_{\\ell^\\infty(\\bar\\Omega_h;\\mathbb{R})} \\coloneqq \\max \\bigl\\{ | \\psi_h(x) | ;\\ x\\in \\bar\\Omega_h \\bigr\\}, \\\\\n\\| \\phi_h \\|_{\\ell^\\infty(\\ell^\\infty)} & \\coloneqq \\max \\bigl\\{ \\| \\phi_h^n \\|_{\\ell^\\infty(\\bar{\\Omega}_h)};\\ n = 1,\\ldots, N_T \\bigr\\}.\n\\end{align*}\nLet $E_{ij} = E_{ij}(\\Delta t, h),\\ i,j=1,\\ldots,d,$ be the errors between a numerical solution~$\\zeta_h = \\{ 
\\zeta_h^n \\}_{n=1}^{N_T} \\subset V_h$ and a corresponding exact solution~$\\zeta \\in C([0,T];C(\\bar\\Omega;\\mathbb{R}^{d\\times d}_{\\rm sym}))$ defined by\n\\[\n E_{ij} = E_{ij}(\\Delta t, h) \\coloneqq \\bigl\\| [\\zeta_h]_{ij}-\\zeta_{ij} \\bigr\\|_{\\ell^\\infty(\\ell^\\infty)}, \\quad i, j = 1, \\ldots, d,\n\\]\nand $E_{11}$ is simply denoted by $E$ when $d=1$.\n\\begin{Rmk}\nTo solve the problems considered in this section, we assume a given source term and a prescribed velocity field. In addition, we need an initial condition and a boundary condition on the wall that we call the flow inlet. The initial condition $\\zeta_{h}^{0}(x)$ is taken directly from the exact solution $\\zeta_{\\rm exact}(x,0)$. The boundary condition is of Dirichlet type, i.e., we use the exact solution $\\zeta_{\\rm in} = \\zeta_{\\rm exact}^{n}(x_{0})$ at the first boundary point for a positive velocity field (if $u^n(x)<0$, the inlet is located on the opposite side of the domain, and we consider $\\zeta_{\\rm exact}^{n}(x_{N})$ instead). Hence, in the case depicted in Fig.~\\ref{fig:BC1}, the upwind point $X^n_1(x_0)$ at the previous time level lies outside the domain, and we impose the boundary condition $\\zeta_{\\rm in}^{n}(x)=\\zeta_{\\rm exact}^{n}(x)$. \n\n\tOn the opposite side of the domain, as represented in Fig.~\\ref{fig:NeumannBC1}, we do not impose any wall condition, since our method can also be used to update the value of the unknown function $\\zeta_{\\rm in}^{n}(x_N)$ on the outflow wall. 
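For concreteness, the inlet treatment just described can be sketched in a few lines of Python. This is a minimal illustration under our own simplifying assumptions ($d=1$, $p=1$, hypothetical names such as `upwind_value` and `zeta_in`), not the implementation discussed in Appendix~\ref{A.subsec:PseudoCode}: the interpolation at an upwind point falls back to the Dirichlet inlet value whenever the upwind point leaves the domain.

```python
import numpy as np

def upwind_value(zeta, y, a, N, zeta_in):
    """p=1 (linear) interpolation of the grid function zeta at the upwind
    point y on the uniform grid {i*h; i=0..N}, h=a/N over [0, a].
    If y lies outside the domain (the situation sketched in Fig. BC1),
    fall back to the prescribed inlet value zeta_in instead of
    extrapolating."""
    h = a / N
    if y < 0.0 or y > a:             # upwind point left the domain through a wall
        return zeta_in               # Dirichlet value (exact solution in the tests)
    i0 = min(int(y // h), N - 1)     # cell [i0*h, (i0+1)*h] containing y
    s = (y - i0 * h) / h             # local coordinate in [0, 1]
    return (1.0 - s) * zeta[i0] + s * zeta[i0 + 1]
```

Points whose upwind points remain inside $[0,a]$, including the point on the outflow wall, are updated by the scheme itself through this interpolation.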
In addition, it is also possible to assume a Neumann boundary condition on this wall, in which case we apply the method up to $x_{N-1}$ and update the last point explicitly as $\\zeta_{\\rm in}^{n}(x_N) = \\zeta_{\\rm in}^{n}(x_{N-1})$.\n\n\tMore details about the implementation of these strategies can be found in Appendix~\\ref{A.subsec:PseudoCode}.\t \n\n\n\n\n\t\t\\begin{figure}[htbp]\n\t\t\t\\centering\n\t\t\n\t\t\t\\begin{tikzpicture}[thick,scale=1.7, every node\/.style={scale=1.2}]\n\t\t\t\\draw[thick,->] (-1.0,-1.0) -- (1.25,-1.0);\n\t\t\t\\draw[thick,->] (-1.0,-1.0) -- (-1.0,1.1);\n\t\t\t\\draw[thick] (1.0,-1.05) -- (1.0,-0.95);\n\t\t\t\n\t\t\t\\draw (1.5,-0.5) node{$t^{n-1}$};\n\t\t\t\\draw (1.5,0.0) node{$t^{n}$};\n\t\t\t\\draw (-1.05,-1.25) node{$x_0=0$};\n\t\t\t\\draw (0.95,-1.25) node{$x_N=a$};\n\t\t\t\n\t\t\n\t\t\t\\draw[dashed] (-1.0,-0.5) -- (1.0,-0.5);\n\t\t\t\\draw[dashed] (-1.0,0.0) -- (1.0,0.0);\n\t\t\t\\draw[dashed,blue] (-1.5,-1.0) -- (-0.5,1.0);\n\t\t\t\\draw (0.3,1.0) node{characteristic};\n\t\t\n\t\t\t\\filldraw[fill] (-1.0,0.0) circle (0.02cm)\n\t\t\t(-1.0,-0.5) circle (0.02cm)\n\t\t\t(-1.25,-0.5) circle (0.02cm);\n\t\t\n\t\t\t\\draw (-0.7,0.2) node{$\\zeta^n_{\\rm in}$};\n\t\t\t\\draw (-0.7,-0.3) node{$\\zeta^{n-1}_{\\rm in}$};\n\t\t\t\\draw (-1.75,-0.4) node{$X^{n}_{1}(x_0)$};\n\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\caption{Sketch of the wall treatment for an unknown boundary condition.}\n\t\t\t\\label{fig:BC1}\n\t\t\\end{figure}\n\n\t\t\\begin{figure}[htbp]\n\t\t\t\\centering\n\t\t\n\t\t\t\\begin{tikzpicture}[thick,scale=1.7, every node\/.style={scale=1.7}]\n\t\t\n\t\t\t\\draw[thick,->] (-1.0,-1.0) -- (1.25,-1.0);\n\t\t\t\\draw[thick,->] (-1.0,-1.1) -- (-1.0,1.1);\n\t\t\t\\draw[thick] (1.0,-1.1) -- (1.0,-0.9);\n\t\t\t\n\t\t\t\\draw (-1.25,1.0) node{$t$};\n\t\t\t\\draw (1.35,-1.25) node{$x$};\n\t\t\t\\draw (1.75,0.25) node{$\\zeta^n_{i}=\\zeta^n_{i-1}$};\n\t\t\t\\draw (0.45,0.25) 
node{$\\zeta^n_{i-1}$};\n\t\t\t\n\t\t\n\t\t\t\\draw[dashed] (-1.0,-0.5) -- (1.0,-0.5);\n\t\t\t\\draw[dashed] (-1.0,0.0) -- (1.0,0.0);\n\t\t\t\\draw[dashed,blue] (0.1,-1.0) -- (0.85,0.5);\n\t\t\t\\draw[dashed,blue] (0.5,-1.0) -- (1.0,0.0);\n\t\t\t\n\t\t\n\t\t\t\\draw (-1.0,-1.2) node{$0$};\n\t\t\t\\draw (0.9,-1.2) node{$x_N$};\n\t\t\n\t\t\n\t\t\t\\filldraw[fill] (0.35,-0.5) circle (0.02cm)\n\t\t\t(0.6,0.0) circle (0.02cm)\n\t\t\t(1.0,0.0) circle (0.02cm);\n\t\t\n\t\t\n\t\t\t\\draw (-1.75,0.1) node{$t^n$};\n\t\t\t\\draw (-1.5,-0.4) node{$t^{n-1}$};\n\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\caption{Sketch of the wall treatment for a Neumann boundary condition.}\n\t\t\t\\label{fig:NeumannBC1}\n\t\t\\end{figure}\n\n\\end{Rmk}\n\\subsection{Examples in one-dimensional space~$(d=1)$}\nWe consider the following example in one-dimensional space.\n\\begin{Ex}[$d=1$]\\label{ex:1d}\nIn problem~\\eqref{eqns:prob}, let $d=1$, $\\Omega = (0,1)$ and $T=1$.\nWe consider three functions for the velocity:\n\\[\n (i)~u(x,t) = t, \\qquad\n (ii)~u(x,t) = x+t, \\qquad\n (iii)~u(x,t) = \\sin(x+t),\n\\]\nwhich imply $\\Gamma_{\\rm in} = \\{ 0 \\}~(t\\in (0,T])$.\nThe functions~$F$, $\\zeta_{\\rm in}$ and $\\zeta^0$ are given so that the exact solution is\n\\[\n \\zeta(x,t) = \\sin (x+t) + 2.\n\\]\n\\end{Ex}\n\\par\nWe solve Example~\\ref{ex:1d} by (S1) with $\\Delta t = c\\sqrt{h}$ for $c = 1\/50$ and (S2) with $\\Delta t = c^\\prime h$ for $c^\\prime = 1$, where the mesh is constructed with $h=1\/N$ for $N = 10, 20, 40, 80, 160$ and $320$; the constants $c$ and $c^\\prime$ are chosen as large as possible in order to numerically verify the convergence order of the temporal discretizations.\nTables~\\ref{table:ex1_s1_dt_sqrth} and \\ref{table:ex1_s2_dt_h} show the values of the error $E$ and their slopes in~$\\Delta t$.\nAccording to the results in the tables,\twe can confirm that (S1) and (S2) are of second-order in $\\Delta t$ for the three cases of velocity, $(i)$, $(ii)$ and $(iii)$.\nThese 
results are consistent with the theoretical results in Theorem~\\ref{prop:sec_order_p_order}.\n\t\\begin{table}[!htbp]\n\t\t\\centering\n\t\t\\caption{Example~\\ref{ex:1d} by {\\rm (S1)} with $\\Delta t = c \\sqrt{h}$~$(c=1\/50)$: Values of~$E$ and their slopes in $\\Delta t$.}\n\t\t\\begin{tabular}{rrrcrrcrr}\n\t\t\t\\toprule\n\t\t\t& \\multicolumn{2}{c}{$(i)$} && \\multicolumn{2}{c}{$(ii)$} && \\multicolumn{2}{c}{$(iii)$} \\\\ \\cline{2-3}\\cline{5-6}\\cline{8-9}\n\t\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\t\t\\hline\n\t\t\t$10$ & $1.54 \\times 10^{-2}$ & -- && $3.45 \\times 10^{-2}$ & -- && $2.11 \\times 10^{-2}$ & -- \\\\\n\t\t\t$20$ & $8.07 \\times 10^{-3}$ & $1.86$ && $1.83 \\times 10^{-2}$ & $1.87$ && $1.11 \\times 10^{-2}$ & $1.86$ \\\\\n\t\t\t$40$ & $4.15 \\times 10^{-3}$ & $1.92$ && $9.38 \\times 10^{-3}$ & $1.92$ && $5.69 \\times 10^{-3}$ & $1.92$ \\\\\n\t\t\t$80$ & $2.10 \\times 10^{-3}$ & $1.96$ && $4.75 \\times 10^{-3}$ & $1.96$ && $2.88 \\times 10^{-3}$ & $1.96$ \\\\\n\t\t\t$160$ & $1.06 \\times 10^{-3}$ & $1.98$ && $2.39 \\times 10^{-3}$ & $1.98$ && $1.45 \\times 10^{-3}$ & $1.98$ \\\\\n\t\t\t$320$ & $5.31 \\times 10^{-4}$ & $1.99$ && $1.13 \\times 10^{-3}$ & $2.16$ && $7.27 \\times 10^{-4}$ & $1.99$ \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{table:ex1_s1_dt_sqrth}\n\t\\end{table}\n\t\\begin{table}[!htbp]\n\t\t\\centering\n\t\t\\caption{Example~\\ref{ex:1d} by {\\rm (S2)} with $\\Delta t = h$~$(c^\\prime = 1)$: Values of~$E$ and their slopes in $\\Delta t$.}\n\t\t\\begin{tabular}{rrrcrrcrr}\n\t\t\t\\toprule\n\t\t\t& \\multicolumn{2}{c}{$(i)$} && \\multicolumn{2}{c}{$(ii)$} && \\multicolumn{2}{c}{$(iii)$} \\\\ \\cline{2-3}\\cline{5-6}\\cline{8-9}\n\t\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E$} & 
\\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\t\t\\hline\n\t\t\t$10$ & $4.65 \\times 10^{-3}$ & -- && $8.05 \\times 10^{-2}$ & -- && $1.65 \\times 10^{-2}$ & -- \\\\\n\t\t\t$20$ & $1.11 \\times 10^{-3}$ & $2.07$ && $2.19 \\times 10^{-2}$ & $1.88$ && $5.45 \\times 10^{-3}$ & $1.60$ \\\\\n\t\t\t$40$ & $2.68 \\times 10^{-4}$ & $2.04$ && $5.63 \\times 10^{-3}$ & $1.96$ && $1.53 \\times 10^{-3}$ & $1.84$ \\\\\n\t\t\t$80$ & $6.59 \\times 10^{-5}$ & $2.03$ && $1.42 \\times 10^{-3}$ & $1.98$ && $4.02 \\times 10^{-4}$ & $1.93$ \\\\\n\t\t\t$160$ & $1.63 \\times 10^{-5}$ & $2.01$ && $3.58 \\times 10^{-4}$ & $1.99$ && $1.03 \\times 10^{-4}$ & $1.97$ \\\\\n\t\t\t$320$ & $4.06 \\times 10^{-6}$ & $2.01$ && $8.96 \\times 10^{-5}$ & $2.00$ && $2.61 \\times 10^{-5}$ & $1.98$ \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\label{table:ex1_s2_dt_h}\n\t\\end{table}\n\nIn order to numerically verify that our methodology is stable for small time-steps, we fix a coarse mesh $h=1\/40$ and the finest mesh $h=1\/320$, and reduce the time-step as $\\Delta t(k)=\\frac{\\sqrt{h}\/50}{2^{k}}$ for the first-order scheme and as $\\Delta t(k)=\\frac{h}{2^{k}}$ for the second-order method. Results for (S1) are reported in Table \\ref{tableX1}, while Table \\ref{tableX2} reports the results for (S2). \n\t\nAccording to these tables, we can confirm that both the first- and second-order spatial discretization schemes are unconditionally stable, since the errors decrease as $\\Delta t$ is reduced. 
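The tabulated slopes are the observed orders of convergence, computed from consecutive rows as $\log(E_{k-1}/E_k)/\log(\Delta t_{k-1}/\Delta t_k)$. As a sanity check, the following Python sketch recomputes them from the error values of column $(i)$ in Tables~\ref{table:ex1_s1_dt_sqrth} and~\ref{table:ex1_s2_dt_h}; they match the tabulated slopes up to rounding of the printed digits:

```python
import math

def observed_orders(errors, dts):
    """Observed convergence orders log(E_{k-1}/E_k) / log(dt_{k-1}/dt_k)."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(dts[i - 1] / dts[i])
            for i in range(1, len(errors))]

Ns = [10, 20, 40, 80, 160, 320]

# (S1), velocity (i): dt = sqrt(h)/50 with h = 1/N (errors transcribed
# from the first table).
e_s1 = [1.54e-2, 8.07e-3, 4.15e-3, 2.10e-3, 1.06e-3, 5.31e-4]
dt_s1 = [math.sqrt(1.0 / N) / 50 for N in Ns]

# (S2), velocity (i): dt = h = 1/N (errors from the second table).
e_s2 = [4.65e-3, 1.11e-3, 2.68e-4, 6.59e-5, 1.63e-5, 4.06e-6]
dt_s2 = [1.0 / N for N in Ns]

print([round(s, 2) for s in observed_orders(e_s1, dt_s1)])  # all close to 2
print([round(s, 2) for s in observed_orders(e_s2, dt_s2)])  # all close to 2
```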
It is important to highlight that the error for the smallest time-step in Table \\ref{tableX2} for $h=1\/40$ is approximately two orders of magnitude smaller than the error for the largest time-step, confirming the good stability properties of the second-order scheme.\n\n\t\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Example~\\ref{ex:1d} by {\\rm (S1)}: reducing the time-step as $\\Delta t(k)=\\frac{\\sqrt{h}\/50}{2^{k}}$.}\n\t\\label{tableX1}\n\t\\begin{tabular}{rrr}\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{$h = 0.025$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$k$} & \\multicolumn{1}{c}{$\\Delta t$} & \\multicolumn{1}{c}{$Error$} \\\\\n\t\t\\hline\n\t\t$0$ & $3.16 \\times 10^{-3}$ & $1.18375 \\times 10^{-2}$ \\\\\n\t\t$1$ & $1.58 \\times 10^{-3}$ & $1.08387 \\times 10^{-2}$ \\\\\n\t\t$2$ & $7.91 \\times 10^{-4}$ & $1.03445 \\times 10^{-2}$ \\\\\n\t\t$3$ & $3.95 \\times 10^{-4}$ & $1.01025 \\times 10^{-2}$ \\\\\n\t\t$4$ & $1.98 \\times 10^{-4}$ & $9.98198 \\times 10^{-3}$ \\\\\n\t\t$5$ & $9.88 \\times 10^{-5}$ & $9.92182 \\times 10^{-3}$ \\\\\n\t\t$6$ & $4.94 \\times 10^{-5}$ & $9.89128 \\times 10^{-3}$ \\\\\n\t\n\t\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{$h = 0.003125$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$k$} & \\multicolumn{1}{c}{$\\Delta t$} & \\multicolumn{1}{c}{$Error$} \\\\\n\t\t\\hline\n\t\t$0$ & $1.12 \\times 10^{-3}$ & $1.94633 \\times 10^{-3}$ \\\\\n\t\t$1$ & $5.59 \\times 10^{-4}$ & $1.57935 \\times 10^{-3}$ \\\\\n\t\t$2$ & $2.80 \\times 10^{-4}$ & $1.39709 \\times 10^{-3}$ \\\\\n\t\t$3$ & $1.40 \\times 10^{-4}$ & $1.30627 \\times 10^{-3}$ \\\\\n\t\t$4$ & $6.99 \\times 10^{-5}$ & $1.26087 \\times 10^{-3}$ \\\\\n\t\t$5$ & $3.49 \\times 10^{-5}$ & $1.23823 \\times 10^{-3}$ \\\\\n\t\t$6$ & $1.75 \\times 10^{-5}$ & $1.22691 \\times 10^{-3}$ \\\\\n\t\n\t\n\t\t\\hline\n\t\\end{tabular}\n\n\\end{table}\n\n\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Example~\\ref{ex:1d} by {\\rm (S2)}: reducing the time-step as $\\Delta 
t(k)=\\frac{h}{2^{k}}$.}\n\t\\label{tableX2}\n\t\\begin{tabular}{rrr}\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{$h = 0.025$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$k$} & \\multicolumn{1}{c}{$\\Delta t$} & \\multicolumn{1}{c}{$Error$} \\\\\n\t\t\\hline\n\t\t$0$ & $2.50 \\times 10^{-2}$ & $5.63 \\times 10^{-3}$ \\\\\n\t\t$1$ & $1.25 \\times 10^{-2}$ & $1.50 \\times 10^{-3}$ \\\\\n\t\t$2$ & $6.25 \\times 10^{-3}$ & $4.30 \\times 10^{-4}$ \\\\\n\t\t$3$ & $3.13 \\times 10^{-3}$ & $1.58 \\times 10^{-4}$ \\\\\n\t\t$4$ & $1.56 \\times 10^{-3}$ & $8.97 \\times 10^{-5}$ \\\\\n\t\t$5$ & $7.81 \\times 10^{-4}$ & $7.27 \\times 10^{-5}$ \\\\\n\t\t$6$ & $3.91 \\times 10^{-4}$ & $6.84 \\times 10^{-5}$ \\\\\n\t\n\t\n\t\t\\hline\n\t\t\\multicolumn{3}{c}{$h = 0.003125$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$k$} & \\multicolumn{1}{c}{$\\Delta t$} & \\multicolumn{1}{c}{$Error$} \\\\\n\t\t\\hline\n\t\t$0$ & $3.13 \\times 10^{-3}$ & $8.96 \\times 10^{-5}$ \\\\\n\t\t$1$ & $1.56 \\times 10^{-3}$ & $2.34 \\times 10^{-5}$ \\\\\n\t\t$2$ & $7.81 \\times 10^{-4}$ & $6.64 \\times 10^{-6}$ \\\\\n\t\t$3$ & $3.91 \\times 10^{-4}$ & $2.41 \\times 10^{-6}$ \\\\\n\t\t$4$ & $1.95 \\times 10^{-4}$ & $1.36 \\times 10^{-6}$ \\\\\n\t\t$5$ & $9.77 \\times 10^{-5}$ & $1.10 \\times 10^{-6}$ \\\\\n\t\t$6$ & $4.88 \\times 10^{-5}$ & $1.03 \\times 10^{-6}$ \\\\\n\t\n\t\n\t\t\\hline\n\t\\end{tabular}\n\n\\end{table}\n\n\n\\subsection{Examples for the two-dimensional case~$(d=2)$}\nWe set the next example in two-dimensional space.\n\\begin{Ex}[$d=2$]\\label{ex:2d}\n\tIn problem~\\eqref{eqns:prob}, let $d=2$, $\\Omega = (0,1)^d$ and $T=1$.\n\tWe consider three functions for the velocity:\n\\begin{eqnarray*}\n & (i)~u(x,t) = (t,t)^{ \\top }, \\quad\n (ii)~u(x,t) = (x_1+t,x_2+t)^{ \\top }, \\\\\n & (iii)~u(x,t) = (\\sin(x_1+x_2+t), \\sin(x_1+x_2+t))^{ \\top },\n\\end{eqnarray*}\n\twhich imply $\\Gamma_\\mathrm{in} = \\{ (s,0)^{ \\top }\\in\\partial\\Omega;\\ s\\in [0, 1] \\} \\cup \\{ (0,s)^{ \\top 
}\\in\\partial\\Omega;\\ s\\in [0, 1] \\}$~$(t\\in (0,T])$.\n\tThe functions~$F$, $\\zeta_\\mathrm{in}$ and $\\zeta^0$ are given so that the exact solution is\n\t\\[\n\t \\zeta(x,t) = \n\t \\begin{bmatrix}\n\t \\sin(x_1 + x_2 + t) + 2 & \\sin(x_1 + x_2 + t) \\\\\n\t \\sin(x_1 + x_2 + t) & \\sin(x_1 + x_2 + t) + 2\n\t \\end{bmatrix}.\n\t\\]\n\\end{Ex}\n\\par\nWe solve Example~\\ref{ex:2d} by (S1) with $\\Delta t = c\\sqrt{h}$ for $c = 1\/20$ and (S2) with $\\Delta t = c^\\prime h$ for $c^\\prime = 1\/10$, where the mesh is constructed for $h_1=h_2=h=1\/N$, i.e., $N_1=N_2=N$, with $N = 10, 20, 40$ and $80$.\nTables~\\ref{table:ex2_s1_dt_sqrth} and \\ref{table:ex2_s2_dt_h} show the values of error $E_{11}$ and their slopes in~$\\Delta t$. Slope results for~$E_{12}$ and~$E_{22}$ adopting different velocity fields $(i)$, $(ii)$ and $(iii)$ are very similar to those obtained for $E_{11}$; thus they are omitted here in order to save space.\nWe can confirm that (S1) and (S2) are of second-order in $\\Delta t$ in two-dimensional space for the three cases of velocity, $(i)$, $(ii)$ and $(iii)$.\nThese results are consistent with the theoretical results in Theorem~\\ref{prop:sec_order_p_order}.\n\\begin{table}[!htbp]\n\t\\centering\n\t\\caption{{Example~\\ref{ex:2d} by {\\rm (S1)} with $\\Delta t = c\\sqrt{h}$~$(c=1\/20)$:} Values of~$E_{11}$ and their slopes in $\\Delta t$.}\n\t\\begin{tabular}{rrrcrrcrr}\n\t\\toprule\n\t& \\multicolumn{2}{c}{$(i)$} && \\multicolumn{2}{c}{$(ii)$} && \\multicolumn{2}{c}{$(iii)$} \\\\ \\cline{2-3}\\cline{5-6}\\cline{8-9}\n\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\\hline\n\t$10$ & $3.87 \\times 10^{-2}$ & -- && $3.84 \\times 10^{-2}$ & -- && $3.87 \\times 10^{-2}$ & -- \\\\\n\t$20$ & $1.98 \\times 10^{-2}$ & $1.94$ && $1.96 \\times 10^{-2}$ & $1.94$ && $1.98 \\times 
10^{-2}$ & $1.94$ \\\\\n\t$40$ & $9.99 \\times 10^{-3}$ & $1.97$ && $9.94 \\times 10^{-3}$ & $1.97$ && $9.99 \\times 10^{-3}$ & $1.97$ \\\\\n\t$80$ & $5.03 \\times 10^{-3}$ & $1.98$ && $5.01 \\times 10^{-3}$ & $1.98$ && $5.03 \\times 10^{-3}$ & $1.98$ \\\\ \n\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:ex2_s1_dt_sqrth}\n\\end{table}\n\\begin{table}[!htbp]\n\t\\centering\n\t\\caption{{Example~\\ref{ex:2d} by {\\rm (S2)} with $\\Delta t = c^\\prime h$~$(c^\\prime=1\/10)$:} Values of~$E_{11}$ and their slopes in $\\Delta t$.}\n\t\\begin{tabular}{rrrcrrcrr}\n\t\\toprule\n\t& \\multicolumn{2}{c}{$(i)$} && \\multicolumn{2}{c}{$(ii)$} && \\multicolumn{2}{c}{$(iii)$} \\\\ \\cline{2-3}\\cline{5-6}\\cline{8-9}\n\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\\hline\n\t$10$ & $2.07 \\times 10^{-4}$ & -- && $2.18 \\times 10^{-3}$ & -- && $9.79 \\times 10^{-4}$ & -- \\\\\n\t$20$ & $5.10 \\times 10^{-5}$ & $2.02$ && $5.35 \\times 10^{-4}$ & $2.02$ && $2.53 \\times 10^{-4}$ & $1.95$ \\\\\n\t$40$ & $1.27 \\times 10^{-5}$ & $2.00$ && $1.32 \\times 10^{-4}$ & $2.02$ && $6.39 \\times 10^{-5}$ & $1.98$ \\\\\n\t$80$ & $3.17 \\times 10^{-6}$ & $2.00$ && $3.27 \\times 10^{-5}$ & $2.01$ && $1.61 \\times 10^{-5}$ & $1.99$ \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:ex2_s2_dt_h}\n\\end{table}\n\\subsection{The Oldroyd-B constitutive equation in two-dimensional space}\nWe apply our approximations of the upper-convected time derivative of second-order in time~\\eqref{eq:sec_order_p_order} in Theorem~\\ref{prop:sec_order_p_order} for solving a problem governed by the Oldroyd-B constitutive equation in two-dimensional space; find $\\zeta: \\Omega\\times (0, T) \\to \\mathbb{R}^{d\\times d}_{\\rm sym}$ such that\n\t%\n\t\\begin{subequations}\\label{eqns:prob_ob}\n\t\t\\begin{align}\n\t\t&&&& \\zeta + Wi \\, 
\\uctd{\\zeta} & = 2 ( 1 - \\beta ) D(u) + F && \\mbox{in}\\ \\Omega\\times (0, T), &&&&\n\t\t\\label{eq:prob_ob1}\\\\\n\t\t&&&& \\zeta & = \\zeta_\\mathrm{in} && \\mbox{on}\\ \\Gamma_\\mathrm{in}\\times (0, T), &&&&\n\t\t\\label{eq:prob_ob2}\\\\\n\t\t&&&& \\zeta & = \\zeta^0 && \\mbox{in}\\ \\Omega,\\ \\mbox{at}\\ t=0. &&&&\n\t\t\\end{align}\n\t\\end{subequations}\n\\par\nThe scheme to solve problem~\\eqref{eqns:prob_ob} is to find $\\{\\zeta_h^n \\in V_h;\\ n=1, \\ldots, N_T\\}$ such that\n\\begin{subequations}\\label{scheme_ob}\n\t\\begin{align}\n\t\\label{scheme_ob:general_step}\n\t\\zeta_h^n(x) + Wi \\, [\\mathcal{A}_h^{n,(p)} \\zeta_h] (x)\n\t& = 2(1-\\beta)D(u^n)(x) + F^n(x), & x & \\in \\bar\\Omega_h,\\ n \\ge 1, \\\\\n\t\\label{scheme_ob:initial_step}\n\t\\zeta_h^0(x) & = \\zeta^0(x), & x & \\in \\bar\\Omega_h,\n\t\\end{align}\n\\end{subequations}\nwhere $\\mathcal{A}_h^{n,(p)} \\zeta_h:\\bar\\Omega_h\\to\\mathbb{R}^{d\\times d}_{\\rm sym}$ is the function defined already by~\\eqref{def:An_h}.\nWhen an upwind point is outside the domain, we employ a value of $\\zeta_\\mathrm{in}$ at closest lattice point to the upwind point similarly to the case of scheme~\\eqref{scheme} as mentioned in Remark~\\ref{rmk:upwind_cell_2d}-(iii).\nIn the following, scheme~\\eqref{scheme_ob} with $p=1$ and $p=2$ for problem~\\eqref{eqns:prob_ob} are called $({\\rm S}1)^\\prime$ and $({\\rm S}2)^\\prime$, respectively.\n\\par\nWe set two examples below:\n\\begin{Ex}[$d=2$]\\label{ex:OldB2D}\n\tIn problem~\\eqref{eqns:prob_ob}, let $d=2$, $\\Omega = (0,1)^d$, $T=1$ and $\\beta = 1\/9$.\n\tWe consider six values of the {\\rm Weissenberg} number $Wi$,\n\t\\[\n\t\tWi = 0.025, 1, 5, 10, 50, 100,\n\t\\]\n\tand the following function for the velocity field:\n\\begin{align*}\n u(x,t) & = (\\sin(x_1+x_2+t), \\sin(x_1+x_2+t))^{ \\top },\n\\end{align*}\n\twhich implies $\\Gamma_\\mathrm{in} = \\{ (s,0)^{ \\top }\\in\\partial\\Omega;\\ s\\in [0, 1] \\} \\cup \\{ (0,s)^{ \\top 
}\\in\\partial\\Omega;\\ s\\in [0, 1] \\}$.\n\tThe functions~$F$, $\\zeta_{\\rm in}$ and $\\zeta^0$ are given so that the exact solution is \n\t\\[\n\t \\zeta(x,t) = \n\t \\begin{bmatrix}\n\t \\sin(x_1 + x_2 + t) + 2 & \\sin(x_1 + x_2 + t) \\\\\n\t \\sin(x_1 + x_2 + t) & -\\sin(x_1 + x_2 + t) + 2\n\t \\end{bmatrix}.\n\t\\]\n\\end{Ex}\n\\begin{Ex}[$d=2$, {\\cite{Venkatesan2017}}]\\label{ex:Venkatesan2017}\n\tIn problem~\\eqref{eqns:prob_ob}, let $d=2$, $\\Omega = (0,1)^d$, $T=0.5$, $\\beta = 0.75$ and $Wi = 0.25$.\n\tWe consider the following function for the velocity field:\n\\[\nu(x, t) = (\\exp(-0.1t)\\sin(\\pi x_1), -\\pi\\exp(-0.1t)x_2\\cos(\\pi x_1))^{ \\top },\n\\]\n\twhich implies $\\Gamma_{\\rm in} = \\{ (s,0)^{ \\top }\\in\\partial\\Omega;\\ s\\in [0, 1] \\} \\cup \\{ (0,s)^{ \\top }\\in\\partial\\Omega;\\ s\\in [0, 1] \\}$.\n\tThe functions~$F$, $\\zeta_{\\rm in}$ and $\\zeta^0$ are given so that the exact solution is\n\\[\n\\zeta(x,t) = \n\\begin{bmatrix}\n\t\\exp(-0.1t)\\sin(\\pi x_1) & -\\pi\\exp(-0.1t)x_2\\cos(\\pi x_1) \\\\ \n\t-\\pi\\exp(-0.1t)x_2\\cos(\\pi x_1) & \\exp(-0.1t)\\sin(\\pi x_1)\\cos(\\pi x_2) \\end{bmatrix}.\n\\]\n\\end{Ex}\n\\par\nWe solve Example~\\ref{ex:OldB2D} by $({\\rm S}1)^\\prime$ with $\\Delta t = c\\sqrt{h}$ for $c = 1\/50$ and $({\\rm S}2)^\\prime$ with $\\Delta t = c^\\prime h$ for $c^\\prime = 1\/5$, where the mesh is constructed for $h_1=h_2=h=1\/N$, i.e., $N_1=N_2=N$, with $N = 10, 20, 40$ and $80$.\nIn order to further investigate the errors and the orders of convergence of the schemes for solving problem~\\eqref{eqns:prob_ob}, we give the results for the three different components $\\zeta_{11}$, $\\zeta_{12}$ and $\\zeta_{22}$.\nTables~\\ref{table:ErrorOB2D1stOrder} and~\\ref{table:ErrorOB2D2stOrder} show the results by $({\\rm S}1)^\\prime$ and $({\\rm S}2)^\\prime$, respectively, for~$Wi = 0.025$.\nFrom a quantitative point of view, the results are consistent with the theoretical results in 
Theorem~\\ref{prop:sec_order_p_order}.\n\\begin{table}[htbp!]\n\t\t\\centering\n\t\t\\caption{Example~\\ref{ex:OldB2D} by $({\\rm S}1)^\\prime$ with $\\Delta t = c\\sqrt{h}$~$(c=1\/50)$: Values of each tensor entry~$E_{11},E_{12},E_{22}$ and their slopes in $\\Delta t$ for $Wi = 0.025$.}\n\t\t\\begin{tabular}{rrrcrrcrr}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\t\\hline\n\t\t\t$10$ & $2.03 \\times 10^{-3}$ & -- && $2.03 \\times 10^{-3}$ & -- && $2.03 \\times 10^{-3}$ & -- \\\\\n\t\t\t$20$ & $1.02 \\times 10^{-3}$ & $1.99$ && $1.02 \\times 10^{-3}$ & $1.99$ && $1.02 \\times 10^{-3}$ & $1.99$ \\\\\n\t\t\t$40$ & $5.11 \\times 10^{-4}$ & $1.99$ && $5.11 \\times 10^{-4}$ & $1.99$ && $5.11 \\times 10^{-4}$ & $1.99$ \\\\\n\t\t\t$80$ & $2.56 \\times 10^{-4}$ & $1.99$ && $2.56 \\times 10^{-4}$ & $1.99$ && $2.56 \\times 10^{-4}$ & $1.99$ \\\\\n\t\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:ErrorOB2D1stOrder}\n\\end{table}\n\\begin{table}[htbp!]\n\t\\centering\n\t\\caption{Example~\\ref{ex:OldB2D} by $({\\rm S}2)^\\prime$ with $\\Delta t = c^\\prime h$~$(c^\\prime=1\/5)$: Values of each tensor entry~$E_{11},E_{12},E_{22}$ and their slopes in $\\Delta t$ for $Wi = 0.025$.}\n\t\\begin{tabular}{rrrcrrcrr}\n\t\t\\toprule\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\t\\hline\n\t\t$10$ & $7.62 \\times 10^{-5}$ & -- && $7.24 \\times 10^{-5}$ & -- && $7.62 \\times 10^{-5}$ & -- \\\\\n\t\t$20$ & $1.89 \\times 10^{-5}$ & $2.02$ && $1.80 \\times 10^{-5}$ & $2.01$ && $1.89 \\times 10^{-5}$ & $2.02$ \\\\\n\t\t$40$ & $4.75 \\times 10^{-6}$ & $1.99$ && $4.57 \\times 10^{-6}$ & $1.98$ && $4.75 
\\times 10^{-6}$ & $ 1.99$ \\\\\n\t\t$80$ & $1.21 \\times 10^{-6}$ & $ 1.97$ && $1.17 \\times 10^{-6}$ & $ 1.96$ && $ 1.21 \\times 10^{-6}$ & $1.97 $ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:ErrorOB2D2stOrder}\n\\end{table}\n\\par\n\nA computational challenge in viscoelastic fluid flows is the treatment of high values of the Weissenberg number, i.e., $Wi>1$. In fact, the \\textit{infamous} High Weissenberg Number Problem \\cite{Fattal:2004,Hulsen,Martins2015} depends on several factors of the viscoelastic flow, such as the domain geometry, boundary conditions, fluid type and mesh size. In summary, this instability is related to unbounded growth of the stress tensor during the transient solution, which causes the failure of the numerical methods. It is important to highlight that some classical methods, i.e., those without stabilization techniques, have failed for $Wi = O(1)$, exhibiting numerical oscillations of the solution. Roughly speaking, the High Weissenberg Number Problem can be characterized by a critical value of the Weissenberg number, $Wi_{crit}$, below which the numerical solution of the classical constitutive formulations remains bounded during the simulation. For example, considering the traditional Oldroyd-B model, Fattal and Kupferman \\cite{Fattal:2004} reported $Wi_{crit} \\approx 0.5$ for the cavity flow, while Oliveira and Miranda \\cite{OliveiraMiranda} pointed out $Wi_{crit} \\approx 1$ for unsteady viscoelastic flow past bounded cylinders. Moreover, Walters and Webster \\cite{Walters2003} presented results for the $4:1$ contraction problem with a critical Weissenberg number near 3. 
Therefore, researchers have devoted considerable effort to circumventing the High Weissenberg Number Problem by developing new formulations that remain stable in simulations with $Wi > Wi_{crit}$.\n\nIt is important to highlight that the schemes presented in this work can deal with high values of $Wi$ without the need to employ stabilization strategies. To test the accuracy of $({\\rm S}2)^\\prime$, we vary the Weissenberg number as $Wi=1,5,10,50,100$ in Example~\\ref{ex:OldB2D}; the results are presented in Table \\ref{table:ErrorOB2D2nd_Wi_varied1}. The main purpose of varying the Weissenberg number is to verify the ability of $({\\rm S}2)^\\prime$ to handle the Oldroyd-B constitutive equation in the regime of high elasticity. From the results presented in Table \\ref{table:ErrorOB2D2nd_Wi_varied1}, we can notice that the numerical order of convergence of $({\\rm S}2)^\\prime$ is second-order in both time and space, and that the effect of varying the Weissenberg number is not significant for this example.\n\n\t\\begin{table}[ht!]\n\t\\centering\n\t\\caption{Example~\\ref{ex:OldB2D} by $({\\rm S}2)^\\prime$ with $\\Delta t = c^\\prime h$~$(c^\\prime=1\/5)$ and different values of the Weissenberg number $Wi$.}\n\t\\label{table:ErrorOB2D2nd_Wi_varied1}\n\t\\begin{tabular}{rrrcrrcrr}\n\t\t\\hline\n\t\t\\multicolumn{9}{c}{$Wi=1.0$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\ \\hline\n\t\t$10$ & $1.55 \\times 10^{-3}$ & -- && $1.06 \\times 10^{-3}$ & -- && $5.54 \\times 10^{-4}$ & -- \\\\\n\t\t$20$ & $4.23 \\times 10^{-4}$ & $1.88$ && $2.93 \\times 10^{-4}$ & $1.85$ && $1.48 \\times 10^{-4}$ & $1.91$ \\\\\n\t\t$40$ & $1.09 \\times 10^{-4}$ & $1.95$ && $7.65 \\times 10^{-5}$ & $1.94$ && $3.79 \\times 10^{-5}$ & $1.96$ \\\\\n\t\t$80$ & $2.77 \\times 10^{-5}$ & $1.98$ && $1.95 
\\times 10^{-5}$ & $1.98$ && $9.58 \\times 10^{-6}$ & $1.99$ \\\\\n\t\n\t\n\t\t\\hline\n\t\t\\multicolumn{9}{c}{$Wi=5$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\ \\hline\n\t\t$10$ & $1.97 \\times 10^{-3}$ & -- && $1.37 \\times 10^{-3}$ & -- && $7.13 \\times 10^{-4}$ & -- \\\\\n\t\t$20$ & $5.36 \\times 10^{-4}$ & $1.87$ && $3.80 \\times 10^{-4}$ & $1.85$ && $1.97 \\times 10^{-4}$ & $1.86$ \\\\\n\t\t$40$ & $1.39 \\times 10^{-4}$ & $1.95$ && $9.90 \\times 10^{-5}$ & $1.94$ && $5.14 \\times 10^{-5}$ & $1.94$ \\\\\n\t\t$80$ & $3.51 \\times 10^{-5}$ & $1.98$ && $2.52 \\times 10^{-5}$ & $1.98$ && $1.31 \\times 10^{-5}$ & $1.97$ \\\\\n\t\n\t\n\t\t\\hline\n\t\t\\multicolumn{9}{c}{$Wi=10$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\ \\hline\n\t\t$10$ & $2.03 \\times 10^{-3}$ & -- && $1.42 \\times 10^{-3}$ & -- && $7.38 \\times 10^{-4}$ & -- \\\\\n\t\t$20$ & $5.54 \\times 10^{-4}$ & $1.87$ && $3.93 \\times 10^{-4}$ & $1.85$ && $2.04 \\times 10^{-4}$ & $1.85$ \\\\\n\t\t$40$ & $1.43 \\times 10^{-4}$ & $1.94$ && $1.03 \\times 10^{-4}$ & $1.94$ && $5.35 \\times 10^{-5}$ & $1.93$ \\\\\n\t\t$80$ & $3.63 \\times 10^{-5}$ & $1.98$ && $2.61 \\times 10^{-5}$ & $1.98$ && $1.36 \\times 10^{-5}$ & $1.97$ \\\\\n\t\n\t\t\\hline\n\t\t\\multicolumn{9}{c}{$Wi=50$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\ \\hline\n\t\t$10$ & $2.08 \\times 10^{-3}$ & -- && $1.46 \\times 10^{-3}$ & -- && $7.59 \\times 10^{-4}$ & -- \\\\\n\t\t$20$ & 
$5.69 \\times 10^{-4}$ & $1.87$ && $4.05 \\times 10^{-4}$ & $1.85$ && $2.11 \\times 10^{-4}$ & $1.85$ \\\\\n\t\t$40$ & $1.47 \\times 10^{-4}$ & $1.95$ && $1.06 \\times 10^{-4}$ & $1.94$ && $5.53 \\times 10^{-5}$ & $1.93$ \\\\\n\t\t$80$ & $3.72 \\times 10^{-5}$ & $1.98$ && $2.68 \\times 10^{-5}$ & $1.98$ && $1.41 \\times 10^{-5}$ & $1.97$ \\\\\n\t\n\t\t\\hline\n\t\t\\multicolumn{9}{c}{$Wi=100$} \\\\ \\hline\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\ \\hline\n\t\t$10$ & $2.09 \\times 10^{-3}$ & -- && $1.46 \\times 10^{-3}$ & -- && $7.62 \\times 10^{-4}$ & -- \\\\\n\t\t$20$ & $5.71 \\times 10^{-4}$ & $1.87$ && $4.06 \\times 10^{-4}$ & $1.85$ && $2.12 \\times 10^{-4}$ & $1.85$ \\\\\n\t\t$40$ & $1.48 \\times 10^{-4}$ & $1.95$ && $1.06 \\times 10^{-4}$ & $1.94$ && $5.55 \\times 10^{-5}$ & $1.93$ \\\\\n\t\t$80$ & $3.74 \\times 10^{-5}$ & $1.98$ && $2.69 \\times 10^{-5}$ & $1.98$ && $1.42 \\times 10^{-5}$ & $1.97$ \\\\\n\t\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\\par\nFinally, Example~\\ref{ex:Venkatesan2017} employs the manufactured solution used by Venkatesan and Ganesan~\\cite{Venkatesan2017}.\nNotice that with this example we investigate the numerical behavior of the schemes for non-homogeneous boundary conditions on parts of the domain.\nTable~\\ref{table:ErrorOB2_Venkatesan2017} describes the results for Example~\\ref{ex:Venkatesan2017} by $({\\rm S}2)^\\prime$ with $\\Delta t = c^\\prime h$ for $c^\\prime = 1\/10$, where the mesh is constructed for $h_1=h_2=h=1\/N$, i.e., $N_1=N_2=N$, with $N = 10, 20, 40$ and $80$.\nFrom this table we can see that these results are consistent with our truncation error analysis in Theorem~\\ref{prop:sec_order_p_order}.\n\\begin{table}[htbp!]\n\t\\centering\n\t\\caption{Example~\\ref{ex:Venkatesan2017} by $({\\rm S}2)^\\prime$ with $\\Delta t = 
c^\\prime h$~$(c^\\prime=1\/10)$: Values of each tensor entry~$E_{11},E_{12},E_{22}$ and their slopes in $\\Delta t$ for $Wi = 0.25$ and $\\beta = 0.75$.}\n\t\\begin{tabular}{rrrcrrcrr}\n\t\t\\toprule\n\t\t\\multicolumn{1}{c}{$N$} & \\multicolumn{1}{c}{$E_{11}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{12}$} & \\multicolumn{1}{c}{Slope} && \\multicolumn{1}{c}{$E_{22}$} & \\multicolumn{1}{c}{Slope} \\\\\n\t\t\\hline\n\t\t$10$ & $4.10 \\times 10^{-3}$ & -- && $7.64 \\times 10^{-3}$ & -- && $1.98 \\times 10^{-2}$ & -- \\\\\n\t\t$20$ & $1.02 \\times 10^{-3}$ & $2.01$ && $2.11 \\times 10^{-3}$ & $1.86$ && $5.19 \\times 10^{-3}$ & $1.93$ \\\\\n\t\t$40$ & $2.82 \\times 10^{-4}$ & $1.86$ && $5.83 \\times 10^{-4}$ & $1.85$ && $1.32 \\times 10^{-3}$ & $1.97$ \\\\\n\t\t$80$ & $7.47 \\times 10^{-5}$ & $1.91$ && $1.54 \\times 10^{-4}$ & $1.92$ && $3.30 \\times 10^{-4}$ & $2.00$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\label{table:ErrorOB2_Venkatesan2017}\n\\end{table}\n\\section{Conclusions}\n\\label{sec:conclusions}\nThe application of the generalized Lie derivative~(GLD) for constructing schemes to deal with the upper-convected time derivative is an alternative approach to the numerical solution of constitutive equations.\nIn spite of the success of this strategy, first proposed by Lee and Xu~\\cite{Lee2006}, to the best of the authors' knowledge the methodology has only been applied in the context of finite elements.\nIn this work, we have combined a Lagrangian framework with GLD to develop new second-order finite difference approximations for the upper-convected time derivative.\nIn particular, the schemes are constructed based on bilinear and biquadratic interpolation operators for solving a simple model in one- and two-dimensional spaces.\nThe schemes are explicit, and no CFL condition is required since the Lagrangian framework is employed.\nTruncation errors of~$O(\\Delta t^2 + h^p)$ $(p=1,2)$ for the finite difference approximations of the upper-convected 
time derivative have been proved.\nIn the Lagrangian finite element method, the numerical integration of composite functions may cause instabilities; our schemes, however, do not suffer from such instabilities, since the finite difference method requires no numerical integration.\nAccording to our numerical results for simplified model equations, the new finite difference schemes can reach second-order accuracy in time and {space~$(p=2)$}, corroborating the theoretical analysis. Moreover, the proposed strategy has also been applied to solve a two-dimensional Oldroyd-B constitutive equation subjected to a prescribed velocity field. The results have been very satisfactory, since increasing the Weissenberg number did not affect the good accuracy and stability properties of the finite difference approximations. As future work, we intend to extend our schemes to viscoelastic fluid flows governed by different constitutive equations at high Weissenberg numbers.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcklg b/data_all_eng_slimpj/shuffled/split2/finalzzcklg new file mode 100644 index 0000000000000000000000000000000000000000..f99d91b31ee5b3262acf064f24f31d74e690931c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcklg @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\\input{sections\/introduction}\n\n\\section{Model}\n\\label{sec:model}\n\\input{sections\/model}\n\n\\section{Non-adaptive Optimization}\n\\label{sec:adaptivity}\n\\input{sections\/adaptivity}\n\n\\section{Algorithms}\n\\label{sec:algorithms}\n\\input{sections\/algorithms}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\\input{sections\/experiments}\n\n\\section{Related work}\n\\input{sections\/related}\n\n\\section*{Acknowledgement}\nThis research is supported in part by a Google Research Grant and NSF grant 
CCF-1301976.\n\n\\bibliographystyle{abbrv}\n\\balance\n\n\n\\subsection{Adaptivity Gap}\\label{sec:gap}\n\nWe will now justify the use of non-adaptive strategies by showing that the\noptimal non-adaptive strategy of this form yields a higher value than the\noptimal adaptive one. For brevity, given a probability vector $\\pi\\in[0,1]^n$ we\nwrite:\n\\begin{equation}\\label{eq:multi}\n F(\\pi) \\equiv\n \\sum_{R\\subseteq\\neigh{X}}\\left(\\prod_{u\\in\n R}\\pi_u\\prod_{u\\in\\neigh{X}\\setminus R}(1-\\pi_u)\\right)\n f(R)\n\\end{equation}\nas well as $\\textbf{p}\\otimes \\textbf{q}$ to denote the component-wise\nmultiplication between vectors $\\textbf{p}$ and $\\textbf{q}$. Finally, we write\n$\\mathcal{F}_{A} \\equiv \\{S \\subseteq X : |S|\\leq k\\}$, and $\\mathcal{F}_{NA}\n\\equiv\\{(S,\\textbf{q}), |S|+\\textbf{p}^T\\textbf{q} \\leq k, q_u \\leq\n\\mathbf{1}_{\\{u\\in\\neigh{S}\\}}\\}$ to denote the feasible regions of the\nadaptive and non-adaptive problems, respectively.\n\n\\begin{proposition}\\label{prop:gap}\nFor additive functions given by \\eqref{eq:voter}, the value of the optimal\nadaptive policy is upper bounded by that of the optimal non-adaptive policy:\n \\begin{displaymath}\n \\begin{aligned}[t]\n &\\max_{S\\subseteq X} \\sum_{R\\subseteq\\neigh{S}} p_R\n \\max_{\\substack{T\\subseteq R\\\\|T|\\leq k-|S|}}f(T)\\\\\n &\\text{s.t. }S \\in \\mathcal{F}_{A}\n \\end{aligned}\n \\leq\n \\begin{aligned}[t]\n &\\max_{\\substack{S\\subseteq X\\\\\\textbf{q}\\in[0,1]^n}}\n F(\\mathbf{p}\\otimes\\mathbf{q})\\\\\n &\\text{s.t. 
} (S,\\textbf{q}) \\in \\mathcal{F}_{NA}\n \\end{aligned}\n \\end{displaymath}\n\\end{proposition}\n\n\nThe proof of this proposition can be found in Appendix~\\ref{sec:ad-proofs} and\nrelies on the following fact: the optimal adaptive policy can be written as\na feasible non-adaptive policy, hence it provides a lower bound on the value of\nthe optimal non-adaptive policy.\n\n\\subsection{From Non-Adaptive to Adaptive Solutions}\\label{sec:round}\n\nFrom the above proposition we now know that optimal non-adaptive solutions have\nhigher values than adaptive solutions. Given a non-adaptive solution\n$(S,\\mathbf{q})$, a possible scheme would be to use $S$ as an adaptive\nsolution. But since $(S, \\mathbf{q})$ is a solution to the non-adaptive\nproblem, Proposition~\\ref{prop:gap} does not provide any guarantee on how well\n$S$ performs as an adaptive solution.\n\nHowever, we show that from a non-adaptive solution $(S,\\mathbf{q})$, we can\nobtain a lower bound on the adaptive value of $S$, that is, the influence\nattainable in expectation over all possible arrivals of neighbors of\n$S$. Starting from $S$, in every realization of neighbors $R$, sample every\nnode $u \\in R \\cap \\mathcal{N}(S)$ with probability $q_{u}$, to obtain a random\nset of nodes $I_R \\subseteq R \\cap \\mathcal{N}(S)$. Since $(S, \\mathbf{q})$ is a non-adaptive\nsolution, it could be that selecting $I_R$ exceeds our budget. Indeed, the only\nguarantee that we have is that $|S| + \\mathbb{E}\\big[|I_R|\\big]\\leq k$. As\na consequence, an adaptive solution starting from $S$ might not be able to\nselect $I_R$ on the second stage.\n\nFortunately, the probability of exceeding the budget is small enough that, with\nhigh probability, $I_R$ is feasible. This is exploited in \\cite{vondrak} to\ndesign a randomized rounding method with approximation guarantees. 
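To build intuition for why $I_R$ rarely exceeds the remaining budget, note that $|I_R|$ is a sum of independent Bernoulli variables with mean $\mathbf{p}^T\mathbf{q} \leq k - |S|$, so it concentrates around its mean. A small Monte Carlo sketch (all numbers below are illustrative and not taken from the paper):

```python
import random

random.seed(0)

# Illustrative instance: 20 neighbors, each kept in I_R independently
# with probability p_u * q_u, so E[|I_R|] = sum(pq) = 6.
pq = [0.3] * 20
budget = 10  # remaining second-stage budget k - |S|

trials = 100_000
exceed = sum(
    sum(random.random() < x for x in pq) > budget for _ in range(trials)
) / trials
print(f"estimated P(|I_R| > budget) = {exceed:.3f}")  # small
```

With a mean comfortably below the budget, the exceedance probability is already of the order of a few percent here, and it shrinks further as the slack grows.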
These\nrounding methods are called \\emph{contention resolution schemes}.\nTheorem~1.3 of \\cite{vondrak} provides a contention resolution scheme which\ncomputes, from $\\mathbf{q}$ and for any realization $R$, a \\emph{feasible} set\n$\\tilde{I}_R$, such that:\n\\begin{equation}\n \\label{eq:cr}\n \\mathbb{E}_R\\big[f(\\tilde{I}_R)\\big] \\geq (1-\\varepsilon) F(\\mathbf{q})\n\\end{equation}\nWhat this means is that starting from a non-adaptive solution~$(S,\\mathbf{q})$,\nthere is a way to construct a random \\emph{feasible} subset on the second stage\nsuch that in expectation, this set attains almost the same influence value as\nthe non-adaptive solution. Since the adaptive solution starting from $S$ will\nselect optimally from the realizations $R\\subseteq\\neigh{S}$,\n$\\mathbb{E}_R[f(\\tilde{I}_R)]$ provides a lower bound on the adaptive value of\n$S$ that we denote by $A(S)$.\n\nMore precisely, denoting by $\\text{OPT}_A$ the optimal value of the adaptive\nproblem~\\eqref{eq:problem}, we have the following proposition, whose proof can\nbe found in Appendix~\\ref{sec:ad-proofs}.\n\\begin{proposition}\\label{prop:cr}\n Let $(S,\\textbf{q})$ be an $\\alpha$-approximate solution to the\n non-adaptive problem \\eqref{eq:relaxed}. Then $\\mathrm{A}(S) \\geq \\alpha\n \\mathrm{OPT}_A$.\n\\end{proposition} \n\n\n\\subsection{An LP-Based Approach}\n\\label{sec:lp}\nNote that due to linearity of expectation, for a linear function $f$ of the\nform given by \\eqref{eq:voter} we have:\n\\begin{equation}\\label{eq:multi-voter}\n \\begin{split}\n F(\\textbf{p}) \n &=\\mathbb{E}_{R}\\big[f(R)\\big]\n =\\mathbb{E}_{R}\\Bigg[\\sum_{u\\in\\neigh{X}}w_u\\mathbf{1}_{\\{u\\in\n R\\}}\\Bigg]\\\\\n &=\\sum_{u\\in\\neigh{X}}w_u\\mathbb{P}[u\\in R]\n =\\sum_{u\\in\\neigh{X}}p_uw_u\n \\end{split}\n\\end{equation}\n\nThus, the non-adaptive optimization problem \\eqref{eq:relaxed} can be written as:\n\\begin{displaymath}\n \\begin{split}\n \\max_{\\substack{S\\subseteq 
X\\\\\\mathbf{q}\\in[0,1]^n} } \n & \\sum_{u\\in\\neigh{X}}p_uq_uw_u\\\\\n \\text{s.t. } & |S|+ \\textbf{p}^T\\textbf{q} \\leq k,\\;\n q_u \\leq \\mathbf{1}\\{u\\in\\neigh{S}\\}\n \\end{split}\n\\end{displaymath}\n\nThe choice of the set $S$ can be relaxed by introducing a variable\n$\\lambda_v\\in[0,1]$ for each $v\\in X$. We obtain the following\nLP for the adaptive seeding problem:\n\\begin{equation}\\label{eq:lp}\n \\begin{split}\n \\max_{\\substack{\\mathbf{q}\\in[0,1]^n\\\\\\boldsymbol\\lambda\\in[0,1]^m}}\n & \\;\\sum_{u\\in\\neigh{X}}p_uq_uw_u\\\\\n \\text{s.t. } & \\sum_{v\\in X}\\lambda_v+\\textbf{p}^T\\textbf{q} \\leq k,\\;\n q_u \\leq \\sum_{v\\in\\neigh{u}} \\lambda_v\n\\end{split}\n\\end{equation}\n\nAn optimal solution to the above problem can be found in polynomial time using\nstandard LP-solvers. The solution returned by the LP is \\emph{fractional}, and\nrequires a rounding procedure to return a feasible solution to our problem,\nwhere $S$ is integral. To round the solution we use the pipage rounding\nmethod~\\cite{pipage}. We defer the details to Appendix~\\ref{sec:lp-proofs}.\n\n\\begin{lemma}\n For \\mbox{\\textsc{AdaptiveSeeding-LP}} defined in \\eqref{eq:lp}, any fractional solution $(\\boldsymbol\\lambda, \\mathbf{q})\\in[0,1]^m\\times[0,1]^n$ can be rounded to an integral solution $\\bar{\\boldsymbol\\lambda} \\in \\{0,1\\}^{m}$ s.t. $(1-1\/e) F(\\mathbf{p}\\circ\\mathbf{q}) \\leq A(\\bar{\\lambda})$ in $O(m + n)$ steps.\n\\end{lemma}\n\n\\subsection{A Combinatorial Algorithm}\n\\label{sec:comb}\n\nIn this section, we introduce a combinatorial algorithm with an identical\napproximation guarantee to the LP-based approach. However, its running time,\nstated in Proposition~\\ref{prop:running_time} can be better than the one given\nby LP solvers depending on the relative sizes of the budget and the number of\nnodes in the graph. Furthermore, as we discuss at the end of this section, this\nalgorithm is amenable to parallelization. 
\n\nThe main idea is to reduce the problem to a monotone submodular maximization\nproblem and apply a variant of the celebrated greedy\nalgorithm~\\cite{nemhauser}. \nIn contrast to standard influence maximization, the submodularity of the\nnon-adaptive seeding problem is not simply a consequence of properties of the\ninfluence function; it also strongly relies on the combinatorial structure of\nthe two-stage optimization. \n\nIntuitively, we can think of our problem as trying to find a set $S$ in the\nfirst stage, for which the nodes that can be seeded on the second stage have\nthe largest possible value. To formalize this, for a budget $b\\in\\mathbf{R}^+$\nused in the second stage and a set of neighbors $T\\subseteq\\mathcal{N}(X)$, we\nwill use $\\mathcal{O}(T,b)$ to denote the solution to:\n\\begin{equation}\\label{eq:knap}\n \\begin{split}\n \\mathcal{O}(T,b)\\equiv\n \\max_{\\textbf{q}\\in[0,1]^n} & \\sum_{u\\in\\neigh{X} \\cap T} p_uq_uw_u\\\\\n \\text{s.t. } & \\mathbf{p}^T\\mathbf{q}\\leq b\n \n\\end{split}\n\\end{equation}\n\nThe optimization problem \\eqref{eq:relaxed} for\nnon-adaptive policies can now be written as:\n\\begin{equation}\\label{eq:sub}\n \\max_{S\\subseteq X} \\; \\mathcal{O}\\big(\\neigh{S},k-|S|\\big)\n \\quad \\text{s.t. } |S|\\leq k\n\\end{equation}\n\nWe start by proving in Proposition~\\ref{prop:sub} that for fixed $t$,\n$\\mathcal{O}(\\neigh{\\cdot}, t)$ is submodular. This proposition relies on\nlemmas~\\ref{lemma:nd} and~\\ref{lemma:sub} about the properties of\n$\\mathcal{O}(T,b)$.\n\n\\begin{lemma}\\label{lemma:nd}\n Let $T \\subseteq \\mathcal{N}(X)$ and $x \\in \\mathcal{N}(X)$, then\n $\\mathcal{O}(T\\cup\\{x\\},b)-\\mathcal{O}(T,b)$ is\n non-decreasing in $b$.\n\\end{lemma}\n\nThe proof of this lemma can be found in Appendix~\\ref{sec:comb-proofs}. 
The main\nidea consists in writing:\n\\begin{multline*}\n \\mathcal{O}(T\\cup\\{x\\},c)-\\mathcal{O}(T\\cup\\{x\\},b)=\\int_b^c\\partial_+\\mathcal{O}_{T\\cup\\{x\\}}(t)dt\n\\end{multline*}\nwhere $\\partial_+\\mathcal{O}_T$ denotes the right derivative of\n$\\mathcal{O}(T,\\cdot)$. For a fixed $T$ and $b$, $\\mathcal{O}(T,b)$ defines\na fractional Knapsack problem over the set $T$. Knowing the form of the optimal\nfractional solution, we can verify that\n$\\partial_+\\mathcal{O}_{T\\cup\\{x\\}}\\geq\\partial_+\\mathcal{O}_T$ and obtain:\n\\begin{multline*}\n \\mathcal{O}(T\\cup\\{x\\},c)-\\mathcal{O}(T\\cup\\{x\\},b)\\geq \n \\mathcal{O}(T,c)-\\mathcal{O}(T,b)\n\\end{multline*}\n\n\\begin{lemma}\\label{lemma:sub}\n For any $b\\in\\mathbf{R}^+$, $\\mathcal{O}(T,b)$ is submodular in $T$, $T\\subseteq\\neigh{X}$.\n\\end{lemma}\n\nThe proof of this lemma is more technical.\u00a0For $T\\subseteq\\neigh{X}$ and $x,\ny\\in\\neigh{X}\\setminus T$, we need to show that:\n \\begin{displaymath}\n \\mathcal{O}(T\\cup\\{x\\},b)-\\mathcal{O}(T,b)\\geq\n \\mathcal{O}(T\\cup\\{y, x\\},b)-\\mathcal{O}(T\\cup\\{y\\},b)\n \\end{displaymath}\n This can be done by partitioning the set $T$ into ``high value\n items'' (those with weight greater than $w_x$) and ``low value items'' and\n carefully applying Lemma~\\ref{lemma:nd} to the associated subproblems.\n The proof is in Appendix~\\ref{sec:comb-proofs}.\n\nFinally, Lemma~\\ref{lemma:sub} can be used to show Proposition~\\ref{prop:sub}\nwhose proof can be found in Appendix~\\ref{sec:comb-proofs}.\n\n\\begin{proposition}\\label{prop:sub}\n Let $b\\in\\mathbf{R}^+$, then $\\mathcal{O}(\\neigh{S},b)$ is monotone and\n submodular in $S$, $S\\subseteq X$.\n\\end{proposition}\n\nWe can now use Proposition~\\ref{prop:sub} to reduce \\eqref{eq:sub} to a monotone submodular maximization problem. 
First, we note that~\\eqref{eq:sub} can be rewritten:\n\\begin{equation}\\label{eq:sub-mod}\n \\max_{\\substack{S\\subseteq X\\\\ t \\in \\mathbb{N}}} \\; \\mathcal{O}\\big(\\neigh{S},t\\big)\n \\quad\\text{s.t. } |S| + t\\leq k\n\\end{equation}\n\nIntuitively, we fix $t$ arbitrarily so that the maximization above becomes a submodular maximization problem with fixed budget $t$. We then optimize over the value of $t$. Combining this observation with the greedy algorithm for monotone submodular maximization~\\cite{nemhauser}, we obtain Algorithm~\\ref{alg:comb}, whose performance guarantee is summarized in Proposition~\\ref{prop:main_result}.\n\n\\begin{algorithm}\n \\caption{Combinatorial algorithm}\n \\label{alg:comb}\n \\algsetup{indent=2em}\n \\begin{algorithmic}[1]\n \\STATE $S\\leftarrow \\emptyset$\n \\FOR{$t=1$ \\TO $k-1$}\n \\STATE $S_t\\leftarrow \\emptyset$\n \\FOR{$i=1$ \\TO $k-t$}\n \\STATE $x^*\\leftarrow\\argmax_{x\\in\n X\\setminus S_t}\\mathcal{O}(\\neigh{S_t\\cup\\{x\\}},t)\n -\\mathcal{O}(\\neigh{S_t},t)$\\label{line:argmax}\n \\STATE $S_t\\leftarrow S_t\\cup\\{x^*\\}$\n \\ENDFOR\n \\IF{$\\mathcal{O}(\\neigh{S_t},t)>\\mathcal{O}(\\neigh{S},k-|S|)$}\n \\STATE $S\\leftarrow S_t$\n \\ENDIF\n \\ENDFOR\n \\RETURN $S$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{proposition}\\label{prop:main_result}\n Let $S$ be the set computed by Algorithm~\\ref{alg:comb} and let us denote\n by $\\mathrm{A}(S)$ the value of the adaptive policy selecting $S$ on the first\n stage. Then $\\mathrm{A}(S) \\geq (1-1\/e)\\mathrm{OPT}_A$.\n\\end{proposition}\n\n\n\n\\noindent\\textbf{Parallelization.} The algorithm described above considers all\npossible ways to split the seeding budget between the first and the second\nstage. 
For each possible split $\{(t,k-t)\}_{t=1,\ldots,k-1}$, the algorithm\ncomputes an approximation to the optimal non-adaptive solution that uses $k-t$\nnodes in the first stage and $t$ nodes in the second stage, and returns the\nsolution for the split with the highest value (breaking ties arbitrarily).\nThis process can be trivially parallelized across $k-1$ machines, each\nperforming a computation of a single split. With slightly more effort, for any\n$\epsilon>0$ one can parallelize over $\log_{1+\epsilon}n$ machines at the cost\nof losing a factor of $(1+\epsilon)$ in the approximation guarantee (see Appendix~\ref{sec:para} for details).\newline\n\n\noindent \textbf{Implementation in MapReduce.} While the previous paragraph\ndescribes how to parallelize the outer \texttt{for} loop of\nAlgorithm~\ref{alg:comb}, we note that its inner loop can also be parallelized\nin the MapReduce framework. Indeed, it corresponds to the greedy algorithm\napplied to the function $\mathcal{O}\left(\neigh{\cdot}, t\right)$. The\n\textsc{Sample\&Prune} approach successfully applied in \cite{mr} to obtain\nMapReduce algorithms for various submodular maximization problems can also be applied\nto Algorithm~\ref{alg:comb} to cast it in the MapReduce framework. The details\nof the algorithm can be found in Appendix~\ref{sec:mr}.\n\newline\n\n\n\n\noindent \textbf{Algorithmic speedups.} To implement Algorithm~\ref{alg:comb} efficiently, the computation of the $\argmax$ on line 5 must be dealt with carefully. $\mathcal{O}(\neigh{S_t\cup\{x\}},t)$ is the optimal solution to the fractional Knapsack problem~\eqref{eq:knap} with budget $t$ and can be computed in time $\min(\frac{t}{p_\text{min}},n)$ by iterating over the list of nodes in $\neigh{S_t\cup\{x\}}$ in decreasing order of the degrees. 
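Concretely, this fractional Knapsack computation admits a one-pass sketch (an illustrative sketch, not the paper's implementation; we assume the $(p_u, w_u)$ pairs are already given in decreasing order of $w_u$):

```python
def knapsack_value(sorted_nodes, budget):
    """Fractional knapsack oracle O(T, b).

    sorted_nodes: list of (p_u, w_u) pairs sorted by decreasing w_u.
    Taking a fraction q_u of node u costs p_u * q_u budget and
    contributes p_u * q_u * w_u to the objective, so each node yields
    value w_u per unit of budget, up to its capacity p_u.
    """
    value = 0.0
    for p_u, w_u in sorted_nodes:
        if budget <= 0:
            break
        take = min(p_u, budget)  # spend at most p_u of budget on this node
        value += take * w_u
        budget -= take
    return value
```

Every visited node except possibly the last consumes at least $p_\text{min}$ of the budget, which is what yields the $\min(\frac{t}{p_\text{min}},n)$ bound on the iteration length.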
This decreasing order of $\neigh{S_t}$ can be maintained throughout the greedy construction of $S_t$ by:\n\begin{itemize}\n \item ordering the list of neighbors of nodes in $X$ by decreasing order of the degrees when initially constructing the graph. This is responsible for an $O(n\log n)$ pre-processing time.\n\item when adding node $x$ to $S_t$, observe that $\neigh{S_t\cup\{x\}} = \neigh{S_t}\cup\neigh{\{x\}}$. Hence, if $\neigh{S_t}$ and $\neigh{\{x\}}$ are sorted lists, then $\mathcal{O}(\neigh{S_t\cup\{x\}},t)$ can be computed in a single iteration of length $\min(\frac{t}{p_\text{min}},n)$ where the two sorted lists are merged on the fly.\n\end{itemize}\nAs a consequence, the running time of line 5 is bounded from above by $m\min(\frac{t}{p_\text{min}},n)$. The two nested \textsf{for} loops are responsible for the additional $k^2$ factor. The running time of Algorithm~\ref{alg:comb} is summarized in Proposition~\ref{prop:running_time}.\n\n\begin{proposition}\label{prop:running_time}\n Let $p_\text{min} =\min\{p_u,u\in\neigh{X}\}$, then Algorithm~\ref{alg:comb} runs in time \n ${O\big(n\log n + k^2 m \min(\frac{k}{p_\text{min}},n)\big)}$.\n\end{proposition}\n\n\section{Adaptivity Proofs}\n\label{sec:ad-proofs}\n\begin{proof}[of Proposition~\ref{prop:gap}]\n We will first show that the optimal adaptive policy can be interpreted as\n a non-adaptive policy. 
Let $S$ be the optimal adaptive\n solution and define $\\delta_R:\\neigh{X}\\mapsto \\{0,1\\}$:\n \\begin{displaymath}\n \\delta_R(u) \\equiv \\begin{cases}\n 1&\\text{if } u\\in\\argmax\\big\\{f(T);\\; T\\subseteq R,\\; |T|\\leq\n k-|S|\\big\\} \\\\\n 0&\\text{otherwise}\n \\end{cases},\n \\end{displaymath}\n one can write\n \\begin{displaymath}\n \\begin{split}\n \\sum_{R\\subseteq\\neigh{S}} p_R\n \\max_{\\substack{T\\subseteq R\\\\|T|\\leq k-|S|}}f(T)\n &=\n \\sum_{R\\subseteq\\neigh{S}} p_R\n \\sum_{u\\in\\neigh{X}}\\delta_R(u)w_u\\\\\n &=\n \\sum_{u\\in\\neigh{X}}w_u\\sum_{R\\subseteq\\neigh{S}}p_R\\delta_R(u).\n \\end{split}\n \\end{displaymath}\n\n Let us now define for $u\\in\\neigh{X}$:\n \\begin{displaymath}\n q_u \\equiv \\begin{cases}\n \\sum_{R\\subseteq\\neigh{S}}\\frac{p_R}{p_u}\\delta_R(u)\n &\\text{if }p_u\\neq 0\\\\\n 0&\\text{otherwise}\n \\end{cases}.\n \\end{displaymath}\n This allows us to write:\n \\begin{displaymath}\n \\sum_{R\\subseteq\\neigh{S}} p_R\n \\max_{\\substack{T\\subseteq R\\\\|T|\\leq k-|S|}}f(T)\n = \\sum_{u\\in\\neigh{X}}p_uq_uw_u = F(\\mathbf{p}\\circ\\mathbf{q})\n \\end{displaymath}\n where the last equality is obtained from \\eqref{eq:multi} by successively using the linearity of the expectation and the linearity of $f$.\n\n Furthermore, observe that $q_u\\in[0,1]$, $q_u=0$ if $u\\notin\\neigh{S}$ and:\n \\begin{displaymath}\n \\begin{split}\n |S|+\\sum_{u\\in\\neigh{X}}p_uq_u\n &= |S|+\\sum_{R\\subseteq\\neigh{S}}p_R\\sum_{u\\in\\neigh{X}}\\delta_R(u)\\\\\n &\\leq |S| + \\sum_{R\\subseteq\\neigh{S}}p_R(k-|S|)\\leq k\n \\end{split}\n \\end{displaymath}\n\n Hence, $(S,\\mathbf{q})\\in\\mathcal{F}_{NA}$. In other words, we have written the optimal adaptive solution as a relaxed\n non-adaptive solution. 
This concludes the proof of the proposition.\n\end{proof}\n\n\vspace{0.5em}\n\n\begin{proof}[of Proposition~\ref{prop:cr}]\n Using the definition of $\mathrm{A}(S)$, one can write:\n \begin{displaymath}\n \mathrm{A}(S) = \sum_{R\subseteq\neigh{S}} p_R\n \max_{\substack{T\subseteq R\\|T|\leq k-|S|}}f(T)\n \geq \sum_{R\subseteq\neigh{S}} p_R \mathbf{E}\big[f(\tilde{I}_R)\big]\n \end{displaymath}\n where the inequality comes from the fact that $\tilde{I}_R$ is a feasible random set: $|\tilde{I}_R|\leq k-|S|$, hence the expected value of $f(\tilde{I}_R)$ is bounded by the maximum of $f$ over feasible sets.\n\nEquation~\eqref{eq:cr} then implies:\n\begin{equation}\label{eq:tmp}\n \mathrm{A}(S) \n \geq (1-\varepsilon)\sum_{R\subseteq\neigh{S}} p_R F(\mathbf{q})\n = (1-\varepsilon)F(\mathbf{p}\circ\mathbf{q}).\n\end{equation}\n\nEquation~\eqref{eq:tmp} holds for any $\varepsilon\geq 0$. In particular, for $\varepsilon$ smaller than $\inf_{S\neq T} |A(S)-A(T)|$, we obtain that $\mathrm{A}(S)\geq F(\mathbf{p}\circ\mathbf{q})$. Note that such an $\varepsilon$ is at most polynomially small in the size of the instance.\n$(S, \mathbf{q})$ is an $\alpha$-approximate non-adaptive solution, hence $F(\mathbf{p}\circ\mathbf{q}) \geq \alpha\mathrm{OPT}_{NA}$. We can then conclude by applying Proposition~\ref{prop:gap}. \n\end{proof}\n\n\section{Algorithm Proofs}\n\label{sec:alg-proofs}\nWe first discuss the NP-hardness of the problem.\n\n\noindent \textbf{\textsc{NP}-Hardness.} In contrast to standard influence maximization, adaptive seeding is already \textsc{NP}-Hard even for the simplest cases. In the case when $f(S)=|S|$ and all probabilities equal one, the decision problem is whether given a budget $k$ and target value $\ell$ there exists a subset of $X$ of size $k-t$ which yields a solution with expected value at least $\ell$ using $t$ nodes in $\mathcal{N}(X)$. 
This is equivalent to deciding whether there are $k-t$ nodes in $X$ that together have at least $t$ neighbors in $\mathcal{N}(X)$. To see this is \textsc{NP}-hard, consider reducing from the decision version of \textsc{Max-Cover}, where there is one node $i$ for each input set $T_i$, $1\leq i\leq n$, with $\neigh{i}= T_i$ and integers $k,\ell$, and the output is ``yes'' if there is a family of $k$ sets in the input which cover at least $\ell$ elements, and ``no'' otherwise.\n\n\n\subsection{LP-Based Approach}\n\label{sec:lp-proofs}\nIn the LP-based approach we rounded the solution using the pipage rounding method. We discuss this in greater detail here.\n\n\noindent \textbf{Pipage Rounding.}\nThe pipage rounding method~\cite{pipage} is a deterministic rounding method that can be applied to a variety of problems. In particular, it can be applied to LP-relaxations of the \textsc{Max-K-Cover} problem where we are given a family of sets that cover elements of a universe and the goal is to find $k$ subsets whose union has the maximal cardinality. The LP-relaxation is a fractional solution over subsets, and the pipage rounding procedure then rounds the allocation in linear time, and the integral solution is guaranteed to be within a factor of $(1-1\/e)$ of the fractional solution. \nWe make the following key observation: for any given $\textbf{q}$, one can remove all elements in $\mathcal{N}(X)$ for which $q_{u}=0$, without changing the value of any solution $(\boldsymbol\lambda,\textbf{q})$.\nOur rounding procedure can therefore be described as follows: given a solution $(\boldsymbol\lambda,\textbf{q})$ we remove all nodes $u \in \mathcal{N}(X)$ for which $q_{u}=0$, which leaves us with a fractional solution to a (weighted) version of the \textsc{Max-K-Cover} problem where nodes in $X$ are the sets and the universe is the set of weighted nodes in $\mathcal{N}(X)$ that were not removed. 
We can therefore apply pipage rounding and lose only a factor of $(1-1\/e)$ in the quality of the solution.\n\n\subsection{Combinatorial Algorithm}\n\label{sec:comb-proofs}\nWe include the missing proofs from the combinatorial algorithm section. The scalability and implementation in MapReduce are discussed in this section as well.\n\n\begin{proof}[of Lemma~\ref{lemma:nd}]\n\emph{W.l.o.g.} we can rename and order the pairs in $T$ so that $w_1\geq w_2\geq\ldots\geq w_m$.\nThen, $\mathcal{O}(T,b)$ has the following simple piecewise linear expression:\n\begin{displaymath}\label{eq:pw}\n \mathcal{O}(T,b) = \n \begin{cases}\n b w_1&\text{if }0\leq b < p_1\\\n \sum_{j=1}^{i}p_jw_j+\big(b-\sum_{j=1}^{i}p_j\big)w_{i+1}&\text{if }\sum_{j=1}^{i}p_j\leq b < \sum_{j=1}^{i+1}p_j\\\n \sum_{j=1}^{m}p_jw_j&\text{if }b\geq \sum_{j=1}^{m}p_j\n \end{cases}\n\end{displaymath}\nFor $t\geq 0$, define $n(t)\equiv\inf\Big\{i\text{ s.t. }\sum_{j=1}^{i}p_j > t\Big\}$ with $n(t)=+\infty$ when the set is empty. In\nparticular, note that $t\mapsto n(t)$ is non-decreasing. Denoting\n$\partial_+\mathcal{O}_T$ the right derivative of $\mathcal{O}(T,\cdot)$, one\ncan write $\partial_+\mathcal{O}_T(t)=w_{n(t)}$, with the convention that\n$w_\infty = 0$.\n\nWriting $i \equiv \sup\Big\{j\text{ s.t. } w_j\geq w_x\Big\}$, it is easy to see that\n$\partial_+\mathcal{O}_{T\cup\{x\}}\geq\partial_+\mathcal{O}_T$. Indeed:\n\begin{enumerate}\n \item if $n(t)\leq i$ then $\partial_+\mathcal{O}_{T\cup\{x\}}(t)\n = \partial_+\mathcal{O}_T(t)= w_{n(t)}$.\n \item if $n(t)\geq i+1$ and $n(t-p_x)\leq i$ then $\partial_+\mathcal{O}_{T\cup\{x\}}(t)\n = w_x\geq w_{n(t)}= \partial_+\mathcal{O}_T(t)$.\n \item if $n(t-p_x)\geq i+1$, then $\partial_+\mathcal{O}_{T\cup\{x\}}(t)\n = w_{n(t-p_x)}\geq w_{n(t)}=\partial_+\mathcal{O}_T(t)$.\n\end{enumerate}\n\nLet us now consider $b$ and $c$ such that $b\leq c$. 
Then, using the integral\nrepresentation of $\mathcal{O}(T\cup\{x\},\cdot)$ and $\mathcal{O}(T,\cdot)$, we get:\n\begin{multline*}\n \mathcal{O}(T\cup\{x\},c)-\mathcal{O}(T\cup\{x\},b)=\int_b^c\partial_+\mathcal{O}_{T\cup\{x\}}(t)dt\\\n \geq\int_b^c\partial_+\mathcal{O}_T(t)dt=\mathcal{O}(T,c)-\mathcal{O}(T,b)\n\end{multline*}\nRe-ordering the terms, $\mathcal{O}(T\cup\{x\},c)-\mathcal{O}(T,c)\geq\n\mathcal{O}(T\cup\{x\},b)-\mathcal{O}(T,b)$\nwhich concludes the proof of the lemma.\n\end{proof}\n\n\vspace{0.5em}\n\n\begin{proof}[of Lemma~\ref{lemma:sub}]\n Let $T\subseteq\neigh{X}$ and $x, y\in\neigh{X}\setminus T$. Using the\n second-order characterization of submodular functions, it suffices to show\n that:\n \begin{displaymath}\label{eq:so}\n \mathcal{O}(T\cup\{x\},b)-\mathcal{O}(T,b)\geq\n \mathcal{O}(T\cup\{y, x\},b)-\mathcal{O}(T\cup\{y\},b)\n \end{displaymath}\n\n We distinguish two cases based on the relative position of $w_x$ and $w_y$.\n The following notations will be useful: $S_T^x \equiv \big\{u\in\n T\text{ s.t. }w_u\leq w_x\big\}$ and $P_T^x\equiv\n T\setminus S_T^x$.\n\n \textbf{Case 1:} If $w_y\geq w_x$, then one can\n write:\n \begin{gather*}\n \mathcal{O}(T\cup\{y,x\},b) = \mathcal{O}(P_T^y\cup\{y\},b_1)+\n \mathcal{O}(S_T^y\cup\{x\},b_2)\\\n \mathcal{O}(T\cup\{y\},b) = \mathcal{O}(P_T^y\cup\{y\},b_1)\n + \mathcal{O}(S_T^y,b_2)\n \end{gather*}\n where $b_1$ is the fraction of the budget $b$ spent on $P_T^y\cup\{y\}$ and\n $b_2=b-b_1$.\n \n Similarly:\n \begin{gather*}\n \mathcal{O}(T\cup\{x\},b) = \mathcal{O}(P_T^y, c_1) + \mathcal{O}(S_T^y\cup\{x\},c_2)\\\n \mathcal{O}(T, b) = \mathcal{O}(P_T^y, c_1) + \mathcal{O}(S_T^y,c_2)\n \end{gather*}\n where $c_1$ is the fraction of the budget $b$ spent on $P_T^y$ and $c_2\n = b - c_1$. 
\n\n Note that $b_1\geq c_1$: an optimal solution will first spend as much\n budget as possible on $P_T^y\cup\{y\}$ before adding elements in\n $S_T^y\cup\{x\}$.\n\n In this case:\n \begin{displaymath}\n \begin{split}\n \mathcal{O}(T\cup\{x\},b)-\mathcal{O}(T,b)&=\n \mathcal{O}(S_T^y\cup\{x\},c_2)-\mathcal{O}(S_T^y,c_2)\\\n &\geq \mathcal{O}(S_T^y\cup\{x\},b_2)-\mathcal{O}(S_T^y,b_2)\\\n & = \mathcal{O}(T\cup\{y, x\},b)-\mathcal{O}(T\cup\{y\},b)\n \end{split}\n \end{displaymath}\n where the inequality comes from Lemma~\ref{lemma:nd} and \n $c_2\geq b_2$.\n\n \textbf{Case 2:} If $w_x > w_y$, we now decompose\n the solution on $P_T^x$ and $S_T^x$:\n \begin{gather*}\n \mathcal{O}(T\cup\{x\},b) = \mathcal{O}(P_T^x\cup\{x\},b_1)\n + \mathcal{O}(S_T^x,b_2)\\\n \mathcal{O}(T,b) = \mathcal{O}(P_T^x,c_1)+\mathcal{O}(S_T^x,c_2)\\\n \n \mathcal{O}(T\cup\{y, x\},b) = \mathcal{O}(P_T^x\cup\{x\},b_1)\n + \mathcal{O}(S_T^x\cup\{y\},b_2)\\\n \mathcal{O}(T\cup\{y\},b) = \mathcal{O}(P_T^x,c_1)+\mathcal{O}(S_T^x\cup\{y\},c_2)\n \end{gather*}\n with $b_1+b_2=b$, $c_1+c_2=b$ and $b_2\leq c_2$. \n\n In this case, the term $\mathcal{O}(P_T^x\cup\{x\},b_1)-\mathcal{O}(P_T^x,c_1)$ is common to both differences and cancels, so that:\n \begin{multline*}\n \mathcal{O}(T\cup\{x\},b)-\mathcal{O}(T,b)\\\n -\big(\mathcal{O}(T\cup\{y, x\},b)-\mathcal{O}(T\cup\{y\},b)\big)\\\n = \mathcal{O}(S_T^x,b_2)-\mathcal{O}(S_T^x,c_2)\\\n -\big(\mathcal{O}(S_T^x\cup\{y\},b_2)-\mathcal{O}(S_T^x\cup\{y\},c_2)\big)\n \geq 0\n \end{multline*}\n where the inequality uses Lemma~\ref{lemma:nd} and $c_2\geq b_2$.\n\n In both cases, we were able to obtain the second-order characterization of submodularity. This concludes the proof of the lemma.\n\end{proof}\n\n\vspace{0.5em}\n\n\begin{proof}[of Proposition~\ref{prop:sub}]\n Let us consider $S$ and $T$ such that $S\subseteq T\subseteq X$ and $x\in\n X\setminus T$. In particular, note that $\neigh{S}\subseteq\neigh{T}$. 
\n\n Let us write $\\neigh{S\\cup\\{x\\}}=\\neigh{S}\\cup R$ with $\\neigh{S}\\cap\n R=\\emptyset$ and similarly, $\\neigh{T\\cup\\{x\\}}=\\neigh{T}\\cup R'$ with\n $\\neigh{T}\\cap R'=\\emptyset$. It is clear that $R'\\subseteq R$. Writing $R'=\\{u_1,\\ldots,u_k\\}$:\n \\begin{multline*}\n \\mathcal{O}(\\neigh{T\\cup\\{x\\}},b)- \\mathcal{O}(\\neigh{T},b)\\\\\n =\\sum_{i=1}^k\\mathcal{O}(\\neigh{T}\\cup\\{u_1,\\ldots u_i\\},b)\n -\\mathcal{O}(\\neigh{T}\\cup\\{u_1,\\ldots u_{i-1}\\},b)\\\\\n \\leq \\sum_{i=1}^k\\mathcal{O}(\\neigh{S}\\cup\\{u_1,\\ldots u_i\\},b)\n -\\mathcal{O}(\\neigh{S}\\cup\\{u_1,\\ldots u_{i-1}\\},b)\\\\\n =\\mathcal{O}(\\neigh{S}\\cup R',b)-\\mathcal{O}(\\neigh{S},b)\n \\end{multline*}\n where the inequality comes from the submodularity of $\\mathcal{O}(\\cdot,b)$ proved in Lemma~\\ref{lemma:sub}. This same function is also obviously set-increasing, hence:\n \\begin{multline*}\n \\mathcal{O}(\\neigh{S}\\cup R',b)-\\mathcal{O}(\\neigh{S},b)\\\\\n \\leq \\mathcal{O}(\\neigh{S}\\cup R,b)-\\mathcal{O}(\\neigh{S},b)\\\\\n =\\mathcal{O}(\\neigh{S\\cup\\{x\\}},b)-\\mathcal{O}(\\neigh{S},b)\n \\end{multline*}\n This concludes the proof of the proposition.\n\\end{proof}\n\n\n\\begin{proof}[of Proposition~\\ref{prop:main_result}]\n We simply note that the content of the outer \\textsf{for} loop on line 2 of Algorithm~\\ref{alg:comb} is the greedy submodular maximization algorithm of \\cite{nemhauser}. Since $\\mathcal{O}(\\neigh{\\cdot}, t)$ is submodular (Proposition~\\ref{prop:sub}), this solves the inner $\\max$ in \\eqref{eq:sub-mod} with an approximation ratio of $(1-1\/e)$. The outer \\textsf{for} loop then computes the outer $\\max$ of \\eqref{eq:sub-mod}.\n\n As a consequence, Algorithm~\\ref{alg:comb} computes a $(1-1\/e)$-approximate non-adaptive solution. 
We conclude by applying Proposition~\ref{prop:cr}.\n\end{proof}\n\n\subsection{Parallelization}\n\label{sec:para}\nAs discussed in the body of the paper, the algorithm can be parallelized across $k-1$ different machines, each one computing an approximation for a fixed budget $k-t$ in the first stage and $t$ in the second.\nA slightly more sophisticated approach is to consider only $\log k$ splits $(1,k-1),(2,k-2),(4,k-4),\ldots$, doubling the first coordinate, and then select the best solution from this set. It is not hard to see that in comparison to the previous approach, this would reduce the approximation guarantee by a factor of at most 2: if the optimal solution is obtained by spending $t$ on the first stage and $k-t$ in the second stage, then since $t \leq 2\cdot2^{\lfloor \log t \rfloor}$ the solution computed for $(2^{\lfloor \log t \rfloor}, k - 2^{\lfloor \log t \rfloor})$ will have at least half that value. \nMore generally, for any $\epsilon>0$ one can parallelize over $\log_{1+\epsilon}n$ machines at the cost of losing a factor of $(1+\epsilon)$ in the approximation guarantee.\n\n\subsection{Implementation in MapReduce}\n\label{sec:mr}\n\nAs noted in Section~\ref{sec:comb}, lines 4 to 7 of Algorithm~\ref{alg:comb}\ncorrespond to the greedy heuristic of \cite{nemhauser} applied to the\nsubmodular function~$f_t(S) \equiv \mathcal{O}\big(\neigh{S}, t\big)$.\nA variant of this heuristic, namely the $\varepsilon$-greedy heuristic,\ncombined with the \textsc{Sample\&Prune} method of \cite{mr} allows us to write\na MapReduce version of Algorithm~\ref{alg:comb}. 
The resulting algorithm is\ndescribed in Algorithm~\ref{alg:combmr}.\n\n\begin{algorithm}\n \caption{Combinatorial algorithm, MapReduce}\n \label{alg:combmr}\n \algsetup{indent=2em}\n \begin{algorithmic}[1]\n \STATE $S\leftarrow \emptyset$\n \FOR{$t=1$ \TO $k-1$}\n \STATE $S_t\leftarrow \emptyset$\n \FOR{$i=1$ \TO $\log_{1+\varepsilon}\Delta$}\n \STATE $U\leftarrow X$, $S'\leftarrow \emptyset$\n \WHILE{$|U|>0$}\n \STATE $R\leftarrow$ sample from $U$ w.p. $\min\left(1,\n \frac{\ell}{|U|}\right)$\n \WHILE{$|R|>0$ \AND $|S_t\cup S'|< k$}\n \STATE $x\leftarrow$ some element from $R$\n \IF{$\nabla f_t(S_t\cup S', x)\geq\frac{\Delta}{(1+\varepsilon)^i}$}\n \STATE $S'\leftarrow S'\cup\{x\}$\n \ENDIF\n \STATE $R\leftarrow R\setminus\{x\}$\n \ENDWHILE\n \STATE $S_t\leftarrow S_t\cup S'$\n \STATE $U\leftarrow\{x\in U\,|\, \nabla f_t(S_t,\n x)\geq\frac{\Delta}{(1+\varepsilon)^i}\}$\n \ENDWHILE\n \ENDFOR\n \IF{$\mathcal{O}(\neigh{S_t},t)>\mathcal{O}(\neigh{S},k-|S|)$}\n \STATE $S\leftarrow S_t$\n \ENDIF\n \ENDFOR\n \RETURN $S$\n \end{algorithmic}\n\end{algorithm}\n\nWe denote by $\nabla f_t(S, x)$ the marginal increment of $x$ to the set $S$\nfor the function $f_t$, $\nabla f_t(S, x) = f_t(S\cup\{x\}) - f_t(S)$.\n$\Delta$ is an upper bound on the marginal contribution of any element. In our\ncase, $\Delta = \max_{u\in\neigh{X}} w_u$ provides such an upper bound. The\nsampling in line 7 selects a small enough number of elements that the\n\texttt{while} loop from lines 8 to 14 can be executed on a single machine.\nFurthermore, lines 7 and 16 can be implemented in one round of MapReduce each.\n\nThe approximation ratio of Algorithm~\ref{alg:combmr} is\n$1-\frac{1}{e}-\varepsilon$. The proof of this result as well as the optimal\nchoice of $\ell$ follow from Theorem 10 in \cite{mr}.\n\n\subsection{Experimental Setup}\nWe tested our algorithms on three types of datasets. 
Each of them allows us to\nexperiment on a different aspect of the adaptive seeding problem. The Facebook\nPages dataset that we collected ourselves has a central place in our\nexperiments since it is the one which is closest to actual applications of\nadaptive seeding.\n\n\textbf{Synthetic networks.} Using standard models of social networks we\ngenerated large-scale graphs to model the social network. To emulate the\nprocess of users following a topic (the core set $X$) we sampled subsets of\nnodes at random, and applied our algorithms on the sample and their neighbors.\nThe main advantage of these data sets is that they allow us to generate graphs\nof arbitrary sizes and experiment with various parameters that govern the\nstructure of the graph. The disadvantages are that users who follow a topic\nare not necessarily random samples, and that social networks often have\nstructural properties that are not captured in generative models.\n\n\textbf{Real networks.} We used publicly available data sets of real social\nnetworks from \cite{snapnets}. As for synthetic networks, we used\na random sample of nodes to emulate users who follow a topic, which is the main\ndisadvantage of this approach. The advantage, however, is that such datasets\ncontain an entire network which allows testing different propagation\nparameters. \n \n\textbf{Facebook Pages.} We collected data from several Facebook Pages, each\nassociated with a commercial entity that uses the Facebook page to communicate\nwith its followers. For each page, we selected a post and then collected data\nabout the users who expressed interest in (``liked'') the post and their friends.\nThe advantage of this data set is that it is highly representative of the\nscenario we study here. Campaigns run on a social network will primarily target\nusers who have already expressed interest in the topic being promoted. 
The\nmain disadvantage of this method is that such data is extremely difficult to\ncollect due to the crawling restrictions that Facebook applies, and that it gives us\nonly the 2-hop neighborhood around a post. This makes it difficult to\nexperiment with different propagation parameters. Fortunately, as we soon\ndiscuss, we were able to circumvent some of the crawling restrictions and\ncollect large networks, and the properties of the voter influence model are\nsuch that these datasets suffice to accurately account for influence\npropagation in the graph.\newline\n\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.4\textwidth]{images\/para.pdf}\n \caption{Comparison of the average degree of the core set users and the\n average degree of their friends.}\n \label{fig:paradox}\n \vspace{-10pt}\n\end{figure}\n\n\noindent\textbf{Data collection.}\nWe selected Facebook Pages in different verticals (topics). Each page is\noperated by an institution or an entity whose associated Facebook Page is\nregularly used for promotional posts related to this topic. On each of these\npages, we selected a recent post (posted no later than January 2014) with\napproximately 1,000 \emph{likes}. The set of users who liked those posts\nconstitutes our core set. We then crawled the social network of\nthese sets: for each user, we collected her list of friends, and the degrees\n(number of friends) of these friends.\newline\n\n\noindent\textbf{Data description.} Among the several verticals we collected,\nwe selected eight of them for which we report our results. We obtained\nsimilar results for the other ones. Table~\ref{tab:data} summarizes statistics\nabout the selected verticals. We note that depending on the privacy settings\nof the core set users, it was not always possible to access their list of\nfriends. We decided to remove these users since their ability to spread\ninformation could not be readily determined. 
This effect, combined with various\nerrors encountered during the data collection, accounts for an approximate 15\%\nreduction between the number of users who liked a post and the number of users in the\ndatasets we used. Following our discussion in the introduction, we observe\nthat on average, the degrees of core set users are much lower than the degrees of\ntheir friends. This is highlighted in Figure~\ref{fig:paradox} and justifies\nour approach.\n\n\begin{table}[t]\n \small\n \centering\n \setlength{\tabcolsep}{3pt}\n \begin{tabular}{llrr}\n \toprule\n Vertical & Page & $m$ & $n$ \\%& $S$ & $F$\\\n \midrule\n Charity & Kiva & 978 & 131334 \\%& 134.29 & 1036.26\\\n Travel & Lonely Planet & 753 & 113250 \\%& 150.40 & 898.50\\\n \n Fashion & GAP & 996 & 115524 \\%& 115.99 & 681.98\\\n Events & Coachella & 826 & 102291 \\%& 123.84 & 870.16\\\n Politics & Green Party & 1044 & 83490 \\%& 79.97 & 1053.25\\\n Technology & Google Nexus & 895 & 137995 \\%& 154.19 & 827.84\\\n News & The New York Times & 894 & 156222 \\%& 174.74 & 1033.94 \\\n \n Entertainment & HBO & 828 & 108699 \\%& 131.28 & 924.09\\\n \bottomrule\n\end{tabular}\n\caption{Dataset statistics. $m$: number of users in the core set, $n$: number\nof friends of core set users.}\n\label{tab:data}\n \vspace{-10pt}\n\end{table}\n\n\subsection{Performance of Adaptive Seeding}\n\label{sec:performance} For a given problem instance with a budget of $k$ we\napplied the adaptive seeding algorithm (the combinatorial version). Recall from\nSection~\ref{sec:model} that performance is defined as the expected influence\nthat the seeder can obtain by optimally selecting users on the second stage,\nwhere \emph{influence} is defined as the sum of the degrees of the selected\nusers. We tested our algorithm against the following benchmarks:\n\n\begin{itemize}\n \item \emph{Random Node} (\textsf{RN}): we randomly select $k$ users from\n the core set. 
This is a typical benchmark in comparing influence\n maximization algorithms~\\cite{KKT03}.\n \\item \\emph{Influence Maximization} (\\textsf{IM}): we apply the optimal\n influence maximization algorithm on the core set. This is the naive\n application of influence maximization. For the voter model, when the\n propagation time is polynomially large in the network size, the optimal\n solution is to simply take the $k$ highest degree nodes~\\cite{even-dar}.\n We study the case of bounded time horizons in Section~\\ref{sec:inf}.\n \\item \\emph{Random Friend} (\\textsf{RF}): we implement a naive two-stage approach:\n randomly select $k\/2$ nodes from the core set, and for each\n node select a random neighbor (hence spending the budget of $k$ rewards\n overall). This method was recently shown to outperform standard\n influence maximization when the core set is random~\\cite{LS13}.\n\\end{itemize}\n\n\n\\begin{figure*}[t]\n \\centerline{\\includegraphics[width=0.99\\textwidth]{images\/perf2.pdf}}\n \\vspace{-5pt}\n \\caption{\\small{Performance of adaptive seeding compared to other influence\n maximization approaches. The horizontal axis represents the budget used as\na fraction of the size of the core set. The vertical axis is the\nexpected influence reachable by optimally selecting nodes on the second stage.}}\n \\label{fig:performance}\n \\vspace{-10pt}\n\\end{figure*}\n\n\\subsection{Performance on Facebook Pages} Figure~\\ref{fig:performance}\ncompares the performance of \\emph{adaptive seeding}, our own approach, to the\nafore-mentioned approaches for all the verticals we collected. In this first\nexperiment we made simplifying assumptions about the parameters of the model.\nThe first assumption is that all probabilities in the adaptive seeding model\nare equal to one. 
This implicitly assumes that every friend of a user who\nfollowed a certain topic is interested in promoting the topic given a reward.\nAlthough this is a strong assumption that we will revisit, we note that the\nprobabilities can be controlled to some extent by the social networking service\non which the campaign is being run by prominently displaying the campaign material\n(sponsored links, fund-raising banners, etc.). The second assumption is that\nthe measure of influence is the sum of the degrees of the selected set. This\nmeasure is an appealing proxy as it is known that in the voter model, after\npolynomially many time steps, the influence of each node is proportional to its\ndegree with high probability~\\cite{even-dar}. Since the influence process\ncannot be controlled by the designer, the assumption is often that the\ninfluence process runs until it stabilizes (in the linear threshold and\nindependent cascade models, for example, the process terminates after a linear number\nof steps~\\cite{KKT03}). We perform a set of experiments for different time\nhorizons in Section~\\ref{sec:inf}.\n\nIt is striking to see how well adaptive seeding does in comparison to other\nmethods. Even when using a small budget (a 0.1 fraction of the core set, which\nin these cases is about 100 nodes), adaptive seeding improves influence by\na factor of at least 10, across all verticals. To confirm this, we plot the\nrelative improvements of \\emph{adaptive seeding} over \\textsf{IM} in aggregate\nover the different pages. The results are shown in Figure~\\ref{fig:compare}.\nThis dramatic improvement is largely due to the friendship paradox phenomenon\nthat adaptive seeding leverages. Returning to Figure~\\ref{fig:performance}, it\nis also interesting to note that the \\textsf{RF} heuristic significantly\noutperforms the standard \\textsf{IM} benchmark. 
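To make the comparison concrete, the benchmarks can be replayed on a toy instance (a minimal sketch with made-up degrees, not the code used in the experiments; influence is the sum of the degrees of the selected set, and all second-stage probabilities are one):

```python
import itertools
import random

# Toy instance: 6 low-degree core users, each with one high-degree friend.
core = ["u%d" % i for i in range(6)]
deg = {u: 3 for u in core}                     # core users have degree 3
friends = {"u%d" % i: ["h%d" % i] for i in range(6)}
for i in range(6):
    deg["h%d" % i] = 50 + 10 * i               # friends are high-degree hubs
k = 4                                          # total budget

def influence(nodes):
    return sum(deg[v] for v in nodes)

# IM benchmark: take the k highest-degree nodes of the core set.
im = influence(sorted(core, key=deg.get, reverse=True)[:k])

# RF benchmark: k/2 random core users, one random friend each (budget k
# overall); here we count the influence of the selected friends.
random.seed(0)
rf = influence(random.choice(friends[u]) for u in random.sample(core, k // 2))

# Adaptive seeding with p = 1: seed S in the core, then keep the k - |S|
# highest-degree friends of S; brute force over all seed sets.
best = 0
for s in range(k + 1):
    for S in itertools.combinations(core, s):
        pool = sorted({f for u in S for f in friends[u]},
                      key=deg.get, reverse=True)
        best = max(best, influence(pool[:k - s]))
```

On this instance \textsf{IM} is stuck with low-degree core nodes, while the two-stage optimum seeds two users and claims their two highest-degree friends.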
Using the same budget, the\ndegree gain induced by moving from the core set to its neighborhood is such\nthat selecting at random among the core set users' friends already does better\nthan the best heuristic restricted to the core set. Using \\emph{adaptive\nseeding} to optimize the choice of core set users based on their friends'\ndegrees then results in an order of magnitude increase over \\textsf{RF},\nconsistently for all the pages.\\newline\n\n\\begin{figure}[t]\n \\vspace{-10pt}\n \\centerline{ \\includegraphics[width=0.4\\textwidth]{images\/comp2.pdf} }\n \\vspace{-10pt}\n \\caption{Ratio of the performance of adaptive seeding to \\textsf{IM}. Bars represent the mean improvement across all verticals, and the ``error bar'' represents the range of improvement across verticals.}\n \\label{fig:compare}\n \\vspace{-15pt}\n\\end{figure}\n\n\\subsection{The Effect of the Probabilistic Model}\n\\label{sec:robustness}\n\nThe results presented in Section~\\ref{sec:performance} were computed assuming\nthe probabilities in the adaptive seeding model are one. We now describe\nseveral experiments we performed with the Facebook Pages data set that test the\nadvantages of adaptive seeding under different probability models.\\newline \n\n\n\\noindent\\textbf{Impact of the Bernoulli parameter.} \nFigure~\\ref{fig:prob} shows the impact of the probability of nodes realizing in\nthe second stage. We computed the performance of \\emph{adaptive seeding} when\neach friend of a seeded user in the core set joins during the second stage\nindependently with probability $p$, using different values of $p$. We call $p$\nthe \\emph{Bernoulli} parameter, since the event that a given user joins on the\nsecond stage of adaptive seeding is governed by a Bernoulli variable with\nparameter $p$. We see that even with $p=0.01$, \\emph{adaptive seeding} still\noutperforms \\textsf{IM}. 
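The role of the Bernoulli parameter can be mimicked with a small Monte-Carlo sketch (hypothetical degree sequence, not the crawled data; a realized friend contributes its degree, and the seeder keeps the highest-degree realizations):

```python
import random

random.seed(1)
degrees = list(range(1, 201))   # hypothetical degrees of the 200 friends of S
slots = 10                      # remaining budget k - |S| for the second stage

def expected_value(p, trials=2000):
    """Monte-Carlo estimate of the second-stage value: each friend realizes
    independently with probability p; the seeder then keeps the `slots`
    highest-degree realized friends."""
    total = 0
    for _ in range(trials):
        realized = [d for d in degrees if random.random() < p]
        total += sum(sorted(realized, reverse=True)[:slots])
    return total / trials

vals = [expected_value(p) for p in (0.01, 0.1, 0.5, 1.0)]
```

The estimated value rises steeply with $p$ and saturates well before $p=1$, which is the qualitative behaviour reported above.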
As $p$ increases, the performance of \\emph{adaptive\nseeding} quickly improves and reaches $80\\%$ of the values of\nFigure~\\ref{fig:performance} at $p=0.5$.\\newline\n\n\\begin{figure}[t]\n \\begin{subfigure}[t]{0.23\\textwidth}\n \\includegraphics[scale=0.48]{images\/prob.pdf}\n \\vspace{-10pt}\n \\caption{}\n \\label{fig:prob}\n\\end{subfigure}\n\\hspace{1pt}\n\\begin{subfigure}[t]{0.23\\textwidth}\n \\includegraphics[scale=0.48]{images\/hbo_likes.pdf}\n \\caption{}\n \\label{fig:killer}\n \\end{subfigure}\n \\vspace{-5pt}\n \\caption{\\small{(a) Performance of adaptive seeding for various propagation\n probabilities. (b) Performance of \\emph{adaptive seeding} when restricted\n to the subgraph of users who \\emph{liked} HBO (red line).}}\n \\vspace{-20pt}\n\\end{figure}\n\n\\noindent\\textbf{Coarse estimation of probabilities.} \nIn practice, the probability that a user is interested in promoting a campaign\npromoted by her friend may vary. However, for those who have already expressed\ninterest in the promoted content, we can expect this probability to be close to\none. We therefore conducted the following experiment. We chose a page (HBO)\nand trimmed the social graph we collected by keeping on the second stage only\nusers who indicated this page (HBO) in their list of interests. This is\na coarse estimation of the probabilities as it assumes that if a friend follows\nHBO she will be willing to promote with probability 1 (given a reward), and\notherwise the probability of her promoting anything for HBO is 0.\nFigure~\\ref{fig:killer} shows that even on this very restricted set of users,\n\\emph{adaptive seeding} still outperforms \\textsf{IM} and reaches approximately\n$50\\%$ of the performance of unrestricted adaptive seeding.\\newline\n\n\\noindent\\textbf{Impact of the probability distribution.} To test\nscenarios where users have a rich spectrum of probabilities of realizing on the\nsecond stage, we proceed as follows. 
We consider a setting where the Bernoulli parameter $p$ is drawn\nfrom a distribution. We considered four different distributions; for each\ndistribution and fixed values of the budget and the parameter $p$, we tuned the\nparameters of the distribution so that its mean is exactly $p$. We then plotted\nthe performance as a function of the budget and mean $p$. \n\nFor the Beta distribution, we fixed $\\beta=5$ and tuned the $\\alpha$ parameter\nto obtain a mean of $p$, thus obtaining a unimodal distribution. For the normal\ndistribution, we chose a standard deviation of $0.01$ to obtain a distribution\nmore concentrated around its mean than the Beta distribution. Finally, for the\ninverse degree distribution, we took the probability of a node joining on\nthe second stage to be proportional to the inverse of its degree (scaled so that on\naverage, nodes join with probability $p$). The results are shown in\nFigure~\\ref{fig:bernouilli}.\n\nWe observe that the results are comparable to those we obtained in the\nuniform case in Figure~\\ref{fig:prob} except in the case of the inverse degree\ndistribution for which the performance is roughly halved. Remember that the\nvalue of a user $v$ on the second stage of adaptive seeding is given by $p_v\nd_v$ where $d_v$ is its degree and $p_v$ is its probability of realizing on\nthe second stage. 
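The parameter tuning described above can be sketched as follows (hypothetical degree data; the Beta shape $\alpha$ is solved from the target mean, the normal draws are clipped to $[0,1]$, and the inverse-degree probabilities are rescaled so that their average is $p$):

```python
import random

random.seed(2)
degrees = [random.randint(100, 500) for _ in range(1000)]  # hypothetical friend degrees
p = 0.3                                                    # target mean probability

# Beta(alpha, 5): mean alpha / (alpha + beta) = p  =>  alpha = p * beta / (1 - p)
beta = 5.0
alpha = p * beta / (1 - p)
beta_probs = [random.betavariate(alpha, beta) for _ in degrees]

# Normal(p, 0.01), clipped to [0, 1]: tightly concentrated around its mean p
normal_probs = [min(1.0, max(0.0, random.gauss(p, 0.01))) for _ in degrees]

# Inverse degree: p_v proportional to 1/d_v, rescaled so the average is p
inv = [1.0 / d for d in degrees]
scale = p * len(degrees) / sum(inv)
inv_probs = [min(1.0, scale * x) for x in inv]
```

With these degrees the rescaled inverse-degree probabilities stay below one, so the empirical means of all three samples remain close to $p$.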
Choosing $p_v$ to be proportional to ${1}\/{d_v}$ has the\neffect of normalizing the nodes on the second stage and is a strong\nperturbation of the original degree distribution of the nodes available on the\nsecond stage.\n\n\\begin{figure*}[t!]\n\\centering\n\\begin{subfigure}[b]{0.25\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/beta.pdf}\n\\caption{Beta distribution}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.25\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/gauss.pdf}\n\\caption{Normal distribution}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/power.pdf}\n\\caption{Power-law distribution}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.24\\textwidth}\n\\includegraphics[width=\\textwidth]{images\/deg.pdf}\n\\caption{Inverse degree}\n\\end{subfigure}\n\\caption{Performance of adaptive seeding as a function of the budget and the\nmean of the distribution from which the Bernoulli parameters are drawn. The\ndetails of the parameters for each distribution can be found in\nSection~\\ref{sec:robustness}.}\n\\label{fig:bernouilli}\n\\vspace{-10pt}\n\\end{figure*}\n\n\\subsection{Impact of the Influence Model}\n\\label{sec:inf}\n\nThe Facebook Pages data set we collected is limited in that we only have access\nto the 2-hop neighborhood around the seed users and we use the degree of the\nsecond stage users as a proxy for their influence. 
As proved in\n\\cite{even-dar}, in the voter model, the influence of nodes converges to their\ndegree with high probability when the number of time steps becomes polynomially\nlarge in the network size.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/voter.pdf}\n \\vspace{-20pt}\n \\caption{Performance of adaptive seeding compared to \\textsf{IM} for the voter\n influence model with $t$ steps.}\n \\vspace{-10pt}\n \\label{fig:voter}\n\\end{figure}\n\nIn order to analyze the expected number of nodes influenced according to a\nvoter model that terminates after some fixed number of time steps, we use\npublicly available data sets from \\cite{snapnets} where the entire network is\nat our disposal. As discussed above, we sample nodes uniformly at random to\nmodel the core set. We then run the voter model for $t$ time steps\nto compute the influence of the second stage users. Figure~\\ref{fig:voter}\nshows the performance of adaptive seeding as a function of $t$ compared to the\nperformance of the \\textsf{IM} benchmark. In this experiment, the budget was\nset to half the size of the core set.\n\nWe see that the performance of adaptive seeding quickly converges (5 time steps\nfor \\textsf{Slashdot}, 15 time steps for \\textsf{Epinions}). In practice, the\nvoter model converges much faster than the theoretical guarantee of\n\\cite{even-dar}, which justifies using the degree of the second stage users as\na measure of influence as we did for the Facebook Pages data sets.\nFurthermore, we see that, similarly to the Facebook data sets, adaptive seeding\nsignificantly outperforms \\textsf{IM}.\n\n\n\\subsection{Performance on Synthetic Networks} \n\nIn order to analyze the impact of topological variations we generated synthetic\ngraphs using standard network models. 
All the generated graphs have $100,000$\nvertices. For each model, we tuned the generative parameters to obtain, when\npossible, a degree distribution (or otherwise a graph density) similar to what we\nobserved in the Facebook Pages data sets.\n\n\\begin{itemize}\n \\item \\emph{Barab\u00e1si-Albert:} this well-known model is often used to model\n social graphs because its degree distribution is a power law. We took\n 10 initial vertices and added 10 vertices at each step, using the\n preferential attachment model, until we reached 100,000 vertices.\n \\item \\emph{Small-World:} also known as the Watts-Strogatz model. This\n model was one of the first models proposed for social networks. Its\n diameter and clustering coefficient are more representative of a social\n network than what one would get with the Erd\u0151s\u2013R\u00e9nyi model. We started\n from a regular lattice of degree 200 and rewired each edge with\n probability 0.3.\n \\item \\emph{Kronecker:} Kronecker graphs were more recently introduced in\n \\cite{kronecker} as a scalable and easy-to-fit model for social\n networks. We started from a star graph with 4 vertices and computed\n Kronecker products until we reached 100,000 nodes.\n \\item \\emph{Configuration model:} The configuration model allows us to\n construct a graph with a given degree distribution. We chose a page\n (GAP) and generated a graph with the same degree distribution using the\n configuration model.\n\\end{itemize}\nThe performance of adaptive seeding compared to our benchmarks can be found in\nFigure~\\ref{fig:synth}. We note that the improvement obtained by adaptive\nseeding is comparable to what we obtained on real data except for the\n\\emph{Small-World} model. This is explained by the nature of the model:\nstarting from a regular lattice, some edges are re-wired at random. This model\nhas similar properties to a random graph, where the friendship paradox does not\nhold~\\cite{LS13}. 
Since adaptive seeding is designed to leverage the friendship\nparadox, such graphs are not amenable to this approach.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/perf_synth.pdf}\n \\vspace{-15pt}\n \\caption{Performance of adaptive seeding on synthetic networks.}\n \\label{fig:synth}\n \\vspace{-10pt}\n\\end{figure}\n\n\\subsection{Scalability}\\label{sec:scalability}\nTo test the scalability of adaptive seeding we were guided by two central\nquestions. First, we were interested in quantifying the benefit of our non-sampling\napproach over the standard SAA method. Second, we wanted to understand\nwhen one should prefer the LP-based approach from Section~\\ref{sec:lp}\nover the combinatorial one from Section~\\ref{sec:comb}. The computations in\nthis section were run on an Intel Core i5 CPU (4x2.40 GHz). For each computation, we\nplot the time and number of CPU cycles it took.\\newline\n\n\\noindent\\textbf{Comparison with SAA.} The objective function of the\nnon-adaptive problem \\eqref{eq:relaxed} is an expectation over exponentially\nmany sets, namely all possible realizations of the neighbors in the second stage.\nFollowing the sampling-based approach introduced in \\cite{singer}, this\nexpectation can be computed by averaging the values obtained in\n$O\\left(n^2\\right)$ independent sample realizations of the second stage users\n($n$ is the number of neighbors of core set users). One important aspect of\nthe algorithms introduced in this paper is that in the additive case, this\nexpectation can be computed exactly without sampling, thus significantly\nimproving the theoretical complexity.\n\nIn Figure~\\ref{fig:sampling}, we compare the running time of our combinatorial\nalgorithm to the same algorithm where the expectation is computed via sampling.\nWe note that this sampling-based algorithm is still simpler than the algorithm\nintroduced in \\cite{singer} for general influence models. 
However, we observe\na significant gap between its running time and that of the combinatorial\nalgorithm. Since each sample takes linear time to compute, this gap is in fact\n$O(n^3)$, quickly leading to impracticable running times as the size of the\ngraph increases. This highlights the importance of the \\emph{sans-sampling}\napproach underlying the algorithms we introduced.\\newline\n\n\\begin{figure}[t]\n \\centerline{ \\includegraphics[width=0.48\\textwidth]{images\/sampling.pdf} }\n \\vspace{-10pt}\n \\caption{Running time and number of CPU cycles used by the sampling-based\n algorithm and the combinatorial adaptive seeding algorithm for different\nsizes of the core set.}\n \\label{fig:sampling}\n \\vspace{-10pt}\n\\end{figure}\n\n\\noindent\\textbf{Combinatorial vs. LP algorithm.}\nWe now compare the running time of the LP-based approach and the combinatorial\napproach for different instance sizes.\n\nFigure~\\ref{fig:time} shows the running time and number of CPU cycles used by\nthe LP algorithm and the combinatorial algorithm as a function of the network\nsize $n$. The varying size of the network was obtained by randomly sampling\na varying fraction of core set users and then trimming the social graph by only\nkeeping friends of this random sample on the second stage. The LP solver used\nwas CLP~\\cite{clp}.\n\n\\begin{figure}[t]\n \\centerline{ \\includegraphics[scale=0.9]{images\/time.pdf} }\n \\vspace{-10pt}\n \\caption{Running time and number of CPU cycles of the combinatorial algorithm and the LP algorithm as a function of the number of nodes $n$. First row with budget $k=100$, second row with budget $k=500$.}\n \\label{fig:time}\n \\vspace{-10pt}\n\\end{figure}\n\nWe observe that for a \\emph{small} value of the budget $k$ (first row\nof Figure~\\ref{fig:time}), the combinatorial algorithm outperforms the LP\nalgorithm. When $k$ becomes \\emph{large} (second row of\nFigure~\\ref{fig:time}), the LP algorithm becomes faster. 
This can be explained\nby the $k^2$ factor in the running time of the combinatorial\nalgorithm (see Proposition~\\ref{prop:running_time}). Even though, by its asymptotic\nguarantee, the combinatorial algorithm should theoretically\noutperform the LP-based approach for large $n$, we were not able to observe this\nfor our instance sizes. In practice, one can choose which of the two algorithms\nto apply depending on the relative sizes of $k$ and $n$.\\newline\n\n\\subsection{A scalable approach}\n\n\\noindent \\textbf{Scalability.} One of the challenges in adaptive seeding is\nscalability. This is largely due to the stochastic nature of the problem,\nderived from uncertainty about the arrival of neighbors. The main result\nin~\\cite{singer} is a constant factor approximation algorithm for well-studied\ninfluence models such as independent cascade and linear threshold, which is, by and\nlarge, a theoretical triumph. These algorithms rely on various forms of\nsampling, which lead to a significant blowup in the input size. While such\ntechniques provide strong theoretical guarantees, for social network data sets,\nwhich are often either large or massive, such approaches are inapplicable. The\nmain technical challenge we address in this work is how to design scalable\nadaptive optimization techniques for influence maximization which do not\nrequire sampling.\\newline\n\n\\noindent \\textbf{Beyond random users.} The motivation for the adaptive\napproach hinges on the friendship paradox, but what if the core set is not\na random sample? The results in~\\cite{LS13} hold when the core set of users is\nrandom, but since users who follow a particular topic are not a random sample of\nthe network, we must somehow evaluate adaptive seeding on representative data\nsets. 
The experimental challenge is to estimate the prominence of high degree\nneighbors in settings that are typical of viral marketing campaigns.\nFigure~\\ref{fig:para} foreshadows the experimental methods we used to\nshow that an effect similar to the friendship paradox exists in such cases as\nwell.\\newline\n\n\\noindent \\textbf{Main results.}\nOur main results in this paper show that adaptive seeding is a scalable\napproach which can dramatically improve upon standard approaches to influence\nmaximization. We present a general method that enables designing adaptive\nseeding algorithms in a manner that avoids sampling, and thus makes adaptive\nseeding scalable to large graphs. We use this approach as a basis for\ndesigning two algorithms, both achieving an approximation ratio of $(1-1\/e)$\nfor the adaptive problem. The first algorithm is implemented through a linear\nprogram, which proves to be extremely efficient over instances where there is\na large budget. The second approach is a combinatorial algorithm with the same\napproximation guarantee which can be easily parallelized, has good theoretical\nguarantees on its running time, and does well on instances with smaller budgets.\nThe guarantees of our algorithms hold for linear models of influence,\n\\emph{i.e.} models for which the influence of a set can be expressed as the sum\nof the influence of its members. While this class does not include models such\nas the independent cascade and the linear threshold model, it includes the\nwell-studied \\emph{voter model}~\\cite{holley1975ergodic,even-dar} and measures\nsuch as node degree, click-through rate, or retweet counts, which\nserve as natural proxies of influence in many settings~\\cite{ZHGS10}. 
In\ncomparison to submodular influence functions, the relative simplicity of linear\nmodels allows making substantial progress on this challenging problem.\n\nWe then use these algorithms to conduct a series of experiments to show the\npotential of adaptive approaches for influence maximization both on synthetic\nand real social networks. The main component of the experiments involved\ncollecting publicly available data from Facebook on users who expressed\ninterest in (``liked'') a certain post from a topic they follow and data on their\nfriends. The premise here is that such users mimic potential participants in\na viral marketing campaign. The results on these data sets suggest that\nadaptive seeding can yield dramatic improvements over standard influence\nmaximization methods.\\newline\n\n\\noindent \\textbf{Paper organization.}\nWe begin by formally describing the model and the assumptions we make in the\nfollowing section. In Section~\\ref{sec:adaptivity} we describe the reduction\nof the adaptive seeding problem to a non-adaptive relaxation. In\nSection~\\ref{sec:algorithms} we describe our non-adaptive algorithms for\nadaptive seeding. In Section~\\ref{sec:experiments} we describe our\nexperiments, and conclude with a brief discussion of related work.\n\n\\subsection{Problem and notations}\nGiven a graph $G=(V,E)$, for a node $v\\in V$ we denote by $\\neigh{v}$\nthe neighborhood of $v$. By extension, for any subset of nodes $S\\subseteq V$,\n$\\neigh{S}\\equiv \\bigcup_{v\\in S}\\neigh{v}$ will denote the neighborhood of\n$S$. The notion of influence in the graph is captured by a function\n$f:2^{V}\\rightarrow \\mathbf{R}_+$ mapping a subset of nodes to a non-negative\ninfluence value.\\newline\n\n\\noindent \\textbf{The adaptive seeding model.} \nThe input of the \\emph{adaptive seeding} problem is a \\emph{core set} of nodes\n$X\\subseteq V$ and, for any node $u\\in\\neigh{X}$, a probability $p_u$ that $u$\nrealizes if one of its neighbors in $X$ is seeded. 
We write $m=|X|$ and\n$n=|\\neigh{X}|$ for the parameters controlling the input size. The seeding process\nis the following:\n\\begin{enumerate}\n \\item \\emph{Seeding:} the seeder selects a subset of nodes $S\\subseteq\n X$ in the core set.\n \\item \\emph{Realization of the neighbors:} every node $u\\in\\neigh{S}$\n realizes independently with probability $p_u$. We denote by\n $R\\subseteq\\neigh{S}$ the subset of nodes that is realized during this\n stage.\n \\item \\emph{Influence maximization:} the seeder selects the set of nodes\n $T\\subseteq R$ that maximizes the influence function $f$.\n\\end{enumerate}\n\nThere is a budget constraint $k$ on the total number of nodes that can be\nselected: $S$ and $T$ must satisfy $|S|+|T|\\leq k$. The seeder chooses the set\n$S$ before observing the realization $R$ and thus wishes to select optimally in\nexpectation over all such possible realizations. Formally, the objective can be\nstated as:\n\\begin{equation}\\label{eq:problem}\n \\begin{split}\n &\\max_{S\\subseteq X} \\sum_{R\\subseteq\\neigh{S}} p_R\n \\max_{\\substack{T\\subseteq R\\\\|T|\\leq k-|S|}}f(T)\\\\\n &\\text{s.t. }|S|\\leq k\n \\end{split}\n\\end{equation}\nwhere $p_R$ is the probability that the set $R$ realizes,\n$ p_R \\equiv \\prod_{u\\in R}p_u\\prod_{u\\in\\neigh{S}\\setminus R}(1-p_u)$.\n\nIt is important to note that the process through which nodes arrive in the\nsecond stage is \\emph{not} an influence process. The nodes in the second stage\narrive if they are willing to spread information in exchange for a unit of the\nbudget. Only when they have arrived does the influence process occur. This\nprocess is encoded in the influence function and occurs after the influence\nmaximization stage, without incentivizing nodes along the propagation path. In\ngeneral, the idea of a two-stage (or, more generally, multi-stage) approach is to\nuse the nodes who arrive in the first stage to recruit influential users who\ncan be incentivized to spread information. 
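For intuition, the objective \eqref{eq:problem} can be evaluated by brute force on a tiny instance with an additive $f$ (illustrative numbers; the enumeration over seed sets and realizations is exponential and only feasible at this scale):

```python
from itertools import combinations, product

X = ["x1", "x2"]                             # core set
nbrs = {"x1": ["a", "b"], "x2": ["b", "c"]}  # neighborhoods N(x)
p = {"a": 0.5, "b": 0.9, "c": 0.4}           # realization probabilities
w = {"a": 3.0, "b": 5.0, "c": 2.0}           # additive influence: f(T) = sum of w
k = 2                                        # total budget |S| + |T| <= k

def value(S):
    """Expected second-stage value of seeding S: sum over realizations R
    of p_R times the best f(T) with T a subset of R and |T| <= k - |S|."""
    NS = sorted({u for v in S for u in nbrs[v]})
    slots = k - len(S)
    total = 0.0
    for flags in product([0, 1], repeat=len(NS)):
        R = [u for u, b in zip(NS, flags) if b]
        pR = 1.0
        for u, b in zip(NS, flags):
            pR *= p[u] if b else 1.0 - p[u]
        total += pR * sum(sorted((w[u] for u in R), reverse=True)[:slots])
    return total

opt = max(value(S) for s in range(1, k + 1) for S in combinations(X, s))
```

Here seeding $S=\{x_1\}$ and keeping a single realized friend is optimal: spending the whole budget on seeds leaves no room for the second stage.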
In standard influence maximization,\nthe nodes who are not in the core set do not receive incentives to propagate\ninformation, and cascades tend to die off\nquickly~\\cite{YC10,BHMW11,GWG12,CADKL14}.\\newline\n\n\n\\noindent \\textbf{Influence functions.}\nIn this paper we focus on \\emph{linear} (or additive) influence models:\nin these models the value of a subset of nodes can be expressed as\na weighted sum of their individual influence. One important example of such\nmodels is the \\emph{voter model} \\cite{richardson} used to represent the spread\nof opinions in a social network: at each time step, a node adopts an opinion\nwith a probability equal to the fraction of its neighbors sharing this opinion\nat the previous time step. Formally, this can be written as a discrete-time\nMarkov chain over opinion configurations of the network. In this model\ninfluence maximization amounts to ``converting'' the optimal subset of nodes to\na given opinion at the initial time so as to maximize the number of converts\nafter a given period of time. Remarkably, a simple analysis shows that under\nthis model, the influence function $f$ is additive:\n\\begin{equation}\\label{eq:voter}\n \\forall S\\subseteq V,\\; f(S) = \\sum_{u\\in S} w_u\n\\end{equation}\nwhere $w_u, u\\in V$ are weights which can be easily computed from the powers of\nthe transition matrix of the Markov chain. This observation led to the\ndevelopment of fast algorithms for influence maximization under the voter\nmodel~\\cite{even-dar}.\\newline\n\n\\noindent \\textbf{\\textsc{NP}-Hardness.} In contrast to standard influence maximization, adaptive seeding is already \\textsc{NP}-Hard even for the simplest influence functions such as $f(S) = |S|$ and when all probabilities are one. 
We discuss this in Appendix~\\ref{sec:alg-proofs}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\\label{intro}\nIn their seminal work on spectral properties of the distance matrix $D$ of a tree $T$, Graham and Lov\\'asz \\cite{GL78} showed that, for a tree $T$ and $c_k(T)$ denoting the coefficient of $x^k$ in $\\text{det}(D(T)-xI)=(-1)^np_{D(T)}(x)$, the quantity $d_k(T)=(-1)^{n-1}c_k(T)\/2^{n-k-2}$ is determined as a fixed linear combination of the number of certain subtrees in $T$. The values $d_k(T)$ are called the \\emph{normalized coefficients} of the distance characteristic polynomial of $T$. \n\nA sequence $a_0,a_1,a_2,\\ldots, a_n$ of real numbers is {\\em unimodal} if there is a $k$ such that $a_{i-1}\\leq a_{i}$ for $i\\leq k$ and $a_{i}\\geq a_{i+1}$ for $i\\geq k$. Graham and Lov\\'asz \\cite{GL78} conjectured that the sequence $d_0(T),\\ldots,d_{n-2}(T)$ of normalized coefficients of the distance characteristic polynomial of a tree is unimodal with the maximum value occurring at $\\lfloor\\frac{n}{2}\\rfloor$ for a tree $T$ of order $n$.\nLittle progress has been made on this problem. Collins \\cite{C89} confirmed the conjecture for stars, and also showed that for paths the sequence is unimodal with a maximum value at $(1-\\frac{1}{\\sqrt{5}})n$. Thus, the Graham--Lov\\'asz conjecture was reformulated as follows:\n\n\\begin{con}\\cite{GL78,C89}\\label{con:coefficientstree}\nThe normalized coefficients of the distance characteristic polynomial of any tree with $n$ vertices are unimodal with peak between $\\lfloor \\frac{n}{2} \\rfloor$ and $\\lceil(1-\\frac{1}{\\sqrt{5}})n\\rceil$.\n\\end{con}\n\nAalipour et al. \\cite{GRWC2015} confirmed the unimodality part of Conjecture \\ref{con:coefficientstree} by proving that the sequence of normalized coefficients is\nindeed always unimodal; in fact, they proved the stronger statement that this sequence is log-concave. 
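As a quick numerical illustration (a sketch in exact rational arithmetic, not taken from the papers cited above; the characteristic polynomial is computed with the Faddeev--LeVerrier recursion), one can recover the normalized coefficients $d_k(T)$ of a small tree from its distance matrix and observe the unimodal pattern:

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients a_0..a_n of det(xI - A) = sum_i a_i x^i, computed with
    the Faddeev-LeVerrier recursion in exact rational arithmetic."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    a = [Fraction(0)] * n + [Fraction(1)]        # a_n = 1
    M = [[Fraction(0)] * n for _ in range(n)]    # M_0 = 0
    for m in range(1, n + 1):
        # M_m = A M_{m-1} + a_{n-m+1} I
        M = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
        for i in range(n):
            M[i][i] += a[n - m + 1]
        # a_{n-m} = -tr(A M_m) / m
        a[n - m] = -sum(A[i][l] * M[l][i] for i in range(n) for l in range(n)) / m
    return a

def normalized_coeffs(D):
    """d_k = (-1)^(n-1) c_k / 2^(n-k-2), where c_k is the coefficient of x^k
    in det(D - xI) = (-1)^n det(xI - D); hence d_k = -a_k / 2^(n-k-2)."""
    n = len(D)
    a = charpoly(D)
    return [-a[k] / Fraction(2 ** (n - k - 2)) for k in range(n - 1)]

def is_unimodal(seq):
    descending = False
    for prev, cur in zip(seq, seq[1:]):
        if cur < prev:
            descending = True
        elif descending and cur > prev:
            return False
    return True

# Distance matrix of the path P_n: D[i][j] = |i - j|
D_path = lambda n: [[abs(i - j) for j in range(n)] for i in range(n)]
```

For the path $P_3$, $\det(D-xI)=-x^3+6x+4$, so this yields $d_0=2$ and $d_1=6$; by the log-concavity result of Aalipour et al., the sequence produced for any tree is unimodal.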
\n\nA natural and widely used generalization of trees is block graphs (also known as clique trees). A connected graph is a \\emph{block graph} or \\emph{clique tree} if its blocks ($2$-connected components) are cliques. Graham, Hoffman, and Hosoya \\cite{GHH77} showed that the determinant of the distance matrix of any graph depends only on the determinants of its $2$-connected components.\nBapat and Sivasubramanian \\cite{BS} extended the result of Graham and Lov\\'asz \\cite{GL78} on the determinant of the distance matrix of a tree to block graphs, and Das and Mohanty \\cite{DM} did the same for multi-block graphs. The distance eigenvalues of block graphs have also received some attention \\cite{DLbook, LLL,JGZ21,XLS2020}. The study of distance matrices has a long history, and many results concerning the distance matrix and distance eigenvalues are reported in the literature; for a survey we refer the reader to the papers by Aouchiche and Hansen \\cite{AH} and Hogben and Reinhart \\cite{HR21}. \n\n\nIn this article we consider the sequence of coefficients $c_0,c_1,\\dots,c_n$ of the distance characteristic polynomial of a block graph. Motivated by Conjecture \\ref{con:coefficientstree}, we investigate the following question:\n\n\\begin{ques} \\label{que:coefficientsblock}\n Is the sequence of coefficients $c_0,c_1,\\dots,c_n$ of the distance characteristic polynomial of a block graph unimodal, and where does it peak?\n\\end{ques}\n\n\nThis paper is structured as follows. In Section \\ref{preliminaries}, we start by recalling some definitions and preliminary results. In Section \\ref{sec:emb} we develop a general theory regarding the eigenvalues of metric spaces. We use this in Section \\ref{sec:unimodality} to show the unimodality part of Question \\ref{que:coefficientsblock} for block graphs, which extends the corresponding result for trees by Aalipour {\\it et al.}~\\cite{GRWC2015}. 
In Section \\ref{sec:peak} we prove that the peak part of Question~\\ref{que:coefficientsblock} holds for several extremal classes of block graphs with small diameter. As a corollary of our results we obtain Collins' result for stars \\cite{C89}. Although we show that the peak can move quite a bit when considering block graphs, our results provide evidence that Conjecture~\\ref{con:coefficientstree} might still hold in a more general setting.\n\n\n\\section{Preliminaries}\\label{preliminaries}\n\nThroughout this paper, $G=(V,E)$ denotes an undirected, simple, connected and loopless graph with $n$ vertices. The \\emph{distance matrix} $D(G)$ of a connected graph $G$ is the matrix indexed by the vertices of $G$ whose $(i,j)$-entry equals the distance between the vertices $v_i$ and $v_j$, i.e., the length of a shortest path between $v_i$ and $v_j$. Dependence on $G$ may be omitted when it is clear from the context. The characteristic polynomial of $D$ is defined by $p_{D}(x)=\\det(xI-D)$ and is called the \\emph{distance characteristic polynomial} of $G$. Since $D$ is a real symmetric matrix, all of the roots of the distance characteristic polynomial are real. \n\nA metric space can be attached to any connected graph $G =(V,E)$ in the following way. The \\emph{path metric} of $G$, denoted $d_{G}$, is the metric where for all\nvertices $u,v\\in V$, $d_{G}(u,v)$ is the distance between $u$ and $v$ in $G$. Then, $(V,d_{G})$ is a metric space, called the \\emph{graphic metric space}\nassociated with $G$.\n\nA \\emph{cut vertex} of a graph $G$ is a vertex whose deletion increases the number of connected components of $G$. A \\emph{block} of $G$ is a maximal connected subgraph of $G$ which has no cut vertices. Thus, a block is either a maximal\n2-connected subgraph, or a cut-edge, or an isolated vertex, and every such\nsubgraph is a block. 
Two blocks of $G$ can overlap in at most one vertex, which is a cut-vertex; hence, every edge of $G$ lies in a unique block, and $G$ is the\nunion of its blocks. A connected graph $G$ is called a \\emph{block graph} if all of its blocks are cliques. In\nparticular, we say that $G$ is a \\emph{$t$-uniform block graph} if all of its cliques have size $t$.\n\nGiven two graphs $G$ and $H$, their \\emph{Cartesian product} is the graph $G\\square H$ whose vertex\nset is $V(G)\\times V(H)$ and whose edges are the pairs $((a,x),(b,y))$ with $a,b\\in V(G)$, $x,y\\in V(H)$ and either $(a,b)\\in E(G)$ and $x=y$, or $a=b$ and $(x,y)\\in E(H)$.\nThe Cartesian product $H_{1}\\square\\cdots\\square H_{k}$ of graphs $H_{1},\\ldots,H_{k}$ is also denoted $\\prod_{h=1}^{k}H_{h}$.\n\nFor integers $d\\geq2$ and $n\\geq2$, the \\emph{Hamming graph} $H(d,n)$ is a graph whose vertex set is the $d$-tuples with elements from $\\{0,1,2,\\ldots,n-1\\}$, where two vertices are adjacent if and only if their corresponding $d$-tuples differ only in one coordinate. Equivalently, $H(d,n)$ is the Cartesian product of $d$ copies of $K_n$. For a positive integer $d$, the \\emph{hypercube} or the \\emph{$d$-cube} is the graph $H(d,2)$.\n\nA metric space $(X_1,d_1)$ is \\emph{isometrically embeddable} in a metric space $(X_2,d_2)$ if there exists a mapping $\\sigma:X_1\\rightarrow X_2$ such that $d_2(\\sigma(a), \\sigma(b))=d_1(a,b)$. If $(X_1,d_1)$ and $(X_2,d_2)$ are graphic metric spaces associated with graphs $G$ and $H$ respectively, then $G$ is called an \\emph{isometric subgraph} of $H$.\nThe cases when the graph $H$ is a hypercube, a Hamming graph, or\na Cartesian product of cliques of different sizes are of much importance. 
For other definitions and notations related to embeddability of metric spaces, we refer the reader to~\\cite{TD1987, DLbook}.\n\nAs mentioned earlier, Graham, Hoffman, and Hosoya \\cite{GHH77} proved the following elegant formula for the determinant of the distance matrix of a block graph. Note that this formula depends only on the sizes of the blocks and not on the graph structure.\n\\begin{thm}[\\cite{GHH77}]\\label{GHH}\nIf $G$ is a block graph with blocks $G_{1},~G_2,~\\ldots,~G_t$, then\n\\begin{eqnarray*}\n\\textnormal{cof}~D(G)&=&\\prod\\limits_{i=1}^{t}\\textnormal{cof}~D(G_i),\\\\\n\\det D(G)&=&\\sum\\limits_{i=1}^{t}\\det D(G_i)\\prod\\limits_{j\\neq i}\\textnormal{cof}~D(G_j).\n\\end{eqnarray*}\n\\end{thm}\n\nThe \\emph{coefficient sequence} of a real polynomial $p(x)=a_nx^n+\\cdots+a_1x+a_0$ is the sequence $a_0,a_1,a_2,\\ldots,a_n$. The polynomial $p$ is \\emph{real-rooted} if all roots of $p$ are real (by convention, constant polynomials are considered real-rooted). A sequence $a_0,a_1,a_2,\\ldots, a_n$ of real numbers is {\\em unimodal} if there is a $k$ such that $a_{i-1}\\leq a_{i}$ for $i\\leq k$ and $a_{i}\\geq a_{i+1}$ for $i\\geq k$, and the sequence is {\\em log-concave} if $a_j^2\\geq a_{j-1}a_{j+1}$ for all $j=1,\\dots, n-1$.\n\n\n\nFinally, we will make use of the following well-known result (see for instance \\cite[Lemma 1.1]{2} and \\cite[Theorem B, p.~270]{5}); it is usually stated with the additional assumption that the polynomial coefficients are nonnegative, but it is straightforward to remove that assumption.\n\n\\begin{lema}\\label{knownunimodalitylemma}\n\t\\leavevmode\n\\begin{description}\n\\item[$(i)$] If $p(x)=a_nx^n+\\cdots+a_1x+a_0$ is a real-rooted polynomial, then the coefficient sequence $a_i$ of $p$ is log-concave.\n\\item[$(ii)$] If the sequence $a_0,a_1,a_2,\\ldots,a_n$ is positive and log-concave, then it is unimodal.\n\\end{description}\n\\end{lema}\n\n\n\n\\section{On $\\ell_1$-embeddability of metric spaces}\\label{sec:emb}\n\n\n\nMetric spaces whose distance matrix has exactly one positive eigenvalue have received much attention since the work of Deza and Laurent~\\cite{DLbook}. Metric properties of regular graphs have also been investigated by Koolen \\cite{Koolenmscthesis}.\n\nIn this section we develop a general theory regarding the eigenvalues of metric spaces. The main result of this section, Theorem~\\ref{thmfinal}, will be used to show the unimodality part of Question \\ref{que:coefficientsblock}.\n\n\t\nWe say that a metric space $(X,d)$ is {\\it of negative type} if for all weight functions $w:X\\to\\mathbb{Z}$ with $\\sum\\limits_{x\\in X} w(x)=0$ we have \n\\begin{equation}\\label{neg-type-ineq}\n\\sum\\limits_{x\\in X}\\sum\\limits_{y\\in X}w(x)w(y)d(x,y)\\leq0.\n\\end{equation} \nIf the same inequality holds for all weight functions with $\\sum\\limits_{x\\in X} w(x)=1$, then we say that $(X,d)$ is {\\it hypermetric}. It is fairly easy to show that if $(X,d)$ is hypermetric, then it is of negative type. 
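These definitions are easy to probe computationally. The following sketch (ours, purely illustrative; the graph, a triangle with a pendant vertex, and the weight range $\{-2,\dots,2\}$ are arbitrary choices) checks the negative-type inequality for the path metric of a small block graph by brute force:

```python
from itertools import product

# Distance matrix of a small block graph: a triangle {0,1,2} with a pendant
# vertex 3 attached to vertex 0 (entries filled in by hand).
D = [[0, 1, 1, 1],
     [1, 0, 1, 2],
     [1, 1, 0, 2],
     [1, 2, 2, 0]]

# Check the negative-type inequality sum_{x,y} w(x) w(y) d(x,y) <= 0 over all
# integer weight functions w with entries in {-2,...,2} summing to 0.
worst = max(
    sum(w[i] * w[j] * D[i][j] for i in range(4) for j in range(4))
    for w in product(range(-2, 3), repeat=4)
    if sum(w) == 0
)
print(worst)  # maximum of the quadratic form over the tested weights; 0 here
assert worst <= 0
```

Exhausting all weight vectors in a small box is of course no proof, but it makes the role of the constraint $\sum_{x\in X} w(x)=0$ tangible.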
An alternative way to define metric spaces of negative type is via Schoenberg's Theorem~\\cite{Sch38}: a metric space $(X,d)$ is of negative type if and only if $(X,\\sqrt{d})$ is isometrically embeddable in a Euclidean space.\n\t\nA metric space $(X,d)$ is said to be \\textit{$\\ell_1$-embeddable} if it can be embedded isometrically into the $\\ell_{1}$-space $(\\mathbb{R}^m, d_{\\ell_{1}})$ for some integer $m\\geq1$. Here, $d_{\\ell_{1}}$ denotes the $\\ell_{1}$-distance defined by\n\\begin{equation*}\nd_{\\ell_1}(x,y):=\\sum\\limits_{1\\leq i\\leq m}|x_i-y_i|~~\\textrm{for}~\nx,y\\in \\mathbb{R}^m.\n\\end{equation*}\nOne of the basic results on $\\ell_{1}$-embeddable metric spaces is their characterization in terms of cut semimetrics. Given a subset $S$ of the $n$-set $V_n:= \\{1,\\ldots,n\\}$, the \\emph{cut semimetric} $\\delta_S$ is the distance on $V_n$ defined as\n\\begin{equation*}\n\\delta_S(i,j)=\\left\\{\n\\begin{array}{ll}\n1, & \\hbox{$|S\\cap\\{i,j\\}|=1$,} \\\\\n0, & \\hbox{$i, j \\in S$, \\textrm{or} $i,j\\in V_n\\setminus S$.}\n\\end{array}\n\\right.\n\\end{equation*}\nNote that every cut semimetric is clearly $\\ell_{1}$-embeddable. In fact, a distance $d$ is $\\ell_{1}$-embeddable if and only if it can be decomposed as a nonnegative linear combination of cut semimetrics. 
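For path metrics of trees this decomposition is explicit: deleting an edge $e$ of a tree splits the vertex set into a set $S_e$ and its complement, and the path metric equals $\sum_e \delta_{S_e}$. The following sketch (ours, for illustration; the path on four vertices is an arbitrary choice) verifies this fact computationally:

```python
from itertools import combinations

# A small tree: the path 0 - 1 - 2 - 3.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
adj = {v: set() for v in range(n)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def dist(u, v):
    # breadth-first search distance in the tree
    frontier, seen, d = {u}, {u}, 0
    while v not in frontier:
        frontier = {w for x in frontier for w in adj[x]} - seen
        seen |= frontier
        d += 1
    return d

def side(e, u):
    # vertices reachable from u after deleting edge e: the set S_e
    a, b = e
    frontier, seen = {u}, {u}
    while frontier:
        nxt = set()
        for x in frontier:
            for w in adj[x]:
                if {x, w} != {a, b} and w not in seen:
                    nxt.add(w)
        seen |= nxt
        frontier = nxt
    return seen

# The tree metric equals the sum of the cut semimetrics delta_{S_e}.
for u, v in combinations(range(n), 2):
    total = sum(1 for e in edges if (u in side(e, e[0])) != (v in side(e, e[0])))
    assert total == dist(u, v)
print("decomposition verified")
```

Each edge contributes $1$ exactly when it separates $u$ from $v$, i.e., when it lies on the unique $u$--$v$ path, so the sum counts the path length.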
We also note that $\\ell_1$-embeddability of $(X,d)$ implies it is hypermetric~\\cite{TD1987,Kelly67}.\n\t\n\t\nLet $G$ be a graph and let $(X,d)$ be the graphic metric space associated with~$G$.\nWe have the following implications:\n \n\\begin{gather*}\n(X,d) \\text{ is $\\ell_1$-embeddable} \\\\\n\t\t\\Downarrow \\\\\n(X,d) \\text{ is hypermetric} \\\\\n\t\t\\Downarrow \\\\\n(X,d) \\text{ is of negative type} \\\\\n\t\t\\Downarrow \\\\\n\\text{The distance matrix of $G$ has exactly one positive eigenvalue}\n\\end{gather*}\n\n\n\n\\medskip\n\nFor more details on the metric hierarchy we refer the reader to the book by Deza and Laurent \\cite{DLbook}.\n\t\n\t\nLet $(X_1,d_1)$ and $(X_2,d_2)$ be two metric spaces. Their direct product is the metric space $(X_1\\times X_2, d_1\\otimes d_2)$ where, for $x_1,y_1\\in X_1$, $x_2,y_2\\in X_2$,\n\\begin{equation}\\label{product-metric}\nd_1\\otimes d_2\\big((x_1,x_2),(y_1,y_2)\\big)=d_1(x_1,y_1)+d_2(x_2,y_2).\n\\end{equation}\nNote that for path metrics, the direct product operation corresponds to the Cartesian product of graphs. Namely, if $G$ and $H$ are two connected graphs, then the direct product of their path metrics coincides with the path metric of the Cartesian product of $G$ and $H$. The following lemmas provide a relation between the metric hierarchy and the direct product of the respective metric spaces.\n\\begin{lema}\\label{l1-embed-spaces}\nLet $d_i$ be a distance on the set $X_i$, for $i=1,2$. Then $(X_1\\times X_2, d_1\\otimes d_2)$ is $\\ell_{1}$-embeddable if and only if both $(X_1,d_1)$ and $(X_2,d_2)$ are $\\ell_{1}$-embeddable.\n\\end{lema}\n\\begin{proof}\n\t\t\nIf $(X_1\\times X_2, d_1\\otimes d_2)$ is $\\ell_{1}$-embeddable, then so are $(X_1,d_1)$ and $(X_2,d_2)$: by (\\ref{product-metric}), fixing a point in one coordinate embeds each factor isometrically into the product, and every subspace of an $\\ell_{1}$-embeddable space is $\\ell_{1}$-embeddable. Conversely, suppose that both $d_1$ and $d_2$ are $\\ell_{1}$-embeddable; then they can be decomposed as nonnegative linear combinations of cut semimetrics. 
Therefore, if $d_1=\\sum\\limits_{S\\subseteq X_1}a_{S}\\delta_{S}$ and $d_2=\\sum\\limits_{T\\subseteq X_2}b_{T}\\delta_{T}$, then by (\\ref{product-metric}) we obtain that\n\\begin{equation*}\nd_1\\otimes d_2=\\sum\\limits_{S\\subseteq X_1}a_{S}\\delta_{S\\times X_2}+\\sum\\limits_{T\\subseteq X_2}b_{T}\\delta_{X_1\\times T}.\n\\end{equation*}\nThis exhibits $d_1\\otimes d_2$ as a nonnegative linear combination of cut semimetrics. Thus $d_1\\otimes d_2$ is $\\ell_{1}$-embeddable.\n\\qedhere\n\\end{proof}\n\t\n\\begin{lema}\\label{l2-embed-spaces}\nLet $d_i$ be a distance on the set $X_i$, for $i=1,2$. Then $(X_1\\times X_2, d_1\\otimes d_2)$ is hypermetric (resp. of negative type) if and only if both $(X_1,d_1)$ and $(X_2,d_2)$ are hypermetric (resp. of negative type).\n\\end{lema}\n\\begin{proof}\nSimilarly to Lemma~\\ref{l1-embed-spaces}, it suffices to show that inequality~(\\ref{neg-type-ineq}) holds for $d_1\\otimes d_2$ assuming that $(X_1,d_1)$ and $(X_2,d_2)$ are hypermetric (resp. of negative type). By definition we have\n\\begin{align*}\n&\\sum\\limits_{(x_1,x_2)\\in X_1\\times X_2}\\sum\\limits_{(y_1,y_2)\\in X_1\\times X_2} w(x_1,x_2)w(y_1,y_2)(d_1\\otimes d_2)\\left((x_1,x_2),(y_1,y_2)\\right) = \\\\\n&= \\sum\\limits_{x_1\\in X_1}\\sum\\limits_{x_2\\in X_2}\\sum\\limits_{y_1\\in X_1}\\sum\\limits_{y_2\\in X_2} w(x_1,x_2)w(y_1,y_2)\\left(d_1(x_1,y_1)+d_2(x_2,y_2)\\right) = \\\\\n&= \\sum\\limits_{x_1\\in X_1}\\sum\\limits_{y_1\\in X_1}\\left(\\sum\\limits_{x_2\\in X_2}w(x_1,x_2)\\right)\\left(\\sum\\limits_{y_2\\in X_2}w(y_1,y_2)\\right)d_1(x_1,y_1)+ \\\\\n&+ \\sum\\limits_{x_2\\in X_2}\\sum\\limits_{y_2\\in X_2}\\left(\\sum\\limits_{x_1\\in X_1}w(x_1,x_2)\\right)\\left(\\sum\\limits_{y_1\\in X_1}w(y_1,y_2)\\right)d_2(x_2,y_2).\n\\end{align*}\nFor a weight function $w:X_1\\times X_2\\to \\mathbb{Z}$ such that \n\n$$\\sum\\limits_{(x_1,x_2)\\in X_1\\times X_2} w(x_1,x_2)=1\\text{ (resp. 
$0$),}$$ we define $w_1:X_1\\to\\mathbb{Z}$ and $w_2:X_2\\to\\mathbb{Z}$ such that \n$$w_1(x)=\\sum\\limits_{x_2\\in X_2} w(x,x_2)$$ and $$w_2(x)=\\sum\\limits_{x_1\\in X_1} w(x_1,x).$$ Note that $$\\sum\\limits_{x_1\\in X_1} w_1(x_1)=\\sum\\limits_{(x_1,x_2)\\in X_1\\times X_2}w(x_1,x_2)=1\\text{ (resp. $0$),}$$ and thus for $(X_1,d_1)$ being hypermetric (resp. of negative type) and the weight function $w_1$ we have $$\\sum\\limits_{x_1\\in X_1}\\sum\\limits_{y_1\\in X_1} w_1(x_1) w_1(y_1)d_1(x_1,y_1)\\leq 0.$$ Similarly, we have $$\\sum\\limits_{x_2\\in X_2}\\sum\\limits_{y_2\\in X_2} w_2(x_2)w_2(y_2)d_2(x_2,y_2)\\leq 0.$$ The inequality~(\\ref{neg-type-ineq}) for $d_1\\otimes d_2$ then follows.\n\\end{proof}\n\n\\begin{thm}\\label{thmfinal}\nLet $G$ be a graph whose $2$-connected components are of negative type. If $D(G)$ is the distance matrix of $G$, then $D(G)$ has exactly one positive eigenvalue.\n\\end{thm}\n\\begin{proof}\nLemma \\ref{l2-embed-spaces} ensures that the Cartesian product of the $2$-connected components of $G$ is of negative type. Since $G$ is an isometric subgraph of this Cartesian product, the graphic metric space of $G$ is of negative type as well, and the result follows from the last implication of the metric hierarchy above.\n\\end{proof}\n\n \nTerwilliger and Deza \\cite{TD1987} investigated finite distance spaces having integral distances: a finite set $X$ and a map $d:X^2\\rightarrow \\mathbb{Z}$. The relation $d=1$, viewed as a graph on $X$, is assumed to be connected. They provided the following classification of hypermetric spaces and metric spaces of negative type: $(X,d)$ is of negative type if and only if it is metrically embeddable in a Euclidean space and generates a root lattice (a direct sum of lattices of types $A,D,E$). Moreover, $(X,d)$ is hypermetric if and only if it is isomorphic to a subspace of the (complete) Cartesian products of the half-cubes, the CP-graphs, and the Johnson, Schl\u00e4fli and Gosset graphs. These graphs correspond to the minimum vectors in the lattices dual to the root lattices mentioned above. 
Terwilliger and Deza conclude by describing how a given hypermetric space $(X,d)$ may be embedded into a complete one. We should note that the results in \\cite{TD1987} provide an alternative way to show Theorem~\\ref{thmfinal} for hypermetric graphs. \n\n\nNext we observe that the Cartesian product of graphs does not preserve the one positive distance eigenvalue property. \n\n\\begin{rem}\nThe Cartesian product of two graphs, each having one positive distance eigenvalue, does not necessarily have one positive distance eigenvalue; see, e.g., Figure \\ref{fig:2evproduct}. In fact, if a vertex of $C_5$ is identified with any vertex of degree $4$ in the graph in Figure \\ref{fig:2evproduct}, right, then the resulting 11-vertex graph has the following distance spectrum:\n$$\\{-7.83, -2.62, -2, -2, -1.38, -1, -1, -0.38, -0.35, 0.08, 18.48\\}.$$\n\n\\begin{figure}[ht!]\n\\centering\n\\resizebox{7cm}{!}{\n\\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]\n\n\\draw (121.78,141.11) -- (38.49,140.8) -- (13.05,61.5) -- (80.6,12.79) -- (147.8,61.99) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (35.53,140.8) .. controls (35.53,139.16) and (36.86,137.84) .. (38.49,137.84) .. controls (40.13,137.84) and (41.46,139.16) .. (41.46,140.8) .. controls (41.46,142.44) and (40.13,143.76) .. (38.49,143.76) .. controls (36.86,143.76) and (35.53,142.44) .. (35.53,140.8) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (118.81,141.11) .. controls (118.81,139.47) and (120.14,138.14) .. (121.78,138.14) .. controls (123.41,138.14) and (124.74,139.47) .. (124.74,141.11) .. controls (124.74,142.74) and (123.41,144.07) .. (121.78,144.07) .. controls (120.14,144.07) and (118.81,142.74) .. (118.81,141.11) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (10.08,61.5) .. controls (10.08,59.86) and (11.41,58.53) .. (13.05,58.53) .. controls (14.68,58.53) and (16.01,59.86) .. (16.01,61.5) .. 
controls (16.01,63.14) and (14.68,64.46) .. (13.05,64.46) .. controls (11.41,64.46) and (10.08,63.14) .. (10.08,61.5) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (144.84,61.99) .. controls (144.84,60.35) and (146.17,59.03) .. (147.8,59.03) .. controls (149.44,59.03) and (150.77,60.35) .. (150.77,61.99) .. controls (150.77,63.63) and (149.44,64.96) .. (147.8,64.96) .. controls (146.17,64.96) and (144.84,63.63) .. (144.84,61.99) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (77.64,12.79) .. controls (77.64,11.15) and (78.97,9.83) .. (80.6,9.83) .. controls (82.24,9.83) and (83.57,11.15) .. (83.57,12.79) .. controls (83.57,14.43) and (82.24,15.76) .. (80.6,15.76) .. controls (78.97,15.76) and (77.64,14.43) .. (77.64,12.79) -- cycle ;\n\\draw (305.74,12.29) -- (358.27,90.93) -- (253.22,90.93) -- cycle ;\n\\draw (358.27,141.93) -- (253.22,90.93) -- (358.27,90.93) -- cycle ;\n\\draw (253.27,141.93) -- (358.27,90.93) -- (253.27,90.93) -- cycle ;\n\\draw (305.74,12.29) -- (305.74,116.43) ;\n\\draw (320.27,55.08) -- (305.74,115.58) -- (305.74,12.29) -- cycle ;\n\\draw (320.27,55.08) -- (253.22,90.93) ;\n\\draw (320.27,55.08) -- (253.27,141.93) ;\n\\draw (358.27,90.93) -- (320.27,55.08) ;\n\\draw (320.27,55.08) -- (358.27,141.93) ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (302.78,12.29) .. controls (302.78,10.65) and (304.1,9.33) .. (305.74,9.33) .. controls (307.38,9.33) and (308.71,10.65) .. (308.71,12.29) .. controls (308.71,13.93) and (307.38,15.26) .. (305.74,15.26) .. controls (304.1,15.26) and (302.78,13.93) .. (302.78,12.29) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (317.3,55.08) .. controls (317.3,53.44) and (318.63,52.11) .. (320.27,52.11) .. controls (321.9,52.11) and (323.23,53.44) .. (323.23,55.08) .. controls (323.23,56.71) and (321.9,58.04) .. (320.27,58.04) .. controls (318.63,58.04) and (317.3,56.71) .. 
(317.3,55.08) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (355.3,90.93) .. controls (355.3,89.3) and (356.63,87.97) .. (358.27,87.97) .. controls (359.9,87.97) and (361.23,89.3) .. (361.23,90.93) .. controls (361.23,92.57) and (359.9,93.9) .. (358.27,93.9) .. controls (356.63,93.9) and (355.3,92.57) .. (355.3,90.93) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (250.25,90.93) .. controls (250.25,89.3) and (251.58,87.97) .. (253.22,87.97) .. controls (254.85,87.97) and (256.18,89.3) .. (256.18,90.93) .. controls (256.18,92.57) and (254.85,93.9) .. (253.22,93.9) .. controls (251.58,93.9) and (250.25,92.57) .. (250.25,90.93) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (302.8,116.43) .. controls (302.8,114.8) and (304.13,113.47) .. (305.77,113.47) .. controls (307.4,113.47) and (308.73,114.8) .. (308.73,116.43) .. controls (308.73,118.07) and (307.4,119.4) .. (305.77,119.4) .. controls (304.13,119.4) and (302.8,118.07) .. (302.8,116.43) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (355.3,141.93) .. controls (355.3,140.3) and (356.63,138.97) .. (358.27,138.97) .. controls (359.9,138.97) and (361.23,140.3) .. (361.23,141.93) .. controls (361.23,143.57) and (359.9,144.9) .. (358.27,144.9) .. controls (356.63,144.9) and (355.3,143.57) .. (355.3,141.93) -- cycle ;\n\\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (250.3,141.93) .. controls (250.3,140.3) and (251.63,138.97) .. (253.27,138.97) .. controls (254.9,138.97) and (256.23,140.3) .. (256.23,141.93) .. controls (256.23,143.57) and (254.9,144.9) .. (253.27,144.9) .. controls (251.63,144.9) and (250.3,143.57) .. 
(250.3,141.93) -- cycle ;\n\n\n\n\n\\end{tikzpicture}\n\n}\n\\caption{Two graphs whose distance matrices have exactly one positive eigenvalue, but the graph resulting from joining them in one vertex has two positive distance eigenvalues.}\n\\label{fig:2evproduct}\n\\end{figure}\nMoreover, from Zhang and Godsil's result \\cite[Theorem 3.3]{zg2013} it follows that if identifying two graphs in one vertex does not preserve the one positive distance eigenvalue property, then the Cartesian product does not preserve it either.\n\\end{rem}\n \n\\begin{rem} If all the $2$-connected components of a graph $G$ have a full-rank distance matrix, it does not necessarily follow that $D(G)$ has full rank. For example, consider the graph in Figure~\\ref{fig:fullrank} with two biconnected components: $G'$ on vertices $\\{0,1,\\dots,7\\}$ and $G''$ on vertices $\\{0,8,9\\}$. The distance matrices of $G$ and $G'$ are:\n\n{\\tiny{\n\t\\[\n\tD(G)=\\left(\n\t\\begin{array}{cccccccccc}\n\t\t0 & 1 & 2 & 1 & 2 & 2 & 2 & 2 & 1 & 1 \\\\\n\t\t1 & 0 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 \\\\\n\t\t2 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 3 & 3 \\\\\n\t\t1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 2 & 2 \\\\\n\t\t2 & 2 & 1 & 1 & 0 & 1 & 2 & 2 & 3 & 3 \\\\\n\t\t2 & 2 & 1 & 1 & 1 & 0 & 2 & 2 & 3 & 3 \\\\\n\t\t2 & 2 & 1 & 1 & 2 & 2 & 0 & 2 & 3 & 3 \\\\\n\t\t2 & 2 & 1 & 1 & 2 & 2 & 2 & 0 & 3 & 3 \\\\\n\t\t1 & 2 & 3 & 2 & 3 & 3 & 3 & 3 & 0 & 1 \\\\\n\t\t1 & 2 & 3 & 2 & 3 & 3 & 3 & 3 & 1 & 0 \\\\\n\t\\end{array}\n\t\\right), \\;\\;\\;\n\tD(G')=\\left(\n\t\\begin{array}{cccccccc}\n\t\t0 & 1 & 2 & 1 & 2 & 2 & 2 & 2 \\\\\n\t\t1 & 0 & 1 & 1 & 2 & 2 & 2 & 2 \\\\\n\t\t2 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\\\\n\t\t1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 \\\\\n\t\t2 & 2 & 1 & 1 & 0 & 1 & 2 & 2 \\\\\n\t\t2 & 2 & 1 & 1 & 1 & 0 & 2 & 2 \\\\\n\t\t2 & 2 & 1 & 1 & 2 & 2 & 0 & 2 \\\\\n\t\t2 & 2 & 1 & 1 & 2 & 2 & 2 & 0 \\\\\n\t\\end{array}\n\t\\right),\n\t\\]\n\t}}\nand the ranks of $D(G')$ and $D(G'')$ are $8$ and $3$ respectively, so they are full-rank matrices, whereas 
the rank of $D(G)$ is $9$, so it is not a full-rank matrix.\n\\end{rem}\n \n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\resizebox{4cm}{!}{\n\t\\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]\n\t\n\t\t\n\t\n\t\t\\draw (141.87,52.67) -- (111.33,81) ;\n\t\t\\draw [shift={(111.33,81)}, rotate = 137.14] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(141.87,52.67)}, rotate = 137.14] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (111.33,81) -- (141.33,111) ;\n\t\t\\draw [shift={(141.33,111)}, rotate = 45] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(111.33,81)}, rotate = 45] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (141.87,52.67) -- (141.33,111) ;\n\t\t\\draw [shift={(141.33,111)}, rotate = 90.52] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(141.87,52.67)}, rotate = 90.52] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (111.33,81) -- (81.33,51.67) ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(111.33,81)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x 
radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,111.67) -- (81.33,51.67) ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 270] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,111.67)}, rotate = 270] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (111.33,81) -- (81.33,111.67) ;\n\t\t\\draw [shift={(81.33,111.67)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(111.33,81)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,51.67) -- (51.33,82.33) ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,111.67) -- (51.33,82.33) ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,111.67)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,51.67) -- (51.33,22.33) ;\n\t\t\\draw [shift={(51.33,22.33)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; 
blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (51.33,82.33) -- (51.33,22.33) ;\n\t\t\\draw [shift={(51.33,22.33)}, rotate = 270] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 270] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (51.33,22.33) -- (21.33,53) ;\n\t\t\\draw [shift={(21.33,53)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(51.33,22.33)}, rotate = 134.37] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (51.33,82.33) -- (21.33,53) ;\n\t\t\\draw [shift={(21.33,53)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 224.36] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,51.67) -- (21.33,53) ;\n\t\t\\draw [shift={(21.33,53)}, rotate = 178.73] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 178.73] [color={rgb, 
255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (20.67,82.33) -- (51.33,82.33) ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(20.67,82.33)}, rotate = 0] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (20.67,82.33) -- (81.33,51.67) ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 333.18] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(20.67,82.33)}, rotate = 333.18] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,21) -- (51.33,82.33) ;\n\t\t\\draw [shift={(51.33,82.33)}, rotate = 116.06] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,21)}, rotate = 116.06] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\n\t\t\\draw (81.33,21) -- (81.33,51.67) ;\n\t\t\\draw [shift={(81.33,51.67)}, rotate = 90] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\\draw [shift={(81.33,21)}, rotate = 90] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0, 0) circle [x radius= 1, y radius= 1] ;\n\t\t\n\t\t\n\t\n\t\t\\draw (106,63.07) node 
[anchor=north west][inner sep=0.75pt] {$0$};\n\t\n\t\t\\draw (86,43.73) node [anchor=north west][inner sep=0.75pt] {$1$};\n\t\n\t\t\\draw (47.33,84.4) node [anchor=north west][inner sep=0.75pt] {$2$};\n\t\n\t\t\\draw (68,109.07) node [anchor=north west][inner sep=0.75pt] {$3$};\n\t\n\t\t\\draw (9.33,38.4) node [anchor=north west][inner sep=0.75pt] {$4$};\n\t\n\t\t\\draw (37.33,12.4) node [anchor=north west][inner sep=0.75pt] {$5$};\n\t\n\t\t\\draw (8.67,68.4) node [anchor=north west][inner sep=0.75pt] {$6$};\n\t\n\t\t\\draw (85.33,10.4) node [anchor=north west][inner sep=0.75pt] {$7$};\n\t\n\t\t\\draw (145.67,45.73) node [anchor=north west][inner sep=0.75pt] {$8$};\n\t\n\t\t\\draw (145.33,106.4) node [anchor=north west][inner sep=0.75pt] {$9$};\n\t\t\n\t\t\n\t\\end{tikzpicture}\n\t}\n\t\\caption{A graph $G$ whose biconnected components have full rank distance matrices but its distance matrix $D(G)$ does not have full rank.}\n\t\\label{fig:fullrank}\n\\end{figure}\n\n\n\\section{Extension of the Graham and Lov\\'{a}sz conjecture to block graphs}\\label{sec:conjectureforblockgraphs}\n\nRecall that, for a graph $G$, $c_k(G)$ denotes the coefficient of $x^k$ in $$\\text{det}(D(G)-xI)=(-1)^np_{D(G)}(x).$$ By a result of Graham and Lov\\'{a}sz~\\cite{GL78}, each coefficient $c_k$ of the distance characteristic polynomial of a tree contains the factor $(-1)^{n-1}2^{n-k-2}$. We say that the quantities $d_k(T)=(-1)^{n-1}c_k(T)\/2^{n-k-2}$ are the \\emph{normalized coefficients} of the distance characteristic polynomial of a tree $T$. Due to this common factor in the coefficients for trees \\cite{EGG76}, Conjecture~\\ref{con:coefficientstree} is stated in terms of the normalized coefficients. 
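As a concrete illustration of this normalization (our own computational sketch; the path $P_4$ is an arbitrary example), one can compute the coefficients of the distance characteristic polynomial exactly with the Faddeev--LeVerrier algorithm and read off the normalized coefficients:

```python
from fractions import Fraction as F

# Distance matrix of the path P4 (a tree on n = 4 vertices), filled in by hand.
D = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
n = len(D)

def charpoly(A):
    # Faddeev-LeVerrier: exact coefficients of det(xI - A), highest power first.
    m = len(A)
    M = [[F(int(i == j)) for j in range(m)] for i in range(m)]  # M_1 = I
    coeffs = [F(1)]
    for k in range(1, m + 1):
        if k > 1:  # M_k = A M_{k-1} + c_{m-k+1} I
            AM = [[sum(A[i][l] * M[l][j] for l in range(m)) for j in range(m)]
                  for i in range(m)]
            M = [[AM[i][j] + (coeffs[-1] if i == j else 0) for j in range(m)]
                 for i in range(m)]
        trace = sum(sum(A[i][l] * M[l][i] for l in range(m)) for i in range(m))
        coeffs.append(-trace / k)
    return coeffs  # [1, c_{m-1}, ..., c_0] of det(xI - A)

p = charpoly(D)                      # det(xI - D) = x^4 - 20 x^2 - 32 x - 12
# c_k of det(D - xI) equals (-1)^n [x^k] det(xI - D); here n = 4 is even.
c = {k: p[n - k] for k in range(n - 1)}          # c_0, ..., c_{n-2}
for k in range(n - 1):
    d_k = F((-1) ** (n - 1)) * c[k] / 2 ** (n - k - 2)
    assert d_k.denominator == 1      # 2^{n-k-2} indeed divides c_k
    print(k, int(d_k))
```

Here $d_0=3$, $d_1=16$, $d_2=20$ are integers, as the common factor $(-1)^{n-1}2^{n-k-2}$ predicts.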
We should note that we will not use the normalized coefficients to investigate Question \\ref{que:coefficientsblock}.\n\nIn this section we show that the sequence of coefficients of the distance characteristic polynomial of a block graph is unimodal, and we establish the peak for several extremal classes of block graphs with small diameter. \n\n\\subsection{Unimodality}\\label{sec:unimodality}\n\nTo answer the unimodality part of Question \\ref{que:coefficientsblock}, we follow an approach similar to the one used in \\cite{GRWC2015} to show unimodality for trees. The main idea relies on the fact that the distance matrix of a block graph on $n$ vertices has one positive and $n-1$ negative eigenvalues. We begin with a preliminary result, which extends the known result for trees by Edelberg, Garey and Graham \\cite{EGG76}. \n\n\\begin{lema}\\label{claim}\nThe coefficients of the distance characteristic polynomial of a block graph $G$ satisfy\n$$(-1)^{n-1}c_{k}(G)>0 \\quad \\text{for }0\\leq k \\leq n-2.$$\n\\end{lema}\n\\begin{proof}\nIt was shown in~\\cite[Theorem~3.2]{LLL} that the distance matrix $D(G)$ of a block graph $G$ has one positive and $n-1$ negative eigenvalues.\nWe now extend the argument of~\\cite[Theorem 2.3]{EGG76}, which shows that $(-1)^{n-1}c_{k}(T)>0$ for $0\\leq k\\leq n-2$ for any tree $T$, using only the fact that the distance matrix has one positive and $n-1$ negative eigenvalues. Let the eigenvalues of the distance matrix $D(G)$ of a block graph $G$ be denoted by $\\lambda_1, -\\lambda_2, \\dots, -\\lambda_n$, where $\\lambda_i>0$ for~$i=1,\\dots,n$. 
Then the distance characteristic polynomial is\n\\begin{align*}\n\t\\det(D(G)-xI) &= (-1)^n(x-\\lambda_1)(x+\\lambda_2)\\cdots(x+\\lambda_n)= \\\\\n\t&= (-1)^n(x-\\lambda_1)\\sum\\limits_{k=0}^{n-1} g_{n-1-k} x^{k}= \\\\\n\t&= (-1)^n\\left(x^n+\\sum\\limits_{k=1}^{n-1}(g_{n-k}-\\lambda_1g_{n-k-1})x^{k}-\\lambda_1g_{n-1}\\right),\n\\end{align*}\nwhere $g_k$ is the sum of all $k$-fold products of $\\lambda_2,\\dots,\\lambda_n$. Then $(-1)^nc_{n-1}(G)=g_1-\\lambda_1$, but also $c_{n-1}(G)=-c_n(G)\\text{tr}(D)=0$; thus, $g_1=\\lambda_1$. Then, since $g_{n-k}-\\lambda_1g_{n-k-1}=g_{n-k}-g_1g_{n-k-1}<0$ for $k=1,\\dots,n-2$, \nand since for $k=0$, $-\\lambda_1g_{n-1}=-g_1g_{n-1}=-\\prod\\limits_{i=1}^{n}\\lambda_i<0$, it follows that $(-1)^{n-1}c_{k}(G)>0$ for $0\\leq k\\leq n-2$.\n\\end{proof}\n\n\n\n\n\n\\begin{thm}\\label{theo:unimodality}\nFor a block graph $G$, the sequence of coefficients of the distance characteristic polynomial $(-1)^{n-1}c_0(G),\\ldots,(-1)^{n-1}c_{n-2}(G)$ is unimodal.\n\\end{thm}\n\n\\begin{proof}\nBy Lemma \\ref{claim}, the coefficients of the distance characteristic polynomial of a block graph satisfy $(-1)^{n-1}c_{k}(G)>0$ for $0\\leq k \\leq n-2$.\n\nSince the distance matrix $D$ is a real symmetric matrix, the distance characteristic polynomial is real-rooted, so Lemma \\ref{knownunimodalitylemma}$(i)$ implies that the sequence is log-concave.\n\nSince the sequence is, moreover, positive, Lemma \\ref{knownunimodalitylemma}$(ii)$ implies that it is unimodal.\n\\end{proof}\n\n\n\\subsection{Peak location}\\label{sec:peak}\n\n\n \n\nIn this subsection we answer the peak location part of Question~\\ref{que:coefficientsblock} for several extremal families of uniform block graphs with small diameter. 
\n\nThe idea is to derive an explicit formula for the coefficients and use the unimodality to find the peak. However, while the method for obtaining the peak for stars and paths relies on the algebraic properties of the corresponding distance matrix \\cite{C89}, for block graphs we will exploit several spectral properties of the distance matrix.\n\nConsider the \\emph{windmill graph} $W(k,t)$, which is a block graph formed by joining $k$ cliques of size $t$ at a shared universal vertex. The following result uses Stirling's approximation to establish an estimate of the peak location of a windmill graph for large $k$. \n\n\\begin{thm}\\label{thm:friend.gen.approx} Consider a windmill graph $W(k,t)$ with $k\\geq 2$ and $t\\geq 3$, so that $n=|V|=k(t-1)+1$. Then, the sequence of coefficients is unimodal, and as $k$ approaches infinity the peak of the sequence occurs at $\\frac{kt(t-1)}{2(t+1)} + O(\\log k)$.\n\\end{thm}\n\n\n\n\\begin{proof} The distance polynomial of the graph is\n{\\small{\n$$p_D(x)=(x+1)^{(t-2)k}(x+t)^{k-1}(x^2-(t-2+2(t-1)(k-1))x-k(t-1)).$$\n}}\nTo locate the peak of the coefficient sequence, it is sufficient to consider $(x+1)^{(t-2)k}(x+t)^{k-1}$.\n\tThe coefficient sequence of $(x+1)^{(t-2)k}$ peaks at $\\frac{k(t-2)}2$. The coefficients of $(x+t)^{k-1}$ are defined by the formula $f_i = {k-1 \\choose i}t^{k-1-i}$. By calculating explicitly $f_{\\frac{k}{t+1}-2},f_{\\frac{k}{t+1}-1},f_{\\frac{k}{t+1}},f_{\\frac{k}{t+1}+1}$ and using unimodality of binomial coefficients we conclude that the sequence $f_i$ peaks at $\\frac{k}{t+1}$.\n\t\n\tDefine $$p = \\frac{k(t-2)}2+\\frac{k}{t+1}= \\frac{kt(t-1)}{2(t+1)},$$ and let $d_i$ be the coefficients of $(x+1)^{(t-2)k}(x+t)^{k-1}$. Then $$d_i=\\sum\\limits_{j=0}^i {(t-2)k \\choose i-j} {k-1 \\choose j} t^{k-1-j}.$$ We define $m_i = \\max\\limits_{j} {(t-2)k \\choose i-j} {k-1 \\choose j} t^{k-1-j}$, the maximal term in the sum. Then we have $d_i \\leq k\\cdot m_i$ for all $i$. 
On the other hand, $d_p$ has $${(t-2)k \\choose (t-2)k\/2} {k-1 \\choose k\/(t+1)} t^{k-1-k\/(t+1)}$$ as one of the terms in its sum, so $d_p$ is greater than that. The idea is to show that $k m_i \\leq {(t-2)k \\choose (t-2)k\/2} {k-1 \\choose k\/(t+1)} t^{k-1-k\/(t+1)}$ for all $i\\leq k-1$, which implies $d_i \\leq d_p$.\n\t\n\tTo find $m_i$, we apply Stirling's formula to ${(t-2)k \\choose i-j} {k-1 \\choose j} t^{k-1-j}$ and find its derivative with respect to $j$. Note that for $n\\to\\infty$, we have $$\\log n! = n\\log n-n+\\frac12\\log(2\\pi n)+O(1\/n),\\text{ or }$$\n$$n!=\\left(\\frac{n}{e}\\right)^n\\sqrt{2\\pi n}\\left(1+O(1\/n)\\right),$$\n\tmeaning that $n!$ and $\\frac{n^n}{e^n} \\sqrt{2\\pi n}$ are asymptotically equivalent. Also observe that since the sequence is unimodal it is sufficient to find a local peak among the coefficients with high values of $k$ and $n-k$. Hence, for large enough $n$, $k$, and $n-k$ we may derive\n\t{\\small{\n\t$${n \\choose k}= \\frac{n!}{k!(n-k)!}\\sim \\frac{\\sqrt{2\\pi n} n^n e^{n-k} e^k}{2\\pi \\sqrt{k(n-k)} k^k (n-k)^{n-k} e^n} =\\frac{n^n\\sqrt{n}}{ \\sqrt{2\\pi k(n-k)} k^k (n-k)^{n-k}}.$$\n\t}}\n\t\n\tThen, we have\n\t{\\small{\\begin{align*}\n\t &\\left.{(t-2)k \\choose i-j} {k-1 \\choose j} t^{k-1-j}\\right. 
\\sim \\\\ \n\t &\\sim \\frac{((t-2)k)^{k(t-2)+1\/2} (k-1)^{k-1+1\/2}}{2\\pi (i-j)^{i-j+1\/2} ((t-2)k-i+j)^{(t-2)k-i+j+1\/2} j^{j+1\/2} (k-1-j)^{k-1-j+1\/2}},\n\t\\end{align*}}}\n\tand its derivative is then equal to $0$ if and only if\n\t{\\small \\begin{align*}\n\t\t\\log\\left(\\frac{(i-j)(k-1-j)}{j (k(t-2)-i+j)}\\right) &= \\frac1{2j}-\\frac1{2(i-j)}-\\frac1{2(k-1-j)}+\\frac1{2(k(t-2)-i+j)}, \n\t\\end{align*}}\t\t\n\t{\\small \\begin{align*}\t\t\n\t\t\\frac{(i-j)(k-1-j)}{j (k(t-2)-i+j)} &= \\sqrt{\\frac{e^{1\/j} e^{1\/(k(t-2)-i+j)}}{e^{1\/(i-j)} e^{1\/(k-1-j)}}}. \n\t\\end{align*}}\nSince $\\sqrt{e^{1\/z}}$ and $1$ are asymptotically equivalent for $z\\to\\infty$, we may assume\n\\begin{align*}\n\t\\frac{(i-j)(k-1-j)}{j(k(t-2)-i+j)} &\\sim 1, \\\\\n\t\\frac{i(k-1)}{k(t-1)-1} &\\sim j. \n\\end{align*}\nFor $k\\to\\infty$ we may assume $j= \\frac{i}{t-1}$ and then the inequality we want to prove is\n\n{\\footnotesize{\n$$k {(t-2)k \\choose i-i\/(t-1)} {k-1 \\choose i\/(t-1)} t^{k-1-i\/(t-1)} \\leq {(t-2)k \\choose (t-2)k\/2} {k-1 \\choose k\/(t+1)} t^{k-1-k\/(t+1)}.$$\n}}\nThis follows from the following two inequalities:\n{\\small{\n$$k {(t-2)k \\choose i-i\/(t-1)} t^{k\/(t+1)} \\leq {(t-2)k \\choose (t-2)k\/2} t^{i\/(t-1)} \\text{ and } {k-1 \\choose i\/(t-1)} \\leq {k-1 \\choose k\/(t+1)}.$$\n}}\nThe latter inequality follows from $\\frac{i}{t-1}\\leq\\frac{k}{t+1}$ for $k\\to\\infty$. The former inequality can be shown using Stirling's approximation.\n\\end{proof}\n\n\\begin{rem}\nOne may wonder about the extension to a windmill graph in which the cliques are not all of the same size. Consider a block graph with one universal vertex and $k_i$ cliques of size $t_i$ for $i\\in\\{1,\\dots,l\\}$ and some $l$, where the sizes $t_1,\\dots,t_l$ are pairwise distinct. 
Then the distance characteristic polynomial takes the form\n$$p_D(x) = (x+1)^{n-k-1} \\prod\\limits_{i=1}^l (x+t_i)^{k_i-1} p_B(x),$$\nwhere $p_B(x)$ is the characteristic polynomial of the quotient matrix corresponding to the coarsest partition into $l+1$ vertex subsets $X_0,X_1,\\dots,X_l$: $X_0$ only contains the universal vertex, and $X_i$ has all vertices (except for the universal one) from all $k_i$ cliques of size $t_i$. Then the problem of finding $m_i$ as defined above is equivalent to finding the peak of an $l$-dimensional function, which complicates the application of the approximation approach even for small $l$.\n\\end{rem}\n\n\n\nThe \\emph{friendship graph} (or \\emph{Dutch windmill graph}), denoted $F_{2k+1}$, is the graph $W(k,3)$ obtained by joining $k$ copies of $K_3$ at a common vertex so that $n=|V|=2k+1$. As a direct consequence of Theorem~\\ref{thm:friend.gen.approx} we obtain the following corollary.\n\n\\begin{cor}\nLet $F_{2k+1}$ be the friendship graph on $n=|V|=2k+1$ vertices. Then, the sequence of coefficients is unimodal, and as $k$ approaches infinity the peak of the sequence occurs at $3k\/4 + O(\\log k)$.\n\\end{cor}\n\nUsing a different and non-asymptotic approach, next we show that the windmill graph with two cliques, $W(2,t+1)$, has distance characteristic polynomial coefficients with peak exactly at $\\lfloor\\frac{n}{2}\\rfloor$. \n\n\\begin{thm}\nLet $G$ be a graph obtained by adding a universal vertex to the disjoint union of two cliques $K_t$, and let $n=|V(G)|=2t+1$. Then the sequence of coefficients of the distance characteristic polynomial of $G$ is unimodal with peak at $\\lfloor\\frac{n}{2}\\rfloor$.\n\\end{thm}\n\\begin{proof}\nConsider an equitable partition of $G$ into three subsets: one contains only the universal vertex, and the other two correspond to the cliques $K_t$. 
Using the quotient matrix of this partition, we can compute the distance characteristic polynomial of $G$ to be\n\\begin{eqnarray*}\np_{D}(x)&=&(x+1)^{2t-2}(-(t+1)-x)(-2t-(3t-1)x+x^2) =\\\\\n&=&(x+1)^{2t-2}(ax^3 + bx^2 + cx + d),\n\\end{eqnarray*}\nwhere $a=-1$, $b=2t-2$, $c=3t^2+4t-1$, and $d=2t^2+2t$.\nMultiplying the binomial expansion of $(x+1)^{2t-2}$ by $(ax^3 + bx^2 + cx + d)$ and combining the coefficients of terms with the same power, we obtain\n{\\allowdisplaybreaks{\n{\\small{\n\\begin{align*}\np_{D}(x)&=\\left[d{2t-2\\choose 0}\\right]x^0 +\\\\\n&+\\left[c{2t-2\\choose 0}+d{2t-2\\choose 1}\\right]x^1+\\\\\n&+\\left[b{2t-2\\choose0}+c{2t-2\\choose 1}+d{2t-2\\choose 2}\\right]x^2+\\\\\n&+\\sum_{i=0}^{2t-5}\\left[a{2t-2\\choose i} +b{2t-2\\choose i+1}+c{2t-2\\choose i+2}+d{2t-2\\choose i+3}\\right]x^{i+3}+\\\\\n&+\\left[a{2t-2\\choose 2t-4} + b{2t-2\\choose 2t-3} + c{2t-2\\choose 2t-2}\\right]x^{2t-1}+\\\\\n&+\\left[a{2t-2\\choose 2t-3}+b{2t-2\\choose 2t-2}\\right]x^{2t}+\\\\\n&+\\left[a{2t-2 \\choose 2t-2}\\right]x^{2t+1}.\n\\end{align*}\n}}\n}}\nLet $c_i$ be the coefficient of $x^i$ in $p_{D}(x)$. Then, for $t\\geq 4$ and $j\\geq 0$,\n\\begin{equation}\n\\label{beq1}\nc_{t-j}=a{2t-2\\choose t-3-j} +b{2t-2\\choose t-2-j}+c{2t-2\\choose t-1-j}+d{2t-2\\choose t-j}.\n\\end{equation}\nUsing the formula ${n\\choose k} = \\frac{(n - k + 1)}{k} { n\\choose k - 1}$, we can rewrite \\eqref{beq1} as\n\n\\begin{align*}\nc_{t-j} &= \\binom{2 t-2}{t-3-j}\n\\left(\na\n+b\\cdot\\frac{j+t+1}{t-j-2}\n+c\\cdot\\frac{j+t+1}{t-j-2}\\cdot\\frac{j+t}{t-j-1}\\right. +\\\\\n &+\\left. 
d\\cdot\\frac{j+t-1}{t-j-2}\\cdot\\frac{j+t}{t-j-1}\\cdot\\frac{j+t+1}{t-j}\\right) = \\\\\n&= \\binom{2 t-2}{t-3-j} \\cdot f(t,j),\n\\end{align*}\nwhere $f(t,j)=\\frac{t (j+t) \\left(t \\left(j^2+j (3-4 t)-t (5 t+11)+2\\right)+2\\right)}{(j-t) (j-t+1) (j-t+2)}$.\nThen, $c_{t-j}\\geq c_{t-(j+1)}$ if and only if\n$$\\binom{2 t-2}{t-3-j} \\cdot f(t,j)\\geq \\binom{2 t-2}{t-3-(j+1)} \\cdot f(t,j+1),$$\nwhich is equivalent to\n\\begin{equation}\n\\label{beq2}\n\\frac{f(t,j)}{t-3-j}\\geq \\frac{f(t,j+1)}{t+j+2}.\n\\end{equation}\nIt can be verified using a computer algebra system for rational functions that \\eqref{beq2} holds for all integers $j$ and $t$ with $t\\geq 4$ and $0\\leq j < t-3$; for $t\\leq 3$, it can be verified that $c_{t-j}\\geq c_{t-(j+1)}$ by explicitly computing the distance characteristic polynomial. Analogously, it can be shown that $c_{t+j}\\geq c_{t+(j+1)}$ for all $t$ and $j\\geq 0$. Thus, it follows that $c_0\\leq \\ldots\\leq c_t\\geq \\ldots \\geq c_{2t+1}$, and hence the coefficient sequence of the distance characteristic polynomial of $G$ is unimodal with peak at $t = \\lfloor \\frac{2t+1}{2}\\rfloor=\\lfloor \\frac{n}{2}\\rfloor$.\n\\end{proof}\n\nConsider now the class of \\textit{barbell graphs} $B(t,\\ell)$, which are obtained by connecting two cliques $K_t$ with a path on $\\ell$ vertices by identifying the leaves of the path with one of the vertices in each clique. We also consider \\textit{lollipop graphs} $L(t,\\ell)$, which are obtained by adding a path on $\\ell$ vertices to a single clique $K_t$ so that one of the leaves of the path is also a vertex of the clique. We begin by locating the peak of the distance characteristic polynomial for a barbell graph with $\\ell\\in\\{2,3,4,5\\}$.\n\n\\begin{thm}\\label{thm.barbell}\t\n\tLet $G$ be a barbell graph $B(t,\\ell)$, so that $|V(G)|=n=2t+\\ell-2$. 
Then the sequence of coefficients of the distance characteristic polynomial of $G$ is unimodal with peak at $t-1$ if $\\ell=2$ and at $t$ if $\\ell\\in\\{3,4,5\\}$.\n\\end{thm}\n\n\\begin{proof}\nWe prove the claim for $\\ell=2$; the cases $\\ell\\in\\{3,4,5\\}$ can be shown analogously.\nConsider a quotient partition into $4$ vertex sets: $2$ of them are the two vertices of the path on $2$ vertices connecting the cliques and the other $2$ correspond to the remaining $t-1$ vertices of each clique. We then have the quotient matrix\n$$B=\\left(\n\t\\begin{array}{cccc}\n\t\tt-2 & 1 & 2 & 3 (t-1) \\\\\n\t\tt-1 & 0 & 1 & 2 (t-1) \\\\\n\t\t2 (t-1) & 1 & 0 & t-1 \\\\\n\t\t3 (t-1) & 2 & 1 & t-2 \\\\\n\t\\end{array}\n\t\\right)$$\n\twith characteristic polynomial $$p_B(x)=-x^4+(2 t-4) x^3+\\left(8 t^2-4 t-4\\right) x^2+\\left(14 t^2-12t\\right) x+5 t^2-4 t.$$ \n\tFrom \\cite[Theorem 3.3]{HR21}, it follows that the distance characteristic polynomial of $G$ is \n\t{\\footnotesize{\n\t$$p_D(x)=(x+1)^{2t-4}\\left(-x^4+(2 t-4) x^3+\\left(8 t^2-4 t-4\\right) x^2+\\left(14 t^2-12t\\right) x+5 t^2-4 t\\right).$$\n\t}}\n\t\nLet $a=-1$, $b=2t-4$, $c=8t^2-4t-4$, $d=14t^2-12t$, and $e=5t^2-4t$. We can use $$(x+1)^{2t-4}=\\sum\\limits_{k=0}^{2t-4}\\binom{2t-4}{k} x^k$$ to write down a formula in the case $5\\leq k\\leq 2t-4$:\n\t$$c_{k}=a\\binom{2t-4}{k-4}+b\\binom{2t-4}{k-3}+c\\binom{2t-4}{k-2}+d\\binom{2t-4}{k-1}+e\\binom{2t-4}{k}.$$\n\tUsing the identity $\\binom{n}{k}=\\frac{n-k+1}k\\binom{n}{k-1}$ we obtain\n\n {\\small{\n\\begin{align*}\n\tc_{k}&=\\binom{2t-4}{k-4}\\left(a+\\frac{2t-k}{k-3}\\cdot b+\\frac{2t-k-1}{k-2}\\cdot\\frac{2t-k}{k-3}\\cdot c+\\right. \\\\\n\t&+\\left.\\frac{2t-k-2}{k-1}\\cdot\\frac{2t-k-1}{k-2}\\cdot\\frac{2t-k}{k-3}\\cdot d\\right. 
+ \\\\ \n&+\\left.\\frac{2t-k-3}{k}\\cdot\\frac{2t-k-2}{k-1}\\cdot\\frac{2t-k-1}{k-2}\\cdot\\frac{2t-k}{k-3}\\cdot e \\right) = \\\\\n\t&= \\binom{2t-4}{k-4}f(t,k).\n\\end{align*}\n }}\n\tThen, $c_k\\geq c_{k-1}$ if and only if\n\t$$\\binom{2t-4}{k-4}f(t,k)\\geq \\binom{2t-4}{k-5}f(t,k-1),$$\n\twhich by applying the identity $\\binom{n}{k}=\\frac{n-k+1}k\\binom{n}{k-1}$ leads to\n\t$$\\frac{f(t,k)}{k-4}\\geq \\frac{f(t,k-1)}{2t-k+1}.$$\n\tIf $k=t$ then the simplified form of the above inequality is\n\t$$\\frac{21 t^4-35 t^3+t^2+3 t+6}{t^4-8 t^3+17 t^2+2 t-24}\\leq 0,$$\n\tand for $k=t-1$ we have $$\\frac{33 t^5+41 t^4-183 t^3+7 t^2+114 t-24}{(t-5) (t-4) (t-3)\n\t\t(t-2) (t+2)}\\geq 0.$$\n\tIt is straightforward to verify that if $t\\geq 6$, the inequality for the $k=t-1$ case holds, but the one for $k=t$ does not, meaning $c_{t-2}\\leq c_{t-1}> c_t$. Since the sequence of coefficients is unimodal by Theorem~\\ref{theo:unimodality}, this implies that $t-1$ is indeed the peak location. The cases $t\\leq5$ can be checked by straightforward calculation.\n\\end{proof}\n\nAn argument analogous to the one in the proof of Theorem \\ref{thm.barbell} can also be used to establish the peak location for lollipop graphs with small~$\\ell$.\n\n\\begin{cor} Let $G$ be a lollipop graph $L(t,\\ell)$, so that $|V(G)|=n=t+\\ell-1$. Let $\\ell\\in\\{2,3,4,5\\}$. Then the sequence of coefficients of the distance characteristic polynomial of $G$ is unimodal with peak at $\\lfloor\\frac{n-1}2\\rfloor$.\n\\end{cor}\n\n\n\n\\section{Concluding remarks}\n\nWe end by discussing another extremal case of block graphs, namely, block paths. According to SageMath simulations, the peak of a block path seems to be located between $\\lfloor\\frac{n}{3}\\rfloor$ and $\\lfloor\\frac{n}{2}\\rfloor$. 
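As a minimal illustration of this observed range (our own Python check with NumPy, separate from the SageMath experiments mentioned above), take the smallest block path with a non-edge block, a triangle $\{0,1,2\}$ with a pendant edge $\{2,3\}$:

```python
import numpy as np

# Block path with blocks K_3 = {0,1,2} and K_2 = {2,3} glued at the
# cut vertex 2, so n = 4.
D = np.array([
    [0, 1, 1, 2],
    [1, 0, 1, 2],
    [1, 1, 0, 1],
    [2, 2, 1, 0],
], dtype=float)
n = D.shape[0]

monic = np.poly(D)                  # det(xI - D), from x^n down to x^0
c = (-1) ** n * monic[::-1]         # c_k of det(D - xI)
seq = (-1) ** (n - 1) * c[: n - 1]  # here: 7, 18, 12

peak = int(np.argmax(seq))
assert n // 3 <= peak <= n // 2     # peak = 1 lies in the conjectured range
print(np.round(seq).astype(int), peak)
```

Here the coefficient sequence is $7, 18, 12$ with peak at $k=1$, consistent with the range $[\lfloor n/3\rfloor,\lfloor n/2\rfloor]=[1,2]$.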
A natural approach to extend the result from paths to block paths would be to generalize Collins' proof for the distance characteristic polynomial of paths~\\cite{C89} and calculate the distance characteristic polynomial coefficients of block paths using the formula for edge-weighted block graphs (see Theorem~\\ref{GHH}). In general, the coefficient formula is given by\n$$c_{n-k}=(-1)^{n-k}\\sum\\limits_{1\\leq i_1<\\dots<i_k\\leq n}\\det D[v_{i_1},\\dots,v_{i_k}],$$\nwhere $D[v_{i_1},\\dots,v_{i_k}]$ is the $k\\times k$ submatrix of $D$ whose rows and columns are indexed by the vertices $v_{i_1},\\dots,v_{i_k}$. The sum is over all possible ways to choose $k$ out of the $n$ vertices.\nThe main idea of Collins' approach is to interpret $D[v_{i_1},\\dots,v_{i_k}]$ as a distance matrix of an edge-weighted block graph, and then use Theorem~\\ref{GHH} to calculate its determinant. However, for the case when the blocks within the path have size at least $3$, the description of the blocks of $D[v_{i_1},\\dots,v_{i_k}]$ can get very irregular: depending on the particular choice of $k$ vertices, they can be anything from $2$- to $k$-cliques, with the edge weights seemingly following no pattern. Thus, for the more general setting of block graphs, it seems hopeless to use an approach analogous to the one Collins uses for trees. \n\n\nOur computational results suggest the following question:\n\n\\begin{ques}\nAre the coefficients of the distance characteristic polynomial of any block graph with $n$ vertices unimodal with peak between $\\lfloor\\frac{n}{3}\\rfloor$ and $\\lfloor\\frac{n}{2}\\rfloor$?\n\\end{ques}\n\n\n We conclude by observing that the formula for the normalized coefficients of the characteristic polynomial of a tree \\cite[page 81]{GL78} probably also holds for uniform block graphs. 
If this were the case, then we could define the normalized coefficients for a $t$-uniform block graph (the distance eigenvalues of Hamming graphs \\cite[Section 3.1]{GRWC2015v2} suggest that there exists such a common factor). \n\n\n\\subsection*{Acknowledgements}\n\nAida Abiad is partially supported by the FWO (Research Foundation Flanders), grant number 1285921N. Antonina P. Khramova is supported by the NWO (Dutch Science Foundation), grant number OCENW.KLEIN.475. Jack H.~Koolen is partially supported by the National Natural Science Foundation of China (No. 12071454), Anhui Initiative in Quantum Information Technologies (No. AHY150000) and the National Key R and D Program of China (No. 2020YFA0713100). \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Optimal Risk-Aware LQR Controllers\n}\\label{Section_Analysis}\n\nThe analysis of the risk-aware dynamic program~\\eqref{FOR_EQN_LQR_constrained} consists of the following steps. First, we ensure the well-definedness of~\\eqref{FOR_EQN_LQR_constrained}, also showing that~\\eqref{FOR_EQN_LQR_constrained} can be equivalently reexpressed as a \\textit{sequential variational Quadratically Constrained Quadratic Program (QCQP)}, or, more precisely, as a \\textit{Quadratically Constrained LQR (QC-LQR)} problem (Proposition~\\ref{ANA_PROP_Reformulation}). Then, we exploit Lagrangian duality (Theorem~\\ref{KKT}) to solve~\\eqref{FOR_EQN_LQR_constrained} \\textit{exactly and in closed form}. More specifically, we first derive an explicit expression for the optimal risk-aware controller~(Theorem~\\ref{ANA_THM_Optimal_Input}), given an arbitrary but fixed Lagrange multiplier. 
Then, we show how an optimal Lagrange multiplier may be efficiently discovered via trivial bisection~(Theorem~\\ref{OPTIMAL} and Proposition \\ref{ANA_PROP_Risk_Evaluation}).\n\\begin{proposition}[\\textbf{QCQP Reformulation}]\\label{ANA_PROP_Reformulation}\n\tLet Assumption~\\ref{FOR_ASS_noise} be in effect, and define the higher-order weighted statistics\n\t\\begin{align}\n\tM_3&\\triangleq \\ensuremath{\\mathbb{E}}\\set{(w_{i}-\\bar{w})(w_{i}-\\bar{w})'Q(w_{i}-\\bar{w})}\\in\\mathbb{R}^n\n\t\\,\\,\\, \\textrm{and}\\nonumber\\\\\n\tm_4&\\triangleq \\ensuremath{\\mathbb{E}}\\Neg{-.5}\\big\\{\\Neg{1.5}\\left[(w_{i}-\\bar{w})'Q(w_{i}-\\bar{w})-\\Tr (WQ)\\right]^2\\Neg{1.5}\\big\\}\\label{ANA_EQN_fourth_moment}\\ge0.\\nonumber\n\t\\end{align}\nThen, the risk-constrained LQR problem~\\eqref{FOR_EQN_LQR_constrained} is well-defined and equivalent to the sequential variational QCQP\n\\begin{equation}\\label{ANA_EQN_Reformulation}\n\\begin{aligned}\n \\min_{u} &\\hspace{-6pt}& J(u) \\triangleq\\,\\,& \\ensuremath{\\mathbb{E}}\\set{ x'_NQ x_N+\\sum_{t=0}^{N-1} x'_tQx_t+u'_tRu_t}\\\\\n\\mathrm{s.t.} &\\hspace{-6pt}& J_R(u) \\triangleq\\,\\,& \\ensuremath{\\mathbb{E}}\\set{\\sum_{t=1}^{N}4x'_tQWQx_t+4x_t'QM_3}\\le \\bar{\\epsilon}\\\\\n&\\hspace{-6pt}& & x_{t+1}=Ax_t+Bu_t+w_{t+1}\\\\\n&\\hspace{-6pt}& & u_t\\in \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_t),\\quad t=0,\\dots N-1,\n\\end{aligned}\\,\\,,\n\\end{equation}\nwhere $\\bar{\\epsilon}\\triangleq\\epsilon-Nm_4+4N\\Tr\\set{(WQ)^2}$.\n\\end{proposition}\nThe proof can be found in the Appendix. Proposition~\\ref{ANA_PROP_Reformulation}\nis critical because it shows that our risk constraint is quadratic with respect to the state and control inputs. 
This enables us to apply duality theory, as discussed next.\n\\subsection{Lagrangian Duality}\nTo tackle problem~\\eqref{FOR_EQN_LQR_constrained}, we now consider the \\textit{variational Lagrangian} $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}} : \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_0) \\times \\dots \\times \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_{N-1}) \\times \\mathbb{R}_+ \\rightarrow \\mathbb{R}$ of the sequential QCQP~\\eqref{ANA_EQN_Reformulation}, defined as\n\\begin{align}\\label{ANA_EQN_Lagrangian_Original}\n&\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}(u,\\mu)\\triangleq J(u)+\\mu J_R(u)-\\mu \\bar{\\epsilon},\n\\end{align}\nwhere $\\mu\\in\\mathbb{R}_+$ is a multiplier associated with the variational risk constraint of \\eqref{ANA_EQN_Reformulation}. Hereafter, problem \\eqref{ANA_EQN_Reformulation} will be called the \\textit{primal problem}.\nAccordingly, the \\textit{dual function} $D\\hspace{-0.5pt}\\hspace{-0.5pt}:\\hspace{-0.5pt}\\hspace{-0.5pt}\\mathbb{R}_{+}\\hspace{-0.5pt}\\hspace{-0.5pt}\\rightarrow\\hspace{-0.5pt}\\hspace{-0.5pt}\\hspace{-0.5pt}[-\\infty,\\infty)$\nis additionally defined as\n\\begin{equation}\\label{FDUAL}\nD(\\mu)\\triangleq\\inf_{u\\in{\\cal U}_0}\\mathpzc{L}(u,\\mu),\n\\end{equation}\nwhere the \\textit{implicit feasible set} $\\mathcal{U}_0$ obeys ($k\\le N-1$)\n\\begin{equation}\n\\mathcal{U}_k \\Neg{1} \\triangleq \\Neg{1.5} \\set{\\Neg{.5} u_{k:N-1} \\Neg{1}\\in\\Neg{1} \\prod_{t=k}^{N-1} \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_t) \\Neg{-1}\\Bigg|\\Neg{-1} x_{t+1}\\Neg{1}=\\Neg{1}Ax_t\\Neg{1}+\\Neg{1}Bu_t\\Neg{1}+\\Neg{1}w_{t+1} \\Neg{.5}}\\Neg{1}\\Neg{.5},\\Neg{1}\\nonumber\n\\end{equation}\nand contains the constraints of \\eqref{ANA_EQN_Reformulation} that have not been dualized in the construction of the Lagrangian in \\eqref{ANA_EQN_Lagrangian_Original}.\nNote that it is always the case that $D\\le J^{*}$ on $\\mathbb{R}_{+}$, where 
$J^{*}\\hspace{-1pt}\\in\\hspace{-1pt}[0,\\infty]$\ndenotes the optimal value of the primal problem \\eqref{ANA_EQN_Reformulation}.\nThen, the optimal value of the always concave \\textit{dual problem}\n\\begin{equation}\\label{DUAL}\n\\sup_{\\mu\\ge0} D(\\mu) \\equiv \\sup_{\\mu\\ge0}\\inf_{u\\in{\\cal U}_0}\\mathpzc{L}(u,\\mu),\n\\end{equation}\n${D}^{*}\\hspace{-2pt}\\triangleq\\hspace{-.5pt}\\sup_{\\mu\\ge0}{D}(\\mu)\\hspace{-1pt}\\in\\hspace{-1pt}[-\\infty,\\hspace{-0.5pt}\\infty]$,\nis the tightest under-estimate of ${J}^{*}$, when knowing\nonly ${D}$.\n\nLeveraging Lagrangian duality, we may now state the following result, which provides sufficient optimality conditions for the QCQP \\eqref{ANA_EQN_Reformulation}. The proof is omitted, as it follows as a direct application of \\cite[Theorem 4.10]{ruszczynski2011nonlinear}.\n\n\\begin{theorem}[\\textbf{Optimality Conditions}]\\label{KKT}\nLet Assumption \\ref{FOR_ASS_noise} be in effect. Suppose that there exists a feasible policy-multiplier pair $(u^*,\\mu^*) \\in \\mathcal{U}_0 \\times \\mathbb{R}_+$ such that\n\\begin{enumerate}\n \\item $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}(u^*(\\mu^*),\\mu^*)=\\min_{u\\in\\mathcal{U}_0}\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}(u,\\mu^*)=D(\\mu^*)$;\n \n \\item $J_R(u^*)\\le\\bar{\\epsilon}$, i.e., the dualized variational risk constraint of \\eqref{ANA_EQN_Reformulation} is satisfied by control policy $u^*$;\n \n \\item $\\mu^*\\Neg{.5}(J_R(u^*)-\\bar{\\epsilon})\\Neg{2.1}=\\Neg{1.8}0$, i.e., complementary slackness holds.\n\\end{enumerate}\nThen, $u^*$ is optimal for both the primal problem \\eqref{ANA_EQN_Reformulation} and the initial problem \\eqref{FOR_EQN_LQR_constrained}, $\\mu^*$ is optimal for the dual problem \\eqref{DUAL}, and \\eqref{ANA_EQN_Reformulation} exhibits zero duality gap, that is, $D^*\\equiv J^*<\\infty$.\n\\end{theorem}\n\n\n\nTheorem \\ref{KKT} will serve as the backbone of our analysis towards the solution to problem 
\\eqref{ANA_EQN_Reformulation}, presented as follows. \n\n\n\n\\subsection{Optimal Risk-Aware Control Policies}\nLet $\\mu\\ge0$ be arbitrary but fixed. First, we may simplify the form of the Lagrangian $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}$ and express it within a canonical dynamic programming framework. In this respect, we have the following, almost obvious, but important result.\n\\begin{lemma}[\\textbf{Lagrangian Reformulation}]\\label{ANA_THM_Lagrangian}\nLet Assumption \\ref{FOR_ASS_noise} be in effect and define the inflated state penalty matrix\n\\begin{equation}\n Q_{\\mu} \\triangleq Q+4\\mu QWQ.\\nonumber\n\\end{equation}\nThen, for every $u_t\\in\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t)$, $t\\le N-1$, the Lagrangian function $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}$ can be expressed as\n\t\\begin{equation}\\label{ANA_EQN_Lagrangian}\n\t\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}(u,\\mu)\\hspace{-1pt}=\\hspace{-1pt} \\ensuremath{\\mathbb{E}}\\hspace{-1pt}\\set{g_{N}(x_N,\\mu)\\hspace{-1pt}+\\hspace{-1pt}\\sum_{t=0}^{N-1}g_{t}(x_t,u_t,\\mu)\\hspace{-1pt}}\\hspace{-.5pt}+g(\\mu),\\hspace{-1pt}\n\t\\end{equation}\n\twhere, in the notation of Proposition~\\ref{ANA_PROP_Reformulation},\n\t\\begin{align*}\n\t\tg_N(x_N,\\mu)&\\triangleq x_N'Q_{\\mu}x_N+4\\mu M_3'Qx_N,\\\\\n\tg_{t}(x_t,u_t,\\mu)&\\triangleq x_t'Q_{\\mu}x_t+4\\mu M_3'Qx_t+u_t'Ru_t, \\, t\\le N-1,\\Neg{-1.5} \\\\\n\t\\textrm{and}\\quad g(\\mu)&\\triangleq -\\mu \\bar{\\epsilon}-4\\mu x_0'QWQ x_0-4\\mu M_3'Qx_0.\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}\nIt follows from Proposition~\\ref{ANA_PROP_Reformulation} and the form of $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}$.\n\\end{proof}\n\\begin{remark}[\\textit{Relation to generalized LQR with tracking}]\\label{ANA_REM_LQR}\nThe Lagrangian~\\eqref{ANA_EQN_Lagrangian} has the structure of a generalized LQR problem with a tracking objective. 
By completing the squares we can rewrite the stage cost $g_t(x_t,u_t,\\mu)$ as\n\\begin{align*}\n g_{t}(x_t,u_t,\\mu)=(x_t+2\\mu M_3)'Q(x_t+2\\mu M_3)\\Neg{-12}\\\\\n+x'_t(4\\mu QWQ)x_t+u_t'Ru_t-4\\mu^2 M_3'QM_3,\n\\end{align*}\ni.e., the state penalty is quadratic and consists of two distinct terms. The first one, i.e., $(x_t+2\\mu M_3)'Q(x_t+2\\mu M_3)$ is a tracking error term that forces the state to be close to the static target $-2\\mu M_3$. Informally, in the case of skewed noise, by tracking $-2\\mu M_3$ we pre-compensate for directions in which the distribution of the noise has heavy tails. This decreases the statistical variability of the predicted stage cost. The second term, $x_t'(4\\mu QWQ)x_t$, is a standard quadratic penalty term; notice that, contrary to the risk-neutral case, the covariance of the noise $W$ now affects the penalty term. Informally, this term penalizes state directions which not only lead to high cost but are also more sensitive to noise, as captured by the product $QWQ$. \nHence, the risk-neutral LQR framework can exhibit inherent risk-averse properties, provided that its parameters are selected in a principled way. Of course, selecting those parameters \\emph{a priori} is not trivial.\n\\end{remark}\n\nThe structure of the Lagrangian as suggested by Lemma~\\ref{ANA_THM_Lagrangian} enables us to derive both a closed-form expression for its minimum \nand an explicit optimal control policy. To this end, define the \\textit{optimal cost-to-go} at stage $k\\le N-1$ as\n\\begin{equation}\n\\begin{aligned}\n& \\Neg{6}\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_k(x_k,\\mu) \\\\\n& \\Neg{1.5}\\triangleq\\Neg{1.5}\\inf_{u_{k:N-1} \\in \\Neg{-1} \\mathcal{U}_k} \\ensuremath{\\mathbb{E}}\\Neg{1} \\set{g_N(x_N,\\mu)\\Neg{1}+\\Neg{1}\\sum_{t=k}^{N-1}g_t(x_t,u_t,\\mu)\\Bigg|\\ensuremath{\\mathscr{F}}_k},\\nonumber\n\\end{aligned}\n\\end{equation}\nwhere we omit the constant components of the Lagrangian. 
Under this definition, it is true that\n\\[\nD(\\mu)\\equiv\\inf_{u\\in\\mathcal{U}_0} \\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}(u,\\mu)=\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_0(x_0,\\mu)+g(\\mu).\n\\]\nWe may now derive the complete solution to \\eqref{FDUAL}, which is one of the main results of this paper, and provides optimal risk-aware control policies for every fixed multiplier $\\mu\\ge0$.\n\\begin{theorem}[\\textbf{Optimal Risk-Aware Controls}]\\label{ANA_THM_Optimal_Input}\n Let Assumption~\\ref{FOR_ASS_noise} be in effect, choose $\\mu\\ge0$, and adopt the notation of Lemma \\ref{ANA_THM_Lagrangian}. For $t\\le N-1$, the optimal cost-to-go $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_t(x_t,\\mu)$ may be expressed as\n\\begin{align}\\label{FOR_EQN_optimal_cost_to_go}\n\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_t(x_t,\\mu)= x'_{t}V_{t}x_{t} +4\\mu M'_3S'_{t}x_{t}+2\\bar{w}'T'_t x_t+c_{t},\n\\end{align}\nwhere the quantities $V_t$, $S_t$, $T_t$ and $c_t$ are evaluated through the backward recursions\n\t\\begin{align}\n\\label{ANA_EQN_DEF_Riccati_Matrix}\nV_{t-1}&\\Neg{1}=\\Neg{1}A'V_{t}A\\hspace{-.5bp}+\\hspace{-.5bp}Q_{\\mu}\\hspace{-1.5bp}-\\hspace{-1.5bp}A'V_{t}B(B'V_{t}B\\hspace{-.5bp}+\\hspace{-.5bp}R)^{-1}B'V_{t}A,\\\\\nK_{t-1}&\\Neg{1}=\\Neg{1}-(B'V_{t}B+R)^{-1}B'V_{t}A,\\\\\nS_{t-1}\t&\\Neg{1}=\\Neg{1}(A+BK_{t-1})'S_t+Q,\\\\\nT_{t-1}\t&\\Neg{1}=\\Neg{1}(A+BK_{t-1})'(T_t+V_t),\\\\\nl_{t-1}&\\Neg{1}=\\Neg{1}-2\\mu(B'V_{t}B+R)^{-1}B' S_t M_3,\\\\\nh_{t-1}&\\Neg{1}=\\Neg{1}-(B'V_{t}B+R)^{-1}B'(V_t+T_t)\\bar{w}\\quad\\textrm{and}\\\\\nc_{t-1}&\\Neg{1}=\\Neg{0.5}c_t+\\Tr (WV_{t})+\\bar{w}'(2T_{t}'+V_t)\\bar{w}+4\\mu M_3'S_t'\\bar{w}\\nonumber \\\\\n&\\Neg{-6}-(l_{t-1}+h_{t-1})'(B'V_tB+R)(l_{t-1}+h_{t-1}),\n\\end{align}\nwith terminal values $V_N=Q_{\\mu}$, $S_N=Q$, $T_N=0$ and $c_N=0$.\nAdditionally, an optimal control policy \nthat achieves the dual value in \\eqref{FDUAL} may be expressed 
as\n\t\t\\begin{align}\\label{ANA_EQN_Optimal_Input}\nu^*_{t}(\\mu)=K_{t}x_{t}+l_{t}+h_t \\in \\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t),\\,\\,\\,\\forall t\\le N-1,\n\t\\end{align}\nand is unique up to sets of probability measure zero. \n\\end{theorem}\n\\begin{proof}\nBy using dynamic programming and assuming (temporarily) that the involved measurability issues are resolved (\\cite{bertsekas2012approximate}, Appendix A), we have, for every $k\\le N-1$, the recursive optimality condition (i.e., the Bellman equation)\n\\begin{align}\n\\Neg{8.5}\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_k\\Neg{0.5}(x_k,\\Neg{0.5}\\mu)&\\Neg{1.5}=\\Neg{1.5}\\inf_{u_k} \\Neg{0.5}\\ensuremath{\\mathbb{E}}\\Neg{1.8}\\set{g_{k}\\Neg{0.5}(x_k,\\Neg{0.5}u_k,\\Neg{0.5}\\mu)\\Neg{2}+\\Neg{2}\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_{k+1}\\Neg{0.8}(x_{k+1},\\Neg{0.5}\\mu)\\big|\\ensuremath{\\mathscr{F}}_{k}\\Neg{0.6}}\n\\Neg{1.5},\\Neg{6}\\nonumber\n\\end{align}\nwhere minimization is taken \\textit{pointwise} (i.e., over constants) over the control space $\\mathbb{R}^p$, and with $\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_N(x_N,\\mu)=g_N(x_N,\\mu)$.\nWe will prove the result by induction. The base case $k=N$ is immediate. 
Assume that~\\eqref{FOR_EQN_optimal_cost_to_go} is true for $k=t+1$;\nwe will show that this implies the same for $k=t$.\nBy Lemma~\\ref{ANA_THM_Lagrangian}, and after standard algebraic manipulations, we obtain\n\\begin{align}\n&\\ensuremath{\\mathbb{E}}\\Neg{1.5}\\set{g_{t}(x_t,u_t,\\mu)\\Neg{1.5}+\\Neg{1.5}\\ensuremath{{\\mathpzc{L}_{\\hspace{.1pt}}}}^*_{t+1}(x_{t+1},\\mu)\\big|\\ensuremath{\\mathscr{F}}_{t}}\n\\Neg{2}=\\Neg{1}u_{t}'(B'V_{t+1}B\\Neg{1}+\\Neg{1}R) u_{t}\\nonumber\\\\&+2\\set{x'_t A' V'_{t+1}+\\bar{w}'(V_{t+1}+T'_{t+1})+2\\mu M'_3S'_{t+1}}Bu_{t}\\nonumber\\\\\n&+x_{t}'(A'V_{t+1}A +Q_{\\mu})x_{t}+c_{t+1}\\nonumber\\\\&+2\\set{2\\mu M'_3S'_{t+1}+\\bar{w}'(V_{t+1}+T_{t+1}')}Ax_{t}+4\\mu M'_3Qx_{t}\\nonumber\\\\\n&+\\bar{w}'(2T_{t+1}'+V_{t+1})\\bar{w}+4\\mu M_3'S_{t+1}'\\bar{w}+\\Tr (WV_{t+1}),\\label{ANA_EQN_Cost_To_Go_Form}\n\\end{align}\nwhere we have exploited the identities $x_{t+1}=Ax_t+Bu_t+w_{t+1}$, $\\ensuremath{\\mathbb{E}}(w_{t+1}|\\ensuremath{\\mathscr{F}}_{t})=\\bar{w}$ and $\\ensuremath{\\mathbb{E}} (w_{t+1}-\\bar{w})(w_{t+1}-\\bar{w})'=W$. The reader may also verify that all measurability issues are now resolved in a recursive way, retrospectively.\nThe unique stationary point of the convex quadratic \\eqref{ANA_EQN_Cost_To_Go_Form} is\n\\begin{align*}\nu^*_t=K_{t}x_t+l_t+h_t,\n\\end{align*}\nwhich may be readily verified to lie in $\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t)$, as well.\nSubstituting $u^*_t$ into~\\eqref{ANA_EQN_Cost_To_Go_Form} yields the optimal cost-to-go~\\eqref{FOR_EQN_optimal_cost_to_go}.\n\\end{proof}\nAs suggested by Remark~\\ref{ANA_REM_LQR}, it turns out that the optimal controller~\\eqref{ANA_EQN_Optimal_Input} is affine with respect to the state. The noise-dependent term $l_t$ forces the state to track the reference $-2\\mu M_3$, which points away from \nheavy-tailed regions of the noise distribution.\nMeanwhile, the state-feedback term accounts for the internal dynamics. 
Similar to the risk-neutral case, the controller's behavior is governed by the Riccati difference equation~\\eqref{ANA_EQN_DEF_Riccati_Matrix}. However, we now have an inflated stage cost matrix $\\bar{Q}=Q+4\\mu QWQ$, instead of the original. As suggested by the product $QWQ$, the risk-aware control gain becomes more strict in directions that are simultaneously more costly and prone to noise, as captured by the covariance $W$. Finally, the term $h_t$ acts against the mean value of the noise--such a term also appears in risk-neutral LQR.\n\nAs a corollary, from standard LQR theory, we immediately obtain that for any $\\mu\\ge 0$, the optimal controller~\\eqref{ANA_EQN_Optimal_Input} will be internally stable, i.e., the spectral radius will satisfy $\\rho(A+BK_t)<1$, as the horizon $N$ grows to infinity.\n\\begin{corollary}[\\textbf{Internal Stability}]\\label{STABILITY}\nLet Assumptions \\ref{FOR_ASS_noise} and \\ref{FOR_ASS_LQR} be in effect, and adopt the notation of Lemma \\ref{ANA_THM_Lagrangian}.\nFor fixed $\\mu\\ge0$, consider the control policy $u^*(\\mu)$, as defined in~\\eqref{ANA_EQN_Optimal_Input}. 
As $N\\rightarrow \\infty$, $V_t$ converges exponentially fast to the unique stabilizing solution\\footnote{A stabilizing solution renders $A+BK$ stable.} of the algebraic Riccati equation\n\\[\nV=A'VA+Q_{\\mu}- A'VB(B'VB+R)^{-1}B'VA.\n\\] \nAs a result, for every $t\\ge 0$, it is true that, as $N\\rightarrow \\infty$, \n\\begin{align*}\n K_t&\\rightarrow K\\triangleq -(B'VB+R)^{-1}B'VA,\\\\\n S_{t}&\\rightarrow S \\triangleq (I-(A+BK)')^{-1}Q,\\\\\n T_t&\\rightarrow T \\triangleq (I-(A+BK)')^{-1}(A+BK)'V,\\\\\n l_t&\\rightarrow l \\triangleq -2\\mu(B'VB+R)^{-1}B'SM_3\\quad \\textrm{and}\\\\\n h_t&\\rightarrow h \\triangleq -(B'VB+R)^{-1}B'(V+T)\\bar{w},\n\\end{align*}\nexponentially fast, and the closed-loop matrix $A+BK$ is stable (spectral radius $\\rho(A+BK)<1$).\n\n\n\n\\end{corollary}\n\\begin{proof}\nSince $Q_{\\mu}\\succeq Q$ and $(A,Q^{1\/2})$ is detectable, the pair $(A,Q_{\\mu}^{1\/2})$ is also detectable.\nSince $(A,B)$ is stabilizable, $(A,Q_{\\mu}^{1\/2})$ is detectable, and $R\\succ 0$, the exponential convergence of $V_t$ and $K_t$ to $V$ and $K$ respectively, and the stability of $A+BK$ follow from standard LQR theory~\\cite[Chapter 4]{anderson2005optimal}. The proof of the convergence of the remaining terms follows similar steps.\n\\end{proof}\nUp to now we have discussed the properties of the optimal controller given a fixed $\\mu\\ge0$. In what follows, we show how to compute an optimal multiplier $\\mu^*$, which will also satisfy the sufficient conditions for optimality suggested by Theorem \\ref{KKT}, as stated previously.\n\n\n\\subsection{Recovery of Primal-Optimal Solutions}\nFrom Theorem~\\ref{ANA_THM_Optimal_Input}, we know how to compute the relaxed optimal input $u^*(\\mu)$, for any given multiplier $\\mu\\ge0$.\nBut the risk constraint of the primal problem \\eqref{ANA_EQN_Reformulation} is the only one that has been dualized in the construction of the Lagrangian in~\\eqref{ANA_EQN_Lagrangian_Original}. 
Then, it turns out that we can also compute an optimal multiplier $\\mu^*$ via bisection, thus providing a complete solution to the primal problem.\nWe exploit the fact that, under the relaxed optimal policy $u^*(\\cdot)$, both the LQR cost $J(u^*(\\cdot))$ and the risk functional $J_R(u^*(\\cdot))$ are monotone functions.\n\\begin{theorem}[\\textbf{Primal-Optimal Solution}]\\label{OPTIMAL} Let Assumption \\ref{FOR_ASS_noise} be in effect, and consider\nthe control policy $u^*(\\mu), \\mu\\ge0$, as defined in~\\eqref{ANA_EQN_Optimal_Input}. Then, the following statements are true:\n\\begin{enumerate}\n \\item The LQR cost $J(u^*(\\mu))$ is increasing with $\\mu\\ge0$, while the risk constraint functional $J_R(u^*(\\mu))$ is decreasing.\n \\item\n Define the multiplier \n \\begin{align}\\label{ANA_EQN_Optimal_Multiplier}\n \\mu^* \\triangleq \\inf\\set{\\mu \\ge 0:\\:J_R(u^*(\\mu))\\le \\bar{\\epsilon}}.\n\\end{align}\nIf $\\mu^*$ is finite, then the policy $u^*(\\mu^*)$ is optimal for the primal problem~\\eqref{ANA_EQN_Reformulation}, and this is the case as long as \n\\eqref{ANA_EQN_Reformulation} \n satisfies\nSlater's condition.\n\\end{enumerate} \n\\end{theorem}\n\\begin{proof}\nTo prove part 1), let $\\mu_2> \\mu_1\\ge 0$. From the definition of the Lagrangian and optimality of the controller $u^*(\\mu)$, we obtain the inequalities\n\\begin{align*}\n J(u^*(\\mu_1))+\\mu_1 J_R(u^*(\\mu_1))&\\le J(u^*(\\mu_2))+\\mu_1 J_R(u^*(\\mu_2))\\\\\n J(u^*(\\mu_1))+\\mu_2 J_R(u^*(\\mu_1))&\\ge J(u^*(\\mu_2))+\\mu_2 J_R(u^*(\\mu_2)) .\n\\end{align*}\nBy subtracting, we get\n\\[\n(\\mu_2-\\mu_1)\\set{J_R(u^*(\\mu_1))-J_R(u^*(\\mu_2))}\\ge 0,\n\\]\nwhich shows that $J_R(u^*(\\mu_1))\\ge J_R(u^*(\\mu_2))$. The proof of\n$J(u^*(\\mu_1))\\le J(u^*(\\mu_2))$ is similar.\n\nTo prove part 2), we first show that, whenever $\\mu^*<\\infty$, $\\mu^*\\Neg{0.6}(J_R(u^*\\Neg{0.6}(\\mu^*))\\Neg{1}-\\Neg{1}\\bar{\\epsilon})\\Neg{2.2}=\\Neg{2}0$, i.e., complementary slackness holds. 
\nWe have two cases: either $\\mu^*=0$, where complementary slackness is satisfied trivially; or $\\mu^*>0$ with $J_R(u^*(\\mu^*))\\le \\bar{\\epsilon}$. Therefore, it will be sufficient to show that in the latter case we can only have $J_R(u^*(\\mu^*))= \\bar{\\epsilon}$.\nSince $\\mu^*>0$, it is true that $J_R(u^*(0))>\\bar{\\epsilon}$. From Theorem~\\ref{ANA_THM_Optimal_Input} and Proposition~\\ref{ANA_PROP_Risk_Evaluation}, it also follows that the function $J_R(u^*(\\mu))$ is continuous with respect to $\\mu$ (all matrix inverses in~\\eqref{ANA_EQN_DEF_Riccati_Matrix} are continuous since $R\\succ 0$). Now, assume that $J_R(u^*(\\mu^*))< \\bar{\\epsilon}$. Then by continuity, there exists a $0<\\bar{\\mu}<\\mu^*$ such that $J_R(u^*(\\bar{\\mu}))=\\bar{\\epsilon}$, contradicting the definition of $\\mu^*$. Hence, we can only have $J_R(u^*(\\mu^*))= \\bar{\\epsilon}$, which shows that complementary slackness is satisfied. \n\nNow, complementary slackness, along with the trivial fact that $J_R(u^*(\\mu^*))\\le \\bar{\\epsilon}$, implies that the policy-multiplier pair $(u^*(\\mu^*),\\mu^*) \\in \\mathcal{U}_0 \\times \\mathbb{R}_+$ satisfies the sufficient conditions for optimality provided by Theorem \\ref{KKT}. Hence, $u^*(\\mu^*)$ is optimal for the primal problem~\\eqref{ANA_EQN_Reformulation}. \n\nTo prove the last claim of part 2), suppose that \\eqref{ANA_EQN_Reformulation} \n satisfies\nSlater's condition, i.e., that there is an admissible policy $u^\\dagger$ in $\\mathcal{U}_0$ such that $J_R(u^\\dagger)-\\bar{\\epsilon}<0$. For every $\\mu\\ge0$, we have\n\\begin{equation}\n\\begin{aligned}\n D(\\mu) &\\le J(u^\\dagger) + \\mu (J_R(u^\\dagger)-\\bar{\\epsilon}) \\nonumber\\\\\n\\implies D(\\mu) - \\mu (J_R(u^\\dagger)-\\bar{\\epsilon}) &\\le J(u^\\dagger)<\\infty. \\nonumber\n\\end{aligned}\n\\end{equation}\nNext, suppose that, for every $\\mu\\ge0$, $J_R(u^*(\\mu))-\\bar{\\epsilon}\\ge0$. 
Because $J(u^*(\\cdot))$ is increasing on $\\mathbb{R}_+$, it must be true that\n\\begin{equation}\n\\begin{aligned}\n J(u^\\dagger) \\Neg{2}&\\ge\\Neg{1} \\sup_{\\mu\\ge0} D(\\mu) \\Neg{1}-\\Neg{1} \\mu (J_R(u^\\dagger)\\Neg{1}-\\Neg{1}\\bar{\\epsilon}) \\nonumber\\\\\n & =\\Neg{1} \\sup_{\\mu\\ge0} J(u^*(\\mu)) \\Neg{1}+\\Neg{1} \\mu (J_R(u^*(\\mu))\\Neg{1}-\\Neg{1}\\bar{\\epsilon}) \\Neg{1}-\\Neg{1} \\mu (J_R(u^\\dagger)\\Neg{1}-\\Neg{1}\\bar{\\epsilon}) \\nonumber\\\\\n & = \\Neg{1}\\infty,\n\\end{aligned}\n\\end{equation}\nwhich contradicts the fact that $J(u^\\dagger)<\\infty$. Therefore, there must exist $\\mu^\\dagger \\ge0$ such that $J_R(u^*(\\mu^\\dagger))-\\bar{\\epsilon}<0$. But $J_R(u^*(\\cdot))$ is decreasing on $\\mathbb{R}_+$ and, consequently, it must be the case that $\\mu^*\\in[0,\\mu^\\dagger)$. The proof is now complete.\n\\end{proof}\n\n\n\nTheorem \\ref{OPTIMAL} implies that we can find an optimal multiplier satisfying the optimality conditions of Theorem \\ref{KKT} from~\\eqref{ANA_EQN_Optimal_Multiplier}, by performing simple bisection on $\\mu$.\nOf course, this requires evaluating the risk constraint functional $J_R(u^*(\\mu))$ for different values of $\\mu\\ge0$. 
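A minimal sketch of this bisection is given below. Here `j_risk` is a stand-in callable for $\mu \mapsto J_R(u^*(\mu))$, assumed continuous and decreasing as established in Theorem~\ref{OPTIMAL}; any concrete evaluator of the risk functional would be plugged in for it.

```python
def find_mu_star(j_risk, eps_bar, tol=1e-8):
    """Bisection for mu* = inf{mu >= 0 : j_risk(mu) <= eps_bar}.

    j_risk stands in for mu -> J_R(u*(mu)) and is assumed continuous and
    decreasing in mu; the search assumes mu* is finite (the constraint
    becomes satisfiable for large enough mu).
    """
    if j_risk(0.0) <= eps_bar:   # constraint already met: mu* = 0
        return 0.0
    hi = 1.0
    while j_risk(hi) > eps_bar:  # grow the bracket until feasible
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if j_risk(mid) > eps_bar:
            lo = mid
        else:
            hi = mid
    return hi
```

Returning the upper endpoint `hi` keeps the computed multiplier on the feasible side of the constraint, up to the chosen tolerance.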
The evaluation may be performed in a recursive fashion, as the following result suggests.\n\\begin{proposition}[\\textbf{Risk Functional Evaluation}]\\label{ANA_PROP_Risk_Evaluation} Let Assumption \\ref{FOR_ASS_noise} be in effect, and adopt the notation of Lemma \\ref{ANA_THM_Lagrangian}.\nFor fixed $\\mu\\ge0$, consider the control policy $u^*(\\mu)$, as defined in~\\eqref{ANA_EQN_Optimal_Input}.\nWith terminal values $P_N=4QWQ$, $z_N=4M_3'Q$, $r_N=0$, consider the backward recursions\n\\begin{align}\n&P_{t-1}=(A+BK_{t-1})'P_t(A+BK_{t-1})+4QWQ,\\nonumber\\\\\n&z_{t-1}=(A+BK_{t-1})'z_t+4QM_3\\nonumber\\\\\n&\\,\\,\\,+2(A+BK_{t-1})'P_t\\paren{Bh_{t-1}+Bl_{t-1}+\\bar{w}}\\,\\,\\,\\, \\textrm{and}\\nonumber\\\\\n&r_{t-1}=r_t+\\Tr(P_tW)+z_t'(\\bar{w}+Bl_{t-1}+Bh_{t-1})\\nonumber\\\\\n&\\,\\,\\,+(Bl_{t-1}+Bh_{t-1}+\\bar{w})'P_{t}(Bl_{t-1}+Bh_{t-1}+\\bar{w}).\\nonumber\n\\end{align}\nThen, the risk constraint in problem \\eqref{ANA_EQN_Reformulation} may be evaluated by\n\\begin{equation}\nJ_R(u^*(\\mu))=x'_0P_0x_0+z'_0x_0+r_0.\\nonumber\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nOmitted; it is similar to that of Theorem~\\ref{ANA_THM_Optimal_Input}. \n\\end{proof}\n\n\n\\section*{Appendix: Proof of Proposition~\\ref{ANA_PROP_Reformulation}}\nLet $\\Delta_t\\triangleq x'_tQ x_t-\\ensuremath{\\mathbb{E}}\\paren{x'_tQx_t|\\ensuremath{\\mathscr{F}}_{t-1}}$ be the prediction error of the stage penalty at time $t$ given $\\ensuremath{\\mathscr{F}}_{t-1}$.\nWe proceed in two steps.\n\tFirst, we show that $\\Delta_t$ is well-defined and belongs to $\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_{t})$. Second, we obtain the closed form expression for the expected predictive variance $\\ensuremath{\\mathbb{E}} \\set{\\Delta^2_t}$. \n\t\n\t\\textit{Step a).} The state $x_t$ of the system depends linearly on past inputs $u_k$ as well as past noises $w_{k+1}$, for $k\\le t-1$. 
Under the constraint $u_k\\in\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_k)$, and since by Assumption~\\ref{FOR_ASS_noise} $w_k\\in\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_k)$, it also follows that \n\t$x_t\\in \\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t)$, for all $t\\le N-1$. As a result, the expectation of $x_t'Qx_t$ exists and any conditional expectation $\\ensuremath{\\mathbb{E}}\\paren{x'_tQ x_t|\\ensuremath{\\mathscr{F}}_{t-1}}$ is well-defined and finite almost everywhere, for all $t\\le N-1$.\n\tDefine\n\t\\begin{align}\n\t\\hat{x}_{t}&\\triangleq \\ensuremath{\\mathbb{E}}(x_t|\\ensuremath{\\mathscr{F}}_{t-1})=Ax_{t-1}+Bu_{t-1}+\\bar{w}\\quad\\textrm{and}\\\\\n\t\\delta_t&\\triangleq w_t-\\bar{w}.\n\t\\end{align}\n\tNote that $\\hat{x}_t$ is well-defined since $x_{t}\\in \\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t)$.\n\tReplacing $x_t$ with $\\hat{x}_{t}+\\delta_{t}$, we obtain the representation\n\t\\begin{align*}\n\tx'_tQ x_t&=\\hat{x}'_{t}Q\\hat{x}_{t}+2\\hat{x}'_{t}Q\\delta_t+\\delta_t'Q\\delta_t.\n\t\\end{align*}\n\tAll of the terms above are integrable since $\\hat{x}_t$, $\\delta_t$ are square-integrable.\n\tSince $\\hat{x}_{t}$ is measurable with respect to $\\ensuremath{\\mathscr{F}}_{t-1}$, the expectation of $x'_tQ x_t$ conditioned on $\\ensuremath{\\mathscr{F}}_{t-1}$ is\n\t\t\\begin{align*}\n\t\\ensuremath{\\mathbb{E}}\\paren{x'_tQ x_t|\\ensuremath{\\mathscr{F}}_{t-1}}&=\\hat{x}'_{t}Q\\hat{x}_{t}+\\Tr (WQ).\n\t\\end{align*}\nThen, the difference of the above quantities is\n\t\\begin{align*}\n\\Delta_t=\\delta_t'Q\\delta_t -\\Tr (WQ)+2\\hat{x}'_{t}Q\\delta_{t}.\n\t\\end{align*}\n\tComputing the squares of both sides leads to the expression\n\t\\begin{equation}\\label{ANA_EQN_Prediction_Difference}\n\t\\begin{aligned}\n\\Delta^2_t&=(\\delta_t'Q\\delta_t -\\Tr (WQ))^2+4\\hat{x}'_{t}Q\\delta_{t}\\delta'_{t}Q\\hat{x}_t\\\\&\\quad\\,+4\\hat{x}'_{t}Q\\delta_{t}(\\delta_t'Q\\delta_t -\\Tr 
(WQ)).\n\t\\end{aligned}\n\t\\end{equation}\n\tNote that all of the above terms are integrable, hence $\\ensuremath{\\mathbb{E}}\\set{\\Delta^2_t}$ is well-defined and finite. Integrability of the first term follows from Assumption~\\ref{FOR_ASS_noise}. Integrability of the second term comes from the fact that $\\hat{x}_t$ and $\\delta_t$ are square-integrable and independent of each other. Similarly, integrability of the last term follows from integrability of $\\hat{x}_t$, Assumption~\\ref{FOR_ASS_noise} \n\tand independence of $\\hat{x}_t,\\delta_t$. \n\t\n\t\\textit{Step b).}\nFrom~\\eqref{ANA_EQN_Prediction_Difference}, it is true that\n\t\\begin{align*}\n\t&\\ensuremath{\\mathbb{E}}\\set{\\Delta_t^2|\\ensuremath{\\mathscr{F}}_{t-1}}=\n\t4\\hat{x}'_{t}QWQ\\hat{x}_{t}+m_4+4\\hat{x}'_{t}QM_3.\n\t\\end{align*}\n Taking expectation again gives\n \\begin{align*}\n\t\\ensuremath{\\mathbb{E}}\\set{\\Delta_t^2}=m_4+\\ensuremath{\\mathbb{E}}(4\\hat{x}'_{t}QWQ\\hat{x}_{t}+4\\hat{x}'_{t}QM_3).\n \\end{align*}\nBy orthogonality of $\\hat{x}_t$, $\\delta_t$, and since $\\ensuremath{\\mathbb{E}} \\delta_t=0$, $\\ensuremath{\\mathbb{E}} \\delta_t\\delta'_t=W$, \n \\begin{align*}\n\t\\ensuremath{\\mathbb{E}}\\hspace{-1pt}\\set{\\Delta_t^2}&\\hspace{-2pt}=\\hspace{-1pt}\\ensuremath{\\mathbb{E}}(4x_{t}'QWQx_{t}\\hspace{-1pt}+\\hspace{-1pt}4x_{t}'QM_3)\\hspace{-1pt}+\\hspace{-1pt}m_4\\hspace{-1pt}-\\hspace{-1pt}4\\Tr\\Neg{1}\\set{(WQ)^2}\\hspace{-1pt}.\n \\end{align*}\n The result follows if we replace $\\ensuremath{\\mathbb{E}}\\set{\\Delta^2_t}$ with the right-hand side above in the risk constraint\n $\n \\sum_{t=1}^{N}\\ensuremath{\\mathbb{E}}\\set{\\Delta^2_t}\\le \\epsilon.\n$\n\\hfill $\\qedsymbol$\n\\section{Conclusion and Future Work}\\label{Section_Extensions}\nWe presented a new risk-aware reformulation of the classical LQR problem, in which we introduced a new risk measure to be used as an explicit and tunable risk constraint, along with the standard LQR objective. 
By restricting the expected cumulative predictive variance of the state penalties, we can decrease the variability of the state at will, protecting the system against uncommon but strong random disturbances. The optimal controller enjoys a simple closed-form expression with clear interpretation, is always stable and is easy to tune.\nOur scheme works for arbitrary noise process distributions, as long as the corresponding fourth-order moments are finite.\n\n\nMoving forward, our framework opens up many directions for extensions and future research. First, we would like to note that our analysis does not depend on the constraints\nhaving the same matrix $Q$ as in the cost. In fact, we can define our risk constraint as\n\\[\n\\ensuremath{\\mathbb{E}} \\set{\\sum_{t=1}^{N} \\left[ x'_tQ_c x_t-\\ensuremath{\\mathbb{E}}\\paren{x'_tQ_cx_t|\\ensuremath{\\mathscr{F}}_{t-1}} \\right]^2}\\le \\epsilon,\n\\]\nwhere $Q_c$ is a design choice. Proposition~\\ref{ANA_PROP_Reformulation} still holds, in the sense that the constraint can be rewritten as a quadratic one.\nThis implies that, thanks to its simplicity, the predictive variance constraint can be easily incorporated in more general problems, e.g., MPC or classical constrained-LQR, adding a risk-aware flavor to them.\nAnother possibility is to employ stage-wise constraints of the form\n\\[\n\\ensuremath{\\mathbb{E}}\\left[ x'_tQ_t x_t-\\ensuremath{\\mathbb{E}}\\paren{x'_tQ_tx_t|\\ensuremath{\\mathscr{F}}_{t-1}} \\right]^2\\le \\epsilon_t,\\quad t\\le N.\n\\]\nIn this case, the optimal controller will be similar to~\\eqref{ANA_EQN_Optimal_Input}, but will depend on multiple Lagrange multipliers that can be optimized with primal-dual algorithms.\nLastly, our predictive variance constraint is based on one-step-ahead prediction. In some cases, careless selection of $Q_c$ \nmight make our controller more myopic. 
At the same time, though, increasing the prediction horizon might not always preserve the quadratic form of the constraint. In future work, we would also like to address this issue.\n\n\n\\section{Risk-Constrained LQR Formulation} \\label{Section_Formulation}\n\nConsider a discrete-time linear system in state-space form evolving according to the stochastic difference equation\n\\begin{equation}\\label{FOR_EQN_system}\nx_{t+1}=Ax_{t}+Bu_{t}+w_{t+1},\n\\end{equation}\nwhere $x_t\\in \\ensuremath{\\mathbb{R}}^n$ is the state, $u_t\\in \\ensuremath{\\mathbb{R}}^p$ is an exogenous control signal, $A$ is the state transition matrix, and $B$ is the input matrix. We assume that the initial value $x_0$ is deterministic and fixed. Signal $w_{t}\\in \\ensuremath{\\mathbb{R}}^n$ is a random process noise (not necessarily Gaussian) and is assumed to be \\textit{i.i.d}. \nFor $t\\ge0$, let $\\ensuremath{\\mathscr{F}}_t=\\sigma\\paren{x_{0:t},u_{0:t}}$ be the $\\sigma$-algebra generated by all observables up to time $t$, and let $\\ensuremath{\\mathscr{F}}_{-1}$ be the trivial $\\sigma$-algebra. 
Based on this notation, $x_t,u_t,w_t$ are $\\ensuremath{\\mathscr{F}}_t$-measurable, while $w_{t+1}$ is independent of $\\ensuremath{\\mathscr{F}}_t$.\nWe also make an additional assumption on the process noise, as follows.\n\\begin{assumption}[\\textbf{Noise Regularity}]\\label{FOR_ASS_noise}\nThe process $w_t$ has finite fourth-order moment, i.e., for every $t\\in \\mathbb{N}$, $\\ensuremath{\\mathbb{E}}\\norm{w_{t}}^4_2<\\infty$.\n\\end{assumption}\n\\noindent The above condition is mild and satisfied by very general noise distributions, including many heavy-tailed ones.\nDenote the mean of the noise by $\\bar{w}\\triangleq\\ensuremath{\\mathbb{E}} w_k$ and its covariance by $W\\triangleq \\ensuremath{\\mathbb{E}}(w_k-\\bar{w})(w_k-\\bar{w})'$.\n\n In the classical risk-neutral formulation of the LQR problem, one is interested in the multistage stochastic program\n \\begin{equation}\\label{FOR_EQN_Risk_Neutral_LQR}\n\\begin{aligned}\n\\min_{u}&\\quad \\ensuremath{\\mathbb{E}}\\set{ x'_NQ x_N+\\sum_{t=0}^{N-1} x'_tQx_t+u'_tRu_t}\\\\\n\\mathrm{s.t.}&\\quad x_{t+1}=Ax_t+Bu_t+w_{t+1}\\\\\n&\\quad u_t\\in \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_t),\\,t=0,\\dots, N-1\n\\end{aligned},\n\\end{equation}\nwhere $u=u_{0:N-1}$ are the inputs from time $0$ up to time $N-1$, for some horizon $N\\in \\mathbb{N}$. For each $t$, the \\textit{causality constraint} on $u_t$ restricts the inputs to the space of square-integrable $\\ensuremath{\\mathscr{F}}_t$-measurable vector-valued random elements of appropriate dimension,\ndenoted as $\\ensuremath{\\mathcal{L}}_2(\\ensuremath{\\mathscr{F}}_t)$. It also guarantees that the optimization problem is well-defined, with finite cost. 
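For intuition about this objective, the expectation can be approximated by Monte-Carlo simulation of the state recursion under a causal policy. The scalar system and the linear feedback policy below are hypothetical choices used purely for illustration.

```python
import random

def mc_lqr_cost(k_gain, a=1.2, b=1.0, q=1.0, r=1.0,
                horizon=30, trials=2000, seed=0):
    """Monte-Carlo estimate of E{x_N Q x_N + sum_t (x_t Q x_t + u_t R u_t)}
    for the scalar system x_{t+1} = a x_t + b u_t + w_{t+1} under the
    causal linear policy u_t = k_gain * x_t (a hypothetical choice).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, cost = 1.0, 0.0
        for _ in range(horizon):
            u = k_gain * x              # u_t depends only on observables up to t
            cost += q * x * x + r * u * u
            x = a * x + b * u + rng.gauss(0.0, 1.0)
        cost += q * x * x               # terminal penalty
        total += cost
    return total / trials
```

With the unstable drift $a=1.2$ assumed here, a stabilizing gain yields a far smaller estimated cost than applying no control at all.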
In order for the optimal LQR controller to be well-behaved and stable as the horizon $N$ grows, we also make the following standard assumption.\n\\begin{assumption}[\\textbf{LQR}]\\label{FOR_ASS_LQR}\nThe pair $(A,B)$ is stabilizable, the pair $(A,Q^{1\/2})$ is detectable, matrix $Q\\succeq 0$ is positive semi-definite and matrix $R\\succ 0$ is positive definite.\n\\end{assumption}\nAs mentioned above, the classical LQR problem is risk-neutral, since it optimizes performance only on average~\\cite{bertsekas2017dynamic}. \nStill, even if the average performance is good, the state can grow arbitrarily large under less probable, yet extreme events.\nIn other words, the state can exhibit large variability.\nTo deal with this issue, we propose a risk-constrained formulation of the LQR problem, posed as\n\\begin{equation}\\label{FOR_EQN_LQR_constrained}\n\\begin{aligned}\n\\min_{u} &\\quad \\ensuremath{\\mathbb{E}}\\set{ x'_NQ x_N+\\sum_{t=0}^{N-1} x'_tQx_t+u'_tRu_t}\\\\\n\\mathrm{s.t.} &\\quad \\ensuremath{\\mathbb{E}} \\set{\\sum_{t=1}^{N} \\left[ x'_tQ x_t-\\ensuremath{\\mathbb{E}}\\paren{x'_tQx_t|\\ensuremath{\\mathscr{F}}_{t-1}} \\right]^2}\\le \\epsilon\\\\\n&\\quad x_{t+1}=Ax_t+Bu_t+w_{t+1}\\\\ \n&\\quad u_t\\in \\ensuremath{\\mathcal{L}}_{2}(\\ensuremath{\\mathscr{F}}_t),\\,t=0,\\dots, N-1\n\\end{aligned}\\,.\n\\end{equation}\n\\noindent Here, the risk measure adopted is the cumulative expected predictive variance of the state cost. The predictive variance incorporates information about the tail \\textit{and} skewness of the penalty $x_t'Qx_t$. This forces the controller to take higher-order noise statistics into account, mitigating the effect of rare though large noise values.\nHence, our risk-aware LQR formulation not only forces the state $x_t$ to be close to zero, but also explicitly restricts its variability.\n\nProblem \\eqref{FOR_EQN_LQR_constrained} offers a simple and interpretable way to control the trade-off between average performance and risk. 
By simply decreasing $\\epsilon$, we increase risk-awareness. Inspired by standard risk-aware formulations, in the above optimization problem our risk definition is tied to the specific state penalty $x'_tQx_t$ of the LQR. However, all of our results are still valid if we employ the predictive variance of a different quadratic form, e.g., the norm of the state, $\\norm{x_t}^2$, in the constraint---see Section~\\ref{Section_Extensions}. \nLastly, note that the initial state is fixed (for simplicity), so there is no associated risk term for $t=0$.\n\nIn the next section, we show that the risk constraint of \\eqref{FOR_EQN_LQR_constrained} can be rewritten in quadratic form. This will allow us to solve problem \\eqref{FOR_EQN_LQR_constrained} using duality theory and obtain a closed-form solution, exploiting higher-order noise moments.\n\n\n\n\\section{Introduction}\\label{Section_Introduction}\nAchieving good performance in expectation is often insufficient in \nthe design of stochastic control systems, \nespecially when dealing with modern, critical applications. Examples\nappear naturally in many areas, including wireless industrial control \\cite{Ahlen2019}, energy \\cite{Bruno2016,Moazeni2015}, finance \\cite{Markowitz1952,Follmer2002,Shang2018}, robotics \\cite{Kim2019,Pereira2013}, networking \\cite{Ma2018}, and safety \\cite{samuelson2018safety,chapman2019cvar}, to name a few. Indeed, occurrence of less probable, non-typical or unexpected events might lead the underlying dynamical system to experience shocks with possibly catastrophic consequences, e.g., a drone diverging too much from a given trajectory in a hostile environment, or an autonomous vehicle crashing onto a wall or hitting a pedestrian. 
In such situations, design of effective \\emph{risk-aware} control policies is highly desirable, systematically compensating for those extreme events, at the cost of slightly sacrificing average performance under nominal conditions.\n\n\nTo highlight the usefulness of a risk-aware control policy, let us consider the following simple, motivating example. Let $x_{k+1}=x_{k}+u_k+w_{k+1}$ model an aerial robot, moving along a line. Assume that the process noise $w_k$ is \\textit{i.i.d.} Bernoulli, taking the values $\\beta>2$ with probability $1\/\\beta$ and $0$ with probability $1-1\/\\beta$. This noise represents shocks, e.g., wind gusts, that can occur with some small probability. We would then like to minimize the LQR cost $\\ensuremath{\\mathbb{E}} \\sum_{t=0}^{N} \\{x^2_t\\}$, i.e., the total displacement of the robot over a horizon of $N$ time steps. In this case, the LQR optimal controller is $u^{\\mathrm{LQR}}_k=-x_k-1$, where $-1\\equiv-\\mathbb{E}w_k$ cancels the mean of the process noise. We see that the LQR solution is \\textit{risk-neutral}, as it does not account for the fact that the shock $\\beta$ could be arbitrarily large. On the other hand, the risk-aware LQR formulation proposed in this work results in a \\textit{family} of optimal controllers of the form\n\\[\nu^{*}_{t}(\\mu)=-x_t-1-\\frac{\\mu}{1+2\\mu}(\\beta-2),\\quad\\mu\\ge0,\n\\]\nwhere $\\mu$ controls the trade-off between average performance and risk. As $\\mu$ increases, we move from the risk-neutral to the \\textit{maximally risk-aware} controller $u^{*}_{t}(\\infty)=-x_t-\\beta\/2$, which treats the noise as adversarial---see Fig.~\\ref{fig:toy_example}.\n\\begin{figure}[t]\n\\vspace{6.5bp}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{toy_figure.pdf}\\vspace{-4bp}\n\t\\caption{Comparison between risk-neutral and risk-aware control performance, when the system experiences rare but large shocks---here the shock occurs at time $6$. 
By sacrificing average behavior, the risk-aware controllers push the state away from the direction of the shock. \n}\n\t\\label{fig:toy_example}\n \t\\vspace{-13bp}\n\\end{figure}\n\nIn both classical and recent literature on linear-quadratic problems, risk awareness in estimation and control is typically achieved by replacing the respective random cost with its exponentiation~\\cite{jacobson1973optimal,Whittle1981,bacsar2000risk,pham2012linear,roulet2019convergence,Speyer1992,Dey1999,Moore1997,Dey1997,Bauerle2014more}. Yet, the resulting stochastic control problem might not be well-defined for general classes of noise distributions, as it requires the moment generating function of the cost to be finite. Thus, heavy-tailed or skewed distributions, which are precisely those exhibiting high risk, are naturally excluded. Also, even if the expectation of the exponential cost is finite, it does not lead to a general, closed-form and interpretable solution. \nA notable exception is that of a\nGaussian process noise, also known as the Linear Exponential Quadratic Gaussian (LEQG) problem, which does enjoy a simple closed-form solution~\\cite{Whittle1981}. \nHowever, the Gaussian assumption is unable to capture distributions with asymmetric (skewed) structure, as in the above example.\n\nIn this paper, we propose a new risk-aware reformulation of the LQR problem, in which the standard LQR objective is minimized \\textit{subject to} an explicit and tunable risk constraint. 
Our contributions are as follows.\n\n\\noindent\\textbf{--New Risk Measure.} We introduce the cumulative expected one-step predictive variance of the associated state penalty as a new risk measure for LQR control.\nIn this way, our risk-constrained formulation ensures not only a small LQR cost, but also \\textit{guaranteed} control of the statistical variability of the state penalty.\n\n\n\\noindent\n\\textbf{--Optimal Risk-Aware Controls \\& Stability.}\nWe show that our new risk-constrained formulation results in a quadratically constrained LQR problem (Proposition~\\ref{ANA_PROP_Reformulation}), which admits an explicit closed-form solution with a natural interpretation. The optimal risk-aware feedback controller is affine with respect to the system state (Theorems~\\ref{ANA_THM_Optimal_Input} \\& \\ref{OPTIMAL}). \nThe affine component of the control law \npushes the state away from directions where the noise exhibits (skewed) heavy tails. Meanwhile, the state feedback gain satisfies a new risk-aware Riccati recursion, in which the state penalty is inflated in riskier directions, where both the noise covariance and state penalty are simultaneously larger. Further, we show that our optimal risk-aware controller is always stable, under standard LQR conditions (Corollary \\ref{STABILITY}).\n\n\\noindent\\textbf{--Arbitrary Noise Model.}\nContrary to the LEQG approach, our results are valid for arbitrary noise distributions, provided the associated fourth-order moments are finite; thus, heavy-tailed or skewed noises are supported within our framework.\n\n\\noindent\\textbf{--Relation to LQR with Tracking.}\nWe show that, \\textit{by appropriate re-parameterization}, our risk-aware LQR problem is equivalent to a generalized risk-neutral LQR problem with a tracking objective. 
Essentially, this implies that risk-neutral LQR formulations can provide inherent risk-averse behavior, as long as the involved parameters are selected in a principled way, as presented herein.\n\n\\textbf{\\textit{Related Work:}}\nRisk-aware optimization has been studied in a wide variety of decision making contexts \\cite{A.2018,Cardoso2019,W.Huang2017,Jiang2017,Kalogerias2018b,Tamar2017,Vitt2018,Zhou2018,Ruszczynski2010,SOPASAKIS2019281,chapman2019cvar}. The basic idea is to replace expectations by more general functionals, called risk measures~\\cite{ShapiroLectures_2ND}, designed to effectively quantify the statistical volatility of the involved random cost function, in addition to mean performance. Typical examples are mean-variance functionals \\cite{Markowitz1952,ShapiroLectures_2ND},\nmean-semideviations \\cite{Kalogerias2018b}, and Conditional Value-at-Risk\n(CVaR) \\cite{Rockafellar1997}.\n\n \nIn the case of control systems, apart from the aforementioned exponential approach, CVaR optimization techniques have also been considered for risk-aware constraint satisfaction~\\cite{chapman2019cvar}. Although CVaR captures variability and tail events well, CVaR optimization problems rarely enjoy closed-form expressions. Approximations are usually required to make computations tractable, e.g., process noise and controls are assumed to be finite-valued~\\cite{chapman2019cvar}, thus excluding the LQR setting. Predictive variance constraints have also been used as a measure of risk in portfolio optimization~\\cite{abeille2016lqg}. Different from our paper, the noise is assumed to be Gaussian and the variance is with respect to linear stage costs.\n\nAnother related concept is that of robust control, where the system model is unknown. The objective is to optimally control the true system under worst case model uncertainty. 
When there is no model uncertainty, in the case of stochastic process noise, robust controllers usually reduce to their risk-neutral LQR counterparts~\\cite{tzortzis2016robust,dean2017sample}. On the contrary, in risk-aware control, extreme noise events are part of the system model. Even if the system is exactly modeled, we would still need to consider risk-aware control if the process noise is heavy-tailed or highly variable. From this point of view, robustness and risk are orthogonal concepts.\nInterestingly, in the case of adversarial noise, there is a connection between robust and maximally risk-aware controllers~\\cite{glover1988state}.\n\n\n\\textit{\\textbf{Notation:}}\nThe transpose operation is denoted by $(\\cdot)'$. If $x_{k},\\dots,x_{t}$ is a sequence of vectors, then $x_{k:t}$ denotes the batch vector of all $x_i$ for $k\\le i\\le t$. The $\\sigma$-algebra generated by a random vector $x$ is denoted by $\\sigma(x)$.\n\n\n\\section{Simulations And Discussion}\\label{Section_Simulations}\n\\begin{figure}[t]\n\t\\vspace{3.5bp}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{xQx_figure.pdf}\\vspace{-4bp}\n\t\\caption{Evolution of the state penalties $x'_kQx_k$, over the first $50$ steps. Notice that our risk-aware LQR controller indeed limits the variability of $x'_kQx_k$. In fact, it sacrifices performance under small wind forces, but protects the system against large wind gusts, for example at times $5$--$10$.}\n\t\\vspace{-12bp}\n\t\\label{fig:xQx}\n\\end{figure}\nConsider a flying robot that moves on a horizontal plane, i.e., the Euclidean space $\\mathbb{R}^2$. 
We assume that its linearized dynamics can be abstracted by a double integrator as\n\\begin{equation}\\label{SIM_eq:integrator}\n x_{k+1}=\\matr{{cccc}1&T_s&0&0\\\\0&1&0&0\\\\0&0&1&T_s\\\\0&0&0&1}x_k+\\matr{{cc}\\tfrac{T_s^2}{2}&0\\\\T_s&0\\\\0&\\tfrac{T_s^2}{2}\\\\0&T_s}(v_k+d_k),\n\\end{equation}\nwhere $T_s=0.5$ is the sampling time, $x_{k,1}$, $x_{k,3}$ are the position coordinates, $x_{k,2}$, $x_{k,4}$ the respective velocities and $v_k$ is the acceleration input. Let $d_k$ be a wind disturbance force that acts on the robot, which is modeled as follows: We assume that $d_{k,1}$ constitutes the dominant wind direction with non-zero mean and large variability, while the orthogonal direction $d_{k,2}$ is a weak wind direction with zero mean and small variability.\nWe model $d_{k,1}$ as a mixture of two Gaussians $\\mathcal{N}(30,30)$, $\\mathcal{N}(80,60)$ with weights $0.8$ and $0.2$, respectively. This bimodal distribution models the presence of infrequent but large wind gusts. The weak direction $d_{k,2}$ is modeled as a zero-mean Gaussian~$\\mathcal{N}(0,5)$.\nIf we cancel the mean of $d_k$ by applying $v_k=u_k-\\ensuremath{\\mathbb{E}} d_k$, then~\\eqref{SIM_eq:integrator} can be written in terms of~\\eqref{FOR_EQN_system}, where $w_k=B(d_k-\\ensuremath{\\mathbb{E}} d_k)$ is now a zero-mean disturbance. \n\n\nConsider now the LQR problem with parameters \n\\[\nQ = \\textrm{diag}(1,0.1,2,0.1)\n\\Neg{2}\\quad\\textrm{and}\\quad R=I,\n\\] \nand a horizon of length $N=5000$. We primarily compare our risk-aware LQR formulation with the classical, risk-neutral LQR via simulations. To tune our controller, we vary $\\mu$ in~\\eqref{ANA_EQN_Optimal_Input} directly instead of varying $\\epsilon$. We also (heuristically) compare our controller with the exponential (LEQG) method, even though the noise is not Gaussian, by plugging in the second-order statistics $W$. Let the tuning parameter of LEQG be $\\theta$. 
Note that the exponential problem is well defined only if $\\theta<0.001276$ (roughly), where the ``neurotic breakdown'' occurs~\\cite{Whittle1981}. For the purpose of comparison, we simulate all schemes under the same noise sequence $w_{0:N}$. \n\n\nIn Fig.~\\ref{fig:xQx}, we see the evolution of the state penalty terms $x_k'Qx_k$, for the first $50$ time steps, under the different control schemes. By slightly sacrificing performance under small wind forces, our risk-aware LQR controller forces the state to have less variability and, thus, protects the robot against large gusts. On the other hand, the state of the robot can grow very large under the risk-neutral \\textit{and} LEQG schemes. This behavior is illustrated even more clearly in Fig.~\\ref{fig:cdf}, where we present the time-empirical cumulative distribution of the state penalties for all $N$ time steps. Under our scheme, the time-empirical ``probability'' of suffering large state penalties is drastically smaller than under LQR or LEQG.\n\n\\begin{figure}[t]\n\\vspace{3.5bp}\n\t\\centering{}\n\t\\includegraphics[width=\\columnwidth]{cdf_figure.pdf}\\vspace{-4bp}\n\t\\caption{The time-empirical cdf for the state penalties $x_k'Qx_k$, $k\\le N$, for the LQR (risk-neutral), our method, and LEQG. Our method sacrifices some average performance but exhibits much smaller variability for the state penalties. It also protects the system against rare but large wind gusts.}\n\t\\vspace{-12bp}\n\t\\label{fig:cdf}\n\\end{figure}\n\\begin{figure}[t]\n\\vspace{3.5bp}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{x1_u1_figure.pdf}\\vspace{-4bp}\n\t\\caption{Evolution of the state $x_{k,1}$ and the input $u_{k,1}$ over the first $50$ steps. The controller pushes the state away from the direction of the large gusts, which helps the robot to avoid extreme perturbations. Meanwhile, by inflating the state penalty with the term $\\mu QWQ$, we force the state-feedback component to be more cautious with the state. 
Naturally, being more cautious with the state requires extra control effort.}\n\t\\vspace{-13bp}\n\t\\label{fig:x1}\n\\end{figure}\nTo better illustrate how the proposed risk-aware controller works, we also discuss the evolution of the position $x_{k,1}$ and the input $u_{k,1}$, as shown in Fig.~\\ref{fig:x1}, for the first $50$ steps. First, we observe that the controller pushes the state $x_{k,1}$ towards negative values, away from the direction of the large gusts.\nSecond, notice that we penalize $x_{k,3}$ more in $Q$. In fact, the risk-neutral LQR results in the steady-state gains $K_{\\mathrm{LQR},11}=-0.697$, $K_{\\mathrm{LQR},12}=-1.201$, $K_{\\mathrm{LQR},23}=-0.925$, $K_{\\mathrm{LQR},24}=-1.376$, i.e., it is stricter with direction $x_{k,3}$. However, $x_{k,1}$ exhibits more variability due to the strong wind direction. In contrast, our risk-aware scheme adapts to the noise in a principled way. Due to the inflation term $\\mu QWQ$, our scheme returns the steady-state gains $K_{11}=-2.1008$, $K_{12}=-2.2132$, $K_{23}=-1.1161$, $K_{24}=-1.5131$, which means that the risky direction $x_{k,1}$ is controlled more strictly. Naturally, being more cautious with the state leads to higher control effort. Lastly, although the LEQG controller is also more state-cautious, it is agnostic to the heavy tails of the wind distribution. Hence, it still suffers from large perturbations due to the wind gusts.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent work \\cite{Noi}, \\cite{Raf}, the problem of the minimum time optimal control of a two level quantum system ({\\it qubit}) was solved. In the model considered, the system was subject to a drift and a control field orthogonal to the drift and with bounded norm (cf. section \\ref{Rev} for a precise statement of the optimal control problem). 
Two level quantum systems are of paramount importance in quantum mechanics and, in particular, in quantum computation (see, e.g., \\cite{NC}). In this context, evolution in minimum time is a natural requirement when it is desired to minimize the effect of the environment or to increase the speed of implementation of a given quantum computation. In fact, the circuit model of quantum computation (cf. \\cite{NC}) requires a cascade of simple evolutions on elementary quantum systems which therefore have to occur in very short time in order to maximize the speed of the overall computation. The solution of the minimum time problem given in \\cite{Noi} \\cite{Raf} is quite simple and explicit, requiring only very elementary numerical work, something very rare for optimal control problems.\n\nWhen trying to control $N \\geq 2$ qubits {\\it simultaneously} with independent controls in minimum time, one could argue that the above results could simply be applied $N$ times to obtain the optimal control. Let us assume, for simplicity of exposition, $N=2$, and let $X_{f1}$ and $X_{f2}$ be the desired final conditions for system $S_1$ and system $S_2$, respectively. Let us denote by $T_1$ and $T_2$ the optimal times to reach $X_{f1}$ and $X_{f2}$ for system $S_1$ and $S_2$, respectively. If $T_1=T_2$, then the two optimal controls designed with the techniques of \\cite{Noi}, \\cite{Raf} will drive the two systems to the desired final conditions, in minimum time. However, if $T_1 \\not= T_2$, then there is a {\\it slow} system and a {\\it fast} system and the solution is not simply to slow down the fast system to synchronize it with the slow system. The problem is that for a (bilinear) system with drift, such as the ones considered here, the fact that a certain evolution can be performed at (minimum) time $T_1$ does not ensure that the same evolution can be achieved at a later time $T_2 > T_1$. 
Therefore, this problem requires a careful analysis of the {\\it reachable sets} ${\\cal R}(T)$, for the systems under consideration, that is, the set of evolutions or states which can be reached at exactly time $T$.\n\n\nThe goal of this paper is twofold. On the one hand, we want to solve the above-mentioned time optimal and synchronization control problem for $N$ qubits to any desired final condition. On the other hand, we want to describe the structure of the reachable sets for the dynamics of two level quantum systems. In doing so we will determine the features of the dynamics of quantum bits which make the time optimal control problem amenable to such a simple solution as the one described in \\cite{Noi}, \\cite{Raf}. In fact, the underlying geometric structure of the problem, which facilitates its solution, can be found in other problems as well.\n\n\n\n\nThe paper is organized as follows. In section \\ref{Rev} we describe the problem of minimum time control for a two level quantum system, which mathematically amounts to a time optimal control problem for a right invariant bilinear system on the Lie group $SU(2)$. We shall consider and review the main results and the approach of \\cite{Noi}, \\cite{Raf}. We will see that the time optimal synthesis can be visualized in the unit disk where all the geometric analysis can be performed. A special trajectory, called the {\\it critical trajectory}, plays a distinguished role in that time optimal trajectories lose optimality upon intersecting this curve. In section \\ref{Contin}, we further analyze the optimal synthesis for two level quantum systems by studying the continuity properties of the minimum time function as a function of the bound on the controls or of the final point. This analysis sheds further light on the time optimal synthesis for this model. It shows that the minimum time function is continuous except for points on the critical trajectory, where it presents a discontinuity. 
This discontinuity is a manifestation of the fact that the reachable sets for this model do not monotonically grow with time. The analysis of reachable sets is performed in section \\ref{Reachsets}. Here using a change of coordinates we reduce the problem to the study of the reachable sets for a driftless system. We use monotonicity of the reachable sets in this case and the relation between the geometry of reachable sets and time optimal control trajectories. In this section, we try to present the results in a way that can be applied to more general systems on Lie groups highlighting the features of the system which allow the treatment to go through. Finally, the description of the reachable sets is used in section \\ref{synchro} to give an algorithm for the design of the synchronous time optimal control for $N$ qubits.\n\n\\section{The time optimal control problem for a two level quantum system}\\label{Rev}\n\nLet us consider the {\\it Schr\\\"odinger operator equation} (see, e.g., \\cite{Sakurai}) for a spin $\\frac{1}{2}$ particle in a magnetic field with time varying components (controls) in the $x$ and $y$ direction, $u_x$ and $u_y$. The equation is written as\n\\be{basicmodel}\n\\dot X=\\tilde \\sigma_z X + u_x \\tilde \\sigma_x X+u_y \\tilde \\sigma_y X, \\qquad X(0)={\\bf 1},\n\\end{equation}\nwhere $X \\in SU(2)$, $X(0)={\\bf 1}$ the identity, and $\\tilde \\sigma_{x,y,z}$ are proportional to the {\\it Pauli matrices}, $\\sigma_{x,y,z}$, and form a basis of the Lie algebra $su(2)$. 
They are defined as\n\\be{Paulimat}\n\\tilde \\sigma_x:=\\frac{i}{2}\\sigma_x= \\frac{1}{2}\\begin{pmatrix} 0 & i \\cr i & 0\\end{pmatrix}, \\qquad\n\\tilde \\sigma_y:=\\frac{-i}{2}\\sigma_y= \\frac{1}{2} \\begin{pmatrix} 0 & -1 \\cr 1 & 0 \\end{pmatrix},\n\\qquad \\tilde \\sigma_z:=\\frac{i}{2} \\sigma_z= \\frac{1}{2}\\begin{pmatrix} i & 0 \\cr 0 & -i\\end{pmatrix}.\n\\end{equation}\nThe Lie algebra $su(2)$ is equipped with an inner product between matrices, $\\langle \\cdot, \\cdot \\rangle$, defined as $\\langle A, B \\rangle:=Tr(A B^\\dagger)$, so that the associated norm is $\\|A\\|:=\\sqrt{\\langle A, A \\rangle}$. The coefficient of $\\tilde \\sigma_z$ in (\\ref{basicmodel}) (which is called {\\it Larmor frequency} in NMR applications; see, e.g., \\cite{Abragam}) is taken equal to $1$, in the appropriate units, without loss of generality, since we can always reduce ourselves to this situation with an appropriate re-scaling and\/or reversing of the time variable and\/or a redefinition of the bound on the control\\footnote{This is true unless the Larmor frequency is zero, in which case we have a driftless system, which will be considered in detail in the following (see section \\ref{Reachsets}).} (cf. Remark 1.1 in \\cite{Noi}).\n\n\nThe problem considered in \\cite{Noi} and \\cite{Raf} is, given a desired final condition $X_f \\in SU(2)$, to find the control functions $u_x,u_y$, that steer the state $X$ of system (\\ref{basicmodel}) from the identity, ${\\bf 1}$, to $X_f$ in minimum time, under the constraint that $u_x^2(t)+u_y^2(t) \\leq \\gamma^2$, for every $t$. We shall later generalize this problem (cf. 
section \\ref{synchro}) to the minimum time {\\it simultaneous} control of $N$ two level quantum systems.\n\n\n\n\nThe classical approach to the solution of this type of problem is to apply the {\\it Pontryagin Maximum Principle} (PMP) (see, e.g., \\cite{Agrachev}, \\cite{Flerish}, and, in particular, \\cite{Mikobook}, \\cite{Tesi}, for applications to quantum systems), which gives the necessary conditions for optimality. This results in a set of candidate optimal control functions (more or less explicit) which are typically parametrized by some real values. Then these controls are placed back in the dynamics, which is (numerically) integrated. The parameters are then chosen so that the final condition is met and the time of transfer is the minimum one. In essence, the PMP allows one to reduce a search over a space of functions (the controls) to a search over a finite dimensional space (the space of the parameters). In the case of system (\\ref{basicmodel}), application of the PMP\\footnote{Along with a result showing the non-optimality of singular extremals (cf. 
\\cite{Noi}).} shows that the optimal candidate controls (extremals) are of the type \\cite{Noi}:\n\\be{NSE}\nu_x=\\gamma \\sin(\\omega \\tau + \\tilde \\phi), \\qquad u_y=-\\gamma \\cos(\\omega \\tau +\\tilde \\phi),\n\\end{equation}\nwhere $\\tau$ denotes the time variable and $\\omega$ and $\\tilde \\phi$ are two parameters (frequency and phase) to be tuned in order to reach the desired final condition while minimizing the time.\n\n\n By plugging $u_x$ and $u_y$ in (\\ref{NSE}) into (\\ref{basicmodel}), the corresponding differential equation {\\it can be explicitly integrated}.\nThe solution is given by\n\\be{soluzexpli}\nX(\\tau,\\omega,\\tilde \\phi):=\\begin{pmatrix} e^{i \\omega t}(\\cos(a t)+ i \\frac{b}{a} \\sin(a t)) & e^{i (\\omega t + \\tilde \\phi)} \\frac{\\gamma}{a} \\sin(a t) \\cr - e^{-i(\\omega t + \\tilde \\phi)} \\frac{\\gamma}{a} \\sin(a t) & e^{-i \\omega t}(\\cos(a t)-i \\frac{b}{a} \\sin(a t)) \\end{pmatrix},\n\\end{equation}\nfor $t:=\\frac{\\tau}{2}$, $b:=1-\\omega$, $a:=\\sqrt{\\gamma^2 + b^2}$. Direct inspection of formula (\\ref{soluzexpli}) shows that the $\\tilde \\phi$ only affects the phase of the off-diagonal element. This means that the minimum time only depends on the $(1,1)$ element of the matrix giving the desired final condition.\\footnote{This can also be proved without using the explicit form of the optimal controls (cf. Proposition 2.1 in \\cite{Noi}).} We can use an arbitrary phase (for example $\\tilde \\phi=0$) and study the trajectory of the $(1,1)$ element which belongs to the unit disk. 
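The explicit solution (\\ref{soluzexpli}) can be checked numerically by integrating (\\ref{basicmodel}) with the controls (\\ref{NSE}) and comparing. The following Python sketch does this with a fourth-order Runge--Kutta scheme, and also verifies the commutation relation $[\\tilde \\sigma_x, \\tilde \\sigma_y]=\\tilde \\sigma_z$ for the basis (\\ref{Paulimat}); the values of $\\gamma$, $\\omega$, $\\tilde \\phi$ and the final time are arbitrary test values, and the helper names are ours:

```python
import cmath
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_axpy(X, c, K):
    """Return X + c*K for 2x2 matrices."""
    return [[X[i][j] + c * K[i][j] for j in range(2)] for i in range(2)]

# The basis matrices defined in the text, and a check of the su(2)
# commutation relation [sx, sy] = sz.
sx = [[0, 0.5j], [0.5j, 0]]
sy = [[0, -0.5], [0.5, 0]]
sz = [[0.5j, 0], [0, -0.5j]]
comm = mat_axpy(mat_mul(sx, sy), -1, mat_mul(sy, sx))

def generator(tau, gamma, omega, phi):
    """sz + ux*sx + uy*sy with the extremal controls of the text."""
    ux = gamma * math.sin(omega * tau + phi)
    uy = -gamma * math.cos(omega * tau + phi)
    return [[0.5j, 0.5 * (1j * ux - uy)], [0.5 * (1j * ux + uy), -0.5j]]

def closed_form(tau, gamma, omega, phi):
    """The explicit solution, with t = tau/2, b = 1 - omega, a = sqrt(gamma^2 + b^2)."""
    t, b = tau / 2.0, 1.0 - omega
    a = math.sqrt(gamma ** 2 + b ** 2)
    x11 = cmath.exp(1j * omega * t) * (math.cos(a * t) + 1j * (b / a) * math.sin(a * t))
    x12 = cmath.exp(1j * (omega * t + phi)) * (gamma / a) * math.sin(a * t)
    return [[x11, x12], [-x12.conjugate(), x11.conjugate()]]

# Fourth-order Runge-Kutta integration of dX/dtau = H(tau) X, X(0) = identity.
gamma, omega, phi = 1 / math.sqrt(2), 0.6, 0.3   # arbitrary test values
X = [[1 + 0j, 0j], [0j, 1 + 0j]]
tau, h = 0.0, 0.005
for _ in range(400):                             # integrate up to tau = 2
    f = lambda s, Y: mat_mul(generator(s, gamma, omega, phi), Y)
    k1 = f(tau, X)
    k2 = f(tau + h / 2, mat_axpy(X, h / 2, k1))
    k3 = f(tau + h / 2, mat_axpy(X, h / 2, k2))
    k4 = f(tau + h, mat_axpy(X, h, k3))
    X = [[X[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
          for j in range(2)] for i in range(2)]
    tau += h

Xc = closed_form(tau, gamma, omega, phi)
err = max(abs(X[i][j] - Xc[i][j]) for i in range(2) for j in range(2))
print(err)
```

The integrated and closed-form matrices agree to the accuracy of the integrator, and the numerical solution stays on $SU(2)$ (unit determinant) up to rounding.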
Once the frequency $\\omega$ corresponding to the time optimal control steering to the desired point of the unit disk has been found, then the phase $\\tilde \\phi$ is chosen to match the phase of the off-diagonal element of the desired final condition.\n\nLet $x=x(t)$ and $y=y(t)$ be the real and the imaginary part of the (1,1) element in (\\ref{soluzexpli}), which, parametrized by $\\omega$, is given by:\n\\be{curvex}\nx(t):=x_{\\omega}(t)=\\cos(\\omega t)\\cos(at)-\\frac{b}{a}\\sin(\\omega t)\\sin(at),\n\\end{equation}\n\\be{curvey}\ny(t):=y_\\omega(t)= \\sin(\\omega t)\\cos(at)+\\frac{b}{a}\\cos(\\omega t)\\sin(at).\n\\end{equation}\nIn \\cite{Noi} the optimal synthesis was described in the unit disk.\nThis was done for values of $\\gamma$, $\\frac{1}{\\sqrt{3}} \\leq \\gamma \\leq 1$. A typical picture for the optimal trajectories is in Figure \\ref{Fig1}, to which we shall refer in the following discussion. In general, the values of $\\omega$ for which the controls in (\\ref{NSE}) are optimal are $-\\infty < \\omega \\leq 1+\\gamma^2:=\\omega_c$. The limit value $\\omega_c:=1+\\gamma^2$, called the {\\it critical frequency}, corresponds to a trajectory, called the {\\bf critical trajectory}, which has a cusp at the time $T_c=\\frac{\\pi}{2 \\gamma \\sqrt{1+\\gamma^2}}$. The critical trajectory is optimal until time $T_c$ and then loses optimality. It is the curve in blue in Figure \\ref{Fig1}. Another important trajectory is the one corresponding to $\\omega=\\omega^*:=\\frac{1+\\gamma^2}{2}$, which is a circle (in red in Figure \\ref{Fig1}) centered at $\\left( \\frac{\\gamma^2}{1+\\gamma^2}, 0\\right)$ and with radius $\\frac{1}{1+\\gamma^2}$. This trajectory was called the {\\it separatrix} since optimal trajectories corresponding to $-\\infty < \\omega < \\omega^*$ entirely lie outside the region bounded by it until they lose optimality upon reaching the boundary of the unit disk. 
Also, trajectories with $\\omega^* < \\omega \\leq \\omega_c$ remain {\\it inside} the separatrix until reaching the critical trajectory and losing optimality there\\footnote{The loss of optimality is of a different type at the boundary of the unit disk and on the critical trajectory. At the boundary of the unit disk the trajectory (with corresponding $\\omega$) is optimal up to and {\\it including} the point on the unit disk. However, for a curve intersecting the critical trajectory, this curve is optimal up to, {\\it but not including}, the point on the critical trajectory.} (sample trajectories are drawn in black in Figure \\ref{Fig1}).\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{OS1}\n\\caption{Optimal synthesis for $\\gamma=\\frac{1}{\\sqrt{2}}$. Reported here are the separatrix in red corresponding to $\\omega=\\omega^*=\\frac{3}{4}$ and the critical curve in blue corresponding to $\\omega=2\\omega^*=\\omega_c=\\frac{3}{2}$. Moreover, in black are the optimal trajectories corresponding to $\\omega=-3$, $\\omega=-1$, $\\omega=0$, $\\omega=0.2\\omega^*$, $\\omega=0.5\\omega^*$, $\\omega=0.7\\omega^*$, $\\omega=0.9\\omega^*$, outside the separatrix, and $\\omega=1.3\\omega^*$, $\\omega=1.45\\omega^*$, $\\omega=1.6\\omega^*$, $\\omega=1.8\\omega^*$.}\n\\label{Fig1}\n\\end{figure}\n\n\n\nGiven the qualitative picture of the optimal synthesis as in Figure \\ref{Fig1}, it is straightforward to find the time optimal control to steer to a desired final condition. Let $X_f$ be the desired final condition and $P_f$ the point in the unit disk representing the $(1,1)$ entry of this matrix. Then one finds $\\omega$ such that the corresponding trajectory contains $P_f$ (this can be done for example using a bisection algorithm and the graphs of the trajectories). Once $\\omega$ has been found, one determines the minimum time. 
This can be done for example by solving a static optimization problem, minimizing (in $t$) the distance of the trajectory with fixed $\\omega$ from the point $P_f$. Finally, one determines the phase $\\tilde \\phi$ in (\\ref{NSE}) to match (in (\\ref{soluzexpli})) the phase of the off-diagonal entry of the desired final condition.\n\n\n\\vspace{0.25cm}\n\nThe assumption of $\\gamma \\leq 1$ was used in \\cite{Noi} to guarantee that the minimum time to reach points on the boundary of the unit disk is an increasing function of the phase of the point. Without this feature, the optimal synthesis becomes more complicated. The assumption of $\\frac{1}{\\sqrt{3}} \\leq \\gamma$ was used to render the synthesis inside the separatrix analytically tractable. As $\\gamma \\rightarrow 0$ the critical trajectory becomes longer and longer and resembles a long spiral filling the whole disk bounded by the separatrix. Simulations show that the optimal trajectories follow this curve before going around its endpoint (the one corresponding to $T_c=\\frac{\\pi}{2 \\gamma \\sqrt{1+\\gamma^2}}$) and intersecting the critical trajectory. This behavior, observed in numerical simulations, is however difficult to describe analytically.\n\nThese difficulties were overcome in \\cite{Raf}, where the author reconsidered the curves (\\ref{curvex}), (\\ref{curvey}) but this time keeping $t$ fixed and considering $\\omega$ as a variable. If this is done, the curves (\\ref{curvex}) and (\\ref{curvey}) represent, for $\\omega$ in a certain interval, part of the boundary of the reachable set at that time $t$. For every $\\omega$ in that interval, the corresponding point is the endpoint of an optimal trajectory. 
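The search just described can be mocked up with a brute-force scan of the curves (\\ref{curvex}), (\\ref{curvey}). In the Python sketch below the target point is generated on a known trajectory, so the scan must find a candidate $(\\omega, t)$ within grid resolution of it; the target, the ranges and the grid sizes are illustrative choices of ours, and no check against the critical trajectory (i.e., no optimality check) is performed:

```python
import math

def traj_point(omega, t, gamma):
    """x_omega(t), y_omega(t): the (1,1)-element trajectory from the text."""
    b = 1.0 - omega
    a = math.sqrt(gamma ** 2 + b ** 2)
    x = math.cos(omega * t) * math.cos(a * t) - (b / a) * math.sin(omega * t) * math.sin(a * t)
    y = math.sin(omega * t) * math.cos(a * t) + (b / a) * math.cos(omega * t) * math.sin(a * t)
    return x, y

gamma = 1 / math.sqrt(2)
omega_c = 1 + gamma ** 2                      # critical frequency

# Hypothetical target: a point generated on the omega0-trajectory, so the
# scan below must land within grid resolution of it.
omega0, t0 = 0.4, 0.9
xf, yf = traj_point(omega0, t0, gamma)

best_d, best_omega, best_t = float("inf"), None, None
n = 400
for i in range(n + 1):
    omega = -3.0 + i * (omega_c + 3.0) / n    # scan omega in [-3, omega_c]
    for j in range(1, n + 1):
        t = j * 2.0 / n                       # scan t in (0, 2]
        x, y = traj_point(omega, t, gamma)
        d = math.hypot(x - xf, y - yf)
        if d < best_d:
            best_d, best_omega, best_t = d, omega, t
print(best_d, best_omega, best_t)
```

In the actual synthesis one would restrict the scan to the optimal portion of each trajectory and refine $\\omega$ by bisection, as described above.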
The points of the curve (\\ref{curvex}), (\\ref{curvey}) which are not endpoints of optimal trajectories anymore are the ones where the curve corresponding to time $t$, say ${\\cal F}_t$\\footnote{This curve is called the {\\it optimal frontline} in \\cite{Raf}.} intersects the one at time $t+dt$, ${\\cal F}_{t+dt}$. The curve of all these intersections with varying $t$ {\\it coincides with the critical trajectory discussed above}, which is therefore the {\\it envelope} of these curves. As $\\gamma \\rightarrow \\infty$ the critical trajectory becomes shorter and shorter and disappears in the limit, which corresponds to a driftless system.\\footnote{Recall that in (\\ref{basicmodel}) we have normalized the Larmor frequency to $1$ and the value of $\\gamma$ is in fact $\\gamma:=\\frac{\\gamma^{'}}{\\omega_0}$ where $\\gamma^{'}$ is the `physical' bound on the norm of the control and $\\omega_0$ is the value of the Larmor frequency, which tends to zero for a driftless system.} At the other end, when $\\gamma \\rightarrow 0$, the critical trajectory becomes very long. Besides giving an interpretation of the critical trajectory, the analysis of \\cite{Raf} provides an alternative and general method to find the minimum time control. One considers the frontlines ${\\cal F}_t$, with evolving $t$, bounded on one end by the boundary of the unit disk and on the other end by the critical trajectory (i.e., (\\ref{curvex}), (\\ref{curvey}) with $\\omega=\\omega_c$) and looks for the smallest $t$ such that the desired final point $P_f$ is in ${\\cal F}_t$. This idea will be used later in this paper. 
We refer to \\cite{Raf} for details.\n\nIn the next two sections we shall further elaborate on these results and investigate the geometric nature of the minimum time problem on $SU(2)$ and its relation with the geometry of the reachable sets.\n\n\n\n\\section{Properties of the minimum time function}\\label{Contin}\n\nWe now study the continuity and monotonicity properties of the minimum time function for the above problem for a two level quantum system. This is a function of the (fixed) final state $X_f \\in SU(2)$ and of the bound $\\gamma$ on the control. As discussed above, $X_f$ is represented by a point $P_f$ in the unit disc. For any $\\gamma>0$, and for any final condition $P_f=(x_f,y_f)$, with $0\\leq x_f^2+y_f^2\\leq 1$, we denote by\n $t_{P_f}=t_{P_f}({\\gamma})$ the optimal time to reach the fixed final condition, and by\n$\\omega_{P_f}=\\omega_{P_f}({\\gamma})$ the corresponding frequency $\\omega$ of the optimal control (cf. (\\ref{NSE})).\n\nUsing (\\ref{curvex}), (\\ref{curvey}), we know that\n\\be{curva}\n\\begin{array}{ccl}\nx_f&= &x_{\\omega_{P_f}({\\gamma})}(t_{P_f}({\\gamma})) \\\\\ny_f&= &y_{\\omega_{P_f}({\\gamma})}(t_{P_f}({\\gamma}))\n\\end{array} \\end{equation}\nas in equations (\\ref{curvex}), (\\ref{curvey}), and we let\n$b_{P_f}({\\gamma}):=1-\\omega_{P_f}({\\gamma})$ and $a_{P_f}({\\gamma})=\\sqrt{(b_{P_f}({\\gamma}))^2+\\gamma^2}$.\n\n\nThe function $t_{P_f}(\\gamma)$ is monotonically non-increasing since all controls available for $\\gamma=\\gamma_1$ are also available for any $\\gamma \\geq \\gamma_1$.\n\n\n\nWe consider first the final condition {\\it in the interior} of the unit circle, so let $P_f=(x_f,y_f)$ be a fixed point such that $0\\leq x_f^2+y_f^2<1$. We give a bound on the value of the optimal frequency $\\omega$ to be used in (\\ref{NSE}). 
Define\n$K_{P_f}:=\\sqrt{\\frac{x_f^2+y_f^2}{1-(x_f^2+y_f^2)}}$.\n\n\\bp{ovvia}\nIf $\\omega\\in \\RR$ and $t>0$ are such that $x_{\\omega}(t)=x_f$ and $y_{\\omega}(t)=y_f$, then\n\\be{ovviae}\n1-\\gamma K_{P_f} \\leq \\omega \\leq 1+\\gamma K_{P_f}.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nIf (\\ref{ovviae}) does not hold, then $b^2=(1-\\omega)^2>\\gamma^2K_{P_f}^2$. This implies (using $a^2:=\\gamma^2+b^2$)\n\\[\n\\frac{\\gamma^2}{a^2}\\sin^2(at)<\\frac{1}{1+K_{P_f}^2}.\n\\]\nBy using equations (\\ref{curvex}) and (\\ref{curvey}), we have:\n\\[\nx_f^2+y_f^2=1-\\frac{\\gamma^2}{a^2}\\sin^2(at)> \\frac{K_{P_f}^2}{1+K_{P_f}^2}=x_f^2+y_f^2,\n\\]\nwhich is a contradiction.\n\\end{proof}\nThe above simple proposition gives bounds on the frequencies to be used for a given desired final condition $P_f$. This can be used in the search for the optimal control described in the previous section. In particular, if $P_f=(0,0)$, which corresponds to SWAP-like final conditions, $X_f:=\\begin{pmatrix}0 & e^{i\\phi}\\cr e^{-i\\phi} & 0 \\end{pmatrix}$, then $K_{P_f}=0$, so the only admissible $\\omega$ is $\\omega=\\omega_{P_f}(\\gamma)=1$, independently of $\\gamma$ (these are the resonant controls considered for example in \\cite{OptRes}).\n\nWe now study the limit of $t_{P_f}(\\gamma)$ as $\\gamma$ goes to zero.\n\\bp{seconda}\nIf $P_f$ is in the interior of the unit disc, then\n\\be{t-gamma}\n\\lim_{\\gamma \\to 0^+} t_{P_f}({\\gamma})=+\\infty.\n\\end{equation}\nIf $P_f$ is on the boundary of the unit disc and corresponds to a phase $\\psi_f$, i.e., $P_f=e^{i \\psi_f}$, then\n\\be{T-circonferenza}\n\\lim_{\\gamma \\to 0^+} t_{P_f}({\\gamma})=\\psi_f.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n\nConsider first the case where $P_f$ is inside the unit circle, i.e., the case of (\\ref{t-gamma}).\nUsing equations (\\ref{curvex}) and (\\ref{curvey}),\nwe have 
that:\n\\be{relazione}\n\\frac{\\gamma^2}{(b_{P_f})^2+\\gamma^2}\\sin^2\\left(a_{P_f}\\,t_{P_f}\\right)\n=1-(x_f^2+y_f^2)>0.\n\\end{equation}\n\nFirst we prove that\n\\be{omega-gamma}\n\\lim_{\\gamma \\to 0^+} |b_{P_f}({\\gamma})|=0.\n\\end{equation}\n\n\n\nAssume, by way of contradiction, that (\\ref{omega-gamma}) does not hold. Then we have:\n\\[\n\\exists \\,\\epsilon >0, \\text{ such that }\\forall \\ n>0 \\ \\exists \\ 0<\\gamma_n<\\frac{1}{n} \\text{ with }\n(b_{P_f}({\\gamma_n}))^2 >\\epsilon.\n\\]\nThis implies\n\\[\n0\\leq \\frac{\\gamma_n^2}{(b_{P_f}({\\gamma_n}))^2+\\gamma_n^2}\\leq \\frac{\\gamma_n^2}{\\epsilon+\\gamma_n^2}.\n\\]\nThus we have:\n\\[\n\\lim_{n\\to+\\infty} \\frac{\\gamma_n^2}{(b_{P_f}({\\gamma_n}))^2+\\gamma_n^2}\n\\sin^2\\left(\\sqrt{((b_{P_f}({\\gamma_n}))^2+\\gamma_n^2)}\\,t_{P_f}(\\gamma_n) \\right)=0\n\\]\nsince the $\\sin$ function is bounded, and $\\gamma_n \\to 0$. This contradicts equation (\\ref{relazione}),\nthus (\\ref{omega-gamma}) holds.\n\nNow we use (\\ref{omega-gamma}) to prove (\\ref{t-gamma}).\nAssume, by way of contradiction, that (\\ref{t-gamma}) does not hold. As done before, this would imply that, for some $K>0$, there exists a sequence $\\gamma_j$ ($j>0$) with $0<\\gamma_j<\\frac{1}{j}$, such that $0< t_{P_f}({\\gamma_j})< K$ for every $j$. By (\\ref{omega-gamma}), $a_{P_f}({\\gamma_j})=\\sqrt{(b_{P_f}({\\gamma_j}))^2+\\gamma_j^2} \\to 0$, so that $a_{P_f}({\\gamma_j})\\, t_{P_f}({\\gamma_j}) \\to 0$ and the left hand side of (\\ref{relazione}) tends to zero, contradicting $1-(x_f^2+y_f^2)>0$. Thus (\\ref{t-gamma}) holds. The proof of (\\ref{T-circonferenza}) is analogous, using the fact that, as $\\gamma \\to 0^+$, the trajectory reaching $P_f=e^{i \\psi_f}$ approaches the drift trajectory along the boundary of the unit disc, whose phase grows at unit rate.\n\\end{proof}\n\nConsider now the minimum time as a function $T:=T(x_f,y_f,\\gamma)$ of the final point and of the bound on the control, defined on the set ${\\cal C}:= \\{ x_f, y_f, \\gamma \\, | \\, 0 \\leq x_f^2+y_f^2 \\leq 1, \\, \\gamma >0\\}$. 
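Proposition \\ref{ovvia} lends itself to a quick numerical sanity check: any point $(x_{\\omega}(t), y_{\\omega}(t))$ is, by construction, reached with frequency $\\omega$, so $\\omega$ must satisfy (\\ref{ovviae}) for that point. A Python sketch (the sampling ranges and the value of $\\gamma$ are arbitrary choices of ours):

```python
import math
import random

random.seed(1)

def traj_point(omega, t, gamma):
    """x_omega(t), y_omega(t): the (1,1)-element trajectory from the text."""
    b = 1.0 - omega
    a = math.sqrt(gamma ** 2 + b ** 2)
    x = math.cos(omega * t) * math.cos(a * t) - (b / a) * math.sin(omega * t) * math.sin(a * t)
    y = math.sin(omega * t) * math.cos(a * t) + (b / a) * math.cos(omega * t) * math.sin(a * t)
    return x, y

gamma = 0.7
violations, checked = 0, 0
for _ in range(10_000):
    omega = random.uniform(-5.0, 5.0)
    t = random.uniform(0.01, 3.0)
    x, y = traj_point(omega, t, gamma)
    r2 = x * x + y * y
    if 1.0 - r2 < 1e-9:          # K_{P_f} blows up; the bound is vacuous there
        continue
    K = math.sqrt(r2 / (1.0 - r2))
    checked += 1
    if not (1.0 - gamma * K - 1e-9 <= omega <= 1.0 + gamma * K + 1e-9):
        violations += 1
print(checked, violations)
```

No violations occur, matching the algebraic argument in the proof ($\\gamma^2 K_{P_f}^2 \\geq b^2$ whenever the point is reached).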
We shall restrict ourselves to study this function in the interior of ${\\cal C}$, i.e., $\\texttt{int}\\, {\\cal C}:= \\{ x_f, y_f, \\gamma \\, | \\, 0 \\leq x_f^2+y_f^2 < 1, \\, \\gamma >0\\}$.\n\n\\vspace{0.25cm}\n\n\\bp{continuita}\nThe function $T:=T(x_f,y_f,\\gamma)$ is continuous at all points $(P_f, \\gamma) \\in \\texttt{int}\\, {\\cal C}$ such that $P_f$ is not on the critical trajectory corresponding to $\\gamma$.\n\\end{proposition}\n\nRecall from the results of \\cite{Noi}, \\cite{Raf} reviewed in the previous section that the critical trajectory is given by (\\ref{curvex}), (\\ref{curvey}) with $\\omega:=\\omega_c=1+\\gamma^2$, and $0 \\leq t \\leq \\frac{\\pi}{2 a_c}:=\\frac{\\pi}{2 \\gamma \\sqrt{1+\\gamma^2}}$. The locus of points of discontinuity is a surface in ${\\cal C}$ whose intersection with the planes $\\gamma=\\texttt{const}$ is the critical trajectory at that $\\gamma$, a spiral-like curve that is very short for large $\\gamma$ and very long for small $\\gamma$.\n\n\\begin{proof}\n For a given $P_f$ and $\\bar{\\gamma}$, with $P_f$ not on the critical trajectory of $\\bar{\\gamma}$,\nconsider the corresponding values $t_{P_f}(\\bar{\\gamma})$ and $\\omega_{P_f}(\\bar{\\gamma})$, which are optimal. Then the two equations\n\\be{zero}\n\\begin{array}{llll}\nF_1(t,\\omega,x,y,\\gamma):=x(t,\\omega,\\gamma)-x=0,\\\\\nF_2(t,\\omega,x,y,\\gamma):=y(t,\\omega,\\gamma)-y=0,\n \\end{array}\n \\end{equation}\nhold at $t_{P_f}(\\bar{\\gamma}),\\omega_{P_f}(\\bar{\\gamma}), x_f,y_f,\\bar{\\gamma}$. Moreover,\nfrom the Implicit Mapping Theorem, they define in an open neighborhood $N$ of $(x_f,y_f,\\bar \\gamma)$ two continuous functions $t:=t(x,y,{\\gamma})$, $\\omega=\\omega(x,y,\\gamma)$, as long as the Jacobian with respect to the variables $t$ and $\\omega$ is different from zero. 
We calculate, using (\\ref{curvex}), (\\ref{curvey}),\n\\[\n\\text{Det } \\left( \\begin{array}{cc}\n \\frac{\\partial}{\\partial t}F_1 & \\frac{\\partial}{\\partial t}F_2 \\\\\n & \\\\\n\n \\frac{\\partial}{\\partial \\omega}F_1 & \\frac{\\partial}{\\partial \\omega}F_2\n \\end{array} \\right) = \\frac{\\gamma^2(\\gamma^2+1-\\omega)}{a^4} \\sin( a t) \\left[\\sin( at)-at \\cos (at)\\right].\n \\]\nThis expression is zero if and only if $t=0$, $t=\\pi\/a$ or $\\omega=1+\\gamma^2$.\nSince the optimal time $t_{P_f}({\\bar{\\gamma}})$ is always positive and strictly less than $\\frac{\\pi}{a}$,\\footnote{This follows from the results of \\cite{Noi} \\cite{Raf}. At $t=\\frac{\\pi}{a}$ every optimal candidate trajectory is at the boundary of the unit circle where it loses optimality (if it had not lost it before by intersecting the critical trajectory).} and the point $P_f$ does not belong to the critical trajectory, i.e., $\\omega_{P_f}(\\bar{\\gamma})\\neq {1+\\bar{\\gamma}^2}$, the Jacobian is not zero, so the two continuous functions are defined.\nWe know that\n $t(x_f,y_f,\\bar \\gamma)=t_{P_f}(\\bar \\gamma)$ and $\\omega(x_f,y_f,\\bar \\gamma)=\\omega_{P_f}(\\bar \\gamma)$. Let $V=\\left(t(N),\\omega(N)\\right)$; this is a neighborhood of $\\left(t_{P_f}(\\bar \\gamma), \\omega_{P_f}(\\bar \\gamma)\\right)$, and moreover if $(t,\\omega,x,y,\\gamma)\\in V \\times N$, and satisfies the equations in (\\ref{zero}), then necessarily $t=t(x,y, \\gamma)$ and $\\omega=\\omega(x,y, \\gamma)$.\n\n To show continuity of the map $T$ at $(P_f,{\\bar{\\gamma}})$, i.e., of the time optimal function,\n we prove that there exists a\n neighborhood $W\\subseteq N$, such that if\n $(x,y,\\gamma)\\in W$ then $T(x,y,\\gamma)$ coincides with the implicit map $t(x,y,{\\gamma})$, whose continuity is guaranteed by the Implicit Mapping Theorem.\n\n It is obvious by definition that $t(x,y,{\\gamma})\\geq T(x,y,\\gamma)$. 
Assume, by way of contradiction, that the statement is false. Then there exist a sequence $\\{P_n=(x_n,y_n)\\}$ and a sequence $\\{\\gamma_n\\}$ such that\n $P_n$ goes to $P_f$, $\\gamma_n$ goes to $\\bar{\\gamma}$, and $t(x_n,y_n,\\gamma_n)>T(x_n,y_n,\\gamma_n)$ for all $n$. On the other hand, we have:\n \\[\n x_n=x\\left(t(x_n,y_n,\\gamma_n), \\omega(x_n,y_n,\\gamma_n),\\gamma_n\\right)=\n x\\left(T(x_n,y_n,\\gamma_n), \\omega_{(x_n,y_n)}(\\gamma_n),\\gamma_n\\right),\n \\]\n \\[\n y_n=y\\left(t(x_n,y_n,\\gamma_n), \\omega(x_n,y_n,\\gamma_n),\\gamma_n\\right)=\n y\\left(T(x_n,y_n,\\gamma_n), \\omega_{(x_n,y_n)}(\\gamma_n),\\gamma_n\\right),\n \\]\nwhere $\\omega_{(x_n,y_n)}(\\gamma_n)$ are the optimal values. Since $T(x_n,y_n,\\gamma_n)$ and $\\omega_{(x_n,y_n)}(\\gamma_n)$ belong to compact sets (for the values $\\omega$ see Proposition \\ref{ovvia}), we may assume, after passing to a subsequence if necessary, that:\n\\[\n\\omega_{(x_n,y_n)}(\\gamma_n) \\to \\bar{\\omega},\n\\ \\ T(x_n,y_n,\\gamma_n) \\to \\bar{t}.\\]\nSince by continuity\n\\[\n x\\left(T(x_n,y_n,\\gamma_n), \\omega_{(x_n,y_n)}(\\gamma_n),\\gamma_n\\right) \\to x(\\bar{t},\\bar{\\omega},\\bar{\\gamma}), \\ \\ y\\left(T(x_n,y_n,\\gamma_n), \\omega_{(x_n,y_n)}(\\gamma_n),\\gamma_n\\right) \\to y(\\bar{t},\\bar{\\omega},\\bar{\\gamma}),\n \\]\n we must have\n\\[\nx_f= x(\\bar{t},\\bar{\\omega},\\bar{\\gamma}), \\ \\ y_f= y(\\bar{t},\\bar{\\omega},\\bar{\\gamma}).\n\\]\nFrom the fact that $t(x_n,y_n,\\gamma_n)>T(x_n,y_n,\\gamma_n)$, we have $t_{P_f}(\\bar \\gamma)=t(x_f,y_f,\\bar \\gamma)\\geq \\bar{t}$; since $t_{P_f}(\\bar \\gamma)$ is optimal, we conclude\n$t_{P_f}(\\bar \\gamma)=t(x_f,y_f,\\bar \\gamma)= \\bar{t}$. 
This, in turn, implies that also\n$\\bar{\\omega}=\\omega_{P_f}(\\bar \\gamma)$, since the optimal value is unique.\\footnote{see \\cite{Noi} or alternatively the geometric analysis of the next section, which shows that the value $\\omega$ is the value of the parameter at the intersection of two parametric curves, which is uniquely defined.}\nThus, for $n$ sufficiently large, $\\left(T(x_n,y_n,\\gamma_n), \\omega_{(x_n,y_n)}(\\gamma_n)\\right)$ belongs to $V$, and we must have $T(x_n,y_n,\\gamma_n)=t(x_n,y_n,\\gamma_n)$ since the\n Implicit Mapping Theorem guarantees uniqueness of the function $t$, which contradicts $t(x_n,y_n,\\gamma_n)>T(x_n,y_n,\\gamma_n)$.\n\n\n\\end{proof}\n\n\nWe are now interested in studying the discontinuity of the function $t_{P_f}=t_{P_f}(\\gamma)$ at the points on the critical trajectory. The following result summarizes the continuity properties of this function.\n\n\\bp{discontin}\nThe function $t_{P_f}=t_{P_f}(\\gamma)$ is continuous for every $\\gamma$ except for the $\\gamma$'s such that $P_f$ is in the interior of the critical trajectory. At these points it presents a discontinuity on the left and it is right continuous.\n\\end{proposition}\n\n\\begin{proof}\n\nWe first prove right continuity everywhere.\n\nBy monotonicity of the function $t_{P_f}$, for a fixed value $\\bar{\\gamma}$, we know that:\n\\be{limitedestro}\n\\lim_{\\gamma\\to\\bar{\\gamma}^+} t_{P_f}({\\gamma})=\\sup_{\\gamma>\\bar{\\gamma}}t_{P_f}({\\gamma})=\nl_+({\\bar{\\gamma}})\\leq t_{P_f}({\\bar{\\gamma}}).\n\\end{equation}\nWe will prove that $l_+({\\bar{\\gamma}})= t_{P_f}({\\bar{\\gamma}})$.\n\nBy definition of $l_+({\\bar{\\gamma}})$, we know that for each $n\\geq 1$ there exists $\\bar{\\gamma}<\\gamma_n<\\bar{\\gamma}+\\frac{1}{n}$, such that $l_+({\\bar{\\gamma}})-\\frac{1}{n} < t_{P_f}({\\gamma_n}) \\leq l_+({\\bar{\\gamma}})$. Arguing as in the proof of Proposition \\ref{continuita}, extracting converging subsequences of the optimal times and frequencies, the limit $l_+({\\bar{\\gamma}})$ is the time at which an admissible trajectory for the bound $\\bar{\\gamma}$ reaches $P_f$. Therefore $l_+({\\bar{\\gamma}}) \\geq t_{P_f}({\\bar{\\gamma}})$, which, combined with (\\ref{limitedestro}), gives $l_+({\\bar{\\gamma}})= t_{P_f}({\\bar{\\gamma}})$, i.e., right continuity.\n\nConsider now a value $\\bar \\gamma$ such that $P_f$ is in the interior of the critical trajectory corresponding to $\\bar \\gamma$, so that $t_{P_f}(\\bar \\gamma)=\\alpha \\frac{\\pi}{2 \\bar \\gamma \\sqrt{1+\\bar \\gamma^2}}$ for some $0<\\alpha<1$. For $\\gamma$ slightly smaller than $\\bar \\gamma$, the optimal trajectory reaching $P_f$ has to travel around the endpoint of the critical trajectory corresponding to $\\gamma$, so that, for $\\epsilon > 0$ sufficiently small, $t_{P_f}(\\bar \\gamma -\\epsilon)> \\frac{\\pi}{2 (\\bar \\gamma - \\epsilon) \\sqrt{1+(\\bar \\gamma -\\epsilon)^2}}$. 
This gives\n \\be{discontinuita77}\n \\lim_{\\gamma \\to \\bar \\gamma^-} t_{P_f}(\\gamma)=\\lim_{\\epsilon \\to 0+} t_{P_f}(\\bar \\gamma -\\epsilon) \\geq\n \\frac{\\pi}{2 \\bar \\gamma \\sqrt{1+ \\bar \\gamma^2}} > \\alpha \\frac{\\pi}{2 \\bar \\gamma \\sqrt{1+ \\bar \\gamma^2}}. \\end{equation}\nThe analysis of \\cite{Noi,Raf} also shows left continuity of $t_{P_f}$ if $P_f$ is exactly at the endpoint of the critical trajectory.\\footnote{For $\\gamma$ smaller than $\\bar \\gamma$ optimal trajectories reaching $P_f$ travel around the end point of the critical trajectory corresponding to that $\\gamma$ before reaching $P_f$. The time to reach $P_f$ is therefore greater than the maximum time on the critical trajectory. However, as $\\gamma \\to \\bar \\gamma^-$, the two points and the two times coincide.}\n\n\n\\end{proof}\n\n\\vspace{0.25cm}\n\n\n\\br{Swapetal}\nIf $P_f$ corresponds to a SWAP-like operator, then, since $P_f:=(0,0)$ is not on any critical trajectory, $t_{P_f}$ is a continuous function of $\\gamma$. However, for other points $P_f$ in the interior of the unit disc, there are infinitely many values of $\\gamma$ such that $P_f$ is on the corresponding critical trajectory.\\footnote{They have a limit point at zero.} At each of these values, the function $t_{P_f}$ has a jump on the left. If we are trying to reach a point at exactly a time $T$ which is possibly larger than the minimum time corresponding to a bound $\\bar \\gamma$, we cannot always use the time optimal control for a $\\gamma$ smaller than $\\bar \\gamma$. The time $T$ might not be in the range of the function $t_{P_f}$ even though such a function tends to $+\\infty$ as $\\gamma \\rightarrow 0$ as shown in Proposition \\ref{seconda}. 
A characterization of the reachable sets for every $T$ will allow us to know exactly at what times we can reach a given state and how.\n\\end{remark}\n\n\\section{Geometry of the reachable sets} \\label{Reachsets}\n\nWe now give a description of the reachable sets for system (\\ref{basicmodel}), which will then be used in the solution of the minimum time synchronization problem in Section \\ref{synchro}. As the method has more general validity, we shall first describe it for general, bilinear, right invariant systems on Lie groups and then specialize to system (\\ref{basicmodel}).\\footnote{Some of the concepts and ideas we shall describe are valid for more general families of vector fields. Restricting ourselves to bilinear right invariant vector fields ensures that the solution of the associated initial value problems exists for every time $t$. For concreteness, our notation refers to {\\it matrix} Lie groups, although we could have extended the discussion to general abstract Lie groups by simply replacing the exponential of a matrix with the exponential map.}\n\n\n\n\n\\subsection{General method}\n\n\nConsider a system\n\\be{gensys2}\n\\dot X=AX+\\sum_{j=1}^m u_jB_j X, \\qquad X(0)=X_0,\n\\end{equation}\nwhere $A,B_1,\\ldots,B_m$ are matrices in a matrix Lie algebra ${\\cal L}$, $X$ belongs to the corresponding Lie group $e^{\\cal L}$, $X_0$ is the given initial condition, and $u_j$, $j=1,\\ldots,m$, are the controls, which are assumed to belong to a set ${\\cal U}$ of functions of time. The {\\bf reachable set} at time $T$, ${\\cal R}(T)$, is the set of states $X_f$ in $e^{\\cal L}$ such that there exist functions $u_1,\\ldots,u_m$ in ${\\cal U}$ defined in $[0,T]$ so that the solution $X$ of (\\ref{gensys2}) with these controls satisfies $X(T)=X_f$. 
${\\cal R}(0)=\\{ X_0\\}$ by definition and the reachable set ${\\cal R}(\\leq T)$ is defined as\n\\be{reaches}\n{\\cal R}(\\leq T):=\\bigcup_{0 \\leq t \\leq T} {\\cal R}(t).\n\\end{equation}\nIt follows immediately from the definition that the reachable sets ${\\cal R}(\\leq T)$ are nondecreasing with $T$, i.e., ${\\cal R}(\\leq T_1) \\subseteq {\\cal R}(\\leq T_2)$ if $T_1 \\leq T_2$, a property not necessarily true for the reachable sets ${\\cal R}(T)$.\n\\vspace{0.25cm}\nTo study reachable sets we can make a, possibly time varying, change of variables in system (\\ref{gensys2}) and study the reachable sets for the resulting system. The reachable sets for the original system are obtained by mapping back the reachable sets for the new system. In the case of system (\\ref{gensys2}) we define\\footnote{This is called `{\\it passage to the interaction picture}' in the physics literature.}\n\\be{intepic}\nU(t):=e^{-At}X(t).\n\\end{equation} From (\\ref{gensys2}), we obtain the differential equation for $U$,\n\\be{gensys3}\n\\dot U=\\left(\\sum_{j=1}^m u_j e^{-At} B_j e^{At} \\right) U, \\qquad U(0)=X_0.\n\\end{equation}\nLet ${\\cal I}$ be the smallest subspace of ${\\cal L}$ which contains $\\{B_1,\\ldots,B_m\\}$ and is invariant under Lie bracket with $A$, and let $\\{ C_1,C_2,\\ldots, C_l \\}$ be a basis of ${\\cal I}$. 
Then there are analytic functions $\\gamma_{j,i}:=\\gamma_{j,i}(t)$ such that\n\\be{llo2}\ne^{-At} B_j e^{At}:=\\sum_{i=1}^l \\gamma_{j,i}(t)C_i.\n\\end{equation}\nBy replacing this in (\\ref{gensys3}) and defining\n\\be{Vi}\nv_i:=\\sum_{j=1}^m u_j \\gamma_{j,i},\n\\end{equation}\nwe obtain\n\\be{L3e}\n\\dot U=\\left( \\sum_{i=1}^l v_i C_i \\right) U, \\qquad U(0)=X_0.\n\\end{equation}\nIf ${\\cal V}$ denotes the image of the set of control functions, ${\\cal U}$, under the map\n(\\ref{Vi}), then we can study the reachable sets for (\\ref{L3e}) under the set of controls ${\\cal V}$, denote them by ${\\cal R}_U( T)$ and ${\\cal R}_U(\\leq T)$, respectively, and the reachable\nsets for the original system (\\ref{gensys2}) are recovered from (cf. (\\ref{intepic}))\n\\be{recovering}\n{\\cal R}(T)=e^{AT} {\\cal R}_U(T),\\qquad {\\cal R}(\\leq T)=\\bigcup_{0 \\leq t \\leq T} e^{At} {\\cal R}_U(t).\n\\end{equation}\nThis method is particularly useful when the set of controls ${\\cal V}$ has the {\\it scalability property}, that is, $\\vec v \\in {\\cal V}$ implies $L_\\alpha({\\vec v}):= \\alpha \\vec v (\\alpha t) \\in {\\cal V}$ for every $\\alpha \\in [0,1]$. In this case, the reachable sets ${\\cal R}_U(T)$ are nondecreasing with $T$, so that ${\\cal R}_U(T)={\\cal R}_U(\\leq T)$ for every $T$. This is seen with a standard argument for driftless systems such as (\\ref{L3e}). Let $\\vec v$ be a control in ${\\cal V}$ steering the initial condition $X_0$ to $U_f$ in time $T_1$, so that $U_f \\in {\\cal R}_U(T_1)$. Let $T_2 > T_1$ and consider the control $L_{\\frac{T_1}{T_2}} \\vec v(t)=\\frac{T_1}{T_2} \\vec v(\\frac{T_1}{T_2}t)$. With this control the function $\\tilde U(t):=U(\\frac{T_1}{T_2}t)$ solves (\\ref{L3e}) and it is such that $\\tilde U(T_2)=U_f$, so that $U_f \\in {\\cal R}_U(T_2)$.
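This rescaling argument is easy to test numerically. In the Python sketch below, the generator $C=\mathrm{i}\sigma_x$, the control $v$, and the times $T_1<T_2$ are arbitrary illustrative choices (none of them come from the text): the endpoint reached by $\vec v$ in time $T_1$ coincides with the endpoint reached by the rescaled control $L_{T_1/T_2}\vec v$ in time $T_2$; note also that the rescaled control has smaller norm, so it is admissible whenever the control set is scalable.

```python
import numpy as np

# Generator C = i*sigma_x (anti-hermitian, so the flow stays in SU(2));
# C, the control v, and the times T1 < T2 are illustrative choices.
C = 1j * np.array([[0.0, 1.0], [1.0, 0.0]])

def rk4(f, U0, T, n=4000):
    """Fixed-step RK4 integrator for the matrix ODE dU/dt = f(t) @ U."""
    U, h = U0.astype(complex), T / n
    for k in range(n):
        t = k * h
        k1 = f(t) @ U
        k2 = f(t + h/2) @ (U + h/2 * k1)
        k3 = f(t + h/2) @ (U + h/2 * k2)
        k4 = f(t + h) @ (U + h * k3)
        U = U + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
    return U

v = lambda t: 0.3 + np.sin(t)                  # original control on [0, T1]
T1, T2 = 1.0, 2.3
I2 = np.eye(2, dtype=complex)

U_T1 = rk4(lambda t: v(t) * C, I2, T1)         # endpoint reached at time T1
w = lambda t: (T1/T2) * v((T1/T2) * t)         # rescaled control L_{T1/T2} v
U_T2 = rk4(lambda t: w(t) * C, I2, T2)         # endpoint reached at time T2

print(np.max(np.abs(U_T1 - U_T2)))             # ~0: same endpoint
```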
It is a well known fact in geometric control theory that if $X_f$ is a final point of a time optimal trajectory at time $T$, then $X_f$ is on the boundary of the reachable set ${\\cal R}(\\leq T)$. On the other hand, by definition, the minimum time to reach $X_f$ is the smallest time $T$ such that $X_f \\in {\\cal R}(T)$. In the above described situation, the minimum time $T$ is the smallest time $t$ such that\n\\be{L4r}\nX_f \\in {\\cal R}(t)=e^{At} {\\cal R}_U(t),\n\\end{equation}\nand, if we have a description of ${\\cal R}_U(t)(={\\cal R}_U(\\leq t))$ this gives an alternative way to find the minimum time and control.\n\n\\subsection{Reachable sets for systems on $SU(2)$}\n\nWe now apply the strategy outlined above to the case of system (\\ref{basicmodel}). In this case, the space of controls, ${\\cal U}$, is the space of Lebesgue measurable functions $\\vec u:=(u_x,u_y)$ with Euclidean norm $\\| \\vec u \\|:=\\sqrt{u^2_x+ u^2_y} \\leq \\gamma$. Specializing (\\ref{llo2}), we obtain\n\\be{L1X}\ne^{-\\tilde \\sigma_z \\tau} \\tilde \\sigma_x e^{\\tilde \\sigma_z \\tau}=\\cos(\\tau) \\tilde \\sigma_x -\\sin(\\tau)\\tilde \\sigma_y,\n\\end{equation}\nand\n\\be{L1Y}\ne^{-\\tilde \\sigma_z \\tau} \\tilde \\sigma_y e^{\\tilde \\sigma_z \\tau}=\\sin(\\tau) \\tilde \\sigma_x+ \\cos(\\tau) \\tilde \\sigma_y.\n\\end{equation}\nTherefore, the equation corresponding to (\\ref{L3e}) is\n\\be{L5bis}\n\\dot U=(v_x \\tilde \\sigma_x + v_y \\tilde \\sigma_y )U, \\qquad U(0)= {\\bf 1}\n\\end{equation}\nwith $v_x:=\\cos(\\tau) u_x+ \\sin(\\tau) u_y$ and $v_y=-\\sin(\\tau) u_x + \\cos(\\tau) u_y$. 
Therefore, if $\\vec u:=[u_x,u_y]^T$ and $\\vec v:=[v_x,v_y]^T$, we have\n\\be{relat32}\n\\vec v=\\begin{pmatrix} \\cos(\\tau) & \\sin(\\tau) \\cr -\\sin(\\tau) & \\cos(\\tau) \\end{pmatrix} \\vec u.\n\\end{equation}\n Formula (\\ref{relat32}) gives a one-to-one correspondence between Lebesgue measurable functions $\\vec u$ with norm bounded by $\\gamma$ and Lebesgue measurable functions $\\vec v$ with norm bounded by $\\gamma$. That is, in this case, the space of controls ${\\cal V}$ coincides with ${\\cal U}$. Moreover, ${\\cal V}$ has the scalability property, and therefore ${\\cal R}_U(\\leq T)={\\cal R}_U(T)$ for every $T\\geq 0$.\n\n\n\n The following theorem describes ${\\cal R}_U(T)$ for system (\\ref{L5bis}), from which, in the following corollary, we obtain the reachable sets for the original system (\\ref{basicmodel}). As we have already done in the previous section, we scale the time variable as $t:=\\frac{\\tau}{2}$ and, when we refer to `time' (as, for example, for $T$ in ${\\cal R}_U(T)$), we shall mean the time $t$ defined this way.\n\n\\bt{Reachset1} The reachable set ${\\cal R}_U(T)$ for the system (\\ref{L5bis}) is given by the set of matrices in $SU(2)$ with the $x_{1,1}:=x+iy$ entry in the region of the unit disc bounded by the parametric curve ${\\cal F}_T$ defined as follows.\\footnote{These are the optimal front lines studied in \\cite{Raf}.}\n\\be{CurveX9}\nx=x(\\omega)=\\cos(\\omega T) \\cos(aT)+\\frac{\\omega}{a} \\sin(\\omega T) \\sin(aT),\n\\end{equation}\n\\be{CurveY9}\ny=y(\\omega)=\\sin(\\omega T) \\cos(aT)-\\frac{\\omega}{a} \\cos(\\omega T) \\sin(aT),\n\\end{equation}\nwhere $a:=\\sqrt{\\omega^2+\\gamma^2}$ and the parameter $\\omega \\in \\left[ -\\sqrt{\\frac{\\pi^2}{T^2}-\\gamma^2}, \\sqrt{\\frac{\\pi^2}{T^2}-\\gamma^2} \\right]$.\n\\end{theorem}\nThe region of the unit disc representing ${\\cal R}_U(T)$ lies to the right of the curve ${\\cal F}_T$ which, for every $T$, connects two points on the boundary of the unit disc and is symmetric
with respect to the $x$-axis. The region grows with $T$, and when $T=\\frac{\\pi}{\\gamma}$, the curve ${\\cal F}_T$ collapses to the point $(-1,0)$ and the reachable set ${\\cal R}_U(T)$ becomes all of $SU(2)$. Figure \\ref{Fig2} shows the typical behavior of the curves ${\\cal F}_T$ for various values of $T$.\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.7\\textwidth, height=0.45\\textheight]{OS2}\n\\caption{Boundaries of reachable sets ${\\cal R}_U(\\leq T)={\\cal R}_U(T)$ for various values of $T$. The parameter $\\gamma$ is chosen equal to $1$. In red, we have the boundary (curves (\\ref{CurveX9}), (\\ref{CurveY9})) for $T=1$, in green for $T=1.1$, in blue for $T=2$, in purple for $T=3$. For $T=\\frac{\\pi}{\\gamma}=\\pi$ the boundary of the reachable set collapses to the single point $(-1,0)$ and the reachable set becomes the whole unit disc and therefore the whole $SU(2)$. }\n\\label{Fig2}\n\\end{figure}\n\nSpecializing (\\ref{recovering}) we obtain\n\\bc{special}\n\\be{cf}\n{\\cal R}(T)=e^{2 \\tilde \\sigma_z T} {\\cal R}_U(T).\n\\end{equation}\nIn other words, $X_f \\in {\\cal R}(T)$ if and only if, denoting by $P_f$ the $(1,1)$ entry of $X_f$, we have that $e^{-iT} P_f$ is in the region described in Theorem \\ref{Reachset1}.\n\\end{corollary}\n\\br{explanationdisc}\nThe above corollary gives an alternative geometric explanation of the discontinuity we described in the previous section (cf. Remark \\ref{Swapetal}). Consider the desired point $P_f$ in the unit disc. As $T$ increases, $e^{-iT} P_f$ describes a circle inside the unit disc. At the same time, as $T$ increases, the curve ${\\cal F}_T$ moves towards the left with a speed which decreases with decreasing $\\gamma$. The optimal time is the first time at which these two curves intersect. 
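The qualitative features of ${\cal F}_T$ just described are easy to confirm numerically from (\ref{CurveX9}), (\ref{CurveY9}): a short computation gives $x(\omega)^2+y(\omega)^2 = 1-(\gamma^2/a^2)\sin^2(aT)$, so the curve meets the unit circle exactly where $aT$ is a multiple of $\pi$, i.e., at the two endpoints of the parameter interval, while at $T=\pi/\gamma$ the interval shrinks to $\{0\}$ and the curve degenerates to $(-1,0)$. A Python sketch (the values of $\gamma$ and $T$ are arbitrary illustrative choices):

```python
import numpy as np

def F_T(omega, T, gamma):
    """Point (x, y) on the curve F_T given by (CurveX9)-(CurveY9)."""
    a = np.sqrt(omega**2 + gamma**2)
    x = np.cos(omega*T)*np.cos(a*T) + (omega/a)*np.sin(omega*T)*np.sin(a*T)
    y = np.sin(omega*T)*np.cos(a*T) - (omega/a)*np.cos(omega*T)*np.sin(a*T)
    return x, y

gamma, T = 1.0, 2.0                          # illustrative values, T < pi/gamma
w_max = np.sqrt(np.pi**2 / T**2 - gamma**2)  # endpoint of the parameter interval

# the two endpoints of F_T lie on the boundary of the unit disc ...
for w in (-w_max, w_max):
    x, y = F_T(w, T, gamma)
    print(round(x*x + y*y, 12))              # -> 1.0

# ... while interior points lie strictly inside, since there aT is in (gamma*T, pi)
for w in np.linspace(-0.9*w_max, 0.9*w_max, 7):
    x, y = F_T(w, T, gamma)
    assert x*x + y*y < 1.0

# at T = pi/gamma the parameter interval shrinks to {0}: F_T collapses to (-1, 0)
x, y = F_T(0.0, np.pi/gamma, gamma)
print(round(x, 6), round(y, 6))              # -> -1.0 0.0
```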
If this intersection happens at a point of tangency, a small decrease in $\\gamma$ implies that $e^{-iT} P_f$ will have to go around almost an entire circle again before intersecting ${\\cal F}_T$, which explains the discontinuity at that $\\gamma$. Notice that this problem does not occur at the origin (corresponding to SWAP-like operators) since in this case the circle $e^{-iT} P_f$ reduces to a single point.\n\\end{remark}\n\n\n\nTo prove Theorem \\ref{Reachset1} we shall need the following facts about the parametric curves ${\\cal F}_T$ (with parameter $\\omega$ and fixed $T$), and their extension to values (in (\\ref{CurveX9}), (\\ref{CurveY9})) of ${\\omega} \\in (-\\infty, -\\sqrt{\\frac{\\pi^2}{T^2}-\\gamma^2}) \\bigcup (\\sqrt{\\frac{\\pi^2}{T^2}-\\gamma^2}, +\\infty)$, which we denote by ${\\cal S}_T$. These facts can be inferred by plotting the curves for different values of $T$, but we present an analytic proof in the appendix.\n\n\n\\newpage\n\n\\bl{SIO}\nLet ${\\cal F}_t$ and ${\\cal S}_t$ be the previously defined curves; then:\n\\begin{enumerate}\n\\item If $t_1\\neq t_2$ then ${\\cal F}_{t_1}\\cap {\\cal F}_{t_2}=\\emptyset$.\n\\item The curve ${\\cal F}_t$ does not have self-intersections.\n\\item For any $0 T_1$ such that $X_{f,\\bar j} \\in {\\cal R}_{\\bar j}(T)$. Find such a $T$ and then check that for all other $j=1,\\ldots,N$, $j \\not= \\bar j$, $X_{f,j} \\in {\\cal R}_j(T)$. If that is the case, the algorithm stops; otherwise it continues, going back and replacing $T_1$ with $T$. 
The algorithm ends because for $T \\geq \\frac{\\pi}{\\gamma_{min}}$, where $\\gamma_{min}:=\\min\\{\\gamma_1,\\ldots,\\gamma_N\\}$, ${\\cal R}_j(T)=SU(2)$, for every $j$.\n\n\\vspace{0.25cm}\n\nWe summarize the procedure in the following formal algorithm.\n\n\\vspace{0.25cm}\n\n\n{\\scshape\n\n\\vspace{0.25cm}\n\n{\\bf ALGORITHM}\n\n\\vspace{0.25cm}\n\\begin{enumerate}\n\n\\item Solve the Time Optimal Control Problem for systems 1 through N and find the minimum\ntimes $T_1,\\ldots,T_N$.\n\nSet $T_{curr}:=\\max \\{T_1,\\ldots,T_N\\}$.\n\nSet $k_{curr}$ to a value of $j$ such that $T_j=T_{curr}$.\n\n\\item For $j \\not=k_{curr}$ check that\n\n $X_{f,j} \\in {\\cal R}_j(T_{curr})$\n\n If this is the case then STOP.\n\n\\item Choose a $\\bar j \\in \\{1,\\ldots,N\\}$ such that\n\n $X_{f,\\bar j} \\notin {\\cal R}_{\\bar j}(T_{curr})$\n\n\\item Find the smallest $T > T_{curr}$ such that\n\n $X_{f,\\bar j} \\in {\\cal R}_{\\bar j}(T)$\n\n\\item Set $T_{curr}=T$, $k_{curr}=\\bar j$\n\n\\item Go back to step 2.\n\n\\end{enumerate}\n}\n\n\n\n\n\n\n\\section*{Acknowledgement} Domenico D'Alessandro's research was supported by ARO MURI grant W911NF-11-1-0268. He would also like to thank Dr. R. Romano for useful discussions. The authors\n would like to thank Prof. B. Jakubczyk for helpful comments which went as far as providing meaningful examples for some of the properties we wanted to prove in this paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn this paper, we introduce and solve a new exactly solvable nonlinear partial integro-differential equation which is a natural spin generalization of the Benjamin-Ono (BO) equation \\cite{benjamin1967,ono1975} and which we therefore call the {\\em spin BO (sBO) equation}. We present arguments that the sBO equation is not only interesting from a mathematical point of view but also for physics. 
We also introduce, discuss, and present results about a closely related spin generalization of the non-chiral intermediate long-wave (ncILW) equation recently introduced and studied by us in \\cite{berntson2020,berntsonlangmann2020,berntsonlangmann2021}.\n\nThe sBO equation describes the time evolution of a square matrix-valued function $\\mathsf{U}=\\mathsf{U}(x,t)$ depending on position and time variables $x\\in{\\mathbb R}$ and $t\\in{\\mathbb R}$, respectively, and it is given by \n\\begin{equation} \n\\label{eq:sBO} \n\\boxed{ \\mathsf{U}_t + \\{\\mathsf{U},\\mathsf{U}_x\\} + H\\mathsf{U}_{xx} +\\mathrm{i} [\\mathsf{U},H\\mathsf{U}_x]=0 }\n\\end{equation} \nwhere $\\mathsf{U}_t$ is short for $\\frac{\\partial}{\\partial t}\\mathsf{U}$ etc., $H$ is the usual Hilbert transform (which, for functions $f$ of $x\\in{\\mathbb R}$ is given by \n\\begin{equation}\n\\label{eq:H} \n(Hf)(x)\\coloneqq \\frac1{\\pi}\\Xint-_{{\\mathbb R}} \\frac1{x'-x} f(x')\\,\\mathrm{d}{x'}\n\\end{equation} \nwhere $\\Xint-$ is the principal value integral), $\\mathrm{i}\\coloneqq\\sqrt{-1}$, and $[\\cdot,\\cdot]$ and $\\{\\cdot,\\cdot\\}$ denote the commutator and anti-commutator of square matrices, respectively (the matrix dimension $d\\times d$ is arbitrary, with $d=1$ corresponding to the standard BO equation). \n The sBO equation is interesting for several reasons: (i) It is exactly solvable, and it includes well-known soliton equations as limiting cases; in particular, the sBO equation \\eqref{eq:sBO} not only generalizes the BO equation but also the half-wave maps (HWM) equation \\cite{zhou2015,lenzmann2018,lenzmann2018b}. \n(ii) It describes interesting physics beyond what has previously been described by exactly solvable equations; in particular, the sBO equation provides a hydrodynamic description of interacting particles with internal spin degrees of freedom, and it determines the evolution of the nonlinearly coupled charge- and spin-densities of this particle system. 
As will be explained in more detail, integrable systems of this kind are relevant for the quantum Hall effect in situations where the electron spin cannot be ignored, a topic which has received considerable interest in the physics literature in recent years; see e.g.\\ \\cite{senthil1999,mccann2006,bernevig2006}. \n(iii) The sBO equation has multi-soliton solutions constructed via a simple spin-pole ansatz where the time evolution of the spins and poles is determined by the spin generalization of the $A$-type Calogero-Moser (CM) system due to Gibbons and Hermsen \\cite{gibbons1984}, which we call the spin CM (sCM) system; see also \\cite{wojciechowski1987}. (iv) It provides another example of a correspondence between Calogero-Moser-Sutherland type systems and soliton equations; in particular, it is known that the BO equation and the ncILW equation are naturally related to $A$-type Calogero-Moser-Sutherland systems in several different ways \\cite{berntson2020}, and while \\eqref{eq:sBO} is a spin generalization of the BO equation, we also present a spin generalization of the ncILW equation and thus propose a correspondence between soliton equations and sCM systems in all four cases: rational (I), trigonometric (II), hyperbolic (III), elliptic (IV) (see, e.g., \\cite{olshanetsky1981} for basic facts about CM systems).\nWe substantiate this proposal in cases I--III; case IV is technically more demanding and thus left for future work. \n(v) Our results suggest that this equation is exactly solvable in the same strong sense as the BO and HWM equations, and this opens up possibilities for several future research projects. \n\nIn the following, we explain the above points (i)--(v) in more detail; we emphasize that these points are complementary and, depending on the interests of the reader, some of them can be ignored without loss of continuity. 
\n\n\\paragraph{(i) Special cases.} \nThe BO equation is an exactly solvable nonlinear partial integro-differential equation that describes one-dimensional internal waves in deep water. In particular, physically relevant $N$-soliton solutions of the BO equation can be obtained by a simple pole ansatz where the dynamics of the poles is governed by an $A_{N-1}$ CM system \\cite{chen1979}. \nThus, the BO equation on the real line is related to the rational CM system, while the BO equation with periodic boundary conditions is related to the trigonometric CM system.\n\nThe HWM equation is a nonlinear partial integro-differential equation that describes a spin density, represented by an $S^2$-valued function, propagating in one dimension. As two of us found recently in collaboration with Klabbers \\cite{berntsonklabbers2020}, the HWM equation is similar to the BO equation in that it has $N$-soliton solutions obtained via a spin-pole ansatz where the dynamics of the poles and of the spin degrees of freedom are determined by the $A_{N-1}$ sCM system and where, again, the real-line and periodic problems for the HWM equation correspond to the rational and trigonometric cases of this sCM system, respectively; see also \\cite{matsuno2022}. \n It is known that the BO equation is related to a hydrodynamic description of the $A_{N-1}$ CM system \\cite{abanov2009}, and that the HWM equation can be derived as a continuum limit of a spin chain which corresponds to a limiting case of the $A_{N-1}$ sCM system \\cite{zhou2015}. This suggested to us that there should exist a more general soliton equation which includes the BO and HWM equations as limiting cases. As shown in the next two paragraphs, the sBO equation \\eqref{eq:sBO} is such an equation. We found the sBO equation by generalizing the logic in \\cite{abanov2009} to the sCM system, making use of a known B\\\"acklund transformation of $A$-type sCM systems \\cite{gibbons1983}. 
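The $N=1$ case of the pole ansatz just mentioned can be verified symbolically: the familiar BO one-soliton $u = 2v/(1+v^2(x-vt)^2)$ solves $u_t + 2uu_x + Hu_{xx} = 0$. The closed form $Hu = -2(x-vt)/((x-vt)^2+1/v^2)$ used below is an assumption not stated in the text; it is the standard Hilbert-transform pair for a Lorentzian, with an overall sign flip because the convention \eqref{eq:H} uses the kernel $1/(x'-x)$. A sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
v = sp.symbols('v', positive=True)     # soliton speed

s = x - v*t                            # comoving coordinate
u = 2*v / (1 + v**2 * s**2)            # BO one-soliton profile

# Hilbert transform of u in the convention (Hf)(x) = (1/pi) pv-int f(x')/(x'-x) dx':
# for this Lorentzian it is the standard transform pair with a sign flip (assumed here).
Hu = -2*s / (s**2 + 1/v**2)

# residual of the BO equation u_t + 2*u*u_x + (Hu)_xx = 0
residual = sp.diff(u, t) + 2*u*sp.diff(u, x) + sp.diff(Hu, x, 2)
print(sp.simplify(residual))           # -> 0
```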
\n\nThe BO equation is given by \n\\begin{equation} \n\\label{eq:BO} \nu_t+2uu_x+Hu_{xx}=0 \n\\end{equation} \nfor an ${\\mathbb R}$-valued function $u=u(x,t)$. We make the ansatz $\\mathsf{U}(x,t)=u(x,t)\\mathsf{P}$ with $\\mathsf{P}$ a non-zero hermitian matrix satisfying $\\mathsf{P}^2=\\mathsf{P}$; then the sBO equation \\eqref{eq:sBO} is satisfied if and only if $u(x,t)$ satisfies the BO equation \\eqref{eq:BO}. It is also interesting to note that by restricting $\\mathsf{U}$ to diagonal matrices, the sBO equation is reduced to $d$ decoupled BO equations: $\\mathsf{U}=\\mathrm{diag}(u_1,\\ldots,u_d)$ satisfies \\eqref{eq:sBO} if and only if $u_\\mu=u_\\mu(x,t)$ solves the BO equation \\eqref{eq:BO} for $\\mu=1,\\ldots,d$. \n\nThe HWM equation describes the time evolution of an ${\\mathbb R}^3$-valued function $\\mathbf{m}=(m^1,m^2,m^3)=\\mathbf{m}(x,t)$ satisfying the constraint $\\mathbf{m}^2\\coloneqq (m^1)^2+(m^2)^2+(m^3)^2=1$ as follows, \n\\begin{equation}\n\\label{eq:HWM} \n\\mathbf{m}_t = \\mathbf{m}\\wedge H \\mathbf{m}_x \n\\end{equation} \nwhere $\\mathbf{m}\\wedge\\mathbf{n} \\coloneqq (m^2n^3-m^3n^2,m^3n^1-m^1n^3,m^1n^2-m^2n^1)$ is the usual wedge product of three-vectors. By scaling $\\mathsf{U}(x,t)\\to \\lambda\\mathsf{U}(x,2\\lambda t)$ and changing variables $2\\lambda t\\to t$ with $\\lambda>0$ a scaling parameter, the sBO equation \\eqref{eq:sBO} becomes \n\\begin{equation} \n\\label{eq:sBO1} \n\\mathsf{U}_t + \\frac12 \\{\\mathsf{U},\\mathsf{U}_x\\} + \\frac1{2\\lambda}H\\mathsf{U}_{xx}+\\frac{\\mathrm{i}}{2}[\\mathsf{U},H\\mathsf{U}_x] =0 . \n\\end{equation} \nThis reduces to a generalization of the HWM equation in the limit $\\lambda\\to\\infty$ if we impose the constraint $\\mathsf{U}^2=I$, where $I$ denotes the identity matrix. 
\nIndeed, $\\mathsf{U}^2=I$ implies $ \\{\\mathsf{U},\\mathsf{U}_x\\}=0$, and by specializing $\\mathsf{U}$ to $2\\times 2$ traceless hermitian matrices using the parametrization \n\\begin{equation} \n\\mathsf{U} = \\mathbf{m}\\cdot\\boldsymbol{\\sigma} = \\begin{pmatrix} m^3 & m^1-\\mathrm{i} m^2\\\\ m^1+\\mathrm{i} m^2 & -m^3 \\end{pmatrix} \n\\end{equation} \nwith $\\boldsymbol{\\sigma}=(\\sigma_1,\\sigma_2,\\sigma_3)$ the Pauli matrices, \\eqref{eq:HWM} is obtained from \\eqref{eq:sBO1} in the limit $\\lambda\\to\\infty$, while $\\mathsf{U}^2=I$ is equivalent to $\\mathbf{m}^2=1$. We note that, while this reduction of the sBO equation to the HWM equation is mathematically simple, there is another similar but more complicated reduction explained in (ii) below which is more interesting from a physics point of view. \n\n\\paragraph{(ii) Physics applications.} While hydrodynamics was initially developed to describe the propagation of fluids, recent developments in physics have established that hydrodynamic equations can provide a powerful tool to compute transport properties of strongly correlated electron systems like the cuprates or graphene; see e.g.\\ \\cite{hartnoll2007,andreev2011,svintsov2013}. Moreover, there exists a variety of topological such systems where the physical behavior is independent of model details and, in such a situation, one can expect a successful description by integrable hydrodynamic equations (well-known arguments to explain this relation between universality and integrability in the context of soliton equations were given by Calogero \\cite{calogero1991}); as an example, we mention the use of the BO equation to describe nonlinear waves at the boundary of fractional quantum Hall effect systems \\cite{bettelheim2006,wiegmann2012}. \n\nReal electrons have spin and, for this reason, standard hydrodynamic equations describing the time evolution of a scalar density can only account for situations where the electron spin can be ignored. 
While this is the case for many conventional fractional quantum Hall effect systems, there also exist interesting such systems where the electron spin is important \\cite{senthil1999,mccann2006,bernevig2006}. For such a system, one is interested in a hydrodynamic description by a soliton equation describing the time evolution of a fluid of particles carrying spin. The sBO equation \\eqref{eq:sBO}, in the simplest non-trivial case when $\\mathsf{U}$ is a hermitian $2\\times 2$ matrix, is a natural candidate for such an equation: as shown below, the sBO equation in this case can be written as a coupled system describing the time evolution of a charge- and a spin-density. Thus, we believe that it would be interesting to investigate if (a quantum version of) the sBO equation can be used to describe phenomena observed in quantum Hall effect systems where spin cannot be ignored, in generalization of results in \\cite{bettelheim2006,wiegmann2012}. We recently proposed that the ncILW equation is relevant for parity invariant fractional Hall effect systems \\cite{berntson2020}, and this suggests to us that its spin generalization presented in this paper (see \\eqref{eq:sncILW}) will find applications in the context of the quantum spin Hall effect \\cite{bernevig2006}. \n\nAs another possible application of the sBO equation in physics, we mention the relation of the BO equation to conformal field theory of spin-less fermions \\cite{abanov2005}. While spin-less fermions\\footnote{To be more precise: spin-less chiral fermions in $1+1$ spacetime dimensions.} are among the simplest examples of a conformal field theory, there are conformal field theories that are natural spin generalizations of these models known as Wess-Zumino-Witten models \\cite{difrancesco1997} which can take electron spin (and more complicated internal degrees of freedom) into account. 
We expect that, in a similar way as the BO equation is related to spin-less fermions \\cite{abanov2005}, the sBO equation can be related to Wess-Zumino-Witten models. It would be interesting to substantiate this expectation. \n\nWe now rewrite the sBO equation in the case where $\\mathsf{U}$ is a hermitian $2\\times 2$ matrix as a coupled system describing the time evolution of a charge density $u$ and a spin density $\\mathbf{m}$. For that, we parametrize $\\mathsf{U}$ as follows, \n\\begin{equation} \n\\label{eq:mUansatz}\n\\mathsf{U} = \\frac{u}{2}\\left( I + \\mathbf{m}\\cdot\\boldsymbol{\\sigma}\\right) = \\frac{u}{2} \\begin{pmatrix} 1 +m^3 & m^1-\\mathrm{i} m^2\\\\ m^1+\\mathrm{i} m^2 & 1-m^3 \\end{pmatrix} \n\\end{equation} \nwhere $u=u(x,t)$ and $\\mathbf{m}=(m^1,m^2,m^3)=\\mathbf{m}(x,t)$ are ${\\mathbb R}$- and ${\\mathbb R}^3$-valued functions, respectively. \nWith this parametrization, we find after some computations that \\eqref{eq:sBO} is equivalent to\n\\begin{equation} \n\\label{eq:utbmt} \n\\begin{split} \nu_t + (1+\\mathbf{m}^2)uu_x +\\mathbf{m}\\cdot\\mathbf{m}_x u^2 +Hu_{xx} &= 0 ,\n\\\\\n\\mathbf{m}_t + u_x\\mathbf{m}(1-\\mathbf{m}^2)+u[\\mathbf{m}_x-\\mathbf{m}(\\mathbf{m}\\cdot\\mathbf{m}_x)] +\\frac1u[H(u\\mathbf{m})_{xx} -\\mathbf{m} Hu_{xx}] -\\mathbf{m}\\wedge H(u\\mathbf{m})_{x} \n &=\\mathbf{0} .\n\\end{split} \n\\end{equation}\nThis system combines and generalizes the physics of the BO equation and of the HWM equation in a non-trivial way, preserving the exact solvability. Indeed, setting $\\mathbf{m}(x,t)=\\mathbf{m}_0$ (constant) such that $\\mathbf{m}_0^2=1$, the first equation in \\eqref{eq:utbmt} becomes the BO equation \\eqref{eq:BO}, while the second equation is trivially fulfilled. 
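The reduction just described can be checked mechanically: for constant $\mathbf m = \mathbf m_0$ with $\mathbf m_0^2=1$, linearity of $H$ gives $H(u\mathbf m)_{xx} = \mathbf m_0 Hu_{xx}$ and $H(u\mathbf m)_x = \mathbf m_0 Hu_x$. The sympy sketch below treats the Hilbert-transform images of $u$ as formal symbols and confirms that the second equation of \eqref{eq:utbmt} then vanishes identically while the first reduces to the BO equation \eqref{eq:BO}.

```python
import sympy as sp

u, u_t, u_x, Hu_x, Hu_xx = sp.symbols('u u_t u_x Hu_x Hu_xx')
m1, m2, m3 = sp.symbols('m1 m2 m3', real=True)
m0 = sp.Matrix([m1, m2, m3])

# first equation of the u/m system with m = m0 constant (so m . m_x = 0):
first = u_t + (1 + m0.dot(m0))*u*u_x + Hu_xx

# second equation with m_t = m_x = 0; by linearity of H,
# H(u m)_xx = m0 * Hu_xx and H(u m)_x = m0 * Hu_x:
second = u_x*(1 - m0.dot(m0))*m0 + (m0*Hu_xx - m0*Hu_xx)/u - m0.cross(m0*Hu_x)

# impose the constraint m0 . m0 = 1 by eliminating m3**2:
unit = {m3**2: 1 - m1**2 - m2**2}
first = sp.expand(first).subs(unit)
second = second.subs(unit)

print(sp.simplify(first - (u_t + 2*u*u_x + Hu_xx)))   # -> 0, i.e. the BO equation
print([sp.simplify(c) for c in second])               # -> [0, 0, 0]
```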
On the other hand, setting $u(x,t)=u_0$ (constant) and transforming $\\mathbf{m}(x,t)\\to \\mathbf{m}(x-u_0t,u_0t)$, the first equation in \\eqref{eq:utbmt} is satisfied if we impose the condition $\\mathbf{m}^2=1$ and, with that, the second equation becomes the HWM equation \\eqref{eq:HWM} in the limit $u_0\\to \\infty$. This suggests that \\eqref{eq:utbmt} can be well approximated by the BO equation if the variation of the spin density in space and time can be ignored, while the HWM equation is a good approximation to \\eqref{eq:utbmt} (on an appropriate time scale) if the charge density is large and only deviates slightly from a constant background $u_0$. \n\n\n\\paragraph{(iii) Multi-soliton solutions.} Following Ref.~\\cite{gibbons1984}, we use the Dirac bra-ket notation \\cite{dirac1939} and denote by $|e\\rangle$ and $\\langle f|$ vectors in some $d$-dimensional complex vector space $\\mathcal{V}$ and its dual $\\mathcal{V}^*$, respectively; in particular, $|e\\rangle\\langle f|$ represents a $d\\times d$ matrix with complex entries, and $|e\\rangle\\langle f|^\\dag = |f\\rangle\\langle e|$ is the hermitian conjugate of this matrix. Moreover, $*$ is complex conjugation. (Readers not familiar with this notation can identify $|e\\rangle\\in \\mathcal{V}$ with $(e_{\\mu})_{\\mu=1}^d\\in{\\mathbb C}^d$, $\\langle f|\\in \\mathcal{V}^*$ with $(f^*_{\\mu})_{\\mu=1}^d\\in{\\mathbb C}^d$, $\\langle f|e \\rangle$ with the scalar product $\\sum_{\\mu=1}^d f_\\mu^*e^{\\phantom*}_\\mu$, and $|e\\rangle\\langle f|$ with the matrix $(e^{\\phantom*}_\\mu f_\\nu^*)_{\\mu,\\nu=1}^d$.) 
\n\nWe show in Section~\\ref{sec:sBO} that, for arbitrary integer $N\\geq 1$ and $x\\in{\\mathbb R}$, \n\\begin{equation}\n\\label{eq:mUintro} \n\\mathsf{U}(x,t) = \\mathrm{i} \\sum_{j=1}^N |e_j(t)\\rangle \\langle f_j(t)| \\frac{1}{x-a_j(t)} - \\mathrm{i} \\sum_{j=1}^N |f_j(t)\\rangle \\langle e_j(t)| \\frac{1}{x-a_j(t)^*}\n\\end{equation} \nis a solution of the sBO equation \\eqref{eq:sBO} provided that the variables $a_j=a_j(t)\\in{\\mathbb C}$, $|e_j\\rangle=|e_j(t)\\rangle\\in\\mathcal{V}$ and $\\langle f_j|=\\langle f_j(t)|\\in\\mathcal{V}^*$ satisfy the following time evolution equations,\\footnote{$\\sum_{k\\neq j}^N$ is short for $\\sum_{k=1,k\\neq j}^N$.} \n\\begin{equation} \n\\label{eq:sCMintro} \n\\begin{split} \n\\frac{\\mathrm{d}^2}{\\mathrm{d} t^2} a_j = & \\, 8\\sum_{k\\neq j}^N\\frac{\\langle f_j|e_k\\rangle\\langle f_k|e_j\\rangle }{(a_j-a_k)^3},\\\\\n\\frac{\\mathrm{d}}{\\mathrm{d} t}|e_j\\rangle = &\\, 2\\mathrm{i}\\sum_{k\\neq j}^N \\frac{|e_k\\rangle\\langle f_k|e_j\\rangle}{(a_j-a_k)^2},\\\\ \n\\frac{\\mathrm{d}}{\\mathrm{d} t}\\langle f_j| = &\\, -2\\mathrm{i}\\sum_{k\\neq j}^N\\frac{\\langle f_j|e_k\\rangle \\langle f_k|}{(a_j-a_k)^2} \n\\end{split} \n\\end{equation} \nfor $j=1,\\ldots,N$, with initial conditions that satisfy certain constraints; see Theorem~\\ref{thm:sBO} for a precise formulation. \nMoreover, we show that initial conditions satisfying the pertinent constraints can be constructed by solving a linear algebra problem involving a $N d\\times N d$ hermitian matrix, and that the solution \\eqref{eq:mUintro} depends on $N d$ complex parameters; see Section~\\ref{sec:NsolitonssBO} for details. 
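Two structural features of \eqref{eq:sCMintro} follow directly from the pairwise antisymmetry of its right-hand sides and are easy to confirm numerically: $\frac{\mathrm{d}^2}{\mathrm{d} t^2}\sum_j a_j = 0$ (the total "momentum" of the poles is conserved) and $\frac{\mathrm{d}}{\mathrm{d} t}\sum_j |e_j\rangle\langle f_j| = 0$. The Python sketch below evaluates the right-hand sides of \eqref{eq:sCMintro} at random data; the sizes $N=4$, $d=3$ and the data themselves are arbitrary illustrative choices, and the bras $\langle f_j|$ are stored directly through their components, so no complex conjugation enters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 3                               # number of poles, spin dimension (arbitrary)
a = rng.standard_normal(N) + 1j*rng.standard_normal(N)            # poles a_j
e = rng.standard_normal((N, d)) + 1j*rng.standard_normal((N, d))  # kets |e_j>
f = rng.standard_normal((N, d)) + 1j*rng.standard_normal((N, d))  # bras <f_j| (components)

# right-hand sides of the spin-pole evolution equations
acc = np.zeros(N, dtype=complex)          # d^2 a_j / dt^2
de = np.zeros((N, d), dtype=complex)      # d|e_j> / dt
df = np.zeros((N, d), dtype=complex)      # d<f_j| / dt
for j in range(N):
    for k in range(N):
        if k != j:
            fj_ek = f[j] @ e[k]           # <f_j|e_k>
            fk_ej = f[k] @ e[j]           # <f_k|e_j>
            acc[j] += 8 * fj_ek * fk_ej / (a[j] - a[k])**3
            de[j] += 2j * fk_ej * e[k] / (a[j] - a[k])**2
            df[j] += -2j * fj_ek * f[k] / (a[j] - a[k])**2

# the pairwise forces cancel, so the total acceleration vanishes
print(abs(acc.sum()))                     # ~0 up to rounding

# sum_j |e_j><f_j| is a constant of motion
M_dot = sum(np.outer(de[j], f[j]) + np.outer(e[j], df[j]) for j in range(N))
print(np.max(np.abs(M_dot)))              # ~0 up to rounding
```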
\n\nIt is important to note that, up to a rescaling of time, the time evolution equations \\eqref{eq:sCMintro} and one of the above-mentioned constraints on the initial conditions (given in \\eqref{eq:fjej}) define the sCM model discovered by Gibbons and Hermsen \\cite{gibbons1984}, and another constraint (given in \\eqref{eq:BTt=0}) is a known B\\\"acklund transformation of the sCM model \\cite{gibbons1983}.\nMoreover, for $d=1$, the solution above reduces to the $N$-soliton solution of the BO equation found in \\cite{chen1979} that relates the BO equation to the rational $A_{N-1}$ CM model. Thus, just as the sCM model is a natural generalization of the simplest non-trivial CM model, the sBO equation is a natural generalization of the BO equation. \n\nAs shown by Gibbons and Hermsen in their original paper \\cite[Section 6]{gibbons1984}, the sCM model describes the time evolution of poles and spins of rational solutions of the boomeron equation, which is a soliton equation introduced and studied in \\cite{calogero1976coupled}. One might wonder if there is a relation between this fact and our work. We cannot exclude this possibility: it is conceivable to us that it is possible to derive the boomeron equation as a local limit $\\delta\\downarrow 0$ of the sncILW equation \\eqref{eq:sncILW} or the sILW equation \\eqref{eq:sILW} introduced below and, in this way, there could be an indirect relation. However, even if this is the case, we expect that it would be challenging to make this relation precise. 
Indeed, while the boomeron equation is similar to \\eqref{eq:utbmt} in that it is a system describing the time evolution of a vector coupled to a scalar, it is different in other important ways; in particular, \\eqref{eq:utbmt} is rotation invariant (and this is also the case for certain local limits of these equations that we derive, see Section~\\ref{sec:locallimit} for details), while the boomeron equation is not.\n\n\\paragraph{(iv) Spin generalization of ncILW equation and soliton-CM correspondence.} For $\\delta>0$, we define the following integral operators acting on functions $f$ of $x\\in{\\mathbb R}$, \n\\begin{equation} \n\\label{eq:TT}\n\\begin{split} \n(Tf)(x) &= \\frac1{2\\delta}\\Xint-_{{\\mathbb R}}\\coth\\left(\\frac{\\pi}{2\\delta}(x'-x)\\right)f(x')\\,\\mathrm{d}{x'}, \\\\\n(\\tilde{T} f)(x) &= \\frac1{2\\delta}\\int_{{\\mathbb R}}\\tanh\\left(\\frac{\\pi}{2\\delta}(x'-x)\\right)f(x')\\,\\mathrm{d}{x'}.\n\\end{split} \n\\end{equation} \nThe ncILW equation was introduced in \\cite{berntson2020} as a non-chiral version of the ILW equation and is given for two scalar-valued functions $u(x,t)$ and $v(x,t)$ by\n\\begin{equation} \n\\label{eq:ncILW} \n\\begin{split} \n&u_t + 2 u u_x + Tu_{xx}+\\tilde{T}v_{xx}=0,\\\\\n&v_t - 2 v v_x - Tv_{xx}-\\tilde{T}u_{xx}=0.\n\\end{split} \n\\end{equation} \nHere, we introduce the following spin generalization of the ncILW equation given for two square matrix-valued functions $\\mathsf{U}=\\mathsf{U}(x,t)$ and $\\mathsf{V}=\\mathsf{V}(x,t)$ by\n\\begin{equation} \n\\label{eq:sncILW} \n\\boxed{\n\\begin{aligned} \n\\mathsf{U}_t &+ \\{\\mathsf{U},\\mathsf{U}_x\\} + T\\mathsf{U}_{xx}+\\tilde{T} \\mathsf{V}_{xx} +\\mathrm{i} [\\mathsf{U},T\\mathsf{U}_x]+\\mathrm{i} [\\mathsf{U},\\tilde{T} \\mathsf{V}_x] =0,\\\\\n\\mathsf{V}_t &- \\{\\mathsf{V},\\mathsf{V}_x\\} - T\\mathsf{V}_{xx}-\\tilde{T} \\mathsf{U}_{xx} +\\mathrm{i} [\\mathsf{V},T\\mathsf{V}_x]+\\mathrm{i}[\\mathsf{V},\\tilde{T} \\mathsf{U}_x] =0\n\\end{aligned} 
\n}\n\\end{equation} \n(again, the matrix size $d\\times d$ of $\\mathsf{U}$ and $\\mathsf{V}$ is arbitrary; $d=1$ corresponds to the ncILW equation \\eqref{eq:ncILW}).\nWe refer to \\eqref{eq:sncILW} as the {\\em spin ncILW (sncILW) equation}. \nOur main result on the sncILW equation is a construction of $N$-soliton solutions obtained via a spin-pole ansatz where the dynamics of the spins and poles are determined by the $A_{N-1}$ sCM model in the hyperbolic case (III), in natural generalization of a known result about the ncILW equation \\cite{berntson2020}; see Theorem~\\ref{thm:sncILW}. \n\nNote that $\\lim_{\\delta\\to+\\infty}\\tilde{T} =0$ \\cite{berntson2020} and thus, clearly, the two equations in \\eqref{eq:sncILW} decouple in the limit $\\delta\\to\\infty$. \nMoreover, since the limit $\\delta\\to\\infty$ of $T$ coincides with the Hilbert transform $H$ given in \\eqref{eq:H} \\cite{kodama1981} (see also \\cite{berntson2020}), the first of these decoupled equations is identical to the sBO equation \\eqref{eq:sBO}, and the second equation is obtained from the sBO equation \\eqref{eq:sBO} by the parity transformation $\\mathsf{U}(x,t)\\to \\mathsf{V}(x,t)=\\mathsf{U}(-x,t)$. 
Since the sBO equation and the one obtained from it by this parity transformation are different, the sBO equation \\eqref{eq:sBO} is chiral, and we therefore regard \\eqref{eq:sncILW} as a non-chiral generalization of the sBO equation where the two chiral degrees of freedom, $\\mathsf{U}$ and $\\mathsf{V}$, are coupled by the operator $\\tilde{T}$.\n\nWe also introduce periodic versions of the sBO and sncILW equations: the former is given by \\eqref{eq:sBO} for an $L$-periodic, square matrix-valued function $\\mathsf{U}$, $\\mathsf{U}(x+L,t)=\\mathsf{U}(x,t)$ with $L>0$ a fixed parameter, and the Hilbert transform\n\\begin{equation}\n\\label{eq:Hp} \n(Hf)(x)\\coloneqq \\frac1{L}\\Xint-_{-L\/2}^{L\/2} \\cot\\left(\\frac{\\pi}{L}(x'-x) \\right)f(x')\\,\\mathrm{d}{x'}; \n\\end{equation} \nthe latter is given by \\eqref{eq:sncILW} for $L$-periodic, square matrix-valued functions $\\mathsf{U}$ and $\\mathsf{V}$ and the integral operators \n\\begin{equation} \n\\label{eq:TTe}\n\\begin{split} \n(Tf)(x) &= \\frac1{\\pi}\\Xint-_{-L\/2}^{L\/2}\\zeta_1(x'-x)f(x')\\,\\mathrm{d}{x'}, \\\\\n(\\tilde{T} f)(x) &= \\frac1{\\pi}\\int_{-L\/2}^{L\/2}\\zeta_1(x'-x+\\mathrm{i}\\delta)f(x')\\,\\mathrm{d}{x'},\n\\end{split} \n\\end{equation} \nwhere \n\\begin{equation}\n\\label{eq:zeta1} \n\\zeta_{1}(z)\\coloneqq \\zeta(z)-\\frac{\\eta_{1}}{\\omega_{1}}z\n\\end{equation} \nwith $\\zeta(z)$ the Weierstrass $\\zeta$-function with half-periods $(\\omega_1,\\omega_2)=(L\/2,\\mathrm{i}\\delta)$ and $\\eta_1=\\zeta(\\omega_1)$ \\cite{DLMF}. \nAs in the real-line case, the periodic sBO equation is chiral, and the periodic sncILW equation reduces to two decoupled periodic sBO equations of opposite chirality in the limit $\\delta\\to\\infty$ (details can be found in \\cite{berntson2020}). 
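\n\nFor the reader's convenience, we record the degeneration underlying the last statement (standard limits of the Weierstrass $\\zeta$-function \\cite{DLMF}; our computation): as $\\delta\\to\\infty$ with $L$ fixed, $\\eta_1\/\\omega_1\\to \\pi^2\/(3L^2)$ and\n\\begin{equation}\n\\zeta_1(z)\\to \\frac{\\pi}{L}\\cot\\Big(\\frac{\\pi}{L}z\\Big),\n\\end{equation}\nso that $T$ in \\eqref{eq:TTe} reduces to the periodic Hilbert transform \\eqref{eq:Hp}, while the coupling via $\\tilde{T}$ drops out in this limit \\cite{berntson2020}.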
\n\nTo explain the relation between the soliton equations discussed above and CM systems, we recall that each CM system comes in four versions which can be distinguished by the special function\n\\begin{equation} \n\\label{eq:alpha} \n\\alpha(z)\\coloneqq \\begin{cases} 1\/z & \\text{(I: rational case)}\\\\\n(\\pi\/L)\\cot(\\pi z\/L) & \\text{(II: trigonometric case)}\\\\\n(\\pi\/2\\delta)\\coth(\\pi z\/2\\delta) & \\text{(III: hyperbolic case)}\n\\end{cases} \\quad (z\\in{\\mathbb C}) \n\\end{equation} \nwith $L>0$ and $\\delta>0$ fixed parameters; the fourth version corresponds to \n\\begin{equation} \n\\alpha(z)\\coloneqq \\zeta_1(z) \\quad \\text{(IV: elliptic case)}, \n\\end{equation} \nbut we do not include it in \\eqref{eq:alpha} since it is more complicated and, for this reason, our results in this paper are restricted to cases I--III. \nThis function $\\alpha(z)$ is important since, for example, it determines the corresponding CM interaction potential as $V(z)=-\\alpha'(z)$ \\cite{olshanetsky1981}. \nMoreover, using these special functions, one can define integral operators\n\\begin{equation} \n\\label{eq:TTalpha}\n\\begin{split} \n(Tf)(x) &\\coloneqq \\frac1\\pi \\Xint- \\alpha(x'-x)f(x')\\,\\mathrm{d}{x'} \\quad \\text{(cases I--IV)}, \\\\\n(\\tilde{T} f)(x)&\\coloneqq \\frac1\\pi \\Xint- \\alpha(x'-x+\\mathrm{i}\\delta)f(x')\\,\\mathrm{d}{x'} \\quad \\text{(cases III and IV)}, \n\\end{split} \n\\end{equation} \nwhere the integrations are over ${\\mathbb R}$ in cases I and III and over $[-L\/2,L\/2]$ in cases II and IV. \nWe note that, in cases I and II, $T$ is identical to the Hilbert transform $H$ in \\eqref{eq:H} and \\eqref{eq:Hp}, respectively, suggesting that the real-line and periodic versions of the sBO equation are related to the $A$-type sCM system in the rational and trigonometric cases, respectively. 
Similarly, in cases III and IV, the operators $T$ and $\\tilde{T}$ in \\eqref{eq:TTalpha} are identical to the operators in \\eqref{eq:TT} and \\eqref{eq:TTe}, suggesting that the real-line and periodic versions of the sncILW equation are related to the $A$-type sCM system in the hyperbolic and elliptic cases, respectively. The $N$-soliton solutions of these equations obtained in this paper confirm these expectations in cases I--III, and we conjecture that this result can be generalized to case IV. Thus, the equations proposed in the present paper extend the relation between soliton equations and CM systems proposed in \\cite{berntson2020} to the spin setting. \n\n\nAs shown by two of us in collaboration with Klabbers \\cite{berntsonklabbers2021}, there exists a non-chiral variant of an intermediate generalization of the Heisenberg ferromagnet equation which generalizes the HWM equation and which has soliton solutions given by a spin-pole ansatz governed by the hyperbolic sCM model, in generalization of a result in \\cite{berntsonklabbers2020} for the HWM equation mentioned above; this {\\em non-chiral intermediate Heisenberg ferromagnet equation} is given by \n\\begin{equation} \n\\begin{split} \n\\mathbf{m}_t & =\\, + \\mathbf{m}\\wedge T\\mathbf{m}_{x} - \\mathbf{m}\\wedge \\tilde{T}\\mathbf{n}_{x},\\\\\n\\mathbf{n}_t & =\\, - \\mathbf{n}\\wedge T\\mathbf{n}_{x} + \\mathbf{n}\\wedge \\tilde{T}\\mathbf{m}_{x}\n\\end{split} \n\\end{equation} \nfor two ${\\mathbb R}^3$-valued functions $\\mathbf{m}=\\mathbf{m}(x,t)$ and $\\mathbf{n}=\\mathbf{n}(x,t)$ satisfying $\\mathbf{m}^2=\\mathbf{n}^2=1$ \\cite{berntsonklabbers2021}, and we checked that it is obtained from the sncILW equation \\eqref{eq:sncILW} in a similar way as the HWM equation is obtained from the sBO equation \\eqref{eq:sBO} (as explained in the paragraph containing \\eqref{eq:sBO1}), after changing the sign of $\\mathsf{V}$ (the latter is merely a convention).\n\nOur results in the present 
paper, together with results in the literature on the standard (chiral) ILW equation \\cite{kodama1981} and on the non-chiral intermediate Heisenberg ferromagnet equation \\cite{berntsonklabbers2021}, suggest that the following is also an integrable generalization of the sBO equation, \n\\begin{equation} \n\\label{eq:sILW} \n\\boxed{ \n\\mathsf{U}_t + \\{\\mathsf{U},\\mathsf{U}_x\\} + \\frac1\\delta\\mathsf{U}_x + T\\mathsf{U}_{xx} + \\mathrm{i} [\\mathsf{U},T\\mathsf{U}_x] =0\n}\n\\end{equation} \n(we added the term $\\mathsf{U}_x\/\\delta$ for convenience; note that this term can be removed by a Galilean transformation $x\\to x-t\/\\delta$). We refer to \\eqref{eq:sILW} as the {\\em spin ILW (sILW) equation}. Note that, in the limit $\\delta\\to\\infty$, it reduces to the sBO equation \\eqref{eq:sBO}. We show that, in generalization of the well-known fact that the standard ILW equation reduces to the Korteweg-de Vries (KdV) equation in a limit $\\delta\\downarrow 0$ (see e.g. \\cite{scoufis2005}), \n \\eqref{eq:sILW} reduces to the following matrix generalization of the KdV equation in a certain $\\delta\\downarrow 0$ limit, \n\\begin{equation} \n\\label{eq:mKdV} \n\\mathsf{U}_t + \\{\\mathsf{U},\\mathsf{U}_x\\} + \\mathsf{U}_{xxx} = 0 \n\\end{equation} \n(see Section~\\ref{sec:locallimit}). \nThis so-called {\\em matrix KdV equation} was introduced by Lax \\cite{lax1968}, and its multisoliton solutions were constructed in \\cite{goncharenko2001}. 
\nMoreover, the sILW equation allows for another limit $\\delta\\downarrow 0$ leading to the following generalization of the Heisenberg ferromagnet (HF) equation \n\\begin{equation} \n\\label{eq:sHF} \n\\mathsf{U}_t + \\mathrm{i} [\\mathsf{U},\\mathsf{U}_{xx}]=0\n\\end{equation} \nwith the constraint $\\mathsf{U}^2=I$, where $I$ denotes the identity matrix (see Section~\\ref{sec:locallimit} for details); note that \\eqref{eq:sHF} reduces to the standard HF equation if $\\mathsf{U}$ is restricted to lie in the class of $2\\times 2$ hermitian traceless matrices. \nThus, the sILW equation is an interpolation between the matrix KdV equation, the HF equation, and the sBO equation. \n \n We call the equations introduced in the present paper the {\\em spin} (rather than the {\\em matrix}) BO and (nc)ILW equations since, in contrast to the matrix KdV equation \\eqref{eq:mKdV}, they incorporate the nonlinear physics of charge- and spin-densities in a single equation; as discussed above, this property is lost in the local limit $\\delta\\downarrow 0$ (since, depending on the scaling, either the Heisenberg-term $\\mathrm{i} [\\mathsf{U},\\mathsf{U}_{xx}]$ or the KdV-terms $\\{\\mathsf{U},\\mathsf{U}_x\\}+\\mathsf{U}_{xxx}$ disappear in that limit).\n\n\\paragraph{(v) Summary and open problems.} \nIn this paper, we introduce a family of equations containing well-known soliton equations as special and limiting cases. This family of equations consists of the sBO equation \\eqref{eq:sBO}, the sncILW equation \\eqref{eq:sncILW}, and the sILW equation \\eqref{eq:sILW} for different matrix sizes $d\\in{\\mathbb Z}_{\\geq 2}$; for $d=1$, these equations reduce to the known BO, ncILW, and ILW equations, respectively. \nWe show that the sBO and sncILW equations are exactly solvable and, in doing so, establish a relation to sCM models. 
While we do not give any result about the sILW equation for $0<\\delta<\\infty$, we believe that the arguments we present about this equation strongly suggest that it is integrable.\nObviously, it would be interesting to generalize other known results about the special case $d=1$ (e.g., Hirota bilinear forms, Lax pairs, B\\\"acklund transformations, and inverse scattering transforms) to $d>1$.\n\nFor the ncILW equation, we were able to generalize the multi-soliton solutions in the real-line case to the periodic case in \\cite{berntson2020}. \nHowever, for the sncILW equation \\eqref{eq:sncILW}, we do not have this result. We found that the construction of soliton solutions of the sncILW equation in the periodic case is more challenging for $d>1$ than for $d=1$ for the following reasons: first, the generalization of Proposition~\\ref{prop:BT} to the elliptic case (IV) is more difficult (for $d=1$, this generalization is known), and second, the commutator terms in \\eqref{eq:sncILW} lead to severe complications. We thus believe that the construction of soliton solutions of the periodic sncILW equation is an interesting problem requiring new ideas.\n\nWe were inspired to search for the soliton equations presented in this paper by heuristic arguments suggested to us by the known relation between the quantum version of the BO equation and the Calogero-Sutherland model\\footnote{This is the quantum version of the trigonometric $A$-type CM model.} \\cite{abanov2005}, together with a generalization of this relation to the corresponding elliptic Calogero-Sutherland model that led us to the ncILW equation \\cite{berntson2020}. \nIt would be interesting to promote these heuristic arguments to precise results by constructing a second quantization of the quantum versions of the trigonometric sCM models, in generalization of results in \\cite{carey1999}. 
We believe that this can open a way towards finding a precise relation between the sBO equation and Wess-Zumino-Witten models.\n\nAs already discussed in (ii) above, there are several systems in the real world motivating the development of hydrodynamic descriptions of identical particles with spin; however, hydrodynamics for coupled charge- and spin-densities is not a fully developed subject. As one specific example, we mention recent work deriving such hydrodynamic equations using the known hydrodynamic description of the standard CM model as a guide \\cite{kulkarni2009}; see also Xing's PhD thesis \\cite{xing2015} aiming in this direction.\nWe hope that the present paper opens up a way to push these results further towards a hydrodynamic description of the sCM model which can serve as a prototype of spinful hydrodynamics.\n\n\\paragraph{Plan of paper.} We state and prove the B\\\"acklund transformations for the sCM model in the rational, trigonometric and hyperbolic cases in Section~\\ref{sec:sCM}. In Sections~\\ref{sec:sBO} and \\ref{sec:sncILW}, we state and prove our multi-soliton solutions of the sBO and sncILW equation, respectively; we note that while the results in Section~\\ref{sec:sBO} on the sBO equation are for the real-line and periodic cases, the results in Section~\\ref{sec:sncILW} on the sncILW equation only apply to the real-line case. Section~\\ref{sec:results} contains the Hamiltonian formulations of the sBO and sncILW equations (Section~\\ref{sec:Hamiltonian}), details on the local limit $\\delta\\downarrow 0$ of the sILW equation (Section~\\ref{sec:locallimit}), and the spin generalization of the bidirectional BO equation that led us to the results in the present paper (Section~\\ref{sec:2sBO}). 
We also include three appendices with functional identities that we need (Appendix~\\ref{app:alphaV}), and the non-hermitian solutions of the sBO equation (Appendix~\\ref{app:sBOsolutions}) and the sncILW equation (Appendix~\\ref{app:sncILWsolutions}), respectively. \n\n\n\\section{Spin Calogero-Moser systems}\n\\label{sec:sCM} \nIn this section, we collect results about the $A$-type sCM systems due to Gibbons and Hermsen \\cite{gibbons1984} that we need. In particular, we define the $A$-type sCM systems in Section~\\ref{sec:sCM_def}, and, following \\cite{gibbons1983}, we state and prove a B\\\"acklund transformation for these systems in Sections~\\ref{sec:sCM_BT} and \\ref{sec:BT_proof}. For simplicity, we restrict our discussion to cases I--III; we expect a similar result for the elliptic case IV, but this case is more complicated and thus left to future work. \n\nWe believe that the results in this section are of interest in their own right: to our knowledge, the proof of the relevant B\\\"acklund transformation (see \\eqref{eq:BT}) in the literature has previously been restricted to the rational case (I) and $M=N$ \\cite{gibbons1983}. We mention in passing that, while the original paper introduced and solved the sCM model only in the rational case \\cite{gibbons1984}, the integrability of the sCM model in all cases I--IV was proved in \\cite{hikami1993}, and explicit solutions of the sCM model in the elliptic case (IV) can be found in \\cite{krichever1995}. 
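\n\nBefore proceeding, we note for later use that, as stated in the Introduction, the interaction potentials $V(z)$ in \\eqref{eq:V} below and the functions $\\alpha(z)$ in \\eqref{eq:alpha} are related by $V(z)=-\\alpha'(z)$ in each of the cases I--III; as an explicit check in the trigonometric case (our computation),\n\\begin{equation}\n-\\frac{\\mathrm{d}}{\\mathrm{d} z}\\frac{\\pi}{L}\\cot\\Big(\\frac{\\pi}{L}z\\Big) = \\frac{(\\pi\/L)^2}{\\sin^2(\\pi z\/L)},\n\\end{equation}\nand the rational and hyperbolic cases follow in the same way.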
\n\n\\subsection{Definition}\n\\label{sec:sCM_def}\nLet $N\\in{\\mathbb Z}_{\\geq 1}$ be arbitrary and let\n\\begin{equation} \n\\label{eq:V} \nV(z)\\coloneqq \\begin{cases} 1\/z^2 & \\text{(I: rational case)}\\\\\n(\\pi\/L)^2\/\\sin^2(\\pi z\/L) & \\text{(II: trigonometric case)}\\\\\n(\\pi\/2\\delta)^2\/\\sinh^2(\\pi z\/2\\delta) & \\text{(III: hyperbolic case).}\\\\\n\\end{cases} \\quad (z\\in{\\mathbb C}) \n\\end{equation} \nFor each case I--III, the corresponding $A_{N-1}$ sCM system is a dynamical system of $N$ particles moving in the complex plane and with internal degrees of freedom described by a $d$-dimensional vector space $\\mathcal{V}$ and its dual $\\mathcal{V}^*$, with $d\\in{\\mathbb Z}_{\\geq 1}$ arbitrary. \nDenoting the position of the $j$th particle at time $t\\in{\\mathbb R}$ by $a_j=a_j(t)\\in{\\mathbb C}$ and its internal degrees of freedom by vectors $|e_j\\rangle=|e_j(t)\\rangle\\in \\mathcal{V}$ and $\\langle f_j|=\\langle f_j(t)|\\in \\mathcal{V}^*$, this system can be defined by the time evolution equations\n\\begin{equation} \n\\label{eq:sCM1a} \n\\ddot a_j = -4\\sum_{k\\neq j}^N \\langle f_j|e_k\\rangle\\langle f_k|e_j\\rangle V'(a_j-a_k) \\quad (j=1,\\ldots,N) \n\\end{equation} \nand\n\\begin{equation} \n\\begin{split} \n\\label{eq:sCM1b} \n|\\dot e_j\\rangle &= 2\\mathrm{i}\\sum_{k\\neq j}^N |e_k\\rangle\\langle f_k|e_j\\rangle V(a_j-a_k),\\\\\n\\langle\\dot f_j| & = -2\\mathrm{i}\\sum_{k\\neq j}^N\\langle f_j|e_k\\rangle \\langle f_k| V(a_j-a_k)\n\\end{split} \\quad (j=1,\\ldots,N) \n\\end{equation} \n(the dot indicates differentiation with respect to $t$ and the prime indicates differentiation with respect to the argument of the respective function), together with the following constraints, \n\\begin{equation} \n\\label{eq:sCM1c} \n\\langle f_j|e_j\\rangle = 1 \\quad (j=1,\\ldots,N) \n\\end{equation} \n(our notation is explained in the paragraph above \\eqref{eq:mUintro}).\nObserve that the constraints \\eqref{eq:sCM1c} are 
preserved under the equations of motion \\eqref{eq:sCM1b}.\n\n\n\\subsection{B\\\"acklund transformations}\n\\label{sec:sCM_BT}\nWe consider the sCM system \\eqref{eq:sCM1a}--\\eqref{eq:sCM1c}, together with another such system involving $M\\in{\\mathbb Z}_{\\geq 0}$ particles located at the positions $b_j=b_j(t)\\in{\\mathbb C}$, $j = 1, \\dots, M$, together with spin degrees of freedom $|g_j\\rangle=|g_j(t)\\rangle\\in \\mathcal{V}$ and $\\langle h_j|=\\langle h_j(t)|\\in \\mathcal{V}^*$ (note that the vector spaces $\\mathcal{V}$ and $\\mathcal{V}^*$ are the same as for the first system; while $M=N$ is an important special case, also the cases $M\\neq N$ and, in particular, $M=0$ are interesting). \nMore specifically, the second system is given by the time evolution equations\n\\begin{equation} \n\\label{eq:sCM2a} \n\\ddot b_j = -4\\sum_{k\\neq j}^M \\langle h_j|g_k\\rangle\\langle h_k|g_j\\rangle V'(b_j-b_k) \\quad (j=1,\\ldots,M) \n\\end{equation} \nand \n\\begin{equation} \n\\begin{split} \n\\label{eq:sCM2b} \n|\\dot g_j\\rangle &= 2\\mathrm{i}\\sum_{k\\neq j}^M |g_k\\rangle\\langle h_k|g_j\\rangle V(b_j-b_k),\\\\\n\\langle\\dot h_j| & = -2\\mathrm{i}\\sum_{k\\neq j}^M\\langle h_j|g_k\\rangle \\langle h_k| V(b_j-b_k), \n\\end{split} \\quad (j=1,\\ldots,M) \n\\end{equation} \nand the constraints \n\\begin{equation} \n\\label{eq:sCM2c} \n\\langle h_j|g_j\\rangle = 1 \\quad (j=1,\\ldots,M) .\n\\end{equation} \n\nAs shown in \\cite{gibbons1983} in a special case, two such sCM systems \\eqref{eq:sCM1a}--\\eqref{eq:sCM1c} and \\eqref{eq:sCM2a}--\\eqref{eq:sCM2c} are connected by a B\\\"acklund transformation as follows, \n\\begin{subequations} \n\\label{eq:BT} \n\\begin{align} \n\\label{eq:BTa} \n\\dot a_j \\langle f_j| &= 2\\mathrm{i}\\sum_{k\\neq j}^N \\langle f_j|e_k\\rangle \\langle f_k|\\alpha(a_j-a_k) -2\\mathrm{i}\\sum_{k=1}^M \\langle f_j|g_k\\rangle\\langle h_k|\\alpha(a_j-b_k)\\quad &(j=1,\\ldots,N), \\\\\n\\label{eq:BTb} \n\\dot b_j |g_j\\rangle 
&=-2\\mathrm{i}\\sum_{k\\neq j}^M |g_k\\rangle\\langle h_k|g_j\\rangle\\alpha(b_j-b_k)+2\\mathrm{i}\\sum_{k=1}^N|e_k\\rangle\\langle f_k|g_j\\rangle\\alpha(b_j-a_k) \\quad &(j=1,\\ldots,M), \n\\end{align} \n\\end{subequations} \nwith the function $\\alpha(z)$ given by \\eqref{eq:alpha}. The precise statement is given below. \n\n\\begin{proposition}[B\\\"acklund transformation for sCM system]\n\\label{prop:BT} \nIn each case I--III, the first order equations \\eqref{eq:sCM1b}, \\eqref{eq:sCM2b} and \\eqref{eq:BT}, together with the constraints \\eqref{eq:sCM1c} and \\eqref{eq:sCM2c}, imply the second order equations \\eqref{eq:sCM1a} and \\eqref{eq:sCM2a}. \n\\end{proposition} \nA self-contained proof of Proposition \\ref{prop:BT} can be found in Section~\\ref{sec:BT_proof}.\n\n\n\\begin{remark} \nAs already mentioned, the special case $M=N$ in the rational case (I) was stated and proved in \\cite{gibbons1983}, More specifically, in this special case, the equations \\eqref{eq:BTa} and \\eqref{eq:BTb} reduce to the second and first equations in \\cite[Eq.~(17)]{gibbons1983}, respectively, using the transformation $t\\to -2t$ and the identifications \n\\begin{equation} \n (a_j,\\dot a_j,|e_j\\rangle,\\langle f_j|,b_j,\\dot b_j,|g_j\\rangle,\\langle h_j|) \\to x^+_j,p^+_j,|e^+_j\\rangle,\\langle f^+_j|,x_j,p_j,|e_j\\rangle,\\langle f_j|) \\quad (j=1\\ldots,N).\n\\end{equation} \n\\end{remark} \n\nIt is interesting to note that Proposition~\\ref{prop:BT} has the following consistent reduction when $N=M$,\n\\begin{equation} \n\\label{eq:reduction} \nb_j=a_j^*,\\quad |g_j\\rangle = \\langle f_j|^\\dag = |f_j\\rangle,\\quad \\langle h_j| = |e_j\\rangle^\\dag = \\langle e_j|\\quad (j=1,\\ldots,N)\n\\end{equation} \nwhere $*$ and $\\dag$ indicate complex and hermitian conjugation, respectively. 
\nIndeed, by imposing these conditions, \\eqref{eq:sCM2a}--\\eqref{eq:sCM2c} and \\eqref{eq:BTb} become the hermitian conjugate of \\eqref{eq:sCM1a}--\\eqref{eq:sCM1c} and \\eqref{eq:BTa}, respectively, and Proposition~\\ref{prop:BT} simplifies as follows. \n\n\\begin{corollary}\n\\label{cor:BT} \nIn each case I--III, the first order equations \\eqref{eq:sCM1b} and \n\\begin{equation}\n\\label{eq:BThermitian} \n\\dot a_j \\langle f_j| = 2\\mathrm{i}\\sum_{k\\neq j}^N \\langle f_j|e_k\\rangle \\langle f_k|\\alpha(a_j-a_k) -2\\mathrm{i}\\sum_{k=1}^N \\langle f_j|f_k\\rangle\\langle e_k|\\alpha(a_j-a^*_k)\\quad (j=1,\\ldots,N), \n\\end{equation} \ntogether with the constraints \\eqref{eq:sCM1c}, imply the second order equations \\eqref{eq:sCM1a}. \n\\end{corollary} \n\n\\subsection{Proof of Proposition~\\ref{prop:BT}} \n\\label{sec:BT_proof} \nWe note that, in the system of equations \\eqref{eq:sCM1a}--\\eqref{eq:BT}, the sets of variables $\\{a_j,|e_j\\rangle,\\langle f_j|\\}_{j=1}^N$ and $\\{b_j,\\langle h_j|, |g_j\\rangle\\}_{j=1}^M$ can be swapped by hermitian conjugation and renaming $a_j^*\\to a_j$, $b_j^*\\to b_j$. 
Due to this symmetry, it suffices to verify the claim for the first set of variables, i.e., it is enough to show that \\eqref{eq:sCM1a} follows from \\eqref{eq:sCM1b}, \\eqref{eq:sCM2b}, \\eqref{eq:BT}, subject to \\eqref{eq:sCM1c} and \\eqref{eq:sCM2c}.\n \nWe introduce the shorthand notation \n\\begin{equation} \n\\label{eq:shorthandBT} \n(a_j,|e_j\\rangle,\\langle f_j|,r_j) = \\begin{cases} (a_j,|e_j\\rangle,\\langle f_j|,+1) & (j=1,\\ldots,N) \\\\ (b_{j-N},|g_{j-N}\\rangle,\\langle h_{j-N}|,-1) & (j=N+1,\\ldots,\\mathcal {N}) , \\end{cases} \\quad \n\\mathcal {N}\\coloneqq N+M, \n\\end{equation}\nand\n\\begin{equation} \n\\label{eq:mBj} \n \\mathsf{P}_j\\coloneqq |e_j\\rangle\\langle f_j| , \\quad \\mathsf{B}_j\\coloneqq \\mathrm{i}\\sum_{k\\neq j}^{\\mathcal {N}} r_k \\mathsf{P}_k \\alpha(a_j-a_k),\\quad (j=1,\\ldots,\\mathcal {N}) \n\\end{equation} \nto write \\eqref{eq:BT} as \n\\begin{equation}\n\\label{eq:BTshort} \n\\begin{split} \n\\dot a_j\\langle f_j| = &\\; 2\\langle f_j|\\mathsf{B}_j\\quad (j=1,\\ldots,N), \\\\\n\\dot a_j|e_j\\rangle = &\\; 2\\mathsf{B}_j|e_j\\rangle \\quad (j=N+1,\\ldots,\\mathcal {N}).\n\\end{split}\n\\end{equation} \n\nMoreover, this notation allows us to write the two sets of equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1c} and \\eqref{eq:sCM2a}--\\eqref{eq:sCM2c} as one: \n\\begin{equation} \n\\label{eq:sCM3a} \n\\ddot a_j = -2\\sum_{k\\neq j}^\\mathcal {N} (1+r_jr_k)\\langle f_j|e_k\\rangle\\langle f_k|e_j\\rangle V'(a_j-a_k) \\quad (j=1,\\ldots,\\mathcal {N}) \n\\end{equation} \nand \n\\begin{equation} \n\\begin{split} \n\\label{eq:sCM3b} \n|\\dot e_j\\rangle &= \\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N} (1+r_jr_k) |e_k\\rangle\\langle f_k|e_j\\rangle V(a_j-a_k),\\\\\n\\langle\\dot f_j| & = -\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N}(1+r_jr_k)\\langle f_j|e_k\\rangle \\langle f_k| V(a_j-a_k), \n\\end{split} \\quad (j=1,\\ldots,\\mathcal {N}) , \n\\end{equation} \ntogether with\n\\begin{equation} \n\\label{eq:sCM3c} \n\\langle 
f_j|e_j\\rangle = 1 \\quad (j=1,\\ldots,\\mathcal {N}). \n\\end{equation} \n\nBy differentiating the first set of equations in \\eqref{eq:BTshort} with respect to time, we obtain \n\\begin{equation} \n\\label{eq:ddotajfj1}\n\\ddot a_j\\langle f_j| = \\langle \\dot f_j|(2\\mathsf{B}_j-\\dot a_j) + 2\\langle f_j|\\dot \\mathsf{B}_j\n\\end{equation} \nwhere, here and below in this section, $j=1,\\ldots,N$. \nWe compute, using \\eqref{eq:sCM3b} and $|e_k\\rangle\\langle f_k|=\\mathsf{P}_k$ (note that $r_j=+1$), \n\\begin{equation} \n\\label{eq:dotfj} \n\\begin{split} \n\\langle \\dot f_j|(2\\mathsf{B}_j-\\dot a_j) = & -\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N}(1+r_k) \\langle f_j|e_k\\rangle\\langle f_k |(2\\mathsf{B}_j-\\dot a_j)V(a_j-a_k) \\\\\n= & -\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N}(1+r_k)\\langle f_j|( 2\\mathsf{P}_k \\mathsf{B}_j-2\\mathsf{B}_j\\mathsf{P}_k)V(a_j-a_k) \\\\\n= & -2\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N}(1+r_k)\\langle f_j|[\\mathsf{P}_k, \\mathsf{B}_j]V(a_j-a_k), \n\\end{split} \n\\end{equation} \ninserting \\eqref{eq:BTshort} in the second step. 
Moreover, the definition \\eqref{eq:mBj} of $\\mathsf{B}_j$ and the relation $\\alpha'(z)=-V(z)$ imply\n\\begin{equation} \n2\\langle f_j|\\dot \\mathsf{B}_j = 2\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N} r_k \\langle f_j|\\dot\\mathsf{P}_k \\alpha(a_j-a_k) - 2\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N} r_k \\langle f_j|\\mathsf{P}_k(\\dot a_j-\\dot a_k)V(a_j-a_k), \n\\end{equation} \nand by using \\eqref{eq:BTshort} we compute\n\\begin{equation} \n\\label{eq:fjPkdotajdotak}\n\\begin{split} \n\\langle f_j|\\mathsf{P}_k(\\dot a_j-\\dot a_k) = &\\; \\langle f_j|e_k\\rangle\\langle f_k| (\\dot a_j-\\dot a_k) \\\\\n= & \\; \\dot a_j\\langle f_j|e_k\\rangle\\langle f_k| - \\frac12(1+r_k)\\langle f_j|e_k\\rangle\\dot a_k \\langle f_k| - \\frac12(1-r_k)\\langle f_j|e_k\\rangle \\dot a_k\\langle f_k|\\\\\n= &\\; 2 \\langle f_j|\\mathsf{B}_j\\mathsf{P}_k - (1+r_k)\\langle f_j|\\mathsf{P}_k\\mathsf{B}_k -(1-r_k)\\langle f_j|\\mathsf{B}_k\\mathsf{P}_k\\\\\n= & \\; \\langle f_j|\\big(\\{\\mathsf{P}_k,\\mathsf{B}_j-\\mathsf{B}_k\\} + r_k[\\mathsf{P}_k,\\mathsf{B}_j-\\mathsf{B}_k] - (1+r_k)[\\mathsf{P}_k,\\mathsf{B}_j]\\big) . \n\\end{split} \n\\end{equation} \nInserting the results in \\eqref{eq:dotfj}--\\eqref{eq:fjPkdotajdotak} into \\eqref{eq:ddotajfj1} and using $r_k^2=1$, we find that $ \\langle \\dot f_j|(2\\mathsf{B}_j-\\dot a_j)$ is canceled by the part of $2\\langle f_j|\\dot \\mathsf{B}_j$ involving the term $-(1+r_k)\\langle f_j|[\\mathsf{P}_k,\\mathsf{B}_j]$ in \\eqref{eq:fjPkdotajdotak}, and we obtain \n\\begin{equation} \n\\label{eq:ddotajfj2}\n\\begin{split} \n\\ddot a_j\\langle f_j| = &\\; 2\\mathrm{i}\\sum_{k\\neq j}^\\mathcal {N} r_k \\langle f_j|\\dot\\mathsf{P}_k \\alpha(a_j-a_k) \\\\\n& -2\\mathrm{i} \\sum_{k\\neq j}^\\mathcal {N} \\langle f_j|\\big( r_k \\{\\mathsf{P}_k,\\mathsf{B}_j-\\mathsf{B}_k\\} + [\\mathsf{P}_k,\\mathsf{B}_j-\\mathsf{B}_k] \\big)V(a_j-a_k) . 
\n\\end{split} \n\\end{equation} \nTo proceed, we use $\\mathsf{P}_k=|e_k\\rangle\\langle f_k|$ and \\eqref{eq:sCM3b} to compute \n\\begin{equation} \n\\label{eq:dotPk} \n\\begin{split} \n\\dot \\mathsf{P}_k = &\\; |\\dot e_k\\rangle\\langle f_k| + |e_k\\rangle\\langle \\dot f_k| \\\\\n= &\\; \\mathrm{i}\\sum_{l\\neq k}^{\\mathcal {N}}(1+r_kr_l) ( |e_l\\rangle\\langle f_l | e_k\\rangle\\langle f_k| - |e_k\\rangle\\langle f_k|e_l\\rangle\\langle f_l|)V(a_k-a_l) \\\\\n= & -\\mathrm{i}\\sum_{l\\neq k}^{\\mathcal {N}}(1+r_kr_l) [\\mathsf{P}_k,\\mathsf{P}_l]V(a_k-a_l).\n\\end{split} \n\\end{equation} \nFor future use, it is convenient to rewrite \\eqref{eq:dotPk} as \n\\begin{equation}\n\\label{eq:dotPk2}\n\\dot \\mathsf{P}_k=- \\mathrm{i}(1+r_k)[\\mathsf{P}_k,\\mathsf{P}_j]V(a_j-a_k)-\\mathrm{i}\\sum_{l\\neq j,k}^{\\mathcal {N}}(1+r_kr_l) [\\mathsf{P}_k,\\mathsf{P}_l]V(a_k-a_l),\n\\end{equation}\nusing $r_j=1$ and that $V(z)$ is even in the first term. \nNext, by the definition of $\\mathsf{B}_j$ \\eqref{eq:mBj}, we have, for $k = 1, \\dots, \\mathcal{N}$ with $k \\neq j$,\n\\begin{equation}\n\\begin{split} \n\\label{eq:mBjmBk} \n(\\mathsf{B}_j-\\mathsf{B}_k)V(a_j-a_k) = &\\; \\mathrm{i} \\Big( \\sum_{l\\neq j}^{\\mathcal {N}} r_l \\mathsf{P}_l \\alpha(a_j-a_l)- \\sum_{l\\neq k}^{\\mathcal {N}} r_l \\mathsf{P}_l \\alpha(a_k-a_l)\\Bigr)V(a_j-a_k) \\\\\n = &\\; \\mathrm{i}(r_k\\mathsf{P}_k+\\mathsf{P}_j)\\alpha(a_j-a_k)V(a_j-a_k) \\\\ & + \\mathrm{i}\\sum_{l\\neq j,k}^{\\mathcal {N}} r_l\\mathsf{P}_l V(a_j-a_k)\\big( \\alpha(a_j-a_l) - \\alpha(a_k-a_l) \\big), \n\\end{split} \n\\end{equation} \nusing $r_j=1$ and that $\\alpha(z)$ is odd to simplify $r_k\\mathsf{P}_k\\alpha(a_j-a_k)- r_j\\mathsf{P}_j\\alpha(a_k-a_j)=(r_k\\mathsf{P}_k+\\mathsf{P}_j)\\alpha(a_j-a_k)$. 
\nBy inserting \\eqref{eq:dotPk2} and \\eqref{eq:mBjmBk} into \\eqref{eq:ddotajfj2} and simplifying, we obtain \n\\begin{equation} \n\\label{eq:ddotajfj3}\n\\begin{split} \n\\ddot a_j\\langle f_j| = & 2\\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}}(r_k+r_l) \\langle f_j| [\\mathsf{P}_k,\\mathsf{P}_l]\\alpha(a_j-a_k)V(a_k-a_l) \\\\\n& +2\\sum_{k\\neq j}^\\mathcal {N} \\langle f_j|\\big( 2\\mathsf{P}_k+ r_k \\{\\mathsf{P}_k,\\mathsf{P}_j\\} +(2+r_k) [\\mathsf{P}_k,\\mathsf{P}_j] \\big)\\alpha(a_j-a_k)V(a_j-a_k) \\\\ \n& + 2\\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}} \\langle f_j|\\big( r_kr_l \\{\\mathsf{P}_k,\\mathsf{P}_l\\} +r_l[\\mathsf{P}_k,\\mathsf{P}_l]\\big) \n\\big( \\alpha(a_j-a_l) - \\alpha(a_k-a_l) \\big)V(a_j-a_k), \n\\end{split} \n\\end{equation} \nusing $r_k^2=1$, $r_k\\{\\mathsf{P}_k,r_k\\mathsf{P}_k+\\mathsf{P}_j\\}=2\\mathsf{P}_k^2+r_k\\{\\mathsf{P}_k,\\mathsf{P}_j\\}$, $[\\mathsf{P}_k,r_k\\mathsf{P}_k+\\mathsf{P}_j]=[\\mathsf{P}_k,\\mathsf{P}_j]$, and \n\\begin{equation} \n\\label{eq:mPsquare} \n\\mathsf{P}_k^2 = |e_k\\rangle\\langle f_k|e_k\\rangle\\langle f_k| = |e_k\\rangle\\langle f_k| = \\mathsf{P}_k\n\\end{equation} \nby \\eqref{eq:sCM3c}. 
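For readers who want a quick empirical handle on the rank-one projector property \\eqref{eq:mPsquare}, here is a minimal numerical sketch (an illustration only, with random vectors normalized so that $\\langle f_k|e_k\\rangle=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
e = rng.standard_normal(d) + 1j*rng.standard_normal(d)  # components of |e_k>
f = rng.standard_normal(d) + 1j*rng.standard_normal(d)  # components of <f_k|
f = f / (f @ e)               # normalize so that <f_k|e_k> = 1
P = np.outer(e, f)            # P_k = |e_k><f_k|
assert np.allclose(P @ P, P)  # P_k^2 = P_k: a rank-one projector
```

Note that $\\mathrm{tr}\\,\\mathsf{P}_k=\\langle f_k|e_k\\rangle=1$, so $\\mathsf{P}_k$ is a (generally non-orthogonal) rank-one projector.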
\nSince $\\langle f_j|\\mathsf{P}_j = \\langle f_j|e_j\\rangle\\langle f_j|=\\langle f_j|$ by \\eqref{eq:sCM3c}, we can simplify further: \n\\begin{equation}\n\\label{eq:ddotajfj3A}\n \\langle f_j|\\big( 2\\mathsf{P}_k+ r_k \\{\\mathsf{P}_k,\\mathsf{P}_j\\} +(2+r_k) [\\mathsf{P}_k,\\mathsf{P}_j] \\big) \n= 2(1+r_k)\\langle f_j| \\mathsf{P}_k\\mathsf{P}_j.\n\\end{equation} \nMoreover, since $V(z)$ is even, \n\\begin{multline} \\label{eq:ddotajfj3B}\n2\\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}}(r_k+r_l) \\langle f_j| [\\mathsf{P}_k,\\mathsf{P}_l]\\alpha(a_j-a_k)V(a_k-a_l) \\\\\n= \n\\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}}(r_k+r_l) \\langle f_j| [\\mathsf{P}_k,\\mathsf{P}_l]\\big( \\alpha(a_j-a_k)- \\alpha(a_j-a_l)\\big)V(a_k-a_l) . \n\\end{multline} \nTo proceed, we need the identities \n\\begin{equation}\n\\label{eq:IdalphaV}\n\\alpha(z)V(z)=-\\frac12V'(z)\n\\end{equation}\nand \n\\begin{equation} \n\\label{eq:IdalalV} \n\\big( \\alpha(a_j-a_l) - \\alpha(a_k-a_l) \\big)V(a_j-a_k) \n= -\\big( \\alpha(a_j-a_k) - \\alpha(a_j-a_l) \\big)V(a_k-a_l). \n\\end{equation} \nThe first identity \\eqref{eq:IdalphaV} can be obtained by differentiating \\eqref{eq:IdV} with respect to $z$ while the second identity \\eqref{eq:IdalalV} can be obtained by differentiating \\eqref{eq:Idmain} with respect to $b$ and setting $a=a_j$, $b=a_k$, and $c=a_l$. 
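Both identities are easy to spot-check numerically; the sketch below does so in the rational case I, assuming the standard forms $\\alpha(z)=1\/z$ and $V(z)=1\/z^2$ (consistent with $\\alpha'(z)=-V(z)$):

```python
# rational case I (assumed forms): alpha(z) = 1/z, V(z) = 1/z**2, V'(z) = -2/z**3
alpha = lambda z: 1/z
V     = lambda z: 1/z**2
Vp    = lambda z: -2/z**3

z = 0.7 + 0.3j
assert abs(alpha(z)*V(z) + 0.5*Vp(z)) < 1e-12  # alpha(z)V(z) = -(1/2)V'(z)

a, b, c = 1.1 + 0.2j, -0.4 + 0.9j, 0.5 - 0.3j  # stand-ins for a_j, a_k, a_l
lhs = (alpha(a - c) - alpha(b - c))*V(a - b)
rhs = -(alpha(a - b) - alpha(a - c))*V(b - c)
assert abs(lhs - rhs) < 1e-12                  # the second identity
```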
Thus, inserting \\eqref{eq:ddotajfj3A}, \\eqref{eq:ddotajfj3B} with \\eqref{eq:IdalalV}, and \\eqref{eq:IdalphaV}, \\eqref{eq:ddotajfj3} becomes \n\\begin{equation} \n\\label{eq:ddotajfj4}\n\\begin{split} \n\\ddot a_j\\langle f_j| = & \\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}}(r_k-r_l) \\langle f_j| [\\mathsf{P}_k,\\mathsf{P}_l]\\big( \\alpha(a_j-a_k) - \\alpha(a_j-a_l)\\big)V(a_k-a_l) \\\\\n& -2\\sum_{k\\neq j}^\\mathcal {N} (1+r_k) \\langle f_j| \\mathsf{P}_k\\mathsf{P}_j V'(a_j-a_k)\\\\\n& - 2\\sum_{k\\neq j}^\\mathcal {N} \\sum_{l\\neq j,k}^{\\mathcal {N}} r_kr_l\\langle f_j| \\{\\mathsf{P}_k,\\mathsf{P}_l\\}\\big( \\alpha(a_j-a_k) - \\alpha(a_j-a_l) \\big)V(a_k-a_l).\n\\end{split} \n\\end{equation} \nSince $V(z)$ is even, the double sums in the first and third lines in \\eqref{eq:ddotajfj4} both vanish by symmetry, and using that $(1+r_k)=2$ for $k=1,\\ldots,N$ and $0$ otherwise, \nwe get \n\\begin{equation} \n \\ddot a_j\\langle f_j| = - 4\\sum_{k\\neq j}^N \\langle f_j|\\mathsf{P}_k\\mathsf{P}_j V'(a_j-a_k)\\quad (j=1,\\ldots,N). \n\\end{equation} \nBy multiplying this from the right with $|e_j\\rangle$ and using $\\mathsf{P}_j|e_j\\rangle=|e_j\\rangle$ and $\\langle f_j|\\mathsf{P}_k|e_j\\rangle = \\langle f_j|e_k\\rangle\\langle f_k|e_j\\rangle$, we obtain \\eqref{eq:sCM1a}. \n\n\\section{Multi-soliton solutions of the sBO equation} \n\\label{sec:sBO} \nIn this section, we present and derive multi-soliton solutions of the sBO equation \\eqref{eq:sBO}, both in the real-line case and the $L$-periodic case. \nThe special functions $\\alpha(z)$ and $V(z)$ are given in \\eqref{eq:alpha} and \\eqref{eq:V}, with the real-line case corresponding to I (rational case) and the $L$-periodic case corresponding to II (trigonometric case), respectively, throughout this section. All our results hold true in both cases. 
\n\n\\subsection{Result}\n\\label{sec:sBOresult}\nWe fix $d\\in{\\mathbb Z}_{\\geq 1}$ and consider the case where $\\mathsf{U}$ is a $d\\times d$ matrix-valued function; \n for $d=1$, \\eqref{eq:sBO} reduces to the standard BO equation, and our result below is well-known in this case \\cite{chen1979}. \n \nThe following theorem, whose proof is given in Section \\ref{sec:sBOproof}, is our main result about the sBO equation.\n\n\\begin{theorem} \n\\label{thm:sBO} \nLet $\\{a_j(t),|e_j(t)\\rangle,\\langle f_j(t)|\\}_{j=1}^N$ be a solution of the time evolution equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1b} with initial conditions \n\\begin{equation} \na_j(0)=a_{j,0},\\quad \\dot a_j(0)=v_j,\\quad |e_j(0)\\rangle = |e_{j,0}\\rangle,\\quad \\langle f_j(0)|=\\langle f_{j,0}|\\quad (j=1,\\ldots,N)\n\\end{equation} \nsatisfying the constraints \n\\begin{equation} \n\\label{eq:imajt=0}\n\\mathrm{Im} (a_{j,0})<0 \\quad (j=1,\\ldots,N), \n\\end{equation} \n\\begin{equation} \n\\label{eq:fjej}\n\\langle f_{j,0}|e_{j,0}\\rangle =1 \\quad (j=1,\\ldots,N), \n\\end{equation} \nand \n\\begin{equation}\n\\label{eq:BTt=0} \n\\begin{split} \nv_j \\langle f_{j,0}| = &\\; 2\\mathrm{i}\\sum_{k\\neq j}^N \\langle f_{j,0}|e_{k,0}\\rangle \\langle f_{k,0}|\\alpha(a_{j,0}-a_{k,0}) \\\\\n& -2\\mathrm{i}\\sum_{k=1}^N \\langle f_{j,0}|f_{k,0}\\rangle\\langle e_{k,0}|\\alpha(a_{j,0}-a^*_{k,0})\\quad (j=1,\\ldots,N). 
\n\\end{split} \n\\end{equation} \nThen,\n\\begin{equation} \n\\label{eq:ansatz} \n\\mathsf{U}(x,t)= \\mathrm{i}\\sum_{j=1}^N |e_j(t)\\rangle\\langle f_j(t)|\\alpha(x-a_j(t)) - \\mathrm{i}\\sum_{j=1}^N |f_j(t)\\rangle\\langle e_j(t)|\\alpha(x-a^*_j(t)) \n\\end{equation} \nis a solution of the sBO equation \\eqref{eq:sBO} for all times $t$ provided that the following condition holds true,\n\\begin{equation} \n\\label{eq:imaj}\n\\mathrm{Im} (a_j(t))<0 \\quad (j=1,\\ldots,N).\n\\end{equation} \n\\end{theorem} \n\n\\begin{remark} \nWe expect that the condition in \\eqref{eq:imaj} is automatically fulfilled for all times $t\\in{\\mathbb R}$ under the stated assumptions, and thus can be dropped. \nIt would be interesting to prove, or falsify, this expectation. Similar remarks apply to Theorems~\\ref{thm:sncILW}, \\ref{thm:sBOgen} and \\ref{thm:sncILWgen}.\n\\end{remark} \n\n\\subsubsection{One-soliton solutions}\nFor $N=1$, the time evolution equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1b} simplify to $\\ddot a_1=0$, $|\\dot e_1\\rangle=0$, and $\\langle \\dot f_1|=0$. Moreover, the general solution of the constraints \\eqref{eq:imajt=0}--\\eqref{eq:BTt=0} is $|e_{1,0}\\rangle = |f_{1,0}\\rangle\/\\langle f_{1,0}|f_{1,0}\\rangle$ with $\\langle f_{1,0}|\\in\\mathcal{V}^*$ an arbitrary non-zero vector, together with\n\\begin{equation} \nv_1 = -2\\mathrm{i}\\alpha(a_{1,0}^{\\phantom *}-a_{1,0}^*) = \\begin{cases} -1\/\\aI_1 & \\text{(I)}\\\\ - (2\\pi\/L)\\coth(2\\pi\\aI_1\/L) & \\text{(II)} \\end{cases} , \n\\end{equation} \nwhere $a_{1,0}=a^\\mathrm{R}_1+\\mathrm{i}\\aI_1$ with $a^\\mathrm{R}_1\\in{\\mathbb R}$ and $\\aI_1<0$. 
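As a numerical sanity check of this velocity formula (a sketch only; it assumes the standard rational and trigonometric forms $\\alpha(z)=1\/z$ in case I and $\\alpha(z)=(\\pi\/L)\\cot(\\pi z\/L)$ in case II, which are consistent with the right-hand sides above):

```python
import cmath, math

L_per = 5.0                    # period L (case II)
aI = -0.8                      # Im a_{1,0} < 0
a = 1.3 + 1j*aI                # a_{1,0}

alpha_I  = lambda z: 1/z                                      # case I (assumed)
alpha_II = lambda z: (math.pi/L_per)/cmath.tan(math.pi*z/L_per)  # case II (assumed)

v1_I  = -2j*alpha_I(a - a.conjugate())
v1_II = -2j*alpha_II(a - a.conjugate())

assert abs(v1_I - (-1/aI)) < 1e-12
assert abs(v1_II - (-(2*math.pi/L_per)/math.tanh(2*math.pi*aI/L_per))) < 1e-12
# both velocities come out real and positive, in accordance with the
# chirality of the sBO equation noted below
assert abs(v1_I.imag) < 1e-12 and v1_I.real > 0
assert abs(v1_II.imag) < 1e-12 and v1_II.real > 0
```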
Thus, $a_1(t)=a^\\mathrm{R}_1+\\mathrm{i}\\aI_1+v_1 t$, $|e_1(t)\\rangle\\langle f_1(t)|=|f_{1,0}\\rangle\\langle f_{1,0}|\/\\langle f_{1,0}|f_{1,0}\\rangle$ independent of $t$,\nand Theorem~\\ref{thm:sBO} implies the following explicit formula for the one-soliton solutions of the sBO equation \\eqref{eq:sBO}, \n\\begin{equation} \n\\label{eq:onesoliton} \n\\mathsf{U}(x,t)= \\frac{|f_{1,0}\\rangle\\langle f_{1,0}|}{\\langle f_{1,0}|f_{1,0}\\rangle}\n\\big(\\mathrm{i}\\alpha(x-a^\\mathrm{R}_1-\\mathrm{i}\\aI_1 - v_1t) - \\mathrm{i}\\alpha(x-a^\\mathrm{R}_1+\\mathrm{i}\\aI_1 - v_1t) \\big) .\n\\end{equation} \nThus, a soliton of the sBO equation is characterized by $d$ complex parameters: the pole $a_{1,0}$ at time $t=0$ (one complex parameter), and a non-zero vector $\\langle f_{1,0}|\\in\\mathcal{V}^*$ modulo the transformations $\\langle f_{1,0}| \\to c\\langle f_{1,0}|$, $c\\in{\\mathbb C}\\setminus\\{0\\}$ arbitrary, which leave \\eqref{eq:onesoliton} invariant ($d-1$ complex parameters).\n It is interesting to note that only $v_1>0$ is allowed: the sBO equation is chiral in the sense that its solitons can only move to the right (note that the center of the soliton is located at the position $x=a^\\mathrm{R}_1+v_1t$).\n \n\\subsubsection{Non-interacting multi-soliton solutions}\nLet $N\\leq d$, and pick $N$ non-zero orthogonal vectors $\\langle f_{j,0}|\\in\\mathcal{V}^*$: \n\\begin{equation} \n\\langle f_{j,0}|f_{k,0}\\rangle=\\delta_{j,k}\\langle f_{j,0}|f_{j,0}\\rangle\\quad (j,k=1,\\ldots,N). 
\n\\end{equation} \nOne can check that $a_j(t)=a^\\mathrm{R}_j+\\mathrm{i}\\aI_j+v_j t$ and $|e_j(t)\\rangle\\langle f_j(t)|=|f_{j,0}\\rangle\\langle f_{j,0}|\/\\langle f_{j,0}|f_{j,0}\\rangle$\nis a solution of the time evolution equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1b} satisfying the constraints \\eqref{eq:BTt=0}--\\eqref{eq:imaj} provided that $a^\\mathrm{R}_j\\in{\\mathbb R}$, $\\aI_j<0$ and \n \\begin{equation} \nv_j = \\begin{cases} -1\/\\aI_j & \\text{(I)}\\\\ - (2\\pi\/L)\\coth(2\\pi\\aI_j\/L) & \\text{(II)} \\end{cases} \n\\end{equation} \nfor $j=1,\\ldots,N$. Thus, by Theorem~\\ref{thm:sBO}, \n\\begin{equation} \n\\mathsf{U}(x,t)= \\sum_{j=1}^N \\frac{|f_{j,0}\\rangle\\langle f_{j,0}|}{\\langle f_{j,0}|f_{j,0}\\rangle}\n\\big(\\mathrm{i}\\alpha(x-a^\\mathrm{R}_j-\\mathrm{i}\\aI_j - v_jt) - \\mathrm{i}\\alpha(x-a^\\mathrm{R}_j+\\mathrm{i}\\aI_j - v_jt) \\big) \n\\end{equation} \nis an exact solution of the sBO equation \\eqref{eq:sBO}. Clearly, this solution describes $N$ non-interacting one-solitons. \n\n\\subsubsection{Generic multi-soliton solutions}\n\\label{sec:NsolitonssBO}\nThe initial data for the $N$-soliton in Theorem \\ref{thm:sBO} is specified in terms of the $2N$ complex numbers $a_{j,0}$ and $v_j$,\nas well as the $2N$ vectors $|e_{j,0}\\rangle \\in \\mathcal{V}$ and $\\langle f_{j,0}| \\in \\mathcal{V}^*$ ($j=1,\\ldots,N$). However, the constraints \\eqref{eq:fjej} and \\eqref{eq:BTt=0} imply that these parameters cannot all be independently specified. Moreover, some choices of these parameters give rise to the same soliton solution $\\mathsf{U}$. In what follows, we show that, in the generic case when certain matrices are invertible, the constraints \\eqref{eq:fjej} and \\eqref{eq:BTt=0} can be solved explicitly in terms of only linear operations. As a result, we will see that if $\\mathcal{V}$ is (complex) $d$-dimensional, then the family of $N$-solitons of the sBO equation generically depends on $Nd$ complex parameters. 
Moreover, we give a recipe for the construction of the $N$-soliton solution in terms of these $Nd$ parameters.\n\nFor each $j = 1, \\dots, N$, let us identify $|e_{j,0}\\rangle \\in \\mathcal{V}$ with the vector $\\mathbf{e}_j \\in {\\mathbb C}^d$ whose components $(\\mathbf{e}_j)_{\\mu}$, ${\\mu} = 1, \\dots, d$, are the components of $|e_{j,0}\\rangle$ with respect to some given basis of $\\mathcal{V}$.\nNext, let us identify the collection of $N$ vectors $\\mathbf{e}_j$, $j=1,\\ldots,N$, with the single vector $\\mathbf{e} \\in {\\mathbb C}^{Nd}$ whose components $\\mathbf{e}_{j,\\mu} := (\\mathbf{e}_j)_{\\mu}$ are indexed by $j=1,\\ldots,N$ and ${\\mu}=1,\\ldots,d$.\nSimilarly, let us identify the three collections of vectors $\\langle e_{j,0}|$, $\\langle f_{j,0}|$, and $|f_{j,0}\\rangle$, $j=1,\\ldots,N$, with the vectors $\\mathbf{e}^*=(e^*_{j,\\mu})\\in {\\mathbb C}^{Nd}$, $\\mathbf{f}^*=(f^*_{j,\\mu}) \\in {\\mathbb C}^{Nd}$, and $\\mathbf{f}=(f_{j,\\mu})\\in {\\mathbb C}^{Nd}$, respectively. (Since all considerations here will concern the constraints at time $t = 0$, we do not include the subscript $0$ in our notation for simplicity.)\nWith this notation, we can write the constraint \\eqref{eq:BTt=0} as \n\\begin{equation} \n\\label{eq:system}\n\\mathsf{A}\\mathbf{e}^*+ \\mathsf{B}\\mathbf{e} = \\mathsf{C} \\mathbf{f}^* \n\\end{equation} \nwhere the $Nd \\times Nd$ matrices $\\mathsf{A},\\mathsf{B}$, and $\\mathsf{C}$ are given by \n\\begin{equation} \n\\label{ABdvdef}\n\\begin{split} \nA_{j,\\mu;k,\\nu} &= -2\\mathrm{i} \\langle f_j|f_k\\rangle\\delta_{\\mu,\\nu}\\alpha(a^{\\phantom*}_{j,0}-a_{k,0}^*),\\\\\nB_{j,\\mu;k,\\nu} &= 2\\mathrm{i} (1-\\delta_{j,k})f^*_{k,{\\mu}}f^*_{j,{\\nu}}\\alpha(a_{j,0}-a_{k,0}), \\\\\nC_{j,\\mu;k,\\nu} &= v_j\\delta_{j,k}\\delta_{\\mu,\\nu}. 
\n\\end{split} \n\\end{equation} \nWe write \\eqref{eq:system} and the negative of its complex conjugate as the linear system,\\footnote{Note that the star in $\\mathsf{A}^*$ etc.\\ means complex conjugation, i.e., $\\mathsf{A}^*$ is given by the matrix elements $(A_{j,\\mu;k,\\nu})^*$ where $A_{j,\\mu;k,\\nu}$ are the matrix elements of $\\mathsf{A}$.} \n\\begin{equation} \n\\label{eq:system1}\n \\begin{pmatrix} \\mathsf{A} & \\mathsf{B} \\\\ -\\mathsf{B}^* & -\\mathsf{A}^* \\end{pmatrix}\\begin{pmatrix}\n\\mathbf{e}^*\\\\ \\mathbf{e} \\end{pmatrix} = \\begin{pmatrix} \\mathsf{C}\\mathbf{f}^*\\\\ -\\mathsf{C}^* \\mathbf{f} \\end{pmatrix}, \n\\end{equation} \nand note that the $2Nd\\times 2Nd$ matrix in \\eqref{eq:system1} is hermitian. Thus, restricting ourselves to the generic case when this $2Nd\\times 2Nd$ matrix is invertible, we obtain\n\\begin{equation} \n\\label{eq:solve} \n\\begin{pmatrix}\n\\mathbf{e}^*\\\\ \\mathbf{e} \\end{pmatrix} \n= \\begin{pmatrix} \\mathsf{A} & \\mathsf{B} \\\\ -\\mathsf{B}^* & -\\mathsf{A}^* \\end{pmatrix}^{-1} \n\\begin{pmatrix} \\mathsf{C}\\mathbf{f}^*\\\\ - \\mathsf{C}^* \\mathbf{f} \\end{pmatrix}. \n\\end{equation} \nSubstitution of these expressions for $\\mathbf{e}$ and $\\mathbf{e}^*$ into the constraint \\eqref{eq:fjej} and its complex conjugate gives $2N$ linear equations which can be solved uniquely for the $2N$ initial velocities $v_j, v_j^* \\in {\\mathbb C}$, $j = 1, \\dots, N$, provided that the relevant determinant is nonzero (this is the generic case).\nGiven these expressions for $v_j$, the vectors $\\mathbf{e}$ and $\\mathbf{e}^*$ can be found from (\\ref{eq:solve}). This means that all the constraints can be solved in terms of $\\{a_{j,0}\\}_{j=1}^N$ and $\\mathbf{f}$.\nSo in the Hermitian case considered here, the class of $N$-soliton solutions of the sBO equation is parametrized by the $N$ complex parameters $\\{a_{j,0}\\}_{j=1}^N$ as well as $Nd$ further complex parameters needed to determine $\\mathbf{f}$. 
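The linear-algebra backbone of this recipe is just a block solve; the following sketch illustrates it with random stand-ins for $\\mathsf{A}$, $\\mathsf{B}$, $\\mathsf{C}$ (i.e., the matrices are not built from actual soliton data, so only the block structure of \\eqref{eq:system1}--\\eqref{eq:solve} is demonstrated):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 2, 3

def crand(*shape):
    return rng.standard_normal(shape) + 1j*rng.standard_normal(shape)

A, B = crand(N*d, N*d), crand(N*d, N*d)  # random stand-ins for the Nd x Nd blocks
v = crand(N)                             # stand-ins for the velocities v_j
C = np.kron(np.diag(v), np.eye(d))       # C = diag(v_j) tensor I_d
f = crand(N*d)                           # stand-in for the vector f

M = np.block([[A, B], [-B.conj(), -A.conj()]])
rhs = np.concatenate([C @ f.conj(), -C.conj() @ f])
sol = np.linalg.solve(M, rhs)            # generic case: M invertible
e_star, e = sol[:N*d], sol[N*d:]

# the first block row reproduces the constraint A e* + B e = C f*
assert np.allclose(A @ e_star + B @ e, C @ f.conj())
```

For genuine soliton data, the second block row is the negative complex conjugate of the first, so the unique solution automatically satisfies $\\mathbf{e}^*=\\overline{\\mathbf{e}}$; with random stand-ins this is of course not the case.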
However, some of these parameter configurations yield the same $N$-soliton solution $\\mathsf{U}(x,t)$ via \\eqref{eq:ansatz}. Indeed, the equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1b} and \\eqref{eq:imajt=0}--\\eqref{eq:ansatz} are invariant under the replacements\n\\begin{equation} \n|e_j\\rangle \\to c_j |e_j\\rangle, \\qquad \\langle f_j | \\to \\frac{1}{c_j} \\langle f_j |,\n\\end{equation} \nwhere $\\{c_j\\}_{j=1}^N$ are nonzero complex constants. This means that the family of $N$-solitons generically is $Nd$ complex dimensional; in other words, each soliton is specified by $d$ complex parameters, which is consistent with the result for the one-soliton obtained above.\n\n\\subsection{Proof of Theorem~\\ref{thm:sBO}}\n\\label{sec:sBOproof}\nWe start with the spin-pole ansatz \n\\begin{equation} \n\\label{eq:ansatzh} \n\\mathsf{U}(x,t)=\\mathrm{i} \\sum_{j=1}^N \\mathsf{P}_j(t)\\alpha(x-a_j(t)) - \\mathrm{i} \\sum_{j=1}^N \\mathsf{P}^\\dag_j(t)\\alpha(x-a^*_j(t)) \n\\end{equation} \nwhere $\\mathsf{P}_j=\\mathsf{P}_j(t)$ are $d\\times d$ matrices and $a_j=a_j(t)\\in{\\mathbb C}$; the $\\mathsf{P}_j$ and $a_j$ correspond to the spin degrees of freedom and poles, respectively. 
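As a quick numerical illustration of one structural feature of this ansatz: for real $x$, it is hermitian by construction. The sketch below checks this in the rational case (assuming $\\alpha(z)=1\/z$), with arbitrary matrices $\\mathsf{P}_j$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 3, 2
alpha = lambda z: 1/z                 # rational case I (assumed)

P = [rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d)) for _ in range(N)]
a = [0.5 - 0.3j, -1.2 - 0.8j]         # poles with Im a_j < 0
x = 0.7                               # a real space point

# U = i sum_j P_j alpha(x - a_j) - i sum_j P_j^dag alpha(x - a_j^*)
U = sum(1j*P[j]*alpha(x - a[j]) - 1j*P[j].conj().T*alpha(x - a[j].conjugate())
        for j in range(N))
assert np.allclose(U, U.conj().T)     # the ansatz is hermitian for real x
```

The check relies on the fact that $\\overline{\\alpha(x-a)}=\\alpha(x-a^*)$ for real $x$, which holds since $\\alpha$ has real coefficients.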
\n\n\\begin{proposition} \n\\label{prop:sBO1} \nThe function $\\mathsf{U}(x,t)$ in \\eqref{eq:ansatzh} satisfies the sBO equation \\eqref{eq:sBO} provided \\eqref{eq:imaj} and the following equations hold true,\n\\begin{equation} \n\\label{eq:BT1a} \n\\dot a_{j} \\mathsf{P}_{j} = 2\\mathrm{i}\\sum_{k\\neq j}^N \\mathsf{P}_{j}\\mathsf{P}_{k}\\alpha(a_{j}-a_{k}) -2\\mathrm{i}\\sum_{k=1}^N \\mathsf{P}_{j}\\mathsf{P}^\\dag _{k}\\alpha(a_{j}-a^*_{k})\\quad (j=1,\\ldots,N), \n\\end{equation}\n\\begin{equation} \n\\label{eq:BT2a} \n\\dot \\mathsf{P}_j = -2\\mathrm{i}\\sum_{k \\neq j}^N[\\mathsf{P}_j,\\mathsf{P}_k]V(a_j-a_k) \\quad (j=1,\\ldots,N),\n\\end{equation} \nand \n\\begin{equation} \n\\label{eq:BT3a} \n\\mathsf{P}_j^2=\\mathsf{P}_j \\quad (j=1,\\ldots, N).\n\\end{equation} \n\\end{proposition} \n\n\\begin{proof} \nWe use the short-hand notation\n\\begin{equation} \n\\label{eq:shorthand} \n(a_j,\\mathsf{P}_j,r_j)\\coloneqq \\begin{cases} (a_j,\\mathsf{P}_j,+1) & (j=1,\\ldots,N) \\\\ (a^*_{j-N},\\mathsf{P}^\\dag_{j-N},-1) & (j=N+1,\\ldots,\\mathcal {N}) \\end{cases} , \\quad \\mathcal {N}=2N\n\\end{equation} \nto write \\eqref{eq:ansatzh} as \n\\begin{equation} \n\\label{eq:U} \n\\mathsf{U}= \\mathrm{i}\\sum_{j=1}^{\\mathcal {N}} r_j \\mathsf{P}_j\\alpha(x-a_j) .\n\\end{equation} \nThe proof will follow by inserting \\eqref{eq:U} into the sBO equation \\eqref{eq:sBO} and performing long but straightforward computations.\n\nWe compute each term in \\eqref{eq:sBO}. We start with \n\\begin{equation} \n\\label{eq:Udot}\n\\mathsf{U}_t = \\sum_{j=1}^\\mathcal {N} \\Big( \\mathrm{i} r_j \\dot \\mathsf{P}_j\\alpha(x-a_j) - \\mathrm{i} r_j \\mathsf{P}_j \\dot a_j \\alpha'(x-a_j)\\Big) . 
\n\\end{equation} \nNext, we compute\n\\begin{align}\n\\label{eq:UUx1}\n\\{\\mathsf{U},\\mathsf{U}_x\\} =&\\; -\\sum_{j=1}^{\\mathcal {N}}\\sum_{k=1}^{\\mathcal {N}} r_j r_k \\{\\mathsf{P}_j, \\mathsf{P}_k\\} \\alpha(x-a_j)\\alpha'(x-a_k) \\nonumber \\\\\n=&\\; -2\\sum_{j=1}^{\\mathcal {N}} \\mathsf{P}_j^2\\alpha(x-a_j)\\alpha'(x-a_j) -\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_jr_k \\{\\mathsf{P}_j,\\mathsf{P}_k\\} \\alpha(x-a_j)\\alpha'(x-a_k) \\nonumber \\\\\n=&\\; \\sum_{j=1}^{\\mathcal {N}} \\mathsf{P}_j^2 \\alpha''(x-a_j) + \\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_j r_k \\{\\mathsf{P}_j,\\mathsf{P}_k\\}\\alpha(a_j-a_k)\\alpha'(x-a_k) \\nonumber \\\\\n&\\; +\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}}r_j r_k \\{\\mathsf{P}_j,\\mathsf{P}_k\\}V(a_j-a_k)\\big(\\alpha(x-a_j)-\\alpha(x-a_k)\\big),\n\\end{align}\nwhere we have used the identities \n\\begin{equation} \n\\label{eq:Id1} \n2\\alpha(x-a_j)\\alpha'(x-a_j)=-\\alpha''(x-a_j)\n\\end{equation} \nand\\footnote{We write the following identity in a seemingly strange way, mixing $\\alpha'$ and $V=-\\alpha'$, to emphasize the similarity with a corresponding identity \\eqref{eq:Aidentity2} used in the sncILW case.} \n\\begin{equation}\n\\label{eq:Id2} \n\\alpha(x-a_j)\\alpha'(x-a_k)=- \\alpha(a_j-a_k)\\alpha'(x-a_k) -V(a_j-a_k)\\big(\\alpha(x-a_j)-\\alpha(x-a_k)\\big).\n\\end{equation} \nThe first identity \\eqref{eq:Id1} can be obtained by differentiating \\eqref{eq:IdV} with respect to $z$ and setting $z=x-a_j$ while the second identity \\eqref{eq:Id2} can be obtained by differentiating \\eqref{eq:Idmain} with respect to $c$ and setting $a=x$, $b=a_j$, and $c=a_k$.\n\nThe final sum in \\eqref{eq:UUx1} vanishes because the summand is antisymmetric under the interchange $j\\leftrightarrow k$. 
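The identities \\eqref{eq:Id1} and \\eqref{eq:Id2} can be spot-checked numerically; a minimal sketch in the rational case I (assuming $\\alpha(z)=1\/z$, $V(z)=1\/z^2$):

```python
alpha   = lambda z: 1/z        # rational case I (assumed)
alphap  = lambda z: -1/z**2    # alpha'
alphapp = lambda z: 2/z**3     # alpha''
V       = lambda z: 1/z**2     # V = -alpha'

x, aj, ak = 0.9, 0.2 - 0.6j, -1.1 - 0.4j
# first identity: 2 alpha(x-a_j) alpha'(x-a_j) = -alpha''(x-a_j)
assert abs(2*alpha(x - aj)*alphap(x - aj) + alphapp(x - aj)) < 1e-12
# second identity
lhs = alpha(x - aj)*alphap(x - ak)
rhs = -alpha(aj - ak)*alphap(x - ak) - V(aj - ak)*(alpha(x - aj) - alpha(x - ak))
assert abs(lhs - rhs) < 1e-12
```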
Hence, after re-labelling summation indices $j\\leftrightarrow k$ in the second sum in \\eqref{eq:UUx1} using that $\\alpha(z)$ is odd, we are left with\n\\begin{equation}\n\\label{eq:UUx2}\n\\{\\mathsf{U},\\mathsf{U}_x\\} = \\sum_{j=1}^{\\mathcal {N}} \\mathsf{P}_j^2 \\alpha''(x-a_j)-\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_jr_k \\{\\mathsf{P}_j,\\mathsf{P}_k\\}\\alpha(a_j-a_k)\\alpha'(x-a_j).\n\\end{equation}\n\nTo compute terms in \\eqref{eq:sBO} involving the Hilbert transform $H$, we use\n\\begin{equation}\n\\label{eq:Id3} \n(H\\alpha'(\\cdot-a_j))(x) = \\mathrm{i} r_j\\alpha'(x-a_j) \n\\end{equation}\n(this follows from the well-known facts that (i) $\\alpha(x-a)$ is an eigenfunction of $H$ with eigenvalue $+\\mathrm{i}$ for $\\mathrm{Im} (a)<0$ and $-\\mathrm{i}$ for $\\mathrm{Im} (a)>0$ \\cite{chen1979} and that (ii) $H$ commutes with differentiation \\cite[Chapter~4.8]{king2009}). Hence,\n\\begin{equation}\n\\label{eq:HUx}\nH\\mathsf{U}_x= -\\sum_{j=1}^{\\mathcal {N}} \\mathsf{P}_j \\alpha'(x-a_j),\\qquad H\\mathsf{U}_{xx}= -\\sum_{j=1}^{\\mathcal {N}} \\mathsf{P}_j \\alpha''(x-a_j),\n\\end{equation}\nwhere we have again used the fact that $H$ commutes with differentiation to derive the second equation from the first.\nFrom \\eqref{eq:U} and the first equation in \\eqref{eq:HUx}, we compute\n\\begin{equation}\n\\label{eq:UHUx1}\n\\begin{split} \n\\mathrm{i} [\\mathsf{U},H\\mathsf{U}_x]=&\\; \\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_j [\\mathsf{P}_j,\\mathsf{P}_k] \\alpha(x-a_j)\\alpha'(x-a_k) \\\\\n=&\\; -\\sum_{j=1}^{\\mathcal {N}} \\sum_{k\\neq j}^{\\mathcal {N}} r_j[\\mathsf{P}_j,\\mathsf{P}_k] \\alpha(a_j-a_k)\\alpha'(x-a_k)\\\\\n & -\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_j[\\mathsf{P}_j,\\mathsf{P}_k] V(a_j-a_k)\\big(\\alpha(x-a_j)-\\alpha(x-a_k)\\big), \n\\end{split} \n\\end{equation}\ninserting \\eqref{eq:Id2} \nin the second step. 
We can rewrite the second sum as follows, \n\\begin{multline}\n\\sum_{j=1}^{\\mathcal {N}} \\sum_{k\\neq j}^{\\mathcal {N}} r_j[\\mathsf{P}_j,\\mathsf{P}_k] V(a_j-a_k)\\big(\\alpha(x-a_j)-\\alpha(x-a_k)\\big) \\\\\n= \\frac12 \\sum_{j=1}^{\\mathcal {N}} \\sum_{k\\neq j}^{\\mathcal {N}} (r_j+r_k)[\\mathsf{P}_j,\\mathsf{P}_k] V(a_j-a_k)\\big(\\alpha(x-a_j)-\\alpha(x-a_k)\\big) \\\\\n=\\sum_{j=1}^{\\mathcal {N}} \\sum_{k\\neq j}^{\\mathcal {N}} (r_j+r_k)[\\mathsf{P}_j,\\mathsf{P}_k] V(a_j-a_k) \\alpha(x-a_j)\n\\end{multline}\nsince $V(z)$ is an even function. \nAlso changing variables $j\\leftrightarrow k$ in the first sum in \\eqref{eq:UHUx1} using that $\\alpha(z)$ is odd, we arrive at\n\\begin{equation}\n\\label{eq:UHUx2}\n\\begin{split} \n\\mathrm{i} [\\mathsf{U},H\\mathsf{U}_x]=&\\; -\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} r_k[\\mathsf{P}_j,\\mathsf{P}_k] \\alpha(a_j-a_k)\\alpha'(x-a_j)\n\\\\ & -\\sum_{j=1}^{\\mathcal {N}}\\sum_{k\\neq j}^{\\mathcal {N}} (r_j+r_k)[\\mathsf{P}_j,\\mathsf{P}_k]V(a_j-a_k)\\alpha(x-a_j).\n\\end{split} \n\\end{equation}\nInserting \\eqref{eq:Udot}, \\eqref{eq:UUx2}, the second equation in \\eqref{eq:HUx}, and \\eqref{eq:UHUx2} into \\eqref{eq:sBO} gives\n\\begin{equation} \n\\begin{split}\n0=&\\; \\sum_{j=1}^{\\mathcal {N}} \\Bigg(\\mathrm{i} r_j \\dot{\\mathsf{P}}_j- \\sum_{k\\neq j}^{\\mathcal {N}} (r_j+r_k)[\\mathsf{P}_j,\\mathsf{P}_k]V(a_j-a_k)\\Bigg) \\alpha(x-a_j) \\\\\n&\\; +\\sum_{j=1}^{\\mathcal {N}} \\Bigg(-\\mathrm{i} r_j \\mathsf{P}_j\\dot{a}_j -\\sum_{k\\neq j}^{\\mathcal {N}} r_k \\big( r_j\\{\\mathsf{P}_j,\\mathsf{P}_k\\} +[\\mathsf{P}_j,\\mathsf{P}_k] \\big)\\alpha(a_j-a_k)\\Bigg)\\alpha'(x-a_j) \\\\\n&\\; + \\sum_{j=1}^{\\mathcal {N}} \\big( \\mathsf{P}_j^2 -\\mathsf{P}_j \\big) \\alpha''(x-a_j).\n\\end{split}\n\\end{equation} \nThus, the function $\\mathsf{U}$ defined in \\eqref{eq:U} satisfies the sBO equation \\eqref{eq:sBO} if and only if the following conditions are fulfilled, 
\n\\begin{equation} \n\\label{eq:BT1} \n\\mathsf{P}_j\\dot a_j = \\mathrm{i} \\sum_{k\\neq j}^{\\mathcal {N}} r_k \\big( \\{\\mathsf{P}_j,\\mathsf{P}_k\\} + r_j[\\mathsf{P}_j,\\mathsf{P}_k] \\big) \\alpha(a_j-a_k), \n\\end{equation} \n\\begin{equation} \n\\label{eq:BT2} \n\\dot \\mathsf{P}_j = -\\mathrm{i} \\sum_{k\\neq j}^{\\mathcal {N}}(1+r_jr_k)[\\mathsf{P}_j,\\mathsf{P}_k]V(a_j-a_k) ,\n\\end{equation} \n\\begin{equation} \n\\label{eq:BT3} \n\\mathsf{P}_j^2 = \\mathsf{P}_j \n\\end{equation} \nfor $j=1,\\ldots,\\mathcal {N}$. Recalling \\eqref{eq:shorthand}, one can check that \\eqref{eq:BT1}, \\eqref{eq:BT2}, and \\eqref{eq:BT3} are equivalent to \\eqref{eq:BT1a}, \\eqref{eq:BT2a} and \\eqref{eq:BT3a}, respectively. \n\\end{proof} \n\nWe now make the ansatz \n\\begin{equation}\n\\label{eq:mPj} \n\\mathsf{P}_j= |e_j\\rangle\\langle f_j| \\quad (j=1,\\ldots,N) \n\\end{equation} \nwith vectors $|e_j\\rangle\\in\\mathcal{V}$ and $\\langle f_j|\\in\\mathcal{V}^*$. \nThen the function $\\mathsf{U}(x,t)$ defined in \\eqref{eq:ansatzh} satisfies the equations in Proposition~\\ref{prop:sBO1} whenever $\\{a_j,|e_j\\rangle,\\langle f_j|\\}_{j=1}^N$ satisfy the equations defining the B\\\"acklund transformations of the sCM system discussed in Section~\\ref{sec:sCM_BT}. The precise statement is as follows.\n\n\\begin{lemma}\n\\label{lem:sBOlemma}\nSuppose that $\\{a_j,|e_j\\rangle,\\langle f_j|\\}_{j=1}^N$ satisfy the equations \\eqref{eq:sCM1b}, \\eqref{eq:sCM1c} and \\eqref{eq:BThermitian}, and $\\mathsf{P}_j$ is given by \\eqref{eq:mPj}. \nThen $\\mathsf{U}$ in \\eqref{eq:ansatzh} satisfies \\eqref{eq:BT1a}--\\eqref{eq:BT3a}. 
\n\\end{lemma} \n\n\\begin{proof} \nBy multiplying \\eqref{eq:BThermitian} from the left by $|e_j\\rangle$, one gets \n\\begin{equation*}\n\\dot a_j |e_j\\rangle\\langle f_j| = 2\\mathrm{i}\\sum_{k\\neq j}^N |e_j\\rangle\\langle f_j|e_k\\rangle \\langle f_k|\\alpha(a_j-a_k) -2\\mathrm{i}\\sum_{k=1}^N |e_j\\rangle\\langle f_j|f_k\\rangle\\langle e_k|\\alpha(a_j-a_k^*)\n\\end{equation*} \nwhich is \\eqref{eq:BT1a} (note that $\\mathsf{P}_j^\\dag = |f_j\\rangle\\langle e_j|$). \n\nUsing \\eqref{eq:sCM1b}, we compute \n\\begin{equation} \n\\begin{split} \n\\dot\\mathsf{P}_j = & |e_j\\rangle\\langle \\dot f_j|+|\\dot e_j\\rangle\\langle f_j| \\\\\n= & -2\\mathrm{i}\\sum_{k\\neq j}^N \\big( |e_j\\rangle\\langle f_j|e_k\\rangle \\langle f_k| - |e_k\\rangle\\langle f_k|e_j\\rangle\\langle f_j| \\big) \nV(a_j-a_k) \\\\\n= & -2\\mathrm{i}\\sum_{k\\neq j}^N \\big( \\mathsf{P}_j\\mathsf{P}_k-\\mathsf{P}_k\\mathsf{P}_j \\big) V(a_j-a_k) , \n\\end{split} \n\\end{equation} \nwhich is \\eqref{eq:BT2a}.\n\nFinally, using \\eqref{eq:sCM1c}, \n\\begin{equation} \n\\mathsf{P}_j^2 = |e_j\\rangle\\langle f_j|e_j\\rangle\\langle f_j| = |e_j\\rangle\\langle f_j|=\\mathsf{P}_j, \n\\end{equation} \nwhich is \\eqref{eq:BT3a}. \n\\end{proof} \n\nTheorem~\\ref{thm:sBO} now follows by combining Proposition~\\ref{prop:sBO1} and Lemma~\\ref{lem:sBOlemma} with Proposition~\\ref{prop:BT}.\n\n\\section{Multi-soliton solutions of the sncILW equation}\n\\label{sec:sncILW}\nIn this section, we present results for the sncILW equation on the real line in analogy with those for the sBO equation in the previous section. \nWe use the hyperbolic case (III) special functions $\\alpha(z)=(\\pi\/2\\delta)\\coth(\\pi z\/2\\delta)$ and \n$V(z)=(\\pi\/2\\delta)^2\/\\sinh^2(\\pi z\/2\\delta)$ with $\\delta>0$; see \\eqref{eq:alpha} and \\eqref{eq:V}.\n\n\\subsection{Result}\nWe use the same conventions as described in Section~\\ref{sec:sBOresult}. In the case $d=1$, \\eqref{eq:sncILW} reduces to the standard ncILW equation, whose soliton solutions were constructed in \\cite{berntson2020}. 
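For these hyperbolic special functions, the relation $\\alpha'(z)=-V(z)$ and the $2\\mathrm{i}\\delta$-periodicity of $\\alpha(z)$ (both used later in this section) can be verified numerically; a sketch:

```python
import cmath, math

delta = 1.5
alpha = lambda z: (math.pi/(2*delta))/cmath.tanh(math.pi*z/(2*delta))   # (pi/2delta) coth(pi z/2delta)
V     = lambda z: (math.pi/(2*delta))**2/cmath.sinh(math.pi*z/(2*delta))**2

z, h = 0.6 - 0.9j, 1e-6
deriv = (alpha(z + h) - alpha(z - h))/(2*h)        # alpha'(z), central difference
assert abs(deriv + V(z)) < 1e-5                    # alpha' = -V
assert abs(alpha(z + 2j*delta) - alpha(z)) < 1e-9  # alpha is 2*i*delta-periodic
```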
Our main result about the sncILW equation, stated below, generalizes the construction in \\cite{berntson2020}.\n\n\\begin{theorem} \n\\label{thm:sncILW} \nLet $\\{a_j(t),|e_j(t)\\rangle,\\langle f_j(t)|\\}_{j=1}^N$ be a solution of the time evolution equations \\eqref{eq:sCM1a}--\\eqref{eq:sCM1b} with initial conditions \n\\begin{equation} \na_j(0)=a_{j,0},\\quad \\dot a_j(0)=v_j,\\quad |e_j(0)\\rangle = |e_{j,0}\\rangle,\\quad \\langle f_j(0)|=\\langle f_{j,0}|\\quad (j=1,\\ldots,N)\n\\end{equation} \nsatisfying the constraints \n\\begin{equation} \n\\label{eq:imajtsncILW=0}\n-\\frac{3\\delta}{2}<\\mathrm{Im} (a_{j,0})< -\\frac{\\delta}{2} \\quad (j=1,\\ldots,N), \n\\end{equation} \n\\begin{equation} \n\\label{eq:fjejsncILW}\n\\langle f_{j,0}|e_{j,0}\\rangle =1 \\quad (j=1,\\ldots,N), \n\\end{equation} \nand \n\\begin{equation}\n\\label{eq:BTt=0_sncILW} \n\\begin{split} \nv_j \\langle f_{j,0}| = & 2\\mathrm{i}\\sum_{k\\neq j}^N \\langle f_{j,0}|e_{k,0}\\rangle \\langle f_{k,0}|\\alpha(a_{j,0}-a_{k,0}) \\\\\n& -2\\mathrm{i}\\sum_{k=1}^N \\langle f_{j,0}|f_{k,0}\\rangle\\langle e_{k,0}|\\alpha(a_{j,0}-a^*_{k,0}+\\mathrm{i}\\delta)\\quad (j=1,\\ldots,N). 
\n\\end{split} \n\\end{equation} \nThen,\n\\begin{equation} \n\\label{eq:mUmV} \n\\begin{split}\n\\mathsf{U}(x,t)=&\\; \\mathrm{i}\\sum_{j=1}^N |e_j(t)\\rangle\\langle f_j(t)|\\alpha(x-a_j(t) -\\mathrm{i}\\delta\/2) - \\mathrm{i}\\sum_{j=1}^N |f_j(t)\\rangle\\langle e_j(t)|\\alpha(x-a^*_j(t)+\\mathrm{i}\\delta\/2), \\\\\n\\mathsf{V}(x,t)=&\\; -\\mathrm{i}\\sum_{j=1}^N |e_j(t)\\rangle\\langle f_j(t)|\\alpha(x-a_j(t)+\\mathrm{i}\\delta\/2) + \\mathrm{i}\\sum_{j=1}^N |f_j(t)\\rangle\\langle e_j(t)|\\alpha(x-a^*_j(t)-\\mathrm{i}\\delta\/2) \n\\end{split}\n\\end{equation} \nis a solution of the sncILW equation \\eqref{eq:sncILW} for all times $t$ provided that the following condition holds true,\n\\begin{equation} \n\\label{eq:imaj_sncILW}\n-\\frac{3\\delta}{2}<\\mathrm{Im} ( a_j(t)) <-\\frac{\\delta}{2} \\quad (j=1,\\ldots,N).\n\\end{equation} \n\\end{theorem} \n\n\\subsection{Proof of Theorem~\\ref{thm:sncILW}}\nSince details of the proof are very similar to those of Theorem~\\ref{thm:sBO}, we only explain the key differences. \n\nThe proof is facilitated by introducing the notation\n\\begin{equation}\n\\label{eq:circ} \n\\left(\\begin{array}{c} F_1 \\\\ F_2 \\end{array}\\right)\\circ \\left(\\begin{array}{c} G_1 \\\\ G_2 \\end{array}\\right)\\coloneqq \\left(\\begin{array}{c}F_1G_1 \\\\ -F_2G_2 \\end{array}\\right)\n\\end{equation}\nfor ${\\mathbb C}$-valued functions $F_j$, $G_j$ ($j=1,2$), and the operator\n\\begin{equation}\n\\label{eq:cT} \n\\mathcal{T}\\coloneqq \\left(\\begin{array}{cc} T & \\tilde{T} \\\\ -\\tilde{T} & -T \\end{array}\\right)\n\\end{equation}\nto be interpreted as a linear operator acting on vector-valued functions; see \\cite{berntson2020}. 
\nIn the present paper, we use the product $\\circ$ defined in \\eqref{eq:circ} \nalso for vectors $\\mathcal{F}$, $\\mathcal{G}$ whose components $F_j$, $G_j$ are complex $d\\times d$ matrices, and we let\n\\begin{equation}\n\\{\\mathcal{F}\\,\\overset{\\circ}{,}\\, \\mathcal{G}\\}\\coloneqq \\mathcal{F}\\circ\\mathcal{G}+\\mathcal{G}\\circ \\mathcal{F},\\qquad [\\mathcal{F}\\,\\overset{\\circ}{,}\\, \\mathcal{G}]\\coloneqq \\mathcal{F}\\circ\\mathcal{G}- \\mathcal{G}\\circ \\mathcal{F} \n\\end{equation}\nbe the corresponding generalizations of the anticommutator and commutator, respectively. \n\nUsing this notation, the system \\eqref{eq:sncILW} can be written in terms of the vector\n\\begin{equation}\n\\label{eq:cU} \n\\mathcal{U}(x,t)\\coloneqq \\left(\\begin{array}{c} \\mathsf{U}(x,t) \\\\ \\mathsf{V}(x,t) \\end{array}\\right)\n\\end{equation}\nas\n\\begin{equation}\n\\label{eq:sncILW2}\n\\mathcal{U}_t+ \\{\\mathcal{U}\\,\\overset{\\circ}{,}\\, \\mathcal{U}_x\\} +\\mathcal{T}\\mathcal{U}_{xx}+\\mathrm{i} [\\mathcal{U}\\,\\overset{\\circ}{,}\\, \\mathcal{T}\\mathcal{U}_x]=0.\n\\end{equation}\nTo solve \\eqref{eq:sncILW2}, we use the spin-pole ansatz\n\\begin{equation}\n\\label{eq:ansatzh_sncILW}\n\\mathcal{U}(x,t)=\\mathrm{i} \\sum_{j=1}^{N} \\mathsf{P}_j(t) \\mathcal{A}_+(x-a_j(t))-\\mathrm{i}\\sum_{j=1}^N \\mathsf{P}_j^{\\dag}(t)\\mathcal{A}_-(x-a_j^*(t)),\n\\end{equation}\nwhere $\\mathsf{P}_j=\\mathsf{P}_j(t)$ are $d\\times d$ matrices, $a_j=a_j(t)\\in{\\mathbb C}$, and\n\\begin{equation}\n\\label{eq:cA}\n\\mathcal{A}_{\\pm}(z)\\coloneqq \\left(\\begin{array}{c} +\\alpha(z \\mp \\mathrm{i}\\delta\/2) \\\\ -\\alpha(z\\pm\\mathrm{i}\\delta\/2) \\end{array}\\right)\n\\end{equation}\nfor all $z\\in{\\mathbb C}$ such that $\\delta\/2<|\\mathrm{Im} (z)|<3\\delta\/2$. 
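Componentwise, these definitions give $[\\mathcal{F}\\,\\overset{\\circ}{,}\\,\\mathcal{G}]=([F_1,G_1],\\,-[F_2,G_2])$ and $\\{\\mathcal{F}\\,\\overset{\\circ}{,}\\,\\mathcal{G}\\}=(\\{F_1,G_1\\},\\,-\\{F_2,G_2\\})$, as follows directly from \\eqref{eq:circ}; a numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 2
def crand():
    return rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))

F = (crand(), crand())    # pair (F_1, F_2) of d x d matrices
G = (crand(), crand())

def circ(F, G):           # (F_1, F_2) o (G_1, G_2) = (F_1 G_1, -F_2 G_2)
    return (F[0] @ G[0], -F[1] @ G[1])

comm  = lambda F, G: tuple(a - b for a, b in zip(circ(F, G), circ(G, F)))  # [F o, G]
acomm = lambda F, G: tuple(a + b for a, b in zip(circ(F, G), circ(G, F)))  # {F o, G}

assert np.allclose(comm(F, G)[0],  F[0] @ G[0] - G[0] @ F[0])
assert np.allclose(comm(F, G)[1], -(F[1] @ G[1] - G[1] @ F[1]))
assert np.allclose(acomm(F, G)[0], F[0] @ G[0] + G[0] @ F[0])
assert np.allclose(acomm(F, G)[1], -(F[1] @ G[1] + G[1] @ F[1]))
```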
\nUsing the shorthand notation \\eqref{eq:shorthand}, we can write \\eqref{eq:ansatzh_sncILW} as\n\\begin{equation}\n\\label{eq:ansatz_sncILW}\n\\mathcal{U}=\\mathrm{i}\\sum_{j=1}^{\\mathcal {N}} r_j \\mathsf{P}_j \\mathcal{A}_{r_j}(x-a_j).\n\\end{equation}\nWith this notation in place, one can prove Theorem~\\ref{thm:sncILW} by the very same computations we presented in our proof of Theorem~\\ref{thm:sBO} in Section~\\ref{sec:sBOproof}. The reason for this is that the vector-valued functions $\\mathcal{A}_{\\pm}(z)$ defined in \\eqref{eq:cA} satisfy the following identities which are natural generalizations of the identities \\eqref{eq:Id1}, \\eqref{eq:Id2} and \\eqref{eq:Id3}, respectively ($j,k=1,\\ldots,\\mathcal {N}$ and $x\\in{\\mathbb R}$): \n\\begin{equation}\n\\label{eq:Aidentity1}\n2\\mathcal{A}_{r_j}(x-a_j)\\circ \\mathcal{A}_{r_j}'(x-a_j)=- \\mathcal{A}_{r_j}''(x-a_j) \n\\end{equation}\n(this is implied by \\eqref{eq:Id1} and definitions),\n\\begin{align}\n\\label{eq:Aidentity2}\n\\mathcal{A}_{r_j}(x-a_j) & \\circ \\mathcal{A}_{r_k}'(x-a_k)= -\\alpha(a_j-a_k+\\mathrm{i}(r_j-r_k)\\delta\/2) \\mathcal{A}_{r_k}'(x-a_k) \\nonumber \\\\\n&\\; -V(a_j-a_k+\\mathrm{i}(r_j-r_k)\\delta\/2)\\big(\\mathcal{A}_{r_j}(x-a_j)-\\mathcal{A}_{r_k}(x-a_k)\\big) \n\\end{align}\n(this is implied by \\eqref{eq:Id2} and definitions, together with $\\alpha'(z)=-V(z)$ and the fact that $\\alpha(z)$ and $V(z)$ are both $2\\mathrm{i}\\delta$-periodic), \nand\n\\begin{equation} \n\\label{eq:Aidentity3}\n(\\mathcal{T}\\mathcal{A}_{r_j}'(\\cdot -a_j))(x)=\\mathrm{i} r_j\\mathcal{A}_{r_j}'(x-a_j) \n\\end{equation} \n(this follows from the result, proved in \\cite{berntson2020}, that for the vector-valued functions $\\mathcal{A}_\\pm(z)$ in \\eqref{eq:cA}, $\\mathcal{A}_+'(x-a)$ is an eigenfunction of the matrix operator $\\mathcal{T}$ with eigenvalue $+\\mathrm{i}$ provided that $-3\\delta\/2 < \\mathrm{Im} (a) < -\\delta\/2$, and $\\mathcal{A}_-'(x-a)$ is an eigenfunction of $\\mathcal{T}$ 
with eigenvalue $-\\mathrm{i}$ provided that $\\delta\/2 < \\mathrm{Im} (a) < 3\\delta\/2$; see \\cite[Appendix A.b]{berntson2020}).\n\nThe interested reader can find full details on how to prove Theorem~\\ref{thm:sncILW} in Appendix~\\ref{app:sncILWsolutions}. There, we prove a more general result for solutions that are not necessarily hermitian. The hermitian case of Theorem~\\ref{thm:sncILW} can be established by specializing to \\eqref{eq:ansatzh} at all points in the proof. \n\n\\section{Further results} \n\\label{sec:results}\nWe briefly summarize further basic results about the sBO and sncILW equations. \n\n\\subsection{Hamiltonian structure}\n\\label{sec:Hamiltonian}\nThe sBO equation \\eqref{eq:sBO} can be obtained as a Hamiltonian equation $\\mathsf{U}_t=\\{\\cH_{\\rm sBO},\\mathsf{U}\\}_{\\mathrm{P.B.}}$ from the Hamiltonian ($\\mathsf{U}$ below is short for $\\mathsf{U}(x)$)\n\\begin{equation} \n\\cH_{\\rm sBO} = \\int \\mathrm{tr}\\left( \\frac13 \\mathsf{U}^3 + \\frac12 \\mathsf{U} H\\mathsf{U}_x \\right)\\,\\mathrm{d}{x} \n\\end{equation} \nand the Poisson brackets\\footnote{The subscript $\\mathrm{P.B.}$ is used to avoid confusion with the anti-commutator.} \n\\begin{equation} \n\\{ U_{{\\mu},{\\nu}}(x), U_{{\\mu}',{\\nu}'}(x')\\}_{\\mathrm{P.B.}} = \\mathrm{i}\\delta(x-x')\\big(\\delta_{{\\nu},{\\mu}'}U_{{\\mu},{\\nu}'}(x)-\\delta_{{\\mu},{\\nu}'}U_{{\\mu}',{\\nu}}(x)\\big) + \\delta'(x-x')\\delta_{{\\nu},{\\mu}'}\\delta_{{\\mu},{\\nu}'} \n\\end{equation} \nwhere $U_{{\\mu},{\\nu}}(x)$ are the matrix elements of $\\mathsf{U}(x)$ and ${\\mu},{\\nu},{\\mu}',{\\nu}'=1,\\ldots,d$; \nthe integration is over ${\\mathbb R}$ and $[-L\/2,L\/2]$ in the real-line and periodic cases, respectively, and $\\mathrm{tr}$ is the usual matrix trace (sum of diagonal elements). 
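As a quick consistency check of this Hamiltonian structure (a sketch, not part of the computations referred to below): in the scalar case $d=1$, the commutator terms are absent, the Poisson brackets reduce to $\\{u(x),u(x')\\}_{\\mathrm{P.B.}}=\\delta'(x-x')$, and $\\cH_{\\rm sBO}$ becomes the standard BO Hamiltonian, for which\n\\begin{equation}\nu_t=\\{\\cH_{\\rm sBO},u\\}_{\\mathrm{P.B.}}=-\\partial_x\\frac{\\delta \\cH_{\\rm sBO}}{\\delta u}=-2uu_x-Hu_{xx},\n\\end{equation}\nusing $\\delta\\cH_{\\rm sBO}\/\\delta u=u^2+Hu_x$, which follows since $H$ is skew-adjoint; this reproduces the scalar BO equation, up to the sign conventions fixed in \\eqref{eq:sBO}. 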
\n\nSimilarly, the sncILW equation \\eqref{eq:sncILW} arises from the Hamiltonian \n\\begin{equation} \n\\cH_{\\rm sncILW} = \\int \\mathrm{tr}\\left( \\frac13 \\mathsf{U}^3 +\\frac13\\mathsf{V}^3 + \\frac12 \\mathsf{U} T\\mathsf{U}_x + \\frac12 \\mathsf{V} T\\mathsf{V}_x + \\frac12 \\mathsf{V} \\tilde{T}\\mathsf{U}_x + \\frac12 \\mathsf{U} \\tilde{T}\\mathsf{V}_x \\right)\\,\\mathrm{d}{x} \n\\end{equation} \nand the Poisson brackets \n\\begin{equation} \n\\begin{split} \n\\{ U_{{\\mu},{\\nu}}(x), U_{{\\mu}',{\\nu}'}(x')\\}_{\\mathrm{P.B.}} &= \\mathrm{i}\\delta(x-x')\\big(\\delta_{{\\nu},{\\mu}'}U_{{\\mu},{\\nu}'}(x)-\\delta_{{\\mu},{\\nu}'}U_{{\\mu}',{\\nu}}(x)\\big) + \\delta'(x-x')\\delta_{{\\nu},{\\mu}'}\\delta_{{\\mu},{\\nu}'} ,\\\\\n\\{ V_{{\\mu},{\\nu}}(x), V_{{\\mu}',{\\nu}'}(x')\\}_{\\mathrm{P.B.}} &= \\mathrm{i}\\delta(x-x')\\big(\\delta_{{\\nu},{\\mu}'}V_{{\\mu},{\\nu}'}(x)-\\delta_{{\\mu},{\\nu}'}V_{{\\mu}',{\\nu}}(x)\\big) - \\delta'(x-x')\\delta_{{\\nu},{\\mu}'}\\delta_{{\\mu},{\\nu}'},\\\\\n\\{ U_{{\\mu},{\\nu}}(x), V_{{\\mu}',{\\nu}'}(x')\\}_{\\mathrm{P.B.}} &= 0\n\\end{split} \n\\end{equation} \nin the real-line and periodic cases. \n\n(The verifications of these results are by straightforward computations which we omit.) \n\n\\subsection{Local limits of the sILW equation}\n\\label{sec:locallimit}\nWe give details on how the matrix KdV equation \\eqref{eq:mKdV} and the (generalization of the) HF equation \\eqref{eq:sHF} are obtained as limits $\\delta\\downarrow 0$ from the sILW equation \\eqref{eq:sILW}; note that, while the latter equation is non-local, the former equations both are local.\n\nFor simplicity, we only consider the real-line case with $T$ given by \\eqref{eq:TT}. \nWe recall that\n\\begin{equation} \n\\label{eq:Texpand}\n(Tf_x)(x) = -\\frac1\\delta f(x)+\\frac{\\delta}3 f_{xx}(x) + O(\\delta^3) \n\\end{equation} \nas $\\delta\\downarrow 0$ uniformly for $x\\in{\\mathbb R}$ \\cite[Appendix~A]{scoufis2005}. 
\nWe scale \n\\begin{equation} \n\\label{eq:scaling1} \n\\mathsf{U}(x,t)\\to \\frac{\\delta}{3}\\mathsf{U}\\left(x,\\frac{\\delta}{3}t\\right) \n\\end{equation} \nin \\eqref{eq:sILW} (and change variables accordingly) to obtain\n\\begin{equation} \n\\mathsf{U}_t + \\{\\mathsf{U},\\mathsf{U}_x\\} +\\frac{3}{\\delta^2}\\mathsf{U}_x + \\frac{3}{\\delta}T\\mathsf{U}_{xx} + \\mathrm{i}[\\mathsf{U},T\\mathsf{U}_x]=0. \n\\end{equation} \nExpanding this in powers of $\\delta$ using \\eqref{eq:Texpand} yields \n\\begin{equation} \n\\mathsf{U}_t+\\{\\mathsf{U},\\mathsf{U}_x\\}+\\mathsf{U}_{xxx} +\\frac{\\delta}3\\mathrm{i}[\\mathsf{U},\\mathsf{U}_{xx}] + O(\\delta^2)=0, \n\\end{equation} \nwhich converges to the matrix KdV equation \\eqref{eq:mKdV} in the limit $\\delta\\downarrow 0$. \n\nSimilarly, by scaling \n\\begin{equation} \n\\label{eq:scaling2} \n\\mathsf{U}(x,t)\\to \\frac1\\delta \\mathsf{U}\\left(x,\\frac{t}{3} \\right), \n\\end{equation} \n\\eqref{eq:sILW} is transformed to\n\\begin{equation} \n\\mathsf{U}_t +\\frac{3}{\\delta}\\{\\mathsf{U},\\mathsf{U}_x\\} + \\frac{3}{\\delta}\\mathsf{U}_x + 3T\\mathsf{U}_{xx}+\\frac{3}{\\delta}\\mathrm{i}[\\mathsf{U},T\\mathsf{U}_x]=0. \n\\end{equation} \nInserting the expansion \\eqref{eq:Texpand} into this equation, we obtain\n \\begin{equation} \n \\mathsf{U}_t + \\frac{3}{\\delta}\\{\\mathsf{U},\\mathsf{U}_x\\}+ \\delta \\mathsf{U}_{xxx}+\\mathrm{i}[\\mathsf{U},\\mathsf{U}_{xx}]+O(\\delta^2)=0, \n \\end{equation} \nyielding the generalized HF equation \\eqref{eq:sHF} in the limit $\\delta\\downarrow 0$ provided $\\{\\mathsf{U},\\mathsf{U}_x\\}=0$; the latter is achieved by imposing the condition $\\mathsf{U}^2=I$ (note that this condition is preserved under the time evolution \\eqref{eq:sHF}). 
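For the reader's convenience, the cancellation behind the first expansion can be made explicit (the second scaling works in the same way): by \\eqref{eq:Texpand},\n\\begin{equation}\n\\frac{3}{\\delta}T\\mathsf{U}_{xx}=-\\frac{3}{\\delta^2}\\mathsf{U}_x+\\mathsf{U}_{xxx}+O(\\delta^2),\\qquad \\mathrm{i}[\\mathsf{U},T\\mathsf{U}_x]=\\frac{\\delta}{3}\\mathrm{i}[\\mathsf{U},\\mathsf{U}_{xx}]+O(\\delta^3),\n\\end{equation}\nso the singular term $\\frac{3}{\\delta^2}\\mathsf{U}_x$ cancels, while the would-be $O(\\delta^{-1})$ part of the commutator term vanishes because $[\\mathsf{U},\\mathsf{U}]=0$. 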
\n\n\n\\subsection{Spin generalization of the bidirectional Benjamin-Ono equation}\n\\label{sec:2sBO} \nThe bidirectional BO equation was introduced by Abanov, Bettelheim, and Wiegmann \\cite{abanov2009} as a powerful tool to derive a hydrodynamic description of the $A$-type trigonometric CM system (to mention just one of several interesting applications). In this section, we present a spin generalization of this equation. \n\n\nLet $\\alpha(z)$ and $V(z)$ be given by \\eqref{eq:alpha} and \\eqref{eq:V} in case II (the trigonometric case). \nWe assume that the variables $a_j\\in{\\mathbb C}$, $|e_j\\rangle\\in\\mathcal{V}$, $\\langle f_j|\\in\\mathcal{V}^*$ for $j=1,\\ldots,N$ and $b_j\\in{\\mathbb C}$, $|g_j\\rangle\\in\\mathcal{V}$, $\\langle h_j|\\in\\mathcal{V}^*$ for $j=1,\\ldots,M$ satisfy the first order equations \\eqref{eq:sCM1b}, \\eqref{eq:sCM2b} and \\eqref{eq:BT}, together with the constraints \\eqref{eq:sCM1c} and \\eqref{eq:sCM2c}; note that these are exactly the equations defining the B\\\"acklund transformation of the $A$-type sCM system in the trigonometric case (see Proposition~\\ref{prop:BT}). Using these, we define the functions\n\\begin{equation} \n\\mathsf{U}_0(z,t)\\coloneqq -\\mathrm{i}\\sum_{j=1}^M |g_j(t)\\rangle\\langle h_j(t)|\\alpha(z-b_j(t)), \\quad \\mathsf{U}_1(z,t)\\coloneqq \\mathrm{i}\\sum_{j=1}^N |e_j(t)\\rangle\\langle f_j(t)|\\alpha(z-a_j(t)) \n\\end{equation} \nof the position variable $z\\in{\\mathbb C}$ and time $t\\in{\\mathbb R}$, and \n\\begin{equation} \n\\mathsf{U}\\coloneqq \\mathsf{U}_0+\\mathsf{U}_1,\\quad \\tilde{\\mathsf{U}}\\coloneqq \\mathsf{U}_0-\\mathsf{U}_1. \n\\end{equation} \nOne then can verify, by arguments similar to the ones we use to prove Proposition~\\ref{prop:sBO1gen}, \nthat the following equation is satisfied, \n\\begin{equation} \n\\label{eq:bsBO}\n\\mathsf{U}_t + \\{\\mathsf{U},\\mathsf{U}_z\\} +\\mathrm{i}\\tilde{\\mathsf{U}}_{zz}-[\\mathsf{U},\\tilde{\\mathsf{U}}_z]=0. 
\n\\end{equation} \nThis is the spin generalization of the bidirectional BO equation. It would be interesting to generalize the arguments in \\cite{abanov2009} to derive a hydrodynamic description of the sCM systems from \\eqref{eq:bsBO}. \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcypn b/data_all_eng_slimpj/shuffled/split2/finalzzcypn new file mode 100644 index 0000000000000000000000000000000000000000..696c277b5409202a9436fc45db7d428971eeb4ab --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcypn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAt the extremely high temperatures reached in heavy-ion collisions, a phase transition occurs from ordinary nuclear matter to a QGP state in which quarks and gluons are not confined into hadrons. The quark formation time during the collision is proportional to the inverse of the quark mass \\cite{MassDep}. Therefore, heavy quarks are generated early during the collision and can experience the full evolution of the medium \\cite{qgptime}. The quarks lose energy while moving through the medium by collisional and radiative processes. This energy loss is expected to depend on the path length, the QGP density, the parton colour charge (Casimir factor), and the quark mass (dead-cone effect) \\cite{deadcone,charmbeauty}. Because of this, the following energy loss hierarchy is expected: $\\Delta E_\\mathrm{loss}$(g) $>$ $\\Delta E_\\mathrm{loss}$(u,d) $>$ $\\Delta E_\\mathrm{loss}$(c) $>$ $\\Delta E_\\mathrm{loss}$(b). \n\nThe nuclear modification factor ($R_\\mathrm{AA}$) quantifies the medium effects that affect the heavy quarks when they traverse the medium. 
This factor, defined as $$R_\\mathrm{AA} = \\frac{1}{\\langle N^\\mathrm{AA}_{coll}\\rangle} \\frac{\\mathrm{d}N^\\mathrm{AA} \/ \\mathrm{d}p_\\mathrm{T}}{\\mathrm{d}N^\\mathrm{pp} \/ \\mathrm{d}p_\\mathrm{T}},$$ is obtained from the ratio of the transverse-momentum-differential yields measured in Pb--Pb and pp collisions. The scaling factor $\\langle N^\\mathrm{AA}_{coll}\\rangle$ represents the average number of binary nucleon-nucleon collisions in Pb--Pb collisions for a given centrality interval. If heavy quarks do not lose energy in the medium, $R_\\mathrm{AA} = 1$, while it drops below unity if they do. Heavy quarks are also expected to be affected by the collective motion of the medium. This gives rise to an anisotropic flow usually described by the components of a Fourier expansion of the azimuthal distribution of the outgoing particles. The second coefficient of this expansion is called elliptic flow ($v_2$). \n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{1.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{2.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ of non-strange D mesons in central Pb--Pb collisions compared with theoretical calculations. Right: Ratio of $R_\\mathrm{AA}$ of non-prompt D$^0$ mesons over the $R_\\mathrm{AA}$ of prompt D$^0$ mesons. The data are compared with models with different energy loss for charm and beauty. Copyright CERN, reused with permission.}\n\\label{Fig:1}\n\\end{figure}\n\n\\section{Open heavy flavour}\nThe left panel in Fig. \\ref{Fig:1} shows a comparison of the $R_\\mathrm{AA}$ of non-strange D-mesons in central Pb--Pb collisions with theoretical calculations. The low momentum reach in central collisions allows setting stringent constraints on energy-loss models for central Pb--Pb collisions. 
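As a minimal numerical illustration of the $R_\mathrm{AA}$ definition above (all yield values and the $\langle N_{\rm coll}\rangle$ below are invented for the example, not measured data):

```python
# Toy illustration of the R_AA construction; the yields and <N_coll> are
# invented, chosen only to mimic a suppressed heavy-flavour spectrum.
pt_bins = [1.0, 2.0, 4.0, 8.0, 16.0]        # GeV/c bin centres
dn_dpt_pp = [2.0, 0.8, 0.2, 0.04, 0.008]    # per-event pp yield dN/dpT
dn_dpt_aa = [900.0, 250.0, 40.0, 7.0, 2.0]  # per-event Pb-Pb yield dN/dpT
n_coll = 1600.0                             # illustrative <N_coll>, central class

def nuclear_modification_factor(dn_aa, dn_pp, ncoll):
    """R_AA = (1/<N_coll>) * (dN^AA/dpT) / (dN^pp/dpT), bin by bin."""
    return [(aa / ncoll) / pp for aa, pp in zip(dn_aa, dn_pp)]

raa = nuclear_modification_factor(dn_dpt_aa, dn_dpt_pp, n_coll)
# Values below unity signal in-medium energy loss in that pT bin.
```

With these invented numbers every bin gives $R_\mathrm{AA}<1$; $R_\mathrm{AA}=1$ would be recovered if the Pb--Pb yield were exactly $\langle N_{\rm coll}\rangle$ times the pp yield.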
Models without shadowing, like the BAMPS model \\cite{bamps}, overestimate the $R_\\mathrm{AA}$ spectrum at low $p_\\mathrm{T}$. \n\nThe models can be tested more rigorously by requiring a description of multiple observables, like $R_\\mathrm{AA}$ and $v_2$, at the same time, over a wide momentum range, and in different centrality intervals \\cite{dmesonRAA, dmesonFlow}. This shows that accurate modeling of data requires a combination of collisional and radiative energy loss, hadronization via coalescence, cold-nuclear-matter effects, and a realistic description of the medium evolution. \n\nThe right panel shows the ratio of the $R_\\mathrm{AA}$ of non-prompt D$^0$-mesons over the $R_\\mathrm{AA}$ for prompt D$^0$-mesons. Prompt D$^0$-mesons, which come directly from the charm quarks produced in the initial collision, and non-prompt D$^0$-mesons, which are produced later by the decay of beauty hadrons, show a different $R_\\mathrm{AA}$ at intermediate $p_\\mathrm{T}$. Models with different energy loss for charm and beauty can describe within uncertainties the ratio of non-prompt over prompt D$^0$-meson $R_\\mathrm{AA}$. This is an indication that energy loss depends on the quark mass. \n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{3.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{4.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ in central Pb--Pb collisions for multiple types of particle species. Right: $\\Lambda_c^+$ \/ D$^0$ ratio as a function of multiplicity for several $p_\\mathrm{T}$ intervals. Copyright CERN, reused with permission.}\n\\label{Fig:2}\n\\end{figure}\n\nThe left panel in Fig. \\ref{Fig:2} shows the $R_\\mathrm{AA}$ for different particle species with a hierarchy that is consistent with the expected difference in energy loss for charm versus light-flavour and gluons. 
Strange D-mesons and $\\Lambda_{c}$ baryons show a hint of lower suppression, compared to non-strange D-mesons, that may point at recombination effects. Models that include hadronization via coalescence reproduce D$_\\mathrm{S}$ data within uncertainties. \n\nThe right panel in Fig. \\ref{Fig:2} shows the $\\Lambda_c^+$ \/ D$^0$ ratio as a function of multiplicity in pp, p--Pb, and Pb--Pb collisions for several $p_\\mathrm{T}$ intervals. This ratio shows an enhancement at low $p_\\mathrm{T}$ compared to e$^{+}$e$^{-}$ collider measurements in which $\\Lambda_c^+$ \/ D$^0$ $\\approx 0.1$ \\cite{epref}. The multiplicity dependence of the $\\Lambda_c^+$ \/ D$^0$ ratio shows that the enhancement remains higher than electron-positron collider measurements even for low-multiplicity pp collisions, suggesting that charm-quark recombination with quarks from the surrounding hadronic environment may already occur in small systems.\n\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{5.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{6.pdf}}\n\\end{minipage}\n\\caption{Left: $R_\\mathrm{AA}$ as a function of multiplicity for inclusive J\/$\\psi$ in two rapidity intervals. Right: $R_\\mathrm{AA}$ as a function of $\\langle N_\\mathrm{part} \\rangle$ for two $\\Upsilon$ states along with model predictions. Copyright CERN, reused with permission.}\n\\label{Fig:3}\n\\end{figure}\n\n\n\\section{Quarkonium}\nAt high temperatures, colour screening in the QGP results in the suppression of quarkonium production \\cite{quarkonium}. Different quarkonium states have different binding energies, which results in the expectation of a sequential melting of states in collisions of nuclei at higher energies \\cite{melting}. On the other hand, the c$\\bar{\\mathrm{c}}$ multiplicity increases at higher collision energies. 
This leads to the expectation of an enhancement of quarkonium production via recombination at hadronization.\n\nThe left panel of Fig. \\ref{Fig:3} shows the $R_\\mathrm{AA}$ as a function of multiplicity for inclusive J\/$\\psi$-mesons in two rapidity intervals. This $R_\\mathrm{AA}$ measurement has a significantly improved precision and $p_\\mathrm{T}$ reach compared to previous measurements \\cite{improved}. At higher multiplicities the $R_\\mathrm{AA}$ at midrapidity is higher than at forward rapidity. This observation may suggest that recombination effects are stronger at midrapidity, where the charm-quark density is higher.\n\nThe centrality dependence of the $R_\\mathrm{AA}$ is shown in the right panel of Fig. \\ref{Fig:3}. The data show a slight bottomonium centrality dependence and match well with the model predictions \\cite{du}. A stronger suppression of $\\Upsilon$(2S) than $\\Upsilon$(1S) is observed.\n\nFor J\/$\\psi$-mesons, measurements show a positive $v_2$ in a large $p_\\mathrm{T}$ range at forward rapidity. This is illustrated in the left panel of Fig. \\ref{Fig:4}. The bottomonium $v_2$ is consistent with zero; however, more data are needed for a conclusive interpretation of the difference between J\/$\\psi$ and bottomonium $v_2$.\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{7.pdf}}\n\\end{minipage}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{8.pdf}}\n\\end{minipage}\n\\caption{Left: $v_2$ as a function of $p_\\mathrm{T}$ for inclusive J\/$\\psi$. Right: $\\Upsilon$(1S) $v_2$ as a function of $p_\\mathrm{T}$ compared with inclusive J\/$\\psi$ $v_2$ and different models \\cite{Yflow}. Copyright CERN, reused with permission.}\n\\label{Fig:4}\n\\end{figure}\n\n\n\\section{Heavy-flavour jets}\nJets originate from hard parton-parton interactions. In ALICE, heavy-flavour tagged jets are measured down to low jet $p_\\mathrm{T}$ (5 GeV\/$c$). 
The study of jets provides experimental data for gluon-to-hadron fragmentation functions and the gluon PDF at low $x$. The study of jet quenching provides additional information to further characterise parton energy loss in the QGP.\n\nFig. \\ref{Fig:5} shows the first measurement of the $\\Lambda_c^+$ probability density distribution of the parallel jet momentum fraction (z$_{||}^{ch}$) compared to Monte Carlo predictions. The Pythia 8 SoftQCD model has the best agreement with data.\n\n Jets with beauty hadrons were reconstructed exploiting the displaced impact parameter of b-hadron decay tracks to the primary vertex. The observed yields are consistent with POWHEG. The nuclear modification factor in p--Pb ($R_\\mathrm{pPb}$) for B-tagged jets is shown in the right panel of Fig. \\ref{Fig:5}. No cold-nuclear-matter effects are observed within uncertainties using B-tagged jets.\n\n\\begin{figure}[t!]\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{9.pdf}}\n\\end{minipage}\n\\hspace{0.1cm}\n\\begin{minipage}[b]{6cm}\n\\centerline{\n\\includegraphics[width=\\textwidth]{10.pdf}}\n\\end{minipage}\n\\caption{Left: probability density distribution of the parallel jet momentum fraction (z$_{||}^{ch}$) for $\\Lambda_c^+$-tagged jets compared to expectations from Monte Carlo generators. Right: $R_\\mathrm{pPb}$ for B-tagged jets with a comparison of measurements by ALICE and CMS \\cite{CMSbjet}. Copyright CERN, reused with permission.}\n\\label{Fig:5}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\label{sec:intro}\nThe cosmic microwave background (CMB) anisotropies observed by the\nWilkinson Microwave Anisotropy Probe (WMAP) have revealed that the\nprimordial density fluctuations are almost adiabatic and scale invariant\n\\cite{Komatsu:2008hk,Dunkley:2008ie}. Inflationary cosmology\n\\cite{inflation} is currently the favored scenario to explain such\nprimordial fluctuations. 
According to it, density perturbations are\nproduced as quantum vacuum fluctuations on sub-Hubble scales and then\nstretched to super-Hubble scales during the phase of accelerated\nexpansion of space. However, inflationary cosmology is not without its\nproblems (see e.g. \\cite{RHBrev}),\\footnote{One of the problems in the\nusually considered mechanism, where quantum fluctuations are the origin\nof the present cosmic fluctuations, is the transition from quantum\nquantities to classical observables. However, in the mechanism proposed\nin this paper, such a problem does not exist because they are classical\nfluctuations from the beginning.} and thus it is important to study\nscenarios alternative to inflation. In the 1980s, topological defect\nmodels such as those based on cosmic strings were investigated intensely\nas a possible alternative to generate primordial density fluctuations\n\\cite{topologicaldefects}. However, the fluctuations induced by defects\nin an expanding universe are isocurvature and, even if they might mimic\nthe inflationary predictions for the temperature-temperature (TT)\ncorrelation of the CMB \\cite{Turok:1996wa}, observations of the\nanti-correlation of the temperature and the E-mode polarization (TE),\nprecisely measured by WMAP, confirmed that such fluctuations could not\nbe the dominant source of CMB anisotropies\n\\cite{Peiris:2003ff,Komatsu:2008hk}. Thus, causal scaling seed models\nare ruled out as a main component of primordial density fluctuations.\n\nOn the other hand, in recent years other types of scenarios alternative\nto inflation motivated by developments in string theory have been\nproposed. 
Examples are the Pre-Big-Bang model (PBB)\n\\cite{Gasperini:1992em} and the Cyclic\/Ekpyrotic scenario\n\\cite{Khoury:2001wf,Steinhardt:2001st}.\\footnote{Another model is string\ngas cosmology \\cite{BV,NBV}, in which even the dimensionality of the\nobservable universe may be determined dynamically.} The common feature\nof these models is that the universe begins in a contracting phase\nbefore emerging into the expanding phase of Standard Big Bang cosmology\nafter a cosmological bounce. In the contracting phase, comoving scales\nexit the Hubble radius unless the contraction is too rapid. Previous\nstudies have considered quantum mechanical vacuum fluctuations of a\nscalar matter field evaluated when the matter scales exit the Hubble\nradius during the contracting phase. Of course, once they are produced\nin the contracting phase, the fluctuations must be coupled to\nfluctuations in the expanding phase after the bounce. The propagation of\nfluctuations through the bounce phase depends on the details of the\nbounce \\cite{Durrer}. There are models which yield an almost scale\ninvariant spectrum (see e.g. \\cite{Tolley}) after the bounce.\n\nIn this paper, we suggest the possibility that primordial density\nfluctuations are produced by causal seeds such as cosmic strings in the\ncontracting phase, and show that they could generate adiabatic, almost\nscale invariant, and super-Hubble curvature fluctuations in the\nexpanding universe.\\footnote{Another attempt to produce adiabatic\nfluctuations from cosmic strings is discussed in the context of\ntwo-metric theories of gravity \\cite{Avelino:2000iy}.} One simple\npossibility to realize cosmic strings in the contracting universe is to\nembed our model into a so-called cyclic universe, in which cosmic\nstrings are formed in the usual way during a phase transition in the\nexpanding phase, if the matter Lagrangian admits cosmic strings. 
In our\nscenario, different from the cyclic scenario of Steinhardt and Turok\n\\cite{Steinhardt:2001st}, where quantum fluctuations source the\nprimordial density fluctuations, cosmic strings seed the\nperturbations. Of course, in the cyclic scenario, topological defects\nmay be dangerous because they may dominate the energy density of the\nuniverse, as pointed out by Avelino et al. in\nRefs.~\\cite{Avelino:2002hx,Avelino:2002xy}. However, Avelino et al. also\ngive a solution to that problem: they point out that a relatively long\nperiod of cosmic acceleration at low energies (late period of one cycle)\ncan dilute topological defects so that they do not overdominate\nthe universe. A second possibility is to consider the birth of the\nuniverse in the contracting universe (not cyclic). If the universe has a\nfinite birth time, the correlated region at the birth of the universe\ndoes not necessarily cover the whole volume of the universe. Then, the\nrandomness of the values of the underlying field beyond the correlation\nlength leads to the formation of topological defects.\n\nDensity fluctuations produced by causal seed models naturally become\nsuper-Hubble in the contracting phase. More specifically, the key point\nis that the defect-seeded perturbations which are initially isocurvature\nin nature seed a growing adiabatic mode. At the time when the symmetry\n(whose breaking yields the topological defects) is restored, the seed\nterm disappears and the fluctuations become frozen-in super-Hubble scale\nadiabatic perturbations. As long as no dominant isocurvature\nfluctuations are produced in the expanding phase, the fluctuations in\nthe expanding phase will be adiabatic and thus able to explain the TE\nanti-correlation observed in the CMB. 
In the following we consider\ncosmic strings as a concrete example and investigate the nature of the\ndensity fluctuations in detail.\n\n\\section{Adiabatic fluctuations from cosmic strings in a contracting\nuniverse}\n\nThe evolution of cosmic strings in a contracting universe was\ninvestigated in Refs. \\cite{Avelino:2002hx,Avelino:2002xy}. As in these\nreferences, we will assume that the distribution of strings on\nsuper-Hubble scales is like a random walk. We make the simplest\nassumption that the universe is initially matter and then radiation\ndominated in the contracting phase. In this context, it was shown that\ncosmic strings obey the scaling solution asymptotically both in the\nradiation and matter dominated epochs. Specifically, the correlation\nlength $L$ is proportional to $a^{2} \\ln a \\propto (-t) \\ln (-t)$ in the\nradiation era ($a$ is the scale factor and $t(<0)$ is cosmic time, with\nthe bounce time taken as $t=0$). If we take the string loop chopping\nefficiency $\\tilde{c}$ to be a non-zero constant, then the ratio of the\nenergy density in cosmic strings to the total one stays almost\nconstant.\\footnote{There remains the logarithmic dependence in the\nradiation dominated era, which can lead to a small deviation from scale\ninvariance of the final curvature perturbations.} Therefore, the\ndensity fluctuations produced by cosmic strings are almost scale\ninvariant at least initially at the Hubble radius exit.\n\nOn super-Hubble scales, the dynamics of the defect network\nsets up isocurvature fluctuations which in turn act as a\nseed for growing curvature perturbations.\nAs the universe contracts, the temperature of radiation\nincreases, and eventually leads to symmetry \nrestoration and the disappearance of the\ntopological defects. Thus, there will no longer\nbe any isocurvature fluctuations, and the source on\nsuper-Hubble scales in the differential equation for the\nadiabatic mode vanishes. 
Hence, the fluctuations become\nfrozen in as adiabatic ones.\n\nOn super-Hubble scales, the equation for the evolution of the curvature\nperturbation on uniform total density hypersurfaces, $\\zeta$, is given\nby\\footnote{\n Although the density fluctuations of cosmic strings can be large and\n of the order of unity, the linear perturbation theory applies\n because the metric perturbations remain small as long as the energy\n density of the cosmic string is subdominant.\n} \\cite{Wands:2000dp}\n\\begin{equation}\n \\dot{\\zeta} = - \\frac{H}{\\rho+P} \\delta P_{\\rm nad},\n \\label{eq:zetaevo}\n\\end{equation}\nwhere $H \\equiv \\dot{a}\/a$ is the Hubble parameter, $\\rho$ and $P$ are\nthe total energy density and pressure, respectively. A dot represents\na derivative with respect to the cosmic time. The non-adiabatic pressure\nperturbation $\\delta P_{\\rm nad}$ is defined as $\\delta P_{\\rm nad}\n\\equiv \\delta P - c_s^2 \\delta \\rho$ with $\\delta \\rho$ being the\ntotal density fluctuation and $c_s^2 = \\dot{P}\/\\dot{\\rho}$ being the\nadiabatic sound speed. In the multi-fluid case, the total\nnon-adiabatic pressure perturbation consists of two parts. The first\npart comes from intrinsic entropy perturbations which vanish for a\nbarotropic fluid. Therefore, the intrinsic entropy perturbations of\nmatter and radiation vanish. On the other hand, it is non-trivial whether\nthe intrinsic isocurvature perturbation for cosmic strings also\nvanishes, even if strings can be modeled as a simple fluid with equation of state \n$w_{st} \\equiv P_{st} \/ \\rho_{st} = - 1\/3$. However, we expect that \nthe intrinsic entropy perturbation of cosmic strings is negligible and\nwill assume this in the following. 
The second part $\\delta P_{\\rm rel}$ \ncomes from the relative entropy perturbation between different fluids\n$S_{\\alpha\\beta}$ \\cite{Malik:2002jb},\n\\begin{equation}\n \\delta P_{\\rm rel} \\equiv \\frac{1}{6H\\dot{\\rho}} \\sum_{\\alpha,\\beta}\n \\dot{\\rho}_{\\alpha} \\dot{\\rho}_{\\beta} (c_{\\alpha}^2-c_{\\beta}^2) \n S_{\\alpha\\beta},\n \\label{eq:relnonad}\n\\end{equation}\nwhere the relative entropy perturbation between different fluids\n$S_{\\alpha\\beta}$ is given by\n\\begin{equation}\n S_{\\alpha\\beta} = - 3H \n \\left( \\frac{\\delta\\rho_{\\alpha}}{\\dot{\\rho_{\\alpha}}}\n - \\frac{\\delta\\rho_{\\beta}}{\\dot{\\rho_{\\beta}}} \\right),\n \\label{eq:relentropy}\n\\end{equation} \nand the adiabatic sound speed for each component, $c_\\alpha^2$, is\ngiven by $\\dot{P_\\alpha}\/\\dot{\\rho_\\alpha}$ with $\\rho_\\alpha$ and\n$P_\\alpha$ being the energy density and pressure of the component.\n\nNow, let us estimate the amplitude of the curvature perturbation $\\zeta$\nfor a comoving scale $k$. First, we consider a scale $k$ which exits the\nHubble radius during a radiation dominated era. We neglect the curvature\nperturbations which are generated when the corresponding scale is\nsub-Hubble.\\footnote{This can be justified by assuming that the string loop\ndistribution is subdominant and the initial value of the curvature\nperturbation at $t \\rightarrow - \\infty$ vanishes. In fact, we expect\nthat the string loop distribution will be subdominant in a contracting\nuniverse compared to an expanding universe since comoving scales are\nexiting rather than entering the Hubble radius. Loops exit the Hubble\nradius before they collapse through emission of gravitational\nwaves. 
This effect reduces the loop chopping efficiency $\\tilde{c}$, and\nhence the number of produced loops is smaller in a contracting phase.}\nThus, we just follow the evolution of the curvature perturbation from\nthe epoch $t_H(k)$ when a comoving scale $k$ exits the Hubble radius\nuntil the time $t_{cs}$ when the symmetry is restored and the strings\ndisappear.\n\nThe relative entropy perturbation between\nradiation and cosmic strings is \n\\begin{equation}\n S_{rs} \\simeq - 3H \\frac{\\delta \\rho_{\\rm st}}{\\dot{\\rho}_{\\rm st}},\n \\label{eq:entropyrs}\n\\end{equation}\nwhere we have used the fact that \n$\\dot{\\rho}_{\\rm st}\/\\dot{\\rho}_{\\rm rad}$ \nis almost constant and much smaller than unity, and \n$|\\delta \\rho_{\\rm rad}|$ is at most comparable to \n$|\\delta \\rho_{\\rm st}|$. From the scaling solution, it follows that \ncosmic strings can be modeled as a random walk on scales\nlarger than the Hubble radius with step length comparable to\nthe Hubble radius. Then, the density fluctuations of cosmic strings\nfor a super-Hubble scale can be easily estimated as\n\\begin{equation}\n \\delta \\rho_{st}(k) \\simeq N |H| \\mu k_{\\rm phys},\n \\label{eq:deltarhosr}\n\\end{equation}\nwhere $\\mu$ is the mass per unit length of a string, $N={\\cal O}(1)$ is\nthe number of long strings crossing any given Hubble volume, and $k_{\\rm\nphys} \\equiv k\/a$. Notice that $N$ can be different in radiation and\nmatter dominated epochs although the difference is expected to be at\nmost ${\\cal O}(1)$. Inserting these equations into Eq. (\\ref{eq:zetaevo})\nyields\n\\begin{equation}\n \\dot{\\zeta} \\simeq N G \\mu k_{\\rm phys}.\n\\end{equation}\n\nAs stated above, cosmic strings disappear at the time\n$t_{\\rm cs}$ due to symmetry restoration. 
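For completeness, the $O(1)$ factors dropped in this estimate can be tracked as follows (a sketch, assuming strings subdominant during radiation domination): with $c_{\\rm rad}^2=1\/3$ and $c_{\\rm st}^2=-1\/3$, Eqs. (\\ref{eq:relnonad}) and (\\ref{eq:entropyrs}) give $\\delta P_{\\rm nad}\\simeq \\delta P_{\\rm rel}\\simeq -\\frac{2}{3}\\delta\\rho_{\\rm st}$, so that, using $\\rho+P=\\frac{4}{3}\\rho=H^2\/2\\pi G$ and Eq. (\\ref{eq:deltarhosr}) in Eq. (\\ref{eq:zetaevo}),\n\\begin{equation}\n\\dot{\\zeta}=-\\frac{H}{\\rho+P}\\delta P_{\\rm nad}\\simeq \\frac{4\\pi G}{3H}\\delta\\rho_{\\rm st}=-\\frac{4\\pi}{3}N G\\mu k_{\\rm phys},\n\\end{equation}\nwhere the last sign reflects $|H|\/H=-1$ in the contracting phase; the magnitude agrees with the estimate above up to the $O(1)$ factor $4\\pi\/3$. 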
Once\ncosmic strings disappear, the curvature perturbation is conserved at\nleast until the bounce. Thus, the final curvature perturbation before\nthe bounce is estimated as\n\\begin{equation}\n \\zeta = \\int_{t_H(k)}^{t_{\\rm cs}} dt \\dot{\\zeta}\n \\sim N_{\\rm r} G \\mu k (-t_H(k))^{\\frac 12} \n \\sim N_{\\rm r} G \\mu.\n\\end{equation}\nHere we have made use of $k (-t_H(k))^{\\frac 12} = 1\/2$ (in the\nradiation era), and $N_{\\rm r}$ is the value of $N$ (in the\nradiation epoch) which is ${\\cal O}(1)$. Thus,\nthe curvature perturbations are independent of the comoving scale $k$ and\nhence scale invariant at least before the bounce. In the same way, the\ncurvature perturbation for comoving scales whose physical scales exit\nthe Hubble radius during the matter era is estimated as\n\\begin{equation}\n \\zeta = \\int_{t_H(k)}^{t_{\\rm eq}} dt \\dot{\\zeta} +\n \\int_{t_{\\rm eq}}^{t_{\\rm cs}} dt \\dot{\\zeta}\n \\sim N_{\\rm m} G \\mu k (-t_H(k))^{\\frac 13} \n \\sim N_{\\rm m} G \\mu,\n\\end{equation}\nwhere $t_{\\rm eq}$ is the matter-radiation equality time in the\ncontracting phase, we have made use of $k (-t_H(k))^{\\frac 13}\n= 2\/3$ (in the matter era), and $N_{\\rm m}$ is the value of $N$ in\nthe matter epoch. Therefore, the curvature perturbation is\nscale invariant at least before the bounce also on these scales. \n\nAccording to the often-used Hwang-Vishniac \\cite{HV} (Deruelle-Mukhanov\n\\cite{DM}) matching conditions for fluctuations across a space-like\nhypersurface, the curvature perturbation is conserved across the\nbounce. If we apply these matching conditions, we conclude that the final\ncurvature perturbations in the expanding phase are almost scale\ninvariant and hence could be responsible for the present density\nfluctuations if $G \\mu \\sim 10^{-5}$. As emphasized in \\cite{Durrer},\nthere are problems with blindly applying these matching\nconditions. 
Subsequent studies have shown that the actual transfer of\nthe fluctuations depends quite sensitively on the details of the\nbounce. There are cases where the curvature perturbation is conserved\n(see e.g. \\cite{Tsujikawa:2002qc,Copeland:2006tn,Bozza}), but there are\nother examples where this does not hold\n\\cite{Hassan,Tirtho,Cai}. However, if the bounce time is short compared\nto the time scale of the fluctuations of interest, it can be rather\nrigorously shown that the spectrum of $\\zeta$ is maintained through the\nbounce. This can be shown \\cite{Cai,Cai2} by modeling the background\ncosmology with three phases: the initial contracting radiation phase,\nthe ``bounce phase'' during which $H = \\alpha t$, where $\\alpha$ is some\nconstant, and the expanding radiation phase. The matching conditions at\nthe two hypersurfaces between these phases can be consistently applied\n(since the background also satisfies the matching conditions, unlike\nwhat happens in the single matching between the contracting and expanding\nphases which has been applied in the case of the singular Ekpyrotic\nbounce).\nHowever, we would like to point out that, even if the\ncurvature perturbation is {\\it not} conserved through the bounce, the\nscale invariance of the final curvature perturbation still holds true as\nlong as the change in the amplitude of the fluctuations across the\nbounce is independent of the comoving scale. This can be reasonably\nexpected for the modes we are interested in because their momenta are much\nsmaller than the maximal value of $|H|$ around the bounce point\n(assuming that the bounce is smooth), the only energy scale which can be\nset by the bounce.\\footnote{\n Even in the case when the change through the bounce depends on the\n comoving scale, a scale invariant spectrum may be realized by\n considering the time varying tension of cosmic strings discussed in\n Refs. 
\\cite{Yamaguchi:2005gp,Ichikawa:2006rw,Takahashi:2006yc}, which\n compensates for the variation of the curvature perturbation.\n}\n\nFinally, we comment on some subtleties. First of all, cosmic strings may\nbe formed again in the expanding phase. In this case, cosmic strings\nagain produce isocurvature fluctuations in the expanding phase, which\nshould be suppressed to less than $10\\%$ of the total curvature\nperturbations \\cite{Pogosian:2003mz,Endo:2003fr}. Such a suppression may\nbe realized in the case when the constant $N$ for cosmic strings in the\ncontracting phase is much larger than that in the expanding phase. Since\nthe loop chopping efficiency $\\tilde{c}$ is smaller in the contracting\nphase, the constant $N$ there might indeed be larger. It is therefore\nplausible that the constant $N$ for cosmic strings in the contracting\nphase is larger than that in the expanding phase. Another possibility is that the\ncurvature perturbation sourced by fluctuations of cosmic strings in a\ncontracting phase is amplified at the bounce. Another solution is to\nsimply assume that cosmic strings are not produced in the expanding phase\nbecause the symmetry breaking patterns of scalar fields are not\nnecessarily the same in the contracting and the expanding phases.\n\nAnother issue is that cosmic strings generate not only density\nperturbations but also vector and tensor perturbations (gravitational\nwaves). Gravitational waves are produced by oscillations of loops as\nwell as long strings. As explained before, the radius of a loop could\nbecome larger than the Hubble radius before there has been a significant amount\nof gravitational radiation. Therefore, we expect that the relative\namplitude of gravitational waves to scalar metric fluctuations will be\nsmaller in the contracting phase than in the standard scenario of cosmic\nstrings in an expanding universe. 
Regarding the vector mode, it has\nbeen shown that vector perturbations exhibit growing mode solutions in\nthe contracting phase \\cite{Battefeld:2004cd}. In particular, the metric\nperturbations always grow while the matter perturbations stay constant\nin the radiation dominated era. However, the vector perturbations will\ndecrease in the expanding phase. If vector fluctuations are suppressed\n(even slightly) across the bounce (or the scalar fluctuations enhanced),\nthen the vector modes will be sufficiently small today not to destroy\nthe successful agreement with the CMB angular power spectrum. Small but\nnot negligible vector fields could, in fact, be useful for generating\nthe observed large scale magnetic fields, as pointed out in\nRef. \\cite{Battefeld:2004cd}. As a final remark, we mention possible\nnon-Gaussianities in the scenario. Since the distribution of cosmic\nstrings is highly non-Gaussian, the produced density fluctuations may\ngive large non-Gaussianity, though the bispectrum for a simulated string\nmodel in the expanding universe has been shown not to be very large for the\ndiagonal contribution \\cite{Gangui:2001fr}. All these topics are worth\ninvestigating.\n\n\\section{Summary}\n\nWe have shown that adiabatic, super-Hubble, and almost scale invariant\ndensity fluctuations can be produced by cosmic strings in a contracting\nuniverse. Although cosmic strings can only generate isocurvature\nfluctuations in an expanding universe, they can produce adiabatic\nfluctuations in a contracting universe because they eventually\ndisappear due to symmetry restoration. Our findings open\nthe possibility that topological defects could be resurrected as the\nmain source of the current cosmic density fluctuations.\n\n\n\\ack\nM.Y. is grateful to M. Kawasaki for useful\n discussions. T.T. and M.Y. would like to thank R. H. 
Brandenberger for\n kind hospitality at McGill University where this work was finished.\n This work is supported in part by a Canadian NSERC Discovery Grant and\n by the Canada Research Chair program (R.B.), by the Sumitomo\n Foundation (T.T.), and by Grant-in-Aid for Scientific Research from\n the Ministry of Education, Science, Sports, and Culture of Japan\n No.\\,19740145 (T.T.), No.\\,18740157, and No.\\,19340054 (M.Y.).\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nAs artificial intelligence (AI) technologies are playing key roles in our daily lives, developing intelligent systems which can work with humans more effectively (instead of replacing them) is becoming a central research theme \\cite{9153877,peng2022investigations,russell2021human}. This theme is often referred to as \\textsl{hybrid intelligence}, aiming to benefit from the strengths of both human and machine intelligence in solving problems. Developing systems of such capability demands fundamentally novel approaches to major research problems in AI: state-of-the-art systems outperform humans in many cognitive tasks, from playing video games \\cite{hester_deep_2017} to pattern recognition \\cite{liu2019comparison}; however, they fall short when it comes to other tasks such as common sense reasoning, performing causal discovery, and behavioural human capabilities such as explaining their own decisions, adapting to different environments, collaborating with others, etc. A particular challenge in developing such systems lies in making them more interpretable \\cite{9153877,tjoa2020survey,TIDDI2022103627}, which is the main focus of this paper.\n\nAn obvious route to making such systems interpretable is to employ an existing knowledge representation formalism which is inherently tailored towards expressing human knowledge. 
One such type of human knowledge that is relevant in problem solving is captured by the notion of the \\textit{interrogative agenda} (also called research agenda \\cite{enqvist2012modelling}) of an epistemic agent (which will be explained in detail in Section~\\ref{prelim: imterrogative agenda}). Intuitively, given a context, an interrogative agenda abstracts a set of features that an epistemic agent is interested in. In order to express interrogative agendas, we employ the knowledge representation formalism of \\textit{formal concept analysis}. \n\nFormal concept analysis (FCA) is an influential foundational theory in knowledge representation and reasoning \\cite{priss2006formal, qadi2010formal, poelmans2010formal, valtchev2004formal, poelmans2013formal, ganter2012formal, wille1996formal} which provides a framework for categorizing objects w.r.t.~a given set of features.\nThe set of features used in the categorization (formal context in FCA) can be identified as its agenda, and different agendas will correspond to different categorizations. The agenda used to categorize a set of objects may be chosen based on several factors, such as the availability and precision of the data, the categorization methodology, and the purpose of the categorization.\\footnote{A logical framework for studying these different categorizations obtained from different agendas and their interaction was developed in our earlier work \\cite{FLexiblecat2022} and applied to the auditing domain.} In this paper, we focus on obtaining concept lattices (possibly fuzzy) corresponding to different agendas (possibly non-crisp).\nHowever, in many applications, it is unclear which interrogative agenda (Sec.~\\ref{prelim: imterrogative agenda}) is best suited to obtain a categorization that can be useful in dealing with a given problem. Thus, in this work, we focus on the task of using a machine learning algorithm to learn such agendas, and hence a ``good categorization'' for the problem at hand. 
In particular, we will address the tasks of classification and outlier detection.\n\nIn the realm of machine learning, formal concept analysis has been used in the past for classification, outlier detection, rare concept mining, and the identification of rare patterns (Sec.~\\ref{Sec:Classification and outlier detection using concept lattice}). However, to the best of our knowledge, all these methods use a single concept lattice (or its sublattice) to deal with the problems mentioned above. That is, the agenda of the categorization is fixed beforehand. The main difficulty in using such techniques lies in the fact that there are exponentially many subsets of features (and weights) one has to take into account. \nOn the other hand, since some features may not be relevant for a given classification task, removing them can reduce the data collection cost and complexity, and may even improve the accuracy for some tasks. However, determining the set of relevant features can be difficult, and it is an important part of the preprocessing phase for many such algorithms. \n\nIn this paper, we propose a meta-learning algorithm to identify the best-suited agenda (and hence categorization), that is, to estimate the significance of different sets of features for the given task. The incorporation of such an outer loop on top of an existing classification or outlier detection algorithm can potentially increase its generalising power and performance. Another major advantage of such a method is that the learned agendas provide an estimate of the importance of different sets of features for the given task, making our results more explainable. \n\n\\paragraph{Structure of paper.} In Section \\ref{sec:preliminaries}, we provide the relevant preliminaries. In Section \\ref{Sec:Classification and outlier detection using concept lattice}, we give an overview of FCA-based classification and outlier detection algorithms. 
In Section \\ref{sec:Learning interrogative agendas}, we describe the framework for learning agendas and provide a generic learning algorithm. In Section \\ref{sec:Conclusion}, we conclude and give some directions for future research. \n\\section{Preliminaries}\\label{sec:preliminaries}\n\\subsection{Formal concept analysis}\nA {\\em formal context} \\cite{ganter2012formal} is a structure $\\mathbb{P} = (A, X, I)$ such that $A$ and $X$ are sets of {\\em objects} and {\\em features}, respectively, and $I\\subseteq A\\times X$ is the so-called {\\em incidence relation} which records whether a given object has a given feature. That is, for any object $a$ and feature $x$, $a I x$ iff $a$ has feature $x$. \nFormal contexts can be thought of as abstract representations of e.g., databases, tabular data and such.\nEvery formal context as above induces maps $I^{(1)}: \\mathcal{P}(A)\\to \\mathcal{P}(X)$ and $I^{(0)}: \\mathcal{P}(X)\\to \\mathcal{P}(A)$, respectively defined by the assignments \n\\begin{equation}\n I^{(1)}[B]: = \n\\{x\\in X\\mid \\forall a(a\\in B\\Rightarrow aIx)\\},\\quad \n I^{(0)}[Y] = \n\\{a\\in A\\mid \\forall x(x\\in Y\\Rightarrow aIx)\\}.\n\\end{equation}\nA {\\em formal concept} of $\\mathbb{P}$ is a pair \n$c = (\\val{c}, \\descr{c})$ such that $\\val{c}\\subseteq A$, $\\descr{c}\\subseteq X$, and $I^{(1)}[\\val{c}] = \\descr{c}$ and $I^{(0)}[\\descr{c}] = \\val{c}$. \nA subset $B \\subseteq A$ (resp.\\ $Y\\subseteq X$) is said to be {\\em closed} or {\\em Galois-stable} if $Cl(B)=I^{(0)}[I^{(1)}[B]]=B$ (resp.\\ $Cl(Y)=I^{(1)}[I^{(0)}[Y]]=Y$).\nThe set of objects $\\val{c}$ is intuitively understood as the {\\em extension} of the concept $c$, while the set of features $ \\descr{c}$ is understood as its {\\em intension}. 
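The derivation operators $I^{(1)}$, $I^{(0)}$ and the formal-concept condition above are straightforward to compute on small contexts; a minimal sketch, where the toy objects, features, and incidence relation are hypothetical:

```python
# Minimal sketch of the derivation operators I^(1), I^(0) and a
# formal-concept check. The toy objects, features, and incidence
# relation below are hypothetical illustrations.
def intent(B, I, X):
    """I^(1)[B]: the features shared by every object in B."""
    return {x for x in X if all((a, x) in I for a in B)}

def extent(Y, I, A):
    """I^(0)[Y]: the objects having every feature in Y."""
    return {a for a in A if all((a, x) in I for x in Y)}

A = {"a1", "a2", "a3"}
X = {"red", "small"}
I = {("a1", "red"), ("a1", "small"), ("a2", "red")}

B = {"a1", "a2"}
Y = intent(B, I, X)          # the shared features of a1 and a2
# (B, Y) is a formal concept: I^(1)[B] = Y and I^(0)[Y] = B.
assert extent(Y, I, A) == B
```

The Galois closure of a set of objects is then simply `extent(intent(B, I, X), I, A)`, matching the definition of $Cl(B)$ above.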
\nThe set of all formal concepts of $\\mathbb{P}$ (denoted by $L(\\mathbb{P})$) can be partially ordered as follows: for any $c, d\\in L(\\mathbb{P})$, \n\\begin{equation}\nc\\leq d\\quad \\mbox{ iff }\\quad \\val{c}\\subseteq \\val{d} \\quad \\mbox{ iff }\\quad \\descr{d}\\subseteq \\descr{c}.\n\\end{equation}\nWith this order, $L(\\mathbb{P})$ is a complete lattice, the {\\em concept lattice} $\\mathbb{P}^+$ of $\\mathbb{P}$. \n\\subsection{Interrogative agendas}\\label{prelim: imterrogative agenda} \n\nIn epistemology and formal philosophy, the interrogative agenda (or research agenda \\cite{enqvist2012modelling}) of an epistemic agent (or group of agents, e.g.,~users) indicates the set of questions they are interested in, or what they want to know relative to a certain circumstance. \nIntuitively, in any context, interrogative agendas act as cognitive filters that block content which is deemed irrelevant by the agent. Only the information the agent considers relevant is used, e.g.,~in the formation of their beliefs or actions. Deliberation and negotiation processes can be described in terms of whether or how agents succeed and interact in shaping their interrogative agendas, and the outcomes of these processes can be described in terms of the aggregated (or ``common ground'') agenda.\nAlso, phenomena such as polarization \\cite{myers1976group}, echo chambers \\cite{sunstein2001republic} and self-fulfilling prophecies \\cite{merton1948self} can be described in terms of the formation and dynamics of interrogative agendas among networks of agents. \n\nWhen dealing with a classification or outlier detection problem, we may have different agendas for different aims. For example, the agenda for the classification of consumers for a grocery store based on their buying preferences is very different from the agenda of a political analyst trying to classify the same set of people based on their political inclinations. 
Thus, interrogative agendas play an important role in determining a natural or useful categorization for a specific purpose. \n\n\\subsection{Interrogative agendas and flexible categorization}\n\\label{ssec:Interrogative agendas and flexible categorization}\nLet $\\mathbb{P}=(A,X,I)$ be a formal context. For a set of features $Y \\subseteq X$, the formal context induced by $Y$ from $\\mathbb{P}$ is $(A,X,I \\cap A \\times Y)$. Given the set of all the features $X$, the (non-crisp) interrogative agenda of an agent can be described by a mass function on $\\mathcal{P}(X)$. For an agenda represented by $m:\\mathcal{P}(X) \\to [0,1]$, and any $Y \\subseteq X$, $m(Y)$ represents the importance (or intensity of the preference) of the set of features $Y$ according to the agenda given by $m$. We assume that mass functions are normalized, that is, \n\\begin{equation}\n\\sum_{Y \\subseteq X} m(Y)=1.\n\\end{equation}\nAny such mass function induces a probability or preference function $p_m: \\mathcal{R} \\to [0,1]$ such that $p_m((A,X,I \\cap A \\times Y))= m(Y)$, where $\\mathcal{R}$ is the set of all the formal contexts corresponding to the crisp agendas induced by subsets of $X$ (i.e.~the formal contexts corresponding to each $Y \\subseteq X$).\n\nThe agendas of different agents can be aggregated using different Dempster-Shafer rules \\cite{shafer1992dempster, sentz2002combination, denoeux2006cautious} to obtain a categorization corresponding to the aggregated agendas. A logical framework for deliberation between different agents having different agendas is developed in \\cite{FLexiblecat2022}. This framework can be applied to study categorizations when different agents with different interests interact with each other for communication or joint decision making, as is the case in auditing, community analysis, linguistics, etc. 
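The mass function and its normalization can be sketched concretely; the feature names and raw preference weights below are hypothetical:

```python
# Sketch of a (non-crisp) interrogative agenda as a normalized mass
# function on P(X); the feature names and raw preference weights are
# hypothetical.
from itertools import chain, combinations

X = ("color", "volume", "sweetness")

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

raw = {("color",): 2.0, ("color", "volume"): 1.0, ("sweetness",): 1.0}
total = sum(raw.values())
# m(Y) is the normalized importance of the feature set Y.
m = {Y: raw.get(Y, 0.0) / total for Y in powerset(X)}

assert abs(sum(m.values()) - 1.0) < 1e-12   # \sum_{Y \subseteq X} m(Y) = 1
# The induced p_m assigns m(Y) to the induced context (A, X, I ∩ A×Y).
```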
We also describe a method to approximate the importance of individual features from mass functions describing agendas by plausibility transform \\cite{cobb2006plausibility} or pignistic transformation \\cite{smets2005decision}, methods used in Dempster-Shafer theory to transform Dempster-Shafer mass functions to probability functions. These importance values of individual features can be useful in several different applications like feature analysis, clustering, etc. \n\\label{ssec:interrogativeag}\n\n\\section{Classification and outlier detection using concept lattices} \\label{Sec:Classification and outlier detection using concept lattice}\nIn this section, we give an overview of different classification \nand outlier detection techniques using concept lattices.\n\\subsection{Classification using concept lattices}\nDifferent algorithms have been applied to classify objects using formal concept analysis, that is, using concept lattices. \n Fu et al. \\cite{fu2004comparative} provide a comparison between different FCA-based classification algorithms, such as LEGAL \\cite{liquiere1990legal}, GALOIS \\cite{carpineto1993galois}, RULEARNER \\cite{sahami1995learning}, CLNN and CLNB \\cite{xie2002concept}. Prokasheva et al. \\cite{prokasheva2013classification} describe different classification algorithms using FCA and challenges to such methods. \n \n In \\cite{kuznetsov2004machine}, Kuznetsov describes a classification algorithm that uses the JSM-method \\cite{finn1989generalized,FINN1983351}. He proposes to use concept lattices and training examples to form hypotheses as follows. Let $(A, X, I)$ be a formal context for the set of objects $A$ and the set of features $X$. We add an additional target feature $x \\not\\in X$ for denoting a class of an object. 
This partitions $A$ into three sets of objects $A_+$, $A_-$, and $A_\\tau$, consisting of objects known to have feature $x$, objects known not to have feature $x$, and objects for which it is unknown whether or not they have it, respectively. Positive hypotheses for the JSM-method based on this formal context are given by the sets of features that are shared by a set of positive examples but not by any negative example. That is, a set $H \\subseteq X$ is a positive hypothesis iff $I^{(0)}[H] \\cap A_+ \\neq \\emptyset$ and $H \\not\\subseteq I^{(1)}[\\{a\\}]$ for any $a \\in A_-$. Negative hypotheses are defined analogously. For any object $b$, it will be classified positively (resp. negatively) if $I^{(1)}[\\{b\\}]$ contains a positive (resp. negative) hypothesis but no negative (resp. positive) hypotheses. In case $I^{(1)}[\\{b\\}]$ contains both or neither, the classification is undetermined, or some other method, like majority voting, can be used to classify $b$. The method sketched above has been used with different modifications in many FCA-based classification algorithms \\cite{ganter2000formalizing,kuznetsov2013fitting,onishchenko2012classification}. Some classification algorithms based on FCA use concept lattices to augment other classifiers like SVM \\cite{carpineto2009concept}, the Naive Bayes classifier, and the nearest neighbour classifier \\cite{xie2002concept} in preprocessing or feature selection. Other FCA-based classification methods include biclustering \\cite{onishchenko2012classification} and cover-based classification \\cite{maddouri2004towards}. \n\\subsection{Outlier detection using concept lattices}\nOutlier detection can be considered as a special case of binary classification where the classes are outliers and non-outliers. Thus, any of the above-mentioned algorithms can be used for outlier detection using concept lattices. 
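The JSM-style hypothesis formation and classification described above can be sketched as follows; the toy context (positive examples `p1`, `p2`, negative example `n1`, test object `b`) is hypothetical, and candidate hypotheses are generated brute-force as intents of subsets of examples:

```python
# Hedged sketch of JSM-style classification: hypotheses are intents of
# subsets of the positive (resp. negative) examples that are not
# contained in the intent of any counterexample. Toy data only.
from itertools import chain, combinations

def intent(B, I, X):
    return frozenset(x for x in X if all((a, x) in I for a in B))

def hypotheses(examples, counterexamples, I, X):
    subs = chain.from_iterable(
        combinations(examples, r) for r in range(1, len(examples) + 1))
    hs = {intent(B, I, X) for B in subs}
    # Keep nonempty intents not contained in any counterexample's intent.
    return {H for H in hs
            if H and not any(H <= intent({a}, I, X) for a in counterexamples)}

def classify(b, pos_h, neg_h, I, X):
    ib = intent({b}, I, X)
    p = any(H <= ib for H in pos_h)
    n = any(H <= ib for H in neg_h)
    return "+" if p and not n else ("-" if n and not p else "undetermined")

X = {"x1", "x2", "x3"}
I = {("p1", "x1"), ("p1", "x2"), ("p2", "x1"), ("p2", "x3"),
     ("n1", "x2"), ("n1", "x3"), ("b", "x1")}
pos_h = hypotheses({"p1", "p2"}, {"n1"}, I, X)
neg_h = hypotheses({"n1"}, {"p1", "p2"}, I, X)
assert classify("b", pos_h, neg_h, I, X) == "+"
```

Here `b` shares the positive hypothesis $\{x_1\}$ (the common features of both positive examples) and no negative hypothesis, so it is classified positively.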
Some other methods or algorithms based on formal concept analysis have also been studied specifically for outlier detection or similar tasks like mining rare concepts or patterns \\cite{sugiyama2013semi,okubo2010algorithm,zhang2014outlier}. The simplest method to define the outlier degree of an element from a concept lattice is by using the size of its closure (i.e.~the smallest category containing the element). A smaller closure of an object indicates that only a small number of elements have the same features as the object, and thus that it is likely to be an outlier. Sugiyama \\cite{sugiyama2013semi} suggests that the outlierness of an object in a concept lattice should not depend on the size of its closure, but rather on the number of concepts it creates. He suggests defining the outlierness score of a set of objects $B \\subseteq A$ as\n\\begin{equation}\nq(B): = |\\{ (G,Y) \\in \\mathbb{P}^+ \\mid B \\subseteq G \\, \\text{or}\\, I^{(1)}[B] \\subseteq Y \\}|.\n\\end{equation}\nThis definition is better suited to detecting outliers that belong to a densely agglomerated cluster which is located sparsely when viewed against the whole set of objects. Zhang et al.~\\cite{zhang2014outlier} propose an outlier mining algorithm based on constrained concept lattices to detect local outliers using a sparsity-based method. One of the key advantages of using formal concept analysis in classification or outlier detection over other algorithms is that FCA can be used to deal with both continuous and discrete attributes simultaneously, through the discretization of continuous attributes by conceptual scaling (Sec. \\ref{ssec:COnceptual scaling}).\n\nOne of the major issues in applications of formal concept analysis is the complexity of the algorithms involved. The fundamental reason behind the high complexity is that in the worst-case scenario the number of categories in a concept lattice grows exponentially with the number of objects and features involved. 
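Sugiyama's score $q(B)$ can be computed brute-force on a tiny (hypothetical) context by closing every subset of objects to enumerate the concept lattice; this is exponential and meant only as an illustration:

```python
# Brute-force sketch of the outlierness score q(B) above; the tiny
# context is hypothetical and the concept lattice is enumerated naively
# by closing every subset of objects (exponential, illustration only).
from itertools import chain, combinations

def intent(B, I, X):
    return frozenset(x for x in X if all((a, x) in I for a in B))

def extent(Y, I, A):
    return frozenset(a for a in A if all((a, x) in I for x in Y))

def concepts(A, X, I):
    subs = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    return {(extent(intent(B, I, X), I, A), intent(B, I, X))
            for B in map(frozenset, subs)}

def q(B, A, X, I):
    iB = intent(B, I, X)
    return sum(1 for (G, Y) in concepts(A, X, I) if B <= G or iB <= Y)

A = frozenset({"a1", "a2", "a3", "a4"})
X = frozenset({"x1", "x2", "x3"})
I = {("a1", "x1"), ("a1", "x2"), ("a2", "x1"), ("a2", "x2"),
     ("a3", "x1"), ("a3", "x3"), ("a4", "x3")}
assert len(concepts(A, X, I)) == 6
assert q(frozenset({"a3"}), A, X, I) == 5   # a3 is comparable with more concepts
assert q(frozenset({"a1"}), A, X, I) == 4
```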
Several techniques have been devised in the past to overcome this complexity problem \\cite{cole1999scalability,dias2010reducing,singh2017concepts}. \n\n\\subsection{Discretization of continuous attributes and conceptual scaling} \\label{ssec:COnceptual scaling}\nIn order to apply formal concept analysis to attributes with continuous values, we need to discretize them. The process of converting many-valued (possibly continuous-valued) attributes into binary attributes or features for FCA is known as conceptual scaling \\cite{ganter1989conceptual}. Scaling is an important part of most FCA-based techniques and has been studied extensively \\cite{ganter1989conceptual,prediger1997logical,prediger1999lattice}. Choosing the correct scaling method depends on the specific task the concept lattice is used for. \n\\section{Learning interrogative agendas} \\label{sec:Learning interrogative agendas}\nFormal concept analysis categorizes a given set of objects w.r.t.~a given set of features. Thus, the outlier detection (or classification) task at hand depends on the features (or attributes) under consideration. However, in many applications it is hard to estimate which features are of importance and how important they are, that is, the correct agenda, for a given task. Here we describe a framework that addresses this problem by using machine learning to learn a ``good'' agenda for the given task. This provides a way to improve the performance of FCA-based classification or outlier detection algorithms by choosing the correct agenda. This also makes the results more explainable by providing the importance value of each set of features. 
\n\\subsection{Space of possible agendas}\nAs discussed in Section \\ref{ssec:Interrogative agendas and flexible categorization}, a (non-crisp) interrogative agenda on a given set of features $X$ is given by a mass function $m:\\mathcal{P}(X) \\to [0,1]$, where for any $Y \\subseteq X$, $m(Y)$ denotes the importance of the set of features $Y$ in the categorization.\nThe mass function $m$ induces a probability function $p_m:\\mathcal{R} \\to [0,1]$, where $\\mathcal{R}$ is the set of all the (crisp) formal contexts induced from $\\mathbb{P}=(A,X,I)$ by different crisp agendas, i.e.~subsets of $X$. For any categorization (formal context) $\\mathbb{P} \\in \\mathcal{R}$, $p_m(\\mathbb{P})$ denotes the likelihood assigned or preference given to the categorization $\\mathbb{P}$ by the agenda $m$. Thus, the set of all possible non-crisp categorizations (resp. non-crisp agendas) induced from a context $\\mathbb{P}$ is given by the set of all the probability functions on $\\mathcal{R}$ (resp. the set of all the possible mass functions on $\\mathcal{P}(X)$). As discussed in the introduction, we want to learn a ``good'' agenda that\nleads to a categorization that can be used to complete a given task effectively. This corresponds to learning a probability function $p$ on $\\mathcal{R}$ which represents a suitable categorization for the given task. That is, we use machine learning to search for a ``good'' function in the space of probability functions on $\\mathcal{R}$.\nFor the sake of computational and notational convenience, here we propose the following simplifications.\n\nLet $\\mathbb{R}$ be the set of real numbers. Let $f:\\mathcal{R} \\to \\mathbb{R}$ be a map assigning a weight $f(\\mathbb{P}) \\in \\mathbb{R}$ to every $\\mathbb{P} \\in \\mathcal{R}$. 
For any $\\mathbb{P} \\in \\mathcal{R}$, $f(\\mathbb{P})$ denotes the importance (or preference) assigned to the context $\\mathbb{P}$ or to the corresponding set of features $Y$, where $\\mathbb{P}=(A,X, I \\cap A \\times Y)$. We call any such function $f$ a non-crisp agenda, as it gives weights (representing importance) to different sets of features. Any such function can be seen as a real-valued vector of dimension $|\\mathcal{R}|$. Thus, the set of all such functions is isomorphic to the space $\\mathbb{R}^{|\\mathcal{R}|}$. As this space is linear, the shift from probability functions on $\\mathcal{R}$ to real-valued functions simplifies the task of learning an agenda (weight function) that minimizes the loss using a simple gradient descent method. \n\nThe weights assigned to lattices can be interpreted as probabilities on $\\mathcal{R}$ (and hence as mass functions on $\\mathcal{P}(X)$) via normalization when all the weights are non-negative. Negative weights suggest that the corresponding categorization is opposite to the preferred categorization for the task at hand. For example, suppose we are interested in detecting elements with an abnormally high value of a feature $f_1$, while the outlier detection method used finds \noutliers with a low value of $f_1$. Then the learning algorithm is likely to assign a negative weight to the agenda $\\{f_1\\}$. \n\nAs discussed earlier, one of the major problems in applications of formal concept analysis is the complexity of the algorithms involved. Here, we are proposing to consider priority (or weight) functions on a set of different concept lattices corresponding to different agendas. As the number of different (crisp) agendas induced from a set $X$ of features is exponential in $|X|$, this may add another exponential factor to the complexity of the algorithm. In many applications where the number of features is large, this may make the problem computationally infeasible. 
Thus, in most applications we need to choose a smaller set of concept lattices or (crisp) agendas as a basis, that is, the set of (crisp) concept lattices on which the weight functions are defined. We propose the following strategies for this choice. \n\n\\begin{enumerate}\n \\item \\textbf{Choosing agendas that consist of a small number of features.} In this strategy, we choose the (crisp) agendas consisting of $\\alpha$ or fewer features to construct basis concept lattices, for some fixed $\\alpha\\ll |X|$. This is based on the idea that tasks like classification or outlier detection can be performed with good accuracy by considering only a small number of features together. This is especially the case with tasks involving epistemic components, as humans use a limited number of features in combination for basic tasks like comparison and classification. As these agendas consist of a small number of features, the number of concepts in these concept lattices is small. This makes the computational complexity low for most algorithms operating on concept lattices. Thus, this method can be applied for finding agendas when the algorithms may have high computational complexity for lattices with a large number of concepts. In some situations, it may also be useful to add the full concept lattice (the lattice corresponding to the full feature set $X$) to the set of basis lattices. This allows us to consider the full concept lattice with all available information for the task at hand while having the possibility of giving higher or lower (compared to other features) importance to some small subsets of features. For example, if the weights attached to all the lattices except those given by the agendas $\\{f_1\\}$ and $X$ are close to $0$ and the weights assigned to these agendas are similar, it corresponds to the agenda in which the set of all features and $\\{f_1\\}$ are the only important sets of features. 
Thus, the concept lattice based on $f_1$ alone would be of high significance. \n \n \\item \\textbf{Choosing important agendas based on prior or expert knowledge.} For some tasks, we may have prior or expert knowledge assigning different importance or priority to some lattices or agendas. In such cases, these lattices are taken as the set of basis lattices. This provides a way to incorporate prior or expert knowledge into other algorithms using formal concept analysis. \n \\item \\textbf{Choosing agendas adaptively.} In this strategy, we start with a set of agendas given by all the sets consisting of at most $\\alpha$ features for some small $\\alpha$ (usually taken as $1$). We use machine learning to learn the weights assigned to them, and then drop all the ones which get assigned a very low weight (after normalization). We then consider agendas consisting of any set of features that is a subset of the union of the agendas that are not removed in the first step. Choosing these agendas can be interpreted as considering combinations of features that are deemed important in the first learning step. We then repeat the learning process with this new set of lattices. We keep repeating this process until all the agendas (lattices) added in the last step get assigned low weights or we reach $X$ (the full concept lattice). In this way, we recursively check the possible combinations of agendas deemed to be important so far in the next recursive step. This method works on the assumption that if a feature is not important on its own, then it is unlikely to be part of a set of features that is important. However, this assumption may fail in several situations. In such cases, this method should not be used to choose a basis. \n \\end{enumerate}\n There can be other effective strategies for choosing basis lattices for different tasks and algorithms. 
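The adaptive strategy can be sketched as follows; `learn_weights` is a hypothetical stand-in for the actual training loop, here scoring agendas by overlap with a fixed "relevant" feature set:

```python
# Sketch of the adaptive basis-selection strategy: start from singleton
# agendas, learn weights, drop low-weight agendas, then consider subsets
# of the union of the survivors. `learn_weights` is a hypothetical
# stand-in for the training loop.
from itertools import combinations

def subsets_of(fs, max_size):
    return [frozenset(c) for r in range(1, max_size + 1)
            for c in combinations(sorted(fs), r)]

def adaptive_basis(X, learn_weights, threshold=0.05, max_rounds=10):
    basis = subsets_of(X, 1)                 # singleton agendas first
    kept = basis
    for _ in range(max_rounds):
        w = learn_weights(basis)             # normalized weight per agenda
        kept = [Y for Y in basis if w[Y] >= threshold]
        pool = frozenset().union(*kept) if kept else frozenset()
        new_basis = subsets_of(pool, len(pool))
        if set(new_basis) <= set(basis) or pool == frozenset(X):
            return kept                      # nothing new to try
        basis = new_basis
    return kept

X = {"f1", "f2", "f3"}
relevant = {"f1", "f2"}

def learn_weights(basis):                    # hypothetical scorer
    scores = {Y: len(Y & relevant) for Y in basis}
    total = sum(scores.values()) or 1
    return {Y: s / total for Y, s in scores.items()}

result = adaptive_basis(X, learn_weights)
assert frozenset({"f1", "f2"}) in result     # the useful combination survives
```

The irrelevant feature `f3` is dropped in the first round, so its combinations are never explored, which is exactly the assumption the strategy relies on.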
\n \\subsection{Learning algorithm}\n Once the set of possible agendas (or concept lattices) is chosen, we apply some \n classification or outlier detection algorithm on each of these. For every lattice $\\mathbb{L} \\in \\mathcal{R}$, we start by assigning it a random weight $w \\in \\mathbb{R}$. Let $Alg$ be any algorithm which performs classification or outlier detection for a fixed concept lattice.\n \n Suppose $Alg$ is a classification (resp. outlier detection) algorithm classifying a set $A$ of objects into $n$ classes using concept lattices. For any object $a$ and a class $k$, let $Alg_k(a, \\mathbb{L})$ (resp. $Alg(a, \\mathbb{L})$) denote the membership of the object $a$ in the class $k$ (resp. the outlier degree) according to the classification algorithm $Alg$ acting on the lattice $\\mathbb{L}$. Notice that we allow our classifiers (resp. outlier detection algorithms) to be interpreted as fuzzy or probabilistic, so that the membership value (resp. outlier degree) of $a$ belongs to $[0,1]$. For an algorithm $Alg$ with crisp output, the value $Alg_k(a, \\mathbb{L})$ (resp. $Alg(a, \\mathbb{L})$) will be either $0$ or $1$. For a given weight function $w: \\mathcal{R} \\to \\mathbb{R}$, we say that the membership of $a$ in the class $k$ (resp. outlier degree of $a$) assigned by the algorithm $Alg$ acting on a non-crisp categorization described by $w$ is \n \\begin{equation}\n \\label{eqn:outputs}\n out_k(a,w) = \\frac{\\sum_{\\mathbb{L} \\in \\mathcal{R} } w(\\mathbb{L})Alg_k(a, \\mathbb{L})}{\\sum_{\\mathbb{L} \\in \\mathcal{R} }w(\\mathbb{L})}.\n \\end{equation}\n Intuitively, this corresponds to taking the weighted average of the results given by $Alg$ on the lattices, with weights provided by the agenda $w$. Let $loss$ be a loss function for a given classification task, and let $loss(out)$ be the total loss for the classification (resp. outlier detection) when classes (resp. outlier degrees) are assigned by $out_k(a,w)$ (resp. $out(a,w)$). 
We use a gradient descent method to learn the agenda $f_0$ that minimizes the loss. We then use the learnt agenda $f_0$ to assign a class to an object, that is, for any test object $b$, its predicted membership in class $k$ (resp. its outlier degree) is $ Alg_k(b,f_0) $ (resp. $Alg(b, f_0)$). \n \n\\begin{algorithm}\n\\footnotesize\n\\caption{Meta-Learning Algorithm for Interrogative Agendas}\n\\hspace*{\\algorithmicindent} \\textbf{Input:} a set of objects $A$, a set of features $X$, a training set $T\\subseteq A$, and a map $y:T \\to C$ representing the labels on the training set; an algorithm $Alg$ that takes as input an object and a concept lattice in $\\mathcal{R}$, and outputs an element of $\\mathbb{R}^C$ representing its prediction for each class; a loss function $loss$ that compares two classifications and outputs a real number; and a number of training epochs $M$.\\\\\n\\hspace*{\\algorithmicindent} \\textbf{Output:} A model that classifies objects in $A$.\n\\begin{algorithmic}[1]\n\\Procedure{Train}{$A$, $X$, $T$, $y$, $Alg$, $loss$, $M$}\n \\State $\\mathbb{L}_1,\\ldots,\\mathbb{L}_n \\leftarrow $ \\textbf{compute} the concept lattices of $\\mathcal{R}$\n \\State \\textbf{let} $predictions$ be an empty map from $A$ to $\\mathbb{R}^C$\n \\State \\textbf{let} $w$ be an array of weights of length $n$ initialized with random values in $\\mathbb{R}$\n \\For{$e = 1, \\ldots, M$ } \n \\For{$a \\in T$, $k \\in C$}\n \\State $predictions[a][k] \\leftarrow \\frac{\\sum_{i = 1}^n w(\\mathbb{L}_i)Alg_k(a, \\mathbb{L}_i)}{\\sum_{i = 1}^n w(\\mathbb{L}_i)}$\n \\EndFor\n \\State \\textbf{update} $w$ with an iteration of gradient descent using $loss(predictions)$\n \\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\nA generic algorithm for outlier detection can be given in a similar manner. \n\n\\subsection{Example} \nLet us consider the following toy data table providing some information on different types of apples.
It contains information on the color, volume, sweetness, locality, and price of the apples. We assume that all apples under consideration are either green or red. For conceptual scaling, we divide sweetness, price, and volume into low, medium, and high. This converts these continuous-valued attributes into discrete-valued ones. The set of features is obtained by considering each value of an attribute as a different feature. For example, high volume, red color, and medium price are a few of them. \n\n\\begin{table*}[h]\n \\centering\n \\begin{tabular}{c c c c c c}\n \\hline\n \\textbf{Type}& \\textbf{Color}&\\textbf{ Volume}&\\textbf{ Sweetness} &\\textbf{ Local}& \\textbf{Price}\\\\\n \\hline\n 1 & red & High & High & Yes & Medium\\\\\n 2 & green & High & High & Yes & Medium \\\\\n 3 & red & Medium & Medium & Yes& Medium\\\\\n 4 & green & Low & High & No& Medium\\\\\n 5 & green & High & Medium & No & Low\\\\\n 6 & red & Medium & Low & Yes& Low \\\\\n 7 & green & High & Medium & Yes &Low \\\\\n 8 & green & High & Medium & Yes& High\\\\\n \\hline\n \\end{tabular}\n \\caption{Exemplary data table containing the information on different types of apples w.r.t. attributes such as color, volume, sweetness, et cetera. }\n \\label{Table:data table}\n\\end{table*}\n\nLet $A$ and $X$ be the set of all types of apples and the set of features, respectively. The (non-crisp) agendas of interest to us are the ones assigning mass to an attribute and not to an individual feature. That is, we consider basis lattices corresponding to feature sets that contain all the values of a given many-valued attribute. As an example, if a set $Y \\subseteq X$ in the agendas corresponding to a basis lattice contains the feature high volume, then it must also contain the features low and medium volume. We use volume to denote the set of features \\{high volume, low volume, medium volume\\}. A similar convention is used for the other attributes as well.
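To make the derivation operators concrete, here is a small sketch that encodes the table above as a scaled formal context and computes $I^{(1)}$ (the common features of a set of objects) and $I^{(0)}$ (the objects sharing a set of features). The string encoding of the features is our own illustrative choice.

```python
# Scaled formal context for the apple table: every (attribute, value)
# pair of the table becomes a binary feature.
apples = {
    1: {"red",   "vol:High",   "sweet:High",   "local:Yes", "price:Medium"},
    2: {"green", "vol:High",   "sweet:High",   "local:Yes", "price:Medium"},
    3: {"red",   "vol:Medium", "sweet:Medium", "local:Yes", "price:Medium"},
    4: {"green", "vol:Low",    "sweet:High",   "local:No",  "price:Medium"},
    5: {"green", "vol:High",   "sweet:Medium", "local:No",  "price:Low"},
    6: {"red",   "vol:Medium", "sweet:Low",    "local:Yes", "price:Low"},
    7: {"green", "vol:High",   "sweet:Medium", "local:Yes", "price:Low"},
    8: {"green", "vol:High",   "sweet:Medium", "local:Yes", "price:High"},
}

def intent(objects):
    """I^(1): features shared by all the given objects."""
    sets = [apples[a] for a in objects]
    return set.intersection(*sets) if sets else set.union(*apples.values())

def extent(features):
    """I^(0): objects possessing all the given features."""
    return {a for a, fs in apples.items() if set(features) <= fs}

# A feature set H is Galois-stable iff intent(extent(H)) == H.
H = {"sweet:Medium", "price:Low"}
assert extent(H) == {5, 7}  # medium-sweetness, low-price apples are types 5 and 7
```

Restricting the feature universe to an agenda $Y$ amounts to intersecting each row with $Y$ before applying these two operators.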
\n\n\nLet $I \\subseteq A \\times X$ be the incidence relation and\nlet $P$ be a customer. Suppose we are interested in classifying apples into the types customer $P$ likes (class 1) and does not like (class 2). Given a formal context (concept lattice) $\\mathbb{P}= (A, X, I \\cap A \\times Y)$, describing a categorization of these types of apples for a given agenda of interest $Y$, we use the following simple algorithm to predict the class for a new type of apple. \nLet $A_+$ and $A_-$ be the sets of apples known to be in class 1 and class 2, respectively (from the training set). A set of features $H \\subseteq Y$ is said to be a positive (resp. negative) hypothesis w.r.t.~a lattice $(A, X, I \\cap A \\times Y)$ iff $H$ is Galois-stable, $I^{(0)}[H]$ is non-empty, and $I^{(0)}[H] \\cap A_-=\\emptyset$ (resp.~$I^{(0)}[H] \\cap A_+=\\emptyset$). For any new element $t$, \nwe put it in class 1 (resp. class 2) if the category $ I^{(1)}[\\{t\\}]$ contains only positive (resp. negative) hypotheses. The algorithm is inconclusive when it contains no hypothesis of either type (no information) or contains both types of hypotheses (inconsistent information). \n\n Suppose the classification of apples of types 1--8 into classes 1 and 2 for customer $P$ is $Class \\, 1=\\{1,2,4,5, 7\\}$ and $Class\\, 2=\\{3,6,8\\}$, and suppose also that we use the full concept lattice (that is, the agenda $Y=X$). Let $t_0$ be a new type of apple that is green, has high volume, high sweetness, is local, and has a high price. Consider the hypotheses $H_1=$ \\{High sweetness\\} and $H_2=$ \\{Green, local\\}, which are both contained in $ I^{(1)}[\\{t_0\\}]$. The hypothesis $H_1$ is positive while $H_2$ is negative. Thus, the above classification algorithm cannot classify this object, as the available information is inconsistent. However, in many cases, some subsets of features are of much more importance to a customer than others.
For example, the above classification hints that customer $P$ considers sweetness and price to be more important features than color or locality. Our algorithm for learning agendas can help recognize this difference in the importance of different sets of features and allow us to classify such elements.\n \n Suppose we use our method to find the best categorization (or agenda) for this task using the above classification algorithm, with the set of basis lattices consisting of the lattices given by agendas comprising one attribute (as discussed earlier, one attribute can correspond to multiple features due to scaling). We start with random weights assigned to each of these lattices. We then use the classification algorithm described above to classify new types of apples into classes 1 and 2 using each of these lattices. We then sum over the weights of the lattices in which elements are assigned to either class. The new object is assigned to the class which has the higher mass (the algorithm is indecisive if such a class does not exist). We use machine learning (gradient descent) to train the algorithm to find the best weights for this classification task. \n \nDuring the training phase, our algorithm can (generally) learn that the attribute (or set of features) \\{sweetness\\} matters much more to customer $P$ than the other features. Thus, a high weight will be attached to the lattice with agenda \\{sweetness\\}, and the above algorithm in combination with our method assigns $t_0$ to class 1. \nAdding this method on top of a classification algorithm may give a better classification (that is, more elements classified correctly with a given amount of training samples), provided our learnt information `sweetness is much more important for $P$ in decision-making' is true. \n\nSimilarly, higher (resp. lower) masses attached to agendas consisting of the values of a single attribute are helpful for better categorization when this attribute is more (resp.
less) important for the customer. Thus, using machine learning techniques (for example, gradient descent when possible) to learn the best possible agenda for categorization, complementing the classification algorithm, can improve its accuracy with less training. Considering more basis lattices may further improve the accuracy and sample complexity. For example, it can be seen that types 5 and 7, which have medium sweetness and low price, belong to class 1. This provides us with another likely hypothesis: the customer likes apples whose sweetness is medium (not necessarily high) but which have a low price. This hints that the agenda \\{sweetness, price\\} may be of significant importance to the customer. In case this agenda is indeed more important to the agent, the learning would assign it a high weight during training and thus allow us to make more accurate predictions with fewer samples. However, increasing the number of basis lattices may increase the computational complexity significantly, meaning that such a decision needs to be made judiciously. \n\nThis simple example shows that the classification algorithm described above can be improved in terms of accuracy, sample complexity, and explainability by adding a learning step for finding the best agenda for categorization. Adding this step to the different algorithms discussed in Section \\ref{Sec:Classification and outlier detection using concept lattice}, used for classification and outlier detection using concept lattices, can improve these algorithms in a similar manner. This is especially the case for tasks in which the importance of different features may be hard to estimate beforehand. The obtained agendas can be defined formally using the logical framework described in \\cite{FLexiblecat2022}. In that paper, a logical model was used to represent deliberation between agents with different agendas.
This framework can also be used to model deliberation or interaction between different learning algorithms by aggregating the learnt agendas using the different techniques described in \\cite{FLexiblecat2022}. The agendas inferred by our learning algorithm can be used for further tasks like aggregation from different sources. For example, if for two different classifiers the agendas learned are given by the mass functions $m_1$ and $m_2$ on $\\mathcal{P}(X)$, then a combined classifier that takes both into account can be obtained by choosing the agenda $F(m_1, m_2)$, where $F$ is a suitable Dempster-Shafer combination rule \\cite{sentz2002combination,smets1993belief,denoeux2006cautious}, and then applying the classification algorithm to the resulting lattice. \n\n\\section{Conclusion and future directions} \\label{sec:Conclusion}\nIn this paper, targeting the explainability line of hybrid intelligence research~\\cite{9153877}, we proposed a meta-learning algorithm to learn a \"good\" (interrogative) agenda for categorization (which is then used by a potential FCA-based classification or outlier detection algorithm). Adding such a learning step to a given algorithm allows us to improve the accuracy and sample complexity of the procedure while also making it more explainable. On the empirical side, a performance evaluation and an ablation study of the results of employing different FCA-based classification and outlier detection algorithms are an avenue of future research. Another line of investigation is the transferability analysis of \"good\" agendas, e.g., how much knowledge is transferred and how good the data efficiency is when such an agenda is used on previously unseen environments\/categorizations.
It is also worth extending this methodology to other interesting application domains such as knowledge discovery, data visualization, information retrieval, etc.\n\nOn the theoretical side, this framework can be used to model deliberation between agendas learnt by different algorithms, providing us with a way to study their interaction, comparison, or combination. Within the interpretation of taking the concept lattice as expert knowledge, the learnt agendas can also be aggregated or compared with the agendas of different experts, allowing us to combine learning and expert knowledge in categorization. From a multiagent systems perspective, it is especially useful to model subjective categorizations involving multiple agents (human experts and algorithms) with different agendas or goals interacting with each other. In future work, we consider a variety of directions, e.g., investigating desirable properties of various aggregation mechanisms, the representational power (such as proportionality and fairness) of induced agendas for multiple parties, convergence and robustness guarantees for the \"good\" agendas, computational complexity analysis of hard and easy cases for (non-)crisp agendas, and extending our method to a more general framework in order to tackle the problem of feature selection in a uniform way.\n\nThe meta-algorithm described in the present paper is currently being employed in the development of an outlier detection algorithm, with good results. It has been tested on the datasets from the ELKI toolkit \\cite{Campos2016} and compared against the algorithms discussed therein. A detailed report of the results will be available in the future.\n\n\n\n\\begin{acknowledgments}\n Erman Acar is generously supported by the Hybrid Intelligence Project which is financed by the Dutch Ministry of Education, Culture and Science with project number 024.004.022.
Krishna Manoorkar is supported by the NWO grant KIVI.2019.001 awarded to Alessandra Palmigiano.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\nRecently the Collider Detector at Fermilab (CDF) Collaboration has measured the mass of the $W$ boson to be $80.4335\\pm 0.0094~\\mathrm{GeV}$~\\cite{CDF:2022hxs}, which deviates from the Standard Model (SM) prediction of $80.357\\pm 0.006~\\mathrm{GeV}$~\\cite{ParticleDataGroup:2020ssz} and seems to indicate new physics beyond the SM. Many works have appeared discussing this topic~\\cite{Cirigliano:2022qdm,Borah:2022obi,Chowdhury:2022moc,Arcadi:2022dmt,Zhang:2022nnh,Mondal:2022xdy,Nagao:2022oin,Kanemura:2022ahw,Kawamura:2022uft,Peli:2022ybi,Ghoshal:2022vzo,Perez:2022uil,Zheng:2022irz,Ahn:2022xeq,Heo:2022dey,Crivellin:2022fdf,Endo:2022kiw,Du:2022brr,Cheung:2022zsb,DiLuzio:2022ziu,Balkin:2022glu,Biekotter:2022abc,Krasnikov:2022xsi,Paul:2022dds,Babu:2022pdn,DiLuzio:2022xns,Bagnaschi:2022whn,Heckman:2022the,Lee:2022nqz,Cheng:2022jyi,Bahl:2022xzi,Song:2022xts,Asadi:2022xiy,Athron:2022isz,Sakurai:2022hwh,Fan:2022yly,Zhu:2022scj,Arias-Aragon:2022ats,Cacciapaglia:2022xih,Blennow:2022yfm,Strumia:2022qkt,Athron:2022qpo,Yang:2022gvz,deBlas:2022hdk,Tang:2022pxh,Du:2022pbp,Campagnari:2022vzx,Zhu:2022tpr,Fan:2022dck}.\nIn this work we will explore physics beyond the SM which can give the observed mass of the $W$ boson at tree level. \n\nIn the SM, the masses of the $W$ boson and the $Z$ boson are given by the Higgs mechanism. Since the $Z$ is a combination of the $B$ boson and the $W^{3}$ boson, which is a component of the gauge triplet $W^{i}$, the masses of the $W$ boson and the $Z$ boson are connected, and it is difficult to change the mass of the $W$ boson alone. One way to alter the mass of the $W$ boson is to mix the $Z$ boson with another vector boson.
Mixing the $Z$ boson with another boson will inevitably alter the mass expression of the $Z$ boson, which may alter the value of the $\\mathrm{SU}(2)_L$ gauge coupling and thus the mass of the $W$ boson. There are usually two kinds of mixing: direct mixing in the mass matrix and kinetic mixing. Since the normalization of the kinetic mixing terms results in mass mixing, we will consider two models in this work: the Derivative Portal Dark Matter (DPDM) model~\\cite{Zeng:2022llh} and the U(1) model~\\cite{Holdom:1990xp,Lao:2020inc}. In these two models, the extra gauge boson connects to the SM through kinetic mixing with the $Z$ boson and the $B$ boson, respectively. The kinetic mixing alters the mass expression of the $Z$ boson and thus the mass of the $W$ boson at tree level. Since electroweak oblique parameters put strong constraints on electroweak physics, we will also consider the electroweak oblique parameter constraints on these models. For the DPDM model, we also consider constraints from the observed Dark Matter (DM) relic density. \n\nThis work is organized as follows: In Sec.~\\ref{sec:general} we discuss in general the mechanism by which mixing between an extra boson and the $Z$ boson changes the mass of the $W$ boson. In Sec.~\\ref{sec:bsm} we explore two models and discuss their capability of altering the $W$ mass, as well as the constraints from electroweak oblique parameters and the DM relic density. We conclude in Sec.~\\ref{sec:con}. \n\\section{general discussion of prediction of the mass of $W$ boson}%\n\\label{sec:general}\nIn this section we will discuss in general how mixing an extra boson with the $Z$ boson can change the mass of the $W$ boson.
To see this we first write down the mass of the $W$ boson $m_{W}$ and the mass of the $Z$ boson $m_{Z}$ given by the SM:\n\\begin{eqnarray}\n m_{W}^2=\\frac{1}{4}g^2v^2,\\ m_{Z}^2=\\frac{1}{4}(g^2+g^{\\prime 2})v^2,\n\\end{eqnarray}\nwhere $g$ and $g^{\\prime}$ are the gauge couplings of $\\mathrm{SU}(2)_\\mathrm{L}$ and $\\mathrm{U}(1)_\\mathrm{Y}$, and $v$ is the vacuum expectation value (vev) of the Higgs boson. When choosing the Fermi coupling constant $G_{F}$, the $Z$ boson mass $m_{Z}$ and the fine-structure constant $\\alpha$ as input parameters, the $W$ boson mass is then determined, because \n\\begin{eqnarray}\n G_{F}=\\frac{1}{\\sqrt{2}v^2 },\\ e=\\sqrt{4\\pi\\alpha} =\\frac{gg^{\\prime}}{\\sqrt{g^2+g^{\\prime 2}} }.\n\\end{eqnarray}\nGoing beyond the SM, we mix the $Z$ boson with another vector boson. After that the physical mass of the $Z$ will be the square root of one of the eigenvalues of the following mass matrix:\n\\begin{eqnarray}\n \\begin{pmatrix}\n\t\\frac{1}{4}(g^2+g^{\\prime 2})v^2&b\\\\\n\tb&a\n \\end{pmatrix},\n\\end{eqnarray}\nwhere we have used $a$ and $b$ to denote some general mass terms. The eigenvalues of the mass matrix can be written as: \n\\begin{eqnarray}\n m_{Z,Z^{\\prime}}^2=\\frac{1}{2}\\left( \\frac{1}{4}(g^2+g^{\\prime 2})v^2+a\\pm \\sqrt{\\left( \\frac{1}{4}(g^2+g^{\\prime 2})v^2+a \\right)^2-a(g^2+g^{\\prime 2})v^2+4b^2} \\right) \\label{mass}.\n \n\\end{eqnarray}\nDefining $c=\\frac{1}{4}(g^2+g^{\\prime 2})v^2$, we have the compact form $m_{Z,Z^{\\prime}}^2=\\frac{1}{2}\\left( a+c\\pm\\sqrt{(a-c)^2+4b^2} \\right)$. \nWe can see that the heavier of $m_{Z,Z^{\\prime}}$ is bigger than both $a$ and $c$, and the lighter of $m_{Z,Z^{\\prime}}$ is smaller than both $a$ and $c$. Therefore, since the observed $W$ mass indicates a larger $g$ and hence a bigger $c$, the value of $a$ must be larger than $c$, and the mass of the $Z$ boson should correspond to the minus sign in Eq.~\\eqref{mass}.
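The interlacing claim above is easy to check numerically. The following sketch, with arbitrary illustrative numbers in GeV$^2$ (they are assumptions, not fitted values), verifies that the minus-branch eigenvalue lies below both diagonal entries, which is why matching it to the observed $m_Z$ forces $a>c$ and a larger $c$:

```python
import numpy as np

# Illustrative values in GeV^2: c = (g^2 + g'^2) v^2 / 4 and generic a, b.
c = 92.0**2    # must sit above the observed m_Z^2, since mixing pushes it down
a = 120.0**2
b = 1500.0

M = np.array([[c, b],
              [b, a]])
lam_minus, lam_plus = np.sort(np.linalg.eigvalsh(M))

# For a 2x2 symmetric matrix the eigenvalues interlace the diagonal entries.
assert lam_minus < min(a, c) and lam_plus > max(a, c)

# Closed form of Eq. (mass): (a + c +- sqrt((a - c)^2 + 4 b^2)) / 2.
assert np.isclose(lam_minus, 0.5 * (a + c - np.sqrt((a - c)**2 + 4 * b**2)))
print(np.sqrt(lam_minus))  # tree-level Z mass after mixing, below sqrt(c) = 92
```

Fixing `lam_minus` to the observed $m_Z^2$ while $b\neq 0$ therefore requires raising $c$ above $m_Z^2$, i.e. a larger $g$ and hence a larger tree-level $m_W$.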
\nAdopting the input parameters $G_{F}=1.1663787\\times 10^{-5}~\\mathrm{GeV}^{-2},\\ m_{Z}=91.1876~\\mathrm{GeV},\\ \\alpha\\approx 1\/128$~\\cite{ParticleDataGroup:2020ssz}, we can draw a blue band which saturates the observed mass of the $W$ boson at the $3\\sigma$ confidence level in Fig.~\\ref{fig:abband}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{abband}\n \\caption{Band which gives the $W$ mass between $80.4053~\\mathrm{GeV}$ and $80.4617~\\mathrm{GeV}$. }\n \\label{fig:abband}\n\\end{figure}\n\nIn fact, we can calculate the analytic relation between $a$ and $b$ by taking the mass of the $W$ boson $m_{W}$ as an input parameter. From Eq.~\\eqref{mass} we can write:\n\\begin{eqnarray}\n b^2&&=c(a-m_{Z}^2)+m_{Z}^{4}-m_{Z}^2a\\nonumber\\\\\n \n \n\t&&=\\frac{4m_{W}^4}{4m_{W}^2-e^2v^2}(a-m_{Z}^2)+m_{Z}^{4}-m_{Z}^2a\\label{abconstranit}.\n\\end{eqnarray}\nThen we can constrain models beyond the SM according to Eq.~\\eqref{abconstranit}. The above discussion does not take loop corrections from the SM into consideration. Considering the loop corrections from the SM, we should replace $m_{W}$ in Eq.~\\eqref{abconstranit} with $m_{W}-\\delta m_{W} $, where $\\delta m_{W}$ represents the loop corrections to $m_{W}$ from the SM. \n\\section{models beyond SM}%\n\\label{sec:bsm}\nIn this section we will explore two models beyond the SM which mix the $Z$ boson with an extra vector boson and might give the observed $W$ boson mass. We also consider other constraints, such as the electroweak oblique parameter constraints and the DM relic density constraint. \n\\subsection{Derivative Portal Dark Matter}\n\\label{sub:dpdm}\nThe DPDM model extends the SM with an extra vector boson which links the dark sector and the SM through its kinetic mixing with the $Z$ boson.
The relevant Lagrangian of the DPDM model can be written as~\\cite{Zeng:2022llh}:\n\\begin{eqnarray}\n \\mathcal{L}=&&-\\frac{1}{4}Z^{\\mu\\nu}Z_{\\mu\\nu}-\\frac{1}{4}Z^{\\prime\\mu\\nu}Z^{\\prime}_{\\mu\\nu}-\\frac{\\epsilon}{2} Z^{\\mu\\nu}Z_{\\mu\\nu}^{\\prime}\\\\\n\t&&+\\sum\\limits_{f} Z_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f+g_{\\chi}Z_{\\mu}^{\\prime}\\bar{\\chi}\\gamma^{\\mu}\\chi\\nonumber\\\\\n\t&&+\\frac{1}{2}m_{Z}^2Z_{\\mu}Z^{\\mu}+\\frac{1}{2}m_{Z^{\\prime}}^2Z_{\\mu}^{\\prime}Z^{\\prime\\mu}-m_{\\chi}\\bar{\\chi}\\chi\\nonumber.\n\\end{eqnarray}\nAfter normalization of the kinetic terms, the kinetic mixing between $Z$ and $Z^{\\prime}$ results in mass mixing between them. The kinetic part of the Lagrangian can be normalized by:\n\\begin{eqnarray}\n K=\\begin{pmatrix}\n -k_1&k_2\\\\\n k_1&k_2\n \\end{pmatrix},\n\\end{eqnarray}\nwhere $k_1=1\/\\sqrt{2-2\\epsilon} $ and $k_2=1\/\\sqrt{2+2\\epsilon} $. This operation results in the following mass matrix for the two vector bosons: \n\\begin{eqnarray}\n\t \\begin{pmatrix}\n\t k_1^2M_1&k_1k_2M_2\\\\\n\t k_1k_2M_2&k_2^2M_1\n\t \\end{pmatrix},\n\\end{eqnarray}\nwhere $M_1=m_{Z}^2+m_{Z^{\\prime}}^2$ and $M_2=m_{Z^{\\prime}}^2-m_{Z}^2$. \nOne can use an orthogonal matrix $O$ to diagonalize the mass matrix, defined as \n\\begin{eqnarray}\n\t O=\\begin{pmatrix}\n\t\t\\cos \\theta&\\sin \\theta\\\\\n\t\t-\\sin \\theta& \\cos \\theta\n\t \\end{pmatrix},\\ \\text{with}\\ \\tan 2\\theta=\\frac{2k_1k_2M_2}{(k_2^2-k_1^2)M_1}.\n\\end{eqnarray}\nTherefore, according to Eq.~\\eqref{abconstranit}, we can constrain $m_{Z^{\\prime}}$ and $\\epsilon$ via:\n\\begin{eqnarray}\n k_1k_2M_2=\\sqrt{(k_1^2M_1-m_{Z}^{exp\\ 2})(k_2^2M_1-m_{Z}^{exp\\ 2})}\\label{DPDMcons},\n\\end{eqnarray}\nwhere we have used $m_{Z}^{exp}$ to denote the experimentally observed mass of the $Z$ boson, to distinguish it from $m_{Z}$.
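The normalized mass matrix above can be checked numerically. The sketch below (the parameter values are illustrative choices near the best-fit region quoted later, not fitted results) verifies that for $\epsilon=0$ the eigenvalues reduce to $m_Z^2$ and $m_{Z'}^2$, and that a nonzero kinetic mixing pushes the lighter eigenvalue below $m_Z^2$, so the Lagrangian parameter $m_Z$ must exceed the observed value to satisfy the matching condition:

```python
import numpy as np

def dpdm_eigenvalues(m_Z, m_Zp, eps):
    """Eigenvalues of the DPDM mass matrix after kinetic normalization,
    using the k_1, k_2 parametrization from the text."""
    k1 = 1.0 / np.sqrt(2.0 - 2.0 * eps)
    k2 = 1.0 / np.sqrt(2.0 + 2.0 * eps)
    M1 = m_Z**2 + m_Zp**2
    M2 = m_Zp**2 - m_Z**2
    M = np.array([[k1**2 * M1, k1 * k2 * M2],
                  [k1 * k2 * M2, k2**2 * M1]])
    return np.sort(np.linalg.eigvalsh(M))

m_Z, m_Zp = 91.1876, 116.0  # GeV; illustrative inputs

# No mixing: the eigenvalues are exactly m_Z^2 and m_Z'^2.
lo, hi = dpdm_eigenvalues(m_Z, m_Zp, 0.0)
assert np.isclose(np.sqrt(lo), m_Z) and np.isclose(np.sqrt(hi), m_Zp)

# Nonzero mixing pushes the lighter eigenvalue below m_Z^2, so m_Z must be
# chosen above the observed Z mass to fulfill the matching condition.
lo, hi = dpdm_eigenvalues(m_Z, m_Zp, 0.025)
assert np.sqrt(lo) < m_Z
```

Scanning `eps` for each `m_Zp` until the lighter eigenvalue equals the observed $m_Z^{exp\,2}$ reproduces the constraint curve implied by the relation above.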
\n\nApart from giving mass to the $W$ boson, we will also calculate the tree-level $S,T,U$ constraints on this model. The neutral-current coupling between the $Z$ boson and the SM fermions in the DPDM model can be written as:\n\\begin{eqnarray}\n L_{NC,Zff}&&=\\sum\\limits_{f} (-k_2\\sin \\theta -k_1\\cos \\theta) \\hat{Z}_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f\\\\\n\t &&=\\sum\\limits_{f} (-k_2\\sin \\theta -k_1\\cos \\theta) \\hat{Z}_{\\mu}\\bar{f}\\gamma^{\\mu}\\frac{e}{s_{w}c_{w} }(T^{3}_{f}\\frac{1-\\gamma^{5}}{2}-Q_{f}s_{w}^2 )f,\n\\end{eqnarray}\nwhere $\\hat{Z}_{\\mu} $ is the mass eigenstate of the $Z$ boson. The form of the charged current in the DPDM model is the same as in the SM. \nUsing the effective-Lagrangian techniques of~\\cite{burgess1994model}:\n\\begin{eqnarray}\n &&\\mathcal{L}_{CC, Wff}=-\\frac{e}{\\sqrt{2} \\hat{s}_{w}}(1-\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}+\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}+\\frac{\\alpha U}{8\\hat{s}_{w}^2})\\sum\\limits_{ij}V_{ij}\\bar{f}_{i}\\gamma^{\\mu}\\gamma_{L}f_{j}W_{\\mu}^{\\dagger}+\\mathrm{c.c.}.\\\\ \n&& \\mathcal{L}_{NC, Zff}=\\frac{e}{\\hat{s}_{w}\\hat{c}_{w}}(1+\\frac{\\alpha T}{2})\\sum\\limits_{f}\\bar{f}\\gamma^{\\mu}[T^{3}_{f}\\frac{1-\\gamma^{5}}{2}-Q_{f}(\\hat{s}_{w}^2+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T}{\\hat{c}_{w}^2-\\hat{s}_{w}^2})]fZ_{\\mu},\n\\end{eqnarray}\nwhere $\\hat{s}_{w}\\hat{c}_{w}m_{\\hat{Z}}=s_{w}c_{w}\\frac{1}{2}\\sqrt{ g^2+g^{\\prime 2}}v=s_{w}c_{w}m_{Z}$, we can write $S$, $T$ and $U$ in the DPDM model as\n\\begin{eqnarray}\n \\alpha T=2(\\frac{\\hat{s}_{w}\\hat{c}_{w}}{s_{w}c_{w}}(-k_2\\sin \\theta-k_1\\cos \\theta)-1),\\\\\n \\alpha S=4\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T+4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)(s_{w}^2-\\hat{s}_{w}^2),\\\\\n \\alpha U=8\\hat{s}_{w}^2(\\frac{\\hat{s}_{w}}{s_{w}}-1+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}).\n\\end{eqnarray}\nWe constrain the DPDM model with the global fit results given in Table 5 of~\\cite{deBlas:2022hdk}:\n\\begin{eqnarray}\n S=0.005\\pm 0.097,\\ T=0.04\\pm 0.12,\\ U=0.134\\pm 0.087,\n\\end{eqnarray}\nwith the correlation coefficients $\\rho_{ST}=0.91,\\ \\rho_{SU}=-0.65,\\ \\rho_{TU}=-0.88$. \n\nThe DPDM model can naturally escape the stringent constraints from dark matter direct detection due to a cancellation mechanism~\\cite{Cai:2021evx}, and in Fig.~\\ref{fig:DPDMres} we have drawn the constraints from the observed DM relic density, the observed $W$ mass and the electroweak oblique parameters. \n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{DPDMres}\n \\caption{The light blue area is excluded by the Planck experiment~\\cite{Planck:2018vyg}. The red line gives the observed $W$ mass at tree level. The green line takes the SM loop corrections into consideration and gives the observed $W$ mass, with the dashed green lines corresponding to the $3\\sigma$ upper and lower deviations. The blue line gives the observed DM relic density. The magenta contour gives the constraints from electroweak oblique parameters at 95\\% C.L., with the red star giving the best fit. }\n \\label{fig:DPDMres}\n\\end{figure}\nThe red line gives the observed $W$ boson mass by itself, and the green line gives the observed $W$ boson mass with the SM loop corrections taken into consideration, with the dashed green lines corresponding to the $3\\sigma$ deviations from the $W$ boson mass. The blue line saturates the observed DM relic density, while the light blue area is excluded by the Planck experiment~\\cite{Planck:2018vyg}. The DM relic density is calculated with the settings $m_{\\chi}=60~\\mathrm{GeV},\\ g_{\\chi}=0.1$ using the numerical tools \\texttt{FeynRules~2}~\\cite{Alloul:2013bka}, \\texttt{MadGraph}~\\cite{Alwall:2014hca}, and \\texttt{MadDM}~\\cite{Ambrogi:2018jqj}. The magenta contour corresponds to the 95\\% C.L.
constraints from the electroweak oblique parameters, and the red star represents the best fit of $STU$: $m_{Z^{\\prime}}\\approx 116~\\mathrm{GeV},\\ \\epsilon\\approx 0.0250$. We see that this point is also consistent with the direct calculation of $m_W$ and the observed DM relic density. From Fig.~\\ref{fig:DPDMres} we also see that the magenta area is consistent with the green lines, in accordance with the fact that the $STU$ fit encodes not only the observed $W$ boson mass but also the constraints from electroweak couplings. To give the observed $W$ boson mass, $m_{Z^{\\prime}}$ should satisfy $105~\\mathrm{GeV}\\lesssim m_{Z^{\\prime}}\\lesssim 130~\\mathrm{GeV}$. To make the region with larger $m_{Z^{\\prime}}$ not excluded by the Planck experiment, one can change the DM mass $m_{\\chi}$, so that the annihilation resonance area moves accordingly. On the other hand, one can increase the extra gauge coupling $g_{\\chi}$, or simply not introduce dark matter in this model.\n\n\\subsection{U(1) model}\n\\label{sub:u_1_model}\nIn the U(1) model there is a gauge boson of an extra $\\mathrm{U}(1)_\\mathrm{X}$ gauge symmetry which connects to the gauge boson of the SM $\\mathrm{U}(1)_\\mathrm{Y}$ symmetry through kinetic mixing. In this subsection we adopt the same model setting as~\\cite{Lao:2020inc}. The kinetic mixing terms can be written as:\n\\begin{eqnarray}\n \\mathcal{L}_{\\mathrm{K}}=-\\frac{1}{4}B^{\\mu\\nu}B_{\\mu\\nu}-\\frac{1}{4}X^{\\mu\\nu}X_{\\mu\\nu}-\\frac{\\epsilon}{2}B^{\\mu\\nu}X_{\\mu\\nu},\n\\end{eqnarray}\nwhere $B_{\\mu}$ and $X_{\\mu}$ are the gauge fields of the $\\mathrm{U}(1)_\\mathrm{Y}$ and $\\mathrm{U}(1)_\\mathrm{X}$ gauge symmetries. There will also be a mass mixing term between $B_{\\mu}$ and $W^{3}_{\\mu}$ after the Higgs gets its vev.
Therefore the mass matrix of $(W_{\\mu}^{3},\\ B_{\\mu},\\ X_{\\mu} )$ can be written as:\n\\begin{eqnarray}\n &&\\frac{1}{2}\\begin{pmatrix}\n W^{3\\mu}&B^{\\mu}&X^{\\mu} \n \\end{pmatrix}\n \\begin{pmatrix}\n\tg^2v^2\/4&-gg^{\\prime}v^2\/4&0\\\\\n\t-gg^{\\prime}v^2\/4&g^{\\prime 2}v^2\/4&0\\\\\n\t0&0&g_{x}^2v_{s}^2\n \\end{pmatrix}\n \\begin{pmatrix}\n W_{\\mu}^{3}\\\\\n B_{\\mu}\\\\\n X_{\\mu} \n \\end{pmatrix}\\nonumber\\\\\n =&&\\frac{1}{2}\\begin{pmatrix}\n W^{3\\mu}&B^{\\mu}&X^{\\mu} \n \\end{pmatrix}K^{-1T}OO^TK^{T}\n \\begin{pmatrix}\n\tg^2v^2\/4&-gg^{\\prime}v^2\/4&0\\\\\n\t-gg^{\\prime}v^2\/4&g^{\\prime 2}v^2\/4&0\\\\\n\t0&0&g_{x}^2v_{s}^2\n \\end{pmatrix}KOO^T K^{-1}\n \\begin{pmatrix}\n W_{\\mu}^{3}\\\\\n B_{\\mu}\\\\\n X_{\\mu} \n \\end{pmatrix}\\\\\n =&&\\frac{1}{2}\\begin{pmatrix}\n A^{\\mu}&Z^{\\mu}&Z^{\\prime\\mu} \n \\end{pmatrix}\n \\begin{pmatrix}\n\t0&0&0\\\\\n\t0&m_{Z}^2&0\\\\\n\t0&0&m_{Z^{\\prime}}^2\n \\end{pmatrix}\n \\begin{pmatrix}\n A_{\\mu}\\\\\n Z_{\\mu}\\\\\n Z^{\\prime}_{\\mu} \n \\end{pmatrix},\\nonumber\\label{massmatrix}\n\\end{eqnarray}\nwhere we have used $K$ to normalize the kinetic terms of $B_{\\mu}$ and $X_{\\mu}$ and used $O$ to diagonalize the mass matrix and transform the fields to their mass eigenstates. The masses of the two massive vector bosons $Z$ and $Z^{\\prime}$ are:\n\\begin{eqnarray}\n m_{Z,Z^{\\prime}}^2=\n \\frac{1}{8} (\n g^2 v^2+g^{\\prime 2} k_1^2 v^2+g^{\\prime 2} k_2^2 v^2+4 g_x^2 k_1^2 v_s^2+4 g_x^2 k_2^2 v_s^2\\\\\n\\pm\\sqrt{\\left(g^2 v^2+\\left(k_1^2+k_2^2\\right) \\left(g^{\\prime 2} v^2+4 g_x^2 v_s^2\\right)\\right)^2-16 g_x^2 v^2 v_s^2 \\left(g^2 \\left(k_1^2+k_2^2\\right)+4 g^{\\prime 2} k_1^2 k_2^2\\right)}).\\nonumber\n\\end{eqnarray}\nNote that the kinetic mixing between $B_{\\mu}$ and $X_{\\mu}$ does not change the form of the electric charge $e$. The definition of the electric charge can be extracted from the couplings between the photon and the Higgs doublet.
In this model it reads:\n\\begin{eqnarray}\n e=g[KO]_{11}=g^{\\prime}[KO]_{21}=\\frac{2g^{\\prime}k_2}{\\sqrt{1+\\frac{4g^{\\prime 2}k_2^2}{g^2}+\\frac{k_2^2}{k_1^2}} }=\\frac{gg^{\\prime}}{\\sqrt{ g^2+g^{\\prime 2}}},\n\\end{eqnarray}\nwhere $[KO]_{ij}$ denotes the element in the $i$-th row and the $j$-th column of the matrix $KO$. \nThe neutral-current coupling between the $Z$ boson and the SM fermions in the U(1) model can be written as:\n\\begin{eqnarray}\n L_{NC,Zff}&&=\\sum\\limits_{f}Z_{\\mu}\\bar{f}\\gamma^{\\mu}(g_{V}-g_{A}\\gamma^{5})f,\\\\\n \\mathrm{with}\\ g_{V}&&=g_{A}+g^{\\prime}[KO]_{22}Q_{f},\\ g_{A}=\\frac{T_{f}^3}{2}(-g^{\\prime}[KO]_{22}+g[KO]_{12}).\n\\end{eqnarray}\nWe can then read off $S,T,U$ as:\n\\begin{eqnarray}\n \\alpha T=\\frac{2\\hat{s}_{w}\\hat{c}_{w}(-g^{\\prime}[KO]_{22}+g[KO]_{12})}{e}-2,\\\\\n \\alpha S=\\frac{-4g^{\\prime}[KO]_{22}(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}{-g^{\\prime}[KO]_{22}+g[KO]_{12}}-4\\hat{s}_{w}^2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)+4\\hat{c}_{w}^2\\hat{s}_{w}^2\\alpha T,\\\\\n \\alpha U=8\\hat{s}_{w}^2(\\frac{\\hat{s}_{w}}{s_{w}}-1+\\frac{\\alpha S}{4(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}-\\frac{\\hat{c}_{w}^2\\alpha T}{2(\\hat{c}_{w}^2-\\hat{s}_{w}^2)}).\n\\end{eqnarray}\n\n\nNow we can draw a line which predicts the observed $W$ mass in this model in Fig.~\\ref{fig:u1res}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{u1res}\n \\caption{The red line gives the observed $W$ mass at tree level. The green line takes the SM loop corrections into consideration and gives the observed $W$ mass, with the dashed green lines corresponding to the $3\\sigma$ upper and lower deviations. The magenta contour gives the constraints from electroweak oblique parameters at 99\\% C.L., with the red star giving the best fit. }\n \\label{fig:u1res}\n\\end{figure}\nThe red line shows that the U(1) model can by itself give the observed mass of the $W$ boson at tree level.
The green line takes the SM loop corrections into consideration and reproduces the observed $W$ boson mass, with the dashed green lines corresponding to the $3\\sigma$ upper and lower deviations. The magenta contour stands for the 99\\% C.L. constraints from the electroweak oblique parameters, with the red star marking the best fit: $m_{Z^{\\prime}}=139.9~\\mathrm{GeV},\\ \\epsilon=0.068$. From Fig.~\\ref{fig:u1res} we see that the electroweak oblique parameter constraints are consistent with the direct calculation of $m_{W}$. A large region of the U(1) model parameter space can reproduce the observed $W$ boson mass. The lower mass limit and the lower $\\epsilon$ limit are about $m_{Z^{\\prime}}\\approx 109.8~\\mathrm{GeV}$ and $\\epsilon\\approx 0.036$. \n\\section{\\label{sec:con}Conclusion}\nIn this work we have explored the possibility of altering the $W$ boson mass at tree level through mixing between an extra gauge boson and the $Z$ boson. We first gave a general discussion of the effects of mixing an extra gauge boson with the $Z$ boson, and then explored two realistic models: the DPDM model and the U(1) model. In the DPDM model the extra gauge boson mixes with the $Z$ boson through the kinetic mixing between the extra boson and the $Z$ boson, while in the U(1) model the extra gauge boson mixes with the $Z$ boson through the kinetic mixing between the extra boson and the $B$ boson. Apart from giving the $W$ boson mass, we also discussed the electroweak oblique parameter constraints for both models, and also explored the DM relic density constraints for the DPDM model. We find that there are regions in both models which can reproduce the observed $W$ boson mass at tree level, and the best-fit value for the extra vector boson mass is around $120~\\mathrm{GeV}$. \n\n\\begin{acknowledgments}\n This work is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11875327 and 11905300, the China Postdoctoral Science Foundation under Grant\nNo. 
2018M643282, the Fundamental Research Funds for the Central Universities, and the Sun Yat-Sen University Science Foundation.\n\\end{acknowledgments}\n\\section*{Note added}\nWhile finalizing this manuscript, we noticed that Ref.~\\cite{Zhang:2022nnh} appeared on arXiv. Ref.~\\cite{Zhang:2022nnh} discusses an explanation of the $W$ boson mass within a U(1) dark matter model, as well as several phenomenological constraints on DM. Our work discusses models with an extra gauge boson which can explain the $W$ boson mass. Apart from the DPDM model, we also discussed the U(1) model, but in different scenarios.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n One extension of Malliavin calculus from Brownian motion to general L\\'{e}vy processes was made using the It\\^{o} chaos \n decomposition on the $L_2$-space over the L\\'evy space. This approach was used for\n instance by Nualart and Vives \\cite{nualart-vives}, Privault \\cite{privault_extension}, Benth, Di Nunno, L{\\o}kka, {\\O}ksendal and Proske \n \\cite{benth-dinunno-lokka-oksendal-proske}, Lee and Shih \\cite{lee-shih_product_formula}, Sol\\'e, Utzet and Vives\n \\cite{sole-utzet-vives} and Applebaum\n \\cite{applebaum2}. \n \n The wide interest in Malliavin calculus for L\\'{e}vy processes in stochastics and \n applications motivates the study of an accessible characterization\n of differentiability and fractional differentiability.\n Fractional differentiability is defined by real interpolation between the Malliavin Sobolev space $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$ and\n we recall the definition in Section \\ref{section:fractional} of this paper. \n Geiss and Geiss \\cite{geiss-geiss} and Geiss and Hujo \\cite{geiss-hujo} have shown that Malliavin differentiability and \n fractional differentiability \n are closely connected to the discrete-time approximation of certain stochastic integrals when the underlying process is a (geometric)\n Brownian motion. 
Geiss et al. \\cite{geiss-geiss-laukkarinen} proved that this applies also to L\\'{e}vy processes with jumps.\n These works assert that knowing the parameters of \n fractional smoothness allows one to design discretization time-nets\n such that the optimal approximation rate can be achieved.\n For details, see \\cite{geiss-geiss}, \\cite{geiss-hujo} and \\cite{geiss-geiss-laukkarinen}.\n \n\n \n Steinicke \\cite{steinicke} and Geiss and Steinicke \\cite{geiss-steinicke} take advantage of the fact that any random variable $Y$ on \n the L\\'evy space can be represented as a functional $Y = F(X)$ of the L\\'evy process $X$, where $F$ is a real-valued measurable mapping \n on the Skorohod space of right-continuous functions. Let us restrict to the case that $F(X)$ only depends on the jump part of $X$. \n Using the corresponding result from \n Sol\\'e, Utzet and Vives \\cite{sole-utzet-vives} and Al\\`os, Le\\'on and Vives \\cite{alos-leon-vives} on the canonical space, Geiss and Steinicke\n \\cite{geiss-steinicke} show that the condition $F(X)\\in\\mathbbm{D}_{1,2}$ is equivalent to\n $$\\iint_{\\mathbbm{R}_+\\times\\mathbbm{R}}\\mathbbm{E}\\left[ \\left(F(X+x\\mathbbm{1}_{[t,\\infty)}) - F(X) \\right)^2 \\right] {\\mathrm{d}} t\\nu({\\mathrm{d}} x) < \\infty, $$\n where $\\nu$ is the L\\'evy measure of $X$.\n On the other hand one gets from Mecke's formula \\cite{mecke} that\n %\n $$\\iint_A \\mathbbm{E}[F( X+x\\mathbbm{1}_{[t,\\infty)} )]{\\mathrm{d}} t\\nu({\\mathrm{d}} x) = \\mathbbm{E}[N(A)F(X)]$$\n %\n for any nonnegative measurable $F$ and any $A\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R}\\setminus\\{0\\})$, \n where $N$ is the Poisson random measure associated with $X$ as in Section \\ref{section:preliminaries}. 
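Mecke's formula can be illustrated with a small simulation. For the simple (hypothetical) choice $F(X)=N(A)$, adding a jump $x\mathbbm{1}_{[t,\infty)}$ with $(t,x)\in A$ increases $N(A)$ by one, so the left-hand side becomes $\mathbbm{E}[N(A)+1]\cdot({\mathrm{d}} t\otimes\nu)(A)=\lambda(\lambda+1)$ with $\lambda=\mathbbm{E}[N(A)]$, while the right-hand side is $\mathbbm{E}[N(A)^2]=\lambda+\lambda^2$, and the two agree. A minimal Monte Carlo check, with an illustrative intensity $\lambda$:

```python
import math
import random

random.seed(0)
lam = 2.5                                # lambda = E[N(A)] = (dt x nu)(A), assumed finite

def sample_poisson(lam):
    # Knuth's inversion sampler; keeps the example dependency-free
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

n_samples = 200_000
draws = [sample_poisson(lam) for _ in range(n_samples)]

lhs = lam * (lam + 1)                          # Mecke left-hand side for F(X) = N(A)
rhs = sum(n * n for n in draws) / n_samples    # Monte Carlo estimate of E[N(A)^2]
print(lhs, rhs)                                # close for large sample sizes
```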
\n These results raise the following questions: when can Malliavin \n differentiability be described using a weight function such as $N(A)$, and is there a weight function for fractional differentiability?\n \n\n In this paper we search for weight functions $\\Lambda$ and measurability conditions on $Y$ such that the criterion \n \\begin{equation}\\|Y \\Lambda\\|_{L_2({\\mathbbm{P}})} < \\infty \\label{equation:weight_criteria}\\end{equation}\n describes the smoothness of $Y$. We begin by recalling the orthogonal It\\^{o} \n chaos decomposition\n $$Y = \\sum_{n=0}^\\infty I_n(f_n)$$\n on $L_2({\\mathbbm{P}})$ and the Malliavin Sobolev space\n $$\\mathbbm{D}_{1,2} = \\left\\{ Y\\in L_2({\\mathbbm{P}}): \\|Y\\|_{\\mathbbm{D}_{1,2}}^2 = \\sum_{n=0}^\\infty (n+1) \\|I_n(f_n)\\|_{L_2({\\mathbbm{P}})}^2 < \\infty \\right\\}$$ \n %\n in Section \\ref{section:preliminaries}.\n Then, in Section \\ref{section:differentiability}, we obtain an equivalent condition for Malliavin differentiability. The assertion is that \n $$Y\\in\\mathbbm{D}_{1,2} \\text{ if and only if } \\normP{ Y \\sqrt{N(A) +1}} < \\infty,$$\n whenever $Y$ is measurable with respect to ${\\cal F}_A$, the\n completion of the sigma-algebra generated by $\\left\\{ N(B) : B\\subseteq A, \\,B\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R})\\right\\}$ \n and the set $A\\in\\mathcal{B}([0,\\infty)\\times\\mathbbm{R}\\setminus\\{0\\})$ satisfies $\\mathbbm{E}[N(A)]<\\infty$.\n\n Section \\ref{section:fractional} treats fractional differentiability and our aim is to adjust the weight function $\\Lambda$ so that the \n condition \\eqref{equation:weight_criteria} describes \n a given degree of smoothness. We recall the $K$-method of real interpolation which we \n use to determine the interpolation spaces $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$ for $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$. 
These spaces are intermediate\n between $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$.\n We show that when $Y$ is ${\\cal F}_A$-measurable and $\\mathbbm{E}[N(A)]<\\infty$, then $Y$ has fractional differentiability of order $\\theta$ for $q=2$ \n if and only if\n$$\\normP{ Y\\sqrt{ N(A) +1}^{\\,\\theta} } < \\infty.$$\n \n \n\n\n\\section{Preliminaries} \\label{section:preliminaries}\n\nConsider a L\\'evy process $X = (X_t)_{t\\geq0}$ with c\\`{a}dl\\`{a}g paths on a\ncomplete probability space $({\\Omega},{\\cal F},\\mathbbm{P})$, where ${\\cal F}$ is the completion of the sigma-algebra generated by $X$.\nThe L\\'{e}vy-It\\^{o} decomposition states that there exist $\\gamma\\in\\mathbbm{R}$, $\\sigma\\geq0$, a standard Brownian motion $W$ and \na Poisson random measure $N$ on $\\mathcal{B}([0,\\infty)\\times\\mathbbm{R})$ such that\n\\[X_t=\\gamma t + \\sigma W_t + \\iint_{(0,t]\\times \\{ |x|>1\\}} x N(\\mathrm{d} s,\\mathrm{d} x) \n+ \\iint_{(0,t]\\times\\left\\{0<|x|\\leq1\\right\\}}x\\tilde{N}(\\mathrm{d} s,\\mathrm{d} x)\\]\n holds for all $t\\geq0$ a.s.\nHere $\\tilde{N}(\\mathrm{d} s,\\mathrm{d} x) = N(\\mathrm{d} s,\\mathrm{d} x)-\\mathrm{d} s\\nu(\\mathrm{d} x)$ is the compensated Poisson random measure and \n$\\nu:\\mathcal{B}(\\mathbbm{R})\\to[0,\\infty]$ is the L\\'{e}vy measure of $X$ satisfying $\\nu(\\{0\\})=0$,\n$\\int_\\mathbbm{R} (x^2\\wedge1)\\nu(\\mathrm{d} x)<\\infty$ and $\\nu(B)=\\mathbbm{E} \\left[ N((0,1]\\times B) \\right]$ when $0\\not\\in \\bar{B}$.\nThe triplet $(\\gamma,\\sigma,\\nu)$ is called the L\\'evy triplet.\n\n\nLet us recall the It\\^{o} chaos decomposition from \\cite{ito}:\nDenote $\\mathbbm{R}_+ := [0,\\infty)$. 
We consider the following measure $\\mathbbm{m}$ defined as \n\\begin{eqnarray*}\n \\mathbbm{m}:\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})\\to[0,\\infty],& \n & \\mathbbm{m}(\\mathrm{d} s,\\mathrm{d} x):= \\mathrm{d} s \\left[ {\\sigma}^2 \\delta_0(\\mathrm{d} x) + x^2 \\nu(\\mathrm{d} x) \\right].\n\\end{eqnarray*}\nFor sets $B\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ such that $\\mathbbm{m}(B) < \\infty$, a random measure $M$ is defined by\n\\[M(B) := \\sigma \\int_{\\left\\{ s\\in\\mathbbm{R}_+:(s,0)\\in B\\right\\}} \\mathrm{d} W_s + \\lim_{n\\to\\infty}\\iint_{\\left\\{(s,x)\\in B: \\frac{1}{n} < |x| < n\\right\\}} x\\ \\tilde{N}(\\mathrm{d} s,\\mathrm{d} x),\\]\nwhere the convergence is taken in $L_2(\\mathbbm{P}):=L_2({\\Omega},{\\cal F},\\mathbbm{P})$.\nThe random measure $M$ is independently scattered and it holds that $\\mathbbm{E}\\left[\nM(B_1) M(B_2)\\right] = \\mathbbm{m}(B_1 \\cap B_2)$ for all $B_1,B_2\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ with\n$\\mathbbm{m}(B_1)<\\infty$ and $\\mathbbm{m}(B_2)<\\infty$.\n\n\n\nFor $n=1,2,\\ldots$ write \n\\[ {L_2\\left( \\mm^{\\otimes n} \\right)} = L_2 \\left((\\mathbbm{R}_+\\times\\mathbbm{R})^n, \\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})^{\\otimes n}, \\mathbbm{m}^{\\otimes n}\\right)\\] \nand set $L_2 \\left(\\mathbbm{m}^{\\otimes 0} \\right):=\\mathbbm{R}$. A function $f_n:(\\mathbbm{R}_+\\times\\mathbbm{R})^n\\to\\mathbbm{R}$ is said to\nbe symmetric, if it coincides with its symmetrization\n$\\tilde{f}_n$,\n\\[\\tilde{f}_n((s_1,x_1),\\ldots,(s_n,x_n))=\\frac{1}{n!}\\sum_{\\pi}f_n\\left( \\left( s_{\\pi(1)},x_{\\pi(1)} \\right),\\ldots, \\left( s_{\\pi(n)},x_{\\pi(n)} \\right) \\right),\\]\nwhere the sum is taken over all permutations\n$\\pi:\\{1,\\ldots,n\\}\\to\\{1,\\ldots,n\\}$.\n\nWe let $I_n$ denote the multiple integral of order $n$ defined by It\\^{o} \\cite{ito} and shortly recall the definition. 
\nFor pairwise disjoint \n$B_1,\\ldots, B_n\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$ with $\\mathbbm{m}(B_i)<\\infty$ the \nintegral of $\\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}$ \nis defined by\n\\begin{equation} I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) := M(B_1)\\cdots M(B_n). \\label{equation:multiple_integral}\\end{equation}\nIt is then extended to a linear and continuous operator $I_n:{L_2\\left( \\mm^{\\otimes n} \\right)}\\to L_2(\\mathbbm{P})$. We let $I_0(f_0):=f_0$ for $f_0\\in\\mathbbm{R}$. \nFor the multiple integral we have\n\\begin{equation}\\label{equation:inner_product_L_2}\nI_n(f_n)=I_n(\\tilde{f}_n) \\text{ and } \\mathbbm{E} \\left[ I_n(f_n)I_k(g_k) \\right] \n=\\begin{cases}\n 0, & \\text{ if }n\\neq k\\\\\n n! \\left( \\tilde{f_n},\\tilde{g_n}\\right)_{L_2(\\mathbbm{m}^{\\otimes n})}, & \\text{ if }n=k\n \\end{cases}\n\\end{equation}\nfor all $f_n\\in {L_2\\left( \\mm^{\\otimes n} \\right)}$ and $g_k\\in L_2\\left(\\mathbbm{m}^{\\otimes k}\\right)$. \n\nAccording to \\cite[Theorem 2]{ito}, for any $Y\\in L_2({\\mathbbm{P}})$ there exist functions $f_n\\in {L_2\\left( \\mm^{\\otimes n} \\right)}$,\n$n=0,1,2,\\ldots,$ such that\n\\[ Y=\\sum_{n=0}^\\infty\nI_n(f_n) \\quad \\text{ in } L_2(\\mathbbm{P})\\]\nand the functions $f_n$ are unique in $L_2(\\mathbbm{m}^{\\otimes n})$ when they are chosen to be\nsymmetric. We have\n\\[ \\norm{Y}^2_{L_2(\\mathbbm{P})} = \\sum_{n=0}^\\infty n! \\norm{\\tilde{f}_n}^2_{L_2\\left( \\mm^{\\otimes n} \\right)}.\\]\n\n\n\n\n\n\nWe recall the definition of the Malliavin Sobolev space $\\mathbbm{D}_{1,2}$ based on the\nIt\\^{o} chaos decomposition. We denote by $\\mathbbm{D}_{1,2}$ \nthe space of all $Y = \\sum_{n=0}^\\infty\nI_n(f_n) \\in L_2(\\mathbbm{P})$ such that\n\\[\\|Y\\|^2_{\\mathbbm{D}_{1,2}} := \\sum_{n=0}^\\infty (n+1)! 
\\norm{\\tilde{f}_n}^2_{L_2\\left( \\mm^{\\otimes n} \\right)} < \\infty.\\]\nLet us write $L_2(\\mathbbm{m}\\otimes\\mathbbm{P}) := L_2(\\mathbbm{R}_+\\times\\mathbbm{R}\\times{\\Omega},\n\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})\\otimes{\\cal F}, \\mathbbm{m}\\otimes\\mathbbm{P})$ and define the\nMalliavin derivative $D:\\mathbbm{D}_{1,2}\\to L_2(\\mathbbm{m}\\otimes\\mathbbm{P})$ in the following way.\nFor $B_1,\\ldots,B_n \\in \\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$, which are pairwise disjoint and such that $\\mathbbm{m}(B_i)< \\infty$ for all $i=1,\\ldots,n$, we let\n\\begin{align*}D_{t,x} I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) \n&= nI_{n-1}\\left( \\tilde{\\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}}(\\cdot,(t,x))\\right)\\\\\n&:= \\sum_{i=1}^n \\prod_{j\\neq i} M(B_j) \\mathbbm{1}_{B_i}(t,x). \n\\end{align*}\nIt holds that $\\normmP{DI_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right)} \n= \\sqrt{n}\\normP{I_n\\left( \\mathbbm{1}_{B_1}\\otimes\\cdots\\otimes\\mathbbm{1}_{B_n}\\right) }$ and\nthe operator is extended to $\\left\\{ I_n(f_n): f_n\\in L_2(\\mathbbm{m}^{\\otimes n}) \\right\\}$ by linearity and continuity. For \n$Y = \\sum_{n=0}^\\infty I_n(f_n) \\in \\mathbbm{D}_{1,2}$ the series\n\\[D_{t,x}Y := \\sum_{n=1}^\\infty n I_{n-1} \\left(\\tilde{f}_n(\\cdot,(t,x))\\right) \\] \nthen converges in $L_2(\\mathbbm{m}\\otimes\\mathbbm{P})$.\n\n\n\\begin{remark}\\label{remark:inner_product_DD}\nNote that also for any $u\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ one finds a chaos representation $u=\\sum_{n=0}^\\infty I_n(g_{n+1})$, \nwhere the functions $g_{n+1} \\in L_2\\left(\\mathbbm{m}^{\\otimes(n+1)}\\right)$\nare symmetric in the first $n$ variables. 
For $u,v\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ with \n$u=\\sum_{n=0}^\\infty I_n(g_{n+1})$ and $v=\\sum_{n=0}^\\infty I_n(h_{n+1})$ it then holds\n\\begin{equation}\\label{equation:inner_product_DD}\n (u,v)_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})} = \\sum_{n=0}^\\infty n! \\left(g_{n+1},h_{n+1} \\right)_{L_2\\left(\\mathbbm{m}^{\\otimes (n+1)}\\right)}.\n \\end{equation}\n\\end{remark}\nFor more information, see for example \\cite{nualart-vives}, \\cite{privault_extension}, \\cite{benth-dinunno-lokka-oksendal-proske}, \n\\cite{lee-shih_product_formula}, \\cite{sole-utzet-vives} and\n \\cite{applebaum2}. \n\n\\section{Differentiability}\n\\label{section:differentiability}\n\nWe shall use the notation $\\mathbbm{R}_0 = \\mathbbm{R}\\setminus\\{0\\}$. For $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ we denote by\n${\\cal F}_A$ the completion of the sigma-algebra $ \\sigma \\left( N(B) : B\\subseteq A \\text{ and } B\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}) \\right) $. \nThe following theorem implies that if $Y\\in L_2({\\mathbbm{P}})$ is ${\\cal F}_A$-measurable and $\\mathbbm{E}[N(A)]<\\infty$, then \n$Y\\in\\mathbbm{D}_{1,2}$ if and only if $\\mathbbm{E}[Y^2 N(A)]<\\infty$.\n\n\\begin{theorem}\\label{theorem:second_double_inequality}\n Let $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[N(A)\\right] = ({\\mathrm{d}} t \\otimes \\nu)(A) < \\infty$ and \n $Y\\in L_2({\\mathbbm{P}})$. 
\n \\begin{enumerate}\n \\item If $Y\\in\\mathbbm{D}_{1,2}$, then $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$ and\n \\begin{align}\\label{equation:second_double_inequality1}\n \\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \n \\leq \\normmP{DY\\mathbbm{1}_A}.\\end{align}\n \\item If $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$ and $Y$ is ${\\cal F}_A$-measurable, then $Y\\in\\mathbbm{D}_{1,2}$ and\n \\begin{align}\\label{equation:second_double_inequality2}\n \\normmP{DY}\n \\leq \\normP{Y\\sqrt{N(A)}} + \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]}.\n \\end{align}\n \\end{enumerate}\n\\end{theorem}\n\n\n\n\n\nWe denote by $\\mathcal{S}$ the set of random variables $Y$ such that there exist $m\\geq 1$, $f\\in C_c^\\infty(\\mathbbm{R}^m)$ and \n$0\\leq t_0 < t_1 < \\cdots < t_m < \\infty$ such that\n$$Y= f\\left( X_{t_1} - X_{t_0},\\ldots,X_{t_m} - X_{t_{m-1}} \\right).$$\n\n\n\\begin{lemma}[Theorem 4.1, Corollaries 4.1 and 3.1 in \\cite{geiss-laukkarinen}]\\label{lemma:smooths_dense_in}\\hfill\n\n\n\\begin{itemize}\n \\itemm{a} $\\mathcal{S}$ is dense in $\\mathbbm{D}_{1,2}$ and $L_2({\\mathbbm{P}})$.\\label{lemma:smooths_dense_in_a}\n \\itemm{b} For $Y,Z\\in\\mathcal{S}$ it holds\n $D_{t,x}(YZ) = Y D_{t,x}Z + Z D_{t,x}Y + x D_{t,x} Y D_{t,x} Z$ $\\mathbbm{m}\\otimes{\\mathbbm{P}}$-a.e.\n\\end{itemize}\n\n\\end{lemma}\n\n\n\n\n\\begin{proposition}\\label{proposition:first_double_inequality}\n Let $Y = \\sum_{n=0}^\\infty I_n(f_n)$ be bounded and $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[N(A)\\right] = ({\\mathrm{d}} t \\otimes \\nu)(A) < \\infty$.\n Then $\\sum_{n=1}^\\infty nI_{n-1}\\left( \\tilde{f_n}(\\cdot,{ * })\\right) \\mathbbm{1}_A({ *})$ converges in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ and\n \\begin{align}\\label{equation:first_double_inequality}\n &\\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \n \\leq 
\\normmP{\\sum_{n=1}^\\infty \\left( nI_{n-1}\\!\\left( \\tilde{f_n} \\right) \\mathbbm{1}_A \\right) } \\nonumber \\\\*\n &\\qquad\\qquad\\leq \\normP{Y\\sqrt{N(A)}} + \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} .\n \\end{align}\n\\end{proposition}\n\\begin{proof}\n Assume first that $Y\\in\\mathcal{S}$. Then also $Y^2 = \\sum_{n=0}^\\infty I_n(g_n) \\in\\mathcal{S}$. \n %\nLetting $h(t,x) = \\frac{1}{x}\\mathbbm{1}_A(t,x)$ we have $I_1(h) = N(A) - \\mathbbm{E}\\left[ N(A) \\right]$ and we get using\n\\eqref{equation:inner_product_L_2} and \\eqref{equation:inner_product_DD} that\n %\n \\begin{align*}\n \\mathbbm{E}\\left[ Y^2 N(A)\\right] - \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right]\n & = \\mathbbm{E}\\left[ Y^2 I_1(h) \\right] \n = (g_1,h)_{L_2(\\mathbbm{m})}\\\\\n & = (DY^2,h{\\mathbbm{1}_{{\\Omega}}})_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}.\n\\end{align*}\nFrom Lemma \\ref{lemma:smooths_dense_in} (b) we obtain\n\\begin{align*}\n \\mathbbm{E}\\left[ Y^2 N(A)\\right]\n & = \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right] + (DY^2,h)_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}\\\\\n & = \\mathbbm{E}\\left[ Y^2 \\right] \\mathbbm{E}\\left[ N(A)\\right] + 2 \\iint_A \\mathbbm{E}\\left[ YD_{t,x}Y \\right] x{\\mathrm{d}} t \\nu({\\mathrm{d}} x)\\\\*\n &\\qquad + \\iint_A \\mathbbm{E}\\left[ \\left( D_{t,x} Y\\right)^2\\right]\\mathbbm{m}({\\mathrm{d}} t,{\\mathrm{d}} x).\n \\end{align*}\n Using H\\\"{o}lder's inequality we get\n %\n $$\\left| 2 \\iint_A \\mathbbm{E}\\left[ YD_{t,x}Y \\right] x{\\mathrm{d}} t \\nu({\\mathrm{d}} x)\\right| \\leq 2\\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\normmP{DY\\mathbbm{1}_A},$$\n so that\n %\n \\begin{align*}\n & \\left( -\\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} + \\normmP{DY\\mathbbm{1}_A} \\right)^2\n \\leq \\mathbbm{E} \\left[ Y^2 N(A)\\right]\\\\*\n & \\qquad\\qquad\\leq \\left( \\normP{Y}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} + \\normmP{DY\\mathbbm{1}_A} 
\\right)^2.\\end{align*}\n Taking the square root yields the double inequality \\eqref{equation:first_double_inequality}.\n\n Using Lemma \\ref{lemma:smooths_dense_in} (a) we find for any bounded $Y$ a uniformly bounded sequence $(Y_k)\\subset \\mathcal{S}$ such that \n $Y_k\\to Y$ a.s. Since inequality \\eqref{equation:first_double_inequality} holds for all random variables \n $Y_k-Y_m \\in \\mathcal{S}$, and since these are uniformly bounded with $Y_k-Y_m\\to 0$ a.s. as $k,m\\to\\infty$, we have by dominated convergence that\n \\begin{align*}\n & \\normmP{D(Y_k-Y_m) \\mathbbm{1}_A}\\\\\n &\\leq \\normP{(Y_k-Y_m)\\sqrt{N(A)}} + \\normP{Y_k-Y_m}\\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\\\ \n &\\to 0 \n \\end{align*}\n as $k,m\\to\\infty$.\n Thus the sequence $(DY_{k}\\mathbbm{1}_A)_{k=1}^\\infty$ converges\n in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ to some mapping $u\\in L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$. Write $Y_k=\\sum_{n=0}^\\infty I_n \\left( \\tilde{f}_n^{(k)} \\right)$.\n The mapping $u$ has a representation $u = \\sum_{n=0}^\\infty I_n(h_{n+1})$\n (see Remark \\ref{remark:inner_product_DD}), where for all $n\\geq0$ we have that\n %\n \\begin{align*}\n \\left\\| n\\tilde{f_n}\\mathbbm{1}_A - h_n \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \n & \\leq \\left\\| n\\tilde{f_n}\\mathbbm{1}_A - n\\tilde{f}_n^{(k)}\\mathbbm{1}_A \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \n \\!+ \\left\\| n\\tilde{f}_n^{(k)}\\mathbbm{1}_A - h_n \\right\\|_{L_2({\\mathbbm{m}^{\\otimes n}})} \\\\*\n & \\to 0 \n \\end{align*} \n as $k\\to\\infty$.\n We obtain \\eqref{equation:first_double_inequality} for the random variable $Y$ using dominated convergence, the convergence\n $DY_k\\mathbbm{1}_A \\to \\sum_{n=0}^\\infty \\left( D I_n(f_n) \\mathbbm{1}_A \\right)$ in $L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})$ and the fact that \n \\eqref{equation:first_double_inequality} holds for all random variables $Y_{k}$. 
\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:Lipschitz}\n If $Y=\\sum_{n=0}^\\infty I_n(f_n\\mathbbm{1}_{\\mathbbm{R}_+\\times\\mathbbm{R}_0}^{\\otimes n}) \\in \\mathbbm{D}_{1,2}$ and $g:\\mathbbm{R}\\to\\mathbbm{R}$ is Lipschitz-continuous, then\n $g(Y)\\in\\mathbbm{D}_{1,2}$ and\n $$D_{t,x}g(Y) = \\frac{g(Y + x D_{t,x}Y) - g(Y)}{x} \\quad \\text{ in } L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}}).$$ \n\\end{lemma}\n\\begin{proof}\nThe lemma is an immediate consequence of \\cite[Lemma 5.1 (b)]{geiss-laukkarinen}.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:measurability}\nLet $Y = \\sum_{n=0}^\\infty I_n(f_n)\\in L_2({\\mathbbm{P}})$ and $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R})$. Then\n$$\\mathbbm{E} \\left[ Y | {\\cal F}_A \\right] = \\sum_{n=0}^\\infty I_n \\left( f_n\\mathbbm{1}_A^{\\otimes n} \\right) \\text{ in }L_2({\\mathbbm{P}}).$$\n\\end{lemma}\n\\begin{proof}\nThe equality can be shown via the construction of the chaos analogously to the proof of \\cite[Lemma 1.2.4]{nualartv1}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:second_double_inequality}]\n{ \\emph{1.}} Assume $Y \\in\\mathbbm{D}_{1,2}$ and define $g_m(x)= (-m \\vee x)\\wedge m$ for $m\\geq 1$. From Lemma \\ref{lemma:Lipschitz} we get \n$g_m(Y)\\in\\mathbbm{D}_{1,2}$ and $|Dg_m(Y)|\\leq |DY|$. Then, using monotone convergence and Proposition\n\\ref{proposition:first_double_inequality}, we obtain\n\\begin{align*}\n& \\left| \\normP{Y\\sqrt{N(A)}} - \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \\\\\n& = \\lim_{m\\to\\infty} \\left| \\normP{g_m(Y)\\sqrt{N(A)}} - \\normP{g_m(Y)} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right| \\\\\n&\\leq \\limsup_{m\\to\\infty} \\normmP{Dg_m(Y)\\mathbbm{1}_A}\\\\\n& \\leq \\normmP{DY\\mathbbm{1}_A} < \\infty.\n\\end{align*}\nHence $Y\\sqrt{N(A)}\\in L_2({\\mathbbm{P}})$.\n\n{ \\emph{2.}} Assume $\\| Y\\sqrt{N(A)}\\| < \\infty$ and define $g_m(Y)$ as above. 
\nWrite $Y = \\sum_{n=0}^\\infty I_n \\left( f_n \\right)$ and $g_m(Y) = \\sum_{n=0}^\\infty I_n ( f_n^{(m)})$. \nSince $g_m(Y)\\to Y$ in $L_2({\\mathbbm{P}})$, it holds \n$\\| \\tilde{f}_n^{(m)} \\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}\\to \\| \\tilde{f_n} \\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}$ as $m\\to\\infty$.\n Since $g_m(Y)$ is ${\\cal F}_A$-measurable, we have $\\tilde{f}_n^{(m)} = \\tilde{f}_n^{(m)}\\mathbbm{1}_A^{\\otimes n}$ $\\mathbbm{m}^{\\otimes n}$-a.e. \nby Lemma \\ref{lemma:measurability} for all $m\\geq 1$. \n By Fatou's Lemma, \nProposition \\ref{proposition:first_double_inequality} and monotone convergence we get \n\\begin{align*}\n & \\sqrt{ \\sum_{n=1}^\\infty nn! \\left\\| \\tilde{f_n} \\right\\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}} \\\\*\n &\\leq \\liminf_{m\\to\\infty}\\sqrt{\\sum_{n=1}^\\infty nn! \\left\\| \\tilde{f}_n^{(m)} \\right\\|^2_{L_2(\\mathbbm{m}^{\\otimes n})}} \\\\*\n &\\leq \\liminf_{m\\to\\infty} \\left( \\normP{g_m(Y)\\sqrt{N(A)}} + \\normP{g_m(Y)} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} \\right)\\\\*\n & = \\normP{Y\\sqrt{N(A)}} + \\normP{Y} \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} < \\infty.\n\\end{align*}\nThus $Y\\in\\mathbbm{D}_{1,2}$.\n\\end{proof}\n\nWe use the notation $ \\alpha \\sim_c \\beta$ for $\\frac{1}{c} \\beta \\leq \\alpha \\leq c \\beta$ for $c\\geq 1$ and \n$\\alpha, \\beta\\in[0,\\infty]$.\n \n\\begin{corollary}\\label{corollary:norm_equivalence}\nLet $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E} \\left[ N(A)\\right] < \\infty$ and assume that \n$Y= \\sum_{n=0}^\\infty I_n(f_n)\\in L_2({\\mathbbm{P}})$\n is ${\\cal F}_A$-measurable. Then\n$$ \\|Y\\|_{\\mathbbm{D}_{1,2}} \\sim_{\\sqrt{2}\\left( \\sqrt{ \\mathbbm{E} \\left[ N(A) \\right]} +1 \\right)} \n\\left\\| Y \\sqrt{N(A) +1 } \\right\\|_{L_2({\\mathbbm{P}})}, $$\n where the norms may be infinite. 
\n\\end{corollary}\n\\begin{proof}\nThe inequalities \\eqref{equation:second_double_inequality1} and \\eqref{equation:second_double_inequality2} give the relation\n$$\\left \\| Y \\sqrt{N(A)} \\right\\|_{L_2({\\mathbbm{P}})} + \\|Y\\|_{L_2({\\mathbbm{P}})} \\sim_{ \\sqrt{ \\mathbbm{E} \\left[ N(A) \\right]} +1 } \\|Y\\|_{L_2({\\mathbbm{P}})} \n+ \\|DY\\|_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})}. $$ \nThe claim follows from $\\|Y\\|_{\\mathbbm{D}_{1,2}} \\leq \\|Y\\|_{L_2({\\mathbbm{P}})} + \\|DY\\|_{L_2(\\mathbbm{m}\\otimes{\\mathbbm{P}})} \\leq \\sqrt{2}\\|Y\\|_{\\mathbbm{D}_{1,2}} $ and\n\\begin{align*}\n { \\normP{Y\\sqrt{N(A)+1}} }\n & \\leq \\left\\| Y \\left( \\sqrt{N(A)} + 1 \\right) \\right\\|_{L_2({\\mathbbm{P}})} \\\\\n & \\leq \\left\\| Y \\sqrt{N(A)} \\right\\|_{L_2({\\mathbbm{P}})} + \\|Y\\|_{L_2({\\mathbbm{P}})}\\\\\n & \\leq { \\sqrt{ 2 \\left( \\left\\| Y \\sqrt{N(A)} \\right\\|^2_{L_2({\\mathbbm{P}})} + \\|Y\\|^2_{L_2({\\mathbbm{P}})} \\right) } }\\\\\n & = { \\sqrt{2} \\left\\| Y \\sqrt{N(A) +1 } \\right\\|_{L_2({\\mathbbm{P}})}. } \n\\end{align*}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Fractional differentiability}\n\\label{section:fractional}\n\n\nWe consider fractional smoothness in the sense of real interpolation spaces between $L_2({\\mathbbm{P}})$ and $\\mathbbm{D}_{1,2}$. \nFor parameters $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$ the interpolation space $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$\nis a Banach space, intermediate between $L_2({\\mathbbm{P}})$ and $\\mathbbm{D}_{1,2}$.\n\n \n\n\nWe shortly recall the $K$-method of real interpolation. 
\nThe K-functional of $Y\\in L_2({\\mathbbm{P}})$ is the mapping\n$K(Y,\\cdot; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2}): (0,\\infty) \\to [0,\\infty)$ defined by \n\\begin{align*} \n&K(Y,s; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})\\\\ & := \\inf\\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s\\|Y_1\\|_{\\mathbbm{D}_{1,2}}:\\ Y=Y_0+Y_1,\\,Y_0\\in L_2({\\mathbbm{P}}),\\, Y_1\\in \\mathbbm{D}_{1,2} \\}\n\\end{align*}\nand we shall use the abbreviation $K(Y,s)$ for $K(Y,s; L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})$.\nLet $\\theta\\in(0,1)$ and $q\\in[1,\\infty]$. The space $(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}$ consists of all $Y\\in L_2({\\mathbbm{P}})$\nsuch that\n\\[\n\\|Y\\|_{(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q}} \n= \\begin{cases}\n \\left[ \\int_0^\\infty \\left| s^{-\\theta} K(Y,s) \\right|^q \\frac{\\mathrm{d}s}{s} \\right]^{\\frac{1}{q}}, & q\\in[1,\\infty)\\\\\n \\sup_{s>0} s^{-\\theta} K(Y,s), & q=\\infty\n \\end{cases}\n\\]\nis finite.\n\n\nThe interpolation spaces are nested in a lexicographical order:\n\\[\n\\mathbbm{D}_{1,2} \\subset (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\eta,p} \\subset (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,q} \\subseteq (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,p} \\subset L_2({\\mathbbm{P}})\n\\]\n for $ 1 \\leq q \\leq p \\leq \\infty$ and\n $ 0 < \\theta < \\eta < 1$.\nFor further properties of interpolation we refer to \\cite{bennet-sharpley} and \\cite{triebel}.\n\n\n\\begin{theorem} \\label{theorem:fractional}\n Let $\\theta\\in(0,1)$, $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $ \\mathbbm{E}\\left[ N(A) \\right]< \\infty$ \n and $Y\\in L_2({\\mathbbm{P}})$\n be ${\\cal F}_A$-measurable. 
Then \n$$Y \\in (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}\\text{ if and only if } \\mathbbm{E}\\left[ Y^2 N(A)^\\theta \\right] < \\infty.$$ If\n $Y \\in (L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}$, then\n %\n$$ \\|Y\\|_{(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}}\n \\sim_{\\sqrt{2}\\frac{ \\sqrt{ \\mathbbm{E}\\left[ N(A) \\right] } +1}{\\sqrt{\\theta(1-\\theta)}}} \\left\\| Y \\sqrt{N(A) +1}^{\\,\\theta} \\right\\|_{L_2({\\mathbbm{P}})}.$$\n\\end{theorem}\n\\begin{proof}\nWe first show that\n \\begin{equation} \\label{equation:Kfunctional}\n K(Y,s) \n\\sim_{ 2\\left( \\sqrt{\\mathbbm{E}\\left[ N(A)\\right]} +1 \\right)} \\left\\| Y \\min\\left\\{1, s \\sqrt{N(A) +1} \\right\\} \\right \\|_{L_2({\\mathbbm{P}})}.\n \\end{equation}\nFrom Lemma \\ref{lemma:measurability} we obtain the inequalities $ \\| \\mathbbm{E} \\left[ Y_0 | {\\cal F}_A \\right] \\|_{L_2({\\mathbbm{P}})} \\leq \\|Y_0\\|_{L_2({\\mathbbm{P}})} $\nand $\\| \\mathbbm{E} \\left[ Y_1 | {\\cal F}_A \\right] \\|_{\\mathbbm{D}_{1,2}} \\leq \\|Y_1\\|_{\\mathbbm{D}_{1,2}} $ for any $Y_0\\in L_2({\\mathbbm{P}})$ and $Y_1\\in\\mathbbm{D}_{1,2}$. Hence\n\\begin{align}\\label{align:K_relation_one}\n & K(Y,s) \\nonumber\\\\*\n& = \\inf\\left\\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s\\|Y_1\\|_{\\mathbbm{D}_{1,2}}:\\ Y_0+Y_1 = Y,\\, Y_0\\in L_2({\\mathbbm{P}}),\\, Y_1\\in \\mathbbm{D}_{1,2} \\right\\}\\nonumber \\\\\n& = \\inf\\left\\{ \\| \\mathbbm{E} \\left[ Y_0 | {\\cal F}_A \\right] \\|_{L_2({\\mathbbm{P}})} + s\\| \\mathbbm{E} \\left[ Y_1 | {\\cal F}_A \\right] \\|_{\\mathbbm{D}_{1,2}}: Y_0+Y_1 = Y,\\, Y_1\\in \\mathbbm{D}_{1,2} \\right\\}\\nonumber \\\\*\n& \\sim_{c} \n \\inf\\left \\{ \\|Y_0\\|_{L_2({\\mathbbm{P}})} + s \\left\\| Y_1 \\sqrt{N(A)+1} \\right\\|_{L_2({\\mathbbm{P}})} : \n Y_0 + Y_1 = Y, Y_1\\in\\mathbbm{D}_{1,2} \\right \\}\n\\end{align}\nfor $c = \\sqrt{2}\\left( \\sqrt{\\mathbbm{E}\\left[ N(A) \\right]} +1\\right) $ by Corollary \\ref{corollary:norm_equivalence}. 
\nNext we approximate the $K$-functional from above with the choice $Y_0 = Y\\mathbbm{1}_{\\left\\{ \\sqrt{N(A)+1} > \\frac{1}{s}\\right\\}} $ \nand get from \\eqref{align:K_relation_one} that\n\\begin{align*}\n& \\frac{1}{c} K(Y,s)\\\\\n& \\leq \\left( \\left\\|Y \\mathbbm{1}_{ \\left\\{{ \\sqrt{N(A) +1}} > \\frac{1}{s} \\right\\}} \\right\\|_{L_2({\\mathbbm{P}})} \n + s \\left\\| Y{ \\sqrt{N(A) +1} } \\mathbbm{1}_{ \\left\\{ { \\sqrt{N(A) +1 }} \\leq \\frac{1}{s} \\right\\}} \\right\\|_{L_2({\\mathbbm{P}})} \\right) \\\\\n& \\leq \\sqrt{2} \\left\\| Y \\min\\left\\{ 1, s { \\sqrt{N(A) +1 }} \\right\\} \\right\\|_{L_2({\\mathbbm{P}})}.\n\\end{align*}\nUsing the triangle inequality and the fact that $$|Y({\\omega}) -y| + |y|a \\geq |Y({\\omega})| \\min\\{1,a\\}$$ for all ${\\omega}\\in{\\Omega}$, \n$y\\in\\mathbbm{R}$ and $a\\geq 0$ we obtain from \\eqref{align:K_relation_one} the lower bound \n\\begin{align*}\n & c K(Y,s)\\\\*\n& \\geq \\inf \\left\\{ \\left\\| |Y_0| + |Y_1|s { \\sqrt{N(A) +1} } \\right\\|_{L_2({\\mathbbm{P}})} : Y = Y_0+Y_1,\\, Y_1\\in\\mathbbm{D}_{1,2} \\right\\}\\\\*\n& \\geq \\left\\| Y \\min \\left\\{ 1, s { \\sqrt{N(A) +1}} \\right\\} \\right\\|_{L_2({\\mathbbm{P}})}.\n\\end{align*}\nWe have shown that \\eqref{equation:Kfunctional} holds.\nFrom \\eqref{equation:Kfunctional} we get \n\\begin{align*}\n& \\|Y\\|_{(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}} \\\\*\n&\\sim_{2\\left( \\sqrt{\\mathbbm{E} \\left[ N(A) \\right]} +1 \\right)} \n\\left( \\int_0^\\infty \\left| s^{-\\theta} \\normP{ Y \\min\\left\\{ 1, s { \\sqrt{N(A) +1 } } \\right\\} } \\right|^2 \n\\frac{{\\mathrm{d}} s}{s} \\right)^{\\frac{1}{2}}.\n\\end{align*}\nWe finish the proof by computing the integral, first using Fubini's theorem. 
We get\n\\begin{align*}\n& \\int_0^\\infty \\left| s^{-\\theta} \\normP{ Y \\min\\left\\{ 1, s { \\sqrt{N(A) +1 } } \\right\\} } \\right|^2 \\frac{{\\mathrm{d}} s}{s}\\\\*\n& = \\mathbbm{E} \\left[ Y^2 \\int_0^\\infty s^{-2\\theta} \\min\\left\\{ 1, s^2 {\\left( N(A) +1 \\right) } \\right\\}\\frac{{\\mathrm{d}} s}{s} \\right] \\\\*\n& = \\mathbbm{E} \\left[ Y^2 \\frac{1}{2\\theta(1-\\theta)} { \\left( N(A) + 1 \\right)^{\\theta} } \\right].\n\\end{align*}\n\\end{proof}\n\n\n\n\n\n\\section{Concluding remarks}\n\n\n\n\nFrom Theorem \\ref{theorem:second_double_inequality} assertion \\emph{2.} we can conclude that a higher integrability than square integrability\ncan imply Malliavin differentiability. For example,\nall the spaces $L_p({\\Omega},{\\cal F}_A, {\\mathbbm{P}})$ are subspaces of $\\mathbbm{D}_{1,2}$ when $p>2$ and $\\mathbbm{E}[N(A)]< \\infty$ as we can deduce from the following corollary.\n\n\n\n\\begin{corollary} Let $A\\in\\mathcal{B}(\\mathbbm{R}_+\\times\\mathbbm{R}_0)$ be such that $\\lambda:=\\mathbbm{E}[N(A)] \\in (0, \\infty)$ so that $N(A)\\sim$ Poisson($\\lambda$). \nThen for the space\n$$L_2 \\log^+ L_2({\\Omega},{\\cal F}_A,{\\mathbbm{P}}) : = \\left\\{ Y \\in L_2({\\Omega},{\\cal F}_A,{\\mathbbm{P}}): \\mathbbm{E} \\left[ Y^2 \\ln^+ Y^2 \\right] < \\infty \\right\\},$$\nwhere $\\ln^+x = \\max\\{\\ln x , 0\\}$, it holds that\n$$L_2 \\log^+ L_2({\\Omega},{\\cal F}_A,{\\mathbbm{P}})\\subsetneq \\mathbbm{D}_{1,2} \\cap L_2({\\Omega},{\\cal F}_A,{\\mathbbm{P}}).$$\n\\end{corollary}\n\\begin{proof}\nSuppose $\\mathbbm{E} \\left[ Y^2 \\ln^+ Y^2 \\right] < \\infty$ and let ${\\varphi}(y)=\\ln(y+1)$. The functions $\\Phi$ and $\\Phi^\\star$ with\n$$\\Phi(x) = \\int_0^x {\\varphi}(y) {\\mathrm{d}} y = (x+1)\\ln(x+1) -x \\leq 1 + x\\ln^+ x$$ \nand\n$$\\Phi^\\star(x)= \\int_0^x {\\varphi}^{-1}(y) {\\mathrm{d}} y =e^x - x -1$$\nare a complementary pair of Young functions. 
They satisfy the\nYoung inequality $xy \\leq \\Phi(x) + \\Phi^\\star(y)$ for all $x,y\\geq0$ and we get\n\\begin{align*}\n \\mathbbm{E} \\left[ Y^2 N(A) \\right] \n& \\leq \\mathbbm{E} \\left[ \\Phi\\left( Y^2 \\right) \\right] + \\mathbbm{E} \\left[ \\Phi^\\star(N(A)) \\right] \\\\*\n& \\leq \\mathbbm{E} \\left[ Y^2 \\ln^+\\left( Y^2 \\right) \\right] + e^{(e-1){\\lambda} } - {\\lambda} \\\\*\n &< \\infty. \\end{align*}\n %\nHence $Y\\in\\mathbbm{D}_{1,2}$ by Theorem \\ref{theorem:second_double_inequality}.\n\nTo see that the inclusion is strict, let $a\\in(1,2]$ and choose a Borel function $f:\\mathbbm{R}\\to\\mathbbm{R}$ such that\n$f(0)=f(1)=0$ and\n$$f(n) = \\sqrt{e^\\lambda \\frac{n!}{\\lambda^n} \\frac{1}{n^2 \\ln^a n}} \\quad \\text{ for }n = 2,3,\\ldots.$$\nThen, since\n $\\ln n! = \\sum_{k=2}^n \\ln k \\geq \\int_1^n \\ln x\\, {\\mathrm{d}} x = n\\ln n - n + 1 $ for $n\\geq 2$ and $a\\leq 2$, we have\n\\begin{align*}\n \\mathbbm{E}\\left[f^2(N(A))\\ln^+ f^2(N(A))\\right] \n& = \\sum_{n=2}^\\infty \\frac{1}{n^2 \\ln^a n}\\ln \\left( e^\\lambda \\frac{n!}{\\lambda^n} \\frac{1}{n^2 \\ln^a n} \\right)\\\\\n& = \\sum_{n=2}^\\infty \\frac{\\ln n!}{n^2 \\ln^a n}\n + \\sum_{n=2}^\\infty \\frac{1}{n^2 \\ln^a n}\\ln \\left(e^\\lambda\\frac{1}{\\lambda^n} \\frac{1}{n^2 \\ln^a n} \\right) \\\\\n& = \\infty,\n\\end{align*}\nbut\n$$\\mathbbm{E}\\left[N(A) f^2(N(A))\\right] = \\sum_{n=2}^\\infty nf^2(n) e^{-\\lambda} \\frac{\\lambda^n}{n!} = \\sum_{n=2}^\\infty \\frac{1}{n\\ln^a n} < \\infty $$\nso that $f(N(A)) \\in \\mathbbm{D}_{1,2}$ by Theorem \\ref{theorem:second_double_inequality}.\n\\end{proof}\n\n\n\n\n\n\n\\begin{remark}\\label{remark:compound_differentiability}\n Suppose ${\\sigma}=0$ and $\\nu(\\mathbbm{R})<\\infty$, which means that $X$ is a compound Poisson process (with drift) and\n $$X_t = \\beta t + \\int_{(0,t]\\times\\mathbbm{R}_0} x N({\\mathrm{d}} s, {\\mathrm{d}} x)\\quad \\text{ for all }t\\geq 0 \\text{ a.s.}$$\n for some $\\beta\\in\\mathbbm{R}$. 
The process $(N_t)_{t\\geq0}$, with $N_t = N((0,t]\\times\\mathbbm{R}_0)$ a.s., is the Poisson process associated to $X$.\n Let $T\\in(0,\\infty)$ and ${\\cal F}_T$ be the completion of the sigma-algebra generated by $(X_t)_{t\\in[0,T]}$. Then\n ${\\cal F}_T = {\\cal F}_{[0,T]\\times\\mathbbm{R}}$ and by Theorems \\ref{theorem:second_double_inequality} and \\ref{theorem:fractional} for\n any\n ${\\cal F}_T$-measurable random variable $Y$ and any $\\theta\\in(0,1)$ it holds that\n \\begin{itemize} \\item[(a)] $Y\\in\\mathbbm{D}_{1,2}$ if and only if\n $\\normP{ Y\\sqrt{N_T+1}} < \\infty$ and\n \\item[(b)] $Y\\in(L_2({\\mathbbm{P}}),\\mathbbm{D}_{1,2})_{\\theta,2}$ if and only if $\\normP{ Y\\sqrt{N_T+1}^{\\,\\theta}} < \\infty$.\n \\end{itemize}\n\\end{remark}\n\n\n\n\n\n\n\n{\\bf Acknowledgements.} The author is grateful to Christel Geiss and Stefan Geiss for several\nvaluable ideas and suggestions regarding this work.\n\n\n\n\n\\section{Introduction}\n\nThe question whether solutions of the two-dimensional Euler equation in vorticity form\n\\begin{align}\\label{eqEuler}\n\\omega_t + u \\cdot \\nabla \\omega &= 0\n\\end{align}\ncan exhibit strong gradient growth in time is a topic of ongoing interest. The best known upper bound predicts\ndouble-exponential growth in time:\n$$\n\\|\\nabla \\omega\\|_\\infty \\leq C_1 \\exp(C_2 \\exp(C_3 t))\n$$\nwith constants $C_i$ depending on the initial data. A natural and important question is: \nAre there flows for which this upper bound is attained?
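The double-exponential rate can be traced to a model ODE: a log-Lipschitz velocity field (the borderline regularity available for 2d Euler) lets a trajectory separation $\\delta$ obey $\\dot \\delta = -C \\delta \\log(1/\\delta)$, whose exact solution $\\delta(t) = \\delta_0^{\\exp(Ct)}$ contracts double-exponentially. A minimal numerical sketch (an editorial illustration; the values $C=1$, $\\delta_0=1/2$ are arbitrary choices, not quantities from the paper):

```python
import math

def exact(delta0, C, t):
    # d(delta)/dt = -C * delta * log(1/delta)  =>  delta(t) = delta0 ** exp(C*t),
    # seen by substituting y = log(1/delta), which satisfies y' = C*y.
    return delta0 ** math.exp(C * t)

def rk4(delta0, C, t, n=10_000):
    # Classical RK4 integration of the same model ODE.
    f = lambda d: -C * d * math.log(1.0 / d)
    h, d = t / n, delta0
    for _ in range(n):
        k1 = f(d)
        k2 = f(d + 0.5 * h * k1)
        k3 = f(d + 0.5 * h * k2)
        k4 = f(d + h * k3)
        d += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return d

assert abs(rk4(0.5, 1.0, 2.0) - exact(0.5, 1.0, 2.0)) < 1e-8
```

This kind of trajectory compression is the heuristic behind the double-exponential form of the gradient bound.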
The problem can be considered in bounded domains \nwith no-flow boundary conditions or in domains without a natural boundary (e.g. on the torus). \nFor domains with boundary, a recent breakthrough by A. Kiselev and V. \\v Sver\\'ak \n\\cite{KiselevSverak} answers the question affirmatively. For smooth solutions on the torus, the best known result \nso far was given by S. Denisov. In \\cite{Denisov1}, he shows that at least superlinear gradient growth is possible\nand in \\cite{Denisov2} he provides an example of double-exponential growth for an arbitrarily long, but finite\ntime interval. In the recent paper \\cite{Zlatos}, A. Zlat\\v os constructs initial data leading to exponential gradient growth; \nhis solution is, however, in $C^{1, \\gamma}$ for some $\\gamma \\in (0, 1)$ and not in $C^2$. \n\nIn \\cite{KiselevSverak} the construction is based on imposing certain symmetries on the solution\nleading to a \\emph{hyperbolic flow scenario}. The presence of a boundary and the hyperbolic flow work nicely\ntogether, allowing the construction of examples with double-exponential gradient growth.\nConsidering double-odd solutions, i.e.\n\\begin{align}\\label{doubleOddSym}\n\\omega(-x_1, x_2)& = - \\omega(x_1, x_2), ~\\omega(x_1, -x_2) = - \\omega(x_1, x_2),\n\\end{align}\nis a possible, natural way to replace the physical wall from \\cite{KiselevSverak}\nby the $x_1$-axis in order to try to create strong gradient growth in the bulk. This \nconstruction was employed in \\cite{Zlatos}. In \\cite{Denisov2}, a perturbation argument\nstarting from a non-smooth double-odd stationary solution (see \\cite{Bahouri}) was used. Creating infinite-time \n\\emph{double-exponential} growth away from the boundary, however, is met with considerable \ndifficulties.\n\nIt is interesting to notice that the result \\cite{KiselevSverak} is in some sense analogous to the still open blowup problem \nfor the more singular surface quasigeostrophic equation.
For SQG, blowup means that the solution becomes singular in finite time, whereas for the 2d Euler \nequation ``blowup'' would mean maximal (double-exponential) gradient growth on an infinite time interval. \nThere are important conditional regularity results for the SQG equation such as \\cite{Cordoba}, \\cite{CordobaFefferman}, where \none studies a certain blowup scenario, in order to finally exclude it.\nAn analogous ``conditional regularity result'' for the 2d Euler equation would be to show that in certain scenarios maximal gradient growth does not occur. Since \nthe possible motions of fluids are various and in general very complicated, studying scenarios is an invaluable method to gain insight into \nregularity problems of fluid mechanics.\n\nOur goal in this paper is to prove such a conditional regularity result in the sense that \na hyperbolic flow cannot create maximal gradient growth near the origin by itself when we start with double-odd $C^2$ initial data,\nprovided a certain ``upstream'' control is assumed on the flow.\nThis is an important step toward understanding the double-odd hyperbolic scenario since we rule out the most promising \ncandidate for a mechanism creating maximal gradient growth, i.e. the local hyperbolic compression. Our result does not imply\nimpossibility of double-exponential growth in general, but makes the construction of examples much harder. \n\nIn some sense, the scenario considered here is complementary to the one considered by D. Cordoba for the SQG\nequation in \\cite{Cordoba}, in which a closing hyperbolic saddle is studied. There the solution stays smooth except for\nthe possible closing of the saddle.
In our scenario for 2d Euler, the hyperbolic saddle is fixed due to the symmetry\n($\\omega=0$ on the coordinate axes), and we are asking if blowup can happen in another way.\n\nFinally, we would like to mention the recent preprint \\cite{Katz}, where a different approach is proposed\nto study whether double-exponential gradient growth can occur at an interior point (see also\nT. Tao's blog \\cite{Tao} for a related discussion).\n\n\n\n\\subsection{Main result.}\nWe consider \\eqref{eqEuler} on $\\mathbb{T}=[-1,1)^2$ with periodic boundary conditions and double-odd $C^2$ initial data $\\omega_0$. The double-odd \nsymmetry is preserved by the evolution and \\eqref{doubleOddSym} implies that the origin is a stagnant point of the flow field for all times. Moreover, the flow on each coordinate axis is always directed along that axis. When considering smooth solutions $\\omega\\in C^1([0, \\infty), C^2(\\mathbb{T}))$,\n\\eqref{doubleOddSym} also implies \n$$\\omega = 0$$ \non the coordinate axes.\n\nWe will study the flow in boxes of the form\n\\begin{align}\nD&=(0,\\delta_1)\\times(0,\\delta_2),~~\\widehat{D}=(0,\\delta_1+\\delta_3)\\times(0,\\delta_2),\n\\end{align}\nwhere $\\delta_j$ are positive but small, and \n\\begin{align}\\label{defDeltas}\n0<\\delta_1< \\delta_2 < \\delta_1+\\delta_3. \n\\end{align}\nIn a hyperbolic flow, which we will explain in detail in section \\ref{secHyperbolic}, fluid particles are supposed to constantly enter the box\n$D$ from the right and leave at the top. Therefore we call $\\widehat{D}\\setminus D$ the \\emph{feeding zone}. The following definition\nformalizes the control we assume on the solution in the feeding zone.
The meaning of the parameter $\\alpha$\nwill become clear later.\n\\begin{definition} Let $\\alpha\\in(0,\\frac{1}{4})$.\nThe box $\\widehat{D}$ is said to satisfy the conditions of {\\em controlled feeding}, with feeding parameter $R \\geq 0$, if \n\\begin{align}\\label{eqfeeding1}\n|\\d_{x_1}\\omega(x, t)|\\leq R x_2^{1-\\alpha}, ~~|\\d_{x_2}\\omega(x, t)|\\leq R \\quad (x\\in \\widehat{D}\\setminus D)\n\\end{align} \nfor all times $t \\geq 0$.\n\\end{definition}\nWe can think of the first inequality in \\eqref{eqfeeding1} as a H\\"older-version of a bound on $\\d_{x_2, x_1}\\omega$, keeping in mind that $\\d_{x_1}\\omega(x_1, 0, t)=0$\nfor all times. The controlled feeding conditions allow us to study the evolution of $\\omega$\nin $D$ independently of the remaining flow.\n\nOur main result is the following theorem. \n\\begin{thm} \\label{main} Fix $0< \\alpha < \\frac{1}{4}$ and $\\delta_3>0$. Let $\\omega$ be a smooth, double-odd solution of the Euler equation, and \nsuppose the flow is hyperbolic near the origin. Let $R> 0$ be given. \nThere exist small $\\delta_1, \\delta_2>0$ such that if $\\widehat{D}$ satisfies the controlled feeding conditions with parameter $R$,\nthen \n\\begin{equation}\n\\|\\nabla \\omega\\|_{D, \\infty} \\leq C_1 \\exp( C_2 t)\\quad (t\\in [0, \\infty))\n\\end{equation}\nfor some $C_1, C_2 > 0$.\n\\end{thm}\nThis means that, in this situation, one cannot rely on the hyperbolic compression alone to produce maximal gradient growth near the origin; instead, one has to create in some other way a scenario in which the feeding conditions are violated, i.e. there has to be a compression in the\n$x_2$-direction in the feeding zone.\n\n\n\\subsection{The hyperbolic scenario.} \\label{secHyperbolic}\n\nIn order to give a definition of hyperbolic flow suitable for our purposes, we introduce the following\nimportant quantity. Let $\\alpha\\in (0, \\frac{1}{4})$ be fixed.
For a smooth, periodic function $\\omega$ we set\n\\begin{equation}\nM(x, t) := \\max_{0\\le y_1, y_2\\le \\max\\{x_1, x_2\\}}\\left\\{\\left|y_1^\\alpha \\frac{\\d \\omega}{\\d x_1}(y, t)\\right|, \\left|y_2^\\alpha \\frac{\\d \\omega}{\\d x_2}(y, t)\\right|\\right\\}.\n\\end{equation}\nNote that $M(x,t)$ also depends on $\\omega$ and $\\alpha$.\nThe velocity field $u(x, t) := \\nabla^\\bot (-\\Delta)^{-1}\\omega$ for double-odd $\\omega$ \n($\\omega$ with mean zero over $\\mathbb{T}$) can be written in the form\n\\begin{equation}\\label{velStruc}\nu_1(x, t) = - x_1 Q_1(x, t), ~u_2(x, t) = x_2 Q_2(x, t)\n\\end{equation}\nwhere $Q_1, Q_2$ are scalar fields given by certain integral operators (see \\eqref{defQ}) acting on $\\omega$.\nThe following definition says we regard the flow as hyperbolic if both $Q_1$ and $Q_2$ essentially have a positive\nlower bound, up to a term controlled by the quantity $M(x,t)$. \n\\begin{definition}\nLet $\\omega$ be a smooth solution of the Euler equation, and let $\\alpha\\in (0, \\frac{1}{4})$ be fixed.\nWe say that the flow is hyperbolic near the origin if there are constants $\\rho, A, \\beta_0>0$ \nfor which the following condition is satisfied\n\\begin{equation}\\label{Hyperbolicity}\nQ_i(x, t) + A |x|^{1-\\alpha} M(x, t) \\geq \\beta_0 > 0 \\qquad(0\\le x_1, x_2 \\le \\rho)\n\\end{equation}\nwhere $i=1, 2$, and for all $t \\in [0, \\infty)$. \n\\end{definition}\nBy choosing the initial data $\\omega_0$ suitably, we can ensure hyperbolic flow.\nOne possible choice is, for example, choosing $\\omega_0$ to be nonnegative in $[0, 1]^2$ and such that $\\omega_0 = 1$ on a set of sufficiently large measure, as it\nwas done in \\cite{KiselevSverak}, \\cite{Zlatos}. This creates \na situation where \\eqref{Hyperbolicity} is satisfied. 
The proof will be given in section \\ref{PotentialTheory}.\nPhysically, we then have compression of the fluid in the $x_1$-direction and expansion of the fluid in the $x_2$-direction.\n\n\\section{Gradient growth in the hyperbolic scenario}\nBefore describing our approach, let us first explain why, at first sight, the hyperbolic scenario seems to be a good candidate \nfor double-exponential growth. Namely, for $Q_1, Q_2$ we have the upper bounds\n\\begin{equation}\nQ_1(x, t), Q_2(x, t) \\lesssim \\|\\omega\\|_{\\infty} |\\log(x_1^2 + x_2^2)|.\n\\end{equation}\nIf it were possible to create a situation where a \\emph{lower bound} of roughly the same order holds, i.e.\n$Q_1 \\geq C |\\log(x_1^2 + x_2^2)|$ over an infinitely long time interval, then for the particle trajectories \nlying on the $x_1$-axis (i.e. $X_2=0$) \n$$\nX_1(t) \\le \\exp( - C_1 \\exp( C_2 t))\n$$\nwould hold, as seen by solving the ODE $\\dot X_1 = -X_1 Q_1$. If, moreover, one could arrange for the initial data $\\omega_0$ \nto have suitable nontrivial values on the $x_1$-axis, then this would create double-exponential gradient growth. \nHowever, the simultaneous requirements of smoothness and double-odd symmetry of $\\omega$ \nnecessarily imply $\\omega=0$ on the axes. Moreover, it is highly unclear how such a strong lower bound on $Q_1$ could be achieved. \nAs we shall see later, a certain amount of smoothness of $\\omega$ and the vanishing of $\\omega$ on the axes \nlead to a better upper bound, {\\em without} the logarithmic behavior which is crucial for the double-exponential growth. \n\nAnother way one might hope to get double-exponential growth is to consider a ``projectile'', i.e. to track the movement\nof a small domain close to the origin on which $\\omega = 1$, as it was done in \\cite{KiselevSverak}. There the self-interaction of the projectile\nwas able to create enough growth in the values of $Q_1$ to allow double-exponential growth.
Namely, while the projectile\napproaches the origin, the values of $Q_1$ on it get larger, this fact being connected to a certain logarithmically divergent integral. \nOur Theorem \\ref{main} shows that in general this is not possible for double-odd solutions, unless there is some compression in \n$x_2$-direction in the feeding zone. Thus a scenario with maximal gradient growth must be much more complicated than \njust using the self-interaction of the projectile.\n\nIn fact, provided the feeding condition holds, the steady fluid compression guaranteed by \\eqref{Hyperbolicity} will turn out \nto stabilize the flow in the neighborhood of the origin. That is, the hyperbolicity condition \\eqref{Hyperbolicity}\n- essentially a lower bound on $Q_i$ - is converted in the proof of Theorem \\ref{main}\ninto an upper bound for $Q_i$. This is what finally leads to a bound on the gradient growth in $D$. \n\n\\subsection{Heuristic considerations.}\nWe now present an intuitive discussion of our result. Fluid particles carried by the hyperbolic flow will\n constantly enter the box $D$ from the right and leave on\nthe top (see figure \\ref{fig1}). All particles except for those moving on the axes spend a finite time\nin the box. As for the particles on the $x_1$-axis, these move towards the\nleft, approaching the origin asymptotically as $t\\to \\infty$. Particle trajectories\n$t\\mapsto \\mathbf{X}(t)=(X_1(t), X_2(t))$ for which $X_2(0)$ is small approximate the\nstraight trajectories of the particles on the $x_1$-axis, before going steeply\nupward. The time a particle spends in $D$ goes to infinity as $X_2(0)\\to 0$.\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[scale=1]{figure1.pdf}%\n\\end{center}\n\\caption{Illustration of a hyperbolic flow.} \\label{fig1}\n\\end{figure} \nWe now consider the trajectory of a particle $\\mathbf{X}$. 
The particle may \nhave started inside $D$ at time $t=0$, or may have entered the box at some\ntime $T_0 > 0$, in which case $\\mathbf{X}(T_0)\\in \\d D$. Also, assume that the particle\nexits the box $D$ at some time $T_e$, i.e. $X_2(T_e)=\\delta_2$. \nThe evolution of the gradient of $\\omega$ along the trajectory is given by an ODE of the form\n\\begin{align}\\label{ode1}\n\\frac{d}{d t}\\nabla \\omega(\\mathbf{X}(t), t) = (-\\nabla u)^T(\\mathbf{X}(t), t) \\nabla \\omega(\\mathbf{X}(t), t)\n\\end{align}\nwhere $\\nabla u$ is the velocity gradient. The relation \\eqref{ode1} is simply obtained by\ndifferentiating the Euler equation. The key is now to use the structure \\eqref{velStruc}\nof the velocity field. Combining with \\eqref{ode1}, we obtain\n\\begin{align}\\label{ode2}\n\\frac{d}{d t}\\nabla \\omega(\\mathbf{X}(t), t) = \\begin{pmatrix}\nQ_1 + x_1 \\frac{\\d Q_1}{\\d x_1}& -x_2\\frac{\\d Q_2}{\\d x_1}\\\\\nx_1 \\frac{\\d Q_1}{\\d x_2} & - Q_2 - x_2 \\frac{\\d Q_2}{\\d x_2}\n\\end{pmatrix}\n\\nabla \\omega(\\mathbf{X}(t), t)\n\\end{align}\nWe write the matrix in \\eqref{ode2} as \n\\begin{equation*}\n\\begin{pmatrix}\na(t) & c(t) \\\\\nb(t) & -a(t)\n\\end{pmatrix}\n\\end{equation*}\nevaluating all matrix entries along the given trajectory $\\mathbf{X}$ (note that the matrix has trace zero, since the\nvelocity field $u$ is divergence free). Since in a sufficiently small box\n$x_1 \\frac{\\d Q_1}{\\d x_1}, x_2 \\frac{\\d Q_2}{\\d x_2}$ should be rather ``small\" (due to the prefactors $x_1, x_2$), \n$a$ should be positive and bounded away from zero along the hyperbolic trajectory. 
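The behavior of the trace-free system \\eqref{ode2} can be previewed on a constant-coefficient toy version (an editorial illustration: the entries $a, b, c$ below are frozen numbers, whereas in the paper they vary along the trajectory):

```python
import math

def rk4_system(a, b, c, xi0, t, n=20_000):
    # RK4 integration of xi' = [[a, c], [b, -a]] xi
    # (constant coefficients, trace zero, like the matrix in the text).
    x1, x2 = xi0
    h = t / n
    f = lambda x1, x2: (a * x1 + c * x2, b * x1 - a * x2)
    for _ in range(n):
        k1 = f(x1, x2)
        k2 = f(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1])
        k3 = f(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1])
        k4 = f(x1 + h * k3[0], x2 + h * k3[1])
        x1 += (h / 6.0) * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += (h / 6.0) * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x1, x2

a, t = 0.7, 3.0
# Diagonal case (b = c = 0): pure exponential growth/decay at rate a.
x1, x2 = rk4_system(a, 0.0, 0.0, (1.0, 1.0), t)
assert abs(x1 - math.exp(a * t)) < 1e-6
assert abs(x2 - math.exp(-a * t)) < 1e-6
# Lower-triangular case (c = 0, constant b): the second component
# picks up a growing part driven by the first one.
b = 0.5
x1, x2 = rk4_system(a, b, 0.0, (1.0, 1.0), t)
x2_exact = math.exp(-a * t) * (1.0 + b * (math.exp(2 * a * t) - 1.0) / (2 * a))
assert abs(x2 - x2_exact) < 1e-6
```

With $b=c=0$ one sees the pure pair $e^{at}, e^{-at}$; switching on $b$ alone already feeds the growing component into the second one.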
Roughly speaking, the form of \\eqref{ode2} implies that $\\omega_{x_1}$ grows in time like $e^{\\int_{T_0}^t a(\\mathbf{X}(s)) ds}$ whereas\n$\\omega_{x_2}$ should decay in time like $e^{-\\int_{T_0}^t a(\\mathbf{X}(s)) ds}$.\nThis would be exactly true if \\eqref{ode2} were a diagonal system.\n\nTo gain some insight, we consider the case of a particle moving close to the $x_1$-axis, i.e. with small $X_2(T_0) > 0$. We expect that\n$c = x_2 \\frac{\\d Q_2}{\\d x_1}, b = -x_1 \\frac{\\d Q_1}{\\d x_2}$ are ``small''. This suggests neglecting $b, c$ and setting $b, c=0$ in \\eqref{ode2},\nso that we have a diagonal system. Denoting $\\boldsymbol{\\xi}(t) = \\nabla \\omega(\\mathbf{X}(t))$,\nthe solution would be given by\n\\begin{align}\\label{eqSol}\n\\xi_1(t) = e^{A(t)} \\xi_1(T_0), ~~\\xi_2(t) = e^{-A(t)} \\xi_2(T_0),\n\\end{align}\nwhere $A(t) = \\int_{T_0}^t a(\\mathbf{X}(s)) ds$. \\eqref{eqSol} shows that, in general,\nthe gradient in $x_1$-direction grows along the particle trajectory. However, there is\nan effect which allows us to cancel the growing factor $e^A$. Assume for the sake of the discussion that the following stronger feeding conditions hold:\n\\begin{equation}\\label{eqfeeding2}\n|\\d_{x_1} \\omega(x, t)|\\leq R x_2, ~~|\\d_{x_2} \\omega(x, t)|\\leq R.\n\\end{equation}\nThese imply\n\\begin{equation}\\label{eqIntro1}\n|\\xi_1(t)|\\leq R e^{A(t)} X_2(T_0).\n\\end{equation}\nNow we observe that\n\\begin{align}\\label{eqIntro3}\nA(t) \\approx \\int_{T_0}^t Q_2(\\mathbf{X}(s))~ds,\n\\end{align}\ntemporarily neglecting the term $x_2 \\frac{\\d Q_2}{\\d x_2}$.
Now from \\eqref{velStruc} we have the differential equation $\\dot X_2 = X_2 Q_2$, so that \n$X_2(t) = X_2(T_0) \\expo{\\int_{T_0}^t Q_2(\\mathbf{X}(s)) ds}$\nand hence\n\\begin{equation}\\label{eqIntro2}\nX_2(T_0) = X_2(T_e) \\expo{- \\int_{T_0}^{T_e} Q_2(\\mathbf{X}(s)) ds}\\leq \\delta_2\\expo{- \\int_{T_0}^{T_e} Q_2(\\mathbf{X}(s)) ds}\n\\end{equation}\nCombining \\eqref{eqIntro2}, \\eqref{eqIntro1} and \\eqref{eqIntro3}, we get\n\\begin{equation}\n|\\xi_1(t)|\\leq \\delta_2 R \\expo{ - \\int_{t}^{T_e} Q_2(\\mathbf{X}(s))~ds} \\leq \\delta_2 R\n\\end{equation}\n(we assume $Q_2\\geq 0$ for this heuristic discussion), suggesting that the gradient\nin $x_1$-direction \\emph{does not grow at all in time}. Our rigorous result does not\ngive such a strong conclusion, but we will be able to prove that the gradient \n\\emph{grows at most exponentially in time}. In Remark \\ref{remAlpha} we explain why we actually do not use \n\\eqref{eqfeeding2}. \n\nThe heuristics appear deceivingly simple, but in order to make the argument rigorous, we have to overcome a number of \nformidable technical difficulties. To begin with, the coefficients of \\eqref{ode2} depend on the solution $\\omega$ through the integral operators $Q_1, Q_2$. \nThe derivatives $\\frac{\\d Q_1}{\\d x_1}, \\frac{\\d Q_2}{\\d x_2}$ are given by singular integral operators. These can be controlled if one has \ncontrol over the first derivatives $\\frac{\\d \\omega}{\\d x_1}, \\frac{\\d \\omega}{\\d x_2}$\nof $\\omega$ inside the box, and thus one has a certain control over the coefficients of the ODE system \\eqref{ode2}. \n\nOf course, none of the coefficients may be neglected, and we have to produce sufficiently good estimates on the solutions of the full ODE system \\eqref{ode2}. A major obstacle in getting good estimates, however, is caused by the unstable nature of \\eqref{ode2}. This may be seen, e.g. 
by setting $c=0$, but keeping $b$, so\nthat we get a supposedly better approximation than the diagonal system. In this model, the solutions can be calculated explicitly, and we get\n\\begin{align*}\n\\xi_1(t) = e^{A(t)} \\xi_1(T_0), ~~\\xi_2(t) = e^{-A(t)} \\left[ \\xi_2(T_0)+ \\xi_1(T_0)\\int_{T_0}^t b(s) e^{2 A(s)}~ds\\right].\n\\end{align*}\nThis shows that not only the derivative in $x_1$-direction but also the derivative in $x_2$-direction of $\\omega$ may potentially \ngrow in time (due to the contribution $e^{-A(t)}\\int_{T_0}^t b(s) e^{2 A(s)}~ds$),\nmaking things worse, since a possible strong growth in $\\frac{\\d\\omega}{\\d x_2}$ is coupled back into the coefficients of\nthe ODE \\eqref{ode2} via our estimates on $\\frac{\\d Q_1}{\\d x_1}, \\frac{\\d Q_2}{\\d x_2}$. On the other hand,\nthe factor $\\xi_1(T_0)$ may help as before, via the feeding condition \\eqref{eqfeeding2}. \nWe therefore need to proceed with extreme care, looking to cancel the growing factor $e^A$ with the decaying factor $e^{-A}$ whenever possible.\n\n\n\\section{Notation}\n\\subsection{Euler velocity field.}\nFor $x=(x_1,x_2)$ we write $\\til x = (-x_1,x_2)$ and $\\bar x=(x_1,-x_2)$. The velocity field for the Euler equation is\n\\begin{align}\\label{eqVelEuler}\nu(x, t) = \\frac{1}{2\\pi}\\int_{\\ensuremath{\\mathbb{R}}^2} \\frac{(y-x)^\\bot}{|y-x|^2} \\omega(y, t)~dy,\n\\end{align}\nwhere $\\omega\\in C^2(\\mathbb{T})$ is periodically extended to all of $\\ensuremath{\\mathbb{R}}^2$. In the calculation of the integral a limit in the\nmean (sequence of unboundedly growing domains) is understood. Note that the velocity field is $\\nabla^\\bot (-\\Delta)^{-1}\\omega$,\nwhere $-\\Delta$ is the periodic Laplacian on the torus $\\mathbb{T}$.
A simple calculation using the double-odd symmetry of $\\omega$ leads to\n\\begin{align*}\nu_1(x, t) = -x_1 Q_1(x, t), ~~u_2(x, t) = x_2 Q_2(x, t),\n\\end{align*}\nwhere $Q_1, Q_2$ are the following integral operators: \n\\begin{align}\\label{defQ}\\begin{split}\nQ_1(x, t)&=c_0\\int_{[0, 1]^2}[G_1^1(x,y)+G_1^2(x,y)]\\omega(y)~dy+Q_1^r(x, t)\\\\\nQ_2(x, t)&=c_0\\int_{[0, 1]^2}[G_2^1(x,y)+G_2^2(x,y)]\\omega(y)~dy+Q_2^r(x, t)\n\\end{split}\n\\end{align} \nwith kernels\n\\begin{align*}\n&G_1^1(x,y)=\\frac{y_1(y_2-x_2)}{|y-x|^2|y-\\til x|^2},\n&G_1^2(x,y)=\\frac{y_1(y_2+x_2)}{|y+x|^2|y-\\bar x|^2},\\\\\n&G_2^1(x,y)=\\frac{y_2(y_1+x_1)}{|y+x|^2|y-\\til x|^2},\n&G_2^2(x,y)=\\frac{y_2(y_1-x_1)}{|y-x|^2|y-\\bar x|^2},\n\\end{align*}\nwhere $c_0$ denotes a suitable positive normalization constant.\nThe expression $Q_1^r$ is given by the following (limit in the mean) integral\n\\begin{align*}\nc_0 \\int_{\\ensuremath{\\mathbb{R}}^2 \\setminus {[0, 1]^2}}[G_1^1(x,y)+G_1^2(x,y)]\\omega(y)~dy,\n\\end{align*}\nwith a similar formula holding for $Q_2^r$.\n\\subsection{Convention for estimates.} The notation $f \\lesssim g$ means\n$$\nf \\le C g,\n$$\nwhere $C$ may depend on $\\alpha, \\beta, \\|\\omega\\|_\\infty$ and on universal constants,\ne.g. geometrical characteristics of the domain $\\mathbb{T}$. $C$ does not depend on $\\delta_1, \\delta_2, \\delta_3$.\nWhen using this notation, we shall always imply that $C<\\infty$ for all $\\alpha\\in (0, \\frac{1}{4})$.\n\n\\section{Potential theory of $Q_1, Q_2$} \\label{PotentialTheory}\n\n\\subsection{Sufficient conditions for hyperbolic flow}\n\nWe will be working with boxes of the form\n\\begin{align}\nD&=(0,\\delta_1)\\times(0,\\delta_2)\\\\\n\\widehat{D}&=(0,\\delta_1+\\delta_3)\\times(0,\\delta_2) \\label{defD},\n\\end{align}\nwith the following restriction:\n\\begin{equation}\\label{condDelta}\n0<\\delta_1 < \\delta_2 < \\delta_1+\\delta_3,\n\\end{equation}\nand $\\delta_j$ so small that $\\widehat{D}\\subset [0, 1]^2$.
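The structural form $u_1 = -x_1 Q_1$, $u_2 = x_2 Q_2$ forces $u_1$ to vanish on the $x_2$-axis and $u_2$ on the $x_1$-axis. For a concrete double-odd vorticity this can be checked by computing $u = \\nabla^\\bot(-\\Delta)^{-1}\\omega$ spectrally; the sketch below (an editorial illustration with NumPy, not the kernel representation above; the test function $\\omega = \\sin(\\pi x_1)\\sin(\\pi x_2)$, the sign convention $\\nabla^\\bot = (-\\d_{x_2}, \\d_{x_1})$ and the grid size are assumptions made here) also exhibits the hyperbolic picture $u_1<0$, $u_2>0$ near the origin in the first quadrant:

```python
import numpy as np

N = 64                                 # grid size (arbitrary choice)
x = -1.0 + 2.0 * np.arange(N) / N      # uniform grid on the torus [-1, 1)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# A double-odd, mean-zero test vorticity, nonnegative on [0, 1]^2:
w = np.sin(np.pi * X1) * np.sin(np.pi * X2)

# Spectral inversion: psi = (-Delta)^{-1} w, u = (-d_{x2} psi, d_{x1} psi).
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 / N)
K1, K2 = np.meshgrid(k, k, indexing="ij")
Ksq = K1**2 + K2**2
Ksq[0, 0] = 1.0                        # avoid 0/0; the mean mode is zeroed below
psi_hat = np.fft.fft2(w) / Ksq
psi_hat[0, 0] = 0.0
u1 = np.real(np.fft.ifft2(-1j * K2 * psi_hat))
u2 = np.real(np.fft.ifft2(1j * K1 * psi_hat))

i0 = N // 2                            # index of the grid point x = 0
assert np.max(np.abs(u1[i0, :])) < 1e-12   # u1 = -x1*Q1 vanishes on {x1 = 0}
assert np.max(np.abs(u2[:, i0])) < 1e-12   # u2 =  x2*Q2 vanishes on {x2 = 0}

# Compression in x1 and expansion in x2 near the origin, first quadrant:
j = i0 + 4                             # the grid point (0.125, 0.125)
assert u1[j, j] < 0 and u2[j, j] > 0
```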
We also write\n\\begin{align}\nd(x) = \\delta_2 - x_2\n\\end{align}\nwhich is the distance of the point $x$ to the top of the box. We write $\\boldsymbol{\\delta}=(\\delta_1, \\delta_2), |\\boldsymbol{\\delta}|^2=\\delta_1^2+\\delta_2^2$.\n\nWe define\n\\begin{align}\nM_{D}(t) := \\max_{y\\in D}\\left\\{\\left|y_1^\\alpha \\frac{\\d \\omega}{\\d x_1}(y,t)\\right|, \\left|y_2^\\alpha \\frac{\\d \\omega}{\\d x_2}(y,t)\\right|\\right\\}\n\\end{align}\nand $M_{\\widehat{D}}$ for the analogous quantity, but where the maximum over $D$ is replaced by a maximum over $\\widehat{D}$. \nNote that $M_{D}$ and $M_{\\widehat{D}}$ depend on $\\omega$ and $\\alpha$. \n\n\n\nAs mentioned before, the flow near the origin can be made hyperbolic, with compression \nin the $x_1$-direction and expansion in $x_2$-direction by choosing the initial data such that\n$\\omega_0 \\geq 0$ on ${[0, 1]^2}$ and such that\n\\begin{equation}\\label{mes}\nm := |\\{ x : \\omega_0(x) = \\|\\omega_0\\|_\\infty \\}|\n\\end{equation}\nis sufficiently large. This is a consequence of theorem \\ref{thmLowerboundQ2}.\n\\begin{remark}\nThe periodicity and double-oddness of $\\omega(\\cdot, t)$ imply also the reflection symmetries\n\\begin{align*}\n\\omega(1+x_1, x_2, t) = -\\omega(1-x_1, x_2, t), ~\\omega(x_1, 1+x_2)=-\\omega(x_1, 1-x_2).\n\\end{align*}\nConsequently, the four corner points of $[-1, 1]\\times [-1, 1]$ are also stagnant points\nof the flow, the flow being confined in ${[0, 1]^2}$. Hence $\\omega_0 \\ge 0$ on ${[0, 1]^2}$ implies $\\omega(x, t)\\ge 0$ on ${[0, 1]^2}$ for all times,\na fact we shall use below.\n\\end{remark}\n\n\\begin{thm}\\label{thmLowerboundQ2}\nSuppose $\\omega_0(x)\\ge 0$ on ${[0, 1]^2}$. 
There exist a universal $K>0$ and constants $\\beta_0 > 0$, $A>0$ such that the following estimates hold for all times\n\\begin{align}\\label{ineqthmLowerboundQ2}\n\\begin{split}\nQ_2(x, t) + A M(x, t) |x|^{1-\\alpha} \\geq \\beta_0 \\\\\nQ_1(x, t) + A M(x, t) |x|^{1-\\alpha} \\geq \\beta_0 \n\\end{split}\n\\end{align}\nfor $|x| \\leq K (1-m)$, i.e. the flow is hyperbolic near the origin.\n\\end{thm}\n\nTo prove this, we need the following lemma, which is an adaptation of a result in \\cite{Zlatos}.\n\\begin{lem}\nLet $\\Omega(2x) := [2 x_1, 1]\\times[2 x_2, 1]$. Suppose $\\omega(x)\\ge 0$ for $x\\in {[0, 1]^2}$. Then the estimate\n\\begin{equation}\nQ_i(x) \\geq c_0 \\int_{\\Omega(2x)} \\frac{y_1 y_2}{|y|^4}\\omega(y)~dy - M(x,t) |x|^{1-\\alpha}-C_2 \\|\\omega\\|_{\\infty} \\quad (x\\in D,~i=1,2)\n\\end{equation}\nholds, with universal $C_2 > 0$.\n\\end{lem}\n\n\\begin{proof}\nWe write $G_2 = G_2^1+G_2^2$ and prove the result for $Q_2$. The proof for $Q_1$ is similar. We have\n\\begin{align*}\nQ_2(x) \\geq &c_0 \\int_{\\Omega(2x)} \\frac{y_1 y_2}{|y|^4}\\omega(y)~dy + \\int_{\\Omega(2x)} \\left[G_2^2(x, y) - c_0 \\frac{y_1 y_2}{|y|^4}\\right] \\omega(y)~dy\\\\\n& + \\int_{{[0, 1]^2}\\setminus \\Omega(2x)} G_2^2(x, y) \\omega(y)~dy - C_1 \\|\\omega\\|_{\\infty},\n\\end{align*}\nthrowing away the nonnegative contribution from $G_2^1$ and estimating $Q_2^r$ by $C_1 \\|\\omega\\|_{\\infty}$.\nFirst, note that straightforward calculations and estimates give\n\\begin{equation}\n\\left|G_2(x, y)-c_0 \\frac{y_1 y_2}{|y|^4}\\right|\\lesssim \\frac{(|y-x||y-\\bar x|+|y|^2)(|x|^2+|x||y|)}{|y|^2|y-x|^2|y-\\bar x|^2}.\n\\end{equation}\n$y \\in \\Omega(2 x)$ implies that $|y-x|\\geq \\frac{1}{2}|y|, |y-\\bar x|\\geq \\frac{1}{2}|y|$ so that\n\\begin{equation}\n\\left|G_2(x, y)-c_0 \\frac{y_1 y_2}{|y|^4}\\right|\\lesssim (|y|^{-4}+|y|^{-2})(|x|^2+|x||y|)\n\\end{equation}\nand hence the integral over $\\Omega(2 x)$ is bounded in absolute value by
2|x|}(|y|^{-4}+|y|^{-2})~dy + |x| \\int_{2\\geq |y|\\geq 2|x|}(|y|^{-3}+|y|^{-1})~dy\\\\\n\\leq C|x|^2(|x|^{-2}+|\\log|x||)+C|x|(|x|^{-1}+1)\\leq C.\n\\end{align*}\nFor the estimation of the integral with domain of integration ${[0, 1]^2}\\setminus \\Omega(2x)$, we distinguish two cases. \nThe more difficult case is given by the condition $x_2 \\leq x_1$, and we split the domain of integration up into\nthe three parts $[2 x_1, 1]\\times [0, 2x_2], [0, 2 x_1]\\times[2 x_1, 1]$ and $[0, 2 x_1]\\times [0, 2 x_1]$. \nFor the integral over $[2 x_1, 1]\\times [0, 2x_2]$, estimate $\\omega$ by its $L^\\infty$-norm and in the remaining integral \nwe substitute $y_j = x_j + z_j$. \n\\begin{align*}\n&\\int_{x_1}^1 \\int_{-x_2}^{x_2} \\frac{z_1(x_2+z_2)}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)} ~dz\n\\leq \\int_0^1 \\int_{-x_2}^{x_2} \\frac{2 z_1 x_2}{(z_1^2+z_2^2)(z_1^2+x_2^2)} ~dz_2~dz_1\\\\\n\\,\\,&\\leq C \\int_0^1 \\frac{z_1 x_2}{(z_1^2+x_2^2)} \\frac{1}{z_1}\\arctan(x_2\/z_1)~ d z_1 \\leq C \\arctan(1\/x_2) \\leq C.\n\\end{align*}\n The same strategy for the integral over $[0, 2 x_1]\\times[2 x_1, 1]$ leads to\n\\begin{align*}\n\\int\\limits_{-x_1}^{x_1}\\int\\limits_{x_1}^{1-x_2} \\frac{|z_1|(x_2+z_2)}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)}~dz\n&\\leq 2 \\int\\limits_{0}^{x_1}\\int\\limits_{x_1}^{1} \\frac{z_1(x_2+z_2)}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)}~dz.\n\\end{align*}\nNoting\n\\begin{align*}\n&\\int\\limits_{0}^{x_1}\\int\\limits_{x_1}^{1} \\frac{z_1 x_2}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)}~dz\n\\leq \\int\\limits_{0}^{x_1}\\int\\limits_{x_1}^{1} \\frac{z_1 x_2}{(z_1^2+z_2^2)(z_1^2+x_2^2)}~dz_2 dz_1\\\\\n\\,\\,&\\leq C \\int\\limits_{0}^{x_1} \\frac{x_2}{z_1^2+x_2^2}~d z_1 \\leq C \\arctan(x_1\/x_2)\\leq C\n\\end{align*}\nand \n\\begin{align*}\n\\int\\limits_{0}^{x_1}\\int\\limits_{x_1}^{1} \\frac{z_1 z_2}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)}~dz &\\leq\n\\int\\limits_{0}^{x_1}\\int\\limits_{x_1}^{1} \\frac{z_1 z_2}{(z_1^2+z_2^2)^2}~dz \\leq C\n\\end{align*}\nwe can estimate the integral in question by $C\\|\\omega\\|_\\infty$.\n\nIt remains to estimate the integral over $[0, 2 x_1]\\times [0, 2 x_1]$. First note that\n\\begin{align*}\n&\\int_{[0, 2 x_1]\\times [0, 2 x_1]} G_2^2(x, y)\\omega(y)~d y\\ge\\int_{[0, x_1]\\times [0, 2 x_1]} G_2^2(x, y)\\omega(y)~d y\n\\end{align*}\nsince $\\omega \\ge 0$ and $G_2^2(x, y) \\ge 0$ if $y_1 \\le x_1$.\nWe will estimate the integral over $[0, x_1]\\times [0, 2 x_1]$ in absolute value, splitting it again into\n$[0, x_1]\\times [0, x_1]$ and $[0, x_1]\\times [x_1, 2 x_1]$. First, writing $M = M(x,t)$,\n\\begin{align*}\n&\\left|\\int_{[0, x_1]\\times [0, x_1]} G_2^2(x, y)\\omega(y)~d y \\right|\\leq \\int_{[0, x_1]\\times [0, x_1]} \\frac{M y_2^{1-\\alpha}}{|y-x|\n|y-\\bar x|}~dy\\\\\n \\leq &\\int_{[0, x_1]\\times [0, x_1]} \\frac{M |y-\\bar x|^{1-\\alpha}}{|y-x|\n|y-\\bar x|}~dy \\le \\int_{[0, x_1]\\times [0, x_1]} M |y-x|^{-1-\\alpha}~dy\\\\\n\\leq &\\int_{B(x, r)} M |y-x|^{-1-\\alpha}~dy \\lesssim M r^{1-\\alpha}\n\\end{align*}\nwhere $B(x, r)$ is the smallest ball around $x$ containing $[0, 2 x_1]\\times [0, 2 x_1]$. Clearly $r \\lesssim x_1$,\nso the integral is $\\lesssim M x_1^{1-\\alpha}$.\n\nNext, for the remaining part over $[0, x_1]\\times [x_1, 2 x_1]$, we estimate $\\omega$ by $\\|\\omega\\|_{\\infty}$. 
We need to bound\n\\begin{align*}\n&\\int_{[0, x_1]\\times [x_1, 2 x_1]} |G_2^2(x, y)| dy = \\int_{-x_1}^0 \\int_{x_1-x_2}^{2 x_1-x_2} \\frac{|z_1| z_2}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)}dz\\\\\n&+ \\int_{-x_1}^0 \\int_{x_1-x_2}^{2 x_1-x_2} \\frac{|z_1| x_2}{(z_1^2+z_2^2)(z_1^2+(2x_2+z_2)^2)} dz.\n\\end{align*}\nFor the integral containing $|z_1| z_2$ we distinguish two cases.\nIn case $x_2 \\le \\frac{1}{2} x_1$, we use $z_1^2+(2x_2+z_2)^2\\ge z_1^2+z_2^2$, leading to a bound of the form $\\log(1+\\frac{x_1}{x_1-x_2})\\le C$.\nIf $x_2 \\ge \\frac{1}{2} x_1$, we use $z_1^2+(2x_2+z_2)^2\\ge (x_2+z_2)^2$ in the denominator and $z_2 \\le (z_2+x_2)$\nin the numerator and get the bound $C x_2^{-1} x_1 \\le C$. The integral with $|z_1|x_2$ is estimated as before.\n\nIf $x_1\\leq x_2$, we split ${[0, 1]^2} \\setminus \\Omega(2 x)$ into $[0, 1]\\times [0, 2 x_2], [0, 2 x_1]\\times[2 x_2, 1]$\nand perform similar calculations. In this case, we do not need to use $M(x,t)$. \n\\end{proof}\n\\begin{proof}(of Theorem \\ref{thmLowerboundQ2})\nFollowing \\cite{KiselevSverak}, \\cite{Zlatos} we observe that the integral $\\int_{\\Omega(2x)} y_1 y_2|y|^{-4}\\omega(y, t)~dy$ can be bounded\nfrom below by an expression of the form $C_1\\|\\omega\\|_\\infty |\\log(1-m)|$, for $|x|\\leq K (1-m)$,\nwith universal $C_1, K>0$. Hence we obtain \\eqref{ineqthmLowerboundQ2}.\n\\end{proof}\n\n\\subsection{Upper bounds}\nThe following lemma gives an upper bound on $Q_1, Q_2$ in terms of $M_{\\widehat{D}}(t)$. Recall that $d(x)$ is the \ndistance to the top of the box, so the upper bound given blows up close to the top of the box. 
This is, however,\nnot a problem, since we mostly have to integrate $Q_1, Q_2$ along particle trajectories (see the proof of\nTheorem \\ref{mainTechnicalThm}).\n\\begin{lem}\\label{lemUpperboundQ}\nFor $x\\in D$, \n\\begin{align}\nQ_i(x) \\lesssim \\|\\omega\\|_{\\infty}(1+|\\log d(x)|) + M_{\\widehat{D}} (t) |\\boldsymbol{\\delta}|^{1-\\alpha}\\quad (i=1, 2)\n\\end{align}\n\\end{lem}\n\\begin{proof}\nWe bound $Q_2$; the calculation for $Q_1$ is analogous. First we note\n$$\n|G_2^k| \\lesssim |x-y|^{-1}|y-\\bar x|^{-1} \\quad (k=1, 2)\n$$\nfor $y, x\\in {[0, 1]^2}$. We write $M=M_{\\widehat{D}}(t)$, and split the integral in question\ninto two parts:\n\\begin{align*}\n\\int_{{[0, 1]^2}} G_2^k (x, y) \\omega(y)~dy = \\int_{\\widehat{D}}\\ldots + \\int_{{[0, 1]^2} \\setminus \\widehat{D}}\\ldots\n\\end{align*} \nSince $|\\omega(y)|\\lesssim M y_2^{1-\\alpha}$,\n\\begin{align*}\n&\\left|\\int_{\\widehat{D}} G_2^k(x, y)\\omega(y)~dy \\right|\\lesssim M \\int_{\\widehat{D}} y_2^{1-\\alpha} |x-y|^{-1}|y-\\bar x|^{-1}~dy \\\\\n&\\,\\,\\lesssim M \\int_{\\widehat{D}} |x-y|^{-1}|y-\\bar x|^{-\\alpha} ~dy \\lesssim M \\int_{\\widehat{D}} |x-y|^{-1-\\alpha} ~dy\\\\\n&\\,\\,\\leq M \\int_{B(x, r)} |x-y|^{-1-\\alpha} ~dy \\lesssim M r^{1-\\alpha} \n\\end{align*}\nwhere $B(x, r)$ is the smallest ball centered at $x$ containing $\\widehat{D}$. 
Obviously $r \\lesssim |\\boldsymbol{\\delta}|$, so the part \nover $\\widehat{D}$ is dominated by $M |\\boldsymbol{\\delta}|^{1-\\alpha}$.\n\nFor the part over ${[0, 1]^2} \\setminus \\widehat{D}$, we have\n\\begin{align*}\n&\\left|\\int_{{[0, 1]^2}\\setminus \\widehat{D}} G_2^k (x, y) \\omega(y)~dy\\right| \\lesssim \\|\\omega\\|_{\\infty} \\int_{{[0, 1]^2}\\setminus \\widehat{D}} |y - x|^{-2} dy \\\\\n&\\,\\,\\lesssim \\|\\omega\\|_{\\infty} \\int\\limits_{B(x, 10)\\setminus B(x, d(x))} |y - x|^{-2} dy \n\\lesssim \\|\\omega\\|_{\\infty}|\\log d(x)|\n\\end{align*} \nwhere we have used $|G_2^k| \\lesssim |x-y|^{-1}|y-\\bar x|^{-1}$ and $|y - \\bar x|\\geq |y - x|$ for $x, y\\in {[0, 1]^2}$. Note also that for $x\\in D$,\n${[0, 1]^2} \\setminus \\widehat{D}$ is completely contained in $B(x, 10)\\setminus B(x, d(x))$ because of \\eqref{condDelta}.\n\\end{proof}\nThe following important lemma allows us to control the coefficients of the ODE system \\eqref{ode2} in terms of\nthe quantity $M_{\\widehat{D}}$. 
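The two model radial integrals used in the proof of Lemma \\ref{lemUpperboundQ} (the integrable singularity $\\int_{B(x, r)} |x-y|^{-1-\\alpha}~dy \\lesssim r^{1-\\alpha}$ and the logarithmic bound $\\int_{B(x, 10)\\setminus B(x, d)} |y-x|^{-2}~dy \\lesssim |\\log d|$) can be sanity-checked numerically against their closed forms in polar coordinates. A minimal sketch; the radii, the value $\\alpha = 1\/4$, and the tolerances are illustrative choices, not taken from the text:

```python
import math

def radial_integral(power, r0, r1, n=200000):
    """2D integral of |y|^power over the annulus r0 <= |y| <= r1,
    i.e. 2*pi * int_{r0}^{r1} s^(power+1) ds, via the midpoint rule."""
    h = (r1 - r0) / n
    total = 0.0
    for i in range(n):
        s = r0 + (i + 0.5) * h
        total += s ** (power + 1) * h
    return 2.0 * math.pi * total

alpha, r, d = 0.25, 0.1, 1e-3

# int_{B(0,r)} |y|^{-1-alpha} dy = 2*pi*r^{1-alpha}/(1-alpha): the
# singularity is integrable and the value scales like r^{1-alpha}.
I1 = radial_integral(-1.0 - alpha, 0.0, r)
assert abs(I1 - 2 * math.pi * r ** (1 - alpha) / (1 - alpha)) < 1e-3 * I1

# int_{B(0,10)\B(0,d)} |y|^{-2} dy = 2*pi*log(10/d): only logarithmic
# growth as the inner radius d tends to 0.
I2 = radial_integral(-2.0, d, 10.0)
assert abs(I2 - 2 * math.pi * math.log(10.0 / d)) < 1e-3 * I2
```

Both checks compare a midpoint-rule quadrature with the exact values $2\\pi r^{1-\\alpha}\/(1-\\alpha)$ and $2\\pi\\log(10\/d)$, confirming the $r^{1-\\alpha}$ and $|\\log d|$ behaviour quoted above.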
Recall that $d(x)$ is the distance from $x\\in D$ to the top of the box.\n\\begin{lem}\\label{lemFor_c_b}\nWe have the following estimates for $x\\in D$:\n\\begin{align}\n|c(x)|& \\le C(\\alpha) M_{\\widehat{D}}x_2^{1-\\alpha} \n+ C(\\alpha, \\gamma_1, \\gamma_2) x_2^{1-\\gamma_1-\\gamma_2} x_1^{\\gamma_2} d(x)^{-1+\\gamma_1+\\gamma_2},\\label{estc}\\\\\n|b(x)| &\\le C(\\alpha) M_{\\widehat{D}}x_1^{1-\\alpha}(1+|\\log d(x)|) + C(\\alpha, \\gamma) x_1^{1-\\gamma} d(x)^{-1+\\gamma},\\label{estb}\\\\\n\\left|x_i \\frac{\\d Q_i(x)}{\\d x_i} \\right|&\\le C(\\alpha) M_{\\widehat{D}} x_i^{1-\\alpha}(1+|\\log d(x)|) + C(\\alpha, \\gamma) x_i^{1-\\gamma} d(x)^{-1+\\gamma}\\label{estxdQ}\n\\end{align}\nwhere $\\gamma, \\gamma_1, \\gamma_2\\in (0, 1), \\gamma_1+\\gamma_2 < 1$, $i=1, 2$ and the constants do not depend on $\\delta_1, \\delta_2, \\delta_3$.\n\\end{lem}\n\\begin{proof}\nThis is a consequence of Proposition \\ref{prop:A4} (see appendix) and the definition of $c, b, x_i \\d_{x_i} Q_i(x)$. Note that we have\n\\begin{align*}\n\\left|\\frac{\\d Q_i^r(x)}{\\d x_j}\\right|\\leq C \\|\\omega\\|_{\\infty} \n\\end{align*}\nfor $x\\in D$.\n\\end{proof}\n\n\\begin{remark}\\label{remAlpha}\nIt is not possible to set $\\alpha = 0$ in the estimates of Lemma \\ref{lemFor_c_b}, i.e. if we replace $M_{\\widehat{D}}$ by $\\|\\nabla \\omega\\|_{D, \\infty}$, then\ne.g. 
the first term on the right-hand side of the estimate for $c$ would contain a logarithmic expression\n$$\n\\|\\nabla \\omega\\|_{D, \\infty} x_2 |\\log x_2|.\n$$ \nThis is the main reason why we do not adopt the stronger feeding condition \\eqref{eqfeeding2}, since we do not know how to deal with\nthe logarithmic terms in our main argument.\n\\end{remark}\n\n\n\\section{Perturbation theory for an ordinary differential equation}\nIn this section we derive estimates for an ODE system of the form\n\\begin{align}\n\\dot\\boldsymbol{\\xi}(t) = \\left(\\begin{array}{cc} a(t) & c(t) \\\\ b(t) & -a(t)\\end{array}\\right) \\boldsymbol{\\xi}(t)\n\\end{align}\nwhere $a, b, c$ are given smooth functions on a time interval $[T_0, T_e]$. For simplicity of notation, we set $T_0=0$.\nThis part is independent of the actual structure of $a, b, c$ from the ODE \\eqref{ode2}. \n\nThe idea will be to perturb from the system with $c\\equiv 0$. We write\n\\begin{align}\n P(t):=\\left(\\begin{array}{cc} a(t) & 0 \\\\ b(t) & -a(t)\\end{array}\\right), ~~S(t):=\\left(\\begin{array}{cc} 0 & c(t) \\\\ 0 & 0\\end{array}\\right)\n\\end{align}\n\\begin{definition}\nLet the integral operators $\\widehat{P}, \\widehat{S}$ be given by\n\\begin{align}\\label{ode_def1}\n(\\widehat{P} \\boldsymbol{\\xi})(t) = \\int_0^t P(\\tau)\\boldsymbol{\\xi}(\\tau)~d\\tau, ~~(\\widehat{S} \\boldsymbol{\\xi})(t)=\\int_0^t S(\\tau) \\boldsymbol{\\xi}(\\tau) ~d\\tau\n\\end{align}\n\\end{definition}\nRecall that $A(t)=\\int_0^t a(s) ds$. 
It is convenient to introduce the following operators: \n\\begin{equation}\\label{defF}\n\\begin{split}\n(F^+ g)(t) &= g(t) + e^A \\int_0^t a e^{-A} g(s) ds\\\\\n(F^- g)(t) &= g(t) - e^{-A} \\int_0^t a e^{A} g(s) ds.\n\\end{split}\n\\end{equation}\n\\begin{prop}\n\\begin{enumerate}\n\\item[(a)] The operator $(I-\\widehat{P})$ is bounded and bijective as an operator from $C[0, T]$ into $C[0, T]$.\n\\item[(b)] Consider the Volterra integral equation \n\\begin{align}\\label{ode_eq1}\n\\phi = \\widehat{P} \\phi + g\n\\end{align}\nwith given $g\\in C([0, T], \\ensuremath{\\mathbb{R}}^2)$. The solution $\\phi = (I-\\widehat{P})^{-1} g$ is given by\n\\begin{equation}\\label{ode_eq3}\n\\begin{split}\n\\phi_1(t) &= F^+ g_1\\\\\n\\phi_2(t) &= F^- g_2 + e^{-A} \\int_0^t e^{A} b (F^+ g_1)(s) ds\n\\end{split}\n\\end{equation}\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nThe statement (a) is standard. Statement\n(b) is an easy calculation, noting that \\eqref{ode_eq1} is equivalent to the ODE system $\\dot \\boldsymbol{\\xi} = P\\boldsymbol{\\xi} + \\dot g$\nfor $g\\in C^1$.\n\\end{proof}\nThe initial value problem for the system \n\\begin{align*}\n\\dot \\boldsymbol{\\xi} = (P+S)\\boldsymbol{\\xi}, \\,\\, \\boldsymbol{\\xi}(0)~\\text{given}\n\\end{align*}\nis equivalent to the Volterra integral equation\n\\begin{align}\\label{ode_perturbedproblem}\n\\boldsymbol{\\xi} = (\\widehat{P}+\\widehat{S})\\boldsymbol{\\xi} + \\boldsymbol{\\xi}(0).\n\\end{align}\nWe can write $\\boldsymbol{\\xi} = (I-\\widehat{P})^{-1} \\mathbf{w}$ for some $\\mathbf{w}\\in C[0, T]$. This leads to\n\\begin{align}\\label{ode_eq2}\n\\mathbf{w} = \\widehat{S} (I-\\widehat{P})^{-1}\\mathbf{w} + \\boldsymbol{\\xi}(0).\n\\end{align}\nThe following proposition gives a representation of the solution $\\boldsymbol{\\xi}$ in terms of $\\mathbf{w}$:\n\\begin{prop}\nLet $\\boldsymbol{\\xi}\\in C[0, T]$ solve the integral equation \\eqref{ode_perturbedproblem} with given $\\boldsymbol{\\xi}(0)$. 
Then\n\\begin{equation}\\label{relWXi}\n\\begin{split}\nw_1(t) &= \\xi_1(0)+\\xi_2(0) \\int_0^t e^{-A} c ds + \\int_0^t e^{-A} c \\int_0^s e^{A} b (F^+ w_1)(\\tau) d\\tau,\\\\\nw_2(t) &= \\xi_2(0),\\\\\n\\xi_1(t) &= (F^+ w_1)(t), ~~\\xi_2(t) = \\xi_2(0)e^{-A} + e^{-A} \\int_0^t e^{A} b (F^+ w_1) ds.\n\\end{split}\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nFirst note that \n\\begin{align}\\label{ode_eq6}\n\\widehat{S} (I-\\widehat{P})^{-1} \\mathbf{w} = \\widehat{S} \\boldsymbol{\\xi} = (\\int_0^t c(s) \\xi_2~ ds, 0)\n\\end{align}\nand hence by \\eqref{ode_eq2}, $w_2(t) = \\xi_2(0)$ (the second line of \\eqref{relWXi}). \nRecalling $\\boldsymbol{\\xi} = (I-\\widehat{P})^{-1} \\mathbf{w}$ and using \\eqref{ode_eq3}, we get\nthe following relation:\n\\begin{align}\n\\notag\\xi_2(t) = \\xi_2(0)[1 - e^{-A} \\int_0^t a e^{A} ds] +e^{-A} \\left[ \\int_0^t e^{A} b w_1(s) ds + \\int_0^t e^{2 A} b \\int_0^s a e^{-A} w_1 d\\tau \\right]&\\\\\n\\label{ode_eq4}=e^{-A} \\left[ \\xi_2(0) + \\int_0^t e^{A} b w_1(s) ds + \\int_0^t e^{2 A} b \\int_0^s a e^{-A} w_1 d\\tau \\right]&\n\\end{align}\n\\eqref{ode_eq6} and \\eqref{ode_eq2} together give\n$$\nw_1(t) = \\xi_1(0)+\\int_0^t c(s) \\xi_2(s) ds.\n$$\nBy inserting \\eqref{ode_eq4}, we get the relation\n\\begin{align*}\nw_1(t) &= \\xi_1(0)+\\xi_2(0)\\int_0^t e^{-A} c\\\\\n&\\,\\,+ \\int_0^t e^{-A} c \\int_0^s e^{A} b w_1 + \\int_0^t e^{-A} c \\int_0^s e^{2 A} b \\int_0^{\\tau} a e^{-A} w_1,\n\\end{align*}\nwhich is the first line of \\eqref{relWXi}.\n\\end{proof}\nWe will need the following Gronwall-type inequality by Willett \\cite{Wilett}:\n\\begin{lem}\\label{lem_Willet}\nLet $z, f_0, f_1, f_2, v_1, v_2$ be nonnegative, integrable functions on $[0, T]$ and suppose $z$ satisfies the following\nintegral inequality:\n\\begin{equation}\\label{ode_ineq1}\nz(t) \\leq f_0(t) + f_1(t) \\int_0^t v_1 z + f_2(t) \\int_0^t v_2 z. 
\n\\end{equation}\nThen $z \\leq H f_0 $, where $H$ is the following functional \n\\begin{align}\\label{eq_H}\n(H f_0)(t) = & f_0 + f_1 \\expo{\\int_0^t v_1 f_1}\\int_0^t v_1 f_0\\\\\n&+\\left[f_2(t)+f_1(t)\\expo{\\int_0^t v_1 f_1}\\int_0^t{v_1 f_2}\\right]\\notag\\\\\n&\\times \\expo{\\int_0^t v_2\\left[f_2(s)+f_1(s)\\expo{\\int_0^s v_1 f_1}\\int_0^s{v_1 f_2}\\right]}\\notag\\\\\n&\\times \\int_0^t v_2 \\left[f_0(s)+f_1(s)\\expo{\\int_0^s v_1 f_1}\\int_0^s{v_1 f_0}\\right]\\notag\n\\end{align}\nWe write $H f_0$ to emphasize the linear dependence on $f_0$.\n\\end{lem}\n\\begin{proof}\nWe give the proof for reference. Recall first the following basic form of Gronwall's integral inequality: suppose\n$z, r, f_1, v_1$ are nonnegative functions on $[0, T]$ satisfying the integral inequality\n\\begin{equation}\\label{ode_ineq2}\nz(t) \\leq r(t) + f_1(t)\\int_0^t v_1 z,\n\\end{equation}\nthen\n\\begin{equation}\\label{ode_eq5}\nz(t) \\leq r(t)+f_1(t) \\expo{\\int_0^t v_1 f_1}\\int_0^t v_1 r~~\\quad (t\\in [0, T]).\n\\end{equation}\nSet $r = f_0 + f_2 \\int_0^t v_2 z$ and apply \\eqref{ode_eq5}. This leads to the following bound for $z$:\n\\begin{align}\\label{ode_ineq3}\nz(t) \\leq f_0 + f_2 \\int_0^t v_2 z +f_1(t) \\expo{\\int_0^t v_1 f_1}\\int_0^t v_1 \\left[f_0 + f_2 \\int_0^s v_2 z\\right].\n\\end{align}\nNote that \n\\begin{align*}\n\\int_0^t v_1 f_2 \\int_0^s v_2 z &= - \\int_0^t \\left(\\int_0^s v_1 f_2 \\right)v_2 z + \\left(\\int_0^t v_1 f_2\\right)\\int_0^t v_2 z \\\\\n&\\leq \\left(\\int_0^t v_1 f_2\\right)\\int_0^t v_2 z\n\\end{align*}\nsince $v_1, f_2, z, v_2 \\geq 0$. Thus \\eqref{ode_ineq3} implies\n\\begin{align*}\nz(t) &\\leq f_0 + f_1 \\expo{\\int_0^t v_1 f_1} \\int_0^t v_1 f_0 \\\\\n& +\\left[f_2(t)+ f_1(t) \\expo{\\int_0^t v_1 f_1}\\left(\\int_0^t v_1 f_2\\right)\\right] \\int_0^t v_2 z. 
\\notag\n\\end{align*}\nApplying \\eqref{ode_eq5} again, this time with $r = f_0 + f_1 \\expo{\\int_0^t v_1 f_1}\\int_0^t v_1 f_0$, yields\nthe result \\eqref{eq_H}.\n\\end{proof}\n\\begin{lem}\nLet $\\boldsymbol{\\xi}\\in C[0, T]$ solve the integral equation \\eqref{ode_perturbedproblem} with given $\\boldsymbol{\\xi}(0)$. Then\nthe estimates \n\\begin{equation}\\label{ode_ineq5}\n\\begin{split}\n|\\xi_1(t)| &\\leq (H f_0)(t)+ e^{A} \\int_0^t |a| e^{-A} H f_0\\\\\n|\\xi_2(t)| &\\leq |e^{-A} \\xi_2(0)| + e^{-A} \\left[ \\int_0^t e^{A} |b| H f_0 + \\int_0^t e^{2 A} |b| \\int_0^s |a| e^{-A} H f_0\\right],\n\\end{split}\n\\end{equation}\nhold, where $H$ is the functional \\eqref{eq_H} and where\n\\begin{align*}\n&f_1(t) = \\int_0^t e^{-A} |c| \\int_0^s e^{2 A} |b|,\n&f_2(t)= \\int_0^t e^{-A} |c|,\\\\\n&f_0(t) = |\\xi_1(0)|+f_2(t) |\\xi_2(0)|,\n&v_1(t) = |a(t)| e^{-A},\\\\\n&v_2(t) = |b(t)| e^{A}.\n\\end{align*}\n\\end{lem}\n\\begin{proof}\nUsing obvious estimations, we get from \\eqref{relWXi} the following integral inequality for $|w_1|$:\n\\begin{align*}\n|w_1(t)|&\\leq |\\xi_1(0)| + |\\xi_2(0)| \\int_0^t e^{-A} |c|+\\int_0^t e^{-A} |c| ds ~ \\int_0^t e^{A} |b| |w_1| ds\\\\\n&+ \\int_0^t e^{-A} |c| \\int_0^s e^{2 A} |b| d\\tau ds~ \\int_0^t |a|e^{-A} |w_1|\\\\\n&= f_0(t) + f_1(t) \\int_0^t v_1 |w_1|ds + f_2(t) \\int_0^t v_2 |w_1|ds, \n\\end{align*}\nwhere the expressions $f_0, f_1, f_2, v_1, v_2$ are given as in the statement of the lemma.\nNow using lemma \\ref{lem_Willet}, we obtain $|w_1(t)|\\leq H f_0$ on $[0, T]$. 
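As an aside (a sketch, not part of the argument), the Gronwall-type bound of Lemma \\ref{lem_Willet} can be checked numerically on the extremal case where \\eqref{ode_ineq1} holds with equality for constant data. All constants below are illustrative choices: with $f_0 = 1$, $f_1 = 0.5$, $f_2 = 0.3$, $v_1 = v_2 = 1$ on $[0, 1]$, the equality case is $z(t) = e^{0.8 t}$, and one verifies $z \\le H f_0$ pointwise:

```python
import math

# Grid on [0, 1].
N = 2000
h = 1.0 / N
ts = [i * h for i in range(N + 1)]

def cumtrapz(y):
    """Cumulative trapezoidal integral of the sampled function y over ts."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * h)
    return out

# Constant data (illustrative): f0 = 1, f1 = 0.5, f2 = 0.3, v1 = v2 = 1.
f0 = [1.0] * (N + 1)
f1 = [0.5] * (N + 1)
f2 = [0.3] * (N + 1)
v1 = [1.0] * (N + 1)
v2 = [1.0] * (N + 1)

# Equality case of the integral inequality: z = 1 + 0.8 * int_0^t z,
# i.e. z(t) = exp(0.8 t).
z = [math.exp(0.8 * s) for s in ts]

# Assemble H f0 term by term, following (eq_H).
E1 = [math.exp(x) for x in cumtrapz([v1[i] * f1[i] for i in range(N + 1)])]
I10 = cumtrapz([v1[i] * f0[i] for i in range(N + 1)])
I12 = cumtrapz([v1[i] * f2[i] for i in range(N + 1)])
term1 = [f0[i] + f1[i] * E1[i] * I10[i] for i in range(N + 1)]
bracket = [f2[i] + f1[i] * E1[i] * I12[i] for i in range(N + 1)]
expo = [math.exp(x) for x in cumtrapz([v2[i] * bracket[i] for i in range(N + 1)])]
I2t = cumtrapz([v2[i] * term1[i] for i in range(N + 1)])
Hf0 = [term1[i] + bracket[i] * expo[i] * I2t[i] for i in range(N + 1)]

# The Gronwall-type conclusion z <= H f0 holds pointwise on the grid.
assert all(z[i] <= Hf0[i] + 1e-6 for i in range(N + 1))
```

The check is the sharpest possible one, since any solution of the inequality \\eqref{ode_ineq1} is dominated by the solution of the corresponding integral equation.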
The inequalities \n\\eqref{ode_ineq5} follow from $\\boldsymbol{\\xi} = (I-\\widehat{P})^{-1} \\mathbf{w}$ and the formulas \\eqref{ode_eq3}.\n\\end{proof}\n\n\\section{Main argument}\n\n\\subsection{The main technical result}\nIn order to formulate our main technical result, we introduce a notion of {\\em harmless nonlinear function}.\n\n\\begin{definition}\nA function $\\mathcal{N} = \\mathcal{N}(R, \\beta, \\alpha, \\boldsymbol{\\delta}, M)$ where all arguments are nonnegative numbers\n is a {\\em harmless nonlinear function} if for fixed $\\alpha\\in (0, 1), \\beta>0, \\delta_3 > 0$ the following holds: \nfor any given $R > 0$, there exist $\\bar \\delta_2(R) > 0$ and a number $\\bar \\delta_1=\\bar \\delta_1(R, \\delta_2)$\nsuch that for all $\\delta_2 \\leq \\bar \\delta_2, \\delta_1 \\leq \\bar \\delta_1(\\delta_2)$ the inequality\n\\begin{align}\n\\mathcal{N}(R, \\beta, \\alpha, \\boldsymbol{\\delta}, R) < R\n\\end{align} \nholds.\n\\end{definition}\n\nRecall the box $\\widehat{D}$ is said to satisfy the conditions of {\\em controlled feeding} if there is an $R \\geq 0$ with\n\\begin{align}\n|\\d_{x_1}\\omega(x, t)|\\leq R x_2^{1-\\alpha}, ~~|\\d_{x_2}\\omega(x, t)|\\leq R \\quad (x\\in \\widehat{D}\\setminus D)\n\\end{align} \nfor all times $t \\geq 0$. $R$ is called the \\emph{feeding parameter}. For convenience, we introduce the following definition.\n\\begin{definition}\nLet $T > 0, \\beta > 0$. We say that the flow is $\\beta$-hyperbolic in the box $D$ on $[0, T]$ if \n\\begin{equation}\nQ_i(x, t) \\geq \\beta \\quad (x\\in D, \\,t\\in [0, T], i=1, 2).\n\\end{equation}\n\\end{definition}\n\\begin{thm}\\label{mainTechnicalThm}\nLet $0 < \\alpha < 1\/4$. 
There exists a harmless nonlinear function $\\mathcal{N} = \\mathcal{N}(R, \\beta, \\alpha, \\boldsymbol{\\delta}, M)$ with the following properties.\nIf $\\omega$ is a solution of the Euler equation, $\\widehat{D}$ a box defined by \\eqref{defD} with parameters $\\delta_1, \\delta_2, \\delta_3 > 0$\nsatisfying \\eqref{condDelta} and $T > 0$ is such that \n\\begin{enumerate}\n\\item[(i)] the flow is $\\beta$-hyperbolic in the box $D$ on the time interval $[0, T]$, \n\\item[(ii)] the box $\\widehat{D}$ satisfies the conditions of {\\em controlled feeding} with parameter $R>0$,\n\\item[(iii)] for the initial data,\n\\begin{equation}\nM_{D}(0) < R,~~\\left|\\frac{\\d \\omega_0}{\\d x_1}(x)\\right| \\le R x_2^{1-\\alpha},~~\\left|\\frac{\\d \\omega_0}{\\d x_2}(x)\\right|\\leq R\\qquad (x \\in D),\n\\end{equation}\n\\item[(iv)] there exists a number $K$ such that\n$$\nM_{D}(t) \\leq K \\quad (t\\in [0, T]), \n$$\n\\end{enumerate}\nthen \n\\begin{equation}\nM_{D}(t) \\leq \\mathcal{N}(R, \\beta, \\alpha, \\boldsymbol{\\delta}, K)\n\\end{equation}\nholds.\n\\end{thm}\n\\subsection{Estimates along particle trajectories}\nWe now begin the proof of our main technical result, Theorem \\ref{mainTechnicalThm}. To this end, let $\\omega$ be a given double-odd solution \nof the Euler equation that is in $C^1([0, \\infty), C^2(\\mathbb{T}))$. 
\nMoreover, let $\\widehat{D}$ be a box depending on the parameters $\\delta_1, \\delta_2, \\delta_3 > 0$ satisfying the conditions \\eqref{condDelta}.\n\nSuppose also, for the remainder of this section, that (i)-(iv) from Theorem \\ref{mainTechnicalThm} are satisfied.\nFor abbreviation, we write in the following\n\\begin{align*}\nM := \\max\\{K, R\\}.\n\\end{align*}\nWe observe the following important fact: since $\\delta_1, \\delta_2, \\delta_3 \\leq 1$,\n\\begin{equation}\\label{ineq_M}\nM_{\\widehat{D}}(t) \\leq M\n\\end{equation} \nholds.\n\nWe consider associated particle trajectories, which are the solutions of \n\\begin{equation}\\label{partODE}\n\\dot X_1 = - X_1 Q_1, \\,\\,\\,\\dot X_2 = X_2 Q_2.\n\\end{equation}\nMore precisely, we define the particle trajectories as follows: for any $(x_0, t_0)\\in \\ba D\\times [0, \\infty)$ we take the maximal\nsolution $t\\mapsto \\mathbf{X}(t)$ of \\eqref{partODE} which passes through $(x_0, t_0)$ and lies in $\\ba D$. \n$\\mathbf{X}$ is defined on an interval $[T_0, T_e]$ such that \n\\begin{enumerate}\n\\item[(i)] $\\mathbf{X}(t)\\in \\ba D$ for all $T_0 \\le t \\le T_e$,\n\\item[(ii)] either $T_0 = 0$ or $T_0 > 0$, in which case necessarily $\\mathbf{X}(T_0)\\in \\d D$,\n\\item[(iii)] $\\mathbf{X}(T_e) \\in \\d D$.\n\\end{enumerate}\nObserve that $\\mathbf{X}$ is given by\n\\begin{align}\\label{X2}\n\\begin{split}\nX_1(t) &= X_1(T_0) \\exp\\left(- \\int_{T_0}^t Q_1(\\mathbf{X}(s), s)~ds\\right) \\\\\nX_2(t) &= X_2(T_0) \\exp\\left(\\int_{T_0}^t Q_2(\\mathbf{X}(s), s)~ds\\right).\n\\end{split}\n\\end{align}\nWe call $T_0$ the entry time and $T_e$ the exit time of a particle trajectory. \n$T_0=0$ if the particle starts in $\\ba D$ for $t=0$.\n\nThe next proposition gives an upper bound for the time a particle can spend in the upper\nhalf of the box $D$, provided the flow is $\\beta$-hyperbolic.\n\\begin{prop}\\label{propTime}\nSuppose that the flow is $\\beta$-hyperbolic in the box $D$ on the time interval $[0, T]$. 
Let $\\mathbf{X}$ be a particle trajectory\nwhose entry time $T_0$ is $< T$. Then, if \n\\begin{enumerate}\n\\item[(i)] $X_2(T_0)\\neq 0$,\n\\item[(ii)] $T_0 < T_e$,\n\\end{enumerate}\nthere is either a time $T_1$ with $T_e > T_1\\geq T_0$ such that\n$$\nX_2(t) \\geq \\frac{1}{2} \\delta_2 \\quad (t\\in [T_1, T])\n$$\nor \n$$\nX_2(t) \\leq \\frac{1}{2} \\delta_2 \\quad (t\\in [T_0, T]).\n$$\nIf $T_1$ exists, we have the estimate\n\\begin{equation}\nT_e - T_1 \\leq {\\beta}^{-1} \\log(2). \n\\end{equation}\n\\end{prop}\n\n\\begin{definition}\nWe call a function $g=g(\\alpha, \\beta, \\boldsymbol{\\delta}, M)$ a \\emph{harmless generic factor} if it has the following property:\nthere exists a $p > 0$ such that for fixed $\\alpha, \\beta, M$ \n\\begin{align*}\ng(\\alpha, \\beta, \\delta_2^p, \\delta_2, M)\n\\end{align*}\nis bounded as $\\delta_2\\to 0$.\n\\end{definition}\nFor example, a function of the form\n\\begin{align*}\ng=C(\\alpha, \\beta) \\left[\\delta_2^{\\gamma_3} M(1+|\\log \\delta_2|)+\\delta_1^{\\gamma_1}{\\delta_2}^{-\\gamma_2}+1\\right]^{\\gamma_4}\n\\end{align*}\n($\\gamma_j > 0$) is a harmless generic factor, and $e^g$ is also a harmless generic factor if $g$ is one.\nWhen performing estimations, we shall often absorb harmless generic factors into one\nanother, so the actual meaning of $g$ may change from line to line.\n\nOur goal will be to obtain estimates for the quantities $f_0, f_1, f_2, v_1, v_2$ along a single particle trajectory,\nup to the given time $T$, so that we can apply our ODE estimates.\nThe crucial point is that our bounds depend \\emph{not directly} on $\\omega,~T,~T_e$ but \nonly on $\\beta, \\alpha, \\mathbf{X}(T_0)$. \nFor the estimations below we often refer to a fixed particle trajectory with entry time $T_0$, along\nwhich we evaluate integrals over time of the quantities $Q_1, Q_2, c$ etc. To make the notation more compact, \nwe often skip $\\mathbf{X}$ in the arguments of the integrands, e.g. 
we write\n$$\n\\int_{T_0}^t |c| e^{-A} ds = \\int_{T_0}^t |c(\\mathbf{X}(s), s)| \\exp\\left(-\\int_{T_0}^s a(\\mathbf{X}(\\tau), \\tau ) d\\tau\\right) ~ds.\n$$\n\\begin{lem}\\label{keyLemma}\nFor any $T_0 \\le T^*\\le T_e$,\n\\begin{equation}\nX_2(T_0) \\le \\delta_2 \\expo{-\\int_{T_0}^{T^*} Q_2(\\mathbf{X}) ds}.\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nSince the particle trajectory lies in $D$ for $t\\in [T_0, T_e]$, \n$$\\delta_2 \\ge X_2(T^*) = X_2(T_0) \\expo{\\int_{T_0}^{T^*} Q_2(\\mathbf{X}) ds}$$ holds.\n\\end{proof}\nLet $\\phi: [0, \\infty) \\to [0, \\infty)$ be a function with the properties\n\\begin{equation}\n\\label{phiEst}\\phi(s) \\leq 1-e^{-s}\n\\end{equation}\nand $\\phi$ monotone nondecreasing on $[0, \\infty)$, \n$\\phi$ linear on $[0, s^*]$ and $\\phi$ constant on $[s^*, \\infty)$ for some $s^*$. We fix such a function $\\phi$ for the following.\n\\begin{prop}\\label{propSomeEst}\nAlong a particle trajectory in a $\\beta$-hyperbolic flow in $D$, we have the following\nfor $t\\in [T_0, \\min \\{T_e,T\\}]$:\n\\begin{enumerate}\n\\item[(i)]\\begin{align*}\nX_1(t) &\\leq \\delta_1 \\expo{-\\beta (t-T_0)}\\\\\nX_2(t) &\\leq \\delta_2 \\expo{-\\beta (\\min \\{T_e,T\\}-t)},\n\\end{align*}\n\\item[(ii)]\\begin{align*}\nd(\\mathbf{X}(t)) \\geq \\delta_2 \\phi\\left(\\int_{t}^{\\min\\{T_e, T\\}} Q_2 ~ds\\right) \\geq \\delta_2 \\phi(\\beta(\\min\\{T, T_e\\}-t)),\n\\end{align*}\n\\item[(iii)]For any $\\gamma\\in (0,1)$, $t \\in [T_1, \\min\\{T_e, T\\}]$,\n\\begin{align*}\n\\int_{T_1}^{t} d(\\mathbf{X}(s))^{-1+\\gamma}~ds \\leq C(\\gamma, \\beta) \\delta_2^{-1+\\gamma},\\\\\n\\int_{T_1}^{t} |\\log d(\\mathbf{X}(s))|~ds \\leq C(\\gamma, \\beta) |\\log\\delta_2|\n\\end{align*}\nwith a $C(\\gamma, \\beta)$ independent of the trajectory.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nFor (i), recall that under the assumption of $\\beta$-hyperbolic flow, \n$Q_2 \\geq \\beta$. 
From \\eqref{X2}, we get\n\\begin{align}\n&X_2(t) = X_2(T_0) \\expo{\\int_{T_0}^{\\min \\{T_e,T\\}} Q_2~ds\n- \\int_{t}^{\\min \\{T_e,T\\}} Q_2~ds}\\notag\\\\ \n&\\,\\,\\,= X_2(\\min \\{T_e,T\\})\\expo{- \\int_{t}^{\\min \\{T_e,T\\}} Q_2~ds}\\label{propSomeEstEq1}\\\\\n &\\,\\,\\,\\leq \\delta_2 \\expo{-\\beta (\\min \\{T_e,T\\}-t)}\\notag,\n\\end{align}\nnoting that $X_2(\\min \\{T_e,T\\})\\leq \\delta_2$. The bound for $X_1$ is analogous.\n\nNow we show (ii). Recall that $d(\\mathbf{X}) = \\delta_2 - X_2(t)$. Hence by \\eqref{propSomeEstEq1}\n$$\n\\delta_2 - X_2(t) \\geq \\delta_2\\left(1- \\expo{-\\int_t^{\\min \\{T_e,T\\}} Q_2 ds}\\right) \\geq \\delta_2 \\phi\\left(\\int_t^{\\min \\{T_e,T\\}} Q_2 ds\\right).\n$$\n\n(iii) We split the integrals by introducing the time \n$T^*$ defined as follows: $T^*$ is the maximum of all $s\\in [T_1, t]$ such that \n$$\\phi(\\beta(\\min\\{T_e, T\\}-s))= \\phi(s^*).$$ \nIf there is no such $s$, we set $T^*= T_1$. Thus we split \nas follows:\n$$\n\\int_{T_1}^{t}= \\int_{T_1}^{T^*} \\ldots + \\int_{T^*}^{t}\\ldots \n$$\nif $t \\geq T^*$, otherwise we have only one integral from $T_1$ to $t$.\nWe calculate \n\\begin{align*}\n\\int_{T_1}^{T^*} d(\\mathbf{X}(s))^{-1+\\gamma}~ds &\\leq \n\\delta_2^{-1+\\gamma} \\int_{T_1}^{T^*} \\phi(\\beta(\\min\\{T_e, T\\}-s))^{-1+\\gamma}~ds\\\\\n&\\leq \\delta_2^{-1+\\gamma} (T_e-T_1) \\phi(s^*)^{-1+\\gamma} \\leq C(\\beta, \\gamma)\\delta_2^{-1+\\gamma}\\\\\n\\int_{T^*}^{t} d(\\mathbf{X}(s))^{-1+\\gamma}~ds &\\leq \n\\delta_2^{-1+\\gamma} \\int_{T^*}^{t} \\phi(\\beta(\\min\\{T_e, T\\}-s))^{-1+\\gamma}~ds\\\\\n&\\lesssim \\delta_2^{-1+\\gamma} \\beta^{-1+\\gamma}\\int_{T^*}^{t} (\\min\\{T_e, T\\}-s)^{-1+\\gamma}~ds\\\\\n&\\lesssim \\delta_2^{-1+\\gamma} \\beta^{-1+\\gamma}\\int_{T_1}^{\\min\\{T_e, T\\}} (\\min\\{T_e, T\\}-s)^{-1+\\gamma}~ds\\\\\n&\\lesssim \\delta_2^{-1+\\gamma} \\beta^{-1+\\gamma}\\int_{0}^{T_e-T_1} z^{-1+\\gamma}~dz\n\\lesssim \\delta_2^{-1+\\gamma} C(\\beta, 
\\gamma).\n\\end{align*}\nusing (ii), Proposition \\ref{propTime} to estimate $T_e-T_1$ and the fact that $\\phi$ is linear on $[0, s^*]$.\n The second integral is treated analogously.\n\\end{proof}\n\\begin{lem}\\label{lemExponential}\nAlong a particle trajectory, we have, for $T_0\\leq t\\leq \\min\\{T, T_e\\}$,\n\\begin{align*}\ne^{\\pm A(t)} &\\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) \\expo{\\pm \\int_{T_0}^t Q_1~ds}, \\\\\ne^{\\pm \\int_{T_0}^t Q_1 ds} &\\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) e^{\\pm \\int_{T_0}^t Q_2 ds}\n\\end{align*}\nwhere $g(\\alpha, \\beta, \\boldsymbol{\\delta}, M)$ are harmless factors depending only on the quantities indicated.\n\\end{lem}\n\\begin{proof}\nWe prove the second inequality of the lemma, the other ones being analogous.\nRecall $a(t) = Q_2(t) + X_2 \\d_{x_2} Q_2(t)$ and thus\n$$\n\\pm A \\leq \\pm \\int_{T_0}^t Q_2(s) ds + \\int_{T_0}^t |X_2(s) \\d_{x_2} Q_2(s)|~ds.\n$$\nWe now use lemma \\ref{lemFor_c_b}:\n\\begin{align*}\n\\int_{T_0}^t |X_2(s) \\d_{x_2} Q_2(s)|~ds \\lesssim M \\int_{T_0}^{\\min\\{T, T_e\\}} X_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) ds\\\\\n + C(\\gamma_1, \\gamma_2) \\int_{T_0}^{\\min\\{T, T_e\\}} X_2^{1-\\gamma_1-\\gamma_2} X_1^{\\gamma_2} d(\\mathbf{X})^{-1+\\gamma_1}~ds\n\\end{align*}\n(note that the interval of integration has been enlarged). \nWe split the interval of integration into $[T_0, T_1]$ and $[T_1, \\min\\{T, T_e\\}]$ provided $\\min\\{T, T_e\\} \\geq T_1$. 
\nThe case $\\min\\{T, T_e\\} < T_1$ is analogous.\n\nIn the part over $[T_0, T_1]$, where $d(\\mathbf{X})\\geq \\frac{1}{2} \\delta_2$, we cannot control the length of the time interval, so we estimate as follows:\n\\begin{align*}\n\\int_{T_0}^{T_1}X_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) &\\leq \\delta_2^{1-\\alpha}\\int_{T_0}^{T_1} e^{-(1-\\alpha)\\beta (\\min\\{T, T_e\\}-s)} (C+|\\log \\delta_2|)~ds\\\\\n&\\lesssim \\delta_2^{1-\\alpha} |\\log \\delta_2| \\int_{0}^{\\infty} e^{-(1-\\alpha)\\beta z}~dz\\\\\n&\\lesssim C(\\alpha, \\beta) \\delta_2^{1-\\alpha}|\\log \\delta_2|,\n\\end{align*}\nusing part (i) of Proposition \\ref{propSomeEst} and $d(\\mathbf{X}(s))\\geq \\frac{1}{2} \\delta_2$ for $s\\in [T_0, T_1]$, and $\\delta_2$ sufficiently small.\nIn the part over $[T_1, \\min\\{T, T_e\\}]$ the length of the time interval is bounded but\n$d(\\mathbf{X})^{-1}$ is unbounded, so we proceed differently:\n\\begin{align*}\n\\int_{T_1}^{\\min\\{T, T_e\\}} X_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) &\\lesssim \\delta_2^{1-\\alpha} \\int_{T_1}^{\\min\\{T, T_e\\}}|\\log d(\\mathbf{X})|~ds\\\\\n&\\lesssim \\delta_2^{1-\\alpha}|\\log\\delta_2|,\n\\end{align*}\nusing statement (iii) of Proposition \\ref{propSomeEst} and $X_2\\leq \\delta_2$.\n\nFor the second integral involving $X_2^{1-\\gamma_1-\\gamma_2} X_1^{\\gamma_2} d(\\mathbf{X})^{-1+\\gamma_1}$, we note \n\\begin{align*}\n\\int_{T_0}^{T_1} X_2^{1-\\gamma_1-\\gamma_2} X_1^{\\gamma_2} d(\\mathbf{X})^{-1+\\gamma_1}&\\lesssim \\delta_1^{\\gamma_2} \\delta_2^{1-\\gamma_1-\\gamma_2} \\delta_2^{-1+\\gamma_1} \\int_{T_0}^{T_1} \ne^{-(1-\\gamma_1-\\gamma_2)\\beta(\\min\\{T, T_e\\}-s)} ds\\\\\n&\\lesssim \\delta_1^{\\gamma_2} \\delta_2^{-\\gamma_2} \\\\\n\\int_{T_1}^{\\min\\{T, T_e\\}} X_2^{1-\\gamma_1-\\gamma_2} X_1^{\\gamma_2} d(\\mathbf{X})^{-1+\\gamma_1} &\\lesssim \\delta_1^{\\gamma_2}\\delta_2^{-\\gamma_2}\n\\end{align*}\nby Proposition \\ref{propSomeEst}, (i) and (iii) and moreover using $X_1\\leq \\delta_1, X_2\\leq 
\\delta_2$.\nThis yields finally\n$$\n\\int_{T_0}^t |X_2(s) \\d_{x_2} Q_2(s)|~ds \\leq C(\\alpha, \\beta)[M \\delta_2^{1-\\alpha}|\\log \\delta_2|+(\\delta_1\/\\delta_2)^{\\gamma_2}] \n$$\nimplying the result, since the factor in square brackets is a harmless generic factor. \n\\end{proof}\n\\begin{lem}\\label{lemf2f0}\nThe following estimates hold for $T_0\\leq t \\leq \\min\\{T, T_e\\}$:\n\\begin{align}\nf_2(t) &\\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) X_2(T_0)^{1-\\alpha}\\left[ M+\\delta_1 ^{\\alpha\/2} \\delta_2^{-1+\\alpha\/2} \\right], \\label{lemf2f0_est1}\\\\\nf_0(t) &\\leq R\\, g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) X_2(T_0)^{1-\\alpha}\\left[ M+\\delta_1 ^{\\alpha\/2} \\delta_2^{-1+\\alpha\/2} \\right].\n\\end{align}\n\\end{lem}\n\\begin{proof} We write $g = g(\\alpha, \\beta, \\boldsymbol{\\delta}, M)$ for any occurring harmless\nfactor. Using Lemma \\ref{lemFor_c_b}, \n\\begin{align*}\nf_2(t) &= \\int_{T_0}^t e^{-A}|c| \\lesssim M \\int_{T_0}^{\\min\\{T, T_e\\}} e^{-A} X_2^{1-\\alpha}~ ds\\\\\n&\\,\\,+ C(\\alpha)\\int_{T_0}^{\\min\\{T, T_e\\}} e^{-A} X_2^{1-\\alpha}X_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2} ds\n\\end{align*}\n(choose $\\gamma_1=\\gamma_2= \\frac{1}{2} \\alpha$). 
Observe first that we can use Lemma \\ref{lemExponential} to replace\n$e^{-A}$ by $\\expo{- \\int_{T_0}^s Q_2 \\,d\\tau}$ and to get the estimate\n\\begin{align}\\label{lem_eq1}\ne^{-A} X_2(s)^{1-\\alpha} \n&\\leq g X_2(T_0)^{1-\\alpha} \\expo{-\\alpha\\int_{T_0}^s Q_2 ~d\\tau}\\notag\\\\\n&\\leq g X_2(T_0)^{1-\\alpha} \\expo{-\\alpha \\beta (s - T_0)} \n\\end{align}\n(we use again $Q_2\\geq \\beta$).\n\nUse \\eqref{lem_eq1} to estimate the integral containing $e^{-A} X_2^{1-\\alpha}$: \n\\begin{align*}\n\\int_{T_0}^{\\min\\{T, T_e\\}} e^{-A} X_2^{1-\\alpha} ds\n&\\leq g\\,X_2(T_0)^{1-\\alpha} \\int_{T_0}^{\\infty} e^{-\\alpha \\beta(s-T_0)} ds \\\\\n&\\leq g\\,X_2(T_0)^{1-\\alpha} C(\\alpha, \\beta).\n\\end{align*}\nFor the integral containing $e^{-A} X_2^{1-\\alpha}X_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}$, we use \\eqref{lem_eq1} again and estimate\n\\begin{align*}\n&\\int_{T_0}^{\\min\\{T, T_e\\}} e^{-A} X_2^{1-\\alpha}X_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2} ds \\le \\\\\n&\\,\\,\\,g X_2(T_0)^{1-\\alpha} \\delta_1^{\\alpha\/2} \\int_{T_0}^{\\min\\{T, T_e\\}} e^{-\\alpha \\beta(s-T_0)}d(\\mathbf{X})^{-1+\\alpha\/2}~ds.\n\\end{align*}\nAs in the proof of Lemma \\ref{lemExponential}, we split the interval of integration into $[T_0, T_1]$ and $[T_1, \\min\\{T, T_e\\}]$ in case $T_1 \\leq \\min\\{T, T_e\\}$,\nobtaining\n\\begin{align*}\n\\int_{T_0}^{T_1} e^{-\\alpha \\beta(s-T_0)}d(\\mathbf{X})^{-1+\\alpha\/2}~ds &\\lesssim \\delta_2^{-1+\\alpha\/2} C(\\alpha, \\beta),\\\\\n\\int_{T_1}^{\\min\\{T, T_e\\}} e^{-\\alpha \\beta(s-T_0)}d(\\mathbf{X})^{-1+\\alpha\/2}~ds &\\lesssim \\int_{T_1}^{\\min\\{T, T_e\\}} d(\\mathbf{X})^{-1+\\alpha\/2}~ds\\\\\n &\\lesssim \\delta_2^{-1+\\alpha\/2}\n\\end{align*}\nwhere we have used $d(\\mathbf{X})\\ge \\frac{1}{2} \\delta_2, e^{-\\alpha \\beta(s-T_0)}\\leq 1$ and \nProposition \\ref{propSomeEst}. 
In the case $T_1 \\geq \\min\\{T, T_e\\}$, we are left with only one integral\nand deal with it in the same way.\n\nTo estimate $f_0$, we use that the feeding condition holds and that assumption (iii) from Theorem \\ref{mainTechnicalThm} holds. This gives \n\\begin{align*}\n|\\xi_1(T_0)| &= |\\d_{x_1}\\omega(\\mathbf{X}(T_0), T_0)| \\leq R X_2(T_0)^{1-\\alpha},\\\\\n|\\xi_2(T_0)|&=|\\d_{x_2}\\omega(\\mathbf{X}(T_0), T_0)|\\leq R\n\\end{align*}\nfor both of the cases $T_0=0$ (particle starts in $D$) and $T_0>0$ (particle starts in feeding zone). \nNow use the definition of $f_0$ and the estimate \\eqref{lemf2f0_est1} for $f_2$.\n\\end{proof}\n\\begin{lem}\\label{lemf1}\nFor $T_0\\leq t\\leq \\min\\{T, T_e\\}$, \n\\begin{align}\nf_1(t) &\\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} e^{ \\alpha \\int_{T_0}^t Q_2 ds} \n\\end{align}\nwith a universal factor $g$ depending on the quantities indicated. \n\\end{lem}\n\\begin{proof}\nWe abbreviate again $g = g(\\alpha, \\beta, \\boldsymbol{\\delta}, M)$. First we claim that \n\\begin{equation}\n\\label{lemf1_claim}\n\\begin{split}\n\\int_{T_0}^t e^{2 A} |b|~ds &\\le g\\, C(\\alpha, \\beta) X_1(T_0)^{1-\\alpha}\n\\expo{(1+\\alpha)\\int_{T_0}^t Q_1~ds}\\\\\n&\\,\\,\\times\\left[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha}{\\delta_2}^{-1+2\\alpha}\\right].\n\\end{split}\n\\end{equation}\nWe treat the case $T_1 \\leq t \\leq \\min\\{T, T_e\\}$. 
Using Lemma \\ref{lemFor_c_b}\n(recall $M_{\\omega, \\widehat{D}}\\leq M$), with $\\gamma= 2\\alpha$, and Lemma \\ref{lemExponential} we get\n\\begin{align*}\ne^{2 A} |b| &\\leq e^{2 A} X_1^{1-\\alpha}[M(1+|\\log d(\\mathbf{X})|)+X_1^{2\\alpha} d(\\mathbf{X})^{-1+2\\alpha}]\\\\\n &\\leq g \\,e^{(1+\\alpha)\\int_{T_0}^t Q_1 ds} \\,X_1(T_0)^{1-\\alpha}[M(1+|\\log d(\\mathbf{X})|)+\\delta_1^{2\\alpha} d(\\mathbf{X})^{-1+2\\alpha}].\n\\end{align*}\nWe integrate this bound from $T_0$ to $t$ and split into two integrals from $T_0$ to $T_1$ and $T_1$ to $t$:\n\\begin{align*}\ng\\,X_1(T_0)^{1-\\alpha}[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha} \\delta_2^{-1+2\\alpha}]\\int_{T_0}^{T_1} e^{(1+\\alpha)\\int_{T_0}^{s} Q_1 d\\tau}~ds\n\\end{align*}\nFor the integral, observe that\n\\begin{align*}\n&\\int_{T_0}^{T_1} e^{(1+\\alpha)\\int_{T_0}^{s} Q_1 d\\tau}~ds = \\int_{T_0}^{T_1} \\frac{Q_1}{Q_1} e^{(1+\\alpha)\\int_{T_0}^{s} Q_1 d\\tau}~ds\\\\\n&\\,\\,\\leq \\beta^{-1} (1+\\alpha)^{-1} \\left.e^{(1+\\alpha)\\int_{T_0}^{s} Q_1 d\\tau}\\right|_{s=T_0}^{s=T_1} \\lesssim e^{(1+\\alpha)\\int_{T_0}^{T_1} Q_1 d\\tau}.\n\\end{align*}\nHence\n\\begin{align*}\n \\int_{T_0}^{T_1} e^{2 A} |b|~ds &\\lesssim g\\, X_1(T_0)^{1-\\alpha}[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha} \\delta_2^{-1+2\\alpha}] e^{(1+\\alpha)\\int_{T_0}^{T_1} Q_1}\n\\end{align*}\nFor the remaining part $\\int_{T_1}^{t} e^{2 A} |b|~ds$, we use Proposition \\ref{propSomeEst} again, and find the bound\n\\begin{align*}\n&g\\,e^{(1+\\alpha)\\int_{T_0}^t Q_1 ds} X_1(T_0)^{1-\\alpha} \n\\int_{T_1}^{t}[M(1+|\\log d(\\mathbf{X})|)+\\delta_1^{2\\alpha} d(\\mathbf{X})^{-1+2\\alpha}]~ds\\\\\n&\\,\\,\\lesssim g\\, X_1(T_0)^{1-\\alpha} \n[M|\\log \\delta_2|+\\delta_1^{2\\alpha} \\delta_2^{-1+2\\alpha}] e^{(1+\\alpha)\\int_{T_0}^t Q_1 ds}.\n\\end{align*}\nThe claim follows for the case $T_1\\leq t\\leq \\min\\{T, T_e\\}$. 
The calculation for $t\\leq T_1$ is similar (and slightly simpler).\n Next, using again Lemma \\ref{lemFor_c_b}, with $\\gamma= 2\\alpha$,\n\\begin{align*}\ne^{-A} |c|&\\leq e^{-A}X_2^{1-\\alpha} [M + X_2^{\\alpha\/2} X_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}]\\\\\n&\\leq g\\, e^{-\\alpha \\int_{T_0}^t Q_2~ds} X_2(T_0)^{1-\\alpha} \\left[M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}\\right].\n\\end{align*}\nHence\n\\begin{align*}\n&\\int_{T_0}^t e^{-A} |c|\\int_{T_0}^s e^{2 A} |b| \\lesssim g\\, X_2(T_0)^{1-\\alpha} X_1(T_0)^{1-\\alpha}\n\\left[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha}{\\delta_2}^{-1+2\\alpha}\\right]\\\\\n&\\,\\,\\times \\int_{T_0}^t e^{-\\alpha \\int_{T_0}^s Q_2~d\\tau} e^{(1+\\alpha)\\int_{T_0}^s Q_1~d\\tau} [M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}]~ ds.\n\\end{align*}\nContinuing the estimation, we have\n\\begin{align*}\n\\int_{T_0}^t e^{-\\alpha \\int_{T_0}^s Q_2 d\\tau} e^{(1+\\alpha)\\int_{T_0}^s Q_1 d\\tau} [M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}]~ ds\\\\\n\\leq g \\int_{T_0}^t e^{-\\alpha \\int_{T_0}^s Q_2 d\\tau} e^{(1+\\alpha)\\int_{T_0}^s Q_2d\\tau} [M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}]~ ds\\\\\n\\leq g \\int_{T_0}^t e^{ \\int_{T_0}^s Q_2 d\\tau} [M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}]~ ds\\\\\n\\leq g \\,C(\\alpha, \\beta) e^{ \\int_{T_0}^t Q_2 ds} [M + \\delta_2^{\\alpha\/2} \\delta_1^{\\alpha\/2} \\delta_2^{-1+\\alpha\/2}]\n\\end{align*}\nwhere we have used Lemma \\ref{lemExponential} in the second line to exchange\n$e^{(1+\\alpha) \\int_{T_0}^s Q_1 d\\tau}$ for $e^{(1+\\alpha) \\int_{T_0}^s Q_2 d\\tau}$ and used the familiar splitting at $T_1$ to \nestimate the integral. 
Thus, finally we get \n\\begin{align*}\n\\int_{T_0}^t e^{-A} |c|\\int_{T_0}^s e^{2 A} |b|&\\lesssim g\\, X_1(T_0)^{1-\\alpha}\n\\left[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha}{\\delta_2}^{-1+2\\alpha}\\right]\\\\\n&\\,\\,\\,\\times [M + \\delta_1^{\\alpha\/2} \\delta_2^{-1+\\alpha}] e^{ \\int_{T_0}^t Q_2 ds} X_2(T_0)^{1-\\alpha}.\n\\end{align*}\nIt remains to apply the key Lemma \\ref{keyLemma} to estimate the factor $e^{ \\int_{T_0}^{t} Q_2 ds} X_2(T_0)^{1-\\alpha}$,\nwhich is less than\n\\begin{align}\n&\\delta_2^{1-\\alpha} e^{ \\alpha \\int_{T_0}^t Q_2 ds}\ne^{(1-\\alpha) \\int_{T_0}^{t} Q_2 ds - (1-\\alpha) \\int_{T_0}^{\\min\\{T, T_e\\}} Q_2 ds}\n\\le \\delta_2^{1-\\alpha} e^{ \\alpha \\int_{T_0}^t Q_2 ds}\\label{lemf1_eq1}.\n\\end{align}\nNow observe that the factor $\\delta_2^{1-\\alpha}$ on the right-hand side of \\eqref{lemf1_eq1} can be combined with the expression\n$\\left[M(1+|\\log \\delta_2|)+\\delta_1^{2\\alpha}{\\delta_2}^{-1+2\\alpha}\\right]$, giving \n$$\n\\delta_2^{1-2\\alpha} g.\n$$\n\\end{proof}\n\n\n\\begin{lem}\\label{lem11}\n\\begin{align*}\n&v_1 \\lesssim g \\left[Q_2 + M X_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) + X_2^{1-\\alpha} d(\\mathbf{X})^{-1+\\alpha}\\right] e^{-\\int_{T_0}^t Q_2 ds},\\\\\n&v_2(t) \\leq g X_1(T_0)^{1-\\alpha} \\left[M(1+|\\log d(\\mathbf{X})|)+\\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}\\right] e^{\\alpha \\int_{T_0}^t Q_1 ds},\\\\\n&\\int_{T_0}^{\\min\\{T, T_e\\}} v_1 \\le g [M(1+|\\log \\delta_2|) + 1],\\\\\n&\\int_{T_0}^{\\min\\{T, T_e\\}} v_2 e^{\\alpha \\int_{T_0}^t Q_1 ds} \\le g e^{2 \\alpha \\int_{T_0}^t Q_2 ds}, \\\\\n&\\int_{T_0}^{\\min\\{T, T_e\\}} v_1 f_1 \\lesssim g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha},\\\\\n&\\int_{T_0}^t v_2 f_1 \\lesssim g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} X_1(T_0)^{1-\\alpha} e^{2\\alpha\\int_{T_0}^t Q_2 ds},\\\\\n&\\int_{T_0}^{\\min\\{T, T_e\\}} v_1 f_2 \\leq g X_2(T_0)^{1-\\alpha},\n\\end{align*}\n$g = g(\\alpha, \\beta, \\boldsymbol{\\delta}, M)$ a 
harmless factor.\n\\end{lem}\n\\begin{proof}\nThe estimates for $v_1$ and $v_2$ follow from Lemma \\ref{lemFor_c_b} and Lemma \\ref{lemExponential}.\nBy Proposition \\ref{propSomeEst} and the usual splitting of the interval of integration,\n\\begin{align*}\n\\int_{T_0}^{\\min\\{T, T_e\\}} v_1 &\\le g [M(1+|\\log \\delta_2|) + 1].\n\\end{align*}\nThe integral $\\int_{T_0}^{\\min\\{T, T_e\\}} v_2 e^{\\alpha \\int_{T_0}^t Q_1 ds}$ is estimated similarly.\n\nUsing Lemma \\ref{lemf1} and Lemma \\ref{lemFor_c_b} we get\n\\begin{align*}\nv_1 f_1 &= |a| e^{-A} f_1 \\leq g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds} e^{-A} \\\\\n&\\,\\,\\,\\,\\,\\times \\left[ Q_2 + M X_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) + X_2^{1-\\alpha} d(\\mathbf{X})^{-1+\\alpha}\\right]\\\\\n&\\,\\,\\lesssim g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} e^{(-1+\\alpha) \\int_{T_0}^t Q_2 ds} \\\\\n&\\,\\,\\,\\,\\,\\times\\left[ Q_2 + M \\delta_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) + \n\\delta_2^{1-\\alpha} d(\\mathbf{X})^{-1+\\alpha}\\right].\n\\end{align*} \nNote how the growing factor $e^{\\alpha \\int_{T_0}^t Q_2 ds}$ was cancelled by $e^{-A}$.\nBy integrating, we can bound $\\int_{T_0}^{\\min\\{T, T_e\\}} v_1 f_1$: \n\\begin{align*}\n&\\lesssim g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} \\int\\limits_{T_0}^{\\min\\{T, T_e\\}} e^{(-1+\\alpha) \\int_{T_0}^t Q_2 ds} \\\\\n&\\,\\,\\times\\left[ Q_2 + M \\delta_2^{1-\\alpha}(1+|\\log d(\\mathbf{X})|) + \\delta_2^{1-\\alpha}d(\\mathbf{X})^{-1+\\alpha}\\right] \\\\\n&\\lesssim g \\delta_1^{1-\\alpha}\\delta_2^{1-2 \\alpha}\\left[M \\delta_2^{1-\\alpha}|\\log \\delta_2| +1 \\right]\n\\end{align*}\nyielding the desired estimate for the integral with integrand $v_1 f_1$.\n\nProceeding analogously, we bound $v_2 f_1 = e^{A} |b| f_1$ by\n\\begin{align*}\n&g \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} e^{(1+\\alpha)\\int_{T_0}^t Q_2 ds} X_1^{1-\\alpha}\\left[\nM(1+|\\log d(\\mathbf{X})|) + X_1^{\\alpha\/2} 
d(\\mathbf{X})^{-1+\\alpha\/2}\\right]\\\\\n&\\leq g \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} X_1(T_0)^{1-\\alpha} e^{2\\alpha\\int_{T_0}^t Q_2 ds} \\left[\nM(1+|\\log d(\\mathbf{X})|) + \\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}\\right]\n\\end{align*}\nwhere we have used $e^{(1+\\alpha)\\int_{T_0}^t Q_2 ds} X_1^{1-\\alpha}\\leq g X_1(T_0)^{1-\\alpha} e^{(1+\\alpha)\\int_{T_0}^t Q_2 ds} e^{\n-(1-\\alpha)\\int_{T_0}^t Q_1 ds}$ and used Lemma \\ref{lemExponential} to replace $Q_1$ by $Q_2$.\nIntegration now yields \n\\begin{align*}\n\\int_{T_0}^t v_2 f_1 \\leq g \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} X_1(T_0)^{1-\\alpha} e^{2\\alpha\\int_{T_0}^t Q_2 ds} \\left[\nM|\\log \\delta_2| + \\delta_1^{\\alpha\/2} \\delta_2^{-1+\\alpha\/2}\\right].\n\\end{align*}\n\\end{proof}\n\\begin{lem}\\label{lemEstH}\nAlong a particle trajectory, for $T_0\\leq t \\leq \\min\\{T, T_e\\}$,\n\\begin{align}\\label{lemEstH_eq3}\n(H f_0)(t)&\\leq g \\|f_0\\|_{\\infty} \\left[1+ e^{\\alpha \\int_{T_0}^t Q_2 ds}\\right]\n\\end{align}\nholds, where $\\|f_0\\|_{\\infty} = \\|f_0\\|_{\\infty, [T_0, t]}$.\n\\end{lem}\n\\begin{proof}\nFirst we estimate the expression $f_0+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_0$. 
Using Lemmas \\ref{lemf1}, \\ref{lem11}, we get the following bound\n\\begin{align}\n&f_0(t) + g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds} \\expo{g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha}} \\|f_0\\|_{\\infty}\\int_{T_0}^{T_e} v_1 ds \\le\\notag\\\\ \n&\\,\\,\\|f_0\\|_{\\infty} (1+g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds} M(1+|\\log \\delta_2|))\\le\\notag\\\\\n&\\,\\,\\|f_0\\|_{\\infty} (1+g \\delta_1^{1-\\alpha}\\delta_2^{1-3\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds}) \\le g \\|f_0\\|_{\\infty} e^{\\alpha \\int_{T_0}^t Q_2 ds} \\label{lemEstH_eq1}\n\\end{align}\nwhere we have absorbed the harmless factors $\\expo{g \\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha}}, \n\\delta_2^{\\alpha}M(1+|\\log \\delta_2|)$ into $g$.\nNext we consider \n$$f_2+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_2,$$ \nfor which we get the following bound (Lemmas \\ref{lemf2f0}, \\ref{lemf1}) \n\\begin{align*}\n&g X_2(T_0)^{1-\\alpha} + g \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds} X_2(T_0)^{1-\\alpha} \\\\\n&\\,\\,\\,\\le g X_2(T_0)^{1-\\alpha} + g \\delta_1^{1-\\alpha} \\delta_2^{1-2 \\alpha} X_2(T_0)^{1-2 \\alpha}\\\\\n&\\,\\,\\,\\le g (\\delta_2^{\\alpha}+\\delta_1^{1-\\alpha}\\delta_2^{1-2\\alpha}) X_2(T_0)^{1-2 \\alpha} \\le g X_2(T_0)^{1-2\\alpha}\n\\end{align*}\nusing, in the first step, the key Lemma \\ref{keyLemma} to cancel $e^{\\alpha \\int_{T_0}^t Q_2 ds}$ using the factor $X_2(T_0)^{\\alpha}$. 
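For the reader's convenience, we record this absorption mechanism in isolation. Assuming, as suggested by \\eqref{lemf1_eq1}, that the key Lemma \\ref{keyLemma} provides $X_2(T_0)\\, e^{\\int_{T_0}^{\\min\\{T, T_e\\}} Q_2\\, ds} \\le g\\, \\delta_2$, and using $Q_2 \\geq \\beta > 0$ to enlarge the domain of integration, we obtain for $T_0\\leq t \\leq \\min\\{T, T_e\\}$\n$$\nX_2(T_0)^{\\alpha}\\, e^{\\alpha \\int_{T_0}^t Q_2\\, ds} \\leq \\left(X_2(T_0)\\, e^{\\int_{T_0}^{\\min\\{T, T_e\\}} Q_2\\, ds}\\right)^{\\alpha} \\leq g\\, \\delta_2^{\\alpha},\n$$\nwhich is the inequality applied in the first step (this is only a sketch; the precise form of the key lemma is the one invoked in the proof of Lemma \\ref{lemf1}).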
\n\nSo for $v_2 \\left[f_2+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_2\\right]$ we obtain the upper bound\n\\begin{align*}\nv_2 g X_2(T_0)^{1-2 \\alpha} &\\le g e^{\\alpha \\int_{T_0}^t Q_1} X_2(T_0)^{1-2 \\alpha} \\left[M(1+|\\log d(\\mathbf{X})|)+\\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}\\right] \\\\\n&\\,\\,\\,\\leq g \\delta_2^\\alpha X_2(T_0)^{1-3 \\alpha} \\left[M(1+|\\log d(\\mathbf{X})|)+\\delta_1^{\\alpha\/2} d(\\mathbf{X})^{-1+\\alpha\/2}\\right] \n\\end{align*}\nusing the key Lemma \\ref{keyLemma} again to cancel $e^{\\alpha \\int_{T_0}^t Q_1}$. Thus we see that \n$$\n\\expo{\\int_{T_0}^{\\min\\{T, T_e\\}} v_2 \\left[f_2+f_1 \\expo{\\int_{T_0}^s v_1 f_1} \\int_{T_0}^s v_1 f_2\\right]} \\le g.\n$$\nFinally, by \\eqref{lemEstH_eq1} and Lemma \\ref{lem11}\n\\begin{align*}\n&\\int_{T_0}^t v_2 \\left[f_0+f_1 \\expo{\\int_{T_0}^s v_1 f_1}\\int_{T_0}^s v_1 f_0\\right] \\le\\\\\n&\\,\\,g \\|f_0\\|_{\\infty} \\int_{T_0}^t v_2 e^{\\alpha \\int_{T_0}^t Q_2 ds} ds \\le g \\|f_0\\|_{\\infty} e^{\\alpha \\int_{T_0}^t Q_2 ds}. 
\n\\end{align*} \nThus, in total we get\n\\begin{equation}\\label{lemEstH_eq2}\n\\begin{split}\n&f_0+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_0 \\le g \\|f_0\\|_{\\infty} e^{\\alpha \\int_{T_0}^t Q_2 ds}\\\\\n&f_2+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_2 \\le g X_2(T_0)^{1-2\\alpha}\\\\\n&\\expo{\\int_{T_0}^{\\min\\{T, T_e\\}} v_2 \\left[f_2+f_1 \\expo{\\int_{T_0}^s v_1 f_1} \\int_{T_0}^s v_1 f_2\\right]} \\le g\\\\\n&\\int_{T_0}^t v_2 \\left[f_0+f_1 \\expo{\\int_{T_0}^s v_1 f_1}\\int_{T_0}^s v_1 f_0\\right] \\le g \\|f_0\\|_{\\infty} e^{\\alpha \\int_{T_0}^t Q_2 ds}.\n\\end{split}\n\\end{equation}\nCombining the inequalities in \\eqref{lemEstH_eq2},\n\\begin{align*}\n\\left(f_2+f_1 \\expo{\\int_{T_0}^t v_1 f_1}\\int_{T_0}^t v_1 f_2\\right) \\\\\n\\times \\expo{\\int_{T_0}^{T_e} v_2 \\left[f_2+f_1 \\expo{\\int_{T_0}^s v_1 f_1}\\int_{T_0}^s v_1 f_2\\right]} \\\\\n \\times \\int_{T_0}^t v_2 \\left[f_0+f_1 \\expo{\\int_{T_0}^s v_1 f_1}\\int_{T_0}^s v_1 f_0\\right]\\\\\n\\le g X_2(T_0)^{1-4 \\alpha} \\|f_0\\|_{\\infty} \\le g \\|f_0\\|_\\infty\n\\end{align*}\nusing again the key Lemma to get rid of the factor $e^{2\\alpha \\int_{T_0}^t Q_2}$, and in the\nvery last step we used $\\alpha\\in (0, 1\/4)$.\nIn view of \\eqref{eq_H}, \\eqref{lemEstH_eq3} now follows.\n\\end{proof}\nWe can now complete the proof of our main technical result, Theorem \\ref{mainTechnicalThm}. At time $t=T$, any\n$x\\in D$ is occupied by a particle, i.e. $x = \\mathbf{X}(T)$ for some particle trajectory. 
Let us write \n$$\n\\d_{x_j} \\omega(\\mathbf{X}(t), t) = \\xi_j(t) \n$$\nalong that particle trajectory, and so by \\eqref{ode_ineq5},\n\\begin{align}\\label{proofMainTechnicalThmEq1}\n|\\xi_1(t)|&\\leq (H f_0)(t) + e^{A} \\int_{T_0}^t |a| e^{-A} (H f_0)(s) ds\n\\end{align}\nFirst note that by Lemmas \\ref{lemFor_c_b}, \\ref{lemEstH}, \\ref{lemf2f0}, \\ref{lem11}\n\\begin{align}\n&e^{A} \\int_{T_0}^t v_1(s) (H f_0)(s) ds \\lesssim e^{A} g \\|f_0\\|_{\\infty} \\int_{T_0}^t v_1~ds\\notag\\\\ \n&\\,\\,\\, \\le g \\|f_0\\|_{\\infty} (M |\\log\\delta_2|+1) e^{A} \n\\le g R X_2(T_0)^{1-\\alpha} e^{A} (M |\\log\\delta_2|+1) \\label{proofMainTechnicalThmEq2}\n\\end{align} \nNote that the growing exponential factor $e^{\\alpha \\int_{T_0}^s Q_2 d\\tau}$ is cancelled by the decaying one $e^{-\\int_{T_0}^t Q_2 ds}$\nand that the integral can be estimated using the familiar splitting technique.\nMoreover again by Lemmas \\ref{lemEstH}, \\ref{lemf2f0}\n\\begin{align*}\n(H f_0)(t) \\leq g R X_2(T_0)^{1-\\alpha} \\left[1+e^{\\alpha \\int_{T_0}^t Q_2 ds}\\right]\n\\end{align*}\nand so in view of \\eqref{proofMainTechnicalThmEq1},\n\\begin{align}\n|\\xi_1(t)|X_1(t)^\\alpha &\\leq g R X_2(T_0)^{1-\\alpha} \\left[1+e^{\\alpha \\int_{T_0}^t Q_2 ds}\\right]\\delta_1^{\\alpha} e^{-\\alpha \\int_{T_0}^t Q_1 ds}\\notag\\\\\n&+ g R (M |\\log\\delta_2|+1) e^{A} X_2(T_0)^{1-\\alpha} \\delta_1^{\\alpha} e^{-\\alpha \\int_{T_0}^t Q_1 ds}\\notag\\\\\n&\\le g R \\delta_1^{\\alpha}\\label{proofMainTechnicalThmEq5} \n\\end{align}\nwhere we used the key Lemma \\ref{keyLemma} to cancel $e^A$\n$$e^{A} X_2(T_0)^{1-\\alpha}e^{-\\alpha \\int_{T_0}^t Q_1 ds}\\le g \\delta_2^{1-\\alpha}$$\nand combined the factor $\\delta_2^{1-\\alpha}$ with $(M |\\log\\delta_2|+1)$ to get a\nharmless generic factor. In fact, this was the most critical estimate in the whole proof, since the dangerous factor $e^{A}$ was barely \ncancelled. 
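Schematically, and assuming as suggested by \\eqref{lemf1_eq1} that the key Lemma \\ref{keyLemma} provides $X_2(T_0)\\, e^{\\int_{T_0}^{\\min\\{T, T_e\\}} Q_2\\, ds} \\le g\\, \\delta_2$, the chain behind this critical estimate reads\n$$\ne^{A}\\, X_2(T_0)^{1-\\alpha}\\, e^{-\\alpha \\int_{T_0}^t Q_1\\, ds} \\leq g\\, e^{(1-\\alpha)\\int_{T_0}^t Q_2\\, ds}\\, X_2(T_0)^{1-\\alpha} \\leq g \\left(X_2(T_0)\\, e^{\\int_{T_0}^{\\min\\{T, T_e\\}} Q_2\\, ds}\\right)^{1-\\alpha} \\leq g\\, \\delta_2^{1-\\alpha},\n$$\nwhere Lemma \\ref{lemExponential} was used to replace $e^{A}$ by $g\\, e^{\\int_{T_0}^t Q_2\\, ds}$ and $Q_1$ by $Q_2$, and $Q_2 \\geq \\beta > 0$ to enlarge the domain of integration (a sketch of the argument, not a verbatim restatement of the key lemma).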
\n\nWe now derive a similar estimate for $|\\xi_2(t)|X_2(t)^\\alpha$. From the second line of \\eqref{ode_ineq5}, \n\\begin{align}\\label{proofMainTechnicalThmEq3}\n|\\xi_2(t)| &\\leq R e^{-A} + e^{-A} \\| H f_0\\|_\\infty \\int_{T_0}^t e^{A}|b| ds+ e^{-A} \\int_{T_0}^t e^{2 A} |b| \\int_{T_0}^s |a| e^{-A} H f_0. \n\\end{align}\nFirst we observe that by Lemmas \\ref{lemEstH}, \\ref{lemf2f0}, \\ref{lem11} \n\\begin{align*}\n&\\|H f_0\\|_\\infty e^{-A} \\int_{T_0}^t e^{A}|b| ds \\leq \\|H f_0\\|_{\\infty} e^{-A} \\int_{T_0}^t v_2 ds\\\\\n &\\,\\,\\leq g \\delta_1^{1-\\alpha} R X_2(T_0)^{1-\\alpha} \\left[1+e^{\\alpha \\int_{T_0}^t Q_2 ds}\\right] e^{-\\int_{T_0}^t Q_2 ds} e^{\\alpha\\int_{T_0}^t Q_2 ds}\\\\\n&\\,\\,\\leq g \\delta_1^{1-\\alpha} R X_2(T_0)^{1-\\alpha}.\n\\end{align*} \n\nMoreover, using \\eqref{proofMainTechnicalThmEq2} and Lemma \\ref{lemf1}\n\\begin{align*}\n&e^{-A} \\int_{T_0}^t e^{2 A} |b| \\int_{T_0}^s |a| e^{-A} H f_0 \\leq g R X_2(T_0)^{1-\\alpha}(M|\\log \\delta_2|+1) e^{-A} \\int_{T_0}^t e^{2A}|b|\\\\\n&\\,\\,\\leq g R X_2(T_0)^{1-\\alpha} (M|\\log \\delta_2|+1) \\delta_1^{1-\\alpha} \\delta_2^{1-2\\alpha} e^{-A} e^{(1+\\alpha) \\int_{T_0}^t Q_2 ds}\\\\\n&\\,\\,\\leq g R \\delta_1^{1-\\alpha} X_2(T_0)^{1-\\alpha} e^{\\alpha \\int_{T_0}^t Q_2 ds}\\\\\n &\\,\\,\\leq g R \\delta_1^{1-\\alpha} X_2(T_0)^{1-2\\alpha}.\n\\end{align*}\nHence from \\eqref{proofMainTechnicalThmEq3},\n\\begin{align}\\label{proofMainTechnicalThmEq6}\n|\\xi_2(t)| \\leq g R\n\\end{align}\nand thus\n\\begin{align}\\label{proofMainTechnicalThmEq4}\n|\\xi_2(t)|X_2^{\\alpha} \\leq g R \\delta_2^{\\alpha}.\n\\end{align}\nInequalities \\eqref{proofMainTechnicalThmEq4} and \\eqref{proofMainTechnicalThmEq5} imply\n\\begin{align*}\nM_{D}(T) \\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, M) R \\delta_2^{\\alpha} =: \\mathcal{N}(R, \\alpha, \\beta, \\boldsymbol{\\delta}, M).\n\\end{align*}\nIt remains to show that $\\mathcal{N}$ is a harmless nonlinearity. 
To this end, let $\\alpha, \\beta, R$ be given.\nRecall that $g$ has the property that $g(\\alpha, \\beta, \\delta_2^p, \\delta_2, R)$ is bounded as $\\delta_2 \\to 0$,\nwith some $p > 0$. Hence\n\\begin{align*}\ng(\\alpha, \\beta, \\delta_2^p, \\delta_2, R) R \\delta_2^{\\alpha} < R\n\\end{align*}\n for sufficiently small $\\delta_2 > 0$.\n\n\n\\subsection{Proof of the main result}\nWe are now ready to prove Theorem \\ref{main}. So let \n$\\alpha\\in (0, \\frac{1}{4})$ and $R > 0$ be given.\nLet $\\mathcal{N}$ be the harmless nonlinear function from Theorem \\ref{mainTechnicalThm}. \nFix small positive $\\delta_1, \\delta_2$ such that the following set of inequalities holds:\n\\begin{align}\n\\delta_1, \\delta_2 \\leq \\rho, ~\\beta_0 - A |\\boldsymbol{\\delta}|^{1-\\alpha} R \\ge \\frac{1}{2} \\beta_0. \\label{mainThm_ineq3}\\\\\nM_{D}(0) < R, ~\\left|\\frac{\\d \\omega_0}{\\d x_1}\\right|\\le R x_2^{1-\\alpha}, ~\\left|\\frac{\\d \\omega_0}{\\d x_2}\\right|\\le R, \\label{mainThm_ineq4}\\\\\n\\mathcal{N}(R, \\alpha, \\frac{1}{2}\\beta_0, \\delta_1, \\delta_2, R) < R, \\label{mainThm_ineq2}\n\\end{align}\nwhere $A, \\beta_0, \\rho$ are the numbers from the definition of the hyperbolicity of the flow.\nNote that the box can be chosen so small that \\eqref{mainThm_ineq4} holds. This is a \nconsequence of $\\frac{\\d \\omega_0}{\\d x_2}(0, x_2)=0$ and the\n$C^2$-smoothness of $\\omega_0$.\n\nWe claim now that if the box $\\widehat{D}$ satisfies the controlled feeding conditions with parameter $R$,\nthen we have the bound\n\\begin{align}\\label{mainThm_ineq1}\nM_{D}(t) \\leq R\\quad (t\\in [0, \\infty))\n\\end{align}\nfor all times. Assume \\eqref{mainThm_ineq1} is not true for all times.\nSince $M_{D}(0) < R$, and the solution $\\omega$ is sufficiently smooth in time by assumption,\nthere exists a time $T > 0$ such that $M_{D}(t) < R$ holds\non $[0, T)$ and $M_{D}(T)=R$. 
Moreover, by \\eqref{mainThm_ineq3},\nthe flow is $\\frac{1}{2} \\beta_0$-hyperbolic in the box $D$ on the time interval $[0, T]$.\nObserve that because of \\eqref{condDelta} and the feeding conditions,\n $M(x,t)\\le M_{\\widehat{D}}(t)\\le R$ for all $x\\in D$ and $t\\in[0,T]$.\n\n\\eqref{mainThm_ineq4} implies that (ii) in the formulation of Theorem \\ref{mainTechnicalThm} holds.\nAlso, on $[0, T]$, we have $M_{D}(t) \\le R$.\nApplying Theorem \\ref{mainTechnicalThm} (choose $K=R$) and \\eqref{mainThm_ineq2}, we get\n$$\nM_{D}(T) \\leq \\mathcal{N}(R, \\alpha, \\frac{1}{2}\\beta_0, \\delta_1, \\delta_2, R) < R,\n$$\na contradiction. This proves \\eqref{mainThm_ineq1}. Now we prove the exponential bound on the gradient growth.\nAt an arbitrary time $t \\geq 0$, each $x \\in D$ is occupied by a particle $\\mathbf{X}(t)$ that has entered the box at some earlier\ntime $T_0$. The same calculation leading to \\eqref{proofMainTechnicalThmEq5}, using \\eqref{proofMainTechnicalThmEq1}\nand \\eqref{proofMainTechnicalThmEq2} yields \n\\begin{align*}\n\\left|\\frac{\\d \\omega}{\\d x_1}(X(t), t)\\right|&\\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, R) R \\delta_2^{1-\\alpha}\\left[1+e^{\\alpha \\int_{T_0}^t Q_1 ds}\\right]\n\\end{align*} \nfor all $t\\in [T_0, T_e]$ on account of \\eqref{mainThm_ineq1}.\n\nWe now apply Lemma \\ref{lemUpperboundQ}:\n\\begin{align*}\n\\int_{T_0}^t Q_1 ds &\\leq (C\\|\\omega\\|_{\\infty}+ R|\\boldsymbol{\\delta}|^{1-\\alpha}) (t-T_0) + \\|\\omega\\|_{\\infty} \\int_{T_0}^t |\\log d(\\mathbf{X})| ds \n\\end{align*}\nThe integral containing the logarithmic term can be estimated using the familiar splitting at $T_1$ and gives\n\\begin{align*}\n\\int_{T_0}^t |\\log d(\\mathbf{X})|~ds &\\leq C (t - T_0)|\\log \\delta_2| + C(\\alpha, \\beta). 
\n\\end{align*}\nThus, finally, \n\\begin{align*}\n\\left|\\frac{\\d \\omega}{\\d x_1}(\\mathbf{X}(t), t)\\right| \\leq g(\\alpha, \\beta, \\boldsymbol{\\delta}, R) R \\delta_2^{1-\\alpha} e^{\\alpha(C\\|\\omega\\|_{\\infty}+ R|\\boldsymbol{\\delta}|^{1-\\alpha}) t}.\n\\end{align*}\nThe derivative in $x_2$-direction is bounded by \\eqref{proofMainTechnicalThmEq6}.\nThis concludes the proof of Theorem \\ref{main}.\n\n\\section{Acknowledgements}\nThe authors cordially thank A. Kiselev for suggesting the problem and a great number of helpful discussions. \nVH would like to express his gratitude to the Deutsche Forschungsgemeinschaft (German Research Foundation),\nwithout whose financial support (FOR HO 5156\/1-1) the present research could not have been undertaken.\n\n\\section{Appendix}\n\n\n\\subsection{Appendix A}\n\n\\begin{prop}\\label{prop:A1}\nFor all $x,y\\in{[0, 1]^2},~x\\neq y$ the following estimates hold.\n\\begin{align}\n\\left|G_i^k(x,y)\\right|&\\lesssim|y-x|^{-1}x_i^{-1} \\\\\n\\left|\\frac{\\d G_i^k}{\\d x_j}(x,y)\\right|&\\lesssim|y-x|^{-3}\n\\end{align}\n$(i, k = 1, 2)$.\n\\end{prop}\nThe proofs are straightforward calculations based on the identities in Appendix B, and the reflection identities:\n\\begin{align*}\n|y-\\til x| \\geq |y-x|, |y-\\bar x| \\geq |y-x|, |y+x| \\geq |y-\\bar x| \n\\end{align*}\nholding for $x, y\\in {[0, 1]^2}$.\nAlso, use the obvious inequalities\n\\begin{align*}\nx_2 \\leq |y-\\bar x|, x_1 \\le |y-\\til x|.\n\\end{align*}\n\nWe observe some useful relations for the kernels $G_i^k$ and their derivatives. Let $G$ stand for any $G_i^k$ and let\n$$\n\\Omega_x = (-x_1, 1-x_1)\\times(-x_2, 1-x_2).\n$$\n$G$ has the form $G(x, y) = \\widetilde G(y-x, x, y)$, where $\\widetilde G(z, \\eta, \\mu)$ is \nsmooth provided $\\eta, \\mu\\in (0,1)^2, z\\in \\Omega_x \\setminus \\{0\\}$. 
For example, if $G=G_1^1$ then\n$$\n\\widetilde G(z, \\eta, \\mu)=\\frac{\\mu_1 z_1}{|z|^2 |\\mu-\\til \\eta|^2}.\n$$\nNote that for $x\\neq y, x, y\\in (0, 1)^2$,\n\\begin{align}\n(\\d_{x_j} G)(x, y) = (\\d_{\\eta_j}\\widetilde G)(y-x, x, y) - (\\d_{z_j}\\widetilde G)(y-x, x, y)\\label{dev_eq4}\\\\\n(\\d_{y_j} G)(x, y) = (\\d_{\\mu_j}\\widetilde G)(y-x, x, y) +(\\d_{z_j}\\widetilde G)(y-x, x, y)\\notag\n\\end{align}\nso that\n\\begin{equation}\\label{relDevXY}\n(\\d_{x_j} G)(x, y) = -(\\d_{y_j} G)(x, y) +(\\d_{\\eta_j}\\widetilde G)(y-x, x, y) + (\\d_{\\mu_j}\\widetilde G)(y-x, x, y).\n\\end{equation}\nMoreover, we always have\n\\begin{align}\\label{dev_ineq1}\n|\\widetilde G(z, x, y)|, \\left|\\frac{\\d \\widetilde G}{\\d \\eta_j}(z, x, x+z)\\right|, \\left|\\frac{\\d \\widetilde G}{\\d \\mu_j}(z, x, x+z)\\right|\\leq C(\\eta)|z|^{-1},\n\\end{align}\nwhere $C(\\eta)$ is uniformly bounded if $\\eta$ varies in a compact subset of $(0, 1)^2$.\n\n\\begin{prop}\\label{prop:A7}\n\\begin{equation}\n\\frac{\\d G_i^k}{\\d x_j}=-\\frac{\\d G_i^k}{\\d y_j}+x_i^{-2}\\delta_{ij}\\mathcal{O}(|y-x|^{-1})\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nThis is a straightforward calculation using \\eqref{relDevXY}. \n\\end{proof}\n\n\n\n\n\\begin{prop}[Derivatives of $Q_i$]\\label{prop:A2}\n\\begin{align*}\n\\frac{\\d Q_i}{\\d x_j}&=\nc_0P.V.\\!\\!\\int_{[0,1]^2}\\left[\\frac{\\d G_i^1}{\\d x_j}+\\frac{\\d G_i^2}{\\d x_j}\\right] \\omega(y)~dy\\\\\n& - \\omega(x)\\lim_{\\delta \\to 0^+}\\int_{\\d B(\\delta,x)} G_i^i\\cdot\\nu_j~d\\sigma+\\frac{\\d Q_i^3}{\\d x_j}\n\\end{align*}\n\\end{prop}\n\n\\begin{proof}\nWrite $(G_i^1+G_i^2)(x, y):=G(x, y)$. $G$ has again the form $G(x, y) = \\widetilde G(y-x, x, y)$, where $\\widetilde G(z, \\eta, \\mu)$ is \nsmooth provided $\\eta, \\mu\\in (0,1)^2, z\\in \\Omega_x \\setminus \\{0\\}$. 
Also \\eqref{dev_eq4}, \\eqref{dev_ineq1} hold for $\\widetilde G$.\nNow \n\\begin{align}\n \\label{dev_eq3}\\frac{\\d}{\\d x_j} \\int_{\\Omega_x} \\widetilde G(z, x, x+z) \\omega(x+z)~dz= \\int_{\\Omega_x} \\widetilde G(z, x, x+z) \\frac{\\d \\omega}{\\d z_j}(x+z)~dz\\\\\n+\\int_{\\Omega_x} \\d_{x_j}(\\widetilde G(z, x, x+z)) \\omega(x+z)~dz\\notag \\\\\n- \\int_{\\d \\Omega_x} \\widetilde G(z, x, x+z) \\omega(x+z)\\nu_j~d\\sigma\\notag\n\\end{align}\nwhere $\\nu_j$ denotes the $j$-th component of the unit outer normal. This is a standard differentiation result (note the\nbounds \\eqref{dev_ineq1}).\n\nNow consider the first integral on the right-hand side of \\eqref{dev_eq3}, exclude the singularity and integrate by parts:\n\\begin{align}\\label{dev_eq5}\n\\int_{\\Omega_x} \\widetilde G(z, x, x+z) \\frac{\\d \\omega}{\\d z_j}(x+z)~dz =- \\int_{\\Omega_x\\setminus B(0, \\delta)} \\d_{z_j}(\\widetilde G(z, x, x+z)) \\omega(x+z)~dz\\\\\n+ \\int_{\\d \\Omega_x} \\widetilde G(z, x, x+z) \\omega(x+z)\\nu_j~d\\sigma \\notag\\\\\n- \\int_{\\d B(0, \\delta)} \\widetilde G(z, x, x+z) \\omega(x+z)\\nu_j~d\\sigma\\notag\n\\end{align}\nObserve that by \\eqref{dev_eq4},\n\\begin{align}\\notag\n-\\d_{z_j}(\\widetilde G(z, x, x+z))+\\d_{x_j}(\\widetilde G(z, x, x+z)) = (\\d_{x_j}G)(x, x+z).\n\\end{align}\nSo combining \\eqref{dev_eq3} and \\eqref{dev_eq5}, we finally get\n\\begin{align*}\n\\frac{\\d}{\\d x_j} \\int_{\\Omega_x} \\widetilde G(z, x, x+z) \\omega(x+z)~dz = -\\int_{\\d B(0, \\delta)} \\widetilde G(z, x, x+z) \\omega(x+z)\\nu_j~d\\sigma\\\\\n+ \\int_{\\Omega_x} (\\d_{x_j}G)(x, x+z) \\omega(x+z)~dz + \\int_{B(0, \\delta)} \\d_{x_j}(\\widetilde G(z, x, x+z)) \\omega(x+z)~dz.\n\\end{align*}\nReplacing $x+z$ by $y$ and sending $\\delta\\to 0$ yields the statement.\n\\end{proof}\n\nRecall that \n\\begin{equation}\nd_1(x) = \\min\\{x_1, x_2\\}\n\\end{equation}\nis the distance of the point $x$ to the coordinate axes.\nObserve also that\n\\begin{align}\\label{relXY}\n\\frac{1}{2} x_r \\le y_r \\le 
\\frac{3}{2}x_r \n\\end{align}\nfor $y\\in B(\\frac{1}{2} d_1(x), x), r=1, 2$. For the entire appendix, we shall write\n$M = M_{\\widehat{D}}$, i.e.\n\\begin{equation}\n\\left|\\frac{\\d\\omega}{\\d x_j}(x)\\right| \\le M x_j^{-\\alpha}\\quad (x \\in \\widehat{D}, j=1, 2)\n\\end{equation}\nholds, implying also the inequalities\n\\begin{equation}\\label{estOm}\n|\\omega(x)| \\lesssim M x_j^{1-\\alpha} \\quad (x\\in \\widehat{D}, j=1, 2)\n\\end{equation}\n(by the fact that $\\omega$ vanishes identically on the coordinate axes).\nFigure \\ref{fig2} illustrates the domains we need in the proof of the following propositions. \n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[scale=1]{figure2.pdf}%\n\\end{center}\n\\caption{Domains of integration in Proposition \\ref{prop:A3}.} \\label{fig2}\n\\end{figure}\n\\begin{prop}\\label{prop:A3} Let $I=B(\\frac{1}{2} d_1(x),x)\\cap \\widehat{D}$. Then\n\\begin{equation}\n\\left|P.V.\\!\\!\\int_{I}\\frac{\\d(G_i^1+G_i^2)}{\\d x_j}\\omega(y)~dy\\right| \\lesssim M x_i^{-\\alpha}(1+\\delta_{j2}|\\log d(x)|)\n\\end{equation}\n\\end{prop}\n\n\\begin{proof}\nFirst let $i\\neq j$ and $0<\\delta<\\frac{1}{2} d_1(x)$. By Proposition \\ref{prop:A7} and integration by parts,\n\\begin{equation}\\label{proofOfPropA3eq1}\n\\begin{split}\n&\\left|\\int_{I\\setminus B(\\delta,x)}\\frac{\\d G_i^k}{\\d x_j}\\omega(y)~dy\\right|=\\left|\\int_{I\\setminus B(\\delta,x)}\\frac{\\d G_i^k}{\\d y_j}\\omega(y)~dy\\right|\\\\\n&\\lesssim\\left|\\int_{I\\setminus B(\\delta,x)}G_i^k\\frac{\\d \\omega}{\\d y_j}(y)~dy\\right|+ \\left|\\int_{\\d I} G_i^k \\omega(y)\\nu_j~d\\sigma \\right| \n+ \\left|\\int_{\\d B(\\delta,x)}G_i^k \\omega(y)\\nu_j~d\\sigma\\right|\n\\end{split}\n\\end{equation}\nwhere $\\nu=(\\nu_1,\\nu_2)$ is the unit outward pointing normal on $\\d I$. 
We first take care of the integral over $\\d I$.\nObserve that for $x\\in D$, $\\d I$ is either a full circle or the union of a circular arc and a flat part $\\Sigma$. Hence\n\\begin{align*}\n\\left|\\int_{\\d I} G_i^k \\nu_j~d\\sigma\\right|&\\leq \\int_{\\Sigma} |G_i^k| \\delta_{2j} ~d\\sigma +\\int_{\\d B(0, \\frac{1}{2} d_1(x))} |G_i^k(\\varphi)|~d\\sigma\n\\end{align*}\nFor all sufficiently small $\\varepsilon>0$,\n\\begin{align*}\n\\int_{\\Sigma} |G_i^k|~d\\sigma &\\le\\int_{\\Sigma\\cap \\{|y_1-x_1|\\le \\varepsilon\\}} |G_i^k|~d\\sigma+\\int_{\\Sigma\\cap \\{|y_1-x_1|\\ge \\varepsilon\\}} |G_i^k|~dy_1\\\\\n&\\lesssim\\int_{\\Sigma\\cap \\{|y_1-x_1|\\le \\varepsilon\\}} \\frac{x_i^{-1}}{|y-x|}~dy_1\n+ \\int_{\\Sigma\\cap \\{|y_1-x_1|\\ge \\varepsilon\\}} \\frac{x_i^{-1}}{|y-x|}~dy_1\\\\\n&\\lesssim x_i^{-1}\\int_{x_1-\\varepsilon}^{x_1+\\varepsilon} \\frac{1}{|x_2-\\delta_2|}~dy_1\n+ x_i^{-1}\\int_{1>|x_1-y_1|>\\varepsilon} \\frac{1}{|y_1-x_1|}~dy_1\\\\\n&\\lesssim \\frac{x_i^{-1}\\varepsilon}{|x_2-\\delta_2|}+ x_i^{-1}\\int_{\\varepsilon}^1 \\frac{1}{y_1}~dy_1\\\\\n&\\lesssim \\frac{x_i^{-1}\\varepsilon}{|x_2-\\delta_2|}+ x_i^{-1}|\\log\\varepsilon|.\n\\end{align*}\nHere we used Proposition \\ref{prop:A1} again.\nChoosing $\\varepsilon=|x_2-\\delta_2|=d(x)$ we get\n\\begin{align*}\n\\int_{\\Sigma} |G_i^k|~d\\sigma\n&\\lesssim x_i^{-1}(1+|\\log d(x)|).\n\\end{align*}\nThe other part is estimated by (using Proposition \\ref{prop:A1} again)\n\\begin{align*}\n\\int_{\\d B(0, \\frac{1}{2} d_1(x))} |G_i^k(\\varphi)|~d\\sigma\n&\\lesssim\nx_i^{-1}\\int_{0}^{2\\pi} |y-x|^{-1} d_1(x) ~d\\varphi\n\\lesssim x_i^{-1}.\n\\end{align*}\nTherefore we get for the integral over $\\d I$, using \\eqref{estOm} and \\eqref{relXY}:\n\\begin{align*}\n\\left|\\int_{\\d I} G_i^k \\omega(y)\\nu_j~d\\sigma \\right|&\\lesssim \\int_{\\d I} \\left| G_i^k\\nu_j \\right| \\left|\\omega(y) \\right|~d\\sigma\n\\lesssim M \\int_{\\d I} \\left|G_i^k\\nu_j \\right| 
y_i^{1-\\alpha}~d\\sigma\\\\\n&\\lesssim M x_i^{1-\\alpha}\\int_{\\d I} \\left|G_i^k\\nu_j \\right| ~d\\sigma\n\\lesssim M x_i^{1-\\alpha}x_i^{-1}(1+\\delta_{j2}|\\log d(x)|).\n\\end{align*}\nSimilar estimates yield that the contribution from the integral over \n$\\d B(\\delta,x)$ is $\\lesssim M x_i^{-\\alpha}$, with universal constants independent of $\\delta$.\nFor the remaining integral we have, using \\eqref{relXY}:\n\\begin{align*}\n\\left|\\int_{I\\setminus B(\\delta,x)}G_i^k\\frac{\\d \\omega}{\\d y_j}(y)~dy\\right|&\\lesssim M x_i^{-\\alpha}\\int_{I\\setminus B(\\delta,x)}\\left|G_i^k\\right|~dy\n\\lesssim M x_i^{-\\alpha} \\int_{I\\setminus B(\\delta,x)}\\left|y-x\\right|^{-1}x_i^{-1}~dy\\\\\n&\\lesssim M x_i^{-\\alpha} x_i^{-1}\\int_{\\delta}^{d_1(x)}\\frac{1}{\\rho}\\rho ~d\\rho\\\\\n&\\lesssim M x_i^{-\\alpha} x_i^{-1}d_1(x).\n\\end{align*}\nSince $d_1(x)\\le x_i$ we get:\n\\begin{equation*}\n\\left|\\int_{I\\setminus B(\\delta,x)}G_i^k\\frac{\\d \\omega}{\\d y_j}(y)~dy\\right|\\lesssim M x_i^{-\\alpha}.\n\\end{equation*}\nThis concludes the case $i\\neq j$, since all the estimates are independent of $\\delta$.\n\nNow let $i=j$. We argue as before, but in \\eqref{proofOfPropA3eq1} now an additional term of the\nform\n$$\nx_i^{-2} \\int_{I\\setminus B(\\delta, x)} \\mathcal{O}(|y-x|^{-1}) |\\omega(y)| dy\n$$\nappears, for which we obtain the estimate $\\lesssim M x_i^{-\\alpha}$ by the same methods.\n\\end{proof}\n\\begin{lem}\\label{lem:A2} Let $\\gamma\\in (0,1)$ and $x_1\\geq 0$. Then\n\\begin{equation}\ny_1^\\gamma\\le|y_1-x_1|^\\gamma+x_1^\\gamma\\quad (y_1\\geq 0).\\notag\n\\end{equation}\n\\end{lem}\n\\begin{proof} If $y_1\\le x_1$, the inequality is obvious. 
For $y_1>x_1$ we have $y_1\\ge y_1-x_1>0$ and hence \n$\\gamma y_1^{\\gamma-1}\\le\\gamma(y_1-x_1)^{\\gamma-1}$ so that\n$$\ny_1^\\gamma-x_1^\\gamma\\le\\gamma\\int_{x_1}^{y_1}s^{\\gamma-1}~ds\\le\\gamma\\int_{x_1}^{y_1}(s-x_1)^{\\gamma-1}~ds=(y_1-x_1)^\\gamma.\n$$\n\\end{proof}\n\\begin{prop}\\label{prop:A8} Let $II=\\widehat{D} \\setminus I$ with $I$ as in proposition \\ref{prop:A3}. Then\n\\begin{equation}\n\\left| \\int_{II}\\frac{\\d G_i^k}{\\d x_j}\\omega(y)~dy\\right| \\lesssim M x_i^{-\\alpha}\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nUsing proposition \\ref{prop:A1}, \\eqref{estOm} and lemma \\ref{lem:A2} we have\n\\begin{align*}\n&\\left| \\int_{II}\\frac{\\d G_i^k}{\\d x_i}\\omega(y)~dy\\right| \\lesssim \\int_{II} \\left|\\frac{\\d G_i^k}{\\d x_i}\\right||\\omega(y)|~dy\\\\\n&\\lesssim \\int_{II} |y-x|^{-3}y_i^{1-\\alpha}~dy\\lesssim \\int_{II} |y-x|^{-3}(|y_i-x_i|^{1-\\alpha}+x_i^{1-\\alpha})\\\\\n&\\lesssim \\int_{II} |y-x|^{-2-\\alpha}~dy+x_i^{1-\\alpha}\\int_{II} |y-x|^{-3}\\\\\n&\\lesssim \\int_{\\frac{1}{2} d_1(x)}^\\infty\\frac{1}{\\rho^{2+\\alpha}}~\\rho~d\\rho\n+ x_i^{1-\\alpha}\\int_{\\frac{1}{2} d_1(x)}^\\infty\\frac{1}{\\rho^{3}}~\\rho~d\\rho\\\\\n&\\lesssim d_1(x)^{-\\alpha}+x_i^{1-\\alpha}d_1(x)^{-1}\\lesssim x_i^{-\\alpha}+x_i^{1-\\alpha}x_i^{-1}\\lesssim x_i^{-\\alpha}.\n\\end{align*}\n\\end{proof}\n\\begin{prop}\\label{prop:A6}\nFor $i\\neq j$, we have\n\\begin{equation}\n\\left|\\frac{\\d (G_{i}^1+G_{i}^2)}{\\d x_j}\\right| \\lesssim x_i^{-\\gamma_1-\\gamma_2} x_j^{\\gamma_2} |y-x|^{-(3 - \\gamma_1)}.\n\\end{equation} \nwhere $\\gamma_1, \\gamma_2\\in [0, 1], \\gamma_1+\\gamma_2\\leq 1$.\n\\end{prop}\n\\begin{proof}\nThe proof of the proposition is based on a cancellation property of the kernels $G_2^1$ and $G_2^2$ and \nrequires quite tedious computations. 
First calculate the sum of $\\d_{x_1} G_2^2$ and $\\d_{x_1} G_2^1$ and see that\nit can be grouped into the three expressions\n\\begin{align*}\n\\frac{y_2(y_1-x_1)^2}{|y-x|^4|y-\\bar x|^2}-\\frac{y_2(y_1+x_1)^2}{|y+x|^4|y-\\til x|^2} &= (A)\\\\\n\\frac{y_2(y_1-x_1)^2}{|y-x|^2|y-\\bar x|^4}-\\frac{y_2(y_1+x_1)^2}{|y-\\til x|^2|y + x|^4} &= (B)\\\\\n\\frac{y_2}{|y - \\til x|^2|y+x|^2}-\\frac{y_2}{|y-x|^2|y-\\bar x|^2} &= (C)\n\\end{align*}\nFor convenience, these can be further written as\n\\begin{align*}\n\\frac{y_2(y_1-x_1)^2}{|y-\\bar x|^2}[|y-x|^{-4}-|y-\\til x|^{-4}]+\\frac{y_2}{|y-\\til x|^4}\\left(\n\\frac{(y_1-x_1)^2}{|y-\\bar x|^2} - \\frac{(y_1+x_1)^2}{|y+x|^2}\\right) \\\\= (1)+(2)\\\\\n\\frac{y_2}{|y-\\bar x|^4}\\left[\\frac{(y_1-x_1)^2}{|y-x|^2}-\\frac{(y_1+x_1)^2}{|y-\\til x|^2} \\right]\n+\\frac{y_2}{|y-\\til x|^2}\\left[\\frac{(y_1+x_1)^2}{|y-\\bar x|^4}-\\frac{(y_1+x_1)^2}{|y+x|^4}\\right] \\\\= (3)+(4)\\\\\n\\frac{y_2}{|y+x|^2}\\left[|y-\\til x|^{-2}-|y - x|^{-2}\\right]+\\frac{y_2}{|y-x|^2}[|y+ x|^{-2}-|y - \\bar x|^{-2}]\n\\\\= (5)+(6)\n\\end{align*}\nLet us estimate expression (1). 
Using $|y-\\til x|^2 - |y-x|^2 = 4 x_1 y_1$ and the relations\n$y_2 \\leq |y-\\bar x|, (y_1-x_1)^2\\leq |y-x|^2, y_1\\leq (y_1+x_1)$, we arrive at\n$$\n|(1)| \\leq \\frac{x_1}{|y-\\bar x||y-\\til x|}[|y-x|^{-2}+|y-\\til x|^{-2}].\n$$\nWrite $\\gamma = \\gamma_1+\\gamma_2$ and noting that $|y-\\bar x|\\geq x_2^{\\gamma}|y-\\bar x|^{1-\\gamma}, |y-\\til x|\\geq x_1^{1-\\gamma_2}|y-\\til x|^{\\gamma_2}$ \nand the reflection relations $|y-\\til x|, |y - \\bar x|\\geq |y-x|$ for $y\\in {[0, 1]^2}$, we arrive at\n$$\n|(1)| \\leq x_2^{-\\gamma} x_1^{\\gamma_2} |y-x|^{-(3 - \\gamma + \\gamma_2)}.\n$$\n\nTo estimate (2), we use the relation\n$$\n|y+x|^2 (y_1 - x_1)^2 - |y-\\bar x|^2 (y_1+x_1)^2 = - 4 x_1 y_1 (y_2+x_2)^2\n$$\nand similar estimations as above to arrive at\n\\begin{align*}\n|(2)| \\lesssim \\frac{y_2 y_1 x_1 (y_2+x_2)^2}{|y-\\til x|^4 |y-\\bar x|^2 |y+x|^2} \\leq \\frac{x_1}{|y-\\bar x||y-\\til x|^3}\\\\\n\\leq x_2^{-\\gamma} x_1^{\\gamma_2} |y-x|^{-3 - \\gamma_2 + \\gamma}.\n\\end{align*}\n\n(3) is similar to the above. (4) is slightly different, since expressions containing $x_1^2$ may appear. In (4),\nwe use\n$$\n|y+x|^4-|y-\\bar x|^4 \\lesssim [|y_1-x_1|x_1+x_1^2][|y+x|^2+|y-\\bar x|^2]\n$$\nto get\n\\begin{align*}\n|(4)|\\leq \\frac{|y_1-x_1|x_1}{|y-\\til x||y-\\bar x|^3|y+x|}+\\frac{x_1^2}{|y-\\til x||y-\\bar x|^3|y+x|}\\\\\n+\\frac{|y_1-x_1|x_1}{|y-\\til x||y-\\bar x||y+x|^3}+\\frac{x_1^2}{|y-\\til x||y-\\bar x ||y+x|^3}.\n\\end{align*}\nThe terms not containing $x_1^2$ are handled in a familiar manner. 
Concerning the others, we have for example\n\\begin{align*}\n&\\frac{x_1^2}{|y-\\til x||y-\\bar x|^3|y+x|}\n\\leq x_2^{-\\gamma}x_1^{\\gamma_2} \\frac{(x_1+y_1)}{|y-\\til x|^{\\gamma_2} |y-\\bar x|^{3-\\gamma}|y+x|}\\\\\n&\\leq x_2^{-\\gamma}x_1^{\\gamma_2}|y-x|^{-3+\\gamma-\\gamma_2}.\n\\end{align*} \nThe treatment of (5), (6) parallels the above.\n\\end{proof}\n\n\n\\begin{prop}\\label{prop:A9}\nFor $i\\neq j$,\n\\begin{align*}\n\\left| \\int_{{[0, 1]^2}\\setminus \\widehat{D}}\\left[\\frac{\\d G_i^1}{\\d x_j}+\\frac{\\d G_i^2}{\\d x_j}\\right]\\omega(y)~dy\\right| \n\\leq ~~C(\\gamma_1,\\gamma_2)x_i^{-(\\gamma_1+\\gamma_2)}x_j^{\\gamma_2} d(x)^{-1+\\gamma_1}\n\\end{align*}\nwhere $\\gamma_1\\in (0, 1), \\gamma_2\\in [0, 1), \\gamma_1+\\gamma_2 < 1$. Also,\n\\begin{align*}\n\\left| \\int_{{[0, 1]^2}\\setminus \\widehat{D}}\\left[\\frac{\\d G_i^1}{\\d x_i}+\\frac{\\d G_i^2}{\\d x_i}\\right]\\omega(y)~dy\\right| \n\\leq ~~C(\\gamma)x_i^{-\\gamma} d(x)^{-1+\\gamma}\n\\end{align*}\n\\end{prop}\n\\begin{proof}\nAs a preparation, we note that for $x\\in D$, $0<\\gamma_1< 1$,\n\\begin{equation}\\label{prop_a9_ineq1}\n\\int_{{[0, 1]^2}\\setminus \\widehat{D}} |y-x|^{-3 + \\gamma_1} \\lesssim d(x)^{-1+\\gamma_1}.\n\\end{equation}\nThis follows from\n\\begin{align*}\n\\int_{{[0, 1]^2}\\setminus \\widehat{D}} |y-x|^{-3 + \\gamma_1} &\\leq\\int_{{[0, 1]^2} \\setminus B(x, d(x))} |y-x|^{-3 + \\gamma_1}\\\\\n&\\leq \\int_{B(x, 10)\\setminus B(x, d(x))}|y-x|^{-3 + \\gamma_1},\n\\end{align*}\nsince ${[0, 1]^2}\\setminus \\widehat{D}$ is contained in ${[0, 1]^2} \\setminus B(x, d(x))$ because of $\\delta_2 < \\delta_3, \\delta_1 < \\delta_2$.\n\nFrom Proposition \\ref{prop:A6} we get in case $i\\neq j$\n\\begin{align*}\n\\left| \\int_{{[0, 1]^2}\\setminus \\widehat{D}}\\left[\\frac{\\d G_i^1}{\\d x_j}+\\frac{\\d G_i^2}{\\d x_j}\\right]\\omega(y)~dy\\right| &\\leq\nx_i^{-(\\gamma_1+\\gamma_2)} x_j^{\\gamma_2} \\int_{{[0, 1]^2}\\setminus \\widehat{D}} |y-x|^{-3+\\gamma_1} 
\\\\\n&\\leq x_i^{-(\\gamma_1+\\gamma_2)} x_j^{\\gamma_2} d(x)^{-1+\\gamma_1},\n\\end{align*}\naccording to \\eqref{prop_a9_ineq1}.\n\nFor the second inequality of the Proposition, we note that (see Proposition \\ref{prop:A1})\n\\begin{align*}\n\\left|\\frac{\\d G_i^k}{\\d x_i}\\right|&\\lesssim x_i^{-\\gamma} |y-x|^{-3+\\gamma},\n\\end{align*}\nand use \\eqref{prop_a9_ineq1}.\n\\end{proof}\n\n\n\\begin{prop}\\label{prop:A4}For $x\\in D$,\n\\begin{align*}\n\\left|P.V.\\!\\!\\int_{{[0, 1]^2}}\\left[\\frac{\\d G_i^1}{\\d x_j}+\\frac{\\d G_i^2}{\\d x_j}\\right] \\omega(y)~dy\\right|&\\leq\n M x_i^{-1} x_j^{1-\\alpha}(1+\\delta_{j2}|\\log d(x)|)\\\\\n&\\,\\, + C(\\gamma_1, \\gamma_2) x_i^{-(\\gamma_1+\\gamma_2)}x_j^{\\gamma_2} d(x)^{-1+\\gamma_1} \\quad (i\\neq j)\\\\\n\\left|P.V.\\!\\!\\int_{{[0, 1]^2}}\\left[\\frac{\\d G_i^1}{\\d x_i}+\\frac{\\d G_i^2}{\\d x_i}\\right] \\omega(y)~dy\\right|&\\leq \n M x_i^{-\\alpha}(1+\\delta_{i2}|\\log d(x)|) + C(\\gamma_1) x_i^{-\\gamma_1}d(x)^{-1+\\gamma_1}\n\\end{align*}\nwith $\\gamma,\\gamma_1\\in (0, 1), \\gamma_2\\in [0, 1), \\gamma_1+\\gamma_2 < 1$. \n\\end{prop}\n\\begin{proof}\nWe split the integral into a principal value integral over $\\widehat{D}$ and a convergent integral over ${[0, 1]^2} \\setminus \\widehat{D}$. The integral over $\\widehat{D}$ is \nfurther split in to integrals over the domains $I=B(x, \\frac{1}{2} d_1(x))$ and $II= \\widehat{D}\\setminus I$, which are estimated by Propositions\n\\ref{prop:A3} and \\ref{prop:A8}. 
The part over ${[0, 1]^2} \\setminus \\widehat{D}$ is estimated by Proposition \\ref{prop:A9}.\n\\end{proof}\n\n\n\\subsection{Appendix B}\n\n\\begin{prop}\\label{propAllG} The following relations hold:\n\\begin{align*}\n\\frac{\\d G_1^1}{\\d x_1}&=-\\frac{2y_1(y_1+x_1)(y_2-x_2)}{|y-x|^2|y-\\til x|^4}\n+\\frac{2y_1(y_1-x_1)(y_2-x_2)}{|y-x|^4|y-\\til x|^2}\\\\\n\\frac{\\d G_1^1}{\\d x_2}&=\\frac{2y_1(y_2-x_2)^2}{|y-x|^2|y-\\til x|^4}\n+\\frac{2y_1(y_2-x_2)^2}{|y-x|^4|y-\\til x|^2}-\\frac{y_1}{|y-x|^2|y-\\til x|^2}\\\\\n\\frac{\\d G_1^1}{\\d y_1}&=-\\frac{2y_1(y_1+x_1)(y_2-x_2)}{|y-x|^2|y-\\til x|^4}\n-\\frac{2y_1(y_1-x_1)(y_2-x_2)}{|y-x|^4|y-\\til x|^2}+\\frac{y_2-x_2}{|y-x|^2|y-\\til x|^2}\\\\\n\\frac{\\d G_1^1}{\\d y_2}&=-\\frac{2 y_1(y_2-x_2)^2}{|y-x|^2|y-\\til x|^4}\n-\\frac{2y_1(y_2-x_2)^2}{|y-x|^4|y-\\til x|^2}+\\frac{y_1}{|y-x|^2|y-\\til x|^2}\n\\end{align*}\n\\begin{align*}\n\\frac{\\d G_1^2}{\\d x_1}&=-\\frac{2y_1(y_1+x_1)(y_2+x_2)}{|y+x|^4|y-\\bar x|^2}\n+\\frac{2y_1(y_1-x_1)(y_2+x_2)}{|y+x|^2|y-\\bar x|^4}\\\\\n\\frac{\\d G_1^2}{\\d x_2}&=-\\frac{2y_1(y_2+x_2)^2}{|y+x|^4|y-\\bar x|^2}\n-\\frac{2y_1(y_2+x_2)^2}{|y+x|^2|y-\\bar x|^4}+\\frac{y_1}{|y+x|^2|y-\\bar x|^2}\\\\\n\\frac{\\d G_1^2}{\\d y_1}&=-\\frac{2y_1(y_1+x_1)(y_2+x_2)}{|y+x|^4|y-\\bar x|^2}\n-\\frac{2y_1(y_1-x_1)(y_2+x_2)}{|y+x|^2|y-\\bar x|^4}+\\frac{y_2+x_2}{|y+x|^2|y-\\bar x|^2}\\\\\n\\frac{\\d G_1^2}{\\d y_2}&=-\\frac{2y_1(y_2+x_2)^2}{|y+x|^4|y-\\bar x|^2}\n-\\frac{2y_1(y_2+x_2)^2}{|y+x|^2|y-\\bar x|^4}+\\frac{y_1}{|y+x|^2|y-\\bar x|^2}\n\\end{align*}\n\\begin{align*}\n\\frac{\\d G_2^1}{\\d x_1}&=-\\frac{2y_2(y_1+x_1)^2}{|y+x|^4|y-\\til x|^2}\n-\\frac{2y_2(y_1+x_1)^2}{|y+x|^2|y-\\til x|^4}+\\frac{y_2}{|y+x|^2|y-\\til x|^2}\\\\\n\\frac{\\d G_2^1}{\\d x_2}&=-\\frac{2y_2(y_1+x_1)(y_2+x_2)}{|y+x|^4|y-\\til x|^2}\n+\\frac{2y_2(y_1+x_1)(y_2-x_2)}{|y+x|^2|y-\\til x|^4}\\\\\n\\frac{\\d G_2^1}{\\d y_1}&=-\\frac{2y_2(y_1+x_1)^2}{|y+x|^4|y-\\til x|^2}\n-\\frac{2y_2(y_1+x_1)^2}{|y+x|^2|y-\\til 
x|^4}+\\frac{y_2}{|y+x|^2|y-\\til x|^2}\\\\\n\\frac{\\d G_2^1}{\\d y_2}&=-\\frac{2y_2(y_1+x_1)(y_2+x_2)}{|y+x|^4|y-\\til x|^2}\n-\\frac{2y_2(y_1+x_1)(y_2-x_2)}{|y+x|^2|y-\\til x|^4}+\\frac{y_1+x_1}{|y+x|^2|y-\\til x|^2}\n\\end{align*}\n\\begin{align*}\n\\frac{\\d G_2^2}{\\d x_1}&=\\frac{2y_2(y_1-x_1)^2}{|y-x|^2|y-\\bar x|^4}\n+\\frac{2y_2(y_1-x_1)^2}{|y-x|^4|y-\\bar x|^2}-\\frac{y_2}{|y-x|^2|y-\\bar x|^2},\\\\\n\\frac{\\d G_2^2}{\\d x_2}&=-\\frac{2y_2(y_1-x_1)(y_2+x_2)}{|y-x|^2|y-\\bar x|^4}\n+\\frac{2y_2(y_1-x_1)(y_2-x_2)}{|y-x|^4|y-\\bar x|^2},\\\\\n\\frac{\\d G_2^2}{\\d y_1}&=-\\frac{2y_2(y_1-x_1)^2}{|y-x|^2|y-\\bar x|^4}\n-\\frac{2y_2(y_1-x_1)^2}{|y-x|^4|y-\\bar x|^2}+\\frac{y_2}{|y-x|^2|y-\\bar x|^2},\\\\\n\\frac{\\d G_2^2}{\\d y_2}&=-\\frac{2y_2(y_1-x_1)(y_2+x_2)}{|y-x|^2|y-\\bar x|^4}\n-\\frac{2y_2(y_1-x_1)(y_2-x_2)}{|y-x|^4|y-\\bar x|^2}+\\frac{y_1-x_1}{|y-x|^2|y-\\bar x|^2}.\n\\end{align*}\n\\end{prop}\n\n\\bibliographystyle{amsplain}\n\n\\section{Introduction}\n\nThis paper gives a method for renormalising a class of quantum field theories. The field theories we are going to consider have space of fields of the form $\\mathscr{E} = \\Gamma(M,E)$, where $M$ is a compact manifold and $E$ is a super vector bundle on $M$. We work within the Batalin-Vilkovisky formalism, so that $\\mathscr{E}$ is equipped with an odd symplectic pairing $\\mathscr{E} \\otimes \\mathscr{E} \\to \\mathbb C$ satisfying a certain non-degeneracy condition\\footnote{Here, and throughout, $\\otimes$ refers to the completed projective tensor product, so that $\\mathscr{E} \\otimes \\mathscr{E} = \\Gamma (M^2, E \\boxtimes E)$.}. $\\mathscr{E}$ will also be equipped with a differential $Q$, which is an odd differential operator $Q : \\mathscr{E} \\to \\mathscr{E}$ which is skew self adjoint and of square zero. 
The action functionals in our quantum field theory will be of the form\n$$\n\\frac{1}{2} \\ip{e, Q e } + S(e )\n$$\nwhere $S$ is a local functional on the space of fields $\\mathscr{E}$, which is at least cubic. \n\nThe functional integrals we will renormalise are of the form\n$$\nZ(S, \\hbar, a) = \\int_{x \\in L} \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x + a ) \\right) \\d x\n$$\nwhere $a \\in \\mathscr{E}$ and $L \\subset \\mathscr{E}$ is an isotropic linear subspace, such that the map $Q : L \\to \\Im Q$ is an isomorphism. Such an $L$ is known as a \\emph{gauge fixing condition}. $Z(S,a)$ is a formal functional of the variable $a \\in \\mathscr{E}$, and can be viewed as a generating function for certain Green's functions of the theory.\n\nA fairly wide class of theories can be put in the form we use, including pure Yang-Mills theory in dimension 4, and Chern-Simons theory in any dimension.\n\nThis introduction will give a sketch of the results and of the underlying philosophy, without worrying too much about technical details. \n\n\\subsection{}\nOur gauge fixing conditions are always of the form\n$$\nL = \\Im Q^{GF}\n$$\nwhere $Q^{GF} : \\mathscr{E} \\to \\mathscr{E}$ is an odd self adjoint differential operator of square zero, with the property that the super-commutator\n$$\nH \\overset{\\text{def}}{=} [Q,Q^{GF}]\n$$\n is a positive elliptic operator of second order. The facts that $Q^{GF}$ is self-adjoint with respect to the symplectic pairing on $\\mathscr{E}$, and that $Q^{GF}$ is of square zero, imply that $L = \\Im Q^{GF}$ is an isotropic subspace.\n\nOnly certain theories admit gauge fixing conditions of this form (this is the main restriction on the kind of theories the techniques from this paper can treat). In many examples, $Q$ is a first order elliptic operator, and $Q^{GF}$ is a Hermitian adjoint of $Q$. \n\nA basic example to bear in mind is Chern-Simons theory in dimension $3$. 
If $M$ is a compact oriented $3$-manifold, and $\\mathfrak g$ is a Lie algebra with an invariant non-degenerate pairing, then \n$$\n\\mathscr{E} = \\Omega^\\ast(M) \\otimes \\mathfrak g[1].\n$$\n$[1]$ denotes parity shift. The symplectic pairing on $\\mathscr{E}$ arises from the Poincar\\'e pairing on $\\Omega^\\ast(M)$ and the pairing on $\\mathfrak g$. The operator $Q$ is simply the de Rham differential $\\d_{DR}$, and $S$ is the cubic term in the standard Chern-Simons action. The choice of a metric on $M$ leads to a gauge fixing condition $Q^{GF} = \\d_{DR}^\\ast$. Further examples, including Chern-Simons theory in other dimensions, will be discussed later.\n\n\\subsection{}\nLet us write \n$$\nP(\\varepsilon,T) = \\int_{\\varepsilon}^T (Q^{GF} \\otimes 1) K_t \\d t \\in \\mathscr{E} \\otimes \\mathscr{E}\n$$\nwhere $K_t \\in \\mathscr{E} \\otimes \\mathscr{E}$ is the heat kernel for $H = [Q, Q^{GF}]$. Here, $\\mathscr{E} \\otimes \\mathscr{E}$ denotes the completed projective tensor product, \n$$\n\\mathscr{E} \\otimes \\mathscr{E} = C^{\\infty}(M\\times M, E \\boxtimes E ). \n$$\nThe propagator of our theory is \n$$\nP (0,\\infty) = \\int_{0}^\\infty (Q^{GF} \\otimes 1) K_t \\d t. \n$$\nThis is not an element of $\\mathscr{E} \\otimes \\mathscr{E}$, because of the singularities in the heat kernel at $t = 0$. Instead, $P(0,\\infty)$ is an element of a distributional completion of $\\mathscr{E} \\otimes \\mathscr{E}$. \n\nLet $\\mscr O(\\mathscr{E})$ denote the algebra of functions on $\\mathscr{E}$,\n$$\n\\mscr O(\\mathscr{E}) = \\prod_{n \\ge 0} \\Hom ( \\mathscr{E}^{\\otimes n}, \\mathbb C )_{S_n}\n$$\nwhere, as above, the tensor products are completed projective tensor products, and $\\Hom$ denotes continuous linear maps. \n\nAny element $P \\in \\mathscr{E} \\otimes \\mathscr{E}$ gives rise to an order two differential operator $\\partial_P$ on $\\mscr O(\\mathscr{E})$ in a standard way. 
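Concretely (a sketch of the standard contraction formula; the combinatorial prefactor depends on the chosen symmetry conventions, and the one below is only one common choice), if $P = \sum_r p'_r \otimes p''_r$ then on the $n$-th Taylor component of a functional the operator $\partial_P$ acts by contracting two inputs with $P$:

```latex
% Contraction with the propagator P; the binomial prefactor reflects the
% sum over unordered pairs of inputs and is convention-dependent.
(\partial_P \Phi)(e_1, \dots, e_{n-2})
   = \binom{n}{2} \sum_r \Phi\left(p'_r, p''_r, e_1, \dots, e_{n-2}\right),
\qquad \Phi \in \Hom(\mathscr{E}^{\otimes n}, \mathbb C)_{S_n}.
```

In the finite-dimensional model where $\mathscr{E} = \mathbb R^m$ with coordinates $x^i$ and $P = (P^{ij})$ is a symmetric matrix, $\partial_P$ is simply the constant-coefficient second-order operator $\tfrac{1}{2} \sum_{i,j} P^{ij} \partial_{x^i} \partial_{x^j}$.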
\n\n\nUp to a constant factor (the determinant of $Q$), one can write our functional integral as\n$$\nZ(S, \\hbar, a) = \\lim_{\\varepsilon \\to 0} \\left( e^{\\hbar \\partial_{P(\\varepsilon,\\infty ) } } e^{S \/ \\hbar } \\right) (a).\n$$\nThe right hand side is the exponential of a differential operator applied to a function on $\\mathscr{E}$, yielding a function on $\\mathscr{E}$. This identity is rather formal; in finite dimensions, it is a simple consequence of integration by parts. In infinite dimensions we take it as an attempt at a definition. \n\n When $\\varepsilon > 0$, the right hand side of this equation is well-defined. However, the $\\varepsilon \\to 0$ limit is singular, because $P(0,\\infty)$ is not an element of $\\mathscr{E} \\otimes \\mathscr{E}$, but has singularities along the diagonal in $M^2$. \n\n\nLet us use the notation\n$$\n\\Gamma ( P(\\varepsilon,T) , S ) = \\hbar \\log \\left ( e^{\\hbar \\partial_{P(\\varepsilon,T)} } e^{S \/ \\hbar }\\right). \n$$\nThis is an $\\hbar$ dependent function on $\\mathscr{E}$, that is, an element of $\\mscr O(\\mathscr{E}) [[\\hbar]]$. We will typically omit the variables $a \\in \\mathscr{E}$ and $\\hbar$ from the notation.\n\nThe expression we would like to make sense of is \n$$\\hbar \\log Z(S, \\hbar, a) = \\lim_{\\varepsilon \\to 0} \\Gamma( P(\\varepsilon,\\infty), S).$$\n\n\\subsection{}\n An effective action\\footnote{ What I mean by effective action is related to the Wilsonian effective action. Ignoring for the moment considerations of renormalisation, the Wilsonian effective action is obtained by integrating out all the high-energy fields, i.e.\\ all the eigenspaces of $H$ with high eigenvalues. The effective action considered here is obtained by averaging over all interactions occurring at small scales. More precisely, the effective action at scale $\\varepsilon$ is the sum over all Feynman graphs of the theory, using the propagator $P(0,\\varepsilon)$. 
Using the propagator $P(0,\\varepsilon)$ amounts to allowing particles to propagate for a distance of between $0$ and $\\varepsilon$. } at scale $\\varepsilon$ is a function $f \\in \\mscr O(\\mathscr{E}) [[\\hbar]]$ which describes all interactions occurring at a scale below $\\varepsilon$. One can reconstruct the effective action at any other scale using the effective action at scale $\\varepsilon$ and the propagator. The map $f \\mapsto \\Gamma ( P(\\varepsilon,T) , f)$ is the operation taking an effective action at scale $\\varepsilon$ to the corresponding effective action at scale $T$. One can imagine that the scale $T$ effective action $\\Gamma(P(\\varepsilon,T), f)$ is obtained from the scale $\\varepsilon$ effective action $f$ by allowing particles to interact according to $f$, and to propagate a distance between $\\varepsilon$ and $T$. \n\n$\\Gamma ( P(\\varepsilon,T) , f)$ is the renormalisation group flow from scale $\\varepsilon$ to scale $T$ applied to $f$. This is well-defined for all those $f \\in \\mscr O(\\mathscr{E})[[\\hbar]]$ which are at least cubic modulo $\\hbar$, as long as $0 < \\varepsilon < T \\le \\infty$. We have the semi-group law\n$$\n\\Gamma ( P(T_2,T_3 ) , \\Gamma ( P(T_1,T_2) , f) ) = \\Gamma ( P(T_1,T_3) , f).\n$$\nThis equation is a version of the exact renormalisation group equation. The operation $f \\to \\Gamma( P(\\varepsilon,T), f)$ is invertible, so that if we know the effective action at any scale, we know it at all other scales. \n\nThe only part of this renormalisation group flow that is problematic is taking an effective action at scale $0$ and turning it into an effective action at any positive scale $\\varepsilon$. This part of the procedure needs to be renormalised. This is to be expected: an effective action at scale $0$ would describe interactions occurring at infinitely high energy. 
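In a zero-dimensional toy model (a hypothetical illustration, not part of the construction above: $M$ a point, a single even field $a \in \mathbb R$, the propagator a formal variable $P$, and interaction $S = g a^3/6$), the operation $f \mapsto \Gamma(P, f) = \hbar \log (e^{\hbar \partial_P} e^{f/\hbar})$ can be computed exactly order by order in the coupling $g$, and the semi-group law checked symbolically:

```python
import sympy as sp

a, hbar, g = sp.symbols('a hbar g')
N = 3  # truncate all expansions at order g^N

def trunc(expr):
    """Drop all terms of order > N in the coupling g."""
    expr = sp.expand(expr)
    return sp.Add(*[expr.coeff(g, i) * g**i for i in range(N + 1)])

def exp_dP(expr, prop):
    """Apply exp(hbar * d_P), d_P = (prop/2) d^2/da^2, exactly on polynomials in a."""
    out, term, k = sp.S(0), sp.expand(expr), 0
    while term != 0:
        out += term
        k += 1
        term = sp.expand(prop * hbar * sp.diff(term, a, 2) / (2 * k))
    return out

def Gamma(prop, W):
    """The renormalisation group flow W -> hbar*log(exp(hbar*d_P) exp(W/hbar)),
    computed order by order in g (W is assumed to be O(g))."""
    E = sp.Add(*[trunc(W**n) / (sp.factorial(n) * hbar**n) for n in range(N + 1)])
    Z = trunc(exp_dP(E, prop))
    u = trunc(Z - 1)  # Z = 1 + O(g)
    logZ = sp.Add(*[sp.S(-1)**(k + 1) * trunc(u**k) / k for k in range(1, N + 1)])
    return sp.expand(hbar * logZ)

S = g * a**3 / 6
P1, P2 = sp.symbols('P1 P2')
# semi-group law: flowing by P1 and then by P2 equals flowing by P1 + P2,
# the analogue of P(T_1,T_2) + P(T_2,T_3) = P(T_1,T_3)
assert sp.expand(Gamma(P1 + P2, S) - Gamma(P2, Gamma(P1, S))) == 0
```

Here the $g$-adic truncation plays the role of summing Feynman graphs with a bounded number of vertices; the final assertion is the semi-group law, which holds exactly in each truncation because $e^{\hbar \partial_{P_2}} e^{\hbar \partial_{P_1}} = e^{\hbar \partial_{P_1 + P_2}}$.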
\n\nOne way to phrase the main result we prove is that there is a bijection between functionals $S \\in \\mscr O(\\mathscr{E}) [[\\hbar]]$, satisfying a locality axiom, and systems $\\{ S^{eff}(T) \\mid T \\in \\mbb R_{> 0} \\}$ of effective actions at all scales $T > 0$, related by the renormalisation group equation. The effective action $S^{eff}(T)$ must also satisfy a locality condition as $T \\to 0$. The original action $S$ plays the role of the scale $0$ effective action, and the positive scale effective actions $S^{eff}(T)$ are obtained by renormalising the expression $\\Gamma ( P( 0,T) , S)$. Every such system of effective actions $S^{eff}(T)$ arises from a unique local functional $S$ in this way.\n\n\\subsection{}\nIn order to renormalise the expression $\\lim_{\\varepsilon \\to 0} \\Gamma ( P(\\varepsilon, T ) , S)$, and thus construct the scale $T$ effective action, we will need a way to extract the ``singular part'' of expressions of the form $\\Gamma ( P(\\varepsilon, T ) , S)$. This will rely on some results about the small $\\varepsilon$ asymptotic expansion of $\\Gamma ( P(\\varepsilon, T ) , S)$. \n\n\nIt's convenient to represent $\\Gamma ( P(\\varepsilon, T ) , S)$ as \n$$\n\\Gamma ( P(\\varepsilon, T ) , S)= \\sum_{i,k \\ge 0} \\hbar^i \\Gamma_{i,k} ( P(\\varepsilon, T ) , S)\n$$\nwhere $\\Gamma_{i,k} ( P(\\varepsilon, T ) , S)$ is homogeneous of degree $k$ as a function of $a \\in \\mathscr{E}$. This expression is just the Taylor expansion of $\\Gamma ( P(\\varepsilon, T), S)$ as a function of $\\hbar$ and $a \\in \\mathscr{E}$. In terms of Feynman graphs, $\\Gamma_{i,k}(P(\\varepsilon,T),S)$ is the sum over all graphs with first Betti number $i$ and $k$ external edges. \n\nLet $\\operatorname{An} ( (0,\\infty) )$ be the algebra of analytic functions on $(0,\\infty)$, where $\\varepsilon$ is the coordinate on $(0,\\infty)$. 
\n\\begin{thmA}\nThere exists a subalgebra $\\mathscr{A} \\subset \\operatorname{An} ( (0,\\infty) )$ with a countable basis, such that for all local functionals $S \\in \\mscr O(\\mathscr{E}) [[\\hbar]]$, there exists \na small $\\epsilon$ asymptotic expansion\n$$\n\\Gamma_{i,k} ( P(\\varepsilon, T ) , S) \\simeq \\sum f_r(\\epsilon) \\otimes \\Phi_{i,k,r}(T,a) \n$$\nwhere $f_r \\in \\mathscr{A}$, and $\\Phi_{i,k,r}$ are certain functions of the variables $T \\in (0,\\infty)$ and $a \\in \\mathscr{E}$. \n\\end{thmA}\nA functional $S \\in \\mscr O(\\mathscr{E}) [[\\hbar]]$ is \\emph{local} if, roughly, its homogeneous components $S_{i,k} \\in \\Hom(\\mathscr{E}^{\\otimes k}, \\mathbb C)_{S_k}$, which are distributions on the vector bundle $E^{\\boxtimes k}$ on $M^k$, are supported on the small diagonal, and non-singular in the diagonal directions. We will denote the space of local functionals by \n$$\n\\mscr O_l(\\mathscr{E}) \\subset \\mscr O(\\mathscr{E}) . \n$$\n\nLet $\\mathscr{A}_{\\ge 0} \\subset \\mathscr{A}$ be the subspace of those functions whose $\\epsilon \\to 0$ limit exists. In order to extract the singular part of functions in $\\mathscr{A}$, we need to pick a complementary subspace to $\\mathscr{A}_{\\ge 0}$.\n\\begin{definition}\nA \\emph{renormalisation scheme} is a subspace $\\mathscr{A}_{< 0}\\subset \\mathscr{A}$ such that \n$$\n\\mathscr{A} = \\mathscr{A}_{\\ge 0} \\oplus \\mathscr{A}_{< 0} . \n$$\n\\end{definition}\nLet us fix a renormalisation scheme $\\mathscr{A}_{< 0}$. Later we will see that things are independent in a certain sense of the choice of renormalisation scheme. \n\n\\begin{remark}\nThe functions in $\\mathscr{A}$ are quite explicit integrals of multi-variable rational functions. The algebra $\\mathscr{A}$ only depends on the dimension of the manifold $M$, and not on the details of the particular theory we are working with. 
\n\\end{remark}\n\\begin{remark}\nInstead of using the algebra $\\mathscr{A}$, one could use any larger algebra of functions on $(0,\\infty)$, and obtain the same results. It is technically easier to use an algebra with a countable basis. \n\\end{remark}\n\n\\begin{remark}\nAn alternative regularisation scheme, which Jack Morava suggested to me, would be to use the propagator $\\int_0^\\infty t^z (Q^{GF} \\otimes 1)K_t \\d t$, where $z$ is a complex parameter. If we use this propagator, then integrals attached to Feynman graphs converge if $\\Re z \\gg 0$. The expressions should admit an analytic continuation to $\\mathbb C$ with poles on $\\tfrac{1}{2} \\mathbb Z$. Unfortunately, I wasn't able to prove the existence of the analytic continuation. \n\\end{remark}\n\n\n\\subsection{}\nThe first main result of this paper is the following.\n\\begin{thmB}\nFix a renormalisation scheme $\\mathscr{A}_{< 0} \\subset \\mathscr{A}$. \nThen there exists a unique series of counter-terms\n$$\nS^{CT} (\\hbar, \\epsilon ,a ) = \\sum_{i > 0, k\\ge 0} \\hbar^i S^{CT}_{i,k} (\\epsilon, a )\n$$\nwhere \n\\begin{enumerate}\n\\item\neach $S^{CT}_{i,k}(\\epsilon,a)$ is a formal local functional of $a \\in \\mathscr{E}$, homogeneous of order $k$ as a function of $a$, with values in $\\mathscr{A}_{< 0}$. Thus,\n$$\nS^{CT}_{i,k} \\in \\mscr O_l(\\mathscr{E}) \\otimes \\mathscr{A}_{< 0}\n$$ \nwhere $\\mscr O_l(\\mathscr{E}) \\subset \\mscr O(\\mathscr{E})$ is the space of local functionals on $\\mathscr{E}$.\n\\item\nthe limit \n$$\\lim_{\\epsilon \\to 0} \\Gamma ( P(\\varepsilon, T ) , S - S^{CT})$$ \nexists. \n\\end{enumerate}\nThese counter-terms are independent of $T$. \n\\end{thmB}\n\n\nLet me give a brief sketch of the (surprisingly simple) proof of this theorem. 
As before, let us write \n$$\n\\Gamma ( P(\\varepsilon, T ) , S)= \\sum_{i,k \\ge 0} \\hbar^i \\Gamma_{i,k} ( P(\\varepsilon, T ) , S).\n$$\nThe $\\Gamma_{0,k} ( P(\\varepsilon, T ) , S)$ all have well-defined $\\epsilon \\to 0$ limits. So the first counter-term we need to construct is $S^{CT}_{1,1}$. Our choice of renormalisation scheme allows us to extract the singular part of any function $f(\\epsilon) \\in \\mathscr{A}$; this singular part is simply the projection of $f$ onto $\\mathscr{A}_{< 0}$. We define\n$$\nS^{CT}_{1,1}(\\epsilon,a) = \\text{singular part of the small $\\epsilon$ expansion of } \\Gamma_{1,1} ( P(\\varepsilon, T ) , S)\\in \\mathscr{A}_{< 0}. \n$$\nIt is easy to see that $\\Gamma_{1,1} ( P(\\varepsilon, T ) , S- \\hbar S^{CT}_{1,1})$ is non-singular as $\\epsilon \\to 0$. \n\nThe next step is to replace $S$ by $S - \\hbar S^{CT}_{1,1}$, and use this new action to produce the next counter-term, $S^{CT}_{1,2}$. That is, we define\n$$\nS^{CT}_{1,2}= \\text{singular part of } \\Gamma_{1,2} ( P(\\varepsilon, T ) , S - \\hbar S_{1,1}^{CT})\n$$\nwhere we understand ``singular part'' in the same way as before. We continue like this, to define $S^{CT}_{1,k}$ for all $k \\ge 1$.\n\nThe next step is to define\n$$\nS^{CT}_{2,0} = \\text{singular part of } \\Gamma_{2,0} ( P(\\varepsilon, T ) , S - \\hbar \\sum_{k = 0}^\\infty S_{1,k}^{CT}).\n$$\nContinuing in this manner defines all the $S^{CT}_{i,k}$. \n\nThe difficult part of the proof is the verification that the counter-terms $S^{CT}_{i,k}$ are local functionals. Locality is desirable for many physical and mathematical reasons. More practically, we need $S^{CT}_{i,k}$ to be local in order to apply the procedure at the next inductive step. We only know the existence of a small $\\varepsilon$ asymptotic expansion of the $\\Gamma_{i,k}(P(\\varepsilon,T) , S - \\sum \\hbar^i S^{CT}_{i,k} )$ when the counter-terms $S^{CT}_{i,k}$ are local. 
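The "singular part" projection driving this induction can be illustrated in a toy computer-algebra model (a hypothetical sketch: the scheme $\mathscr{A}_{< 0}$ is taken to be the span of the terms in the small-$\epsilon$ expansion that blow up as $\epsilon \to 0$, and the sample "one-loop" value is made up for illustration, not computed from any actual theory):

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

def singular_part(f, order=4):
    """Project a function of eps onto a toy renormalisation scheme A_{<0}:
    the terms of its small-eps expansion that diverge as eps -> 0+."""
    expansion = sp.expand(f.series(eps, 0, order).removeO())
    return sp.Add(*[t for t in sp.Add.make_args(expansion)
                    if sp.limit(t, eps, 0) in (sp.oo, -sp.oo, sp.zoo)])

# one step of the induction: subtracting the counter-term renders the
# epsilon -> 0 limit finite
Gamma_11 = 1/eps + sp.log(eps) + 2 + 3*eps   # a made-up "one-loop" value
S_CT_11 = singular_part(Gamma_11)            # the counter-term: 1/eps + log(eps)
assert sp.limit(Gamma_11 - S_CT_11, eps, 0) == 2
```

The real construction differs in that the projection is applied to coefficients valued in local functionals of $a$, and each subtraction feeds into the Feynman expansion at the next step of the induction.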
\n\nThe main step in the proof of locality is showing that the $S^{CT}_{i,k}$ are independent of $T$. Once we know they are independent of $T$, we can take $T \\to 0$. $\\Gamma_{i,k}( P(\\varepsilon, T) , S)$ is concentrated near the diagonal in $M^k$, if $\\varepsilon < T$ and both $\\varepsilon, T$ are very small. Thus, the counter-terms become supported on the diagonal, and local. \n\n\nThis theorem allows one to define unambiguously the renormalised scale $T$ effective action\n$$\n\\Gamma^R ( P(0,T) , S ) = \\lim_{\\varepsilon \\to 0} \\Gamma( P(\\varepsilon,T) , S - S^{CT} ) .\n$$\nThus, we can define the renormalised functional integral\n\\begin{multline*}\nZ^R = \\exp ( \\Gamma^R ( P(0,\\infty), S) \/ \\hbar ) = \\text { renormalisation of } \\\\\n\\int_{x \\in L} \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x + a ) \\right) \\d x.\n\\end{multline*}\nThis renormalised partition function is an element of $\\mscr O(\\mathscr{E})((\\hbar))$, that is, a non-local formal function on the space $\\mathscr{E}$ of fields.\n\n\n\n\\subsection{Independence of choice of renormalisation scheme}\nWe should interpret the expression $\\Gamma^R ( P(0,T) , S ) $ constructed using Theorem B as the scale $T$ renormalised effective action. The renormalisation group equation holds:\n$$\n\\Gamma ( P(T_1,T_2 ) , \\Gamma^R ( P(0,T_1) , S) ) = \\Gamma^R ( P(0,T_2) , S).\n$$\n\\begin{definition}\nA \\emph{system of effective actions} on the space of fields $\\mathscr{E}$ is given by an effective action\n$$\nS^{eff}(T) \\in \\mscr O(\\mathscr{E}) [[\\hbar]] \n$$\nfor all $T \\in \\mbb R_{> 0}$, varying smoothly with $T$, \nsuch that\n\\begin{enumerate}\n\\item\nEach $S^{eff}(T)$ is at least cubic modulo $\\hbar$. \n\\item\nThe renormalisation group equation is satisfied,\n$$\nS^{eff}(T_2) = \\Gamma ( P(T_1,T_2) , S^{eff}(T_1)).\n$$\n\\item\nAs $T \\to 0$, $S^{eff}(T)$ must become local, in the following sense. 
There must exist some $T$-dependent local functional $\\Phi(T)$ such that $\\lim_{T \\to 0} \\left( S^{eff}(T) - \\Phi(T) \\right) = 0$. (The $T \\to 0$ limit of $S^{eff}(T)$ itself will generally not exist). \n\\end{enumerate}\n\\end{definition}\nThe renormalised effective actions $\\Gamma^R ( P ( 0,T) , S)$ constructed from a local functional $S$ satisfy these axioms. Thus, for any renormalisation scheme $\\mathscr{A}_{< 0}$, Theorem B provides a map\n\\begin{align*}\n\\text{local functionals } S \\in \\mscr O_l(\\mathscr{E}) [[\\hbar]] & \\to \\text{ systems of effective actions } \\\\\nS & \\mapsto \\{ \\Gamma^R ( P(0,T), S ) \\mid T \\in \\mbb R_{> 0} \\} .\n\\end{align*}\n(The local functionals $S$ must be at least cubic modulo $\\hbar$, as must the effective actions $S^{eff}(T)$.)\n\\begin{thmC}\nFor any renormalisation scheme $\\mathscr{A}_{< 0}$, this map is a bijection.\n\\end{thmC}\nThis set of systems of effective actions on $\\mathscr{E}$ is a canonical object associated to $(\\mathscr{E},Q, Q^{GF})$, independent of the choice of renormalisation scheme. Renormalisation and regularisation techniques other than those considered should lead to different ways of parametrising the same set of systems of effective actions. For instance, if one could make sense of dimensional regularisation on general manifolds, one would hope to get simply a different parametrisation of this set.\n\nFrom this point of view, the formalism of counter-terms is simply a convenient way to describe this set of systems of effective actions. The counter-terms themselves, and the original action $S$, are not really meaningful in themselves. The main point of introducing counter-terms is that it is otherwise difficult to produce systems of effective actions. \\emph{A priori}, it is not obvious that there are any non-zero systems of effective actions at all.\n\nThis proposition makes clear in what sense renormalisation is independent of the choice of renormalisation scheme. 
To any renormalisation scheme $\\mathscr{A}_{< 0}$ and local funct\\-ional $S \\in \\mscr O_l(\\mathscr{E}) [[\\hbar]]$ corresponds a ``theory'', i.e.\\ a system of effective actions. If $\\mathscr{A}'_{< 0}$ is another renormalisation scheme, there is a unique local functional $S'$ such that $(S', \\mathscr{A}'_{< 0})$ gives the same theory as $(S, \\mathscr{A}_{< 0})$. \n\nThis statement can be expressed more formally as follows. Let $\\operatorname{RS}$ denote the space of renormalisation schemes, and let $\\operatorname{EA}$ denote the set of systems of effective actions. Theorem C gives an isomorphism of fibre bundles on $\\operatorname{RS}$\n$$\n\\mscr O_l(\\mathscr{E}) \\times \\operatorname{RS} \\to \\operatorname{EA} \\times \\operatorname{RS} .\n$$\nGive the right hand side the trivial flat connection; this pulls back to a non-trivial (and non-linear) flat connection on $\\mscr O_l(\\mathscr{E})$. This flat connection is uniquely characterised by the property that for any flat section $S : \\operatorname{RS} \\to \\mscr O_l(\\mathscr{E}) \\times \\operatorname{RS}$, the system of effective actions $\\{\\Gamma^R ( P(0,T) , S (\\mathscr{A}_{< 0} ) ) \\}$ associated to the functional $S(\\mathscr{A}_{< 0} ) \\in \\mscr O_l(\\mathscr{E})$ is independent of the point $\\mathscr{A}_{< 0} \\in \\operatorname{RS}$. \n\nWe will fix once and for all a renormalisation scheme $\\mathscr{A}_{< 0}$. This allows us to always talk about local functionals, as a convenient proxy for the set of systems of effective actions. The choice of $\\mathscr{A}_{< 0}$ is analogous to the choice of a basis in a vector space. \n\nAll statements about a theory should be expressed in terms of the effective actions $\\Gamma^R(P(0,T), S)$, and not directly in terms of the local functional $S$. 
This will ensure everything is independent of the choice of renormalisation scheme.\n\n\\subsection{}\n\n Bogoliubov and Parasiuk \\cite{BogPar57}, Hepp \\cite{Hep66} and Zimmermann \\cite{Zim69} have given an algorithm for the renormalisation of certain quantum field theories. Their algorithm is based on combinatorial manipulations of Feynman graphs. Recently, Connes and Kreimer \\cite{ConKre00} have given a beautiful interpretation of the BPHZ algorithm, in terms of the Birkhoff factorisation of loops in a certain Hopf algebra. \n \nIn the approach used in this paper, no graph combinatorics are needed; all we use is a very simple inductive argument, sketched above. The reason that things become so simple seems to be the particular kind of functional integrals we consider, which are always of the form\n$$\nZ(S, \\hbar, a) = \\int_{x \\in \\Im Q^{GF}} \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x + a ) \\right) \\d x.\n$$\nThus, the moral of this paper is that if we consider functional integrals of this form,\nthen the problem of renormalisation becomes quite simple, and the counter-terms are unique and automatically local. As we will see shortly, the particular functional integrals we use also play an important role in the Batalin-Vilkovisky formalism. \n\n\nAnother difference between the approach to renormalisation described in this paper and that of Connes-Kreimer and BPHZ is that we do not give finite values to individual Feynman graphs, but only to the sum over all graphs with a fixed number of loops and external edges. From the point of view of the effective action, the expression attached to an individual graph has no meaning. \n\n\n\nIf we tried to renormalise different classes of functional integrals, the procedure would not be so simple. 
For example, if we try to simply renormalise the integral \n$$\\int_{x \\in \\Im Q^{GF}} \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x ) \\right) \\d x,$$\nwithout using the variable $a$, then there are many possible choices of counter-terms. \n\nIf we try to renormalise the functional integral $$\\int_{x \\in \\Im Q^{GF}} \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x) + \\frac{1}{\\hbar} \\ip{x,a} \\right) \\d x, $$ then the terms in the Feynman graph expansion don't fit together in the right way to produce local counter-terms. \n\nThis type of functional integral is one which appears more commonly in the literature. A simple change of variables allows one to express this type of functional integral in terms of the kind considered here, but not conversely. Indeed, if $a =- Q^{-1} \\pi b$, where $\\pi$ is the projection onto $\\Im Q^{GF}$ and $Q^{-1} : \\Im Q \\to \\Im Q^{GF}$ is the inverse to $Q$, we can write\n\\begin{multline*}\n\\int_{x \\in \\Im Q^{GF} } \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x + a) \\right) \\d x \\\\\n\\shoveright{ =\\int_{x \\in \\Im Q^{GF} } \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x- a, Q (x-a) } + \\frac{1}{\\hbar} S(x ) \\right) \\d x \\hspace{41pt}} \\\\\n= e^{- \\ip{a, b } \/\\hbar } \\int_{x \\in \\Im Q^{GF} } \\exp \\left( \\frac{1}{2 \\hbar} \\ip{x, Q x } + \\frac{1}{\\hbar} S(x) + \\frac{1}{\\hbar} \\ip{x,b} \\right) \\d x \n\\end{multline*}\n\n\n\\subsection{Batalin-Vilkovisky formalism}\nThe Batalin-Vilkovisky quantum master equation is the quantum expression of gauge symmetry. It takes the form\n$$\n(Q + \\hbar \\Delta ) \\exp ( S \/ \\hbar ) = 0\n$$\nwhere $\\Delta$ is a certain order $2$ differential operator acting on the space of functionals on $\\mathscr{E}$. This expression makes perfect sense in the simple situation when the space of fields $\\mathscr{E}$ is finite dimensional (i.e.\\ the underlying manifold $M$ is of dimension $0$). 
In the situation we work in, however, this expression is infinite. This is because $\\Delta S$ involves the multiplication of singular distributions, and thus has the same kind of singularities as appear in one-loop Feynman diagrams. \n\nThis paper gives a definition of a renormalised quantum master equation that works in the infinite dimensional situation. There are regularised BV operators $\\Delta_t$, for $t > 0$, defined by \n$$\n\\Delta_t = - \\partial_{K_t}.\n$$\nRecall that $K_t \\in \\mathscr{E} \\otimes \\mathscr{E}$ is the heat kernel for $H = [Q, Q^{GF}]$, and $\\partial_{K_t}$ is the order two differential operator on the algebra $\\mscr O(\\mathscr{E})$ of functions on $\\mathscr{E}$, associated to $K_t$. The operators $\\Delta_t$ are thus differential operators on $\\mscr O(\\mathscr{E})$, for all $t > 0$. The ``physical'' BV operator is $\\Delta_0$, which is ill-defined.\n\n\\begin{lemma}\t\nA function $f \\in \\mscr O(\\mathscr{E})[[\\hbar]]$ satisfies the $\\Delta_\\varepsilon$ quantum master equation\n$$\n(Q + \\hbar \\Delta_\\varepsilon)e^{f \/ \\hbar} = 0\n$$\n if and only if \n$$\n\\Gamma ( P(\\varepsilon, T ) , f)\n$$\nsatisfies the $\\Delta_T$ quantum master equation. \n\\end{lemma}\nThis follows from the fact that\n$$\nQ P(\\varepsilon, T) = -K_T + K_\\varepsilon\n$$\nso that the differential operator $\\partial_{P(\\varepsilon,T)}$ is a chain homotopy between $\\Delta_\\varepsilon$ and $\\Delta_T$.\n\nThis lemma motivates the following definition.\n\\begin{definition}\nA local functional $S \\in \\mscr O_l(\\mathscr{E})[[\\hbar]]$ satisfies the renormalised quantum master equation if the renormalised expression \n$$\\Gamma^R ( P(0,T) , S) = \\lim_{\\varepsilon \\to 0} \\Gamma (P(\\varepsilon,T), S - S^{CT} )$$\nsatisfies the $\\Delta_T$ quantum master equation, for some (or equivalently, all) $T > 0$. 
\n\\end{definition}\nThus, the renormalised QME says that the scale $T$ effective action associated to $S$ satisfies the scale $T$ quantum master equation.\n\nOne peculiarity of this renormalised quantum master equation is that, unlike in the finite dimensional situation, it depends on the gauge fixing condition $Q^{GF}$. We will see shortly that this dependence is very weak. \n\nIn fact, the equation depends also on the renormalisation scheme $\\mathscr{A}_{< 0}$, but only in an artificial way. Recall that theorem C says that the choice of a renormalisation scheme sets up a bijection between local functionals $S \\in \\mscr O_l(\\mathscr{E})[[\\hbar]]$, and systems of effective actions. The system of effective actions is given by $\\{\\Gamma^R(P(0,T), S) \\mid T \\in \\mbb R_{> 0} \\}$. The quantum master equation as a statement about the system of effective actions is independent of the choice of renormalisation scheme. \n\nWe will fix once and for all a renormalisation scheme, and use it to parametrise the set of systems of effective actions. If we use a different renormalisation scheme, everything works the same, except we are parametrising the set of systems of effective actions in a slightly different way. \n\n\\subsection{Homotopies of solutions to the renormalised quantum master equation}\nLet's consider the finite dimensional situation again for a moment, with the further assumption that the finite dimensional space of fields $V$ has trivial $Q$ cohomology. Then the subspace $L \\subset V$ is a Lagrangian, and not just isotropic; we have a direct sum decomposition \n$$\nV = L \\oplus \\Im Q.\n$$\nThe importance of the Batalin-Vilkovisky quantum master equation comes from the fact that in this situation, if $S$ satisfies the Batalin-Vilkovisky quantum master equation, then the partition function $Z(S,\\hbar, a = 0)$ remains unchanged under continuous variations of $L$. 
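\n\nThe mechanism behind this invariance can be sketched, following Schwarz \\cite{Sch93} (this is only a sketch, with signs depending on conventions; $\\Delta$ is the order two operator mentioned earlier). Write $F(e) = \\frac{1}{2} \\ip{e, Q e} + S(e)$. As $Q^2 = 0$ and $Q$ is skew self adjoint, the quadratic term satisfies $\\{ \\tfrac{1}{2} \\ip{e,Qe} , \\tfrac{1}{2} \\ip{e,Qe} \\} = 0$ and $\\Delta \\tfrac{1}{2} \\ip{e,Qe} = 0$, while $\\{ \\tfrac{1}{2} \\ip{e,Qe} , S \\} = \\pm Q S$. It follows that\n$$\n\\hbar^2 \\Delta e^{F \/ \\hbar} = \\pm \\left( Q S + \\tfrac{1}{2} \\{S, S\\} + \\hbar \\Delta S \\right) e^{F \/ \\hbar} .\n$$\nThus the quantum master equation says precisely that the integrand of $Z(S, \\hbar, a = 0)$ is $\\Delta$-closed; and, by Schwarz's lemma, the integral of a $\\Delta$-closed function over a Lagrangian depends only on the homology class of the Lagrangian. 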
\n\n\nWe would like to prove a version of this in the infinite dimensional situation, for the renormalised quantum master equation. However, things are more delicate in this situation; the renormalised QME itself depends on the choice of a gauge fixing condition. What we will show is that if $Q^{GF}(t)$ is a one-parameter family of gauge fixing conditions, corresponding to the family of isotropic subspaces $\\Im Q^{GF}(t)$, then the set of homotopy classes of solutions to the renormalised QME using $Q^{GF}(0)$ is isomorphic to the corresponding set using $Q^{GF}(1)$. This result is a corollary of a result about certain simplicial sets of gauge fixing conditions and of solutions to the renormalised QME. \n\n\nLet me explain this picture in more detail. One of the axioms we need for our gauge fixing conditions is that there is a direct sum decomposition\n$$\n\\mathscr{E} = \\Im Q^{GF} \\oplus \\Ker H \\oplus \\Im Q\n$$ \nwhere $H = [Q^{GF}, Q]$ so that $\\Ker H$ is the space of harmonic elements of $\\mathscr{E}$. \nThis leads to the identification\n$$\nH^\\ast(\\mathscr{E}, Q) = \\Ker H.\n$$\nThis cohomology group is finite dimensional.\n\nThere is a notion of homotopy of solutions of the quantum master equation. Briefly, two solutions $S_0, S_1$ of the quantum master equation are homotopic if there exists $S_t + \\d t S'_t$, for $t \\in [0,1]$, such that \n$$\n\\left(Q + \\d t \\frac{\\d}{\\d t} + \\hbar \\Delta \\right) e^{(S_t + \\d t S'_t )\/\\hbar } = 0. \n$$\nIn a similar way, one can define a notion of homotopy of solutions of the renormalised quantum master equation.\n\n\n\\begin{utheorem}\n\\begin{enumerate}\n\\item\nIf $S$ satisfies the renormalised quantum master equation, then the restriction of $\\Gamma^R (P(0,\\infty), S)$ to $H^\\ast(\\mathscr{E}, Q)$ satisfies the finite dimensional quantum master equation. The map $S \\to \\Gamma^R (P(0,\\infty), S)$ respects homotopies. 
\n\\item\nLet $Q^{GF}(t) \\subset \\mathscr{E}$ be a smooth one-parameter family of gauge fixing conditions, for $t \\in [0,1]$. Then there exists a natural bijection between the set of homotopy classes of solutions of the renormalised quantum master equation using the gauge fixing condition $Q^{GF}(0)$ and the corresponding set using $Q^{GF}(1)$.\n\\item\nLet $S_0$, $S_1$ be solutions of the renormalised quantum master equations using $Q^{GF}(0)$ and $Q^{GF}(1)$ respectively, which correspond up to homotopy under the bijection coming from the family $Q^{GF}(t)$. Then $\\Gamma^R (P(0,\\infty), S_0)$ and $\\Gamma^R(P(0,\\infty), S_1)$ (defined using $Q^{GF}(0)$ and $Q^{GF}(1)$ respectively) are homotopic solutions of the quantum master equation on $H^\\ast(\\mathscr{E}, Q)$. \n\\end{enumerate}\n\n\\end{utheorem}\nIn fact, this result will be a corollary of a more abstract result concerning simplicial sets of solutions of the quantum master equation.\n\\begin{utheorem}\nThere exist simplicial sets\n\\begin{enumerate}\n\\item $\\mathbf{GF}(\\mathscr{E},Q)$ of gauge fixing conditions \n\\item $\\mathbf{BV}( \\mathscr{E}, Q )$ of solutions of the renormalised quantum master equation on $\\mathscr{E}$ \n\\item $\\mathbf{BV}( H^\\ast(\\mathscr{E}, Q) )$ of solutions to the finite-dimensional quantum master equation on $H^\\ast(\\mathscr{E},Q)$\n\\end{enumerate}\nwhich fit into a diagram of maps of simplicial sets\n$$\n\\xymatrix{ \\mathbf{BV}( \\mathscr{E}, Q ) \\ar[rr]^(.43){\\Gamma^R(P(0,\\infty), - ) } \\ar[d]^{\\pi} & & \\mathbf{BV}( H^\\ast(\\mathscr{E}, Q) ) \\\\ \n\\mathbf{GF}( \\mathscr{E}, Q) & }\n$$\nwhere the vertical arrow $\\pi$ is a fibration of simplicial sets. \n\\end{utheorem}\nA solution of the renormalised quantum master equation must be with respect to some gauge fixing condition; this defines the vertical arrow $\\pi : \\mathbf{BV}( \\mathscr{E}, Q ) \\to \\mathbf{GF}(\\mathscr{E},Q)$. 
The map $\\mathbf{BV}( \\mathscr{E}, Q ) \\to \\mathbf{BV}( H^\\ast(\\mathscr{E}, Q) )$ is the simplicial version of the map discussed earlier, which takes a solution $S$ of the renormalised quantum master equation to $\\Gamma^R(P(0,\\infty), S) \\vert_{H^\\ast(\\mathscr{E},Q)}$. This is a solution of the quantum master equation on $H^\\ast(\\mathscr{E},Q)$.\n\nOne can deduce the previous more concrete result from this abstract statement about simplicial sets using the simplicial analogs of path and homotopy lifting properties for fibrations. \n\n\\subsection{}\nWe have seen that the set of systems of effective actions is a canonical object associated to $(\\mathscr{E},Q, Q^{GF})$, defined without reference to a renormalisation scheme. In a similar way, we could say that the simplicial set $\\mathbf{BV}(\\mathscr{E},Q)$ is a canonical object associated to $(\\mathscr{E},Q)$ without reference to the choice of gauge fixing condition. A choice of a renormalisation scheme and a gauge fixing condition gives us a convenient parametrisation of this simplicial set, as a set of local functionals satisfying the renormalised quantum master equation. However, if we choose a different renormalisation scheme, we get something canonically isomorphic; and if we choose a different gauge fixing condition we get something canonically homotopy equivalent. (At least, this is true as long as the space of natural gauge fixing conditions is contractible, which it always seems to be in examples). \n\nIf we have a classical action $S_0$ which solves the classical master equation, there is a simplicial set $\\mathbf{BV}(\\mathscr{E},Q, S_0)$ of quantisations of $S_0$, i.e.\\ solutions to the renormalised quantum master equation of the form $S_0+ \\hbar S_1 + \\cdots$. If the space of natural gauge fixing conditions is contractible, then this simplicial set is canonically associated to the classical theory $(\\mathscr{E},Q,S_0)$, up to canonical homotopy equivalences. 
\n\nThus, there is a homotopy action of the group of symmetries of the classical theory on the simplicial set $\\mathbf{BV}(\\mathscr{E},Q, S_0)$ of quantisations. One would like to quantise a given classical theory in a way preserving as many symmetries as possible. \n\n\n\n\n\n\n\n\n\n\n\\subsection{Quantisation of Chern-Simons theory}\nThe results of this paper allow one to renormalise a wide variety of quantum field theories, for example Chern-Simons theory on a compact oriented manifold, or pure Yang-Mills theory on a compact $4$-dimen\\-sional manifold with a conformal class of metrics. By ``renormalisation'' I simply mean the construction of counter-terms. \n\n\n\nI would like to distinguish between renormalisation and quantisation. By \\emph{quantisation} I mean the replacement of an action $S_0$ by an action $S = S_0 + \\sum_{i > 0} \\hbar^i S_i$ which satisfies the renormalised quantum master equation. The only non-trivial theory we succeed in quantising in this paper is Chern-Simons theory. In fact, we only quantise Chern-Simons theory modulo the constant term (constant as a function on the space of fields). \n\nThe quantisation of Chern-Simons theory is based on a general local-to-global principle, which allows one to construct global solutions to the renormalised QME from local ones. This local-to-global result is stated and proved for a general class of theories in the body of the paper. To keep things simple, I'll only discuss Chern-Simons theory in this introduction. \n\nChern-Simons theory is a perturbative gauge theory associated to a compact oriented manifold $M$ and a flat bundle $\\mathfrak g$ of super Lie algebras on $M$, with an invariant pairing of parity opposite to that of $\\dim M$. \n\nIn this situation, let\n$$\n\\mathscr{E} = \\Omega^\\ast(M , \\mathfrak g ) [1].\n$$\n$\\mathscr{E}$ is the Batalin-Vilkovisky odd symplectic manifold associated to the Chern-Simons gauge theory of connections with values in $\\mathfrak g$. 
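\n\nTo fix ideas, the structures take the following concrete form in this example (a sketch, with normalisation and sign conventions chosen for definiteness): the differential is the de Rham differential coupled to the flat connection on $\\mathfrak g$, the odd symplectic pairing combines the wedge product with the invariant pairing on $\\mathfrak g$, and the classical interaction is the cubic part of the Chern-Simons action,\n$$\nQ = d , \\qquad \\ip{a, b} = \\int_M \\ip{a \\wedge b}_{\\mathfrak g} , \\qquad S_0(a) = \\frac{1}{6} \\int_M \\ip{a , [a,a]}_{\\mathfrak g} .\n$$\n(The quadratic part $\\frac{1}{2} \\ip{a, Q a}$ of the full Chern-Simons action is kept separate from $S_0$, as our functionals are required to be at least cubic.) A metric on $M$ gives the gauge fixing operator $Q^{GF} = d^\\ast$, so that $H = [Q, Q^{GF}]$ is the Hodge Laplacian and $\\Ker H$ is the space of harmonic forms. 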
\n\nA gauge fixing condition on $\\mathscr{E}$ is given by a choice of a metric on $M$. The space of metrics is of course contractible. As we have seen above, the spaces of homotopy classes of solutions to the renormalised QME for different gauge fixing conditions are homotopy equivalent. Thus, one can speak about homotopy classes of solutions to the renormalised QME without reference to a metric. \n\\begin{utheorem}\nLet $M$, $\\mathfrak g$ be as above. Then there is a canonical (up to a contractible choice) quantisation $S$ of the Chern-Simons action $S_0$ on $\\mathscr{E}$, modulo constant terms. \n\nThat is, there is a canonical up to homotopy functional $S = S_0 + \\sum_{i > 0} \\hbar^i S_i$ on $\\mathscr{E}$, where each $S_i$ is defined modulo constants (as a function on $\\mathscr{E}$), which satisfies the renormalised quantum master equation. \n\\end{utheorem}\nI should emphasise that the specific form the $S_i$ will take is dependent on both the metric and the renormalisation scheme we choose to work with. If we use a different renormalisation scheme, then the $S_i$ will change, but the effective action $\\Gamma^R ( P (0,T) , \\sum \\hbar^i S_i)$ remains unchanged. If we use a different metric, then this effective action changes by a homotopy. \n\n\\begin{ucorollary}\nThere is a canonical, up to homotopy and modulo constants, function $\\Phi^{CS}$ on \n$$H^\\ast(\\mathscr{E}) = H^\\ast(M, \\mathfrak g ) [1]$$\nwhich satisfies the quantum master equation. \n\\end{ucorollary}\nAs $\\Phi^{CS}$ is well-defined modulo constants, $\\Phi^{CS}$ is an element of \n$$\n\\Phi^{CS} \\in \\left( \\prod_{k > 0} \\Sym^k H^\\ast(\\mathscr{E})^{\\vee} \\right)[[\\hbar]] .\n$$\nThe quantum master equation is\n$$\n\\{ \\Phi^{CS}, \\Phi^{CS} \\} + \\hbar \\Delta \\Phi^{CS} = 0\n$$\nwhich must hold modulo constants, that is, modulo $\\mathbb C[[\\hbar]]$. \n\nOne can write this identity in more explicit terms. 
The Hamiltonian vector field associated to $\\Phi^{CS}$ has Taylor components which are linear maps\n$$\n\\sum \\hbar^i l_{i,k} : H^\\ast(M, \\mathfrak g)^{\\otimes k} \\to H^\\ast(M, \\mathfrak g) [[\\hbar]]\n$$\nwhere $l_{i,k}$ is independent of $\\hbar$. The $l_{i,k}$ are defined for all $i, k \\ge 0$. The $l_{0,k}$ are the usual $L_\\infty$ operations, which arise via the homological perturbation lemma. The $l_{i,k}$ for $i > 0$ are the new structure. The quantum master equation is encoded in a sequence of identities of the form\n\\begin{multline*}\n\\sum_{\\substack{i_1+i_2 = i \\\\ k_1 + k_2 = k+1 }} \\pm l_{i_1,k_1} ( x_1,\\ldots, x_j, l_{i_2,k_2} \n(x_{j+1}, \\ldots, x_{j+k_2} ) , x_{j + k_2 + 1}, \\ldots, x_{k} ) \\\\\n+ \\sum \\pm l_{i-1,k+2} ( x_1, \\ldots, x_{j'}, \\delta', x_{j'+1}, \\ldots, x_{j''}, \\delta'', x_{j''+1}, \\ldots, x_k ) = 0\n\\end{multline*}\nfor all $i,k$. \nHere, the $x_a \\in H^\\ast(M , \\mathfrak g)$, and $\\sum \\delta' \\otimes \\delta'' \\in H^\\ast(M, \\mathfrak g)^{\\otimes 2}$ is the tensor inverse to the pairing. The first term in this identity is the expression appearing in the usual $L_\\infty$ equation. This algebraic structure is sometimes called a ``quantum $L_\\infty$ algebra''. \n\nAs we have seen, the $l_{0,k}$ give the usual $L_\\infty$ structure on $H^\\ast(M, \\mathfrak g)$. These $L_\\infty$ algebras, for varying $\\mathfrak g$, encode a great deal of the rational homotopy type of $M$. Thus, the structure defined by the $l_{i,k}$ for $i > 0$ can be viewed as a kind of quantisation of the rational homotopy type. \n\nIn the case when $H^\\ast(\\mathscr{E}) = 0$, Kontsevich \\cite{Kon94} and Axelrod-Singer \\cite{AxeSin92,AxeSin94} (when $\\dim M = 3$) have already constructed the perturbative Chern-Simons invariants. In some sense, their construction is orthogonal to the construction in this paper. 
Because we work modulo constants, the construction in this paper doesn't give anything in the case when $H^\\ast(\\mathscr{E}) = 0$. On the other hand, their constructions don't apply in the situations where our construction gives something non-trivial. \n\nThere seems to be no fundamental reason why a generalisation of the construction in this paper, including the constant term, should not exist. Such a generalisation would also generalise the results of Kontsevich and Axelrod-Singer. However, the problem of constructing such a generalisation does not seem to be amenable to the techniques used in this paper.\n\nA theory related to the Chern-Simons theory considered here has been treated in the very interesting recent paper \\cite{Mne06}. The BF theory used by Mnev is the same as Chern-Simons theory with a special kind of Lie algebra, of the following form. Let $\\mathfrak g$ be any finite dimensional Lie algebra. Then $\\mathfrak g \\oplus \\mathfrak g^\\vee$ is a Lie algebra with an even invariant pairing, and $\\mathfrak g \\oplus \\mathfrak g^\\vee[1]$ is a Lie algebra with an odd invariant pairing. The Lie algebra structure arises from the fact that $\\mathfrak g^\\vee$ carries a $\\mathfrak g$ action. Chern-Simons theory with Lie algebra $\\mathfrak g \\oplus \\mathfrak g^\\vee$ (when $\\dim M$ is odd) or $\\mathfrak g \\oplus \\mathfrak g^\\vee[1]$ (when $\\dim M$ is even) is the same as the BF theory considered by Mnev. \n\n\\subsection{Construction of the quantisation of Chern-Simons theory}\n\n\nLet me sketch the proof of the theorem on quantisation of Chern-Simons theory. The proof uses the homotopical algebra of simplicial presheaves to glue together local solutions to the renormalised quantum master equation to find a global solution.\n\nRecall that a simplicial presheaf $G$ on $M$ is a presheaf of simplicial sets on $M$; thus, for each open set $U \\subset M$ and each integer $n$, we have a set $G(U,n)$ of $n$-simplices of the simplicial set $G(U)$. 
\n\nLet $\\mathbf{Met}$ be the simplicial presheaf such that $\\mathbf{Met}(U,n)$ is the set of smooth families of Riemannian metrics on $U$, parametrised by $\\Delta^n$. Let $\\mathbf{FMet} \\subset \\mathbf{Met}$ be the sub-simplicial presheaf given by families of flat metrics. \n\n It turns out that whether or not an action functional $S$ satisfies the renormalised quantum master equation is a local property. Further, the statement that $S$ satisfies the renormalised QME on an open set $U$ depends only on the metric $g$ on $U$. This is true also in families, parametrised by simplices. Thus, we can arrange the solutions of the renormalised QME into a simplicial presheaf on $M$.\n \n\\begin{utheorem}\nThere is a simplicial presheaf $\\mathbf{BV}$ on $M$, with a map $\\mathbf{BV} \\to \\mathbf{Met}$, whose $0$-simplices $\\mathbf{BV}(U,0)$ are given by\n\\begin{enumerate}\n\\item\nmetrics $g$ on $U$ \n\\item\na quantisation $S$ of the Chern-Simons action on $U$, modulo constants. That is, $S$ is a solution of the renormalised quantum master equation (modulo constants) associated to $g$, which modulo $\\hbar$ is the Chern-Simons action $S_0$.\n\\end{enumerate}\nThe one-simplices $\\mathbf{BV} (U,1)$ are homotopies of this data, and so on.\n\\end{utheorem}\n\n\nOur ultimate aim is to construct a canonical (up to contractible choice) point of $\\Gamma(M, \\mathbf{BV})$. Such a point will consist of a metric $g$ on $M$ and a quantisation of the Chern-Simons action on $\\mathscr{E}$ to a solution of the renormalised quantum master equation associated to $g$, as always modulo constants. \n\nWe can always find a solution to the renormalised QME locally. \n\\begin{uproposition}\nSuppose an open subset $U\\subset M$ is equipped with a flat Riemannian metric. 
Then the original Chern-Simons action satisfies the renormalised quantum master equation.\n\\end{uproposition}\n\\begin{remark}\nThis proposition is the only result in this subsection which is really special to Chern-Simons theory. The proof of this proposition relies heavily on the work of Kontsevich \\cite{Kon94, Kon03a}. In particular, we use the compactifications of configuration spaces used in these papers. The quantum master equation is deduced from a theorem about the vanishing of certain integrals on configuration spaces proved by Kontsevich in \\cite{Kon94} when $\\dim M > 2$ and in \\cite{Kon03a} when $\\dim M = 2$. \n\\end{remark}\n\n\n\nThe last proposition shows that we have a map \n$$\n\\mathbf{FMet} \\to \\mathbf{BV}\n$$\nof simplicial presheaves on $M$.\n\n\n\nIf $G$ is a simplicial presheaf on $M$, one can construct its derived global sections $\\mbb R \\Gamma(M,G)$, which is a simplicial set. We use a \\v{C}ech definition of $\\mbb R \\Gamma(M,G)$. If $G_1 \\to G_2$ is a map of simplicial presheaves which induces a weak equivalence of simplicial sets $G_1(U) \\to G_2(U)$, for sufficiently small open balls $U$ in $M$, then the map $\\mbb R \\Gamma(M,G_1) \\to \\mbb R \\Gamma(M,G_2)$ is a weak equivalence. \n\\begin{ulemma}\nFor sufficiently small open balls $U$ in $M$, $\\mathbf{FMet}(U)$ is contractible.\n\\end{ulemma}\nIf $U$ is a ball, then $\\mathbf{FMet}(U)$ is the simplicial set associated to the space of flat metrics on $U$, which one can show is contractible using a simple rescaling argument.\n\nIt follows that $\\mbb R \\Gamma(M,\\mathbf{FMet})$ is contractible. \n\\begin{thmD}\nThe map\n$$\n\\Gamma(M, \\mathbf{BV}) \\to \\mbb R \\Gamma(M,\\mathbf{BV})\n$$\nis a weak equivalence.\n\\end{thmD}\nThis theorem is the heart of the ``local-to-global'' principle; it allows one to glue together local solutions to the renormalised QME to give global ones. This result is stated and proved for a general class of theories in the body of the paper. 
\n\n\nWe have a diagram\n$$\n\\mbb R \\Gamma(M,\\mathbf{FMet}) \\to \\mbb R \\Gamma(M,\\mathbf{BV}) \\xleftarrow{\\simeq} \\Gamma(M, \\mathbf{BV}) \n$$\nwhere the first arrow comes from the map of simplicial presheaves $\\mathbf{FMet} \\to \\mathbf{BV}$ constructed earlier. The space on the left is contractible, and theorem D says that the arrow on the right is a weak equivalence. Thus, we get the required point of $\\Gamma(M, \\mathbf{BV}) $ up to contractible choice. \n\n\n\n\n\\subsection{Acknowledgements}\nThis paper benefited a great deal from conversations with many people. I'd like to thank Dennis Sullivan for inviting me to present these results in his seminar, and for many helpful conversations. Ezra Getzler's help with heat kernels and simplicial methods was invaluable, and made the paper much clearer. A conversation with Jack Morava was very helpful early on. Arthur Greenspoon's comments greatly improved the text. I'm also grateful to Mohammed Abouzaid, John Baez, Alberto Cattaneo, Paul Goerss, Dmitry Kaledin, Pavel Mnev, David Nadler, Vasily Pestun, Jared Wunsch and Eric Zaslow, for their help with various aspects of this work. \n\n\n\\section{A crash course in the Batalin-Vilkovisky formalism}\n\\label{section intro bv}\n\nThis section should probably be skipped by experts; it consists of an informal introduction to the Batalin-Vilkovisky approach to quantising gauge theories. The only thing which may not be standard is a discussion of a version of the BV formalism where one integrates over isotropic instead of Lagrangian subspaces. \n\nIn this section, our vector spaces will \\emph{always} be finite dimensional. Of course, none of the difficulties of renormalisation are present in this simple case. Many of the expressions we write in the finite dimensional case are ill-defined in the infinite dimensional case. 
\n\nLet us suppose we have a finite dimensional vector space $V$ of fields, a group $G$ acting on $V$ in a possibly non-linear way, and a $G$-invariant function $f$ on $V$ such that $0$ is a critical point of $f$.\n\nOne is interested in making sense of functional integrals of the form\n$$\\int_{ V \/ G } e^{f\/ \\hbar }$$\nover the quotient space $V \/ G$. The starting point in the Batalin-Vilkovisky formalism is the BRST construction, which says one should try to interpret this quotient in a homological fashion. This means we should consider the supermanifold\n$$\n\\mathfrak g [ 1 ] \\oplus V\n$$\nwhere $[1]$ refers to a change of degree, so $\\mathfrak g$ is in degree $-1$. The space of functions on this super-manifold is \n$$\n\\mscr O ( \\mathfrak g [ 1 ] \\oplus V ) = \\wedge^\\ast \\mathfrak {g}^\\vee \\otimes \\mscr O (V)\n$$\nwhich is the super vector space underlying the Chevalley-Eilenberg Lie algebra co\\-chain complex for $\\mathfrak g$ with coefficients in the $\\mathfrak g$-module $\\mscr O(V)$. The Chevalley-Eilenberg differential gives an odd derivation of $\\mscr O(\\mathfrak g [1] \\oplus V)$, which can be thought of as an odd vector field on $\\mathfrak g [1] \\oplus V$. Let us denote this odd vector field by $X$. \n\n Recall that this Lie algebra cochain complex computes the homotopy invariants for the action of $\\mathfrak g$ on $\\mscr O(V)$. Thus, we can view $\\mscr O(\\mathfrak g[1] \\oplus V)$, with this differential, as a ``derived'' version of the algebra of functions on the quotient of $V$ by the action of $G$ \n(at least, in a formal neighbourhood of the origin, which is all we really care about). The BRST construction says one should replace the integral over $V \/ G$ by an integral of the form\n$$\\int_{ \\mathfrak {g} [1 ] \\oplus V } e^{f \/ \\hbar }.$$\n\nThis leaves us in a better situation than before, as we are in a linear space, and we can attempt to make sense of the integral perturbatively. 
However, we still have problems; the quadratic part of the functional $f$ is highly degenerate on $\\mathfrak {g} [1 ] \\oplus V$. Indeed, $f$ is independent of $\\mathfrak g[1]$ and is constant on $G$-orbits on $V$. Thus, we cannot compute the integral above by a perturbation expansion around the critical points of the quadratic part of $f$. \n\nThis is where the Batalin-Vilkovisky formalism comes in. Let $E$ denote the odd cotangent bundle of $\\mathfrak{g} [1] \\oplus V$, so that\n$$\nE = \\mathfrak{g} [1] \\oplus V \\oplus V^\\vee [-1] \\oplus \\mathfrak{g}^\\vee [-2].\n$$\nThe various summands of $E$ are usually given the following names: $\\mathfrak{g} [1]$ is the space of ghosts, $V$ is the space of fields, $V^\\vee [-1]$ is the space of antifields and $\\mathfrak{g}^\\vee [-2]$ is the space of antighosts. \n\nThe function $f$ on $\\mathfrak{g} [1] \\oplus V$ pulls back to a function on $E$, via the projection $E \\to \\mathfrak{g} [1] \\oplus V$; we continue to call this function $f$. By naturality, the vector field $X$ on $\\mathfrak{g} [1] \\oplus V$ induces one on $E$, which we continue to call $X$. As $[X, X] = 0$ on $\\mathfrak{g} [1] \\oplus V$, the same identity holds on $E$. As $X$ preserves $f$ on $\\mathfrak{g} [1] \\oplus V$, it continues to preserve $f$ on $E$.\n\n$E$ is an odd symplectic manifold, and $X$ is an odd vector field preserving the symplectic form. Thus, there exists a unique function $h_X$ on $E$ whose Hamiltonian vector field is $X$, and which vanishes at zero. As $X$ is odd, $h_X$ is an even function.\n\nAs $E$ is odd symplectic, the space of functions on $E$ has an odd Poisson bracket. The statement that $[X,X] = 0$ translates into the equation $\\{h_X, h_X\\} = 0$. The statement that $X f = 0$ becomes $\\{h_X, f\\} = 0$. And, as $f$ is pulled back from $\\mathfrak{g} [1] \\oplus V$, it automatically satisfies $\\{f,f\\} = 0$. 
These identities together tell us that the function $f + h_X$ satisfies the \\emph{Batalin-Vilkovisky classical master equation}, \n$$\n\\{ f + h_X, f + h_X \\} = 0 . \n$$\n\nLet us write \n$$\nf (e) + h_X ( e ) = \\frac{1}{2} \\ip{e, Q e } + S(e)\n$$\nwhere $Q : V \\to V$ is an odd linear map, skew self adjoint for the pairing $\\ip{\\quad}$, and $S$ is a function which is at least cubic. The fact that $f + h_X$ satisfies the classical master equation implies that $Q^2 = 0$. Also, the identity \n$$\nQ S + \\frac{1}{2} \\{ S , S \\} = 0\n$$ \nholds as a consequence of the classical master equation for $f + h _X$. \n\n\nLet $L \\subset E$ be a small, generic, Lagrangian perturbation of the zero section $\\mathfrak {g} [1] \\oplus V \\subset E$. The Batalin-Vilkovisky formalism tells us to consider the functional integral\n\\begin{equation*}\n\\int_{e \\in L} \\exp ( f(e) \/ \\hbar + h_X (e)\/ \\hbar ) = \\int_{e \\in L} \\exp ( \\frac{1}{2 \\hbar} \\ip{e, Q e } + \\frac{1}{\\hbar} S(e) ) \n\\end{equation*}\nAs $L$ is generic, the pairing $\\ip{e, Q e}$ will have very little degeneracy on $L$. In fact, if the complex $(E,Q)$ has zero cohomology, then the pairing $\\ip{e, Q e}$ is non-degenerate on a generic Lagrangian $L$. This means we can perform the above integral perturbatively, around the critical point $0 \\in L$. \n\n\\subsection{Quantum master equation}\nLet us now turn to a more general situation, where $E$ is a finite dimensional vector space with an odd symplectic pairing, and $Q : E \\to E$ is an odd operator of square zero which is skew self adjoint for the pairing. $E$ is not necessarily of the form constructed above.\n\nLet $x_i, \\eta_i$ be a Darboux basis for $E$, so that $x_i$ are even, $\\eta_i$ are odd, and $\\ip{x_i, \\eta_i} = 1$. Let $\\Delta$ be the order two differential operator on $E$ given by the formula\n$$\n\\Delta = \\sum \\partial_{x_i} \\partial_{\\eta_i}.\n$$\nThis operator is independent of the choice of basis of $E$. 
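\n\nAlthough $\\Delta$ is of order two, and so not a derivation, its failure to be a derivation is measured exactly by the odd Poisson bracket on $\\mscr O(E)$: a direct check in the Darboux basis shows that (up to the standard sign conventions for odd variables)\n$$\n\\Delta(f g) = (\\Delta f) g + (-1)^{|f|} f \\, \\Delta g + (-1)^{|f|} \\{f, g\\} .\n$$\nThis identity is what makes the two forms of the quantum master equation stated below equivalent: expanding $(Q + \\hbar \\Delta) e^{S \/ \\hbar}$ using it yields exactly the terms $Q S + \\frac{1}{2} \\{S, S\\} + \\hbar \\Delta S$. 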
\n\nLet $S \in \mscr O(E ) [[\hbar]]$ be an $\hbar$-dependent function on $E$, which modulo $\hbar$ is at least cubic. The function $S$ satisfies the \emph{quantum master equation} if \n$$\n(Q + \hbar \Delta) e^{S\/ \hbar } = 0 . \n$$\nThis equation is equivalent to the equation\n$$\nQ S + \frac{1}{2} \{ S , S \} + \hbar \Delta S = 0 . \n$$\nIndeed, $Q e^{S \/ \hbar} = \hbar^{-1} (Q S) e^{S \/ \hbar}$, while $\Delta e^{S \/ \hbar} = ( \hbar^{-1} \Delta S + \tfrac{1}{2} \hbar^{-2} \{ S , S \} ) e^{S \/ \hbar}$, so that\n$$\n(Q + \hbar \Delta) e^{S \/ \hbar} = \hbar^{-1} ( Q S + \tfrac{1}{2} \{ S , S \} + \hbar \Delta S ) e^{S \/ \hbar} . \n$$\n\nThe key lemma in the Batalin-Vilkovisky formalism is the following.\n\begin{lemma}\nLet $L \subset E$ be a Lagrangian on which the pairing $\ip{e,Q e }$ is non-degenerate. (Such a Lagrangian exists if and only if $H^\ast(E, Q ) = 0$.) Suppose that $S$ satisfies the quantum master equation. Then the integral\n$$\n\int_{e \in L} \exp ( \frac{1}{2 \hbar} \ip{e, Q e } + \frac{1}{\hbar} S(e) ) \n$$\nis unchanged under deformations of $L$.\n\end{lemma}\nThe non-degeneracy of the inner product on $L$, and the fact that $S$ is at least cubic modulo $\hbar$, mean that one can compute this integral perturbatively.\n\nSuppose $E, Q, \ip{\ , \ }, S$ are obtained as before from a gauge theory. Then $S$ automatically satisfies the classical master equation $Q S + \frac{1}{2} \{S, S \} = 0$. If, in addition, $\Delta S = 0$, then $S$ satisfies the quantum master equation. Thus, we see that we can quantise the gauge theory in a way independent of the choice of $L$ as long as $S$ satisfies the equation $\Delta S = 0$. When $S$ does not satisfy this equation, one looks to replace $S$ by a series $S' = S + \sum_{ i > 0} \hbar^i S_i$ which does satisfy the quantum master equation $Q S' + \frac{1}{2} \{ S' , S' \} + \hbar \Delta S' = 0$.\n\n\subsection{Geometric interpretation of the quantum master equation}\nThe quantum master equation has a geometric interpretation, first described by Albert Schwarz \cite{Sch93}.
I will give a very brief summary; the reader should refer to this paper for more details.\n\nLet $\mu$ denote the translation invariant ``measure'' on $E$, unique up to scale; that is, a section of the Berezinian. The operator $\Delta$ can be interpreted as a kind of divergence associated to the measure $\mu$, as follows. As $E$ is an odd symplectic manifold, the algebra $\mscr O(E)$ has an odd Poisson bracket. Every function $S \in \mscr O(E)$ has an associated vector field $X_S$, defined by the formula $X_S f = \{ S, f \}$. \n\nThe operator $\Delta$ satisfies the identity\n$$\n\mathcal L_{X_S} \mu = (\Delta S) \mu\n$$\nwhere $\mathcal L_{X_S}$ refers to the Lie derivative. In other words, $\Delta S$ is the infinitesimal change in volume associated to the vector field $X_S$. \n\nThus, the two equations\n\begin{align*}\n\{S,S\} &= 0 \\\\\n\Delta S &= 0\n\end{align*}\nsay that the vector field $X_S$ has square zero and is measure preserving.\n\nThis gives an interpretation of the quantum master equation in the case when $S$ is independent of $\hbar$. When $S$ depends on $\hbar$, the two terms of the quantum master equation need not hold independently. In this situation, we can interpret the quantum master equation as follows. Let $\mu_S$ be the measure on $E$ defined by the formula\n$$\n\mu_S = e^{S \/ \hbar} \mu.\n$$\nWe can define an operator $\Delta_S$ on $\mscr O(E)$ by the formula\n$$\n\mathcal L_{X_f} \mu_S = (\Delta_S f ) \mu_S.\n$$\nThis is the divergence operator associated to the measure $\mu_S$, in the same way that $\Delta$ is the divergence operator associated to the translation invariant measure $\mu$.
\n\nThen, a slightly weaker version of the quantum master equation is equivalent to the statement\n$$\n\\Delta_S^2 = 0.\n$$\nIndeed, one can compute that\n$$\n\\hbar \\Delta_S f = \\{S, f \\} + \\hbar \\Delta f \n$$\nso that \n$$\n\\hbar^2 \\Delta_S^2 f = \\tfrac{1}{2} \\{ \\{S, S\\} , f \\} + \\hbar \\{ \\Delta S , f \\} . \n$$\nThus, $\\Delta_S^2 = 0$ if and only if $\\tfrac{1}{2} \\{S,S\\} +\\hbar \\Delta S$ is in the centre of the Poisson bracket, that is, is constant. \n\nThis discussion shows that the quantum master equation is the statement that the measure $e^{S \/ \\hbar } \\mu$ is compatible in a certain sense with the odd symplectic structure on $E$. \n\n\\begin{remark}\nIn fact it is better to use half-densities rather than densities in this picture. A solution of the quantum master equation is then given by a half-density which is compatible in a certain sense with the odd symplectic form. As all of our odd symplectic manifolds are linear, we can ignore this subtlety. \n\\end{remark}\n\n \\subsection{Integrating over isotropic subspaces}\n As we have described it, the BV formalism only has a chance to work when $H^\\ast(E, Q ) = 0$. This is because one cannot make sense of the relevant integrals perturbatively otherwise. However, there is a generalisation of the BV formalism which works when $H^\\ast(E,Q ) \\neq 0$. In this situation, let $L \\subset E$ be an isotropic subspace such that $Q : L \\to \\Im Q$ is an isomorphism. Let $\\operatorname{Ann}(L) \\subset E$ be the set of vectors which pair to zero with any element of $L$. Then we can identify \n$$\nH^\\ast(E, Q) = \\operatorname{Ann}(L) \\cap \\Ker Q . \n$$\nWe thus have a direct sum decomposition \n$$\nE = L \\oplus H^\\ast(E,Q) \\oplus \\Im Q . \n$$\nNote that $H^\\ast(E,Q)$ acquires an odd symplectic pairing from that on $E$. Thus, there is a BV operator $\\Delta_{H^\\ast(E,Q) }$ on functions on $H^\\ast(E,Q)$. 
We say a function $f$ on $H^\ast(E,Q)$ satisfies the quantum master equation if $\Delta_{H^\ast(E,Q)} e^{f \/ \hbar } = 0$. \n\nThe analog of the ``key lemma'' of the Batalin-Vilkovisky formalism is the following. This lemma is well known to experts in the area. \n\begin{lemma}\nLet $S \in \mscr O(E) [[\hbar]] $ be an $\hbar$-dependent function on $E$ which satisfies the quantum master equation. Then the function on $H^\ast(E,Q)$ defined by\n$$\na \mapsto \hbar \log \left( \int_{e \in L} \exp ( \frac{1}{2 \hbar} \ip{e, Q e } + \frac{1}{\hbar} S(e+ a ) ) \right) \n$$\n(where we think of $H^\ast(E,Q)$ as a subspace of $E$) satisfies the quantum master equation. \n\nFurther, if we perturb the isotropic subspace $L$ a small amount, then this solution of the QME on $H^\ast( E, Q)$ is changed to a homotopic solution of the QME. \n\end{lemma}\nNote that since $Q a = 0$, we can write the exponential in the integrand in the equivalent way $( \frac{1}{2 \hbar} \ip{e+a, Q (e+a) } + \frac{1}{\hbar} S(e+ a ) ) $. Thus, there is no real need here to separate out quadratic and higher terms. However, in the infinite dimensional situation we will discuss later, it will be essential to write the integrand as $( \frac{1}{2 \hbar} \ip{e, Q e } + \frac{1}{\hbar} S(e+ a ) ) $, because we will take a field $a$ which is not closed. \n\n\nThis integral is an explicit way of writing the homological perturbation lemma for BV algebras, which transfers a solution of the quantum master equation at chain level to a corresponding solution on cohomology. From this observation it's clear (at least philosophically) why the lemma should be true; the choice of the Lagrangian $L$ is essentially the same as the choice of symplectic homotopy equivalence between $E$ and its cohomology. I'll omit a formal proof for now, as a proof of a more general statement will be given later. \n\nLet me explain what a ``homotopy'' of a solution of the QME is.
There is a general concept of homotopy equivalence of algebraic objects, which I learned from the work of Deligne, Griffiths, Morgan and Sullivan \\cite{DelGriMor75}. Two algebraic objects are homotopic if they are connected by a family of such objects parametrised by the commutative differential algebra $\\Omega^\\ast ([0,1])$. In our context, this means that two solutions $f_0, f_1$ to the quantum master equation on $H^\\ast(E,Q)$ are homotopic if there exists an element $F \\in \\mscr O(H^\\ast(E,Q) ) \\otimes \\Omega^\\ast([0,1] ) [[\\hbar]]$ which satisfies the quantum master equation\n$$\n ( \\d_{DR} + \\hbar \\Delta_{H^\\ast(E,Q) } ) e^{F \/ \\hbar } = 0\n$$\nand which restricts to $f_0$ and $f_1$ when we evaluate at $0$ and $1$. Here $\\d _{DR}$ refers to the de Rham differential on $\\Omega^\\ast([0,1])$. \n\nThe quantum master equation imposed on $F$ is equivalent to the equation\n$$\n\\d_{DR} F + \\frac{1}{2} \\{ F , F \\} + \\hbar \\Delta F = 0 . \n$$\nIf we write $F(t, \\d t) = A(t ) + \\d t B(t)$, then the QME imposed on $F$ becomes the system of equations\n\\begin{align*}\n\\frac{1}{2} \\{ A(t) , A(t) \\} + \\hbar \\Delta A (t) &= 0 \\\\\n\\frac{\\d} { \\d t} A(t) + \\{ A(t) , B(t) \\} + \\hbar \\Delta B(t) &= 0 . \n\\end{align*}\nThe first equation says that $A(t)$ satisfies the ordinary QME for all $t$, and the second says that the family $A(t)$ is tangent at every point to an orbit of a certain ``gauge group'' acting on the space of solutions to the QME. \n\n\\section{Example : Chern-Simons theory}\nI want to discuss a class of quantum field theories in the Batalin-Vilkovisky formalism. The general definition of the kind of quantum field theory I want to consider is a little technical, so I will start by discussing a simple example in detail, namely Chern-Simons theory on a $3$-manifold. (Later we will discuss Chern-Simons theory on a manifold of any dimension). Most of the features of more general theories are already evident in this example. 
\n\nLet $M$ be a compact oriented $3$-manifold. Let $\mathfrak g$ be a flat bundle of complex\footnote{Everything works if we take a real Lie algebra and work over $\mbb R$ throughout.} Lie algebras with a complex-valued invariant pairing on $M$. For example, $\mathfrak g$ could be the adjoint bundle associated to a flat principal $SL(n,\mathbb C)$ bundle, equipped with the Killing form. \n\nWe want to quantise the Chern-Simons gauge theory associated to $\mathfrak g$. This is the theory whose space of fields $\mathscr{V}$ is the space $\Omega^1(M, \mathfrak g)$, which we think of as being the space of $\mathfrak g$-valued connections. The Lie algebra of the gauge group is\n$$\mathscr{G} = \Omega^0 (M, \mathfrak g ).$$ This Lie algebra acts on the space of fields in the usual way that infinitesimal bundle automorphisms act on connections. \n\nThe Chern-Simons action functional is the function on $\mathscr{V}$ given by\n$$\n\frac{1}{2}\ip{v, \d v} + \frac{1}{6} \ip{ v, [v,v] } \n$$\nwhere $\d : \Omega^1 (\mathfrak g) \to \Omega^2 (\mathfrak g)$ is the operator obtained by coupling the flat connection on $\mathfrak g$ to the de Rham differential.\n\nApplying the Batalin-Vilkovisky construction, as described in section \ref{section intro bv}, yields an odd symplectic vector space \n\begin{align*}\n\mathscr{E} = \mathscr{G} [1] \oplus \mathscr{V} \oplus \mathscr{V}^\vee [-1] \oplus \mathscr{G}^\vee [-2].\n\end{align*}\nIf we interpret the duals of the infinite dimensional vector spaces appropriately, we find that \n$$\n\mathscr{E} = \Omega^\ast (M, \mathfrak g) [1] . \n$$\nThis is simply because $\Omega^2 (M,\mathfrak{g})$ has a natural non-degenerate pairing with $ \mathscr{V} = \Omega^1 (M,\mathfrak{g} )$, and $\Omega^3 (M,\mathfrak{g})$ has a natural non-degenerate pairing with $\Omega^0(M,\mathfrak{g}) = \mathscr{G}$.
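Returning to the action functional itself: as a consistency check (under the normalisation in which the cubic term carries a factor of $\\frac{1}{6}$, i.e.\\ the action is $\\frac{1}{2}\\ip{v, \\d v} + \\frac{1}{6} \\ip{v,[v,v]}$, the convention for which the following computation holds), the critical points of the Chern-Simons functional are precisely the flat connections. Since $M$ is closed, integrating by parts and using the invariance of the pairing gives\n$$\n\\delta \\left( \\frac{1}{2}\\ip{v, \\d v} + \\frac{1}{6} \\ip{ v, [v,v] } \\right) = \\ip{ \\delta v , \\d v + \\tfrac{1}{2} [v,v] } ,\n$$\nso the equation of motion is the flatness equation $\\d v + \\frac{1}{2} [v,v] = 0$.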
\n\nThe differential $Q : \\mathscr{E} \\to \\mathscr{E}$ (constructed as in section \\ref{section intro bv}) is simply the de Rham differential, coupled to the flat connection on $\\mathfrak{g}$. The odd pairing on $\\mathscr{E}$ is given by\n$$\n\\ip{ \\alpha \\otimes X, \\alpha' \\otimes X' } = (-1)^{\\abs{\\alpha} }\\int_M \\alpha\\wedge \\alpha' \\ip{X, X' }_{\\mathfrak g}\n$$\nwhere $\\alpha,\\alpha' \\in \\Omega^\\ast(M)$ and $X,X' \\in \\mathfrak{g}$. The notation $\\ip{\\ , \\ }_{\\mathfrak g}$ refers to the invariant pairing on $\\mathfrak g$. \n\nThe action functional $S$ is given by the formula\n$$\nS( \\sum_i \\alpha_i \\otimes X_i ) = \\sum_{i,j,k} (-1)^{(\\abs{\\alpha_i} + 1) (\\abs{\\alpha_j} + 1 )} \\int_M \\alpha_i \\wedge \\alpha_j \\wedge \\alpha_k \\ip{ X_i, [X_j, X_k ] }.\n$$\n\n\n\n\n\\subsection{Gauge fixing and functional integrals}\nAs we saw in section \\ref{section intro bv}, the next step in the Batalin-Vilkovisky formalism is the choice of a gauge fixing condition. This is an isotropic subspace $L \\subset \\mathscr{E}$ such that $Q : L \\to \\Im Q$ is an isomorphism. \n\nPicking a metric on $M$ gives an operator $\\d^\\ast : \\Omega^i (M) \\to \\Omega^{i-1} (M)$. This extends to an operator $Q^{GF} : \\mathscr{E} \\to \\mathscr{E}$, defined (locally) by\n$$\nQ^{GF} ( \\omega \\otimes A) = (\\d^\\ast \\omega) \\otimes A\n$$\nwhere $A$ is a flat section of the bundle $\\mathfrak g$. Our gauge fixing condition will be\n$$\nL = \\Im Q^{GF} . 
\n$$\nThe functional integral we will construct is\n$$\n \\int_{e \\in \\Im Q^{GF}} \\exp ( \\frac{1}{2 \\hbar} \\ip{e, Q e } + \\frac{1}{\\hbar} S(e+ a ) ) .\n$$\n\nThus, the Chern-Simons gauge theory in the BV formalism is encoded in the following data.\n\\begin{enumerate}\n\\item\nA vector space $\\mathscr{E}$, which is the space of global sections of a super vector bundle on a compact manifold $M$.\n\\item\nAn odd anti-symmetric pairing on $\\mathscr{E}$, satisfying a certain non-degeneracy condition.\n\\item\nA skew self adjoint operator $Q : \\mathscr{E} \\to \\mathscr{E}$, of square zero.\n\\item\nAn even functional $S : \\mathscr{E} \\to \\mathbb C$ which is ``local'', meaning roughly that it has a Taylor expansion in terms of continuous linear maps $\\mathscr{E}^{\\otimes k} \\to \\mathbb C$ which are distributions on $M^k$, supported on the small diagonal, and non-singular in the diagonal directions. (A precise definition will be given soon).\n\\item\nAn auxiliary operator $Q^{GF} : \\mathscr{E} \\to \\mathscr{E}$, which is a self adjoint elliptic operator of square zero.\n\\item\nThe super-commutator $[Q,Q^{GF}]$ must be an elliptic operator of order $2$ with some positivity conditions.\n\\end{enumerate}\n\n\n\\section{Batalin-Vilkovisky formalism in infinite dimensions}\n\\label{section infinite bv}\n\nNow we will define the type of quantum field theory we will consider. The definition consists essentially of abstracting the data we encountered above in the case of Chern-Simons theory. \n\n\n\\subsection{Functionals}\n If $M,N$ are smooth manifolds and $F,G$ are super vector bundles on $M,N$ respectively, we will use the notation\n$$\n\\Gamma(M,F) \\otimes \\Gamma(N,G)\n$$\nto denote the space $\\Gamma(M \\times N, F \\boxtimes G)$ of smooth sections of $F \\boxtimes G$. In other words, $\\otimes$ refers to the completed projective tensor product where appropriate. \n\nLet $E$ be a super vector bundle over $\\mathbb C$ on a compact manifold $M$. 
Let $\\mathscr{E}$ denote the space of global sections of $E$. \n\\begin{definition}\nLet $\\mscr O(\\mathscr{E})$ be the algebra \n$$\n\\mscr O(\\mathscr{E}) = \\prod_{n \\ge 0} \\Hom ( \\mathscr{E}^{\\otimes n}, \\mathbb C ) _{S_n}\n$$\nwhere $\\Hom$ denotes the space of continuous linear maps, and the subscript $S_n$ denotes taking $S_n$ coinvariants. \n\nDirect product of distributions makes $\\mscr O(\\mathscr{E})$ into an algebra.\n\\end{definition}\nWe can view $\\mscr O(\\mathscr{E})$ as the algebra of formal functions at $0 \\in \\mathscr{E}$ which have Taylor expansions of the form\n$$\nf(e) = \\sum f_i(e^{\\otimes i} ) \n$$\nwhere each \n$$\nf_i : \\mathscr{E}^{\\otimes i} \\to \\mathbb C\n$$\nis a continuous linear map (i.e. a distribution). \n\\begin{definition}\nLet $X$ be an auxiliary manifold. Define\n$$\n\\mscr O(\\mathscr{E}, C^{\\infty}(X)) = \\prod_{n \\ge 0} \\Hom ( \\mathscr{E}^{\\otimes n}, C^{\\infty}(X) ) _{S_n}\n$$\nwhere, as before, we take the space of continuous linear maps. \n\\end{definition}\nThus, $\\mscr O(\\mathscr{E},C^{\\infty}(X))$ is a certain completed tensor product of $\\mscr O(\\mathscr{E})$ and $C^{\\infty}(X)$. \nLet $\\hbar$ be a formal parameter. Let \n$$\n\\mscr O(\\mathscr{E}, \\mathbb C[[\\hbar]] ) = \\varprojlim \\mscr O(\\mathscr{E}) \\otimes \\mathbb C[\\hbar]\/\\hbar^n.\n$$\nSimilarly, let $\\mscr O(\\mathscr{E}, C^{\\infty}(X) \\otimes \\mathbb C[[\\hbar]])$ be the inverse limit $ \\varprojlim \\mscr O(\\mathscr{E}, C^{\\infty}(X) ) \\otimes \\mathbb C[\\hbar] \/ \\hbar^n$. \n\n\\subsection{Local functionals}\n\\begin{definition}\n\nLet $\\operatorname{Diff}(E,E')$ denote the infinite rank vector bundle on $M$ of differential operators between two vector bundles $E$ and $E'$ on $M$. 
Let \n$$\n\operatorname{PolyDiff} ( \Gamma(E)^{\otimes n}, \Gamma(E') ) = \Gamma ( M, \operatorname{Diff}(E, \mathbb C)^{\otimes n} \otimes E' ) \subset \operatorname{Hom} (\Gamma(E)^{\otimes n} , \Gamma(E'))\n$$\nwhere $\mathbb C$ denotes the trivial vector bundle of rank $1$. All tensor products in this expression are fibrewise tensor products of vector bundles on $M$. \n\end{definition}\nIt is clear that $\operatorname{PolyDiff}( \Gamma(E)^{\otimes n}, \Gamma(E'))$ is the space of sections of an infinite rank vector bundle on $M$. If $F$ is a super vector bundle on another manifold $N$, let \n$$\n\operatorname{PolyDiff} ( \Gamma(E)^{\otimes n}, \Gamma(E') ) \otimes \Gamma(F)\n$$\ndenote the completed projective tensor product, as usual, so that \n$$\n\operatorname{PolyDiff} ( \Gamma(E)^{\otimes n}, \Gamma(E') ) \otimes \Gamma(F) = \Gamma ( M \times N, (\operatorname{Diff}(E, \mathbb C)^{\otimes n} \otimes E') \boxtimes F ) .\n$$\nIf $X$ is a manifold, we can think of $\operatorname{PolyDiff} ( \Gamma(E)^{\otimes n}, \Gamma(E') ) \otimes C^{\infty}(X)$ as the space of smooth families of polydifferential operators parametrised by $X$. \n\nOne can give an equivalent definition of polydifferential operators in terms of local trivialisations $\{e_i\}, \{e'_j\}$ of $E$ and $E'$, and local coordinates $y_1,\ldots, y_l$ on $M$. \nA map $\Gamma(E)^{\otimes n} \to \Gamma(E')$ is a polydifferential operator if, locally, it is a finite sum of operators of the form \n$$\nf_1 e_{i_1} \otimes \cdots \otimes f_n e_{i_n} \mapsto \sum _{j} e'_j \Phi^{j}_{i_1 \ldots i_n} (y_1,\ldots,y_l) ( D_{I_{1}} f_1 ) \cdots ( D_{I_{n}} f_n ) \n$$\nwhere $\Phi^{j}_{i_1 \ldots i_n} (y_1,\ldots,y_l)$ are smooth functions of the $y_i$, the $I_k$ are multi-indices, and the operators $D_{I_k}$ are the corresponding partial derivatives with respect to the $y_i$.
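As a concrete illustration (with data chosen purely for simplicity): take $M = S^1$ with coordinate $y$, and let $E = E'$ be the trivial line bundle, so that $\\Gamma(E) = C^{\\infty}(S^1)$. Then\n$$\nf_1 \\otimes f_2 \\mapsto \\Phi(y) \\, ( \\partial_y f_1 ) ( \\partial_y^2 f_2 ) ,\n$$\nfor a fixed smooth function $\\Phi$, is a polydifferential operator $\\Gamma(E)^{\\otimes 2} \\to \\Gamma(E')$. By contrast, the continuous bilinear map $f_1 \\otimes f_2 \\mapsto f_1 \\cdot \\int_{S^1} f_2$ is not polydifferential: its value at a point depends on $f_2$ everywhere on $S^1$, not just on finitely many derivatives of $f_2$ at that point.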
\n\n\n\n\n\\begin{definition}\nLet \n$$\\mscr O_l(\\mathscr{E}) \\subset \\mscr O(\\mathscr{E}) = \\prod_{n \\ge 0} \\Hom ( \\mathscr{E}^{\\otimes n}, \\mathbb C ) _{S_n}$$ be the space of formal functionals on $\\mathscr{E}$, each of whose Taylor series terms $f_n : \\mathscr{E}^{\\otimes n} \\to \\mathbb C$ factors as \n$$\n\\mathscr{E}^{\\otimes n} \\to \\dens(M) \\xrightarrow{\\int_M} \\mathbb C\n$$\nwhere the first map is a polydifferential operator. \n\nElements of $\\mscr O_l(\\mathscr{E})$ will be called \\emph{local functionals} on $\\mathscr{E}$.\n\nIf $X$ is another manifold (possibly with corners), and $F$ is a super vector bundle on $X$, let \n$$\n\\mscr O_l( \\mathscr{E}, \\Gamma(X,F) ) \\subset \\mscr O(\\mathscr{E}, \\Gamma(X,F)) = \\prod_{n \\ge 0} \\Hom ( \\mathscr{E}^{\\otimes n}, \\Gamma(X,F) ) _{S_n}\n$$\nbe the subspace of those functions each of whose terms factors as \n$$\n\\mathscr{E}^{\\otimes n} \\to \\dens(M) \\otimes \\Gamma(X,F) \\xrightarrow{\\int_M} \\Gamma(X,F)\n$$\nwhere the first map is in $\\operatorname{Poly Diff} ( \\mathscr{E}^{\\otimes n}, \\dens(M) ) \\otimes \\Gamma(X,F)$.\n\\end{definition}\nElements of $\\mscr O_l(\\mathscr{E}, C^{\\infty}(X))$ are smooth families of local functionals parametrised by a manifold $X$. We will use a similar notation when we want to take functionals with values in $\\mathbb C[[\\hbar]]$. Let\n$$\n\\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]] ) = \\varprojlim \\mscr O_l(\\mathscr{E}) \\otimes \\mathbb C[\\hbar]\/\\hbar^n.\n$$\nWe can define $\\mscr O_l(\\mathscr{E}, C^{\\infty}(X) \\otimes \\mathbb C[[\\hbar]])$ as an inverse limit in the same way. 
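For example (assuming, purely for illustration, a Riemannian metric on $M$, with volume density $\\mathrm{dvol}$ and metric Laplacian $\\Delta_g$): on $\\mathscr{E} = C^{\\infty}(M)$ the functional\n$$\nf(\\phi) = \\int_M \\left( \\tfrac{1}{2} \\phi \\, \\Delta_g \\phi + \\phi^4 \\right) \\mathrm{dvol}\n$$\nis local, since each term of its Taylor expansion is the integral over $M$ of a density built from $\\phi$ and finitely many of its derivatives at a single point. By contrast, the functional $\\phi \\mapsto \\left( \\int_M \\phi \\, \\mathrm{dvol} \\right)^2$ is not local: its quadratic Taylor term $\\phi_1 \\otimes \\phi_2 \\mapsto \\int_M \\phi_1 \\, \\mathrm{dvol} \\int_M \\phi_2 \\, \\mathrm{dvol}$ does not arise from a polydifferential map to $\\dens(M)$.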
\n\n\\begin{remark}\nThe space $\\mscr O_l(\\mathscr{E})$ is \\emph{not} an algebra; the product of two local functionals is not local.\n\\end{remark}\n\n\nIf $f \\in \\mscr O_l(\\mathscr{E})$ is a local functional, then the $i$-th term of its Taylor expansion $f_i : \\mathscr{E}^{\\otimes i} \\to \\mathbb C$ is assumed to factor through a map $\\mathscr{E}^{\\otimes i} \\to \\dens(M)$.\nNote that there will in general exist many such factorisations. However, we have\n\\begin{lemma}\nIf \n$$\\Phi : \\mathscr{E}^{\\otimes n} \\to \\mathbb C$$ is a map which factorises through a polydifferential operator $\\mathscr{E}^{\\otimes n} \\to \\dens(M)$, there is a unique polydifferential operator\n$$\n\\Psi : \\mathscr{E}^{\\otimes n-1} \\to \\Gamma(E^\\vee) \\otimes_{C^{\\infty}(M)} \\dens(M)\n$$\nsuch that \n$$\n\\Phi ( e_1 \\otimes \\cdots \\otimes e_n ) = \\int_M \\ip{ \\Psi ( e_1 \\otimes \\cdots \\otimes e_{n-1} ) , e_n }.\n$$\n\\label{lemma local functional vector field}\n\\end{lemma}\n\\begin{proof}\nThis is an easy calculation in local coordinates.\n\\end{proof} \n\n\n\n\n\\subsection{Batalin-Vilkovisky formalism}\n\\begin{definition}\nAn odd symplectic structure on $\\mathscr{E}$ is a map\n$$\n\\ip{\\ , \\ }_M : E \\otimes E \\to \\dens_M\n$$\nof graded vector bundles on $M$, which is odd, antisymmetric, and non-degenerate, in the sense that the associated map $E \\to E^{\\vee} \\otimes \\dens_M$ is an isomorphism. \n\\end{definition}\nSuch an odd symplectic structure induces an odd pairing \n$$\n\\ip{\\ , \\ } : \\mathscr{E} \\otimes_\\mathbb C \\mathscr{E} \\to \\mathbb C\n$$\non $\\mathscr{E}$. \n\n\n\n\\begin{lemma}\nAn odd symplectic structure on $\\mathscr{E}$ induces an odd Poisson bracket on the space $\\mscr O_l(\\mathscr{E})$ of local functionals. 
Further, if $f \in \mscr O_l(\mathscr{E})$ is local and $g \in \mscr O(\mathscr{E})$ is possibly non-local, the Poisson bracket $\{f,g\}$ is well-defined.\n\end{lemma}\n\begin{proof}\nThis is an immediate corollary of Lemma \ref{lemma local functional vector field}. Indeed, this lemma shows that any local functional on $\mathscr{E}$ can be replaced by something which has a Taylor expansion in terms of polydifferential operators $\mathscr{E}^{\otimes n} \to \mathscr{E}$. Here we are using the identification of $\mathscr{E}$ with $\Gamma(E^\vee )\otimes_{C^{\infty}(M)} \dens(M)$ given by the odd symplectic form. \n\nSomething with a Taylor expansion in terms of maps $\mathscr{E}^{\otimes n} \to \mathscr{E}$ can be considered to be a formal vector field on $\mathscr{E}$. This yields the Hamiltonian vector field on $\mathscr{E}$ associated to a local functional. The usual formula for the action of vector fields on functions defines the Poisson bracket $\{f,g\}$ if $f \in \mscr O_l(\mathscr{E})$ and $g \in \mscr O(\mathscr{E})$. Since this formula amounts to inserting the output of the map $\mathscr{E}^{\otimes n} \to \mathscr{E}$ into the input of a map $\mathscr{E}^{\otimes k} \to \mathbb C$, everything is well-defined.\n\end{proof} \nIn general, the Poisson bracket of two non-local functionals may be ill-defined. \n\n\n\nLet us fix, for the rest of the paper, the following data:\n\begin{enumerate}\n\item\nA compact manifold $M$ with a complex super vector bundle $E$ on $M$, whose space of global sections is denoted (as above) by $\mathscr{E}$.\n\item\nAn odd linear elliptic differential operator $Q : \mathscr{E} \to \mathscr{E}$, which is self adjoint and satisfies $[Q,Q] = 0$.
$Q$ induces a differential on all the spaces associated to $\mathscr{E}$, such as $\mscr O_l(\mathscr{E})$ and $\mscr O(\mathscr{E})$.\n\end{enumerate}\nA local functional $S \in \mscr O_l(\mathscr{E})$, which is at least cubic, satisfies the classical master equation if\n$$\nQ S + \frac{1}{2}\{S,S\} = 0.\n$$\nWe will try to quantise, within the Batalin-Vilkovisky formalism, field theories whose action functionals are of the form\n$$\n\frac{1}{2} \ip{e, Q e } + S(e)\n$$\nwhere $S \in \mscr O_l(\mathscr{E} )$ satisfies the classical master equation. A wide class of quantum field theories of physical and mathematical interest, including for instance pure Yang-Mills theory, appears in this way.\n\n\begin{definition}\nLet $(M,\mathscr{E},Q,\ip{\ , \ })$ be as above. A gauge fixing condition is an odd linear differential operator \n$$\nQ^{GF} : \mathscr{E} \to \mathscr{E}\n$$\nsuch that\n\begin{enumerate}\n\item\n$Q^{GF}$ is skew self adjoint with respect to the odd symplectic pairing on $\mathscr{E}$.\n\item\n$Q^{GF}$ is an operator of order $\le 1$. \n\item\n$[Q^{GF},Q^{GF}] = 0.$\n\item\nThe operator \n$$\nH := [Q,Q^{GF}]\n$$\nis a second order elliptic operator, which is a generalised Laplacian in the sense of \cite{BerGetVer92}. This means that the symbol \n$$\n\sigma(H) \in \Gamma( M, \Sym^2 TM \otimes \End E) \n$$\nis a positive definite metric on $T^\ast M$, tensored with the identity map $E\to E$.\n\item\nThere is a direct sum decomposition\n$$\n\mathscr{E} = \Im Q \oplus \Im Q^{GF} \oplus \Ker H . \n$$\n\end{enumerate}\n\end{definition}\nThe fourth condition is a little restrictive. Even so, many interesting quantum field theories admit a natural collection of such gauge fixing conditions. This condition is imposed because I need to use the asymptotic expansion of the heat kernel of $H$ proved in \cite{BerGetVer92}.
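For orientation, these axioms can be verified in the de Rham setting of Chern-Simons theory; this is just classical Hodge theory, recalled as a sanity check. With $Q = \\d$ and, for a chosen metric on $M$, $Q^{GF} = \\d^\\ast$, the commutator\n$$\nH = [\\d, \\d^\\ast] = \\d \\d^\\ast + \\d^\\ast \\d\n$$\nis the Hodge Laplacian, whose symbol is $\\abs{\\xi}^2$ times the identity, so that $H$ is a generalised Laplacian. The required direct sum decomposition is the Hodge decomposition\n$$\n\\Omega^\\ast(M) = \\Im \\d \\oplus \\Im \\d^\\ast \\oplus \\mathcal H^\\ast(M),\n$$\nwhere $\\mathcal H^\\ast(M) = \\Ker H$ is the space of harmonic forms.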
\n\n\\section{Examples}\n\\label{section examples}\n\\subsection{$\\phi^4$ theory}\nThe $\\phi^4$ theory is a standard simple example of a quantum field theory. The Batalin-Vilkovisky formalism does not say anything interesting about the $\\phi^4$ theory, because it is a simple bosonic field theory with no gauge symmetries. \n\nIn our setting, the $\\phi^4$ theory works as follows. We take $M$ to be a compact Riemannian manifold. Let \n$$\n\\mathscr{E} = \\mathscr{E}^0 \\oplus \\mathscr{E}^1 = C^{\\infty}(M,\\mathbb C) \\oplus \\Pi C^{\\infty}(M,\\mathbb C)\n$$ \nwhere $\\Pi$ refers to parity reversal. Let $e_0 \\in \\mathscr{E}^0$, $e_1 \\in \\mathscr{E}^1$ be the elements which correspond to the identity function on $M$. The odd symplectic pairing on $\\mathscr{E}$ is defined by \n$$\n\\ip{ \\phi e_0, \\psi e_1 } = \\int_M \\phi \\psi \n$$\nwhere $\\phi, \\psi \\in C^{\\infty}(M,\\mathbb C)$ and the integration is taken using the measure associated to the Riemannian metric.\n\nThe operator $Q$ is defined by \n\\begin{align*}\nQ (\\phi e_0 ) &= ( \\Delta \\phi + m^2 \\phi ) e_1 \\\\\nQ ( \\phi e_1 ) &= 0 \n\\end{align*}\nwhere $m \\in \\mbb R_{> 0}$ is a ``mass'' parameter, and $\\Delta$ is the Laplacian on $C^{\\infty}(M)$ associated to the Riemannian metric. \n\nThe action $S$ is defined by\n$$\nS ( \\phi e_0 + \\psi e_1 ) = \\int_M \\phi^4 .\n$$\nThen \n$$\n\\frac{1}{2} \\ip{ \\phi e_0 , Q \\phi e_0 } + S ( \\phi ) = \\int_M \\frac{1}{2} \\phi \\Delta \\phi + \\frac{1}{2} m^2 \\phi^2 + \\phi^4 \n$$\nwhich is the usual action for the $\\phi^4$ theory.\n\nThe gauge fixing operator $Q^{GF}$ on $\\mathscr{E}$ is defined by\n\\begin{align*}\nQ^{GF} (\\phi e_1 ) &= \\phi e_0 \\\\\nQ^{GF} ( \\phi e_0 ) &= 0 .\n\\end{align*}\nThe operator $H = [Q^{GF}, Q ]$ is given by\n$$\nH ( \\phi e_i) = ( \\Delta \\phi + m^2 \\phi ) e_i\n$$\nfor $i = 0,1$. As required, this is a generalised Laplacian. Also, \n$$\\mathscr{E} = \\Ker H \\oplus \\Im Q^{GF} \\oplus \\Im Q . 
$$ \nNote that these axioms would not be satisfied if $m = 0$, because $\Ker H \cap \Im Q^{GF}$ would not be zero. \n\nThe functional integral we will construct is \n$$\n\int_{\phi \in C^{\infty}(M)} \exp \left( \hbar^{-1} \int_M \frac{1}{2} \phi \Delta \phi + \frac{1}{2} m^2 \phi^2 + \phi^4 \right).\n$$\nOf course, this construction works if we use any polynomial in $\phi$ which is at least cubic in place of $\phi^4$. \n\nBecause the space of fields is concentrated in degrees $0$ and $1$, any local functional which is homogeneous of degree $0$ satisfies the renormalised quantum master equation. \n\n\subsection{Yang-Mills theory} \nWe will use a certain first order formulation of Yang-Mills theory, well-known in the physics literature \cite{MarZen96, Wit04}. This was studied in the Batalin-Vilkovisky formalism in \cite{Cos06a}. I don't see how to fit the standard formulation of Yang-Mills theory into the framework of this paper; the problem is that I don't see how to construct gauge fixing conditions of the type we need. I will refer to \cite{Cos06a} for details of this construction. \n\nLet $M$ be a compact oriented $4$-manifold, and let $V$ be a vector bundle on $M$ with a connection $A$ satisfying the Yang-Mills equation\n$$\n\d_A F(A)_+ = 0.
\n$$\nIn \cite{Cos06a} a certain differential graded algebra $\mathscr{B}$ was constructed from $(M,V,A)$, which looks like\n$$\n\xymatrix{ \Omega^{0} (\End(V)) \ar[dd]_{\d_{A}} \ar[ddrr]^{-[F(A)_{+},\quad]} & & & \text{ degree } 0 \\\\ \n & & & \\\\\n \Omega^{1}(\End(V)) \ar[dd]_{\d_{A+}} \ar'[dr]|{-[F(A)_{+}, \quad]} [ddrr] & \oplus & \Omega^{2}_{+} (\End(V)) \ar[dd]^{- \d_{A}} \ar[ddll]|(.3){\operatorname{Id}} & \text{ degree } 1 \\\\\n & & & \\\\\n \Omega^{2}_{+}(\End(V)) \ar[ddrr]^{-[F(A)_{+},\quad]} & \oplus & \Omega^{3} (\End(V)) \ar[dd]^{-\d_{A}} & \text{ degree } 2 \\\\\n & & & \\\\\n & & \Omega^{4} (\End(V)) & \text{ degree } 3 }\n$$\nThe arrows describe the differential $Q$ in $\mathscr{B}$. The algebra $\mathscr{B}$ also has a trace of degree $-3$, which comes from the trace over $V$ and integration over $M$. This trace induces an odd symmetric pairing $\Tr ab$ on $\mathscr{B}$. \n\nLet $\mathscr{E} = \mathscr{B}[-1]$ be $\mathscr{B}$ shifted by one, with differential $Q$ inherited from $\mathscr{B}$. The pairing on $\mathscr{B}$ induces a symplectic pairing on $\mathscr{E}$. Define a functional $S$ on $\mathscr{E}$ by \n$$\nS(a) = \Tr a^3.\n$$\nIt was observed in section 3.3 of \cite{Cos06a} that $\mathscr{E}$, with this action functional, is the Batalin-Vilkovisky odd symplectic space associated with the first order formulation of pure Yang-Mills gauge theory, as studied in for instance \cite{MarZen96,Wit04}. Indeed, $\mathscr{E}^{-1}= \mathscr{B}^0$ is the space of ghosts, $\mathscr{E}^0 = \mathscr{B}^1$ is the space of fields, $\mathscr{E}^1 = \mathscr{B}^2$ is the space of antifields, and $\mathscr{E}^2 = \mathscr{B}^3$ is the space of antighosts. \n\nThe choice of a metric on $M$ and a Hermitian metric on the vector bundle $V$ leads to a Hermitian metric on $\mathscr{B}$. The operator $Q^{GF}$ is the Hermitian adjoint of $Q$. This operator satisfies the axioms of a gauge fixing condition.
\n\n\subsection{Holomorphic Chern-Simons}\n\nLet $M$ be a Calabi-Yau manifold of odd complex dimension. Fix a holomorphic volume form $\Omega_{Vol}$ on $M$. Let $\mathfrak g$ be a complex Lie algebra with a non-degenerate, symmetric, invariant pairing $\ip{\ , \ }$. Let\n$$\n\mathscr{E} = \Omega^{0,\ast}(M) \otimes_\mathbb C \mathfrak g. \n$$\n$\mathscr{E}$ has an odd symplectic pairing, defined by\n$$\n\ip{\alpha \otimes X, \alpha' \otimes X' } = (-1)^{\abs{\alpha} }\int_M \Omega_{Vol} \wedge \alpha \wedge \alpha' \ip{X, X' } \n$$\nwhere $\alpha \in \Omega^{0,\ast}(M)$ and $X \in \mathfrak g$. \n\nDefine $S \in \mscr O_l(\mathscr{E})$ by\n$$\nS( \sum_i \alpha_i \otimes X_i ) = \sum_{i,j,k} (-1)^{(\abs{\alpha_i} + 1) (\abs{\alpha_j} + 1 )} \int_M \Omega_{Vol} \wedge \alpha_i \wedge \alpha_j \wedge \alpha_k \ip{ X_i, [X_j, X_k ] }.\n$$\nThis is the holomorphic Chern-Simons action functional.\n\nPicking a metric on $M$ leads to an operator $\br{\partial}^\star$ on $\Omega^{0,\ast}$, and so on $\mathscr{E}$. This is the gauge fixing operator. The operator $H= [\br{\partial},\br{\partial}^\star]$ is a generalised Laplacian.\n\nThe functional integral we will renormalise is\n$$\n\int_{x \in \Im \br{\partial}^\star} \exp\left({\frac{1}{2\hbar} \ip{ x, \br{\partial} x } + \frac{1}{\hbar} S(x + a)}\right) .\n$$\n\nThis theory can be generalised in several directions. Firstly, we can take $\mathfrak g$ to be a non-trivial holomorphic vector bundle on $M$ with a Lie algebra structure (i.e.\ a sheaf of Lie algebras over the sheaf of holomorphic functions on $M$). In that case, to get a gauge fixing condition would require the choice of both a metric on $M$ and a Hermitian metric on the vector bundle. \n\nIf $M$ is of even complex dimension, one can take $\mathfrak g$ to be a super-Lie algebra with an odd invariant pairing (see \cite{AleKonSch97}).
Of course, there is a version of this where we take a sheaf of such super-Lie algebras.\n\nYet another direction is to use $L_\\infty$ rather than Lie algebras. In that case the action $S$ is no longer purely cubic, but has higher-order contributions from the higher $L_\\infty$ operations. Such theories are also discussed in \\cite{AleKonSch97}.\n\n\\subsection{Higher dimensional Chern-Simons}\nWe have already discussed Chern-Simons theory in dimension three. One can readily generalise this theory to manifolds of arbitrary dimension, just as with holomorphic Chern-Simons theory. \n\nFor odd-dimensional manifolds, one takes a complex Lie algebra $\\mathfrak g$ with an invariant pairing, and sets $\\mathscr{E} = \\Omega^\\ast(M)\\otimes \\mathfrak g [1]$. The Chern-Simons action is given by essentially the same formula as in the $3$-dimensional case: \n$$\nS( \\sum_i \\alpha_i \\otimes X_i ) = \\sum_{i,j,k} (-1)^{(\\abs{\\alpha_i} + 1) (\\abs{\\alpha_j} + 1 )} \\int_M\\alpha_i \\wedge \\alpha_j \\wedge \\alpha_k \\ip{ X_i, [X_j, X_k ] }\n$$\nwhere $\\alpha_i \\in \\Omega^\\ast(M)$ and $X_i \\in \\mathfrak g$. The operator $Q$ is defined to be the de Rham differential $\\d$. If we pick a metric on $M$ then we can define $Q^{GF}$ to be the adjoint operator $\\d^\\star$. \n\nAs with holomorphic Chern-Simons theory, this can be generalised in various directions. Firstly, we could replace $\\mathfrak g$ by a locally trivial sheaf of Lie algebras. Or, we could use a sheaf of $L_\\infty$ algebras instead of Lie algebras.\n\nIn the case when $M$ is even-dimensional, we can take $\\mathfrak g$ to be an $L_\\infty$ algebra with an odd invariant pairing, or a locally trivial sheaf of such.\n\nA particularly interesting case occurs when $\\dim M = 2$. Two dimensional Chern-Simons theory (introduced in \\cite{AleKonSch97}) is essentially the same as the Poisson sigma model. This theory is used in the proof of Kontsevich's formality theorem \\cite{Kon03a,CatFel01}. 
\n\nIf $\\mathfrak g$ is an $L_\\infty$ algebra with an odd invariant pairing, then we can identify $\\mathfrak g^{even} = (\\mathfrak g^{odd})^{\\vee} [1]$. Thus, if we let $V = \\mathfrak g^{odd} [1]$, so that $V$ is a purely even vector space, then \n$$\n\\mathfrak g [1] = \\Pi T^\\ast V . \n$$\nThe $L_\\infty$ structure on $\\mathfrak g$ is described by a function $f$ on $\\mathfrak g[1] = \\Pi T^\\ast V$, satisfying the classical master equation $\\{f,f\\} = 0$. \n\nWe can identify functions on $\\Pi T^\\ast V$ with polyvector fields on $V$, that is, \n$$\n\\mscr O(\\Pi T^\\ast V) = \\oplus_i \\wedge^i T V\\otimes \\mscr O(V). \n$$\nThe Poisson bracket on $\\mscr O(\\Pi T^\\ast V)$ corresponds to the Schouten bracket on polyvector fields. \n\nThus, the $L_\\infty$ structure on $\\mathfrak g$ defines a generalised Poisson structure on $V$. If this $L_\\infty$ structure is purely quadratic in the $\\mathfrak g^{ev}[1]$ variables, then we find a Poisson structure in the classical sense on $V$.\n\nThe space\n$$\n\\mathscr{E} = \\Omega^\\ast( M ) \\otimes \\mathfrak g [ 1]\n$$\ncan be identified with the supermanifold of maps $\\Pi T M \\to \\Pi T^\\ast V$, leading to the sigma model interpretation of the theory. \n\n\n\\section{Differential operators}\n\\label{section differential operators}\n\n\n\\subsection{Functor of points approach to formal super geometry}\n\n\nIn order to clarify sign issues, we will use the ``functor of points'' approach to formal super geometry. A formal super-space is viewed as a functor from the category of nilpotent local commutative super-algebras to the category of sets. For example, the formal super-space $\\mathbb{A}^{m,n}$ sends $R \\mapsto \\left( R^{ev} \\right)^{\\oplus m} \\oplus \\left( R^{odd} \\right)^{\\oplus n}$. \n\nIf \n$$\nV = V^{ev} \\oplus V^{odd}\n$$\nis a $\\mathbb Z\/2$-graded vector space, the formal super-space $V$ sends $R \\mapsto (V \\otimes R)^{ev}$. 
\n\nAn even formal function on $V$ is then a natural transformation of functors $V \\to \\mathbb{A}^{1,0}$, and an odd formal function on $V$ is a natural transformation $V \\to \\mathbb{A}^{0,1}$. More explicitly, an even formal function $f$ on $V$ is a collection of maps \n$$f_R : (V \\otimes R)^{ev} \\to R^{ev},$$\nnatural with respect to morphisms $R \\to R'$ of nilpotent local commutative super-algebras. \n\nLet us denote by $\\mscr O'(V)$ the $\\mathbb Z\/2$-graded algebra of all formal functions on $V$. We can identify this algebra with the inverse limit \n$$\\mscr O'(V) = \\varprojlim \\Sym^\\ast V^\\vee \/ \\Sym^{\\ge n} V^\\vee. $$\n\nIn particular, if $\\mathscr{E}$ as above is the space of global sections of some super vector bundle $E$ on $M$, we can think of $\\mathscr{E}$ as an infinite dimensional formal super-space. The algebra $\\mscr O(\\mathscr{E})$ is a subalgebra of the algebra $\\mscr O'(\\mathscr{E})$. An element of $\\mscr O'(\\mathscr{E})$ is in $\\mscr O(\\mathscr{E})$ whenever its Taylor expansion is in terms of continuous linear maps on $\\mathscr{E}^{\\otimes n}$. \n\nWhen $M$ is a point, so that $\\mathscr{E}$ is simply a finite dimensional super vector space, $\\mscr O(\\mathscr{E})$ and $\\mscr O'(\\mathscr{E})$ coincide. \n\n\\subsection{Derivations}\nThe functor of points way of looking at things is useful (for instance) when thinking about derivations of the algebra $\\mscr O(\\mathscr{E})$. Everything we do could also be written more explicitly, but the advantage of the functor of points formalism is that it allows one to essentially forget about signs. \n\nLet $\\epsilon$ be an odd parameter. One can define the derivation $Q$ of $\\mscr O(\\mathscr{E})$ by the formula\n$$\nf ( x + \\epsilon Q x ) = f( x ) + \\epsilon (Q f ) (x) . \n$$\nThe left hand side is defined using the language of the functor of points, as follows. 
For each nilpotent local commutative superalgebra $R$ as above, $f$ gives a map \n$$f_{R \\otimes \\mathbb C[\\epsilon]} : (\\mathscr{E} \\otimes R \\otimes \\mathbb C[\\epsilon])^{ev} \\to\n\\begin{cases}\n (R \\otimes \\mathbb C[\\epsilon] )^{ev} \\text{ if } f \\text{ is even} \\\\ \n (R \\otimes \\mathbb C[\\epsilon] )^{odd} \\text{ if } f \\text{ is odd} . \n\\end{cases}\n$$\nIf $x \\in (\\mathscr{E} \\otimes R)^{ev}$, then $x + \\epsilon Q x$ is in $( \\mathscr{E} \\otimes R \\otimes \\mathbb C[\\epsilon] )^{ev}$. Thus, $f ( x + \\epsilon Q x ) \\in (R \\otimes \\mathbb C[\\epsilon])$. The coefficient of $\\epsilon$ in $f( x + \\epsilon Q x ) $ gives a map from $(\\mathscr{E} \\otimes R)^{ev}$ to $R^{ev}$ (if $f$ is odd), or to $R^{odd}$ (if $f$ is even), and so a formal functional on $\\mathscr{E}$ of parity opposite to that of $f$. This is defined to be $Q f$. \n\n\n\nA priori, this construction only yields a derivation of $\\mscr O'(\\mathscr{E})$, but one can check easily that this derivation preserves the subalgebra $\\mscr O(\\mathscr{E})$. In more explicit terms, if we identify\n$$\n\\mscr O(\\mathscr{E}) = \\prod \\Hom ( \\mathscr{E}^{\\otimes n}, \\mathbb C ) _{S_n} ,\n$$\nthen the derivation $Q$ preserves each factor $\\Hom(\\mathscr{E}^{\\otimes n} , \\mathbb C)_{S_n}$. On each such factor, $Q$ is simply the usual tensor product differential on each $\\mathscr{E}^{\\otimes n}$, which we then transfer to the dual space and to the space of $S_n$ coinvariants. \n\n\n To compute the commutator of derivations in terms of the functor of points is very simple. Let $D_1,D_2$ be derivations of $\\mscr O'(V)$, where $D_i$ is given by an automorphism of $\\mscr O'(V) \\otimes \\mathbb C[\\epsilon_i] \/ (\\epsilon_i^2)$, which modulo $\\epsilon_i$ is the identity. The parameter $\\epsilon_i$ has the same parity as $D_i$. 
We can compute the commutator by \n$$\n(1 + \\epsilon_1 D_1 ) ( 1 + \\epsilon_2 D_2 ) ( 1 - \\epsilon_1 D_1 )(1 - \\epsilon_2 D_2 ) = 1 + \\epsilon_1 \\epsilon_2 [D_1,D_2] .\n$$\n\nLet $e \\in \\mathscr{E}$. Differentiation with respect to $e$ is an operator $\\partial_e : \\mscr O'(\\mathscr{E}) \\to \\mscr O'(\\mathscr{E})$ defined by the formula \n$$\nf ( x + \\epsilon e ) = f ( x ) + \\epsilon \\partial_e f \n$$\nwhere $\\epsilon$ is an auxiliary parameter of the same parity as $e$, and of square $0$. \n\nIt is easy to see that this derivation preserves $\\mscr O(\\mathscr{E})$. Indeed, if we identify $\\mscr O(\\mathscr{E}) = \\prod \\Hom ( \\mathscr{E}^{\\otimes n}, \\mathbb C ) _{S_n}$, then the derivation $\\partial_e$ is (up to sign) the map \n$$ \\Hom(\\mathscr{E}^{\\otimes n}, \\mathbb C)_{S_n} \\to \\Hom(\\mathscr{E}^{\\otimes n-1}, \\mathbb C)_{S_{n-1}}$$ \ngiven by contraction with $e$. \n\n\nOne can compute easily that \n$$\n[ \\partial_e , Q] = \\partial_{Q e} .\n$$\n\n\\subsection{Convolution operators}\n\nIf $K \\in \\mathscr{E} \\otimes \\mathscr{E}$, define a convolution map $K \\star : \\mathscr{E} \\to \\mathscr{E}$ by the formula\n$$\nK \\star e = (-1)^{\\abs e} \\sum K' \\otimes \\ip{K'', e}\n$$\nwhere $K = \\sum K' \\otimes K''$. The reason for the choice of sign in the definition of the convolution is that \n$$\n( Q K ) \\star e = [Q, K\\star ] e.\n$$\nThe first $Q$ in this formula denotes the tensor product differential on $\\mathscr{E} \\otimes \\mathscr{E}$.\n\n\n\\begin{lemma}\nAn element $K \\in \\mathscr{E} \\otimes \\mathscr{E}$ is symmetric if and only if the map $K \\star $ is self adjoint, and is antisymmetric if and only if $K \\star $ is skew self adjoint. 
\n\\end{lemma}\n\n\n\\subsection{Order two differential operators}\nLet\n$$\n\\phi = \\sum \\phi' \\otimes \\phi'' \\in \\Sym^2 \\mathscr{E}.\n$$\nAssociated to $\\phi$ we have an order two differential operator $\\partial_{\\phi}$ on $\\mscr O(\\mathscr{E})$, given by the formula\n$$\n\\partial_{\\phi} = \\frac{1}{2} \\sum \\partial_{\\phi''} \\partial_{\\phi'} = \\frac{1}{2} \\sum (-1)^{\\abs{\\phi'} \\abs{\\phi'' } }\\partial_{\\phi'} \\partial_{\\phi''}.\n$$\nEven though the sum may be infinite, this expression is well defined; up to sign, $\\partial_\\phi$ comes from the map \n$$ \\Hom(\\mathscr{E}^{\\otimes n}, \\mathbb C)_{S_n} \\to \\Hom(\\mathscr{E}^{\\otimes n-2}, \\mathbb C)_{S_{n-2}}$$ \ngiven by contraction with $\\phi$. \n\nThe differential operators $\\partial_\\phi$ mutually super-commute, and \n$$\n[\\partial_\\phi, Q] = \\partial_{Q \\phi} . \n$$\n\nIf $e \\in \\mathscr{E}$, let $e^\\vee : \\mathscr{E} \\to \\mathbb C$ be the continuous linear functional given by $e^\\vee( e' ) = \\ip{e',e}$. Note that $e^\\vee \\in \\mscr O(\\mathscr{E})$, so there is an associated left multiplication map $\\mscr O(\\mathscr{E}) \\to \\mscr O(\\mathscr{E})$.\nThe following lemma will be useful later.\n\\begin{lemma}\nIf $\\phi \\in \\Sym^2 \\mathscr{E}$ is even, then\n$$\n[\\partial_\\phi, e^\\vee ] = -\\partial_{\\phi \\star e } .\n$$\n\\end{lemma}\n\\begin{proof}\nIn the language we are using for formal functions, if $R$ is an auxiliary nilpotent ring, and $r \\in R$, $\\eta \\in \\mathscr{E}$ are elements of the same parity, then by definition\n$$\ne^\\vee (r \\eta ) = \\ip{r \\eta , e } = r \\ip{\\eta , e} .\n$$\nTherefore, in particular, \n$$\n\\partial_{e'} e^\\vee = \\ip{ e', e } \n$$\nfor all $e' \\in \\mathscr{E}$. 
\n\nWe can assume\n$$\n\\phi = \\phi' \\otimes \\phi'' + (-1)^{\\abs{\\phi'} \\abs{\\phi''} } \\phi'' \\otimes \\phi'.\n$$\nThen\n$$\n\\partial_\\phi = \\partial_{\\phi''} \\partial_{\\phi'}.\n$$\nIt follows that\n$$\n[\\partial_\\phi, e^\\vee ] = \\ip{\\phi',e} \\partial_{\\phi''} + (-1)^{\\abs{\\phi'} (\\abs{e}+1) } \\ip{\\phi'',e} \\partial_{\\phi'} \n$$\nwhereas\n$$\n\\phi \\star e = (-1)^{\\abs{e} } \\ip{\\phi'', e} \\phi' + (-1)^{\\abs{e} + \\abs{\\phi''} \\abs{\\phi' }} \\ip{\\phi', e} \\phi''.\n$$\nThe lemma follows from these two equations. \n\\end{proof} \n\n\n\n\n\\subsection{Heat kernels}\nA heat kernel for the operator $H : \\mathscr{E} \\to \\mathscr{E}$ is an element \n$$K_t \\in\\mathscr{E}^{\\otimes 2} = \\Gamma(M^2,E \\boxtimes E),$$ defined for $t \\in \\mbb R_{> 0}$, such that \n$$\nK_t \\star e = e^{- t H} e \n$$\nfor all $e \\in \\mathscr{E}$. Because $H$ is a generalised Laplacian, the results of \\cite{BerGetVer92} imply that it admits a unique heat kernel $K_t$, which is also smooth as a function of $t$.\n\nLet \n$$L_t = (Q^{GF} \\otimes 1) K_t$$ \nso that $L_t$ is the kernel representing the operator $Q^{GF} e^{-t H}$. This is a smoothing operator for $0< t \\le \\infty$, so that $L_t \\in \\mathscr{E} \\otimes \\mathscr{E}$ for such $t$. \n\n\nObserve that \n\\begin{align*}\n(Q \\otimes 1 + 1 \\otimes Q) K_t &= 0 \\\\\n\\frac { \\d } { \\d t } K_t + ( Q \\otimes 1 + 1 \\otimes Q) L_t &= 0 .\n\\end{align*}\nThese formulae together say that the expression\n$$\nK_t + \\d t L_t \n$$\nis a closed element of $\\mathscr{E} \\otimes \\mathscr{E} \\otimes \\Omega^\\ast(\\mbb R_{> 0})$, where we give this space the tensor product differential. \n\nLet\n$$\nP(\\varepsilon,T)= \\int_{\\varepsilon}^T L_t \\d t = \\int_{\\varepsilon}^{T} (Q^{GF} \\otimes 1)K_t \\d t .\n$$\nNote that for $0 < \\epsilon < T \\le \\infty$, $P(\\epsilon,T)$ is in $\\mathscr{E} \\otimes \\mathscr{E}$. This is clear if $0 < \\epsilon < T < \\infty$. 
If $T = \\infty$, one needs to check that \n$$\n\\alpha \\mapsto \\int_\\epsilon^\\infty Q^{GF} e^{-t H} \\alpha \\d t\n$$\nis a smoothing operator for any $\\epsilon > 0$. The only problems that could occur would be on the $0$ eigenspace of $H$, but $Q^{GF}$ annihilates this eigenspace. This is one of the axioms of a gauge fixing condition. \n\nThe reason the kernels $P(\\epsilon,T)$ are important comes from the following lemma.\n\\begin{lemma}\nThe operator\n\\begin{align*}\n\\Im Q &\\to \\Im Q^{GF} \\\\\n\\alpha &\\mapsto \\int_0^\\infty Q^{GF} e^{-t H} \\alpha \\d t\n\\end{align*}\nis the inverse to the isomorphism $Q : \\Im Q^{GF} \\to \\Im Q$.\n\\end{lemma}\nThus, the singular kernel $P(0,\\infty)$ represents $Q^{-1}$.\n\\begin{proof}\nIndeed,\n$$\n\\int_0^\\infty Q^{GF} e^{-t H} Q \\alpha \\d t= - Q \\int_0^\\infty Q^{GF} e^{-t H} \\alpha \\d t + \\int_0^\\infty H e^{-t H} \\alpha \\d t . \n$$\nSince $(Q^{GF})^2 = 0$, and $\\alpha \\in \\Im Q^{GF}$, the first term on the right hand side is zero. Thus, \n$$\n\\int_0^\\infty Q^{GF} e^{-t H} Q \\alpha \\d t = - \\int_0^\\infty \\frac{\\d } {\\d t } e^{-t H} \\alpha \\d t = \\alpha - \\pi \\alpha\n$$ \nwhere $\\pi : \\mathscr{E} \\to \\Ker H$ is the projection onto the zero eigenspace of $H$. \n\nSince $\\Ker H \\cap \\Im Q^{GF} = 0$, and $\\Im Q^{GF}$ is a direct sum of $H$ eigenspaces, we see that $\\pi \\alpha = 0$, so that\n$$\n\\int_0^\\infty Q^{GF} e^{-t H} Q \\alpha \\d t = \\alpha \n$$\nas desired. \n\\end{proof} \n\n\\subsection{Functional integrals in terms of differential operators}\nWhen $M$ is a point, we will be able to express certain integrals over the finite dimensional vector space $\\mathscr{E}$ in terms of the differential operators on $\\mscr O(\\mathscr{E})$ introduced above. 
When $\\dim M > 0$, so that $\\mathscr{E}$ is infinite dimensional, we will attempt to take this as a definition of the functional integral.\n\n\\begin{definition}\nLet $\\Phi \\in \\Sym^2 \\mathscr{E}$ be any even element, and $S \\in \\mscr O(\\mathscr{E}, \\mathbb C [[\\hbar]])$ be an even element, which modulo $\\hbar$ is at least cubic. Define\n$$\n\\Gamma( \\Phi, S ) = \\hbar \\log\\left( \\exp (\\hbar \\partial_\\Phi ) \\exp ( S \/ \\hbar ) \\right) \\in \\mscr O(\\mathscr{E}, \\mathbb C [[\\hbar ]] ) . \n$$\n\\end{definition}\nIt is easy to check that this expression is well defined. However, if $S$ was not at least cubic, but contained a quadratic term, this expression would contain some non-convergent infinite sums. \n\nLet us assume for a moment that $M$ is a point.\nLet $\\left(\\Im Q^{GF}\\right)_\\mbb R$ be a real slice of $\\Im Q^{GF} \\subset \\mathscr{E}$, with the property that the quadratic form $\\ip{x,Qx}$ is negative definite on $\\left(\\Im Q^{GF}\\right)_\\mbb R$. Then the integral\n$$\n\\int_{x \\in \\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp\\left( \\tfrac{1}{2} \\ip{x,Q x} \/ \\hbar + S(x + a) \/ \\hbar \\right) \\d \\mu\n$$\nis well defined as a formal series in the variables $\\hbar$ and $a \\in \\mathscr{E}$. We use an $\\hbar$-dependent Lebesgue measure $\\d \\mu$ normalised so that \n$$\n\\int_{x \\in \\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp\\left( \\tfrac{1}{2} \\ip{x,Q x} \/ \\hbar \\right) \\d \\mu = 1 . \n$$\n\\begin{lemma} If $M$ is a point,\n\\label{lemma integral differential}\n$$ \\Gamma(P(0,\\infty), S) (a) = \\hbar \\log \\int_{x \\in \\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp\\left( \\tfrac{1}{2} \\ip{x,Q x} \/ \\hbar + S(x + a) \/ \\hbar \\right) \\d \\mu . 
$$\n\\label{lemma integral differential general}\n\\end{lemma}\n\\begin{proof}\nIt suffices to show that for any polynomial $f \\in \\mscr O(\\mathscr{E})$\n$$\n \\exp ( \\hbar \\partial_{P(0,\\infty)} ) f = \\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) f (x+a).\n$$\nBoth sides are functions of $a \\in \\mathscr{E}$. \n\nLet $v \\in \\left(\\Im Q^{GF}\\right)_\\mbb R \\subset \\mathscr{E}$. Define a linear function $l_v = (Q v)^\\vee$ on $\\left(\\Im Q^{GF}\\right)_\\mbb R$, so that\n$$\nl_v(x) = \\ip{x,Q v}.\n$$\nThen\n$$\n\\partial_v \\ip{x, Q x} = 2l_v (x). \n$$\nWe can integrate by parts, to find\n\\begin{multline*}\n\\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) l_v (x + a) f(x+a) \\\\ = \\hbar \\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\left(\\partial_v \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) \\right) f (x+a) \n\\\\ \\shoveright{ + l_v(a) \\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) f (x+a) } \\\\\n= - \\hbar \\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) \\partial_v f (x+a)\\\\\n + l_v(a) \\int_{\\left(\\Im Q^{GF}\\right)_\\mbb R} \\exp \\left( \\tfrac{1}{2} \\ip{x,Q x} \/\\hbar \\right) f (x+a) \n\\end{multline*}\nA similar identity holds for $\\exp(\\hbar \\partial_{P(0,\\infty)}) f$, namely\n$$\n \\exp( \\hbar \\partial_{P(0,\\infty)}) l_v f = - \\hbar \\exp (\\hbar \\partial_{P(0,\\infty)}) \\partial_v f + l_v(a) \\exp( \\hbar \\partial_{P(0,\\infty)}) f .\n$$\nThis follows from the equation\n$$\n[\\partial_{P(0,\\infty)}, l_v ] = [\\partial_{P(0,\\infty)}, (Q v)^\\vee ] = - \\partial_{ P(0,\\infty) \\star (Q v) } = - \\partial_{ v } .\n$$\nThe last equation holds because $P(0,\\infty) \\star$ is the operator $Q^{-1} : \\Im Q \\to \\Im Q^{GF}$. 
\n\nThese identities allow us to use induction to reduce to the case when $f$ is constant. The normalisation in the measure on $\\left(\\Im Q^{GF}\\right)_\\mbb R$ takes care of this initial case. \n\n\n\n\\end{proof} \n\n\\section{Regularisation}\n\n\n\n\\subsection{Regularisation}\n\nOne can write the formal equality\n$$\n \\hbar \\log \\int_{e \\in (\\Im Q^{GF})_\\mbb R} \\exp( \\tfrac{1}{2} \\ip{e,Qe} \/ \\hbar + S ( e + a) \/ \\hbar ) = \\lim_{ \\epsilon\\to 0} \\Gamma (P(\\epsilon,\\infty), S)\n$$\nwhere as above,\n$$\n \\Gamma (P(\\epsilon,\\infty), S) = \\hbar \\log \\left( \\exp (\\hbar \\partial_{P(\\epsilon,\\infty)}) \n \\exp ( S \/ \\hbar) \\right) . \n$$\nWhen $\\dim M = 0$, this equality was proved in Lemma \\ref{lemma integral differential}. When $\\dim M > 0$, we will take this equality as an attempt to define the functional integral over $(\\Im Q^{GF})_\\mbb R$. \n\nWhen $\\dim M > 0$, although the expression $\\Gamma(P(\\epsilon,\\infty), S)$ is well defined for all $\\epsilon > 0$, the limit $\\lim_{\\epsilon \\to 0} \\Gamma(P(\\epsilon,\\infty), S)$ is divergent. This is because $P(0,\\infty)$ is a distributional section of the vector bundle $E \\boxtimes E$ on $M^2$, with singularities on the diagonal. The expression $\\exp (\\hbar \\partial_{P(0,\\infty)} ) \\exp ( S \/ \\hbar)$ involves multiplication of the distribution $P(0,\\infty)$ with distributions with support along the diagonal. In other words, it involves integrals over products of $M$ where the integrand has singularities on the diagonal. This is the problem of ultraviolet singularities.\n\nAs I explained in the introduction, we will renormalise the limit by subtracting certain counter-terms from the action $S$. In order to do this, we need some control over the small $\\epsilon$ asymptotics of $\\Gamma(P(\\varepsilon,\\infty), S)$. 
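When $\dim M = 0$ the formal equality above is just Lemma \ref{lemma integral differential}, whose content is that the operator $\exp(\hbar \partial_{P(0,\infty)})$ implements Gaussian integration. This can be checked by hand in one variable. In the sketch below (an illustrative assumption: the positive symbol $s$ stands for $\hbar$ times the one-dimensional propagator, so that $\hbar \partial_P$ becomes $(s/2)\, \d^2/\d a^2$), both sides equal $a^4 + 6 s a^2 + 3 s^2$.

```python
import sympy as sp

a = sp.symbols('a')
s = sp.symbols('s', positive=True)   # s plays the role of hbar * P(0, infinity)
f = a**4

# exp(hbar * d_P) on polynomials: exp((s/2) d^2/da^2), a finite sum here
lhs = sum((s/2)**k / sp.factorial(k) * sp.diff(f, a, 2*k) for k in range(3))

# the Gaussian integral with propagator s, i.e. E[f(a + x)] for x ~ N(0, s)
x = sp.symbols('x')
gaussian = sp.exp(-x**2 / (2*s)) / sp.sqrt(2 * sp.pi * s)
rhs = sp.integrate(sp.expand(f.subs(a, a + x)) * gaussian, (x, -sp.oo, sp.oo))

assert sp.simplify(sp.expand(lhs) - sp.expand(rhs)) == 0
```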
\n\nLet us write\n$$\n\\Gamma(P(\\epsilon,T),S) = \\sum_{i\\ge 0,k \\ge 0} \\hbar^i \\Gamma_{i,k} (P(\\epsilon,T),S)\n$$\nwhere $\\Gamma_{i,k}(P(\\varepsilon,T),S)$ is homogeneous of order $k$ as a formal functional of $e \\in \\mathscr{E}$. This expression is the Taylor expansion of $\\Gamma$ in the variables $\\hbar$ and $e \\in \\mathscr{E}$. \n\nWe will construct a small $\\epsilon$ asymptotic expansion for each $\\Gamma_{i,k}(P(\\epsilon,T), S)$. This expansion will take values in a certain subalgebra $\\mathscr{A}$ of the algebra of analytic functions on $\\epsilon \\in (0,\\infty)$. \n\\begin{definition}\nLet $\\mathscr{A} \\subset C^{\\infty}( (0,\\infty) )$ be the subalgebra spanned over $\\mathbb C$ by functions of $\\epsilon$ of the form\n$$\nf(\\epsilon ) = \\int_{U \\subset (\\epsilon,1)^n } \\frac{ F(t_1,\\ldots,t_n)^{1\/2} } { G(t_1,\\cdots, t_n)^{1\/2} } \\d t_1 \\ldots \\d t_n \n$$\nand functions of the form\n$$\nf(\\epsilon ) = \\int_{U \\subset (\\epsilon,1)^{n-1} } \\frac{ F(t_1,\\ldots,t_n = \\varepsilon)^{1\/2} } { G(t_1,\\ldots, t_n = \\varepsilon)^{1\/2} } \\d t_1 \\cdots \\d t_{n-1}\n$$\nwhere \n\\begin{enumerate}\n\\item\n$F, G \\in \\mathbb Z_{\\ge 0} [t_1,\\ldots, t_n] \\setminus \\{0\\}$; $n$ can take on any value. \n\\item\nthe region of integration $U$ is an open subset cut out by finitely many equations of the form $t_i^l > t_j$, for $l \\in \\mathbb Z$. \n\\end{enumerate}\n\\end{definition}\n``Spanned'' means in the non-topological sense; we only allow finite sums. Thus, $\\mathscr{A}$ has a countable basis, and every element is written as a finite sum of basis elements. We give $\\mathscr{A}$ the trivial topology, where all subspaces are closed. \n\nThe details in the definition of $\\mathscr{A}$ aren't all that important; we could always use a larger algebra containing $\\mathscr{A}$. 
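Two concrete elements of $\mathscr{A}$, checked symbolically below, illustrate the two behaviours that occur: taking $F = 1$ and $G = t^2$ gives $f(\epsilon) = -\log \epsilon$, which diverges as $\epsilon \to 0$, while $F = 1$ and $G = t$ gives $f(\epsilon) = 2 - 2\sqrt{\epsilon}$, which has a finite limit. (The code is only a sketch verifying these two sample integrals.)

```python
import sympy as sp

eps, t = sp.symbols('epsilon t', positive=True)

# F = 1, G = t^2: a logarithmically divergent element of the algebra A
f1 = sp.integrate(1 / t, (t, eps, 1))
assert sp.simplify(f1 + sp.log(eps)) == 0          # f1 = -log(epsilon)

# F = 1, G = t: an element with a finite epsilon -> 0 limit
f2 = sp.integrate(1 / sp.sqrt(t), (t, eps, 1))
assert sp.limit(f2, eps, 0) == 2                   # f2 = 2 - 2*sqrt(epsilon)
```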
\n\n\\begin{thmA}\n\\begin{enumerate}\n\\item\nFor each $i$ and $k$, there exist functions $f_r \\in \\mathscr{A}$, $r \\in \\mathbb Z_{\\ge 0}$, such that there is a small $\\varepsilon$ asymptotic expansion of the form \n$$\n\\Gamma_{i,k}(P(\\epsilon,T), S(\\epsilon, \\hbar) ) (e) \\simeq \\sum_{r \\ge 0} f_r (\\epsilon) \\Phi_r (T,e) \n$$\nwhere $e \\in \\mathscr{E}$, and each $\\Phi_r$ is in $\\mscr O(\\mathscr{E}, C^{\\infty}((0,\\infty)))$. \n\\item\nThe $\\Phi_r(T,e)$ have a small $T$ asymptotic expansion\n$$\n\\Phi_r(T,e) = \\sum g_q(T) \\Psi_{q,r} (e)\n$$ where the $\\Psi_{q,r} \\in \\mscr O_l(\\mathscr{E})$ are local functionals of $e$, and $g_q(T)$ are certain smooth functions of $T$. \n\\item\nIf $k > 0$, so $\\Psi_{q,r}$ is a non-constant function on $\\mathscr{E}$, then we can speak of the germ of each $\\Psi_{q,r}$ near a point $x \\in M$. This germ only depends on the germ of the data $(E, \\ip{\\ , \\ }, Q, Q^{GF}, S )$ near $x$. \n\\end{enumerate}\n\\label{theorem_analytic_continuation} \n\\end{thmA}\n\n\\begin{remark}\nThe last point is a little delicate. For instance, for any $t > 0$, the germ of the heat kernel $K_t$ near a point $(x,x)$ in $M^2$ depends on the \\emph{global} behaviour of the elliptic operator $H$. Only the $t \\to 0$ asymptotics of the germ of $K_t$ depends on the germ of $H$.\n\\end{remark}\nThe appendix contains a proof of a more general version of this theorem, as well \nas a precise statement of what I mean by small $\\varepsilon$ asymptotic expansion. \n\n\n\\section{Renormalisation}\nThe next step in constructing the quantum field theory is \\emph{renormalisation}. 
This amounts to replacing our action functional $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ by a series\n$$\nS^R(\\hbar,\\epsilon) = S(\\hbar) - S^{CT}(\\hbar,\\epsilon) = S - \\sum_{i > 0, k \\ge 0} \\hbar^i S^{CT}_{i,k}(\\epsilon).\n$$\nThe $S_{i,k}^{CT}$ are known as counter-terms; they are $\\varepsilon$-dependent local functionals on $\\mathscr{E}$, homogeneous of degree $k$. This renormalised action functional $S^R$ is required to be such that the $\\epsilon \\to 0$ limit of $\\Gamma( P(\\varepsilon,T), S^R (\\varepsilon) )$ exists. Recall that \n$$\nP(\\varepsilon,T)= \\int_{\\varepsilon}^T (Q^{GF} \\otimes 1) K_t \\d t\n$$\nso that $\\lim_{\\varepsilon \\to 0}\\Gamma(P(\\varepsilon,T), S^R(\\varepsilon) )$ will be our renormalised effective action.\n\nIn order to perform this renormalisation, we need to pick a renormalisation scheme. Let $\\mathscr{A}_{\\ge 0} \\subset \\mathscr{A}$ be the subspace of those functions $f$ such that $\\lim_{\\varepsilon \\to 0} f$ exists. \n\\begin{definition}\nA \\emph{renormalisation scheme} is a subspace $\\mathscr{A}_{< 0} \\subset \\mathscr{A}$ complementary to $\\mathscr{A}_{\\ge 0}$, so that \n$$\n\\mathscr{A} = \\mathscr{A}_{< 0} \\oplus \\mathscr{A}_{\\ge 0} . \n$$\n\\end{definition}\nNote that a renormalisation scheme exists, simply because $\\mathscr{A}$ is a vector space with a countable basis.\n\nThe choice of a renormalisation scheme allows us to extract the singular part of functions $f \\in \\mathscr{A}$. The singular part of $f$ is simply the projection of $f$ onto $\\mathscr{A}_{< 0}$. \n\n\n\n\n\n\\begin{thmB}\nLet $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ be an even functional which, modulo $\\hbar$, is at least cubic. 
Then there exists a unique series\n$$S^{CT}(\\varepsilon,\\hbar) = \\sum_{i \\ge 1, k \\ge 0} \\hbar^i S_{i,k}^{CT}(\\varepsilon) $$\nsuch that \n\\begin{enumerate}\n\\item\neach $S_{i,k}^{CT}(\\varepsilon)$ is an element of $\\mscr O_l(\\mathscr{E}, \\mathscr{A}_{< 0})$\nwhich is homogeneous of degree $k$ as a function on $\\mathscr{E}$; \n\\item\nthe limit $\\lim_{\\epsilon \\to 0} \\Gamma(P(\\varepsilon,T),S - S^{CT})$ exists. \n\\end{enumerate}\n\nIf $k > 0$, the germ of the counter-term $S_{i,k}^{CT}$ at $x \\in M$ depends only on the germ of the data $(E, \\ip{\\ , \\ }, Q, Q^{GF}, S )$ near $x$. \n\\label{theorem renormalisation}\n\\end{thmB}\nWe use the notation \n$$\\Gamma^R(P(0,T), S) = \\lim_{\\varepsilon \\to 0} \\Gamma(P(\\varepsilon,T),S - S^{CT}). $$ $\\Gamma^R(P(0,\\infty), S) $ is a renormalised functional integral.\n\n\\begin{proof}\n\nBefore I begin the proof, let us introduce some notation. Let us give the set $\\mathbb Z_{\\ge 0} \\times \\mathbb Z_{\\ge 0}$ the lexicographic ordering, so that $(i,k) < (i',k')$ if $i < i'$, or if $i = i'$ and $k < k'$. \n\nLet $A \\in \\mscr O(\\mathscr{E}, \\mathbb C[[\\hbar]])$ be any (not necessarily local) functional. As before, we will write \n$$\nA = \\sum_{i,k} \\hbar^i A_{i,k} \n$$\nwhere $A_{i,k} \\in \\mscr O(\\mathscr{E})$ is a homogeneous functional of degree $k$. Let us write \n$$\nA_{\\le (I,K)} = \\sum_{(i,k) \\le (I,K) } \\hbar^i A_{i , k}.\n$$\n\nThe proof of this theorem is actually very simple; no graph combinatorics are required. All we need to do is to construct the $S_{i,k}^{CT}$ inductively using the lexicographic ordering on $\\mathbb Z_{\\ge 0} \\times \\mathbb Z_{\\ge 0}$. \n\n\nSo, let us suppose, by induction, that we have constructed \n$$S_{i,k}^{CT}(\\varepsilon, T) \\in \\mscr O_l(\\mathscr{E}, \\mathscr{A}_{< 0} \\otimes C^{\\infty}(\\mbb R_{> 0}))$$ \nfor all $(i,k) < (I,K)$, which are homogeneous of degree $k$ as functions of $\\alpha \\in \\mathscr{E}$. 
Here, $\\mbb R_{> 0}$ has coordinate $T$. \n\nLet \n$$\nS^{CT}_{<(I,K)} = \\sum_{(i,k) < (I,K)} \\hbar^i S^{CT}_{i,k} .\n$$\nWe can write\n$$\n\\Gamma ( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} )= \\sum_{i,k \\ge 0} \\hbar^i \\Gamma_{i,k}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} ) . \n$$\nLet us make the following further induction assumptions: \n\\begin{enumerate}\n\\item\nthe limit $\\lim_{\\varepsilon \\to 0} \\Gamma_{i,k}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} )$ exists, for all $(i,k) < (I,K)$. \n\\item\neach $S^{CT}_{i,k}$ is independent of $T$ if $(i,k) <(I,K)$. \n\\end{enumerate}\n\nNow simply let\n$$\nS^{CT}_{I,K} = \\text{Singular part of } \\Gamma_{I,K}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} ).\n$$\nBy ``singular part'' I mean the following. We take the small $\\varepsilon$ expansion of the right hand side, of the form $\\sum f_r(\\varepsilon) \\Phi_r( T, a)$ where $f_r(\\varepsilon) \\in \\mathscr{A}$. The singular part is obtained by replacing each $f_r$ by its projection onto $\\mathscr{A}_{< 0}$. \n\nNote that \n$$\n \\Gamma_{i,k}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} - S^{CT}_{I,K}) = \\Gamma_{i,k}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} ) - \\delta_{i,I} \\delta_{k,K} S^{CT}_{I,K} \n$$\nif $(i,k) \\le (I,K)$. It follows that $\\Gamma_{i,k}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} - S^{CT}_{I,K}) $ is non-singular (i.e.\\ has a well-defined $\\varepsilon \\to 0$ limit) if $(i,k) \\le (I,K)$. \n\nTo prove the result, it remains to prove the following. \n\\begin{enumerate}\n\\item\n$S^{CT}_{I,K}$ is independent of $T$.\n\\item\n$S^{CT}_{I,K}$ is a local functional. \n\\item\n$S^{CT}_{0,K} = 0$ (that is, there are no tree-level counter-terms).\n\\end{enumerate}\n\n\n\\begin{lemma}\n$S^{CT}_{I,K}$ is independent of $T$. 
\n\\label{lemma counterterms independent T}\n\\end{lemma}\n\\begin{proof}\nObserve that \n$$\nP(\\varepsilon,T') - P(\\varepsilon,T)= P(T,T') = \\int_{T}^{T'} ( Q^{GF} \\otimes 1) K_t \\d t\n$$\nis in $\\mathscr{E} \\otimes \\mathscr{E}$ (that is, it has no singularities). Let \n$$\nA(\\varepsilon,\\hbar) \\in \\mscr O(\\mathscr{E}, \\mathscr{A}_{\\ge 0} \\otimes \\mathbb C[[\\hbar]])\n$$\nbe any (not necessarily local) functional. Non-singularity of $P(T,T')$ implies that \n$$\n\\Gamma_{\\le (I,K)} ( P(T,T'), A(\\varepsilon,\\hbar))\n$$\nis non-singular (i.e., has a well-defined $\\varepsilon \\to 0$ limit). \n\nWe know by induction that $ \\Gamma_{\\le (I,K)}(P(\\varepsilon,T), S - S^{CT}_{\\le(I,K)}(\\varepsilon,T) ) $ is non-singular. It follows that\n$$\n\\Gamma_{\\le(I,K)} \\left( P(T,T') , \\Gamma_{\\le (I,K)}\\left( P(\\varepsilon,T), S - S^{CT}_{\\le(I,K)}(\\varepsilon,T) \\right) \\right) \n$$\nis non-singular. But,\n\\begin{multline*}\n\\Gamma_{\\le(I,K)} \\left( P(T,T') , \\Gamma_{\\le(I,K)}\\left( P(\\varepsilon,T), S - S^{CT}_{\\le(I,K)}(\\varepsilon,T) \\right) \\right) \\\\ \n\\begin{split}\n& = \\Gamma_{\\le(I,K)}\\left(P(\\varepsilon,T'), S- S^{CT}_{\\le(I,K)} (\\varepsilon,T)\\right) \\\\ \n& = \\Gamma_{\\le(I,K)}\\left(P(\\varepsilon,T'), S - S^{CT}_{<(I,K)} (\\varepsilon,T') \\right) - S^{CT}_{(I,K)} (\\varepsilon,T)\n \\end{split}\n\\end{multline*}\n(where we are using the induction assumption that $S^{CT}_{<(I,K)} (\\varepsilon,T') = S^{CT}_{<(I,K)} (\\varepsilon,T)$).\n\nThis makes it clear that\n\\begin{align*}\nS^{CT}_{(I,K)} (\\varepsilon,T) &= \n\\text{Singular part of } \\Gamma_{\\le(I,K)}\\left(P(\\varepsilon,T'), S - S^{CT}_{<(I,K)} (\\varepsilon,T') \\right)\\\\\n&= S^{CT}_{(I,K)} (\\varepsilon,T')\n\\end{align*}\nas desired. 
\n\\end{proof}\n\n\n\\begin{lemma}\n$S^{CT}_{I,K}$ is local, so that \n$$S^{CT}_{I,K} \\in \\mscr O_l ( \\mathscr{E}, \\mathscr{A}_{< 0} ).$$ \n\\end{lemma}\n\\begin{proof}\nThis follows immediately from the fact that $S^{CT}_{I,K}$ is independent of $T$, and from the $T \\to 0$ asymptotic expansion of the singular part of $\\Gamma_{I,K}( P(\\varepsilon,T), S - S^{CT}_{<(I,K)} )$ proved in theorem A.\n\\end{proof}\n\n\\begin{lemma}\n$S_{0,k}^{CT} = 0$.\n\\end{lemma}\n\\begin{proof}\nAll we need to show is that $\\Gamma_{0,k} (P(\\varepsilon,T), S ) $ is regular, for all $k$. This is an easy corollary of Lemma \\ref{lemma local functional vector field}. Namely, if we write $S = \\sum \\hbar^i S_{i,k}$ as usual, then Lemma \\ref{lemma local functional vector field} implies that the functionals $S_{0,k}$ are of the form\n$$\nS_{0,k}(\\alpha) = \\ip { \\Psi_k ( \\alpha^{\\otimes k-1}) ,\\alpha } \n$$\nfor some polydifferential operator\n$$\n\\Psi_k : \\mathscr{E}^{\\otimes k-1} \\to \\mathscr{E}\n$$\nand $\\alpha \\in \\mathscr{E}$. The tree-level terms $\\Gamma_{0,k}(P(\\varepsilon,T),S)$ of $\\Gamma(P(\\varepsilon,T),S)$ are obtained by composing these polydifferential operators $\\Psi_k$ with each other and with the operator \n$$\nP(\\varepsilon,T) : \\mathscr{E} \\to \\mathscr{E}\n$$\nconstructed by convolution with the kernel $P(\\varepsilon,T)$. Note that \n$$\nP(\\varepsilon,T)\\star \\alpha = \\int_{\\varepsilon}^T Q^{GF} e^{-t H} \\alpha \\d t .\n$$\nThis operator is a map $\\mathscr{E} \\to \\mathscr{E}$, even when $\\varepsilon = 0$. That is, $\\int_0^T Q^{GF} e^{-t H} \\d t$ takes smooth sections of the vector bundle $E$ on $M$ to smooth sections of $E$. This implies that all tree-level operators are non-singular, as desired. \n\\end{proof}\nThis completes the proof. \n\\end{proof}\n\n\n\\section{Renormalisation group flow and converse to theorem B}\nLet $S \\in \\mscr O_l(\\mathscr{E})[[\\hbar]]$ be a local functional. 
The expression $\\Gamma^R ( P(0,T) , S ) $ constructed using theorem B should be interpreted as the scale $T$ renormalised effective action. \n\\begin{lemma}\nThe renormalisation group equation holds:\n$$\n\\Gamma^R ( P(0,T') , S ) = \\Gamma ( P(T,T') , \\Gamma^R ( P(0,T) , S ) ) .\n$$ \n\\end{lemma}\n\\begin{proof}\nThis follows from the fact that the counter-terms $S_{i,k}^{CT}$ are independent of $T$, and the identity\n$$\n\\Gamma ( P(T,T') , \\Gamma ( P(\\varepsilon,T) , S - S^{CT} ) ) = \\Gamma ( P(\\varepsilon,T'), S - S^{CT} ) .\n$$\n\\end{proof} \n\nThus we have seen that to any local functional $S \\in \\mscr O(\\mathscr{E})[[\\hbar]]$, we can associate a system of effective actions $\\Gamma^R ( P (0,T), S)$ satisfying the renormalisation group equation. In a moment we will see that a converse holds: all such systems of effective actions satisfying a certain locality condition arise in this way.\n\n \n\\begin{definition}\nA \\emph{system of effective actions} on the space of fields $\\mathscr{E}$ is given by an effective action\n$$\nS^{eff} (T)\\in \\mscr O(\\mathscr{E} , \\mathbb C[[\\hbar]] )\n$$\nfor each $T \\in \\mbb R_{> 0}$, which are all at least cubic modulo $\\hbar$, and\nsuch that\n\\begin{enumerate}\n\\item\nThe renormalisation group equation is satisfied,\n$$\nS^{eff}(T_2) = \\Gamma ( P(T_1,T_2) , S^{eff}(T_1)).\n$$\n\\item\nAs $T \\to 0$, $S^{eff}(T)$ must become local, in the following sense. There must exist some $T$-dependent local functional \n$$\\Phi \\in \\mscr O_l( \\mathscr{E} , C^{\\infty}(0,\\infty) \\otimes \\mathbb C[[\\hbar]] ) $$ \nwhere $T$ is the coordinate on $(0,\\infty)$,\nsuch that \n$$\\lim_{T \\to 0} \\left( S^{eff}(T) - \\Phi(T) \\right)= 0.$$ \n(The $T \\to 0$ limit of $S^{eff}(T)$ itself will generally not exist). \n\\end{enumerate}\n\\end{definition}\nThe effective actions defined by $\\Gamma^R ( P(0,T) , S)$ for $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]] )$ satisfy these axioms. 
\n\n\nFix any renormalisation scheme $\mathscr{A}_{< 0}$. Then theorem B provides a map\n\begin{align*}\n\{ S \in \mscr O_l(\mathscr{E}) [[\hbar]] \text{ at least cubic modulo } \hbar \} & \to \{ \text{systems of effective actions} \} \\\nS & \mapsto \{ \Gamma^R ( P(0,T), S ) \mid T \in \mbb R_{> 0} \} \n\end{align*}\n\begin{thmC}\n\label{proposition rge local functional}\nFor any renormalisation scheme $\mathscr{A}_{< 0}$, this map is a bijection.\n\end{thmC}\nThis set of systems of effective actions is a canonical object associated to $(\mathscr{E},Q, Q^{GF})$, independent of the choice of renormalisation scheme. Renormalisation and regularisation techniques other than those considered here should lead to different ways of parametrising the same set of systems of effective actions. For instance, if one could make sense of dimensional regularisation on general manifolds, one would hope to get simply a different parametrisation of this set.\n\nFrom this point of view, the formalism of counter-terms is simply a convenient way to describe this set of systems of effective actions. The counter-terms, and the original action $S$, are not in themselves meaningful. \n\n\\begin{proof}[Proof of theorem C]\nThe proof is very simple. Let $\{S^{eff}(T)\}$ be a system of effective actions. We want to construct a corresponding local functional $S \in \mscr O_l(\mathscr{E}) [[\hbar]]$ such that \n$$\nS^{eff} (T) = \Gamma^R ( P(0,T) , S ) .\n$$\nSuppose, by induction, that we have constructed $S_{<(I,K)}$ such that \n$$\nS_{(i,k)} ^{eff} (T) = \Gamma_{(i,k)} ^R ( P(0,T) , S_{<(I,K)} ) \n$$\nfor all $(i,k) < (I,K)$. The initial case is when $i = 0$ and $k = 2$. Then, $S^{eff}_{(0,2)} (T) = 0 = S_{(0,2)}$, so the identity automatically holds. \n\nAssume $(I,K) \ge (0,3)$. 
The renormalisation group equation says that \n\begin{align*}\nS_{(I,K)}^{eff} (T) &= \Gamma_{(I,K)}(P(\varepsilon, T) , S^{eff}_{\le (I,K)} (\varepsilon )) \\\n&= S_{(I,K)}^{eff}(\varepsilon) + \Gamma_{(I,K)} ( P(\varepsilon, T) , S_{<(I,K)}^{eff} (\varepsilon ) ) \\\n&= S_{(I,K)}^{eff}(\varepsilon) + \Gamma_{(I,K)} ( P(\varepsilon, T) , \Gamma_{<(I,K)}^R (P(0,\varepsilon ), S_{<(I,K)} ) ) \\\n&= S_{(I,K)}^{eff}(\varepsilon) + \Gamma_{(I,K)} ( P(\varepsilon, T) , \Gamma_{\le (I,K)}^R (P(0,\varepsilon ), S_{<(I,K)} ) ) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ) \\\n&= S_{(I,K)}^{eff}(\varepsilon) + \Gamma^R_{(I,K)} ( P(0, T) , S_{<(I,K)} ) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ).\n\end{align*}\nThe left hand side is independent of $\varepsilon$, so the right hand side is also. Thus, \n$$\n S_{(I,K)}^{eff}(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} )\n$$\nis independent of $\varepsilon$. This allows us to define\n$$\nS_{(I,K)} = S_{(I,K)}^{eff}(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ).\n$$\nWith this definition, we automatically have\n$$\nS_{(I,K)}^{eff}(T) = \Gamma^R_{(I,K)} ( P(0,T) , S_{\le (I,K)} ) .\n$$\nIt remains to show that $S_{(I,K)}$ is local. This is an immediate consequence of the locality axiom for $S^{eff}(\varepsilon)$. There exists $\Phi(\varepsilon)$, an $\varepsilon$-dependent local functional, such that $\lim_{\varepsilon \to 0} \left( S^{eff}(\varepsilon ) - \Phi(\varepsilon ) \right) = 0$. 
Thus, \n\begin{align*}\nS_{(I,K)} &= S_{(I,K)}^{eff}(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ) \\\n& = S_{(I,K)}^{eff}(\varepsilon) - \Phi(\varepsilon) + \Phi(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} )\\\n&= \lim_{\varepsilon \to 0} \left( S_{(I,K)}^{eff}(\varepsilon) - \Phi(\varepsilon) \right) + \lim_{\varepsilon \to 0} \left( \Phi(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ) \right) \\\n&= \lim_{\varepsilon \to 0} \left( \Phi(\varepsilon) - \Gamma^R_{(I,K)} ( P(0, \varepsilon) , S_{<(I,K)} ) \right) .\n\end{align*}\nThis last quantity is local, because $\Phi(\varepsilon)$ is local and $\Gamma^R_{(I,K)} ( P(0,\varepsilon), S_{< (I,K)} )$ has a small $\varepsilon$ asymptotic expansion in terms of local functionals. \n\n\end{proof} \n\n\nAs I explained in the introduction, this result elucidates how the renormalisation procedure depends on the choice of renormalisation scheme. For the rest of the paper, we will fix one renormalisation scheme $\mathscr{A}_{< 0}$. When we talk about local functionals, we are in some sense really talking about systems of effective actions, but using our fixed renormalisation scheme to identify these two sets. Thus, as long as all the statements we make are about the effective action $\Gamma^R(P(0,T), S)$, and not directly about the local functional $S$, everything we do is independent of the choice of renormalisation scheme. \n\n\section{Quantum master equation}\n\n\n\n\n\subsection{Quantum master equation in finite dimensions}\nRecall that to each element $\Phi \in \Sym^2 \mathscr{E}$ we have associated (in section \ref{section differential operators}) an order two differential operator $\partial_{\Phi}$ on $\mscr O(\mathscr{E})$. Let $$\Delta_T = -\partial_{K_T}.$$ When $T > 0$, this is a differential operator on $\mscr O(\mathscr{E})$. 
When $T = 0$, because $K_0$ is no longer an element of $\Sym^2 \mathscr{E}$ but of some distributional completion, the operator $\Delta_0$ is ill-defined. The operators $\Delta_T$ are odd, second-order differential operators on $\mscr O(\mathscr{E})$, which satisfy the equation\n$$\n\Delta_T^2 = 0.\n$$ \nThus, each $\Delta_T$ endows the algebra $\mscr O(\mathscr{E})$ with the structure of a Batalin-Vilkovisky algebra. The ill-defined operator $\Delta_0$ is the ``physical'' Batalin-Vilkovisky operator.\n\nLet us assume for a moment that $M$ is a point, so that $\mathscr{E}$ is simply a finite dimensional super vector space. Then, the operator $\Delta_0$ is perfectly well defined. In this situation, $\Delta_0$ is the Batalin-Vilkovisky odd Laplacian, as described in section \ref{section intro bv}. Here $K_0$ is simply the kernel representing the identity map on $\mathscr{E}$. In terms of a Darboux basis $x_i, \xi_i$ of $\mathscr{E}$, where $x_i$ are even, $\xi_i$ are odd, and $\ip{x_i,\xi_j} = \delta_{i j}$, we have\n$$\nK_0 = - \sum_i \left( x_i \otimes \xi_i + \xi_i \otimes x_i \right)\n$$\nand\n$$\n\Delta_0 = - \partial_{K_0} = \sum_i \partial_{x_i} \partial_{\xi_i}.\n$$\n\nStill in the situation when $\dim M = 0$, the Poisson bracket on the algebra $\mscr O(\mathscr{E})$ can be written in terms of $\Delta_0$, using the equation\n$$\n \{ f , g \} = \Delta_0(f g) - \Delta_0 (f) g - (-1)^{\abs f} f \Delta_0 (g) .\n$$\nWhen $\dim M > 0$, this Poisson bracket is ill-defined on $\mscr O(\mathscr{E})$. However, as we have seen, the Poisson bracket of an element of $\mscr O_l(\mathscr{E})$ with an element of $\mscr O(\mathscr{E})$ is well-defined. 
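\n\nAs a sanity check, still in the case $\dim M = 0$, consider a single Darboux pair $x, \xi$, so that $\Delta_0 = \partial_x \partial_{\xi}$ (the signs in what follows depend on one's conventions for odd derivatives; we use left derivatives). Nilpotency is immediate: since $\partial_x$ is even and $\partial_{\xi}^2 = 0$,\n$$\n\Delta_0^2 = \partial_x \partial_{\xi} \partial_x \partial_{\xi} = \partial_x^2 \partial_{\xi}^2 = 0 .\n$$\nThe formula for the Poisson bracket can likewise be checked on simple elements. Taking $f = x \xi$ (odd) and $g = x$ (even), we have $\Delta_0 ( x \xi ) = 1$, $\Delta_0 (x) = 0$ and $\Delta_0 ( x^2 \xi ) = 2x$, so\n$$\n\{ x \xi , x \} = \Delta_0 ( x^2 \xi ) - \Delta_0 ( x \xi )\, x + ( x \xi )\, \Delta_0 ( x ) = 2x - x = x .\n$$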
\n\nA functional $S \\in \\mscr O(\\mathscr{E}, \\mathbb C[[\\hbar]])$ satisfies the BV quantum master equation if \n$$\n(Q + \\hbar \\Delta_0 ) \\exp ( S \/ \\hbar ) = 0.\n$$\nAgain, this makes sense when $\\dim M = 0$, but the left hand side of this equation is ill-defined when $\\dim M > 0$. This equation can be re-expressed as \n$$\nQ S + \\tfrac{1}{2} \\{ S, S \\} + \\hbar \\Delta_0 S = 0.\n$$\n\n\n\\subsection{The quantum master equation and the renormalisation group flow}\nLet us return to the situation where $\\dim M \\ge 0$. \n\nRecall that \n$$\nP(\\varepsilon,T)= \\int_{\\epsilon}^T L_t \\d t = \\int_{\\varepsilon}^T (Q^{GF} \\otimes 1) K_t \\d t.\n$$\n\\begin{lemma}\n$$\nQ P _\\epsilon^T = - K_T + K _\\epsilon. \n$$\nwhere $Q$ denotes the tensor product differential on $\\mathscr{E} \\otimes \\mathscr{E}$.\n\\end{lemma}\n\\begin{proof}\nIndeed, our sign conventions are such that $Q P(\\epsilon,T)$ is the kernel for the operator \n\\begin{align*}\n\\alpha &\\mapsto Q \\int_\\epsilon^T Q^{GF} e^{-t H } \\alpha + \\int_\\epsilon^T Q^{GF} e^{-t H } Q \\alpha \\\\\n&= \\int_\\epsilon^T H e^{-t H } \\alpha \\\\\n&= -e^{-T H }\\alpha + e^{- \\epsilon H } \\alpha .\n\\end{align*}\nThus, the operator associated to $Q P(\\epsilon,T)$ is the same as that associated to $-K_T + K_\\epsilon$.\n\\end{proof} \n\n\\begin{lemma}\nLet $S \\in \\mscr O(\\mathscr{E}, \\mathbb C[[\\hbar]])$. Then $S$ satisfies the $\\Delta_{T_1}$ quantum master equation\n$$\n(Q + \\hbar \\Delta_{T_1} ) \\exp ( S \/ \\hbar ) = 0\n$$\nif and only if $\\Gamma( P(T_1,T_2) , S)$ satisfies the $\\Delta_{T_2}$ quantum master equation\n$$\n(Q + \\hbar \\Delta_{T_2} ) \\exp (\\Gamma( P(T_1,T_2) , S) \/ \\hbar ) = 0. \n$$\n\\label{lemma propagator homotopy}\n\\end{lemma}\n\\begin{proof}\nIndeed,\n$$\n[ \\partial_{P(T_1,T_2)} , Q ] = \\partial_{Q P(T_1,T_2) } = \\partial_{K_{T_1} } - \\partial_{K_{T_2} } = \\Delta_{T_2} - \\Delta_{T_1} . 
\n$$\nThis implies that \n$$\n[ Q , \\exp ( \\partial_{P(T_1,T_2)} ) ] = \\exp (\\partial_{P(T_1,T_2)} ) ( \\Delta_{T_1} - \\Delta_{T_2} ) . \n$$\nThus,\n\\begin{align*}\n(Q + \\hbar \\Delta_{T_2 } ) \\exp (\\Gamma( P(T_1,T_2) , S) \/ \\hbar ) &= (Q + \\hbar \\Delta_{T_2 } ) \\exp ( \\hbar \\partial_{P(T_1,T_2)} ) \\exp ( S \/ \\hbar )\\\\ \n&= \\exp ( \\hbar \\partial_{P(T_1,T_2)} ) ( Q + \\hbar \\Delta_{T_1 } ) \\exp ( S \/ \\hbar ) .\n\\end{align*}\nThe converse follows because the operator $\\exp ( \\hbar \\partial_{P(T_1,T_2)} )$ is invertible. \n\n\n\\end{proof} \n\nSuppose we have an effective action $S^{eff}(T_1) \\in \\mscr O(\\mathscr{E}, \\mathbb C[[\\hbar]])$ at scale $T_1$. Then $\\Gamma( P(T_1,T_2), S^{eff}(T_1))$ is the effective action at scale $T_2$. What we see is that if the effective action $S^{eff}(T_1)$ satisfies the scale $T_1$ quantum master equation, then the effective action $S^{eff}(T_2)$ at scale $T_2$ satisfies the scale $T_2$ quantum master equation, and conversely.\n\n\n\n\\subsection{Renormalised quantum master equation}\n\nWe know, using the renormalisation techniques we have discussed so far, how to make sense of the expression $\\Gamma(P(0,T) , S ) $ if $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ is a local functional which is at least cubic modulo $\\hbar$. The renormalised scale $T$ effective action is\n$$\\Gamma^R ( P(0,T) , S ) \\overset{\\text{def}}{=} \\lim_{\\varepsilon \\to 0} \\Gamma(P(\\varepsilon,T), S^R )$$ \nwhere $S^R = S - S^{CT}$. 
\n\nWe would like to define a renormalised quantum master equation, which is a replacement for the ill-defined equation\n$$\n(Q + \\hbar \\Delta_0 ) e^{S \/ \\hbar }= 0.\n$$\nWhen $\\dim M = 0$, so $\\mathscr{E}$ is finite dimensional, this equation is well-defined and is equivalent to the equation\n$$\n(Q + \\hbar \\Delta_T )\\exp \\left( \\hbar^{-1} \\Gamma ( P(0,T) , S ) \\right)= 0.\n$$\nIn the infinite dimensional situation, we will take this second equation as a definition, where we use the renormalised version $\\Gamma^R ( P(0,T), S )$ of $\\Gamma(P(0,T), S)$. Thus, \n\\begin{definition}\n$S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ satisfies the renormalised quantum master equation if\n$$\n(Q + \\hbar \\Delta_T ) \\exp \\left( \\hbar^{-1} \\Gamma^R (P(0,T), S ) \\right) = 0 .\n$$\n\\end{definition}\nTheorems B and C show that there is a bijection between the set of local functionals $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ and the set of systems of effective actions, sending $S$ to $\\{\\Gamma^R ( P(0,T) , S) \\mid T \\in \\mbb R_{> 0} \\}$ . The renormalised QME is thus just saying that the scale $T$ effective action satisfies the scale $T$ QME. \n\n\nIt is immediate from Lemma \\ref{lemma propagator homotopy} that this condition is independent of $T$. That is, if $S$ satisfies the renormalised QME for one value of $T$, then it does so for all other values of $T$. \n\nIt seems to me that a successful quantisation of the quantum field theory with classical action $S_0$ consists of replacing $S_0$ by a solution $S = S_0 + \\sum_{i > 0} \\hbar^i S_i$ of the renormalised quantum master equation. Equivalently, a quantisation of a classical action $S_0$ is given by replacing $S_0$ by a system of effective actions $\\{S^{eff}(T) \\mid T \\in \\mbb R_{> 0} \\}$, which satisfy the quantum master equation, and which, modulo $\\hbar$, converge to $S_0$ as $T \\to 0$. 
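\n\nJust as in the finite dimensional case, the renormalised quantum master equation can be unpacked into components. For $T > 0$ the kernel $K_T$ is smooth, so the bracket $\{ \ , \ \}_T$ defined from $\Delta_T$ by the same formula as for $\Delta_0$ is honestly defined on $\mscr O(\mathscr{E})$. Abbreviating $\Gamma^R = \Gamma^R ( P(0,T), S )$, the usual computation for a second order operator with $\Delta_T (1) = 0$ gives\n$$\n(Q + \hbar \Delta_T ) \exp \left( \hbar^{-1} \Gamma^R \right) = \hbar^{-1} \left( Q \Gamma^R + \tfrac{1}{2} \{ \Gamma^R , \Gamma^R \}_T + \hbar \Delta_T \Gamma^R \right) \exp \left( \hbar^{-1} \Gamma^R \right) ,\n$$\nso the renormalised QME is equivalent to\n$$\nQ \Gamma^R + \tfrac{1}{2} \{ \Gamma^R , \Gamma^R \}_T + \hbar \Delta_T \Gamma^R = 0 ,\n$$\nthe scale $T$ analogue of the equation $Q S + \tfrac{1}{2} \{ S, S \} + \hbar \Delta_0 S = 0$ written earlier in the finite dimensional setting.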
\n\n\\begin{lemma}\nSuppose that $S \\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ satisfies the renormalised quantum master equation. Then \n$$\n\\Gamma^R ( P(0,\\infty), S ) \\mid_{\\Ker H} \n$$\nsatisfies the quantum master equation on the finite dimensional vector space \n$$\n\\Ker H = H^\\ast(\\mathscr{E}, Q) . \n$$\n\\end{lemma}\n\\begin{proof}\nThis is immediate from the fact that the heat kernel $K_\\infty$ is the element of $\\Ker H \\otimes \\Ker H \\subset \\mathscr{E} \\otimes \\mathscr{E}$ which (as a kernel on $\\Ker H$) represents the identity map $\\Ker H \\to \\Ker H$. \n\\end{proof} \nRecall that\n$\\Gamma^R ( P(0,\\infty), S )(a)$ is a renormalised version of the functional integral\n$$\n\\hbar \\log \\int_{x \\in (\\Im Q^{GF} ) _\\mbb R } \\exp\\left( \\tfrac{1}{2\\hbar} \\ip{x,Q x} + \\tfrac{1}{\\hbar} S ( x + a ) \\right) .\n$$\nIf we take $a$ to be in $\\Ker H = H^\\ast(\\mathscr{E},Q)$, then this functional integral is the Wilsonian effective action in the Batalin-Vilkovisky formalism, obtained by integrating over a Lagrangian $\\Im Q^{GF}$ in the space $\\Im Q^{GF} \\oplus \\Im Q$ of positive eigenvalues of the Hamiltonian $H$. \n\n\n\\section{Homotopies of solutions of the quantum master equation}\n\n\\label{section simplicial sets}\nIn the usual finite-dimensional Batalin-Vilkovisky formalism \\cite{Sch93, AleKonSch97}, one reason the quantum master equation is so important is that it implies that the functional integral is independent of the choice of gauge fixing condition (usually, a Lagrangian or isotropic subspace in the space of fields). \n\nWe would like to prove a version of this in the infinite dimensional situation, for the renormalised quantum master equation. However, things are more delicate in this situation; the renormalised QME itself depends on the choice of a gauge fixing condition. 
What we will show is that if $Q^{GF}(t)$ is a one-parameter family of gauge fixing conditions, then the set of homotopy classes of solutions to the renormalised QME using $Q^{GF}(0)$ is isomorphic to the corresponding set using $Q^{GF}(1)$. This is a corollary of a more general result about certain simplicial sets of gauge fixing conditions and of solutions to the renormalised QME.\n\nAs I explained earlier, we are fixing a renormalisation scheme throughout the paper, which allows us to identify the set of local functionals $S \in \mscr O_l(\mathscr{E}) [[\hbar]]$ with the set of systems of effective actions. Everything we say should be in terms of the effective action $\Gamma^R(P(0,T), S)$ and not directly in terms of the local functional $S$. As long as we stick to this, everything we do will be independent of the choice of renormalisation scheme. \n\subsection{Families and simplicial sets of gauge fixing conditions}\n\nLet us denote the $n$-simplex by $\Delta^n$.\n\n\begin{definition}\n A family of gauge fixing conditions parametrised by $\Delta^n$ is a first order differential operator \n$$\nQ^{GF} : \mathscr{E} \otimes C^{\infty}(\Delta^n) \to \mathscr{E} \otimes C^{\infty} (\Delta^n)\n$$\nwhich is $C^{\infty}(\Delta^n)$-linear, and which satisfies a family version of the axioms for a gauge fixing condition. Namely, \n\begin{enumerate}\n\item\nthe operator $H_0 = [Q \otimes 1, Q^{GF}]$ must be a smooth family of generalised Laplacians in the sense of \cite{BerGetVer92}, section 9.5.\n\item\nthere is a direct sum decomposition (of $C^{\infty}(\Delta^n)$-modules)\n$$\n\mathscr{E} \otimes C^{\infty}(\Delta^n) = \Im (Q\otimes 1) \oplus \Im Q^{GF} \oplus \Ker H_0 .\n$$\n\end{enumerate}\n\end{definition}\nWe will normally just write $Q$ for the operator $Q \otimes 1$ on $\mathscr{E} \otimes \Omega^\ast(\Delta^n)$, and similarly we will write $\d_{DR}$ for $ 1 \otimes \d_{DR}$. 
The operator $Q^{GF}$ extends in a unique $\\Omega^\\ast(\\Delta^n)$-linear way to $\\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^n)$. Let us use the notation\n$$\nH = [ Q + \\d_{DR} , Q^{GF} ] : \\mathscr{E} \\otimes \\Omega^\\ast (\\Delta^n) \\to \\mathscr{E} \\otimes \\Omega^\\ast (\\Delta^n) . \n$$\nIt is easy to see that $H$ is $\\Omega^\\ast(\\Delta^n)$-linear. If $H_1 = [\\d_{DR}, Q^{GF}]$, then $H = H_0 + H_1$ and $H_1$ is an order $1$ differential operator. Thus, the symbol of $H$ is the same as that of $H_0$, and, by assumption, the symbol of $H_0$ is given by a smooth family of metrics on $M$, times the identity on the vector bundle $E$ on $M$. Thus, we can think of $H$ as a smooth family of generalised Laplacians on $\\mathscr{E}$ parametrised by the supermanifold $\\Delta^n \\times \\mbb R^{0,n}$. The results of \\cite{BerGetVer92} imply that there exists a unique heat kernel for $H$, \n$$\nK_t \\in \\mathscr{E} \\otimes \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^n) .\n$$\n\n\nLet us denote by $\\mathbf{GF} [n]$ the set of families of gauge fixing conditions parametrised by $\\Delta^n$. If $\\phi : \\Delta^n \\to \\Delta^m$ is a smooth map (for instance, the inclusion of a face) then there is an induced map $\\phi^\\ast : \\mathbf{GF}[m] \\to \\mathbf{GF}[n]$. In this way, the sets $\\mathbf{GF}[n]$ form a simplicial set, which we denote $\\mathbf{GF}$. \n\n\n\\subsection{Simplicial sets of solutions to the renormalised QME}\n\nLet $$S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) [[\\hbar]] )$$ be a family of functionals on $\\mathscr{E}$, which, as always, is at least cubic modulo $\\hbar$. 
As before, if $P \\in \\mathscr{E} \\otimes \\mathscr{E} \\otimes \\Omega^\\ast (\\Delta^n)$, one can define\n$$\n\\Gamma ( P , S ) = \\hbar \\log \\left( \\exp ( \\hbar \\partial_P ) \\exp ( S \/ \\hbar ) \\right) \\in \\mscr O ( \\mathscr{E} , \\Omega^\\ast(\\Delta^n) [[\\hbar]] ) .\n$$\n\nLet us suppose, as above, that we have a smooth family of gauge fixing conditions $Q^{GF}$ parametrised by $\\Delta^n$. Let $H$ and $K_t$ be as above. This allows us to construct the propagator we use, \n$$\nP(\\varepsilon,T)= \\int_{\\epsilon}^T (Q^{GF} \\otimes 1) K_t \\d t .\n$$\nThus, one can define the family version of the renormalisation group flow, which sends\n$$\nf \\in \\mscr O(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) [[\\hbar]] ) \\to \\Gamma(P(\\varepsilon,T) ,f ) \\in \\mscr O(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) [[\\hbar]] ).\n$$\n\\begin{definition}\nA family of systems of effective actions on $\\mathscr{E}$ parametrised by the supermanifold $\\Delta^n \\times \\mbb R^{0,n}$ is given by an effective action\n$$\nS^{eff}(T) \\in \\mscr O(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) [[\\hbar]] )\n$$\nfor all $T \\in \\mbb R_{> 0}$, \nsuch that\n\\begin{enumerate}\n\\item\nThe renormalisation group equation is satisfied,\n$$\nS^{eff}(T_2) = \\Gamma ( P(T_1,T_2) , S^{eff}(T_1)).\n$$\n\\item\nAs $T \\to 0$, $S^{eff}(T)$ must become local, in the following sense. There must exist some $T$-dependent local functional \n$$\\Phi \\in \\mscr O_l( \\mathscr{E} , \\Omega^\\ast(\\Delta^n) \\otimes C^{\\infty}(0,\\infty) \\otimes \\mathbb C[[\\hbar]] ) $$ \nwhere $T$ is the coordinate on $(0,\\infty)$,\nsuch that \n$$\\lim_{T \\to 0} \\left( S^{eff}(T) - \\Phi(T) \\right) = 0.$$ \n(The $T \\to 0$ limit of $S^{eff}(T)$ itself will generally not exist). \n\\end{enumerate}\n\\end{definition}\n\n\n\nTheorems A, B and C apply in this situation. 
Indeed, theorem A is stated and proved in this generality in the appendix, and the proofs of the family versions of theorems B and C are identical to the proofs we gave earlier. Thus, we can define \n$$\n\Gamma^R(P(0,T), S ) = \lim_{\varepsilon \to 0} \Gamma(P(\varepsilon,T), S - S^{CT} ) \in \mscr O ( \mathscr{E} , \Omega^\ast(\Delta^n) \otimes \mathbb C [[\hbar]] ) . \n$$\nThis map gives us a bijection between the set of local functionals $S \in \mscr O_l ( \mathscr{E}, \Omega^\ast(\Delta^n) \otimes \mathbb C[[\hbar]] )$, and the set of families of systems of effective actions parametrised by $\Delta^n \times \mbb R^{0,n}$. This bijection depends on the choice of renormalisation scheme, which we fix throughout the paper.\n\nWe say that $S$ satisfies the renormalised quantum master equation if\n$$\n( \d_{DR} + Q + \hbar \Delta_T) \exp ( \hbar^{-1} \Gamma^R (P(0,T), S ) ) = 0 . \n$$\nThis condition, as before, is independent of $T$. \n\begin{definition}\nDefine a simplicial set $\mathbf{BV}_\mathscr{E}$ by saying that $\mathbf{BV}_\mathscr{E}[n]$ is the set of pairs \n\begin{enumerate}\n\item\n$Q^{GF} \in \mathbf{GF}[n]$\n\item $S \in \mscr O_l(\mathscr{E}, \Omega^\ast(\Delta^n) [[\hbar]] )$ which is at least cubic modulo $\hbar$, and which satisfies the renormalised quantum master equation corresponding to $Q^{GF}$.\n\end{enumerate}\n\nSimilarly, define a simplicial set $\mathbf{BV}_{H^\ast(\mathscr{E},Q) }$, by saying $\mathbf{BV}_{H^\ast(\mathscr{E},Q) }[n]$ is the set of functions $S \in \mscr O (H^\ast(\mathscr{E}, Q) ) \otimes \Omega^\ast(\Delta^n) [[ \hbar ]]$ which satisfy the quantum master equation $$ ( \d_{DR} + \hbar \Delta_{H^\ast(\mathscr{E}, Q) } ) e^{S \/ \hbar } = 0.$$ \n\end{definition}\n\nAn equivalent (and more natural) definition of the simplicial set $\mathbf{BV}_\mathscr{E}$ would be to say that $\mathbf{BV}_\mathscr{E}[n]$ is the set of families of systems of
effective actions, parametrised by $\Delta^n \times \mbb R^{0,n}$, such that the scale $T$ effective action $S^{eff}(T)$ satisfies the scale $T$ QME, \n$$\n(Q + \d_{DR} + \hbar \Delta_T ) e^{ S^{eff}(T) \/ \hbar } = 0 . \n$$\nOf course, the choice of renormalisation scheme (which we are fixing throughout the paper) leads to an equivalence between the two definitions. \n\n\n\n\begin{lemma}\nThere is a map\n$$\n\mathbf{BV}_\mathscr{E}[n] \to \mathbf{BV}_{H^\ast(\mathscr{E},Q)} [n]\n$$\nof sets, which sends\n$$\nS \mapsto \Gamma^R ( P(0,\infty) , S )\mid_{ H^\ast(\mathscr{E}, Q) } .\n$$\nThese maps assemble to a map of simplicial sets\n$$\n\mathbf{BV}_\mathscr{E} \to \mathbf{BV}_{H^\ast(\mathscr{E},Q)} \n$$\nin the natural way.\n\end{lemma}\n\n\subsection{Fibration property}\nLet $Q^{GF}$ be a gauge fixing condition on $\mathscr{E}$. Let $\mathbf{BV}_{\mathscr{E}, Q^{GF}}$ denote the simplicial set of solutions to the renormalised quantum master equation with fixed gauge fixing condition $Q^{GF}$. In other words, $\mathbf{BV}_{\mathscr{E}, Q^{GF}}$ is the fibre of $\mathbf{BV}_{\mathscr{E}}$ over the point $Q^{GF} \in \mathbf{GF} [0]$, under the natural map $\mathbf{BV}_{\mathscr{E}} \to \mathbf{GF} $. \n\n\n The set $\pi_0 \left( \mathbf{BV}_{\mathscr{E}, Q^{GF}} \right)$ is the set of homotopy classes of solutions to the renormalised quantum master equation. The fact that the map $\mathbf{BV}_{\mathscr{E}, Q^{GF}} \to \mathbf{BV}_{H^\ast(\mathscr{E},Q)}$ is a map of simplicial sets tells us that there is an induced map\n\begin{equation}\n\pi_0 \left( \mathbf{BV}_{\mathscr{E}, Q^{GF}} \right) \to \pi_0 \left( \mathbf{BV}_{H^\ast(\mathscr{E},Q)} \right). \tag{$\dagger$} \label{map to qme cohomology} \n\end{equation}\nWe would like to show that the set $\pi_0 ( \mathbf{BV}_{\mathscr{E}, Q^{GF}}) $, and the map \eqref{map to qme cohomology}, are in some sense independent of $Q^{GF}$. 
This will follow from an abstract result.\n\n\begin{theorem}\nThe map\n$$\n\mathbf{BV}_\mathscr{E} \to \mathbf{GF} \n$$\nis a fibration of simplicial sets. \n\label{theorem fibration}\n\end{theorem}\nWe will prove this later.\n\n\begin{corollary}\nLet $Q^{GF}(t)$ be a smooth family of gauge fixing conditions parametrised by $[0,1]$. Then the sets $\pi_0 ( \mathbf{BV}_{\mathscr{E}, Q^{GF}(0)} ) $, $\pi_0 ( \mathbf{BV}_{\mathscr{E}, Q^{GF}(1) } ) $ are canonically isomorphic. Further, the diagram\n$$\n\xymatrix{ \pi_0 \left( \mathbf{BV}_{\mathscr{E}, Q^{GF}(0)} \right) \ar[r] \ar[d] & \pi_0 \left( \mathbf{BV}_{\mathscr{E}, Q^{GF}(1)} \right) \ar[dl] \\\n\pi_0 \left(\mathbf{BV}_{H^\ast(\mathscr{E}, Q ) } \right) } \n$$\ncommutes. \n\end{corollary}\n\begin{proof}\nThis follows immediately from the path and homotopy lifting properties for fibrations of simplicial sets. \n\end{proof} \n\n\n\section{A local obstruction to solving the renormalised quantum master equation} \nWe will prove Theorem \ref{theorem fibration} using obstruction theory. In this section we will construct the required obstructions to solving the renormalised QME. \n\nLet us fix a smooth family $Q^{GF}$ of gauge fixing conditions parametrised by $\Delta^n$. Let $S \in \mscr O_l(\mathscr{E}, \Omega^\ast(\Delta^n) \otimes \mathbb C[[\hbar]])$ be, as always, at least cubic modulo $\hbar$. One way to write the renormalised quantum master equation for $S$ is the equation\n$$\n\lim_{\varepsilon \to 0} (Q + \d_{DR} + \hbar \Delta_T ) \exp ( \hbar \partial_{P(\varepsilon,T)} ) \exp ( S^R \/ \hbar ) = 0. 
\n$$\n\n\nOne can re-express this using the following identity: \n\begin{multline}\n\Gamma \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon , S^R + \delta Q S^R + \delta \d_{DR} S^R \right) = \\ \hbar \log \left( (1 + \delta Q + \delta \d_{DR} + \delta \Delta_T ) \exp ( \hbar \partial_{P(\varepsilon,T)} ) \exp ( S^R \/ \hbar ) \right) \label{equation Q inside gamma}\n\end{multline}\nwhere $\delta$ is an odd parameter. This identity is a corollary of the fact that \n$$\n[\partial_{P(\varepsilon,T)} , Q + \d_{DR}] = \Delta_T -\Delta_\varepsilon. \n$$\nThus, we have an equivalent expression of the renormalised quantum master equation; the renormalised QME holds if and only if \n\begin{equation*}\n\lim_{\varepsilon \to 0} \frac{\d}{\d \delta} \left< \Gamma \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R \right) \right> = 0 .\n \tag{$\ddagger$} \label{rqme} \n\end{equation*}\n\nRecall that our choice of renormalisation scheme gives us a subspace\n$$\n\mathscr{A}_{< 0} \subset \mathscr{A}\n$$\nwhich is complementary to the subspace $\mathscr{A}_{\ge 0}$.\nLet\n$$\n\mathscr{A}_{\le 0} = \mathscr{A}_{< 0} \oplus \mathbb C \subset \mathscr{A}.\n$$\nIf we denote by $\mathscr{A}_{> 0}$ the space of functions whose $\varepsilon \to 0$ limit exists and is equal to zero, we have a direct sum decomposition\n$$\n\mathscr{A} = \mathscr{A}_{\le 0} \oplus \mathscr{A}_{> 0} . \n$$\n\begin{proposition}\n\label{proposition obstruction}\n\begin{enumerate}\n\item\nLet $S \in \mscr O_l(\mathscr{E}, \Omega^\ast(\Delta^n) \otimes \mathbb C[[\hbar]] )$ be an even element. 
Then\nthere exists a unique odd element \n$$O(S) \in \mscr O_l(\mathscr{E}, \mathscr{A}_{\le 0}\otimes \Omega^\ast(\Delta^n) \otimes \mathbb C[[\hbar]] )$$ such that \n$$\lim_{\varepsilon \to 0} \frac{\d}{\d \delta} \left< \Gamma \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R + \delta O(S) \right) \right> $$ \nexists, and has value $0$. \n\item\n$O_{(I,K)}(S)$ depends only on $S_{\le (I,K)}$, and \n$$\nO_{(I,K)}(S_{<(I,K)} + \hbar^I S_{(I,K)} ) = \hbar^I Q S_{(I,K)} + \hbar^I \d_{DR} S_{(I,K)} + O_{(I,K)}(S_{<(I,K)} ). \n$$\n\item\nFurther, if $k > 0$, the germ of $O_{(i,k)}(S)$ at $x \in M$ depends only on the germ of the data $(\mathscr{E}, \ip{\ , \ }, Q , Q^{GF} , \{ S_{i,k} \mid k > 0 \} )$ near $x\times \Delta^n$. \n\end{enumerate}\n\n\end{proposition}\nThus, $S$ satisfies the renormalised quantum master equation if and only if $O(S) = 0$.\n\begin{proof}\nThe proof is very similar to that of theorem B. \n\nSuppose we have constructed $O_{(i,k)}(S) \in \mscr O_l(\mathscr{E}, \Omega^\ast(\Delta^n) \otimes \mathscr{A}_{\le 0})$ for all $(i,k) < (I,K)$, which are independent of $T$, and which are such that \n$$\frac{\d}{\d \delta} \left< \Gamma_{< (I,K)} \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R + \delta O_{<(I,K)}(S) \right) \right> $$\nis regular at $\varepsilon = 0$ with value $0$. \n\nDefine\n\begin{multline*}\nO_{(I,K)}(S,T) = -\text{projection onto $\mathscr{A}_{\le 0}$ of } \\ \frac{\d}{\d \delta} \left< \Gamma_{(I,K)} \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R + \delta O_{<(I,K)}(S) \right) \right>. \n\end{multline*}\nThe second clause of the proposition is immediate from this definition. 
What remains to be shown is that $O_{(I,K)}(S,T)$ is independent of $T$, and so (by the same argument as in the proof of theorem B) local.\n\nIt follows from the definition of $O_{(I,K)}(S,T)$ that\n$$\n\lim_{\varepsilon \to 0}\Gamma_{\le (I,K)} \left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R \n+ \delta Q S^R + \delta \d_{DR} S^R +\n \delta O_{< (I,K)}(S) + \hbar^I \delta O_{(I,K)}(S,T) \right) \n$$\nexists, and this limit is independent of $\delta$. Thus, the same holds for \n\begin{multline*}\n\Gamma_{\le(I,K)} \left( P(T,T') , \Gamma_{\le(I,K)}\left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R \n\right. \right.\n\\ \n \left. \left.\n+ \delta O_{< (I,K)}(S) + \hbar^I \delta O_{(I,K)}(S,T) \right) \right) . \n\end{multline*}\n\nObserve that\n\begin{multline*}\n\Gamma_{\le(I,K)} \left( P(T,T') , \Gamma_{\le(I,K)}\left( P(\varepsilon,T)+ \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R \right. \right. \\\n\shoveright {\left. \left.\n + \delta O_{< (I,K)}(S) + \hbar^I \delta O_{(I,K)}(S,T) \right) \right) }\n\\ = \Gamma_{\le(I,K)}\left(P(\varepsilon,T') + \delta \Delta_\varepsilon, S^R + \delta Q S^R \n + \delta \d_{DR} S^R \n \right. \\\n\shoveright{ \left.\n + \delta O_{< (I,K)}(S) + \hbar^I \delta O_{(I,K)}(S,T) \right) }\n\\ = \Gamma_{\le(I,K)}\left(P(\varepsilon,T') + \delta \Delta_\varepsilon, S^R + \delta Q S^R + \delta \d_{DR} S^R \n \right. \\\n\left.\n+ \delta O_{< (I,K)} (S) \right) + \hbar^I \delta O_{(I,K)}(S,T).\n\end{multline*}\nAs we have seen, the first line in this expression has an $\varepsilon \to 0$ limit which is independent of $\delta$. The same is therefore true for the last line; it follows that $O_{(I,K)}(S,T) = O_{(I,K)} (S,T')$, and so $O_{(I,K)}(S) = O_{(I,K)}(S,T)$ is local as desired. 
\n\n\\end{proof} \n\nWe would like to use an inductive, term-by-term method to construct solutions to the renormalised QME. The following lemma allows us to do so.\n\\begin{lemma}\nLet $S \\in \\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Delta^n) \\otimes \\mathbb C[[\\hbar]] )$ be a local action functional such that $O_{< (I,K)} (S) = 0$. (In other words, $S$ satisfies the QME at all orders below $(I,K)$.) Then:\n\\begin{enumerate}\n\\item\n$$\\lim_{\\varepsilon \\to 0} O_{(I,K)}(S)$$ exists. This implies that $O_{(I,K)}(S)$ is independent of $\\varepsilon$, and so is an element\n$$\nO_{(I,K)}(S) \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) ).\n$$\n\\item\n$$(Q + \\d_{DR}) O_{(I,K)}(S) = 0. $$ \n\\end{enumerate}\n\\label{closed local obstruction}\n\\end{lemma} \n\\begin{remark}\nThis lemma implies that if $S$ solves the $<(I,K)$ renormalised QME, that is, $O_{<(I,K)}(S)= 0$, then lifting $S$ to a solution to the $\\le(I,K)$ renormalised QME amounts to finding some $S'\\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n))$, homogeneous of degree $K$ as a function on $\\mathscr{E}$, with $(Q+ \\d_{DR}) S' = - O_{(I,K)}(S)$. The desired solution of the renormalised QME will be $S + \\hbar^I S'$. \n\\end{remark}\n\n\n\\begin{proof}\nBy definition,\n\\begin{multline*}\nO_{(I,K)}(S) = -\\text{projection onto $\\mathscr{A}_{\\le 0}$ of } \\\\ \\frac{\\d}{\\d \\delta} \\left< \\Gamma_{(I,K)} \\left( P(\\varepsilon,T)+ \\delta \\Delta_\\varepsilon, S^R + \\delta Q S^R + \\delta \\d_{DR} S^R + \\delta O_{<(I,K)}(S) \\right) \\right>.
\n\\end{multline*}\nEquation \\eqref{equation Q inside gamma} says that\n\\begin{multline*}\n \\Gamma \\left( P(\\varepsilon,T)+ \\delta \\Delta_\\varepsilon, S^R + \\delta Q S^R + \\delta \\d_{DR} S^R \\right) \\\\ \n = \\hbar \\log \\left( (1 + \\delta Q + \\delta \\d_{DR} + \\delta \\Delta_T ) \\exp ( \\hbar \\partial_{P(\\varepsilon,T)} ) \\exp ( S^R \/ \\hbar ) \\right) .\n\\end{multline*}\nSince $O_{<(I,K)}(S) = 0$, it follows that \n\\begin{multline*}\nO_{(I,K)}(S) = -\\text{projection onto $\\mathscr{A}_{\\le 0}$ of } \\\\ \\frac{\\d}{\\d \\delta} \\left< \\hbar \\log \\left( (1 + \\delta Q + \\delta \\d_{DR} + \\delta \\Delta_T ) \\exp ( \\hbar \\partial_{P(\\varepsilon,T)} ) \\exp ( S^R \/ \\hbar ) \\right) \\right>_{(I,K)}.\n\\end{multline*}\nIt follows that the $\\varepsilon \\to 0$ limit of $O_{(I,K)}(S)$ exists. This implies that $O_{(I,K)}(S)$ is constant as a function of $\\varepsilon$. \n\nThe equation $O_{<(I,K)}(S) = 0$ implies that \n\\begin{multline*}\n\\hbar^I O_{(I,K)} (S) = - \\text{leading term of } \\\\ \\lim_{\\varepsilon \\to 0} \\frac{ \\d}{\\d \\delta} \\left< \\hbar \\log \\left( (1 + \\delta Q + \\delta \\d_{DR} + \\delta \\Delta_T ) \\exp ( \\hbar \\partial_{P(\\varepsilon,T)} ) \\exp ( S^R \/ \\hbar ) \\right) \\right>\n\\end{multline*}\nwhere ``leading term'' means the first non-zero term in the expansion labelled by $(i,k)$, where the $(i,k)$ term, as always, is $\\hbar^i$ times a homogeneous function of degree $k$ on $\\mathscr{E}$. We use the lexicographical ordering on the labels $(i,k)$: that is, $(i,k) < (I,K)$ if and only if $i < I$, or $i = I$ and $k < K$.
\n\n\nIt follows that\n\\begin{multline*}\n\\hbar^I O_{(I,K)}(S) = - \\text{leading term of } \\\\ ( Q + \\d_{DR} + \\Delta_T ) \\lim_{\\varepsilon \\to 0} \\left< \\exp ( \\hbar \\partial_{P(\\varepsilon,T)} ) \\exp ( S^R \/ \\hbar ) \\right>\n\\end{multline*}\nso that\n\\begin{align*}\n\\hbar^I (Q + \\d_{DR}) O_{(I,K)}(S) =& -\\text{leading term of } \\\\ & (Q + \\d_{DR} + \\Delta_T )^2 \\lim_{\\varepsilon \\to 0} \\left< \\exp ( \\hbar \\partial_{P(\\varepsilon,T)} ) \\exp ( S^R \/ \\hbar ) \\right> \n\\\\ \n =& 0 \n\\end{align*}\nas desired.\n\n\\end{proof} \n\n\n\\section{Proof of Theorem \\ref{theorem fibration}}\nNow we can prove Theorem \\ref{theorem fibration}, that the map $\\mathbf{BV}_\\mathscr{E} \\to \\mathbf{GF}$ is a fibration of simplicial sets.\n\\subsection{Preliminary definitions}\nSending $\\Delta^n$ to $\\Omega^\\ast(\\Delta^n)$ defines a simplicial algebra, which we denote $\\Omega^\\ast_\\Delta$. If $X$ is a simplicial set, let $\\Omega^\\ast(X)$ denote the space of maps $X \\to \\Omega^\\ast_\\Delta$ of simplicial sets. Concretely, an element of $\\Omega^\\ast(X)$ assigns to each $n$-simplex of $X$ an element of $\\Omega^\\ast(\\Delta^n)$, compatibly with the natural maps between simplices. \n\n\nThe spaces \n$$\n\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) )\n$$\nassemble into a simplicial object in the category of chain complexes, which we denote $\\mscr O_l(\\mathscr{E}, \\Omega^\\ast_{\\Delta}) $. For any simplicial set $X$, let $\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) )$ denote the space of maps $X \\to \\mscr O_l(\\mathscr{E}, \\Omega^\\ast_{\\Delta}) $. More concretely, we can view an element of $\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) )$ as being an element of $\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) )$ for each $n$-simplex of $X$, compatible with the natural maps between simplices.
\n\n\nEarlier we defined a map \n\\begin{align*}\n\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) \\otimes \\mathbb C [[\\hbar]] ) &\\to \\mscr O_l(\\mathscr{E}, \\mathscr{A}_{\\le 0} \\otimes \\Omega^\\ast(\\Delta^n)\\otimes \\mathbb C [[\\hbar]] ) \\\\\nS &\\mapsto O(S) .\n\\end{align*}\nThis gives a map of simplicial sets\n$$\n\\mscr O_l(\\mathscr{E}, \\Omega^\\ast_{\\Delta} \\otimes \\mathbb C [[\\hbar]]) \\to \\mscr O_l(\\mathscr{E}, \\mathscr{A}_{\\le 0}\\otimes \\Omega^\\ast_{\\Delta}\\otimes \\mathbb C [[\\hbar]] ).\n$$\nFor each $X$, we get an obstruction map\n$$\nO : \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X)\\otimes \\mathbb C [[\\hbar]] ) \\to \\mscr O_l(\\mathscr{E}, \\mathscr{A}_{\\le 0} \\otimes \\Omega^\\ast(X)\\otimes \\mathbb C [[\\hbar]]).\n$$\n$S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) \\otimes \\mathbb C[[\\hbar]] )$ satisfies the renormalised QME if and only if $O(S) = 0$. \n\n\n$\\mathbf{BV}_{\\mathscr{E}}$ is the simplicial set whose $n$-simplices are elements $S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) ) $ such that $O(S) = 0$. \n\n\\begin{definition}\nLet $\\mathbf{BV}_{\\mathscr{E}, \\le(I,K)}$ be the simplicial set whose $n$-simplices are given by pairs\n$$\n(Q^{GF} ,S ) \n$$ \nwhere $Q^{GF}$ are smooth families of gauge fixing conditions, and \n$$S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) )$$ \nis such that $S_{(i,k)} = 0$ if $(i,k) > (I,K)$ and $O_{\\le (I,K)} (S) = 0$. \n\nSimilarly, define $\\mathbf{BV}_{\\mathscr{E}, <(I,K)}$ to be the simplicial set whose $n$-simplices are pairs $(Q^{GF}, S)$ where $Q^{GF}$ are as before, and \n$S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n) ) $ is such that $S_{(i,k)} = 0$ if $(i,k) \\ge (I,K)$ and $O_{< (I,K)} (S) = 0$. \n\\end{definition} \nSuppose $X$ is equipped with a map $X \\to \\mathbf{GF}$. 
Then a lift of this to a map $X \\to \\mathbf{BV}_{\\mathscr{E}, \\le(I,K)}$ is given by $S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) )$ such that $S_{>(I,K)} = 0$ and $O_{\\le(I,K)} (S) = 0$. A similar description holds for maps to $\\mathbf{BV}_{\\mathscr{E}, <(I,K)}$. \n\nThe following lemma is an immediate corollary of Lemma \\ref{closed local obstruction}. \n\\begin{lemma}\nLet $f : X \\to \\mathbf{BV}_{\\mathscr{E}, <(I,K)}$ be a map. Then the obstruction $O_{(I,K)}(X)$ to lifting $f$ to a map $X \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$ is independent of the parameter $\\varepsilon$, and so is an element of $\\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) )$. This is closed, $(Q + \\d_{DR}) O_{(I,K)}(X) = 0$. \n\nA lift of $f$ to a map $ X \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K)}$ is given by an element $S' \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(X) )$, homogeneous of degree $K$ as a function on $\\mathscr{E}$, with \n$$\n(Q + \\d_{DR}) S' = - O_{(I,K)}(X) . \n$$\n\\end{lemma}\n\n\n\n\\subsection{Completion of proof} \nNote that $\\mathbf{BV}_{\\mathscr{E}, \\le (0,0) }$ is $\\mathbf{GF}$. Thus, to prove Theorem \\ref{theorem fibration}, it suffices to show that\n\\begin{proposition}\nThe map\n$$\\mathbf{BV}_{\\mathscr{E}, \\le (I,K) } \\to \\mathbf{BV}_{\\mathscr{E}, < (I,K) }$$\nis a fibration. \n\\label{proposition fibration}\n\\end{proposition}\n\\begin{proof}\n\n\n\n\n\nLet $\\Lambda^n \\subset \\Delta^n$ denote an $n$-horn, obtained by removing any face from $\\partial \\Delta^n$. Suppose we have a map $\\Lambda^n \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$, and an extension of this to a map $\\Delta^n \\to \\mathbf{BV}_{\\mathscr{E}, <(I,K) }$. 
We need to show that there exists a map $\\Delta^n \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$ such that the diagram\n$$\n\\xymatrix{\n\\Lambda^n \\ar[r] \\ar[d] & \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) } \\ar[d] \\\\\n\\Delta^n \\ar@{-->}[ur] \\ar[r] & \\mathbf{BV}_{\\mathscr{E}, < (I,K) }\n}\n$$\ncommutes. There is an obstruction \n$$\nO_{I,K} ( \\Delta^n ) \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n ) )\n$$\nto the existence of this lift. \n\nThere is a map $\\Lambda^n \\to \\mathbf{BV}_{\\mathscr{E}, <(I,K) }$. The obstruction to lifting this to a map $\\Lambda^n \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$ is an element\n$$\nO_{I,K}( \\Lambda^n ) \\in \\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Lambda^n) ) .\n$$\nBecause we are given a lift $\\Lambda^n \\to \\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$, we are given an element\n$$\nL_{I,K}( \\Lambda^n ) \\in \\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Lambda^n) )\n$$\nsuch that $(Q + \\d_{DR}) L_{I,K} (\\Lambda^n) = O_{I,K} (\\Lambda^n)$. \n\n\nUnder the natural map\n$$\n\\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Delta^n) ) \\to \\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Lambda^n) )\n$$\nthe restriction of $O_{I,K}(\\Delta^n)$ is $O_{I,K}(\\Lambda^n)$. \n\nThe spaces $\\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Delta^n))$ form a simplicial abelian group. All simplicial groups are Kan complexes; see \\cite{May92}. This implies that the restriction map is surjective. It is also a quasi-isomorphism. (This argument was explained to me by Ezra Getzler.)\n\n It follows immediately that there exists an $L_{I,K}(\\Delta^n) \\in \\mscr O_l ( \\mathscr{E}, \\Omega^\\ast(\\Delta^n) )$ which satisfies $(Q + \\d_{DR}) L_{I,K}(\\Delta^n) = O_{I,K}(\\Delta^n)$, and which restricts to $L_{I,K}(\\Lambda^n)$. The choice of such an $L_{I,K}(\\Delta^n)$ is precisely the choice of a lift of $\\Delta^n \\to \\mathbf{BV}_{\\mathscr{E}, < (I,K) }$ to $\\mathbf{BV}_{\\mathscr{E}, \\le (I,K) }$.
The fact that $L_{I,K}(\\Delta^n)$ restricts to $L_{I,K}(\\Lambda^n)$ implies that the above diagram commutes. \n\n\n\\end{proof} \n\n\n\\section{A local to global principle} \n\n\\label{section local global}\n\\subsection{Simplicial presheaves and the \\v{C}ech complex}\n\nLet $\\operatorname{sSets}$ denote the category of simplicial sets. Recall that a simplicial presheaf on a topological space is simply a presheaf of simplicial sets. Let $\\mathcal U$ be an open cover of $M$, so that $\\mathcal U$ is a topological space with a surjective open map $\\mathcal U \\to M$ which, on every connected component of $\\mathcal U$, is an open embedding. Let \n$$\n\\mathcal U_n = \\mathcal U \\times_M \\cdots \\times_M \\mathcal U \n$$\nwhere $\\mathcal U$ appears $n+1$ times on the right hand side. As usual, we can associate to $\\mathcal U$ a simplicial space, whose space of $n$-simplices is $\\mathcal U_n$.\n\nEvery connected component of $\\mathcal U_n$ is an open subset of $M$. Let $G$ be a simplicial presheaf on $M$, and let \n$$G(\\mathcal U_n) = \\prod_{V \\text{ a connected component of } \\mathcal U_n} G(V) . $$\nThe simplicial sets $G(\\mathcal U_n)$ form a cosimplicial simplicial set. In other words, if $\\Delta$ denotes the category of totally ordered finite sets, there is a functor \n\\begin{align*}\n\\check{G}_\\mathcal U : \\Delta &\\to \\operatorname{sSets} \\\\\n\\check{G} _\\mathcal U[n] &= G(\\mathcal U_n) \n\\end{align*}\nwhere $[n] \\in \\operatorname{Ob} \\Delta$ is the set $\\{0,1,\\ldots,n\\}$. \n\nNext we will define a simplicial set $\\check{C}( \\mathcal U, G )$ which is the \\v{C}ech complex for the open cover $\\mathcal U$, with coefficients in $G$.
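Before giving the definition, it may help to spell out this simplicial space in the smallest nontrivial case, a hypothetical cover by two open sets (an illustration only; the notation $U_0, U_1$ is not used elsewhere):

```latex
% Illustration: the cover \\mathcal U = U_0 \\sqcup U_1 of M by two open sets.
% The components of \\mathcal U_1 = \\mathcal U \\times_M \\mathcal U are the
% pairwise intersections (including the self-intersections U_i \\cap U_i = U_i):
\\mathcal U_0 = U_0 \\sqcup U_1 , \\qquad
\\mathcal U_1 = \\coprod_{i,j \\in \\{0,1\\}} U_i \\cap U_j ,
% so that, for a simplicial presheaf G,
G(\\mathcal U_0) = G(U_0) \\times G(U_1) , \\qquad
G(\\mathcal U_1) = \\prod_{i,j \\in \\{0,1\\}} G(U_i \\cap U_j) .
```

The two coface maps $G(\\mathcal U_0) \\rightrightarrows G(\\mathcal U_1)$ restrict a section on $U_i$ to the components $U_i \\cap U_j$.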
\n\nFor any simplicial sets $X,Y$, let us denote by\n$$\n\\Hom^{\\Delta}( X, Y)\n$$\nthe simplicial set of maps $X \\to Y$, so that \n$$\n\\Hom^{\\Delta}( X, Y) [n] = \\Hom( X \\times \\Delta^n, Y) \n$$\n\nFor each face map $\\phi : \\Delta_{n-1} \\to \\Delta^n$, there are maps\n$$\n\\Hom^\\Delta (\\Delta^n , G( \\mathcal U_n ) ) \\xrightarrow{\\phi^\\ast} \\Hom^\\Delta (\\Delta_{n-1} , G( \\mathcal U_n ) ) \\xleftarrow{\\phi_\\ast} \\Hom^\\Delta (\\Delta_{n-1} , G( \\mathcal U_{n-1} ) ).\n$$\nThe first arrow is simply composition with $\\phi \\times \\operatorname{Id}$, and the second arrow comes from the cosimplicial set structure on the simplicial sets $G(\\mathcal U_n)$. \n\nLet \n$$\n\\check{C}( \\mathcal U, G ) \\subset \\prod_n \\Hom^\\Delta (\\Delta^n , G( \\mathcal U_n ) )\n$$\nbe the sub-simplicial set of those elements which are compatible with the face maps. More precisely, an element $f \\in \\check{C}( \\mathcal U, G ) [m] $ is a sequence of elements \n$$f_n \\in \\Hom (\\Delta^n \\times \\Delta^m , G( \\mathcal U_n ) )$$ for each $n \\ge 0$, such that for each face map $\\phi : \\Delta_{n-1} \\to \\Delta^n$, \n$$\n\\phi^\\ast f_n = \\phi_\\ast f_{n-1} \\in \\Hom ( \\Delta_{n-1} \\times \\Delta^m , G( \\mathcal U_n ) ).\n$$\nNote that we \\emph{do not} require compatibility with degeneracy maps. The simplicial set $\\check{C}(\\mathcal U, G)$ is a version of the total complex of the cosimplicial simplicial set defined by $G(\\mathcal U_n)$ (see \\cite{BouKan72}).\n\nSimilarly, let \n$$\n\\check{C}_{\\le N}( \\mathcal U, G ) \\subset \\prod_{n = 0}^N \\Hom^\\Delta (\\Delta^n , G( \\mathcal U_n ) )\n$$\nbe the sub-simplicial set of those elements compatible with face maps $\\Delta_{n-1} \\to \\Delta_{n}$, for all $0 \\le n \\le N$. 
Note that \n$$\n\\check{C}( \\mathcal U, G ) = \\varprojlim _N \\check{C}_{\\le N}( \\mathcal U, G ) .\n$$\n\n\nThe simplicial set $\\check{C}( \\mathcal U, G )$ is the non-linear \\v{C}ech complex for the open cover $\\mathcal U$, with values in the simplicial presheaf $G$. As with ordinary \\v{C}ech cohomology, we will define the simplicial set of derived global sections of $G$ as a limit over all open covers. That is, let \n$$\n\\mbb R \\Gamma (M , G ) = \\operatorname{colim}_{\\mathcal U} \\check{C} ( \\mathcal U,G) . \n$$\n\\begin{definition}\nA simplicial presheaf $G$ is \\emph{locally fibrant} if there exists an open cover $\\{U_i\\}$ of $M$, such that for all $i$ and all open sets $W \\subset U_i$, $G(W)$ is a Kan complex (i.e.\\ a fibrant simplicial set). \n\\end{definition}\n\nWe will now see that our definition of $\\mbb R \\Gamma(M,G)$ is well-behaved as long as $G$ is locally fibrant.\n\\begin{lemma}\nIf $G$ is locally fibrant, then the map\n$$\n\\check{C}_{\\le N}( \\mathcal U, G ) \\to \\check{C}_{\\le N-1}( \\mathcal U, G )\n$$\nis a fibration for sufficiently fine open covers $\\mathcal U$.\n\\end{lemma}\n\n\\begin{proof}\nWe can assume that $G(W)$ is fibrant for all open sets $W$ contained inside some element of the cover $\\mathcal U$. \n\nSuppose we have a map $\\Delta^n \\to \\check{C}_{\\le N-1}( \\mathcal U, G )$, and a map $\\Lambda^n \\to \\check{C}_{\\le N}( \\mathcal U, G )$ lifting the composition $\\Lambda^n \\to \\Delta^n \\to \\check{C}_{\\le N-1}( \\mathcal U, G )$. We need to construct a lift $\\Delta^n \\to \\check{C}_{\\le N}( \\mathcal U, G )$.\n\nSuch a lift is given by a map $\\Delta^n \\to \\Hom^\\Delta(\\Delta_N , G ( \\mathcal U_N ) )$, in other words, a map $\\Delta_N \\times \\Delta^n \\to G( \\mathcal U_N)$. This map is known on $\\partial \\Delta_N \\times \\Delta^n$ and on $\\Delta_N \\times \\Lambda^n$.
Indeed, the definition of $\\check{C}_{\\le N}( \\mathcal U, G )$ implies that for each face $\\Delta_{N-1} \\to \\Delta_N$, this map must restrict to the given map $\\Delta^n \\to \\Hom^\\Delta( \\Delta_{N-1}, G( \\mathcal U_{N-1}) )$. On $\\Delta_N \\times \\Lambda^n$, this map must restrict to the given map $\\Lambda^n \\to \\Hom^\\Delta(\\Delta_N , G ( \\mathcal U_N ) )$. \n\nThus, we are left with finding an extension of a map \n$$\n\\left( \\partial \\Delta_N \\times \\Delta^n \\right) \\amalg_{\\partial \\Delta_N \\times \\Lambda^n} \\Delta_N \\times \\Lambda^n \\to G ( \\mathcal U_N ) \n$$\nto a map \n$$\n\\Delta_N \\times \\Delta^n \\to G ( \\mathcal U_N ) . \n$$\nThe map\n$$\n\\left( \\partial \\Delta_N \\times \\Delta^n \\right) \\amalg_{\\partial \\Delta_N \\times \\Lambda^n} \\Delta_N \\times \\Lambda^n \\to \\Delta_N \\times \\Delta^n \n$$\nis a cofibration and a weak equivalence. Since $G ( \\mathcal U_N ) $ is fibrant, the desired extension exists. \n\\end{proof} \n\n\\begin{lemma}\nLet $G, G'$ be locally fibrant simplicial presheaves on $M$, and let $\\mathcal U$ be a sufficiently fine open cover of $M$. Let $G \\to G'$ be a map of simplicial presheaves such that the maps $G(\\mathcal U_n) \\to G'(\\mathcal U_n)$ are all weak equivalences. \n\nThen the map\n$$\n \\check{C} ( \\mathcal U,G) \\to \\check{C} ( \\mathcal U,G') \n$$\nis a weak equivalence. \n\n\\label{lemma local weak equivalence}\n\\end{lemma}\n\\begin{remark}\nThis lemma is standard in the theory of cosimplicial spaces; see \\cite{BouKan72}.\n\\end{remark}\n\n\\begin{proof}\nThe long exact sequence for the homotopy groups of a fibration implies that the maps $\\check{C}_{\\le N} ( \\mathcal U,G) \\to \\check{C}_{\\le N} ( \\mathcal U,G')$ are weak equivalences for all $N$. The result follows easily from this. \n\\end{proof} \n\\begin{lemma}\nLet $G, G'$ be locally fibrant simplicial presheaves on $M$.
Let $G \\to G'$ be a map of simplicial presheaves such that, for all sufficiently small open balls $B$ in $M$, the map $G (B) \\to G'(B)$ is a weak equivalence.\n\nThen the map $\\mbb R \\Gamma(M, G) \\to \\mbb R \\Gamma(M,G') $ is a weak equivalence.\n\\end{lemma}\n\\begin{proof}\nRecall that a \\emph{good cover} of $M$ is a cover all of whose iterated intersections are open balls. Every cover of $M$ has a refinement by a good cover. \n\nIf $\\mathcal U \\to M$ is a sufficiently fine good cover, then $\\check{C}(\\mathcal U, G ) \\to \\check{C}(\\mathcal U, G')$ is a weak equivalence, by the previous lemma. \n\nThe colimit over open covers $\\mathcal U$ we use to define $\\mbb R \\Gamma(M,G)$ is a filtered colimit. Homotopy groups commute with filtered colimits. It follows immediately that the map $\\mbb R \\Gamma(M,G) \\to \\mbb R \\Gamma(M,G')$ is a weak equivalence. \n\\end{proof} \n\n\\begin{corollary}\nLet $G$ be a simplicial presheaf such that for all sufficiently small open balls $U$ in $M$, $G(U)$ is contractible. Then $\\mbb R \\Gamma(M,G)$ is contractible.\n\\label{corollary locally contractible}\n\\end{corollary}\n\\begin{proof}\nThe hypothesis, together with Lemma \\ref{lemma local weak equivalence}, implies that the map $\\mbb R \\Gamma(M,G) \\to \\mbb R \\Gamma(M,\\ast)$ is a weak equivalence, where $\\ast$ is the constant simplicial presheaf on $M$ with value a point. For any open cover $\\mathcal U$ of $M$, $\\check{C}(\\mathcal U, \\ast) = \\ast$, so $\\mbb R \\Gamma(M, \\ast) = \\ast$.\n\\end{proof} \n\n\\subsection{Simplicial presheaf of gauge fixing conditions}\nIn order to construct a simplicial presheaf of gauge fixing conditions, we need some extra data, which is typically present in applications.
\n\nThe extra data consists of:\n\\begin{enumerate}\n\\item\nA smooth, locally trivial fibre bundle $F$ over $M$, whose fibres are diffeomorphic to $\\mbb R^k$ for some $k$.\n\\item\nA map $D : F \\to \\operatorname{Diff}^{\\le 1} (E)$ of fibre bundles over $M$.\n\\item\nIf $f : U \\to F\\mid_U$ is any local section, defined over some open set $U$ in $M$, then \n$$\nD(f) : \\Gamma_c(U, E) \\to \\Gamma_c(U,E)\n$$\nmust be an odd operator of square zero, which is self-adjoint with respect to the symplectic pairing, and which is such that $[Q, D(f)]$ is a generalised Laplacian. Here $\\Gamma_c$ denotes sections with compact support.\n\\item\nIf $f : M \\to F$ is a global section of $F$, then $D(f)$ satisfies the axioms of a gauge fixing condition: in addition to the properties already mentioned, this means that there is a direct sum decomposition\n$$\n\\mathscr{E} = \\Gamma(M,E) = \\Im Q \\oplus \\Ker [ D(f), Q ] \\oplus \\Im D(f) . \n$$\n\\end{enumerate}\n\nIt is a little \\emph{ad hoc} to require the existence of such a bundle $F$. It would be better to construct a bundle of local gauge fixing conditions directly from the data $(\\mathscr{E},Q, \\ip{\\ , \\ })$. However, I couldn't see a clean way of doing this. In all examples, one is naturally given an $F$ satisfying these properties. \n\nFor example, in Chern-Simons theory, $F$ will be the bundle of metrics on the tangent bundle of $M$. This satisfies these conditions. Once we fix a reference metric, the fibre of the bundle of metrics on the tangent bundle can be identified with $GL(n)\/O(n)$, which is diffeomorphic to $\\mbb R^{n(n+1)\/2}$.
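To spell out the last diffeomorphism (a standard linear-algebra fact, recorded here for the reader's convenience): once a reference metric on $T_xM$ is fixed, other metrics correspond to positive-definite symmetric matrices, which the matrix exponential identifies with a vector space:

```latex
% With a reference metric fixed, a metric on T_x M corresponds to a
% positive-definite symmetric n x n matrix. The matrix exponential
\\exp : \\operatorname{Sym}(n) \\to \\operatorname{Pos}(n)
% is a diffeomorphism, and Sym(n) is a vector space of dimension
\\dim \\operatorname{Sym}(n) = \\tfrac{n(n+1)}{2} ,
% so each fibre of the bundle of metrics is diffeomorphic to \\mbb R^{n(n+1)\/2}.
```

In particular the fibres are contractible, as required of the bundle $F$.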
\n\nAssociated to the bundle $F$ we have a simplicial presheaf $\\mathbf{F}$ on $M$, such that \n$$\\mathbf{F}(U)[n] = \\text{smooth sections of } F \\vert_U \\times \\Delta^n \\to U \\times \\Delta^n.$$\nThere is a map\n$$\n\\Gamma(M,\\mathbf{F} ) \\to \\mathbf{GF} .\n$$\nThe simplicial presheaf $\\mathbf{F}$ has the following properties.\n\\begin{lemma}\n\\label{lemma properties sheaf gf}\n\\begin{enumerate}\n\\item\n$\\mathbf{F}$ is locally fibrant. \n\\item\nBoth $\\Gamma(M, \\mathbf{F})$ and $\\mbb R \\Gamma(M, \\mathbf{F})$ are contractible.\n\\item\nThe sheaves of sets $U \\mapsto \\mathbf{F}(U)[n]$ are soft. \n\\end{enumerate}\n\\end{lemma}\n\nRecall that a soft sheaf $G$ is one such that for all closed sets $C \\subset M$, the map $G(M) \\to G(C)$ is surjective. \n\n\\begin{proof}\nI will give a proof of the first statement, that $\\mathbf{F}$ is locally fibrant. The other assertions are straightforward to prove. It suffices to show that for all open subsets $U \\subset M$ on which the bundle $F$ is trivial, the simplicial set $\\mathbf{F}(U)$ is fibrant.\n\nTrivialising $F$ on $U$ gives a diffeomorphism $F \\mid_U \\cong U \\times \\mbb R^{k}$. Thus, the simplicial presheaf $\\mathbf{F}\\mid_U$ is isomorphic to the simplicial presheaf whose $n$-simplices, on an open set $V \\subset U$, are smooth maps $V \\times \\Delta^n \\to \\mbb R^k$. Therefore the simplicial set $\\mathbf{F}(U)$ admits the structure of a simplicial abelian group. All simplicial abelian groups are Kan complexes; thus $\\mathbf{F}(U)$ is a Kan complex.\n\n\n\\end{proof} \n\n\n\n\\subsection{The sheaf of local functionals}\nWe want to talk about sheaves of solutions to the renormalised quantum master equation. In order to do this, we need to first define sheaves of local functionals. Throughout this section, we will work with functions \\emph{modulo constants}.
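Before the formal definitions, here is the shape of functional we have in mind (an illustrative sketch; the operators $D_i$ and the density $\\mathrm d\\mu$ are hypothetical data, not notation used later):

```latex
% A typical local functional on fields e \\in \\Gamma_c(U,E): apply finitely
% many differential operators D_1, ..., D_n \\in \\Diff(E, \\mathbb C) to the
% field and integrate the product against a density d\\mu on M:
\\Phi(e) = \\int_M D_1(e) \\cdots D_n(e) \\, \\mathrm{d}\\mu .
```

Functionals of this form are precisely the image of the integration map appearing in the definition of $\\mscr O_l(E)$.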
\n\n\\begin{definition}\nLet $\\mscr O(E)$ denote the presheaf on $M$ whose value on an open set $U$ is \n$$\n\\Gamma(U, \\mscr O(E)) = \\prod_{n \\ge 1} \\Hom ( \\Gamma_c(U,E)^{\\otimes n}, \\mathbb C ) _{S_n}. \n$$\nHere, we work with functions modulo constants, so we take $n \\ge 1$. $\\Hom$ denotes continuous linear maps, the subscript $S_n$ denotes coinvariants, and the subscript $c$ denotes sections with compact support.\n\\end{definition}\nThus,\n$$\n\\Gamma(M, \\mscr O(E)) = \\mscr O(\\mathscr{E}) \/ \\mathbb C . \n$$\nIn a similar way, if $X$ is an auxiliary manifold, there is a presheaf $\\mscr O(E, C^{\\infty}(X) )$ on $M$, such that\n$$\n\\Gamma(U, \\mscr O(E, C^{\\infty}(X)) ) = \\prod_{n \\ge 1} \\Hom ( \\Gamma_c(U,E)^{\\otimes n}, C^{\\infty}(X) ) _{S_n}. \n$$\n\nRecall that polydifferential operators $\\mathscr{E}^{\\otimes n} \\to \\dens(M)$ can be identified with global sections of the infinite rank vector bundle\n$$\n\\Diff(E,\\mathbb C)^{\\otimes n} \\otimes \\dens(M)\n$$\non $M$. Here $\\mathbb C$ denotes the trivial bundle, $\\Diff$ is the sheaf of differential operators between two vector bundles, and tensor products are fibrewise tensor products of vector bundles. \n\\begin{definition}\nLet $\\mscr O_l (E)\\subset \\mscr O(E)$ be the sub-presheaf given locally by the image of \n$$\n\\prod_{n \\ge 1} \\Gamma(U, \\Diff(E,\\mathbb C)^{\\otimes n} \\otimes \\dens_M ) \\xrightarrow{\\int} \\prod_{n \\ge 1} \\Hom ( \\Gamma_c(U,E)^{\\otimes n}, \\mathbb C ) _{S_n} .\n$$\n\\end{definition}\nThus, \n$$\n\\Gamma(M , \\mscr O_l (E) ) = \\mscr O_l(\\mathscr{E}) \/ \\mathbb C .\n$$\nIn a similar way, we can define the sheaf $\\mscr O_l (E , C^{\\infty}(X) )$ of local functionals with values in $C^{\\infty}(X)$, or in $\\Omega^\\ast(X)$, if $X$ is an auxiliary manifold (with corners).\n\\begin{lemma}\nThe presheaf $\\mscr O_l (E, \\Omega^\\ast(X))$ is a soft sheaf.
\n\\end{lemma}\n\n\\subsection{Simplicial presheaf of solutions to the renormalised QME}\n\nProposition \\ref{proposition obstruction} shows that if we have $S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^n)\\otimes \\mathbb C [[\\hbar]])$, and a smooth family of gauge fixing conditions $Q^{GF}$, parametrised by $\\Delta^n$, then there are obstructions \n$$O_{(I,K)}(S, Q^{GF} )\\in \\mscr O_l(\\mathscr{E}, \\mathscr{A}_{\\le 0} \\otimes \\Omega^\\ast(\\Delta^n))$$\n to $S$ satisfying the renormalised quantum master equation. These obstructions are homogeneous of order $K$; thus, if $K > 0$, they are elements of $\\Gamma(M, \\mscr O_l (E, \\mathscr{A}_{\\le 0 } \\otimes \\Omega^\\ast(\\Delta^n) ))$. It is easy to see that the $O_{(I,K)}$, when $K > 0$, don't depend on the constant term of $S$. Thus, the obstructions give us maps \n$$\nO_{(I,K)}^{global} : \\Gamma(M, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n] ) \\to \\Gamma(M, \\mscr O_l(E, \\mathscr{A}_{\\le 0}\\otimes \\Omega^\\ast(\\Delta^n) ) ).\n$$ \n\\begin{lemma}\nFor all $I \\ge 0$, $K > 0$ and $n \\ge 0$, there are unique maps of simplicial presheaves \n$$O_{(I,K)} : \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n] \\ \\to \\mscr O_l(E, \\mathscr{A}_{\\le 0} \\otimes \\Omega^\\ast(\\Delta^n) ) $$ such that the diagram\n$$\n\\xymatrix{ \\Gamma(M, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n] ) \\ar[r]^>>>>>{O_{(I,K)}^{global}} \\ar[d] & \\Gamma(M, \\mscr O_l(E, \\mathscr{A}_{\\le 0} \\otimes \\Omega^\\ast(\\Delta^n) ) ) \\ar[d] \\\\ \n\\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n]) \\ar[r]^>>>>>{O_{(I,K)}(U)} & \\Gamma(U, \\mscr O_l(E, \\mathscr{A}_{\\le 0}\\otimes \\Omega^\\ast(\\Delta^n) ) ) }.\n$$\ncommutes, for all open sets $U \\subset M$. 
\n\\end{lemma}\n\\begin{proof}\nThe sheaf $ \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n]$ is soft. This implies that for all open sets $U \\subset M$, and all sections $(S,f) \\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n] )$, there exists an open cover of $U$ by sets $\\{V_i\\}$, and global sections $(S_i,f_i )$, such that $(S_i,f_i) \\mid_{V_i} = (S,f) \\mid_{V_i}$. \n\nThis implies uniqueness. For existence, we need to show that $O_{(I,K)}^{global}(S_i,f_i) \\mid_{V_i}$ only depends on $(S_i,f_i) \\mid_{V_i}$. This follows from the third clause of Proposition \\ref{proposition obstruction}. \n\n\\end{proof} \n\n\n\n\\begin{lemma}\n\n\nLet $S \\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F} [n])$ be such that $O_{(i,k)}(S)= 0$ for all $(i,k) < (I,K)$. Then \\begin{enumerate}\n\\item\n$O_{(I,K)}(S)$ is independent of $\\varepsilon$, so it is a section of $\\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) )$. \n\\item\n$$\n(Q + \\d_{DR} ) O_{(I,K)}(S) = 0. \n$$\n\\item\nIf $S'\\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) ) )$ is homogeneous of degree $K$, then \n$$\nO_{(I,K)} ( S + \\hbar^I S' ) = O_{(I,K)} (S) + (Q + \\d_{DR} ) S'.\n$$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe proof is identical to that of Lemma \\ref{closed local obstruction}.
\n\\end{proof} \n\n\\begin{definition}\nLet $\\mathbf{BV}$ be the simplicial presheaf on $M$ such that $\\Gamma(U, \\mathbf{BV}[n] )$ is the set of $$(S,f) \\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F}[n])$$ such that $$O_{(I,K)}(S,f) = 0$$ for all $I \\ge 0,K > 0$.\n\nLet $\\mathbf{BV}_{\\le (I,K)}$ be the simplicial presheaf such that $\\Gamma(U, \\mathbf{BV}_{\\le (I,K)}[n] )$ is the set of \n$$(S,f) \\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F}[n] )$$ \nsuch that\n\\begin{enumerate}\n\\item\n$S_{(i,k)} = 0$ if $(i,k) > (I,K)$, where, as usual, $S_{(i,k)}$ is the part homogeneous of degree $(i,k)$ as a function of $(\\hbar, e \\in \\Gamma_c(U,E) )$. \n\\item\n$O_{(i,k)}(S,f) = 0$ for $(i,k) \\le (I,K)$.\n\\end{enumerate}\n\nLet $\\mathbf{BV}_{< (I,K)}$ be the simplicial presheaf such that $\\Gamma(U, \\mathbf{BV}_{< (I,K)}[n] )$ is the set of \n$$(S,f) \\in \\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) \\otimes \\mathbb C [[\\hbar]] ) \\times \\mathbf{F}[n] )$$ \nsuch that\n\\begin{enumerate}\n\\item\n$S_{(i,k)} = 0$ if $(i,k) \\ge (I,K)$.\n\\item\n$O_{(i,k)}(S,f) = 0$ for $(i,k) < (I,K)$.\n\\end{enumerate}\n\n\\end{definition}\n\n\n\n\\begin{lemma}\n$\\mathbf{BV}$ is a locally fibrant simplicial presheaf.\n\\end{lemma}\n\\begin{proof}\nIt suffices to show that $\\Gamma ( U , \\mathbf{BV}) $ is a fibrant simplicial set, for all open subsets $U$ of $M$ on which the bundle $F$ is trivial. Since we know $\\Gamma(U, \\mathbf{F})$ is fibrant, it suffices to show that the map $\\Gamma(U, \\mathbf{BV} ) \\to \\Gamma(U, \\mathbf{F})$ is a fibration. The proof is identical to that of Theorem \\ref{theorem fibration}.\n\\end{proof} \n\nThis implies that our definition of $\\mbb R \\Gamma (M ,\\mathbf{BV})$ is well-behaved.
\n\n\n\\begin{definition}\nThe operator $Q$ on the sheaf of sections of $E$ is called \\emph{triangular} if \n$E$ admits a decomposition as a direct sum of vector bundles \n$$E = E_0 \\oplus E_1 \\oplus \\cdots \\oplus E_n$$ such that the operator $Q$ takes sections of $E_i$ to sections of $\\oplus_{j > i} E_j$.\n\\end{definition}\nFor example, the de Rham differential, acting on the bundle of differential forms, is triangular with respect to the decomposition by form degree.\n\n\\begin{thmD}\nSuppose $Q$ is triangular.\nThen the natural map\n$$\n\\Gamma(M, \\mathbf{BV} ) \\to \\mbb R \\Gamma(M, \\mathbf{BV} ) \n$$\nis a weak equivalence. \n\\end{thmD}\nI will call this theorem the ``local to global principle''. This result allows one to construct global solutions to the quantum master equation from local solutions. The local solutions will typically be constructed in explicit model cases, like $\\mbb R^n$ with the flat metric, as we will see below for the case of Chern-Simons theory. \n\n\n\\subsection{Proof of theorem D} \nThere is a simplicial presheaf on $M$ whose presheaf of $n$-simplices is\n$$\n\\Gamma(U, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n ) ) ) .\n$$\nLet us denote this simplicial presheaf by $\\mscr O_l^{\\Delta}(E)$. Note that\n$$\n\\mathbf{BV} \\subset \\mscr O_l^{\\Delta}(E) \\times \\mathbf{F} \n$$\nis a sub-simplicial presheaf. \n\n\\begin{lemma}\nLet $X$ be a simplicial set, and let $S_X : X \\to \\mbb R \\Gamma(M, \\mathbf{BV}_{< (I,K)})$ be a map. Then there is an obstruction \n$$\nO(S_X) \\in \\operatorname{Hom} ( X, \\mbb R \\Gamma(M, \\mscr O_l^{\\Delta} (E) ) )\n$$\nwhich is closed, $(Q + \\d_{DR} )O(S_X) = 0$, and homogeneous of degree $K$. The simplicial set of lifts $X \\to \\mbb R \\Gamma(M, \\mathbf{BV}_{\\le (I,K)})$ is isomorphic to the simplicial set of maps $S_X' : X \\to \\mbb R \\Gamma(M, \\mscr O_l^{\\Delta}(E))$, homogeneous of degree $K$, satisfying\n$$\n(Q + \\d_{DR} ) S'_X = - O(S_X) . \n$$\n\\end{lemma}\n\\begin{proof}\nFix an open cover $\\mathcal U$ of $M$.
First, we will prove the corresponding statement for lifts of maps\n$$\nS_X : X \\to \\check{C}(\\mathcal U, \\mathbf{BV}_{<(I,K)}).\n$$\nSuch a map is given by maps\n$$\nS_X^n : X \\times \\Delta^n \\to \\Gamma(\\mathcal U_n, \\mathbf{BV}_{<(I,K)} )\n$$\nfor all $n$, which are compatible with the face maps $\\Delta^m \\to \\Delta^n$, $\\mathcal U_m \\to \\mathcal U_n$, for $m < n$. \n\nThe obstructions to lifting $S_X^n$ to $\\mathbf{BV}_{\\le (I,K)}$ are maps\n$$\nO(S_X, \\mathcal U_n ) : X \\times \\Delta^n \\to \\Gamma(\\mathcal U_n, \\mscr O_l^\\Delta(E) ).\n$$\nNaturality of these obstructions implies that they fit together into a map\n$$\nO(S_X, \\mathcal U) : X \\to \\check{C}(\\mathcal U, \\mscr O_l^\\Delta(E) ).\n$$\n\nThere is a bijection between lifts of each $S_X^n$ and ways of making the obstruction $O(S_X, \\mathcal U_n)$ exact. Thus, there is a corresponding bijection between lifts of $S_X$ and ways of making the total obstruction $O(S_X, \\mathcal U)$ exact. Of course, the map $S' : X \\to \\check{C}(\\mathcal U, \\mscr O_l^\\Delta(E) )$ making $O(S_X, \\mathcal U)$ exact must be homogeneous of degree $K$.\n\nThe obstruction $O(S_X, \\mathcal U)$ is natural with respect to restrictions to finer open covers $\\mathcal V$. 
Thus, taking the direct limit, we get an element\n$$\nO(S_X) = \\varinjlim_{\\mathcal U} O(S_X, \\mathcal U) \\in \\varinjlim_{\\mathcal U} \\check{C}(\\mathcal U, \\mscr O_l^\\Delta(E) ) = \\mbb R \\Gamma ( M , \\mscr O_l^\\Delta(E) ).\n$$\nAgain, naturality of the obstruction implies that lifts $X \\to \\mbb R \\Gamma(M, \\mathbf{BV}_{\\le (I,K)})$ correspond to ways of making $O(S_X)$ exact.\n\\end{proof} \n\\begin{proposition}\nThe map\n$$\n\\mbb R \\Gamma (M, \\mathbf{BV}_{\\le (I,K)} ) \\to \\mbb R \\Gamma ( M, \\mathbf{BV}_{< (I,K)} )\n$$\nis a fibration.\n\\end{proposition}\n\\begin{proof}\nThe proof is exactly the same as that of Proposition \\ref{proposition fibration},\nusing the obstruction to lifting maps to $ \\mbb R \\Gamma ( M, \\mathbf{BV}_{< (I,K)} )$ to maps to $\\mbb R \\Gamma (M, \\mathbf{BV}_{\\le (I,K)} )$.\n\\end{proof} \n\nNow we will prove the theorem by induction. The starting point in the induction is the statement that the map\n$$\n\\Gamma(M, \\mathbf{BV}_{(0,0)} ) \\to \\mbb R \\Gamma (M, \\mathbf{BV}_{(0,0)})\n$$\nis a weak equivalence. Note that $\\mathbf{BV}_{(0,0)} = \\mathbf{F}$. Lemma \\ref{lemma properties sheaf gf} implies that the map $\\Gamma(M, \\mathbf{F}) \\to \\mbb R \\Gamma(M, \\mathbf{F} )$ is a weak equivalence. \n\nWe will assume by induction that the map\n$$\n\\Gamma(M, \\mathbf{BV}_{< (I,K)} )\\to \\mbb R \\Gamma (M, \\mathbf{BV}_{<(I,K)} )\n$$\nis a weak equivalence.\n\nConsider the diagram\n$$\n\\xymatrix{ \\Gamma (M, \\mathbf{BV}_{\\le (I,K)} ) \\ar[r] \\ar[d]^{\\pi} &\\mbb R \\Gamma ( M, \\mathbf{BV}_{\\le (I,K)} ) \\ar[d]^{\\mbb R \\pi} \\\\ \n \\Gamma (M, \\mathbf{BV}_{< (I,K)} ) \\ar[r] & \\mbb R \\Gamma ( M, \\mathbf{BV}_{< (I,K)} ) . } \n$$\nBy Proposition \\ref{proposition fibration}, the map $\\pi$ is a fibration, and we have just seen that $\\mbb R \\pi$ is also a fibration. By induction, the bottom horizontal arrow is a weak equivalence. 
By considering the long exact sequence of homotopy groups of a fibration, it suffices to show that the fibres of $\\pi$ and of $\\mbb R \\pi$ are weakly homotopy equivalent. \n\nThus, fix a base point\n$$\nx_0 \\in \\Gamma (M, \\mathbf{BV}_{< (I,K)} )[0] .\n$$\nIt remains to show that the map\n$$\n\\pi^{-1}(x_0) \\to \\mbb R \\pi^{-1} (x_0)\n$$\nis a weak equivalence. \n\n\n\n\n\nThere is an obstruction\n$$\nO_{(I,K)}(x_0) \\in \\Gamma(M, \\mscr O_l(E) )\n$$\nto lifting $x_0$ to $\\Gamma (M, \\mathbf{BV}_{\\le (I,K)} ) $. This is a closed element, $Q O_{(I,K)}(x_0) = 0$. \n\nWe can identify $\\pi^{-1}(x_0)$ with the simplicial set whose $n$-simplices are elements \n$$\\phi \\in \\Gamma(M, \\mscr O_l^{\\Delta}(E) ) [n] = \\Gamma(M, \\mscr O_l(E, \\Omega^\\ast(\\Delta^n) ) )$$\nwhich are homogeneous of degree $K$ as functionals on $\\Gamma(M,E)$, and which satisfy\n$$\n(Q + \\d_{DR}) \\phi = O_{(I,K)}(x_0).\n$$ \n\nSimilarly, we can identify $\\mbb R \\pi^{-1}(x_0)$ with the simplicial set whose $n$-simplices are\n$$\n\\phi \\in \\mbb R \\Gamma(M, \\mscr O_l^{\\Delta}(E) ) [n]\n$$\nwhich are homogeneous of degree $K$ and satisfy $(Q + \\d_{DR}) \\phi = O_{(I,K)}(x_0)$. \n\n\n\\begin{lemma}\nThe natural map of chain complexes\n$$\n(\\Gamma(M, \\mscr O_l(E) ) , Q) \\to ( \\mbb R \\Gamma(M, \\mscr O_l^{\\Delta}(E) ) [0] , Q + \\d _{DR} ) \n$$\nis a quasi-isomorphism. \n\\label{lemma linear cech}\n\\end{lemma}\n\n\\begin{proof}\nIt suffices to show that for all open covers $\\mathcal U$ of $M$, the natural map \n$$\n( \\Gamma(M, \\mscr O_l(E) ) , Q) \\to ( \\check{C}(\\mathcal U, \\mscr O_l^{\\Delta} (E) ) [0] , Q + \\d_{DR} )\n$$\nis a quasi-isomorphism.\n\n The triangular nature of the differential $Q$ on $E$ implies that the complex of sheaves $\\mscr O_l(E)$ on $M$ admits a grading, labelled by non-negative integers, such that $Q$ is of strictly negative degree. 
This grading is inherited by all auxiliary complexes, such as $\\check{C}(\\mathcal U, \\mscr O_l^{\\Delta} (E) ) [0]$ and $\\Gamma(M, \\mscr O_l(E) ) $. It follows from a spectral sequence argument that it suffices to show that the map of complexes\n$$\n( \\Gamma(M, \\mscr O_l(E) ) , 0) \\to ( \\check{C}(\\mathcal U, \\mscr O_l^{\\Delta} (E) ) [0] , \\d_{DR} )\n$$\nis a quasi-isomorphism, where the complex on the left hand side now has zero differential.\n\nThe complex $\\check{C}(\\mathcal U, \\mscr O_l^{\\Delta} (E) ) [0]$, with the differential $\\d_{DR}$, is quasi-isomorphic to the ordinary \\v{C}ech complex with coefficients in the sheaf $\\mscr O_l(E)$ (with zero differential). This sheaf is soft. Thus, its \\v{C}ech cohomology is the same as its global sections, which is the statement that the map above is a quasi-isomorphism.\n\n\\end{proof} \n\n\n\n\n\\begin{corollary}\n$\\pi^{-1}(x_0)$ is empty if and only if $\\mbb R \\pi^{-1}(x_0)$ is empty.\n\\end{corollary}\n\\begin{proof}\nIndeed, $\\pi^{-1}(x_0)$ is empty if and only if the cohomology class\n$$\n[O_{(I,K)}(x_0) ] \\in H^\\ast ( \\Gamma(M, \\mscr O_l(E) ), Q ) \n$$\nis non-zero, and $\\mbb R \\pi^{-1}(x_0)$ is empty if and only if the class of $O_{(I,K)}(x_0)$ in $H^\\ast ( \\mbb R \\Gamma(M, \\mscr O_l^{\\Delta}(E) ) [0] , Q + \\d_{DR} )$ is non-zero. By Lemma \\ref{lemma linear cech}, these two cohomology groups coincide, so one class vanishes if and only if the other does. \n\\end{proof}\n\nLet us use the temporary notation $\\mscr O_{l,K}(E) \\subset \\mscr O_{l}(E)$ for the sub-sheaf of functionals which are homogeneous of degree $K$. Also let $\\mscr O_{l,K}^\\Delta(E) \\subset \\mscr O_{l}^\\Delta(E)$ be the sub-simplicial presheaf of $\\mscr O_{l}^\\Delta(E)$ consisting of those functionals which are homogeneous of degree $K$. Thus, a section of the presheaf of $n$-simplices of $\\mscr O_{l,K}^\\Delta(E)$ is given by an element of $\\Gamma(U, \\mscr O_{l,K}(E, \\Omega^\\ast(\\Delta^n)) )$. $\\mscr O_{l,K}^{\\Delta}(E)$ is a simplicial presheaf of $\\mathbb Z\/2$-graded complexes, with differential $Q + \\d_{DR}$. \n\n\nLet us suppose that $\\pi^{-1}(x_0)$ is non-empty. (Otherwise, we are done, as $\\mbb R \\pi^{-1}(x_0)$ will also be empty). 
Then by picking a base point in $\\pi^{-1}(x_0)$, we can identify $\\pi^{-1}(x_0)$ with the simplicial set whose $n$-simplices are closed, even elements of $\\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )[n]$. Let us denote this simplicial set by $\\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$.\n\nBy taking the corresponding base point in $\\mbb R \\pi^{-1}(x_0)$, we can identify $\\mbb R \\pi^{-1}(x_0)$ with the simplicial set whose $n$-simplices are closed even elements of $\\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )[n]$. Let us denote this simplicial set by $\\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$.\n\n\nThus, to complete the proof of the theorem, it suffices to show that the map\n$$\n\\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even} \\to \\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}\n$$\nis a weak equivalence. This will follow from the following lemma, combined with Lemma \\ref{lemma linear cech}.\n\\begin{lemma}\nThe homotopy groups of the simplicial sets $\\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$ and $\\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$ are given by\n\\begin{align*}\n\\pi_n \\left( \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even} \\right) &=\nH^{\\abs{n}} (\\Gamma(M, \\mscr O_{l,K} ), Q ) \\\\\n\\pi_n \\left( \\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even} \\right) &= H^{\\abs{n}} (\\mbb R \\Gamma(M, \\mscr O_{l,K}^{\\Delta} )[0], Q + \\d_{DR} ).\n\\end{align*}\nOn the right hand side we are using $\\mathbb Z\/2$-graded cohomology groups. These equations are true for all $n \\ge 0$. \n\\end{lemma}\nThis is just a $\\mathbb Z\/2$-graded version of the Dold-Kan correspondence. \n\\begin{proof}\n\nThe $n$-simplices of $\\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$ are closed and even elements of the complex $\\Omega^\\ast(\\Delta^n)\\otimes \\Gamma(M, \\mscr O_{l,K}(E) )$, where we are using a completed tensor product. 
Similarly, the $n$-simplices of $\\mbb R \\Gamma(M, \\mscr O_{l,K}^\\Delta(E) )^{closed, even}$ are given by closed even elements of $\\Omega^\\ast(\\Delta^n) \\otimes \\mbb R \\Gamma ( M , \\mscr O_{l,K}^\\Delta (E) ) [0]$, where we are using, again, an appropriate completed tensor product.\n\nLet $(V, \\d_V)$ be one of these two $\\mathbb Z\/2$-graded complexes, so\n$$\n(V, \\d_V) = \\begin{cases}\n( \\Gamma(M, \\mscr O_{l,K}(E) ), Q ) & \\\\\n( \\mbb R \\Gamma ( M , \\mscr O_{l,K}^\\Delta (E) ) [0], Q + \\d_{DR} ) &\n\\end{cases}\n$$\nLet $V^\\Delta$ be the simplicial set whose $n$-simplices are closed even elements of $V \\otimes \\Omega^\\ast(\\Delta^n)$, where we use an appropriate completed tensor product. We need to calculate the homotopy groups of $V^\\Delta$.\n\nAs $V^\\Delta$ is a Kan complex, we can calculate these homotopy groups as homotopy classes of maps from spheres to $V^\\Delta$. Let us take $0$ as a base point in $V^\\Delta$. Let $S^n$ be a simplicial set representing the $n$-sphere, and let $p \\in S^n$ be a base point. 
A base point preserving map $f : S^n \\to V^\\Delta$ is given by an even element \n$$\n\\omega_f \\in V \\otimes \\Omega^\\ast(S^n , p )\n$$\nwhich is closed, $(\\d_V + \\d_{DR} ) \\omega_f = 0$.\n\nThus, $\\omega_f$ gives a class \n$$[\\omega_f] \\in H^{0 } (V \\otimes \\Omega^\\ast(S^n , p )) = H^{\\abs{n} } (V) .$$\n(Here we are using $\\mathbb Z\/2$-graded cohomology groups).\n\nWe need to show that if the map $f : S^n \\to V^\\Delta$ changes by a homotopy, then the class $[\\omega_f]$ remains unchanged, and conversely, if we change $\\omega_f$ to something cohomologous, then the corresponding map $S^n \\to V^\\Delta$ changes by a homotopy.\n\nIf $f,g : S^n \\to V^\\Delta$ are base point preserving maps, a base point preserving homotopy between them is given by an element $\\eta \\in V \\otimes \\Omega^\\ast ( S^n \\times [0,1], p \\times [0,1] )$, which is even and closed, and which restricts to $\\omega_f$ and $\\omega_g$ at $0$ and $1$.\n\nSince \n$$H^\\ast (V \\otimes \\Omega^\\ast ( S^n \\times [0,1], p \\times [0,1] ) ) = H^{\\abs{n}}(V) $$\nthe existence of such a homotopy implies that $[\\omega_f] = [ \\omega_g]$.\n\nConversely, suppose $\\phi \\in V \\otimes \\Omega^\\ast(S^n , p )$ is an odd element such that \n$$\n(\\d_V + \\d_{DR}) \\phi = \\omega_f - \\omega_g . \n$$\nThen if we let \n$$\n\\eta = \\omega_f + t (\\omega_g - \\omega_f ) + \\phi \\d t \\in V \\otimes \\Omega^\\ast ( S^n \\times [0,1], p \\times [0,1] )\n$$\nthen $\\eta$ is an even element satisfying $(\\d_V + \\d_{DR}) \\eta = 0$, so $\\eta$ defines a homotopy between $f$ and $g$. \n\n\\end{proof} \nThis completes the proof of theorem D. \n\n\\section{Local computations in Chern-Simons theory }\nWe will apply the theory developed in the previous section to construct a canonical up to homotopy solution of the renormalised quantum master equation in the case of Chern-Simons theory. 
\n\nIn this section, therefore, let $M$ be a manifold of dimension \n$$\\dim M = n \\ge 2.$$ \nLet $\\mathfrak{g}$ be a locally trivial sheaf of Lie algebras\\footnote{We could also use $L_\\infty$ algebras, and the construction works in more or less the same way. } over $\\mathbb C$ on $M$, with a non-degenerate invariant symmetric pairing of the opposite parity to $n$, with values in the constant sheaf $\\mathbb C$.\n\n\nLet\n$$\n\\mathscr{E} = \\Omega^\\ast(M , \\mathfrak g )[1] . \n$$\nLet $E$ denote the corresponding vector bundle on $M$, so $\\Gamma(M,E) = \\mathscr{E}$. Let $S_{CS} \\in \\mscr O_l(\\mathscr{E})$ denote the Chern-Simons action. \n\n\\begin{theorem}\nThere is a canonical, up to a contractible choice, element \n$$S \\in \\mscr O_l(\\mathscr{E} , \\mathbb C[[\\hbar]] )\/ \\mathbb C[[\\hbar]]$$ \nwhich solves the renormalised quantum master equation modulo constants, and is of the form\n$$\nS = S_{CS } + \\hbar S^{(1)} + \\hbar^2 S^{(2)} +\\cdots\n$$\n\\label{theorem chern simons}\n\\end{theorem}\nThe proof was sketched in the introduction; the idea is to show that locally, in a flat metric, $S_{CS}$ satisfies the renormalised QME. Patching these local solutions together yields a global solution for an arbitrary, possibly curved, metric. \n\n\\subsection{Simplicial presheaves of metrics}\nThe bundle on $M$ playing the role of $F$ in section \\ref{section local global} is the bundle $\\operatorname{Met} \\to M$, whose fibres are metrics on the tangent space of $M$. As in section \\ref{section local global}, we have a simplicial presheaf associated to $\\operatorname{Met}$, defined by saying $\\Gamma(U, \\mathbf{Met} [d])$ is the set of smooth families of metrics on $U$ parametrised by $\\Delta^d$. \n\nLet $\\mathbf{FMet} \\subset \\mathbf{Met}$ be the sub-simplicial presheaf given by families of flat metrics on $U$. \n\\begin{lemma}\nIf $U$ is a ball in $M$, then $\\Gamma(U, \\mathbf{FMet} )$ is contractible. 
\\label{lemma flat metrics locally contractible}\n\\end{lemma}\n\\begin{proof}\nPick an isomorphism between $U$ and $\\{x \\in \\mbb R^{n} \\mid \\norm{x} < 1 \\}$. The standard flat metric on the ball in $\\mbb R^{n}$ gives a metric on $U$, which we call $g^0$. \n\nLet $x_1,\\ldots, x_{n}$ denote our coordinates on $U$. If $g$ is a smooth family of flat metrics on $U$, parametrised by $\\Delta^d$, then, in these coordinates, we can write $g$ as \n$$\n\\sum_{1 \\le i,j \\le n} g_{ij} ( x , \\sigma ) \\d x_i \\otimes \\d x_j .\n$$\nHere $g_{ij}(x,\\sigma)$ is smooth in the $\\sigma$ variables, and smooth in the $x$ variables.\nTo show that the simplicial set of such $g$ is contractible, it suffices to construct a simplicial map \n$$\\Gamma(U, \\mathbf{FMet} ) \\times \\Delta_1 \\to \\Gamma(U, \\mathbf{FMet} ),$$ which is a simplicial homotopy between the identity on $\\Gamma(U, \\mathbf{FMet} )$ and the projection onto the flat metric $g^0 = \\sum \\d x_i \\otimes \\d x_i$. A $d$-simplex in $\\Gamma(U, \\mathbf{FMet} ) \\times \\Delta_1$ is of course a $d$-simplex in $\\Gamma(U, \\mathbf{FMet} )$ and a $d$-simplex on $\\Delta_1$. A $d$-simplex in $\\Delta_1$ is given by a non-decreasing map of sets $\\{0,1,\\ldots,d\\} \\to \\{0,1\\}$. Such a $d$-simplex yields an affine linear map $\\Delta^d \\to \\Delta_1$. \n\nLet $A \\subset \\Gamma(U, \\mathbf{FMet} )$ denote the simplicial set of constant metrics, i.e. those for which $g_{ij}(x,\\sigma)$ is independent of $x \\in U$. Of course, this is just the simplicial set whose $0$-simplices are positive-definite inner products on the vector space $\\mbb R^{n}$, and whose $d$-simplices are smooth families of such.\n\nFirst we construct a deformation retraction of $\\Gamma(U, \\mathbf{FMet} )$ onto $A$. This is given by a map $\\Phi : \\Gamma(U, \\mathbf{FMet} ) \\times \\Delta_1 \\to \\Gamma(U, \\mathbf{FMet} )$, defined as follows. 
Suppose we have a $d$-simplex in $\\Gamma(U, \\mathbf{FMet} ) \\times \\Delta_1$, corresponding to $g \\in \\Gamma(U, \\mathbf{FMet} )[d]$ and an affine linear map $p : \\Delta^d \\to \\Delta_1$. If, as above, $g = \\sum g_{ij}(x,\\sigma) \\d x_i \\otimes \\d x_j$, we define $\\Phi(g,p)$ by\n$$\n\\Phi(g,p ) = \\sum g_{ij} ( p(\\sigma) x, \\sigma ) \\d x_i \\otimes \\d x_j . \n$$\n$\\Phi$ is easily seen to give the required homotopy; when we restrict to $\\Gamma(U, \\mathbf{FMet} ) \\times \\{1\\} \\subset \\Gamma(U, \\mathbf{FMet} ) \\times \\Delta_1$, $\\Phi$ is the identity, and when we restrict to $\\Gamma(U, \\mathbf{FMet} )\\times \\{0\\}$, $\\Phi$ is a projection onto $A$.\n\nNext we need to show that the simplicial set $A$ is contractible. Note that the space of positive-definite inner products on $\\mbb R^n$ is $\\operatorname{GL}(n,\\mbb R) \/ \\operatorname{O}(n,\\mbb R)$. A simplex in $A$ is a continuous map $\\Delta^d \\to \\operatorname{GL}(n,\\mbb R) \/ \\operatorname{O}(n,\\mbb R)$, which is smooth on the top simplices of some barycentric subdivision of $\\Delta^d$. Thus, to show $A$ is contractible, it suffices to construct a smooth homotopy equivalence between $\\operatorname{GL}(n,\\mbb R) \/ \\operatorname{O}(n,\\mbb R)$ and a point. This is easy to do using (for instance) the Gram-Schmidt procedure. \n\n\n\\end{proof} \n\nThe simplicial presheaf $\\mathbf{FMet}$ is not necessarily locally fibrant. Thus, our definition of $\\mbb R \\Gamma$ is not well-behaved when applied to $\\mathbf{FMet}$. Therefore, we will instead consider $\\mbb R \\Gamma$ applied to a modification of $\\mathbf{FMet}$. \n\nLet $\\operatorname{Ex}^\\infty$ denote Kan's fibrant replacement functor for simplicial sets. If $X$ is a simplicial set, then $\\operatorname{Ex}^\\infty(X)$ is a fibrant simplicial set equipped with a map to $X$ which is a weak equivalence. 
Define $\\operatorname{Ex}^\\infty \\mathbf{FMet}$ by saying that \n$$\\Gamma (U, \\operatorname{Ex}^\\infty \\mathbf{FMet } ) = \\operatorname{Ex}^\\infty \\Gamma ( U, \\mathbf{FMet }).$$\nSince $\\operatorname{Ex}^\\infty \\mathbf{FMet}$ is locally fibrant, $\\mbb R \\Gamma ( M, \\operatorname{Ex}^\\infty \\mathbf{FMet})$ is well-behaved. \nThis is the correct version of derived global sections of $\\mathbf{FMet}$. \n\n\\begin{corollary}\n$\\mbb R \\Gamma (M , \\operatorname{Ex}^\\infty \\mathbf{FMet} ) $ is contractible.\n\\end{corollary}\nThis is immediate from Corollary \\ref{corollary locally contractible} and Lemma \\ref{lemma flat metrics locally contractible}.\n\nWe apply our local to global principle as follows. We would like to construct a homotopy point of $\\Gamma(M, \\mathbf{BV})$. Any map $\\mathbf{FMet} \\to \\mathbf{BV}$ of simplicial presheaves would yield such a homotopy point; indeed, such a map induces a map\n$$\n\\mbb R \\Gamma(M, \\operatorname{Ex}^\\infty \\mathbf{FMet}) \\to \\mbb R \\Gamma(M, \\operatorname{Ex}^\\infty \\mathbf{BV}) .\n$$\nWe know that $\\mbb R \\Gamma(M, \\operatorname{Ex}^\\infty \\mathbf{FMet}) $ is contractible. Also, $\\Gamma(M,\\mathbf{BV})$ is weakly equivalent to $\\mbb R \\Gamma(M, \\mathbf{BV})$, and therefore to $\\mbb R \\Gamma ( M ,\\operatorname{Ex}^\\infty \\mathbf{BV} )$. Thus, a map $\\mathbf{FMet} \\to \\mathbf{BV}$ yields a point (up to contractible choice) of $\\Gamma(M, \\mathbf{BV})$. \n\n\\begin{theorem}\nLet $U \\subset M$ be an open subset, and let\n$$\ng \\in \\Gamma(U, \\mathbf{FMet}[d] ) \n$$\nbe a smooth family of flat metrics on $U$ parametrised by the $d$-simplex $\\Delta^d$. 
Then the usual Chern-Simons action $S_{CS}$ satisfies the renormalised quantum master equation on $U$, and so defines an element\n$$\nS_{CS} \\in \\Gamma(U , \\mathbf{BV}[d] ) .\n$$\n\\label{theorem chern simons local calculation}\n\\end{theorem}\nThis theorem gives a map \n$$\n\\mathbf{FMet} \\to \\mathbf{BV}\n$$\n of simplicial presheaves. Thus, once we have proved this result, we will have proved Theorem \\ref{theorem chern simons}. \n\\subsection{Proof of Theorem \\ref{theorem chern simons local calculation}}\nWe want to show that, locally in a flat metric, all the obstructions $O_{(I,K)}(S_{CS})$ vanish, for $I \\ge 0, K > 0$. \n\nSince everything is local, it suffices to work with small open sets $U$ in $M$, with a smooth family of flat metrics parametrised by $\\Delta^d$. \n\n\nLet us trivialise our flat bundle of Lie algebras $\\mathfrak g$ on $U$. The space of fields is the $\\Omega^\\ast(\\Delta^d)$-module\n$$\n\\Omega^\\ast(U) \\otimes \\Omega^\\ast(\\Delta^d) \\otimes \\mathfrak g\n$$\nwith the differential $\\d_{DR}^U + \\d_{DR}^{\\Delta^d}$, and the gauge fixing condition $\\d^\\ast$. The operator $\\d^\\ast$ depends on the coordinates in $\\Delta^d$. The Hamiltonian $H$ is\n$$\nH = [\\d_{DR}^U + \\d_{DR}^{\\Delta^d}, \\d^\\ast ] .\n$$\n\n\n\n\n\nRecall that $n = \\dim M$. Let us give $\\mbb R^n$ the standard flat metric. It is easy to check that one can find an open subset $V \\subset \\mbb R^n\\times \\Delta^d$, and an isomorphism $U \\times \\Delta^d \\to V$, such that the diagram\n$$\n\\xymatrix{ U \\times \\Delta^d \\ar[r] \\ar[d] & V \\ar[dl] \\\\ \n\\Delta^d & }\n$$\ncommutes, and which is compatible with the metrics along the fibres of the projection maps to $\\Delta^d$. \n\n\n \nThe isomorphism chosen above gives an isomorphism\n$$\n\\Omega^\\ast(U) \\otimes \\Omega^\\ast(\\Delta^d) \\otimes \\mathfrak g \\cong \\Omega^\\ast(V)\n$$\nof differential graded $\\Omega^\\ast(\\Delta^d)$-modules. 
The differential on both sides is simply the de Rham differential on $U \\times \\Delta^d$, or on $V$. Because the metric along the fibres of the map $U\\times \\Delta^d \\to \\Delta^d$ corresponds to that along the fibres of the map $V \\to \\Delta^d$, this isomorphism also takes the operator $\\d^\\ast$ on the left hand side to the corresponding one on the right hand side.\n\nThe propagator is constructed from the de Rham differential on $U \\times \\Delta^d$, and the operator $\\d^\\ast$. Thus, it suffices to show that the obstructions $O_{(I,K)}(S_{CS})$ vanish when we work with $V$ instead of $U \\times \\Delta^d$. Since the obstruction is local, it suffices to work in $\\mbb R^n \\times \\Delta^d$.\n\nWe have reduced to the case of $\\mbb R^n$ with the constant family of flat metrics. Thus, it suffices to show the following.\n\\begin{theorem}\nLet $S_{CS}$ be the usual Chern-Simons action on $\\mbb R^n$ with the flat metric. Then\n\\begin{enumerate}\n\\item\nThere are no counter-terms.\n\\item\n$S_{CS}$ satisfies the renormalised quantum master equation.\n\\end{enumerate}\n\\end{theorem}\nThis result implies, in particular, that the construction is independent of the choice of renormalisation scheme. The choice of renormalisation scheme is only involved in constructing the bijection between the set of local functionals $S\\in \\mscr O_l(\\mathscr{E}, \\mathbb C[[\\hbar]])$ and the set of systems of effective actions satisfying the renormalisation group equation. However, because there are no counter-terms, the system of effective actions $\\{\\Gamma ( P (0,T) , S_{CS} )\\}$ associated to the Chern-Simons action $S_{CS}$ is independent of the choice of renormalisation scheme.\n\n\nThe proof of this theorem will follow closely the results of Kontsevich \\cite{Kon94, Kon03a}. 
In particular, we make use of the compactifications of configuration spaces used in these papers, and we rely on a certain vanishing theorem proved by Kontsevich in \\cite{Kon94} when $n \\ge 3$, and in \\cite{Kon03a} when $n = 2$. \n\n\nLet $\\operatorname{Conf}_m(\\mbb R^n)$ denote the space of $m$ distinct ordered points in $\\mbb R^n$. There is a partial compactification $\\overline{\\operatorname{Conf}}_m(\\mbb R^n)$ of $\\operatorname{Conf}_m(\\mbb R^n)$ with the following properties. \n\\begin{enumerate}\n\\item\n$\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$ is the real blow-up along the diagonal of $\\mbb R^n \\times \\mbb R^n$.\n\\item\n$\\overline{\\operatorname{Conf}}_m(\\mbb R^n)$ admits a proper map to $\\mbb R^{nm}$ such that the diagram\n$$\n\\xymatrix{ \\operatorname{Conf}_m(\\mbb R^n) \\ar[r] \\ar[d] & \\overline{\\operatorname{Conf}}_m(\\mbb R^n) \\ar[dl] \\\\ \n\\mbb R^{nm} & }\n$$\ncommutes.\n\\item\nFor any $i,j$ with $1 \\le i,j \\le m$ and $i \\neq j$, there is a projection map\n$$\n\\pi_{i,j} : \\overline{\\operatorname{Conf}}_m(\\mbb R^n) \\to \\mbb R^{2n}\n$$\ngiven by forgetting all factors except $i$ and $j$. This map lifts to $\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$.\n\\end{enumerate}\nThe partial compactification we use was constructed in \\cite{Kon94,Kon03a}. This is the real version of the Fulton-MacPherson compactification of configuration spaces of algebraic varieties \\cite{FulMac94}. 
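\n\nAs an illustration (this explicit description is standard, and strictly speaking is not needed in what follows), in the case $m = 2$ the compactification admits simple polar coordinates. The map\n$$\n\\mbb R^n \\times [0,\\infty) \\times S^{n-1} \\to \\mbb R^n \\times \\mbb R^n , \\quad (c, r, \\theta ) \\mapsto ( c + \\tfrac{1}{2} r \\theta , c - \\tfrac{1}{2} r \\theta )\n$$\nrestricts to a diffeomorphism of $\\mbb R^n \\times (0,\\infty) \\times S^{n-1}$ onto $\\operatorname{Conf}_2(\\mbb R^n)$, and identifies $\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$ with $\\mbb R^n \\times [0,\\infty) \\times S^{n-1}$. The boundary stratum $r = 0$ is the unit sphere bundle of the diagonal. In these coordinates the map $(x,y) \\mapsto (x-y) \/ \\norm{x-y}$ is simply $(c,r,\\theta) \\mapsto \\theta$, so it visibly extends to the compactification. 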
\n\n\nLet\n$$\n\\mathscr{E} = \\Omega^\\ast(\\mbb R^n) \\otimes \\mathfrak g [1]\n$$\nbe the space of fields, and let\n$$\n\\mathscr{E}_c = \\Omega^\\ast_c(\\mbb R^n) \\otimes \\mathfrak g [1]\n$$\nbe the space of compactly supported fields.\n\nLet\n$$\nP(\\varepsilon,T ) = \\int_{\\varepsilon}^T \\d^\\ast K_t \\d t \\in \\mathscr{E} \\otimes \\mathscr{E} .\n$$\n\\begin{lemma}\nUp to an overall constant,\n$$\nP(\\varepsilon,T) = \\left ( \\int_{ \\norm{x - y}^2 \/ 4 T }^{\\norm{x - y}^2 \/ 4 \\varepsilon } u^{n\/2 - 1 } e^{- u } \\d u \\right) \\pi^\\ast \\operatorname{Vol}_{S^{n-1}} \\otimes I_{\\mathfrak g} \n$$\nwhere\n\\begin{enumerate}\n\\item\n$$\n\\pi : \\operatorname{Conf}_2(\\mbb R^n) \\to S^{n-1}\n$$\nis the projection \n$$(x,y) \\to (x - y) \/ \\norm{x - y} .$$\n\\item\n$ \\operatorname{Vol}_{S^{n-1}}$ is the standard volume form on $S^{n-1}$, given by the formula\n$$\n \\operatorname{Vol}_{S^{n-1}} = \\norm{z}^{-n} \\sum_{i = 1}^{n} (-1)^{i} z_i \\d z_1 \\cdots \\widehat{ \\d z_i } \\cdots \\d z_n .\n$$\n\\item\n$$\nI_{\\mathfrak g} \\in \\mathfrak g \\otimes \\mathfrak g\n$$\nis the tensor dual to the pairing on $\\mathfrak g$. \n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe proof is an explicit calculation. The sign conventions we are using imply that the heat kernel is\n$$\nK_t = C t^{-n\/2 } e^{- \\norm{x-y}^2 \/ 4t } \\d (x_1 - y_1) \\wedge \\cdots \\wedge \\d (x_n - y_n) \\otimes I_{\\mathfrak g} \n$$\nwhere $C$ is a certain normalising constant. 
\nIt follows that \n\\begin{multline*}\n\\d^\\ast K_t = C' t^{-n\/2 - 1 } e^{- \\norm{x-y}^2 \/ 4t } \\sum_{i = 1}^{n} (-1)^{i} (x_i - y_i) \\d (x_1 - y_1) \\cdots \\widehat{ \\d (x_i - y_i) } \\\\ \\cdots \\d (x_n - y_n ) \\otimes I_{\\mathfrak g} \n\\end{multline*}\nwhere $C'$ is a constant.\nNow, \n$$\n\\int_\\varepsilon^T t^{-n\/2 - 1 } e^{- \\norm{x-y}^2 \/ 4t } \\d t = C'' \\norm{x - y}^{-n} \\int_{ \\norm{x - y}^2 \/ 4 T }^{\\norm{x - y}^2 \/ 4 \\varepsilon } u^{n\/2 - 1 } e^{- u } \\d u \n$$\nfor some constant $C''$. This follows from the change of variables $u = \\norm{x-y}^2 \/ 4t$. \n\\end{proof} \n\\begin{corollary}\nThe form $P(0,T) \\in \\Omega^\\ast(\\operatorname{Conf}_2(\\mbb R^n) ) \\otimes \\mathfrak g^{\\otimes 2}$ extends to a smooth form in $ \\Omega^\\ast(\\overline{\\operatorname{Conf}}_2(\\mbb R^n)) \\otimes \\mathfrak g^{\\otimes 2} $. \n\\end{corollary}\n\\begin{proof}\nThe map $\\pi : \\operatorname{Conf}_2(\\mbb R^n) \\to S^{n-1}$ extends to a map $\\overline{\\operatorname{Conf}}_2(\\mbb R^n) \\to S^{n-1}$, which implies that the form $ \\pi^\\ast \\operatorname{Vol}_{S^{n-1}}$ extends to a smooth form on $\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$. It remains to show that the $\\varepsilon \\to 0$ limit of $\\int_{ \\norm{x - y}^2 \/ 4 T }^{\\norm{x - y}^2 \/ 4 \\varepsilon } u^{n\/2 - 1 } e^{- u } \\d u$ is a smooth function on $\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$. Note that this $\\varepsilon \\to 0$ limit is the incomplete gamma function\n$$\n\\int_{ \\norm{x - y}^2 \/ 4 T }^{\\infty } u^{n\/2 - 1 } e^{- u } \\d u = \\Gamma (n\/2, \\norm{x - y}^2 \/ 4 T ) .\n$$\n The function $t \\to \\Gamma(n\/2, t^2)$ is smooth, and the function $\\norm{x - y} : \\overline{\\operatorname{Conf}}_2(\\mbb R^n) \\to \\mbb R$ is smooth, which implies that $\\Gamma( n\/2, \\norm{x- y}^2 \/ 4T)$ is smooth. 
\n\\end{proof} \n\n\\begin{lemma}\nThe differential of $P(0,T)$ on $\\overline{\\operatorname{Conf}}_2 (\\mbb R^n)$ is $-K_T$.\n\\end{lemma}\n\\begin{proof}\nIf we think of $P(0,T)$ as a de Rham current on $\\mbb R^n \\times \\mbb R^n$, we know its differential is the delta current on the diagonal, minus $K_T$. Since $P(0,T)$ is a smooth form on $\\overline{\\operatorname{Conf}}_2(\\mbb R^n)$, its differential must be smooth, and is determined by its restriction to $\\operatorname{Conf}_2(\\mbb R^n)$, where it is equal to $-K_T$. \n\\end{proof} \n\nAs before, we can attempt to define\n$$\n\\Gamma( P(\\varepsilon,T), S_{CS} ) \n$$\nas an element of $\\mscr O(\\mathscr{E}_c, \\mathbb C[[\\hbar]])$, the space of functionals on the compactly supported fields. It is not completely obvious that this is well defined. However,\n\\begin{proposition}\nFix $0 < T < \\infty$ and $i \\ge 0, k > 0$. \nThen $\\Gamma_{(i,k)}(P(\\varepsilon,T), S_{CS})$ is a well-defined continuous linear functional $\\mathscr{E}_c^{\\otimes k} \\to \\mathbb C$. Further, the limit $\\lim_{\\varepsilon \\to 0} \\Gamma_{(i,k)}(P(\\varepsilon,T), S_{CS})$ exists.\n\\label{prop cs convergence}\n\\end{proposition}\nThis proposition implies that all the counter-terms vanish. \n\\begin{proof}\nLet $\\alpha \\in \\mathscr{E}_c^{\\otimes k}$.\nStandard Feynman graph techniques allow one to write\n$$\n\\Gamma_{(i,k)}(P(\\varepsilon,T) , S_{CS} )(\\alpha) = \\sum_{\\gamma} \\frac{1}{\\Aut(\\gamma)} w_\\gamma(\\varepsilon, T, \\alpha)\n$$\nwhere the sum is over connected trivalent graphs $\\gamma$, whose first Betti number is $i$, with $k$ external edges. The weight $w_\\gamma$ attached to each graph is a certain integral; we will show that these integrals converge absolutely, even when we set $\\varepsilon =0$. \n\nLet $\\gamma$ be such a trivalent graph, and let $V(\\gamma), E(\\gamma)$ denote the sets of vertices and edges of $\\gamma$. 
Vertices are all trivalent; the end points of the external edges are not considered vertices. Also, the external edges are not elements of $E(\\gamma)$.\n\nWe will define a differential form $\\omega_\\gamma(\\varepsilon, T, \\alpha)$ on the space $\\overline{\\operatorname{Conf}}_{V(\\gamma)}(\\mbb R^n)$. I'll ignore signs, as we only want to show convergence. (Technically, the form $\\omega_\\gamma$ is associated to a trivalent graph with a certain orientation). \n\nOnly graphs with no loops (i.e.\\ edges joining a vertex to itself) appear in the sum. The weights of graphs with loops vanish, because the propagator $P(\\varepsilon,T)$ (which is a form on $\\mbb R^n \\times \\mbb R^n$) is zero when restricted to the diagonal. \n\nFor each edge $e \\in E(\\gamma)$, let\n$$\n\\pi_e : \\overline{\\operatorname{Conf}}_{V(\\gamma)}(\\mbb R^n) \\to \\overline{\\operatorname{Conf}}_2(\\mbb R^n)\n$$\nbe the projection corresponding to the two vertices attached to $e$. \n\nLet \n$$\\phi : \\overline{\\operatorname{Conf}}_{V(\\gamma)}(\\mbb R^n) \\to \\mbb R^{n k}$$\nbe the projection corresponding to the $k$ vertices of $\\gamma$ which are attached to external edges. \n\nThe form attached to the graph is\n$$\n\\omega_\\gamma (\\varepsilon,T , \\alpha) = \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\wedge_{e \\in E(\\gamma)} \\pi_e^\\ast P(\\varepsilon,T) \\right).\n$$\nLet me explain this notation. The expression inside the bracket is an element \n$$ \\phi^\\ast \\alpha \\wedge_{e \\in E(\\gamma)} \\pi_e^\\ast P(\\varepsilon,T) \\in \\Omega^\\ast(\\overline{\\operatorname{Conf}}_{V(\\gamma)} (\\mbb R^n) ) \\otimes \\mathfrak g^{\\otimes H(\\gamma)}, $$ where $H(\\gamma)$ is the set of half-edges (or germs of edges) of $\\gamma$. Half-edges include external edges. Let $H(v)$ denote the set of $3$ half-edges at a vertex $v$. 
For each $v$, there is a trace map \n$$\n\\Tr_v^{\\mathfrak g} : \\mathfrak g^{\\otimes H(v) } \\to \\mathbb C\n$$\ndefined by $X \\otimes Y \\otimes Z \\mapsto \\ip{X, [Y,Z]}$. Thus, \n$$\n\\otimes_{v \\in V(\\gamma)} \\Tr_v^{\\mathfrak g} : \\mathfrak g^{\\otimes H(\\gamma)} \\to \\mathbb C \n$$\nso that\n$$\n\\omega_\\gamma (\\varepsilon,T , \\alpha) \\in \\Omega^\\ast(\\overline{\\operatorname{Conf}}_{V(\\gamma)} (\\mbb R^n) ) .\n$$\n\nThe weight attached to $\\gamma$ is\n$$\nw_\\gamma(\\varepsilon,T, \\alpha) = \\int_{\\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\omega_\\gamma(\\varepsilon,T,\\alpha).\n$$\nWe have to show that this integral converges absolutely, for all $\\varepsilon \\ge 0$ and all $0 < T < \\infty$. \n\nThe only problems that could occur are when some of the points in the configuration space go to $\\infty$. However, we are putting a compactly supported form at the vertices which are attached to external edges, and (since $k > 0$) there is at least one such vertex. If the two points in $\\mbb R^{n}$ attached to the end points of an edge $e$ become very far apart, then $\\pi_e^\\ast P(\\varepsilon,T)$ decays exponentially. If the point in $\\mbb R^n$ attached to one of the vertices goes to $\\infty$, and another point is constrained to lie in a compact set because of the compactly supported form, then we must have an edge $e$ whose end points are far apart. Thus, if any of the points in $\\mbb R^n$ attached to vertices tend to $\\infty$, the integrand decays exponentially. This implies that the integral converges absolutely.\n\\end{proof} \nThis proposition implies that\n$$\n\\Gamma(P(0,T), S_{CS} ) \\in \\mscr O( \\mathscr{E}_c, \\mathbb C[[\\hbar]] ) \/ \\mathbb C[[\\hbar]] \n$$\nis well-defined. Note that we are working modulo the ring of constants. \n\nTo prove our result, it remains to show that the appropriate quantum master equation is satisfied. 
\n\\begin{proposition}\n$$\n(\\d_{DR} +\\hbar \\Delta_T ) \\exp ( \\Gamma(P(0,T), S_{CS} ) \/ \\hbar ) = 0\n$$\nmodulo constants.\n\\end{proposition}\n\\begin{proof}\nWe can re-express the quantum master equation as \n$$\n\\d_{DR} \\Gamma(P(0,T), S_{CS} ) + \\{\\Gamma(P(0,T), S_{CS} ), \\Gamma(P(0,T), S_{CS} )\\}_T +\\hbar \\Delta_T \\Gamma(P(0,T), S_{CS} ) = 0\n$$\nwhere $\\{\\cdot,\\cdot\\}_T$ is a certain bracket on the space of functionals. Up to sign, if $f, g \\in \\mscr O(\\mathscr{E})$ are functionals, and $K_T = \\sum \\phi' \\otimes \\phi''$, then\n$$\n\\{f,g\\}_T = \\sum \\frac{\\partial f }{\\partial \\phi'} \\frac{\\partial g }{\\partial \\phi''} .\n$$\nWorking modulo constants amounts to ignoring the terms $\\hbar \\Delta_T \\Gamma_{(i,2)}(P(0,T), S_{CS} )$ for any $i$, and $\\{ \\Gamma_{(i,1)}(P(0,T), S_{CS} ) , \\Gamma_{(j,1)}(P(0,T), S_{CS} ) \\}_T$ for any $i,j$. \n\nIn the proof of Proposition \\ref{prop cs convergence} we showed how to express $\\Gamma_{(i,k)}(P(0,T) , S_{CS} ) (\\alpha)$ as a sum over trivalent graphs, with $k$ external edges, and first Betti number $i$. The weight attached to each trivalent graph $\\gamma$ is\n$$\nw_\\gamma(0,T,\\alpha) = \\int_{\\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\bigwedge_{e \\in E(\\gamma)} \\pi_e^\\ast P(0,T) \\right)\n$$\nHere $\\alpha \\in \\mathscr{E}_c^{\\otimes k}$ is the input, which we put at the external edges.\n\nBy definition, \n$$\\d_{DR} \\Gamma_{(i,k)}(P(0,T), S_{CS})(\\alpha) = \\Gamma_{(i,k)}(P(0,T), S_{CS})(\\d_{DR} \\alpha).
$$\nStokes' theorem implies that\n\\begin{multline*}\nw_\\gamma(0,T, \\d_{DR} \\alpha) = - \\int_{\\partial \\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\bigwedge_{e \\in E(\\gamma)} \\pi_e^\\ast P(0,T) \\right) \\\\\n+ \\sum_{e \\in E(\\gamma)} \\pm \\int_{ \\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\wedge \\pi_e^\\ast (K_T) \\bigwedge_{e' \\neq e} \\pi_{e'}^\\ast P(0,T) \\right).\n\\end{multline*}\nIn the second line, we are using the fact that $\\d_{DR} P(0,T) = -K_T$. \n\nThe terms in the sum\n$$\n\\sum_{e \\in E(\\gamma)} \\pm \\int_{ \\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\wedge \\pi_e^\\ast (K_T) \\bigwedge_{e' \\neq e} \\pi_{e'}^\\ast P(0,T) \\right)\n$$\nwhere $e$ is a separating edge cancel with terms in the graphical expansion of $$ \\{\\Gamma(P(0,T), S_{CS} ), \\Gamma(P(0,T), S_{CS} )\\}_T .$$ The terms where $e$ is a non-separating edge cancel with terms in the graphical expansion of $\\hbar \\Delta_T \\Gamma(P(0,T), S_{CS} )$. \n\nThus, it remains to show that \n$$\n\\sum_{\\gamma} \\pm \\int_{\\partial \\overline{\\operatorname{Conf}}_{V(\\gamma)}} \\otimes_{v \\in V(\\gamma)} \\Tr^{\\mathfrak g}_v \\left( \\phi^\\ast \\alpha \\bigwedge_{e \\in E(\\gamma)} \\pi_e^\\ast P(0,T) \\right) = 0 . \n$$\nThis has been proved by Kontsevich in \\cite{Kon94} when $n \\ge 3$, and in \\cite{Kon03a} when $n = 2$. Indeed, Lemma 2.1 of \\cite{Kon94} implies that when $n \\ge 3$, the only boundary strata of $\\overline{\\operatorname{Conf}}_{V(\\gamma)}$ which could contribute are those where precisely two points collide. The first lemma in section 6 of \\cite{Kon03a} proves the same statement when $n = 2$.
The boundary strata where only two points collide are taken care of by the Jacobi identity.\n\\end{proof} \n\n\\section*{Appendix}\n\n\n\n\n\nThis appendix contains the proof of a generalised version of Theorem A. \n\\subsection{Statement of results}\nLet $E$ be a super vector bundle on a compact manifold $M$, whose space of global sections is denoted by $\\mathscr{E}$. As always, suppose $\\mathscr{E}$ has an odd symplectic structure. \nLet\n$$\nH_0 : \\mathscr{E} \\otimes C^{\\infty}(\\Delta^d) \\to \\mathscr{E} \\otimes C^{\\infty}(\\Delta^d) \n$$\nbe a smooth family of generalised Laplacians, parametrised by $\\Delta^d$. Thus, $H_0$ is a $C^{\\infty}(\\Delta^d)$ linear map, which is an order two differential operator with respect to $M$. The symbol of $H_0$ is a section\n$$\n\\sigma(H_0 ) \\in \\Gamma( T^\\ast M \\times \\Delta^d, \\End E) .\n$$\nThe statement that $H_0$ is a family of generalised Laplacians says that $\\sigma(H_0)$ is a smooth family of metrics on $T^\\ast M$, parametrised by $\\Delta^d$, times the identity in $\\End E$.\n\nLet \n$$\nH_1 : \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d) \\to \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d)\n$$\nbe an even $\\Omega^\\ast(\\Delta^d)$ linear map, which is a first order differential operator with respect to $M$. Then\n$$\nH = H_0 + H_1\n$$\nis a smooth family of generalised Laplacians parametrised by the super-manifold $\\Delta^d \\times \\mbb R^{0,d}$.
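A concrete example of a generalised Laplacian (with trivial parameter space, and on the non-compact manifold $\\mbb R$, so strictly outside the compact setting above; an illustration only) is the harmonic oscillator $H = -\\partial_x^2 + x^2$, whose heat kernel is given in closed form by Mehler's formula. The following sketch checks numerically that this kernel satisfies the semigroup property $\\int K_t(x,z) K_s(z,y) \\d z = K_{t+s}(x,y)$ and has the expected leading small $t$ behaviour $(4\\pi t)^{-1\/2} e^{-(x-y)^2\/4t}$:

```python
import numpy as np

def K(t, x, y):
    """Mehler's formula: the heat kernel of H = -d^2/dx^2 + x^2 on R."""
    s, c = np.sinh(2 * t), np.cosh(2 * t)
    return np.exp(-((x**2 + y**2) * c - 2 * x * y) / (2 * s)) / np.sqrt(2 * np.pi * s)

# semigroup property: integral over z of K_t(x,z) K_s(z,y) equals K_{t+s}(x,y)
z = np.linspace(-10.0, 10.0, 2001)
dz = z[1] - z[0]
lhs = float(np.sum(K(0.3, 0.4, z) * K(0.5, z, -0.7)) * dz)
rhs = float(K(0.8, 0.4, -0.7))

# leading small-t behaviour: K_t(x,y) ~ (4 pi t)^(-1/2) exp(-(x-y)^2 / (4 t))
t = 1e-3
ratio = float(K(t, 0.3, 0.5) * np.sqrt(4 * np.pi * t) * np.exp((0.3 - 0.5) ** 2 / (4 * t)))
```

The ratio to the flat-space kernel tends to $1$ as $t \to 0$, which is the $i = 0$ term of the asymptotic expansion recalled below.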
\n\nThe results of \\cite{BerGetVer92}, appendix to chapter 9, imply that there is a unique heat kernel\n$$K_t \\in \\mathscr{E} \\otimes \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d)$$ for the operator $H$.\n\n\n\nLet\n$$\nD : \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d) \\to \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d)\n$$\nbe any odd $\\Omega^\\ast(\\Delta^d)$ linear differential operator, commuting with $H$.\n\nAs before, let\n$$\nP(\\varepsilon,T) = \\int_\\varepsilon^T (D \\otimes 1) K_t \\d t \\in \\mathscr{E} \\otimes \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d) \n$$\nbe the propagator.\n\\begin{theorem}\nLet \n$$S \\in \\mscr O_l(\\mathscr{E}, \\Omega^\\ast(\\Delta^d) \\otimes \\mathscr{A} \\otimes \\mathbb C[[\\hbar]] )$$ be a function which, modulo $\\hbar$, is at least cubic. More explicitly, $S$ has a Taylor series expansion \n$$\nS = \\sum_{i,k \\ge 0} \\hbar^i S_{i,k} \\phi_{i,k}(\\varepsilon)\n$$\nwhere $S_{i,k} : \\mathscr{E} \\to \\Omega^\\ast(\\Delta^d)$ is a local functional, homogeneous of degree $k$, and $\\phi_{i,k}(\\varepsilon) \\in \\mathscr{A}$. \n\nThen we can form\n$$\n\\Gamma\\left( P (\\epsilon,T) , S \\right) = \\sum_{i\\ge 0,k \\ge 0} \\hbar^i \\Gamma_{i,k} (P (\\epsilon, T), S)\n$$\nas before. \n\n\\begin{enumerate}\n\\item\nThere exist functions $f_r \\in \\mathscr{A}$ and $\\Phi_r \\in \\mscr O(\\mathscr{E}, \\Omega^\\ast(\\Delta^d) \\otimes C^{\\infty}(0,\\infty))$, for $r \\in \\mathbb Z_{\\ge 0}$, such that there is a small $\\varepsilon$ asymptotic expansion \n$$\n\\Gamma_{i,k}(P(\\epsilon,T),S ) (e) \\simeq \\sum f_r(\\varepsilon) \\Phi_r (e,T)\n$$\nfor all $e \\in \\mathscr{E}$.\n\\item\nEach $\\Phi_r(e,T)$ has a small $T$ asymptotic expansion \n$$\n\\Phi_r(e,T) \\simeq \\sum \\Phi_{r,s} (e) g_s(T)\n$$\nwhere the functions $\\Phi_{r,s}$ are local functionals of $e$, that is, $$\\Phi_{r,s} \\in \\mscr O_l(\\mathscr{E},\\Omega^\\ast(\\Delta^d)).$$ The $g_s(T)$ are certain smooth functions of $T \\in (0,\\infty)$. 
\n\\item\nWe can view the coefficients $\\Phi_{r,s}$ as linear maps $\\mathscr{E}^{\\otimes k} \\to \\Omega^\\ast(\\Delta^d)$. Thus, we can speak of the germ of $\\Phi_{r,s}$ near a point $x$. This germ only depends on the germ of the operators $H,D, S$ near $x \\times \\Delta^d$. \n\\end{enumerate}\n\\end{theorem}\n\\begin{remark}\nAt one stage in the paper, we need a slight generalisation of this result, which involves a propagator $\\delta K_\\varepsilon$ where $\\delta$ is an odd parameter of square zero. The proof given below incorporates this case also. \n\\end{remark}\n\n\\begin{remark}\nRecall that $\\mathscr{A} \\subset C^{\\infty}((0,\\infty))$ is the sub-algebra spanned over $\\mathbb C$ by functions of $\\epsilon$ of the form\n$$\nf(\\epsilon ) = \\int_{U \\subset (\\epsilon,1)^n } \\frac{ F(t_1,\\ldots,t_n)^{1\/2} } { G(t_1,\\ldots, t_n)^{1\/2} } \\d t_1 \\cdots \\d t_n\n$$\nand functions of the form\n$$\nf(\\epsilon ) = \\int_{U \\subset (\\epsilon,1)^{n-1} } \\frac{ F(t_1, \\ldots,t_n = \\varepsilon)^{1\/2} } { G(t_1,\\ldots, t_n = \\varepsilon)^{1\/2} } \\d t_1 \\cdots \\d t_{n-1}\n$$\nwhere \n\\begin{enumerate}\n\\item\n$F, G \\in \\mathbb Z_{\\ge 0} [t_1,\\ldots, t_n] \\setminus \\{0\\}$; $n$ can take on any value. \n\\item\nthe region of integration $U$ is an open subset cut out by finitely many inequalities of the form $t_i^l > t_j$, for $l \\in \\mathbb Z$. \n\\end{enumerate}\n\n\\end{remark}\n\n\n\n\n\nLet me explain more precisely what I mean by small $\\varepsilon$ asymptotic expansion. Let $\\norm{\\cdot}_{l}^{\\Delta^d}$ be the $C^l$ norm on the space $\\Omega^\\ast(\\Delta^d)$, so that $\\norm{\\omega}_{l}^{\\Delta^d}$ is the supremum over $\\Delta^d$ of the sum of all order $\\le l$ derivatives of $\\omega$. \n\nLet us consider $ \\Gamma_{i,k}(P(\\varepsilon,T),S ) $ as a linear map $\\mathscr{E}^{\\otimes k} \\to \\Omega^\\ast(\\Delta^d) \\otimes C^{\\infty}( \\{ 0 < \\varepsilon < T \\} ) $.
The precise statement is that for all $R , l\\in \\mathbb Z_{\\ge 0}$ and compact subsets $K \\subset (0,\\infty)$, there exists $m \\in \\mathbb Z_{\\ge 0}$ such that \n$$\n\\sup_{T \\in K} \\norm{ \\Gamma_{i,k}(P(\\varepsilon,T),S ) (\\alpha) - \\sum_{r = 0}^R f_r(\\varepsilon) \\Phi_r (T,\\alpha) }_{l}^{\\Delta^d} < \\varepsilon^{R+1} \\norm{\\alpha}_m\n$$\nfor all $\\varepsilon$ sufficiently small. Here $\\norm{\\alpha}_m$ denotes the $C^m$ norm on the space $\\mathscr{E}^{\\otimes k}$. \n\nThe small $T$ asymptotic expansion in part (2) has a similar definition. \n\n\n\n\n\n\n\n\\subsection{Expressions in terms of integrals attached to graphs}\n\n\n\nStandard Feynman graph techniques allow one to represent each $\\Gamma_{i,k}(P(\\epsilon,T),S ) $ as a finite sum \n$$\n\\Gamma_{i,k}(P(\\epsilon,T),S ) (\\alpha) = \\sum_\\gamma \\frac{1}{\\abs{\\operatorname{Aut} \\gamma}} w_\\gamma (\\alpha)\n$$\nwhere $\\alpha \\in \\mathscr{E}^{\\otimes k}$, and the sum is over certain graphs $\\gamma$. The graphs $\\gamma$ that appear in the sum are connected graphs, with $k$ legs (or external lines). Each vertex is assigned a degree in $\\mathbb Z_{\\ge 0}$, corresponding to the power of $\\hbar$ attached to the vertex. Degree zero vertices are at least trivalent, and the sum of the degrees of the vertices, plus the first Betti number of the graph, must be equal to $i$. There are a finite number of such graphs. To each graph is attached a certain integral, whose value is $w_\\gamma(\\alpha)$. \n\nI won't describe in detail the formula for the particular graph integrals appearing in the expansion of $\\Gamma_{i,k}(P(\\varepsilon,T),S)$. Instead, I will describe a very general class of graph integrals, which include those appearing in the sum above. The theorem will be proved for this general class of graph integrals. \n\n\n\n\n\n\n\\subsection{Asymptotics of graph integrals}\n\n\\begin{definition}\nA labelled graph is a connected graph $\\gamma$, with some number of legs (or external edges).
\n\nFor each vertex $v$ of $\\gamma$, let $H(v)$ denote the set of half edges (or germs of edges) emanating from $v$; a leg attached at $v$ counts as a half edge.\n\nAlso, $\\gamma$ has an ordering on the sets of vertices, edges, legs and on the set of half edges attached to each vertex. \n\nEach vertex $v$ of $\\gamma$ is labelled by an element\n$$S_v \\in \\operatorname{PolyDiff}( \\mathscr{E}^{\\otimes H(v)}, \\operatorname{Dens}(M) ) \\otimes \\Omega^\\ast(\\Delta^d).$$\n$S_v$ is a smooth family of polydifferential operators parametrised by $\\Delta^d$. Let $O(v)$ denote the order of $S_v$.\n\\end{definition}\nLet $L(\\gamma)$ denote the set of legs of $\\gamma$.\nFix $\\alpha \\in \\mathscr{E}^{\\otimes L(\\gamma)}$, and fix $t_e \\in (0,\\infty)$ for each $e \\in E(\\gamma)$. Define a function $f_\\gamma (t_e, \\alpha)$ as follows.\n\nLet $H(\\gamma)$ denote the set of half edges of $\\gamma$, so $H(\\gamma) = \\cup_{v \\in V(\\gamma)} H(v)$. By putting $K_{t_e}$ at each edge of $\\gamma$, and $\\alpha$ at the legs, we get an element\n$$\n\\alpha \\otimes_{e \\in E(\\gamma)} K_{t_e} \\in \\mathscr{E}^{\\otimes H(\\gamma)} .\n$$\nOn the other hand, the polydifferential operators $S_v$ at the vertices define a map\n$$\n\\int_{M^{V(\\gamma) } } \\otimes S_v : \\mathscr{E}^{\\otimes H(\\gamma) } \\to \\operatorname{Dens}(M)^{\\otimes V(\\gamma) } \\otimes \\Omega^\\ast(\\Delta^d) \\xrightarrow{\\int_{M^{V(\\gamma) } } } \\Omega^\\ast(\\Delta^d).\n$$\nLet \n$$\nf_\\gamma(t_e,\\alpha) = \\int_{M^{V(\\gamma) } } \\otimes S_v \\left( \\alpha \\otimes_{e \\in E(\\gamma)} K_{t_e} \\right) \\in \\Omega^\\ast(\\Delta^d).\n$$\n\n\nThe graph integrals $w_\\gamma$ which appear in the expansion of $\\Gamma_{i,k}(P(\\varepsilon,T), S)$ can be realised as finite sums of functions of the form \n$$f_\\gamma (t_e, \\alpha )$$\nfor certain local functionals $S_v$ at the vertices. Terms in the sum can be multiplied by elements of $ \\mathscr{A}$.
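As an illustration of these graph integrals (a toy model only: $M = \\mbb R$ rather than a compact manifold, with scalar heat kernels, evaluation at the vertices, and a Gaussian input standing in for a compactly supported one), take the graph with two vertices joined by two edges and one leg at each vertex. The sketch below computes $f_\\gamma(t_1,t_2,\\alpha)$ numerically and compares it with its leading small $t$ approximation $\\tfrac{1}{2}(\\pi(t_1+t_2))^{-1\/2} \\int \\alpha^2$, whose coefficient already exhibits the $F^{1\/2}\/G^{1\/2}$ shape of the functions in $\\mathscr{A}$:

```python
import numpy as np

def K(t, u):
    """Scalar heat kernel on R at separation u."""
    return np.exp(-u**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
alpha = np.exp(-x**2)          # Gaussian stand-in for a compactly supported input

# gamma: two vertices joined by edges e1, e2, with one leg at each vertex
t1 = t2 = 1e-3
f_gamma = float(np.sum(K(t1, X - Y) * K(t2, X - Y) * np.outer(alpha, alpha)) * dx**2)

# leading small-t term: (2 sqrt(pi (t1 + t2)))^{-1} times the integral of alpha^2
leading = float(np.sum(alpha**2) * dx / (2 * np.sqrt(np.pi * (t1 + t2))))
```

As $t_1, t_2 \to 0$ the graph integral localises onto the diagonal, leaving a local functional of $\alpha$ multiplied by a function of the $t_e$; this is the mechanism the theorem below makes precise.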
\n\n\n\n\\begin{theorem}\n\\label{graph integral expansion}\nThe integral\n$$\nF_\\gamma(\\epsilon, T , \\alpha ) = \\int_{t_e \\in (\\epsilon,T)^{E(\\gamma)} } f_\\gamma (t_e, \\alpha ) \\prod \\d t_e\n$$\nhas an asymptotic expansion as $\\epsilon \\to 0$ of the form\n$$\nF_\\gamma(\\epsilon, T , \\alpha ) \\simeq \\sum f_r(\\varepsilon) \\Psi_r( T,\\alpha)\n$$\nwhere $f_r \\in \\mathscr{A}$, and the $\\Psi_r$ are continuous linear maps \n$$\n\\mathscr{E}^{\\otimes L(\\gamma ) } \\to C^{\\infty}((0,\\infty) ) \\otimes \\Omega^\\ast(\\Delta^d) \n$$\nwhere $T$ is the coordinate on $(0,\\infty)$.\n\nFurther, each $\\Psi_r(T,\\alpha)$ has a small $T$ asymptotic expansion\n$$\n\\Psi_r(T,\\alpha) \\simeq \\sum g_r(T) \\int_M \\Phi_{r,k} (\\alpha)\n$$\nwhere \n$$\n\\Phi_{r,k} \\in \\operatorname{PolyDiff}( \\mathscr{E}^{\\otimes L(\\gamma)}, \\dens(M) ) \\otimes \\Omega^\\ast(\\Delta^d). \n$$\nand $g_r$ are smooth functions of $T \\in (0,\\infty)$. \n\\end{theorem}\nThe phrase ``asymptotic expansion'' is to be interpreted in the same sense as before. \n\n\\begin{remark}\n\n\n\nThere is a variant of this result, which incorporates a propagator of the form $P(\\varepsilon,T) + \\delta K_\\varepsilon$, where $\\delta$ is an odd parameter of square zero. In this case, the graphs we use may have one special edge, on which we put $K_\\varepsilon$ instead of $K_{t_e}$. Thus, instead of integrating over the parameter $t_e$ for this edge, we specialise to $\\varepsilon$. \n\\end{remark}\n\\subsection{Asymptotics of heat kernels}\nThe proof is based on the asymptotic expansion of the heat kernels of generalised Laplacians, proved in \\cite{BerGetVer92}. The operator\n$$\nH : \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d) \\to \\mathscr{E} \\otimes \\Omega^\\ast(\\Delta^d)\n$$\nis a smooth family of generalised Laplacians parametrised by the supermanifold with corners $\\Delta^d \\times \\mbb R^{0,d}$. 
A generalised Laplacian on the vector bundle $E$ on $M$ (whose space of global sections is $\\mathscr{E}$) is specified by a metric on $M$, a connection on $E$ and a potential $F \\in \\Gamma(M, \\End(E))$. A smooth family of generalised Laplacians is a family where this data varies smoothly; this is equivalent to saying that the operator varies smoothly. We are dealing with a smooth family parametrised by the super-manifold $\\Delta^d \\times\\mbb R^{0,d}$. The metric on $M$ will be independent of the odd coordinates $\\mbb R^{0,d}$, but the parameters for the connection and the potential will both involve Grassmann variables. (I'm very grateful to Ezra Getzler for explaining this point of view to me.)\n\nLet $x \\in M$. Let $U \\subset M \\times \\Delta^d$ denote the open subset of points $(y,\\sigma)$ where $d_\\sigma(x,y) < \\varepsilon$. Normal coordinates on $U$ give an isomorphism \n$$\nU \\cong B_\\varepsilon^n \\times \\Delta^d\n$$\nwhere $B_\\varepsilon^n$ is the ball of radius $\\varepsilon$ in $\\mbb R^n$, and $n = \\dim M$. \n\nThus, we get an isomorphism \n$$\nC^{\\infty}( U \\times \\mbb R^{0,d} ) = C^{\\infty}(U) \\otimes \\mathbb C [ \\d t_1 ,\\ldots, \\d t_d ] \\cong C^{\\infty}(B_\\varepsilon^n) \\otimes \\Omega^\\ast(\\Delta^d)\n$$ \nof $\\Omega^\\ast(\\Delta^d)$ algebras. \n\nThe vector bundle $E$ becomes a vector bundle (still called $E$) on $B_{\\varepsilon}^{n}$, and we find\n$$\n\\Gamma( U, E) \\otimes \\mathbb C[\\d t_1, \\ldots, \\d t_d ] \\cong \\Gamma(B_\\varepsilon^n, E ) \\otimes \\Omega^\\ast(\\Delta^d) . \n$$\nThe following is a variant of a result proved in \\cite{BerGetVer92}, following \\cite{MinPle49, BerGauMaz71,McKSin67}.
\n\\begin{theorem}\nThere exists a small $t$ asymptotic expansion of $K_t$ which, in these coordinates, is of the form\n$$\nK_t \\simeq t^{-\\operatorname{dim} M \/ 2} e^{-\\norm{x-y}^2\/t } \\sum_{i \\ge 0} t^i \\phi_i\n$$\nwhere $x,y$ denote coordinates on the two copies of $B_\\varepsilon^n$, and\n$$\n\\phi_i \\in \\Gamma(B_\\varepsilon^n,E) \\otimes \\Gamma(B_\\varepsilon^n,E) \\otimes \\Omega^\\ast(\\Delta^d).\n$$\nIf we denote\n$$\nK_t^N = t^{-\\operatorname{dim} M \/ 2} e^{-\\norm{x-y}^2\/t } \\sum_{i =0}^N t^i \\phi_i\n$$\nthen for all $l \\in \\mathbb Z_{\\ge 0}$\n$$\n\\norm{ K_t - K_t^N }_l = O(t^{N - \\dim M \/ 2 - l}).\n$$\nHere $\\norm{\\cdot}_l$ denotes the $C^l$ norm on the space $C^{\\infty}(B_\\varepsilon^n) \\otimes C^{\\infty}(B_\\varepsilon^n) \\otimes \\Omega^\\ast(\\Delta^d)$. \n\\end{theorem}\nThis precise statement is not proved in \\cite{BerGetVer92}, as they do not use Grassmann parameters. But, as Ezra Getzler explained to me, the proof in \\cite{BerGetVer92} goes through \\emph{mutatis mutandis}. \n\n\\subsection{Proof of Theorem \\ref{graph integral expansion}}\nIn this proof, I will often avoid mention of the parameter space $\\Delta^d \\times \\mbb R^{0,d}$. Thus, if $f$ is some expression which depends on this parameter space, I will often abuse notation and write $\\abs{ f }$ for the $C^l$ norm of $f$ as a function of the $\\Delta^d \\times \\mbb R^{0,d}$ variables, for some $l$. \n\nLet us enumerate the edges of $\\gamma$ as $e_1,\\ldots, e_k$. Let $t_i = t_{e_i}$, and let us consider the region where $t_1 > t_2 > \\cdots > t_k$. (Of course we have to consider all orderings of the edges of $\\gamma$).\n\nFor a function $I : E(\\gamma) = \\{1,\\ldots, k\\} \\to \\mathbb Z_{\\ge 0}$, let $\\abs{I} = \\sum I(i)$. Let $t^I = \\prod t_i^{I(i)}$. 
Similarly, if $n \\in \\mathbb Z$, let $t^n = \\prod t_i^n$.\n\nLet \n$$\nO(\\gamma) = \\sum_{v \\in V(\\gamma)} O(v) .\n$$\n\nFor $R > 1$ let \n$$\nA_{R,T} \\subset (0,T)^{E(\\gamma)}\n$$\nbe the region where $t_i^R < t_{j}$ for all $i,j$. This means that the $t_i$ are all of a similar size.\n\\begin{proposition}\n\\label{prop wicks lemma expansion}\nFor all $r \\ge 0$, there exist\n\\begin{enumerate}\n\\item\n $F_i,G_i \\in \\mathbb Z_{\\ge 0} [ t_1,\\ldots, t_k ] \\setminus \\{0\\} $ for $1 \\le i \\le c_r$,\n \\item\n polydifferential operators\n$$\\Psi_{i} \\in \\operatorname{Poly Diff} ( \\mathscr{E}^{\\otimes L(\\gamma)}, \\operatorname{Dens}(M) ) \\otimes \\Omega^\\ast(\\Delta^d) $$ \nfor $1 \\le i \\le c_r$,\n\\end{enumerate}\nsuch that\n$$\n\\abs{ f_\\gamma(t_1,\\ldots, t_k) - \\sum_{i = 1}^{c_r} \\frac{ F_i (t_1,\\ldots, t_k )^{1\/2}} { G_i (t_1,\\ldots, t_k)^{1\/2} } \\int_M \\Psi_i (\\alpha ) } \\le \\norm{\\alpha}_{r + 1 - \\chi(\\gamma) \\dim M \/ 2 + O(\\gamma)(\\abs{E(\\gamma)} + 1) } t_1^{r+1} \n$$\nfor all $\\{t_1,\\ldots, t_k\\} \\in A_{R,T}$ with $t_1 > t_2 > \\cdots > t_k$ and $t_1$ sufficiently small. In this expression, $\\chi(\\gamma)$ is the Euler characteristic of the graph.\n\\end{proposition}\n\n\\begin{proof}\nAs above, let\n$$\nK^N_{t}(x,y) = t^{-\\operatorname{dim} M \/ 2} \\Psi(x,y) e^{-\\norm{x-y}^2\/t } \\sum_{i = 0}^N t^i \\phi_i(x,y)\n$$\nbe the approximation to the heat kernel to order $t^N$. This expression is written in normal coordinates near the diagonal in $M^2$. The $\\phi_i(x,y)$ are sections of $E \\boxtimes E$ defined near the diagonal on $M^2$. $\\Psi(x,y)$ is a cut-off function, which is $1$ when $\\norm{x-y} < \\epsilon$ and $0$ when $\\norm{x-y} > 2\\epsilon$. \n\n\n\nWe have the bound\n$$\n\\norm{ K_{t}(x,y) - K^N_{t}(x,y) }_l = O ( t^{N - \\operatorname{dim} M \/2 - l } ) .\n$$\n\nThe first step is to replace each $K_{t_i}$ by $K^{N}_{t_i}$ on each edge of the graph. 
Thus, let $f_\\gamma^N(t_i,\\alpha)$ be the function constructed like $f_\\gamma(t_i,\\alpha)$ except using $K_{t_i}^N$ in place of $K_{t_i}$. \n\nEach time we replace $K_{t_i}$ by $K_{t_i}^{N}$, we get a contribution of $t_i^{N - O(\\gamma) - \\operatorname{dim} M \/ 2}$ from the edge $e_i$, times the $O(\\gamma)$ norm of the contribution of the remaining edges, times $\\norm{\\alpha}_{O(\\gamma)}$. \n\nThe $O(\\gamma)$ norm of the contribution of the remaining edges is $\\prod_{j \\neq i} t_j^{ - O(\\gamma) - \\operatorname{dim} M \/ 2}$. We are thus left with the bound \n$$\n\\abs { f_\\gamma(t_i,\\alpha ) - f_\\gamma^N (t_i,\\alpha ) } < C t^{- O(\\gamma) - \\dim M \/ 2 } t_1^{N} \\norm{\\alpha}_{O(\\gamma) } \n$$\nwhere $C$ is a constant. (Recall our notation: $t^n$ denotes $\\prod t_i^n$). \n\nIn particular, if the $t_i$ are in $A_{R,T}$, we find that \n$$ \\abs { f_\\gamma(t_i, \\alpha ) - f_\\gamma^N(t_i,\\alpha) } < \\norm{\\alpha}_{O(\\gamma)} t_1^{N - \\abs{E(\\gamma)} ( O(\\gamma) + \\dim M \/ 2 ) R + 1 } $$\nif $t_i \\in A_{R,T}$ and $t_i$ are all sufficiently small.\n\n\n\n\nNext, we construct a small $t_i$ asymptotic expansion of $f_\\gamma^N(t_i,\\alpha)$. Recall that $f_\\gamma^N(t_i,\\alpha)$ is defined as an integral over a small neighbourhood of the small diagonal in $M^{V(\\gamma)}$. Let \n$$n = \\operatorname{dim} M .$$ \nBy using a partition of unity we can consider $f_\\gamma^N(t_i,\\alpha)$ as an integral over a small neighbourhood of zero in $\\mbb R^{n V(\\gamma)}$. This allows us to express $f_\\gamma^N(t_i,\\alpha)$ as a finite sum of integrals over $\\mbb R^{n V(\\gamma)}$, of the following form.\n\nFor each vertex $v$ of $\\gamma$, we have a coordinate map $x_v : \\mbb R^{n \\abs{V(\\gamma)}} \\to \\mbb R^n$. Fix any $\\varepsilon > 0$, and let $\\chi : [0,\\infty) \\to [0,1]$ be a smooth function with $\\chi(x) = 1$ if $x < \\epsilon$, and $\\chi(x) = 0$ if $x > 2 \\epsilon$.
Let us define a cut-off function $\\psi$ on $\\mbb R^{n V(\\gamma)}$ by the formula\n$$\n\\psi = \\chi ( \\norm{\\sum x_v}^2 ) \\chi \\left( \\sum_{v' \\in V(\\gamma)} \\norm{ x_{v'} - \\abs{V(\\gamma)}^{-1} \\sum_{v \\in V(\\gamma)} x_v }^2 \\right). \n$$\nThus, $\\psi$ is zero unless all points $x_v$ are near their centre of mass $\\abs{V(\\gamma)}^{-1}\\sum x_v$, and $\\psi$ is zero when this centre of mass is too far from the origin. \n\nFor each $1 \\le i \\le k$, let $Q_i$ be the quadratic form on $\\mbb R^{n \\abs{V(\\gamma)}}$ defined by\n$$\nQ_i ( x) = \n\\begin{cases} \n0 & \\text{ if the edge } e_i \\text{ is a loop, i.e.\\ is attached to only one vertex } \\\\\n\\norm{ x_{v_1} - x_{v_2} }^2 & \\text{ if } v_1, v_2 \\text{ are the vertices attached to the edge } e_i \n\\end{cases}\n$$\n\nWe can write $f_\\gamma^N$ as a finite sum of integrals of the form\n$$\n\\int_{\\mbb R^{n V(\\gamma)} } \\psi e^{- \\sum Q_i \/ t_i } \\sum_{ I, K} t^{ I - \\dim M \/ 2 - O(\\gamma) } \\Phi_{I,K} \\partial_{K,x} \\alpha.\n$$\nIn this expression, \n\\begin{itemize}\n\\item\nThe sum is over $I : E(\\gamma) \\to \\mathbb Z_{\\ge 0}$, with all $I(e) \\le N + O(\\gamma) + 1$, and multi-indices \n$K : V(\\gamma) \\times \\{1,\\ldots,n\\} \\to \\mathbb Z_{\\ge 0}$, with $\\sum K(v,i) \\le O(\\gamma)$. The notation $\\partial_{K,x}$ denotes \n$$\\partial_{K,x}= \\prod_{v \\in V(\\gamma), 1 \\le i \\le n} \\frac {\\partial^{K(v,i)}}{\\partial x_{v,i}^{K(v,i)} }.$$\n\nIn this notation, we are pretending (by trivialising the vector bundle $E$ on some small open sets in $M$) that $\\alpha$ is a function on $\\mbb R^{n V(\\gamma)}$. \n\\item\nThe $\\Phi_{I,K}$ are smooth functions on $\\mbb R^{n V(\\gamma)}$.\n\\end{itemize}\n\nNext, we will use Wick's lemma to compute the asymptotics of this integral. Let $c = \\left(1\/\\abs{V(\\gamma)} \\right)\\sum x_v$ be the centre of mass function $\\mbb R^{n V(\\gamma)}\\to \\mbb R^n$.
We can perform the integral in two steps, first integrating over the coordinates $y_v = x_v - c$, and then integrating over the variable $c$. (Of course, there are $\\abs{V(\\gamma)} - 1$ independent $y_v$ coordinates). The quadratic form $\\sum Q_i \/ t_i$ on $\\mbb R^{n V(\\gamma)}$ is non-degenerate on the subspace of $\\mbb R^{n V(\\gamma)}$ of vectors with a fixed centre of mass, for all $t_i \\in (0,\\infty)$. Thus, the integral over the variables $y_v$ can be approximated with the help of Wick's lemma. \n\n\nLet us order the set $V(\\gamma)$ of vertices as $v_1,v_2, \\ldots, v_m$. We will use the coordinates $y_1,\\ldots,y_{m-1}$, and $c$ on $\\mbb R^{n V(\\gamma)}$. Then $f^N_\\gamma$ is a finite sum of integrals of the form\n$$\n\\int_{w \\in \\mbb R^n} \\chi ( \\abs{w}^2 ) \\int_{y_1,\\ldots,y_{m-1} \\in \\mbb R^n} \\chi ( \\sum \\norm{y_v}^2 ) e^{- \\sum Q_i(y) \/ t_i} \\sum t^{I - O(\\gamma) - \\dim M \/ 2} \\Phi_{I,K} \\partial_{K, w,y_i} \\alpha.\n$$\nHere we are using the same notation as before, in these new coordinates. \n\nTo get an approximation to the inner integral, we take a Taylor expansion of the functions $\\alpha$ and $\\Phi_{I,K}$ around the point $\\{y_i = 0, w\\}$, only expanding in the variables $y_i$. We take the expansion to order $N$. We find, as an approximation to the inner integral, an expression of the form\n$$\n\\int_{y_1,\\ldots,y_{m-1} \\in \\mbb R^n} e^{- \\sum Q_i(y) \/ t_i} \\sum t^{I - \\dim M \/ 2 - O(\\gamma)} y^K c_{K,I,L} \\left( \\prod \\frac{\\partial^{L_i}} {\\partial y_i^{L_i}} \\frac{\\partial ^{L_w}} {\\partial w^{L_w}} \\alpha \\right)_{y_i = 0}\n$$\nwhere the sum is over a finite number of multi-indices $I,K,L$, and $c_{K,I,L}$ are constants.\n\nWe can calculate each such integral by Wick's lemma. The application of Wick's lemma involves inverting the quadratic form $\\sum Q_i(y) \/ t_i$.
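Wick's lemma itself is easy to test in a finite-dimensional model: the moments of the Gaussian measure with density $e^{-\\frac{1}{2} y^t A y}$ are polynomials in the entries of $A^{-1}$, summed over pairings. The following sketch (a pure illustration in two variables, not tied to any particular graph) checks the fourth moment $\\langle y_1^2 y_2^2 \\rangle = C_{11}C_{22} + 2 C_{12}^2$, with $C = A^{-1}$, against direct numerical integration:

```python
import numpy as np

A = np.array([[2.0, 0.6],
              [0.6, 1.5]])                  # positive-definite quadratic form
C = np.linalg.inv(A)                        # covariance appearing in Wick's lemma

y = np.linspace(-8.0, 8.0, 801)
dy = y[1] - y[0]
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
dens = np.exp(-0.5 * (A[0, 0] * Y1**2 + 2 * A[0, 1] * Y1 * Y2 + A[1, 1] * Y2**2))

Z = float(dens.sum() * dy**2)               # partition function: 2 pi / sqrt(det A)
moment = float((Y1**2 * Y2**2 * dens).sum() * dy**2) / Z
wick = C[0, 0] * C[1, 1] + 2 * C[0, 1] ** 2  # sum over the three pairings
```

The overall factor $(\\det A)^{-1\/2}$ visible in $Z$ is the source of the $P_\\gamma^{-1\/2}$ prefactor in the expansion below.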
Let $A = A(t_i)$ denote the matrix of the quadratic form $\\sum Q_i(y) \/ t_i$; this is a square matrix of size $(\\dim M) (\\abs{V(\\gamma)} - 1)$, whose entries are sums of $t_{i}^{-1}$. Note that $\\left( \\prod_{i = 1}^{k} t_i \\right) A$ has polynomial entries. \n\nLet \n$$\nP_\\gamma (t_i) = \\det \\left( \\left( \\prod_{i = 1}^{k}t_i \\right) A \\right). \n$$\nThis is the graph polynomial associated to $\\gamma$ (see \\cite{BloEsnKre06}). One important property of $P_\\gamma$ is that it is a sum of monomials, each with a non-negative integer coefficient. \n\nWe can write\n$$\nA^{-1} = P_\\gamma^{-1} B\n$$\nwhere the entries of $B$ are polynomial in the $t_i$. Note also that \n$$\n\\det A = P_\\gamma t^{-(\\dim M)(\\abs{V(\\gamma)} - 1) \/ 2 } .\n$$\n(The expansion from Wick's lemma gives an overall factor of $(\\det A)^{-1\/2}$).\n\nThus, we find, using Wick's lemma, an approximation of the form \n$$\nf^N_\\gamma(t_i) \\simeq P_\\gamma^{-1\/2} \\sum_{l \\ge 0, I: E(\\gamma) \\to \\tfrac{1}{2}\\mathbb Z_{\\ge 0}} P_\\gamma^{-l} t^{I - \\dim M \/ 2 - O(\\gamma) } \\int_M \\Psi_{l,I}(\\alpha)\n$$\nwhere the $\\Psi_{l,I}$ are polydifferential operators\n$$\n\\Psi_{l,I} : \\mathscr{E}^{\\otimes L(\\gamma)} \\to \\dens(M)\n$$\n and the sum is finite (i.e.\\ all but finitely many of the $\\Psi_{l,I}$ are zero). \n \nThis expansion is of the desired form; it remains to bound the error term.\n\\begin{lemma}\nThe error term in this expansion is bounded by $$t_1^{R (N+1 + (\\dim M) \\chi(\\gamma)\/2 - \\abs{E(\\gamma)} O(\\gamma) ) } \\norm{\\alpha}_{N + 1 + O(\\gamma)}$$ for $N$ sufficiently large and $t_1$ sufficiently small. Here $\\chi(\\gamma)$ is the Euler characteristic of the graph. \n\\end{lemma}\n\\begin{proof}\n\n\nThe error in this expansion arises from the error in the Taylor expansion of the functions $\\alpha, \\Phi_{I,K}$ around $0$, and from the fact that we are neglecting the cut-off function.
Thus, if $N$ is sufficiently large, the magnitude of the error in the expansion can be bounded by an expression of the form\n$$\nt^{- \\dim M \/ 2 - O(\\gamma) }\\int_{w \\in \\mbb R^n} \\chi(w) \\int_{y_1,\\ldots, y_{m-1} \\in \\mbb R^{n} } \\abs{ e^{-\\sum Q_i(y)\/t_i} \\sum_{K} \\phi_K \\partial_{K,w,y_j} \\alpha } . \n$$\nHere, the sum is over multi-indices $K$ for the variables $y_i,w$, each $\\phi_K$ is homogeneous of order $N+1$ as a function of the variables $y_i$, and $\\abs{K} \\le N+1+O(\\gamma)$, so we are differentiating $\\alpha$ at most this number of times.\n\nThis integral only decreases if we decrease each $t_i$. Since $t_i > t_1^{R}$, we find we can bound the integral by the corresponding integral using the quadratic form $\\sum Q_i(y) \/ t_1^{R}$. Thus, we find a bound of \n$$\nt_1^{- R \\abs{E(\\gamma)} ( \\dim M \/ 2 + O(\\gamma) )}\\int_{w \\in \\mbb R^n} \\chi(w) \\int_{y_1,\\ldots, y_{m-1} \\in \\mbb R^{n} } \\abs{ e^{-\\sum Q_i(y)\/t_1^R} \\sum_{K} \\phi_K \\partial_{K,w,y_j} \\alpha } . \n$$\nWick's lemma bounds this integral by\n$$\n\\operatorname{det}(\\sum Q_i(y)\/ t_1^{R})^{-1\/2} t_1^{R (N+1 - \\abs{E(\\gamma)}(\\dim M \/ 2 + O(\\gamma)) )} \\norm{\\alpha}_{N+1 + O(\\gamma) }\n$$\nAlso, \n$$\n\\operatorname{det}(\\sum Q_i(y)\/ t_1^{R})^{-1\/2} = C t_1^{ R\\operatorname{dim} M \\abs{V(\\gamma)} \/2 }\n$$\nwhere $C$ is a constant; combining these bounds yields the desired estimate. \n\\end{proof} \n\n\\end{proof} \n\nA \\emph{subgraph} $\\gamma'$ of a graph $\\gamma$ is given by the set of edges $E(\\gamma') \\subset E(\\gamma)$. The vertices of the subgraph $\\gamma'$ are the ones that adjoin edges in $E(\\gamma')$. The legs of $\\gamma'$ are the half-edges of $\\gamma$, which adjoin vertices of $\\gamma'$, but which are not part of any edge of $\\gamma'$.
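The graph polynomial $P_\\gamma$ used above is, up to normalisation conventions (the $\\dim M$ and centre-of-mass factors from the text are suppressed here), the first Symanzik polynomial of $\\gamma$, and the weighted matrix-tree theorem computes it: the determinant of the reduced graph Laplacian with edge weights $t_e^{-1}$, multiplied by $\\prod t_e$, equals the sum over spanning trees $T$ of $\\prod_{e \\notin T} t_e$. A sketch (illustration only) for the ``theta'' graph:

```python
import itertools
import numpy as np

# the "theta" graph: two vertices joined by three edges
edges = [(0, 1), (0, 1), (0, 1)]
nv = 2
t = [0.7, 1.3, 2.1]                         # edge parameters t_e

def via_determinant(edges, t, nv):
    """det of the reduced weighted Laplacian (weights 1/t_e), times prod t_e."""
    L = np.zeros((nv, nv))
    for (a, b), te in zip(edges, t):
        L[a, a] += 1 / te; L[b, b] += 1 / te
        L[a, b] -= 1 / te; L[b, a] -= 1 / te
    return float(np.linalg.det(L[1:, 1:]) * np.prod(t))

def via_spanning_trees(edges, t, nv):
    """Matrix-tree theorem: sum over spanning trees of prod_{e not in tree} t_e."""
    total = 0.0
    for tree in itertools.combinations(range(len(edges)), nv - 1):
        parent = list(range(nv))
        def root(i):
            while parent[i] != i:
                i = parent[i]
            return i
        acyclic = True
        for j in tree:
            a, b = root(edges[j][0]), root(edges[j][1])
            if a == b:
                acyclic = False
                break
            parent[a] = b
        if acyclic:                         # nv-1 acyclic edges on nv vertices: a spanning tree
            total += np.prod([t[j] for j in range(len(edges)) if j not in tree])
    return float(total)

p_det = via_determinant(edges, t, nv)
p_trees = via_spanning_trees(edges, t, nv)  # t1 t2 + t1 t3 + t2 t3 for the theta graph
```

In particular every monomial carries a non-negative integer coefficient, which is the positivity property of $P_\\gamma$ invoked above.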
\n\nLet us fix a proper subgraph $\\gamma'$, and let us enumerate the edges of $\\gamma$ as $e_1,\\ldots, e_k$, where $e_1,\\ldots, e_l \\in E(\\gamma) \\setminus E(\\gamma')$ and $e_{l+1},\\ldots, e_k \\in E(\\gamma')$. Let $t_i = t_{e_i}$. \n\n\nLet \n$$ \\phi_{ \\gamma,\\gamma' } (t_{l+1}, \\ldots, t_k ) = \\int_{ t_1,\\ldots, t_l \\in (t_{l+1}^{1\/R} , T) } f_\\gamma(t_1,\\ldots, t_k) \\d t_1 \\cdots \\d t_l $$\n\\begin{lemma}\nLet $R \\gg 0$ be sufficiently large. Then for all $r > 0$, there exist $m_r \\in \\mathbb Z_{\\ge 0}$, a finite number of $g_i \\in \\mathscr{A}$, $F_i, G_i \\in \\mathbb Z_{\\ge 0 }[t_{l+1},\\ldots,t_k]\\setminus\\{0\\}$, and continuous linear maps $\\Psi_i : \\mathscr{E}^{\\otimes L(\\gamma)} \\to \\Omega^\\ast(\\Delta^d) \\otimes C^{\\infty}((0,\\infty))$ such that \n$$\n\\abs{ \\phi_{\\gamma,\\gamma'} (t_{l+1},\\ldots, t_k ) - \\sum g_i(t_{l+1} )\\frac{ F_i (t_{l+1}, \\ldots, t_k )^{1\/2} } {G_i(t_{l+1}, \\ldots, t_k)^{1\/2} } \\Psi_i ( T,\\alpha) } < \\norm{\\alpha}_{m_r} t_{l+1}^{r+1}\n$$\nfor all $t_{l+1} > \\cdots > t_k > 0$ with $t_{l+1}^R < t_k$, and $t_{l+1}$ sufficiently small.\n\nFurther, the $\\Psi_i$ admit small $T$ asymptotic expansions \n$$\n\\Psi_i(T,\\alpha) \\simeq \\sum \\eta_{i,k}(T)\\int_M \\Phi_{i,k}(\\alpha)\n$$\nwhere \n$$\\Phi_{i,k} \\in \\operatorname{Poly Diff}(\\mathscr{E}^{\\otimes L(\\gamma)}, \\dens(M) ) \\otimes \\Omega^\\ast(\\Delta^d) $$\nand $\\eta_{i,k}(T)$ are smooth functions of $T$.\n\\label{lemma subgraph}\n\\end{lemma}\n\n\\begin{proof}\nWe can write \n$$f_\\gamma(t_1,\\ldots, t_k, \\alpha) = f_{\\gamma'}(t_{l+1}, \\ldots, t_k, \\alpha \\otimes K_{t_1} \\otimes \\cdots \\otimes K_{t_l}).
$$ The right hand side of this equation denotes the graph integral for $\\gamma'$ with inputs being tensor products of the heat kernels $K_{t_i}$, for $1 \\le i \\le l$, and $\\alpha$.\n\nThe starting point of the proof is Proposition \\ref{prop wicks lemma expansion} applied to each connected component of the graph $\\gamma'$. This proposition implies that we can approximate the contribution of each such connected component by a local functional applied to the legs of that connected component. If we approximate the contribution of each connected component in this way, we are left with a graph integral $\\Psi_{\\gamma\/\\gamma'}(t_1,\\ldots, t_l, \\alpha)$ for the quotient graph $\\gamma \/ \\gamma'$, with certain local functionals at the vertices of $\\gamma \/ \\gamma'$. \n\nMore formally, Proposition \\ref{prop wicks lemma expansion} implies that there exists a finite number of $F_i, G_i \\in \\mathbb Z_{\\ge 0}[t_{l+1},\\ldots,t_k]\\setminus\\{0\\}$, and $\\Psi^i_{\\gamma \/ \\gamma' } (t_1,\\ldots, t_l , \\alpha )$, which are graph integrals for $\\gamma \/ \\gamma'$, such that \n\\begin{multline*}\n\\abs{f_{\\gamma}(t_{1}, \\ldots, t_k, \\alpha) - \\sum \\frac{ F_i (t_{l+1}, \\ldots, t_k )^{1\/2} } {G_i(t_{l+1}, \\ldots, t_k)^{1\/2} } \\Psi^i_{\\gamma \/ \\gamma' } (t_1,\\ldots, t_l , \\alpha ) } \\\\ < t_{l+1}^{r+1} \\norm{\\alpha \\otimes K_{t_1} \\otimes \\cdots \\otimes K_{t_l}}_{r + 1 -\\chi(\\gamma) \\dim M \/ 2 + O(\\gamma)(\\abs{E(\\gamma)} + 1) } \n\\end{multline*}\nfor all $t_1,\\ldots,t_k$ with $t_{l+1} > \\ldots > t_k$, $t_{l+1}^R < t_k$, and $t_{l+1}$ sufficiently small.\n\nWe are only interested in the region where $t_i^R > t_{l+1}$ if $1 \\le i \\le l$. 
Note that \n\\begin{align*}\n\\norm{K_{t_i}}_{r + 1 -\\chi(\\gamma) \\dim M \/ 2 + O(\\gamma) (\\abs{E(\\gamma)}+1)} &= O(t_i^{(\\chi(\\gamma) -1)\\dim M\/2 - r -1 - O(\\gamma)(\\abs{E(\\gamma)}+1) } )\\\\ & = O( t_{l+1}^{R^{-1}\\left(( \\chi(\\gamma) -1)\\dim M\/2 - r -1 - O(\\gamma)(\\abs{E(\\gamma)}+1) \\right) } )\n\\end{align*}\nSince we can take $R$ as large as we like, this contribution is small, and we find \n\\begin{multline*}\n\\abs{f_{\\gamma}(t_{1}, \\ldots, t_k, \\alpha) - \\sum \\frac{ F_i (t_{l+1}, \\ldots, t_k )^{1\/2} } {G_i(t_{l+1}, \\ldots, t_k)^{1\/2} } \\Psi^i_{\\gamma \/ \\gamma' } (t_1,\\ldots, t_l , \\alpha ) } \\\\ < t_{l+1}^{r ( 1 - 1 \/ R ) } \\norm{\\alpha}_{m_r} \n\\end{multline*}\nfor some $m_r \\gg 0$. \n\nWe want to integrate over the variables $t_1,\\ldots, t_l$, in the region $(t_{l+1}^{1\/R} , T)$. The integral \n$$\n\\int_{t_1,\\ldots, t_l \\in (t_{l+1}^{1\/R}, T) } \\Psi^i_{\\gamma \/ \\gamma' } (t_1,\\ldots, t_l , \\alpha ) \\d t_1 \\cdots \\d t_l\n$$\nis approximated (by induction) using Theorem \\ref{graph integral expansion}, applied to the graph $\\gamma \/ \\gamma'$, with $t_{l+1}^{1\/R}$ playing the role of $\\varepsilon$. It is easy to see that this yields the desired approximation of $\\phi_{\\gamma , \\gamma ' }(t_{l+1},\\ldots, t_k,\\alpha)$. \n\\end{proof}\n\n\nLet $\\gamma' \\subset \\gamma$ be a subgraph. \nLet $A_{R,T}^{\\gamma'} \\subset (0,T)^{E(\\gamma)}$ be the open subset where \n\\begin{align*}\nt_e^R > t_{e'} & \\text{ if } e \\in E(\\gamma) \\setminus E(\\gamma') \\text{ and } e' \\in E(\\gamma') \\\\\nt_e^R < t_{e'} & \\text{ if } e,e' \\in E(\\gamma') .\n\\end{align*} \nThis means that the lengths of the edges of the subgraph $\\gamma'$ are all around the same size, and are all much smaller than the lengths of the other edges. \n\n\\begin{lemma}\nFix $R \\gg 0$, and $0 < T < \\infty$. 
Then the closures of the regions\n $A_{R^{2^k},T}^{\\gamma'} $, \nwhere $0 \\le k \\le \\abs{E(\\gamma)}$ and $\\gamma' \\subset \\gamma$ is non-empty, cover $(0,T)^{E(\\gamma)}$. (The regions $A_{R^{2^k},T}$ appear as $A_{R^{2^k}, T}^{\\gamma}$, where $\\gamma$ is considered as a subgraph of itself). \n\\end{lemma}\n\\begin{proof}\nLet $\\{t_e\\} \\in (0,T)^{E(\\gamma)}$. As before, label the elements of $E(\\gamma)$ by $\\{1,2,\\ldots, k \\}$, in such a way that $t_1 \\ge t_2 \\ge \\cdots \\ge t_{k}$. \n\nEither $t_j^R \\ge t_{k}$ for all $j < k$, or there is a smallest $i_1 < k$ such that $t_{i_1}^{R} \\le t_{k}$. In the first case, we are done, as then $\\{t_e\\} \\in \\overline{ A_{R,T}^{\\gamma'}} $ where $\\gamma'$ is the subgraph with the single edge corresponding to $t_{k}$. \n\nSuppose the second possibility holds. Then either for all $j < i_1$, $t_j^{R} \\ge t_{i_1}$, or there exists a smallest $i_2 < i_1$ with $t_{i_2}^R \\le t_{i_1}$. In the first case, we're done, as we are in $\\overline{ A_{R,T}^{\\gamma'} }$ where $\\gamma'$ is the subgraph whose edges correspond to $t_{i_1}, t_{i_1+1}, \\ldots, t_{k}$. \n\nAgain, let's suppose the second possibility holds. Then $t_{i_2}^{R^2} \\le t_{i_1}^R \\le t_{k}$. Either for all $j < i_2$, $t_j^{R^2} \\ge t_{i_2}$, and then we are in $\\overline{ A_{R^2,T}^{\\gamma'} }$ where $\\gamma'$ is the subgraph whose edges correspond to $t_{i_2}, t_{i_2+1} , \\ldots, t_k$.\n\nOtherwise, there is some smallest $i_3 < i_2$ with $t_{i_3}^{R^2} \\le t_{i_2}$. Then $t_{i_3}^{R^4} \\le t_{k}$. And so forth.\n\nWe eventually end up either finding ourselves in one of the regions $\\overline { A_{R^{2^k}, T}^{\\gamma'} }$, for some non-empty proper subgraph $\\gamma' \\subset \\gamma$, or we find some $i_{k+1} = 1$, so we are in $\\overline{ A_{R^{2^{k}}, T}^{\\gamma} } = \\overline{ A_{R^{2^{k}}, T} }$. 
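The scan in this proof is effectively an algorithm: repeatedly look for the first large gap among the ordered lengths, enlarging the candidate subgraph and squaring the gap parameter at each step. A minimal numerical sketch of that scan (the function name, the parameter values and the random sampling are illustrative, not from the text):

```python
import random

def find_region(ts, R):
    """Scan sorted lengths t_1 >= ... >= t_k (all in (0,1)) as in the proof:
    if ts[j]**(R**e) <= ts[i] for some j < i, the candidate subgraph grows
    and the gap parameter is squared.  Returns (e, i), meaning the point
    lies in the closure of A_{R**e,T}^{gamma'} where gamma' has the edges
    i, i+1, ..., k-1."""
    i_cur = len(ts) - 1   # start with the single smallest edge
    e, first = 1, True
    while True:
        gap = next((j for j in range(i_cur)
                    if ts[j] ** (R ** e) <= ts[i_cur]), None)
        if gap is None:
            return e, i_cur
        i_cur = gap
        if first:
            first = False
        else:
            e *= 2        # squaring step: exponents run R, R, R^2, R^4, ...

# every sampled point is covered by some region with exponent at most 2**k
random.seed(0)
for _ in range(200):
    ts = sorted((random.uniform(0.01, 0.99) for _ in range(5)), reverse=True)
    e, i = find_region(ts, R=4)
    assert e <= 2 ** 5 and 0 <= i < 5
```

The scan terminates because the index of the smallest candidate edge strictly decreases, exactly as in the "and so forth" step of the proof.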
\n\\end{proof} \n\\begin{definition}\nAn open subset $U$ of $(0,T)^{E(\\gamma)}$ is called \\emph{good} if it is a subset of some $A_{R,T}^{\\gamma'}$ which is cut out by a finite number of inequalities $t_e^{R^n} > t_{e'}$, where $n \\in \\mathbb Z_{\\ge 0}$ and both $e,e' \\in E(\\gamma')$. \n\\end{definition}\n\\begin{lemma}\nThe intersection of any two good regions is good. \n\\end{lemma}\nNow, we have seen that $(\\varepsilon,T)^{E(\\gamma)}$ is covered by the closures of a finite number of good regions. Thus, we can write \n$$F_\\gamma (\\varepsilon , T, \\alpha ) = \\int_{(\\varepsilon,T)^{E(\\gamma)} } f_\\gamma ( t_e ,\\alpha ) \\prod_{e \\in E(\\gamma)} \\d t_e $$ as an alternating sum of integrals of $f_\\gamma(t_e)$ over $U \\cap (\\varepsilon,T)^{E(\\gamma)}$, where $U$ is a good subset of $(0,T)^{E(\\gamma)}$. Hence, in order to understand the small $\\varepsilon$ asymptotic expansions of $F_\\gamma(\\varepsilon, T,\\alpha)$, it suffices to consider the integrals of $f_\\gamma$ over such regions.\n\n\n\\begin{lemma}\nLet us fix $R$ to be a (sufficiently large) integer.\n\nLet $U \\subset (0,T)^{E(\\gamma)}$ be a good subset. Then the integral\n$$\nF_{\\gamma,U}(\\varepsilon,T,\\alpha) = \\int_{U \\cap (\\varepsilon,T)^{E(\\gamma)} } f_\\gamma ( t_e ,\\alpha ) \\prod_{e \\in E(\\gamma)} \\d t_e\n$$\nadmits a small $\\varepsilon$ asymptotic expansion\n$$F_{\\gamma,U}(\\varepsilon,T,\\alpha) \\simeq \\sum \\phi_r(\\varepsilon) \\Psi_r( T,\\alpha)$$\nwhere $\\phi_r \\in \\mathscr{A}$ and $\\Psi_r : \\mathscr{E}^{\\otimes L(\\gamma)} \\to \\Omega^\\ast(\\Delta^d)\\otimes C^{\\infty}((0,T))$ are continuous maps.\n\nEach $\\Psi_r(T,\\alpha)$ admits a small $T$ asymptotic expansion\n$$\n\\Psi_r(T,\\alpha) \\simeq \\sum g_k(T)\\int_M \\Phi_{r,k}(\\alpha)\n$$\nwhere $g_k$ is a smooth function of $T$, and $\\Phi_{r,k} \\in \\operatorname{Poly Diff}(\\mathscr{E}^{\\otimes L(\\gamma)}, \\dens(M) ) \\otimes \\Omega^\\ast(\\Delta^d)$. 
\n\\end{lemma}\n\\begin{proof}\nThis follows from Proposition \\ref{prop wicks lemma expansion} and Lemma \\ref{lemma subgraph}. \n\nIndeed, we can assume (without loss of generality) that \n$$\nU \\subset \\{ t_1,\\ldots, t_k \\mid t_i^N > t_{l+1} \\text{ for all } 1 \\le i \\le l , \\quad t_{l+1} > t_{l+2} > \\cdots > t_k, \\quad t_{l+1}^N < t_k \\} \n$$\nis an open subset cut out by a finite number of inequalities of the form $t_i^{a} > t_j$, where $a \\in \\mathbb Z_{> 0}$ and $l+1 \\le i,j \\le k$.\n\nThen let\n$$\n\\eta(t_{l+1}, T, \\alpha) = \\int_{(t_1,\\ldots, t_k) \\in U} f_\\gamma( t_1,\\ldots, t_k , \\alpha) \\d t_1 \\cdots \\widehat{\\d t_{l+1} } \\cdots \\d t_k .\n$$\nHere, we're integrating over all variables except $t_{l+1}$.\n\nThen Proposition \\ref{prop wicks lemma expansion} (if $l = 0$) or Lemma \\ref{lemma subgraph} (if $l > 0$) imply that $\\eta(t_{l+1}, T,\\alpha)$ has a nice $t_{l+1}$ asymptotic expansion. More precisely, for all $r \\in \\mathbb Z_{> 0}$, there exist $m_r \\in \\mathbb Z_{> 0}$, and a finite number of functions $g_i(t_{l+1})$, $\\psi_i(T,\\alpha)$, such that\n\\begin{enumerate}\n\\item\n$$\\abs{ \\eta(t_{l+1}, T, \\alpha) - \\sum g_i(t_{l+1}) \\psi_i(T,\\alpha) } < t_{l+1}^{r+1} \\norm{\\alpha}_{m_r} $$\n\\item\nthe $\\psi_i$ are continuous linear maps $\\psi_i : \\mathscr{E}^{\\otimes L(\\gamma)} \\to \\Omega^\\ast(\\Delta^d) \\otimes C^{\\infty}((0,\\infty))$, which admit a small $T$ asymptotic expansion \n$$\n\\psi_i(T,\\alpha) \\sim \\sum \\zeta_{i,k}(T) \\int_M \\phi_{i,k}(\\alpha)\n$$ where \n$$\\phi_{i,k} \\in \\operatorname{Poly Diff}(\\mathscr{E}^{\\otimes L(\\gamma)}, \\dens(M) ) \\otimes \\Omega^\\ast(\\Delta^d) $$\nand $\\zeta_{i,k}(T)$ are smooth functions of $T$.\n\\item\n$$\n\\int_{\\varepsilon}^1 g_i(t_{l+1}) \\d t_{l+1} \\in \\mathscr{A}\n$$\n\\end{enumerate}\n(The third part follows from the particular form of the terms of the expansions proved in Proposition \\ref{prop wicks lemma expansion} and Lemma 
\\ref{lemma subgraph}.)\n\nNow, it remains to integrate out the variable $t_{l+1}$. Let\n$$\n\\eta_r (t_{l+1}, T, \\alpha) = \\sum g_i(t_{l+1}) \\psi_i(T,\\alpha)\n$$\nbe the approximation to order $t_{l+1}^{r+1}$ to $\\eta(t_{l+1}, T,\\alpha)$. \n\nThen\n$$\n\\abs{ \\int_\\varepsilon^T \\eta \\d t_{l+1} - \\left( \\int_0^T (\\eta - \\eta_r) \\d t_{l+1} + \\int_1^T \\eta_r \\d t_{l+1} + \\int_\\varepsilon^1 \\eta_r \\d t_{l+1} \\right) } < \\varepsilon^{r+2} \\norm{\\alpha}_{m_r} .\n$$\nThus, \n$$\\int_0^T (\\eta - \\eta_r) \\d t_{l+1} + \\int_1^T \\eta_r \\d t_{l+1} + \\int_\\varepsilon^1 \\eta_r \\d t_{l+1} $$\ngives the desired small $\\varepsilon$ asymptotic expansion. Note that the integral in the first term converges. The first two terms are independent of $\\varepsilon$, and the third term is in $\\mathscr{A} \\otimes \\Hom ( \\mathscr{E}^{\\otimes L(\\gamma)}, \\Omega^\\ast(\\Delta^d) \\otimes C^{\\infty}((0,\\infty)) )$, as desired. \n\nIt's easy to check that the small $T$ asymptotic expansion of the approximation above is in terms of local functionals of $\\alpha$, as required. \n\n\n\n\\end{proof} \nThis completes the proof of Theorem \\ref{graph integral expansion}. \n\nThe proof of the variant result, when we include a propagator of the form $P(\\varepsilon,T) + \\delta K_\\varepsilon$, is identical, except that instead of integrating over the smallest variable $t_k$, we specialise $t_k = \\varepsilon$. In our definition of the algebra $\\mathscr{A}$, we included functions of $\\varepsilon$ of the form\n$$\nf(\\varepsilon) = \\int_{U \\subset (\\varepsilon,1)^{k-1} } \\frac{ F(t_1,\\ldots,t_k = \\varepsilon)^{1\/2} } { G(t_1,\\ldots, t_k = \\varepsilon)^{1\/2} } \\d t_1 \\cdots \\d t_{k-1}\n$$\nwhere $F, G \\in \\mathbb Z_{\\ge 0}[ t_1, \\ldots, t_k]$ and $U$ is cut out by inequalities of the form $t_i^l > t_j$, for $i,j < k$. Functions of this kind arise when we have a $\\delta K_\\varepsilon$ term in the propagator. 
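The three-term rearrangement of the integral over $t_{l+1}$ can be sanity-checked numerically. A minimal sketch, assuming a toy integrand $\eta(t) = t^{-1/2} + t^2$ whose divergent small-$t$ part $\eta_r(t) = t^{-1/2}$ plays the role of the order-$r$ approximation (all names and parameter values are illustrative):

```python
import math

# toy integrand: eta(t) = t**-0.5 + t**2, with divergent small-t part eta_r
eta   = lambda t: t ** -0.5 + t ** 2
eta_r = lambda t: t ** -0.5

def midpoint(f, a, b, n=100000):
    # simple midpoint quadrature
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

T, eps = 2.0, 1e-3

lhs = midpoint(eta, eps, T)           # the eps-dependent integral

# the three-term rearrangement: only the last term depends on eps, and it
# lies in the algebra of allowed eps-functions (here it is 2 - 2*sqrt(eps))
rhs = (midpoint(lambda t: eta(t) - eta_r(t), 0.0, T)   # converges at t = 0
       + midpoint(eta_r, 1.0, T)
       + (2.0 - 2.0 * math.sqrt(eps)))                 # int_eps^1 t**-0.5 dt

assert abs(lhs - rhs) < 1e-5   # discrepancy is O(eps**3) + quadrature error
```

The first two terms of the right-hand side are independent of $\varepsilon$, so the entire $\varepsilon$ dependence sits in the last, explicitly computable term, mirroring the structure of the proof.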
\n\n\n\n\n\\def$'${$'$}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLight carrying OAM, which corresponds to beams with a helical twist of phase, has gained interest due to its applications in various fields of optics. Generally termed an `optical vortex' in classical optics, light having a phase singularity has been used in the generation of bright optical solitons \\cite{swartzlander1, tikhonenko}, the study of optical vortex coronagraphs \\cite{swartzlander2, berkhout} and the trapping of particles \\cite{gahagan}. The transverse intensity distribution of an optical vortex mode is radially symmetric, a feature that originates from the azimuthal dependence of the phase. Among such classes of phase-singular modes, Laguerre-Gaussian (LG) modes are most commonly used in practical applications due to their profile stability upon free-space propagation. Since the size of a normal optical vortex strongly depends on its topological charge \\cite{reddydivergence}, such beams have limitations in applications involving the transmission of OAM modes through optical fibers \\cite{li, gregg} for communication \\cite{willner}. Due to this, projective measurements based on the `phase-flattening' technique become difficult for higher-order orbital angular momentum (OAM) modes \\cite{qassim}. Bessel-Gaussian (BG) modes are another class of structured light modes carrying OAM, which arise from the non-diffracting solutions of the paraxial wave equation \\cite{gori1987bessel, mcgloin2005bessel}. The transverse field distribution of BG modes is described by a Bessel function \\cite{korenev2002bessel}. They exhibit a non-diffracting nature \\cite{durnin1987diffraction, cruz2012observation} as well as a self-healing property \\cite{litvin2009conical, mclaren2014self} upon propagation. In most practical cases, Bessel-Gaussian modes are generated using axicons \\cite{arlt2000generation} and spatial light modulators (SLM) \\cite{leach2006generation}. 
Fourier transformation of BG modes gives a class of size-invariant, ring-shaped modes carrying OAM, known as the perfect optical vortex (POV) \\cite{vaity2015}. The size of a POV mode does not change with its OAM value.\n\nThe concept of OAM has been well established in quantum information too. Access to higher dimensions in Hilbert space makes OAM suitable for encoding higher amounts of information per photon. SPDC is the most common workhorse for the generation of single photons carrying OAM \\cite{boydbook, mair}. The paired photons generated in SPDC are inherently correlated in their OAM values \\cite{walborn2004entanglement}. It has been experimentally verified that the amplitude, as well as the helical phase of an optical vortex pump, gets transferred to the SPDC photons \\cite{vicuna, anwar2018direct}. Based on the OAM selection rule in the SPDC process, single photons carrying OAM are generated in a `heralding' configuration by pumping an optical vortex beam into a nonlinear crystal and projecting one photon from the generated pair to a `zero-OAM' (Gaussian) mode \\cite{lal2020photon}. Single photons carrying OAM find potential applications in quantum information \\cite{erhard}, quantum gates \\cite{babazadeh}, quantum memories \\cite{nicolas}, etc. \n\nApart from Gaussian modes, OAM correlations among the SPDC photon-pairs have been explored using structured pump modes \\cite{romero}. Such systems can be used to generate versatile high-dimensional quantum states by carefully shaping a pump beam carrying additional spiral modes using different pump engineering techniques \\cite{liu2018coherent, kovlakov2018quantum, anwar2020selective}. It has also been shown that projecting the detected photon pairs onto BG modes improves the dimensionality of the generated OAM state. 
However, the potential of this class of modes for further enhancing heralded single-photon detection and the dimensionality of the quantum state is as yet unexplored. In this article, we discuss the use of POV modes as the pump in a parametric down-conversion process to generate heralded single photons carrying OAM. By exploiting the OAM independence of the size of POV modes, we show that the heralding efficiency of the single photons with a POV pump is higher than that with a NOV pump. We also show that a configuration with a POV pump and the generated SPDC photon-pairs projected onto BG modes gives improved fidelity compared with the configuration using an LG mode projection.\n\n\\section{Helical modes and their generation} \\label{gen-non-diffr-size-inv-modes}\nDepending on the transverse intensity distribution, there are three main classes of optical modes carrying an azimuthal phase. All these modes have a phase singularity at the center, where the intensity is zero. Here we introduce the different helical modes and an experimental method to generate them.\n\\begin{figure}[h]\n\t\\centering\n \\includegraphics[width=0.46\\textwidth]{POV-generation-scheme.pdf}\n\t\\caption{Experimental schemes to generate NOV, BGV and POV modes: (a) Conversion of a Gaussian beam to helical modes using a spiral phase plate, an axicon and a spherical lens. (b) Generation of helical modes by diffraction from a fork-grating hologram imprinted on an SLM.} \\label{pump_prep_POV}\n\\end{figure}\n\n\\subsection{Size-variant: Normal optical vortex (NOV)}\nA typical field distribution of a normal optical vortex (NOV) mode of topological charge $\\ell$ in polar coordinates $(r,\\theta)$ is given by\n\\begin{equation}\nE_{\\text{NOV}}^{\\ell}(r,\\theta)=\\sqrt{\\frac{2^{\\abs{\\ell}+1}}{\\pi w^2|\\ell|!}}\\left(\\frac{r}{w}\\right)^{|\\ell|}\\exp\\left(-\\frac{r^2}{w^2}\\right)\\exp(i\\ell\\theta),\n\\label{NOV}\n\\end{equation}\nwhere $w$ is the radius of the mode. As shown in Fig. 
\\ref{pumpmodes}, the size of NOV modes increases with OAM due to the OAM-dependent radial term $r^{\\abs{\\ell}}$ in the above expression.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{pumpmodes.pdf}\n \\caption{(a) Theoretical (top) and experimental (bottom) intensity distributions of NOV and POV beams of order varying from $\\ell=1$ to $25$. (b) and (c) compare the intensity distributions\nof a NOV and a POV with $\\ell=1$ and $\\ell=25$.}\\label{pumpmodes}\n\\end{figure}\n\n\\subsection{Size-invariant: Perfect optical vortex (POV)}\nA new class of optical vortex beam, termed the `perfect optical vortex', was introduced by Ostrovsky \\textit{et al.} to overcome the size effects of a normal vortex \\cite{ostrovskygeneration}. Conventionally, a POV of order $\\ell$ is formed as the Fourier transform of a Bessel-Gaussian vortex (BGV) of the same order \\cite{vaity2015}. The normalized expression for a BGV mode of azimuthal order $\\ell$ is written as\n\\begin{equation}\nE_{\\rm{BG}}^{\\ell}(r,\\theta)=\\sqrt{\\frac{2e^{\\sfrac{1}{4}}}{\\pi w^2I_\\ell(\\sfrac{1}{4})}}J_\\ell(k_rr)\\exp\\left(-\\frac{r^2}{w^2}\\right)\\exp(i\\ell\\theta).\n\\label{BG}\n\\end{equation}\nThe Fourier transform of this field gives that of a typical perfect optical vortex of order $\\ell$,\n\\begin{equation}\nE_{\\rm{POV}}^{\\ell}(r,\\theta)=i^{\\ell-1}\\frac{w_g}{w_o}\\exp(i\\ell\\theta)\\exp\\left(-\\frac{(r^2+r_r^2)}{w_o^2}\\right)I_\\ell\\left(\\frac{2r_rr}{w_o^2}\\right),\n\\label{POV-expression}\n\\end{equation}\nwhere $w_g$ is the waist radius of the initial Gaussian beam, $w_o=2f\/kw_g$ is half of the ring width and $r_r$ is the radius of the ring. Here $f$ is the focal length of the Fourier lens and $k$ is the magnitude of the wave-vector of the light beam. 
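The contrast between the two classes can be made quantitative. For the NOV, maximizing $|E_{\text{NOV}}^{\ell}|^2 \propto r^{2|\ell|}e^{-2r^2/w^2}$ over $r$ gives a ring radius $r = w\sqrt{|\ell|/2}$, which grows with the charge, whereas the POV ring radius $r_r$ above is independent of $\ell$. A short numerical sketch of the NOV scaling (beam waist and grid values are arbitrary):

```python
import math

w = 1.0  # beam waist (arbitrary units)

def nov_intensity(r, ell):
    # radial intensity of a NOV, up to normalization:
    # |E|^2 ~ (r/w)**(2|ell|) * exp(-2 r**2 / w**2)
    return (r / w) ** (2 * abs(ell)) * math.exp(-2.0 * r ** 2 / w ** 2)

def ring_radius(ell, n=20000, rmax=6.0):
    # locate the intensity maximum on a fine radial grid
    rs = [rmax * (i + 1) / n for i in range(n)]
    return max(rs, key=lambda r: nov_intensity(r, ell))

# the ring sits at r = w * sqrt(|ell|/2), so it grows with the charge
for ell in (1, 4, 9, 25):
    assert abs(ring_radius(ell) - w * math.sqrt(ell / 2.0)) < 1e-2

# e.g. the l = 25 ring is 5 times larger than the l = 1 ring
assert abs(ring_radius(25) / ring_radius(1) - 5.0) < 0.05
```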
\n\n\\subsection{Experimental generation of NOV and POV modes}\nFigure \\ref{pump_prep_POV}(a) shows a simple method to generate POV beams using a spiral phase plate (SPP), an axicon, and a lens. A Gaussian beam passing through an SPP of topological order $\\ell$ generates a NOV of OAM $\\ell$. This beam, after passing through the axicon, becomes a BGV, and Fourier transformation of the BGV using a lens generates a POV beam. Alternatively, these helical modes can also be generated from the diffraction of a laser beam by a fork-grating pattern imprinted onto an SLM, as shown in Fig. \\ref{pump_prep_POV}(b). The first diffracted order, imaged in the far-field plane, contains the desired NOV mode. A Gaussian beam incident on a fork-grating hologram obtained through interference converts into a BG mode in the plane immediately after the SLM, and the corresponding POV mode is generated at the far-field plane.\n\nIn Fig. \\ref{pumpmodes}(a), the experimental intensity distribution of a normal optical vortex pump is compared with that of a perfect optical vortex pump for different orders $\\ell$=1, 5, 10, 15, 20 and 25. For a NOV pump, \\eqref{NOV} shows an $\\ell$ dependence in the radial term, so the size of the OAM mode increases with $\\ell$. However, for a POV pump, the ring radius $r_r$ in \\eqref{POV-expression} is independent of $\\ell$, and thus the size of the mode remains the same for higher OAM values. The spatial profiles are shown in Fig. \\ref{pumpmodes}(b) and (c) for NOV and POV, respectively. From the intensity profiles, one can observe that the spatial profile changes considerably for a NOV, while it remains unchanged for POVs.\n\n\\section{Parametric down conversion of helical modes} \\label{PDC}\nFirst, we will discuss the theory of SPDC with NOV and POV pump beams. 
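Before turning to the theory, the SLM-based POV generation of the previous section can be mimicked numerically: a Gaussian beam carrying the spiral and axicon (conical) phases is Fourier transformed with an FFT, and the radius of the resulting ring is read off. A sketch under assumed, illustrative parameters (grid size, waist and cone constant are not from the experiment):

```python
import numpy as np

N, half = 512, 20.0                       # grid points, half-width (arb. units)
x = np.linspace(-half, half, N)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

def far_field_peak_radius(ell, k_r=12.0, w=4.0):
    # Gaussian envelope * spiral phase * conical (axicon) phase: a BG-type
    # field whose far field (FFT) approximates a perfect-vortex ring
    field = np.exp(-(r / w) ** 2) * np.exp(1j * (ell * theta - k_r * r))
    inten = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    iy, ix = np.unravel_index(np.argmax(inten), inten.shape)
    return float(np.hypot(ix - N // 2, iy - N // 2))  # ring radius in pixels

r1, r10 = far_field_peak_radius(1), far_field_peak_radius(10)
assert r1 > 20.0                # a bright ring well away from the axis
assert abs(r10 - r1) <= 3.0     # ring size nearly independent of the charge
```

The ring radius is set by the cone constant `k_r` (the axicon), not by the topological charge, which is the numerical counterpart of the size-invariance discussed above.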
A comparative study of the down-converted photons with these pump modes is important to understand their two-photon modal spectra.\n\n\\subsection{Theory} \\label{PDC:theory}\nIn the perturbative treatment of the spontaneous parametric down-conversion process, the interaction of the pump ($p$), signal ($s$) and idler ($i$) modes in a medium (a nonlinear $\\chi^{(2)}$ crystal) is represented by an interaction Hamiltonian $\\mathcal{H}_I$. The initial state is the vacuum state $\\vert0\\rangle_s\\vert0\\rangle_i$, so the output state of SPDC is approximated as\n\\begin{equation}\n\\vert\\Phi\\rangle\\approx\\left(1-\\frac{i}{\\hbar}\\int_{0}^{\\tau}\\mathcal{H}_I(t)dt\\right)\\vert0\\rangle_s\\vert0\\rangle_i.\n\\label{biphotonstate}\n\\end{equation}\nThe biphoton mode function of the generated twin photons in transverse momentum coordinates ($\\mathbf{k}$) is obtained as\n\\begin{equation}\n\\Phi(\\mathbf{k}^{\\perp})=\\langle\\mathbf{k}^{\\perp}_s\\vert\\langle-\\mathbf{k}^{\\perp}_i\n\\vert\\Phi\\rangle,\n\\label{biphotonmode}\n\\end{equation}\nwhere $\\mathbf{k}^{\\perp}_s$ and $-\\mathbf{k}^{\\perp}_i$ represent the transverse positions in momentum coordinates for the signal and idler, respectively. On simplification, the biphoton mode \nfunction in transverse momentum coordinates is given by\n\\begin{equation}\n\\Phi(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp},\\Delta k)=E_p(\\mathbf{k}_p^{\\perp})L\\text{sinc}\\left(\\dfrac{\\Delta kL}{2}\\right)\\exp\\left(i\\dfrac{\\Delta kL}{2}\\right),\n\\label{modefna}\n\\end{equation}\nwhere $E_p(\\mathbf{k}_p^{\\perp})$ represents the pump transverse amplitude distribution, $\\mathbf{k}_p^{\\perp} (= \\mathbf{k}^{\\perp}_s+\\mathbf{k}^{\\perp}_i)$ denotes the transverse momentum coordinates of the pump, $L$ is the thickness of the crystal, and the exponential factor in Eqn. \\ref{modefna} is a global phase term. 
$\\Delta k$ is the longitudinal phase mismatch, given by\n\\begin{equation}\n \\Delta k(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp}) = k_p^z(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp})-k_s^z(\\mathbf{k}_s^{\\perp})-k_i^z(\\mathbf{k}_i^{\\perp}).\n \\label{phase_mismatch}\n\\end{equation}\n\nConsider Type-0\/I SPDC using a $\\chi^{(2)}$ crystal of thickness $L$. The z-components of the wave-vectors of the interacting fields are given by\n\\begin{align}\nk_{p,s,i}^z(\\mathbf{k}_{p,s,i}^{\\perp})&=\\sqrt{k_{p,s,i}^2-\\vert \\mathbf{k}_{p,s,i}^{\\perp}\\vert^2}.\n\\end{align}\nHere $k_j=n_j\\omega_j\/c$ $(j=p,s,i)$ are the magnitudes of the wave-vectors of the fields, $n_j\\equiv n_j(\\omega_j)$ are the refractive indices, $\\omega_j$ are the frequencies of the pump, signal and idler respectively, and $c$ is the speed of light in vacuum. For near-collinear SPDC, the z-component of the wave-vector is approximated by $k-|\\mathbf{k}^{\\perp}|^2\/2k$. So, \\eqref{phase_mismatch} becomes\n\\begin{align}\n \\Delta k(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp}) = \\frac{|\\mathbf{k}_s^{\\perp}-\\mathbf{k}_i^{\\perp}|^2}{2k_p}.\n \\label{phase-mismatch-approx}\n\\end{align}\nIn the case of a quasi-phase-matched crystal, an additional term of $-2\\pi\/\\Lambda$ is added to the phase mismatch in \\eqref{phase-mismatch-approx}, where $\\Lambda$ is the grating period.\n\n\\subsection{Angular spectrum of SPDC} \\label{PDC:experiment}\nThe signal and idler photons generated in SPDC propagate in space according to the phase-matching condition. The locus of all points in space that satisfy this condition is an annular ring, on which the signal and idler photons of a pair appear. 
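The near-collinear approximation $k - |\mathbf{k}^{\perp}|^2/2k$ used above is the leading Taylor expansion of the square root, with relative error of order $(|\mathbf{k}^{\perp}|/k)^4/8$. A quick numerical check (the refractive index value is illustrative, not taken from the crystal data):

```python
import math

def kz_exact(k, kperp):
    # exact longitudinal component of a wave-vector of magnitude k
    return math.sqrt(k ** 2 - kperp ** 2)

def kz_paraxial(k, kperp):
    # the near-collinear approximation k - |k_perp|**2 / (2 k)
    return k - kperp ** 2 / (2.0 * k)

# pump wave-vector magnitude for a 405 nm pump; n = 1.66 is illustrative
k = 2.0 * math.pi * 1.66 / 405e-9

for frac in (0.01, 0.05, 0.1):          # |k_perp| as a fraction of k
    kperp = frac * k
    rel_err = abs(kz_exact(k, kperp) - kz_paraxial(k, kperp)) / k
    assert rel_err < frac ** 4          # next term is (kperp/k)**4 / 8
```

For the few-degree emission cones typical of near-collinear SPDC, the relative error is far below any other uncertainty in the model.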
The angular spectrum of the down-converted signal photons for frequency $\\omega_s$ is obtained by tracing the biphoton mode function over all idler photons \\cite{yasser}\n\\begin{equation}\n R_s(\\mathbf{k}^{\\perp}_s)=\\int \n d\\mathbf{k}_i^{\\perp}\\vert\\Phi(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp},\\Delta k)\\vert^2.\n \\label{SPDCAS}\n\\end{equation}\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{SPDC_imaging_setup.pdf}\n \\caption{Experimental setup to record the angular spectrum of SPDC photons generated by pumping a non-linear crystal with different pump beams. The inset shows the SPDC annular distribution in the angular spectrum. Two diametrically opposite points (circled in red) on the distribution are chosen to collect the signal-idler pairs.}\n \\label{exptalsetup_imaging}\n\\end{figure}\n\nThe experimental setup to record the angular spectra of SPDC photons with different helical pump modes is given in Fig. \\ref{exptalsetup_imaging}. A pump beam of wavelength 405$\\pm$2~nm from a continuous-wave diode laser (TOPTICA iBeam Smart) of 50~mW power is incident on a $\\chi^{(2)}$ crystal. The dashed box corresponds to the preparation of NOV or POV helical beams using the method given in Fig. \\ref{pump_prep_POV}(a). A half-wave plate (HWP) is used to orient the pump polarization along the optic axis of the crystal. As discussed earlier, we use a Type-I BBO crystal to generate SPDC photon-pairs. The down-converted photons, signal \\& idler, are generated in a non-collinear fashion at diametrically opposite points of the SPDC ring. A bandpass filter (BPF) of width 10~nm centered at 810~nm is used to filter the down-converted photons and block the unconverted pump beam after the crystal. A $2f$-imaging configuration with a plano-convex lens of focal length 50~mm is used to image the SPDC in $k$-space. 
The angular spectrum of SPDC is recorded using an electron-multiplying CCD (EMCCD) camera with a gain of $\\times$100 and the addition of 100 frames, each having an exposure time of 0.5~s. The EMCCD camera has an imaging area of 512$\\times$512 pixels with a pixel size of 16~$\\mu$m.\n\nThe numerical angular spectra of SPDC for different pump OAM modes under similar experimental conditions, obtained from Eqn. \\eqref{SPDCAS}, are shown in Fig. \\ref{SPDC-angular-spectra}(top). These match well with the corresponding recorded intensity distributions of SPDC photons with NOV and POV pumps of different OAMs, shown in Fig. \\ref{SPDC-angular-spectra}(bottom). The asymmetric intensity distribution in all the angular spectra is mainly due to the spatial walk-off of the thick crystal used. For a NOV pump, the SPDC annular distribution broadens with an increase in the topological order. However, the signature of `size-invariance' of POV pump beams is observed in their corresponding angular spectra, where the intensity distribution remains the same for higher OAM values. \n\n\\begin{figure}[h]\n\t\\centering\n \\includegraphics[width=0.48\\textwidth]{SPDC_AS.pdf}\n\t\\caption{Numerical (top) and experimental (bottom) angular spectra of SPDC generated using NOV and POV pump modes of various orders.} \\label{SPDC-angular-spectra}\n\\end{figure}\n\nFor non-collinear down-conversion, the signal and idler are located at diametrically opposite points of the annular distribution (Fig. \\ref{exptalsetup_imaging}). If collection optics are placed at two such points to detect the photons, one observes a decrease in the number of detected photons with increasing OAM of a NOV pump. There will not be a significant change in the number of detected photons in the case of a POV pump. 
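The drop in detected photons for a NOV pump can be illustrated with a toy estimate: the fraction of a NOV mode's power falling within a fixed collection region decreases with $\ell$, since the ring radius grows as $w\sqrt{\ell/2}$. A sketch (the aperture radius and waist are arbitrary, and a hard aperture stands in for the actual fiber-mode overlap):

```python
import math

def nov_power_fraction(ell, a, w=1.0):
    # fraction of a NOV mode's power inside a fixed collection radius a;
    # the radial power density is r**(2|ell|+1) * exp(-2 r**2 / w**2)
    def power(upper, n=20000):
        h = upper / n
        return h * sum(((i + 0.5) * h) ** (2 * abs(ell) + 1)
                       * math.exp(-2.0 * ((i + 0.5) * h) ** 2 / w ** 2)
                       for i in range(n))
    return power(a) / power(8.0 * w)   # 8w is effectively infinity here

fracs = [nov_power_fraction(ell, a=1.0) for ell in (0, 5, 10, 15)]
# a fixed aperture captures ever less of the growing NOV ring
assert all(f1 > f2 for f1, f2 in zip(fracs, fracs[1:]))
```

For a POV pump the ring radius, and hence the captured fraction, would stay fixed, which is the intuition behind the flat detection rate described above.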
To take full advantage of our study, we preferred the pump generation method illustrated in Fig.~\\ref{pump_prep_POV}(b) over that of Fig.~\\ref{pump_prep_POV}(a), owing to the unavailability of spiral phase plates of order greater than 3. As the low down-conversion rate in the BBO crystal restricted further investigation of the size effects on photon-pair detection, we chose a periodically poled KTP (ppKTP) crystal, which not only provides a higher photon-pair production rate but also removes the asymmetry in the SPDC distribution.\nPumping the crystal with a beam carrying OAM $\\ell_p$ and collecting the idler photon through a single-mode fiber leads to the conditional (heralded) detection of a signal photon carrying an OAM identical to that of the pump, as discussed below.\n\n\\subsection{Heralded twisted single photons}\n\nEquation \\eqref{modefna} gives the two-photon modal distribution for a pump with electric field $E_p(\\mathbf{k}_p^{\\perp})$. According to the conservation of OAM in the SPDC process \\cite{walborn2004entanglement}, for a pump carrying an OAM $\\ell_p$ and the idler photon projected to a zero-OAM mode ($\\ell_i=0$), the corresponding heralded signal photon will have the same OAM as that of the pump ($\\ell_s=\\ell_p$). Such heralded single photons carrying OAM can be experimentally generated by coupling one photon of a pair to a single-mode fiber and the other to a multimode fiber. The coincidence counts can then be calculated from the modal distribution as\n\\begin{equation}\n C \\propto \\int_0^{a_s}\\int_0^{a_i}\\Phi(\\mathbf{k}_s^{\\perp},\\mathbf{k}_i^{\\perp},\\Delta k)\\xi_s^*(\\mathbf{k}_s^{\\perp})\\xi_i^*(\\mathbf{k}_i^{\\perp})d\\mathbf{k}_s^{\\perp}\\mathbf{k}_i^{\\perp},\n \\label{coinc_counts_herladed}\n\\end{equation}\nwhere $\\xi_s$ and $\\xi_i$ are the characteristic functions of the spatial mode projectors in the signal and idler arms, respectively. 
The fiber modes in the two arms are represented by Gaussian functions with mode-field diameters $2a_s$ and $2a_i$, given by\n\\begin{align}\n \\xi_{s,i}^*(\\mathbf{k}_{s,i}^{\\perp})=\\sqrt{\\frac{a_{s,i}^2}{2\\pi}}\\text{exp}\\left(-\\frac{a_{s,i}^2}{4}|\\mathbf{k}_{s,i}^{\\perp}|^2\\right).\n \\label{FiberExpression}\n\\end{align}\n\nThe experimental setup to generate heralded twisted single photons from SPDC is given in Fig. \\ref{exptalsetup_her_twist}. Here, we have used a blue diode laser (TopMode) of wavelength 405~nm and power 10~mW with a spectral bandwidth of 0.1~nm as the pump beam. NOV and POV beams are generated by illuminating the Gaussian laser mode onto the corresponding grating holograms imprinted on an SLM (Hamamatsu), and the first-order far-field diffracted beam, after Fourier transformation using a 750~mm lens, is incident on a Type-0 ppKTP crystal of thickness 30~mm and transverse dimensions of 1~mm$\\times$2~mm. The HWP allows us to vary the pump beam polarization along the crystal axis. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{SPDC_twist_photons_expt_setup.pdf}\n \\caption{Experimental setup for generating heralded twisted single photons. BPF - Band pass filter; PM - Prism mirror; SMF - Single mode fiber; MMF - Multimode fiber; FC - Fiber coupler; SPCM - Single photon counting module; CC - Coincidence counter.}\\label{exptalsetup_her_twist}\n\\end{figure}\n\nThe down-converted photons (signal \\& idler), degenerate at a wavelength of 810~nm, are generated in a non-collinear fashion at diametrically opposite points of the SPDC ring. To measure the number of generated photon pairs, two diametrically opposite portions of the SPDC ring at a given plane were selected using apertures (not shown in the setup), and the photons coming out of each aperture were collected using fiber collimators (CFC-2X-B, Thorlabs) of focal length 2~mm each. 
The fiber collimator in the idler arm is attached to an SMF (P1-780A-FC-2, Thorlabs) having a numerical aperture of 0.13 and a mode-field diameter of $5\\pm0.5$ $\\mu$m, and that in the signal arm is attached to a multi-mode fiber (M43L02, Thorlabs). The fibers are connected to single-photon counting modules (SPCM-AQRH-16-FC, Excelitas). The detectors have a timing resolution of 350~ps with 25 dark counts per second. To count the correlated photon-pairs, the two detectors are connected to a coincidence counter (IDQuantique-ID800) having a time resolution of 81~ps.\n\nWe recorded the coincidences of signal and idler for NOV and POV pumps of orders $\\ell=1$ to 25. Figure \\ref{plot1}(a) shows the coincidence counts for NOV and POV pump beams with different OAM. For a pump OAM up to 10, the coincidence counts of SPDC photon pairs are considerably higher for a POV pump than for a NOV pump beam. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{plot2.png}\n \\caption{Plot of (a) measured coincidence counts for different values of OAM of the pump, and (b) the corresponding heralding efficiency, for NOV and POV pumps.}\\label{plot1}\n\\end{figure}\n\nThe difference in coincidences corresponding to the two pump modes of the same order decreases for higher orders. The robustness of the source with a POV pump can be better understood by comparing the heralding efficiencies. Figure \\ref{plot1}(b) shows the variation of the heralding efficiency of SPDC single photons with pump OAM for NOV and POV pumps. From the graph, it is clear that, up to a pump OAM of 15, the efficiency of heralded single photons carrying OAM with a POV pump is considerably greater than that with a NOV pump. Ideally, the coincidences for a POV pump should be independent of the order. However, we observe a drop in the coincidences for higher OAM values. 
This is because the ring diameter of the generated POV mode gradually increases with order, and the difference becomes more pronounced for orders above $\\ell=15$, which causes a considerable drop in counts.\n\n\\section{Two-photon OAM states using POV pump} \\label{high-dim-OAM-states}\n\\subsection{Theory}\n\nParametric down-converted photon pairs generated using a pump beam of any spatial profile carry a superposition of different OAMs. Based on the OAM conservation in the SPDC process \\cite{walborn2004entanglement}, the photon-pairs represent a joint signal-idler OAM state in the allowed OAM subspace, given as\n\\begin{equation}\n \\ket{\\psi}=\\sum_{\\ell_i=-\\infty}^\\infty C_{\\ell_p-\\ell_i,\\ell_i}\\ket{\\ell_p-\\ell_i}_a\\ket{\\ell_i}_b,\n \\label{SPDC_single_sum}\n\\end{equation}\nwhere $\\ell_i$ is the OAM of the idler photon and $C_{\\ell_p-\\ell_i,\\ell_i}$ is the probability amplitude for the occurrence of the state $\\ket{\\ell_p-\\ell_i}_a\\ket{\\ell_i}_b$. The subscripts $a$ \\& $b$ represent the signal and idler modes, respectively. In the experiment, the probability amplitude corresponds to the coincidence counts obtained by projecting conjugate azimuthal modes in signal and idler. The coincidence count for each projection is given by\n\\begin{equation}\n C_{\\ell_s,\\ell_i}\\propto {}^{}_{a}\\langle \\ell_s^{\\text{(proj)}}\\vert {}^{}_{b}\\langle \\ell_i^{\\text{(proj)}}\\vert \\psi\\rangle.\n \\label{prob_ampl}\n\\end{equation}\nHere, $\\ell_s^{\\text{(proj)}}$ and $\\ell_i^{\\text{(proj)}}$ are the OAM values of the azimuthal modes considered in the projective measurement. \n\nFor a nearly collinear phase-matched SPDC, the probability amplitude in Eqn. 
(\\ref{prob_ampl}) is rewritten as an integral in polar coordinates representing the field overlap of the pump and the projected signal and idler modes,\n\\begin{equation}\n C_{\\ell_s,\\ell_i}\\propto\\int_0^{2\\pi}d\\theta\\int_0^\\infty rE_p(r,\\theta)E_s^*(r,\\theta)E_i^*(r,\\theta)dr,\n \\label{prob_ampl_num}\n\\end{equation}\nwhere $E_p$ is the pump mode and $E_s$ \\& $E_i$ are the projected signal and idler modes, respectively. Here, we consider a POV pump represented by \\eqref{POV-expression} and NOV or BGV modes in both signal and idler arms. \n\n\\subsection{Experiment}\nThe schematic of the experimental setup for measurements in higher dimensions is shown in Fig. \\ref{OAMentlsetup}. In this case, the combination of an SLM and an SMF was used as the detection system, instead of the heralding configuration using fibers as discussed earlier. The combination was added in both the signal \\& idler arms. The SLM is used to generate either the NOV or POV modes at the plane of the crystal. The crystal is then imaged on the two SLMs using two lenses and then imaged again onto the SMF. The photons are then detected by the SPCMs, and the signals are analysed using the coincidence logic (CC). \n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{OAMentlsetup.pdf}\n \\caption{Experimental setup for generating OAM entangled photon-pairs with different helical pump modes.}\\label{OAMentlsetup}\n\\end{figure}\n\nWe measured the OAM spectra of SPDC photon-pairs with a POV pump and estimated their bandwidths using LG and BG mode projections in signal\/idler. The OAM bandwidth is quantified by the Schmidt number, which estimates the dimensionality of the generated state \\cite{pors2008shannon, pires2010, straupe}. Fig. \\ref{OAMspectra} shows the OAM spectra of photon-pairs for POV pumps of order 0 \\& 1. 
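The overlap integral in Eqn.\\ (\\ref{prob_ampl_num}) can be checked numerically. The sketch below is illustrative only: it assumes a simple Gaussian-ring ansatz for the POV pump and normalized $p=0$ Laguerre-Gaussian projection modes, with the ring radius, mode waist, OAM window, and grid sizes chosen for demonstration rather than taken from the experiment. It verifies that the azimuthal integration enforces OAM conservation ($\\ell_s+\\ell_i=\\ell_p$) and computes a Schmidt number $K=1\/\\sum_\\ell P_\\ell^2$ from the resulting spectrum.

```python
import math
import numpy as np

# Sketch of the projected amplitude C_{ls,li} ∝ ∫∫ r E_p E_s* E_i* dr dθ.
# The POV pump is modeled as a Gaussian ring of fixed radius r0 (an
# illustrative ansatz, not the exact experimental mode), and the signal/
# idler projections as normalized p = 0 Laguerre-Gaussian azimuthal modes.

def lg_mode(r, th, ell, w=1.0):
    norm = math.sqrt(2.0 / (math.pi * math.factorial(abs(ell)))) / w
    return norm * (math.sqrt(2.0) * r / w) ** abs(ell) \
        * np.exp(-(r / w) ** 2) * np.exp(1j * ell * th)

def pov_mode(r, th, ell, r0=1.5, dr=0.3):
    return np.exp(-((r - r0) / dr) ** 2) * np.exp(1j * ell * th)

def overlap(ell_p, ell_s, ell_i, n_r=400, n_th=256, r_max=6.0):
    # Riemann sum of the polar overlap integral on an (r, θ) grid.
    r = np.linspace(1e-6, r_max, n_r)
    th = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing="ij")
    f = R * pov_mode(R, TH, ell_p) * np.conj(lg_mode(R, TH, ell_s)) \
        * np.conj(lg_mode(R, TH, ell_i))
    return f.sum() * (r[1] - r[0]) * (2.0 * np.pi / n_th)

# OAM conservation: the θ-integral vanishes unless ls + li = lp.
c_ok = overlap(1, 2, -1)    # ls + li = lp: finite amplitude
c_bad = overlap(1, 2, 0)    # ls + li ≠ lp: vanishes

# Spectrum over the conserving pairs (ls = lp - li), truncated to
# |li| <= 4 for illustration, and its Schmidt number K = 1 / Σ P².
probs = np.array([abs(overlap(1, 1 - li, li)) ** 2 for li in range(-4, 5)])
probs /= probs.sum()
schmidt_K = 1.0 / np.sum(probs ** 2)
```

Swapping `lg_mode` for a Bessel-Gaussian projection in the same routine is what would model the broader BG-projected spectra reported above.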
\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{oamspectrum-highdim-states.pdf}\n \\caption{OAM spectrum of SPDC photon-pairs generated using POV pump of OAM (a) $\\ell_p=0$ and (b) $\\ell_p=1$.}\\label{OAMspectra}\n\\end{figure}\nFor $\\ell_p=0$ (Fig. \\ref{OAMspectra}(a)), the Schmidt numbers are 3.8 \\& 8.7 for LG \\& BG mode projections, respectively. For $\\ell_p=1$ (Fig. \\ref{OAMspectra}(b)), the Schmidt numbers are 6.3 \\& 11.6 for LG and BG mode projections, respectively. This shows that projecting the photon-pairs generated from a POV pump onto a BG mode effectively reduces the ``size effects'' that cause a reduction in OAM bandwidth. Earlier studies have shown that the bandwidth can be broadened further by increasing the radial wavenumber $k_r$ \\cite{mclaren2012entangled}.\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{densitymatrix-2doamstates.pdf}\n \\caption{The density matrices of the two-photon OAM entangled states. (a)-(c) correspond to the estimation by projecting photon-pairs on the LG basis, and (d)-(f) are the estimations in the BG projection. The holograms of the true state and the fidelity of the estimated state are given in the inset of each density matrix. (a) \\& (d) correspond to a POV pump of $\\ell_p=0$ and the other density matrices correspond to a POV pump of $\\ell_p=1$. }\\label{densitymatrix-2doamstates}\n\\end{figure}\n\nWe also studied the effect of zero-order mode projection in the idler on the fidelity of the quantum states generated using a POV pump. For simplicity, we consider two-dimensional OAM states in the form of Bell states $(\\ket{\\ell_s}\\ket{\\ell_i}+\\ket{\\ell_i}\\ket{\\ell_s})\/\\sqrt{2}$, with different $\\ell_s$ and $\\ell_i$ values. Figure \\ref{densitymatrix-2doamstates}(a)-(c) show the density matrices of OAM states estimated with LG mode projection and Fig. 
\\ref{densitymatrix-2doamstates}(d)-(f) give the corresponding density matrices with BG mode projections. In all these cases, the pump OAM $\\ell_p$ is the sum of the OAMs of the signal and idler, $\\ell_p=\\ell_s+\\ell_i$. Here, we observe that the state fidelity is lower for the case of a zero-OAM mode in the signal or idler, which is the setting for the generation of heralded twisted single photons, as discussed in the earlier section. There is thus a trade-off between using a two-photon SPDC system with complete mode-projection capabilities and using heralding with fiber-based projective systems. \n\nWe also observed that the fidelity of the state is improved with the use of BG projection. Efficient OAM detection based on phase-flattening of POV modes with classical light sources has been demonstrated \\cite{pinnell2019quantitative, pinnell2019perfect}. Thus, a robust source of high-dimensional quantum states in the OAM degree of freedom can be implemented by using a size-invariant twisted optical mode like a perfect optical vortex as the pump, as well as by performing projection of the photon-pairs in Bessel-Gaussian modes.\n\n\\section{Conclusion} \\label{Conclusion}\nIn conclusion, we have experimentally demonstrated that the conditional coupling efficiency of heralded twisted single photons for higher OAM values can be improved by using a perfect optical vortex beam as the pump. We also showed that the dimensionality of the two-photon OAM states is increased with the use of POV modes in the pump, as well as with projective measurements using Bessel-Gaussian vortex modes, which give POV modes, instead of Laguerre-Gaussian modes. The presented results may be utilized for the practical realization of efficient higher-dimensional OAM entangled photon-pair sources. 
\n\n\\section*{Disclosures}\nThe authors declare that there are no conflicts of interest related to this article.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\nLet $L$ be a link in the 3-dimensional Euclidean space $\\mathbb{R}^3$. The \\textit{unknotting number} $u(L)$ is the minimal number of crossing changes\\ (Fig.\\ \\ref{cc}) from $L$ to a trivial link. The \\textit{crossing number} $c(L)$ is the minimal number of crossing points among all regular diagrams of $L$. It is well-known that $u(L)$ is less than or equal to half of $c(L)$\\ (see for example\\ \\cite{unbound}). In\\ \\cite{unbound} Taniyama characterized the links which satisfy the equality as follows. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[height=0.12\\linewidth]{cc1.eps}\n\\caption{}\n\\label{cc}\n\\end{figure}\n\n\\begin{thm} \\cite[Theorem $1.5\\,(2)$]{unbound}\nLet $L$ be a $\\mu$-component link that satisfies the equality $u(L) =\\dfrac{c(L)}{2}$. Then $L$ has a diagram $D = \\gamma_1 \\cup \\cdots \\cup \\gamma_{\\mu}$ such that each $\\gamma_i$ is a simple closed curve on $\\mathbb{R}^2$ and for each pair $i,\\ j$, the subdiagram $\\gamma_i \\cup \\gamma_j$ is an alternating diagram or a diagram without crossings.\n\\end{thm}\n\nIn\\ \\cite{uncr} Taniyama and the author showed that this inequality does not extend to spatial embeddings of planar graphs in general, but that it does extend to spatial embeddings of trivializable planar graphs. Namely, for any spatial embedding $f$ of a trivializable planar graph, $u(f)$ is less than or equal to half of $c(f)$. For example, a handcuff-graph and a theta-curve as illustrated in Fig.\\ \\ref{hanthe} are trivializable. 
We characterize the spatial embeddings of a handcuff-graph or a theta curve which satisfy the equality as follows.\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.3\\linewidth]{hand.eps}\\\\\n \\ \\\\\n handcuff-graph\n \\end{center}\n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.3\\linewidth]{theta.eps}\\\\\n \\ \\\\\n theta curve\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{hanthe}\n\\end{figure}\n\n\\begin{thm}\\label{handcuff}\nLet $G$ be a handcuff-graph and let $f$ be a spatial embedding of $G$. Then $f$ satisfies the equality $u(f)=\\dfrac{c(f)}{2}$ if and only if $f$ has a diagram $D$ with the following conditions {\\rm:} \\\\\n$(1)\\,$Each edge of $D$ has no self-crossings. \\\\\n$(2)\\,$All crossings of $D$ are crossings between two loops.\\\\\n$(3)\\,$Two loops of $D$ form an alternating diagram or a diagram without crossings.\n\\end{thm}\n\n\\begin{thm}\\label{theta}\nLet $G$ be a theta curve and let $f$ be a spatial embedding of $G$. Then $f$ satisfies the equality $u(f)=\\dfrac{c(f)}{2}$ if and only if $f$ is trivial.\n\\end{thm}\n\nWe note that unknotting numbers of spatial embeddings of a theta curve are studied in \\cite{ob}.\n\nA \\textit{handlebody-knot} is an embedded handlebody in the 3-dimensional Euclidean space $\\mathbb{R}^3$, which was introduced by Ishii in \\cite{handle}. Two handlebody-knots $H_1$ and $H_2$ are equivalent if there is an\norientation-preserving homeomorphism $h$ of $\\mathbb{R}^3$ with $h(H_1) = H_2$. A \\textit{spine} of a handlebody-knot $H$ is a spatial graph whose regular neighborhood is $H$. In this paper, we assume that spines have no degree 1 vertices. Any handlebody-knot $H$ can be represented by a spatial trivalent graph that is a spine of $H$. 
In particular, a genus 2 handlebody-knot can be represented by a spatial embedding of a handcuff-graph or a theta curve. A \\textit{crossing change} of a handlebody-knot $H$ is that of a spatial trivalent graph representing $H$. In\\ \\cite{unqc} Iwakiri showed that a crossing change of a handlebody-knot is an unknotting operation and gave lower bounds for the unknotting numbers of handlebody-knots by the numbers of some finite Alexander quandle colorings. \n\nWe have the following well-known relation between the unknotting number and the crossing number of a classical knot.\n\\begin{prop} \\label{12c1k}\nLet $K$ be a nontrivial knot. Then $u(K) \\leq \\dfrac{c(K)-1}{2}$.\n\\end{prop}\nIn \\cite{unbound} Taniyama characterized the knots which satisfy the equality as follows.\n\\begin{thm} \\cite[Theorem $1.4\\,(2)$]{unbound} \\label{12c2k}\nLet $K$ be a nontrivial knot that satisfies the equality $u(K)=\\dfrac{c(K)-1}{2}$. Then $K$ is a $(2,\\ p)$-torus knot for some odd number $p \\neq \\pm 1$.\n\\end{thm}\n\nIn this paper, as an extension of Proposition \\ref{12c1k}, we show the following theorem.\n\n\\begin{thm}\\label{12c1}\nLet $H$ be a non-trivial handlebody-knot. Then $u(H) \\leq \\dfrac{c(H)-1}{2}$.\n\\end{thm}\nThe spine of a genus 1 handlebody-knot is a classical knot. Therefore Theorem \\ref{12c1} is an extension of Proposition \\ref{12c1k}. It follows from Theorem \\ref{handcuff} and Theorem \\ref{theta} that for any non-trivial genus 2 handlebody-knot $H$, twice the unknotting number of $H$ is less than or equal to the crossing number of $H$ minus one\\ (see section 4). \n\nIt follows from Theorem \\ref{12c2k} that a genus 1 handlebody-knot $H$ with $u(H) = \\dfrac{c(H)-1}{2}$ is a regular neighborhood of a $(2,\\ p)$-torus knot. 
We also characterize genus $n \\geq 2$ handlebody-knots which satisfy the equality as follows.\n\\begin{thm}\\label{12c2}\nLet $n\\geq 2$ and let $H$ be a nontrivial genus $n$ handlebody-knot that satisfies the equality $u(H)=\\dfrac{c(H)-1}{2}$.\\,Then $H$ is a handlebody-knot represented by $D_3$ or $D_{-3}$ illustrated in Fig.\\ \\ref{d3}.\n\\end{thm}\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.6\\linewidth]{d311.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.6\\linewidth]{d312.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{d3}\n\\end{figure}\n\nThis paper consists of five sections. In section 2 we review trivializability of planar graphs and inequalities between unknotting numbers and crossing numbers of spatial embeddings of planar graphs. In section 3 we introduce unknotting number of handlebody-knots. In section 4 we give proofs of Theorem \\ref{handcuff} and Theorem \\ref{theta}. In section 5 we give proofs of Theorem \\ref{12c1} and Theorem \\ref{12c2}.\n\n\\section{UNKNOTTING NUMBERS AND CROSSING NUMBERS OF SPATIAL EMBEDDINGS OF PLANAR GRAPHS}\n\nLet $G$ be a planar graph. A \\textit{spatial embedding} of $G$ is an embedding $f : G\\rightarrow \\mathbb{R}^3$. Its image $f(G)$ is said to be a \\textit{spatial graph}. Let $\\pi: \\mathbb{R}^3 \\rightarrow \\mathbb{R}^2$ be a natural projection defined by $\\pi(x,y, z) = (x, y)$. Let $SE(G)$ be the set of all spatial embeddings of $G$. A \\textit{regular projection} of $G$ is a continuous map $\\tilde{f} : G \\rightarrow \\mathbb{R}^2$ whose double points are only finitely many transversal double points. Such a double point is said to be a \\textit{crossing point} or simply a \\textit{crossing}. 
If we give over\/under informations at each crossing point of a regular projection $\\tilde{f}$ of $G$, then $\\tilde{f}$ together with the over\/under informations represents a spatial embedding $f:G \\rightarrow \\mathbb{R}^3$ such that $\\tilde{f} = \\pi \\circ f$. Such a regular projection together with the over\/under informations is said to be a \\textit{diagram} of $f(G)$. Then we say that $f$ is \\textit{obtained from} $\\tilde{f}$. We also call $\\tilde{f}$ a \\textit{regular projection} of $f(G)$. For a diagram $D$ of a spatial embedding, the set of all crossings of $D$ is denoted by $\\textit{C}(D)$. The number of crossings of $D$ is denoted by $c(D) = |\\textit{C}(D)|$.\n\nAn element $f \\in SE(G)$ is said to be \\textit{trivial} if it is ambient isotopic to $t \\in SE(G)$ such that $t(G) \\subset \\mathbb{R}^2$.\nAny spatial embedding of a planar graph can be transformed into a trivial one by crossing changes. Therefore the unknotting number is naturally extended to spatial embeddings of planar graphs as follows. For $f \\in SE(G)$, the unknotting number $u(f)$ is defined to be the minimal number of crossing changes from $f$ to a trivial embedding of $G$. The \\textit{crossing number} $c(f)$ is defined to be the minimal number of crossing points among all diagrams of spatial embeddings that are ambient isotopic to $f$. \n\nAny link $L$ satisfies the inequality $u(L) \\leq \\dfrac{c(L)}{2}$. But this does not extend to spatial embeddings of planar graphs; namely, there are a planar graph $G$ and a spatial embedding $f$ of $G$ such that $u(f) > \\dfrac{c(f)}{2}$. Let $P_3$ be the cube graph and $f_3 \\in SE(P_3)$ a spatial embedding of $P_3$ as illustrated in Fig.\\ \\ref{f3p3}. The spatial graph $f_3(P_3)$ contains three Hopf-links, and one crossing change of edges of $f_3(P_3)$ unknots at most two of them\\ (see Fig.\\ \\ref{hopf3}). Then $u(f_3) \\geq 2$. 
Since $f_3(P_3)$ contains a trefoil, whose crossing number is 3, we have $c(f_3)=3$ and $u(f_3) > \\dfrac{c(f_3)}{2}$ \\cite{uncr}. \n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{f3p3.eps}\\\\\n\\caption{}\n\\label{f3p3}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.55\\linewidth]{hopf3.eps}\n\\caption{}\n\\label{hopf3}\n\\end{figure}\n\nNow we review the reason why this happens for some planar graphs. The key point of the proof of $u(L) \\leq \\dfrac{c(L)}{2}$ for a link $L$ is that any link diagram can be transformed into a trivial link diagram by changing over\/under informations at some crossings of the diagram. Let $D$ be a minimal crossing diagram of $L$. Let $A$ be a subset of $C(D)$ such that changing over\/under informations at all crossings in $A$ turns $D$ into a diagram $T_1$ of a trivial link. Let $T_2$ be the diagram that is obtained from $T_1$ by changing over\/under informations at all crossings. A mirror image of a trivial link is also trivial. Thus $T_2$ is a diagram of a trivial link. Note that $T_2$ is obtained from $D$ by changing over\/under informations at all crossings in $C(D)-A$. Therefore we have\n$$u(L) \\leq u(D) \\leq \\min \\{|A|,|C(D)-A|\\} \\leq \\dfrac{c(D)}{2}=\\dfrac{c(L)}{2}.$$\n\nOn the other hand, all diagrams obtained from $\\pi \\circ f_3(P_3)$ (Fig.\\ \\ref{knotted}) represent non-trivial spatial graphs, since each of the spatial graphs obtained from these diagrams contains at least one Hopf-link. A regular projection $\\tilde{f}$ of a planar graph $G$ is said to be a \\textit{knotted projection} \\cite{knotted} if all spatial embeddings of $G$ which can be obtained from $\\tilde{f}$ are non-trivial. 
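The counting step above rests on a simple fact: the two trivializing sets of crossing changes, $A$ and $C(D)-A$, partition the crossings of $D$, so the cheaper of the two involves at most $\\dfrac{c(D)}{2}$ changes. The following Python sketch illustrates this with a toy encoding of a two-component link diagram as a list of over\/under flags; this encoding is introduced here purely for illustration and is not a construction from the cited papers.

```python
import random

def unknotting_upper_bound(over_flags):
    """over_flags[i] is True when the first component is the over-strand
    at crossing i of a two-component diagram (a toy encoding that ignores
    self-crossings).  Changing every crossing where the first component
    is under makes it lie entirely above the second component, giving a
    split -- hence trivial -- diagram; changing the complementary set
    makes it lie entirely below (the mirror trick).  The two sets
    partition the crossings, so the cheaper option uses at most half."""
    to_all_over = sum(1 for f in over_flags if not f)
    to_all_under = len(over_flags) - to_all_over
    return min(to_all_over, to_all_under)

# Randomized check that twice the bound never exceeds the crossing count.
random.seed(1)
for _ in range(1000):
    n = random.randint(0, 30)
    flags = [random.random() < 0.5 for _ in range(n)]
    assert 2 * unknotting_upper_bound(flags) <= n
```

The same complementary-set idea reappears below for trivializable graphs and for handlebody-knot diagrams; only the meaning of "trivializing set" changes.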
\n\n\n\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[height=0.18\\linewidth]{knotted.eps}\\\\\n \\ \\\\\n $\\pi \\circ f_3(P_3)$\n \\end{center}\n\\caption{}\n\\label{knotted}\n\\end{figure}\n\nA planar graph is said to be \\textit{trivializable} if it has no knotted projections. In \\cite{knotted} Taniyama gave a class of trivializable graphs. In \\cite{class} Sugiura and Suzuki extended the class. In \\cite{tamura} Tamura gave another class of trivializable graphs.\n\nFor a spatial embedding of a trivializable planar graph, the same argument as\nfor a link works, and we have the following proposition.\n\n\\begin{prop}\\label{tri12} \\cite{uncr}\nLet $G$ be a trivializable planar graph and $f: G \\rightarrow \\mathbb{R}^3$ a spatial embedding of $G$. Then $u(f) \\leq \\dfrac{c(f)}{2}$.\n\\end{prop}\n\n\\newpage\n\n\\section{UNKNOTTING NUMBERS AND CROSSING NUMBERS OF HANDLEBODY-KNOTS}\n\nWe review the fact that a crossing change of a handlebody-knot is an unknotting operation \\cite{unqc}.\n\nA \\textit{diagram} of a handlebody-knot $H$ is that of a spatial trivalent graph representing $H$. In \\cite{handle}, Ishii gave a list of fundamental moves among diagrams of handlebody-knots, which are called the R1-6 moves and are illustrated in Fig.\\ \\ref{reide}. Ishii showed that two handlebody-knots are equivalent if and only if their representing diagrams are related by a finite sequence of R1-6 moves. Note that the R6-move is also called the \\textit{IH-move}. 
\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.3\\hsize}\n \\begin{center}\n \\includegraphics[height=0.4\\linewidth]{R1.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.25\\hsize}\n \\begin{center}\n \\includegraphics[height=0.48\\linewidth]{R2.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.25\\hsize}\n \\begin{center}\n \\includegraphics[height=0.48\\linewidth]{R3.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\\\\\n\\ \\vspace{0.5cm} \\\\\n \\begin{tabular}{c}\n \\begin{minipage}{0.36\\hsize}\n \\begin{center}\n \\includegraphics[height=0.333333\\linewidth]{R4.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.36\\hsize}\n \\begin{center}\n \\includegraphics[height=0.333333\\linewidth]{R5.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[height=0.6\\linewidth]{R6.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{reide}\n\\end{figure}\n\nA crossing change of a handlebody-knot $H$ is that of a spatial trivalent graph representing $H$. This move can be realized by switching two tubes illustrated in Fig.\\ \\ref{cch}. A genus $n$ handlebody-knot is \\textit{trivial} if it is equivalent to a handlebody-knot represented by a diagram illustrated in Fig.\\ \\ref{tg}.\n\n\n\\begin{figure}[H]\n\\centering\n \n \n \n \\includegraphics[height=0.12\\linewidth]{cc2.eps}\n \n \n \n \n \n \n \n \n \\caption{}\n \\label{cch}\n\\end{figure}\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.4\\linewidth]{tg.eps}\n \\caption{}\n \\label{tg}\n\\end{figure}\n\n\nLet $T_n$ be the trivalent graph whose image is illustrated in Fig.\\ \\ref{tg}. Any handlebody-knot is represented by a diagram of a spatial embedding of $T_n$ since a genus $n$ handlebody has $T_n$ as a spine. Note that $T_n$ is a trivializable graph \\cite{class}. 
Namely, any diagram $D$ of a spatial embedding of $T_n$ can be changed to a trivial spatial graph diagram by changing over\/under informations at some crossings of $D$. Then we have the following proposition.\n\n\\begin{prop}\\label{unop} \\cite[Proposition\\ 2.1]{unqc} Any handlebody-knot can be transformed into a trivial one by crossing changes.\n\\end{prop}\n\nTherefore the unknotting number is naturally extended to handlebody-knots as follows. For a handlebody-knot $H$, the unknotting number $u(H)$ is the minimal number of crossing changes needed to obtain a trivial handlebody-knot from $H$. The crossing number $c(H)$ is the minimal number of crossing points among all diagrams of handlebody-knots that are equivalent to $H$. \n\nBy the proof of \\cite[Proposition\\ 3.1]{forbidden} we see that any diagram $D$ of a spatial graph can be transformed into a diagram of a spatial graph whose neighborhood is ambient isotopic to a neighborhood of a trivial bouquet by changing over\/under informations at some crossings of $D$. Therefore, in \\cite{unqc}, Iwakiri also showed that Proposition\\ \\ref{unop} can be refined to the following stronger statement.\n\n\\begin{prop}\\label{dunop} \\cite{unqc} Any handlebody-knot diagram can be transformed into a trivial handlebody-knot diagram by changing over\/under informations at some crossings of the diagram.\n\\end{prop}\n\n\n\nFor a handlebody-knot diagram $D$, the unknotting number $u(D)$ is the minimal number of changes of over\/under informations at crossings of $D$ needed to obtain a trivial handlebody-knot diagram. As in Proposition\\ \\ref{tri12}, we have $u(D) \\leq \\dfrac{c(D)}{2}$ and $u(H) \\leq \\dfrac{c(H)}{2}$.\n\nIn section 5, we show that a handlebody-knot $H$ satisfies $u(H)=\\dfrac{c(H)}{2}$ if and only if $H$ is trivial\\ (Theorem\\ \\ref{12c1}). Then it is natural to ask when handlebody-knots satisfy the equality $u(H) =\\dfrac{c(H)-1}{2}$. 
Let $H$,\\ $H_1$ and $H_2$ be handlebody-knots in $\\mathbb{R}^3$ and let $S$ be a $2-$sphere in $\\mathbb{R}^3$. Suppose that $H \\cap S=H_1 \\cap H_2$ is a $2-$disk and $H=H_1 \\cup H_2$. Then $H$ is said to be a \\textit{disk sum} of $H_1$ and $H_2$ and denoted by $H=H_1 \\# H_2$. In \\cite{unbound} Taniyama showed that if a classical knot $K$ satisfies $u(K) = \\dfrac{c(K)-1}{2}$ then $K$ is a $(2,\\ p)$-torus knot for some odd number $p \\neq \\pm 1$\\ (Theorem\\ \\ref{12c2k}). Therefore the handlebody-knots illustrated in Fig.\\ \\ref{d2n1} may satisfy the equality. But by the following proposition only two of these handlebody-knots satisfy the equality.\n\n\\begin{prop}\\label{2br}\nLet $n\\geq 2$ and let $H$ be a genus $n$ handlebody-knot such that $H=K\\ \\#\\ O_{n-1}$, where $K$ is a genus $1$ handlebody-knot whose spine is a $2-$bridge knot and $O_{n-1}$ is a genus $n-1$ trivial handlebody-knot. Then $u(H)=1$.\n\\end{prop}\n\n\\begin{figure}[H]\n\\begin{tabular}{c}\n\n \\begin{minipage}[t]{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.6\\linewidth]{d311.eps}\\\\\n $D_3$\n \\end{center}\n \\end{minipage}\n \n \\begin{minipage}[t]{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.6\\linewidth]{d51.eps}\\\\\n $D_5$\n \\end{center}\n \\end{minipage}\n \n \\begin{minipage}[t]{0.32\\hsize}\n \\begin{center}\n \\includegraphics[height=0.45\\linewidth]{d71.eps}\\\\\n $D_7$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\\\\\n \\ \\vspace{0.5cm} \\\\\n\\begin{tabular}{c}\n\n \\begin{minipage}[t]{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.6\\linewidth]{d312.eps}\\\\\n $D_{-3}$\n \\end{center}\n \\end{minipage}\n \n \\begin{minipage}[t]{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.6\\linewidth]{d52.eps}\\\\\n $D_{-5}$\n \\end{center}\n \\end{minipage}\n \n \\begin{minipage}[t]{0.32\\hsize}\n \\begin{center}\n \\includegraphics[height=0.45\\linewidth]{d72.eps}\\\\\n $D_{-7}$\n \\end{center}\n 
\\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{d2n1}\n\\end{figure}\n\n\n\n\\begin{proof}\nLet $K'$ be the spine of $K$. Let $H'$ be the handlebody-knot obtained from $K\\# O_{n-1}$ by one crossing change as illustrated in the left of Fig.\\ \\ref{dut}. By \\cite[Proposition\\ 3.1]{tunnel} we see that the tunnel $\\tau$ for $K'$ as illustrated in the right of Fig.\\ \\ref{dut} is an unknotting tunnel. Therefore the genus 2 handlebody-knot represented by the right of Fig.\\ \\ref{dut} is trivial. Since a disk sum of two trivial handlebody-knots is trivial, $H'$ is also trivial.\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[t]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.95\\linewidth]{2briuntn.eps} \\vspace{0.5cm}\\\\\n \\hspace{0.2\\linewidth}$b_1,\\ b_2,\\ \\cdots b_n:\\ 2$-braids\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[t]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.9\\linewidth]{2briunt.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{dut}\n\\end{figure}\n\n\\end{proof}\n\n\n\n\n\\section{PROOFS OF THEOREM\\ \\ref{handcuff} AND THEOREM\\ \\ref{theta}}\nLet $D$ be a diagram of a spatial graph $f(G)$ and let $H$ be a subgraph of $G$. Then the diagram of $f(H)$ that is contained in $D$ is said to be a \\textit{subdiagram} of $D$. For subdiagrams $A,\\ B$ of a diagram $D$, let $c(A)$ be the number of all crossings on $A$ among the crossings of $D$ and let $c(A,\\,B)$ be the number of all crossings between $A$ and $B$. \n\n\\begin{lemma}\\label{self}\nLet $G$ be a trivializable graph and let $f$ be a spatial embedding of $G$. Let $D$ be a diagram of $f(G)$. If $D$ has a self-crossing, then $u(D) \\leq \\dfrac{c(D)-1}{2}$.\n\\end{lemma}\n\\noindent\n\\begin{proof}\nLet $P$ be a self-crossing of $D$. By smoothing $D$ at $P$, we have a diagram $D'$ such that one of the components of $D'$ represents a knot\\ (see Fig.\\ \\ref{selfc}). 
Let $\\gamma_1$ be a component of $D'$ that represents a knot and let $\\gamma_2$ be the other component of $D'$. \n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{selfc1.eps} \\vspace{0.35cm} \\\\\n \\hspace{0.08\\linewidth}$D$\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{selfc2.eps}\\\\\n \\hspace{0.08\\linewidth}$D'$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{selfc}\n\\end{figure}\n\nIf we change some crossings on $\\gamma_1$ so that the part $\\gamma_1$ is over the other component of $D$ and itself unknotted, then we have a spatial embedding that has a diagram $\\gamma_2$. Also we may change some crossings on $\\gamma_1$ so that the part $\\gamma_1$ is under the other component of $D$ and itself unknotted. Note that we can choose these two sets of crossing changes to be complementary on the crossings on $\\gamma_1$. We choose the one that involves no more crossing changes than the other. Thus by changing no more than $\\dfrac{c(D)-c(\\gamma_2)-1}{2}$ crossings of $D$ we have a spatial embedding that has a diagram $\\gamma_2$. The key point here is that we do not need to change the crossing $P$. Since $\\gamma_2$ is also a diagram of a spatial embedding of a trivializable graph, we have $u(\\gamma_2) \\leq \\dfrac{c(\\gamma_2)}{2}$. Therefore we have \n$$u(D) \\leq u(\\gamma_2) + \\dfrac{c(D)-c(\\gamma_2)-1}{2} \\leq \\dfrac{c(\\gamma_2) +(\\ c(D)-c(\\gamma_2)-1\\ )}{2}=\\dfrac{c(D)-1}{2}.$$\n\\end{proof}\n\n\\begin{lemma}\\label{mini}\nLet $G$ be a trivializable planar graph and let $f$ be a spatial embedding of $G$ such that $u(f)=\\dfrac{c(f)}{2}$. Let $D$ be a minimal crossing diagram of $f(G)$. Then $u(D)=\\dfrac{c(D)}{2}$.\n\\end{lemma}\n\n\\begin{proof} It is sufficient to show that $u(D) \\geq \\dfrac{c(D)}{2}$. 
Since $u(f) \\leq u(D)$ and $c(f)=c(D)$, we have\n$$u(D) \\geq u(f)=\\dfrac{c(f)}{2}=\\dfrac{c(D)}{2}.$$\n\\end{proof}\n\n\\begin{lemma}\\label{dhandcuff}\nLet $D$ be a diagram of a spatial embedding of a handcuff-graph such that $u(D)=\\dfrac{c(D)}{2}$. Then $D$ satisfies the following conditions {\\rm:} \\\\\n$(1)\\,$Each edge of $D$ has no self-crossings. \\\\\n$(2)\\,$All crossings of $D$ are crossings between two loops.\\\\\n$(3)\\,$Two loops of $D$ form an alternating diagram or a diagram without crossings.\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma \\ref{self}, $D$ satisfies $(1)$. In the following we show that $D$ satisfies $(2)$ and $(3)$. \n\nLet $\\gamma_1$ and $\\gamma_2$ be the two loops of $D$ and let $e$ be the edge of $D$ that is not $\\gamma_i$\\ ($i=1,\\ 2$). If we change some crossings on $\\gamma_2$ so that the part $\\gamma_2$ is over $D-\\gamma_2$, then we have a diagram of a trivial spatial embedding of $G$, since $\\gamma_i$ is a simple closed curve on $\\mathbb{R}^2$\\ ($i=1,\\ 2$). See for example Fig.\\ \\ref{alc}. Also we may change some crossings on $\\gamma_2$ so that the part $\\gamma_2$ is under $D-\\gamma_2$ and itself unknotted. Note that these two sets of crossing changes are complementary on the crossings on $\\gamma_2$. We choose the one that involves no more crossing changes than the other. Thus by changing no more than $\\dfrac{c(D)-c(\\gamma_1,\\ e)}{2}$ crossings of $D$ we have a trivial diagram, and $u(D) \\leq \\dfrac{c(D)-c(\\gamma_1,\\ e)}{2}$. The key point here is that we do not need to change crossings between $\\gamma_1$ and $e$. Since $u(D) = \\dfrac{c(D)}{2}$ we have $c(\\gamma_1,\\ e)=0$. Similarly we have $c(\\gamma_2,\\ e)=0$. Therefore $D$ satisfies $(2)$. 
\n\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.3\\hsize}\n \\begin{center}\n \\includegraphics[height=0.7\\linewidth]{lemma431.eps}\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.7\\hsize}\n \\begin{center}\n \\includegraphics[height=0.3\\linewidth]{lemma432.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{alc}\n\\end{figure}\n\nSuppose that $\\gamma_1 \\cup \\gamma_2$ is not an alternating diagram. Then we may suppose without loss of generality that there is an arc $\\alpha$ of $\\gamma_1$ disjoint from $e$ such that $\\alpha \\cap \\gamma_2=\\partial \\alpha=\\{c_1,\\ c_2\\}$ and $\\gamma_1$ is over $\\gamma_2$ at $c_1$ and $c_2$. See Fig.\\ \\ref{disarc}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.25\\linewidth]{disarc.eps}\n\\caption{}\n\\label{disarc}\n\\end{figure}\n\nLet $A$ be the set of all crossings of $D$ at which $\\gamma_1$ is under $\\gamma_2$. Let $B=C(D)\\backslash (A \\cup \\{c_1,\\ c_2\\})$. Then by the height function argument first used in \\cite{knotted} we see that changing all crossings in $A$ (resp. $B$) produces a trivial spatial embedding. See Fig. \\ref{althand}.\n\n\\begin{figure}[H]\n\\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.8\\linewidth]{althandover.eps} \\vspace{0.55cm}\n\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.75\\linewidth]{althandunder.eps} \n\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{althand}\n\\end{figure}\n\nTherefore we have\n$$u(D) \\leq \\min \\{|A|,\\ |B|\\} \\leq \\dfrac{c(D)-2}{2}.$$\nThis contradicts the equality $u(D) = \\dfrac{c(D)}{2}$. Thus $D$ satisfies $(3)$ as desired. 
\n\\end{proof}\n\n\n\\noindent\n\\textit{Proof of Theorem\\ \\ref{handcuff}}\\\\\nFirst, we show that if there exists a diagram $D$ of $f(G)$ satisfying $(1)$, $(2)$ and $(3)$, then $u(f) = \\dfrac{c(f)}{2}$. We may suppose that $c(D)>0$. Let $L=l_1 \\cup l_2$ be a $2$-component link represented by two loops of $D$. See for example Fig.\\ \\ref{altlink}. Since the diagram of $L$ consists of two simple closed curves and is alternating, we see that twice the absolute value of the linking number $2|\\,lk(l_1,l_2)\\,|$ is equal to $c(D)$. \n Therefore we have \n$$u(f) \\geq u(L) \\geq |\\,lk(l_1,l_2)\\,| = \\dfrac{c(D)}{2}=\\dfrac{c(f)}{2}$$\nBy Proposition \\ref{tri12} we have $u(f)=\\dfrac{c(f)}{2}$. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.25\\linewidth]{altlink.eps}\n\\caption{}\n\\label{altlink}\n\\end{figure}\n\nLet $f$ be a spatial embedding of $G$ such that $u(f)=\\dfrac{c(f)}{2}$ and let $D$ be a minimal crossing diagram of $f(G)$. By Lemma\\ \\ref{mini} we have $u(D)=\\dfrac{c(D)}{2}$. By Lemma \\ref{dhandcuff} $D$ satisfies $(1)$, $(2)$ and $(3)$ as desired. \\qed\n\n\\begin{lemma}\\label{dtheta}\nLet $G$ be a theta curve. Let $D$ be a diagram of a spatial embedding of $G$ such that $u(D)=\\dfrac{c(D)}{2}$. Then $c(D)=0$.\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma \\ref{self} we may suppose that each edge of $D$ has no self-crossings. Suppose that $c(D)>0$. Then there exists a crossing $c$ on $D$ between two edges. Let $\\tilde{f}:G \\rightarrow \\mathbb{R}^2$ be a regular projection of $G$ where $D$ is obtained from $\\tilde{f}(G)$. Let $v$ and $u$ be two vertices of $G$. Let $G'$ be the graph obtained by adding $2$ vertices $v_1,\\ v_1'$ to $G$ such that $\\tilde{f}(v_1)=\\tilde{f}(v_1')=c$ and $v_1$\\ (resp.\\ $v_1'$) is contained in the over-arc\\ (resp.\\ the under-arc) at $c$. Let $P$ be the path from $v$ to $u$ that contains $v_1$. We fix a spanning tree $T$ of $G'$ that contains $P$\\ (see for example Fig.\\ \\ref{subdthe}). 
Let $h: G' \\rightarrow \\mathbb{R}$ be a continuous function with the following properties: \\\\\n$(1)\\,$For each vertex $t$ of $G'$, $h(t)=-d_{T}(t,\\ v)$. Here $d_{T}(t,\\ v)$ denotes the number of edges of the path in $T$ joining $t$ and $v$.\\\\\n$(2)\\,h|_e$ is injective for each edge $e$ of $G'$.\\\\\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.45\\linewidth]{kino.eps} \\vspace{0.3cm}\\\\\n $D$\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.7\\linewidth]{kinospa.eps} \\vspace{0.3cm}\\\\\n $G'$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{subdthe}\n\\end{figure}\nWe can deform $h$ slightly so that $h(v_1)>h(v_1')$ since $d_{T}(v,v_1)=d_{T}(v,v_1')=1$. Then we give over\/under information to $\\tilde{f}$ to produce a spatial embedding $f : G \\rightarrow \\mathbb{R}^3 = \\mathbb{R}^2 \\times \\mathbb{R}$ such that $p_1 \\circ f=\\tilde{f}$ and $p_2 \\circ f =h$, where $p_1$\\ (resp.\\ $p_2$) denotes the projection of $\\mathbb{R}^3$ to the first factor (resp.\\ the second factor) of $\\mathbb{R}^2 \\times \\mathbb{R}$. Let $\\Pi:\\mathbb{R}^3 \\rightarrow \\mathbb{R}^2$ be a projection defined by $\\Pi(x, y, z) = (x, z)$. We deform $f$ slightly by an ambient isotopy if necessary so that $\\Pi \\circ f$ is a regular projection. Then we can eliminate all crossings of $\\Pi \\circ f$ by eliminating the crossing nearest to $v$ repeatedly (see Fig.\\ \\ref{heithe}). Therefore $f$ is trivial.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.4\\linewidth]{kinohei.eps}\n \\caption{}\n \\label{heithe}\n\\end{figure}\nLet $D'$ be the diagram of $f(G)$ where $D'$ is obtained from $\\tilde{f}(G)$. We note that $D$ and $D'$ are deformed into each other by changing over\/under information of crossing points without changing the over\/under information of $c$. 
Let $D''$ be the diagram that is obtained from $D'$ by changing over\/under information of all crossing points with the exception of $c$\\ (see for example Fig.\\ \\ref{dandd}). Let $h':G' \\rightarrow \\mathbb{R}$ be a continuous function such that $h'=-h$. We can deform $h'$ slightly so that $h'(v_1)>h'(v_1')$. Then $D''$ is the diagram of the spatial embedding $f' : G \\rightarrow \\mathbb{R}^3 = \\mathbb{R}^2 \\times \\mathbb{R}$ such that $p_1 \\circ f'=\\tilde{f}$ and $p_2 \\circ f'=h'$. As in the case of $f$, the embedding $f'$ is also trivial.\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.45\\linewidth]{kinod1.eps} \\vspace{0.3cm}\\\\\n $D'$\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.45\\linewidth]{kinod2.eps} \\vspace{0.3cm}\\\\\n $D''$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{dandd}\n\\end{figure}\n\nLet $A$ be a subset of $C(D)$ such that changing all crossings in $A$ turns $D$ to $D'$. We note that changing all crossings in $(\\ C(D)-\\{c\\}\\ )-A$ turns $D$ to $D''$. Therefore we have\n$$u(D) \\leq \\min \\{|A|,|(\\ C(D)-\\{c\\}\\ )-A|\\} \\leq \\dfrac{c(D)-1}{2}$$ \n\nThis contradicts the equality $u(D) = \\dfrac{c(D)}{2}$. Therefore we have $c(D)=0$ and $D$ is a diagram of a trivial theta curve. \n\\end{proof}\n\n\n\n\n\\noindent\n\\textit{Proof of Theorem\\ \\ref{theta}}\\\\\nLet $f$ be a spatial embedding of $G$ such that $u(f)=\\dfrac{c(f)}{2}$ and let $D$ be a minimal crossing diagram of $f(G)$. By Lemma\\ \\ref{mini} we have $u(D)=\\dfrac{c(D)}{2}$. By Lemma \\ref{dtheta} we see that $f$ is trivial.\\qed\n\n\\begin{remark}\\label{ge2} {\\rm We can prove Theorem\\ \\ref{12c1} in the case of genus 2 by observing Lemma \\ref{dhandcuff} and Lemma \\ref{dtheta}. Let $D$ be a minimal crossing diagram of a non-trivial genus 2 handlebody-knot $H$. 
Then $D$ is also a diagram of a spatial embedding of a handcuff-graph or a theta curve. \n\nIn the case that $D$ is a diagram of a spatial handcuff-graph, by Lemma \\ref{dhandcuff} all crossings of $D$ are between two loops or $u(D) \\leq \\dfrac{c(D)-1}{2}$. In the case that all crossings of $D$ are between two loops, by one IH-move on the edge that is not a loop we have a diagram $D'$ of $H$ such that $c(D')=c(D)=c(H)$ and $D'$ is also a diagram of a spatial theta curve (see Fig.\\ \\ref{wh}). By Lemma \\ref{dtheta} we have $u(D') \\leq \\dfrac{c(D')-1}{2}$. \n\nIn the case that $D$ is a diagram of a spatial theta curve, by Lemma\\ \\ref{dtheta} we have $u(D) \\leq \\dfrac{c(D)-1}{2}$. In both cases we have $u(H) \\leq \\dfrac{c(H)-1}{2}$.}\n\\end{remark}\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.35\\linewidth]{nselfhand.eps}\\vspace{0.5cm} \\\\\n $D$\n \\end{center}\n \n \\end{minipage}\n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.35\\linewidth]{nselftheta.eps}\\vspace{0.5cm} \\\\\n $D'$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{wh}\n\\end{figure}\n\n\n\\section{PROOFS OF THEOREM\\ \\ref{12c1} AND THEOREM\\ \\ref{12c2}}\nIn this section we prove Theorem \\ref{12c1} and Theorem \\ref{12c2}. In the following we give an inequality between the unknotting number and the crossing number by an observation on subdivided graphs. \n\nLet $\\tilde{f}:G \\rightarrow \\mathbb{R}^2$ be a regular projection of a graph $G$. Let $c_1,\\ c_2,\\ \\cdots,\\ c_k$ be crossing points of $\\tilde{f}(G)$. A \\textit{subdivided graph} of $G$ at $\\{c_1,\\ c_2,\\ \\cdots,\\ c_k\\}$ is a graph obtained by adding $2k$ vertices $v_1,\\ v_1',\\ v_2,\\ v_2',\\ \\cdots ,v_k,\\ v_k'$ to $G$ such that $\\tilde{f}(v_i)=\\tilde{f}(v_i')=c_i$ and $v_i$\\ (resp.\\ $v_i'$) is contained in the over-arc\\ (resp.\\ the under-arc) at $c_i$. 
Then we say that $v_i$\\ (resp.\\ $v_i')$ is an \\textit{over-vertex}\\ (resp.\\ \\textit{under-vertex}) at $c_i$. Let $G'$ be a subdivided graph of $G$ and let $T$ be a spanning tree of $G'$. For any two vertices $v$ and $u$ of $G'$, let $d_{T}(v,u)$ be the number of edges of the path in $T$ joining $v$ and $u$.\n\n\\begin{lemma}\\label{31}\nLet $D$ be a diagram of a nontrivial handlebody-knot $H$. Let $\\tilde{f}:G \\rightarrow \\mathbb{R}^2$ be a regular projection of a connected trivalent graph $G$ where $D$ is obtained from $\\tilde{f}(G)$. Let $c_1,\\ c_2,\\ \\cdots,\\ c_k$ be crossing points of $\\tilde{f}(G)$. Let $G'$ be the subdivided graph of $G$ at $\\{c_1,\\ c_2,\\ \\cdots,\\ c_k\\}$. Let $v_1,\\ v_1',\\ v_2,\\ v_2',\\ \\cdots ,v_k,\\ v_k'$ be vertices of $G'$ such that $v_i$\\ (resp.\\ $v_i'$) is an over-vertex\\ (resp.\\ under-vertex) at $c_i\\ (i=1,2,\\cdots ,k)$. If there exists a vertex $v$ of $G'$ and a spanning tree $T$ of $G'$ such that $d_{T}(v,v_i)=d_{T}(v,v_i')$ for all $i \\in \\{1,2,\\cdots,k\\}$, then $u(D) \\leq \\dfrac{c(D)-k}{2}$. \n\\end{lemma}\n\\begin{proof}\nThe proof is analogous to the proof of \\cite[Proposition\\ 3.2]{forbidden}. We fix a vertex $v$ of $G'$ and a spanning tree $T$ of $G'$ such that $d_{T}(v,v_i)=d_{T}(v,v_i')$ for all $i \\in \\{1,2,\\cdots,k\\}$\\ (see Fig.\\ \\ref{311}). Let $h:G' \\rightarrow \\mathbb{R}$ be a continuous function with the following properties: \n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{41c2.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{41spa.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{311}\n\\end{figure}\n\\noindent\n$(1)\\,$For each vertex $u$ of $T$, $h|_T(u)=-d_{T}(v,u)$.\\\\\n$(2)\\,h|_e$ is injective for each edge $e$ of $T$.\\\\\n$(3)\\,$Each edge of $G'-T$ has exactly one minimum point of $h$. 
\\\nWe can deform $h$ slightly so that $h(v_i)>h(v_i')\\ (i=1,2,\\cdots,k)$ since $d_{T}(v,v_i)=d_{T}(v,v_i')\\ (i=1,2,\\cdots,k)$. Then we give over\/under information to $\\tilde{f}$ to produce a spatial embedding $f : G \\rightarrow \\mathbb{R}^3 = \\mathbb{R}^2 \\times \\mathbb{R}$ such that $p_1 \\circ f=\\tilde{f}$ and $p_2 \\circ f =h$, where $p_1$\\ (resp.\\ $p_2$) denotes the projection of $\\mathbb{R}^3$ to the first factor (resp.\\ the second factor) of $\\mathbb{R}^2 \\times \\mathbb{R}$. \n\n\nLet $D'$ be the diagram of $f(G)$ where $D'$ is obtained from $\\tilde{f}(G)$. We note that $D$ and $D'$ are deformed into each other by changing over\/under information of crossing points without changing the over\/under information of $c_1,\\ c_2,\\ \\cdots,\\ c_k$\\ (see for example Fig.\\ \\ref{312}). Since by contracting the spatial edges of $f(T)$ we obtain a trivial bouquet as in Fig.\\ \\ref{313}, $D'$ is a diagram of a trivial handlebody-knot. \n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.6\\linewidth]{41d1hei.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{41d1.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{312}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{41ec.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{41braid.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{313}\n\\end{figure}\n\n\nLet $D''$ be the diagram that is obtained from $D'$ by changing over\/under information of all crossing points with the exception of $c_1,\\ c_2,\\ \\cdots,\\ c_k$. 
Let $h':G' \\rightarrow \\mathbb{R}$ be a continuous function such that $h'=-h$. We can deform $h'$ slightly so that $h'(v_i)>h'(v_i')\\ (i=1,2,\\cdots,k)$\\ (see for example Fig.\\ \\ref{314}). Then $D''$ is the diagram of the spatial embedding $f' : G \\rightarrow \\mathbb{R}^3 = \\mathbb{R}^2 \\times \\mathbb{R}$ such that $p_1 \\circ f'=\\tilde{f}$ and $p_2 \\circ f'=h'$. Thus we obtain a trivial bouquet by contracting the spatial edges of $f'(T)$, and $D''$ is a diagram of a trivial handlebody-knot. \n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{41d2.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.6\\linewidth]{41d2hei.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{314}\n\\end{figure}\n\nLet $A$ be a subset of $C(D)$ such that changing all crossings in $A$ turns $D$ to $D'$. We note that changing all crossings in $(\\ C(D)-\\{c_1,\\ c_2,\\ \\cdots,\\ c_k\\}\\ )-A$ turns $D$ to $D''$. Therefore we have\n$$u(D) \\leq \\min \\{|A|,|(\\ C(D)-\\{c_1,\\ c_2,\\ \\cdots,\\ c_k\\}\\ )-A|\\} \\leq \\dfrac{c(D)-k}{2}$$ \n\\end{proof}\n\\noindent\n\\textit{Proof of Theorem \\ref{12c1}}\\\\\nLet $D$ be a minimal crossing diagram of a nontrivial handlebody-knot $H$. Let $\\tilde{f}:G \\rightarrow \\mathbb{R}^2$ be a regular projection of a connected trivalent graph $G$ where $D$ is obtained from $\\tilde{f}(G)$. Let $c_1$ be a crossing point of $\\tilde{f}(G)$. Let $G'$ be the subdivided graph of $G$ at $\\{c_1\\}$. Let $v_1$\\ (resp.\\ $v_1'$) be a vertex of $G'$ such that $v_1$\\ (resp.\\ $v_1'$) is the over-vertex\\ (resp.\\ the under-vertex) at $c_1$. Let $T$ be a spanning tree of $G'$ containing $v_1$ and $v_1'$. 
\\\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{52d.eps}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{theta52.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{141}\n\\end{figure}\nBy subdividing $T$ if necessary, we can choose a vertex $v$ of $T$ such that $d_{T}(v,v_1)=d_{T}(v,v_1')$ since there exists a path in $T$ joining $v_1$ and $v_1'$\\ (see for example Fig.\\ \\ref{141}). Then by Lemma \\ref{31} we have $u(D) \\leq \\dfrac{c(D)-1}{2}$. Since $u(H) \\leq u(D)$ and $c(D)=c(H)$ we have $u(H) \\leq \\dfrac{c(H)-1}{2}$. \\qed\n\n\\begin{lemma}\\label{d12c2}\nLet $D$ be a minimal crossing diagram of a nontrivial handlebody-knot $H$ that satisfies the equality $u(D)=\\dfrac{c(D)-1}{2}$. Let $\\gamma$ be a cycle of $D$ that has at least one crossing of $D$. Then the following $(1)$ and $(2)$ hold.\\\\\n$(1)\\,$All crossings of $D$ are self-crossings of $\\gamma$.\\\\\n$(2)\\,$There exists an odd number $p \\neq \\pm1$ such that $\\gamma$ is a reduced alternating diagram of a $(2,\\ p)$-torus knot.\n\\end{lemma}\n\\begin{proof}\nSuppose that $\\gamma$ has exactly one crossing of $D$. If $\\gamma$ itself is a simple closed curve on $\\mathbb{R}^2$, then we have a diagram $D'$ of $H$ as illustrated in Fig.\\ \\ref{cl1} such that $c(D')=c(D)-1$. If $\\gamma$ is not a simple closed curve on $\\mathbb{R}^2$, then by a similar deformation we also have a diagram $D'$ of $H$ with $c(D')=c(D)-1$. In either case this contradicts the assumption that $D$ is a minimal crossing diagram. Thus $\\gamma$ has at least two crossings of $D$. 
\\\\\n\\begin{figure}[h]\n \\begin{tabular}{c}\n \n \\begin{minipage}{0.3\\hsize}\n \\begin{center}\n \\includegraphics[width=0.8\\linewidth]{cl11.eps}\\\\\n $D$\n \\end{center}\n \\end{minipage}\n\n \n \\begin{minipage}{0.3\\hsize}\n \\begin{center}\n \\includegraphics[width=0.8\\linewidth]{cl131.eps}\\\\\n \\ \\ \n \\end{center}\n \\end{minipage}\n\n \n \\begin{minipage}{0.3\\hsize}\n \\begin{center}\n \\includegraphics[width=0.8\\linewidth]{cl14.eps}\\\\\n $D'$\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{cl1}\n\\end{figure}\\\\\nLet $\\tilde{f}$ be a regular projection of a trivalent graph $G$ where $D$ is obtained from $\\tilde{f}(G)$. First, we show that if $(1)$ does not hold, then $u(D)\\leq \\dfrac{c(D)-2}{2}$. We will show this claim step by step as follows.\\\\\n\\ \\\\\n\\textbf{Subclaim 1.}\\ \\textit{If one of the crossings on $\\gamma$, say $c_1$, is a crossing between $\\gamma$ and $D-\\gamma$ and another crossing on $\\gamma$, say $c_2$, is a self-crossing of $\\gamma$, then $u(D) \\leq \\dfrac{c(D)-2}{2}$.}\n\\begin{proof}\nLet $G'$ be the subdivided graph of $G$ at $\\{c_1,\\ c_2\\}$. Let $v_i$\\ (resp.\\ $v_i')$ be the over-vertex\\ (resp.\\ under-vertex) at $c_i$\\ $(i=1,\\ 2)$. Then $G'$ is the graph as illustrated in Fig.\\ \\ref{slmu}\\ $(a)$ or $(b)$. 
\n\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.4\\linewidth]{slmu1.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.5\\hsize}\n \\begin{center}\n \\includegraphics[height=0.4\\linewidth]{slmu2.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{slmu}\n\\end{figure}\n\n\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[width=0.2\\linewidth]{slmu1a.eps}\n \\end{center} \n \\caption{}\n \\label{slmua}\n\\end{figure}\n\nBy subdividing if necessary, we can choose a spanning tree $T$ of $G'$ and a vertex $v$ of $T$ such that $d_T(v,v_i)=d_T(v,v_i')\\ (i=1,2)$. A choice of $T$ and $v$ for the case of Fig.\\ \\ref{slmu}\\ $(a)$ is illustrated in Fig.\\ \\ref{slmua}. By Lemma \\ref{31} we have $u(D)\\leq \\dfrac{c(D)-2}{2}$. \n\\end{proof}\n\\noindent\n\\textbf{Subclaim 2.}\\ \\textit{If two of the crossings on $\\gamma$, say $c_1$ and $c_2$, are crossings between $\\gamma$ and $D-\\gamma$, then $u(D) \\leq \\dfrac{c(D)-2}{2}$.} \n\\begin{proof}\nLet $G'$ be the subdivided graph of $G$ at $\\{c_1,\\ c_2\\}$. Let $v_i$\\ (resp.\\ $v_i')$ be the over-vertex\\ (resp.\\ under-vertex) at $c_i$\\ $(i=1,\\ 2)$. Then $G'$ is one of the graphs as illustrated in Fig.\\ \\ref{mumu}. 
\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.8\\linewidth]{mumu3.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.8\\linewidth]{mumu2.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.8\\linewidth]{mumu1.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.8\\linewidth]{mumu4.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \n \\end{tabular}\n \n \\caption{}\n \\label{mumu}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[width=0.25\\linewidth]{mumu31.eps}\n \\end{center} \n \\caption{}\n \\label{mumua}\n\\end{figure}\n\nBy subdividing if necessary, we can choose a spanning tree $T$ of $G'$ containing $v_1,\\,v_1',\\,v_2,\\,v_2'$ and a vertex $v$ of $T$ such that $d_T(v,v_i)=d_T(v,v_i')\\ (i=1,2)$. A choice of $T$ and $v$ for the case of Fig.\\ \\ref{mumu}\\ $(a)$ is illustrated in Fig.\\ \\ref{mumua}. By Lemma \\ref{31} we have $u(D)\\leq \\dfrac{c(D)-2}{2}$. \n\\end{proof}\n\\noindent\n\\textbf{Subclaim 3.}\\ \\textit{If there exists a self-crossing of $D-\\gamma$, say $c_1$, then $u(D) \\leq \\dfrac{c(D)-2}{2}$.} \n\\begin{proof}\nBy Subclaim 2 we may assume that $\\gamma$ has a self-crossing, say $c_2$. Let $G'$ be the subdivided graph of $G$ at $\\{c_1,\\ c_2\\}$. Let $v_i$\\ (resp.\\ $v_i')$ be the over-vertex\\ (resp.\\ under-vertex) at $c_i$\\ $(i=1,\\ 2)$. Then $G'$ is one of the graphs as illustrated in Fig.\\ \\ref{sese}. 
\n\n\\begin{figure}[H]\n \\begin{tabular}{c}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.42\\linewidth]{sese4.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.42\\linewidth]{sese1.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.42\\linewidth]{sese2.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \\begin{minipage}{0.24\\hsize}\n \\begin{center}\n \\includegraphics[height=0.42\\linewidth]{sese3.eps}\n \\end{center}\n \\subcaption{}\n \\end{minipage}\n \n \\end{tabular}\n \\caption{}\n \\label{sese}\n\\end{figure}\n\n\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[width=0.25\\linewidth]{sese41.eps}\n \\end{center} \n \\caption{}\n \\label{sesea}\n\\end{figure}\n\nBy subdividing if necessary, we can choose a spanning tree $T$ of $G'$ containing $v_1,\\,v_1',\\,v_2,\\,v_2'$ and a vertex $v$ of $T$ such that $d_T(v,v_i)=d_T(v,v_i')\\ (i=1,2)$. A choice of $T$ and $v$ for the case of Fig.\\ \\ref{sese}\\ $(a)$ is illustrated in Fig.\\ \\ref{sesea}. By Lemma \\ref{31} we have $u(D)\\leq \\dfrac{c(D)-2}{2}$. \n\\end{proof}\n \nFrom the above we see that $\\gamma$ satisfies $(1)$. \\\\\n\\ \\\\\n\\textbf{Subclaim 4.}\\ \\textit{If $\\gamma$ is not obtained from a standard projection of a $(2,\\ p)$-torus knot for any odd number $p>1$, then $u(D) \\leq \\dfrac{c(D)-2}{2}$.}\n\n\\begin{proof}\n\nLet $G'$ be the subdivided graph of $G$ at $C(D)$ and let $\\Gamma$ be a cycle of $G'$ such that $\\gamma$ is obtained from $\\tilde{f}(\\Gamma)$. Note that if $\\tilde{f}(\\Gamma)$ is a standard projection of a $(2,\\ p)$-torus knot for some odd number $p\\neq \\pm 1$\\ (the case $p=5$ is illustrated in the left of Fig.\\ \\ref{pro25}), then no pair of crossings on $\\gamma$ is parallel. 
Namely $\\Gamma$ is as illustrated in the right of Fig.\\ \\ref{pro25}. It follows from \\cite[Theorem\\ 1]{proje} that the converse is also true (see also \\cite[Proof of Theorem\\ 1.11]{pseudo}). \n\n\n\\begin{figure}[h]\n \\begin{tabular}{c}\n \n \n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{pro25.eps}\n \\end{center}\n \\end{minipage}\n \n \n \\begin{minipage}[b]{0.5\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{pro25cd.eps}\n \\end{center}\n \\end{minipage}\n \n \\end{tabular}\n \\caption{}\n \\label{pro25}\n\\end{figure}\n\n\nTherefore there are two crossings $c_1$, $c_2$ of $\\gamma$ such that $c_1$ and $c_2$ are parallel, namely $\\Gamma$ is as illustrated in the left of Fig.\\ \\ref{para}. Let $v_i$\\ (resp.\\ $v_i')$ be the over-vertex\\ (resp.\\ under-vertex) at $c_i$\\ $(i=1,\\ 2)$. By subdividing if necessary, we can choose a spanning tree $T$ of $G'$ and a vertex $v$ of $T$ such that $d_T(v,v_i)=d_T(v,v_i')\\ (i=1,2)$\\ (see the right of Fig.\\ \\ref{para}). By Lemma \\ref{31} we have $u(D)\\leq \\dfrac{c(D)-2}{2}$. \n\\begin{figure}[h]\n \\begin{tabular}{c}\n \n \\begin{minipage}[t]{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{paracd.eps}\n \\end{center}\n \\end{minipage}\n\n \n \\begin{minipage}[t]{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.4\\linewidth]{paraspa.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{para}\n\\end{figure}\n\n\\end{proof}\n\nFinally, we show that if $u(D)=\\dfrac{c(D)-1}{2}$, then $u(\\gamma)=u(D)$ and $\\gamma$ is a reduced alternating diagram of a $(2,\\ p)$-torus knot. \n\nLet $\\Gamma$ be a cycle of $G$ such that $\\gamma$ is obtained from $\\tilde{f}(\\Gamma)$. From the above $\\tilde{f}(\\Gamma)$ is a standard projection of a $(2,\\ p)$-torus knot\\ (the case $p=5$ is illustrated in the left of Fig.\\ \\ref{pro25}) and $c(\\gamma)=c(D)=p$. 
If we can join two components of $\\tilde{f}(\\Gamma) \\backslash C(\\tilde{f}(G))$ by a path $P$ of $\\tilde{f}(G)$ as illustrated in the left of Fig.\\ \\ref{pro2}, then there exists a cycle $\\gamma'$ of $D$ that has a crossing between $\\gamma$ and $D-\\gamma$ as illustrated in the right of Fig.\\ \\ref{pro2}. This contradicts Lemma\\ \\ref{d12c2}\\ $(1)$. \n\n\\begin{figure}[h]\n \\begin{tabular}{c}\n \n \\begin{minipage}[t]{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{pro25p.eps}\n \\end{center}\n \\end{minipage}\n\n \n \\begin{minipage}[t]{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=0.5\\linewidth]{pro25p2.eps}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{}\n \\label{pro2}\n\\end{figure}\n\n\nTherefore we may assume that $\\tilde{f}(G)$ has no paths as illustrated in the left of Fig.\\ \\ref{pro2}, namely $\\tilde{f}(G)$ is a projection as illustrated in Fig.\\ \\ref{pro3}. Then by changing over\/under information at $u(\\gamma)$ crossings on $D$ we can obtain a diagram of a trivial handlebody-knot. Since $c(\\gamma)=c(D),\\ u(D)=\\dfrac{c(D)-1}{2}$ and $u(D) \\leq u(\\gamma)$, we have $u(\\gamma)=\\dfrac{c(\\gamma)-1}{2}$. By Theorem \\ref{12c2k}, $\\gamma$ is a reduced alternating diagram of a $(2,\\ p)$-torus knot for some odd number $p \\neq \\pm 1$. \n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.25\\linewidth]{pro25g.eps}\n \\caption{}\n \\label{pro3}\n\\end{figure}\n\n\\end{proof}\n\n\\noindent\n\\textit{Proof of Theorem \\ref{12c2}}\\\\\nLet $H$ be a nontrivial handlebody-knot that satisfies the equality $u(H) = \\dfrac{c(H) -1}{2}$. Let $D$ be a minimal crossing diagram of $H$. Since $u(H) \\leq u(D)$ and $\\dfrac{c(D)-1}{2} =\\dfrac{c(H)-1}{2}$ we have $u(D) \\geq \\dfrac{c(D)-1}{2}$. Thus by the proof of Theorem\\ \\ref{12c1} we have $u(D) = \\dfrac{c(D)-1}{2}$. 
Then by Lemma\\ \\ref{d12c2} we see that $H$ is a handlebody-knot represented by one of the diagrams illustrated in Fig.\\ \\ref{d2n1}. We note that the unknotting number of the handlebody-knot represented by $D_{2n-1}$\\ $(n \\ne 0,1)$ is one by Proposition \\ref{2br}. Therefore $H$ is a handlebody-knot represented by $D_3$ or $D_{-3}$ as desired. \\qed \n\n\n\n\\section*{Acknowledgements}\nThe author would like to thank Professor Tomo Murao for his helpful comments. He is particularly grateful to Professor Kouki Taniyama for invaluable advice and\nhis suggestions.\n\n\\bibliographystyle{myplain2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction} \n\nThe study of turbulence in a collisionless plasma is an extremely challenging problem because it is a strongly nonlinear process involving many decades of scales that extend from fluid magnetohydrodynamic (MHD) scales to ion kinetic and electron kinetic scales, which are associated with different physical regimes. No general theory is currently capable of describing the full turbulent cascade process in a plasma. On the other hand, different reduced models have been formulated to describe the properties of the turbulent system in a limited range of spatial and temporal scales and in special physical conditions such as in the presence of a strong magnetic field that makes the plasma anisotropic (see, e.g., \\citet{00,01,02} and references therein). Thus, the properties of plasma turbulence can be studied in detail only by means of numerical simulations, within the limits of the currently available computational resources \\citep{03a,03,04}. Numerical studies are inspired and guided by in situ satellite measurements taken in the solar wind and in the terrestrial magnetosphere. Space plasmas represent natural laboratories for the study of plasma turbulence through extremely accurate spatial and temporal satellite data \\citep{06}. 
It is worth noting that today, space is the only environment where measurements down to electron scales are accessible, as in the case of the Magnetospheric Multiscale (MMS) space mission \\citep{07}, and even ion-scale measurements are far more accurate than in the laboratory.\n\nSolar wind studies focusing on the formation of the turbulent cascade at MHD scales have unambiguously demonstrated the fundamental role of low-frequency Alfv\u00e9n waves in nonlinearly building up the turbulent spectrum \\citep{02}. On the other hand, the properties of the turbulent cascade at kinetic scale are not yet fully understood. Energy transfers at sub-ion scales are thought to be driven by nonlinear interaction between relatively high-frequency modes such as kinetic Alfv\u00e9n waves and whistler waves \\citep{08}. Recent studies instead suggest that the development of turbulence at small scales is closely related to magnetic reconnection phenomena developing inside the current sheets that are spontaneously generated by the turbulent MHD dynamics, which create small-scale coherent structures where energy is thought to be dissipated \\citep{04,09,09a}. Understanding the nature of kinetic-scale turbulence in plasmas is therefore an open problem.\n\nObservations of reconnection driven by turbulence have been reported in space plasmas \\citep{10a,10b,10c}, and recently, satellite measurements of the MMS mission in the turbulent magnetosheath of Earth have given evidence of unusual reconnection events driven only by electrons, while ions were found to be decoupled from the magnetic field \\citep{10}. In particular, satellite data show electron-scale current sheets in which divergent bidirectional electron jets were not accompanied by any ion outflow. This situation is quite different from the standard reconnection picture, in which an electron-scale diffusion region is embedded within a wider ion-scale current sheet. 
For these reasons, these new phenomena were dubbed \"electron-only reconnection events\" (e-rec from now on). This discovery stimulated great interest first of all because it is not trivial to determine how electron-scale current sheets undergoing e-rec may form in a large-scale turbulent environment. For instance, in the terrestrial magnetosheath, energy is typically first transferred in a continuous way from large MHD scales down to ion kinetic scales (or directly injected by reconnection at ion kinetic scales) and finally to the electron kinetic scale. A fundamental question to answer is therefore how e-rec can be triggered by the turbulent motion of a plasma. This problem has recently been addressed by Califano et al. \\citep{11}, who showed using 2D-3V dimensional Eulerian hybrid Vlasov-Maxwell simulations \\citep{12a,12} that if the scale of injection of energy in a turbulent plasma is close to the ion kinetic scale, ions decouple from the magnetic field and reconnection processes taking place in the system are driven exclusively by electrons, showing the same features of the e-rec events detected by MMS. The transition from standard ion-coupled reconnection to e-rec has recently been studied in detail by Pyakurel et al. \\citep{12b} using 2D-3V dimensional particle-in-cell simulations of laminar reconnection with conditions appropriate for the magnetosheath. By gradually increasing the size of the simulation box from a few ion inertial lengths to several tens of ion inertial lengths, they observed a smooth transition from the e-rec regime, where ions are decoupled from the reconnection dynamics, to the more familiar ion-coupled reconnection. 
\n\nAnother important aspect concerning the relationship between e-rec and turbulence is to understand whether and how this new reconnection process in turn affects the development of the turbulent energy cascade and its statistical properties, and if there are any differences with respect to the turbulence associated with standard reconnection. In this context, the magnetosheath data collected by the MMS satellites were recently analyzed by Stawarz et al. \\citep{12c}, who showed that the statistical distribution of the turbulent magnetic field fluctuations associated with e-rec and their spectral properties are analogous to those observed in other turbulent plasmas, such as the solar wind, and in numerical simulations of plasma turbulence. \n\nIn this paper we present a study of the statistical properties of fluctuations developing in a simulation of freely decaying plasma turbulence in which e-rec occurs. The results obtained from this simulation are then compared to those of a different simulation of plasma turbulence where standard reconnection takes place. We aim at finding possible differences between the statistical features of these two turbulent systems by taking advantage of the different dynamics of the ions associated with the reconnection structures in the two simulations. In particular, we investigate if there is any specific signature of e-rec in the turbulence statistics. Our study is based on the analysis of the structure functions (hereafter, SFs) of turbulent fields. SFs have been used extensively to analyze numerical simulations \\citep{12d,12d2} and observational data \\citep{12e}, showing that the turbulent magnetic field undergoes a transition from an intermittent dynamics to a self-similar one at sub-ion scales \\citep{12d,12e}. Here we extend the SFs analysis to ion velocity fluctuations as well in order to characterize the behavior of this species, which has a very different role in the reconnection dynamics of the two simulations. 
Our main finding is that the turbulent fluctuations associated with e-rec show the same statistical properties as the turbulent fluctuations associated with standard ion-coupled reconnection. The structure of the turbulent cascade is also examined. In particular, the properties of the magnetic field dissipation range of a collisionless turbulent plasma are discussed, and we claim that only electrons contribute to its formation.\n\nThe paper is structured as follows: the numerical model implemented in our simulations is discussed in section 2. In section 3 we describe the specific setup adopted for the two simulations considered here, which are the same as were analyzed by Califano et al. (2020). The method of analysis based on the study of SFs is introduced in section 4, and our results are presented in section 5. Our conclusions are finally discussed in the last section.\n\n\\section{Numerical Model} \n\nThe two simulations analyzed in this paper were performed using an Eulerian hybrid Vlasov-Maxwell (HVM) 2D-3V dimensional code that advances the Vlasov equation for ions in time \\citep{12a}, coupled with an isothermal fluid model with finite mass for the electrons \\citep{12}. 
The electron response is described by the generalized Ohm law that includes electron inertia terms, allowing the complete decoupling of the magnetic field at electron scales \\citep{12},\\\\\n\\begin{gather}\n\\textbf{E}-\\frac{d^2_e}{n}\\nabla^2\\textbf{E}=\\frac{1}{n}(\\textbf{J}\\times\\textbf{B})-(\\textbf{u}\\times\\textbf{B})-\\frac{1}{n}\\nabla P_e + \\notag \\\\\n\\\\\n+\\frac{d^2_e}{n} \\nabla \\cdot \\left(\\textbf{u}\\textbf{J}+\\textbf{J}\\textbf{u}- \\frac{\\textbf{J}\\textbf{J}}{n} \\right), \\notag\n\\end{gather}\n\\\\where $\\textbf{E}$ and $\\textbf{B}$ are the electric and magnetic fields, respectively, $\\textbf{J}\\!=\\!\\nabla \\times \\textbf{B}$ is the current density (we neglect the displacement current), $\\textbf{u}$ is the ion fluid velocity, $P_e\\!=\\!nT_e$ is the electron isothermal pressure, and $d_e$ is the electron inertial length. Furthermore, quasi-neutrality is assumed, so that ion and electron densities are equal, $n_e\\!=\\!n_i\\!=\\!n$. Finally, the evolution of the magnetic field is described by the Faraday equation. All equations were normalized and transformed into dimensionless units using the ion mass $m_i$, charge $+e$, inertial length $d_i$, and cyclotron frequency $\\Omega_i$ (see \\citet{12}). 
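For uniform density ($n \simeq 1$), the electron-inertia operator on the left-hand side of this Ohm law is diagonal in Fourier space, so on a periodic grid each field component can be inverted spectrally, with the Fourier coefficients divided by $(1 + d_e^2 k^2)$. The following Python sketch illustrates this inversion under that simplifying assumption (illustrative grid and field, not the actual HVM solver, which retains the $1/n$ factor):

```python
import numpy as np

# Minimal sketch: invert (1 - d_e^2 laplacian) E = RHS spectrally on a 2D
# periodic grid, assuming uniform density n = 1 (a simplification; the
# actual HVM code keeps the 1/n factor).
de = 1.0 / 12.0          # electron inertial length for m_i/m_e = 144
N, L = 256, 20 * np.pi   # grid points and box size as quoted in the paper

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

def invert_inertia_operator(rhs):
    """Solve (1 - de^2 laplacian) E = rhs for E on the periodic grid."""
    return np.fft.ifft2(np.fft.fft2(rhs) / (1.0 + de**2 * k2)).real

# Round-trip check: apply the operator to a smooth field, then invert it.
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
E = np.sin(2 * np.pi * X / L) * np.cos(4 * np.pi * Y / L)
lap = np.fft.ifft2(-k2 * np.fft.fft2(E)).real
rhs = E - de**2 * lap
E_rec = invert_inertia_operator(rhs)
print(np.max(np.abs(E_rec - E)))  # machine-precision residual
```

The spectral division is exact for band-limited fields, which is why a pseudospectral treatment of the electron inertia term is natural in codes of this kind.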
For the sake of numerical stability, a numerical filter that smooths out the electromagnetic fields at high wave numbers was used \\citep{13}.\n\n\\section{Simulations Setup}\n\nWe report two simulations that were identical in every respect except for the spectrum of modes that was used to initially drive turbulence: the first simulation was initialized with ion-scale fluctuations so that e-rec can take place, with ions that do not participate significantly, whereas the second simulation has only large-scale perturbations and reconnection occurs in the usual ion-coupled regime.\n\nThe equations of the HVM model were integrated on a 2D-3V domain (bidimensional in real space and tridimensional in velocity space). In both simulations we took a square spatial domain of size $L\\!=\\!20 \\pi d_i$ covered by a uniform grid consisting of $1024^2$ mesh points, while the velocity domain was cubic with sides spanning from $-5v_{th,i}$ to $+5v_{th,i}$ in each direction (where $v_{th,i}$ is the ion thermal velocity) and sampled by a uniform grid consisting of $51^3$ mesh points. The ion-to-electron mass ratio was $m_i\/m_e\\!=\\!144,$ which implies $d_i\/d_e\\!=\\!\\sqrt[]{m_i\/m_e}\\!=\\!12$, the electron temperature was set to $T_e\\!=\\!0.5,$ and the plasma beta was $\\beta\\!=\\!1$, corresponding to $v_{th,i}\\!=\\!\\sqrt[]{\\beta\/2}\\!=\\!\\sqrt[]{1\/2}$ (in Alfv\u00e9n speed units). Both simulations were initialized with an isotropic Maxwellian distribution for ions and a homogeneous out-of-plane guide field $\\textbf{B}_0$ along the $z$-axis. Turbulence was triggered by adding to the guide field some large-scale, random-phase, isotropic magnetic field sinusoidal perturbations $\\delta \\textbf{B}$. 
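The derived normalized parameters quoted above follow directly from the chosen mass ratio and plasma beta; as a trivial cross-check (a sketch, not part of the HVM code):

```python
import numpy as np

# Cross-check of the derived normalized parameters quoted in the setup.
m_ratio = 144.0               # m_i / m_e
beta = 1.0                    # plasma beta
di_over_de = np.sqrt(m_ratio)    # d_i / d_e
v_thi = np.sqrt(beta / 2.0)      # ion thermal speed in Alfven-speed units

print(di_over_de)   # -> 12.0
print(v_thi)        # -> 0.7071067811865476, i.e. sqrt(1/2)
```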
For the first simulation (hereafter, sim.1), we took perturbations with wavenumber $k$ in the range $0.1 \\leqslant k d_i \\leqslant 0.6$, mean amplitude $\\left| \\delta\\textbf{B} \\right|_{rms}\/B_0\\!\\simeq\\!0.2,$ and maximum amplitude $\\left| \\delta\\textbf{B} \\right|_{max}\/B_0\\!\\simeq\\!0.5$. The scales of the largest wavenumbers of these perturbations were close to ion kinetic scales in order for the ions to be nearly decoupled from the magnetic field dynamics from the beginning of the simulation and therefore to drive a turbulent environment in which e-rec occurs \\citep{11}. In the second simulation (hereafter, sim.2), the system was perturbed by fewer modes with wavenumber $k$ in the range $0.1 \\leqslant k d_i \\leqslant 0.3$, all corresponding to scales far larger than ion kinetic scales, mean amplitude $\\left| \\delta\\textbf{B} \\right|_{rms}\/B_0\\!\\simeq\\!0.25,$ and maximum amplitude $\\left| \\delta\\textbf{B} \\right|_{max}\/B_0\\!\\simeq\\!0.5$. In this way, ions were magnetized at the beginning of the simulation, eventually leading to a turbulent environment in which standard reconnection occurs. The time step used for both simulations was $\\Delta t\\!=\\!0.005 \\,\\Omega^{-1}_i$ in order to accurately resolve phenomena with frequencies between the electron cyclotron frequency $\\Omega_e$ and the ion cyclotron frequency $\\Omega_i$. This choice is consistent with the limits of our HVM model, where ions are kinetic, while electrons, which retain a finite mass, are treated as a fluid obeying the Ohm law of electron magnetohydrodynamics (EMHD) \\citep{11}. 
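A band-limited, random-phase initialization like the one described above can be sketched as follows; the mode-selection and normalization details here are assumptions for illustration, not the actual HVM initialization (which, for instance, must also ensure a divergence-free field):

```python
import numpy as np

# Sketch of a band-limited, random-phase perturbation like the one used
# for sim.1 (0.1 <= k d_i <= 0.6), normalized to a target rms amplitude.
rng = np.random.default_rng(0)
N, L = 1024, 20 * np.pi               # grid and box size from the paper
kmin, kmax, target_rms = 0.1, 0.6, 0.2

k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
kmod = np.sqrt(kx**2 + ky**2)
# Small tolerance so boundary modes are kept despite float rounding.
band = (kmod >= kmin - 1e-9) & (kmod <= kmax + 1e-9)

# Random phases on the selected shell of modes, zero elsewhere.
phases = np.exp(2j * np.pi * rng.random((N, N)))
spec = np.where(band, phases, 0.0)
dB = np.fft.ifft2(spec).real
dB *= target_rms / np.sqrt(np.mean(dB**2))   # normalize rms amplitude

print(np.sqrt(np.mean(dB**2)))   # -> 0.2 by construction
```

Taking the real part keeps the Fourier support inside the (symmetric) shell, so the resulting field contains only the injected band of modes.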
It is worth noting that in our simulations, with the spatial resolution we chose, we have only two points to resolve the electron inertial length $d_e$, but nonetheless this was sufficient to distinguish the EMHD invariant $F\\!\\doteq\\!\\psi-d_{\\rm e}^2\\nabla^2\\psi$ from the flux function $\\psi$ \\citep{23}; in other words, it allowed us to accurately resolve the electron physics at sub-ion scales (see \\citet{11} for a detailed discussion of this point).\n\nWe did not include any external forcing term in our model (i.e., these are freely decaying turbulence simulations). This means that when the plasma is in a turbulent regime, the energy dissipated at small scales is not replaced by any large-scale energy source and the system will never reach the statistically stationary state corresponding to fully developed turbulence \\citep{14}. However, there is a time interval during which 2D freely decaying turbulence reaches a peak of activity whose statistical properties are very similar to those of homogeneous and isotropic fully developed turbulence. This time interval corresponds to a period in which the out-of-plane mean square current $\\langle J^2_z \\rangle$ reaches and maintains a roughly constant peak value \\citep{03a,04,12d} that corresponds to an intense small-scale activity \\citep{16}. For this reason, the analysis of the turbulence statistics was carried out at a fixed time close to the peak of $\\langle J^2_z \\rangle$ in both simulations.\n\n\\section{Structure functions and intermittency} \\label{sec04}\n\nOne way to characterize a turbulent process, regardless of its nature, is to analyze the statistical features of the fluctuations of the physical quantities at different scales \\citep{01,14}. For a plasma, these quantities could be, for example, the magnetic field $\\textbf{B}$ or the ion velocity $\\textbf{u}$. 
Given a generic vector quantity $\\textbf{q}(\\textbf{x})$, its fluctuations in the direction of $\\textbf{r}$ at scale $r\\!=\\!|\\textbf{r}|$ can be defined as \\citep{14}\\\\\n\\begin{equation}\n\\Delta q_\\parallel(\\textbf{x},\\textbf{r})\\!=\\![\\textbf{q}(\\textbf{x}+\\textbf{r})\\!-\\!\\textbf{q}(\\textbf{x})]\\!\\cdot\\!\\frac{\\textbf{r}}{r}\n.\\end{equation}\n\\\\The moments of order $p$ of such fluctuations are given by\\\\\n\\begin{equation}\nS_p(\\textbf{r})\\!=\\!\\langle |\\Delta q_\\parallel(\\textbf{x},\\textbf{r})|^p \\rangle \\label{sf}\n\\end{equation}\n\\\\and are known as (longitudinal) structure functions of the variable $\\textbf{q}(\\textbf{x})$, where the symbol $\\langle \\cdot \\rangle$ indicates the average over a suitable statistical ensemble. For a homogeneous and isotropic system, SFs depend solely on $r$ and the ensemble average can be replaced by an average over real space. \n\nThe importance of SFs in analyzing turbulence lies in the fact that in many turbulent processes, they take the form of a power law,\\\\\n\\begin{equation}\nS_p(r)\\sim r^{\\xi(p)} \\label{power}\n,\\end{equation} \n\\\\where $\\xi(p)$ is called the scaling exponent of the process. This exponent contains important information about the spatial distribution of fluctuations. It is possible to prove that if $\\xi(p)\\!=\\!ph$ (with $h$ being a constant), the fluctuations are self-similar, that is, they are uniformly distributed in the system at all scales. On the other hand, if $\\xi(p)$ is nonlinear in $p,$ the fluctuations are intermittent, which means that they become increasingly less homogeneous with decreasing scale length, and they tend to be concentrated only in some portions of the system \\citep{14}. Therefore, SFs represent a powerful analysis tool that allows us to identify some key properties of a turbulent process.\n\nSometimes the SFs of finite systems where turbulence is not fully developed do not take the form of the power law of eq. 
(\\ref{power}). Nevertheless, the turbulent flow can still be characterized using a set of scaling exponents $\\xi(p)$ if, by plotting SFs of different order against one another, the following scaling is obtained:\\\\\n\\begin{equation}\nS_p(r) \\sim S_q(r)^{\\beta(p,q)}\n,\\end{equation}\n\\\\where $\\beta(p,q)\\!=\\!\\xi(p)\/\\xi(q)$. In this case, one speaks of extended self-similarity (ESS), which has been observed in many experimental turbulent systems as well as in numerical simulations \\citep{17,18,19}. In the case of ESS, it is not possible to calculate all the $\\xi(p)$ separately because they appear in the form of a fraction in the scaling exponents $\\beta(p,q)$. However, the knowledge of $\\beta(p,q)$ alone is sufficient to determine whether the turbulent cascade is self-similar or intermittent. In the case of self-similarity, $\\xi(p)\\!=\\!ph$ and so $\\beta(p,q)\\!=\\!p\/q$, while in the case of intermittency, $\\beta(p,q)\\!\\neq\\!p\/q$ \\citep{12d}.\n\nIn our case, the SFs were calculated by assuming homogeneity and isotropy in both simulations. In this way, eq. (\\ref{sf}) reduces to\\\\\n\\begin{gather}\nS_p(r)=\\langle |q_x(x+r,y)-q_x(x,y)|^p \\rangle=\\notag \\\\\n\\\\\n\\langle |q_y(x,y+r)-q_y(x,y)|^p \\rangle, \\notag\n\\end{gather}\n\\\\where the ensemble average is replaced by the average over real space. The assumption of homogeneity and isotropy was confirmed by comparing the SFs calculated using $q_x$ with the SFs calculated using $q_y$, and we found only very small differences between them for all quantities we considered in the two simulations. SFs of order higher than $p\\!=\\!4$ were not considered here because calculating them requires a larger simulation grid with many more points in the real space domain than we used \\citep{20a,20}. This problem is related to the fact that the calculation of high-order moments of a quantity strongly depends on the tails of its distribution, which are often associated with low probability. 
As a result, when the ensemble average is replaced with the real space average, it is necessary to ensure that the number of sampled grid points is large enough to include the tail events. \n\n\\section{Results}\n\nThe statistical analysis of the turbulent fluctuations in the two simulations was carried out at a fixed time when the turbulent activity was at its maximum. For sim.1, where e-rec is observed, this time corresponds to $t_1\\! = \\!131.7 \\,\\Omega^{-1}_i$ , while for sim.2, where magnetic reconnection develops according to the standard picture, this time corresponds to $t_2\\! = \\!147.5 \\,\\Omega^{-1}_i$.\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_Jz} };\n\\node[] at (-4.2,3) { \\Large{(a)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_Jz} };\n\\node[] at (-4.2,3) { \\Large{(b)} };\n\\end{tikzpicture}\n}\n\\\\\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_jdote} };\n\\node[] at (-4.2,3) { \\Large{(c)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_jdote} };\n\\node[] at (-4.2,3) { \\Large{(d)} };\n\\end{tikzpicture}\n}\n\\caption{Top panels: Shaded contour plots of the out-of-plane current $J_z$ (colored) and contour lines of the flux function $\\Psi$ (black lines) of sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (a) and of sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (b). 
Bottom panels: Shaded contour plots of $\\textbf{J}\\cdot\\textbf{E}$ (colored) and contour lines of $\\Psi$ (black lines) of sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (c) and of sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (d).}\n\\label{fig_Jz_e-rec}\n\\end{figure}\n\nIn the top panels of figure \\ref{fig_Jz_e-rec} we show for both simulations the shaded contour plots of the out-of-plane current $J_z$ together with the contour lines of the flux function $\\Psi$, related to the in-plane magnetic field by $\\textbf{B}_\\perp\\!=\\!\\nabla\\Psi\\times\\textbf{e}_z$ (with $\\textbf{e}_z$ being the out-of-plane unit vector). In both simulations we see that the magnetic configuration of the system is characterized by a large number of island-like magnetic structures of various sizes and shapes, produced by the nonlinear evolution of the initial perturbation. The process of the formation of reconnection sites and the development of an intermittent turbulent cascade of magnetic energy can be understood in terms of the nonlinear interaction between these magnetic islands, which attract one another when the associated central current $J_z$ is of the same sign (and repel one another when it is of opposite sign). In particular, as two islands with central $J_z$ of the same sign approach each other, the magnetic field lines of opposite sign between them are pushed against each other, and this leads to the formation of a thin current sheet where reconnection occurs and magnetic energy is dissipated. Thus, as a result of this dynamics, reconnecting current sheets are not uniformly distributed in a turbulent plasma but tend to be concentrated between merging magnetic islands; therefore, the dissipation of magnetic energy is nonuniform, that is, the turbulent cascade of magnetic energy is intermittent. 
The relation between the formation of localized reconnecting current sheets and the development of an intermittent turbulent cascade is highlighted by the contour plots of $\\textbf{J}\\cdot\\textbf{E}$ shown in the bottom panels of figure \\ref{fig_Jz_e-rec} (the flux function $\\Psi$ is overplotted), made at the same time instants and for the same runs as the corresponding contour plots of $J_z$ in the top panels. The quantity $\\textbf{J}\\cdot\\textbf{E}$, representing the energy exchange between the electromagnetic field and the plasma, is significantly nonzero only at the intense current structures, thus marking the strong correlation between reconnection and the intermittent dissipation of magnetic energy.\n\nThe characteristic size of the magnetic islands depends on the wavelength of the initial fluctuations, that is, on the injection scale, and therefore magnetic islands in sim.1 are smaller than those in sim.2. As a result, the characteristic thickness and length of the current sheets in the two simulations are different as well, and this affects the ion magnetization and consequently the dynamics of magnetic reconnection \\citep{12b}. It has been shown in Califano et al. (2020) that in sim.1, ions are (and remain) decoupled from the magnetic field on the scale of the current sheets and because of this, e-rec develops. Conversely, the current sheets of sim.2 are large enough to let the ions participate in the magnetic field dynamics, and hence reconnection proceeds according to the standard ion-coupled reconnection model. A statistical analysis of the characteristic widths and lengths of the current structures of the two simulations considered here has been carried out by Califano et al. (2020), who showed that the reconnecting current sheets of sim.1 are shorter than those of sim.2, while their characteristic width is about the same in the two simulations. 
In particular, in sim.2 the characteristic length of the current sheets was found to be at least about $10 d_i$ and to vary up to scales of some tens of $d_i$. On the other hand, in sim.1, all the reconnecting current sheets have about the same length, which is about a few $d_i$.\n\nIn summary, the turbulent magnetic fluctuations of sim.1 and sim.2 have a significantly different local dynamics. We now determine whether there is a difference in their statistical features, in particular by analyzing the SFs of the magnetic field.\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_SFBs} };\n\\node[] at (-4.1,1.7) { \\Large{(a)} };\n\\node[] at (-1.2,-1.4) { \\scriptsize{Range I} };\n\\node[] at (0.6,-1.4) { \\scriptsize{Range II} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_SFBs} };\n\\node[] at (-4.1,1.7) { \\Large{(b)} };\n\\node[] at (-1.2,-1.4) { \\scriptsize{Range I} };\n\\node[] at (0.6,-1.4) { \\scriptsize{Range II} };\n\\end{tikzpicture}\n}\n\\\\\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_ESS2} };\n\\node[] at (-4.1,1.7) { \\Large{(c)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_ESS2} };\n\\node[] at (-4.1,1.7) { \\Large{(d)} };\n\\end{tikzpicture}\n}\n\\\\\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_beta1} };\n\\node[] at (-4.1,1.7) { \\Large{(e)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_beta1} };\n\\node[] at (-4.1,1.7) { \\Large{(f)} };\n\\end{tikzpicture}\n}\n\\caption{Top panels: Magnetic field structure functions $S_{B,p}$ (log-scale) of sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (a) and of sim.2 at 
$t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (b); dashed straight lines represent the power laws $r^p$ , and vertical dash-dotted lines delimit ranges I and II. Middle panels: $S_{B,4}$ vs. $S_{B,2}$ (filled black dots, in log-scale) for $r\\!<\\!10\\,d_i$ from sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (c) and sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (d); ranges I and II were fit separately with straight lines (blue and red lines, respectively); vertical dash-dotted lines separate range I from range II. Bottom panels: Magnetic field scaling exponents $\\beta_B(p,1)$ within range I (blue) and range II (red) from sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (e) and sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (f); the dashed straight line, representing the self-similar scaling $\\beta(p,1)\\!=\\!p$, is given as reference.}\n\\label{fig02}\n\\end{figure}\n\nIn panels (a) and (b) of figure \\ref{fig02} we compare the first four magnetic field structure functions $S_{B,p}$ (in logarithmic scale) of sim.1 and sim.2, respectively. These SFs were calculated using $B_x$. The same results were obtained using $B_y$ (not shown here), which means that magnetic field turbulence is isotropic in our simulations. Figure \\ref{fig02} shows that all magnetic field SFs of both simulations have the same behavior over the range of scales we considered and that there are no significant differences between sim.1 and sim.2. In particular, we see that for $r\\!>\\!10\\,d_i$ , all SFs start to saturate, while for $r\\!<\\!10\\, d_i$ , it is possible to distinguish two ranges that correspond to two different scalings. The first range, hereafter called range I, extends from $r\\!\\simeq\\!0.06\\, d_i$ to $r\\!\\simeq\\!0.3\\, d_i$. Here the SFs follow the power law of eq. (\\ref{power}). The second range, hereafter called range II, reaches from $r\\!\\simeq\\!0.3\\, d_i$ to $r\\!\\simeq\\!10\\, d_i$. 
In this range, $log(S_{B,p})$ is nonlinear in $log(r),$ which means that the SFs do not take the form of a power law.\n\n\\begin{figure}[t]\n\\centering\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_SFus} };\n\\node[] at (-4.1,1.7) { \\Large{(a)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_SFus} };\n\\node[] at (-4.1,1.7) { \\Large{(b)} };\n\\end{tikzpicture}\n}\n\\\\\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_ESSu2} };\n\\node[] at (-4.1,1.7) { \\Large{(c)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_ESSu2} };\n\\node[] at (-4.1,1.7) { \\Large{(d)} };\n\\end{tikzpicture}\n}\n\\\\\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{e-rec_betau1} };\n\\node[] at (-4.1,1.7) { \\Large{(e)} };\n\\end{tikzpicture}\n}\n\\subfloat{\n\\begin{tikzpicture}\n\\node[] at (0,0) { \\includegraphics[width=.48\\linewidth]{n-rec_betau1} };\n\\node[] at (-4.1,1.7) { \\Large{(f)} };\n\\end{tikzpicture}\n}\n\\caption{Top panels: Ion velocity structure functions $S_{u,p}$ (in log-scale) of sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (a) and of sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (b); dashed straight lines represent the power laws $r^p$ , and the vertical dash-dotted lines separate the power law-like region from the saturation region. Middle panels: $S_{u,4}$ vs. $S_{u,2}$ (filled black dots, in log-scale) for $r\\!<\\!7\\,d_i$ from sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (c) and sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (d); these curves were fit with a straight line (in magenta). 
Bottom panels: Ion velocity scaling exponents $\\beta_u(p,1)$ in the range $r\\!<\\!7\\,d_i$ from sim.1 at $t_1\\!=\\!131.7 \\,\\Omega^{-1}_i$ (e) and sim.2 at $t_2\\!=\\!147.5 \\,\\Omega^{-1}_i$ (f); the dashed straight line, representing the self-similar scaling $\\beta(p,1)\\!=\\!p$, is given as reference.}\n\\label{fig03}\n\\end{figure}\n\nThe large-scale behavior observed for $r\\!>\\!10\\,d_i$ is expected because we used periodic boundary conditions in a finite box, which causes the SFs to become periodic and even in $r$ \\citep{21}. Because of these properties, all SFs here considered tend to grow with increasing $r$ and start to saturate around $r\\!=\\!L\/2$ (where $L$ is the box size), while for $r\\!>\\!L\/2,$ they decrease symmetrically with respect to $r\\!=\\!L\/2$. We did not analyze the SFs for $r\\!>\\!10\\,d_i$ because the statistics there would be affected by these finite-box effects.\n\nThe small-scale power law behavior observed in range I is expected as well \\citep{18,22} because of the dissipation effects that become important at small $r$ and tend to smooth out magnetic field fluctuations. This implies that in the dissipation range $B_x(x+r,y)-B_x(x,y)\\!=\\!\\Delta B_x(r)\\!\\sim\\!r$, and consequently the magnetic field SFs take the form of the power law $S_{B,p}\\!\\sim\\!r^p$. This appears evident in the first two panels of figure \\ref{fig02}, where in range I all SFs overlap almost perfectly with the corresponding power law $r^p$ in both simulations. Thus, range I can be identified as the magnetic field dissipation range.\n\nAs discussed in section \\ref{sec04}, even if the SFs do not scale as a power law, as in range II, it can still be possible to characterize the turbulent fluctuations with a set of scaling exponents $\\beta(p,q)$ if ESS is observed. Thus, we tested range II for ESS by analyzing all combinations of magnetic field SFs of different order plotted against each other. 
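This ESS test amounts to checking whether $\log S_p$ is linear in $\log S_q$. A minimal sketch with a smooth toy field (illustrative grid and field, not the analysis code used for the simulations): for a self-similar field the fitted slope of $\log S_4$ versus $\log S_2$ must equal $p/q = 2$.

```python
import numpy as np

# Toy ESS check: estimate longitudinal SFs on a periodic grid via rolls
# (real-space average in place of the ensemble average), then fit
# log S_4 against log S_2.
def structure_function(qx, p, shifts):
    """S_p(r) for integer shifts r (grid units) along x."""
    return np.array(
        [np.mean(np.abs(np.roll(qx, -s, axis=0) - qx) ** p) for s in shifts]
    )

N = 512
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
qx = np.sin(x)[:, None] * np.ones((1, N))   # smooth single-mode field

shifts = np.array([1, 2, 4, 8])
S2 = structure_function(qx, 2, shifts)
S4 = structure_function(qx, 4, shifts)

# For a self-similar field xi(p) = p h, so the ESS slope is p/q = 4/2.
slope = np.polyfit(np.log(S2), np.log(S4), 1)[0]
print(round(slope, 2))   # -> 2.0
```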
Panels (c) and (d) of figure \\ref{fig02} show an example of two magnetic field SFs of different orders plotted against each other for $r\\!<\\!10\\,d_i$ from sim.1 and sim.2, respectively. Each of these curves was fit separately in range I and range II using two straight lines, and we find that in both simulations, they are linear in range I (blue line) and range II (red line), but with different slopes. This means that ESS holds well in range II but with a different scaling exponent than in range I. The same behavior was found for any other combination of magnetic field SFs of different order in both simulations (not shown here). As ESS is observed, we proceed by evaluating the magnetic field scaling exponents $\\beta_B(p,q)$. Panels (e) and (f) of figure \\ref{fig02} show $\\beta_B(p,q)$ as a function of $p$ at fixed $q\\!=\\!1$ within range I (blue curve) and range II (red curve) for sim.1 and sim.2, respectively. These exponents were calculated for both ranges separately by taking the slopes of the linear fits of all possible combinations of $\\log(S_{B,p})$ versus $\\log(S_{B,q})$. In both simulations, $\\beta_B(p,1)$ is linear in $p$ within range I and becomes nonlinear in range II. It is worth noting that a very small deviation from the self-similar scaling is observed in range I at the highest orders because the calculation of SFs by averaging over the simulation grid becomes increasingly less accurate with increasing $p$, as pointed out in section \\ref{sec04}. The same behavior with $\\beta_B(p,q)$ being linear in range I and nonlinear in range II was observed for any other value of $q$ in sim.1 and sim.2 (not shown here). 
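How a nonlinear (non-self-similar) $\beta(p,1)$ arises can be illustrated with a toy one-dimensional field whose increments are concentrated at two isolated jumps (an assumption for illustration only, not simulation data): there $\xi(p)$ is the same for every $p$, so $\beta(p,1)$ stays at 1 instead of taking the self-similar value $p$.

```python
import numpy as np

# Toy intermittent field: increments are nonzero only near two isolated
# discontinuities, so S_p(r) ~ r * 2^p and xi(p) = 1 for every p.
N = 512
x = np.arange(N) * 2 * np.pi / N
qx = np.where(x < np.pi, 1.0, -1.0)

def Sp(p, shifts):
    return np.array(
        [np.mean(np.abs(np.roll(qx, -s) - qx) ** p) for s in shifts]
    )

shifts = np.array([1, 2, 4, 8, 16])
S1 = Sp(1, shifts)
for p in (2, 3, 4):
    beta = np.polyfit(np.log(S1), np.log(Sp(p, shifts)), 1)[0]
    print(p, round(beta, 2))   # beta(p,1) stays at 1.0 for all p
```

A flat $\beta(p,1)$ like this is an extreme case; in the simulations the departure from $\beta(p,1)=p$ in range II is milder, but the diagnostic is the same.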
This result suggests that in both simulations, the magnetic field fluctuations are intermittent for $r\\!>\\!0.3\\,d_i,$ and they become self-similar for $r\\!<\\!0.3\\,d_i$.\n\nThis small-scale transition from an intermittent inertial range to a self-similar magnetic field dissipation range has previously been observed in numerical simulations \\citep{12d} and is consistent with the Cluster satellite measurements \\citep{12e}. However, in our case, it takes place at scales of about a few electron inertial lengths (around $r\\!\\simeq\\!0.3\\,d_i\\!\\simeq\\!3.6\\,d_e$) rather than at $r\\!\\simeq\\!1\\,d_i$. Furthermore, no relevant differences between the magnetic field statistics of sim.1 and sim.2 are detected, suggesting that the statistical features of the turbulent cascade of magnetic energy, and in particular, the formation of the dissipation range, are independent of the specific reconnection mechanism associated with the evolution of magnetic field fluctuations. These results agree with recent MMS measurements in the magnetosheath of Earth, showing that the statistical properties of turbulent magnetic fluctuations associated with e-rec are analogous to those of other turbulent plasmas where standard reconnection occurs \\citep{12c}.\n\nAs the magnetic field statistics do not show any significant difference between sim.1 and sim.2, we analyzed the SFs of the ion fluid velocity $\\textbf{u}$ to determine whether they show any signature of e-rec because the ions do play a very different role in the reconnection dynamics of the two simulations. In panels (a) and (b) of figure \\ref{fig03} we compare the first four ion velocity structure functions $S_{u,p}$ (in logarithmic scale) of sim.1 and sim.2, respectively. All the ion velocity SFs we considered were calculated using $u_x$ , but the same results were obtained using $u_y$ (not shown here). This again implies that ion turbulence is essentially isotropic in our simulations. 
Surprisingly, as in the case of the magnetic field SFs, figure \\ref{fig03} shows that the ion velocity SFs of both simulations show the same features in the range of scales we considered, and there are no noticeable differences between sim.1 and sim.2. On the other hand, their behavior is significantly different from that of the magnetic field SFs because we do not observe any sub-ion scale transition such as the one between ranges I and II that characterizes the magnetic field statistics (see the top panels of figure \\ref{fig02}). In particular, we see that for $r\\!>\\!7\\, d_i$ all SFs start to saturate, while for $r\\!<\\!7\\, d_i$, they behave like a power law, although the transition between these two regions is not sharp and introduces some curvature between about $2\\, d_i$ and $7\\, d_i$.\n\nThe large-scale saturation observed for $r\\!>\\!7\\, d_i$ is caused, as in the case of the magnetic field SFs, by the use of periodic boundary conditions in a finite simulation box. We did not analyze the ion velocity SFs in this range because their properties here are significantly affected by these finite-box effects.\n\nRegarding the behavior of the ion velocity SFs for $r\\!<\\!7\\,d_i$, we noted above that SFs are usually expected to take the form of the power law $r^p$ at small $r$ because of dissipation that tends to smooth out fluctuations on small scales. However, the first two panels of figure \\ref{fig03} show that in both simulations, all ion velocity SFs are well approximated by their corresponding $r^p$ power law for $r\\!<\\!2\\,d_i$, a range that is much wider than the dissipation range of the magnetic field SFs that was identified with range I. This means that the ion velocity fluctuations are smooth on a wider range than the magnetic field fluctuations. 
However, the formation of this extended ion dissipation range observed in the ion velocity SFs must have a different origin than the magnetic field dissipation range because it covers a range of scales that far exceeds range I and extends to ion scales. A possible explanation is that the development of the ion dissipation range is related to ions being decoupled from the magnetic field at scales of about the ion Larmor radius $\\rho_i$ (which is on the same order as $d_i$ for $\\beta\\!=\\!1$, as in our simulations) where ion thermal effects become important. It is reasonable to assume that if the system develops magnetic fluctuations at scales on the same order as $\\rho_i$ or smaller, then ions are unable to follow the rapid magnetic field variations in space, so they will decouple from it and no ion structures will be formed at those scales. Therefore, as an effect of ion decoupling, the ion velocity becomes smooth at scales smaller than $\\rho_i\\!\\simeq\\!d_i$. On the other hand, even if ions are decoupled, the intermittent cascade of magnetic energy proceeds toward smaller scales, supported by the electrons that remain coupled to the magnetic field. However, when electron scales are reached, even the electron dynamics decouples from the magnetic field and the magnetic dissipation range is formed. Thus, we claim that only the electrons play a role in the formation of the magnetic field dissipation range because the ions decouple from the magnetic field dynamics at scales well above those of the magnetic dissipation range.\n\nFurthermore, as all ion velocity SFs exhibit some curvature between $2\\,d_i$ and $7\\,d_i$, we verified that ESS holds by analyzing all combinations of ion velocity SFs of different order plotted against each other. Panels (c) and (d) of figure \\ref{fig03} show an example of two ion velocity SFs of different order plotted against each other for $r\\!<\\!7\\,d_i$ from sim.1 and sim.2, respectively. 
These curves were fitted using a single straight line over the whole range, and we find that in both simulations, they are linear for $r\\!<\\!7\\,d_i$ without any change in slope between the region where all SFs behave like $r^p$ and the region where they show some curvature. This means that ESS holds and that the whole range $r\\!<\\!7\\,d_i$ is characterized by a single scaling exponent. The same behavior was observed for every other combination of ion velocity SFs of different order in both simulations (not shown here). Finally, as ESS is observed, we calculated the ion velocity scaling exponents $\\beta_u(p,q)$. Panels (e) and (f) of figure \\ref{fig03} show $\\beta_u(p,q)$ as a function of $p$ at fixed $q\\!=\\!1$ for sim.1 and sim.2, respectively. These exponents were calculated by taking the gradients of the linear fits of all possible combinations of $\\log(S_{u,p})$ versus $\\log(S_{u,q})$. We find that $\\beta_u(p,1)$ is linear in $p$ in the range $r\\!<\\!7\\,d_i$, and the same behavior was observed for every other value of $q$ in sim.1 and sim.2 (not shown here). This result suggests that in both simulations, ion velocity fluctuations are self-similar at scales smaller than about $7\\, d_i$, even in the region where all SFs show some curvature. The ion velocity fluctuations are therefore likely to be smooth over the whole $r\\!<\\!7\\,d_i$ range.\n\nThus, the analysis of ion velocity SFs clearly shows that the ion statistics are also not influenced by the specific reconnection mechanism associated with the evolution of magnetic field fluctuations. No signature of e-rec is present because we do not see any difference between the statistical features of the ions in sim.1 and sim.2. \n\n\\section{Conclusions}\n\nBy combining the information obtained from the magnetic field and the ion velocity SFs, we find that the turbulent cascade associated with e-rec has the same statistical properties as the turbulent cascade associated with standard reconnection.
This result is consistent with a recent analysis of turbulent magnetic fluctuations associated with e-rec, measured in the terrestrial magnetosheath by the satellites of the MMS space mission \\citep{12c}.\n\nFurthermore, our analysis suggests that in both simulations, it is possible to identify two dynamical regimes. The first is the ion-decoupled regime, associated with scales in the range $4\\,d_e\\!<\\!r\\!<\\!7\\,d_i$, where magnetic field fluctuations are intermittent while ion velocity fluctuations are self-similar and smooth as this species is strongly decoupled from the magnetic field. The second regime is the dissipative one, associated with scales in the range $r\\!<\\!4\\,d_e$, where both magnetic field and ion velocity fluctuations are self-similar and smooth because of small-scale dissipation. This result is consistent with the analysis of Pyakurel et al. \\citep{12b}, according to which ions decouple from the magnetic field at scales of about $10\\, d_i\\!\\simeq\\!10\\, \\rho_i$ and no ion structures are formed at these scales or smaller. In addition, we claim that the formation of the self-similar magnetic field dissipation range is only guided by the small-scale electron dynamics and that it is independent of the ion dynamics as these particles are decoupled from the magnetic field in this range. These results suggest that the statistical features of the turbulent cascade in a collisionless magnetized plasma depend solely on the coupling between the magnetic field and the different particle species present in the system, but they are independent of the specific process that is responsible for the decoupling of these particles (e.g., e-rec or standard magnetic reconnection).
In other words, this means that e-rec dissipates the turbulent magnetic energy in the same way as standard ion-coupled reconnection does, and this happens because turbulent dissipation is guided by electrons whose dynamics remains unaltered from standard reconnection to e-rec. This seems to be a robust and universal feature of turbulent magnetized plasmas, independent of the reconnection dynamics, and this result has a potential impact on the formulation of new theoretical models of plasma turbulence. In addition, in this context, the SFs proved to be a useful tool for investigating the coupling between particles and the magnetic field, and their use may be extended to the analysis of satellite data as well.\n\nAdditional studies are necessary to better characterize the transition between the ion-decoupled regime and the dissipative regime. The spatial grid spacing of our simulations is on the same order as the electron inertial length $d_e$, and because of this, it is not possible to accurately resolve the small-scale electron dynamics in the dissipation range. Moreover, even though our hybrid model is computationally very efficient and able to highlight the different roles of ions and electrons, it is still too simplified to completely describe the small-scale electron physics. Thus, simulations with higher resolution and including electron kinetic effects are required for a much more detailed study of the formation of the magnetic field dissipation range.\n\nFinally, the natural extension of our work will be to perform full 3D-3V simulations of plasma turbulence to study three-dimensional effects on the transition between the different physical regimes that characterize the turbulent energy cascade. \n\n\\section*{\\normalsize{Acknowledgments}}\n\nThis paper has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 776262 (AIDA, www.aida-space.eu).
Numerical simulations have been performed on Marconi at CINECA (Italy) under the ISCRA initiative. FC thanks Dr.~M.~Guarrasi (CINECA, Italy) for his essential contribution to the code implementation on Marconi. We acknowledge Dr.~S.~Cerri, M.~Sisti, F.~Finelli, and S.~Fadanelli for very useful discussions.\n\n\\section*{\\normalsize{Data Availability}}\n\nThe simulation dataset (UNIPI e-rec) is available at Cineca on the AIDA-DB. In order to access the meta-information and the link to the raw data, see the tutorial at http:\/\/aida-space.eu\/AIDAdb-iRODS.\n\n\\section{INTRODUCTION}\n\n\nUltracold Fermi gas \\cite{giorgini_rmp2008},\nwith tunable atom-atom interaction through Feshbach resonance \\cite{chin_rmp2010}, has been an ideal platform for the study of the crossover physics from weak-coupling Bardeen-Cooper-Schrieffer (BCS) pairing to a Bose-Einstein condensate (BEC) of bound pairs \\cite{regal_prl2004,zwierlein_prl2004,bartenstein_prl2004}.\nRecently, synthetic gauge fields \\cite{lin_nat2009} and spin-orbit (SO) coupling \\cite{lin_nat2011, ji_nphys2014, chunlei_pra2013, pengjun_prl2012, cheuk_prl2012} were realized in experiments,\nopening up a completely new avenue of research in superfluid Fermi gases.\nThe interplay of Zeeman fields and SO coupling leads to many novel phenomena at both zero\n\\cite{gong_prl2011,han_pra2012,seo_pra2012,jiang_pra2011,zengqiang_prl2011,zhoupra2013}\nand finite temperatures \\cite{lindong_njp2013,hu_njp2013},\nfrom mixed singlet-triplet pairing to topological phase
transition\n\\cite{gong_prl2011,xiaji_pra2012,wu_prl2013,zheng_pra2013,chunlei_natc2013,weizhang_natc2013}.\n\nMuch of the interesting physics of the SO-coupled Fermi gas can be captured by mean-field theory. However, it is also true that mean-field theory may fail under many circumstances. For example, mean-field theory, which does not include the effect of noncondensed pairs,\nfails to describe accurately the phase transition from a superfluid to a normal gas, particularly for systems with strong interaction.\nBeyond-mean-field theoretical methods have been proposed \\cite{nozieres1985,melo_prl1993,ohashi_prl2002,machida_pra2006,xiaji_pra2005}.\nHere we present a theoretical investigation under the framework of the T-matrix scheme to address the superfluid properties of a Rashba SO coupled Fermi gas over the entire BCS-BEC crossover regime \\cite{qijin_prl1998,maly1999,loktev_pr2001,perali_prb2007,qijin_pr2005,kinnunen,huihu_prl2010,qijin,kinast, stajic,bauer_prl2014, haoguo_arxiv2013,lianyi_pra2013,zhangjing}.\nIn the absence of the Zeeman field, it was shown that the SO coupling enhances superfluid pairing \\cite{renyuan_prl2012,zengqiang_prl2011,lianyi_prl2012,lianyi_pra2013}. At the mean-field level, we know that\nthe presence of both the SO coupling and a perpendicular (out-of-plane) Zeeman field gives rise to effective $p$-wave pairing \\cite{tewari_prl2007,chuanwei_prl2012}.\nMeanwhile, introducing an in-plane Zeeman component creates an anisotropic Fermi surface which favors finite-momentum pairing,\ngiving rise to a Fulde-Ferrell superfluid \\cite{wu_prl2013,zheng_pra2013,chunlei_natc2013,weizhang_natc2013,ff1,ff4}.\nThese previous studies motivated our present work, in which we investigate the thermodynamic properties of an SO coupled Fermi gas subject to a Zeeman field.
We will focus our calculation on two important quantities that are measurable in experiment: the superfluid-to-normal transition temperature and the isothermal compressibility.\n\nWe organize the paper as follows.\nIn Sec.~\\ref{sec_model_hamiltonian}, we give an introduction to the Rashba SO coupled model\nand briefly describe the T-matrix scheme used in the calculation.\nThen in Sec.~\\ref{sec_zero-temperature_properties},\nwe briefly review the zero-temperature properties of the system.\nThe superfluid transition temperature $T_c$ in the BEC-BCS crossover is investigated in Sec.~\\ref{sec_superfluid_transition_temperature}, and its\ndependence on the Zeeman field is emphasized. We present the numerical results of compressibility in Sec.~\\ref{sec_isothermal_compressibility} before we provide a summary in Sec.~\\ref{summary}. We show that both the superfluid transition temperature and the compressibility have distinct dependences on the out-of-plane and the in-plane Zeeman fields.\nIn the Appendix we present technical details of the T-matrix formalism.\n\n\\section{Model}\n\\label{sec_model_hamiltonian}\n\nWe consider a three-dimensional two-component degenerate Fermi gas with Rashba SO coupling together with effective Zeeman fields.\nThis system can be described by the following Hamiltonian:\n\\begin{eqnarray}\n\\mathcal{H} &=& \\int d{\\bf r}\\,\n\\psi^\\dag ( \\mathcal{H}_0 + \\mathcal{H}_\\mathrm{so} + h_z\\sigma_z +h_x\\sigma_x) \\psi({\\bf r}) \\nonumber\\\\\n&& + U \\int d{\\bf r}\\, \\psi^\\dag_\\uparrow({\\bf r}) \\psi^\\dag_\\downarrow({\\bf r})\n\\psi_\\downarrow({\\bf r}) \\psi_\\uparrow({\\bf r})~,\n\\end{eqnarray}\nwhere $\\psi({\\bf r})=(\\psi_\\uparrow \\,,\\psi_\\downarrow)^T$ represents the fermionic field operator and $\\mathcal{H}_0 = -\\nabla^2\/(2m)-\\mu$ represents the kinetic energy\nwith $\\mu$ being the chemical potential.\nThe Rashba SO-coupling term takes the form\n$\\mathcal{H}_\\mathrm{so} = \\alpha ( k_y\\sigma_x - k_x\\sigma_y )$ in
the $xy$-plane, with the parameter $\\alpha$ characterizing the strength of the SO coupling.\nWe consider both an out-of-plane Zeeman field $h_z$ and an in-plane Zeeman field $h_x$.\nThe quantity $U$ represents the bare two-body interaction constant and in the calculation will be replaced by the $s$-wave scattering length $a_s$ through the standard regularization scheme:\n$1\/U = m\/(4\\pi\\hbar^2a_s)-\\sum_k m\/k^2$.\n\nIn the mean-field BCS theory, we introduce an order parameter\n$\\Delta=U\\sum_{\\bm{k}}\\langle \\psi_{\\bm{Q}+\\bm{k},\\uparrow}\\psi_{\\bm{Q}-\\bm{k},\\downarrow}\\rangle$ to characterize the property of the superfluid. Here $\\bm{Q}$ represents the center-of-mass momentum of the pairs. A finite ${\\bm{Q}}$ arises from the presence of the in-plane Zeeman field \\cite{wu_prl2013,zheng_pra2013,chunlei_natc2013,ff1,lindong_njp2013,hu_njp2013,ff4}.\nAt finite temperature in a T-matrix scheme, the order parameter $\\Delta(T)$ can be divided into two parts:\n$\\Delta^2 = \\Delta_\\mathrm{sc}^2 + \\Delta_\\mathrm{pg}^2$ \\cite{lianyi_pra2013}. 
Here $\\Delta_{\\rm sc}$ is the superfluid gap arising from the condensed pairs and vanishes above the superfluid transition temperature $T_c$.\nThe pseudo-gap $\\Delta^2_\\mathrm{pg}\\sim\\langle\\Delta^2(T)\\rangle - \\langle\\Delta(T)\\rangle^2$ describes the thermodynamic fluctuation of non-condensed pairs \\cite{qijin_prl1998,kosztin_prb2000}.\nBelow the superfluid transition temperature $T_c$, the thermodynamic quantities are determined\nby the Thouless criterion \\cite{thouless_1960} within the T-matrix scheme instead of the BCS formalism in the mean-field model.\nThis is because at finite temperature thermodynamic fluctuation plays a critical role with a tendency towards destroying the pairing condensation.\nThe Thouless criterion for finite pairing momentum $\\bm{Q}$ takes the form:\n\\begin{equation}\nU^{-1}+\\chi(0,\\bm{Q})=0 ~,\n\\end{equation}\nwhere $\\chi(0,\\bm{Q})$ is the spin symmetrized pair susceptibility.\nMore technical details are given in the Appendix.\n\nThe T-matrix scheme adopted here was first developed in the context of high-$T_c$ cuprates \\cite{qijin_prl1998,maly1999,loktev_pr2001,perali_prb2007}. It was later applied to study the BEC-BCS crossover phenomenon in ultracold atomic Fermi gases \\cite{qijin_pr2005}. The T-matrix theory was found to be reasonably successful in providing theoretical support for the measured radio-frequency (rf) spectrum \\cite{kinnunen,huihu_prl2010,qijin}, density profiles \\cite{stajic}, and thermodynamic properties such as heat capacity \\cite{kinast}, pressure \\cite{bauer_prl2014}, and isothermal compressibility \\cite{haoguo_arxiv2013} of ultracold Fermi gases. More recently, this theory was also applied to Fermi gases with spin-orbit coupling \\cite{lianyi_pra2013, zhangjing}. In Ref.~\\cite{zhangjing}, it was found that the theoretical calculation qualitatively agrees with the measured rf spectrum of a spin-orbit-coupled Fermi gas on the BEC side of the resonance. 
Given these past successes, we are confident that the T-matrix scheme indeed represents a vast improvement over the simple mean-field theory and should be at least qualitatively valid over the whole BEC-BCS crossover regime. \n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.35\\textwidth]{fig1.eps}\n\\caption{(Color online) Thermodynamic quantities at zero temperature,\n(a) order parameter, and (b) pairing momentum, as functions of interaction strength characterized by $1\/a_s K_F$ for different $\\theta_h$. Here the SO coupling strength is\n$\\alpha K_F=2.0E_F$, and the effective Zeeman field strength is $h=0.5E_F$. $\\theta_h$ is the angle between the effective Zeeman field and the $z$-axis, such that $h_z = h \\cos \\theta_h$ and $h_x = h \\sin \\theta_h$. Therefore $\\theta_h=0$ represents a pure out-of-plane Zeeman field, while $\\theta_h=\\pi\/2$ represents a pure in-plane Zeeman field. $K_F$ and $E_F$ are the Fermi wave number and Fermi energy of the ideal Fermi gas, respectively.}\n\\label{crossover_para}\n\\end{figure}\n\n\n\\section{Zero-temperature properties}\n\\label{sec_zero-temperature_properties}\n\nWe first briefly review the main features of the system at zero temperature. Note that the pseudo-gap tends to zero in the low-temperature limit as $\\Delta_\\mathrm{pg}\\sim T^{3\/4}$ [see Eq.~(\\ref{delta_pg})] and vanishes at $T=0$. Hence, the T-matrix scheme we adopted here reduces to the mean-field theory at zero temperature. The Rashba SO coupling and the Zeeman field have opposite effects on the magnitude of the gap parameter: the former tends to enhance the gap, while the latter tends to reduce the gap. Fig.~\\ref{crossover_para}(a) displays the gap parameter as a function of interaction strength for several different Zeeman fields. The in-plane Zeeman field is known to break the symmetry of the band structure and results in Cooper pairs with finite momentum, a signature of the Fulde-Ferrell superfluid. 
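The band asymmetry induced by the in-plane Zeeman component can be made explicit by diagonalizing the $2\times2$ single-particle part of the Hamiltonian above. The following is an illustrative sketch in units with $\hbar=m=1$; the parameter values loosely mirror $\alpha K_F=2.0E_F$ and $h=0.5E_F$ used in the figures, but this is not the paper's T-matrix calculation.

```python
import numpy as np

# Helicity bands of H(k) = xi_k + alpha (k_y sx - k_x sy) + h_z sz + h_x sx,
# with xi_k = k^2/2 - mu. An in-plane field h_x shifts the spectrum along
# k_y, so E(k_y) != E(-k_y): the asymmetry behind finite-momentum pairing.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(kx, ky, kz, alpha=2.0, hz=0.0, hx=0.5, mu=1.0):
    xi = 0.5 * (kx**2 + ky**2 + kz**2) - mu
    H = xi * np.eye(2) + alpha * (ky * sx - kx * sy) + hz * sz + hx * sx
    return np.linalg.eigvalsh(H)      # [E_minus, E_plus]

# Closed form: E_pm = xi_k pm sqrt((alpha k_y + h_x)^2 + (alpha k_x)^2 + h_z^2)
def bands_exact(kx, ky, kz, alpha=2.0, hz=0.0, hx=0.5, mu=1.0):
    xi = 0.5 * (kx**2 + ky**2 + kz**2) - mu
    lam = np.sqrt((alpha * ky + hx)**2 + (alpha * kx)**2 + hz**2)
    return np.array([xi - lam, xi + lam])

asym = bands(0, 0.7, 0)[0] - bands(0, -0.7, 0)[0]
print(asym)   # nonzero only because hx != 0
```

Setting `hx=0` restores the $k_y\to-k_y$ symmetry of the bands, consistent with zero-momentum pairing in that case.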
In Fig.~\\ref{crossover_para}(b), we show how the magnitude of the momentum of Cooper pairs $Q=|\\bm{Q}|$ ($\\bm{Q}$ is along the $y$-axis \\cite{zheng_pra2013}) changes in the BEC-BCS crossover. $Q$ decreases quickly on the BEC side of the resonance as the two-body $s$-wave interaction, which favors zero-momentum pairing, dominates in that regime.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.35\\textwidth]{fig2.eps}\n\\caption{(Color online) Superfluid gap at $T=0$ as a function of the Zeeman field strength for (a) $1\/a_sK_F= -0.5$, (b) 0, and (c) 0.5. The solid, dashed, and dot-dashed lines correspond to $\\theta_h= 0$, $\\pi\/4$, and $\\pi\/2$, respectively. The SO-coupling strength is\n$\\alpha K_F=2.0E_F$. The vertical arrows indicate the critical Zeeman field strength at which the system becomes gapless.}\n\\label{deltah}\n\\end{figure}\n\nIn Fig.~\\ref{deltah}, we show how the zero-temperature superfluid gap varies as the Zeeman field strength changes. For a weak Zeeman field, the gap is insensitive to the orientation of the field. In general, $\\Delta$ decreases as $h=\\sqrt{h_z^2+h_x^2}$ increases due to the pair-breaking effect of the Zeeman field. However, as $h$ exceeds some threshold value, the decrease of the gap becomes sensitive to the orientation of the field: The larger the in-plane Zeeman field component is, the steeper the decrease of the gap becomes. As we will show below, this threshold value corresponds to the field strength at which the quasi-particle excitation gap vanishes.\n\nAnother important feature induced by the Zeeman field is that it closes the bulk quasi-particle excitation gap at a critical value. Fig.~\\ref{phase}(a) represents a phase diagram in which we plot the critical Zeeman field at which the system changes from gapped to gapless. The region below each line represents the gapped phase. 
As the interaction strength varies from the BCS side to the BEC side, the order parameter increases [see Fig.~\\ref{crossover_para}(a)], and correspondingly the critical field strength increases and the gapped region enlarges. In the absence of the in-plane Zeeman field (i.e., when $h_x=0$), the system becomes gapless due to the presence of the discrete Fermi points \\cite{gong_prl2011} located along the $k_z$-axis in momentum space at $k_z=\\pm\\sqrt{2m(\\mu\\pm\\sqrt{h^2-\\Delta^2})}$. These Fermi points are topological defects. Hence the transition from the gapped to the gapless region in this case also represents a transition from a topologically trivial to a topologically nontrivial quantum phase. For finite $h_x$, by contrast, the gapless region features a nodal surface in momentum space on which the single particle excitation gap vanishes \\cite{lindong_njp2013,zhoupra2013,yong_prl2014}. An example of such a nodal surface is plotted in Fig.~\\ref{phase}(b). The vertical arrows in Fig.~\\ref{deltah} indicate the critical Zeeman field strength at which the system becomes gapless. One can see that the sharp drop of the superfluid gap is correlated with the appearance of the nodal surface induced by a large in-plane Zeeman field. As we shall show below, the presence of the nodal surface also has dramatic effects on the thermodynamic properties of the system.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.32\\textwidth]{fig3.eps}\n\\caption{(Color online) (a) The critical magnetic field that separates the gapped region and the gapless region. The region below the lines is gapped. Here the SO-coupling strength is\n$\\alpha K_F=2.0E_F$. (b) The nodal surface in the gapless region for $h_z=0.0$, $h_x=1.0E_F$, $\\alpha K_F=2.0E_F$, and $1\/a_sK_F=0.0$. 
The range of the plot is from $-2 K_F$ to $2K_F$ along each direction.}\n\\label{phase}\n\\end{figure}\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig4.eps}\n\\caption{(Color online) (a) The total gap $\\Delta$, pseudo-gap $\\Delta_{\\rm pg}$ and superfluid gap $\\Delta_{\\rm sc}$ as functions of the temperature.\n(b) The corresponding pairing momentum $Q$ as a function of the temperature.\nThe parameters used are\n$1\/a_sK_F=0.0$, $\\alpha K_F=2.0E_F$, $h_z=0.0$, and $h_x=0.5E_F$. In this example, the green vertical dashed line indicates the critical temperature $T_c=0.210E_F$.}\n\\label{delta}\n\\end{figure}\n\n\\section{Superfluid Transition Temperature}\n\\label{sec_superfluid_transition_temperature}\n\nWe now turn our attention to the finite-temperature properties of the system. The first quantity we want to address is the superfluid transition temperature $T_c$, which is identified as the lowest temperature at which the superfluid gap $\\Delta_{\\rm sc}$ vanishes.\nFig.~\\ref{delta}(a) shows an example of how the gap varies with temperature. As the temperature increases, the superfluid gap $\\Delta_{\\rm sc}$ decreases monotonically and vanishes at $T_c$.\nIn contrast, the pseudo-gap $\\Delta_{\\rm pg}$ is a monotonically increasing function $\\Delta_{\\rm pg}\\sim T^{3\/4}$ below $T_c$ and decreases above $T_c$.\nOn the other hand, Fig.~\\ref{delta}(b) shows that the center-of-mass momentum $Q$ of the Cooper pairs is quite insensitive to the temperature for $T<T_c$, and remains finite even for $T>T_c$.
This indicates that the in-plane Zeeman field not only induces a Fulde-Ferrell superfluid below $T_c$, but also induces an exotic normal state above $T_c$ featuring a finite-momentum pseudo-gap \\cite{ff4}.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.36\\textwidth]{fig5.eps}\n\\caption{(Color online) Critical temperature $T_c$ in the BEC-BCS crossover for (a) different spin-orbit couplings (the values of the SO-coupling strength $\\alpha$, in units of $E_F\/K_F$, are shown in the legends) with no Zeeman field;\nand for (b) different ($h$, $\\theta_h$) with $\\alpha K_F=2.0E_F$. $h$ is in units of $E_F$.}\n\\label{crossover}\n\\end{figure}\n\nIn Fig. \\ref{crossover}, we show the superfluid transition temperature $T_c$ in the BEC-BCS crossover. In Fig. \\ref{crossover}(a), we plot $T_c$ as a function of the interaction strength for several different values of the SO-coupling strength without the Zeeman field. For comparison, the mean-field result is also included. For weak interaction, where $1\/(a_sK_F)$ is large and negative, the mean-field result agrees with the T-matrix result. As the interaction increases towards the BEC limit, the mean-field theory predicts an unphysically large critical temperature, a clear indication of its breakdown. In the BEC limit where the two-body attractive interaction is dominant, the system behaves like a condensation of weakly interacting tightly bound bosonic dimers with effective mass $2m$, regardless of the SO coupling strength \\cite{jiang_pra2011,zengqiang_prl2011,hu_prl2011}. The BEC transition temperature for this system is 0.218$E_F$ \\cite{melo_prl1993,ohashi_prl2002,machida_pra2006}.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.35\\textwidth]{fig6.eps}\n\\caption{(Color online) Critical temperature $T_c$ as functions of the Zeeman field strength for (a) $1\/a_sK_F= -0.5$, (b) 0, and (c) 0.5. 
The solid, dashed, and dot-dashed lines correspond to $\\theta_h= 0$, $\\pi\/4$, and $\\pi\/2$, respectively. The SO-coupling strength is\n$\\alpha K_F=2.0E_F$. The vertical arrows indicate the value of the Zeeman field strength at which the system becomes gapless.}\n\\label{htc}\n\\end{figure}\n\nIn Fig. \\ref{crossover}(b), we examine how the Zeeman field affects $T_c$. In the BEC limit, we have again $T_c=0.218 E_F$, insensitive to either the SO-coupling strength or the Zeeman field strength \\cite{jiang_pra2011}. On the BCS side, however, the Zeeman field tends to decrease $T_c$. This effect is much more pronounced with the in-plane Zeeman field than with the out-of-plane Zeeman field, indicating that the finite-momentum Fulde-Ferrell pairing is less robust than the zero-momentum Cooper pairing.\n\nTo show the effect of the Zeeman field on the transition temperature more clearly, we plot in Fig.~\\ref{htc} $T_c$ as a function of the Zeeman field strength at different orientations. Across the BEC-BCS crossover, $T_c$ is not very sensitive to the out-of-plane Zeeman field over a large range of Zeeman field strengths. By contrast, when the in-plane Zeeman field is present, $T_c$ starts to drop rather sharply near the critical Zeeman field strength where the system becomes gapless. This steep downturn of $T_c$ is particularly pronounced on the BEC side of the resonance. We attribute this steep drop of $T_c$ and the corresponding steep drop of the zero-temperature superfluid gap (see Fig.~\\ref{deltah}) to the enhanced fluctuation as a result of the emergence of the nodal surface induced by a large in-plane Zeeman field.\n\n\n\n\n\\section{Isothermal Compressibility}\n\\label{sec_isothermal_compressibility}\n\nThe isothermal compressibility, defined as $\\kappa = \\frac{1}{n} \\left. \\frac{\\partial n}{\\partial P} \\right|_{T}$, measures the change in the density $n$ in response to the change of the pressure $P$.
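For an ideal Fermi gas at $T=0$, this definition reduces to the normalization $\kappa_0=\frac{3}{2}\frac{1}{N E_F}$ quoted in the figure captions (per particle; the sketch below works with the density $n$ instead). A quick finite-difference check, in units $\hbar=m=1$ with an arbitrary density:

```python
import numpy as np

# Zero-temperature equation of state of an ideal two-component Fermi gas:
# n = kF^3 / (3 pi^2), E_F = kF^2 / 2, and P(n) = (2/5) n E_F(n) ~ n^(5/3).
# Then kappa = (1/n) dn/dP evaluates to kappa_0 = 3 / (2 n E_F).
def fermi_energy(n):
    return 0.5 * (3.0 * np.pi**2 * n) ** (2.0 / 3.0)

def pressure(n):
    return 0.4 * n * fermi_energy(n)

n, dn = 1.0, 1e-6
kappa = (1.0 / n) * (2 * dn) / (pressure(n + dn) - pressure(n - dn))
kappa0 = 1.5 / (n * fermi_energy(n))
print(kappa, kappa0)   # the two agree
```

The analytic step is $dP/dn = \frac{2}{3}E_F$ for the $n^{5/3}$ equation of state, which gives $\kappa_0 = 3/(2 n E_F)$ directly.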
A recent experiment measured the compressibility of a Fermi gas across the superfluid phase transition \\cite{ku_sci2012}, and the measurements were found to be in reasonable agreement with the T-matrix theory \\cite{haoguo_arxiv2013}. In the superfluid regime, it is found that $\\kappa$ increases with temperature, i.e., the gas becomes more compressible as temperature increases. This can be intuitively understood: a lower temperature yields a larger superfluid gap, and hence a gas that is harder to compress. Here we want to examine how SO coupling affects the behavior of $\\kappa$.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{fig7.eps}\n\\caption{(Color online) Compressibility in the superfluid regime as a function of temperature for different Zeeman field strengths and scattering lengths. The scattering lengths are: $1\/K_Fa_s = -0.5$ for upper panels (a1) and (b1), $1\/K_Fa_s = 0$ for middle panels (a2) and (b2), and $1\/K_Fa_s = 0.5$ for lower panels (a3) and (b3). For left panels (a1), (a2) and (a3), $h_x=0$; for right panels (b1), (b2) and (b3), $h_z=0$. The SO-coupling strength is\n$\\alpha K_F=2.0E_F$. The compressibility is normalized by $\\kappa_0=\\frac{3}{2}\\frac{1}{N E_F}$, the compressibility of an ideal Fermi gas, and the temperature is normalized to the superfluid transition temperature $T_c$ for the given set of parameters. In the legends, we also indicate whether the quasiparticle excitations of the corresponding system are gapped or gapless. In (a2), we show $\\kappa$ in the absence of SO couplings and Zeeman fields (the dotted line) by taking $\\alpha=h=0$. In this limit, our calculation recovers the result reported in Ref.
\\cite{haoguo_arxiv2013}.}\n\\label{compressibility}\n\\end{figure}\n\nFor our system, $\\kappa$ can be expressed as \\cite{seo_arxiv2011, seo_pra2013, haoguo_arxiv2013}\n\\begin{eqnarray}\n\\kappa &=& \\dfrac{1}{N^2}\\Big[\n\\Big(\\dfrac{\\partial N}{\\partial \\mu}\\Big)_{T,\\Delta,Q}\n+ \\Big(\\dfrac{\\partial N}{\\partial \\Delta}\\Big)_{T,\\mu,Q} \\Big(\\dfrac{\\partial \\Delta}{\\partial \\mu}\\Big)_{T,Q} \\notag\\\\\n&& + \\Big(\\dfrac{\\partial N}{\\partial Q}\\Big)_{T,\\mu,\\Delta} \\Big(\\dfrac{\\partial Q}{\\partial \\mu}\\Big)_{T,\\Delta}\n\\Big] ~,\n\\end{eqnarray}\nwhere\n\\[\n\\Big(\\dfrac{\\partial \\Delta}{\\partial \\mu}\\Big)_{T,Q} =\n\\Big(\\dfrac{\\partial N}{\\partial \\Delta}\\Big)_{T,\\mu,Q} \\Big\/ \\Big(\\dfrac{\\partial^2 \\Omega}{\\partial \\Delta^2}\\Big)_{T,\\mu,Q} ~,\n\\]\n\\[\n\\Big(\\dfrac{\\partial Q}{\\partial \\mu}\\Big)_{T,\\Delta} =\n\\Big(\\dfrac{\\partial N}{\\partial Q}\\Big)_{T,\\mu,\\Delta} \\Big\/ \\Big(\\dfrac{\\partial^2 \\Omega}{\\partial Q^2}\\Big)_{T,\\mu,\\Delta} ~.\n\\]\nIn Fig. \\ref{compressibility}, we show the compressibility $\\kappa$ as a function of temperature $T$ in the superfluid regime on both sides of the Feshbach resonance. In the left panels, we set the in-plane Zeeman field $h_x=0$. Across the Feshbach resonance, we see that regardless of the strength of the out-of-plane Zeeman field $h_z$, $\\kappa$ increases smoothly as the temperature increases from 0 to $T_c$, just like in a superfluid Fermi gas without SO coupling \\cite{ku_sci2012, haoguo_arxiv2013}. By contrast, the presence of the in-plane Zeeman field gives rise to some surprising effects. In the right panels of Fig.~\\ref{compressibility}, we set $h_z=0$. One can see that for small $h_x$ such that the system is gapped, $\\kappa$ is a monotonically increasing function of $T$. However, once $h_x$ exceeds the critical value such that the system becomes gapless, $\\kappa$ becomes a non-monotonic function of temperature. 
In certain regimes, $\\kappa$ even decreases as $T$ increases.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.32\\textwidth]{fig8.eps}\n\\caption{(Color online) Compressibility at $T=0$ as a function of the Zeeman field strength. The dashed line corresponds to the in-plane Zeeman field ($\\theta_h=\\pi\/2$), and the solid line corresponds to the out-of-plane Zeeman field ($\\theta_h=0$). The dashed and solid arrows indicate the critical field strength at which the system becomes gapless in the presence of an in-plane Zeeman field and an out-of-plane Zeeman field, respectively. The SO coupling strength is\n$\\alpha K_F=2.0E_F$, and the scattering length is set at $1\/K_Fa_s=0$.}\n\\label{comph}\n\\end{figure}\n\nThe drastically different effects on compressibility by the in-plane and the out-of-plane Zeeman field can also be seen in Fig.~\\ref{comph}, where we plot $\\kappa$ in the unitarity limit at zero temperature as a function of the Zeeman field strength. For the out-of-plane Zeeman field, $\\kappa$ decreases smoothly as the field strength increases. In particular, it does not exhibit any distinctive feature when the system changes from gapped to gapless. This is consistent with the fact that the gapped-to-gapless transition in the presence of a pure out-of-plane Zeeman field represents a topological phase transition which does not leave a trace in thermodynamic quantities. On the other hand, when the Zeeman field is in plane and its strength increases from zero, $\\kappa$ first decreases and then rises sharply near the critical field when the system becomes gapless. Hence the in-plane Zeeman-field-induced gapless superfluid, featuring a Fermi nodal surface, is highly compressible.\n\nThe nodal surface resulting from the in-plane Zeeman field is not unique to Rashba SO coupling.
For example, in the experimentally realized equal-weight Rashba-Dresselhaus coupling, a large in-plane Zeeman field can also induce a nodal surface \\cite{nodal1,nodal2}. We have checked that for that system, the Zeeman-field dependence of the compressibility exhibits very similar behavior.\nAs is known, using the standard form of the fluctuation-dissipation\ntheorem for a balanced system, the isothermal compressibility can be rewritten as\n$\\kappa= \\frac{V}{T}( \\frac{\\langle\\hat{N}^2\\rangle-N^2}{N^2} )$ \\cite{seo_arxiv2011}, i.e., $\\kappa$ is directly proportional to the number fluctuation of the system. The increase in $\\kappa$ can therefore be interpreted as a consequence of enhanced number fluctuation induced by the nodal surface.\n\n\n\n\n\\section{SUMMARY}\n\\label{summary}\n\nIn summary,\nwe have investigated the effect of the pairing fluctuation\non the thermodynamic properties of a Rashba SO coupled superfluid Fermi gas by adopting a T-matrix scheme. We focus on the effect of the Zeeman field. In particular, the in-plane Zeeman component leads to finite-momentum Cooper pairing and, when its magnitude becomes sufficiently large, it induces a nodal Fermi surface on which the quasi-particle excitation gap vanishes. The presence of the nodal surface has dramatic effects on the superfluid properties: it greatly suppresses the superfluid transition temperature and increases the isothermal compressibility. Both phenomena can be attributed to the enhanced fluctuation due to the presence of the nodal surface. In stark contrast, when only the out-of-plane Zeeman field is present, both $T_c$ and $\\kappa$ exhibit smooth behavior when the Zeeman field strength is increased, even when the system becomes gapless at large Zeeman field strength.
The key difference here is that a large out-of-plane Zeeman field, unlike its in-plane counterpart, does not give rise to a nodal surface, but only to discrete Fermi points along the $k_z$-axis.\nThese Fermi points are topological defects, and their appearance is not manifested in any thermodynamic quantities. In addition,\nwe also find an unconventional pseudo-gap state above the superfluid transition temperature, in which the non-condensed pairs possess nonzero center-of-mass momentum.\nWe attribute this to the anisotropic Fermi surface induced by the in-plane Zeeman field, which is independent of temperature.\n\n\n\n\n\\section*{ACKNOWLEDGEMENTS}\nZ.Z., X.Z. and G.G. are supported by the National Natural\nScience Foundation of China (Grants No. 11074244\nand No. 11274295), and the National 973 Fundamental Research Program (2011cba00200).\nH.P. acknowledges support from the NSF, and\nthe Welch Foundation (Grant No. C-1669).\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Counting of branched covers}\n\n\nLet us consider a connected compact surface without boundary $\\Omega$ and a branched covering $f:\\Sigma\\rightarrow\\Omega$\nby a connected or non-connected surface $\\Sigma$. We will consider a covering $f$ of degree $d$. This means that the\npreimage $f^{-1}(z)$ consists of $d$ points for all $z\\in\\Omega$ except some finite number of points. These points are called\n\\textit{critical values of $f$}.\n\nConsider the preimage $f^{-1}(z)=\\{p_1,\\dots,p_{\\ell}\\}$ of $z\\in\\Omega$. Denote by $\\delta_i$ the degree of $f$ at $p_i$. This\nmeans that in a neighborhood of $p_i$ the function $f$ is homeomorphic to $x\\mapsto x^{\\delta_i}$. 
The set \n$\\Delta=(\\delta_1,\\dots,\\delta_{\\ell})$\nis a partition of $d$, which is called the \\textit{topological type of $z$}.\n\n\nFor a partition $\\Delta$ of a number $d=|\\Delta|$ denote by $\\ell(\\Delta)$ the number of non-vanishing parts ($|\\Delta|$ and\n$\\ell(\\Delta)$ are called the weight and the length of $\\Delta$, respectively). We denote a partition and its Young diagram by \nthe same letter. Denote by $(\\delta_1,\\dots,\\delta_{\\ell})$ the Young diagram with rows of length $\\delta_1,\\dots,\\delta_{\\ell}$\nand the corresponding partition of $d=\\sum \\delta_i$. \n\n\nFix now points $z_1,\\dots,z_{\\textsc{f}}$ and partitions $\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})}$ of $d$. Denote by\n\\[\\widetilde{C}_{\\Omega (z_1\\dots,z_{\\textsc{f}})} (d;\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})})\\]\nthe set of all branched coverings $f:\\Sigma\\rightarrow\\Omega$ with critical points $z_1,\\dots,z_{\\textsc{f}}$ of topological\ntypes $\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})}$.\n\nCoverings $f_1:\\Sigma_1\\rightarrow\\Omega$ and $f_2:\\Sigma_2\\rightarrow\\Omega$ are called isomorphic if there exists a\nhomeomorphism $\\varphi:\\Sigma_1\\rightarrow\\Sigma_2$ such that $f_1=f_2\\varphi$. Denote by $\\texttt{Aut}(f)$ the group of\nautomorphisms of the covering $f$. Isomorphic coverings have isomorphic automorphism groups, of order $|\\texttt{Aut}(f)|$.\n\nConsider now the set $C_{\\Omega (z_1\\dots,z_{\\textsc{f}})} (d;\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})})$ of isomorphism classes\nin $\\widetilde{C}_{\\Omega (z_1\\dots,z_{\\textsc{f}})} (d;\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})})$. 
This is a finite set.\nThe sum\n\\begin{equation}\\label{Hurwitz-number-geom-definition}\nH^{\\textsc{e},\\textsc{f}}(d;\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})})=\n\\sum\\limits_{f\\in C_{\\Omega (z_1\\dots,z_{\\textsc{f}})}(d;\\Delta^{(1)},\\dots,\n\\Delta^{(\\textsc{f})})}\\frac{1} {|\\texttt{Aut}(f)|}\\quad,\n\\end{equation}\ndoes not depend on the locations of the points $z_1,\\dots,z_{\\textsc{f}}$ and is called the \\textit{Hurwitz number}.\nHere $\\textsc{f}$ denotes the number of branch points, and $\\textsc{e}$ is the Euler characteristic of the base surface.\n\nWhen it produces no confusion, we admit 'trivial' profiles $(1^d)$ among $\\Delta^1,\\dots,\\Delta^{\\textsc{f}}$ in\n(\\ref{Hurwitz-number-geom-definition}),\nkeeping the notation $H^{\\textsc{e},\\textsc{f}}$ even though the number of critical points is then less than $\\textsc{f}$.\n\n\nIf we count only connected covers $\\Sigma$, we get the \\textit{connected} Hurwitz numbers \n$H_{\\rm con}^{\\textsc{e},\\textsc{f}}(d;\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})})$.\n\n\n\\vspace{1ex}\n\nHurwitz numbers arise in different fields of mathematics, from algebraic geometry to integrable systems. They are well\nstudied for orientable $\\Omega$. In this case the Hurwitz number coincides with the weighted number of holomorphic branched\ncoverings of a Riemann surface $\\Omega$ by other Riemann surfaces, having critical points $z_1,\\dots,z_\\textsc{f}\\in\\Omega$ of\nthe topological types $\\Delta^{(1)},\\dots,\\Delta^{(\\textsc{f})}$ respectively. 
The well known isomorphism between Riemann\nsurfaces and complex algebraic curves gives the interpretation of the Hurwitz numbers as the numbers of morphisms of\ncomplex algebraic curves.\n\nSimilarly, the Hurwitz number for a non-orientable surface $\\Omega$ coincides with the weighted number of the dianalytic\nbranched coverings of a Klein surface without boundary by another Klein surface, and coincides with the weighted number\nof morphisms of real algebraic curves without real points \\cite{AG,N90,N2004}. An extension of the theory to all Klein surfaces\nand all real algebraic curves, which leads to Hurwitz numbers for surfaces\nwith boundary, may be found in \\cite{AN,N}.\n\n\nThe Riemann-Hurwitz formula relates the Euler characteristic of the base surface $\\textsc{e}$ and the Euler characteristic of\nthe $d$-fold cover $\\textsc{e}'$ as follows:\n\\begin{equation}\\label{RH}\n \\textsc{e}'= d\\textsc{e}+\\sum_{i=1}^{\\textsc{f}}\\left(\\ell(\\Delta^{(i)})-d\\right)\n\\end{equation}\nwhere the sum ranges over all branch points $z_i\\,,i=1,2,\\dots$ with ramification profiles given by partitions $\\Delta^i\\,,i=1,2,\\dots$\nrespectively, and $\\ell(\\Delta^{(i)})$ denotes the length of the partition $\\Delta^{(i)}$, which is equal to the number of\npreimages $f^{-1}(z_i)$ of the point $z_i$.\n\n\\vspace{1ex}\n{\\bf Example 1}.\nLet $f:\\Sigma\\rightarrow\\mathbb{CP}^1$ be a covering without critical points.\nThen each $d$-fold cover is a union of $d$ Riemann spheres: $\\mathbb{CP}^1 \\coprod \\cdots \\coprod \\mathbb{CP}^1$; hence\n$|\\texttt{Aut}(f)| =d!$ and $H^{2,0}(d)=\\frac{1}{d!}$.\n\n\n\\vspace{1ex}\n{\\bf Example 2}.\nLet $f:\\Sigma\\rightarrow\\mathbb{CP}^1$ be a $d$-fold covering with two critical points with the profiles \n$\\Delta^{(1)}=\\Delta^{(2)}=(d)$.\n(One may think of $f=x^d$). Then $H^{2,2}(d;(d),(d))=\\frac 1d$. 
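The Riemann-Hurwitz relation above is easy to check numerically on such examples. A minimal sketch (the function name and the tuple encoding of profiles are illustrative choices, not from the paper):

```python
# Sanity check of the Riemann-Hurwitz relation E' = d*E + sum_i (len(Delta_i) - d),
# where each profile Delta_i is encoded as a tuple of its parts.
def euler_char_cover(d, E_base, profiles):
    return d * E_base + sum(len(p) - d for p in profiles)

# Example 2: f = x^d covering CP^1 (E = 2) with two branch points of profile (d);
# the cover is again a sphere, so E' = 2.
assert euler_char_cover(5, 2, [(5,), (5,)]) == 2

# A cover of RP^2 (E = 1) with a single branch point of profile (m, m):
# the cover is the sphere, E' = len(profile) = 2.
assert euler_char_cover(6, 1, [(3, 3)]) == 2
```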
Let us note that $\\Sigma$ is connected in this case \n(therefore $H^{2,2}(d;(d),(d))=H_{\\rm con}^{2,2}(d;(d),(d)) $)\nand its Euler\ncharacteristic is $\\textsc{e}'=2$.\n\n\n\\vspace{1ex}\n{\\bf Example 3}.\n The generating\nfunction for the Hurwitz numbers $H^{2,2}(d;(d),(d))$ from the previous Example may be written as\n$$\nF(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}):=\\, h^{-2}\\sum_{d>0}\\, H_{\\rm con}^{2,2}(d;(d),(d)) p_d^{(1)}p_d^{(2)}=\nh^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}\n$$ \nHere $\\mathbf{p}^{(i)}=(p_1^{(i)},p_2^{(i)},\\dots),\\,i=1,2$ are two sets of formal parameters. The powers of the auxiliary\nparameter $\\frac 1h$ count the Euler characteristic of the cover $\\textsc{e}'$, which is 2 in our example.\nThen, thanks to the known general statement about the link between generating functions of \"connected\" and \"disconnected\"\nHurwitz numbers (see for instance \\cite{ZL}), one can write down the generating function for the Hurwitz numbers for covers with\ntwo critical points,\n$H^{2,2}(d;\\Delta^{(1)},\\Delta^{(2)})$, as follows:\n\\[\n\\tau(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)})=e^{F(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}) } =\n\\]\n\\begin{equation}\\label{E=2,F=2Hurwitz}\ne^{h^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}}\\,\n= \\,\n\\sum_{d\\ge 0} \\sum_{\\Delta^{(1)},\\Delta^{(2)}}\n H^{2,2}(d;\\Delta^{(1)},\\Delta^{(2)}) \\,h^{-\\textsc{e}'} \\mathbf{p}^{(1)}_{\\Delta^{(1)}}\\mathbf{p}^{(2)}_{\\Delta^{(2)}}\n\\end{equation}\nwhere $\\mathbf{p}^{(i)}_{\\Delta^{(i)}}:=p^{(i)}_{\\delta^{(i)}_1}p^{(i)}_{\\delta^{(i)}_2}p^{(i)}_{\\delta^{(i)}_3}\\cdots$, $i=1,2$\nand where $\\textsc{e}'=\\ell(\\Delta^{(1)}) + \\ell(\\Delta^{(2)})$ in agreement with (\\ref{RH}) where we put $\\textsc{f}=2$.\nFrom (\\ref{E=2,F=2Hurwitz}) it follows that the profiles of both critical points\ncoincide; otherwise the Hurwitz number vanishes. 
Let us denote this profile by $\\Delta$, \nwith $|\\Delta|=d$; from the last equality we get\n$$\nH^{2,2}(d;\\Delta,\\Delta) = \\frac {1}{z_{\\Delta}}\n$$\nHere\n\\begin{equation}\nz_\\Delta\\,=\\,\\prod_{i=1}^\\infty \\,i^{m_i}\\,m_i!\n\\end{equation}\nwhere $m_i$ denotes the number of parts equal to $i$ of the partition $\\Delta$ (the partition $\\Delta$ is then often\ndenoted by $(1^{m_1}2^{m_2}\\cdots)$).\n\n\n\\vspace{1ex}\n{\\bf Example 4}.\nLet $f:\\Sigma\\rightarrow\\mathbb{RP}^2$ be a covering without critical points.\nIf $\\Sigma$ is connected, then $\\Sigma=\\mathbb{RP}^2$,\n$\\deg f=1$\\quad or $\\Sigma=S^2$, $\\deg f=2$. Next, if $d=3$, then\n$\\Sigma=\\mathbb{RP}^2\\coprod\\mathbb{RP}^2\\coprod\\mathbb{RP}^2$ or $\\Sigma=\\mathbb{RP}^2\\coprod S^2$.\nThus $H^{1,0}(3)=\\frac{1}{3!}+\\frac{1}{2!}=\\frac{2}{3}$.\n\n\n\n\n\\vspace{1ex}\n{\\bf Example 5}.\nLet $f:\\Sigma\\rightarrow\\mathbb{RP}^2$ be a covering with a single critical point with profile $\\Delta$, and $\\Sigma$ \nis connected.\n Note that due to (\\ref{RH}) the Euler\ncharacteristic of $\\Sigma$ is $\\textsc{e}'=\\ell(\\Delta)$. \n(One may think of $f=z^d$ defined in the unit disc where we identify $z$ and $-z$ if $|z|=1$).\nWhen we cover the Riemann sphere by the Riemann sphere via $z\\to z^m$, we get\ntwo critical points with the same profiles. However, when we cover $\\mathbb{RP}^2$ by the Riemann sphere, we have the \ncomposition of the\nmapping $z\\to z^{m}$ on the\nRiemann sphere and the factorization by the antipodal involution $z\\to - \\frac{1}{\\bar z}$. 
Thus we have the ramification \nprofile $(m,m)$\nat the single critical point $0$ of $\\mathbb{RP}^2$.\nThe automorphism group is the dihedral group of order $2m$, which consists of rotations by $\\frac{2\\pi }{m}$ and \nthe antipodal involution\n$z\\to -\\frac{1}{\\bar z}$.\nThus we get that \n$$\nH_{\\rm con}^{1,1}\\left(2m;(m,m)\\right)=\\frac{1}{2m}\n$$\nFrom (\\ref{RH}) we see that $\\textsc{e}'=\\ell(\\Delta)=2$ in this case.\nNow let us cover $\\mathbb{RP}^2$ by $\\mathbb{RP}^2$ via $z\\to z^d$. From (\\ref{RH}) we see that $\\ell(\\Delta)=1$.\nFor even $d$ we have the critical point\n$0$; in addition, each point of the unit\ncircle $|z|=1$ is critical (a folding), while from the beginning we restrict our consideration to isolated critical points.\nFor odd $d=2m-1$ there is\na single critical point $0$, and the automorphism group consists of rotations by the angle $\\frac{2\\pi}{2m-1}$. Thus in this case\n$$\nH_{\\rm con}^{1,1}\\left(2m-1;(2m-1)\\right)=\\frac{1}{2m-1}\n$$\n\n\\vspace{1ex}\n{\\bf Example 6}.\nThe generating series of the connected Hurwitz numbers with a single critical point from the previous Example is\n\\[\nF(h^{-1}\\mathbf{p})=\n \\frac{1}{h^2}\\sum_{m>0} p_m^2 H_{\\rm con}^{1,1}\\left(2m;(m,m)\\right) +\n \\frac{1}{h} \\sum_{m>0} p_{2m-1} H_{\\rm con}^{1,1}\\left(2m-1;(2m-1)\\right)\n\\]\nwhere $H_{{\\rm con}}^{1,1}$ describes $d$-fold coverings either by the Riemann\nsphere ($d=2m$) or by the projective plane ($d=2m-1$). \nWe get the generating function for Hurwitz numbers with a single critical point\n\\[\n\\tau(h^{-1}\\mathbf{p})=e^{F(h^{-1}\\mathbf{p} ) } =\n\\]\n\\begin{equation}\\label{single-branch-point'}\ne^{\\frac {1}{h^2}\\sum_{m>0} \\frac {1}{2m}p_m^2 +\\frac 1h\\sum_{m {\\rm odd}} \\frac 1m p_m }=\n\\sum_{d>0} \n\\sum_{\\Delta\\atop |\\Delta|=d} h^{-\\ell(\\Delta)} \\mathbf{p}_\\Delta\nH^{1,a}(d;\\Delta)\n\\end{equation}\nwhere $a=0$ if $\\Delta=(1^d)$, and $a=1$\n otherwise. 
Then $H^{1,1}(d;\\Delta)$ is the Hurwitz number \ndescribing the $d$-fold coverings of $\\mathbb{RP}^2$ with a single\nbranch point of type $\\Delta=(d_1,\\dots,d_l),\\,|\\Delta|=d$ by a (not necessarily connected) Klein surface of\nEuler characteristic $\\textsc{e}'=\\ell(\\Delta)$. For instance, for $d=3$, $\\textsc{e}'=1$ we get\n$H^{1,1}(3;\\Delta)=\\frac 13\\delta_{\\Delta,(3)}$.\nFor unbranched coverings (that is, for $a=0$, $\\textsc{e}'=d$) we get formula (\\ref{unbranched}).\n\n\\paragraph{Tau functions.}\n\nLet us note that the expression presented in (\\ref{E=2,F=2Hurwitz}), namely,\n\\begin{equation}\\label{2KP-tau-vac-Schur}\n \\tau^{\\rm 2KP}_1(h^{-1}\\mathbf{p}^{(1)},h^{-1}\\mathbf{p}^{(2)}) = \ne^{h^{-2}\\sum_{d>0} \\frac 1d p_d^{(1)}p_d^{(2)}}\n\\end{equation}\ncoincides with the simplest two-component KP tau function\nwith two sets of higher times $h^{-1}\\mathbf{p}^{(i)},\\, i=1,2$, while (\\ref{single-branch-point'}) may be recognized\nas the simplest non-trivial tau function of the BKP hierarchy of Kac and van de Leur \\cite{KvdLbispec} \n\\begin{equation}\n \\tau^{\\rm BKP}_1(h^{-1}\\mathbf{p}) =\ne^{\\frac {1}{h^2}\\sum_{m>0} \\frac {1}{2m}p_m^2 +\\frac 1h\\sum_{m {\\rm odd}} \\frac 1m p_m }\n\\end{equation}\nwritten down in\n\\cite{OST-I}. In (\\ref{E=2,F=2Hurwitz}) and in (\\ref{single-branch-point'}) the higher times are rescaled as\n$p_m\\to h^{-1}p_m,\\,m>0$, as is common in the study of integrable dispersionless equations, where only\nthe top power of the 'Planck constant' $h$ is taken into account. 
For instance, see \\cite{NZ} where the counting\nof coverings of the Riemann sphere by Riemann spheres was related to the so-called Laplacian growth problem \\cite{MWZ},\n\\cite{Z}.\nFor the quasiclassical limit of the DKP hierarchy see \\cite{ATZ}.\nThe rescaling is also common for tau functions used in two-dimensional gravity, where the powers $h^{-\\textsc{e}}$ \ngroup contributions of surfaces of Euler characteristic $\\textsc{e}$ to the 2D gravity partition function \\cite{BrezinKazakov}.\nIn the context of the links between Hurwitz numbers and integrable hierarchies the rescaling $\\mathbf{p} \\to h^{-1}\\mathbf{p}$ was\nconsidered in \\cite{Harnad-overview-2015} and in \\cite{NO-LMP}. In our case a role similar to that of $h$ is played by $N^{-1}$, \nwhere $N$ is the size of the matrices in the matrix integrals.\n\nWith the help of these tau functions we shall construct integrals over matrices. To do this we present the variables\n$\\mathbf{p}^{(i)},\\, i=1,2$ and $\\mathbf{p}$ as traces of a matrix we are interested in. We write $\\mathbf{p}(X)=\\left(p_1(X),p_2(X),\\dots \\right)$,\nwhere\n\\begin{equation}\\label{p_m(X)}\np_m(X) = \\mathrm {tr} X^m = \\sum_{i=1}^N x_i^m\n\\end{equation}\nand where $x_1,\\dots,x_N$ are the eigenvalues of $X$. 
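The identity $p_m(X)=\mathrm{tr}\,X^m=\sum_i x_i^m$ can be verified numerically. A minimal sketch (the matrix size and random seed are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Check p_m(X) = tr X^m = sum of m-th powers of the eigenvalues of X.
rng = np.random.default_rng(0)
N = 5
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = Z @ Z.conj().T  # a Hermitian (hence normal) matrix with well-defined spectrum

eigs = np.linalg.eigvals(X)
for m in range(1, 5):
    p_trace = np.trace(np.linalg.matrix_power(X, m))  # tr X^m
    p_eigs = np.sum(eigs ** m)                        # sum_i x_i^m
    assert np.isclose(p_trace, p_eigs)
```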
 \n\n\nIn this case we use non-bold capital letters for the matrix argument, and our tau functions\nare tau functions of the matrix argument:\n\\begin{equation}\\label{tau_1}\n\\tau_1^{\\rm 2KP}(X,\\mathbf{p}):=\\tau_1^{\\rm 2KP}(\\mathbf{p}(X),\\mathbf{p})=\n\\sum_\\lambda s_\\lambda(X)s_\\lambda(\\mathbf{p}) =e^{\\mathrm {tr} V(X,\\mathbf{p})}=\n\\prod_{i=1}^N e^{\\sum_{m=1}^\\infty \\frac{1}{m}x_i^mp_m}\n\\end{equation}\nwhere $x_i$ are the eigenvalues of $X$,\nwhere $\\mathbf{p}=(p_1,p_2,\\dots)$ is a semi-infinite set of parameters,\nand\n\\begin{equation}\\label{tau_1^B}\n \\tau_1^{\\rm BKP}(X): =\\tau_1^{\\rm BKP}(\\mathbf{p}(X)) =\n \\sum_\\lambda s_\\lambda(X) =\\prod_{i=1}^N (1-x_i)^{-1}\\prod_{i<j\\le N} (1-x_ix_j)^{-1},\\qquad s_\\lambda(X)=0\\,\\,{\\rm for}\\,\\,\\ell(\\lambda)>N\n\\end{equation}\nwhere $\\ell(\\lambda)$ is the length of a partition $\\lambda=(\\lambda_1,\\dots,\\lambda_\\ell),\\,\\lambda_\\ell >0$.\n\nFor further purposes we need the following spectral invariants\nof a matrix $X$:\n\\begin{equation}\\label{spectral-invariant}\n{\\bf P}_\\Delta(X):=\\prod_{i=1}^\\ell p_{\\delta_i}(X)\n\\end{equation}\nwhere $\\Delta=(\\delta_1,\\dots, \\delta_\\ell)$ is a partition and each $p_{\\delta_i}$ is defined by (\\ref{p_m(X)}).\n\n\nIn our notation one can write\n\\begin{equation}\\label{tau_1-XY}\n\\tau_1^{\\rm 2KP}(X,Y)=\\tau_1^{\\rm 2KP}(\\mathbf{p}(X),\\mathbf{p}(Y))=\n\\sum_{\\Delta}\\frac{1}{z_\\Delta} {\\bf P}_\\Delta(X){\\bf P}_\\Delta(Y)\n\\end{equation}\n\n\\paragraph{Combinatorial approach.} The study of the homomorphisms between the fundamental group of the base Riemann surface \nof genus $g$ (the Euler characteristic is, respectively, $\\textsc{e}=2-2g$)\nwith \n$\\textsc{f}$ marked points and the symmetric group, in the context of the counting of the non-equivalent $d$-fold coverings with \ngiven profiles \n$\\Delta^{i},\\,i=1,\\dots,\\textsc{f}$, results in the following equation (for the details see, for instance, Appendix A written by Zagier for the \nRussian edition of \\cite{ZL} or the works \\cite{M1}, 
\\cite{GARETH.A.JONES})\n\\begin{equation}\\label{Hom-pi-S_d-Riemann}\n\\prod_{j=1}^g a_jb_ja_j^{-1}b_j^{-1}X_1\\cdots X_{\\textsc{f}} =1\n\\end{equation}\nwhere $a_j,b_j,X_i\\in S_d$ and where each $X_i$ belongs to the cycle class $C_{\\Delta^i}$. Then the Hurwitz number \n$H^{2-2g,\\textsc{f}}(d;\\Delta^1,\\dots,\\Delta^\\textsc{f})$ is equal to the number of solutions of equation (\\ref{Hom-pi-S_d-Riemann})\ndivided by the order of the symmetric group $S_d$ (to exclude the equivalent solutions obtained by the conjugation of all factors in\n(\\ref{Hom-pi-S_d-Riemann}) by elements of the group; in the geometrical approach each conjugation means the re-enumeration of the $d$ sheets\nof the cover).\n\nFor instance, Example 3 considered above counts non-equivalent solutions of the equation $X_1X_2=1$ with given cycle classes \n$C_{\\Delta^1}$ and $C_{\\Delta^2}$. Solutions of this equation consist of all elements of the class $C_{\\Delta^1}$ and their inverses, so \n$\\Delta^2=\\Delta^1=:\\Delta$. The number of elements of any class $C_\\Delta$ (the cardinality $|C_\\Delta|$) divided by $|\\Delta|!$ \nis $1 \\over z_\\Delta$, as we got in Example 3.\n\nFor Klein surfaces (see \\cite{M2},\\cite{GARETH.A.JONES}) instead of (\\ref{Hom-pi-S_d-Riemann}) we get \n\\begin{equation}\\label{Hom-pi-S_d-Klein}\n\\prod_{j=1}^g R_j^2 X_1\\cdots X_{\\textsc{f}} =1\n\\end{equation}\nwhere $R_j,X_i\\in S_d$ and where each $X_i$ belongs to the cycle class $C_{\\Delta^i}$. In (\\ref{Hom-pi-S_d-Klein}),\n$g$ is the so-called genus of the non-orientable surface, which is related to its Euler characteristic $\\textsc{e}$ as \n$\\textsc{e}=2-g$. For the projective plane ($\\textsc{e}=1$) we have $g=1$, for the Klein bottle ($\\textsc{e}=0$) $g=2$.\n\nConsider unbranched covers of the torus (equation (\\ref{Hom-pi-S_d-Riemann})), the projective plane and the Klein bottle\n(equation (\\ref{Hom-pi-S_d-Klein})). 
For these we put each $X_i=1$ in (\\ref{Hom-pi-S_d-Riemann}) and (\\ref{Hom-pi-S_d-Klein}).\nThe torus ($\\textsc{e}=0$), the real projective plane ($\\textsc{e}=1$) and the Klein bottle \n($\\textsc{e}=0$) may be obtained by the identification of a square's edges. We get $aba^{-1}b^{-1}=1$ for the torus,\n$abab=1$ for the projective plane and $abab^{-1}=1$ for the Klein bottle.\n\nConsider unbranched coverings ($\\textsc{f}=0$).\nFor the real projective plane we have $g=1$ in (\\ref{Hom-pi-S_d-Klein}) and only one $R_1=ab$. If we treat the projective plane as the unit disk\nwith identified opposite points of the boundary $|z|=1$, then $R$ is related to the path from $z$ to $-z$.\nFor the Klein bottle ($g=2$ in (\\ref{Hom-pi-S_d-Klein})) there are $R_1=ab$ and $R_2=b^{-1}$.\n\nTo avoid confusion, in what follows we will use the notion of genus and the notation $g$ only for Riemann surfaces, while\nthe notion of the Euler characteristic $\\textsc{e}$ we shall use both for orientable and non-orientable surfaces.\n\n \n \n\\section{Random matrices. Complex Ginibre ensemble.}\nOn this subject there is an extensive literature; for instance, see \\cite{Ak1}, \\cite{Ak2}, \\cite{AkStrahov}, \n\\cite{S1}, \\cite{S2}.\n\nWe will consider integrals over complex matrices $Z_1,\\dots,Z_n$ where the measure is defined as\n\\begin{equation}\nd\\Omega(Z_1,\\dots,Z_n)= \\prod_{\\alpha=1}^n d\\mu(Z_\\alpha)=c \n\\prod_{\\alpha=1}^n\\prod_{i,j=1}^N d\\Re (Z_\\alpha)_{ij}d\\Im (Z_\\alpha)_{ij}e^{-|(Z_\\alpha)_{ij}|^2}\n\\end{equation}\nwhere the integration range is $\\mathbb{C}^{N^2}\\times \\cdots \\times\\mathbb{C}^{N^2}$ and where $c$ is the normalization \nconstant defined via $\\int d \\Omega(Z_1,\\dots,Z_n)=1$. \n\n\nWe treat this measure as a probability measure. The related ensemble is called the ensemble of $n$ independent \ncomplex Ginibre ensembles. 
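Under this measure each entry $(Z_\alpha)_{ij}$ is a standard complex Gaussian with $\mathbb{E}|(Z_\alpha)_{ij}|^2=1$, so for a single Ginibre matrix $\mathbb{E}\,\mathrm{tr}\,ZZ^\dag=N^2$. A Monte Carlo sketch of this check (sample size, seed and matrix size are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Monte Carlo check of E(tr Z Z^dagger) = N^2 for the complex Ginibre ensemble
# with weight ~ exp(-|Z_ij|^2): Re and Im of each entry are N(0, 1/2).
rng = np.random.default_rng(1)
N, samples = 4, 20000
acc = 0.0
for _ in range(samples):
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    acc += np.trace(Z @ Z.conj().T).real
estimate = acc / samples
assert abs(estimate - N * N) < 0.5  # statistical error here is ~0.03
```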
\nThe expectation of a quantity\n$f$ which depends on the entries of the matrices $Z_1,\\dots,Z_n$ is defined by\n$$\n\\mathbb{E}(f)=\\int f(Z_1,\\dots,Z_n) d\\Omega(Z_1,\\dots,Z_n).\n$$\n\nLet us introduce\nthe following products\n\\begin{eqnarray}\\label{Z}\nX&:=&(Z_1 C_1) \\cdots (Z_n C_n)\\\\ \n\\label{tildeZ^*}\nY_{t}&:=& Z^\\dag_n Z^\\dag_{n-1} \\cdots Z^\\dag_{t+1} Z^\\dag_1Z^\\dag_2 \\cdots Z^\\dag_{t} ,\\qquad 0< t < n\n\\end{eqnarray}\nwhere $Z_\\alpha^\\dag$ is the Hermitian conjugate of $Z_\\alpha$. \nWe are interested in correlation functions of spectral invariants of the matrices $X$ and $Y_t$.\n\nWe denote by $x_1,\\dots,x_N$ and by $y_1,\\dots,y_N$ the eigenvalues of the matrices $X$ and $Y_t$, respectively.\nLet $\\lambda=(\\lambda_1,\\dots,\\lambda_l)$ and $\\mu=(\\mu_1,\\dots,\\mu_k)$, $l,k\\le N$, be partitions. Let us introduce the following \nspectral invariants\n\\begin{equation}\n{\\bf P}_\\lambda (X)=p_{\\lambda_1}(X)\\cdots p_{\\lambda_l} (X),\\qquad\n{\\bf P}_\\mu(Y_t)=p_{\\mu_1}(Y_t)\\cdots p_{\\mu_k}(Y_t)\n\\end{equation}\nwhere each $p_m(X)$ is defined via (\\ref{p_m(X)}).\n\nFor a given partition $\\lambda$, such that $d:=|\\lambda|\\le N$, let us consider the spectral invariant \n${\\bf P}_\\lambda$ of the matrix $XY_{t}$ \n(see (\\ref{spectral-invariant})). We have \n\n\n\\begin{Theorem}\\label{Theorem1}\n Let $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n Denote $\\textsc{e}=2-2g$. \n \n(A) Let $n > t=2g \\ge 0$. \nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g+1}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g+1}\nH^{2-2g,n+2-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g+1})P_{\\Delta^{n-2g+1}}(C' C'')\n\\prod_{i=1}^{n-2g} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where\n \\begin{equation}\\label{C'C''2g}\n C' = C_1C_3 \\cdots C_{2g-1} ,\n\\qquad\nC''=C_2C_4\\cdots C_{2g}\n\\end{equation} \n\n\n(B) Let $n > t=2g+1 \\ge 1$. 
\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)=\n \\] \n \\begin{equation}\n z_\\lambda \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g+1}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g+1}\nH^{2-2g,n+2-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g+1})P_{\\Delta^{n-2g}}(C')P_{\\Delta^{n-2g+1}}(C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+1+i})\n \\end{equation}\n where\n \\begin{equation}\\label{C'C''2g+1}\nC'= C_1C_3 \\cdots C_{2g+1} ,\n\\qquad\nC''=C_2C_4\\cdots C_{2g}\n\\end{equation} \n \n\\end{Theorem}\n\n\n\\begin{Corollary}\nLet $|\\lambda|=d\\le N$ as before, and let each $C_i=I_N$ ($N\\times N$ unity matrix).\nThen\n \\begin{equation}\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)=\n N^{nd-\\ell(\\lambda)} \\sum_{\\textsc{e}'} N^{\\textsc{e}'}\n S^{\\textsc{e}'}_{\\textsc{e}}(\\lambda)\n \\end{equation}\n where $\\textsc{e}=2-2g$ and where\n \\begin{equation}\n S^{\\textsc{e}'}_{\\textsc{e}}(\\lambda) :=\n\\sum_{\\Delta^1,\\dots,\\Delta^{n+\\textsc{e}-1}\\atop \n\\sum_{i=1}^{n+\\textsc{e}-1}\\ell(\\Delta^{i}) =L}\nH^{\\textsc{e},n+\\textsc{e}}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n+\\textsc{e}-1}),\\quad L=-\\ell(\\lambda)+ nd +\\textsc{e}' \n \\end{equation}\nis the sum of Hurwitz numbers counting all $d$-fold coverings with the following properties: \n\n(i) the Euler characteristic of the base surface is $\\textsc{e}$ \n\n(ii) the Euler characteristic of the cover is $\\textsc{e}'$ \n\n(iii) there are at most $\\textsc{f}=n+\\textsc{e}$ critical points\n\n\n\\end{Corollary}\n\nThe item (ii) in the Corollary follows from the equality ${\\bf P}_\\Delta(I_N)=N^{\\ell(\\Delta)}$ \n(see (\\ref{spectral-invariant}) and (\\ref{p_m(X)})) and\nfrom the Riemann-Hurwitz relation which relates Euler characteristics of a base and a cover via branch points\nprofile's lengths (see 
(\\ref{RH})):\n$$\n\\sum_{i=1}^{n+\\textsc{e}-1}\\ell(\\Delta^{i}) =-\\ell(\\lambda) +(\\textsc{f}- \\textsc{e})d +\\textsc{e}'\n$$ \nIn our case $\\textsc{f}- \\textsc{e}=n$.\n\n\n\n\\begin{Theorem} \\label{Theorem2}\nLet $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n \n (A) If $|\\lambda|\\neq |\\mu|$ then\n $\\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_t)\\right)=0$.\n\n \n (B) Let $|\\lambda| = |\\mu| =d$ and $n-1 > t=2g+1 \\ge 1$.\n \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_{2g+1})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda z_\\mu \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g}\nH^{2-2g,n+2-2g}(d;\\lambda,\\mu,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g-1}}(C')P_{\\Delta^{n-2g}}(C_{n}C'')\n\\prod_{i=1}^{n-2g-2} P_{\\Delta^i}(C_{2g+1+i}) \n \\end{equation} \n where $C'$ and $C''$ are given by (\\ref{C'C''2g+1}).\n \n (C) \n Let $|\\lambda| = |\\mu|=d$ and $n > t=2g \\ge 0$. \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) {\\bf P}_\\mu(Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda z_\\mu \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g}\nH^{2-2g,n+2-2g}(d;\\lambda,\\mu,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g}}(C'C_n C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where $C'$ and $C''$ are given by (\\ref{C'C''2g}).\n \n \n\\end{Theorem}\n\n\n\\begin{Corollary}\n Let $|\\lambda|=d\\le N$ as before, and let each $C_i=I_N$.\nThen\n \\begin{equation}\n \\frac{1}{z_\\lambda z_\\mu}\\mathbb{E}\\left({\\bf P}_\\lambda (X){\\bf P}_\\lambda(Y_{2g})\\right)=\n \\frac{1}{z_\\lambda z_\\mu}\\mathbb{E}\\left({\\bf P}_\\lambda (X){\\bf P}_\\lambda(Y_{2g+1})\\right)\n =\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g})\\right)=\n \\frac{1}{z_\\lambda}\\mathbb{E}\\left({\\bf P}_\\lambda (X Y_{2g+1})\\right)\n 
\\end{equation}\n\n\n\n\\end{Corollary}\n\n\n\n\\begin{Theorem} \n\\label{Theorem3}\nLet $X$ and $Y_t$ be defined by (\\ref{Z})-(\\ref{tildeZ^*}).\n \n \n (A) Let $n-1 > t=2g+1 \\ge 1$.\n \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) \\tau^{\\rm BKP}_1(Y_{2g+1})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|= |\\Delta^{j}|=d,\\,j\\le n-2g}\nH^{1-2g,n+1-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g-1}}(C')P_{\\Delta^{n-2g}}(C_{n}C'')\n\\prod_{i=1}^{n-2g-2} P_{\\Delta^i}(C_{2g+1+i}) \n \\end{equation} \n where $C'$ and $C''$ are given by (\\ref{C'C''2g+1}).\n \n (B) \n Let $n > t=2g \\ge 0$. \n\nThen\n \\[\n \\mathbb{E}\\left({\\bf P}_\\lambda (X) \\tau^{\\rm BKP}_1(Y_{2g})\\right)=\n \\]\n \\begin{equation}\n z_\\lambda \n \\sum_{\\Delta^1,\\dots,\\Delta^{n-2g}\\atop |\\lambda|=|\\Delta^{j}| =d,\\, j\\le n-2g}\nH^{1-2g,n+1-2g}(d;\\lambda,\\Delta^{1},\\dots,\\Delta^{n-2g})P_{\\Delta^{n-2g}}(C'C_n C'')\n\\prod_{i=1}^{n-2g-1} P_{\\Delta^i}(C_{2g+i})\n \\end{equation}\n where $C'$ and $C''$ are given by (\\ref{C'C''2g}).\n \n \n\\end{Theorem}\n\nSketch of the proof.\n \nThe character Frobenius-type formula by Mednykh-Pozdnyakova-Jones \\cite{M2},\\cite{GARETH.A.JONES} \n\\begin{equation}\\label{Hurwitz-number}\nH^{\\textsc{e},k}(d;\\Delta^1,\\dots,\\Delta^{k})=\\sum_{\\lambda\\atop |\\lambda|=d}\n\\left(\\frac{{\\rm dim}\\lambda}{d!} \\right)^{\\textsc{e}}\\varphi_\\lambda(\\Delta^1)\\cdots \n\\varphi_\\lambda(\\Delta^k)\n\\end{equation}\nwhere \n${\\rm dim}\\lambda$ is the dimension of the irreducible representation of $S_d$, and\n\\begin{equation}\n\\label{varphi}\n\\varphi_\\lambda(\\Delta^{(i)}) := |C_{\\Delta^{(i)}}|\\,\\,\\frac{\\chi_\\lambda(\\Delta^{(i)})}{{\\rm dim}\\lambda} ,\n\\quad {\\rm dim}\\lambda:=\\chi_\\lambda\\left((1^d)\\right)\n\\end{equation}\n$\\chi_\\lambda(\\Delta)$ is the character of the symmetric group $S_d$ evaluated at a cycle type 
$\\Delta$,\nand $\\chi_\\lambda$ ranges over the irreducible complex characters of $S_d$ (they are\nlabeled by partitions $\\lambda=(\\lambda_1,\\dots,\\lambda_{\\ell})$ of a given weight $d=|\\lambda|$). It \nis supposed that $d=|\\lambda|=|\\Delta^{1}|=\\cdots =|\\Delta^{k}|$.\n$|C_\\Delta |$ is the cardinality of the cycle\nclass $C_\\Delta$ in $S_d$.\n\nThen we use the characteristic map relation\n\\cite{Mac}:\n\\begin{equation}\\label{Schur-char-map}\ns_\\lambda(\\mathbf{p})=\\frac{{\\rm dim}\\lambda}{d!}\\left(p_1^d+\\sum_{\\Delta\\atop |\\Delta|=d } \\varphi_\\lambda(\\Delta)\\mathbf{p}_{\\Delta}\\right)\n\\end{equation}\nwhere $\\mathbf{p}_\\Delta=p_{\\Delta_1}\\cdots p_{\\Delta_{\\ell}}$ and where $\\Delta=(\\Delta_1,\\dots,\\Delta_\\ell)$ is a partition whose weight\ncoincides with the weight of $\\lambda$: $|\\lambda|=|\\Delta|$. Here \n\\begin{equation}\n{\\rm dim}\\lambda =d!s_\\lambda(\\mathbf{p}_\\infty),\\qquad \\mathbf{p}_\\infty = (1,0,0,\\dots)\n\\end{equation}\nis the dimension of the irreducible representation of the symmetric group $S_d$. It is understood that \n$\\varphi_\\lambda(\\Delta)=0$ if $|\\Delta|\\neq |\\lambda|$.\n\nThen we know how to evaluate the integrals with the Schur functions via the Lemma\n used in \\cite{O-2004-New} and \\cite{NO-2014}, \\cite{NO-LMP}\n(for instance, see \\cite{Mac} for the derivation). \n\\begin{Lemma} \\label{useful-relations}\nLet $A$ and $B$ be normal matrices (i.e. matrices diagonalizable by unitary transformations), and let $p_{\\infty}=(1,0,0,\\dots)$. Then 
\n\\begin{equation}\\label{sAZBZ^+'}\n\\int_{\\mathbb{C}^{n^2}} s_\\lambda(AZBZ^+)e^{-\\textrm{Tr}\nZZ^+}\\prod_{i,j=1}^n d^2Z=\n\\frac{s_\\lambda(A)s_\\lambda(B)}{s_\\lambda(p_{\\infty})}\n\\end{equation}\nand\n\\begin{equation}\\label{sAZZ^+B'}\n\\int_{\\mathbb{C}^{n^2}} s_\\mu(AZ)s_\\lambda(Z^+B) e^{-\\textrm{Tr}\nZZ^+}\\prod_{i,j=1}^nd^2Z= \\frac{s_\\lambda(AB)}{s_\\lambda(p_{\\infty})}\\delta_{\\mu,\\lambda}\\,.\n\\end{equation}\n\\end{Lemma}\nTo prove Theorem \\ref{Theorem1} we evaluate $E(\\tau^{\\rm 2KP}(XY_t))$ using this Lemma\nand (\\ref{2KP-tau-vac-Schur}), and then compare it to the same integral evaluated using (\\ref{tau_1}).\nTo prove Theorem \\ref{Theorem2} we equate $E(\\tau^{\\rm 2KP}(X)\\tau^{\\rm 2KP}(Y_t))$ in the similar way.\nTo prove Theorem \\ref{Theorem3} we similarly evaluate $E(\\tau^{\\rm 2KP}(X)\\tau^{\\rm BKP}_1(Y_t))$, taking into account\nalso (\\ref{tau_1^B}).\n\n\n\n\n\\section*{Acknowledgements}\n\nThe work has been funded by the RAS Program ``Fundamental problems of nonlinear mechanics'' and by the Russian Academic Excellence Project '5-100'.\nI thank A. Odziyevich and the University of Bialystok for warm hospitality which made it possible to write this work. I am grateful to S. Natanzon, A. Odziyevich, J. Harnad, A. Mironov (ITEP) and\nto van de Leur for various remarks concerning the questions connected with this work. Special gratitude\nto E. Strakhov for drawing my attention to the works on quantum chaos devoted to the products \nof random matrices and for fruitful discussions.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Gross-Pitaevskii (GP) equation was independently derived by L.P.~Pitaevskii \\cite{Pitaevskii61} and E.P.~Gross \\cite{Gross61} in 1961. It describes a superfluid gas of weakly interacting bosons at zero temperature. 
The solution of the equation is a complex function $\\Psi=|\\Psi| \\exp i \\varphi$, whose modulus squared represents the particle density, $n=|\\Psi|^2$, and the gradient of the phase gives the local velocity of the fluid, ${\\bf v}= (\\hbar\/m)\\nabla \\varphi$, where $m$ is the particle mass. In the derivation by L.P. Pitaevskii, the GP equation emerges as a generalization of Bogoliubov's theory \\cite{Bogolyubov47} to a spatially inhomogeneous superfluid \\cite{Ginzburg58}. A quantized vortex can exist as a stationary solution of the GP equation where all particles circulate with the same angular momentum $\\hbar$ around a line where the density vanishes; the solution has the form $\\sqrt{n(r)} \\exp i \\varphi$, where now $\\varphi$ is the angle around the vortex axis and $r$ is the distance from the axis in cylindrical coordinates. The density $n(r)$ is a smooth function which increases from $0$ to a constant asymptotic value $n_0$ over a length scale characterized by $\\xi$, known as the {\\it healing length}, determined by $n_0$ and the strength of the interaction. \n\nQuantized vortices have been extensively studied in superfluid $^4$He \\cite{Donnelly91}, which is a strongly correlated liquid. The core of the vortex in $^4$He is only qualitatively captured by the GP equation and more refined theories are needed to account for the atom-atom interactions and many-body effects \\cite{Dalfovo92,Ortiz95,Vitiello96,Giorgini96,Galli14}. A direct comparison between theory and experiment for the structure of the vortex core is not available, and is likely unrealistic, the main reason being that the core size in $^4$He is expected to be of the same order as the atom size. The only way to observe such a vortex thus consists of looking at its effects on the motion of impurities that may be attached to it. 
Electrons \\cite{Yarmchuk79,Yarmchuk82,Maris09,Maris11}, solid hydrogen particles \\cite{Bewley06,Bewley08,Paoletti10,Paoletti11,Fonda14}, and $^4$He$_2^*$ excimer molecules \\cite{Zmeev13} have been used for this purpose. These impurities act as tracers for the position of vortex filaments in order to infer their motion on a macroscopic scale, but the fine structure of the core remains inaccessible. Furthermore, impurities may themselves affect the dynamics of the vortex filaments \\cite{Barenghi09}. \n\nIn dilute ultracold atomic gases the situation is more favorable. On the one hand, the GP theory furnishes a very accurate description of the system in regimes of temperature and diluteness that are attainable in typical experiments with trapped Bose-Einstein condensates (BECs) \\cite{Dalfovo99,PSbook16}. On the other hand, beginning with a series of seminal experiments \n\\cite{Matthews99,Madison00,Anderson2000,Anderson2001,Haljan01,AboShaeer01,Hodby02},\nquantized vortices are routinely produced and observed with different techniques (see \\cite{Fetter09} for a review). \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{vortex-core-fig1.png}\n\\caption{Experimental absorption images of a condensate with $7 \\times 10^6$ atoms after $120$~ms of free expansion. The small blue ellipse at the center of (a) represents the shape of the trapped condensate before the expansion, which is an elongated ellipsoid with the long axis in the $x$-direction. The expansion is faster in the transverse direction, so that the aspect ratio is inverted and the atomic distribution acquires a pancake shape. (a) Column density along a transverse direction. The faint vertical stripe is a signature of the presence of a vortex, and its shape is an interference pattern originating from the anisotropic velocity field around the vortex and the velocity field of the expansion. The field of view is $1.3 \\times 3$~mm. (b) Column density along the axial direction.
The vortex is almost invisible. The field of view is $3 \\times 3$~mm. (c) Residual column density. From the previous image we subtract spurious interference fringes, due to imperfections in the optical imaging, and the background density, using a Thomas-Fermi fit (see text). The result is an image of the residual column density which neatly reveals a vortex filament. (d)-(g) Other examples of vortex filaments shown by the residual column density for different condensates with one or more vortices. Note that even though the {\\it in-situ} condensate is always isotropic in the $y$-$z$ plane it becomes slightly elliptic after a long expansion due to a residual curvature of the magnetic field used to levitate the condensate against gravity.}\n\\label{fig:expt-images}\n\\end{figure*}\n\nDespite such an abundance of work, it may sound surprising that no detailed quantitative comparison between theory and experiment for the\nstructure of the vortex core in three-dimensional (3D) condensates has yet been performed. A reason is that the healing length $\\xi$ in typical trapped BECs, though much larger than in liquid $^4$He, is still smaller than the optical resolution, which is limited by the wavelength of the laser beams used for imaging. Another reason is that, when illuminating the atomic cloud with light, the result is the optical density, which is determined by an integral of the density along the imaging axis (column density); thus, a vortex filament has a strong contrast only if it is rectilinear and aligned along the imaging axis. One can overcome the first limitation by switching off the confining potential, letting the condensate freely expand. The vortex core expands as well, at least as fast as the condensate radius \\cite{Lundh98,Dalfovo00,Hosten03,Teles13a}, so that it can become visible after a reasonable expansion time. Concerning vortex alignment, one can strongly confine a BEC along one spatial direction, squeezing it to within a width of several $\\xi$. 
In such a geometry, vortices orient themselves along the short direction, thus behaving as point-like topological defects in a quasi-2D system rather than filaments in a 3D fluid (a recent discussion about the structure of the vortex core in expanding quasi-2D condensates can be found in \\cite{Sang13}). Conversely, if the condensate width is significantly larger than $\\xi$ in all directions, the vortex filaments can easily bend \\cite{Aftalion01,Garcia01,Aftalion03,Modugno03}, with a consequent reduction of their visibility in the column density. Bent vortex filaments have indeed been observed in \\cite{Raman01,Rosenbusch02,Bretin03,Donadello14}.\nBending and optical resolution particularly limit the quality of comparisons between theory and experiment for the structure of the vortex core (see Fig.~14.10 in \\cite{PSbook16}). \n\nIn this work, we show that 3D vortex filaments can be optically observed with enough accuracy to permit a direct comparison with the predictions of the GP theory. In our experiment, we produce large condensates of sodium atoms in an elongated axially symmetric harmonic trap and we image each condensate, in both the axial and a transverse direction, after free expansion. When a vortex filament is present, it produces a visible modification of the column density distribution of the atoms. \nWe use numerical GP simulations, as well as scaling laws which are valid for the expansion of large condensates, to make direct comparisons with our experimental observations and find good agreement. \n\n\\section{Experiment}\n\nWe produce ultracold samples of sodium atoms in the internal state $|3S_{1\/2},F=1,m_{\\mathrm{F}}=-1\\rangle$ in a cigar-shaped harmonic magnetic trap with trap frequencies $\\omega_x\/2 \\pi=9.3$~Hz and $\\omega_\\perp\/2 \\pi=93$~Hz. The thermal gas is cooled via forced evaporative cooling and pure BECs of typically around $10^7$ atoms are finally obtained with negligible thermal component. 
The evaporation ramp in the vicinity of the BEC phase transition is performed at different rates: slow quenches eventually produce condensates which are almost in their ground state, while faster quenches lead to the formation of quantized vortices in the condensate as a result of the Kibble-Zurek mechanism \\cite{Lamporesi13,Donadello16}. The quench rate can be chosen in such a way as to obtain condensates with one vortex on average. \n\nThe trapped condensate has a radial width on the order of $30 \\ \\mu$m and an axial width that is $10$ times larger. The healing length in the center of the condensate is about $0.2\\ \\mu$m, smaller than the optical resolution. It is also about two orders of magnitude smaller than the radial width of the condensate, which means that, as far as the density distribution is concerned, a vortex is a thin filament living in a 3D superfluid background with smoothly varying density, and the local properties of the vortex core are hence almost unaffected by boundary conditions. However, boundaries are still important for the superfluid velocity field. In fact, the ellipsoidal shape of the condensate causes a preferential alignment of the vortex filament along a (randomly chosen) radial direction so as to minimize its energy. Moreover, this geometry makes the flow around the vortex line anisotropic, meaning that on the larger scale of the entire condensate a vortex behaves as an almost planar localized object. For this reason, such vortices in elongated condensates are also known as solitonic vortices \\cite{Donadello14,Brand02,Komineas03,Ku14}. For our purposes, such localization is an advantage since it significantly reduces the bending of the vortex filaments, while at the same time keeping their local core structure three dimensional.
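The quoted length scales follow directly from the trap parameters. A minimal numerical check, assuming a chemical potential at the upper end of the experimental range ($\mu \approx 30\,\hbar\omega_\perp$; the physical constants are standard values, not taken from the text):

```python
from math import pi, sqrt

hbar = 1.054571817e-34       # J s
u = 1.66053906660e-27        # kg, atomic mass unit
m = 23 * u                   # approximate mass of sodium-23
omega_perp = 2 * pi * 93.0   # rad/s, radial trap frequency quoted in the text

# assumed chemical potential, upper end of the range discussed later in the paper
mu = 30 * hbar * omega_perp

xi0 = hbar / sqrt(2 * m * mu)            # healing length at the central density
R_perp = sqrt(2 * mu / m) / omega_perp   # transverse Thomas-Fermi radius

print(f"xi0    = {xi0 * 1e6:.2f} um")    # a few tenths of a micron
print(f"R_perp = {R_perp * 1e6:.1f} um") # ~17 um radius, i.e. a radial width of order 30 um
```

The ratio $R_\perp/\xi_0$ comes out at several tens, consistent with the "about two orders of magnitude" statement once the full radial width (diameter) is used.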
\n\nObservations are performed by releasing the atoms from the trap and taking simultaneous absorption images of the full atomic distribution along the radial and axial directions after a sufficiently long expansion in free space, so that the vortex core becomes larger than the imaging resolution \\cite{Donadello14,Donadello16}. The presence of a levitating magnetic field gradient makes it possible to achieve long expansion times by preventing the BEC from falling. Typical images are shown in Fig.~\\ref{fig:expt-images}. In the radial direction (panel {\\it a}), the vortex is seen as a dark stripe. This soliton-like character is due to the interference of the two halves (ends) of the elongated condensate which, on the large length scale of the entire condensate, have approximately a $\\pi$ phase difference \\cite{Donadello14,Ku14,Tylutki15}.\nIf a vortex filament is parallel to the imaging direction, the dark stripe exhibits a central dip, corresponding to the vortex core seen along its axis, and a twist due to the anisotropic quantized circulation. The $2\\pi$ phase winding around the vortex core was also detected in the same setup \\cite{Donadello14} by means of an interferometric technique based on a sequence of Bragg pulses. In the axial direction (panel {\\it b}), the soliton-like character is integrated out and the vortex filament is only a faint (and almost invisible) perturbation in the column density. However, by subtracting the background represented by a condensate without any vortex, the filament clearly emerges in the residual density distribution (panel {\\it c}). In the following we show how this signal can be used to extract quantitative information on the vortex structure after expansion, and how this is related to the shape of the vortex core in the condensate {\\it in-situ}, before the expansion.
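The background-subtraction step can be sketched with a toy one-dimensional model: a Thomas-Fermi column-density profile plus a narrow Gaussian dip standing in for the vortex. The fit below is a one-parameter (amplitude-only) least squares with the core region excluded, a deliberately simplified stand-in for the full TF fit described in the text; all numbers are illustrative:

```python
from math import exp

def tf_shape(y):
    """Rescaled TF column-density profile, (1 - y^2)^(3/2) inside the cloud."""
    return (1 - y * y) ** 1.5 if abs(y) < 1 else 0.0

# synthetic data: TF background of amplitude 100 plus a narrow vortex dip
ys = [i / 100.0 for i in range(-95, 96)]
def dip(y, depth=5.0, sigma=0.08):
    return -depth * exp(-y * y / (2 * sigma * sigma))
data = [100.0 * tf_shape(y) + dip(y) for y in ys]

# amplitude-only linear least-squares fit of the TF background,
# excluding points near the vortex (|y| <= 0.2), as done for the real images
num = sum(d * tf_shape(y) for y, d in zip(ys, data) if abs(y) > 0.2)
den = sum(tf_shape(y) ** 2 for y in ys if abs(y) > 0.2)
A = num / den

residual = [d - A * tf_shape(y) for y, d in zip(ys, data)]
i0 = ys.index(0.0)
print(A, residual[i0])   # fitted amplitude near 100; residual near -5 at the vortex
```

The fitted amplitude recovers the background almost exactly, and the residual isolates the dip, mirroring how panel (c) of Fig.~1 is obtained from panel (b).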
\n\n\\section{Theory}\n\nThe GP equation for the macroscopic wave function $\\Psi({\\bf r},t)$ for a BEC of weakly interacting bosons of mass $m$ at zero temperature is \\cite{Pitaevskii61,Gross61,Dalfovo99,PSbook16}\n\\begin{equation}\ni \\hbar \\frac{\\partial \\Psi}{\\partial t} = \\left( -\\frac{\\hbar^2 \\nabla^2}{2m} + V_{\\rm ext} + g |\\Psi|^2 \\right) \\Psi ,\n\\end{equation}\nwhere $V_{\\rm ext}$ is the external potential and $t$ is time. The quantity $g$ is a coupling constant characterizing the interaction between the atoms, which is positive for our condensates. The stationary version of the GP equation is obtained by choosing $\\Psi({\\bf r},t)= \\psi({\\bf r}) \\exp (-i\\mu t\/\\hbar)$, so that\n\\begin{equation}\n\\left( -\\frac{\\hbar^2 \\nabla^2}{2m} + V_{\\rm ext} + g |\\psi|^2 \\right) \\psi = \\mu \\psi\n\\label{eq:stationaryGP}\n\\end{equation}\nwhere $\\mu$ is the chemical potential and $n=|\\psi|^2$ is the density. In our case, we use the stationary GP equation to describe the condensate confined by the axially symmetric harmonic potential $V_{\\rm ext}=(m\/2)[\\omega_x^2 x^2+\\omega_\\perp^2 (y^2+z^2)]$, with the aspect ratio $\\lambda=\\omega_x\/\\omega_\\perp=0.1$, as in the experiment. Then we simulate the expansion by using this solution as the $t=0$ starting condition for the solution of the time-dependent GP equation with $V_{\\rm ext}=0$. We simulate condensates with and without a vortex. In the former case, the vortex is rectilinear, passing through the center and aligned along the $z$-axis. The need to accurately describe the dynamics of the system on both the scale of the healing length $\\xi$ and the scale of the width of the entire expanding condensate poses severe computational constraints.
With this in mind, we are only able to perform simulations up to values of the chemical potential on the order of $10\\hbar \\omega_\\perp$, which are smaller than the experimental values, ranging from about $15$ to $30\\hbar \\omega_\\perp$. Experiments can also be performed for smaller values of $N$, and hence smaller $\\mu$, but fluctuations in the density distribution become relatively larger with decreasing $N$, and the signal-to-noise ratio for the visibility of vortices in axial imaging becomes too small. The comparison between theory and experiments hence requires an extrapolation of the GP results to larger $\\mu$ and this is possible thanks to scaling laws which are valid for large condensates. \n\nIf $\\mu$ is significantly larger than both $\\hbar \\omega_\\perp$ and $\\hbar \\omega_x$, then the ground state of the condensate, i.e., the lowest energy stationary solution of the GP equation, is well approximated by the Thomas-Fermi (TF) approximation, which corresponds to neglecting the first term in the parenthesis of Eq.~(\\ref{eq:stationaryGP}), so that the density becomes \\cite{Dalfovo99,PSbook16}\n\\begin{equation}\nn_{\\rm TF}(x,y,z) = \\frac{1}{g} \\left[\\mu -\\frac{1}{2}m \\omega_x^2 x^2 -\\frac{1}{2}m \\omega_\\perp^2 (y^2+ z^2) \\right] \n\\label{eq:TF}\n\\end{equation}\nwithin the central region where $n_{\\rm TF}$ is positive, and is $0$ elsewhere.\nWe can then define the boundary TF radii $R_x = (2\\mu\/m\\omega_x^2)^{1\/2}$ and $R_\\perp = (2\\mu\/m\\omega_\\perp^2)^{1\/2}$, the central density $n_0=\\mu\/g$, and the rescaled coordinates $\\tilde{x}=x\/R_x$, $\\tilde{y}=y\/R_\\perp$ and $\\tilde{z}=z\/R_\\perp$, and rewrite the density in the form \n\\begin{equation}\nn_{\\rm TF}(\\tilde{x},\\tilde{y},\\tilde{z}) = n_0 ( 1 - \\tilde{x}^2 - \\tilde{y}^2 - \\tilde{z}^2 ) \\, .\n\\label{eq:rescaledTF}\n\\end{equation}\nThis inverted parabola is a very good approximation for the density profiles of our condensates except in a narrow region 
near the condensate boundaries \\cite{Dalfovo96b}. \n\nIn the regime where the TF approximation is valid, the free expansion is governed by simple scaling laws \\cite{Castin96,Kagan96,Dalfovo97}. In particular, one can prove that the condensate preserves its shape with a rescaling of the TF radii in time according to $R_x(t)=b_x(t)R_x(0)$ and $R_\\perp(t)=b_\\perp(t)R_\\perp(0)$, where the scaling parameters $b_x$ and $b_\\perp$ are solutions of the coupled differential equations\n$\\ddot{b}_\\perp - \\omega_{\\perp}^2\/(b_xb_\\perp^3) = 0 $ and $\\ddot{b}_x - \\omega_{x}^2\/(b_x^2b_\\perp^2) = 0$,\nwith initial conditions $b_x=b_\\perp=1$ and $\\dot{b}_x=\\dot{b}_\\perp=0$ at $t=0$.\nBy using the aspect ratio $\\lambda$ and introducing the dimensionless time $\\tau=\\omega_{\\perp}t$, one can rewrite the same equations as \n\\begin{equation}\n\\frac{d^2 b_\\perp}{d \\tau^2} - \\frac{1}{b_xb_\\perp^3} = 0 \\ \\ \\ , \\ \\ \\ \n\\frac{d^2 b_x}{d \\tau^2} - \\frac{\\lambda^2}{b_x^2b_\\perp^2} = 0 \\, . \n\\label{eq:b}\n\\end{equation}\nAnalytic solutions exist in the limit $\\lambda \\ll 1$, that is, for a very elongated ellipsoid, for which one finds \\cite{Castin96}\n\\begin{eqnarray}\nb_\\perp (\\tau) & = & \\sqrt{1+\\tau^2} \\nonumber \\\\\nb_x (\\tau) & = & 1 + \\lambda^2 [ \\tau \\ {\\rm arctan} \\tau - \\ln \\sqrt{1+\\tau^2} \\ ] \\, .\n\\label{eq:banalytic} \n\\end{eqnarray}\nThe correction proportional to $\\lambda^2$ becomes vanishingly small in the limit of the infinite cylinder, where the condensate is known to follow a scaling behavior that preserves its radial shape, even in regimes where the TF approximation does not apply \\cite{Pitaevskii97}.\n\n\\begin{figure}[]\n\\includegraphics[width=0.9\\linewidth]{vortex-core-fig2.pdf}\n\\caption{Residual column density (\\ref{eq:nres}) calculated for a GP simulation of an expanding condensate with $\\mu=9.7 \\hbar \\omega_\\perp$ and with a vortex aligned along $z$, passing through the origin. 
Curves are plotted for different values of the expansion time, $\\tau=\\omega_\\perp t$, and are normalized to the value $n^{\\rm TF}_{\\rm col} (0,\\tau)$, which is the maximum of the fitted TF column density at the same time. The coordinate $\\tilde{y}=y\/R_\\perp$ is the distance from the vortex axis in units of the transverse TF radius obtained from the same fit. The spatial range is limited to half the TF radius in order to highlight the print of the vortex in the column density; the effects of the condensate boundaries are almost negligible in this range. }\n\\label{fig:restheo}\n\\end{figure}\n\n\\begin{figure}[]\n\\includegraphics[width=0.9\\linewidth]{vortex-core-fig3.pdf}\n\\caption{Time evolution of the depth (top) and width (bottom) of the depletion produced by a vortex in the residual column density of expanding condensates with different chemical potentials $\\mu$.\nDepth and width are defined as the amplitude and the width $\\sigma$ of a Gaussian fit, respectively.\nAs in Fig.~\\ref{fig:restheo}, these parameters are normalized by the central TF column density and the transverse TF radius.\nNote that to be consistent with our experiments, for the purpose of improving the fit quality, prior to fitting we average $\\delta n (\\tilde{y},\\tilde{z},\\tau)\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z},\\tau)$ over different $z$ values within the interval [$-R_{\\perp}\/3,R_{\\perp}\/3$]. At very early times, $\\tau\\lesssim 3$, the dip in the residual is too small for the fit to quantitatively represent the vortex's characteristics. 
The dashed line is the prediction (\\ref{eq:maxresidualvortex}) of the empty core model.}\n\\label{fig:depthwidth}\n\\end{figure}\n\nThe TF density profile (\\ref{eq:rescaledTF}) is not only an accurate fitting function of the GP density distribution during the free expansion of an elongated condensate with $\\mu \\sim 10 \\hbar \\omega_{\\perp}$, but the TF radii extracted from the fit also agree with the scaling solutions of (\\ref{eq:b}), as well as with the analytic expressions (\\ref{eq:banalytic}), the discrepancy being less than $2$\\% in all our simulations, even for long expansion times. The agreement is expected to be even better for larger values of $\\mu$. This justifies the use of a TF fit to extract the residual density both in the experiments and in the GP simulations. The fit also provides the values of the TF radii and $n_0$ at any given time $t$, which can be used to rescale the coordinates and the density. \n\n For comparison with experiments, the key quantity is the column density, that is, the integral of the density along the imaging axis. Let us consider a cut of the density in the $z=0$ plane and define $n_{\\rm col}(\\tilde{y},t) = \\int d\\tilde{x} \\ n(\\tilde{x},\\tilde{y},0,t)$, where the integral is restricted to the region where the density is positive. Using the analytic TF density, one finds \n\\begin{equation}\nn^{\\rm TF}_{\\rm col}(\\tilde{y},t)=n^{\\rm TF}_{\\rm col} (0,t)(1 - \\tilde{y}^2)^{3\/2} \n\\label{eq:ncolaxial}\n\\end{equation}\nand we can finally define the residual column density as\n\\begin{equation}\n\\delta n (\\tilde{y},t) = n_{\\rm col} (\\tilde{y},t) - n^{\\rm TF}_{\\rm col}(\\tilde{y},t) \\, .\n\\label{eq:nres}\n\\end{equation}\nAn example is shown in Fig.~\\ref{fig:restheo}, where we plot $\\delta n$ obtained in the GP simulation of the expansion for a condensate with $\\mu= 9.7 \\hbar \\omega_\\perp$. 
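The $(1-\tilde{y}^2)^{3/2}$ form of the TF column density (\ref{eq:ncolaxial}) can be verified by direct quadrature of the rescaled TF profile; the sketch below works in dimensionless units, with the overall factor $4/3$ that is absorbed into $n^{\rm TF}_{\rm col}(0,t)$ in the text appearing explicitly:

```python
def tf_column(y_t, n0=1.0, N=20000):
    """Integrate the rescaled TF density n0*(1 - x^2 - y^2) along x at z = 0."""
    a = (1.0 - y_t * y_t) ** 0.5       # limit beyond which the density vanishes
    dx = 2 * a / N
    total = 0.0
    for i in range(N):
        x = -a + (i + 0.5) * dx        # midpoint rule
        total += n0 * (1.0 - x * x - y_t * y_t) * dx
    return total

for y_t in (0.0, 0.3, 0.6):
    analytic = (4.0 / 3.0) * (1.0 - y_t * y_t) ** 1.5
    print(y_t, tf_column(y_t), analytic)   # numerical and analytic columns agree
```

Normalizing by the central value removes the $4/3$ prefactor and leaves exactly the $(1-\tilde{y}^2)^{3/2}$ profile used for the fits.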
The figure shows that, as expected, a vortex produces a (column) density depletion whose depth is very small, i.e., only a few percent of the central column density of the condensate. It also shows that the depth increases in time during the expansion, while the width seems to remain almost constant. In Fig.~\\ref{fig:depthwidth} we show results for the depth and the width obtained in simulations of condensates with different chemical potentials, plotted as a function of the expansion time.\n\nThese results can be qualitatively understood by using a simplified model where the GP vortex core in the initial condensate is modelled by an empty cylinder of radius $r_v = c \\xi_0$, where $c$ is a number of order $1$ and $\\xi_0$ is the healing length of a uniform condensate with density $n_0$, which is given by $\\xi_0= \\hbar\/\\sqrt{2mgn_0} = \\hbar\/\\sqrt{2m\\mu}$. The rescaled radius is $\\tilde{r}_v=r_v\/R_\\perp= c \\xi_0\/R_\\perp= c \\hbar\\omega_\\perp\/2\\mu$. Then, let us assume that the initial expansion of the condensate is dominated by the mean-field interaction in the following sense: a segment of vortex filament near the center of the condensate expands as if it were in a uniform condensate, preserving its shape, but adiabatically following the time variation of the density of the medium around it. Hence, the vortex radius grows because the density decreases and the healing length is inversely proportional to $\\sqrt{n_0}$. Meanwhile, the transverse and axial TF radii $R_\\perp$ and $R_x$ grow, but with different scaling laws; such a difference is precisely the origin of the increased visibility of the vortex. The empty-cylinder model allows us to calculate the column density, analytically taking into account all of these effects. 
In particular, using the scaling law (\\ref{eq:banalytic}) and neglecting the $\\lambda^2$ term, one can easily prove that $\\tilde{r}_v$ is constant during the expansion, while the residual column density takes the form \n\\begin{equation}\n\\delta n (\\tilde{y},\\tau) = - \\frac{3 \\lambda \\tilde{r}_v}{2} n^{\\rm TF}_{\\rm col} (0,\\tau) \\sqrt{1+\\tau^2} \n(1 - \\tilde{y}^2) \\left( 1 - \\frac{\\tilde{y}^2}{\\tilde{r}_v^2} \\right)^{\\frac{1}{2}} \\ ,\n\\end{equation}\nand the normalized depth can be written as \n\\begin{equation}\n\\frac{|\\delta n (0,\\tau)|}{n^{\\rm TF}_{\\rm col} (0,\\tau)} = \\frac{3}{2} \\lambda \\tilde{r}_v \\sqrt{1+\\tau^2} \\, .\n\\label{eq:maxresidualvortex}\n\\end{equation}\nThe dashed line in Fig.~\\ref{fig:depthwidth} corresponds to this prediction when $c=1.6$ and $\\mu= 9.7 \\hbar \\omega_\\perp$. With the same parameters, the rescaled width of the empty cylinder is $\\tilde{r}_v \\sim 0.08$, which is in qualitative agreement with the data in the bottom panel of the same figure. However, the assumption of adiabaticity is expected to be valid only at short times, when the density of the expanding condensate remains sufficiently large. As the expansion proceeds, the mean-field interactions lose their strength and the velocity field gradually assumes the characteristics of a ballistic expansion \\cite{Lundh98,Dalfovo00}.\nThe crossover from mean-field to ballistic expansion is smooth and,\nfor reference, we note that a spherically trapped condensate is expected to decouple at around $\\tau_{\\rm dec} \\sim \\sqrt{2\\mu\/\\hbar\\omega}$ \\cite{Lundh98}, which, for $\\mu = 9.7\\hbar\\omega$, would correspond to $\\tau_{\\rm dec} \\sim 4$ in Fig.~\\ref{fig:depthwidth}.\nThe full GP simulations show that the width remains approximately constant throughout the simulation, while the depth significantly deviates from the $\\sqrt{1+\\tau^2}$ law and saturates to a constant value deep in the ballistic regime.
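For reference, the normalized depth (\ref{eq:maxresidualvortex}) with the parameters of the dashed line ($c=1.6$, $\mu=9.7\,\hbar\omega_\perp$, $\lambda=0.1$) evaluates as follows; this is a direct transcription of the empty-cylinder formula, not an independent model, and it overestimates the depth at late times where adiabaticity fails:

```python
from math import sqrt

def depth(tau, mu_over_hw=9.7, c=1.6, lam=0.1):
    """Normalized depth |dn(0,tau)| / n_col_TF(0,tau), empty-cylinder model."""
    r_v = c / (2.0 * mu_over_hw)   # rescaled core radius: c*xi0/R_perp = c*hbar*omega/(2*mu)
    return 1.5 * lam * r_v * sqrt(1.0 + tau * tau)

print(depth(0.0))    # in-trap contrast of roughly one percent
print(depth(10.0))   # grows like sqrt(1 + tau^2) while adiabaticity holds
```

At $\tau=0$ the contrast is about $1\%$, consistent with the vortex being nearly invisible in the trapped cloud, and $\tilde{r}_v = 1.6/(2\times 9.7) \approx 0.08$ reproduces the value quoted above.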
\n\n\\section{Experiment \\lowercase{vs.} Theory}\n\nIn this section, we compare the results of the experiments with the predictions of the GP theory for the overall shape, width and depth of the vortex in the residual column density. \n\nThe depth and the width after a given expansion time $t$ are shown in Fig.~\\ref{fig:mu_scaling} as a function of $1\/\\mu$. The two quantities are extracted from Gaussian fits, and normalized by the central TF column density and the transverse TF radius as in Fig.~\\ref{fig:depthwidth}. In the case of experimental data, we first select condensates exhibiting a rectilinear vortex filament near their center, at an axial distance smaller than $R_\\perp\/3$. We then fit the column density with the analytic TF profile, but excluding points lying within a few healing lengths of the filament. From the fit we obtain the chemical potential and the TF radii of the ``background\" condensate and, by subtracting this background from the column density, we get the residual $\\delta n (\\tilde{y})$, where $\\tilde{y}$ is taken to be orthogonal to the filament. In order to increase the signal-to-noise ratio we average the normalized depth $\\delta n(\\tilde{y})\/n_{\\rm col}^{\\rm TF}(0)$ over different $z$ values within the interval [$-R_{\\perp}\/3,R_{\\perp}\/3$]. \nMoreover, if a vortex line is displaced from the center by a distance $\\tilde \\rho = \\sqrt{\\tilde x ^ 2 +\\tilde y ^2 + \\tilde z ^2}$, its core structure is that of a vortex in a background condensate with a density $(1 - \\tilde \\rho^2)$ times lower than the central density; we thus assign to the vortex a value of $\\mu$ corrected by the same factor.\nFinally, for long expansion times the residual external field makes the condensate slightly elliptic in the radial plane. For this reason, we use both $R_y$ and $R_z$ as independent TF radii and then we define $R_\\perp=\\sqrt{R_yR_z}$. 
The same fitting procedure is applied to the GP density distributions, for which the condensate radius is always axially symmetric and the vortex is centered by construction. The experimental points correspond to four independent sets of data, where the cooling, evaporation, and imaging procedures are optimized for condensates with different atom numbers: red and orange points correspond to the largest condensates in our laboratory ($\\mu \\sim 30 \\hbar\\omega_\\perp$, $t=150$~ms and $120$~ms), blue points are the smallest condensates in which vortices are still observable ($\\mu \\sim 15 \\hbar\\omega_\\perp$, $t=100$~ms), while green points represent an old data set \\cite{Bisset17} for intermediate condensates ($\\mu \\sim 20 \\hbar\\omega_\\perp$, $t=120$~ms). Error bars account for statistical noise in the residual column density and for the uncertainties in the fit. \n\nThe GP results clearly show that the rescaled width $\\sigma\/R_\\perp$ scales linearly with $1\/\\mu$. This is consistent with the fact that, in the elongated geometry of our condensates, the rescaled width remains almost constant during the expansion.\nAnother way to understand this is to note that the in-trap width is proportional to $\\xi_0 \/R_\\perp$, and hence to $1\/\\mu$, and this scaling survives after long expansion times, even deep within the ballistic regime where length ratios become frozen.\nThe dashed line is a linear fit to the GP points, including the limiting case of an infinite condensate at $1\/\\mu=0$. Figure~\\ref{fig:mu_scaling} shows that the experimental data are in good agreement with the GP predictions, especially for the largest condensates, where the vortex signal-to-noise ratio is the largest. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{vortex-core-fig4.pdf}\n\\caption{Depth (top) and width (bottom) of the depletion produced by a vortex in the residual column density for condensates of different $\\mu$. 
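The TF scaling behavior invoked here can be checked by integrating Eqs.~(\ref{eq:b}) directly. The sketch below uses a plain fourth-order Runge-Kutta step (step size and tolerances are arbitrary choices, not from the paper) and compares with the analytic $\lambda \ll 1$ solution (\ref{eq:banalytic}) at $\tau=70$, the expansion time used for the GP points:

```python
from math import sqrt, atan, log

def expand(lam=0.1, tau_max=70.0, dtau=0.01):
    """RK4 integration of the TF scaling equations; state y = (bp, bx, dbp, dbx)."""
    def deriv(y):
        bp, bx, dbp, dbx = y
        # d^2 b_perp/dtau^2 = 1/(bx*bp^3),  d^2 b_x/dtau^2 = lam^2/(bx^2*bp^2)
        return (dbp, dbx, 1.0 / (bx * bp**3), lam**2 / (bx**2 * bp**2))
    y = (1.0, 1.0, 0.0, 0.0)           # b_x = b_perp = 1, at rest at tau = 0
    for _ in range(int(round(tau_max / dtau))):
        k1 = deriv(y)
        k2 = deriv(tuple(a + 0.5 * dtau * k for a, k in zip(y, k1)))
        k3 = deriv(tuple(a + 0.5 * dtau * k for a, k in zip(y, k2)))
        k4 = deriv(tuple(a + dtau * k for a, k in zip(y, k3)))
        y = tuple(a + dtau / 6.0 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
    return y[0], y[1]                  # b_perp, b_x at tau_max

tau = 70.0
bp, bx = expand(tau_max=tau)
bp_analytic = sqrt(1 + tau**2)
bx_analytic = 1 + 0.1**2 * (tau * atan(tau) - log(sqrt(1 + tau**2)))
print(bp / bp_analytic, bx / bx_analytic)  # both ratios close to 1
```

The numerical ratios stay within a few percent of unity for $\lambda=0.1$, of the same size as the sub-$2\%$ discrepancy between fitted TF radii and the scaling solutions reported for the GP simulations.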
The black $+$ symbols are obtained from GP simulations for an expansion time $\\tau=\\omega_\\perp t= 70$, corresponding to $120$~ms; the point at $1\/\\mu=0$ is the limit of an infinitely large condensate, where both quantities must vanish. The dashed line in the bottom panel is the linear law $\\sigma\/R_\\perp \\sim \\xi_0 \/R_\\perp \\propto 1\/\\mu$ predicted by GP theory in the TF scaling regime. Points with error bars are the experimental data. The expansion time is $t=150$~ms (red), $t=120$~ms (green and orange) and $t=100$~ms (blue); varying $t$ in this range would change the vertical position of the experimental data by a negligible amount of the order of $1\\%$. The depth and width are calculated from Gaussian fits to both GP and experimental distributions of the residual column density by using the same procedure. }\n\\label{fig:mu_scaling}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{vortex-core-fig5.pdf}\n\\caption{Residual column density after $150$~ms of free expansion for a condensate with $2 \\times 10^7$ atoms and $\\mu=33 \\hbar \\omega_\\perp$, containing a vortex. The inset shows the full residual column density in the $y$-$z$ plane. The quantity $\\delta n (\\tilde{y},\\tilde{z})\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z})$ is averaged in the direction $z$ within the rectangular box and the resulting values (blue points) are plotted in the main panel as a function of the rescaled coordinate $\\tilde{y}=y\/R_\\perp$, with $\\tilde{y}=0$ at the vortex position. The solid line is the same quantity, obtained with the same fitting procedure applied to the GP residual column density of a condensate with $\\mu=9.7 \\hbar \\omega_\\perp$, after linearly rescaling its width according to the dashed line of Fig.~\\ref{fig:mu_scaling}, and reducing its depth to match the experimental value. 
}\n\\label{fig:profile}\n\\end{figure}\n\nFor the case of vortex depth, the GP theory does not provide any simple scaling law to compare with the experimental results considered here. The reason is that, as discussed in the previous section, the visibility of the vortex in the residual column density exhibits a nontrivial dependence on the expansion time, associated with the crossover from the mean-field dominated early stages of expansion to the later ballistic expansion dynamics. Eventually, for large $t$, the normalized depth saturates at a value weakly dependent on $\\mu$ (see Fig.~\\ref{fig:depthwidth}). The experimental points lie in a range fully compatible with a smooth interpolation from the GP results down to the infinite condensate limit, in the sense that any reasonable interpolating function would clearly pass through most of the experimental points, within the experimental uncertainties. \n\nIn Fig.~\\ref{fig:profile}, we show an example of vortex profile in a condensate with $2 \\times 10^7$ atoms and chemical potential $\\mu_{\\rm expt}=33 \\hbar \\omega_\\perp$, after an expansion time $t=150$~ms. The full residual column density $\\delta n (\\tilde{y},\\tilde{z})$ is plotted in the inset. The quantity $\\delta n (\\tilde{y},\\tilde{z})\/n^{\\rm TF}_{\\rm col} (0,\\tilde{z})$ is averaged in the $z$ direction within the rectangular box, and the resulting $\\delta n (\\tilde{y})\/n^{\\rm TF}_{\\rm col} (0)$ is shown in the main panel of the figure as a function of $\\tilde{y}$. In order to compare the experimental data with GP theory we proceed as follows. We first check that the shape of the vortex core in the residual column density of GP simulations with different values of $\\mu$ is the same up to a rescaling of the width and the depth as in Fig.~\\ref{fig:mu_scaling}, except for small fluctuations in the tails, which are expected to become negligible for large $\\mu$. 
This implies that the GP profile of $\\delta n (\\tilde{y})\/n^{\\rm TF}_{\\rm col} (0)$ for the experimental chemical potential $\\mu_{\\rm expt}=33 \\hbar \\omega_\\perp$ should be the same as for the GP simulation for $\\mu_{\\rm GP}=9.7 \\hbar \\omega_\\perp$, after rescaling the width linearly with $\\mu$ (dashed line in Fig.~\\ref{fig:mu_scaling}). The solid line in Fig.~\\ref{fig:profile} is the resulting GP profile, where we fixed the depth to the experimental value.\nThere is good agreement between theory and experiment for the overall shape, including quantitative agreement for the width. The depth has good qualitative agreement if one considers that the experimental value lies within a range between the GP results for smaller $\\mu$ and the trivial limit for $\\mu \\to \\infty$, in a way that is compatible with any reasonable smooth interpolation as already shown in the top panel of Fig.~\\ref{fig:mu_scaling}.\n \nIt is worth noticing that the optical resolution in our experiments is not limiting the comparison with theory. To check this, we convolve the GP profile with a Gaussian having a width in the range $\\sigma_{\\rm res} \\sim 2 - 3\\ \\mu$m, corresponding to our optical resolution, and we find that the effects on the points in Figs.~\\ref{fig:mu_scaling} and \\ref{fig:profile} are negligible (note that the vortex core in Fig.~\\ref{fig:profile} has a width $\\sigma \\sim 30\\ \\mu {\\rm m} \\gg \\sigma_{\\rm res}$). The fluctuations in the experimental data, which contribute to the error bars in Fig.~\\ref{fig:mu_scaling}, are dominated by photon shot-noise in the absorption images and by systematic spurious optical fringes which are not completely filtered out. \n\nFinally, we note that thermal atoms are not visible in our samples, which means that the temperature of the condensates is significantly smaller than the critical temperature for Bose-Einstein condensation. 
Nevertheless, a certain number of thermal atoms is still expected to be present in the trapped condensate, and some of them can be confined within the vortex core \\cite{Coddington04}. These atoms should not be present in the vortex core after the expansion, since their kinetic energy is sufficient to separate them from the expanding condensate, leaving an empty vortex core.\nIn any case, our observations suggest that the effect of thermal atoms on the {\\it in situ} vortex core is limited.\nIn fact, the good agreement that we find with GP theory (valid at zero temperature) is an indication that, if thermal atoms are present, their effects on the shape, width and depth of the vortex are negligible within the uncertainties of our experiments. \n\n\\section{Conclusion}\n\nIn summary, we have shown that quantized vortex filaments can be observed by optical means in 3D Bose-Einstein condensates of weakly interacting ultracold atoms, at a level of accuracy which is enough to allow for a direct comparison with the predictions of the Gross-Pitaevskii theory for the width, depth, and overall shape of the vortex core. We found good agreement between theory and experiment. We have performed experiments with large condensates of sodium atoms and compared the results to those obtained in numerical simulations. In order to make the vortex visible we let the condensate expand for a long time. The expansion dynamics were included in the numerical simulations. We have shown that Thomas-Fermi scaling laws, valid for large elongated condensates, can be efficiently used to relate the observed features after expansion to the structure of the vortex core in the initially trapped condensate. \\\\\n\n\\bigskip\n\n{\\bf Acknowledgments:}\nWe dedicate this paper to Lev P. 
Pitaevskii in celebration of his 85th birthday.\nNo words can express our gratitude for the times spent working alongside him and, of course, for his pioneering contributions to physics itself.\nThis work is supported by Provincia Autonoma di Trento and by QuantERA ERA-NET cofund project NAQUAS. \\\\\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nKeeping information secure has become a major concern with the advancement in technology. In this work, the information theory aspect of security is analyzed, as entropies are used to measure security. The system also incorporates some traditional ideas surrounding cryptography, namely Shannon's cipher system and adversarial attackers in the form of eavesdroppers. In cryptographic systems, there is usually a message in plaintext that needs to be sent to a receiver. In order to secure it, the plaintext is encrypted so as to prevent eavesdroppers from reading its contents. This ciphertext is then transmitted to the receiver. Shannon's cipher system (mentioned by Yamamoto\\cite{shannon1_yamamoto}) incorporates this idea. The definition of Shannon's cipher system has been discussed by Hanawal and Sundaresan~\\cite{hanawal_shannon}. In Yamamoto's~\\cite{shannon1_yamamoto} development on this model, a correlated source approach is introduced. This gives an interesting view of the problem, and is depicted in Figure~\\ref{fig:yamamoto_shannoncipher}. Correlated source coding incorporates the lossless compression of two or more correlated data streams. Correlated sources have the ability to decrease the bandwidth required to transmit and receive messages because a syndrome (compressed form of the original message) is sent across the communication links instead of the original message. A compressed message has more information per bit, and therefore has a higher entropy because the transmitted information is more unpredictable. 
The unpredictability of the compressed message is also beneficial in terms of securing the information. \n\n\begin{figure}[ht]\n\centering\n\includegraphics [scale = 0.7]{yamamoto_shannoncipher.pdf}\n\caption{Yamamoto's development of the Shannon Cipher System}\n\label{fig:yamamoto_shannoncipher}\n\end{figure}\n\nThe source sends the information of the correlated sources $X$ and $Y$ along the main transmission channel. A key, $W_k$, is produced and used by the encoder when producing the ciphertext. The wiretapper has access to the transmitted codeword, $W$. The decoded codewords are represented by $\widehat{X}$ and $\widehat{Y}$. In Yamamoto's scheme, the security level was also analysed and found to be $\frac{1}{K} H(X^K,Y^K|W)$ (i.e. the joint entropy of $X$ and $Y$ given $W$, where $K$ is the length of $X$ and $Y$) when $X$ and $Y$ have equal importance, which is in accordance with traditional Shannon systems, where the security is measured by the equivocation. When one source is more important than the other, the security level is measured by the pair of individual uncertainties $(\frac{1}{K} H(X^K|W), \frac{1}{K} H(Y^K|W))$. \n\nIn practical communication systems, links are prone to eavesdropping, and as such this work incorporates wiretapped channels, i.e. channels where an eavesdropper is present.\n\nThere are specific kinds of wiretapped channels that have been developed. The mathematical model for this Wiretap Channel is given by Rouayheb \textit{et al.}~\cite{ref12_rouayheb_soljanin}, and can be explained as follows: the channel between a transmitter and receiver is error-free and can transmit $n$ symbols $Y=(y_1,\ldots,y_n)$, from which $\mu$ bits can be observed by the eavesdropper, and the maximum secure rate can be shown to equal $n-\mu$ bits. 
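As a minimal sketch of why the secure rate is $n-\mu$ (a binary toy example of our own, not the construction of Rouayheb \textit{et al.}): with $n=2$ transmitted symbols and $\mu=1$ observed symbol, one uniformly random pad bit protects one secret bit.

```python
def encode(s, r):
    """Coset-style encoding: n = 2 transmitted bits built from
    1 secret bit s and 1 uniformly random pad bit r."""
    return (r, s ^ r)

# The intended receiver sees both symbols and recovers s by XOR:
assert all(s == encode(s, r)[0] ^ encode(s, r)[1]
           for s in (0, 1) for r in (0, 1))

# An eavesdropper observing any single symbol (mu = 1) learns nothing:
# every possible observation is consistent with both values of the secret.
for pos in (0, 1):
    for obs in (0, 1):
        consistent = {s for s in (0, 1) for r in (0, 1)
                      if encode(s, r)[pos] == obs}
        assert consistent == {0, 1}
```

The secure rate here is $n-\mu = 1$ bit per two transmitted symbols, matching the general statement above.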
The security aspect of wiretap networks has been examined in various ways by Cheng \textit{et al.} \cite{ref21_cheng_yeung}, and Cai and Yeung \cite{ref11_cai_yeung}, emphasising that it is of concern to secure these types of channels. \n\nVillard and Piantanida \cite{pablo_secure_multiterminal} also look at correlated sources and wiretap networks: a source sends information to the receiver, and an eavesdropper has access to information correlated to the source, which is used as side information. There is a second encoder that sends a compressed version of its own correlated observation of the source privately to the receiver. Here, the authors show that the use of correlation decreases the required communication rate and increases secrecy. Villard \textit{et al.} \cite{pablo_secure_transmission_receivers} explore this side-information concept further, investigating security when side information is available at the receiver and the eavesdropper. Side information is generally used to assist the decoder in determining the transmitted message. An earlier work involving side information is that by Yang \textit{et al.}~\cite{feedback_yang}. The concept can be considered to be generalised in that the side information could represent a source. It is an interesting problem when one source is more important, and Hayashi and Yamamoto~\cite{Hayashi_coding} consider it in another scheme, where only $X$ is secure against wiretappers and $Y$ must be transmitted to a legitimate receiver. They develop a security criterion based on the number of correct guesses a wiretapper needs to retrieve a message. In an extension of the Shannon cipher system, Yamamoto \cite{coding_yamamoto} investigated the secret sharing communication system. \n\nHere, we generalise a model for correlated sources across a channel with an eavesdropper, and the security aspect is explored by quantifying the information leakage and reducing the key lengths when incorporating Shannon's cipher system. 
\n\nThis paper initially describes a two correlated source model across wiretapped links, which is detailed in Section II. In Section III, the information leakage is investigated and proven for this two correlated source model. The information leakage is quantified to be the equivocation subtracted from the total obtained uncertainty. In Section IV, the two correlated source model is considered in terms of Shannon's cipher system. The notation contained in the tables will be clarified in the following sections. The proofs for this Shannon cipher system aspect are detailed in Section V. Section VI details the extension of the two correlated source model, where multiple correlated sources in a network scenario are investigated. There are two subsections here: one quantifying information leakage for the Slepian-Wolf scenario, and the other incorporating Shannon's cipher system, where key lengths are minimised and a masking method to save on keys is presented. Section VII explains how the models detailed in this paper are a generalised model of Yamamoto's~\cite{shannon1_yamamoto} model, and further offers comparison to other models. The future work for this research is detailed in Section VIII, and the paper is concluded in Section IX.\n\n\n\n\n\n\section{Model}\n\nThe independent, identically distributed (i.i.d.) sources $X$ and $Y$ are mutually correlated random variables, depicted in Figure~\ref{fig:new_model}. The alphabet sets for sources $X$ and $Y$ are represented by $\mathcal{X}$ and $\mathcal{Y}$ respectively. Assume that ($X^K$, $Y^K$) are encoded into two syndromes ($T_{X}$ and $T_{Y}$). We can write $T_X = (V_X, V_{CX})$ and $T_Y = (V_Y, V_{CY})$, where $T_X$ and $T_Y$ are the syndromes of $X$ and $Y$. Here, $T_X$ and $T_Y$ are characterised by $(V_X, V_{CX})$ and $(V_Y, V_{CY})$ respectively. 
The Venn diagram in Figure \ref{fig:new_venn2} illustrates this idea: $V_X$ and $V_Y$ represent the private information of sources $X$ and $Y$ respectively, and $V_{CX}$ and $V_{CY}$ represent the common information between $X^K$ and $Y^K$ generated by $X^K$ and $Y^K$ respectively. \n\n\begin{figure}[ht]\n\centering\n\includegraphics [scale = 0.7]{new_model.pdf}\n\caption{Correlated source coding for two sources}\n\label{fig:new_model}\n\end{figure}\n\n\n\begin{figure}[ht]\n\centering\n\includegraphics [scale = 0.7]{new_venn2.pdf}\n\caption{The relation between private and common information}\n\label{fig:new_venn2}\n\end{figure}\n\nThe correlated sources $X$ and $Y$ transmit messages (in the form of syndromes) to the receiver along wiretapped links. The decoder determines $X$ and $Y$ only after receiving all of $T_X$ and $T_Y$. The common information between the sources is transmitted through the portions $V_{CX}$ and $V_{CY}$. In order to decode a transmitted message, a source's private information and both common information portions are necessary. This aids in security, as it is not possible to determine, for example, $X$ by wiretapping all the contents transmitted along $X$'s channel only. This is different to Yamamoto's~\cite{shannon1_yamamoto} model, as here the common information consists of two portions. The aim is to keep the system as secure as possible, and the following sections show how this is achieved by the new model. \n\nWe assume that the function $F$ is a one-to-one process with high probability, which means that, based on $T_X$ and $T_Y$, we can retrieve $X^K$ and $Y^K$ with minimal error. Furthermore, it reaches the Slepian-Wolf bound, $H(T_X, T_Y)=H(X^K,Y^K)$. Here, we note that the lengths of $T_X$ and $T_Y$ are not fixed, as they depend on the encoding process and the nature of the Slepian-Wolf codes. 
The process is therefore not ideally one-to-one and reversible, which is another difference between our model and Yamamoto's~\cite{shannon1_yamamoto} model.\n\nThe code described in this section satisfies the following inequalities for $\delta > 0$ and sufficiently large $K$.\n\n\begin{eqnarray}\nPr \{X^K \neq G(V_X, V_{CX}, V_{CY})\} \le \delta\n\label{x_prob}\n\end{eqnarray}\n\n\begin{eqnarray}\nPr \{Y^K \neq G(V_Y, V_{CX}, V_{CY})\} \le \delta\n\label{y_prob}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(V_X, V_{CX}, V_{CY})\le H(X^K) + \delta \n\label{x_entropy}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(V_Y, V_{CX}, V_{CY})\le H(Y^K) + \delta \n\label{y_entropy}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(V_X, V_Y, V_{CX}, V_{CY})\le H(X^K,Y^K) + \delta \n\label{xy_entropy}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(X^K|V_X, V_Y) \geq H(V_{CX}) + H(V_{CY}) - \delta \n\label{H_inequality1}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(X^K|V_{CX}, V_{CY}) \geq H(V_X) + H(V_{CY}) - \delta \n\label{H_inequality2}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(X^K|V_{CX}, V_{CY}, V_Y) \geq H(V_X) + H(V_{CY}) - \delta \n\label{H_inequality3}\n\end{eqnarray}\n\n\begin{eqnarray}\nH(V_{CX}) + H(V_X) - \delta \le H(X^K|V_{CY}, V_{Y}) \nonumber\n\\ \le H(X) - H(V_{CY}) + \delta\n\label{H_inequality4}\n\end{eqnarray}\n\\\nwhere $G$ is a function to define the decoding process at the receiver. It can intuitively be seen from \eqref{x_prob} and \eqref{y_prob} that $X$ and $Y$ are recovered from the corresponding private information and the common information produced by $X^K$ and $Y^K$. Equations \eqref{x_entropy}, \eqref{y_entropy} and \eqref{xy_entropy} show that the private information and common information produced by each source should contain no redundancy. \nIt is also seen from \eqref{H_inequality2} and \eqref{H_inequality3} that $V_Y$ is independent of $X^K$ asymptotically. 
Here, $V_X$, $V_Y$, $V_{CX}$ and $V_{CY}$ are disjoint, which ensures that there is no redundant information sent to the decoder. \n\nTo recover $X$ the following components are necessary: $V_X$, $V_{CX}$ and $V_{CY}$. This comes from the property that $X^K$ cannot be derived from $V_X$ and $V_{CX}$ only and part of the common information between $X^K$ and $Y^K$ is produced by $Y^K$.\n\nYamamoto~\\cite{shannon1_yamamoto} proved that a common information between $X^K$ and $Y^K$ is represented by the mutual information $I(X;Y)$. Yamamoto~\\cite{shannon1_yamamoto} also defined two kinds of common information. The first common information is defined as the rate of the attainable minimum core $V_C$ (i.e. $V_{CX}, V_{CY}$ in this model) by removing each private information, which is independent of the other information, from ($X^K$, $Y^K$) as much as possible. The second common information is defined as the rate of the attainable maximum core $V_C$ such that if we lose $V_C$ then the uncertainty of $X$ and $Y$ becomes $H(V_C)$. Here, we consider the common information that $V_{CX}$ and $V_{CY}$ represent.\n\nWe begin demonstrating the relationship between the common information portions by constructing the prototype code ($W_X$, $W_Y$, $W_{CX}$, $W_{CY}$) as per Lemma 1. 
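Before stating Lemma 1, the rate decomposition it formalises can be checked numerically. The sketch below (using a toy joint distribution of our own choosing, not from the paper) verifies that the private rates $H(X|Y)$ and $H(Y|X)$ plus the common rate $I(X;Y)$, which is shared between $V_{CX}$ and $V_{CY}$, together exhaust $H(X,Y)$:

```python
from math import log2

# Toy joint pmf p(x, y) for binary X, Y (an assumed example distribution)
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# Marginals of X and Y
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

Hxy = H(p)
Hx_given_y = Hxy - H(py)   # private rate of X (carried by V_X)
Hy_given_x = Hxy - H(px)   # private rate of Y (carried by V_Y)
Ixy = H(px) + H(py) - Hxy  # common rate, split between V_CX and V_CY

# The three rates partition the joint entropy, with no redundancy:
assert abs(Hx_given_y + Hy_given_x + Ixy - Hxy) < 1e-12
```

This is the zero-tolerance ($\epsilon_0 = 0$) version of the sum in \eqref{lemma1_7}.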
\n\n\\textit{Lemma 1: For any $\\epsilon_0 \\geq 0$ and sufficiently large $K$, there exits a code $W_X = F_X(X^K)$, $W_Y = F_Y(Y^K)$, $W_{CX} = F_{CX}(X^K)$, $W_{CY} = F_{CY}(Y^K)$, $\\widehat{X}^K,\\widehat{Y}^K = G(W_X, W_Y, W_{CX}, W_{CY})$, where $W_X \\in I_{M_X}$, $W_Y \\in I_{M_Y}$, $W_{CX} \\in I_{M_{CX}}$, $W_{CY} \\in I_{M_{CY}}$ for $I_{M_{\\alpha}}$, which is defined as $\\{0, 1, \\ldots, M_{\\alpha} - 1\\}$, that satisfies},\n\n\\begin{eqnarray}\nPr\\{\\widehat{X}^K, \\widehat{Y}^K \\neq X^K, Y^K\\} \\le \\epsilon\n\\label{lemma1_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(X|Y) - \\epsilon_0 \\le \\frac{1}{K} H(W_X) \\le \\frac{1}{K} \\log M_X \\le H(X|Y) + \\epsilon_0\n\\label{lemma1_2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nH(Y|X) - \\epsilon_0 \\le \\frac{1}{K} H(W_Y) \\le \\frac{1}{K} \\log M_Y \\le H(Y|X) + \\epsilon_0\n\\label{lemma1_3}\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\n& & I(X;Y) - \\epsilon_0 \\le \\frac{1}{K} (H(W_{CX}) + H(W_{CY})) \\nonumber \\\\\n & \\le & \\frac{1}{K} (\\log M_{CX} + \\log M_{CY}) \\le I(X;Y) + \\epsilon_0\n\\label{lemma1_4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_Y) \\geq H(X) - \\epsilon_0\n\\label{lemma1_5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X) \\geq H(Y) - \\epsilon_0\n\\label{lemma1_6}\n\\end{eqnarray}\n\nWe can see that \\eqref{lemma1_2} - \\eqref{lemma1_4} mean\n\\begin{eqnarray}\n&& H(X,Y) - 3\\epsilon_0 \\le \\frac{1}{K} (H(W_X) + H(W_Y) + H(W_{CX}) \\nonumber \\\\\n& + & H(W_{CY})) \\nonumber \\\\\n& \\le & H(X,Y) + 3\\epsilon_0\n\\label{lemma1_7}\n\\end{eqnarray}\n\nHence from \\eqref{lemma1_1}, \\eqref{lemma1_7} and the ordinary source coding theorem, ($W_X$, $W_Y$, $W_{CX}$, $W_{CY}$) have no redundancy for sufficiently small $\\epsilon_0 \\geq 0$. It can also be seen that $W_X$ and $W_Y$ are independent of $Y^K$ and $X^K$ respectively. 
\n\n\\begin{proof}[Proof of Lemma 1]\n\nAs seen by Slepian and Wolf, mentioned by Yamamoto\\cite{shannon1_yamamoto} there exist $M_X$ codes for the $P_{Y|X}(y|x)$ DMC (discrete memoryless channel) and $M_Y$ codes for the $P_{X|Y}(x|y)$ DMC. The codeword sets exist as $C^X_i$ and $C^Y_j$, where $C^X_i$ is a subset of the typical sequence of $X^K$ and $C^Y_j$ is a subset of the typical sequence of $Y^K$.\nThe encoding functions are similar, but we have created one decoding function as there is one decoder at the receiver:\n\n\\begin{eqnarray}\nf_{Xi}:I_{M_{CX}} \\rightarrow C^X_i\n\\label{lemma1proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nf_{Yj}:I_{M_{CY}} \\rightarrow C^Y_j\n\\label{lemma1proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\ng: X^K, Y^K \\rightarrow I_{M_{CX}} \\times I_{M_{CY}}\n\\label{lemma1proof_2}\n\\end{eqnarray}\n\nThe relations for $M_X$, $M_Y$ and the common information remain the same as per Yamamoto's and will therefore not be proven here. \n\nIn this scheme, we use the average $(V_{CX}, V_{X},V_{CY}, V_{Y})$ transmitted for many codewords from $X$ and $Y$. Thus, at any time either $V_{CX}$ or $V_{CY}$ is transmitted. Over time, the split between which common information portion is transmitted is determined and the protocol is prearranged accordingly. Therefore all the common information is either transmitted as $l$ or $m$, and as such Yamamoto's encoding and decoding method may be used. \n\nAs per Yamamoto's method the code does exist and that $W_X$ and $W_Y$ are independent of $Y$ and $X$ respectively, as shown by Yamamoto\\cite{shannon1_yamamoto}. \n\n\n\\end{proof}\n\nThe common information is important in this model as the sum of $V_{CX}$ and $V_{CY}$ represent a common information between the sources. 
The following theorem holds for this common information:\n\\\n\textit{Theorem 1:}\n\begin{eqnarray}\n\frac{1}{K} [H (V_{CX}) + H (V_{CY})] = I (X;Y) \n\label{theorem1}\n\end{eqnarray}\n\nwhere $V_{CX}$ is the common portion between $X$ and $Y$ produced by $X^K$ and $V_{CY}$ is the common portion between $X$ and $Y$ produced by $Y^K$. It is noted that \eqref{theorem1} holds asymptotically, and does not hold with equality when $K$ is finite. Here, we show the approximation when $K$ is infinitely large.\nThe private portions for $X^K$ and $Y^K$ are represented as $V_X$ and $V_Y$ respectively. As explained in Yamamoto's~\cite{shannon1_yamamoto} Theorem 1, two types of common information exist (the first is represented by $I(X;Y)$ and the second by $\text{min} (H(X^K), H(Y^K))$). We will develop part of this idea to show that the sum of the common information portions produced by $X^K$ and $Y^K$ in this new model is represented by the mutual information between the sources. \n\n\begin{proof}[Proof of Theorem 1]\nThe first part is to prove that $H(V_{CX}) + H(V_{CY}) \geq I(X;Y)$, and is done as follows. \nWe weaken the conditions \eqref{x_prob} and \eqref{y_prob} to\n\begin{eqnarray}\n\text{Pr }\{X^K,Y^K \neq G_{XY} (V_X, V_Y, V_{CX}, V_{CY})\} \le \delta_1\n\label{weakenedxy_prob}\n\end{eqnarray}\n\nFor any ($V_X$,$V_Y$, $V_{CX}$, $V_{CY}$) $\in C(3\epsilon_0)$ (which can be seen from \eqref{lemma1_7}), we have from \eqref{weakenedxy_prob} and the ordinary source coding theorem that\n\n\begin{eqnarray}\nH(X^K,Y^K) - \delta_1 &\le & \frac{1}{K} H(V_X, V_Y, V_{CX}, V_{CY}) \nonumber \\\n & \le & \frac{1}{K} [H(V_X) + H(V_Y) + H (V_{CX}) \nonumber \\\n & + & H(V_{CY})]\n\label{theorem1proof_1}\n\end{eqnarray}\n\nwhere $\delta_1 \rightarrow 0$ as $\delta \rightarrow 0$. 
From Lemma 1,\n\begin{eqnarray}\n\frac{1}{K} H(V_Y|X^K) \geq \frac{1}{K} H(V_Y) - \delta\n\label{theorem1proof_2}\n\end{eqnarray}\n\n\begin{eqnarray}\n\frac{1}{K} H(V_X|Y^K) \geq \frac{1}{K} H(V_X) - \delta\n\label{theorem1proof_3}\n\end{eqnarray}\n\nFrom \eqref{theorem1proof_1} - \eqref{theorem1proof_3},\n\begin{eqnarray}\n\frac{1}{K} [H(V_{CX}) + H(V_{CY})] &\ge & H(X,Y) - \frac{1}{K} H(V_X) \nonumber \\\n& - & \frac{1}{K} H(V_Y) - \delta_1 \nonumber \\\n& \geq & H(X,Y) - \frac{1}{K} H(V_X|Y^K) \nonumber \\\n& - & \frac{1}{K} H(V_Y|X^K) - \delta_1 - 2\delta\n\label{theorem1proof_4}\n\end{eqnarray}\n\nOn the other hand, we can see that\n\begin{eqnarray}\n\frac{1}{K} H(X^K, V_Y) \le H(X,Y) + \delta\n\label{theorem1proof_5}\n\end{eqnarray}\n\nThis implies that \n\begin{eqnarray}\n\frac{1}{K} H(V_Y|X^K) \le H(Y|X) + \delta \n\label{theorem1proof_6}\n\end{eqnarray}\nand \n\begin{eqnarray}\n\frac{1}{K} H(V_X|Y^K) \le H(X|Y) + \delta \n\label{theorem1proof_7}\n\end{eqnarray}\n\nFrom \eqref{theorem1proof_4}, \eqref{theorem1proof_6} and \eqref{theorem1proof_7} we get\n\begin{eqnarray}\n\frac{1}{K} [H(V_{CX}) + H(V_{CY})] & \ge & H(X,Y) - H(X|Y) - H(Y|X) \nonumber \\\n& - & \delta_1 - 4\delta \nonumber \\\n& = & I(X;Y) - \delta_1 - 4\delta\n\label{theorem1proof_8}\n\end{eqnarray}\n\nIt is possible to see from \eqref{lemma1_4} that $\frac{1}{K} [H(V_{CX}) + H(V_{CY})] \le I(X;Y)$. From this result, \eqref{lemma1proof_2} and \eqref{theorem1proof_8}, and as $\delta_1 \rightarrow 0$ and $\delta \rightarrow 0$, it can be seen that\n\begin{eqnarray}\n\frac{1}{K} [H(V_{CX}) + H(V_{CY})] = I(X;Y)\n\end{eqnarray}\n\end{proof}\n \nThis model can cater for a scenario where a particular source, say $X$, needs to be more secure than $Y$ (possibly because of eavesdropping on the $X$ channel). In such a case, the $\frac{1}{K} H(V_{CX})$ term in \eqref{theorem1proof_8} needs to be as high as possible. 
When this uncertainty is increased, the security of $X$ is increased. Another security measure that this model incorporates is that $X$ cannot be determined from wiretapping only $X$'s link. \n \n\n\section{Information Leakage}\nIn order to determine the security of the system, a measure for the amount of information leaked has been developed. This notation and quantification are new contributions of this work. The obtained information and total uncertainty are used to determine the leaked information. Information leakage is indicated by $L_{\mathcal{Q}}^\mathcal{P}$. Here, $\mathcal{P}$ indicates the source(s) for which information leakage is being quantified, $\mathcal{P} = \{S_1, \ldots, S_n\}$ where $n$ is the number of sources (in this case, $n = 2$). Further, $\mathcal{Q}$ indicates the syndrome portion that has been wiretapped, $\mathcal{Q} = \{V_1, \ldots, V_m\}$ where $m$ is the number of codewords (in this case, $m = 4$).\n\nThe information leakage bounds are as follows:\n\begin{eqnarray}\nL_{V_X,V_Y}^{X^K} \le H(X^K) - H(V_{CX}) - H(V_{CY}) + \delta\n\label{L_inequality1}\n\end{eqnarray}\n\n\begin{eqnarray}\nL_{V_{CX},V_{CY}}^{X^K} \le H(X^K) - H(V_X) - H(V_{CY}) + \delta\n\label{L_inequality2}\n\end{eqnarray}\n\n\begin{eqnarray}\nL_{V_{CX},V_{CY},V_Y}^{X^K} \le H(X^K) - H(V_X) - H(V_{CY}) + \delta\n\label{L_inequality3}\n\end{eqnarray}\n\n\begin{eqnarray}\n&& H(V_{CY}) - \delta \le L_{V_{Y},V_{CY}}^{X^K} \nonumber\n\\ & \le & H(X^K) - H(V_{CX}) - H(V_X) + \delta \n\label{L_inequality4}\n\end{eqnarray}\n \nHere, $V_Y$ is private information of source $Y^K$ and is independent of $X^K$,\nand therefore does not leak any information about $X^K$, as shown\nin \eqref{L_inequality2} and \eqref{L_inequality3}. 
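To make the leakage measure concrete, here is a small sketch (a toy construction of our own, not one of the bounds above): let $X$ consist of two fair bits and let the wiretapped quantity reveal the first bit, standing in for one captured syndrome portion. The leakage is then the total uncertainty minus the equivocation.

```python
from math import log2
from collections import defaultdict

# X is two fair bits; the wiretapped quantity Q reveals the first bit only
# (a stand-in for observing one syndrome portion).
joint = defaultdict(float)
for x1 in (0, 1):
    for x2 in (0, 1):
        joint[((x1, x2), x1)] += 0.25  # p(x, q)

def H(dist):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginal of the wiretapped quantity
pq = defaultdict(float)
for (x, q), p in joint.items():
    pq[q] += p

Hx_given_q = H(joint) - H(pq)      # equivocation of X given the wiretap
leakage = 2.0 - Hx_given_q         # H(X) = 2 bits for two fair bits
assert abs(leakage - 1.0) < 1e-12  # exactly the revealed bit leaks
```

The same recipe, total uncertainty minus equivocation, is what the bounds above constrain for the actual syndrome portions $V_X$, $V_Y$, $V_{CX}$ and $V_{CY}$.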
Equation \\eqref{L_inequality4} gives an indication of the minimum and maximum amount of leaked information for the interesting case where a syndrome has been wiretapped and its information leakage on the alternate source is quantified. The outstanding common information component is the maximum information that can be leaked. For this case, the common information $V_{CX}$ and $V_{CY}$ can thus consist of added\nprotection to reduce the amount of information leaked. These bounds developed in \\eqref{L_inequality1} - \\eqref{L_inequality4} are proven in the next section.\n\nThe proofs for the above mentioned information leakage inequalities are now detailed. First, the inequalities in \\eqref{H_inequality1} - \\eqref{H_inequality4} will be proven, so as to prove that the information leakage equations hold. \\\\\n\\begin{proof}[Lemma 2]\nThe code ($V_X$, $V_{CX}$, $V_{CY}$, $V_Y$) defined at the beginning of Section I, describing the model and \\eqref{x_prob} - \\eqref{xy_entropy} satisfy \\eqref{H_inequality1} - \\eqref{H_inequality4}. 
Then the information leakage bounds are given by \eqref{L_inequality1} - \eqref{L_inequality4}.\n\\\\ \textit{Proof for \eqref{H_inequality1}}:\n\begin{eqnarray}\n& & \frac{1}{K} H(X^K|V_X,V_Y) \nonumber\\\n& = & \frac{1}{K} [H(X^K,V_X,V_Y) - H(V_X,V_Y)] \nonumber\\\n& = & \frac{1}{K} [H(X^K,V_Y) - H(V_X,V_Y)] \label{lemma2_ref1}\\\n& = & \frac{1}{K} [H(X^K|V_Y) + I(X^K;V_Y) + H(V_Y|X^K)] \nonumber\\\n& & - \frac{1}{K} [H(V_X|V_Y) + I(V_X;V_Y) + H(V_Y|V_X)] \nonumber\\\n& = & \frac{1}{K} [H(X^K|V_Y) + H(V_Y|X^K) - H(V_X|V_Y) \nonumber\\\n& & - H(V_Y|V_X)] \nonumber\\\n& = & \frac{1}{K} [H(X^K) + H(V_Y) - H(V_X) - H(V_Y)] \label{lemma2_ref2}\\ \n& = & \frac{1}{K} [H(X^K) - H(V_X)] \nonumber\\\n& \geq & \frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_X)] - \delta \nonumber\\\n& = & \frac{1}{K} [H(V_{CX}) + H(V_{CY})] -\delta \n\label{lemma2_part1}\n\end{eqnarray}\n\nwhere \eqref{lemma2_ref1} holds because $V_X$ is a function of $X$ and \eqref{lemma2_ref2} holds because $X$ is independent of $V_Y$ asymptotically and $V_X$ is independent of $V_Y$ asymptotically.\n\nFor the proofs of \eqref{H_inequality2} and \eqref{H_inequality3}, the following simplification for $H(X|V_{CY})$ is used:\n\begin{eqnarray}\nH(X^K|V_{CY}) & = & H(X^K,V_{CY}) - H(V_{CY}) \nonumber \\\n& = & H(X^K) + H(V_{CY}) - I(X; V_{CY}) - H(V_{CY}) \nonumber \\\n& = & H(X^K) + H(V_{CY}) - H(V_{CY}) - H(V_{CY}) \nonumber \\\n& + & {\delta}_1 \label{new_55} \\\n& = & H(X^K) - H(V_{CY}) + {\delta}_1\n\label{simpli}\n\end{eqnarray}\n\nwhere the approximation $I(X; V_{CY}) \approx H(V_{CY})$ used in \eqref{new_55} can be seen intuitively from the Venn diagram in Figure \ref{fig:new_venn2}. 
Since it is an approximation, ${\delta}_1$, which is smaller than $\delta$, has been added in the proofs below to cater for the tolerance. \n\\\\ \textit{Proof for \eqref{H_inequality2}}:\n\begin{eqnarray}\n& & \frac{1}{K} H(X^K|V_{CX},V_{CY}) \nonumber\\\n& = & \frac{1}{K} [H(X^K,V_{CX},V_{CY}) - H(V_{CX},V_{CY})] \nonumber\\\n& = & \frac{1}{K} [H(X^K,V_{CY}) - H(V_{CX},V_{CY})] \label{lemma2_ref3}\\\n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) + I(X;V_{CY}) + H(V_{CY}|X^K)] \nonumber\\\n& & - \frac{1}{K} [H(V_{CX}|V_{CY}) + I(V_{CX};V_{CY}) + H(V_{CY}|V_{CX})] \nonumber \\\n& + & \delta_1 \nonumber\\ \n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) + H(V_{CY})- H(V_{CX}) - H(V_{CY})] \nonumber \\\n& + & \delta_1 \label{lemma2_ref4} \\\n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) - H(V_{CX})] + \delta_1 \nonumber\\\n& \geq & \frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY}) - H(V_{CX})] + \delta_1 -\delta \nonumber\\\n& = & \frac{1}{K} H(V_X) + \delta_1 -\delta\n\label{lemma2_part2}\n\end{eqnarray}\n\nwhere \eqref{lemma2_ref3} holds because $V_{CX}$ is a function of $X^K$ and \eqref{lemma2_ref4} holds because $X^K$ is independent of $V_{CY}$ asymptotically and $V_{CX}$ is independent of $V_{CY}$ asymptotically. 
\n\nThe proof for $H(X|V_{CX},V_{CY},V_Y)$ is similar to that for $H(X|V_{CX},V_{CY})$, because $V_Y$ is independent of $X$.\n\\\\ \textit{Proof for \eqref{H_inequality3}}:\n\begin{eqnarray}\n& & \frac{1}{K} H(X^K|V_{CX},V_{CY},V_Y) \nonumber\\\n& = & \frac{1}{K} H(X^K|V_{CX},V_{CY})\n\label{lemma2_ref5}\\\n& = & \frac{1}{K} [H(X^K,V_{CX},V_{CY}) - H(V_{CX},V_{CY})] \nonumber\\\n& = & \frac{1}{K} [H(X^K,V_{CY}) - H(V_{CX},V_{CY})] \label{lemma2_ref6}\\\n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) + I(X;V_{CY}) + H(V_{CY}|X^K)] \nonumber\\\n& & - \frac{1}{K} [H(V_{CX}|V_{CY}) + I(V_{CX};V_{CY}) + H(V_{CY}|V_{CX})] \nonumber \\\n& + & \delta_1 \nonumber\\ \n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) + H(V_{CY})- H(V_{CX}) \nonumber\\\n& - & H(V_{CY})] + \delta_1 \label{lemma2_ref7}\\\n& = & \frac{1}{K} [H(X^K) - H(V_{CY}) - H(V_{CX})] + \delta_1 \nonumber\\\n& \geq & \frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY}) \nonumber\\\n& - & H(V_{CX})] - \delta + \delta_1 \nonumber\\\n& = & \frac{1}{K} H(V_X) -\delta + \delta_1\n\label{lemma2_part3}\n\end{eqnarray}\n\nwhere \eqref{lemma2_ref5} holds because $V_Y$ and $X^K$ are independent, \eqref{lemma2_ref6} holds because $V_{CX}$ is a function of $X^K$ and \eqref{lemma2_ref7} holds because $X^K$ is independent of $V_{CY}$ asymptotically and $V_{CX}$ is independent of $V_{CY}$ asymptotically. 
\n\nFor the proof of \eqref{H_inequality4}, we look at the following probabilities:\n\begin{eqnarray}\n\text{Pr} \{V_X,V_{CX} \neq G(T_X)\} \le \delta\n\label{lemma2_eqn1}\n\end{eqnarray}\n\n\begin{eqnarray}\n\text{Pr} \{V_Y,V_{CY} \neq G(T_Y)\} \le \delta\n\label{lemma2_eqn2}\n\end{eqnarray}\n\n\begin{eqnarray}\n& & \frac{1}{K} H(X^K|T_Y) \nonumber\\\n& \le & \frac{1}{K} H(X^K|V_{CY},V_{Y}) + \delta \label{lemma2_ref8}\\\n& = & \frac{1}{K} [H(X^K, V_{CY},V_{Y}) - H(V_{CY},V_{Y})] + \delta \nonumber\\\n& = & \frac{1}{K} [H(X^K, V_{Y}) - H(V_{CY},V_{Y})] + \delta \label{lemma2_ref9}\\\n& = & \frac{1}{K} [H(X^K|V_{Y}) + I(X^K;V_{Y}) + H(V_{Y}|X^K)] \nonumber\\\n& & - \frac{1}{K} [H(V_{CY}|V_{Y}) + I(V_{CY};V_{Y}) + H(V_{Y}|V_{CY})] + \delta \nonumber\\ \n& = & \frac{1}{K} [H(X^K) + H(V_{Y})- H(V_{CY}) - H(V_{Y})]+ \delta \label{lemma2_ref10}\\\n& = & \frac{1}{K} [H(X^K) - H(V_{CY})] + \delta \n\label{lemma2_part4.1}\n\end{eqnarray}\n\nwhere \eqref{lemma2_ref8} follows from \eqref{lemma2_eqn2}, and \eqref{lemma2_ref9} holds because $V_{CY}$ and $V_Y$ are asymptotically independent. 
Furthermore, \\eqref{lemma2_ref10} holds because $V_{CY}$ and $V_{Y}$ are asymptotically independent and $X^K$ and $V_{Y}$ are asymptotically independent.\n\nFollowing a similar proof to those done above in this section, another bound for $H(X^K|V_{CY},V_Y)$ can be found as follows:\n\\begin{eqnarray}\n& & \\frac{1}{K} H(X^K|V_{CY},V_Y)\t\t\t\t\t\t\t\t\t\t\t\\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{CY},V_{Y}) - H(V_{CY},V_{Y})]\t\t\t\t\t\\nonumber\\\\\n& = & \\frac{1}{K} [H(X^K,V_{Y}) - H(V_{CY},V_{Y})] \t\t\t\t\t\t\\label{lemma2_ref11}\\\\\n& = & \\frac{1}{K} [H(X^K|V_{Y}) + I(X^K;V_{Y}) + H(V_{Y}|X)] \t\t\t\t\t\\nonumber\\\\\n& & - \\frac{1}{K} [H(V_{CY}|V_{Y}) + I(V_{CY};V_{Y}) + H(V_{Y}|V_{CY})] \t\t\\nonumber\\\\\t\n& = & \\frac{1}{K} [H(X^K) + H(V_{Y})- H(V_{CY}) - H(V_{Y})]\t\t\t\t\\label{lemma2_ref12}\\\\\n& = & \\frac{1}{K} [H(X^K) - H(V_{CY})]\t\\nonumber\\\\\n& \\geq & \\frac{1}{K} [H(V_X) + H(V_{CX}) + H(V_{CY}) - H(V_{CY})]\t- \\delta \\nonumber\\\\\n& = & \\frac{1}{K} [H(V_X) + H(V_{CX})]\t- \\delta\n\\label{lemma2_part4.2}\n\\end{eqnarray}\n\nwhere \\eqref{lemma2_ref11} and \\eqref{lemma2_ref12} hold for the same reason as \\eqref{lemma2_ref9} and \\eqref{lemma2_ref10} respectively. 
\n\nSince the information leakage is the total uncertainty less the uncertainty that remains given the wiretapped information, the following hold for the four cases considered in this section:\n\n\\begin{eqnarray}\nL_{V_X,V_Y}^{X^K} & = & H(X^K) - H(X^K|V_X,V_Y) \\nonumber\\\\ \n& \\le & H(X^K) - H(V_{CX}) - H(V_{CY}) + \\delta\n\\label{Lemma2_proof_inequality1}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality1}.\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY}}^{X^K} & = & H(X^K) - H(X^K|V_{CX},V_{CY}) \\nonumber\n\\\\ & \\le & H(X^K) - H(V_X) + \\delta\n\\label{Lemma2_proof_inequality2}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality2}.\n\n\\begin{eqnarray}\nL_{V_{CX},V_{CY},V_Y}^{X^K} & = & H(X^K) - H(X^K|V_{CX},V_{CY},V_Y) \\nonumber\n\\\\ & \\le & H(X^K) - H(V_X) + \\delta\n\\label{Lemma2_proof_inequality3}\n\\end{eqnarray}\nwhich proves \\eqref{L_inequality3}.\n\nThe two bounds for $H(X^K|V_{CY},V_Y)$ are given by \\eqref{lemma2_part4.1} and \\eqref{lemma2_part4.2}. \nFrom \\eqref{lemma2_part4.1}:\n\n\\begin{eqnarray}\nL_{V_{Y},V_{CY}}^{X^K} & \\geq & H(X^K) - [H(X^K) - H(V_{CY}) + \\delta] \\nonumber \\\\\n& = & H(V_{CY}) - \\delta\n\\label{Lemma2_proof_inequality4.1}\n\\end{eqnarray}\n\nand from \\eqref{lemma2_part4.2}:\n\\begin{eqnarray}\nL_{V_{Y},V_{CY}}^{X^K} & \\le & H(X^K) - \\left (H(V_X) + H(V_{CX}) - \\delta \\right) \\nonumber \\\\\n& = & H(X^K) - H(V_X) - H(V_{CX}) + \\delta \n\\label{Lemma2_proof_inequality4.2}\n\\end{eqnarray}\n\n\nCombining these results from \\eqref{Lemma2_proof_inequality4.1} and \\eqref{Lemma2_proof_inequality4.2} gives \\eqref{L_inequality4}.\n\\end{proof}\n\n\\section{Shannon's Cipher System}\n\nHere, we discuss Shannon's cipher system for two correlated sources (depicted in Figure \\ref{fig:shannon_cipher_2sources}). The two source outputs are i.i.d. random variables $X$ and $Y$, taking on values in the finite sets $\\mathcal{X}$ and $\\mathcal{Y}$. 
Both the transmitter and receiver have access to the key, a random variable that is independent of $X^K$ and $Y^K$ and takes values in $I_{M_k} = \\{0, 1, 2, \\ldots ,M_{k} - 1\\}$. The encoders compute the ciphertexts $X^{'}$ and $Y^{'}$ by applying encryption functions to the plaintexts $X^K$ and $Y^K$ respectively. The encryption functions are invertible: knowing $X^{'}$ and the key, $X^K$ can be retrieved. \n\nThe mutual information between the plaintext and ciphertext should be small so that the wiretapper cannot gain much information about the plaintext. For perfect secrecy this mutual information must be zero, in which case the key must be at least as long as the plaintext.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{shannon_2sources.pdf}\n\\caption{Shannon cipher system for two correlated sources}\n\\label{fig:shannon_cipher_2sources}\n\\end{figure}\n\nThe encoder functions for $X$ and $Y$ ($E_X$ and $E_Y$ respectively) are given as:\n\n\\begin{eqnarray}\nE_X : \\mathcal{X}^K \\times I_{M_{kX}} & \\rightarrow & I_{M_X'} = \\{0, 1, \\ldots, M_X' - 1\\} \\nonumber \n\\\\ && I_{M_{CX}'} = \\{0, 1, \\ldots, M_{CX}' - 1\\}\n\\label{xencoder_fcn}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nE_Y : \\mathcal{Y}^K \\times I_{M_{kY}} & \\rightarrow & I_{M_Y'} = \\{0, 1, \\ldots, M_Y' - 1\\} \\nonumber \n\\\\ && I_{M_{CY}'} = \\{0, 1, \\ldots, M_{CY}' - 1\\}\n\\label{yencoder_fcn}\n\\end{eqnarray}\n\nThe decoder is defined as:\n\n\\begin{eqnarray}\nD_{XY} : (I_{M'_X}, I_{M'_Y}, I_{M'_{CX}},I_{M'_{CY}}) & \\times & I_{M_{kX}}, I_{M_{kY}} \\nonumber \\\\\n& \\rightarrow & \\mathcal{X}^K \\times \\mathcal{Y}^K\n\\end{eqnarray}\n\nThe encoder and decoder mappings are below:\n\\begin{eqnarray}\nW_1 = F_{E_X} (X^K, W_{kX})\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_2 = F_{E_Y} (Y^K, W_{kY})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{X}^K = F_{D_X} (W_1, W_2, 
W_{kX})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{Y}^K = F_{D_Y} (W_1, W_2, W_{kY})\n\\end{eqnarray}\n\nor \n\n\\begin{eqnarray}\n(\\widehat{X}^K, \\widehat{Y}^K) = F_{D_{XY}} (W_1, W_2, W_{kX}, W_{kY})\n\\end{eqnarray}\n\n\nThe following conditions should be satisfied for cases 1 - 5:\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_X \\le R_X +\\epsilon\n\\label{cond1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_Y \\le R_Y +\\epsilon\n\\label{cond2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{kX} \\le R_{kX} +\\epsilon\n\\label{cond3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{kY} \\le R_{{kY}} +\\epsilon\n\\label{cond4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text {Pr} \\{\\widehat{X}^K \\neq X^K\\} \\le \\epsilon\n\\label{cond5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text{Pr} \\{ \\widehat{Y}^K \\neq Y^K\\} \\le \\epsilon\n\\label{cond6}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_1) \\le h_X + \\epsilon\n\\label{cond7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_2) \\le h_Y + \\epsilon\n\\label{cond8}\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K,Y^K|W_1) \\le h_{XY} + \\epsilon\n\\label{cond8.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K,Y^K|W_2) \\le h_{XY} + \\epsilon\n\\label{cond9}\n\\end{eqnarray}\n\nwhere $R_X$ is the rate of source $X$'s channel and $R_Y$ is the rate of source $Y$'s channel. Here, $R_{kX}$ is the rate of the key channel at $X^K$ and $R_{kY}$ is the rate of the key channel at $Y^K$. The security levels, measured by the total and individual uncertainties, are $h_{XY}$ and $(h_X, h_Y)$ respectively. 
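As noted above, perfect secrecy demands zero mutual information between plaintext and ciphertext, which forces the key to be at least as long as the plaintext. A minimal sketch of why a full-length uniform key achieves this, assuming a hypothetical toy alphabet $I_M$ with $M = 5$ and masking by addition modulo $M$ (these values are not from the construction above):

```python
# Toy check of perfect secrecy for a one-time-pad-style cipher on I_M.
# Assumptions (illustrative only): alphabet size M = 5, masking is
# addition modulo M, and the key is uniform on I_M.
from collections import Counter

M = 5

def encrypt(w, k):
    return (w + k) % M

# For every plaintext w, tabulate the ciphertext distribution over a
# uniform key; each ciphertext value occurs exactly once, so the
# distribution is uniform and identical for all plaintexts.
dists = [Counter(encrypt(w, k) for k in range(M)) for w in range(M)]
uniform = all(set(d.values()) == {1} for d in dists)
```

Since the ciphertext distribution does not depend on the plaintext, observing the ciphertext alone gives the wiretapper no information, i.e. the mutual information is zero.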
\n\\\\\\\\\nThe cases 1 - 5 are:\n\\\\ \\textit{Case 1:} When $T_X$ and $T_Y$ are leaked and both $X^K$ and $Y^K$ need to be kept secret.\n\\\\ \\textit{Case 2:} When $T_X$ and $T_Y$ are leaked and $X^K$ needs to be kept secret.\n\\\\ \\textit{Case 3:} When $T_X$ is leaked and both $X^K$ and $Y^K$ need to be kept secret.\n\\\\ \\textit{Case 4:} When $T_X$ is leaked and $Y^K$ needs to be kept secret.\n\\\\ \\textit{Case 5:} When $T_X$ is leaked and $X^K$ needs to be kept secret.\n\\\\ where $T_X$ is the syndrome produced by $X$, containing $V_{CX}$ and $V_X$, and $T_Y$ is the syndrome produced by $Y$, containing $V_{CY}$ and $V_Y$.\n\\\\\\\\\nThe admissible rate region for each case is defined as follows:\n\\\\ \\textit{Definition 1a:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$) is admissible for case 1 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) and ($F_{E_{Y}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond9} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1b:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible for case 2 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond7} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1c:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible for case 3 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) and ($F_{E_{Y}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond8}, \\eqref{cond9} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1d:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{Y}$) is admissible for case 4 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond8} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1e:} ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible for 
case 5 if there exists a code ($F_{E_{X}}$, $F_{D_{XY}}$) such that \\eqref{cond1} - \\eqref{cond6} and \\eqref{cond7} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 2:} The admissible rate regions $\\mathcal{R}_1$ to $\\mathcal{R}_5$ are defined as:\n\n\\begin{eqnarray}\n\\mathcal{R}_1(h_{XY}) = \\{(R_X, R_Y, R_{kX}, R_{kY}): \\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{XY} ) \\text{ is admissible for case 1} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_2(h_{X}) = \\{(R_X, R_Y, R_{kX}, R_{kY}): \\nonumber\n\\\\ (R_X, R_Y, R_{kX}, R_{kY}, h_{X} ) \\text{ is admissible for case 2} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_3(h_X, h_Y) = \\{(R_X, R_Y, R_{kX}, R_{kY}): \\nonumber\n\\\\ (R_X, R_Y, R_{kX}, R_{kY}, h_{X}, h_{Y} ) \\text{ is admissible for case 3} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_4(h_{Y}) = \\{(R_X, R_Y, R_{kX}, R_{kY}): \\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{Y} ) \\text{ is admissible for case 4} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}_5(h_{X}) = \\{(R_X, R_Y, R_{kX}, R_{kY}): \\nonumber\n\\\\(R_X, R_Y, R_{kX}, R_{kY}, h_{X} ) \\text{ is admissible for case 5} \\}\n\\end{eqnarray}\n\nThe following theorems characterize these regions:\n\n\\textit{Theorem 2:} For $0 \\le h_{XY} \\le H(X,Y)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_1(h_{XY}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \\nonumber\n\\\\ && R_X \\geq H(X|Y), \\nonumber\n\\\\ && R_Y \\geq H(Y|X), \\nonumber\n\\\\ && R_X + R_Y \\geq H(X,Y), \\nonumber\n\\\\ && R_{kX} \\geq h_{XY} \\text{ and } R_{kY} \\geq h_{XY} \\}\n\\label{theorem2}\n\\end{eqnarray}\n\n\\textit{Theorem 3:} For $0 \\le h_{X} \\le H(X)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_2(h_{X}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \\nonumber\n\\\\ && R_X \\geq H(X|Y), \\nonumber\n\\\\ && R_Y \\geq H(Y|X), \\nonumber\n\\\\ && R_X + R_Y \\geq 
H(X,Y), \\nonumber\n\\\\ && R_{kX} \\geq h_X \\text{ and } R_{kY} \\geq h_Y \\}\n\\label{theorem3}\n\\end{eqnarray}\n\n\\textit{Theorem 4:} For $0 \\le h_{X} \\le H(X)$ and $0 \\le h_{Y} \\le H(Y)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_3(h_{X}, h_{Y}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \\nonumber\n\\\\ && R_X \\geq H(X|Y), \\nonumber\n\\\\ && R_Y \\geq H(Y|X), \\nonumber\n\\\\ && R_X + R_Y \\geq H(X,Y), \\nonumber\n\\\\ && R_{kX} \\geq h_{X} \\text{ and } R_{kY} \\geq h_{Y} \\}\n\\label{theorem4}\n\\end{eqnarray}\n\n\\textit{Theorem 5:} For $0 \\le h_{X} \\le H(X)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_5(h_{X}) = \\{(R_X, R_Y, R_{kX},R_{kY}): \\nonumber\n\\\\ && R_X \\geq H(X|Y), \\nonumber\n\\\\ && R_Y \\geq H(Y|X), \\nonumber\n\\\\ && R_X + R_Y \\geq H(X,Y), \\nonumber\n\\\\ && R_{kX} \\geq h_{X} \\text{ and } R_{kY} \\geq 0 \\}\n\\label{theorem5}\n\\end{eqnarray}\n\nWhen $h_X = 0$, case 3 reduces to case 4, whose region then follows from \\eqref{theorem4}. \nHence, Corollary 1 follows:\n\\\\ \\textit{Corollary 1:} $\\mathcal{R}_4(h_{Y}) = \\mathcal{R}_3(0, h_Y)$\n\nThe security levels, measured by the total and individual uncertainties $h_{XY}$ and $(h_X, h_Y)$ respectively, indicate how much uncertainty an eavesdropper faces: the greater the uncertainty, the less the eavesdropper knows and the higher the level of security. \n\n\n\\section{Proof of Theorems 2 - 5}\nThis section initially proves the direct parts of Theorems 2 - 5 and thereafter the converse parts.\n\n\\subsection{Direct parts}\nAll the channel rates in the theorems above follow from the Slepian-Wolf theorem, hence there is no need to prove them. \nWe construct a code based on the prototype code ($W_X, W_Y, W_{CX}, W_{CY}$) in Lemma 1. 
In order to include a key in the prototype code, $W_X$ is divided into two parts as per the method used by Yamamoto \\cite{shannon1_yamamoto}:\n\\begin{eqnarray}\nW_{X1} = W_X \\text{ mod } M_{X1} \\in I_{M_{X1}} = \\{0, 1, 2, \\ldots, M_{X1} - 1\\}\n\\label{theorems2-4_eq_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{X2} = \\frac{W_X - W_{X1}}{M_{X1}} \\in I_{M_{X2}} = \\{0, 1, 2, \\ldots, M_{X2} - 1\\}\n\\label{theorems2-4_eq_2}\n\\end{eqnarray}\n\nwhere $M_{X1}$ is a given integer and $M_{X2}$ is the ceiling of $M_X\/M_{X1}$. For simplicity, $M_X\/M_{X1}$ is treated as an integer, because the difference between the ceiling and the actual value can be ignored when $K$ is sufficiently large. In the same way, $W_Y$ is divided:\n\n\\begin{eqnarray}\nW_{Y1} = W_Y \\text{ mod } M_{Y1} \\in I_{M_{Y1}} = \\{0, 1, 2, \\ldots, M_{Y1} - 1\\}\n\\label{theorems2-4_eq_3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{Y2} = \\frac{W_Y - W_{Y1}}{M_{Y1}} \\in I_{M_{Y2}} = \\{0, 1, 2, \\ldots, M_{Y2} - 1\\}\n\\label{theorems2-4_eq_4}\n\\end{eqnarray}\n\nThe common information components $W_{CX}$ and $W_{CY}$ already form separate portions and are not divided further. 
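The division of $W_X$ above is simply a base-$M_{X1}$ digit decomposition, so it is invertible. A minimal sketch, assuming $M_X/M_{X1}$ is an integer as stated, with hypothetical values:

```python
# Sketch of the Yamamoto-style codeword split used above:
# W_X1 = W_X mod M_X1 (low part), W_X2 = (W_X - W_X1) / M_X1 (high part).
def split_codeword(W_X, M_X1):
    W_X1 = W_X % M_X1
    W_X2 = (W_X - W_X1) // M_X1   # exact integer division by construction
    return W_X1, W_X2

def merge_codeword(W_X1, W_X2, M_X1):
    # Inverse map: recovers W_X exactly, since 0 <= W_X1 < M_X1.
    return W_X2 * M_X1 + W_X1

# Hypothetical example: W_X = 1234 with M_X1 = 100 splits into (34, 12).
parts = split_codeword(1234, 100)
```

The same decomposition and inverse apply verbatim to $W_Y$ with $M_{Y1}$.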
\nIt can be shown that when some of the codewords are wiretapped, the uncertainties of $X^K$ and $Y^K$ are bounded as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X2},W_Y) \\geq I(X;Y) + \\frac{1}{K} \\log M_{X1} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X},W_{Y2}) \\geq I(X;Y) + \\frac{1}{K} \\log M_{Y1} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X},W_{Y2}) \\geq I(X;Y) - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X},W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_4}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X},W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_5}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_Y, W_{CY}) \\geq H(X|Y) + \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_6}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_Y, W_{CY}) \\geq \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_7}\n\\end{eqnarray}\n\nwhere $\\epsilon_{0}^{'} \\rightarrow 0$ as $\\epsilon_{0} \\rightarrow 0$.\nThe proofs for \\eqref{theorems2-4_ineq_1} - \\eqref{theorems2-4_ineq_7} are the same as the proof of Yamamoto's \\cite{shannon1_yamamoto} Lemma A1; the only difference is that our $W_{CX}$, $W_{CY}$, $M_{CX}$ and $M_{CY}$ are denoted $W_{C1}$, $W_{C2}$, $M_{C1}$ and $M_{C2}$ respectively by Yamamoto. 
In addition, the following inequalities are considered here:\n\\begin{eqnarray}\n \\frac{1}{K} H(Y^K|W_X, W_{CX}, W_{CY}, W_{Y2}) & \\geq & \\frac{1}{K} \\log M_{Y1} \\nonumber\n \\\\ & - & \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n \\frac{1}{K} H(Y^K|W_X, W_{CX}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{Y1} \\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{Y2} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_{X2}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{X1} \\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{CX} - \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_10}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_{X2}, W_{CY}) & \\geq & \\frac{1}{K} \\log M_{Y1} \\nonumber\n\\\\ & + & \\frac{1}{K} \\log M_{Y2} + \\frac{1}{K} \\log M_{CX} \\nonumber\n\\\\ & - & \\epsilon_{0}^{'}\n\\label{theorems2-4_ineq_11}\n\\end{eqnarray}\n\nThe inequalities \\eqref{theorems2-4_ineq_8} and \\eqref{theorems2-4_ineq_9} can be proved in the same way as Yamamoto's \\cite{shannon1_yamamoto} Lemma A2, and \\eqref{theorems2-4_ineq_10} and \\eqref{theorems2-4_ineq_11} can be proved in the same way as Yamamoto's \\cite{shannon1_yamamoto} Lemma A1. \n\nFor each proof we consider cases where a key already exists for either $V_{CX}$ or $V_{CY}$, and the encrypted common information portion is then used to mask the other portions (the remaining common information portion and the private information portions). Two cases are considered for each: first, when the entropy of the common information portion is greater than that of the portion to be masked, and second, when it is less. In the latter case a smaller additional key is needed to cover the portion entirely. 
This has the effect of reducing the required key length, which is explained in greater detail in Section VII.\n\n\\begin{proof}[Proof of Theorem 2]\nSuppose that ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$) $\\in$ \n$\\mathcal{R}_1$ for $h_{XY} \\le H(X,Y)$. Without loss of generality, we assume that $h_X \\le h_Y$. Then, from \\eqref{theorem2} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem2_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{XY}, R_{kY} \\geq h_{XY} \n\\label{theorem2_proof_2}\n\\end{eqnarray}\n\nAssume a key exists for $V_{CY}$. For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$, and set\n\n\\begin{eqnarray}\nM_{CY} = 2^{K h_{XY}}\n\\label{theorem2_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem2_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem2_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCY})\n\\label{theorem2_proof_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem2_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the key $W_{kCY}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy from \\eqref{lemma1_2} - \\eqref{lemma1_4} and \\eqref{theorem2_proof_1} - \\eqref{theorem2_proof_6}, that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem2_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{XY}\t\\label{num3}\n\\\\ & \\le & R_{kX}\n\\label{theorem2_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num3} comes from \\eqref{theorem2_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{XY}\t\\label{num4}\n\\\\ & \\le & R_{kY}\n\\label{theorem2_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num4} comes from \\eqref{theorem2_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem2_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num5} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$, and $W_{CY}$ is covered by an existing random number key. 
Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem2_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$, $h_{XY}$) is admissible from \\eqref{theorem2_proof_11} - \\eqref{theorem2_proof_17}.\n\nNext the case where: $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, there are shorter length keys used in addition to the key provided by $W_{CY}$ in order to make the key lengths required by the individual portions. For example the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem2_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem2_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem2_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem2_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the keys $W_{k1}$ - $W_{k5}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem2_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{1} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{2} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{3}] \\nonumber\n\\\\ & = & \\frac{1}{K} [3 \\log M_{kCY} + \\log M_{1} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{3}] \\nonumber\n\\\\ & \\geq & 3 h_{XY} - \\epsilon_0 \\label{num333.1}\n\\\\ & \\geq & h_{XY}\n\\label{theorem2_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num333.1} results from \\eqref{theorem2_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{4} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{5}] \\nonumber\n\\\\ & = & \\frac{1}{K} [2 \\log M_{kCY} + \\log M_{4} + \\log M_{5}] \\nonumber\n\\\\ & \\geq & 2 h_{XY} - \\epsilon_0 \\label{num333}\n\\\\ & \\geq & h_{XY}\n\\label{theorem2_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num333} results from \\eqref{theorem2_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, \\nonumber\n\\\\ && W_{CX} \\oplus W_{k3},\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5},\t\\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem2_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num5.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$ and some shorter keys, and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem2_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{XY}$) is admissible from \\eqref{theorem2_proof_11.1.1} - \\eqref{theorem2_proof_17.1.1}.\n\n\\end{proof}\n\nTheorems 3 - 5 are proven in the same way, with varying codewords and keys. The proofs follow:\n\n\\begin{proof}[Proof of Theorem 3]\n\nThe consideration for the security levels is that $h_Y \\geq h_X$, because $Y$ contains the key that is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$) $\\in$ \n$\\mathcal{R}_2$. From \\eqref{theorem3} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem3_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem3_proof_2}\n\\end{eqnarray}\n\nAssume a key exists for $V_{CY}$. 
For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$, and set\n\n\\begin{eqnarray}\nM_{CY} = 2^{K h_{Y}}\n\\label{theorem3_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem3_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem3_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCY})\n\\label{theorem3_proof_9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem3_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the key $W_{kCY}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy from \\eqref{lemma1_2} - \\eqref{lemma1_4} and \\eqref{theorem3_proof_1} - \\eqref{theorem3_proof_6}, that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem3_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\\label{num3.2}\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num3.3}\n\\\\ & \\le & R_{kX}\n\\label{theorem3_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num3.2} comes from \\eqref{theorem3_proof_6} and \\eqref{num3.3} comes from the consideration stated at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CY}\t\\nonumber\n\\\\ & = & h_{Y}\t\\label{num4.1}\n\\\\ & \\le & R_{kY}\n\\label{theorem3_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num4.1} comes from \\eqref{theorem3_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY},\t\\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem3_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num5.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$, and $W_{CY}$ is covered by an existing random number key. Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and that they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem3_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible from \\eqref{theorem3_proof_11} - \\eqref{theorem3_proof_17}.\n\nNext, the case where $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, shorter keys are used in addition to the key provided by $W_{kCY}$ in order to make up the key lengths required by the individual portions. 
For example, the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together match the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem3_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem3_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem3_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem3_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$, as these are protected by the keys $W_{k1}$ - $W_{k5}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem3_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{1} + \\log M_{kCY} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCY} + \\log M_{3}] \\nonumber\n\\\\ & = & \\frac{1}{K} [3 \\log M_{kCY} + \\log M_{1} + \\log M_{2} + \\log M_{3}] \\nonumber\n\\\\ & \\geq & 3 h_{Y} - \\epsilon_0 \\label{num334.1}\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\nonumber\n\\\\ & \\geq & h_{X}\n\\label{theorem3_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num334.1} results from \\eqref{theorem3_proof_6}, and the subsequent inequality follows from the consideration stated at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}]\t\\nonumber\n\\\\ & = & \\frac{1}{K} [\\log M_{kCY} + \\log M_{4} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{5}] \\nonumber\n\\\\ & = & \\frac{1}{K} [2 \\log M_{kCY} + \\log M_{4} + \\log M_{5}] \\nonumber\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num333.5}\n\\\\ & \\geq & h_{Y}\n\\label{theorem3_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num333.5} results from \\eqref{theorem3_proof_6}.\n\nThe resulting security levels are:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X^K|W_{X1} \\oplus W_{k1}, \\nonumber \n\\\\ && W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3},\t\\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5},\t\\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & \\frac{1}{K} H(X^K)\t\\label{num5.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem3_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num5.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by the key $W_{kCY}$ and some shorter keys, and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem3_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$) is admissible from \\eqref{theorem3_proof_11.1.1} - \\eqref{theorem3_proof_17.1.1}.\n\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem 4]\n\n\nAgain, the consideration for the security levels is that $h_Y \\geq h_X$, because $Y$ contains the key that is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$) $\\in$ \n$\\mathcal{R}_3$. 
From \\eqref{theorem3} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem4_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem4_proof_2}\n\\end{eqnarray}\n\nAssume a key exists for $V_{CY}$. For the first case, consider the following: $H(V_{CY}) \\geq H(V_X)$, $H(V_{CY}) \\geq H(V_Y)$ and $H(V_{CY}) \\geq H(V_{CX})$.\n \n\\begin{eqnarray}\nM_{CY} = 2^{K h_{Y}}\n\\label{theorem4_proof_6}\n\\end{eqnarray}\n\nIn the same way as in Theorems 2 and 3, the codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCY}, W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY})\n\\label{theorem4_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, W_{CY})\n\\label{theorem4_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{kCY})\n\\label{theorem4_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the key $W_{kCY}$.\n\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem4_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CY} \\nonumber\n\\\\ & = & h_{Y} \\label{num4.2}\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num4.3}\n\\\\ & \\le & R_{kX}\n\\label{theorem4_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num4.2} comes from \\eqref{theorem4_proof_6} and \\eqref{num4.3} comes from the consideration stated at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CY} \\nonumber\n\\\\ & = & h_{Y} \\label{num5.1}\n\\\\ & \\le & R_{kY}\n\\label{theorem4_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num5.1} comes from \\eqref{theorem4_proof_6}.\n\nThe security levels thus result:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCY}, W_{CX} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCY}, W_{Y2} \\oplus W_{kCY}, \\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & H(X^K) \\label{num6.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem4_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by key $W_{CY}$ and $W_{CY}$ is covered by an existing random number key. 
Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem4_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem4_proof_11} - \\eqref{theorem4_proof_17}.\n\nNext, the case where $H(V_{CY}) < H(V_X)$, $H(V_{CY}) < H(V_Y)$ and $H(V_{CY}) < H(V_{CX})$ is considered. Here, there are shorter length keys used in addition to the key provided by $W_{CY}$ in order to make up the key lengths required by the individual portions. For example, the key $W_{k1}$ comprises $W_{kCY}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX} \\oplus W_{k3})\n\\label{theorem4_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, W_{CY})\n\\label{theorem4_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem4_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem4_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. 
The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the keys derived from $W_{kCY}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} \\nonumber\n\\\\ & + & \\log M_{X2} + \\log M_{CX}) \\nonumber\n\\\\ & + & \\frac{1}{K} (\\log M_{Y1} + \\log M_{Y2} + \\log M_{CY}) \\nonumber\n\\\\ & \\le & H(X|Y) + H(Y|X) + I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem4_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}] \\nonumber\n\\\\ & = & \\log M_{kCY} + \\log M_{1} + \\log M_{kCY} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCY} + \\log M_{3} \\nonumber\n\\\\ & = & 3 \\log M_{kCY} + \\log M_{1} + \\log M_{2} + \\log M_{3} \\nonumber\n\\\\ & \\geq & 3 h_{Y} - \\epsilon_0 \\label{num335.1}\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\nonumber\n\\\\ & \\geq & h_{X}\n\\label{theorem4_proof_13.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num335.1} results from \\eqref{theorem4_proof_6} and the final step follows from the consideration at the beginning of this proof.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}] \\nonumber\n\\\\ & = & \\log M_{kCY} + \\log M_{4} \\nonumber\n\\\\ & + & \\log M_{kCY} + \\log M_{5} \\nonumber\n\\\\ & = & 2 \\log M_{kCY} + \\log M_{4} + \\log M_{5} \\nonumber\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num336.5}\n\\\\ & \\geq & h_{Y}\n\\label{theorem4_proof_14.1.1}\n\\end{eqnarray}\n\n\nwhere \\eqref{num336.5} results from \\eqref{theorem4_proof_6}.\n\nThe security levels thus result:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{k1}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{k2}, W_{CX} \\oplus 
W_{k3}, \\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}, \\nonumber\n\\\\ && W_{CY}) \\nonumber\n\\\\ & = & H(X^K) \\label{num6.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem4_proof_16.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by key $W_{CY}$ and some shorter length key and $W_{CY}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem4_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem4_proof_11.1.1} - \\eqref{theorem4_proof_17.1.1}.\n\nThe region indicated for $\\mathcal{R}_4$ is derived from this region for $\\mathcal{R}_3$, when $h_X = 0$.\n\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem 5]\nAs before, $V_{CY}$ may be used as a key; however, here we use $V_{CX}$ as the key to show some variation. \n\nNow the consideration for the security levels is that $h_X \\geq h_Y$ because $X$ contains the key that is used for masking.\nSuppose that ($R_X$, $R_Y$, $R_{KX}$, $R_{KY}$) $\\in$ \n$\\mathcal{R}_5$. From \\eqref{theorem3} \n\\begin{eqnarray}\n&& R_X \\geq H(X^K|Y^K) \\nonumber\n\\\\&& R_Y \\geq H(Y^K|X^K) \\nonumber\n\\\\&& R_X + R_Y \\geq H(X^K, Y^K)\n\\label{theorem5_proof_1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nR_{kX} \\geq h_{X}, R_{kY} \\geq h_{Y} \n\\label{theorem5_proof_2}\n\\end{eqnarray}\n\nAssume a key exists for $V_{CX}$. 
For the first case, consider the following: $H(V_{CX}) \\geq H(V_X)$, $H(V_{CX}) \\geq H(V_Y)$ and $H(V_{CX}) \\geq H(V_{CY})$.\n \n\\begin{eqnarray}\nM_{CX} = 2^{K h_{X}}\n\\label{theorem5_proof_6}\n\\end{eqnarray}\n\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{kCX}, W_{X2} \\oplus W_{kCX}, W_{CX})\n\\label{theorem5_proof_7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{Y1} \\oplus W_{kCX}, W_{Y2} \\oplus W_{kCX}, W_{CY} \\oplus W_{kCX})\n\\label{theorem5_proof_8}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{kCX})\n\\label{theorem5_proof_10}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the key $W_{kCX}$ and $W_{kCX}$ is protected by a random number key.\n\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} + \\log M_{X2} \\nonumber\n\\\\ & + & \\log M_{CX}) + \\frac{1}{K} (\\log M_{Y1} \\nonumber\n\\\\ & + & \\log M_{Y2} + \\log M_{CY}) \\nonumber \n\\\\ & \\le & H(X|Y) + H(Y|X) \\nonumber\n\\\\ & + & I(X;Y) + \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem5_proof_11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} \\log M_{CX} \\nonumber\n\\\\ & \\geq & h_X - \\epsilon_0 \\label{num6.31}\n\\\\ & \\le & R_{kX}\n\\label{theorem5_proof_13}\n\\end{eqnarray}\n\nwhere \\eqref{num6.31} comes from \\eqref{theorem5_proof_6}.\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} \\log M_{CX} \\nonumber\n\\\\ & = & h_{X} \\label{num6.32}\n\\\\ & \\geq & h_Y \\label{num6.33}\n\\\\ & \\le & R_{kY}\n\\label{theorem5_proof_14}\n\\end{eqnarray}\nwhere \\eqref{num6.32} comes from \\eqref{theorem5_proof_6} and \\eqref{num6.33} comes 
from the consideration stated at the beginning of this proof.\n\nThe security levels thus result:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{kCX}, \\nonumber\n\\\\ && W_{X2} \\oplus W_{kCX}, W_{CX}, W_{CY} \\oplus W_{kCX}, \\nonumber\n\\\\ && W_{Y1} \\oplus W_{kCX}, W_{Y2} \\oplus W_{kCX}) \\nonumber\n\\\\ & = & H(X^K) \\label{num16.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem5_proof_16}\n\\end{eqnarray}\n\nwhere \\eqref{num16.1} holds because $W_{X1}$, $W_{X2}$, $W_{CX}$, $W_{Y1}$, $W_{Y2}$ are covered by key $W_{kCX}$ and $W_{kCX}$ is covered by an existing random number key. Equations \\eqref{lemma1_1} - \\eqref{lemma1_7} imply that $W_{X1}$, $W_{X2}$, $W_{Y1}$ and $W_{Y2}$ have almost no redundancy and they are mutually independent.\n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem5_proof_17}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem5_proof_11} - \\eqref{theorem5_proof_17}.\n\nNext, the case where $H(V_{CX}) < H(V_X)$, $H(V_{CX}) < H(V_Y)$ and $H(V_{CX}) < H(V_{CY})$ is considered. Here, there are shorter length keys used in addition to the key provided by $W_{CX}$ in order to make up the key lengths required by the individual portions. 
For example, the key $W_{k1}$ comprises $W_{kCX}$ and a short key $W_1$, which together provide the length of $W_{X1}$.\nThe codewords $W_X$ and $W_Y$ and their keys $W_{kX}$ and $W_{kY}$ are now defined:\n\n\\begin{eqnarray}\nW_X = (W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, W_{CX})\n\\label{theorem5_proof_7.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_Y = (W_{CY} \\oplus W_{k3}, W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5})\n\\label{theorem5_proof_8.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kX} = (W_{k1}, W_{k2}, W_{k3})\n\\label{theorem5_proof_10.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kY} = (W_{k4}, W_{k5})\n\\label{theorem5_proof_10.1.2}\n\\end{eqnarray}\n\nwhere $W_\\alpha \\in I_{M_\\alpha} = \\{0, 1, \\ldots, M_\\alpha - 1\\}$. The wiretapper will not know $W_{X1}$, $W_{X2}$ and $W_{CX}$ from $W_X$ and $W_{Y1}$, $W_{Y2}$ and $W_{CY}$ from $W_Y$ as these are protected by the keys derived from $W_{kCX}$.\n\nIn this case, $R_X$, $R_Y$, $R_{kX}$ and $R_{kY}$ satisfy that\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_X + \\frac{1}{K} \\log M_Y & = & \\frac{1}{K} (\\log M_{X1} \\nonumber\n\\\\ & + & \\log M_{X2} + \\log M_{CX}) \\nonumber\n\\\\ & + & \\frac{1}{K} (\\log M_{Y1} + \\log M_{Y2} \\nonumber\n\\\\ & + & \\log M_{CY}) \\nonumber\n\\\\ & \\le & H(X|Y) + H(Y|X) + I(X;Y) \\nonumber\n\\\\ & + & \\epsilon_0 \\nonumber\n\\\\ & = & H(X,Y) \\nonumber\n\\\\ & \\le & R_X + R_Y\n\\label{theorem5_proof_11.1.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kX} & = & \\frac{1}{K} [\\log M_{k1} + \\log M_{k2} + \\log M_{k3}] \\nonumber\n\\\\ & = & \\log M_{kCX} + \\log M_{1} + \\log M_{kCX} \\nonumber\n\\\\ & + & \\log M_{2} + \\log M_{kCX} + \\log M_{3} \\nonumber\n\\\\ & = & 3 \\log M_{kCX} + \\log M_{1} + \\log M_{2} + \\log M_{3} \\nonumber\n\\\\ & \\geq & 3 h_{X} - \\epsilon_0 \\label{num3335.1}\n\\\\ & \\geq & h_{X}\n\\label{theorem5_proof_13.1.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num3335.1} results from 
\\eqref{theorem5_proof_6}.\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{kY} & = & \\frac{1}{K} [\\log M_{k4} + \\log M_{k5}] \\nonumber\n\\\\ & = & \\log M_{kCX} + \\log M_{4} \\nonumber\n\\\\ & + & \\log M_{kCX} + \\log M_{5} \\nonumber\n\\\\ & = & 2 \\log M_{kCX} + \\log M_{4} + \\log M_{5} \\nonumber\n\\\\ & \\geq & 2 h_{X} - \\epsilon_0 \\label{num33335.1}\n\\\\ & \\geq & 2 h_{Y} - \\epsilon_0 \\label{num336.6}\n\\\\ & \\geq & h_{Y}\n\\label{theorem5_proof_14.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num33335.1} results from \\eqref{theorem5_proof_6} and \\eqref{num336.6} results from the consideration at the beginning of this proof.\n\n\nThe security levels thus result:\n\\begin{eqnarray}\n\\frac{1}{K} H(X^K|W_X, W_Y) & = & \\frac{1}{K} H(X|W_{X1} \\oplus W_{k1}, W_{X2} \\oplus W_{k2}, \\nonumber\n\\\\ && W_{CX}, W_{CY} \\oplus W_{k3}, \\nonumber\n\\\\ && W_{Y1} \\oplus W_{k4}, W_{Y2} \\oplus W_{k5}) \\nonumber\n\\\\ & = & H(X^K) \\label{num6.1.1.1.1}\n\\\\ & \\ge & h_X - \\epsilon^{'}_0\n\\label{theorem5_proof_16.1.1.1}\n\\end{eqnarray}\n\nwhere \\eqref{num6.1.1.1.1} holds because $W_{X1}$, $W_{X2}$, $W_{CY}$, $W_{Y1}$, $W_{Y2}$ are covered by key $W_{CX}$ and some shorter length key and $W_{CX}$ is covered by an existing random number key. \n\nSimilarly, \n\\begin{eqnarray}\n\\frac{1}{K} H(Y^K|W_X, W_Y) \\geq h_Y - \\epsilon^{'}_0\n\\label{theorem5_proof_17.1.1}\n\\end{eqnarray}\n\nTherefore ($R_X$, $R_Y$, $R_{kX}$, $R_{kY}$, $h_{X}$, $h_{Y}$) is admissible from \\eqref{theorem5_proof_11.1.1} - \\eqref{theorem5_proof_17.1.1}.\n\n\\end{proof}\n\n\n\\subsection{Converse parts}\nFrom the Slepian-Wolf theorem we know that the channel rate must satisfy $R_X \\geq H(X|Y)$, $R_Y \\geq H(Y|X)$ and $R_X + R_Y \\geq H(X,Y)$ to achieve a low error probability when decoding.\nHence, the key rates are considered in this subsection. 
\n\\\\\n\\textit{Converse part of Theorem 2:}\n\\begin{eqnarray}\nR_{kX} & \\geq & \\frac{1}{K} \\log M_{kX} - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}) - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}|W_1) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}) - I(W_{kX}; W_1)] - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}|X, Y, W_1) \\nonumber\n\\\\ & + & I(X, Y; W_{kX}|W_1)] - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X, Y; W_{kX}|W_1) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(X, Y|W_1) - H(X,Y|W_1, W_{kX})] - \\epsilon \\nonumber\n\\\\ & \\geq & h_{XY} - \\frac{1}{K} H(X,Y|W_1, W_{kX}) - \\epsilon \\label{conv_1}\n\\\\ & = & h_{XY} - H(V_{CY}) - \\epsilon \\nonumber\n\\\\ & = & h_{XY} - \\epsilon\n\\end{eqnarray}\n\nwhere \\eqref{conv_1} results from equation \\eqref{cond8.1}. Here, we consider the extremes of $H(V_{CY})$ in order to determine the limit for $R_{kX}$. When this quantity is minimum then we are able to achieve the maximum bound of $h_{XY}$.\n\n\\begin{eqnarray}\nR_{kY} & \\geq & \\frac{1}{K} \\log M_{kY} - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}) - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}|W_2) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}) - I(W_{kY}; W_2)] - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}|X, Y, W_2) \\nonumber\n\\\\ & + & I(X, Y; W_{kY}|W_2)] - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X, Y; W_{kY}|W_2) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(X, Y|W_2) - H(X,Y|W_2, W_{kY})] - \\epsilon \\nonumber\n\\\\ & \\geq & h_{XY} - \\frac{1}{K} H(X,Y|W_2, W_{kY}) - \\epsilon \\label{conv_2}\n\\\\ & = & h_{XY} - H(V_{CX}) - \\epsilon \\nonumber\n\\\\ & = & h_{XY} - \\epsilon\n\\end{eqnarray}\n\n\nwhere \\eqref{conv_2} results from equation \\eqref{cond9}. 
Here, we consider the extremes of $H(V_{CX})$ in order to determine the limit for $R_{kY}$. When this quantity is minimum then we are able to achieve the maximum bound of $h_{XY}$.\n\n\n\\textit{Converse part of Theorem 3:}\n\\begin{eqnarray}\nR_{kX} & \\geq & \\frac{1}{K} \\log M_{kX} - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}) - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kX}|W_1) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}) - I(W_{kX}; W_1)] - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kX}|X, W_1) + I(W_{kX}; W_1) \\nonumber\n\\\\ & + & I(X; W_{kX}|W_1) - I(W_{kX}; W_1)] - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(X; W_{kX}|W_1) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(X|W_1) - H(X|W_1, W_{kX})] - \\epsilon \\nonumber\n\\\\ & \\geq & h_{X} - H(V_{CY}) - \\epsilon \\label{conv_3}\n\\\\ & = & h_{X} - \\epsilon\n\\end{eqnarray}\n\n\nwhere \\eqref{conv_3} results from \\eqref{cond7}. Here, we consider the extremes of $H(V_{CY})$ in order to determine the limit for $R_{kX}$. When this quantity is minimum then we are able to achieve the maximum bound of $h_{X}$.\n\n\\begin{eqnarray}\nR_{kY} & \\geq & \\frac{1}{K} \\log M_{kY} - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}) - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(W_{kY}|W_2) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}) - I(W_{kY}; W_2)] - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(W_{kY}|Y, W_2) + I(W_{kY}; W_2) \\nonumber\n\\\\ & + & I(Y; W_{kY}|W_2) - I(W_{kY}; W_2)] - \\epsilon \\nonumber\n\\\\ & \\geq & \\frac{1}{K} I(Y; W_{kY}|W_2) - \\epsilon \\nonumber\n\\\\ & = & \\frac{1}{K} [H(Y|W_2) - H(Y|W_2, W_{kY})] - \\epsilon \\nonumber\n\\\\ & \\geq & h_{Y} - H(V_{CX}) - \\epsilon \\label{conv_4}\n\\\\ & = & h_{Y} - \\epsilon\n\\end{eqnarray}\n\nwhere \\eqref{conv_4} results from \\eqref{cond8}. 
Here, we consider the extremes of $H(V_{CX})$ in order to determine the limit for $R_{kY}$. When this quantity is minimum then we are able to achieve the maximum bound of $h_{Y}$.\n\nSince Theorems 4 and 5 also have key rates of $h_X$ and $h_Y$ for $X$ and $Y$ respectively, we can use the same methods to prove the converse. \n\n\n\\section{Scheme for multiple sources}\n\nThe two correlated source model presented in Section II is now generalised to multiple correlated sources transmitting syndromes across multiple wiretapped links. This new approach represents a network scenario where there are many sources and one receiver. We consider the information leakage for this model with Slepian-Wolf coding and thereafter consider the Shannon cipher system representation.\n\n\\subsection{Information leakage using Slepian-Wolf coding}\n\nFigure~\\ref{fig:extended} gives a pictorial view of the new extended model for multiple correlated sources.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics [scale = 0.7]{extended.pdf}\n\\caption{Extended generalised model}\n\\label{fig:extended}\n\\end{figure}\n\n\nConsider a situation where there are many sources, which form the set ${\\bf S}$:\n \n\\begin{eqnarray}\n{\\bf S} = \\{S_{1}, S_{2}, \\ldots, S_{n}\\} \\nonumber\n\\end{eqnarray}\nwhere $S_i$ represents the $i$th source ($i = 1,\\ldots, n$) and there are $n$ sources in total. Each source may be correlated with some of the other sources, and all sources take values in a binary alphabet. There is one receiver that is responsible for performing decoding. The syndrome for a source $S_i$ is represented by $T_{S_i}$, which is part of the same alphabet as the sources. \n\nThe entropy of a source is given by a combination of a specific conditional entropy and mutual information. 
In order to present the entropy we first define the following sets:\n\n\n\n\\begin{itemize}\n\\item [-] The set ${\\bf S}$, which contains all sources: ${\\bf S} = \\{S_1, S_2,\\ldots, S_n\\}$. \n\\item [-] The set ${\\bf S}_t$, which contains $t$ unique elements from ${\\bf S}$, where ${\\bf S}_t$ $\\subseteq$ $\\bf{S}$, ${S}_i \\in {\\bf S}_t$, ${\\bf S}_t \\cup {\\bf S}_t^c$ $=$ $\\bf{S}$ and $|{\\bf S}_t|$ $= t$. \n\\end{itemize}\n\n\nHere, $H(S_i)$ is obtained as follows:\n\n\\begin{eqnarray}\nH(S_i) = H(S_i|{\\bf S}_{\\backslash S_i}) + \\displaystyle\\sum_{t=2}^{n} (-1)^{t-1} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}}^{} I({\\bf S}_t|{\\bf S}_t^c)\n\\label{entropy}\n\\end{eqnarray}\n\n\nHere, $n$ is the number of sources, $H(S_i|{\\bf S}_{\\backslash S_i})$ denotes the conditional entropy of the source $S_i$ given all sources in ${\\bf S}$ other than $S_i$, and $I({\\bf S}_t|{\\bf S}_t^c)$ denotes the mutual information between all sources in the subset ${\\bf S}_t$ given the complement of ${\\bf S}_t$.\nIn the same way as for two sources, the generalised probabilities and entropies can be developed. It is then possible to decode the source message for source $S_i$ by receiving all components related to $S_i$. This gives rise to the following inequality for $H(S_i)$ in terms of the sources:\n\\begin{eqnarray}\nH(S_i|{ {\\bf S}_{\\backslash S_i}}) & + & \\displaystyle\\sum_{t=2}^{n} (-1)^{t-1} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}}^{} I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & H(S_i) + \\delta\n\\label{3source_2}\n\\end{eqnarray}\n\nIn this type of model, information from multiple links needs to be gathered in order to determine the transmitted information for one source. Here, the common information between sources is represented by the $I({\\bf S}_t|{\\bf S}_t^c)$ term. The portions of common information sent by each source can be determined upfront; in our case the allocation is arbitrary. 
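The decomposition \eqref{entropy} can be sanity-checked numerically for the smallest non-trivial case. The sketch below (ours, purely illustrative) takes $n = 3$ binary sources $(X, Y, Z)$ with an arbitrary joint pmf and assumes the inner sum runs over the subsets ${\bf S}_t$ that contain $S_i$; for $S_i = X$ the decomposition then reads $H(X) = H(X|Y,Z) + I(X;Y|Z) + I(X;Z|Y) + I(X;Y;Z)$, where $I(X;Y;Z) = I(X;Y) - I(X;Y|Z)$ is the (possibly negative) interaction information.

```python
# Sanity check of the n = 3 entropy decomposition, all quantities in bits.
import math

# A toy joint pmf p(x, y, z) over binary alphabets; any valid pmf works.
p = {(0, 0, 0): 0.20, (0, 0, 1): 0.10, (0, 1, 0): 0.05, (0, 1, 1): 0.15,
     (1, 0, 0): 0.10, (1, 0, 1): 0.10, (1, 1, 0): 0.20, (1, 1, 1): 0.10}

def H(axes):
    """Entropy of the marginal over the given axis indices."""
    marg = {}
    for outcome, pr in p.items():
        key = tuple(outcome[a] for a in axes)
        marg[key] = marg.get(key, 0.0) + pr
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

X, Y, Z = 0, 1, 2
H_X_given_YZ = H([X, Y, Z]) - H([Y, Z])                       # H(X|Y,Z)
I_XY_given_Z = H([X, Z]) + H([Y, Z]) - H([X, Y, Z]) - H([Z])  # I(X;Y|Z)
I_XZ_given_Y = H([X, Y]) + H([Y, Z]) - H([X, Y, Z]) - H([Y])  # I(X;Z|Y)
I_XY = H([X]) + H([Y]) - H([X, Y])                            # I(X;Y)
I_XYZ = I_XY - I_XY_given_Z                                   # I(X;Y;Z)

lhs = H([X])
rhs = H_X_given_YZ + I_XY_given_Z + I_XZ_given_Y + I_XYZ
assert abs(lhs - rhs) < 1e-12
```

Because every term reduces to joint entropies of subsets, the identity holds for any pmf, which makes it a useful regression test when implementing \eqref{entropy} for larger $n$.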
For example, in a three-source model where $X$, $Y$ and $Z$ are the correlated sources, the common information shared between $X$ and the other sources is represented as: $I(X;Y|Z)$ and $I(X;Z|Y)$. Each common information portion is divided such that the sources having access to it are able to produce a portion of it themselves. The common information $I(X;Y|Z)$ is divided into $V_{CX1}$ and $V_{CY1}$ where the former is the common information between $X$ and $Y$, produced by $X$ and the latter is the common information between $X$ and $Y$, produced by $Y$. Similarly, $I(X;Z|Y)$ consists of two common information portions, $V_{CX2}$ and $V_{CZ1}$ produced by $X$ and $Z$ respectively. \n\nAs with the previous model for two correlated sources, since wiretapping is possible there is a need to develop the information leakage for the model. The information leakage for this multiple-source model is indicated in \\eqref{remark1} and \\eqref{remark2}. \n\n\\textit{Remark 1:} The leaked information for a source $S_i$ given the transmitted codewords $T_{S_i}$, is given by:\n\\begin{eqnarray}\nL^{S_i}_{T_{S_i}} = I(S_i ; T_{S_i})\n\\label{remark1}\n\\end{eqnarray}\n\nSince we use the notion that the information leakage is the uncertainty of the source less the conditional entropy of the source given the transmitted information (i.e.\\ $H(S_i) - H(S_i|T_{S_i})$), the proof for \\eqref{remark1} is trivial. Here, we note that the common information is the minimum amount of information leaked. Each source is responsible for transmitting its own private information and there is a possibility that this private information may also be leaked. The maximum leakage for this case is thus the uncertainty of the source itself, $H(S_i)$.\n\nWe also consider the information leakage for a source $S_i$ when another source $S_j$ ($j \\neq i$) has transmitted information. 
This gives rise to Remark 2.\n\\\\\n\\textit{Remark 2:} The leaked information for a source $S_i$ given the transmitted codewords $T_{S_j}$, where $i \\neq j$ is:\n\\begin{eqnarray}\nL^{S_i}_{T_{S_j}} & = & H(S_i) - H(S_i|T_{S_j})\t\\nonumber\n\\\\ & = & H(S_i) - [H(S_i) - I(S_i; T_{S_j})]\t\\nonumber\n\\\\ & = & I(S_i ; T_{S_j})\n\\label{remark2}\n\\end{eqnarray}\n\nThe information leakage for a source is determined based on the information transmitted from any other channel using the common information between them. The private information is not considered as it is transmitted by each source itself and can therefore not be obtained from an alternate channel. Remark 2 therefore gives an indication of the maximum amount of information leaked for source $S_i$, with knowledge of the syndrome $T_{S_j}$. \n\nThese remarks show that the common information can be used to quantify the leaked information. The common information provides information for more than one source and is therefore susceptible to leaking information about more than one source should it be compromised. This subsection gives an indication of the information leakage for the new generalised multiple correlated sources model when a source's syndrome and other syndromes are wiretapped.\n\n\\subsection{Information leakage for Shannon's cipher system}\n\nThis subsection details a novel masking method to minimize the key length and thereafter builds this multiple correlated source model on Shannon's cipher system. \n\nThe new masking method encompasses masking the conditional entropy portion with a mutual information portion. By masking, certain information is hidden and it becomes more difficult to obtain the information that has been masked. Masking can typically be done using random numbers, however we eliminate the need for random numbers that represent keys and rather use a common information to mask with. 
\n\nWe make the following assumptions:\n\\begin{itemize}\n\\item\nThe capacity of each link cannot be exhausted using this method.\n\\item\nA common information is used to mask certain private information and can be used to mask multiple times. Further, private information that needs to be masked always exists in this method.\n\\end{itemize}\nThe allocation of common information for transmission is done on an arbitrary basis. The objective of this subsection is to minimize the key lengths while achieving perfect secrecy. \n\nThe private information for source $i$ is given by $H(S_i|{ {\\bf S}_{\\backslash S_i}})$ according to \\eqref{entropy}, which is called $W_{S_i}$, and the common information associated with source $S_i$ is given by $W_{CS_i}$. First, choose a common information with which to mask. Then we take a part of $W_{S_i}$, i.e. ${W_{S_i}}^{'}$, that has entropy equal to $H(W_{CS_i})$, and mask as follows: \n\\begin{eqnarray}\nW_{S_i}^{'} \\oplus W_{CS_i}\n\\label{masking}\n\\end{eqnarray}\n\nWhen the two sequences are XORed the result is a single sequence that may look different from the originals. We then transmit the masked portion instead of the $W_{S_i}^{'}$ portion when transmitting $W_{S_i}$, thus providing added security. \nThis brings in the interesting factor that there are many possibilities for a specific mutual information to mask conditional entropy portions. For example, when considering three sources as before, it is possible to mask the private information for $X$, $Y$ and $Z$ with the common portion $I(X;Y;Z)$. If $Y$ is secure then this common information can be transmitted along $Y$'s channel, ensuring the information is kept secure. The ability to mask using the common information is a unique and interesting feature of this new model for multiple correlated sources. The underlying principle is that the secure link should transmit more common information after transmitting its private information. 
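A minimal sketch of the masking step \eqref{masking} and its two-key variant \eqref{masking2}, with codeword portions modelled as bit lists over GF(2); the variable names are ours and purely illustrative:

```python
# XOR masking of a private portion with common-information portions.
import secrets

def xor(a, b):
    """Component-wise XOR of two equal-length bit lists (the masking step)."""
    return [x ^ y for x, y in zip(a, b)]

k = 16
private  = [secrets.randbelow(2) for _ in range(k)]  # part of W_{S_i}
common_i = [secrets.randbelow(2) for _ in range(k)]  # W_{CS_i}
common_j = [secrets.randbelow(2) for _ in range(k)]  # W_{CS_j}

# Single masking: transmit private XOR common_i instead of the private part.
masked = xor(private, common_i)
# A receiver that knows common_i recovers the private part exactly.
assert xor(masked, common_i) == private

# Double masking: private XOR common_i XOR common_j.
double_masked = xor(xor(private, common_i), common_j)
# Even if common_j alone is compromised, the result is still masked by
# common_i; the private part is recovered only with both common portions.
assert xor(xor(double_masked, common_j), common_i) == private
```

The second assertion illustrates the point made above: when one of the two common portions is compromised, the transmitted sequence remains protected by the other.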
\n\nWe find that the lower bound for the channel rate when the masking approach is used is given by:\n\\begin{eqnarray}\nR_i^M \\geq H(S_1, \\ldots, S_n) - \\displaystyle\\sum_{t=2}^{n} \\displaystyle\\sum_{\\text{all possible ${\\bf S}_t$}} (t -1) I({\\bf S}_t|{\\bf S}_t^c)\n\\end{eqnarray}\nwhere $R_i^M$ is the $i$th channel rate when masking is used. \n\nThe method works theoretically but may result in some concern practically as there may be a security compromise when common information is sent across non secure links. We see that if the $ W_{CS_i}$ component used for masking has been compromised then the private portion it has masked will also be compromised. A method to overcome this involves using two common information parts for masking. Equation \\eqref{masking} representing the masking would become:\n\\begin{eqnarray}\nW_{S_i}^{'} \\oplus W_{CS_i} \\oplus W_{CS_j}\n\\label{masking2}\n\\end{eqnarray}\n\nwhere $i \\neq j$ and both $W_{CS_i}$ and $W_{CS_j}$ are common information associated with source $S_i$. This way, if only $W_{CS_j}$ is compromised then $W_{S_i}$ is not compromised as it is still protected by $W_{CS_i}$. Here, combinations of common information are used to increase the security. The advantage with \\eqref{masking2} is that keys may be reused because common information may be shared by more than one source. Further, the method will not result in an increase in key length. \n\n\nThe Shannon's cipher system for this multiple source model is now presented in order to determine the rate regions for perfect secrecy. The multiple sources each have their own encoder and there is a universal decoder. 
Each source has an encoder represented by:\n\\begin{eqnarray}\nE_i : \\mathcal{S} \\times I_{M_{ki}} & \\rightarrow & I_{W_{S_i}} \\times I_{W_{CS_i}} \\nonumber \n\\\\ && I_{W_{S_i}} = \\{0, 1, \\ldots, W_{S_i} - 1\\} \\nonumber \n\\\\ && I_{W_{CS_i}} = \\{0, 1, \\ldots, W_{CS_i} - 1\\}\n\\label{ms_xencoder_fcn}\n\\end{eqnarray}\nwhere $I_{W_{S_i}}$ is the alphabet representing the private portion for source $S_i$ and $I_{W_{CS_i}}$ is the alphabet representing the common information for source $S_i$.\nThe decoder at the receiver is defined as:\n\\begin{eqnarray}\nD : (I_{W_{S_i}}, I_{W_{CS_i}}) & \\times & I_{Mk} \\rightarrow \\mathcal{S}\n\\end{eqnarray}\n\nThe encoder and decoder mappings are below:\n\\begin{eqnarray}\nW_i = F_{E_i} (S_i, W_{ki})\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\widehat{S_i} = F_{D_i} (W_i, W_{ki}, W_{\\{{kp}\\}})\n\\end{eqnarray}\n\nwhere $p = 1, \\ldots, n$, $p \\neq i$ and $W_{\\{{kp}\\}}$ represents the set of common information required to find $S_i$, and $\\widehat{S_i}$ is the decoded output. \n\nThe following conditions should be satisfied for the general cases:\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log W_{S_i} \\le R_i +\\epsilon\n\\label{ms_cond1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K}\\log M_{ki} \\le R_{ki} +\\epsilon\n\\label{ms_cond2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\text {Pr} \\{\\widehat{S_i} \\neq S_i\\} \\le \\epsilon\n\\label{ms_cond3}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(S_i|W_i) \\ge h_i - \\epsilon\n\\label{ms_cond7}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} H(S_j|W_i) \\ge h_j - \\epsilon\n\\label{ms_cond8}\n\\end{eqnarray}\n\nwhere $R_i$ is the rate of source $S_i$'s channel and $R_{ki}$ is the key rate of $S_i$. The security levels for source $i$ and any other source $j$ are measured by the uncertainties $h_{i}$ and $h_j$ respectively. 
\n\\\\\\\\\nThe general cases considered are:\n\\\\ \\textit{Case 1:} When $T_{S_i}$ is leaked and $S_i$ needs to be kept secret.\n\\\\ \\textit{Case 2:} When $T_{S_i}$ is leaked and $S_i$ and\/or $S_j$ needs to be kept secret.\n\\\\\\\\\nThe admissible rate region for each case is defined as follows:\n\\\\ \\textit{Definition 1a:} ($R_i$, $R_{ki}$, $h_{i}$) is admissible for case 1 if there exists a code ($F_{E_{i}}$, $F_{D}$) such that \\eqref{ms_cond1} - \\eqref{ms_cond7} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 1b:} ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_{j}$) is admissible for case 2 if there exists a code ($F_{E_{i}}$, $F_{D}$) such that \\eqref{ms_cond1} - \\eqref{ms_cond3} and \\eqref{ms_cond8} hold for any $\\epsilon \\rightarrow 0$ and sufficiently large $K$.\n\\\\ \\textit{Definition 2:} The admissible rate regions are defined as:\n\n\\begin{eqnarray}\n\\mathcal{R}(h_{i}) = \\{(R_i, R_{ki}):\t\t\t\\nonumber\n\\\\(R_i, R_{ki}, h_{i} ) \\text{ is admissible for case 1} \\}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\mathcal{R}(h_{i}, h_{j}) = \\{(R_i, R_{ki}, R_j, R_{kj}):\t\t\t\\nonumber\n\\\\(R_i, R_{ki}, R_j, R_{kj}, h_{j} ) \\text{ is admissible for case 2} \\}\n\\end{eqnarray}\n\nThe theorems developed for these regions follow:\n\n\\textit{Theorem 6:} For $0 \\le h_{i} \\le I(S_i;S_n|S_n^c)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_1(h_{i}) = \\{(R_i, R_{ki}): \t\t\\nonumber\n\\\\ && R_i \\geq H(S_i), \t\t\t\t\\nonumber\n\\\\ && R_{k_i} \\geq I({\\bf S}_t|{\\bf S}_t^c) \\}\t\t\t\n\\label{theorem6}\n\\end{eqnarray}\n\n\\textit{Theorem 7:} For $0 \\le h_{j} \\le H(S_i, S_j)$,\n\\begin{eqnarray}\n&& \\mathcal{R}_2(h_{i}, h_{j}) = \\{(R_i, R_{ki}, R_j, R_{kj}): \t\t\\nonumber\n\\\\ && R_i \\geq H(S_i, S_j), R_j \\geq H(S_i, S_j), \t\t\t\t\t\t\t\\nonumber\n\\\\ && R_{ki} \\geq I(S_i;S_j) \\text{ and } R_{kj} \\geq I(S_i;S_j) \\}\t\t\t\n\\label{theorem7}\n\\end{eqnarray}\n\nThe proofs for these theorems follow. 
The source information components are first identified. Assume the codewords for sources $i$ and $j$ are given by $W_i$ and $W_j$ respectively. \n\n\\begin{proof}[Theorem 6 proof]\nHere, $R_i \\geq H(S_i)$, $R_{ki} \\geq I({\\bf S}_t|{\\bf S}_t^c)$. For the case where $h_i > I({\\bf S}_t|{\\bf S}_t^c)$, the definitions for $W_{CS_i}$, $W_i$ and $W_{ki}$ follow:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem6_proof_41}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem6_proof_43}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem6_proof_44}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{i} & = & \\frac{1}{K} (\\log W_{S_i} + \\log W_{CS_i}) \\nonumber\n\\\\ & \\le & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash {S_i}}) + \\frac{1}{K} \\log W_{CS_i} + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash {S_i}}) + I({\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \\nonumber \n\\\\ & = & \\frac{1}{K} H(S_i) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_i + \\epsilon_0\n\\label{theorem6_proof_45.0}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem6_proof_45.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_i|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(S_i) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & h_i - \\epsilon_0^{'}\t\t\n\\label{theorem6_proof_47}\n\\end{eqnarray}\n\nFrom \\eqref{theorem6_proof_45.0} - \\eqref{theorem6_proof_47}, ($R_i$, $R_{ki}$, $h_i$) is admissible for $h_i > I({\\bf S}_t|{\\bf S}_t^c)$. 
We now consider the case where $h_i \\le I({\\bf S}_t|{\\bf S}_t^c)$ and define $W_{CS_i}$, $W_i$ and $W_{ki}$ as follows:\n\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem6_proof_49}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{i} = (W_{Pi}, W_{kCi})\n\\label{theorem6_proof_53}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem6_proof_54}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem6_proof_55}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_i|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & \\frac{1}{K} H(S_i|W_{Ci}) + I({\\bf S}_t|{\\bf S}_t^c) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i) - \\epsilon_0^{'} \\nonumber\n\\\\ & = & h_i - \\epsilon_0^{'}\n\\label{theorem6_proof_57}\n\\end{eqnarray}\n\nFrom \\eqref{theorem6_proof_55} - \\eqref{theorem6_proof_57} it is seen that ($R_i$, $R_{ki}$, $h_i$) is admissible for $h_i \\le I({\\bf S}_t|{\\bf S}_t^c)$.\n\\end{proof}\n\nTheorem 7 is proven in a similar manner. \n\n\\begin{proof}[Theorem 7 proof]\nHere, $R_i \\geq H(S_i, S_j)$, $R_j \\geq H(S_i, S_j)$, $R_{ki} \\geq I(S_i;S_j)$ and $R_{kj} \\geq I(S_i;S_j)$. 
For the case where $h_j \\le H(S_i, S_j)$, the definitions for $W_{CS_i}$, $M_{Cj}$, $W_i$, $W_{ki}$, $W_j$ and $W_{kj}$ follow:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.9}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nM_{Cj} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.10}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem7_proof_41}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem7_proof_42}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_j = (W_{Pj}, W_{kCj})\n\\label{theorem7_proof_43}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kj} = W_{Cj}\n\\label{theorem7_proof_44}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{i} & = & \\frac{1}{K} (\\log W_{S_i} + \\log W_{CS_i}) \\nonumber\n\\\\ & \\le & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash S_i}) + \\frac{1}{K} \\log W_{CS_i} + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_i|{\\bf S}_{\\backslash S_i}) + I({\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_i) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_i + \\epsilon_0\n\\label{theorem7_proof_45.0}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n\\frac{1}{K} \\log M_{j} & = & \\frac{1}{K} (\\log M_{Pj} + \\log M_{Cj}) \\nonumber\n\\\\ & \\le & \\frac{1}{K} H(S_j|{\\bf S}_{\\backslash S_j}) + \\frac{1}{K} \\log M_{Cj} + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_j|{\\bf S}_{\\backslash S_j}) + I({\\bf S}_t|{\\bf S}_t^c) + \\epsilon_0 \t\\nonumber\n\\\\ & = & \\frac{1}{K} H(S_j) + \\epsilon_0\t\t\t\\nonumber\n\\\\ & \\le & R_j + \\epsilon_0\n\\label{theorem7_proof_45.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_46}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{kj} 
\t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log M_{Cj}\t\\nonumber\n\\\\ & \\le & I(S_i; S_j) + \\epsilon_0 \\nonumber\n\\\\ & \\le & R_{kj} + \\epsilon_0\n\\label{theorem7_proof_46.1}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_j|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\geq & H(S_j) - H(S_i) - \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i, S_j) - H(S_i) - \\epsilon_0^{'} \\nonumber\n\\\\ & \\geq & h_j - H(S_i) - \\epsilon_0^{'} \\nonumber\n\\\\ & = & h_j - h_i - \\epsilon_0^{'}\t\t\n\\label{theorem7_proof_47}\n\\end{eqnarray}\n\nFrom \\eqref{theorem7_proof_45.0} - \\eqref{theorem7_proof_47}, ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_j$) is admissible for $h_j \\le H(S_i, S_j)$. We now consider the case where $h_j > H(S_i,S_j)$, and define $W_{CS_i}$, $W_i$ and $W_{ki}$ as follows:\n\n\\begin{eqnarray}\nW_{CS_i} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.91}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nM_{Cj} = 2^{K I({\\bf S}_t|{\\bf S}_t^c)}\n\\label{theorem7_proof_40.11}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_i = (W_{Pi}, W_{kCi})\n\\label{theorem7_proof_48}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{ki} = W_{Ci}\n\\label{theorem7_proof_49}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_j = (W_{Pj}, W_{kCj})\n\\label{theorem7_proof_50}\n\\end{eqnarray}\n\n\\begin{eqnarray}\nW_{kj} = W_{Cj}\n\\label{theorem7_proof_51}\n\\end{eqnarray}\n\nThe keys and uncertainties are calculated as follows:\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{ki} \t\t\\nonumber\n\\\\ & = & \\frac{1}{K} \\log W_{CS_i}\t\\nonumber\n\\\\ & = & I({\\bf S}_t|{\\bf S}_t^c) \\nonumber\n\\\\ & \\le & R_{ki} + \\epsilon_0\n\\label{theorem7_proof_52}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} \\log M_{kj} \t\t\\nonumber\n\\\\ & \\le & I(S_i; S_j) + \\epsilon_0 \\nonumber\n\\\\ & \\le & R_{kj} + \\epsilon_0\n\\label{theorem7_proof_53}\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& \\frac{1}{K} H(S_j|W_{Pi}, W_{Ci}) \t\t\\nonumber\n\\\\ & \\le & H(S_j) - H(S_i) + \\epsilon_0^{'} \t\t\\nonumber\n\\\\ & = & H(S_i, S_j) - H(S_i) + 
\\epsilon_0^{'} \\nonumber\n\\\\ & \\le & h_j - H(S_i) + \\epsilon_0^{'} \\nonumber\n\\\\ & = & h_j - h_i + \\epsilon_0^{'}\t\t\n\\label{theorem7_proof_54}\n\\end{eqnarray}\n\nFrom \\eqref{theorem7_proof_52} - \\eqref{theorem7_proof_54} it is seen that ($R_i$, $R_{ki}$, $R_j$, $R_{kj}$, $h_j$) is admissible for $h_j > H(S_i,S_j)$.\n\\end{proof}\n\nThese theorems demonstrate the rates necessary for perfect secrecy. The goal of the Shannon cipher aspect was to reduce the key lengths. The masking method explained in this section is able to use common information as keys and therefore minimise the key rates for the general cases. \n\nThe information leakage described in the Slepian-Wolf aspect indicates the common information that should be given added protection to ensure that even less information will be leaked. \nThe new extended model presented here also incorporates a multiple correlated sources approach using Shannon's cipher system, which is more practical than looking at two sources. \n\n\n\\section{Comparison to other models}\nThe two correlated sources model across a channel with an eavesdropper is a generalisation of Yamamoto's~\\cite{shannon1_yamamoto} model. If we were to combine the links into one link, we would have the same situation as in Yamamoto's model~\\cite{shannon1_yamamoto}. From Section VI it is evident that the model can be implemented for multiple sources with Shannon's cipher system. Due to the unique scenario incorporating multiple sources and multiple links, the new model is more secure, as private information and common information from other links are required for decoding.\n\nFurther, information at the sources may be more secure in the new model because if one source is compromised then only one source's information is known. In Yamamoto's method~\\cite{shannon1_yamamoto}, both sources' information is contained at one station, and when that station is compromised, information about both sources is known. 
The information transmitted along the channels (i.e. the syndromes) need not have a fixed length, unlike in Yamamoto's method~\\cite{shannon1_yamamoto}. Here, the syndrome length may vary depending on the encoding procedure and the nature of the Slepian-Wolf codes, which is another feature of this generalised model. \n\nThe generalised model also has the advantage that varying amounts of the common information $V_{CX}$ and $V_{CY}$ (in the case of two sources) may be transmitted depending on the security of the transmission link and\/or sources. For example, for two correlated sources, if $Y$'s channel is not secure, we can specify that more of the common information is transmitted from $X$. In this way we are able to make better use of the transmission links' security. In Yamamoto's method~\\cite{shannon1_yamamoto}, the common information was transmitted as one portion, $V_{C}$.\n\nIn this model, information from more than one link is required in order to determine the information for one source. This gives rise to added security, as even if one link is wiretapped it is not possible to determine the contents of a particular source. This is attributed to the fact that this model has separate common information portions, which is different to Yamamoto's model. \n\nAnother major feature is that private information can be hidden using common information. Here, common information produced by a source may be used to mask its private codeword, thus saving on key length. The key allocation is specified by the general rules presented in Section VI. The multiple correlated sources model presents a combination masking scheme where more than one piece of common information is used to protect a single piece of private information, which is a practical approach. This is an added feature developed in order to protect the system. This approach has not been considered in the other models mentioned in this section. 
\n \nThe work by Yang \\textit{et al.}~\\cite{feedback_yang} uses the concept of side information to assist the decoder in determining the transmitted message. The side information can be regarded as a source, and that work relates to this one when the side information is treated as correlated information. Similar work with side information that incorporates wiretappers, by Villard and Piantanida \\cite{pablo_secure_multiterminal} and Villard \\textit{et al.} \\cite{pablo_secure_transmission_receivers}, may be generalised in the sense that side information can be considered to be a source; however, the new model is distinguishable in that syndromes, which are independent of one another, are transmitted across an error-free channel. Further, to the author's knowledge, Shannon's cipher system has not been incorporated into the models by Villard and Piantanida \\cite{pablo_secure_multiterminal} and Villard \\textit{et al.} \\cite{pablo_secure_transmission_receivers}. \n\n\n\n\\section{Future work}\nThis work has room for expansion and future work. It would be interesting to consider the case where the channel capacity is constrained (according to the assumptions in Section VI, the channel capacity is sufficient at all times). In the new model, each channel is either protected by a key or not; a more realistic scenario in which the channels have varying security levels is an avenue for future work. Another aspect for expansion is to investigate the allocation of common information as keys so as to minimize the number of additional keys when links have varying security levels and limited capacity. \n\n\n\n\\section{Conclusion}\nThe information leakage for two correlated sources across a channel with an eavesdropper was initially considered. Knowing which components contribute most to the information leakage aids in keeping the system secure, as these components can be given added protection. 
The information leakage for the two correlated source model was quantified and proven. Shannon's cipher system was also incorporated for this model, and the channel and key rates needed to achieve perfect secrecy have been provided. The two correlated sources model has been extended to the network scenario, where multiple sources transmit information across multiple links. The information leakage for this extended model is detailed. The channel and key rates are also considered for the multiple correlated source model when Shannon's cipher system is implemented. A masking method is further presented to minimize key lengths, and a combination masking method is presented to address its practical shortcoming.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe $n$-vector model, introduced by Stanley in 1968 \\cite{S68}, is described by the Hamiltonian $${\\mathcal H}(d,n) = -J\\sum_{\\langle i,j \\rangle} {\\bf s_i }\\cdot{\\bf s_j} ,$$\nwhere $d$ denotes the dimensionality of the lattice, and ${\\bf s_i}$\nis an $n$-dimensional vector of magnitude $\\sqrt{n}$. When $n=1$ this Hamiltonian\ndescribes the Ising model, when $n=2$ it describes the classical\nXY model, and in the limit $n \\to 0,$ one recovers the self-avoiding walk (SAW) model, as first pointed out by de Gennes \\cite{dG72}. The $n$-vector model has been shown to be equivalent to a loop model with a weight $n$ assigned to each closed loop \\cite{DMNS81} and weight $x$ assigned to each edge of a loop. The two-dimensional O($n$) model on a honeycomb lattice, which is the focus of this paper, is a particular case of this equivalence. The partition function of the loop model can be written as\n\\begin{equation}\nZ(x)=\\sum_{G}x^{l(G)}n^{c(G)},\n\\end{equation}\nwhere $G$ is a configuration of loops, $l(G)$ is the number of loop segments and $c(G)$ is the number of closed loops. 
The parameter $x$ is defined by the high-temperature expansion of the $O(n)$ model partition function and is related to the coupling $J$, the temperature $T$ and Boltzmann's constant $k$ by \n\\begin{equation}\n{\\rm e}^{ \\frac{J}{kT} {\\bf s_i }\\cdot{\\bf s_j} } \\approx 1+ x{\\bf s_i }\\cdot{\\bf s_j}.\n\\end{equation}\n\nIn 1982 Nienhuis \\cite{N82} showed that, for $n \\in [-2,2],$ the model on the honeycomb lattice could be mapped onto a solid-on-solid model, from which he was able to derive the critical points and critical exponents, subject to some plausible assumptions. These results agreed with the known exponents and critical point for the Ising model, and predicted exact values for those models corresponding to other values of the spin dimensionality $n.$ In particular, for $n=0$ the critical point for the honeycomb lattice SAW model was predicted to be $x_{\\rm c}=1\/\\sqrt{2+\\sqrt{2}},$ a result finally proved 28 years later by Duminil-Copin and Smirnov \\cite{DC-S10}. \n\nThe proof of Duminil-Copin and Smirnov involves the use of a non-local \\emph{parafermionic observable} $F(z)$ where $z$ is the (complex) coordinate of the plane. This function can be thought of as a complex function with the ``parafermionic'' property $F({\\rm e}^{2\\pi{\\rm i}} z) = {\\rm e}^{-2\\pi{\\rm i} \\sigma} F(z)$ where the real-valued parameter $\\sigma$ is called the $\\emph{spin}$. For special values of $\\sigma$, this observable satisfies a discrete analogue of (one half of) the Cauchy-Riemann equations. This \\emph{discrete holomorphic} or \\emph{preholomorphic} property allowed Smirnov and Duminil-Copin to derive an important identity for self-avoiding walks on the honeycomb lattice and, consequently, the Nienhuis prediction for $x_{\\rm c}$.\n\nSmirnov~\\cite{Smirnov10} has also derived such an identity for the general honeycomb $O(n)$ model with $n \\in [-2,2]$. 
This identity provides an alternative way of predicting the value of the critical point $x_{\\rm c}(n)=1\/\\sqrt{2+\\sqrt{2-n}}$ as conjectured by Nienhuis for values of $n$ other than $n=0$.\n\nThis paper contains two new results. We first present an off-critical deformation of the discrete Cauchy-Riemann equations, by relaxing the preholomorphicity condition, which allows us to consider critical exponents near criticality. Indeed, this deformation gives rise to an identity between bulk and boundary generating functions, and we utilize this identity in Section~\\ref{ssec:wedge} to determine, based on some assumptions, the asymptotic form of the winding angle distribution function for SAWs on the half-plane and in a wedge in terms of boundary critical exponents. It is important to note that up to this point the only assumptions we make are the existence of the critical exponents and the value of the critical point. We will not rely on Coulomb gas techniques or conformal invariance. We find perfect agreement with the conjectured winding angle distribution function on the cylinder predicted by Duplantier and Saleur \\cite{DS88} in terms of bulk critical exponents. Finally we conjecture the values of the wedge critical exponents as a function of the wedge angle for $n\\in [-2, 2)$.\n\n\\section{Off-critical identity for the honeycomb O($n$) model}\n\\subsection{Smirnov's observable on the honeycomb lattice}\nWe briefly review an important result of Smirnov for self-avoiding walks on the honeycomb lattice. \n\nFirstly, a $\\emph{mid-edge}$ is defined to be a point equidistant from two adjacent vertices on a lattice. A \\emph{domain} $\\Omega$ is a simply connected collection of mid-edges on the half-plane honeycomb lattice. The set of vertices of the half-plane honeycomb lattice is denoted $V(\\Omega)$. Those mid-edges of $\\Omega$ which are adjacent to only one vertex in $V(\\Omega)$ form $\\partial\\Omega$. 
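The conjectured critical point $x_{\rm c}(n)=1/\sqrt{2+\sqrt{2-n}}$ quoted above can be checked numerically against its known special cases; the sketch below assumes only that formula.

```python
from math import sqrt

def x_c(n: float) -> float:
    """Conjectured dilute-branch critical point of the honeycomb O(n) loop model."""
    return 1.0 / sqrt(2.0 + sqrt(2.0 - n))

# n -> 0: self-avoiding walks; reproduces 1/sqrt(2 + sqrt(2)),
# the value proved by Duminil-Copin and Smirnov.
assert abs(x_c(0.0) - 1.0 / sqrt(2.0 + sqrt(2.0))) < 1e-12

# n = 1: reduces to 1/sqrt(3), the known honeycomb-lattice Ising
# high-temperature value tanh(J/kT_c) = 1/sqrt(3).
assert abs(x_c(1.0) - 1.0 / sqrt(3.0)) < 1e-12

# n = 2: the formula gives x_c = 1/sqrt(2).
assert abs(x_c(2.0) - 1.0 / sqrt(2.0)) < 1e-12
```

Note that $x_{\rm c}(n)$ increases with $n$ on $[-2,2]$, consistent with loops becoming entropically cheaper as the loop weight grows.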
\n\\begin{figure}\n\\centering\n\\begin{picture}(150,180)\n\\put(0,180){\\includegraphics[scale=0.5, angle=270]{exampleF}}\n\\put(-10,85){$a$}\n\\put(15,85){$v_a$}\n\\put(48,122){$z$}\n\\put(48,143){$v$}\n\\end{picture}\n\\caption{A configuration $\\gamma$ on a finite domain. Point $a$ is a boundary mid-edge, point $z$ is another mid-edge, and $v_a$ and $v$ are corresponding vertices. The contribution of $\\gamma$ to $F(z)$ is ${\\rm e}^{-2{\\rm i}\\sigma\\pi\/3}x^{30} n$.\n} \\label{fig:exampleF}\n\\end{figure}\nLet $\\gamma$ be a configuration on a domain $\\Omega$ comprising a single self-avoiding walk and a number (possibly zero) of closed loops. We denote by $\\ell(\\gamma)$ the number of vertices occupied by $\\gamma$ and $c(\\gamma)$ the number of closed loops. Furthermore let $W(\\gamma)$ be the winding angle of the self-avoiding walk component. Define the following observable.\n\\begin{definition}[Preholomorphic observable]\n\\label{def:Fdef}\n\\mbox{}\\newline\n\\begin{itemize}\n\\item For $a\\in\\partial\\Omega, z\\in\\Omega$, set\n\\begin{equation}\nF(\\Omega,z;x,n,\\sigma):=F(z) = \\sum_{\\gamma: a\\rightarrow z} {\\rm e}^{-{\\rm i} \\sigma W(\\gamma)} x^{\\ell(\\gamma)} n^{c(\\gamma)},\n\\label{eq:Fmidedge}\n\\end{equation}\nwhere the sum is over all configurations $\\gamma$ for which the SAW component runs from the mid-edge $a$ to a mid-edge $z$ (we say that $\\gamma$ ends at $z$).\n\\item Let $F(p;v)$ only include configurations where there is a walk terminating at the mid-edge $p$ adjacent to the vertex $v$ and the other two mid-edges adjacent to $v$ are not occupied by a loop segment. 
For $v_a, v\\in V(\\Omega)$ and $p, q$ and $r$ mid-edges adjacent to $v$, set\n\\begin{align}\n\\overline{F}(V(\\Omega),v;x,n,\\sigma):=\\overline{F}(v) = (p-v)F(p;v)+(q-v)F(q;v)+(r-v)F(r;v).\n\\label{eq:Fvertex}\n\\end{align}\nSince this is a function involving walks that terminate at mid-edges adjacent to the vertex $v$, we consider it as a function defined at the vertices of the lattice.\n\\end{itemize}\nSee Fig.~\\ref{fig:exampleF} for an example.\n\\end{definition}\nSmirnov \\cite{Smirnov10} proves the following: \n\\begin{lemma}[Smirnov]\n\\label{lem:Slemma4}\nFor $n\\in[-2,2]$, set $n=2\\cos\\phi$ with $\\phi\\in[0,\\pi]$. Then for\n\\begin{align}\n\\sigma &= \\frac{\\pi-3\\phi}{4\\pi},\\qquad x^{-1}=x_{\\rm c}^{-1}=2\\cos\\left(\\frac{\\pi+\\phi}{4}\\right) = \\sqrt{2-\\sqrt{2-n}},\\qquad\\text{or}\n\\label{eq:Slemma_dense}\\\\\n\\sigma &= \\frac{\\pi+3\\phi}{4\\pi},\\qquad x^{-1}=x_{\\rm c}^{-1}=2\\cos\\left(\\frac{\\pi-\\phi}{4}\\right) = \\sqrt{2+\\sqrt{2-n}},\n\\label{eq:Slemma_dilute}\n\\end{align}\nthe observable $F$ satisfies the following relation for every vertex $v\\in V(\\Omega)$:\n\\begin{equation} \\label{eqn:localidentity}\n(p-v)F(p) + (q-v)F(q) + (r-v)F(r)=0,\n\\end{equation}\nwhere $p,q,r$ are the mid-edges adjacent to $v$.\n\\end{lemma}\n\nThe first equation in Lemma~\\ref{lem:Slemma4} corresponds to the larger of the two critical values of the step weight $x$ and thus describes the so-called dense regime, as configurations with many loops are favoured. The second equation corresponds to the line of critical points separating the dense and dilute phases \\cite{N82}. Eqn. 
(\\ref{eqn:localidentity}) can be interpreted as the vanishing of a discrete contour integral, hence the name preholomorphic observable for $F(z)$.\n\\begin{figure}\n\\centering\n\\begin{picture}(250,180)\n\\put(0,180){\\includegraphics[scale=1.1, angle=270]{identity_groups}}\n\\put(48,8){$p$}\n\\put(65, 20){$r$}\n\\put(58, 0){$q$}\n\\end{picture}\n\\caption{The two types of configurations which end at mid-edges $p,q,r$ adjacent to vertex $v$. The first type, on the left, involves configurations which visit all three mid-edges. On the right are those configurations where the self-avoiding walk visits at most two mid-edges.} \n\\label{fig:identity_groups}\n\\end{figure}\n\\begin{proof}\nConsider a vertex $v$ adjacent to a mid-edge $p$. The two other adjacent mid-edges, which we refer to as $q$ and $r$, are labelled as shown in Fig.~\\ref{fig:identity_groups}. For a self-avoiding walk entering the vertex $v$ from the mid-edge $p$ and terminating at either $p, q$ or $r$ there are two disjoint sets of configurations to consider, each corresponding to a different external connectivity of the remaining mid-edges $q$ and $r$. These are also shown in Fig.~\\ref{fig:identity_groups}. Since the two sets of configurations are disjoint we can consider the identity (\\ref{eqn:localidentity}) for each case separately.\nIn the following, we define $\\lambda={\\rm e}^{-{\\rm i}\\sigma\\pi\/3}$ (the weight accrued by a walk for each left turn) and $j={\\rm e}^{2{\\rm i}\\pi\/3}$ (the value of $p-v$ when mid-edge $p$ is to the north-west of its adjacent vertex $v$).\n\\begin{enumerate}\n\\item\nIn the first case, we consider all configurations where mid-edges $p$ and $q$ are connected. There are three ways for this to occur: two with the self-avoiding walk visiting all three sites, and one with a closed loop running through $v$. 
Furthermore, we define $F_L(p;v)$ to be the contribution to $F(p)$ involving only configurations where the walk ends at the point $p$, adjacent to the vertex $v$, and where the two other mid-edges adjacent to $v$ are occupied by a closed loop. Requiring \\eqref{eqn:localidentity} to hold then implies\n\\begin{eqnarray}\n(p-v)F_L(p;v)+(q-v)\\frac{1}{n}\\bar{\\lambda}^4F_L(p;v)+(r-v)\\frac{1}{n}\\lambda^4F_L(p;v)=0.\n\\label{eqn:localidentity2}\n\\end{eqnarray}\nThe factor of $\\frac{1}{n}$ arises from the absence of a closed loop and the complex phase factors arise from the additional winding: the loop makes an additional four left turns to arrive at $q$ and four right turns to arrive at $r$. Substituting these into \\eqref{eqn:localidentity2} we find\n\\begin{equation}\n\\frac{1}{n} (p-v)F_L(p;v)(-n-\\bar{\\lambda}^4 j- \\lambda^4\\bar{j} )=0, \\nonumber\n\\end{equation}\nwhere we have used that $q-v=j(p-v)$ and $r-v=\\bar{j}(p-v)$. Since the choice of $v$ and $p$ was arbitrary this implies\n\\begin{equation}\nn+\\lambda^4 \\bar{j} +\\bar{\\lambda}^{4}j=0. \\nonumber\n\\end{equation}\nFinally, using the parameterisation of $n$ in terms of $\\phi$ and solving for $\\sigma$ we obtain\n\\begin{align}\n\\sigma &= \\frac{\\pi-3\\phi}{4\\pi} \\qquad \\text{for } \\lambda^4 \\bar{j} = -{\\rm e}^{{\\rm i}\\phi},\n\\label{eq:sigmadense}\\\\\n\\sigma &= \\frac{\\pi+3\\phi}{4\\pi} \\qquad \\text{for } \\lambda^4 \\bar{j} = -{\\rm e}^{-{\\rm i}\\phi},\n\\label{eq:sigmadilute}\n\\end{align}\n\\item\nIn the second case only one or two mid-edges are occupied in the configuration and mid-edges $q$ and $r$ are not connected.\nRecalling the definition of $F(p; v)$ in Definition \\ref{def:Fdef} and Eqn. 
\\eqref{eqn:localidentity} we have\n\\begin{equation}\n(p-v)F(p; v)+(q-v)x\\bar{\\lambda} F(p; v)+(r-v)x\\lambda F(p; v)=0,\n\\end{equation}\nwhich simplifies to\n\\begin{equation}\nF(p; v)(-1-x_{\\rm c}\\bar{\\lambda}j-x_{\\rm c}\\lambda\\bar{j})=0.\n\\end{equation}\nAgain, since this equation holds for arbitrary $v$ we obtain\n\\begin{equation}\n1+x_{\\rm c}\\lambda\\bar{j}+x_{\\rm c}\\bar{\\lambda}j=0,\n\\end{equation}\nwhich leads to\n\\begin{equation}\nx_{\\rm c}^{-1}=2\\cos\\left(\\frac\\pi3(\\sigma-1)\\right).\n\\end{equation}\n\\end{enumerate}\nThe two possible values of $\\sigma$ give rise to the corresponding two values for $x_{\\rm c}$.\n\\end{proof}\n\n\\subsection{Off-critical deformation}\nFirst we evaluate the discrete divergence of the second set of configurations in Fig.~\\ref{fig:exampleF} for general $x$, below the critical value. This gives\n\\begin{lemma}[Massive preholomorphicity identity]\n\\label{lem:massiveCR}\nFor a given vertex $v$ with mid-edges $p$, $q$ and $r,$ and $x$ below the critical value $x_{\\rm c}$, the parafermionic observable $F(z)$ satisfies\n\\begin{align}\n\\label{eqn:massiveCR}\n(p-v)F(p)+(q-v)F(q)+(r-v)F(r)=(1-\\frac{x}{x_{\\rm c}}) \\overline{F}(v),\n\\end{align}\nwhere $F(z)$ and $\\overline{F}(v)$ are defined in Definition \\ref{def:Fdef}.\n\\end{lemma}\nWe use the term massive preholomorphic as \\eqref{eqn:massiveCR} is of a similar form to that described in \\cite{MS09} and \\cite{Smirnov10}.\n\\begin{proof}\nSimilar to Lemma~\\ref{lem:Slemma4} the proof splits into two parts. The first part, concerning cancellations of contributions coming from walks depicted in the left-hand side of Fig. \\ref{fig:identity_groups}, is completely analogous, and as before fixes the value of $\\sigma$. The difference is that now we relax the requirement that the contribution from the second set of configurations (shown on the right in Fig.~\\ref{fig:identity_groups}) to the discrete contour integral vanishes. 
Consequently, $x$ is no longer fixed to the critical value.\n\nConsider a vertex $v$ with mid-edges labelled $p$, $q$ and $r$ in a counter-clockwise fashion. There are three disjoint sets of configurations, depending on which of the three mid-edges $p$, $q$ or $r$ the walk enters from. These are shown in Fig. \\ref{fig:pqrvertex}. Recall that we denote by $F(p;v)$ the contributions to $F(p)$ that only include configurations where there is a walk terminating at the mid-edge $p$ adjacent to the vertex $v$ and where the two other mid-edges adjacent to $v$ are unoccupied. The contribution to the left-hand side of \\eqref{eqn:massiveCR} from walks entering the vertex from $p$ is the sum of three terms\n\\begin{eqnarray}\n\\label{eqn:venterp}\n&&(p-v)F(p;v)+(q-v)x\\,{\\rm e}^{\\pi\\sigma{\\rm i}\/3}F(p;v)+(r-v)x\\,{\\rm e}^{-\\pi\\sigma{\\rm i}\/3}F(p;v).\n\\end{eqnarray}\nThe first term is simply from walks that enter and terminate at $p$. The second term is from those walks that enter from $p$, make a right turn and terminate at $q$. The final term is from walks that enter at $p$ and make a left turn to terminate at $r$. The last two terms acquire an additional weight $x$ from the extra step and a phase factor from the turn. We can simplify \\eqref{eqn:venterp} to obtain\n\\begin{eqnarray*}\n&=& (p-v)F(p;v)+(p-v)xj\\bar{\\lambda}F(p;v)+(p-v)x\\lambda \\bar{j}F(p;v) \\\\\n&=& (p-v)F(p;v)(1-xj\\bar{\\lambda} -x\\bar{j}\\lambda)\\\\\n&=& (p-v)F(p;v)(1-\\frac{x}{x_{\\rm c}}),\n\\end{eqnarray*}\nwhere in the first line we have used that \n\\[\nq-v=j(p-v),\\qquad r-v=\\bar{j}(p-v).\n\\]\nFor walks entering from mid-edges $q$ and $r$ similar calculations give contributions\n\\[\n(q-v)F(q;v)(1-\\frac{x}{x_{\\rm c}}) \\text{ and }\n(r-v)F(r;v)(1-\\frac{x}{x_{\\rm c}}).\n\\]\nAdding the three contributions together and using Definition~\\ref{def:Fdef} gives the right-hand side of Eqn. \\eqref{eqn:massiveCR}. 
\n\\end{proof}\nUsing the above lemma we can now derive the following relationship between generating functions. \\\\\n\\begin{lemma}[Off-critical generating function identity]\n\\label{lem:offcriticalDCS}\n\\begin{align}\n\\sum_{\\gamma : a \\to z\\in\\partial \\Omega\\backslash\\{a\\}} e^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} + (1- x\/x_c)\\sum_{\\gamma : a \\to z\\in\\Omega\\backslash \\partial \\Omega} e^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)} = C_{\\Omega}(x),\n\\end{align}\nwhere\n\\begin{equation}\nC_{\\Omega}(x) = \\sum_ {\\gamma : a \\to a} x^{|\\gamma|}n^{c(\\gamma)},\n\\end{equation}\nis the usual generating function of the honeycomb lattice O($n$) model, i.e. closed loops without the SAW component, and $\\tilde{\\sigma} = 1-\\sigma$.\n\\end{lemma}\n\\begin{figure}\n\\centering\n\\begin{picture}(400,100)\n\\put(85,0){\\includegraphics[scale=0.75, angle=0]{pqrvertex2.pdf}}\n\\put(122,58){$p$}\n\\put(207,58){$p$}\n\\put(290,57){$p$}\n\\put(110,38){$q$}\n\\put(194,38){$q$}\n\\put(276,38){$q$}\n\\put(135,37){$r$}\n\\put(219,37){$r$}\n\\put(303,37){$r$}\n\\end{picture}\n\\caption{The three possible ways for a walk to enter a given vertex via each of the three mid-edges, $p$, $q$ and $r$. The discrete divergence is evaluated for all three cases in order to derive the off-critical, or `massive' preholomorphicity condition.}\n\\label{fig:pqrvertex}\n\\end{figure}\n\\begin{proof}\nWe begin by summing Eqn. \\eqref{eqn:massiveCR} over all the vertices of the lattice $\\Omega$. The contribution to the left-hand side of \\eqref{eqn:massiveCR} from those mid-edges that are in the bulk cancels, since each bulk mid-edge is summed over twice but with opposite signs. 
This leaves only the boundary mid-edges contributing to the left-hand side, which can be written as\n\begin{equation}\n\sum_{\gamma : a \to z \in\partial \Omega} {\rm e}^{ -{\rm i} \sigma W(\gamma)} x^{|\gamma|} n^{c(\gamma)}{\rm e}^{{\rm i}\phi(\gamma)}, \nonumber\n\end{equation}\nwhere ${\rm e}^{{\rm i}\phi(\gamma)}$ is the complex number that describes the direction from the boundary vertex to the boundary mid-edge. It is easy to check that this equals ${\rm e}^{{\rm i} W(\gamma)}$ for all walks terminating on boundary mid-edges other than the starting mid-edge $a$, and is $-1$ if the walk terminates at $a$ (which we call a \emph{zero-length walk}). Using $\tilde{\sigma}=1-\sigma$ we then have\n\begin{equation}\n\label{eqn:lhssum}\n\sum_{\gamma : a \to z\in\partial \Omega\backslash\{a\}} {\rm e}^{{\rm i} \tilde{\sigma} W(\gamma)} x^{|\gamma|} n^{c(\gamma)}-\sum_{\gamma : a \to a} x^{|\gamma|} n^{c(\gamma)}.\n\end{equation}\nThe first term arises from all configurations where the walk terminates on a boundary mid-edge other than the starting mid-edge $a$. The second is from all configurations with a zero-length walk, that is, one that terminates at $a$. Note that we define the winding angle of a zero-length walk to be $0$.\n\n\nAs for the right-hand side of Eqn. \eqref{eqn:massiveCR}, using Definition \ref{def:Fdef} this can be written as\n\begin{equation}\n\label{eqn:rhssum}\n\left (1-\frac{x}{x_{\rm c}} \right ) \sum_{\gamma : a \to z \in\Omega\backslash \partial \Omega} \left [ F(z;v_1(z)) (z-v_1(z))+ F(z;v_2(z)) (z-v_2(z)) \right ].\n\end{equation}\nThis is because for a given end point $z$, a walk can be heading towards one of two possible vertices, which we call $v_1$ and $v_2$, the labelling being unimportant. This is illustrated in Fig. \ref{fig:2vertex}. 
Equating \\eqref{eqn:lhssum} and \\eqref{eqn:rhssum} we have\n\\begin{align}\n&\\sum_{\\gamma : a \\to z\\in \\partial \\Omega\\backslash \\{a\\}}{\\rm e}^{ {\\rm i} \\tilde{\\sigma} W(\\gamma)} x^{|\\gamma|}n^{c(\\gamma)}-\\sum_{\\gamma : a \\to a}x^{|\\gamma|}n^{c(\\gamma)} \\nonumber\\\\\n= &\\left (1-\\frac{x}{x_{\\rm c}} \\right ) \\sum_{\\gamma : a \\to z \\in\\Omega\\backslash \\partial \\Omega} \\left ( F(z;v_1) (z-v_1(z))+ F(z;v_2) (z-v_2(z)) \\right ) \n\\label{eqn:identity1}\n\\end{align}\nUsing $\\sigma=1-\\tilde{\\sigma}$ and the definition of $F(z;v)$ the summation on the right-hand side becomes \n\\[\n{\\rm e}^{ {\\rm i} \\phi } \\left ( \\sum_{\\gamma : a \\to z \\to v_1} x^{|\\gamma|}n^{c(\\gamma)}{\\rm e}^{ {\\rm i} \\tilde{\\sigma} W(\\gamma)}{\\rm e}^{-{\\rm i} W(\\gamma)} - \\sum_{\\gamma : a \\to z \\to v_2} x^{|\\gamma|}n^{c(\\gamma)}e^{{\\rm i} \\tilde{\\sigma} W(\\gamma)} {\\rm e}^{-{\\rm i} W(\\gamma)} \\right ), \n\\] \\\\\nwhere $e^{{\\rm i}\\phi}$ is the unit vector from $v_1$ to $z$, which is the negative of the unit vector from $v_2$ to $z$.\n\\begin{figure}\n\\centering\n\\begin{picture}(250,75)\n\\put(50,0){\\includegraphics[scale=0.75, angle=0]{2vertex2.pdf}}\n\\put(37,0){$v_1$}\n\\put(90,35){$v_2$}\n\\put(66, 22){$z$}\n\\put(147,0){$v_1$}\n\\put(200,35){$v_2$}\n\\put(180, 22){$z$}\n\\put(70,0){$e^{i\\phi}$}\n\\end{picture}\n\\caption{A walk terminating at the mid-edge $z$. The mid-edge lies between two vertices $v_1$ and $v_2$ and the unit vector from $v_1$ to $z$ is given by $e^{{\\rm i} \\phi}$. The labelling of the vertices is arbitrary.} \n\\label{fig:2vertex}\n\\end{figure}\n\nA walk that terminates at $z$ and moves in the direction of vertex $v_2$ has winding $W(\\gamma_2)=2\\pi m' + \\phi$ while a walk heading in the direction of the vertex $v_1$ and terminating at $z$ has winding $W(\\gamma_1)=(2m + 1)\\pi + \\phi$ for some $m, m' \\in\\mathbb{Z}$. 
In each case the angle $\phi$ from the unit vector is cancelled by the $\phi$ appearing in the winding angle term $e^{-{\rm i} W(\gamma)}$ and this leaves\n\begin{equation}\n\label{exp:rhsidentity1}\n-\sum_{\gamma : a \to z \in \Omega\backslash \partial \Omega} x^{|\gamma|}n^{c(\gamma)}{\rm e}^{ {\rm i} \tilde{\sigma} W(\gamma)}. \n\end{equation}\n\nThe left-hand side \eqref{eqn:lhssum} is a sum over walks to the boundary and walks of length zero; the latter sum is equal to $C_{\Omega}(x)$. Substituting expression (\ref{exp:rhsidentity1}) into Eqn. (\ref{eqn:identity1}) and rearranging, we obtain \n\begin{equation} \label{eqn:keyidentity}\n\sum_{\gamma : a \to z\in \partial \Omega\backslash \{a \}} {\rm e}^{{\rm i}\tilde{\sigma} W(\gamma)} x^{|\gamma|}n^{c(\gamma)} + (1- x\/x_c)\sum_{\gamma : a \to z\in \Omega\backslash \partial \Omega} {\rm e}^{{\rm i}\tilde{\sigma} W(\gamma)} x^{|\gamma|}n^{c(\gamma)} = C_{\Omega}(x).\n\end{equation}\n\end{proof}\n\section{Winding angle}\n\subsection{Generating function definitions}\nLet us now restrict to a particular trapezoidal domain $\Omega=S_{T,L}$ of width $T$ and left-height $2L$, see Fig.~\ref{fig:S_boundary}.\nThe winding angle distribution function can be calculated directly from the off-critical generating function identity (\ref{eqn:keyidentity}). We remind the reader that $\gamma$ describes a walk along with a configuration of loops. For convenience we use the terms \emph{generating function of walks} and \emph{configuration of walks}, but it should be understood that these include configurations of closed loops as well. 
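As a toy illustration of the winding angle (our own sketch, not from the paper): on the honeycomb lattice every step of a walk turns left or right by $\pi/3$, so $W(\gamma)$ is determined entirely by the turn sequence. The `'L'`/`'R'` encoding and the sign convention below are assumptions for illustration.

```python
from math import pi

def winding_angle(turns):
    """Winding angle of a honeycomb walk given its turn sequence.

    turns: iterable of 'L' / 'R' characters; each left turn contributes
    +pi/3 and each right turn -pi/3 (sign convention assumed here)."""
    return sum(pi / 3 if t == 'L' else -pi / 3 for t in turns)

# A full counter-clockwise circuit of one hexagon makes six left turns,
# giving total winding 2*pi:
assert abs(winding_angle('LLLLLL') - 2 * pi) < 1e-12
```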
We define the following generating function\n\\[\nG_{\\theta,\\Omega}(x)=\\sum_{\\substack{\\gamma : a \\to z\\in\\Omega\\backslash \\partial \\Omega\\\\W(\\gamma)=\\theta}}x^{|\\gamma|}n^{c(\\gamma)}.\n\\]\n$G_{\\theta,\\Omega}(x)$ contains only those contributions to $G_\\Omega(x) = \\sum_\\theta G_{\\theta,\\Omega}(x)$ where the walk has winding angle $\\theta$. We also define\n\\[\nH_{\\Omega}(x)=\\sum_{\\gamma : a \\to z\\in\\partial \\Omega\\backslash \\{a \\} }{\\rm e}^{{\\rm i}\\tilde{\\sigma} W(\\gamma)}x^{|\\gamma|}n^{c(\\gamma)},\n\\]\nwhich is the generating function describing walks that terminate on the boundary of the domain, and thus have a winding angle associated to that boundary.\n\\begin{figure}[t]\n\\begin{center}\n\\begin{picture}(140,190)\n\\put(0,180){\\includegraphics[height=120pt, angle=270]{honeycomb}}\n\\put(15,77){$\\alpha$}\n\\put(123,76){$\\beta$}\n\\put(55,143){$\\varepsilon$}\n\\put(55,10){$\\bar{\\varepsilon}$}\n\\put(15,89){$a$}\n\\put(-11,77){$2L$}\n\\put(69,182){$T$}\n\\end{picture}\n\\end{center}\n\\caption{Finite patch $S_{5,1}$ of the hexagonal lattice. The SAW component of a loop configuration starts on the central mid-edge of the left boundary (shown as $a$).}\n\\label{fig:S_boundary}\n\\end{figure}\nUsing this notation (\\ref{eqn:keyidentity}) becomes\n\\[\nH_{\\Omega}(x) + (1- x\/x_c)\\sum_{\\theta} {\\rm e}^{{\\rm i}\\tilde{\\sigma} \\theta}G_{\\theta,\\Omega}(x) = C_{\\Omega}(x).\n\\]\nNow let $H_{\\Omega}^{*}(x)$ and $G_{\\theta,\\Omega}^{*}(x)$ be $H_{\\Omega}(x)\/C_{\\Omega}(x)$ and $G_{\\theta,\\Omega}(x)\/C_{\\Omega}(x)$ respectively. For $x0$ and $\\dot{Q}_{\\rm GeAl}<0$, and $\\Pi$ is the Peltier coefficient. An illustration is seen in Figure \\ref{fig:peltier}b. The injected heat $\\dot{Q}$ from these regions is then equal to the sum of the heat flux towards the Al-electrodes and the heat flux over the Ge-segment to the other heat source\/sink. 
(Substrate losses can again be neglected on the short length scale $\lambda$, as discussed above.) By use of Fourier's law the relation is then written as\n\begin{equation}\n\dot{Q}= (-\kappa_{\rm Al}\nabla T_{\rm Al} - \kappa_{\rm Ge}\nabla T_{\rm Ge})\cdot A_{\rm wire}.\n\label{eq:injectedHeat}\n\end{equation}\nIn order to take the above-described microscopic extension of the Peltier coefficient into account, we consider the temperature profile beyond the length scale $\lambda$ away from the Al-Ge interface. There we can treat the temperature profiles as solutions to equation \ref{eq:diffusionP} in sections without source terms.\nThe temperature gradient in the Al-segment is obtained as the derivative, with respect to the position $x$, of the exponential fit to the Peltier profile evaluated at the junction, $\nabla T_{\rm Al} = \sqrt{\frac{g}{\kappa A}} \cdot\Delta T_{\rm AlGe}$, see equation\,\ref{eq:exponetial}. By extrapolating the exponential fit we conceptually concentrate the extended heat source into a point at the AlGe interface. The temperature gradient in the Ge-segment is obtained from a linear fit to the data between heat source and heat sink. The linearity of the temperature profile within the Ge section results from the strong temperature gradient and the $\kappa \/ g$ ratio, which justifies neglecting the substrate loss $g$ in this region. \nThe effective Peltier coefficient is then calculated as $\Pi=267\pm25$\,mW\/A, resulting in a Seebeck coefficient of $S=790\pm95\,\mu$V\/K for the temperature at the Al-Ge interface. \n\nIt is interesting to relate the effective Peltier coefficient to the Schottky barrier height in order to test the validity of the derived value. The cooling power by thermionic emission over a barrier is given by the total current times the average energy of the carriers above the barrier\n\begin{equation}\n\dot{Q}=I\cdot\bigg(\phi_C+\frac{2k_BT}{e}\bigg).\n\end{equation}\nHence, the barrier height may be calculated from our measurements. 
We find $\\phi_{\\rm c}=325\\pm45$\\,meV, which is in agreement with previously determined barrier heights from gating experiments of similar devices\\cite{Kral2015}. Independent of metal type and doping concentrations, metal-germanium junctions form Schottky contacts and exhibit very strong Fermi-level pinning close to the valence band \\cite{THANAILAKIS19731383, Dimoulas2006}.\nFinally, the thermoelectric figure of merit is calculated for the scan taken at a current of $I=26\\,\\mu$A to be $ZT = 0.020\\pm0.005$. This finalizes the thermoelectric characterization of the Al-Ge heterostructure nanowire with a segment length of 168\\,nm from a single measurement.\n\nLastly, we have a closer look at the Ge-Al interface regions. We acknowledge that extracting quantitative results is hampered by the oxide shell, which does not comprise an interface. However, important observations can nevertheless be made. The conjecture that the Peltier source term has an exponential spatial distribution has testable consequences. As a result of the distributed source term (equation \\ref{eq:petcoefficient}), the point of maximum temperature should be shifted with respect to the Al-Ge interface. Using the differential equation with this source term and the material parameters extracted above, we calculate that the maximum temperature should be about 46\\,nm separated from the interface using an equilibration length of 22\\,nm. This shift is a good estimation with the position of maximum temperature extracted from the experimental temperature profile of $58 \\pm 10$\\,nm. \n\n\\section{Conclusion}\nThis study demonstrates the capability to use thermal imaging by SThM for extracting relevant information on material properties and the dynamics of nanoscale devices and structures. Most notably, we were able to obtain the thermal and thermoelectric properties of a model heterostructure from a single measurement. 
Once the thermal contact resistance between sample and substrate is quantified, its effect can be accounted for and, in contrast to other techniques, no longer hampers the measurements. The values are confirmed by independent measurements, such as the derivation of the substrate coupling with two different models and the comparison of the thermally determined barrier height with previous electrical transport measurements. There appears to be no other method currently available to extract this set of information from a sample of this size. For thermoelectric applications, the performance of short segments is interesting, and the results create a link between device design and materials properties. \n\n\section{Acknowledgements}\nThis project has received funding from the European Union's Horizon 2020 research and innovation programme under Grants\nNo. 766853 (EFINED) and No. 767187 (QuIET).\n\nContinuous support from Anna Fontcuberta i Morral, Ilario Zardo, \nMarta De Luca, Kirsten Moselund, Heike Riel and Steffen Reidt is gratefully acknowledged.\n\n\n\n\section{Device Fabrication}\nThe NWs are fabricated by thermally induced substitution of gold-assisted vapor-liquid-solid (VLS) grown Ge-NWs by Al. At first, $\langle$111$\rangle$ oriented Ge-NWs are grown heteroepitaxially on silicon substrates in a low pressure chemical vapor deposition system by use of a gold-assisted VLS process. The NWs are then coated with 15\,nm high-\textit{k} Al$_2$O$_3$ by atomic layer deposition (ALD) and drop-cast on a highly p-doped Si substrate with a 100\,nm thick dielectric layer of SiO$_2$. Afterwards, the Al-contacts are formed with electron-beam lithography, sputter deposition and lift-off techniques in preparation for the subsequent thermal exchange reaction. Lastly, in order to form the single crystalline Al-Ge-Al heterostructure, the Ge-NWs are thermally annealed at 623\,K. 
Under these conditions, the diffusion constants of Ge and Al are $10^{12}$ times higher in Al than in Ge\cite{Elhajouri2019}. This means that the Ge can easily diffuse into the Al-contact pads, whereas the Al-atoms are efficiently supplied via fast self-diffusion to take over the released lattice sites. The successive substitution of Ge-atoms with Al has been monitored \textit{in-situ} in both scanning electron microscopy (SEM) and scanning transmission electron microscopy (STEM) studies. They revealed atomically sharp interfaces between the Al and Ge during the anneal. Thus, the length $L_{\rm Ge}$ of the Ge-segment is tuned by varying the annealing time. For sufficiently long annealing times the Ge diffuses completely out of the wire into the contact pads. This leaves behind single crystalline Al (c-Al) NWs with a single grain boundary in the center. Furthermore, the Ge-segments are integrated in a back-gated field effect transistor (FET) by construction. The Al-contacts are naturally self-aligned with the wire. More details can be found in references \cite{Kral2015,Elhajouri2019,Brunbauer_2016}.\n\n\section{NW properties}\nFigures 1b-d in the paper show a schematic, a STEM image of the cross-section and a SEM image of the nanowire heterostructure studied here. In particular, it consists of a single-crystalline Ge-segment of 168\,nm length and 37\,nm diameter, as determined using scanning transmission electron microscopy (STEM). The Ge-segment is contacted by two self-aligned c-Al nanowires of the same diameter and around 1.7\,$\mu$m length. An aluminium oxide shell (Al$_2$O$_3$) of 15\,nm thickness deposited by atomic layer deposition (ALD) surrounds the Al-Ge-Al core. The interfaces between the c-Al and Ge are abrupt at the atomic level. The Al-sections of the nanowire are contacted using Al pads patterned with electron-beam lithography on a Si\/SiO$_2$-substrate. 
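As a quick cross-check of the quoted geometry (our own arithmetic): the cross-sectional areas follow directly from the 37\,nm core diameter and 15\,nm shell thickness, and reproduce the STEM-derived values of $\sim$1075\,nm$^2$ and $\sim$2450\,nm$^2$.

```python
from math import pi

r_core = 37 / 2          # nm, core radius from the 37 nm diameter
t_shell = 15.0           # nm, ALD Al2O3 shell thickness

A_core = pi * r_core**2                                # conducting Al/Ge core
A_shell = pi * ((r_core + t_shell)**2 - r_core**2)     # oxide shell annulus

assert abs(A_core - 1075) < 10    # ~1075 nm^2, as measured by STEM
assert abs(A_shell - 2450) < 10   # ~2450 nm^2, as measured by STEM
```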
\n\nThe electrical properties of pure c-Al nanowires fabricated using the same process revealed a conductivity of $\sigma = (7.6 \pm 1.5) \cdot 10^6 \, (\Omega$m$)^{-1}$ for Al\cite{Brunbauer_2016}. The Al-Ge-Al heterostructures with atomically abrupt interfaces show the non-linear current-voltage (IV) relationship of two back-to-back Schottky diodes in series for Ge-segments of length $L_{\rm Ge}>45$\,nm, with an overall resistance proportional to the Ge-segment length\cite{sistani2017}. From gating experiments it is further concluded that the Ge segments behave as p-type semiconductors. Independent of metal type and doping concentrations, metal-germanium junctions form Schottky contacts, exhibiting very strong Fermi-level pinning close to the valence band \cite{THANAILAKIS19731383, Dimoulas2006}.\n\nOne issue regarding the thermal characterization of current-carrying Al-Ge-Al nanowires is the presence of charge-carrier traps located at the Ge\/Ge-oxide interface. In this regard, the protective Al$_2$O$_3$-shell ensures reliable and reproducible measurements by avoiding any influence of adsorbates rather than by eliminating charge trapping due to dangling bonds at the Ge\/Ge-oxide interface. These traps lead to hysteretic current-voltage relationships. However, the hysteresis decreases with increasing current range, drive speed and operating time. It was found that for currents larger than 21\,$\mu$A, the hysteresis is reduced and the current-voltage graph is sufficiently linear to assume a constant resistance in the thermal analysis. Consequently, these charging effects are estimated to contribute a systematic error to the temperature values extracted at lower currents, as shown below.\n\n\section{Imaging and Geometric Characterization of the Wire}\n\nAfter the thermal scans, a scanning transmission electron microscopy analysis of an Al segment is carried out with a double spherical aberration-corrected JEOL JEM-ARM200F operated at 200 kV. 
The Al segment was prepared by focused ion beam, using a dual beam FIB Helios NanoLab 660 from FEI. The DF STEM images of the cross-section are shown in figure \ref{fig:STEM} and reflect the round shape. A sharp interface separates the core and the oxide shell, with respective areas of $A_{\rm core} = 1075\pm 80 $\,nm$^2$ and $A_{\rm shell} = 2450\pm230 $\,nm$^2$. The Al-core shows some poly-crystallinity; however, the loss of crystalline order probably occurred during sample preparation in the FIB. Previous, more extensive studies suggest crystalline order. Furthermore, the Al$_2$O$_3$-shell is observed to be crystalline in the Fourier image of the cross-section (not shown here). \n\nEnergy Dispersive X-ray Spectroscopy (EDS) chemical maps reveal a well defined Al-core and Al$_2$O$_3$ shell with a sharp transition between the two. No residual Ge is detected, although the detection limit does not resolve doping concentrations. The maps are shown in figure \ref{fig:eds}. To conclude, all geometric properties of the NW are measured and summarized in table \ref{tab:geometry}. \n\nFinally, figure \ref{fig:sem_after} shows the SEM-image taken in the FIB when the wire was no longer conducting. The location at which the wire broke lies close to the upper Al-electrode. In most of the experiments the wires break in a similar spot, even though the heating is observed closer to the Ge-segment. The wire is slightly thinner towards the electrodes due to the chemical etching of the oxide shell before the deposition of the electrodes.\n\n\begin{figure}[tb]\n\t{\includegraphics[width=1.0\columnwidth]{figure_stm.pdf}}\n\t\caption{DF-STEM scans taken after the thermal measurements. 
The aluminium oxide layer shows a certain degree of crystallinity, which enhances the thermal conductivity.}\n\t\label{fig:STEM}\n\end{figure}\n\begin{figure*}[tb]\n\t\centering\n\t\includegraphics[width=2.0\columnwidth]{figure_stmanalysis.pdf}\n\t\caption{Chemical analysis (EDS) of the nanowire cross-section. a) Analysis along a line through the center of the wire's cross-section; see the green line on the STEM image in the inset. b), c), d), e) show the number of counts at the energies of Al, O, Pt and Si, respectively. They demonstrate the high purity of the different materials. The platinum was deposited after the thermal measurements, to protect the wire before the FIB cut.}\n\t\label{fig:eds}\n\end{figure*}\n\begin{figure}[tb]\n\t{\includegraphics[width=0.8\columnwidth]{semafter.jpg}}\n\t\caption{The SEM image after the thermal scans, when the wire was no longer conducting. The breaking point lies close to the electrodes, even though the thermal scans revealed only little heating in these areas.}\n\t\label{fig:sem_after}\n\end{figure}\n\n\n\begin{table}[tbh]\n\t\centering\n\t\begin{tabular}{|c|l|} \n\t\t\hline\n\t\t$L_{\rm wire}$ & $(2475\pm5)$\,nm\\ \hline\n\t\t$L_{\rm Ge}$ & $(168\pm1)$\,nm \\ \hline\n\t\t$r_{\rm core}$ & $(18.5\pm2)$\,nm \\ \hline\n\t\t$d_{\rm oxide}$ & $(15\pm2)$\,nm \\ \hline\n\t\t$d_{\rm touch}$ & $(56\pm5)$\,nm \\ \hline\n\t\t$A_{\rm core}$ & $(1075\pm 80) $\,nm$^2$ \\ \hline\n\t\t$A_{\rm shell}$ & $(2450\pm230) $\,nm$^2$ \\ \hline\n\t\t$A_{\rm wire}$ & $(0.0035\pm0.0003)\, \mu $m$^2$ \\ \hline\n\t\end{tabular}\n\t\caption{Geometric properties of the NW obtained from SEM and STEM images: $L_{\rm Ge}$ is the length of the Ge segment, $r_{\rm core}$ is the radius of the conducting core, $d_{\rm oxide}$ is the thickness of the surrounding Al$_2$O$_3$ shell, $d_{\rm touch}$ is the length perpendicular to the wire that is in contact with the substrate. 
$A_{\\rm core}$, $A_{\\rm shell}$ and $A_{\\rm wire}$ are the cross-sections of the conducting inner core, the passivating Al$_2$O$_3$ shell and the entire wire respectively. The error estimation for the areas follows Gaussian error propagation.}\n\t\\label{tab:geometry}\n\\end{table}\n\n\n\\section{Electrical Characterization of the nanowire}\n\nThis section provides some overview on the electrical behavior of the NW under measuring conditions for the thermal scans. Figure \\ref{fig:ivbeforeheating} displays the I-V relation that is taken just below the threshold voltage at which any heating of the wire is detected. The I-V has two very striking features: First, a pronounced hysteresis appears between ramping the current up and down. Second, the electrical resistance depends highly on the applied voltage. Both observations are well visible on the resistance-voltage (R-V) plot in the logarithmic scale depicted next to the I-V. It reveals a change in resistance of two orders of magnitude. A heating of the wire is observed only if the applied electric field is higher than the electrical breakdown field in pure Ge wires that has been previously observed at $E=1.25\\cdot10^5$\\,V\/cm \\cite{Lugstein_2013}. For a Ge-segment of length $L_{\\rm Ge} = 168$\\,nm this corresponds to an applied voltage bias of $V_{\\rm bias}=2.1$\\,V.\n\nThe wire is integrated in an electrical circuit including a series resistor of similar resistance $R_{\\rm S} = 100$\\,k$\\Omega$ that protects from current spikes. The real time data acquisition system is is also used to measure the IV after each scan. The IV is conducted in the DC-mode with 200 steps distributed over the AC-modulation. Each point is averaged over $20$\\,ms.\n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=2.0\\columnwidth]{figure_ivs.pdf} \n\t\\caption[A floating figure]{Characterization of the electrical behavior of the nanowire. 
The data in a) and b) are measured just below a current density $J$ that is high enough to lead to a detectable change in the temperature of the wire. a) The current density as a function of the voltage drop over the device; b) the same information shown as the device resistance as a function of the voltage drop over the NW. c) and d) show the I-V and R-V curves measured immediately after each SThM scan up to the same voltage bias. The total applied voltage $v_{\rm tot}$ drops additionally over a series resistance of $R_{\rm S}=100$\,k$\Omega$.}\n\t\label{fig:ivbeforeheating} \n\end{figure*}\n\begin{figure*}\n\t\centering\n\t\includegraphics[width=0.8\columnwidth]{rhoge.pdf} \n\t\caption[A floating figure]{The electrical resistivity $\rho_{\rm Ge}$ is shown as a function of the average temperature in the Ge segment.}\n\t\label{fig:rhoge} \n\end{figure*}\n\nFigure \ref{fig:ivbeforeheating} shows a plot of all I-V and R-V curves going up to the maximum voltage bias applied in the respective thermal measurements. The electric field in the Ge-segment lies in the range $E \approx V_{\rm device}\/L_{\rm Ge} = 1.27-1.44\cdot10^5$\,V\/cm. Each curve is recorded after one hour of operation time and, therefore, after charging of the oxide shell during the thermal scan. The wire has a few seconds to cool down after each scan. The very different behavior of the IVs for different maximally applied voltages is, therefore, not caused by changes in the device temperature, which should be the same for equal current densities. Hence, it might be attributed to the charging effect in the oxide shell. Moreover, the amount of charging depends upon whether the NW is operated under AC- or DC-current. It is observed that this local gating phenomenon changes the resistance of the device by orders of magnitude for only small changes of the experimental conditions. 
However, the amount of charging tends to a stationary state when the IV is run quickly several times. Therefore, the hysteresis is expected to largely disappear during the scans in the SThM. A difficulty for investigating thermoelectric effects is the simultaneous presence of Joule heating, which creates a positive feedback phenomenon by decreasing the resistance during heating. It is counteracted only by increased charge carrier scattering at higher temperatures.\n\nAn increase in drain current with the gate voltage is linked to the trapping of negative surface charges in the interband levels due to surface states and bulk impurities. An applied negative gate voltage provokes the accumulation of holes that continuously neutralize the trapped electrons\cite{sistani2017}. A similar p-type charging has been shown to appear in undoped Si-NWs. In the Si-system, changing the passivation layer from Al-oxide to Si-oxide turns the device into an n-type FET, which is OFF for negative gate voltages. Therefore, surface charges originating from interface defects and dangling bonds can generate doping effects in NWs\cite{Winkler2015}. The use of a high quality thermal oxide reduces the number of charge traps and thereby also the hysteresis that occurs when the gate voltage is swept from negative to positive values and vice versa\cite{sistani2017}. Under a given gate voltage, the neutralization of holes is a rather slow process. It reaches a stationary state after approximately 20 minutes of operation. Such a long time span is due to a kinetic limitation by either a diffusion or a tunnel barrier in the form of a GeO$_x$ layer at the interface between the Ge and the Al$_2$O$_3$. Indeed, such a layer is expected to form during ALD-deposition of the Al$_2$O$_3$ layer. It acts as a local gate and provides a large number of trapping states\cite{Zhang2015,staudinger2018}. 
To conclude, the nature of the passivation layer strongly influences the reliability and reproducibility of the device performance under the above-described charging mechanism.\n\nThe electrical resistivity in the Ge-segment is determined as follows. It is calculated from the electrical resistance measured in the I-V curves which are taken after each scan. The resistance at the maximal voltage of the I-V, which equals the AC-voltage applied during the scan, is closest to the conditions during the respective thermal measurement. The contribution of the Al-wire and contacts is subtracted from the overall device resistance. The values are represented as a function of the average temperature in the Ge-segment in figure \ref{fig:rhoge}. The electrical resistivity depends strongly on the temperature.\n\n\n\n\section{SThM operation and analysis details}\nThe images of the scans are generated and treated with the help of the open-source software Gwyddion. Apart from the straightforward calculation of the temperature field using equation 1 in the paper, some further data processing is conducted: Firstly, based on the observation that the trace and retrace are nearly identical, they are averaged for noise reduction. Secondly, a low-pass scaling is applied to compensate for the signal attenuation that is due to a delay in the temperature response of the cantilever. Thirdly, a DC-signal offset is manually corrected to eliminate some remaining non-uniformity in the temperature field which is caused by artifacts in the DC-field. It is hard to measure the temperature offset of the sensor exactly due to small changes in the DC-signal when approaching the tip. However, by eye it is easy to see when the artifacts at the edge of the NW disappear. The procedure might be compared to aligning a microscope. 
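The three processing steps can be sketched as follows (a minimal illustration of the described chain; the gain factor and offset value are hypothetical numbers, not calibration constants from the paper):

```python
def process_scan(trace, retrace, gain=1.1, dc_offset=0.02):
    """Average trace/retrace, compensate low-pass attenuation, remove DC offset.

    gain and dc_offset are illustrative placeholders for the low-pass
    scaling factor and the manually determined DC-signal offset."""
    processed = []
    for t, r in zip(trace, retrace):
        avg = 0.5 * (t + r)                # trace/retrace average (noise reduction)
        avg *= gain                        # low-pass scaling for cantilever response
        processed.append(avg - dc_offset)  # manual DC-offset correction
    return processed

signal = process_scan([1.0, 2.0], [1.2, 1.8])
```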
The final heat maps are shown in figure 2.\n\n\nWhen turning the focus to the temperature profile around the Al-Ge interface, the location of the Ge segment on the temperature line has to be determined first. The temperature gradient of the Joule profile is plotted in figure \ref{fig:gradient}a). A pronounced maximum and minimum peak mark the deflection points in the Joule profile. They result from a sudden change of the thermal and electrical conductivities between the Ge- and the Al-parts. Their distance of 169\,nm corresponds precisely to the expected length of the Ge-segment in a scan with 10\,nm resolution. The location of the Ge-segment identified in this way is then shaded in red on the profile lines and used as a reference in the following analysis. After the deflection point there is a transition length of about 40-50\,nm, which is clearly seen in the zoom-in in figure \ref{fig:gradient}b). This area corresponds to the phonon-electron thermalization, which is blurred by a combination of the finite spatial resolution and parallel heat transport in the oxide shell.\n\begin{figure}[tb] \n\t\centering\n \includegraphics[width=1.0\columnwidth]{figure_gradient.pdf}\n\t\caption{Determination of the location of the Ge segment is possible due to a discontinuity in the temperature gradient at the thermal interface between the Ge and the Al segments. a) The temperature gradient $\nabla T_{\rm Joule}$ as a function of the position $x$ along the wire for the different operating currents. The gradients are normalized by the maximum temperature in the center of the red shaded Ge segment. b) shows a zoom on the gradient at the Ge-Al interface.}\n\t\label{fig:gradient} \n\end{figure}\n\nIn general, the temperature profiles show mostly symmetric behavior for the Joule and the Peltier signal. This reflects the good quality of the measurement data and the sample. 
However, for the thermal measurements conducted at lower operating currents, the profile curves show a slight asymmetry. Here the SThM method reaches its limitations due to the underlying assumption of a linear electrical behavior of the sample. This requirement is better fulfilled at higher currents; at lower currents the nonlinearity leads to a mixing of the $1f_{\rm mod}$ and $2f_{\rm mod}$ signals.\n\begin{figure}[tb]\n\t\centering\n \includegraphics[width=1.0\columnwidth]{figure_linearfits.pdf} \n\t\caption{In a), the maximum temperature increase caused by Joule heating is plotted as a function of the dissipated power $P$. In b), the temperature difference between the two poles in the Peltier scan is plotted as a function of the operating current. The error bars reflect the uncertainties on the temperature measurement.}\n\t\label{fig:propI2I}\n\end{figure}\nNonetheless, the comparison of the maximum temperature in the Joule-scan with the dissipated power and the comparison of the maximum temperature difference in the Peltier-scan with the operating current reveal linear relationships. This behavior is expected from the 1D diffusion equation 2 from the paper. The relationships can be seen in figure \ref{fig:propI2I}.\nThe noise levels are quite low. No noise reduction is used at any stage in the lateral direction of the wire, to avoid altering the shape of the profile. The expected error on the measured temperature is a combination of a constant absolute error and a more important relative contribution that depends on the sample temperature. The absolute error is determined by taking the root mean square of the temperature on the substrate far from the wire, where no signal is expected. The relative error, which results from the multiplication with the temperature difference to the sensor according to equation 1, is estimated to be around $5\%$ of the sample temperature\cite{konemann2019}. 
The estimated uncertainty for each temperature measurement along the profile of the NW is then calculated as\n\\begin{equation}\n\\Delta \\bigg(\\Delta T _{\rm J\/P}\\bigg) = \\frac{1.84 [\\rm{K}]}{\\sqrt{15}}+0.05\\cdot\\Delta T_{\rm J\/P}\n\\label{eq:error}\n\\end{equation}\nTo conclude, the method allows the temperature profiles caused by the Joule and Peltier effects in the samples to be resolved down to the thermalization lengths of the heat carriers. In order to extract further information by the simplest means, the analysis is divided into different sectors along the wire.\n\n\n\\section{Joule heating in pure Aluminium nanowires}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{figure_AlJoule.pdf}\n\t\\caption{Joule heating in an operando c-Al wire. a) shows the heat map measured with the SThM and b) shows the temperature profile along the nanowire axis under different voltage biases.}\n\t\\label{fig:AlJoule} \n\\end{figure}\n\nThermal measurements are also conducted on a c-Al NW. However, the electrical characterization indicates leakage in the device. Nonetheless,\nthe thermal measurements give some insight into where heat is generated. The Joule heat map in figure \\ref{fig:AlJoule} shows heating of the entire wire, in contrast to the Al-Ge-wire, where the heating is localized around the Ge-segment. Even more striking is the heating of the contacts, which is not present in the heterostructure. In the profile lines along the wire, also shown in figure A.4, a distinct heat source is identified. Most probably it is located where the Ge-segment disappeared during fabrication, leaving behind some grain boundary or impurities that induce increased scattering of electrons during operation.
Consequently, the electrical conductivity of the single crystalline Al-parts of the heterostructure wire is expected to be higher than the values determined for the entire c-Al-wires due to the absence of this defect.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThis paper is concerned with the interplay of the symmetric trilinear form $\\mu$ on the second cohomology group $H^{2}(X,\\mathbb{Z})$ \nand the Chern classes $c_{2}(X)$, $c_{3}(X)$ of a Calabi--Yau threefold $X$. \nIt is an open problem whether or not the number of topological types of Calabi--Yau threefolds is bounded and \nthe original motivation of this work was to investigate topological types of Calabi--Yau threefolds via the trilinear form $\\mu$ on $H^{2}(X,\\mathbb{Z})$. \nThe role that the trilinear form $\\mu$ plays in the geography of $6$-manifolds is indeed prominent as \nC.T.C. Wall proved the following celebrated theorem by using surgery methods and homotopy information associated with these surgeries.\n\\begin{Thm}[C.T.C. 
Wall \\cite{Wall}] \\label{Wall}\nDiffeomorphism classes of simply-connected, spin, oriented, closed $6$-manifolds $X$\nwith torsion-free cohomology correspond bijectively to isomorphism classes of systems of invariants consisting of \n\\begin{enumerate}\n\\item free Abelian groups $H^{2}(X,\\mathbb{Z})$ and $H^{3}(X,\\mathbb{Z})$, \n\\item a symmetric trilinear form $\\mu:H^{2}(X,\\mathbb{Z})^{\\otimes 3}\\rightarrow H^{6}(X,\\mathbb{Z})\\cong \\mathbb{Z}$ defined by $\\mu(x,y,z)=x\\cup y \\cup z$, \n\\item a linear map $p_{1}:H^{2}(X,\\mathbb{Z})\\rightarrow H^{6}(X,\\mathbb{Z})\\cong \\mathbb{Z}$ defined by $p_{1}(x)=p_{1}(X)\\cup x$,\nwhere $p_{1}(X) \\in H^{4}(X,\\mathbb{Z})$ is the first Pontrjagin class of $X$, \n\\end{enumerate}\nsubject to: for any $x,y \\in H = H^{2}(X,\\mathbb{Z})$, \n$$\n\\mu(x,x,y)+\\mu(x,y,y)\\equiv 0 \\pmod{2},\\ \\\n4\\mu(x,x,x) -p_{1}(x)\\equiv 0\\pmod{24}.\n$$\nThe isomorphism $H^{6}(X,\\mathbb{Z})\\cong \\mathbb{Z}$ above is given by pairing the cohomology class with the fundamental class $[X]$ with natural orientation. \n\\end{Thm}\nAt present the classification of trilinear forms, which is as difficult as that of diffeomorphism classes of $6$-manifolds, remains open. \nIn light of the essential role of the K3 lattice in the study of K3 surfaces, we would like to propose the following question:\n\\textit{what kind of trilinear forms $\\mu$ occur on Calabi--Yau threefolds?} \nThe quantized versions of the trilinear form, known as Gromov--Witten invariants or A-model Yukawa couplings, are also of interest to both mathematicians and physicists. \nOne advantage of working with complex threefolds is that we can reduce our questions to the theory of complex surfaces by considering linear systems of divisors. \nFurthermore, for Calabi--Yau threefolds $X$, \nthe second Chern class $c_{2}(X)$ and the K\\\"{a}hler cone $\\mathcal{K}_{X}$ turn out to encode important information about $\\mu$ (see \\cite{Wil2, Wil4} for details).
\nOne purpose of this paper is to take the first step towards an investigation on \nhow the Calabi--Yau structure affects the trilinear form $\\mu$ and the Chern classes of the underlying manifold. \\\\\n\nIt is worth mentioning some relevant work from elsewhere. Let $(X,H)$ be a polarized Calabi--Yau threefold. \nA bound for the value $c_{2}(X)\\cup H$ in terms of the triple intersection $H^{3}$ is well-known (see for example \\cite{Wil3}) \nand hence there are only finitely many possible Hilbert polynomials $\\chi(X,\\mathcal{O}_{X}(nH))=\\frac{H^{3}}{6}n^{3} +\\frac{c_{2}(X)\\cup H}{12}n$ for such $(X,H)$. \nBy the footnote below \nand standard Hilbert scheme theory, we know that the Calabi--Yau threefold $X$ belongs to a finite number of families. \nThis implies that once we fix a positive integer $n \\in \\mathbb{N}$, there are only finitely many diffeomorphism classes of polarized Calabi--Yau threefolds $(X,H)$ with $H^{3}=n$, \nand in particular only finitely many possibilities for the Chern classes $c_{2}(X)$ and $c_{3}(X)$ of $X$. \nExplicit bounds on the Euler characteristic $c_{3}(X)$ in terms of $H^{3}$ for certain types of Calabi--Yau threefolds are given in \\cite{BH, CK}; \nthe idea of this article is to record the following simple explicit result which holds in general, and which may be useful for both mathematicians and physicists. \n\\begin{Thm} \\label{MAIN} \nLet $(X,H)$ be a very amply polarized Calabi--Yau threefold, i.e. $x=H$ is a very ample divisor on $X$. \nThen the following inequality holds:\n$$\n-36\\mu(x,x,x)-80\\le \\frac{c_{3}(X)}{2}=h^{1,1}(X)-h^{2,1}(X)\\le 6\\mu(x,x,x)+40.\n$$\nMoreover, the above inequality can be sharpened by replacing the left hand side by $-80$, $-180$ and right hand side by $28$, $54$ when $\\mu(x,x,x)=1,3$ respectively \n\\footnote{It is shown by K. Oguiso and T. Peternell \\cite{OP} that we can always pass from an ample divisor $H$ on a Calabi--Yau threefold to a very ample one $10H$. }. 
\n\\end{Thm}\n\nIn the last section, we study the cubic form $\\mu(x,x,x):H^{2}(X,\\mathbb{Z})\\rightarrow \\mathbb{Z}$ for a K\\\"{a}hler threefold $X$, assuming that $\\mu(x,x,x)$ has a linear factor over $\\mathbb{R}$. \nSome properties of the linear form and the residual quadratic form on $H^{2}(X,\\mathbb{R})$ are obtained; \npossible signatures of the residual quadratic form are determined under a certain condition (for example $X$ is a Calabi--Yau threefold). \n\n\\section{Bound for $c_{2}(X)\\cup H$} \n\nIn this section, we collect some properties of the trilinear form and the second Chern classes of a Calabi--Yau threefold. \nWe will always work over the field of complex numbers $\\mathbb{C}$. \\\\\n\nLet $X$ be a smooth K\\\"{a}hler threefold. \nThroughout this paper, we write $c_{i}(X)=c_{i}(TX)$ the $i$-th Chern class of the tangent bundle $TX$. \nK\\\"{a}hler classes constitute an open cone $\\mathcal{K}_{X}\\subset H^{1,1}(X,\\mathbb{C})\\cap H^{2}(X,\\mathbb{R})$, called the K\\\"{a}hler cone. \nThe closure $\\overline{\\mathcal{K}_{X}}$ then consists of nef classes and hence is called the nef cone. \nThe second Chern class $c_{2}(X) \\in H^{4}(X, \\mathbb{Z})$ defines a linear function on $H^{2}(X,\\mathbb{R})$. \nUnder the assumption that $X$ is minimal (for instance a Calabi--Yau threefold), \nresults of Y. Miyaoka \\cite{Miy} imply that for any nef class $x \\in \\overline{\\mathcal{K}_{X}}$, we have $c_{2}(X)\\cup x \\ge 0$. \\\\\n\n\nLet $X$ be a smooth complex threefold. \nWe define a symmetric trilinear form $\\mu:H^{2}(X,\\mathbb{Z})^{\\otimes3} \\rightarrow H^{6}(X,\\mathbb{Z})\\cong \\mathbb{Z}$ \nby setting $\\mu(x,y,z)=x\\cup y \\cup z$ for $x,y,z \\in H^{2}(X,\\mathbb{Z})$. \nBy small abuse of notation we also use $\\mu$ for its scalar extension. \n\n\n\\begin{Def}\nA Calabi--Yau threefold $X$ is a complex projective smooth threefold with trivial canonical bundle $K_{X}$ such that $H^{1}(X, \\mathcal{O}_{X})=0$. 
\n\\end{Def}\nFor a Calabi--Yau threefold $X$, the exponential exact sequence gives an identification $\\mathrm{Pic}(X)=H^{1}(X, \\mathcal{O}_X ^{\\times}) \\cong H^{2}(X,\\mathbb{Z})$. \nThe divisor class $[D]$ is then identified with the first Chern class $c_{1}(\\mathcal{O}_{X}(D))$ of the associated line bundle $\\mathcal{O}_{X}(D)$. \nIn the following we freely use this identification. \\\\\n\nThe Hirzebruch--Riemann--Roch theorem for a Calabi--Yau threefold $X$ states that $\\chi(X,\\mathcal{O}_{X}(D))=\\frac{1}{6} \\mu(x,x,x)+\\frac{1}{12}c_{2}(X)\\cup x$ for any $x=D \\in H^{2}(X,\\mathbb{Z})$. \nTherefore \n$$\n2\\mu(x,x,x)+c_{2}(X)\\cup x\\equiv0 \\pmod{12}. \n$$\nIn particular, $c_{2}(X)\\cup x$ is an even integer for any $x \\in H^{2}(X,\\mathbb{Z})$. \nIn the case when the cohomology is torsion-free, this also follows from the fact $p_{1}(X)=-2c_{2}(X)$ and Wall's Theorem \\ref{Wall}. \nThe role played by $p_{1}(X)$ in his theorem is replaced by $c_{2}(X)$ for Calabi--Yau threefolds. \\\\ \n\nFor a compact complex surface $S$, the geometric genus $p_{g}(S)$ is defined by $p_{g}(S)=\\dim_{\\mathbb{C}}H^{0}(S,\\Omega_{S}^{2})$. \nThe basic strategy we take in the following is to reduce the question on Calabi--Yau threefolds to compact complex surface theory by considering linear systems of divisors. \n\n\\begin{Prop} \\label{NC} \nLet $X$ be a Calabi--Yau threefold. \n\\begin{enumerate}\n\\item For any ample $x=H \\in \\mathcal{K}_{X}\\cap H^{2}(X,\\mathbb{Z})$ with $|H|$ free and $\\dim_{\\mathbb{C}}|H|\\ge 2$, the following inequalities hold. \n$$\n\\frac{1}{2}c_{2}(X)\\cup x \\le 2\\mu(x,x,x)+C\n$$\nwhere $C=18$ when $\\mu(x,x,x)$ even and $C=15$ otherwise. \n\\item If furthermore the canonical map $\\Phi_{|K_{H}|}:H\\rightarrow \\mathbb{P}^{|K_{H}|}$ (which is given by the restriction of the map $\\Phi_{|H|}$ to $H$) is birational onto its image, the following inequality holds. 
\n$$\n\\frac{1}{2}c_{2}(X)\\cup x \\le \\mu(x,x,x)+20 \n$$\n\\item If furthermore the image of the canonical map in (2) is generically an intersection of quadrics, the following inequality holds.\n$$\nc_{2}(X)\\cup x \\le \\mu(x,x,x)+48\n$$\n\\end{enumerate}\n\\end{Prop}\n\n\\begin{proof}\n(1) By Bertini's theorem, a general member of the complete linear system $|H|$ is irreducible and gives us a smooth compact complex surface $S \\subset X$. \nApplying the Hirzebruch--Riemann--Roch theorem and the Kodaira vanishing theorem to the ample line bundle $\\mathcal{O}_{X}(H)$, \nwe can readily show that the geometric genus $p_{g}(S)=\\frac{1}{6}\\mu(x,x,x)+\\frac{1}{12}c_{2}(X)\\cup x-1$. \nSince $K_{S}$ is ample, the surface $S$ is a minimal surface of general type. \nNoether's inequality $\\frac{1}{2}K_{S}^{2}\\ge p_{g}(S)-2$ then yields the two desired inequalities, depending on the parity of $K_{S}^{2}=\\mu(x,x,x)$.\\\\\n(2) The proof is almost identical to the first case. \nSince the surface $S$ obtained above is a minimal canonical surface, i.e. the canonical map $\\Phi_{|K_{S}|}:S\\rightarrow \\mathbb{P}^{|K_{S}|}$ is birational onto its image, \nthe Castelnuovo inequality for minimal canonical surfaces $K_{S}^{2}\\ge 3p_{g}(S)-7$ yields the inequality.\\\\\n(3) We say that an irreducible variety $S\\subset \\mathbb{P}^{p_{g}-1}$ is generically an intersection of quadrics \nif $S$ is one component of the intersection of all quadrics through $S$. \nIn this case, M. Reid \\cite{MR} improved the above inequality to $K_{S}^{2}\\ge 4p_{g}(S)+q(S)-12$. \nThe irregularity $q(S)=\\dim_{\\mathbb{C}}H^{1}(S,\\mathcal{O}_{S})=0$ in our case. \n\\end{proof}\n\nIf $x\\in \\mathcal{K}_{X}$ is very ample, the conditions in Proposition \\ref{NC} (1) and (2) are automatically satisfied. \nThe first two inequalities are optimal in the sense that equalities hold for the complete intersection Calabi--Yau threefolds $\\mathbb{P}_{(1^{4},4)}\\cap(8)$ and $\\mathbb{P}^{4}\\cap (5)$.
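To make the arithmetic in case (1) explicit: substituting $p_{g}(S)=\\frac{1}{6}\\mu(x,x,x)+\\frac{1}{12}c_{2}(X)\\cup x-1$ into Noether's inequality with $K_{S}^{2}=\\mu(x,x,x)$ gives\n$$\n\\frac{1}{2}\\mu(x,x,x)\\ge \\frac{1}{6}\\mu(x,x,x)+\\frac{1}{12}c_{2}(X)\\cup x-3, \\quad \\text{i.e.} \\quad \\frac{1}{2}c_{2}(X)\\cup x \\le 2\\mu(x,x,x)+18, \n$$\nwhile for odd $K_{S}^{2}$ the parity refinement $K_{S}^{2}\\ge 2p_{g}(S)-3$ of Noether's inequality gives the constant $15$ instead. 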
\\\\\n\n\nIt is worth noting that polarized Calabi--Yau threefolds $(X,H)$ with $\\Delta$-genus $\\Delta(X, H)\\le 2$ are classified by K. Oguiso \\cite{O} \nand it is observed by the second author \\cite{Wil3} that the inequality $c_{2}(X)\\cup H \\le 10 H^{3}$ holds for those with $\\Delta(X,H)>2$. \nHowever, R. Schimmrigk's experimental observation \\cite{Sc} suggests the existence of a better linear upper bound on $c_{2}(X)$ \nfor Calabi--Yau hypersurfaces in weighted projective spaces. \n\n\n\\begin{Prop}\nThe surface $S$ in the proof of Proposition \\ref{NC} is a minimal surface of general type with non-positive second Segre class $s_{2}(S)$. \nMoreover, $s_{2}(S)$ is negative if and only if $c_{2}(X)$ is not identically zero. \n\\end{Prop}\n\\begin{proof}\nLet $i:S \\hookrightarrow X$ be the inclusion, and identify $H^{4}(S,\\mathbb{Z})\\cong \\mathbb{Z}$. \nA simple computation shows $c_{1}(S)=-i^{*}(x)$ and $c_{2}(S)=\\mu(x,x,x)+c_{2}(X)\\cup x$. \nSince $x \\in \\mathcal{K}_{X}$, $s_{2}(S)=c_{1}(S)^{2}-c_{2}(S)=-c_{2}(X)\\cup x \\le 0$ by the result of Y. Miyaoka \\cite{Miy}. \nThe second claim follows from the fact that $\\mathcal{K}_{X} \\subset H^{2}(X,\\mathbb{R})$ is an open cone. \n\\end{proof}\nIf $X$ is a Calabi--Yau threefold and the linear form $c_{2}(X)$ is identically zero, it is well known that $X$ is the quotient of an Abelian threefold by a finite group acting freely on it. \n\n\n\n\\section{Bound for $c_{3}(X)$}\nIn this section, we apply to smooth projective threefolds the Fulton--Lazarsfeld theory for nef vector bundles developed by J.P. Demailly, T. Peternell and M. Schneider \\cite{DPS}. \nThis gives us several inequalities among Chern classes and cup products of certain cohomology classes.
\nWhen $X$ is a Calabi--Yau threefold, these inequalities simplify and provide us with effective bounds for the Chern classes.\\\n\nRecall that a vector bundle $E$ on a complex manifold $X$ is called nef if the Serre line bundle $\\mathcal{O}_{\\mathbb{P}(E)}(1)$ on the projectivized bundle $\\mathbb{P}(E)$ is nef. \n\n\\begin{Thm}[J.P. Demailly, T. Peternell, M. Schneider \\cite{DPS}] \\label{DPS}\nLet $E$ be a nef vector bundle over a complex manifold $X$ equipped with a K\\\"{a}hler class $\\omega_{X} \\in \\mathcal{K}_{X}$. \nThen for any Schur polynomial $P_{\\lambda}$ of degree $2r$ and any complex submanifold $Y$ of dimension $d$, we have $\\int_{Y}P_{\\lambda}(c(E))\\wedge \\omega_{X}^{d-r} \\ge 0$.\n\\end{Thm}\nHere we let $\\deg c_{i}(E)=2i$ for $0\\le i \\le \\mathop{\\mathrm{rank}}\\nolimits E$ and the Schur polynomial $P_{\\lambda}(c(E))$ of degree $2r$ is defined by $P_{\\lambda}(c(E))=\\det(c_{\\lambda_{i}-i+j}(E))$ \nfor each partition $\\lambda=(\\lambda_{1},\\lambda_{2},\\dots) \\vdash r$ of a non-negative integer $r\\le \\dim Y$ with $\\lambda_{k}\\ge \\lambda_{k+1}$ for all $k\\in \\mathbb{N}$. \n\n\\begin{Ex} (\\cite{Laz}, page 118) \nLet $X$ be a complex threefold and $E$ a vector bundle of $\\mathop{\\mathrm{rank}}\\nolimits E=3$, then\n$$\nP_{(1)}(c(E))=c_{1}(E), \\ \\ \nP_{(2)}(c(E))=c_{2}(E), \\ \\ \nP_{(1,1)}(c(E))=c_{1}(E)^{2}-c_{2}(E)\n$$\n$$\nP_{(3)}(c(E))=c_{3}(E), \\ \\ \\ \nP_{(2,1)}(c(E))=c_{1}(E)\\cup c_{2}(E)-c_{3}(E),\n$$\n$$\nP_{(1,1,1)}(c(E))=c_{1}(E)^{3}-2c_{1}(E)\\cup c_{2}(E)+c_{3}(E). \n$$\n\n\\end{Ex}\n\n\\begin{Prop}\nLet $X$ be a smooth projective threefold, $x,y \\in \\mathcal{K}_{X}\\cap H^{2}(X,\\mathbb{Z})$, and assume that $x$ is very ample. Then the following inequalities hold.
\n\\begin{enumerate}\n\\item $8\\mu(x,x,x)+2c_{2}(X)\\cup x \\ge 4\\mu(c_{1}(X),x,x)+c_{3}(X)$\n\\item $64\\mu(x,x,x)+4\\mu(c_{1}(X),c_{1}(X),x)+4c_{2}(X)\\cup x +c_{3}(X)\n \\\\ \\ \\ \\ge 32\\mu(c_{1}(X),x,x)+c_{1}(X)\\cup c_{2}(X)$\n\\item $80\\mu(x,x,x)+10\\mu(c_{1}(X),c_{1}(X),x)+2c_{1}(X)\\cup c_{2}(X)\n \\\\ \\ \\ \\ge 40\\mu(c_{1}(X),x,x)+\\mu(c_{1}(X),c_{1}(X),c_{1}(X))+10c_{2}(X)\\cup x +c_{3}(X)$\n\\item $12\\mu(x,x,y)+c_{2}(X)\\cup y \\ge 4\\mu(c_{1}(X),x,y)$\n\\item $24 \\mu(x,x,y) +\\mu(c_{1}(X),c_{1}(X),y) \\ge 8\\mu(c_{1}(X),x,y)+c_{2}(X)\\cup y$\n\\item $6\\mu(x,y,y)\\ge\\mu(c_{1}(X),y,y)$\n\\end{enumerate}\n\\end{Prop}\n\n\\begin{proof}\nThe very ample divisor $x=H$ gives us an embedding $\\Phi_{|H|}:X\\rightarrow \\mathbb{P}(V)$, where $V=H^{0}(X,\\mathcal{O}_{X}(H))$. \nUsing the Euler sequence and the Koszul complex, we obtain the following exact sequence of sheaves \n$$\n0 \\longrightarrow \\Omega_{\\mathbb{P}(V)}^{k+1} \\longrightarrow \\bigwedge^{k+1}V \\otimes \\mathcal{O}_{\\mathbb{P}(V)}((-k-1)H) \\longrightarrow \\Omega_{\\mathbb{P}(V)}^{k}\\longrightarrow 0 \n$$\nfor each $1\\le k \\le \\dim_{\\mathbb{C}}V-1$. \nWe see that $\\Omega_{\\mathbb{P}(V)}(2H)$ is a quotient of $\\mathcal{O}_{\\mathbb{P}(V)}^{\\oplus\\binom{\\dim_{\\mathbb{C}}V}{2}}$. \nThe vector bundle $\\Omega_{X}(2H)$ is then generated by global sections because it is a quotient of the globally generated vector bundle $\\Omega_{\\mathbb{P}(V)}|_{X}(2H)$. \nWe hence conclude that $\\Omega_{X}(2H)$ is a nef vector bundle. \nApplying Theorem \\ref{DPS} (or rather the inequalities derived using the above example) to our nef vector bundle $\\Omega_{X}(2H)$, straightforward computation shows the desired inequalities. \n\\end{proof}\n\n\nThe above result (with appropriate modification) certainly carries over to complex manifolds of dimension other than $3$. 
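\nTo make the derivation concrete in one case: for the rank $3$ bundle $E=\\Omega_{X}(2H)$ the standard twisting formulas give $c_{1}(E)=6x-c_{1}(X)$ and \n$$\nc_{3}(E)=-c_{3}(X)+2x\\cup c_{2}(X)-4x^{2}\\cup c_{1}(X)+8x^{3}, \n$$\nso the positivity of $P_{(3)}(c(E))=c_{3}(E)$ from Theorem \\ref{DPS} is precisely inequality (1), and likewise the positivity of $\\int_{X}c_{1}(E)\\wedge y^{2}$ is inequality (6). 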
\n\n\n\\begin{Cor} \\label{FL} \nLet $X$ be a Calabi--Yau threefold, $x,y \\in \\mathcal{K}_{X}\\cap H^{2}(X,\\mathbb{Z})$, and assume that $x$ is very ample. Then the following inequalities hold. \n\\begin{enumerate}\n\\item $8\\mu(x,x,x)+2c_{2}(X)\\cup x \\ge c_{3}(X)$ \n\\item $64\\mu(x,x,x)+4c_{2}(X)\\cup x +c_{3}(X)\\ge 0$\n\\item $80\\mu(x,x,x)\\ge 10c_{2}(X)\\cup x+c_{3}(X)$\n\\item $24 \\mu(x,x,y)\\ge c_{2}(X)\\cup y$\n\\end{enumerate}\n\\end{Cor}\n\nIn recent literature there has been some interest in finding practical bounds for topological invariants of Calabi--Yau threefolds. \nAs is mentioned in the introduction, standard Hilbert scheme theory assures \nthat the possible Chern classes of a polarized Calabi--Yau threefold $(X,H)$ are in principle bounded once we fix a triple intersection number $H^{3}=n \\in \\mathbb{N}$; \nwe can now make these bounds effective (with a bit of extra data for the second Chern class $c_{2}(X)$), as follows. \nRecall first that it is shown by K. Oguiso and T. Peternell \\cite{OP} that we can always pass from an ample divisor $H$ on a Calabi--Yau threefold to a very ample one $10H$. \nThen the last inequality in Corollary \\ref{FL} says that once we know the trilinear form $\\mu$ on the ample cone $\\mathcal{K}_{X}$ \nthere are only finitely many possibilities for the linear function $c_{2}(X) : H^{2}(X,\\mathbb{Z}) \\rightarrow \\mathbb{Z}$. \nWe shall now give a simple explicit formula for a range of the Euler characteristic $c_{3}(X)$ of a Calabi--Yau threefold $X$. \n\n\\begin{Thm} \\label{MAIN} \nLet $(X,H)$ be a very amply polarized Calabi--Yau threefold, i.e. $x=H$ is a very ample divisor on $X$. \nThen the following inequality holds:\n$$\n-36\\mu(x,x,x)-80\\le \\frac{c_{3}(X)}{2}=h^{1,1}(X)-h^{2,1}(X)\\le 6\\mu(x,x,x)+40.\n$$\nMoreover, the above inequality can be sharpened by replacing the left hand side by $-80$, $-180$ and the right hand side by $28$, $54$ when $\\mu(x,x,x)=1,3$ respectively.
\n\\end{Thm}\n\\begin{proof}\nThis is readily proved by combining Proposition \\ref{NC} (1), (2) and Corollary \\ref{FL} (1), (2), (4). For instance, Proposition \\ref{NC} (2) gives $c_{2}(X)\\cup x\\le 2\\mu(x,x,x)+40$, so Corollary \\ref{FL} (1) yields $c_{3}(X)\\le 8\\mu(x,x,x)+2c_{2}(X)\\cup x\\le 12\\mu(x,x,x)+80$, while Corollary \\ref{FL} (2) yields $c_{3}(X)\\ge -64\\mu(x,x,x)-4c_{2}(X)\\cup x\\ge -72\\mu(x,x,x)-160$; dividing by $2$ gives the stated bounds. \n\\end{proof}\n\nThe smallest and largest known Euler characteristics $c_{3}(X)$ of a Calabi--Yau threefold $X$ are $-960$ and $960$ respectively. \nOur formula may replace the question of finding a range of $c_{3}(X)$ by that of estimating the value $\\mu(x,x,x)$ for an ample class $x \\in \\mathcal{K}_{X} \\cap H^{2}(X,\\mathbb{Z})$. \n\n\n\n\\section{Quadratic forms associated with special cubic forms}\n\nIn this section we further study the cubic form $\\mu(x,x,x):H^{2}(X,\\mathbb{Z})\\rightarrow \\mathbb{Z}$ for a K\\\"{a}hler threefold $X$, assuming that $\\mu(x,x,x)$ has a linear factor over $\\mathbb{R}$. \nWe will see that the linear factor and the residual quadratic form are not independent. \nPossible signatures of the residual quadratic form are also determined under a certain condition. \nIf the second Betti number $b_{2}(X)>3$, the residual quadratic form may endow the second cohomology $H^{2}(X,\\mathbb{Z})$ mod torsion with a lattice structure. \\\\\n\nWe start by fixing our notation. \nLet $\\xi:V\\rightarrow \\mathbb{R}$ be a real quadratic form. \nOnce we fix a basis of the $\\mathbb{R}$-vector space $V$, $\\xi$ may be represented as $\\xi(x)=x^{t}A_{\\xi}x$ for some symmetric matrix $A_{\\xi}$. \nThe signature of a quadratic form $\\xi$ is a triple $(s_{+},s_{0},s_{-})$ \nwhere $s_{0}$ is the number of zero eigenvalues of $A_{\\xi}$ and $s_{+}$ $(s_{-})$ is the number of positive (negative) eigenvalues of $A_{\\xi}$. \n$A_{\\xi}$ also defines a linear map $A_{\\xi}:V\\rightarrow V^{\\vee}$ (or a symmetric bilinear form $A_{\\xi}:V^{\\otimes2}\\rightarrow \\mathbb{R}$). \nThe quadratic form $\\xi$ is called (non-)degenerate if $\\dim_{\\mathbb{R}} Ker(A_{\\xi})>0 \\ (=0)$. \nWe say that $\\xi$ is definite if it is non-degenerate and either $s_{+}$ or $s_{-}$ is zero, and indefinite otherwise.
\\\\\n\n\nLet $X$ be a K\\\"{a}hler threefold and assume that its cubic form $\\mu(x,x,x)$ factors as $\\mu(x,x,x)=\\nu(x)\\xi(x)$, where $\\nu$ is a linear map and $\\xi$ a quadratic map $H^{2}(X,\\mathbb{R}) \\rightarrow \\mathbb{R}$. \nWe can always choose the linear form $\\nu$ so that it is positive on the K\\\"{a}hler cone $\\mathcal{K}_{X}$. \nIt is proved (see the proof of Lemma 4.3 in \\cite{Wil}) that there exists a non-zero point on the quadric $Q_{\\xi}=\\{x\\in H^{2}(X,\\mathbb{R}) \\ | \\ \\xi(x)=0\\}$ and hence $\\xi$ is indefinite \nprovided that the irregularity $q(X)=\\dim_{\\mathbb{C}}H^{1}(X,\\mathcal{O}_{X})=0$ and the second Betti number $b_{2}(X)>3$. \n\n\\begin{Prop}\nLet $X$ be a K\\\"{a}hler threefold. \nAssume that the trilinear form $\\mu(x,x,x)$ decomposes as $\\nu(x)\\xi(x)$ over $\\mathbb{R}$ (if the quadratic form is not a product of linear forms, then we may work \nover $\\mathbb{Q}$)\nand the linear form $\\nu$ is positive on the K\\\"{a}hler cone $\\mathcal{K}_{X}$. \nThen the following hold. \n\\begin{enumerate}\n\\item $\\dim_{\\mathbb{R}} Ker(A_{\\xi})\\le 1$. \nIf $\\xi$ is a degenerate quadratic form, its restriction $\\xi|_{H_{\\nu}}$ to the hyperplane $H_{\\nu}=\\{x \\in H^{2}(X,\\mathbb{R})\\ | \\ \\nu(x)=0\\}$ is non-degenerate. \n\\item If the irregularity $q(X)=0$ (for example when $X$ is a Calabi--Yau threefold), then the signature of $\\xi$ is either $(2,0,b_{2}(X)-2)$, $(1,1,b_{2}(X)-2)$ or $(1,0,b_{2}(X)-1)$. \n\\item The above three signatures are realized by some Calabi--Yau threefolds with $b_{2}(X)=2$. \n\\end{enumerate}\n\\end{Prop}\n\n\\begin{proof}\n(1) Let $\\omega_{X} \\in \\mathcal{K}_{X}$ be a K\\\"{a}hler class. \nThe Hard Lefschetz theorem states that the map $H^{2}(X,\\mathbb{R})\\rightarrow H^{4}(X,\\mathbb{R})$ defined by $\\alpha \\mapsto \\omega_{X} \\cup \\alpha$ is an isomorphism.\nHence the cubic form $\\mu(x,x,x)$ depends on exactly $b_{2}(X)$ variables.
\nThen the quadratic form $\\xi$ must depend on at least $b_{2}(X)-1$ variables and thus we have $\\dim_{\\mathbb{R}}(Ker(A_{\\xi}))\\le 1$. \nAssume next that the quadratic form $\\xi$ is degenerate. \nThen the linear form $\\nu$ is not the zero form on $Ker(A_{\\xi})$ (otherwise $\\mu(x,x,x)$ depends on fewer than $b_{2}(X)$ variables). \nThe restriction $\\xi|_{H_{\\nu}}$ is non-degenerate because $H^{2}(X,\\mathbb{R})=H_{\\nu} \\oplus Ker(A_{\\xi})$ as an $\\mathbb{R}$-vector space. \\\\\n(2) Let $L_{1} \\in \\mathcal{K}_{X} \\cap H^{2}(X,\\mathbb{R})$ be an ample class such that $\\mu(L_{1},L_{1},L_{1})=1$. \nSince the K\\\"{a}hler cone $\\mathcal{K}_{X}\\subset H^{2}(X,\\mathbb{R})$ is an open cone, $X$ is projective by the Kodaira embedding theorem. \nThen the Hodge index theorem states \nthat the symmetric bilinear form $b_{\\mu,L_{1}}=\\mu(L_{1},*,**):H^{2}(X,\\mathbb{R})^{\\otimes 2}\\cong (NS(X)\\otimes \\mathbb{R})^{\\otimes 2} \\rightarrow \\mathbb{R}$ has signature $(1,0,b_{2}(X)-1)$, \nwhere $NS(X)$ is the Neron--Severi group of $X$. \nNote that $\\dim_{\\mathbb{R}} (L_{1} ^{\\perp} \\cap H_{\\nu}) \\ge b_{2}(X) -2$, \nwhere $L_1^{\\perp}$ denotes the orthogonal space to $L_{1}$ with respect to the non-degenerate bilinear form $b_{\\mu , L_1}$. \nWe then have two cases; the first is when $\\dim_{\\mathbb{R}} (L_{1} ^{\\perp} \\cap H_{\\nu})=b_2(X) -1$ (i.e. $L_{1}^{\\perp} = H_{\\nu}$). \nIn this case we can write down a basis $L_{2}, \\ldots , L_{b_{2}(X)}$ for the subspace $H_{\\nu}$ which diagonalizes the quadratic form $b_{\\mu , L_{1}}|_{H_{\\nu}}$, \nand hence (noting that $L_{1} \\not\\in H_{\\nu}$) the Gramian matrix of $b_{\\mu , L_{1}}$ \nwith respect to the basis $L_{1}, \\ldots , L_{b_{2}(X)}$ of $H^2(X, \\mathbb{R})$ is $A_{b_{\\mu,L_{1}}}=(b_{\\mu,L_{1}}(L_{i},L_{j}))=diag(1,-1,\\dots,-1)$.
\nIf $\\dim_{\\mathbb{R}} (L_{1} ^{\\perp} \\cap H_{\\nu})=b_{2}(X)-2$, then we can write down a basis $L_{2} , \\ldots, L_{b_{2}(X)-1}$ \nfor the subspace $L_{1} ^{\\perp} \\cap H_{\\nu}$ which diagonalizes the quadratic form $b_{\\mu , L_{1}}|_{L_{1} ^{\\perp} \\cap H_{\\nu}}$, \nand then extend it to a basis $L_2, \\ldots , L_{b_2(X)}$ of $H_\\nu$. \nThus in both cases $L_1 , \\ldots ,L_{b_2(X)}$ is a basis for $H^2 (X, \\mathbb{R})$; \nthe corresponding matrix $A_{b_\\mu ,L_1}$ will not be diagonal in this second case, \nbut the first $(b_{2}(X) -1)$-principal minor is, with one $+1$ and $b_{2}(X)-2$ entries $-1$ on the diagonal.\n\nLet us define a new basis $\\{M_{i}\\}_{i=1}^{b_{2}(X)}$ of $H^{2}(X,\\mathbb{R})$ by setting $M_{i}=L_{i}$ for $1\\le i \\le b_{2}(X)-1$ and \n$$\nM_{b_{2}(X)}=L_{b_{2}(X)}+\\sum_{i=2}^{b_{2}(X)}b_{\\mu,L_{1}}(L_{i},L_{b_{2}(X)})L_{i} \\in H_{\\nu}. \n$$ \nLet $x=\\sum_{i=1}^{b_2{(X)}} a_{i}M_{i}$. \nThen the hyperplane $H_{\\nu}$ is defined by the equation $a_{1}=0$ and the K\\\"{a}hler cone $\\mathcal{K}_{X}$ lies on the side where $a_{1}>0$ by the assumption on $\\nu$. \nTherefore we have \n$$\n\\mu(x,x,x)=a_{1}(a_{1}^{2}-\\sum_{i=2}^{b_{2}(X)-1}a_{i}^{2}+Ca_{1}a_{b_{2}(X)}+Da_{b_{2}(X)}^{2})\n$$\nfor some (explicit) constants $C,D \\in \\mathbb{R}$.\nSince the quadratic form is positive on the K\\\"{a}hler cone $\\mathcal{K}_{X}$, there must be at least one positive eigenvalue \nand hence possible signatures are $(2,0,b_{2}(X)-2)$, $(1,1,b_{2}(X)-2)$ and $(1,0,b_{2}(X)-1)$. \\\\\n(3) Consider a Calabi--Yau threefold $X_{7}^{II}(1,1,1,2,2)^{2}_{-186}$ from page 575 \\cite{HLY} \ngiven as a resolution of a degree $7$ hypersurface in the weighted projective space $\\mathbb{P}_{(1,1,1,2,2)}$. \nIts cubic form is given by $a_{1}(14a_{1}^{2} +21a_{1}a_{2}+9a_{2}^{2})$, whose quadratic form has signature $(2,0,0)$.
\nThe cubic form of a hypersurface Calabi--Yau threefold $(\\mathbb{P}^{3}\\times \\mathbb{P}^{1})\\cap(4,2)$ is $2a_{1}^{3}+12a_{1}^{2}a_{2}$, \nwhose quadratic form has signature either $(1,0,1)$ or $(1,1,0)$, depending on its decomposition. \n\\end{proof}\n\n\nThe restriction $\\xi|_{H_{\\nu}}$ may be degenerate even if $\\xi$ is non-degenerate. \nThe cubic form of the above Calabi--Yau threefold $(\\mathbb{P}^{3}\\times \\mathbb{P}^{1})\\cap(4,2)$ gives an example of this phenomenon. \nLet $\\nu(a)=2a_{1}$ and $\\xi(a)=a_{1}(a_{1}+6a_{2})$. \nThen $\\xi$ is hyperbolic and non-degenerate, but its restriction to $H_{\\nu}$ is trivial. \\\\\n\n\nLet $X$ be a K\\\"{a}hler threefold. \nIf $b_{2}(X)>3$, the cubic form $\\mu$ cannot consist of three linear factors over $\\mathbb{R}$ and hence if $\\mu$ contains a linear factor it must be rational (see also the comment after Lemma 4.2 \\cite{Wil}). \nHence an appropriate scalar multiple of $\\xi$ endows the second cohomology $H^{2}(X,\\mathbb{Z})$ mod torsion with a lattice structure. \n\n\\begin{Ex}\nLet $S$ be a K3 surface and $\\iota_{S}$ an involution of $S$ without fixed points (hence the quotient $S\/\\langle \\iota_{S}\\rangle$ is an Enriques surface). \nLet $E$ be an elliptic curve with the canonical involution $\\iota_{E}$. \nThen we can define a new involution $\\iota$ of $S\\times E$ given by $\\iota=(\\iota_{S},\\iota_{E})$. \nThe quotient $X=(S\\times E)\/\\langle \\iota \\rangle$ is a Calabi--Yau threefold with $b_{2}(X)=11$. \nThe cubic form $\\mu(x,x,x)$ of $X$ has a linear factor (which, we assume, is positive on the K\\\"{a}hler cone $\\mathcal{K}_{X}$) and the residual quadratic form $\\xi$ has signature $(1,1,9)$.
\nMore precisely, the lattice structure on $H^{2}(X,\\mathbb{Z})$ mod torsion associated with appropriate $\\xi$ is given by $U\\oplus E_{8}(-1)\\oplus \\langle 0\\rangle $, \nwhere $U$ is the hyperbolic lattice, $E_{8}(-1)$ is the root lattice of type $E_{8}$ multiplied by $-1$ and $\\langle 0\\rangle$ is a trivial lattice of rank $1$. \n\\end{Ex}\n\n\n\n\n\\begin{Prop}\nLet $G$ be a finite group acting on a K\\\"{a}hler threefold $X$ and $\\phi:G\\rightarrow GL(H^{2}(X,\\mathbb{Z}))$ the induced representation. \nAssume that the trilinear form decomposes as $\\mu(x,x,x)=\\nu(x)\\xi(x)$ as above. \nThen the image of $\\phi:G\\rightarrow GL(H^{2}(X,\\mathbb{Z}))$ lies in the orthogonal group $O(\\xi)$ associated with the quadratic form $\\xi$. \n\\end{Prop}\n\\begin{proof}\nSince the cubic form $\\mu:H^{2}(X,\\mathbb{R})\\rightarrow \\mathbb{R}$ is invariant under $G$, it is enough to show that the linear form $\\nu$ is invariant under $G$. \nThere exists $x\\in \\mathcal{K}_{X}$ such that $\\mathbb{R} x$ is a trivial representation of $G$ (obtained by averaging a K\\\"{a}hler class over $G$), and the representation $\\phi$ is then a direct sum of two subrepresentations $\\mathbb{R} x\\oplus H_{\\nu}$. \nSince $\\nu$ is a linear form, this shows the invariance of $\\nu$ under $G$. \n\\end{proof}\n\nThis proposition may be useful for studying group actions on the cohomology group $H^{2}(X,\\mathbb{Z})$. \n\n\n\\subsection*{Acknowledgement}\nThe present work was initiated during the first author's stay at the Workshop on Arithmetic and Geometry of K3 surfaces and Calabi--Yau threefolds, August 16-25, 2011. \nHe is grateful to the organizers of the workshop and the Fields Institute for their warm hospitality and partial travel support.
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHeavy-tailed distributions are ubiquitous in modern datasets, especially those arising in economy, finance, imaging, biology, see \\cite{woolson2011statistical,sahu2019noising,ibragimov2015heavy,biswas2007statistical,swami2002some,kruczek2020detect} for instance. In contrast to sub-Gaussian distributions, outliers and extreme values appear much more frequently in data with heavy-tailed distributions (referred to as heavy-tailed data), which challenges statistical analysis. In fact, many standard statistical procedures developed for sub-Gaussian data suffer from performance degradation in the heavy-tailed regime. Consequently, the past decade has witnessed considerable interests on statistical estimation methods that are robust to heavy-tailedness \\cite{catoni2012challenging,fan2017estimation,nemirovskij1983problem,minsker2015geometric,brownlees2015empirical,hsu2016loss,fan2021shrinkage,zhu2021taming,lugosi2019mean}. \n\n\n\n\nIn the era of digital signal processing, quantization is an inevitable process that maps signals to bitstreams so that they can be stored, processed and transmitted. In particular, the resolution of quantization should be selected to achieve a trade-off between accuracy and various data processing costs, and in some applications relatively low resolution would be preferable. For instance, in a distributed\/federated learning setting or a MIMO system, the frequent information transmission among multiple parties often result in prohibitive communication cost \\cite{konevcny2016federated,mo2018limited}, and quantizing signals or data to fairly low resolution (while preserving satisfying utility) is an effective approach to lower the cost. 
Against this backdrop, recent years have seen a growing literature on high-dimensional signal recovery from quantized data (see, e.g., \\cite{boufounos20081,davenport20141,chen2022high,thrampoulidis2020generalized,jung2021quantized,dirksen2021non,dirksen2021covariance} for 1-bit quantization, \\cite{thrampoulidis2020generalized,jung2021quantized,xu2020quantized,jacques2017time} for multi-bit uniform quantization), aiming to design appropriate quantization schemes for some fundamental estimation or recovery problems.\n\n\n\n\n\n\n\nIndependently, the heavy-tailed difficulty can be overcome by some robustifying techniques, and also, data quantization under uniform dither is shown to cost very little in some recovery problems. Considering the ubiquity of heavy-tailed behavior and quantization, a natural question is how to design a quantization scheme for heavy-tailed data that only incurs minor information loss. For instance, when applied to statistical estimation problems with heavy-tailed data, an appropriate quantization scheme should at least enable a faithful estimator from the quantized data, and ideally an estimator achieving the (near) optimal error rate. Nevertheless, to the best of our knowledge, the only existing results that simultaneously take quantization and heavy-tailedness into account are in our earlier work \\cite{chen2022high}, and these results are essentially sub-optimal. It remains an open question {\\em whether (near) minimax rates can be achieved if only quantized heavy-tailed data are available.} \n\n\nIn this paper, we address this question in the affirmative. We propose to truncate the data prior to a dithered uniform quantizer; under our quantization scheme, (near) minimax estimators are proposed in several estimation problems. We also study covariate quantization and uniform recovery in quantized compressed sensing with heavy-tailed data.
\n\n\n\n \\subsection{Related works and our contributions}\n We review the most relevant works and then explain our main results.\n \\subsubsection{Heavy-tailed data and quantization in estimation problems}\nWe first review prior works on estimation with heavy-tailed data or under quantization. \n\n\n\n\n\n\n Compared to sub-Gaussian data, heavy-tailed data may contain many outliers and extreme values that are overly influential for traditional estimation methods. Hence, developing estimation methods that are robust to heavy-tailedness has become a recent focus in the statistics literature, where heavy-tailed distributions are only assumed to have bounded moments of some order. In particular, significant efforts have been devoted to the fundamental problem of mean estimation for heavy-tailed distributions. For instance, effective techniques for estimating the mean of a heavy-tailed distribution include Catoni's mean estimator \\cite{catoni2012challenging,fan2017estimation}, median of means \\cite{nemirovskij1983problem,minsker2015geometric}, and trimmed mean \\cite{lugosi2021robust,devroye2016sub}. While the strategies to achieve robustness are different, these methods indeed share the same core spirit of making the outliers less influential. To this end, the trimmed method (also referred to as truncation or shrinkage) may be the most intuitive --- it truncates overlarge data to some threshold, so that the extreme values can be well controlled. For more in-depth discussions we refer to the recent survey \\cite{lugosi2019mean}. Furthermore, these robust methods for estimating the mean have been applied to empirical risk minimization \\cite{brownlees2015empirical,hsu2016loss} and various high-dimensional estimation problems \\cite{fan2021shrinkage,zhu2021taming}, achieving near optimal guarantees.
For instance, by invoking M-estimators with truncated data, (near) minimax rates can be achieved in linear regression, matrix completion, and covariance estimation \\cite{fan2021shrinkage}. \n\n\n\n\nDue to the prominent role of quantization in signal processing and machine learning, quantized compressed sensing (QCS), which studies the interplay between compressed sensing (CS) and data quantization, has become an active research branch in the field. In this work, we focus on memoryless quantization schemes\\footnote{This means that different measurements are quantized independently. For other quantization schemes we refer to the recent survey \\cite{dirksen2019quantized}.} that embrace simple hardware design. An important QCS model is 1-bit compressed sensing (1-bit CS) where only the sign of the measurement is retained \\cite{boufounos20081,jacques2013robust,plan2012robust,plan2013one}, for instance, the recovery of sparse $\\bm{\\theta^\\star}\\in \\mathbb{R}^d$ from $\\sign(\\bm{X\\theta^\\star})$ under some sensing matrix $\\bm{X}\\in \\mathbb{R}^{n\\times d}$. However, 1-bit CS with this direct quantization suffers from some frustrating limitations, e.g., the loss of norm information, and the identifiability issue under some regular sensing matrices (e.g., under a Bernoulli sensing matrix, see \\cite{dirksen2021non})\\footnote{All guarantees for this 1-bit CS model are restricted to standard Gaussian measurements, with the single exception of \\cite{ai2014one}, which is still quite restricted.}.
Fortunately, these limitations can be overcome by random dithering prior to the quantization, meaning that the quantized measurements are $\\sign(\\bm{X\\theta^\\star}+\\bm{\\tau})$ for some random dither $\\bm{\\tau}\\in \\mathbb{R}^n$ to be chosen: with Gaussian dither, full reconstruction with norm becomes possible \\cite{knudson2016one}; with uniform dither, recovery with norm can be achieved under a general sub-Gaussian sensing matrix \\cite{dirksen2021non,jung2021quantized,thrampoulidis2020generalized,chen2022high} even with near minimax error rate \\cite{thrampoulidis2020generalized,chen2022high}. \n\n\n\nBesides the 1-bit quantizer that retains the sign, the uniform quantizer maps $a\\in \\mathbb{R}$ to $\\mathcal{Q}_\\Delta(a)= \\Delta\\big(\\lfloor \\frac{a}{\\Delta}\\rfloor +\\frac{1}{2} \\big)$ for some pre-specified $\\Delta>0$ (we refer to $\\Delta$ as the quantization level, and note that smaller $\\Delta$ represents higher resolution). While recovering $\\bm{\\theta^\\star}$ from $\\mathcal{Q}_\\Delta(\\bm{X\\theta^\\star})$ encounters an identifiability issue (e.g., under a Bernoulli sensing matrix and $\\Delta=1$, $1.1\\bm{e}_1$ and $1.2\\bm{e}_1$ can never be distinguished because of $\\mathcal{Q}_1(1.1\\bm{Xe}_1)=\\mathcal{Q}_1(1.2\\bm{Xe}_1)$), it is again beneficial to use random dithering and observe $\\mathcal{Q}_\\Delta(\\bm{X\\theta^\\star}+\\bm{\\tau})$. More specifically, by using uniform dither, the Lasso estimator \\cite{thrampoulidis2020generalized,sun2022quantized} and the projected back projection (PBP) method \\cite{xu2020quantized} achieve the minimax rate. These error bounds demonstrate that the dithered uniform quantization only worsens the multiplicative factor in the error rate. While the above results of QCS with random dither are quite recent, we point out that the technique of dithering in quantization has a long history, and we refer to \\cite{gray1993dithered} for a brief introduction.
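To make the two phenomena above concrete (the identifiability failure of the undithered uniform quantizer under a Bernoulli sensing matrix, and the uniformly distributed quantization error produced by a uniform dither), here is a minimal numerical sketch; the matrix size and random seed are arbitrary illustrative choices, not tied to any result in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Delta = 1.0  # quantization level; smaller Delta means higher resolution

def quantize(a, delta=Delta):
    """Uniform quantizer Q_Delta(a) = Delta * (floor(a / Delta) + 1/2)."""
    return delta * (np.floor(a / delta) + 0.5)

# Identifiability issue without dithering: under a Bernoulli (+/-1) sensing
# matrix X and Delta = 1, the signals 1.1*e_1 and 1.2*e_1 yield identical
# quantized measurements, hence can never be distinguished.
X = rng.choice([-1.0, 1.0], size=(50, 4))
e1 = np.zeros(4)
e1[0] = 1.0
assert np.array_equal(quantize(X @ (1.1 * e1)), quantize(X @ (1.2 * e1)))

# With a uniform dither tau ~ U([-Delta/2, Delta/2]), the quantization error
# w = Q(y + tau) - (y + tau) always lies in [-Delta/2, Delta/2] and is
# uniformly distributed, independently of the input y.
y = X @ (1.1 * e1)
tau = rng.uniform(-Delta / 2, Delta / 2, size=y.shape)
w = quantize(y + tau) - (y + tau)
assert np.all(np.abs(w) <= Delta / 2)
```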
\n\n\n\n\n\nSeveral other models were also investigated under dithered 1-bit quantization, including signal parameter estimation, covariance matrix estimation, and matrix completion (MC). Specifically, \\cite{balan2006signal} studied a general estimation problem under dithered 1-bit quantization in a traditional setting where the sample size tends to infinity. Their results indicate that only logarithmic rate loss is incurred by the quantization. By collecting 2 bits per entry from each sample $\\bm{x}_k$, \\cite{dirksen2021covariance} proposed a covariance matrix estimator for sub-Gaussian distributions, which was then extended to the high-dimensional case (with sparsity) in \\cite{chen2022high}. Moreover, 1-bit matrix completion (1-bit MC) was first proposed and studied in \\cite{davenport20141} based on maximum likelihood estimation with a nuclear norm constraint. A series of follow-up works developed this likelihood approach with various regularizers or constraints, or under multi-bit quantization on a finite alphabet \\cite{cai2013max,lafond2014probabilistic,klopp2015adaptive,cao2015categorical,bhaskar2016probabilistic}. With uniform dither, the method in \\cite{chen2022high} essentially departs from the standard likelihood approach, representing the first 1-bit MC method robust to pre-quantization noise with unknown distribution.\n\n\n\nIt should be noted that the results we just reviewed are for heavy-tailed data with full precision, or quantization of sub-Gaussian data. As for quantization of heavy-tailed data, e.g., data corrupted by heavy-tailed noise, the only results we are aware of were presented in \\cite{chen2022high}, concerning covariance estimation, CS, and MC. However, although truncation and a dithered 1-bit quantizer were applied, all rates derived in \\cite{chen2022high} are essentially sub-optimal. To the best of our knowledge, there exists no previous example for heavy-tailed data with quantization that nearly achieves the optimal rate.
\n\n\n\\subsubsection{Quantization of heavy-tailed data and (near) minimax rates}\nWe now introduce our quantization scheme and the (near) minimax guarantees for covariance (matrix) estimation, CS, and MC. \n\n\nOur procedure for quantizing heavy-tailed data contains three steps: {\\it truncation} that shrinks data to some threshold, {\\it dithering} with a proper random dither, and {\\it uniform quantization}. For sub-Gaussian or sub-exponential data the truncation step is unnecessary, and one can set the truncation threshold to $\\infty$ in this case. In fact, we replace the dithered 1-bit quantizer ($\\sign(.)$) in \\cite{chen2022high} with the less extreme dithered uniform quantizer ($\\mathcal{Q}_\\Delta(.)$). The gain turns out to be significant --- we derive (near) optimal rates that improve the sub-optimal ones in \\cite{chen2022high}. Specifically, \n we consider the canonical examples of covariance estimation (in low dimension or high dimension with sparsity), CS (particularly sparse recovery), and MC. From the quantized data, (near) minimax estimators are constructed in closed form or as solutions of convex programs; see Theorems \\ref{thm1}-\\ref{thm9}. \n \n\n The goal of truncation is to achieve a bias-and-variance tradeoff --- introducing some bias can render significantly reduced variance, thereby allowing a more robust estimator \\cite{fan2021shrinkage}. Some of our results strictly generalize \\cite{fan2021shrinkage} to the dithered uniform quantization setting without extra assumptions, in the sense that the corresponding ones in \\cite{fan2021shrinkage} can be exactly recovered by setting the quantization level to zero. For the effect of dithered uniform quantization, a unified conclusion is that the dithered quantization only results in a slightly worse multiplicative factor in the error rate.
For instance, in QCS with (regular) sub-Gaussian covariate but noise with only bounded $(2+\\nu)$-th moment (for some $\\nu>0$), we derive the rate $O\\big(\\mathscr{L}\\sqrt{\\frac{s \\log d}{n}}\\big)$ with $\\mathscr{L}=M^{1\/(2l)} +\\Delta$ (Theorem \\ref{thm4}). Such a unified conclusion generalizes similar ones for QCS in \\cite{thrampoulidis2020generalized,sun2022quantized,xu2020quantized} in two directions, i.e., to heavy-tailed data, and to more estimation problems (i.e., covariance matrix estimation and MC). From a technical point of view, many of our analyses of the dithered quantizer are cleaner than prior works because we make full use of the nice properties of a dithered uniform quantizer\\footnote{Prior work that did not fully leverage the properties of the dithered uniform quantizer may incur extra technical complications, e.g., the symmetrization and contraction in \\cite[Lemma A.2]{thrampoulidis2020generalized}.}, see section \\ref{prequan}. \n\n\n\n\n \n\n\\subsubsection{Covariate quantization}\nThis work contains some novel QCS results that involve the quantization of the covariate.\n\n\nPrior works on QCS focused on quantization of the response, \nbut considering covariate quantization is rather natural because of the direct benefit of lower communication or memory cost (e.g., in distributed machine learning one often encounters some communication constraint on the number of bits). Indeed, for QCS results with quantized covariate we are only aware of \\cite[Theorems 7-8]{chen2022high} concerning 1-bit quantization of the covariate-response pair in CS. The key downside of these two results is that they require an uncommon sparsity assumption on the covariance matrix of the covariate (denoted $\\bm{\\Sigma^\\star}$) to provide positive definiteness for the covariance matrix estimator. See \\cite[Assumption 3]{chen2022high}. \n\n\n\nIn this work, we present QCS results with covariate quantization free of non-standard assumptions such as sparsity on $\\bm{\\Sigma^\\star}$.
Specifically, the quantization scheme is applied to $\\bm{x}_k$ with a triangular dither; then, our covariance matrix estimator is plugged into the framework of regularized M-estimation. While the resulting recovery program is non-convex, we prove (near) minimax error bounds for all local minimizers (Theorems \\ref{thm6}-\\ref{thm7}). To derive guarantees of this kind, our analysis bears resemblance to a line of works on non-convex M-estimators \\cite{loh2011high,loh2017statistical,loh2013regularized}, but also exhibits some essential differences (Remark \\ref{rem4}). As byproducts, we present 1-bit QCS results with quantized covariate that are comparable to \\cite{chen2022high} without assuming sparsity on $\\bm{\\Sigma^\\star}$ (Theorems \\ref{sg1bitqccs}-\\ref{ht1bitqccs}). \n\n\n\n\n\n\n\\subsubsection{Uniform recovery guarantee}\nWe also derive a uniform error bound for QCS with heavy-tailed data. \n\n\nIt is standard in CS to leverage measurements with randomness, and consequently, a prominent feature of a CS result (e.g., exact recovery, error bound for approximate recovery) is the uniformity. In brief, a uniform result guarantees the recovery of all structured signals with a single usage of the randomness, and CS associated with a uniform guarantee can be referred to as a uniform compressive coding scheme. In contrast, a non-uniform result is only valid for a fixed structured signal, and basically, a new realization of the sensing matrix is required for the sensing of a new signal. \n\n\n\nIt is well known in linear CS that the restricted isometry property (RIP) of the sensing matrix implies uniform recovery of all sparse signals (e.g., \\cite{Foucart2013}). However, when it comes to various nonlinear CS models, uniform recovery is still being eagerly pursued.
For example, in the specific regime of quantization (1-bit\/uniform quantization with\/without dithering), or a more general single index model $y_k = f\\big(\\bm{x}_k^\\top\\bm{\\theta^\\star}\\big)$ with possibly unknown nonlinearity $f(.)$, most representative results are non-uniform (e.g., \\cite{jacques2013robust,plan2017high,plan2016generalized,thrampoulidis2020generalized,chen2022high,jacques2021importance}, or part of \\cite{xu2020quantized,jung2021quantized}); see the discussions in \\cite{genzel2022unified}. For examples of uniform recovery, we refer to \\cite{plan2013one,dirksen2021non,xu2020quantized,jung2021quantized,genzel2022unified,chen2022uniform}; some of them remain (near) optimal, while others suffer from essential degradation compared to the non-uniform ones. \n\n\n\nWe improve the non-uniform Theorem \\ref{thm4} to a uniform recovery guarantee in Theorem \\ref{uniformtheorem}, seemingly the first uniform result for QCS with heavy-tailed data. It states that, with a single realization of the sub-Gaussian sensing matrix, heavy-tailed noise, and uniform dither, all $s$-sparse signals within an $\\ell_2$ ball can be estimated with the near minimax $\\ell_2$ norm error ${O}\\big(\\sqrt{\\frac{s\\log \\mathscr{W}}{n}}\\big)$ for some $\\mathscr{W}$ depending on $(n,d,s,\\Delta)$ and other parameters. Our proof builds on an in-depth concentration result and a covering argument. Some new machinery is also needed to handle the truncation step.\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Outline}\nThe remainder of this paper is structured as follows. We provide the notation and preliminaries in section \\ref{sec2}. We present the first set of main results, i.e., the (near) optimal guarantees for three quantized estimation problems with heavy-tailed data, in section \\ref{sec3}. Our second set of results for covariate quantization and uniform recovery in QCS is given in section \\ref{qc-qcs}.
To corroborate our theory, numerical results on synthetic data are reported in section \\ref{sec6}. We conclude the paper with some remarks in section \\ref{sec7}. All the proofs are postponed to the Appendices.\n\n\n\n\n\\section{Preliminaries}\\label{sec2}\nWe adopt the following conventions throughout the paper:\n\n\n\n 1) We use boldface letters (e.g., $\\bm{A}$, $\\bm{x}$) to denote matrices and vectors, and regular letters (e.g., $a$, $x$) for scalars. We write $[m] = \\{1,...,m\\}$ for a positive integer $m$. We denote the complex unit by $\\bm{i}$. The $i$-th entry of a vector $\\bm{x}$ (likewise, $\\bm{y}$, $\\bm{\\tau}$) is denoted by $x_i$ (likewise, $y_i$, $\\tau_i$).\n \n \n \n 2) Notation with \"$\\star$\" as superscript denotes the desired underlying parameter or signal, e.g., $\\bm{\\Sigma}^\\star$, $\\bm{\\theta}^\\star$. Moreover, notation marked by a tilde (e.g., $\\bm{\\widetilde{x}}$) and a dot (e.g., $\\bm{\\dot{x}}$) stands for the truncated data and quantized data, respectively.\n\n\n3) We reserve $d$, $n$ for the problem dimension and sample size, respectively. \n\n\n \n \n \n 4) For a vector $\\bm{x} \\in \\mathbb{R}^d$, we work with its transpose $\\bm{x}^\\top$, $\\ell_p$ norm $\\|\\bm{x}\\|_p = (\\sum_{i\\in [d]} |x_i|^p)^{1\/p}$ ($p\\geq 1$), max norm $\\|\\bm{x}\\|_{\\infty} = \\max_{i\\in [d]}|x_i|$.\nWe define the standard Euclidean sphere as $\\mathbb{S}^{d-1}= \\{\\bm{x}\\in\\mathbb{R}^d:\\|\\bm{x}\\|_2=1\\}$. \n\n\n\n5) For a matrix $\\bm{A}= [a_{ij}]\\in \\mathbb{R}^{m\\times n}$ with singular values $\\sigma_1\\geq \\sigma_2\\geq ...\\geq \\sigma_{\\min\\{m,n\\}}$, recall the operator norm $\\|\\bm{A}\\|_{op}= \\sup_{\\bm{v}\\in \\mathbb{S}^{n-1}}\\|\\bm{Av}\\|_2=\\sigma_1$, Frobenius norm $\\|\\bm{A}\\|_F = (\\sum_{i,j}a_{ij}^2)^{1\/2}$, nuclear norm $\\|\\bm{A}\\|_{nu} = \\sum_{k=1}^{\\min\\{m,n\\}}\\sigma_k$, and max norm $\\|\\bm{A}\\|_{\\infty}= \\max_{i,j}|a_{ij}|$. $\\lambda_{\\min}(\\bm{A})$ (resp.
$\\lambda_{\\max}(\\bm{A})$) stands for the minimum eigenvalue (resp. maximum eigenvalue) of a symmetric $\\bm{A}$. \n\n\n\n6) We denote universal constants by $C$, $c$, $C_i$ and $c_i$, whose values may vary from line to line. We write $T_1 \\lesssim T_2$ or $T_1 = O(T_2)$ if $T_1\\leq CT_2$. Conversely, if $T_1\\geq CT_2$ we write $T_1 \\gtrsim T_2$ or $T_1 = \\Omega(T_2)$. Also, we write $T_1 \\asymp T_2$ if both $T_1=O(T_2)$ and $T_2 = \\Omega(T_1)$ hold.\n\n\n\n7) We use $\\mathscr{U}(\\Omega)$ to denote the uniform distribution over $\\Omega\\subset \\mathbb{R}^N$, $\\mathcal{N}(\\bm{\\mu},\\bm{\\Sigma})$ to denote the Gaussian distribution with mean $\\bm{\\mu}$ and covariance $\\bm{\\Sigma}$, and $\\mathsf{t}(\\nu)$ to denote Student's t distribution with $\\nu$ degrees of freedom. \n\n\n8) $\\mathcal{Q}_\\Delta(.)$ is the uniform quantizer with quantization level $\\Delta>0$. It applies to a scalar $a$ via $\\mathcal{Q}_\\Delta(a)=\\Delta\\big(\\big\\lfloor\\frac{a}{\\Delta}\\big\\rfloor+\\frac{1}{2}\\big)$, and we set $\\mathcal{Q}_0(a) =a$. Given a threshold $\\mu$, the hard thresholding of a scalar $a$ is $\\mathcal{T}_\\mu(a)= a\\cdot \\mathbbm{1}(|a|\\geq \\mu)$. Both functions apply element-wise to vectors and matrices.\n\n\n\n\\subsection{High-dimensional statistics}\n \nLet $X$ be a real random variable. We provide some basic facts about sub-Gaussian and sub-exponential random variables, and also give the formulation of heavy-tailed distributions.\n\n\n\n1) The sub-Gaussian norm is defined as $\\|X\\|_{\\psi_2} = \\inf\\{t>0:\\mathbbm{E}\\exp(\\frac{X^2}{t^2})\\leq 2\\}$. A random variable $X$ with finite $\\|X\\|_{\\psi_2}$ is said to be sub-Gaussian.
A sub-Gaussian variable shares similar properties with a Gaussian variable, e.g., a probability tail with a similar decay rate and moment constraints of all orders:\\begin{equation}\n \\begin{aligned}\\label{2.2}\n &\\mathbbm{P}(|X|\\geq t)\\leq 2\\exp(-\\frac{ct^2}{\\|X\\|_{\\psi_2}^2});\\\\\n &(\\mathbbm{E}|X|^p)^{1\/p}\\leq C\\|X\\|_{\\psi_2}\\sqrt{p},~\\forall~ p\\geq 1.\n \\end{aligned}\n\\end{equation} \nNote that these two properties can also define $\\|.\\|_{\\psi_2}$ up to a multiplicative constant; for instance, $\\|X\\|_{\\psi_2}\\asymp \\sup_{p\\geq 1}\\frac{(\\mathbbm{E}|X|^p)^{1\/p}}{\\sqrt{p}}$, see \\cite[Proposition 2.5.2]{vershynin2018high}. For a $d$-dimensional random vector $\\bm{x}$ we have the sub-Gaussian norm $\\|\\bm{x}\\|_{\\psi_2}=\\sup_{\\bm{v}\\in\\mathbb{S}^{d-1}}\\|\\bm{v}^\\top\\bm{x}\\|_{\\psi_2}$.\n\n\n\n2) The sub-exponential norm is defined as $\\|X\\|_{\\psi_1} = \\inf\\{t>0:\\mathbbm{E}\\exp(\\frac{|X|}{t})\\leq 2\\}$, \nand $X$ is sub-exponential if $\\|X\\|_{\\psi_1}<\\infty$. The sub-exponential $X$ enjoys the properties \n\\begin{equation}\n \\begin{aligned}\\nonumber\n & \\mathbbm{P}(|X|\\geq t)\\leq 2\\exp(-\\frac{ct}{\\|X\\|_{\\psi_1}});\\\\\n & (\\mathbbm{E}|X|^p)^{1\/p}\\leq C\\|X\\|_{\\psi_1}p,~\\forall~p\\geq 1.\n \\end{aligned}\n\\end{equation}\n To relate $\\|.\\|_{\\psi_1}$ and $\\|.\\|_{\\psi_2}$ one has $\\|XY\\|_{\\psi_1}\\leq \\|X\\|_{\\psi_2}\\|Y\\|_{\\psi_2}$ \\cite[Lemma 2.7.7]{vershynin2018high}. \n \n \n \n\n\n3) In contrast to the moment constraint in (\\ref{2.2}), the heavy-tailed distributions in this work only satisfy bounded moments of some order. More precisely, for a given $l>0$, we assume $\\mathbbm{E}|X|^l\\leq M$ for some $M>0$. \nFor a random vector $\\bm{x}\\in \\mathbb{R}^d$, we may use an element-wise assumption $\\sup_{i\\in [d]}\\mathbbm{E}|x_i|^l\\leq M$ or a stronger one, $\\mathbbm{E}|\\bm{v}^\\top\\bm{x}|^l\\leq M$ for all directions $\\bm{v}\\in \\mathbb{S}^{d-1}$.
Our focus is on quantization of heavy-tailed data drawn from such distributions. \n\n\n\n\\subsection{Dithered uniform quantization}\\label{prequan}\nIn this part, we describe the dithered uniform quantizer and its properties in detail. We also specify the choices of the random dither in this work.\n\n\n\n1) We first provide the detailed procedure of dithered quantization and its general properties. Independent of the input signal $\\bm{x}\\in \\mathbb{R}^N$ with dimension $N\\geq 1$, we generate the random dither $\\bm{\\tau}\\in \\mathbb{R}^N$ with i.i.d. entries from some distribution, and then quantize $\\bm{x}$ to $\\bm{\\dot{x}}= \\mathcal{Q}_\\Delta(\\bm{x}+\\bm{\\tau})$. Following \\cite{gray1993dithered}, we refer to $\\bm{w}:= \\bm{\\dot{x}}- (\\bm{x}+\\bm{\\tau})$ as the quantization error, and to $\\bm{\\xi}:= \\bm{\\dot{x}} - \\bm{x}$ as the quantization noise. The principal properties of dithered quantization are provided in Lemma \\ref{lem1}.\n\n\n\\begin{lem} \\label{lem1} {\\rm \\scshape\t(Theorems 1-2 in} {\\rm \\cite{gray1993dithered}){\\bf \\sffamily.}}\nConsider the dithered uniform quantization described above for the input signal $\\bm{x} = [x_i]$, with random dither $\\bm{\\tau} = [\\tau_i]$, quantization error $\\bm{w} = [w_i]$ and quantization noise $\\bm{\\xi} = [\\xi_i]$. Let $Y$ be a random variable having the same distribution as the dither entries $\\tau_i$. \n\n\\noindent{\\rm (a)} {\\rm \\scshape(Quantization Error){\\bf \\sffamily.}} If $f(u):=\\mathbbm{E}(\\exp(\\bm{i} uY))$ satisfies $f\\big(\\frac{2\\pi l}{\\Delta}\\big)=0$ for all non-zero integers $l$, then $x_i$ and $w_j$ are independent for all $i,j\\in [N]$. Moreover, $\\{w_j:j\\in [N]\\}$ is a sequence of i.i.d. $\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$ random variables. \n\n\n\\noindent{\\rm(b)} {\\rm \\scshape(Quantization Noise){\\bf \\sffamily.}} Assume $Z\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$ is independent of $Y$.
Let $g(u):=\\mathbbm{E}(\\exp(\\bm{i} uY))\\mathbbm{E}(\\exp(\\bm{i} u Z))$. Given a positive integer $p$, if the $p$-th order derivative $g^{(p)}(u)$ satisfies $g^{(p)}\\big( \\frac{2\\pi l}{\\Delta}\\big)=0$ for all non-zero integers $l$, then the $p$-th conditional moment of $\\xi_i$ does not depend on $\\bm{x}$: $\\mathbbm{E}[\\xi_i^p|\\bm{x}] = \\mathbbm{E}(Y+Z)^p$. \n\\end{lem}\n\n\n\n2) We use {\\it uniform dither} for quantization of the response in CS and MC. More specifically, under $\\Delta>0$, we adopt the uniform dither $\\tau_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$ for the response $y_k\\in \\mathbb{R}$, which is also a common choice in previous works (e.g., \\cite{thrampoulidis2020generalized,xu2020quantized,dirksen2021non,jung2021quantized}). For $Y\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$, it can be calculated that \n\\begin{equation}\\nonumber\n \\mathbbm{E}(\\exp(\\bm{i}uY))= \\int_{-\\Delta\/2}^{\\Delta\/2}~\\frac{1}{\\Delta}\\big(\\cos (ux) + \\bm{i}\\sin (ux)\\big)~\\mathrm{d}x = \\frac{2}{\\Delta u}\\sin\\big(\\frac{\\Delta u}{2}\\big),\n\\end{equation}\nand hence $\\mathbbm{E}(\\exp(\\bm{i} \\frac{2\\pi l}{\\Delta} Y))=0$ holds for all non-zero integers $l$. Therefore, the benefit of using $\\tau_k \\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$\nis that the quantization errors $w_k=\\mathcal{Q}_\\Delta(y_k+\\tau_k)-(y_k+\\tau_k)$ are i.i.d. with distribution $\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$, and are independent of $\\{y_k\\}$.\n\n\n\n3) We use {\\it triangular dither} for quantization of the covariate, i.e., the sample in covariance estimation or the covariate in CS.
Particularly, when considering the uniform quantizer $\\mathcal{Q}_\\Delta(.)$ for the covariate $\\bm{x}_k\\in \\mathbb{R}^d$, we adopt the dither $\\bm{\\tau}_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)+\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)$, which is the sum of two independent $\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)$ variables and is referred to as a triangular dither \\cite{gray1993dithered}. Simple calculations verify that the triangular dither respects not only the condition in Lemma \\ref{lem1}(a), but also the one in Lemma \\ref{lem1}(b) with $p=2$. Thus, at the cost of a larger dithering variance (compared to the uniform dither), the advantage of the triangular dither over the uniform one is the additional property of signal-independent quantization noise --- $\\mathbbm{E}(\\xi_{ki}^2)=\\frac{1}{4}\\Delta^2$, where $\\xi_{ki}$ is the $i$-th entry of $\\bm{\\xi}_k = \\mathcal{Q}_\\Delta(\\bm{x}_k+\\bm{\\tau}_k)-(\\bm{x}_k+\\bm{\\tau}_k)$. \n\n\n\nTo the best of our knowledge, the triangular dither is new to the literature of QCS. We will explain its necessity when covariance estimation is involved. This is also complemented by numerical simulation (see Figure \\ref{fig5}(a)).\n\n\n\n \n\n\n\\section{(Near) minimax error rates} \\label{sec3}\nIn this section we derive (near) optimal error rates for several canonical statistical estimation problems, including quantized covariance estimation, quantized compressed sensing, and quantized matrix completion. For convenience, we represent these three models by QCE, QCS, and QMC hereafter. The novelty is that the optimality is achieved under the two-fold difficulty of heavy-tailedness and quantization.\n\\subsection{Quantized covariance estimation}\nGiven $\\mathscr{X}:=\\{\\bm{x}_1,...,\\bm{x}_n\\}$ as i.i.d.
copies of a zero-mean random vector $\\bm{x}\\in \\mathbb{R}^d$, one often encounters the covariance (matrix) estimation problem, i.e., to estimate $\\bm{\\Sigma^\\star} = \\mathbbm{E}(\\bm{x} \\bm{x}^\\top)$. This estimation problem is of fundamental importance in multivariate analysis and has attracted a great deal of research interest (e.g., \\cite{cai2012optimal,cai2010optimal,cai2011adaptive,bickel2008covariance}), whereas the more practical quantized regime remains under-developed, for which we are only aware of the extreme 1-bit quantization results in \\cite{dirksen2021covariance,chen2022high}.\n\nFormally put, the problem of quantized covariance estimation (QCE) investigates {\\it how to design a quantizer for quantizing $\\bm{x}_k$, together with a fairly faithful estimation procedure based only on the quantized data}.\nIn contrast to the conventional sub-Gaussian assumption in most previous works, we only assume that $\\bm{x}$ possesses bounded fourth moments. \n\nUnder such an essentially weaker assumption, the observed data may contain outliers, \n which we handle by a data truncation step. We first truncate $\\bm{x}_k$ to $\\bm{\\widetilde{x}}_k$, where the core spirit is to make the outliers less influential. Here, we defer the precise definition of $\\bm{\\widetilde{x}}_k$ to the concrete results, because it should be well suited to the error measure and hence can vary in different instances. \n \n \n After the truncation, we dither and quantize $\\bm{\\widetilde{x}}_k$ to $\\bm{\\dot{x}}_k = \\mathcal{Q}_\\Delta(\\bm{\\widetilde{x}}_k+\\bm{\\tau}_k)$ with the triangular dither $\\bm{\\tau}_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)+\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)$. Since this differs from the uniform dither adopted in the literature (e.g., \\cite{chen2022high,thrampoulidis2020generalized,xu2020quantized,dirksen2021non,jung2021quantized}), let us first explain our usage of the triangular dither.
Recall the definitions of quantization noise $\\bm{\\xi}_k:=\\bm{\\dot{x}}_k - \\bm{\\widetilde{x}}_k$ and quantization error $\\bm{w}_k:=\\bm{\\dot{x}}_k - \\bm{\\widetilde{x}}_k - \\bm{\\tau}_k$,\n then $\\bm{\\xi}_k = \\bm{\\tau}_k+\\bm{w}_k$. If $\\bm{w}_k$ is independent of $\\bm{\\widetilde{x}}_k$ and follows $\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}]^d)$ (as satisfied by both uniform dither and triangular dither), then we have\n\\begin{equation}\n \\begin{aligned}\n \\label{3.1}\n & \\mathbbm{E}(\\bm{\\dot{x}}_k\\bm{\\dot{x}}_k^\\top) = \\mathbbm{E}\\big((\\bm{\\widetilde{x}}_k + \\bm{\\xi}_k)(\\bm{\\widetilde{x}}_k + \\bm{\\xi}_k)^\\top\\big)\\\\& = \\mathbbm{E}(\\bm{\\widetilde{x}}_k\\bm{\\widetilde{x}}_k^\\top)+ \\mathbbm{E}(\\bm{\\widetilde{x}}_k\\bm{\\xi}_k^\\top) + \\mathbbm{E}(\\bm{\\xi}_k\\bm{\\widetilde{x}}_k^\\top) + \\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)\\\\&\\stackrel{(i)}{=}\\mathbbm{E}(\\bm{\\widetilde{x}}_k\\bm{\\widetilde{x}}_k^\\top)+ \\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top).\n \\end{aligned}\n\\end{equation}\nNote that $(i)$ is because $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\widetilde{x}}_k^\\top)=\\mathbbm{E}(\\bm{\\tau}_k\\bm{\\widetilde{x}}_k^\\top)+ \\mathbbm{E}(\\bm{w}_k\\bm{\\widetilde{x}}_k^\\top)=\\mathbbm{E}(\\bm{\\tau}_k)\\mathbbm{E}(\\bm{\\widetilde{x}}_k^\\top)+\\mathbbm{E}(\\bm{w}_k)\\mathbbm{E}(\\bm{\\widetilde{x}}_k^\\top)=0$, and here we use the fact that $\\bm{\\tau}_k$, $\\bm{w}_k$ are independent of $\\bm{\\widetilde{x}}_k$. While with a suitable choice of the truncation threshold, $\\mathbbm{E}(\\bm{\\widetilde{x}}_k\\bm{\\widetilde{x}}_k^\\top)$ is expected to approximate $\\bm{\\Sigma^\\star}$ well, the remaining term $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)$ gives rise to a constant deviation. Thus, it must be removed, and the only possibility is that one has full knowledge of $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)$, i.e., the covariance matrix of the quantization noise.
For $i\\neq j$, we can observe that \\begin{equation}\n \\begin{aligned}\\nonumber\n \\mathbbm{E}(\\xi_{ki}\\xi_{kj})=\\mathbbm{E}\\big((w_{ki}+\\tau_{ki})(w_{kj}+\\tau_{kj})\\big)=0,\n \\end{aligned}\n\\end{equation}\nshowing that $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)$ is diagonal. Moreover, under triangular dither the $i$-th diagonal entry is also known: $\\mathbbm{E}|\\xi_{ki}|^2=\\frac{\\Delta^2}{4}$, see section \\ref{prequan}. Now we can conclude that $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)=\\frac{\\Delta^2}{4}\\bm{I}_d$; combined with (\\ref{3.1}) we propose the following estimator\n\\begin{equation}\n\\label{3.2}\n \\bm{\\widehat{\\Sigma}} = \\frac{1}{n} \\sum_{k=1}^n \\bm{\\dot{x}}_k \\bm{\\dot{x}}_k^\\top - \\frac{\\Delta^2}{4}\\bm{I}_d,\n\\end{equation}\nwhich is the sample covariance of the quantized sample $\\dot{\\mathscr{X}}:=\\{\\bm{\\dot{x}}_1,...,\\bm{\\dot{x}}_n\\}$ followed by a correction step. On the other hand, the reason why the standard uniform dither is not suitable for QCE becomes self-evident --- the diagonal of $\\mathbbm{E}(\\bm{\\xi}_k\\bm{\\xi}_k^\\top)$ remains unknown\\footnote{It depends on the input signal; see \\cite[Page 3]{gray1993dithered}.} and thus the bias cannot be precisely removed.\n\n\n\nWe are now ready to present error bounds for $\\bm{\\widehat{\\Sigma}}$ under the max norm and the operator norm. We will also investigate the high-dimensional setting by assuming a sparse structure on $\\bm{\\Sigma^\\star}$, for which we propose a thresholding estimator based on $\\bm{\\widehat{\\Sigma}}$ in Theorem \\ref{thm1}. \n\n\nFor estimation with error measured by $\\|.\\|_\\infty$, we invoke element-wise truncation to obtain $\\bm{\\widetilde{x}}_k$. The bounded fourth moment assumption is also imposed in an element-wise manner. \n\n\n\n\\begin{theorem}\\label{thm1}\n\\noindent{\\rm\\scshape(Element-wise Error){\\bf \\sffamily.}} Given $\\Delta>0$ and $\\delta > 2$, we consider the problem of QCE described above.
We suppose that $\\bm{x}_k = [x_{ki}]$ satisfies $\\mathbbm{E}|x_{ki}|^4\\leq M$, $\\forall ~i\\in [d]$. We adopt the element-wise truncation with threshold $\\zeta>0$: $\\bm{\\widetilde{x}}_k=[\\widetilde{x}_{ki}]$, $\\widetilde{x}_{ki}=\\sign(x_{ki})\\min\\{|x_{ki}|,\\zeta\\}$; and recall that $\\bm{\\dot{x}}_k = \\mathcal{Q}_\\Delta(\\bm{\\widetilde{x}}_k+ \\bm{\\tau}_k)$ with triangular dither $\\bm{\\tau}_k$. We set $\\zeta \\asymp \\big(\\frac{nM}{\\delta \\log d}\\big)^{1\/4}$. Then if $n\\gtrsim \\delta \\log d$, the estimator in (\\ref{3.2}) enjoys the bound \n\\begin{equation}\\nonumber\n \\mathbbm{P}\\Big(\\|\\bm{\\widehat{\\Sigma}}-\\bm{\\Sigma^\\star}\\|_{\\infty}\\geq C\\mathscr{L}\\sqrt{\\frac{\\delta \\log d}{n}}\\Big)\\leq 2d^{2-\\delta},\n\\end{equation}\nwith $\\mathscr{L}:=\\sqrt{M}+\\Delta^2$. \n\\end{theorem}\n\n\n\n\\vspace{2mm}\n\nNotably, despite the heavy-tailedness and quantization, the estimator achieves an element-wise rate $O(\\sqrt{\\frac{\\log d}{n}})$ coincident with the rate for the sub-Gaussian case. Indeed, after truncation the data are essentially more benign for estimation, and the quantization level $\\Delta$ enters only through the multiplicative factor $\\mathscr{L}=\\sqrt{M}+\\Delta^2$. Thus, the cost of quantization is inessential, in that it does not affect the key part of the error rate. These remarks on the (near) optimality and on the degradation incurred by quantization remain valid in our subsequent theorems. \n\n\n\n\nWhen estimating $\\bm{\\Sigma^\\star}$ under the operator norm, we truncate $\\bm{x}_k$ with respect to the $\\ell_4$ norm with threshold $\\zeta$. To distinguish this from the truncation in Theorem \\ref{thm1}, we change notation and use $\\bm{\\check{x}}_k$ to denote the truncated data, i.e., $\\bm{\\check{x}}_k=\\frac{\\bm{x}_k}{\\|\\bm{x}_k\\|_4} \\min\\{\\|\\bm{x}_k\\|_4,\\zeta\\}$.\nWe also require bounded fourth moments in all directions.\nThen, we still define the estimator as in (\\ref{3.2}). 
\n\\begin{theorem}\\label{thm2}\n\\noindent{\\rm\\scshape(Operator Norm Error){\\bf \\sffamily.}} Given $\\Delta>0$ and $\\delta>0$, we consider the QCE problem described above. We suppose that $\\bm{x}_k$ satisfies $ \\mathbbm{E}|\\bm{v}^\\top \\bm{x}_k|^4 \\leq M$ for any $\\bm{v}\\in \\mathbb{S}^{d-1}$. We adopt the $\\ell_4$ norm truncation with threshold $\\zeta>0$: $\\bm{\\check{x}}_k=\\frac{\\bm{x}_k}{\\|\\bm{x}_k\\|_4}\\min \\{\\|\\bm{x}_k\\|_4,\\zeta\\}$; also recall that $\\bm{\\dot{x}}_k=\\mathcal{Q}_\\Delta (\\bm{\\check{x}}_k+\\bm{\\tau}_k)$ with triangular dither $\\bm{\\tau}_k$. We set $\\zeta \\asymp (M^{1\/4}+\\Delta)\\big(\\frac{n}{\\delta \\log d}\\big)^{1\/4}$. Then if $n\\gtrsim \\delta d \\log d$, the estimator in (\\ref{3.2}) enjoys the bound \\begin{equation}\\nonumber\n \\mathbbm{P}\\Big(\\|\\bm{\\widehat{\\Sigma}}-\\bm{\\Sigma^\\star}\\|_{op}\\geq C\\mathscr{L}\\sqrt{\\frac{\\delta d\\log d}{n}}\\Big)\\leq 2d^{-\\delta},\n\\end{equation} \nwith $\\mathscr{L}:=\\sqrt{M}+\\Delta^2$.\n\\end{theorem}\n\nThe operator norm error rate in Theorem \\ref{thm2} is near optimal (e.g., compared to \\cite[Theorem 7]{fan2021shrinkage}), and the effect of dithered uniform quantization again appears solely in the factor $\\mathscr{L}$. Nevertheless, one still needs (at least) $n \\gtrsim d$ to achieve a small operator norm error.\n\n\nTo handle estimation under the operator norm in a high-dimensional regime (where $d$ may even exceed $n$), we resort to additional structure on $\\bm{\\Sigma^\\star}$. We use a sparse $\\bm{\\Sigma^\\star}$ as an example, which corresponds to situations where the dependencies among different features are weak. To promote sparsity, we apply a thresholding regularization \\cite{bickel2008covariance,cai2012optimal} to the element-wise estimator in Theorem \\ref{thm1}. 
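As a concrete illustration, the thresholding step admits a minimal sketch; here we assume, as in the hard-thresholding rule of \\cite{bickel2008covariance}, that $\\mathcal{T}_\\mu$ acts element-wise and keeps the diagonal intact --- an assumption of this sketch, not necessarily the exact operator used in the proofs.

```python
import numpy as np

def hard_threshold(sigma_hat, mu):
    """Element-wise hard thresholding T_mu: zero out the off-diagonal
    entries of sigma_hat whose magnitude is below mu; the diagonal
    (which need not be sparse) is kept intact."""
    out = np.where(np.abs(sigma_hat) >= mu, sigma_hat, 0.0)
    np.fill_diagonal(out, np.diag(sigma_hat))
    return out
```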
\nIn spite of the heavy-tailedness and quantization, our estimator ($\\bm{\\widehat{\\Sigma}}_s$) achieves the minimax rate $O\\big(s\\sqrt{\\frac{\\log d}{n}}\\big)$ under the operator norm (e.g., compared to \\cite[Theorem 2]{cai2012optimal}).\n\n\n \n\n\\begin{theorem}\\label{thm3}\n\\noindent{\\rm\\scshape(Sparse Covariance Matrix Estimation){\\bf \\sffamily.}} We assume all columns of $\\bm{\\Sigma^\\star} = [\\sigma^\\star_{ij}]$ are $s$-sparse. Under the conditions of Theorem \\ref{thm1}, with $\\bm{\\widehat{\\Sigma}}$ the estimator therein, we set $\\mu=C_1 (\\sqrt{M}+\\Delta^2)\\sqrt{\\frac{\\delta \\log d}{n}}$ for some sufficiently large $C_1$, and consider the thresholding estimator $\\bm{\\widehat{\\Sigma}}_s := \\mathcal{T}_{\\mu}(\\bm{\\widehat{\\Sigma}})$. Then it enjoys the operator norm error bound \\begin{equation}\\nonumber\n \\mathbbm{P}\\Big(\\|\\bm{\\widehat{\\Sigma}}_s - \\bm{\\Sigma^\\star}\\|_{op} \\leq C\\mathscr{L}s\\sqrt{\\frac{ \\delta \\log d}{n}}\\Big) \\geq 1-\\exp(-0.25\\delta),\n\\end{equation}\nwith $\\mathscr{L}:=\\sqrt{M}+\\Delta^2$.\n\\end{theorem}\n To analyse the thresholding estimator, our proof resembles those developed in prior works (e.g., \\cite{cai2012optimal}), but additional efforts are needed to bound the bias term induced by the data truncation. We also point out that the results for the full-data regime can be exactly recovered by setting $\\Delta=0$. As a result, Theorems \\ref{thm1}-\\ref{thm2} represent a strict extension of \\cite[Section 4]{fan2021shrinkage}, while Theorem \\ref{thm3} complements \\cite{fan2021shrinkage} by applying the truncation technique to the estimation of a sparse $\\bm{\\Sigma^\\star}$. 
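To make the pipeline behind Theorem \\ref{thm1} concrete, the following minimal numerical sketch implements element-wise truncation, triangular dithering, uniform quantization and the debiased estimator (\\ref{3.2}). The mid-rise form $\\mathcal{Q}_\\Delta(a)=\\Delta(\\lfloor a\/\\Delta\\rfloor+\\frac{1}{2})$ of the uniform quantizer is an assumption of this sketch.

```python
import numpy as np

def quantize(a, delta):
    """Mid-rise uniform quantizer mapping a onto the grid
    delta * (Z + 1/2); an assumed concrete form of Q_Delta."""
    return delta * (np.floor(a / delta) + 0.5)

def qce_estimator(x, delta, zeta, rng):
    """Estimator (3.2): element-wise truncation at zeta, triangular
    dither, uniform quantization, then the debiased sample covariance."""
    n, d = x.shape
    x_tilde = np.sign(x) * np.minimum(np.abs(x), zeta)    # truncation
    tau = rng.uniform(-delta / 2, delta / 2, (n, d)) \
        + rng.uniform(-delta / 2, delta / 2, (n, d))      # triangular dither
    x_dot = quantize(x_tilde + tau, delta)                # quantized sample
    return x_dot.T @ x_dot / n - (delta ** 2 / 4) * np.eye(d)
```

On simulated data the correction step can be checked directly: without subtracting $\\frac{\\Delta^2}{4}\\bm{I}_d$, the diagonal is systematically overestimated by roughly $\\frac{\\Delta^2}{4}$.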
\n\n\\begin{rem}\\label{rem2}\n\\noindent{\\rm\\scshape(Sub-Gaussian case){\\bf \\sffamily.}} \nWhile we concentrate on the quantization of heavy-tailed data in this work, our results can be readily adjusted to sub-Gaussian $\\bm{x}_k$, for which the truncation step is inessential and can be removed (i.e., $\\zeta=\\infty$). These results are also new to the literature but will not be presented here.\n\\end{rem}\n\n\\subsection{Quantized compressed sensing}\\label{sec4}\nWe consider the linear CS model $\\bm{y} = \\bm{X} \\bm{\\theta^\\star} + \\bm{\\epsilon}$, \nwhere $\\bm{\\theta^\\star}\\in\\mathbb{R}^d$, $\\bm{X}\\in \\mathbb{R}^{n\\times d}$ and $\\bm{\\epsilon}$ are the desired $s$-sparse signal, the sensing matrix and the noise vector, respectively; our goal is to recover $\\bm{\\theta^\\star}$ from $(\\bm{X},\\bm{y})$.\n In the area of QCS, we are interested in {\\it developing quantization schemes (mainly for $\\bm{y}$ in prior works) that are associated with a faithful recovery procedure based merely on the quantized data} \\cite{dirksen2019quantized}. \n\n \n Let $\\bm{x}_k$ be the $k$-th row of $\\bm{X}$; in statistical estimation this problem is also termed sparse linear regression and is formulated as \\begin{equation}\\label{csmodel}\n y_k = \\bm{x}_k^\\top \\bm{\\theta^\\star}+\\epsilon_k,~k=1,...,n.\n\\end{equation} In this work, we adopt the statistical conventions and refer to $\\bm{x}_k$ as the covariate and $y_k$ as the response. Also, we assume the $\\bm{x}_k$ are i.i.d. drawn from some multivariate distribution, and $\\{\\epsilon_k:k\\in[n]\\}$ are i.i.d. 
statistical noise independent of the covariate.\n\n\n\n\n\n\n \n\n\n \nAs already mentioned, we will apply a dithered uniform quantizer to $\\bm{y}$.\nNote that, under the above statistical assumptions and dithered quantization of $\\{y_k\\}$, near optimal recovery guarantees have been established in \\cite{thrampoulidis2020generalized,jung2021quantized,xu2020quantized}, but restricted to the sub-Gaussian regime, i.e., both $\\bm{x}_k$ and $\\epsilon_k$ are drawn from sub-Gaussian distributions. In contrast, our focus is on the quantization of heavy-tailed data. In particular, the noise $\\epsilon_k$ is i.i.d. drawn from some heavy-tailed distribution and hence leads to a heavy-tailed response. We will also study a more challenging setting where both $\\bm{x}_k$ and $\\epsilon_k$ are heavy-tailed. These two settings will be presented separately because of some essential differences; see Remark \\ref{htcova} below.\n\n\n\nAs before, our strategy for handling heavy-tailed data is a truncation step prior to the dithered quantization. Specifically, with threshold $\\zeta_y$ we truncate $y_k$ to $\\widetilde{y}_k = \\sign(y_k) \\min\\{|y_k|,\\zeta_y\\}$, which is then quantized to $\\dot{y}_k = \\mathcal{Q}_\\Delta(\\widetilde{y}_k+\\tau_k)$. To estimate the sparse $\\bm{\\theta^\\star}$, a classical approach is the Lasso \\cite{tibshirani1996regression} (also encompassed by the framework of regularized M-estimators \\cite{negahban2011estimation,negahban2012unified})\n\\begin{equation}\n \\begin{aligned}\\nonumber\n \\mathop{\\arg\\min}\\limits_{\\bm{\\theta}}~\\frac{1}{2n}\\sum_{k=1}^n(y_k-\\bm{x}_k^\\top\\bm{\\theta})^2 + \\lambda \\|\\bm{\\theta}\\|_1,\n \\end{aligned}\n\\end{equation}\nwhich combines the $\\ell_2$ loss for data fidelity and the $\\ell_1$ norm that encourages sparsity. Evidently, the main obstacle to invoking the Lasso under quantization lies in the $\\ell_2$ loss, which requires the full data. 
To illustrate our remedy for this issue, we calculate the expected $\\ell_2$ loss: \n$$\\mathbbm{E}(y_k-\\bm{x}_k^\\top\\bm{\\theta})^2\\stackrel{(i)}{=}\\bm{\\theta}^\\top\\big(\\mathbbm{E}\\bm{x}_k\\bm{x}_k^\\top\\big)\\bm{\\theta}-2\\big(\\mathbbm{E}y_k\\bm{x}_k\\big)^\\top\\bm{\\theta}\\stackrel{(ii)}{=}\\bm{\\theta}^\\top\\bm{\\Sigma^\\star}\\bm{\\theta}-2\\bm{\\Sigma}_{y\\bm{x}}^\\top\\bm{\\theta},$$\nwhere $(i)$ holds up to an inessential additive constant $\\mathbbm{E}|y_k|^2$, and in $(ii)$ we let $\\bm{\\Sigma^\\star}:= \\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$, $\\bm{\\Sigma}_{y\\bm{x}}:=\\mathbbm{E}(y_k\\bm{x}_k)$. Thus, we can still consider a generalized Lasso as \n\\begin{equation}\\label{4.3}\n \\bm{\\widehat{\\theta}} = \\mathop{\\arg\\min}\\limits_{\\bm{\\theta}\\in \\mathcal{S}}~\\frac{1}{2}\\bm{\\theta}^\\top \\bm{Q}\\bm{\\theta} - \\bm{b}^\\top \\bm{\\theta} +\\lambda\\|\\bm{\\theta}\\|_1.\n\\end{equation}\n Guided by the expected loss, we will look for surrogates of the covariances $\\bm{\\Sigma^\\star}$, $\\bm{\\Sigma}_{y\\bm{x}}$, and then plug them into $\\bm{Q}$, $\\bm{b}$, respectively. We also introduce the constraint $\\bm{\\theta}\\in \\mathcal{S}$ to allow more flexibility.\n\n\nThe next theorem is concerned with QCS under the conventional sub-Gaussian covariate but a heavy-tailed response. Note that the heavy-tailedness of $y_k$ stems from a noise distribution with a bounded $2+\\nu$ moment ($\\nu>0$). 
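As a minimal computational sketch (not part of the paper's analysis), the generalized Lasso (\\ref{4.3}) with $\\mathcal{S}=\\mathbb{R}^d$ can be solved by standard proximal gradient descent (ISTA); the sketch assumes $\\bm{Q}$ is positive semi-definite, so that the objective is convex.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def generalized_lasso(Q, b, lam, iters=1000):
    """ISTA for (4.3) with S = R^d:
    minimize 0.5 * theta' Q theta - b' theta + lam * ||theta||_1.
    Assumes Q is positive semi-definite (convex objective)."""
    step = 1.0 / np.linalg.norm(Q, 2)   # 1/L with L = ||Q||_op
    theta = np.zeros(Q.shape[0])
    for _ in range(iters):
        theta = soft_threshold(theta - step * (Q @ theta - b), step * lam)
    return theta
```

Plugging in sample surrogates for $(\\bm{Q},\\bm{b})$, e.g., those specified in the theorems below, yields the corresponding estimators.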
As in \\cite{fan2021shrinkage,chen2022high,zhu2021taming}, it is easier to impose the moment constraint on the response $y_k$.\n\n\n\\begin{theorem}\\label{thm4}\n\\noindent{\\rm\\scshape(Sub-Gaussian Covariate, Heavy-Tailed Response){\\bf \\sffamily.}} For the model (\\ref{csmodel}) we suppose that $\\bm{x}_k$ are i.i.d., zero-mean and sub-Gaussian with $\\|\\bm{x}_k\\|_{\\psi_2}\\leq \\sigma$, that $\\kappa_0\\leq \\lambda_{\\min}(\\bm{\\Sigma^\\star})\\leq \\lambda_{\\max}(\\bm{\\Sigma^\\star})\\leq \\kappa_1$ for some $\\kappa_1>\\kappa_0>0$, where $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$, that $\\bm{\\theta^\\star}\\in\\mathbb{R}^d$ is $s$-sparse, and that $\\mathbbm{E}|y_k|^{2l}\\leq M$ for some fixed $l>1$. Recall our quantization scheme: $\\widetilde{y}_k= \\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, $\\dot{y}_k = \\mathcal{Q}_\\Delta(\\widetilde{y}_k+\\tau_k)$ with $\\tau_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$. For recovery, we define the estimator $\\bm{\\widehat{\\theta}}$ as (\\ref{4.3}) with $\\bm{Q} = \\frac{1}{n}\\sum_{k=1}^n\\bm{x}_k\\bm{x}_k^\\top$, $\\bm{b} = \\frac{1}{n}\\sum_{k=1}^n\\dot{y}_k\\bm{x}_k$, $\\mathcal{S} = \\mathbb{R}^d$. We set $\\zeta_y\\asymp \\big(\\frac{nM^{1\/l}}{\\delta \\log d}\\big)^{1\/2}$, $\\lambda = C_1 \\frac{\\sigma^2}{\\sqrt{\\kappa_0}}(\\Delta +M^{1\/(2l)})\\sqrt{\\frac{\\delta\\log d}{n}}$ with sufficiently large $C_1$. 
If $n \\gtrsim \\delta s\\log d$ with a hidden constant depending only on $(\\kappa_0,\\sigma)$, then with probability at least $1-9d^{1-\\delta}$, the estimation error $\\bm{\\widehat{\\Delta}} = \\bm{\\widehat{\\theta}} - \\bm{\\theta^\\star}$ enjoys the error bounds \\begin{equation}\\nonumber\n \\|\\bm{\\widehat{\\Delta}}\\|_2\\leq C_3\\mathscr{L} \\sqrt{\\frac{\\delta s\\log d}{n}}~~~\\mathrm{and}~~~\\|\\bm{\\widehat{\\Delta}}\\|_1 \\leq C_4\\mathscr{L}s\\sqrt{\\frac{\\delta\\log d}{n}},\n\\end{equation}\nwhere $\\mathscr{L} := \\frac{\\sigma^2(\\Delta+M^{1\/(2l)})}{\\kappa_0^{3\/2}}$.\n\\end{theorem}\n\n\n\n \n The rate $O\\big(\\sqrt{\\frac{s\\log d}{n}}\\big)$ for the $\\ell_2$ norm error is minimax optimal up to logarithmic factors (e.g., compared to \\cite{raskutti2011minimax}). Note that a random noise bounded by $\\Delta$ roughly contributes $\\Delta^{2l}$ to $\\mathbbm{E}|y_k|^{2l}$, which is bounded by $M$; because $\\Delta$ and $M^{1\/(2l)}$ play almost the same role in the error bound, the effect of uniform quantization can be readily interpreted as an additional bounded noise. This generalizes previous findings in \\cite{thrampoulidis2020generalized,jung2021quantized,sun2022quantized} to the heavy-tailed regime.\n \nNext, we switch to the trickier situation where both \n $\\bm{x}_k$ and $y_k$ are heavy-tailed; specifically, we assume they both possess bounded fourth moments. We truncate $\\bm{x}_k$ element-wise to $\\bm{\\widetilde{x}}_k$ and then set $\\bm{Q}:= \\frac{1}{n}\\sum_{k=1}^n\\bm{\\widetilde{x}}_k\\bm{\\widetilde{x}}_k^\\top$, which can be viewed as a robust covariance estimator and is indeed encompassed by our Theorem \\ref{thm1} (i.e., the special case of $\\Delta=0$).\n\\begin{theorem}\n\\label{thm5}\\noindent{\\rm\\scshape(Heavy-Tailed Covariate and Response){\\bf \\sffamily.}} For the model (\\ref{csmodel}) we suppose that $\\bm{x}_k=[x_{ki}]$ are i.i.d. 
zero-mean with bounded fourth moments $\\mathbbm{E}|x_{ki}|^4\\leq M$, that $\\kappa_0\\leq \\lambda_{\\min}(\\bm{\\Sigma^\\star})\\leq \\lambda_{\\max}(\\bm{\\Sigma^\\star})\\leq \\kappa_1$ for some $\\kappa_1>\\kappa_0>0$, where $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$, and that $\\mathbbm{E}|y_k|^4\\leq M$. We assume $\\bm{\\theta^\\star}\\in\\mathbb{R}^d$ is sparse and satisfies $\\|\\bm{\\theta^\\star}\\|_1\\leq R$ for some $R>0$. When processing the signals, with thresholds $\\zeta_x,\\zeta_y$ we truncate $\\bm{x}_k$ to $\\bm{\\widetilde{x}}_k = [\\widetilde{x}_{ki}]$ with $\\widetilde{x}_{ki}:=\\sign(x_{ki})\\min\\{|x_{ki}|,\\zeta_x\\}$, and $y_k$ to $\\widetilde{y}_k:= \\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, with the latter then quantized to $\\dot{y}_k = \\mathcal{Q}_\\Delta(\\widetilde{y}_k+ \\tau_k)$ where $\\tau_k\\sim\\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$. For recovery, we define the estimator $\\bm{\\widehat{\\theta}}$ as (\\ref{4.3}) with $\\bm{Q}=\\frac{1}{n}\\sum_{k=1}^n\\bm{\\widetilde{x}}_k\\bm{\\widetilde{x}}_k^\\top$, $\\bm{b}=\\frac{1}{n}\\sum_{k=1}^n\\dot{y}_k\\bm{\\widetilde{x}}_k$, $\\mathcal{S}=\\mathbb{R}^d$. We set $\\zeta_x, \\zeta_y\\asymp\\big(\\frac{nM}{\\delta \\log d}\\big)^{1\/4}$, $\\lambda = C_1(R\\sqrt{M}+\\Delta^2)\\sqrt{\\frac{\\delta \\log d}{n}}$ with sufficiently large $C_1$. 
If $n\\gtrsim \\delta s^2\\log d$ with a hidden constant depending only on $(\\kappa_0,M)$, then with probability at least $1-4d^{2-\\delta}$, the estimation error $\\bm{\\widehat{\\Delta}}:= \\bm{\\widehat{\\theta}} - \\bm{\\theta^\\star}$ enjoys the error bounds \\begin{equation}\\nonumber\n \\|\\bm{\\widehat{\\Delta}}\\|_2\\leq C_2\\mathscr{L} \\sqrt{\\frac{s\\log d}{n}}~~~\\mathrm{and}~~~\\|\\bm{\\widehat{\\Delta}}\\|_1 \\leq C_3\\mathscr{L}s\\sqrt{\\frac{\\log d}{n}},\n\\end{equation}\nwhere $\\mathscr{L}:=\\frac{R\\sqrt{M}+\\Delta^2}{\\kappa_0}$.\n\\end{theorem}\n\n\n\\vspace{1mm}\n\nTheorem \\ref{thm5} generalizes \\cite[Theorem 2(b)]{fan2021shrinkage} to the uniform quantization setting. Clearly, the obtained rate remains near minimax optimal if $R$ is of minor scaling (e.g., bounded or logarithmic). Nevertheless, such near optimality in Theorem \\ref{thm5} comes at the cost of more stringent conditions and stronger scaling; see the next remark.\n\n\\begin{rem}\\label{htcova} {\\rm \\scshape (Comparing Theorems \\ref{thm4}-\\ref{thm5}){\\bf \\sffamily.}}\nCompared with $n\\gtrsim s\\log d$ in Theorem \\ref{thm4}, Theorem \\ref{thm5} requires the sub-optimal sample complexity $n\\gtrsim s^2\\log d$. In fact, $n\\gtrsim s^2\\log d$ also appears in \\cite[Theorem 2(b)]{fan2021shrinkage}; by explicitly adding the constraint $\\|\\bm{\\theta}\\|_1\\leq R$ to the recovery program and a more careful analysis (using the first-order optimality of $\\bm{\\widehat{\\theta}}$), it can be improved to $n\\gtrsim s\\log d$, and hence the bound is valid in a wider range of $n$ (see Remark \\ref{impro} for details). Besides, in Theorem \\ref{thm5} we impose the constraint $\\|\\bm{\\theta^\\star}\\|_1\\leq R$, which is stronger than the bound $\\|\\bm{\\theta^\\star}\\|_2\\lesssim\\frac{M^{1\/(2l)}}{\\sigma}$ derived in the proof of Theorem \\ref{thm4}. 
\n\\end{rem}\n\n\n\\subsection{Quantized matrix completion}\\label{sec5}\nCompleting a low-rank matrix from only a partial observation of its entries is known as the matrix completion problem, which has found many applications including recommendation systems, image inpainting and quantum state tomography \\cite{chen2022color,davenport2016overview,bennett2007netflix,nguyen2019low,gross2010quantum}, to name just a few. It is currently well understood that the rather stringent incoherence condition is required to pursue exact recovery (e.g., \\cite{candes2012exact,recht2011simpler}), whereas a faithful estimation can be achieved as long as the underlying matrix is not overly spiky (e.g., \\cite{klopp2014noisy,negahban2012restricted}). The latter condition is also known as low spikiness (e.g., \\cite{chen2022color,klopp2017robust,negahban2012restricted}) and is indeed necessary for the well-posedness of the matrix completion problem (e.g., \\cite{davenport2016overview,negahban2012restricted}). \n\n\n\n\nIn this part, we do not pursue exact recovery but focus on the estimation problem. \nLet $\\bm{\\Theta^\\star}\\in \\mathbb{R}^{d\\times d}$ be the underlying matrix with $\\rank(\\bm{\\Theta^\\star})\\leq r$; we formulate the model as \n\\begin{equation}\\label{mcmodel}\n y_k = \\big<\\bm{X}_k, \\bm{\\Theta^\\star}\\big> + \\epsilon_k, ~k=1,2,...,n,\n\\end{equation}\nwhere the design $\\bm{X}_k$ is randomly distributed on $\\mathcal{X}:=\\{\\bm{e}_i\\bm{e}_j^\\top: i,j\\in [d]\\}$ ($\\bm{e}_i$ is the $i$-th column of $\\bm{I}_d$), and $\\epsilon_k$ is the noise independent of $\\bm{X}_k$. Note that for $\\bm{X}_k=\\bm{e}_{i(k)}\\bm{e}_{j(k)}^\\top$ one has $\\big<\\bm{X}_k,\\bm{\\Theta^\\star}\\big> = \\theta^\\star_{i(k),j(k)}$; hence, each observation is a noisy entry. 
For simplicity we consider the uniform sampling scheme $\\bm{X}_k\\sim \\mathscr{U}(\\mathcal{X})$, but with a bit more work our results generalize to more general sampling schemes \\cite{klopp2014noisy}.\nIt has been noted that the low-spikiness condition \\cite{negahban2012restricted} can be substituted with the simpler max-norm constraint \\cite{klopp2014noisy,chen2022color,koltchinskii2011nuclear}, which we adopt here by assuming $\\|\\bm{\\Theta^\\star}\\|_\\infty\\leq \\alpha$. \n\n\n\n\n Our main interest is in quantized matrix completion (QMC), where the goal is to {\\it design a quantizer for the observations $y_k$ that allows accurate post-quantization estimation of $\\bm{\\Theta^\\star}$, typically demonstrated via a faithful estimation procedure based merely on the quantized data}. Besides, we study heavy-tailed $y_k$, which should be truncated as before. In particular, $y_k$ is truncated to $\\widetilde{y}_k=\\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, which is then processed by a uniform quantizer $\\dot{y}_k=\\mathcal{Q}_\\Delta(\\widetilde{y}_k+\\tau_k)$ with dither $\\tau_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$. From the given data $(\\bm{X}_k,\\dot{y}_k)$, we estimate $\\bm{\\Theta^\\star}$ by the regularized M-estimator \\cite{negahban2011estimation,negahban2012unified}\n\\begin{equation}\\label{5.2}\n \\bm{\\widehat{\\Theta}} = \\mathop{\\arg\\min}\\limits_{\\|\\bm{\\Theta}\\|_\\infty\\leq \\alpha}~\\frac{1}{2n}\\sum_{k=1}^n \\big(\\dot{y}_k- \\big<\\bm{X}_k,\\bm{\\Theta}\\big>\\big)^2+\\lambda\\|\\bm{\\Theta}\\|_{nu},\n\\end{equation} \nwhich combines an $\\ell_2$ loss and a nuclear norm regularizer.\n\n\nIt should be pointed out that there is a line of works on 1-bit or multi-bit matrix completion related to the results to be presented \\cite{cai2013max,lafond2014probabilistic,klopp2015adaptive,cao2015categorical,bhaskar2016probabilistic}. 
While the referenced works commonly adopted a likelihood approach, our method departs essentially from them and enjoys certain advantages; see a precise comparison in Remark \\ref{rem8}. Because of this novelty, we include the result for sub-exponential $\\epsilon_k$ in Theorem \\ref{thm8}. \n\n\nNote that in the case of sub-exponential noise, the truncation of $y_k$ becomes unnecessary, so we simply set $\\zeta_y=\\infty$.\n\n\n\n\n\\begin{theorem}\n\\label{thm8}{\\rm \\scshape(QMC with Sub-Exponential Noise){\\bf \\sffamily.}} Given some $\\Delta,\\delta$, we consider the MC model (\\ref{mcmodel}) described above under i.i.d. zero-mean $\\{\\epsilon_k:k\\in [n]\\}$ satisfying $\\|\\epsilon_k\\|_{\\psi_1}\\leq \\sigma$. In the quantization scheme we set $\\zeta_y=\\infty$ so that $\\widetilde{y}_k=y_k$. We choose $\\lambda =C_1 (\\sigma+\\Delta) \\sqrt{\\frac{\\delta\\log d}{nd}}$ with sufficiently large $C_1$, and define $\\bm{\\widehat{\\Theta}}$ as (\\ref{5.2}). If $d\\log^3 d\\lesssim n\\lesssim r^2d^2 \\log d$, then with probability at least $1-4d^{-\\delta}$, $\\bm{\\widehat{\\Delta}}:=\\bm{\\widehat{\\Theta}} - \\bm{\\Theta^\\star}$ enjoys the error bounds \\begin{equation}\\nonumber\n \\frac{\\|\\bm{\\widehat{\\Delta}}\\|_F}{d}\\leq C_2 \\mathscr{L}\\sqrt{\\frac{\\delta r d \\log d}{n}} ~~\\mathrm{and}~~ \\frac{\\|\\bm{\\widehat{\\Delta}}\\|_{nu}}{d} \\leq C_3\\mathscr{L}r\\sqrt{\\frac{\\delta d \\log d}{n}},\n\\end{equation}\nwhere $\\mathscr{L}:= \\alpha+\\sigma+\\Delta$. \n\\end{theorem}\n\n \n\nThen, we present our result for the heavy-tailed case where the noise $\\epsilon_k$ possesses only a bounded second moment. \n\n\n\\begin{theorem}\n\\label{thm9}{\\rm \\scshape(QMC with Heavy-Tailed Noise){\\bf \\sffamily.}} Given some $\\Delta,\\delta$, we consider the MC model (\\ref{mcmodel}) described above under i.i.d., zero-mean $\\{\\epsilon_k:k\\in [n]\\}$ satisfying $\\mathbbm{E}|\\epsilon_k|^2\\leq M$. 
In the quantization scheme we set the threshold $\\zeta_y\\asymp (\\sqrt{M}+\\alpha)\\sqrt{\\frac{n}{\\delta d \\log d}}$. We choose $\\lambda =C_1 (\\alpha+\\sqrt{M}+\\Delta) \\sqrt{\\frac{\\delta\\log d}{nd}}$ with sufficiently large $C_1$, and define $\\bm{\\widehat{\\Theta}}$ as (\\ref{5.2}). If $d\\log d\\lesssim n\\lesssim r^2d^2 \\log d$, then with probability at least $1-6d^{-\\delta}$, $\\bm{\\widehat{\\Delta}}:=\\bm{\\widehat{\\Theta}} - \\bm{\\Theta^\\star}$ enjoys the error bounds \\begin{equation}\\nonumber\n \\frac{\\|\\bm{\\widehat{\\Delta}}\\|_F}{d}\\leq C_2 \\mathscr{L}\\sqrt{\\frac{\\delta r d \\log d}{n}} ~~\\mathrm{and}~~ \\frac{\\|\\bm{\\widehat{\\Delta}}\\|_{nu}}{d} \\leq C_3\\mathscr{L}r\\sqrt{\\frac{\\delta d \\log d}{n}},\n\\end{equation}\nwhere $\\mathscr{L}:=\\alpha+\\sqrt{M}+\\Delta$. \n\\end{theorem}\n\n\n\\vspace{1mm}\n\nCompared to the information-theoretic lower bounds in \\cite{negahban2012restricted,koltchinskii2011nuclear}, the error rates obtained in Theorems \\ref{thm8}-\\ref{thm9} are minimax optimal up to logarithmic factors. Specifically, Theorem \\ref{thm9} provides a near optimal guarantee for QMC with heavy-tailed observations, which is a key standpoint of this paper. Note that the 1-bit quantization counterparts of these two theorems were derived in our previous work \\cite{chen2022high}; in sharp contrast to the present Theorem \\ref{thm9}, the error rate for the heavy-tailed regime in \\cite[Theorem 12]{chen2022high} suffers from essential degradation.\n\n\n\nTo close this section, we give a remark illustrating the novelty and advantages of our method via a careful comparison with prior works. \n\n\\begin{rem}\n\\label{rem8} \n QMC with 1-bit or multi-bit quantized observations has attracted considerable research interest \\cite{davenport20141,cai2013max,lafond2014probabilistic,klopp2015adaptive,cao2015categorical,bhaskar2016probabilistic}. 
Adapted to our notation, these works studied the model $\\dot{y}_k = \\mathcal{Q}(\\big<\\bm{X}_k,\\bm{\\Theta^\\star}\\big>+\\tau_k)$ under a general random dither $\\tau_k$ and quantizer $\\mathcal{Q}(\\cdot)$, and they commonly adopted regularized (or constrained) maximum likelihood estimation for estimating $\\bm{\\Theta^\\star}$. By contrast, with the random dither and quantizer specialized to \n$\\tau_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$ and $\\mathcal{Q}_\\Delta(\\cdot)$ (resp.), we consider $\\dot{y}_k=\\mathcal{Q}_\\Delta(\\mathsf{T}(\\big<\\bm{X}_k,\\bm{\\Theta^\\star}\\big>+\\epsilon_k)+\\tau_k)$, where $\\mathsf{T}(\\cdot)$ stands for the truncation used under heavy-tailed $\\epsilon_k$. Thus, while less general in $(\\tau_k,\\mathcal{Q})$, our method enjoys the advantage of robustness to the pre-quantization noise $\\epsilon_k$, whose distribution is unknown and can even be heavy-tailed. Note that such unknown $\\epsilon_k$ evidently forbids the likelihood approach.\n\\end{rem}\n \n\\section{More developments of QCS}\\label{qc-qcs}\nBy now we have presented near optimal results in the contexts of QCE, QCS and QMC, all under the two-fold challenge of heavy-tailedness and quantization. While positioning such achievable (near) optimality as the primary standpoint of this work, in this section we further develop our theoretical results for QCS. \n\\subsection{Covariate quantization}\nIn the area of QCS, almost all prior works focused merely on the quantization of the response $y_k$; see the recent survey \\cite{dirksen2019quantized}. Thus, it is of both theoretical and practical interest to study a setting of ``complete quantization'' --- meaning that the covariate $\\bm{x}_k$ is also quantized. From the practical side, for instance, an immediate benefit of covariate quantization is the lower memory load. 
Perhaps more prominently, in a federated or distributed learning system, multiple parties may need to communicate feature information among themselves, in which case quantization may become necessary to reduce the communication cost. \n\n\nThe main goal of this part is to study QCS with covariate quantization. \n\n\n\n\n\n\\subsubsection{Multi-bit QCS with quantized covariate}\n\n Since we will also consider 1-bit quantization, we more precisely refer to QCS under the uniform quantizer (as considered in Theorems \\ref{thm4}-\\ref{thm5}) as multi-bit QCS. We will generalize Theorems \\ref{thm4}-\\ref{thm5} to covariate quantization in the next two theorems. \n\n\n Let $(\\bm{\\dot{x}}_k,\\dot{y}_k)$ be the quantized covariate-response pair; we first quickly sketch the idea of our approach. We stick to the framework of the M-estimator in (\\ref{4.3}); as already pointed out, this boils down to looking for accurate surrogates of the covariances $\\bm{\\Sigma^\\star} =\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$ and $\\bm{\\Sigma}_{y\\bm{x}}=\\mathbbm{E}(y_k\\bm{x}_k)$ based on $(\\bm{\\dot{x}}_k,\\dot{y}_k)$. Fortunately, with the triangular dither for $\\bm{x}_k$, this problem has been addressed by our developments on QCE. \n \n\n \n We are now in a position to state our quantization scheme as follows: \n \\begin{itemize}\n \\item (Response quantization){\\bf \\sffamily.} It is the same as in Theorems \\ref{thm4}-\\ref{thm5}. With threshold $\\zeta_y>0$, $y_k$ is truncated to $\\widetilde{y}_k = \\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, which is then processed by the uniform quantizer with uniform dither and some $\\Delta>0$, i.e., $\\dot{y}_k = \\mathcal{Q}_\\Delta(\\widetilde{y}_k+ \\phi_k)$ with $\\phi_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$. \n\n \\item (Covariate quantization){\\bf \\sffamily.} It is the same as in Theorem \\ref{thm1}. 
 With threshold $\\zeta_x>0$, $\\bm{x}_k$ is truncated element-wise to $\\bm{\\widetilde{x}}_k$ by $\\widetilde{x}_{ki} = \\sign(x_{ki})\\min\\{|x_{ki}|,\\zeta_x\\}$, which is then processed by the uniform quantizer with triangular dither and some $\\bar{\\Delta}>0$, i.e., $\\bm{\\dot{x}}_k = \\mathcal{Q}_{\\bar{\\Delta}}(\\bm{\\widetilde{x}}_k+\\bm{\\tau}_k)$ with $\\bm{\\tau}_k\\sim \\mathscr{U}([-\\frac{\\bar{\\Delta}}{2},\\frac{\\bar{\\Delta}}{2}]^d)+\\mathscr{U}([-\\frac{\\bar{\\Delta}}{2},\\frac{\\bar{\\Delta}}{2}]^d)$. \n\n\n \\item (Notation){\\bf \\sffamily.} We write the quantization noises as $\\varphi_k=\\dot{y}_k-\\widetilde{y}_k$, $\\bm{\\xi}_k = \\bm{\\dot{x}}_k- \\bm{\\widetilde{x}}_k$, and the quantization errors as $\\vartheta_k= \\dot{y}_k-(\\widetilde{y}_k+\\phi_k)$, $\\bm{w}_k=\\bm{\\dot{x}}_k-(\\bm{\\widetilde{x}}_k+ \\bm{\\tau}_k)$. \n \\end{itemize}\nWe will adopt the above notation in the subsequent developments. Based on the quantized covariate-response pair $(\\bm{\\dot{x}}_k,\\dot{y}_k)$, we specify our estimator by setting $(\\bm{Q},\\bm{b})$ in (\\ref{4.3}) as\\begin{equation}\\label{quanlasso}\n \\bm{Q}=\\frac{1}{n}\\sum_{k=1}^n \\bm{\\dot{x}}_k\\bm{\\dot{x}}_k^\\top- \\frac{\\bar{\\Delta}^2}{4}\\bm{I}_d~~\\mathrm{and}~~\\bm{b}=\\frac{1}{n}\\sum_{k=1}^n \\dot{y}_k\\bm{\\dot{x}}_k.\n\\end{equation}\nNote that the choice of $\\bm{Q}$ follows from the estimator in Theorem \\ref{thm1}, while that of $\\bm{b}$ is due to the relation $\\mathbbm{E}(\\dot{y}_k\\bm{\\dot{x}}_k)=\\mathbbm{E}(\\widetilde{y}_k\\bm{\\widetilde{x}}_k)$. \n\n \n \n However, an issue is that $\\bm{Q}$ is not positive semi-definite.\n To see this, note that of primary interest in CS is the high-dimensional setting where $d\\gg n$, in which at least $d-n$ eigenvalues of $\\bm{Q}$ evidently equal $-\\frac{\\bar{\\Delta}^2}{4}$. 
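The construction of $(\\bm{Q},\\bm{b})$ in (\\ref{quanlasso}) and the failure of positive semi-definiteness when $d>n$ can be checked with a short numerical sketch; for brevity the truncation is omitted here (i.e., $\\zeta_x=\\zeta_y=\\infty$), and the mid-rise form of the uniform quantizer is an assumption of the sketch.

```python
import numpy as np

def quantize(a, delta):
    """Mid-rise uniform quantizer (assumed concrete form of Q_Delta)."""
    return delta * (np.floor(a / delta) + 0.5)

def quantized_lasso_surrogates(x, y, delta, delta_bar, rng):
    """Surrogates (Q, b) of (quanlasso) from fully quantized data:
    uniform dither for y_k, triangular dither for x_k, and the
    diagonal correction delta_bar^2/4 on Q."""
    n, d = x.shape
    y_dot = quantize(y + rng.uniform(-delta / 2, delta / 2, n), delta)
    tau = rng.uniform(-delta_bar / 2, delta_bar / 2, (n, d)) \
        + rng.uniform(-delta_bar / 2, delta_bar / 2, (n, d))
    x_dot = quantize(x + tau, delta_bar)
    Q = x_dot.T @ x_dot / n - (delta_bar ** 2 / 4) * np.eye(d)
    b = x_dot.T @ y_dot / n
    return Q, b
```

When $d > n$, the Gram part of $\\bm{Q}$ has rank at most $n$, so at least $d-n$ eigenvalues of $\\bm{Q}$ equal $-\\frac{\\bar{\\Delta}^2}{4}<0$, which the sketch confirms.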
We mention that the lack of positive semi-definiteness of $\\bm{Q}$ is problematic from both the statistical and the optimization aspects --- Lemma \\ref{csframework}, used to derive the statistical rates in Theorems \\ref{thm4}-\\ref{thm5}, does not apply, and it is in general unclear how to globally optimize a non-convex program. \n \n\n\n\n\nMotivated by a previous line of works on non-convex M-estimators \\cite{loh2011high,loh2013regularized,loh2017statistical}, we add an $\\ell_1$ norm constraint to (\\ref{4.3}) by setting $\\mathcal{S}=\\{ \\bm{\\theta}\\in \\mathbb{R}^d:\\|\\bm{\\theta}\\|_1\\leq \\mathcal{R}\\}$, where $\\mathcal{R}$ represents a prior estimate of $\\|\\bm{\\theta^\\star}\\|_1$. Letting $\\nabla\\|\\bm{\\theta}_1\\|_1$ denote the sub-differential of $\\|\\bm{\\theta}\\|_1$ at $\\bm{\\theta}=\\bm{\\theta}_1$, we consider any local minimizer of the proposed recovery program\\footnote{The existence of a local minimizer is guaranteed because of the additional $\\ell_1$ constraint.}, or more generally any $\\bm{\\widetilde{\\theta}}\\in \\mathcal{S}$\\footnote{To distinguish it from the global minimizer in (\\ref{4.3}), we denote by $\\bm{\\widetilde{\\theta}}$ the estimator in QCS with quantized covariate.} satisfying \\begin{equation}\n \\big<\\bm{Q}\\bm{\\widetilde{\\theta}}-\\bm{b}+ \\lambda \\cdot \\nabla \\|\\bm{\\widetilde{\\theta}}\\|_1, \\bm{\\theta} - \\bm{\\widetilde{\\theta}}\\big> \\geq 0,~~\\forall~\\bm{\\theta}\\in \\mathcal{S}.\\label{4.17}\n\\end{equation} \nWe will prove a fairly strong guarantee that all local minimizers enjoy near minimax error rates.\nWhile results of this kind bear resemblance to those in \\cite{loh2013regularized}, we point out that \\cite{loh2013regularized} only derived concrete results for the sub-Gaussian regime; because of the heavy-tailed data and quantization in our setting, some essentially different ingredients are required for the technical analysis (see {Remark} \\ref{rem4}).\n\n\nWe are ready to present our results concerning 
multi-bit QCS with quantized covariate; the settings of sub-Gaussian $\\bm{x}_k$ and heavy-tailed $\\bm{x}_k$ are presented separately.\n\n\n\n\n\\begin{theorem}\\label{thm6}\n{\\rm \\scshape (Quantized sub-gaussian Covariate){\\bf \\sffamily.}} Given $\\Delta\\geq 0$, $\\bar{\\Delta}\\geq 0$, $\\delta>0$, we consider the same model as Theorem \\ref{thm4}, \nwith the additional assumption of $\\|\\bm{\\theta^\\star}\\|_2\\leq R$ for some $R>0$. We consider the quantization scheme described above with $\\zeta_x=\\infty$ so that $\\bm{\\widetilde{x}}_k=\\bm{x}_k$.\nFor recovery, we let $\\bm{Q} = \\frac{1}{n}\\sum_{k=1}^n\\bm{\\dot{x}}_k\\bm{\\dot{x}}_k^\\top -\\frac{\\bar{\\Delta}^2}{4}\\bm{I}_d$, $\\bm{b}= \\frac{1}{n}\\sum_{k=1}^n\\dot{y}_k\\bm{\\dot{x}}_k$, $\\mathcal{S} = \\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_1\\leq R\\sqrt{s}\\}$. We also set $\\zeta_y \\asymp \\big(\\frac{\\sigma M^{1\/l}}{\\sigma+\\Delta}\\frac{n}{\\delta\\log d}\\big)^{1\/2}$, $\\lambda = C_1\\frac{(\\sigma+\\bar{\\Delta})^2}{\\sqrt{\\kappa_0}}(\\Delta+M^{1\/(2l)})\\sqrt{\\frac{\\delta\\log d}{n}}$ with sufficiently large $C_1$. If $n\\gtrsim \\delta s\\log d$ for some hidden constant only depending on $(\\kappa_0,\\sigma,\\Delta,\\bar{\\Delta},M,R) $,\nwith probability at least $1- 8d^{1-\\delta}-C_2\\exp(-C_3n)$, all $\\bm{\\widetilde{\\theta}}\\in \\mathcal{S}$ satisfying (\\ref{4.17}) deliver an estimation error $\\bm{\\widetilde{\\Delta}}:=\\bm{\\widetilde{\\theta}}-\\bm{\\theta^\\star}$ bounded by \n\\begin{equation}\\nonumber\n \\|\\bm{\\widetilde{\\Delta}}\\|_2\\leq C \\mathscr{L}\\sqrt{\\frac{\\delta s\\log d}{n}} ~~\\mathrm{and}~~\\|\\bm{\\widetilde{\\Delta}}\\|_1\\leq C' \\mathscr{L}s\\sqrt{\\frac{\\delta \\log d}{n}}\n\\end{equation}\nwhere $ \\mathscr{L} :=\\frac{(\\sigma+\\bar{\\Delta})^2(\\Delta+M^{1\/(2l)})}{\\kappa_0^{3\/2}}$.\n\\end{theorem}\n \n\n\\vspace{1mm}\n\nThen, we turn to QCS with the heavy-tailed covariate studied in Theorem \\ref{thm5}. 
The difference is that we also quantize the covariate. \n\n\n\\begin{theorem}\\label{thm7}\n{\\rm \\scshape (Quantized heavy-tailed Covariate){\\bf \\sffamily.}} Given $\\Delta\\geq 0$, $\\bar{\\Delta}\\geq 0$, $\\delta>0$, we consider the same model as Theorem \\ref{thm5}, with the additional assumption of $\\|\\bm{\\theta^\\star}\\|_1\\leq R$ for some $R>0$.\nWe adopt the quantization scheme described above.\nFor recovery, we let $\\bm{Q}=\\frac{1}{n}\\sum_{k=1}^n \\bm{\\dot{x}}_k\\bm{\\dot{x}}_k^\\top- \\frac{\\bar{\\Delta}^2}{4}\\bm{I}_d$, $\\bm{b}= \\frac{1}{n}\\sum_{k=1}^n\\dot{y}_k\\bm{\\dot{x}}_k$, $\\mathcal{S}=\\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_1\\leq R\\}$.\nWe also set $\\zeta_x,\\zeta_y\\asymp \\big(\\frac{nM}{\\delta\\log d}\\big)^{1\/4}$, \n$\\lambda =C_1 (R\\sqrt{M}+\\Delta^2+R\\bar{\\Delta}^2)\\sqrt{\\frac{\\delta \\log d}{n}}$ with sufficiently large $C_1$. If $n\\gtrsim \\delta s\\log d$ for some hidden constant only depending on $(\\kappa_0,M)$, with probability at least $1- 8d^{1-\\delta}$, all $\\bm{\\widetilde{\\theta}}\\in \\mathcal{S}$ satisfying (\\ref{4.17}) have an estimation error $\\bm{\\widetilde{\\Delta}}:=\\bm{\\widetilde{\\theta}}-\\bm{\\theta^\\star}$ bounded by \\begin{equation}\\nonumber\n \\|\\bm{\\widetilde{\\Delta}}\\|_2\\leq C_3\\mathscr{L}\\sqrt{\\frac{\\delta s\\log d}{n}} ~~\\mathrm{and}~~\\|\\bm{\\widetilde{\\Delta}}\\|_1\\leq C_4\\mathscr{L}s\\sqrt{\\frac{\\delta \\log d}{n}},\n\\end{equation} \nwhere $\\mathscr{L}:=\\frac{R\\sqrt{M}+\\Delta^2+R\\bar{\\Delta}^2}{\\kappa_0}$.\n\\end{theorem}\n\n\n\\begin{rem}\\label{rem4}\n{\\rm \\scshape (Comparing Our Analyses with \\cite{loh2013regularized}){\\bf \\sffamily.}} The above results are motivated by a line of works on nonconvex M-estimators \\cite{loh2011high,loh2013regularized,loh2017statistical}, and our guarantee for the whole set of stationary points (\\ref{4.17}) resembles \\cite{loh2013regularized} most. 
While the main strategy for proving Theorem \\ref{thm6} is adjusted from \\cite{loh2013regularized}, the proof of Theorem \\ref{thm7} does involve an essentially different RSC condition; see our (\\ref{4.23}). In particular, compared with \\cite[equation (4)]{loh2013regularized}, the leading factor of $\\|\\bm{\\widetilde{\\Delta}}\\|_1^2$ in (\\ref{4.23}) degrades from $O\\big(\\frac{\\log d}{n}\\big)$ to $O\\big(\\sqrt{\\frac{\\log d}{n}}\\big)$. To retain a near-optimal rate we need to impose the stronger scaling $\\|\\bm{\\theta^\\star}\\|_1\\leq R$ with proper changes in the proof. Although Theorem \\ref{thm7} is presented for a concrete setting, it sheds light on extending \\cite{loh2013regularized} to a weaker RSC condition that can possibly accommodate covariates with heavier tails. This extension is formally presented as a deterministic framework in Proposition \\ref{framework}. \n\\end{rem}\n\\begin{pro}\n\\label{framework}\nSuppose that the $s$-sparse $\\bm{\\theta^\\star}\\in \\mathbb{R}^d$ satisfies $\\|\\bm{\\theta^\\star}\\|_1\\leq R$, and that $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$ admits $\\lambda_{\\min}(\\bm{\\Sigma^\\star})\\geq \\kappa_0$. If \\begin{equation}\\label{4.26}\n \\lambda \\geq C_1\\max\\big\\{\\|\\bm{Q\\theta^\\star} - \\bm{b}\\|_\\infty,~ R\\cdot\\|\\bm{Q}-\\bm{\\Sigma^\\star}\\|_\\infty\\big\\}\n\\end{equation}\nholds for sufficiently large $C_1$. 
Then, all $\\bm{\\widetilde{\\theta}}$ satisfying (\\ref{4.17}) with $\\mathcal{S} = \\{\\bm{\\theta}\\in \\mathbb{R}^d:\\|\\bm{\\theta}\\|_1\\leq R\\}$ have estimation error $\\bm{\\widetilde{\\Delta}}:=\\bm{\\widetilde{\\theta}}-\\bm{\\theta^\\star}$ bounded by \n\\begin{equation}\n \\begin{aligned} \\nonumber\n \\|\\bm{\\widetilde{\\Delta}}\\|_2\\leq C_2 \\frac{\\sqrt{s}\\lambda}{\\kappa_0} ~~\\mathrm{and}~~\\|\\bm{\\widetilde{\\Delta}}\\|_1\\leq C_3 \\frac{s\\lambda}{\\kappa_0}~.\n \\end{aligned}\n\\end{equation} \n\\end{pro}\nWe then compare Theorem \\ref{thm7} with Theorem \\ref{thm5}, where the full $\\bm{x}_k$ is available. \n\\begin{rem}\n\\label{rem5} \\label{impro}By setting $\\bar{\\Delta}=0$, Theorem \\ref{thm7} produces a result (with a convex program) for the setting of Theorem \\ref{thm5}. Interestingly, with the additional $\\ell_1$ constraint, a notable improvement is that the sub-optimal $n\\gtrsim s^2\\log d$ in Theorem \\ref{thm5} is sharpened to the near-optimal one in Theorem \\ref{thm7}. \nMore concretely, this is because (ii) in (\\ref{4.8}) can be tightened to $(ii)$ of (\\ref{4.24}). Going back to the full-data regime, Theorem \\ref{thm7} with $\\Delta=\\bar{\\Delta}=0$ recovers \\cite[Theorem 2(b)]{fan2021shrinkage} with an improved sample complexity requirement. \n\n\\end{rem}\n \n \n \n \n\\subsubsection{1-bit QCS with quantized covariate} \\label{sec4.3}\nOur consideration of covariate quantization in QCS seems fairly new to the literature. To the best of our knowledge, the only related results are \\cite[Theorems 7-8]{chen2022high} for QCS with 1-bit quantized covariate and response. The assumption there, however, is quite restrictive. 
Specifically, it is assumed that $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$ has sparse columns (see \\cite[Assumption 3]{chen2022high}), which may not hold in practice and is non-standard in CS\\footnote{While it is conventional to use isotropic sensing vectors in CS (i.e., $\\bm{\\Sigma^\\star}=\\bm{I}_d$), most results in the literature do not really rely on the sparsity of $\\bm{\\Sigma^\\star}$.}. \n\n\nIn this part, we slightly deviate from the focus of dithered uniform quantization. Rather, our goal is to study QCS under dithered 1-bit quantization. Specifically, as byproducts of this work, we will apply Proposition \\ref{framework} to derive results comparable to \\cite[Theorems 7-8]{chen2022high} without resorting to the sparsity of $\\bm{\\Sigma^\\star}$. \n \nWe first give some remarks on Proposition \\ref{framework}. Compared with the framework of \\cite[Theorem 1]{loh2013regularized}, the key strength of Proposition \\ref{framework} is that it does not explicitly assume an RSC condition on the loss function, which is hard to verify without assuming sub-Gaussian covariates. \nInstead, the role of the RSC assumption in \\cite{loh2013regularized} is now played by $\\lambda\\gtrsim R\\|\\bm{Q}-\\bm{\\Sigma^\\star}\\|_\\infty$, which immediately yields a kind of RSC condition by a simple argument as in (\\ref{4.24}). Although this RSC condition is often essentially weaker than the conventional one in terms of the leading factor of $\\|\\bm{\\widetilde{\\Delta}}\\|_1^2$ (see Remark \\ref{rem4}), along this line one can still derive non-trivial (or even near-optimal) error rates. 
The gain of replacing the RSC assumption with $\\lambda\\gtrsim R\\|\\bm{Q}-\\bm{\\Sigma^\\star}\\|_\\infty$ is that the latter amounts to \nconstructing an element-wise estimator for $\\bm{\\Sigma^\\star}$, which is often much easier for heavy-tailed covariates (e.g., due to the many existing robust covariance estimators).\n\n\n\nWe first review the 1-bit quantization scheme developed in \\cite{chen2022high}: \n\\begin{itemize}\n \\item (Response quantization){\\bf \\sffamily.} With threshold $\\zeta_y>0$, $y_k$ is truncated to $\\widetilde{y}_k = \\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, which is then quantized to $\\dot{y}_k = \\sign(\\widetilde{y}_k+\\phi_k)$ with $\\phi_k\\sim \\mathscr{U}([-\\gamma_y,\\gamma_y])$.\n\n\n \n \\item (Covariate quantization){\\bf \\sffamily.} With threshold $\\zeta_x>0$, $\\bm{x}_k$ is truncated element-wise to $\\bm{\\widetilde{x}}_k$ by $\\widetilde{x}_{ki} = \\sign(x_{ki})\\min\\{|x_{ki}|,\\zeta_x\\}$, which is then quantized to $\\bm{\\dot{x}}_{k1}= \\sign(\\bm{\\widetilde{x}}_k+\\bm{\\tau}_{k1})$, $\\bm{\\dot{x}}_{k2}= \\sign(\\bm{\\widetilde{x}}_k+\\bm{\\tau}_{k2})$, with $\\bm{\\tau}_{k1},\\bm{\\tau}_{k2}\\sim \\mathscr{U}([-\\gamma_x,\\gamma_x])$. (Note that we collect two bits for each entry.)\n\n \n \n\\end{itemize}\n\n\n\n\n\nAs arranged in \\cite{chen2022high}, we divide the results into the sub-Gaussian case (Theorem \\ref{sg1bitqccs}) and the heavy-tailed case (Theorem \\ref{ht1bitqccs}). \n\n\\begin{theorem}\\label{sg1bitqccs}\n{\\rm \\scshape (1-bit QCS with sub-gaussian Data){\\bf \\sffamily.}} For the model $y_k = \\bm{x}_k^\\top\\bm{\\theta^\\star}+\\epsilon_k$ with $s$-sparse $\\bm{\\theta^\\star}$ that admits $\\|\\bm{\\theta^\\star}\\|_1\\leq R$, and with i.i.d. 
zero-mean, sub-Gaussian\n $\\bm{x}_k, \\epsilon_k$; for simplicity we assume $\\|\\bm{x}_k\\|_{\\psi_2}\\leq \\sigma,\\|y_k\\|_{\\psi_2}\\leq \\sigma$; also suppose that $\\lambda_{\\min}\\big(\\bm{\\Sigma^\\star}\\big)\\geq \\kappa_0$ holds for some $\\kappa_0>0$, where $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$. In the above quantization scheme we set $\\zeta_x=\\zeta_y=\\infty$, $\\gamma_x,\\gamma_y\\asymp \\sigma \\sqrt{\\log\\big(\\frac{n}{2\\delta \\log d}\\big)}$. For the recovery we let $\\bm{Q}:=\\frac{\\gamma_x^2}{2n}\\sum_{k=1}^n\\big(\\bm{\\dot{x}}_{k1}\\bm{\\dot{x}}_{k2}^\\top+\\bm{\\dot{x}}_{k2}\\bm{\\dot{x}}_{k1}^\\top\\big)$, $\\bm{b}:=\\frac{\\gamma_x\\gamma_y}{n}\\sum_{k=1}^n \\dot{y}_k \\bm{\\dot{x}}_{k1}$, $\\mathcal{S}=\\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_1\\leq R\\}$ and set $\\lambda = C_1\\sigma^2R \\sqrt{\\frac{\\delta \\log d(\\log n)^2}{n}}$ with sufficiently large $C_1$. Then if $n\\gtrsim \\delta s\\log d(\\log n)^2$, with probability at least $1-4d^{2-\\delta}$, all local minimizers $\\bm{\\widetilde{\\theta}}$ satisfying (\\ref{4.17}) have estimation error $\\bm{\\widetilde{\\Delta}}:=\\bm{\\widetilde{\\theta}}-\\bm{\\theta^\\star}$ bounded by \\begin{equation}\\nonumber\n \\|\\bm{\\widetilde{\\Delta}}\\|_2\\leq C_2 \\frac{\\sigma^2}{\\kappa_0}\\cdot R\\sqrt{\\frac{\\delta s\\log d (\\log n)^2}{n}} ~~\\mathrm{and}~~\\|\\bm{\\widetilde{\\Delta}}\\|_1\\leq C_3\\frac{\\sigma^2}{\\kappa_0}\\cdot Rs\\sqrt{\\frac{\\delta \\log d (\\log n)^2}{n}}.\n\\end{equation}\n\\end{theorem}\n\n\\begin{theorem}\\label{ht1bitqccs}\n{\\rm \\scshape (1-bit QCS with heavy-tailed Data){\\bf \\sffamily.}} For the model $y_k = \\bm{x}_k^\\top\\bm{\\theta^\\star}+\\epsilon_k$ with $s$-sparse $\\bm{\\theta^\\star}$ that admits $\\|\\bm{\\theta^\\star}\\|_1\\leq R$, and with i.i.d. 
zero-mean, heavy-tailed $\\bm{x}_k, \\epsilon_k$; for simplicity we assume $\\sup_{\\bm{v}\\in \\mathbb{S}^{d-1}}\\mathbbm{E}|\\bm{v}^\\top \\bm{x}_k|^4\\leq M$, $\\mathbbm{E}|y_k|^4\\leq M$; also suppose that $\\lambda_{\\min}\\big(\\bm{\\Sigma^\\star}\\big)\\geq \\kappa_0$ holds for some $\\kappa_0>0$, where $\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top)$. In the above quantization scheme we set $\\zeta_x,\\zeta_y,\\gamma_x,\\gamma_y \\asymp \\big(\\frac{nM^2}{\\delta \\log d}\\big)^{1\/8}$ such that $\\zeta_x <\\gamma_x$ and $\\zeta_y<\\gamma_y$. For the recovery we let $\\bm{Q}:=\\frac{\\gamma_x^2}{2n}\\sum_{k=1}^n\\big(\\bm{\\dot{x}}_{k1}\\bm{\\dot{x}}_{k2}^\\top+\\bm{\\dot{x}}_{k2}\\bm{\\dot{x}}_{k1}^\\top\\big)$, $\\bm{b}:=\\frac{\\gamma_x\\gamma_y}{n}\\sum_{k=1}^n \\dot{y}_k \\bm{\\dot{x}}_{k1}$, $\\mathcal{S}=\\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_1\\leq R\\}$ and set $\\lambda = C_1\\sqrt{M}R\\big(\\frac{\\delta \\log d}{n}\\big)^{1\/4}$ with sufficiently large $C_1$. Then if $n\\gtrsim \\delta s^2\\log d$, with probability at least $1-4d^{2-\\delta}$,\nall local minimizers $\\bm{\\widetilde{\\theta}}$ satisfying (\\ref{4.17}) have estimation error $\\bm{\\widetilde{\\Delta}}:=\\bm{\\widetilde{\\theta}}-\\bm{\\theta^\\star}$ bounded by \\begin{equation}\\nonumber\n \\|\\bm{\\widetilde{\\Delta}}\\|_2\\leq C_2\\frac{\\sqrt{M}}{\\kappa_0}\\cdot R \\Big(\\frac{\\delta s^2\\log d}{n}\\Big)^{1\/4} ~~\\mathrm{and}~~\\|\\bm{\\widetilde{\\Delta}}\\|_1\\leq C_3\\frac{\\sqrt{M}}{\\kappa_0}\\cdot Rs\\Big(\\frac{\\delta \\log d}{n}\\Big)^{1\/4}.\n\\end{equation}\n\\end{theorem}\n\\subsection{Uniform recovery guarantee}\n The main goal of this part is to improve the previous non-uniform Theorem \\ref{thm4} to a uniform reconstruction guarantee. The proof requires significantly more work and deeper technical tools. 
Some techniques are inspired by previous works, e.g., we handle the discontinuity of the dithered uniform quantizer via a covering argument as in \\cite{xu2020quantized}, and similar to \\cite{genzel2022unified}, a key tool is the powerful concentration inequality for product processes due to Mendelson \\cite{mendelson2016upper}. \n\n\n\nHowever, while previous works concentrated on sub-Gaussian data\\footnote{More precisely, \\cite{xu2020quantized} considered an RIP sensing matrix without noise; \\cite{chen2022uniform} assumed a standard Gaussian or complex Gaussian sensing matrix.}, we study quantization of heavy-tailed data, which involves additional data processing steps (i.e., truncation and dithered quantization). As a consequence, it is necessary to develop some new machinery. For instance, we consider a setting without covariate quantization ($\\bar{\\Delta}=0$) and let $\\mathscr{S}_{\\zeta_y}(a)= \\sign(a)\\cdot\\min\\{|a|,\\zeta_y\\}$ be the truncation operator; then following similar lines in the proof of Theorem \\ref{thm4}, we will need to bound the term $\\mathscr{T}'=\\sup_{\\bm{v},\\bm{\\theta}} \\sum_{k=1}^n\\big(\\widetilde{y}_k\\bm{x}_k-\\mathbbm{E}(\\widetilde{y}_k\\bm{x}_k)\\big)^\\top \\bm{v}$, where $\\bm{v}$ takes values in the range of the normalized estimation error, and the supremum of $\\bm{\\theta}$ is taken over all sparse signals within an $\\ell_2$ ball, as we are pursuing a uniform guarantee and $\\widetilde{y}_k=\\mathscr{S}_{\\zeta_y}(\\bm{x}_k^\\top\\bm{\\theta^\\star}+ \\epsilon_k)$ does hinge on the underlying signal. Our main tool to bound the product process $\\mathscr{T}'$ is Lemma \\ref{productpro}, but the issue is that $\\widetilde{y}_k$ does not possess an $O(1)$ sub-Gaussian norm (due to $\\epsilon_k$), and using the trivial sub-Gaussianity\n$\\|\\widetilde{y}_k\\|_{\\psi_2}=O(\\zeta_y)$ definitely leads to a sub-optimal rate. 
To address this issue, our main idea is to introduce $\\mathscr{S}_{\\zeta_y}(\\epsilon_k)$ and $\\widetilde{z}_k= \\widetilde{y}_k-\\mathscr{S}_{\\zeta_y}(\\epsilon_k)$, so that we can write \n\\begin{equation}\\label{n4.9}\n \\mathscr{T}'\\leq \\underbrace{\\sup_{\\bm{v},\\bm{\\theta}}~ \\sum_{k=1}^n\\big(\\widetilde{z}_k\\bm{x}_k-\\mathbbm{E}(\\widetilde{z}_k\\bm{x}_k)\\big)^\\top\\bm{v}}_{\\mathscr{T}'_1}+ \\underbrace{\\sup_{ \\bm{v}}~ \\sum_{k=1}^n\\big(\\mathscr{S}_{\\zeta_y}(\\epsilon_k)\\bm{x}_k-\\mathbbm{E}(\\mathscr{S}_{\\zeta_y}(\\epsilon_k)\\bm{x}_k)\\big)^\\top\\bm{v}}_{\\mathscr{T}'_2}.\n\\end{equation}\nAfter this crucial workaround, Lemma \\ref{productpro} applies to $\\mathscr{T}'_1$\nsince $\\widetilde{z}_k$ is sub-Gaussian and admits sub-Gaussian increments; also, $\\mathscr{T}_2'$ is now independent of $\\bm{\\theta}$ and hence essentially easier. In addition, compared to \\cite[Proposition 6.1]{xu2020quantized}, {\\sffamily step 2.4} in the proof involves some technical changes, as will be explicitly pointed out there.\n\n\n\n\n\n\nIn the following we present the uniform recovery guarantee. We adopt most assumptions in Theorem \\ref{thm4}, but specify the range of $\\bm{\\theta^\\star}$ as $\\bm{\\theta^\\star}\\in \\Sigma_{s,R_0}$ and impose a moment constraint on $\\epsilon_k$. For a more straightforward analysis we consider the constrained Lasso instead of (\\ref{4.3}) as in prior works.\nThe proof is provided in the appendix. 
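As an aside, a constrained Lasso of this form can be solved by projected gradient descent onto the $\ell_1$-ball. The sketch below is our own minimal illustration (the paper's experiments instead use ADMM and composite gradient descent); it assumes a Gaussian design, light noise without truncation, and a dithered uniform quantizer for the response, all choices made for the sketch only.

```python
import numpy as np

def project_l1(v, radius):
    # Euclidean projection onto the l1-ball {x : ||x||_1 <= radius}
    # via sorting and soft-thresholding with a data-dependent threshold
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def constrained_lasso(X, y, radius, iters=500):
    # projected gradient descent for min_{||theta||_1 <= radius} ||y - X theta||^2 / (2n)
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L with L = ||X||_2^2 / n
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n
        theta = project_l1(theta - step * grad, radius)
    return theta

rng = np.random.default_rng(1)
n, d, s, Delta = 400, 100, 5, 0.5
theta_star = np.zeros(d)
theta_star[:s] = 1 / np.sqrt(s)                      # unit-norm s-sparse signal
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
tau = rng.uniform(-Delta / 2, Delta / 2, n)          # uniform dither for the response
y_dot = Delta * (np.floor((y + tau) / Delta) + 0.5)  # dithered uniform quantization
theta_hat = constrained_lasso(X, y_dot, np.abs(theta_star).sum())
print(np.linalg.norm(theta_hat - theta_star))        # small recovery error
```

Since the dithered quantizer satisfies $\mathbbm{E}[\dot{y}_k\,|\,y_k]=y_k$, the quantization only inflates the effective noise level, and the constrained least squares remains consistent.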
\n\n\\begin{theorem}\n \\label{uniformtheorem}\n {\\rm \\scshape (Uniform version of Theorem \\ref{thm4}){\\bf \\sffamily.}} We suppose in $y_k = \\bm{x}_k^\\top\\bm{\\theta^\\star}+\\epsilon_k$ that $\\bm{x}_k$ are i.i.d., zero-mean, and sub-Gaussian with $\\|\\bm{x}_k\\|_{\\psi_2}\\leq \\sigma$; that $\\kappa_0\\leq \\lambda_{\\min}(\\bm{\\Sigma^\\star})\\leq \\lambda_{\\max}(\\bm{\\Sigma^\\star})\\leq \\kappa_1$ $(\\bm{\\Sigma^\\star}=\\mathbbm{E}(\\bm{x}_k\\bm{x}_k^\\top))$ for some $\\kappa_1\\geq \\kappa_0>0$, $\\bm{\\theta^\\star}\\in \\Sigma_{s,R_0}:= \\Sigma_s \\cap \\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_2\\leq R_0\\}$ for some absolute constant $R_0$ (this is not essential and only serves to reduce the number of parameters); and that $\\epsilon_k$ are i.i.d. noise satisfying $\\mathbbm{E}|\\epsilon_k|^{2l}\\leq M$ for some fixed $l>1$. Recall that in our quantization scheme $\\widetilde{y}_k= \\sign(y_k)\\min\\{|y_k|,\\zeta_y\\}$, $\\dot{y}_k = \\mathcal{Q}_\\Delta(\\widetilde{y}_k+\\tau_k)$ with $\\tau_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$, and we set $\\zeta_y\\asymp \\big(\\frac{n(M^{1\/l}+\\sigma^2)}{\\delta \\log d}\\big)^{1\/2}$. 
For recovery, we define the estimator $\\bm{\\widehat{\\theta}}$ as the solution to the constrained Lasso \\begin{equation}\\nonumber\n \\bm{\\widehat{\\theta}}= \\mathop{\\arg\\min}\\limits_{\\|\\bm{\\theta}\\|_{1}\\leq \\|\\bm{\\theta^\\star}\\|_1}~\\frac{1}{2n}\\sum_{k=1}^n(\\dot{y}_k-\\bm{x}_k^\\top\\bm{\\theta})^2. \n \\end{equation}\n If $n \\gtrsim \\delta s\\log \\mathscr{W}$ for $\\mathscr{W}= \\frac{\\kappa_1d^2n^3}{\\Delta^2s^5\\delta^3}$ and some hidden constant depending on $(\\kappa_0,\\sigma)$, with probability at least $1-Cd^{1-\\delta}$ on a single random draw of $(\\bm{x}_k,\\epsilon_k, \\tau_k)_{k=1}^n$, it holds uniformly for all $\\bm{\\theta^\\star}\\in\\Sigma_{s,R_0}$ that the estimation errors $\\bm{\\widehat{\\Delta}}:=\\bm{\\widehat{\\theta}}-\\bm{\\theta^\\star}$ satisfy \n \\begin{equation}\\nonumber\n \\|\\bm{\\widehat{\\Delta}}\\|_2\\leq C_3\\mathscr{L} \\sqrt{\\frac{\\delta s\\log \\mathscr{W}}{n}}~~~\\mathrm{and}~~~\\|\\bm{\\widehat{\\Delta}}\\|_1 \\leq C_4\\mathscr{L}s\\sqrt{\\frac{\\delta\\log \\mathscr{W}}{n}}~\n \\end{equation}\n where $\\mathscr{L}:=\\frac{\\sigma(\\sigma+\\Delta+M^{1\/(2l)})}{\\kappa_0}$.\n\\end{theorem}\n\n\n Notably, our uniform guarantee is still minimax optimal up to logarithmic factors. Compared to the clean $\\sqrt{\\log d}$ in previous results, the worse logarithmic factor $\\sqrt{\\log \\mathscr{W}}$ arises from a covering argument on $\\Sigma_{s,R_0}$. See {\\sffamily step 2.4} in the proof for more details, and for similar techniques we refer to \\cite{xu2020quantized,chen2022uniform}. Because in the full-data regime $(\\Delta =0)$ we do not need to deal with quantizer noise as in {\\sffamily step 2.4}, we can prove the uniform error bound $O\\big(\\sqrt{\\frac{s\\log d}{n}}\\big)$ that is in full agreement with previous results. 
This result is of independent interest --- by using the constrained Lasso instead of the regularized one, it strengthens \\cite[Theorem 2(a)]{fan2021shrinkage} to a uniform reconstruction guarantee at almost no cost.\n\n\n\n \n\n\n\\section{Numerical Simulations}\\label{sec6}\nIn this section we provide two sets of experimental results to support and demonstrate the previous developments. As the standpoint of this work is to demonstrate that near-minimax error rates are achievable when applying the proposed scheme to heavy-tailed data, the first set of simulations aims to validate the obtained error rates. The second set of results is presented to illustrate the crucial role played by the appropriate dither (i.e., triangular dither for the covariate, uniform dither for the response) before uniform quantization. For the importance of data truncation we refer to the relatively complete simulations in \\cite{fan2021shrinkage}, which include the three estimation problems in this work and contrast the estimates obtained with and without data truncation. \n\n\\subsection{(Near) minimax error rates}\nEach data point in our results is set to be the mean value of $50$ or $100$ independent trials. \n\n\n\n\n\\subsubsection{Quantized covariance estimation}\nWe start with covariance matrix estimation; specifically, we verify the element-wise rate $\\mathscr{B}_1:=O\\big(\\mathscr{L}\\sqrt{\\frac{\\log d}{n}}\\big)$ and the operator norm rate $\\mathscr{B}_2:=O\\big(\\mathscr{L}\\sqrt{\\frac{d\\log d}{n}}\\big)$ in Theorems \\ref{thm1}-\\ref{thm2}.\n\n\nFor the estimator in Theorem \\ref{thm1}, we draw $\\bm{x}_k = (x_{ki})$ such that the first two coordinates are i.i.d. from $\\mathsf{t}(4.5)$, $(x_{ki})_{i=3,4}$ are from $\\mathsf{t}(6)$ with covariance $\\mathbbm{E}(x_{k3}x_{k4})=1.2$, and the remaining $d-4$ coordinates are from $\\mathsf{t}(6)$. We test different choices of $(d,\\Delta)$ under $n=80:20:220$, and the log-log plots are shown in Figure \\ref{fig1}(a). 
Clearly, for each $(d,\\Delta)$ the experimental points roughly exhibit a straight line that is well aligned with the dashed line representing the $n^{-1\/2}$ rate. As predicted by the factor $\\mathscr{L}=\\sqrt{M}+\\Delta^2$, the curves with larger $\\Delta$ are higher, but note that the error decay rates remain unchanged. In addition, the curves of $(d,\\Delta)=(100,1),(120,1)$ are extremely close, thereby confirming that the element-wise error depends on the ambient dimension $d$ only logarithmically. \n\n\nFor the error bound $\\mathscr{B}_2$, the coordinates of $\\bm{x}_k$ are drawn from a scaled version of $\\mathsf{t}(4.5)$ such that $\\bm{\\Sigma^\\star}=\\mathrm{diag}(2,2,1,...,1)$, and we test different settings of $(d,\\Delta)$ under $n= 200:100:1000$. As shown in Figure \\ref{fig1}(b), the operator norm error decreases with $n$ at the optimal rate $n^{-1\/2}$, and using a coarser dithered quantizer (i.e., larger $\\Delta$) only slightly lifts the curves. Indeed, the effect seems consistent with $\\mathscr{L}$'s quadratic dependence on $\\Delta$. To validate the relative scaling of $n$ and $d$, we try $(d,\\Delta)= (150,1)$ under $n=1.5\\times(200:100:1000)$, and the obtained curve indeed coincides with the one for $(d,\\Delta)=(100,1)$. Thus, ignoring the logarithmic factor $\\log d$, the operator norm error can be characterized by $\\mathscr{B}_2$ fairly well. \n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale = 0.6]{Thm_1_2.eps}\n \n ~~~ (a) \\hspace{4.4cm} (b) \n \\caption{(a): Element-wise error (Theorem \\ref{thm1}); (b): operator norm error (Theorem \\ref{thm2}).}\n \\label{fig1}\n\\end{figure}\n\n\n\n\\subsubsection{Quantized compressed sensing}\n\nWe now switch to QCS with full covariate and aim to verify the $\\ell_2$ norm error rate $\\mathscr{B}_3=O\\big(\\mathscr{L}\\sqrt{\\frac{s\\log d}{n}}\\big)$ obtained in Theorems \\ref{thm4}-\\ref{thm5}. 
We let the support of the $s$-sparse $\\bm{\\theta^\\star}\\in \\mathbb{R}^d$ be $[s]$, and then draw the non-zero entries from a uniform distribution over $\\mathbb{S}^{s-1}$ (hence $\\|\\bm{\\theta^\\star}\\|_2=1$). For the setting of Theorem \\ref{thm4} we adopt $\\bm{x}_k\\sim \\mathcal{N}(0,\\bm{I}_d)$ and $\\epsilon_k\\sim \\frac{1}{\\sqrt{6}}\\mathsf{t}(3)$, while $\\bm{x}_{ki}\\stackrel{iid}{\\sim}\\frac{\\sqrt{5}}{3}\\mathsf{t}(4.5)$ and $\\epsilon_k\\sim \\frac{1}{\\sqrt{3}}\\mathsf{t}(4.5)$ for Theorem \\ref{thm5}. We simulate different choices of $(d,s,\\Delta)$ under $n=100:100:1000$, and the proposed convex program (\\ref{4.3}) is solved within the framework of ADMM (we refer to the review \\cite{boyd2011distributed}). Experimental results are shown as log-log plots in Figure \\ref{fig2}. Consistent with the theoretical bound $\\mathscr{B}_3$, the error in both cases decreases at the (near) optimal rate of $n^{-1\/2}$, whereas the effect of uniform quantization is merely on the multiplicative factor $\\mathscr{L}$. Interestingly, it seems that the gaps between $\\Delta=0,0.5$ and $\\Delta = 0.5,1$ are in agreement with the explicit form of $\\mathscr{L}$, i.e., $\\mathscr{L}\\asymp M^{1\/(2l)}+\\Delta$ for Theorem \\ref{thm4}, and $\\mathscr{L}\\asymp \\sqrt{M}+\\Delta^2$ for Theorem \\ref{thm5}. In addition, note that the curves of $(d,s)=(150,5),(180,5)$ are close, whereas increasing $s$ to $8$ leads to significantly larger error, thus validating the relative scaling of $(n,d,s)$ in $\\mathscr{B}_3$.\n\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale = 0.62]{Thm_4_5.eps}\n \n ~~~ (a) \\hspace{4.4cm} (b) \n \\caption{(a): QCS in Theorem \\ref{thm4}; (b): QCS in Theorem \\ref{thm5}.}\n \\label{fig2}\n\\end{figure}\n\nThen, we simulate the complete quantization setting where both covariate and response are quantized (Theorems \\ref{thm6}-\\ref{thm7}). 
The simulation details are the same as before except that $\\bm{x}_k$ is also quantized, with the same $\\Delta$ as $y_k$. We use the sharpest $\\ell_1$-norm constraint, i.e., $\\mathcal{S}:=\\{\\bm{\\theta}:\\|\\bm{\\theta}\\|_1\\leq\\|\\bm{\\theta^\\star}\\|_1 \\}$. Then, composite gradient descent \\cite{loh2011high,loh2013regularized} is invoked to handle the non-convex estimation program. We show the log-log plots in Figure \\ref{fig3}. Note that these results have implications similar to those of Figure \\ref{fig2}, in terms of the $n^{-1\/2}$ rate, the effect of quantization, and the relative scaling of $(n,d,s)$.\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale = 0.58]{Thm_6_7.eps}\n \n ~~~ (a) \\hspace{4.4cm} (b) \n \\caption{(a): QCS in Theorem \\ref{thm6}; (b): QCS in Theorem \\ref{thm7}.}\n \\label{fig3}\n\\end{figure}\n\n\n\n\\subsubsection{Quantized matrix completion}\nFinally, we simulate QMC and demonstrate the error bound $\\mathscr{B}_4=O\\big(\\mathscr{L}\\sqrt{\\frac{rd\\log d}{n}}\\big)$ for $\\|\\bm{\\widehat{\\Delta}} \\|_F\/d$ in Theorems \\ref{thm8}-\\ref{thm9}. We generate the rank-$r$ $\\bm{\\Theta^\\star}\\in \\mathbb{R}^{d\\times d}$ as follows: we first generate $\\bm{\\Theta}_0\\in \\mathbb{R}^{d\\times r}$ with i.i.d. standard Gaussian entries to obtain the rank-$r$ $\\bm{\\Theta}_1:=\\bm{\\Theta}_0\\bm{\\Theta}_0^\\top$, then we set $\\bm{\\Theta^\\star}:=k_1\\bm{\\Theta}_1$ with the scaling factor $k_1>0$ chosen such that $\\|\\bm{\\Theta^\\star}\\|_F=d$. \nWe use $\\epsilon_k\\sim \\mathcal{N}(0,\\frac{1}{4})$ to simulate the sub-exponential noise in Theorem \\ref{thm8}, while $\\epsilon_k\\sim \\frac{1}{\\sqrt{6}}\\mathsf{t}(3)$ for Theorem \\ref{thm9}. The convex program (\\ref{5.2}) is fed with $\\alpha=\\|\\bm{\\Theta^\\star}\\|_\\infty$ and optimized by the ADMM algorithm. We test different choices of $(d,r,\\Delta)$ under $n=2000:1000:8000$, with the log-log error plots displayed in Figure \\ref{fig4}. 
Firstly, the experimental curves are well aligned with the dashed line that represents the optimal $n^{-1\/2}$ rate. Then, comparing the results for $\\Delta = 0,0.5,1$, we conclude that quantization only affects the multiplicative factor $\\mathscr{L}$ in the estimation error. It should also be noted that increasing either $d$ or $r$ leads to significantly larger error, which is consistent with $\\mathscr{B}_4$'s essential dependence on $d$ and $r$. \n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale = 0.72]{Thm_10_11.eps}\n \n ~~~ (a) \\hspace{4.4cm} (b) \n \\caption{(a): QMC in Theorem \\ref{thm8}; (b): QMC in Theorem \\ref{thm9}.}\n \\label{fig4}\n\\end{figure}\n\n\\subsection{Importance of appropriate dithering}\nTo demonstrate the crucial role played by a suitable dither, we provide the second set of simulations. In order to observe more pronounced phenomena and draw clear conclusions, we test huge sample sizes on a rather simple estimation problem under coarse quantization (i.e., large $\\Delta$). \n\n\nSpecifically, for covariance matrix estimation we set $d=1$ and draw $X_1,...,X_n$ i.i.d. from $\\mathcal{N}(0,1)$. Thus, the problem boils down to estimating $\\mathbbm{E}|X_k|^2$, for which the estimators in Theorems \\ref{thm1}-\\ref{thm2} coincide. Since $X_k$ is sub-Gaussian, we do not perform data truncation before dithered quantization. 
Besides our estimator $\\bm{\\widehat{\\Sigma}}=\\frac{1}{n}\\sum_{k=1}^n\\dot{X}_k^2-\\frac{\\Delta^2}{4}$, where $\\dot{X}_k=\\mathcal{Q}_\\Delta(X_k+\\tau_k)$ and $\\tau_k$ is a triangular dither, we include the following competitors: \n\\begin{itemize}\n \\item $\\bm{\\widehat{\\Sigma}}_{no}=\\frac{1}{n}\\sum_{k=1}^n(\\dot{X}'_k)^2$, where $\\dot{X}'_k=\\mathcal{Q}_\\Delta(X_k)$ is the direct quantization without dithering;\n \\item $\\bm{\\widehat{\\Sigma}}_u-\\frac{\\Delta^2}{6}$ and $\\bm{\\widehat{\\Sigma}}_u$, where $\\bm{\\widehat{\\Sigma}}_u=\\frac{1}{n}\\sum_{k=1}^n(\\dot{X}''_k)^2$, and $\\dot{X}''_k=\\mathcal{Q}_\\Delta(X_k+\\tau''_k)$ is quantized under a uniform dither $\\tau''_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$.\n\\end{itemize} To illustrate the choice of $\\bm{\\widehat{\\Sigma}}_u-\\frac{\\Delta^2}{6}$ and $\\bm{\\widehat{\\Sigma}}_u$, we write $\\dot{X}''_k=X_k+\\tau''_k+w_k=X_k+\\xi_k$ with quantization error $w_k\\sim \\mathscr{U}([-\\frac{\\Delta}{2},\\frac{\\Delta}{2}])$ (due to Lemma \\ref{lem1}(a)) and quantization noise $\\xi_k=\\tau''_k+w_k$; then (\\ref{3.1}) gives $\\mathbbm{E}(\\dot{X}''_k)^2=\\mathbbm{E}|X_k|^2 +\\mathbbm{E}|\\xi_k|^2$, while $\\mathbbm{E}|\\xi_k|^2$ remains unknown. Thus, we consider $\\bm{\\widehat{\\Sigma}}_u-\\frac{\\Delta^2}{6}$ based on the unjustified guess $\\mathbbm{E}|\\xi_k|^2\\approx \\mathbbm{E}|\\tau''_k|^2+\\mathbbm{E}|w_k|^2=\\frac{\\Delta^2}{6}$, while $\\bm{\\widehat{\\Sigma}}_u$ simply forgoes the correction for $\\mathbbm{E}|\\xi_k|^2$. We test $\\Delta =3$ under $n=(2:2:20)\\cdot10^3$. 
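This one-dimensional comparison can be reproduced in a few lines; the following sketch (our own illustration of the competitors listed above, with $\Delta=3$, $n=10^6$, and a mid-rise quantizer grid of our choosing) exhibits the consistency of the triangular-dither estimator and the bias of the dither-free and uniform-dither variants.

```python
import numpy as np

rng = np.random.default_rng(2)

def quant(x, delta):
    # mid-rise uniform quantizer with resolution delta
    return delta * (np.floor(x / delta) + 0.5)

n, Delta = 10**6, 3.0
X = rng.standard_normal(n)                    # target: E|X_k|^2 = 1

tri = rng.uniform(-Delta/2, Delta/2, n) + rng.uniform(-Delta/2, Delta/2, n)  # triangular
uni = rng.uniform(-Delta/2, Delta/2, n)                                      # uniform

est_tri  = np.mean(quant(X + tri, Delta)**2) - Delta**2 / 4  # proposed estimator
est_none = np.mean(quant(X, Delta)**2)                       # no dither
est_u    = np.mean(quant(X + uni, Delta)**2)                 # uniform dither, no correction
est_uc   = est_u - Delta**2 / 6                              # uniform dither, guessed correction

print(est_tri, est_none, est_u, est_uc)
```

Only the triangular-dither estimator is unbiased for $\mathbbm{E}|X_k|^2$; the others carry biases that do not vanish as $n$ grows, which is the error-floor behaviour discussed next.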
From the results shown in Figure \\ref{fig5}(a), the proposed estimator based on quantized data under the triangular dither achieves the lowest estimation errors and the optimal rate of $n^{-1\/2}$, whereas the other competitors are not consistent, i.e., they all reach error floors as the sample size grows.\n\n\n\nFor the two remaining signal recovery problems, we simply focus on the quantization of the measurement $y_k$. In particular, we simulate QCS in the setting of Theorem \\ref{thm4}, with $(d,s,\\Delta)=(50,3,2)$ under $n:=(2:2:20)\\cdot 10^3$. Other experimental details are as previously stated. We compare our estimator $\\bm{\\widehat{\\theta}}$ with its counterpart $\\bm{\\widehat{\\theta}}'$ defined by (\\ref{4.3}) with the same $\\bm{Q},\\mathcal{S}$ but $\\bm{b}'=\\frac{1}{n}\\sum_{k=1}^n\\dot{y}'_k\\bm{x}_k$, where $\\dot{y}_k' = \\mathcal{Q}_\\Delta(\\widetilde{y}_k)$ is a direct uniform quantization with no dither. Evidently, the simulation results in Figure \\ref{fig5}(b) confirm that the application of a uniform dither significantly reduces the recovery errors. Without dithering, although our results under Gaussian covariate still exhibit the $n^{-1\/2}$ decreasing rate, an identifiability issue does arise under Bernoulli covariate. In that case, the simulation without dithering evidently deviates from the $n^{-1\/2}$ rate; see \\cite[Figure 1]{sun2022quantized} for instance.\n\n\nAnalogously, we simulate QMC (Theorem \\ref{thm8}) with data generated as in the previous experiments, and specifically we try $(d,r,\\Delta)= (30,5,1.5)$ under $n=(5:5:25)\\cdot 10^3$. While our estimator $\\bm{\\widehat{\\Theta}}$ is defined in (\\ref{5.2}) involving $\\dot{y}_k$ from a dithered quantizer, we simulate the performance of its counterpart without dithering, i.e., $\\bm{\\widehat{\\Theta}}'$ defined in (\\ref{5.2}) with $\\dot{y}_k$ substituted by $\\dot{y}_k'=\\mathcal{Q}_\\Delta(y_k)$. 
From the experimental results displayed in Figure \\ref{fig5}(c), one can clearly see that $\\bm{\\widehat{\\Theta}}$ performs much better, both in achieving the decreasing rate of $n^{-1\/2}$ and in estimation error, while the curve without dithering does not decrease at all.\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale = 0.72]{importance.eps}\n \n ~~~ (a) \\hspace{4.6cm} (b) \\hspace{4.6cm} (c) \n \\caption{(a): covariance matrix estimation; (b): QCS in Theorem \\ref{thm4}; (c): QMC in Theorem \\ref{thm8}.}\n \\label{fig5}\n\\end{figure}\n \n\\section{Final remarks}\\label{sec7}\nIn digital signal processing and many distributed or federated machine learning problems, data quantization is an indispensable process. On the other hand, many modern datasets exhibit heavy-tailed behaviour, and the past decade has witnessed increasing interest in statistical methods robust to heavy-tailedness.\nIn this work we studied the quantization of heavy-tailed data.\nThe proposed scheme truncates the heavy-tailed data before feeding it to a uniform quantizer with a random dither well suited to the problem. Applying our quantization scheme to covariance estimation, compressed sensing, and matrix completion, we have proposed (near) optimal estimators from quantized data. Indeed, the quantization only affects the multiplicative factors of the theoretical error bounds, which we also confirmed via numerical simulations. We believe that our approach can be applied to more statistical estimation problems with heavy-tailed data that need to be quantized. \n\n\nWe also presented further developments for quantized compressed sensing in two respects. Firstly, we studied a novel setting that involves covariate quantization. Following the strategy of the regularized M-estimator, the key is the covariance estimation from quantized data. Although the resulting recovery program is non-convex, we proved that all local minimizers enjoy a near-minimax rate. 
At a higher level, this development extends a line of works on non-convex M-estimators \\cite{loh2011high,loh2013regularized,loh2017statistical} to accommodate heavy-tailed covariates; see the deterministic framework in Proposition \\ref{framework}. As an application, we derived results for (dithered) 1-bit compressed sensing as byproducts. Secondly, we established a uniform recovery guarantee via more in-depth concentration inequalities and covering arguments. Notably, with a single realization of $(\\bm{x}_k,\\epsilon_k,\\tau_k)$, all sparse signals within an $\\ell_2$ ball can be uniformly recovered from the quantized data. \n\n\n \n\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{Introduction}\n\\label{introduction}\n\\hf\\hf Enumeration of singular curves in $\\mathbb{P}^2$ (complex projective space) is a classical \nproblem in algebraic geometry. In our first paper \\cite{BM13}, we used purely topological \nmethods to answer the following question: \n\\begin{que}\nHow many degree $d$-curves are there in $\\mathbb{P}^2$, passing through $(d(d+3)\/2 -k)$ generic points and having one singularity \nof codimension $k$, where $k$ is at most $7$? \n\\end{que} \nIn this paper, we extend the methods applied in \\cite{BM13} to enumerate curves with two singular points. More precisely, we obtain an \nexplicit answer for the following question: \n\\begin{que}\nHow many degree $d$-curves are there in $\\mathbb{P}^2$, passing through $(d(d+3)\/2 -(k+1))$ generic points, having one node and one singularity \nof codimension $k$, where $k$ is at most $6$? \n\\end{que} \n\\hf\\hf Let us denote the space of curves of degree $d$ in $\\mathbb{P}^2$ by $\\mathcal{D}$. It follows that $\\mathcal{D} \\cong \\mathbb{P}^{\\delta_d}$, where $\\delta_d = d(d+3)\/2$. Let $\\gamma_{_{\\mathbb{P}^2}}\\longrightarrow \\mathbb{P}^2$ be the tautological line bundle. 
\nA homogeneous polynomial $f$, of degree $d$ and in $3$ variables, induces a holomorphic section of the line bundle $\\gamma_{_{\\mathbb{P}^2}}^{*d} \\longrightarrow \\mathbb{P}^2$. \nIf $f$ is non-zero, then we will denote its \\textit{equivalence class} in $\\mathcal{D}$ by $\\tilde{f}$. Similarly, if $p$ is a non-zero vector in $\\mathbb{C}^3$, \nwe will denote its equivalence class in $\\mathbb{P}^2$ by $\\tilde{p}$ \\footnote{In this paper we will use the symbol $\\tilde{A}$ to denote the equivalence class of \n$A$ instead of the standard $[A]$. This will make some of the calculations in section \\ref{closure_of_spaces} easier to read.}. \n\\begin{defn}\n\\label{singularity_defn}\nLet $\\tilde{f} \\in \\mathcal{D}$ and $\\tilde{p} \\in \\mathbb{P}^2$. A point $\\tilde{p} \\in f^{-1}(0)$ \\textsf{is of singularity type} $\\mathcal{A}_k$,\n$\\mathcal{D}_k$, $\\mathcal{E}_6$, $\\mathcal{E}_7$, $\\mathcal{E}_8$ or $\\mathcal{X}_8$ if there exists a coordinate system\n$(x,y) :(\\mathcal{U},\\tilde{p}) \\longrightarrow (\\mathbb{C}^2,0)$ such that $f^{-1}(0) \\cap \\mathcal{U}$ is given by \n\\begin{align*}\n\\mathcal{A}_k: y^2 + x^{k+1} &=0 \\qquad k \\geq 0, \\qquad \\mathcal{D}_k: y^2 x + x^{k-1} =0 \\qquad k \\geq 4, \\\\\n\\mathcal{E}_6: y^3+x^4 &=0, \\qquad \\mathcal{E}_7: y^3+ y x^3=0, \n\\qquad \\mathcal{E}_8: y^3 + x^5=0, \\\\\n\\mathcal{X}_8: x^4 + y^4 &=0. \n\\end{align*}\n\\end{defn}\nIn more common terminology, $\\tilde{p}$ is a {\\it smooth} point of $f^{-1}(0)$ if \nit is a singularity of type $\\mathcal{A}_0$; a {\\it simple node} if its singularity type is $\\mathcal{A}_1$; a {\\it cusp} if its type is $\\mathcal{A}_2$; a {\\it tacnode} \nif its type is $\\mathcal{A}_3$; a {\\it triple point} if its type is $\\mathcal{D}_4$; and a {\\it quadruple point} if its type is $\\mathcal{X}_8$. \\\\\n\\hf\\hf We have several results (cf. 
Theorem \\ref{algoa1a1}\\,-\\,\\ref{algope6a1}, section \\ref{algorithm_for_numbers}) \nwhich can be summarized collectively as our main result. Although \\eqref{algoa1a1}-\\eqref{algope6a1} may appear as equalities, \nthe content of each of these equations is a theorem.\n\\begin{mthm}\n\\label{main_result}\nLet $\\mathfrak{X}_k$ be a singularity of type $\\mathcal{A}_k$, $\\mathcal{D}_k$ or $\\mathcal{E}_k$. Denote $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ to be the number of \ndegree $d$ curves in $\\mathbb{P}^2$ that pass through $\\delta_d - (k+1+n)$ generic points having \none $\\mathcal{A}_1$ node and one singularity of type $\\mathfrak{X}_k$ at the intersection of $n$ generic lines. \\\\\n\\textup{(i)} There is a formula for $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ if $k+1 \\leq 7$, provided $d \\geq \\mathcal{C}_{\\mathcal{A}_1\\mathfrak{X}_k}$ where \n\\bgd\n\\mathcal{C}_{\\mathcal{A}_1\\mathcal{A}_k} = k+3, ~~\\mathcal{C}_{\\mathcal{A}_1\\mathcal{D}_k} = k+1, ~~\\mathcal{C}_{\\mathcal{A}_1\\mathcal{E}_6} = 6. \n\\edd\n\\textup{(ii)} There is an algorithm to explicitly compute these numbers. \n\\end{mthm}\n\n\\begin{rem}\nNote that $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ is zero if $n>2$, since three or more generic lines do not intersect at a common point. 
Moreover, $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,2)$ is the number of curves, of degree $d$, that pass through $\\delta_d-(k+3)$ generic points having one $\\mathcal{A}_1$-node and one singularity of type $\\mathfrak{X}_k$ lying at a given fixed point (since the intersection of two generic lines is a point).\n\\end{rem}\n\n\nIn \\cite{BM13}, we obtained an explicit formula for \n$\\mathcal{N}(\\mathfrak{X}_k, n)$, the number of degree $d$ \ncurves in $\\mathbb{P}^2$ that pass through $\\delta_d - (k+n)$ generic points having \none singularity of type $\\mathfrak{X}_k$ at the intersection of $n$ generic lines.\nWe extend the methods applied in \\cite{BM13} to obtain an explicit \nformula for $\\mathcal{N}(\\mathcal{A}_1 \\mathfrak{X}_k, n)$. \\\\[0.1cm]\n\n\\hf \\hf The numbers $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,0)$ for \n$k+1\\leq 7$ have also been computed by Maxim Kazarian \\cite{Kaz} \nusing different methods. Our results for $n=0$ \nagree with his. The bound $d \\geq \\mathcal{C}_{\\mathcal{A}_1\\mathfrak{X}_k}$ is imposed to ensure that the \nrelevant bundle sections are transverse.\\footnote{However, this bound is not the optimal bound.} \nThe formulas for $\\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1,n)$ and $\\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_2,n)$ also appear in \\cite{Z1}. \nWe extend the methods applied by the author to obtain the remaining formulas. \n\n\n\n\n\\section{Overview} \n\\hf\\hf Our main tool will be the following well-known fact from topology (cf. \\cite{BoTu}, Proposition 12.8).\n\\begin{thm} \n\\label{Main_Theorem} \nLet $V\\longrightarrow X$ be a vector bundle over a manifold $X$. Then the following are true: \n\\hspace*{0.5cm}\\textup{(1)} A generic smooth section $s: X\\longrightarrow V$ is transverse to the zero set. 
\\\\\n\\hspace*{0.5cm}\\textup{(2)} Furthermore, if $V$ and $X$ are oriented with $X$ compact then the zero set of such a section defines an integer homology class in $X$, \nwhose Poincar\\'{e} dual is the Euler class of $V$. \nIn particular, if the rank of $V$ is same as the dimension of $X$, \nthen the signed cardinality of $s^{-1}(0)$ is the Euler class of $V$, evaluated on the fundamental class of \n$X$, i.e., \n\\bgd\n|\\pm s^{-1}(0)| = \\langle e(V), [X] \\rangle. \n\\edd\n\\end{thm}\n\\begin{rem}\nLet $X$ be a compact, complex manifold, $V$ a holomorphic vector bundle and $s$ a holomorphic section that is transverse to the zero set. If the rank of $V$ is same as the dimension of $X$, \nthen the signed cardinality of $s^{-1}(0)$ is same as its actual cardinality (provided $X$ and $V$ have their natural orientations). \n\\end{rem}\nHowever, for our purposes, the requirement that $X$ is a smooth manifold is too strong. We will typically be dealing with spaces that are \nsmooth but have non-smooth closure. The following result is a stronger version of Theorem \\ref{Main_Theorem}, that applies to singular spaces, provided \nthe set of singular points is of real codimension two or more. \n\\begin{thm} \n\\label{Main_Theorem_pseudo_cycle} \nLet $M \\subset \\mathbb{P}^{N}$ be a smooth, compact algebraic variety and $X \\subset M$ a smooth subvariety, not necessarily closed. Let $V \\longrightarrow M$ be an oriented vector bundle, such that the rank of $V$ is same as the dimension of $X$. Then the following are true: \\\\\n\\hspace*{0.5cm}\\textup{(1)} The closure of $X$ is an algebraic variety and defines a homology class.\\\\\n\\hspace*{0.5cm}\\textup{(2)} The zero set of a generic smooth section $s: M \\longrightarrow V$ intersects $X$ transversely and does not intersect $\\overline{X}-X$ anywhere. 
\\\\\n\\hspace*{0.5cm}\\textup{(3)} The number of zeros of such a section inside $X$, counted with signs, \nis the Euler class of $V$ evaluated on the homology class $[\\overline{X}]$, i.e., \n\\bgd\n|\\pm s^{-1}(0) \\cap \\overline{X}| = |\\pm s^{-1}(0) \\cap X| = \\big\\langle e(V), ~[\\overline{X}] \\big\\rangle.\n\\edd\n\\end{thm}\n\n\\begin{rem}\nAll the subsequent statements we make are true provided $d$ is sufficiently large. The precise bound on $d$ is given in section \\ref{bundle_sections}. Although the results of this paper are an extension of \\cite{BM13}, our aim has been to keep this paper self-contained. Ideally, a reader not familar with \\cite{BM13} should have no difficulty following this paper.\n\\end{rem}\n\n\n\\hf\\hf We will now explain our strategy to compute $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k, n)$. \nThe strategy is very similar to that of computing $\\mathcal{N}(\\mathfrak{X}_k, n)$, which \nwas the content of \\cite{BM13}. Let $\\mathrm{X}_1$ and $\\mathrm{X}_2$ be two subsets of $\\mathcal{D} \\times \\mathbb{P}^2$. Then we define \n\\begin{align*}\n\\mathrm{X}_1 \\circ \\mathrm{X}_2 &:= \\{ (\\tilde{f}, \\tilde{p}_1, \\tilde{p}_2) \\in \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2: (\\tilde{f}, \\tilde{p}_1) \\in \\mathrm{X}_1, ~~(\\tilde{f}, \\tilde{p}_2) \\in \\mathrm{X}_2, ~~\\tilde{p}_1 \\neq \\tilde{p}_2 \\}. \n\\end{align*}\nNext, given a subset $\\mathrm{X}$ of $\\mathcal{D} \\times \\mathbb{P}^2$ we define \n\\begin{align*}\n\\Delta \\mathrm{X} &:= \\{ (\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2: (\\tilde{f}, \\tilde{p}) \\in \\mathrm{X} \\}. 
\n\\end{align*}\nSimilarly, let $\\mathrm{X}_1$ and $\\mathrm{X}_2$ be two subsets of $\\mathcal{D} \\times \\mathbb{P}^2$ and $\\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2$ respectively.\nThen we define \n\\begin{align*}\n\\mathrm{X}_1 \\circ \\mathrm{X}_2 &:= \\{ (\\tilde{f}, \\tilde{p}_1, l_{\\tilde{p}_2}) \\in \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2: (\\tilde{f}, \\tilde{p}_1) \\in \\mathrm{X}_1, ~~(\\tilde{f}, l_{\\tilde{p}_2}) \\in \\mathrm{X}_2, ~~\\tilde{p}_1 \\neq \\tilde{p}_2 \\}. \n\\end{align*}\nFinally, given a subset $\\mathrm{X}$ of $\\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2$ we define \n\\begin{align*}\n\\Delta \\mathrm{X} &:= \\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T \\mathbb{P}^2: (\\tilde{f}, l_{\\tilde{p}}) \\in \\mathrm{X} \\}. \n\\end{align*}\n\\noindent The following result is clear from the definition of closure.\n\\begin{lmm}\n\\label{closure_obvious}\nWe have the following equality of sets \n\\begin{align*}\n \\overline{\\mathrm{X}_1 \\circ \\mathrm{X}_2} & = \\overline{\\overline{\\mathrm{X}}_1 \\circ \\mathrm{X}_2} = \\overline{\\mathrm{X}_1 \\circ \\overline{\\mathrm{X}}_2} = \\overline{\\overline{\\mathrm{X}}_1 \\circ \\overline{\\mathrm{X}}_2}. \n\\end{align*}\n\\end{lmm}\n\\hf\\hf Given a singularity $\\mathfrak{X}_k$, we also denote by $\\mathfrak{X}_k$ the \\textit{space} of \ncurves of degree $d$ with a marked point $\\tilde{p}$ such that the curve has a singularity of type $\\mathfrak{X}_k$ at $\\tilde{p}$. \nSimilarly, $\\mathcal{A}_1\\circ \\mathfrak{X}_k$ is the \\textit{space} of degree $d$ curves with two distinct marked points $\\tilde{p}_1$ and $\\tilde{p}_2$ such that the curve has a node at $\\tilde{p}_1$ and a singularity of type $\\mathfrak{X}_k$ at $\\tilde{p}_2$. 
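Before turning to these spaces, it may help to recall, as a sketch (the notation $\mathcal{L}$ below is ours and is not used elsewhere in the paper), the simplest instance of the Euler class strategy of Theorem \ref{Main_Theorem}: counting one-nodal curves through $\delta_d - 1$ generic points. The conditions $f(p) = 0$ and $\nabla f|_{p} = 0$ define a section of $\mathcal{L} \oplus (\mathcal{L} \otimes T^*\mathbb{P}^2)$ over $\mathcal{D} \times \mathbb{P}^2$, where $\mathcal{L} := \gamma_{\mathcal{D}}^* \otimes \gamma_{_{\mathbb{P}^2}}^{*d}$.

```latex
% Sketch: with y = c_1(\gamma_{\mathcal{D}}^*), a = c_1(\gamma_{\mathbb{P}^2}^*),
% c_1(\mathcal{L}) = y + da and c(T^*\mathbb{P}^2) = 1 - 3a + 3a^2, one gets
\begin{align*}
\mathcal{N}(\mathcal{A}_1, 0)
 &= \big\langle e\big(\mathcal{L}\oplus(\mathcal{L}\otimes T^*\mathbb{P}^2)\big)\,
     y^{\delta_d - 1}, ~[\mathcal{D}\times\mathbb{P}^2] \big\rangle \\
 &= \big\langle (y + da)\big((y + da)^2 - 3a(y + da) + 3a^2\big)\,
     y^{\delta_d - 1}, ~[\mathcal{D}\times\mathbb{P}^2] \big\rangle
  = 3(d-1)^2,
\end{align*}
% extracting the coefficient of y^{\delta_d} a^2: 3d^2 - 6d + 3 = 3(d-1)^2.
```

The answer $3(d-1)^2$ is the classical degree of the discriminant hypersurface of degree $d$ curves.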
Note that except when $\\mathfrak{X}_k=\\mathcal{A}_1$, the space $\\mathcal{A}_1\\circ\\mathfrak{X}_k$ {\\it is} the fibre product $\\mathcal{A}_1\\times_\\mathcal{D} \\mathfrak{X}_k$.\\\\\n\\hf\\hf Let $\\tilde{p}_1, \\tilde{p}_2, \\ldots , \\tilde{p}_{\\delta_d -(k+1+n)}$ be \n$\\delta_d-(k+1+n)$ generic points in $\\mathbb{P}^2$ \nand $\\mathrm{L}_1, \\mathrm{L}_2, \\ldots, \\mathrm{L}_n $ be $n$ generic lines \nin $\\mathbb{P}^2$. Define the following sets\n\\begin{align}\n\\label{hyperplane} \n\\Hpl_i& := \\{ \\tilde{f} \\in \\mathcal{D}: f(p_i)=0 \\}, \\qquad \\Hpl_i^* := \\{ \\tilde{f} \\in \\mathcal{D}: f(p_i)=0, \\nabla f|_{p_i} \\neq 0 \\}, \\nonumber \\\\\n\\hat{\\Hpl}_i& := \\Hpl_i \\times \\mathbb{P}^2 \\times \\mathbb{P}^2, \\qquad \\hat{\\Hpl}_i^* := \\Hpl_i^* \\times \\mathbb{P}^2 \\times \\mathbb{P}^2 \\qquad \\textnormal{and} \\qquad \\hat{\\mathrm{L}}_i := \\mathcal{D} \\times\\mathbb{P}^2 \\times \\mathrm{L}_i. \n\\end{align}\nBy definition, our desired number $\\mathcal{N}(\\mathcal{A}_1 \\mathfrak{X}_k,n)$ is the cardinality of \nthe set\n\\bge\n\\label{number_Xk_defn}\n\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n) := |\\mathcal{A}_1 \\circ \\mathfrak{X}_k \\cap \\hat{\\Hpl}_1 \\cap \\ldots \\cap \\hat{\\Hpl}_{\\delta_d-(n+1+k)} \\cap \n\\hat{\\mathrm{L}}_1 \\cap \\ldots \\cap \\hat{\\mathrm{L}}_n|.\n\\ede\nLet us now clarify an important point to avoid confusion: as per our notation, \nthe number $\\mathcal{N}(\\mathcal{A}_1 \\mathcal{A}_1,0)$ \nis the number of degree $d$ curves through $\\delta_d-2$ generic points having \ntwo \\textit{ordered} nodes. To find the corresponding number of curves where the \nnodes are \\textit{unordered}, we have to divide by $2$.\\\\\n\\hf\\hf We will now describe the various steps involved to obtain an explicit formula for $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$. 
\n\n\\begin{step} \nOur first observation is that if $d$ is sufficiently large then $\\mathcal{A}_1 \\circ \\mathfrak{X}_k$ is a smooth algebraic variety and its closure defines a homology class.\n\\begin{lmm}{\\bf (cf. section \\ref{bundle_sections})}\nThe space $\\mathcal{A}_1 \\circ \\mathfrak{X}_k$ is a smooth subvariety of $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2$ of dimension $\\delta_d-k$. \n\\end{lmm}\n\\end{step}\n\n\\begin{step}\nNext we observe that if the points and lines are chosen generically, \nthen the corresponding hyperplanes and lines defined in \\eqref{hyperplane} will intersect our space \n$\\mathcal{A}_1 \\circ \\mathfrak{X}_k$ transversely. Moreover, they would not intersect any extra points in the closure. \n\\begin{lmm}\n\\label{gpl2}\nLet $\\tilde{p}_1, \\tilde{p}_2, \\ldots , \\tilde{p}_{\\delta_d -(k+1+n)}$ be \n$\\delta_d-(k+1+n)$ generic points in $\\mathbb{P}^2$ \nand $\\mathrm{L}_1, \\mathrm{L}_2, \\ldots, \\mathrm{L}_n $ be $n$ generic lines \nin $\\mathbb{P}^2$. Let $\\hat{\\Hpl}_i$, $\\hat{\\Hpl}_i^*$ and $\\hat{\\mathrm{L}}_i$ be as defined in \n\\eqref{hyperplane}. \nThen \n\\bgd\n\\overline{\\mathcal{A}_1 \\circ \\mathfrak{X}}_k \\cap \\hat{\\Hpl}_1 \\cap \\ldots \\cap \\hat{\\Hpl}_{\\delta_d -(k+n+1)} \n\\cap \\hat{\\mathrm{L}}_1 \\cap \\ldots \\cap \\hat{\\mathrm{L}}_n = \n\\mathcal{A}_1 \\circ \\mathfrak{X}_k \\cap \\hat{\\Hpl}_1^* \\cap \\ldots \\cap \\hat{\\Hpl}_{\\delta_d -(k+n+1)}^* \n\\cap \\hat{\\mathrm{L}}_1 \\cap \\ldots \\cap \\hat{\\mathrm{L}}_n \n\\edd\nand every intersection is transverse. \n\\end{lmm}\nWe omit the details of the proof; it follows from an application of the families transversality theorem and Bertini's theorem. \nThe details of this proof can be found in \\cite{BM_Detail}. \n \n\\begin{notn}\n\\label{tau_bundle_defn}\nLet $\\gamma_{\\mathcal{D}}\\longrightarrow \\mathcal{D}$ and $\\gamma_{_{\\mathbb{P}^2}}\\longrightarrow\\mathbb{P}^2$ denote the tautological line bundles. 
If $c_1(V)$ denotes the first Chern class of a vector bundle then we set \n\\bgd\ny : = c_1(\\gamma_{\\mathcal{D}}^*) \\in H^{2}(\\mathcal{D}; \\mathbb{Z}), \\qquad \na := c_1(\\gamma_{_{\\mathbb{P}^2}}^*) \\in H^{2}(\\mathbb{P}^2; \\mathbb{Z}). \n\\edd\n\\end{notn}\n \n\\hf\\hf As a consequence of Lemma \\ref{gpl2} \nwe obtain the following fact: \n\\begin{lmm}\n\\label{gpl}\nThe number $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ is given by~ \n$$ \\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k, n) =\n\\big\\langle (\\pi_{\\mathcal{D}}^*y)^{\\delta_d-(n+k+1)} (\\pi_2^*a)^n, ~[\\overline{\\mathcal{A}_1 \\circ \\mathfrak{X}}_k] \\big\\rangle$$\n where ~$\\pi_{\\mathcal{D}}, \\pi_1, \\pi_2: \\mathcal{D} \\times \\mathbb{P}_1^2 \\times \\mathbb{P}^2_2 \n \\longrightarrow \\mathcal{D}, \\mathbb{P}^2_1, \\mathbb{P}^2_2 $\n are the projection maps. \n\\end{lmm}\n\\end{step}\n\\noindent \\textbf{Proof: } This follows from Theorem \\ref{Main_Theorem_pseudo_cycle} and Lemma \\ref{gpl2}. \\qed \\\\\n\n\\noindent As explained in \\cite{BM13}, \nthe space $\\mathfrak{X}_k$ is not easy to describe directly and hence \ncomputing $\\mathcal{N}(\\mathfrak{X}_k,n)$ \\textit{directly} is not easy. As a result we define another space $\\mathcal{P} \\mathfrak{X}_k \\subset \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2.$ This is the space of curves $\\tilde{f}$, of degree $d$, with a marked point $\\tilde{p} \\in \\mathbb{P}^2$ and a marked direction $l_{\\p} \\in \\mathbb{P} T_{\\tilde{p}}\\mathbb{P}^2$, such that the curve $f$ has a singularity of type $\\mathfrak{X}_k$ at $\\tilde{p}$ and certain directional derivatives \\textit{vanish along $l_{\\p}$}, and certain other derivatives \\textit{do not vanish}. 
To take a simple example, $\\mathcal{P} \\mathcal{A}_2$ is the space of curves $\\tilde{f}$ with a marked point $\\tilde{p}$ and a marked direction $l_{\\p}$ such that $f$ has an $\\mathcal{A}_2$-node at $\\tilde{p}$ and the Hessian is degenerate along $l_{\\p}$, but the third derivative along $l_{\\p}$ is non-zero. It turns out that this space is much easier to describe. We have defined $\\mathcal{P} \\mathfrak{X}_k$ in section \\ref{definition_of_px}. Similarly, instead of dealing with the space $\\mathcal{A}_1 \\circ \\mathfrak{X}_k$, we deal with the space $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k$. \n\n\\begin{step}\nNext we observe that since $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k$ is described locally as the vanishing of certain sections that are transverse to the zero set, these are \\text{smooth} algebraic varieties.\n\n\\begin{lmm}{\\bf (cf. section \\ref{bundle_sections})}\n\\label{pr_sp_pseudo}\nThe space \n$\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k$ is a smooth subvariety \nof $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ of dimension $\\delta_d-(k+1)$. \n\\end{lmm}\n\n\\begin{notn}\n\\label{tau_bundle_pv_defn}\nLet $\\tilde{\\gamma} \\longrightarrow \\mathbb{P} T\\mathbb{P}^2$ be the tautological line bundle. The first Chern class of the dual will be denoted by $\\lambda := c_1(\\tilde{\\gamma}^*)\\in H^{2}(\\mathbb{P} T\\mathbb{P}^2; \\mathbb{Z})$. \n\\end{notn}\n\\noindent Lemma \\ref{pr_sp_pseudo} now motivates the following definition: \n \\begin{defn}\n\\label{up_number_defn}\n We define the number \n$\\mathcal{N}(\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k,n,m)$ as \n\\bge\n\\label{num_proj}\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k, n,m) := \\big\\langle \\pi_{\\mathcal{D}}^*y^{\\delta_d-(k+n+m+1)} \\pi_2^*a^n \\pi_2^*\\lambda^m, ~[\\overline{\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}}_k] \\big\\rangle. 
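Schematically (our paraphrase; the precise definition is given in section \ref{definition_of_px}), with $v$ any nonzero vector spanning the marked direction $l_{\tilde{p}}$, the conditions cutting out $\mathcal{P} \mathcal{A}_2$ read:

```latex
% Schematic description of PA_2 (a paraphrase, not the verbatim definition);
% v denotes any nonzero vector spanning the marked direction l_p.
\begin{align*}
\mathcal{P} \mathcal{A}_2 = \big\{ (\tilde{f}, l_{\tilde{p}}) : \;
  & f(\tilde{p}) = 0, \quad \nabla f|_{\tilde{p}} = 0, \quad
    \nabla^2 f|_{\tilde{p}}(v, \cdot) = 0, \\
  & \nabla^2 f|_{\tilde{p}} \neq 0, \quad
    \nabla^3 f|_{\tilde{p}}(v, v, v) \neq 0 \big\}.
\end{align*}
```

The first three (closed) conditions describe a cusp whose degenerate direction is $l_{\tilde{p}}$, while the two open conditions rule out the more degenerate singularities in the closure.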
\n\\ede\nwhere ~$\\pi_{\\mathcal{D}}, \\pi_1, \\pi_2: \\mathcal{D} \\times \\mathbb{P}_1^2 \\times \\mathbb{P} T\\mathbb{P}^2_2 \\longrightarrow \\mathcal{D}, \\mathbb{P}^2_1, \\mathbb{P} T\\mathbb{P}^2_2 $ are the projection maps. \n\\end{defn}\n\n\\noindent The next Lemma relates the numbers $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathfrak{X}_k,n,0)$ and \n$\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$. \n\\begin{lmm}\n\\label{up_to_down}\nThe projection map $ \\pi: \\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k \\longrightarrow \\mathcal{A}_1 \\circ \\mathfrak{X}_k $ is one to one if $\\mathfrak{X}_k = \\mathcal{A}_k, \\mathcal{D}_k, \\mathcal{E}_6, \\mathcal{E}_7$ or $\\mathcal{E}_8$ except for $\\mathfrak{X}_k = \\mathcal{D}_4$ when it is three to one. In particular, \n\\bge\n\\label{up_down_equation}\n\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k, n) = \\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_k, n,0) \n\\qquad \\textnormal{if} ~~\\mathfrak{X}_k \\neq \\mathcal{D}_4 \\qquad \\textnormal{and} \n\\qquad \\mathcal{N}(\\mathcal{A}_1 \\mathcal{D}_4, n) = \n\\frac{\\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathcal{D}_4,n,0)}{3}.\n\\ede\n\\end{lmm}\n\\noindent \\textbf{Proof: } This is identical to the proof of the corresponding lemma in \\cite{BM13}.\\qed\n\\end{step}\n\\noindent To summarize, the \\textit{definition} of $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ is \\eqref{number_Xk_defn}. \nLemma \\ref{gpl} equates this number to a \\textit{topological} computation. \nWe then introduce another number $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathfrak{X}_k,n,m)$ in Definition \\ref{up_number_defn} and relate it to \n$\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n) $ in Lemma \\ref{up_to_down}. 
\nIn other words, we do not compute $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$ \\textit{directly}; \nwe compute it \\textit{indirectly} by first computing $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k, n,m)$ and then using Lemma \\ref{up_to_down}. \\\\\n\\hf\\hf We now give a brief idea of how to compute these numbers. \nSuppose we want to compute $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k, n, m)$. \nWe first find some singularity $\\mathfrak{X}_l$ for which \n$\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_l, n,m)$ has been calculated \nand which contains $\\mathfrak{X}_k$ in its closure, i.e., we want $\\mathcal{P} \\mathfrak{X}_k$ to be a subset of $\\overline{\\mathcal{P} \\mathfrak{X}}_{l}$. \nUsually, $l=k-1$ but it is not necessary. Our next task is to \ndescribe the closure of $ \\mathcal{P} \\mathfrak{X}_l$ and $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_{l}$ explicitly as\n\\begin{align}\n\\overline{\\mathcal{P} \\mathfrak{X}}_{l}&= \\mathcal{P} \\mathfrak{X}_l \\sqcup \\overline{\\mathcal{P} \\mathfrak{X}}_k \\cup \\mathcal{B}_1 \\qquad \\textnormal{and} \\label{stratification_general_one_point} \\\\\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}}_{l} & = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}_l \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathfrak{X}}_l- \\mathcal{P} \\mathfrak{X}_l) \\sqcup \\big( \\Delta \\mathcal{B}_2 \\big) \\nonumber \\\\ \n& = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}_l \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathfrak{X}}_k \\cup \\mathcal{B}_1) \\sqcup \\big( \\Delta \\mathcal{B}_2 \\big), \\qquad \n\\textnormal{where} \\label{stratification_general} \\\\\n\\Delta\\mathcal{B}_2 & := \\{ (\\tilde{f}, \\tilde{p}_1, l_{\\tilde{p}_2}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}}_l: \\tilde{p}_1 = \\tilde{p}_2 \\}. 
\\nonumber \n\\end{align}\nNote that $\\overline{\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_{l}} = \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}_l}$.\nThe main content of \\cite{BM13} was to express $\\overline{\\mathcal{P} \\mathfrak{X}}_l$ as in \\eqref{stratification_general_one_point}. \nThe main content of this paper is to \ncompute $\\Delta \\mathcal{B}_2$, i.e., \nexpressing $\\overline{\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}}_l$ as in \\eqref{stratification_general}.\nConcretely, computing $\\Delta \\mathcal{B}_2$ means figuring out what happens to an $\\mathfrak{X}_k$ singularity when it collides with an $\\mathcal{A}_1$-node. \nAs a simple example, when two nodes collide, we get an \n\n\\begin{figure}[h!]\n\\vspace*{0.2cm}\n\\begin{center}\\includegraphics[scale = 0.5]{collide1.eps}\\vspace*{-0.2cm}\\end{center}\n\\caption{Two nodes colliding into a tacnode}\n\\end{figure}\n\n$\\mathcal{A}_3$-node (which is basically the content of Lemma \\ref{cl_two_pt}, statement \\ref{a1a1_up_cl}). \nBy Definition \\ref{up_number_defn} and Theorem \\ref{Main_Theorem_pseudo_cycle}, \n\\bgd\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k, n, m) := \\big\\langle e(\\mathbb{W}_{n,m,k}^{1}), ~[\\overline{\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}}_k] \\big\\rangle = \\big|\\pm \\mathcal{Q}^{-1}(0) \\cap \\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k\\big|, \n\\edd\nwhere\n\\bge\n\\label{generic_Q} \n\\mathcal{Q} :\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}_{n,m,k}^{\\delta}:= \\bigg({\\textstyle \\bigoplus}_{i=1}^{\\delta_d -(n+m+k+\\delta)}\\pi_{\\mathcal{D}}^*\\gamma_{\\mathcal{D}}^*\\bigg)\\oplus\\bigg({\\textstyle \\bigoplus}_{i=1}^{n} \n\\pi_1^*\\gamma_{_{\\mathbb{P}^2}}^*\\bigg)\\oplus\\bigg({\\textstyle \\bigoplus}_{i=1}^{m}\\pi_2^*\\tilde{\\gamma}^* \\bigg)\n\\ede\nis a generic smooth section. 
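Returning to the collision of two nodes pictured above, a minimal local model (for illustration only; this family plays no role in the proofs) is

```latex
% For t \neq 0 the curve f_t = 0 has A_1-nodes at (0,0) and (t,0); at t = 0
% the two nodes collide into an A_3-singularity (tacnode) at the origin.
f_t(x,y) = y^2 - x^2(x - t)^2
         = \big(y - x(x - t)\big)\big(y + x(x - t)\big),
\qquad
f_0(x,y) = y^2 - x^4 = (y - x^2)(y + x^2).
```

Over $\mathbb{C}$, the germ $y^2 - x^4$ is equivalent to the model $y^2 + x^4$ of Definition \ref{singularity_defn}, i.e., a tacnode, in accordance with Lemma \ref{cl_two_pt}, statement \ref{a1a1_up_cl}.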
\nIn \\cite{BM13}, we constructed a section $\\Psi_{\\mathcal{P} \\mathfrak{X}_k}$ \nof an appropriate vector bundle\n\\bgd\n\\mathbb{V}_{\\mathcal{P} \\mathfrak{X}_k}\\longrightarrow \\overline{\\mathcal{P} \\mathfrak{X}}_{l} = \\mathcal{P} \\mathfrak{X}_{l} \\cup \\overline{\\mathcal{P}\\mathfrak{X}}_k \\cup \\mathcal{B}_1\n\\edd\nwith the following properties: the section ~$ \\Psi_{\\mathcal{P} \\mathfrak{X}_k}: \\overline{\\mathcal{P} \\mathfrak{X}}_k \\longrightarrow \\mathbb{V}_{\\mathcal{P} \\mathfrak{X}_k}$\ndoes not vanish on $\\mathcal{P} \\mathfrak{X}_{l}$ and it vanishes \\textit{transversely} on $\\mathcal{P} \\mathfrak{X}_k$. \nWith a similar reasoning one can show that the induced section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k}: \\overline{\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}}_l \\longrightarrow \\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathfrak{X}_k} $$\nvanishes \\textit{transversely} on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_k$.\\footnote{However the bound on $d$ for which transversality \nis achieved increases.} \nHere $\\pi_2$ is the following projection map \n$$ \\pi_2: \\mathcal{D}\\times \\mathbb{P}_1^2 \\times \\mathbb{P} T\\mathbb{P}^2_2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T \\mathbb{P}^2_2.$$\nSince $\\Psi_{\\mathcal{P} \\mathfrak{X}_k}$ does not vanish on \n$\\mathcal{P} \\mathfrak{X}_{l}$, the section $ \\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k}$ does not \nvanish on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathfrak{X}_l$. 
\nTherefore, \n\\begin{align}\n\\Big\\langle e(\\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathfrak{X}_{k}} \\oplus \\mathbb{W}_{n,m,k}^1), ~\\big[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}}_{l}\\big] \\Big\\rangle \n& = \\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_k,n,m) + \\mathcal{C}_{\\mathcal{A}_1 \\circ \\mathcal{B}_1}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q}) \\nonumber \\\\ \n& \\,\\,+ \\mathcal{C}_{\\Delta \\mathcal{B}_2}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q}) \\label{Euler_equal_number_plus_bdry}\n\\end{align}\nwhere $\\mathcal{C}_{\\mathcal{A}_1 \\circ \\mathcal{B}_1}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q})$ and $\\mathcal{C}_{\\Delta \\mathcal{B}_2}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q})$ \nare the contributions of the section\n$\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q}$ to the Euler class from the points of $\\mathcal{A}_1 \\circ \\mathcal{B}_1$ and $\\Delta \\mathcal{B}_2$ \nrespectively. \nThe number $\\mathcal{C}_{\\mathcal{A}_1 \\circ \\mathcal{B}_1}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q})$ was computed \nin \\cite{BM13}. The main content of this paper is to compute $\\mathcal{C}_{\\Delta \\mathcal{B}_2}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathfrak{X}_k} \\oplus \\mathcal{Q})$. \nOnce we have computed these numbers, we observe that the left hand side of \\eqref{Euler_equal_number_plus_bdry} \nis computable via splitting principle and the \nfact that $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_l, n,m)$ is known. Therefore, we get a recursive formula for $\\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_k,n,m)$ in terms of $\\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_l,n^{\\prime},m^{\\prime})$ and $\\mathcal{N}(\\mathcal{P} \\mathfrak{X}_{k+1}, n, m)$. 
The main result of \n\\cite{BM13} was to find an explicit formula for $\\mathcal{N}(\\mathcal{P} \\mathfrak{X}_{k+1}, n, m)$. Using this and iterations, \nwe get an explicit formula for $\\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_k,n,m)$. \nFinally, using \nLemma \\ref{up_to_down}, we get our desired numbers $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k, n)$.\n\\begin{eg}\nSuppose we wish to compute $\\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_5, n)$. \nThis can be deduced from the knowledge of $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_5, n,m)$. \nThe obvious singularities which have $\\mathcal{A}_5$-nodes in their closure are $\\mathcal{A}_4$-nodes. \nIn order to analyze the space $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4$, \nwe infer (cf. Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa5_cl}) that\n\\bgd\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_4- \\mathcal{P} \\mathcal{A}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\Big). \n\\edd\nBy \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A3cl}) we conclude that \n\\bgd\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_5 ) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\Big). 
\n\\edd\nThe corresponding line bundle $\\UL_{\\mathcal{P} \\mathcal{A}_5} \\longrightarrow \\overline{\\mathcal{P} \\mathcal{A}}_4$ with a \nsection $\\Psi_{\\mathcal{P} \\mathcal{A}_5}$ that does not vanish on $\\mathcal{P} \\mathcal{A}_4$ and vanishes transversely on \n$\\mathcal{P} \\mathcal{A}_5$ is defined in section \\ref{summary_vector_bundle_definitions}. \nIn section \\ref{bundle_sections}, we indicate that \n\\[ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_5}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 \\longrightarrow \\pi_2^* \\UL_{\\mathcal{P} \\mathcal{A}_5} \\] \nvanishes \non $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_5$ transversely. \nLet $\\mathcal{Q}$ be a generic section of the vector bundle \n\\bgd\n\\mathbb{W}_{n,m,5}^1 \\longrightarrow \\mathcal{D} \\times\\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2.\n\\edd\nBy \\cite{BM13} \n$\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathcal{Q} $ vanishes on all points of $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_5$ with a multiplicity \nof $2$. By Corollary \\ref{a1_pak_mult_is_2_Hess_neq_0} and \\ref{a1_pa4_mult_is_5_around_pe6},\n$\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathcal{Q} $ vanishes on all the points of $\\Delta \\mathcal{P} \\mathcal{A}_6$ and $\\Delta \\mathcal{P} \\mathcal{E}_6$ \nwith a multiplicity of $2$ and $5$ respectively. \nFurthermore, we also show that $\\Delta \\overline{\\PP \\D_7^{s}}$ is contained inside $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7$. \nSince the dimension of $\\Delta \\mathcal{P} \\mathcal{D}_7$ is one less than the rank of $\\pi_2^* \\UL_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathbb{W}_{n,m,5}^1$ \nand \n$\\mathcal{Q}$ is generic, $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathcal{Q} $ does \nnot vanish on $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7$. Hence, it does not vanish on $\\Delta \\overline{\\PP \\D_7^{s}}$. 
\nTherefore, we conclude that\n\begin{align}\n\big\langle e(\pi_2^*\UL_{\mathcal{P} \mathcal{A}_5} \oplus \mathbb{W}_{n,m,5}^1), ~~[\overline{\overline{\mathcal{A}}_1 \circ \mathcal{P} \mathcal{A}}_4] \big\rangle = & \,\,\mathcal{N}(\mathcal{A}_1\mathcal{P}\mathcal{A}_5,n,m)\nonumber\\\n& +2 \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_5, n, m)+ 2 \mathcal{N}(\mathcal{P} \mathcal{A}_6, n, m)+ 5\mathcal{N}(\mathcal{P} \mathcal{E}_6, n, m). \label{sample_computation}\n\end{align}\nThis gives us a recursive formula for $\mathcal{N}(\mathcal{A}_1 \mathcal{P} \mathcal{A}_5, n, m)$ \nin terms of $\mathcal{N}(\mathcal{A}_1 \mathcal{P} \mathcal{A}_4, n^{\prime}, m^{\prime})$, $\mathcal{N}(\mathcal{P} \mathcal{A}_6, n, m)$, \n$\mathcal{N}(\mathcal{A}_1 \mathcal{P} \mathcal{D}_5, n, m)$ and $\mathcal{N}(\mathcal{P} \mathcal{E}_6, n, m)$, \nwhich is \eqref{algopa5a1} in our algorithm. \n\end{eg}\n\n\begin{rem}\nWe remind the reader that $\mathcal{N}(\mathcal{P}\mathfrak{X}_k, n,m)$ has been defined in \cite{BM13}.\nThe definition \nis analogous to the definition of \n$\mathcal{N}(\mathcal{A}_1 \mathcal{P} \mathfrak{X}_k, n, m)$ as given in definition \ref{up_number_defn} in this paper. \n\end{rem}\n\n\n\hf\hf Now we describe the basic organization of our paper. In section \ref{algorithm_for_numbers} \nwe state the explicit algorithm to obtain the numbers $\mathcal{N}(\mathcal{A}_1\mathfrak{X}_k, n)$ in our MAIN THEOREM. \nIn section \ref{summary_notation_def} we recapitulate all the spaces, vector bundles \nand sections of vector bundles we encountered in the process of enumerating curves with one singular point. \nIn section \ref{bundle_sections} we introduce \nsome new notation needed for this paper and \nwrite down the relevant sections that are transverse to the zero set. \nThe proofs that these sections are transverse to the zero set can be found in \cite{BM_Detail}. 
\nIn section \\ref{closure_of_spaces} we stratify the space $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}}_k$ as described in \\eqref{stratification_general}. Along the way we also compute the \\textit{order} to which a certain section vanishes around certain points (i.e., the contribution of the section to the Euler class of a bundle). Finally, using the splitting principal, in section \\ref{Euler_class_computation} we compute the \nEuler class of the relevant bundles and obtain the recursive formula similar to \\eqref{sample_computation} above. \n\n\\begin{ack}\\nonumber \n\\textup{One of the crucial results of this paper is to compute the closure of relevant spaces, i.e., \nto stratify the space $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathfrak{X}}_k$ as described in \\eqref{stratification_general}. A key step here is to observe that certain sections are transverse to the zero set and utilize them to describe the neighborhood of a point. The second author is indebted to Aleksey Zinger for sharing his understanding of transversality and explaining this crucial idea to him (i.e., how to describe the neighborhood of a point using transversality of bundle sections). In addition, the second author is also grateful to Aleksey Zinger for suggesting several non trivial low degree checks to verify our formulas. One of those low degree checks proved to be crucial in figuring out a mistake the second author had made earlier. \\\\ \n\\hf \\hf The authors are grateful to Dennis Sullivan for sharing his perspective on this problem and indicating its connection to other areas of mathematics.}\n\\end{ack}\n\n\n\n\n\n\n\n\n\n\n\\section{Algorithm} \n\\label{algorithm_for_numbers}\n\\hf\\hf We now give an algorithm to compute the numbers $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k,n)$. 
\nEquations \\eqref{algoa1a1}-\\eqref{algope6a1} are recursive formulas for $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathfrak{X}_k,n,m)$ in terms of \n$\\mathcal{N}(\\mathcal{A}_1 \\mathcal{P} \\mathfrak{X}_{k-1}, n^{\\prime}, m^{\\prime})$ and \n$\\mathcal{N}(\\mathcal{P}\\mathfrak{X}_{k+1},n, m)$. In \\cite{BM13} we had obtained\nan explicit formula for $\\mathcal{N}(\\mathcal{P}\\mathfrak{X}_{k+1},n, m)$. Finally, using \nLemma \\ref{up_down_equation}, we get our desired numbers $\\mathcal{N}(\\mathcal{A}_1\\mathfrak{X}_k, n)$. We have implemented this algorithm in a Mathematica program to obtain the final answers. The program is available on our web page \\url{https:\/\/www.sites.google.com\/site\/ritwik371\/home}. We prove the formulas in section \\ref{Euler_class_computation}. \\\\\n\\hf \\hf First we note that \nusing the ring structure of $H^*(\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2; \\mathbb{Z})$, it is easy to see that \nfor every singularity type $\\mathfrak{X}_k$ we have\n\\begin{align}\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k,n,m)\n& = -3\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k,n+1,m-1)\n-3\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k,n+2,m-2) \\qquad\\forall ~~m\\ge2. 
\\label{ringp}\n\\end{align}\nWe now give recursive formulas for $\\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1, n)$ \nand $\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathfrak{X}_k, n, m)$: \n\\begin{eqnarray}\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1, n) &=& \\mathcal{N}(\\mathcal{A}_1,0) \\times \\mathcal{N}(\\mathcal{A}_1, n) \\nonumber \\\\ \n & & -\\big( \\mathcal{N}(\\mathcal{A}_1, n) + d \\mathcal{N}(\\mathcal{A}_1, n+1) + 3 \\mathcal{N}(\\mathcal{A}_2, n)\\big) \\label{algoa1a1} \\\\\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_2, n, 0) & = & 2 \\mathcal{N} (\\mathcal{A}_1\\mathcal{A}_1,n) + 2(d-3) \\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1,n+1) \\nonumber \\\\ \n & & - 2\\mathcal{N}(\\mathcal{P} \\mathcal{A}_3, n, 0)\\label{algopa20a1}\\\\\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_2, n, 1) & = & \\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1,n) + (2d-9) \\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1,n+1) + (d^2-9d+18) \\mathcal{N}(\\mathcal{A}_1\\mathcal{A}_1,n+2) \\nonumber \\\\ \n & & - 2\\mathcal{N}(\\mathcal{P} \\mathcal{A}_3, n, 1) - 3 \\mathcal{N}(\\mathcal{D}_4,n) \\label{algopa21a1}\\\\\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_3, n, m) & = & \\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_2, n, m ) + 3\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_2, n, m+1) + d\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_2, n+1, m) \\nonumber \\\\ \n & & -2\\mathcal{N}(\\mathcal{P} \\mathcal{A}_4,n,m) \\label{algopa3a1}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\mathcal{N}( \\mathcal{A}_1 \\mathcal{P} \\mathcal{A}_4, n, m) & = & 2\\mathcal{N}( \\mathcal{A}_1\\mathcal{P} \\mathcal{A}_3, n, m) + 2\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_3, n, m+1) + (2d-6)\\mathcal{N}( \\mathcal{A}_1\\mathcal{P} \\mathcal{A}_3, n+1, m) \\nonumber \\\\ \n & & -2\\mathcal{N}(\\mathcal{P} \\mathcal{A}_5, n,m) \\label{algopa4a1} \\\\\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{A}_5, n, m) & = & 3\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} 
\mathcal{A}_4, n, m) + \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_4, n, m+1) + (3d -12)\mathcal{N}( \mathcal{A}_1\mathcal{P} \mathcal{A}_4, n+1, m)\nonumber\\\n& & -2\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_5, n, m) -2\mathcal{N}(\mathcal{P} \mathcal{A}_6, n, m)- 5\mathcal{N}(\mathcal{P} \mathcal{E}_6, n, m) \label{algopa5a1}\\ \n\mathcal{N}( \mathcal{A}_1\mathcal{P}\mathcal{A}_6, n, m) & = & 4\mathcal{N}(\mathcal{A}_1 \mathcal{P}\mathcal{A}_5, n, m) +0\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_5, n, m+1) + (4d -18)\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_5, n+1, m) \nonumber \\ \n & & -4\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_6, n, m) - 3\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{E}_6, n, m)\nonumber\\\n & & -2\mathcal{N}(\mathcal{P} \mathcal{A}_7, n, m) -6\mathcal{N}(\mathcal{P} \mathcal{E}_7, n, m) \label{algopa6a1}\\\n\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n, 0) & = & \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_3, n, 0) -2\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_3, n, 1) + (d-6)\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{A}_3, n+1, 0) \nonumber \\\n & & -2\mathcal{N}(\mathcal{D}_5,n) \label{algopd4a1} \\ \n\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n, 1) & = & \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n, 0) + (d-9)\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n+1, 0) \label{algopd4a1_lambda} \\\n \n\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_5, n, m) & = & \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n, m) + \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n, m+1) + (d-3)\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_4, n+1, m) \nonumber \\ \n & & -2\mathcal{N}(\mathcal{P} \mathcal{D}_6,n,m) \label{algopd5a1} \\\n\mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_6, n, m) & = & \mathcal{N}(\mathcal{A}_1\mathcal{P} \mathcal{D}_5, n, m) + 
4\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{D}_5, n, m+1) + d\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{D}_5, n+1, m) \\nonumber \\\\ \n & & -2\\mathcal{N}(\\mathcal{P} \\mathcal{D}_7, n, m) - \\mathcal{N}(\\mathcal{P} \\mathcal{E}_7,n,m) \\label{algopd6a1} \\\\\n\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{E}_6, n, m) & = & \\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{D}_5, n, m) -\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{D}_5, n, m+1) + (d-6)\\mathcal{N}(\\mathcal{A}_1\\mathcal{P} \\mathcal{D}_5, n+1, m) \\nonumber \\\\ \n & & -\\mathcal{N}(\\mathcal{P} \\mathcal{E}_7, n,m) \\label{algope6a1} \n\\end{eqnarray}\n\n\n\\section{Review of definitions and notations for one singular point} \n\\label{summary_notation_def}\n\\hf\\hf We recall a few definitions and notation from \\cite{BM13} so that our paper is self-contained. \n\\subsection{The vector bundles involved}\\label{summary_vector_bundle_definitions}\n\\hf\\hf The first three of the vector bundles we will encounter, the tautological line bundles, have been defined in notations \\ref{tau_bundle_defn} and \\ref{tau_bundle_pv_defn}. \nLet $\\pi:\\mathcal{D}\\times \\mathbb{P} T\\mathbb{P}^2\\longrightarrow \\mathcal{D}\\times\\mathbb{P}^2$ be the projection map. \n\\begin{rem}\n\\label{an1_again}\nWe will make the abuse of notation of usually omitting the pullback maps \n$\\pi_{\\mathcal{D}}^*$ and $\\pi_{\\mathbb{P}^2}^*$. \nOur intended meaning should be clear when we say, for instance, $\\gamma_{\\mathcal{D}}^* \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2$. \nHowever, we will not omit to write the pullback via $\\pi^*$. 
\n\\end{rem}\nWe have the following bundles over $\\mathcal{D}\\times\\mathbb{P}^2$ :\n\\begin{eqnarray*}\n\\mathcal{L}_{\\mathcal{A}_0} &:= & \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\\\\n\\mathcal{V}_{\\mathcal{A}_1} &:= & \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes T^*\\mathbb{P}^2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\\\\n\\mathcal{L}_{\\mathcal{A}_2} &:= & (\\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes \\Lambda^2 T^*\\mathbb{P}^2)^{\\otimes 2} \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\\\ \n\\mathcal{V}_{\\mathcal{D}_4} &:= & \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes \n\\textnormal{Sym}^2 (T^*\\mathbb{P}^2 \\otimes T^*\\mathbb{P}^2) \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\\\\n\\mathcal{V}_{\\mathcal{X}_8} &:= & \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes \n\\textnormal{Sym}^3 (T^*\\mathbb{P}^2 \\otimes T^*\\mathbb{P}^2 \\otimes T^*\\mathbb{P}^2) \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \n\\end{eqnarray*}\nAssociated to the map $\\pi$ there are pullback bundles \n\\begin{eqnarray*}\n\\UL_{\\hat{\\A}_0} &:= &\\pi^* \\mathcal{L}_{\\mathcal{A}_0} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\mathbb{V}_{\\hat{\\A}_1} &:= & \\pi^{*} \\mathcal{V}_{\\mathcal{A}_1} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\mathbb{V}_{\\hat{\\D}_4} &:= & \\pi^{*} \\mathcal{V}_{\\mathcal{D}_4} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\mathbb{V}_{ \\hat{\\XC}_8} &:= & \\pi^{*} \\mathcal{V}_{\\mathcal{X}_8} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2} &:= & \\tilde{\\gamma}^*\\otimes \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes \\pi^* T^*\\mathbb{P}^2 
\n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\mathbb{V}_{\\mathcal{P} \\mathcal{D}_5} &:= & \\tilde{\\gamma}^{*2}\\otimes \\gamma_{\\mathcal{D}}^*\\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\otimes \\pi^* T^*\\mathbb{P}^2 \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2.\n\\end{eqnarray*}\nFinally, we have \n\\begin{align*}\n\\UL_{\\mathcal{P} \\mathcal{D}_4} &:= (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*2} \\otimes \\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \n \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\ \n \\UL_{\\mathcal{P} \\mathcal{D}_5} &:= \\tilde{\\gamma}^{*2} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^* \\otimes \n \\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\UL_{\\mathcal{P} \\mathcal{D}_5^{\\vee}} &:= \\tilde{\\gamma}^{*2} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*4} \\otimes \n \\gamma_{\\mathcal{D}}^{*2} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*2d} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n \\UL_{\\mathcal{P} \\mathcal{D}_6^{\\vee}} &:= \\tilde{\\gamma}^{*8} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*4} \\otimes \n \\gamma_{\\mathcal{D}}^{* 5} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*5d} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\UL_{\\mathcal{P} \\mathcal{E}_6} &:= \\tilde{\\gamma}^{*} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*2} \\otimes \n \\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2\\\\\n\\UL_{\\mathcal{P} \\mathcal{E}_7} &:= \\tilde{\\gamma}^{*4} \\otimes \\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d}\n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\UL_{\\mathcal{P} \\mathcal{E}_8} &:= \\tilde{\\gamma}^{*3} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^* \\otimes 
\\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d}\n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\UL_{\\mathcal{P} \\mathcal{X}_8} &:= (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*3} \\otimes \\gamma_{\\mathcal{D}}^* \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\n\\UL_{\\mathcal{J}} &:= \\tilde{\\gamma}^{*9}\\otimes(T\\mathbb{P}^2\/\\tilde{\\gamma})^{*3}\\otimes \\gamma_{\\mathcal{D}}^{*3} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d} \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\\nk \\geq 3 \\qquad \\UL_{\\mathcal{P} \\mathcal{A}_k} &:= \\tilde{\\gamma}^{*k} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*(2k-6)} \\otimes \\gamma_{\\mathcal{D}}^{*(k-2)} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*(d(k+1)-3d)} \n\\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\\\ \nk \\geq 6 \\qquad \\UL_{\\mathcal{P} \\mathcal{D}_k} & := \\tilde{\\gamma}^{*(k-2 + \\epsilon_k)} \\otimes (T\\mathbb{P}^2\/\\tilde{\\gamma})^{*(2\\epsilon_k)} \\otimes \\gamma_{\\mathcal{D}}^{*(1+\\epsilon_k)} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*(d(1+ \\epsilon_k))} \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2,\n\\end{align*}\nwhere $\\epsilon_6 =0$, $\\epsilon_7=1$ and $\\epsilon_8 =3$. 
\n\nThe reason for defining these bundles will become clearer in section \ref{summary_sections_of_vector_bundle_definitions}, when we define \nsections of these bundles.\\\n\hf \hf With the abuse of notation as explained in Remark \ref{an1_again}, the bundle $T\mathbb{P}^2\/\tilde{\gamma}$ is the quotient of the bundle $V$ by the bundle $W$, \nwhere $V$ is the pullback of the tangent bundle $T\mathbb{P}^2\to \mathbb{P}^2$ via $\mathcal{D}\times\mathbb{P} T\mathbb{P}^2\stackrel{\pi}{\rightarrow}\mathcal{D}\times \mathbb{P}^2\to \mathbb{P}^2$ and $W$ \nis the pullback of $\tilde{\gamma}\to \mathbb{P} T \mathbb{P}^2$ via $\mathcal{D}\times \mathbb{P} T \mathbb{P}^2\to \mathbb{P} T \mathbb{P}^2$.\n\n\subsection{Sections of Vector Bundles}\label{summary_sections_of_vector_bundle_definitions}\n\hf\hf Let us recall the definition of the \textit{vertical derivative}. \n\begin{defn}\n\label{vertical_derivative_defn}\n\noindent Let $\pi:V\longrightarrow M$ be a holomorphic vector bundle of rank $k$ and $s:M\longrightarrow V$ be a holomorphic section. Suppose $h: V|_{\mathcal{U}} \longrightarrow \mathcal{U}\times \mathbb{C}^{k}$ is a holomorphic trivialization of $V$ and $\pi_{1}, \pi_{2}: \mathcal{U} \times \mathbb{C}^{k} \longrightarrow \mathcal{U}, \mathbb{C}^{k}$ are the projection maps.\nLet \n\begin{align}\n\label{section_local_coordinate}\n\hat{s}&:= \pi_{2} \circ h \circ s.\n\end{align}\nFor $q\in \mathcal{U}$, we define the {\it vertical derivative} of $s$ to be the $\mathbb{C}$-linear map \n\begin{align*}\n\nabla s|_{q}: T_{q}M \longrightarrow V_{q}, \qquad \n\nabla s|_{q} & := (\pi_{2} \circ h)|_{V_{q}}^{-1} \circ d \hat{s}|_q,\n\end{align*}\nwhere $V_{q} = \pi^{-1}(q)$, the fibre at $q$. 
In particular, if $v \in T_q M$ \nis given by a \nholomorphic map $\gamma: \mathrm{B}_{\epsilon}(0) \longrightarrow M$ such that $\gamma (0)=q$ and $\frac{\partial \gamma}{\partial z}\big|_{z=0} = v$, then\n\begin{align*} \n\nabla s|_q (v) & := (\pi_{2} \circ h)|^{-1}_{V_{q}} \circ \n\frac{\partial \hat{s}(\gamma(z))}{\partial z}\bigg|_{z=0}\n\end{align*}\nwhere $\mathrm{B}_{\epsilon}$ is an open $\epsilon$-ball in $\mathbb{C}$ around the origin.$\footnote{Not every tangent vector is given by a holomorphic map; \nhowever, combined with \nthe fact that $\nabla s|_q $ is $\mathbb{C}$-linear, this definition determines $\nabla s|_q$ completely.} $ \nFinally, if $v, w \in T_q M$ are tangent vectors for which \nthere exists a family of complex curves $\gamma: \mathrm{B}_\epsilon\times \mathrm{B}_\epsilon \longrightarrow M$ such that \n\bgd\n\gamma (0,0) =q, \qquad \n \frac{\partial \gamma(x,y)}{\partial x}\bigg|_{(0,0)}= v, \qquad \n\frac{\partial \gamma(x,y)}{\partial y}\bigg|_{(0,0)} = w\n\edd\nthen \n\begin{align}\n\label{verder}\n\nabla^{i+j} s|_q \n(\underbrace{v,\cdots v}_{\textnormal{$i$ times}}, \underbrace{w,\cdots w}_{\textnormal{$j$ times}}) & := \n(\pi_{2} \circ h)\mid^{-1}_{V_{q}} \circ\left[\n\frac{ \partial^{i+j} \hat{s}(\gamma(x,y))}{\partial x^i \,\partial y^j}\right]\bigg|_{(0,0)}.\n\end{align}\n\end{defn}\n\begin{rem}\nIn general, the quantity in \eqref{verder} is not well defined, i.e., it depends on the \ntrivialization and the curve $\gamma$. In \cite{BM_Detail} we explain on what subspace \nthis quantity is well defined. 
\n\\end{rem}\n\\begin{rem}\n\\label{transverse_local}\nThe section $ s: M \\longrightarrow V $ is transverse to the zero set if and only if the induced map\n\\begin{align} \n\\label{section_local_coordinate_calculus}\n\\hatii{s} & := \\hat{s} \\circ \\varphi_{\\mathcal{U}}^{-1} : \\mathbb{C}^m \\longrightarrow \\mathbb{C}^k \n\\end{align}\nis transverse to the zero set in the usual calculus sense, where $\\varphi_{\\mathcal{U}}: \\mathcal{U} \\longrightarrow \\mathbb{C}^m $ is a coordinate chart and $\\hat{s}$ is as defined in \\eqref{section_local_coordinate}.\n\\end{rem}\n\\hf\\hf Let $f:\\mathbb{P}^2 \\longrightarrow \\gamma_{_{\\mathbb{P}^2}}^{*d}$ be a section and $\\tilde{p} \\in \\mathbb{P}^2$. \nWe can think of $p$ as a non-zero vector in $\\gamma_{_{\\mathbb{P}^2}}$ and $p^{\\otimes d}$ a non-zero vector \nin $\\gamma_{_{\\mathbb{P}^2}}^{\\otimes d}$ $\\footnote{Remember that $p$ is an element of $\\mathbb{C}^3-0$ while $\\tilde{p}$ is the corresponding equivalence class in $\\mathbb{P}^2$.}$. \nThe quantity $\\nabla f|_{\\tilde{p}}$ acts on a vector in $\\gamma_{_{\\mathbb{P}^2}}^{d}|_{\\tilde{p}}$ and produces an element of $T^*_{\\tilde{p}}\\mathbb{P}^2$ . Let us denote this quantity as $\\nabla f|_p$, i.e., \n\\begin{align}\n\\nabla f|_p &:= \\{\\nabla f|_{\\tilde{p}}\\}(p^{\\otimes d}) \\in T^*_{\\tilde{p}}\\mathbb{P}^2. \n\\end{align}\nNotice that $\\nabla f|_{\\tilde{p}}$ is an element of the fibre of $T^*\\mathbb{P}^{2} \\otimes \\gamma_{_{\\mathbb{P}^2}}^{*d}$ at $\\tilde{p}$ while $\\nabla f|_{p}$ is an element of $T^*_{\\tilde{p}}\\mathbb{P}^{2}$. \\\\\n\\hf\\hf Now observe that $\\pi^{*} T\\mathbb{P}^2 \\cong \\tilde{\\gamma} \\oplus \\pi^*T\\mathbb{P}^2\/\\tilde{\\gamma} \\longrightarrow \\mathbb{P} T\\mathbb{P}^2$, where \n$\\pi: \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{P}^2~$ is the projection map. Let us denote a vector in $\\tilde{\\gamma}$ by $v$ and a vector \nin $\\pi^*T \\mathbb{P}^2\/\\tilde{\\gamma}$ by $\\tilde{w}$. 
\nGiven $\\tilde{f} \\in \\mathcal{D}$ and $\\tilde{p} \\in \\mathbb{P}^2$, let \n\\begin{align}\n\\label{abbreviation}\nf_{ij} & := \\nabla^{i+j} f|_p \n(\\underbrace{v,\\cdots v}_{\\textnormal{$i$ times}}, \\underbrace{w,\\cdots w}_{\\textnormal{$j$ times}}).\n\\end{align}\nNote that $f_{ij}$ is a \\textit{number}. \nIn general $f_{ij}$ is not well defined; it depends on the trivialization and the curve. \nMoreover it is also not well defined on the quotient space. \nSince our sections are not defined on the whole space, \nwe will use the notation $ s:M \\dashrightarrow V$ to indicate that $s$ is defined only on \na subspace of $M$. \\\\\n\\hf \\hf With this terminology, we now explicitly define the sections that we will \nencounter.\n\\begin{eqnarray*}\n\\psi_{\\mathcal{A}_0}: \\mathcal{D} \\times \\mathbb{P}^2 \\longrightarrow \\mathcal{L}_{\\mathcal{A}_0}, & & \\{\\psi_{\\mathcal{A}_0}(\\tilde{f}, \\tilde{p})\\} (f \\otimes p^{\\otimes d}):= f(p) \\\\\n\\psi_{\\mathcal{A}_1}: \\mathcal{D} \\times \\mathbb{P}^2 \\dashrightarrow \\mathcal{V}_{\\mathcal{A}_1}, & & \\{\\psi_{\\mathcal{A}_1}(\\tilde{f}, \\tilde{p})\\}(f \\otimes p^{\\otimes d}):= \\nabla f|_p \\\\\n\\psi_{\\mathcal{D}_4}: \\mathcal{D} \\times \\mathbb{P}^2 \\dashrightarrow \\mathcal{V}_{\\mathcal{D}_4}, & & \\{\\psi_{\\mathcal{D}_4}(\\tilde{f}, \\tilde{p})\\}(f\\otimes p^{\\otimes d}):= \\nabla^2 f|_p \\\\ \n\\psi_{\\mathcal{X}_8}: \\mathcal{D} \\times \\mathbb{P}^2 \\dashrightarrow \\mathcal{V}_{\\mathcal{X}_8}, & & \\{\\psi_{\\mathcal{X}_8}(\\tilde{f}, \\tilde{p})\\}(f \\otimes p^{\\otimes d}):= \\nabla^3 f|_p \\\\ \n\\psi_{\\mathcal{A}_2} : \\mathcal{D} \\times \\mathbb{P}^2 \\dashrightarrow \\mathcal{L}_{\\mathcal{A}_2}, & & \\{\\psi_{\\mathcal{A}_2}(\\tilde{f}, \\tilde{p})\\}(f \\otimes p^{\\otimes d}):= \\textnormal{det}\\, \\nabla^2 f|_p \\\\\n\\Psi_{\\hat{\\A}_0}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\hat{\\A}_0}, & & \\Psi_{\\hat{\\A}_0}(\\tilde{f}, 
l_{\\p}):= \\psi_{\\mathcal{A}_0}(\\tilde{f},\\tilde{p}) \\\\\n\\Psi_{\\hat{\\A}_1}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\mathbb{V}_{\\hat{\\A}_1}, & & \\Psi_{\\hat{\\A}_1}(\\tilde{f},l_{\\p}):= \\psi_{\\mathcal{A}_1}(\\tilde{f}, \\tilde{p}) \\\\\n\\Psi_{\\hat{\\D}_4}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\mathbb{V}_{\\hat{\\D}_4}, & & \\Psi_{\\hat{\\D}_4}(\\tilde{f},l_{\\p}):= \\psi_{\\mathcal{D}_4}(\\tilde{f}, \\tilde{p}) \\\\\n\\Psi_{ \\hat{\\XC}_8}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\mathbb{V}_{ \\hat{\\XC}_8}, & & \\Psi_{ \\hat{\\XC}_8}(\\tilde{f},l_{\\p}):= \\psi_{\\mathcal{X}_8}(\\tilde{f}, \\tilde{p}).\n\\end{eqnarray*}\nWe also have\n\\begin{eqnarray*}\n\\Psi_{\\mathcal{P} \\mathcal{A}_2}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{A}_2}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v):= \\nabla^2 f|_p (v,\\cdot) \\\\ \n\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\mathbb{V}}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\mathbb{V}_{\\mathcal{P} \\mathcal{D}_5}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\mathbb{V}}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v^{\\otimes 2}):= \\nabla^3 f|_p (v,v,\\cdot) \\\\\n\\Psi_{\\mathcal{P} \\mathcal{D}_4}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_4}, & & \\{\\Psi_{\\PP D_4}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes w^{\\otimes 2}):= f_{02} \\\\\n\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\UL}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_5}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\UL}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v^{\\otimes 2}\\otimes w):= f_{21} \\\\\n\\Psi_{\\mathcal{P} \\mathcal{E}_6}: \\mathcal{D} \\times \\mathbb{P} 
T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{E}_6}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v\\otimes w^{\\otimes 2}):= f_{12} \\\\ \n\\Psi_{\\mathcal{P} \\mathcal{E}_7}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{E}_7}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{E}_7}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v^{\\otimes 4}):= f_{40} \\\\\n\\Psi_{\\mathcal{P} \\mathcal{E}_8}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{E}_8}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{E}_8}(\\tilde{f},l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes v^{\\otimes 3} \\otimes w ):= f_{31} \\\\\n\\Psi_{\\mathcal{P} \\mathcal{X}_8}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{X}_8}, & & \\{\\Psi_{\\mathcal{P} \\mathcal{X}_8}(f,l_{\\p})\\}(f \\otimes p^{\\otimes d} \\otimes w^{\\otimes 3}):= f_{03}.\n\\end{eqnarray*}\nWe also have sections of the following bundles: $\\Psi_{\\mathcal{P} \\mathcal{D}_5^{\\vee}}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_5^{\\vee}}$ given by\n\\bge\n\\label{psi_d5_dual}\n\\{\\Psi_{\\mathcal{P} \\mathcal{D}_5^{\\vee}}(\\tilde{f},l_{\\p})\\}( f^{\\otimes 2} \\otimes p^{\\otimes 2 d} \n\\otimes v^{\\otimes 2} \\otimes w^{\\otimes 4}) := 3 f_{12}^2 - 4 f_{21} f_{03}, \n\\ede\nand $\\Psi_{\\mathcal{P} \\mathcal{D}_6^{\\vee}}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_6^{\\vee}}$ at $(\\tilde{f},l_{\\p})$ is given by\n\\bge\n\\label{psi_d6_dual}\n(f^{\\otimes 5} \\otimes p^{\\otimes 5 d} \n\\otimes v^{\\otimes 8} \\otimes w^{\\otimes 4})\\mapsto \\big(f_{12}^4 f_{40} - 8f_{12}^3 f_{21} f_{31} + 24 f_{12}^2 f_{21}^2 f_{22}-32 f_{12} f_{21}^3 f_{13} + 16 f_{21}^4 f_{04} \\big),\n\\ede\nand $\\Psi_{\\mathcal{J}}: \\mathcal{D} 
\\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{J}}$ given by\n\\bge\n\\label{J_psi}\n\\{\\Psi_{\\mathcal{J}}(\\tilde{f},l_{\\p})\\}( f^{\\otimes 3} \\otimes p^{\\otimes d} \\otimes v^{\\otimes 9} \\otimes w^{\\otimes 3}) := \\Big(- \\frac{f_{31}^3}{8 } + \\frac{3 f_{22} f_{31} f_{40}}{16 } - \\frac{f_{13} f_{40}^2}{16} \\Big).\n\\ede\nWhen $k\\geq 3$ we have $\\Psi_{\\mathcal{P} \\mathcal{A}_k}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{A}_k}$ given by\n\\bgd\n\\{\\Psi_{\\mathcal{P} \\mathcal{A}_k}(\\tilde{f},l_{\\p}) \\}\\big(f^{\\otimes (k-2)} \\otimes p^{\\otimes d} \\otimes v^{\\otimes k} \\otimes w^{\\otimes (2k-6)}\\big):= f_{02}^{k-3} \\mathcal{A}^f_k.\n\\edd\nSimilarly, when $k \\geq 6$ we have $\\Psi_{\\mathcal{P} \\mathcal{D}_k}: \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 \\dashrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_k}$ given by\n\\bgd\n\\{\\Psi_{\\mathcal{P} \\mathcal{D}_k}(\\tilde{f},l_p)\\}\\big(f^{\\otimes (1+ \\epsilon_k)} \\otimes p^{\\otimes d(1+\\epsilon_k)} \\otimes v^{\\otimes (k-2+\\epsilon_k)}\\otimes w^{\\otimes (2\\epsilon_k)}\\big):= f_{12}^{\\epsilon_k} \\mathcal{D}^f_k,\n\\edd\nwhere, \n$\\epsilon_6 = 0$, $\\epsilon_7 =1$ and \n$\\epsilon_8=3$. The expressions for $\\mathcal{A}^f_k$ (resp. $\\mathcal{D}^f_k$) are given below explicitly in \n\\eqref{Formula_Ak} (resp. \\eqref{Formula_Dk}), till $k=7$ (resp. till $k=8$). 
\\\\\n\\hf\\hf Here is an explicit formula for \n$\\mathcal{A}^f_k$ till $k=7$: \n\\begin{align}\n\\label{Formula_Ak}\n\\mathcal{A}^f_3&= f_{30},\\qquad\n\\mathcal{A}^f_4 = f_{40}-\\frac{3 f_{21}^2}{f_{02}}, \\qquad\n\\mathcal{A}^f_5= f_{50} -\\frac{10 f_{21} f_{31}}{f_{02}} + \n\\frac{15 f_{12} f_{21}^2}{f_{02}^2} \\nonumber \\\\\n\\mathcal{A}^f_6 &= f_{60}- \\frac{ 15 f_{21} f_{41}}{f_{02}}-\\frac{10 f_{31}^2}{f_{02}} + \\frac{60 f_{12} f_{21} f_{31}}{f_{02}^2}\n +\n \\frac{45 f_{21}^2 f_{22}}{f_{02}^2} - \\frac{15 f_{03} f_{21}^3}{f_{02}^3}\n -\\frac{90 f_{12}^2 f_{21}^2}{f_{02}^3} \\nonumber \\\\ \n\\mathcal{A}^f_7 &= f_{70} - \\frac{21 f_{21} f_{51}}{f_{02}} \n- \\frac{35 f_{31} f_{41}}{f_{02}} + \\frac{105 f_{12} f_{21} f_{41}}{f_{02}^2} + \\frac{105 f_{21}^2 f_{32}}{f_{02}^2} + \n\\frac{70 f_{12} f_{31}^2}{f_{02}^2}+ \\frac{210 f_{21}f_{22}f_{31}}{f_{02}^2} \\nonumber \\\\\n&\n-\\frac{105 f_{03} f_{21}^2 f_{31}}{ f_{02}^3}\n-\\frac{420 f_{12}^2 f_{21} f_{31}}{f_{02}^3}\n-\\frac{630 f_{12}f_{21}^2 f_{22}}{f_{02}^3}\n-\\frac{105 f_{13} f_{21}^3}{f_{02}^3}\n+ \\frac{315 f_{03} f_{12} f_{21}^3}{f_{02}^4}\n+ \\frac{630 f_{12}^3 f_{21}^2}{f_{02}^4}.\n\\end{align} \nHere is an explicit formula for \n$\\mathcal{D}^f_k$ till $k=8$:\n\\begin{align}\n\\label{Formula_Dk}\n\\mathcal{D}^f_6 &= f_{40},\\,\\,\n\\mathcal{D}^f_7 = f_{50} -\\frac{5 f_{31}^2}{3 f_{12}},\\,\\,\n\\mathcal{D}^f_8 = f_{60} + \n\\frac{5 f_{03} f_{31} f_{50}}{3 f_{12}^2} \n-\\frac{5 f_{31} f_{41}}{f_{12}} - \\frac{10 f_{03} f_{31}^3}{3 f_{12}^3} \n+ \\frac{5 f_{22} f_{31}^2}{f_{12}^2}.\n\\end{align} \n\n\\subsection{The spaces involved}\n\\hf\\hf We begin by explaining a terminology. 
If $l_{\\tilde{p}} \\in \\mathbb{P} T_{\\tilde{p}}\\mathbb{P}^2$, then we say that $v \\in l_{\\tilde{p}}$ if \n$v$ is a tangent vector in $T_{\\tilde{p}}\\mathbb{P}^2$ that lies on the line represented by $l_{\\tilde{p}}$.\nWe now define the spaces that we will encounter.\n\\label{definition_of_px}\n\\begin{align*}\n\\mathfrak{X}_k &:= \\{ ( \\tilde{f},\\tilde{p}) \\in \\mathcal{D} \\times \\mathbb{P}^2~~~~~: \\textnormal{$f$ has a singularity of type $\\mathfrak{X}_k$ at $\\tilde{p}$} \\} \\\\\n\\hat{\\mathfrak{X}}_k &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \\textnormal{$f$ has a singularity of type $\\mathfrak{X}_k$ at $\\tilde{p}$} \\} ~= \\pi^{-1}(\\mathfrak{X}_k) \\\\\n\\textnormal{if $~k>1$ } \\quad \n\\mathcal{P} \\mathcal{A}_k &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \n\\textnormal{$f$ has a singularity of type $\\mathcal{A}_k$ at $\\tilde{p}$},\\\\\n& \\qquad\\qquad\\qquad\\qquad \\qquad \\qquad \\nabla^2 f|_p(v, \\cdot) =0\\,\\,\\textup{if}\\,\\,v \\in l_{\\tilde{p}}\\}\\\\ \n\\mathcal{P} \\mathcal{D}_4 &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \n\\textnormal{$f$ has a singularity of \ntype $\\mathcal{D}_4$ at $\\tilde{p}$}, \\\\ \n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\nabla^3 f|_p(v,v,v) =0\\,\\,\\textup{if}\\,\\,v \\in l_{\\tilde{p}}\\} \\\\\n\\textnormal{if $~k>4$} \\qquad \n\\mathcal{P} \\mathcal{D}_k &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \\textnormal{$f$ has a singularity of \ntype $\\mathcal{D}_k$ at $\\tilde{p}$} \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad\\qquad \\nabla^3 f|_p(v, v, \\cdot) =0\\,\\,\\textup{if}\\,\\,v \\in l_{\\tilde{p}}\\} \\\\\n\\textnormal{if $k=6, 7$ or $8$}\\qquad \n\\mathcal{P} \\mathcal{E}_k &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \\textnormal{$f$ has a singularity of \ntype $\\mathcal{E}_k$ at 
$\\tilde{p}$}\\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad\\qquad \\nabla^3 f|_p(v, v, \\cdot) =0\\,\\,\\textup{if}\\,\\,v\\in l_{\\tilde{p}}\\} \\\\\n\\textnormal{if $~k>4$} \\qquad \\mathcal{P}\\mathcal{D}_k^{\\vee} &:= \\{ (\\tilde{f},l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2: \n\\textnormal{$f$ has a singularity of \ntype $\\mathcal{D}_k$ at $\\tilde{p}$}, \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\nabla^3 f|_p(v,v,v) =0, ~~\\nabla^3 f|_p(v,v,w) \\neq 0 \\\\ \n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\,\\,\\textup{if}\\,\\,v \\in l_{\\tilde{p}} ~~\\textup{and} ~~w \\in (T_{\\tilde{p}}\\mathbb{P}^2)\/l_{\\tilde{p}}\\} \n\\end{align*}\n\\noindent We also need the definitions for a few other spaces which will make our computations convenient. \n\\begin{align*}\n\\hat{\\mathcal{A}}_1^{\\#} := \\{ (\\tilde{f}, l_p) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 &: \n f(p) =0, \\nabla f|_p =0, \\nabla^2 f|_p(v, \\cdot) \\neq 0, \\forall ~v\\neq 0 \\in l_{\\tilde{p}} \\} \\\\\n\\hat{\\mathcal{D}}_4^{\\#} := \\{ (\\tilde{f}, l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 &: \nf(p) =0, \\nabla f|_p =0, \\nabla^2 f|_p \\equiv 0, \\nabla^3 f|_p (v,v,v) \\neq 0, \\forall ~v \\neq 0 \\in l_{\\tilde{p}} \\} \\\\\n\\hat{\\mathcal{D}}_k^{\\#\\flat} := \\{ (\\tilde{f}, l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 &: \\textnormal{$f$ has a $\\mathcal{D}_k$ singularity at $\\tilde{p}$}, ~~\\nabla^3 f|_p (v,v,v) \\neq 0, \\forall ~v \\neq 0 \\in l_{\\tilde{p}},\\,\\,k\\geq 4\\} \\\\ \n\\hat{\\mathcal{X}}_8^{\\#} := \\{ (\\tilde{f}, l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 &: \nf(p) =0, \\nabla f|_p =0, \\nabla^2 f|_p \\equiv 0, \\nabla^3 f|_p =0, \\\\ \n & \\qquad \\nabla^4 f|_p (v,v,v, v) \\neq 0 ~\\forall ~v \\neq 0 \\in l_{\\tilde{p}} \\} \\\\\n \\hat{\\XC}_{8}^{\\# \\flat} := \\{ (\\tilde{f}, l_{\\p}) \\in \\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2 &: (\\tilde{f}, 
l_{\\p}) \\in \\hat{\\mathcal{X}}_8^{\\#}, \\Psi_{\\mathcal{J}}(\\tilde{f}, l_{\\p})\\neq 0, \n\\textnormal{where $\\Psi_{\\mathcal{J}}$ is defined in \\eqref{J_psi}}\\}. \n\\end{align*}\n\n\n\n\n\n\\section{Transversality} \n\\label{bundle_sections}\n\n\\hf\\hf In this section we list down all the relevant bundle sections that are transverse to the zero set. We set up our notation first. Let us define the following projection maps: \n\\begin{align*}\n\\pi_{1}&:\\mathcal{D} \\times \\mathbb{P}^2_1 \\times \\mathbb{P}^2_2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2_1, \\\\\n\\pi_{2}&:\\mathcal{D} \\times \\mathbb{P}^2_1 \\times \\mathbb{P}^2_2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2_2, \\\\\n\\pi_{1}&:\\mathcal{D} \\times \\mathbb{P}^2_1 \\times \\mathbb{P} T \\mathbb{P}^2_2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2_1, \\\\\n\\pi_{2}&:\\mathcal{D} \\times \\mathbb{P}^2_1 \\times \\mathbb{P} T \\mathbb{P}^2_2 \\longrightarrow \\mathcal{D} \\times \\mathbb{P} T \\mathbb{P}^2_2. \n\\end{align*}\nHence, given a vector bundle over $\\mathcal{D} \\times \\mathbb{P}^2$ or $\\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2$ we obtain a bundle over \n$\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2$ and $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ respectively, via the pullback maps. \\\\\n\\hf\\hf A section of a bundle over $\\mathcal{D} \\times \\mathbb{P}^2$ or $\\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2$ induces a section over \nthe corresponding bundle over $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2$ and $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ respectively, via the pullback maps.\n\n\\begin{rem}\n To describe bundles over $\\mathcal{D} \\times \\mathbb{P}^2$ or $\\mathcal{D} \\times \\mathbb{P} T\\mathbb{P}^2$, we follow the abuse of notation \nof omitting pullback maps (as mentioned in Remark \\ref{an1_again}). 
\nHowever, to describe bundles over $\\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2$ or $\\mathcal{D} \\times\\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ we do write the pullback maps. \n\\end{rem}\n\\begin{lmm}\n\\label{tube_lemma}\nLet $\\pi:E \\longrightarrow M$ be a fibre bundle with compact fibres. Let $X\\subseteq E$ and $Y\\subseteq M$. Then \n\\begin{align}\n \\pi (\\overline{X}) & = \\overline{\\pi (X)} \\label{tube_lemma_X}\\\\\n\\pi^{-1} (\\overline{Y}) &= \\overline{\\pi^{-1}(Y)}. \\label{tube_lemma_Y}\n\\end{align}\n\\end{lmm}\n\\noindent \\textbf{Proof: } Since the fibres are compact, $\\pi$ is a closed map (Tube Lemma). Combined with the fact that $\\pi$ is continuous, \n\\eqref{tube_lemma_X} follows. Secondly, equation \\eqref{tube_lemma_Y} holds for the trivial bundle, hence it also holds for \nan arbitrary fibre bundle, since this is a local statement. \\qed \n\\begin{prp}\n\\label{ift_ml}\nThe sections of the vector bundles\n\\begin{align*}\n\\pi_2^*\\psi_{\\mathcal{A}_0}:\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 - \\Delta \\overline{\\mathcal{A}}_1 \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0}, \\qquad \\pi_2^*\\psi_{\\mathcal{A}_1}: \\pi_2^*\\psi_{\\mathcal{A}_0}^{-1}(0) \\longrightarrow \\pi_2^*\\mathcal{V}_{\\mathcal{A}_1} \n\\end{align*}\nare transverse to the zero set if $d \\geq 3$. \n\\end{prp}\n\n\\begin{prp}\n\\label{A2_Condition_prp}\nThe section of the vector bundle $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2}: \\overline{\\mathcal{A}}_1 \\circ \\overline{\\hat{\\mathcal{A}}}_1 \\longrightarrow \\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2}$ is transverse to the zero set, provided $d \\geq 4$. 
\n\\end{prp}\n\n\\begin{prp}\n\\label{D4_Condition_prp}\nThe sections of the vector bundles \n\\begin{align*}\n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3}: \\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{A}}_2 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3}, \\quad \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}: \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3}^{-1} (0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_4}, \n\\quad \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\UL}: \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}^{-1} (0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_5} \n\\end{align*} \nare transverse to the zero set provided $d \\geq 5$.\n\\end{prp}\n\n\\begin{prp}\n\\label{A3_Condition_prp}\nIf $i \\geq 4$, then the sections of the vector bundles \n\\begin{align*}\n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3}&:\\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{A}}_2 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3}, \n\\qquad \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_4}:\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{3}}^{-1} (0) - \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}^{-1}(0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_4}, \\ldots, \\\\ \n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{i}}&:\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{i-1}}^{-1} (0) - \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}^{-1}(0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_{i}} \n\\end{align*} \nare transverse to the zero set provided $d\\geq i+2$.\n\\end{prp}\n\n\\begin{prp}\n\\label{D6_Condition_prp}\nIf $i \\geq 6$, then the sections of the vector bundles \n\\begin{align*}\n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_6}&:\\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{D}}_5 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_6}, \n\\qquad \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_7}:\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_{6}}^{-1} (0) - \\pi_2^*\\Psi_{\\mathcal{P} 
\\mathcal{E}_6}^{-1}(0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_7}, \\ldots, \\\\ \n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_{i}}&:\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_{i-1}}^{-1} (0) - \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}^{-1}(0) \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_{i}} \n\\end{align*} \nare transverse to the zero set provided $d\\geq i+2$.\n\\end{prp}\n\n\\begin{prp}\n\\label{E6_Condition_prp}\nThe section of the vector bundle $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}:\\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{D}}_5 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{E}_6}$ is transverse to the zero set provided $d\\geq 5$.\n\\end{prp}\n\n\\begin{prp}\n\\label{PD4_Condition_prp}\nThe section of the vector bundle $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3}:\\overline{\\mathcal{A}}_1 \\circ \\overline{\\hat{\\mathcal{D}}}_4 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3}$ is transverse to the zero set provided $d\\geq 5$.\n\\end{prp}\n\n\\noindent \\textbf{Proof: }\nThe proofs are omitted here; they can be found in \\cite{BM_Detail}. \nThey are similar to the proofs of transversality of bundle sections in \\cite{BM13}.\n\\qed \n\n\\section{Closure and Euler class contribution} \n\\label{closure_of_spaces}\n\n\\hf\\hf In this section we compute the closure of a singularity with one $\\mathcal{A}_1$-node. Along the way \nwe also compute how much a certain section contributes to the Euler class of a bundle. But \nfirst, let us recapitulate what was proved in \\cite{BM13} about \nthe closure of one singular point.\n\\begin{lmm}\n\\label{cl}\nLet $\\mathfrak{X}_k$ be a singularity of type $\\mathcal{A}_k$, $\\mathcal{D}_k$, \n$\\mathcal{E}_k$ or $\\mathcal{X}_8$. 
Then the closures are given by :\n\\begin{enumerate} \n\\item \\label{A0cl} $\\overline{\\mathcal{A}}_0 = \\mathcal{A}_0 \\cup \\overline{\\mathcal{A}}_1$ \n\\qquad if $d \\geq 3$.\n\\item \\label{A1cl}\n$\\overline{\\hat{\\mathcal{A}}_1} = \\overline{\\hat{\\mathcal{A}}^{\\#}_1} = \\hat{\\mathcal{A}}_1^{\\#} \\cup \n\\overline{\\mathcal{P} \\mathcal{A}}_2$ \\qquad if $d \\geq 3$.\n\\item \\label{D4_cl_no_direction}$ \\overline{\\hat \\mathcal{D}^{\\#}_4} = \\hat \\mathcal{D}^{\\#}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_4 $ \\qquad if $d \\geq 3$.\n\\item \\label{D4cl}$ \\overline{\\mathcal{P} \\mathcal{D}}_4 = \\mathcal{P} \\mathcal{D}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}$ \n\\qquad if $d \\geq 4$. \n\\item \\label{E6cl}$\\overline{\\mathcal{P} \\mathcal{E}}_6 = \n\\mathcal{P} \\mathcal{E}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_7 \\cup \\overline{\\hat{\\mathcal{X}}^{\\#}_8}$ \n\\qquad if $d \\geq 4$. \n\\item \\label{D5cl}$\\overline{\\mathcal{P} \\mathcal{D}}_5 = \\mathcal{P} \\mathcal{D}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_6$ \\qquad if $d \\geq 4$.\n\\item \\label{D6cl}$\\overline{\\mathcal{P} \\mathcal{D}}_6 = \\mathcal{P} \\mathcal{D}_6 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_7 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_7$ \\qquad if $d \\geq 5$.\n\\item \\label{A2cl}$\\overline{\\mathcal{P} \\mathcal{A}}_2 = \\mathcal{P} \\mathcal{A}_2 \\cup \n\\overline{\\mathcal{P} \\mathcal{A}}_3 \\cup \\overline{\\hat{\\mathcal{D}}_4^{\\#}} $ \n\\qquad if $d \\geq 4$.\n\\item \\label{A3cl}$\\overline{\\mathcal{P} \\mathcal{A}}_3 = \\mathcal{P} \\mathcal{A}_3 \\cup \\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_4$ \\qquad if $d \\geq 5$.\n\\item \\label{A4cl}$\\overline{\\mathcal{P} \\mathcal{A}}_4 = \\mathcal{P} \\mathcal{A}_4 \\cup \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup 
\\overline{\\mathcal{P} \\mathcal{D}}_5$ \\qquad if $d \\geq 6$.\n\\item \\label{A5cl}$\\overline{\\mathcal{P} \\mathcal{A}}_5 = \\mathcal{P} \\mathcal{A}_5 \\cup \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_6 $ \\qquad if $d \\geq 7$.\n\\item \\label{A6cl}$\\overline{\\mathcal{P} \\mathcal{A}}_6 = \\mathcal{P} \\mathcal{A}_6 \\cup \\overline{\\mathcal{P} \\mathcal{A}}_7 \\cup \n\\overline{\\mathcal{P} \\mathcal{D}}_7 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_7 \\cup \\overline{\\hat{\\mathcal{X}}_8^{\\# \\flat}}$ \n \\qquad if $d \\geq 8$.\n\\end{enumerate}\n\\end{lmm}\n\n\\noindent Let us now state a few facts about the closure of one singular point that will be required in this paper. \nThese facts were not explicitly stated in \\cite{BM13} because it was not needed in that paper. \n\n\\begin{lmm}\n\\label{Dk_sharp_closure}\nWe have the following equality (or inclusion) of sets \n\\begin{enumerate}\n\\item \\label{a1_cl_new} $\\overline{\\mathcal{A}}_1 = \\mathcal{A}_1 \\cup \\overline{\\mathcal{A}}_2$ \\qquad if ~$d \\geq 2$. \\vspace*{-0.1cm} \n \\item \\label{d4_new} $\\overline{\\hat{\\mathcal{D}}^{\\#}_4} = \\overline{\\hat{\\mathcal{D}}}_4$ \\qquad if ~$d \\geq 3$. \\vspace*{-0.1cm}\n \\item \\label{dk_new} $\\overline{\\hat{\\mathcal{D}}^{\\#\\flat}_k} = \\overline{\\hat{\\mathcal{D}}}_k$ \\qquad if ~$k\\geq 4$ ~~ and ~~$d \\geq 3$. \n \\vspace*{-0.1cm}\n \\item \\label{d5_pa3_zero} $\\big\\{ (\\tilde{f}, l_{\\p}) \\in \\overline{\\hat{\\mathcal{D}}}_5: \\Psi_{\\mathcal{P} \\mathcal{A}_3}(\\tilde{f}, l_{\\p}) =0 \\big\\} \n = \\overline{\\mathcal{P} \\mathcal{D}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}$ \\qquad if ~$d \\geq 3$. 
\\vspace*{-0.1cm} \n \\item \\label{pd6_pd7_pd8_closure} \n $\\big\\{(\\tilde{f}, l_{\\p}) \\in \\overline{\\mathcal{P} \\mathcal{D}}_6: \\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}, l_{\\p}) \\neq 0 \\big\\} \n =\\mathcal{P} \\mathcal{D}_6 \\cup \\mathcal{P} \\mathcal{D}_7\\cup \\big\\{(\\tilde{f}, l_{\\p}) \\in \\overline{\\mathcal{P} \\mathcal{D}}_8: \\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}, l_{\\p}) \\neq 0 \\big\\}$.\n \\item \\label{pe6_subset_of_cl_pd5_dual} $\\overline{\\mathcal{P} \\mathcal{E}}_6 \\subset \\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}$ \\qquad if ~$d \\geq 3$. \n\\end{enumerate}\n\\end{lmm}\nThe proofs are straightforward; the details can be found in \\cite{BM_Detail}. \nWe have stated statements \\ref{d4_new} and \\ref{dk_new} of Lemma \\ref{Dk_sharp_closure} separately to avoid confusion.\n\n\\hf \\hf Before stating the main results of this section, \nlet us define three more spaces which will be required \nwhile formulating some of the Lemmas: \n\\begin{align}\n\\Delta \\PP \\D_7^{s} &:= \\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4: \n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}, \\tilde{p}, l_{\\p}) =0, \n~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}, \\tilde{p}, l_{\\p}) \\neq 0 \\}, \\nonumber \\\\\n\\Delta \\PP \\D_8^{s} &:= \\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5: \n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}, \\tilde{p}, l_{\\p}) =0, \\nonumber \n~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}, \\tilde{p}, l_{\\p}) \\neq 0 \\}, \\\\\n\\Delta \\PP \\D_6^{\\vee s} & := \\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4: \n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_5}(\\tilde{f}, \\tilde{p}, l_{\\p}) \\neq 0 \\}.\\label{new_defn_delta}\n\\end{align} \nWe are now ready to state 
the main Lemmas. \n\n\\begin{lmm}\n\\label{cl_two_pt}\nLet $\\mathfrak{X}_k$ be a singularity of type $\\mathcal{A}_k$, $\\mathcal{D}_k$ \nor $\\mathcal{E}_k$. Then their closures with one $\\mathcal{A}_1$-node are given by:\n\\begin{enumerate}\n\\item \\label{a1a_minus_1_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)} = \\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2) \\sqcup \\Delta \\overline{\\mathcal{A}}_1 $ \n\\qquad if $d \\geq 1$. \n\\item \\label{a1a1_up_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1} = \\overline{\\mathcal{A}}_1\\circ \\hat{\\mathcal{A}}^{\\#}_1 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\hat{\\mathcal{A}}_1^{\\#}}- \\hat{\\mathcal{A}}_1^{\\#} ) \\sqcup \\Delta \\overline{\\hat{\\mathcal{A}}}_3$ \n\\qquad if $d \\geq 3$. \n\\item \\label{a1_pa2_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_2- \\mathcal{P} \\mathcal{A}_2) \\sqcup \\Big(\\Delta \\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \n\\Delta \\overline{\\hat{\\mathcal{D}}^{\\#\\flat}_5} \\Big )$ \\qquad if $d \\geq 4$.\n\\item \\label{a1_pa3_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_3- \\mathcal{P} \\mathcal{A}_3) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5} \\Big)$ \\qquad if $d \\geq 5$. 
\n\\item \\label{a1_pa4_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_4- \\mathcal{P} \\mathcal{A}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \n\\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\Big) $ \\qquad if $d \\geq 6$. \n\\item \\label{a1_pa5_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_5- \\mathcal{P} \\mathcal{A}_5) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_7 \\cup \\Delta \\overline{\\PP \\D_8^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big)$ \\qquad if $d \\geq 7$. \n\\item \\label{a1_pd4_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_4- \\mathcal{P} \\mathcal{D}_4) \\sqcup \n\\Big( \\Delta \\overline{\\PP \\D_6^{\\vee s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_6 \\Big) $ \\qquad if $d \\geq 4$.\n\\item \\label{a1_d4_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4} = \\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\hat{\\mathcal{D}}}_4- \\hat{\\mathcal{D}}_4) \\sqcup \n\\Big( \\Delta \\overline{\\hat{\\mathcal{D}}}_6 \\Big) $ \\qquad if $d \\geq 4$.\n\\item \\label{a1_pd5_cl} $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_5- \\mathcal{P} \\mathcal{D}_5) 
\\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big)$ \\qquad if $d \\geq 5$. \n\\end{enumerate}\n\\end{lmm}\n\n\\begin{rem} Although the statement of \nLemma \\ref{cl_two_pt} (\\ref{a1a_minus_1_cl}) is trivial, we give a detailed proof for two reasons. \nFirstly, along the way, we prove a few other statements that will be required later. \nSecondly, as a corollary we also compute the contribution of certain sections to the Euler class of relevant bundles. \n\\end{rem}\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1a_minus_1_cl}):} It suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)}\\} &= \\Delta \\overline{\\mathcal{A}}_{1}. \\label{a1_a0_is_a1} \n\\end{align} \nClearly the lhs\\footnote{We shall use {\\it lhs} to denote {\\it left hand side} and {\\it rhs} to denote {\\it right hand side} of an equation.} of \\eqref{a1_a0_is_a1} is a subset of its rhs. To show the converse, we will prove the \nfollowing two claims simultaneously: \n\\begin{align}\n \\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)} & \\supset \\Delta (\\mathcal{A}_1 \\sqcup \\mathcal{A}_2), \\label{a1_du_a2_is_subset_of_a1_a0}\\\\\n \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{A}}_1 \\cap \\Delta (\\mathcal{A}_1 \\sqcup \\mathcal{A}_2) & = \\varnothing. 
\\label{a1_du_a2_intersect_a1_a1_is_empty} \n\\end{align}\nSince $\\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)}$ \nis a closed set, \\eqref{a1_du_a2_is_subset_of_a1_a0} \nimplies that the rhs of \\eqref{a1_a0_is_a1} is a subset of its lhs.\\footnote{In fact\nthe full strength of \\eqref{a1_du_a2_is_subset_of_a1_a0} is not really needed; \nwe simply need that $\\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)} \\supset \\Delta \\mathcal{A}_1$.}\nMoreover, \\eqref{a1_du_a2_intersect_a1_a1_is_empty} is not required at all \nfor the proof of Lemma \\ref{cl_two_pt} \\eqref{a1a_minus_1_cl}. \nHowever, these statements will be required in the proofs of \nLemma \\ref{cl_two_pt} (\\ref{a1a1_up_cl}).\nFurthermore, in the process of proving \\eqref{a1_du_a2_is_subset_of_a1_a0} and \\eqref{a1_du_a2_intersect_a1_a1_is_empty}, \nwe will also be computing the contribution of certain sections to the Euler class of relevant bundles as a corollary. \n\n\n\\begin{claim}\n\\label{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}\nLet $(\\tilde{f},\\tilde{p}, \\tilde{p}) \\in \\Delta (\\mathcal{A}_1 \\cup \\mathcal{A}_{2})$.\nThen there exist solutions \n$$(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1) ) \\in \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2 $$ \nsufficiently close to $(\\tilde{f},\\tilde{p}, \\tilde{p})$ to the set of equations\n\\begin{align}\n\\pi_1^*\\psi_{\\mathcal{A}_0}( \\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) & = 0, ~~\\pi_1^*\\psi_{\\mathcal{A}_1}( \\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1) ) = 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). 
\\label{a1_a1_sharp_closure_functional_eqn_a1_plus_a2} \n\\end{align}\nFurthermore, any such solution sufficiently close to $(\\tilde{f}, \\tilde{p}, \\tilde{p})$ satisfies \n\\begin{align}\n\\Big( \\pi_2^*\\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)), ~\\pi_2^*\\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\Big) \\neq (0,0). \\label{a1_a1_closure_intersect_a1_is_empty_equation}\n\\end{align} \nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{A}}_1$.\n\\end{claim}\nIt is easy to see that claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim} proves statements \\eqref{a1_du_a2_is_subset_of_a1_a0} and \n\\eqref{a1_du_a2_intersect_a1_a1_is_empty} \nsimultaneously. \\\\ \n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ \nso that $\\tilde{p} = [0:0:1]$ and let $\\mathcal{U}_{\\tilde{p}}$ be a sufficiently \nsmall neighbourhood of $\\tilde{p}$ inside $\\mathbb{P}^2$.\nDenote \n$\\pi_{x}, \\pi_y : \\mathcal{U}_{\\tilde{p}} \\longrightarrow \\mathbb{C} $ \n to be the projection maps given by \n$$\\pi_{x}([\\mathrm{X}: \\mathrm{Y}:\\mathrm{Z}]) := \\mathrm{X}\/\\mathrm{Z} \\qquad \\textnormal{and} \\qquad \\pi_{y}([\\mathrm{X}:\\mathrm{Y}:\\mathrm{Z}]) := \\mathrm{Y}\/\\mathrm{Z},$$ \nand $v, w: \\mathcal{U}_{\\tilde{p}} \\longrightarrow T \\mathbb{P}^2$ the vector fields dual to the one forms $d\\pi_x$ and $d\\pi_y $ respectively. Let $ (\\tilde{f} (t_1, t_2), \\tilde{p}(t_1)) \\in \\mathcal{D} \\times \\mathbb{P}^2 $ be an arbitrary point that is close to \n$( \\tilde{f}, \\tilde{p})$ and let $\\tilde{p}(t_1, t_2)$ be a point in $\\mathbb{P}^2$ that is close to $\\tilde{p}(t_1)$. 
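\\hf\\hf Before writing down the expansions, here is what the objects defined just below amount to in coordinates (an editorial sketch; the notation $\\mathrm{F}^{\\textnormal{aff}}$ is ours, not used elsewhere, and we suppress the trivialization coming from evaluating on $p(t_1)^{\\otimes d}$):

```latex
% Editorial sketch: with x = pi_x, y = pi_y as above and
% F^{aff}(x,y) := f(t_1,t_2)(x,y,1) the affine representative,
% the vector fields v, w act as d/dx, d/dy, so the numbers
% f_{ij}(t_1,t_2) defined below are ordinary partial derivatives:
\\begin{align*}
f_{ij}(t_1, t_2) = \\frac{\\partial^{\\,i+j} \\mathrm{F}^{\\textnormal{aff}}}{\\partial x^{i}\\, \\partial y^{j}} \\bigg|_{(x_{t_1},\\, y_{t_1})}.
\\end{align*}
% Consequently F, F_{x_{t_2}} and F_{y_{t_2}} below are the Taylor
% expansions of F^{aff} and its two first-order partials at p(t_1),
% evaluated at the nearby point p(t_1, t_2).
```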
Let \n\\begin{align*}\n\\tilde{p}(t_1) := [x_{t_1}:y_{t_1}:1] \\in \\mathbb{P}^2, \\qquad p(t_1) &:= (x_{t_1},y_{t_1},1) \\in \\mathbb{C}^3,\\\\ \n\\tilde{p}(t_1, t_2) := [x_{t_1} + x_{t_2}: ~y_{t_1}+y_{t_2}: 1] \\in \\mathbb{P}^2, \\qquad p(t_1, t_2) &:= (x_{t_1} + x_{t_2}, ~y_{t_1}+y_{t_2}, 1) \\in \\mathbb{C}^3, \\\\\n\\tilde{f}(t_1, t_2) \\in \\mathbb{P}^{\\delta_d}, \\qquad f(t_1, t_2) & \\in \\mathbb{C}^{\\delta_d +1}. \n\\end{align*}\nDefine the following numbers: \n\\begin{align*}\nf_{ij}(t_1, t_2) & := \\{ \\nabla^{i+j} f(t_1, t_2)|_{\\tilde{p}(t_1)} \n(\\underbrace{v,\\cdots v}_{\\textnormal{$i$ times}}, \\underbrace{w,\\cdots w}_{\\textnormal{$j$ times}}) \\}(p(t_1)^{\\otimes d}),\\\\\n\\mathrm{F} &:= f_{00}(t_1, t_2) + f_{10}(t_1, t_2) \\xt + f_{01}(t_1, t_2) y_{t_2} + \\sum_{i+j=2}\\big({\\textstyle \\frac{f_{ij}(t_1, t_2)}{i ! j !}} x_{t_2}^i y_{t_2}^j\\big) + \\ldots, \\\\ \n\\mathrm{F}_{x_{t_2}} &:= f_{10}(t_1, t_2) + f_{20}(t_1, t_2) \\xt + f_{11}(t_1, t_2) y_{t_2} + \\ldots, \\\\\n\\mathrm{F}_{y_{t_2}} &:= f_{01}(t_1, t_2) + f_{11}(t_1, t_2) x_{t_2} + f_{02}(t_1, t_2) y_{t_2} + \\ldots. 
\n\\end{align*}\nIt is easy to see that \n\\begin{align*}\n\\{ \\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\} (f(t_1, t_2) \\otimes p(t_1, t_2)^{\\otimes d} ) & = \\mathrm{F}, \\\\\n\\{ \\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\} (f(t_1, t_2) \\otimes p(t_1, t_2)^{\\otimes d} \\otimes v) & = \\mathrm{F}_{x_{t_2}}, \\\\ \n\\{ \\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\} (f(t_1, t_2) \\otimes p(t_1, t_2)^{\\otimes d} \\otimes w) &= \\mathrm{F}_{y_{t_2}}.\n\\end{align*}\nWe now observe that \n\\eqref{a1_a1_sharp_closure_functional_eqn_a1_plus_a2} \nhas a solution if and only if \nthe following set of equations has a solution \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1} \n\\end{align} \nNote that in equation \\eqref{a1_a1_sharp_closure_functional_eqn_a1_plus_a2} the equality holds as \n\\textit{functionals}, while in equation \\eqref{eval_f1}, the equality holds as \\textit{numbers}. \n\\begin{comment}\nIt will be useful to define the following number:\n\\begin{align*}\n\\mathrm{G} &:= \\xt \\mathrm{F}_{x_{t_2}} + y_{t_2} \\mathrm{F}_{y_{t_2}} - \\mathrm{F} \\\\ \n & = \\frac{f_{20}(t_1, t_2)}{2} x_{t_2}^2 + f_{11}(t_1, t_2) x_{t_2} y_{t_2} + \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\ldots.\n\\end{align*}\nHence, \\eqref{eval_f1} has a solution if and only if \n\\begin{align}\n\\mathrm{G} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)} \\label{eval_g} \n\\end{align} \nhas a solution. It easy to see \\eqref{eval_g} always has solutions. \nTo see why, let $\\mathrm{L}$ be a solution of \n\\begin{align}\nf_{20} + 2 f_{11}\\mathrm{L} + f_{02}\\mathrm{L}^2 & =0. 
\\label{equation_for_L}\n\\end{align}\nThen \n\\begin{align} \n\\xt & = u, \\qquad y_{t_2} = \\mathrm{L} u, \\qquad \\textnormal{$u$ small but non zero}, \\nonumber \\\\\nf_{20}(t_1, t_2) &= -2 f_{11}(t_1, t_2) \\mathrm{L} - f_{02}(t_1, t_2) \\mathrm{L}^2 + \\mathrm{O}(u) + \\ldots \\nonumber \\\\\nf_{10}(t_1, t_2) &= -f_{20}(t_1, t_2) \\xt -f_{11}(t_1, t_2) y_{t_2} + \\ldots, \\nonumber \\\\\nf_{01}(t_1, t_2) & = - f_{11}(t_1, t_2) x_{t_2} - f_{02}(t_1, t_2) y_{t_2} + \\ldots \\label{a1_neighbourhood_inside_a1_a0}\n\\end{align}\nis a solution to \\eqref{eval_g}.\\footnote{If \\eqref{equation_for_L} has no solution then solve the equation \n$f_{02} + 2 f_{11}\\mathrm{L} + f_{20}\\mathrm{L}^2 =0$ and interchange \n$\\xt$ and $y_{t_2}$ in \\eqref{a1_neighbourhood_inside_a1_a0}.} \nHence \\eqref{eval_g} always has solutions.\\footnote{However, we are not saying that \nthe solutions constructed in \\eqref{a1_neighbourhood_inside_a1_a0} are the only solutions.}\\\\\n\\end{comment} \nWe will now show that \n\\eqref{eval_f1} (and hence \\eqref{a1_a1_sharp_closure_functional_eqn_a1_plus_a2}) \nhas solutions whenever $(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta (\\mathcal{A}_1 \\cup \\mathcal{A}_2)$. \nFurthermore, for all those solutions, \n\\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds. \\\\\n\\hf \\hf First let us assume \n$(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_1$. \nIt is obvious that solutions to \\eqref{eval_f1} exist; we can solve for $f_{10}(t_1, t_2)$ and $f_{01}(t_1, t_2)$ \nusing $\\mathrm{F}_{x_{t_2}} =0$ and $\\mathrm{F}_{y_{t_2}} =0$, \nand then solve for \n$f_{00}(t_1, t_2)$ using $\\mathrm{F} =0$. \nTo show that \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds, it suffices to show that \nif $(\\xt, y_{t_2})$ is small but non zero, then \n\\begin{align}\n\\big(f_{00}(t_1, t_2), ~f_{10}(t_1, t_2), ~f_{01}(t_1, t_2) \\big) &\\neq (0,0,0). 
\\label{a1_a1_closure_intersect_a1_is_empty_equation_numbers}\n\\end{align}\nObserve that \n\\eqref{eval_f1} implies that\n\\begin{equation}\n\\label{f_ij_matrix_equation}\n\\left( \\begin{array}{c} f_{10}(t_1, t_2) \\\\ f_{01}(t_1, t_2) \\end{array} \\right) = \n-\\begin{pmatrix} f_{20}(t_1,t_2) & f_{11}(t_1, t_2) \\\\ f_{11}(t_1, t_2) & f_{02}(t_1, t_2) \n\\end{pmatrix} \\left(\\begin{array}{c} x_{t_2} \\\\ y_{t_2} \\end{array} \\right)+ \\left (\\begin{array}{c} \\mathrm{E}_1(x_{t_2}, y_{t_2}) \\\\ \\mathrm{E}_2(x_{t_2}, y_{t_2}) \n\\end{array} \\right) \n\\end{equation}\nwhere $\\mathrm{E}_i(x_{t_2}, y_{t_2})$ are second order in $(x_{t_2}, y_{t_2})$. \nSince $(\\tilde{f},\\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_1$, the matrix\n\\begin{equation*}\n\\mathrm{M} := \\left( \\begin{array}{cc}\nf_{20}(t_1,t_2) & f_{11}(t_1, t_2) \\\\\nf_{11}(t_1, t_2) & f_{02}(t_1, t_2) \\end{array} \\right)\n\\end{equation*}\nis invertible if $\\tilde{f}(t_1, t_2)$ is sufficiently close to $\\tilde{f}$. \nEquation \\eqref{f_ij_matrix_equation} now implies \nthat if $(\\xt, y_{t_2})$ is small but non zero, then \n$f_{10}(t_1, t_2)$ and $f_{01}(t_1, t_2)$ cannot both be zero. \nHence \\eqref{a1_a1_closure_intersect_a1_is_empty_equation_numbers} holds, and hence \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds. \\\\\n\\begin{comment}\n\n***************\n\n\n \n\n\nWe claim in fact that $(f_{10}(t_1, t_2), f_{01}(t_1, t_2)) \\neq (0,0)$. \n\nLet us define \n\\begin{align}\n \\mathrm{G} &:= \\xt \\mathrm{F}_{x_{t_2}} + y_{t_2} \\mathrm{F}_{y_{t_2}} - \\mathrm{F} \\nonumber \\\\ \n & = \\frac{f_{20}(t_1, t_2)}{2} x_{t_2}^2 + f_{11}(t_1, t_2) x_{t_2} y_{t_2} + \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\ldots.\\nonumber \n\\end{align}\nNotice that the quadratic term of $\\mathrm{G}$ is same as the quadratic term of $\\mathrm{F}$; moreover it has no first order terms \n(and also no zeroth order term since $f_{00}(t_1, t_2)$ is zero). 
\nWe observe that \n\\eqref{eval_f1} has a solution if and only if \n\\begin{align}\n\\mathrm{G} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)} \\label{eval_f1_g} \n\\end{align} \nhas a solution. Since $(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_1$ one of the possibility holds; at least one of $f_{20}(t_1, t_2)$ or $f_{02}(t_1, t_2)$ is non zero, \nor $f_{11}(t_1, t_2)$ is non zero. Let us assume $f_{02}(t_1, t_2) \\neq 0$. \nWe claim that there exists a unique holomorphic function $\\mathrm{B}(x_{t_2})$, \nvanishing at the origin such that \nafter we make a change of coordinates $y_{t_2} = \\hat{y}_{t_2} + \\mathrm{B}(x_{t_2})$, \nthe function $\\mathrm{G}$ is given by \n\\bgd\n\\mathrm{G} = \\hat{\\Z}_0(x_{t_2}) + \\hat{\\Z}_2(x_{t_2}) \\hat{y}_{t_2}^2 + \\hat{\\Z}_3(x_{t_2}) \\hat{y}_{t_2}^3 + \\ldots \n\\edd\nfor some $\\hat \\Z_k(x_{t_2})$ (i.e., $\\hat{\\Z}_1(x_{t_2}) \\equiv 0$). \nTo see why, we note that this is possible if $\\mathrm{B}(x_{t_2})$ satisfies the identity\n\\begin{align}\n\\Z_1(x_{t_2}) + 2 \\Z_2(x_{t_2}) \\mathrm{B} + 3 \\Z_3(x_{t_2}) \\mathrm{B}^2 + \\ldots \\equiv 0. \n\\end{align}\nSince ~$\\Z_2(0) \\neq 0$, $\\mathrm{B}(x_{t_2})$ exists by the Implicit Function Theorem.\nHence, we get that \n\\begin{align}\n\\mathrm{G} & = \\frac{f_{02}(t_1, t_2)}{2} \\hat{\\hat{y}}_{t_2}^2 + \\frac{1}{2}\\Big( f_{20}(t_1, t_2) - \\frac{f_{11}(t_1, t_2)^2}{f_{02}(t_1, t_2)} \\Big) \\hat{x}_{t_2}^2, \\label{G_modified}\\\\\n\\textnormal{where} ~~~\\hat{\\hat{y}}_{t_2} &:= \\sqrt{\\frac{\\hat{2 \\Z}_2(x_{t_2}) + 2 \\hat{\\Z}_3(x_{t_2}) \\hat{y}_{t_2} + \\ldots}{f_{02}(t_1, t_2)}}, ~~~\\hat{x}_{t_2} := \n\\sqrt{\\frac{2 f_{02}(t_1, t_2)\\hat{\\Z}_0(x_{t_2})}{f_{20}(t_1, t_2)f_{02}(t_1, t_2) - f_{11}(t_1, t_2)^2 }}. 
\\nonumber \n\\end{align}\nHence,\\eqref{G_modified} implies that solutions to \\eqref{eval_f1_g} are given by \n\\begin{align}\n\\hat{x}_{t_2} &= u, ~~ \\hat{\\hat{y}}_{t_2} = \\alpha u, ~~f_{10}(t_1, t_2) = \\mathrm{O}(u), ~~f_{01}(t_1, t_2) = f_{02}(t_1, t_2) \\alpha u + \\mathrm{O}(u^2) \\nonumber \\\\\n\\textnormal{where} ~~\\alpha &:= \\frac{\\sqrt{f_{20}(t_1, t_2)f_{02}(t_1, t_2) - f_{11}(t_1, t_2)^2 }}{f_{02}(t_1, t_2)} ~~\\textnormal{a branch of the square root.} \\label{eval_f1_g_soln}\n\\end{align}\nSince $f_{02}(t_1, t_2) \\alpha \\neq 0$, \\eqref{eval_f1_g_soln} implies that \nif $u$ is small but non zero, then \n$f_{01}(t_1, t_2) \\neq 0$. \nIn other words $(f_{10}, f_{01}) \\neq (0,0) $;\nhence \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds. We can use a similar argument if \n$f_{02}(t_1, t_2) = f_{20}(t_1, t_2) =0$, but $f_{11}(t_1, t_2) \\neq 0$. \n\n\n\n\n\nTherefore, we can compute $\\mathrm{B}(x_{t_2})$ \nas a power series using \n\n\nObserve that \nthe second and third equation of\n\\eqref{eval_f1} \n(namely $\\mathrm{F}_{x_{t_2}} =0$ and $\\mathrm{F}_{y_{t_2}} =0$ ) can be written as \n\\begin{equation}\n\\label{f_ij_matrix_equation}\n\\left( \\begin{array}{c} f_{10}(t_1, t_2) \\\\ f_{01}(t_1, t_2) \\end{array} \\right) = \n-\\begin{pmatrix} f_{20}(t_1,t_2) & f_{11}(t_1, t_2) \\\\ f_{11}(t_1, t_2) & f_{02}(t_1, t_2) \n\\end{pmatrix} \\left(\\begin{array}{c} x_{t_2} \\\\ y_{t_2} \\end{array} \\right)+ \\left (\\begin{array}{c} \\mathrm{E}_1(x_{t_2}, y_{t_2}) \\\\ \\mathrm{E}_2(x_{t_2}, y_{t_2}) \n\\end{array} \\right) \n\\end{equation}\nwhere $\\mathrm{E}_i(x_{t_2}, y_{t_2})$ are second order in $(x_{t_2}, y_{t_2})$. 
\nSince $(\\tilde{f},\\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_1$, the matrix\n\\begin{equation}\\label{matrix_M}\n\\mathrm{M} := \\left( \\begin{array}{cc}\nf_{20}(t_1,t_2) & f_{11}(t_1, t_2) \\\\\nf_{11}(t_1, t_2) & f_{02}(t_1, t_2) \\end{array} \\right)\n\\end{equation}\nis invertible if $(t_1, t_2)$ is sufficiently small.\nHence, by the Implicit Function Theorem, solution to \n\\eqref{f_ij_matrix_equation} exists; \nwe can solve for $(x_{t_2}, y_{t_2})$ \nin terms of $f_{ij}(t_1, t_2)$. \nFinally, since $x_{t_2}$ and $y_{t_2}$ \nare not both equal to zero, we can use the first equation of \n\\eqref{eval_f1} (namely $\\mathrm{F} =0$) \nto solve for either $f_{10}(t_1, t_2)$ or \n$f_{01}(t_1, t_2)$. Hence solutions to \n\\eqref{eval_f1} (and hence to \\eqref{a1_a1_sharp_closure_functional_eqn_a1_plus_a2}) \nexist. \\\\\n\\hf \\hf Next we will show that \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds. \nObserve that by \\eqref{f_ij_matrix_equation} \nif $(x_{t_2}, y_{t_2}) \\neq (0, 0)$, but sufficiently small, \nthen $(f_{10}(t_1, t_2), f_{01}(t_1, t_2) ) \\neq (0,0)$. This is \nbecause $\\mathrm{M}$ is injective and $\\mathrm{E}_i(x_{t_2}, y_{t_2})$ are second order in $(x_{t_2}, y_{t_2})$. \nThat proves that \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds (as functionals). \\\\ \n\\end{comment}\n\\hf \\hf Next, let $(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_2$. \nSince $\\Delta \\mathcal{A}_2 \\subset \\Delta \\overline{\\mathcal{A}}_1$, we conclude that solutions to \n\\eqref{a1_a1_sharp_closure_functional_eqn_a1_plus_a2} exist; we only need to show that \n\\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds. \nObserve that \n$f_{20}$ and $f_{02}$ can not both be zero; assume $f_{02} \\neq 0$ \n(and hence $f_{02}(t_1, t_2) \\neq 0$). 
\nWrite $\\mathrm{F}$ as \n\\bgd\n\\mathrm{F} = f_{00}(t_1, t_2) +f_{10}(t_1, t_2) \\xt + f_{01}(t_1, t_2) y_{t_2} + \\Z_0(x_{t_2}) + \\Z_1(x_{t_2})y_{t_2} + \\Z_2(x_{t_2}) y_{t_2}^2 + \\ldots\n\\edd\nwhere $\\Z_2(0) \\neq 0.$ We claim that there exists a unique holomorphic function $\\mathrm{B}(x_{t_2})$, \nvanishing at the origin, such that \nafter we make a change of coordinates $y_{t_2} = \\hat{y}_{t_2} + \\mathrm{B}(x_{t_2})$, \nthe function $\\mathrm{F}$ is given by \n\\bgd\n\\mathrm{F} = f_{00}(t_1, t_2) +f_{10}(t_1, t_2) \\xt + f_{01}(t_1, t_2) \\hat{y}_{t_2} + f_{01}(t_1, t_2)\\mathrm{B}(x_{t_2}) + \n \\hat{\\Z}_0(x_{t_2}) + \\hat{\\Z}_2(x_{t_2}) \\hat{y}_{t_2}^2 + \\hat{\\Z}_3(x_{t_2}) \\hat{y}_{t_2}^3 + \\ldots \n\\edd\nfor some $\\hat \\Z_k(x_{t_2})$ (i.e., $\\hat{\\Z}_1(x_{t_2}) \\equiv 0$). \nThis is possible if $\\mathrm{B}(x_{t_2})$ satisfies the identity\n\\begin{align}\n\\Z_1(x_{t_2}) + 2 \\Z_2(x_{t_2}) \\mathrm{B}(\\xt) + 3 \\Z_3(x_{t_2}) \\mathrm{B}(\\xt)^2 + \\ldots \\equiv 0. \\label{psconvgg2}\n\\end{align}\nSince $\\Z_2(0) \\neq 0$, $\\mathrm{B}(x_{t_2})$ exists by the Implicit Function Theorem and \nwe can compute $\\mathrm{B}(x_{t_2})$ explicitly \nas a power series using \\eqref{psconvgg2} and then\ncompute $\\hat \\Z_{0}(x_{t_2})$. 
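\nAs a sanity check, this power-series computation of $\\mathrm{B}(x_{t_2})$ and of the leading coefficients of $\\hat \\Z_{0}(x_{t_2})$ can be carried out symbolically. The sketch below is a verification under simplifying assumptions (generic symbols \\texttt{f20}, \\texttt{f11}, etc.\\ for the coefficients $f_{ij}(t_1, t_2)$, series truncated at cubic order, and the terms $f_{00}$, $f_{10}$, $f_{01}$, which do not affect $\\hat \\Z_0$, set to zero); it is not part of the proof.\n\n```python\nimport sympy as sp\n\n# Verification sketch (assumptions: generic symbolic coefficients f_ij,\n# series truncated at cubic order, f00 = f10 = f01 = 0 since those terms\n# do not enter the computation of the yhat-free part Z0hat).\nx, yh, b1, b2 = sp.symbols('x yhat b1 b2')\nf20, f11, f02, f30, f21, f12, f03 = sp.symbols('f20 f11 f02 f30 f21 f12 f03')\n\n# substitute y = yhat + B(x), with the ansatz B(x) = b1*x + b2*x**2 + ...\ny = yh + b1*x + b2*x**2\nF = sp.expand(sp.Rational(1, 2)*f20*x**2 + f11*x*y + sp.Rational(1, 2)*f02*y**2\n              + sp.Rational(1, 6)*(f30*x**3 + 3*f21*x**2*y\n                                   + 3*f12*x*y**2 + f03*y**3))\n\n# choose b1, b2 so that the yhat-linear part vanishes through order x^2\nlin = F.coeff(yh, 1)\nB = sp.solve([lin.coeff(x, 1), lin.coeff(x, 2)], [b1, b2], dict=True)[0]\n\n# Z0hat(x) is the yhat-free part; read off the quadratic and cubic coefficients\nZ0 = F.coeff(yh, 0).subs(B)\nB2 = sp.simplify(2*Z0.coeff(x, 2))\nB3 = sp.simplify(6*Z0.coeff(x, 3))\nassert sp.simplify(B2 - (f20 - f11**2/f02)) == 0\nassert sp.simplify(B3 - (f30 - 3*f11*f21/f02\n                         + 3*f11**2*f12/f02**2 - f11**3*f03/f02**3)) == 0\n```\n\nBoth assertions pass, recovering the closed-form expressions for $\\mathcal{B}^{f(t_1, t_2)}_2$ and $\\mathcal{B}^{f(t_1, t_2)}_3$.\n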
Hence, \n\\begin{align*} \n\\mathrm{F} & = f_{00}(t_1, t_2) +f_{10}(t_1, t_2) x_{t_2} + f_{01}(t_1, t_2) \\hat{y}_{t_2} + \\varphi(x_{t_2}, \\hat{y}_{t_2}) \\hat{y}_{t_2}^2 \n+f_{01}(t_1, t_2)\\mathrm{B}(x_{t_2}) \\\\ \n & + \\underbrace{{\\textstyle \\frac{\\mathcal{B}^{f(t_1, t_2)}_2}{2!}x_{t_2}^2 + \\frac{\\mathcal{B}^{f(t_1, t_2)}_3}{3!} x_{t_2}^3 + \\mathrm{O}(x_{t_2}^4)}}_{\\hat \\Z_{0}(x_{t_2})},\n \\end{align*}\n where $\\varphi(0,0) \\neq 0$ and\n\\begin{align*}\n \\mathcal{B}^{f(t_1, t_2)}_2 & := f_{20}(t_1, t_2) - \\frac{f_{11}(t_1, t_2)^2}{f_{02}(t_1, t_2)}, \n\\qquad \\textup{and}\\\\\n\\mathcal{B}^{f(t_1, t_2)}_{3} & :=f_{30}(t_1, t_2)-\\frac{3 f_{11}(t_1, t_2) f_{21}(t_1, t_2)}\n{f_{02}(t_1, t_2)} + \\frac{3 f_{11}(t_1, t_2)^2 f_{12}(t_1, t_2)}{f_{02}(t_1, t_2)^2}- \\frac{f_{11}(t_1, t_2)^3 \nf_{03}(t_1, t_2)}{f_{02}(t_1, t_2)^3}\\neq 0. \n\\end{align*}\nThe last inequality holds because $(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_2$.\nIn these new coordinates $\\hat{y}_{t_2}$ \nand $x_{t_2}$, \nequation \\eqref{eval_f1} is equivalent to \n\\begin{align}\n\\mathrm{F} = f_{00}(t_1, t_2) + f_{10}(t_1, t_2) x_{t_2} + f_{01}(t_1, t_2) \\hat{y}_{t_2} + \\varphi(x_{t_2}, \\hat{y}_{t_2}) \\hat{y}_{t_2}^2 + f_{01}(t_1, t_2)\\mathrm{B}(x_{t_2}) + & \\nonumber \\\\\n +\\frac{\\mathcal{B}^{f(t_1, t_2)}_2}{2!}x_{t_2}^2 + \\frac{\\mathcal{B}^{f(t_1, t_2)}_3}{3!} x_{t_2}^3 + \\mathrm{O}(x_{t_2}^4) & = 0, \\label{closure_a1_a1_is_not_a2_F} \\\\\nf_{10}(t_1, t_2)+ f_{01} (t_1, t_2) \\mathrm{B}^{\\prime}(x_{t_2})+ \\varphi_{x_{t_2}}(x_{t_2}, \\hat{y}_{t_2}) \\hat{y}_{t_2}^2 + \\mathcal{B}^{f(t_1, t_2)}_2 x_{t_2} + \\frac{\\mathcal{B}^{f(t_1, t_2)}_3}{2!} x_{t_2}^2 + \n\\mathrm{O}(x_{t_2}^3) & = 0, \n\\label{closure_a1_a1_is_not_a2_Fx}\\\\\nf_{01}(t_1, t_2)+ 2 \\hat{y}_{t_2} \\varphi(x_{t_2}, \\hat{y}_{t_2}) + \\hat{y}_{t_2}^2 \\varphi_{\\hat{y}_{t_2}}(x_{t_2}, \\hat{y}_{t_2}) & =0, \\nonumber \\\\ \n(\\xt, \\hat{y}_{t_2}) \\neq (0,0) \\qquad 
\\textnormal{but small}.& \\label{closure_a1_a1_is_not_a2_Fy}\n\\end{align}\nLet us clarify a point of confusion: we are claiming that \\eqref{eval_f1} has a solution if and only if the equation \n\\begin{align}\n\\mathrm{F}& = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{\\hat{y}_{t_2}} = 0, \\qquad (x_{t_2}, \\hat{y}_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)} \\label{eval_f1_hat} \n\\end{align}\nhas a solution. We are \\textit{not} claiming that the partial derivatives in the old \ncoordinates are individually equal to the partial derivatives in the new coordinates. \nSince $\\varphi(0,0) \\neq 0$ we can use \\eqref{closure_a1_a1_is_not_a2_Fy} to solve for $\\hat{y}_{t_2}$ in terms of $x_{t_2}$ and \n$f_{01}(t_1, t_2)$ to get \n\\begin{align}\n \\hat{y}_{t_2} & = f_{01} (t_1, t_2) \\mathrm{E}(x_{t_2}, f_{01}(t_1, t_2)), \\label{closure_a1_a1_is_not_a2_F_hat_y}\n\\end{align}\nwhere $\\mathrm{E}(x_{t_2}, f_{01}(t_1, t_2))$ is a holomorphic \nfunction of $(x_{t_2}, f_{01}(t_1, t_2))$.\nUsing \\eqref{closure_a1_a1_is_not_a2_F_hat_y}, \\eqref{closure_a1_a1_is_not_a2_Fx} and \\eqref{closure_a1_a1_is_not_a2_F} \nwe get (by eliminating $\\mathcal{B}^{f(t_1, t_2)}_2$ and $\\hat{y}_{t_2}$) \n\\begin{align}\n\\mathrm{F} = -\\frac{\\mathcal{B}^{f(t_1, t_2)}_3}{12} x_{t_2}^3 + \\mathrm{O}(x_{t_2}^4) + f_{00}(t_1, t_2)+f_{10}(t_1, t_2) \\mathrm{E}_{1} (x_{t_2}, f_{10}(t_1, t_2), f_{01}(t_1, t_2)) & \\nonumber \\\\ \n+f_{01}(t_1, t_2) \\mathrm{E}_{2} (x_{t_2}, f_{10}(t_1, t_2), f_{01}(t_1, t_2)) & = 0 \\label{closure_a1_a1_is_not_a2_F_eliminated}\n\\end{align}\nwhere $\\mathrm{E}_i (x_{t_2}, f_{10}(t_1, t_2), f_{01}(t_1, t_2))$ is a holomorphic \nfunction of $(x_{t_2}, f_{10}(t_1, t_2), f_{01}(t_1, t_2))$. \nSince solutions to equation \\eqref{eval_f1}\nsatisfy \\eqref{closure_a1_a1_is_not_a2_F_eliminated}, we conclude \nthat $f_{00}(t_1, t_2)$,\n$f_{10}(t_1, t_2)$ and $f_{01}(t_1, t_2)$ can not all be zero.
\nIf they were all zero \nthen $\\mathrm{F}$ could not be zero for small but non zero $x_{t_2}$ (by \\eqref{closure_a1_a1_is_not_a2_F_eliminated}), \nsince \n$\\mathcal{B}^{f(t_1, t_2)}_3 \\neq 0$ (this is where we are using $(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_2$). \nThis implies \\eqref{a1_a1_closure_intersect_a1_is_empty_equation} holds \nas functionals. This completes the proof of Lemma \\ref{cl_two_pt} \\eqref{a1a_minus_1_cl}. \\qed \\\\\n\n\n\\hf \\hf Before proceeding to the proof of Lemma \\ref{cl_two_pt} (\\ref{a1a1_up_cl}), \nlet us prove a corollary which will be needed in the proof of equation \\eqref{algoa1a1} in section \\ref{Euler_class_computation}. The proof of this corollary follows from the setup of the preceding proof, hence we prove it here.\n\\begin{cor}\n\\label{a1_section_contrib_from_a1_and_a2}\nLet $\\mathcal{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2$ \nbe a vector bundle \nsuch that the rank of $\\mathcal{W}$ is the \nsame as the dimension of $\\Delta \\mathcal{A}_2$\\footnote{This dimension is also one less than the dimension of $ \\Delta \\mathcal{A}_{1}$, equal to $\\delta_d-2$.} \nand $\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2 \\longrightarrow \\mathcal{W}$ \na generic smooth section.
\nThen the contribution of the section \n\\[ \\pi_2^* \\psi_{\\mathcal{A}_{0}} \\oplus \\pi_2^* \\psi_{\\mathcal{A}_{1}} \\oplus \\mathcal{Q}: \\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\pi_2^* \\mathcal{V}_{\\mathcal{A}_1} \\oplus \\mathcal{W} \\]\nto the Euler class from the points of $\\Delta \\mathcal{A}_{1}$ is given by \n\\begin{align}\n\\mathcal{C}_{\\Delta \\mathcal{A}_1} (\\pi_2^* \\psi_{\\mathcal{A}_{0}} \\oplus \\pi_2^* \\psi_{\\mathcal{A}_{1}} \\oplus \\mathcal{Q}) = \\big\\langle e( \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\mathcal{W}), ~[\\Delta \\overline{\\mathcal{A}}_1] \\big\\rangle. \\label{contrib_from_delta_a1_psi_a1}\n\\end{align}\nSecondly, \nif $(\\tilde{f},\\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_{2} \\cap \\mathcal{Q}^{-1}(0)$, \nthen this section vanishes on $(\\tilde{f}, \\tilde{p}, \\tilde{p})$ with a multiplicity of $3$. \n\\end{cor}\n\n\\begin{rem}\nWhen we use the phrase ``number of zeros'' (resp. ``number of solutions'') our intended meaning is the number of zeros counted with a sign (resp.
the number of solutions counted with a sign).\n\\end{rem}\n\n\\textbf{Proof of Corollary \\ref{a1_section_contrib_from_a1_and_a2}:} The contribution of $\\pi_2^* \\psi_{\\mathcal{A}_{0}}\\oplus \\pi_2^* \\psi_{\\mathcal{A}_{1}} \\oplus \\mathcal{Q}$ \nto the Euler class \nfrom the points of $\\Delta\\mathcal{A}_1$\nis\nthe number of solutions of \n\\begin{align}\n\\pi_2^*\\psi_{\\mathcal{A}_0} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) &= \\nu_0(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)), \\nonumber \\\\ \n\\pi_2^*\\psi_{\\mathcal{A}_1} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) &= \\nu_1(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)), \\nonumber \\\\ \n\\mathcal{Q}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) &= 0, ~~(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\in \\mathcal{U}_{\\mathcal{K}} \\subset \\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2, \\label{psi_A1_contribution_from_A1}\n\\end{align}\nwhere $\\mathcal{K}$ is a sufficiently \nlarge compact subset of $\\Delta \\mathcal{A}_1$, $\\mathcal{U}_{\\mathcal{K}}$ is a sufficiently small neighborhood of $\\mathcal{K}$ inside $\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2$, and $\\nu_0$ and $\\nu_1$ are generic smooth perturbations. We do not need to perturb $\\mathcal{Q}$ since it is already generic. We will convert the functional equation \\eqref{psi_A1_contribution_from_A1} into an equation that involves equality of numbers. \nLet $\\theta_1, \\theta_2, \\ldots, \\theta_{\\delta_d-2}$ form a basis for $\\mathcal{W}^*$ at the \npoint $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$.
\nDefine \n\\begin{align}\n\\xi_{0} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) &:= \\{ \\nu_0(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\}(f(t_1, t_2) \\otimes p(t_1)^{\\otimes d}) \\in \\mathbb{C}, \\nonumber \\\\\n\\xi_{1x} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) &:= \\{ \\nu_1(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\}(f(t_1, t_2) \\otimes p(t_1)^{\\otimes d} \\otimes v) \\in \\mathbb{C},\\nonumber \\\\\n\\xi_{1y} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) &:= \\{ \\nu_1(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\}(f(t_1, t_2) \\otimes p(t_1)^{\\otimes d} \\otimes w)\\in \\mathbb{C}, \\nonumber \\\\\n\\mathcal{R}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) &:= \\{ \\mathcal{Q}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\}(\\theta_1 \\oplus \\theta_2 \\oplus \\ldots \\oplus \\theta_{\\delta_d-2}) \\in \\mathbb{C}^{\\delta_d-2}. \\label{number_defn}\n\\end{align}\nConsider now the following set of equations \n\\begin{align}\n\\xi_0 (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) + \\xt \\xi_{1x} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) & \\nonumber \\\\ \n \\qquad \\qquad + y_{t_2} \\xi_{1y} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) & \\nonumber \\\\ \n+ \\frac{f_{20}(t_1, t_2)}{2} x_{t_2}^2 + f_{11}(t_1, t_2) x_{t_2} y_{t_2} + \n\\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\ldots &=0 \\label{eval_perturbed_number_again} \\\\ \n\\mathcal{R}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) &=0 \\label{relation_number_generic} \\\\ \n\\xi_{1x} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) + f_{20}(t_1, t_2) \\xt + f_{11}(t_1, t_2) y_{t_2} + \\ldots &=0 \\label{fx_perturbed_number} \\\\ \n\\xi_{1y} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2})) + f_{11}(t_1, t_2) x_{t_2} + f_{02}(t_1, t_2) y_{t_2} + \\ldots &= 0 \\label{fy_perturbed_number} \\\\ 
\n\\textnormal{$(\\xt, y_{t_2})=$ small}, \\qquad \\big|f_{20}(t_1, t_2) f_{02}(t_1, t_2) - f_{11}(t_1, t_2)^2\\big| & > \\mathrm{C} \\label{psi_A1_contribution_from_A1_numbers}\n\\end{align}\nwhere $\\mathrm{C}$ is a small positive constant. Observe that the number of solutions \nof \\eqref{psi_A1_contribution_from_A1} is the same as the number of solutions of \n\\eqref{eval_perturbed_number_again}-\\eqref{psi_A1_contribution_from_A1_numbers}. \nLet $\\mathrm{N}$ be the number of solutions $(\\tilde{f}, \\tilde{p})$ of \n\\begin{align}\n\\xi_{0}(\\tilde{f}, \\tilde{p}, 0, 0) =0, ~~\\mathcal{R}(\\tilde{f},\\tilde{p}, 0,0) =0, ~~f_{00}=0, ~~ (f_{10}, ~f_{01}) = (0,0). \\label{eval_perturbed_number} \n\\end{align}\nObserve that this number is the same as\nthe number of solutions of \n\\begin{align}\n\\nu_0(\\tilde{f}, \\tilde{p}, \\tilde{p}) =0, ~~\\mathcal{Q}(\\tilde{f}, \\tilde{p}, \\tilde{p}) =0, ~~(\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\Delta \\overline{\\mathcal{A}}_1. \\label{psi_a0_nu_0_Q_c0}\n\\end{align}\nHence \n\\bge\n\\mathrm{N} = \\big\\langle e( \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\mathcal{W}), ~[\\Delta \\overline{\\mathcal{A}}_1] \\big\\rangle. \\label{N_equal_Euler}\n\\ede\nSince $\\mathcal{Q}$ is generic, all solutions of \\eqref{psi_a0_nu_0_Q_c0} \n(and hence \\eqref{eval_perturbed_number})\nbelong to $\\Delta \\mathcal{A}_1$, i.e., \n\\begin{align}\nf_{20} f_{02} - f_{11}^2 \\neq 0 \\implies \\big|f_{20} f_{02} - f_{11}^2\\big| > \\mathrm{C}. \\label{ift_hypothesis_det_hess}\n\\end{align}\nLet $(\\tilde{f}, \\tilde{p})$ be a solution of \\eqref{eval_perturbed_number}.
\nSince the sections \n\\[ (\\pi_2^*\\psi_{\\mathcal{A}_0}+\\nu_0) \\oplus \\mathcal{Q}: \\overline{\\mathcal{A}}_0\\times\\{\\tilde{p}\\} \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\mathcal{W}, \\qquad (\\pi_2^*\\psi_{\\mathcal{A}_0} +\\nu_0) \\oplus \\mathcal{Q}: \\overline{\\mathcal{A}}_0 \\times \\mathbb{P}^2 \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\mathcal{W} \\]\nare transverse to the zero set (for any $\\tilde{p} \\in \\mathbb{P}^2$)\nwe conclude that given an $(\\xt, y_{t_2})$ sufficiently small, there exists a unique \n$(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1)) \\in \\overline{\\mathcal{A}}_0$\nclose to $(\\tilde{f}, \\tilde{p})$, such that $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1), (\\xt, y_{t_2}))$ \nsolves \\eqref{eval_perturbed_number_again} and \\eqref{relation_number_generic}. \nPlugging this value into \\eqref{fx_perturbed_number} and \\eqref{fy_perturbed_number} and \nusing \\eqref{ift_hypothesis_det_hess}, we conclude \nthere is a unique $(\\xt, y_{t_2})$ that solves \\eqref{fx_perturbed_number} and \\eqref{fy_perturbed_number} (provided the norms of $\\xi_{1x}$ and $\\xi_{1y}$ are sufficiently small). \nHence, there is a one to one correspondence between the solutions of \\eqref{eval_perturbed_number} and the solutions of \n\\eqref{eval_perturbed_number_again}-\\eqref{psi_A1_contribution_from_A1_numbers}. Equation \\eqref{N_equal_Euler} now proves \n\\eqref{contrib_from_delta_a1_psi_a1}. \\\\ \n\\hf \\hf Next, suppose $(\\tilde{f},\\tilde{p}, \\tilde{p}) \\in \\Delta \\mathcal{A}_{2} \\cap \\mathcal{Q}^{-1}(0)$.
\nSince $f_{20}$ and $f_{02}$ are not both zero, let us assume $f_{02} \\neq 0$.\nThe contribution of the section \n\\[ \\pi_2^* \\psi_{\\mathcal{A}_{0}} \\oplus \\pi_2^* \\psi_{\\mathcal{A}_{1}} \\oplus \\mathcal{Q}: \\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0} \\oplus \\pi_2^* \\mathcal{V}_{\\mathcal{A}_1} \\oplus \\mathcal{W} \\]\nto the Euler class is the number of solutions of \n\\begin{align}\n\\pi_2^*\\psi_{\\mathcal{A}_0} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))&= \\nu_0, ~~\\pi_2^*\\psi_{\\mathcal{A}_1} (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))= \\nu_1 \\nonumber \\\\ \n\\mathcal{Q}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) &= 0, \\qquad (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\in \\mathcal{U}_{(\\tilde{f}, \\tilde{p},\\tilde{p})} \\subset \\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 \\label{contrib_psi_a0_plus_a1_from_cusps}\n\\end{align}\nwhere $\\mathcal{U}_{(\\tilde{f}, \\tilde{p},\\tilde{p})}$ is a sufficiently small neighbourhood of $(\\tilde{f}, \\tilde{p}, \\tilde{p})$ inside \n$\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2$ and $\\nu_0$ and $\\nu_1$\nare generic smooth perturbations. Let $\\xi_0$, $\\xi_{1x}$, $\\xi_{1y}$ and $\\mathcal{R}$ be defined \nas in \\eqref{number_defn}. \nThe number of solutions of \\eqref{contrib_psi_a0_plus_a1_from_cusps} \n(which is a functional equation), is equal to the number of \nsolutions of \\eqref{eval_perturbed_number_again}-\\eqref{fy_perturbed_number} and \n\\begin{align}\n\\textnormal{$(\\xt, y_{t_2})=$ small}, \\qquad f_{02}(t_1, t_2)\\mathcal{B}^{f(t_1, t_2)}_2 := f_{02}(t_1, t_2)f_{20}(t_1, t_2) - f_{11}(t_1, t_2)^2 = \\textnormal{small}.
\\label{psi_A1_contribution_from_A2_numbers}\n\\end{align}\nSince the section\n\\[ \\psi_{\\mathcal{A}_2} \\oplus \\mathcal{Q} : \\Delta \\overline{\\mathcal{A}}_1 \\longrightarrow \\mathcal{L}_{\\mathcal{A}_2} \\oplus \\mathcal{W} \\]\nis transverse to the zero set, we conclude that \nthere is a unique $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1)) \\in \\Delta \\overline{\\mathcal{A}}_1 \\cap \\mathcal{Q}^{-1}(0)$ \nclose to $(\\tilde{f}, \\tilde{p})$ with a specified value of $\\mathcal{B}^{f(t_1, t_2)}_2$.\\footnote{This is true provided $\\mathcal{B}^{f(t_1, t_2)}_2$ is sufficiently small.} \nIn other words, we can express all the \n$f_{ij}(t_1, t_2)$ in terms of $\\mathcal{B}^{f(t_1, t_2)}_2$. \nPlug this expression for $f_{ij}(t_1, t_2)$\ninto \n\\eqref{eval_perturbed_number_again}, \\eqref{fx_perturbed_number} and \\eqref{fy_perturbed_number}. \nSince $~(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$ \nis close to $\\Delta \\mathcal{A}_2$, \nwe conclude that after a change of coordinates, the set of equations \n\\eqref{eval_perturbed_number_again}, \\eqref{fx_perturbed_number} and \\eqref{fy_perturbed_number}\nis equivalent to \n\\eqref{closure_a1_a1_is_not_a2_F}, \n\\eqref{closure_a1_a1_is_not_a2_Fx} and \\eqref{closure_a1_a1_is_not_a2_Fy}, \nwith \n\\begin{align}\nf_{00}(t_1, t_2) & = \\xi_{0}, \\qquad f_{10}(t_1, t_2) = \\xi_{1x}, \\qquad f_{01}(t_1, t_2) = \\xi_{1y}. \\label{psi_a1_contribution_from_a2}\n\\end{align}\nHence, \\eqref{closure_a1_a1_is_not_a2_F_eliminated} holds, which combined with \\eqref{psi_a1_contribution_from_a2} \nimplies that the equation is $3$ to one in $x_{t_2}$. \nGiven $x_{t_2}$, we can solve for $\\hat{y}_{t_2}$ uniquely using \\eqref{closure_a1_a1_is_not_a2_F_hat_y}. \nAnd given $x_{t_2}$ and $\\hat{y}_{t_2}$, we can uniquely solve for $\\mathcal{B}^{f(t_1, t_2)}_2$ \nusing \\eqref{closure_a1_a1_is_not_a2_Fx}, provided $\\xi_{1x}$ and $\\xi_{1y}$ are sufficiently small. \nHence, the total multiplicity is $3$.
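\nThe count of $3$ can also be seen in a simplified symbolic model of the elimination (a sketch under assumed simplifications: constant $\\varphi$, $\\mathrm{B}(x_{t_2}) \\equiv 0$, series truncated at cubic order; the symbols \\texttt{a0}, \\texttt{a1}, \\texttt{b1} play the roles of $\\xi_0$, $\\xi_{1x}$, $\\xi_{1y}$). Eliminating $\\hat{y}_{t_2}$ via $\\mathrm{F}_{\\hat{y}_{t_2}} = 0$ and the $\\mathcal{B}^{f(t_1, t_2)}_2$-term via the combination $\\mathrm{F} - \\tfrac{x_{t_2}}{2}\\mathrm{F}_{x_{t_2}}$ leaves a cubic in $x_{t_2}$ with leading coefficient $-\\mathcal{B}^{f(t_1, t_2)}_3/12$, in agreement with \\eqref{closure_a1_a1_is_not_a2_F_eliminated}.\n\n```python\nimport sympy as sp\n\n# Toy model (assumed simplifications: constant phi, B(x) == 0, series\n# truncated at x^3); a0, a1, b1 stand in for xi_0, xi_1x, xi_1y.\nx, yh = sp.symbols('x yhat')\na0, a1, b1, B2, B3, phi = sp.symbols('a0 a1 b1 B2 B3 phi')\n\nF = a0 + a1*x + b1*yh + phi*yh**2 + B2/2*x**2 + B3/6*x**3\n\n# F_yhat = 0 determines yhat (the analogue of eliminating yhat)\nyh_sol = sp.solve(sp.diff(F, yh), yh)[0]\n\n# the combination F - (x/2)*F_x removes the B2-term\nG = sp.expand(F - x/2*sp.diff(F, x)).subs(yh, yh_sol)\n\nassert sp.expand(G.coeff(x, 2)) == 0            # B2 eliminated\nassert sp.simplify(G.coeff(x, 3) + B3/12) == 0  # leading cubic term is -B3/12\n```\n\nThe residual equation in $x_{t_2}$ is thus a cubic with non-vanishing leading term whenever $\\mathcal{B}^{f(t_1, t_2)}_3 \\neq 0$, which is the source of the multiplicity $3$.\n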
\\qed \\\\\n\n\n\\begin{comment}\nThe rest of the $f_{ij}$ can be expressed in terms of $x_{t_2}$, $\\hat{y}_{t_2}$ and $\\mathcal{B}^{f(t_1, t_2)}_2$. \nTo see why this is so, first observe that \n$\\mathcal{Q}$ is a generic section; hence the section \n$$ \\pi_2^*\\psi_{\\mathcal{A}_2} \\oplus \\mathcal{Q}: \\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{A}}_1 \\longrightarrow \\pi_2^*\\mathcal{L}_{\\mathcal{A}_2} \\oplus \\mathcal{W} $$\nis transverse to the zero set. \nBy the Implicit Function Theorem, if $(\\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$ is \nclose to $(\\tilde{p}, \\tilde{p})$ and \n$\\pi_2^*\\psi_{\\mathcal{A}_2}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$\nis close to zero, \nthen there exists a unique $\\tilde{f}(t_1, t_2)$ close to \n$\\tilde{f}$ that solves \n$$ \\mathcal{Q}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) = 0, \\qquad (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) \\in \\overline{\\mathcal{A}}_1\\circ \\overline{\\mathcal{A}}_1.$$ \nIn other words, we can express $\\tilde{f}(t_1, t_2)$ as a function of \n$\\tilde{p}(t_1, t_2)$ , $\\tilde{p}(t_1)$ and $\\pi_2^*\\psi_{\\mathcal{A}_2}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$. \nHence $f_{ij}$ can be expressed in terms of $x_{t_2}$, $\\hat{y}_{t_2}$ and $\\mathcal{B}^{f(t_1, t_2)}_2$. \nHence the set of equations \\eqref{psi_a1_contribution_from_a2} is $3$ to one. \\qed \n\\end{comment} \n\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1a1_up_cl}):} It suffices to show that \n\\bge\n\\Big\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1} \\Big\\} = \\Delta \\overline{\\hat{\\mathcal{A}}}_{3}. \\label{a1_hat_a1_sharp_is_a3_sharp} \n\\ede\nBy using Lemma \\ref{tube_lemma} with Lemma \\ref{Dk_sharp_closure} \\eqref{a1_cl_new} we get $\\overline{\\hat{\\mathcal{A}}}_1=\\hat{\\mathcal{A}}_1\\cup\\overline{\\hat{\\mathcal{A}}}_2$. 
\nNow we use Lemma \\ref{tube_lemma} again with Lemma \\ref{cl} \\eqref{A2cl} to get $\\overline{\\hat{\\mathcal{A}}}_2=\\hat{\\mathcal{A}}_2\\sqcup \\overline{\\hat{\\mathcal{A}}}_3 \\cup \\overline{\\hat{\\mathcal{D}}}_4$. \nBy Lemma \\ref{cl} \\eqref{A3cl} and Lemma \\ref{tube_lemma}, we conclude that $\\overline{\\hat{\\mathcal{D}}}_4 $ is a subset of $\\overline{\\hat{\\mathcal{A}}}_3$. \nHence $\\overline{\\hat{\\mathcal{A}}}_2=\\hat{\\mathcal{A}}_2\\sqcup \\overline{\\hat{\\mathcal{A}}}_3$. \nThis implies that\n\\bgd\n\\Delta \\overline{\\hat{\\mathcal{A}}}_1=\\Delta \\hat{\\mathcal{A}}_1\\sqcup \\Delta\\hat{\\mathcal{A}}_2\\sqcup \\Delta\\overline{\\hat{\\mathcal{A}}}_3.\n\\edd\nFirst we observe that the lhs of \\eqref{a1_hat_a1_sharp_is_a3_sharp} \nis a subset of its rhs. To see this, observe that by \\eqref{a1_du_a2_intersect_a1_a1_is_empty}\n\\begin{align*}\n\\{ (\\tilde{f}, \\tilde{p}, \\tilde{p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{A}}_1} \\} \\cap \\Delta (\\mathcal{A}_1 \\cup \\mathcal{A}_{2}) & = \\varnothing \\\\\n\\implies \\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\overline{\\hat{\\mathcal{A}}}_1} \\} \\cap \\Delta (\\hat{\\mathcal{A}}_1 \\cup \\hat{\\mathcal{A}}_{2}) & = \\varnothing \\\\\n\\implies \\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1} \\} \\cap \\Delta (\\hat{\\mathcal{A}}_1 \\cup \\hat{\\mathcal{A}}_{2}) &= \\varnothing \\qquad \\textnormal{since} ~\\overline{\\hat{\\mathcal{A}}^{\\#}_1} = \\overline{\\hat{\\mathcal{A}}}_1\\,\\,(\\textup{Lemma}\\, \\,\\ref{cl}\\,\\eqref{A1cl}). \n\\end{align*} \n\n\\noindent Next we will show that the rhs of \\eqref{a1_hat_a1_sharp_is_a3_sharp} is a \nsubset of its lhs. We will simultaneously prove the following two statements.
\n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}} \\} &\\supset \\Delta (\\hat{\\mathcal{A}}_{3} \\sqcup \\hat{\\mathcal{D}}_4^{\\#\\flat}), \\label{a3_is_subset_of_a1_a1_closure_down_stairs} \\\\\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2} \\} \\cap \\Delta (\\hat{\\mathcal{A}}_{3} \\sqcup \\hat{\\mathcal{D}}_4^{\\#\\flat}) &= \\varnothing. \\label{a3_interesct_a1_pa2_is_empty_set_equation}\n\\end{align} \n\n\\noindent Since $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}}$ is a closed set, \\eqref{a3_is_subset_of_a1_a1_closure_down_stairs} implies that \nthe rhs of \\eqref{a1_hat_a1_sharp_is_a3_sharp} is a subset of its lhs.\\footnote{As before, we do not need the full strength of \n\\eqref{a3_is_subset_of_a1_a1_closure_down_stairs}. We simply need that \n$\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}} \\} \\supset \\Delta \\hat{\\mathcal{A}}_{3}$.}\n\n\\begin{claim}\n\\label{claim_a3_subset_of_a1_a1_closure}\nLet $~(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta (\\hat{\\mathcal{A}}_{3} \\sqcup \\hat{\\mathcal{D}}_4^{\\#\\flat})$.\nThen there exist solutions \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{(\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\hat{\\mathcal{A}}_1^{\\#}}$$ \nclose to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations \n\\begin{align}\n\\label{a3_is_subset_of_a1_a1_closure_down_stairs_equation}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)})= 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). 
\n\\end{align}\nMoreover, whenever such a solution is sufficiently close to $(\\tilde{f},\\tilde{p}, l_{\\p})$, it lies in $\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_2}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1)) & \\neq 0. \\label{pa3_intersect_a1_a2_is_empty}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), \\tilde{p}(t_1))$ does not lie in $\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2$.\n\\end{claim}\nIt is easy to see that claim \\ref{claim_a3_subset_of_a1_a1_closure} proves \\eqref{a3_is_subset_of_a1_a1_closure_down_stairs} and \\eqref{a3_interesct_a1_pa2_is_empty_set_equation} \nsimultaneously. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v$, $w$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$,\n$f_{ij}(t_1, t_2)$\nbe exactly the same as defined in the \nproof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}.\nTake\n\\[(\\tilde{f} (t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\hat{\\mathcal{A}}^{\\#}_1}\\] \nto be a point that is close to \n$( \\tilde{f}, l_{\\tilde{p}})$ and $l_{\\tilde{p}(t_1, t_2)}$ a point \nin $\\mathbb{P} T\\mathbb{P}^2$ that is close to $l_{\\tilde{p}(t_1)}$.\nWithout loss of generality, we can assume that \n\\[ v + \\eta w \\in l_{\\tilde{p}}, ~~v + \\eta_{t_1} w \\in l_{\\tilde{p}(t_1)} ~~\\textnormal{and} ~~v + (\\eta_{t_1} + \\eta_{t_2}) w \\in l_{\\tilde{p}(t_1, t_2)} \\] \nfor some complex \nnumbers $\\eta$, $\\eta_{t_1}$ and $\\eta_{t_1} +\\eta_{t_2}$ close to each other. 
\nLet the numbers \n$\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$ be the same as in the proof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}.\nSince $(\\tilde{f} (t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\hat{\\mathcal{A}}^{\\#}_1}$, we conclude that \n\\[ f_{00}(t_1, t_2) =f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =0.\\] \nThe functional equation \n\\eqref{a3_is_subset_of_a1_a1_closure_down_stairs_equation} \nhas a solution if and only if the following has a numerical solution: \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_again_d4_hat} \n\\end{align} \nFirst let us assume $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\hat{\\mathcal{A}}_3$. \nIt is easy to see that $f_{20}$ and $f_{02}$ can not both be zero; \nlet us assume $f_{02}$ is non zero. \nFollowing the same argument as in the proof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}, \nwe make a change of coordinates and write $\\mathrm{F}$ as \n\\begin{align*}\n\\mathrm{F} & = \\hat{\\hat{y}}_{t_2}^2 +\\frac{\\mathcal{B}^{f(t_1, t_2)}_{2}}{2!} x_{t_2}^2 + \\frac{\\mathcal{B}^{f(t_1, t_2)}_{3}}{3!} x_{t_2}^3 + \n\\frac{\\mathcal{B}^{f(t_1, t_2)}_{4}}{4!} x_{t_2}^4 + \\mathrm{O}(x_{t_2}^5), \\\\ \n\\textnormal{where} & \\qquad \\hat{\\hat{y}}_{t_2} := \\sqrt{\\varphi ( x_{t_2}, \\hat{y}_{t_2})} \\hat{y}_{t_2}, \\qquad \\mathcal{B}^{f(t_1, t_2)}_2 = f_{20}(t_1, t_2) - \\frac{f_{11}(t_1, t_2)^2}{f_{02}(t_1, t_2)}. 
\n\\end{align*}\nIt is easy to see that the only solutions to \\eqref{eval_f1_again_d4_hat} (in terms of the new coordinates) are\n\\bge\n\\mathcal{B}_{2}^{f(t_1, t_2)} = \\ {\\textstyle \\frac{\\mathcal{B}_{4}^{f(t_1, t_2)}}{12}} x_{t_2}^2 + \\mathrm{O}(x_{t_2}^3), ~\\mathcal{B}_{3}^{f(t_1, t_2)} = {\\textstyle \\frac{-\\mathcal{B}_{4}^{f(t_1, t_2)}}{2}} x_{t_2}+\\mathrm{O}(x_{t_2}^2), ~\\hat{\\hat{y}}_{t_2} =0, ~x_{t_2} \\neq 0 ~\\textnormal{(but small).} \\label{b20_equation} \n\\ede\nIt remains to show that these solutions satisfy \\eqref{pa3_intersect_a1_a2_is_empty}. \nFirst consider the case when $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\notin \\mathcal{P} \\mathcal{A}_3 $, i.e., \n$\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_2}(\\tilde{f}, \\tilde{p}, l_{\\p}) \\neq 0$.\nThen \\eqref{pa3_intersect_a1_a2_is_empty} is obviously true, since the section $\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_2}$ is \\textit{continuous}. \nNext, consider the case when $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\mathcal{P} \\mathcal{A}_3$. \nDefine the numbers \n\\begin{align}\n\\mathrm{J}_1 &:= f_{20}(t_1, t_2) + \\eta_{t_1} f_{11}(t_1, t_2), ~~ \\mathrm{J}_2:= f_{11}(t_1, t_2) + \\eta_{t_1} f_{02}(t_1, t_2). \\label{b2_can_not_be_zero}\n\\end{align}\nObserve that \n\\begin{align}\n\\{ \\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_2}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\} (f(t_1, t_2) \\otimes p(t_1)^{\\otimes d} \\otimes (v + \\eta_{t_1} w) \\otimes v) &= \\mathrm{J}_1, \\nonumber \\\\ \n\\{ \\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_2}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\} (f(t_1, t_2) \\otimes p(t_1)^{\\otimes d} \\otimes (v + \\eta_{t_1} w) \\otimes w) &= \\mathrm{J}_2. \n\\label{psi_pa2_number_form}\n\\end{align}\nIf \\eqref{pa3_intersect_a1_a2_is_empty} were false, then $\\mathrm{J}_1$ and $\\mathrm{J}_2$ would vanish (by \\eqref{psi_pa2_number_form}). 
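As an independent sanity check of the leading coefficients in \\eqref{b20_equation} (this is only an illustrative sketch, not part of the argument: the $\\mathrm{O}(x_{t_2}^5)$ tail is dropped and the rational sample values are ours), one can solve the truncated system with exact arithmetic:

```python
from fractions import Fraction as Fr

# Truncated model of F on the locus where \hat{\hat{y}}_{t_2} = 0:
#   F   = B2/2 x^2 + B3/6 x^3 + B4/24 x^4   (O(x^5) tail dropped)
#   F_x = B2 x   + B3/2 x^2 + B4/6 x^3
# For x != 0 the system F = F_x = 0 is linear in (B2, B3); Cramer's rule
# should reproduce the leading terms of (b20_equation):
#   B2 = (B4/12) x^2,   B3 = -(B4/2) x.
def solve_B2_B3(B4, x):
    a11, a12, b1 = Fr(1, 2), x / 6, -B4 * x**2 / 24   # F / x^2 = 0
    a21, a22, b2 = Fr(1), x / 2, -B4 * x**2 / 6       # F_x / x = 0
    det = a11 * a22 - a12 * a21                        # = x/12, nonzero for x != 0
    B2 = (b1 * a22 - a12 * b2) / det
    B3 = (a11 * b2 - a21 * b1) / det
    return B2, B3

B4, x = Fr(7), Fr(1, 5)                                # illustrative sample values
B2, B3 = solve_B2_B3(B4, x)
assert (B2, B3) == (B4 * x**2 / 12, -B4 * x / 2)
```

Since the system is linear in $(\\mathcal{B}_2, \\mathcal{B}_3)$ once $x_{t_2} \\neq 0$ is fixed, the solution is unique, matching the uniqueness asserted above.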
\nEquations \\eqref{b2_can_not_be_zero} and \\eqref{b20_equation} imply\n\\begin{align}\n\\frac{\\mathcal{B}^{f(t_1, t_2)}_{4}}{12} x_{t_2}^2 + \\mathrm{O}(x_{t_2}^3) &= \\mathrm{J}_1 - \\frac{f_{11}(t_1, t_2)}{f_{02}(t_1, t_2)} \\mathrm{J}_2. \\label{b2_can_not_be_zero_again}\n\\end{align}\nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\hat{\\mathcal{A}}_3$, \nwe conclude $\\mathcal{B}^{f(t_1, t_2)}_{4} \\neq 0$. Hence, \n\\eqref{b2_can_not_be_zero_again} implies that \n$\\mathrm{J}_1$ and $\\mathrm{J}_2$ can not both be zero. \nHence \\eqref{pa3_intersect_a1_a2_is_empty} holds. \\\\\n\\hf \\hf Next, let us assume that $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\hat{\\mathcal{D}}_4^{\\#\\flat}$. \nDefine the following number \n\\begin{align*}\n\\mathrm{G}&:= x_{t_2} \\mathrm{F}_{x_{t_2}} + y_{t_2} \\mathrm{F}_{y_{t_2}} -2 \\mathrm{F} \\\\\n & = \\frac{f_{30}(t_1, t_2)}{6} x_{t_2}^3 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2} + \\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots.\n\\end{align*}\nNote that the cubic term of $\\mathrm{G}$ is the same as the cubic term of $\\mathrm{F}$. \nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta\\hat{\\mathcal{D}}_4$, \nwe conclude, using the same argument as in \\cite{BM13}, \nthat there exists a change of coordinates \n\\begin{align*}\nx_{t_2} & = \\hat{x}_{t_2} +\\mathrm{E}_1 (\\hat{x}_{t_2}, \\hat{y}_{t_2}), \\qquad y_{t_2} = \\hat{y}_{t_2} + \\mathrm{E}_2 (\\hat{x}_{t_2}, \\hat{y}_{t_2}), \n\\end{align*}\n(where $\\mathrm{E}_i(\\hat{x}_{t_2}, \\hat{y}_{t_2})$ are second order in $\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$)\nso that $\\mathrm{G}$ is given by \n\\begin{align}\n\\mathrm{G} & =\\frac{f_{30}(t_1, t_2)}{6} \\hat{x}_{t_2}^3 + \\frac{f_{21}(t_1, t_2)}{2} \\hat{x}_{t_2}^2 \\hat{y}_{t_2} + \\frac{f_{12}(t_1, t_2)}{2} \\hat{x}_{t_2} \\hat{y}_{t_2}^2 + \n\\frac{f_{03}(t_1, t_2)}{6} \\hat{y}_{t_2}^3.
\\label{new_G}\n \n\\end{align}\nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta\\hat{\\mathcal{D}}_4$, \nthere are three possibilities to consider: \n\\begin{align}\nf_{30}(t_1, t_2) & \\neq 0 \\qquad \\textnormal{or} \\qquad f_{03}(t_1, t_2) \\neq 0 ~~ \\textnormal{or} \\nonumber \\\\ \nf_{30}(t_1, t_2) & = f_{03}(t_1, t_2) =0, ~~~~\\textnormal{but} ~~~~f_{21}(t_1, t_2) \\neq 0 ~~\\textnormal{and} ~~f_{12}(t_1, t_2) \\neq 0. \\label{fij_cases_D4_hat}\n\\end{align}\nLet us assume $f_{30}(t_1, t_2)\\neq 0$. \nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta\\hat{\\mathcal{D}}_4^{\\#\\flat}$, \nequation \\eqref{new_G} can now be written as \n\\begin{align}\n\\mathrm{G}&= \n\\frac{f_{30}(t_1,t_2)}{6}(\\hat{x}_{t_2} - \\mathrm{A}_1 \\hat{y}_{t_2}) (\\hat{x}_{t_2} - \\mathrm{A}_2 \\hat{y}_{t_2}) (\\hat{x}_{t_2} - \\mathrm{A}_3 \\hat{y}_{t_2})\n\\end{align}\nwhere $\\mathrm{A}_i$ are complex numbers such that \n\\begin{align}\n \\mathrm{A}_1 & \\neq \\mathrm{A}_2 \\neq \\mathrm{A}_3\\neq \\mathrm{A}_1 ~~\\textnormal{and} ~~\\eta \\neq \\frac{1}{\\mathrm{A}_1}, ~\\frac{1}{\\mathrm{A}_2} ~~\\textnormal{or} ~\\frac{1}{\\mathrm{A}_3}. \n \\label{eta_neq_d4_sharp}\n\\end{align}\nTo see why the last inequality is true, \nnote that if $\\eta$ is either $\\frac{1}{\\mathrm{A}_1}$, \n$\\frac{1}{\\mathrm{A}_2}$ or $\\frac{1}{\\mathrm{A}_3}$ then \n\\[ \\nabla^3 f|_{\\tilde{p}} (v+ \\eta w, v + \\eta w, v + \\eta w) =0. \\]\nSince \n$(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\hat{\\mathcal{D}}_{4}^{\\#\\flat}$ \nthe last inequality of \\eqref{eta_neq_d4_sharp} holds.
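The quadratic blocks multiplying $\\frac{f_{30}(t_1, t_2)}{6}$ in the system \\eqref{hat_D4_neighbourhood_inside_a1_hat_a1} are obtained by differentiating the factored cubic in the $\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$ directions and expanding via the elementary symmetric polynomials of the $\\mathrm{A}_i$. As an illustrative exact-arithmetic sanity check of that expansion (not part of the argument; the rational sample values are ours):

```python
from fractions import Fraction as Fr
from itertools import product

# Cubic part of F around a D4 point, in factored form (as in the proof):
#   G = f30/6 (x - A1 y)(x - A2 y)(x - A3 y).
A1, A2, A3, f30 = Fr(2), Fr(-1, 3), Fr(5, 4), Fr(6)   # illustrative sample values

e1 = A1 + A2 + A3                      # elementary symmetric polynomials
e2 = A1 * A2 + A1 * A3 + A2 * A3
e3 = A1 * A2 * A3

def dG_dx_factored(x, y):              # product rule on the factored form
    u1, u2, u3 = x - A1 * y, x - A2 * y, x - A3 * y
    return f30 / 6 * (u2 * u3 + u1 * u3 + u1 * u2)

def dG_dy_factored(x, y):
    u1, u2, u3 = x - A1 * y, x - A2 * y, x - A3 * y
    return f30 / 6 * (-A1 * u2 * u3 - A2 * u1 * u3 - A3 * u1 * u2)

def dG_dx_vieta(x, y):                 # expansion used in the displayed system
    return f30 / 6 * (3 * x**2 - 2 * e1 * x * y + e2 * y**2)

def dG_dy_vieta(x, y):
    return -f30 / 6 * (e1 * x**2 - 2 * e2 * x * y + 3 * e3 * y**2)

for x, y in product([Fr(-2), Fr(1, 7), Fr(3)], repeat=2):
    assert dG_dx_factored(x, y) == dG_dx_vieta(x, y)
    assert dG_dy_factored(x, y) == dG_dy_vieta(x, y)
```

Both sides are quadratic forms in $(x, y)$, so agreement on the sample grid reflects the underlying polynomial identity.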
\nHence, \n\\eqref{a3_is_subset_of_a1_a1_closure_down_stairs_equation} \nhas a solution if and only if \n\\begin{align}\n\\mathrm{G} = &\\,\\, \\frac{f_{30}(t_1,t_2)}{6}(\\hat{x}_{t_2} - \\mathrm{A}_1 \\hat{y}_{t_2}) (\\hat{x}_{t_2} - \\mathrm{A}_2 \\hat{y}_{t_2}) (\\hat{x}_{t_2} - \\mathrm{A}_3 \\hat{y}_{t_2}) = 0, \\nonumber \\\\ \n\\mathrm{F}_{x_{t_2}} =&\\,\\, \\hat{x}_{t_2} f_{20}(t_1, t_2) + \\hat{y}_{t_2} f_{11}(t_1, t_2) \\nonumber \\\\ \n & + \\frac{f_{30}(t_1, t_2)}{6} \\Big( 3 \\hat{x}_{t_2}^2 \n -2 \\hat{x}_{t_2} \\hat{y}_{t_2}\\big ({\\textstyle \\sum_{i=1}^3\\mathrm{A}_i}\\big) + \n (\\mathrm{A}_1\\mathrm{A}_2 + \\mathrm{A}_1\\mathrm{A}_3+ \\mathrm{A}_2\\mathrm{A}_3 ) \n \\hat{y}_{t_2}^2 \\Big)+ \n \\mathrm{E}_3(\\hat{x}_{t_2}, \\hat{y}_{t_2}) =0, \\nonumber \\\\\n\\mathrm{F}_{y_{t_2}} =&\\,\\, \\hat{x}_{t_2} f_{11}(t_1, t_2) + \\hat{y}_{t_2} f_{02}(t_1, t_2) \\nonumber \\\\ \n & -\\frac{f_{30}(t_1, t_2)}{6}\\Big(\\hat{x}_{t_2}^2\\big({\\textstyle\\sum_{i=1}^3 \\mathrm{A}_i}\\big) \n -2 \\hat{x}_{t_2} \\hat{y}_{t_2}\\big({\\textstyle \\sum_{i\\neq j}\\mathrm{A}_i\\mathrm{A}_j}\\big) \n +3\\hat{y}_{t_2}^2\\mathrm{A}_1\\mathrm{A}_2 \\mathrm{A}_3 \\Big) + \\mathrm{E}_4(\\hat{x}_{t_2}, \\hat{y}_{t_2}) =0 \\nonumber \\\\ \n(\\hat{x}_{t_2}, \\hat{y}_{t_2}) \\neq &\\,\\, (0,0) \\qquad \\textnormal{(but small)} \\label{hat_D4_neighbourhood_inside_a1_hat_a1}\n\\end{align}\nhas a solution, where $\\mathrm{E}_i(\\hat{x}_{t_2}, \\hat{y}_{t_2})$ are third order in $(\\hat{x}_{t_2}, \\hat{y}_{t_2})$. \\\\ \n\\hf \\hf To avoid confusion let us clarify one point; in the above equation \n$\\mathrm{F}_{x_{t_2}}$ \nand $\\mathrm{F}_{y_{t_2}}$ \nare simply \nexpressed in terms of the new coordinates \n$\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$. 
They are still the partial derivatives of \n$\\mathrm{F}$ with respect to $x_{t_2}$ and $y_{t_2}$; \nthey are \\textit{not} $\\mathrm{F}_{\\hat{x}_{t_2}}$ \nand $\\mathrm{F}_{\\hat{y}_{t_2}}$, the partial derivatives of $\\mathrm{F}$ with respect to \n$\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$. Now we will construct the solutions to \\eqref{hat_D4_neighbourhood_inside_a1_hat_a1}. \nThere are three solutions; we give just one of them, since the rest are similar. \nIt is given by: \n\\begin{align}\n\\hat{x}_{t_2}&= \\mathrm{A}_1 \\hat{y}_{t_2}, ~~\\hat{y}_{t_2} \\neq 0 ~~\\textnormal{(but small)}, ~~f_{20}(t_1, t_2) = \\textnormal{small}, \\nonumber \\\\\nf_{02}(t_1, t_2) & = \\frac{f_{30}(t_1, t_2)}{6}\\Big( \n2 \\mathrm{A}_1^3 - 2 \\mathrm{A}_1^2 \\mathrm{A}_2 - 2 \\mathrm{A}_1^2 \\mathrm{A}_3 + 2 \\mathrm{A}_1 \\mathrm{A}_2\\mathrm{A}_3 \\Big) \\hat{y}_{t_2} \n+ \\mathrm{A}_1^2 f_{20}(t_1, t_2) + \n\\mathrm{E}_5(\\hat{y}_{t_2}), \\nonumber \\\\ \nf_{11}(t_1, t_2) & = \\frac{f_{30}(t_1, t_2)}{6}\\Big( -\\mathrm{A}_1^2 +\\mathrm{A}_1 \\mathrm{A}_2 +\\mathrm{A}_1 \\mathrm{A}_3 - \\mathrm{A}_2 \\mathrm{A}_3 \\Big)\\hat{y}_{t_2} \n-\\mathrm{A}_1 f_{20}(t_1, t_2) + \n\\mathrm{E}_6(\\hat{y}_{t_2}), \\label{pa2_section_around_hat_d4_sharp} \n\\end{align}\nwhere $\\mathrm{E}_i(\\hat{y}_{t_2})$ are second order in $\\hat{y}_{t_2}$ and independent of $f_{20}(t_1, t_2)$.
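One can sanity-check the next step with exact rational arithmetic: solving the truncated equations $\\mathrm{F}_{x_{t_2}} = \\mathrm{F}_{y_{t_2}} = 0$ for $f_{11}(t_1, t_2)$ and $f_{02}(t_1, t_2)$ along $\\hat{x}_{t_2} = \\mathrm{A}_1 \\hat{y}_{t_2}$ makes the combination $(1 - \\eta_{t_1}\\mathrm{A}_1)\\mathrm{J}_2 + (\\mathrm{A}_1 - \\mathrm{A}_1^2\\eta_{t_1})\\mathrm{J}_1$ a nonzero multiple of $\\hat{y}_{t_2}$, with factor $\\pm\\beta$ for $\\beta := \\frac{f_{30}(t_1,t_2)}{6}(\\mathrm{A}_1-\\mathrm{A}_2)(\\mathrm{A}_1-\\mathrm{A}_3)(1-\\mathrm{A}_1\\eta_{t_1})^2$. This sketch drops the higher-order $\\mathrm{E}_i$ terms and uses illustrative sample values; it is not part of the argument:

```python
from fractions import Fraction as Fr

# Along x = A1*y, solve the truncated equations F_x = 0 and F_y = 0
# (E_3, E_4 dropped) for f11 and f02, then check that
#   (1 - eta*A1)*J2 + (A1 - A1**2*eta)*J1
# equals (+/-) beta * y, with beta = f30/6 (A1-A2)(A1-A3)(1 - A1*eta)^2 != 0.
A1, A2, A3 = Fr(2), Fr(-1, 3), Fr(5, 4)                   # distinct roots
f30, f20, y, eta = Fr(6), Fr(1, 9), Fr(1, 11), Fr(2, 7)   # eta != 1/A_i

e1, e2, e3 = A1 + A2 + A3, A1*A2 + A1*A3 + A2*A3, A1*A2*A3
x = A1 * y
dGdx = f30 / 6 * (3 * x**2 - 2 * e1 * x * y + e2 * y**2)
dGdy = -f30 / 6 * (e1 * x**2 - 2 * e2 * x * y + 3 * e3 * y**2)

f11 = -(f20 * x + dGdx) / y       # from F_x = f20*x + f11*y + dG/dx = 0
f02 = -(f11 * x + dGdy) / y       # from F_y = f11*x + f02*y + dG/dy = 0

J1 = f20 + eta * f11
J2 = f11 + eta * f02
comb = (1 - eta * A1) * J2 + (A1 - A1**2 * eta) * J1
beta = f30 / 6 * (A1 - A2) * (A1 - A3) * (1 - A1 * eta)**2

assert beta != 0 and comb**2 == (beta * y)**2   # so J1, J2 cannot both vanish
```

The check only uses that $\\beta \\neq 0$, which is all the argument below needs.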
\nIt remains to show that \\eqref{pa3_intersect_a1_a2_is_empty} holds.\nEquation \\eqref{pa2_section_around_hat_d4_sharp} implies that \n\\begin{align}\n & (1 -\\eta_{t_1} \\mathrm{A}_1)\\mathrm{J}_2 + (\\mathrm{A}_1 - \\mathrm{A}_1^2 \\eta_{t_1}) \\mathrm{J}_1 = \\beta \\hat{y}_{t_2} + \\mathrm{O}(\\hat{y}_{t_2}^2), \\label{pa2_section_around_hat_d4_sharp_again}\\\\\n\\textnormal{where} \\qquad \\beta &:= \\frac{f_{30}(t_1, t_2)}{6}(\\mathrm{A}_1 - \\mathrm{A}_2) (\\mathrm{A}_1 - \\mathrm{A}_3)(-1 + \\mathrm{A}_1 \\eta_{t_1})^2, \\nonumber \n\\end{align}\nand $\\mathrm{J}_1$ and $\\mathrm{J}_2$ are as defined in \\eqref{b2_can_not_be_zero}.\nNote that the rhs of \\eqref{pa2_section_around_hat_d4_sharp_again} \nis independent of $f_{20}(t_1, t_2)$. By \\eqref{eta_neq_d4_sharp}, \n$\\beta \\neq 0$. Hence, by \\eqref{pa2_section_around_hat_d4_sharp_again}, $\\mathrm{J}_1$ and $\\mathrm{J}_2$ \ncan not both vanish. As a result, \\eqref{pa3_intersect_a1_a2_is_empty} holds. \nA similar argument holds for the other two solutions of \\eqref{hat_D4_neighbourhood_inside_a1_hat_a1}. \\\\\n\\hf \\hf A similar argument will go through if \neither of the \nother two cases of \\eqref{fij_cases_D4_hat} holds. \\qed \n\n\\begin{cor}\n\\label{pa2_section_mult_around_pa3}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is equal to the dimension of $ \\Delta \\mathcal{P} \\mathcal{A}_{3}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{A}_3$.
\nThen the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{2}} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}} \\longrightarrow \\pi_2^* \\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathbb{W}$$ \nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $2$. \n\\end{cor}\n\\noindent \\textbf{Proof: } Since $\\mathcal{Q}$ is generic, the sections \n\\[ \\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q}: \\Delta \\overline{\\hat{\\mathcal{A}}^{\\#}_1} \\longrightarrow \\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathbb{W}, \n~~ \\Psi_{\\mathcal{P} \\mathcal{A}_3}: \\Psi_{\\mathcal{P} \\mathcal{A}_2}^{-1}(0) \\longrightarrow \\UL_{\\mathcal{P} \\mathcal{A}_3} \\]\nare transverse to the zero set. \nHence, there exists a unique \n$(\\tilde{f}(t_1, t_2), l_{p(t_1)}) \\in \\Delta \\overline{\\hat{\\mathcal{A}}^{\\#}_1}$ close to $(\\tilde{f}, l_{\\p})$ for a specified \nvalue of $\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{30}(t_1, t_2)$. \\footnote{Provided \n$\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{30}(t_1, t_2)$ are sufficiently small.} \nIn other words we can express all the $f_{ij}(t_1, t_2)$ in terms of \n$\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{30}(t_1, t_2)$. \nSince $\\mathcal{B}^{f(t_1, t_2)}_4 \\neq 0$, \nequation \\eqref{b2_can_not_be_zero_again} implies that \nthe number of \nsolutions to the set of equations \n\\[ \\mathrm{J}_1 = \\xi_1, \\qquad \\mathrm{J}_2 = \\xi_2 \\] \nis $2$, where $\\xi_i$ is a small perturbation. \\qed \n\n\\begin{cor}\n\\label{pa2_section_mult_around_hat_d4}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is equal to dimension of $ \\Delta \\hat{\\mathcal{D}}_{4}^{\\#\\flat}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. 
Suppose $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\hat{\\mathcal{D}}_4^{\\#\\flat}$. \nThen the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_{4}} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}_1^{\\#}} \\longrightarrow \\pi_2^* \\UL_{\\mathcal{P} \\mathcal{D}_4} \\oplus \\mathbb{W}$$ \nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $3$.\n\\end{cor}\n\n\\noindent \\textbf{Proof: } Since $\\mathcal{Q}$ is generic, the sections \n\\[ \\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q}: \\Delta \\overline{\\hat{\\mathcal{A}}^{\\#}_1} \\longrightarrow \\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathbb{W}, \n~~ \\Psi_{\\mathcal{P} \\mathcal{D}_4}: \\Psi_{\\mathcal{P} \\mathcal{A}_2}^{-1}(0) \\longrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_4} \\]\nare transverse to the zero set. \nHence, there exists a unique \n$(\\tilde{f}(t_1, t_2), l_{p(t_1)}) \\in \\Delta \\overline{\\hat{\\mathcal{A}}^{\\#}_1}$ close to $(\\tilde{f}, l_{\\p})$ for a specified \nvalue of $\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{02}(t_1, t_2)$.\\footnote{Provided \n$\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{02}(t_1, t_2)$ are sufficiently small.} \nIn other words, we can express all the $f_{ij}(t_1, t_2)$ in terms of \n$\\mathrm{J}_1$, $\\mathrm{J}_2$ and $f_{02}(t_1, t_2)$. Since $\\beta \\neq 0$, \nequation \n\\eqref{pa2_section_around_hat_d4_sharp_again} \nimplies that \nthe number of \nsolutions to the set of equations \n\\[ \\mathrm{J}_1 = \\xi_1, \\qquad \\mathrm{J}_2 = \\xi_2 \\] \nis $1$, where $\\xi_i$ is a small perturbation. \nSince there are a total of $3$ solutions to \\eqref{hat_D4_neighbourhood_inside_a1_hat_a1}, \nthe total multiplicity is $3$.
\\qed \\\\\n\n\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa2_cl}):} It suffices to prove the following two statements in view of \\eqref{pak2_is_subset_of_a1_and_pak} : \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1\\circ \\mathcal{P} \\mathcal{A}}_2: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\} &= \n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_{4}: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\}\n\\label{closure_a1_pa2_f02_not_zero} \\\\\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0 \\} &= \\Delta \\overline{\\hat{\\mathcal{D}}^{\\#\\flat}_5} \\label{a1_pa2_d5}\n \\end{align}\nLet us directly prove a more general version of \\eqref{closure_a1_pa2_f02_not_zero}: \n\\begin{lmm}\n\\label{closure_a1_pak_f02_not_zero}\nIf $k \\geq 2$, then \n\\begin{align*}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_k: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\} &= \n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_{k+2}: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\}. 
\n\\end{align*}\n\\end{lmm}\nNote that \\eqref{closure_a1_pa2_f02_not_zero} is a special case of Lemma \\ref{closure_a1_pak_f02_not_zero}; take $k=2$.\\\\\n\n\\noindent \\textbf{Proof: } We will prove the following two facts simultaneously:\n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_{k} \\} \\supset \\Delta \\mathcal{P} \\mathcal{A}_{k+2} \\qquad & \\forall ~k \\geq 2, \\label{pak2_is_subset_of_a1_and_pak} \\\\\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_{k+1} \\} \\cap \\Delta \\mathcal{P} \\mathcal{A}_{k+2} = \\varnothing \\qquad & \\forall ~k \\geq 1. \\label{pak2_intersect_a1_and_pa1k+1_is_empty}\n\\end{align}\nIt is easy to see that \\eqref{pak2_is_subset_of_a1_and_pak} and \\eqref{pak2_intersect_a1_and_pa1k+1_is_empty} imply \nLemma \\ref{closure_a1_pak_f02_not_zero}. We will now prove the following claim: \n\\begin{claim}\n\\label{claim_a4_closure_simultaneous}\nLet $~(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{A}_{k+2}$ and $ k\\geq 2$.\nThen there exists a solution \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_2$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\label{closure_a1_ak_hessian_not_zero}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_3}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, \\ldots, \\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_k}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n~\\pi_2^* \\Psi_{\\mathcal{P} 
\\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, ~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \n\\end{align}\nMoreover, \\textit{any} solution $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)})$ sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ \nlies in $ \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_k$, i.e.,\n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_{k+1}}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0. \\label{psi_pa_k_plus_1_does_not_vanish}\n\\end{align}\nIn particular $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)})$ \\textit{does not} lie in $ \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_{k+1} $.\n\\end{claim}\n\\noindent It is easy to see that claim \\ref{claim_a4_closure_simultaneous} implies \\eqref{pak2_is_subset_of_a1_and_pak} and \\eqref{pak2_intersect_a1_and_pa1k+1_is_empty} \nsimultaneously for all $k\\geq 2$.\nThe fact that \\eqref{pak2_intersect_a1_and_pa1k+1_is_empty} holds for $k=1$ follows from \\eqref{a3_interesct_a1_pa2_is_empty_set_equation} \n(since $\\mathcal{P} \\mathcal{A}_3$ is a subset of $\\hat{\\mathcal{A}}_3$.)\\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$\nbe exactly the same as defined in the \nproof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}.\nLet $v_1, w:\\mathcal{U}_{\\tilde{p}} \\longrightarrow T\\mathbb{P}^2$ be vectors dual to the one \nforms $d\\pi_x$ and $d\\pi_y$ respectively.\nTake\n\\[(\\tilde{f} (t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_2\\] \nto be a point that is close to \n$( \\tilde{f}, l_{\\tilde{p}})$ and $l_{\\tilde{p}(t_1, t_2)}$ a point \nin $\\mathbb{P} T\\mathbb{P}^2$ that is close to 
$l_{\\tilde{p}(t_1)}$.\nWithout loss of generality, we can assume that \n\\[ v:= v_1 + \\eta w \\in l_{\\tilde{p}}, ~~v_1 + \\eta_{t_1} w \\in l_{\\tilde{p}(t_1)} ~~\\textnormal{and} ~~v + (\\eta_{t_1} + \\eta_{t_2}) w \\in l_{\\tilde{p}(t_1, t_2)} \\] \nfor some complex \nnumbers $\\eta$, $\\eta_{t_1}$ and $\\eta_{t_2}$ close to each other. Let \n\\begin{align*}\nf_{ij}(t_1, t_2) & := \\nabla^{i+j} f(t_1, t_2)|_{p(t_1)} \n(\\underbrace{v,\\cdots v}_{\\textnormal{$i$ times}}, \\underbrace{w,\\cdots w}_{\\textnormal{$j$ times}}).\n\\end{align*}\nThe numbers \n$\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$ are the same as in the proof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}. \nSince $(\\tilde{f} (t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_2$, we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{20}(t_1, t_2) = f_{11}(t_1, t_2)=0.\\]\nMoreover, since $(\\tilde{f}, l_{\\p}) \\in \\mathcal{P} \\mathcal{A}_{k+2}$ we conclude that \n$f_{02}$ and $\\mathcal{A}^{f}_{k+3}$ are non zero.\nHence $f_{02}(t_1, t_2)$ and $\\mathcal{A}^{f(t_1, t_2)}_{k+3}$ are non zero \nif $\\tilde{f}(t_1, t_2)$ is sufficiently close to $\\tilde{f}$.\nSince $f_{02}(t_1, t_2) \\neq 0$ , \nfollowing the same argument as in the proof of claim \\ref{a1_a1_closure_intersect_a1_or_a2_empty_equations_claim}, \nwe can make a change of coordinates to write $\\mathrm{F}$ as \n\\begin{align*} \n\\mathrm{F}&= \\hat{\\hat{y}}_{t_2}^2 + \\frac{\\mathcal{A}^{f(t_1, t_2)}_3}{3!}x_{t_2}^3 + \\frac{\\mathcal{A}^{f(t_1, t_2)}_4}{4!} x_{t_2}^4 + \\ldots \n\\end{align*}\nThe functional equation \\eqref{closure_a1_ak_hessian_not_zero} has a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\label{closure_a1_ak_hessian_not_zero_numbers}\n\\hat{\\hat{y}}_{t_2}^2 + \\frac{\\mathcal{A}^{f(t_1, t_2)}_3}{3!}x_{t_2}^3 + \\frac{\\mathcal{A}^{f(t_1, t_2)}_4}{4!} 
x_{t_2}^4 + \\ldots &=0, \\qquad 2 \\hat{\\hat{y}}_{t_2} = 0, \\nonumber \\\\\n\\frac{\\mathcal{A}^{f(t_1, t_2)}_3}{2!}x_{t_2}^3 + \\frac{\\mathcal{A}^{f(t_1, t_2)}_4}{3!} x_{t_2}^4 + \\ldots &= 0, \\qquad \\mathcal{A}^{f(t_1, t_2)}_3, \\ldots, \\mathcal{A}^{f(t_1, t_2)}_k = 0, \\nonumber \\\\\n(\\hat{\\hat{y}}_{t_2}, x_{t_2}) & \\neq (0,0) \\qquad \\textnormal{(but small).}\n\\end{align}\nIt is easy to see that the solutions to \\eqref{closure_a1_ak_hessian_not_zero_numbers} \nexist and are given by \n\\begin{align}\n\\mathcal{A}^{f(t_1, t_2)}_3,& \\ldots, \\mathcal{A}^{f(t_1, t_2)}_k = 0, \\nonumber \\\\\n\\mathcal{A}^{f(t_1, t_2)}_{k+1} &= \\frac{\\mathcal{A}_{k+3}^{f(t_1, t_2)}}{(k+2)(k+3)} x_{t_2}^2 + \\mathrm{O}(x_{t_2}^3), \\label{pak+2_inside_a1_and_pak_solution}\\\\ \n\\mathcal{A}^{f(t_1, t_2)}_{k+2} &= -\\frac{2 \\mathcal{A}^{f(t_1, t_2)}_{k+3}}{(k+3)} x_{t_2} + \\mathrm{O}(x_{t_2}^2), \n\\qquad \\hat{\\hat{y}}_{t_2} = 0, \\qquad x_{t_2} \\neq 0 \\qquad \\textnormal{(but small).} \\nonumber \n\\end{align}\nBy \\eqref{pak+2_inside_a1_and_pak_solution}, it immediately follows that \\eqref{psi_pa_k_plus_1_does_not_vanish} holds (since $\\mathcal{A}^{f(t_1, t_2)}_{k+3} \\neq 0$). \\qed \n\n\\begin{cor}\n\\label{a1_pak_mult_is_2_Hess_neq_0}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{A}_{k+2}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{A}_{k+2} \\cap \\mathcal{Q}^{-1}(0)$.
\nThen the section $$ \\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{A}_{k+1}} \\oplus \\mathcal{Q}: \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_{k} \\longrightarrow \\pi_2^* \\UL_{\\mathcal{P} \\mathcal{A}_{k+1}} \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $2$.\n\\end{cor}\n\\noindent \\textbf{Proof: } This follows from the fact that the sections \n\\begin{align*}\n\\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{A}_{i}}:& \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_{i-1} - \\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{D}_4}^{-1}(0) \\longrightarrow \\pi_2^\\ast \\UL_{\\mathcal{P} \\mathcal{A}_{i}}\n\\end{align*}\nare transverse to the zero set for all $3 \\leq i \\leq k+2$, the fact that \n$\\mathcal{Q}$ is generic and \\eqref{pak+2_inside_a1_and_pak_solution}. \nThe proof is now similar to that of Corollaries \\ref{a1_section_contrib_from_a1_and_a2}, \n\\ref{pa2_section_mult_around_pa3} and \\ref{pa2_section_mult_around_hat_d4}. \\qed \\\\\n\n\\hf \\hf Next, let us prove \\eqref{a1_pa2_d5}. First we will prove the following two facts: \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2} \\cap \\mathcal{P} \\mathcal{D}_4 &= \\varnothing,\\label{a1_pa2_intersect_pd4_empty} \\\\ \n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3} \\cap \\mathcal{P} \\mathcal{D}_5 &= \\varnothing. \\label{a1_pa3_intersect_pd5_empty} \n\\end{align}\n\\noindent Although \\eqref{a1_pa3_intersect_pd5_empty} is not needed to prove \\eqref{a1_pa2_d5}, we will prove these two \nstatements in one go since their proofs are very similar. \n\n\\begin{claim}\n\\label{a1_pa2_intersect_pd4_empty_claim}\nLet $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_4$.
Then there exist no solutions \n\\[ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2 \\]\nnear $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations \n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0. \n\\label{a1_pa2_intersect_pd4_empty_equation}\n\\end{align}\nSecondly, let $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_5$. Then there exist no solutions \n\\[ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 \\]\nnear $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations \n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0. 
\n\\label{a1_pa3_intersect_pd5_empty_equation}\n\\end{align}\n\\end{claim}\n\\noindent It is easy to see that claim \\ref{a1_pa2_intersect_pd4_empty_claim} proves \\eqref{a1_pa2_intersect_pd4_empty} \nand \\eqref{a1_pa3_intersect_pd5_empty} .\\\\\n\n\\noindent \\textbf{Proof: } For the first part, choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_a4_closure_simultaneous}.\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_2$, we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{20}(t_1, t_2) = f_{11}(t_1, t_2)=0.\\]\nThe functional equation \\eqref{a1_pa2_intersect_pd4_empty_equation} has a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pd4} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &: = \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\frac{f_{30}(t_1, t_2)}{6} x_{t_2}^3 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots.\n\\end{align*} \nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\mathcal{P} \\mathcal{D}_4$ we conclude that \n\\begin{align}\nf_{02} &=0, ~~f_{30} =0, ~~f_{21} \\neq 0, ~~3f_{12}^2 - 4f_{21} f_{03} \\neq 0. 
\\label{pd4_nv_condition_equation}\n\\end{align}\nTo see why the two non vanishing conditions hold, first notice that since $(\\tilde{f}, l_{\\p}) \\in \\hat{\\mathcal{D}}_4$ we conclude that \nthe cubic term in the Taylor expansion of $f$ has no repeated root. In other words \n\\begin{align*}\n\\beta &:= f_{30}^2 f_{03}^2-6 f_{03} f_{12} f_{21} f_{30} + 4 f_{12}^3 f_{30} + 4 f_{03} f_{21}^3 - 3 f_{12}^2 f_{21}^2 \\neq 0. \n\\end{align*}\nSince $(\\tilde{f}, l_{\\p}) \\in \\mathcal{P} \\mathcal{D}_4$ we conclude $f_{30} =0$. Hence we get \\eqref{pd4_nv_condition_equation}. \nNow we will show that \\eqref{eval_f1_pd4} has no solutions. First of all we claim that \n$y_{t_2} \\neq 0 $; we will justify that at the end. Assuming that, define \n$\\mathrm{L} := \\frac{x_{t_2}}{y_{t_2}}$. Substituting $x_{t_2} = \\mathrm{L} y_{t_2}$ in $\\mathrm{F}_{x_{t_2}} =0 $ and using $y_{t_2} \\neq 0$ and $f_{21}(t_1, t_2) \\neq 0$ \nwe can solve for $\\mathrm{L}$ using the Implicit Function Theorem. That gives us \n\\begin{align}\n\\mathrm{L} &= -\\frac{f_{12}(t_1, t_2)}{2 f_{21}(t_1, t_2)} + y_{t_2} \\mathrm{E}_1(y_{t_1}, f_{30}(t_1, t_2)) + f_{30}(t_1, t_2) \\mathrm{E}_2(y_{t_1}, f_{30}(t_1, t_2)), \n\\label{L_value_pd4_again}\n\\end{align}\nwhere $\\mathrm{E}_i(0,0) =0$. Using the value of $\\mathrm{L}$ from \\eqref{L_value_pd4_again}, \nand substituting $x_{t_2} = \\mathrm{L} y_{t_2}$ in $\\mathrm{F} - \\frac{y_{t_2}\\mathrm{F}_{y_{t_2}}}{2} =0$, \nwe conclude that as $(y_{t_2}, f_{30}(t_1, t_2))$ go to zero\n\\begin{align}\n-\\frac{f_{03}}{12} + \\frac{f_{12}^2}{16 f_{21}} &=0. \\label{pd5_dual_eqn}\n\\end{align}\nIt is easy to see that \\eqref{pd5_dual_eqn} contradicts \\eqref{pd4_nv_condition_equation}. \nIt remains to show that $y_{t_2} \\neq 0$. To see why that is so, consider the equation \n$\\mathrm{F}_{y_{t_2}} =0$. It is easy to see that if $y_{t_2} =0$ then $f_{21}(t_1, t_2)$ will go to zero, contradicting \n\\eqref{pd4_nv_condition_equation}. 
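As an illustrative exact-arithmetic sanity check of this elimination (keeping only the cubic terms of $\\mathrm{F}$ in the limit $f_{02} = f_{30} = 0$, dropping all higher-order terms; the rational sample values are ours and this is not part of the argument):

```python
from fractions import Fraction as Fr

# Limit model at a P D4 point (f02 = f30 = 0): at leading order only
#   F = f21/2 x^2 y + f12/2 x y^2 + f03/6 y^3  survives.
f21, f12, f03 = Fr(3), Fr(-5, 2), Fr(7, 4)      # sample values with f21 != 0
y = Fr(1, 9)
L = -f12 / (2 * f21)                             # leading term of (L_value_pd4_again)
x = L * y

F   = f21 / 2 * x**2 * y + f12 / 2 * x * y**2 + f03 / 6 * y**3
F_x = f21 * x * y + f12 / 2 * y**2
F_y = f21 / 2 * x**2 + f12 * x * y + f03 / 2 * y**2

assert F_x == 0                                  # L solves F_x = 0 at leading order
# Eliminating x reproduces the left-hand side of (pd5_dual_eqn):
assert F - y * F_y / 2 == (f12**2 / (16 * f21) - f03 / 12) * y**3
```

The identity holds for every choice of $f_{21} \\neq 0$, $f_{12}$, $f_{03}$ at this truncation, which is exactly why \\eqref{pd5_dual_eqn} must hold in the limit.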
Hence \\eqref{eval_f1_pd4} has no solutions. \\\\ \n\\hf \\hf For the second part of the claim, we use the same set up except for one difference: we require $(\\tilde{f}, l_{\\p}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_3$. \nHence \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{20}(t_1, t_2) = f_{11}(t_1, t_2)= f_{30}(t_1, t_2)=0.\\]\nThe functional equation \\eqref{a1_pa3_intersect_pd5_empty_equation} has a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pd5_again} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &: = \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots.\n\\end{align*} \nSince $(\\tilde{f}, l_{\\p}) \\in \\mathcal{P} \\mathcal{D}_5$ we conclude that \n\\begin{align}\nf_{02} &=0, ~~f_{30} =0, ~~f_{21} =0, ~~f_{40}\\neq 0, ~~f_{12} \\neq 0. \\label{pd5_nv_condition_equation_again}\n\\end{align}\nWe will now show that there are no solutions to \\eqref{eval_f1_pd5_again}. First we claim that $y_{t_2} \\neq 0$; we will justify that at the end. \nAssuming that, define $\\mathrm{L} := \\frac{x_{t_2}}{y_{t_2}}$. Substituting $x_{t_2} = \\mathrm{L} y_{t_2}$ in $\\mathrm{F}_{x_{t_2}} =0 $ and using $y_{t_2} \\neq 0$, we conclude that as $y_{t_2}$ and $f_{21}(t_1, t_2)$ \ngo to zero, $f_{12}(t_1, t_2)$ goes to zero, contradicting \\eqref{pd5_nv_condition_equation_again}. It remains to show that $y_{t_2} \\neq 0$. Consider the equation $\\mathrm{F}_{x_{t_2}} =0$. 
If $y_{t_2} =0$ then $f_{40}(t_1, t_2)$ would go to zero as $x_{t_2}$ goes to zero, \ncontradicting \\eqref{pd5_nv_condition_equation_again}. Hence \\eqref{eval_f1_pd5_again} has no solutions. \\qed \\\\\n\n\\hf \\hf Now we return to the proof of \\eqref{a1_pa2_d5}. First of all we observe that \n\\eqref{a3_interesct_a1_pa2_is_empty_set_equation} and \n\\eqref{a1_pa2_intersect_pd4_empty} imply that \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2 \\cap \\Delta \\hat{\\mathcal{D}}_4 & = \\varnothing. \\label{a1_pa2_intersect_hat_d4_is_empty}\n\\end{align}\nHence, the lhs of \\eqref{a1_pa2_d5} is a subset of its \nrhs. This is because \n\\begin{align}\n \\overline{\\hat{\\mathcal{D}}}_4 & = \\hat{\\mathcal{D}}_4 \\cup \\overline{\\hat{\\mathcal{D}}}_5 \\qquad \\textnormal{and} \\qquad \\overline{\\hat{\\mathcal{D}}^{\\#\\flat}_5} = \\overline{\\hat{\\mathcal{D}}}_5. \\label{lot_of_closure_claims}\n\\end{align}\nThe first equality follows by applying Lemma \\ref{tube_lemma} twice to Lemma \\ref{cl} \\eqref{D4cl}, while the second is covered by Lemma \\ref{Dk_sharp_closure}. To show that the \nrhs of \\eqref{a1_pa2_d5} is a subset of its lhs, \nit suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2}: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0\\}\n& \\supset \\Delta \\hat{\\mathcal{D}}_5^{\\#\\flat}. \\label{d5_hat+sharp_is_subset_of_a1_pa2}\n\\end{align}\n\\begin{claim}\n\\label{a1_pa2_closure_and_also_pa3}\nLet $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\hat{\\mathcal{D}}_5^{\\#\\flat}$. 
Then there exists a solution \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_2$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \\label{a1_pa2_closure_d5_sharp}\n\\end{align}\n\\end{claim}\n\\noindent It is easy to see that claim \\ref{a1_pa2_closure_and_also_pa3} \nimplies \\eqref{d5_hat+sharp_is_subset_of_a1_pa2}.\\\\ \n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_a4_closure_simultaneous}.\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_2$, we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{20}(t_1, t_2) = f_{11}(t_1, t_2)=0.\\]\nThe functional equation \\eqref{a1_pa2_closure_d5_sharp} has a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. 
\\label{eval_f1_d5_sharp} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &:= {\\textstyle \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\frac{f_{30}(t_1, t_2)}{6} x_{t_2}^3 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots}.\n\\end{align*}\nLet us define \n\\begin{align}\n\\beta_1 &:= f_{21}^2 - f_{12} f_{30}, \n~~\\beta_2^{\\pm}:= -\\frac{f_{03}}{12} -\\frac{f_{21}^3}{6 f_{30}^2} + \\frac{f_{12} f_{21}}{4 f_{30}} \\pm\n\\sqrt{\\beta_1} \\Big( \\frac{f_{21}^2}{6 f_{30}^2} - \\frac{f_{12}}{6 f_{30}} \\Big) \\qquad \\textnormal{and} \\nonumber \\\\\n\\beta_3 & := f_{30}^2 f_{03}^2-6 f_{03} f_{12} f_{21} f_{30} + 4 f_{12}^3 f_{30} + 4 f_{03} f_{21}^3 - 3 f_{12}^2 f_{21}^2 = 144 f_{30}^2 \\beta_{2}^{+} \\beta_2^{-}. \n\\label{beta_3_beta_2_plus_beta_2_minus}\n\\end{align}\nDefine $\\beta_k(t_1, t_2)$ similarly with $f_{ij}$ replaced by $f_{ij}(t_1, t_2)$. \nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\hat{\\mathcal{D}}_5$, the cubic \n\\[ \\Phi(\\theta):= {\\textstyle \\frac{f_{30}}{6} \\theta^3 + \\frac{f_{21}}{2} \\theta^2+ \n\\frac{f_{12}}{2} \\theta + \\frac{f_{03}}{6}} \\]\nhas a repeated root, \nbut not all three roots are the same. Hence we conclude that \n\\begin{align}\n\\beta_3 &=0, ~~\\beta_1 \\neq 0 ~~\\textnormal{and} ~~f_{30} \\neq 0. \n\\end{align}\nThe last inequality follows from the fact that $(\\tilde{f}, \\tilde{p}, l_{\\p})$ belongs to $\\hat{\\mathcal{D}}_5^{\\#\\flat}$ as opposed to\n$\\hat{\\mathcal{D}}_5$.\nWe will now construct solutions to \\eqref{eval_f1_d5_sharp}. 
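Before doing so, the characterization just stated can be sanity-checked on explicit cubics: $\beta_3$ vanishes precisely when $\Phi$ has a repeated root, while $\beta_1 \neq 0$ separates a double root from a triple one. A minimal numeric sketch (the sample cubics and helper functions are ours, not part of the argument):

```python
# Numeric sanity check: beta_3 detects a repeated root of
# Phi(th) = (f30/6) th^3 + (f21/2) th^2 + (f12/2) th + f03/6,
# and beta_1 = 0 on the repeated-root locus forces a triple root.
def beta1(f30, f21, f12, f03):
    return f21**2 - f12*f30

def beta3(f30, f21, f12, f03):
    return (f30**2*f03**2 - 6*f03*f12*f21*f30 + 4*f12**3*f30
            + 4*f03*f21**3 - 3*f12**2*f21**2)

double = (6, 0, -6, 12)   # Phi = (th - 1)^2 (th + 2): double, not triple, root
triple = (6, -6, 6, -6)   # Phi = (th - 1)^3: triple root
simple = (6, 0, -2, 0)    # Phi = th (th - 1)(th + 1): three distinct roots

assert beta3(*double) == 0 and beta1(*double) != 0
assert beta3(*triple) == 0 and beta1(*triple) == 0
assert beta3(*simple) != 0
```

Only the vanishing behaviour of these quantities is used in the argument below.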
\nCorresponding to each branch of $\\sqrt{\\beta_1(t_1, t_2)}$, the solutions are: \n\\begin{align}\nx_{t_2} &= \\frac{-f_{21}(t_1, t_2) + \\sqrt{\\beta_1(t_1, t_2)}}{f_{30}(t_1, t_2)} y_{t_2} + \\mathrm{O}(y_{t_2}^2), \n~~f_{02}(t_1, t_2) = \\mathrm{O}(y_{t_2}), ~~\\beta_2^{+}(t_1, t_2) = \\mathrm{O}(y_{t_2}). \\label{x_y_beta_soln_d5} \n\\end{align}\nLet us explain how we obtained these solutions. To obtain the value of $x_{t_2}$ we used $\\mathrm{F}_{x_{t_2}} =0$. To obtain the value of \n$f_{02}(t_1, t_2)$ we used $\\mathrm{F}_{y_{t_2}} =0$ and the value of $x_{t_2}$ from the previous equation. Finally we used the fact that \n$2\\mathrm{F} - y_{t_2}\\mathrm{F}_{y_{t_2}} =0$ and the value of $x_{t_2}$ to obtain $\\beta_2^{+}(t_1, t_2)$. We get a similar solution \nfor the other branch of $\\sqrt{\\beta_1(t_1, t_2)}$. \nBy \\eqref{beta_3_beta_2_plus_beta_2_minus} and \\eqref{x_y_beta_soln_d5}, we conclude that \nas $y_{t_2}$ goes to zero, $f_{02}(t_1, t_2)$ and \n$\\beta_3(t_1, t_2)$ go to zero. Hence, the solutions in \\eqref{x_y_beta_soln_d5} lie in \n$\\overline{\\mathcal{A}}_1\\circ\\overline{\\mathcal{P} \\mathcal{A}}_2$ and converge to a point \n$(\\tilde{f}, \\tilde{p}, l_{\\p})$ in $\\hat{\\mathcal{D}}^{\\#\\flat}_5$. 
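The leading coefficient of $x_{t_2}$ in \eqref{x_y_beta_soln_d5} is simply a root of the leading quadratic of $\mathrm{F}_{x_{t_2}} = 0$ in the slope $\mathrm{L} = x_{t_2}/y_{t_2}$; a short symbolic check (a sketch, assuming sympy is available):

```python
import sympy as sp

f30, f21, f12, L = sp.symbols('f30 f21 f12 L')

# Substituting x_{t_2} = L y_{t_2} in F_{x_{t_2}} = 0 gives, at leading
# order in y_{t_2}, the quadratic (f30/2) L^2 + f21 L + f12/2 = 0.
quadratic = f30/2*L**2 + f21*L + f12/2
beta1 = f21**2 - f12*f30  # as defined in the text

# Both branches of the slope in (x_y_beta_soln_d5) satisfy the quadratic:
for sign in (+1, -1):
    slope = (-f21 + sign*sp.sqrt(beta1))/f30
    assert sp.simplify(quadratic.subs(L, slope)) == 0
```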
\\qed \\\\\n\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa3_cl}):} By Lemma \\ref{closure_a1_pak_f02_not_zero} ($k=3$), if we show that\n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0 \\} &= \\Delta \\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}} \\cup \n\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_6 \\label{a1_pa3_pd5_dual_eqn}\n\\end{align}\nthen we have\n\\bgd\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3\\}\\subseteq \\Delta\\overline{\\mathcal{P}\\mathcal{A}}_5 \\cup \\Delta\\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}\\cup\\Delta\\overline{\\mathcal{P} \\mathcal{D}}_6.\n\\edd\nBy Lemma \\ref{cl} \\eqref{A5cl}, we conclude that \n\\bgd\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3\\}\\subseteq \\Delta\\overline{\\mathcal{P}\\mathcal{A}}_5 \\cup \\Delta\\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}.\n\\edd\nOn the other hand, by \\eqref{pak2_is_subset_of_a1_and_pak} applied with $k=3$ and \\eqref{a1_pa3_pd5_dual_eqn} we conclude that\n\\bgd\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3\\}\\supseteq \\Delta\\overline{\\mathcal{P}\\mathcal{A}}_5 \\cup \\Delta\\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}.\n\\edd\nThus, it suffices to prove \\eqref{a1_pa3_pd5_dual_eqn}. Note that the intersection of the lhs of \\eqref{a1_pa3_pd5_dual_eqn} with $\\hat{\\mathcal{D}}_4$ is empty. This follows from \n\\eqref{a1_pa2_intersect_hat_d4_is_empty} and the fact that $\\mathcal{P} \\mathcal{A}_3$ is a subset of $\\overline{\\mathcal{P} \\mathcal{A}}_2$ \n(see Lemma \\ref{cl}). 
\nEquation \\eqref{lot_of_closure_claims} \nand Lemma \\ref{Dk_sharp_closure}, statement \\ref{d5_pa3_zero} now imply that the \nlhs of \\eqref{a1_pa3_pd5_dual_eqn} is a subset of $\\Delta \\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_5$.\nHowever, the intersection of the lhs of \\eqref{a1_pa3_pd5_dual_eqn} with $\\Delta \\mathcal{P} \\mathcal{D}_5$ is also empty by \\eqref{a1_pa3_intersect_pd5_empty}. \nBy Lemma \\ref{cl}, statement \\ref{D5cl} and Lemma \\ref{Dk_sharp_closure}, statement \\ref{pe6_subset_of_cl_pd5_dual}, we have that \n\\begin{align}\n\\overline{\\mathcal{P} \\mathcal{D}}_5 & = \\mathcal{P} \\mathcal{D}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_6 \\qquad \\textnormal{and} \\qquad \\overline{\\mathcal{P} \\mathcal{E}}_6 \\subset \\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}}. \\label{lot_of_closure_claims_again}\n\\end{align}\nHence the lhs of \\eqref{a1_pa3_pd5_dual_eqn} is a subset of its rhs. \nTo show the converse, we will first simultaneously prove the following three statements: \n\\begin{align}\n \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 & \\supset \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}, \\label{pd5_subset_a1_pa3}\\\\ \n \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 \\cap \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee} & = \\varnothing, \\label{pd5_intersect_a1_pd4_is_empty}\\\\\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 \\cap \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee} & = \\varnothing. 
\\label{pd5_intersect_a1_pa4_is_empty}\n\\end{align}\nAnd then we will prove the following two statements simultaneously:\n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 & \\supset \\Delta \\mathcal{P} \\mathcal{D}_6, \\label{a1_pa3_is_supsetof_pd6} \\\\ \n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 \\cap \\Delta \\mathcal{P} \\mathcal{D}_6 & = \\varnothing. \\label{a1_pa4_intersect_pd6_is_empty} \n\\end{align}\nNote that \\eqref{pd5_subset_a1_pa3} and \\eqref{a1_pa3_is_supsetof_pd6} imply that the rhs of \n\\eqref{a1_pa3_pd5_dual_eqn} is a subset of its lhs, since $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3$ is a closed set. \n\\begin{claim}\n\\label{claim_pd5_subset_of_a1_pa3}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}$.\nThen there exists a solution \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_3$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \\label{pd5_dual_limit_a1_pa3_functional_eqn}\n\\end{align}\nMoreover, any such solution \nlies in $\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, \\label{psi_pd4_neq_0_a1_pa3_functional} \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0. 
\\label{psi_pa4_neq_0_a1_pa3_functional_new}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4$ or $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4$. \n\\end{claim}\n\\noindent Note that claim \\ref{claim_pd5_subset_of_a1_pa3} implies \\eqref{pd5_subset_a1_pa3}, \\eqref{pd5_intersect_a1_pd4_is_empty} and \n\\eqref{pd5_intersect_a1_pa4_is_empty} simultaneously. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_a4_closure_simultaneous}, except for one difference:\nwe take $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)})$ to be a point in $\\overline{\\mathcal{P} \\mathcal{A}}_3$.\nHence \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nThe functional equation \\eqref{pd5_dual_limit_a1_pa3_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. 
\\label{eval_f1_d5_dual} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &:= {\\textstyle \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 +\\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots}.\n\\end{align*}\nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}$, we claim that \n\\begin{align}\nf_{21} & \\neq 0, \\qquad \\beta_1 := \\frac{(f_{12} \\partial_x - 2 f_{21} \\partial_y)^3 f}{2 f_{21}^2} = 3 f_{12}^2 -4f_{21} f_{03} =0 \n\\qquad \\textnormal{and} \\label{pd5_dual_nv1} \\\\\n\\beta_2&:= (f_{12} \\partial_x - 2 f_{21} \\partial_y)^4 f = \nf_{12}^4 f_{40} - 8f_{12}^3 f_{21} f_{31} + 24 f_{12}^2 f_{21}^2 f_{22}-32 f_{12} f_{21}^3 f_{13} + 16 f_{21}^4 f_{04} \\neq 0. \n\\label{pd5_dual_nv_2}\n\\end{align}\nLet us justify this. Since $(\\tilde{f}, \\tilde{p}) \\in \\mathcal{D}_5$ there exists a non-zero vector $u = m_1v + m_2w$ such that \n\\begin{align}\n\\nabla^3 f|_{\\tilde{p}} (u,u, v) & = m_1^2 f_{30} + 2 m_1 m_2 f_{21} + m_2^2 f_{12} =0, \\label{nabla_cube_f_1} \\\\\n\\nabla^3 f|_{\\tilde{p}} (u,u, w) &= m_1^2 f_{21} + 2 m_1 m_2 f_{12} + m_2^2 f_{03} =0, \\label{nabla_cube_f_2} \\\\\n\\nabla^4 f|_{\\tilde{p}} (u,u,u,u) & \\neq 0. \\label{nabla_fourth_nv}\n\\end{align}\nSince $(\\tilde{f}, l_{\\p}) \\in \\mathcal{P} \\mathcal{D}_5^{\\vee}$, we conclude \\textit{by definition} that \n\\begin{align}\nf_{30} &=0, \\qquad f_{21} \\neq 0, \\qquad m_2 \\neq 0. \\label{m_pd5_dual} \n\\end{align}\n(If $m_2 =0$ then $f_{21}$ would be zero.) \nEquations \\eqref{m_pd5_dual} and \\eqref{nabla_cube_f_1} now imply that \n\\begin{align}\nm_1\/ m_2 &= - f_{12}\/(2 f_{21}). \\label{m1_m2_value}\n\\end{align}\nEquations \\eqref{m1_m2_value} and \\eqref{nabla_cube_f_2} imply \\eqref{pd5_dual_nv1}. 
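The expanded forms of $\beta_1$ and $\beta_2$ in \eqref{pd5_dual_nv1} and \eqref{pd5_dual_nv_2} can be verified symbolically by writing $(f_{12}\partial_x - 2 f_{21} \partial_y)^k f$ in terms of the Taylor coefficients $f_{ij}$; a sketch, assuming sympy is available:

```python
import sympy as sp
from math import comb

f03, f12, f21, f30 = sp.symbols('f03 f12 f21 f30')
f40, f31, f22, f13, f04 = sp.symbols('f40 f31 f22 f13 f04')
fij = {(3, 0): f30, (2, 1): f21, (1, 2): f12, (0, 3): f03,
       (4, 0): f40, (3, 1): f31, (2, 2): f22, (1, 3): f13, (0, 4): f04}

def directional(k, a, b):
    # (a d_x + b d_y)^k f at the point, written in the Taylor coefficients f_ij
    return sum(comb(k, i) * a**i * b**(k - i) * fij[(i, k - i)]
               for i in range(k + 1))

a, b = f12, -2*f21  # the direction appearing in (pd5_dual_nv1)

# beta_1: third derivative, using f30 = 0 on P D_5^dual
b1 = sp.expand(directional(3, a, b).subs(f30, 0) / (2*f21**2))
assert b1 == sp.expand(3*f12**2 - 4*f21*f03)

# beta_2: fourth derivative
b2 = sp.expand(directional(4, a, b))
assert b2 == sp.expand(f12**4*f40 - 8*f12**3*f21*f31 + 24*f12**2*f21**2*f22
                       - 32*f12*f21**3*f13 + 16*f21**4*f04)
```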
\nFinally, \\eqref{nabla_fourth_nv} implies \\eqref{pd5_dual_nv_2}.\\\\\n\\hf \\hf We claim that solutions to \\eqref{eval_f1_d5_dual} are given by \n\\begin{align}\nx_{t_2} = -\\frac{f_{12}(t_1, t_2)}{2 f_{21}(t_1, t_2)} y_{t_2} + \\mathrm{O}(y_{t_2}^2), \\qquad \n\\beta_1(t_1, t_2) &= -\\frac{\\beta_2(t_1, t_2)}{8 f_{21}(t_1, t_2)^3} y_{t_2} + \\mathrm{O}(y_{t_2}^2) \\qquad \\textnormal{and} \\nonumber \\\\\nf_{02}(t_1, t_2) &= -\\frac{\\beta_2(t_1, t_2)}{192 f_{21}(t_1, t_2)^4} y_{t_2}^2 + \\mathrm{O}(y_{t_2}^3). \\label{d5_dual_multilicity}\n\\end{align}\nLet us explain how we obtained these solutions. Assuming $y_{t_2} \\neq 0$ (to be justified at the end) define $~\\mathrm{L} := \\frac{\\xt}{y_{t_2}}.$\nUsing $x_{t_2} = \\mathrm{L} y_{t_2}$ with $y_{t_2}\\neq 0$ in the equation $\\mathrm{F}_{x_{t_2}}=0$ we can solve for $\\mathrm{L}$ via the Implicit Function Theorem. That gives us \n\\begin{equation}\n\\mathrm{L} = \\textstyle{\\frac{-f_{12}(t_1, t_2)}{2 f_{21}(t_1, t_2)}}+ \\Big(-\\textstyle{\\frac{f_{13}(t_1, t_2)}{6 f_{21}(t_1, t_2)}\n+\\frac{f_{12}(t_1, t_2) f_{22}(t_1, t_2)}{4 f_{21}(t_1, t_2)^2} - \\frac{f_{12}(t_1, t_2)^2 f_{31}(t_1, t_2)}{8 f_{21}(t_1, t_2)^3} + \n\\frac{f_{12}(t_1, t_2)^3 f_{40}(t_1, t_2)}{48 f_{21}(t_1, t_2)^4}} \\Big) y_{t_2} + \\mathrm{O}(y_{t_2}^2). \\label{L_value_beta}\n\\end{equation}\nNext, using the equation $2\\mathrm{F} - y_{t_2} \\mathrm{F}_{y_{t_2}} =0$ and the fact that \n$\\xt = \\mathrm{L} y_{t_2}$ and \\eqref{L_value_beta}, \nwe obtain the expression for $\\beta_1(t_1, t_2)$ in \\eqref{d5_dual_multilicity}. \nNext, observe that \n\\begin{align}\nf_{30}(t_1, t_2) & = \\frac{3 f_{12}(t_1, t_2)^2- \\beta_1(t_1, t_2)}{4 f_{21}(t_1, t_2)}. 
\\label{f30_beta1}\n\\end{align}\nFinally, using the equation $\\mathrm{F}_{y_{t_2}}=0$, the fact that \n$\\xt = \\mathrm{L} y_{t_2}$, \\eqref{L_value_beta}, the expression for $\\beta_1(t_1, t_2)$ in \\eqref{d5_dual_multilicity} and \n\\eqref{f30_beta1}, \nwe obtain the expression for $f_{02}(t_1, t_2)$ in \\eqref{d5_dual_multilicity}. \nIt is now easy to see that \\eqref{psi_pd4_neq_0_a1_pa3_functional} holds. It remains to show that $y_{t_2} \\neq 0$. \nTo see why that is so, suppose $y_{t_2}=0$. Then using the fact that $\\mathrm{F}_{y_{t_2}} =0$, we conclude that \n$f_{21}(t_1, t_2) = \\mathrm{O}(\\xt)$. Hence $f_{21}(t_1, t_2)$ goes to zero as $\\xt$ goes to zero, contradicting \\eqref{pd5_dual_nv1}.\\\\ \n\\hf \\hf Finally, \\eqref{psi_pa4_neq_0_a1_pa3_functional_new} is true because \nthe section $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_4}$ does not vanish on $\\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}$. \\qed \n\n\\begin{cor}\n\\label{psi_pd4_section_vanishes_order_two_around_dual_d5}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $\\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}\\cap \\mathcal{Q}^{-1}(0)$. 
\nThen the section $$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P}\\mathcal{A}}_3 \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{D}_4}) \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $2$.\n\\end{cor}\n\\noindent \\textbf{Proof: } This follows from \\eqref{d5_dual_multilicity}, by observing that at $\\Delta \\mathcal{P} \\mathcal{D}_5^{\\vee}$ the sections \ninduced by $f_{02}$ and $\\beta_1$ (the corresponding functionals) \nare transverse to the zero set over $\\overline{\\mathcal{P} \\mathcal{A}}_3$,\\footnote{To see why, just take the partial derivatives with respect to \n$f_{02}$ and $f_{03}$. Since $f_{21} \\neq 0$, transversality follows.} \nand that $\\mathcal{Q}$ is generic. \\qed\n\n\\begin{claim}\n\\label{claim_a1_pa4_intersect_pd6_is_empty}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_6$.\nThen there exist solutions \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_3$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\ \n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, ~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). 
\\label{pd6_intersect_a1_pa4_is_empty_functional_eqn}\n\\end{align}\nMoreover, any such solution sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ lies in \n$\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0. \\label{psi_pa4_neq_0_a1_pa3_functional}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4$. \n\\end{claim}\n\\noindent It is easy to see that claim \\ref{claim_a1_pa4_intersect_pd6_is_empty} implies \\eqref{a1_pa3_is_supsetof_pd6} and \n\\eqref{a1_pa4_intersect_pd6_is_empty} simultaneously. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}. \nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_3$ we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nFurthermore, since $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_6$, we conclude that \n\\begin{align}\nf_{21}, ~f_{40} &=0 ~~\\textnormal{and} ~~f_{12}, ~\\mathcal{D}^{f}_7 \\neq 0. 
\\label{pd6_vanish_non_vanish}\n\\end{align}\nThe functional equation \\eqref{pd6_intersect_a1_pa4_is_empty_functional_eqn} has a solution if and only if the following has a numerical solution: \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pd6} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &:= {\\textstyle \\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3} + \\ldots.\n\\end{align*}\nWe will now construct solutions to \\eqref{eval_f1_pd6}. \nLet us define $\\mathrm{G}:= \\mathrm{F}-\\frac{y_{t_2} \\mathrm{F}_{y_{t_2}}}{2}-\\frac{x_{t_2} \\mathrm{F}_{x_{t_2}}}{4}$. Then\n\\begin{align*}\n\\mathrm{G}&= -\\frac{f_{03}(t_1, t_2)}{12} y_{t_2}^3 + y_{t_2}^3\\mathrm{E}_1(y_{t_2})\\\\\n & ~~~ -\\xt \\big({\\textstyle \\frac{f_{12}(t_1, t_2)}{8} y_{t_2}^2 +\\frac{f_{31}(t_1, t_2)}{8} \\xt^2 y_{t_2} +\\frac{f_{50}(t_1, t_2)}{96} \\xt^4 + y_{t_2}^2 \\mathrm{E}_2(\\xt, y_{t_2})+\\xt^2 y_{t_2} \\mathrm{E}_3(\\xt) + \n\\xt^4 \\mathrm{E}_4(\\xt)} \\big),\n\\end{align*}\nwhere each $\\mathrm{E}_i$ is a holomorphic function vanishing at the origin. \\\\\n\\hf \\hf We now make a change of variables using a holomorphic function $\\mathrm{C}(y_{t_2})$ such that the substitution $x_{t_2} = \\hat{x}_{t_2} + \\mathrm{C}(y_{t_2})$ kills the coefficient of $y_{t_2}^n$ in $\\mathrm{G}$ for all $n$. We may then make a change of coordinates $y_{t_2} = \\hat{y}_{t_2} + \\mathrm{B}(\\hat{x}_{t_2})$ so that the coefficient of $\\hat{x}_{t_2}\\hat{y}_{t_2}^n$ in $\\mathrm{G}$ is killed for all $n$. The existence of these functions follows from an argument identical to the one \nin \\cite{BM13}. 
After these changes, $\\mathrm{G}$ is given by \n\\begin{align*}\n\\mathrm{G} &= {\\textstyle -\\frac{f_{12}(t_1, t_2)}{8} \\hat{x}_{t_2} \\Big( \\hat{y}_{t_2}^2 + \\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{60 f_{12}(t_1, t_2)} \\hat{x}_{t_2}^4 + \\mathrm{O}(\\hat{x}_{t_2}^5)\\Big)}. \n\\end{align*}\nHence we are solving, in terms of the new variables $\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$, for \n\\begin{align}\n \\mathrm{G}&=0, ~~\\mathrm{F}_{x_{t_2}} =0, ~~\\mathrm{F}_{y_{t_2}} =0. \\label{G_eq_0_Fx_eq_0_Fy_eq_0_D6}\n\\end{align}\nNote that $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$ are the partials with respect to $x_{t_2}$ and $y_{t_2}$ \nexpressed in the new coordinates; they are not the partials with respect to $\\hat{x}_{t_2}$ and $\\hat{y}_{t_2}$. \nLet us write $\\mathrm{C}(y_{t_2})$ and $\\mathrm{B}(\\hat{x}_{t_2})$ to second order: \n\\begin{align*}\n\\mathrm{C}(y_{t_2}) = -& {\\textstyle \\frac{2 f_{03}(t_1, t_2)}{3 f_{12}(t_1, t_2)} y_{t_2}}\\\\ \n + & \\big( {\\textstyle -\\frac{f_{04}(t_1, t_2)}{3 f_{12}(t_1, t_2)} \n+ \\frac{2 f_{03}(t_1, t_2) f_{13} (t_1, t_2) }{3 f_{12}(t_1, t_2)^2 } - \n\\frac{4 f_{03}(t_1, t_2)^2 f_{22} (t_1, t_2) }{9 f_{12}(t_1, t_2)^3 } + \\frac{8 f_{03}(t_1, t_2)^3 f_{31} (t_1, t_2) }{81 f_{12}(t_1, t_2)^4}} \\big) y_{t_2}^2\\\\ \n+ & \\mathrm{O}(y_{t_2}^3), \\\\ \n\\mathrm{B}(\\hat{x}_{t_2}) = - & {\\textstyle \\frac{f_{31}(t_1, t_2)}{6 f_{12}(t_1, t_2)} \\hat{x}_{t_2}^2 + b_3 \\hat{x}_{t_2}^3 + \\mathrm{O}(\\hat{x}_{t_2}^4)}.\n\\end{align*}\nWe will not need the exact value of $b_3$, although it does appear in the \nsubsequent calculation. 
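As a consistency check, the leading coefficient of $\mathrm{C}(y_{t_2})$ can be recovered directly: the substitution must kill the $y_{t_2}^3$ coefficient of $\mathrm{G}$. A sketch, assuming sympy is available (the truncation of $\mathrm{G}$ below is ours):

```python
import sympy as sp

f03, f12, C1, y = sp.symbols('f03 f12 C1 y')

# Keep only the terms of G that contribute to the y^3 coefficient once
# x_{t_2} = C1*y + O(y^2) is substituted: -f03/12 y^3 - x*(f12/8)*y^2 + ...
G_leading = -f03/12*y**3 - (C1*y)*(f12/8)*y**2

coeff = sp.expand(G_leading).coeff(y, 3)
sol = sp.solve(coeff, C1)

# The unique solution is the first coefficient of C(y) displayed above.
assert sp.simplify(sol[0] + 2*f03/(3*f12)) == 0
```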
\nThe solutions to \\eqref{G_eq_0_Fx_eq_0_Fy_eq_0_D6} are given by \n\\begin{align}\n\\hat{y}_{t_2} &= \\alpha \\hat{x}_{t_2}^2 +\\mathrm{O}(\\hat{x}_{t_2}^3) \\label{pd6_hat_y}\\\\\nf_{21}(t_1, t_2) &=\\Big({\\textstyle - 2 \\alpha f_{02}(t_1, t_2) + \\frac{f_{02}(t_1, t_2) f_{31}(t_1, t_2)}{3 f_{12}(t_1, t_2)}} \\Big) + \\Big[{\\textstyle -2 b_3 f_{02}(t_1, t_2) -\\frac{8 \\alpha^2 f_{02}(t_1, t_2) f_{03}(t_1, t_2)}{3 f_{12}(t_1, t_2)}} \\nonumber\\\\\n& \\,{\\textstyle -2\\alpha f_{12}(t_1, t_2)+\\frac{8 \\alpha f_{02}(t_1, t_2) f_{03}(t_1, t_2) f_{31}(t_1, t_2)}{9 f_{12}(t_1, t_2)^2} -\\frac{2 f_{02}(t_1, t_2) f_{03}(t_1, t_2) f_{31}(t_1, t_2)^2}{27 f_{12}(t_1, t_2)^3}} \\Big] \\hat{x}_{t_2} \\nonumber\\\\\n& + \\mathrm{O}(\\hat{x}_{t_2}^2) \\label{pd6_f21}\\\\\nf_{40}(t_1, t_2) &= \\Big( 12 \\alpha^2 f_{02}(t_1, t_2)- \\frac{4 \\alpha f_{02}(t_1, t_2) f_{31}(t_1, t_2)}{f_{12}(t_1, t_2)} + \\frac{f_{02}(t_1, t_2) f_{31}(t_1, t_2)^2}{3 f_{12}(t_1, t_2)^2} \\Big) \\nonumber \\\\\n &+ \\big({\\textstyle 24 \\alpha b_3 f_{02}(t_1, t_2)+\\frac{32 \\alpha^3 f_{02}(t_1, t_2) f_{03}(t_1, t_2)}{f_{12}(t_1, t_2)} + 24 \\alpha^2 f_{12}(t_1, t_2)-4 \\alpha f_{31}(t_1, t_2)} \\nonumber \\\\\n &{\\textstyle -\\frac{16 \\alpha^2 f_{02}(t_1, t_2) f_{03}(t_1, t_2) f_{31}(t_1, t_2)}{f_{12}(t_1, t_2)^2}-\\frac{4 b_3 f_{02}(t_1, t_2) f_{31}(t_1, t_2)}{f_{12}(t_1, t_2)}+ \\frac{8 \\alpha f_{02}(t_1, t_2) f_{03}(t_1, t_2) f_{31}(t_1, t_2)^2}{3 f_{12}(t_1, t_2)^3}} \\nonumber \\\\ \n & {\\textstyle - \\frac{4 f_{02}(t_1, t_2) f_{03}(t_1, t_2) f_{31}(t_1, t_2)^3}{27 f_{12}(t_1, t_2)^4}}\\big) \\hat{x}_{t_2} \n + \\mathrm{O}(\\hat{x}_{t_2}^2), \\label{pd6_f40}\\\\\n\\textnormal{where} ~~\\alpha &:= \\sqrt{-\\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{60 f_{12}(t_1, t_2)}} ~~\\textnormal{is a branch of the square root.} \\nonumber \n\\end{align}\nNote that each value of $\\alpha$ corresponds to a different solution. Let us now explain how we obtained these solutions. 
\nFirst of all we claim that $\\hat{x}_{t_2} \\neq 0$; we will justify that at the end. Assuming that, we obtain \n\\eqref{pd6_hat_y} from the fact that $\\mathrm{G}=0$. Next, we obtain \\eqref{pd6_f21} from \\eqref{pd6_hat_y} and using the fact that \n$\\mathrm{F}_{y_{t_2}} =0$. Finally, we obtain \\eqref{pd6_f40} from \\eqref{pd6_hat_y}, \\eqref{pd6_f21} and using the fact that \n$\\mathrm{F}_{x_{t_2}} =0$. \nObserve that equations \\eqref{pd6_hat_y}, \\eqref{pd6_f21} and \\eqref{pd6_f40} imply that \n\\begin{align}\nf_{02}(t_1, t_2)\\mathcal{A}^{f(t_1, t_2)}_4 &= \\textstyle{\\frac{1}{5}}\\mathcal{D}^{f(t_1, t_2)}_7 f_{12}(t_1, t_2) \\hat{x}_{t_2}^2 + \\xt^2 \\mathrm{E}(\\xt, f_{02}(t_1, t_2)), \\label{pd6_f02_pa4}\n\\end{align}\nwhere $\\mathrm{E}(0,0) =0$.\nHence, if $f_{02}(t_1, t_2)$ and $\\hat{x}_{t_2}$ are small and non-zero, then $f_{02}(t_1, t_2)\\mathcal{A}^{f(t_1, t_2)}_4$ is non-zero. \nThus \\eqref{psi_pa4_neq_0_a1_pa3_functional} holds. \\\\\n\\hf \\hf It remains to show that $\\hat{x}_{t_2} \\neq 0$. If $\\hat{x}_{t_2} =0$, then \nusing the fact that $\\mathrm{F}_{x_{t_2}} =0$ we get \n\\begin{align*}\nf_{12}(t_1, t_2) & = \\frac{4 f_{03}(t_1, t_2) f_{21}(t_1, t_2)}{3 f_{12}(t_1, t_2)} + \\mathrm{O}(\\hat{y}_{t_2}).\n\\end{align*}\nHence $f_{12}(t_1, t_2)$ goes to zero as $f_{21}(t_1, t_2)$ and $\\hat{y}_{t_2}$ go to zero, which \ncontradicts \\eqref{pd6_vanish_non_vanish}. \\qed \\\\\n\n\\noindent This proves Lemma \\ref{cl_two_pt} (\\ref{a1_pa3_cl}). Before proceeding further, note that \n\\eqref{a1_pa3_intersect_pd5_empty} and \\eqref{a1_pa4_intersect_pd6_is_empty} imply that \n\\begin{align}\n\\Delta \\PP \\D_7^{s} & \\subset \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7. 
\\label{one_a1_one_pa4_f02_zero_f12_not_zero_is_pd7}\n\\end{align}\n\n \n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa4_cl}):} By Lemma \\ref{closure_a1_pdk_f12_not_zero} and \\eqref{pak2_is_subset_of_a1_and_pak} for $k=4$, it suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0 \\} &= \\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\label{a1_pa4_pd7s_pe6}\n\\end{align}\nBy the definition of $\\Delta \\PP \\D_7^{s}$, to prove \\eqref{a1_pa4_pd7s_pe6} it suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4& : \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0, ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) =0 \\} = \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6. \n\\label{one_a1_one_pa4_f02_zero_is_pe6}\n\\end{align}\nIt is clear that the lhs of \\eqref{one_a1_one_pa4_f02_zero_is_pe6} is a subset of its rhs. \nTo prove the converse, let us prove the following three facts simultaneously:\n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 & \\supset \\Delta \\mathcal{P} \\mathcal{E}_6, \\label{a1_pa4_is_supsetof_pe6} \\\\ \n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 \\cap \\Delta \\mathcal{P} \\mathcal{E}_6 & = \\varnothing, \\label{a1_pa5_intersect_pe6_is_empty} \\\\\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 \\cap \\Delta \\mathcal{P} \\mathcal{E}_6 & = \\varnothing. 
\\label{a1_pd4_intersect_pe6_is_empty} \n\\end{align}\nNote that since $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4$ is a closed set, \n\\eqref{a1_pa4_is_supsetof_pe6} implies that the rhs of \\eqref{one_a1_one_pa4_f02_zero_is_pe6} is a subset of its lhs. \nWe will need \\eqref{a1_pd4_intersect_pe6_is_empty} later; since it follows from the present setup, we prove it here. \n\n\\begin{claim}\n\\label{claim_a1_pa5_intersect_pe6_is_empty}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_6$.\nThen there exist solutions \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_3$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, \n~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \\label{pe6_intersect_a1_pa5_is_empty_functional_eqn}\n\\end{align}\nMoreover, any such solution sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ lies in \n$\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_5}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0, \\label{psi_pa5_neq_0_a1_pa4_functional} \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0. 
\\label{psi_pd4_neq_0_a1_pa4_functional}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5$ or \n$\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4$. \n\\end{claim}\n\\noindent It is easy to see that claim \\ref{claim_a1_pa5_intersect_pe6_is_empty} implies \\eqref{a1_pa4_is_supsetof_pe6}, \n\\eqref{a1_pa5_intersect_pe6_is_empty} and \\eqref{a1_pd4_intersect_pe6_is_empty} simultaneously. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}.\nSince $(\\tilde{f}(t_1, t_2) , l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_3$, we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nMoreover, since $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_6$, we conclude that \n\\begin{align}\nf_{21}, f_{12} &=0, ~~f_{03}, f_{40} \\neq 0. \\label{pe6_condition_v_and_nv}\n\\end{align}\nThe functional equation \\eqref{pe6_intersect_a1_pa5_is_empty_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, ~~\\mathrm{F}_{x_{t_2}} = 0, ~~\\mathrm{F}_{y_{t_2}} = 0, ~~f_{02}(t_1, t_2)\\mathcal{A}^{f(t_1, t_2)}_4 =0, \\quad (x_{t_2}, y_{t_2}) \\neq (0, 0) \n~~\\textnormal{(small)}. 
\\label{eval_f1_pe6} \n\\end{align}\nSince $f_{02}(t_1, t_2)\\mathcal{A}^{f(t_1, t_2)}_4 =0$ we conclude that \n$f_{02}(t_1, t_2) = \\frac{3 f_{21}(t_1, t_2)^2}{f_{40}(t_1, t_2)}$. \nHence \n\\begin{align*}\n\\mathrm{F} & = \\frac{3 f_{21}(t_1, t_2)^2}{2 f_{40}(t_1, t_2)} y_{t_2}^2 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\frac{f_{40}(t_1, t_2)}{24} x_{t_2}^4 \\\\ \n& + \\frac{\\mathcal{R}_{50}(x_{t_2})}{120} x_{t_2}^5+ \n\\frac{\\mathcal{R}_{31}(x_{t_2})}{6} x_{t_2}^3 y_{t_2} \n+\\frac{\\mathcal{R}_{22}(x_{t_2})}{4} x_{t_2}^2 y_{t_2}^2 \n+\\frac{\\mathcal{R}_{13}(x_{t_2}, y_{t_2})}{4} x_{t_2} y_{t_2}^2 \n+ \\frac{\\mathcal{R}_{04}(y_{t_2})}{24} y_{t_2}^4. \n\\end{align*}\nWe will now eliminate $f_{12}(t_1, t_2)$ and $f_{21}(t_1, t_2)$ from \\eqref{eval_f1_pe6}. \nFirst we make a simple observation: let \n\\begin{align*}\n\\mathrm{A}(\\theta) &:= \\mathrm{A}_0 + \\mathrm{A}_1 \\theta + \\mathrm{A}_2 \\theta^2, \\quad \\mathrm{B}(\\theta) := \\mathrm{B}_0 + \\mathrm{B}_1 \\theta, \n\\quad p_1 := -\\mathrm{A}_2 \\mathrm{B}_1^2 , \\quad p_2 := -\\mathrm{A}_2^2 \\mathrm{B}_0 + \\mathrm{A}_1 \\mathrm{A}_2 \\mathrm{B}_1 +\\mathrm{A}_2^2 \\mathrm{B}_1 \\theta. \n\\end{align*}\nThen $p_1 \\mathrm{A}(\\theta) + p_2\\mathrm{B}(\\theta)$ is independent of $\\theta$. 
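\nIndeed, a direct expansion (a routine check, recorded here for the reader's convenience) shows that the coefficients of $\\theta$ and $\\theta^2$ cancel identically, leaving \n\\begin{align*}\np_1 \\mathrm{A}(\\theta) + p_2 \\mathrm{B}(\\theta) &= \\mathrm{A}_1 \\mathrm{A}_2 \\mathrm{B}_0 \\mathrm{B}_1 - \\mathrm{A}_0 \\mathrm{A}_2 \\mathrm{B}_1^2 - \\mathrm{A}_2^2 \\mathrm{B}_0^2. \n\\end{align*}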
With this observation we will now proceed to define \n\\begin{align*}\n\\mathrm{G}_1:=\\mathrm{F} - x_{t_2} \\mathrm{F}_{x_{t_2}}, ~~\\mathrm{G}_2 := \\mathrm{F} - \\frac{y_{t_2} \\mathrm{F}_{y_{t_2}}}{2}, ~~ \\mathrm{G} := \n\\mathrm{P}_1(t_1, t_2) \\mathrm{G}_1 + \\mathrm{P}_2(t_1, t_2) \\mathrm{G}_2\n\\end{align*}\nwhere\n\\begin{align*}\n\\mathrm{P}_1 := - \\frac{3 x^4 y^4}{32 f_{40}}, \\qquad \\mathrm{P}_2 &:= \\frac{3 y^7 f_{03}}{16 f_{40}^2} + \\frac{9 x^2 y^5 f_{21}}{16 f_{40}^2} - \\frac{9 x^4 y^4}{32 f_{40}} + \\frac{3 y^8 \\mathcal{R}_{04}(y)}{32 f_{40}^2}\n+\\frac{3 x y^7 \\mathcal{R}_{13}(x,y)}{16 f_{40}^2} \\\\\n &-\\frac{3 x^3 y^5 \\mathcal{R}_{31}(x)}{16 f_{40}^2} - \\frac{3 x^5 y^4 \\mathcal{R}_{50}(x)}{160 f_{40}^2} \n +\\frac{3 y^9 \\mathcal{R}_{04}^{(1)}(y)}{64 f_{40}^2}+\\frac{3 x y^8 \\mathcal{R}_{13}^{(0,1)}(x,y)}{16 f_{40}^2}. \n\\end{align*}\nThe quantity $\\mathrm{P}_k(t_1, t_2)$ is similarly defined, with $f_{ij}$, $x$ and $y$ \nreplaced by $f_{ij}(t_1, t_2)$, \n$\\xt$ and $y_{t_2}$.\\\\ \n\\hf \\hf Note that $\\mathrm{G}_1$ and $\\mathrm{G}_2$ are independent of $f_{12}(t_1, t_2)$. \nSecondly, they are quadratic and linear in $f_{21}(t_1, t_2)$ respectively. \nHence, using our previous observation, $\\mathrm{G}$ is independent of $f_{12}(t_1, t_2)$ and $f_{21}(t_1, t_2)$. \n\\footnote{Replace $\\theta \\longrightarrow f_{21}(t_1, t_2)$, $\\mathrm{A}(\\theta) \\longrightarrow \\mathrm{G}_1$, $\\mathrm{B}(\\theta) \\longrightarrow \\mathrm{G}_2$, $p_1 \\longrightarrow \\mathrm{P}_1(t_1, t_2)$ and $p_2 \\longrightarrow \\mathrm{P}_2(t_1, t_2)$.}\\\\\n\\hf \\hf\nWe claim that $x_{t_2} \\neq 0$ and $y_{t_2} \\neq 0$; \nwe will justify that at the end. 
\nAssuming this claim we conclude that solving \\eqref{eval_f1_pe6} \nis equivalent to solving: \n\\begin{align}\n\\mathrm{G}=0, ~~\\mathrm{G}_2 =0, ~~\\mathrm{F} =0, \n\\quad (x_{t_2}, y_{t_2}) \\neq (0, 0) \n~~\\textnormal{(small).} \\label{eval_f1_pe6_modified}\n\\end{align}\nDefine $\\mathrm{L}:= \\xt^4 \/ y_{t_2}^3$. We will first solve for $\\mathrm{L}$ in terms of $\\xt$ and $y_{t_2}$ and then we will parametrize $(\\xt, y_{t_2})$. \nNotice that we can rewrite $\\mathrm{G}$ in terms of $\\xt$, $y_{t_2}$ and $\\mathrm{L}$ in such a way that the highest power of \n$\\xt$ is $3$; whenever there is an $\\xt^4$ we replace it with $\\mathrm{L} y_{t_2}^3$.\nHence \n\\begin{align*}\n\\mathrm{G} &= y_{t_2}^{10}\\Big(- \\frac{f_{03}(t_1, t_2)^2}{64 f_{40}(t_1, t_2)^2} +\\frac{f_{03}(t_1, t_2)}{64 f_{40}(t_1, t_2)} \\mathrm{L} + \\mathrm{E}(\\xt, y_{t_2}, \\mathrm{L}) \\Big)\n\\end{align*}\nwhere $\\mathrm{E}(0,0, \\mathrm{L}) =0$. Hence $\\mathrm{G} =0$ and $y_{t_2} \\neq 0$ implies that \n\\begin{align*}\n \\Phi (\\xt, y_{t_2}, \\mathrm{L}) &:= - \\frac{f_{03}(t_1, t_2)^2}{64 f_{40}(t_1, t_2)^2} +\\frac{f_{03}(t_1, t_2)}{64 f_{40}(t_1, t_2)} \\mathrm{L} + \\mathrm{E}(\\xt, y_{t_2}, \\mathrm{L}) =0.\n\\end{align*}\nHence, by the Implicit Function Theorem we conclude that \n\\begin{align*}\n\\mathrm{L}(\\xt, y_{t_2}) = \\frac{f_{03}(t_1, t_2)}{f_{40}(t_1, t_2)} + \\mathrm{E}_2(\\xt, y_{t_2}) \n\\end{align*}\nwhere $\\mathrm{E}_2(0, 0)$ is zero. Hence $(\\xt, y_{t_2})$ is parametrized by \n\\begin{align*}\ny_{t_2} &= u^4, \\qquad \\xt = \\alpha u^3 + \\mathrm{O}(u^4) \n\\end{align*}\nwhere $\\alpha:= \\sqrt[4]{\\frac{f_{03}(t_1, t_2)}{f_{40}(t_1, t_2)}}$, \na branch of the fourth root. 
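\nSubstituting back, one checks directly that this parametrization is consistent with the equation for $\\mathrm{L}$ to leading order: \n\\begin{align*}\n\\mathrm{L}(\\xt, y_{t_2}) = \\frac{\\xt^4}{y_{t_2}^3} = \\frac{(\\alpha u^3 + \\mathrm{O}(u^4))^4}{u^{12}} = \\alpha^4 + \\mathrm{O}(u) = \\frac{f_{03}(t_1, t_2)}{f_{40}(t_1, t_2)} + \\mathrm{O}(u). \n\\end{align*}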
Note that just one branch of the fourth root gives all the solutions; choosing another branch does not give us any more solutions.\\footnote{Observe that a neighbourhood of the origin for the curve $x^4 -y^3 =0$ is just one copy of $\\mathbb{C}$ parametrized by $x = u^3$ and $y = u^4$.} We have\n\\begin{eqnarray}\nf_{02}(t_1, t_2) & = & \\frac{f_{03}(t_1, t_2)}{12} u^4 + \\mathrm{O}(u^5) \\label{psi_pd4_neq_0_a1_pa4_number}\\\\\nf_{02}(t_1, t_2)^2 \\mathcal{A}^{f(t_1, t_2)}_5 & = & -\\frac{5 f_{03}(t_1, t_2)^2 f_{40}(t_1, t_2)}{18 \\alpha} u^5 + \\mathrm{O}(u^6) \\label{pe6_multiplicity}\n\\end{eqnarray}\nTo arrive at these, use $\\mathrm{G}_2 =0$ to get\n\\begin{align*}\nf_{21}(t_1, t_2) &= \\frac{f_{03}(t_1, t_2)}{3 \\alpha^2} u^2 + \\mathrm{O}(u^3).\n\\end{align*}\nWe then use the fact that $f_{02}(t_1, t_2) = \\frac{3 f_{21}(t_1, t_2)^2}{f_{40}(t_1, t_2)}$ to get \\eqref{psi_pd4_neq_0_a1_pa4_number}. Finally, using $\\mathrm{F} =0$ we get that \n\\begin{align*}\nf_{12}(t_1, t_2) &= -\\frac{2\\alpha^3 f_{40}(t_1, t_2)}{3} u + \\mathrm{O}(u^2).\n\\end{align*}\nPlugging all this in, we get \\eqref{pe6_multiplicity}. Equations \\eqref{psi_pd4_neq_0_a1_pa4_number} and \\eqref{pe6_multiplicity} imply that \\eqref{psi_pa5_neq_0_a1_pa4_functional} and \\eqref{psi_pd4_neq_0_a1_pa4_functional} hold respectively. \\\\\n\\hf \\hf It remains to show that $x_{t_2} \\neq 0$ and $y_{t_2} \\neq 0$. \nIf $y_{t_2} =0$ then, using the fact that $\\mathrm{F} =0$, we get \nthat \n\\begin{align*}\nf_{40}(t_1, t_2) &= -\\frac{\\xt \\mathcal{R}_{50}(\\xt)}{5}.\n\\end{align*}\nAs $\\xt$ goes to zero, $f_{40}(t_1, t_2)$ goes to zero, contradicting \n\\eqref{pe6_condition_v_and_nv}. 
\nSimilarly, if $\\xt=0$, then using the fact that \n$\\mathrm{F} - \\frac{y_{t_2} \\mathrm{F}_{y_{t_2}}}{2} =0$ we get that \n\\begin{align*}\nf_{03}(t_1, t_2) &= -\\frac{2 y_{t_2} \\mathcal{R}_{04}(y_{t_2}) + \ny_{t_2}^2 \\mathcal{R}_{04}^{(1)}(y_{t_2})}{4}.\n\\end{align*}\nAs $y_{t_2}$ goes to zero, $f_{03}(t_1, t_2)$ goes to zero, contradicting \n\\eqref{pe6_condition_v_and_nv}. \n\n\\begin{cor}\n\\label{a1_pa4_mult_is_5_around_pe6}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{E}_{6}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_{6} \\cap \\mathcal{Q}^{-1}(0)$. \nThen the section $$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{5}} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_{4} \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{A}_{5}}) \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $5$.\n\\end{cor}\n\\noindent \\textbf{Proof: } Follows from the fact that the sections \ninduced by $f_{02}$, $f_{21}$ and $f_{12}$ (the corresponding functionals) \nare transverse to the zero set over $\\Delta \\overline{\\mathcal{P} \\mathcal{A}}_3$,\\footnote{To see why, just take the partial derivatives with respect to \n$f_{02}$, $f_{21}$ and $f_{12}$.} the fact that $\\mathcal{Q}$ is generic and \\eqref{pe6_multiplicity}. \\qed \\\\\n\n\\noindent This completes the proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa4_cl}). 
\\\\\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa5_cl}):} We have to show that \n\\bgd\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5\\} =\\Delta\\overline{\\mathcal{P}\\mathcal{A}}_7\\cup\\Delta\\overline{\\mathcal{P} \\mathcal{D}_8^s}\\cup\\Delta\\overline{\\mathcal{P} \\mathcal{E}}_7.\n\\edd\nBy Lemma \\ref{closure_a1_pak_f02_not_zero} and \\eqref{pak2_is_subset_of_a1_and_pak}, it is equivalent to showing that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0 \\} &= \\Delta \\overline{\\PP \\D_8^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7. \\label{a1_pa5_pd8s_pe7}\n\\end{align}\nBy the definition of $\\Delta \\PP \\D_8^{s}$, to prove \\eqref{a1_pa5_pd8s_pe7} it suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 & : \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0, ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) =0 \\} = \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7. \n\\label{one_a1_one_pa5_f02_zero_is_pe7}\n\\end{align}\nBefore we prove \\eqref{one_a1_one_pa5_f02_zero_is_pe7}, we will first prove the following fact: \n\\begin{align}\n\\Delta \\PP \\D_8^{s} & \\subset \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_8. \\label{one_a1_one_pa5_f02_zero_f12_not_zero_is_pd8}\n\\end{align}\nAlthough \\eqref{one_a1_one_pa5_f02_zero_f12_not_zero_is_pd8} \nis not required for the proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa5_cl}), it will be needed later. 
\nTo prove \\eqref{one_a1_one_pa5_f02_zero_f12_not_zero_is_pd8}, it suffices to show that \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 \\cap \\Delta \\mathcal{P} \\mathcal{D}_7 & = \\varnothing. \\label{a1_pa5_intersect_pd7_is_empty} \n\\end{align}\nTo see why, note that by \\eqref{a1_pa4_intersect_pd6_is_empty} combined with $\\mathcal{P} \\mathcal{A}_5\\subset \\overline{\\mathcal{P} \\mathcal{A}}_4$ we have \n\\bgd\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 \\cap \\Delta \\mathcal{P} \\mathcal{D}_6=\\varnothing.\n\\edd\nSince $\\Delta\\overline{\\mathcal{P} \\mathcal{D}}_8^s\\subseteq \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5$, \nwe conclude that \\eqref{a1_pa5_intersect_pd7_is_empty} implies \n\\bgd\n\\Delta\\overline{\\mathcal{P} \\mathcal{D}}_8^s \\cap \\big(\\Delta \\mathcal{P} \\mathcal{D}_6\\cup \\Delta \\mathcal{P} \\mathcal{D}_7\\big)=\\varnothing.\n\\edd\nOn the other hand, by Lemma \\ref{cl} (\\ref{A5cl}) we \nknow that $\\Delta\\overline{\\mathcal{P} \\mathcal{D}}_8^s\\subseteq \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_6$. \nLemma \\ref{Dk_sharp_closure} (\\ref{pd6_pd7_pd8_closure}) now \nproves \\eqref{one_a1_one_pa5_f02_zero_f12_not_zero_is_pd8} assuming \\eqref{a1_pa5_intersect_pd7_is_empty}.\n\\begin{claim}\n\\label{claim_a1_pa5_intersect_pd7_is_empty}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_7$. 
\nThen there are no solutions\n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_3$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\label{pd7_intersect_a1_pa5_is_empty_functional_eqn}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\ \n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_5}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\ \n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, ~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1).\n\\end{align}\nIn particular, if $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ is sufficiently close to \n$(\\tilde{f},\\tilde{p}, l_{\\p})$, it does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5$. \n\\end{claim}\n\\noindent It is easy to see that claim \\ref{claim_a1_pa5_intersect_pd7_is_empty} implies \\eqref{a1_pa5_intersect_pd7_is_empty}. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}. 
\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_3$ we conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nFurthermore, since $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_7$, we conclude that \n\\begin{align}\nf_{21}, ~f_{40}, ~\\mathcal{D}_7^f &=0 ~~\\textnormal{and} ~~f_{12}, ~\\mathcal{D}^{f}_8 \\neq 0. \\label{pd7_vanish_non_vanish}\n\\end{align}\nThe functional equation \\eqref{pd7_intersect_a1_pa5_is_empty_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} & = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad \\mathcal{A}^{f(t_1, t_2)}_4 =0, \\qquad \\mathcal{A}^{f(t_1, t_2)}_5 =0, \\nonumber \\\\ \n(x_{t_2}, y_{t_2}) & \\neq (0, 0), \\qquad f_{02}(t_1, t_2) \\neq 0 \\qquad \\textnormal{(but small)}. \\label{eval_f1_pd7} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\bge\n\\mathrm{F} : = {\\textstyle \\frac{f_{02}(t_1, t_2)}{2}} y_{t_2}^2 + {\\textstyle \\frac{f_{21}(t_1, t_2)}{2}} x_{t_2}^2 y_{t_2}+ {\\textstyle \\frac{f_{40}(t_1, t_2)}{24}}\\xt^4 + {\\textstyle \\frac{f_{12}(t_1, t_2)}{2}} x_{t_2} y_{t_2}^2 +{\\textstyle \\frac{f_{03}(t_1, t_2)}{6}} y_{t_2}^3 + \\ldots. \\label{F_recap_pd8}\n\\ede\nWe will now show that solutions to \n\\eqref{eval_f1_pd7} cannot exist: if $(f_{ij}(t_1, t_2), \\xt, y_{t_2})$ is a sequence converging \nto $(f,0,0) $\nthat satisfies \\eqref{eval_f1_pd7}, then $\\mathcal{D}^{f(t_1, t_2)}_8$ goes to zero (after passing to a subsequence). \nThat would contradict \\eqref{pd7_vanish_non_vanish}. 
\nFirst, we observe that any solution to the set of equations $\\mathcal{A}^{f(t_1, t_2)}_4 =0$ and $\\mathcal{A}^{f(t_1, t_2)}_5 =0$ \nis given by \n\\begin{align}\nf_{21}(t_1, t_2) & = {\\textstyle \\Big( \\frac{f_{31}(t_1, t_2) \\pm v }{ 3 f_{12}(t_1, t_2)} \\Big)f_{02}(t_1, t_2), ~~f_{40}(t_1, t_2) = \\frac{3 f_{21}(t_1, t_2)^2}{f_{02}(t_1, t_2)},~~\\mathcal{D}^{f(t_1, t_2)}_7 = -\\frac{5 v^2}{3 f_{12}(t_1, t_2)}}. \\label{af4_af5_pd8}\n\\end{align}\nLet us choose the $+v$ solution for $f_{21}(t_1, t_2)$; a similar argument will \ngo through for the $-v$ solution. Now, using the value of $f_{40}(t_1, t_2)$ we observe that the first three terms in the expression for \n$\\mathrm{F}$ in \\eqref{F_recap_pd8} form a perfect square. We will now rewrite the remaining part of $\\mathrm{F}$ by making a change of coordinates. \nUsing an argument identical to the one given \nin \\cite{BM13} and using \\eqref{af4_af5_pd8}, we can make a change of coordinates \n\\[ x_{t_2} = \\hat{x}_{t_2} + \\mathrm{G}(y_{t_2}, v), \\qquad y_{t_2} = \\hat{y}_{t_2} + \\mathrm{B}(\\hat{x}_{t_2}, v) \\]\nso that $\\mathrm{F}$ is given by \n\\begin{align*}\n\\mathrm{F} &= \\frac{u}{2} \\Big( \\hat{y}_{t_2} + \\mathrm{B}(\\hat{x}_{t_2},v) + \\frac{f_{31}(t_1, t_2)}{6 f_{12}(t_1, t_2)} (\\hat{x}_{t_2} + \\mathrm{J} )^2 + \n\\frac{v (\\hat{x}_{t_2} + \\mathrm{J})}{6 f_{12}(t_1, t_2)}\\Big)^2 \\\\ \n& -\\frac{v^2 \\hat{x}_{t_2}^5}{72 f_{12}(t_1, t_2)} + \\frac{f_{12}(t_1, t_2)}{2} \\hat{x}_{t_2} \\hat{y}_{t_2}^2+ \\frac{\\mathcal{D}^{f(t_1, t_2)}_8}{720} \\hat{x}_{t_2}^6 + \n\\beta(\\hat{x}_{t_2}) \\hat{x}_{t_2}^7,\n\\end{align*}\nwhere \n\\[ \\mathrm{J} := \\mathrm{G}(\\hat{y}_{t_2} + \\mathrm{B}(\\hat{x}_{t_2},v), v),\\,\\,\\,u:=f_{02}(t_1,t_2).\\]\nNote that $\\mathrm{B}$ is also a function of $v$ because the coefficients of $\\xt^n$ may depend \non $f_{50}(t_1, t_2)$, which is equal to $\\mathcal{D}^{f(t_1, t_2)}_7 + \\frac{5 f_{31}(t_1, t_2)^2}{3 f_{12}(t_1, t_2)} $. 
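\nFor the reader's convenience, the perfect square referred to above is, explicitly, \n\\begin{align*}\n\\frac{f_{02}(t_1, t_2)}{2} y_{t_2}^2 + \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2} + \\frac{f_{40}(t_1, t_2)}{24} x_{t_2}^4 &= \\frac{f_{02}(t_1, t_2)}{2} \\Big( y_{t_2} + \\frac{f_{21}(t_1, t_2)}{2 f_{02}(t_1, t_2)} x_{t_2}^2 \\Big)^2, \n\\end{align*}\nusing $f_{40}(t_1, t_2) = \\frac{3 f_{21}(t_1, t_2)^2}{f_{02}(t_1, t_2)}$ from \\eqref{af4_af5_pd8}. 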
\nLet \n\\[ z_{t_2} := \\hat{y}_{t_2} + \\frac{\\hat{x}_{t_2}^2 v}{6 f_{12}(t_1, t_2)}.\\]\nSince \n\\[\\mathrm{G}(0,v) =0 \\qquad \\textnormal{and} \\qquad \\mathrm{B}(\\hat{x}_{t_2}, v) + \\frac{f_{31}(t_1, t_2)}{6 f_{12}(t_1, t_2)} \\hat{x}_{t_2}^2 = \\mathrm{O}(\\hat{x}_{t_2}^3) \\]\nwe conclude that in the new coordinates $(\\hat{x}_{t_2}, z_{t_2})$, $\\mathrm{F}$ is given by \n\\begin{align}\n\\mathrm{F} &= \\frac{u}{2} \\Big( z_{t_2} + \\alpha(\\hat{x}_{t_2}, v) \\hat{x}_{t_2} ^3 + \\mathrm{E}(\\hat{x}_{t_2}, z_{t_2}, v) z_{t_2} \\Big)^2 \\nonumber \\\\ \n& -\\frac{v \\hat{x}_{t_2}^3 z_{t_2}}{6 f_{12}(t_1, t_2)} + \\frac{f_{12}(t_1, t_2)}{2} \\hat{x}_{t_2} z_{t_2}^2+ \\frac{\\mathcal{D}^{f(t_1, t_2)}_8}{720} \\hat{x}_{t_2}^6 + \n\\beta(\\hat{x}_{t_2}) \\hat{x}_{t_2}^7, \\qquad \\textnormal{where} ~~\\mathrm{E}(0,0,v) \\equiv 0.\\label{FF_new_form_pd8}\n\\end{align}\nHence, \\eqref{eval_f1_pd7} has solutions if and only if the following set of equations has a solution \n\\begin{align}\n\\mathrm{F} &=0, \\qquad \\mathrm{F}_{\\hat{x}_{t_2}} =0, \\qquad \\mathrm{F}_{z_{t_2}} =0, \\qquad (\\hat{x}_{t_2}, z_{t_2}) \\neq (0,0), ~~u \\neq 0, ~~v ~~\\textnormal{small}. \\label{f_eval_pd7_new}\n\\end{align}\nWe will now analyze solutions of \\eqref{f_eval_pd7_new}. 
\nNotice that we can rewrite \\eqref{f_eval_pd7_new} in the following way \n\\begin{align}\np_0 + p_1 w + p_2 v &=0, ~~q_0 + q_1 w + q_2 v =0, ~~r_0 + r_1 w + r_2 v =0, \\label{p0_q0_etc}\n\\end{align}\nwhere \n\\begin{align}\nw &:= u (z_{t_2} + \\alpha (\\hat{x}_{t_2}, v) \\hat{x}_{t_2}^3 + \\mathrm{E}(\\hat{x}_{t_2}, z_{t_2}, v) z_{t_2}), \n~~\\eta := z_{t_2} + \\alpha (\\hat{x}_{t_2}, v) \\hat{x}_{t_2}^3 + \\mathrm{E}(\\hat{x}_{t_2}, z_{t_2}, v) z_{t_2}, \\nonumber \\\\\np_0 &:= \\frac{f_{12}(t_1, t_2)}{2} \\hat{x}_{t_2} z_{t_2}^2+ \\frac{\\mathcal{D}^{f(t_1, t_2)}_8}{720} \\hat{x}_{t_2}^6 + \n\\beta(\\hat{x}_{t_2}) \\hat{x}_{t_2}^7, ~~p_1 := \\frac{\\eta}{2}, ~~p_2 := -\\frac{\\hat{x}_{t_2}^3 z_{t_2}}{6}, \\nonumber \\\\ \nq_0 &:= \\frac{f_{12}(t_1, t_2)}{2} z_{t_2}^2+ \\frac{\\mathcal{D}^{f(t_1, t_2)}_8}{120} \\hat{x}_{t_2}^5 + \n(7 \\beta(\\hat{x}_{t_2}) + \\hat{x}_{t_2} \\beta^{\\prime} (\\hat{x}_{t_2})) \\hat{x}_{t_2}^6, \\nonumber \\\\ \nq_1 & := 3 \\alpha (\\hat{x}_{t_2}, v) \\hat{x}_{t_2}^2 + \\alpha^{\\prime} (\\hat{x}_{t_2}, v) \\hat{x}_{t_2}^3 + z_{t_2} \\mathrm{E}_{{\\hat{x}_{t_2}}}(\\hat{x}_{t_2}, z_{t_2}, v), \n~~q_2:= -\\frac{\\hat{x}_{t_2}^2 z_{t_2}}{2}, \\nonumber \\\\ \nr_0& := f_{12}(t_1, t_2) \\hat{x}_{t_2} z_{t_2}, ~~r_1 := 1+ \\mathrm{E}(\\hat{x}_{t_2}, z_{t_2}, v)+ z_{t_2} \\mathrm{E}_{z_{t_2}}(\\hat{x}_{t_2}, z_{t_2}, v), \n~~r_2 := -\\frac{\\hat{x}_{t_2}^3}{6}. \\label{p0_q0_defn}\n\\end{align}\nSince \\eqref{p0_q0_etc} is a system of three linear equations satisfied by the nonzero vector $(1, w, v)$, the determinant of its coefficient matrix must vanish; hence \n\\begin{align}\np_0 q_2 r_1 -p_0 q_1 r_2 + p_2 q_1 r_0 -p_1 q_2 r_0 -p_2q_0 r_1 + p_1 q_0 r_2 =0. 
\\label{p0_q0_eliminated} \n\\end{align}\nEquations \\eqref{p0_q0_eliminated} and \\eqref{p0_q0_defn} now imply that \n\\begin{align}\n\\Phi(\\hat{x}_{t_2}, z_{t_2}) & : = z_{t_2}^3(f_{12}(t_1, t_2) + \\mathrm{E}_2(\\hat{x}_{t_2}, z_{t_2}, v)) \n+ z_{t_2}^2 \\hat{x}_{t_2}^3(f_{12}(t_1, t_2) + \\mathrm{E}_3(\\hat{x}_{t_2}, z_{t_2}, v)) \\nonumber \\\\ \n & ~~~~ +z_{t_2} \\hat{x}_{t_2}^6(f_{12}(t_1, t_2) + \\mathrm{E}_4(\\hat{x}_{t_2}, z_{t_2}, v)) + \n \\hat{x}_{t_2}^9 \\mathcal{R}_5(z_{t_2}, \\hat{x}_{t_2}, v)=0, \\label{eliminate_u_v_pd8}\n\\end{align}\nwhere $\\mathrm{E}_i(0,0,v) =0$ and $\\mathcal{R}_5(z_{t_2}, \\hat{x}_{t_2}, v)$ is a holomorphic function. \nNow let us define $\\mathrm{L} := z_{t_2}\/ \\hat{x}_{t_2}^3.$ Let $(f(t_1, t_2), \\hat{x}_{t_2}, z_{t_2})$ \nbe a sequence converging to $(f, 0,0)$ \nthat satisfies \\eqref{f_eval_pd7_new} and such that $(\\hat{x}_{t_2}, z_{t_2}) \\neq (0,0)$. \nFirst we observe that $\\hat{x}_{t_2} \\neq 0$; this follows from \n\\eqref{eliminate_u_v_pd8} and the fact that $f_{12} \\neq 0$. \nNext, it is easy to see, using \\eqref{eliminate_u_v_pd8}, that \n$\\mathrm{L}$ is bounded, since $f_{12}\\neq 0$. Hence, after passing to a subsequence, $\\mathrm{L}$ \nconverges. Since $\\mathrm{F} =0$, we can easily see from \\eqref{FF_new_form_pd8} that \nas $(\\hat{x}_{t_2}, z_{t_2}), u$ and $v$ go to zero, $\\mathcal{D}^{f(t_1, t_2)}_8$ goes to zero. \nThis contradicts \\eqref{pd7_vanish_non_vanish}. \\qed \\\\\n\n\n\n\\noindent Now let us prove \\eqref{one_a1_one_pa5_f02_zero_is_pe7}. It is clear that the lhs of \\eqref{one_a1_one_pa5_f02_zero_is_pe7} is a subset of its rhs. 
\nNext, we will prove the \nfollowing two facts simultaneously:\n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 & \\supset \\Delta \\mathcal{P} \\mathcal{E}_7, \\label{a1_pa5_is_supsetof_pe7} \\\\ \n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_6 \\cap \\Delta \\mathcal{P} \\mathcal{E}_7 & = \\varnothing. \\label{a1_pa6_intersect_pe7_is_empty} \n\\end{align}\nSince $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5$ is a closed set, \\eqref{a1_pa5_is_supsetof_pe7} implies that the \nrhs of \\eqref{one_a1_one_pa5_f02_zero_is_pe7} is a subset of its lhs. \n\n\\begin{claim}\n\\label{claim_a1_pa6_intersect_pe7_is_empty}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_7$.\nThen there exist solutions \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{A}}_3$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~~\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_5}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n ~\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_4}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). 
\\label{pe7_intersect_a1_pa6_is_empty_functional_eqn} \n\\end{align}\nMoreover, any such solution sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ lies in \n$\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_5$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{A}_6}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0. \\label{psi_pa6_neq_0_a1_pa5_functional}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_6$. \n\\end{claim}\n\\noindent It is easy to see that claim \\ref{claim_a1_pa6_intersect_pe7_is_empty} implies \\eqref{a1_pa5_is_supsetof_pe7} and \n\\eqref{a1_pa6_intersect_pe7_is_empty} simultaneously. \\\\ \n \n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}.\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{A}}_3$, we conclude \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nMoreover, since $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_7$, we conclude that\n\\begin{align}\nf_{12}, f_{40} &=0, ~~f_{31}, f_{03} \\neq 0. 
\\label{pe7_nv}\n\\end{align}\nThe functional equation \\eqref{pe7_intersect_a1_pa6_is_empty_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} & = 0, ~~\\mathrm{F}_{x_{t_2}} = 0, ~~\\mathrm{F}_{y_{t_2}} = 0, ~~\\mathcal{A}^{f(t_1, t_2)}_4 =0, ~~\\mathcal{A}^{f(t_1, t_2)}_5 =0, \\nonumber \\\\ \nf_{02}(t_1, t_2) &\\neq 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pe7} \n\\end{align}\nLet us now define the following quantities: \n\\begin{align*}\n\\mathrm{G}_1 &:= \\mathrm{F} \n-\\frac{\\xt \\mathrm{F}_{\\xt}}{4} -\\frac{y_{t_2} \\mathrm{F}_{y_{t_2}}}{2}, ~~\\mathrm{G}_2 := \\mathrm{F}-\\frac{y_{t_2} \\mathrm{F}_{y_{t_2}}}{2}, \n~~\\mathrm{G} := \\mathrm{F}+ 4 \\mathrm{G}_1 - 2 \\mathrm{G}_2. \n\\end{align*}\nNote that $\\mathrm{G}_1$ depends linearly on $f_{12}(t_1, t_2)$\nand is independent of $f_{40}(t_1, t_2)$, \n$f_{21}(t_1, t_2)$ and $f_{02}(t_1, t_2)$; \n$\\mathrm{G}_2$ depends linearly on $f_{40}(t_1, t_2)$ \nand $f_{21}(t_1, t_2)$\nand is independent of $f_{12}(t_1, t_2)$ \nand $f_{02}(t_1, t_2)$; finally \n$\\mathrm{G}$ depends linearly on $f_{02}(t_1, t_2)$ \nand $f_{40}(t_1, t_2)$\nand is independent of $f_{12}(t_1, t_2)$ \nand $f_{21}(t_1, t_2)$. \nNext, we claim that $\\xt \\neq 0$ and $y_{t_2} \\neq 0$; we will \njustify that at the end. Assuming this claim, we observe that \n\\eqref{eval_f1_pe7} combined with \\eqref{pe7_nv} \nis equivalent to\n\\begin{align}\n\\mathrm{G}_1 & = 0, ~~\\mathrm{G}_2 = 0, ~~\\mathrm{G} = 0, ~~f_{21}(t_1, t_2) = \\frac{5f_{12}(t_1, t_2)f_{40}(t_1, t_2) + f_{02}(t_1, t_2) f_{50}(t_1, t_2)}{10 f_{31}(t_1, t_2)}, \\nonumber \\\\\n\\mathcal{A}^{f(t_1, t_2)}_4 & =0, ~~f_{02}(t_1, t_2) \\neq 0, ~~ \\xt \\neq 0, ~~y_{t_2} \\neq 0 \\qquad \\textnormal{(but small)}. \\label{eval_f1_pe7_modified} \n\\end{align}\nWe will now construct solutions for \n\\eqref{eval_f1_pe7_modified}. 
\nFirst of all, using $\\mathrm{G} =0$ we can solve for $f_{02}(t_1, t_2)$ as a function of \n$f_{40}(t_1, t_2)$, $\\xt$ and $y_{t_2}$. Next, \nusing that $\\mathrm{G}_1 =0$, we get $f_{12}(t_1, t_2)$ as a function of $\\xt$ and $y_{t_2}$. Finally, using \n$\\mathrm{G}_2 =0$, the value of $f_{02}(t_1, t_2)$, $f_{12}(t_1, t_2)$ \nfrom the previous two equations and the value of $f_{21}(t_1, t_2)$ from \\eqref{eval_f1_pe7_modified}, \nwe get $f_{40}(t_1, t_2)$ in terms of $\\xt$ and $y_{t_2}$. Plugging the expression back in, we get \n$f_{12}(t_1, t_2)$, $f_{21}(t_1, t_2)$, $f_{02}(t_1, t_2)$ and $f_{40}(t_1, t_2)$ in terms of $\\xt$ \nand $y_{t_2}$.\\\\\n\\hf \\hf Next, let us define $\\mathrm{L} := \\xt^3 \/y_{t_2}^2.$ We note that any expression involving $\\xt$ and $y_{t_2}$ can be rewritten in terms of $\\xt$, $y_{t_2}$ \nand $\\mathrm{L}$ so that the highest power of $\\xt$ is $2$; replace $\\xt^3$ by $\\mathrm{L} y_{t_2}^2$. \nUsing that fact, \nwe conclude \n\\begin{align*}\n\\mathcal{A}^{f(t_1, t_2)}_4 \\frac{\\xt^4}{y_{t_2}^3}&= -\\frac{f_{03}(t_1, t_2)^2}{3} + \\frac{f_{03}(t_1, t_2) f_{31}(t_1, t_2)}{3} \\mathrm{L} + \\mathrm{E}_1(\\xt, y_{t_2}, \\mathrm{L}) =0 \n\\end{align*}\nwhere $\\mathrm{E}_1(0,0, \\mathrm{L}) =0$.\nHence, by the Implicit Function Theorem, we conclude that \n\\begin{align*}\n\\mathrm{L} &= \\frac{f_{03}(t_1, t_2)}{f_{31}(t_1, t_2)} + \\mathrm{E}_2(\\xt, y_{t_2}) \n\\end{align*}\n where $\\mathrm{E}_2(0,0)=0$. 
\nHence, $\\xt$ and $ y_{t_2}$ are parametrized by \n\\[ y_{t_2} = u^3, ~~\\xt = \\alpha u^2 + \\mathrm{O}(u^3) \\qquad \\textnormal{where} ~~\\alpha := \\sqrt[3]{\\frac{f_{03}(t_1, t_2)}{f_{31}(t_1, t_2)}}, ~~\\textnormal{a branch of the cube root.} \\]\nPlugging all this in, we get \n\\begin{align}\nf_{02}(t_1, t_2) &= \\frac{f_{03}(t_1, t_2)}{3} u^3 + \\mathrm{O}(u^4), \n~~f_{12}(t_1, t_2)= -\\frac{f_{03}(t_1, t_2)}{3 \\alpha } u + \\mathrm{O}(u^2), ~~f_{21}(t_1, t_2) = \\mathrm{O}(u^3), \\nonumber \\\\ \nf_{40}(t_1, t_2) &= \\mathrm{O}(u^2), ~~f_{02}(t_1, t_2)^3 \\mathcal{A}^{f(t_1, t_2)}_6 = -\\frac{10 f_{03}(t_1, t_2)^2 f_{31}(t_1, t_2)^2 }{9 } u^6 + \\mathrm{O}(u^7). \\label{pe7_multiplicity}\n\\end{align}\nEquation \\eqref{pe7_multiplicity} implies that \\eqref{psi_pa6_neq_0_a1_pa5_functional} holds. \\qed \\\\\n\n\n\n\\begin{cor}\n\\label{a1_pa5_mult_is_5_around_pe7}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{E}_{7}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_{7} \\cap \\mathcal{Q}^{-1}(0)$. 
\nThen the section $$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_{6}} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_{5} \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{A}_{6}}) \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $6$.\n\\end{cor}\n\n\\noindent \\textbf{Proof: } Follows from the fact that the sections \ninduced by $f_{02}$, $f_{21}$, $f_{12}$ and $f_{40}$ (the corresponding functionals) \nare transverse to the zero set over $\\Delta\\overline{\\mathcal{P} \\mathcal{A}}_3$ \\footnote{Take partial derivative with respect to \n$f_{02}$, $f_{21}$, $f_{12}$ and $f_{40}$.}, the fact that $\\mathcal{Q}$ is generic and \\eqref{pe7_multiplicity}. \\qed \\\\\n\n\\noindent This finishes the proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pa5_cl}).\\\\\n\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pd4_cl}):} By definition of $\\Delta \\PP \\D_6^{\\vee s}$ in \\eqref{new_defn_delta}, \nit suffices to show that \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4: \\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_5}(\\tilde{f}, l_{\\p}) =0 \\} &= \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_{6}. \\label{a1_pd4_is_pd6} \n\\end{align} \nObserve that \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 \\cap \\Delta (\\mathcal{P} \\mathcal{D}_4 \\cup \\mathcal{P} \\mathcal{D}_5 \\cup \\mathcal{P} \\mathcal{E}_6) & = \\varnothing. \\label{a1_pd4_intersetc_pd4_is_empty}\n\\end{align}\nThis follows from \\eqref{a1_pa2_intersect_pd4_empty}, \\eqref{a1_pa3_pd5_dual_eqn} and \\eqref{a1_pd4_intersect_pe6_is_empty} \ncombined with \nthe fact that \n\\[ (\\mathcal{P} \\mathcal{D}_4 \\cup \\mathcal{P} \\mathcal{D}_5) \\cap (\\overline{\\mathcal{P} \\mathcal{D}_5^{\\vee}} \\cup \\overline{\\mathcal{P} \\mathcal{D}}_6) = \\varnothing. 
\\] \nEquation \\eqref{a1_pd4_intersetc_pd4_is_empty} implies that the lhs of \\eqref{a1_pd4_is_pd6} is a subset of its rhs. \nNext, we will simultaneously prove the following two statements: \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 & \\supset \\Delta \\mathcal{P} \\mathcal{D}_6, \\label{a1_pa4_supset_pd6}\\\\ \n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 \\cap \\Delta \\mathcal{P} \\mathcal{D}_6 & = \\varnothing. \\label{a1_pa5_intersetc_pd6_is_empty} \n\\end{align}\nSince $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4$ is a closed set, \\eqref{a1_pa4_supset_pd6} implies that the rhs of \\eqref{a1_pd4_is_pd6} is a subset of its lhs. \n\n\\begin{claim}\n\\label{claim_pd5_actual_subset_of_a1_pa3}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_6$.\nThen there exists a solution \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{D}}_4$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\pi_1^* \\psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \n~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \\label{pd5_actual_limit_a1_pa3_functional_eqn}\n\\end{align}\nMoreover, such a solution \nlies in $\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_4$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_5}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0. \\label{psi_pa4_neq_0_a1_pa3_functional_new_pd5_actual}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5$.
\n\\end{claim}\n\\noindent Note that claim \\ref{claim_pd5_actual_subset_of_a1_pa3} implies \\eqref{a1_pa4_supset_pd6} and \n\\eqref{a1_pa5_intersetc_pd6_is_empty} simultaneously. \\\\\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \n\\ref{claim_a4_closure_simultaneous}, except for one difference:\nwe take $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)})$ to be a point in $\\overline{\\mathcal{P} \\mathcal{D}}_4$.\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{D}}_4$, \nwe conclude that \n\\[ f_{00}(t_1, t_2) = f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2)=f_{02}(t_1, t_2) = f_{30}(t_1, t_2)=0.\\]\nThe functional equation \\eqref{pd5_actual_limit_a1_pa3_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_d5_actual} \n\\end{align} \nFor the convenience of the reader, let us rewrite the expression for $\\mathrm{F}$: \n\\begin{align*}\n\\mathrm{F} &:= \\frac{f_{21}(t_1, t_2)}{2} x_{t_2}^2 y_{t_2}+ \n\\frac{f_{12}(t_1, t_2)}{2} x_{t_2} y_{t_2}^2 + \\frac{f_{03}(t_1, t_2)}{6} y_{t_2}^3 + \\ldots.\n\\end{align*}\nSince $(\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_6$, we conclude that \n\\begin{align}\nf_{21}, ~f_{40} &= 0 ~~\\textnormal{and} ~~f_{12}, ~\\mathcal{D}^{f}_7 \\neq 0.
\n\\end{align}\nA little thought reveals \nthat the solutions to \n\\eqref{eval_f1_d5_actual}\nare exactly the same as in \\eqref{pd6_hat_y}, \n\\eqref{pd6_f21} and \\eqref{pd6_f40}, with $f_{02}(t_1, t_2) =0$. \nSince $\\alpha \\neq 0$, we conclude that $f_{21}(t_1, t_2) \\neq 0$ \nfor small but nonzero $\\hat{x}_{t_2}$. \nHence \\eqref{psi_pa4_neq_0_a1_pa3_functional_new_pd5_actual} holds. \\qed \n\n\\begin{cor}\n\\label{psi_pd5_section_vanishes_order_two_around_pd6}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{D}_6$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_6\\cap \\mathcal{Q}^{-1}(0)$. \nThen the section $$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_5} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P}\\mathcal{D}}_4 \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{D}_5}) \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $2$.\n\\end{cor}\n\\noindent \\textbf{Proof: } Follows from the fact that the sections induced by $f_{21}$ and $f_{40}$ are transverse to the zero set over \n$\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_4$\\footnote{Take partial derivatives with respect to $f_{21}$ and $f_{40}$.}, \nthe fact that $\\mathcal{Q}$ is generic and \\eqref{pd6_f21} (combined with $f_{02}(t_1, t_2) =0$). Each branch of \n$\\alpha := \\sqrt{\\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{-60 f_{12}(t_1, t_2)}}$ \ncontributes with a multiplicity of $1$. Hence, the total multiplicity is $2$.
\\qed \\\\\n\n\\noindent Before proceeding further, observe that \\eqref{pd5_intersect_a1_pd4_is_empty} implies that \n\\begin{align}\n\\Delta \\overline{\\PP \\D_6^{\\vee s}} & \\subset \\Delta \\overline{\\mathcal{P} \\mathcal{D}_6^{\\vee}}. \\label{pd6_dual_s_is_subset_of_pd6_dual} \n\\end{align}\n\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_d4_cl}):} Follows from Lemma \\ref{cl_two_pt} (\\ref{a1_pd4_cl}), \\eqref{pd6_dual_s_is_subset_of_pd6_dual}, \nLemma \\ref{tube_lemma}, \\eqref{tube_lemma_X} and \n\\eqref{tube_lemma_Y}. \\qed \\\\\n\n\n\\textbf{Proof of Lemma \\ref{cl_two_pt} (\\ref{a1_pd5_cl}):} It suffices to prove the following two statements: \n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1\\circ \\mathcal{P} \\mathcal{D}}_5: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\} &= \n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_{7}: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\}\n\\label{closure_a1_pd5_f12_not_zero} \\\\\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) = 0 \\} &= \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7. 
\\label{a1_pd5_pe7}\n \\end{align}\nLet us directly prove a more general version of \\eqref{closure_a1_pd5_f12_not_zero}: \n\\begin{lmm}\n\\label{closure_a1_pdk_f12_not_zero}\nIf $k \\geq 5$ then \n\\begin{align*}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_k: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\} &= \n\\{ (\\tilde{f}, \\tilde{p}, l_{\\tilde{p}}) \\in \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_{k+2}: ~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6}( \\tilde{f}, \\tilde{p}, l_{\\tilde{p}} ) \\neq 0 \\}. \n\\end{align*}\n\\end{lmm}\nNote that \\eqref{closure_a1_pd5_f12_not_zero} is a special case of Lemma \\ref{closure_a1_pdk_f12_not_zero}; take $k=5$. \nWe will prove the following two facts simultaneously:\n\\begin{align}\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_{k} \\} \\supset \\Delta \\mathcal{P} \\mathcal{D}_{k+2} \\qquad & \\forall ~k \\geq 5, \\label{pdk2_is_subset_of_a1_and_pdk} \\\\\n\\{ (\\tilde{f}, \\tilde{p}, l_{\\p}) \\in \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_{k+1} \\} \\cap \\Delta \\mathcal{P} \\mathcal{D}_{k+2} = \\varnothing \\qquad & \\forall ~k \\geq 4. \\label{pdk2_intersect_a1_and_pd1k+1_is_empty}\n\\end{align}\nIt is easy to see that \\eqref{pdk2_is_subset_of_a1_and_pdk} and \\eqref{pdk2_intersect_a1_and_pd1k+1_is_empty} imply \nLemma \\ref{closure_a1_pdk_f12_not_zero}. 
We will now prove the following claim:\n\n\\begin{claim}\n\\label{claim_d6_closure_simultaneous}\nLet $~(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_{k+2}$ and $ k\\geq 5$.\nThen there exists a solution \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{D}}_5$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\label{closure_a1_dk_f12_not_zero}\n\\pi_1^* \\Psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\Psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_6}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, \\ldots, \\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_k}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, \\nonumber \\\\\n~\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & \\neq 0, ~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1). \n\\end{align}\nMoreover, \\textit{any} solution $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)})$ sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ \nlies in $ \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_k$, i.e.,\n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_{k+1}}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0. 
\\label{psi_pd_k_plus_1_does_not_vanish}\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)})$ \\textit{does not} lie in $ \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_{k+1} $.\n\\end{claim}\n\n\\noindent It is easy to see that claim \\ref{claim_d6_closure_simultaneous} implies \\eqref{pdk2_is_subset_of_a1_and_pdk} and \\eqref{pdk2_intersect_a1_and_pd1k+1_is_empty} \nsimultaneously for all $k\\geq 5$.\nThe fact that \\eqref{pdk2_intersect_a1_and_pd1k+1_is_empty} holds for $k=4$ is the content of \\eqref{a1_pa5_intersetc_pd6_is_empty}. \\\\\n\n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}.\nHence \n\\[ f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2)= f_{02}(t_1, t_2) = f_{30}(t_1, t_2)= f_{21}(t_1, t_2)=0.\\]\nSince $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_{k+2}$, we conclude that $f_{12}(t_1, t_2) \\neq 0$.
\nHence, \nwe can make a change of coordinates to write $\\mathrm{F}$ as \n\\begin{align*} \n\\mathrm{F}&= \\hat{y}_{t_2}^2 \\hat{x}_{t_2} + \\frac{\\mathcal{D}^{f(t_1, t_2)}_6}{4!} \\hat{x}_{t_2}^4 + \\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{5!} \\hat{x}_{t_2}^5+ \\ldots \n\\end{align*}\nThe functional equation \\eqref{closure_a1_dk_f12_not_zero} has a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\label{closure_a1_dk_f12_not_zero_numbers}\n\\hat{y}_{t_2}^2 \\hat{x}_{t_2} + \\frac{\\mathcal{D}^{f(t_1, t_2)}_6}{4!} \\hat{x}_{t_2}^4 + \\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{5!} \\hat{x}_{t_2}^5+ \\ldots &=0, \n\\qquad \\hat{y}_{t_2} \\hat{x}_{t_2} = 0, \\nonumber \\\\\n\\hat{y}_{t_2}^2 + \\frac{\\mathcal{D}^{f(t_1, t_2)}_6}{3!}\\hat{x}_{t_2}^3 + \\frac{\\mathcal{D}^{f(t_1, t_2)}_7}{4!} \\hat{x}_{t_2}^4 + \\ldots &= 0, \\qquad \\mathcal{D}^{f(t_1, t_2)}_6, \\ldots, \\mathcal{D}^{f(t_1, t_2)}_k = 0, \\nonumber \\\\\n(\\hat{y}_{t_2}, \\hat{x}_{t_2}) & \\neq (0,0) \\qquad \\textnormal{(but small).}\n\\end{align}\nIt is easy to see that solutions to \\eqref{closure_a1_dk_f12_not_zero_numbers} exist and are given by \n\\begin{align}\n\\mathcal{D}^{f(t_1, t_2)}_6,& \\ldots, \\mathcal{D}^{f(t_1, t_2)}_k = 0, \\nonumber \\\\\n\\mathcal{D}^{f(t_1, t_2)}_{k+1} &= \\frac{\\mathcal{D}_{k+3}^{f(t_1, t_2)}}{k(k+1)} \\hat{x}_{t_2}^2 + \\mathrm{O}(\\hat{x}_{t_2}^3), \\label{pdk+2_inside_a1_and_pdk_solution}\\\\ \n\\mathcal{D}^{f(t_1, t_2)}_{k+2} &= -\\frac{2 \\mathcal{D}^{f(t_1, t_2)}_{k+3}}{(k+1)} \\hat{x}_{t_2} + \\mathrm{O}(\\hat{x}_{t_2}^2), \\qquad \\hat{y}_{t_2} = 0, \\qquad \\hat{x}_{t_2} \\neq 0 \\qquad \\textnormal{(but small).} \\nonumber \n\\end{align}\nBy \\eqref{pdk+2_inside_a1_and_pdk_solution}, it immediately follows that \\eqref{psi_pd_k_plus_1_does_not_vanish} holds.
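\nAs a consistency check, the leading coefficients in \\eqref{pdk+2_inside_a1_and_pdk_solution} can be recovered by a direct elimination (a sketch; we abbreviate $\\mathrm{A} := \\mathcal{D}^{f(t_1, t_2)}_{k+1}$, $\\mathrm{B} := \\mathcal{D}^{f(t_1, t_2)}_{k+2}$ and $\\mathrm{C} := \\mathcal{D}^{f(t_1, t_2)}_{k+3}$). Setting $\\hat{y}_{t_2} = 0$ and dividing the first and third equations of \\eqref{closure_a1_dk_f12_not_zero_numbers} by $\\hat{x}_{t_2}^{k-1}$ and $\\hat{x}_{t_2}^{k-2}$ respectively, we get \n\\begin{align*}\n\\frac{\\mathrm{A}}{(k-1)!} + \\frac{\\mathrm{B}}{k!} \\hat{x}_{t_2} + \\frac{\\mathrm{C}}{(k+1)!} \\hat{x}_{t_2}^2 + \\mathrm{O}(\\hat{x}_{t_2}^3) &= 0, \\\\ \n\\frac{\\mathrm{A}}{(k-2)!} + \\frac{\\mathrm{B}}{(k-1)!} \\hat{x}_{t_2} + \\frac{\\mathrm{C}}{k!} \\hat{x}_{t_2}^2 + \\mathrm{O}(\\hat{x}_{t_2}^3) &= 0.\n\\end{align*}\nSubtracting $(k-1)$ times the first equation from the second eliminates $\\mathrm{A}$ and yields $\\mathrm{B} = -\\frac{2 \\mathrm{C}}{k+1} \\hat{x}_{t_2} + \\mathrm{O}(\\hat{x}_{t_2}^2)$; substituting this back into the first equation yields $\\mathrm{A} = \\frac{\\mathrm{C}}{k(k+1)} \\hat{x}_{t_2}^2 + \\mathrm{O}(\\hat{x}_{t_2}^3)$, in agreement with \\eqref{pdk+2_inside_a1_and_pdk_solution}.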
\\qed \n\n\n\\begin{cor}\n\\label{a1_pdk_mult_is_2_f12_neq_0}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{D}_{k+2}$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{D}_{k+2} \\cap \\mathcal{Q}^{-1}(0)$. \nThen the section $$ \\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{D}_{k+1}} \\oplus \\mathcal{Q}: \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_{k} \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{D}_{k+1}}) \\oplus \\mathbb{W}$$\nvanishes around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $2$.\n\\end{cor}\n\\noindent \\textbf{Proof: } This follows from the fact that the sections \n\\begin{align*}\n\\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{D}_{i}}:& \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_{i-1} - \\pi_2^\\ast\\Psi_{\\mathcal{P} \\mathcal{E}_6}^{-1}(0) \\longrightarrow \\pi_2^\\ast\\UL_{\\mathcal{P} \\mathcal{D}_{i}}\n\\end{align*}\nare transverse to the zero set for all $6 \\leq i \\leq k+2$, the fact that \n$\\mathcal{Q}$ is generic and \\eqref{pdk+2_inside_a1_and_pdk_solution}. \\qed \\\\ \n\n\n\\noindent Next, we will prove \\eqref{a1_pd5_pe7}. The lhs of \\eqref{a1_pd5_pe7} is a subset of its rhs; this follows from \n\\eqref{a1_pd4_intersect_pe6_is_empty} and the fact that $\\mathcal{P} \\mathcal{D}_5$ is a subset of $\\overline{\\mathcal{P} \\mathcal{D}}_4$.
To prove the converse, we will prove the following three statements simultaneously:\n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 & \\supset \\Delta \\mathcal{P} \\mathcal{E}_7, \\label{a1_pd5_supset_pe7}\\\\\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_6 \\cap \\Delta \\mathcal{P} \\mathcal{E}_7 & = \\varnothing, \\label{a1_pd6_intersect_pe7_empty}\\\\\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{E}}_6 \\cap \\Delta \\mathcal{P} \\mathcal{E}_7 & = \\varnothing. \\label{a1_pe6_intersect_pe7_empty}\n\\end{align}\nSince $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5$ is a closed set, \\eqref{a1_pd5_supset_pe7} implies that the rhs of \n\\eqref{a1_pd5_pe7} is a subset of its lhs.\n\n\\begin{claim}\n\\label{claim_a1_pd6_intersect_pe7_is_empty}\nLet $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_7$.\nThen there exist solutions \n$$ (\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} ) \\in \\overline{ (\\mathcal{D} \\times \\mathbb{P}^2) \\circ \\mathcal{P} \\mathcal{D}}_5$$ \n\\textit{near} $(\\tilde{f}, \\tilde{p}, l_{\\p})$ to the set of equations\n\\begin{align}\n\\label{pe7_intersect_a1_pd6_is_empty_functional_eqn}\n\\pi_1^* \\Psi_{\\mathcal{A}_0}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) & = 0, ~\\pi_1^* \\Psi_{\\mathcal{A}_1}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) = 0, ~~\\tilde{p}(t_1, t_2) \\neq \\tilde{p}(t_1).\n\\end{align}\nMoreover, any such solution sufficiently close to $(\\tilde{f}, \\tilde{p}, l_{\\p})$ lies in \n$\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_5$, i.e., \n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{D}_6}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0.\n\\end{align}\nIn particular, $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in
$\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_6$. Since\n\\begin{align}\n\\pi_2^* \\Psi_{\\mathcal{P} \\mathcal{E}_6}(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\neq 0\n\\end{align}\nthe solution $(\\tilde{f}(t_1, t_2), \\tilde{p}(t_1, t_2), l_{\\tilde{p}(t_1)} )$ does not lie in $\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{E}}_6$. \n\\end{claim}\n\n\\noindent Note that claim \\ref{claim_a1_pd6_intersect_pe7_is_empty} implies \\eqref{a1_pd5_supset_pe7}, \\eqref{a1_pd6_intersect_pe7_empty} and \\eqref{a1_pe6_intersect_pe7_empty} \nsimultaneously. \\\\ \n\n\\noindent \\textbf{Proof: } Choose homogeneous coordinates $[\\mathrm{X}: \\mathrm{Y}: \\mathrm{Z}]$ so that \n$\\tilde{p} = [0:0:1]$ and let ~$\\mathcal{U}_{\\tilde{p}}$, \n$\\pi_x$, $\\pi_y$, $v_1$, $w$, $v$, $\\eta$, $\\eta_{t_1}$, $\\eta_{t_2}$, \n$x_{t_1}$, $y_{t_1}$, $x_{t_2}$, $y_{t_2}$, \n$f_{ij}(t_1, t_2)$, $\\mathrm{F}$, $\\mathrm{F}_{x_{t_2}}$ and $\\mathrm{F}_{y_{t_2}}$\nbe exactly the same as defined in the \nproof of claim \\ref{claim_pd5_subset_of_a1_pa3}\nexcept for one difference:\nwe take $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)})$ to be a point in $\\overline{\\mathcal{P} \\mathcal{D}}_5$.\nSince $(\\tilde{f}(t_1, t_2), l_{\\tilde{p}(t_1)}) \\in \\overline{\\mathcal{P} \\mathcal{D}}_5$, we conclude that \n\\[ f_{10}(t_1, t_2) = f_{01}(t_1, t_2) =f_{11}(t_1, t_2)= f_{20}(t_1, t_2)= f_{02}(t_1, t_2) = f_{30}(t_1, t_2)= f_{21}(t_1, t_2)=0.\\]\nSince $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_7$, we conclude that \n\\begin{align}\nf_{03}, f_{31} &\\neq 0, ~~f_{12}, f_{40} =0.
\\label{pe7_nv_again} \n\\end{align} \nThe functional equation \\eqref{pe7_intersect_a1_pd6_is_empty_functional_eqn} \nhas a solution if and only if \nthe following set of equations has a solution (as numbers): \n\\begin{align}\n\\mathrm{F} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pe7_again} \n\\end{align} \nLet us now define \n\\begin{align*}\n\\mathrm{G} &:= -8\\mathrm{F} + 2 \\xt \\mathrm{F}_{\\xt} + 3 y_{t_2} \\mathrm{F}_{y_{t_2}}. \n\\end{align*}\nWe claim that $ \\xt \\neq 0$ and $ y_{t_2} \\neq 0$; we will justify that at the end. Assuming that claim, we \nconclude that solving \\eqref{eval_f1_pe7_again} is equivalent to solving \n\\begin{align}\n\\mathrm{G} = 0, \\qquad \\mathrm{F}_{x_{t_2}} = 0, \\qquad \\mathrm{F}_{y_{t_2}} = 0, \\qquad (x_{t_2}, y_{t_2}) \\neq (0, 0) \\qquad \\textnormal{(but small)}. \\label{eval_f1_pe7_again_modified} \n\\end{align}\nNote that $\\mathrm{G}$ is independent of $f_{12}(t_1, t_2)$ and $f_{40}(t_1, t_2)$. Hence, $\\mathrm{G}$ is explicitly given by \n\\begin{equation*}\n\\mathrm{G}= \\frac{\\mathrm{P}_{03}(\\xt,y_{t_2})}{6}y_{t_2}^3+ \\frac{\\mathrm{P}_{31}(\\xt,y_{t_2})}{6}\\xt^3 y_{t_2}+\\kappa \\xt^2y_{t_2}^2+\\mathrm{P}_{50}(\\xt)\\xt^5,\n\\end{equation*}\nwhere $\\mathrm{P}_{03}(0,0) = f_{03}(t_1, t_2)$ and $\\mathrm{P}_{31}(0,0) = f_{31}(t_1, t_2)$.
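\nOne way to see this independence (a small verification, using only the definition of $\\mathrm{G}$): for a monomial $\\xt^a y_{t_2}^b$ we have \n\\[ -8 \\, \\xt^a y_{t_2}^b + 2 \\xt \\, \\partial_{\\xt} \\big( \\xt^a y_{t_2}^b \\big) + 3 y_{t_2} \\, \\partial_{y_{t_2}} \\big( \\xt^a y_{t_2}^b \\big) = (2a+3b-8) \\, \\xt^a y_{t_2}^b, \\]\nand the monomials $\\xt y_{t_2}^2$ and $\\xt^4$, which carry the coefficients $f_{12}(t_1, t_2)$ and $f_{40}(t_1, t_2)$, satisfy $2a+3b = 8$; hence they drop out of $\\mathrm{G}$.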
\nUsing the same argument as in \\cite{BM13}, \nthere exists a holomorphic function $\\mathrm{B}(\\hat{x}_{t_2})$ and a \nconstant $\\eta$ such that if we make the substitution \n$$\\xt = \\hat{x}_{t_2} + \\eta \\hat{y}_{t_2}, \\qquad y_{t_2} = \\hat{y}_{t_2} + \\mathrm{B}(\\hat{x}_{t_2}) \\hat{x}_{t_2}^2 $$ \nthen $\\mathrm{G}$ is given by\n\\begin{align*}\n\\mathrm{G} &= \\frac{\\hat{\\mathrm{P}}_{03}(\\hat{x}_{t_2}, \\hat{y}_{t_2})}{6} \\hat{y}_{t_2}^3 + \\frac{\\hat{\\mathrm{P}}_{31}(\\hat{x}_{t_2}, \\hat{y}_{t_2})}{6} \\hat{x}_{t_2}^3 \\hat{y}_{t_2},\n\\end{align*}\nwhere $\\hat{\\mathrm{P}}_{03} (0,0) = f_{03}(t_1, t_2)$ and $\\hat{\\mathrm{P}}_{31} (0,0) = f_{31}(t_1, t_2)$. \nWe claim that $\\hat{y}_{t_2} \\neq 0$; we will justify that at the end. Assuming that claim, we conclude from $\\mathrm{G} =0$ that \n\\begin{align*}\n\\hat{y}_{t_2} &= u^3, ~~\\hat{x}_{t_2} = \\alpha u^2 + \\mathrm{O}(u^3) \\qquad \\textnormal{where} \\qquad \\alpha := \\sqrt[3]{-\\frac{f_{03}(t_1, t_2)}{f_{31}(t_1, t_2)}} \\qquad \\textnormal{a branch of the cube root.} \n\\end{align*}\nUsing this and the remaining two equations of \\eqref{eval_f1_pe7_again_modified}, we conclude that\n\\begin{align}\nf_{12}(t_1, t_2) & = - \\frac{f_{03}(t_1, t_2)}{3 \\alpha} u + \\mathrm{O}(u^2), ~~f_{40}(t_1, t_2) = -\\frac{4 f_{31}(t_1, t_2)}{\\alpha} u + \\mathrm{O}(u^2). \\label{f12_and_f40_mult_around_pe7}\n\\end{align}\nIt remains to show that $\\xt \\neq 0$, $y_{t_2} \\neq 0$ and $\\hat{y}_{t_2} \\neq 0$. If $\\xt =0$, then $\\mathrm{F} =0$ implies that \n$f_{03}(t_1, t_2) = \\mathrm{O}(y_{t_2})$, \ncontradicting \\eqref{pe7_nv_again}. Next, if $y_{t_2} =0$ then $\\mathrm{F}_{y_{t_2}} =0$ implies that $f_{31} (t_1, t_2) = \\mathrm{O}(\\xt)$, contradicting \\eqref{pe7_nv_again}.\nFinally, if $\\hat{y}_{t_2} =0$, then $\\mathrm{F}_{y_{t_2}} =0$ implies that \n\\begin{align*}\nf_{31}(t_1, t_2) &= -6\\mathrm{B}(0) f_{12}(t_1, t_2) + \\mathrm{O}(\\xt).
\n\\end{align*}\nAs $f_{12}(t_1, t_2)$ and $\\xt$ go to zero, $f_{31}(t_1, t_2)$ goes to zero, contradicting \\eqref{pe7_nv_again}. \\qed \n\n\\begin{cor}\n\\label{psi_pe6_and_pd6_section_vanishes_order_one_around_pe7}\nLet $\\mathbb{W} \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2$ be a vector bundle such that \nthe rank of $\\mathbb{W}$ is the same as the dimension of $ \\Delta \\mathcal{P} \\mathcal{E}_7$ and \n$\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P} T\\mathbb{P}^2 \\longrightarrow \\mathbb{W}$ a \\textit{generic} \nsmooth section. Suppose $(\\tilde{f},\\tilde{p}, l_{\\p}) \\in \\Delta \\mathcal{P} \\mathcal{E}_7\\cap \\mathcal{Q}^{-1}(0)$. \nThen the sections \n\\begin{align*}\n\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_6} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P}\\mathcal{D}}_5 \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{D}_6}) \\oplus \\mathbb{W}, \n~~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6} \\oplus \\mathcal{Q}: \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P}\\mathcal{D}}_5 \\longrightarrow \\pi_2^* (\\UL_{\\mathcal{P} \\mathcal{E}_6}) \\oplus \\mathbb{W} \n\\end{align*}\nvanish around $(\\tilde{f}, \\tilde{p}, l_{\\p})$ with a multiplicity of $1$.\n\\end{cor}\n\\noindent \\textbf{Proof: } Follows from the fact \nthat the sections induced by $f_{12}$ and $f_{40}$ are transverse to the zero set over \n$\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_5$\\footnote{Take partial derivatives with respect to $f_{12}$ and $f_{40}$.}, \nthe fact that $\\mathcal{Q}$ is generic and \\eqref{f12_and_f40_mult_around_pe7}.
\\qed \n\n\n\\section{Euler class computation} \n\\label{Euler_class_computation}\n\n\\hf\\hf We are ready to prove the recursive formulas stated in section \\ref{algorithm_for_numbers}.\\\\\n\n\\noindent \\textbf{Proof of \\eqref{algoa1a1}:} \nLet $\\mathcal{Q}: \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2 \\longrightarrow \\mathcal{W}$ \nbe a generic smooth section of\n\\bgd\n\\mathcal{W} := \\bigg({\\textstyle \\bigoplus}_{i=1}^{\\delta_d -(n+2)} \\pi_{\\mathcal{D}}^*\\gamma_{\\mathcal{D}}^* \n\\bigg)\\oplus\\Big({\\textstyle \\bigoplus}_{i=1}^{n} \\pi_2^* \\gamma_{_{\\mathbb{P}^2}}^*\\Big) \\longrightarrow \\mathcal{D} \\times \\mathbb{P}^2 \\times \\mathbb{P}^2.\n\\edd\nNote that \n\\begin{align*}\n\\mathcal{N}(\\mathcal{A}_1 \\mathcal{A}_1, n) = \\big\\langle e(\\mathcal{W}), ~[\\overline{\\mathcal{A}_1\\circ\\mathcal{A}}_1] \\big\\rangle = | \\pm (\\mathcal{A}_1\\circ \\mathcal{A}_1) \\cap \\mathcal{Q}^{-1}(0)|.\n\\end{align*}\nBy Lemma \\ref{cl_two_pt} (\\ref{a1a_minus_1_cl}) \n\\begin{align*} \n\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 & = \\overline{\\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2)} = \\overline{\\mathcal{A}}_1 \\circ (\\mathcal{D} \\times \\mathbb{P}^2) \\sqcup \\Delta \\overline{\\mathcal{A}}_1.\n\\end{align*} \nThe sections \n\\begin{align}\n \\pi_2^*\\psi_{\\mathcal{A}_0}:\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2 - \\Delta \\overline{\\mathcal{A}}_1 \\longrightarrow \\pi_2^* \\mathcal{L}_{\\mathcal{A}_0}, \\qquad \\pi_2^*\\psi_{\\mathcal{A}_1}: \\pi_2^*\\psi_{\\mathcal{A}_0}^{-1}(0) \\longrightarrow \\pi_2^*\\mathcal{V}_{\\mathcal{A}_1}\n\\end{align}\nare transverse to the zero set \n(cf. Proposition \\ref{ift_ml}).
\nHence \n\\begin{align}\n\\big\\langle e(\\pi_2^*\\mathcal{L}_{\\mathcal{A}_0}) e(\\pi_2^*\\mathcal{V}_{\\mathcal{A}_1}) e(\\mathcal{W}), ~[\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2] \\big\\rangle & = \\mathcal{N}(\\mathcal{A}_1 \\mathcal{A}_1, n) + \n\\mathcal{C}_{\\Delta \\overline{\\mathcal{A}}_1}(\\pi_2^*\\psi_{\\mathcal{A}_0} \\oplus \\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q}), \\label{A1A1_Euler_Class}\n\\end{align}\nwhere $\\mathcal{C}_{\\Delta \\overline{\\mathcal{A}}_1}(\\pi_2^*\\psi_{\\mathcal{A}_0} \\oplus \\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q})$ \nis the contribution of the section \n$\\pi_2^*\\psi_{\\mathcal{A}_0} \\oplus \\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q}$ to the Euler class from \n$\\Delta \\overline{\\mathcal{A}}_1$. \nThe lhs of \\eqref{A1A1_Euler_Class}, as computed by the splitting principle and a case-by-case check, is \n\n\\begin{align}\n\\big\\langle e(\\pi_2^*\\mathcal{L}_{\\mathcal{A}_0}) e(\\pi_2^*\\mathcal{V}_{\\mathcal{A}_1}) e(\\mathcal{W}), ~[\\overline{\\mathcal{A}}_1 \\times \\mathbb{P}^2] \\big\\rangle &= \\mathcal{N}(\\mathcal{A}_1, 0) \\times \\mathcal{N}(\\mathcal{A}_1, n) \\label{a1a1_Euler_class_Main_stratum}.\n\\end{align}\nIn fact, the expression one initially arrives at is\n\\bgd\n\\mathcal{N}(\\mathcal{A}_1,n)+3(d-1)\\mathcal{N}(\\mathcal{A}_1,n+1)+3(d-1)^2\\mathcal{N}(\\mathcal{A}_1,n+2).\n\\edd\nOne then uses a result from \\cite{BM13}:\n\\begin{align} \n\\mathcal{N}(\\mathcal{A}_1,n) = \\begin{cases}\n3(d-1)^{2},&\\textnormal{if}~n=0;\\\\\n3(d-1),&\\textnormal{if}~n=1;\\\\\n1,&\\textnormal{if}~n=2;\\\\\n0,&\\textnormal{otherwise}.\n\\end{cases}\n\\end{align}\n\\noindent Next, we compute \n$\\mathcal{C}_{\\Delta \\overline{\\mathcal{A}}_1}(\\pi_2^*\\psi_{\\mathcal{A}_0}\\oplus\\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q})$.\nNote that ~$ \\overline{\\mathcal{A}}_1 = \\mathcal{A}_1 \\sqcup \\overline{\\mathcal{A}}_2.$\nBy claim \\ref{a1_section_contrib_from_a1_and_a2} we get that
\n\\begin{align}\n\\mathcal{C}_{\\Delta \\mathcal{A}_1}(\\pi_2^*\\psi_{\\mathcal{A}_0}\\oplus\\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q}) & = \\big\\langle e(\\pi_2^* \\mathcal{L}_{\\mathcal{A}_0})e(\\mathcal{W}) , ~[\\Delta \\overline{\\mathcal{A}}_1] \\big\\rangle = \\mathcal{N}(\\mathcal{A}_1, n) + d \\mathcal{N}(\\mathcal{A}_1, n+1), \\label{a1a1_a1_contribution}\\\\\n\\mathcal{C}_{\\Delta \\overline{\\mathcal{A}}_2}(\\pi_2^*\\psi_{\\mathcal{A}_0}\\oplus\\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q}) &= 3 \\mathcal{N}(\\mathcal{A}_2, n). \\label{a1a1_a2_contribution}\n\\end{align}\nIt is easy to see that \\eqref{a1a1_Euler_class_Main_stratum}, \\eqref{a1a1_a1_contribution} and \\eqref{a1a1_a2_contribution} \nprove \\eqref{algoa1a1}. \\qed \n\\begin{rem}\nIn \\cite{Z1} a different method is used to compute $\\mathcal{C}_{\\Delta \\mathcal{A}_1}(\\pi_2^\\ast\\psi_{\\mathcal{A}_0}\\oplus\\pi_2^*\\psi_{\\mathcal{A}_1} \\oplus \\mathcal{Q})$. The author later pointed out this simpler method to the second author of this paper. \n\\end{rem}\n\n\n\\noindent \\textbf{Proof of \\eqref{algopa20a1} and \\eqref{algopa21a1}:} Let $\\mathbb{W}_{n,m,2}^1$ and $\\mathcal{Q}$ be as \nin \\eqref{generic_Q} with $k=2$. \nBy definition, $~\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_2, n,m)$ is the signed \ncardinality of the intersection of $\\mathcal{A}_1\\circ \\mathcal{P} \\mathcal{A}_2$ with $\\mathcal{Q}^{-1}(0)$.
\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1a1_up_cl}, we conclude that\n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1}& = \\overline{\\mathcal{A}}_1\\circ \\hat{\\mathcal{A}}^{\\#}_1 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\hat{\\mathcal{A}}_1^{\\#}}- \\hat{\\mathcal{A}}_1^{\\#} ) \\sqcup \\Delta \\overline{\\hat{\\mathcal{A}}}_3 \\\\ \n & = \\overline{\\mathcal{A}}_1\\circ \\hat{\\mathcal{A}}^{\\#}_1 \\sqcup \\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{A}}_2 \\sqcup \\Delta \\overline{\\hat{\\mathcal{A}}}_3 \\qquad \n\\textnormal{(by \\cite{BM13}; cf. Lemma \\ref{cl}, statement \\ref{A1cl}).}\n\\end{align*}\nBy Proposition \\ref{A2_Condition_prp}, the section \n$$\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1} \\longrightarrow \\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2}$$ \nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_2$. \nHence, the zeros of the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1} \\longrightarrow \\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathbb{W}_{n,m,2}^1, $$\nrestricted to $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_2$ and counted with a sign, give our desired number.
\nIn other words, \n\\bgd\n\\Big\\langle e(\\pi_2^*\\mathbb{V}_{\\mathcal{P} \\mathcal{A}_2}) e(\\mathbb{W}_{n,m,2}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1}] \\Big\\rangle = \\mathcal{N}(\\mathcal{A}_1 \\mathcal{P}\\mathcal{A}_2, n,m) + \\mathcal{C}_{\\Delta \\overline{\\hat{\\mathcal{A}}}_3}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q}) \n\\edd\nwhere $\\mathcal{C}_{\\Delta \\overline{\\hat{\\mathcal{A}}}_3}(\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q})$ is the contribution of the section to the Euler class from \n$\\Delta \\overline{\\hat{\\mathcal{A}}}_3$. Note that $ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_2} \\oplus \\mathcal{Q}$ vanishes only on $\\mathcal{P} \\mathcal{A}_3$ and $\\hat{\\mathcal{D}}_4$ and not on the entire $\\overline{\\hat{\\mathcal{A}}}_3$. \nBy Corollaries \\ref{pa2_section_mult_around_pa3} and \\ref{pa2_section_mult_around_hat_d4}, \nthe contributions from $\\mathcal{P} \\mathcal{A}_3$ and $\\hat{\\mathcal{D}}_4$ are $2$ and $3$ respectively. This proves the claim. \\qed \n\\begin{rem}\nIn the above proof we are using Lemma \\ref{cohomology_ring_of_pv} with $M:= \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{A}}^{\\#}_1}$. However, in this case \n$M$ is not a smooth manifold; it is only a pseudocycle. Lemma \\ref{cohomology_ring_of_pv} is actually true even when $M$ happens to be a \npseudocycle. \n\\end{rem}\n\n\\begin{rem}\nA completely different method is used in \\cite{Z1} to compute $\\mathcal{N}(\\mathcal{A}_1 \\mathcal{A}_2, n)$; instead of removing the cusp, the node is removed.\nIn fact, all the numbers $\\mathcal{N}(\\mathcal{A}_1 \\mathfrak{X}_k,n)$ can also be computed by removing the node, instead of removing the $\\mathfrak{X}_k$ singularity.
However, in order to obtain a recursive formula for the number of degree $d$ curves \nthrough $\\delta_{d} - (\\delta+k)$ generic points and having $\\delta$-nodes and one singularity of type $\\mathfrak{X}_k$, \nwe have to apply the method employed in this paper \n(i.e., we have to remove the $\\mathfrak{X}_k$-singularity, not the node). \nThis observation is again due to Aleksey Zinger. \n\\end{rem}\n\n\n\n\\noindent \\textbf{Proof of \\eqref{algopa3a1}:} Let $\\mathbb{W}_{n,m,3}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=3$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa2_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2& = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_2- \\mathcal{P} \\mathcal{A}_2) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \\Delta \\overline{\\hat{\\mathcal{D}}^{\\#}_5}\\Big), \\\\\n &= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_2 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_3 \\cup \\overline{\\hat{\\mathcal{D}}^{\\#}_4}) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \\Delta \\overline{\\hat{\\mathcal{D}}^{\\#}_5}\\Big) \n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A2cl}). By Proposition \\ref{A3_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathbb{W}_{n,m,3}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_3$. By definition, it does not vanish on $\\mathcal{A}_1 \\circ \\hat{\\mathcal{D}}_4^{\\#}$. 
\nBy Corollary \\ref{a1_pak_mult_is_2_Hess_neq_0}, the contribution to the Euler class from the points of $\\Delta \\mathcal{P} \\mathcal{A}_4$ is $2$. \nFurthermore, by definition the section does not vanish on $\\Delta\\hat{\\mathcal{D}}^{\\#}_5$.\nHence \n\\bgd\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathbb{W}_{n,m,3}^1), ~~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_2] \\Big\\rangle = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_3,n,m) + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{A}_4, n, m) \n\\edd\nwhich proves the equation. \\qed \\\\\n\n\n\n\\noindent \\textbf{Proof of \\eqref{algopa4a1}:} Let $\\mathbb{W}_{n,m,4}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=4$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa2_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 &= \n\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_3- \\mathcal{P} \\mathcal{A}_3) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5} \\Big), \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5} \\Big) \n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A3cl}). 
By Proposition \\ref{A3_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_4} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_4} \\oplus \\mathbb{W}_{n,m,4}^1 $$\nvanishes transversely on $\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4$. It is easy to see that it does not vanish on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_4$. \nBy Corollary \\ref{a1_pak_mult_is_2_Hess_neq_0}, the contribution to the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{A}_5$ is $2$. \nMoreover, the section does not vanish on $\\Delta \\mathcal{P} \\mathcal{D}^{\\vee}_5$. \nHence \n\\bgd\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_4} \\oplus \\mathbb{W}_{n,m,4}^1), ~~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3] \\Big\\rangle = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_4,n,m) + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{A}_5, n, m) \n\\edd\nwhich proves the equation. 
\\qed \\\\ \n\n\\noindent \\textbf{Proof of \\eqref{algopa5a1}:} Let $\\mathbb{W}_{n,m,5}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=5$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa2_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 &= \n\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_4- \\mathcal{P} \\mathcal{A}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \n\\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\Big), \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_5) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \n\\Delta \\overline{\\PP \\D_7^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_6 \\Big) \n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A4cl}). By Proposition \\ref{A3_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathbb{W}_{n,m,5}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_5$. \nBy \\cite{BM13}, \nwe conclude that this section \nvanishes on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_5$ with a multiplicity of $2$. \nBy Corollaries \\ref{a1_pak_mult_is_2_Hess_neq_0} and \\ref{a1_pa4_mult_is_5_around_pe6}, \nthe contributions to the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{A}_6$ and $\\Delta\\mathcal{P} \\mathcal{E}_6$ are $2$ and $5$, respectively. 
\nSince the dimension of $\\mathcal{P} \\mathcal{D}_7$ is one less than the rank of $\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathbb{W}_{n,m,5}^1$ and $\\mathcal{Q}$ is generic, \nthe section \ndoes not vanish on $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7$. Since $\\overline{\\PP \\D_7^{s}}$ is a subset of $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7$ \n(by \\eqref{one_a1_one_pa4_f02_zero_f12_not_zero_is_pd7}), \nthe section \ndoes not vanish on $\\overline{\\PP \\D_7^{s}}$ either. \nHence\n\\begin{align*}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_5} \\oplus \\mathbb{W}_{n,m,5}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_4] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_5,n,m) +2\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{D}_5,n,m)\\\\ \n & ~~ + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{A}_6, n, m)+ 5\\mathcal{N}(\\mathcal{P} \\mathcal{E}_6, n ,m) \n\\end{align*}\nwhich proves the equation. \\qed \\\\ \n\n\\noindent \\textbf{Proof of \\eqref{algopa6a1}:} Let $\\mathbb{W}_{n,m,6}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=6$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa5_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 & = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_5- \\mathcal{P} \\mathcal{A}_5) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_7 \\cup \\Delta \\overline{\\PP \\D_8^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big) \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_6 ) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_7 \\cup \\Delta \\overline{\\PP 
\\D_8^{s}} \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big) \n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A5cl}). By Proposition \\ref{A3_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_6} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_6} \\oplus \\mathbb{W}_{n,m,6}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_6$. \nBy \\cite{BM13}, \nwe conclude that this section \nvanishes on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_6$ with a multiplicity of $4$. \nBy Corollaries \\ref{a1_pak_mult_is_2_Hess_neq_0} and \\ref{a1_pa5_mult_is_5_around_pe7}, \nthe contributions to the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{A}_7$ and $\\Delta\\mathcal{P} \\mathcal{E}_7$ are $2$ and $6$, respectively. \nSince the dimension of $\\mathcal{P} \\mathcal{D}_8$ is one less than the rank of $\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_6} \\oplus \\mathbb{W}_{n,m,6}^1$ and $\\mathcal{Q}$ is generic, \nthe section \ndoes not vanish on $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_8$. Since $\\Delta \\PP \\D_8^{s}$ is a subset of $\\Delta \\overline{\\mathcal{P} \\mathcal{D}}_8$ \n(by \\eqref{one_a1_one_pa5_f02_zero_f12_not_zero_is_pd8}), the section does not vanish \non $\\Delta \\overline{\\PP \\D_8^{s}}$ either. Hence \n\\begin{align*}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_6} \\oplus \\mathbb{W}_{n,m,6}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_5] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_6,n,m) +4\\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{D}_6,n,m)\\\\ \n & ~~ + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{A}_7, n, m)+ 6\\mathcal{N}(\\mathcal{P} \\mathcal{E}_7, n ,m) \n\\end{align*}\nwhich proves the equation. 
\\qed \\\\ \n\n\\noindent \\textbf{Proof of \\eqref{algopd4a1}:} Let $\\mathbb{W}_{n,0,4}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=4$ and $m=0$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pa2_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 &= \n\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_3- \\mathcal{P} \\mathcal{A}_3) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5} \\Big) \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}_3 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{A}}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{A}}_5 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5} \\Big)\n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{A3cl}). By Proposition \\ref{A3_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_4} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_4} \\oplus \\mathbb{W}_{n,0,4}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_4$. It is easy to see that the section does not vanish on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{A}_4$. \nBy Corollaries \\ref{a1_pak_mult_is_2_Hess_neq_0} and \\ref{psi_pd4_section_vanishes_order_two_around_dual_d5}, \nthe contributions to the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{A}_5$ and $\\mathcal{P} \\mathcal{D}_5^{\\vee}$ are $2$ and $2$, respectively. 
\nHence \n\\begin{align}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_4} \\oplus \\mathbb{W}_{n,0,4}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{A}}_3] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{A}_4,n,0) + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{A}_5, n, 0) \\nonumber \\\\\n& +2\\big\\langle e(\\mathbb{W}_{n,0,4}^1), ~[\\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}] \\big\\rangle. \\label{pd5_dual_prelim_a1_pa4}\n\\end{align}\nSince the map $\\pi: \\mathcal{P} \\mathcal{D}_5^{\\vee} \\longrightarrow \\mathcal{D}_5$ is one-to-one, we conclude that \n\\begin{align}\n\\big\\langle e(\\mathbb{W}_{n,0,4}^1), ~[\\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}] \\big\\rangle &= \\mathcal{N}(\\mathcal{D}_5, n). \\label{pd5_dual_d5_equality_numbers}\n\\end{align}\nThis follows from the same argument as in \\cite{BM13}.\nEquations \\eqref{pd5_dual_d5_equality_numbers} and \\eqref{pd5_dual_prelim_a1_pa4} imply \n\\eqref{algopd4a1}.\\\\ \n\\hf \\hf Here is an alternative way to compute the left-hand side of \\eqref{pd5_dual_d5_equality_numbers}. \nRecall that \n\\[ \\overline{\\mathcal{P} \\mathcal{D}}_4 = \\mathcal{P} \\mathcal{D}_4 \\cup \\overline{\\mathcal{P} \\mathcal{D}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}. \\]\nThe section \n$\\Psi_{\\mathcal{P} \\mathcal{D}_5^{\\vee}}: \\overline{\\mathcal{P} \\mathcal{D}}_4 \\longrightarrow \\UL_{\\mathcal{P} \\mathcal{D}_5^{\\vee}}$ vanishes transversely on $\\mathcal{P} \\mathcal{D}_5^{\\vee}$ and \ndoes not vanish on $\\mathcal{P} \\mathcal{D}_5$. Hence \n\\begin{align*}\n\\big\\langle e(\\UL_{\\mathcal{P} \\mathcal{D}_5^{\\vee}} \\oplus \\mathbb{W}_{n,0,5}^0), ~[\\overline{\\mathcal{P} \\mathcal{D}}_4] \\big\\rangle &= \\big\\langle e(\\mathbb{W}_{n,0,5}^0), ~ [\\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}]\\big\\rangle = \\mathcal{N}(\\mathcal{D}_5, n). 
\n\\end{align*}\nIt is easy to check directly that these two methods give the same answer for $\\mathcal{N}(\\mathcal{D}_5, n)$. \\qed \\\\ \n\n\\noindent \\textbf{Proof of \\eqref{algopd4a1_lambda}:} Let $\\mathbb{W}_{n,1,4}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=4$ and $m=1$.\nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_d4_cl} we have \n\\begin{align}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4} & = \\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\hat{\\mathcal{D}}}_4- \\hat{\\mathcal{D}}_4) \\sqcup \n\\Big( \\Delta \\overline{\\hat{\\mathcal{D}}}_6 \\Big) \\nonumber \\\\\n\\implies \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}^{\\#}_4} &= \\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4^{\\#} \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\hat{\\mathcal{D}}^{\\#}_4}- \\hat{\\mathcal{D}}_4^{\\#}) \\sqcup \n\\Big( \\Delta \\overline{\\hat{\\mathcal{D}}^{\\# \\flat}_6} \\Big) \\qquad (\\textnormal{since} ~~\\overline{\\hat{\\mathcal{D}}}_4 = \\overline{\\hat{\\mathcal{D}}^{\\#}_4} ~~\\textnormal{and} \n~~\\overline{\\hat{\\mathcal{D}}}_6 = \\overline{\\hat{\\mathcal{D}}^{\\# \\flat}_6}) \\nonumber \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}_4^{\\#} \\sqcup \\overline{\\mathcal{A}}_1 \\circ \\overline{\\mathcal{P} \\mathcal{D}}_4 \\sqcup \n\\Big( \\Delta \\overline{\\hat{\\mathcal{D}}^{\\# \\flat}_6} \\Big) \\qquad \\textnormal{(by definition).} \\nonumber\n\\end{align}\nBy Proposition \\ref{PD4_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}^{\\#}_4} \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathbb{W}_{n,1,4}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_4$. 
By definition, the section does not vanish on $\\hat{\\mathcal{D}}^{\\# \\flat }_6$. \nHence \n\\begin{align*}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{A}_3} \\oplus \\mathbb{W}_{n,1,4}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\hat{\\mathcal{D}}^{\\#}_4}] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{D}_4,n,1)\n\\end{align*}\nwhich proves the equation. \\qed \\\\ \n\n\\noindent \\textbf{Proof of \\eqref{algopd5a1}:} Let $\\mathbb{W}_{n,m,5}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=5$. \nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pd4_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 & = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_4- \\mathcal{P} \\mathcal{D}_4) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\Delta \\overline{\\PP \\D_6^{\\vee s}} \\Big)\\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_4 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_5 \\cup \\overline{\\mathcal{P} \\mathcal{D}^{\\vee}_5}) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\Delta \\overline{\\PP \\D_6^{\\vee s}} \\Big)\n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{D4cl}). By Proposition \\ref{D4_Condition_prp}, the section \n$$ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_5}^{\\mathbb{L}} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_5} \\oplus \\mathbb{W}_{n,m,5}^1 $$\nvanishes transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_5$. Moreover, it does not vanish on $\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_5^{\\vee}$ by definition. 
\nBy Corollary \\ref{psi_pd5_section_vanishes_order_two_around_pd6}, \nthe contribution to the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{D}_6$ is $2$. \nThe section does not vanish on $\\Delta \\PP \\D_6^{\\vee s}$ by definition. \nSince the dimension of $\\mathcal{P} \\mathcal{D}_6^{\\vee}$ is the same as the dimension of $\\mathcal{P} \\mathcal{D}_6$ \nand $\\mathcal{Q}$ is generic, by \\eqref{pd6_dual_s_is_subset_of_pd6_dual}, the section does not vanish on $\\Delta \\overline{\\PP \\D_6^{\\vee s}}$. \nHence \n\\begin{align*}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_5} \\oplus \\mathbb{W}_{n,m,5}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_4] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{D}_5,n,m) + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{D}_6, n, m)\n\\end{align*}\nwhich proves the equation. \\qed \\\\\n\n\n\\noindent \\textbf{Proof of \\eqref{algopd6a1} and \\eqref{algope6a1}:} Let $\\mathbb{W}_{n,m,6}^1$ and $\\mathcal{Q}$ be as in \\eqref{generic_Q} with $k=6$. \nBy Lemma \\ref{cl_two_pt}, statement \\ref{a1_pd5_cl} we have \n\\begin{align*}\n\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 & = \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_5- \\mathcal{P} \\mathcal{D}_5) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big) \\\\\n&= \\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}_5 \\sqcup \\overline{\\mathcal{A}}_1 \\circ (\\overline{\\mathcal{P} \\mathcal{D}}_6 \\cup \\overline{\\mathcal{P} \\mathcal{E}}_6) \\sqcup \n\\Big( \\Delta \\overline{\\mathcal{P} \\mathcal{D}}_7 \\cup \\Delta \\overline{\\mathcal{P} \\mathcal{E}}_7 \\Big) \n\\end{align*}\nwhere the last equality follows from \\cite{BM13} (cf. Lemma \\ref{cl}, statement \\ref{D5cl}). 
By Propositions \\ref{D6_Condition_prp} and \\ref{E6_Condition_prp}, the sections \n\\[ \\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_6} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_6} \\oplus \\mathbb{W}_{n,m,6}^1, \n~~\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6} \\oplus \\mathcal{Q} : \\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5 \\longrightarrow \\pi_2^*\\UL_{\\mathcal{P} \\mathcal{E}_6} \\oplus \\mathbb{W}_{n,m,6}^1 \\]\nvanish transversely on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_6$ and $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{E}_6$, respectively. \nMoreover, they do not vanish on $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{E}_6$ and $\\mathcal{A}_1 \\circ \\mathcal{P} \\mathcal{D}_6$, respectively. \nBy Corollaries \\ref{a1_pdk_mult_is_2_f12_neq_0} and \\ref{psi_pe6_and_pd6_section_vanishes_order_one_around_pe7}, \nthe contributions of the section $\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{D}_6} \\oplus \\mathcal{Q}$ \nto the Euler class from the points of $\\Delta\\mathcal{P} \\mathcal{D}_7$ and $\\Delta \\mathcal{P} \\mathcal{E}_7$ are $2$ and $1$, respectively. \nBy Corollary \\ref{psi_pe6_and_pd6_section_vanishes_order_one_around_pe7}, the contribution of the section \n$\\pi_2^*\\Psi_{\\mathcal{P} \\mathcal{E}_6} \\oplus \\mathcal{Q}$ from the points of $\\Delta \\mathcal{P} \\mathcal{E}_7$ is $1$; moreover, it does not vanish on $\\Delta \\mathcal{P} \\mathcal{D}_7$. 
\nHence \n\\begin{align*}\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{D}_6} \\oplus \\mathbb{W}_{n,m,6}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{D}_6,n,m) + 2 \\mathcal{N}(\\mathcal{P} \\mathcal{D}_7, n, m)+\\mathcal{N}(\\mathcal{P} \\mathcal{E}_7, n, m), \\\\\n\\Big\\langle e(\\pi_2^*\\UL_{\\mathcal{P} \\mathcal{E}_6} \\oplus \\mathbb{W}_{n,m,6}^1), ~[\\overline{\\overline{\\mathcal{A}}_1 \\circ \\mathcal{P} \\mathcal{D}}_5] \\Big\\rangle & = \\mathcal{N}(\\mathcal{A}_1\\mathcal{P}\\mathcal{E}_6,n,m) + \\mathcal{N}(\\mathcal{P} \\mathcal{E}_7, n, m)\n\\end{align*}\nwhich prove equations \\eqref{algopd6a1} and \\eqref{algope6a1}. \\qed \n\n\n\\section{INTRODUCTION}\nZero-sum games (ZSGs) are mathematical models describing the interaction of mutually adversarial decision makers. 
\nIn the realm of machine learning, ZSGs have played a central role in the development of techniques such as generative adversarial networks (GANs) \\cite{goodfellow_2014_generative} and adversarial training \\cite{madry_2017_towards}.\nWithin this context, two solution concepts have been adopted for predicting the outcome of ZSGs: the Nash equilibrium (NE)~\\cite{nash_1950_equilibrium} and the Stackelberg equilibrium (SE) \\cite{Stackelberg-1952}.\nThe NE is a prediction observed under the assumption that both players simultaneously choose their strategies (probability measures over the set of possible actions or decisions).\nOn the other hand, the SE describes the outcome in which one of the players (the leader) publicly and irrevocably commits to use a particular strategy before its opponent (the follower). In such a case, the follower chooses its strategy as a best response to the commitment of the leader. \n\nCommitments are said to be in mixed strategies when the leader is allowed to commit to strategies whose support contains more than one action. In this case, the relevant solution concept is the SE in mixed strategies~\\cite{conitzer_2006_computing, conitzer_2016_stackelberg, leonardos_2018_commitment}. \nInterestingly, in finite ZSGs, the payoffs at the NE and the SE in mixed strategies are identical, as shown in~\\cite{von_2010_leadership}.\nThe commitment is said to be in pure strategies when the leader is constrained to commit to play one action with probability one. 
This corresponds to the case in which the follower perfectly observes the action played by the leader.\nThe relevant solution concept under these assumptions is the SE in pure strategies~\\cite{Stackelberg-1952, simaan_1973_stackelberg, simaan_1973_additional}.\nThe expected payoff at the SE in pure strategies is equal to the $\\min\\max$ or $\\max\\min$ solution, where the optimization is over the set of actions \\cite{jin_2020_local, bai_2021_sample}.\n\nIn adversarial training, the underlying assumption is that the follower (the attacker or adversary) perfectly observes the action played by the leader (the learner) \\cite{huang_2022_robust, zuo_2021_adversarial, bruckner_2011_stackelberg, gao_2022_achieving, bai_2021_sample}. Similarly, in data integrity attacks, the follower (the learner) perfectly observes the action of the leader (the attacker) \\cite{chivukula_2017_adversarial, liu_2009_game, kantarciouglu_2011_classifier}. \nThat is, adversarial training and data integrity attacks are studied using the SE in pure strategies.\nAlternatively, GANs are modelled by ZSGs in which the relevant solution concept is the NE (or SE in mixed strategies)~\\cite{hsieh_2019_finding, oliehoek_2018_beyond}. Essentially, ZSGs are used to predict game outcomes in terms of mixed strategies (probability measures), instead of actions (pure strategies). \nIn a nutshell, the underlying assumption of the SE in mixed strategies is that the strategy to which the leader commits is perfectly observed by the follower, while the actions are unobservable. Alternatively, the assumption of the SE in pure strategies is that actions are perfectly observable, which makes the notion of commitment irrelevant. 
These additional impairments are not necessarily due to malicious agents but to the nature of the data acquisition and information processing \\cite{cover_elements_2012}. \nIn real system implementations, the observations of actions and commitments, if they occur, are subject to noise. Nonetheless, the impact of noisy observations in adversarial training, GANs, and most areas of ML relying on ZSGs remains uncharted territory, in part due to the lack of simple and adapted solution concepts. This work makes progress in this direction and proposes a game formulation that takes into account noisy observations of both actions and commitments.\n\n\\subsection{Contributions}\n\nFor pedagogical purposes, results are presented for $2 \\times 2$ ZSGs. Nonetheless, the results can be readily extended to two-player ZSGs with a finite number of actions. \nThe contributions are presented as follows.\n\nSection~\\ref{SecNOA} introduces a game formulation in which the follower obtains a noisy observation of the action played by the leader, whereas the commitment is assumed to be perfectly observed. Three results are presented: First, the set of best responses of the follower is characterized and the role of the priors formed by the follower with the available information is presented. Second, the set of optimal commitments for the leader is calculated and it is shown that, even when subject to noise, observations either benefit the follower or make no difference. Third, the equilibrium is shown to always exist. Benefits for the follower are observed at the equilibrium exclusively when the ZSG exhibits a unique NE in mixed strategies. In all other cases, e.g., ZSGs exhibiting strategic dominance, a unique NE in pure strategies, or infinitely many NEs, the payoffs with and without observations are identical.\n\nSection~\\ref{SecCM} introduces a game formulation in which the follower obtains a noisy observation of both the action played by the leader and the commitment. 
The commitment mismatch is modelled by a deterministic distortion (affine function) that is assumed to be known to the leader and unknown to the follower.\nCommitment mismatch is shown to be either beneficial or immaterial to the leader. Nonetheless, beneficial situations for the leader are shown to be unstable, in part because an equilibrium does not necessarily exist. \nThis phenomenon arises from the fact that optimal commitments induce infinitely many best responses for the follower, each of which leads to a different payoff. To overcome this challenge, the leader must commit to suboptimal strategies in order to induce a unique best response that can be predicted.\n\nThe paper concludes with a discussion and final remarks in Section~\\ref{SecDiscussion}. The proofs of all results are presented in the supplementary material. \n \n\\subsection*{Notation}\nGiven a finite set $\\set{X}$, the notation $2^{\\set{X}}$ represents \nthe power set of $\\set{X}$. The notation $\\simplex{\\set{X}}$ represents the set of all probability measures that can be defined on the measurable space $\\left( \\set{X}, 2^{\\set{X}}\\right)$. 
The set of all subsets of $\\simplex{\\set{X}}$ is denoted by $2^{\\simplex{\\set{X}}}$.\nGiven two matrices $\\matx{a}$ and $\\matx{b}$, their Hadamard product is denoted by $\\matx{a} \\circ \\matx{b}$.\n\n\\section{PRELIMINARIES}\\label{SecPrel}\nConsider a two-player two-action zero-sum game in normal form with payoff matrix \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqMatrixU}\n\\matx{u} & = & \n\\begin{pmatrix}\nu_{1,1} & u_{1,2}\\\\\nu_{2,1} & u_{2,2}\n\\end{pmatrix}.\n\\end{IEEEeqnarray}\nLet the elements of the set $\\set{K} \\triangleq \\{1,2\\}$ represent the indices of the players; \nand let the elements of the set $\\set{A}_1 = \\set{A}_2 \\triangleq \\{a_1, a_2\\}$ represent the actions of the players.\nHence, for all $(i,j) \\inCountTwo^2$, when \\Pone plays $a_i$ and \\Ptwo plays $a_j$, the outcome of the game is $u_{i, j}$.\nIn the following, such a game is represented by the tuple \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqTheGame}\n\\gameNF{\\matx{u}} & \\triangleq & \\left(\\set{K}, \\set{A}_1 , \\set{A}_2 , \\matx{u} \\right).\n\\end{IEEEeqnarray} \nThe remaining part of this section relies on the following assumptions:\n$(i)$~The game $\\gameNF{\\matx{u}}$ is repeated infinitely many times; \n$(ii)$~At each repetition, the players are oblivious of all previous repetitions; and \n$(iii)$~actions are simultaneously chosen at each repetition.\nUnder assumptions $(i) - (iii)$, the average payoff achieved by the players in the \\emph{repeated game} can be expressed in terms of their \\emph{strategies}.\nFor all $k \\in \\set{K}$, the strategy of \\Pkth is a probability measure denoted by $P_{A_k} \\in \\simplex{\\set{A}_k}$. \nAt each repetition of the game, players choose their actions by sampling their probability measures (strategies). 
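The claim above, that under assumptions $(i)$-$(iii)$ the average payoff of the repeated game is determined by the strategies alone, can be illustrated numerically. The following minimal Python sketch is not part of the paper; the function names `expected_payoff` and `empirical_payoff` are illustrative. It compares the closed-form bilinear expectation against the empirical average of repeated play in which each player samples its action from its own strategy.

```python
import random

def expected_payoff(u, p1, p2):
    """Closed-form expected payoff: sum over (i, j) of p1[i] * p2[j] * u[i][j]."""
    return sum(p1[i] * p2[j] * u[i][j] for i in range(2) for j in range(2))

def empirical_payoff(u, p1, p2, n_rounds, seed=0):
    """Average payoff over n_rounds independent, memoryless repetitions in
    which each player samples its action from its strategy (assumptions (i)-(iii))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rounds):
        i = 0 if rng.random() < p1[0] else 1  # action sampled by Player 1
        j = 0 if rng.random() < p2[0] else 1  # action sampled by Player 2
        total += u[i][j]
    return total / n_rounds
```

For instance, with the (illustrative) payoff matrix $u = ((3, 1), (2, 0))$ and uniform strategies, `expected_payoff` returns $1.5$, and the empirical average concentrates around this value as the number of rounds grows.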
\nLet the average payoff be represented by the function $u: \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2} \\to \\reals$ such that, given the strategies $P_{A_1}$ and $P_{A_2}$, \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqNormalFormU}\nu\\left(P_{A_1}, P_{A_2} \\right) & = & \\sum_{(i,j) \\in \\{1,2\\}^2} P_{A_1} \\left( a_i \\right) P_{A_2}\\left(a_j\\right) u_{i,j}. \\squeezeequ\n\\end{IEEEeqnarray}\nNote that the average payoff coincides with the expected payoff under assumptions $(i) - (iii)$.\n\\Pone chooses its strategy $P_{A_1}$ aiming to maximize the expected payoff $u\\left(P_{A_1}, P_{A_2} \\right)$, whereas \\Ptwo chooses the strategy $P_{A_2}$ to minimize it. \nInterestingly, under assumptions $(i) - (iii)$, players can calculate their optimal strategies before the beginning of the repeated game, as shown in the following sections. \n\nWhen the repeated game is played without commitments, the relevant solution concept for the ZSG $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame} is the NE. The following lemma characterizes the payoff at the NE and shows that $2\\times2$ ZSGs exhibit either a unique NE or infinitely many NEs. \n\\begin{lemma}\\label{LemmaNE}\nLet the probability measures $P^{\\star}_{A_1} \\in \\simplex{\\set{A}_1}$ and $P^{\\star}_{A_2} \\in \\simplex{\\set{A}_2}$ form a NE of the game $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame}. 
\nIf the entries of the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfy\n\\begin{subequations}\n\\begin{IEEEeqnarray}{lcl}\n\\left( u_{1,1} - u_{1,2} \\right) \\left( u_{2,2} - u_{2,1} \\right) > 0 & \\mbox{ and } & \\\\\n\\left( u_{1,1} - u_{2,1} \\right) \\left( u_{2,2} - u_{1,2} \\right) > 0,\n\\end{IEEEeqnarray}\n\\label{EqMixedAssumption}\n\\end{subequations}\nthen, the NE of the game $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame} is unique, with \n\\begin{subequations}\\label{EqNEStratExample}\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqPA1StarExample}\nP^{\\star}_{A_1}(a_1) & = & \\frac{u_{2,2}-u_{2,1}}{u_{1,1} - u_{1,2} - u_{2,1}+u_{2,2}} \\in (0,1)\\mbox{ and } \\IEEEeqnarraynumspace\\\\\n\\label{EqPA2StarExample}\nP^{\\star}_{A_2}(a_1) & = &\\frac{u_{2,2}-u_{1,2}}{u_{1,1} - u_{1,2} - u_{2,1}+u_{2,2}}\\in (0,1).\n\\end{IEEEeqnarray} \n\\end{subequations}\nMoreover, the expected payoff at the NE is\n\\begin{IEEEeqnarray}{rcl}\nu(P_{A_1}^{\\star},P_{A_2}^{\\star}) & = & \\frac{u_{1,1}u_{2,2} - u_{1,2}u_{2,1}}{u_{1,1} - u_{1,2} - u_{2,1}+u_{2,2}}.\n\\end{IEEEeqnarray}\nIf the entries of the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfy\n\\begin{subequations}\n\\begin{IEEEeqnarray}{lcl}\n\\left( u_{1,1} - u_{1,2} \\right) \\left( u_{2,2} - u_{2,1} \\right) \\leqslant 0 & \\mbox{ or } & \\\\\n\\left( u_{1,1} - u_{2,1} \\right) \\left( u_{2,2} - u_{1,2} \\right) \\leqslant 0,\n\\end{IEEEeqnarray}\n\\label{EqNotMixedAssumption}\n\\end{subequations}\nthen, there exists either a unique NE or infinitely many NEs. Moreover, all NE strategies lead to the same payoff,\n\\begin{IEEEeqnarray}{rcl}\\label{Equ:f_3}\nu(P_{A_1}^{\\star},P_{A_2}^{\\star}) & = & \\min \\lbrace \\max\\lbrace u_{1,1}, u_{2,1}\\rbrace, \\max\\lbrace u_{1,2}, u_{2,2}\\rbrace \\rbrace \\squeezeequ\\\\\n& = & \\max \\lbrace \\min\\lbrace u_{1,1}, u_{1,2}\\rbrace, \\min\\lbrace u_{2,1}, u_{2,2}\\rbrace \\rbrace. 
\\IEEEeqnarraynumspace \\squeezeequ\n\\end{IEEEeqnarray}\n\\end{lemma}\nA payoff matrix $\\matx{u}$ that satisfies~\\eqref{EqMixedAssumption} represents a ZSG exhibiting a unique NE in strictly mixed strategies. Alternatively, a payoff matrix $\\matx{u}$ that satisfies~\\eqref{EqNotMixedAssumption} represents a ZSG exhibiting \\emph{strategic dominance}, a unique pure NE, or infinitely many NEs. \n\n\n\\section{NOISY OBSERVATIONS OF THE ACTIONS}\\label{SecNOA}\nIn this section, the repeated game is assumed to be played with commitments under the assumptions $(i)$ and $(ii)$ in Section~\\ref{SecPrel}, and a new assumption: $(iv)$~at each repetition, the leader chooses its action and the follower obtains a noisy observation. \nThat is, assumption $(iii)$ is dropped, and the follower chooses a strategy at each repetition, knowing the commitment and a noisy observation of the action played by the leader.\n\\subsection{Game Formulation}\n\nDenote by $A_1$, $A_2$, and $\\tilde{A}_2$ the random variables representing the actions of \\Pone (the follower), \\Ptwo (the leader), and the noisy observation of the action played by \\Ptwo at each repetition of the game, respectively. 
\nLet also $P_{A_1 \\tilde{A}_2 A_2} \\in \\simplex{\\set{A}_1 \\times \\set{A}_2 \\times \\set{A}_2}$ be the probability measure jointly induced by $A_1$, $\\tilde{A}_2$, and $A_2$, which satisfies for all $\\left( a, \\tilde{b},b \\right) \\in \\set{A}_1 \\times \\set{A}_2 \\times \\set{A}_2$, \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqTheJointThing}\nP_{A_1 \\tilde{A}_2 A_2} \\left( a, \\tilde{b}, b \\right) = P_{A_2} \\left( b \\right) P_{\\tilde{A}_2 | A_2 = b} \\left( \\tilde{b} \\right) P_{A_1 | \\tilde{A}_2 = \\tilde{b}} \\left( a \\right), \\IEEEeqnarraynumspace \\supersqueezeequ\n\\end{IEEEeqnarray}\nwhere the probability measure $P_{A_2 } \\in \\simplex{\\set{A}_2}$ is the strategy of \\Ptwo;\nthe pair of probability measures $\\left( P_{A_1| \\tilde{A}_2 = a_1}, P_{A_1| \\tilde{A}_2 = a_2}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1}$ form the strategy of \\Pone;\nand the pair of measures $\\left( P_{\\tilde{A}_2 | A_2 = a_1}, P_{\\tilde{A}_2 | A_2 = a_2}\\right)$ form a binary channel through which the follower observes the action of the leader.\n\nUsing this notation, the development of the repeated game is described as follows. \nBefore the beginning of the repetitions, \\Ptwo publicly and irrevocably announces its strategy $P_{A_2}$. \nAt each repetition, \\Ptwo (the leader) plays the action $b \\in \\set{A}_2$ with probability $P_{A_2}\\left( b \\right)$. \n\\Pone observes $\\tilde{b} \\in \\set{A}_2$ with probability $P_{\\tilde{A}_2 | A_2 = b} \\left( \\tilde{b} \\right)$.\nFinally, \\Pone plays the action $a \\in \\set{A}_1$ with probability $P_{A_1| \\tilde{A}_2 = \\tilde{b}}\\left( a \\right)$, and both players obtain their payoffs.
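The order of play just described can be sketched in code. The following Python toy simulation follows the factorization in~\eqref{EqTheJointThing}; the function and variable names are illustrative, not part of the formal model:

```python
import random

def play_round(p2_a1, w, p1_given_obs, rng):
    """One repetition of the game with commitments and noisy observations.
    p2_a1        : the leader's commitment P_{A2}(a_1).
    w            : channel matrix, w[t][b] = P( observation a_{t+1} | action a_{b+1} );
                   each column sums to one.
    p1_given_obs : p1_given_obs[t] = P_{A1 | observation a_{t+1}}(a_1), i.e. the
                   follower's strategy conditioned on each possible observation.
    Returns 0-based indices (a, b_tilde, b): the follower's action, the noisy
    observation, and the leader's action."""
    b = 0 if rng.random() < p2_a1 else 1                  # leader samples its commitment
    b_tilde = 0 if rng.random() < w[0][b] else 1          # channel corrupts the action
    a = 0 if rng.random() < p1_given_obs[b_tilde] else 1  # follower reacts to the observation
    return a, b_tilde, b
```

With a noiseless channel (identity matrix $\matx{w}$), the observation always equals the leader's action, so the follower can condition directly on the action actually played.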
\n\nThe expected payoff obtained by the players is determined by the function $v: \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2} \\to \\reals$, such that given the strategy $\\left( P_{A_1| \\tilde{A}_2 = a_1}, P_{A_1| \\tilde{A}_2 = a_2}\\right)$ of \\Pone, often denoted by $ P_{A_1| \\tilde{A}_2 }$, and the strategy $P_{A_2}$ of \\Ptwo, the expected payoff $v\\left(P_{A_1| \\tilde{A}_2}, P_{A_2} \\right)$ is \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqTheCostFunction}\nv\\left(P_{A_1| \\tilde{A}_2}, P_{A_2} \\right) & = & \\sum_{(i,j) \\inCountTwo^2} u_{i,j} P_{A_1 A_2} \\left( a_i , a_j\\right), \\IEEEeqnarraynumspace \\squeezeequ\n\\end{IEEEeqnarray}\nwhere the joint probability measure $P_{A_1 A_2}$ is the marginal probability measure on $A_1$ and $A_2$ of the probability measure $P_{A_1 \\tilde{A}_2 A_2}$ in~\\eqref{EqTheJointThing}.\n\nThe generalization of the game in normal form $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame} obtained by including the binary channel formed by the measures $ P_{\\tilde{A}_2 | A_2 =a_{1} }$ and $P_{\\tilde{A}_2 | A_2 =a_{2} }$ is \ndescribed by the tuple \n\\begin{IEEEeqnarray}{c}\n\\label{EqTheGameNoise}\n\\game{G}\\left(\\matx{u}, \\matx{w} \\right) = \\left(\\set{K}, \\set{A}_1, \\set{A}_2 , \\matx{u}, \\matx{w} \\right),\n\\end{IEEEeqnarray} \nwhere the $2\\times2$ matrix $\\matx{w}$ satisfies\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqTheChannel}\n\\matx{w}=\n\\begin{pmatrix}\nP_{\\tilde{A}_2 | A_2 = a_1}(a_1) &P_{\\tilde{A}_2 | A_2 = a_2}(a_1) \\\\\nP_{\\tilde{A}_2 | A_2 = a_1}(a_2) & P_{\\tilde{A}_2 | A_2 = a_2}(a_2)\n\\end{pmatrix}.\n\\end{IEEEeqnarray}\n\n\\subsection{The Set of Best Responses of \\Pone}\nThe set of best responses of \\Pone to a given strategy of \\Ptwo is determined by the correspondence $\\BR_1: \\simplex{\\set{A}_{2}} \\to 2^{\\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} }$. 
That is, given the commitment $P_{A_2}$, it holds that\n\\begin{IEEEeqnarray}{rcl}\\label{Equ:BR_1}\n\\label{EqBR1}\n\\BR_1\\left( P_{A_2} \\right) & = &\\arg\\max_{ (Q_1, Q_2) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1}} v(Q_1, Q_2, P_{A_2}), \\IEEEeqnarraynumspace \\supersqueezeequ\n\\end{IEEEeqnarray}\nwhere the function $v$ is defined in~\\eqref{EqTheCostFunction}.\nIn order to study the set $\\BR_1\\left( P_{A_2} \\right)$ in~\\eqref{EqBR1}, consider the $2\\times2$ matrix\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqMatrixUs}\n\\matx{u}^{(i)} & = & \\matx{u}\n\\begin{pmatrix}\nP_{\\tilde{A}_2 | A_2 = a_1}(a_i) & 0 \\\\\n 0 & P_{\\tilde{A}_2 | A_2 = a_2}(a_i)\n\\end{pmatrix},\n\\end{IEEEeqnarray}\nwith $i \\inCountTwo$, where the matrix $\\matx{u}$ is defined in~\\eqref{EqMatrixU}; and the probability measures $P_{\\tilde{A}_2 | A_2 = a_1}$ and $P_{\\tilde{A}_2 | A_2 = a_2}$ are defined in~\\eqref{EqTheChannel}.\nThe following lemma shows that, given a commitment $P_{A_2}$, the set of best responses $\\BR_1(P_{A_2})$ in~\\eqref{EqBR1} is the Cartesian product of two sets that can be described independently. 
\n\n\\begin{lemma}\\label{CorBR1Separates}\nThe correspondence $\\BR_1$ in~\\eqref{EqBR1} satisfies for all $P \\in \\simplex{\\set{A}_2}$,\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqBR1X}\n\\BR_1\\left( P \\right) &=& \\BR_{1,1}\\left( P \\right) \\times \\BR_{1,2}\\left( P \\right), \n\\end{IEEEeqnarray}\nwhere for all $i \\inCountTwo$, the correspondence $\\BR_{1,i}: \\simplex{\\set{A}_{2}} \\to 2^{\\simplex{\\set{A}_1}}$ is such that \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqBR1i}\n\\BR_{1,i}\\left( P \\right) & = & \\arg\\max_{ Q \\in \\simplex{\\set{A}_1}} \n\\begin{pmatrix}\nQ\\left( a_1 \\right)\\\\\nQ\\left( a_2 \\right)\n\\end{pmatrix}^{\\sfT}\n\\matx{u}^{(i)}\n\\begin{pmatrix}\nP\\left(a_1\\right)\\\\\nP\\left(a_2\\right)\n\\end{pmatrix}, \\IEEEeqnarraynumspace \n\\end{IEEEeqnarray}\nwhere the matrix $\\matx{u}^{(i)}$ is in~\\eqref{EqMatrixUs}.\n\\end{lemma}\n %\nA first observation from Lemma~\\ref{CorBR1Separates} is that in the case in which the matrices $\\matx{u}^{(1)}$ and $\\matx{u}^{(2)}$ in~\\eqref{EqMatrixUs} are identical, \\Pone chooses its actions independently of the noisy observation of the action played by \\Ptwo. In such a case, $\\BR_{1,1}\\left( P_{A_2} \\right) = \\BR_{1,2}\\left( P_{A_2} \\right)$, and thus, the best response of \\Pone depends exclusively on its opponent's strategy~$P_{A_2}$.\n\nFor all $i \\inCountTwo$ and for all $P \\in \\simplex{\\set{A}_2}$, the cardinality of the set $\\BR_{1,i}\\left( P \\right)$ is either one or infinite. When $\\BR_{1,i}\\left( P \\right)$ is a singleton, its only element is a pure strategy. Alternatively, when the cardinality is infinite, the set $\\BR_{1,i}\\left( P \\right)$ is identical to the set of all possible probability measures on $\\set{A}_1$, i.e., $\\BR_{1,i}\\left( P \\right) = \\simplex{\\set{A}_1}$. That is, at each repetition of the game, \\Pone chooses its actions either deterministically, i.e., with probability one, or indifferently.
\nThe following lemma formalizes this observation. \n\n\\begin{lemma}\\label{LemmaBreakingBad1}\nGiven a probability measure $P \\in \\simplex{\\set{A}_2}$, for all $i \\inCountTwo$, the correspondence $\\BR_{1,i}$ in~\\eqref{EqBR1i} satisfies\n\\begin{IEEEeqnarray}{rCl}\\label{Equ:f_1_1}\n&&\\BR_{1,i}(P)=\n\\ \\left\\{ \n \\begin{array}{cl}\n\\hspace{-1.5ex} \\{ Q \\in \\Delta(\\set{A}_1): Q(a_1) = 1 \\}, & \\hspace{-1ex}\\textnormal{if } s_i> 0,\\\\\n\\hspace{-1.5ex} \\{ Q \\in \\Delta(\\set{A}_1): Q(a_1) = 0 \\}, & \\hspace{-1ex}\\textnormal{if } s_i< 0,\\\\\n\\simplex{\\set{A}_1}, & \\hspace{-1ex}\\textnormal{if } \ns_i= 0, \n \\end{array}\n \\right. \\IEEEeqnarraynumspace \\supersqueezeequ\n\\end{IEEEeqnarray}\nwhere $s_i \\in \\reals$ is given by \n\\begin{IEEEeqnarray}{rcl} \\label{Eqsi}\n&&s_i \\triangleq \\left(u_{1,1} - u_{2,1}\\right) P\\left( a_1 \\right) P_{\\tilde{A}_2 | A_2 = a_1}\\left( a_i \\right) \\IEEEnonumber \\\\\n&& \\quad + \\left(u_{1,2}-u_{2,2} \\right)P\\left( a_2 \\right) P_{\\tilde{A}_2 | A_2 = a_2}\\left( a_i \\right). \\label{EqBigOne}\n\\end{IEEEeqnarray}\n\\end{lemma}\n\nFor all $\\left( i, j\\right) \\inCountTwo^2$, at a given game repetition, the expected payoff, when \\Pone plays $a_j$, \\Ptwo has committed to $P_{A_2}$, and the noisy observation is $a_i$, is $u_{j,1} P_{A_2}\\left( a_1 \\right) P_{\\tilde{A}_2 | A_2 = a_1}\\left( a_i \\right) + u_{j,2} P_{A_2}\\left( a_2 \\right) P_{\\tilde{A}_2 | A_2 = a_2}\\left( a_i \\right)$. \nThus, the right-hand side of the equality in~\\eqref{EqBigOne} is the difference between the expected payoff obtained by the players when \\Pone plays $a_1$ and when it plays $a_2$, subject to the observation $a_i$ and the commitment $P_{A_2}$. \nHence, from Lemma~\\ref{LemmaBreakingBad1}, the optimal action to be played at a given repetition by \\Pone is the action that maximizes the expected payoff subject to the noisy observation and the commitment. 
When both actions induce the same expected payoff, \\Pone chooses its actions following any strategy.\n\nThe following lemma presents a different view of the correspondences $\\BR_{1,1}$ and $\\BR_{1,2}$ in~\\eqref{EqBR1i}. It suggests that at each game repetition, \\Pone estimates the action played by \\Ptwo based on the knowledge of the commitment and the noisy observation.\n\n\\begin{lemma}\\label{LemmaBreakingGrounds}\nGiven a probability measure $P \\in \\simplex{\\set{A}_2}$, for all $i \\inCountTwo$, the correspondence $\\BR_{1,i}$ in~\\eqref{EqBR1i} satisfies\n\\begin{IEEEeqnarray}{rcl}\n\\BR_{1,i}\\left( P \\right) & = & \\arg\n\\max_{Q \\in \\simplex{\\set{A}_1}} u\\left( Q , P_{A_2 | \\tilde{A}_2 = a_i} \\right),\n\\end{IEEEeqnarray}\nwhere the function $u$ is defined in~\\eqref{EqNormalFormU}; the probability measure $P_{A_2 | \\tilde{A}_2 = a_i}$ satisfies for all $j \\inCountTwo$,\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqPost}\nP_{A_2 | \\tilde{A}_2 = a_i} \\left( a_j \\right) & = & \\frac{P_{\\tilde{A}_2 | A_2 = a_j} \\left( a_i \\right) P\\left( a_j \\right)}{\\displaystyle\\sum_{\\ell \\inCountTwo} P_{\\tilde{A}_2 | A_2 = a_{\\ell}} \\left( a_i \\right) P\\left( a_{\\ell} \\right)},\n\\end{IEEEeqnarray}\nwith the probability measures $P_{\\tilde{A}_2 | A_2 = a_{1}}$ and $P_{\\tilde{A}_2 | A_2 = a_{2}}$ defined in~\\eqref{EqTheChannel}.\n\\end{lemma}\n\nFor all $(i,j) \\inCountTwo^2$, the posterior probability that \\Ptwo has chosen action $a_j$, given the commitment $P$ and the noisy observation $a_i$, is $P_{A_2 | \\tilde{A}_2 = a_i} \\left( a_j \\right)$ in~\\eqref{EqPost}.
\nHence, from Lemma~\\ref{LemmaBreakingBad1} and Lemma~\\ref{LemmaBreakingGrounds}, at a given repetition of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$, the optimal action of \\Pone to the observation $a_i$ and the commitment $P$ is identical to the optimal action of a player in the game $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame} when its opponent plays the strategy $P_{A_2 | \\tilde{A}_2 = a_i}$ in~\\eqref{EqPost}. \n \n\\subsection{The Best Strategies of \\Ptwo}\nLet the function $\\hat{v}: \\simplex{\\set{A}_2} \\rightarrow \\reals$ be such that given a probability measure $P \\in \\simplex{\\set{A}_2}$,\n\\begin{IEEEeqnarray}{rcl}\n\\label{Eqv}\n\\hat{v}\\left( P \\right) = \\max_{\\left(Q_1, Q_2\\right) \\in \\BR_{1}\\left( P \\right)} v\\left(Q_1, Q_2, P \\right),\n\\end{IEEEeqnarray}\nwhere the function $v$ is defined in~\\eqref{EqTheCostFunction} and the correspondence $\\BR_{1}$ is defined in~\\eqref{EqBR1}.\nThe set of best strategies for \\Ptwo, under the assumption that \\Pone observes the commitment and obtains a noisy observation of the action played by \\Ptwo, is the set of minimizers of the function $\\hat{v}$ in~\\eqref{Eqv}. \nLet $P^{(1)}$ and $P^{(2)}$ be two real numbers such that for all $i \\inCountTwo$,\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqPi}\n\\begin{pmatrix}\n1\\\\\n0\n\\end{pmatrix}^{\\sfT}\n\\matx{u}^{(i)}\n\\begin{pmatrix}\nP^{(i)}\\\\\n1- P^{(i)}\n\\end{pmatrix} \n& = &\n\\begin{pmatrix}\n0\\\\\n1\n\\end{pmatrix}^{\\sfT}\n\\matx{u}^{(i)}\n\\begin{pmatrix}\nP^{(i)}\\\\\n1- P^{(i)}\n\\end{pmatrix}. \\IEEEeqnarraynumspace \n\\end{IEEEeqnarray}\nFrom Lemma~\\ref{LemmaBreakingBad1}, it holds that if $P^{(i)} \\in [0,1]$, for some $i\\inCountTwo$, and \\Ptwo adopts a strategy $P_{A_2} \\in \\simplex{\\set{A}_2}$ such that $P_{A_2}\\left( a_1 \\right) = P^{(i)}$, then $\\BR_{1,i}\\left( P_{A_2} \\right) = \\simplex{\\set{A}_1}$. 
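The quantities in Lemma~\ref{LemmaBreakingBad1} are straightforward to evaluate numerically. The following Python sketch (the matching-pennies payoffs and the binary symmetric channel are illustrative choices, not taken from the text) computes $s_i$ in~\eqref{EqBigOne} and solves the linear indifference equation in~\eqref{EqPi} for $P^{(i)}$, i.e., the commitment at which $s_i$ vanishes:

```python
# Illustrative data: matching-pennies payoffs and a binary symmetric channel
# with crossover probability 0.2 (both are example choices).
U = [[1.0, -1.0], [-1.0, 1.0]]
W = [[0.8, 0.2], [0.2, 0.8]]  # W[t][b] = P( observation a_{t+1} | action a_{b+1} )

def s(i, p_a1, u=U, w=W):
    """s_i in (EqBigOne): the gap in expected payoff between the follower
    playing a_1 and a_2, given the commitment P(a_1) = p_a1 and the noisy
    observation a_i, with i in {1, 2}."""
    w1, w2 = w[i - 1][0], w[i - 1][1]
    return (u[0][0] - u[1][0]) * p_a1 * w1 + (u[0][1] - u[1][1]) * (1.0 - p_a1) * w2

def indifference_point(i, u=U, w=W):
    """P^{(i)} in (EqPi): the commitment that makes the follower indifferent
    after observing a_i, obtained as the root of s_i; None if the linear
    equation has no solution."""
    w1, w2 = w[i - 1][0], w[i - 1][1]
    den = (u[0][0] - u[1][0]) * w1 + (u[1][1] - u[0][1]) * w2
    return None if den == 0 else (u[1][1] - u[0][1]) * w2 / den
```

For this channel, $P^{(1)} = 0.2$ and $P^{(2)} = 0.8$, and $s_i$ indeed vanishes at $P^{(i)}$, consistent with Lemma~\ref{LemmaBreakingBad1}: committing to $P_{A_2}(a_1) = P^{(i)}$ renders the follower indifferent after observing $a_i$.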
\n\nLet the function $\\hat{u}: \\simplex{\\set{A}_2} \\to \\reals$ be such that for all $P \\in \\simplex{\\set{A}_2}$, \n\\begin{IEEEeqnarray}{rcl}\n\\label{EqHatu}\n\\hat{u}\\left( P \\right) & = & \\max_{Q \\in \\simplex{\\set{A}_1}} u\\left(Q, P\\right),\n\\end{IEEEeqnarray}\nwhere the function $u$ is defined in~\\eqref{EqNormalFormU}. \n\nThe set of best strategies of \\Ptwo, under the assumption that \\Pone observes the commitment but not the actual action played by the leader, is formed by the probability measures in $\\simplex{\\set{A}_2}$ that minimize the function $\\hat{u}$ in~\\eqref{EqHatu}. \nUnder this assumption, if the probability measure $P \\in \\simplex{\\set{A}_2}$ is one of the best strategies of \\Ptwo, then it is also a strategy of \\Ptwo in at least one NE. Moreover, $\\hat{u}\\left( P \\right)$ is the payoff at the NE.\n\nFigure~\\ref{FigBellesFigures} depicts the functions $\\hat{v}$ in~\\eqref{Eqv} and $\\hat{u}$ in~\\eqref{EqHatu}. Note that in all cases, the function $\\hat{v}$ is lower bounded by the function $\\hat{u}$. This implies that, when the follower is granted an observation of the action played by the leader, even subject to noise, the payoff does not decrease and, in some cases, might significantly increase, as shown by the following lemma. \n\\begin{lemma}\\label{LemmaLowerBounds}\nLet the probability measures $P^{\\star}_{A_1} \\in \\simplex{\\set{A}_1}$ and $P^{\\star}_{A_2} \\in \\simplex{\\set{A}_2}$ form one of the NEs of the game $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame}.
\nFor all $P \\in \\simplex{\\set{A}_2}$, it holds that\n\\begin{IEEEeqnarray}{rcl}\\label{EqLBTricky}\n u(P_{A_1}^{\\star},P_{A_2}^{\\star})\n & \\leqslant & \\hat{u}(P)\\\\\n\\label{EqLBTrickyA}\n & \\leqslant & \\hat{v}\\left( P \\right)\\\\\n\\label{EqLBTrickyB}\n & \\leqslant & \\sum_{k\\inCountTwo}P(a_k) \\left( \\max_{i\\inCountTwo} u_{i,k} \\right), \\IEEEeqnarraynumspace \\supersqueezeequ\n\\end{IEEEeqnarray}\nwhere the functions $u$, $\\hat{v}$, and $\\hat{u}$ are defined in~\\eqref{EqNormalFormU},~\\eqref{Eqv}, and~\\eqref{EqHatu}, respectively.\n\\end{lemma}\nThe inequality in~\\eqref{EqLBTricky} follows from the definition of the function $\\hat{u}$ in~\\eqref{EqHatu}. The inequality in~\\eqref{EqLBTrickyA} shows that for all the strategies that \\Ptwo might adopt, the payoff with noisy observations is larger than or equal to the payoff without observations. \nHence, even subject to noise, granting the follower an observation of the action played by the leader is either beneficial or immaterial for the follower. \nThe inequality in~\\eqref{EqLBTrickyB} holds with equality when \\Pone is always able to best respond to the actual action played by the leader.
\nThis is for instance the case when the observation of the action played by the leader is noiseless.\n\n \n\n\\subsection{Equilibria}\n\nThe solution concept for the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} is defined hereunder.\n\\begin{definition}[Equilibrium]\\label{DefEquilibrium}\nThe tuple $\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger}, P_{A_2}^{\\dagger}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ is said to form an equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} if\n\\begin{IEEEeqnarray}{l}\nP_{A_2}^{\\dagger} \\in \\arg \\min_{P \\in \\simplex{\\set{A}_2}}\\hat{v}\\left( P \\right) \\supersqueezeequ \\IEEEeqnarraynumspace \\mbox{ and }\\\\\n\\label{EqBigMax}\n\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger} \\right) \\in \\BR_1\\left( P_{A_2}^{\\dagger} \\right),\n\\end{IEEEeqnarray}\nwhere the function $\\hat{v}$ is in~\\eqref{Eqv} and the correspondence $ \\BR_1$ is in~\\eqref{EqBR1X}.\n\\end{definition}\n\nThe following theorem ensures the existence of an equilibrium for the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}.\n\n\\begin{theorem}[Existence]\\label{TheoExistance}\nThe game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} always possesses an equilibrium.\n\\end{theorem}\n\nWhen the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} possesses several equilibria, the payoff is identical at all equilibria, as shown by the following theorem.\n\n\n\\begin{theorem}[Equilibrium Payoff]\\label{TheoEquilibrium}\nLet the tuple $\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger}, P_{A_2}^{\\dagger}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ form an equilibrium of 
the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}. \nIf the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption}, then\n\\begin{IEEEeqnarray}{rcl}\\label{Equ:f_1}\n\\hat{v}\\left( P_{A_2}^ \\dagger\\right) & = & \\min\\lbrace \\hat{v}\\left( P_{1}\\right),\\hat{v}\\left( P_{2}\\right) \\rbrace,\n\\end{IEEEeqnarray}\nwhere, the function $\\hat{v}$ is defined in~\\eqref{Eqv}, and for all $i \\inCountTwo$, the probability measure $P_i \\in \\simplex{\\set{A}_2}$ is such that $P_{i}\\left( a_1 \\right) = P^{(i)}$, with $P^{(i)}$ in~\\eqref{EqPi}.\nAlternatively, if the entries of the matrix $\\matx{u}$ satisfy~\\eqref{EqNotMixedAssumption}, then\n\\begin{IEEEeqnarray}{rcl}\n\\hat{v}\\left( P_{A_2}^ \\dagger\\right) & = & \\min \\left\\{ \\max \\left\\{ u_{1,1}, u_{2,1} \\right\\}, \\max \\left\\{ u_{1,2}, u_{2,2} \\right\\}\\right\\}. \\squeezeequ \\IEEEeqnarraynumspace\n\\end{IEEEeqnarray}\n\\end{theorem}\n\nWhen the payoff matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption}, at the equilibrium, \\Ptwo commits to a strategy that renders \\Pone indifferent to play any of its actions for at least one of the noisy observations. That is, for some $j \\inCountTwo$, when at a given game repetition, \\Pone obtains $a_j$ as the noisy observation, it holds that $\\BR_{1,j}\\left( P_{A_2}^{\\dagger} \\right) = \\simplex{\\set{A}_1}$.\nThe following lemma sheds more light into this particularity.\n\n\n\\begin{lemma}\\label{LemmaEquilibriumEquality}\nLet $\\set{S} \\subseteq \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ be the set of equilibria of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}. 
\nLet the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfy~\\eqref{EqMixedAssumption}, and let $P^{\\star}_{A_{1}} \\in \\simplex{\\set{A}_1}$ and $P^{\\star}_{A_{2}} \\in \\simplex{\\set{A}_2}$ form a NE in the game $\\game{G}(\\matx{u})$ in~\\eqref{EqTheGame}.\nThen, there exists a tuple $ \\left(Q_1, Q_2, P \\right) \\in \\set{S}$ such that $P(a_1) \\in \\left\\{ P^{(1)}, P^{(2)}\\right\\}$. \nFurthermore, if $P(a_1) = P^{(i)}$, with $i \\in \\{1,2 \\}$, then it holds that $P_{A_2| \\tilde{A}_2 = a_{i}} \\left( a_{1} \\right) = P^{\\star}_{A_{2}} \\left( a_{1} \\right)$, \nwhere $P_{A_2|\\tilde{A}_2 = a_i}$ is in~\\eqref{EqPost}.\n\\end{lemma}\n\nLemma~\\ref{LemmaBreakingGrounds} and Lemma~\\ref{LemmaEquilibriumEquality} lead to a deeper conclusion. When the payoff matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption}, at the equilibrium, \\Ptwo (the leader) commits to a strategy such that for at least one $i \\inCountTwo$, the posterior $P_{A_2 | \\tilde{A}_2 = a_i}$ in~\\eqref{EqPost} is equal to the strategy of \\Ptwo at the (unique) NE. \n\nFinally, when the payoff matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqNotMixedAssumption}, the payoff at the equilibrium is achieved by committing to a pure strategy. 
\nMore specifically, the payoff at the equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w}\\right)$ in~\\eqref{EqTheGameNoise} is the same as the payoff at the SE in pure strategies of the game $\\game{G}\\left(\\matx{u}\\right)$ in~\\eqref{EqTheGame}.\n\n\\subsection{Relevance of Noisy Observations}\n\nThe following lemma presents necessary and sufficient conditions under which the follower cannot benefit from the noisy observations.\n\n\\begin{lemma}\\label{LemmaEquilibriumEquality1}\nLet the tuple $\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger}, P_{A_2}^{\\dagger}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ form an equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}. Let also the tuple $\\left(P^{\\star}_{A_1}, P^{\\star}_{A_2}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ form one of the NEs of the game $\\gameNF{\\matx{u}}$ in~\\eqref{EqTheGame}. Then, \n\\begin{IEEEeqnarray}{rcl}\\label{Equ:f_5}\nv\\left( P_{A_1| \\tilde{A}_2}^ \\dagger, P_{A_2}^ \\dagger\\right) & = &u(P_{A_1}^{\\star},P_{A_2}^{\\star}) , \n\\end{IEEEeqnarray}\nif and only if $(a)$ the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqNotMixedAssumption}; or $(b)$ the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption} and the channel $\\matx{w}$ in~\\eqref{EqTheChannel} satisfies $\\det{\\matx{w}} =0$.\n\\end{lemma}\nLemma~\\ref{LemmaEquilibriumEquality1} establishes that granting the follower noisy observations of the action played by the leader does not make any difference in two particular scenarios.
\nFirst, in ZSGs exhibiting strategic dominance, a unique pure NE, or infinitely many NEs (condition $(a)$).\nSecond, in ZSGs with a unique NE in strictly mixed strategies, when the observation of the leader's action given to the follower is independent of the action actually played (condition $(b)$). Note that a channel that satisfies $\\det{\\matx{w}} =0$ is a channel whose mutual information between the channel input and channel output is zero.\n\nLemma~\\ref{LemmaLowerBounds} and Lemma~\\ref{LemmaEquilibriumEquality1} imply that granting the follower relevant noisy observations of the action played by the leader makes a difference exclusively for ZSGs with a unique NE in mixed strategies. In this case, relevant noisy observations refer to observations obtained through channels with positive mutual information between the channel input and the channel output ($\\det{\\matx{w}}\\neq 0$). \n\nThe following lemma describes the special case of channels with maximum mutual information between the channel input and the channel output. That is, channels whose output is deterministic given the channel input. These channels satisfy the condition $\\abs{\\det\\matx{w}} = 1$.\n\n\\begin{lemma}\\label{LemmaPureEquilibriumEquality}\nLet $\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger}, P_{A_2}^{\\dagger}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ form an equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}. If $\\abs{\\det\\matx{w}} = 1$, then\n\\begin{IEEEeqnarray}{rcl}\\label{Equ:PureEquilibriumEquality}\n\\hat{v}\\left( P_{A_2}^ \\dagger\\right) & = & \\min \\lbrace \\max\\lbrace u_{1,1}, u_{2,1}\\rbrace, \\max\\lbrace u_{1,2}, u_{2,2}\\rbrace \\rbrace.
\\IEEEeqnarraynumspace\n\\end{IEEEeqnarray}\n\\end{lemma}\n %\nLemma~\\ref{LemmaPureEquilibriumEquality} strengthens the observation that under perfect observations at each repetition, the strategy to which the leader commits becomes irrelevant. \n\n\\section{COMMITMENT MISMATCH}\\label{SecCM} \nIn this section, the commitment observed by the follower in the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} is assumed to be different from the actual commitment of the leader. This scenario is referred to as \\emph{commitment mismatch}.\n\n\\subsection{Game Formulation}\nLet $\\matx{t}$ be a given $2\\times2$ nonsingular stochastic matrix. \nLet $P_{A_2} \\in \\simplex{\\set{A}_2}$ be the commitment announced by \\Ptwo before the beginning of the game repetitions. The commitment observed by \\Pone is denoted by $\\tilde{P}_{A_2} \\in \\simplex{\\set{A}_2}$ and satisfies\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqPtilde}\n\\begin{pmatrix}\n\\tilde{P}_{A_2} (a_1)\\\\\n\\tilde{P}_{A_2} (a_2)\n\\end{pmatrix}\n= \n\\matx{t} \n\\begin{pmatrix}\nP_{A_2} (a_1)\\\\\nP_{A_2} (a_2)\n\\end{pmatrix}.\n\\end{IEEEeqnarray}\nThat is, the commitment observed by the follower is a deterministic distortion of the commitment announced by the leader. Note that the leader does not attempt to learn the commitment observed by the follower, nor does the follower attempt to learn the commitment announced by the leader, as in~\\cite{muthukumar_2019_robust}.
Here, the follower assumes that $\\tilde{P}_{A_2}$ is the commitment actually announced by the leader, and the leader is aware of this.\nThe extension of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise} obtained by including the binary channel represented by the stochastic matrix $\\matx{t}$ is described by the tuple \n\\begin{IEEEeqnarray}{c}\n\\label{EqTheGameNoiseTilde}\n\\game{G}\\left(\\matx{u}, \\matx{w}, \\matx{t} \\right) = \\left(\\set{K}, \\set{A}_1, \\set{A}_2 , \\matx{u}, \\matx{w}, \\matx{t} \\right).\n\\end{IEEEeqnarray} \n\n\\subsection{The Best Strategies of \\Ptwo}\nLet the correspondence $\\tilde{v}: \\simplex{\\set{A}_2} \\rightarrow \\reals$ be such that given the probability measure $P_{A_2} \\in \\simplex{\\set{A}_2}$,\n\\begin{IEEEeqnarray}{rcl}\n\\label{Eqvtilde}\n\\tilde{v}\\left( P_{A_2} \\right) = \\max_{\\left(Q_1, Q_2\\right) \\in \\BR_{1}\\left( \\tilde{P}_{A_2} \\right)} v\\left(Q_1, Q_2, P_{A_2} \\right),\n\\end{IEEEeqnarray}\nwhere the function $v$ is defined in~\\eqref{EqTheCostFunction}, the correspondence $\\BR_{1}$ is defined in~\\eqref{EqBR1}, and the probability measure $\\tilde{P}_{A_2}$ is in~\\eqref{EqPtilde}.\nThe correspondence $\\tilde{v}$ in~\\eqref{Eqvtilde} determines the payoff achieved by the players. \nThus, the set of best strategies for \\Ptwo, under the assumption that \\Pone observes a distorted commitment, is the set of probability measures that minimize $\\tilde{v}$ in~\\eqref{Eqvtilde}, if such a minimum exists. \nMore specifically, the cardinality of the set $\\BR_{1}\\left( \\tilde{P}_{A_2} \\right)$ in~\\eqref{Eqvtilde} is either one or infinite (Lemma~\\ref{LemmaBreakingBad1}). Hence, there might exist two tuples $\\left(Q_1, Q_2\\right) \\in \\BR_{1}\\left( \\tilde{P}_{A_2} \\right)$ and $\\left(Q_3, Q_4\\right) \\in \\BR_{1}\\left( \\tilde{P}_{A_2} \\right)$, for which $v\\left(Q_1, Q_2, P_{A_2} \\right) \\neq v\\left(Q_3, Q_4, P_{A_2} \\right)$. 
In this case, $\\tilde{v}\\left( P_{A_2} \\right)$ corresponds to a subset of $\\reals$ in which each element is induced by an element of the set $\\BR_{1}\\left( \\tilde{P}_{A_2} \\right)$.\nFrom this perspective, the minimization of $\\tilde{v}$ might not be a well-posed optimization problem.\n\nFor all $i \\inCountTwo$, let $\\tilde{P}^{(i)} \\in \\reals$ be such that\n\\begin{IEEEeqnarray}{rcl}\n\\label{EqPiTilde}\n\\begin{pmatrix}\n\\tilde{P}^{(i)}\\\\\n1-\\tilde{P}^{(i)}\n\\end{pmatrix}\n=\n\\matx{t}^{-1}\n\\begin{pmatrix}\nP^{(i)}\\\\\n1-P^{(i)}\n\\end{pmatrix},\n\\end{IEEEeqnarray}\nwith $P^{(i)}$ in~\\eqref{EqPi}. \nUsing this notation, the following lemma provides an explicit expression for the correspondence $\\tilde{v}$ in a special case.\n\\begin{lemma}\\label{LemmaPayoffVP2MIsmatch}\nAssume that the matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption} and $ u_{1,1} - u_{1,2} -u_{2,1} + u_{2,2} >0$. \nAssume also that $\\det{\\matx{w}}>0$ and $\\det{\\matx{t}}>0$, with $\\matx{w}$ in~\\eqref{EqTheChannel} and $\\matx{t}$ in~\\eqref{EqTheGameNoiseTilde}.
\nFor all $P \\in \\simplex{\\set{A}_2}$, it holds that:\nIf $P(a_1) > \\tilde{P}^{(2)}$, with $\\tilde{P}^{(2)}$ in~\\eqref{EqPiTilde}, then it follows that \n\\begin{IEEEeqnarray}{rcccl}\\label{EqVP2Mismatch_1}\n\\tilde{v}\\left( P \\right) & = & u_{1,1} P(a_1) + u_{1,2} P(a_2).\n \\end{IEEEeqnarray}\nIf $P(a_1) = \\tilde{P}^{(2)}$, then it follows that \n\\begin{IEEEeqnarray}{l}\\label{EqVP2Mismatch_2}\n\\nonumber\n\\tilde{v}\\left( P \\right)\n = \\bigg\\lbrace \\left(u_{1,1} P(a_1)+ u_{1,2}P(a_2)\\right) \\beta \\\\\n + \\left( \\begin{pmatrix}\n1 \\\\\n1\n \\end{pmatrix}^\\sfT \n \\left(\\matx{u} \\circ \\matx{w} \\right) \n \\begin{pmatrix}\n P(a_1) \\\\\n P(a_2)\n \\end{pmatrix}\n \\right) \\left( 1 -\\beta \\right) : \\beta \\in [0,1] \\bigg\\rbrace.\\supersqueezeequ \\IEEEeqnarraynumspace \n \\end{IEEEeqnarray} \nIf $\\tilde{P}^{(1)} < P(a_1) < \\tilde{P}^{(2)} $, then it follows that \n\n \n \\begin{IEEEeqnarray}{l}\n\\tilde{v}\\left( P \\right) \\supersqueezeequ \n = \\left( \\begin{pmatrix}\n1 \\\\\n1\n \\end{pmatrix}^\\sfT \n \\left(\\matx{u} \\circ \\matx{w} \\right) \n \\begin{pmatrix}\n P(a_1) \\\\\n P(a_2)\n \\end{pmatrix}\n \\right) . 
\\label{EqVP2Mismatch_3} \\supersqueezeequ \\IEEEeqnarraynumspace\n\\end{IEEEeqnarray}\n\nIf $P(a_1) = \\tilde{P}^{(1)}$, with $\\tilde{P}^{(1)}$ in~\\eqref{EqPiTilde}, then it follows that \n\\begin{IEEEeqnarray}{l}\\label{EqVP2Mismatch_4}\n\\nonumber\n\\tilde{v}\\left( P \\right)\n = \\bigg\\lbrace \\left(u_{2,1} P(a_1)+ u_{2,2}P(a_2)\\right) \\left(1-\\beta\\right) \\\\\n + \\left( \\begin{pmatrix}\n1 \\\\\n1\n \\end{pmatrix}^\\sfT \n \\left(\\matx{u} \\circ \\matx{w} \\right) \n \\begin{pmatrix}\n P(a_1) \\\\\n P(a_2)\n \\end{pmatrix}\n \\right) \\beta : \\beta \\in [0,1] \\bigg\\rbrace.\\supersqueezeequ \\IEEEeqnarraynumspace \n \\end{IEEEeqnarray} \nIf $P(a_1) < \\tilde{P}^{(1)}$, then it follows that \n\\begin{IEEEeqnarray}{rcccl}\n\\tilde{v}\\left( P \\right) & = & u_{2,1} P(a_1) + u_{2,2} P(a_2).\\label{EqVP2Mismatch_5}\n\\end{IEEEeqnarray}\n\\end{lemma}\nFigure~\\ref{FigBellesFigures} depicts the correspondence $\\tilde{v}$ in~\\eqref{Eqvtilde}. \nNote that for all $i \\inCountTwo$, if the commitment of the leader $P_{A_2}$ satisfies \n$P_{A_2}(a_1) = \\tilde{P}^{(i)}$, it holds that $\\tilde{v}\\left(P_{A_2} \\right)$ is a closed interval. \nFigure~\\ref{FigBellesFigures} also depicts the existence of some commitments of \\Ptwo for which the expected payoff is smaller than the expected payoff achieved when no distortion of the commitment is considered. 
\nThe following lemma formalizes this observation and shows that even a deterministic distortion of the commitment can benefit the leader in particular cases.\n\\begin{lemma}\\label{LemmaMixMismatch}\nConsider the following assumptions: \n$(a)$~The matrix $\\matx{u}$ in~\\eqref{EqMatrixU} satisfies~\\eqref{EqMixedAssumption}; \n$(b)$~For all $i \\inCountTwo$, the probability measures $Q_i \\in \\simplex{\\set{A}_2}$ such that $Q_{i}\\left( a_1 \\right) = P^{(i)}$, with $P^{(i)}$ in~\\eqref{EqPi}, satisfy $\\hat{u}\\left( Q_{1}\\right) \\neq \\hat{u}\\left( Q_{2}\\right)$, \nwhere the function $\\hat{u}$ is in~\\eqref{EqHatu}.\nIf $ \\det{\\matx{t}} \\notin \\lbrace 0, 1 \\rbrace$, then there exists a strategy $ P \\in \\simplex{\\set{A}_2}$ such that $\\tilde{v}(P) < \\hat{v}(P_{A_2}^\\dagger)$, \nwhere the tuple $\\left( P_{A_1| \\tilde{A}_2 = a_1}^{\\dagger}, P_{A_1| \\tilde{A}_2 = a_2}^{\\dagger}, P_{A_2}^{\\dagger}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ is an equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w} \\right)$ in~\\eqref{EqTheGameNoise}. 
\n\\end{lemma}\n\n\\subsection{Equilibria}\n\nThe solution concept for the game $\\game{G}\\left(\\matx{u}, \\matx{w}, \\matx{t} \\right)$ in~\\eqref{EqTheGameNoiseTilde} is the following.\n\n\\begin{definition}[Equilibrium]\\label{DefEquilibriumTilde}\nThe tuple $\\left( P_{A_1| \\tilde{A}_2 = a_1}, P_{A_1| \\tilde{A}_2 = a_2}, P_{A_2}\\right) \\in \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_1} \\times \\simplex{\\set{A}_2}$ is said to form an equilibrium of the game $\\game{G}\\left(\\matx{u}, \\matx{w}, \\matx{t} \\right)$ in~\\eqref{EqTheGameNoiseTilde} if\n\\begin{IEEEeqnarray}{l}\nP_{A_2} \\in \\arg \\min_{P \\in \\simplex{\\set{A}_2}}\\tilde{v}\\left( P \\right) \\supersqueezeequ \\IEEEeqnarraynumspace \\mbox{ and }\\\\\n\\label{EqBigMaxTintin}\n\\left( P_{A_1| \\tilde{A}_2 = a_1}, P_{A_1| \\tilde{A}_2 = a_2} \\right) \\in \\BR_1\\left( \\tilde{P}_{A_2} \\right),\n\\end{IEEEeqnarray}\nwhere the correspondence $\\tilde{v}$ is in~\\eqref{Eqvtilde}, the correspondence $ \\BR_1$ is in~\\eqref{EqBR1X}, and the probability measures $P_{A_2}$ and $\\tilde{P}_{A_2}$ satisfy~\\eqref{EqPtilde}.\n\\end{definition}\nThe game $\\game{G}\\left(\\matx{u}, \\matx{w}, \\matx{t} \\right)$ in~\\eqref{EqTheGameNoiseTilde} does not necessarily possess an equilibrium. This is due to the fact that the minimum of $\\tilde{v}$ in~\\eqref{Eqvtilde} does not always exist, as shown hereunder. \n\nFor all $i \\inCountTwo$, let the measure $\\tilde{P}_{i} \\in \\simplex{\\set{A}_2}$ be such that $\\tilde{P}_{i}(a_1) = \\tilde{P}^{(i)}$. 
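As a numerical illustration of how $\\tilde{P}^{(i)}$ in~\\eqref{EqPiTilde} is obtained from $P^{(i)}$, the following sketch uses the matrix $\\matx{t}$ from Figure~\\ref{FigBellesFigures}; the value of $P^{(i)}$ below is hypothetical, since the actual values are given by~\\eqref{EqPi}:

```python
import numpy as np

# Distortion matrix t from the example of Figure FigBellesFigures.
t = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Hypothetical commitment probability P^(i); the actual values are in Eq. (EqPi).
P_i = 0.7

# Eq. (EqPiTilde): (P~^(i), 1 - P~^(i))^T = t^{-1} (P^(i), 1 - P^(i))^T.
p_tilde = np.linalg.inv(t) @ np.array([P_i, 1.0 - P_i])
P_tilde_i = p_tilde[0]
print(P_tilde_i)  # approximately 0.75 for this choice of t and P_i
```

Since both the rows and the columns of this $\\matx{t}$ sum to one, $\\matx{t}^{-1}$ maps a probability vector to a vector whose entries still sum to one, so $\\tilde{P}_{i}$ is a valid probability measure whenever $\\tilde{P}^{(i)} \\in [0,1]$.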
\nLet the function $\\omega: \\simplex{\\set{A}_2} \\to \\reals$ be defined for all $P \\in \\simplex{\\set{A}_2}$ as follows:\nIf $P(a_1) > \\tilde{P}^{(2)}$, with $\\tilde{P}^{(2)}$ in~\\eqref{EqPiTilde}, then\n$\\omega\\left( P \\right) = u_{1,1} P(a_1) + u_{1,2} P(a_2)$.\nIf $\\tilde{P}^{(1)} \\leqslant P(a_1) \\leqslant \\tilde{P}^{(2)} $, then\n$\\omega\\left( P \\right) = \\left( u_{1,1} P_{\\tilde{A}_2 | A_2 = a_1}(a_1) + u_{2,1} P_{\\tilde{A}_2 | A_2 = a_1}(a_2)\\right)P(a_1) + \\left( u_{1,2}P_{\\tilde{A}_2 | A_2 = a_2}(a_1) + u_{2,2}P_{\\tilde{A}_2 | A_2 = a_2}(a_2) \\right) P(a_2)$. \nIf $P(a_1) < \\tilde{P}^{(1)}$, then \n$\\omega\\left( P \\right) = u_{2,1} P(a_1) + u_{2,2} P(a_2)$. \n\nIn the example of Figure~\\ref{FigBellesFigures}, for all $P \\in \\simplex{\\set{A}_2}\\setminus\\lbrace \\tilde{P}_{1}, \\tilde{P}_{2} \\rbrace$, it holds that $\\tilde{v}(P) = \\omega(P)$, \nand the function $\\omega$ is discontinuous at $\\tilde{P}_{1}$ and $\\tilde{P}_{2}$, with a minimum at $\\tilde{P}_{2}$ (magenta triangle). \nNote that $\\tilde{v}(\\tilde{P}_1)$ and $\\tilde{v}(\\tilde{P}_2)$ are intervals, and thus, the minimization of $\\tilde{v}$ does not have a solution. \nEssentially, if \\Ptwo (the leader) commits to play the probability measure $\\tilde{P}_{A_2}$ that minimizes $\\omega$, i.e., $P_{A_2} = \\tilde{P}_{2}$, then the set of the best responses of \\Pone when the noisy observation is $a_{2}$ is $\\BR_{1,2}(\\tilde{P}_{A_2}) =\\simplex{\\set{A}_1}$, where the probability measures $P_{A_2}$ and $\\tilde{P}_{A_2}$ satisfy~\\eqref{EqPtilde}. \nThat is, for each game repetition in which the output of the channel is $a_{2}$, \\Pone might choose its action by sampling any probability measure in $\\simplex{\\set{A}_1}$ and achieve an expected payoff in the interval $\\tilde{v}(\\tilde{P}_{2})$. If such a payoff is larger than $\\omega(P_{A_2})$, then \\Ptwo can deviate and obtain a payoff arbitrarily close to $\\omega(P_{A_2})$. 
This shows the non-existence of an equilibrium in the example of Figure~\\ref{FigBellesFigures}.\n\nNote that an equilibrium exists for the cases in which the strategy of \\Pone is independent of the commitment, e.g., ZSGs with strategic dominance.\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\linewidth]{Figures\/1.pdf}\n\\caption{Plots of the functions $\\hat{v}$ in~\\eqref{Eqv} and $\\hat{u}$ in~\\eqref{EqHatu}; and the correspondence $\\tilde{v}$ in~\\eqref{Eqvtilde} as a function of the probability $P_{A_2}(a_1)$, with parameters $\\matx{u} =\\left( -8, 6 ; 2, -2\\right)$, $\\matx{w} = \\left( 0.8, 0.2; 0.2, 0.8\\right)$ and $\\matx{t} = \\left( 0.9, 0.1; 0.1, 0.9\\right)$.\nThe tuple $(P_{A_1}^{\\star},P_{A_2}^{\\star})$ is the unique NE in~\\eqref{EqNEStratExample} and for all $i \\inCountTwo$, $P_i\\left(a_1 \\right)= P^{(i)}$, with $P^{(i)}$ in~\\eqref{EqPi} and $\\tilde{P}_i\\left(a_1 \\right)= \\tilde{P}^{(i)}$, with $\\tilde{P}^{(i)}$ in~\\eqref{EqPiTilde}.\n}\n\\label{FigBellesFigures}\n\\end{figure}\n\\subsection{Equilibrium Refinements}\n\nIn the example in Figure~\\ref{FigBellesFigures}, the function $\\omega$, which can be minimized, is obtained from the correspondence $\\tilde{v}$ in~\\eqref{Eqvtilde} by replacing the closed intervals $\\tilde{v}(\\tilde{P}_1)$ and $\\tilde{v}(\\tilde{P}_2)$ by the real numbers $\\min\\tilde{v}(\\tilde{P}_1)$ and $\\min\\tilde{v}(\\tilde{P}_2)$, respectively. 
Hence, the correspondence $\\tilde{v}$ and the function $\\omega$ are identical if \\Pone is forced to choose the strategy that minimizes the expected payoff every time it observes that \\Ptwo commits to either of the strategies $\\tilde{P}_1$ or $\\tilde{P}_2$.\nUnder this assumption, the game $\\game{G}\\left(\\matx{u}, \\matx{w}, \\matx{t} \\right)$ in Figure~\\ref{FigBellesFigures} possesses an equilibrium in which \\Ptwo commits to play the strategy $P_{A_2}$ that satisfies $P_{A_2} = \\tilde{P}_{2}$.\nThis observation is reminiscent of the equilibrium refinements proposed in~\\cite{Leitmann-JOTA-1978} for the SE of bi-matrix games, i.e., the \\emph{strong}-SE. \n\nAnother refinement of the solution concept in Definition~\\ref{DefEquilibriumTilde} can be obtained when the leader commits to a strategy $P_{A_2}$ that satisfies $\\tilde{v}(P_{A_2}) = \\omega(\\tilde{P}_{2}) + \\epsilon$, with $\\epsilon > 0$ arbitrarily small. This refinement is reminiscent of the solution concept known as $\\epsilon$-equilibrium for the NE~\\cite{Fudenberg-Tirole-Book}. In this case, the leader accepts committing to a suboptimal strategy in order to force a unique (and predictable) best response from its opponent. \n \n\\section{FINAL REMARKS AND DISCUSSION}\\label{SecDiscussion}\n\nThe analysis of ZSGs with commitments, in which the follower is granted a noisy observation of the action and the commitment of the leader, has been carried out following a Bayesian approach. This approach relies on the capability of the follower to construct posterior probability measures on the actions of the leader based on the available information. The construction of posteriors is more general than the notion of incomplete information in the extensive form of ZSGs, which is limited to modelling the inability of players to distinguish between elements of the \\emph{information sets} (maximum entropy posteriors) \\cite{von_1947_theory}. 
\nNote also that noisy observations cannot be modelled using Bayesian games as introduced in \\cite{harsanyi_1967_games1, harsanyi_1968_games2}.\nThis new game formulation is shown to always possess an equilibrium under the assumption that the commitment is observed perfectly. When this assumption is dropped, the game is shown to have an equilibrium only under strict conditions. Although commitment mismatch can significantly benefit the leader, such benefits are not achievable at a stable point. To benefit from commitment mismatch, the leader must commit to a suboptimal strategy in order to be able to unequivocally predict the best response of its opponent. \n\nThis work relies on the assumption that actions are observed through discrete channels for which the sets of channel inputs and channel outputs are finite and identical. Nonetheless, these channels fail to model many typical data processing impairments, which calls for more elaborate channel models. For instance, the effect of erasures is modelled by the \\emph{erasure channel} \\cite{Elias-1956}, and the effect of additive white Gaussian noise (AWGN) is modelled by the AWGN channel \\cite{Shannon-1948, Shannon-1948_2}. Nonetheless, the extension of this work to these and other channel models is not trivial. \nFinally, it is important to highlight that the conclusions of this work hold under the assumption that both players are aware of the existence of a channel through which actions are observed. Moreover, such a channel is assumed to be known by both players. \n\n\\balance\n \\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Generally speaking, they fall into two categories.\n\\begin{enumerate}\n \\item One solution is to treat relocalization as a matching task. This solution assumes the availability of a database or a map that stores prior information (e.g., 3D point clouds, images, or descriptors) of sample points. Given a query image, the matching model finds the best match between the query and the database based on a similarity score. The estimated camera pose is then inferred from the matched prior information in the database.\n \n \\item Another solution does not assume the availability of a database and uses only neural networks to regress the camera poses of the query images. This approach constructs an implicit relation between images and poses, which is called \\emph{camera pose regression} (CPR).\n\\end{enumerate}\nThe prerequisite of a database on the one hand can boost the accuracy of the camera relocalization by storing useful prior information. On the other hand, the computation and storage requirements are proportionate to the number of sample points in the database. To decouple relocalization from the need for a database, there has been a recent surge of research interest in the second category CPR. \n\n\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\includegraphics[width=0.47\\textwidth]{figures\/neighboring.pdf}\n\\end{center}\n\\caption{Multi-view camera pose regression with neighboring information, without the need for any database. }\n\\label{fig:neighboring}\n\\end{figure}\n\nThe pioneering work PoseNet \\cite{posenet} uses a convolutional neural network (CNN) to extract features from a single image as vector embeddings, and the embeddings are directly regressed to the 6-DoF poses. To further improve the regression performance in driving scenarios, multi-view-based models extend the input from a single image to multi-view images. MapNet \\cite{mapnet} leverages pre-computed visual odometry to post-process the output pose trajectory. 
GNNMapNet \\cite{gnnmapnet} integrates a graph neural network (GNN) into a CNN to make image nodes interact with neighbors.\n\nThe above-mentioned multi-view-based models show promising performance in benign driving environments. To operate well in challenging environments, the model must be robust to environmental perturbations (e.g., changing seasons, weather, illumination, and unstable objects), and effectively leverage neighboring information from spatially or temporally nearby frames of a single vehicle or multi-view images shared from other spatially nearby agents (e.g., using V2X communication) as shown in \\cref{fig:neighboring}. Images sharing such neighboring information are said to be \\emph{covisible}.\n\nRecently, neural Ordinary Differential Equations (ODEs) \\cite{chen2018neural} and Partial Differential Equations (PDEs) \\cite{chamberlain2021grand, chamberlain2021blend} have demonstrated their robustness against input perturbations \\cite{yan2019robustness,kang2021Neurips}. Moreover, GNNs can effectively aggregate neighborhood information. We thus propose RobustLoc, which not only explores the relations between graph neighbors but also utilizes neural differential equations to improve robustness. We test our new multi-view-based model on three challenging autonomous driving datasets and verify that it outperforms existing state-of-the-art (SOTA) CPR methods.\n\nOur main contributions are summarized as follows:\n\\begin{enumerate}\n\n\\item \nWe represent the features extracted from a CNN in a graph and apply graph neural diffusion layers at each stage. That is, we design feature diffusion blocks at both the feature map extraction and vector embedding stages to achieve robust feature representations. Each diffusion block consists of not only cross-diffusion from node to node in a graph but also self-diffusion within each node. 
We also propose multi-level training with the branched decoder to better regress the target poses.\n\n\n\\item We conduct experiments on both ideal and challenging noisy autonomous driving datasets to demonstrate the robustness of our proposed method. The experiments verify that our method achieves better performance than the current SOTA CPR methods. \n\n\\item We conduct extensive ablation studies to provide insights into the effectiveness of our design. \n\\end{enumerate}\n\n\n\n\\section{Related Work}\n\n\n\n\\subsection{Image Matching}\nNetVLAD \\cite{netvlad} integrates CNNs into image retrieval, where a trainable VLAD layer is proposed to implicitly split images into different clusters. SuperGlue \\cite{sarlin2020superglue} assumes 2D image key points are available, and point-wise matching is achieved by an attentional graph network and the Sinkhorn algorithm. Pixloc \\cite{pixloc} integrates 3D point clouds into image matching, estimating the pose by minimizing the reprojection error of the 3D points. \n\n\\subsection{Camera Pose Regression}\nGiven the query images, CPR models directly regress the camera poses of these images without the need for a database. Thus, their complexity does not depend on the scale of a database, which is an inherent advantage over database-based methods. \\\\\nPoseNet \\cite{posenet} and GeoPoseNet \\cite{geoposenet2017} propose simultaneous learning of location and orientation by integrating balance parameters. MapNet \\cite{mapnet} uses visual odometry as a post-processing technique to optimize the regressed poses. LsG \\cite{lsg} and LSTM-PoseNet \\cite{lstmpose} integrate sequential information by fusing PoseNet and LSTM. AD-PoseNet and AD-MapNet \\cite{adposenet} leverage semantic masks to drop out dynamic areas in the image. AtLoc \\cite{atloc} introduces global attention to guide the network to learn better representations. 
GNNMapNet \\cite{gnnmapnet} expands the feature exploration from a single image to multi-view images using GNN. IRPNet \\cite{irpnet} proposes to use two branches to regress translation and orientation respectively. Coordinet \\cite{coordinet} uses the coordconv \\cite{coordconv} and weighted average pooling \\cite{fc4} to capture spatial relations. \n\n\n\n\\subsection{Neural Differential Equations and Robustness}\n\nThe dynamics of a system are usually described by ordinary or partial differential equations. The paper \\cite{chen2018neural} first proposes trainable neural ODEs by parameterizing the continuous dynamics of hidden units. The hidden state of the ODE network is modeled as:\n\\begin{align}\n\\ddfrac{\\bm{y}(t)}{t}=f_{\\theta}(\\bm{y}(t)) \\label{eq:ode_f}\n\\end{align}\nwhere $\\bm{y}(t)$ denotes the latent state of the trainable network $f_{\\theta}$ that is parameterized by weights $\\theta$. Recent studies \\cite{yan2019robustness,kang2021Neurips} have demonstrated that neural ODEs are intrinsically more robust against input perturbations compared to vanilla CNNs. \n\n\nIn addition, neural PDEs \\cite{chamberlain2021grand, chamberlain2021blend} have been proposed and applied to GNN, where the diffusion process is modeled on the graph. 
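As a minimal sketch of the hidden-state dynamics in \\eqref{eq:ode_f} (not the implementation used by the cited works), the state can be integrated with a fixed-step Euler solver; the toy two-layer network below stands in for $f_{\\theta}$:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 16))

def f_theta(y):
    # Toy parameterization of dy/dt = f_theta(y): a two-layer MLP with tanh.
    return np.tanh(y @ W1.T) @ W2.T * 0.1

def odeint_euler(y0, t0=0.0, t1=1.0, steps=20):
    # Fixed-step Euler integration of the neural ODE from t0 to t1.
    y, dt = y0, (t1 - t0) / steps
    for _ in range(steps):
        y = y + dt * f_theta(y)
    return y

y0 = rng.normal(size=(4, 8))   # batch of 4 hidden states
y1 = odeint_euler(y0)
print(y1.shape)                # (4, 8): the state keeps its shape
```

In practice, adaptive solvers with the adjoint method \\cite{chen2018neural} replace this fixed-step scheme, but the state-evolution idea is the same.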
Furthermore, the stability of the heat semigroup and the heat kernel under perturbations of the Laplace operator (i.e., local perturbation of the manifold) is studied \\cite{SonKanWan:C22}.\n\n\n\\section{Proposed Model}\n\n\nIn this section, we provide a detailed description of our proposed CPR approach.\nWe assume that the input is a set of images $\\{\\bm{I}_{i}\\}_{i\\in [N]}$ that may be covisible (see \\cref{fig:neighboring}).\\footnote{\\emph{Notations:} In this paper, we use $[N]$ to denote the set of integers $\\{1, 2, \\ldots, N\\}$.We use boldfaced lowercase letters like $\\bm{m}$ to denote vectors and boldface capital letters like $\\bm{W}$ to denote matrices.} Our objective is to perform CPR on the input images.\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{figures\/architecture.pdf}\n\\end{center}\n\\caption{The main architecture of RobustLoc. Feature diffusion is performed at both the feature map stage and the vector embedding stage. The branched decoder regresses the 6-DoF poses based on the vector embeddings or the pooled feature maps. The details for multi-layer decoding are shown in \\cref{fig:multi-level}. }\n\\label{fig:model}\n\\end{figure*}\n\n\\subsection{RobustLoc Overview}\nWe first summarize our multi-view CPR pipeline, which can be decomposed into three different stages, as follows (see \\cref{fig:model} and \\cref{fig:multi-level}): \n\\begin{enumerate}\n\\item Given $N$ neighboring images, a CNN extracts the feature maps of all these images. Our proposed feature map diffusion block then performs cross-self diffusion on the feature maps.\n\\item After feature map diffusion, a global average pooling module aggregates the feature maps as vector embeddings, which contain global representations of these images. Similarly, those vector embeddings are then diffused by cascaded diffusion blocks. 
\n\\item Based on the vector embeddings, the branched decoder module regresses the output camera poses. During training, decoding is performed on multiple levels to provide better feature constraints.\n\\end{enumerate}\n\n\n\n\\subsection{Neural Diffusion for Feature Maps}\n The input images $\\{\\bm{I}_{i}\\}_{i\\in [N]}$ are passed through a CNN to obtain the feature maps $ \\{\\bm{m}_{i} \\in \\mathbb{R}^{H \\times W \\times C } \\}_{i\\in [N]}$. Here, $C$ is the channel dimension, while $H$ and $W$ are the dimensions of a feature map. For each feature map $\\bm{m}_i$, we denote its $j$-th element as $\\bm{m}_{i,j}\\in \\mathbb{R}^{C} , j\\in [HW]$. We next describe the feature map diffusion block, where we perform cross-diffusion from node to node in a graph, and self-diffusion within each node. The two diffusion processes update the feature map by leveraging the neighboring information or only using each node's individual information, respectively.\n\\subsubsection{Cross-Diffusion Dynamics.}\nTo support the cross-diffusion over feature maps, we formulate the first graph in our pipeline as:\n\\begin{align}\n\\mathcal{G}^{\\mathrm{feat}}=(\\mathcal{V}^{\\mathrm{feat}}, \\mathcal{E}^{\\mathrm{feat}}), \n\\end{align}\nwhere the node set $\\mathcal{V}^{\\mathrm{\\mathrm{feat}}}=\\{\\bm{m}_{i,j}\\}_{(i,j)\\in [N]\\times[HW]}$ contains element-wise features $\\bm{m}_{i,j}$ and the edge set $\\mathcal{E}^{\\mathrm{\\mathrm{feat}}}$ is defined as the complete graph edges associated with attention weights as discussed below. 
The complete graph architecture is demonstrated to be an effective design, as shown in \\cref{tab:graph}.\n\nTo achieve robust feature interaction, we next define the cross-diffusion process as: \n\\begin{align}\n\\frac{\\partial}{\\partial t} \\bm{x}(t)\n& = f_{\\mathrm{cross}}(\\bm{x}(t)), \\label{eq:pde}\n\\end{align}\nwhere $f_{\\mathrm{cross}}(\\bm{x}(t))$ is a neural network and can be approximately viewed as a neural PDE with the partial differential operations over a manifold space replaced by the attention modules that we will introduce later. We take the input to the feature map diffusion module as the initial state at $t=t_{0}$, i.e., $\\bm{x}(t_{0})=\\left\\{\\bm{m}_{i,j}\\right\\}_{(i,j)\\in [N]\\times[HW]}$, where $\\bm{x}(t)=\\left\\{\\bm{m}_{i,j}(t)\\right\\}_{(i,j)\\in [N]\\times[HW]}$ denotes the hidden state of the diffusion.\nThe diffusion process is known to be robust against local perturbations of the manifold \\cite{chen1998stability}, where the local perturbations in our CPR task include challenging weather conditions, dynamic street objects, and unexpected image noise. Therefore, we expect the module in \\cref{eq:pde} to simultaneously leverage neighboring image information and maintain robustness against local perturbations.\n\n\nWe next introduce the computation of attention weights in $f_{\\mathrm{cross}}(\\bm{x}(t))$ for node features at time $t$. \nWe first generate the embedding of each node using multi-head fully connected (FC) layers with learnable parameter matrix $\\bm{W}_{k}$ and bias $\\bm{b}_{k}$ at each head $k\\in[K]$, where $K$ is the number of heads. The output at each head $k$ can be written as:\n\\begin{align}\n\\bm{m}_{i,j;k}^{\\mathrm{FC}}(t)=\\bm{W}_{k} \\bm{m}_{i,j}(t) + \\bm{b}_{k}.\n\\end{align}\nThe attention weights are then generated by computing the dot product among all the neighboring nodes using the features \n$\\bigl\\{\\bm{m}_{i,j;k}^{\\mathrm{FC}}(t)\\bigr\\}_{(i,j)\\in[N]\\times[HW]}$. 
We have\n\\begin{align}\n& \\{a_{(i,j),(i',j');k}(t)\\}_{(i',j')\\in \\mathcal{N}_{i,j}} \\nonumber\\\\ \n& =\\mathrm{Softmax}_{(i',j') \\in \\mathcal{N}(i,j)}(\\bm{m}_{i,j;k}^{\\mathrm{FC}}(t) \\cdot \\bm{m}_{i', j';k}^{\\mathrm{FC}}(t)),\n\\end{align}\nwhere $\\mathcal{N}_{i,j}$ denotes the set of neighbors of node $\\bm{m}_{i,j}$. \nLet\n\\begin{align}\n\\bm{m}_{i,j;k}^{\\mathrm{weighted}}(t)= \\sum_{(i',j') \\in \\mathcal{N}_{i,j}} a_{(i,j),(i',j');k}(t) \\bm{m}_{i',j';k}^{\\mathrm{FC}}(t).\n\\end{align} Finally, the updated node features are obtained by concatenating the weighted node features from all heads as \n\\begin{align}\nf_{\\mathrm{cross}}(\\bm{x}(t))=\\set*{ \\concat_{k\\in[K]}(\\bm{m}_{i,j;k}^{\\mathrm{weighted}}(t)) }_{(i,j)\\in [N]\\times[HW]}.\n\\end{align}\n\nBased on the above pipeline, the output of the cross-diffusion at time $t=t_{1}$ can be obtained as:\n\n\\begin{align}\n\\bm{x}(t_{1})=F_{\\mathrm{cross}}(\\bm{x}(t_{0})),\n\\end{align}\nwhere $F_{\\mathrm{cross}}(\\cdot)$ denotes the solution of \\cref{eq:pde} integrated from $t=t_{0}$ to $t=t_{1}$. \n\n\\subsubsection{Self-Diffusion Dynamics.} In the next step, we update each node feature independently. The node-wise feature update can be regarded as a rewiring of the complete graph to an edgeless graph, and the node-wise feature update is described as:\n\\begin{align}\n\\ddfrac{\\bm{m}_{i,j}(t)}{t}=f_{\\mathrm{self}}(\\bm{m}_{i,j}(t))=\\mathrm{MLP}(\\bm{m}_{i,j}(t)).\n\\label{eq:NODE}\n\\end{align}%\nAnd the output of self-diffusion can be obtained as:\n\\begin{align}\n\\bm{m}_{i,j}(t_{2})=F_{\\mathrm{self}}(\\bm{m}_{i,j}(t_{1})).\n\\end{align}\nwhere $F_{\\mathrm{self}}(\\cdot)$ denotes the solution of \\cref{eq:NODE} integrated from $t=t_{1}$ to $t=t_{2}$. 
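The cross- and self-diffusion steps above can be sketched as follows: a single attention head over a complete graph, with random weights standing in for the learned parameters, and a fixed-step Euler solver in place of the actual ODE solver:

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8                                    # channel dimension
W_fc = rng.normal(size=(C, C)) * 0.1     # stand-in for the learnable W_k (one head)
b_fc = np.zeros(C)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def f_cross(x):
    # Per-node FC embedding, dot-product attention over the complete graph,
    # then attention-weighted aggregation of neighbor features.
    m_fc = x @ W_fc.T + b_fc
    attn = softmax(m_fc @ m_fc.T)        # attention weights a_{(i,j),(i',j')}
    return attn @ m_fc

def f_self(x):
    # Node-wise update on the rewired (edgeless) graph, cf. the self-diffusion ODE.
    return np.tanh(x @ W_fc.T) * 0.1

def euler(f, x, steps=10, dt=0.1):
    # Fixed-step Euler integration of dx/dt = f(x).
    for _ in range(steps):
        x = x + dt * f(x)
    return x

x0 = rng.normal(size=(6, C))             # 6 nodes = flattened feature-map elements
x1 = euler(f_cross, x0)                  # F_cross: integrate from t0 to t1
x2 = euler(f_self, x1)                   # F_self:  integrate from t1 to t2
print(x2.shape)                          # (6, 8)
```

The multi-head case concatenates the weighted features of $K$ such heads, as in $f_{\\mathrm{cross}}$ above.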
\nAs neural ODEs are robust against input perturbations \\cite{yan2019robustness,kang2021Neurips}, we expect the updating of each node feature according to the self-diffusion \\cref{eq:NODE} to be robust against perturbations like challenging weather conditions, dynamic street objects, and image noise.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Vector Embeddings and Diffusion}\nAfter the feature map neural diffusion, we feed the updated feature maps into a global average pooling module to generate the vector embeddings $\\{ \\bm{h}_{i}\\in \\mathbb{R}^{C} \\}_{i\\in [N]}$, where\n\\begin{align}\n\\bm{h}_{i}=\\mathrm{Pooling} (\\bm{m}_{i}) .\n\\end{align}\nEach vector embedding contains rich global representations for the input image together with the information diffused from the neighboring images. To enable diffusion for the global information, we propose to design the vector embedding graph as:\n\\begin{align}\n\\mathcal{G}^{\\mathrm{vect}}=(\\mathcal{V}^{\\mathrm{vect}}, \\mathcal{E}^{\\mathrm{vect}}), \n\\end{align}\nwhere the node set $\\mathcal{V}^{\\mathrm{vect}}=\\{\\bm{h}_{i}\\}_{i\\in[N]}$ contains image vector embeddings $\\bm{h}_{i}$ and the edge set $\\mathcal{E}^{\\mathrm{vect}}$ is also defined to be the complete graph. Based on this graph $\\mathcal{G}^{\\mathrm{vect}}$, we construct the cascaded diffusion blocks, to perform global information diffusion. 
Within the cascaded blocks, each basic diffusion block consists of two diffusion layers: a cross-diffusion layer and a self-diffusion layer, similar to the two diffusion schemes introduced at the feature map diffusion phase.\n\n\\subsection{Pose Decoding}\nIn this subsection, we explain the pose decoding operations.\n\n\\subsubsection{Branched Pose Decoder.}\\label{decoder}\n\nEach camera pose $\\bm{p}=\\{\\bm{d},\\bm{r}\\}\\in\\mathbb{R}^{6}$, consists of a 3-dimensional translation $ \\bm{d}\\in\\mathbb{R}^{3}$ and a 3-dimensional rotation $ \\bm{r}\\in \\mathbb{R}^{3}$. Thus CPR can be viewed as a multi-task learning problem. However, since the translation and rotation elements of $\\bm{p}$ do not scale compatibly, the regression converges in different basins. To deal with it, previous methods consider regression for translation and rotation respectively and demonstrate it is an effective way to improve performance \\cite{irpnet}. In our paper, we also follow this insight to design the decoder.\n\nFirstly, the feature embeddings $\\{ \\bm{h}_{\\bm{d}}, \\bm{h}_{\\bm{r}}\\}$ for translation and rotation are extracted from the feature embedding $\\bm{h}$ using different non-linear MLP layers as:\n\n\\begin{align}\n\\bm{h}_{\\bm{d}}=\\mathrm{MLP}_{\\bm{d}}(\\bm{h}),\n\\end{align}\n\\begin{align}\n\\bm{h}_{\\bm{r}}=\\mathrm{MLP}_{\\bm{r}}(\\bm{h}),\n\\end{align}\nThus, the features of translation and rotation are decoupled. Next in the second stage, the pose output can be regressed as:\n\n\\begin{align}\n\\bm{p}=\\bm{W} (\\bm{h}_{\\bm{d}}\\concat\\bm{h}_{\\bm{r}}) + \\bm{b}\n\\end{align}\nwhere $\\bm{W},\\bm{b}$ are learnable parameters. During training, we compute the regression loss of decoded poses from multiple levels, which we will introduce below. 
During inference, we use the decoded pose from the last layer as the final output pose.\n\n\\subsubsection{Multi-level Pose Decoding Graph.} \nTo better regularize the whole regression pipeline, we propose to leverage the feature maps at multiple levels. As shown in \\cref{fig:multi-level}, at the vector embedding stage, we use the vector embeddings to regress the poses, while at the feature map stage, we use the feature maps. Denoting the feature maps at layer $l$ as $\\{ \\bm{m}_{i}^{l} \\in \\mathbb{R}^{H \\times W \\times C} \\}_{i\\in[N]}$, the pose decoding graph at layer $l$ can be formulated as: \n\\begin{align}\n\\mathcal{G}^{\\mathrm{pose},l}=(\\mathcal{V}^{\\mathrm{pose},l},\\mathcal{E}^{\\mathrm{pose},l}),\n\\end{align}\nwhere the edge set $\\mathcal{E}^{\\mathrm{pose},l}$ connects spatially adjacent nodes, which can be viewed as odometry connections, while the node set $\\mathcal{V}^{\\mathrm{pose},l}$ depends on the layer, since the information used to regress poses differs across layers:\n\\begin{align}\n\\mathcal{V}^{\\mathrm{pose},l}=\n\\left\\{\n\\begin{array}{ll}\n\\{ \\bm{h}_{i} \\}_{i\\in[N]} & \\text{if}\\ l=L, \\\\\n\\{ \\bm{m}_{i}^{l} \\}_{i\\in[N]} & \\text{otherwise}, \\\\\n\\end{array}\n\\right.\n\\end{align}\nwhere $L$ represents the last layer in our network.\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\includegraphics[width=0.42\\textwidth]{figures\/multi-level.pdf}\n\\end{center}\n\\caption{Multi-level pose decoding. Decoding can be directly applied to vector embeddings. Feature maps are first pooled and then decoded. }\n\\label{fig:multi-level}\n\\end{figure}\n\nAt the last layer, where vector embeddings are available, we can directly apply the pose decoder to generate absolute pose messages. 
By contrast, at feature map layers, we first apply a global average pooling module on the feature maps to formulate feature vectors, and pose messages can be obtained using the pose decoder:\n\\begin{align}\n\\bm{p}^{l}_{i}=\n\\left\\{\n\\begin{array}{l l}\nf_{\\mathrm{decoder}}^{l}( \\bm{h}^{l}_{i}) & \\text{if}\\ l=L, \\\\\nf_{\\mathrm{decoder}}^{l}( \\mathrm{Pooling}(\\bm{m}^{l}_{i})) & \\text{otherwise}. \\\\\n\\end{array}\n\\right.\n\\end{align}\nwhere $f_{\\mathrm{decoder}}^{l}(\\cdot)$ is the pose decoder at layer $l$. Using the simplified relative pose computation technique in \\cite{atloc}, the relative pose messages $\\bm{p}^{l}_{i,i'}$ at layer $l$ can be generated as:\n\\begin{align}\n\\bm{p}^{l}_{i,i'} = \\bm{p}^{l}_{i'} - \\bm{p}^{l}_{i}.\n\\end{align}\nBy leveraging multi-layer information, we expect not only the last layer but also the preceding middle-level layers can directly learn the implicit relation between images and poses, which helps to improve the robustness against perturbations.\n\n\\subsection{Loss Function}\nFollowing the approach in \\cite{atloc}, we use a weighted balance loss for translation and rotation predictions. For the input image $I_{i}$, we denote the translation and rotation targets as $\\bm{d}_{i}^{*} \\in \\mathbb{R}^{3}$ and $\\bm{r}_{i}^{*} \\in \\mathbb{R}^{3}$ respectively. 
Then the absolute pose loss term $\\mathcal{L}^{l}_{i}$ and the relative pose loss term $\\mathcal{L}^{l}_{i,i^{'}}$ at decoding layer $l$ are computed as: \n\\begin{align}\n&\\ml{\\mathcal{L}^{l}_{i} = \\norm{\\bm{d}_{i}^{l}-\\bm{d}_{i}^{*}} \\exp(-\\alpha^{l}) + \\alpha^{l} \\\\\n+ \\norm{\\bm{r}_{i}^{l}-\\bm{r}_{i}^{*}} \\exp(-\\beta^{l}) + \\beta^{l},}\\\\\n&\\ml{\\mathcal{L}^{l}_{i,i^{'}} = \\norm{\\bm{d}_{i,i^{'}}^{l}-\\bm{d}_{i,i^{'}}^{*}} \\exp(-\\gamma^{l}) + \\gamma^{l} \\\\ \n+ \\norm{\\bm{r}_{i,i^{'}}^{l}-\\bm{r}_{i,i^{'}}^{*}} \\exp(-\\lambda^{l}) + \\lambda^{l}},\n\\end{align}\nwhere $\\bm{d}_{i}^{l},\\bm{r}_{i}^{l},\\bm{d}_{i,i^{'}}^{l},\\bm{r}_{i,i^{'}}^{l}$ are outputs at layer $l$, while $\\alpha^{l}, \\beta^{l}, \\gamma^{l}, \\lambda^{l}$ are all learnable parameters at layer $l$. Finally, the overall loss function can be obtained as:\n\\begin{align}\n\\mathcal{L}=\\sum_{l\\in\\{3,4,L\\}}\\sum_{i\n\\in[N], i^{'}\\in \\mathcal{N}^l_i}\\mathcal{L}^{l}_{i} + \\mathcal{L}^{l}_{i,i^{'}},\n\\end{align}\nwhere $\\mathcal{N}^l_i$ is the neighborhood of node $i$ in $\\mathcal{G}^{\\mathrm{pose},l}$. We use the logarithmic form of the quaternion to represent rotation $\\bm{r}$ as:\n\\begin{align}\n\\bm{r}=\\log \\bm{q}=\\left\\{\n\\begin{array}{ll}\n\\frac{(\\bm{q}_{2},\\bm{q}_{3},\\bm{q}_{4})}{\\norm{(\\bm{q}_{2},\\bm{q}_{3},\\bm{q}_{4})}}\\cos^{-1} \\bm{q}_{1} &\\ \\text{if}\\ \\norm{(\\bm{q}_{2},\\bm{q}_{3},\\bm{q}_{4})} \\not= 0,\\\\\n0 &\\ \\text{otherwise},\n\\end{array}\n\\right.\n\\end{align}\nwhere $\\bm{q} = (\\bm{q}_{1}, \\bm{q}_{2}, \\bm{q}_{3}, \\bm{q}_{4}) \\in \\mathbb{R}^{4}$ represents a quaternion.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\nIn this section, we first evaluate our proposed model on three large autonomous driving datasets. 
We next present an ablation study to demonstrate the effectiveness of our model design.\n\n\\begin{table*}[!htp]\\footnotesize\n\\centering\n\\begin{tabular}{c | l | c c c c c c } \n\\toprule\n& \\multirow{3}{*}{Model} & \\multicolumn{2}{c}{Loop (cross-day)} & \\multicolumn{2}{c}{Loop (within-day)} & \\multicolumn{2}{c}{Full} \\\\\n& & \\multicolumn{1}{c}{Mean} & \\multicolumn{1}{c}{Median} & \\multicolumn{1}{c}{Mean} & \\multicolumn{1}{c}{Median} & \\multicolumn{1}{c}{Mean} & \\multicolumn{1}{c}{Median} \\\\ \n\\midrule\n\\multirow{5}{*}{\\rotatebox{90}{+ Extra Data}}\n& GNNMapNet + \\emph{post.} & 7.96 \/ \\underlinecloser{2.56} & - & - & - & 17.35 \/ \\underlinecloser{3.47} & - \\\\\n\n& ADPoseNet & - & - & - & 6.40 \/ 3.09 & - & 33.82 \/ 6.77 \\\\\n\n& ADMapNet & - & - & - & 6.45 \/ 2.98 & - & 19.18 \/ 4.60 \\\\\n\n& MapNet+ & 8.17 \/ 2.62 & - & - & - & 30.3 \/ 7.8 & \\\\\n\n& MapNet+ + \\emph{post.} & 6.73 \/ \\textbf{2.23} & - & - & - & 29.5 \/ 7.8 & -\\\\\n\n\\midrule\n\\multirow{7}{*}{\\rotatebox{90}{CPR Only}}\n& GeoPoseNet & 27.05 \/ 18.54 & 6.34 \/ 2.06 & - & - & 125.6 \/ 27.1 & 107.6 \/ 22.5 \\\\\n\n& MapNet & 9.30 \/ 3.71 & 5.35 \/ \\underlinecloser{1.61} & - & - & 41.4 \/ 12.5 & 17.94 \/ 6.68\\\\\n\n& LsG & 9.08 \/ 3.43 & - & - & - & 31.65 \/ 4.51 & - \\\\\n\n& AtLoc & 8.74 \/ 4.63 & 5.37 \/ 2.12 & - & - & 29.6 \/ 12.4 & 11.1 \/ 5.28 \\\\\n\n& AtLoc+ & \\underlinecloser{7.53} \/ 3.61 & \\underlinecloser{4.06} \/ 1.98 & - & - & 21.0 \/ 6.15 & 6.40 \/ 1.50\\\\\n\n& CoordiNet & - & - & \\underlinecloser{4.06} \/ \\underline{1.44} & \\underline{2.42} \/ \\underline{0.88} & \\underline{14.96} \/ 5.74 & \\textbf{3.55} \/ \\underline{1.14} \\\\\n\n& RobustLoc (ours) & \\textbf{4.68} \/ 2.67 & \\textbf{3.70} \/ \\textbf{1.50} & \\textbf{2.49} \/ \\textbf{1.40} & \\textbf{1.97}\/ \\textbf{0.84} & \\textbf{9.37} \/ \\textbf{2.47} & \\underline{5.93} \/ \\textbf{1.06} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Median and mean translation\/rotation estimation 
error (m\/$^\circ$) on the Oxford RobotCar dataset. The best and the second-best results in each metric are highlighted in \textbf{bold} and \underlinecloser{underlined} respectively. ``-'' denotes no data provided.\n}\n\n\label{tab:robotcar}\n\end{table*}\n\subsection{Datasets and Implementation Details}\n\subsubsection{Oxford RobotCar.}\nThe Oxford RobotCar dataset \cite{robotcar} is a large autonomous driving dataset collected by a car driving along a route in Oxford, UK. It consists of two different routes: 1) Loop with a trajectory area of $8.8\times10^{4}\mathrm{m}^{2}$ and length of $10^{3}\mathrm{m}$, and 2) Full with a trajectory area of $1.2\times10^{6}\mathrm{m}^{2}$ and length of $9\times10^{3}\mathrm{m}$.\n\n\subsubsection{4Seasons.}\nThere are only a few existing methods designed for robust CPR in driving environments, and the experiment on the Oxford dataset alone is insufficient for comparison. Thus we also conduct experiments on another driving dataset to cover more driving scenarios. The 4Seasons dataset \cite{4seasons} is a comprehensive dataset for autonomous driving SLAM. It was collected in Munich, Germany, covering varying perceptual conditions. Specifically, it contains different environments including the business area, the residential area, and the town area. In addition, it consists of a wide variety of weather conditions and illuminations. In our experiments, we use 1) Business Campus (business area), 2) Neighborhood (residential area), and 3) Old Town (town area).\n\n\subsubsection{Perturbed RobotCar.}\nTo further evaluate the performance under challenging environments, we inject noise into the RobotCar Loop dataset and call this the Perturbed RobotCar dataset, as shown in \cref{fig:noisy}. 
We create three scenarios: 1) Medium (with fog, snow, rain, and spatter on the lens), 2) Hard (with added Gaussian noise), and 3) Hard (+ \emph{noisy training}) (i.e., training with noisy augmentation).\n\n\n\begin{figure}[!htp]\n\begin{center}\n\includegraphics[width=0.4\textwidth]{figures\/noisy.pdf}\n\end{center}\n\caption{Visualization of the Perturbed RobotCar dataset. Medium is with fog, snow, rain, and spatter on the lens. Hard is with added Gaussian noise.}\n\label{fig:noisy}\n\end{figure}\n\n\n\subsubsection{Implementation.}\nWe use ResNet34 as the backbone, which is pre-trained on the ImageNet dataset. We set the maximum number of input images to $11$. We resize the shorter side of each input image to $128$ and set the batch size to $64$. The Adam optimizer with a learning rate $2\times10^{-4}$ and weight decay $5\times10^{-4}$ is used to train the network. Data augmentation techniques include random cropping and color jittering. We set the integration times $t_{0}=0$, $t_{1}=1$, and $t_{2}=2$. The number of attention heads is 8. We train our network for $300$ epochs. All of the experiments are conducted on an NVIDIA A5000. \n\n\subsubsection{Baselines.} The baselines are described in the section ``Related Work''.\n\n\subsection{Main Results}\label{subsec:main results}\nOn the Oxford RobotCar dataset, as shown in \cref{tab:robotcar}, we obtain the best performance in 10 out of 12 metrics. Using the mean error, which is easily influenced by outlier predictions, RobustLoc outperforms the baselines by a significant margin. In the most challenging route Full, to the best of our knowledge, RobustLoc is the first to achieve less than $10\mathrm{m}$ mean translation error for CPR.\n\nThe 4Seasons dataset consists of more varied driving scenes. As shown in \cref{tab:4seasons}, RobustLoc achieves the best performance in 11 out of 12 metrics. 
Again, using the mean error metric, RobustLoc outperforms the baselines by a significant margin.\n\nOn the Perturbed RobotCar dataset, where the images contain more challenging weather conditions and noisy perturbations, RobustLoc achieves the best in all metrics. The superiority of RobustLoc over other baselines is more obvious in \\cref{tab:noisy robotcar}.\n\n\n\n\n\n\n\\begin{table*}[!htp]\\footnotesize\n\\centering\n\\begin{tabular}{l | c c c c c c } \n\\toprule\n\\multirow{2}{*}{Model} & \\multicolumn{2}{c}{Business Campus} & \\multicolumn{2}{c}{Neighborhood} & \\multicolumn{2}{c}{Old Town}\\\\\n& \\multicolumn{1}{c}{Mean}& \\multicolumn{1}{c}{Median}& \\multicolumn{1}{c}{Mean}& \\multicolumn{1}{c}{Median}& \\multicolumn{1}{c}{Mean}& \\multicolumn{1}{c}{Median}\\\\\n\\midrule\nGeoPoseNet & 11.04 \/ 5.78 & 5.93 \/ 2.03 & 2.87 \/ 1.30 & 1.92 \/ 0.88 & 64.81 \/ 6.67 & 15.03 \/ 1.57 \\\\\n\nMapNet & 10.35 \/ 3.78 & 5.66 \/ 1.83 & 2.81 \/ 1.05 & 1.89 \/ 0.92 & 46.56 \/ 7.14 & 16.52 \/ 2.12 \\\\\n\nGNNMapNet & \\underlinecloser{7.69} \/ 4.34 & \\underlinecloser{5.52} \/ 2.16 & 3.02 \/ 2.92 & 2.14 \/ 1.45 & \\underlinecloser{41.54} \/ 7.30 & 19.23 \/ 3.26 \\\\\n\n\nAtLoc & 11.53 \/ 4.84 & 5.81 \/ \\underlinecloser{1.50} & 2.80 \/ 1.16 & 1.83 \/ 0.93 & 84.17 \/ 7.81 & 17.10 \/ 1.73 \\\\\n\nAtLoc+ & 13.70 \/ 6.41 & 5.58 \/ 1.94 & 2.33 \/ 1.39 & 1.61 \/ 0.88 & 68.40 \/ 5.51 & 14.52 \/ 1.69 \\\\\n\nIRPNet & 10.95 \/ 5.38 & 5.91 \/ 1.82 & 3.17 \/ 2.85 & 1.98 \/ 0.90 & 55.86 \/ 6.97 & 17.33 \/ 3.11 \\\\\n\nCoordiNet & 11.52 \/ \\underlinecloser{3.44} & 6.44 \/ \\textbf{1.38} & \\underlinecloser{1.72} \/ \\underlinecloser{0.86} & \\underlinecloser{1.37} \/ \\underlinecloser{0.69} & 43.68 \/ \\underlinecloser{3.58} & \\underlinecloser{11.83} \/ \\underlinecloser{1.36} \\\\\n\nRobustLoc (ours) & \\textbf{4.28} \/ \\textbf{2.04} & \\textbf{2.55} \/ \\underlinecloser{1.50} & \\textbf{1.36} \/ \\textbf{0.83} & \\textbf{1.00} \/ \\textbf{0.65} & \\textbf{21.65} \/ \\textbf{2.41} 
& \\textbf{5.52} \/ \\textbf{1.05} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Median and mean translation\/rotation estimation error (m\/$^\\circ$) on the 4Seasons dataset. The best and the second-best results in each metric are highlighted in \\textbf{bold} and \\underlinecloser{underlined} respectively.\n}\n\\label{tab:4seasons}\n\\end{table*}\n\n\n\n\n\n\n\\begin{table*}[!htp]\\footnotesize\n\\centering\n\\begin{tabular}{l | c c c c c c } \n\\toprule\n\n\\multirow{2}{*}{Model} & \\multicolumn{2}{c}{Medium} & \\multicolumn{2}{c}{Hard} & \\multicolumn{2}{c}{Hard (+ \\emph{noisy training})} \\\\\n& \\multicolumn{1}{c}{Mean} & \\multicolumn{1}{c}{Median} & \\multicolumn{1}{c}{Mean}& \\multicolumn{1}{c}{Median} \n& \\multicolumn{1}{c}{Mean} & \\multicolumn{1}{c}{Median} \\\\\n\\midrule\nGeoPoseNet & 20.47 \/ 8.76 & 8.70 \/ 2.30 & 41.71 \/ 17.63 & 14.02 \/ 3.13 & 24.03 \/ 11.14 & 7.14 \/ 1.70\\\\\nMapNet & 17.93 \/ 7.01 & 6.89 \/ 2.00 & 49.36 \/ 20.01 & 18.37 \/ \\underlinecloser{2.58} & 21.22 \/ 8.38 & 6.38 \/ 1.97 \\\\\nGNNMapNet & \\underlinecloser{16.17} \/ 7.24 & 8.02 \/ 2.35 & 73.97 \/ 35.57 & 61.47 \/ 19.73 & \\underlinecloser{14.55} \/ \\underlinecloser{7.62} & 6.69 \/ \\underlinecloser{1.57} \\\\\n\nAtLoc & 19.92 \/ 7.25 & 7.26 \/ \\underlinecloser{1.74} & 52.56 \/ 23.46 & 15.01 \/ 3.17 & 23.48 \/ 11.43 & 7.42 \/ 2.38\\\\\nAtLoc+ & 17.68 \/ 7.48 & \\underlinecloser{6.19} \/ 1.80 & \\underlinecloser{37.92} \/ 18.65 & \\underlinecloser{12.17} \/ 2.93 & 22.61 \/ 11.23 & \\underlinecloser{6.21} \/ 1.83 \\\\\nIRPNet & 16.35 \/ 7.56 & 8.71 \/ 2.28 & 45.72 \/ 21.84 & 17.99 \/ 3.50 & 24.73 \/ 11.20 & 6.73 \/ 1.82\\\\\nCoordiNet & 17.67 \/ \\underlinecloser{6.66} & 7.63 \/ 1.79 & 44.11 \/ \\underlinecloser{16.42} & 17.21 \/ 2.70 & 24.06 \/ 12.27 & 6.25 \/ 1.61 \\\\\nRobustLoc (ours) & \\bf{8.12} \/ \\bf{3.83} & \\textbf{5.34} \/ \\textbf{1.53} & \\textbf{27.75} \/ \\textbf{9.70} & \\textbf{11.59} \/ \\textbf{2.64} & \\textbf{10.06} \/ \\textbf{4.95} & 
\\textbf{5.18} \/ \\textbf{1.43} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Median and mean translation\/rotation estimation error (m\/$^\\circ$) on the Perturbed RobotCar dataset. The best and the second-best results in each metric are highlighted with \\textbf{bold} and \\underlinecloser{underline} respectively. RobustLoc achieves the best in \\textbf{all} metrics.\n}\n\\label{tab:noisy robotcar}\n\\end{table*}\n\n\n\n\\subsection{Analysis}\n\\subsubsection{Ablation Study.}\nWe justify our design for RobustLoc by ablating each module. From \\cref{tab:ablation study}, we observe that every module in our design contributes to the final improved estimation. We see that making use of neighboring information from covisible frames and learning robust feature maps contribute to more accurate CPR.\n\n\\begin{table}[!htb]\\footnotesize\n\\centering\n\\begin{tabular}{l c c } \n\\toprule\n\\multirow{1}{*}{Method} & \\multicolumn{1}{c}{ Mean Error (m\/$^\\circ$) on Loop (c.)} \\\\\n\\midrule\nbase model & 8.38 \/ 4.29 \\\\\n+ feature map graph & 7.01 \/ 3.86 \\\\\n+ vector embedding graph & 6.24 \/ 3.21\\\\\n+ diffusion & 5.53 \/ 2.95\\\\\n+ branched decoder & 5.14 \/ 2.79\\\\\n+ multi-level decoding & \\textbf{4.68} \/ \\textbf{2.67} \\\\\n\\midrule\ndiffusion at stage 3 & 5.27 \/ 2.90 \\\\\ndiffusion at stage 3,4 & 4.86 \/ 3.18 \\\\\ndiffusion at stage 4 & \\textbf{4.68} \/ \\textbf{2.67} \\\\\nmulti-layer concatenation & 5.80 \/ 3.26 \\\\\n\\midrule\nmore augmentation & \\textbf{4.68} \/ \\textbf{2.67} \\\\\nless augmentation & 5.32 \/ 3.17 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Ablation study, diffusion design, and augmentation design comparison on the Oxford RobotCar dataset.}\n\\label{tab:ablation study}\n\\end{table}\n\n\n\\subsubsection{Salience Visualization.}\nSalience maps shown in \\cref{fig:salience} suggest that in driving environments, RobustLoc pays more attention to relatively robust features such as the skyline and the road, similar to PixLoc 
\cite{pixloc}. In addition, dynamic objects such as vehicles are implicitly suppressed in RobustLoc's regression pipeline.\n\n\begin{figure}[!htb]\n\begin{center}\n\includegraphics[width=0.4\textwidth]{figures\/saliency.pdf}\n\end{center}\n\caption{Robust features from RobustLoc. }\n\label{fig:salience}\n\end{figure}\n\n\n\subsubsection{Diffusion and Augmentation.}\nUsing multi-level features is an effective method in dense prediction tasks such as depth estimation \cite{yan2021cadepth}. To test if this holds in CPR, we use the feature maps from the lower stage 3 (see \cref{fig:multi-level}), which however does not lead to performance improvement, as shown in \cref{tab:ablation study}. We also utilize the multi-level concatenation strategy used in GNNMapNet. This does not lead to significant changes. These experiments demonstrate that CPR benefits more from high-level features with more semantic information than from low-level local texture features. Finally, we test the performance when training with less data augmentation, which leads to worse performance. This suggests that more extensive data augmentation can enhance the model robustness in challenging scenarios, which is consistent with the experimental results on the Perturbed RobotCar dataset in \cref{tab:noisy robotcar}. \n\n\n\n\n\n\subsubsection{Graph Design.}\nWe next explore the use of different graph designs for feature map diffusion and vector embedding diffusion. The grid graph stacks an image with two other spatially adjacent images as a cube, and the attention weights are formulated within the 6-neighbor area (for feature maps) or the 2-neighbor area (for vector embeddings). The self-cross graph computes attention weights first within each image and then across different images. From \cref{tab:graph}, we see that the complete graph has the best performance. 
This is because, in the complete graph, each node can interact with all other nodes, allowing the aggregation of useful information with appropriate attention weights.\n\n\n\begin{table}[!htb]\footnotesize\n\centering\n\begin{tabular}{l c c} \n\toprule\n\multirow{1}{*}{Method} & \multicolumn{1}{c}{Mean Error (m\/$^\circ$) on Full} \\\n\midrule\ngrid graph & 15.67 \/ 2.95 \\\nself-cross graph & 15.31 \/ 3.28 \\\ncomplete graph & \textbf{9.37} \/ \textbf{2.47} \\\n\midrule\n\multirow{1}{*}{} & \multicolumn{1}{c}{Mean Error ($^\circ$) on Business Campus} \\\n\midrule\nquaternion & 2.23 \\\nLie group & 2.20 \\\nrotation matrix & 2.25 \\\nlog (quaternion) & \textbf{2.04} \\\n\bottomrule\n\end{tabular}\n\caption{Graph design comparison on the Oxford RobotCar dataset and rotation representation comparison on the 4Seasons dataset.}\n\label{tab:graph}\n\end{table}\n\n\n\subsubsection{Rotation Representation.}\nWe compare different representations of rotation in \cref{tab:graph}, where the log form of the quaternion is the optimal choice. The other three representations, including the vanilla quaternion, the Lie group, and the vanilla rotation matrix, show similar performance.\n\n\n\n\n\subsubsection{Trajectory Visualization.}\nWe visualize the output pose trajectories as shown in \cref{fig:trajectory}, where a significant gap between methods can be seen. RobustLoc outputs smoother and more globally accurate poses than the previous method, which shows the effectiveness of our design.\n\begin{figure}[!htb]\n\begin{center}\n\includegraphics[width=0.45\textwidth]{figures\/trajectory.pdf}\n\end{center}\n\caption{Trajectory visualization on the Oxford RobotCar dataset. The ground truth trajectories are shown in bold blue lines, and the estimated trajectories are shown in thin red lines. 
The stars mark the start of the trajectories.}\n\label{fig:trajectory}\n\end{figure}\n\n\n\subsubsection{Inference Speed.}\nWe finally test the performance using different numbers of input frames. The inference speed does not drop significantly as the number of input frames increases: even the slowest configuration (using $11$ frames) runs at $50$ iterations per second, achieving real-time regression. On the other hand, adding frames improves performance when the input size is small, while further increasing the number of frames does not bring significant change.\n\n\begin{table}[!htp]\footnotesize\n\centering\n\begin{tabular}{l c c c c c} \n\toprule\n\multirow{1}{*}{\#frames} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{11} \\\n\midrule\nSpeed (iters\/s) & \textbf{56} & 55 & 53 & 52 & 50\\\nMean Error (m) & 5.28 & 5.09 & 4.96 & \textbf{4.68} & 4.72 \\\n\bottomrule\n\end{tabular}\n\caption{The performance using different numbers of frames on the Oxford RobotCar Loop (cross-day).}\n\label{tab:additional insight}\n\end{table}\n\n\n\n\n\n\n\section{Conclusion}\nWe have proposed RobustLoc, a robust CPR model, and verified its performance. The model's robustness derives from leveraging information from covisible images, which is available even in challenging driving environments, and from neural graph diffusion that aggregates neighboring information. 
Extensive experimental results demonstrate that RobustLoc achieves SOTA performance.\n\n\n\section{Acknowledgments}\nThis work is supported under the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s), and by the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research \& Development Programme.\n\n\n\n\n\n\n\n\section{Introduction}\nWe first briefly discuss the problem of dictionary learning and the different choices one has to make to perform this task depending on the problem at hand. We also introduce notations and standard operators.\n\subsection{Why a Dictionary ?}\n\nIn order to analyze a given finite dataset $X:=\{x_n \in \mathbb{R}^D,n=1,..,N\}$ different approaches are possible. One of them relies on the assumption that the observations actually come from latent representations that are mixed together, with different mixings leading to different observations but with a fixed dictionary $\Phi_K:=\{\phi_k \in \mathbb{R}^D,k=1,...,K\}$ with usually $K \ll N$. 
One can thus rewrite \n\begin{equation}\label{eq0}\n x_n = f(\Phi,x_n).\n\end{equation}\nIn practice, a linear assumption is used for $f$ and we write the prediction as \n\begin{equation}\n \hat{x}_n=\hat{f}(\hat{\Phi}_K,x_n),\n\end{equation}\nwhere one estimates the functional and the filter-bank through a reconstruction loss defined as\n\begin{equation}\n E_n = ||x_n-\hat{x}_n ||^2,\n\end{equation}\nwhere we assume here a squared error but any loss function can be used in general.\nThe linear assumption imposed on $f$ leads to the estimation of weighting coefficients, denoted $\alpha_{n,k}$, weighting atom $k$ for observation $n$; these attributes are the new features used to represent the observations.\n\nThis new representation of $x_n$ via its corresponding feature vector $\bm{\alpha_n}$ can be used for many tasks such as clustering, denoising, data generation, anomaly detection, compression and much more.\nDepending on the constraints one imposes on the feature vectors $\bm{\alpha_n}$ and the dictionary $\Phi_K$, one can recover standard frameworks such as 
Principal Component Analysis (PCA)\cite{jolliffe2002principal}, Independent Component Analysis (ICA)\cite{hyvarinen2004independent}, Sparse Coding (SC)\cite{olshausen1997sparse}, (Semi) Nonnegative Matrix Factorization (sNMF)\cite{lee1999learning,ding2010convex}, Gaussian Mixture Model (GMM)\cite{bilmes1998gentle}, and many more, but all those approaches can be categorized into two main categories: Complete Dictionary Learning and Over-Complete Dictionary Learning.\n\nThe use of a dictionary also extends to standard template matching, the most popular technique and optimal in the GLRT sense, where new examples are mapped to one of the templates for clustering or other tasks.\n\n\n\subsection{(Over)complete Dictionary Learning}\nAs we saw, the dictionary learning problem finds many formulations but the main difference resides in the properties of the learned dictionary, namely whether it is complete or over-complete. \nThe general case of a complete or orthogonal basis imposes the following constraint on the filter-bank:\n\begin{align}\n <\phi_j,\phi_k>=0,\forall j \not = k,\;K=D,\n\end{align}\nsometimes with the complementary constraint that $||\phi_k||=1,\forall k$, leading to an orthonormal basis. The orthogonality of the atoms allows exact reconstruction, leading to $E_n=0,\forall n$, as by definition one has \n\begin{equation}\label{eq1}\n \hat{x}_n=\sum_k \frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k,\forall n.\n\end{equation}\nGiven a dictionary $\Phi$ this decomposition is unique and thus we have $(\hat{\Phi}_K,x_n) \Rightarrow \hat{\bm{\alpha}}_n,\forall n$.\nHowever, while the existence of such a basis is guaranteed through the Gram-Schmidt process, it is not unique.\n\n\nOn the other hand, $\Phi_K$ can be an over-complete basis, the main difference being that now $K>D$, with the extreme case of $K=N$. 
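The exact-reconstruction property of an orthogonal basis described above is easy to check numerically. The following sketch (an illustration we add here, not part of the original derivation; the orthonormal atoms are simply taken from a QR factorization of a random matrix) computes the coefficients $<x,\phi_k>/||\phi_k||^2$ and recovers the input exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# a random orthonormal basis: the columns of Q from a QR factorization
Phi, _ = np.linalg.qr(rng.normal(size=(D, D)))
x = rng.normal(size=D)

# coefficients <x, phi_k> / ||phi_k||^2, then the reconstruction of Eq. (eq1)
alpha = (Phi.T @ x) / np.sum(Phi**2, axis=0)
x_hat = Phi @ alpha
```

With orthonormal columns the denominators are all one and `x_hat` coincides with `x` up to floating-point error, i.e. $E_n = 0$.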
Containing more atoms than the dimension of the space leads to interesting properties in practice such as sparse and independent features (ICA), clustering properties (K-means, NMF), and biological relevance, as it has been empirically shown that the visual and auditory cortices of many mammals contain over-complete dictionaries.\nYet, all those benefits coming from the redundancy of the atoms also lead to non-unique features $\bm{\alpha}_n$ even when the dictionary is kept fixed; thus we have that $(\hat{\Phi}_K,x_n) \not \Rightarrow \hat{\bm{\alpha}}_n$.\nAs a result, one has to solve the additional optimization problem of\n\begin{equation}\n \bm{\hat{\alpha}_n}=\argmin_{\bm{\alpha}\in \Omega \subset \mathbb{R}^K} ||x_n-\sum_k\alpha_k\hat{\phi}_k ||.\n\end{equation}\n\nConsequently, whether one is in a complete or over-complete dictionary setting, the optimization problems are always ill-posed due to the non-uniqueness of the solutions, forcing one to impose additional structure or constraints in order to reach well-posed problems. For the complete case, the main approach consists in imposing that as few atoms as possible are used, leading to PCA, a very powerful approach for dimensionality reduction and compression. For over-complete cases, different sparsity criteria are imposed on the features $\bm{\alpha}_n$ such as norm-(0,1,2). 
For a norm-0 we are in the Matching pursuit case, norm1 is sparse coding and norm2 is ridge regression.\nFor each of those cases many very efficient exact or iterative optimization algorithms have been developed to estimate $\\hat{\\Phi}_k$ and $\\hat{\\bm{\\alpha}}_n$ yet there still exist a conceptual gap between the two concepts and the two approaches are often seen as orthogonal.\n\n\nAs we have seen, both settings lead to different beneficial aspects, compression, easy of projection and reconstruction or sparsity\/clustering but more complex optimization problems, but, at a higher level, the signal processing community has always put a gap between those frameworks. As well put by Meyer in [CITE] one has to choose between encoding and representation.\n\n\nWe thus propose in this paper a novel approach allowing to have an over-complete dictionary yet given an input, a dynamic basis selection reduces it to an optimal complete dictionary without need for optimization in order to reconstruct. The selection is done without optimization in a forward manner leading to an efficient algorithm. This thus allows to inherit from all the properties induced by orthonormal basis while allowing adaptivity for better allowing to learn an over-complete basis with a nonlinear basis selection leading to an orthonormal basis when conditioned on the input. We also provide results from a low-dimensional manifold perspective and show that our approach perform nonparametric orbit estimation.\nWe validate on compression, dictionary learning tasks and clustering.\n\n\n\n\n\n\n\\section{Deep Residual Oja Network}\n\\subsection{Shallow Model}\nIs it possible to learn a dictionary inheriting the benefits or complete and over-complete dictionaries ? We present one solution here. \nWe first motivate the need for such a framework as well as present the general approach and notations. 
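The ill-posedness discussed above can be made concrete with a small sketch (our illustration; the random dictionary and input are arbitrary choices): for an over-complete dictionary, two distinct coefficient vectors reconstruct the same input exactly, since any null-space component of the dictionary can be added for free.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 3, 6                       # K > D: over-complete dictionary
Phi = rng.normal(size=(D, K))     # atoms as columns
x = rng.normal(size=D)

# minimum-norm coefficients reconstructing x exactly
alpha1, *_ = np.linalg.lstsq(Phi, x, rcond=None)

# adding a null-space direction of Phi gives different coefficients
# with the very same reconstruction
null_dir = np.linalg.svd(Phi)[2][-1]   # right-singular vector of a zero singular value
alpha2 = alpha1 + null_dir
```

Both `alpha1` and `alpha2` satisfy $x = \sum_k \alpha_k \phi_k$, illustrating why an extra criterion (sparsity, minimum norm, ...) is needed to make the features well defined.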
Throughout the next sections, the choice of the Oja name for the algorithm will become evident to the reader.\n\n\nKeeping the previously defined notation, we aim at learning an over-complete basis with $FK$ atoms, where $F>1$ is called the increase factor; note that $F=1$ leads to a complete basis, and $K=D$ unless otherwise defined. By definition, the following projection-reconstruction scheme\n\begin{equation}\n \hat{x}_n=\sum_k \frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k,\n\end{equation}\ncan not reach an error $E_n<\epsilon$ and in fact $E_n$ increases with $F$. One way to resolve this issue comes from an optimal basis selection point of view leading to\n\begin{equation}\label{loss1}\n E_n:=||x_n-\sum_k \delta_{n,k}\frac{<x_n,\hat{\phi}_k>}{||\hat{\phi}_k||^2}\hat{\phi}_k|| < \epsilon,\n\end{equation}\nwith $\delta_{n,k}\in \{0,1\},\forall n,k$ representing a mask selecting a subset of the dictionary $\hat{\Phi}_{FK}$ we denote by $\rho_{\delta_{n,.}}[\hat{\Phi}_{FK}]$. \n\iffalse\n\tiny\nAs we aim at finding a systematic way to perform this basis selection, there are few possible approaches: either selecting the top-$\kappa$ atoms $(S1)_\kappa$ w.r.t. their activation maps energy, or keep the ones above a given threshold via either soft-thresholding (S2-1) or hard-thresholding (S2-2). If one assumes some kind of concentration among the dictionary, another approach or local winner-takes-all (S3) can be used.\n\nWe first present the conditions to have optimal reconstruction for each of the strategies.\n\begin{theorem}\nFor an over-complete basis, $(S1)_\kappa$ is optimal if $\kappa=1$ and $F\Rightarrow \infty$ s.t. $\forall x \in \Omega, \exists k : \phi_k=x$, (S2) is optimal if all inputs are made of atoms with always the same energy, (S3) is optimal is the dictionary can be rewritten as blocks s.t. selecting one atom per block lead to a complete basis. 
\n\\end{theorem}\nThe strategies are thus\n\\begin{itemize}\n\\item $(S1)_\\kappa$ : $\\delta_{n,k}=1 \\iff Card(\\{j:||<||,j=1,..,FK,j\\not = k\\})>FK-\\kappa$\n\\item $(S2-1)$ : for this case in addition of applying a mask, there is a change in the value used to project back the atom, it is defined as \n\\begin{equation}\nE_n:=||x_n-\\sum_k \\delta_{n,k}(-b_k*sgn())\\hat{\\phi}_k|| < \\epsilon,\n\\end{equation}\nand with $\\delta_{n,k}=1 \\iff $\n\\item $(S2-2)$ : $\\delta_{n,k}=1 \\iff ||>b_k$ where $b_k$ is a atom dependent threshold value either learned or imposed.\n\\item $(S3)$ : Let first partition $\\Phi_{FK}$ into an union of $D$ sets of $F$ atoms we denote $\\Phi'_{d},d=1,...,D$ as \n\\begin{align}\n \\Phi_{FK} :=\\big[ \\Phi'_{1}\\vline \\dots \\vline \\Phi'_{D} \\big],\n\\end{align}\nwith \n\\begin{equation}\n \\Phi'_d=\\big[\\phi_{Fk} \\vline \\dots \\vline \\phi_{F(k+1)} \\big], \\phi_{.}\\in \\mathbb{R}^D,\n\\end{equation}\nlet rewrite $\\delta_{n,d,f}$ be the indicator function applying to the $n^{th}$ example and $f^{th}$ filter of $\\Phi'_d$. We thus have\n\\[\n\\hat{\\delta}_{n,d,f}=1 \\iff f=\\argmax_{\\phi \\in \\hat{\\Phi}'_d} ||, \\forall d\n\\]\n\\end{itemize}\nWe are interested in this paper into the $(S1)$ strategy.\n\nFor (S1) the assumption of having a very large number of atoms is intractable. Or is it ? 
We now propose a way to achieve this and present results showing optimality of (S1) over other strategies, yet, we first present the framework allowing to have efficiently a humongous number of atoms.\n\n\normalsize\n\n\fi\n\n\n\n\n\subsubsection{Error Bounds, Learning and Link with Oja Rule}\nWe first provide an error upper-bound for the proposed scheme $(S1)_{\kappa=1}$.\nTo simplify notation we also define by $\phi_\kappa(x_n)$ the atom given by\n\begin{equation}\n\phi_\kappa(x_n)=\phi_{k'},k'=\argmax_k \frac{|<x_n,\phi_k>|^2}{||\phi_k||^2}.\n\end{equation} \n\begin{theorem}\nThe error induced by $(S1)_{\kappa=1}$ is $||x_n||^2-\frac{|<x_n,\phi_\kappa(x_n)>|^2}{||\phi_\kappa(x_n)||^2}$, which is simply the reconstruction error from the best atom, since only one filter is used.\n\end{theorem}\n\begin{proof}\nBy the incomplete basis theorem, there exists an orthogonal basis containing the atom $\phi_{\kappa}$; we denote such a basis by $\phi_k$, $k=1,...,D$. We thus have\n\begin{align}\nE_n=&|| x_n- \frac{<x_n,\phi_\kappa(x_n)>}{||\phi_\kappa(x_n)||^2}\phi_\kappa(x_n)||^2\nonumber\\\n=&|| \sum_k\frac{<x_n,\phi_k>}{||\phi_k||^2}\phi_k- \frac{<x_n,\phi_\kappa(x_n)>}{||\phi_\kappa(x_n)||^2}\phi_\kappa(x_n)||^2 &&\text{ incomplete basis theorem} \nonumber \\\n=&|| \sum_{k\not = \kappa}\frac{<x_n,\phi_k>}{||\phi_k||^2}\phi_k||^2\nonumber\\\n=&\sum_{k\not = \kappa}\frac{|<x_n,\phi_k>|^2}{||\phi_k||^2}\nonumber\\\n=&||x_n||^2-\frac{|<x_n,\phi_\kappa(x_n)>|^2}{||\phi_\kappa(x_n)||^2}&&\text{Parseval's Theorem}\nonumber\\\n=&||x_n||^2\Big(1-\cos(\theta(x_n,\phi_\kappa(x_n)))^2\Big)\label{en_eq}\n\end{align}\nAnd we have by definition $E_n\geq 0$, with $E_n=0 \iff x_n \propto \phi_\kappa(x_n)$.\n\end{proof}\nWe first make a few comments on the loss and updates. 
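The error expression of the theorem can be verified numerically. In this sketch (our illustration, not from the paper's experiments; dictionary and input are random), the best atom under the $(S1)_{\kappa=1}$ rule is selected and the residual of the one-atom reconstruction matches $||x||^2(1-\cos^2\theta)$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
D, FK = 5, 12
Phi = rng.normal(size=(D, FK))    # over-complete dictionary, atoms as columns
x = rng.normal(size=D)

# (S1)_{kappa=1}: keep the single atom maximizing <x, phi_k>^2 / ||phi_k||^2
scores = (Phi.T @ x) ** 2 / np.sum(Phi**2, axis=0)
phi = Phi[:, np.argmax(scores)]

# one-atom reconstruction and its squared error
x_hat = (x @ phi) / (phi @ phi) * phi
err = np.sum((x - x_hat) ** 2)

# theorem: E_n = ||x||^2 - <x, phi>^2 / ||phi||^2 = ||x||^2 (1 - cos^2 theta)
cos2 = (x @ phi) ** 2 / ((x @ x) * (phi @ phi))
```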
First of all, the loss is closely related to a k-means objective with cosine similarity, and specifically to spherical k-means, which is the case where the centers and the data points are re-normalized to unit norm, with the objective to minimize\n\begin{equation}\n \sum_n(1-\cos(x_n,p_{c(n)})).\n\end{equation}\n\n\n\subsubsection{Learning and Oja Rule}\nIn order to learn the filter-bank $\Phi_K$, a common approach is to use an alternating scheme between finding the cluster belongings and optimizing the atoms w.r.t. this estimate. We first derive a gradient descent scheme to update the atoms and study some of its characteristics.\n\nIf we now denote by $n(k):=\{n:n=1,...,N|\kappa(x_n)=k\}$ the collection of sample indices in cluster $k$, and the resulting loss by $E_{n(k)}:=\frac{\sum_{n \in n(k)}E_n}{Card(n(k))}$, we can now derive a gradient descent step as\n\begin{equation}\n \phi_k^{(t+1)}(\lambda)=\phi_k^{(t)}-\lambda \frac{d E_{n(k)}}{d \phi_k},\n\end{equation}\nwith \n\begin{align}\n \frac{d E_{n(k)}}{d \phi_k}&=\frac{1}{Card(n(k))}\sum_{n \in n(k)} \frac{2|<x_n,\phi_k>|}{||\phi_k||^2}\Big( \frac{|<x_n,\phi_k>|\phi_k}{||\phi_k||^2}-(-1)^{1_{<x_n,\phi_k> <0}}x_n \Big),\nonumber\\\n &=\frac{1}{Card(n(k))}\sum_{n \in n(k)} \frac{2<x_n,\phi_k>}{||\phi_k||^2}\Big( \frac{<x_n,\phi_k>\phi_k}{||\phi_k||^2}-x_n \Big).\label{oja_eq}\n\end{align}\nOn the other hand, if one adopts an adaptive gradient step $\lambda$ per atom and point with one of the two strategies $\lambda_1,\lambda_2$ defined as\n\begin{align}\n \lambda_1&=\frac{<x_n,\phi_k>}{2||x_n||^2}\\\n \lambda_2&=\frac{||\phi_k||^4}{2<x_n,\phi_k>^2}\n\end{align}\nthen we have\n\begin{align}\n \phi_k^{(t+1)}(\lambda_1)&=\phi_k^{(t)}-\frac{1}{\sum_n \cos(\theta(x_n,\phi_k))^2}\sum_{n \in n(k)}\cos(\theta(x_n,\phi_k))^2\Big( \frac{<x_n,\phi_k>\phi_k}{||\phi_k||^2}-x_n \Big),\label{eq_online1}\\\n \phi_k^{(t+1)}(\lambda_2)&=\frac{1}{Card(n(k))}\sum_{n \in n(k)}\frac{||\phi_k||^2}{<x_n,\phi_k>}x_n\label{eq_online2}\n\end{align}\nwe thus 
end up, in the $\\lambda_1$ case, with a simple update rule depending on a weighted average of the points in the cluster based on their squared cosine similarity, whereas for $\\lambda_2$ we obtain a rule a la convex NMF, which is a plain combination of the available points without any incremental update.\n\nOn the other hand, it is clear that minimizing $E_n$ from Eq. \\ref{en_eq} is equivalent to maximizing $E^+_n=\\frac{\\langle x_n,\\phi_\\kappa(x_n)\\rangle^2}{||\\phi_\\kappa(x_n)||^2}$. As a result, one can recognize in Eq. \\ref{oja_eq} the Oja rule, as we can rewrite a gradient ascent update of $\\phi_k$ as\n\\begin{align}\n \\phi_k^{(t+1)}&=\\phi^{(t)}_k+\\gamma \\frac{d E^+_n}{d \\phi_k}(\\phi^{(t)}_k)\\\\\n \\phi_k^{(t+1)}&=\\phi^{(t)}_k+\\gamma \\Big( x_n\\frac{\\langle x_n,\\phi_k\\rangle}{||\\phi_k||^2}-\\Big(\\frac{\\langle x_n,\\phi_k\\rangle}{||\\phi_k||^2}\\Big)^2\\phi_k \\Big)\\label{eq_online3}\n\\end{align}\nknown as the Oja rule.\nIn fact, the convergence of the Oja rule toward the leading eigenvector-eigenvalue pair is not surprising, as $E^+_{n(k)}$ leads explicitly to \n\\begin{equation}\n \\phi_k=\\argmax_{\\phi}\\frac{1}{Card(n(k))}\\frac{\\phi^TX(k)^TX(k)\\phi}{\\phi^T\\phi},\\label{eq_pca}\n\\end{equation}\nwhich is the Rayleigh quotient and is a formulation of PCA, whose global optimum, reached in one step, is the leading eigenvector-eigenvalue pair.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=5in]{time_mnist.png}\n\\end{figure}\n\n\n\\begin{pseudocode}[doublebox]{Filter-Bank Learning strategy}{X,K}\n\\text{Initialize }\\Phi_K\\\\\n\\WHILE \\text{not converged} \\DO\n\\BEGIN\n\\FOR k \\GETS 1 \\TO K \\DO\n\\BEGIN\n\\text{Compute }n(k) \\text{ with current }\\Phi_K\\\\\n\\text{Update }\\phi_k \\text{ with }n(k) \\text{ and }X(k)\\text{ according to Eq. 
\\ref{eq_pca}}\\\\\n\\END\\\\\n\\END\\\\\n\\RETURN{\\Phi_k}\n\\end{pseudocode}\n\n\\begin{pseudocode}[doublebox]{Online Filter-Bank Learning strategy}{X,K}\n\\text{Initialize }\\Phi_K\\\\\n\\WHILE \\text{not converged} \\DO\n\\BEGIN\n\\FOR n \\GETS 1 \\TO N \\DO\n\\BEGIN\n\\kappa = \\argmax_k \\frac{\\langle x_n,\\phi_k\\rangle^2}{||\\phi_k||^2||x_n||^2}\\\\\n\\text{Update }\\phi_\\kappa \\text{ according to Eq. \\ref{eq_online1} or Eq.\\ref{eq_online2} or Eq.\\ref{eq_online3}}\\\\\n\\END\\\\\n\\END\\\\\n\\RETURN{\\Phi_k}\n\\end{pseudocode}\n\n\n\n\n\n\n\\begin{theorem}\nIf the $x_n$ are uniformly distributed in the space, the optimal over-complete basis for $(S1)_{\\kappa=1}$ is the one corresponding to a quantization of the sphere; it is unique up to a change of sign and global rotations (the same rotation applied to each atom). For the $2$-dimensional case, it is easy to see that the maximum error for any given point $x_n$ is exactly upper-bounded by $||x_n||^2\\Big(1-\\cos(\\frac{\\pi}{2FK})^2\\Big)$; maintaining this bound in dimension $D$ requires a number of atoms growing exponentially with $D$.\n\\end{theorem}\n\\begin{proof}\nFor the 2D case we know the upper bound is $||x||^2\\Big(1-\\cos(\\frac{\\pi}{2FK})^2\\Big)$ with $FK$ atoms; we thus rewrite \n\\begin{align*}\n||x-\\hat{x}||^2=\\sum_{d=1}^{D\/2}||x_d-\\hat{x}_d||^2\n\\end{align*}\nand see that in order to reach the upper bound in each subspace we need the cartesian product of all the subspace bases $\\Phi_{FK}$, leading to $(FK)^{D\/2}$ atoms. Thus one needs to grow the number of atoms exponentially w.r.t. the dimension for the loss to increase only linearly.\n\\end{proof}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=5in]{error_bound.png}\n\\end{figure}\n\n\n\n\nHowever, this pessimistic upper bound assumes the worst possible scenario: a uniform distribution of the data points in the space $\\mathbb{R}^D$, which in general is false. 
In fact, many datasets have inherent structure and lie only in a small, sometimes regular, subset of $\\mathbb{R}^D$.\nIn general, data might live in unions of subsets, and providing a general strategy or an optimal basis is thus a priori more complex, which pushes the need to learn the atoms, as is done in general for k-means applications.\n\\begin{theorem}\\label{th5}\nA sufficient condition for $(S1)_{\\kappa=1}$ to be optimal is that the data are already clustered along $FD$ lines or orbits.\n\\end{theorem}\nWe now present one way to tackle the curse of dimensionality in the next section.\n\n\n\n\\subsection{Multiple Atoms}\n\\begin{equation}\n E_n=|| x_n- \\sum_{k=1}^K\\frac{\\langle x_n,\\phi^k_\\kappa(x_n)\\rangle}{||\\phi^k_\\kappa(x_n)||^2}\\phi^k_\\kappa(x_n)||^2\n\\end{equation}\nTo learn the atoms one after the other, a la coordinate descent, we have \n\\begin{align*}\n \\hat{\\phi}^{k'}_j=& \\argmin_{\\phi^{k'}_j}\\sum_n E_n\\\\\n =&\\argmin_{\\phi^{k'}_j}\\sum_{n \\in n(k,j)} || \\Big(x_n- \\sum_{k=1,k\\not = k'}^K\\frac{\\langle x_n,\\phi^k_\\kappa(x_n)\\rangle}{||\\phi^k_\\kappa(x_n)||^2}\\phi^k_\\kappa(x_n)\\Big)-\\frac{\\langle x_n,\\phi^{k'}_j\\rangle}{||\\phi^{k'}_j||^2}\\phi^{k'}_j||^2\n\\end{align*}\nAs shown in the last section, we end up with the same update rule, but with the other selected atoms subtracted from the input. Thus we still perform PCA, but on the input minus the other atoms. 
Note that this ensures orthogonality between the atoms.\n\n\\subsection{From Shallow to Deep Residual for Better Generalization Error Bounds and Combinatorially Large Dictionaries}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=5in]{laroue.png}\n\\end{figure}\n\n\n\n\n\n\nWe now consider the analysis of the generalization performance on out-of-bag observations, as well as the problem of handling very large datasets.\n\n\\begin{theorem}\nIf we suppose a finite training set and an Oja Network with sufficiently many filters to reach a training error of $0$, then the generalization error is directly lower-bounded by how close the testing and training examples are. In fact\n\\begin{equation}\n E_{new}\\propto 1-\\cos(\\theta(x_\\kappa,x_{new}))^2,\\quad\\kappa=\\argmax_n \\cos(\\theta(x_n,x_{new})).\n\\end{equation}\n\\end{theorem}\n\nThe proof is straightforward, as the network is able to perfectly reconstruct the training set and only the training set.\nAs a result, if the training set is well and uniformly sampled among the space of possible observations, a shallow Oja Network can be considered optimal also for the testing set.\nHowever, and especially for computer vision tasks, it is well known that observations are very far from each other in terms of distance when dealt with in the pixel domain; moreover, requiring a proper sampling of the space of images is clearly unrealistic.\nWe thus now present the result motivating deep architectures in general, including the Oja Network.\n\n\\begin{align}\nR^{(l)}_n=&R^{(l-1)}_n-\\frac{\\langle R^{(l-1)}_n,\\phi_\\kappa^{(l)}\\rangle}{||\\phi_\\kappa^{(l)}||^2}\\phi_\\kappa^{(l)}, \\quad \\kappa = \\argmax_k \\frac{\\langle R^{(l-1)}_n,\\phi_k^{(l)}\\rangle^2}{||R^{(l-1)}_n||^2||\\phi_k^{(l)}||^2} \\nonumber \\\\\nR^{(0)}_n=&x_n\n\\end{align}\n\n\nAs a result, as soon as the input and the template are not orthogonal, there is convergence. 
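The layer-wise residual recursion above is easy to sketch numerically. The following is a minimal NumPy illustration (the random dictionaries, dimensions, and function name are assumptions made for the example, not the learned filters of the paper): at each layer the best-matching atom is selected by squared cosine similarity, its projection is stored as the layer template $P^{(l)}_n$, and the residual is updated.

```python
import numpy as np

def residual_oja_forward(x, dictionaries):
    """Greedy residual decomposition: at each layer, select the atom with
    the largest squared cosine similarity and subtract its projection."""
    r = x.astype(float)
    parts = []
    for Phi in dictionaries:          # Phi: (K, D), one atom per row
        scores = (Phi @ r) ** 2 / np.sum(Phi ** 2, axis=1)
        k = int(np.argmax(scores))
        proj = (Phi[k] @ r) / (Phi[k] @ Phi[k]) * Phi[k]
        parts.append(proj)            # layer template P^{(l)}_n
        r = r - proj                  # residual R^{(l)}_n
    return r, parts

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((4, 8)) for _ in range(3)]  # illustrative layers
x = rng.standard_normal(8)
r, parts = residual_oja_forward(x, dicts)
# the residual norm can only decrease with depth ...
assert np.linalg.norm(r) <= np.linalg.norm(x)
# ... and residual plus flattened template recovers the input exactly
assert np.allclose(x, r + np.sum(parts, axis=0))
```

The second assertion is the flattening identity $x_n = R^{(L)}_n + \sum_l P^{(l)}_n$, which holds by telescoping regardless of the dictionaries used.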
\n\\begin{theorem}\nSince we have by definition that the selected atom is the one with smaller angle, if it is $0$ it means that the input $R^{(l-1)}$ is orthogonal to all the learned dictionary $\\Phi^{(l)}$\n\\begin{equation}\n\\cos \\Big(\\theta (R^{(l-1)}_n,\\phi^{(l)}_\\kappa) \\Big)^2=0 \\iff R^{(l-1)} indep \\phi^{(l)}_k \\forall k,\n\\end{equation}\nand thus they live in two orthogonal spaces.\nTO PROVE : ALL THE NEXT ONES ARE ALSO 0 \n\\end{theorem}\n\n\\begin{theorem}\nThe residual decreases exponentially w.r.t. the depth of the model.\n\\end{theorem}\n\\begin{proof}\n\\begin{align}\n||R^{(l)}_n||^2=&||R^{(l-1)}_n-\\frac{}{||\\phi_\\kappa^{(l)}||^2}\\phi_\\kappa^{(l)}||^2 \\nonumber \\\\\n=&||R^{(l-1)}_n||^2-\\frac{^2}{||\\phi_\\kappa^{(l)}||^2}\\nonumber \\\\\n=&||R^{(l-1)}_n||^2\\Big(1-\\cos \\Big(\\theta(R^{(l-1)}_n,\\phi_\\kappa^{(l)}) \\Big)^2\\Big)\\\\\n=&||x_n||^2\\prod_{l=1}^l\\Big(1-\\cos \\Big(\\theta(R^{(l-1)}_n,\\phi_\\kappa^{(l)}) \\Big)^2 \\Big)\n\\end{align}\n\\end{proof}\n\nThe final template can be flattened via\n\\begin{align}\nT_n =& \\sum_l \\frac{}{||\\phi_\\kappa^{(l)}||^2}\\phi_\\kappa^{(l)}\\\\\n=&\\sum_l P^{(l)}_n\n\\end{align}\n\n\\subsubsection{Learning}\nComputing the gradient finds a great recursion formula we define as follows:\n\\begin{align}\n A_{i,j}=\\left\\{ \\begin{matrix}\n 0 \\iff jI_d+R^{(i-1)}\\phi^{(i)}_\\kappa}{||\\phi^{(i)}_\\kappa||^2}+\\frac{2\\phi^{(i)}_\\kappa\\phi^{(i)T}_\\kappa}{||\\phi^{(i)}_\\kappa||^4} \\iff i=j\\\\\n A_{i,j-1}-\\frac{\\phi^{(j)}_\\kappa\\phi^{(j)T}_\\kappa A_{i,j-1}}{||\\phi^{(j)}_\\kappa||^2}\\iff j>i\n \\end{matrix} \\right.\n\\end{align}\nthus $A_{i,j} \\in \\mathbb{R}^{D \\times D}$ then we have\n\\begin{align}\n \\mathcal{L}_n=&||R^{(L)}_n||,\\\\\n \\frac{\\textbf{d} \\mathcal{L}^2}{\\textbf{d} \\phi^{(l)}_\\kappa}=&2R^{(L)}_n\\frac{\\textbf{d} R^{(L)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}\\\\\n \\end{align}\nHowever as we will see below we have a nice recursive definition to compute 
all those derivatives, in fact\n\\begin{equation}\\text{Init. }\n\\begin{cases}\n \\frac{\\textbf{d} P^{(l)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}&=\\frac{\\langle R^{(l-1)}_n,\\phi^{(l)}_\\kappa\\rangle I_d+\\phi^{(l)}_\\kappa R^{(l-1)T}_n}{||\\phi^{(l)}_\\kappa||^2}-\\frac{2\\langle R^{(l-1)}_n,\\phi^{(l)}_\\kappa\\rangle\\phi^{(l)}_\\kappa\\phi^{(l)T}_\\kappa}{||\\phi^{(l)}_\\kappa||^4},\\\\\n \\frac{\\textbf{d} R^{(l)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}&=-\\frac{\\textbf{d} P^{(l)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}\n\\end{cases}\n\\end{equation}\n\n\n\\begin{equation}\\text{Recursion }\n\\begin{cases}\n \\frac{\\textbf{d} P^{(l+1)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}&=\\frac{\\phi^{(l+1)}_\\kappa\\phi^{(l+1)T}_\\kappa}{||\\phi^{(l+1)}_\\kappa||^2}\\frac{\\textbf{d} R^{(l)}_n}{\\textbf{d} \\phi_\\kappa^{(l)}},\\\\\n \\frac{\\textbf{d} R^{(l+1)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}&=\\frac{\\textbf{d} R^{(l)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}-\\frac{\\textbf{d} P^{(l+1)}_n}{\\textbf{d} \\phi^{(l)}_\\kappa}\n\\end{cases}\n\\end{equation}\n\n\n\\begin{pseudocode}[doublebox]{Residual Oja Network}{X,K}\nR_n \\GETS X_n, \\forall n \\\\\n\\FOR l \\GETS 1 \\TO L \\DO\n\\BEGIN\n\\text{Initialize }\\Phi^{(l)}_K \\text{ from }R\\\\\n\\WHILE \\text{not converged} \\DO\n\\BEGIN\n\\FOR k \\GETS 1 \\TO K \\DO\n\\BEGIN\n\\text{Compute }n(k) \\text{ with current }\\Phi^{(l)}_K\\\\\n\\text{Update }\\phi^{(l)}_k \\text{ with }n(k) \\text{ and }R(k)\\text{ according to Eq. \\ref{eq_pca}}\\\\\n\\END\\\\\n\\END\\\\\nR_n = R_n-\\frac{\\langle R_n,\\phi^{(l)}_\\kappa\\rangle}{||\\phi^{(l)}_\\kappa ||^2}\\phi^{(l)}_\\kappa\n\\END\\\\\n\\RETURN{\\Phi^{(l)}_k, \\forall l}\n\\end{pseudocode}\n\n\n\\begin{pseudocode}[doublebox]{Online Residual Oja Network}{X,K}\nR_n \\GETS X_n, \\forall n \\\\\n\\FOR l \\GETS 1 \\TO L \\DO\n\\BEGIN\n\\text{Initialize }\\Phi^{(l)}_K \\text{ from }R\\\\\n\\WHILE \\text{not converged} \\DO\n\\BEGIN\n\\FOR n \\GETS 1 \\TO N \\DO\n\\BEGIN\n\\kappa = \\argmax_k \\frac{\\langle R_n,\\phi^{(l)}_k\\rangle^2}{||\\phi^{(l)}_k||^2||R_n||^2}\\\\\n\\text{Update }\\phi^{(l)}_\\kappa \\text{ according to Eq. 
\\ref{eq_online1} or Eq.\\ref{eq_online2} or Eq.\\ref{eq_online3}}\\\\\n\\END\\\\\n\\END\\\\\nR_n = R_n-\\frac{\\langle R_n,\\phi^{(l)}_\\kappa\\rangle}{||\\phi^{(l)}_\\kappa ||^2}\\phi^{(l)}_\\kappa\n\\END\\\\\n\\RETURN{\\Phi^{(l)}_k, \\forall l}\n\\end{pseudocode}\n\n\n\n\\begin{theorem}\nWith a Deep (Oja) Network, the previously presented lower bound of the generalization error becomes an upper bound.\n\\end{theorem}\nIn addition to guaranteeing better generalization errors through depth, we also benefit from another gain. As we will see, depth allows an exponential number of possible templates to be constructed perfectly with only a linear increase in the number of learned parameters.\n\n\n\\begin{lstlisting}[caption=Input to Mask]\n####################\n# INPUT: X(N,channels,Ix,Jx),w(n_filters,channels,Iw,Jw)\n####################\nk = T.nnet.conv.conv2d(x,w,stride=stride,\n border_mode='valid',flip_filters=False,input_shape=(N,channels,Ix,Jx),\n filters_shape=(n_filters,channels,Iw,Jw))#(N,n_filters,(Ix-Iw)\/stride+1,(Jx-Jw)\/stride+1)\noutput = ((k>0)*2-1)*T.signal.pool.max_pool_2d_same_size(\n\t\ttheano.tensor.abs_(k).dimshuffle([0,2,3,1]),\n\t\t(1,n_filters)).dimshuffle([0,3,1,2])#(N,n_filters,(Ix-Iw)\/stride+1,(Jx-Jw)\/stride+1)\nmask = T.switch(T.eq(output,0),0,1)#(N,n_filters,(Ix-Iw)\/stride+1,(Jx-Jw)\/stride+1)\n\\end{lstlisting}\n\n\\begin{lstlisting}[caption=Reconstruction]\n####################\n# INPUTS: Z(N,n_filters,Iz,Jz),w(n_filters,channels,Iw,Jw),stride\n####################\n# dilate Z by the stride, then convolve with the transposed filters\ndilated_Z = T.set_subtensor(T.zeros((N,n_filters,(Iz-1)*stride+1,(Jz-1)*stride+1),\n\t\tdtype='float32')[:,:,::stride,::stride],Z)#(N,n_filters,Ix-Iw+1,Jx-Jw+1)\nrec = T.nnet.conv.conv2d(dilated_Z,w.dimshuffle([1,0,2,3]),stride=1,\n border_mode='full',flip_filters=False)#(N,channels,Ix,Jx)\n\\end{lstlisting}\n\n\\begin{lstlisting}[caption=Mask to Grad]\n###################\n# INPUT : rec(N,C,Ix,Jx),mask(N,n_filters,Iz,Jz),Iw,Jw\n###################\nd_W,updates=theano.scan(fn=lambda i,acc,rec,mask:\n 
 acc+conv2d(rec[i].dimshuffle([0,'x',1,2]),mask[i].dimshuffle([0,'x',1,2]),\n input_shape=(C,1,Ix,Jx),\n filter_shape=(n_filters,1,Iz,Jz)).dimshuffle([1,0,2,3]),\n sequences=[theano.tensor.arange(N,dtype='int32')],\n non_sequences=[rec,mask],outputs_info = T.zeros((n_filters,C,Iw,Jw),dtype='float32'))\nd_W = d_W[-1]\n\\end{lstlisting}\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=5in]{error_energy.png}\n\\caption{Top: evolution of the reconstruction error w.r.t. epochs. Bottom: evolution of the energy captured per level during training. At the first stage the last levels capture everything, since the random initialization makes the global filters almost orthogonal to images; during training, the global filters learn to capture the low frequencies. Since it is known that natural images have a $1\/f$ decay of energy over frequencies $f$, we can see that the final energy repartition is indeed larger for low-frequency\/global filters and decreases for smaller filters.}\n\\end{figure}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=5in]{rec_example.png}\n\\caption{Example of decomposition and reconstruction of some CIFAR10 images. From right to left: the final residual (reconstruction minus original), the original image, the reconstructed images, and then all the decompositions, with the global\/large one on the left. Summing columns $1$ to $8$ elementwise leads to column $9$, the reconstructed input.}\n\\end{figure}\n\n\n\\subsection{Previous Work}\nThe proposed method can be seen as bilinear sparse coding with a one-hot latent vector $y$ \\cite{grimes2005bilinear} in the case where only one filter is used. There is also a direct link with the probabilistic version of this work, namely the mixture of PPCA \\cite{tipping1999probabilistic,tipping1999mixtures}, as here we are in a ``hard clustering'' case, similar to k-means versus GMM. 
\nThrough the selection of the best-matching atom, we find links with matching pursuit \\cite{tropp2007signal,pati1993orthogonal} and also with locality-sensitive hashing \\cite{indyk1998approximate,johnson1984extensions}, especially for the cosine similarity distance.\n\nThis problem can also be seen from a best basis selection point of view coupled with dictionary learning. \nPopular examples with varying degrees of computational overhead include convex relaxations such\nas $L_1$-norm minimization \\cite{beck2009fast,candes2006robust,tibshirani1996regression}, greedy approaches like orthogonal matching pursuit (OMP)\n\\cite{pati1993orthogonal,tropp2004greed}, and many flavors of iterative hard-thresholding (IHT) \\cite{blumensath2009iterative,blumensath2010normalized}.\nVariants of these algorithms find practical relevance in numerous disparate domains, including feature selection \\cite{cotter2002sparse,figueiredo2002adaptive}, outlier removal \\cite{candes2005decoding,ikehata2012robust}, compressive sensing \\cite{baraniuk2007compressive}, and source localization \\cite{baillet2001electromagnetic,model2006signal}.\n\n\\section{Conclusion}\nWe presented a hierarchical version of the deterministic mixture of PCA and reported results on CIFAR10 images. We also provided algorithms enabling fast GPU computation on large-scale datasets. The main novelty comes from the deterministic formulation of the probabilistic mixture of PCA, which is easier to use, as MPPCA is known in general to be unstable for large-scale problems. From this we derived its hierarchical residual version, which inherits many benefits and allows for reconstruction error decreasing exponentially w.r.t. the depth. 
We also believe that this residual approach, which allows learning orthogonal spaces, will lead to interesting dictionary learning schemes, mixing for example residual networks with this approach.\n\\iffalse\n\\section{Validation Results}\n\\subsection{MNIST}\nWe now present the reconstruction error on MNIST for different $F$ values, namely $1,4,8$. Note that for better comparison, we also provide the reconstruction error obtained by PCA when taking the best $KF$ projectors. Note that for the proposed approach, we always have $||\\alpha||_0=K$.\n\\begin{table}\n\\begin{tabular}{c|c|c|c|}\\hline\n &F=1&F=4&F=8\\\\\\hline\nPCA &0.000245 \\textbf{0.000246} & 6.08e-05 \\textbf{6.16e-05} & 1.92e-05 \\textbf{1.97e-05} \\\\ \\hline\nPROPOSED &0.000251 \\textbf{0.000252} & 0.000135 \\textbf{0.000136} & 0.000116 \\textbf{0.000117}\\\\ \\hline\n\\end{tabular}\n\\caption{Global Dictionary with $K=32$}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{c|c|c|c|}\\hline\n &F=1&F=4&F=8\\\\\\hline\nPCA &0.000517 \\textbf{0.000518}&0.000245 \\textbf{0.000246}&0.000133 \\textbf{0.000134} \\\\ \\hline\nPROPOSED &0.000538 \\textbf{0.000538}&0.000361 \\textbf{0.000362} & 0.000309 \\textbf{0.000310}\\\\ \\hline\n\\end{tabular}\n\\caption{Global Dictionary with $K=8$}\n\\end{table}\n\nWe also provide results below for the case of local patches without overlap (thus the global and local errors can be compared):\n\\begin{table}\n\\begin{tabular}{c|c|c|c|}\\hline\n &F=1&F=4&F=8\\\\\\hline\nPROPOSED (8,8) K=8 &0.000348 \\textbf{0.000347}& 0.000205 \\textbf{0.000205} & 0.000178 \\textbf{0.000177} \\\\ \\hline\nPROPOSED (8,8) K=32 &0.000123 \\textbf{0.000122}&9.68e-05 \\textbf{9.66e-05} & 9.47e-05 \\textbf{9.457e-05}\\\\ \\hline\n\\end{tabular}\n\\caption{Local Dictionary ($8\\times 8$ patches) with $K=8,32$}\n\\end{table}\n\n\nFor all the presented tables, the associated filters are in the appendix.\n\n\\subsection{CIFAR10}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe growing number of known exoplanet systems raises the question of\nhow to properly define the habitable zone around a star \\citep{Kasting:1993hw,Barnes:2011tc}. Its\ndefinition depends on the interactions existing\nbetween a planet and its host star, which are gravitational (tidal forces),\nmagnetic (wind-planet interactions, hereafter referred to as SPMI) and\nradiative (\\textit{e.g.}, stellar EUV ionisation flux).\nMagnetized interactions between a star and its orbiting planets have\nrecently been suggested to be at the origin of a possibly enhanced planet detectability\n\\citep{Jardine:2008ec,Fares:2010hq,Miller:2012gq}. In the case of a\nclose-in planet, these interactions may also be at the origin of anomalous stellar\nmagnetic activity \\citep{Cuntz:2000ef,Lanza:2008fn,Donati:2008hw}. It\nwas also suggested that they could affect the star-planet rotational\nevolution\n\\citep{Laine:2008dx,Pont:2009ip,Cohen:2010jm,Vidotto:2010iv,Lanza:2010bo}. Theoretical \nwork is needed to better understand SPMIs.\n\nBased on pioneering work done in the context of the satellites of Jupiter\n\\citep{Goldreich:1969kf,Kivelson:2004vf}, \\citet{Laine:2011jt} built\nan analytical model describing the various\ncomponents of SPMIs in the case of unmagnetized\nplanets. Pursuing the same goal, \\citet{Lanza:2013gj} also developed\nsemi-analytical models of SPMIs in the context of magnetized\nplanets. However, a systematic numerical validation of those models still\nremains to be properly done \\citep[see][for first\nsteps towards such a validation]{Ip:2004ba,Cohen:2011gg}. 
\n\nFocusing on close-in planets, the SPMIs include magnetic reconnection,\nmagnetic field diffusion at\nthe stellar surface and in the planet vicinity,\nradiation and ionisation processes in the planetary magnetosphere and\nmagneto-sonic wave propagation.\nA numerical investigation of SPMI requires a careful description\nof those physical processes although it is generally not possible to\ntreat all of them simultaneously with a unique model. Hence,\nspecific strategies such as dedicated boundary conditions have to be\ndeveloped to study SPMI from a global point of view. We detail in this\nwork how to develop both stellar (section \\ref{sec:star-bound-cond})\nand planetary (section \\ref{sec:plan-bound-cond}) boundary conditions\nto globally model the different SPMI cases,\nwithin the MHD formalism.\n\n\\section{Stellar boundary conditions}\n\\label{sec:star-bound-cond}\n\n\\begin{figure}[b]\n\\begin{center}\n \n \\includegraphics[width=0.9\\linewidth]{fig1} \n \\caption{\\textbf{(a)} Schematic of the multi-layer boundary condition\n ensuring good conservation properties of the MHD solution as well as\n reactivity to external stimuli. Fixed quantities are forced to the\n Parker wind solution. The subscript $p$ stands for the poloidal\n component $(\\varpi,z)$ of vector in cylindrical\n coordinates. \\textbf{(b)} Typical wind solution \n used for SPMI. The color map represents the logarithmic density, the white lines\n the poloidal magnetic field lines. The slow and fast Alfv\\`en surfaces are labeled\n by the dashed lines, and the arrows show velocity field. The stellar\n surface is labeled by a black quarter of a circle. The axes are in\n stellar radius units. \\textbf{(c)}\n Effective rotation rate as a function of the streamfunction\n $\\psi$ for good (blue dots) and bad (black dots) boundary conditions. The red dashed horizontal line labels the stellar rotation\n rate. 
Low values of $\\psi$ correspond to open polar field lines and\n larger values of $\\psi$ to closed equatorial field lines. Each dot corresponds to a grid point.}\n \\label{fig:fig1}\n\\end{center}\n\\end{figure}\n\nWe model stellar winds following numerous previous\nanalytical and numerical studies\n\\citep{Weber:1967kx,Washimi:1993vm,Ustyugova:1999ig,Keppens:2000ea,Matt:2004kd,Matt:2012ib}. We use standard\nMHD theory to numerically model with the PLUTO code\n\\citep{Mignone:2007iw} magnetized steady state flows \nanchored at the surface \nof a rotating star. We model winds driven by the thermal pressure of\nthe stellar corona in a 2D axisymmetric cylindrical geometry \\citep[see][for a more detailed description of the MHD\nmodel we use]{Strugarek:2012th}. \n\nThe steady-state wind solution can depend very sensitively on the\ntype of boundary conditions that are imposed under the stellar\nsurface. Because we want to use our model to study SPMIs, the stellar boundary\nconditions have to be able to both react and adapt to external stimuli\noriginating from the orbiting planet. The design of a boundary\ncondition satisfying those two conditions, and its associated stellar\nwind solution, are displayed in panels (a) and (b) of fig. \\ref{fig:fig1}.\n\n\nWe developed a layered boundary condition over which the stellar wind\ncharacteristics are progressively enforced as we go deeper\nunder the stellar surface. This boundary condition ensures \nvery good conservation properties \\citep{Lovelace:1986kd,Zanni:2009kc}\nalong the magnetic field lines. This \nis exemplified in panel (c) of fig. \\ref{fig:fig1}. We display \nthe effective rotation rate $\\Omega_{\\mbox{eff}} \\equiv \\frac{1}{\\varpi}\\left(v_\\phi\n -\\frac{v_p}{B_p}B_\\phi \\right)$ as a function of the streamfunction\n$\\psi$ generating the poloidal magnetic field. In a steady-state,\nideal MHD wind, $\\Omega_{\\mbox{eff}}$ should be constant along each\nfield line and equal to $\\Omega_{\\star}$. 
The blue dots\ncorrespond to the boundary condition described in panel (a), and the\nblack dots to a case where $B_{\\phi}$ is set to 0 at all latitudes in\nthe third boundary level. We observe that the target\nstellar rotation rate (dashed horizontal red line) is\nrecovered only with the correct boundary conditions. Conservation errors\nexist at the open-closed field lines\nboundary ($\\psi \\sim 0.23$), but they remain confined to very few grid\npoints in the simulation domain. Finally, this boundary condition is intrinsically\nable to react to a perturbation by a planet orbiting a star by,\n\\textit{e.g.}, modifying the stellar wind topology.\nWe discuss now the importance of\nplanetary boundary conditions when studying SPMIs.\n\n\\section{Planetary boundary conditions}\n\\label{sec:plan-bound-cond}\n\n\\begin{figure}[b]\n\\begin{center}\n \n \\includegraphics[width=0.9\\linewidth]{fig2_2}\n \\caption{Zoom on planetary boundary conditions effects for dipolar (upper panels) and\n unipolar (lower panels) interactions. The color map represents the\n gas pressure in logarithmic scale,\n and the white lines the magnetic field lines. The planet surface is\n labeled by a black circle at $1$ stellar radius. Panels (a) and (b) show the fiducial dipolar case,\n and panel (c) is the unrealistic case of a planet with a very high\n internal pressure. Panel (d) represents a\n Venus-like interaction and panels (e) and (f) two Io-Jupiter-like interactions.}\n \\label{fig:fig2}\n\\end{center}\n\\end{figure}\n\nSPMIs are generally decomposed in two categories: the so-called\nunipolar and dipolar interactions \\citep{Zarka:2007fo}, which refer to \nthe cases of unmagnetized and magnetized planets.\nBoth interactions can be modeled\nwithin the MHD formalism with an adequate boundary condition design. We\ndetail in this section how to design such boundary conditions. 
The\nexamples given here were all done for a planet with a radius of $r_p = 0.1\\,\nr_\\star$, a mass of $M_p=0.01\\, M_\\star$, an orbital radius\nof $r_{\\rm{orb}}=3r_\\star$ and a resolution of $0.03\\, r_p$ at the\nplanetary surface.\n\nWe consider the planet itself as a boundary condition. The PLUTO\ncode allows one to define internal domains as boundary conditions\nover which all variables can be altered during the model evolution. In\nall cases, we set the poloidal velocity to zero and the\nazimuthal velocity to the Keplerian velocity inside the planet. We\nalso set the density and pressure values inside the planet to \nfiducial values which are consistent with its gravity field. These\nvalues have to be carefully prescribed since they can trigger undesirable\neffects in the vicinity of the planet. We give an example of a\ndipolar case in panels (a) and (b) of fig. \\ref{fig:fig2} (the planetary\nmagnetic field is simply enforced in the planetary interior in this\ncase). A stable configuration is obtained when the magnetic pressure and the\ngas pressure equilibrate at the interface between the planetary\nmagnetosphere and the stellar wind. The ram pressure plays little role\nhere because the planet we consider is in the so-called\n\\textit{dead-zone} of the stellar wind, in which the poloidal\nvelocity is negligible. We show in panel (c) the exact same simulation for\nan extreme case where we multiplied the internal pressure of the\nplanet by a factor of $20$. The former pressure balance then fails and\na wind is driven from the planet itself. The planetary dipole opens up and a shock\neventually forms at the interface between the two\n``winds''. Such undesirable effects may also be obtained by varying the\ndensity of the interior of the planet. Hence, any SPMI model must be developed\nto minimize such undesirable effects in the final solution.\n\nModeling a planet in the unipolar case is a bit more complex than in the\ndipolar case. 
Two classes of unipolar interactions can indeed be\ndistinguished: the Venus-like interaction (case V) and the Io-Jupiter-like\ninteraction (case IJ). Note however that in both cases, we consider a\nplanet located inside the stellar wind dead-zone, at $r_{\\rm orb}=3\\, R_{\\star}$.\n\nIn case V, the ionisation of the planetary atmosphere \nby the stellar EUV radiation flux allows the creation of an\nionosphere which acts\nas a barrier between the stellar wind magnetic field and the \nunmagnetized interior of the planet \\citep{Russell:1993jk}. Depending\non the stellar wind conditions around the planet, an induced\nmagnetosphere may then be sustained on secular time scales. We show\ncase V in panel (d) of fig. \\ref{fig:fig2}. The\nionosphere is modeled as a very thin ($< 0.2\\,\nr_{p}$) highly conductive boundary layer under the \nplanetary surface. The wrapping of the magnetic field lines around the\nplanet \\citep{Russell:1993jk} is naturally recovered.\n\nIn case IJ, no ionosphere is created and the stellar wind magnetic field\npervades the interior of the planet. The SPMI then depends on the ratio of\nelectrical conductivities between the planetary interior and the stellar\nsurface where the magnetic field lines are \\textit{a priori}\nanchored. This ratio sets the effective drag the planet is able to\ninduce on the stellar wind magnetic field lines. We use the ability of\nthe PLUTO code to add extra ohmic diffusion in\nthe planet interior to model it, and show in\nfigure \\ref{fig:fig2} two extreme cases in which\nmagnetic field lines are dragged (panel e) or diffused (panel f) by\nthe planet. In all cases, we obtain a statistical steady state in\nwhich the SPMI can be analyzed in detail. \n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nWe showed in this work that it is possible to model the global,\nmagnetized and non-linear interactions between a star and\na planet, within the MHD formalism. 
It requires a careful\ndevelopment of adequate boundary conditions to represent\nthe various interaction cases. We showed that boundary conditions play\na very important role both at the stellar surface and in the planetary\ninterior. Steady-state solutions could be found in the dipolar case as\nwell as in both the Venus-like and Io-Jupiter-like unipolar\ncases. \n\nThe SPMI model we developed will be useful for exploring\nstable interaction configurations between a close-in planet and its\nhost star. In addition, it will enable quantitative predictions of the\nrotational evolution of star-planet systems due to the\neffective magnetic torques which develop in the context of dipolar and unipolar\ninteractions \\citep{Strugarek:2013uh}. Finally, such models\ncould also be used to study potential SPMI-induced emissions, which we\nwill analyze in a future work.\n\n\\acknowledgements\n\nWe thank A. Mignone and his team for making the PLUTO code\nopen-source. We thank A. Cumming, R. Pinto, C. Zanni and P. Zarka\nfor inspiring discussions on star-planet magnetized interactions. This\nwork was supported by the ANR TOUPIES and the ERC project\nSTARS2. We acknowledge access to supercomputers through GENCI project\n1623 and Prace infrastructures. A. Strugarek acknowledges support from\nCanada's Natural Sciences and Engineering Research Council. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis article is motivated by two perennial questions of approximation\ntheory: Assume that a finite number of samples of a continuous function $f$ on some\ncompact set is given, (i) find a good or optimal approximation of $f$\nfrom these samples and derive error estimates, and (ii) approximate an\nintegral $\\int f$ from these samples and derive error estimates, in\nother words, find a quadrature rule based on the given samples.\n\n We argue, completely in line with the tradition of approximation theory, that these\nquestions are best answered by means of \\mz\\ families and inequalities. \nRoughly speaking, a \\mz\\ family is a double-indexed set of points\n$\\xkn $ such that the sampled $\\ell ^2$-norm of the $n$-th layer $\\sum\n_{k} |p(\\xkn )|^2 $ is an equivalent norm for the space of ``polynomials''\nof degree $n$ with uniform constants.\nOur main result then shows that the existence of a \\mz\\ family already\nimplies (i) approximation theorems from pointwise samples of a function, and (ii)\nquadrature rules. This is, of course, folkore, and the content of an\nabundance of results in approximation theory and numerical analysis \non many levels of generality. In the literature \\mz\\ families are constructed for\nthe purpose of quadrature rules and approximation theorems~\\cite{FM10,MNW01},\nour main insight is that quadrature rules\nand approximation theorems follow automatically from a\n\\mz\\ family. It is one of our objectives to explain\nthis conceptual hierarchy: \\mz\\ families are first, then quadrature rules and\napproximation theorems come for free. \n\n\nThe main novelty of our contribution is a completely elementary\nderivation of approximation theorems and quadrature rules based on the \nexistence of a \\mz\\ family. 
This derivation is fairly simple and is based solely on the\nbasic definitions of \\mz\\ families, orthogonal projections, Sobolev\nspaces, and least squares problems. The assumptions are minimal and\nonly require an orthonormal basis $\\{\\phi _k\\}$ and an associated\nnon-decreasing sequence of ``eigenvalues'' $\\{\\lambda _k\\} \\subseteq\n\\bR ^+$. This set-up is similar to the one in~\\cite{FM10,FM11,MM08}. \n\nOur point of view is informed by the theory of non-uniform\nsampling of bandlimited functions and their discrete analogs developed\nin the 1990s by many groups~\\cite{BH90,FGtp94,fgs95,PST01,Sunw02}.\nIndeed, a sampling theorem is simply a \\mz\\ inequality (upper and\nlower) for a fixed function space, and some of the first \\mz\\\ninequalities for scattered points on the torus (or non-uniform sampling points)\nwere derived in this context~\\cite{Gro93c}. The method in this paper \nwas essentially developed in~\\cite{Gro99} for the local approximation of\nbandlimited functions by trigonometric polynomials from samples. \n\nSeveral technical aspects deserve special mention.\n\n(i) In general, the error estimates for quadrature rules depend on a\ncovering radius (or mesh size) or on the number of nodes that arise in a particular construction of a\n\\mz\\ family~\\cite{BCCG14,EGO17}. In our derivation the constants depend only on the\ncondition number of the \\mz\\ family and thus only on its definition.\n\n(ii) Whereas quadrature\nrules are often connected to \\mz\\ inequalities with respect to the\n$L^1$-norm~\\cite{FM10}, we derive such rules from \\mz\\ inequalities in\nthe $L^2$-norm by means of frame theory. 
The frame approach to\nquadrature rules is \nmotivated by a question of N.~Trefethen about convergence of\nthe standard quadrature rules after a perturbation of the uniform\ngrid; see~\\cite{TW14,AT17}.\n\n(iii) In several articles on \\mz\\ families \\cite{FM10,MM08,OP12} the polynomial growth of\nthe spectral function $\\sum _{k=1}^n |\\phi _k(x)|^2$ is used\nimplicitly or as a hypothesis. In our treatment\n(Lemma~\\ref{critval}), the growth of the spectral function is \nrelated to the critical Sobolev exponent and leads to explicit and \ntransparent error estimates. In hindsight the appearance\nof the spectral function is not surprising, as it is the reciprocal of\nthe Christoffel function associated to an orthonormal basis (or to a\nset of orthogonal polynomials), and is thus absolutely fundamental for\npolynomial interpolation and quadrature rules. See~\\cite{Nev86} for an extended survey. \n\n\\vspace{ 2mm} \nThe paper is organized as follows: the end of this introduction\nprovides a brief survey of related literature. In Section~2 we \nintroduce \\mz\\ families for the torus and prove the resulting\napproximation theorems and quadrature rules. In Section~3 we treat the\nsame question in more generality on a compact space with a given\northonormal basis and corresponding eigenvalues. With the appropriate\ndefinitions of Sobolev spaces and error terms, the formulation of the\nmain results and the proofs are then identical. In a sense Section~2\nis redundant, but we preferred to separate the proofs from the\nconceptual work. This separation reveals the simplicity of the\narguments more clearly. \n\n\n\n\\subsection{Discussion of related work}\nIt is impossible to do full justice to the extensive literature on \\mz\\ inequalities,\n approximation theorems, and quadrature rules; we will therefore\n mention only some aspects and apologize for any omissions. 
\n\nThe classical theory of \\mz\\ inequalities deals with the interpolation\nby polynomials and the associated quadrature rules and is surveyed\nbeautifully in~\\cite{Lub98}. An extended recent survey with a\ncomplementary point of view\nis contained in~\\cite{DTT19}. \n\n(i) \\emph{\\mz\\ families on the torus and trigonometric polynomials.} \n\\mz\\ inequalities for scattered nodes were considered in the theory of\nnonuniform sampling~\\cite{FGtp94}. An early example of \\mz\\\ninequalities on the torus is contained in the\nestimates of~\\cite[Thm.~4]{Gro93c}. \\mz\\ inequalities with respect to different\nmeasures were then studied in~\\cite{Erd99,Lub99,MT00,RS97}.\nA complete characterization of\n \\mz\\ families on the torus with respect to Lebesgue measure in terms of suitable\n Beurling densities was given by Ortega-Cerd\\`a\n and Saludes~\\cite{OS07}. \n\n (ii) The next phase concerned \\emph{\\mz\\ inequalities and quadrature\n on the sphere}: The goal of \\cite{MNW01} is the construction of\n good quadrature rules on the sphere via \\mz\\ inequalities; sufficient\n conditions for \\mz\\ families are obtained in~\\cite{MP14}. \n Necessary density conditions for \\mz\\ inequalities on the\n sphere are derived in~\\cite{Mar07}, and \\cite{MOC10} studies the\n connection between \\mz\\ families and Fekete points on the sphere.\n \\cite{HS06} derives worst case errors for quadratures on the sphere. \n\n (iii) \\emph{\\mz\\ families on metric measure spaces.} The most\n general constructions of \\mz\\ families and quadrature rules are due to Filbir and Mhaskar\n in a series of papers~\\cite{FM10,FM11,MM08,Mh18}. They work for metric measure spaces with\n Gaussian estimates for the heat kernel associated to an orthonormal\n basis. This theory includes in particular \\mz\\ families on compact\n Riemannian manifolds. A related theory is contained in~\\cite{FFP16}\n whose goal is the construction of frames for Besov spaces. 
Again,\n necessary density conditions for \\mz\\ families on compact Riemannian\n manifolds have been derived by Ortega-Cerd\\`a and\n Pridhnani~\\cite{OP12}. \n\n(iv) \\emph{Approximation of functions from samples via least squares:}\nIn general the approximating polynomials do not interpolate; therefore\nthe best approximation of a given function by a ``polynomial'' is\nobtained by solving a least squares problem. A \\mz\\ inequality then\nimplies bounds on the condition number of the underlying matrix. This\nconnection appears, among others, in~\\cite{Gro99,FM11} and is\nhighlighted in Proposition~\\ref{prop1}. Modern\nversions use random sampling to generate \\mz\\ inequalities. This\naspect was studied in~\\cite{BG04,SZ04} for random sampling in\nfinite-dimensional subspaces, and \\cite{CM17,ABC19,AC19} contain recent studies of the stability of\nleast squares reconstruction. We point out in particular~\\cite{CM17}\nwhere the Christoffel function is identified as the optimal weight for\na given probability measure. Finally we highlight the series of papers\non generalized sampling~\\cite{AH12,AHP13} as an alternative approach to the\napproximation of functions from finitely many linear measurements. In this\ncase the constants in the error estimates are formulated in terms of the angle between\nsubspaces rather than the condition number of the \\mz\\ family. \n\n\n\n\n\\section{Approximation of functions on the torus from nonuniform samples}\nAs a model example where the technique is completely transparent, we\nfirst deal with nonuniform sampling on the torus $\\bT $. On $\\bT$ the\napproximation spaces are the spaces $\\cT\n_n$ of trigonometric polynomials of degree $n$, i.e.,\n $p\\in \\cT _n$, if $p(x) = \\sum _{k=-n}^n c_k e^{2\\pi i k x}$. 
\n\n \\begin{definition} \\label{defmz}\n Let $\\cX = \\{ \\xkn : n\\in \\bN , k=1, \\dots , L_n \\}$ be a\n doubly-indexed set of points in $\\bT \\simeq (-1\/2,1\/2] $ and $\\tau\n = \\{ \\tkn : n\\in \\bN , k= 1, \\dots , L_n \\}\\subseteq (0,\\infty ) $ be a\n family of positive weights. Then $\\cX $ \n is called a \\mz\\ family, if there\n exist constants $A,B >0$ such that \n \\begin{equation}\n \\label{eq:1}\nA \\|p\\|_2^2 \\leq \\sum _{k=1} ^{L_n} |p(\\xkn )|^2 \\tau _{n,k} \\leq B\n\\|p \\|_2^2 \\qquad \\text{ \\emph{for all} } p \\in \\cT _n \\, .\n \\end{equation}\nThe ratio $\\kappa = B\/A$ is the global condition number of the \\mz\\\nfamily, and $\\cX _n =\\{ \\xkn : k=1, \\dots , L_n \\}$ is the $n$-th\nlayer of $\\cX $. \n\\end{definition}\n\nThe point of Definition~\\ref{defmz} is that the constants are uniform in the\ndegree $n$ and that usually $\\cX _n$ contains more than\n$\\mathrm{dim}\\, \\cT _n = 2n+1$ points, so\nthat it is not an interpolating set for $\\cT _n$. Weights are\nomnipresent in the classical theory of \\mz\\ inequalities~\\cite{Lub99};\n in sampling\ntheory they are used to improve condition numbers~\\cite{fgs95}, and\nin Fourier sampling they serve as density compensating factors. \nCurrently they play an important role in weighted least squares\nproblems in statistical estimation in~\\cite{CM17,ABC19,AC19}.\n\nGiven the samples $\\{ f(\\xkn ) \\}$ of a continuous function $f $ on\n$\\bT $ on the $n$-th layer $\\cX _n$, we first need to approximate $f$\nusing only these samples. For this we solve a sequence of least\nsquares problems with samples taken from the $n$-th layer $\\cX _n$: \n\\begin{equation}\n \\label{eq:8}\np_n = \\mathrm{argmin} _{p\\in \\cT _n} \\sum _{k=1} ^{L_n} |f(\\xkn ) -\np(\\xkn )|^2 \\tkn \\, .\n\\end{equation}\nThis procedure yields a sequence of trigonometric\npolynomials for every $f\\in C(\\bT )$. 
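As a numerical illustration (our sketch, not part of the paper's formal development), the following Python snippet builds a jittered layer on $\bT$ that plausibly satisfies the \mz\ inequalities, checks the frame bounds empirically via the matrix $T_n = U_n^*U_n$ introduced below, and solves the weighted least squares problem; the oversampling factor and jitter size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                       # degree of T_n
dim = 2 * n + 1             # dim T_n
L = 2 * dim                 # oversampled n-th layer (factor 2 is an assumption)

# Jittered grid on the torus: a plausible n-th layer X_n of a MZ family.
x = (np.arange(L) + 0.25 * rng.uniform(-1.0, 1.0, L)) / L
tau = np.full(L, 1.0 / L)   # equal weights tau_{n,k}

# Weighted Vandermonde-type matrix U_n and T_n = U_n^* U_n.
ls = np.arange(-n, n + 1)
E = np.exp(2j * np.pi * np.outer(x, ls))        # E[k, l] = e^{2 pi i x_k l}
U = np.sqrt(tau)[:, None] * E
T = U.conj().T @ U

# Empirical frame bounds: extreme eigenvalues of T_n.
eigs = np.linalg.eigvalsh(T)
A_hat, B_hat = eigs[0], eigs[-1]
print(f"A ~ {A_hat:.3f}, B ~ {B_hat:.3f}, condition number ~ {B_hat / A_hat:.3f}")

# Weighted least squares fit of sampled data; for f already in T_n
# the overdetermined system is consistent and the fit is exact.
c_true = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
y = np.sqrt(tau) * (E @ c_true)                 # y_k = tau_k^{1/2} f(x_k)
a = np.linalg.lstsq(U, y, rcond=None)[0]        # a_n = T_n^{-1} U_n^* y_n
print("exact recovery of f in T_n:", np.allclose(a, c_true))
```

For nodes this close to the uniform grid the empirical bounds come out near $1$; the exact-recovery check reflects that the least squares solution interpolates the data whenever the data already come from a polynomial in $\cT_n$.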
In general, these polynomials\ndo not interpolate the given $f$ on $\\cX _n$,\nbut they yield the best $\\ell ^2$-approximation of the data $\\{f(\\xkn )\\}$ by a\ntrigonometric polynomial in $\\cT _n$. Therefore $p_n$ is usually called a\nquasi-interpolant.\n\nThe question is now how the $p_n$'s approximate $f$ on all of $\\bT\n$. As always in approximation theory, the answer depends on the\nsmoothness of $f$. For this we use the standard Sobolev spaces $H^\\sigma\n(\\bT )$ with norm \n\\begin{equation}\n \\label{eq:2}\n \\|f\\| _{H^\\sigma } = \\Big(\\sum _{k\\in \\bZ } |\\fhat (k)|^2\n (1+k^2)^{\\sigma } \\Big)^{1\/2} \\, ,\n\\end{equation}\nwhere $\\fhat (k) = \\int _0^1 f(x)\ne^{-2\\pi i kx} \\, dx$ is the $k$-th Fourier coefficient of $f$. \n\nOur main theorem asserts the convergence of the\nquasi-interpolants of $f$. \n\n\\begin{tm} \\label{tm1}\nLet $\\cX $ be a \\mz\\ family with associated weights $\\tau $ and\ncondition number $\\kappa = B\/A$. \n\n(i) If $f\\in H^\\sigma $ for $\\sigma >1\/2$, then \n \\begin{equation}\n \\label{eq:3}\n \\|f-p_n\\|_2 \\leq C_\\sigma \\sqrt{1+\\kappa ^2} \\|f\\|_{\\hs }\n n^{-\\sigma +1\/2} \\, ,\n \\end{equation}\nwith a constant depending on $\\sigma$ (roughly $C_\\sigma \\approx\n(\\sigma -1\/2)^{-1\/2}$).\n \n(ii) If $f$ extends to an analytic function on a strip $\\{z\\in \\bC\n: |\\mathrm{Im} z| < \\rho _0 \\}$, then the convergence is geometric, i.e.,\n \\begin{equation}\n \\label{eq:3a}\n \\|f-p_n\\|_2 = \\cO (e^{-\\rho n})\n \\end{equation}\n for every $\\rho <\\rho _0$.\n\\end{tm}\n\n\n\nThe proof starts with the orthogonal projection $P_nf(x) = \\sum _{|k|\\leq n}\n\\fhat (k) e^{2\\pi i kx} $ of $f$ onto the trigonometric \npolynomials $\\cT _n$. Note that $P_nf $ is the $n$-th partial sum of\nthe Fourier series of $f$. 
The proof of Theorem~\\ref{tm1} is based on the orthogonal decomposition\n\\begin{equation}\n \\label{eq:4}\n \\|f- p_n \\|_2^2 = \\|f- P_nf \\|_2^2 + \\|P_nf - p_n \\|_2^2 \\, .\n\\end{equation}\nThe first term measures how fast the partial sums of the Fourier\nseries converge to $f$, whereas the second, and more interesting, term\ncompares the best $L^2$-approximation of $f$ in $\\cT _n$ with the\napproximation $p_n$ obtained from the samples of $f$ on $\\cX _n$. \n\n\\subsection{Sampling and embeddings in $\\hs $ }\n\nBefore entering the details of the proof, we state some well-known \nfacts about the Sobolev space $\\hs $. \n\\begin{lemma} \\label{lm1}\nAssume that $\\sigma > 1\/2$. \n\n(i) Sobolev embedding: $\\hs (\\bT ) $ is continuously embedded in\n$C(\\bT )$.\n \n(ii) Convergence rate: For all $f\\in \\hs (\\bT ) $\n\\begin{equation}\n \\label{eq:5}\n \\|f-P_nf \\|_\\infty \\leq \\|f \\|_{\\hs } \\, \\phi _\\sigma (n) \\, ,\n\\end{equation}\nwhere $\\phi _\\sigma (n) = (\\sigma -1\/2)^{-1\/2}\\, n^{-\\sigma\n +1\/2}$.\n\n(iii) Sampling in $\\hs $: If $\\cX $ satisfies the sampling\ninequalities \\eqref{eq:1} and $f\\in \\hs $, then \n\\begin{equation}\n \\label{eq:6}\n\\sum _{k=1}^{L_n} |f(\\xkn )|^2 \\tkn \\leq B \\|f\\|_\\infty ^2 \\leq B C_\\sigma ^2 \\|f\\|_{\\hs }^2 \\, . \n\\end{equation}\n \\end{lemma}\n\n \\begin{proof}\n(i) and (ii) are standard (and also follow from Lemma~\\ref{sobgen}).\n\n(iii) The sampling inequalities\n\\eqref{eq:1} applied to the constant function $p\\equiv 1$ with\n$\\|p\\|_2=1$ yield\n\\begin{equation}\n \\label{eq:7}\n A \\leq \\sum _{k=1}^{L_n} \\tkn \\leq B \\, .\n\\end{equation}\nThe claim follows from \\eqref{eq:7} and the Sobolev embedding.\n \\end{proof}\n\n\n\n\\subsection{Quasi-interpolation versus projection}\nTo estimate the norm $\\| P_n f - p_n\\|_2$, let us introduce the\nvectors and matrices that arise in the explicit solution of the least squares\nproblem \\eqref{eq:8}.\nLet $$\ny_n = (\\tau 
_{n,1}^{1\/2} f(x_{n,1}), \\dots , \\tau _{n,L_n}^{1\/2}\nf(x_{n,L_n})) \\in \\bC ^{L_n}$$\nbe the\ngiven data vector, and $U_n $ be the $L_n \\times (2n+1)$-matrix (a\nVandermonde matrix) with\nentries \n\\begin{equation}\n \\label{eq:10}\n(U_n)_{kl} = \\tkn ^{1\/2} e^{2\\pi i x_{n,k} l} \\qquad k=1,\\dots , L_n, |l| \\leq n\n\\, . \n\\end{equation}\nWe write \n\\begin{equation}\n \\label{eq:11}\n T_n = U_n ^* U_n \\, . \n\\end{equation}\nFor the numerical construction of $p_n$ we note that $T_n$ is a\nToeplitz matrix and thus accessible to fast algorithms~\\cite{Gro93c,fgs95,PST01}. For our\nanalysis we collect the following facts.\n\n\\begin{lemma} \\label{lem2a}\n Assume that $\\cX $ is a \\mz\\ family. Then\n\n(i) the spectrum of $T_n$ is contained in the\ninterval $[A,B]$ for all $n\\in \\bN $, and \n\n(ii) the solution of the least\nsquares problem \\eqref{eq:8} yields a trigonometric polynomial $p_n =\n\\sum _{|k| \\leq n} a_{n,k} e^{2\\pi i kx}\n\\in \\cT _n$ with a coefficient vector $a_n \\in \\bC ^{2n+1}$ given by\n\\begin{equation}\n \\label{eq:12}\n a_n = T_n \\inv U_n ^* y_n \\, .\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n (i) Note that for $p\\in \\cT _n$ and $p(x) = \\sum _{|l|\\leq n} a_l\n e^{2\\pi i l x}$ the point evaluation at $\\xkn \\in \\cX _n$ is\n precisely \n$$\n \\tkn ^{1\/2} p(\\xkn ) = (U_n a )_k \\, ,\n$$ and the sampled $2$-norm is \n$$\n\\sum _{k=1} ^{L_n} |p(\\xkn )|^2 \\tau _{n,k} = \\langle U_n a , U_n\na\\rangle = \\langle T_n a, a \\rangle \\, .\n$$\nBy \\eqref{eq:1} the spectrum of $T_n$ is contained in the interval\n$[A,B]$. \n\n(ii) This is the standard formula for the solution of a least squares\nproblem by means of the Moore-Penrose pseudo-inverse $U_n ^\\dagger =\n(U_n^* U_n)\\inv U_n ^* = T_n\\inv U_n^*$. \n\\end{proof}\n\n\nHere is the decisive estimate for the second term in\n\\eqref{eq:4}. 
The following lemma relates the solution to the least\nsquares problem~\\eqref{eq:8} to the best approximation of $f$ in $\\cT\n_n$. Compare~\\cite{Gro99,Gro01b} for an early use of this argument. \n\n\\begin{prop}\\label{prop1}\nLet $p_n$ be the solution of the least squares problem\n\\eqref{eq:8}. Then \n\\begin{equation}\n \\label{eq:13}\n \\|P_nf - p_n \\|_2^2 \\leq A^{-2} B \\sum _{k=1}^{L_n} |f(\\xkn ) - P_n\n f(\\xkn )|^2 \\tkn \\, .\n\\end{equation}\n \\end{prop}\n\n \\begin{proof}\n Let $f_n \\in \\bC ^{2n+1}$ be the Fourier coefficients of the\nprojection $P_nf$, i.e., $f_n = (\\fhat (-n), \\fhat (-n+1), \\dots, \\fhat\n(n-1), \\fhat (n))$. Then by Plancherel's theorem\n$$\n\\|P_n f - p_n \\|^2_2 = \\| f_n - a_n \\|^2_2 \\, ,\n$$\nwhere the norm on the left-hand side is taken in $L^2(\\bT)$ and on the\nright-hand side in $\\bC ^{2n+1}$. Using \\eqref{eq:12} for the solution of\nthe least squares problem \\eqref{eq:8} and $T_n = U_n^* U_n$, we obtain\n\\begin{align*}\n \\| f_n - a_n \\|^2_2 &= \\|f_n - T_n \\inv U_n^* y_n\\|_2^2 \\\\\n&= \\| T_n \\inv U_n^* (U_n f_n - y_n)\\|_2^2 \\\\\n& \\leq A^{-2} B \\|U_n f_n - y_n\\|_2^2 \\, , \n\\end{align*}\n because the operator norm of $T_n\\inv $ is bounded by $A\\inv $\n and the norm of $U^*_n$ is bounded by $\\|U_n ^*\\| = \\|U_n\\|=\n \\|U_n^* U_n \\|^{1\/2} = \\|T_n \\|^{1\/2} \\leq B^{1\/2}$. \nFinally, \n$$\n(U_nf_n)_k = \\tkn ^{1\/2} \\sum _{|l| \\leq n} e^{2\\pi i \n \\xkn l} \\hat{f}(l) = \\tkn ^{1\/2} P_n\nf(\\xkn ) \\, ,\n$$\nand thus\n$$\n\\|U_n f_n - y_n\\|_2^2 = \\sum _{k=1} ^{L_n} |P_nf(\\xkn ) - f(\\xkn )|^2\n\\tkn \\, ,\n$$ \nand the statement is proved. 
\n \\end{proof}\n\n\n\\subsection{Proof of Theorem~\\ref{tm1}}\n\\begin{proof}\n(i) We use the orthogonal decomposition $\n\\|f- p_n \\|_2^2 = \\|f- P_nf \\|_2^2 + \\|P_nf - p_n \\|_2^2 \\, .\n$\nThen Lemma \\ref{lm1}(ii) yields \n$$\n \\|f- P_nf \\|_2^2 \\leq \\|f- P_nf \\|_\\infty^2 \\leq \\,\\|f\\|_{\\hs } ^2 \\,\n\\phi _\\sigma (n)^2\n \\, ,\n$$\nand Proposition~\\ref{prop1} yields \n$$\n \\|P_nf - p_n \\|_2^2 \\leq A^{-2} B \\sum _{k=1}^{L_n} |f(\\xkn ) - P_n\n f(\\xkn )|^2 \\tkn \\, .\n$$\nWe now apply \\eqref{eq:7} and Lemma~\\ref{lm1}(ii) to $f-P_nf \\in \\hs\n(\\bT )$ and\ncontinue the inequality as \n$$\n\\sum _{k=1}^{L_n} |f(\\xkn ) - P_n\n f(\\xkn )|^2 \\tkn \\leq B \\|f-P_nf\\|_{\\infty} ^2 \\leq B \\|f\\|_{\\hs }\n ^2 \\phi _\\sigma (n)^2 \\, .\n$$\nThe combination of these inequalities yields the final error estimate\n\\begin{equation}\n \\label{eq:14}\n \\|f-p_n\\|_2^2 \\leq \\big( 1 +\n \\frac{B^2}{A^2} \\big) \\|f\\|_{\\hs } ^2 \\, \\phi _\\sigma (n) ^2 \\, .\n\\end{equation}\nSince $\\phi _\\sigma (n) = \\cO (n^{-\\sigma +1\/2})$, Theorem~\\ref{tm1} is\nproved. \n\n(ii) If $f$ can be extended to an analytic function on the strip $\\{z\\in \\bC\n: |\\mathrm{Im} z| < \\rho _0 \\}$, then its Fourier coefficients decay\nexponentially as $|\\fhat (k) | \\leq c_\\rho e^{-\\rho |k|}$ for every\n$\\rho <\\rho _0$ with an appropriate constant. Consequently\n\\begin{equation}\n \\label{eq:abc}\n\\|f-P_nf\\|_\\infty \\leq \\sum _{|k|>n} |\\fhat (k)| \\leq c_\\rho \\sum\n_{|k|>n} e^{-\\rho |k|} = \\frac{2c_\\rho}{e^{\\rho}-1} e^{-\\rho n} \\, ,\n\\end{equation}\nwhich proves the exponential decay.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Quadrature Rules}\nTo deduce a set of quadrature rules, we use frame theory to obtain\nsuitable weights. See~\\cite{Chr16,DS52} for the basic facts. 
The\nfollowing argument is typical in sampling theory, whereas often the\nderivation of quadrature rules relies on abstract functional analytic\narguments as in~\\cite{MNW01}.\n\nLet $k^{(n)}_x\\in \\cT_n$ be the reproducing kernel of $\\cT _n$ defined by\n$p(x) = \\langle p, k^{(n)}_x \\rangle $ for all $p \\in \\cT _n$ and $x\\in\n\\bT $. In fact, $k^{(n)}_x(y) = \\frac{\\sin (2n+1)\\pi (y-x)}{\\sin \\pi (y-x)}$ is just the Dirichlet kernel for $\\cT\n_n$. In the language of frame theory the inequalities \\eqref{eq:1}\nsimply say that $\\{\\tkn ^{1\/2} \\, k^{(n)}_{\\xkn\n} : k=1, \\dots , L_n\\}$ is a frame for $\\cT _n$ with frame\nbounds $A,B>0$ independent of $n$. Equivalently, the associated frame\noperator\n$S_np = \\sum _{k=1}^{L_n} \\tkn \\langle p, k^{(n)}_{\\xkn } \\rangle k^{(n)}_{\\xkn }\n$\nis invertible on $\\cT _n$ for every $n\\in \\bN $ and we obtain the dual\nframe $e_{n,k} = S_n \\inv ( \\tkn ^{1\/2} k^{(n)}_{\\xkn }) $. The factorization\n$S_n\\inv S_n = \\mathrm{I}_{\\cT _n}$ yields the following\nreconstruction formula for all trigonometric polynomials $p \\in \\cT\n_n$ from their samples: \n\\begin{equation}\n \\label{eq:17}\n p = \\sum _{k=1}^{L_n} \\tkn \\langle p , k^{(n)}_{\\xkn } \\rangle\n S_n\\inv k^{(n)}_{\\xkn } = \\sum _{k=1}^{L_n} \\tkn ^{1\/2} p(\\xkn ) \\,\n e_{n,k} \\, .\n \n\\end{equation}\nFurthermore, $\\{e_{n,k} : k=1, \\dots, L_n\\}$ is a frame for $\\cT _n$\nwith frame bounds $B\\inv $ and $ A\\inv $, again independent of $n$.\nThis property implies in particular that for the constant \nfunction $1$ with $\\|1\\|_2 = 1$ we have\n\\begin{equation}\n \\label{eq:a1}\n \\sum _{k=1} ^{L_n} |\\langle 1, e_{n,k} \\rangle |^2 \\leq A\\inv \\|1\\|_2^2\n = A\\inv \\, .\n\\end{equation}\nWe now define the weights for the quadrature rules by \n\\begin{equation}\n \\label{eq:18}\n w_{n,k} = \\tkn ^{1\/2} \\langle e_{n,k} ,1 \\rangle = \\tkn ^{1\/2} \\int\n _{-1\/2} ^{1\/2} e_{n,k}(x) \\, dx \\, , \n\\end{equation}\nand the 
corresponding quadrature rule by \n\\begin{equation} \\label{eq:18b}\n I_n(f) = \\sum _{k=1}^{L_n} f(\\xkn ) w_{n,k} \\, .\n\\end{equation}\nWe also write $I(f) = \\int _{-1\/2} ^{1\/2} f(x) \\, dx $ for the\nintegral of $f$ on $\\bT $. \n\nAs a consequence of the definitions we obtain the following easy\nproperties of this quadrature rule.\n\\begin{lemma} \\label{lem-quad}\n Let $w_{n,k}$ and $I_n$ be defined as in \\eqref{eq:18} and\n \\eqref{eq:18b}.\n\n (i) The quadrature rule $I_n$ is exact on $\\cT _n$, i.e.,\n $I_n(p) = I(p)$ for all $p \\in \\cT _n$.\n\n (ii) For $f\\in C(\\bT )$ we have\n \\begin{equation}\n \\label{eq:a2}\n |I_n(f)|^2 \\leq A\\inv \\sum _{k=1}^{L_n} |f(\\xkn )|^2 \\tkn \\leq\n \\tfrac{B}{A}\\|f\\|_\\infty ^2 \\, .\n \\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n (i) follows from \\eqref{eq:17}. For (ii) we use the Cauchy-Schwarz\n inequality together with \\eqref{eq:a1} and \\eqref{eq:7}:\n \\begin{equation*}\n |I_n(f)|^2 \\leq \\Big( \\sum _{k=1}^{L_n} |f(\\xkn )|^2 \\tkn \\Big) \\,\n\\Big( \\sum _{k=1}^{L_n} |\\langle e_{n,k} , 1 \\rangle |^2 \\Big)\n\\leq \\frac{B}{A} \\|f\\|_\\infty ^2 \\, .\n \\end{equation*}\n\\end{proof}\n\nAs a consequence of Theorem~\\ref{tm1} we obtain the following\nconvergence theorem.\n\n\\begin{tm}\n \\label{tm2}\nLet $\\cX $ be a \\mz\\ family with weights $\\tau $ and let $\\{I_n: n \\in \\bN \\}$ be the\nassociated sequence of quadrature rules.\n\n(i) If $f\\in C(\\bT )$, then \n\\begin{equation}\n \\label{eq:190}\n |I(f) - I_n (f)| \\leq (1+\\sqrt{\\kappa} ) \\inf _{p \\in \\cT _n}\n \\|f-p\\|_\\infty \\, . \n\\end{equation}\nConsequently, if $f\\in C^\\sigma (\\bT )$, then $ |I(f) - I_n (f)| =\n\\cO (n^{-\\sigma })$. \n\n\n(ii) If $f\\in \\hs $ for $\\sigma >1\/2$, then \n\\begin{equation}\n \\label{eq:19}\n |I(f) - I_n (f)| \\leq (1+ \\sqrt{\\kappa }) \\|f\\|_{\\hs } \\phi\n _\\sigma (n) \\, ,\n\\end{equation}\nwith $\\phi _\\sigma (n) = (\\sigma -1\/2)^{-1\/2} n^{-\\sigma +1\/2}$. 
\n\n\n \n(iii) If $f$ extends to an analytic function on a strip $\\{z\\in \\bC\n: |\\mathrm{Im}\\, z| < \\rho _0 \\}$, then for $\\rho <\\rho _0$\n$$\n|I(f) - I_n (f)| = \\cO (e^{-\\rho n}) \\, .\n$$\n\\end{tm}\n\n\n\\begin{proof}\n(i) and (ii): Let $P_nf$ be the orthogonal projection of $f$\nonto $\\cT _n$ and $q_n$ be the best approximation of $f$ in $\\cT _n$ with respect\nto $\\| \\cdot \\|_\\infty $. \n \n Since $I_n$ is exact on\n $\\cT _n$, we have $I(P_nf) = I_n (P_nf)$ and $I(q_n) =\n I_n(q_n)$. Then we obtain with \\eqref{eq:a2} that \n \\begin{align*}\n |I(f) - I_n(f)| &\\leq |I(f-q_n)| + |I_n(q_n - f)| \\\\\n &\\leq \\|f-q_n\\|_\\infty + (B\/A)^{1\/2} \\|f-q_n\\|_\\infty \\\\\n & = (1+\\sqrt{\\kappa } ) \\inf _{p\\in \\cT _n}\n \\|f-p\\|_\\infty \\, .\n \\end{align*}\nFor the approximating polynomial $P_nf$ we obtain \n \\begin{align*}\n |I(f) - I_n(f)| &\\leq |I(f-P_nf)| + |I_n(P_nf - f)| \\\\\n &\\leq \\|f-P_nf\\|_\\infty + (B\/A)^{1\/2} \\|f-P_nf\\|_\\infty \\\\\n &\\leq (1+\\sqrt{\\kappa } ) \\, \\|f\\|_{\\hs } \\, \\phi _\\sigma (n)\n \\, , \\end{align*}\n with $\\phi _\\sigma (n) = (\\sigma -1\/2)^{-1\/2} n^{-\\sigma +1\/2}$\n by Lemma~\\ref{lm1}.\n \n(iii) If $f$ can be extended to the strip $\\{z\\in \\bC\n: |\\mathrm{Im} z| < \\rho _0 \\}$, then we use the error estimate\n\\eqref{eq:abc} for $\\|f-P_nf\\|_\\infty$ and obtain \n$$\n |I(f) - I_n(f)| \\leq (1+ (B\/A)^{1\/2}) \\|f-P_nf\\|_\\infty \\leq c_\\rho '\n (1+\\sqrt{\\kappa }) \\, e^{-\\rho n}\n$$\nfor every $\\rho <\\rho _0$ with a constant $c_\\rho '$ depending on $\\rho\n$. 
\n\\end{proof}\n\nTheorem~\\ref{tm2} answers a question of N.\\ Trefethen~\\cite{AT17}\nabout the rate of convergence of the standard quadrature rules for\nscattered nodes.\n\n\\section{General Approximation Theorems}\n Theorem~\\ref{tm1} and the quadrature rule of Theorem~\\ref{tm2}\nrequired hardly any tools, and the proofs use only the definitions of \\mz\\\ninequalities, Sobolev spaces, and the solution formula for least squares\nproblems. We will now show that the results for $\\bT $ can be\nextended significantly with a mere change of\nnotation.\n\nFor an axiomatic approach to approximation theorems from samples and\nquadrature rules, we assume that $M$ is a\ncompact space and $\\mu $ is a probability measure on $M$. Furthermore, \n\n(i) $\\{\\phi _k : k\\in \\bN\\}$ is an orthonormal basis for $L^2(M,\\mu\n)$. In agreement with the notation for Fourier series, we write $\\fhat (k)\n= \\langle f, \\phi _k\\rangle =\n\\int _M f(x) \\overline{\\phi _k(x)} \\, d\\mu (x)$ for the $k$-th\ncoefficient, so that the orthogonal expansion is $f= \\sum _k \\fhat (k) \\phi _k$. \n\n(ii) Next, let $\\lambda _k \\geq 0$ be a non-decreasing sequence with\n$\\lim _{k\\to \\infty } \\lambda _k = \\infty $. \nThe associated Sobolev\nspace $\\hs (M)$ is defined by\n\\begin{equation}\n \\label{eq:20}\n \\hs (M) = \\{ f\\in L^2(M): \\|f\\|_{\\hs } ^2 = \\sum _{k=1}^\\infty\n |\\fhat (k)|^2 (1+\\lambda _k ^2)^{\\sigma } < \\infty \\} \\, . \n\\end{equation}\nWe denote the set of ``polynomials'' of degree $n$ on\n$M$ by \n\\begin{equation}\n \\label{eq:21}\n \\cP _n = \\{ p \\in L^2(M): p = \\sum _{k: \\lambda _k \\leq n} \\fhat (k)\n \\, \\phi _k \\} \\, .\n\\end{equation}\n This space is finite-dimensional because $\\lim _{k\\to \\infty }\n \\lambda _k = \\infty $. 
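To make the abstract setup concrete, here is a small numerical sketch (our illustration, not part of the paper): take $M=[-1,1]$ with $d\mu = dx\/2$, $\phi_k = \sqrt{2k+1}\,P_k$ the normalized Legendre polynomials, and $\lambda_k = k$, so that $\cP_n$ consists of the algebraic polynomials of degree at most $n$; the choice of sampled layer below (oversampled Chebyshev nodes with Chebyshev-Gauss weights) is an illustrative assumption.

```python
import numpy as np
from numpy.polynomial import legendre

n = 6
deg = np.arange(n + 1)      # lambda_k = k, so P_n = polynomials of degree <= n

def phi(k, x):
    # Normalized Legendre polynomial: orthonormal w.r.t. d(mu) = dx/2 on [-1,1].
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt(2 * k + 1) * legendre.legval(x, c)

# Orthonormality check with (n+1)-point Gauss-Legendre quadrature,
# which is exact for polynomials of degree <= 2n+1.
xq, wq = legendre.leggauss(n + 1)
Phi = np.array([phi(k, xq) for k in deg])
G = Phi @ np.diag(wq / 2.0) @ Phi.T             # Gram matrix, should be ~ I
print("orthonormal basis:", np.allclose(G, np.eye(n + 1)))

# A candidate sampled layer: oversampled Chebyshev nodes, with
# Chebyshev-Gauss weights playing the role of tau_{n,k}.
L = 4 * (n + 1)
xs = np.cos(np.pi * (2 * np.arange(L) + 1) / (2 * L))
tau = (np.pi / (2 * L)) * np.sqrt(1.0 - xs**2)
U = np.sqrt(tau)[:, None] * np.array([phi(k, xs) for k in deg]).T
eigs = np.linalg.eigvalsh(U.T @ U)              # spectrum of T_n = U_n^* U_n
print(f"empirical frame bounds: A ~ {eigs[0]:.3f}, B ~ {eigs[-1]:.3f}")
```

Since the nodes oversample $\cP_n$, the matrix $T_n$ is positive definite and its extreme eigenvalues give empirical constants $A, B$ for this fixed layer, in the spirit of the inequalities below.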
The definition of $\\cP _n$ encapsulates the\n appropriate notion of bandlimitedness with respect to the basis\n $\\{ \\phi _n \\}$; in the terminology of \\cite{FM10} the functions \n in $\\cP_n $ are called diffusion polynomials.\n\n Note that both $\\hs (M)$ and $\\cP _n$ depend on the orthonormal basis\n and on the sequence $\\{\\lambda _k\\}$. We may think of $\\{\\phi _k\\}$\n as the set of eigenfunctions of an unbounded positive operator on\n $L^2(M,\\mu )$ with eigenvalues $ \\lambda _k$. \n\n The following table illustrates the transition from the set-up in\nSection~2 to the general theory. \n\n\n\n\n\\begin{table}[h!]\n \\begin{center}\n \\caption{Generalization}\n \\label{tab:table1}\n \\begin{tabular}{l|l}\n \\emph{from torus $\\bT$} & \\emph{to manifold $M$} \\\\\n \\hline\n ONB $e^{2\\pi i k x}$ for $L^2(\\bT )$ & ONB $\\phi _k (x) $ for $L^2(M,\\mu )$ \\\\\n Fourier coefficients $\\fhat (k) $ & Fourier coefficients~$ \\langle f, \\phi _k \\rangle $ \\\\\n Eigenvalues $ k^2$ of $-\\frac{1}{4\\pi ^2}\\tfrac{d^2}{dx^2} $ & ``Eigenvalues'' $\\lambda _k \\geq 0 $, $\\lambda _k \\to \\infty $ \\\\\n Sobolev space $\\hs (\\bT)$ & Sobolev space $\\hs (M) $ \\\\\n Trigonometric polynomials $\\cT _n$ & ``Polynomials'' $\\cP _n\n $, $p = \\sum _{k: \\lambda _k \\leq n } \\fhat (k) \\, \\phi _k $\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n \n \n For meaningful statements we make the following natural assumptions:\n\n (i) Every basis element $\\phi _k$ is continuous (and thus bounded)\n on $M$ and $\\phi\n_1 \\equiv 1$. Then the point evaluation $p \\to p(x)$ makes sense on\n$\\cP _n$ and $\\int _M f d\\mu = \\langle f, \\phi _1\\rangle$. 
\n\n(ii) There exists a critical index\n$\\sigma _{\\mathrm{crit}}$ such that for $\\sigma > \\scrit $ the sum\n\\begin{equation}\n \\label{eq:a3}\nC_\\sigma ^2 = \\sup _{x\\in M} \\sum _{k=1}^\\infty |\\phi _k(x)|^2\n(1+\\lambda _k ^2)^{-\\sigma } <\\infty\n \\, \n\\end{equation}\nis finite\\footnote{In \\cite{BCCG14} the expression $\\sum _{k=1}^\\infty \\phi _k(x) \\phi _k(y)\n(1+\\lambda _k ^2)^{-\\sigma }$ is called the Bessel kernel associated\nto the orthonormal basis $\\{\\phi _k\\}$.}.\n\n(iii) The error estimates will be in terms of the remainder function \n\\begin{equation}\n \\label{eq:26}\n \\phi _\\sigma (n) = \\sup _{x\\in M} \\Big(\\sum _{k: \\lambda _k > n} |\\phi _k(x)|^2 (1+ \\lambda _k ^2)^{-\\sigma\n} \\Big)^{1\/2} \\, .\n\\end{equation}\nBy \\eqref{eq:a3} $\\phi _\\sigma (n) \\to 0$ as $n\\to \\infty $. \n\nA \\mz\\ family $\\cX $ for $M$ is a doubly-indexed set $\\cX = \\{ \\xkn\n: n\\in \\bN , k=1 , \\dots , L_n\\} \\subseteq\nM$ with associated weights $\\{ \\tkn \\}$, such that \n\\begin{equation}\n \\label{eq:1a}\nA \\|p\\|_2^2 \\leq \\sum _{k=1} ^{L_n} |p(\\xkn )|^2 \\tau _{n,k} \\leq B\n\\|p \\|_2^2 \\qquad \\text{ for all } p \\in \\cP _n \\, ,\n\\end{equation}\nwith constants $A,B >0$ independent of $n$.\n\n\\subsection{Embeddings}\nWe first prove the approximation and embedding results in the\ngeneral context.\n\n\\begin{lemma}\n \\label{sobgen}\n (i) If $\\sigma > \\scrit $, then $\\hs (M) \\subseteq C(M)$, in fact,\n $\\| f \\|_\\infty \\leq C_\\sigma \\, \\|f\\|_{\\hs }$. \n\n (ii)\nFor $f\\in \\hs (M)$ \n \\begin{equation}\n \\label{eq:a4}\n \\|f-P_nf \\|_2 \\leq \\|f-P_nf\\|_\\infty \\leq \\|f\\|_{\\hs } \\phi\n _\\sigma (n) \\, .\n \\end{equation}\n\n(iii) Assume that $\\cX $ is a \\mz\\ family with weights $\\tau $. 
For $f\\in \\hs (M) $ we have \n\\begin{equation}\n \\label{eq:6a}\n \\sum _{k=1}^{L_n} |f(\\xkn )|^2 \\tkn \\leq B C_\\sigma ^2 \\|f\\|_{\\hs }^2 \\, .\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n(i) and (ii):\nLet $f = \\sum _{k=1}^\\infty \\fhat (k) \\phi _k$. Then with the\nCauchy-Schwarz inequality, \n\\begin{align*}\n|f(x) | \\leq \\Big(\\sum _{k=1}^\\infty |\\fhat (k)|^2 (1+\\lambda\n _k ^2)^\\sigma \\Big)^{1\/2} \\, \\sup _{x\\in M} \\Big(\\sum\n _{k=1}^\\infty |\\phi _k(x)|^2 (1+\\lambda\n _k ^2)^{-\\sigma } \\Big)^{1\/2} = C_\\sigma \\|f\\|_{\\hs } \\, .\n\\end{align*}\nThe approximation error is estimated by \n\\begin{align}\n \\|f-P_nf\\|_2 & \\leq \\|f - P_n f \\|_\\infty \\notag \\\\\n &\\leq \\|f \\|_{\\hs } \\sup _{x\\in M} \\Big(\\sum _{k: \\lambda _k > n} |\\phi _k(x)|^2 (1+ \\lambda _k ^2)^{-\\sigma\n} \\Big)^{1\/2} = \\|f\\|_{\\hs } \\, \\phi _\\sigma (n) \\, . \\label{eq:25}\n\\end{align}\nIn the first inequality we have used the fact that $\\mu $ is a\nprobability measure. \n\n(iii) is proved as in Lemma~\\ref{lm1}. 
The sampling inequalities\n\\eqref{eq:1a} applied to the constant function $p\\equiv 1$ with\n$\\|p\\|_2=1$ yield\n\\begin{equation}\n \\label{eq:7a}\n A \\leq \\sum _{k=1}^{L_n} \\tkn \\leq B \\, .\n\\end{equation}\nConsequently, with the embedding of (i), we have \n$$\n\\sum _{k=1}^{L_n} |f(\\xkn )|^2 \\tkn \\leq \\|f\\|_\\infty ^2 \\sum\n_{k=1}^{L_n} \\tkn \\leq B C_\\sigma ^2 \\|f\\|_{\\hs }^2 \\, .\n$$ \n\\end{proof}\n\n\n\\subsection{Quasi-interpolation versus projection}\n\n To produce optimal\napproximations of $f$ in $\\cP _n$ from the samples $\\cX _n= \\{ \\xkn : k=1 , \\dots\n, L_n\\} $ in the $n$-th layer of $\\cX $, we solve the sequence of least\nsquares problems\n\\begin{equation}\n \\label{eq:8b}\np_n = \\mathrm{argmin} _{p\\in \\cP _n} \\sum _{k=1} ^{L_n} |f(\\xkn ) -\np(\\xkn )|^2 \\tkn \\, .\n\\end{equation}\nNow let $y_n = (\\tau _{n,1} ^{1\/2} f(x_{n,1}), \\dots , \\tau\n_{n,L_n}^{1\/2} f(x_{n,L_n})) \\in \\bC ^{L_n}$ be the\ngiven data vector, and $U_n $ be the $L_n \\times \\mathrm{dim}\\, \\cP _n$-matrix with\nentries \n\\begin{equation}\n \\label{eq:10a}\n(U_n)_{kl} = \\tkn ^{1\/2} \\phi _l (x_{n,k}) \\qquad k=1,\\dots , L_n,\n l = 1, \\dots , \\mathrm{dim}\\, \\cP _n\n\\, , \n\\end{equation}\nand set $T_n = U_n^* U_n $. With this notation the solution to\n\\eqref{eq:8b} is the polynomial $p_n =\n\\sum _{k: \\lambda _k \\leq n} a_{n,k} \\phi _k\n\\in \\cP _n$ with coefficient vector \n\\begin{equation}\n \\label{eq:12b}\n a_n = T_n \\inv U_n ^* y_n \\, .\n\\end{equation}\nAgain, since $\\langle T_nc,c\\rangle = \\langle U_nc, U_nc \\rangle =\n\\sum _{k=1}^{L_n} |p(\\xkn )|^2 \\tkn $, the spectrum of\nevery $T_n$ is contained in the interval $[A,B]$ and we have uniform\n upper bounds for the norms of $T_n$ and $T_n\\inv $. \n\nWe now have the analogue of Proposition~\\ref{prop1}.\n\n\\begin{prop} \\label{prop1gen}\nLet $\\cX $ be a \\mz\\ family with weights $\\tau $. Let $p_n$ be the solution of the least squares problem\n\\eqref{eq:8b}. 
Then \n\\begin{align}\n \\label{eq:13a}\n \\|P_nf - p_n \\|_2^2 &\\leq A^{-2} B \\sum _{k=1}^{L_n} |f(\\xkn ) - P_n\n f(\\xkn )|^2 \\tkn \\\\\n & \\leq \\tfrac{B^2}{A^2} \\|f-P_nf\\|^2_\\infty \\leq\n \\kappa ^2 \\|f\\|^2_{\\hs } \\phi _\\sigma (n)^2 \\, . \\notag \n\\end{align}\n\\end{prop}\n\nThe proof is identical to the proof of Proposition~\\ref{prop1}, this\ntime combined with Lemma~\\ref{sobgen}.\n\n\\subsection{Approximation of continuous functions from samples}\n\nIn this general context the analogue of Theorem~\\ref{tm1} reads as\nfollows. \n\n\\begin{tm}\n \\label{tm1a}\nAssume that $\\cX = \\{\\cX _n :n\\in \\bN \\}$ is a \\mz\\ family for $M$ with\ncondition number $\\kappa = B\/A$ and associated weights $\\{\\tkn \\}$. \n\n If $f\\in H^\\sigma (M)$, then \n \\begin{equation}\n \\label{eq:3b}\n \\|f-p_n\\|_2 \\leq \\sqrt{1+\\kappa ^2} \\|f\\|_{\\hs } \\phi _\\sigma (n) \\, .\n \\end{equation}\n\\end{tm}\n\n\\begin{proof}\n The proof is identical to the proof of\n Theorem~\\ref{tm1}. This is our main point. We simply use Lemma~\\ref{sobgen} and\n Proposition~\\ref{prop1gen}, precisely, \\eqref{eq:a4} and\n \\eqref{eq:13a} in the decomposition\n$$\n\\|f- p_n \\|_2^2 = \\|f- P_nf \\|_2^2 + \\|P_nf - p_n \\|_2^2 \\, .\n$$\n\\end{proof}\n\nNoting that the error estimates also hold for fixed $n$, we obtain the\nfollowing useful consequence. \n\\begin{cor}\nAssume that $\\{ y_k : k=1, \\dots ,L\\}\\subseteq M$ is a sampling set for $\\cP _n$\nwith weights $\\tau _k$,\ni.e., for some $A,B>0$ and all $p \\in \\cP _n$\n$$\nA \\|p\\|_2^2 \\leq \\sum _{k=1}^L |p(y_k)|^2 \\tau _k \\leq B \\|p\\|_2^2 \\,\n.\n$$\n For $f\\in C(M)$ solve $q = \\mathrm{argmin} _{p\\in \\cP _n} \\sum _{k=1}^L\n|f(y_k) - p(y_k)|^2 \\tau _k$. 
Then \n \\begin{align*}\n \\|f-q\\|_2 &\\leq \\big(1+\\big(\\tfrac{B}{A}\\big)^2 \\big)^{1\/2} \\|f-P_nf\\|_\\infty \\, .\n\\end{align*} \n\\end{cor}\nThe corollary gives a possible answer to how well a given\nfunction on $M$ can be approximated from samples. Again, the statement\nhighlights the importance of sampling inequalities in approximation\ntheoretic problems. \n\n\n\\subsection{Quadrature rules}\n\\label{sec:Quadrat}\n\nLikewise, the derivation and convergence of the quadrature rules are\ncompletely analogous to Theorem~\\ref{tm2}. The reproducing kernel for $\\cP _n$ is given by\n$k^{(n)}_x(y) = \\sum _{k : \\lambda _k \\leq n} \\overline{\\phi _k (x)}\n\\phi _k(y)$.\nThen the \\mz\\ inequalities \\eqref{eq:1a} say that every set $\\{\\tkn\n^{1\/2} \\, k^{(n)}_{\\xkn\n} : k=1, \\dots , L_n\\}$ is a frame for $\\cP _n$ with uniform frame\nbounds $A,B>0$. We thus obtain the dual\nframe $e_{n,k} \\in \\cP _n$ such that every polynomial $p \\in \\cP\n_n$ can be reconstructed from the samples on $\\cX _n$ by \n\\begin{equation}\n \\label{eq:17a}\n p = \\sum _{k=1}^{L_n} \\tkn \\langle p , k^{(n)}_{\\xkn } \\rangle\n S\\inv k^{(n)}_{\\xkn } = \\sum _{k=1}^{L_n} \\tkn ^{1\/2} p(\\xkn ) \\,\n e_{n,k} \\, .\n\\end{equation}\nSince $1\\in \\cP _n$ for all $n$, we have again\n\\begin{equation}\n \\label{eq:a1b}\n \\sum _{k=1} ^{L_n} |\\langle 1, e_{n,k} \\rangle |^2 \\leq A\\inv \\|1\\|_2^2\n = A\\inv \\, .\n\\end{equation}\n The weights for the quadrature rules are defined by \n\\begin{equation}\n \\label{eq:18a}\n w_{n,k} = \\tkn ^{1\/2} \\langle e_{n,k} ,1 \\rangle = \\tkn ^{1\/2} \\int\n _M e_{n,k}(x) \\, d\\mu(x) \\, , \n\\end{equation}\nand the corresponding quadrature rule is defined by \n\\begin{equation} \\label{eq:18bb}\n I_n(f) = \\sum _{k=1}^{L_n} f(\\xkn ) w_{n,k} \\, .\n\\end{equation}\n Writing $I(f) = \\int _M f(x) \\, d\\mu(x) $,\nthe convergence rules for the quadrature \\eqref{eq:18bb} can now be\nstated as follows. 
\n\n\n\\begin{tm}\n \\label{tm2a}\nLet $\\cX $ be a \\mz\\ family on $M$ with weights $\\tau $ and let $\\{I_n: n \\in \\bN \\}$ be the\n associated sequence of quadrature rules. Assume that $\\sigma\n>\\scrit $. \n\n\n(i) If $f\\in C(M )$, then \n\\begin{equation}\n \\label{eq:190b}\n |I(f) - I_n (f)| \\leq (1+\\sqrt{\\kappa }) \\inf _{p \\in \\cP _n}\n \\|f-p\\|_\\infty \\, .\n\\end{equation}\n\n\n\n(ii) If $f\\in \\hs $ for $\\sigma >\\scrit $, then \n\\begin{equation}\n \\label{eq:19b}\n |I(f) - I_n (f)| \\leq (1+\\sqrt{\\kappa }) \\|f\\|_{\\hs }\n \\phi _\\sigma (n) \\, . \n\\end{equation}\n\\end{tm}\n\n\nError estimates of this type are, of course, \nwell-known, see, e.g. ~\\cite{CGT11,BCCG14,HS06}. \nThe main difference is in the constants: usually these depend mainly\non the mesh size of the points $\\xkn $, whereas Theorems~\\ref{tm1a}\nand \\ref{tm2a} involve only the bounds of the \\mz\\ inequalities. \n\nTheorems~\\ref{tm1a} and \\ref{tm2a} are pure formalism. Their main\ninsight is that\napproximation theorems from samples are a direct consequence of the\nexistence of \\mz\\ families. Thus the ``real'' and deep question was and still is how to construct a \\mz\\\nfamily for a given $M$ and orthonormal basis. This is precisely what\nis accomplished in \\cite{FM10,FM11} under similar assumptions on an\nabstract metric measure space. \n\n\\vspace{2mm}\n\n\\noindent \\textbf{Example.}\n For the torus $\\phi _k (x) = e^{2\\pi i kx}$ and\n$\\lambda _k = k$. Then the error function is \n$$\n\\phi _\\sigma (n)^2 = \\sum _{|k|>n} (1+k^2)^{-\\sigma } \\leq 2 \\int\n_n^\\infty x^{-2\\sigma } \\, dx = \\frac{2}{2\\sigma -1} n^{-2\\sigma +1}\n\\, .\n$$\nConsequently $\\phi\n_\\sigma (n) = (\\sigma - 1\/2)^{-1\/2} n^{-\\sigma +1\/2} $ in Lemma~\\ref{lm1} and\nTheorem~\\ref{tm1}. 
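As a quick numerical sanity check of the torus example (our addition, not part of the paper's argument), the truncated tail sum defining $\phi_\sigma(n)^2$ can be compared against the integral bound $\frac{2}{2\sigma-1}\,n^{-2\sigma+1}$; since $(1+k^2)^{-\sigma}\le k^{-2\sigma}$, every truncated sum must stay below the bound.

```python
def phi_sigma_sq(n, sigma, K=50000):
    # truncated tail sum over n < |k| <= K of (1 + k^2)^(-sigma);
    # the factor 2 accounts for the symmetric terms k and -k
    return 2.0 * sum((1.0 + k * k) ** (-sigma) for k in range(n + 1, K + 1))

def integral_bound(n, sigma):
    # 2 * integral_n^infty x^(-2 sigma) dx = 2/(2 sigma - 1) * n^(1 - 2 sigma)
    return 2.0 / (2.0 * sigma - 1.0) * float(n) ** (1.0 - 2.0 * sigma)

for sigma in (1.0, 1.5, 2.0):
    for n in (10, 50, 100):
        assert phi_sigma_sq(n, sigma) <= integral_bound(n, sigma)
```

Truncating the tail at `K` only decreases the sum, so the comparison is conservative.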
\n\n\n\\subsection{The spectral function and \\mz\\ families}\n\nTheorem~\\ref{tm1a}, as formulated, is almost void of content, as the\nerror function $\\phi _\\sigma $ from \\eqref{eq:26} depends on the orthonormal basis and the chosen \n$\\lambda _k$ in a rather intransparent manner. With the interpretation\nof $(\\phi _k,\n\\lambda _k)$ as the eigenvalues and eigenfunctions of a positive\nunbounded operator, the spectral theory of partial differential\noperators suggests a suitable condition to elaborate the error\nfunction $\\phi _\\sigma $ further.\n\n\\begin{definition}\n We say the orthonormal basis $\\{\\phi _k:k\\in \\bN \\}$ and the\n eigenvalues $\\{\\lambda _k: k\\in \\bN \\}$ satisfy Weyl's law, if there\n exist constants $d=d_{\\phi,\\lambda }>0$ and $C= C_{\\phi,\\lambda\n }>0$, such that\n \\begin{equation}\n \\label{eq:weyl}\n \\sum _{k: \\lambda _k \\leq n} |\\phi _k(x)|^2 \\leq C n^d \\, .\n \\end{equation}\n\\end{definition}\nIntegrating over $M$, \\eqref{eq:weyl} implies the eigenvalue count\n\\begin{equation}\n \\label{eq:weyl2}\n\\# \\{ k: \\lambda _k \\leq n\\} \\leq C n^d \\, . \n\\end{equation}\nIn the spectral theory of partial differential operators or pseudodifferential\noperators the function $ \\sum _{k: \\lambda _k \\leq n} |\\phi _k(x)|^2$\nis called the spectral function, and \\eqref{eq:weyl2} is Weyl's law\nfor the count of eigenvalues~\\cite{Gar53,hormander3,Shubin91}. In the theory\nof orthogonal polynomials the function $ (\\sum _{k\\leq n} |\\phi\n_k(x)|^2)\\inv $ is called the Christoffel function and plays a\ncentral role in the investigation of orthogonal\npolynomials~\\cite{Nev86}. In this context assumption \\eqref{eq:weyl}\nsays that the Christoffel function along a subsequence determined by\nthe $\\lambda _k$'s is bounded\npolynomially from below. \n\nUnder the assumption of Weyl's law we can determine the critical\nexponent and the asymptotics of the error function precisely. 
One may\nalso say that Weyl's law implies the correct version of the Sobolev\nembedding. \n\\begin{prop}\n \\label{critval}\nLet $\\{\\phi _k:k\\in \\bN \\}$ be an orthonormal basis for $L^2(M,\\mu )$\nand $\\lambda _k \\geq 0$ be a non-decreasing sequence with $\\lim _{k\\to\n \\infty } \\lambda _k = \\infty $. Assume that $(\\phi _k, \\lambda _k)$\nsatisfies Weyl's law \\eqref{eq:weyl} with exponent $d$. Then the critical value is $\\scrit = d\/2$, i.e., if $\\sigma\n >d\/2$, then\n $$\n C_\\sigma ^2 = \\sup _{x\\in M} \\sum _{k=1}^\\infty |\\phi _k(x)|^2\n (1+\\lambda _k^2)^{-\\sigma } <\\infty \\, . \n $$\nMoreover, the error function satisfies\n \\begin{equation}\n \\label{eq:a5}\n \\phi _\\sigma (n) \\leq C_\\sigma \\, n^{-\\sigma +d\/2} \\, .\n \\end{equation}\n\\end{prop}\n\\begin{proof}\n We only show \\eqref{eq:a5}. Choose $m \\in \\bN$, such that $2^{m}\n \\leq n \\leq 2^{m+1}$. We split the sum defining $\\phi _\\sigma $\n into dyadic blocks and use \\eqref{eq:weyl} as follows: \n \\begin{align*}\n \\sum _{k:\\lambda _k >n} |\\phi _k(x)|^2\n (1+\\lambda _k^2)^{-\\sigma } & \\leq \\sum _{j=m}^\\infty \\sum _{k: 2^j \\leq\n \\lambda _k < 2^{j+1}} |\\phi _k(x)|^2 (1+\\lambda _k^2)^{-\\sigma } \\\\\n &\\leq \\sum _{j=m}^\\infty 2^{-2j \\sigma } \\sum _{k: 2^j \\leq\n \\lambda _k < 2^{j+1}} |\\phi _k(x)|^2 \\\\\n &\\leq C \\sum _{j=m} ^\\infty 2^{-2j \\sigma } 2^{d(j+1)} \n = C 2^d\\sum _{j=m} ^\\infty 2^{j(d-2\\sigma )} \\\\\n &= C \\frac{2^d}{ (1-2^{d-2\\sigma } )} \\, 2^{m(d-2\\sigma ) } \\leq\n C_\\sigma n^{-2\\sigma +d} \\, ,\n \\end{align*}\nwith convergence precisely for $\\sigma > d\/2$. \n\\end{proof}\n\n\n\n\\noindent \\textbf{Example.} Let $-\\Delta $ be the Laplacian on a\nbounded domain $M\\subseteq \\rd $ with $C^\\infty$-boundary and $\\phi\n_k$ be its eigenfunctions, \ni.e., $-\\Delta \\phi _k =\n\\lambda _k^2 \\phi _k$.\nThen the $(\\phi _k,\n\\lambda _k)$'s satisfy Weyl's law with exponent $d$ and the constant is\nroughly $C= \\mathrm{vol}\\, (M)$. 
Similar statements hold for the\nLaplace-Beltrami operator on a compact Riemannian manifold and for\ngeneral elliptic partial differential operators, see~\\cite{Gar53,hormander68,hormander3,Shubin91}.\n\nWeyl's law plays an important role in the theory and construction of \\mz\\\nfamilies in metric measure spaces by Filbir and\nMhaskar~\\cite{FM10,FM11}. In particular, they show that Weyl's law is\nequivalent to Gaussian estimates for the heat kernel. A weaker form of\nWeyl's law is equivalent to a sampling inequality~\\cite{Pes16}.\n\n\n\n\n\\vspace{ 2mm}\n\n\\noindent\\textbf{Concluding remarks.} The simplicity of the proofs\nleads to conceptual insights into the role of \\mz\\ families, but the\nresults are certainly limited.\n\n(i) The proofs work only for $p=2$. \\mz\\ families with respect to\ngeneral $p$-norms seem to require different techniques. \n\n(ii) The weights for the quadrature rules are not necessarily\npositive. So far, positive weights are obtained only with special\nconstructions from \\mz\\\nfamilies with $p=1$ and sufficiently dense sampling on each\nlevel~\\cite{FM10}.\n\n(iii) The existence of \\mz\\ families has been established at many\nlevels of generality~\\cite{FM11,FFP16} through the construction of\nsufficiently fine meshes on each level. So far, necessary density\nconditions have been found only for \\mz\\ families on compact Riemannian\nmanifolds~\\cite{OP12}. It would be interesting to extend the scope of\nthe necessary conditions to the conditions used in Section~3 and to\ncompare the densities of the existing \\mz\\ families to the necessary\ndensity conditions. 
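The torus example from the previous subsection also illustrates Weyl's law directly (our illustration, not from the paper): for $\phi_k(x)=e^{2\pi i k x}$, $k\in\mathbb{Z}$, with $\lambda_k=|k|$, each $|\phi_k(x)|^2\equiv 1$, so the spectral function is independent of $x$ and equals the eigenvalue count $\#\{k : |k|\le n\}=2n+1\le 3n^d$ with $d=1$.

```python
def spectral_function(n, x=0.0):
    # sum over |k| <= n of |phi_k(x)|^2 with phi_k(x) = exp(2 pi i k x):
    # every term has modulus one, so the value does not depend on x
    return sum(1.0 for k in range(-n, n + 1))

C, d = 3.0, 1
for n in range(1, 100):
    assert spectral_function(n) == 2 * n + 1 <= C * n ** d
```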
\n\n\n\n\n\\numberwithin{equation}{section}\n\n\\begin{document}\n\n\\title{Backward and forward filtering under the weak H\\\"ormander condition}\n\n\\author{Andrea Pascucci \\and\n Antonello Pesce\n}\n\n\n\\institute{A. Pascucci \\at\n Department of Mathematics, University of Bologna, Piazza di Porta san Donato, 5 Bologna \\\\\n Tel.: +39-0512094428\\\\\n \\email{andrea.pascucci@unibo.it} \n \\and\n A. Pesce \\at\n Department of Mathematics, University of Bologna, Piazza di Porta san Donato, 5 Bologna \\\\\n \\email{antonello.pesce2@unibo.it} \n}\n\n\\date{Received: date \/ Accepted: date}\n\n\n\\maketitle\n\n\\begin{abstract}\nWe derive the forward and backward filtering equations for a class of degenerate partially\nobservable diffusions satisfying the weak H\\\"ormander condition. Our approach is based on the\nH\\\"older theory for degenerate SPDEs, which allows us to pursue the direct approaches proposed by N. V.\nKrylov and A. Zatezalo, and by A. Yu. Veretennikov, avoiding the use of general results from\nfiltering theory. As a by-product, we also provide existence, regularity and estimates for the\nfiltering density. 
\\keywords{filtering \\and H\\\"older theory of SPDEs \\and Langevin equation \\and\nweak H\\\"ormander condition}\n \\subclass{60G35 \\and 60H15 \\and 60J60 \\and 35H20}\n\\end{abstract}\n\n\n\\section{Introduction}\nThe classical kinetic model\n\\begin{equation}\\label{aaee1}\n \\begin{cases}\n dX_{t}=V_{t}dt, \\\\\n dV_{t}={\\sigma} dW_{t},\\qquad {\\sigma}>0,\n \\end{cases}\n\\end{equation}\nis a remarkable example of a system of SDEs whose Kolmogorov equation\n\\begin{equation}\\label{aaee1bb}\n \\frac{{\\sigma}^{2}}{2}\\partial_{vv}f+v\\partial_{x}f+\\partial_{t}f=0,\\qquad (t,x,v)\\in{\\mathbb {R}}^{3},\n\\end{equation}\nis hypoelliptic but not uniformly parabolic. Precisely, \\eqref{aaee1bb} satisfies the {\\it weak}\nH\\\"ormander condition in that the drift plays a key role in the noise propagation (see\n\\cite{Kolmogorov2} and the introduction in \\cite{Hormander}). In \\eqref{aaee1}, $W$ is a Brownian\nmotion and $X,V$ represent the position and velocity of a particle. This type of SDE arises in\nseveral linear and non-linear models in physics (see, for instance, \\cite{Cercignani},\n\\cite{Lions1},\n\\cite{Desvillettes}, \\cite{MR2130405})\nand in mathematical finance (see, for instance, \\cite{BarucciPolidoroVespri},\n\\cite{Pascucci2011}).\n\nIn this paper we study the filtering problem for \\eqref{aaee1}. To the best of our knowledge, this\nkind of problem has never been considered in the literature, possibly because the known results for\nhypoelliptic SPDEs (e.g. \\cite{MR736147}, \\cite{MR705933}, \\cite{Krylov17}, \\cite{MR3839316} and\n\\cite{MR3706782}) do not apply in this case. 
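For the kinetic model with $X_0=V_0=0$, the pair $(X_t,V_t)$ is Gaussian with $\mathrm{Var}\,V_t=\sigma^2 t$, $\mathrm{Cov}(X_t,V_t)=\sigma^2 t^2/2$ and $\mathrm{Var}\,X_t=\sigma^2 t^3/3$; this is the covariance structure underlying the hypoelliptic Kolmogorov equation. The Monte Carlo sketch below (our addition, not from the paper) checks these moments with a plain Euler scheme.

```python
import math
import random

def simulate_kinetic(sigma=1.0, T=1.0, n_steps=100, n_paths=3000, seed=7):
    # Euler scheme for dX = V dt, dV = sigma dW, started at X_0 = V_0 = 0;
    # returns the empirical Var X_T, Var V_T and Cov(X_T, V_T)
    rng = random.Random(seed)
    dt = T / n_steps
    sqdt = math.sqrt(dt)
    xs, vs = [], []
    for _ in range(n_paths):
        x = v = 0.0
        for _ in range(n_steps):
            x += v * dt
            v += sigma * sqdt * rng.gauss(0.0, 1.0)
        xs.append(x)
        vs.append(v)
    m = float(n_paths)
    var_x = sum(x * x for x in xs) / m
    var_v = sum(v * v for v in vs) / m
    cov_xv = sum(x * v for x, v in zip(xs, vs)) / m
    return var_x, var_v, cov_xv

# exact values for sigma = T = 1: Var X = 1/3, Var V = 1, Cov(X, V) = 1/2
var_x, var_v, cov_xv = simulate_kinetic()
assert abs(var_x - 1.0 / 3.0) < 0.06
assert abs(var_v - 1.0) < 0.12
assert abs(cov_xv - 0.5) < 0.08
```

The tolerances are several Monte Carlo standard deviations wide, so the check is robust to the choice of seed.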
Here we propose a unified approach for the derivation\nof the {\\it backward and forward filtering equations} based on the H\\\"older theory for degenerate\nSPDEs recently developed in \\cite{PascucciPesce1} and \\cite{pasc:pesc:19} (see also \\cite{Chow94}\nand \\cite{MR1755998} for similar results for uniformly parabolic SPDEs).\nHaving an existence and regularity theory at hand, we can pursue the ``direct'' approaches\nproposed by Krylov and Zatezalo \\cite{MR1795614} and Veretennikov \\cite{Veretennikov}, thus\navoiding the use of general results from filtering theory. In particular, as in \\cite{Veretennikov} we\nderive the backward filtering equation ``by hand'',\nwithout resorting to prior knowledge of the SPDE, in a more direct way compared to the classical\napproach in \\cite{MR553909}, \\cite{MR583435}, \\cite{MR1070361} or \\cite{MR3839316}.\n\nTo be more specific, we consider the following general setup: we assume that the position $X_{t}$\nand the velocity $V_{t}$ of a particle are scalar stochastic processes only partially observable\nthrough some observation process $Y_{t}$. The joint dynamics of $X,V$ and $Y$ is given by the\nsystem of SDEs\n\\begin{equation}\\label{eq1}\n\\begin{cases}\n dX_t=V_tdt,\\\\\n dV_t={b(t,X_t,V_t,Y_t)dt}\n \\sigma^i(t,X_t,V_t,Y_t)dW^i_t,\\\\\n dY_t=h(t,X_t,V_t,Y_t)dt\n {_{\\scalebox{.6}{0}}}\\sigma^i(t,Y_t)dW^i_t,\n\\end{cases}\n\\end{equation}\nwhere we adopt the Einstein summation convention, $W_t=(W_t^1,\\cdots, W_t^{n})$ denotes a\n$n$-dimensional Brownian motion, with $n\\ge2$, defined on a complete probability space\n$({\\Omega},\\mathcal{F},P)$ with a filtration $(\\mathcal{F}_t)_{t\\in [0,T]}$ satisfying the usual assumptions. 
Hereafter,\nfor simplicity we set $Z_{t}=(X_{t},V_{t})$ and denote by $z=(x,v)$ and ${\\zeta}=({\\xi},{\\nu})$ the points in\n${\\mathbb {R}}^{2}$.\n\nLet $\\mathcal{F}_{t,T}^{Y}={\\sigma}(Y_s,t\\le s\\le T)$ define the filtration of observations and let ${\\varphi}$ be a\nbounded and continuous function, ${\\varphi}\\in bC({\\mathbb {R}}^{2})$. The filtering problem consists in finding\nthe best $\\mathcal{F}_{t,T}^{Y}$-measurable least-squares estimate of ${\\varphi}(Z_{T})$, that is the conditional\nexpectation $E\\left[{\\varphi}(Z_{T})\\mid \\mathcal{F}_{t,T}^{Y}\\right]$. Our first result, Theorem \\ref{th2},\nshows that\n\\begin{equation}\\label{ae7bis}\n E\\left[{\\varphi}(Z_{T}^{t,z})\\mid {\\mathcal{F}^{Y}_{t,T}}\\right]=\n \\int_{{\\mathbb {R}}^{2}}\\hat{\\mathbf{\\mathbf{\\Gamma}}}(t,z;T,{\\zeta}){\\varphi}({\\zeta})d{\\zeta},\n\\end{equation}\nwhere $\\hat{\\mathbf{\\mathbf{\\Gamma}}}$ is the (normalized) fundamental solution of the {\\it forward filtering\nequation}; the latter is an SPDE of the form\n\\begin{equation}\\label{spde_forwbbb}\n d_{\\mathbf{B}}u_s({\\zeta})=\\mathcal{A}_{s,{\\zeta}}u_s({\\zeta})ds+\\mathcal{G}_{s,{\\zeta}}u_s({\\zeta})dW_s\n\\end{equation}\nwhere $\\mathbf{B}=\\partial_s+{\\nu}\\partial_{\\xi}$ and\n\\begin{align}\n \\mathcal{A}_{s,{\\zeta}}u_s({\\zeta})=\\frac{1}{2}\\bar{a}_s({\\zeta})\\partial_{{\\nu}{\\nu}}u_s({\\zeta})+\\text{\\it ``first order terms''},\\qquad\n \\mathcal{G}_{s,{\\zeta}}u_s({\\zeta})=\\bar{{\\sigma}}_s({\\zeta})\\partial_{{\\nu}}u_s({\\zeta})+\\bar{h}_s({\\zeta})u_s({\\zeta}).\n\\end{align}\nThe forward filtering SPDE is precisely formulated in \\eqref{Forward_eq2}. 
The symbol\n$d_{\\mathbf{B}}$ in \\eqref{spde_forwbbb} indicates that the SPDE is understood in the It\\^o (or\nstrong) sense, that is\n\\begin{equation}\\label{spde_forw1bbb}\n u_{s}\\left(\\gamma^{\\mathbf{B}}_{s-t}({\\zeta})\\right)=u_{t}({\\zeta})+\\int_{t}^{s}{\\mathcal{A}_{{\\tau},\\gamma^{\\mathbf{B}}_{{\\tau}-t}({\\zeta})}}u_{\\tau}(\\gamma^{\\mathbf{B}}_{{\\tau}-t}({\\zeta}))d{\\tau}\n + \\int_{t}^{s}\\mathcal{G}_{{\\tau},\\gamma^{\\mathbf{B}}_{{\\tau}-t}({\\zeta})}u_{\\tau}(\\gamma^{\\mathbf{B}}_{{\\tau}-t}({\\zeta}))dW_{{\\tau}},\\qquad s\\in[t,T],\n\\end{equation}\nwhere $s\\mapsto\\gamma^{\\mathbf{B}}_{s}({\\xi},{\\nu})$ denotes the integral curve, starting from $({\\xi},{\\nu})$, of the\nadvection vector field ${\\nu}\\partial_{{\\xi}}$ or, more explicitly, $\\gamma^{\\mathbf{B}}_{s}({\\xi},{\\nu})=({\\xi}+s{\\nu},{\\nu})$.\n\\begin{example}\nThe prototype of \\eqref{spde_forwbbb} is the Langevin SPDE\n\\begin{equation}\\label{spde_forw_pro}\n d_{\\mathbf{B}}u_s({\\xi},{\\nu})=\\frac{{\\sigma}^{2}}{2}\\partial_{{\\nu}{\\nu}}u_s({\\xi},{\\nu})ds+{\\beta}\\partial_{{\\nu}}u_s({\\xi},{\\nu})dW_s,\n\\end{equation}\nwith ${\\sigma},{\\beta}$ constant parameters. 
Clearly, if $u_s=u_s({\\xi},{\\nu})$ is a smooth function then\n\\eqref{spde_forw_pro} can be written in the usual It\\^o form\n\\begin{equation}\\label{spde_forw_pro_bis}\n du_s({\\xi},{\\nu})=\\left(\\frac{{\\sigma}^{2}}{2}\\partial_{{\\nu}{\\nu}}u_s({\\xi},{\\nu}){-}{\\nu}\\partial_{{\\xi}}u_s({\\xi},{\\nu})\\right)ds+{\\beta}\\partial_{{\\nu}}u_s({\\xi},{\\nu})dW_s.\n\\end{equation}\nNotice that $\\partial_{{\\xi}}$, being equal to the Lie bracket $[\\partial_{{\\nu}},\\mathbf{B}]$, has to be regarded\nas a {\\it third order derivative} in the intrinsic sense of subelliptic operators (cf.\n\\cite{MR657581}): this motivates the use of the ``Lie stochastic differential'' $d_{\\mathbf{B}}$\ninstead of the standard It\\^o differential in \\eqref{spde_forw}. Notice also that\n\\eqref{spde_forw_pro} reduces to the forward Kolmogorov (or Fokker-Planck) equation for\n\\eqref{aaee1} when ${\\beta}=0$.\n\\end{example}\n\nAnalogously, in Section \\ref{bSPDE} we prove that\n\\begin{equation}\n E\\left[{\\varphi}(Z_{T}^{t,z,y},Y_{T}^{t,z,y})\\mid {\\mathcal{F}^{Y}_{t,T}}\\right]=\n \\int\\limits_{{\\mathbb {R}}^{3}}\\bar{\\mathbf{\\Gamma}}(t,z,y;T,{\\zeta},{\\eta}){\\varphi}({\\zeta},{\\eta})d{\\zeta} d{\\eta}, \\qquad (t,z,y)\\in [0,T]\\times\n {\\mathbb {R}}^2\\times{\\mathbb {R}},\n\\end{equation}\nwhere $\\bar{\\mathbf{\\Gamma}}$ denotes the (normalized) fundamental solution of the {\\it backward filtering\nequation}, which is an SPDE of the form\n\\begin{equation}\\label{spde_backbbb}\n -d_{{\\mathbf{B}}}u_t(z,y)=\\tilde{\\mathcal{A}}_{t}\n u_t(z,y)dt+\\tilde{\\mathcal{G}}_{t}u_t(z,y)\\star dW_{t}.\n\\end{equation}\nWe refer to \\eqref{Backward_eq2} for the precise formulation of the backward filtering SPDE. The\nsymbol $\\star$ means that \\eqref{spde_backbbb} is written in terms of the {\\it backward It\\^o\nintegral}, whose definition is recalled in Section \\ref{Itoback} for the reader's convenience. 
We\nshall see that the coefficients of the forward filtering SPDE are random, while the coefficients\nof the backward filtering SPDE are deterministic. Moreover, \\eqref{spde_forwbbb} is posed in\n${\\mathbb {R}}^{3}$ (including the time variable) while \\eqref{spde_backbbb} is posed in ${\\mathbb {R}}^{4}$.\n\nThe rest of the paper is organized as follows. In Section \\ref{sec1} we resume and extend the\nH\\\"older theory for degenerate SPDEs satisfying the weak H\\\"ormander condition, developed in\n\\cite{PascucciPesce1} and \\cite{pasc:pesc:19}. In Section \\ref{sec2}, which is the core of the\npaper, we state the filtering problem and derive the forward and backward filtering SPDEs. Section\n\\ref{proofK} contains the proof of the results about the existence and Gaussian estimates for the\nfundamental solutions of the filtering SPDEs. In Section \\ref{Itoback} we collect the definition\nand some basic result about backward stochastic integration.\n\n\n\\section{Fundamental solution of Langevin-type SPDEs}\\label{sec1}\nWe present the H\\\"older theory for degenerate SPDEs that will be used in the derivation of the\nfiltering equations. We first introduce some general notation and the functional spaces used\nthroughout the paper.\n\nWe denote by $z=(x,v_{1},\\dots,v_{d})$ and ${\\zeta}=({\\xi},{\\nu}_{1},\\dots,{\\nu}_{d})$ the points in\n${\\mathbb {R}}\\times{\\mathbb {R}}^d$. Moreover, for any $k\\in{\\mathbb {N}}$, $0<{\\alpha}<1$ and $0\\le tt}\\int_{{\\mathbb {R}}^{2}}\n {\\mathbf{\\Gamma}}(t,z;s,{\\zeta}){\\varphi}(z)dz={\\varphi}(z_{0}),\\qquad P\\text{-a.s.}$$\n\\end{itemize}\n\\end{definition}\nIn \\cite{pasc:pesc:19}, under suitable assumptions on the coefficients, we proved existence and\nGaussian-type estimates of a fundamental solution for \\eqref{spde_forw} when $b_s\\equiv c_s\\equiv\nh_s\\equiv 0$ and $d=1$. 
Here we slightly extend those results to an SPDE of the general form\n\\eqref{spde_forw} and to the backward version of it, that is\n\\begin{equation}\\label{spde_back}\n -d_{\\mathbf{B}}u_t(z)=\\mathcal{A}_{t,z}u_t(z)dt+\\mathcal{G}_{t,z}u_t(z)\\star dW_t, \\qquad\n \\mathbf{B}={probabilit\\`a }_t+v_1{probabilit\\`a }_x.\n\\end{equation}\n\nWe denote by $\\cev{\\mathbf{C}}^{k+{\\alpha}}_{t,T}$ (and $\\mathbf{b}\\cev{\\mathbf{C}}^{k+{\\alpha}}_{t,T}$) the\nstochastic H\\\"older spaces formally defined as in Definition \\ref{def1} with $\\mathcal{P}_{t,T}$\nin condition ii) replaced by the backward predictable ${\\sigma}$-algebra $\\cev{\\mathcal{P}}_{t,T}$\ndefined in terms of the backward Brownian filtration (cf. Section \\ref{Itoback}). Again,\n\\eqref{spde_back} is understood in the strong sense:\n\\begin{definition}\\label{ad2}\nA solution to \\eqref{spde_back} on $[0,s]$ is a process $u=u_{t}(x,v)\\in\n\\cev{\\mathbf{C}}^{0}_{0,s}$ that is twice continuously differentiable in the variables $v$ and\nsuch that\n\\begin{equation}\\label{spde_back1}\n u_{t}\\left(\\gamma^{\\mathbf{B}}_{s-t}(z)\\right)=u_{s}(z)+\\int_{t}^{s}{\\mathcal{A}_{{\\tau},\\gamma^{\\mathbf{B}}_{s-{\\tau}}(z)}}u_{\\tau}(\\gamma^{\\mathbf{B}}_{s-{\\tau}}(z))d{\\tau}\n + \\int_{t}^{s}\\mathcal{G}_{{\\tau},\\gamma^{\\mathbf{B}}_{s-{\\tau}}(z)}u_{\\tau}(\\gamma^{\\mathbf{B}}_{s-{\\tau}}(z))\\star dW_{{\\tau}},\\qquad t\\in[0,s].\n\\end{equation}\n\\end{definition}\n\\begin{definition}\\label{d2} A fundamental solution for\nthe backward SPDE \\eqref{spde_back} is a stochastic process $\\cev{\\mathbf{\\Gamma}}=\\cev{\\mathbf{\\Gamma}}(t,z;s,{\\zeta})$\ndefined for $0\\le t0$ and multi-index ${\\beta}\\in\n{\\mathbb {N}}_{0}^{N}$, we set\n\\begin{equation}\\label{aea1}\n \\langle f\\rangle_{{\\varepsilon},{\\beta}}:=\\sup_{w\\in{\\mathbb {R}}^{N}}(1+|w|^2)^{{\\varepsilon}}|{probabilit\\`a }_{w}^{{\\beta}}f(w)|.\n\\end{equation}\n\\begin{assumption}\\label{ass3}\nThere exist ${\\varepsilon}>0$ and 
two random variables $M_1\\in L^{p}({\\Omega})$, with\n$p>\\max\\left\\{2,\\frac{1}{{\\varepsilon}}\\right\\}$, and $M_2\\in L^{\\infty}({\\Omega})$ such that with probability one\n\\begin{align\n \\sup_{t\\in[0,T]}\\left(\\langle {\\sigma}_{t}\\rangle_{{\\varepsilon},{\\beta}}+\\langle {\\sigma}_{t}\\rangle_{1\/2+{\\varepsilon},{\\beta}'}\\right)&\\le M_{1},\\qquad |{\\beta}|=1,\\\n |{\\beta}'|=2,3,\\\\\n \\sup_{t\\in[0,T]}\\langle h_{t}\\rangle_{1\/2,{\\beta}}&\\le M_{2},\\qquad |{\\beta}|=1.\n\\end{align}\n\\end{assumption}} Assumption \\ref{ass3} requires that ${\\sigma}_{t}(z)$ and $h_t(z)$ flatten as $z\\to \\infty$. In\nparticular, this condition is clearly satisfied if ${\\sigma}$ and $h$ depend only on $t$ or, more\ngenerally, if the spatial gradients of ${\\sigma}$ and $h$ have compact support.\n\nIn order to state the main result of this section, Theorem \\ref{TH1} below, we need to introduce\nsome additional notation: we consider the Gaussian kernel\n\\begin{equation}\\label{gammal}\n \\Gamma_{\\lambda}(t,x,v)=\\frac{\\lambda}{t^{\\frac{d+3}{2}}}\\exp\\left(-\\frac{1}{2\\lambda}\\left(\\frac{x^{2}}{t^{3}}+\\frac{|v|^{2}}{t}\\right)\\right),\n \\qquad t>0,\\ (x,v)\\in{\\mathbb {R}}\\times{\\mathbb {R}}^{d},\\ \\lambda>0.\n\\end{equation}\nTo fix ideas, for $d=1$ and up to some renormalization, $\\Gamma_{\\lambda}$ is the fundamental solution\nof the degenerate Langevin equation \\eqref{aaee1bb}. 
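The kernel $\Gamma_\lambda$ is homogeneous of degree $-(d+3)$ with respect to the parabolic dilations $\delta_c(t,x,v)=(c^2 t, c^3 x, c v)$ that are natural for the Langevin operator: $x$ scales like $t^{3/2}$ and $v$ like $t^{1/2}$. A quick numerical verification of this invariance (our addition, for $d=1$ and scalar $v$):

```python
import math

def Gamma(lam, t, x, v, d=1):
    # Gaussian kernel Gamma_lambda(t, x, v) with scalar v (so d = 1)
    return (lam / t ** ((d + 3) / 2.0)) * math.exp(
        -(x * x / t ** 3 + v * v / t) / (2.0 * lam)
    )

# Gamma_lambda(c^2 t, c^3 x, c v) = c^(-(d+3)) Gamma_lambda(t, x, v), d + 3 = 4:
# x^2 / t^3 and v^2 / t are dilation invariant, t^(-2) picks up c^(-4)
for c in (0.5, 2.0, 3.0):
    for (t, x, v) in ((0.3, 0.1, -0.4), (1.0, 2.0, 1.5)):
        lhs = Gamma(1.7, c ** 2 * t, c ** 3 * x, c * v)
        rhs = c ** (-4) * Gamma(1.7, t, x, v)
        assert abs(lhs - rhs) <= 1e-12 * abs(rhs)
```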
For a recent survey on the theory of this\nkind of ultra-parabolic operators and the related sub-elliptic structure, we refer to\n\\cite{anceschi}.\n\nIn the following statement, we denote by $g^{\\text{\\rm\\tiny IW},-1}$ (and $\\cev{g}^{\\text{\\rm\\tiny IW},-1}$) the inverse of the It\\^o-Wentzell\nstochastic flow $(x,v)\\mapsto g^{\\text{\\rm\\tiny IW}}(x,v):= \\left(x,\\gamma^{\\text{\\rm\\tiny IW}}_{t,s}(x,v)\\right)$ defined by\n\\eqref{sde_forw} (and $(x,v)\\mapsto \\cev{g}^{\\text{\\rm\\tiny IW}}(x,v):= \\left(x,\\theta^{\\text{\\rm\\tiny IW}}_{t,s}(x,v)\\right)$ defined by\n\\eqref{sde_back}, respectively). Moreover, we consider the vector field\n\\begin{align}\\label{flusso2}\n \\mathbf{Y}_{t,s}(z)&:=\\Big((\\gamma^{\\text{\\rm\\tiny IW}}_{t,s})_1(z),-(\\gamma^{\\text{\\rm\\tiny IW}}_{t,s}(z))_1(\\nabla_v\\gamma^{\\text{\\rm\\tiny IW}}_{t,s})^{-1}(z){{probabilit\\`a }_x\\gamma^{\\text{\\rm\\tiny IW}}_{t,s}(z)}\\Big),\n\\end{align}\nwith $\\nabla_v\\gamma^{\\text{\\rm\\tiny IW}}=({probabilit\\`a }_{v_j}\\gamma^{\\text{\\rm\\tiny IW}}_i)_{i,j=1,\\cdots d}$ and ${probabilit\\`a }_x\\gamma^{\\text{\\rm\\tiny IW}}=({probabilit\\`a }_x\\gamma^{\\text{\\rm\\tiny IW}}_i)_{i=1,\\cdots d}$,\nand define $\\cev{\\mathbf{Y}}_{t,s}$ analogously. Eventually, equation\n $$\\gamma_{s}^{t,z}=z+\\int_{t}^{s}\\mathbf{Y}_{t,{\\tau}}(\\gamma_{{\\tau}}^{t,z})d{\\tau},\\qquad s\\in[t,T],$$\ndefines the integral curve of $\\mathbf{Y}_{t,s}$ starting from $(t,z)$, and equation\n $$\\cev{\\gamma}_{t}^{s,{\\zeta}}={\\zeta}+\\int_{t}^{s}\\cev{\\mathbf{Y}}_{{\\tau},s}(\\cev{\\gamma}_{{\\tau}}^{s,{\\zeta}})d{\\tau},\\qquad t\\in[0,s],$$\ndefines the integral curve of $\\cev{\\mathbf{Y}}_{t,s}$ ending at $(s,{\\zeta})$. 
The main result of this section is\nthe following theorem whose proof is postponed to Section \\ref{proofK}.\n\\begin{theorem}\\label{TH1}\nUnder Assumptions \\ref{ass1}-i), \\ref{ass2} and \\ref{ass3}, the forward SPDE \\eqref{spde_forw} has\na fundamental solution $\\mathbf{\\mathbf{\\Gamma}}$ and there exists a positive random variable $\\lambda$ such\nthat\n\\begin{align}\n \\Gamma_{\\lambda^{-1}}\\left({s-t}, g^{\\text{\\rm\\tiny IW},-1}_{t,s}({\\zeta})-\\gamma_{s}^{t,z} \\right) &\\leq\n \\mathbf{\\mathbf{\\Gamma}}(t,z;s,{\\zeta}) \\leq \\Gamma_{\\lambda}\\left({s-t}, g^{\\text{\\rm\\tiny IW},-1}_{t,s}({\\zeta})-\\gamma_{s}^{t,z}\\right), \\label{t_e1}\\\\\n \\left| {probabilit\\`a }_{{\\nu}_i}\\mathbf{\\mathbf{\\Gamma}}(t,z;s,{\\xi},{\\nu})\\right|&\\leq\n \\frac{1}{\\sqrt{s-t}}\\Gamma_{\\lambda}\\left({s-t}, g^{\\text{\\rm\\tiny IW},-1}_{t,s}({\\xi},{\\nu})-\\gamma_{s}^{t,z}\\right), \\label{t_e2}\\\\\n \\left| {probabilit\\`a }_{{\\nu}_i{\\nu}_j}\\mathbf{\\mathbf{\\Gamma}}(t,z;s,{\\xi},{\\nu})\\right|\n &\\leq \\frac{1}{s-t}\\Gamma_{\\lambda}\\left({s-t}, g^{\\text{\\rm\\tiny IW},-1}_{t,s}({\\xi},{\\nu})-\\gamma_{s}^{t,z}\\right), \\label{t_e3}\n\\end{align}\nfor every $0\\leq t0$ that depends only on\nthe general constants of Assumptions \\ref{ass1}, \\ref{ass2} and \\ref{ass3}}:\n\\begin{equation}\\label{aee18}\n\\begin{split}\n \\Gamma_{\\lambda^{-1}}\\left(s-t,{\\zeta}-{{\\gamma}}_{t,s}^{t_{0},z_{0}}(z)\\right)\\le\\,\n \\mathbf{\\Gamma}^{t_{0},z_{0}}(t,z;s,{\\zeta})&\\le \\Gamma_{\\lambda}\\left(s-t,{\\zeta}-{{\\gamma}}_{t,s}^{t_{0},z_{0}}(z)\\right), \\\\\n |{probabilit\\`a }_{\\nu} \\mathbf{\\Gamma}^{t_{0},z_{0}}(t,z;s,{\\xi},{\\nu})|&\\le \\frac{1}{\\sqrt{s-t}}\\Gamma_{\\lambda}\\left(s-t,({\\xi},{\\nu})-{{\\gamma}}_{t,s}^{t_{0},z_{0}}(z)\\right),\\\\\n |{probabilit\\`a }_{{\\nu}\\n}\\mathbf{\\Gamma}^{t_{0},z_{0}}(t,z;s,{\\xi},{\\nu})|&\\le 
\\frac{1}{s-t}\\Gamma_{\\lambda}\\left(s-t,({\\xi},{\\nu})-{{\\gamma}}_{t,s}^{t_{0},z_{0}}(z)\\right),\n\\end{split}\n\\end{equation}\nfor $0\\le t\\lambda$)}\n &\\le\n \\frac{1}{(s-t)^{1-\\bar{{\\alpha}}\/2}}\\mathbf{\\Gamma}_{\\bar{\\lambda}}(s-t,{\\zeta}-{\\gamma}_s^{t,z}).\n\\end{align}\nNext, we set\n\\begin{align}\n \\mathbf{\\Gamma}\\otimes H (t,z;s,{\\zeta})&:=\\int_t^s\\int_{{\\mathbb {R}}^2}H(t,z;{\\tau},w)\\mathbf{\\Gamma}({\\tau},w;s,{\\zeta})dw d{\\tau}.\n\\end{align}\nA recursive application of the Duhamel principle shows that\n\\begin{align}\\label{Expansion1}\n \\mathbf{\\Gamma}(t,z;s,{\\zeta})&=Z(t,z;s,{\\zeta})+\\mathbf{\\Gamma}\\otimes H (t,z;s,{\\zeta})\\\\ &= Z(t,z;s,{\\zeta}) + \\sum_{k=1}^{N-1}Z\\otimes\n H^{\\otimes k} (t,z;s,{\\zeta})+\\mathbf{\\Gamma}\\otimes H^{\\otimes N} (t,z;s,{\\zeta}),\\qquad N\\ge1.\n\\end{align}\nAs $N$ tends to infinity we formally obtain a representation of $\\mathbf{\\Gamma}$ as a series of convolution\nkernels. Unfortunately, as already noticed in \\cite{MR2659772} and \\cite{pasc:pesc:19}, the\npresence of the transport term makes it hard to control the iterated kernels uniformly in $N$,\nas opposed to the classical parametrix method for uniformly parabolic PDEs.\nThus the remainder $\\mathbf{\\Gamma}\\otimes H^{\\otimes N}$ must be handled with a different technique, borrowed\nfrom stochastic control theory: the rest of the proof proceeds exactly in the same way as in\n\\cite{pasc:pesc:19} to which we refer for a detailed explanation.\n\n\\medskip\n\nNext, we consider the backward equation\n\\begin{equation}\\label{aee16b}\n \\cev{\\mathcal{A}}_{t}u_{t}(z)+\\cev{\\mathbf{Y}}_{t}u_{t}(z)+{probabilit\\`a }_{t}u_{t}(z)=0,\\qquad t\\in[0,s),\\ z=(x,v)\\in{\\mathbb {R}}^{2},\n\\end{equation}\nwhere $\\cev{\\mathcal{A}}_t$ is a second order operator of the form\n $$\\cev{\\mathcal{A}}_t=\\cev{a}_t{probabilit\\`a }_{vv} + \\cev{b}_t{probabilit\\`a }_{v} +\\cev{c}_t,\\qquad z=(x,v)\\in{\\mathbb {R}}^{2}\n 
$$\nand\n$\\cev{\\mathbf{Y}}_{t\n=(\\cev{\\mathbf{Y}}_{t})_{1}{probabilit\\`a }_{x}+(\\cev{\\mathbf{Y}}_{t})_{2}{probabilit\\`a }_{v}.$\nFor a fixed $(s_{0},{\\zeta}_{0})\\in (0,s]\\times{\\mathbb {R}}^{2}$, we define the linearized version of\n\\eqref{aee16b}, that is\n\\begin{equation}\\label{aee16lin_b}\n\n \\cev{\\mathcal{A}}^{s_{0},{\\zeta}_{0}}_{t}u_{t}(z)+\\cev{\\mathbf{Y}}_{t}^{s_{0},{\\zeta}_{0}}u_{t}(z)+{probabilit\\`a }_{t}u_{t}(z)=0,\\qquad t\\in[0,s),\\ z\\in{\\mathbb {R}}^{2},\n\\end{equation}\nwhere the definition of $\\cev{\\mathbf{Y}}_{t}^{s_{0},{\\zeta}_{0}}$ is analogous to that of\n$\\mathbf{Y}_{s}^{t_{0},z_{0}}$ in \\eqref{aee17} and\n\\begin{equation}\\label{linearized_PDE_back}\n \\cev{\\mathcal{A}}^{s_{0},{\\zeta}_{0}}_t:=\\cev{a}_t(\\cev{{\\gamma}}_t^{s_{0},{\\zeta}_{0}}){probabilit\\`a }_{vv},\\qquad\n \\cev{{\\gamma}}_{t}^{s_{0},{\\zeta}_{0}}={\\zeta}_{0}+\\int_{t}^{s_{0}}\\cev{\\mathbf{Y}}_{{\\tau}}(\\cev{{\\gamma}}_{{\\tau}}^{s_{0},{\\zeta}_{0}})d{\\tau},\\qquad t\\in[0,s_{0}].\n\\end{equation}\nEquation \\eqref{aee16lin_b} has an explicit fundamental solution\n$\\cev{\\mathbf{\\Gamma}}^{s_{0},{\\zeta}_{0}}=\\cev{\\mathbf{\\Gamma}}^{s_{0},{\\zeta}_{0}}(t,z;s,{\\zeta})$ of Gaussian type, that satisfies\nestimates analogous to \\eqref{aee18}.\nThe {\\it backward parametrix} for {\\eqref{aee16b}} is defined as\n $$\\cev{Z}(t,z;s,{\\zeta})=\\cev{\\mathbf{\\Gamma}}^{s,{\\zeta}}(t,z;s,{\\zeta}),\\qquad 0\\le t> \\frac{2 D_{array}^2}{\\lambda}$, where $D_{array}$ is the maximum\nphysical size of the PAF and $\\lambda$ is the wavelength of operation of the PAF. \nThe radiation pattern when the PAF is excited by an arbitrary set of port voltages\nis obtained by scaling the embedded beam patterns with the\nport voltage and summing them up.\nHence the dimension of the embedded beam pattern in this definition is m$^{-1}$. 
\nAt the far-field, the beam pattern can be described by an outgoing spherical wave,\n\\begin{equation}\n\\vec{\\mathcal{E}}^e_i(\\vec{r}) = \\vec{E}^e_i(\\theta, \\phi) \\; \\frac{e^{j\\vec{k}.\\vec{r}}}{r},\n\\label{farf}\n\\end{equation}\nwhere $\\vec{\\mathcal{E}}^e_i$ is the $i^{th}$ embedded beam pattern,\n$r$ and $\\hat{r}$ are the magnitude and the unit vector in the direction \nof $\\vec{r}$ respectively, $\\vec{k} = \\frac{2\\pi}{\\lambda} \\hat{r}$ is \nthe propagation vector. Here $\\vec{E}^e_i$ depends only \non the coordinates $\\theta, \\phi$. The geometric phase due to the location\nof elements (or in other words the excitation current distribution) away\nfrom the co-ordinate center is included in $\\vec{E}^e_i$. From the \ndefinition of embedded pattern it follows that $\\vec{E}^e_i$ is dimensionless. \nThe fields here are harmonic quantities, and for simplicity we omit the term $e^{j\\omega t}$.\nThe radiation pattern of the PAF when excited by a set of arbitrary\nport voltages is \n\\begin{eqnarray} \n\\vec{\\mathcal{E}}(\\vec{r}) & = & \\sum_{i=1,M} v_{0_i} \\vec{\\mathcal{E}}^e_i(\\vec{r}), \\nonumber \\\\ \n & = & \\bm V_0^T \\boldsymbol{\\vec{\\mathcal{E}}^e}, \n\\label{PAFfpat}\n\\end{eqnarray}\nwhere $\\bm V_0$ is the vector of port voltages $v_{0_i}$ (see Fig.~\\ref{fig1}a). The \nradiation pattern $\\vec{\\mathcal{E}}$ has units V\/m. In the far-field,\nthe ($\\theta, \\phi$) dependence of the radiation pattern can be written\nin a similar fashion,\n\\begin{equation}\n\\vec{E}(\\theta, \\phi) = \\bm V_0^T \\boldsymbol{\\vec{E}^e}.\n\\label{Evsvembpat}\n\\end{equation}\nThe unit of $\\vec{E}$ is V. In this report, we refer to both $\\boldsymbol{\\vec{\\mathcal{E}}^e}$ \nand $\\boldsymbol{\\vec{E}^e}$ as embedded beam pattern VEB. 
\n\nAnother definition for embedded beam pattern is: the $j^{th}$ embedded beam pattern, \n$\\vec{\\mathcal{\\psi}}^e_j$ is \nthe beam pattern of the PAF when $j^{th}$ port is excited with\n1 A and all other ports are open circuited, i.e.\n\\begin{eqnarray}\n\\mathcal{J}_{0_i} & = & 1\\;\\; \\textrm{A}\\;\\; \\textrm{for}\\; i = j, \\nonumber\\\\\n\\mathcal{J}_{0_i} & = & 0\\;\\; \\textrm{A}\\;\\; \\textrm{for}\\; i \\neq j,\n\\end{eqnarray}\nwhere $\\mathcal{J}_{0_i}$ are the port currents.\nThe source impedance for excitation is considered to be equal to $z_0$. \nAs before there are $M$ embedded beam patterns, which are represented conveniently\nas a vector $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$,\n\\begin{equation}\n\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}^T = \\left[\\vec{\\mathcal{\\psi}}^e_1, \\vec{\\mathcal{\\psi}}^e_2, ... \\right].\n\\end{equation} \nThe beam pattern at far field can be written as,\n\\begin{equation}\n\\vec{\\mathcal{\\psi}}^e_i(\\vec{r}) = \\vec{\\Psi}^e_i(\\theta, \\phi) \\; \\frac{e^{j\\vec{k}.\\vec{r}}}{r},\n\\label{ifarf}\n\\end{equation}\nThe radiation pattern of the PAF when excited by a set of arbitrary\nport currents is \n\\begin{eqnarray} \n\\vec{\\mathcal{E}}(\\vec{r}) & = & \\sum_{i=1,M} \\mathcal{J}_{0_i} \\vec{\\mathcal{\\psi}}^e_i(\\vec{r}), \\nonumber \\\\ \n & = & \\bm I_0^T \\boldsymbol{\\vec{\\mathcal{\\psi}}^e}, \n\\label{PAFfpat2}\n\\end{eqnarray}\nwhere $\\bm I_0$ is the vector of port currents $\\mathcal{J}_{0_i}$ (see Fig.~\\ref{fig1}b). The \nradiation pattern $\\vec{\\mathcal{E}}$ has the unit V\/m and\n$\\vec{\\mathcal{\\psi}}^e_i$ has unit V\/A\/m. As before, \nthe ($\\theta, \\phi$) dependence of the far-field radiation pattern can be written\nas,\n\\begin{equation}\n\\vec{E}(\\theta, \\phi) = \\bm I_0^T \\boldsymbol{\\vec{\\Psi}^e}.\n\\label{Evsiembpat}\n\\end{equation}\nThe unit of $\\vec{E}$ is V and that of $\\vec{\\Psi}^e$ is V\/A. 
In this report,\nwe refer to both $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$ and $\\boldsymbol{\\vec{\\Psi}^e}$ as the \nembedded beam pattern CEB. \n\nThe relationship between the two embedded beam patterns can be obtained\nusing the network relationship between the port voltages and\ncurrents, $\\bm V_0 = \\bm Z \\bm I_0$.\nSubstituting this relationship in Eq.~\\ref{PAFfpat}, we get \n\\begin{equation}\n\\vec{\\mathcal{E}} = \\bm V_0^T \\boldsymbol{\\vec{\\mathcal{E}}^e} = \\bm I_0^T \\bm Z^T \\boldsymbol{\\vec{\\mathcal{E}}^e}.\n\\label{eq11}\n\\end{equation}\nFrom Eq.~\\ref{PAFfpat2} \\& \\ref{eq11} it follows\n\\begin{equation}\n\\boldsymbol{\\vec{\\mathcal{\\psi}}^e} = \\bm Z^T \\boldsymbol{\\vec{\\mathcal{E}}^e}\n\\end{equation}\nFor a reciprocal PAF, $\\bm Z^T = \\bm Z$, and so the above equation \ncan also be written as\n\\begin{equation}\n\\boldsymbol{\\vec{\\mathcal{\\psi}}^e} = \\bm Z \\boldsymbol{\\vec{\\mathcal{E}}^e}\n\\end{equation}\n\n\n\\section{PAF model equations corresponding to the two embedded beam patterns}\n\nThe PAF model equations are somewhat simplified when written in terms \nof CEB $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$. Essentially in almost \nall relevant equations the impedance matrix $\\bm Z$ is absorbed in the\nembedded beam pattern when $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$ is used. For example,\nthe open circuit voltage vector (see Eq. 
37 in Roshi \\& Fisher 2016)\nat the output of the PAF for VEB, \n$\\boldsymbol{\\vec{\\mathcal{E}}^e}$, and CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$, is given by Eqs.~\\ref{eq14a} \\& \\ref{eq14b}\nrespectively; \n\\begin{eqnarray}\n\\bm V_{oc} & = & \\bm Z \\int_{A_{free}} \\left(\\boldsymbol{\\vec{\\mathcal{E}}^e}^T\n \\times \\boldsymbol{\\mathcal{I}} \\vec{\\mathcal{H}_r} -\n \\boldsymbol{\\mathcal{I}} \\vec{\\mathcal{E}_r} \\times \\boldsymbol{\\vec{\\mathcal{H}}^e}\\right)\n \\cdot \\hat{n}\\; \\textrm{d}A, \\label{eq14a} \\\\\n & = & \\int_{A_{free}} \\left(\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}^T\n \\times \\boldsymbol{\\mathcal{I}} \\vec{\\mathcal{H}_r} -\n \\boldsymbol{\\mathcal{I}} \\vec{\\mathcal{E}_r} \\times \\boldsymbol{\\vec{\\mathcal{J}}^e}\\right)\n \\cdot \\hat{n}\\; \\textrm{d}A. \\label{eq14b}\n\\label{voc}\n\\end{eqnarray} \nHere $\\boldsymbol{\\vec{\\mathcal{H}}^e}$ and $\\boldsymbol{\\vec{\\mathcal{J}}^e}$ are the magnetic \nfield patterns corresponding to the VEB, $\\boldsymbol{\\vec{\\mathcal{E}}^e}$\nand the CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$ respectively, $\\vec{\\mathcal{E}_r}$ and\n$\\vec{\\mathcal{H}_r}$ are the incident electric and magnetic fields on the PAF respectively, \n$\\boldsymbol{\\mathcal{I}}$ is the identity matrix, and the integration is over a region outside the\nPAF (see Fig. 2 in Roshi \\& Fisher 2016). 
The notation used\nin Eq.~\\ref{voc} is explained in Appendix J of Roshi \\& Fisher (2016).\nA list of model equations corresponding to the VEB, \n$\\boldsymbol{\\vec{\\mathcal{E}}^e}$ (left) and the CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$ (right) is given below.\n\\begin{align}\n\\bm R_{spill} & = \\frac{4 k_B T_g}{z_f} \\bm Z \\bm C_{Ce1} \\bm Z^H, \n&\\bm R_{spill} = & \\frac{4 k_B T_g}{z_f} \\bm C_{C\\psi1}, \\\\\n\\bm R_{signal}& = \\frac{2 S_{source}}{z_f} \\bm Z \\bm C_{Ie} \\bm Z^H,\n&\\bm R_{signal} = & \\frac{2 S_{source}}{z_f} \\bm C_{I\\psi}, \\\\\nT_{spill} & = T_g\n \\frac{\\bm w_1^H \\bm Z \\bm C_{Ce1} \\bm Z^H \\bm w_1}\n {\\bm w_1^H \\bm Z \\bm C_{Ce} \\bm Z^H \\bm w_1}, \n&T_{spill} = & T_g\n \\frac{\\bm w_1^H \\bm C_{C\\psi1} \\bm w_1}\n {\\bm w_1^H \\bm C_{C\\psi} \\bm w_1}, \\\\\nT_A & = \\frac{S_{source}}{2 k_B} \\frac{\\bm w_1^H \\bm Z \\bm C_{Ie} \\bm Z^H \\bm w_1}\n {\\bm w_1^H \\bm Z \\bm C_{Ce} \\bm Z^H \\bm w_1},\n&T_A = & \\frac{S_{source}}{2 k_B} \\frac{\\bm w_1^H \\bm C_{I\\psi} \\bm w_1}\n {\\bm w_1^H \\bm C_{C\\psi} \\bm w_1} ,\\\\\n\\eta_{app} & = \\frac{1}{A_{ap}} \\; \\; \\frac{\\bm w_1^H \\bm Z \\bm C_{Ie} \\bm Z^H \\bm w_1}\n { \\bm w_1^H \\bm Z \\bm C_{Ce} \\bm Z^H \\bm w_1} ,\n&\\eta_{app} = & \\frac{1}{A_{ap}} \\; \\; \\frac{\\bm w_1^H \\bm C_{I\\psi} \\bm w_1}\n { \\bm w_1^H \\bm C_{C\\psi} \\bm w_1}. 
\n\\end{align}\nHere $\\bm R_{spill}$, $\\bm R_{signal}$ are the open circuit voltage correlations due to spillover\nnoise and that due to radiation from source respectively, $T_{spill}$ is the spillover\ntemperature and $T_A$ is the antenna temperature due to the source, $\\eta_{app}$ is the\naperture efficiency, $\\bm w_1$ is the weight vector applied on the open circuit voltage correlations\n(see Roshi \\& Fisher 2016), \n\\begin{eqnarray}\n\\bm C_{Ce1} & \\equiv & \\int_{\\Omega_{spill}} \\boldsymbol{\\vec{E^e}}\\cdot \\boldsymbol{\\vec{E^e}}^H \\textrm{d}\\Omega,\n\\label{eqce1} \\\\\n\\bm C_{C\\psi1} & \\equiv & \\int_{\\Omega_{spill}} \\boldsymbol{\\vec{\\Psi}^e}\\cdot \\boldsymbol{\\vec{\\Psi}^e}^H \\textrm{d}\\Omega, \\label{eqcsi1}\\\\\n\\bm C_{Ce} & \\equiv & \\int_{4\\pi} \\boldsymbol{\\vec{E^e}}\\cdot \\boldsymbol{\\vec{E^e}}^H \\textrm{d}\\Omega, \\\\\n\\bm C_{C\\psi} & \\equiv & \\int_{4\\pi} \\boldsymbol{\\vec{\\Psi}^e}\\cdot \\boldsymbol{\\vec{\\Psi}^e}^H \\textrm{d}\\Omega,\\\\\n\\bm C_{Ie} & \\equiv & \\left(\\int_{A_{pap}} \\boldsymbol{\\vec{\\mathcal{E}}^e_{pap}} \\textrm{d} A \\right) \\cdot\n \\left(\\int_{A_{pap}} \\boldsymbol{\\vec{\\mathcal{E}}^e_{pap}} \\textrm{d} A \\right)^H,\n\\label{eqie} \\\\\n\\bm C_{I\\psi} & \\equiv & \\left(\\int_{A_{pap}} \\boldsymbol{\\vec{\\mathcal{\\psi}}^e_{pap}} \\textrm{d} A \\right) \\cdot\n \\left(\\int_{A_{pap}} \\boldsymbol{\\vec{\\mathcal{\\psi}}^e_{pap}} \\textrm{d} A \\right)^H, \n\\label{eqisi}\n\\end{eqnarray}\n$k_B$ is the Boltzmann constant, $T_g$ is the ground temperature, $z_f$ is the free\nspace impedance, $S_{source}$ is the flux density of the observed source \nand $\\boldsymbol{\\vec{\\mathcal{E}}^e_{pap}}$ and \n$\\boldsymbol{\\vec{\\mathcal{\\psi}}^e_{pap}}$ are the aperture fields (see Roshi \\& Fisher 2016) due to \nthe VEB, $\\boldsymbol{\\vec{\\mathcal{E}}^e}$ and the CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$ respectively. 
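Because $\boldsymbol{\vec{\Psi}^e} = \bm Z^T \boldsymbol{\vec{E}^e}$ and $\bm Z$ is symmetric for a reciprocal PAF, the CEB correlation matrices absorb the impedance matrix, e.g. $\bm C_{C\psi1} = \bm Z \bm C_{Ce1} \bm Z^H$, so the left- and right-hand columns of the model equations above are numerically identical. The following sketch is not from the report: it discretises the pattern integrals as finite sums over randomly generated sample directions, and all sizes, the spillover subset, and the ground temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 4, 64                  # ports; sampled beam directions (hypothetical)
T_g = 300.0                   # illustrative ground temperature in K

Z = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Z = (Z + Z.T) / 2             # reciprocal PAF: symmetric impedance matrix

E = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))   # sampled VEB
Psi = Z.T @ E                 # sampled CEB, Psi^e = Z^T E^e

spill = slice(0, 20)          # directions assumed to see the ground radiation
C_Ce1 = E[:, spill] @ E[:, spill].conj().T   # discretised C_Ce1 integral
C_Ce = E @ E.conj().T                        # discretised C_Ce integral
C_Cp1 = Psi[:, spill] @ Psi[:, spill].conj().T
C_Cp = Psi @ Psi.conj().T

w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # weight vector w_1

# T_spill computed from the VEB form (with Z) and from the CEB form (Z absorbed).
T_spill_veb = T_g * (w.conj() @ Z @ C_Ce1 @ Z.conj().T @ w) / (w.conj() @ Z @ C_Ce @ Z.conj().T @ w)
T_spill_ceb = T_g * (w.conj() @ C_Cp1 @ w) / (w.conj() @ C_Cp @ w)
```

The same substitution collapses the $T_A$ and $\eta_{app}$ expressions in exactly the same way.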
\nIn Eqs.~\\ref{eqce1} \\& \\ref{eqcsi1} the integration is over the\nparts of the beam solid angle, $\\Omega_{spill}$, seeing the ground radiation field and\nin Eqs.~\\ref{eqie} \\& \\ref{eqisi} the integration is over the aperture\nplane, $A_{pap}$, of physical area $A_{ap}$.\nThe model equations\nthat are not affected by the embedded beam pattern definition are\n\\begin{eqnarray}\n\\bm R_{rec} & = & 4 k_B T_0 \\; \\Big(R_n \\boldsymbol{\\mathcal{I}} + \\sqrt{R_n g_n} \\;\\big(\\rho \\bm Z + \\rho^* \\bm Z^H\\big) + g_n \\bm Z\\bm Z^H \\Big), \\\\\nT_n & =& T_{min} + N T_0 \\; \\frac{\\bm w_1^H (\\bm Z - Z_{opt} \\boldsymbol{\\mathcal{I}})\n (\\bm Z - Z_{opt} \\boldsymbol{\\mathcal{I}})^H \\bm w_1} {\\mbox{Re}\\{Z_{opt}\\} \\; \\frac{1}{2}\\bm w_1^H (\\bm Z + \\bm Z^H) \\bm w_1}, \\\\\n\\bm R_{cmb} & =& 2 k_B T_{cmb} (\\bm Z + \\bm Z^H), \\\\\n\\bm R_{sky} & \\approx& 2 k_B T_{sky} (\\bm Z + \\bm Z^H).\n\\end{eqnarray}\nHere $\\bm R_{rec}$, $\\bm R_{cmb}$, $\\bm R_{sky}$ are the open circuit voltage correlations\ndue to the amplifier noise, the cosmic microwave background and the sky background\nradiation respectively,\n$T_n$ is the receiver temperature of the PAF, $T_0 = 290 K$, $R_n$, $g_n$ and $\\rho$ \nare the noise parameters of the amplifier,\nwhich can equivalently be expressed in terms of the minimum noise temperature $T_{min}$,\nLange invariance $N$ and optimum impedance $Z_{opt}$ (Pospieszalski 2010); \n$T_{cmb}$ is the cosmic microwave\nbackground temperature and $T_{sky} = T_{cmb} + T_{bg,\\nu_0} \\left(\\frac{\\nu}{\\nu_0}\\right)^{-2.7}$\nis the temperature of the sky background at the observed off-source\nposition, $T_{bg,\\nu_0}$ is the galactic background radiation\ntemperature at $\\nu_0$, and $\\nu$ is the frequency at which $\\bm R_{sky}$\nis computed.\n\n\\section{Embedded beam patterns from the CST far-field patterns}\n\\label{A8}\n\nThe CST (\\verb|https:\/\/www.cst.com\/|) microwave studio provides the\nfar-field pattern $\\vec{E'}_j$ when 
the $j^{th}$ port is excited and all\nother ports are terminated with the CST port impedance (in our case it is 50 $\\Omega$).\nFrom Eqs.~\\ref{Evsvembpat} \\& \\ref{Evsiembpat} we get\n\\begin{equation}\n\\vec{E'}_j = \\sum_{i=1,M} q_{ij} \\; \\vec{E}^e_i,\n\\label{cstembed}\n\\end{equation}\n\\begin{equation}\n\\vec{E'}_j = \\sum_{i=1,M} \\mathcal{J}_{ij} \\; \\vec{\\Psi}^e_i.\n\\label{cstembed1}\n\\end{equation}\nHere $q_{ij} = v_{0_i}$ is the port voltage and $\\mathcal{J}_{ij}$ is the\nport current. These voltages and currents are computed below. \nThe elements of the wave amplitude vector for the excitation are \n\\begin{eqnarray}\na_i & = & \\sqrt{2\\,P_{stim}} \\quad \\textrm{for} \\; i=j \\nonumber \\\\\n & = & 0 \\quad\\quad\\quad\\quad\\;\\; \\textrm{for}\\; i \\neq j\n\\end{eqnarray}\nwhere $P_{stim} = 0.5$ W, is the RMS excitation power in the CST simulation. \nThe wave amplitude vector $\\bm b$ is then\n\\begin{equation}\n\\bm b = a_j \\begin{bmatrix}\n S_{1j} \\\\\n S_{2j} \\\\\n \\threevdots \\\\\n S_{jj} \\\\\n \\threevdots \\\\\n S_{Mj}\n\\end{bmatrix}\n,\n\\end{equation}\nwhere $a_j$ is the $j^{th}$ element of the vector $\\bm a$,\n$S_{ij}, i = 1$ to $M$ is the $j^{th}$ column of $\\bm S$. 
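The port-voltage and port-current matrices $\bm Q$ and $\bm J$ constructed from these wave amplitudes (derived below) can be checked numerically. The following sketch is not from the report: the random $\bm S$-matrix is illustrative, and the impedance matrix is obtained from the standard conversion $\bm Z = z_0(\boldsymbol{\mathcal{I}} + \bm S)(\boldsymbol{\mathcal{I}} - \bm S)^{-1}$, which assumes the same reference impedance $z_0$ at every port.

```python
import numpy as np

rng = np.random.default_rng(1)
M, z0, P_stim = 3, 50.0, 0.5     # ports, reference impedance, CST excitation power

# Hypothetical S-matrix, scaled down so that (I - S) is safely invertible.
S = 0.1 * (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
I = np.eye(M)

# Standard S-to-Z conversion for a common reference impedance z0 (assumption).
Z = z0 * (I + S) @ np.linalg.inv(I - S)

Q = np.sqrt(2 * z0 * P_stim) * (I + S)    # port-voltage matrix
J = np.sqrt(2 * P_stim / z0) * (I - S)    # port-current matrix
```

The relation $\bm J = \bm Q \bm Z^{-1}$ holds because $(\boldsymbol{\mathcal{I}}+\bm S)$ and $(\boldsymbol{\mathcal{I}}-\bm S)$ are polynomials in $\bm S$ and therefore commute.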
The\nport voltages and currents are then\n\\begin{eqnarray}\nq_{ij} & = & \\sqrt{z_0} (a_i + b_i) \\nonumber \\\\\n & = & \\sqrt{z_0} (1 + S_{jj}) a_j \\quad \\textrm{for}\\;\\;\\; i = j \\nonumber \\\\\n & = & \\sqrt{z_0} S_{ij} a_j \\quad\\quad\\quad \\textrm{for}\\;\\;\\; i \\neq j \\label{qij}\\\\\n\\mathcal{J}_{ij} & = & \\frac{1}{\\sqrt{z_0}} (a_i - b_i) \\nonumber \\\\\n & = & \\frac{1}{\\sqrt{z_0}} (1 - S_{jj}) a_j \\quad \\textrm{for}\\;\\;\\; i = j \\nonumber \\\\\n & = & \\frac{-1}{\\sqrt{z_0}} S_{ij} a_j \\quad\\quad\\quad \\textrm{for}\\;\\;\\; i \\neq j\n\\label{jij}\n\\end{eqnarray}\nThe set of far-field patterns provided by the CST along with\nthe port voltages and currents can be used to obtain the VEB,\n$\\boldsymbol{\\vec{E}^e}$ and the CEB, $\\boldsymbol{\\vec{\\Psi}^e}$. \nEq.~\\ref{cstembed} \\& \\ref{cstembed1} for the set of far-field patterns\ncan be concisely written as\n\\begin{eqnarray}\n\\boldsymbol{\\vec{E}^{'}} & = & \\bm Q \\; \\boldsymbol{\\vec{E}^e}, \\\\\n\\boldsymbol{\\vec{E}^{'}} & = & \\bm J \\; \\boldsymbol{\\vec{\\Psi}^e}.\n\\end{eqnarray}\nwhere the elements of the matrix $\\bm Q$ are $q_{ij}$ and that of the \nmatrix $\\bm J$ are $\\mathcal{J}_{ij}$.\nThis equation is valid for each $\\theta, \\phi$.\nUsing Eqs.~\\ref{qij} \\& \\ref{jij} $\\bm Q$ and $\\bm J$ can be written as\n\\begin{eqnarray}\n\\bm Q & = & \\sqrt{2\\, z_0\\, P_{stim}} \\;\\;(\\boldsymbol{\\mathcal{I}} + \\bm S),\\\\\n\\bm J & = & \\sqrt{\\frac{2\\, P_{stim}}{z_0}} \\;\\;(\\boldsymbol{\\mathcal{I}} - \\bm S).\n\\end{eqnarray}\nThe matrices $\\bm Q$ and $\\bm J$ are also related through the equation \n\\begin{equation}\n\\bm J = \\bm Q \\bm Z^{-1}.\n\\end{equation}\nThe embedded beam patterns are then obtained as\n\\begin{eqnarray}\n\\boldsymbol{\\vec{E}^e} & = & \\bm Q^{-1} \\; \\boldsymbol{\\vec{E}^{'}}, \\\\\n\\boldsymbol{\\vec{\\Psi}^e} & = & \\bm J^{-1} \\; \\boldsymbol{\\vec{E}^{'}}.\n\\end{eqnarray}\n\n\\section{Some sanity 
checks}\n\n\\subsection{Energy conservation}\nWe verify here whether the computed embedded beam patterns \nsatisfy energy conservation.\nDetails of such a verification for the VEB $\\boldsymbol{\\vec{E}^e}$ are given in\nRoshi \\& Fisher (2016).\nWe consider below the case for CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$. \nFrom the definition of embedded beam pattern $\\vec{\\mathcal{\\psi}}^e_j$ \nthe port currents are\n\\begin{eqnarray}\n\\mathcal{J}_{0_i} & = & 1\\; \\textrm{A} \\quad \\textrm{for}\\;\\;\\; i = j, \\nonumber \\\\\n & = & 0\\; \\textrm{A} \\quad \\textrm{for}\\;\\;\\; i \\neq j,\n\\end{eqnarray}\nand hence the wave amplitudes are \n\\begin{eqnarray}\n\\frac{1}{\\sqrt{z_0}} (a_i - b_i) & = & 1 \\quad \\textrm{for}\\;\\;\\; i = j, \\nonumber \\\\\n & = & 0 \\quad \\textrm{for}\\;\\;\\; i \\neq j. \n\\end{eqnarray}\nThe vector $\\bm a$ can be written as\n\\begin{equation}\n\\bm a = \\bm b + \\sqrt{z_0} \\begin{bmatrix}\n 0 \\\\\n 0 \\\\\n \\threevdots \\\\\n 1 \\\\\n \\threevdots \\\\\n 0\n\\end{bmatrix},\n\\end{equation}\nwhere the non-zero element (which is 1) is located at the $j^{th}$ row. 
Substituting\nthis in the equation $\\bm b = \\bm S \\bm a $ \nand re-arranging we get\n\\begin{equation}\n\\bm b = \\sqrt{z_0}\\;(\\boldsymbol{\\mathcal{I}} - \\bm S)^{-1} \\begin{bmatrix}\n S_{1j} \\\\\n S_{2j} \\\\\n \\threevdots \\\\\n S_{jj} \\\\\n \\threevdots \\\\\n S_{Mj}\n\\end{bmatrix}.\n\\end{equation}\nPower dissipated at the $j^{th}$ port is\n\\begin{eqnarray}\nP_{dis} & = & \\frac{1}{2} (a_j a_j^* - b_j b_j^*), \\\\\n & = & \\frac{\\sqrt{z_0}}{2} (\\sqrt{z_0} + (b_j + b_j^*)).\n\\label{pdis1}\n\\end{eqnarray}\nThe far-field beam pattern of the PAF for the above excitation is the\nembedded beam pattern $\\vec{\\mathcal{\\psi}}^e_j$ and hence\nthe radiated power is \n\\begin{eqnarray}\nP_{rad} & = & \\frac{1}{2 z_f}\\int_{sphere} \\vec{\\mathcal{\\psi}}^e_j \\cdot \\vec{\\mathcal{\\psi}}^{e*}_j \\; \\textrm{d}A, \\nonumber \\\\\n & = & \\frac{1}{2 z_f}\\int_{4\\pi} \\vec{\\Psi}^e_j \\cdot \\vec{\\Psi}^{e*}_j \\; \\textrm{d}\\Omega.\n\\end{eqnarray}\nFor a loss-less PAF $P_{dis} = P_{rad}$. This equality is satisfied in our\nPAF model computation. Further, for a loss-less antenna,\n\\begin{equation}\nP_{rad} = \\frac{1}{2} \\mathcal{J}_{0_j}^2 \\; \\textrm{Re}\\{Z_{pin_j}\\},\n\\end{equation}\nwhere $\\mathcal{J}_{0_j}$ is the current flowing to port $j$, which for the embedded pattern\n$\\vec{\\mathcal{\\psi}}^e_j$ is 1 A and $Z_{pin_j}$ is the input\nimpedance of port $j$ when all other ports are open circuited. 
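The wave-amplitude bookkeeping above can be cross-checked numerically: for a 1 A drive into port $j$ with all other ports open, $P_{dis}$ computed from Eq.~\ref{pdis1} must equal the circuit expression $\frac{1}{2}\textrm{Re}\{z_{jj}\}$ (cf. Eq.~\ref{embpdis}). The sketch below is not from the report; the random reciprocal $\bm S$ and the standard conversion $\bm Z = z_0(\boldsymbol{\mathcal{I}}+\bm S)(\boldsymbol{\mathcal{I}}-\bm S)^{-1}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, z0, j = 4, 50.0, 1            # ports, reference impedance, driven port index

# Hypothetical reciprocal (symmetric) S-matrix with (I - S) invertible.
S = 0.2 * (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
S = (S + S.T) / 2
I = np.eye(M)
Z = z0 * (I + S) @ np.linalg.inv(I - S)   # assumed S-to-Z conversion

# 1 A into port j, all other ports open: b = sqrt(z0) (I - S)^{-1} S e_j.
e_j = np.zeros(M)
e_j[j] = 1.0
b = np.sqrt(z0) * np.linalg.inv(I - S) @ S @ e_j

# Dissipated power from the wave amplitudes (Eq. pdis1) ...
P_dis = (np.sqrt(z0) / 2) * (np.sqrt(z0) + 2 * b[j].real)
# ... and from the input impedance seen by the 1 A source.
P_port = 0.5 * Z[j, j].real
```

The identity $(\boldsymbol{\mathcal{I}}+\bm S)(\boldsymbol{\mathcal{I}}-\bm S)^{-1} = \boldsymbol{\mathcal{I}} + 2\bm S(\boldsymbol{\mathcal{I}}-\bm S)^{-1}$ makes the two expressions agree term by term.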
The input\nimpedance for this case is given by\n\\begin{equation}\nZ_{pin_j} = z_{jj},\n\\label{embZin}\n\\end{equation}\nwhere $z_{jj}$ is the $j^{th}$ diagonal element of the impedance matrix ${\\bm Z}$.\nThus\n\\begin{equation}\nP_{rad} = \\frac{1}{2} \\textrm{Re}\\{z_{jj}\\}.\n\\label{embpdis}\n\\end{equation}\n\n\\subsection{PAF in a thermal radiation field}\n\nIn this Section, we show that the open circuit voltage correlations $\\bm R_t$,\nobtained from the two embedded beam patterns, when the PAF is\nembedded in a black body radiation field are equal to the result given by\nTwiss's theorem (Twiss 1955). \nThe correlation $\\bm R_t$ is given by (Roshi \\& Fisher 2016) \n\\begin{eqnarray}\n\\bm R_t & = &\\frac{4 k_B T_0}{z_f} \\bm Z \\left(\n\\int_{4\\pi} \\boldsymbol{\\vec{E}^e}\\cdot \\boldsymbol{\\vec{E}^e}^H \\textrm{d}\\Omega \\right) \\bm Z^H, \\nonumber \\\\\n & = & \\frac{4 k_B T_0}{z_f} \\bm Z \\bm C_{Ce} \\bm Z^H,\n\\label{thcorr1}\n\\end{eqnarray}\nfor the VEB, $\\boldsymbol{\\vec{\\mathcal{E}}^e}$ and \n\\begin{equation}\n\\bm R_t = \\frac{4 k_B T_0}{z_f} \\bm C_{C\\psi},\n\\label{thcorr2}\n\\end{equation}\nfor the CEB, $\\boldsymbol{\\vec{\\mathcal{\\psi}}^e}$.\nFor a loss-less antenna the power dissipated at the ports should be equal to the radiated power,\nwhich can be used to calculate $\\bm C_{Ce}$ and $\\bm C_{C\\psi}$. 
The \nenergy balance condition gives, \n\\begin{eqnarray}\n\\frac{1}{2} \\left(\\frac{\\bm V_0^H \\bm I_0}{2} + \\frac{\\bm I_0^H \\bm V_0}{2}\\right) & = & \\frac{1}{2 z_f} \\bm V_0^H \\bm C_{Ce} \\bm V_0, \\nonumber \\\\\n\\frac{1}{4} \\bm V_0^H \\left(\\bm Z^{-1} + \\left(\\bm Z^{-1}\\right)^H \\right) \\bm V_0 & = &\n\\frac{1}{2 z_f} \\bm V_0^H \\bm C_{Ce} \\bm V_0,\n\\label{ce}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\frac{1}{2} \\left(\\frac{\\bm V_0^H \\bm I_0}{2} + \\frac{\\bm I_0^H \\bm V_0}{2}\\right) & = & \\frac{1}{2 z_f} \\bm I_0^H \\bm C_{C\\psi} \\bm I_0, \\nonumber \\\\\n\\frac{1}{4} \\bm I_0^H \\left(\\bm Z + \\bm Z^H \\right) \\bm I_0 & = &\n\\frac{1}{2 z_f} \\bm I_0^H \\bm C_{C\\psi} \\bm I_0.\n\\label{csi}\n\\end{eqnarray}\nSince Eqs.~\\ref{ce} \\& \\ref{csi} are valid for arbitrary excitations it follows\nthat\n\\begin{eqnarray} \n\\frac{1}{2} \\left(\\bm Z^{-1} + \\left(\\bm Z^{-1}\\right)^H \\right) & = &\n\\frac{1}{z_f} \\bm C_{Ce}, \\label{impbc1} \\\\\n\\frac{1}{2} \\left(\\bm Z + \\bm Z^H \\right) & = &\n\\frac{1}{z_f} \\bm C_{C\\psi}. \n\\label{impbc2}\n\\end{eqnarray}\nSubstituting Eq.~\\ref{impbc1} in Eq.~\\ref{thcorr1} and Eq.~\\ref{impbc2} in \nEq.~\\ref{thcorr2}, we get \n\\begin{equation}\n\\bm R_t = 2 k_B T_0 \\Big(\\bm Z + \\bm Z^H\\Big),\n\\end{equation}\nfrom both Eqs.~\\ref{thcorr1} \\& \\ref{thcorr2},\nwhich is the voltage correlation given by Twiss's theorem (Twiss 1955). \n\n\\section*{Acknowledgment}\n\nI thank Rick Fisher and Bill Shillue for carefully proof reading the report and providing useful\ncomments.\n\n\\section*{References}\n\n\\noindent \nPospieszalski, M. W., 2010, IEEE Microwave Magazine, 11, 61 \n\n\\noindent\nRoshi, D. A., Fisher, J. R., 2016, NRAO, Electronics division internal report, 330. \\\\\n\\url{https:\/\/library.nrao.edu\/public\/memos\/edir\/EDIR_330.pdf}\n\n\\noindent\nTwiss, R. Q., J. Appl. 
Phys., 1955, 26(5) 599.\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{Definition}\n\n\\begin{definition}[Primitive root of unity]\n\\label{defi:racprim}\n Let $q$ be a prime power. A matrix\n $A \\in M_\\ell(\\mathbb{F}_{q^s})$ is called a \\emph{primitive $m$-th root of\n unity} if\n \\begin{itemize}\n \\item $A^m = I_\\ell$,\n \\item $A^i \\neq I_\\ell$ if $i < m$,\n \\item $\\det(A^i - A^j) \\neq 0$, whenever $i \\neq j$.\n \\end{itemize}\n\\end{definition}\n\n\\begin{proposition}\n\\label{prop:racprim}\n Let $q$ be a prime power and suppose that $q^{s\\ell} - 1 = m$.\n Then there exists a primitive $m$-th root of unity in\n $M_{\\ell}(\\mathbb{F}_{q^s})$.\n\\end{proposition}\n\n\\begin{proof}\n Let $\\alpha \\in \\mathbb{F}_{q^{s\\ell}}$ be a primitive $m$-th root of unity\n and $A \\in M_{\\ell}(\\mathbb{F}_{q^s})$ be the companion matrix of the\n irreducible polynomial $f(X) \\in \\mathbb{F}_{q^s}[X]$ of $\\alpha$ over\n $\\mathbb{F}_{q^s}$. There exists $P \\in \\GL_{\\ell}(\\mathbb{F}_{q^{s\\ell}})$ and an\n upper triangular matrix $U \\in M_{\\ell}(\\mathbb{F}_{q^{s\\ell}})$ whose\n diagonal coefficients are the eigenvalues of $A$ such that\n $A = P^{-1} U P$. The eigenvalues of $A$ are exactly the\n roots of $f$ and then are primitive $m$-th roots of unity. 
Therefore\n $A$ satisfies the three conditions of Definition~\\ref{defi:racprim}.\n\\end{proof}\n\n\\begin{definition}[Block minimum distance]\n Let $\\mathcal{C}$ be a linear code over $\\mathbb{F}_q$ of length $m\\ell$.\n We define the \\emph{$\\ell$-block minimum distance} of $\\mathcal{C}$ to be\n the minimum distance of the folded code of $\\mathcal{C}$.\n\\end{definition}\n\n\\begin{definition}[Left quasi-BCH codes]\n\\label{defi:QBCH}\n Let $A$ be a primitive $m$-th root of unity in\n $M_{\\ell}(\\mathbb{F}_{q^s})$ and $\\delta \\leq m$.\n We define the $\\ell$-quasi-BCH code of length $m\\ell$,\n with respect to $A$, with designed minimum distance $\\delta$,\n over $\\mathbb{F}_q$ by\n \\begin{multline*}\n \\qbch_q(m,\\ell,\\delta,A) := \\\\\n \\left\\lbrace\n (c_1,\\ldots,c_m) \\in (\\mathbb{F}_q^\\ell)^m :\n \\sum_{j = 0}^{m - 1} A^{ij}c_j = 0\n \\text{ for } i = 1,\\ldots,\\delta - 1\n \\right\\rbrace.\n \\end{multline*}\n We call the linear map\n \\begin{equation*}\n \\begin{array}{rcl}\n \\mathcal{S}_A : (\\mathbb{F}_q^{\\ell})^m & \\rightarrow & \\mathbb{F}_{q^s}^{\\ell} \\\\\n x = (x_1,\\ldots,x_m) & \\mapsto & \\sum_{j = 0}^{m - 1}\n A^j x_j\n \\end{array}\n \\end{equation*}\n the \\emph{syndrome} map with respect to $\\qbch(m,\\ell,\\delta,A)$.\n\\end{definition}\n\n\\begin{proposition}\n Using the notation of Definition~\\ref{defi:QBCH},\n $\\qbch_q(m,\\ell,\\delta,A)$ has dimension at least\n $(m - s(\\delta - 1))\\ell$ and $\\ell$-block minimum distance at least\n $\\delta$. 
In other words $\\qbch_q(m,\\ell,\\delta,A)$ is an\n $[m\\ell , \\geq (m - s(\\delta - 1))\\ell , \\geq \\delta]_{\\mathbb{F}_q}$-code.\n\\end{proposition}\n\n\\begin{proof}\n According to Definition~\\ref{defi:QBCH} we have that\n \\begin{equation*}\n H = \\begin{pmatrix}\n I_{\\ell} & A & \\cdots & A^{m - 1} \\\\\n I_{\\ell} & A^2 & \\cdots & A^{2(m - 1)} \\\\\n \\vdots & \\vdots & & \\vdots \\\\\n I_{\\ell} & A^{\\delta - 1} & \\cdots & A^{(\\delta - 1)(m - 1)}\n \\end{pmatrix} \\in M_{(\\delta - 1)\\ell,m\\ell}(\\mathbb{F}_{q^s})\n \\end{equation*}\n is a parity check matrix of $\\qbch_q(m,\\ell,\\delta,A)$.\n Let\n \\begin{equation*}\n V = \\begin{pmatrix}\n I_\\ell & A & \\cdots & A^{\\delta - 1} \\\\\n I_\\ell & A^2 & \\cdots & A^{2(\\delta - 1)} \\\\\n \\vdots & \\vdots & & \\vdots \\\\\n I_\\ell & A^{\\delta - 1} & \\cdots & A^{(\\delta - 1)^2}\n \\end{pmatrix}.\n \\end{equation*}\n Using the Vandermonde matrix trick we find that the determinant $D$\n of $V$ over $M_{\\ell}(\\mathbb{F}_{q^s})[A]$ is $\\prod_{i < j} (A^i - A^j)$.\n By the definition of $A$ we have $\\det_{\\mathbb{F}_{q^s}} D \\neq 0$, thus\n $V$ is invertible over $M_{\\ell}(\\mathbb{F}_{q^s})[A]$ and hence invertible\n over $\\mathbb{F}_{q^s}$. Therefore $H$ has full rank over $\\mathbb{F}_{q^s}$.\n \n Let $i:\\mathbb{F}_q^{m\\ell} \\rightarrow \\mathbb{F}_{q^s}^{m\\ell}$ be the canonical\n injection and denote by\n $h:\\mathbb{F}_{q^s}^{m\\ell} \\rightarrow \\mathbb{F}_{q^s}^{(\\delta - 1)\\ell}$\n the $\\mathbb{F}_q$-linear map given by $H$. Then we have\n $\\dim_{\\mathbb{F}_q}(\\im h) = s(\\delta - 1)\\ell$. Thus\n $\\dim_{\\mathbb{F}_{q^s}}(\\im h \\circ i) \\leq (\\delta - 1)\\ell$ and\n $\\dim_{\\mathbb{F}_q}(\\im h \\circ i) \\leq s(\\delta - 1)\\ell$. 
Therefore\n $\\dim_{\\mathbb{F}_q}(\\ker h \\circ i) \\geq m\\ell - s(\\delta - 1)\\ell$.\n Suppose that there exists a codeword\n \\mbox{$c = (c_1,\\ldots,c_m) \\in \\mathcal{C} \\setminus \\{0\\}$}\n with $\\ell$-block weight $b \\leq \\delta - 1$. Denote by $i_1,\\ldots,i_b$\n the indices such that $c_{i_j}\\neq 0$ for $j=1,\\ldots,b$.\n This implies that the matrix\n \\begin{equation*}\n \\begin{pmatrix}\n A^{i_1} & A^{i_2} & \\cdots & A^{i_b} \\\\\n A^{2i_1} & A^{2i_2} & \\cdots & A^{2i_b} \\\\\n \\vdots & \\vdots & & \\vdots \\\\\n A^{(\\delta - 1) i_1} &\n A^{(\\delta - 1) i_2} &\n \\cdots &\n A^{(\\delta - 1) i_b} \\\\\n \\end{pmatrix}\n \\end{equation*}\n does not have full rank, which is absurd. \n\\end{proof}\n\n\\begin{example}\n Consider the $3$-quasi-BCH codes of length $63$ over $\\mathbb{F}_2$ with designed\n minimum distance $6$, defined by primitive $21$-st roots of unity in\n $M_3(\\mathbb{F}_{2^2})$. In other words, $q = 2,m = 21,\\ell = 3,s = 2$ and\n $\\delta = 6$. There are $22$ non-equivalent codes splitting as\n follows:\n \\begin{equation*}\n \\begin{array}{|c|c|}\n \\hline\n \\text{Number of codes} & \\text{Parameters} \\\\\n \\hline\n 2 & [63,33,6]_{\\mathbb{F}_2} \\\\\n \\hline\n 18 & [63,33,7]_{\\mathbb{F}_2} \\\\\n \\hline\n 2 & [63,36,6]_{\\mathbb{F}_2} \\\\\n \\hline\n \\end{array}\n \\end{equation*}\n Notice that their dimension is always at least\n $(m - s(\\delta - 1))\\ell = 33$ and their minimum distance is\n at least $\\delta = 6$. All the computations have been performed\n with the \\textsc{magma} computer algebra system \\cite{magma}.\n\\end{example}\n\n\\begin{example}\n Let $q = 5,m = 7,\\ell = 3,s = 2$ and $\\delta = 3$. 
Let\n $\\omega \\in \\mathbb{F}_{5^2}$ be a primitive $(5^2 - 1)$-th root of unity and\n \\begin{equation*}\n A=\\begin{pmatrix}\n \\omega^9 & \\omega^4 & \\omega^{22} \\\\\n \\omega^{11} & \\omega^{11} & \\omega^{15} \\\\\n \\omega^2 & \\omega^{19} & 1 \n \\end{pmatrix}\n \\in M_3(\\mathbb{F}_{5^2}).\n \\end{equation*}\n Then the left $3$-quasi-BCH code of length $21$ with respect\n to $A$ with designed minimum distance $3$ over $\\mathbb{F}_5$ has parameters\n $[21,9,7]_{\\mathbb{F}_5}$. Its generator polynomial is given by\n \\begin{multline*}\n g(X) = \n \\begin{pmatrix} 1 & 4 & 3 \\\\ 3 & 3 & 4 \\\\ 1 & 1 & 4 \\end{pmatrix} X^4 +\n \\begin{pmatrix} 4 & 0 & 0 \\\\ 4 & 0 & 0 \\\\ 4 & 0 & 4 \\end{pmatrix} X^3 +\n \\begin{pmatrix} 3 & 0 & 4 \\\\ 0 & 3 & 4 \\\\ 0 & 0 & 0 \\end{pmatrix} X^2 +\\\\\n \\begin{pmatrix} 2 & 3 & 2 \\\\ 4 & 4 & 4 \\\\ 3 & 1 & 1 \\end{pmatrix} X +\n \\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix}\n \\in M_3(\\mathbb{F}_5)[X].\n \\end{multline*}\n\\end{example}\n\n\n\\subsection{The one-to-one correspondence}\n\nIt is well-known \\cite[Theorem~1, page~190]{SloMacWil86} that there\nis a one-to-one correspondence between cyclic codes of length $n$\nover $\\mathbb{F}_q$ and monic factors of $X^n - 1 \\in \\mathbb{F}_q[X]$ \\emph{i.e.}\nideals of $\\mathbb{F}_q[X] \/ (X^n - 1)$.\nIn \\cite{CayChaAbd2010,Chabot2011} the authors start to exhibit\nsuch a correspondence for quasi-cyclic codes. They show that there\nis a correspondence between a subfamily of $\\ell$-quasi-cyclic codes\nof length $m\\ell$ over $\\mathbb{F}_q$ and reversible factors of\n$X^m - 1 \\in M_{\\ell}(\\mathbb{F}_q)[X]$.\n\nThe one-to-one correspondence between $\\ell$-quasi-cyclic codes\nand left ideals of $M_{\\ell}(\\mathbb{F}_q)[X] \/ (X^m - 1)$ is a consequence\nof the following two lemmas.\n\n\\begin{lemma}\n\\label{lem:mod_gen}\n Let $R$ be a commutative principal ring and $M$ be a free left\n module of finite rank $s$ over $R$. 
Then every submodule $N$ of $M$\n can be generated by at most $s$ elements.\n\\end{lemma}\n\n\\begin{proof}\n It is an easy adaptation of the proof of\n \\cite[Theorem~7.1, page~146]{Lang2002}.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:morita}\n Let $s$ be a positive integer and $R$ be a commutative principal\n ring. Then there is a one-to-one correspondence between the\n submodules of $R^s$ and the left ideals of $M_s(R)$.\n\\end{lemma}\n\n\\begin{proof}\n Note that this is a particular case of the Morita equivalence\n for modules. See for example\n \\cite[n\\textsuperscript{o}4, page~99]{BourbakiAlgCom8}. This\n particular case can be proved directly.\n To a submodule $N \\subseteq R^s$, we can build a left ideal\n of $M_s(R)$ whose elements have rows in $N$. Conversely,\n to a left ideal $I \\subseteq M_s(R)$ we associate the\n submodule of $R^s$ generated by all the rows of all the elements\n of $I$. It is straightforward to check that these maps are\n inverse to each other.\n\\end{proof}\n\nNote that $M_{\\ell}(\\mathbb{F}_q)[X] \/ (X^m - 1)$ and\n$M_{\\ell}(\\mathbb{F}_q[X] \/ (X^m - 1))$ are isomorphic as rings and that\n$R = \\mathbb{F}_q[X] \/ (X^m - 1)$ is a commutative principal ring. By\nLemma~\\ref{lem:mod_gen} any submodule of $R^{\\ell}$ can be generated\nby at most $\\ell$ elements. Therefore by Lemma~\\ref{lem:morita}\nany left ideal of $M_{\\ell}(R) = M_{\\ell}(\\mathbb{F}_q)[X] \/ (X^m - 1)$ is\nprincipal.\n\n\\begin{theorem}\n\\label{thm:one-to-one}\n There is a one-to-one correspondence between $\\ell$-quasi-cyclic\n codes over $\\mathbb{F}_q$ of length $m\\ell$\n and left ideals of $M_{\\ell}(\\mathbb{F}_q)[X] \/ (X^m - 1)$.\n\\end{theorem}\n\n\\begin{proof}\n Let $g = (g_{11},\\ldots,g_{1\\ell},g_{21},\\ldots,g_{2\\ell},\\ldots,\n g_{m1},\\ldots,g_{m\\ell}) \\in \\mathbb{F}_q^{m\\ell}$. 
We associate\n to $g$ the element $\\varphi(g) \\in (\\mathbb{F}_q[X] \/ (X^m - 1))^{\\ell}$\n defined by\n \\begin{multline*}\n \\varphi(g) =\n \\left(\n g_{11} + g_{21} X + \\cdots + g_{m1} X^{m - 1} ;\n \\right. \\\\\n g_{12} + g_{22} X + \\cdots + g_{m2} X^{m - 1} ; \\ldots ; \\\\\n \\left.\n g_{1\\ell} + g_{2\\ell} X + \\cdots + g_{m\\ell} X^{m - 1}\n \\right).\n \\end{multline*}\n Then $\\varphi$ induces a one-to-one correspondence between\n $\\ell$-quasi-cyclic codes of length $m\\ell$ over $\\mathbb{F}_q$\n and submodules of $(\\mathbb{F}_q[X] \/ (X^m - 1))^{\\ell}$.\n The theorem follows by Lemma~\\ref{lem:morita}.\n\\end{proof}\n\nLet $\\pr_{i,j}$ be the projection of the $i,i + 1,\\ldots,j$ coordinates:\n\\begin{equation*}\n \\begin{array}{rcl}\n \\pr_{i,j} : \\mathbb{F}_q^n & \\longrightarrow & \\mathbb{F}_q^{j - i + 1} \\\\\n (x_1,\\ldots,x_n) & \\longmapsto &\n (x_i,x_{i + 1},\\ldots,x_{j - 1},x_j).\n \\end{array}\n\\end{equation*}\nWe have the following obvious lemma:\n\n\\begin{lemma}\n\\label{LemmaBlockRank}\n Let $\\mathcal{C}$ be an $\\ell$-quasi-cyclic code over $\\mathbb{F}_q$ of dimension $k$\n and length $m\\ell$.\n Then there exists an integer $r$ such that $1 \\leq r \\leq k$ and\n for any generator matrix $G$ of $\\mathcal{C}$ and $0 \\leq i \\leq m - 1$,\n the rank of the $i\\ell + 1,i\\ell + 2,\\ldots,(i + 1)\\ell$ columns of\n $G$ is $r$.\n\\end{lemma}\n\n\n\\begin{definition}[Block rank]\n Taking the notation of Lemma~\\ref{LemmaBlockRank},\n we call the integer $r$ the \\emph{block rank} of $\\mathcal{C}$. 
Note that\n $r$ depends only on $\\mathcal{C}$ and not on any particular generator matrix\n of $\\mathcal{C}$.\n\\end{definition}\n\n\\subsection{The generator polynomial of an $\\ell$-quasi-cyclic code}\n\nIn this subsection we fix an $\\ell$-quasi-cyclic code $\\mathcal{C}$ over $\\mathbb{F}_q$.\nIf $\\ell = 1$, then $\\mathcal{C}$ is a cyclic code of length $n$ and a\ngenerator matrix of $\\mathcal{C}$ can be given\n\\cite[Theorem~1,~(e), page~191]{SloMacWil86} by\n\\begin{equation}\n\\label{equ:cyclic-gen}\n \\begin{pmatrix}\n g(X) & & & \\\\\n & Xg(X) & & \\\\\n & & \\dots & \\\\\n & & & X^{n - \\deg g}g(X)\n \\end{pmatrix},\n\\end{equation}\nwhere $g(X) \\in \\mathbb{F}_q[X]$ is the generator polynomial of $\\mathcal{C}$.\nThe block rank of $\\mathcal{C}$ is $1$ and we see that we can write\na generator matrix of $\\mathcal{C}$ with only $1$ vector and its shifts\n(by $T^{\\ell} = T$). The natural generalization of this result\nfor quasi-cyclic codes is obtained using the block rank.\n\nLet $r$ be the block rank of $\\mathcal{C}$. The following algorithm computes\na basis of $\\mathcal{C}$ from $r$ vectors of $\\mathcal{C}$ and their shifts.\nWe call the \\emph{first index} of a nonzero vector\n$x = (x_1,\\ldots,x_{m\\ell})$ the least integer $0 \\leq i \\leq m - 1$\nsuch that \\mbox{$(x_{i\\ell + 1},\\ldots,x_{(i + 1)\\ell}) \\neq 0$} and denote\nit by $\\first(x) = \\first(x_1,\\ldots,x_{m\\ell})$.\nLet\n\\begin{equation*}\n \\begin{array}{rlc}\n p:\\mathbb{F}_q^{m\\ell} &\\longrightarrow & \\mathbb{F}_q^{\\ell}\\\\\n x = (x_1,\\ldots,x_{m\\ell}) & \\longmapsto &\n (x_{i\\ell + 1},\\ldots,x_{(i + 1)\\ell}),\n \\end{array}\n\\end{equation*}\nwhere $i = \\first(x_1,\\ldots,x_{m\\ell})$ if $x \\neq 0$ and $p(0) = 0$.\n\n\\begin{algorithm}\n\\label{al:basis-block-rank}\n\\caption{Basis computation with the block rank}\n\\begin{algorithmic}[1]\n\\REQUIRE A generator matrix $G$ of $\\mathcal{C}$.\n\\ENSURE A generator matrix formed by $r$ rows from $G$\n and some of their
shifts.\n\\STATE $G' \\gets $ a row echelon form of $G$.\n\\STATE Denote by $g_1,\\ldots,g_k$ the rows of $G'$.\n\\STATE $M \\gets \\max \\{ \\first(g_i) : i \\in \\{ 1,\\ldots,k \\} \\}$.\n\\STATE $B_M' \\gets \\emptyset$.\n\\STATE $G_{M + 1} \\gets \\emptyset$.\n\\FOR{$j = M \\to 0$}\n \\STATE $B_j \\gets$ $\\{ g_i : i \\in \\{ 1,\\ldots,k \\}\n \\text{ and } \\first(g_i) = j \\}$.\n \\FOR{each element $x$ of $B_j$}\n \\IF{$p(B_j') \\cup \\{ p(x) \\}$ is linearly independent}\n \\STATE $B_j' \\gets B_j' \\cup \\{ x \\}$.\n \\ENDIF\n \\ENDFOR\n \\STATE $G_j \\gets G_{j + 1} \\cup B_j'$.\n \\STATE $B_{j - 1}' \\gets T^{\\ell}(B_j')$.\n\\ENDFOR\n\\RETURN $G_0$.\n\\end{algorithmic}\n\\end{algorithm}\n\nNote that Algorithm~\\ref{al:basis-block-rank} applied to a cyclic\ncode, \\emph{i.e.} $\\ell = 1$, returns exactly the\nmatrix~\\eqref{equ:cyclic-gen} and we can deduce the generator\npolynomial of $\\mathcal{C}$ at the cost of computing a row echelon\nform of any generator matrix of $\\mathcal{C}$.\n\n\\begin{proposition} \n Algorithm~\\ref{al:basis-block-rank} works correctly as expected and\n returns a generator matrix $G$ of $\\mathcal{C}$ made of $r$ linearly\n independent vectors of $\\mathcal{C}$ and some of their shifts.\n\\end{proposition}\n\n\\begin{proof}\n We will prove by descending induction on $j$ that:\n \\begin{enumerate}\n \\item $B_j' \\supseteq T^{\\ell} (B_{j + 1}')\n \\supseteq \\dots\n \\supseteq T^{(M - j)\\ell} (B_M')$.\n \\item $\\#B_j' \\leq r$.\n \\item The vectors of $B_j'$ are linearly independent.\n \\item The vectors of $G_j$ are linearly independent.\n \\item $\\gen{G_j} = \\gen{g_i : i \\in \\{ 1,\\ldots,k \\}\n \\text{ and } \\first(g_i) \\geq j }$.\n \\end{enumerate}\n Let $j = M$. By step~3, we have $B_M \\neq \\emptyset$. Item~1\n is trivially satisfied. By Lemma~\\ref{LemmaBlockRank},\n $\\#B_M \\leq r$ and item~2 is satisfied.
As\n $G_{M + 1} = B_M' = \\emptyset$ then\n $G_M = B_M' = B_M = \\{ g_i : i \\in \\{ 1,\\ldots,k \\}\n \\text{ and } \\first(g_i) \\geq M \\}$ and\n items~3 to~5 are satisfied.\n\n Suppose that $j < M$ and that items~1 to~5 are satisfied for\n $i = j + 1,\\ldots,M$. First note that $B_j \\neq \\emptyset$.\n If we had $B_j = \\emptyset$ then, as $G'$ is in row echelon form,\n $g_1,\\ldots,g_k,T^{(M - j)\\ell}(g_k)$ would be linearly independent\n which is a contradiction.\n\n Items~1 and~3 are satisfied by steps~7, 9 and~10 of the algorithm. By\n Lemma~\\ref{LemmaBlockRank} and step~9, item~2 is satisfied. For all\n $x \\in G_{j + 1}$, we have $\\first(x) \\geq j + 1$, thus, by\n item~3, the elements of $G_j$ are linearly independent and item~4\n is satisfied. Let $g$ be a vector of $G'$ such that $\\first(g) = j$, then\n the construction of $B_j'$ implies that we have\n \\begin{equation*}\n \\first \\left( g - \\sum_{u \\in B_j'} \\mu_u u \\right) \\geq j + 1\n \\end{equation*}\n where $\\mu_u \\in \\mathbb{F}_q$ for $u \\in B_j'$. Then by item~5 of the\n inductive hypothesis, we have\n \\begin{equation*}\n \\left( g - \\sum \\mu_u u \\right) \\in G_{j + 1}.\n \\end{equation*}\n Thus we have\n $\\gen{G_j} = \\gen{ g_i : i \\in \\{1,\\ldots,k\\}\n \\text{ and } \\first(g_i) \\geq j }$ and item~5\n is satisfied.\n\n As a consequence of the previous induction, $G_0$ is constituted of\n linearly independent vectors and generates\n $\\gen{ g_i : i \\in \\{1,\\ldots,k\\} \\text{ and } \\first(g_i) \\geq 0 }\n = \\mathcal{C}$ by item~5. By Lemma~\\ref{LemmaBlockRank} we must have\n exactly $r$ vectors $g \\in G_0$ such that $\\first(g) = 0$. 
Thus\n by items~1 and~2 we have\n \\begin{equation*}\n r = \\#B_0' = \\sum_{\\lambda = 0}^M\n \\#\\left(\n B_{\\lambda}' \\setminus T^{\\ell} (B_{\\lambda + 1}')\n \\right)\n \\end{equation*}\n which shows that $G_0$ is constituted of $r$ linearly independent\n vectors of $\\mathcal{C}$ and some of their shifts.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor:gen}\n There exist $g_1,\\ldots,g_r$ linearly independent vectors of $\\mathcal{C}$\n such that $\\gggg$ span $\\mathcal{C}$.\n If we denote by $g_{i,j}$ the $j$'th coordinate of $g_i$ and let\n \\begin{equation*}\n G_i =\n \\begin{pmatrix}\n g_{1,i\\ell + 1} & \\dots & g_{1,(i + 1)\\ell} \\\\\n \\vdots & & \\vdots \\\\\n g_{r,i\\ell + 1} & \\dots & g_{r,(i + 1)\\ell} \\\\\n & 0 &\n \\end{pmatrix}\n \\in M_{\\ell}(\\mathbb{F}_q)\n \\end{equation*}\n and\n \\begin{equation*}\n g(X) = \\frac{1}{X^{\\nu}} \\sum_{i = 0}^{m - 1} G_i X^i\n \\in M_{\\ell}(\\mathbb{F}_q)[X],\n \\end{equation*}\n where $\\nu$ is the least integer such that $G_{\\nu} \\neq 0$,\n then $\\mathcal{C}$ corresponds to the left ideal $\\gen{g(X)}$\n by Theorem~\\ref{thm:one-to-one}.\n\\end{corollary}\n\n\\begin{corollary}\n Taking the notation of the proof of Theorem~\\ref{thm:one-to-one},\n the submodule $\\varphi(\\mathcal{C}) \\subseteq (\\mathbb{F}_q[X] \/ (X^m - 1))^{\\ell}$\n is generated by $r$ elements as an $\\mathbb{F}_q[X] \/ (X^m - 1)$-module\n but cannot be generated by fewer than $r$ elements.
If $\\mathcal{C}$ is a\n cyclic code then we have $r = 1$ and we find the classical result\n about cyclic codes.\n\\end{corollary}\n\n\\begin{definition}[Generator polynomial]\n The polynomial $g(X) \\in M_{\\ell}(\\mathbb{F}_q)[X]$ from\n Corollary~\\ref{cor:gen} is called a \\emph{generator polynomial}\n of $\\mathcal{C}$.\n\\end{definition}\n\n\\begin{example}\n Let $I = \\gen{P(X),Q(X)} \\subset M_3(\\mathbb{F}_4)[X]\/(X^5-1)$ be a left ideal.\n The row echelon form generator matrix of the $3$-quasi cyclic code\n $\\mathcal{C}_I$ associated to the left ideal $I$ is \n \\begin{equation*}\n G = \\left( \\begin{array}{ccc|ccc|ccc|ccc|ccc}\n 1 & 0 & \\omega^2 &\n 0 & 0 & 0 &\n 0 & \\omega^2 & \\omega &\n \\omega & 0 & 1 &\n 0 & 0 & 0 \\\\\n \n 0 & 1 & \\omega^2 &\n 0 & 0 & 0 &\n 0 & 0 & 0 &\n \\omega & \\omega & 0 &\n 1 & 0 & \\omega^2 \\\\\n \n \\hline\n \n 0 & 0 & 0 &\n 1 & 0 & \\omega^2 &\n 0 & 0 & 0 &\n 0 & \\omega^2 & \\omega &\n \\omega & 0 & 1 \\\\\n \n 0 & 0 & 0 &\n 0 & 1 & \\omega^2 &\n 0 & \\omega^2 & \\omega &\n \\omega & 0 & 1 &\n \\omega & \\omega & 0 \\\\\n \n \\hline\n \n 0 & 0 & 0 &\n 0 & 0 & 0 &\n 1 & 1 & 0 &\n \\omega^2 & 0 & \\omega &\n 0 & \\omega^2 & \\omega \n \\end{array} \\right).\n \\end{equation*}\n Algorithm~\\ref{al:basis-block-rank} gives that\n $(g_4,g_5,T^3(g_4),T^3(g_5),T^{2 \\times 3}(g_5))$ is a basis of\n $\\mathcal{C}_I$. 
Moreover\n \\begin{equation*}\n g(X) =\n \\begin{pmatrix}\n 0 & 1 & \\omega^2 \\\\\n 0 & 0 & 0 \\\\\n 0 & 0 & 0\n \\end{pmatrix} +\n \\begin{pmatrix}\n 0 & \\omega^2 & \\omega \\\\\n 1 & 1 & 0 \\\\\n 0 & 0 & 0\n \\end{pmatrix} X +\n \\begin{pmatrix}\n \\omega & 0 & 1 \\\\\n \\omega & 0 & \\omega \\\\\n 0 & 0 & 0\n \\end{pmatrix} X^2 +\n \\begin{pmatrix}\n \\omega & \\omega & 0 \\\\\n 0 & \\omega^2 & \\omega \\\\\n 0 & 0 & 0\n \\end{pmatrix} X^3\n \\end{equation*}\n is a generator polynomial of $\\mathcal{C}_I$ and\n $I=\\gen{P(X),Q(X)} = \\gen{g(X)}$.\n\\end{example}\n\n\\subsection{A property of generator polynomials}\n\nThe following proposition generalizes\n\\cite[Theorem~1,~(c), page~190]{SloMacWil86} and\n\\cite[Theorem~4, page~196]{SloMacWil86}.\n\n\\begin{proposition}\n Let $\\mathcal{C}$ be an $\\ell$-quasi-cyclic code of length $m\\ell$ over $\\mathbb{F}_q$.\n Let $P(X)$ be a generator polynomial of $\\mathcal{C}$ and $Q(X)$ a\n generator polynomial of its dual.\n Then \n \\begin{equation*}\n P(X) \\left( ^t Q^{\\star}(X) \\right) = 0 \\pmod{X^m - 1}\n \\end{equation*}\n where $Q^{\\star}$ denotes the reciprocal polynomial of $Q$ and $^t Q$\n the polynomial whose coefficients are the transposed matrices of\n the coefficients of $Q$.\n\\end{proposition}\n\n\\begin{proof}\n Since $P(X) = \\sum_{i = 0}^{m - 1} P_i X^i$ is a generator\n polynomial of $\\mathcal{C}$, the rows of the matrix\n \\begin{equation*}\n \\begin{pmatrix} P_0 & P_1 & \\ldots & P_{m - 1} \\end{pmatrix}\n \\end{equation*}\n and their shifts span $\\mathcal{C}$.\n Similarly $Q(X) = \\sum_{i = 0}^{m - 1} Q_i X^i$ and the rows of\n \\begin{equation*}\n \\begin{pmatrix} Q_0 & Q_1 & \\ldots & Q_{m - 1} \\end{pmatrix}\n \\end{equation*}\n and their shifts span $\\mathcal{C}^{\\perp}$.\n By definition of a dual code, we have\n \\begin{equation*}\n \\begin{pmatrix} P_0 & P_1 & \\cdots & P_{m-1} \\end{pmatrix}\n \\begin{pmatrix} ^t Q_0 \\\\ ^t Q_1 \\\\ \\vdots \\\\ ^t Q_{m-1} \\end{pmatrix}\n = 
\\sum_{i=0}^{m-1} P_i \\left( ^t Q_i \\right) = 0.\n \\end{equation*}\n As $\\mathcal{C}$ and $\\mathcal{C}^{\\perp}$ are $\\ell$-quasi-cyclic codes we also have\n \\begin{equation*}\n \\sum_{i = 0}^{m - 1} P_i \\left( ^t Q_{i + j \\mod m} \\right) = 0\n \\end{equation*}\n for all $j \\in \\mathbb{Z}$. Therefore \n \\begin{equation*}\n P(X) \\left( ^t Q^{\\star}(X) \\right) =\n \\sum_{j = 0}^{m-1} \\sum_{i = 0}^{m - 1}\n P_i \\left( ^tQ_{i - j \\mod m} \\right) X^j\n = 0 \\mod (X^m-1).\n \\end{equation*}\n Hence the proposition.\n\\end{proof}\n\n\n\n\\subsection{The key equation}\n\nAs in the scalar case, we exhibit a key equation for quasi-BCH codes.\nIn this subsection, all vectors are considered to be single-column\nmatrices. Consider $\\mathbb{F}_q^{\\ell}$ as a product ring of $\\ell$ copies of\n$\\mathbb{F}_q$. We define a map\n\\begin{equation*}\n \\begin{array}{rcl}\n \\Psi : M_{\\ell}(\\mathbb{F}_{q^s})[[X]] \\times \\mathbb{F}_q^{\\ell}[[X]] &\n \\rightarrow & \\mathbb{F}_{q^s}^{\\ell}[[X]] \\\\\n (f,g) & \\mapsto & \\sum_{i,j} f_j g_i X^{i + j}\n \\end{array}\n\\end{equation*}\nwhere the $f_j g_i$ are matrix-vector products. In the sequel we will\ndenote $\\Psi(f,g)$ simply by $f \\mathbin{\\diamond} g$. Note that we have\n$(fh) \\mathbin{\\diamond} g = f \\mathbin{\\diamond} (h \\mathbin{\\diamond} g)$ for any $h \\in M_{\\ell}(\\mathbb{F}_{q^s})$.\n\nLet $c$ be a codeword of $\\mathcal{C}$ sent over a channel,\n$y \\in (\\mathbb{F}_q^{\\ell})^m$ be the received word and let $e$ be\nthe error vector, \\emph{i.e.} $e = y - c$, such that\n$\\w(e) = w \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$.
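As an illustration, the product $\mathbin{\diamond}$ can be sketched in code. This is a minimal sketch in which plain integer arithmetic stands in for $\mathbb{F}_{q^s}$ and polynomials are coefficient lists; all function names are illustrative:

```python
# Sketch of the product f ⋄ g between a matrix polynomial f (a list of
# ℓ×ℓ matrices f[j]) and a vector polynomial g (a list of length-ℓ
# vectors g[i]); integer arithmetic stands in for the field F_{q^s}.

def mat_vec(M, v):
    """Matrix-vector product M v."""
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(v))]

def mat_mul(A, B):
    """Matrix-matrix product A B (square matrices)."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def diamond(f, g):
    """Coefficient list of f ⋄ g = sum_{i,j} f_j g_i X^(i+j)."""
    ell = len(g[0])
    res = [[0] * ell for _ in range(len(f) + len(g) - 1)]
    for j, fj in enumerate(f):
        for i, gi in enumerate(g):
            w = mat_vec(fj, gi)
            for r in range(ell):
                res[i + j][r] += w[r]
    return res

# The identity (fh) ⋄ g = f ⋄ (h ⋄ g) for a constant matrix h,
# where h ⋄ g views h as a degree-zero matrix polynomial:
f = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]   # f_0 + f_1 X
h = [[2, 0], [1, 1]]
g = [[1, 1], [0, 2]]                        # g_0 + g_1 X
assert diamond([mat_mul(fj, h) for fj in f], g) == diamond(f, diamond([h], g))
```

The final assertion checks the stated identity $(fh) \mathbin{\diamond} g = f \mathbin{\diamond} (h \mathbin{\diamond} g)$ on a small example.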
Let\n$W = \\supp(e) = \\{ i_1,\\ldots,i_w \\}$.\n\n\\begin{definition}[Locator and evaluator polynomials]\n We define the \\emph{locator polynomial} by\n \\begin{equation*}\n \\Lambda(X) := \\prod_{i \\in W} (1- A^i X)\n \\in M_{\\ell}(\\mathbb{F}_{q^s})[X]\n \\end{equation*}\n and the \\emph{evaluator polynomial} by\n \\begin{equation*}\n L(X) := \\sum_{i \\in W}\n \\left(\n \\prod_{j \\neq i}^{w}\n A^i (1 - A^j X)\n \\right) \\diamond y_i\n \\in \\mathbb{F}_{q^s}^{\\ell}[X].\n \\end{equation*}\n\\end{definition}\n\n\\begin{lemma}\n \\label{lem:invertSerie}\n Let $B \\in M_\\ell(\\mathbb{F}_q)$ be a nonzero matrix. Then $1 - BX$ has a\n left and right inverse in $M_\\ell(\\mathbb{F}_q)[[X]]$, both equal to\n \\begin{equation*}\n \\sum_{j = 0}^{+\\infty} B^j X^j.\n \\end{equation*}\n\\end{lemma}\n\nWe see that the locator polynomial $\\Lambda(X)$ is invertible in\nthe power series ring $M_{\\ell}(\\mathbb{F}_{q^s})[[X]]$ and we have\n\n\\begin{align*}\n \\left( \\Lambda(X)^{-1} \\right) \\diamond L(X) &=\n \\sum_{i \\in W} \\left(\n A^i (1 - A^i X)^{-1}\n \\right) \\mathbin{\\diamond} y_i \\\\\n &= \\sum_{i \\in W} \\left(\n \\sum_{j = 0}^{+\\infty} A^{i(j + 1)} X^j\n \\right) \\mathbin{\\diamond} y_i \\\\\n &= \\sum_{j = 0}^{+\\infty}\n \\sum_{i \\in W} A^{i(j + 1)} y_i X^j.\n\\end{align*}\n\nUsing the fact that $y = c + e$ and that, by definition,\n$\\mathcal{S}_{A^i}(y) = \\mathcal{S}_{A^i}(e)$ for any $i = 0,\\ldots,\\delta - 1$,\nwe have\n\n\\begin{equation*}\n \\left( \\Lambda(X)^{-1} \\right) \\diamond L(X) =\n \\sum_{j = 0}^{+\\infty} \\mathcal{S}_{A^{j + 1}}(e) X^j\n := S_{\\infty}(X).\n\\end{equation*}\n\n\\begin{proposition}\n\\label{prop:KE}\n For any error vector $e \\in \\mathbb{F}_q^{m\\ell}$ such that\n $w(e) \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$ we have\n \\begin{center}\n \\fbox{$\\Lambda(X) \\mathbin{\\diamond} S_{\\infty}(X) = L(X)$}\n \\end{center}\n and therefore\n \\begin{equation}\n \\label{equ:KE}\n \\Lambda(X) \\mathbin{\\diamond} 
S_{\\infty}(X) \\equiv L(X) \\mod X^{\\delta}.\n \\end{equation}\n We will refer to~\\eqref{equ:KE} as the \\emph{key equation}.\n\\end{proposition}\n\n\\subsubsection{Problems solving the key equation}\n\\label{sss:solving_ke}\n\nIn the case of BCH codes, the extended Euclidean and Berlekamp-Massey\nalgorithms can be used to solve the key equation.\nWe denote by $S_{\\delta}(X)$ the polynomial\n$S_{\\infty}(X) \\mod X^{\\delta}$ from~\\eqref{equ:KE} which can\nbe written as\n\n\\begin{equation}\n\\label{equ:lambda_L}\n \\begin{pmatrix}\n \\Lambda_0 & \\dots & \\Lambda_{\\delta - 1} &\n \\vline & L_0 & \\dots & L_{\\delta - 1}\n \\end{pmatrix}\n \\begin{pmatrix}\n S_0 & S_1 & \\dots & S_{\\delta - 1} \\\\\n & S_0 & & \\vdots \\\\\n & & \\ddots & \\vdots \\\\\n & & & S_0 \\\\\n \\hline\n -1 & 0 & \\dots & 0 \\\\\n 0 & -1 & & \\vdots \\\\\n \\vdots & & \\ddots & 0 \\\\\n 0 & \\dots & 0 & -1\n \\end{pmatrix}\n = 0.\n\\end{equation}\n\nWhere the $S_i$'s and $L_i$'s are column vectors such that\nthe $S_i$'s are the coefficients of $S_{\\delta}$ in\n$\\mathbb{F}_{q^s}^{\\ell}$ and the $L_i$'s are\nthe coefficients in $\\mathbb{F}_{q^s}^{\\ell}$ of $L(X)$.\nThe $\\Lambda_i$'s are the coefficients of $\\Lambda(X)$ in\n$M_{\\ell}(\\mathbb{F}_{q^s})$.\nThis system of linear equations over $\\mathbb{F}_{q^s}$ has many solutions in\n$\\mathbb{F}_{q^s}$ since there are $\\ell\\delta + \\delta$ unknowns and only\n$\\delta$ equations for each row of\n\\begin{equation*}\n \\begin{pmatrix}\n \\Lambda_0 & \\dots & \\Lambda_{\\delta - 1} &\n \\vline & L_0 & \\dots & L_{\\delta - 1}\n \\end{pmatrix}.\n\\end{equation*}\nHowever, we are only interested in the solution such that\n$(\\Lambda_0,\\ldots,\\Lambda_{\\delta - 1})$ is an error locator\npolynomial. 
In other words, if we let $\\mathfrak{B}$ be the\nsolutions of~\\eqref{equ:lambda_L} and\n\\begin{equation*}\n \\mathfrak{S} = \\left\\{\n \\prod_{i \\in W} (1- A^i X) \\in M_{\\ell}(\\mathbb{F}_{q^s})[X] :\n W \\subset \\{ 1,\\ldots,m \\} \\text{ and }\n \\#W \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor\n \\right\\}\n\\end{equation*}\nbe the set of all possible locator polynomials corresponding\nto errors of weight at most $\\lfloor (\\delta - 1) \/ 2 \\rfloor$,\nwe are interested in the elements of $\\mathfrak{B} \\cap \\mathfrak{S}$.\n\n\\begin{proposition}\n There exists one and only one solution of\n equation~\\eqref{equ:lambda_L} in $\\mathfrak{S}$.\n\\end{proposition}\n\n\\begin{proof}\n Equation~\\eqref{equ:KE} ensures that there exists at least one element\n in $\\mathfrak{B} \\cap \\mathfrak{S}$. If there were more than one\n solution in $\\mathfrak{S}$, there would exist more than one codeword\n in a Hamming ball of radius $\\lfloor (\\delta - 1) \/ 2 \\rfloor$, which\n is absurd.\n\\end{proof}\n\nSolving~\\eqref{equ:lambda_L} remains difficult: one\nneeds a number of arithmetic operations\nin $\\mathbb{F}_{q^s}$ exponential in $\\ell\\delta$ to find the element of $\\mathfrak{B} \\cap \\mathfrak{S}$.\nFor small values of $q$, $\\ell$ and $\\delta$ the solution can be found\nby exhaustive search on the solutions of~\\eqref{equ:lambda_L}.\n\n\\subsubsection{Unambiguous decoding scheme}\n\nIn this subsection, we prove that, as in the BCH case, the roots of\nthe locator polynomial (in $\\mathbb{F}_{q^s}[A]$) give precious\ninformation about the location of errors. As the factorization of\npolynomials of $M_\\ell(\\mathbb{F}_{q^s})[X]$ is not unique, not all\nroots of the locator polynomial indicate an error position.\n\n\\begin{proposition}\n\\label{prop:equivRacineLoca}\n Let $e \\in \\mathbb{F}_q^{m\\ell}$ be an error vector such that\n $w(e) \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$ and $\\Lambda(X)$ be\n the locator polynomial associated to $e$.
We have \n \\begin{equation*}\n e_i \\neq 0 \\Longleftrightarrow \\Lambda(A^{-i}) = 0.\n \\end{equation*}\n\\end{proposition}\n\n\\begin{proof}\n By definition, we have $\\Lambda(A^{-i}) = 0$ if $e_i \\neq 0$.\n Conversely, if $e_i = 0$ then $A^j A^{-i} \\neq I_{\\ell}$ for\n $j \\in \\supp(e)$. Thus $1 - A^j A^{-i}$ is a unit in\n $\\mathbb{F}_{q^s}[A]$ by definition of $A$. Therefore\n $\\Lambda(A^{-i}) \\neq 0$.\n\\end{proof}\n \nThese roots can be found by an exhaustive search on the powers of $A$\nin at most $m$ attempts. At this step the support of the error vector\n$e$ is known. The last step to complete the decoding is to find the\nvalue of the error.\n\n\\begin{proposition}\n\\label{prop:errorEvaluation}\n Let $e \\in \\mathbb{F}_q^{m\\ell}$ be an error such that\n $w(e) \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$, \\mbox{$W=\\supp(e)$}, $\\Lambda(X)$ be the\n locator and $L(X)$ be the evaluator polynomials associated to\n $e$. If $A^{-i}$ is a root of $\\Lambda(X)$ for $ i \\in W$,\n then \n \\begin{equation*}\n e_i = \\prod_{j \\in W \\setminus \\{i\\}}\n (A^{i} - A^{j})^{-1} L(A^{-i})\n \\end{equation*}\n where $L(A^j)$ denotes $\\sum (A^j)^i L_i$.\n\\end{proposition}\n\n\\begin{proof}\n Let $i_0 \\in W$. 
We have\n \\begin{align*}\n L(A^{-i_0}) &= \\sum_{i \\in W} \\prod_{j \\in W \\setminus \\{i\\}}\n A^{i} (1 - A^{-i_0} A^{j}) y_i\\\\\n &= \\prod_{j \\in W \\setminus \\{i_0\\}}\n A^{i_0} (1 - A^{-i_0} A^j) e_{i_0}\\\\\n &= \\prod_{j \\in W \\setminus \\{i_0\\}}\n (A^{i_0} - A^j) e_{i_0}.\n \\end{align*}\n By definition of $A$, $A^{i_0} - A^{j}$ is invertible for all\n $j \\in W$, hence the result.\n\\end{proof}\n\n\\begin{algorithm}\n\\label{al:DecodingQCBCH}\n\\caption{Decoding algorithm for quasi-BCH codes}\n\\begin{algorithmic}\n \\REQUIRE{The received word $y = c + e$ where\n $c \\in \\mathcal{C}$ and $w(e) \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$.}\n \\ENSURE{The codeword $c$, if it exists such that\n $d(y,c) \\leq \\lfloor (\\delta - 1) \/ 2 \\rfloor$.}\n \\STATE $S_{\\delta}(X) \\gets$ Syndrome of $y$.\n \\STATE Compute $\\Lambda(X)$ and $L(X)$\n (Subsection~\\ref{sss:solving_ke}).\n \\STATE $\\mathfrak{R} \\gets$ roots of $\\Lambda(X)$ in $\\mathbb{F}_{q^s}[A]$.\n \\STATE $W \\gets \\{i | A^{-i} \\in \\mathfrak{R} \\}$.\n \\STATE $\\zeta \\gets (0,\\ldots,0)$.\n \\FOR{$i \\in W$}\n \\STATE $\\zeta_i = \\prod_{j \\in W \\setminus \\{i\\}}\n (A^{i} - A^{j})^{-1} L(A^{-i})$.\n \\ENDFOR\n \\RETURN $y - \\zeta$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\n\\section{Introduction}\n\\label{Sec:Intro}\n\n \\subsection{Context}\n \\input{Intro}\n\n \\subsection{First definitions}\n \\label{Ssec:Rappel}\n \\input{Rappel}\n\n\\section{Properties of quasi-cyclic codes}\n\\label{Sec:Classification}\n\\input{Classification}\n\n\\section{Quasi-BCH}\n\\label{Sec:BCH}\n\\input{BCH}\n\n\\section{Decoding scheme for quasi-BCH codes}\n\\label{Sec:Key}\n\\input{KE}\n\n\\section{Evaluation codes}\n\\label{Sec:Eval}\n\\input{eval}\n\n\\section{Conclusion}\n\\label{Sec:Conlu}\n\\input{conclusion}\n\n\\section*{Acknowledgments} \n\\input{acknowledgements}\n\n\\bibliographystyle{plain}\n\n\n\\subsection{Definition and parameters}\n\nIn this subsection we generalize evaluation codes.
For any ring $R$\nand any positive integer $k$, we denote by $R[X]_{0.99$) overlap with the Laughlin state. We then evolve this state under $H_0+V_{\\rm C}$, and consider the overlap of the evolved state $\\Psi(t)$ with other trial wave functions, including the one for the quasihole state. In (a), we plot the maximally attained overlap between the evolved state $\\Psi(t)$ and the quasihole state $\\Psi_{\\rm qh}'$ (defined in Eq. (\\ref{qhprime})) as a function of the detuning $\\delta$ and the Rabi frequency $\\Omega$. In (b), we plot the overlap between $\\Psi(t)$ and different trial wave functions as a function of time. This includes the overlaps with the initial state $\\Psi(0)$ (blue dashed line), with the quasihole state $\\Psi_{\\rm qh}'$ (red dotted line), and with the model wave function $\\Psi_{\\rm model}(t)$ given in Eq. (\\ref{model}) (green solid line). Here we have chosen coupling parameters $\\Omega=0.2 e^2\/\\epsilon l_{\\rm B}$ and $\\delta=0.04 e^2\/\\epsilon l_{\\rm B}$. Time is given in units of the inverse of $\\Omega' = \\sqrt{\\Omega^2+\\delta^2}$.\n}\n\\end{figure}\n\nTo make this assessment more quantitative, we have numerically simulated the time evolution under $H=H_0+V_{\\rm C}$ for $N=5$ electrons, where the single-particle part $H_0$ is defined in Eq. (\\ref{H0}), and $V_{\\rm C}$ denotes Coulomb interactions. In the simulation, we have restricted the Hilbert space to the two coupled Landau levels, and \nthe angular momentum of the initial state fixes the quantum number $\\sum_i (m_i-\\ell n_i)$. For the initial state $\\Psi(0)$, we have chosen the ground state of $V_{\\rm C}$ within the $n=0$ level at fixed total angular momentum $L_N$. This state has large overlap ($\\sim0.99$) with the Laughlin state.
We then determine the overlap of the evolved state $\\Psi(t)$ with the state $\\Psi_{\\rm qh}' \\equiv \\prod_i a_i^\\dagger f_{\\rm qh}^0 \\Psi(0) $, that is, a state obtained from the initial state by introducing a quasihole and raising the Landau level index of all electrons. In Fig. \\ref{fig2}(a), we plot the maximally attained overlap during the course of the evolution as a function of the detuning $\\delta$ and Rabi frequency $\\Omega$. As a promising result, we find that the Rabi frequency does not have to be much larger than the many-body gap for the fidelity to reach values close to one. \nWe note that the many-body gap above the Laughlin phase is on the order $0.15 e^2\/\\epsilon l_{\\rm B}$. This value corresponds to 0.3 eV, if we assume a typical magnetic field strength of 10 teslas, and use the permittivity of the vacuum, $\\epsilon=\\epsilon_0$.\nOur numerical simulation also shows that the best choice for the detuning is not at resonance, but at about $\\delta=0.05~e^2\/\\epsilon l_{\\rm B}$, that is, for an optical frequency below the Landau level resonance. The value of the detuning roughly compensates the interaction energy difference when electrons are pumped into the quasihole state. Due to a larger total angular momentum in the quasihole state, the Coulomb repulsion in this state is decreased, and the many-body resonance is shifted away from the single-particle value.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig4.eps}\n\\caption{\\label{fig2qe} {\\bf Fidelity of the quasielectron pump.} The setup in this case is similar to Fig.~\\ref{fig2}, but the pump photons have orbital angular momentum $l=-1$. Moreover, we simulate pumping in the presence of an additional potential in the lowest Landau level acting on $m=0$ state, to initially remove the population of this state (see main text). (a) We plot the maximally attained overlap with the quasielectron state during a pumping cycle. 
(b) We plot the overlap of the time-evolved state with the quasielectron state (shifted into LL1) $\\Psi_{\\rm qe}'$ (red dotted line), the Laughlin state $\\Psi_{\\rm L}$ (blue dashed line), or a time-dependent model wave function $\\Psi_{\\rm model}(t)$ (green solid line). The coupling parameters are $\\Omega=0.4 e^2\/\\epsilon l_{\\rm B}$ and $\\delta=-0.1 e^2\/\\epsilon l_{\\rm B}$}.\n\\end{figure} \n\n\\paragraph{Spontaneous emission.} Spontaneous emission limits the lifetime of any state above the Fermi level. Therefore, we need to prepare the state of interest in the Landau level at the Fermi energy. This can be achieved by applying two subsequent $\\pi$-pulses, as shown in Fig. \\ref{fig:raman}: The first pulse, with orbital angular momentum $\\ell=1$, transfers the electrons into an excited Landau level, and simultaneously shifts the orbital quantum number $m$ to $m+1$, as discussed above. The second pulse with $\\ell=0$ returns the electrons to the original Landau level, without changing orbital states. Using sufficiently large Rabi frequencies, both pulses can operate at large fidelities. The combination of both pulses then results in a quasihole excitation within the Landau level at the Fermi surface. With this scheme, spontaneous emission can only occur during the pulse duration. To neglect this effect, we have to demand that the lifetime in the excited level is large compared to the duration $t=\\pi\/\\Omega$ of a $\\pi$-pulse. In other words, \nthe coupling has to be fast compared to the emission rate. In summary, large Rabi frequencies (on the order of eV) suppress both decoherence due to interactions or due to spontaneous emission. However, strong Rabi couplings also lead to non-radiative losses. This will set a practical limit to the Rabi strength of the pulse, and thus, to the fidelity of our scheme. 
A further requirement on the Rabi frequency is that it is large compared to the disorder potential, which ensures that the selection rules for orbital angular momentum are well obeyed.\n\nWithin our simulation, we have also studied how the system evolves from the initial Laughlin-like state (polarized in $n=0$ at $t=0$) into a quasihole state (polarized in $n=1$ at $t=\\pi\/\\Omega'$ with $\\Omega'\\equiv \\sqrt{\\delta^2+\\Omega^2}$). It is found that at intermediate times $0 \\Omega$.\n\n \n\\subsection{Detecting anyonic properties \\label{sec:corbino}}\nIn the remainder of this section, we briefly discuss possible detection schemes for fractional charge and statistics, which potentially benefit from a method to generate exactly one quasihole by a pulsed light beam.\n\n\\paragraph{Fractional charge.}\nA possible charge measurement can be performed on a Corbino disk. The insertion of flux through a Corbino disk has been discussed in Ref. \\onlinecite{thouless90}. As described in the previous section, our scheme increases the angular momentum of the electrons. This shifts them towards the outer edge in the same way as creation of an additional flux through the inner circle of the Corbino disk would do. The reverse process, which transports charges towards the inner edge can be achieved by decreasing the angular momentum of the electrons. \n\nThe confining potential at the edge makes it energetically favorable for the charge to return to its original position. This leads to transport through a wire connecting the two edges of the Corbino disk. However, considering a fractional quantum Hall system at filling $\\nu=1\/q$, $q$ quasiparticles need to be shifted to the outer edge in order to accumulate a total electronic charge $e$. 
Thus, $n$ pumping cycles are expected to produce a current of $n\/q$ electrons, and the number of pumping cycles serves as a direct measure of the fractional charge.\n\n\\paragraph{Fractional statistics.}\nThe detection of fractional statistics is possible using interferometers, either of the Fabry-Perot or the Mach-Zehnder type. Such devices are suited to detect Aharonov-Bohm phases, as proposed for instance in Ref. \\onlinecite{chamon97} and realized in Refs. \\onlinecite{camino05,mcclure12,willett13}, by measuring the interference of currents along different paths. In these schemes, the interference pattern is sensitive to changes of the magnetic field, which yields the value of the fractional charge. It is also sensitive to the number of quasiparticles between the different arms of the interferometer, and from this, the statistical angle of the excitations can be deduced. However, to extract both charge and statistical angle from the interference pattern, exact knowledge of the number of excitations is needed. Thus, our scheme may allow for improved measurements as it provides individual control over these excitations.\n\n\n\\section{Light-induced potentials}\n\\label{pin}\nThe previous section has demonstrated that light with orbital angular momentum can be used to mimic the addition of a flux, and to produce a quasiparticle excitation. In the present section, we will not be concerned with the production of the excitation, but we will be interested in ways to stabilize and control the quasiparticle. Specifically, we will \nconsider an optical potential which locally repels the electrons and thereby traps a quasihole. \n\nA major concern addressed in this section is the finite width of the optical potential, in contrast to $\\delta$-like potentials which have been studied earlier in the context of cold atoms \\cite{paredes01,julia-diaz12}.
A numerical investigation shows that a potential with small but finite width is even better suited for trapping quasiholes than a point-like potential. However, the gap above the quasihole state is found to decrease when the potential becomes broader than the magnetic length. Since the optical wavelength is usually larger than the magnetic length, we will present some ideas to achieve subwavelength potentials using a three-level coupling.\n\nGiven the flexibility of optical potentials, they appear to be particularly well suited for moving the quasihole. Thus, an optical trap for quasiholes may provide a tool for braiding anyonic excitations. To demonstrate that ability, we show that, when the potential is moved on a closed contour, the wave function acquires a Berry phase proportional to the fractional charge of the quasihole. \n\nThe calculations and discussions in this section hold for both non-relativistic systems and for Dirac materials.\n\n\\subsection{AC Stark shift}\nThe mechanism which provides the desired optical potential is AC Stark shift. The AC Stark shift is routinely used to trap cold atoms in optical lattices. Recently, it has been suggested to trap Dirac electrons in graphene by exploiting AC Stark shift \\cite{morina18}. In a GaAs quantum well, this shift can be produced by optically pumping below the band gap \\cite{vonlehmen86}. Alternatively, if the system is coupled to a cavity, an enhanced Stark shift can be achieved using a resonance of the cavity \\cite{edo}. In general, the energy shift $\\Delta E$ experienced by the electronic energy in a laser field ${\\bf E}({\\bf r},t)$, is given by $\\Delta E= {\\bf d}\\cdot {\\bf E}({\\bf r},t)$, where ${\\bf d}= \\alpha [ E_x({\\bf r},t),E_y({\\bf r},t)]$ is the dipole moment induced by the field. The polarizability $\\alpha$ is inversely proportional to the detuning $\\Delta$ from the closest resonance. 
With this, the optical potential reads:\n\\begin{align}\n V_{\\rm opt} \\propto \\frac{I(z)}{\\Delta},\n\\end{align}\nwhere $I(z)$ is the laser intensity in the complex plane, assumed to be constant in time. In the following, we will consider a Gaussian beam, that is, an optical potential $V_{\\rm opt}^{(\\xi,w)}(z) = \\left(\\frac{l_{\\rm B}}{w}\\right)^2 V_{{\\rm opt},0} \\exp[-|z-\\xi|^2\/w^2]$, characterized by the position of the beam focus $\\xi$, the width $w$ of the beam, and a potential strength $V_{{\\rm opt},0}$. The prefactor $ \\left(\\frac{l_{\\rm B}}{w}\\right)^2$ normalizes the intensity such that $\\lim_{w\\rightarrow 0} V_{\\rm opt}^{(\\xi,w)}(z)= V_{{\\rm opt},0} \\delta(z-\\xi)$, with $\\delta$ being the Dirac distribution. \n\n For the purpose of trapping a quasihole, it is necessary that the strength of the potential compensates the energy gap above the Laughlin state. Thus, the relevant energy scale is given by the Coulomb energy $e^2\/(\\epsilon l_{\\rm B})$, with the magnetic length $l_{\\rm B}$ representing the typical length scale relevant for the quantum Hall physics.\nThis length scale determines the size of an electronic orbital, but also of defects like quasiparticles and quasiholes. If $\\tilde B$ is the magnetic field strength in tesla, $l_{\\rm B} = 26 \\ {\\rm nm}\/\\sqrt{\\tilde B}$. For a magnetic field of 9 T, and with a dielectric constant $\\epsilon_{\\rm d}=12$ (as in GaAs), this energy scale is on the order of 150 meV, and a typical gap will be on the order of 15 meV. An early measurement in GaAs \\cite{vonlehmen86} obtained an AC Stark shift of 0.2 meV with a laser intensity of 8 $\\rm MW\/cm^2$. \n\nApart from the energy scale, the length scale of the potential also plays an important role. With the size of a quasihole being on the order of the magnetic length, we expect that the length scale of a trapping potential should not significantly exceed this scale. 
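As a quick plausibility check of the scales just quoted, the following sketch (added for illustration; the shorthand $e^2/(4\pi\epsilon_0)\approx 1.44$ eV nm and all variable names are ours, not part of the original analysis) evaluates the magnetic length at 9 T together with the bare and dielectrically screened Coulomb energies:

```python
import math

# Magnetic length, using l_B = 26 nm / sqrt(B [T]) as quoted in the text.
B_tesla = 9.0
l_B = 26.0 / math.sqrt(B_tesla)        # about 8.7 nm, i.e. a few nanometers

# Coulomb scale e^2/(4*pi*eps0*l_B), with e^2/(4*pi*eps0) ~ 1.44 eV nm.
E_unscreened = 1.44 / l_B * 1e3        # in meV; order 150 meV
E_screened = E_unscreened / 12.0       # screened by eps_d = 12 (GaAs); order 15 meV

print(round(l_B, 2), round(E_unscreened), round(E_screened))
```

Read this way, the unscreened value matches the quoted 150 meV energy scale, while the screened value matches the quoted 15 meV gap scale.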
However, the minimum length scale of an optical potential is limited by the wavelength of the light, which in the visible regime is on the order of hundreds of nanometers. In contrast, for magnetic field strengths on the order of a few teslas, the magnetic length is only a few nanometers. We will thus need to evaluate whether a potential with finite width $w \\gg l_{\\rm B}$ is still suited to trap quasiholes.\n \n \\subsection{$\\delta$-like potentials}\n Before considering the case of finite-width potentials, we verify that a point-like potential ($w=0$) gives rise to the desired excitations. This becomes obvious when we look at the parent Hamiltonian of the Laughlin state, that is, at a model interaction $V_{\\rm parent}$ for which the Laughlin wave function $\\Psi_{\\rm L}$ is the densest zero-energy eigenstate. Such a parent Hamiltonian is given in terms of Haldane pseudopotentials $V_m$, specifying the interaction strength between two electrons at relative angular momentum $\\hbar m$. In the 1\/3-Laughlin state, all pairs of electrons have relative angular momentum of at least $3\\hbar$, so the Laughlin state has zero energy in a model Hamiltonian with $V_m=0$ for $m\\geq 3$. Since for spin-polarized fermions the relative angular momentum cannot be even, a Hamiltonian with only a single non-zero pseudopotential, $V_1$, provides a parent Hamiltonian for the Laughlin state. It follows that the quasihole state $f_{\\rm qh}^\\xi \\Psi_{\\rm L}$ becomes the densest zero-energy eigenstate of $V_{\\rm parent}+V_0\\,\\delta(z-\\xi)$, when the potential strength exceeds a critical value. To see this, we note that the quasihole state carries the same anti-correlations between the electrons as the Laughlin state, but at the same time has vanishing density at position $\\xi$. \n \n The scenario of a $\\delta$-potential has been studied before in greater detail in the context of cold atoms \\cite{paredes01,julia-diaz12}. 
In these systems of neutral particles, which can be brought into the fractional quantum Hall regime by artificial gauge fields, the cyclotron frequency is on the order of the trap frequency ($\\sim 10$ Hz), resulting in a magnetic length $l_{\\rm B} = \\sqrt{\\hbar\/(M\\omega_{\\rm c})}$ on the order of microns, with $M$ being the mass of the atoms. Due to this different length scale, finite-width effects can indeed be neglected in these artificial systems. \n\n \\subsection{Finite-width potentials}\nTo study the role of the finite potential width $w$, we turn to numerical diagonalization methods, by which we obtain the ground state of $V_{\\rm C}+V_{\\rm opt}^{(\\xi,w)}$ for different laser positions $\\xi$ and different beam widths $w$. Generally, we find large overlaps of these states with the Laughlin quasihole state $f_{\\rm qh}^\\xi \\Psi_{\\rm L}$, even when the beam becomes as broad as (or even broader than) the electronic cloud. In our numerics, we have considered both disk and torus geometries which we discuss separately below.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig7.eps}\n\\caption{\\label{gap} {\\bf Gap above the quasihole state on a torus.} We plot the energy gap above the three degenerate quasihole states on a square torus in the presence of an optical potential $V_{\\rm opt}^{(\\xi,w)}$, as a function of the potential width $w$. The $N$ electrons are confined in the $n=0$ Landau level, generated by the presence of $N_{\\Phi}=3N+1$ magnetic fluxes.}\n\\end{figure}\n\n\\paragraph{Torus.} The torus geometry is convenient because due to its compact nature no trapping potential is required to confine the electrons. Since the torus has no edge, this geometry is well suited for studying the bulk behavior of large systems for which deformations at the edge are irrelevant. Interestingly, on the torus, the ground state of $V_{\\rm C}+V_{\\rm opt}^{(\\xi,w)}$ is almost independent of $w$. 
The overlap with the exact Laughlin quasihole state \\footnote{The Laughlin quasihole states can be obtained as zero-energy states from the parent Hamiltonian, that is, from pseudopotential interactions with $V_1$ being the only non-zero term, together with a $\\delta$-like potential to create the quasihole. On the torus, one needs to consider the Hilbert space of $N$ electrons and $N_{\\Phi}=3N+1$ magnetic fluxes, which corresponds to LL filling 1\/3 plus one additional flux.} takes large values close to 1, cf. Table \\ref{table1}. While this result shows that the finite potential width does not modify the quasihole state in the bulk, we also find that the energy gap above the quasihole states is quite sensitive to the width of the beam (see Fig. \\ref{gap}). Up to a certain value, of the order of the magnetic length, a finite potential width is found to increase the stability of the quasihole. However, the gap starts to decrease when the beam width exceeds the magnetic length. \nThis result can be understood by noticing that the quasihole also has a finite size of the order of the magnetic length, and the formation of a quasihole reduces the energy due to the optical potential most efficiently when the spatial overlap between quasihole and potential becomes largest. Obviously, in broader potentials a quasihole becomes less efficient at reducing the potential energy.\n\n\n\\paragraph{Disk.}\nThe disk, though the most natural geometry to study quantum Hall physics, suffers strongly from finite-size effects. Even the concept of a filling factor is not defined on an infinite disk because each Landau level contains an infinite number of states. It is necessary to assume a trapping potential which controls the electron density. A realistic trapping potential consists of hard walls, so the potential is flat in the entire system, except for the edge, where the potential energy steeply increases. 
Effectively, such a potential puts a constraint on the Hilbert space, as it restricts the orbitals to those which fit into the flat region. This means that angular momenta beyond a certain value are no longer available. Since we are interested in the Laughlin state (with angular momentum $L_N$), and in its quasihole excitation (increasing the angular momentum by up to $N$ quanta), we will assume that the trap effectively truncates the Hilbert space at $L_N+N$. Therefore, we perform the exact diagonalization study within a space of Fock states of angular momentum $L_N \\leq L \\leq L_N+N$. This choice yields the quasihole state as the only zero-energy eigenstate if the parent Hamiltonian is applied, that is, for a point-like potential $V_{\\rm opt}^{(\\xi,w=0)}$ and pseudopotential interactions $V_m \\sim \\delta_{1,m}$. \n\n\\begin{table}\n\\begin{tabular}{c|c|c}\n $w\/l_{\\rm B}$ & Overlap on torus & Overlap on disk \\\\\n & $N=7$ & $N=8$ \\\\\n & $N_{\\phi}=22$ & $84\\leq L\/\\hbar \\leq 92$ \\\\ \n \\hline\n 0 & 0.9884 & 0.9450 \\\\\n 3 & 0.9885 & 0.9552 \\\\\n 6 & 0.9851 & 0.9409\n\\end{tabular} \n\\caption{\\label{table1}\nOverlaps between the Laughlin quasihole state and the ground state of $V_{\\rm C}+V_{\\rm opt}^{(\\xi,w)}$ on disk and square torus, for different $w$.\nParameters on the torus: $V_0=1,\\xi=0$, and on the disk: $V_0=10,\\xi=2$. On the torus, exhibiting three (quasi)-degenerate ground states, overlap refers to the three (equal) eigenvalues of the $3\\times 3$ overlap matrix.\n}\n\\end{table}\n\nImportantly, we find that Coulomb interactions do not significantly alter the scenario. Comparing the exact Laughlin quasihole state and the ground state of $V_{\\rm C}+V_{\\rm opt}^{(\\xi,w)}$, we obtain an overlap of about 0.95 for $N=8$. 
Strikingly, the potential width $w$ has only a minor effect on these numbers if the potential is chosen sufficiently strong, see Table \\ref{table1}.\n\nThere is, however, a notable consequence of long-range interactions appearing in our numerics on the disk: the quasihole position does not exactly coincide with the position of the optical potential anymore, as seen in Fig. \\ref{rxi}. Although this observation seems to be an artifact of neglecting the trap, it will be important to take it into account when determining the quasihole charge, as discussed in the next subsection. To this end, the data in Fig. \\ref{rxi} is needed to calibrate the quasihole position.\n\nEnergetic arguments explain the mismatch between quasihole position and potential minimum: shifting the quasihole towards the center increases the angular momentum, and thereby reduces the energy of long-ranged interactions. A realistic trapping potential would compensate this effect by penalizing the angular momentum increase, but this term is missing in our numerical study. If Coulomb interactions are replaced by the short-ranged pseudopotential model, the quasihole position coincides with the potential minimum. In this case, the interaction energy is zero, and shifting the quasihole cannot lead to an interaction energy gain.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig8.eps}\n\\caption{\\label{rxi} {\\bf Calibrating the quasihole position on the disk.} When the trapping energy is neglected, long-range interactions lead to a shift of the radial position $r$ of the quasihole towards the center. 
The plotted curve is used to calibrate the true quasihole position $r$ as a function of the parameter $|\\xi|$, specifying the maximum of the optical potential, for $N=8$ electrons.}\n\\end{figure}\n\n\n\\subsection{Realization of sub-diffraction\\xspace potentials}\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig9.eps}\n\\caption{\\label{fig:sub} \n\\textbf{Sub-diffraction potentials.}\n(a) \nEngineering a sub-diffraction\\xspace potential via three-level coupling.\nAn $\\Uparrow$-hole at the Fermi level (empty dotted ball) experiences an attractive potential by coupling to the electrons (black balls) in\nthe $\\downarrow$-level of the conduction band and in the valence band state $\\ket{v}$. A particle-hole transformation relates this process to the standard single-particle EIT scenario, applied to a single hole. Although the laser fields do not induce a direct potential for ${\\uparrow}$-electrons, the attractive potential for $\\Uparrow$-holes results in an effective repulsive potential for the ${\\uparrow}$-electrons.\n(b) \nWe show a 1D cut through the potential $V(z)$ and the laser fields $\\Omega_c(z)$ and $\\Omega_p$. \nEven though $\\Omega_c$ is diffraction limited, we can achieve a sub-diffraction\\xspace $V(z)$ by working with ${\\rm max}[\\Omega_c(z)]\\gg \\Omega_p$.\n}\n\\end{figure}\nIn the previous section, we showed that the manipulation of anyons profits from potentials of width $w\\sim l_B$. This requires a sub-diffraction\\xspace addressability which can be achieved by employing techniques analogous to the ones used in ultra-cold atoms~\\cite{Gorshkov2008a,Wang2018}. \nThe basic idea is to use three energy levels which provides much more flexibility than just the two-level scheme used to induce AC Stark shift. \nAs an example we consider GaAs, for which we can use the level scheme shown in Fig. \\ref{fig:sub}~(a), in analogy to Fig. \\ref{cases} used for the STIRAP. 
The scheme consists of two spin levels in the conduction band and one level in the valence band. We now choose the Fermi energy through the upper spin level ${{\\uparrow}}$, so that both the ${\\downarrow}$-level and the valence band are occupied. The two-electron system can be mapped onto a single-particle problem via particle-hole transformation, and a repulsive potential on ${{\\uparrow}}$-electrons will be achieved by engineering an attractive potential for $\\Uparrow$-holes. Therefore, we operate at the two-photon detuning $\\delta_\\down<0$. \n\nMoreover, we use two laser fields: a strong $\\Omega_c\\xspace(z)$\nwhich is position dependent [for ease of presentation we fix it to $\\Omega_c\\xspace(z)^2=\\Omega_0^2(1-\\exp[-|z-\\xi|^2\/w^2])$], \nand a weaker $\\Omega_p\\xspace$ which is homogeneous in space. \n The Hamiltonian reads\n\\begin{equation}\nH_{\\rm \\scriptscriptstyle al}=\n\\left(\n\\begin{array}{ccc}\n\\delta_\\down& 0 & \\Omega _c(z) \\\\\n0&0& \\Omega_p \\\\\n \\Omega _c(z) & \\Omega_p & \\Delta\\\\%-i\\gamma \n\\end{array}\n\\right)\\label{HaEIT}\n\\end{equation}\nin the bare hole-state basis $\\{\\ket{{\\Downarrow}},\\ket{{\\Uparrow}}, \\ket{v}\\}$.\nFor $|\\delta_\\down| \\ll\\Omega_p\\xspace$, which ensures that we can consider $\\delta_\\down$ perturbatively, and for an appropriate preparation scheme~\\cite{Wang2018,Lacki2016}, the internal state of a hole\ncan be described using a dark state $\\ket{D}=\\frac{\\Omega_c\\xspace(z)}{\\sqrt{\\Omega_p\\xspace^2+\\Omega_c\\xspace(z)^2}}\\ket{{\\Uparrow}}-\\frac{\\Omega_p\\xspace}{\\sqrt{\\Omega_p\\xspace^2+\\Omega_c\\xspace(z)^2}}\\ket{{\\Downarrow}}$.\nFrom the form of $\\ket{D}$, we see that the hole experiences an attractive potential $V(z)=\\delta_\\down\\frac{\\Omega_p\\xspace^2}{\\Omega_p\\xspace^2+\\Omega_c\\xspace(z)^2}$, which for $\\Omega_0\\gg\\Omega_p\\xspace$ can have sub-diffraction\\xspace width $w_s=w\/s$ characterized by the enhancement factor $s\\sim 
\\Omega_0\/\\Omega_p\\xspace$ and the depth $V_0=\\delta_\\down$.\n\n\nAssuming that we can describe our system using three levels,\nthe available depth of the trap is mainly limited by: (i) the validity of the rotating wave approximation, and (ii) the coupling to the short-lived intermediate level.\nLimitation (i) constrains the strength of $\\Omega_c(z)$ to $\\Omega_0\\ll\\Delta_{\\rm \\scriptscriptstyle bg}$.\nTogether with $\\Omega_p\\gg V_0$ and $s=\\Omega_0\/\\Omega_p$, we get that $s\\ll \\Omega_0\/V_0\\ll \\Delta_{\\rm \\scriptscriptstyle bg}\/V_0$. \nFor $\\Delta_{\\rm \\scriptscriptstyle bg}\\sim 1.5$~eV, $\\Omega_0\\sim 0.5$~eV, and $V_0\\sim15$~meV, we see that enhancement factors $s$ on the order of $10$ are within reach.\nThe losses in (ii) lead to the broadening of the trapping potential by $\\gamma_v \\frac{V_0^2}{\\Omega_p^2}\\ll \\gamma_v$, which (compared with the depth of the potential $V_0$) is negligible for lifetimes $\\tau_v=1\/\\gamma_v$ on the order of 10~ps.\nNote that, in contrast to ultra-cold atoms~\\cite{Lacki2016,Jendrzejewski2016,Wang2018,BieniasInPreparation}, the kinetic energy is quenched in a magnetic field, and therefore non-adiabatic corrections to the Born-Oppenheimer potentials~\\cite{Lacki2016,Jendrzejewski2016} are negligible. This relaxes some of the constraints posed on the possible trapping depths. \nWe leave a more detailed analysis, beyond the estimates presented here, for future work.\n\nFinally, in the case of graphene, we envision similar possibilities: for example, one can use other filled Landau levels as the additional two levels in the ladder or lambda three-level scheme.\n\n\n\\subsection{Moving a quasihole}\nIn the remaining part of this section, we consider an optical potential which is moved on a closed loop. As a quasihole is trapped by the potential, this procedure is expected to imprint a Berry phase onto the wave function which is proportional to the charge of the quasihole. 
Thus, by calculating the quasihole charge from the Berry phase, we will verify that moving the optical potential is suited to move an excitation. When short-range interactions are considered instead of Coulomb interactions, finite-size effects become small, and the fractional charge matches the value 1\/3 expected for a thermodynamically large Laughlin system. We will also compare an idealized adiabatic evolution, restricted to the ground state Hilbert space, with the true dynamic evolution. This establishes the maximal speed with which the potential may be moved.\n\n\n\\paragraph{Relation between Berry phase and charge.} If the position $\\xi$ of a single charge $q$ is moved, the wave function $\\Psi(\\xi)$ will pick up a Berry phase $\\gamma=i\\oint {\\rm d}\\xi \\langle \\Psi(\\xi) | \\nabla_\\xi | \\Psi(\\xi) \\rangle$, and this phase is proportional to the magnetic flux through the enclosed area $A$ times the value $q$ of the electric charge. This relation is normalized such that the electron charge $e$ acquires a Berry phase $\\gamma=2\\pi$ when it encircles one flux. In a constant magnetic field, with the magnetic length defined such that an area $2 \\pi l_{\\rm B}^2$ contains one flux quantum, we have the relation\n\\begin{align}\n\\label{qe}\n \\frac{q}{e} = \\gamma\\frac{l_{\\rm B}^2}{A}.\n\\end{align}\nThus, by studying the phase of the wave function upon moving the quasihole, we can extract the electric charge of the excitation. \n\n\\paragraph{Results from adiabatic evolution.}\nWe have performed such a calculation using the disk geometry with $N=8$ electrons. We considered a Hamiltonian $H=V_{\\rm int}+V_{\\rm opt}^{(\\xi,w)}$, where the interactions $V_{\\rm int}$ are either Coulomb interactions or the parent Hamiltonian of the Laughlin state. For the optical potential $V_{\\rm opt}^{(\\xi,w)}$, we considered a finite width $w$ up to $3l_{\\rm B}$ as well as the limit $w\\rightarrow 0$. Our results are plotted in Fig. 
\\ref{charge}: Importantly, in the case of the ideal interactions from the pseudopotential model, the width of the optical potential has a minor effect on the Berry phase or, respectively, the measured charge of the quasihole. Both for a point-like potential and for a broad beam ($w=3 l_{\\rm B}$), the quasihole charge is almost independent of the quasihole position, as it should be in a quantum liquid. Moreover, the value of the charge is close to the expected value $1\/3$. The accuracy of this result despite the small system size is due to the particular choice of interactions. As the pseudopotential model has short-range interactions, finite-size effects are much weaker than in the long-range Coulomb case. For Coulomb interactions, the charge as a function of quasihole position is found to be $>0.4e$, that is, it differs significantly from $e\/3$. Surprisingly, in the presence of Coulomb interactions, the results for the finite-width potential are closer to the ideal value 1\/3.\n\nThe important conclusion which we draw from Fig. \\ref{charge} is that the finite width of the optical potential does not appear as a limiting factor for a charge measurement by moving the potential. In other words, the observed behavior suggests that even if the optical trap is much wider than the actual size of a quasihole, the quasihole still follows the contour described by the moving potential, and despite their broad width optical potentials can be used for moving and braiding anyons.\n\nThe results shown in Fig. \\ref{charge} were obtained from an ``adiabatic'' simulation, that is, we actually did not simulate the dynamics of the system while the potential is moved, but we assumed that for any potential position the system remains in its ground state. Thus, we simply determine the ground state $\\Psi_n$ at different quasihole positions $R e^{i\\varphi_n}= R e^{i n \\Delta \\varphi}$, and obtain the phase difference between subsequent states from their overlap. 
Summing up all phase differences along the contour gives the Berry phase\n\\begin{align}\n\\label{gamma_static}\n\\gamma = \\sum_n {\\rm Im}\\left( \\langle \\Psi_{n+1} | \\Psi_n \\rangle -1\\right).\n\\end{align}\nIn this approach, it is important to fix the global gauge. In our case of a non-degenerate ground state, the possible global gauge transformations are $U(1)$ rotations. Since we compare eigenvectors obtained from two different diagonalization procedures, we have to ensure that the global gauge remains the same. This can be done by demanding that a certain component of the state vector is real and positive \\cite{kohmoto}. However, this procedure requires that some assumptions and conditions hold: any state along the path needs to have a non-zero overlap with this reference component. Moreover, one must ensure that, after discretization of the parameter space, the global gauge transformation does not remove the Berry phase. We can achieve this by choosing a reference component which does not gain a phase when the potential is moved. It is easy to find such a component, since we know that the quasihole is described by a wave function of the type $\\prod_i (z_i-\\xi)\\Psi$. This means that the part of the wave function given by $\\prod_i z_i\\Psi$ is not affected by the quasihole position. Any component which has non-zero overlap with this expression can be used as a reference component, that is, any occupied Fock state with angular momentum $L=L_N+N\\hbar$.\n\n \\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig10.eps}\n\\caption{\\label{charge} {\\bf Charge of the Laughlin quasihole.}\nWe plot the charge $q$ of a quasihole in a system of $N=8$ electrons on the disk as a function of radial quasihole position $r$. 
The total angular momentum is restricted to the Laughlin regime, $L_N\\leq L \\leq L_N+N$, and the Hamiltonian consists of interactions $V_{\\rm int}$ and an optical potential $V_{\\rm opt}^{(\\xi,w)}$, of width $w$ and focused at position $\\xi$.\nWe have tuned the radial position $|\\xi|\/l_{\\rm B}$ of the optical potential from 0.1 to 2, and obtained the corresponding radial position $r$ of the quasihole. The potential is then moved on a circle around the origin, leading to a Berry phase which we evaluate using the static method of Eq. (\\ref{gamma_static}) for 200 discrete steps. We consider both Coulomb and pseudopotential interactions, the latter providing a parent Hamiltonian for the Laughlin state. We compare point-like potentials ($w=0$) and finite-width potentials ($w=3l_{\\rm B}$). Independently of the potential width, the system with pseudopotential interactions agrees well with the expected value $q\/e=1\/3$, whereas finite-size effects spoil the numerical value in the system with Coulomb interactions.\n}\n\\end{figure}\n\n\\paragraph{Dynamical evolution.}\nIn the remainder, we compare the ``adiabatic'' approach with a dynamic one. In the dynamic approach, we explicitly simulate the time evolution of the system while the potential is moved, without assuming adiabaticity of the process. Of course, the dynamic approach is much more costly, as it requires full diagonalization of the Hamiltonian, whereas in the static approach only the ground state is needed. But there are some conceptual advantages of the dynamic method: First, this method yields the overlap between initial and final state, which provides a measure for the adiabaticity of the process. From this one can also obtain information about how fast the optical potential may be varied. Second, the dynamic method does not require the gauge fixing procedure described above. 
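The gauge-fixing issue can also be sidestepped entirely: as a minimal, self-contained illustration of an overlap-based Berry phase (using a textbook spin-1/2 in a slowly rotated field rather than the quantum Hall system, so the model and all parameters below are purely illustrative), one may accumulate the phases of successive overlaps in a manifestly gauge-invariant product:

```python
import numpy as np

# Toy model: ground state of H = n.sigma, with the unit vector n moved once
# around a cone of opening angle theta.  The enclosed solid angle is
# 2*pi*(1 - cos(theta)), and the ground-state Berry phase has magnitude
# pi*(1 - cos(theta)).  This 2x2 example stands in for the many-body state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    H = (np.sin(theta) * np.cos(phi) * sx
         + np.sin(theta) * np.sin(phi) * sy + np.cos(theta) * sz)
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                  # eigenvector of the lowest eigenvalue

theta, steps = np.pi / 3, 400
states = [ground_state(theta, 2 * np.pi * k / steps) for k in range(steps)]
states.append(states[0])               # close the loop with the same gauge

# Product of successive overlaps; its phase is gauge invariant, so no
# reference component needs to be fixed.
product = 1.0 + 0.0j
for a, b in zip(states[:-1], states[1:]):
    product *= np.vdot(a, b)
gamma = -np.angle(product)

print(abs(gamma), np.pi * (1 - np.cos(theta)))
```

The arbitrary phases returned by each diagonalization cancel pairwise in the product, which is why no reference component is required here.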
\n\n\nFor our dynamical simulation we discretize time, and define $H_n$ as the Hamiltonian with the optical potential $V_{\\rm opt}$ at position $R e^{i\\varphi_n}$. We then evolve for short periods $\\Delta t$ under $H_n$, applying the evolution operator $U_n=\\exp\\left(i H_n \\Delta t\\right)$ to the quantum state, and afterwards we quench from $H_n$ to $H_{n+1}$. \n Starting from $\\Psi_0$, the ground state of $H_0$, we reach a final state $\\Psi=\\prod_{n=1}^{n_{\\rm max}} U_n \\Psi_0$, where $n_{\\rm max}=2\\pi\/\\Delta \\varphi-1$. If the process is adiabatic, initial and final states differ only by a phase $\\langle \\Psi | \\Psi_0 \\rangle = e^{i \\phi}$. This phase consists of a dynamical contribution $\\phi_T$, determined by the energy of the state, and the Berry phase $\\gamma$. In the particular case of a circular rotation around the origin, the energy does not change, and $\\phi_T=E_0 T$ where $T=n_{\\rm max}\\Delta t$. Thus, the Berry phase is obtained as\n\\begin{align}\n \\label{gamma_dyn}\n \\gamma= {\\rm Im}\\left({\\rm ln} \\langle \\Psi | \\Psi_0 \\rangle \\right)-E_0 n_{\\rm max}\\Delta t.\n\\end{align}\n\n If the duration of a time step $\\Delta t$ is of the order of $1\/\\Delta E$, where $\\Delta E$ is the energy gap, the dynamic method produces exactly the same result as the adiabatic one. Interestingly, even for much shorter time steps, the system still behaves adiabatically in the sense that its overlap with excited states remains negligible, and initial and final states remain the same up to a phase difference. However, this phase difference acquires some errors, in the sense that it differs from the adiabatic value. This behavior is demonstrated in the data shown in Fig. \\ref{adi} for a system of $N=5$ electrons with Coulomb interactions and an optical potential of width $w=3 l_{\\rm B}$. 
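The same toy spin-1/2 model (again purely illustrative, and written with the standard propagator convention $U_n=\exp(-iH_n\Delta t)$ rather than the sign convention used above) shows the quench-and-evolve protocol at work: the final state reproduces the initial one up to a phase, and removing the dynamical phase leaves a geometric phase of magnitude $\pi(1-\cos\theta)$:

```python
import numpy as np

# Dynamical (quench) evolution for a spin-1/2 toy model: rotate the field
# axis in small steps, evolving with U = exp(-i H dt) between quenches.
# All parameters are illustrative and not taken from the paper.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def hamiltonian(theta, phi):
    return (np.sin(theta) * np.cos(phi) * sx
            + np.sin(theta) * np.sin(phi) * sy + np.cos(theta) * sz)

theta, steps, dt = np.pi / 3, 2000, 1.0
E0 = -1.0                                  # ground energy of n.sigma, |n| = 1
_, vecs = np.linalg.eigh(hamiltonian(theta, 0.0))
psi0 = vecs[:, 0]

psi = psi0.copy()
for k in range(steps):
    H = hamiltonian(theta, 2 * np.pi * (k + 1) / steps)
    # exp(-i H dt) in closed form, since H squares to the identity here:
    U = np.cos(dt) * I2 - 1j * np.sin(dt) * H
    psi = U @ psi

overlap = np.vdot(psi0, psi)               # = exp(-i E0 T) * exp(i gamma)
fidelity = abs(overlap)                    # close to 1: evolution stayed adiabatic
gamma = np.angle(overlap * np.exp(1j * E0 * steps * dt))
print(fidelity, abs(gamma))
```

Here the fidelity measures the amount of norm left in the ground state, the analogue of the quantum state error discussed below, while the extracted phase plays the role of the dynamically obtained Berry phase.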
This finding suggests that a quasihole can be moved relatively fast without energetically exciting the system, but this does not yet guarantee an adiabatic phase evolution.\n \n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.48\\textwidth, angle=0]{fig11.eps}\n\\caption{\\label{adi} {\\bf Deviations from the adiabatic process due to finite times.}\nWe compare two types of errors occurring if the quasihole is not moved adiabatically, as a function of the time step duration $\\Delta t$ (in units $\\hbar \\epsilon l_{\\rm B}\/e^2$).\nThe red curve shows the relative phase error, defined as $\\Delta \\gamma\/\\gamma_{\\rm ad}$. Here, $\\Delta \\gamma$ is the difference between the adiabatically obtained Berry phase $\\gamma_{\\rm ad}$ via Eq. (\\ref{gamma_static}), and the value obtained dynamically using Eq. (\\ref{gamma_dyn}). The blue curve shows the quantum state error, defined as $1-|\\langle {\\rm initial \\ state} | {\\rm final \\ state} \\rangle |$, that is, the amount of norm which becomes excited during the evolution. The state error is low ($<10^{-3}$) for \ntime steps $\\Delta t > 0.5$, while a similar phase error is only achieved with significantly longer time steps $\\Delta t > 15$.\n}\n\\end{figure}\n \n\n\n\\section{Summary}\nWe have proposed several optical tools which can provide microscopic control over excitations in integer or fractional quantum Hall systems. In Sec. \\ref{pump}, we have \ndeveloped ideas for a quasiparticle pump, based on interactions between electrons and photons with orbital angular momentum. For graphene, empty and filled Landau levels can be optically coupled, as discussed in Sec. \\ref{graphene}. For GaAs, a spin-flip Raman coupling is possible, see Sec. \\ref{GaAs}. We have applied a STIRAP scheme to this many-body scenario, which allows decoherence to be avoided. Our techniques to create individual quasiparticles are robust against disorder and can give rise to novel ways of measuring fractional charge and statistics. 
A possible application within a Corbino disk geometry is given in Sec. \\ref{sec:corbino}. \n\nIn Sec. \\ref{pin}, we have discussed different strategies for optically trapping a quasihole. We have studied the role played by the potential width for the stability of the trap. Even shallow potentials are found to support quasihole states, but the gap above the quasihole state is largest when the width is on the order of the magnetic length. A simple way of achieving an optical potential is based on the AC Stark shift, but the potential width, limited by the wavelength, exceeds the ideal length scale. Improvements are possible using a three-level coupling scheme, where, at the price of a weaker trap, the potential width can be brought below the diffraction limit. We have also simulated the system dynamics in a moving potential, showing that such a process imprints a Berry phase in the electronic wave function proportional to the fractional charge of the quasihole. Optical potentials might thus become useful for braiding quasiparticles, which is the operation on which future topological quantum computers might be based.\n\nIn summary, our manuscript advances quantum-optical tools for engineering and manipulating quantum Hall systems. In previous work, optical driving near a Landau level resonance has been suggested as a tool for engineering novel quantum Hall states \\cite{areg}. Here, we have applied similar ideas in order to control bulk excitations of a quantum Hall system. Other interesting aspects of optically coupled Landau levels concern quantized dissipation rates \\cite{tran18} and optically induced electron localization \\cite{arikawa17}.\n\n\n\\acknowledgments\nWe acknowledge fruitful discussions with Wade de Gottardi, Ze-Pei Cian, and Hwanmun Kim. 
This research was financially supported by the NSF through the PFC@JQI, AFOSR-MURI FA95501610323, EFRI-1542863, CNS-0958379,\nCNS-0855217, ACI-1126113 and the City University of\nNew York High Performance Computing Center at the\nCollege of Staten Island, Sloan Fellowship, YIP-ONR.\nP.B. acknowledges support by AFOSR, NSF PFC@JQI, NSF QIS, ARL CDQI, ARO MURI, and ARO.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn special relativity, the Lorentz transforms supersede their\nclassical equivalent, the Galilean transforms~\\citep{goldstein1980}.\nLorentz transforms operate on four-vectors such as the four-velocity\nor four-potential and are usually operationalised as multiplication by\na $4\\times 4$ matrix. A Lorentz transform takes the components of an\narbitrary four-vector as observed in one coordinate system and returns\nthe components observed in another system which is moving at constant\nvelocity with respect to the first.\n\nThere are a few existing software tools for working with Lorentz\ntransforms, mostly developed in an educational context. Early work\nwould include that of \\citet{horwitz1992} who describe {\\tt relLab}, a\nsystem for building a range of {\\em gendanken} experiments in an\ninteractive graphical environment. The author asserts that it runs on\n``any Macintosh computer with one megabyte of RAM or more'' but it is\nnot clear whether the software is still available. More modern\ncontributions would include the {\\tt OpenRelativity}\ntoolkit~\\citep{sherin2016} which simulates the effects of special\nrelativity in the {\\tt Unity} game engine.\n\nThe {\\tt lorentz} package~\\cite{hankin2022_lorentz_package} is written\nin the R programming language~\\cite{rcore2022}, providing {\\tt\nR}-centric functionality for the physics of special relativity. 
It\ndeals with formal Lorentz boosts, converts between three-velocities\nand four-velocities, and provides computational support for the\ngyrogroup structure of relativistic three-velocity addition. I\nleverage the power of the R programming language and the package\nitself to search for a gyrodistributive law in appendix A.\n\n\n\\section{The R programming language}\n\nThe R programming language~\\cite{rcore2022} has an emphasis on\nstatistics and data analysis~\\cite{chambers2008}. However, R is a\ngeneral-purpose tool and is increasingly being used in the physical\nsciences~\\cite{mullen2022}. For example, functionality for working\nwith general relativity is given\nin~\\cite{hankin2020,hankin2021string}. R is interpreted, not\ncompiled, giving instant feedback on commands and allowing rapid\ndevelopment. In this document, the typical cycle is presented as\nfollows:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> 2+2\n\\end{Sinput}\n\\begin{Soutput}\n[1] 4\n\\end{Soutput}\n\\end{Schunk}\n\nWe see the user's query of {\\tt 2+2} is accepted, and the result, {\\tt\n4}, given as a response. The ``{\\tt [1]}'' indicates that the\nreturned value is a vector of length 1.\n\n\\subsection{The {\\tt lorentz} package: an overview}\n\nR's capabilities are extended through user-created ``packages'', which\noffer specialist additional functionality to base R. Packages may be\ninstalled independently and cover a wide range of computational\nfacilities. R packages contain code, data, and documentation in a\nstandardised format that can be installed by users of R, typically via\na centralised software repository such as CRAN. 
R packages must\nconform to a strict specification and pass extensive quality control\nchecks which ensure the usability and long-term stability of packages\nfor end users.\n\n\\subsection{Installation of the {\\tt lorentz} package}\n\nThe R system may be downloaded from \\url{https:\/\/cran.r-project.org\/};\nmany users prefer the Rstudio IDE, available from\n\\url{https:\/\/posit.co\/}. Once R is installed, the {\\tt lorentz}\npackage is easily installed. Type:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> install.packages(\"lorentz\")\n\\end{Sinput}\n\\end{Schunk}\n\nat the command line, and this will download and install the package\nfrom CRAN. To load the package, use\n\n\\begin{Schunk}\n\\begin{Sinput}\n> library(\"lorentz\")\n\\end{Sinput}\n\\end{Schunk}\n\nand this will make the package functions available to the R session.\n\n\\section{Lorentz transforms: active and passive}\n\nPassive transforms are the usual type of transforms taught and used in\nrelativity. However, sometimes active transforms are needed and it is\neasy to confuse the two. Here I will discuss passive and then active\ntransforms, and illustrate both in a computational context.\n\n\\subsection*{Passive transforms}\n\n\\newcommand{\\vvec}[2]{\\begin{pmatrix}#1 \\\\ #2\\end{pmatrix}}\n\\newcommand{\\twomat}[4]{\\begin{pmatrix} #1 & #2 \\\\ #3 & #4\\end{pmatrix}}\n\nConsider the following canonical Lorentz transform in which we have\nmotion in the $x$-direction at speed $v>0$; the motion is from left to\nright. We consider only the first two components of four-vectors, the\n$y$- and $z$- components being trivial. A typical physical\ninterpretation is that I am at rest, and my friend is in his spaceship\nmoving at speed $v$ past me; and we are wondering what the vectors I\nmeasure in my own rest frame look like to him. 
The (passive) Lorentz\ntransform is:\n\n\\begin{equation*}\n\\twomat{\\gamma}{-\\gamma v}{-\\gamma v}{\\gamma}\n\\end{equation*}\n\nAnd the canonical example of that would be:\n\n\\begin{equation*}\n\\twomat{\\gamma}{-\\gamma v}{-\\gamma v}{\\gamma}\\vvec{1}{0}=\\vvec{\\gamma}{-\\gamma v}\n\\end{equation*}\n\nwhere the vectors are four velocities (recall that $\\vvec{1}{0}$ is\nthe four-velocity of an object at rest). Operationally, I measure the\nfour-velocity of an object to be $\\vvec{1}{0}$, and he measures the\nsame object as having a four-velocity of $\\vvec{\\gamma}{-\\gamma v}$.\nSo I see the object at rest, and he sees it as moving at speed $-v$;\nthat is, he sees it moving to the left (it moves to the left because\nhe is moving to the right relative to me). The {\\tt lorentz}\npackage~\\cite{hankin2022_lorentz_package} makes computations easy.\nSuppose $v=0.6c$ in the $x$-direction.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> # NB: speed of light = 1 by default\n> u <- as.3vel(c(0.6,0,0)) # coerce to a three-velocity\n> u\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] 0.6 0 0\n\\end{Soutput}\n\\begin{Sinput}\n> as.4vel(u) # four-velocity is better for calculations\n\\end{Sinput}\n\\begin{Soutput}\nA vector of four-velocities (speed of light = 1)\n t x y z\n[1,] 1.25 0.75 0 0\n\\end{Soutput}\n\\begin{Sinput}\n> (B <- boost(u)) # transformation matrix\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.25 -0.75 0 0\nx -0.75 1.25 0 0\ny 0.00 0.00 1 0\nz 0.00 0.00 0 1\n\\end{Soutput}\n\\end{Schunk}\n\n(note that element $[1,2]$ of the boost matrix $B$ is negative as we\nhave a passive transform). Then a four-velocity of $(1,0,0,0)^T$\nwould appear in the moving frame as\n\n\\begin{Schunk}\n\\begin{Sinput}\n> B %*% c(1,0,0,0)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 1.25\nx -0.75\ny 0.00\nz 0.00\n\\end{Soutput}\n\\end{Schunk}\n\nThis corresponds to a speed of $-0.75\/1.25=-0.6$. 
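The numbers above can be checked from first principles. The following short sketch (standalone Python rather than R, illustrative only and independent of the package) rebuilds the $v=0.6$ passive boost from $\gamma=1/\sqrt{1-v^2}$ and recovers the $-0.6$ coordinate velocity:

```python
import math

def passive_boost_x(v):
    """Passive Lorentz boost along x with c = 1, as a 4x4 nested list."""
    g = 1.0 / math.sqrt(1.0 - v * v)   # gamma factor; 1.25 for v = 0.6
    return [[g, -g * v, 0.0, 0.0],
            [-g * v, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply(M, x):
    """4x4 matrix times a length-4 vector."""
    return [sum(M[i][j] * x[j] for j in range(4)) for i in range(4)]

B = passive_boost_x(0.6)
U = apply(B, [1.0, 0.0, 0.0, 0.0])  # four-velocity of an object at rest
print(U[:2])        # approximately [1.25, -0.75]
print(U[1] / U[0])  # recovered coordinate velocity, approximately -0.6
```

The ratio of the spatial to the temporal component reproduces the observed speed, matching the package output above.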
Observe that it is\npossible to transform an arbitrary four-vector:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> B %*% c(4,6,-8,9)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 0.5\nx 4.5\ny -8.0\nz 9.0\n\\end{Soutput}\n\\end{Schunk}\n\n\n\\subsubsection*{Null vectors: light}\n\nLet's try it with light (see section~\\ref{photonsection} for more\ndetails on photons). Recall that we describe a photon in terms of its\nfour momentum, not four-velocity, which is undefined for a photon.\nSpecifically, we {\\em define} the four-momentum of a photon to be\n\n\\begin{equation*}\n \\left(\n \\begin{array}{c}\n E\/c\\\\Ev_x\/c^2\\\\Ev_y\/c^2\\\\Ev_z\/c^2\n \\end{array}\n \\right)\n \\end{equation*}\n\nSo if we consider unit energy and keep $c=1$ we get $p=\\vvec{1}{1}$\nin our one-dimensional world (for a rightward-moving photon) and the\nLorentz transform is then\n\n\\begin{equation*}\n \\twomat{\\gamma}{-\\gamma v}{-\\gamma v}{\\gamma}\\vvec{1}{1}=\\vvec{\\gamma-\\gamma v}{\\gamma-\\gamma v}\n \\end{equation*}\n\n\nSo, in the language used above, I see a photon with unit energy, and\nmy friend sees the photon with energy\n$\\gamma(1-v)=\\sqrt{\\frac{1-v}{1+v}}<1$, provided that $v>0$: the\nphoton has less energy in his frame than mine because of Doppler\nredshifting. It's worth doing the same analysis with a\nleftward-moving photon:\n\n\\begin{equation*}\n \\twomat{\\gamma}{-\\gamma v}{-\\gamma v}{\\gamma}\\vvec{1}{-1}=\\vvec{\\gamma(1+v)}{-\\gamma(1+v)}\n \\end{equation*}\n\nHere the photon has more energy for him than me because of blue\nshifting: he is moving to the right and encounters a photon moving to\nthe left. The R idiom would be\n\n\\begin{Schunk}\n\\begin{Sinput}\n> B %*% c(1,1,0,0)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 0.5\nx 0.5\ny 0.0\nz 0.0\n\\end{Soutput}\n\\begin{Sinput}\n> B %*% c(1,-1,0,0)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 2\nx -2\ny 0\nz 0\n\\end{Soutput}\n\\end{Schunk}\n\nfor the right- and left-moving photons respectively. 
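The Doppler factors just derived are easy to check numerically. A standalone Python sketch (illustrative only, not the package) verifies that $\gamma(1-v)=\sqrt{(1-v)/(1+v)}$ and that the red- and blue-shift factors are reciprocal:

```python
import math

v = 0.6
g = 1.0 / math.sqrt(1.0 - v * v)   # gamma = 1.25

red = g * (1.0 - v)    # energy factor for the rightward-moving photon
blue = g * (1.0 + v)   # energy factor for the leftward-moving photon

print(red, blue)  # approximately 0.5 and 2.0, as in the R output above
# gamma*(1-v) coincides with sqrt((1-v)/(1+v)), and red*blue = gamma^2*(1-v^2) = 1
print(math.isclose(red, math.sqrt((1.0 - v) / (1.0 + v))))  # True
print(math.isclose(red * blue, 1.0))                        # True
```

The reciprocity expresses the fact that swapping the photon's direction exchanges red- and blue-shift.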
\n\nThe above analysis uses {\\em passive} transforms: there is a single\nphysical reality, and we describe that one physical reality using two\ndifferent coordinate systems. One of the coordinate systems uses a\nset of axes that are {\\em boosted} relative to the axes of the other.\n\nThis is why it makes sense to use prime notation as in\n$x\\longrightarrow x'$ and $t\\longrightarrow t'$ for a passive Lorentz\ntransform: the prime denotes measurements made using coordinates\nthat are defined with respect to the boosted system, and we see\nnotation like\n\n\\begin{equation*}\n\\vvec{t'}{x'}=\\twomat{\\gamma}{-\\gamma v}{-\\gamma v}{\\gamma}\\vvec{t}{x}\n\\end{equation*} \n\nThese are the first two elements of a displacement four-vector. It is\nthe same four-vector but viewed in two different reference frames.\n\n\\subsection*{Active transforms}\n\nIn the passive view, there is a single physical reality, and we are\njust describing that one physical reality using two different\ncoordinate systems. Now we will consider active transforms: there are\ntwo physical realities, but one is boosted with respect to the other. \n\nSuppose my friend and I have zero relative velocity, but my friend is\nin a spaceship and I am outside it, in free space, at rest. He\nconstructs a four-vector in his spaceship; for example, he could fire\nbullets out of a gun which is fixed in the spaceship, and then\ncalculate their four-velocity as it appears to him in his\nspaceship-centric coordinate system. We both agree on this\nfour-velocity as our reference frames are identical: we have no\nrelative velocity.\n\nNow his spaceship acquires a constant velocity, leaving me stationary.\nMy friend continues to fire bullets out of his gun and sees that their\nfour-velocity, as viewed in his spaceship coordinates, is the same as\nwhen we were together.\n\nNow he wonders what the four-velocity of the bullets is in my\nreference frame. 
This is an {\\em active} transform: we have two\ndistinct physical realities, one in the spaceship when it was at rest\nwith respect to me, and one in the spaceship when moving. And both\nthese realities, by construction, look the same to my friend in the\nspaceship.\n\nSuppose, for example, he sees the bullets at rest in his spaceship;\nthey have a four-velocity of $\\vvec{1}{0}$, and my friend says to\nhimself: ``I see bullets with a four velocity of $\\vvec{1}{0}$, and I\nknow what that means. The bullets are at rest. What are the bullets'\nfour velocities in Robin's reference frame?\". This is an {\\em active}\ntransform:\n\n\\begin{equation*}\n\\twomat{\\gamma}{\\gamma v}{\\gamma v}{\\gamma}\\vvec{1}{0}=\\vvec{\\gamma}{\\gamma v}\n\\end{equation*}\n\n(we again suppose that the spaceship moves at speed $v>0$ from left to\nright). So he sees a four velocity of $\\vvec{1}{0}$ and I see\n$\\vvec{\\gamma}{\\gamma v}$, that is, with a positive speed: the bullets\nmove from left to right (with the spaceship). 
The R idiom would be:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (B <- boost(as.3vel(c(0.8,0,0)))) # 0.8c left to right\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.666667 -1.333333 0 0\nx -1.333333 1.666667 0 0\ny 0.000000 0.000000 1 0\nz 0.000000 0.000000 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> solve(B) %*% c(1,0,0,0)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 1.666667\nx 1.333333\ny 0.000000\nz 0.000000\n\\end{Soutput}\n\\end{Schunk}\n\n\\section{Successive Lorentz transforms}\n\nCoordinate transformation is effected by standard matrix\n multiplication; thus composition of two Lorentz transforms is also\n ordinary matrix multiplication:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(c(0.3,-0.4,+0.8))\n> v <- as.3vel(c(0.4,+0.2,-0.1))\n> L <- boost(u) %*% boost(v)\n> L\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 3.256577 -2.2327055 0.5419596 -2.0800479\nx -1.437147 1.6996791 -0.0237489 0.4194255\ny 1.091131 -0.7581795 1.1190282 -0.6029155\nz -2.519789 1.5878378 -0.2023170 2.1879612\n\\end{Soutput}\n\\end{Schunk}\n\nBut observe that the resulting transform is not a pure boost, as the\nspatial components are not symmetrical. We may decompose the matrix\nproduct $L$ into a pure boost composed with an orthogonal\nmatrix, which represents a coordinate rotation. 
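The need for the rotational factor can be demonstrated without the package: a pure boost is a symmetric matrix, but the product of two boosts along different axes is not symmetric. A standalone Python sketch (using the general boost formula $B_{00}=\gamma$, $B_{0i}=-\gamma v_i$, $B_{ij}=\delta_{ij}+(\gamma-1)v_iv_j/v^2$ with $c=1$; illustrative only):

```python
import math

def boost(vx, vy, vz):
    """Passive Lorentz boost for velocity (vx, vy, vz), c = 1 (4x4 nested list)."""
    v2 = vx * vx + vy * vy + vz * vz
    g = 1.0 / math.sqrt(1.0 - v2)
    v = [vx, vy, vz]
    B = [[g, -g * vx, -g * vy, -g * vz]]
    for i in range(3):
        row = [-g * v[i]]
        for j in range(3):
            row.append((1.0 if i == j else 0.0) + (g - 1.0) * v[i] * v[j] / v2)
        B.append(row)
    return B

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def asymmetry(M):
    """Largest difference between M and its transpose."""
    return max(abs(M[i][j] - M[j][i]) for i in range(4) for j in range(4))

print(asymmetry(boost(0.5, 0, 0)))              # 0.0: a single boost is symmetric
L = matmul(boost(0.5, 0, 0), boost(0, 0.5, 0))  # boosts along different axes
print(asymmetry(L) > 1e-6)                      # True: L is not a pure boost
```

The residual orthogonal factor is the Thomas--Wigner rotation that `orthog()` extracts below.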
The R idiom is\n{\\tt pureboost()} for the pure boost component, and {\\tt orthog()}\nfor the rotation:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> (P <- pureboost(L)) # pure boost\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 3.2565770 -2.2327055 0.5419596 -2.0800479\nx -2.2327055 2.1711227 -0.2842745 1.0910491\ny 0.5419596 -0.2842745 1.0690039 -0.2648377\nz -2.0800479 1.0910491 -0.2648377 2.0164504\n\\end{Soutput}\n\\begin{Sinput}\n> P - t(P) # check for symmetry\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 0 0 0 0\nx 0 0 0 0\ny 0 0 0 0\nz 0 0 0 0\n\\end{Soutput}\n\\end{Schunk}\n\n\nNow we compute the rotation:\n\\begin{Schunk}\n\\begin{Sinput}\n> (U <- orthog(L)) # rotation matrix\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.000000e+00 -1.332268e-14 3.219647e-15 -1.509903e-14\nx -1.054712e-14 9.458514e-01 1.592328e-01 -2.828604e-01\ny 8.659740e-15 -1.858476e-01 9.801022e-01 -6.971587e-02\nz -1.953993e-14 2.661311e-01 1.185098e-01 9.566241e-01\n\\end{Soutput}\n\\begin{Sinput}\n> U[2:4,2:4] # inspect the spatial components\n\\end{Sinput}\n\\begin{Soutput}\n x y z\nx 0.9458514 0.1592328 -0.28286043\ny -0.1858476 0.9801022 -0.06971587\nz 0.2661311 0.1185098 0.95662410\n\\end{Soutput}\n\\begin{Sinput}\n> round(crossprod(U) - diag(4),10) # check for orthogonality\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 0 0 0 0\nx 0 0 0 0\ny 0 0 0 0\nz 0 0 0 0\n\\end{Soutput}\n\\begin{Sinput}\n> ## zero to within numerical uncertainty\n\\end{Sinput}\n\\end{Schunk}\n\n\\section[Units in which c is not 1]{Units in which ${\\mathbf c\\neq 1}$} \n\nThe preceding material used units in which $c=1$. Here I show how the\npackage deals with units such as SI in which $c=299792458\\neq 1$. 
For\nobvious reasons we cannot have a function called {\\tt c()} so the\npackage gets and sets the speed of light with function {\\tt sol()}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(299792458)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 299792458\n\\end{Soutput}\n\\begin{Sinput}\n> sol()\n\\end{Sinput}\n\\begin{Soutput}\n[1] 299792458\n\\end{Soutput}\n\\end{Schunk}\n\nThe speed of light is now~$299792458$ until re-set by {\\tt sol()} (an\nempty argument queries the speed of light). We now consider speeds\nwhich are fast by terrestrial standards but involve only a small\nrelativistic correction to the Galilean result:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(c(100,200,300))\n> as.4vel(u)\n\\end{Sinput}\n\\begin{Soutput}\nA vector of four-velocities (speed of light = 299792458)\n t x y z\n[1,] 1 100 200 300\n\\end{Soutput}\n\\end{Schunk}\n\nThe gamma correction term $\\gamma$ is only very slightly larger\nthan~$1$ and indeed R's default print method suppresses the\ndifference:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> gam(u)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\end{Schunk}\n\nHowever, we can display more significant figures by subtracting one:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> gam(u)-1\n\\end{Sinput}\n\\begin{Soutput}\n[1] 7.789325e-13\n\\end{Soutput}\n\\end{Schunk}\n\nor alternatively we can use the {\\tt gamm1()} function which\ncalculates $\\gamma-1$ more accurately for speeds $\\ll c$:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> gamm1(u)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 7.78855e-13\n\\end{Soutput}\n\\end{Schunk}\n\nThe Lorentz boost is again calculated by the {\\tt boost()} function:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> boost(u)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 -1.112650e-15 -2.22530e-15 -3.337950e-15\nx -100 1.000000e+00 1.11265e-13 1.668975e-13\ny -200 1.112650e-13 1.00000e+00 3.337950e-13\nz -300 1.668975e-13 3.33795e-13 1.000000e+00\n\\end{Soutput}\n\\end{Schunk}\n\nThe boost matrix is not symmetrical, even 
though it is a pure boost,\nbecause $c\\neq 1$. \n\n\nNote how the transform is essentially the Galilean\nresult, which is discussed below.\n\n\\subsection{Changing units}\n\nOften we have a four-vector in SI units and wish to express this in natural units.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(299792458)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 299792458\n\\end{Soutput}\n\\begin{Sinput}\n> disp <- c(1,1,0,0)\n\\end{Sinput}\n\\end{Schunk}\n\nIf we interpret {\\tt disp} as a four-displacement, it corresponds to\nmoving 1 metre along the x-axis and waiting for one second. To\nconvert this to natural units we multiply by the passive\ntransformation matrix given by {\\tt ptm()}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> ptm(to_natural=TRUE) %*% disp\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 299792458\nx 1\ny 0\nz 0\n\\end{Soutput}\n\\end{Schunk}\n\nIn the above, see how the same vector is expressed in natural units in\nwhich the speed of light is equal to 1: the unit of time is about\n$3\\times 10^{-9}$ seconds and the unit of distance remains the metre.\nAlternatively, we might decide to keep the unit of time equal to one\nsecond, and use a unit of distance equal to 299792458 metres which\nagain ensures that~$c=1$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> ptm(to_natural=TRUE,change_time=FALSE) %*% disp\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 1.000000e+00\nx 3.335641e-09\ny 0.000000e+00\nz 0.000000e+00\n\\end{Soutput}\n\\end{Schunk}\n\nAs a further check, we can take two boost matrices corresponding to\nthe same coordinate transformation but expressed using different units\nof length and verify that their orthogonal component agrees:\n\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(1)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\begin{Sinput}\n> B1 <- boost((2:4)\/10)\n> orthog(B1)[2:4,2:4]\n\\end{Sinput}\n\\begin{Soutput}\n x y z\nx 0.9832336 0.09752166 0.15408208\ny -0.1020390 0.99454439 0.02166761\nz -0.1511284 -0.03702671 0.98782044\n\\end{Soutput}\n\\end{Schunk}\n\nNow 
we create {\\tt B2} which is the same physical object but using a\nlength scale of one-tenth of that used for {\\tt B1} (which requires that we\nmultiply the speed of light by a factor of 10):\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(10)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 10\n\\end{Soutput}\n\\begin{Sinput}\n> B2 <- boost(2:4)\n> orthog(B2)[2:4,2:4]\n\\end{Sinput}\n\\begin{Soutput}\n x y z\nx 0.9832336 0.09752166 0.15408208\ny -0.1020390 0.99454439 0.02166761\nz -0.1511284 -0.03702671 0.98782044\n\\end{Soutput}\n\\end{Schunk}\n\nso the two matrices agree, as expected.\n\n\n\\section{Infinite speed of light}\n\nIn the previous section we considered speeds that were small compared\nwith the speed of light; here we will consider the classical limit\nof infinite $c$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(Inf)\n\\end{Sinput}\n\\begin{Soutput}\n[1] Inf\n\\end{Soutput}\n\\end{Schunk}\n\nThen the familiar parallelogram law operates:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(1:3)\n> v <- as.3vel(c(-6,8,3))\n> u+v\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = Inf)\n x y z\n[1,] -5 10 6\n\\end{Soutput}\n\\begin{Sinput}\n> v+u\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = Inf)\n x y z\n[1,] -5 10 6\n\\end{Soutput}\n\\end{Schunk}\n\nAbove we see that composition of velocities is commutative, unlike the\nrelativistic case. 
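The parallelogram law above is the $c\to\infty$ limit of the relativistic rule; for collinear velocities the relativistic sum is $(u+v)/(1+uv/c^2)$, which tends to $u+v$ as $c$ grows. A quick standalone Python check (illustrative only, not the package):

```python
def add_collinear(u, v, c):
    """Relativistic composition of collinear velocities u and v."""
    return (u + v) / (1.0 + u * v / c ** 2)

u, v = 1.0, -6.0   # x-components of the velocities used above
for c in (10.0, 1e3, 1e6, float("inf")):
    print(c, add_collinear(u, v, c))
# as c -> inf the result approaches the Galilean sum u + v = -5
```

With `c = float("inf")` the correction term vanishes exactly and the Galilean parallelogram law is recovered, mirroring `sol(Inf)` in the package.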
The boost matrix is instructive:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> boost(u)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 0 0 0\nx -1 1 0 0\ny -2 0 1 0\nz -3 0 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> boost(u+v)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 0 0 0\nx 5 1 0 0\ny -10 0 1 0\nz -6 0 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> boost(u) %*% boost(v)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 0 0 0\nx 5 1 0 0\ny -10 0 1 0\nz -6 0 0 1\n\\end{Soutput}\n\\end{Schunk}\n\nAbove, see how the boost matrix for the composed velocity of $u+v$\ndoes not have any rotational component, unlike the relativistic case\n[recall that {\\tt boost()} gives a {\\em passive} transform, which is\nwhy the sign of the numbers in the first column is changed]. With an\ninfinite speed of light, even ``large'' speeds have zero relativistic\ncorrection:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> gamm1(1e100)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 0\n\\end{Soutput}\n\\end{Schunk}\n\nFunction {\\tt rboost()} returns a random Lorentz transform matrix,\nwhich is in general a combination of a pure Lorentz boost and an\northogonal rotation. 
With an infinite speed of light, a speed must be supplied\nexplicitly:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> set.seed(0)\n> options(digits=3)\n> (B <- rboost(1)) # random boost, speed 1\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1.000 0.000 0.0000 0.000\n[2,] -0.411 0.213 -0.9402 0.266\n[3,] -0.279 -0.917 -0.0989 0.385\n[4,] 0.868 -0.336 -0.3260 -0.884\n\\end{Soutput}\n\\end{Schunk}\n\nWe can decompose {\\tt B} into a pure boost and an orthogonal\ntransformation:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> orthog(B)\n\\end{Sinput}\n\\begin{Soutput}\n [,1] [,2] [,3] [,4]\n[1,] 1 0.000 0.0000 0.000\n[2,] 0 0.213 -0.9402 0.266\n[3,] 0 -0.917 -0.0989 0.385\n[4,] 0 -0.336 -0.3260 -0.884\n\\end{Soutput}\n\\begin{Sinput}\n> pureboost(B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1.000 0 0 0\n[2,] -0.123 1 0 0\n[3,] 0.131 0 1 0\n[4,] -0.984 0 0 1\n\\end{Soutput}\n\\end{Schunk}\n\n\nBoost matrices can be applied to any four-vector. Here I show how\npure spatial displacements transform with an infinite light speed.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (u <- as.3vel(c(10,0,0))) # velocity of 10, parallel to x axis\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = Inf)\n x y z\n[1,] 10 0 0\n\\end{Soutput}\n\\begin{Sinput}\n> (B <- boost(u))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 0 0 0\nx -10 1 0 0\ny 0 0 1 0\nz 0 0 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> d <- c(0,1,0,0) # displacement of distance one, parallel to the x-axis\n> B %*% d\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 0\nx 1\ny 0\nz 0\n\\end{Soutput}\n\\end{Schunk}\n\nAbove we see that a spatial displacement is the same for both\nobservers. 
We can similarly apply a boost to a temporal displacement:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> d <- c(1,0,0,0) # displacement of one unit of time, no spatial component\n> B %*% d\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 1\nx -10\ny 0\nz 0\n\\end{Soutput}\n\\end{Schunk}\n\nAbove we see the result expected from classical mechanics.\n\n\n\n\\section{Vectorization}\n\nHere I discuss vectorized operations (to avoid confusion between boost\nmatrices and their transposes we will use $c=10$). The issue is\ndifficult because a Lorentz boost is conceptually a matrix product of\na $4\\times 4$ matrix with a vector of four elements:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(10)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 10\n\\end{Soutput}\n\\begin{Sinput}\n> u <- as.3vel(c(5,-6,4))\n> (U <- as.4vel(u))\n\\end{Sinput}\n\\begin{Soutput}\nA vector of four-velocities (speed of light = 10)\n t x y z\n[1,] 2.09 10.4 -12.5 8.34\n\\end{Soutput}\n\\begin{Sinput}\n> B <- boost(U)\n> B %*% t(U)\n\\end{Sinput}\n\\begin{Soutput}\n [,1]\nt 1.00e+00\nx -1.33e-15\ny 4.44e-16\nz 1.78e-15\n\\end{Soutput}\n\\end{Schunk}\n\n(note that the result is the four-velocity of an object at rest, as\nexpected, for we use passive transforms by default). However, things\nare different if we wish to consider many four-vectors in one R\nobject. A vector ${\\mathbf V}$ of four-velocities is a matrix: each\n{\\em row} of ${\\mathbf V}$ is a four-velocity. In the package we\nrepresent this with objects of class {\\tt 4vel}. 
Because a vector is\ntreated (almost) as a one-column matrix in R, and the four-velocities\nare rows, we need to take a transpose in some sense.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- 1:7 # speed in the x-direction [c=10]\n> jj <- cbind(gam(u),gam(u)*u,0,0)\n> (U <- as.4vel(jj))\n\\end{Sinput}\n\\begin{Soutput}\nA vector of four-velocities (speed of light = 10)\n t x y z\n[1,] 1.01 1.01 0 0\n[2,] 1.02 2.04 0 0\n[3,] 1.05 3.14 0 0\n[4,] 1.09 4.36 0 0\n[5,] 1.15 5.77 0 0\n[6,] 1.25 7.50 0 0\n[7,] 1.40 9.80 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nNow a boost, also in the x-direction:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (B <- boost(as.3vel(c(6,0,0)))) # 60% of light speed\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.25 -0.075 0 0\nx -7.50 1.250 0 0\ny 0.00 0.000 1 0\nz 0.00 0.000 0 1\n\\end{Soutput}\n\\end{Schunk}\n\nNote the asymmetry of $B$, in this case reflecting the speed of light\nbeing 10 (but note that boost matrices are not always symmetrical,\neven if $c=1$).\n\nTo effect a {\\em passive} boost we need to multiply each row of $U$ by\nthe transpose of the boost matrix $B$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> U %*% t(B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1.18 -6.28 0 0\n[2,] 1.12 -5.10 0 0\n[3,] 1.07 -3.93 0 0\n[4,] 1.04 -2.73 0 0\n[5,] 1.01 -1.44 0 0\n[6,] 1.00 0.00 0 0\n[7,] 1.02 1.75 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nwe can verify that the above is at least plausible:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> is.consistent.4vel(U %*% t(B))\n\\end{Sinput}\n\\begin{Soutput}\n[1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE\n\\end{Soutput}\n\\end{Schunk}\n\nthe above shows that the four velocities $U$, as observed by an\nobserver corresponding to boost $B$, satisfy $U^iU_i=-c^2$. 
Anyway,\nin this context we really ought to use {\\tt tcrossprod()}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> tcrossprod(U,B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1.18 -6.28 0 0\n[2,] 1.12 -5.10 0 0\n[3,] 1.07 -3.93 0 0\n[4,] 1.04 -2.73 0 0\n[5,] 1.01 -1.44 0 0\n[6,] 1.00 0.00 0 0\n[7,] 1.02 1.75 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nwhich would be preferable (because this idiom does not require one to\ntake a transpose) although the speed increase is unlikely to matter\nmuch because $B$ is only $4\\times 4$.\n\nThe above transforms were passive: we have some four-vectors measured\nin my rest frame, and we want to see what these four-vectors are as\nmeasured by my friend, who is moving in the positive x direction at\n60\\% of the speed of light (remember that $c=10$). See how the\nx-component of the transformed four-velocity is negative, because in\nmy friend's rest frame, the four velocities are pointing backwards.\n\nTo effect an {\\em active} transform we need to take the matrix inverse\nof $B$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> solve(B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.25 0.075 0 0\nx 7.50 1.250 0 0\ny 0.00 0.000 1 0\nz 0.00 0.000 0 1\n\\end{Soutput}\n\\end{Schunk}\n\nand then\n\n\\begin{Schunk}\n\\begin{Sinput}\n> tcrossprod(U,solve(B))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1.33 8.79 0 0\n[2,] 1.43 10.21 0 0\n[3,] 1.55 11.79 0 0\n[4,] 1.69 13.64 0 0\n[5,] 1.88 15.88 0 0\n[6,] 2.13 18.75 0 0\n[7,] 2.49 22.75 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nIn the above, note how the positive x-component of the four-velocity\nis increased because we have actively boosted it. We had better check\nthe result for consistency:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> is.consistent.4vel(tcrossprod(U,solve(B)))\n\\end{Sinput}\n\\begin{Soutput}\n[1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE\n\\end{Soutput}\n\\end{Schunk}\n\n\\section{Multiple boosts}\n\n\nIf we are considering multiple boosts, it is important to put them in\nthe correct order. 
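The reason order matters is the transpose identity $(B_2B_1x)^T=x^TB_1^TB_2^T$: with four-vectors stored as rows, successive boosts multiply on the right, transposed and in reversed order. A standalone Python illustration with small stand-in matrices (not the package, and not physically meaningful matrices):

```python
def matmul(A, B):
    """Product of two nested-list matrices with compatible shapes."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

B1 = [[2, 1, 0, 0], [1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # stand-in matrices
B2 = [[1, 0, 3, 0], [0, 1, 0, 0], [3, 0, 1, 0], [0, 0, 0, 1]]
U = [[1, 2, 3, 4], [5, 6, 7, 8]]   # two four-vectors stored as rows

lhs = transpose(matmul(matmul(B2, B1), transpose(U)))  # boost columns, transpose back
rhs = matmul(U, transpose(matmul(B2, B1)))             # rows: right-multiply by t(B2 B1)
also = matmul(matmul(U, transpose(B1)), transpose(B2)) # reversed order, each transposed
print(lhs == rhs == also)   # True
```

All three forms agree exactly (integer arithmetic), which is the bookkeeping behind the equivalent R constructions shown next.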
First we will do some passive boosts.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(100)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 100\n\\end{Soutput}\n\\begin{Sinput}\n> B1 <- boost(r3vel(1))\n> B2 <- boost(r3vel(1))\n> (U <- r4vel(5))\n\\end{Sinput}\n\\begin{Soutput}\nA vector of four-velocities (speed of light = 100)\n t x y z\n[1,] 1.99 -162.4 46.3 34.32\n[2,] 2.58 149.0 185.0 -3.07\n[3,] 1.75 -110.4 -18.4 -90.19\n[4,] 1.70 55.1 -88.3 90.15\n[5,] 2.62 -205.6 98.2 81.53\n\\end{Soutput}\n\\end{Schunk}\n\nSuccessive boosts are effected by matrix multiplication; there are at\nleast four equivalent R constructions:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> U %*% t(B2 %*% B1)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 11.09 -601 -844 -383\n[2,] 3.00 -70 -166 -218\n[3,] 13.98 -639 -1087 -594\n[4,] 7.75 -239 -697 -220\n[5,] 12.23 -706 -911 -394\n\\end{Soutput}\n\\begin{Sinput}\n> U %*% t(B1) %*% t(B2)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 11.09 -601 -844 -383\n[2,] 3.00 -70 -166 -218\n[3,] 13.98 -639 -1087 -594\n[4,] 7.75 -239 -697 -220\n[5,] 12.23 -706 -911 -394\n\\end{Soutput}\n\\begin{Sinput}\n> tcrossprod(U, B2 %*% B1)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 11.09 -601 -844 -383\n[2,] 3.00 -70 -166 -218\n[3,] 13.98 -639 -1087 -594\n[4,] 7.75 -239 -697 -220\n[5,] 12.23 -706 -911 -394\n\\end{Soutput}\n\\begin{Sinput}\n> U %>% tcrossprod(B2 %*% B1)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 11.09 -601 -844 -383\n[2,] 3.00 -70 -166 -218\n[3,] 13.98 -639 -1087 -594\n[4,] 7.75 -239 -697 -220\n[5,] 12.23 -706 -911 -394\n\\end{Soutput}\n\\end{Schunk}\n\n(in the above, note that the result is the same in each case).\n\n\\subsection*{A warning}\n\nIt is easy to misapply matrix multiplication in this context. 
Note\ncarefully that the following natural idiom is {\\bf incorrect}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> U %*% B\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 1220 -203.2 46.3 34.32\n[2,] -1115 186.1 185.0 -3.07\n[3,] 830 -138.1 -18.4 -90.19\n[4,] -411 68.7 -88.3 90.15\n[5,] 1545 -257.1 98.2 81.53\n\\end{Soutput}\n\\begin{Sinput}\n> ## The above idiom is incorrect. See\n> ## https:\/\/www.youtube.com\/watch?v=m7-bMBuVmHo&t=1s\n> ## (in particular @1:08) for a technical explanation of why \n> ## this is a Very Bad Idea (tm).\n\\end{Sinput}\n\\end{Schunk}\n\nIt is not clear to me that the idiom above has any meaning at all.\n\n\\section{The stress-energy tensor}\n\nThe stress-energy tensor (sometimes the energy-momentum tensor) is a\ngeneralization and combination of the classical concepts of density,\nenergy flux, and the classical stress tensor~\\citep{schutz1985}. It\nis a contravariant tensor of rank two, usually represented as a\nsymmetric $4\\times 4$ matrix. The {\\tt lorentz} package includes\nfunctionality for applying Lorentz transforms to the stress energy\ntensor.\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(1) # revert to natural units \n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\begin{Sinput}\n> D <- dust(1) # Dust is the simplest nontrivial SET, with \n> D # only one nonzero component\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1 0 0 0\nx 0 0 0 0\ny 0 0 0 0\nz 0 0 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nThe stress-energy tensor is usually written with two upstairs\n(contravariant) indices, as in~$T^{\\alpha\\beta}$; it may be\ntransformed using the {\\tt transform\\_uu()} function:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> B <- boost(as.3vel(c(0.0,0.8,0.0)))\n> transform_uu(D,B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 2.78 0 -2.22 0\nx 0.00 0 0.00 0\ny -2.22 0 1.78 0\nz 0.00 0 0.00 0\n\\end{Soutput}\n\\end{Schunk}\n\nIn this reference frame, the dust is not at rest: the stress-energy\ntensor has components corresponding to 
nonzero pressure and momentum\ntransfer, and the $[t,t]$ component is greater, at 2.78, than its rest\nvalue of 1. Note that the $[t,y]$ component is negative as we use\npassive transforms. As a second example we take the stress-energy\ntensor of a photon gas; for downstairs indices the package provides\n{\\tt transform\\_dd()}, used in the consistency check below:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> pg <- photongas(3)\n> pg\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 3 0 0 0\nx 0 1 0 0\ny 0 0 1 0\nz 0 0 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> transform_uu(pg,B)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 10.11 0 -8.89 0\nx 0.00 1 0.00 0\ny -8.89 0 8.11 0\nz 0.00 0 0.00 1\n\\end{Soutput}\n\\end{Schunk}\n\nagain we see that the $[t,t]$ component is larger than its rest value,\nand we see nonzero off-diagonal components which correspond to the\ndynamical behaviour. As a consistency check we can verify that this\nis the same as transforming the SET with downstairs indices, using the\n{\\tt lower()} and {\\tt raise()} functions:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> raise(transform_dd(lower(pg),lower(B)))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 10.11 0 -8.89 0\nx 0.00 1 0.00 0\ny -8.89 0 8.11 0\nz 0.00 0 0.00 1\n\\end{Soutput}\n\\begin{Sinput}\n> raise(transform_dd(lower(pg),lower(B))) - transform_uu(pg,B) #zero to numerical precision\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 0 0 0 0\nx 0 0 0 0\ny 0 0 0 0\nz 0 0 0 0\n\\end{Soutput}\n\\end{Schunk}\n\nOne of the calls to {\\tt lower()} is redundant; for a photon gas,\nraising or lowering both indices does not change the components as the\nMinkowski metric is symmetric and orthogonal.\n\n\\subsection{Successive boosts}\n \nSuccessive boosts are represented as ordinary matrix multiplication.\nAgain the {\\tt magrittr} package can be used for more readable idiom.\n \n\\begin{Schunk}\n\\begin{Sinput}\n> B1 <- boost(as.3vel(c(0.5,-0.4,0.6)))\n> B2 <- boost(as.3vel(c(0.1,-0.1,0.3)))\n> pf <- perfectfluid(4,1)\n> 
pf\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 4 0 0 0\nx 0 1 0 0\ny 0 0 1 0\nz 0 0 0 1\n\\end{Soutput}\n\\begin{Sinput}\n> pf\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 38.4 -18.17 15.24 -28.2\nx -18.2 9.38 -7.03 13.0\ny 15.2 -7.03 6.89 -10.9\nz -28.2 12.98 -10.89 21.1\n\\end{Soutput}\n\\begin{Sinput}\n> pf\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 38.4 -18.17 15.24 -28.2\nx -18.2 9.38 -7.03 13.0\ny 15.2 -7.03 6.89 -10.9\nz -28.2 12.98 -10.89 21.1\n\\end{Soutput}\n\\end{Schunk}\n\nAgain as a consistency check, we may verify that transforming\ndownstairs indices gives the same result:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> lower(pf)\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 38.4 -18.17 15.24 -28.2\nx -18.2 9.38 -7.03 13.0\ny 15.2 -7.03 6.89 -10.9\nz -28.2 12.98 -10.89 21.1\n\\end{Soutput}\n\\end{Schunk}\n\n(note that the matrix representation of the Lorentz transforms\nrequires that the order of multiplication be reversed for successive\ncovariant transforms, so {\\tt B1} and {\\tt B2} must be swapped).\n\n\\subsection{Speed of light and the stress-energy tensor}\n\nHere I will perform another consistency check, this time with non-unit\nspeed of light, for a perfect fluid:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(10)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 10\n\\end{Soutput}\n\\begin{Sinput}\n> pf_rest <- perfectfluid(1,4)\n> pf_rest\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.04 0.00 0.00 0.00\nx 0.00 0.04 0.00 0.00\ny 0.00 0.00 0.04 0.00\nz 0.00 0.00 0.00 0.04\n\\end{Soutput}\n\\end{Schunk}\n\nThus {\\tt pf\\_rest} is the stress energy for a perfect fluid at rest\nin a particular frame $F$. 
We may now consider the same perfect\nfluid, but moving with a three velocity of~$(3,4,5)'$: with respect to\n$F$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(3:5)\n> pf_moving <- perfectfluid(1,4,u)\n> pf_moving\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 2.08 6.24 8.32 10.4\nx 6.24 18.76 24.96 31.2\ny 8.32 24.96 33.32 41.6\nz 10.40 31.20 41.60 52.0\n\\end{Soutput}\n\\end{Schunk}\n\nThe consistency check is to verify that transforming to a frame in\nwhich the fluid is at rest will result in a stress-energy tensor that\nmatches {\\tt pf\\_rest}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> transform_uu(perfectfluid(1,4,u),boost(u))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nt 1.04e+00 -3.01e-16 -1.65e-15 9.04e-16\nx -1.33e-15 4.00e-02 -1.87e-15 -4.95e-15\ny -3.33e-15 7.18e-15 4.00e-02 -1.87e-15\nz -3.55e-15 9.87e-16 1.08e-14 4.00e-02\n\\end{Soutput}\n\\end{Schunk}\n\nthus showing agreement to within numerical precision.\n\n\\section{Photons}\n\\label{photonsection}\nIt is possible to define the four-momentum of photons by specifying\ntheir three-velocity and energy, and using {\\tt as.photon()}:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> sol(1)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\begin{Sinput}\n> (A <- as.photon(as.3vel(cbind(0.9,1:5\/40,5:1\/40))))\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 1 0.990 0.0275 0.1375\n[2,] 1 0.992 0.0551 0.1103\n[3,] 1 0.993 0.0828 0.0828\n[4,] 1 0.992 0.1103 0.0551\n[5,] 1 0.990 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n\nabove, $A$ is a vector of four-momentum of five photons, all of unit\nenergy, each with a null world line. They are all moving\napproximately parallel to the x-axis. We can check that this is\nindeed a null vector:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> inner4(A)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1.45e-16 2.56e-17 -5.55e-17 2.56e-17 1.45e-16\n\\end{Soutput}\n\\end{Schunk}\n\nshowing that the vectors are indeed null to numerical precision. 
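These null-vector checks can also be reproduced independently of the package. The sketch below is a standalone Python illustration (not the package's {\tt as.photon()} or {\tt inner4()}; it assumes natural units $c=1$ and metric signature $(+,-,-,-)$): it builds photon four-momenta from an energy and a direction of travel and confirms that they are null.

```python
import math

def photon_four_momentum(energy, direction):
    """Four-momentum (E, p_x, p_y, p_z) of a photon travelling along
    `direction` with the given energy, in units where c = 1."""
    norm = math.sqrt(sum(d * d for d in direction))
    return [energy] + [energy * d / norm for d in direction]

def minkowski_inner(p, q):
    """Minkowski inner product of two four-vectors, signature (+, -, -, -)."""
    return p[0] * q[0] - sum(a * b for a, b in zip(p[1:], q[1:]))

# Unit-energy photons moving roughly parallel to the x-axis, as in the text.
for vy, vz in [(0.025, 0.125), (0.05, 0.1), (0.125, 0.025)]:
    p = photon_four_momentum(1.0, (0.9, vy, vz))
    print(minkowski_inner(p, p))  # zero to numerical precision: null world line
```

For a photon the energy fixes the magnitude of the three-momentum, so nullity holds by construction; the check merely confirms it to floating-point accuracy.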
What\ndo these photons look like in a frame moving along the $x$-axis at\n$0.7c$?\n \n\\begin{Schunk}\n\\begin{Sinput}\n> tcrossprod(A,boost(as.3vel(c(0.7,0,0))))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 0.430 0.406 0.0275 0.1375\n[2,] 0.428 0.409 0.0551 0.1103\n[3,] 0.427 0.410 0.0828 0.0828\n[4,] 0.428 0.409 0.1103 0.0551\n[5,] 0.430 0.406 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n\nAbove, see how the photons have lost the majority of their energy due\nto redshifting. Blue shifting is easy to implement as either a\npassive transform:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> tcrossprod(A,boost(as.3vel(c(-0.7,0,0))))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 2.37 2.37 0.0275 0.1375\n[2,] 2.37 2.37 0.0551 0.1103\n[3,] 2.37 2.37 0.0828 0.0828\n[4,] 2.37 2.37 0.1103 0.0551\n[5,] 2.37 2.37 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n \nor an active transform:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> tcrossprod(A,solve(boost(as.3vel(c(0.7,0,0)))))\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\n[1,] 2.37 2.37 0.0275 0.1375\n[2,] 2.37 2.37 0.0551 0.1103\n[3,] 2.37 2.37 0.0828 0.0828\n[4,] 2.37 2.37 0.1103 0.0551\n[5,] 2.37 2.37 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n\ngiving the same result.\n\n\n\\subsection{Reflection in mirrors}\n\n\\citet{gjurchinovski2004} discusses reflection of light from a\nuniformly moving mirror and here I show how the {\\tt lorentz} package\ncan illustrate some of his insights. We are going to take the five\nphotons defined above and reflect them in an oblique mirror which is\nitself moving at half the speed of light along the $x$-axis. The\nfirst step is to define the mirror {\\tt m}, and the boost {\\tt B}\ncorresponding to its velocity:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> m <- c(1,1,1)\n> B <- boost(as.3vel(c(0.5,0,0)))\n\\end{Sinput}\n\\end{Schunk}\n\nAbove, the three-vector $m$ is parallel to the normal vector of the\nmirror and $B$ shows the Lorentz boost needed to bring it to rest. 
We\nare going to reflect these photons in this mirror. The R idiom for\nthe reflection uses a sequence of transforms. First,\ntransform the photons' four-momentum to a frame in which the mirror is\nat rest:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> A\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 1 0.990 0.0275 0.1375\n[2,] 1 0.992 0.0551 0.1103\n[3,] 1 0.993 0.0828 0.0828\n[4,] 1 0.992 0.1103 0.0551\n[5,] 1 0.990 0.1375 0.0275\n\\end{Soutput}\n\\begin{Sinput}\n> (A <- as.4mom(A\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0.583 0.566 0.0275 0.1375\n[2,] 0.582 0.569 0.0551 0.1103\n[3,] 0.581 0.569 0.0828 0.0828\n[4,] 0.582 0.569 0.1103 0.0551\n[5,] 0.583 0.566 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n\nAbove, see how the photons have lost energy because of a redshift (the\n{\\tt as.4mom()} function has no effect other than changing the column\nnames). Next, reflect the photons in the mirror (which is at rest):\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (A <- reflect(A,m))\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0.583 0.0786 -0.460 -0.350\n[2,] 0.582 0.0793 -0.434 -0.379\n[3,] 0.581 0.0795 -0.407 -0.407\n[4,] 0.582 0.0793 -0.379 -0.434\n[5,] 0.583 0.0786 -0.350 -0.460\n\\end{Soutput}\n\\end{Schunk}\n\nAbove, see how the reflected photons have a reduced $x$-component of\nmomentum, but have acquired a substantial $y$- and $z$-component.\nFinally, we transform back to the original reference frame. 
Observe\nthat this requires an {\\em active} transform which means that we need\nto use the matrix inverse of $B$:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (A <- as.4mom(A\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0.719 0.427 -0.460 -0.350\n[2,] 0.718 0.427 -0.434 -0.379\n[3,] 0.717 0.427 -0.407 -0.407\n[4,] 0.718 0.427 -0.379 -0.434\n[5,] 0.719 0.427 -0.350 -0.460\n\\end{Soutput}\n\\end{Schunk}\n\nThus in the original frame, the photons have lost about a quarter of\ntheir energy as a result of a Doppler effect: the mirror was receding\nfrom the source. The photons have imparted energy to the mirror as a\nresult of mechanical work. It is possible to carry out the same\noperations in one line:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> A <- as.photon(as.3vel(cbind(0.9,1:5\/40,5:1\/40)))\n> A\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0.719 0.427 -0.460 -0.350\n[2,] 0.718 0.427 -0.434 -0.379\n[3,] 0.717 0.427 -0.407 -0.407\n[4,] 0.718 0.427 -0.379 -0.434\n[5,] 0.719 0.427 -0.350 -0.460\n\\end{Soutput}\n\\end{Schunk}\n\n\\subsection{Disco ball}\n\nIt is easy to define a disco ball, which is a sphere covered in\nmirrors. 
For the purposes of exposition, we will use a rather shabby\nball with only 7 mirrors:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> dfun <- function(n){matrix(rnorm(n*3),ncol=3)\n> (disco <- dfun(7))\n\\end{Sinput}\n\\begin{Soutput}\n [,1] [,2] [,3]\n[1,] -0.8255 0.5638 -0.0246\n[2,] 0.4893 -0.0811 0.8683\n[3,] -0.0654 0.4519 0.8897\n[4,] -0.6690 -0.2310 -0.7065\n[5,] -0.7243 -0.6781 -0.1248\n[6,] -0.5078 -0.4407 0.7402\n[7,] 0.5224 -0.4929 0.6958\n\\end{Soutput}\n\\end{Schunk}\n\nThen we define a unit-energy photon moving parallel to the x-axis, and\nreflect it in the disco ball:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> p <- as.photon(c(1,0,0))\n> reflect(p,disco)\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\nx 1 -0.3630 0.9309 -0.0406\nx 1 0.5212 0.0794 -0.8497\nx 1 0.9914 0.0591 0.1164\nx 1 0.1049 -0.3091 -0.9452\nx 1 -0.0491 -0.9823 -0.1807\nx 1 0.4843 -0.4476 0.7518\nx 1 0.4542 0.5150 -0.7270\n\\end{Soutput}\n\\end{Schunk}\n\n(above, {\\tt p} is a photon moving along the x-axis; standard R\nrecycling rules imply that we effectively have one photon per mirror\nin {\\tt disco}. See how the photons' energy is unchanged by the\nreflection). We might ask what percentage of photons are reflected\ntowards the source; but to do this we would need a somewhat more\nstylish disco ball, here with 1000 mirrors:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> table(reflect(p,dfun(1000))[,2]>0) # should be TRUE with probability sqrt(0.5)\n\\end{Sinput}\n\\begin{Soutput}\nFALSE TRUE \n 294 706 \n\\end{Soutput}\n\\end{Schunk}\n\n(compare the expected value of $1000\/\\sqrt{2}\\simeq 707$). 
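The $1/\sqrt{2}$ expectation is easy to check with a short Monte Carlo outside R as well. The Python sketch below is a standalone re-implementation of the classical reflection formula $\mathbf{d}' = \mathbf{d} - 2(\mathbf{d}\cdot\hat{\mathbf{n}})\hat{\mathbf{n}}$ (not the package's {\tt reflect()}): for an isotropic mirror normal, the $x$-component of the reflected direction is $1-2n_x^2$, which is positive precisely when $|n_x|<1/\sqrt{2}$; since $n_x$ is uniform on $[-1,1]$ for a uniform direction on the sphere, this happens with probability $1/\sqrt{2}$.

```python
import math
import random

def reflect_direction(d, n):
    """Reflect direction d in a mirror whose (not necessarily unit) normal
    is n: d' = d - 2 (d . n) n / |n|^2."""
    n2 = sum(c * c for c in n)
    dn = sum(a * b for a, b in zip(d, n))
    return [a - 2.0 * dn * b / n2 for a, b in zip(d, n)]

random.seed(0)
d = (1.0, 0.0, 0.0)                    # photon moving along the x-axis
trials, forward = 100_000, 0
for _ in range(trials):
    n = [random.gauss(0.0, 1.0) for _ in range(3)]   # isotropic mirror normal
    if reflect_direction(d, n)[0] > 0:
        forward += 1
print(forward / trials)                # close to 1/sqrt(2) ~ 0.7071
```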
But it is\nperhaps more fun to consider a relativistic disco in which the mirror\nball moves at 0.5c:\n\n\n\\begin{Schunk}\n\\begin{Sinput}\n> B <- boost(as.3vel(c(0.5,0,0)))\n> p\n\\end{Sinput}\n\\begin{Soutput}\n t x y z\nx 0.546 0.0913 0.5375 -0.0235\nx 0.840 0.6808 0.0458 -0.4906\nx 0.997 0.9943 0.0341 0.0672\nx 0.702 0.4033 -0.1785 -0.5457\nx 0.650 0.3006 -0.5671 -0.1043\nx 0.828 0.6562 -0.2584 0.4340\nx 0.818 0.6361 0.2973 -0.4197\n\\end{Soutput}\n\\end{Schunk}\n\nAbove, note the high energy of the photon in the third row. This is\nbecause the third mirror of {\\tt disco} is such that the photon hits\nit with grazing incidence; this means that the receding of the mirror\nis almost immaterial. Note further that a spinning disco ball would\ngive the same (instantaneous) results.\n\n\\subsection{Mirrors and rotation-boost coupling}\n\nConsider the following situation: we take a bunch of photons which in\na certain reference frame are all moving (almost) parallel to the\n$x$-axis. Then we reflect the photons from a mirror which is moving\nwith a composition of pure boosts, and examine the reflected light in\ntheir original reference frame. 
The R idiom for this would be:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(1)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\begin{Sinput}\n> light_start <- as.photon(as.3vel(cbind(0.9,1:5\/40,5:1\/40)))\n> m <- c(1,0,0) # mirror normal to x-axis\n> B1 <- boost(as.3vel(c(-0.5, 0.1, 0.0)))\n> B2 <- boost(as.3vel(c( 0.2, 0.0, 0.0)))\n> B3 <- boost(as.3vel(c( 0.0, 0.0, 0.6)))\n> B <- B1\n> light <- light_start\n> light <- reflect(light,m)\n> light <- as.4mom(light\n> light\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 2.30 -2.11 0.119 0.920\n[2,] 2.31 -2.13 0.147 0.897\n[3,] 2.32 -2.14 0.175 0.874\n[4,] 2.32 -2.15 0.203 0.849\n[5,] 2.33 -2.16 0.230 0.824\n\\end{Soutput}\n\\end{Schunk}\n\nSee how the photons have picked up momentum in the $y$- and $z$-\ndirection, even though the mirror is oriented perpendicular to the\n$x$-axis (in its own frame). Again it is arguably preferable to use\npipes:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> light_start\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 2.30 -2.11 0.119 0.920\n[2,] 2.31 -2.13 0.147 0.897\n[3,] 2.32 -2.14 0.175 0.874\n[4,] 2.32 -2.15 0.203 0.849\n[5,] 2.33 -2.16 0.230 0.824\n\\end{Soutput}\n\\end{Schunk}\n\nCompare when the speed of light is infinite:\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(Inf)\n\\end{Sinput}\n\\begin{Soutput}\n[1] Inf\n\\end{Soutput}\n\\begin{Sinput}\n> light_start <- as.photon(as.3vel(cbind(0.9,1:5\/40,5:1\/40)))\n> B1 <- boost(as.3vel(c(-0.5, 0.1, 0.0)))\n> B2 <- boost(as.3vel(c( 0.2, 0.0, 0.0)))\n> B3 <- boost(as.3vel(c( 0.0, 0.0, 0.6)))\n> B <- B1\n> light_start\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0 0.990 0.0275 0.1375\n[2,] 0 0.992 0.0551 0.1103\n[3,] 0 0.993 0.0828 0.0828\n[4,] 0 0.992 0.1103 0.0551\n[5,] 0 0.990 0.1375 0.0275\n\\end{Soutput}\n\\begin{Sinput}\n> light_start\n\\end{Sinput}\n\\begin{Soutput}\n E p_x p_y p_z\n[1,] 0 -0.990 0.0275 0.1375\n[2,] 0 -0.992 0.0551 0.1103\n[3,] 0 -0.993 0.0828 0.0828\n[4,] 0 -0.992 0.1103 0.0551\n[5,] 0 
-0.990 0.1375 0.0275\n\\end{Soutput}\n\\end{Schunk}\n\nNote that, in the infinite light speed case, the energy of the photons\nis zero (photons have zero rest mass); further observe that in this\nclassical case, the effect of the mirror is to multiply the $x$-momentum\nby $-1$ and leave the other components unchanged, as one might expect\nfrom a mirror perpendicular to $(1,0,0)$.\n\n\\section{Three-velocities}\n\nIn contrast to four-velocities, three-velocities do not form a group\nunder composition as the velocity addition law is not\nassociative~\\citep{ungar2006}. Instead, three-velocity composition\nhas an algebraic structure known as a {\\em gyrogroup} (this\nobservation was the original motivation for the package).\n\\citeauthor{ungar2006} shows that the velocity addition law for\nthree-velocities is\n\n\\begin{equation}\n\\mathbf u\\oplus\\mathbf v=\n\\frac{1}{1+\\mathbf u\\cdot\\mathbf v}\n\\left\\{\n\\mathbf u + \\frac{\\mathbf v}{\\gamma_\\mathbf u} + \\frac{\\gamma_\\mathbf u\n\\left(\\mathbf u\\cdot\\mathbf v\\right)\\mathbf u}{1+\\gamma_\\mathbf u}\n\\right\\}\n\\end{equation}\n \nwhere~$\\gamma_\\mathbf u=\\left(1-\\mathbf u\\cdot\\mathbf u\\right)^{-1\/2}$ and we are\nassuming~$c=1$. \\citeauthor{ungar2006} goes on to show that, in\ngeneral, $\\mathbf u\\oplus\\mathbf v\\neq\\mathbf v\\oplus\\mathbf u$\nand~$(\\mathbf u\\oplus\\mathbf v)\\oplus\\mathbf w\\neq\\mathbf u\\oplus(\\mathbf v\\oplus\\mathbf w)$. He also\ndefines the binary operator~$\\ominus$\nas~$\\mathbf u\\ominus\\mathbf v=\\mathbf u\\oplus\\left(-\\mathbf v\\right)$, and implicitly\ndefines~$\\ominus\\mathbf u\\oplus\\mathbf v$ to be~$\\left(-\\mathbf u\\right)\\oplus\\mathbf v$. 
If\nwe have\n\n\\begin{equation}\n\\gyr{u}{v}\\mathbf x=-\\left(\\mathbf u\\oplus\\mathbf v\\right)\\oplus\\left(\\mathbf u\\oplus\\left(\\mathbf v\\oplus\\mathbf x\\right)\\right)\n\\end{equation}\n\nthen\n\n\\begin{eqnarray}\n\\mathbf u\\oplus\\mathbf v &=& \\gyr{u}{v}\\left(\\mathbf v\\oplus\\mathbf u\\right)\\label{noncom}\\\\\n\\gyr{u}{v}\\mathbf x\\cdot\\gyr{u}{v}\\mathbf y &=& \\mathbf x\\cdot\\mathbf y\\label{doteq}\\\\\n\\gyr{u}{v}\\left(\\mathbf x\\oplus\\mathbf y\\right) &=& \\gyr{u}{v}\\mathbf x\\oplus\\gyr{u}{v}\\mathbf y\\\\\n\\left(\\gyr{u}{v}\\right)^{-1} &=& \\left(\\gyr{v}{u}\\right)\\label{gyrinv}\\\\\n\\mathbf u\\oplus\\left(\\mathbf v\\oplus\\mathbf w\\right) &=&\\left(\\mathbf u\\oplus\\mathbf v\\right)\\oplus\\gyr{u}{v}\\mathbf w\\label{nonass1}\\\\\n\\left(\\mathbf u\\oplus\\mathbf v\\right)\\oplus\\mathbf w &=&\\mathbf u\\oplus\\left(\\mathbf v\\oplus\\gyr{v}{u}\\mathbf w\\right)\\label{nonass2}\n\\end{eqnarray}\n\nConsider the following R session:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> sol(1)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 1\n\\end{Soutput}\n\\begin{Sinput}\n> u <- as.3vel(c(-0.7,+0.2,-0.3))\n> v <- as.3vel(c(+0.3,+0.3,+0.4))\n> w <- as.3vel(c(+0.1,+0.3,+0.8))\n> x <- as.3vel(c(-0.2,-0.1,-0.9))\n> u\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] -0.7 0.2 -0.3\n\\end{Soutput}\n\\end{Schunk}\n\nHere we have three-vectors {\\tt u} etc. We can see that {\\tt u} and\n{\\tt v} do not commute:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u+v\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] -0.545 0.482 -0.00454\n\\end{Soutput}\n\\begin{Sinput}\n> v+u\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] -0.429 0.572 0.132\n\\end{Soutput}\n\\end{Schunk}\n\n(the results differ). 
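The addition law and the gyration can be coded directly from the formulae above. The following standalone Python sketch (illustrative only, with $c=1$; not the package's implementation) reproduces the numbers of the R session and verifies the identity $\mathbf u\oplus\mathbf v = \gyr{u}{v}(\mathbf v\oplus\mathbf u)$ numerically.

```python
import math

def gamma(u):
    """Lorentz factor of a three-velocity, c = 1."""
    return 1.0 / math.sqrt(1.0 - sum(c * c for c in u))

def add(u, v):
    """Einstein velocity addition u (+) v (Ungar's formula), c = 1."""
    uv = sum(a * b for a, b in zip(u, v))
    g = gamma(u)
    return [(a + b / g + g * uv * a / (1.0 + g)) / (1.0 + uv)
            for a, b in zip(u, v)]

def gyr(u, v, x):
    """gyr[u, v] x = -(u (+) v) (+) (u (+) (v (+) x))."""
    return add([-c for c in add(u, v)], add(u, add(v, x)))

u = (-0.7, 0.2, -0.3)
v = (0.3, 0.3, 0.4)
print(add(u, v))   # (-0.545, 0.482, -0.00454), as in the R session
print(add(v, u))   # different: composition is noncommutative
# u (+) v = gyr[u, v](v (+) u), zero to numerical precision:
print([a - b for a, b in zip(add(u, v), gyr(u, v, add(v, u)))])
```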
We can use equation~\\ref{noncom}\n\\begin{Schunk}\n\\begin{Sinput}\n> (u+v)-gyr(u,v,v+u)\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] 1.77e-16 -7.08e-16 1.23e-16\n\\end{Soutput}\n\\end{Schunk}\n\nshowing agreement to within numerical error. It is also possible to\nuse the functional idiom in which we define {\\tt f()} to be the\nmap~$\\mathbf x\\mapsto\\gyr{u}{v}\\mathbf x$. In R:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> f <- gyrfun(u,v)\n> (u+v)-f(v+u) # should be zero\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] 1.77e-16 -7.08e-16 1.23e-16\n\\end{Soutput}\n\\end{Schunk}\n\nFunction {\\tt gyrfun()} is vectorized, which means that it plays\nnicely with (R) vectors. Consider\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u9 <- r3vel(9)\n> u9\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n [1,] -0.6635 -0.1850 -0.0324\n [2,] 0.7232 0.3007 0.0448\n [3,] -0.4816 -0.2698 0.0903\n [4,] 0.7186 0.5215 0.1149\n [5,] -0.4988 -0.5469 0.4781\n [6,] -0.0446 0.7934 -0.4839\n [7,] 0.4225 0.6831 -0.2474\n [8,] -0.5546 -0.1326 0.4232\n [9,] 0.0366 0.0748 0.4407\n\\end{Soutput}\n\\end{Schunk}\n\nThen we can create a vectorized gyrofunction:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> f <- gyrfun(u9,v)\n> f(x)\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n [1,] -0.0282 -7.04e-02 -0.924\n [2,] -0.3568 -1.36e-01 -0.845\n [3,] -0.0692 -2.67e-02 -0.924\n [4,] -0.3503 -1.89e-01 -0.838\n [5,] 0.0444 1.59e-01 -0.913\n [6,] -0.2297 -4.52e-01 -0.776\n [7,] -0.3268 -3.10e-01 -0.811\n [8,] 0.0130 3.85e-06 -0.927\n [9,] -0.1409 -5.01e-02 -0.915\n\\end{Soutput}\n\\end{Schunk}\n\nNote that the package vectorization is transparent when using syntactic sugar:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u9+x\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n [1,] -0.7436 -0.2345 
-0.5826\n [2,] 0.6411 0.2532 -0.6615\n [3,] -0.6319 -0.3444 -0.6273\n [4,] 0.6860 0.5266 -0.4421\n [5,] -0.6903 -0.6791 -0.0511\n [6,] -0.0952 0.7096 -0.6909\n [7,] 0.3115 0.6168 -0.6976\n [8,] -0.8231 -0.2462 -0.3690\n [9,] -0.2550 -0.0524 -0.7806\n\\end{Soutput}\n\\end{Schunk}\n\n(here, the addition operates using R's standard recycling rules).\n\n\\subsection{Associativity}\n\nThree velocity addition is not associative:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (u+v)+w\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] -0.465 0.655 0.501\n\\end{Soutput}\n\\begin{Sinput}\n> u+(v+w)\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] -0.549 0.667 0.416\n\\end{Soutput}\n\\end{Schunk}\n\nBut we can use equations~\\ref{nonass1} and~\\ref{nonass2}:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> (u+(v+w)) - ((u+v)+gyr(u,v,w))\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] 6.92e-16 -1.38e-15 -6.92e-16\n\\end{Soutput}\n\\begin{Sinput}\n> ((u+v)+w) - (u+(v+gyr(v,u,w)))\n\\end{Sinput}\n\\begin{Soutput}\nA vector of three-velocities (speed of light = 1)\n x y z\n[1,] 0 0 5.35e-16\n\\end{Soutput}\n\\end{Schunk}\n\n\\subsection{Visualization of noncommutativity and nonassociativity of three-velocities}\n\nConsider the following three-velocities:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(c(0.4,0,0))\n> v <- seq(as.3vel(c(0.4,-0.2,0)), as.3vel(c(-0.3,0.9,0)),len=20)\n> w <- as.3vel(c(0.8,-0.4,0))\n\\end{Sinput}\n\\end{Schunk}\n\nObjects~$\\mathbf u$ and $\\mathbf w$ are single three-velocities, and object $\\mathbf v$\nis a vector of three velocities. 
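The failure of closure for these particular velocities can be checked numerically before plotting. Here is a standalone Python sketch ($c=1$, taking the first element of the sequence $\mathbf v$ above, and assuming one particular left-to-right composition convention for successive boosts; it is not the package's plotting code) that applies the four boosts $+\mathbf u$, $+\mathbf v$, $-\mathbf u$, $-\mathbf v$ via Einstein velocity addition and shows that the result is not the zero velocity:

```python
import math

def gamma(u):
    """Lorentz factor of a three-velocity, c = 1."""
    return 1.0 / math.sqrt(1.0 - sum(c * c for c in u))

def add(u, v):
    """Einstein velocity addition u (+) v, c = 1."""
    uv = sum(a * b for a, b in zip(u, v))
    g = gamma(u)
    return tuple((a + b / g + g * uv * a / (1.0 + g)) / (1.0 + uv)
                 for a, b in zip(u, v))

def compose(*boosts):
    """Velocity after successive boosts, composed left to right from rest."""
    out = (0.0, 0.0, 0.0)
    for b in boosts:
        out = add(out, b)
    return out

u = (0.4, 0.0, 0.0)
v = (0.4, -0.2, 0.0)           # first element of the sequence v above
neg = lambda w: tuple(-c for c in w)
scale = lambda w, s: tuple(s * c for c in w)

print(compose(u, v, neg(u), neg(v)))   # nonzero: the quadrilateral fails to close
print(compose(scale(u, 1e-3), scale(v, 1e-3),
              neg(scale(u, 1e-3)), neg(scale(v, 1e-3))))  # nearly zero
```

The residual velocity is of third order in the speeds involved, which is why it vanishes in the classical limit of low speeds and grows rapidly for the faster elements of the sequence.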
We can see the noncommutativity of\nthree velocity addition in figures~\\ref{comfail1} and~\\ref{comfail2},\nand the nonassociativity in figure~\\ref{assfail}.\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\begin{Schunk}\n\\begin{Sinput}\n> comm_fail1(u=u, v=v)\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{lorentz_arxiv-comfail1_fig}\n\\caption{Failure\\label{comfail1} of the commutative law for velocity\n composition in special relativity. The arrows show successive\n velocity boosts of $+\\mathbf u$ (purple), $+\\mathbf v$ (black), $-\\mathbf u$ (red),\n and~$-\\mathbf v$ (blue) for $\\mathbf u,\\mathbf v$ as defined above. Velocity $\\mathbf u$ is\n constant, while $\\mathbf v$ takes a sequence of values. If velocity\n addition is commutative, the four boosts form a closed\n quadrilateral; the thick arrows show a case where the boosts almost\n close and the boosts nearly form a parallelogram. The blue dots\n show the final velocity after four successive boosts; the distance\n of the blue dot from the origin measures the combined velocity,\n equal to zero in the classical limit of low speeds. The discrepancy\n becomes larger and larger for the faster elements of the sequence\n $\\mathbf v$}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\begin{Schunk}\n\\begin{Sinput}\n> comm_fail2(u=u, v=v)\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{lorentz_arxiv-comfail2_fig}\n\n\\caption{Another view of the failure of the commutative\n law\\label{comfail2} in special relativity. The black arrows show\n velocity boosts of $\\mathbf u$ and the blue arrows show velocity boosts of\n $\\mathbf v$, with $\\mathbf u,\\mathbf v$ as defined above; $\\mathbf u$ is constant while\n $\\mathbf v$ takes a sequence of values. If velocity addition is\n commutative, then $\\mathbf u+\\mathbf v=\\mathbf v+\\mathbf u$ and the two paths end at the\n same point: the parallelogram is closed. 
The red lines show the\n difference between $\\mathbf u+\\mathbf v$ and $\\mathbf v+\\mathbf u$}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\begin{center}\n\\begin{Schunk}\n\\begin{Sinput}\n> ass_fail(u=u, v=v, w=w, bold=10)\n\\end{Sinput}\n\\end{Schunk}\n\\includegraphics{lorentz_arxiv-assfail_fig}\n\\caption{Failure of the associative law \\label{assfail} for velocity\n composition in special relativity. The arrows show successive\n boosts of $\\mathbf u$ followed by $\\mathbf v+\\mathbf w$ (black lines), and $\\mathbf u+\\mathbf v$\n followed by $\\mathbf w$ (blue lines), for $\\mathbf u$, $\\mathbf v$, $\\mathbf w$ as defined\n above; $\\mathbf u$ and $\\mathbf w$ are constant while $\\mathbf v$ takes a sequence of\n values. The mismatch between $\\mathbf u+\\left(\\mathbf v+\\mathbf w\\right)$ and\n $\\left(\\mathbf u+\\mathbf v\\right)+\\mathbf w$ is shown in red}\n \\end{center}\n\\end{figure}\n\n\n\\subsection[The magrittr package: pipes]{The {\\tt magrittr} package: pipes}\n\nThree velocities in the {\\tt lorentz} package work nicely with\n{\\tt magrittr}. 
If we define\n\n\\begin{Schunk}\n\\begin{Sinput}\n> u <- as.3vel(c(+0.5,0.1,-0.2))\n> v <- as.3vel(c(+0.4,0.3,-0.2))\n> w <- as.3vel(c(-0.3,0.2,+0.2))\n\\end{Sinput}\n\\end{Schunk}\n\nThen pipe notation operates as expected:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> jj1 <- u\n> jj2 <- u+v\n> speed(jj1-jj2)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 2.21e-16\n\\end{Soutput}\n\\end{Schunk}\n\nThe pipe operator is left associative:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> jj1 <- u\n> jj2 <- (u+v)+w\n> speed(jj1-jj2)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 7.39e-17\n\\end{Soutput}\n\\end{Schunk}\n\n\nIf we want right associative addition, the pipe operator needs\nbrackets:\n\n\\begin{Schunk}\n\\begin{Sinput}\n> jj1 <- u\n> jj2 <- u+(v+w)\n> speed(jj1-jj2)\n\\end{Sinput}\n\\begin{Soutput}\n[1] 3.45e-17\n\\end{Soutput}\n\\end{Schunk}\n\n\\subsection{Numerical verification}\n\nHere I provide numerical verification of equations~\\ref{noncom}\nto~\\ref{nonass2}. If we have\n\n\\begin{Schunk}\n\\begin{Sinput}\n> x <- as.3vel(c(0.7, 0.0, -0.7))\n> y <- as.3vel(c(0.1, 0.3, -0.6))\n> u <- as.3vel(c(0.0, 0.8, +0.1)) # x,y,u: single three-velocities\n> v <- r3vel(5,0.9)\n> w <- r3vel(5,0.8) # v,w: vector of three-velocities\n> f <- gyrfun(u,v)\n> g <- gyrfun(v,u)\n\\end{Sinput}\n\\end{Schunk}\n\nThen we can calculate the difference between the left hand side and\nright hand side numerically:\n \n\\begin{Schunk}\n\\begin{Sinput}\n> max(speed((u+v) - f(v+u))) # equation 3\n\\end{Sinput}\n\\begin{Soutput}\n[1] 3.49e-13\n\\end{Soutput}\n\\begin{Sinput}\n> max(abs(prod3(f(x),f(y)) - prod3(x,y))) # equation 4\n\\end{Sinput}\n\\begin{Soutput}\n[1] 5.94e-15\n\\end{Soutput}\n\\begin{Sinput}\n> max(speed(f(x+y) - (f(x)+f(y)))) # equation 5\n\\end{Sinput}\n\\begin{Soutput}\n[1] 3.81e-12\n\\end{Soutput}\n\\begin{Sinput}\n> max(speed(f(g(x)) - g(f(x)))) # equation 6\n\\end{Sinput}\n\\begin{Soutput}\n[1] 7.3e-13\n\\end{Soutput}\n\\begin{Sinput}\n> max(speed((u+(v+w)) - ((u+v)+f(w)))) # equation 
7\n\\end{Sinput}\n\\begin{Soutput}\n[1] 6.25e-14\n\\end{Soutput}\n\\begin{Sinput}\n> max(speed(((u+v)+w) - (u+(v+g(w))))) # equation 8\n\\end{Sinput}\n\\begin{Soutput}\n[1] 2.84e-14\n\\end{Soutput}\n\\end{Schunk}\n\n(all zero to numerical precision). \n\n\n\\section{Conclusions}\n\nThe {\\tt lorentz} package furnishes some functionality for\nmanipulating four-vectors and three-velocities in the context of\nspecial relativity. The R idiom is relatively natural and the package\nhas been used to illustrate different features of relativistic\nkinematics. The package leverages the powerful R programming language\nto conduct a systematic search for a gyrodistributive law, without\nsuccess. If such a law exists, it is complicated: it is not one of\nthe 688128 natural forms considered in the search.\n\n\n\\bibliographystyle{apalike}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nOur present understanding of the primordial Universe relies on the paradigm of inflation \\citep{inflationhist1,inflationhist2,inflationhist3}, introducing a phase of accelerated expansion in the first fractions of a second after the primordial singularity. Such a phenomenon is expected to leave a background of gravitational waves propagating in the primordial plasma during recombination, leaving a permanent mark imprinted in the polarization anisotropies of the cosmic microwave background (CMB): the primordial $B$-modes \\citep{InflationmodesB3,InflationsmodesB1,InflationsmodesB2}. 
The amplitude of the angular power spectrum of those primordial $B$-modes is characterized by the {tensor-to-scalar ratio} $r$, which is proportional to the energy scale at which inflation occurred \\citep{renergyscale}. Hence, looking for this smoking gun of inflation allows us to test our best theories of fundamental physics in the primordial Universe at energy scales far beyond the reach of particle accelerators. In this scope, it is one of the biggest challenges of cosmology set out for the next decades. The best experimental upper limit on the $r$ parameter so far is $r<0.032$ \\citep[95\\,\\% C.L.,][]{tristram,bicep2021,PlanckandBICEP}.\n\nThe JAXA Lite (Light) satellite, used for the $B$-mode polarization and Inflation from cosmic background Radiation Detection (\\textit{LiteBIRD}{}) mission, is designed to observe the sky at large angular scales in order to constrain this parameter $r$ down to $\\delta r= 10^{-3}$, including all sources of uncertainty \\citep{litebird,LiteBIRDUpdatedDesign}. Exploring this region of the parameter space is critical, because this order of magnitude for the tensor-to-scalar ratio is predicted by numerous physically motivated inflation models (for a review see e.g., \\cite{EncyclopediaInflationaris}) \n\nHowever, the success of this mission relies on our ability to treat polarized foreground signals. Indeed various diffuse astrophysical sources emit polarized $B$-mode signals above the primordial ones, the strongest being due to the diffuse polarized emission of our own Galaxy \\citep{PlanckCompoSep}. Even in a diffuse region like the BICEP\/Keck field, the Galactic $B$-modes are at least ten times stronger at 150\\,GHz than the $r=0.01$ tensor $B$-modes targeted by the current CMB experiments \\citep{BICEPKECKGW}.\n\nThe true complexity of polarized foreground emission that the next generation of CMB experiments will face is still mostly unknown today. 
Underestimation of this complexity can lead to the estimation of a spurious nonzero value of $r$ \\citep[see e.g.,][]{PlanckL,Remazeilles_etal_2016}.\n\nAt high frequencies ($>100$ GHz), the thermal emission of interstellar dust grains is the main source of Galactic foreground contaminating the CMB \\citep{dusthighfreq,PlanckDust2}. The canonical model of the spectral energy distribution (SED) of this thermal emission for intensity and polarization is given by the modified black body (MBB) law \\citep{Desertdustmodel}. This model provides a good fit to the dust polarization SED at the sensitivity of the \\textit{Planck}{} satellite \\citep{PlanckDust2} but it may not fully account for\nit at the sensitivity of future experiments \\citep{HensleyBull}. Furthermore, due to changes of physical conditions across the galaxy, spatial variations of the SEDs are present between and along the lines of sight. The former leads to what is known as \\emph{frequency decorrelation} in the CMB community \\citep[see e.g.][]{tassis,PlanckL,pelgrims2021}. Moreover, both effects lead to averaging MBBs when observing the sky (unavoidable line-of-sight or beam-integration effects). Because of the nonlinearity of the MBB law, those averaging effects will {distort} the SED, leading to deviations from this canonical model \\citep{Chluba}. \n\n\\cite{Chluba} proposed a general framework called {``moment expansion''} of the SED to take into account those distortions, using a Taylor expansion around the MBB with respect to its spectral parameters \\citep[Taylor expansion of foreground SEDs was discussed in previous studies; see e.g.,][]{stolyarov2005}. This method is agnostic: it does not require any assumption on the real complexity of the polarized dust emission. 
The moment expansion approach thus provides a promising tool with which to model the unanticipatable complexity of the dust emission in real data.\n\n\\cite{Mangilli} generalized this formalism for the sake of CMB data analysis in harmonic space and for cross-angular power spectra and applied it successfully to complex simulations and \\textit{Planck}{} High-Frequency Instrument (HFI) intensity data. This latter work shows that the real complexity of Galactic foregrounds could be higher than expected, encouraging us to follow the path opened by the moment expansion formalism.\n\nIn the present work, we apply the moment expansion in harmonic space to characterize and treat the dust foreground polarized emission of \\textit{LiteBIRD}{} high-frequency simulations, using dust-emission models of increasing complexity. We discuss the ability of this method to recover an unbiased value for the $r$ parameter, with enough accuracy to achieve the scientific objectives of the \\textit{LiteBIRD} mission.\n\nIn Sect.~\\ref{sec:formalism}, we first review the formalism of moment expansion in map and harmonic domains. We then describe in Sect.~\\ref{sec:sims} how we realize several sets of simulations of the sky as seen by the \\textit{LiteBIRD}{} instrument with varying dust complexity and how we estimate the angular power spectra. In Sect.~\\ref{sec:fit}, we describe how we estimate the moment parameters and the tensor-to-scalar ratio $r$ in those simulations. 
Finally, we discuss those results and the future work that has to be done in the direction opened by moment expansion in Sect.~\ref{sec:discussion}.\n\n\section{\label{sec:formalism}Formalism}\n\n\subsection{Characterizing the dust SED in real space}\n\n\subsubsection{Modified black body model \label{sec:mbb}}\n\nThe canonical way to characterize astrophysical dust-grain emission in every volume element of the Galaxy is given by the modified black body (MBB) function, which consists in multiplying a standard black body SED $B_{\nu}(T_0)$ at a given temperature $T_0$ by a power law of the frequency $\nu$ with a spectral index $\beta_0$. The dust intensity map $I(\nu,\vec{n})$, observed at a frequency $\nu$ in every direction with respect to the unit vector $\vec{n}$, can then be written as:\n\begin{equation}\n\label{eq:MBB}\n I(\nu,\vec{n}) = \left(\frac{\nu}{\nu_0}\right)^{\beta_0} \frac{B_{\nu}(T_0)}{B_{\nu_0}(T_0)} A(\vec{n}) \n =\frac{I_{\nu}(\beta_0,T_0)}{I_{\nu_0}(\beta_0,T_0)} A(\vec{n}),\n\end{equation}\n\n\noindent where $A(\vec{n})$ is the dust intensity template at a reference frequency $\nu_0$\footnote{Throughout this work, we use $\nu_0 = 353$\,GHz.}.\nWe know that the physical conditions (thermodynamic and dust grain properties) change through the interstellar medium across the Galaxy, depending, in an intricate fashion, on the gas velocity and density, the interstellar radiation field, and the distance to the Galactic center \citep[see e.g., ][]{dustacrossMW,dustvarMW,PlanckVardust,PlanckCompoSep,vardustdisk,dustradfield}. 
This change of physical conditions leads to variations in $\beta$ and $T$ depending on the direction of observation $\vec{n}$:\n\n\begin{equation}\n\label{eq:MBBn}\n I(\nu,\vec{n}) = \frac{I_{\nu}(\beta(\vec{n}),T(\vec{n}))}{I_{\nu_0}(\beta(\vec{n}),T(\vec{n}))} A(\vec{n}).\n\end{equation}\n\nThe SED amplitude and parameters (temperature and spectral index) are then different for every line of sight. It is therefore clear that, in order to provide a realistic model of the dust emission, the frequency and spatial dependencies cannot be trivially separated. \n\n\subsubsection{\label{sect:limitsmbb}Limits of the modified black body}\n\nThe dust SED model given by the MBB has proven to be highly accurate \citep{Planck2014dust,Planck2015dust}. However, it must be kept in mind that this model is empirical and is therefore not expected to give a perfect description of the dust SED in the general case. Indeed, physically motivated dust grain emission models predict deviations from it \citep[e.g.,][]{modelbeyondmbb}. Surveys tend to show that the dust-emission properties vary across the observed 2D sky and the 3D Galaxy \citep{PlanckDust2}. Furthermore, in true experimental conditions, one can never directly access the pure SED of a single volume element with specific emission properties and unique spectral parameters. Averages are therefore made over different SEDs emitted from distinct regions with different physical emission properties, in a way that cannot be avoided: along the line of sight; between different lines of sight, inside the beam of the instrument; or when performing a spherical harmonic decomposition to calculate the angular power spectra over large regions of the sky.\n\nThe MBB function is nonlinear, and therefore summing MBBs with different spectral parameters does not return another MBB function and produces \emph{SED distortions}. 
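This nonlinearity can be illustrated with a short numerical sketch (a toy illustration of ours, not part of any analysis pipeline): averaging two MBB scalings with different spectral indices yields an SED that no single MBB reproduces.

```python
import numpy as np

H_OVER_K = 0.04799  # h/k_B in K/GHz, so x = h*nu/(k_B*T) = H_OVER_K*nu_GHz/T

def planck(nu, T):
    """Black body spectrum B_nu(T) up to a constant factor (nu in GHz, T in K)."""
    return nu**3 / np.expm1(H_OVER_K * nu / T)

def mbb(nu, beta, T, nu0=353.0):
    """MBB frequency scaling I_nu(beta, T) / I_nu0(beta, T)."""
    return (nu / nu0) ** beta * planck(nu, T) / planck(nu0, T)

nu = np.array([100.0, 143.0, 217.0, 353.0])
# Average two MBBs with different spectral indices (same temperature) ...
avg = 0.5 * (mbb(nu, 1.4, 20.0) + mbb(nu, 1.7, 20.0))
# ... and compare to a single MBB with the mean index: the ratio is not
# constant in frequency, so the averaged SED is no longer an MBB.
ratio = avg / mbb(nu, 1.55, 20.0)
```

The frequency-dependent residual `ratio` is precisely the kind of SED distortion that the moment expansion is designed to absorb.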
For all these reasons, modeling the dust emission with a MBB is intrinsically limited, even when doing so with spatially varying spectral parameters. As a consequence, inaccuracies might appear when modeling the dust contribution to CMB data that will unavoidably impact the final estimation of the cosmological parameters. \n\n\\subsubsection{Moment expansion in pixel space}\n\\label{sec:moment_pixel}\n\nA way to address the limitation of the MBB model in accurately describing the dust emission is given by the {moment expansion} formalism proposed by \\cite{Chluba}. This formalism is designed to take into account the SED distortions due to averaging effects by considering a multidimensional Taylor expansion of the distorted SED $I(\\nu,\\vec{p})$ around the mean values $\\vec{p}_{0}$ of its spectral parameters $\\vec{p} = \\{p_i\\}$. This is the so-called {moment expansion} of the SED, which can be written as\n\\begin{align}\n I(\\nu,\\vec{p}) = I(\\nu,\\vec{p}_{0}) &+ \\sum_i \\omega_1^{p_i}\\langle\\partial_{p_i}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0}\n \\nonumber \\\\\n &+ \\frac{1}{2}\\sum_{i,j} \\omega_2^{p_ip_j}\\langle\\partial_{p_i}\\partial_{p_j}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0}\n \\nonumber \\\\\n &+ \\dots \\nonumber\\\\\n &+\\frac{1}{\\alpha!} \\sum_{i,\\dots,k} \\omega_\\alpha^{p_i\\dots p_k}\\langle\\partial_{p_i}\\dots\\partial_{p_k}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0},\n\\label{eq:momentgeneral}\n\\end{align}\n\nwhere the first term on the right-hand side is the SED without distortion $I(\\nu,\\vec{p}_{0})$ evaluated at $\\vec{p}=\\vec{p}_0$, and the other terms are the so-called {moments} of order $\\alpha$, quantified by the {moment parameters} $\\omega_\\alpha^{p_i\\dots p_k}$ for the expansion with respect to any parameter of $\\vec{p}$. Performing the expansion to increasing order adds increasing complexity to the SED $I(\\nu,\\vec{p}_{0})$. 
\n\nFor the MBB presented in Sect.~\\ref{sec:mbb}, there are two parameters so that $\\vec{p} = \\{\\beta,T\\}$. Thus the dust moment expansion reads\n\n\\begin{align}\n I(\\nu,\\vec{n}) = \\frac{I_{\\nu}(\\beta_0,T_0)}{I_{\\nu_0}(\\beta_0,T_0)} \\bigg\\{ & A(\\vec{n}) + \\omega^\\beta_1(\\vec{n}) \\ln\\left(\\frac{\\nu}{\\nu_0}\\right)+ \\frac{1}{2}\\omega^\\beta_2(\\vec{n}) \\ln^2\\left(\\frac{\\nu}{\\nu_0}\\right)\\nonumber \\\\[2mm]\n &+ \\omega^T_1(\\vec{n})\\Big( \\Theta(\\nu,T_0) - \\Theta(\\nu_0,T_0) \\Big) + \\dots \\bigg\\},\n\\label{eq:momentinttemp}\n\\end{align}\n\n\\noindent where the expansion has been written up to order two in $\\beta$ (with moment expansion parameters $\\omega^\\beta_1$ at order one and $\\omega^\\beta_2$ at order two) and to order one in $T$ (with a moment expansion parameter $\\omega^T_1$ at order one). The following expression has been introduced to simplify the black body derivative with respect to $T$:\n\n\\begin{align}\n \\Theta(\\nu,T) = \\frac{x}{T}\\frac{e^{x}}{e^{x}-1},\\ {\\rm with}\\ x = \\frac{h \\nu}{k T}.\n\\end{align}\n\nThe moment expansion in pixel space can be used for component separation and possibly crossed with other methods \\citep[see e.g.,][]{RemazeillesmomentsILC,Debabrata_2021}. However, in the present work, we are interested in the modeling of the dust at the $B$-mode angular power spectrum level. Performing the moment expansion at the angular power spectrum level adds some complexity to the SEDs due to the additional averaging occurring when dealing with spherical harmonic coefficients. Indeed, these coefficients are estimated on potentially large fractions of the sky and probe regions with various physical conditions. 
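For concreteness, the truncated expansion above can be evaluated numerically. The following numpy sketch (function and argument names are ours) implements the bracketed series with moments up to order two in $\beta$ and order one in $T$:

```python
import numpy as np

H_OVER_K = 0.04799  # h/k_B in K/GHz

def theta(nu, T):
    """Theta(nu, T) = (x/T) e^x / (e^x - 1), with x = h nu / (k T)."""
    x = H_OVER_K * nu / T
    return (x / T) * np.exp(x) / np.expm1(x)

def moment_sed(nu, A, w1_beta=0.0, w2_beta=0.0, w1_T=0.0,
               beta0=1.54, T0=20.0, nu0=353.0):
    """Moment-expanded dust SED: the pivot MBB times the bracketed series,
    to order 2 in beta and order 1 in T (coefficient names are ours)."""
    mbb = ((nu / nu0) ** beta0
           * (nu**3 / np.expm1(H_OVER_K * nu / T0))
           / (nu0**3 / np.expm1(H_OVER_K * nu0 / T0)))
    lognu = np.log(nu / nu0)
    series = (A
              + w1_beta * lognu
              + 0.5 * w2_beta * lognu**2
              + w1_T * (theta(nu, T0) - theta(nu0, T0)))
    return mbb * series

# With all moments set to zero, the plain MBB scaling is recovered.
plain = moment_sed(100.0, A=1.0)
perturbed = moment_sed(100.0, A=1.0, w1_beta=0.1)
```

Setting all moment parameters to zero recovers the plain MBB, which is the order-zero consistency check used implicitly throughout.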
On the other hand, the expansion at the power spectrum level possibly drastically reduces the parameter space with respect to performing the expansion in every sky pixel.\n\n\\subsection{Characterizing the dust SED in harmonic space}\n\n\\subsubsection{Dust SED in spherical harmonic space \\label{sec:formalismspectra}}\n\nThe expansion presented in Sect.~\\ref{sec:moment_pixel} can be applied in spherical harmonic space using the same logic. The sky emission projection then reads\n\n\\begin{equation}\n I(\\nu,\\vec{n}) = \\sum_{\\ell = 0}^{\\infty}\\sum_{m=-\\ell}^{\\ell}I^\\nu_{\\ell m}Y_{\\ell m} (\\vec{n}).\n\\end{equation}\n\nApplying the moment expansion to the spherical harmonics coefficients, with respect to $\\beta$ and $T$, as in Eq.~\\ref{eq:momentinttemp}, leads to\n\n\\begin{align}\n I^\\nu_{\\ell m} = & \\frac{I_{\\nu}(\\beta_0(\\ell),T_0(\\ell))}{I_{\\nu_0}(\\beta_0(\\ell),T_0(\\ell))} \\bigg\\{ A_{\\ell m} + \\omega^\\beta_{1,\\ell m} \\ln\\left(\\frac{\\nu}{\\nu_0}\\right) + \\frac{1}{2}\\omega^\\beta_{2,\\ell m} \\ln^2\\left(\\frac{\\nu}{\\nu_0}\\right) \\nonumber \\\\[2mm]\n &+ \\omega^T_{1,\\ell m} \\Big( \\Theta(\\nu,T_0(\\ell)) - \\Theta(\\nu_0,T_0(\\ell)) \\Big) + \\dots \\quad\\bigg\\},\n\\label{eq:momentint2temp}\n\\end{align}\n\n\n\\noindent where this time $\\beta_0(\\ell)$ and $T_0 (\\ell)$ are the averages of $\\beta$ and $T$ at a given multipole $\\ell$ over the sky fraction we are looking at. We note that the moment parameters $\\omega^{p_i}_{\\alpha,\\ell m}$ involved here are different from the $\\omega^{p_i}_i(\\vec{n})$ appearing in Eq.~\\ref{eq:momentinttemp} in the map space because they involve different averaging. 
In principle, the moment expansion in harmonic space can take into account the three kinds of spatial averages presented in Sect.~\\ref{sect:limitsmbb}.\n\nAs the dust spectral index and temperature are difficult to separate in the frequency range considered for CMB studies \\citep[{i.e.}, Rayleigh-Jeans domain, see e.g.][]{betatcorr}, the moment expansion in harmonic space has only been applied in the past with respect to $\\beta$, with the temperature being fixed to a reference value $T=T_0$ \\citep{Mangilli,Azzoni}. In the present paper, for the first time, the moment expansion in harmonic space is instead performed with respect to both $\\beta$ and $T$, as it was in real space in \\citet{RemazeillesmomentsILC}.\n\n\\subsubsection{Cross-power spectra}\n\nRelying on the derivation made by \\cite{Mangilli} and Eq.~\\ref{eq:momentint2temp}, we can explicitly write the cross-spectra between two maps $M_{\\nu_i}$ and $M_{\\nu_j}$ at frequencies $\\nu_i$ and $\\nu_j$, using the moment expansion in $\\beta$ and $T$ as follows:\n\n\\begin{align}\n \\mathcal{D}_\\ell(\\nu_i \\times \\nu_j) &= \\frac{I_{\\nu_i}(\\beta_0(\\ell),T_0(\\ell))I_{\\nu_j}(\\beta_0(\\ell),T_0(\\ell))}{I_{\\nu_0}(\\beta_0(\\ell),T_0(\\ell))^2} \\cdot \\bigg\\{ \\nonumber \\\\[-0.5mm]\n \n 0^{\\rm th}\\ \\text{order}\\;&\n \\begin{cases}\n & \\ \\mathcal{D}_\\ell^{A \\times A}\n \\end{cases}\n \\nonumber \\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ \\beta\\;&\n \\begin{cases}\n &+\\mathcal{D}_\\ell^{A \\times \\omega^{\\beta}_1}\\left[ \\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right) + \\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right] \\nonumber \\\\\n &+ \\mathcal{D}_\\ell^{\\omega^{\\beta}_1 \\times \\omega^{\\beta}_1} \\left[\\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right)\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]\\nonumber \\\\ \\end{cases}\\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ T \\;&\n \\begin{cases}\n &+\\mathcal{D}_\\ell^{A \\times \\omega_1^T} \\left( \\Theta_i + \\Theta_j - 
2\\Theta_0\\right) \\\\\n &+ \\mathcal{D}_\\ell^{\\omega_1^T \\times \\omega_1^T}\\Big(\\Theta_i - \\Theta_0\\Big)\\left(\\Theta_j - \\Theta_0\\right)\\nonumber\n \\end{cases}\\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ T\\beta \\;&\n \\begin{cases}\n &+ \\mathcal{D}_\\ell^{\\omega^{\\beta}_1 \\times \\omega_1^T} \\left[ \\ln\\left(\\frac{\\nu_j}{\\nu_0} \\right)\\Big( \\Theta_i - \\Theta_0\\Big) + \\ln\\left(\\frac{\\nu_i}{\\nu_0} \\right)\\left( \\Theta_j - \\Theta_0\\right)\\right] \\nonumber \\\\\n \\end{cases}\\\\[-0.5mm]\n \n 2^{\\rm nd}\\ \\text{order}\\ \\beta \\;&\n \\begin{cases}\n &+ \\frac{1}{2} \\mathcal{D}_{\\ell}^{A \\times\\omega^{\\beta}_{2}} \\left[ \\ln^2\\left(\\frac{\\nu_i}{\\nu_0}\\right)\n + \\ln^2\\left(\\frac{\\nu_j}{\\nu_0}\\right)\n \\right]\n \\\\[-0.5mm]\n &+ \\frac{1}{2} \\mathcal{D}_{\\ell}^{\\omega^{\\beta}_{1} \\times \\omega^{\\beta}_{2}} \\Big[ \\ln \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\ln^2\\left(\\frac{\\nu_j}{\\nu_0}\\right) \n +\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right)\n \\ln^2 \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\Big] \n \\\\[-0.5mm]\n &+\\frac{1}{4} \\mathcal{D}_{\\ell}^{\\omega^{\\beta}_{2} \\times \\omega^{\\beta}_{2}} \\left[\\ln^2 \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\ln^2 \\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]\n \\,\n \\end{cases}\\nonumber\\\\[-0.5mm]\n &+ \\dots \\bigg\\},\n\\label{eq:moments}\n\\end{align}\n\n\\noindent where we use the following abbreviation: $\\Theta(\\nu_k,T_0(\\ell))\\equiv \\Theta_k$, so that $\\Theta_0=\\Theta(\\nu_0,T_0(\\ell))$, and we defined the moment expansion cross-power spectra between two moments $\\mathcal{M}$ and $\\mathcal{N}$ as\n\n\\begin{equation}\n \\mathcal{C}_\\ell^{\\mathcal{M}\\times\\mathcal{N}} = \\sum_{m, m'=-\\ell}^{\\ell} \\mathcal{M}_{\\ell m} \\mathcal{N}_{\\ell m'},\\ {\\rm with}\\ (\\mathcal{M},\\mathcal{N})\\in\\left\\{A,\\omega^{\\beta}_1,\\omega^{T}_1, \\omega^{\\beta}_2,\\ \\dots\\right\\}.\n\\label{eq:moments_spectra}\n\\end{equation}\n\nIn 
the remainder of this article, we use the $\mathcal{D}_\ell$ quantity, which is a scaling of the angular power spectra, and is defined as \n\begin{equation}\n \mathcal{D}_\ell \equiv \frac{\ell(\ell +1)}{2 \pi} \mathcal{C}_\ell.\n\label{eq:dl}\n\end{equation}\n\nEquation~\ref{eq:moments} has been written using the expansion with respect to $\beta$ at order two and $T$ at order one, as in Eq.~\ref{eq:momentint2temp}. Nevertheless, the terms involving power spectra between order two in $\beta$ and order one in $T$ have been neglected so as to match the needs of the implementation of our method in the following. \n\nHereafter, when we refer to ``order $k$'' at the angular power spectrum level, we are referring to moment expansion terms involving the pixel space moment up to order $k$. For example, $\mathcal{D}_\ell^{A\times\omega_1^T}$ and $\mathcal{D}_\ell^{\omega^\beta_1\times\omega_1^T}$ are order one, while $\mathcal{D}_\ell^{A\times\omega^\beta_2}$, $\mathcal{D}_\ell^{\omega^\beta_1\times\omega^\beta_2}$ and $\mathcal{D}_\ell^{\omega^\beta_2\times\omega^\beta_2}$ are order two. At order zero, one retrieves the MBB description of the cross-angular power spectra SED $\mathcal{D}_\ell(\nu_i\times\nu_j)$ as a function of the frequencies $\nu_i$ and $\nu_j$.\n\nThis formalism was originally introduced to analyze the complexity of intensity data in \cite{Mangilli}. In the present work, we focus on $B$-mode polarization power spectra. This focus is motivated by analyses of the \textit{Planck}{} and balloon-borne Large Aperture Submillimeter Telescope for Polarimetry (BLASTPol) data, which found that the polarization fraction appears to be constant at far-infrared-to-millimetre wavelengths \citep{blastpol1,blastpol2}. 
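As an illustration, the cross-power-spectrum model above, truncated at first order, can be sketched as follows (the coefficient packing and function names are ours, chosen for this toy example):

```python
import numpy as np

H_OVER_K = 0.04799  # h/k_B in K/GHz

def theta(nu, T):
    """Theta(nu, T) = (x/T) e^x / (e^x - 1), with x = h nu / (k T)."""
    x = H_OVER_K * nu / T
    return (x / T) * np.exp(x) / np.expm1(x)

def dl_model(nu_i, nu_j, coeffs, beta0=1.54, T0=20.0, nu0=353.0):
    """First-order moment model of D_ell(nu_i x nu_j) for one multipole
    bin. `coeffs` packs the moment cross-spectra as
    (AxA, Axw1b, w1bxw1b, Axw1T, w1Txw1T, w1bxw1T)."""
    def intensity(nu):  # MBB intensity up to a constant factor
        return nu**beta0 * nu**3 / np.expm1(H_OVER_K * nu / T0)

    aa, ab, bb, at, tt, bt = coeffs
    li, lj = np.log(nu_i / nu0), np.log(nu_j / nu0)
    ti = theta(nu_i, T0) - theta(nu0, T0)
    tj = theta(nu_j, T0) - theta(nu0, T0)
    sed = (aa + ab * (li + lj) + bb * li * lj
           + at * (ti + tj) + tt * ti * tj
           + bt * (lj * ti + li * tj))
    return intensity(nu_i) * intensity(nu_j) / intensity(nu0) ** 2 * sed

coeffs = (1.0, 0.1, 0.02, 0.05, 0.01, 0.005)
```

By construction the model is symmetric under the exchange of $\nu_i$ and $\nu_j$, as required for cross-spectra, and reduces to the zeroth-order amplitude at the pivot frequency.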
As a result, intensity and polarization SED complexity may be similar.\nNevertheless, $Q$ and $U$ can have a different SED because of the polarization angle frequency dependence \citep[see e.g.,][]{tassis,moment_polar} and so can $E$ and $B$. This could be a limitation when analyzing the dust $E$ and $B$ with a single moment expansion, especially when SED variations occur along the line of sight.\nEven when trying to model a single polarization component ---as we do in the present work, dealing only with $B$ modes--- it is not clear whether the distorted SED can be modeled in terms of $\beta$ and $T$ moments only. Further work is needed to address this question. However, such effects should not impact the present study, in which variations along the line of sight are not simulated.\n\nModeling the complexity of the foreground signals by means of the moment expansion of the $B$-mode angular power spectrum has already been successfully applied to Simons Observatory \citep{SimonsObservatory} simulated data \citep{Azzoni}. However, the approach taken by these latter authors is different from the one presented above. They apply a \emph{minimal} moment expansion: assumptions are made to keep only the $\mathcal{D}_\ell^{\omega^{\beta}_1\times\omega^{\beta}_1}$ and $\mathcal{D}_\ell^{A\times\omega^{\beta}_2}$ parameters, which are modeled with a power-law scale dependence. These assumptions may not hold for experiments with higher sensitivity and wider sky coverage. Furthermore, they assume a scale-invariant dust spectral index. 
In this work, on the other hand, we relax these assumptions in order to characterize the required spectral complexity of the dust emission for \textit{LiteBIRD}{}.\n\n\section{\label{sec:sims}Simulations and cross-spectra estimation}\n\n\subsection{\label{sec:LiteBIRD}\textit{LiteBIRD}{}}\n\n\textit{LiteBIRD}{} is an international project proposed by the Japanese space agency (JAXA), which selected it in May 2019 as a strategic large-class mission. The launch is planned for 2029, for a minimal mission duration of three years \citep{2020SPIE,Ptep}.\n\n\textit{LiteBIRD}{} is designed to perform a full-sky survey of the CMB at large angular scales in order to look for the reionization bump of primordial $B$-modes and explore the tensor-to-scalar ratio ($r$) parameter space with a total uncertainty $\delta r$ below $10^{-3}$, including foreground cleaning and systematic errors.\n\textit{LiteBIRD}{} is composed of three telescopes observing in different frequency intervals: the Low-, Medium- and High-Frequency Telescopes (LFT, MFT and HFT).\nEach of the telescopes illuminates a focal plane composed of hundreds of polarimetric detectors. The whole instrument will be cooled down to 5\,K \citep{LiteBIRDUpdatedDesign} while the focal plane will be cooled down to 100\,mK \citep{LBsubK}. In order to mitigate the instrumental systematic effects, the polarization is modulated by a continuously rotating half-wave plate. \textit{LiteBIRD}{} will observe the sky in 15 frequency bands from 40 to 402\,GHz. 
Table~\\ref{tab:litetab} gives the details of the frequency bands and their sensitivities in polarization \\citep[adapted from][see Sect.~\\ref{sec:Instrsim}]{2020SPIE}.\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{cccc}\n \\multirow{2}{*}{Telescope} & Frequency & Sensitivity $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ & $\\boldsymbol \\theta_{\\rm FWHM}$ \\\\\n & [GHz]& [$\\mu$K$\\cdot$arcmin] & {arcmin}\\\\\n \\hline\n LFT & 40.0 & 37.42 & {70.5} \\\\\n LFT & 50.0 & 33.46 & {58.5} \\\\\n LFT & 60.0 & 21.31 & {51.1} \\\\\n LFT & 68.0 & {19.91\/31.77} & {41.6\/47.1} \\\\\n LFT & 78.0 & {15.55\/19.13} & {36.9\/43.8} \\\\\n LFT & 89.0 & {12.28\/28.77} & {33.0\/41.5} \\\\\n LFT\/MFT & 100.0 & {10.34\/8.48} & {30.2\/37.8} \\\\\n LFT\/MFT & 119.0 & {7.69\/5.70} & {26.3\/33.6} \\\\\n LFT\/MFT & 140.0 & {7.25\/6.38} & {23.7\/30.8} \\\\\n MFT & 166.0 & 5.57 & {28.9} \\\\\n MFT\/HFT & 195.0 & {7.05\/10.50} & {28.0\/28.6} \\\\\n HFT & 235.0 & 10.79 & {24.7}\\\\\n HFT & 280.0 & 13.8 & {22.5} \\\\\n HFT & 337.0 & 21.95 & {20.9} \\\\\n HFT & 402.0 & 47.45 & {17.9} \\\\\n\\end{tabular}\n\\caption{\\footnotesize Instrumental characteristics of \\textit{LiteBIRD}{} used in this study \\citep[adapted from][see Sect.~\\ref{sec:Instrsim}]{2020SPIE}. Some frequency bands are shared by two different telescopes or detector arrays. If so, the two values of polarization sensitivities $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ and instrumental beam full width at half maximum $\\theta_{\\rm FWHM}$ are displayed on the same line.}\n\\label{tab:litetab}\n\\end{center}\n\\end{table}\n\n\\subsection{Components of the simulations}\n\\label{sec:ingredients}\n\nWe build several sets of \\textit{LiteBIRD}{} sky simulations. These multi-frequency sets of polarized sky maps are a mixture of CMB, dust, and instrumental noise. 
The simulations are made at the nine highest frequencies accessible by the instrument ($\geq 100$\,GHz), where dust is the predominant source of foreground contamination.\nFor every studied scenario, we build $N_{\rm sim} = 500$ simulations, each composed of a set of $N_{\rm freq}=9$ pairs of sky maps $(Q,U)$ built using the {\sc HEALPix} package, with $N_{\rm side} = 256$ \citep{healpix}. All the signals are expressed in $\mu{\rm K}_{\rm CMB}$ units.\n\n\subsubsection{Cosmic microwave background signal}\n\nTo generate the CMB signal, we use the {\it Code for Anisotropies in the Microwave Background} \citep[CAMB,][]{CAMB} to create a fiducial angular power spectrum from the best-fit values of cosmological parameters estimated by the recent \textit{Planck}{} data analysis \citep{PlanckOverview}.\n\nFor the $B$-modes, we consider the two different components of the spectrum: lensing-induced and primordial (tensor), so that $\mathcal{D}_{\ell}^{BB} =\mathcal{D}_{\ell}^{{\rm lensing}} + r_{\rm sim} \cdot \mathcal{D}_{\ell}^{{\rm tensor}}$, where $\mathcal{D}_{\ell}^{{\rm tensor}}$ refers to the tensor $B$-modes for $r=1$ and $r_{\rm sim}$ labels the input value of the tensor-to-scalar ratio $r$ contained in the simulation. We use two different values throughout this work: $r_{\rm sim}=0$, which serves as the reference in the present work, and $r_{\rm sim}=10^{-2}$, which is used for consistency checks when the CMB primordial signal is present.\n\nFor all simulations, we then generate the Stokes $Q$ and $U$ CMB polarization Gaussian realization maps $S^{\rm CMB}_{\nu,r_{\rm sim}}$ from the angular power spectra using the {\tt synfast} function of {\sc HEALPix}.\n\n\subsubsection{Foregrounds: dust}\n\nOur study focuses on high frequencies ($\geq 100$\,GHz) only, where thermal dust emission is the main source of polarized foreground as mentioned in Sect.~\ref{sec:intro}. 
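The composition of the simulated CMB $B$-mode spectrum described above can be sketched as follows; this is a minimal numpy illustration in which the flat toy templates merely stand in for the actual CAMB outputs:

```python
import numpy as np

def total_bb(dl_lensing, dl_tensor_r1, r_sim):
    """Total B-mode spectrum: D_ell^BB = D_ell^lensing + r_sim * D_ell^tensor(r=1)."""
    return np.asarray(dl_lensing) + r_sim * np.asarray(dl_tensor_r1)

def dl_to_cl(dl, ell):
    """Convert D_ell back to C_ell = 2*pi*D_ell/(ell*(ell+1)), the input
    expected by map synthesis routines such as HEALPix synfast."""
    ell = np.asarray(ell, dtype=float)
    return 2.0 * np.pi * np.asarray(dl) / (ell * (ell + 1.0))

# Toy placeholder templates standing in for the CAMB lensing and tensor spectra:
ell = np.arange(2, 201)
dl_lens = np.full(ell.size, 1.0e-2)
dl_tens = np.full(ell.size, 1.0e-1)
dl_bb = total_bb(dl_lens, dl_tens, r_sim=1.0e-2)  # reference runs use r_sim = 0
```

From the converted $C_\ell$, Gaussian $(Q,U)$ realizations can then be synthesized with {\tt synfast}.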
We make use of two different scenarios of increasing complexity included in the {\\sc PySM} \\citep{Pysm} and one of intermediate complexity not included in the {\\sc PySM}:\n\n\\begin{itemize}\n \\item {\\tt d0}, included in the {\\sc PySM}: the dust polarization $Q$ and $U$ maps are taken from $S^{Planck}_{\\nu=353}$, the \\textit{Planck}{} 2015 data at 353\\,GHz \\citep{planck_2015_overview}, extrapolated to a frequency $\\nu$ using the MBB given in Eq.~\\ref{eq:MBB} with a temperature $T_0=T_{\\tt d0}=20$\\,K and spectral index $\\beta_0=\\beta_{\\tt d0}=1.54$ constant over the sky:\n \n \\begin{equation}S^{\\rm dust}_\\nu=S_\\nu^{\\tt d0}=\\frac{I_\\nu(\\beta_{\\tt d0},T_{\\tt d0})}{I_{\\nu_0}(\\beta_{\\tt d0},T_{\\tt d0})}\\cdot S^{Planck}_{353},\n \\end{equation}\n \n \\item {\\tt d1T}, introduced here: the dust polarization $Q$ and $U$ maps are also taken from \\citet{planck_2015_overview} but they are extrapolated to a frequency $\\nu$ using the MBB given in Eq.~\\ref{eq:MBBn}, with spatially varying spectral index $\\beta(\\vec{n})$, as in {\\tt d1} and a fixed temperature $T_0=T_{\\tt d1T}=21.9$\\,K, obtained as the mean of the \\textit{Planck}{} {\\sc Commander} dust temperature map \\citep{planck_2015_commander} on our $f_{\\rm sky}=0.7$ sky mask: \n \n \\begin{equation}\n S^{\\rm dust}_\\nu=S_\\nu^{\\tt d1T}=\\frac{I_\\nu(\\beta(\\vec{n}),T_{\\tt d1T})}{I_{\\nu_0}(\\beta(\\vec{n}),T_{\\tt d1T})}\\cdot S^{Planck}_{353}.\n \\end{equation}\n \n \\item {\\tt d1}, included in the {\\tt PySM}: similar to { \\tt d1T} with both a spatially varying temperature $T(\\vec{n})$ and spectral index $\\beta(\\vec{n})$ obtained from the \\textit{Planck}{} data using the {\\sc Commander} code \\citep{planck_2015_commander}: \n \n \\begin{equation}\n S^{\\rm dust}_\\nu=S_\\nu^{\\tt d1}=\\frac{I_\\nu(\\beta(\\vec{n}),T(\\vec{n}))}{I_{\\nu_0}(\\beta(\\vec{n}),T(\\vec{n}))}\\cdot S^{Planck}_{353}.\n \\end{equation}\n\n\\end{itemize}\n\n\\subsubsection{ 
\\label{sec:Instrsim}Instrumental noise}\n\nThe band polarization sensitivities $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ are derived from the noise equivalent temperature (NET) values converted into $\\mu$K$\\cdot$arcmin for each telescope (LFT, MFT and HFT). As seen in Table~\\ref{tab:litetab}, some frequency bands are overlapping between two telescopes. In this situation, we take the mean value of the two NETs, weighted by the beam full width at half maximum (FWHM) $\\theta$ as:\n\n\\begin{equation}\n \\sigma^{\\rm noise}_{Q,U}(\\nu_{\\rm overlapping}) = \\sqrt{\\frac{1}{ \\left(\\frac{\\theta_{\\rm min}}{{\\theta_{\\rm max}}}\\sigma^{\\rm noise}_{Q,U}(\\nu_{\\theta_{\\rm min}})\\right)^{-2} + \\left(\\sigma^{\\rm noise}_{Q,U}(\\nu_{\\theta_{\\rm max}})\\right)^{-2} }},\n\\end{equation}\n\n{\\noindent where $\\theta_{\\rm min}$ is the smallest FWHM among the two and $\\theta_{\\rm max}$ the largest. The band polarization sensitivities} are displayed in Table~\\ref{tab:litetab}. For every simulation, the noise component $N_\\nu$ is generated in every pixel of the maps with a Gaussian distribution centered on zero, with standard deviation $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ weighted by the pixel size (and $\\sqrt{2}\\cdot\\sigma^{\\rm noise}_{Q,U}(\\nu)$ for the maps used to compute the auto-power spectra, see Sect.~\\ref{sec:spectra_estimation}).\n\nFor simplicity, we choose to ignore beam effects in our simulations, assuming they can be taken into account perfectly. Simulations are thus produced at infinite (0\\,arcmin) resolution and no beam effect is corrected for when estimating the angular power spectrum. 
This is equivalent to convolving the maps by Gaussian beams of finite resolution and correcting the power spectra for the associated Gaussian beam window functions.\n\n\subsection{Combining signals and building the simulated maps}\n\n\label{sec:simulations}\n\nThe simulated $(Q,U)$ maps $M_\nu$, for a given simulation, can be expressed as the sum:\n\n\begin{equation}\nM_\nu=S^{\rm CMB}_{\nu,r_{\rm sim}}+S^{\rm dust}_\nu+N_\nu.\n\label{eq:map}\n\end{equation}\n\begin{figure*}\n \centering\n \includegraphics[scale = 0.2]{Figures\/simulations.pdf}\n \caption{\footnotesize Mean value over the $N_{\rm sim}$ simulations of the $B$-mode angular power spectra $\mathcal{D}_{\ell}(\nu_i \times \nu_j)$ for the {\tt d1c} simulation type, with $r_{\rm sim}=0$. The color bar spans all the $N_{\rm cross}$ spectra $\mathcal{D}_{\ell}(\nu_i \times \nu_j)$, associated with their reduced cross-frequency $\nu_{\rm red.}=\sqrt{\nu_i \nu_j}$, from 100\,GHz (dark red) to 402\,GHz (dark blue). The input CMB lensing power spectrum is shown as a black dashed line.}\n \label{fig:simulations}\n\end{figure*}\n\n\nCosmic microwave background and noise are simulated stochastically: for each simulation, we generate a new realization of the CMB maps $S^{\rm CMB}_{\nu,r_{\rm sim}}$ and the noise maps $N_\nu$. The dust map $S^{\rm dust}_\nu$ is the same for each simulation, at a given frequency.\n\nHereafter, we use the notation {\tt d0}, {\tt d1T}, and {\tt d1} to refer to simulations containing only dust and \textit{LiteBIRD}{} noise, {\tt d0c}, {\tt d1Tc}, and {\tt d1c} for simulations including CMB, dust, and \textit{LiteBIRD}{} noise and, finally, {\tt c} for the simulation containing only CMB and \textit{LiteBIRD}{} noise. 
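The overlapping-band sensitivity combination of the previous subsection and the map sum above can be sketched together as follows (function names and the toy inputs are ours):

```python
import numpy as np

def combined_sigma(sig_fine, sig_coarse, theta_min, theta_max):
    """Weighted combination of two overlapping-band sensitivities: the
    finer-beam sensitivity is rescaled by theta_min/theta_max before
    inverse-variance averaging."""
    w_fine = (theta_min / theta_max * sig_fine) ** -2
    w_coarse = sig_coarse ** -2
    return np.sqrt(1.0 / (w_fine + w_coarse))

def simulate_map(s_cmb, s_dust, sigma_pix, rng):
    """One simulated map, i.e. the sum M_nu = S_CMB + S_dust + N_nu, with
    white Gaussian noise of per-pixel standard deviation sigma_pix."""
    noise = rng.normal(0.0, sigma_pix, size=np.shape(s_cmb))
    return np.asarray(s_cmb) + np.asarray(s_dust) + noise

rng = np.random.default_rng(0)
sigma = combined_sigma(10.0, 10.0, 30.0, 30.0)  # equal inputs -> 10/sqrt(2)
m = simulate_map(np.zeros(12), np.ones(12), sigma_pix=1.0, rng=rng)
```

In the full simulations `sigma_pix` would additionally be scaled by the pixel size, as described above.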
The different components present in these different {simulation types} are summarized in Table~\\ref{tab:sims}.\n\n\\begin{table}[t!]\n\\centering\n \\begin{tabular}{c !{\\color{white}\\vrule width 1pt} c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c}\n & $S^{\\rm CMB}_{\\nu,r_{\\rm sim}}$ & $S^{\\tt d0}_\\nu$ & $S^{\\tt d1T}_\\nu$ & $S^{\\tt d1}_\\nu$ & $N_\\nu$ \\\\[0.5ex]\\noalign{\\color{white}\\hrule height 1pt}\n {\\tt c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d0} & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1T} & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1} & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d0c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- 
cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1Tc} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\end{tabular}\n\n\\caption{\\footnotesize Summary of the different components present in the simulated maps $M_\\nu$ in Eq.~\\ref{eq:map}, for every \\emph{simulation type}. A tick on a green background signifies that the component is present in the simulations, red with a cross symbol shows that it is absent.}\n\\label{tab:sims}\n\\end{table}\n\n\\subsection{ \\label{sect:spectra} Angular power spectra of the simulations}\n\n\\subsubsection{\\label{sect:mask}Mask}\n\nA mask is applied on the simulated maps presented in Sect.~\\ref{sec:simulations} in order to exclude the Galactic plane from the power-spectrum estimation. The mask is created by setting a threshold on the polarized intensity ($P=\\sqrt{Q^2+U^2}$) of the \\textit{Planck}{} 353\\,GHz map \\citep{PlanckOverview} \\footnote{\\url{http:\/\/pla.esac.esa.int\/pla\/}}, smoothed with a $10^{\\circ}$ beam. 
In order to keep $f_{\rm sky}= 0.7$, $f_{\rm sky}= 0.6$, and $f_{\rm sky}= 0.5$, the cut is applied at 121\,$\mu$K, 80\,$\mu$K, and 53\,$\mu$K, respectively. We then apply a C2 apodization to the binary mask with a scale of $5^{\circ}$ using {\sc Namaster} \citep{namaster}. The resulting Galactic masks are displayed in Fig.~\ref{fig:mask}. These masks are similar to those used in \citet{PlanckDust2}.\n\n\subsubsection{Estimation of the angular power spectra}\n\label{sec:spectra_estimation}\n\nWe use the {\sc Namaster}\footnote{\url{https:\/\/github.com\/LSSTDESC\/NaMaster}} software \citep{namaster} to compute the angular power spectra of each simulation. {\sc Namaster} allows us to correct for the $E$ to $B$ leakage bias due to the incomplete sky coverage. We use a \emph{purification} process to suppress the effect of the $E$ to $B$ leakage in the variance. For every simulation, from the set of maps $M_{\nu_i}$, we compute all the possible auto-frequency and cross-frequency spectra $\mathcal{D}_\ell({\nu_i\times\nu_j})\equiv\mathcal{D}_\ell({M_{\nu_i}\times M_{\nu_j}})$ with\n\n\begin{align}\n\nu_i\times \nu_j \in \left\{\right.&100\times 100, 100 \times 119, 100 \times 140,\ \dots, 100\times402,\nonumber\\\n&119\times119,\ \dots, 119\times402,\nonumber\\[-2mm]\n&\qquad\qquad\vdots\nonumber\\\n&337\times337, 337\times402,\nonumber\\\n&\left.\!\!402 \times 402 \right\},\n\label{eq:all_cross}\n\end{align}\n\n\noindent leading to $N_{\rm cross}=N_{\rm freq}\cdot(N_{\rm freq}+1)\/2=45$ cross-frequency spectra. These spectra are displayed in Fig.~\ref{fig:simulations} for the case of the {\tt d1c} simulation\ntype.\\\n\nIn order to avoid noise auto-correlation in the auto-spectra ({i.e.}, $\mathcal{D}_\ell(\nu_i\times\nu_j)$ when $i=j$), the latter are estimated in a way that differs slightly from what is presented in Sect.~\ref{sec:Instrsim}. 
We simulate two noise-independent data subsets at an observing frequency $\\nu_i$, with a noise amplitude a factor of $\\sqrt{2}$ higher than that of the frequency band, and compute the cross-angular power spectrum between them. Thus, $\\mathcal{D}_\\ell(\\nu_i\\times\\nu_i)$ is free from noise auto-correlation bias, at the expense of multiplying the noise amplitude in the spectrum by a factor of two. This approach is similar to that commonly used by the \\textit{Planck}{} Collaboration \\citep[see e.g.,][]{PlanckDust,PlanckDust2,tristram}.\n\nThe spectra are evaluated in the multipole interval $\\ell \\in [1,200]$ in order to focus on the reionization and recombination bumps of the primordial $B$-mode spectra. \nThe spectra are binned in $N_{\\ell}=20$ bins of size $\\Delta \\ell = 10$ using {\\sc Namaster}. The same binning is applied throughout this article such that, in the following, the multipole $\\ell$ denotes the multipole bin of size $\\Delta\\ell=10$ centered on $\\ell$\\footnote{The $N_\\ell$ multipole bins are centered on the following $\\ell$ values: $[6.5, 16.5, 26.5, 36.5, 46.5, 56.5, 66.5, 76.5, 86.5, 96.5, 106.5, 116.5, 126.5, 136.5, 146.5, 156.5, 166.5, 176.5, 186.5, 196.5]$.}.\n\nFrom the sets of $(Q,U)$ maps, {\\sc Namaster} computes the $\\mathcal{D}_\\ell^{EE}$, $\\mathcal{D}_\\ell^{BB}$, and $\\mathcal{D}_\\ell^{EB}$ angular power spectra; for the sake of the present analysis, we keep only $\\mathcal{D}_\\ell^{BB}$. Hence, when we discuss or analyze power spectra, we are referring to the $B$-mode power spectra $\\mathcal{D}_\\ell^{BB}$.
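The pair and bin bookkeeping described above can be sketched as follows (an illustrative snippet, not the pipeline code; the nine bands are labeled by index only, since just the enumeration matters here):

```python
from itertools import combinations_with_replacement

# Illustrative bookkeeping for the cross-spectrum set of Eq. (all_cross).
n_freq = 9                      # frequency bands between 100 and 402 GHz
pairs = list(combinations_with_replacement(range(n_freq), 2))
n_cross = len(pairs)            # N_freq * (N_freq + 1) / 2 = 45

# Twenty multipole bins of width Delta_ell = 10, centered on
# 6.5, 16.5, ..., 196.5, matching the binning quoted in the text.
bin_centers = [6.5 + 10.0 * k for k in range(20)]
```

The same ordering of `(i, j)` pairs is what fixes the layout of the SED vector fitted at each multipole bin.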
All spectra are expressed in $(\\mu{\\rm K}_{\\rm CMB})^2$.\n\n\\section{\\label{sec:fit}Best-fit implementation}\n\nIn order to characterize the complexity of the dust SED that will be measured by \\textit{LiteBIRD}{}, we modeled the angular power spectra of our simulations described in Sect.~\\ref{sec:sims} over the whole frequency and multipole ranges with the moment expansion formalism introduced in Sect.~\\ref{sec:formalism}.\n\n\\subsection{General implementation}\n\\label{sec:fit_dust}\n\nFor each multipole $\\ell$, we ordered the angular power spectra $\\mathcal{D}_\\ell^{BB}(\\nu_i\\times\\nu_j)$ as in Eq.~\\ref{eq:all_cross} in order to build an SED that is a function of both $\\nu_i$ and $\\nu_j$. We fit this SED with models such as Eq.~\\ref{eq:moments}, using a Levenberg-Marquardt $\\chi^2$ minimization with {\\tt mpfit} \\citep{mpfit}\\footnote{\\url{https:\/\/github.com\/segasai\/astrolibpy\/tree\/master\/mpfit}}. All the fits performed with {\\tt mpfit} were also repeated with more computationally expensive Markov chain Monte Carlo (MCMC) sampling using {\\tt emcee} \\citep{emcee}, giving compatible results, well within the error bars.\n\nThe reduced $\\chi^2$ is given by\n\n\\begin{equation}\n \\chi^2 = \\frac{1}{N_{\\rm d.o.f.}}\\vec{R}^T\\mathbb{C}^{-1}\\vec{R},\n\\label{eq:chi2}\n\\end{equation}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale = 0.3]{Figures\/corr2.png}\n \\caption{\\footnotesize Correlation matrix (${\\rm Corr}_{\\ell\\ell'} \\equiv \\mathbb{C}_{\\ell\\ell'}\/\\sqrt{\\mathbb{C}_{\\ell\\ell}\\mathbb{C}_{\\ell'\\ell'}}$) for the $N_{\\rm sim}$ simulations in {\\tt d1c}. Every block represents a value of $\\ell$ and contains the ordered $N_{\\rm cross} = 45$ cross-spectra.
The red squares represent the truncation of the full covariance matrix applied in the analysis (kept entries in red, other entries set to zero).}\n \\label{fig:cov}\n\\end{figure}\n\n\\noindent where $N_{\\rm d.o.f.}$ is the number of degrees of freedom and $\\mathbb{C}$ is the covariance matrix of our $N_{\\rm sim}$ simulations, represented in Fig.~\\ref{fig:cov}, of dimension $(N_{\\ell}\\cdot N_{\\rm cross})^2$:\n\\begin{equation}\n \\mathbb{C}_{\\ell,\\ell'}^{i\\times j,k\\times l} = {\\rm cov}\\left(\\mathcal{D}^{\\rm sim}_\\ell (\\nu_i \\times \\nu_j),\\mathcal{D}^{\\rm sim}_{\\ell'} (\\nu_k \\times \\nu_l)\\right).\n\\label{eq:cov}\n\\end{equation}\n\nThe entire covariance matrix $\\mathbb{C}$ is, in general, not invertible. To avoid this, we kept only the $\\ell=\\ell'$ block-diagonal of $\\mathbb{C}$ with the strongest correlation values\\footnote{${\\rm Corr}_{\\ell\\ell'} \\equiv \\mathbb{C}_{\\ell\\ell'}\/\\sqrt{\\mathbb{C}_{\\ell\\ell}\\mathbb{C}_{\\ell'\\ell'}}$}, as well as the $(\\ell=6.5,\\ell'=16.5)$ off-diagonal blocks showing a significant anti-correlation, as illustrated in Fig.~\\ref{fig:cov}. It was then possible, most of the time, to invert the thus-defined truncated covariance matrix with the required precision. \n\nIn the case of the {\\tt d1} simulation type, we experienced a fit convergence issue for $\\sim20$\\,\\% of the simulations, leading to a very large $\\chi^2$. In order to overcome this problem, two options lead to identical results: throwing away the outliers from the analysis or fitting using only the block-diagonal matrix (i.e., the $\\ell=6.5$, $\\ell'=16.5$ block is set to zero). This last option solves the convergence issue while providing sufficient precision.
The results presented in the following use the block-diagonal matrix when the simulation type is {\\tt d1}.\n\nFinally, in Eq.~\\ref{eq:chi2}, $\\vec{R}$ is the residual vector of size $N_{\\ell}\\cdot N_{\\rm cross}$ associated with every simulation:\n\n\\begin{equation}\n\\vec{R} =\n\\begin{pmatrix}\n\\mathcal{R}_{\\ell = 6.5}(100 \\times 100) \\\\\n\\mathcal{R}_{\\ell = 6.5}(100 \\times 119) \\\\\n\\vdots \\\\\n\\mathcal{R}_{\\ell = 16.5}(100 \\times 100)\\\\\n\\vdots \\\\\n\\mathcal{R}_{\\ell = 196.5}(402 \\times 402)\\\\\n\\end{pmatrix}, \n\\end{equation}\n\n\\noindent with $\\mathcal{R}_\\ell (\\nu_i \\times \\nu_j) = \\mathcal{D}^{\\rm sim}_\\ell (\\nu_i \\times \\nu_j) - \\mathcal{D}^{\\rm model}_\\ell(\\nu_i \\times \\nu_j)$.\n\nThe model we fit is given by:\n\n\\begin{equation}\n\\begin{split}\n \\mathcal{D}_{\\ell}^{\\rm model}(\\nu_i \\times \\nu_j) &= \\mathcal{D}_{\\ell}^{{\\rm dust}}\\left(\\beta_0(\\ell), T_0(\\ell), \\mathcal{D}^{\\mathcal{M}\\times\\mathcal{N}}_{\\ell}(\\nu_i\\times\\nu_j)\\right) \\\\ & + A_{\\rm lens} \\cdot \\mathcal{D}_{\\ell}^{{\\rm lensing}} + r \\cdot \\mathcal{D}_{\\ell}^{{\\rm tensor}}, \n\\label{eq:model}\n\\end{split}\n\\end{equation}\n\n\\noindent where $A_{\\rm lens}$ is not a free parameter and will remain fixed to zero (when there is no CMB, simulation types {\\tt d0}, {\\tt d1T,} and {\\tt d1}) or one (when the CMB is included, simulation types {\\tt d0c}, {\\tt d1Tc,} and {\\tt d1c}). We leave the question of the impact of dust modeling with moments on the lensing measurement for future work. In Eq.~\\ref{eq:model}, the free parameters can thus be $\\beta_{0}(\\ell)$, $T_{0}(\\ell)$, $\\mathcal{D}^{\\mathcal{M}\\times\\mathcal{N}}_{\\ell}(\\nu_i\\times\\nu_j)$, and the tensor-to-scalar ratio $r$. The estimated value of $r$ is referred to as $\\hat{r}$.\n\nNo priors on the parameters are used in order to explore the parameter space with minimal assumptions.
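The evaluation of the reduced $\chi^2$ with an $\ell$-block-diagonal covariance can be sketched as follows (a minimal illustration with random, synthetic residuals and covariance blocks; sizes match the text but the data are made up, and this is not the pipeline code):

```python
import numpy as np

# Sizes from the text: 20 multipole bins, 45 cross-spectra per bin.
n_ell, n_cross = 20, 45

rng = np.random.default_rng(0)
# Synthetic residual vector R, ordered as (ell bin, cross-spectrum).
R = rng.standard_normal(n_ell * n_cross)

# Block-diagonal covariance: one SPD (n_cross x n_cross) block per ell bin.
blocks = []
for _ in range(n_ell):
    A = rng.standard_normal((n_cross, n_cross))
    blocks.append(A @ A.T + n_cross * np.eye(n_cross))

def reduced_chi2(R, blocks, n_params=0):
    """chi^2 / N_dof using the truncated, block-diagonal covariance."""
    chi2 = 0.0
    for k, C in enumerate(blocks):
        r = R[k * n_cross:(k + 1) * n_cross]
        chi2 += r @ np.linalg.solve(C, r)   # r^T C^{-1} r per ell block
    return chi2 / (R.size - n_params)

chi2_red = reduced_chi2(R, blocks)
```

Keeping only the blocks makes each inversion a well-conditioned 45x45 solve instead of a 900x900 one, which is why the block-diagonal option also cures the convergence issue mentioned above.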
Finally, a frequency-dependent conversion factor is included in $\\mathcal{D}_{\\ell}^{{\\rm dust}}$ -- from (MJy$\\cdot$sr$^{-1})^2$ to $(\\mu$K$_{\\rm CMB})^2$ -- to express the dust spectra in ($\\mu$K$_{\\rm CMB})^2$ units. In those units, $\\mathcal{D}_{\\ell}^{{\\rm lensing}}$ and $\\mathcal{D}_{\\ell}^{\\rm tensor}$ are frequency-independent.\n\nTo mitigate the impact of outliers in our simulations, all the final values of the best-fit parameters and $\\chi^2$ distributions are represented by their median and median absolute deviations over $N_{\\rm sim}$ values. For the tensor-to-scalar ratio $\\hat{r}$, we chose to represent all the best-fit values from the $N_{\\rm sim}$ simulations in a histogram and we assume its distribution is normal. Fitting a Gaussian curve on this histogram and getting the mean and standard deviation gives us the final values of $\\hat{r}$ and $\\sigma_{\\hat{r}}$ presented in the paper.\n\n\n\n\\subsection{Implementation for the dust component}\n\\label{sec:fit_general}\n\nFor the dust component, we consider four different \\emph{fitting schemes}, corresponding to four expressions for the dust model $\\mathcal{D}^{\\rm dust}_\\ell$ in Eq.~\\ref{eq:model}, which are referred to as \"MBB\", \"$\\beta$-1\", \"$\\beta$-$T$\", and \"$\\beta$-2\". Each of them corresponds to a truncation of Eq.~\\ref{eq:moments}, keeping only some selected terms of the moment expansion: MBB stands for those of the modified black body, $\\beta$-1 for those of the expansion in $\\beta$ at first order, $\\beta$-2 for the expansion in $\\beta$ at second order, and $\\beta$-$T$ for the expansion in both $\\beta$ and $T$ at first order. We chose the $\\beta$-1 and $\\beta$-2 truncations based on the studies of \\citet{Mangilli} and \\citet{Azzoni}, where the dust SED moment expansion is performed only with respect to $\\beta$. 
The $\\beta$-$T$ fitting scheme is instead the first-order truncation in both $\\beta$ and $T$, introduced here for the first time at the power spectrum level. The parameters fitted in each of these fitting schemes are summarized in Table~\\ref{tab:parameter}. We note that the $\\beta$-2 and $\\beta$-$T$ fitting schemes share the same number of free parameters. Finally, when we fit $\\hat{r}$ at the same time as the dust parameters, the fitting schemes will be referred to as $r$MBB, $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2.\n\nDifferent physical processes are expected to occur at different angular scales, leading to different SED properties. Thus, we estimate the dust-related parameters with one parameter per multipole bin. As an example, we estimate $\\beta_0 = \\beta_0(\\ell)$ and $T_0 = T_0(\\ell)$ to be able to take into account their scale dependence, at the cost of increasing the number of free parameters in our model. This is also true for the higher order moments. On the other hand, $\\hat{r}$ is not scale dependent and, when it is fitted, we add one single parameter over the whole multipole range.\n\nIn \\cite{Mangilli}, the first-order moment expansion parameter $\\mathcal{D}_\\ell^{A\\times\\omega^\\beta_1}$ is considered to be the leading order correction to the MBB spectral index. We applied a similar approach in the present work, extending it to the dust temperature when it is fitted. 
In our pipeline, we proceed iteratively:\n\n\\begin{enumerate}\n\\item[(i)] we fit $\\beta_0(\\ell)$ and $T_0(\\ell)$ at order zero (MBB), for each $\\ell$;\n\n\\item[(ii)] we fix $\\beta_0(\\ell)$ and $T_0(\\ell)$ and fit the higher order parameters, as in Eq.~\\ref{eq:moments};\n\n\\item[(iii)] we update $\\beta_0(\\ell)$ to $\\beta_{\\rm corr}(\\ell)$ (and $T_0(\\ell)$ to $T_{\\rm corr}(\\ell)$ in the case of $\\beta$-$T$) as:\n\n\\begin{equation}\n \\beta_{\\rm corr}(\\ell) = \\beta_0(\\ell) + \\frac{\\mathcal{D}^{A \\times \\omega^{\\beta}_1 }_{\\ell}}{\\mathcal{D}^{A \\times A}_{\\ell}},\\quad T_{\\rm corr}(\\ell) = T_0(\\ell) + \\frac{\\mathcal{D}^{A \\times \\omega^{T}_1 }_{\\ell}}{\\mathcal{D}^{A \\times A}_{\\ell}},\n\\label{eq:iteration}\n\\end{equation}\n\\item[(iv)] we iterate from (ii), fixing $\\beta_0(\\ell)=\\beta_{\\rm corr}(\\ell)$, until $\\mathcal{D}_\\ell^{A \\times \\omega^{\\beta}_1}$ converges to be compatible with zero (and $T_0(\\ell)=T_{\\rm corr}(\\ell)$, until $\\mathcal{D}_\\ell^{A \\times \\omega^{T}_1}$ converges to zero in the case of $\\beta$-$T$).\n\\end{enumerate}\nWe used three such iterations, which we found to be sufficient to guarantee convergence.
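The iteration can be sketched as follows (a toy example: the linear response assumed for the first-order moment is made up for illustration and is not the pipeline fit):

```python
# Schematic illustration of the iterative correction of Eq. (iteration):
# the first-order moment D_ell^{A x w1^beta} is reabsorbed into the pivot
# beta_0 until it is compatible with zero.
def iterate_beta(beta0, fit_moments, n_iter=3):
    """fit_moments(beta0) -> (D_A_w1, D_A_A) at fixed pivot beta0."""
    for _ in range(n_iter):
        d_a_w1, d_a_a = fit_moments(beta0)
        beta0 = beta0 + d_a_w1 / d_a_a     # beta_corr of Eq. (iteration)
    return beta0

# Toy "fit": the first moment is proportional to the offset between the
# pivot and the true index, so the iteration converges to beta_true.
beta_true = 1.54
toy_fit = lambda b: (0.8 * (beta_true - b), 1.0)   # (D_A_w1, D_A_A)
beta_corr = iterate_beta(1.7, toy_fit)             # approaches 1.54
```

With this toy response, each pass shrinks the offset by a factor of five, which is consistent with a small, fixed number of iterations being sufficient.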
As the moment expansion is a nonorthogonal and incomplete basis \\citep{Chluba}, this iterative process is performed to ensure that the expansions up to different orders share the same $\\beta_0(\\ell)$ and $T_0(\\ell)$ with $\\mathcal{D}^{A \\times \\omega^{\\beta}_1 }_\\ell=0$ and $\\mathcal{D}^{A \\times \\omega^{T}_1 }_\\ell=0$.\n\n\n\\begin{table}[t!]\n\\centering\n \\normalsize\n \\begin{tabular}{c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c}\n & MBB & $\\beta$-1 & $\\beta$-$T$ & $\\beta$-2\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n $N_{\\rm param.}$ & \\cellcolor{mygrey} $3N_\\ell$& \\cellcolor{mygrey} $2N_\\ell$ & \\cellcolor{mygrey} $5N_\\ell$ & \\cellcolor{mygrey} $5N_\\ell$ \\\\\\noalign{\\color{white}\\hrule height 1pt}\n $\\beta_0(\\ell)$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myyellow}$\\circ$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $T_0(\\ell)$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myred}$\\times$ & \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times A}$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_1^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- 
cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_1^\\beta}$ & \\cellcolor{myred}$\\times$& \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$& \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; &\\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^T\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$& \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_2^\\beta\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & 
\\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n \\end{tabular}\n \\normalsize\n\\caption{\\footnotesize Summary of the fitted parameters in the four dust moment expansion \\emph{fitting schemes} we consider (MBB, $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2), in Eq.~\\ref{eq:moments}. A tick on a green background signifies that the parameter is fitted, red with a cross symbol shows that the parameter is not fitted, and a circle symbol on yellow means that the parameter is fixed and corrected through an iterative process as presented in Sect.~\\ref{sec:fit_general}. $\\mathcal{D}_\\ell^{A\\times A}$ is fixed to the MBB best-fit value in the case of $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2 and all the other moments are set to zero when they are not fitted. When $\\hat{r}$ is fitted at the same time, the fitting schemes are denoted $r$MBB, $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2, and they have one more parameter than the number of parameters reported in the first line.}\n\\label{tab:parameter}\n\\end{table}\n\n\\section{\\label{sec:results} Results}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d0.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d1T.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d1.pdf}\n \\caption{\\footnotesize Median of the reduced $\\chi^2$ in every multipole bin $\\ell$, for all the $N_{\\rm sim}$ simulations of {\\tt d0} (top, orange), {\\tt d1T} (middle, green) and {\\tt d1} (bottom, blue), on $f_{\\rm sky}=0.7$. The reduced $\\chi^2$ values are reported for the four different fitting schemes: MBB (circles), $\\beta$-1 (crosses), $\\beta$-$T$ (diamonds) and $\\beta$-2 (triangles). The values for the four fitting schemes are shifted from each others by $\\ell=2$, in order to distinguish them. 
The black dashed line represents $\\chi^2_{\\rm red}=1$.}\n \\label{fig:chi2dust}\n\\end{figure}\n\nIn this section, we present our evaluation of the best-fit parameters for the different {fitting schemes} presented in Sect.~\\ref{sec:fit_dust} on the $B$-mode cross-angular power spectra computed from the different {simulation types} presented in Sect.~\\ref{sec:simulations} and on the Galactic mask keeping $f_{\\rm sky}=0.7$, which is defined in Sect.~\\ref{sect:mask}.\nWe first tested the simulation types containing only dust and noise in order to calibrate the dust complexity of our data sets in Sect.~\\ref{sec:results_dust_only}. We then used CMB only plus noise simulations to assess the minimal error on $\\hat{r}$ in Sect.~\\ref{sec:c} and, finally, we explored the dust, CMB, and noise simulation types to assess the impact of the dust complexity on $\\hat{r}$ in Sect.~\\ref{sec:results_dust_CMB}.\n\n\\subsection{Dust only}\n\\label{sec:results_dust_only}\n\nTo evaluate the amplitude of the dust moment parameters contained in the dust simulations in the absence of CMB, we ran the fitting schemes presented in Sect.~\\ref{sec:fit_dust} in the three simulation types {\\tt d0}, ${\\tt d1T,}$ and ${\\tt d1}$ presented in Sect.~\\ref{sec:simulations}. In these cases, $A_{\\rm lens}$ and $r$ in Eq.~\\ref{eq:model} are both fixed to zero and the fitted parameters are given in Table~\\ref{tab:parameter} for every fitting scheme.\n\n\\subsubsection{{\\tt d0}}\n\\label{sec:d0}\n\nThe {\\tt d0} dust maps presented in Sect.~\\ref{sec:ingredients} extrapolate between frequency bands with a MBB SED with constant parameters over the sky: $\\beta_{\\tt d0} = 1.54$ and $T_{\\tt d0} = 20$\\,K. 
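For reference, the {\tt d0} SED above can be sketched as follows (a minimal illustration with the constant parameters quoted in the text; the 353 GHz pivot and the normalization to unity there are assumptions made for this sketch):

```python
import numpy as np

# Modified blackbody (MBB) SED with the constant d0 parameters:
# beta = 1.54, T = 20 K. Normalisation and pivot frequency are arbitrary.
H_OVER_K = 4.799243e-11  # h / k_B in K per Hz

def planck_bnu(nu_ghz, T):
    """Planck function B_nu(T), up to a constant normalisation."""
    x = H_OVER_K * nu_ghz * 1e9 / T
    return (nu_ghz ** 3) / np.expm1(x)

def mbb(nu_ghz, beta=1.54, T=20.0, nu0_ghz=353.0):
    """MBB SED, normalised to 1 at the (assumed) pivot frequency nu0."""
    scale = (nu_ghz / nu0_ghz) ** beta
    return scale * planck_bnu(nu_ghz, T) / planck_bnu(nu0_ghz, T)

sed = mbb(np.array([100.0, 217.0, 353.0]))  # rises with frequency here
```

Because the {\tt d0} parameters are constant over the sky, every line of sight follows this single curve, which is why no moments are expected for this simulation type.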
We performed the fit with the four fitting schemes presented in Sect.~\\ref{sec:fit_dust}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/betadustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/Tgauss.pdf}\n \\caption{\\footnotesize (Top): Median of the best-fit values of $\\beta_0(\\ell)$ in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for the MBB (circles). $\\beta_{\\tt d0}$ is marked by the dashed black line. (Bottom): Same as above but with $T_0(\\ell)$, the black dashed lines being $T_{\\tt d0}=20$\\,K and $T_{\\tt d1T}=21.9$\\,K.}\n \\label{fig:betaTdust}\n\\end{figure}\n\nIn Fig.~\\ref{fig:chi2dust} the values of the reduced $\\chi^2(\\ell)$ for each fitting scheme are displayed. For every fitting scheme (MBB, $\\beta$-1, $\\beta$-$T$, and $\\beta$-2), the reduced $\\chi^2$ values are close to 1 over the whole multipole range (slightly below 1 for the $\\beta$-1, $\\beta$-$T$, and $\\beta$-2 fitting schemes). This indicates that the MBB is a good fit to the cross-angular power spectra computed from the {\\tt d0} maps with a spatially invariant MBB SED, as expected. Adding higher order parameters, as with $\\beta$-1, $\\beta$-$T$, and $\\beta$-2, has no significant effect on the $\\chi^2$.\n\nIn Fig.~\\ref{fig:betaTdust} we can see that the best-fit values of $\\beta_0(\\ell)$ and $T_0(\\ell)$ are compatible with constant values $\\beta_0(\\ell)=\\beta_{\\tt d0}$ and $T_0(\\ell)=T_{\\tt d0}$, as expected for this simulated data set.\n\nThe best-fit values of the dust amplitude and the moment-expansion parameters are presented in Figs.~\\ref{fig:Adust}, \\ref{fig:moments1}, \\ref{fig:moments2}, and \\ref{fig:moments3}, respectively. The amplitude power spectrum is compatible with that of the dust template map used to build {\\tt d0}, and the moment-expansion parameters are compatible with zero for every fitting scheme, as expected with no spatial variation of the SED.
\nTherefore, the moment expansion method presented in Sect.~\\ref{sec:formalism} passes the \\emph{null test} in the absence of SED distortions, with the {\\tt d0} simulated data set. \n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/Adustgauss.pdf}\n \\caption{\\footnotesize Median of the best-fit values of $\\mathcal{D}_\\ell^{A \\times A}$ for {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) using the MBB fitting scheme. The values for the three simulation types are shifted with respect to one another by $\\Delta\\ell=2$ in order to distinguish them. The black dashed line is the amplitude power spectrum of the dust template map used to build the three simulation sets {\\tt d0}, {\\tt d1T,} and {\\tt d1}.}\n \\label{fig:Adust}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/w1w1dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the first-order moment $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{\\beta}_1}$ for {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue), fitting with $\\beta$-1 (crosses), $\\beta$-2 (triangles), and $\\beta$-$T$ (diamonds).}\n \\label{fig:moments1}\n\\end{figure}\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/Aw2dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/w1w2dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/w2w2dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the second-order $\\mathcal{D}_\\ell^{A\\times\\omega^{\\beta}_2}$, $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{\\beta}_2}$ and $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2\\times\\omega^{\\beta}_2}$ moment parameters in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for $\\beta$-2 (triangles).
}\n \\label{fig:moments2}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/b1t1dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/t1t1dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the first-order $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{T}_1}$ and $\\mathcal{D}_\\ell^{\\omega^{T}_1\\times\\omega^{T}_1}$ moment parameters in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for $\\beta$-$T$ (diamonds).}\n \\label{fig:moments3}\n\\end{figure}\n\n\\subsubsection{{\\tt d1T}}\n\\label{sec:d1T}\n\nWe now introduce, as a first layer of complexity, the spatial variations of the spectral index associated with a fixed temperature over the sky with the {\\tt d1T} simulation type. The dust temperature was fixed to $T_{\\rm d1T}=21.9$\\,K while the spectral index $\\beta(\\vec{n})$ was allowed to vary between lines of sight. The four different fitting schemes presented in Sect.~\\ref{sec:fit_dust} are applied to the cross-spectra of our simulations as in Sect.~\\ref{sec:d0}.\n\nThe reduced $\\chi^2(\\ell)$ values for each fitting scheme can be found in Fig.~\\ref{fig:chi2dust}. It can be seen that the MBB no longer provides a good fit for the dust SED, especially at low multipoles. Averaging effects of spatially varying SEDs are more important over large angular scales, and thus SED distortions and moments are expected to be more significant at low multipoles. Indeed, the moments added to the fit in $\\beta$-1 are enough to lower the reduced $\\chi^2$ such that it becomes compatible with 1 over almost all of the multipole range. The fitting schemes $\\beta$-$T$ and $\\beta$-2, including more parameters than $\\beta$-1, provide a comparably good fit, except in the multipole bin $\\ell=66.5$ where they are closer to 1.\n\nFigure~\\ref{fig:betaTdust} presents the best-fit values of $\\beta_0(\\ell)$ in the case of the MBB fit.
For the sake of clarity, the values after iteration (see Sect.~\\ref{sec:fit_dust}) for $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2 are not shown, but they present comparable trends. We can see that the best-fit values of $\\beta_0(\\ell)$ for this {\\tt d1T} simulation type are no longer compatible with a constant. The fitted $\\beta_0(\\ell)$ values show a significant increase at low multipoles ($\\ell<100$), up to $\\beta_0(\\ell=16.5)=1.65$. For $\\ell>100$, $\\beta_0(\\ell)$ is close to a constant of value $\\sim1.53$. This increase towards low $\\ell$ is correlated with the increase of the MBB $\\chi^2$ discussed in the previous paragraph. However, we note that in the lowest $\\ell$-bin, the $\\beta_0(\\ell)$ value is close to 1.53 and the $\\chi^2$ of the MBB fit is close to unity.\n\nThe best-fit values of $T_0(\\ell)$ are also presented in Fig.~\\ref{fig:betaTdust} in the case of the MBB fit. Here again, the values after iteration for the other fitting schemes are not presented, but are similar. The {\\tt d1T} $T_0(\\ell)$ best-fit values oscillate around $T_{\\tt d1T}=21.9$\\,K, without being strictly compatible with a constant value, as would be expected for this simulation type. This tends to indicate that the SED distortions due to the spatial variations of the spectral index affect the accuracy with which we can recover the correct angular dependence of the sky temperature. \n\nThe amplitude power spectrum is displayed in Fig.~\\ref{fig:Adust} for the MBB fitting scheme. The other fitting scheme results are not presented for clarity and would not be distinguishable from those of the MBB. The fitted $\\mathcal{D}_\\ell^{A\\times A}$ is compatible with that of the dust template map used to build the simulations.
\n\nAll the parameters of the moment expansion with respect to $\\beta$ can be found in Figs.~\\ref{fig:moments1} and \\ref{fig:moments2}, and are now significantly detected, except for $\\mathcal{D}_\\ell^{\\omega^\\beta_2 \\times \\omega^\\beta_2}$. In Fig.~\\ref{fig:moments3}, we can observe that the parameters of the moment expansion with respect to the temperature (only present in the $\\beta$-$T$ fit) remain undetected. The SED distortions due to the spatial variations of $\\beta$ are well detected, while no SED distortion linked to the temperature is seen, as expected for the ${\\tt d1T}$ simulation type.\n\n\\subsubsection{{\\tt d1}}\n\\label{sec:d1}\n\nWe now discuss the {\\tt d1} simulations, with the highest complexity in the polarized dust SED. In this more physically relevant simulation type, the dust emission is given by an MBB with variable index $\\beta(\\vec{n})$ and temperature $T(\\vec{n})$ over the sky.\nWe ran the four different fitting schemes on the {\\tt d1} simulation type, as we did in Sects.~\\ref{sec:d0} and \\ref{sec:d1T}.\n\nThe values of the reduced $\\chi^2(\\ell)$ are displayed in Fig.~\\ref{fig:chi2dust}. For the MBB and $\\beta$-1, the reduced $\\chi^2$ values are not compatible with unity, especially at low multipoles. This indicates that neither of them remains a good fit for the spatially varying SED with $\\beta(\\vec{n})$ and $T(\\vec{n})$. With $\\beta$-2 and $\\beta$-$T$, the $\\chi^2(\\ell)$ values become compatible with unity, except for the $\\ell=26.5$ bin. We note that $\\beta$-$T$ provides a slightly better fit than $\\beta$-2 in this bin.\n\nLooking at the medians of the best-fit values of $\\beta_0(\\ell)$ for {\\tt d1} in Fig.~\\ref{fig:betaTdust}, we can see that the spectral index changes with $\\ell$, as discussed in Sect.~\\ref{sec:d1T}, in a similar manner to the {\\tt d1T} simulation type.
The fitted temperature $T_0(\\ell)$ values for {\\tt d1} show an increasing trend from $\\sim17$ to $\\sim20.5$\\,K between $\\ell=16.5$ and $\\ell\\sim100$. At higher multipoles, $T_0(\\ell)$ is close to a constant temperature of 20.5\\,K. In {\\tt d1}, as for {\\tt d1T}, the angular scales at which we observe strong variations of $\\beta_0(\\ell)$ and $T_0(\\ell)$ are the ones for which we observe a poor $\\chi^2$ for some fitting schemes. Also, as for {\\tt d1T}, the largest angular scale $\\ell$-bin, at $\\ell=6.5,$ shows $\\beta$ and $T$ values close to the constant value at high $\\ell$, which are associated with $\\chi^2$ values closer to unity.\nThe best-fit values of the amplitude $\\mathcal{D}_\\ell^{A\\times A}$ are shown in Fig.~\\ref{fig:Adust}. These are similar to those of the other simulation types. \n\nThe moment-expansion parameters fitted on {\\tt d1} are shown in Figs.~\\ref{fig:moments1}, \\ref{fig:moments2}, and \\ref{fig:moments3}. For this simulation type, the moment parameters are all significantly detected with respect to both $\\beta$ and $T$. This was already the case with the \\textit{Planck}{} intensity simulations, produced in a similar way, as discussed in \\cite{Mangilli}. Their detection quantifies the complexity of the dust emission and the SED distortions from the MBB present in the {\\tt d1} simulation type, due to the spatial variations of $\\beta(\\vec{n})$ and $T(\\vec{n})$. \n\n\\subsection{\\label{sec:nodust} CMB only}\n\\label{sec:c}\n\nIn order to calibrate the accuracy with which the $r$ parameter can be constrained with the \\textit{LiteBIRD}{} simulated data sets presented in Sect.~\\ref{sec:simulations}, we tested the simulation type with no dust component, $M_\\nu^{\\tt c}$, and with no tensor modes ($r_{\\rm sim}=0$, only CMB lensing and noise).
We fit the expression in Eq.~\\ref{eq:model} with $\\mathcal{D}_\\ell^{\\rm dust}$ fixed to zero and $A_{\\rm lens}$ fixed to one ({i.e.}, $r$ is the only parameter we fit in this case).\nDoing so over the $N_{\\rm sim}$ simulations, we obtain $\\hat{r}= (0.7 \\pm 3.5) \\times 10^{-4}$.\nThis sets the minimal value we can expect to retrieve for $\\hat{r}$ with our assumptions if the dust component is perfectly taken into account.\n\n\\subsection{Dust and CMB}\n\\label{sec:results_dust_CMB}\n\nWe now present our analysis of the simulations including dust, CMB (lensing), and noise ({\\tt d0c}, {\\tt d1Tc}, and {\\tt d1c}) with no primordial tensor modes ($r_{\\rm sim}=0$). As described above, we applied the four fitting schemes for the dust to the three simulation types, simultaneously fitting $\\hat{r}$ and fixing $A_{\\rm lens}$ to~one (namely $r$MBB, $r\\beta$-1, $r\\beta$-$T$, and $r\\beta$-2). \n\nThe best-fit values of $\\beta_0(\\ell)$, $T_0(\\ell)$, and the moment expansion parameters $\\mathcal{D}_\\ell^{\\mathcal{M}\\times\\mathcal{N}}$ derived with the simulation types {\\tt d0c}, {\\tt d1Tc,} and {\\tt d1c} are not discussed further when they are compatible with the ones obtained for the {\\tt d0}, {\\tt d1T,} and {\\tt d1} simulation types and presented in Sect.~\\ref{sec:results_dust_only}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d0_SLD_full.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1T_SLD_full.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full.pdf}\n \\caption{\\footnotesize \\emph{(Top panel)}: Posterior on $\\hat{r}$ in the {\\tt d0c} simulation type for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}=0$.
\\emph{(Central panel)}: Same, but in the case of the {\\tt d1Tc} simulation type. \\emph{(Bottom panel)}: Same, but in the case of the {\\tt d1c} simulation type.}\n \\label{fig:rgauss1}\n\\end{figure}\n\n\n\\subsubsection{{\\label{sec:d0c}}{\\tt d0c}}\n\nFor {\\tt d0c}, as for {\\tt d0}, we recover the input constant spectral index and temperature $\\beta_{\\tt d0}$ and $T_{\\tt d0}$ at all angular scales for every fitting scheme. Furthermore, we do not detect any moment when fitting $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2. This simulation type therefore constitutes our null test when $\\hat{r}$ and the dust parameters are fitted at the same time. The addition of the CMB lensing in the simulations and the addition of $r$ to the fits thus neither leads to the detection of the moment parameters nor biases the recovery of the spectral index and the temperature.\n\nThe posterior distributions of the estimated tensor-to-scalar ratio $\\hat{r}$ are displayed in Fig.~\\ref{fig:rgauss1} and their means and standard deviations are summarized in Table~\\ref{tab:rvalues}. We note that $\\hat{r}$ is compatible with the input value ($r_{\\rm sim}=0)$ for all the fitting schemes. For $r$MBB and $r\\beta$-1, the dispersion $\\sigma_{\\hat{r}}$ is comparable with that of the CMB-only scenario discussed in Sect.~\\ref{sec:nodust}. For $r\\beta$-$T$ and $r\\beta$-2, the width of the distribution increases by a factor of $\\sim 2$ and $\\sim 4$, respectively.\n\n\\subsubsection{{\\tt d1Tc}}\n\\label{sec:d1Tc}\n\nThe posterior distribution of $\\hat{r}$ in the case of the {\\tt d1Tc} simulation type is displayed in Fig.~\\ref{fig:rgauss1} for the four fitting schemes and the mean value and standard deviation of these distributions are summarized in Table~\\ref{tab:rvalues}. We can see that in the case of $r$MBB, we fit $\\hat{r}\\pm\\sigma_{\\hat{r}}=(99.7\\pm6.2)\\times10^{-4}$. 
In that case, the input tensor-to-scalar ratio $r_{\\rm sim}=0$ is not recovered and we obtain a bias on the central value of $\\hat{r}$ of $\\sim 16\\,\\sigma_{\\hat{r}}$. As discussed in Sect.~\\ref{sect:limitsmbb}, this is expected because we know that the MBB is not a good dust model for a SED with a spatially varying spectral index, as we also verified in Sect.~\\ref{sec:d1T} by inspecting the $\\chi^2$ values.\n\nUsing the $r\\beta$-1 fitting scheme allows us to recover $\\hat{r}=(-8.0\\pm6.4)\\times10^{-4}$, where $r_{\\rm sim}$ is recovered within $\\sim\\,2\\sigma_{\\hat{r}}$, while $r\\beta$-2 and $r\\beta$-$T$ recover the input value within $1\\,\\sigma_{\\hat{r}}$ (with $\\hat{r}=(-3.6\\pm 13.0)\\times10^{-4}$ and $\\hat{r}=(0.7\\pm20.9)\\times10^{-4}$, respectively). As in Sect.~\\ref{sec:d0c}, the standard deviation remains similar between $r$MBB and $r\\beta$-1 and increases by a factor of $\\sim 2$ and $\\sim 4$ from $r\\beta$-1 to $r\\beta$-$T$ and $r\\beta$-2, respectively.\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c | ccc}\n $(\\hat{r} \\pm \\sigma_{\\hat{r}})\\times10^{4}$ & {\\tt d0c} & {\\tt d1Tc} & {\\tt d1c}\\\\\n \\hline\n $r$MBB & \\cellcolor{mygreen} $0.3 \\pm 3.9$ & \\cellcolor{myred}$99.7 \\pm 6.2$ & \\cellcolor{myred} $125.1 \\pm 5.9$\\\\\n $r\\beta$-1 & \\cellcolor{mygreen} $0.5 \\pm 4.5$ & \\cellcolor{myyellow} $-8.0 \\pm 6.4$& \\cellcolor{myred} $32.9 \\pm 6.5$\\\\\n $r\\beta$-$T$ & \\cellcolor{mygreen} $0.3 \\pm 9.5$ & \\cellcolor{mygreen} $-3.6 \\pm 13.0$ & \\cellcolor{mygreen} $-3.3 \\pm 11.7$\\\\\n $r\\beta$-2 & \\cellcolor{mygreen} $0.7 \\pm 16.4$ & \\cellcolor{mygreen} $0.7 \\pm 20.9$ & \\cellcolor{myyellow} $-37.4 \\pm 19.4$\\\\ \n \\hline \n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r}$ in units of $10^{-4}$ on $f_{\\rm sky}=0.7$. 
The green values are compatible with $r_{\\rm sim}=0$ at $1\\,\\sigma_{\\hat{r}}$, the yellow values are compatible with $r_{\\rm sim}=0$ at $2\\,\\sigma_{\\hat{r}}$, and the red values are incompatible with $r_{\\rm sim}=0$ at more than $2\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rvalues}\n\\end{table}\n\n\\subsubsection{{\\tt d1c}}\n\\label{sec:d1c}\n\nIn the case of the {\\tt d1c} simulation type, as in {\\tt d0c} and {\\tt d1Tc}, we fit $\\hat{r}$ in addition to the dust-related parameters. Here, the dust moment parameters are recovered as for {\\tt d1} (see Sect.~\\ref{sec:d1}), except for the $r\\beta$-2 fitting scheme.\n\nFigure~\\ref{fig:momentsd1d1c} compares the moment parameters between $\\beta$-2 on the {\\tt d1c} simulation type, fitting only the dust-related parameters, and $r\\beta$-2 on {\\tt d1c}, when jointly fitting the dust parameters and $\\hat{r}$. We observe that $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ is not consistently recovered when fitting $\\hat{r}$ in addition to the dust parameters.\n\nA similar comparison can be found in Fig.~\\ref{fig:moments2d1d1c} for the moment parameters between $\\beta$-$T$ and $r\\beta$-$T$ on the {\\tt d1c} simulation type. Using this fitting scheme, we can see that all the moments are correctly recovered when adding $\\hat{r}$ to the fit.\n\n\nThe $\\hat{r}$ posterior distributions in the case of {\\tt d1c} are displayed in Fig.~\\ref{fig:rgauss1} and summarized in Table~\\ref{tab:rvalues}. As discussed in Sect.~\\ref{sect:limitsmbb} and observed in Sect.~\\ref{sec:d1c}, the $r$MBB fit is highly biased, with $\\hat{r}=(125.1 \\pm 5.9)\\times10^{-4}$ (by more than 21\\,$\\sigma_{\\hat{r}}$). When fitting with the $r\\beta$-1 scheme, this bias is significantly reduced ($\\hat{r}=(32.9 \\pm 6.5)\\times10^{-4}$, $\\sim 5\\,\\sigma_{\\hat{r}}$ away from $r_{\\rm sim}=0$), illustrating the ability of the first-moment parameters to correctly capture part of the SED complexity. 
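The colour coding of Table~\\ref{tab:rvalues} reduces to a simple $n$-$\\sigma$ compatibility test; a minimal sketch reproducing the {\\tt d1Tc} column (values in units of $10^{-4}$, taken from the table):

```python
def compatibility(r_hat, sigma, r_sim=0.0):
    """Colour code of the table: green within 1 sigma of r_sim,
    yellow within 2 sigma, red beyond 2 sigma."""
    n_sigma = abs(r_hat - r_sim) / sigma
    if n_sigma <= 1.0:
        return "green"
    return "yellow" if n_sigma <= 2.0 else "red"

# {tt d1Tc} column of the table, per fitting scheme
d1Tc = {"rMBB": (99.7, 6.2), "rbeta-1": (-8.0, 6.4),
        "rbeta-T": (-3.6, 13.0), "rbeta-2": (0.7, 20.9)}
for scheme, (r, s) in d1Tc.items():
    print(scheme, compatibility(r, s))  # red, yellow, green, green
```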
However, performing the expansion in both $\\beta$ and $T$ with $r\\beta$-$T$ allows us to recover $r_{\\rm sim}$ without bias ($\\hat{r}=(-3.3\\pm 11.7)\\times10^{-4}$), highlighting the need for a description of the SED complexity in terms of dust temperature for this simulated data set, where both $\\beta$ and $T$ vary spatially. On the other hand, for $r\\beta$-2, a negative tension ($1.9\\,\\sigma_{\\hat{r}}$) can be observed: $\\hat{r}=(-37.4\\pm 19.4)\\times10^{-4}$. This tension is discussed in Sect.~\\ref{sec:bias}. \n\nFor {\\tt d1c}, the $\\hat{r}$ distribution widths roughly meet the foreground cleaning requirements of \\textit{LiteBIRD}{} presented in Sect.~\\ref{sec:LiteBIRD} for $r$MBB and $r\\beta$-1 but are higher for $r\\beta$-$T$ and $r\\beta$-2. We also note that, for the same number of free parameters, all the standard deviations $\\sigma_{\\hat{r}}$ slightly increase compared to the {\\tt d0c} simulation type. This is expected due to the increasing dust complexity. \n\n\\section{\\label{sec:discussion} Discussion}\n\n\n\\subsection{Lessons learnt}\n\nIn Sect.~\\ref{sec:results}, we applied the fitting pipeline introduced in Sect.~\\ref{sec:fit} to \\textit{LiteBIRD}{} simulated data sets on $f_{\\rm sky}=0.7$ and for $r_{\\rm sim}=0$, including the various dust simulation types defined in Sect.~\\ref{sec:ingredients}. We fitted the estimated $B$-mode power-spectra with the four different fitting schemes summarized in Table~\\ref{tab:parameter}. Our main results can be summarized as follows: \n\n\\begin{itemize}\n\\item The MBB fitting scheme provides a good fit for the dust component in the {\\tt d0} and {\\tt d0c} simulation types. \n However, when the spectral index changes with the angular scale, such as in the {\\tt d1T}, {\\tt d1Tc}, {\\tt d1,} and {\\tt d1c} simulations, this approach no longer provides a good fit because of the complexity of the dust SED. 
As a consequence, in the $r$MBB case, $r_{\\rm sim}$ cannot be recovered without a significant bias. \n\n\\item The $\\beta$-1 fitting scheme allows us to perform a good fit of the dust complexity for the {\\tt d0} and {\\tt d1T} simulations but not for {\\tt d1}, while the $r\\beta$-1 fitting scheme yields estimates of $\\hat{r}$ close to $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for {\\tt d0c}, and within $2\\,\\sigma_{\\hat{r}}$ for {\\tt d1Tc}, but presents a bias of $\\sim 6\\,\\sigma_{\\hat{r}}$ for {\\tt d1c}. \n\n\\item The $\\beta$-$T$ fitting scheme provides a good fit for every dust model, while using the $r\\beta$-$T$ fitting scheme allows us to recover $\\hat{r}$ values consistent with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for all the simulation types, but is associated with an increase of $\\sigma_{\\hat{r}}$ by a factor of $\\sim 2$ compared to the $r\\beta$-1 case.\n\n\\item The $\\beta$-2 fitting scheme also provides a good fit for each dust model, and the $r\\beta$-2 fitting scheme leads to values of $\\hat{r}$ compatible with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for all the simulation types but {\\tt d1c}. In this last case, there is a negative tension of $\\sim2\\,\\sigma_{\\hat{r}}$. \n For all the simulation types, there is an increase of $\\sigma_{\\hat{r}}$ by a factor of $\\sim 4$ compared to the $r\\beta$-1 case.\n\n\\end{itemize}\n\nThe present analysis shows that the temperature could be a critical parameter for the moment expansion in the context of \\textit{LiteBIRD}{}.\n\nIndeed, for simulations including a dust component with a spectral index and a temperature that both vary spatially, as in {\\tt d1}, the only fitting scheme allowing us to recover $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ is $r\\beta$-$T$, the expansion to first order in both $\\beta$ and $T$. This shows that expanding in $\\beta$ only, without treating $T$, is not satisfactory when looking at such large fractions of the sky. 
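The role of temperature can be illustrated with a toy calculation: the average of two MBBs that share $\\beta$ but have different temperatures is no longer exactly an MBB, so an expansion in $\\beta$ alone has to absorb a distortion that is not of the $\\beta$ type. A minimal numerical sketch with toy temperatures (not the simulation inputs):

```python
import numpy as np

H_OVER_K = 0.04799  # h/k_B in K/GHz

def planck(nu, T):
    """Planck law B_nu(T) up to constant factors (nu in GHz, T in K)."""
    return nu**3 / np.expm1(H_OVER_K * nu / T)

def mbb(nu, beta, T, nu0=353.0):
    """Modified black body normalised at the pivot nu0."""
    return (nu / nu0) ** beta * planck(nu, T) / planck(nu0, T)

nu = np.linspace(100.0, 402.0, 9)  # GHz, the frequency range of this work

# Sky average of two sightlines sharing beta = 1.54 but with T = 15 and 25 K,
# mimicking a spatially varying temperature (toy values).
sed = 0.5 * (mbb(nu, 1.54, 15.0) + mbb(nu, 1.54, 25.0))

# Best single-MBB description over a (beta, T) grid; amplitude is analytic.
best = np.inf
for b in np.linspace(1.0, 2.2, 121):
    for T in np.linspace(10.0, 40.0, 301):
        m = mbb(nu, b, T)
        a = np.dot(m, sed) / np.dot(m, m)
        best = min(best, np.max(np.abs(a * m / sed - 1.0)))
print(f"best single-MBB fractional residual: {best:.1e}")
```

The residual is small but nonzero: no single $(\\beta, T)$ pair reproduces the mixture, which is the kind of distortion the temperature moments are designed to capture.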
Indeed, when applying the $\\beta$-2 fitting scheme, the $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ parameter remains undetected for the {\\tt d1T} simulation type (Sect.~\\ref{sec:d1T}), while it is significantly detected using the {\\tt d1} simulation type (Sect.~\\ref{sec:d1}). Nevertheless, {\\tt d1T} and {\\tt d1} share the same template of $\\beta(\\vec{n})$ (Sect.~\\ref{sec:ingredients}) and they only differ by the sky temperature (constant for {\\tt d1T} and varying for {\\tt d1}). This suggests that the $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ observed with the {\\tt d1} simulations originates from the temperature variations and not from those of the spectral index. This observation shows that the $\\beta$-2 fitting scheme is less suitable than the $\\beta$-$T$ one for correctly recovering the moment-expansion parameters and $\\hat{r}$ when the temperature varies spatially.\n\nMoreover, we saw that $\\sigma_{\\hat{r}}$ is lower when using the fitting scheme $r\\beta$-$T$ instead of $r\\beta$-2 for every simulation type, even though both have the same number of free parameters. This second observation additionally encourages an approach where the SED is expanded with respect to both $\\beta$ and $T$. Nevertheless, the uncertainty on $\\hat{r}$ we obtain in this case ($\\sigma_{\\hat{r}}=1.17\\times10^{-3}$) is larger than the \\textit{LiteBIRD}{} requirements. \n\n\\subsection{Increasing the accuracy on the tensor-to-scalar ratio}\n\\label{sec:Opt}\n\nIn Sect.~\\ref{sec:d1} and Fig.~\\ref{fig:chi2dust}, we see that the MBB and $\\beta$-1 fitting schemes do not provide good fits for the {\\tt d1} dust simulations, especially at low multipoles ($\\ell \\lesssim 100$). 
Conjointly, in Fig.~\\ref{fig:moments3}, we can see that the $\\beta$-$T$ moment parameters are significantly detected for $\\ell \\lesssim 100$ and compatible with zero above that threshold, suggesting that their corrections to the SED are predominantly required at large angular scales.\n\nThis implies that we can improve the pipeline presented in Sect.~\\ref{sec:fit} to keep only the required parameters in order to recover $\\hat{r}$ compatible with $r_{\\rm sim}$ with a minimal $\\sigma_{\\hat{r}}$. This can be achieved by applying the $r\\beta$-1 fitting scheme over the whole multipole range, while restricting the fit of the $r\\beta$-$T$-specific moment-expansion parameters ($\\mathcal{D}_\\ell^{\\omega^\\beta_1\\times\\omega^T_1}$ and $\\mathcal{D}_\\ell^{\\omega^T_1\\times\\omega^T_1}$) to the low-multipole range. We note that, in order to correct the bias, it is still necessary to keep the $r\\beta$-1 moment parameters even at high multipoles, because the MBB does not provide a good fit even for $\\ell \\in [100,200]$, as we can see in Fig.~\\ref{fig:chi2dust}. We define $\\ell_{\\rm cut}$ as the multipole bin below which we keep all the $r\\beta$-$T$ moment parameters and above which we use the $r\\beta$-1 scheme. \n\n\nThe best-fit values and standard deviations of $\\hat{r}$ for different values of $\\ell_{\\rm cut}$ are displayed in Table~\\ref{tab:rellcut}. We can see that a trade-off has to be found: the lower the $\\ell_{\\rm cut}$, the larger the shift from $r_{\\rm sim}$, while the higher the $\\ell_{\\rm cut}$, the larger the value of $\\sigma_{\\hat{r}}$. The trade-off point seems to be found at $\\ell_{\\rm cut} \\sim 80$, allowing us to recover $\\hat{r}$ without tension, with $\\sigma_{\\hat{r}} =8.8\\times10^{-4}$. 
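The hybrid scheme amounts to simple per-bin parameter bookkeeping; a hypothetical sketch, keeping the two $r\\beta$-$T$-specific parameters listed above only below $\\ell_{\\rm cut}$ (names are shorthand for the $\\mathcal{D}_\\ell^{\\mathcal{M}\\times\\mathcal{N}}$ notation):

```python
# Shorthand for D_ell^{M x N}: "A" is the amplitude, "w1b"/"w1T" the
# first-order moments in beta and T (illustrative names).
RBETA1 = ["AxA", "Axw1b", "w1bxw1b"]
RBETAT_ONLY = ["w1bxw1T", "w1Txw1T"]   # fitted at large angular scales only

def active_parameters(ell_bin, ell_cut=80):
    """Moment parameters fitted in the multipole bin centred on ell_bin."""
    if ell_bin < ell_cut:
        return RBETA1 + RBETAT_ONLY    # full r-beta-T set below ell_cut
    return RBETA1                       # r-beta-1 set above ell_cut

print(len(active_parameters(46.5)), len(active_parameters(126.5)))  # 5 3
```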
The error on $r$ is thus reduced by more than $\\sim30\\,\\%$ with respect to the nonoptimized fit and meets the \\textit{LiteBIRD}{} requirements.\n \n\n\n\n\n\\subsection{\\label{sec:fsky} Tests with smaller sky fractions}\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c|c}\n $\\ell_{\\rm cut}$ & $(\\hat{r} \\pm \\sigma_{\\hat{r}})\\times 10^{4}$\\\\\\hline\n 50 & \\cellcolor{myred}$12.0 \\pm 7.3$ \\\\\n 60 & \\cellcolor{mygreen}$7.3 \\pm 7.9$ \\\\\n 70 & \\cellcolor{mygreen}$4.9 \\pm 8.1$ \\\\\n 80 & \\cellcolor{mygreen}$-0.9 \\pm 8.8$ \\\\\n 90 & \\cellcolor{mygreen}$-2.1 \\pm 9.9$ \\\\\n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r} \\pm \\sigma_{\\hat{r}}$ in units of $10^{-4}$ for different values of $\\ell_{\\rm cut}$ for the {\\tt d1c} simulations with $f_{\\rm sky}=0.7$, when applying the $r\\beta$-$T$ fitting scheme. The green values are compatible with $r_{\\rm sim}=0$ at $1\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rellcut}\n\\end{table}\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c | ccc}\n $(\\hat{r} \\pm \\sigma_{\\hat{r}})\\times10^4$ & $r_{\\rm sim}=0.01$ & $f_{\\rm sky}=0.5$ & $f_{\\rm sky}=0.6$\\\\\n \\hline\n $r$MBB & \\cellcolor{myred} $204.8 \\pm 7.7$ & \\cellcolor{myred} $47.3 \\pm 5.6$ & \\cellcolor{myred} $59.2 \\pm 5.4$\\\\\n $r\\beta$-1 & \\cellcolor{myred} $129.0 \\pm 8.3$ & \\cellcolor{myyellow} $-8.4 \\pm 6.7$ & \\cellcolor{mygreen} $1.8 \\pm 6.2$\\\\\n $r\\beta$-$T$ & \\cellcolor{mygreen} $94.6 \\pm 15.1$ &\\cellcolor{mygreen} $0.02 \\pm 13.4$ & \\cellcolor{mygreen} $-1.1 \\pm 12.0$\\\\\n $r\\beta$-2 & \\cellcolor{myyellow} $62.5 \\pm 25.0$ & \\cellcolor{mygreen} $4.3 \\pm 24.2$ & \\cellcolor{mygreen} $-3.2 \\pm 22.4$\\\\\n \\hline\n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r}$ in units of $10^{-4}$ for an alternative {\\tt d1c} simulation with $r_{\\rm sim}=0.01$ on $f_{\\rm sky}=0.7$, and with $r_{\\rm sim}=0$ but 
on $f_{\\rm sky}=0.5$ and $f_{\\rm sky}=0.6$. The green values are compatible with $r_{\\rm sim}$ at $1\\,\\sigma_{\\hat{r}}$, the yellow values are compatible with $r_{\\rm sim}$ at $2\\,\\sigma_{\\hat{r}}$, and the red values are incompatible with $r_{\\rm sim}$ at more than $2\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rvalues2}\n\\end{table}\n\nIn all the results presented in Sect.~\\ref{sec:results}, we considered a sky fraction of $f_{\\rm sky}=0.7$. This sky mask keeps a considerable fraction of the brightest Galactic dust emission. To quantify the impact of the sky fraction on our analysis, we ran the pipeline as in Sect.~\\ref{sec:d1c} with the different masks introduced in Sect.~\\ref{sect:mask} ($f_{\\rm sky}=0.5$ and $f_{\\rm sky}=0.6$). This was done with the {\\tt d1c} simulation type. \n\nThe posteriors on $\\hat{r}$ for the different fitting schemes are displayed in Fig.~\\ref{fig:rgauss2} and Table~\\ref{tab:rvalues2}. We can see that, while the $r$MBB fitting scheme always leads to biased estimates, the $r\\beta$-1 case allows us to recover $\\hat{r}$ at $1.25\\,\\sigma_{\\hat{r}}$ for $f_{\\rm sky}=0.5$ and within $1\\,\\sigma_{\\hat{r}}$ for $f_{\\rm sky}=0.6$. In both situations, the results using the $r\\beta$-$T$ and $r\\beta$-2 fitting schemes are unbiased, with estimates of $\\hat{r}$ compatible with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$. The $\\sigma_{\\hat{r}}$ hierarchy between the $r$MBB, $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2 fitting schemes is the same as for $f_{\\rm sky}=0.7$ (see Sect.~\\ref{sec:d1c}). Nevertheless, we observe that $\\sigma_{\\hat{r}}$ increases as the sky fraction decreases, as does the statistical error (cosmic variance of the lensing and noise). The bias, on the other hand, decreases with the sky fraction for all the fitting schemes, which is expected because less dust emission contributes to the angular power spectra. 
The negative tension observed on the $\\hat{r}$ posterior in Sect.~\\ref{sec:d1c} for the $r\\beta$-2 case is not present when using smaller sky fractions. In Fig.~\\ref{fig:fskydust}, the $r\\beta$-2 moment parameters are displayed. We can see that they are not significantly detected for $f_{\\rm sky}=0.5$ and 0.6, unlike for $f_{\\rm sky}=0.7$. As we have seen that some of the moments in the $\\beta$-2 fitting scheme failed to model the SED distortions coming from temperature, we can suppose that, in our simulations, the temperature variations play a less significant role in the dust SED on the $f_{\\rm sky}=0.5$ and 0.6 masks than on the $f_{\\rm sky}=0.7$ one. As a consequence, they have a smaller impact on $r$ when not properly taken into account.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full_fsky0.5.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full_fsky0.6.pdf}\n \\caption{\\footnotesize \\emph{(Top panel)}: Posterior on $\\hat{r}$ in the {\\tt d1c} simulation type on $f_{\\rm sky}=0.5$ for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}=0$. \\emph{(Bottom panel)}: Same, but in the case of the {\\tt d1c} simulation type on $f_{\\rm sky}=0.6$.}\n \\label{fig:rgauss2}\n\\end{figure}\n\n\\subsection{Tests with nonzero input tensor modes}\n\\label{sec:rnot0}\n\nWe show in Sect.~\\ref{sec:d1c} that the $r\\beta$-$T$ fitting scheme allows us to retrieve $\\hat{r}$ compatible with zero when $r_{\\rm sim}=0$. We now want to assess the potential leakage of $\\hat{r}$ into the moment-expansion parameters if $r_{\\rm sim} \\neq 0$. In this case, primordial tensor signals would be incorrectly interpreted as dust complexity. 
We ran the pipeline as described in Sect.~\\ref{sec:d1c} with $r_{\\rm sim}=0.01$, in the ${\\tt d1c}$ simulation type. This value of $r_{\\rm sim}=0.01$ is larger than the value targeted by \\textit{LiteBIRD}{}, but given the order of magnitude of the error on $\\hat{r}$ observed in the previous sections, a potential leakage could go unnoticed using a smaller $r_{\\rm sim}$. \n\nLooking at the final posterior on $\\hat{r}$ (Fig.~\\ref{fig:rgauss3} and Table~\\ref{tab:rvalues2}), we can see that the results are comparable with those of the $r_{\\rm sim}=0$ case, but centered on the new input value $r_{\\rm sim}=0.01$. The $r$MBB fitting scheme gives a highly biased posterior of $\\hat{r}=(2.048\\pm0.077)\\times10^{-2}$; the bias is reduced but still significant when using the $r\\beta$-1 scheme ($\\hat{r}=(129.0\\pm 8.3)\\times10^{-4}$); in the $r\\beta$-$T$ case we get an estimate of $\\hat{r}=(94.6\\pm 15.1)\\times10^{-4}$, compatible with the input value of $r_{\\rm sim}=100\\times10^{-4}$; and finally, the $r\\beta$-2 fitting scheme leads to a negative $2\\,\\sigma_{\\hat{r}}$ tension ($\\hat{r}=(62.5\\pm25.0)\\times10^{-4}$). This demonstrates the robustness of our method and its potential application to component separation. We note that the negative bias at second order is still present in the $r_{\\rm sim}=0.01$ case, illustrating that setting a positive prior on $\\hat{r}$ would not have been a satisfactory solution when $r_{\\rm sim}=0$.\n\n\\subsection{\\label{sec:bias} Exploring the correlations between the parameters}\n\nWe now examine the substantial increase in the dispersion of the $\\hat{r}$ posteriors between the $r\\beta$-1 fitting scheme on the one hand and the $r\\beta$-$T$ and $r\\beta$-2 ones on the other. 
Indeed, in Sect.~\\ref{sec:d1c}, we show that $\\sigma_{\\hat{r}}$ is about two times greater when using the $r\\beta$-$T$ scheme than the $r\\beta$-1 one, and about four times larger in the case of $r\\beta$-2, while the $r\\beta$-$T$ and $r\\beta$-2 schemes share the same number of free parameters. Some other points to clarify are the shift on $\\hat{r}$ appearing for $r\\beta$-2 in the {\\tt d1c} scenario, discussed in Sect.~\\ref{sec:d1c}, and the inability to correctly recover $\\mathcal{D}_\\ell^{\\omega_2^\\beta \\times \\omega_2^\\beta}$ when $\\hat{r}$ is added to the fit, as illustrated in Fig.~\\ref{fig:momentsd1d1c}. \n\nThe 2D-SED shapes of the parameters $\\mathcal{D}_\\ell^{\\mathcal{N} \\times \\mathcal{M}}(\\nu_i \\times \\nu_j)$ in the $(\\nu_i, \\nu_j)$ space\\footnote{For example, $\\mathcal{S}(\\nu_i,\\nu_j) = \\frac{I_{\\nu_i}(\\beta_0,T_0)I_{\\nu_j}(\\beta_0,T_0)}{I_{\\nu_0}(\\beta_0,T_0)^2}\\cdot \\left[\\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right)\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]$ is associated with the $\\mathcal{D}^{\\omega_1 \\times \\omega_1}_\\ell$ parameter (see Eq.~\\ref{eq:moments}).} are displayed in Fig.~\\ref{fig:momentshapes}. We used the nine frequencies of \\textit{LiteBIRD}{} presented in Sect.~\\ref{sec:Instrsim} and fixed $\\beta_0=1.54$ and $T_0 = 20$\\,K. We also introduce the CMB 2D-SED shape with the black-body function:\n\n\\begin{equation}\n B_{\\rm CMB}(\\nu_i \\times \\nu_j) = \\frac{B_{\\nu_i}(T_{\\rm CMB})B_{\\nu_j}(T_{\\rm CMB})}{B_{\\nu_0}(T_{\\rm CMB})^2},\n\\end{equation}\n\n\\noindent where $T_{\\rm CMB} = 2.726$\\,K. 
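The Pearson coefficients entering these comparisons can be sketched numerically. Below, the moment 2D-SED shapes follow the footnote example, with an assumed analogous form for the second-order $\\omega^{\\beta}_2$ shape, evaluated on assumed band centres spanning the 100--402\\,GHz range of this work:

```python
import numpy as np

H_OVER_K = 0.04799                     # h/k_B in K/GHz
NU0, BETA0, T0, T_CMB = 353.0, 1.54, 20.0, 2.726
NUS = np.linspace(100.0, 402.0, 9)     # GHz, assumed band centres

def planck(nu, T):
    return nu**3 / np.expm1(H_OVER_K * nu / T)

def mbb(nu):
    return (nu / NU0) ** BETA0 * planck(nu, T0) / planck(NU0, T0)

ni, nj = np.meshgrid(NUS, NUS)
mm = mbb(ni) * mbb(nj)                 # I_{nu_i} I_{nu_j} / I_{nu_0}^2
li, lj = np.log(ni / NU0), np.log(nj / NU0)

# A few 2D-SED shapes (assumed forms following the footnote example)
shapes = {
    "A x w1beta": mm * (li + lj),
    "w1beta x w1beta": mm * li * lj,
    "w2beta x w2beta": mm * (li**2 / 2) * (lj**2 / 2),
    "CMB": planck(ni, T_CMB) * planck(nj, T_CMB) / planck(NU0, T_CMB) ** 2,
}

def pearson(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

for name in ("A x w1beta", "w1beta x w1beta", "w2beta x w2beta"):
    print(f"corr({name}, CMB) = {pearson(shapes[name], shapes['CMB']):+.2f}")
```

The resulting coefficients can be compared with the correlation matrices of Fig.~\\ref{fig:corr3D}.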
\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0.01_d1_SLD_full.pdf}\n \\caption{\\footnotesize Posterior on $\\hat{r}$ in the {\\tt d1c} simulation type with $r_{\\rm sim}=0.01$ and $f_{\\rm sky}=0.7$ for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}$.}\n \\label{fig:rgauss3}\n\\end{figure}\n\nThe 2D correlation coefficients between these 2D-SED shapes are displayed in Fig.~\\ref{fig:corr3D}. We present the correlations between the shapes of the parameters in the case of the $r\\beta$-$T$ and $r\\beta$-2 fitting schemes. We can see that all the moment parameters in $\\omega^{\\beta}_2$ are strongly correlated with the CMB SED signal, while the ones in $\\omega^{T}_1$ are not. \n\nWe showed that, when fitting $\\beta$-2 on {\\tt d1c}, the SED distortions due to spatial variations of $T$ are incorrectly detected by the second-order moment parameters with respect to the spectral index $\\beta$. Due to the correlations highlighted above, those spurious moment parameters could then leak into $\\hat{r}$ when adding it to the fit in $r\\beta$-2. This explains both the negative shift on the $\\hat{r}$ posterior using $\\beta$-2 in the {\\tt d1c} simulation type with $f_{\\rm sky}=0.7$ presented in Sects.~\\ref{sec:d1c} and \\ref{sec:rnot0}, and the inability to correctly recover the $\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2$ dust moment parameter presented in Fig.~\\ref{fig:momentsd1d1c}. In addition, it gives a natural reason for the increase in $\\sigma_{\\hat{r}}$ when the second-order moments in $\\beta$ are added to the fit. \n\nOn the other hand, the moment parameters in $\\omega^{T}_1$ are strongly correlated with the moments in $\\omega^{\\beta}_1$. 
This behavior is expected due to the strong correlation between $\\beta$ and $T$ \\citep[see e.g.,][]{betatcorr}. However, those moment parameters are less correlated with the CMB signal than the second-order parameters of $\\beta$-2. This suggests that the factor of $\\sim 2$ on $\\sigma_{\\hat{r}}$ between $\\beta$-$T$ and $\\beta$-2 is due to this difference in correlation of the 2D-SED shapes with the CMB.\nAs the parameters in $\\omega^{T}_1$ are highly correlated with one another, we expect them to be highly redundant in the fit. However, repeating the process described in Sect.~\\ref{sec:d1c} using only $\\mathcal{D}^{A \\times \\omega^{T}_1}_\\ell$ for $\\beta$-$T$ ---which is equivalent to applying the $\\beta$-1 fitting scheme with an iterative correction to the temperature $T_0(\\ell)$--- gives an $\\hat{r}$ posterior similar to the one obtained for $\\beta$-1 alone. Taking the other $\\omega^{T}_1$ terms into account appears to be necessary in order to recover an unbiased distribution of $\\hat{r}$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/corr3D.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/corr3DLB_betaT.pdf}\n \\caption{\\footnotesize Correlation matrices of the 2D-SED shapes of the CMB, $B_{\\rm CMB}(\\nu_i\\times\\nu_j)$, and of the dust moments, $\\mathcal{D}_\\ell^{\\mathcal{N} \\times \\mathcal{M}}(\\nu_i \\times \\nu_j)$, in the $(\\nu_i, \\nu_j)$ space. Each element represents the Pearson correlation coefficient between any two of these 2D-SED shapes. The correlation matrices are displayed in the case of the $\\beta$-2 fitting scheme (top panel) and the $\\beta$-$T$ one (bottom panel).}\n \\label{fig:corr3D}\n\\end{figure}\n\n\\subsection{Adding synchrotron to the simulations}\n\\label{sec:synchrotron}\n\nThermal dust is not the only source of polarized foreground that must be considered for CMB studies. 
Although subdominant at high frequencies ($\\geq 100\\,$GHz), the synchrotron emission due to accelerated electrons in the interstellar medium is still expected to represent a significant fraction of the total polarized signal. \n\nIn order to take one more step towards realistic forecasts for the \\textit{LiteBIRD}{} instrument, we add a synchrotron contribution to the {\\tt d1c} simulations presented in Sect.~\\ref{sec:simulations} using the {\\tt s1} template included in the {\\sc PySM}. In this scenario, the synchrotron SED for each line of sight is given by a power law of the form (in antenna temperature units)\n\n\\begin{equation}\n S_\\nu^{\\tt s1} = A_{\\tt s1}(\\vec{n}) \\left(\\frac{\\nu}{\\nu^{\\tt s1}_0}\\right)^{\\beta_{\\tt s1}(\\vec{n})},\n\\label{eq:s1powerlaw}\n\\end{equation}\n\n\\noindent where the amplitude $A_{\\tt s1}(\\vec{n})$ and the spectral index $\\beta_{\\tt s1}(\\vec{n})$ maps are derived from the combination of the \\textit{WMAP}{} mission 23\\,GHz map \\cite{WMAPfg} and the Haslam 408\\,MHz map \\cite{Haslam1}. $\\nu^{\\tt s1}_0$ is defined as 23\\,GHz. The simulations containing synchrotron are referred to as {\\tt d1s1c} below.\n\nIf not treated in the fit, the presence of synchrotron is expected to induce a bias on the $\\hat{r}$ posterior distribution. As for the dust MBB discussed in Sect.~\\ref{sect:limitsmbb}, the synchrotron SED is expected to exhibit distortions. However, as the synchrotron polarized emission is significantly lower than that of the dust in the frequency range considered in the present work, we expect these distortions to be small compared to the ones induced by dust and we leave their modeling to a future study. \n\nIn order to minimize the number of free parameters used for fitting the synchrotron emission, we model its power spectrum as a power law of the multipole $\\ell$ \\citep{krachmalnicoff_s-pass_2018}. 
Therefore, combining with the synchrotron SED in Eq.~\\ref{eq:s1powerlaw}, the synchrotron component of the cross-angular power spectra reads \n\n\\begin{equation}\n \\mathcal{D}^{\\rm sync}_\\ell (\\nu_i \\times \\nu_j) = A_{\\rm s} \\left(\\frac{\\nu_i \\nu_j}{\\nu_0^2}\\right)^{\\beta_{\\rm s}} \\ell^{\\alpha_{\\rm s}},\n\\label{eq:syncromoment}\n\\end{equation}\n\n\\noindent where the amplitude coefficient $A_{\\rm s}$ is treated as a free parameter, while we fix $\\beta_{\\rm s}=-3$ (median value of the ${\\tt s1}$ $\\beta_{\\rm s}$ map on our $f_{\\rm sky}=0.7$ mask) and $\\alpha_{\\rm s}=-1$ \\citep{krachmalnicoff_s-pass_2018}. \n\nWhen fitting the {\\tt d1s1c} simulations, we either use the $r\\beta$-$T$ fitting scheme, neglecting the synchrotron component, or we add the synchrotron component of Eq.~\\ref{eq:syncromoment} to the model in Eq.~\\ref{eq:model}. We refer to this latter case as the {\\tt s}$r\\beta$-$T$ fitting scheme.\nIn Fig.~\\ref{fig:synchrotron}, the $\\hat{r}$ posteriors derived from the {\\tt d1s1c} simulations are displayed with $r_{\\rm sim}=0$ and $f_{\\rm sky}=0.7$. \n\nUsing the $r\\beta$-$T$ fitting scheme, we find $\\hat{r} = (143.1 \\pm 13.5)\\times 10^{-4}$. As expected, even at high frequencies, modeling the synchrotron component is critical and it cannot be neglected if an unbiased value of $\\hat{r}$ is to be recovered.\nOn the other hand, using the {\\tt s}$r\\beta$-$T$ fitting scheme, we recover $\\hat{r} = (-5.4 \\pm 13.2)\\times 10^{-4}$. This result is comparable with the one obtained for the {\\tt d1c} simulations in Sect.~\\ref{sec:d1c}, with a minor increase in $\\sigma_{\\hat{r}}$. We can therefore conclude that a model as simple as that of Eq.~\\ref{eq:syncromoment} is sufficient to take into account the {\\tt s1} component at $\\nu > 100$\\,GHz, and that the corresponding SED distortions can be neglected, while still recovering an unbiased value of $\\hat{r}$. 
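The synchrotron cross-spectrum model of Eq.~\\ref{eq:syncromoment} can be sketched as a single-amplitude function; here the pivot enters squared, as expected when taking the product of the two maps' power-law SEDs (a convention assumption on our part):

```python
def d_sync(ell, nu_i, nu_j, A_s, beta_s=-3.0, alpha_s=-1.0, nu0=23.0):
    """Synchrotron cross-spectrum: power law in frequency and multipole
    (antenna temperature units, frequencies in GHz)."""
    return A_s * (nu_i * nu_j / nu0**2) ** beta_s * ell**alpha_s

# Steep frequency decline: for the same amplitude, the 402 GHz auto-spectrum
# lies orders of magnitude below the 100 GHz one.
print(d_sync(80, 100.0, 100.0, 1.0) / d_sync(80, 402.0, 402.0, 1.0))
```

With $\\beta_{\\rm s}$ and $\\alpha_{\\rm s}$ fixed as in the text, $A_{\\rm s}$ is the only synchrotron parameter added to the fit.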
In principle, as we know that the dust-synchrotron spatial correlation is significant at large scales \\citep{PlanckDust2}, Eq.~\\ref{eq:model} should include a dust-synchrotron term \\citep[see e.g.,][]{SOgalactic}. In our study, where we consider cross-spectra from 100 to 402\\,GHz, this dust-synchrotron term is subdominant, but it could be significant when considering cross-spectra between \\textit{LiteBIRD}{}'s extreme frequency bands (e.g., the 40$\\times$402 cross-spectrum). The moment expansion might also be more complicated in this case, as we could expect some correlation between the dust and synchrotron moment terms.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1s1_SLD_full.pdf}\n \\caption{\\footnotesize Posterior on $\\hat{r}$ in the {\\tt d1s1c} simulation type with $r_{\\rm sim}=0$ and $f_{\\rm sky}=0.7$ for the different fitting schemes: $r\\beta$-$T$ (green, solid line) and {\\tt s}$r\\beta$-$T$ (orange, dash-dotted line). The vertical black dashed line marks the value $r_{\\rm sim}=0$.}\n \\label{fig:synchrotron}\n\\end{figure}\n\nThis result shows that the full polarized foreground content can be treated at high frequencies when using a power-law SED for the synchrotron coupled with the moment expansion of the MBB up to first order in both $\\beta$ and $T$ for the dust SED. A full study remains to be done in that direction using all the frequency bands of the \\textit{LiteBIRD}{} instrument. Eventually, Eq.~\\ref{eq:syncromoment} will also have to be expanded in moments with respect to its parameters. In doing so, one can expect to recover an unbiased value of $\\hat{r}$ associated with a decrease in $\\sigma_{\\hat{r}}$ down to a value compatible with the full success criterion of the mission.\n\n\\subsection{Limitations of this work and caveats}\n\n{As discussed in Sect.~\\ref{sec:formalismspectra}, we neglected polarization effects throughout this work by treating the $BB$ signal as an intensity signal. 
This is not problematic in the present work, because no variations along the lines of sight were present in the simulations. However, this point will have to be addressed using more complex simulations or real sky data.}\n\nThe choice of reference frequency $\\nu_0$ used for the normalization of the MBB in Eq.~\\ref{eq:MBB}, which is not discussed in this study, can potentially have a significant impact on the moment expansion and, in turn, on the measurement of $\\hat{r}$. Indeed, $\\nu_0$ is the pivot frequency of the moment expansion (moments are equal to zero at $\\nu_0$) and will determine the shape of the SED distortions around it. A poor choice for this reference frequency can have disastrous consequences for the moment fit: for example, if it is chosen far away from the observed bands, all the moments will become degenerate. In our case, the reference frequency (353\\,GHz) is within the observed frequency range (100 to 402\\,GHz), but we have not tried to optimize its position. In addition, the $\\nu_0$ pivot of our moment expansion coincides with the one used to extrapolate the dust template map in the {\\sc PySM}, and we have not quantified how much this impacts our results. \n\nFinally, as pointed out several times in this work, the quantitative results depend strongly on the sky model of our simulations. Moreover, we lack dedicated sky models where we can control the complexity of the dust SED, either by directly including moments or by averaging the emission from the 3D structure of the Galaxy. 
However, both methods are beyond the scope of the present work.\n\n\n\\section{Conclusion}\n\nBeing able to precisely characterize the complexity of the Galactic thermal dust polarized SED has become critical for the measurement of the faint primordial $B$-mode signal of the CMB, especially at the sensitivity targeted by future CMB experiments such as the \\textit{LiteBIRD}{} satellite mission.\n\nIn this work, we applied the moment expansion formalism to the dust emission SED as a component-separation tool to recover the tensor-to-scalar ratio parameter $r$ in \\textit{LiteBIRD}-simulated data. This formalism, proposed by \\citet{Chluba} and implemented in harmonic space by \\citet{Mangilli}, allows us to deal with the spectral complexity of the Galactic dust signal by modeling its deviations from the canonical MBB model at the cross-angular power spectrum level. In the case of the realistic data-driven dust emission model we explore here ({\\sc PySM} {\\tt d1}), suitably taking into account the dust SED distortions prevents the spurious detection of the primordial $B$-mode signal. \n\nWe show that the dust spectral index $\\beta$ and dust temperature $T$ spatial variations significantly distort the dust cross-power spectrum SED. The MBB is not a good model to describe the data in that case and the estimation of $r$ is dramatically affected. In the case where no primordial signal is included in the simulated data sets, not taking into account the dust SED complexity leads to a highly significant spurious detection of $r$ with \\textit{LiteBIRD}{} (from $\\hat{r}\\simeq5\\times10^{-3}$ to $1.25\\times10^{-2}$, with an 8.4 to 21.2\\,$\\sigma$ significance, from 50 to 70\\,\\% of the sky, respectively). \n\nTo overcome this obstacle, we applied the moment expansion formalism in order to model these SED distortions. 
We demonstrate that, at \\textit{LiteBIRD}{} sensitivity, the previously studied moment expansion with respect to the dust spectral index $\\beta$ \\citep{Mangilli,Azzoni} does not give satisfactory results. Indeed, expanding in $\\beta$ to first order (following the angular power spectrum definition of the order) leads to a significant bias on 70\\,\\% of the sky ($\\hat{r}=(3.29\\pm0.65)\\times10^{-3}$ when $r_{\\rm sim}=0$ and $\\hat{r}=(1.29\\pm0.08)\\times10^{-2}$ when $r_{\\rm sim}=10^{-2}$). At second order in $\\beta$, we observe a $\\sim$2\\,$\\sigma$ negative tension ($\\hat{r}=(-3.7\\pm1.9)\\times10^{-3}$ when $r_{\\rm sim}=0$ and $\\hat{r}=(6.25\\pm2.50)\\times10^{-3}$ when $r_{\\rm sim}=10^{-2}$). \n\nWe introduce for the first time in this work the expansion of the dust angular cross-power spectra with respect to both $\\beta$ and $T$. We show that, by using this expansion up to first order, we correctly model the dust SED distortions due to spatial variations of both $\\beta$ and $T$ at the map level. This allows us to recover the $r$ parameter without bias, with $\\hat{r}=(-3.3\\pm11.7)\\times10^{-4}$ if $r_{\\rm sim}=0$ and $\\hat{r}=(0.95\\pm0.15)\\times10^{-2}$ if $r_{\\rm sim}=10^{-2}$. Thus, despite the known degeneracy between the dust spectral index and its temperature in the Rayleigh-Jeans domain, it is important to correctly model the latter in order to accurately retrieve the tensor-to-scalar ratio $r$ at the unprecedented precision reached by experiments such as \\textit{LiteBIRD}{}. \n\nAdding parameters to tackle the dust SED complexity means an increase in the error budget. Given the \\textit{LiteBIRD}{} bands and sensitivities we consider in this work (frequency bands above 100\\,GHz), the ideal sensitivity on $r$ without delensing is $\\sigma_{\\hat{r}}=3.4\\times10^{-4}$. 
In the ideal case, where the dust $\\beta$ and $T$ are constant over the sky ({\\sc PySM} {\\tt d0}), separating the CMB from dust leads to $\\sigma_{\\hat{r}}=3.9\\times10^{-4}$ on 70\\,\\% of the sky. Adding the expansion to first order in $\\beta$ does not significantly increase the error ($\\sigma_{\\hat{r}}=4.5\\times10^{-4}$), but expanding to first order in both $\\beta$ and $T$ multiplies it by a factor of $\\sim2$ ($\\sigma_{\\hat{r}}=9.5\\times10^{-4}$) and to second order in $\\beta$ by a factor of $\\sim4$ ($\\sigma_{\\hat{r}}=16.4\\times10^{-4}$). We show that the increase in $\\sigma_{\\hat{r}}$ between the two latter cases, which share the same number of free parameters, is due to strong correlations between the SED of the second-order moments in $\\beta$ and the CMB. This is an important point, as it could lead to some intrinsic limitation for component-separation algorithms based exclusively on the modeling of the SED. Furthermore, when dealing with real data, if the dust SED is complex enough to have significant second-order distortions with respect to $\\beta$, CMB experiments might face a dilemma: either include the second order in the modeling at the cost of losing sensitivity on $r$, or neglect it at the cost of a potential spurious detection. Coupling the SED-based separation with methods exploiting the diversity of spatial distribution between components \\citep[e.g.,][]{Wavelets} seems a natural way to overcome this issue.\n\nNevertheless, moment expansion at the cross-angular power spectrum level provides a powerful and agnostic tool, allowing us to analytically recover the actual dust complexity without making any further assumptions. We additionally show that this method is robust, in the sense that it can effectively distinguish the primordial tensor signal from dust when $r_{\\rm sim} \\neq 0$, as in the case of \\textit{LiteBIRD}{} simulations. 
The dust moments in $\\beta$ and $T$ at first order are needed in order to retrieve a reliable measure of $r$; they are significantly detected for $\\ell\\lesssim100$. We can therefore define a cut in $\\ell$ above which we do not fit for the whole complexity of the dust (we fit only the expansion up to first order in $\\beta$ and not in $\\beta$ and $T$). Doing so, we can reduce the error on $\\hat{r}$ while keeping the bias negligible ($\\hat{r}=(-0.9\\pm8.8)\\times10^{-4}$). We could imagine other ways to reduce the number of free parameters in our model \\citep[e.g., assuming a power-law behavior in $\\ell$ for the moments, as in][]{Azzoni} and hence reduce the error on $r$. However, this optimization depends strongly on the simulated sky complexity and has not been comprehensively explored in the present work.\n\nThe {\\sc PySM} {\\tt d1} sky simulations, being data-driven, are widely used by the CMB community as they contain some of the real sky complexity. Nevertheless, at high Galactic latitudes, the dust spectral index and temperature templates from \\textit{Planck}{} are dominated by systematic errors (uncertainty on the assumed zero-level of the \\textit{Planck}{} intensity maps, residual cosmic infrared background (CIB) anisotropies, instrumental noise, etc.). Therefore, some of the complexity we observe far from the Galactic plane in this sky model is not real. On the other hand, the modeled SED of the dust is {exactly} an MBB in each pixel, and line-of-sight averages or more complex dust models are ignored. As a consequence, our method, and CMB $B$-mode component-separation algorithms in general, need to be confronted with more complex models in order to really assess their performance in a quantitative manner. 
\n\nFinally, although we demonstrate that the synchrotron component can be tackled at frequencies above 100\\,GHz with a minimal model under our assumptions, a study over the full \\textit{LiteBIRD}{} frequency bands, including synchrotron and the potential moment expansion of its SED, is a natural next step for a further application.\n\n\n\\section*{Acknowledgments}\n\n\nThis work is supported in Japan by ISAS\/JAXA for Pre-Phase A2 studies, by the acceleration program of JAXA research and development directorate, by the World Premier International Research Center Initiative (WPI) of MEXT, by the JSPS Core-to-Core Program of A. Advanced Research Networks, and by JSPS KAKENHI Grant Numbers JP15H05891, JP17H01115, and JP17H01125. The Italian \\textit{LiteBIRD}{} phase A contribution is supported by the Italian Space Agency (ASI Grants No. 2020-9-HH.0 and 2016-24-H.1-2018), the National Institute for Nuclear Physics (INFN) and the National Institute for Astrophysics (INAF). The French \\textit{LiteBIRD}{} phase A contribution is supported by the Centre National d'Etudes Spatiales (CNES), by the Centre National de la Recherche Scientifique (CNRS), and by the Commissariat \u00e0 l'Energie Atomique (CEA). The Canadian contribution is supported by the Canadian Space Agency. The US contribution is supported by NASA grant no. 80NSSC18K0132. \nNorwegian participation in \\textit{LiteBIRD}{} is supported by the Research Council of Norway (Grant No. 263011). The Spanish \\textit{LiteBIRD}{} phase A contribution is supported by the Spanish Agencia Estatal de Investigaci\u00f3n (AEI), project refs. PID2019-110610RB-C21 and AYA2017-84185-P. Funds that support the Swedish contributions come from the Swedish National Space Agency (SNSA\/Rymdstyrelsen) and the Swedish Research Council (Reg. no. 2019-03959). 
The German participation in \\textit{LiteBIRD}{} is supported in part by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (Grant No. EXC-2094 - 390783311). This research used resources of the Central Computing System owned and operated by the Computing Research Center at KEK, as well as resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy.\n\nMR acknowledges funding support from the ERC Consolidator Grant CMBSPEC (No. 725456) under the European Union's Horizon 2020 research and innovation program.\n\nThe authors would like to thank David Alonso, Josquin Errard, Nicoletta Krachmalnicoff and Giuseppe Puglisi for useful discussions, as well as Jens Chluba for insightful comments on an earlier version of this work. \n\n\n\n\\bibliographystyle{aa}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nOur present understanding of the primordial Universe relies on the paradigm of inflation \\citep{inflationhist1,inflationhist2,inflationhist3}, which introduces a phase of accelerated expansion in the first fractions of a second after the primordial singularity. Such a phenomenon is expected to produce a background of gravitational waves propagating in the primordial plasma during recombination, leaving a permanent imprint in the polarization anisotropies of the cosmic microwave background (CMB): the primordial $B$-modes \\citep{InflationmodesB3,InflationsmodesB1,InflationsmodesB2}. The amplitude of the angular power spectrum of those primordial $B$-modes is characterized by the {tensor-to-scalar ratio} $r$, which is proportional to the energy scale at which inflation occurred \\citep{renergyscale}. 
Hence, looking for this smoking gun of inflation allows us to test our best theories of fundamental physics in the primordial Universe at energy scales far beyond the reach of particle accelerators. As such, it is one of the biggest challenges of cosmology for the coming decades. The best experimental upper limit on the $r$ parameter so far is $r<0.032$ \\citep[95\\,\\% C.L.,][]{tristram,bicep2021,PlanckandBICEP}.\n\nThe JAXA Lite (Light) satellite for the study of $B$-mode polarization and Inflation from cosmic background Radiation Detection (\\textit{LiteBIRD}{}) is designed to observe the sky at large angular scales in order to constrain this parameter $r$ down to $\\delta r= 10^{-3}$, including all sources of uncertainty \\citep{litebird,LiteBIRDUpdatedDesign}. Exploring this region of the parameter space is critical, because this order of magnitude for the tensor-to-scalar ratio is predicted by numerous physically motivated inflation models \\citep[for a review see e.g.,][]{EncyclopediaInflationaris}.\n\nHowever, the success of this mission relies on our ability to treat polarized foreground signals. Indeed, various diffuse astrophysical sources emit polarized $B$-mode signals above the primordial ones, the strongest being due to the diffuse polarized emission of our own Galaxy \\citep{PlanckCompoSep}. Even in a diffuse region like the BICEP\/Keck field, the Galactic $B$-modes are at least ten times stronger at 150\\,GHz than the $r=0.01$ tensor $B$-modes targeted by the current CMB experiments \\citep{BICEPKECKGW}.\n\nThe true complexity of polarized foreground emission that the next generation of CMB experiments will face is still mostly unknown today. 
Underestimation of this complexity can lead to the estimation of a spurious nonzero value of $r$ \\citep[see e.g.,][]{PlanckL,Remazeilles_etal_2016}.\n\nAt high frequencies ($>100$\\,GHz), the thermal emission of interstellar dust grains is the main source of Galactic foreground contaminating the CMB \\citep{dusthighfreq,PlanckDust2}. The canonical model of the spectral energy distribution (SED) of this thermal emission for intensity and polarization is given by the modified black body (MBB) law \\citep{Desertdustmodel}. This model provides a good fit to the dust polarization SED at the sensitivity of the \\textit{Planck}{} satellite \\citep{PlanckDust2}, but it may not fully account for it at the sensitivity of future experiments \\citep{HensleyBull}. Furthermore, due to changes of physical conditions across the Galaxy, spatial variations of the SEDs are present between and along the lines of sight. The former leads to what is known as \\emph{frequency decorrelation} in the CMB community \\citep[see e.g.][]{tassis,PlanckL,pelgrims2021}. Moreover, both effects lead to averaging MBBs when observing the sky (unavoidable line-of-sight or beam-integration effects). Because of the nonlinearity of the MBB law, those averaging effects will {distort} the SED, leading to deviations from this canonical model \\citep{Chluba}. \n\n\\cite{Chluba} proposed a general framework called {``moment expansion''} of the SED to take into account those distortions, using a Taylor expansion around the MBB with respect to its spectral parameters \\citep[Taylor expansion of foreground SEDs was discussed in previous studies; see e.g.,][]{stolyarov2005}. This method is agnostic: it does not require any assumption on the real complexity of the polarized dust emission. 
The moment expansion approach thus provides a promising tool with which to model the unanticipated complexity of the dust emission in real data.\n\n\\cite{Mangilli} generalized this formalism for the sake of CMB data analysis in harmonic space and for cross-angular power spectra, and applied it successfully to complex simulations and \\textit{Planck}{} High-Frequency Instrument (HFI) intensity data. This latter work shows that the real complexity of Galactic foregrounds could be higher than expected, encouraging us to follow the path opened by the moment expansion formalism.\n\nIn the present work, we apply the moment expansion in harmonic space to characterize and treat the dust foreground polarized emission of \\textit{LiteBIRD}{} high-frequency simulations, using dust-emission models of increasing complexity. We discuss the ability of this method to recover an unbiased value for the $r$ parameter, with enough accuracy to achieve the scientific objectives of the \\textit{LiteBIRD}{} mission.\n\nIn Sect.~\\ref{sec:formalism}, we first review the formalism of moment expansion in map and harmonic domains. We then describe in Sect.~\\ref{sec:sims} how we build several sets of simulations of the sky as seen by the \\textit{LiteBIRD}{} instrument with varying dust complexity and how we estimate the angular power spectra. In Sect.~\\ref{sec:fit}, we describe how we estimate the moment parameters and the tensor-to-scalar ratio $r$ in those simulations. The results are then presented in Sect.~\\ref{sec:results}. 
Finally, we discuss those results and the future work that has to be done in the direction opened by moment expansion in Sect.~\\ref{sec:discussion}.\n\n\\section{\\label{sec:formalism}Formalism}\n\n\\subsection{Characterizing the dust SED in real space}\n\n\\subsubsection{Modified black body model \\label{sec:mbb}}\n\nThe canonical way to characterize astrophysical dust-grain emission in every volume element of the Galaxy is given by the modified black body (MBB) function, which multiplies a standard black body SED $B_\\nu(T_0)$ at a given temperature $T_0$ by a power law of the frequency $\\nu$ with a spectral index $\\beta_0$. The dust intensity map $I(\\nu,\\vec{n})$, observed at a frequency $\\nu$ in every direction given by the unit vector $\\vec{n}$, can then be written as:\n\\begin{equation}\n\\label{eq:MBB}\n I(\\nu,\\vec{n}) = \\left(\\frac{\\nu}{\\nu_0}\\right)^{\\beta_0} \\frac{B_{\\nu}(T_0)}{B_{\\nu_0}(T_0)} A(\\vec{n}) \n =\\frac{I_{\\nu}(\\beta_0,T_0)}{I_{\\nu_0}(\\beta_0,T_0)} A(\\vec{n}),\n\\end{equation}\n\n\\noindent where $A(\\vec{n})$ is the dust intensity template at a reference frequency $\\nu_0$\\footnote{Throughout this work, we use $\\nu_0 = 353$\\,GHz.}.\nWe know that the physical conditions (thermodynamic and dust grain properties) change through the interstellar medium across the Galaxy, depending, in an intricate fashion, on the gas velocity and density, the interstellar radiation field, and the distance to the Galactic center \\citep[see e.g., ][]{dustacrossMW,dustvarMW,PlanckVardust,PlanckCompoSep,vardustdisk,dustradfield}. 
This change of physical conditions leads to variations in $\\beta$ and $T$ depending on the direction of observation $\\vec{n}$:\n\n\\begin{equation}\n\\label{eq:MBBn}\n I(\\nu,\\vec{n}) = \\frac{I_{\\nu}(\\beta(\\vec{n}),T(\\vec{n}))}{I_{\\nu_0}(\\beta(\\vec{n}),T(\\vec{n}))} A(\\vec{n}).\n\\end{equation}\n\nThe SED amplitude and parameters (temperature and spectral index) are then different for every line of sight. It is therefore clear that, in order to provide a realistic model of the dust emission, the frequency and spatial dependencies cannot be trivially separated. \n\n\\subsubsection{\\label{sect:limitsmbb}Limits of the modified black body}\n\nThe dust SED model given by the MBB has proven to be highly accurate \\citep{Planck2014dust,Planck2015dust}. However, it must be kept in mind that this model is empirical and is therefore not expected to give a perfect description of the dust SED in the general case. Indeed, physically motivated dust grain emission models predict deviations from it \\citep[e.g.,][]{modelbeyondmbb}. Surveys tend to show that the dust-emission properties vary across the observed 2D sky and the 3D Galaxy \\citep{PlanckDust2}. Furthermore, in true experimental conditions, one can never directly access the pure SED of a single volume element with specific emission properties and unique spectral parameters. Averages are therefore made over different SEDs emitted from distinct regions with different physical emission properties, in a way that cannot be avoided: along the line of sight; between different lines of sight, inside the beam of the instrument; or when performing a spherical harmonic decomposition to calculate the angular power spectra over large regions of the sky.\n\nThe MBB function is nonlinear, and therefore summing MBBs with different spectral parameters does not return another MBB function and produces \\emph{SED distortions}. 
For all these reasons, modeling the dust emission with a MBB is intrinsically limited, even when doing so with spatially varying spectral parameters. As a consequence, inaccuracies might appear when modeling the dust contribution to CMB data that will unavoidably impact the final estimation of the cosmological parameters. \n\n\\subsubsection{Moment expansion in pixel space}\n\\label{sec:moment_pixel}\n\nA way to address the limitation of the MBB model in accurately describing the dust emission is given by the {moment expansion} formalism proposed by \\cite{Chluba}. This formalism is designed to take into account the SED distortions due to averaging effects by considering a multidimensional Taylor expansion of the distorted SED $I(\\nu,\\vec{p})$ around the mean values $\\vec{p}_{0}$ of its spectral parameters $\\vec{p} = \\{p_i\\}$. This is the so-called {moment expansion} of the SED, which can be written as\n\\begin{align}\n I(\\nu,\\vec{p}) = I(\\nu,\\vec{p}_{0}) &+ \\sum_i \\omega_1^{p_i}\\langle\\partial_{p_i}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0}\n \\nonumber \\\\\n &+ \\frac{1}{2}\\sum_{i,j} \\omega_2^{p_ip_j}\\langle\\partial_{p_i}\\partial_{p_j}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0}\n \\nonumber \\\\\n &+ \\dots \\nonumber\\\\\n &+\\frac{1}{\\alpha!} \\sum_{i,\\dots,k} \\omega_\\alpha^{p_i\\dots p_k}\\langle\\partial_{p_i}\\dots\\partial_{p_k}I(\\nu,\\vec{p})\\rangle_{\\vec{p}=\\vec{p}_0},\n\\label{eq:momentgeneral}\n\\end{align}\n\nwhere the first term on the right-hand side is the SED without distortion $I(\\nu,\\vec{p}_{0})$ evaluated at $\\vec{p}=\\vec{p}_0$, and the other terms are the so-called {moments} of order $\\alpha$, quantified by the {moment parameters} $\\omega_\\alpha^{p_i\\dots p_k}$ for the expansion with respect to any parameter of $\\vec{p}$. Performing the expansion to increasing order adds increasing complexity to the SED $I(\\nu,\\vec{p}_{0})$. 
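As a minimal standalone numerical check of the averaging effect described above (the mixture of four spectral indices and all numerical values are illustrative assumptions, not the paper's pipeline), one can average power-law SEDs with different indices and verify that the expansion around the mean index recovers the distorted SED:

```python
import numpy as np

# Averaging power laws (nu/nu0)^beta with different spectral indices does
# not give back a single power law; the moment expansion around the mean
# index recovers the averaged SED. All numbers here are illustrative.
nu = np.linspace(100e9, 402e9, 60)          # observing frequencies [Hz]
nu0 = 353e9                                 # pivot frequency [Hz]
x = np.log(nu / nu0)

betas = np.array([1.40, 1.50, 1.60, 1.70])  # indices mixed along the beam
sed_avg = np.exp(np.outer(betas, x)).mean(axis=0)   # averaged (distorted) SED

beta0 = betas.mean()                        # mean spectral index
mbb0 = np.exp(beta0 * x)                    # order 0: single power law
w2 = ((betas - beta0) ** 2).mean()          # second-order moment in beta
sed_mom = mbb0 * (1.0 + 0.5 * w2 * x**2)    # expansion up to second order
# (the first-order term vanishes in this example: the index distribution
# is symmetric, so the first moment <beta - beta0> is zero)

err0 = np.max(np.abs(sed_avg / mbb0 - 1.0))     # residual, best power law
err2 = np.max(np.abs(sed_avg / sed_mom - 1.0))  # residual, with moments
print(f"residual at order 0: {err0:.1e}, at order 2: {err2:.1e}")
```

With these values, the single power law leaves a residual of about one percent at the low-frequency end, while including the second-order term reduces it by more than two orders of magnitude.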
\n\nFor the MBB presented in Sect.~\\ref{sec:mbb}, there are two parameters so that $\\vec{p} = \\{\\beta,T\\}$. Thus the dust moment expansion reads\n\n\\begin{align}\n I(\\nu,\\vec{n}) = \\frac{I_{\\nu}(\\beta_0,T_0)}{I_{\\nu_0}(\\beta_0,T_0)} \\bigg\\{ & A(\\vec{n}) + \\omega^\\beta_1(\\vec{n}) \\ln\\left(\\frac{\\nu}{\\nu_0}\\right)+ \\frac{1}{2}\\omega^\\beta_2(\\vec{n}) \\ln^2\\left(\\frac{\\nu}{\\nu_0}\\right)\\nonumber \\\\[2mm]\n &+ \\omega^T_1(\\vec{n})\\Big( \\Theta(\\nu,T_0) - \\Theta(\\nu_0,T_0) \\Big) + \\dots \\bigg\\},\n\\label{eq:momentinttemp}\n\\end{align}\n\n\\noindent where the expansion has been written up to order two in $\\beta$ (with moment expansion parameters $\\omega^\\beta_1$ at order one and $\\omega^\\beta_2$ at order two) and to order one in $T$ (with a moment expansion parameter $\\omega^T_1$ at order one). The following expression, the logarithmic derivative of the black body with respect to $T$, has been introduced to simplify the notation:\n\n\\begin{align}\n \\Theta(\\nu,T) = \\frac{x}{T}\\frac{e^{x}}{e^{x}-1},\\ {\\rm with}\\ x = \\frac{h \\nu}{k T}.\n\\end{align}\n\nThe moment expansion in pixel space can be used for component separation and possibly combined with other methods \\citep[see e.g.,][]{RemazeillesmomentsILC,Debabrata_2021}. However, in the present work, we are interested in the modeling of the dust at the $B$-mode angular power spectrum level. Performing the moment expansion at the angular power spectrum level adds some complexity to the SEDs due to the additional averaging occurring when dealing with spherical harmonic coefficients. Indeed, these coefficients are estimated on potentially large fractions of the sky and probe regions with various physical conditions. 
On the other hand, the expansion at the power spectrum level possibly drastically reduces the parameter space with respect to performing the expansion in every sky pixel.\n\n\\subsection{Characterizing the dust SED in harmonic space}\n\n\\subsubsection{Dust SED in spherical harmonic space \\label{sec:formalismspectra}}\n\nThe expansion presented in Sect.~\\ref{sec:moment_pixel} can be applied in spherical harmonic space using the same logic. The sky emission projection then reads\n\n\\begin{equation}\n I(\\nu,\\vec{n}) = \\sum_{\\ell = 0}^{\\infty}\\sum_{m=-\\ell}^{\\ell}I^\\nu_{\\ell m}Y_{\\ell m} (\\vec{n}).\n\\end{equation}\n\nApplying the moment expansion to the spherical harmonic coefficients, with respect to $\\beta$ and $T$, as in Eq.~\\ref{eq:momentinttemp}, leads to\n\n\\begin{align}\n I^\\nu_{\\ell m} = & \\frac{I_{\\nu}(\\beta_0(\\ell),T_0(\\ell))}{I_{\\nu_0}(\\beta_0(\\ell),T_0(\\ell))} \\bigg\\{ A_{\\ell m} + \\omega^\\beta_{1,\\ell m} \\ln\\left(\\frac{\\nu}{\\nu_0}\\right) + \\frac{1}{2}\\omega^\\beta_{2,\\ell m} \\ln^2\\left(\\frac{\\nu}{\\nu_0}\\right) \\nonumber \\\\[2mm]\n &+ \\omega^T_{1,\\ell m} \\Big( \\Theta(\\nu,T_0(\\ell)) - \\Theta(\\nu_0,T_0(\\ell)) \\Big) + \\dots \\quad\\bigg\\},\n\\label{eq:momentint2temp}\n\\end{align}\n\n\n\\noindent where this time $\\beta_0(\\ell)$ and $T_0(\\ell)$ are the averages of $\\beta$ and $T$ at a given multipole $\\ell$ over the sky fraction we are looking at. We note that the moment parameters $\\omega^{p_i}_{\\alpha,\\ell m}$ involved here are different from the $\\omega^{p_i}_{\\alpha}(\\vec{n})$ appearing in Eq.~\\ref{eq:momentinttemp} in map space because they involve different averaging. 
In principle, the moment expansion in harmonic space can take into account the three kinds of spatial averages presented in Sect.~\\ref{sect:limitsmbb}.\n\nAs the dust spectral index and temperature are difficult to separate in the frequency range considered for CMB studies \\citep[{i.e.}, Rayleigh-Jeans domain, see e.g.][]{betatcorr}, the moment expansion in harmonic space has only been applied in the past with respect to $\\beta$, with the temperature being fixed to a reference value $T=T_0$ \\citep{Mangilli,Azzoni}. In the present paper, for the first time, the moment expansion in harmonic space is instead performed with respect to both $\\beta$ and $T$, as it was in real space in \\citet{RemazeillesmomentsILC}.\n\n\\subsubsection{Cross-power spectra}\n\nRelying on the derivation made by \\cite{Mangilli} and Eq.~\\ref{eq:momentint2temp}, we can explicitly write the cross-spectra between two maps $M_{\\nu_i}$ and $M_{\\nu_j}$ at frequencies $\\nu_i$ and $\\nu_j$, using the moment expansion in $\\beta$ and $T$ as follows:\n\n\\begin{align}\n \\mathcal{D}_\\ell(\\nu_i \\times \\nu_j) &= \\frac{I_{\\nu_i}(\\beta_0(\\ell),T_0(\\ell))I_{\\nu_j}(\\beta_0(\\ell),T_0(\\ell))}{I_{\\nu_0}(\\beta_0(\\ell),T_0(\\ell))^2} \\cdot \\bigg\\{ \\nonumber \\\\[-0.5mm]\n \n 0^{\\rm th}\\ \\text{order}\\;&\n \\begin{cases}\n & \\ \\mathcal{D}_\\ell^{A \\times A}\n \\end{cases}\n \\nonumber \\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ \\beta\\;&\n \\begin{cases}\n &+\\mathcal{D}_\\ell^{A \\times \\omega^{\\beta}_1}\\left[ \\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right) + \\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right] \\nonumber \\\\\n &+ \\mathcal{D}_\\ell^{\\omega^{\\beta}_1 \\times \\omega^{\\beta}_1} \\left[\\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right)\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]\\nonumber \\\\ \\end{cases}\\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ T \\;&\n \\begin{cases}\n &+\\mathcal{D}_\\ell^{A \\times \\omega_1^T} \\left( \\Theta_i + \\Theta_j - 
2\\Theta_0\\right) \\\\\n &+ \\mathcal{D}_\\ell^{\\omega_1^T \\times \\omega_1^T}\\Big(\\Theta_i - \\Theta_0\\Big)\\left(\\Theta_j - \\Theta_0\\right)\\nonumber\n \\end{cases}\\\\[-0.5mm]\n \n 1^{\\rm st}\\ \\text{order}\\ T\\beta \\;&\n \\begin{cases}\n &+ \\mathcal{D}_\\ell^{\\omega^{\\beta}_1 \\times \\omega_1^T} \\left[ \\ln\\left(\\frac{\\nu_j}{\\nu_0} \\right)\\Big( \\Theta_i - \\Theta_0\\Big) + \\ln\\left(\\frac{\\nu_i}{\\nu_0} \\right)\\left( \\Theta_j - \\Theta_0\\right)\\right] \\nonumber \\\\\n \\end{cases}\\\\[-0.5mm]\n \n 2^{\\rm nd}\\ \\text{order}\\ \\beta \\;&\n \\begin{cases}\n &+ \\frac{1}{2} \\mathcal{D}_{\\ell}^{A \\times\\omega^{\\beta}_{2}} \\left[ \\ln^2\\left(\\frac{\\nu_i}{\\nu_0}\\right)\n + \\ln^2\\left(\\frac{\\nu_j}{\\nu_0}\\right)\n \\right]\n \\\\[-0.5mm]\n &+ \\frac{1}{2} \\mathcal{D}_{\\ell}^{\\omega^{\\beta}_{1} \\times \\omega^{\\beta}_{2}} \\Big[ \\ln \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\ln^2\\left(\\frac{\\nu_j}{\\nu_0}\\right) \n +\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right)\n \\ln^2 \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\Big] \n \\\\[-0.5mm]\n &+\\frac{1}{4} \\mathcal{D}_{\\ell}^{\\omega^{\\beta}_{2} \\times \\omega^{\\beta}_{2}} \\left[\\ln^2 \\left(\\frac{\\nu_i}{\\nu_0}\\right) \\ln^2 \\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]\n \\,\n \\end{cases}\\nonumber\\\\[-0.5mm]\n &+ \\dots \\bigg\\},\n\\label{eq:moments}\n\\end{align}\n\n\\noindent where we use the following abbreviation: $\\Theta(\\nu_k,T_0(\\ell))\\equiv \\Theta_k$, so that $\\Theta_0=\\Theta(\\nu_0,T_0(\\ell))$, and we define the moment expansion cross-power spectra between two moments $\\mathcal{M}$ and $\\mathcal{N}$ as\n\n\\begin{equation}\n \\mathcal{C}_\\ell^{\\mathcal{M}\\times\\mathcal{N}} = \\frac{1}{2\\ell+1}\\sum_{m=-\\ell}^{\\ell} \\mathcal{M}_{\\ell m} \\mathcal{N}^{*}_{\\ell m},\\ {\\rm with}\\ (\\mathcal{M},\\mathcal{N})\\in\\left\\{A,\\omega^{\\beta}_1,\\omega^{T}_1, \\omega^{\\beta}_2,\\ \\dots\\right\\}.\n\\label{eq:moments_spectra}\n\\end{equation}\n\nIn 
the remainder of this article, we use the $\\mathcal{D}_\\ell$ quantity, which is a scaling of the angular power spectra, defined as \n\\begin{equation}\n \\mathcal{D}_\\ell \\equiv \\frac{\\ell(\\ell +1)}{2 \\pi} \\mathcal{C}_\\ell.\n\\label{eq:dl}\n\\end{equation}\n\nEquation~\\ref{eq:moments} has been written using the expansion with respect to $\\beta$ at order two and $T$ at order one, as in Eq.~\\ref{eq:momentint2temp}. Nevertheless, the terms involving power spectra between order two in $\\beta$ and order one in $T$ have been neglected so as to match the needs of the implementation of our method in the following. \n\nHereafter, when we refer to ``order $k$'' at the angular power spectrum level, we are referring to moment expansion terms involving the pixel space moment up to order $k$. For example, $\\mathcal{D}_\\ell^{A\\times\\omega_1^T}$ and $\\mathcal{D}_\\ell^{\\omega^\\beta_1\\times\\omega_1^T}$ are order one, while $\\mathcal{D}_\\ell^{A\\times\\omega^\\beta_2}$, $\\mathcal{D}_\\ell^{\\omega^\\beta_1\\times\\omega^\\beta_2}$ and $\\mathcal{D}_\\ell^{\\omega^\\beta_2\\times\\omega^\\beta_2}$ are order two. At order zero, one retrieves the MBB description of the cross-angular power spectra SED $\\mathcal{D}_\\ell(\\nu_i\\times\\nu_j)$ as a function of the frequencies $\\nu_i$ and $\\nu_j$.\n\nThis formalism was originally introduced to analyze the complexity of intensity data by \\cite{Mangilli}. In the present work, we focus on $B$-mode polarization power spectra. Extending the formalism to polarization is supported by analyses of the \\textit{Planck}{} and balloon-borne Large Aperture Submillimeter Telescope for Polarimetry (BLASTPol) data, which find that the polarization fraction appears to be constant from far-infrared to millimetre wavelengths {\\citep{blastpol1,blastpol2}}. This allows us to assume that the same grain population is responsible for the total and polarized foreground emission \\citep{dustmodels}. 
As a result, intensity and polarization SED complexity may be similar.\nNevertheless, $Q$ and $U$ can have different SEDs because of the polarization angle frequency dependence \\citep[see e.g.,][]{tassis,moment_polar}, and so can $E$ and $B$. This could be a limitation when analyzing the dust $E$ and $B$ with a single moment expansion, especially when SED variations occur along the line of sight.\nEven when trying to model a single polarization component ---as we do in the present work, dealing only with $B$ modes--- it is not clear whether the distorted SED can be modeled in terms of $\\beta$ and $T$ moments only. Further work needs to be done to assess this question. However, these effects should not impact the present study, in which variations along the line of sight are not simulated.\n\nModeling the complexity of the foreground signals by means of the moment expansion of the $B$-mode angular power spectrum has already been successfully applied to Simons Observatory \\citep{SimonsObservatory} simulated data \\citep{Azzoni}. However, the approach taken by these latter authors is different from the one presented above. They apply a \\emph{minimal} moment expansion: assumptions are made to keep only the $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{\\beta}_1}$ and $\\mathcal{D}_\\ell^{A\\times\\omega^{\\beta}_2}$ parameters, which are modeled with a power-law scale dependence. These assumptions may not hold for experiments with higher sensitivity observing wider sky patches. Furthermore, they assume a scale-invariant dust spectral index. 
In this work, on the other hand, we relax these assumptions in order to characterize the required spectral complexity of the dust emission for \textit{LiteBIRD}{}.

\section{\label{sec:sims}Simulations and cross-spectra estimation}

\subsection{\label{sec:LiteBIRD}\textit{LiteBIRD}{}}

\textit{LiteBIRD}{} is an international project selected in May 2019 by the Japanese space agency (JAXA) as a strategic large-class mission. The launch is planned for 2029, for a minimal mission duration of three years \citep{2020SPIE,Ptep}.

\textit{LiteBIRD}{} is designed to perform a full-sky survey of the CMB at large angular scales in order to look for the reionization bump of primordial $B$-modes and to explore the tensor-to-scalar ratio ($r$) parameter space with a total uncertainty $\delta r$ below $10^{-3}$, including foreground cleaning and systematic errors.
\textit{LiteBIRD}{} is composed of three telescopes observing in different frequency intervals: the Low-, Medium-, and High-Frequency Telescopes (LFT, MFT, and HFT).
Each telescope illuminates a focal plane composed of hundreds of polarimetric detectors. The whole instrument will be cooled down to 5\,K \citep{LiteBIRDUpdatedDesign}, while the focal plane will be cooled down to 100\,mK \citep{LBsubK}. In order to mitigate instrumental systematic effects, the polarization is modulated by a continuously rotating half-wave plate. \textit{LiteBIRD}{} will observe the sky in 15 frequency bands from 40 to 402\,GHz.
Table~\\ref{tab:litetab} gives the details of the frequency bands and their sensitivities in polarization \\citep[adapted from][see Sect.~\\ref{sec:Instrsim}]{2020SPIE}.\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{cccc}\n \\multirow{2}{*}{Telescope} & Frequency & Sensitivity $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ & $\\boldsymbol \\theta_{\\rm FWHM}$ \\\\\n & [GHz]& [$\\mu$K$\\cdot$arcmin] & {arcmin}\\\\\n \\hline\n LFT & 40.0 & 37.42 & {70.5} \\\\\n LFT & 50.0 & 33.46 & {58.5} \\\\\n LFT & 60.0 & 21.31 & {51.1} \\\\\n LFT & 68.0 & {19.91\/31.77} & {41.6\/47.1} \\\\\n LFT & 78.0 & {15.55\/19.13} & {36.9\/43.8} \\\\\n LFT & 89.0 & {12.28\/28.77} & {33.0\/41.5} \\\\\n LFT\/MFT & 100.0 & {10.34\/8.48} & {30.2\/37.8} \\\\\n LFT\/MFT & 119.0 & {7.69\/5.70} & {26.3\/33.6} \\\\\n LFT\/MFT & 140.0 & {7.25\/6.38} & {23.7\/30.8} \\\\\n MFT & 166.0 & 5.57 & {28.9} \\\\\n MFT\/HFT & 195.0 & {7.05\/10.50} & {28.0\/28.6} \\\\\n HFT & 235.0 & 10.79 & {24.7}\\\\\n HFT & 280.0 & 13.8 & {22.5} \\\\\n HFT & 337.0 & 21.95 & {20.9} \\\\\n HFT & 402.0 & 47.45 & {17.9} \\\\\n\\end{tabular}\n\\caption{\\footnotesize Instrumental characteristics of \\textit{LiteBIRD}{} used in this study \\citep[adapted from][see Sect.~\\ref{sec:Instrsim}]{2020SPIE}. Some frequency bands are shared by two different telescopes or detector arrays. If so, the two values of polarization sensitivities $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ and instrumental beam full width at half maximum $\\theta_{\\rm FWHM}$ are displayed on the same line.}\n\\label{tab:litetab}\n\\end{center}\n\\end{table}\n\n\\subsection{Components of the simulations}\n\\label{sec:ingredients}\n\nWe build several sets of \\textit{LiteBIRD}{} sky simulations. These multi-frequency sets of polarized sky maps are a mixture of CMB, dust, and instrumental noise. 
The simulations are made at the nine highest frequencies accessible to the instrument ($\geq 100$\,GHz), where dust is the predominant source of foreground contamination.
For every studied scenario, we built $N_{\rm sim} = 500$ simulations, each composed of a set of $N_{\rm freq}=9$ pairs of sky maps $(Q,U)$ built using the {\sc HEALPix} package, with $N_{\rm side} = 256$ \citep{healpix}. All signals are expressed in $\mu{\rm K}_{\rm CMB}$ units.

\subsubsection{Cosmic microwave background signal}

To generate the CMB signal, we use the {\it Code for Anisotropies in the Microwave Background} \citep[CAMB,][]{CAMB} to create a fiducial angular power spectrum from the best-fit values of the cosmological parameters estimated by the recent \textit{Planck}{} data analysis \citep{PlanckOverview}.

For the $B$-modes, we consider the two different components of the spectrum, lensing-induced and primordial (tensor), so that $\mathcal{D}_{\ell}^{BB} =\mathcal{D}_{\ell}^{{\rm lensing}} + r_{\rm sim} \cdot \mathcal{D}_{\ell}^{{\rm tensor}}$, where $\mathcal{D}_{\ell}^{{\rm tensor}}$ refers to the tensor $B$-modes for $r=1$ and $r_{\rm sim}$ labels the input value of the tensor-to-scalar ratio $r$ contained in the simulation. We use two different values throughout this work: $r_{\rm sim}=0$ for the reference simulations, and $r_{\rm sim}=10^{-2}$ for consistency checks when the CMB primordial signal is present.

For all simulations, we then generate the Stokes $Q$ and $U$ CMB polarization Gaussian realization maps $S^{\rm CMB}_{\nu,r_{\rm sim}}$ from the angular power spectra using the {\tt synfast} function of {\sc HEALPix}.

\subsubsection{Foregrounds: dust}

Our study focuses on high frequencies ($\geq 100$\,GHz) only, where thermal dust emission is the main source of polarized foreground, as mentioned in Sect.~\ref{sec:intro}.
We make use of two different scenarios of increasing complexity included in the {\sc PySM} \citep{Pysm} and one of intermediate complexity not included in the {\sc PySM}:

\begin{itemize}
 \item {\tt d0}, included in the {\sc PySM}: the dust polarization $Q$ and $U$ maps are taken from $S^{Planck}_{353}$, the \textit{Planck}{} 2015 data at 353\,GHz \citep{planck_2015_overview}, extrapolated to a frequency $\nu$ using the MBB given in Eq.~\ref{eq:MBB}, with a temperature $T_0=T_{\tt d0}=20$\,K and a spectral index $\beta_0=\beta_{\tt d0}=1.54$ constant over the sky:
 
 \begin{equation}S^{\rm dust}_\nu=S_\nu^{\tt d0}=\frac{I_\nu(\beta_{\tt d0},T_{\tt d0})}{I_{\nu_0}(\beta_{\tt d0},T_{\tt d0})}\cdot S^{Planck}_{353};
 \end{equation}
 
 \item {\tt d1T}, introduced here: the dust polarization $Q$ and $U$ maps are also taken from \citet{planck_2015_overview}, but they are extrapolated to a frequency $\nu$ using the MBB given in Eq.~\ref{eq:MBBn}, with a spatially varying spectral index $\beta(\vec{n})$, as in {\tt d1}, and a fixed temperature $T_0=T_{\tt d1T}=21.9$\,K, obtained as the mean of the \textit{Planck}{} {\sc Commander} dust temperature map \citep{planck_2015_commander} on our $f_{\rm sky}=0.7$ sky mask:
 
 \begin{equation}
 S^{\rm dust}_\nu=S_\nu^{\tt d1T}=\frac{I_\nu(\beta(\vec{n}),T_{\tt d1T})}{I_{\nu_0}(\beta(\vec{n}),T_{\tt d1T})}\cdot S^{Planck}_{353};
 \end{equation}
 
 \item {\tt d1}, included in the {\sc PySM}: similar to {\tt d1T}, with both a spatially varying temperature $T(\vec{n})$ and spectral index $\beta(\vec{n})$ obtained from the \textit{Planck}{} data using the {\sc Commander} code \citep{planck_2015_commander}:
 
 \begin{equation}
 S^{\rm dust}_\nu=S_\nu^{\tt d1}=\frac{I_\nu(\beta(\vec{n}),T(\vec{n}))}{I_{\nu_0}(\beta(\vec{n}),T(\vec{n}))}\cdot S^{Planck}_{353}.
 \end{equation}

\end{itemize}

\subsubsection{
\\label{sec:Instrsim}Instrumental noise}\n\nThe band polarization sensitivities $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ are derived from the noise equivalent temperature (NET) values converted into $\\mu$K$\\cdot$arcmin for each telescope (LFT, MFT and HFT). As seen in Table~\\ref{tab:litetab}, some frequency bands are overlapping between two telescopes. In this situation, we take the mean value of the two NETs, weighted by the beam full width at half maximum (FWHM) $\\theta$ as:\n\n\\begin{equation}\n \\sigma^{\\rm noise}_{Q,U}(\\nu_{\\rm overlapping}) = \\sqrt{\\frac{1}{ \\left(\\frac{\\theta_{\\rm min}}{{\\theta_{\\rm max}}}\\sigma^{\\rm noise}_{Q,U}(\\nu_{\\theta_{\\rm min}})\\right)^{-2} + \\left(\\sigma^{\\rm noise}_{Q,U}(\\nu_{\\theta_{\\rm max}})\\right)^{-2} }},\n\\end{equation}\n\n{\\noindent where $\\theta_{\\rm min}$ is the smallest FWHM among the two and $\\theta_{\\rm max}$ the largest. The band polarization sensitivities} are displayed in Table~\\ref{tab:litetab}. For every simulation, the noise component $N_\\nu$ is generated in every pixel of the maps with a Gaussian distribution centered on zero, with standard deviation $\\sigma^{\\rm noise}_{Q,U}(\\nu)$ weighted by the pixel size (and $\\sqrt{2}\\cdot\\sigma^{\\rm noise}_{Q,U}(\\nu)$ for the maps used to compute the auto-power spectra, see Sect.~\\ref{sec:spectra_estimation}).\n\nFor simplicity, we choose to ignore beam effects in our simulations, assuming they can be taken into account perfectly. Simulations are thus produced at infinite (0\\,arcmin) resolution and no beam effect is corrected for when estimating the angular power spectrum. 
This is equivalent to convolving the maps by Gaussian beams of finite resolution and correcting the power spectra for the associated Gaussian beam window functions.

\subsection{Combining signals and building the simulated maps}

\label{sec:simulations}

The simulated $(Q,U)$ maps $M_\nu$, for a given simulation, can be expressed as the sum:

\begin{equation}
M_\nu=S^{\rm CMB}_{\nu,r_{\rm sim}}+S^{\rm dust}_\nu+N_\nu.
\label{eq:map}
\end{equation}
\begin{figure*}
 \centering
 \includegraphics[scale = 0.2]{Figures/simulations.pdf}
 \caption{\footnotesize Mean value over the $N_{\rm sim}$ simulations of the $B$-mode angular power spectra $\mathcal{D}_{\ell}(\nu_i \times \nu_j)$ for the {\tt d1c} simulation type, with $r_{\rm sim}=0$. The color bar spans all the $N_{\rm cross}$ spectra $\mathcal{D}_{\ell}(\nu_i \times \nu_j)$, associated with their reduced cross-frequency $\nu_{\rm red.}=\sqrt{\nu_i \nu_j}$, from 100\,GHz (dark red) to 402\,GHz (dark blue). The input CMB lensing power spectrum is shown as a black dashed line.}
 \label{fig:simulations}
\end{figure*}


Cosmic microwave background and noise are simulated stochastically: for each simulation, we generate a new realization of the CMB maps $S^{\rm CMB}_{\nu,r_{\rm sim}}$ and of the noise maps $N_\nu$. The dust map $S^{\rm dust}_\nu$ is the same for each simulation, at a given frequency.

Hereafter, we use the notation {\tt d0}, {\tt d1T,} and {\tt d1} to refer to simulations containing only dust and \textit{LiteBIRD}{} noise; {\tt d0c}, {\tt d1Tc,} and {\tt d1c} for simulations including CMB, dust, and \textit{LiteBIRD}{} noise; and, finally, {\tt c} for the simulation containing only CMB and \textit{LiteBIRD}{} noise.
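As a concrete illustration of the noise model of Sect.~\ref{sec:Instrsim}, the inverse-variance combination of two overlapping-band sensitivities can be sanity-checked numerically. The following sketch is an illustration written for this text, not the mission pipeline; the resulting combined value is not quoted in Table~\ref{tab:litetab}, which lists the two sensitivities separately.

```python
import numpy as np

def combine_sensitivities(sigma_min, sigma_max, theta_min, theta_max):
    """Combine the polarization sensitivities of two telescopes sharing a
    frequency band: the sensitivity of the band with the smallest beam FWHM
    (sigma_min, theta_min) is weighted by theta_min / theta_max before the
    inverse-variance sum with the other band (sigma_max, theta_max)."""
    weighted = (theta_min / theta_max) * sigma_min
    return 1.0 / np.sqrt(weighted ** -2 + sigma_max ** -2)

# 100 GHz band: LFT (10.34 uK.arcmin, 30.2 arcmin FWHM)
#               MFT (8.48 uK.arcmin, 37.8 arcmin FWHM)
sigma_100 = combine_sensitivities(10.34, 8.48, 30.2, 37.8)  # ~5.9 uK.arcmin
```

As expected for an inverse-variance sum, the combined sensitivity is smaller (better) than either input.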
The different components present in these different {simulation types} are summarized in Table~\\ref{tab:sims}.\n\n\\begin{table}[t!]\n\\centering\n \\begin{tabular}{c !{\\color{white}\\vrule width 1pt} c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c}\n & $S^{\\rm CMB}_{\\nu,r_{\\rm sim}}$ & $S^{\\tt d0}_\\nu$ & $S^{\\tt d1T}_\\nu$ & $S^{\\tt d1}_\\nu$ & $N_\\nu$ \\\\[0.5ex]\\noalign{\\color{white}\\hrule height 1pt}\n {\\tt c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d0} & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1T} & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1} & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d0c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- 
cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1Tc} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n {\\tt d1c} & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\n \\end{tabular}\n\n\\caption{\\footnotesize Summary of the different components present in the simulated maps $M_\\nu$ in Eq.~\\ref{eq:map}, for every \\emph{simulation type}. A tick on a green background signifies that the component is present in the simulations, red with a cross symbol shows that it is absent.}\n\\label{tab:sims}\n\\end{table}\n\n\\subsection{ \\label{sect:spectra} Angular power spectra of the simulations}\n\n\\subsubsection{\\label{sect:mask}Mask}\n\nA mask is applied on the simulated maps presented in Sect.~\\ref{sec:simulations} in order to exclude the Galactic plane from the power-spectrum estimation. The mask is created by setting a threshold on the polarized intensity ($P=\\sqrt{Q^2+U^2}$) of the \\textit{Planck}{} 353\\,GHz map \\citep{PlanckOverview} \\footnote{\\url{http:\/\/pla.esac.esa.int\/pla\/}}, smoothed with a $10^{\\circ}$ beam. 
In order to keep $f_{\\rm sky}= 0.7$,$f_{\\rm sky}= 0.6,$ and $f_{\\rm sky}= 0.5$, the cut is applied at 121\\,$\\mu$K, 80\\,$\\mu$K, and 53\\,$\\mu$K, respectively. We then realize a C2 apodization of the binary mask with a scale of $5^{\\circ}$ using {\\sc Namaster} \\citep{namaster}. The resulting Galactic masks are displayed in Fig.~\\ref{fig:mask}. These masks are similar to those used in \\citet{PlanckDust2}.\n\n\\subsubsection{Estimation of the angular power spectra}\n\\label{sec:spectra_estimation}\n\nWe use the {\\sc Namaster}\\footnote{\\url{https:\/\/github.com\/LSSTDESC\/NaMaster}} software \\citep{namaster} to compute the angular power spectra of each simulation. {\\sc Namaster} allows us to correct for the $E$ to $B$ leakage bias due to the incomplete sky coverage. Therein we use a \\emph{purification} process to suppress the effect of the $E$ to $B$ leakage in the variance. For every simulation, from the set of maps $M_{\\nu_i}$, we compute all the possible auto-frequency and cross-frequency spectra $\\mathcal{D}_\\ell({\\nu_i\\times\\nu_j})\\equiv\\mathcal{D}_\\ell({M_{\\nu_i}\\times M_{\\nu_j}})$ with\n\n\\begin{align}\n\\nu_i\\times \\nu_j \\in \\left\\{\\right.&100\\times 100, 100 \\times 119, 100 \\times 140,\\ \\dots, 100\\times402,\\nonumber\\\\\n&119\\times140,\\ \\dots, 119\\times402,\\nonumber\\\\[-2mm]\n&\\qquad\\qquad\\vdots\\nonumber\\\\\n&337\\times337, 337\\times402,\\nonumber\\\\\n&\\left.\\!\\!402 \\times 402 \\right\\},\n\\label{eq:all_cross}\n\\end{align}\n\n\\noindent leading to $N_{\\rm cross}=N_{\\rm freq}\\cdot(N_{\\rm freq}+1)\/2=45$ cross-frequency spectra. These spectra are displayed in Fig.~\\ref{fig:simulations} for the case of the {\\tt d1c} simulation\ntype.\\\\\n\nIn order to avoid noise auto-correlation in the auto-spectra ({i.e.}, $\\mathcal{D}_\\ell(\\nu_i\\times\\nu_j)$ when $i=j$), the latter are estimated in a way that differs slightly from what is presented in Sect.~\\ref{sec:Instrsim}. 
We simulate two noise-independent data subsets at an observing frequency $\nu_i$, with a noise amplitude $\sqrt{2}$ times higher than that of the frequency band, and compute the cross-angular power spectrum between those. Thus, $\mathcal{D}_\ell(\nu_i\times\nu_i)$ is free from noise auto-correlation bias, at the expense of multiplying the noise amplitude in the spectrum by a factor of two. This approach is similar to that commonly used by the \textit{Planck}{} Collaboration \citep[see e.g.,][]{PlanckDust,PlanckDust2,tristram}.



The spectra are evaluated in the multipole interval $\ell \in [1,200]$ in order to focus on the reionization and recombination bumps of the primordial $B$-mode spectrum.
The spectra are binned in $N_{\ell}=20$ bins of size $\Delta \ell = 10$ using {\sc Namaster}. The same binning is applied throughout this article such that, in the following, the multipole $\ell$ denotes the multipole bin of size $\Delta\ell=10$ centered on $\ell$\footnote{The $N_\ell$ multipole bins are centered on the following $\ell$ values: $[6.5, 16.5, 26.5, 36.5, 46.5, 56.5, 66.5, 76.5, 86.5, 96.5, 106.5, 116.5, 126.5, 136.5, 146.5, 156.5, 166.5, 176.5, 186.5, 196.5]$.}.

From the sets of $(Q,U)$ maps, {\sc Namaster} computes the $\mathcal{D}_\ell^{EE}$, $\mathcal{D}_\ell^{BB}$, and $\mathcal{D}_\ell^{EB}$ angular power spectra; for the sake of the present analysis, we keep only $\mathcal{D}_\ell^{BB}$. Hence, when we discuss or analyze power spectra, we are referring to the $B$-mode power spectra $\mathcal{D}_\ell^{BB}$.
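The bookkeeping above (45 frequency pairs and 20 multipole bins) can be reproduced in a few lines of Python. This is an illustration only, the actual spectra being computed with {\sc Namaster}; the bin edges are an assumption on our part, chosen so as to reproduce the quoted bin centers.

```python
from itertools import combinations_with_replacement

# The nine LiteBIRD frequencies used in this analysis (GHz).
freqs = [100, 119, 140, 166, 195, 235, 280, 337, 402]

# All auto- and cross-frequency pairs: N_freq * (N_freq + 1) / 2 = 45.
pairs = list(combinations_with_replacement(freqs, 2))

# Twenty bins of width 10, assumed here to start at ell = 2, which
# reproduces the quoted bin centers 6.5, 16.5, ..., 196.5.
bin_edges = [(2 + 10 * i, 11 + 10 * i) for i in range(20)]
bin_centers = [0.5 * (lo + hi) for lo, hi in bin_edges]
```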
All spectra are expressed in $(\mu{\rm K}_{\rm CMB})^2$.

\section{\label{sec:fit}Best-fit implementation}


In order to characterize the complexity of the dust SED that will be measured by \textit{LiteBIRD}{}, we modeled the angular power spectra of our simulations described in Sect.~\ref{sec:sims} over the whole frequency and multipole ranges with the moment expansion formalism introduced in Sect.~\ref{sec:formalism}.

\subsection{General implementation}
\label{sec:fit_dust}

For each multipole $\ell$, we ordered the angular power spectra $\mathcal{D}_\ell^{BB}(\nu_i\times\nu_j)$ as in Eq.~\ref{eq:all_cross} in order to build an SED that is a function of both $\nu_i$ and $\nu_j$. We fit this SED with models, as in Eq.~\ref{eq:moments} for example, using a Levenberg-Marquardt $\chi^2$ minimization with {\tt mpfit} \citep{mpfit}\footnote{\url{https://github.com/segasai/astrolibpy/tree/master/mpfit}}. All the fits performed with {\tt mpfit} were also realized with more computationally demanding Markov chain Monte Carlo (MCMC) sampling with {\tt emcee} \citep{emcee}, giving compatible results, well within the error bars.

The reduced $\chi^2$ is given by

\begin{equation}
 \chi^2 = \frac{1}{N_{\rm d.o.f.}}\vec{R}^T\mathbb{C}^{-1}\vec{R},
\label{eq:chi2}
\end{equation}

\begin{figure}[t]
 \centering
 \includegraphics[scale = 0.3]{Figures/corr2.png}
 \caption{\footnotesize Correlation matrix (${\rm Corr}_{\ell\ell'} \equiv \mathbb{C}_{\ell\ell'}/\sqrt{\mathbb{C}_{\ell\ell}\mathbb{C}_{\ell'\ell'}}$) for the $N_{\rm sim}$ simulations in {\tt d1c}. Every block represents a value of $\ell$ and contains the ordered $N_{\rm cross} = 45$ cross-spectra.
The red squares represent the truncation of the full covariance matrix applied in the analysis (kept entries in red, other entries set to zero).}
 \label{fig:cov}
\end{figure}

\noindent where $N_{\rm d.o.f.}$ is the number of degrees of freedom and $\mathbb{C}$ is the covariance matrix of our $N_{\rm sim}$ simulations, represented in Fig.~\ref{fig:cov}, of dimension $(N_{\ell}\cdot N_{\rm cross})^2$:
\begin{equation}
 \mathbb{C}_{\ell,\ell'}^{i\times j,k\times l} = {\rm cov}\left(\mathcal{D}^{\rm sim}_\ell (\nu_i \times \nu_j),\mathcal{D}^{\rm sim}_{\ell'} (\nu_k \times \nu_l)\right).
\label{eq:cov}
\end{equation}

The entire covariance matrix $\mathbb{C}$ is, in general, not invertible. To circumvent this, we kept only the $\ell=\ell'$ block-diagonal of $\mathbb{C}$, which contains the strongest correlation values\footnote{${\rm Corr}_{\ell\ell'} \equiv \mathbb{C}_{\ell\ell'}/\sqrt{\mathbb{C}_{\ell\ell}\mathbb{C}_{\ell'\ell'}}$}, as well as the $(\ell=6.5,\ell'=16.5)$ off-diagonal blocks, which show a significant anti-correlation, as illustrated in Fig.~\ref{fig:cov}. The truncated covariance matrix thus defined could then be inverted with the required precision most of the time.

In the case of the {\tt d1} simulation type, we experienced a fit convergence issue for $\sim20$\,\% of the simulations, leading to a very large $\chi^2$. In order to overcome this problem, two options lead to identical results: discarding the outliers from the analysis, or fitting using only the block-diagonal matrix (i.e., the $\ell=6.5$, $\ell'=16.5$ block is set to zero). The latter option solves the convergence issue while providing sufficient precision.
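The truncation of the covariance matrix described above can be sketched as follows. This is a schematic of the matrix bookkeeping only, with illustrative dimensions; the function name is ours.

```python
import numpy as np

def truncate_covariance(cov, n_ell, n_cross, keep_first_offdiag=True):
    """Keep only the ell = ell' diagonal blocks of the full covariance
    matrix and, optionally, the off-diagonal blocks coupling the first
    two multipole bins; all other entries are set to zero."""
    trunc = np.zeros_like(cov)
    b = n_cross  # each multipole bin contributes a b x b block
    for i in range(n_ell):
        trunc[i * b:(i + 1) * b, i * b:(i + 1) * b] = \
            cov[i * b:(i + 1) * b, i * b:(i + 1) * b]
    if keep_first_offdiag:  # the (ell, ell') = (6.5, 16.5) blocks
        trunc[0:b, b:2 * b] = cov[0:b, b:2 * b]
        trunc[b:2 * b, 0:b] = cov[b:2 * b, 0:b]
    return trunc
```

Setting {\tt keep\_first\_offdiag=False} corresponds to the purely block-diagonal option used for the {\tt d1} simulation type.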
The results presented in the following use the block-diagonal matrix when the simulation type is {\tt d1}.

Finally, in Eq.~\ref{eq:chi2}, $\vec{R}$ is the residual vector associated with every simulation, of size $N_{\ell}\times N_{\rm cross}$:

\begin{equation}
\vec{R} =
\begin{pmatrix}
\mathcal{R}_{\ell = 6.5}(100 \times 100) \\
\mathcal{R}_{\ell = 6.5}(100 \times 119) \\
\vdots \\
\mathcal{R}_{\ell = 16.5}(100 \times 100)\\
\vdots \\
\mathcal{R}_{\ell = 196.5}(402 \times 402)\\
\end{pmatrix}, 
\end{equation}

\noindent with $\mathcal{R}_\ell (\nu_i \times \nu_j) = \mathcal{D}^{\rm sim}_\ell (\nu_i \times \nu_j) - \mathcal{D}^{\rm model}_\ell(\nu_i \times \nu_j)$.

The expression used for the model to fit is given by:

\begin{equation}
\begin{split}
 \mathcal{D}_{\ell}^{\rm model}(\nu_i \times \nu_j) &= \mathcal{D}_{\ell}^{{\rm dust}}\left(\beta_0(\ell), T_0(\ell), \mathcal{D}^{\mathcal{M}\times\mathcal{N}}_{\ell}(\nu_i\times\nu_j)\right) \\ & + A_{\rm lens} \cdot \mathcal{D}_{\ell}^{{\rm lensing}} + r \cdot \mathcal{D}_{\ell}^{{\rm tensor}}, 
\label{eq:model}
\end{split}
\end{equation}

\noindent where $A_{\rm lens}$ is not a free parameter and remains fixed to zero (when there is no CMB, simulation types {\tt d0}, {\tt d1T,} and {\tt d1}) or one (when the CMB is included, simulation types {\tt d0c}, {\tt d1Tc,} and {\tt d1c}). We leave the question of the impact of dust modeling with moments on the lensing measurement for future work. In Eq.~\ref{eq:model}, the free parameters can thus be $\beta_{0}(\ell)$, $T_{0}(\ell)$, $\mathcal{D}^{\mathcal{M}\times\mathcal{N}}_{\ell}(\nu_i\times\nu_j)$, and the tensor-to-scalar ratio $r$. The estimated value of $r$ is referred to as $\hat{r}$.

No priors on the parameters are used, in order to explore the parameter space with minimal assumptions.
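With the residual vector and the truncated covariance in hand, evaluating the reduced $\chi^2$ of Eq.~\ref{eq:chi2} is a single quadratic form. A minimal sketch, taking the number of degrees of freedom as the number of data points minus the number of fitted parameters:

```python
import numpy as np

def reduced_chi2(residual, cov, n_params):
    """chi^2 = R^T C^{-1} R / N_dof, with N_dof taken as the number of
    data points (N_ell x N_cross) minus the number of fitted parameters."""
    n_dof = residual.size - n_params
    # Solve C x = R rather than forming the matrix inverse explicitly.
    return residual @ np.linalg.solve(cov, residual) / n_dof
```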
Finally, a frequency-dependent conversion factor is included in $\\mathcal{D}_{\\ell}^{{\\rm dust}}$ -- from (MJy$\\cdot$sr$^{-1})^2$ to $(\\mu$K$_{\\rm CMB})^2$ -- to express the dust spectra in ($\\mu$K$_{\\rm CMB})^2$ units. In those units, $\\mathcal{D}_{\\ell}^{{\\rm lensing}}$ and $\\mathcal{D}_{\\ell}^{\\rm tensor}$ are frequency-independent.\n\nTo mitigate the impact of outliers in our simulations, all the final values of the best-fit parameters and $\\chi^2$ distributions are represented by their median and median absolute deviations over $N_{\\rm sim}$ values. For the tensor-to-scalar ratio $\\hat{r}$, we chose to represent all the best-fit values from the $N_{\\rm sim}$ simulations in a histogram and we assume its distribution is normal. Fitting a Gaussian curve on this histogram and getting the mean and standard deviation gives us the final values of $\\hat{r}$ and $\\sigma_{\\hat{r}}$ presented in the paper.\n\n\n\n\\subsection{Implementation for the dust component}\n\\label{sec:fit_general}\n\nFor the dust component, we consider four different \\emph{fitting schemes}, corresponding to four expressions for the dust model $\\mathcal{D}^{\\rm dust}_\\ell$ in Eq.~\\ref{eq:model}, which are referred to as \"MBB\", \"$\\beta$-1\", \"$\\beta$-$T$\", and \"$\\beta$-2\". Each of them corresponds to a truncation of Eq.~\\ref{eq:moments}, keeping only some selected terms of the moment expansion: MBB stands for those of the modified black body, $\\beta$-1 for those of the expansion in $\\beta$ at first order, $\\beta$-2 for the expansion in $\\beta$ at second order, and $\\beta$-$T$ for the expansion in both $\\beta$ and $T$ at first order. We chose the $\\beta$-1 and $\\beta$-2 truncations based on the studies of \\citet{Mangilli} and \\citet{Azzoni}, where the dust SED moment expansion is performed only with respect to $\\beta$. 
The $\\beta$-$T$ fitting scheme is instead the first-order truncation in both $\\beta$ and $T$, introduced here for the first time at the power spectrum level. The parameters fitted in each of these fitting schemes are summarized in Table~\\ref{tab:parameter}. We note that the $\\beta$-2 and $\\beta$-$T$ fitting schemes share the same number of free parameters. Finally, when we fit $\\hat{r}$ at the same time as the dust parameters, the fitting schemes will be referred to as $r$MBB, $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2.\n\nDifferent physical processes are expected to occur at different angular scales, leading to different SED properties. Thus, we estimate the dust-related parameters with one parameter per multipole bin. As an example, we estimate $\\beta_0 = \\beta_0(\\ell)$ and $T_0 = T_0(\\ell)$ to be able to take into account their scale dependence, at the cost of increasing the number of free parameters in our model. This is also true for the higher order moments. On the other hand, $\\hat{r}$ is not scale dependent and, when it is fitted, we add one single parameter over the whole multipole range.\n\nIn \\cite{Mangilli}, the first-order moment expansion parameter $\\mathcal{D}_\\ell^{A\\times\\omega^\\beta_1}$ is considered to be the leading order correction to the MBB spectral index. We applied a similar approach in the present work, extending it to the dust temperature when it is fitted. 
In our pipeline, we proceed iteratively:

\begin{enumerate}
\item[(i)] we fit $\beta_0(\ell)$ and $T_0(\ell)$ at order zero (MBB), for each $\ell$;

\item[(ii)] we fix $\beta_0(\ell)$ and $T_0(\ell)$ and fit the higher order parameters, as in Eq.~\ref{eq:moments};

\item[(iii)] we update $\beta_0(\ell)$ to $\beta_{\rm corr}(\ell)$ {(and $T_0(\ell)$ to $T_{\rm corr}(\ell)$ in the case of $\beta$-$T$)} as:

\begin{equation}
 \beta_{\rm corr}(\ell) = \beta_0(\ell) + \frac{\mathcal{D}^{A \times \omega^{\beta}_1 }_{\ell}}{\mathcal{D}^{A \times A}_{\ell}},\quad T_{\rm corr}(\ell) = T_0(\ell) + \frac{\mathcal{D}^{A \times \omega^{T}_1 }_{\ell}}{\mathcal{D}^{A \times A}_{\ell}};
\label{eq:iteration}
\end{equation}

\item[(iv)] we iterate from (ii), fixing $\beta_0(\ell)=\beta_{\rm corr}(\ell)$, until $\mathcal{D}_\ell^{A \times \omega^{\beta}_1}$ converges to be compatible with zero (and $T_0(\ell)=T_{\rm corr}(\ell)$, until $\mathcal{D}_\ell^{A \times \omega^{T}_1}$ converges to zero in the case of $\beta$-$T$).
\end{enumerate}
We used three such iterations, which we found sufficient to guarantee convergence.
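The pivot update of step (iii), Eq.~\ref{eq:iteration}, is a simple ratio correction; schematically:

```python
def update_pivots(beta_0, T_0, d_A_A, d_A_w1b, d_A_w1T=0.0):
    """Correct the MBB pivots with the first-order moments:
    beta_corr = beta_0 + D^{A x w1^beta} / D^{A x A}, and likewise for T
    (the T update only applies in the beta-T fitting scheme)."""
    beta_corr = beta_0 + d_A_w1b / d_A_A
    T_corr = T_0 + d_A_w1T / d_A_A
    return beta_corr, T_corr
```

Once the fit has converged, $\mathcal{D}_\ell^{A\times\omega^{\beta}_1}$ is compatible with zero and the pivots stop moving.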
As the moment expansion is a nonorthogonal and incomplete basis \\citep{Chluba}, this iterative process is performed to ensure that the expansions up to different orders share the same $\\beta_0(\\ell)$ and $T_0(\\ell)$ with $\\mathcal{D}^{A \\times \\omega^{\\beta}_1 }_\\ell=0$ and $\\mathcal{D}^{A \\times \\omega^{T}_1 }_\\ell=0$.\n\n\n\\begin{table}[t!]\n\\centering\n \\normalsize\n \\begin{tabular}{c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c!{\\color{white}\\vrule width 1pt}c}\n & MBB & $\\beta$-1 & $\\beta$-$T$ & $\\beta$-2\\\\\n \\noalign{\\color{white}\\hrule height 1pt}\n $N_{\\rm param.}$ & \\cellcolor{mygrey} $3N_\\ell$& \\cellcolor{mygrey} $2N_\\ell$ & \\cellcolor{mygrey} $5N_\\ell$ & \\cellcolor{mygrey} $5N_\\ell$ \\\\\\noalign{\\color{white}\\hrule height 1pt}\n $\\beta_0(\\ell)$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myyellow}$\\circ$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $T_0(\\ell)$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myred}$\\times$ & \\cellcolor{myyellow}$\\circ$ & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times A}$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;& \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_1^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- 
cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_1^\\beta}$ & \\cellcolor{myred}$\\times$& \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$& \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; &\\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^T\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$& \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_1^T}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle; & \\cellcolor{myred}$\\times$\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{A\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_1^\\beta\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n $D_\\ell^{\\omega_2^\\beta\\times\\omega_2^\\beta}$ & \\cellcolor{myred}$\\times$ & \\cellcolor{myred}$\\times$ & 
\\cellcolor{myred}$\\times$ & \\cellcolor{mygreen}\\tikz\\fill[scale=0.3](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;\\\\\\noalign{\\color{white}\\hrule height 1pt}\n \\end{tabular}\n \\normalsize\n\\caption{\\footnotesize Summary of the fitted parameters in the four dust moment expansion \\emph{fitting schemes} we consider (MBB, $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2), in Eq.~\\ref{eq:moments}. A tick on a green background signifies that the parameter is fitted, a cross on a red background that it is not fitted, and a circle on a yellow background that the parameter is fixed and corrected through an iterative process as presented in Sect.~\\ref{sec:fit_general}. $\\mathcal{D}_\\ell^{A\\times A}$ is fixed to the MBB best-fit value in the case of $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2 and all the other moments are set to zero when they are not fitted. When $\\hat{r}$ is fitted at the same time, the fitting schemes are denoted $r$MBB, $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2, and they have one more parameter than the number of parameters reported in the first line.}\n\\label{tab:parameter}\n\\end{table}\n\n\\section{\\label{sec:results} Results}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d0.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d1T.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/chi2dustgauss_d1.pdf}\n \\caption{\\footnotesize Median of the reduced $\\chi^2$ in every multipole bin $\\ell$, for all the $N_{\\rm sim}$ simulations of {\\tt d0} (top, orange), {\\tt d1T} (middle, green) and {\\tt d1} (bottom, blue), on $f_{\\rm sky}=0.7$. The reduced $\\chi^2$ values are reported for the four different fitting schemes: MBB (circles), $\\beta$-1 (crosses), $\\beta$-$T$ (diamonds) and $\\beta$-2 (triangles). The values for the four fitting schemes are shifted from each other by $\\ell=2$, in order to distinguish them. 
The black dashed line represents $\\chi^2_{\\rm red}=1$.}\n \\label{fig:chi2dust}\n\\end{figure}\n\nIn this section, we present our evaluation of the best-fit parameters for the different {fitting schemes} presented in Sect.~\\ref{sec:fit_dust} on the $B$-mode cross-angular power spectra computed from the different {simulation types} presented in Sect.~\\ref{sec:simulations} and on the Galactic mask keeping $f_{\\rm sky}=0.7$, which is defined in Sect.~\\ref{sect:mask}.\nWe first tested the simulation types containing only dust and noise in order to calibrate the dust complexity of our data sets in Sect.~\\ref{sec:results_dust_only}. We then used CMB only plus noise simulations to assess the minimal error on $\\hat{r}$ in Sect.~\\ref{sec:c} and, finally, we explored the dust, CMB, and noise simulation types to assess the impact of the dust complexity on $\\hat{r}$ in Sect.~\\ref{sec:results_dust_CMB}.\n\n\\subsection{Dust only}\n\\label{sec:results_dust_only}\n\nTo evaluate the amplitude of the dust moment parameters contained in the dust simulations in the absence of CMB, we ran the fitting schemes presented in Sect.~\\ref{sec:fit_dust} in the three simulation types {\\tt d0}, ${\\tt d1T,}$ and ${\\tt d1}$ presented in Sect.~\\ref{sec:simulations}. In these cases, $A_{\\rm lens}$ and $r$ in Eq.~\\ref{eq:model} are both fixed to zero and the fitted parameters are given in Table~\\ref{tab:parameter} for every fitting scheme.\n\n\\subsubsection{{\\tt d0}}\n\\label{sec:d0}\n\nThe {\\tt d0} dust maps presented in Sect.~\\ref{sec:ingredients} extrapolate between frequency bands with a MBB SED with constant parameters over the sky: $\\beta_{\\tt d0} = 1.54$ and $T_{\\tt d0} = 20$\\,K. 
We performed the fit with the four fitting schemes presented in Sect.~\\ref{sec:fit_dust}.\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/betadustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/Tgauss.pdf}\n \\caption{\\footnotesize (Top): Median of the best-fit values of $\\beta_0(\\ell)$ in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for the MBB (circles). $\\beta_{\\tt d0}$ is marked by the dashed black line. (Bottom): Same as above but with $T_0(\\ell)$, the black dashed lines being $T_{\\tt d0}=20$\\,K and $T_{\\tt d1T}=21.9$\\,K.}\n \\label{fig:betaTdust}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:chi2dust} the values of the reduced $\\chi^2(\\ell)$ for each fitting scheme are displayed. For every fitting scheme (MBB, $\\beta$-1, $\\beta$-$T$ and $\\beta$-2), the reduced $\\chi^2$ are close to 1 over the whole multipole range (slightly below 1 for the $\\beta$-1, $\\beta$-$T$ and $\\beta$-2 fitting schemes). This indicates that the MBB is a good fit to the cross-angular power spectra computed from the {\\tt d0} maps with a spatially invariant MBB SED, as expected. Including additional (higher-order) parameters, such as with $\\beta$-1, $\\beta$-$T$ and $\\beta$-2, has no significant effect on the $\\chi^2$.\n\nIn Fig.~\\ref{fig:betaTdust} we can see that the best-fit values of $\\beta_0(\\ell)$ and $T_0(\\ell)$ are compatible with constant values $\\beta_0(\\ell)=\\beta_{\\tt d0}$ and $T_0(\\ell)=T_{\\tt d0}$, as expected for this simulated data set.\n\nThe best-fit values of the dust amplitude and the moment-expansion parameters are presented in Figs.~\\ref{fig:Adust}, \\ref{fig:moments1}, \\ref{fig:moments2}, and \\ref{fig:moments3}. The amplitude power spectrum is compatible with that of the dust template map used to build {\\tt d0} and the moment-expansion parameters are compatible with zero for every fitting scheme, as expected with no spatial variation of the SED. 
\nTherefore, the moment expansion method presented in Sect.~\\ref{sec:formalism} passes the \\emph{null test} in the absence of SED distortions, with the {\\tt d0} simulated data set. \n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/Adustgauss.pdf}\n \\caption{\\footnotesize Median of the best-fit values of $\\mathcal{D}_\\ell^{A \\times A}$ for {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) using the MBB fitting scheme. The values for the three simulation types are shifted with respect to one another by $\\ell=2$ in order to distinguish them. The black dashed line is the amplitude power spectrum of the dust template map used to build the three simulation sets {\\tt d0}, {\\tt d1T,} and {\\tt d1}.}\n \\label{fig:Adust}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/w1w1dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the first-order moment $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{\\beta}_1}$ for {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue), fitting with $\\beta$-1 (crosses), $\\beta$-2 (triangles), and $\\beta$-$T$ (diamonds).}\n \\label{fig:moments1}\n\\end{figure}\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/Aw2dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/w1w2dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/w2w2dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the second-order $\\mathcal{D}_\\ell^{A\\times\\omega^{\\beta}_2}$, $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{\\beta}_2}$ and $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2\\times\\omega^{\\beta}_2}$ moment parameters in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for $\\beta$-2 (triangles). 
}\n \\label{fig:moments2}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/b1t1dustgauss.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/t1t1dustgauss.pdf}\n \\caption{\\footnotesize Best-fit values of the first-order $\\mathcal{D}_\\ell^{\\omega^{\\beta}_1\\times\\omega^{T}_1}$ and $\\mathcal{D}_\\ell^{\\omega^{T}_1\\times\\omega^{T}_1}$ moment parameters in {\\tt d0} (orange), {\\tt d1T} (green), and {\\tt d1} (blue) for $\\beta$-$T$ (diamonds). }\n \\label{fig:moments3}\n\\end{figure}\n\n\n\\subsubsection{{\\tt d1T}}\n\\label{sec:d1T}\n\n\nWith the {\\tt d1T} simulation type, we now introduce, as a first layer of complexity, spatial variations of the spectral index associated with a fixed temperature over the sky. The dust temperature was fixed to $T_{\\tt d1T}=21.9$\\,K while the spectral index $\\beta(\\vec{n})$ was allowed to vary between lines of sight. The four different fitting schemes presented in Sect.~\\ref{sec:fit_dust} were applied to the cross-spectra of our simulations as in Sect.~\\ref{sec:d0}.\n\nThe reduced $\\chi^2(\\ell)$ values for each fitting scheme can be found in Fig.~\\ref{fig:chi2dust}. It can be seen that the MBB no longer provides a good fit for the dust SED, especially at low multipoles. Averaging effects of spatially varying SEDs are more important over large angular scales and thus SED distortions and moments are expected to be more significant at low multipoles. Indeed, the moments added to the fit in $\\beta$-1 are enough to lower the reduced $\\chi^2$ such that it becomes compatible with 1 over almost all of the multipole range. The fitting schemes $\\beta$-$T$ and $\\beta$-2, including more parameters than $\\beta$-1, provide a similarly good fit, except in the multipole bin $\\ell=66.5$ where they are closer to 1.\n\nFigure~\\ref{fig:betaTdust} presents the best-fit values of $\\beta_0(\\ell)$ in the case of the MBB fit. 
For the sake of clarity, the values after iteration (see Sect.~\\ref{sec:fit_dust}) for $\\beta$-1, $\\beta$-$T,$ and $\\beta$-2 are not shown, but they present comparable trends. We can see that the best-fit values of $\\beta_0(\\ell)$ for this {\\tt d1T} simulation type are no longer compatible with a constant. The fitted $\\beta_0(\\ell)$ values show a significant increase at low multipoles ($\\ell<100$), up to $\\beta_0(\\ell=16.5)=1.65$. For $\\ell>100$, $\\beta_0(\\ell)$ is close to a constant value of $\\sim1.53$. This increase toward low $\\ell$ is correlated with the increase of the MBB $\\chi^2$ discussed in the previous paragraph. However, we note that in the lowest $\\ell$-bin, the $\\beta_0(\\ell)$ value is close to 1.53 and that the $\\chi^2$ of the MBB fit is close to unity.\n\n\nThe best-fit values of $T_0(\\ell)$ are also presented in Fig.~\\ref{fig:betaTdust} in the case of the MBB fit. Here again, the values after iteration for the other fitting schemes are not presented, but are similar. The {\\tt d1T} $T_0(\\ell)$ best-fit values oscillate around $T_{\\tt d1T}=21.9$\\,K, without being strictly compatible with a constant value, as would be expected for this simulation type. This tends to indicate that the SED distortions due to the spectral index spatial variations affect the accuracy with which we can recover the correct angular dependence of the sky temperature. \n\nThe amplitude power spectrum is displayed in Fig.~\\ref{fig:Adust} for the MBB fitting scheme. The other fitting scheme results are not presented for clarity and would not be distinguishable from those of the MBB. The fitted $\\mathcal{D}_\\ell^{A\\times A}$ is compatible with that of the dust template map used to build the simulations. 
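The origin of these SED distortions can be illustrated with a toy average of modified blackbodies: summing MBBs with different spectral indices yields a total SED that no single MBB reproduces exactly, which is what the moment parameters absorb. The following minimal sketch uses illustrative numbers only and is not the fitting pipeline of this work:

```python
import numpy as np

# Toy illustration (not the paper's pipeline): averaging two modified
# blackbody (MBB) SEDs with different spectral indices yields a summed
# SED that no single MBB fits exactly -- the "SED distortion" that the
# moment expansion is designed to absorb. All numbers are illustrative.
H_OVER_K = 0.04799  # h/k_B in K/GHz

def mbb(nu, beta, T, nu0=353.0):
    """MBB brightness, normalized to unity at nu0 [GHz]."""
    x, x0 = H_OVER_K * nu / T, H_OVER_K * nu0 / T
    return (nu / nu0) ** (beta + 3.0) * np.expm1(x0) / np.expm1(x)

nu = np.linspace(100.0, 400.0, 30)                    # GHz
mixed = 0.5 * (mbb(nu, 1.4, 20.0) + mbb(nu, 1.7, 20.0))

# Brute-force single-MBB fit at fixed T: the amplitude enters linearly,
# so it is solved analytically for each trial spectral index.
betas = np.linspace(1.3, 1.8, 2001)
chi2 = []
for b in betas:
    f = mbb(nu, b, 20.0)
    amp = np.dot(mixed, f) / np.dot(f, f)             # least-squares amplitude
    chi2.append(np.sum((mixed - amp * f) ** 2))
beta_eff = betas[int(np.argmin(chi2))]

f = mbb(nu, beta_eff, 20.0)
amp = np.dot(mixed, f) / np.dot(f, f)
max_resid = np.max(np.abs(mixed - amp * f) / mixed)

# The effective index lies between the two inputs, and a small but
# nonzero residual distortion remains:
print(f"effective beta = {beta_eff:.3f}, max relative residual = {max_resid:.1e}")
```

The residual left by the single best-fit MBB is the analog, in this toy frequency-space picture, of the distortions that the first- and second-order moments capture at the power-spectrum level.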
\n\nAll the parameters of the moment expansion with respect to $\\beta$ can be found in Figs.~\\ref{fig:moments1} and \\ref{fig:moments2}, and are now significantly detected, except for $\\mathcal{D}_\\ell^{\\omega^\\beta_2 \\times \\omega^\\beta_2}$. In Fig.~\\ref{fig:moments3}, we can observe that the parameters of the moment expansion with respect to the temperature (only present in the $\\beta$-$T$ fit) remain undetected. The SED distortions due to the spatial variations of $\\beta$ are well detected, while no SED distortion linked to the temperature is seen, as expected for the ${\\tt d1T}$ simulation type.\n\n\n\\subsubsection{{\\tt d1}}\n\\label{sec:d1}\n\nWe now discuss the {\\tt d1} simulations, with the highest complexity in the polarized dust SED. In this more physically relevant simulation type, the dust emission is given by a MBB with variable index $\\beta(\\vec{n})$ and temperature $T(\\vec{n})$ over the sky.\nWe ran the four different fitting schemes on the {\\tt d1} simulation type, as we did in Sect.~\\ref{sec:d0} and \\ref{sec:d1T}.\n\nThe values of the reduced $\\chi^2(\\ell)$ are displayed in Fig.~\\ref{fig:chi2dust}. For the MBB and $\\beta$-1, the reduced $\\chi^2$ are not compatible with unity, especially at low multipoles. This indicates that neither of them still provides a good fit for the spatially varying SED with $\\beta(\\vec{n})$ and $T(\\vec{n})$. With $\\beta$-2 and $\\beta$-$T$, the $\\chi^2(\\ell)$ values become compatible with unity, except for the $\\ell=26.5$ bin. We note that $\\beta$-$T$ provides a slightly better fit than $\\beta$-2 in this bin.\n\nLooking at the medians of the best-fit values of $\\beta_0(\\ell)$ for {\\tt d1} in Fig.~\\ref{fig:betaTdust}, we can see that the spectral index is changing with respect to $\\ell$, as discussed in Sect.~\\ref{sec:d1T}, in a manner similar to the {\\tt d1T} simulation type. 
The fitted temperature $T_0(\\ell)$ values for {\\tt d1} show an increasing trend, from $\\sim17$ to $\\sim20.5$\\,K, between $\\ell=16.5$ and $\\ell\\sim100$. At higher multipoles, $T_0(\\ell)$ is close to a constant temperature of 20.5\\,K. In {\\tt d1}, as for {\\tt d1T}, the angular scales at which we observe strong variations of $\\beta_0(\\ell)$ and $T_0(\\ell)$ are those for which the $\\chi^2$ is poor for some fitting schemes. Also, as for {\\tt d1T}, the largest angular scale $\\ell$-bin, at $\\ell=6.5,$ shows $\\beta$ and $T$ values close to the constant value at high $\\ell$, which are associated with $\\chi^2$ values closer to unity.\nThe best-fit values of the amplitude $\\mathcal{D}_\\ell^{A\\times A}$ are shown in Fig.~\\ref{fig:Adust}. These are similar to those of the other simulation types. \n\nThe moment-expansion parameters fitted on {\\tt d1} are shown in Figs.~\\ref{fig:moments1}, \\ref{fig:moments2}, and \\ref{fig:moments3}. For this simulation type, the moment parameters are all significantly detected with respect to both $\\beta$ and $T$. This was already the case with the \\textit{Planck}{} intensity simulations, produced in a similar way, as discussed in \\cite{Mangilli}. Their detections quantify the complexity of dust emission and SED distortions from the MBB present in the {\\tt d1} simulation type, due to the spatial variations of $\\beta(\\vec{n})$ and $T(\\vec{n})$. \n\n\\subsection{\\label{sec:nodust} CMB only}\n\\label{sec:c}\n\nIn order to calibrate the accuracy with which the $r$ parameter can be constrained with the \\textit{LiteBIRD}{} simulated data sets presented in Sect.~\\ref{sec:simulations}, we tested the simulation type with no dust component, $M_\\nu^{\\tt c}$, and with no tensor modes ($r_{\\rm sim}=0$, only CMB lensing and noise). 
We fit the expression in Eq.~\\ref{eq:model} with $\\mathcal{D}_\\ell^{\\rm dust}$ fixed to zero and $A_{\\rm lens}$ fixed to one ({i.e.}, $r$ is the only parameter we fit in this case).\nDoing so over the $N_{\\rm sim}$ simulations, we obtain $\\hat{r}= (0.7 \\pm 3.5) \\times 10^{-4}$.\nThis sets the minimal uncertainty on $\\hat{r}$ we can expect under our assumptions if the dust component is perfectly taken into account.\n\n\n\\subsection{Dust and CMB}\n\\label{sec:results_dust_CMB}\n\nWe now present our analysis of the simulations including dust, CMB (lensing), and noise ({\\tt d0c}, {\\tt d1Tc} and {\\tt d1c}) with no primordial tensor modes ($r_{\\rm sim}=0$). As described above, we applied the four dust fitting schemes to the three simulation types, simultaneously fitting $\\hat{r}$ and fixing $A_{\\rm lens}$ to~one (the schemes are then denoted $r$MBB, $r\\beta$-1, $r\\beta$-$T$ and $r\\beta$-2). \n\nThe best-fit values of $\\beta_0(\\ell)$, $T_0(\\ell)$ and the moment expansion parameters $\\mathcal{D}_\\ell^{\\mathcal{M}\\times\\mathcal{N}}$ derived with the simulation types {\\tt d0c}, {\\tt d1Tc,} and {\\tt d1c} are not discussed further when they are compatible with the ones obtained for the {\\tt d0}, {\\tt d1T,} and {\\tt d1} simulation types and presented in Sect.~\\ref{sec:results_dust_only}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d0_SLD_full.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1T_SLD_full.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full.pdf}\n \\caption{\\footnotesize \\emph{(Top panel)}: Posterior on $\\hat{r}$ in the {\\tt d0c} simulation type for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}=0$. 
\\emph{(Central panel)}: Same, but in the case of the {\\tt d1Tc} simulation type. \\emph{(Bottom panel)}: Same, but in the case of the {\\tt d1c} simulation type.}\n \\label{fig:rgauss1}\n\\end{figure}\n\n\n\\subsubsection{{\\label{sec:d0c}}{\\tt d0c}}\n\nFor {\\tt d0c}, as for {\\tt d0}, we recover the input constant spectral index and temperature $\\beta_{\\tt d0}$ and $T_{\\tt d0}$ at all angular scales for every fitting scheme. Furthermore, we do not detect any moment when fitting $r\\beta$-1, $r\\beta$-$T,$ and $r\\beta$-2. This simulation type therefore constitutes our \\emph{null test} when $\\hat{r}$ and the dust parameters are fitted at the same time. Thus, adding CMB lensing to the simulations and $r$ to the fits neither leads to the detection of the moment parameters nor biases the recovery of the spectral index and the temperature.\n\nThe posterior distributions of the estimated tensor-to-scalar ratio $\\hat{r}$ are displayed in Fig.~\\ref{fig:rgauss1} and their means and standard deviations are summarized in Table~\\ref{tab:rvalues}. We note that $\\hat{r}$ is compatible with the input value ($r_{\\rm sim}=0$) for all the fitting schemes. For $r$MBB and $r\\beta$-1, the dispersion $\\sigma_{\\hat{r}}$ is comparable with that of the CMB-only scenario discussed in Sect.~\\ref{sec:nodust}. For $r\\beta$-$T$ and $r\\beta$-2, the width of the distribution increases by a factor of $\\sim 2$ and $\\sim 4$, respectively.
In that case, the input tensor-to-scalar ratio $r_{\\rm sim}=0$ is not recovered and we obtain a bias on the central value of $\\hat{r}$ of $\\sim 16\\,\\sigma_{\\hat{r}}$. As discussed in Sect.~\\ref{sect:limitsmbb}, this is expected because we know that the MBB is not a good dust model for a SED with spatially varying spectral index, as we also verify in Sect.~\\ref{sec:d1T} looking at the $\\chi^2$ values.\n\nUsing the $r\\beta$-1 fitting scheme allows us to recover $\\hat{r}=(-8.0\\pm6.4)\\times10^{-4}$, where $r_{\\rm sim}$ is recovered within $\\sim\\,2\\sigma_{\\hat{r}}$, while $r\\beta$-2 and $r\\beta$-$T$ recover the input value within $1\\,\\sigma_{\\hat{r}}$ (with $\\hat{r}=(-3.6\\pm 13.0)\\times10^{-4}$ and $\\hat{r}=(0.7\\pm20.9)\\times10^{-4}$, respectively). As in Sect~\\ref{sec:d0c}, the deviation remains similar between $r$MBB and $r\\beta$-1 and increases by a factor of $\\sim 2$ and $\\sim 4$ from $r\\beta$-1 to $r\\beta$-$T$ and $r\\beta$-2, respectively.\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c | ccc}\n ($\\hat{r} \\pm \\sigma_{\\hat{r}})\\times10^{4}$ & {\\tt d0c} & {\\tt d1Tc} & {\\tt d1c}\\\\\n \\hline\n $r$MBB & \\cellcolor{mygreen} $0.3 \\pm 3.9$ & \\cellcolor{myred}$99.7 \\pm 6.2$ & \\cellcolor{myred} $125.1 \\pm 5.9$\\\\\n $r\\beta$-1 & \\cellcolor{mygreen} $0.5 \\pm 4.5$ & \\cellcolor{myyellow} $-8.0 \\pm 6.4$& \\cellcolor{myred} $32.9 \\pm 6.5$\\\\\n $r\\beta$-$T$ & \\cellcolor{mygreen} $0.3 \\pm 9.5$ & \\cellcolor{mygreen} $-3.6 \\pm 13.0$ & \\cellcolor{mygreen} $-3.3 \\pm 11.7$\\\\\n $r\\beta$-2 & \\cellcolor{mygreen} $0.7 \\pm 16.4$ & \\cellcolor{mygreen} $0.7 \\pm 20.9$ & \\cellcolor{myyellow} $-37.4 \\pm 19.4$\\\\ \n \\hline \n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r}$ in units of $10^{-4}$ on $f_{\\rm sky}=0.7$. 
The green values are compatible with $r_{\\rm sim}=0$ at $1\\,\\sigma_{\\hat{r}}$, the yellow values are compatible with $r_{\\rm sim}=0$ at $2\\,\\sigma_{\\hat{r}}$ and the red values are incompatible with $r_{\\rm sim}=0$ at more than $2\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rvalues}\n\\end{table}\n\n\\subsubsection{{\\tt d1c}}\n\\label{sec:d1c}\n\nIn the case of the {\\tt d1c} simulation type, as in {\\tt d0c} and {\\tt d1Tc}, we fit $\\hat{r}$ in addition to the dust-related parameters. In that case, the dust moment parameters are recovered as for {\\tt d1} (see Sect.~\\ref{sec:d1}), except for the $r\\beta$-2 fitting scheme.\n\nFigure~\\ref{fig:momentsd1d1c} compares the moment parameters obtained with $\\beta$-2 on the {\\tt d1c} simulation type, fitting only the dust-related parameters, with those obtained with $r\\beta$-2 on {\\tt d1c}, jointly fitting the dust parameters and $\\hat{r}$. We observe that $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ is not consistently recovered when fitting $\\hat{r}$ in addition to the dust parameters.\n\nA similar comparison can be found in Fig.~\\ref{fig:moments2d1d1c} for the moment parameters between $\\beta$-$T$ and $r\\beta$-$T$ on the {\\tt d1c} simulation type. Using this fitting scheme, we can see that all the moments are correctly recovered when adding $\\hat{r}$ to the fit.\n\n\nThe $\\hat{r}$ posterior distributions in the case of {\\tt d1c} are displayed in Fig.~\\ref{fig:rgauss1} and summarized in Table~\\ref{tab:rvalues}. As discussed in Sect.~\\ref{sect:limitsmbb} and observed in Sect.~\\ref{sec:d1}, the $r$MBB fit is highly biased, with $\\hat{r}=(125.1 \\pm 5.9)\\times10^{-4}$ (a bias of more than $21\\,\\sigma_{\\hat{r}}$). When fitting with $r\\beta$-1, this bias is significantly reduced ($\\hat{r}=(32.9 \\pm 6.5)\\times10^{-4}$, $\\sim 5\\,\\sigma_{\\hat{r}}$ away from $r_{\\rm sim}=0$), illustrating the ability of the first-moment parameters to correctly capture part of the SED complexity. 
However, performing the expansion in both $\\beta$ and $T$ with $r\\beta$-$T$ allows us to recover $r_{\\rm sim}$ without bias ($\\hat{r}=(-3.3\\pm 11.7)\\times10^{-4}$), highlighting the need for the description of the SED complexity in terms of dust temperature for this simulated data set where both $\\beta$ and $T$ vary spatially. On the other hand, for $r\\beta$-2, a negative tension ($1.9\\,\\sigma_{\\hat{r}}$) can be observed: $\\hat{r}=(-37.4\\pm 19.4)\\times10^{-4}$. This tension is discussed in Sect.~\\ref{sec:bias}. \n\nFor {\\tt d1c}, the $\\hat{r}$ distribution widths roughly meet the foreground cleaning requirements of \\textit{LiteBIRD}{} presented in Sect.~\\ref{sec:LiteBIRD} for $r$MBB and $r\\beta$-1 but are higher for $r\\beta$-$T$ and $r\\beta$-2. We also note that, with the same number of free parameters, all the standard deviations $\\sigma_{\\hat{r}}$ slightly increase compared to the {\\tt d0c} simulation type. This is expected due to the increasing dust complexity. \n\n\\section{\\label{sec:discussion} Discussion}\n\n\n\\subsection{Lessons learnt}\n\nIn Sect.~\\ref{sec:results}, we applied the fitting pipeline introduced in Sect.~\\ref{sec:fit} to \\textit{LiteBIRD}{} simulated data sets on $f_{\\rm sky}=0.7$ and for $r_{\\rm sim}=0$, including the various dust simulation types defined in Sect.~\\ref{sec:ingredients}. We fitted the estimated $B$-mode power spectra with the four different fitting schemes summarized in Table~\\ref{tab:parameter}. Our main results can be summarized as follows: \n\n\\begin{itemize}\n\\item The MBB fitting scheme provides a good fit for the dust component in the {\\tt d0} and {\\tt d0c} simulation types. \n However, when the spectral index changes with the angular scale, such as in the {\\tt d1T}, {\\tt d1Tc}, {\\tt d1,} and {\\tt d1c} simulations, this approach no longer provides a good fit because of the complexity of the dust SED. 
As a consequence, in the $r$MBB case, $r_{\\rm sim}$ cannot be recovered without a significant bias. \n\n\\item The $\\beta$-1 fitting scheme provides a good fit to the dust complexity for the {\\tt d0} and {\\tt d1T} simulations but not for {\\tt d1}, while the $r\\beta$-1 fitting scheme yields estimates of $\\hat{r}$ close to $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for {\\tt d0c}, and within $2\\,\\sigma_{\\hat{r}}$ for {\\tt d1Tc}, but presents a bias of $\\sim 5\\,\\sigma_{\\hat{r}}$ for {\\tt d1c}. \n\n\\item The $\\beta$-$T$ fitting scheme provides a good fit for every dust model, while using the $r\\beta$-$T$ fitting scheme allows us to recover $\\hat{r}$ values consistent with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for all the simulation types, but is associated with an increase of $\\sigma_{\\hat{r}}$ by a factor $\\sim 2$ compared to the $r\\beta$-1 case.\n\n\\item The $\\beta$-2 fitting scheme also provides a good fit for each dust model, and the $r\\beta$-2 fitting scheme leads to values of $\\hat{r}$ compatible with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ for all the simulation types but {\\tt d1c}. In this last case, there is a negative tension of $\\sim2\\,\\sigma_{\\hat{r}}$. \n For all the simulation types, there is an increase of $\\sigma_{\\hat{r}}$ by a factor of $\\sim 4$ compared to the $r\\beta$-1 case.\n\n\\end{itemize}\n\nThe present analysis shows that the temperature could be a critical parameter for the moment expansion in the context of \\textit{LiteBIRD}{}.\n\nIndeed, for simulations including a dust component with a spectral index and a temperature that both vary spatially, as in {\\tt d1}, the only fitting scheme allowing us to recover $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$ is $r\\beta$-$T$, the expansion to first order in both $\\beta$ and $T$. This shows that expanding in $\\beta$ only, without treating $T$, is not satisfactory when looking at such large fractions of the sky. 
Indeed, when applying the $\\beta$-2 fitting scheme, the $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ parameter remains undetected for the {\\tt d1T} simulation type (Sect~\\ref{sec:d1T}), while it is significantly detected using the {\\tt d1} simulation type (Sect~\\ref{sec:d1}). Nevertheless, {\\tt d1T} and {\\tt d1}share the same template of $\\beta(\\vec{n})$ (Sect.~\\ref{sec:ingredients}) and they only differ by the sky temperature (constant for {\\tt d1T} and varying for {\\tt d1}). This suggests that the observed $\\mathcal{D}_\\ell^{\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2}$ with the {\\tt d1} simulations originates from the temperature variations and not those in the spectral index. This observation shows that it is less convenient to use the $\\beta$-2 fitting scheme than the $\\beta$-$T$ one in order to correctly recover the moment-expansion parameters and $\\hat{r}$ when temperature varies spatially.\n\nMoreover, we saw that $ \\sigma_{\\hat{r}}$ is lower when using the fitting scheme $r\\beta$-$T$ instead of $r\\beta$-2 for every simulation type, even if both have the same number of free parameters. This second observation additionally encourages an approach where the SED is expanded with respect to both $\\beta$ and $T$. Nevertheless, the uncertainty on $\\hat{r}$ we obtain in this case ($\\sigma_{\\hat{r}}=1.17\\times10^{-3}$) is larger than the \\textit{LiteBIRD}{} requirements. \n\n\\subsection{Increasing the accuracy on the tensor-to-scalar ratio}\n\\label{sec:Opt}\n\nIn Sect.~\\ref{sec:d1} and Fig.~\\ref{fig:chi2dust}, we see that the MBB and $\\beta$-1 fitting schemes do not provide good fits for the {\\tt d1} dust simulations, especially at low multipoles ($\\ell \\lesssim 100$). 
Conjointly, in Fig.~\\ref{fig:moments3}, we can see that the $\\beta$-$T$ moment parameters are significantly detected for $\\ell \\lesssim 100$ and compatible with zero above that threshold, suggesting that their corrections to the SED are predominantly required at large angular scales.\n\nThis implies that we can improve the pipeline presented in Sect.~\\ref{sec:fit} to keep only the required parameters in order to recover $\\hat{r}$ compatible with $r_{\\rm sim}$ with a minimal $\\sigma_{\\hat{r}}$. It can be achieved by applying the $r\\beta$-1 fitting sheme over the whole multipole range, while restricting the $r\\beta$-$T$-specific ($\\mathcal{D}_\\ell^{\\omega^\\beta_1\\times\\omega^T_1}$ and $\\mathcal{D}_\\ell^{\\omega^T_1\\times\\omega^T_1}$) moment-expansion parameters fit to the low multipoles range. We note that in order to correct the bias, it is still necessary to keep the $r\\beta$-1 moment parameters even at high multipoles, because the MBB does not provide a good fit even for $\\ell \\in [100,200]$, as we can see in Fig.~\\ref{fig:chi2dust}. We define $\\ell_{\\rm cut}$ as the multipole bin under which we keep all the $r\\beta$-$T$ moment parameters and above which we use the $r\\beta$-1 scheme. \n\n\nThe best-fit values and standard deviations of $\\hat{r}$ for different values of $\\ell_{\\rm cut}$ are displayed in Table~\\ref{tab:rellcut}. We can see that a trade-off has to be found: the smaller the $\\ell_{\\rm cut}$ , the bigger the shift from $r_{\\rm sim}$, and the bigger the $\\ell_{\\rm cut}$, the higher the value of $\\sigma_{\\hat{r}}$. The trade-off point seems to be found for $\\ell_{\\rm cut} \\sim 80$, allowing us to recover $\\hat{r}$ without tension, with $\\sigma_{\\hat{r}} =8.8\\times10^{-4}$. 
The error on $r$ is thus reduced by more than $\\sim30\\,\\%$ with respect to the nonoptimized fit and meets the \\textit{LiteBIRD}{} requirements.\n \n\n\n\n\n\\subsection{\\label{sec:fsky} Tests with smaller sky fractions}\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c|c}\n $\\ell_{\\rm cut}$ & $(\\hat{r} \\pm \\sigma_{\\hat{r}})\\times 10^{4}$\\\\\\hline\n 50 & \\cellcolor{myred}$12.0 \\pm 7.3$ \\\\\n 60 & \\cellcolor{mygreen}$7.3 \\pm 7.9$ \\\\\n 70 & \\cellcolor{mygreen}$4.9 \\pm 8.1$ \\\\\n 80 & \\cellcolor{mygreen}$-0.9 \\pm 8.8$ \\\\\n 90 & \\cellcolor{mygreen}$-2.1 \\pm 9.9$ \\\\\n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r} \\pm \\sigma_{\\hat{r}}$ in units of $10^{-4}$ for different values of $\\ell_{\\rm cut}$ for the {\\tt d1c} simulations with $f_{\\rm sky}=0.7$, when applying the $r\\beta$-$T$ fitting scheme. The green values are compatible with $r_{\\rm sim}=0$ at $1\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rellcut}\n\\end{table}\n\n\\begin{table}[t!]\n\\centering\\scalebox{1}{\n \\centering\n \\begin{tabular}{c | ccc}\n $(\\hat{r} \\pm \\sigma_{\\hat{r}})\\times10^4$ & $r_{\\rm sim}=0.01$ & $f_{\\rm sky}=0.5$ & $f_{\\rm sky}=0.6$\\\\\n \\hline\n $r$MBB & \\cellcolor{myred} $204.8 \\pm 7.7$ & \\cellcolor{myred} $47.3 \\pm 5.6$ & \\cellcolor{myred} $59.2 \\pm 5.4$\\\\\n $r\\beta$-1 & \\cellcolor{myred} $129.0 \\pm 8.3$ & \\cellcolor{myyellow} $-8.4 \\pm 6.7$ & \\cellcolor{mygreen} $1.8 \\pm 6.2$\\\\\n $r\\beta$-$T$ & \\cellcolor{mygreen} $94.6 \\pm 15.1$ &\\cellcolor{mygreen} $0.02 \\pm 13.4$ & \\cellcolor{mygreen} $-1.1 \\pm 12.0$\\\\\n $r\\beta$-2 & \\cellcolor{myyellow} $62.5 \\pm 25.0$ & \\cellcolor{mygreen} $4.3 \\pm 24.2$ & \\cellcolor{mygreen} $-3.2 \\pm 22.4$\\\\\n \\hline\n \\end{tabular}}\n\\caption{\\footnotesize Best-fit values of $\\hat{r}$ in units of $10^{-4}$ for an alternative {\\tt d1c} simulation with $r_{\\rm sim}=0.01$ on $f_{\\rm sky}=0.7$, and with $r_{\\rm sim}=0$ but 
on $f_{\\rm sky}=0.5$ and $f_{\\rm sky}=0.6$. The green values are compatible with $r_{\\rm sim}$ at $1\\,\\sigma_{\\hat{r}}$, the yellow values are compatible with $r_{\\rm sim}$ at $2\\,\\sigma_{\\hat{r}}$, and the red values are incompatible with $r_{\\rm sim}$ at more than $2\\,\\sigma_{\\hat{r}}$.}\n\\label{tab:rvalues2}\n\\end{table}\n\nIn all the results presented in Sect.~\\ref{sec:results}, we considered a sky fraction of $f_{\\rm sky}=0.7$. This sky mask keeps a considerable fraction of the brightest Galactic dust emission. To quantify the impact of the sky fraction on our analysis, we ran the pipeline as in Sect.~\\ref{sec:d1c} with the different masks introduced in Sect.~\\ref{sect:mask} ($f_{\\rm sky}=0.5$ and $f_{\\rm sky}=0.6$). This was done with the {\\tt d1c} simulation type. \n\nThe posteriors on $\\hat{r}$ for the different fitting schemes are displayed in Fig.~\\ref{fig:rgauss2} and Table~\\ref{tab:rvalues2}. We can see that, while the $r$MBB fitting scheme always leads to biased estimates, the $r\\beta$-1 case allows us to recover $\\hat{r}$ at $1.25\\,\\sigma_{\\hat{r}}$ for $f_{\\rm sky}=0.5$ and within $1\\,\\sigma_{\\hat{r}}$ for $f_{\\rm sky}=0.6$. In both situations, the results using the $r\\beta$-$T$ and $r\\beta$-2 fitting schemes are unbiased, with estimates of $\\hat{r}$ compatible with $r_{\\rm sim}$ within $1\\,\\sigma_{\\hat{r}}$. The $\\sigma_{\\hat{r}}$ hierarchy between the $r$MBB, $r\\beta$-1, $r\\beta$-$T$, and $r\\beta$-2 fitting schemes is the same as for $f_{\\rm sky}=0.7$ (see Sect.~\\ref{sec:d1c}). Nevertheless, we observe that $\\sigma_{\\hat{r}}$ increases as the sky fraction decreases, as does the statistical error (cosmic variance of the lensing and noise). The bias, on the other hand, decreases with the sky fraction for all the fitting schemes, which is expected because less dust emission contributes to the angular power spectra. 
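The color coding used in Tables~\ref{tab:rellcut} and \ref{tab:rvalues2} reduces to a simple tension criterion on $|\hat{r}-r_{\rm sim}|/\sigma_{\hat{r}}$. A minimal sketch of that criterion, using values quoted in Table~\ref{tab:rvalues2} (in units of $10^{-4}$, with $r_{\rm sim}=0$):

```python
# Minimal sketch of the compatibility criterion behind the table color coding:
# "green" if |r_hat - r_sim| <= 1 sigma, "yellow" if <= 2 sigma, "red" otherwise.

def classify(r_hat, sigma, r_sim=0.0):
    """Return the table color for an estimate r_hat +/- sigma."""
    tension = abs(r_hat - r_sim) / sigma
    if tension <= 1.0:
        return "green"
    elif tension <= 2.0:
        return "yellow"
    return "red"

# Values from Table "tab:rvalues2" (units of 1e-4, r_sim = 0, f_sky = 0.5):
print(classify(47.3, 5.6))   # rMBB   -> red
print(classify(-8.4, 6.7))   # rβ-1   -> yellow
print(classify(0.02, 13.4))  # rβ-T   -> green
```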
The negative tension observed on the $\\hat{r}$ posterior in Sect.~\\ref{sec:d1c} for the $r\\beta$-2 case is not present when using smaller sky fractions. In Fig.~\\ref{fig:fskydust}, the $r\\beta$-2 moment parameters are displayed. We can see that they are not significantly detected for $f_{\\rm sky}=0.5$ and 0.6, unlike for $f_{\\rm sky}=0.7$. As we have seen that some of the moments in the $\\beta$-2 fitting scheme failed to model SED distortions coming from temperature, we can surmise that, in our simulations, the temperature variations play a less significant role in the dust SED on the $f_{\\rm sky}=0.5$ and 0.6 masks than they do on the $f_{\\rm sky}=0.7$ one. As a consequence, they have a smaller impact on $r$ when not properly taken into account.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full_fsky0.5.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1_SLD_full_fsky0.6.pdf}\n \\caption{\\footnotesize \\emph{(Top panel)}: Posterior on $\\hat{r}$ in the {\\tt d1c} simulation type on $f_{\\rm sky}=0.5$ for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}=0$. \\emph{(Bottom panel)}: Same, in the case of the {\\tt d1c} simulation type on $f_{\\rm sky}=0.6$.}\n \\label{fig:rgauss2}\n\\end{figure}\n\n\\subsection{Tests with nonzero input tensor modes}\n\\label{sec:rnot0}\n\nWe showed in Sect.~\\ref{sec:d1c} that the $r\\beta$-$T$ fitting scheme allows us to retrieve $\\hat{r}$ compatible with zero when $r_{\\rm sim}=0$. We now want to assess the potential leakage of $\\hat{r}$ into the moment expansion parameters if $r_{\\rm sim} \\neq 0$. In this case, primordial tensor signals would be incorrectly interpreted as dust complexity. 
We run the pipeline as described in Sect.~\\ref{sec:d1c} with $r_{\\rm sim}=0.01$, in the ${\\tt d1c}$ simulation type. This value of $r_{\\rm sim}=0.01$ is larger than the value targeted by \\textit{LiteBIRD},{} but given the order of magnitude of the error on $\\hat{r}$ observed in the previous sections, a potential leakage could go unnoticed using a smaller $r_{\\rm sim}$. \n\nLooking at the final posterior on $\\hat{r}$ (Fig.~\\ref{fig:rgauss3} and Table~\\ref{tab:rvalues2}), we can see that the results are comparable with the $r_{\\rm sim}=0$ case, but centered on the new input value $r_{\\rm sim}=0.01$. The $r$MBB fitting scheme gives a highly biased posterior of $\\hat{r}=(2.048\\pm0.077)\\times10^{-2}$; the bias is reduced but still significant when using the $r\\beta$-1 scheme ($\\hat{r}=(129.0\\pm 8.3)\\times10^{-4}$); in the $r\\beta$-$T$ case we get an estimate of $\\hat{r}=(94.6\\pm 15.1)\\times10^{-4}$, compatible with the input value of $r_{\\rm sim}=100\\times10^{-4}$; and finally, the $r\\beta$-2 fitting scheme leads to a negative $2\\,\\sigma_{\\hat{r}}$ tension ($\\hat{r}=(62.5\\pm25.0)\\times10^{-4}$). This demonstrates the robustness of our method and its potential application to component separation. We note that the negative bias at second order is still present in the $r_{\\rm sim}=0.01$ case, illustrating that setting a positive prior on $\\hat{r}$ would not have been a satisfying solution when $r_{\\rm sim}=0$.\n\n\\subsection{\\label{sec:bias} Exploring the correlations between the parameters}\n\nWe now examine the substantial increase in the dispersion on the $\\hat{r}$ posteriors between the $r\\beta$-1 fitting scheme on the one hand and the $r\\beta$-$T$ and $r\\beta$-2 ones on the other. 
Indeed, in Sect.~\\ref{sec:d1c}, we showed that $\\sigma_{\\hat{r}}$ is about two times greater when using the $r\\beta$-$T$ scheme than the $r\\beta$-1 one, and about four times larger in the case of $r\\beta$-2, while the $r\\beta$-$T$ and $r\\beta$-2 schemes share the same number of free parameters. Some other points to clarify are the shift on $\\hat{r}$ appearing for $r\\beta$-2 in the {\\tt d1c} scenario, discussed in Sect.~\\ref{sec:d1c}, and the inability to correctly recover $\\mathcal{D}_\\ell^{\\omega_2^\\beta \\times \\omega_2^\\beta}$ when $\\hat{r}$ is added to the fit, as illustrated in Fig.~\\ref{fig:momentsd1d1c}. \n\nThe 2D-SED shapes of the parameters $\\mathcal{D}_\\ell^{\\mathcal{N} \\times \\mathcal{M}}(\\nu_i \\times \\nu_j)$ in the $(\\nu_i, \\nu_j)$ space\\footnote{For example, $\\mathcal{S}(\\nu_i,\\nu_j) = \\frac{I_{\\nu_i}(\\beta_0,T_0)I_{\\nu_j}(\\beta_0,T_0)}{I_{\\nu_0}(\\beta_0,T_0)^2}\\cdot \\left[\\ln\\left(\\frac{\\nu_i}{\\nu_0}\\right)\\ln\\left(\\frac{\\nu_j}{\\nu_0}\\right) \\right]$ is associated with the $\\mathcal{D}^{\\omega_1 \\times \\omega_1}_\\ell$ parameter (see Eq.~\\ref{eq:moments}).} are displayed in Fig.~\\ref{fig:momentshapes}. We used the nine frequencies of \\textit{LiteBIRD}{} presented in Sect.~\\ref{sec:Instrsim} and fixed $\\beta_0=1.54$ and $T_0 = 20$\\,K. We also introduce the CMB 2D-SED shape with the blackbody function:\n\n\\begin{equation}\n B_{\\rm CMB}(\\nu_i \\times \\nu_j) = \\frac{B_{\\nu_i}(T_{\\rm CMB})B_{\\nu_j}(T_{\\rm CMB})}{B_{\\nu_0}(T_{\\rm CMB})^2},\n\\end{equation}\n\n\\noindent where $T_{\\rm CMB} = 2.726$\\,K. 
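The Pearson correlations between such 2D-SED shapes, as shown in Fig.~\ref{fig:corr3D}, can be reproduced schematically. The sketch below assumes the nine \textit{LiteBIRD} band centers above 100\,GHz (100 to 402\,GHz, taken here as representative values) and a simplified, illustrative subset of moment shapes; it illustrates the procedure rather than reproducing the exact computation.

```python
import numpy as np

# Sketch: Pearson correlations between 2D-SED shapes in (nu_i, nu_j) space.
# beta_0 = 1.54, T_0 = 20 K, nu_0 = 353 GHz as in the text; the band centers
# and the subset of moment shapes below are assumed, illustrative choices.
freqs = np.array([100., 119., 140., 166., 195., 235., 280., 337., 402.])  # GHz
nu0, beta0, T0, T_cmb = 353.0, 1.54, 20.0, 2.726
h_over_k = 0.04799  # h/k_B in K/GHz

def planck(nu, T):
    """Planck law B_nu(T), up to frequency-independent constants."""
    return nu**3 / np.expm1(h_over_k * nu / T)

def mbb(nu):
    """Modified blackbody I_nu(beta_0, T_0), normalized at nu_0."""
    return (nu / nu0)**beta0 * planck(nu, T0) / planck(nu0, T0)

def dlnB_dT(nu, T):
    """Logarithmic temperature derivative of the Planck law."""
    x = h_over_k * nu / T
    return (x / T) * np.exp(x) / np.expm1(x)

ni, nj = np.meshgrid(freqs, freqs)        # all (nu_i, nu_j) pairs
base = mbb(ni) * mbb(nj)                  # MBB 2D-SED shape
li, lj = np.log(ni / nu0), np.log(nj / nu0)
ti = dlnB_dT(ni, T0) - dlnB_dT(nu0, T0)   # pivot-subtracted T derivative
tj = dlnB_dT(nj, T0) - dlnB_dT(nu0, T0)

shapes = {
    "CMB":         planck(ni, T_cmb) * planck(nj, T_cmb) / planck(nu0, T_cmb)**2,
    "A x w1b":     base * (li + lj),
    "w1b x w1b":   base * li * lj,        # footnote shape S(nu_i, nu_j)
    "A x w1T":     base * (ti + tj),
    "w2b x w2b":   base * li**2 * lj**2,
}

# Pearson correlation coefficients between the flattened 2D-SED shapes
mat = np.corrcoef([s.ravel() for s in shapes.values()])
print(np.round(mat, 2))
```

The correlation of each dust-moment shape with the CMB shape is the quantity relevant to the leakage into $\hat{r}$ discussed in the text.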
\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0.01_d1_SLD_full.pdf}\n \\caption{\\footnotesize Posterior on $\\hat{r}$ in the {\\tt d1c} simulation type with $r_{\\rm sim}=0.01$ and $f_{\\rm sky}=0.7$ for the different fitting schemes: $r$MBB (blue, dotted line), $r\\beta$-1 (red, dashed line), $r\\beta$-$T$ (green, solid line), and $r\\beta$-2 (yellow, dash-dotted line). The vertical black dashed line marks the value of $r_{\\rm sim}$.}\n \\label{fig:rgauss3}\n\\end{figure}\n\nThe 2D correlation coefficients between these 2D-SED shapes are displayed in Fig.~\\ref{fig:corr3D}. We present the correlations between the shapes of the parameters in the case of the $r\\beta$-$T$ and $r\\beta$-2 fitting schemes. We can see that all the moment parameters in $\\omega^{\\beta}_2$ are strongly correlated with the CMB SED signal, while the ones in $\\omega^{T}_1$ are not. \n\nWe showed that, when fitting $\\beta$-2 on {\\tt d1c}, the SED distortions due to spatial variations of $T$ are incorrectly detected by the second-order moment parameters with respect to the spectral index $\\beta$. Due to the correlations highlighted above, those spurious moment parameters could then leak into $\\hat{r}$ when adding it to the fit in $r\\beta$-2. This explains both the negative shift on the $\\hat{r}$ posterior using $\\beta$-2 in the {\\tt d1c} simulation type with $f_{\\rm sky}=0.7$ presented in Sect.~\\ref{sec:d1c} and \\ref{sec:rnot0}, and the inability to correctly recover the $\\omega^{\\beta}_2 \\times \\omega^{\\beta}_2$ dust moment parameter presented in Fig.~\\ref{fig:momentsd1d1c}. In addition, it gives a natural reason for the surge of $\\sigma_{\\hat{r}}$ when the second-order moments in $\\beta$ are added to the fit. \n\nOn the other hand, the moment parameters in $\\omega^{T}_1$ are strongly correlated with the moments in $\\omega^{\\beta}_1$. 
This behavior is expected due to the strong correlation between $\\beta$ and $T$ \\citep[see e.g.,][]{betatcorr}. However, those moment parameters are less correlated with the CMB signal than the second-order parameters of $\\beta$-2. This indicates that the factor of $\\sim 2$ on $\\sigma_{\\hat{r}}$ between $\\beta$-$T$ and $\\beta$-2 is due to this correlation of the 2D-SED shapes.\nAs the parameters in $\\omega^{T}_1$ are highly correlated with one another, we expect them to be highly redundant in the fit. However, repeating the process described in Sect.~\\ref{sec:d1c} using only $\\mathcal{D}^{A \\times \\omega^{T}_1}_\\ell$ for $\\beta$-$T$ ---which is equivalent to applying the $\\beta$-1 fitting scheme with an iterative correction to the temperature $T_0(\\ell)$--- gives an $\\hat{r}$ posterior similar to the one obtained for $\\beta$-1 alone. Taking the other $\\omega^{T}_1$ terms into account appears to be necessary in order to recover an unbiased distribution of $\\hat{r}$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/corr3D.pdf}\n \\includegraphics[width=\\columnwidth]{Figures\/corr3DLB_betaT.pdf}\n \\caption{\\footnotesize Correlation matrices of the 2D-SED shapes of the CMB, $B_{\\rm CMB}(\\nu_i\\times\\nu_j)$, and of the dust moments, $\\mathcal{D}_\\ell^{\\mathcal{N} \\times \\mathcal{M}}(\\nu_i \\times \\nu_j)$, in the $(\\nu_i, \\nu_j)$ space. Each element represents the Pearson correlation coefficient between any two of these 2D-SED shapes. The correlation matrices are displayed in the case of the $\\beta$-2 fitting scheme (top panel) and the $\\beta$-$T$ one (bottom panel).}\n \\label{fig:corr3D}\n\\end{figure}\n\n\\subsection{Adding synchrotron to the simulations}\n\\label{sec:synchrotron}\n\nThermal dust is not the only source of polarized foreground that must be considered for CMB studies. 
Although subdominant at high frequencies ($\\geq 100\\,$GHz), the synchrotron emission due to accelerated electrons in the interstellar medium is still expected to represent a significant fraction of the total polarized signal. \n\nIn order to take one more step towards realistic forecasts for the \\textit{LiteBIRD}{} instrument, we add a synchrotron contribution to the {\\tt d1c} simulations presented in Sect.~\\ref{sec:simulations} using the {\\tt s1} template included in the {\\sc PySM}. In this scenario, the synchrotron SED for each line of sight is given by a power law of the form (in antenna temperature units)\n\n\\begin{equation}\n S_\\nu^{\\tt s1} = A_{\\tt s1}(\\vec{n}) \\left(\\frac{\\nu}{\\nu^{\\tt s1}_0}\\right)^{\\beta_{\\tt s1}(\\vec{n})},\n\\label{eq:s1powerlaw}\n\\end{equation}\n\n\\noindent where the amplitude $A_{\\tt s1}(\\vec{n})$ and the spectral index $\\beta_{\\tt s1}(\\vec{n})$ maps are derived from the combination of the \\textit{WMAP}{} mission 23 GHz map \\cite{WMAPfg} and the Haslam 408 MHz map \\cite{Haslam1}. $\\nu^{\\tt s1}_0$ is defined as 23 GHz. The simulations containing synchrotron are referred to as {\\tt d1s1c} below.\n\nIf not treated in the fit, the presence of synchrotron is expected to induce a bias on the $\\hat{r}$ posterior distribution. As for the dust MBB discussed in Sect.~\\ref{sect:limitsmbb}, the synchrotron SED is expected to exhibit distortions. However, as the synchrotron polarized emission is significantly lower than that of dust in the frequency range considered in the present work, we expect the distortions to be small compared to the ones induced by dust, and we leave their modeling to a future study. \n\nIn order to minimize the number of free parameters used for fitting the synchrotron emission, we model its power spectrum as a power law of the multipole $\\ell$ \\citep{krachmalnicoff_s-pass_2018}. 
Therefore, in combination with the synchrotron SED in Eq.~\\ref{eq:s1powerlaw}, the synchrotron component of the cross-angular power spectra reads \n\n\\begin{equation}\n \\mathcal{D}^{\\rm sync}_\\ell (\\nu_i \\times \\nu_j) = A_{\\rm s} \\left(\\frac{\\nu_i \\nu_j}{\\nu_0}\\right)^{\\beta_{\\rm s}} \\ell^{\\alpha_{\\rm s}},\n\\label{eq:syncromoment}\n\\end{equation}\n\n\\noindent where the amplitude coefficient $A_{\\rm s}$ is treated as a free parameter, while we fix $\\beta_{\\rm s}=-3$ (the median value of the ${\\tt s1}$ $\\beta_{\\rm s}$ map on our $f_{\\rm sky}=0.7$ mask) and $\\alpha_{\\rm s}=-1$ \\citep{krachmalnicoff_s-pass_2018}. \n\nWhen fitting the {\\tt d1s1c} simulations, we either use the $r\\beta$-$T$ fitting scheme, neglecting the synchrotron component, or we add the synchrotron component in Eq.~\\ref{eq:syncromoment} to the model in Eq.~\\ref{eq:model}. We refer to this latter case as the {\\tt s}$r\\beta$-$T$ fitting scheme.\nIn Fig.~\\ref{fig:synchrotron}, the $\\hat{r}$ posteriors derived from the {\\tt d1s1c} simulations are displayed with $r_{\\rm sim}=0$ and $f_{\\rm sky}=0.7$. \n\nUsing the $r\\beta$-$T$ fitting scheme, we find $\\hat{r} = (143.1 \\pm 13.5)\\times 10^{-4}$. As expected, even at high frequencies, the synchrotron component cannot be neglected if we are to recover an unbiased value of $\\hat{r}$.\nOn the other hand, using the {\\tt s}$r\\beta$-$T$ fitting scheme, we recover $\\hat{r} = (-5.4 \\pm 13.2)\\times 10^{-4}$. This result is comparable with the one obtained for the {\\tt d1c} simulations in Sect.~\\ref{sec:d1c}, with a minor increase in $\\sigma_{\\hat{r}}$. We can therefore conclude that a model as simple as that of Eq.~\\ref{eq:syncromoment} is sufficient to take into account the {\\tt s1} component at $\\nu > 100$\\,GHz, and that the corresponding SED distortions can be neglected, while still recovering an unbiased value of $\\hat{r}$. 
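As a concrete illustration, the model of Eq.~\ref{eq:syncromoment} can be evaluated directly. In the sketch below the frequency ratio is normalized as $\nu_i\nu_j/\nu_0^2$ so that it is dimensionless, with $\nu_0=353$\,GHz; this pivot convention is an assumption made here for the illustration.

```python
# Minimal numerical sketch of the synchrotron power-spectrum model.
# Assumption: the frequency ratio is normalized by nu0**2 (dimensionless),
# with nu0 = 353 GHz as for the dust model.
nu0 = 353.0       # pivot frequency in GHz (assumed)
beta_s = -3.0     # fixed spectral index (median of the s1 map on f_sky = 0.7)
alpha_s = -1.0    # fixed multipole power-law index

def d_sync(ell, nu_i, nu_j, A_s=1.0):
    """Synchrotron cross-spectrum D_ell(nu_i x nu_j); A_s is the free amplitude."""
    return A_s * (nu_i * nu_j / nu0**2)**beta_s * ell**alpha_s

# Steep decrease with frequency: the 402x402 GHz spectrum is suppressed by
# a factor of about (402/100)**6 relative to the 100x100 GHz one.
print(d_sync(80, 100.0, 100.0) / d_sync(80, 402.0, 402.0))
```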
In principle, as we know that the dust-synchrotron spatial correlation is significant at large scales \\citep{PlanckDust2}, Eq.~\\ref{eq:model} should include a dust-synchrotron term \\citep[see e.g.,][]{SOgalactic}. In our study, where we consider cross-spectra from 100 to 402\\,GHz, this dust-synchrotron term is subdominant, but it could be significant when considering cross-spectra between LiteBIRD's extreme frequency bands (e.g., the 40$\\times$402 cross-spectrum). The moment expansion might be more complicated as well in this case, as we could expect some correlation between the dust and synchrotron moment terms.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{Figures\/gauss_r_0_d1s1_SLD_full.pdf}\n \\caption{\\footnotesize Posterior on $\\hat{r}$ in the {\\tt d1s1c} simulation type with $r_{\\rm sim}=0$ and $f_{\\rm sky}=0.7$ for the different fitting schemes: $r\\beta$-$T$ (green, solid line) and {\\tt s}$r\\beta$-$T$ (orange, dash-dotted line). The vertical black dashed line marks the value $r_{\\rm sim}=0$.}\n \\label{fig:synchrotron}\n\\end{figure}\n\nThis result shows that the full polarized foreground content can be treated at high frequencies when using a power-law SED for the synchrotron coupled with the moment expansion of the MBB up to first order in both $\\beta$ and $T$ for the dust SED. A full study remains to be done in that direction using all the frequency bands of the \\textit{LiteBIRD}{} instrument. Eventually, Eq.~\\ref{eq:syncromoment} will also have to be expanded in moments with respect to its parameters. Doing so, one can expect to recover an unbiased value of $\\hat{r}$ associated with a decrease in $\\sigma_{\\hat{r}}$ down to a value compatible with the full success criterion of the mission.\n\n\\subsection{Limitations of this work and caveats}\n\n{As discussed in Sect.~\\ref{sec:formalismspectra}, we neglected polarization effects throughout this work by treating the $BB$ signal as an intensity signal. 
This is not problematic in the present work, because no variations along the lines of sight were present in the simulations. However, this point has to be addressed using complex simulations or real sky data.}\n\nThe choice of reference frequency $\\nu_0$ used for the normalization of the MBB in Eq.~\\ref{eq:MBB}, which is not discussed in this study, can potentially have a significant impact on the moment expansion and, in turn, on the measurement of $\\hat{r}$. Indeed, $\\nu_0$ is the pivot frequency of the moment expansion (moments are equal to zero at $\\nu_0$) and will determine the shape of the SED distortion around it. A poor choice for this reference frequency can have disastrous consequences for the moment fit: for example, if it is chosen far away from the observed bands, all the moments will become degenerate. In our case, the reference frequency (353\\,GHz) is within the observed frequency range (100 to 402\\,GHz), but we have not tried to optimize its position. In addition, the $\\nu_0$ pivot of our moment expansion coincides with the one used to extrapolate the dust template map in the {\\sc PySM}, and we have not quantified how much this impacts our results. \n\nFinally, as pointed out several times in this work, the quantitative results depend strongly on the sky model of our simulations. Moreover, we lack dedicated sky models where we can control the complexity of the dust SED, either by directly including moments or by averaging the emission from the 3D structure of the Galaxy. 
However, both methods are beyond the scope of the present work.\n\n\n\\section{Conclusion}\n\nBeing able to precisely characterize the complexity of the Galactic thermal dust polarized SED has become critical for the measurement of the faint primordial $B$-mode signal of the CMB, especially at the sensitivity targeted by future CMB experiments such as the \\textit{LiteBIRD}{} satellite mission.\n\nIn this work, we applied the moment expansion formalism to the dust emission SED as a component-separation tool to recover the tensor-to-scalar ratio parameter $r$ in \\textit{LiteBIRD}-simulated data. This formalism, proposed by \\citet{Chluba} and implemented in harmonic space by \\citet{Mangilli}, allows us to deal with the spectral complexity of the Galactic dust signal by modeling its deviations from the canonical MBB model at the cross-angular power spectrum level. In the case of the data-driven, realistic dust emission model we explore here ({\\sc PySM} {\\tt d1}), suitably taking into account the dust SED distortions prevents the spurious detection of the primordial $B$-mode signal. \n\nWe show that the dust spectral index $\\beta$ and dust temperature $T$ spatial variations significantly distort the dust cross-power spectrum SED. The MBB is not a good model to describe the data in that case and the estimation of $r$ is dramatically affected. In the case where no primordial signal is included in the simulated data sets, not taking into account the dust SED complexity leads to a highly significant spurious detection of $r$ with \\textit{LiteBIRD}{} (from $\\hat{r}\\simeq5\\times10^{-3}$ to $1.25\\times10^{-2}$, with an 8.4 to 21.2\\,$\\sigma$ significance, from 50 to 70\\,\\% of the sky, respectively). \n\nTo overcome this obstacle, we applied the moment expansion formalism in order to model these SED distortions. 
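The map-level expansion at the core of this approach, a Taylor expansion of the MBB around pivot values of $\beta$ and $T$, can be illustrated numerically for a single line of sight. All values below (pivot, perturbations, frequency range) are illustrative only:

```python
import numpy as np

# Illustration of the first-order expansion of the MBB in both the spectral
# index beta and the temperature T around a pivot (beta0, T0).
h_over_k = 0.04799                    # h/k_B in K/GHz
nu = np.linspace(100.0, 402.0, 50)    # frequency range in GHz
nu0, beta0, T0 = 353.0, 1.54, 20.0
dbeta, dT = 0.05, 1.0                 # small deviations from the pivot (assumed)

def planck(nu, T):
    return nu**3 / np.expm1(h_over_k * nu / T)

def mbb(nu, beta, T):
    """MBB SED normalized at the pivot frequency nu0."""
    return (nu / nu0)**beta * planck(nu, T) / planck(nu0, T)

def dlnB_dT(nu, T):
    x = h_over_k * nu / T
    return (x / T) * np.exp(x) / np.expm1(x)

exact = mbb(nu, beta0 + dbeta, T0 + dT)
zeroth = mbb(nu, beta0, T0)
first = zeroth * (1.0
                  + dbeta * np.log(nu / nu0)                     # beta moment
                  + dT * (dlnB_dT(nu, T0) - dlnB_dT(nu0, T0)))   # T moment

err0 = np.max(np.abs(zeroth / exact - 1.0))
err1 = np.max(np.abs(first / exact - 1.0))
print(err0, err1)  # the first-order terms strongly reduce the residual
```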
We demonstrate that, at \\textit{LiteBIRD}{} sensitivity, the previously studied moment expansion with respect to the dust spectral index $\\beta$ \\citep{Mangilli,Azzoni} does not give satisfactory results. Indeed, expanding in $\\beta$ to first order (following the angular power spectrum definition of the order) leads to a significant bias on 70\\,\\% of the sky ($\\hat{r}=(3.29\\pm0.65)\\times10^{-3}$ when $r_{\\rm sim}=0$ and $\\hat{r}=(1.29\\pm0.08)\\times10^{-2}$ when $r_{\\rm sim}=10^{-2}$). At second order in $\\beta$, we observe a $\\sim$2\\,$\\sigma$ negative tension ($\\hat{r}=(-3.7\\pm1.9)\\times10^{-3}$ when $r_{\\rm sim}=0$ and $\\hat{r}=(6.25\\pm2.50)\\times10^{-3}$ when $r_{\\rm sim}=10^{-2}$). \n\nWe introduce for the first time in this work the expansion of the dust angular cross-power spectra with respect to both $\\beta$ and $T$. We show that by using this expansion up to first order, we correctly model the dust SED distortions due to spatial variations of both $\\beta$ and $T$ at the map level. This allows us to recover the $r$ parameter without bias, with $\\hat{r}=(-3.3\\pm11.7)\\times10^{-4}$ if $r_{\\rm sim}=0$ and $\\hat{r}=(0.95\\pm0.15)\\times10^{-2}$ if $r_{\\rm sim}=10^{-2}$. Thus, despite the known degeneracy between the dust spectral index and its temperature in the Rayleigh-Jeans domain, it is important to correctly model the latter in order to accurately retrieve the tensor-to-scalar ratio $r$ at the unprecedented precision reached by experiments such as \\textit{LiteBIRD}{}. \n\nAdding parameters to tackle the dust SED complexity implies an increase in the error budget. Given the \\textit{LiteBIRD}{} bands and sensitivities we consider in this work (frequency bands above 100\\,GHz), the ideal sensitivity on $r$ without delensing is $\\sigma_{\\hat{r}}=3.4\\times10^{-4}$. 
In the ideal case, where the dust $\\beta$ and $T$ are constant over the sky ({\\sc PySM} {\\tt d0}), separating the CMB from dust leads to $\\sigma_{\\hat{r}}=3.9\\times10^{-4}$ on 70\\,\\% of the sky. Adding the expansion to first order in $\\beta$ does not significantly increase the error ($\\sigma_{\\hat{r}}=4.5\\times10^{-4}$), but expanding to first order in both $\\beta$ and $T$ multiplies it by a factor of $\\sim2$ ($\\sigma_{\\hat{r}}=9.5\\times10^{-4}$) and to second order in $\\beta$ by a factor of $\\sim4$ ($\\sigma_{\\hat{r}}=16.4\\times10^{-4}$). We show that the surge of $\\sigma_{\\hat{r}}$ between the latter two cases, sharing the same number of free parameters, is due to strong correlations between the SED of the second-order moments in $\\beta$ and the CMB. This is an important point, as it could lead to some intrinsic limitation for component-separation algorithms based exclusively on the modeling of the SED. Furthermore, when dealing with real data, if the dust SED is complex enough to have significant second-order distortions with respect to $\\beta$, CMB experiments might reach a dilemma: either include the second order in the modeling at the cost of losing sensitivity on $r$, or neglect it at the cost of a potential spurious detection. Coupling the SED-based separation with methods exploiting the diversity of spatial distribution between components \\citep[e.g.,][]{Wavelets} seems a natural way to overcome this issue.\n\nNevertheless, moment expansion at the cross-angular power spectrum level provides a powerful and agnostic tool, allowing us to analytically recover the actual dust complexity without making any further assumptions. We additionally show that this method is robust, in the sense that it can effectively distinguish the primordial tensor signal from dust when $r_{\\rm sim} \\neq 0$, as in the case of \\textit{LiteBIRD}{} simulations. 
The dust moments in $\\beta$ and $T$ at first order are needed in order to retrieve a reliable measure of $r$; they are significantly detected for $\\ell\\lesssim100$. We can therefore define a cut in $\\ell$ above which we do not fit for the whole complexity of the dust (we fit only the expansion up to first order in $\\beta$ and not in $\\beta$ and $T$). Doing so, we can reduce the error on $\\hat{r}$ while keeping the bias negligible ($\\hat{r}=(-0.9\\pm8.8)\\times10^{-4}$). We could imagine other ways to reduce the number of free parameters in our model \\cite[e.g., assuming a power-law of $\\ell$ behavior for the moments, as in][]{Azzoni} and hence reduce the error on $r$. However, this optimization really depends on the simulated sky complexity and has not been comprehensively explored in the present work.\n\nThe {\\sc PySM} {\\tt d1} sky simulations, being data-driven, are widely used by the CMB community as they contain some of the real sky complexity. Nevertheless, at high Galactic latitudes, the dust spectral index and temperature templates from \\textit{Planck}{} are dominated by systematic errors (uncertainty on the assumed zero-level of the \\textit{Planck}{} intensity maps, residual cosmic infrared background (CIB) anisotropies, instrumental noise, etc.). Therefore, some of the complexity we observe far from the Galactic plane in this sky model is not real. On the other hand, the modeled SED of the dust is {exactly} an MBB in each pixel, and line-of-sight averages or more complex dust models are ignored. As a consequence, our method and CMB $B$-mode component-separation algorithms in general need to be confronted with more complex models in order to really assess their performance in a quantitative manner. 
\n\nFinally, although we demonstrate that the synchrotron component can be tackled at frequencies above 100\\,GHz with a minimal model under our assumptions, a study over the full \\textit{LiteBIRD}{} frequency bands, including synchrotron and the potential moment expansion of its SED, will be considered as a natural next step for a further application.\n\n\n\\section*{Acknowledgments}\n\n\nThis work is supported in Japan by ISAS\/JAXA for Pre-Phase A2 studies, by the acceleration program of JAXA research and development directorate, by the World Premier International Research Center Initiative (WPI) of MEXT, by the JSPS Core-to-Core Program of A. Advanced Research Networks, and by JSPS KAKENHI Grant Numbers JP15H05891, JP17H01115, and JP17H01125. The Italian \\textit{LiteBIRD}{} phase A contribution is supported by the Italian Space Agency (ASI Grants No. 2020-9-HH.0 and 2016-24-H.1-2018), the National Institute for Nuclear Physics (INFN) and the National Institute for Astrophysics (INAF). The French \\textit{LiteBIRD}{} phase A contribution is supported by the Centre National d'Etudes Spatiale (CNES), by the Centre National de la Recherche Scientifique (CNRS), and by the Commissariat \u00e0 l'Energie Atomique (CEA). The Canadian contribution is supported by the Canadian Space Agency. The US contribution is supported by NASA grant no. 80NSSC18K0132. \nNorwegian participation in \\textit{LiteBIRD}{} is supported by the Research Council of Norway (Grant No. 263011). The Spanish \\textit{LiteBIRD}{} phase A contribution is supported by the Spanish Agencia Estatal de Investigaci\u00f3n (AEI), project refs. PID2019-110610RB-C21 and AYA2017-84185-P. Funds that support the Swedish contributions come from the Swedish National Space Agency (SNSA\/Rymdstyrelsen) and the Swedish Research Council (Reg. no. 2019-03959). 
The German participation in \\textit{LiteBIRD}{} is supported in part by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (Grant No. EXC-2094 - 390783311). This research used resources of the Central Computing System owned and operated by the Computing Research Center at KEK, as well as resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy.\n\nMR acknowledges funding support from the ERC Consolidator Grant CMBSPEC (No. 725456) under the European Union's Horizon 2020 research and innovation program.\n\nThe authors would like to thank David Alonso, Josquin Errard, Nicoletta Krachmalnicoff and Giuseppe Puglisi for useful discussions, as well as Jens Chluba for insightful comments on an earlier version of this work. \n\n\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix}\\noindent\nHere we state expressions for the power spectrum defined in \\Eqref{eq:Hankelkx} (for simplicity with vanishing short distance cutoff $\\lambda=0$).\nUsing the exact expression for $g^{(1)}(r)$ of \\eqref{eq:integral} in the definition of \\ensuremath{\\mathcal{T}(k)}\\xspace, and interchanging the integrals, we obtain an integral representation \nfor \\ensuremath{\\mathcal{T}(k)}\\xspace given by\n\\begin{equation}\\label{eq:Tkintegral}\n\\ensuremath{\\mathcal{T}(k)}\\xspace~=~\\frac{g\\,\\Lambda}{\\pi\\,k}\\int\\displaylimits_0^\\infty \\frac{dp\\, p}{\\mathrm{e}^{(E-\\mu)\/T}+1} \\left\\{j_0\\left[\\Lambda\\left(p-k\\right)\\right]-j_0\\left[\\Lambda\\left(p+k\\right)\\right]\\right\\}\\;.\n\\end{equation}\nTo leading order in the Sommerfeld expansion this evaluates to (this is consistent with first expanding $g^{(1)}(r)$ to $\\mathcal{O}(T^2\/\\ensuremath{k_{\\mathrm{F}}}\\xspace^2)$ as in 
\\eqref{eq:g_Sommerfeld} \nand then performing \\eqref{eq:Hankelkx})\n\\begin{equation}\\label{eq:TkExpl}\n \\begin{split}\n\\ensuremath{\\mathcal{T}(k)}\\xspace~=~\n& \\frac{g}{\\pi}\\left\\{\\mathrm{Si}\\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace-k\\right)\\Lambda\\right]+\\mathrm{Si}\\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace+k\\right)\\Lambda\\right] \n- \\frac{2\\,\\sin\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace\\Lambda\\right)\\sin\\left(k\\Lambda\\right)}{k\\Lambda} \\right\\}+ \\\\\n& \\frac{g\\,\\pi\\,T^2}{6\\,\\ensuremath{k_{\\mathrm{F}}}\\xspace k}\\left\\{ \\Lambda\\mu^2 \\left[\n \\frac{\\cos\\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace-k\\right)\\Lambda\\right]}{\\ensuremath{k_{\\mathrm{F}}}\\xspace-k}-\\frac{\\cos\\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace+k\\right)\\Lambda\\right]}{\\ensuremath{k_{\\mathrm{F}}}\\xspace+k}\\right] \\right.+ \\\\\n& \\hspace{1.2cm}\\left. \\frac{\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace^2-\\ensuremath{k_{\\mathrm{F}}}\\xspace k-\\mu^2\\right)\\sin \\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace-k\\right)\\Lambda\\right]}{\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace-k\\right)^2} - \\frac{\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace^2+\\ensuremath{k_{\\mathrm{F}}}\\xspace k-\\mu^2\\right)\\sin \\left[\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace+k\\right)\\Lambda\\right]}{\\left(\\ensuremath{k_{\\mathrm{F}}}\\xspace+k\\right)^2}\\right\\}\\;.\n\\end{split}\n\\end{equation}\nAnalytic expressions for the critical temperatures are given by \n\\begin{equation}\\label{eq:Tcrit0}\n T^2_{c,0}(\\nu)=\\frac{6\\,\\ensuremath{k_{\\mathrm{F}}}\\xspace^4}{\\pi^2}\\frac{\\left[\\sin(\\B{\\nu})-\\Si{\\B{\\nu}}\\right]}{\\left[ \\left(2\\mu^2-\\ensuremath{k_{\\mathrm{F}}}\\xspace^2\\right)\\B{\\nu}\\cos(\\B{\\nu}) + \\left(\\mu^2\\B{\\nu}^2+\\ensuremath{k_{\\mathrm{F}}}\\xspace^2-2\\mu^2\\right)\\sin(\\B{\\nu})\\right]}\\;,\n\\end{equation}\n\\begin{equation}\\label{eq:TcritFermi}\n\\begin{split}\n 
T^2_{c,\\mathrm{F}}(\\nu)=& \\frac{24\\, \\ensuremath{k_{\\mathrm{F}}}\\xspace^4}{\\pi^2} \\frac{\\left[1-\\cos(2\\B{\\nu})-\\B{\\nu}\\Si{2\\B{\\nu}}\\right]}{\n \\B{\\nu}^2\\left[4\\ensuremath{k_{\\mathrm{F}}}\\xspace^2-2\\mu^2\\cos(\\B{\\nu})\\right]+\\B{\\nu}\\left[ \\left(\\mu^2-2\\ensuremath{k_{\\mathrm{F}}}\\xspace^2\\right)\\sin(2\\B{\\nu})\\right]}\\;,\n\\end{split}\n\\end{equation}\n\\begin{equation}\\label{eq:TcritDFermi}\n\\begin{split}\n & T^2_{c,\\mathrm{F}'}(\\nu)= \\frac{36\\, \\ensuremath{k_{\\mathrm{F}}}\\xspace^4}{\\pi^2} \\left[2\\B{\\nu}^2 - 4 \\sin(\\B{\\nu})^2 +\\B{\\nu} \\sin(2\\B{\\nu})\\right] \\times \\\\\n & \\left[ 4\\mu^2 \\B{\\nu}^4 - 6 \\B{\\nu}^2 (2\\ensuremath{k_{\\mathrm{F}}}\\xspace^2+(\\ensuremath{k_{\\mathrm{F}}}\\xspace^2-2\\mu^2)\\cos(2\\B{\\nu})) + 3 (3\\ensuremath{k_{\\mathrm{F}}}\\xspace^2- 2\\mu^2) \\B{\\nu} \\sin(2\\B{\\nu}) + 6 \\mu^2 \\B{\\nu}^3 \\sin(2\\B{\\nu})\\right]^{-1}\\;.\n\\end{split}\n \\end{equation}\nHere $\\B{\\nu}$ are the zeros of the Bessel function $J_{3\/2}$, typically called $j_{3\/2,\\nu}$, and $\\mathrm{Si}$ is the sine integral.\nNote that $T^2_{c,0}(\\nu)>0$ only for odd $\\nu$, while $T^2_{c,\\mathrm{F}}(\\nu),T^2_{c,\\mathrm{F}'}(\\nu)<0$ for all $\\nu$.\n\n\\twocolumngrid\n\\bibliographystyle{utphys}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\renewcommand\\section{\\@startsection {section}{1}{\\z@}%\n {-3.5ex \\@plus -1ex \\@minus -.2ex}%\n {2.3ex \\@plus.2ex}%\n {\\normalfont\\large\\bfseries}}\n\\renewcommand\\subsection{\\@startsection{subsection}{2}{\\z@}%\n {-3.25ex\\@plus -1ex \\@minus -.2ex}%\n {1.5ex \\@plus .2ex}%\n {\\normalfont\\bfseries}}\n\n\n \n\\def\\baselinestretch{1.2}\n\\parskip 6 pt\n\n\\marginparwidth 0pt\n\\oddsidemargin 0pt\n\\evensidemargin 0pt\n\\marginparsep 0pt\n\\topmargin -0.5in\n\\textwidth 6.5in\n\\textheight 9.0 
in\n\\newcommand{\\begin{equation}}{\\begin{equation}}\n\\newcommand{\\end{equation}}{\\end{equation}}\n\\newcommand{\\begin{eqnarray}}{\\begin{eqnarray}}\n\\newcommand{\\end{eqnarray}}{\\end{eqnarray}}\n\\newcommand{\\\"o}{\\\"o}\n\\newcommand{{\\rm Tr}}{{\\rm Tr}}\n\\newcommand{\\gone}[1]{{}}\n\\newcommand{\\noindent $\\bullet\\ $}{\\noindent $\\bullet\\ $}\n\\newcommand{{\\rm Re}}{{\\rm Re}}\n\\newcommand{{\\rm Im}}{{\\rm Im}}\n\\newcommand{{\\mathcal{N}}}{{\\mathcal{N}}}\n\n\\newcommand{\\ensuremath{H}\\xspace}{\\ensuremath{H}\\xspace}\n\\newcommand{\\ensuremath{f}\\xspace}{\\ensuremath{f}\\xspace}\n\\newcommand{\\ensuremath{Q}\\xspace}{\\ensuremath{Q}\\xspace}\n\\newcommand{\\ensuremath{R}\\xspace}{\\ensuremath{R}\\xspace}\n\n\n\n\\begin{document}\n\\begin{titlepage}\n\\begin{flushright}\nMAD-TH-11-02\n\\end{flushright}\n\n\\vfil\n\n\\begin{center}\n\n{\\bf \\large \nQuantization of charges and fluxes in warped Stenzel geometry\n}\n\n\n\\vfil\n\n\nAkikazu Hashimoto$^a$ and Peter Ouyang$^b$\n\n\\vfil\n\n$^a$ \nDepartment of Physics, University of Wisconsin, Madison, WI\n53706, USA\n\n$^b$ \nDepartment of Physics, Purdue University, West Lafayette, IN 47907, USA\n\n\n\\vfil\n\n\\end{center}\n\n\\begin{abstract}\n\\noindent We examine the quantization of fluxes for the warped Stiefel\ncone and Stenzel geometries and their orbifolds, and distinguish the\nroles of three related notions of charge: Page, Maxwell, and brane.\nThe orbifolds admit discrete torsion, and we describe the associated\nquantum numbers which are consistent with the geometry in its large\nradius and small radius limits from both the type IIA and the M-theory\nperspectives. The discrete torsion, measured by a Page charge, is\nrelated to the number of fractional branes. 
We relate the shifts in\nthe Page charges under large gauge transformations to the\nHanany-Witten brane creation effect.\n\n\\end{abstract}\n\\vspace{0.5in}\n\n\n\\end{titlepage}\n\\renewcommand{1.2}{1.05} \n\n\\section{Introduction}\n\nRecently, a holographic duality for superconformal Chern-Simons-Matter\ntheories in 2+1 dimensions with ${\\cal N}=6$ and ${\\cal N}=8$\nsupersymmetry was proposed \\cite{Aharony:2008ug,Aharony:2008gk}. These\nfield theories have $U(N)_k \\times U(N+l)_{-k}$ product gauge symmetry\n(where the subscripts refer to the Chern-Simons level associated with\neach gauge group) and bifundamental matter fields. In the large $N$\nlimit, the field theory has a dual gravity description in terms of\nM-theory as $N$ M2-branes on the orbifold $C^4\/Z_k$ (where the\norbifold acts by rotating each of the complex planes by an angle $2\n\\pi\/k$ simultaneously) and $l$ fractional branes. The supergravity\nsolution corresponding to this brane system is $AdS_4 \\times S^7\/Z_k$,\nand the quantum number $l$ is encoded in the discrete torsion of the\n$H_3(Z) = Z_k$ homology group of $S^7\/Z_k$.\n\nThe M-theory background can also be described in type IIA supergravity\nby dimensionally reducing along the Hopf-fiber of $S^7\/Z_k$. In this\ndescription, the geometry has the form $AdS_4 \\times CP^3$ and is the\neffective description when $1 \\ll N \\ll k^5$. Homologically, $CP^3$ is\nvery different from $S^7\/Z_k$, particularly in that $CP^3$ has no\ndiscrete torsion cycles, but it does possess integral homology. Even\nfor this simple example, the relationship of the spectrum of charges\nand fluxes in the M-theory and the type IIA descriptions is subtle.\n\nOne way to gain some perspective on the physical meaning of the\ndefining data of the gravity side of these correspondences is to\nrealize the superconformal field theory as either the UV or IR fixed\npoint of a holographic renormalization group flow. 
For example, a\nsuperconformal Chern-Simons theory can arise as the IR fixed point of\nan RG flow from a Chern-Simons-Yang-Mills-Matter theory\n\\cite{Hashimoto:2008iv,Aharony:2009fc}. Several related realizations\nhave also been considered \\cite{Hashimoto:2010bq}. These\nrenormalization group flows are dual to transverse geometries which\ndiffer from $R^8$, and many of these constructions have the\ninteresting property that the dual geometry admits a normalizable\n4-form. In M-theory, this allows one to introduce a nontrivial 4-form\nflux. The freedom of tuning the 4-form flux has a specific\ninterpretation in terms of tuning the parameters of the dual field\ntheory, and in some examples, one can explore dynamical features such\nas phase transitions in the low energy effective field theory from the\ngeometry of the supergravity dual\n\\cite{Aharony:2009fc,Hashimoto:2010bq}.\n\nIn this article, we investigate the duality of ${\\cal N}=2$\nChern-Simons quiver theories dual to $AdS_4 \\times V_{5,2}\/Z_k$ where\n$V_{5,2}$ is a homogeneous Sasaki-Einstein seven-manifold. This\nduality was originally considered by Martelli and Sparks in\n\\cite{Martelli:2009ga}. On the field theory side, it generalizes the\nmodel of ABJM by adding a chiral multiplet in the adjoint\nrepresentation to each factor of the $U(N)_k \\times U(N+l)_{-k}$ gauge\ngroup. The gravity dual description can be deformed in the IR, giving\nrise to a geometry known as the warped Stenzel metric. At the moment,\nlittle is known about the field theory interpretation of this IR\ndeformation. In order to facilitate its interpretation, it is\nuseful to enumerate the discrete and continuous parameters\nassociated with this system. This is related to the problem of\nquantizing charges and fluxes in the gravity dual.\n\nIn type IIA (and IIB) supergravity, there is a well-known subtlety in\nimposing charge quantization, which arises in the example studied in\nthis paper. 
The $V_{5,2}\/Z_k$ geometry, reduced to IIA along the\n$U(1)$ isometry along which the $Z_k$ acts, is a space $M_2$ which has\nthe same homology structure as $CP^3$ \\cite{Martelli:2009ga}; in\nparticular there is a nontrivial 4-cycle. Now, one might want to\nquantize the four-form flux through this cycle, but the natural\ngauge-invariant four-form\n\\begin{equation} \\tilde F_4 = d A_3 + H_3 \\wedge A_1 \\end{equation}\nis not closed, and therefore its integral through the 4-cycle is not\nconserved and cannot be quantized. A similar issue arises in the flux\nof $* \\tilde F_4$ through $M_2$. These apparent difficulties have also\nappeared in earlier examples considered in\n\\cite{Aharony:2009fc,Hashimoto:2010bq} and their resolution is well\nunderstood. The four-form flux satisfies a modified Bianchi identity,\n\\begin{equation} d \\tilde F_4 = -H_3 \\wedge F_2 \\end{equation}\nso to define a conserved charge we should not integrate $\\tilde{F}_4$\nbut a modified flux which is chosen to be closed:\n\\begin{equation} Q_4^{Page} = -{1 \\over (2 \\pi l_s)^3 g_s } \\int (\\tilde F_4 + B_2\n\\wedge F_2) \\label{q4page}\\ .\\end{equation}\nThis new charge, known as the Page charge, is one of the three subtle\nnotions of charge identified by Marolf \\cite{Marolf:2000cb}. The three\ncharges being referred to here are the Maxwell charge, brane charge, and\nthe Page charge, and they can take distinct values in gauge theories\ninvolving Chern-Simons terms as is the case for type IIA\nsupergravity. Each of these charges respects some, but not all, of the\nproperties commonly associated with charges in simpler contexts: gauge\ninvariance, conservation, localization, and integer quantization. Page\ncharge turns out to respect conservation, localization, and integer\nquantization, but fails to be invariant with respect to large gauge\ntransformations which shift the period of $B_2$. 
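Indeed, conservation of the Page charge follows in one line: using $d\nB_2 = H_3$, $d F_2 = 0$, and the modified Bianchi identity above, the\nPage integrand is closed,\n\\begin{equation} d \\left( \\tilde F_4 + B_2 \\wedge F_2 \\right) = -H_3 \\wedge F_2 + H_3 \\wedge F_2 = 0 \\ , \\end{equation}\nso its period over the 4-cycle does not change as the cycle is\ndeformed, while a large gauge transformation $B_2 \\rightarrow B_2 +\n\\Delta B_2$ shifts the period by the integral of $\\Delta B_2 \\wedge F_2$. 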
This ambiguity is\nprecisely what is required to interpret the Hanany-Witten brane\ncreation effect in the brane construction of these models, and it is\nintimately connected to the duality of the field theory.\n\nIn this article, we will analyze the quantization of fluxes in the $AdS_4\n\\times V_{5,2}\/Z_k$ geometry and its Stenzel deformation from the type\nIIA perspective, and relate the gauge ambiguity to Hanany-Witten brane\ncreation effects along the lines of\n\\cite{Aharony:2009fc,Hashimoto:2010bq}. In \\cite{Martelli:2009ga}, it\nwas argued that the Stenzel deformation is incompatible with the\npresence of discrete torsion which gives rise to a non-vanishing value\nof $l$ in $U(N)_k \\times U(N+l)_{-k}$. On the contrary, we find that\nsome values of $l$ are allowed, and explain the source of this\napparent discrepancy. We will also examine the compatibility of the\nIIA and the M-theory perspectives.\n\n\\section{Stiefel, Stenzel, and the ${\\cal N}=2$ Chern-Simons-Quiver theory}\n\nIn this section, we briefly review the construction of ${\\cal N}=2$\nChern-Simons-Quiver theory, its gravity dual, and its Stenzel\ndeformation. We closely follow the presentation of\n\\cite{Martelli:2009ga}.\n\n\\subsection{Stiefel cone}\n\\label{sec2.1}\nThe starting point is a non-compact Calabi-Yau 4-fold \n\\begin{equation} z_0^n + z_1^2+z_2^2+z_3^2 + z_4^2 = 0 \\label{curve} \\end{equation}\nwhere we take $n=2$. This geometry is a cone whose base is a\nSasaki-Einstein seven-manifold $V_{5,2}$, also known as the Stiefel\nmanifold. Had one taken $n=1$, the geometry of the Calabi-Yau 4-fold\nwould have been $R^8$ which is formally a cone over $S^7$. For $n\n>2$, the geometry is not a cone over a Sasaki-Einstein manifold\n\\cite{Martelli:2009ga}.\n\nWhen M2-branes are placed at the tip of the cone, we obtain a warped\ngeometry $AdS_4 \\times V_{5,2}$. The base $Y_2=V_{5,2}$ has a torsion\n3-cycle $H_3(Y_2,Z)=Z_2$. 
\n\n\nThe $Z_k$ orbifold is taken on the $U(1)_b$ isometry which rotates\n\\begin{equation} (z_0,z_1+z_2, z_3+z_4, z_1-z_2,z_3-z_4) \\end{equation}\nwith weights $(0,1,1,-1,-1)$. On $Y_2\/Z_k$, this changes the torsion\ngroup from $Z_2$ to $H_3(Y_2\/Z_k,Z)=Z_{2k}$, so\n\\begin{equation} {1 \\over (2 \\pi l_p)^3} \\int_{\\Sigma_3} C_3 = {l -k \\over 2 k} \\end{equation}\nfor $\\Sigma_3$ which generates $H_3(Y_2\/Z_k)$. Here we have shifted\n$l$ by $k$ compared to what is written in (3.26) of\n\\cite{Martelli:2009ga}. Both $l$ and $k$ are integers so this shift is\na matter of convention in describing the supergravity background.\n\nWhen reduced to IIA along the $U(1)_b$ direction parametrized by\n$\\gamma$, the Sasaki-Einstein space $Y_2\/Z_k$ decomposes into\n\\begin{equation} ds^2(Y_2\/Z_k) = ds^2(M_2) + {w \\over k^2}(d \\gamma + kP)^2 \\end{equation}\nand the IIA string frame metric becomes\n\\begin{equation} ds^2 = \\sqrt{w} {R^3 \\over k} \\left({1 \\over 4} ds^2(\\mbox{AdS}_4)+ds^2(M_2)\\right) \\end{equation}\nwith\n\\begin{equation} F_2 = k g_s l_s \\Omega_2, \\qquad \\Omega_2 = d P \\ . \\end{equation}\n\nSince\n\\begin{equation} C_3 = A_3 + B_2 \\wedge d \\gamma \\end{equation}\nwith this dimensional reduction, $B_2$ turns out to have the period\n\\begin{equation} {1 \\over (2 \\pi l_s)^2} \\int B_2 = {l \\over 2k} - {1 \\over 2} \\ . \\end{equation}\n\n\\subsection{Brane construction and the Hanany-Witten effect}\n\nThe field theory dual is conjectured in \\cite{Martelli:2009ga} to\narise from the low energy limit of a network of D3-branes, an NS5-brane and\na $(1,k)$ 5-brane in type IIB on $R^{1,2} \\times S^1 \\times R^2\\times\nC^2$. The D3-branes wind along $R^{1,2} \\times S^1$. The NS5 is\nextended along $R^{1,2}$, one of the $R$ in $R^2$ and along the curve\n$w_1=- i w_0^2$ where $C^2$ is parametrized by $(w_0,w_1)$. 
The\n$(1,k)$ 5-brane is extended along $R^{1,2}$, a line at an angle\nrelative to the NS5-brane in $R^2$, and along $w_1=i w_0^2$ in\n$C^2$. There may also be fractional D3-branes stretching between the\nNS5 and the $(1,k)$ 5-brane at $(w_0,w_1)=(0,0)$.\n\nIn a brane configuration of this type, the Hanany-Witten brane\ncreation effect occurs when one of the 5-branes is moved around the\ncircle $S^1$ keeping the other 5-brane fixed. If there were $N$\ninteger branes and $l$ fractional branes to start with, moving the 5-brane\nonce around the circle will give rise to a shift\n\\begin{eqnarray} N & \\rightarrow & N + l \\cr\nl & \\rightarrow & l + 2k \\ . \n\\end{eqnarray}\n\n\\subsection{Stenzel Deformation}\n\\label{sec2.3}\n\nIn this subsection, we will briefly review the IR deformation of the\nStiefel cone. As an algebraic curve, it amounts to deforming\n(\\ref{curve}) to\n\\begin{equation} z_0^2 + z_1^2+z_2^2+z_3^2 + z_4^2 = \\epsilon^2 \\label{stenzeleq}\\ . \\end{equation}\nThe tip of the cone is blown up by an $S^4$ parametrized by \n\\begin{equation} \\sum_{i=0}^4 ({\\rm Re} z_i)^2 =\\epsilon^2, \\qquad {\\rm Im} z_i = 0 \\end{equation}\nand the full geometry can be viewed as the cotangent bundle over\n$S^4$. This geometry is also known as the Stenzel geometry\n\\cite{Stenzel} and admits an explicit metric \\cite{Cvetic:2000db}. 
In\nthe notation adopted in \\cite{Klebanov:2010qs}, the metric takes the\nform\n\n\\begin{equation} ds^2 = c^2( dr^2 + \\nu^2) + a^2 \\sum_{i=1}^3 \\sigma_i^2 + b^2 \\sum_{i=1}^3 \\tilde \\sigma_i^2 \\end{equation}\nwhere\n\\begin{eqnarray} \na^2(r) &=& 3^{-1\/4} \\lambda^2 \\epsilon^{3\/2} (2 + \\cosh 2r)^{1\/4} \\cosh r, \\cr\nb^2(r) &=& 3^{-1\/4} \\lambda^2 \\epsilon^{3\/2} (2 +\\cosh 2r)^{1\/4} \\sinh r \\tanh r, \\cr\nc^2(r) &=& 3^{3\/4} \\lambda^2 \\epsilon^{3\/2}(2+\\cosh 2r)^{-3\/4} \\cosh^3 r \\end{eqnarray}\nand $\\nu$, $\\sigma_i$, and $\\tilde \\sigma_i$ are left-invariant\none-forms of the coset $SO(5)\/SO(3)$ (for which a nice explicit basis\nappears in \\cite{Klebanov:2010qs}).\n\n\nAt $r=0$, the geometry collapses to an $S^4$. At large $r$, the\ngeometry asymptotes to a cone over $V_{5,2}$. Formally, this geometry\nis similar to the deformed $B_8$ space \\cite{Gibbons:1989er} which\ncollapses to an $S^4$ near the tip, and asymptotes to a cone over a\nsquashed 7-sphere, but there are a few important differences. One is\nthe fact that the $Z_k$ orbifold along the $U(1)_b$ of the Stenzel geometry\nhas fixed points at antipodal points of $S^4$ at $r=0$. We will\ncomment on other differences below.\n\nOne important feature of the Stenzel geometry is that it admits a\nself-dual 4-form which can be written, explicitly, as\n\\begin{eqnarray} G_4 &=& m \\left\\{ {3 \\over \\epsilon^3 \\coth^4{r \\over 2}} \\left[ a^3 c \\nu \\wedge \\sigma_1 \\wedge \\sigma_2 \\wedge \\sigma_3 + {1 \\over 2} b^3 c d r \\wedge \\tilde \\sigma_1 \\wedge \\tilde \\sigma_2 \\wedge \\tilde \\sigma_3 \\right] \\right. \\cr\n&& \\left. + {1 \\over 2 \\epsilon^3 \\coth^4{r \\over 2}} \\left[{1 \\over 2} a^2 b c \\epsilon^{ijk} d r \\wedge \\sigma_i \\wedge \\sigma_j \\wedge \\tilde \\sigma_k + a b^2 c \\epsilon^{ijk} \\nu \\wedge \\sigma_i \\wedge \\tilde \\sigma_j \\wedge \\tilde \\sigma_k \\right]\\right\\} \\ . 
\n\\label{g4}\n\\end{eqnarray}\n\n\nBecause the four-form is self-dual, and the background geometry is\nCalabi-Yau, one can turn on this flux in eleven-dimensional\nsupergravity without breaking supersymmetry \\cite{Becker:2001pm}.\nMoreover, it gives rise to a solution where the background geometry is\nunmodified except for the presence of a warp factor $H$, as in the\nstandard warped product ansatz\n\\begin{eqnarray} ds^2 &=& H^{-2\/3} (-dt^2+dx_1^2 + dx_2^2) + H^{1\/3} ds_8^2 \\cr\nF_4 & = & dt \\wedge dx_1 \\wedge dx_2 \\wedge d H^{-1} + m G_{4} \\ . \\label{ansatz} \\end{eqnarray}\nThe warp factor itself can be determined by solving the four-form field equation,\n\\begin{equation} d * G = {1 \\over 2} G \\wedge G \\ , \\end{equation}\nwhere in general there can be additional source terms due to the\npresence of explicit M2-branes.\n\n\\section{Quantization of fluxes in Stiefel cones and Stenzel space}\n\nLet us now consider the quantization of fluxes in the warped Stiefel\ncones and Stenzel geometries in order to identify the discrete\nparameters characterizing the background. There are two guiding\nprinciples which we follow in carrying out the quantization. One is\nthat quantized fluxes should be invariant under deformation of\nGaussian surfaces unless a discrete unit of charge crosses the\nsurface. The other is that the quantization condition should be invariant\nunder string dualities.\n\n\\subsection{Review of Maxwell, brane, and Page charges}\n\nWe begin by considering the quantization of fluxes for the Stenzel\ngeometry in the IIA description. 
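(As a brief numerical aside: the smoothness properties of the Stenzel metric quoted in section \ref{sec2.3} can be confirmed directly. The sketch below is ours, with $\lambda = \epsilon = 1$; at $r=0$ the $\tilde\sigma_i$ directions collapse ($b \to 0$) while $a = c$ remains finite, leaving a round $S^4$, and at large $r$ the ratio $a^2/b^2 = \coth^2 r \to 1$, the conical limit.)

```python
import math

def stenzel_abc2(r, lam=1.0, eps=1.0):
    # squared metric functions a^2, b^2, c^2 of the Stenzel metric
    # in the normalization quoted above (lambda = lam, epsilon = eps);
    # the function name is ours, for illustration only
    pre = 3 ** (-0.25) * lam ** 2 * eps ** 1.5 * (2 + math.cosh(2 * r)) ** 0.25
    a2 = pre * math.cosh(r)
    b2 = pre * math.sinh(r) * math.tanh(r)
    c2 = 3 ** 0.75 * lam ** 2 * eps ** 1.5 * (2 + math.cosh(2 * r)) ** (-0.75) * math.cosh(r) ** 3
    return a2, b2, c2

# at r = 0 the b-directions collapse while a = c: a smooth S^4
a2, b2, c2 = stenzel_abc2(0.0)
assert b2 == 0.0 and abs(a2 - c2) < 1e-12
# at large r, a^2/b^2 = coth(r)^2 -> 1: the metric becomes conical
a2, b2, c2 = stenzel_abc2(10.0)
assert abs(a2 / b2 - 1.0) < 1e-6
```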
While the IIA description of the\nStenzel geometry is singular near the core, one still expects Gauss\nlaw considerations to lead to a consistent picture far away from the core\nregion, where the geometry looks essentially like the warped Stiefel\ncone.\n\nThe relevant fluxes to consider then are the flux of $\\tilde F_4$ through\nthe generator of $H_4(M_2,Z)$ and the flux of $*\\tilde F_4$ through\nthe six cycle $M_2$. As we mentioned earlier, however, these fluxes\ndepend on the radius $r$ at which we identify the base $M_2$ for the\nbackground in consideration.\n\nThe resolution to these apparent difficulties is the realization that\none is dealing with a situation where the Maxwell, brane, and Page\ncharge are distinct from one another, and that care is required in\napplying quantization conditions on the appropriate charge.\n\nLet us recall the specific definition of three charges. In type IIA\nsupergravity, the four form $\\tilde F_4 =d A_3 + H_3 \\wedge A_1$ is\ngauge invariant and well defined but is not closed and does not\nrespect Gauss' law. One can nonetheless compute the period of $\\tilde\nF_4$ on the generator of $H_4(M_2,Z)$ in the $r \\rightarrow \\infty$\nlimit. This defines the Maxwell charge. In contrast, the period of\nPage flux $-(\\tilde F_4 +B_2 \\wedge F_2)$ on $H_4(M_2,Z)$ is\nindependent of $r$, although it is ambiguous with respect to large\ngauge transformation of $B_2$. This quantity defines the Page\ncharge. Finally, the amount of charge carried by a brane source\nthrough its Wess-Zumino couplings defines the brane charge. Brane\ncharge includes the contribution of lower-brane charges from the\npull-back of the NSNS 2-form in the Wess-Zumino coupling. Therefore,\nif the background contains a non-uniform NSNS 2-form $B_2$, the brane\ncharge is not conserved with respect to changes in the position of the\nbranes. 
Some of these subtleties appeared originally in\n\\cite{Bachas:2000ik}.\n\nThe triplet of charges exists for the other form fields, e.g.\\ the six-form\n$F_6 = * \\tilde F_4$, and is summarized in appendix B of\n\\cite{Aharony:2009fc}. For the flux of $F_6 = * \\tilde F_4$, it is also useful\nto introduce the notion of bulk charge $Q_{bulk}$ which is the total\ncharge carried by the bulk fields\n\\begin{equation} Q_2^{bulk} = {1 \\over (2 \\pi l_p)^6} \\int_{{\\cal M}_8} {1 \\over 2} G_4 \\wedge G_4 \\ . \\end{equation}\nThen, the bulk charge can be understood as being related to the brane\nand Maxwell charges via\n\\begin{equation} Q_2^{Maxwell} = Q_2^{brane} + Q_2^{bulk} \\ . \\end{equation}\nTo correctly quantize the supergravity solution, one should impose the\ndiscreteness condition on the Page charges, and not on Maxwell, brane,\nor bulk charges.\n\n\\subsection{Quantization on the Stiefel cone}\n\nTo illustrate the integrality of Page charges and the non-integrality\nof the other charges, let us first carry out the quantization procedure\nfor the Stiefel cone.\n\nFirst, consider the flux of $\\tilde F_4$. The Stiefel geometry has\nvanishing fourth Betti number, so there is no $G_4$ to consider in\nM-theory, and after dimensional reduction, the IIA flux $\\tilde F_4$\nalso vanishes. We are not done yet, however, because we still have to\nconsider the Page flux (\\ref{q4page}), which contains a term $B_2\n\\wedge F_2$, and $F_2$ is nonvanishing in the dimensional reduction of\nthe orbifolded Stiefel cone. Requiring the Page flux to be integer\nquantized imposes the quantization condition\n\\begin{equation} {1 \\over (2 \\pi l_s)^2} \\int B_2 = {l \\over 2k} - {1 \\over 2} \\end{equation}\nwhich we inferred independently from M-theory considerations\nearlier in section \\ref{sec2.1}.\n\nNext, we consider the quantization of the flux of D2 charge through\n$M_2$. We are interested in determining the Maxwell charge when the\nPage charge is set to $N$. 
One finds\n\\begin{equation} Q_2^{Maxwell} = N - {l(l-2k) \\over 4k} \\label{max1}\\end{equation}\nwhich can essentially be viewed as the sum of a contribution from $N$ M2-branes\nand a contribution from the discrete torsion, along the lines of\n\\cite{Bergman:2009zh}. The Maxwell charge $Q_2^{Maxwell}$ has several\nnotable features. First, it is not necessarily integer valued. Second,\nit is invariant under\n\\begin{equation} N \\rightarrow N+l, \\qquad l \\rightarrow l+2k \\ . \\end{equation}\nThis is consistent with the property of Maxwell charge that it is\nconserved under continuous deformations corresponding to moving one of\nthe 5-branes around the $S^1$ in the type IIB brane\nconstruction. Finally, $Q_2^{Maxwell}$ can become zero or negative for\nsome range of $(N, l, k)$. This suggests that the entropy of the\nsuperconformal field theory goes to zero or becomes negative, signaling a\nphase transition. The condition that $Q_2^{Maxwell}$ is positive is\nalso related to the condition necessary for supersymmetry to be\nunbroken as was highlighted in related contexts in\n\\cite{Aharony:2009fc,Hashimoto:2010bq}.\n\n\\subsection{Quantization in the Stenzel geometry}\n\nLet us now extend our analysis of flux quantization to the case where\nthe Stiefel cone is deformed into the Stenzel geometry, as described\nin Section \\ref{sec2.3}. To keep a general set of charges under\nconsideration, we will study the case where the Stenzel manifold has\nbeen quotiented by a $Z_k$ orbifold action.\n\nThe most important feature of the geometry in the deep IR is its\nsingularity structure after the orbifold has been taken. At the tip\nof the deformed orbifolded cone, the geometry has the local structure\n$(R^4\\times S^4)\/Z_k$, and in particular it has two fixed points\nwhich we can think of as the north and south poles of the $S^4\/Z_k$.\nAt each of the fixed points, the local geometry is $R^8\/Z_k$\n\\cite{Martelli:2009ga}. This geometric feature has a nice\nimplication. 
The supersymmetry of the deformed Stenzel cone is\nconsistent with adding some mobile M2-branes, and we are free to move\nsome number of them to either of the orbifold fixed points. Then the\ntheory on the M2-branes in the deep IR should simply be two copies of\nthe ABJM theory.\n\n\nAt any finite excitation energy the theory should feel the effects of\ncurvature and the self-dual four-form flux in the background which\nbreak the supersymmetry from $\\mathcal{N} = 6$ to $\\mathcal{N}=2$.\nHowever, for issues such as charge quantization, we should be able to\nwork in the extreme low energy limit and use our intuition from the\nABJM case. In particular one might expect that it is possible to turn\non discrete torsion at each singularity, and we will see that this is\ncorrect, although the torsion will be subject to some global\nconstraints.\n \n\nFirst we will consider the type IIA reduction of this geometry. This\ngeometry develops a dilaton and curvature singularity near the\ntip. However, because the geometry asymptotes to the Stiefel cone away\nfrom the tip, and because quantization of Page fluxes in type IIA\ndescription appropriately respects Gauss law\/localization of charge\nsources, we are able to partially constrain the discrete parameters of\nthe supergravity ansatz. 
We will then continue to consider the\ngeometry and fluxes near the core region from the M-theory\nperspective, and identify additional constraints which further\nrestrict the parameters of the ansatz.\n\nThe Stenzel manifold admits the self-dual four form flux (\\ref{g4}) which\ncan be derived from a three-form potential $C_3$\n\\cite{Klebanov:2010qs}\n\\begin{equation} C_3 = m \\beta + \\alpha \\Omega_2 \\wedge d \\gamma \\label{3-form}\n\\end{equation}\n\\begin{equation} \\beta = {a c \\over \\epsilon^3 \\cosh^4{r \\over 2}} \\left[ (2 a^2 + b^2) \\tilde \\sigma_1 \\wedge \\tilde \\sigma_2 \\wedge \\tilde \\sigma_3 + {a^2 \\over 2} \\epsilon^{ijk} \\sigma_i \\wedge \\sigma_j \\wedge \\tilde \\sigma_k \\right] \\ , \\label{beta}\\end{equation}\nwhere $\\Omega_2$ and $\\gamma$ are as defined in\n\\cite{Martelli:2009ga}.\\footnote{For the interested reader, $\\gamma$\nis the angular coordinate which is quotiented by the orbifold action,\nand $\\Omega_2$ is proportional to the geometric flux associated with\n$\\gamma$.} Here we have added an exact term proportional to $\\alpha$,\nwhich does not affect the gauge invariant four-form flux. This exact\nterm is present in the $AdS_4 \\times V_{5,2}\/Z_k$ system with discrete\ntorsion \\cite{Martelli:2009ga} which is the UV limit of the Stenzel\nsolution.\n\nIn quantizing the type IIA Page flux through the four\ncycle of $M_2$, we impose the condition\n\\begin{equation} \\int_{S^4} G_4 + n k \\int_{\\tilde S^3\/Z_k} C_3 = (2 \\pi) n \\alpha=\n-(2 \\pi l_p)^3 (l-k), \\qquad n=2 \\label{aquant} \\end{equation}\nwhich constrains $\\alpha$. Note that in the asymptotically conical limit,\nthe torsion is $Z_{2k}$-valued, and so $l$ takes integer values in the\nrange $0\\le l \\le 2k-1$.\n\n\nIn addition to this, the flux of $G_4$ through $S^4\/Z_k$ is\nindependently quantized to be integral. 
This implies\n\\begin{equation} \\int_{S^4\/Z_k} G_4 = {8 \\pi^2 \\over 3 \\sqrt{3}k} m = (2 \\pi l_p)^3\nq \\end{equation}\nfor integer $q$. This constraint has no counterpart in the Stiefel\ncone as neither the $S^4$ cycle nor the self-dual 4-form exist in that\ncase.\n\n\nNow let us consider the quantization conditions that arise from\nconsidering M-theory near the orbifold fixed points; we will show that\nthe expected charges at the singularities are compatible with the IIA\ncalculations. At the north pole of $S^4\/Z_k$, the pull-back of $G_4$\non the $R^4\/Z_k$ fiber was computed in \\cite{Martelli:2009ga}:\n\\begin{equation} {1 \\over (2 \\pi l_p)^3} \\int_{R^4\/Z_k} G_4 = {q \\over 2} \\equiv {\\tilde M \\over 2} = M \\end{equation}\nwhere $M$ and $\\tilde M$ are the variables used in\n\\cite{Martelli:2009ga}. This means that the integral of $C_3$\n(including both the nontrivial flux and the discrete torsion\ncontribution) on $S^3\/Z_k$ at the north pole is\n\\begin{equation} {1 \\over (2 \\pi l_p)^3} \\int_{\\tilde S^3\/Z_k} C_3 = -{l \\over 2k} + {1 \\over 2} -{q \\over 2} \\ \\label{c3north}.\\end{equation}\n\n\nSuppose that at the north pole we impose the condition that the system is described by charges as in the ABJ case with $l^N$ units of discrete torsion (including a shift by 1\/2 a unit as discussed in \\cite{Aharony:2009fc}.) This is compatible with (\\ref{c3north}) provided that\n\\begin{equation} -{l^N \\over k} + {1 \\over 2} = -{l \\over 2k} + {1 \\over 2} -{q \\over 2} \\end{equation}\nor equivalently\n\\begin{equation} l^N = {l + kq \\over 2}. 
\\label{lN}\\end{equation}\n\nAt the south pole, the computation is very similar, except that the\nflux quantum $q$ appears with a minus sign:\n\\begin{equation} l^S = {l - kq \\over 2} \\label{lS}\\end{equation}\nThe difference in the pull-back of $C_3$ between the north and the\nsouth pole is just the total flux $q$, while the discrete torsion\ncontribution must be the same at the north and south poles because the\ntorsion has no associated flux.\\footnote{In the coordinates of\n\\cite{Bergman:2001qi,Klebanov:2010qs}, the $U(1)_b$ quotient as\ndefined in \\cite{Martelli:2009ga} is imposed on the angular coordinate\n$\\phi_2$. With this choice of $U(1)$ action, the poles of the\n$S^4\/Z_k$ are located at $(\\tau=0,\\alpha = \\pi\/2,\\psi = 0,\\pi)$. In\nthe vicinity of the poles, one can check that the one-forms\n$\\tilde{\\sigma_i}$ differ by a sign,\n$\\tilde{\\sigma}_i(N)=-\\tilde{\\sigma}_i(S)$, so the three-form $\\beta$\nin (\\ref{beta}) also changes by a sign from the north pole to the\nsouth pole.}\n\nHow should we interpret the formulas (\\ref{lN}) and (\\ref{lS})? The\nfirst thing to note is that $l^N$ and $l^S$ are equal mod $k$, so if\nthey had described decoupled systems we would have said that they were\nequivalent up to a large gauge transformation. However, they are not\ndecoupled, and there is no large gauge transformation that sets them\nequal to each other. Instead, the picture that has emerged is that\n$l^N$ and $l^S$ locally appear to describe the same torsion, but\nglobally there is a topologically nontrivial twist relating them, and\nthe winding number of the twist is just the number of units of $G_4$\nflux in $S^4\/Z_k$.\n\nThe second thing to note is that in the local ABJ models at the north\nand south poles, $l^N$ and $l^S$ should themselves be integers, or in\nother words $l-kq$ must be even. This means that for a given $q$ and\n$k$, $l$ must take either only even or only odd values. 
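This integrality constraint is simple arithmetic and can be checked mechanically; the sketch below (the names are ours, for illustration) enumerates, for a sample $(k,q)$, the values of $l$ in the range $0 \le l \le 2k-1$ for which $l^N$ and $l^S$ are both integers.

```python
from fractions import Fraction

def torsions(l, k, q):
    # local torsions at the two fixed points, eqs. (lN) and (lS):
    # l^N = (l + k q)/2,  l^S = (l - k q)/2
    lN = Fraction(l + k * q, 2)
    lS = Fraction(l - k * q, 2)
    return lN, lS

# l^N and l^S are integers exactly when l - k q is even, and they
# always agree mod k since l^N - l^S = k q
k, q = 3, 2
allowed = [l for l in range(2 * k) if (l - k * q) % 2 == 0]
print(allowed)  # half of the 2k values survive: [0, 2, 4]
for l in allowed:
    lN, lS = torsions(l, k, q)
    assert lN.denominator == 1 and lS.denominator == 1
    assert (lN - lS) % k == 0
```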
In the\nundeformed theory, $l$ described a $Z_{2k}$-valued discrete torsion,\nbut we see that our local considerations at the tip of the Stenzel\ngeometry remove half of the possible values of $l$, and the discrete\ntorsion in the deformed case is $Z_k$-valued. This phenomenon is\nreminiscent of the deformed conifold; the ``singular'' conifold admits\na $Z_2$-valued discrete torsion which is not present in the deformed\nconifold \\cite{Vafa:1994rv}.\n\n\nWe can now examine the quantization of the six-form flux through $M_2$\nin IIA or the 7-form flux through $V_{5,2}$ in M-theory which measures\nthe charge of D2\/M2 branes in this background.\n\nOne way to approach this issue is to first examine the brane charges\npresent in this setup. Before adding any explicit 2-branes, there are\n2-brane charges arising from the discrete torsion at the $Z_k$ fixed\npoints at north and south poles \\cite{Bergman:2009zh}. These should\nhave the same form as what was computed in \\cite{Aharony:2009fc}, so\nwe find\n\\begin{equation} Q_2^{torsion} = -{k \\over 2}\\left(-{l^N \\over k} + {1 \\over 2}\\right)^2 + {k \\over 8} -{k \\over 2}\\left(-{l^S \\over k} + {1 \\over 2}\\right)^2 + {k \\over 8} = -{l(l-2k) \\over 4k}- {k q^2 \\over 4} \\ . \\end{equation}\nIf, in addition, we were to introduce $N$ 2-branes which can be at any\npoint in the Stenzel geometry, there will be an additional\ncontribution of $N$ to the brane charge\n\\begin{equation} Q_2^{brane} = N-{l(l-2k) \\over 4k}- {k q^2 \\over 4} \\ . \\end{equation}\nSince the Maxwell charge is the sum of the brane and bulk charges, and since the bulk\ncharge is given by\n\\begin{equation} Q_2^{bulk} ={1 \\over (2 \\pi l_p)^6} \\int_{{\\cal M}_8} {1 \\over 2} G\\wedge G = {2^{11} m^2 \\mbox{vol}(V_{5,2}) \\over (2 \\pi l_p)^6 3^6} = {k q^2 \\over 4}\\end{equation}\nwe infer that\n\\begin{equation} Q_2^{Maxwell}=N-{l(l-2k) \\over 4k} \\ . \\label{max2}\\end{equation}\nIt also follows that the Page charge $Q_2^{Page}=N$.\n\nThis result is gratifying for several reasons. 
First, this result\nreflects the accounting of all identifiable charge sources in an\notherwise consistent and smooth M-theory background aside from the\norbifold fixed points. The final answer is the same as what we inferred\nfor the undeformed Stiefel cone (\\ref{max1}). It then follows that\nthe gauge invariant Maxwell charge is invariant under the shifts\n\\begin{equation} N \\rightarrow N+l, \\qquad l \\rightarrow l + 2k\\end{equation}\nwhich arise naturally from the several perspectives mentioned earlier.\n\nThe only additional constraint imposed by the Stenzel deformation is\nthe restriction on the parity of $l$ so that $l$ is congruent to $kq$\nmod 2. This is far milder than what was found in\n\\cite{Martelli:2009ga}. \n\n\\section{Discussion}\n\nIn this article, we reviewed the quantization of fluxes in the warped\nStiefel cone and its Stenzel deformation, which is conjectured to be\nthe holographic dual of an ${\\cal N}=2$ Chern-Simons-matter theory in 2+1\ndimensions. We described the subtle differences between several\nrelated notions of charge, and recovered a structure\ncompatible with the pattern of Hanany-Witten brane creation effects\nand duality cascades.\n\nThere are a number of interesting features which one can infer from\nthe structure of the gravity solution. $Q_2^{brane}$ is a measure of\nthe number of degrees of freedom in the deep infrared of this\nsystem. When $Q_2^{brane}$ is zero or negative, we expect the system\nto break supersymmetry and flow to a different universality class of\nvacuum as was the case for many related systems\n\\cite{Aharony:2009fc,Hashimoto:2010bq}. It would be very interesting\nto better understand the nature of the effective low energy physics\nwhen the system is in this new phase. 
This question can be addressed in\nthe simple context of $k=1$, where there are no $Z_k$ orbifold fixed\npoints, and by taking $q$ to be even, we can even set $l=0$ and\ndisregard the contribution from the discrete torsion.\n\nOne way to probe the fate of a system pushed slightly\ninto this new phase is to start with a background with $q$\nlarge but $Q_2^{brane}=0$ (which can easily be arranged for $q$ even\nand $k=1$). Consider now adding $p\ll q$ anti M2-branes as\nprobes. This setup is very similar to adding anti D3-branes in the warped\ndeformed conifold \cite{Kachru:2002gs}, which has received a lot of\nattention (and controversy) as a possible prototype of a gravity dual\nof a metastable vacuum\n\cite{DeWolfe:2008zy,Bena:2009xk,Bena:2010gs}. For the Stenzel\nmanifold, the effective action of the brane probe undergoing a KPV-like\ntransition \cite{Kachru:2002gs} works essentially in the same way, as\nis illustrated in figure \ref{figa}. However, from the point of view\nof the bound $Q_2^{brane}>0$, one expects the stable supersymmetric\nminima not to exist when $p$ anti M2-branes are introduced. \n\n\begin{figure}\n\centerline{\begin{tabular}{ccc}\n\includegraphics[width=1.8in]{kpvV1} & \n\includegraphics[width=1.8in]{kpvV2} & \n\includegraphics[width=1.8in]{kpvV3} \\\n(a) & (b) & (c)\n\end{tabular}}\n\caption{Potential $V(\psi)$ for $p$ anti D3-branes blowing up to an\nNS5-brane wrapping an $S^2$ of fixed latitude $\psi$ in $S^3$ at\nthe tip of the Klebanov-Strassler solution. (a), (b), and (c)\ncorrespond to $p\/M=0.03$, $p\/M=0.08$, and $p\/M=-0.03$, respectively.\nThese figures originally appeared in figure 2 of \cite{Kachru:2002gs}.\n\label{figa}}\n\end{figure}\n\nTentatively, we interpret these facts as follows. The computation of\nthe potential $V(\psi)$ neglected the backreaction of the anti-branes,\nand when the number of anti-branes is parametrically small ($p \ll q$)\nthis probe approximation is valid. 
In particular, the existence of\nthe non-BPS local minimum in figure \ref{figa}.(a) is a robust prediction in\nthis limit. However, when the state in the metastable false vacuum\nillustrated in figure \ref{figa}.(a) tunnels to the putative ``true''\nvacuum, the amount of charge carried by the probe grows to $q-p$, which\nis not parametrically small compared to $q$. The backreaction due to\nthis charge can be significant, and so the computation of the\ntunneling potential is (at least) not obviously self-consistent. It\nis tempting to speculate that the supersymmetric vacuum might actually\nbe spurious and that the non-BPS local minimum is the global minimum\nwhich characterizes the dynamics in the $Q_2^{brane}<0$ phase of these\ntheories up to corrections suppressed by $p\/q$.\n\nSimilar considerations apply to the BPS domain wall one constructs for $p\n< 0$, for which the KPV potential has the form illustrated in figure\n\ref{figa}.(c). This domain wall can also be viewed as arising from\nwrapping an M5-brane on $S^4$ at the tip of the Stenzel manifold. A\n5-brane wrapped on a 4-cycle is effectively a string, and in 2+1\ndimensions, a string forms a domain wall. It would be very interesting\nto understand the nature of the vacua separated by these domain\nwalls. Since an M5-brane wrapped on $S^4$ with $q$ units of flux must\nhave $q$ additional M2-branes ending on it to cancel the anomaly, some\nquantum numbers of the vacuum must shift to reflect this. Nonetheless,\none expects the Maxwell charges $Q_2^{Maxwell}$ and $Q_4^{Maxwell}$ to\nbe invariant as one crosses the domain wall, as these charges are\nconserved. Making complete sense of these expectations requires taking\nthe full back reaction of the M5-brane and the $q$ anomalous M2-branes\ninto account. 
Unfortunately, $q$ M2-branes cannot be treated reliably\nas a probe, making a systematic analysis of these issues a challenge.\n\nLet us also mention that similar issues of stable\/metastable non-BPS\nvacua, domain walls, and low energy effective field theories can be\ndiscussed in the closely related $B_8$ system, building on the analysis\nof \cite{Hashimoto:2010bq} and \cite{Gukov:2001hf}. The quantization of\ncharges and the enumeration of brane, Maxwell, and Page charges for\nthis system were carried out in \cite{Hashimoto:2010bq}. Here, however,\nwe encounter one additional puzzle. It was argued in\n\cite{Gukov:2001hf} that the 4-form flux through $S^4$ at the tip of\nthe $B_8$ cone is half integral as a result of the shift originally\ndue to Witten \cite{Witten:1996md}. This would appear to require a half\ninteger number of M2-branes to end on the domain wall made by wrapping\nthe M5 on the $S^4$. Of course, the number of M2's ending on an M5 is\nconstrained to be an integer. Perhaps this is indicating that an odd\nnumber of M5-branes is forbidden from wrapping the\n$S^4$. Alternatively, this paradox is another manifestation of not\nsystematically taking the back reaction of the domain wall into\naccount.\n\n\nFinally, let us emphasize that for the time being, the concrete field\ntheory interpretation of the Stenzel deformation and the quantum\nnumber $q$ is not known. The gravity dual suggests that the parameter\n$q$ is important for both the IR and the UV physics. At large radius,\n$q$ is related to the total number of units of M2-brane charge\ngenerated by the cascade, which in turn affects the UV gauge symmetry.\nNear the tip of the Stenzel geometry, the $G_4$ flux is nonvanishing,\nso $q$ should also appear in the data of the IR field theory. Of\ncourse, $q$ can only be nonzero when the geometry is deformed.\nMartelli and Sparks conjectured that this deformation is related to\nturning on a particular mass term on the field theory side. 
One can\nindeed see that a null geodesic can travel from the boundary at infinity\nto the core in finite field theory time, and so the spectrum of\nglueball-like states will exhibit a discrete structure whose scale is\nset by the deformation. If this conjecture is correct, it would\nsuggest that the field theory confines because of a mass deformation\n(reminiscent of the $\mathcal{N}=1^*$ theory in\n$d=4$~\cite{Polchinski:2000uf} and the mass deformed ABJM theory\n\cite{Lin:2004nb,Gomis:2008vc,Kim:2010mr,Cheon:2011gv}) rather than as\na dynamical effect, as is the case in the Klebanov-Strassler system\n\cite{Klebanov:2000hb}. It would be very interesting to understand\nthis theory better.\n\n\n\n\n\n\section*{Acknowledgements}\n\nWe would like to thank Ofer Aharony and Shinji Hirano for collaboration on\nrelated issues and for discussions at the early stage of this work.\nWe also thank \nIgor Klebanov,\nDon Marolf,\nDario Martelli, and\nJames Sparks\nfor useful comments and discussions. The work of AH is\nsupported in part by the DOE grant DE-FG02-95ER40896 and PO is supported in part by DOE grant\nDE-FG02-91ER40681.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\subsection{General results}\n\label{sec:4D-general}\n\nThis section will follow the same logic as Secs.~\ref{sec:Z2-general} and~\ref{sec:ON-general} devoted to 3d CFTs. Historically, however, the very first attempt to study crossing relations using numerical techniques focused on 4d CFTs. This analysis, pioneered in \textcite{Rattazzi:2008pe} and then refined in \textcite{Rychkov:2009ij}, was spurred by high energy physics motivations which will be reviewed in Sec.~\ref{sec:4D-bsm}. But first let us discuss general conformal bootstrap results for 4d CFTs with various global symmetries.\n\nConsider first the simple case of a 4d CFT containing a scalar operator $\phi$ with dimension $\Delta_\phi$. 
We further assume that it is charged under a global symmetry (e.g., a $\mathbb{Z}_2$ symmetry) so that the OPE $\phi\times\phi$ does not contain $\phi$. Then it is interesting to ask how high one can push the dimension of the first scalar operator in this OPE. It is also interesting to ask how large the OPE coefficient of the stress tensor $\lambda_{\phi\phi T}\propto \Delta_\phi\/\sqrt{C_T}$ is allowed to be~\cite{Poland:2010wg,Rattazzi:2010gj}, which translates into a lower bound on the central charge $C_T$. The best bounds to date were computed in \textcite{Poland:2011ey} and are shown in Fig.~\ref{fig:4D-singlecorr}.\footnote{The bound on $C_T$ can be somewhat strengthened by incorporating the assumption that $\phi$ is the lowest dimension scalar, as in \textcite{Rattazzi:2010gj}.} When $\Delta_\phi$ approaches the unitarity bound, both bounds approach the free theory values of $\Delta_{\phi^2}$ and $C_T$. This is consistent with the fact that a scalar of dimension $(d-2)\/2$ satisfies $\del^2\phi=0$ whenever inserted in a correlation function, and must therefore be a free scalar.\n\n\begin{figure}[t!]\n \centering\n\includegraphics[width=\figwidth]{fig46a-simplescalar.pdf}(a)\n\includegraphics[width=\figwidth]{fig46b-ctscalar.pdf}(b)\n \caption{\label{fig:4D-singlecorr} (Color online) (a) Upper bound on the dimension of the first scalar in the $\phi\times \phi$ OPE as a function of $\Delta_\phi$ in 4d unitary CFTs; (b) lower bound on the central charge $C_T$, computed by maximizing the OPE coefficient $\lambda_{\phi\phi T}$~\cite{Poland:2011ey}.\n}\n \end{figure}\n\nAnalogous bounds have been obtained for CFTs assuming various continuous global symmetries. 
\\textcite{Poland:2011ey} studied the 4pt functions of a scalar $\\phi$ transforming in the fundamental representation of $SO(N)$ or $SU(N)$, deriving an upper bound on the dimension of the lowest singlet scalar in the OPEs {$\\phi_i \\times \\phi_j$} (or {$\\phi^{\\dagger i}\\times \\phi_j$} in the case of $SU(N)$), as well as a lower bound on the central charge, shown in Fig.~\\ref{fig:4D-SON}.\\footnote{Numerics indicate that $SU(N)$ and $SO(2N)$ singlet and central charge bounds coincide~\\cite{Poland:2011ey}. A priori, because $SU(N)\\subset SO(2N)$, and because only singlets give rise to singlets when representations are reduced, these $SO(2N)$ bounds must be at least as strong as for $SU(N)$, but the exact coincidence is unexpected and remains unexplained.} As expected, the bounds scale with $N$, the size of the fundamental representation, at least when the dimension of the external scalar approaches the free value. It should be possible to extend this analysis to obtain upper bounds on the dimensions of operators transforming in other representations. For scalars in the symmetric traceless representation of $SO(4)$ this was done in \\textcite{Poland:2011ey}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\figwidth]{fig47a-scalarSUN.pdf}(a)\n\\includegraphics[width=\\figwidth]{fig47b-CT-SON.pdf}(b)\n \\caption{\\label{fig:4D-SON} (Color online) (a) Upper bounds on the singlet scalar dimension in $SO(N)$ and $SU(N)$ symmetric 4d CFTs, as a function of $\\Delta_\\phi$ in the fundamental; (b) lower bounds on $C_T$ in the same theories \\cite{Poland:2011ey}.}\n \\end{figure}\n \nIt is also possible to place upper bounds on the OPE coefficients of conserved vectors {of dimension 3} in the OPE of $\\phi$ with its conjugate. This class of operators includes the conserved currents of the considered global symmetry $G=SO(N)$ or $SU(N)$, transforming in the adjoint representation of $G$. 
Upper bounds on their OPE coefficients translate into lower bounds on the central charges $C_J$. These bounds are shown in Fig.~\ref{fig:4D-CJ-SON}. Once again the $SU(N\/2)$ bounds coincide with the $SO(N)$ ones~\cite{Caracciolo:2014cxa}. For $\Delta_\phi$ close to the free value, these bounds smoothly approach the free $SO(N)$ value.\n\nIn addition, for $G=SU(N)$ the OPE {$\phi^{\dagger i}\times\phi_j$} may also contain conserved currents of some other global symmetry that may exist in the theory and that are singlets under $G$. The lower bound on their inverse-square OPE coefficient is given in Fig.~\ref{fig:4D-CJ-singlet-SUN}. Close to the free theory dimension, these bounds approach the value corresponding to the theory of $N$ massless complex scalar fields, whose full symmetry $SO(2N)$ is indeed larger than $SU(N)$. \n\nAdditionally,~\textcite{Caracciolo:2014cxa} derived lower bounds on $C_J$ in the presence of a gap in the scalar singlet sector, as well as for extended global symmetries $SO(N)\times SO(M)$ and $SO(N)\times SU(M)$.\n\n \begin{figure}[t!]\n \centering\n\includegraphics[width=\figwidth]{fig48-CJboundsSON.pdf}\n \caption{\label{fig:4D-CJ-SON} (Color online) Lower bound on $C_J$ in $SO(N)$-symmetric unitary 4d CFTs as a function of the dimension of a scalar in the fundamental~\cite{Poland:2011ey}. 
$SU(N\/2)$ adjoint currents satisfy the same bound~\\cite{Caracciolo:2014cxa}.}\n \\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig49-CJ-singlet-SUN.pdf}\n \\caption{\\label{fig:4D-CJ-singlet-SUN}\n (Color online) Lower bound on the inverse square OPE coefficient of a singlet current in $SU(N)$-symmetric unitary 4d CFTs as a function of dimension of a scalar in the fundamental~\\cite{Poland:2011ey}.}\n \\end{figure}\n \nUnlike in 3d, most of the 4d bounds computed so far do not display any prominent kink or other dramatic feature, suggesting that existing 4d CFTs may lie inside the allowed regions and not on the boundary. Note however that some unexplained features are visible in the $C_J$ lower bounds in Fig.~\\ref{fig:4D-CJ-SON}, as well as in the bounds on supersymmetric CFTs discussed in Sec.~\\ref{sec:4Dsusy}. \n\nThe bounds discussed in this section have been obtained by studying a 4pt function {$\\langle \\phi_i\\phi_j\\phi_k\\phi_l\\rangle$ or $\\langle \\phi_i\\phi^{\\dagger j}\\phi_k\\phi^{\\dagger l}\\rangle$}, where {$\\phi_i$} is a single primary operator or a global symmetry multiplet of primary operators. As far as we are aware, a systematic study of numerical bootstrap constraints from mixed correlators in 4d CFTs has not yet been performed outside of the supersymmetric context~\\cite{Lemos:2015awa,Li:2017ddj}. It will be important to do so in the future, and to study the impact on such bounds of assuming only a limited set of relevant operators. \n \n\\subsection{Applications to the hierarchy problem}\n\\label{sec:4D-bsm}\nNext we will review some bootstrap results which shed light on the attempts to alleviate the hierarchy problem of the Standard Model (SM) of particle physics, which historically was one of the motivations for the development of the numerical bootstrap in 4d.\n\nFor the purposes of our discussion, the hierarchy problem can be briefly summarized as follows. 
The SM is certainly not the complete description of fundamental interactions, as it doesn't account for dark matter, baryogenesis, neutrino masses, and gravity. Instead it can be regarded as an effective description, valid at least up to the electroweak scale, where it has been extensively tested, including at the ongoing Large Hadron Collider (LHC) experiments. According to the effective field theory paradigm, the leading effects in this description are captured by the relevant and marginal operators, while all higher-dimensional operators correspond to subleading effects and are suppressed by powers of the electroweak scale ($\\Lambda_{\\text{IR}}\\sim $ {100 GeV}) over the scale of new physics ($\\Lambda_{\\text{UV}}$). The incredible success of the SM in precisely describing all phenomena observed so far is elegantly explained by simply pushing the scale of new physics to high values. In particular, electroweak precision tests and more importantly bounds from flavor physics (in particular from $K$-$\\bar K$ mixing) generically require $\\Lambda_{\\text{UV}}\\gtrsim 10^{5}\\text{ TeV}$. \n\nThis simple assumption creates however a tension (called the hierarchy problem) with the other energy scale in the theory, namely the scale associated with the only relevant operator present in the SM---the Higgs mass term $H^\\dagger H$. Indeed, whenever a relevant deformation exists, it is generically expected to be generated at the fundamental scale with order one strength, unless some symmetry prevents this from happening. The contrary is usually considered an unnatural tuning of the model, similarly to how, in condensed matter systems, one typically needs to adjust a control parameter to approach a critical point.\n\nThe quest for a solution to the hierarchy problem \nhas been and remains an important goal in theoretical high energy physics. 
Strategies for solving it can be broadly divided into two categories: the first makes use of an additional symmetry that prevents the Higgs mass term from appearing, and then slightly breaks it in order to generate a scale parametrically smaller than $\Lambda_{\text{UV}}$. The second strategy instead removes altogether the dangerous relevant deformation by increasing the scaling dimension of the Higgs mass term. An example of the first strategy is low-energy supersymmetry, while the second one is realized in technicolor, which replaces the Higgs field with a fermion bilinear operator of scaling dimension close to three. \n\nWhile technicolor solves the hierarchy problem by making the Higgs mass term irrelevant, it also raises the SM Yukawa operator dimensions from 4 to 6. To generate heavy quark masses of the needed size, these operators need to originate at an energy scale not much above $\Lambda_{\text{IR}}$. This leads to a tension with flavor observables, due to four-fermion operators expected to originate at about the same scale unless additional structure is added. To elegantly solve this problem, \textcite{Luty:2004ye} proposed the ``conformal technicolor\" scenario, in which the Higgs field has a scaling dimension close to the free value, while the Higgs mass term is close to marginality or irrelevant. \n\nMore precisely, to realize this scenario one would need a unitary CFT which contains a scalar operator $H$ replacing the SM Higgs field. To preserve the SM custodial symmetry, the CFT must have an $SO(4)$ global symmetry, with $H$ transforming in the fundamental. The scaling dimension requirements are as follows: $\Delta_H$ has to be close to 1, while $\Delta_{S}\gtrsim 4$, where $S$ is the first scalar $SO(4)$-singlet operator in the OPE $H^\dagger \times H$, playing the role of the Higgs mass term in this setup. 
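\n\nTo get a rough sense of how the requirement on $\Delta_S$ tracks the degree of tuning (a back-of-the-envelope estimate we add here for orientation, not part of the original analyses), note that a singlet deformation generated with order-one strength at $\Lambda_{\text{UV}}$ must instead be given the suppressed coefficient\n\begin{equation} \epsilon \sim \left({\Lambda_{\text{IR}} \over \Lambda_{\text{UV}}}\right)^{4-\Delta_S} \end{equation}\nin order not to destabilize the electroweak scale, and $\epsilon$ quantifies the tuning. Taking $\Lambda_{\text{UV}}\sim 10^{5}\text{ TeV}$ and $\Lambda_{\text{IR}}\sim 100\text{ GeV}$, so that $\Lambda_{\text{IR}}\/\Lambda_{\text{UV}}\sim 10^{-6}$, one finds that $\Delta_S=4$ requires no tuning, while $\Delta_S\approx 3.83$ and $\Delta_S\approx 3.67$ correspond to tunings of order $10\%$ and $1\%$, respectively.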
Given the scaling dimension requirements, this hypothetical CFT must necessarily be strongly coupled, while its coupling to the rest of the SM (gauge fields and fermions) can be treated as a small perturbation. \n\nThe 4d numerical bootstrap grew out of the attempts to show that the most optimistic requirements $\Delta_H\to 1$, $\Delta_S>4$ are impossible to realize. A proof of this theorem about unitary 4d CFTs is visible in the upper bound on $\Delta_S$ provided by the $N=4$ curve in Fig.~\ref{fig:4D-SON}, which approaches 2 for $\Delta_H\to 1$. \n\nIt is phenomenologically acceptable to have $\Delta_H$ slightly deviate from 1 without violating flavor constraints, and to allow $\Delta_S$ somewhat below 4 at the price of some moderate tuning~\cite{Luty:2004ye,Rattazzi:2008pe,Rychkov-talk}. Although this freedom helps to alleviate bootstrap constraints, some tension remains. Fig.~\ref{fig:4D-ConformalTechnicolor} from \textcite{Poland:2011ey} shows the regions of $\{\Delta_H,\Delta_S\}$ allowed under different degrees of tuning and different assumptions about the structure of the flavor sector. The conclusion is that compatibility with the bootstrap bound can be achieved only under optimistic flavor assumptions and with a moderate tuning.\n\nAn additional phenomenological constraint on conformal technicolor comes from the existence of the Higgs boson particle. While an SM-like Higgs boson may appear in conformal technicolor as a resonance of the strong dynamics at the electroweak scale associated with breaking of conformal invariance~\cite{Luty:2004ye}, it is expected to be somewhat heavier than the experimentally observed value of 125 GeV, and to have some deviations in its coupling to the top quark, which have not been seen so far. This further reduces the likelihood that the conformal technicolor scenario is realized in nature. 
Still, the above analysis, performed prior to the Higgs boson discovery, remains a beautiful example of how theoretical investigations can lead to first-principles constraints on strongly coupled scenarios for particle physics beyond the SM.\n\n\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\figwidth]{fig50-ConformalTechnicolor.pdf}\n \\caption{\\label{fig:4D-ConformalTechnicolor}\n (Color online) Viable regions in the $\\{\\Delta_H,\\Delta_S\\}$ plane for conformal technicolor models in the flavor-generic (red) and flavor-optimistic (cross-hatched green) cases, superimposed with the $SO(4)$ bound. Regions for no tuning, $10\\%$, and $1\\%$ tuning are shown in successively lighter shades of each color, with the largest region corresponding to $1\\%$ tuning in each case. Flavor-generic models are ruled out~\\cite{Poland:2011ey}.}\n \\end{figure}\n \n\n\\subsection{Theories with 4d $\\mathcal{N}=1$ supersymmetry}\n\nBefore discussing the numerical bootstrap results, let us spend a few words on the structure of representations of the superconformal algebra. For concreteness we will give this discussion for SCFTs with 4d $\\mathcal{N}=1$ supersymmetry.\\footnote{For similar results in other dimensions or with extended supersymmetry see \\textcite{Minwalla:1997ka} and the summary of recent progress in Sec.~\\ref{sec:other}. 
Many results described here can also\nbe treated in a uniform way across dimensions for algebras with the same number of supercharges, see \textcite{Bobev:2015jxa}.} Superconformal primary operators (annihilated by the special superconformal generators $\cal S, \bar{\cal S}$) are labelled by four numbers $(q,\bar{q},\ell,\bar{\ell})$, where $\ell,\bar{\ell}$ are the usual Lorentz quantum numbers and $q,\bar{q}$ are related to the scaling dimension $\Delta$ and $R$-charge of the superconformal primary operator:\n\begin{equation}\n\Delta = q+\bar{q} \qquad R = \frac23(q-\bar{q})\,.\n\end{equation}\nUnitarity bounds on these operators were worked out by~\textcite{Flato:1983te} and~\textcite{Dobrev:1985qv}, taking the form\n\begin{align}\n&q \geqslant \frac12 \ell+1\,,\, \bar{q}\geqslant \frac12 \bar{\ell}+1 &(\ell\bar{\ell}\neq0)\,,\nonumber\\\n&q \geqslant \frac12 \ell+1 &(\bar{q} = \bar{\ell} =0) \,,\\\n&\bar{q} \geqslant \frac12 \bar{\ell}+1 &(q = \ell =0) \,.\nonumber\n\end{align}\nThe second and third lines in the above expression identify chiral ($\Phi_{\alpha_1....\alpha_\ell}$) or antichiral ($\bar{\Phi}_{\dot{\alpha}_1....\dot{\alpha}_{\bar{\ell}}}$) operators, which are annihilated by the supercharge $\bar{\mathcal{Q}}$ or $\mathcal{Q}$, respectively.\n\nFinally, we would like to mention a few theoretical results for superconformal blocks present in the literature, focusing on those relevant for the 4d $\mathcal{N}=1$ bootstrap. Superconformal blocks for correlation functions of scalar superconformal primaries can be expressed in terms of finite linear combinations of ordinary scalar conformal blocks with suitable dimensions and spins; however, computing these coefficients can be a challenging task. The work of \textcite{Poland:2010wg,Vichi:2011ux} obtained the superconformal blocks for 4pt functions of a scalar chiral supermultiplet $\Phi$. 
Shortly after, \\textcite{Fortin:2011nq} computed superconformal blocks for 4pt functions of the multiplet associated to global symmetry conserved currents, whose lowest component is again a scalar field.\\footnote{Some incorrect coefficients and missing blocks were later pointed out in \\textcite{Berkooz:2014yda,Khandker:2014mpa}.} A similar analysis applicable to 4pt functions of $R$-current multiplets (containing the stress tensor) was also recently carried out in \\textcite{Manenti:2018xns}. The general approach in these works was to classify the possible 3-point functions in superspace using the formalism of \\textcite{Osborn:1998qu} and then expand in the Grassmann variables $\\theta_i$ to compute relations between OPE coefficients of conformal primaries. In this approach one must also carefully compute the norm of each conformal primary in the multiplet.\\footnote{Such norms were worked out for general multiplets in \\textcite{Li:2014gpa}.} \n\nThe work of \\textcite{Fitzpatrick:2014oza} developed alternate techniques based on either solving the super-Casimir equation or writing the blocks as superconformal integrals using a super-embedding formalism. The latter approach was employed in \\textcite{Khandker:2014mpa} to find the blocks appearing in the more general correlation function $\\langle\\Phi_1\\bar{\\Phi}_2 \\Psi_1\\bar{\\Psi}_2\\rangle$, where $\\Phi_i$ and $\\Psi_i$ are scalar superconformal primary operators with arbitrary dimension and $R$-charge,\\footnote{The first and second pair have the same conformal weights $q,\\bar q$, hence the notation.} with the restriction that the exchanged operator is neutral under $R$-symmetry. This analysis was later extended in \\textcite{Li:2016chh} to the more general case of four distinct scalar superconformal primary operators with arbitrary scaling dimensions and $R$-charges, with no restriction on the exchanged operators besides those imposed by superconformal symmetry. 
However, this analysis was missing a particular class of superconformal blocks, associated to exchanged primaries in representations of the Lorentz group with $\\ell\\neq\\bar\\ell$. In this case the corresponding superconformal primary does not enter the OPE of the external operators, but some of its superconformal descendants do. This issue was fixed in \\textcite{Li:2017ddj}.\n\n\n\\subsubsection{Bounds without global symmetries}\n\nNow we will summarize numerical results for correlation functions involving scalar chiral superfields. The first numerical studies, starting with \\textcite{Poland:2010wg} and improved in \\textcite{Vichi:2011ux} and \\textcite{Poland:2011ey}, focused on 4pt functions containing a single scalar chiral supermultiplet $\\langle\\Phi\\bar{\\Phi}\\Phi\\bar{\\Phi}\\rangle$. Crossing symmetry for this correlation function involves two OPE channels. Because of the chirality conditions, the $\\Phi\\times\\bar{\\Phi}$ OPE only receives contributions from traceless symmetric tensor superconformal primaries together with their $\\mathcal Q\\bar{\\mathcal{Q}}$ and $\\mathcal Q^2 \\bar{\\mathcal{Q}}^2$ superdescendants, giving rise to the superconformal blocks described above. The $\\Phi\\times\\Phi$ OPE on the other hand is more subtle and can receive three different contributions: 1) the chiral superfield $\\Phi^2$; 2) $\\bar{\\mathcal{Q}}$ descendants of semi-short multiplets; 3) $\\bar{\\mathcal{Q}}^2$ descendants of generic (long) multiplets. 
As a result, this channel allows conformal blocks of even spin $\\ell = \\bar{\\ell}$ at either the protected dimensions $\\Delta=2\\Delta_\\Phi+\\ell$ or at unprotected dimensions satisfying the unitarity bound $\\Delta\\geqslant|2\\Delta_\\Phi-3|+\\ell+3$.\n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig53a-dim_phibphi.pdf} (a)\n\\includegraphics[width=\\figwidth]{fig53b-phi2_ope_coef.pdf} (b)\n \\caption{\n \\label{fig:4chiral_longscalar} (Color online) (a) Upper bound on the dimension of the operator $\\mathcal R$ as a function of $\\Delta_\\Phi$~\\cite{Poland:2011ey,Li:2017ddj}. The shaded area is excluded. The dashed line at $\\Delta_{\\mathcal R}=2 \\Delta_\\Phi$ corresponds to generalized free theories. (b) Lower and upper bounds on the OPE coefficient of the chiral operator $\\Phi^2$ entering the $\\Phi\\times\\Phi$ OPE. The vertical dotted line is at $\\Delta_\\Phi = 1.407$ and the horizontal dashed line is at the free theory value $\\lambda_{\\Phi\\Phi\\Phi^2}\\equiv\\lambda_\\phi^2=\\sqrt{2}$~\\cite{Poland:2015mta}.\n}\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig54a-cc_assum.pdf} (a)\n\\includegraphics[width=\\figwidth]{fig54b-cT_upper_lower.pdf} (b)\n \\caption{\n \\label{fig:4chiral_ct} \n (Color online) (a) Lower bound on the central charge as a function of $\\Delta_\\Phi$ assuming that $\\Delta_{\\mathcal R}$ is consistent with the unitarity bound (thin line) or it saturates the upper bound in Fig.~\\ref{fig:4chiral_longscalar} (thick line). The shaded area is excluded~\\cite{Li:2017ddj}. (b) Lower and upper bounds on the central charge as a function of $\\Delta_\\Phi$, with the assumption that there is no $\\Phi^2$ operator. The upper bounds correspond to different gaps until the second spin-1 superconformal primary $\\Delta_{\\ell=1}\\geqslant 3.1, 3.3, 3.5, 3.7, 3.9, 4, 4.1$ (from left to right). 
The shaded area is excluded~\cite{Poland:2015mta}.\n}\n \end{figure}\n\nIn Fig.~\ref{fig:4chiral_longscalar} we show an upper bound on the dimension of the first real scalar supermultiplet $\mathcal R$ entering the OPE $\Phi\times\bar{\Phi}$. A first important consequence of this result is that in any perturbative SCFT, the combination $2\Delta_\Phi -\Delta_{\mathcal R}$ must be positive (or very suppressed) to satisfy the bound. Secondly, one can observe a minor kink-like feature on the boundary of the allowed region. The same feature appears in the lower bound on the central charge, Fig.~\ref{fig:4chiral_ct}, and it also coincides with the minimal value of $\Delta_\Phi$ consistent with the absence of the chiral operator $\Phi^2$, as shown in panel (b) of Fig.~\ref{fig:4chiral_longscalar}. \n \n In light of this, it is tempting to conjecture the existence of a ``minimal SCFT\" that realizes the chiral ring relation $\Phi^2=0$ and saturates these bounds. This conjecture has been seriously addressed by~\textcite{Poland:2015mta} and \textcite{Li:2017ddj}, who studied the properties of this hypothetical theory. Notice that the minimal value of $\Delta_\Phi$ consistent with the chiral ring assumption, let us call it $\Delta_\Phi^\text{mSCFT}$, represents an extremal solution, and it is therefore uniquely determined. In addition, to coincide with the kinks it should agree with the solution obtained from maximizing the dimension of the first neutral unprotected operator and the solution obtained from minimizing the central charge, at the same value of $\Delta_\Phi$. \n \nFig.~\ref{fig:4chiral_ct} (a) shows that the two extremization procedures generically lead to two different solutions, except at $\Delta_\Phi^\text{mSCFT}$. This confirms our expectation of a unique solution coinciding with the kinks. 
Furthermore, by inputting a gap between the stress-tensor multiplet (whose lowest component is the spin-1 $U(1)_R$ current) and the next spin-1 supermultiplet, one is able to extract an upper bound on the central charge. While this bound is gap-dependent at generic $\Delta_\Phi$, it almost coincides with the lower bound at $\Delta_\Phi^\text{mSCFT}$, as shown in Fig.~\ref{fig:4chiral_ct} (b). By extrapolating these results to large $\Lambda$,~\textcite{Poland:2015mta} obtained the prediction \mbox{$\Delta_\Phi^\text{mSCFT} \approx 1.428,\, c \approx 0.111$}, perhaps consistent with \mbox{$\Delta_\Phi^\text{mSCFT} = 10\/7,\, c = 1\/9$}.\n\nRecently a few theories have been proposed as mSCFT candidates \cite{Xie:2016hny,Buican:2016hnq}, which implement the chiral ring condition $\Phi^2=0$; however, they don't quite match the bootstrap predictions presented here. In particular, the central charge is much larger than $1\/9$.\n\nIt is also worth noticing that, in any solution saturating the dimension bound of Fig.~\ref{fig:4chiral_longscalar}, the chiral operator $\Phi$ is not charged under any global symmetry. If it were, in fact, the solution would contain a spin-$1$ conserved current, which in $\mathcal N=1$ SUSY happens to be the superdescendant of a dimension-2 real scalar which would appear in the $\Phi\times\bar{\Phi}$ OPE. \n\nTo conclude this section, let us mention that the work of \textcite{Li:2017ddj} also performed a bootstrap study of a system of mixed correlators involving a scalar chiral superfield $\Phi$ and a long real scalar superfield $\mathcal R$, identified with the first scalar operator appearing in the $\Phi\times\bar{\Phi}$ OPE. Unlike in 3d, this analysis didn't seem to allow one to easily isolate a closed region. 
A preliminary inspection of the extremal solution doesn't reveal any obvious low-lying operators decoupling from the spectrum, but rather it involves a rearrangement of higher-dimensional operators~\\cite{Stergiou2017}. It will be interesting to study this rearrangement further and understand how to robustly isolate the conjectured mSCFT in future work. \n \n\\subsubsection{Bounds with global symmetries}\n\nAs mentioned in the previous section, conserved currents of global symmetries $j_\\mu^a $ sit in real supermultiplets $\\mathcal J^a$ whose lowest component is a dimension-2 scalar $J^a$. In addition, the multiplet satisfies the conservation condition $D^2 \\mathcal J^a =\\bar{D}^2 \\mathcal J^a =0$. Bootstrapping correlation functions of the scalars $J^a$ gives one access to the space of local SCFTs with a given global symmetry. Hence, due to supersymmetry, one can apply the same machinery encountered so far, with no need to deal with spinning conformal blocks. \n\nBounds on OPE coefficients of $SU(N)$ currents were explored in \\textcite{Berkooz:2014yda}, and dimension bounds (and coefficient bounds assuming gaps) from single 4pt functions $\\<JJJJ\\>$ were explored in \\textcite{Li:2017ddj}. The latter work also studied the case of mixed correlators involving $J$ and a chiral field $\\Phi$ charged under the global symmetry. Note that this charge necessarily differs from the $R$-symmetry, which instead is part of the conformal algebra: the conserved current associated with the latter is the lowest component of the Ferrara-Zumino multiplet, which contains the stress tensor and supercurrents.
Interestingly, both bounds on $c_J$ and $c_V$ show plateaus for small values of the gap in the scalar sector. These are perhaps consistent with the existence of SCFT solutions shaping the bounds. On the other hand, the values extracted from Fig.~\\ref{fig:JJJJ_ct} are much smaller than the limits one obtains by inspecting the correlation functions of chiral superfields. For instance, the relation between $c_V$ and Fig.~\\ref{fig:4chiral_ct} is $c_V^2=1\/(90c)$, making the bound on the central charge very weak.\\footnote{The OPE coefficient bounds obtained in \\textcite{Berkooz:2014yda} for $SU(N)$ current 4pt functions were also relatively weak.}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig55a-cJ_from_JJJJ.pdf} (a)\n\\includegraphics[width=\\figwidth]{fig55b-cT_from_JJJJ.pdf} (b)\n \\caption{\n \\label{fig:JJJJ_ct} \n (Color online) Upper bounds on the OPE coefficients $c_{\\mathcal{O}} \\equiv 2^{-\\ell\/2} \\lambda_{JJ\\mathcal{O}}$ appearing in the $J\\times J$ OPE arising from (a) $J$ itself or (b) the stress-tensor supermultiplet $V$, as a function of the dimension of the first unprotected scalar $\\mathcal O$ in the $J\\times J$ OPE. The region to the right of the dotted vertical line at $\\Delta_\\mathcal{O} = 5.246$ is not allowed~\\cite{Li:2017ddj}.\n}\n \\end{figure}\n\nAn alternative method to study SCFTs with global symmetries is to consider external scalar operators in nontrivial representations of the symmetry. An important target is to make contact with supersymmetric QCD theories, e.g.~supersymmetric gauge theories with gauge group $SU(N_c)$ and $N_f$ flavors of quarks \\mbox{$Q_i, \\overline{Q}^{\\bar{j}}$}, with $N_f$ in the conformal window \\mbox{$3N_c\/2 \\leqslant N_f \\leqslant 3N_c$}~\\cite{Seiberg:1994pq}.
The simplest gauge-invariant operators are the mesons $M_i^{\\bar j}=Q_i \\overline{Q}^{\\bar{j}}$, which transform as bi-fundamentals of \\mbox{$SU(N_f)_L\\times SU(N_f)_R$} and have dimension \\mbox{$\\Delta_M = 3(1-N_c\/N_f)$}. Thanks to supersymmetry, both the central charge and the current central charge can be computed exactly due to their relation to anomaly coefficients.\n\nA partial bootstrap analysis applicable to meson 4pt functions was performed in \\textcite{Poland:2011ey}, which considered chiral scalar multiplets transforming in the fundamental representation of $SU(N)$ and obtained bounds on the OPE coefficients associated to conserved currents transforming in both the singlet and the adjoint representations of $SU(N)$. As can be seen in Fig.~\\ref{fig:SQCD}, these bounds are still somewhat far from the exact results of supersymmetric QCD (SQCD) theories, most likely because this study didn't utilize the full symmetry. It will be very interesting in future work to extend these analyses of chiral 4pt functions to use the whole SQCD global symmetry, together with mixed correlators containing the $SU(N_f)_{L\/R}$ current multiplets and\/or the stress-tensor multiplet.\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig56-SQCDcomparison.pdf}\n \\caption{\n \\label{fig:SQCD} \n (Color online) Lower bounds on the effective 2pt function coefficient $\\kappa_{\\text{eff}} = 1\/\\lambda_{\\Phi\\Phi J}^2$ of $SU(N)$ singlet currents appearing in $\\Phi \\times \\bar{\\Phi}$, where $\\Phi$ is a chiral scalar of dimension $d$ in the fundamental of $SU(N)$, for $N = 2, \\ldots, 14$. The bounds are normalized to the value $\\kappa_{\\text{chiral}}$ corresponding to a free chiral superfield.
Each dot connected to a bound corresponds to the exact value in an SQCD theory with the same symmetry~\\cite{Poland:2011ey}.\n}\n \\end{figure}\n\n\\subsection{Theories with 3d $\\mathcal{N}=2$ supersymmetry}\n\\label{sec:Fermions-WZ} \n\nAnother interesting set of targets for the conformal bootstrap is the zoo of 3d CFTs with $\\mathcal{N}=1$ or $\\mathcal{N}=2$ supersymmetry. We made initial contact with the former in Sec.~\\ref{sec:Fermions-GN}, where no constraints from supersymmetry were used other than relations between scaling dimensions {(see however footnote~\\ref{footnote:3dN1})}. The superconformal representation theory of the latter has a similar structure to that of 4d $\\mathcal{N}=1$ SCFTs; for details see \\textcite{Minwalla:1997ka} and \\textcite{Bobev:2015jxa}. Perhaps the simplest such theory is the $\\mathcal{N}=2$ supersymmetric Wess-Zumino model described in Sec.~\\ref{sec:FermionModels}. This CFT can be thought of as the IR fixed point of a theory of a single chiral superfield $\\Phi = \\phi + \\psi \\theta + F \\theta^2$ and superpotential $W = \\lambda \\Phi^3$.
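The exact dimensions quoted below follow from a short, standard superconformal argument; as a sketch (in the normalization where the superpotential carries R-charge 2 and chiral primaries obey the BPS condition $\Delta = r$):

```latex
% Invariance of the action term \int d^2\theta\, W forces r_W = 2, and
% marginality of the superpotential at a 3d fixed point forces
% \Delta_W = d - 1 = 2. With W = \lambda \Phi^3, the BPS condition
% \Delta = r for 3d N=2 chiral primaries then fixes
\beq
r_\Phi = \tfrac{2}{3}\,, \qquad
\Delta_\phi = r_\Phi = \tfrac{2}{3}\,, \qquad
\Delta_\psi = \Delta_\phi + \tfrac{1}{2} = \tfrac{7}{6}\,.
\end{equation}
```

These protected values are what make the unprotected operator $\Phi\bar{\Phi}$ the natural target for the numerical bounds discussed next.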
The fixed point has a $U(1)_R$ symmetry under which $\\Phi$ has charge 2\/3, implying exact dimensions for the complex scalar $\\phi$ and the Dirac fermion $\\psi$: $\\Delta_{\\phi} = q_{\\phi} = 2\/3$, $\\Delta_{\\psi} = \\Delta_{\\phi} + 1\/2 = 7\/6$.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig57-PRL_epsbound_3d_nmax9}\n \\caption{\\label{fig:Fermions-SUSY3kinks} (Color online) Bound on the dimension of the first unprotected scalar $\\Phi \\bar{\\Phi}$ in the $\\Phi\\times\\bar{\\Phi}$ OPE in 3d SCFTs with $\\mathcal{N}=2$ supersymmetry \\cite{Bobev:2015vsa}.\n }\n \\end{figure}\n\nApplying the numerical bootstrap to the 4pt function $\\<\\Phi \\bar{\\Phi} \\Phi \\bar{\\Phi}\\>$ and incorporating the unitarity bounds and superconformal blocks of $\\mathcal{N}=2$ superconformal symmetry,~\\textcite{Bobev:2015vsa,Bobev:2015jxa} and \\textcite{Li:2017kck} studied general bounds on the dimension of the leading unprotected scalar operator $\\Phi \\bar{\\Phi}$, with the basic result shown in Fig.~\\ref{fig:Fermions-SUSY3kinks}. Curiously, the resulting bound has three distinct features, the first of which occurs at a scaling dimension $\\Delta_{\\Phi} \\simeq 2\/3$. This gives a sharp upper bound $\\Delta_{\\Phi\\bar{\\Phi}} < 1.91$ for the $\\mathcal{N}=2$ supersymmetric Wess-Zumino model and a plausible conjecture that the model saturates the optimal version of this bound. 
Further analysis of the extremal spectrum of this kink can be found in \\textcite{Bobev:2015vsa,Bobev:2015jxa}, while \\textcite{Li:2017kck} found that an isolated island around $\\{\\Delta_{\\Phi}, \\Delta_{\\Phi \\bar{\\Phi}}\\} = \\{0.6678(13), 1.903(10)\\}$ could be obtained by assuming a modest gap in the spectrum of spin-1 superconformal primaries $\\Delta_{J'} \\geqslant 3.5$.\n\nThe middle kink occurs near $\\Delta_{\\Phi} = 3\/4$, and coincides with a kinematic threshold beyond which superconformal descendants of anti-chiral operators $Q^2 \\bar{\\Psi}$ can no longer appear in the $\\Phi \\times \\Phi$ OPE. It is not yet clear if any CFT sits at this kink. The right-most kink, occurring near $\\Delta_{\\Phi} \\sim 0.86$, also still lacks a clear interpretation, but seems to interpolate to the kink in the 4d $\\mathcal{N}=1$ bounds discussed above. Notably, the extremal spectrum of this kink seems to satisfy the chiral ring relation \\mbox{$\\Phi^2 = 0$}~\\cite{Bobev:2015jxa} and an island around the point can also be isolated using a set of gap assumptions~\\cite{Li:2017kck}, making it a plausible candidate for a new CFT. Finally, let us mention that this analysis was also extended to 3d $\\mathcal{N}=2$ supersymmetric CFTs with $O(N)$ global symmetry by~\\textcite{Chester:2015qca,Chester:2015lej}, who found similar features at each value of $N$.
A related 3d $\\mathcal{N}=2$ theory with multiple interacting chiral superfields and a conformal manifold was also recently studied using bootstrap methods in~\\textcite{Baggio:2017mas}.\n\n\n\n\\section{Introduction}\n\\input{intro\/intro.tex}\n\n\\section{Conformal field theory techniques in $d$ dimensions}\n\\input{theory\/theory.tex}\n\n\\subsection{Conformal transformations}\n\\input{theory\/conformaltransformations.tex}\n\n\\subsection{Operators: primaries and descendants}\n\\input{theory\/primaries.tex}\n\n\\subsection{Correlation functions}\n\\input{theory\/correlation.tex}\n\n\\subsection{Operator product expansion}\n\\input{theory\/OPE.tex}\n\n\\subsection{Constraints from unitarity}\n\\input{theory\/unitarity.tex}\n\n\\subsection{Conformal blocks}\n\\input{theory\/conformalblocks.tex}\n\n\\subsection{Global symmetry}\n\\input{theory\/global.tex}\n\n\\subsection{Conserved local currents}\n\\input{theory\/ward.tex}\n\n\\subsection{Crossing relations}\n\\input{theory\/crossing.tex}\n\n\\section{Numerical methods} \n\\input{numerical\/numerical.tex}\n\n\\section{Applications in $d=3$}\n\\input{3D\/3D.tex}\n\n\\subsection{Bounds on critical vs multicritical behavior}\n\\input{multicrit\/multicrit.tex}\n\n\\subsection{$\\bZ_2$ global symmetry} \n\\input{Z2\/Z2.tex}\n\n\\subsection{$O(N)$ global symmetry}\n\\input{ON\/ON.tex}\n\n\\subsubsection{$O(2)$ global symmetry}\n\\input{O2\/O2.tex}\n\n\\subsubsection{$O(3)$ global symmetry}\n\\input{O3\/O3.tex}\n\n\\subsection{CFTs with fermion operators}\n\\input{Fermions\/Fermions.tex}\n\n\\subsection{QED${}_3$}\n\\input{QED3\/QED3.tex}\n\n\\subsection{Current and stress-tensor bootstrap}\n\\input{JandT\/JandT.tex}\n\n\\subsection{Future targets}\n\\input{targets\/targets.tex}\n\n\\section{Applications in $d=4$}\n\\input{4D\/4D.tex}\n\n\\subsection{Constraints on the QCD${}_4$ conformal window}\n\\input{4Dwindow\/4Dwindow.tex}\n\n\\section{Applications to superconformal 
theories}\n\\input{4Dsusy\/4Dsusy.tex}\n\n\\section{Applications to nonunitary models}\n\\input{nonunitary\/nonunitary.tex}\n\n\\section{Other applications}\n\\input{brief\/brief.tex}\n\n\\begin{acknowledgments}\n\nWe would like to thank Connor Behan, John Chalker, Shai Chester, Subham Dutta Chowdhury, Luigi Del Debbio, Mykola Dedushenko, Yin-Chen He, Kuo-Wei Huang, Denis Karateev, Zohar Komargodski, Petr Kravchuk, Adam Nahum, Yu Nakayama, Miguel Paulos, Silviu Pufu, Leonardo Rastelli, Subir Sachdev, David Simmons-Duffin, Andreas Stergiou, Tin Sulejmanpasic, Senthil Todadri, Emilio Trevisani, Ettore Vicari, and William Witczak-Krempa for useful discussions and comments. \n\nDP is supported by NSF grant PHY-1350180 and Simons Foundation grant 488651 (Simons Collaboration on the Nonperturbative Bootstrap). \nSR is supported by the Simons Foundation grant 488655 (Simons Collaboration on the Nonperturbative Bootstrap), and by Mitsubishi Heavy Industries as an ENS-MHI Chair holder.\nAV is supported by the Swiss National Science Foundation under grant no. PP00P2-163670 and by the ERC-STG under grant no. 758903.\n\n\\end{acknowledgments}\n\n\n\n\n\\section{Outlook}\n\\label{sec:outlook}\n\nThe conformal bootstrap is still in its infancy and there remains much low-hanging fruit to pick along with many important open questions. For instance, can we use the bootstrap to fully classify the space of critical CFTs with a given symmetry, placing universality on a rigorous footing? Can the bootstrap solve the conformal windows of QED${}_3$ and QCD${}_4$? Can it be used as a discovery tool to find new, perhaps non-Lagrangian, CFTs? Is there an analytical understanding of the kinks in numerical bounds or why certain CFTs such as the 3d Ising model live in them? Which CFTs can be found using extremal spectrum or truncation methods? 
And is there a fruitful way to incorporate developments in the analytical bootstrap with rigorous numerical methods?\n\nFor newcomers to the numerical bootstrap who want to quickly get started, after learning CFT basics we recommend becoming familiar with the available software tools,\\footnote{Many of the currently available tools are collected at the webpage:~\\url{http:\/\/bootstrapcollaboration.com\/activities\/}.} particularly \\texttt{SDPB} {which is under active development,} along with one of the efficient methods to compute conformal blocks in the dimension of interest as described in Sec.~\\ref{sec:cb}. Then one can start reproducing numerical bounds and thinking about how they can be generalized to say something new about situations of physical interest. For this purpose it is also helpful to get used to restating the physical properties of critical systems using symmetries and the spectrum of scaling dimensions, so questions can be sharply rephrased in the language of the bootstrap. Good luck!\n\n\\subsubsection{Models}\n\n\\label{sec:FermionModels}\nThe preceding sections discussed constraints from crossing relations for 4pt functions of scalar operators. Many 3d or (2+1)d CFTs of theoretical and experimental interest also contain fermionic operators, and here we will discuss what the bootstrap has so far been able to say about them.\n\nPerhaps the simplest example is the family of CFTs described by the Gross-Neveu model at criticality~\\cite{Gross:1974jv}.\\footnote{This model and its variations are frequently invoked to describe quantum phase transitions in condensed matter systems with emergent Lorentz symmetry in (2+1)d. 
Some examples of its applications include models for phase transitions in graphene~\\cite{herbut2006interactions, herbut2009relativistic}, the Hubbard model on the honeycomb and $\\pi$-flux lattice~\\cite{toldin2015fermionic}, models of time-reversal symmetry breaking in d-wave superconductors~\\cite{vojta2000quantum, vojta2003quantum}, and models of 3-dimensional gapless semiconductors~\\cite{Moon:2012rx,Herbut:2014lfa}.} While the critical theory is often described as the UV fixed point in a theory of fermions with 4-fermi interactions $\\mathcal{L} \\sim (\\bar{\\psi} \\psi)^2$, a better nonperturbative definition is as an IR fixed point in a theory with a scalar field coupled to fermions via Yukawa interactions. The latter Gross-Neveu-Yukawa (GNY) model contains a scalar $\\phi$ and $N$ Majorana fermions $\\psi_i$:\n\\beq\n {\\cal L}_{\\rm GNY} =\\frac 12 \\sum_{i = 1}^N \\bar \\psi_i (\\slashed{\\partial} + g \\phi ) \\psi_i + \\frac 12 \\partial^\\mu \\phi\\partial_\\mu \\phi + \\frac 12 m^2 \\phi^2 + \\lambda \\phi^4 \\,.\n \\label{GNYLag}\n \\end{equation}\n This model has an $O(N)$ symmetry rotating the fermions. A fixed point can be established perturbatively at large $N$ in a $1\/N$ expansion; see e.g.~\\textcite{Gracey:1992cp,Gracey:1993kc} and \\textcite{Derkachov:1993uw}. This model has also been studied extensively from the perspective of the $\\epsilon$-expansion, with recent results by~\\textcite{fei2016yukawa,Mihaila:2017ble,Zerf:2017zqi}.\n \nAn interesting special case is $N=1$ (a single Majorana fermion coupled to a real scalar). It is expected \\cite{fei2016yukawa} that this model may contain a fixed point with $\\mathcal{N}=1$ supersymmetry. This supersymmetric fixed point has been proposed to describe a critical point on the boundary of topological superconductors~\\cite{Grover:2013rc}. \n\nThere are variations of this model containing multiple scalar order parameters.
One notable example is the $\\mathcal{N}=2$ supersymmetric critical Wess-Zumino model, containing a complex scalar related to a 3d Dirac fermion by supersymmetry.\\footnote{We will describe some of the implications of supersymmetry and a bootstrap analysis connecting to this model later in Sec.~\\ref{sec:4Dsusy}.} This theory has been proposed to describe a critical point on the surface of topological insulators~\\cite{Ponte:2012ru,Grover:2013rc}, and a superconducting critical point in (2+1)d Dirac semimetals with an attractive Hubbard interaction \\cite{Li:2017dkj}. Another important example is the Gross-Neveu-Heisenberg model, described in Sec.~\\ref{sec:O3}.\n\n\\subsubsection{General results}\n\\label{sec:Fermions-general}\n\nWe first discuss general results following from the existence of fermionic operators. Specialized bounds where the critical GNY and other models are featured more prominently will be discussed below.\n\nA bootstrap analysis of 4pt functions of identical Majorana fermions $\\<\\psi\\psi\\psi\\psi\\>$ was performed in \\textcite{Iliesiu:2015qra} and extended to 4pt functions $\\< \\psi_i \\psi_j \\psi_k \\psi_l \\>$ containing fermions that are vectors under an $O(N)$ symmetry in \\textcite{Iliesiu:2017nrv}. These studies both assumed a general (2+1)d CFT with parity symmetry. Tensor structures and conformal blocks for 4pt functions were derived using a spinorial embedding-space formalism also developed in \\textcite{Iliesiu:2015qra}, similar in logic to the vectorial embedding space reviewed in Appendix~\\ref{sec:embedding}. \n\nIn Fig.~\\ref{fig:Fermions-oddScalar} we show general upper bounds on the leading parity-odd and parity-even scalars in the $\\psi \\times \\psi$ OPE, called $\\sigma$ and $\\epsilon$ respectively. The bound on $\\sigma$ is nearly saturated by the MFT line $\\Delta_{\\sigma} = 2\\Delta_{\\psi}$, at least at small values of $\\Delta_{\\sigma}$. 
As $\\Delta_{\\psi} \\rightarrow 1$ the bound approaches the free theory value $\\Delta_{\\sigma} = 2$, where we can identify $\\sigma = \\bar{\\psi} \\psi$. On the other hand, there is an abrupt discontinuity in the bound around $\\Delta_{\\psi} \\sim 1.27$ occurring when $\\Delta_{\\sigma}$ approaches 3. This jump also coincides with a kink in the bound on $\\Delta_{\\epsilon}$. The interpretation of these features is currently an open question -- it is tempting to speculate that a CFT may live at the top of the jump in the bound on $\\Delta_{\\sigma}$ and in the kink in the bound on $\\Delta_{\\epsilon}$ but no concrete candidate CFTs have yet been identified. If it exists, this CFT would appear to have an unusual property of not possessing any relevant scalar deformations.\\footnote{Hypothetical theories with this property were recently named ``dead-end\" CFTs by~\\textcite{Nakayama:2015bwa}. They should be distinguished from ``self-organized\" CFTs which do not have any relevant \\emph{singlet} scalars as defined in Sec.~\\ref{sec:multicrit}.}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig28a-oddScalar}(a)\n\\includegraphics[width=\\figwidth]{fig28b-evenScalar}(b)\n \\caption{\\label{fig:Fermions-oddScalar}\n(Color online) Upper bounds on the dimension of (a) the first parity-odd scalar $\\sigma$ and (b) the first parity-even scalar $\\epsilon$ in the OPE $\\psi\\times \\psi$, as a function of $\\Delta_\\psi$ \\cite{Iliesiu:2015qra}. Here $\\psi$ is a Majorana fermion primary operator in a 3d parity-invariant unitary CFT.} \n \\end{figure}\n\nIn Fig.~\\ref{fig:Fermions-centralCharge} we also show the general lower bounds on the central charge $C_T$ (normalized to its value in the theory of a free Majorana fermion), obtained by bounding the coefficient of the stress-tensor conformal block. These lower bounds approach the free values as $\\Delta_{\\psi} \\rightarrow 1$ and disappear completely for $\\Delta_{\\psi} \\gtrsim 1.47$. 
In the case of $O(N)$ symmetry, they can be seen to grow linearly with $N$ and are compatible with values computed in the $1\/N$ expansion of the GNY model. Generalizations to the current central charge $C_J$ for fermions charged under $O(N)$ symmetry were also computed in \\textcite{Iliesiu:2017nrv}.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig29a-centralCharge}(a)\n\\includegraphics[width=\\figwidth]{fig29b-cTBound}(b)\n \\caption{\\label{fig:Fermions-centralCharge}\n (Color online) Lower bounds on $C_T$ as a function of $\\Delta_\\psi$, where $\\psi$ is (a) a Majorana fermion or (b) a multiplet of Majorana fermions in the fundamental representation of an $O(N)$ global symmetry group \\cite{Iliesiu:2015qra,Iliesiu:2017nrv}.}\n \\end{figure}\n\n\n\\subsubsection{Gross-Neveu-Yukawa models}\n\\label{sec:Fermions-GN}\n\nIn the critical GNY model at large $N$, $\\psi_i$ has dimension $1 + 4\/(3\\pi^2 N) + \\ldots$, while the leading parity-odd scalars in the $\\psi_i \\times \\psi_j$ OPE are the $O(N)$ singlet $\\phi$ with dimension $1-32\/(3\\pi^2 N)+\\ldots$ and the $O(N)$ symmetric tensor $\\bar{\\psi}_i \\psi_j$ with dimension $2 + 32\/(3\\pi^2 N) + \\ldots$~\\cite[{Table 1}]{Iliesiu:2017nrv}. The accumulation point $(\\Delta_{\\psi}, \\Delta_{\\sigma}) \\rightarrow (1,1)$ sits well in the interior of Fig.~\\ref{fig:Fermions-oddScalar}(a), but by imposing a gap until the second parity-odd scalar, $\\Delta_{\\sigma'} \\geqslant 2 + \\delta$ for different positive values of $\\delta$, we have the possibility of obtaining an allowed region that rules out critical GNY models with $N$ sufficiently large.\n\nThis is realized in Fig.~\\ref{fig:Fermions-grossNeveuSmallSigPrime}, where the effect of gaps ranging from $\\Delta_{\\sigma'} \\geqslant 2.01$ to $\\Delta_{\\sigma'} \\geqslant 2.9$ is shown.
At very small values of $\\delta$, the lower bounds of the allowed regions possess a kink whose location matches very well to the large-$N$ GNY model prediction. At larger values of $\\delta$, the precise map between $\\delta$ and $N$ is not known, but it is plausible that the kinks continue to match to the GNY model even at small values of $N$. However, starting around $\\Delta_{\\sigma'} \\geqslant 2.3$, a second lower feature also appears in these curves, where they all intersect and have an additional kink at a point near $(1.08, 0.565)$. \n\nThis structure of an ``upper\" and ``lower\" kink can be seen clearly in Fig.~\\ref{fig:Fermions-oneRelevantOddScalar}, specialized to the case \\mbox{$\\Delta_{\\sigma'} \\geqslant 3$}. In fact, in this case the line \\mbox{$\\Delta_{\\psi} = \\Delta_{\\sigma} + 1\/2$} expected for theories with supersymmetry comes very close to (but just misses) the upper kink. Thus, it is tempting to conjecture that the $\\mathcal{N}=1$ supersymmetric Gross-Neveu-Yukawa model, see Sec.~\\ref{sec:FermionModels}, may sit in this feature and has $\\Delta_{\\sigma'}$ slightly smaller than 3. This picture seems consistent with the estimate \\mbox{$\\Delta_{\\sigma} \\approx 0.59$} from a Pad\\'e-extrapolation of the $\\epsilon$-expansion \\cite{fei2016yukawa}, as well as with the rigorous lower bound \\mbox{$\\Delta_\\sigma\\geqslant 0.565$} \\cite{Bashkirov:2013vya}, which follows using another supersymmetric relation $\\Delta_\\epsilon=\\Delta_\\sigma+1$ together with the bootstrap bound in Fig.~\\ref{fig:Z2-epsbound}, applicable with parity playing the role of a $\\mathbb{Z}_2$ symmetry.\\footnote{\\label{footnote:3dN1}Further progress on this CFT was made very recently in~\\textcite{Rong:2018okz} and~\\textcite{Atanasov:2018kqw}, where it was understood how to obtain an island in the scalar mixed-correlator bootstrap around $\\Delta_{\\sigma} = 0.584444(30)$.
In these studies, in addition to relations between scaling dimensions, it is important to incorporate nontrivial 3d $\\mathcal{N}=1$ superconformal blocks.} An additional speculation is that the lower feature may coincide with a non-supersymmetric fixed point, called GNY${}^*$ in \\textcite{Iliesiu:2017nrv}, which is seen in the $\\epsilon$-expansion as a nonunitary fixed point at large $N$, but whose fate at small $N$ and $\\epsilon \\rightarrow 1$ is not known. \n\nAdditional evidence for this picture comes from the generalization of the bounds to $O(N)$ symmetry~\\cite{Iliesiu:2017nrv}, where one can place independent bounds on different $O(N)$ representations. In Fig.~\\ref{fig:Fermions-ONtwokinks} we show computed bounds on the leading singlet dimension $\\Delta_{\\sigma}$, assuming that the next singlet is irrelevant, $\\Delta_{\\sigma'} \\geqslant 3$. These bounds also show both an upper and lower kink, which appear not too far from the $\\epsilon$-expansion estimates for the GNY and GNY${}^*$ models. In Fig.~\\ref{fig:Fermions-SigmaT} we also highlight the bounds on the leading $O(N)$ symmetric tensor $\\sigma_T$, which display mysterious and unexplained jumps when $\\Delta_{\\sigma_T}$ reaches marginality, and at smaller values of $\\Delta_{\\psi_i}$ show a series of kinks which match to the large-$N$ GNY models.
Understanding the mechanism behind these jumps is an important open problem, which may be related to the spectrum rearrangement phenomena from Sec.~\\ref{sec:Z2-spectrum}.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig30a-grossNeveuSmallSigPrime}\n\\includegraphics[width=\\figwidth]{fig30b-grossNeveuSigPrime2To3}\n \\caption{\\label{fig:Fermions-grossNeveuSmallSigPrime}\n (Color online) Effect of imposing a gap until the second pseudoscalar $\\sigma'$ on the parameter space of 3d parity-invariant CFTs~\\cite{Iliesiu:2015qra}.}\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig31a-oneRelevantOddScalar}\n\\includegraphics[width=\\figwidth]{fig31b-sigmaVsPsiWithSigPrimeGT3Nmax12}\n \\caption{\\label{fig:Fermions-oneRelevantOddScalar}\n (Color online) Effect of imposing that there is only one relevant pseudoscalar, $\\Delta_{\\sigma'}\\geqslant 3$, in 3d parity-invariant CFTs~\\cite{Iliesiu:2015qra}.}\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig32-gapOnSigmaPrimeSigBound}\n \\caption{\\label{fig:Fermions-ONtwokinks}\n (Color online) Effect of imposing a gap $\\Delta_{\\sigma'}\\geqslant 3$ in the singlet pseudoscalar sector of $O(N)$-symmetric fermionic CFTs \\cite{Iliesiu:2017nrv}. The kinks at low $N$ may perhaps be identified with the GNY and GNY${}^*$ CFTs.}\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig33a-universalBoundsOnSigmaT}(a)\n\\includegraphics[width=\\figwidth]{fig33b-universalBoundsOnSigmaTZoomedIn}(b)\n \\caption{\\label{fig:Fermions-SigmaT} \n (Color online) Upper bounds on the dimension of the symmetric traceless pseudoscalar $\\sigma_{T}$ in the OPE $\\psi_i\\times \\psi_j$ in $O(N)$-symmetric fermionic CFTs~\\cite{Iliesiu:2017nrv}. Notice the mysterious jumps in the wide view of the bounds (a) when they cross marginality. 
(b) gives a zoom on the small $\\Delta_\\psi$ region, where the bounds exhibit kinks, in agreement with the GNY dimensions at large $N$.}\n \\end{figure}\n \n \n \n\n\n\\subsection{Outline}\n\nDue to the overwhelming number of results in various incarnations of the conformal bootstrap, our review will necessarily be limited in scope. Let us briefly outline the topics that we will cover. We begin in Sec.~\\ref{sec:informal} with an informal overview of the conformal bootstrap. Sec.~\\ref{sec:cft} provides a concise introduction to the conformal field theory techniques that are needed to set up the bootstrap in $d$ dimensions. We follow in Sec.~\\ref{sec:numerical} with an overview of the various numerical methods that have been employed in studies of the bootstrap. Secs.~\\ref{sec:appl} and~\\ref{sec:appl4d} review results obtained from applying these methods to 3d and 4d CFTs. Sec.~\\ref{sec:4Dsusy} reviews results obtained with the stronger assumptions of 4d $\\mathcal{N}=1$ or 3d $\\mathcal{N}=2$ superconformal symmetry. We comment on applications to nonunitary models in Sec.~\\ref{sec:nonunitary}. Notably absent from our main review are CFTs in other dimensions (e.g., $d=2$ or $d>4$), CFTs with extended supersymmetry, analytical progress in the bootstrap, logarithmic and nonrelativistic CFTs, and other related topics. We finish with a brief overview of progress in these related lines of research in Sec.~\\ref{sec:other} and give some concluding words in Sec.~\\ref{sec:outlook}.\n\n\n\\section{Conformal bootstrap: informal overview}\n\\label{sec:informal}\n\nIn this section we will give a brief outline of the conformal bootstrap approach to critical phenomena in $d$ dimensions.\nWe will be rather informal in this section, while in the subsequent sections the same material will be treated in more depth and at a higher level of rigor. For another short introduction to these matters, see \\textcite{Poland:2016chs}.
For longer pedagogical introductions see \\textcite{Rychkov:2016iqz,Simmons-Duffin:2016gjk}.\n\nAs a simple physical setup where these methods would be applicable, we can consider a statistical physics system in $d$ spatial dimensions which is (a) in thermodynamic equilibrium and (b) at a temperature corresponding to a continuous phase transition (so that the correlation length is infinite). Suppose that we are interested in equal-time correlation functions of some local quantities characterizing this system:\n\\beq\n\\langle {\\cal O}_1(x_1)\\ldots {\\cal O}_n(x_n) \\rangle\\,,\n\\label{eq:corr}\n\\end{equation}\nwhere $x_i$ are positions in $\\mathbb{R}^d$. For example, one can think of the 3d Ising model at the critical point, with ${\\cal O}_i(x)$ the local magnetization, local energy density, etc. In general, the ${\\cal O}_i(x)$ are called local operators. \n\nWe are interested in the behavior of the correlators~\\reef{eq:corr} at distances large compared to any microscopic (such as lattice) scale $a$. According to Wilson's RG theory, continuous phase transitions are fixed points of RG flows, which means that the long-distance behavior of~\\reef{eq:corr} will have scale invariance (as well as rotation and translation invariance). Using scale invariance, we can formally extend the long-distance behavior of these correlators from distances $|x_i-x_j|\\gg a$ to arbitrarily short distances. In what follows we work in the so-defined \\emph{continuum limit} theory, which is exactly scale invariant at all distances from $0$ to $\\infty$.\\footnote{However, in this paper we will not consider the behavior of correlators at coincident points.} \n\nAs discussed in the introduction, we expect that the critical theory is also conformally invariant (i.e., a CFT).
This means that for any \\emph{conformal transformation} of $d$-dimensional space $x\\to x'$ (see Sec.~\\ref{sec:conformaltransformations} for the definition), Eq.~\\reef{eq:corr} is related to the same correlation function evaluated at points $x_1',\\ldots,x_n'$. This invariance property (or covariance) of correlation functions is expressed as a transformation rule for local operators, which will appear in the next section as Eq.~\\reef{eq:fieldrepresentation}. For scalar operators, we have\n\\beq\n{\\cal O}(x')=\\Omega(x)^{-\\Delta_{\\cal O}} {\\cal O}(x)\\,,\n\\label{eq:invsc}\n\\end{equation}\n where $\\Omega(x)=|\\partial x'\/\\partial x|^{1\/d}$ is the $x$-dependent scale factor of the conformal transformation, and $\\Delta_{\\cal O}$ is a fixed parameter characterizing the operator ${\\cal O}$, called its \\emph{scaling dimension}.\\footnote{To be precise, such transformation rules hold for \\emph{primary} local operators, a subtlety which will not play a role in this informal discussion.}\n\n\\textcite{Polyakov:1970xd} noticed that invariance under \\reef{eq:invsc} strongly restricts two-point (2pt) and three-point (3pt) correlation functions. The 2pt function is nonzero only for identical operators\nand can be normalized to one:\n\\beq\n\\langle {\\cal O}_i(x_1){\\cal O}_j(x_2)\\rangle =\\delta_{ij}{|x_{1}-{x_2}|^{-2\\Delta_i}}\\,,\n\\label{eq:2pt}\n\\end{equation}\nwhile the 3pt function is fixed up to a numerical coefficient:\n\\beq\n\\langle {\\cal O}_1(x_1){\\cal O}_2(x_2){\\cal O}_3(x_3)\\rangle =\\frac{\\lambda_{123}}{|x_{12}|^{h_{123}} |x_{13}|^{h_{132}} |x_{23}|^{h_{231}}}\\,,\n\\label{eq:3pt}\n\\end{equation}\nwhere $x_{ij} \\equiv x_i-x_j$ and $h_{ijk} \\equiv \\Delta_i+\\Delta_j-\\Delta_k$. Similar equations hold for operators with indices; see Sec.~\\ref{sec:correlations}. \n\nThe set of numerical parameters $\\Delta_i$ and $\\lambda_{ijk}$ appearing in \\reef{eq:2pt} and \\reef{eq:3pt} is called the \\emph{CFT data}.
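As a quick consistency check, one can verify that the 2pt function \reef{eq:2pt} is compatible with the transformation rule \reef{eq:invsc}. For a dilation $x\to x'=\lambda x$, the scale factor is $\Omega(x)=\lambda$, and:

```latex
% Applying \reef{eq:invsc} to each operator and then using \reef{eq:2pt}:
\beq
\langle {\cal O}(x_1')\,{\cal O}(x_2')\rangle
 = \lambda^{-2\Delta_{\cal O}}\,\langle {\cal O}(x_1)\,{\cal O}(x_2)\rangle
 = \lambda^{-2\Delta_{\cal O}}\,|x_1-x_2|^{-2\Delta_{\cal O}}
 = |x_1'-x_2'|^{-2\Delta_{\cal O}}\,,
\end{equation}
```

so the correlator takes the same functional form at the transformed points. Running this argument in reverse shows how \reef{eq:invsc} fixes the power of $|x_1-x_2|$ in \reef{eq:2pt}.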
It turns out that the CFT data determine not only 2pt and 3pt functions, but are also sufficient to compute all local observables in CFTs in flat space, by which we mean all correlation functions of local operators, including four-point (4pt) and higher-order correlation functions.\\footnote{It should be mentioned that CFTs also possess nonlocal observables in addition to the local ones, which are not necessarily determined by the OPE data. For example, one can probe a CFT by extended operators, such as boundaries or defects, or put it in a space of nontrivial geometry or topology. In this review we will focus on the local observables, although the bootstrap philosophy can also be useful in the study of some nonlocal observables; see~Sec.~\\ref{sec:bdry} for boundaries and defects and Sec.~\\ref{sec:other} for the modular bootstrap.} \n \nTo see this, one uses the OPE, which says that we can replace the insertion of two nearby local operators inside a correlation function by a series of single local operators:\n\\beq\n{\\cal O}_i(x_1){\\cal O}_j(x_2)=\\sum_k f_{ijk} {\\cal O}_k(y)\\,.\n\\label{eq:OPE}\n\\end{equation}\nThe coefficients of the series $f_{ijk}$ may and will depend on the relative positions of the operators ${\\cal O}_i$, ${\\cal O}_j$, ${\\cal O}_k$, and on their quantum numbers. Crucially, however, these coefficients are not supposed to depend on which other operators appear in the correlation function, as long as they are sufficiently far away from $x_1,x_2,y$. The precise criterion in the CFT context will be given in Eq.~\\reef{eq:converge}.\n\nNotice the freedom in where we put operators appearing on the r.h.s.~of the OPE: we can choose $y={\\textstyle\\frac 12}(x_1+x_2)$, $y=x_1$, or any other point nearby. Different choices of $y$ can be related by Taylor-expanding ${\\cal O}_k$, and thus can be compensated by changing {the} coefficients of derivatives of ${\\cal O}_k$ in the OPE. 
In what follows we will group the operator ${\\cal O}_k$ together with all its derivatives, formally thinking of $f_{ijk}$ as an infinite power series in $\\del_y$. \n\nThere are two things that make the OPE in conformal field theories more powerful than in a generic QFT. Firstly, compatibility of the OPE with conformal invariance determines the functions $f_{ijk}$ up to a numerical prefactor, coinciding with the 3pt function coefficient $\\lambda_{ijk}$ (for this reason it is also called an OPE coefficient):\n\\beq\nf_{ijk}(x_1,x_2,y,\\del_y) = \\lambda_{ijk} \\hat{f}_{ijk}(x_1,x_2,y,\\del_y)\\,.\n\\end{equation}\nThe reduced functions $\\hat f_{ijk}$ depend only on the operator dimensions $\\Delta_i,\\Delta_j,\\Delta_k$, the spins of these operators (which are kept implicit in this informal discussion), and on the space dimension $d$.\\footnote{Note that here we are assuming the normalization in Eq.~(\\ref{eq:2pt}).}\n\nSecondly, the OPE in conformal theories has a finite radius of convergence, which is determined by the distance to the nearest other operator insertions. For example, in the correlator of Eq.~\\reef{eq:npt} given below, the OPE will converge if \n\\beq\n|x_1-y|,\\,|x_2-y|<\\min_{i=3\\ldots n} |x_i-y|\\,,\n\\label{eq:converge}\n\\end{equation}\ni.e.~if there exists a sphere centered at $y$ and separating $x_1$, $x_2$ from any other operator insertion.\n\nFor these two reasons, we can compute any correlation function recursively using the OPE, provided that we know the CFT data. 
For example, suppose we want to compute the $n$-point function\n\\beq\n\\langle {\\cal O}_1(x_1) {\\cal O}_2(x_2) {\\cal O}_3(x_3)\\ldots {\\cal O}_n(x_n)\\rangle\\,.\n\\label{eq:npt}\n\\end{equation}\nApplying the OPE to ${\\cal O}_1(x_1) {\\cal O}_2(x_2)$, we reduce this correlator to a sum of correlators containing $n-1$ operators\n\\beq\n\\langle {\\cal O}_k(y) {\\cal O}_3(x_3)\\ldots {\\cal O}_n(x_n)\\rangle\\,.\n\\end{equation}\nProceeding in this way, we will eventually get down to 2pt functions, which are determined by the CFT data. The only parameters which will enter this computation are the operator positions and quantum numbers, the CFT data, and the space dimension $d$.\\footnote{Notice that although the presented scheme solves the problem of computing $n$-point functions in principle, it is not trivial to do in practice. For 4pt functions, the necessary techniques will be presented in Sec.~\\ref{sec:cb}.}\n \nConsider now the case of a 4pt function (Eq.~\\reef{eq:npt} with $n=4$)\n and compute it in two different ways. The first way is to apply the OPE to the pairs of operators ${\\cal O}_1 {\\cal O}_2$ and ${\\cal O}_3{\\cal O}_4$. This reduces the 4pt function to an infinite sum of 2pt functions of operators which appear in these OPEs. A second way is to apply the OPE to the pairs ${\\cal O}_1{\\cal O}_4$ and ${\\cal O}_2{\\cal O}_3$. Since we are dealing with the same 4pt function, the two expansions must agree in their overlapping regions of convergence. This \\emph{crossing relation} represents a consistency condition on the CFT data and is illustrated in Fig.~\\ref{fig:bootstrap}.\n\nThe main idea of the conformal bootstrap is that by imposing the crossing relation, we should be able to significantly winnow down the set of all possible CFT data. 
In the subsequent sections of this review, we will see how the crossing relation can be written in a mathematically manageable form, and how numerical algorithms can be applied to extract from it concrete constraints. \n\nIdeally, if we impose crossing relations for \\emph{all} 4pt functions of the theory, we will be left with the CFT data corresponding to the actually existing critical theories. In practice, it has so far been possible to impose crossing relations on only a handful of 4pt functions at a time. However, we will see that even this limited procedure produces nontrivial constraints, which are in some cases surprisingly strong.\n\n\\begin{figure}[t!]\n \\includegraphics[width=0.49\\textwidth]{fig01-bootstrapO.pdf}\n \\caption{\\label{fig:bootstrap}\n Crossing relation for the 4pt function $\\<{\\cal O}_1 {\\cal O}_2 {\\cal O}_3 {\\cal O}_4\\>$.}\n \\end{figure}\n\n \\subsection{Universality and the role of microscopic input}\n\\label{sec:universality}\nA fundamental concept in the theory of critical phenomena is universality: all continuous phase transitions can be grouped into universality classes which share the same critical exponents. This is neatly explained in Wilson's RG theory: two phase transitions will fall into the same universality class if they are described by the same fixed point. On the other hand, the conformal bootstrap provides a different perspective on the same phenomenon: each universality class corresponds to a different CFT, with a different set of CFT data. \n\nThese two points of view are clearly complementary, and it is important to establish the correspondence between them. Consider for example the critical exponents. In RG theory they can be related to the eigenvalues $\\lambda^{y_i}$ of the RG transformation linearized around the fixed point, where $\\lambda>1$ is the RG rescaling factor. As is well known, these eigenvalues are simply related to the scaling dimensions of the local operators: $y_i=d-\\Delta_i$. 
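As a concrete instance of this dictionary (with numerical values quoted purely for illustration), the correlation-length exponent of the 3d Ising model follows from the dimension $\\Delta_\\epsilon$ of the leading $\\bZ_2$-even scalar via $\\nu = 1\/y_\\epsilon = 1\/(d-\\Delta_\\epsilon)$:

```python
# RG eigenvalues and scaling dimensions are related by y_i = d - Delta_i.
# For the 3d Ising model, the thermal eigenvalue is y_eps = d - Delta_eps,
# and the correlation-length exponent is nu = 1 / y_eps.
d = 3
delta_eps = 1.41262   # bootstrap value for the leading Z2-even scalar, for illustration
nu = 1.0 / (d - delta_eps)
print(f"nu = {nu:.4f}")   # -> nu = 0.6300
```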
Thus, information about the critical exponents can be easily extracted from CFT data, and agreement of their values between an RG fixed point and a CFT may give us confidence that the two describe the same critical universality class.\n\nThere are however three more fundamental structural characteristics which can be used to identify universality classes, even before considering the numerical values of critical exponents. These characteristics may not be sufficient to uniquely classify the different CFTs, but they will give us a convenient starting point.\n\n\\emph{1. The global (or internal) symmetry group.} It can be discrete, as for the $\\bZ_2$ symmetry of the Ising model, or continuous, as for the $O(N)$ models. In RG studies, the global symmetry group is specified by considering an RG flow in the space of microscopic theories described by an action possessing a given symmetry. The global symmetry group for a CFT is the same group $G$ as for the corresponding RG fixed point, although it is specified in a different way: by demanding that each local operator transform in an irreducible representation of $G$ and that OPE coefficients respect this symmetry structure.\n\nWe note in passing that unlike the global symmetry, the presence of a \\emph{gauge symmetry} in a microscopic description does not manifest itself in the conformal bootstrap, because physically observable local CFT operators are gauge invariant.\\footnote{Gauge symmetries can make themselves known more indirectly, through anomaly coefficients which show up in the correlation functions of local operators or the existence of higher-form symmetries.}\n\n\\emph{2. 
The number of relevant singlet scalars.} This counts the scalar operators, invariant under the global symmetry, which are relevant (i.e., have dimension $\\Delta_i<d$).\n\n\\subsection{Linear programming}\n\\label{sec:LP}\n\nTo analyze the crossing relation numerically, one Taylor-expands it around the crossing-symmetric point $z=\\bar{z}=1\/2$; the resulting constraints take the form\n\\begin{eqnarray}\\label{eq:crossingvec}\n0 &=& \\sum_{{\\cal O}} \\lambda_{\\sigma\\s{\\cal O}}^2 \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}},\n\\end{eqnarray}\nwhere $\\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}$ can be thought of as a vector with components\n\\begin{eqnarray}\n\\label{eq:F}\n\\left(\\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}\\right)^{mn} &=& \\partial_{z}^m \\partial_{\\bar{z}}^n F^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}}, \\ell_{{\\cal O}}}(z,\\bar{z})\\big|_{z=\\bar{z}=1\/2}\\,,\n\\end{eqnarray}\nwhere we take derivatives of the functions \\reef{eq:Feq} and we keep components up to a cutoff $m+n \\leqslant \\Lambda$.\\footnote{{Since the functions \\reef{eq:Feq} are odd under $z\\to 1-z$, $\\bar{z}\\to 1-\\bar{z}$, only components with {$m+n$ odd} lead to nontrivial equations.}} {This computation will thus involve derivatives of conformal blocks up to some finite order.}\n \n{Computing the vectors $\\vec{F}^{\\Delta_\\sigma}_{\\Delta,\\ell}$ constitutes a nontrivial preliminary step for analyzing Eq.~\\reef{eq:crossingvec}. This step is handled starting from one of the many exact or approximate expressions for conformal blocks discussed in Sec.~\\ref{sec:cb}. The state-of-the-art approach is to use the rational approximation, see Sec.~\\ref{sec:rational}, where available software packages are also described. This approach gives rise to approximate expressions which reproduce $\\vec{F}^{\\Delta_\\sigma}_{\\Delta,\\ell}$ with any desired precision. These expressions can be efficiently evaluated ``on the fly\", as needed in the continuous simplex algorithm from Sec.~\\ref{sec:modified-simplex}. 
They can also be used as an input to the semidefinite programming methods described in Sec.~\\ref{sec:SDP}.}\n\nWe now proceed to describe strategies for deciding whether Eq.~\\reef{eq:crossingvec} has solutions, i.e.~whether there exists some choice of the exchanged CFT spectrum $\\{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}\\}$ and OPE coefficients $\\lambda_{\\sigma\\sigma{\\cal O}}$ that satisfies it. First, let us remark that Eq.~(\\ref{eq:crossingvec}) is a set of linear equations in $\\lambda_{\\sigma\\s{\\cal O}}^2$. This is at the heart of both the linear programming approaches described in this subsection as well as the extremal functional and truncation methods described below. In particular, if one has a candidate CFT spectrum for operators appearing in $\\sigma \\times \\sigma$ but does not know the OPE coefficients, one can straightforwardly solve a linear algebra problem to find the coefficients.\n\nIn unitary (or reflection positive) CFTs, Eq.~(\\ref{eq:crossingvec}) states that a sum of vectors must add to zero with {\\it positive} coefficients, due to $\\lambda_{\\sigma\\s{\\cal O}}$ necessarily being real. For some choices of the CFT spectrum $\\{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}\\}$ this is not possible, as illustrated in Fig.~\\ref{fig:separating-plane}. When it is not possible, one can identify a separating plane $\\alpha$ through the origin such that all vectors point to one side of the plane.\\footnote{Some vectors may point in the plane but at least one must point outside of it.}\n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=0.49\\textwidth]{fig05-separating-plane}\n \\caption{\\label{fig:separating-plane}\nLeft: A case where vectors can sum to zero with positive coefficients. Right: A case where vectors cannot sum to zero with positive coefficients and there exists a separating plane $\\alpha$ such that all vectors point to one side of the plane. 
Figure from \\textcite{Poland:2016chs}.}\n \\end{figure}\n\nThis observation forms the basis for the first numerical strategy of analyzing the crossing relation \\cite{Rattazzi:2008pe}: input some assumption about the CFT spectrum (e.g., a gap in the scalar spectrum with all other operators satisfying unitarity bounds) and numerically search for a separating plane $\\alpha$. Equivalently we can say that we are applying a linear functional $\\sum_{mn} \\alpha_{mn} \\partial_{z}^m \\partial_{\\bar{z}}^n \\left[ \\cdot \\right]\\big|_{z=\\bar{z}=1\/2}$ to the crossing relations and checking if it is possible to derive a contradiction. Concretely, one can look for a vector $\\vec{\\alpha}$ such that the scalar product is strictly positive on at least one operator whose OPE coefficient is nonzero (this may be the identity, the stress tensor, or any other operator that we assume appears in the OPE):\n\\beq\n\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}^*},\\ell_{{\\cal O}^*}} > 0\\,,\n\\end{equation}\nwhile it is nonnegative for all other $\\{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}\\}$ allowed by our assumptions:\n\\beq\n\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}} \\geqslant 0\\,.\n\\end{equation}\nEach inequality $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}} \\geqslant 0$ identifies a half-space and their intersection carves out a convex cone. \n\nThere is still one issue before the vector $\\vec{\\alpha}$ can be searched for numerically -- a priori there are an infinite number of allowed vectors labeled by all $\\{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}\\}$. 
\nThe first numerical bootstrap studies\\footnote{See~\\textcite{Rattazzi:2008pe}, \\textcite{Rychkov:2009ij}, \\textcite{Caracciolo:2009bx}, \\textcite{Poland:2010wg}, \\textcite{Rattazzi:2010gj,Rattazzi:2010yc}, \\textcite{Vichi:2011ux,Vichi:2011zza}, and \\textcite{ElShowk:2012ht}.} employed a discretization approach: namely, they discretized the set $\\{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}\\}$ using some small spacing between allowed dimensions so that there are a finite number of linear inequalities satisfied by a finite number of unknown coefficients $\\vec{\\alpha}$. Then the problem becomes a standard linear programming problem and can be solved using standard algorithms. These include simplex algorithms, where one moves from vertex to vertex on the edge of the feasible region, or interior point algorithms, where one instead traverses the interior of the feasible region. Software packages that have been used {in the past} for this purpose are \\texttt{Mathematica}, the \\texttt{GNU Linear Programming Kit (GLPK)}, and the \\texttt{IBM ILOG CPLEX Optimizer}. {This discretization approach is currently considered to be obsolete, although it retains pedagogical value. More efficient approaches avoiding discretization will be discussed below.}\n\nOne can slightly modify the problem in order to place bounds on OPE coefficients \\cite{Caracciolo:2009bx}. 
By isolating one particular contribution ${\\cal O}^*$ and again applying a functional $\\vec{\\alpha}$ one rewrites the equation as\n\\beq\n\\label{eq:OPEbound}\n\\lambda_{\\sigma\\s{\\cal O}^*}^2 \\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}^*},\\ell_{{\\cal O}^*}} = -\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{0,0} - \\sum_{{\\cal O}} \\lambda_{\\sigma\\s{\\cal O}}^2 \\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}.\n\\end{equation}\nThen by imposing the normalization condition \\mbox{$\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}^*},\\ell_{{\\cal O}^*}} = 1$} and the positivity constraints \\mbox{$\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}} \\geqslant 0$} one obtains the upper bound \\mbox{$\\lambda_{\\sigma\\s{\\cal O}^*}^2 \\leqslant -\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{0,0}$}. The strongest upper bound is obtained by {\\it minimizing} $ -\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{0,0}${, which yields an optimization problem that can be solved with linear programming algorithms, adopting the above-mentioned discretization approach or other methods discussed below}. Alternatively, one can also seek lower bounds by instead imposing $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}} \\leqslant 0$ and maximizing $ -\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_\\sigma}_{0,0}$ \\cite{Poland:2011ey}. However, in general it is not possible to obtain lower bounds on OPE coefficients unless the operator ${\\cal O}^*$ is isolated in the allowed spectrum, since one could always imagine that ${\\cal O}^*$ has a zero OPE coefficient but operators infinitesimally close to it have nonzero coefficients. 
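The discretized version of this optimization is small enough to sketch explicitly. The following toy problem (with entirely made-up two-component ``crossing vectors'', standing in for the many conformal-block derivatives of a real application) computes an upper bound on $\\lambda_{\\sigma\\sigma{\\cal O}^*}^2$ with \\texttt{scipy.optimize.linprog}:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-component "crossing vectors", made up for illustration; real
# applications use many derivatives of conformal blocks instead.
F_id   = np.array([-1.0, -0.5])          # identity contribution F_{0,0}
F_star = np.array([ 1.0,  0.0])          # operator O* whose squared OPE coefficient we bound
deltas = np.linspace(1.0, 4.0, 61)       # discretized exchanged dimensions
F_ops  = np.array([[1.0, dl - 2.0] for dl in deltas])

# minimize  -alpha . F_id   (the strongest upper bound)
# subject to alpha . F_star = 1          (normalization)
#            alpha . F(Delta) >= 0       (positivity on the discretized grid)
res = linprog(c=-F_id,
              A_ub=-F_ops, b_ub=np.zeros(len(deltas)),
              A_eq=F_star[None, :], b_eq=[1.0],
              bounds=[(None, None)] * 2)  # the functional components may be negative
assert res.success
print("upper bound on lambda^2:", res.fun)   # -> 0.75 for this toy data
```

In a real computation the vectors have many components and the grid in $\\Delta$ is replaced by the continuous methods discussed below; the structure of the linear program is unchanged.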
\n\n\\subsubsection{Continuous primal simplex algorithm}\n\\label{sec:modified-simplex}\n\nInstead of looking for a vector $\\vec{\\alpha}$ with the desired positivity properties, an alternate strategy is to search directly for a set of vectors $\\{\\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}\\}$ appearing in Eq.~(\\ref{eq:crossingvec}), subject to the positivity conditions $\\lambda_{\\sigma\\s{\\cal O}}^2 \\geqslant 0$. This search can be viewed as a ``primal'' formulation of the linear program, whereas the search for $\\vec{\\alpha}$ described above can be viewed as the related ``dual'' problem. Note that in this formulation there are a continuously infinite number of possible vectors $\\vec{F}^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}$ in the search space. \\textcite{El-Showk:2014dwa} {developed} a modification of Dantzig's simplex algorithm in order to handle such a continuous search space.\\footnote{Such linear programming problems are called `continuous' or `semi-infinite' \\cite{reemtsen_numerical_1998}.} The essential idea is to use Newton's method at each step of the algorithm to identify a vector to add, which is optimal over some continuous interval of scaling dimensions $\\left[\\Delta_{\\text{min}}, \\Delta_{\\text{max}}\\right]$ and discrete set of spins $\\left[0, \\ell_{\\text{max}}\\right]$. {For reasons explained in \\textcite{El-Showk:2014dwa}, it is necessary to perform computations at a precision higher than machine precision. 
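The need for extended precision can be illustrated on a deliberately ill-conditioned toy linear system (a Hilbert matrix, unrelated to any specific bootstrap computation): double precision loses essentially all accuracy, while exact rational arithmetic recovers the solution:

```python
import numpy as np
from fractions import Fraction

n = 12
# Hilbert matrix: a classic ill-conditioned system, standing in for the
# nearly degenerate linear algebra that arises inside the simplex steps.
H_float = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b_float = H_float @ np.ones(n)          # exact solution is the all-ones vector
x_float = np.linalg.solve(H_float, b_float)
print("double precision max error:", np.abs(x_float - 1.0).max())

# Same system in exact rational arithmetic (Gaussian elimination on Fractions).
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in H]
for k in range(n):                      # forward elimination (H is positive definite)
    for i in range(k + 1, n):
        f = H[i][k] / H[k][k]
        H[i] = [a - f * c for a, c in zip(H[i], H[k])]
        b[i] -= f * b[k]
x = [Fraction(0)] * n
for i in reversed(range(n)):            # back substitution
    x[i] = (b[i] - sum(H[i][j] * x[j] for j in range(i + 1, n))) / H[i][i]
print("exact arithmetic recovers the solution exactly:", x == [Fraction(1)] * n)
```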
This continuous simplex algorithm is one of two state-of-the-art methods for the conformal bootstrap, the other being the semidefinite programming method described below.} Three implementations of this algorithm are available: \na \\texttt{C++} code \\texttt{SIPSolver} \\cite{DSDSIPsolver} and a \\texttt{Python\/Cython} code \\cite{ElShowkRychkov} which were used for the computations in \\cite{El-Showk:2014dwa}, as well as a \\texttt{Julia} package \\texttt{JuliBootS}~\\cite{Paulos:2014vya}.\n\n\\subsection{Semidefinite programming}\n\\label{sec:SDP}\n\nWhile the linear programming techniques described above are adequate for crossing relations of single 4pt functions (possibly charged under some global symmetry), they are more difficult to adapt for systems of crossing relations containing multiple operators. The reason is that the resulting crossing relations for mixed correlators are no longer linear in the positive squares of OPE coefficients.\\footnote{However, they can be made linear at the expense of introducing additional continuous parameters~\\cite{El-Showk:2016mxr}. {This observation has not yet been implemented and it is not known how it would perform in practice.}} The same issue arises when considering 4pt functions of spinning operators, where multiple 3pt function tensor structures exist. In these situations one can phrase the optimization problem needed to obtain bounds using the language of semidefinite programming rather than linear programming \\cite{Kos:2014bka}.\\footnote{For a related problem of multiple internal symmetry coupling structures this was observed in \\textcite{Rattazzi:2010yc}.}\n\nAnother use of semidefinite programming \\cite{Poland:2011ey} is to avoid needing to discretize and impose a cutoff on the exchanged operator dimensions appearing in the positivity constraints such as $\\vec{\\alpha} \\cdot \\vec{F}_{\\Delta_{{\\cal O}}, \\ell_{{\\cal O}}} \\geqslant 0$. 
We will describe both of these uses of semidefinite programming, as well as how they can be combined, below.\n\nIn most applications to the bootstrap, it has proven necessary for numerical stability to solve the semidefinite programs described below at a precision higher than machine precision. The first numerical studies made use of the software \\texttt{SDPA-GMP}~\\cite{SDPAGMP} (a variant of \\texttt{SDPA}~\\cite{SDPA}) for this purpose. \nThe state of the art is an efficient software package \\texttt{SDPB}, described in \\textcite{Simmons-Duffin:2015qma}, which improves on \\texttt{SDPA}'s primal-dual interior point algorithm primarily by taking advantage of matrix block structure and parallelization.\\footnote{{Further development of \\texttt{SDPB} is being carried out within the Simons Collaboration on the Nonperturbative Bootstrap (\\url{http:\/\/bootstrapcollaboration.com\/}), and this package will likely remain at the forefront of the numerical bootstrap studies in the coming years.}} In order to set up the problems so that they can be solved by \\texttt{SDPB}, recent studies have typically used either \\texttt{Mathematica} notebooks or the interfaces \\texttt{PyCFTBoot}~\\cite{Behan:2016dtz} or \\texttt{cboot}~\\cite{CBoot}. \n\n\\subsubsection{Mixed correlators}\n\n\\label{sec:mixed-correlators}\n\nWe will illustrate the use of semidefinite programming for mixed correlators with a simple example. Consider a system of 4pt functions containing two operators $\\sigma$ and $\\epsilon$, where $\\sigma$ is odd under a $\\mathbb{Z}_2$ symmetry and $\\epsilon$ is even. 
The resulting system of crossing relations for $\\<\\sigma\\s\\sigma\\s\\>$, $\\<\\sigma\\s\\epsilon\\e\\>$, and $\\<\\epsilon\\e\\epsilon\\e\\>$ takes the form\n\\cite{Kos:2014bka}\n\\beq\n\\label{eq:crossingequationwithv}\n 0 = \\sum_{{\\cal O}^+} \\begin{pmatrix}\\lambda_{\\sigma\\s{\\cal O}} & \\lambda_{\\epsilon\\e{\\cal O}}\\end{pmatrix} \\vec{V}_{+,\\Delta,\\ell}\\begin{pmatrix} \\lambda_{\\sigma\\s{\\cal O}} \\\\ \\lambda_{\\epsilon\\e{\\cal O}} \\end{pmatrix}+ \\sum_{{\\cal O}^-} \\lambda_{\\sigma\\epsilon{\\cal O}}^2 \\vec{V}_{-,\\Delta,\\ell}\\,,\n\\end{equation}\nwhere the components of the vectors $\\vec{V}_{\\pm,\\Delta,\\ell}$ run over 5 independent crossing relations,\\footnote{In this section we are using vector notation to describe the vector of crossing relations, rather than derivatives.} ${\\cal O}_{\\pm}$ denote operators even\/odd under $\\mathbb{Z}_2$ symmetry, and each $\\vec{V}_{+,\\Delta,\\ell}$ is a 5-vector of $2 \\times 2$ matrices:\n\n\\beq\n\\vec{V}_{-,\\Delta,\\ell} = \\begin{pmatrix} 0 \\\\ 0 \\\\ F_{-,\\Delta,\\ell}^{\\sigma\\epsilon,\\sigma\\epsilon}(z,\\bar{z}) \\\\ (-1)^{\\ell} F_{-,\\Delta,\\ell}^{\\epsilon\\sigma,\\sigma\\epsilon}(z,\\bar{z}) \\\\ - (-1)^{\\ell} F_{+,\\Delta,\\ell}^{\\epsilon\\sigma,\\sigma\\epsilon}(z,\\bar{z}) \\end{pmatrix},\\nonumber\n\\end{equation} \n\\beq\n\\vec{V}_{+,\\Delta,\\ell} = \\begin{pmatrix} \\begin{pmatrix} F^{\\sigma\\s,\\sigma\\s}_{-,\\Delta,\\ell}(z,\\bar{z}) & 0 \\\\ 0 & 0 \\end{pmatrix} \\\\ \\begin{pmatrix} 0 & 0 \\\\ 0 & F^{\\epsilon\\e,\\epsilon\\e}_{-,\\Delta,\\ell}(z,\\bar{z}) \\end{pmatrix}\\\\ \\begin{pmatrix} 0 & 0 \\\\ 0 & 0 \\end{pmatrix} \\\\ \\begin{pmatrix} 0 & \\frac12 F^{\\sigma\\s,\\epsilon\\e}_{-,\\Delta,\\ell}(z,\\bar{z}) \\\\ \\frac12 F^{\\sigma\\s,\\epsilon\\e}_{-,\\Delta,\\ell}(z,\\bar{z}) & 0 \\end{pmatrix} \\\\ \\begin{pmatrix} 0 & \\frac12 F^{\\sigma\\s,\\epsilon\\e}_{+,\\Delta,\\ell}(z,\\bar{z}) \\\\ \\frac12 
F^{\\sigma\\s,\\epsilon\\e}_{+,\\Delta,\\ell}(z,\\bar{z}) & 0 \\end{pmatrix} \\end{pmatrix}.\n\\end{equation}\nThe functions $F^{ij,kl}_{\\pm,\\Delta,\\ell}$ appearing here are given in \\reef{eq:Funeq}. One can then look for bounds by making some assumption about the spectrum and searching for a functional $\\vec{\\alpha} = \\sum_{mn} \\vec{\\alpha}_{mn} \\partial_{z}^m \\partial_{\\bar{z}}^n \\left[\\cdot\\right] \\big|_{z=\\bar{z}=1\/2}$ satisfying the properties\n\\begin{align}\n&\\begin{pmatrix} 1 & 1\\end{pmatrix} \\vec \\alpha \\cdot \\vec{V}_{+,0,0} \\begin{pmatrix} 1 \\\\ \n1 \\end{pmatrix} > 0 \\,, \\nn\\\\\n&\\vec \\alpha \\cdot \\vec{V}_{+,\\Delta,\\ell} \\succeq 0 \\quad\\textrm{for all $\\mathbb{Z}_2$-even operators with $\\ell$ even,} \\nn\\\\\n&\\vec \\alpha \\cdot \\vec{V}_{-,\\Delta,\\ell} \\geqslant 0 \\quad \\textrm{for all $\\mathbb{Z}_2$-odd operators.} \n\\label{eq:functionalinequalities1}\n\\end{align}\nThe novel feature is now that $\\vec \\alpha \\cdot \\vec{V}_{+,\\Delta,\\ell}$ must be a {\\it positive semidefinite} $2 \\times 2$ matrix, which makes the search in Eq.~(\\ref{eq:functionalinequalities1}) a semidefinite programming problem. Similar structure appears for more general systems of mixed\/spinning correlators, where if an exchanged operator has $N$ OPE coefficients appearing in the system then the needed positivity condition will be phrased in terms of positive semidefinite $N \\times N$ matrices.\n\n\n\\subsubsection{Polynomial approximations}\n\\label{sec:poly}\nA different use of semidefinite programming, relevant for both single and mixed correlators, is to avoid any discretization of the exchanged operator dimensions~\\cite{Poland:2011ey}. 
We will first explain the idea for single correlators, where one imposes inequalities of the form\n\\begin{eqnarray}\n\\sum_{mn} \\alpha_{mn} \\partial_z^m \\partial_{\\bar{z}}^n F^{\\Delta_\\sigma}_{\\Delta,\\ell}(z,\\bar{z}) \\big|_{z=\\bar{z}=1\/2} &\\geqslant& 0\\,.\n\\end{eqnarray} \nDue to the pole expansion of the conformal blocks described in Sec.~\\ref{sec:analyticrecursion}, if one keeps a finite number of poles{,} then by reorganizing $h_{\\Delta,\\ell}$ into a rational function of $\\Delta$, such derivatives can be rewritten in the form (see Sec.~\\ref{sec:rational})\n\\begin{eqnarray}\n\\partial_z^m \\partial_{\\bar{z}}^n F^{\\Delta_\\sigma}_{\\Delta,\\ell}(z,\\bar{z}) \\big|_{z=\\bar{z}=1\/2} \\approx \\chi_{\\ell}(\\Delta) P_{\\ell}^{mn}(\\Delta)\\,,\n\\end{eqnarray}\nwhere $P_{\\ell}^{mn}(\\Delta)$ is a {\\it polynomial} in $\\Delta$, and\n$\\chi_{\\ell}(\\Delta)$ is a positive function for all $\\Delta$ and $\\ell$ satisfying the unitarity bounds. The degree of the polynomial depends on the number of poles kept in the expansion of the conformal block. Then one simply needs to impose the polynomial inequalities\n\\begin{eqnarray}\n\\sum_{mn} \\alpha_{mn} P^{mn}_{\\ell}(\\Delta^{\\text{min}}_{\\ell}+x) \\geqslant 0\n\\end{eqnarray}\n for all $x \\geqslant 0$, where the minimum dimension at each spin $\\Delta^{\\text{min}}_{\\ell}$ depends on the assumptions being made. \n \nSuch inequalities for polynomials can be rewritten in terms of positive semidefinite matrices following a theorem of~\\textcite{Hilbert:1888}. The relevant theorem states that any polynomial $P(x)$ that is nonnegative on the interval $[0,\\infty)$ can be written in the form\n\\beq\nP(x) = a(x) + x b(x),\n\\end{equation}\nwhere $a(x)$ and $b(x)$ are sums-of-squares of polynomials. 
Such sums-of-squares can in turn always be expressed in the form\n\\beq\na(x) = \\text{Tr}(A Q_{d_1}(x)), \\quad b(x) = \\text{Tr}(B Q_{d_2}(x))\\,,\n\\end{equation}\nwhere $Q_{d}(x)\\equiv [x]_d [x]_d^T$ is a matrix built out of the monomials $[x]_d = (1,x,\\ldots, x^d)^T$, $d_1 = \\left\\lfloor \\frac12 \\deg P \\right\\rfloor$, $d_2 = \\left\\lfloor \\frac12 (\\deg P-1) \\right\\rfloor$, and $A$ and $B$ are some positive semidefinite matrices.\n\nWith this rewriting, one needs to search for coefficients $\\alpha_{mn}$ and positive semidefinite matrices $A_{\\ell}, B_{\\ell} \\succeq 0$ such that\n\\begin{multline}\n\\sum_{mn} \\alpha_{mn} P^{mn}_{\\ell}(\\Delta^{\\text{min}}_{\\ell}+x) =\\\\\n \\text{Tr}(A_{\\ell} Q_{d_1}(x)) + x \\text{Tr}(B_{\\ell} Q_{d_2}(x)).\n\\end{multline}\nIn practice one must also impose a cutoff on the set of included spins $0 \\leqslant \\ell \\leqslant \\ell_{\\text{max}}$. This search, combined with a normalization condition such as $\\vec{\\alpha} \\cdot \\vec{F}_{0,0}^{\\Delta_\\sigma} = 1$, is now in the form of a semidefinite programming problem. \n\nAs explained in detail in \\textcite{Kos:2014bka}, this idea can also be applied to systems of mixed or spinning correlators where exchanged operators have multiple OPE coefficients appearing in the system. In those cases, after truncating the conformal block pole expansions one imposes constraints of the form\n\\begin{gather}\n\\sum_{mn}\\vec{\\alpha}_{mn} \\cdot\n\\begin{pmatrix}\n\\vec{P}^{(11;mn)}_\\ell(\\Delta) &\\dots& \\vec{P}_\\ell^{(1N;mn)}(\\Delta)\\\\\n\\vdots & \\ddots &\\vdots\\\\\n\\vec{P}^{(N1;mn)}_\\ell(\\Delta) & \\dots & \\vec{P}_\\ell^{(NN;mn)}(\\Delta)\n\\end{pmatrix}\\succeq 0\n\\label{eq:matrixpolynomialconstraints1}\\\\\n\\textrm{for}\\quad \\Delta\\geqslant \\Delta^{\\text{min}}_{\\ell}.\\nn\n\\end{gather}\nAgain there is a theorem that such positive semidefinite matrix polynomials can always be written as sums-of-squares of matrix polynomials. 
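The scalar version of this representation is easy to verify numerically. In the following sketch the polynomials are invented for illustration: $a(x)=(1-2x+x^2)^2$ and $b(x)=(3+x)^2$ arise as $\\text{Tr}(A\\,Q_d(x))$ with rank-one positive semidefinite $A=uu^T$, since $\\text{Tr}(uu^T [x]_d [x]_d^T)=(u\\cdot[x]_d)^2$:

```python
import numpy as np

# Check of P(x) = a(x) + x b(x) with a, b realized as Tr(A Q_d(x)),
# where Q_d(x) = [x]_d [x]_d^T and A, B are PSD by construction (Gram matrices).
def Q(x, d):
    v = np.array([x ** k for k in range(d + 1)])
    return np.outer(v, v)

u = np.array([1.0, -2.0, 1.0])   # a(x) = (1 - 2x + x^2)^2  <->  A = u u^T
A = np.outer(u, u)
w = np.array([3.0, 1.0])         # b(x) = (3 + x)^2         <->  B = w w^T
B = np.outer(w, w)

for x in np.linspace(0.0, 5.0, 101):
    a = np.trace(A @ Q(x, 2))
    b = np.trace(B @ Q(x, 1))
    assert np.isclose(a, (1 - 2 * x + x ** 2) ** 2)
    assert np.isclose(b, (3 + x) ** 2)
    assert a + x * b >= 0        # P(x) is manifestly nonnegative on [0, inf)
print("Tr(A Q_d(x)) reproduces the sums-of-squares on the sample grid")
```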
A consequence, worked out in \\textcite{Kos:2014bka}, is that each entry can be written as \n\\begin{multline}\n\\sum_{mn} \\vec{\\alpha}_{mn} \\cdot \\vec{P}^{(ij;mn)}_\\ell(\\Delta^{\\text{min}}_{\\ell}+x) =\\\\ \\text{Tr}(A^{ij}_{\\ell} Q_{d_1}(x)) + x \\text{Tr}(B^{ij}_{\\ell} Q_{d_2}(x))\n\\end{multline}\nin terms of positive semidefinite matrices \\mbox{$A_{\\ell}^{ij}, B_{\\ell}^{ij} \\succeq 0$}, \nand the problem is again phrased as a semidefinite programming problem.\n\n\\subsection{Bounds and allowed regions}\n\n\\label{sec:allowed}\nThe algorithms described in the previous sections can be used to establish whether a given point in the space of CFT data, parametrized by the dimensions of external operators and by assumptions on the exchanged spectrum, belongs to the region allowed by crossing and unitarity. Since the exchanged spectrum contains infinitely many operators, there are infinitely many assumptions one can test. The art of the numerical bootstrap is to choose an interesting assumption, and then to delineate as precisely as possible the allowed region corresponding to this assumption. \n\nAs we will see in the next sections, one of the most frequently asked questions is the following: given an OPE $\\mathcal O\\times \\mathcal O$, derive an upper bound $\\Delta_\\text{max}$ on the dimension of the first operator appearing in this OPE having specified transformation properties under $SO(d)$ (and possibly under a global symmetry $G$),\\footnote{In particular, the existence of a bound with $\\Delta_{\\max}<\\infty$ provides a proof that such an operator exists. See \\textcite{Rattazzi:2008pe} and \\cite[{section 10.5}]{Simmons-Duffin:2016gjk} for intuitive explanations involving some numerics of why such bounds should exist at all, and \\textcite{Hogervorst:2013sma} and \\cite[{section 4.3.3}]{Rychkov:2016iqz} for an approximate analytic argument. 
At present, while the existence of bounds can sometimes be understood via such simple means, their actual values can only be precisely computed using the powerful numerical techniques described in the previous sections. Only in a handful of cases, e.g.~\\textcite{Mazac:2016qev} and \\textcite{Mazac:2018mdx}, have the best possible bounds been proven analytically.} assuming e.g.~that other operator dimensions are allowed to take any values allowed by unitarity. One can answer this question, for instance, as a function of $\\Delta_{\\mathcal O}$. This defines an allowed region with a boundary $\\Delta_\\text{max}(\\Delta_\\mathcal O)$. Similarly, when one obtains an upper (or lower) bound on an OPE coefficient as discussed in Sec.~\\ref{sec:LP}, this represents the boundary of an allowed region. These boundaries give us a view into the intricate underlying geometry of the space of CFT data allowed by crossing and unitarity.\n\n\n\\subsection{Spectrum extraction}\n\\label{sec:spectrum-extraction}\n\nA point in the allowed region (see Sec.~\\ref{sec:allowed}) is specified by external operator dimensions and by a handful of other numbers characterizing the assumptions, such as gaps on the exchanged operator spectrum. Once we have ascertained that a point belongs to the allowed region, in some cases it is important to be able to go one step further and to extract an explicit solution to crossing, i.e.~the whole spectrum of exchanged operator dimensions and their OPE coefficients. The precise way of doing this depends on which algorithm one uses. An important point is that we expect this solution to be non-unique inside the allowed region, but it should generically become unique on its boundary (see below).\n\nThe spectrum extraction is simplest in the primal simplex method, Sec.~\\ref{sec:modified-simplex}. 
In this case the spectrum is encoded directly in the set of basic vectors and is available in each step of the algorithm.\n\nIn the dual formulation of the linear programming method, one does not have access to the spectrum strictly inside the allowed region. However, one can extract a solution to crossing symmetry from the limiting functional when one approaches a {\\it boundary} of this region, by extremizing either an operator dimension or OPE coefficient. This is called the {\\it extremal functional method}, introduced in \\textcite{Poland:2010wg} and \\textcite{ElShowk:2012hu}.\n\nNamely, when approaching the boundary from the disallowed region, the system is on the verge of no longer allowing a separating plane, and the vector on which we require strict positivity is degenerating into the plane. In the case of a single crossing relation where we have imposed strict positivity on the identity operator, as we approach a dimension boundary we can find a vector $\\vec{\\alpha}$ such that $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_{\\sigma}}_{0,0} \\rightarrow 0$, together with the sum rule\n\\begin{eqnarray}\\label{eq:extremal}\n\t0 &=& \\sum_{{\\cal O}} \\lambda_{\\sigma\\s{\\cal O}}^2 \\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}},\n\\end{eqnarray}\nwhere $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}} \\geqslant 0$ for all other possible (non-identity) operators in the spectrum. One obtains a similar condition from the OPE coefficient bound in Eq.~(\\ref{eq:OPEbound}) if one sets the OPE coefficient to its extremal value $\\lambda_{\\sigma\\s{\\cal O}^*}^2 = -\\vec{\\alpha} \\cdot \\vec{F}_{0,0}^{\\Delta_{\\sigma}}$. 
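The zero structure of such a limiting functional can be illustrated with a self-contained numerical sketch. Here the action of the functional at fixed spin is modeled by a toy polynomial in $\Delta$ with double zeros at a hypothetical extremal spectrum (the values 1.41 and 3.82 are illustrative and not derived from any actual crossing equation); the point is only that the allowed dimensions are read off from where the nonnegative functional action touches zero:

```python
import numpy as np

# Toy stand-in for the action of an extremal functional alpha . F_{Delta,l} at
# fixed spin: a polynomial in Delta, nonnegative above the "unitarity bound",
# with double zeros at a hypothetical extremal spectrum (illustrative values).
extremal_spectrum = [1.41, 3.82]
coeffs = np.polynomial.polynomial.polyfromroots(extremal_spectrum * 2)  # double roots

def functional_action(delta):
    return np.polynomial.polynomial.polyval(delta, coeffs)

grid = np.linspace(1.0, 5.0, 40001)
vals = functional_action(grid)
assert np.all(vals >= -1e-12)  # positivity on the scanned range

# The spectrum is read off from where the functional action touches zero:
# local minima of alpha . F with numerically vanishing value.
is_min = np.r_[False, (vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:]), False]
recovered = [m for m in grid[is_min] if functional_action(m) < 1e-6]
print(recovered)  # approximately [1.41, 3.82]
```

In an actual bootstrap computation the functional is of course output by the linear or semidefinite program, and the same scan for double zeros is repeated in each spin channel.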
\n\nIn fact, it is easy to see that in order for these sums to hold along the boundary of the allowed region, it is necessary for either $\\lambda_{\\sigma\\s{\\cal O}}^2$ to be zero or for $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}$ to be zero. Thus, the zeroes of $\\vec{\\alpha} \\cdot \\vec{F}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}$ tell us the scaling dimensions and spins at which the OPE coefficients are allowed to be nonzero. The resulting {\\it extremal spectrum} is generically unique~\\cite{ElShowk:2012hu}.\n\nIn the above-mentioned primal simplex method, the extremal spectrum is reached from within the allowed region, and is encoded in the set of basic vectors that remain after the algorithm terminates. That this should agree with the dual approach via extremal functionals is guaranteed by the strong duality of linear programs.\n\nThe extremal functional method for extracting the spectrum is also applicable when using semidefinite programming. In this case the extremal functional is constructed in the dual formulation.\\footnote{It is not understood at present how to formulate an algorithm to extract the extremal spectrum along a dimension bound directly from the allowed region in the context of semidefinite programming. The currently used procedure is to sit in the interior of the space allowed by scaling dimension bounds and extremize an OPE coefficient to find an extremal functional.} Once the extremal spectrum is known, it is straightforward to reconstruct the OPE coefficients of the exchanged operators by either directly solving the bootstrap equations after inputting the extremal spectrum or extracting them from the primal solution of the primal-dual algorithm. 
{\\textcite{Simmons-Duffin:2016wlq} gives a precise algorithm for doing this using functionals output by \\texttt{SDPB}, realized in a \\texttt{Python} code\n\t\\cite{DSD-spectrum-extraction}.}\n\nAn important {open question is to understand} which CFTs are described by spectra which are extremal with respect to some extremization condition. As we will see in subsequent sections, empirically this seems to be the case for a variety of interesting CFTs including the 3d Ising and $O(N)$ models. Although it is not currently understood why it should be so, some speculations are given in Sec.~\\ref{sec:Z2-spectrum}.\n\n \\subsubsection{Flow method}\n \n \\label{sec:flow}\nAn interesting idea was proposed in \\textcite{El-Showk:2016mxr}, where given one extremal solution one can efficiently ``flow'' along the boundary to reconstruct nearby extremal solutions. The idea is to perturb the extremal spectrum and then impose that the perturbed spectrum is also extremal using (\\ref{eq:extremal}) as well as the tangency conditions $\\vec{\\alpha} \\cdot \\left(\\partial_{\\Delta_{{\\cal O}}} \\vec{F}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}},\\ell_{{\\cal O}}}\\right) = 0$. By linearizing perturbations of these conditions, the search for a nearby extremal spectrum (or a more precise extremal spectrum) can be efficiently solved using Newton's method. This approach then avoids the use of convex optimization after the initial step of finding an extremal solution, and can also be used to flow to nonunitary extremal solutions. 
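A minimal sketch of this Newton-continuation logic, with a simple quadratic standing in for the functional action (none of the bootstrap ingredients are real here): the extremality and tangency conditions together impose a double zero, and the solution is tracked as an external parameter flows, seeding each Newton solve with the previous extremal data:

```python
import numpy as np

# Schematic toy for the flow method: p(Delta; c, s) = Delta^2 + c*Delta + s
# stands in for the functional action. Extremality (p = 0) plus tangency
# (dp/dDelta = 0) force a double zero; the unknowns x = (Delta, c) are tracked
# as the parameter s flows. Exact answer: Delta = sqrt(s), c = -2*sqrt(s).

def residual(x, s):
    delta, c = x
    return np.array([delta**2 + c * delta + s,  # extremality condition
                     2 * delta + c])            # tangency condition

def jacobian(x, s):
    delta, c = x
    return np.array([[2 * delta + c, delta],
                     [2.0, 1.0]])

def newton(x, s, steps=20):
    for _ in range(steps):
        x = x - np.linalg.solve(jacobian(x, s), residual(x, s))
    return x

x = np.array([1.0, -2.0])  # known extremal solution at s = 1
for s in np.linspace(1.0, 2.0, 11)[1:]:
    x = newton(x, s)       # seed each solve with the previous solution
print(x)  # close to (sqrt(2), -2*sqrt(2))
```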
This idea was shown to work well in $d=1$ in \\textcite{El-Showk:2016mxr,Paulos:2016fap}.\\footnote{The code is implemented as a separate module of \\texttt{JuliBoots}~\\cite{Paulos:2014vya}, available on request from its author.} It appears very promising and needs to be further explored and extended, especially into higher dimensions.\n\n\\subsection{Truncation method}\n\\label{sec:truncation}\n\nFinally we wish to turn to an idea introduced by \\textcite{Gliozzi:2013ysa} and explored in a variety of works,\\footnote{See \\textcite{Gliozzi:2014jsa}, \\textcite{Gliozzi:2015qsa}, \\textcite{Gliozzi:2016cmg}, \\textcite{Esterlis:2016psv}, \\textcite{Hikami:2017hwv,Hikami:2017sbg,Hikami:2018mrf}, \\textcite{Li:2017agi,Li:2017ukc}, and \\textcite{LeClair:2018edq}.} which we will call the truncation method. The basic idea is to truncate the bootstrap equations to a finite number of operators $\\{\\Delta_{\\sigma}, {\\cal O}_I \\}$ with $N$ unknown scaling dimensions. After normalizing by the identity contribution $f^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}}(z,\\bar{z}) \\equiv F^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}}(z,\\bar{z})\/\\left(-F^{\\Delta_{\\sigma}}_{0,0}(z,\\bar{z})\\right)$, let us write the crossing equations as\n\\begin{eqnarray}\\label{eq:linearsystem}\n\\sum_{{\\cal O}_I} \\lambda_{\\sigma\\s{\\cal O}_I}^2 f^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}} &=& 1,\\nonumber\\\\\n\\sum_{{\\cal O}_I} \\lambda_{\\sigma\\s{\\cal O}_I}^2 \\vec{f}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}} &=& 0.\n\\end{eqnarray}\nHere the first equation containing $f^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}} \\equiv f^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}}(1\/2,1\/2)$ is viewed as an ``inhomogeneous'' equation containing the identity contribution on the right-hand side, and the second ``homogeneous'' equation contains the vector of derivatives 
$\\left(\\vec{f}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}}\\right)^{mn} = \\partial_{z}^m \\partial_{\\bar{z}}^n f^{\\Delta_\\sigma}_{\\Delta_{{\\cal O}_I}, \\ell_{{\\cal O}_I}}(z,\\bar{z})\\big|_{z=\\bar{z}=1\/2}$. If one keeps $M$ derivatives with $M > N$ then the system becomes over-constrained, and only has solutions if all of the minors of order $N$ of the linear system vanish,\n\\begin{eqnarray}\\label{eq:det}\n\\text{det}A_i=0,\\quad A_i\\subset A=\\left[ \\left(\\vec{f}^{\\Delta_{\\sigma}}_{\\Delta_{{\\cal O}_I},\\ell_{{\\cal O}_I}}\\right)^{mn} \\right]_{N\\times M}.\n\\end{eqnarray}\nHere the ``rows'' of $A$ would run over different choices of $N$ derivatives $mn$ and the ``columns'' run over the $N$ unknown scaling dimensions. Note that the set of unknown scaling dimensions will include the external dimension $\\Delta_{\\sigma}$ in addition to the ${\\cal O}_I$, but may exclude exchanged operators of known dimension, such as the stress tensor with $\\Delta_T = d$. The general strategy is to solve the determinant conditions Eq.~(\\ref{eq:det}) to obtain an approximate spectrum of scaling dimensions, and then use the system in Eq.~(\\ref{eq:linearsystem}), including the inhomogeneous equation, in order to fix the OPE coefficients.\n\nA big advantage of the truncation approach over the linear and semidefinite programming approaches of the previous sections is that it does not require unitarity, i.e.~it works equally well for any sign of the OPE coefficients. For example, the idea has been successfully applied to the nonunitary Lee-Yang model, as well as to bulk-boundary bootstrap problems where there is no positivity in the coefficients. Another advantage is that it is relatively simple to implement, and the idea can be explored e.g.~using fairly simple \\texttt{Mathematica} notebooks. \n\nOn the other hand, we also see several disadvantages with this approach in its current incarnation. 
One is that the resulting spectrum can have a strong sensitivity to the set of included operators (e.g., the choices of spins) and to the set of derivatives included. It is also very difficult to assign reliable errors to the spectrum output from the method.\\footnote{Comparison with the rigorous results obtained using the linear and semidefinite programming methods, when possible, shows that the published truncation method errors are often underestimated.} Thus, it would be desirable to find ways to make the approach more systematic with errors under control. Some steps in this direction were recently taken in \\textcite{Li:2017ukc}. Applications to the boundary bootstrap also seem to be less sensitive to these issues~\\cite{Gliozzi:2015qsa,Gliozzi:2016cmg}. \n\nAnother issue is that simple implementations of numerical studies of the nonlinear determinant conditions~(\\ref{eq:det}), such as using the iterative Newton method implemented in \\texttt{Mathematica}'s \\texttt{FindRoot} function, do not scale very well as the number of operators increases, and the method likely needs a more efficient numerical implementation in order to push beyond $\\sim 10$ operators.\\footnote{One can view the flow method described in Sec.~\\ref{sec:flow} as a kind of more efficient implementation where additional extremality conditions have been added.} \n\nNotice that since we are truncating the spectrum, we cannot generally expect to find \\emph{exact} solutions of Eq.~\\reef{eq:det}. On the other hand, the set of determinant conditions is in fact redundant because of the Pl\\\"ucker relations satisfied by the minors of a matrix, see \\textcite{Hikami:2017hwv}. 
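The linear-algebra skeleton of the method can be exercised in a toy setting, with simple exponentials standing in for conformal blocks and the external dimension held fixed (both purely illustrative assumptions): one plants a two-operator "spectrum", forms the augmented matrix of derivative data, solves two minor conditions for the unknown dimensions with a root finder, and then fixes the coefficients from the inhomogeneous equation:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy truncation-method exercise. Exponentials f_Delta(x) = exp(-Delta*x)
# stand in for conformal blocks (an illustrative assumption, not CFT data).
true_deltas = np.array([0.5, 2.0])
true_lams = np.array([1.0, 0.3])
M = 4  # number of derivatives kept, M > N = 2 unknown dimensions

def deriv_column(delta):
    # m-th derivative of exp(-delta*x) at x = 0 is (-delta)^m
    return np.array([(-delta) ** m for m in range(M)])

# "Identity" data vector, playing the role of the inhomogeneous equation.
b = true_lams[0] * deriv_column(true_deltas[0]) + \
    true_lams[1] * deriv_column(true_deltas[1])

def minors(deltas):
    # The augmented M x 3 matrix [f(Delta_1) | f(Delta_2) | b] must drop to
    # rank 2, so its 3 x 3 minors vanish; two of them fix the two dimensions.
    A = np.column_stack([deriv_column(deltas[0]), deriv_column(deltas[1]), b])
    return [np.linalg.det(A[[0, 1, 2]]), np.linalg.det(A[[1, 2, 3]])]

deltas = fsolve(minors, x0=[0.45, 2.1])
lams, *_ = np.linalg.lstsq(
    np.column_stack([deriv_column(d) for d in deltas]), b, rcond=None)
print(deltas, lams)  # recovers the planted dimensions and coefficients
```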
A cleaner numerical formulation can be obtained by replacing Eq.~\\reef{eq:det} with the problem of minimizing the smallest singular value of the matrix $A$~\\cite{Esterlis:2016psv,LeClair:2018edq}.\n\nFinally, similarly to the extremal spectra methods above, it is not clear which CFT spectra are ``truncable'' in the sense that they can be found with this approach.\n\n\n\n\n\n\\subsubsection{General results}\n\\label{sec:ON-general}\n\nAs discussed in Sec.~\\ref{sec:global}, correlation functions of CFT operators that are in irreducible representations of the global symmetry $G$ can be organized using group theory and decomposed into different $G$-invariant tensor structures. Sec.~\\ref{sec:crossing} explained how these structures enter the crossing relations. The first numerical analyses of the resulting equations occurred in the context of 4d CFTs,\\footnote{See \\textcite{Poland:2010wg}, \\textcite{Rattazzi:2010yc}, \\textcite{Vichi:2011ux}, and \\textcite{Poland:2011ey}.} but the group theoretic structure is $d$-independent. The bootstrap for $O(N)$ symmetry in 3d was investigated by \\textcite{Kos:2013tga, Kos:2015mba,Kos:2016ysd} and \\textcite{Nakayama:2014yia}.\n\nWe will start our analysis assuming that the CFT contains an operator $\\phi\\equiv(\\phi_a)_{a=1}^N$ \nin the fundamental representation of $O(N)$, of dimension $\\Delta_\\phi$. Mimicking the discussion in Sec.~\\ref{sec:Z2-general}, we would like to learn about the operators in the OPE $\\phi_{a} \\times \\phi_{b}$. By group theory, operators of even spin $\\ell$ in this OPE will transform as $O(N)$ singlets or symmetric traceless tensors of rank 2, while odd-spin operators will transform in the rank-2 antisymmetric representation. \n\nFrom the crossing relations for the 4pt function of $\\phi$ one can put upper bounds on the dimensions of various operators. 
For the lowest dimension scalars ($\\ell=0$) in the singlet ($s$) and symmetric traceless tensor ($t$) sectors, these bounds are shown in Figs.~\\ref{fig:ONsinglet}, \\ref{fig:ONsymtrace} as a function of $\\Delta_\\phi$ for various values of $N$. The ``kinks'' in these bounds will be interpreted in the next section. \n\\begin{figure}\n\\includegraphics[width=\\figwidth]{fig20-singletbound.pdf}\n\\caption{\\label{fig:ONsinglet}(Color online) Upper bound on the dimension of $s$ \\cite{Kos:2013tga}.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\figwidth]{fig21-symtracelessbound.pdf}\n\\caption{\\label{fig:ONsymtrace}(Color online) Upper bound on the dimension of $t$ \\cite{Kos:2013tga}.}\n\\end{figure}\n\nThe $\\phi\\times\\phi$ OPE also contains two interesting operators of spin $\\ell\\geqslant 1$: the stress tensor $T$ and the conserved current $J$. Using the bootstrap one can put lower bounds on their two-point function coefficients $C_T$ and $C_J$ (defined in Sec.~\\ref{sec:ward}) given in Figs.~\\ref{fig:ONCT}, \\ref{fig:ONCJ}. This is done by bounding from above the OPE coefficients $\\lambda_{\\phi\\phi T}\\propto \\Delta_{\\phi}\/\\sqrt{C_T}$ and $\\lambda_{\\phi\\phi J}\\propto 1\/\\sqrt{C_J}$ {(see Eqs.~(\\ref{eq:lambdaT}, \\ref{eq:lambdaJ}))}.\n\nLet's discuss the monotonicity of these bounds with $N$. Since $O(N+1)\\supset O(N)$, the bounds on $C_T$, $C_J$, and on $\\Delta_t$ should get stronger with increasing $N$, and indeed they do (notice that $C_T$ is plotted divided by $N$). Although it may seem counterintuitive that the $\\Delta_s$ bound gets weaker with $N$, there is no contradiction. The point is that the symmetric traceless tensor of $O(N+1)$ contains a singlet $\\tilde s$ when decomposed with respect to $O(N)$. 
Therefore the only constraint is that the $O(N)$ singlet bound should be weaker than the $O(N+1)$ symmetric traceless bound, which is satisfied by inspection.\n\nNotice also that the scaling of the $C_T$, $C_J$ bounds with $N$ close to $\\Delta_\\phi=1\/2$ is consistent with the fact that in the theory of $N$ free scalars, $C_T$ grows linearly with $N$ while $C_J$ is constant.\n\n\\begin{figure}\n\\includegraphics[width=\\figwidth]{fig22-CT.pdf}\n\\caption{\\label{fig:ONCT}(Color online) Lower bound on $C_T$ computed under the assumption $\\Delta_s, \\Delta_t\\geqslant 1$ \\cite{Kos:2013tga}.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\figwidth]{fig23-CJ.pdf}\n\\caption{\\label{fig:ONCJ}(Color online) Lower bound on $C_J$ \\cite{Nakayama:2014yia}.}\n\\end{figure}\n\n\\subsubsection{Critical $O(N)$ model}\n\nThe most famous 3d CFT with $O(N)$ symmetry is the critical point of the $O(N)$ lattice model, which is the generalization of Eq.~(\\ref{eq:Ising}) to $N$-component spins satisfying the constraint $|\\vec{s}|=1$.\nThis CFT is also known as the Wilson-Fisher fixed point, being an IR fixed point of the $O(N)$ symmetric scalar field theory with quartic interaction~\\cite{Wilson:1971dc}. For any integer $N$ this 3d CFT is unitary, given that the microscopic realizations are unitary.\\footnote{\\label{note:ONnonunitary}Sometimes one discusses analytic continuation of $O(N)$ models to noninteger $N$. These analytic continuations are nonunitary~\\cite{Maldacena:2011jn}, and fall outside the range of validity of the linear\/semidefinite methods. Although such attempts were made~\\cite{Shimada:2015gda}, we would advise caution. Here we will only consider integer $N\\geqslant 2$. We will discuss nonunitary CFTs in Sec.~\\ref{sec:nonunitary}.}\n\nIt's natural to ask where the critical $O(N)$ models lie in the parameter space of $O(N)$ symmetric CFTs allowed by the general bounds from the previous section. 
In the Wilson-Fisher description, $\\phi_{a}$ is the fundamental scalar field appearing in the Lagrangian, $s=\\phi^2$, and $t$ is the traceless part of $\\phi_a \\phi_b$. Dimensions of these fields have been previously estimated using RG methods (in particular the $\\epsilon$-expansion and the large $N$ expansion), Monte Carlo studies, and experiments. Comparing the $s$ and $t$ bounds with these prior determinations, marked with crosses in Figs.~\\ref{fig:ONsinglet}, \\ref{fig:ONsymtrace}, one is led to conjecture that the critical $O(N)$ models correspond to the ``kinks''. Similar kink-like features are visible in the lower bounds on $C_J$ and $C_T$. In the latter case the kinks can be made sharper by imposing that the $s$ operator saturate the gap, see Fig.~5 in \\textcite{Kos:2013tga}. This conjecture can be used to extract values of the $\\phi$, $s$, $t$ dimensions and of $C_T$, given in Table 3 of \\textcite{Kos:2013tga}. \n\nWe will now discuss how to isolate the critical $O(N)$ models without relying on the kink conjecture. The idea is to exploit the crucial physical feature of these CFTs --- that they possess robust gaps in the operator spectrum. The singlet scalar $s$ corresponds to the temperature deformation of the critical point and is relevant. The next singlet scalar, $s'$, must necessarily be irrelevant (otherwise the critical point would be multicritical), implying the gap $\\Delta_{s'}\\geqslant 3$ in the singlet scalar sector. We also expect a gap in the fundamental representation scalar sector. The order parameter $\\phi_{a}$ belongs to this sector and is relevant, while most likely the next fundamental scalar is irrelevant: $\\Delta_{\\phi'}\\geqslant 3$. 
This can also be deduced from the Wilson-Fisher description, using a nonrigorous but suggestive equation of motion argument~\\cite{Kos:2015mba}.\n\n\\textcite{Kos:2015mba}~studied bootstrap constraints for the system of three correlators \n{$\\{\\langle \\phi_a\\phi_b\\phi_c\\phi_d\\rangle$, $\\langle \\phi_a\\phi_b ss\\rangle$, $\\langle ssss\\rangle\\}$. } \nImposing the assumptions $\\Delta_{s'}\\geqslant 3$, $\\Delta_{\\phi'}\\geqslant 3$, they found small allowed regions (``islands'') shown in Fig.~\\ref{fig:ONarchipelago}. Improved versions of these islands for $O(2)$ and $O(3)$, discussed in the next sections, were subsequently obtained in \\textcite{Kos:2016ysd}. It's important to stress that, like in Fig.~\\ref{fig:Z2-mixed-sigpgap} for the Ising model, there are disconnected allowed regions outside the shown part of the parameter space; see e.g.~Fig.~\\ref{fig:O2singlet} below for the $O(2)$ case. These regions are practically unexplored and they might contain other interesting CFTs.\n\\begin{figure}\n\\includegraphics[width=\\figwidth]{fig24-ONarchipelago.pdf}\n\\caption{\\label{fig:ONarchipelago}(Color online) The $O(N)$ archipelago \\cite{Kos:2015mba}.}\n\\end{figure}\n\n\n\\subsubsection{Monopole bootstrap for QED${}_3$}\n\\label{sec:QED3-monopole}\n\nAn alternate approach, pursued in \\textcite{Chester:2016wrc} and \\textcite{Chester:2017vdh}, is to focus on monopole operators. When dealing with a compact $U(1)$ gauge field, these operators create topologically nontrivial configurations of the gauge field having magnetic flux emerging from a spacetime point.\\footnote{Thus they could also be called instantons, but the common terminology refers to them as monopoles.} Such operators are charged under a topological $U(1)_T$ global symmetry with symmetry current \\mbox{$J_T^{\\mu} = \\frac{1}{8\\pi} \\epsilon^{\\mu \\nu \\rho}F_{\\nu\\rho}$}. 
Taking the monopole operators to have charge $q \\in \\mathbb{Z}\/2$, the scalar monopoles transform in representations of $SU(N_f)$ corresponding to fully rectangular Young diagrams with $N_f\/2$ rows and $2|q|$ columns~\\cite{Dyer:2013fja}. Thus, the lightest scalar monopoles $M^I_{\\pm1\/2}$ are expected to be in $SU(N_f)$ representations with $N_f\/2$ fully antisymmetric indices.\\footnote{Monopoles with spin transform in other nontrivial flavor representations, see \\textcite{Chester:2017vdh}.}\n\nThe bootstrap for 4pt functions of $M^I_{\\pm1\/2}$ was studied in \\textcite{Chester:2016wrc} for $N_f = 2, 4, 6$, where they focused on placing bounds on the dimension of the second monopole operator $\\Delta_{M_{1}}$, making various assumptions about gaps in the uncharged $(q=0)$ sector. These bounds are shown in Figs.~\\ref{fig:QED3-monopole26}, \\ref{fig:QED3-monopole4}, where for $N_f=4,6$ they can be compared with the large $N_f$ estimate (black cross). Intriguingly, there is a kink-like discontinuity in the bound which comes close to the large $N_f$ estimate for certain values of the gap in the uncharged sector for operators in the same $SU(N_f)$ representation. By increasing the gap above $M_1$, the allowed region could also be turned into a peninsula around the kink. Similar bounds for the lightest spinning monopoles in the case $N_f = 4$, along with a comparison to the large $N_f$ predictions, were presented in \\textcite{Chester:2017vdh}.\n\nWhile these results are not definitive, they seem promising and show that the bootstrap for QED${}_3$ has a reasonable chance to be successful, perhaps after a few more ingredients are added. Some possible directions would be to consider a multiple correlator bootstrap involving $M_{\\pm1\/2}$, $M_{\\pm1}$, and\/or $\\bar{\\psi}_i \\psi^j$. It may also be fruitful to combine these with constraints from 4pt functions containing the $U(1)_T$ current, the $SU(N_f)$ current, or the stress tensor. 
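The quoted representation theory is easy to verify explicitly: the dimension of the $SU(N_f)$ irrep attached to a rectangular Young diagram with $N_f/2$ rows and $2|q|$ columns follows from the standard hook content formula (a small standalone utility, not code from the cited works):

```python
from fractions import Fraction

def rect_rep_dim(N, rows, cols):
    """Dimension of the SU(N) irrep labeled by a rectangular Young diagram,
    via the hook content formula: prod over cells of (N + j - i) / hook(i, j)."""
    dim = Fraction(1)
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            hook = (cols - j) + (rows - i) + 1
            dim *= Fraction(N + j - i, hook)
    return int(dim)

# Lightest scalar monopoles M_{+-1/2}: rows = N_f/2, cols = 2|q| = 1, i.e. the
# fully antisymmetric rank-(N_f/2) representation of SU(N_f).
for Nf in (2, 4, 6):
    print(Nf, rect_rep_dim(Nf, Nf // 2, 1))  # dims 2, 6, 20
```

For instance, for $N_f=4$ this gives the 6-dimensional rank-2 antisymmetric representation of $SU(4)$.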
 \n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig34a-MN2plot}(a)\n\\includegraphics[width=\\figwidth]{fig34b-MN6plot}(b)\n \\caption{\\label{fig:QED3-monopole26} (Color online) Bounds on $\\Delta_{M_1}$ in terms of $\\Delta_{M_{1\/2}}$ in $d=3$ for $N_f=2,6$ (a,b) with various assumptions on the gaps in the uncharged sector in the same $SU(N_f)$ representation as $M_1$ \\cite{Chester:2016wrc}.}\n \\end{figure}\n \n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig35a-MN4plot}(a)\n\\includegraphics[width=\\figwidth]{fig35b-MN4Daggerplot}(b)\n \\caption{\\label{fig:QED3-monopole4} (Color online) (a) is the analogue of Fig.~\\ref{fig:QED3-monopole26} for $N_f=4$. (b) starts from the $\\Delta_2\\geq3$ case of (a), and shows that placing an additional gap $\\Delta_{M'_1}$ above $\\Delta_{M_1}$ turns the kink into a peninsula \\cite{Chester:2016wrc}.}\n \\end{figure}\n \n\\subsubsection{Bosonic QED${}_3$ and deconfined quantum critical points}\n\\label{bQED3}\n\nFinally we would like to review the rich physics of bosonic QED${}_3$, where some bootstrap insights have recently been obtained. Bosonic QED${}_3$ is obtained by coupling the $U(1)$ gauge field to $N$ complex scalars $\\phi_i$ with an $SU(N)$ invariant potential $m^2 |\\phi|^2 +\\lambda (|\\phi|^2)^2$. This is also known as the $N$-component abelian Higgs model, and is believed to flow to a CFT for large enough $N$. Unlike for fermions, the boson mass term preserves all the symmetries and has to be fine-tuned to reach the fixed point.\n\nThis model has been much discussed in the condensed matter literature as the ``non-compact CP$^{N-1}$ model'' (NCCP$^{N-1}$) in connection with the phenomenon of ``deconfined criticality'' \\cite{deconfined}. 
To briefly review this connection, the physical systems of interest are certain quantum antiferromagnets in $(2+1)$ dimensions, which have a quantum phase transition between N\\'eel and Valence-Bond-Solid (VBS) phases.\\footnote{The absence of a disordered phase in such transitions can be understood using 't Hooft anomalies, see \\textcite{Komargodski:2017dmc}. This perspective also gives insight into the rich physics of interfaces in these theories \\cite{Komargodski:2017smk}.} The transition can be described by the $O(3)$ nonlinear sigma model (NLSM) for the N\\'eel order parameter, modified by the inclusion of Berry phase effects which suppress topological defects (hedgehogs); these defects will play an important role below. \n\nThe $O(3)$ NLSM can be written as the CP$^1$ model, which has a two-component complex vector $\\mathbf{z}=(z_1,z_2)$ subject to the constraint $|z_1|^2+|z_2|^2=1$ and a $U(1)$ gauge invariance $\\mathbf{z}\\sim e^{i\\phi} \\mathbf{z}$. This explains the emergence of the gauge field. Replacing the constraint by a quartic potential, and adding a Maxwell kinetic term for the gauge field (expected to be generated by the RG flow), one obtains bosonic QED$_{3}$ with $N=2$. \n\nIn the language of QED${}_{3}$, the above-mentioned topological defects are the monopole operators of quantized charge, similar to the ones in Sec.~\\ref{sec:QED3-monopole}. Of course the dimensions of monopole operators differ in bosonic and fermionic QED. Also here we will normalize the topological charge to be integer $q\\in \\bZ$. 
The $\\bZ_{q_0}$ symmetry is also visible in the VBS phase where it permutes the vacua. This microscopic symmetry means that only monopoles with charges that are multiples of $q_0$ appear. Monopoles with other charges have their fugacity killed by the above-mentioned Berry phases~\\cite{Read-Sachdev}.\n\nIn light of the above discussion, the analysis of the critical behavior of QED${}_3$ can be split into two parts. First, does bosonic QED${}_3$, with all monopoles suppressed, have a fixed point?\nIf the answer is yes, then one can ask: can this fixed point be reached provided that one allows monopoles with charges in multiples of $q_0$? For this to happen, the monopole of charge $q_0$ has to be irrelevant.\n \nOne can study these questions analytically at large $N$: one finds a fixed point and computes the critical exponents in the $1\/N$ expansion.\\footnote{See \\textcite{Murthy:1989ps}, \\textcite{Kaul:2008xw}, \\textcite{Metlitski:2008dw}, and \\textcite{Dyer:2015zha}.} At small $N$ one resorts to Monte Carlo simulations. The bootstrap at present cannot by itself resolve the question of the existence of the fixed point. However, it can provide valuable consistency checks on the other studies. Suppose that a certain Monte Carlo simulation is done on a lattice preserving a $\\bZ_{q_0}$ subgroup, finds a second order phase transition, and measures the scaling dimensions $\\Delta_q$ of monopole operators $M_q$ for a subset of charges $q$ (we denote by $M_0$ the relevant singlet scalar driving the transition). We have the following OPE algebra in the scalar sector, omitting the OPE coefficients ($M_{-q}=M_q^\\dagger$):\n\\beq\nM_{q} \\times M_{q'} \\sim \\delta_{q+q'}\\mathds{1}+ M_{q+q'}+\\ldots\\,.\n\\end{equation}\nBy the above discussion, the operator $M_{q_0}$ has to be irrelevant, as well as the higher charge monopoles. We can use the bootstrap to study the consistency of this algebra given the measured operator dimensions. 
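The selection rules behind this setup are elementary to make explicit. In the sketch below (hypothetical helper functions, not from the cited literature), the topological subgroup preserved by a set of condensed monopole charges is $\mathbb{Z}_n$ with $n$ the gcd of the charges, and the OPE algebra above closes on charge sums and conjugates:

```python
from math import gcd
from itertools import combinations_with_replacement

def preserved_subgroup(action_charges):
    """A monopole term of charge q in the action breaks U(1)_T to Z_q; several
    terms preserve Z_n with n the gcd of their charges (n = 0: full U(1)_T)."""
    n = 0
    for q in action_charges:
        n = gcd(n, abs(q))
    return n

def ope_charges(qs):
    """Charges generated by M_q x M_q' ~ delta_{q+q'} 1 + M_{q+q'} + ...,
    including the conjugates M_{-q} = M_q^dagger."""
    full = set(qs) | {-q for q in qs}
    return {q1 + q2 for q1, q2 in combinations_with_replacement(sorted(full), 2)}

# Cubic lattice: only charges that are multiples of q0 = 4 survive the Berry
# phases, so the preserved topological symmetry is Z_4 ...
print(preserved_subgroup([4, 8]))   # 4
# ... and the OPE algebra of the surviving monopoles closes on multiples of 4.
print(sorted(ope_charges([4])))     # [-8, 0, 8]
```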
\n\n\\subsubsection{Aside: constraints on symmetry enhancement}\n\\label{sec:enhancement}\n\nWhat we just presented is an instance of a more general question: under which conditions can the global symmetry of the fixed point $G$ be larger than the microscopically realized symmetry $H$? The case of interest for the previous section is $G=U(1)$ and $H=\\bZ_{q_0}$. For the symmetry enhancement to happen, operators which break $G$ to $H$ must be irrelevant. The bootstrap is a powerful tool to study whether this irrelevance assumption is consistent with conformal symmetry and with other information which may be available about the fixed point. We will see further applications of this philosophy in Secs.~\\ref{sec:4D-bsm} and~\\ref{sec:4Dwindow}.\n\nWe will now describe bootstrap constraints on the symmetry enhancement from $\\bZ_{q_0}$ to $U(1)$ derived by~\\textcite{Nakayama:2016jhq}. Enhancement from $\\bZ_2$ requires that $M_2$ is irrelevant. Since $M_1\\times M_1\\sim M_2$, one can bound $\\Delta_2$ given $\\Delta_1$, by studying the 4pt function $\\langle M_1 M_1^\\dagger M_1 M_1^\\dagger\\rangle$. The resulting bound is given in Fig.~\\ref{fig:QED3-emergent}. Imposing $\\Delta_2>3$, one gets a necessary condition \\mbox{$\\Delta_1>1.08$} for enhancement from $\\bZ_2$ to $U(1)$. \n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig36-charge_2}\n \\caption{\\label{fig:QED3-emergent} (Color online) The 3d upper bound on $\\Delta_2$ as a function of $\\Delta_1$~\\cite{Nakayama:2016jhq}. It may be possible to improve this bound if $\\Delta_0$ is known. The same bound applies to $M_2\\times M_2\\sim M_4$.}\n \\end{figure}\n \n The same plot in Fig.~\\ref{fig:QED3-emergent} can be used to derive rough necessary conditions on the enhancement from $\\bZ_4$ to $U(1)$. Indeed, the bound applies also to $M_2\\times M_2\\sim M_4$. If $M_4$ is irrelevant, then we must have $\\Delta_2>1.08$, which in turn implies $\\Delta_1>0.504$. 
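The chaining logic of these necessary conditions is simple enough to sketch numerically. The bound curve below is synthetic monotonic stand-in data, not the actual bound of the figure; the sketch only shows how one inverts a monotonic upper bound $\Delta_2^{\max}(\Delta_1)$ to turn irrelevance of $M_2$ into a condition on $\Delta_1$, and then reuses the same curve for $M_2\times M_2\sim M_4$:

```python
import numpy as np

# Synthetic, monotonically increasing stand-in for the bound Delta2_max(Delta1);
# NOT the actual bound curve, which must be read off the published figure.
d1_grid = np.linspace(0.5, 2.0, 151)
d2_max = 1.0 + 1.8 * (d1_grid - 0.5) ** 1.1  # hypothetical bound data

def min_d1_for(d2_required):
    """Smallest Delta1 whose bound still allows Delta2 >= d2_required,
    obtained by inverting the monotonic bound curve via interpolation."""
    return float(np.interp(d2_required, d2_max, d1_grid))

# Enhancement from Z_2: M_2 must be irrelevant, i.e. Delta2 > 3.
d1_min = min_d1_for(3.0)
# Enhancement from Z_4: the same bound applied to M_2 x M_2 ~ M_4 turns
# "M_4 irrelevant" into Delta2 > min_d1_for(3.0), which in turn gives a
# weaker necessary condition on Delta1.
d1_min_z4 = min_d1_for(d1_min)
print(d1_min, d1_min_z4)  # roughly 1.60 and 0.87 for this synthetic curve
```

With the real bound curve the same two inversions reproduce the chain quoted in the text, $\Delta_2>1.08$ and then $\Delta_1>0.504$.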
 \n \nTo study enhancement from $\\bZ_3$, one analyzes simultaneously three 4pt functions $\\langle M_1 M_1^\\dagger M_1 M_1^\\dagger\\rangle$, $\\langle M_1 M_1^\\dagger M_2 M_2^\\dagger\\rangle$, $\\langle M_2 M_2^\\dagger M_2 M_2^\\dagger\\rangle$. It is reasonable to assume that $M_4$ is irrelevant (as would be the case if $M_3$ is irrelevant and $\\Delta_q$ is monotonic in $q$), and to impose $\\Delta_0>1.044$ (which follows from an assumption that the fixed point is critical and not multicritical, see Sec.~\\ref{sec:multicrit}). Under these assumptions, the upper bound on $\\Delta_3$ as a function of $\\{\\Delta_1,\\Delta_2\\}$ is shown in Fig.~\\ref{fig:QED3-emergent-ch3}. From this bound, irrelevance of $M_3$ requires $\\Delta_1>0.585$. \n \n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig37-charge3}\n \\caption{\\label{fig:QED3-emergent-ch3} (Color online) An upper bound on $\\Delta_3$ as a function of $\\{\\Delta_1,\\Delta_2\\}$ under the assumptions that $\\Delta_0>1.044$, $\\Delta_4>3$ \\cite{Nakayama:2016jhq}. It follows from Fig.~\\ref{fig:QED3-emergent} that the range of $\\Delta_2$ is restricted by the latter assumption from below, and, for fixed $\\Delta_1$, from above.}\n \\end{figure}\n\n\\subsubsection{Back to deconfined criticality: is the transition second order?}\n \n The necessary conditions described in the previous section have been compared with available Monte Carlo and large $N$ data on the N\\'eel-VBS transition which claim to see a second-order transition and measure some critical exponents. For square and hexagonal lattices, there is nice consistency, as is the case for rectangular lattices with $N\\leqslant 4$ and $N\\geqslant 6$, while some $N=5$ simulations are inconsistent with the bootstrap. The conclusion is that there must either be an error in the $N=5$ Monte Carlo measurement or in the assumption that the transition is second-order. See \\textcite{Nakayama:2016jhq} for this survey and for further details. 
 \n\nIt should be emphasized that while the bootstrap results may point out an inconsistency in Monte Carlo simulations, they cannot, at present, validate them and prove that the phase transition is indeed second order. It is possible that even in the above cases, where there is nice agreement between Monte Carlo results and the bootstrap necessary conditions, the transition is still very weakly first order and not second order. \n\nLet us focus on the case $N=2$, which presents a controversy. Large-scale Monte Carlo simulations for $N=2$ were performed in \\textcite{Nahum:2015jya}, using a loop model on a cubic lattice which is in the same universality class as NCCP${}^1$ and has monopole suppression up to $q_0=4$, going to very large lattices of linear size up to $L=640$.\\footnote{See also \\textcite{Harada2013, Sreejith:2018ypg} for simulations of other microscopic models in the same universality class.} While they did not see signs of a finite correlation length or a conventional first order transition, and observed scaling behavior of correlation functions at distances $1\\ll r \\ll L$, they did see scaling violations for observables at larger distances $r\\sim L$, inconsistent with a conventional second order transition. \n\nSo, is the transition second order or weakly first order? Assuming a second order transition, \\textcite{Nahum:2015jya} extracted the scaling dimension of the monopole operator $\\Delta_1=0.625(15)$, which is consistent with the bootstrap condition $\\Delta_1>0.504$ necessary for the enhancement from $\\bZ_4$ to $U(1)$. However there is an extra piece of information which allows one to set up an even more stringent bootstrap test: further symmetry enhancement at the transition from $SO(3)\\times U(1)$ to $SO(5)$. Here $SO(3)$ acts on the N\\'eel order parameter $N_a = \\mathbf{z}^\\dagger\\sigma_a \\mathbf{z}$. 
Empirically, the scaling dimension of $N$ is very close to $\\Delta_1$~\\cite{Nahum:2015jya} and, moreover, the joint probability distribution of $(N, M_1)$ is very close to the spherical one after a rescaling~\\cite{Nahum:2015vka}, which can be explained if $N$ and $M_1$ belong to a vector multiplet $\\Phi$ of $SO(5)$ of dimension $\\Delta_\\Phi=\\Delta_1$. \n\nIn this description, the relevant scalar which drives the transition is a component of the symmetric traceless tensor (roughly $\\Phi_A\\Phi_B-\\text{trace}$).\\footnote{\\textcite{Nahum:2015vka} measured its scaling dimension to be $\\sim 1.5$.} For the $SO(5)$ enhancement to happen, any other scalar which breaks $SO(5)$ back to $SO(3)\\times U(1)$ must be irrelevant. In addition, the $SO(5)$ singlet $S$ (roughly $\\Phi_A\\Phi^A$) must be irrelevant for the transition to be second order, since otherwise the fixed point will not be reached. See \\textcite{Wang:2017txt} for further discussion. Given the dimension $\\Delta_\\Phi=\\Delta_1$ as above, it is straightforward to compute an upper bound on the dimension of $S$ which occurs in the OPE $\\Phi\\times\\Phi$. This is the same bound as for $N=5$ in Fig.~\\ref{fig:ONsymtrace} except the plot has to be extended to larger $\\Delta_\\Phi$. \\textcite{Nakayama:2016jhq} and~\\textcite{DSD2016} performed this analysis and report that $\\Delta_S>3$ is excluded for $\\Delta_\\Phi$ as above. In fact $\\Delta_S>3$ requires $\\Delta_\\Phi>0.76$~\\cite{Nakayama2016}.\n\nTo summarize, the bootstrap excludes a second-order phase transition described by a unitary 3d CFT with symmetry enhanced to $SO(5)$ and the order parameter scaling dimension taking the above value suggested by the Monte Carlo simulations. 
\nIn our opinion, the most compelling interpretation of available data is a weakly first-order transition due to walking RG behavior which ensues when the RG flow has no fixed points for a real value of the coupling but two complex conjugate fixed points with small imaginary parts. This is the same mechanism as for the weakly first-order transition in the 2d Potts model with $Q\\gtrsim 4$. As discussed in \\textcite{Nahum:2015jya}, this scenario may resolve the observed scaling violations at distances $r\\sim L$. It can also accommodate the enhancement to $SO(5)$~\\cite{Wang:2017txt}. In this scenario, there is no unitary 3d CFT (the complex fixed points being nonunitary), and the bootstrap bounds do not apply, resolving the contradiction.\n\nFinally, let us note that a similar analysis can constrain another scenario outlined in \\textcite{Wang:2017txt}, in which a variant called the easy-plane NCCP${}^1$ model is conjectured to have a fixed point with enhanced $O(4)$ symmetry and be dual to $N_f=2$ fermionic QED${}_3$. In this scenario, the conjectured fixed point cannot contain any fully $O(4)$-invariant scalar perturbations. However, recent Monte Carlo simulations~\\cite{Qin:2017cqw} point to a dimension for the $O(4)$ vector order parameter $\\Delta_\\Phi = 0.565(15)$ which seems incompatible with the bootstrap bound assuming irrelevance of the $O(4)$ singlet (Fig.~\\ref{fig:ONsinglet} extended further to the right), which requires $\\Delta_\\Phi > 0.868$~\\cite{PolandUnpublished}. In the future it will be interesting to further study the fate of these models using both bootstrap and Monte Carlo data.\n\n\n\\subsubsection{Multifield Landau-Ginzburg models}\n\nThere exists rich phenomenology of fixed points arising from Lagrangians with multiple scalar fields transforming under product group symmetries, e.g.~$SO(n) \\times SO(m)$. 
One can consider Lagrangians involving two coupled scalar multiplets, one transforming in the fundamental of $SO(n)$ and another of $SO(m)$. {Alternatively,} one can consider a field transforming in the bifundamental of $SO(n) \\times SO(m)$. Such Lagrangians have been invoked to describe phase transitions in many physical systems{; see \\textcite{Vicari:2007ma} for further details.}\n\n{When studying these fixed points using the RG,} a recurrent feature is that many of {them} do not exist in the $4-\\epsilon$ expansion and have to be studied directly in 3d. Since such computations lack a manifestly small expansion parameter, there seems to be no consensus about the existence of these fixed points. So this appears to be a perfect target for a nonperturbative approach like the bootstrap. Some preliminary bootstrap studies of 3d CFTs with \\mbox{$SO(n) \\times SO(m)$} were carried out in \\textcite{Nakayama:2014lva,Nakayama:2014sba}, but in our opinion more work is needed before firm conclusions can be drawn.\n\n\\subsubsection{Projective space models}\n\nAn interesting 3d lattice model is the CP$^{n}$ model, where microscopic lattice variables belong to CP$^{n}$ and have ferromagnetic interactions preserving the symmetry (see below for the antiferromagnetic case). Recall that CP$^{n}$ can be realized by starting with $(n+1)$-dimensional complex vectors $\\mathbf{z}=(z_1,\\ldots,z_{n+1})$ and imposing the constraint $\\mathbf{z}^\\dagger \\cdot \\mathbf{z} =1$, preserved up to the equivalence $\\mathbf{z}\\sim e^{i\\phi}\\mathbf{z}$. A simple lattice Hamiltonian is \n\\beq\nH=-J\\sum_{\\langle i j\\rangle} |\\mathbf{z}^\\dagger_i \\cdot \\mathbf{z}_j|^2\\,,\n\\end{equation}\nwith $J>0$ in the considered ferromagnetic case. The physics of this model is influenced by defects (hedgehogs), which are possible because $\\pi_2(CP^{n})=\\mathbb{Z}$. Here we consider the CP$^{n}$ model with defects allowed. 
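To make the Hamiltonian concrete, here is a minimal numerical sketch (our own illustration, not from the text; the lattice size, coupling, and random configuration are arbitrary) that evaluates $H$ on a small periodic cubic lattice. Since the energy depends only on $|\mathbf{z}^\dagger_i \cdot \mathbf{z}_j|^2$, it is manifestly invariant under the site-wise gauge redundancy $\mathbf{z}_i\sim e^{i\phi_i}\mathbf{z}_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 4, 2  # 4^3 periodic cubic lattice, CP^n spins with n+1 complex components

# Random configuration obeying the constraint z^dagger . z = 1 at every site
z = rng.normal(size=(L, L, L, n + 1)) + 1j * rng.normal(size=(L, L, L, n + 1))
z /= np.linalg.norm(z, axis=-1, keepdims=True)

def energy(z, J=1.0):
    """H = -J sum_<ij> |z_i^dagger . z_j|^2 over nearest-neighbor bonds."""
    E = 0.0
    for axis in range(3):  # three lattice directions, periodic boundaries
        overlap = np.sum(np.conj(z) * np.roll(z, -1, axis=axis), axis=-1)
        E -= J * np.sum(np.abs(overlap) ** 2)
    return E

# Gauge invariance: an arbitrary phase at each site leaves the energy unchanged
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(L, L, L, 1)))
assert np.isclose(energy(z * phases), energy(z))
```

For $J>0$ (the ferromagnetic case considered here) the energy favors aligned neighboring projective spins; taking $J<0$ gives the antiferromagnetic models discussed below.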
It should be distinguished from the ``non-compact CP$^{n}$ model\" which results when defects are suppressed, see Sec.~\\ref{bQED3}.\n\nThe CP${}^1$ model is equivalent to the $O(3)$ model, with the order parameter $N_a=\\mathbf{z}^\\dagger \\sigma_a \\mathbf{z}$, and it has a second-order phase transition described by the same CFT.\n\nThe CP${}^2$ model has an internal symmetry $SU(3)$ (modulo global issues), with traceless hermitian matrix $Q_{ab}=z_a \\bar{z}_b-\\delta_{ab}$ as an order parameter. The Landau-Ginzburg description contains a cubic invariant ${\\rm Tr}(Q^3)$ and would suggest a first-order transition, but Monte Carlo simulations \\cite{Nahum:2013qha} indicate that the phase transition is continuous. This is similar to what happens for the 3-state Potts model in 2d and can be explained as an effect of fluctuations. Monte Carlo results for the critical exponents are $\\eta=0.23(2)$ and $\\nu=0.536(13)$, translating into the dimensions of $Q$ and of the relevant singlet scalar. Can this model be isolated using the numerical bootstrap?\n\nOne can also consider antiferromagnetic projective space models, taking $J<0$ in the above Hamiltonian.\nAntiferromagnetic CP$^{n}$ models \\cite{Delfino:2015gba} don't give rise to new universality classes.\\footnote{The ACP$^{1}$ model on a cubic lattice is equivalent to the ferromagnetic model and, as the latter, has a phase transition in the $O(3)$ universality class. For higher $n$ there is no equivalence between the antiferromagnetic and ferromagnetic models. The ACP${}^2$ model has a second-order transition which belongs to the $O(8)$ universality class (and so is different from CP${}^2$). For still higher $n$ the transition is first order.} On the other hand, a new class is observed for the antiferromagnetic RP${}^4$ model \\cite{Pelissetto:2017pxb}, and it could constitute a target for the bootstrap.\\footnote{The RP${}^n$ models are versions of $O(n)$ models with a gauged $\\bZ_2$ symmetry. 
Their second-order phase transitions for $n=2,3$ belong to the $O(2)$ and $O(5)$ classes respectively, but the $n=4$ class is mysterious.}\n\n\\subsubsection{Nonabelian gauge and Chern-Simons matter theories}\n\nWhile we have focused our attention on QED${}_3$, there is a whole landscape of 3d gauge theories coupled to various types of matter. An interesting case is QCD${}_3$ with a simple gauge group $G$ coupled to $N_f$ fundamental fermions. Such theories may for example play a role in the physics of cuprate superconductors~\\cite{Chowdhury:2014efa,Chowdhury:2014jya}. A fixed point can be established and the properties studied at large $N_f$~\\cite{Appelquist:1989tc}. For example, in \\textcite{Dyer:2013fja} a systematic study of monopole operators in such theories was performed, allowing for estimates of the bottom of the conformal window for different choices of $G$ by imposing irrelevance of the monopole operators~\\cite[{Table 4}]{Dyer:2013fja}. QCD${}_3$ coupled to both fermions and scalars was also proposed to describe the critical point of the `orthogonal semi-metal' (OSM) confinement transition in \\textcite{2018arXiv180401095G}, with critical exponents extracted in quantum Monte Carlo simulations. It would be very interesting to understand how to isolate these theories using bootstrap techniques and test these estimates. \n\nAnother natural set of targets consists of Chern-Simons gauge fields coupled to matter. Such theories are known to have conformal fixed points and sit in an intricate web of dualities.\\footnote{See for example \\textcite{Aharony:2015mjs}, \\textcite{Aharony:2016jvv}, \\textcite{Seiberg:2016gmd}, \\textcite{Hsin:2016blu}, \\textcite{Benini:2017dus}, and \\textcite{Gomis:2017ixy}. Duality means that two different microscopic descriptions lead to the same IR CFT (perhaps after tuning some parameters). Why should dualities exist? One reason may be the paucity of CFTs. 
If so, some dualities may perhaps be explained by the bootstrap, providing evidence that there is a single CFT satisfying certain constraints (symmetry, the number of relevant operators, etc). Then any microscopic theory satisfying these constraints should flow to this CFT at criticality. In this sense, the results of Section \\ref{sec:O2} provide an explanation for the particle-vortex duality of the Abelian Higgs model, originally proposed by \\textcite{Peskin:1977kp} and \\textcite{Dasgupta:1981zz}.} Some possible experimental realizations of these theories as transitions between fractional quantum Hall states were proposed in \\textcite{Lee:2018udi}. A hallmark of these theories is the existence of parity-violation; it would be interesting to see if they can be found after introducing parity-violating couplings into the bootstrap. Monopole operators in these theories were also recently studied in~\\textcite{Chester:2017vdh} and would constitute natural targets for the bootstrap.\n\n\\subsubsection{Other models}\n\nAnother theory briefly mentioned in Sec.~\\ref{sec:O3} is the Gross-Neveu-Heisenberg (GNH) model, a variant of the GN models with a 3-component scalar order parameter. For a pedagogical review of the model, its applications, and its connection to the lattice Hubbard model, see \\textcite{Sachdev:2010uz}. This constitutes another interesting target for the bootstrap.\n\nA 3d CFT with $SU(4)$ global symmetry and an order parameter in the symmetric tensor representation was considered in \\textcite{Basile:2004wa,Basile:2005hw}. It was proposed to describe a continuous chiral phase transition in 4d $SU(N)$ gauge theory coupled to $N_f=2$ massless quarks in the adjoint representation at finite temperature. The existence of this CFT and some information about critical exponents was found using RG methods; it would be interesting to explore it using the bootstrap. 
\n\n\n\\subsubsection{The Casimir equation}\n\\label{sec:casimir}\n\n{Let us consider} the following alternative representation of CPWs. \nIn radial quantization, as mentioned in Sec.~\\ref{sec:OPE}, the above 4pt function is expressed as a scalar product of two states \n\\beq\n\\<\\phi_3(x_3)\\phi_4(x_4)|\\phi_1(x_1)\\phi_2(x_2)\\>\\,\n\\end{equation}\nliving on a sphere separating $x_1,x_2$ from $x_3,x_4$. The CPW then corresponds to inserting an orthogonal projector $\\calP_{\\Delta,\\ell}$ {onto} the conformal multiplet of ${\\cal O}_{\\Delta,\\ell}$:\n\\beq\n\\lambda_{12{\\cal O}}\\lambda_{34{\\cal O}}\\,{\\rm W}_{\\cal O} = \\<\\phi_3(x_3)\\phi_4(x_4)|\\calP_{\\Delta,\\ell}|\\phi_1(x_1)\\phi_2(x_2)\\>\\,.\n\\label{eq:CPWrad}\n\\end{equation}\nFor future reference, the projector can be written as\n\\beq \n\\label{eq:proj}\n\\calP_{\\Delta,\\ell}=\\sum_{\\alpha,\\beta={\\cal O},P{\\cal O},PP{\\cal O},\\ldots}|\\alpha\\>G^{\\alpha\\beta}\\<\\beta|\\,,\n\\end{equation}\nwhere $G_{\\alpha\\beta}=\\<\\alpha|\\beta\\>$ is the Gram matrix of the multiplet and $G^{\\alpha\\beta}$ is its inverse.\n\nFurthermore, consider the quadratic Casimir\\footnote{The quartic Casimir operator \n\t${\\mathcal C}_4 = \\frac12 {\\cal J}_{AB}{\\cal J}^{BC}{\\cal J}_{CD}{\\cal J}^{DA}$ has also proved useful in some conformal block studies \\cite{DO3, Hogervorst:2013kva}\\,.}\n\\begin{eqnarray}\n\\mathcal C_2 = \\frac12 {\\cal J}_{AB}{\\cal J}^{BA}\\,,\n\\end{eqnarray}\nwhere ${\\cal J}_{AB}$ are the $SO(d+1,1)$ generators, Eq.~(\\ref{eq:SODalgebra}). Insert this operator into Eq.~\\reef{eq:CPWrad} right after $\\calP_{\\Delta,\\ell}$. 
The resulting expression can be computed in two ways.\nWhen we act with $\\mathcal C_2$ on the left we have\n\\beq\n\\calP_{\\Delta,\\ell}\\,{\\mathcal C}_2=C_{\\Delta,\\ell} \\calP_{\\Delta,\\ell}\\,,\n\\end{equation}\nwhere $C_{\\Delta,\\ell}$ is the quadratic Casimir eigenvalue: \n\\begin{eqnarray}\nC_{\\Delta,\\ell} = \\Delta(\\Delta-d) + \\ell(\\ell+d-2)\\,.\n\\end{eqnarray}\nOn the other hand, the action of $\\mathcal C_2$ on the right can be computed using the representation of the conformal generators on primaries as first-order differential operators, mentioned in Sec.~\\ref{sec:primaries}.\nWe conclude that the CPW, and hence the conformal block, satisfies a second-order partial differential equation.\\footnote{We followed the presentation in \\cite[{section 9.3}]{Simmons-Duffin:2016gjk}. The same conclusion can be reached using the OPE \\cite{Costa:2011dw}.} The actual form of this ``Casimir equation\" is most conveniently found using the embedding formalism \\cite{DO2}. In the $z,\\bar{z}$ coordinates of Eq.~(\\ref{eq:zzb}) it takes the form\n\\begin{eqnarray}\\label{eq:cb_diffeq}\n&& \\mathcal D\\, g^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(z,\\bar{z}) = C_{\\Delta,\\ell}\\,\\, g^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(z, \\bar{z})\\,, \n\\end{eqnarray}\nwhere\n\\begin{gather}\n \\mathcal D = \\mathcal D_z + \\mathcal D_{\\bar{z}} + 2(d-2) \\frac{z \\bar{z}}{z-\\bar{z}} [(1-z)\\partial_z - (1-\\bar{z})\\partial_{\\bar{z}} ]\\,, \\nonumber\\\\\n \\mathcal D_z = \\textstyle 2 z^2(1-z)\\partial_z^2 -(2+\\Delta_{34}-\\Delta_{12}) z^2\\partial_z + \\frac{\\Delta_{12}\\Delta_{34}}{2}z\\,.\n\\end{gather}\n\n{Moreover}, the leading $z,\\bar{z}\\to 0$ behavior of the conformal block can be easily determined using the OPE, and this provides boundary conditions for Eq.~\\reef{eq:cb_diffeq}. Considering the $x_{12},x_{34}\\rightarrow 0$ limit in Eq.~\\reef{eq:CPW} and using Eqs. 
(\\ref{eq:2points}) and \\reef{eq:OPEnorm}, one obtains\\footnote{The limit is worked out carefully in e.g.~\\textcite{DO1} or \\textcite{Costa:2011dw}.}\n\\beq\n\\label{eq:cb_ope}\ng^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(z, \\bar{z})\n\\underset{z,\\bar{z}\\rightarrow0}{\\sim}\n\\calN_{d,\\ell} \\, (z\\bar{z})^{\\frac\\Delta2} {\\rm Geg}_\\ell\\left(\\frac{z+\\bar{z}}{2\\sqrt{z\\bar{z}}}\\right)\\,,\n\\end{equation}\nwhere ${\\rm Geg}_\\ell(x)$ is a Gegenbauer polynomial,\n\\beq\n\\label{eq:Geg}\n{\\rm Geg}_\\ell(x)=C^{(d\/2-1)}_\\ell(x)\\,,\n\\end{equation}\nand the normalization factor $\\calN_{d,\\ell}$ is given by\\footnote{Here $(a)_n$ stands for the Pochhammer symbol.}\n\\beq\n\\calN_{d,\\ell} = \\frac{\\ell!}{{(-2)^\\ell}(d\/2-1)_\\ell}\\,.\n\\end{equation}\nWe warn the reader that many different {normalization choices can be found} in the literature. Different conformal block normalizations correspond to different normalizations of OPE coefficients as compared with the one in Eq.~\\reef{eq:OPEnorm}. In this review we will use the above normalization unless {mentioned} otherwise. 
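Since mismatched conventions are a common source of errors, here is a small sketch (our own illustration; the helper functions are ours) of converting between two such normalizations. A block normalized with $\calN'_{d,\ell}$ equals $(\calN'_{d,\ell}/\calN_{d,\ell})$ times the block normalized with $\calN_{d,\ell}$, and squared OPE coefficients rescale by the inverse factor:

```python
# Sketch: relative factor between the normalization used in this review,
# N = ell! / ((-2)^ell (d/2-1)_ell), and that of Kos et al. (2014),
# N' = (-1)^ell ell! / (4^Delta (d/2-1)_ell).
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def N_review(d, ell):
    return factorial(ell) / ((-2) ** ell * poch(d / 2 - 1, ell))

def N_kos(d, ell, Delta):
    return (-1) ** ell * factorial(ell) / (4 ** Delta * poch(d / 2 - 1, ell))

# The two blocks differ by N'/N = 2^ell / 4^Delta, independently of d:
d, ell, Delta = 3, 2, 3.5
assert abs(N_kos(d, ell, Delta) / N_review(d, ell) - 2 ** ell / 4 ** Delta) < 1e-12
```

The product $\lambda_{12{\cal O}}\lambda_{34{\cal O}}\,{\rm W}_{\cal O}$ is, of course, convention independent.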
For the reader's convenience, we have collected some other frequently used normalizations in Table~\\ref{tab:cb_norm}.\n\n\\begingroup\n\\squeezetable\n\\begin{table}[htp]\n\\begin{center}\n\\renewcommand{\\arraystretch}{2}\n\\begin{tabular}{|c|c|}\n\\hline\n$\\calN_{d,\\ell}$ & Reference \\\\\n\\hline\n $\\frac{\\ell!}{(-2)^\\ell(d\/2-1)_\\ell}$ & \n \\begin{minipage}{0.37\\textwidth}\n \t\\textcite{DO1,DO2}, \\\\\n \t\\textcite{Rattazzi:2008pe}, \\\\\n \t\\textcite{Penedones:2015aga}, {\\bf this review} \n \\end{minipage}\\\\\n \\hline\n $ \\frac{\\ell!}{(d-2)_\\ell}$ & \n \\begin{minipage}{0.37\\textwidth} \n \t\\textcite{DO3},\\\\ \n \t\\textcite{Hogervorst:2013sma}, \\\\ \n \t\\textcite{ElShowk:2012ht,El-Showk:2014dwa}, \\\\ \n\t\\textcite{Costa:2016xah}, \\\\\n \t\\texttt{JuliBoots} \\cite{Paulos:2014vya}, \\texttt{cboot} \\cite{CBoot}\n \\end{minipage}\\\\\n \\hline\n $ \\frac{(-1)^\\ell \\ell!}{4^{\\Delta}(d\/2-1)_\\ell}$ & \\begin{minipage}{0.37\\textwidth} \\textcite{Kos:2014bka,Kos:2015mba,Kos:2016ysd}, \\textcite{Li:2017ddj}\\\\\\texttt{PyCFTBoot} \\cite{Behan:2016dtz}\\end{minipage} \\\\\n \\hline\n $\\frac{\\ell!}{(d\/2-1)_\\ell}$ & \\begin{minipage}{0.37\\textwidth} \\textcite{Poland:2011ey}, \\textcite{Poland:2015mta} \\end{minipage}\\\\\n \\hline\n $ \\frac{\\ell!}{4^{\\Delta}(d-2)_\\ell}$ &\\begin{minipage}{0.37\\textwidth} \\textcite{Kos:2013tga}\\\\ {\\texttt{Mathematica} notebook \\cite{DSDnotebookLink}} \\end{minipage} \\\\\n \\hline\n $ \\frac{(-1)^\\ell\\ell!}{(d\/2-1)_\\ell}$ & \\textcite{Simmons-Duffin:2016wlq}\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Summary of various conformal block normalizations $\\calN_{d,\\ell}$, Eqs.~(\\ref{eq:cb_ope}, \\ref{eq:cb_ope_e1}), used in the literature. \\label{tab:cb_norm}}\n\\end{table}\n\\endgroup\n \nBy solving Eq.~(\\ref{eq:cb_diffeq}) one can find conformal blocks for even $d$ \\cite{DO2}. 
They are expressed in terms of the basic function\n\begin{eqnarray}\nk_\beta(x) = x^{\beta\/2} {}_2F_1\left(\frac{\beta-\Delta_{12}}2,\frac{\beta+\Delta_{34}}2,\beta;x\right)\,,\n\end{eqnarray}\nwhich satisfies\n\begin{eqnarray}\n\mathcal D_x k_\beta(x) = \frac12\beta(\beta-2) k_\beta(x)\,, \quad k_\beta(x) \underset{x\rightarrow0}{\sim} x^{\beta\/2}\,.\n\end{eqnarray}\nIn the simplest case of $d=2$, we have $\calD=\calD_z+\calD_{\bar z}$, so the conformal blocks factorize. They take the form\footnote{A particular case of this result was first found in \textcite{Ferrara:1974ny} by another method. See also \textcite{Osborn:2012vt} for general conformal blocks in 2d. Notice that the 2d global conformal blocks discussed here should be distinguished from the Virasoro conformal blocks.}\n\begin{multline} \label{eq:cb_d2}\nd=2:\quad g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(z,\bar{z}) = \frac{1}{(-2)^\ell(1+\delta_{\ell 0})}\\ \times\left(k_{\Delta+\ell}(z)k_{\Delta-\ell}(\bar{z})+z\leftrightarrow \bar{z}\right)\,.\n\end{multline}\nResults for higher even $d$ can then be found using recursion relations relating blocks in $d$ and $d+2$ dimensions \cite{DO2}. The important case of $d=4$ reads\footnote{This result was first found in \textcite{DO1} by resumming the OPE expansion.}\n\begin{multline} \label{eq:cb_d4}\nd=4:\quad g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(z,\bar{z}) =\frac{1}{(-2)^\ell}\\\n\times \frac{z\bar{z}}{z-\bar{z}} \left(k_{\Delta+\ell}(z)k_{\Delta-\ell-2}(\bar{z})- z\leftrightarrow \bar{z}\right)\,.\n\end{multline}\n\nIn odd $d$, general closed-form solutions of the Casimir equation are so far unavailable. \nSometimes, one can get closed-form solutions along the ``diagonal\" $z=\bar{z}$, as e.g.~in $d=3$ for all equal external dimensions \cite[{Eqs.~(3.7-3.10)}]{Rychkov:2015lca}. 
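The even-$d$ expressions above can be checked directly against Eq.~(\ref{eq:cb_diffeq}). In $d=2$ the check is immediate: $\calD=\calD_z+\calD_{\bar z}$ acting on the factorized block gives $\frac12(\Delta+\ell)(\Delta+\ell-2)+\frac12(\Delta-\ell)(\Delta-\ell-2)=\Delta(\Delta-2)+\ell^2$, which is exactly $C_{\Delta,\ell}$ at $d=2$. In $d=4$, where the cross term in $\calD$ also contributes, a numerical sketch of the check (our own; the parameter values are arbitrary):

```python
# Verify numerically that the d=4 block of Eq. (eq:cb_d4) satisfies the
# Casimir equation D g = C_{Delta,ell} g, with D as in Eq. (eq:cb_diffeq).
import mpmath as mp

mp.mp.dps = 30
d, ell = 4, 2
D12, D34, Delta = mp.mpf('0.4'), mp.mpf('-0.6'), mp.mpf('3.3')

def k(beta, x):  # k_beta(x) = x^(beta/2) 2F1((beta-D12)/2, (beta+D34)/2; beta; x)
    return x ** (beta / 2) * mp.hyp2f1((beta - D12) / 2, (beta + D34) / 2, beta, x)

def g(z, zb):  # the d=4 conformal block
    return (z * zb / (z - zb) / (-2) ** ell
            * (k(Delta + ell, z) * k(Delta - ell - 2, zb)
               - k(Delta + ell, zb) * k(Delta - ell - 2, z)))

z0, zb0 = mp.mpf('0.3'), mp.mpf('0.2')
gz    = mp.diff(g, (z0, zb0), (1, 0))
gzb   = mp.diff(g, (z0, zb0), (0, 1))
gzz   = mp.diff(g, (z0, zb0), (2, 0))
gzbzb = mp.diff(g, (z0, zb0), (0, 2))

Dz  = 2 * z0 ** 2 * (1 - z0) * gzz - (2 + D34 - D12) * z0 ** 2 * gz + D12 * D34 / 2 * z0 * g(z0, zb0)
Dzb = 2 * zb0 ** 2 * (1 - zb0) * gzbzb - (2 + D34 - D12) * zb0 ** 2 * gzb + D12 * D34 / 2 * zb0 * g(z0, zb0)
cross = 2 * (d - 2) * z0 * zb0 / (z0 - zb0) * ((1 - z0) * gz - (1 - zb0) * gzb)

C = Delta * (Delta - d) + ell * (ell + d - 2)
assert abs((Dz + Dzb + cross) / (C * g(z0, zb0)) - 1) < mp.mpf('1e-12')
```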
Other {expressions} along the diagonal{,} valid for any $d${,} can be found in~\\textcite{Hogervorst:2013kva}. Using these results as a starting point, one can compute derivatives of conformal blocks orthogonal to the diagonal using the Casimir equation, by the Cauchy-Kovalevskaya method, see Sec.~\\ref{sec:rational}. The knowledge of these derivatives is usually sufficient for numerical conformal bootstrap applications. Other techniques used to access the conformal blocks numerically will be discussed below.\n\nFinally, let us mention that conformal blocks have simple transformation properties under the interchange of external operators $1\\leftrightarrow 2$ and $3\\leftrightarrow 4$ \\cite{DO1,DO3}:\n\\begin{gather}\n g^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(u\/v,1\/v) = (-1)^\\ell v^{\\frac{\\Delta_{34}}2 } g^{-\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(u,v)\\nonumber\\\\\n\\hspace{3cm}= (-1)^\\ell v^{-\\frac{\\Delta_{12}}2 } g^{\\Delta_{12},-\\Delta_{34}}_{\\Delta,\\ell}(u,v)\\,.\n \\end{gather}\nThis follows from the symmetry of the OPE under the same interchange. As a check, the explicit expressions in Eqs.~(\\ref{eq:cb_d2}-\\ref{eq:cb_d4}) satisfy these relations.\n\n\n\\subsubsection{Radial expansion for conformal blocks}\n\\label{sec:radial}\n\nWhile closed-form expressions for conformal blocks in general $d$ are unknown, there exist rapidly convergent power series expansions. Following \\textcite{Hogervorst:2013sma}, we will describe a particular conformal frame used to generate such expansions.\n\n\\begin{figure}[t]\n\\begin{centering}\n\\includegraphics[width=0.6\\columnwidth]{fig04-radial.pdf}\n\\caption{\\label{fig:radial}\n(Color online) Conformal frame defining the radial coordinate. 
Figure from \cite{Hogervorst:2013sma}.\n}\n\end{centering}\n\end{figure}\n\nStarting from the conformal frame \reef{eq:conformalframe1}, we apply an additional conformal transformation which keeps the four points in the same 2-plane but moves them into a configuration symmetric around the origin as in Fig.~\ref{fig:radial}. So the points $x_1=-x_2$ are now on a circle of radius $r<1$, while $x_3=-x_4$ lie on the unit circle.\n\nLet us call ${\bf n}$ and $\bf{n}'$ the unit vectors pointing to $x_2$ and $x_3$, and introduce the complex \emph{radial coordinate} \cite{Pappadopulo:2012jk}\n\beq\n\label{eq:reta}\n\rho = r e^{i \theta}\,, \qquad {\bf n}\cdot {\bf n}'= \cos \theta=\eta\,,\n\end{equation}\nwhich is related to the variable $z$ in Eq.~\reef{eq:z} via\n\begin{eqnarray}\n\rho = \frac{z}{(1+\sqrt{1-z})^2} \,,\n\qquad \nz = \frac{4\rho}{(1+\rho)^2}\,.\nonumber\n\end{eqnarray}\nSee \textcite{Hogervorst:2013sma} for why $\rho$ is preferable to $z$ for constructing rapidly convergent power series expansions for conformal blocks. \n\nIn this configuration, the 4pt function is interpreted as a matrix element between two radial quantization states: $\<\phi_3(1,{\bf n}')\phi_4(1,-{\bf n}')|$ and $|\phi_1(r,-{\bf n})\phi_2(r,{\bf n})\>= r^D|\phi_1(1,-{\bf n})\phi_2(1,{\bf n})\>$. The factor $r^D$, with $D$ the dilatation generator, takes care of the radial dependence.\footnote{$D$ plays the role of the Hamiltonian operator in radial quantization and $\log r$ is time.}\n\nConsider then the conformal partial wave given in Eq.~\reef{eq:CPWrad}.\nThe conformal multiplet of the operator ${\cal O}_{\Delta,\ell}$ at level $m$ contains descendants $|\Delta+m,j\>$ of spin $j$ varying from $\max(0,\ell-m)$ to $\ell+m$. We need to know the matrix elements between these descendants and the above in and out states. 
Leaving aside the overall normalization of these matrix elements, their dependence on the unit vector $\\mathbf{n}$\nmust be proportional to the traceless symmetric tensor $({\\bf n}_{\\mu_1} \\ldots {\\bf n}_{\\mu_j}-\\text{traces})$. Contracting two such tensors for $\\mathbf{n}$ and $\\mathbf{n}'$ {gives}, up to a constant factor, the Gegenbauer polynomial ${\\rm Geg}_j({\\bf n}\\cdot {\\bf n'})$ from Eq.~\\reef{eq:Geg}.\n\nWe conclude that the conformal block has a power series expansion of the form\n\\beq\n\\label{eq:radial_exp}\n g^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(u,v)= r^{\\Delta}\\sum_{m=0}^\\infty r^{m}\n \\sum_{j} w(m,j) \\; {\\rm Geg}_j(\\eta)\\,,\n\\end{equation}\nwhere $w(m,j)\\ne 0$ only for $\\max(0,\\ell-m)\\leqslant j \\leqslant \\ell+m$. Using unitarity, one can also conclude that $w(m,j)\\geqslant 0$ if $\\Delta$ is above the unitarity bound and $\\Delta_{12}=-\\Delta_{34}$.\n\nSince $z\\sim 4\\rho$ at small $z$, the OPE limit \\reef{eq:cb_ope} becomes\n\\begin{eqnarray}\n\\label{eq:cb_ope_e1}\ng^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(r, \\eta)\\displaystyle \\underset{r\\rightarrow0}{\\sim} \n\\calN_{d,\\ell}(4r)^{\\Delta}{\\rm Geg}_\\ell\\left(\\eta \\right) \\,,\n\\end{eqnarray}\nwhich fixes $w(0,\\ell)=\\calN_{d,\\ell} 4^\\Delta$. To find higher $w(m,j)$, one must determine the normalization of the descendant matrix elements and not just their dependence on $\\mathbf{n},\\mathbf{n}'$. While in principle this can be done using conformal algebra, two more efficient techniques will be discussed below. \n\nThe expansion \\reef{eq:radial_exp} converges for $|\\rho|<1$, showing that conformal blocks are smooth and real-analytic functions in this region.\\footnote{An exception occurs at the origin because of the $r^\\Delta$ factor.} The conformal block decomposition \\reef{eq:CBdec} can be similarly argued to converge for $|\\rho|<1$.\\footnote{This can be shown rigorously in unitary CFTs \\cite{Pappadopulo:2012jk}. 
While there are no general results concerning the convergence of the conformal block decomposition in nonunitary theories, it appears reasonable to assume that it remains convergent in the same region.} In terms of the $z$ coordinate, this covers the whole complex plane minus the cut $(1,+\infty)$, improving the convergence result argued below Eq.~\reef{eq:CBdec} using the $z$ frame.\n\n\n\subsubsection{Recursion relation from the Casimir equation}\n\label{sec:casimirrecursion}\n\nThe first method to find the coefficients $w(m,j)$ is to substitute the expansion \reef{eq:radial_exp} into the Casimir equation.\nThis gives rise to recurrence relations, obtained in \textcite{Hogervorst:2013sma} and \textcite{Costa:2016xah}, which determine $w(m,j)$ for $m>0$ starting from $w(0,\ell)$.\n\nNamely, defining the functions $ f_{m,j} \equiv r^m {\rm Geg}_j(\eta)$, it is straightforward to show that any of the operators $\{r,\eta,\partial_r,\partial_\eta\}$ acting on these functions produces linear combinations of $f_{m\pm1,j\pm1}$. Similarly, the operator $\calD$ in \reef{eq:cb_diffeq}, when written in radial coordinates, maps $f_{m,j}$ into a linear combination of $f_{m+\hat{m},j+\hat{\jmath}}$ functions with suitable shifts. Eq.~\reef{eq:cb_diffeq} then gives rise to a relation which can be economically written in the form \cite{Costa:2016xah}\n\begin{eqnarray}\n\label{recrelScalar}\n\sum_{(\hat{m},\hat{\jmath}) \in \mathcal{S}} c(\hat{m},\hat{\jmath}) \; w(m+\hat{m},j+\hat{\jmath})=0 \ ,\n\end{eqnarray}\nwhere the set $\mathcal{S}=\{(0,0),(-1,1),(-1,-1),\ldots\}$ contains $30$ points, all of which but the first have $\hat m<0$. The coefficients $c(\hat{m},\hat{\jmath})$ are known functions of the variables $\Delta_{12}$, $\Delta_{34}$, $\Delta$, $\ell$, $d$, $m$, and $j$ \cite[{attached \texttt{Mathematica} notebook}]{Costa:2016xah}. 
Using Eq.~\\reef{recrelScalar}, the coefficient $w(m,j)$ can then be recursively expressed in terms of $w(m',j)$ with $m'\\sim Q_A (\\Delta-\\Delta_A^*)\\,,\n\\end{equation} \nwith $Q_A$ some constant. When ${\\cal O}_A^{\\rm null}$ becomes null, all of its descendants become null too, with the rate proportional to \\reef{eq:QA}. Moreover, it can be shown that the Gram matrix in the submultiplet consisting of these descendants is equal to \\reef{eq:QA} times the (non-singular) Gram matrix of the multiplet of ${\\cal O}_A$, up to corrections of higher order in $\\Delta-\\Delta_A^*$. This explains why the residue in \\reef{eq:resid} involves the whole conformal block of ${\\cal O}_A$.\\footnote{The Casimir equation gives another argument for why the residue is a conformal block. Near the pole the Casimir equation for the block reduces to the Casimir equation for the residue \\cite{Rychkov-OIST}. The Casimir eigenvalue of the null descendant is the same as for the original block (since it's a descendant): $C_{\\Delta_*,\\ell}=C_{\\Delta_A,\\ell_A}$. Finally, the boundary condition at $r\\to0$ is consistent with the residue being the conformal block.}\n\nThe coefficient $R_A$ in $\\reef{eq:resid}$ is a product of three factors:\n\\begin{equation}\n R_A=M_A^{(L)} Q^{-1}_A M_A^{(R)} \\ ,\n \\label{QMM}\n\\end{equation}\nwhere $Q_A$ is defined in \\reef{eq:QA}, while $M_A^{(L)}$ and $ M_A^{(R)}$ come from the 3pt functions \n $\\langle \\phi_3\\phi_4| {\\cal O}_A^{\\rm null} \\rangle $ and $\\langle {\\cal O}_A^{\\rm null} | \\phi_1 \\phi_2 \\rangle$. \n\nUsing information about the poles, we can now write a complete formula for the conformal block. 
It is convenient to define the regularized conformal block $h_{\\Delta,\\ell} \\equiv h^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}$ by removing a $(4r)^\\Delta$ prefactor:\n\\begin{equation}\ng^{\\Delta_{12},\\Delta_{34}}_{\\Delta, \\ell}(r,\\eta)= (4r)^{\\Delta} h_{\\Delta, \\ell}(r,\\eta) \\,.\n\\end{equation}\n\nThe {function} $h_{\\Delta, \\ell}$ has the same poles in $\\Delta$ as $g_{\\Delta,\\ell}$. Moreover it is a meromorphic function of $\\Delta$, and is therefore fully characterized by its poles and the value at infinity:\n\\begin{multline}\n\\label{eq:recscalar}\nh_{\\Delta,\\ell}(r,\\eta)=h_{\\infty,\\ell}(r,\\eta)\\\\\n+\\sum_{A} \\frac{R_A }{\\Delta-\\Delta^*_A} (4 r)^{n_A}\\,\nh_{\\Delta^*_A+n_A,\\ell_A}(r,\\eta)\\,.\n\\end{multline}\nDetailed analysis shows that the poles occurring in this equation organize into one finite and two infinite sequences:\n\\begin{equation}\\label{eq:polestable}\n\\begin{array}{lccc}\nA&\\Delta^*_A &n_A &\\ell_A\t \\\\ \n\\hline\n\\mbox{I${}_n$ }\\ \\ (n\\in\\hbox{$\\bb N$})\t\t& 1-\\ell-n \t& n \t &\\ell + n \\\\ \n\\mbox{II${}_n$ }\\ (1\\leqslant n\\leqslant \\ell) \\;\\; \t& \\ell+d-1-n \t & n &\\ell - n\\\\\n\\mbox{III${}_n$ } (n\\in\\hbox{$\\bb N$}) \\; \t& \\frac{d}2-n & 2n &\\ell \\\\ \n\\end{array}\n\\end{equation}\nUsing this definition, it is easy to check that the residues of the poles themselves are nonsingular (except in even dimensions, see below).\n\nThe $h_{\\infty,\\ell}$ term and the constants $R_{A}$ are given by \\cite{Kos:2014bka, Penedones:2015aga} \n\t\\begin{align}\n\t\t\th_{\\infty, \\ell}(r,\\eta)&= \\textstyle \\frac{\\left(1-r^2\\right)^{1-\\frac{d}2}\n\t\t\t\\calN_{d,\\ell}{\\rm Geg}_\\ell(\\eta)}\n\t\t{\\left(r^2-2 \\eta r+1\\right)^{\\frac{1-\\Delta_{12}+\\Delta_{34}}{2}} \\left(r^2+2 \\eta r+1\\right)^{\\frac{1+\\Delta_{12}-\\Delta_{34}}{2}}}\\,, \\nn\\\\\n\t\tR_{\\text{I}_n}&=\\textstyle \\frac{-n (-2)^n }{ (n!)^2} \\left(\\frac{\\Delta_{12}+1-n}{2}\\right)_n 
\\left(\\frac{\\Delta_{34}+1-n}{2}\\right)_n \\ ,\\nn \\\\\n\t\tR_{\\text{II}_n}&= \\textstyle\\frac{-n \\,\\ell! }{ (-2)^n (n!)^2 (\\ell-n)! } \\frac{(d+\\ell-n-2)_n}{\\left(\\frac{d}2+\\ell-n\\right)_n\n\t\t\\left(\\frac{d}2+\\ell-n-1\\right)_n} \t \\label{eq:residues}\\\\\n\t\t&\\quad\\textstyle\\times \\left(\\frac{\\Delta_{12}+1-n}{2}\\right)_n \\left(\\frac{\\Delta_{34}+1-n}{2}\\right)_n \\ ,\\nn \\\\\n\t\tR_{\\text{III}_n}&=\\textstyle\\frac{-n (-1)^{n} \\left(\\frac{d}2-n-1\\right)_{2 n}}{(n!)^2 \\left(\\frac{d}2+\\ell-n-1\\right)_{2 n} \\left(\\frac{d}2+\\ell-n\\right)_{2 n}} \\nn\\\\\n\t\t&\\quad\\times\\textstyle\\left( \\frac{\\Delta _{12}-\\frac{d}2-\\ell-n+2}{2} \\right)_n \\left( \\frac{\\Delta _{12}+\\frac{d}2+\\ell-n}{2} \\right)_n \\nn\\\\\n\t\t&\\quad\\times\\textstyle \\left( \\frac{\\Delta _{34}-\\frac{d}2-\\ell-n+2}{2} \\right) _n \\left( \\frac{\\Delta _{34}+\\frac{d}2+\\ell-n}{2} \\right)_n \\ . \\nonumber\n\t\\end{align}\n\nThe key property of Eq.~(\\ref{eq:recscalar}) is that each pole residue comes with a factor $r^{n_A}$. This means that it can be used as a recursion relation to generate the regularized conformal block as a power series in $r$. Indeed, suppose we want to compute $h_{\\Delta,l}(r,\\eta)$ up to $O(r^{N})$. We use Eq.~(\\ref{eq:recscalar}) keeping all poles with $n_A\\leqslant N$, of which there are finitely many. The residues of these poles themselves are needed up to smaller order $O(r^{N-n_A})$, so we get a recursion relation. This is one of the most elegant and efficient currently known methods to compute the conformal blocks {outside of even $d$.}\n\nThe described recursion relation is adequate for computing conformal blocks in odd dimensions and also in generic $d$. It cannot be {applied} directly in even $d$, since some simple poles coalesce into double poles. This is not a problem, since even $d$ conformal blocks are known in closed form. 
Alternatively, one can apply the recursion relation $\\epsilon$ away from an even $d$, and take the limit $\\epsilon\\to0$ after the coefficients of the $r$ expansion have been generated. This gives the correct result because the conformal blocks vary analytically with $d$.\n\n\n\\subsubsection{Rational approximation of conformal blocks and their derivatives}\n\\label{sec:rational}\n\nWe will now describe {how to construct} rational approximations to conformal blocks and their derivatives at a given point $(r_*,\\eta_*)$, {which permit an efficient numerical evaluation of these quantities as a function of $\\Delta$. This will play an important role in the numerical techniques described in Sec.~\\ref{sec:numerical}.} Our focus will be on rational approximations to scalar conformal blocks, but later in Sec.~\\ref{sec:spinning} we will also describe how they can be extended to blocks for external spinning operators.\n\nA rational approximation for conformal block derivatives at a given point can be obtained by combining the radial expansion (\\ref{eq:radial_exp}) and the recursion relation (\\ref{eq:recscalar}). It can be expressed in the form\n\\beq\n\\label{eq:rationalApprox}\n\\partial_r^m \\partial_\\eta^n g_{\\Delta,\\ell}(r_*,\\eta_*) =(4 r_*)^\\Delta \\left(\\frac{P_N^{mn}(\\Delta)}{Q_N(\\Delta)} + O(r_*^{N-m}) \\right)\\,.\n\\end{equation}\nHere $Q_N$ is a polynomial made by the product of poles given in Eq.~(\\ref{eq:polestable}) up to order $N$,\n\\begin{eqnarray}\nQ_N(\\Delta) \n= \n\\prod\n_{\n\tA=(\\text{I,II,III})\n\t{}_n\n\t,\\ n\\leqslant N\n} \n(\\Delta-\\Delta_A^*)\\,,\n\\end{eqnarray} \nand $P_N^{mn}(\\Delta)$ is a polynomial with ${\\rm deg}(P_N^{mn})\\leqslant{\\rm deg}(Q_N)+m$. The approximation can be made arbitrarily precise by increasing $N$, at the expense of increasing the order of the polynomials.\n\nIn numerical applications it is often desirable to keep the polynomial order relatively small while maintaining a precise approximation. 
This can be accomplished using a trick introduced in~\\textcite{Kos:2013tga}, where one discards some number of poles but compensates by modifying the residues of the kept poles. For example, if one keeps $n$ poles, one can choose their new residues by demanding that the first $n\/2$ $\\Delta$-derivatives match between the old and new functions at both {the unitarity bound} and $\\Delta = \\infty$. \n\nAn important property that will be exploited in Sec.~\\ref{sec:numerical} is that the denominator $Q_N(\\Delta)$ is always positive in unitary theories. This follows from the fact that all the poles are at values of $\\Delta$ below the unitarity bound. \n\nThe techniques introduced in the previous sections allow one to compute conformal blocks either in closed form or as {a} power series in the variable $r$. Starting from these expressions one can take a direct approach of first analytically computing the $r$ expansion to order $N$, taking $r,\\eta$ derivatives of the resulting expression, and evaluating the result at the point $r_*,\\eta_*$. The result can then be recombined into the form in Eq.~(\\ref{eq:rationalApprox}). Since the crossing relations will be more simply written in $z,\\bar{z}$ coordinates, one then typically converts to $z,\\bar{z}$ derivatives at the corresponding point $z_*, \\bar{z}_*$ using a suitable transformation matrix. 
This approach, while somewhat inefficient at large $N$ due to the need to compute the analytical dependence on $\\eta$, has been successfully used in the literature, almost always at the crossing symmetric point $z_*=\\bar{z}_*=1\/2$, which corresponds to $\\eta_*=1$, $r_*=3-2\\sqrt{2}$.\n\n{A somewhat more efficient algorithm is the following: \\\\\n(i) Compute the $r$ expansion to order $N$ and take derivatives only along the radial direction $\\eta=1$ ($z=\\bar{z}$) using the methods of either Section \\ref{sec:casimirrecursion} or \\ref{sec:analyticrecursion}.\\footnote{In even dimensions one can start from the closed form expression of Sec.~\\ref{sec:casimir} evaluated at $\\eta=1$, and expand in $r$.} \\\\\n(ii) Convert to $z,\\bar{z}$ derivatives along the diagonal $z=\\bar{z}$ using a suitable transformation matrix. \\\\\n(iii) Use the Casimir equation to recursively compute derivatives in the transverse direction. }\n\nLet us briefly discuss the last step, also called the Cauchy-Kovalevskaya method. Consider the Casimir differential equation, Eq.~(\\ref{eq:cb_diffeq}), and express it in the variables $a=z+\\bar z$, $b=(z-\\bar z)^2$. The radial direction corresponds to $b=0$. Moreover, since conformal blocks are symmetric in $z\\leftrightarrow \\bar z$, their power series expansion away from the $z=\\bar z$ line will contain only integer powers of $b$.\nLet us denote the $\\partial_a^m\\partial_b^n$ derivative of the conformal block evaluated at $z=\\bar z=1\/2$ by $h_{m,n}$. From step (i) we know $h_{m,0}$ for any $m$. Then, we can translate the Casimir equation into a recursion relation for $h_{m,n}$ (with $n>0$) in terms of $h_{m',n'}$ with $n'<n$, or with $n'=n$ and $m'<m$. This recursion relation was obtained in {\\cite[{Appendix C}]{ElShowk:2012ht} for $\\Delta_{12}=\\Delta_{34}=0$, and generalized in \\cite[{Eq. 
(2.17)}]{Behan:2016dtz}.} It has the general structure: \n\\begin{align}\\label{eq:Cauchy-Kovalevskaya}\n\th_{m,n}&=\\sum_{m'\\leqslant m-1} m (\\ldots) h_{m',n} \\\\\n\t&+\\sum_{m'\\leqslant m+2}\\left[ (\\ldots) h_{m',n-1} + (n-1) (\\ldots) h_{m',n-2} \\right]\\,.\\nonumber\n\\end{align}\nSince the Casimir equation is of second order, $m'$ can only take values up to $m+2$. Also the recursion relation for $h_{0,n}$ only involves $h_{m',n'}$ with $n'<n$, since the first sum is then empty. One can therefore compute all $h_{m,n}$ iteratively in $n$, at each step first obtaining $h_{0,n}$ and then increasing $m$.\n\n\\subsubsection{Shadow formalism}\n\\label{sec:shadow}\n\nLet ${\\cal O}={\\cal O}_{\\Delta,\\ell}$ be a traceless symmetric primary of dimension $\\Delta$ and spin $\\ell$. Consider a primary ``shadow operator\" $\\widetilde{{\\cal O}}_{d-\\Delta,\\ell}$ which has the same spin $\\ell$ as ${\\cal O}$ and dimension $d-\\Delta$. We stress that this operator is fictitious: it does not belong to the theory as a local operator, and in particular {the fact} that its dimension is below the unitarity bound is of no concern.\n\nThe starting point of the shadow formalism is the following integral:\n\\begin{align}\\label{eq:partial_wave}\n{\\rm U}_{\\Delta,\\ell}(x_1,x_2,x_3&,x_4) = \\int d^d x \\langle \\phi_1(x_1)\\phi_2(x_2)\\mathcal O_{\\Delta,\\ell}^{\\mu_1\\ldots\\mu_\\ell}(x)\\rangle \\nn\\\\\n &\\times\\langle \\widetilde{\\mathcal O}_{d-\\Delta,\\ell;\\mu_1\\ldots\\mu_\\ell}(x) \\phi_3(x_3)\\phi_4(x_4)\\rangle\\,,\n\\end{align}\nwhere under the integral sign we have a product of the conformal scalar-scalar-(spin $\\ell$) 3pt functions in Eq.~\\reef{eq:3points}, with the spin-$\\ell$ operators having dimensions $\\Delta$ and $d-\\Delta$.\n\nThe function ${\\rm U}_{\\Delta,\\ell}$ has two special properties. First, it conformally transforms in the same way as the 4pt function $\\langle \\phi_1(x_1)\\phi_2(x_2)\\phi_3(x_3)\\phi_4(x_4)\\rangle$.\nThis is because the product (operator $\\times$ shadow) transforms as a dimension $d$ primary scalar, which compensates for the Jacobian in the transformation of $d^d x$. 
Consequently we can write\n\\beq\n\\label{eq:Uf}\n{\\rm U}_{\\Delta,\\ell} = f_{\\Delta,\\ell}(u,v)\\,{\\bf K}_4 \\,,\n\\end{equation}\nwhere ${\\bf K}_4$ is as in Eq.~(\\ref{eq:4pf_scalars}) and $f_{\\Delta,\\ell}(u,v)$ is some function of $u$ and $v$.\n\nSecond, it is straightforward to see that ${\\rm U}_{\\Delta,\\ell}$ is an eigenfunction of the Casimir operator acting at $x_1$, $x_2$, with eigenvalue $C_{\\Delta,\\ell}$. Since the latter property is also true for the CPW ${\\rm W}_{\\cal O}$, it is tempting to identify $f_{\\Delta,\\ell}(u,v)$ in \\reef{eq:Uf} with the conformal block (up to a proportionality factor). However, this is not quite true. The point is {that} the conformal blocks of the operator and of its shadow satisfy the same Casimir equation, since their Casimir eigenvalues coincide: $C_{\\Delta,\\ell}=C_{d-\\Delta,\\ell}$. For this reason $f_{\\Delta,\\ell}$ is a linear combination of the block $g_{\\Delta,\\ell}$ and of the shadow block $g_{d-\\Delta,\\ell}$; see \\cite[{Eq. (3.25)}]{DO3} for the precise relation. \n\nFrom the practical viewpoint, the main advantage of the shadow formalism is that the integrand in Eq.~\\reef{eq:partial_wave} is quite easy to write. The downside is that the resulting conformal integrals are not always easy to evaluate, and that it is necessary to disentangle the contribution of a proper conformal block from the shadow one. \n\nEfficient ways to deal with these problems were proposed by \\textcite{SimmonsDuffin:2012uy}. First of all, the integrals become much easier to evaluate when written using the embedding formalism. 
Second, to separate the block from the shadow, one uses the fact that they transform differently under a monodromy transformation\n\\begin{gather}\nz\\rightarrow e^{2\\pi i} z,\\quad \\bar z=\\text{fixed}\\,,\\\\\ng_{\\Delta,\\ell}(z,\\bar z) \\rightarrow e^{2\\pi \\Delta i} \\,g_{\\Delta,\\ell}(z,\\bar z)\\,.\n\\end{gather}\nThe desired conformal block is isolated via a \\emph{monodromy projector}, implemented as a proper choice of the integration contour in Eq.~(\\ref{eq:partial_wave}).\nThis prescription allows one to extract integral expressions for generic conformal blocks in arbitrary $d$. In some cases the conformal integrals can be performed exactly, and the results match the known formulas from other techniques.\n\n\n\\subsubsection{Spinning conformal blocks}\n\\label{sec:spinning}\n\nAlthough in this review we will mostly deal with scalar 4pt functions, the bootstrap has also been successfully applied to 4pt functions of operators {with spin; e.g., see Secs.~\\ref{sec:Fermions} for $j=1\/2$ spinors and \\ref{sec:JandT} for $\\ell=1,2$ tensors in 3d.} Here we will review the theory of the associated conformal blocks, referred to as ``spinning\", which present additional difficulties compared to the blocks of external scalars. \n\nAs in the scalar case, spinning conformal blocks correspond to the contribution of an entire conformal multiplet to a 4pt function. 
They are defined by the equation\n\\begin{align}\n\\label{eq:spinning_cb}\n&\\langle {\\cal O}_3 {\\cal O}_4 | \\mathcal P_{\\Delta,r} |{\\cal O}_1{\\cal O}_2 \\rangle \n=\\\\ \n&{\\bf K}_4 \\sum_{a=1}^{n_3}\\sum_{b=1}^{n'_3} \\sum_{c=1}^{n_4} \\lambda_{12 \\mathcal O^\\dagger}^{(a)} \\lambda_{34 \\mathcal O}^{(b)}{\\bf T}_{4}^{(c)}(x_i,\\zeta_i) {\\bf G}_{c,\\Delta,r}^{a,b}(\\Delta_i,r_i,u,v).\\nn\n\\end{align}\n\nHere, the external operators ${\\cal O}_i={\\cal O}_{\\Delta_i,r_i}(x_i,\\zeta_i)$ are positioned {at} $x_i$ and have their indices contracted with auxiliary polarization vectors (or spinors) $\\zeta_i$. They transform in some general $SO(d)$ {(or $Spin(d)$)} representations $r_i$. On the other hand $\\mathcal O_{\\Delta,r}$ is the exchanged operator (and $\\mathcal O_{\\Delta,r^\\dagger}^\\dagger$ its conjugate, see the discussion in Sec.~\\ref{sec:2pt}), and $\\mathcal P_{\\Delta,r}$ is the projector onto its conformal multiplet similar to Eq.~(\\ref{eq:proj}).\n\nThe prefactor ${\\bf K}_4$ is as in Eq.~(\\ref{eq:4pf_scalars}); it captures the scaling properties of the 4pt function, leaving everything else dimensionless. Eq.~\\reef{eq:spinning_cb} also contains a sum over possible conformally invariant 4pt tensor structures ${\\bf T}_4^{(c)}$, and a double sum over possible 3pt function structures\n\\begin{eqnarray}\n\\label{eq:3pt-spinning}\n&&\\langle \\mathcal O_{\\Delta_1,\nr_1}(x_1,\\zeta_1) \\mathcal O_{\\Delta_2,\nr_2}(x_2,\\zeta_2) \\mathcal O^\\dagger_{\\Delta,\nr^\\dagger}(x_3,\\zeta_3)\\rangle =\\\\\n&&\\hspace{3em}\\displaystyle \\sum_{a=1}^{n_3} \\lambda_{12 \\mathcal O^{\\dagger}}^{(a)} {\\bf T}_3^{(a)}(x_i,\\zeta_i,\\{\\Delta_1,r_1\\},\\{\\Delta_2,r_2\\},\\{\\Delta,r^{\\dagger}\\}),\\nn\n\\end{eqnarray}\nand similarly for $n'_3$. Finally, the functions ${\\bf G}_{c,\\Delta,r}^{a,b}(\\Delta_i,r_i,u,v)$ are the spinning conformal blocks. 
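As a consistency check of this notation (a specialization we make explicit here; it follows directly from the definitions above), suppose all four external operators are scalars. Then the representations $r_i$ are trivial, $n_3=n'_3=n_4=1$, the unique 4pt structure can be taken to be ${\\bf T}_4^{(1)}=1$, and Eq.~\\reef{eq:spinning_cb} collapses to\n\\beq\n\\langle \\phi_3 \\phi_4 | \\mathcal P_{\\Delta,\\ell} |\\phi_1\\phi_2 \\rangle = {\\bf K}_4\\, \\lambda_{12 \\mathcal O^\\dagger} \\lambda_{34 \\mathcal O}\\, g^{\\Delta_{12},\\Delta_{34}}_{\\Delta,\\ell}(u,v)\\,,\n\\end{equation}\nso that the single function ${\\bf G}^{1,1}_{1,\\Delta,\\ell}$ reduces to the scalar conformal block of the previous sections.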
\n\nAccording to the above definition, when $r$ is not a real representation, both ${\\bf G}_{c,\\Delta,r}^{a,b}$ and ${\\bf G}_{c,\\Delta,r^\\dagger}^{a,b}$ have to be considered and generally both these blocks are nonzero. We will see a 4d example for $r=(\\ell,\\ell+p)$ below.\n\nSpinning blocks can be computed by reducing them to ``seed\" blocks. Consider the simplest case when the exchanged primary is a traceless symmetric spin $\\ell$. To understand the reduction to seeds, the key observation is that the 3pt tensor structures \\reef{eq:3pt-spinning} can be produced by differentiating the more elementary scalar-scalar-(spin $\\ell$) 3pt functions \\reef{eq:3points}. Namely \\textcite{Costa:2011dw} showed that there exist ``spinning-up\" differential operators $D^{(a)}_{r_1,r_2}$, depending on $x_i$ and $\\zeta_i$, such that\n\\begin{eqnarray}\n&&{\\bf T}_3^{(a)}(x_i,\\zeta_i,\\{\\Delta_1,r_1\\},\\{\\Delta_2,r_2\\},\\{\\Delta,r\\}) =\\\\\n&&\\hspace{3em} D^{(a)}_{r_1,r_2} \\mathcal {\\bf T}_3(x_i,\\zeta_3,\\{\\Delta'_1,0\\},\\{\\Delta'_2,0\\},\\{\\Delta,r\\}),\\nn\n\\end{eqnarray}\nfor a suitable basis of 3pt structures and choice of $\\Delta_i'$.\\footnote{In a {generic} basis of 3pt structures, e.g.~one that would be naturally constructed using the embedding space or conformal frame formalisms, there would be a linear combination of terms like the r.h.s. with different shifts.} Notice that in the above expression the third point is not affected. Therefore, in the definition (\\ref{eq:spinning_cb}), the differential operators do not interfere with the sum over descendants in $\\mathcal P_{\\Delta,\\ell}$. 
One concludes that the spinning blocks can be obtained by differentiating the scalar blocks:\n\\begin{multline}\n\\label{eq:spinning_cb_p0}\n{\\bf K}_4 (\\Delta_i)\\sum_{c=1}^{n_4} {\\bf T}_{4}^{(c)}(x_i,\\zeta_i){\\bf G}_{c,\\Delta,\\ell}^{a,b}(\\Delta_i,r_i,u,v) \\\\\n= D^{(a)}_{r_1,r_2} D^{(b)}_{r_3,r_4} {\\bf K}_4 (\\Delta_i') g_{\\Delta,\\ell}^{\\Delta_{12}',\\Delta_{34}'}(u,v)\\,,\n\\end{multline}\nwhere $g_{\\Delta,\\ell}^{\\Delta_{12}',\\Delta_{34}'}(u,v)$ is the scalar conformal block discussed at length in the previous sections, referred to as a seed block in this situation. \n\nIn 3d, traceless symmetric tensors exhaust all bosonic $SO(d)$ representations, and therefore all bosonic spinning blocks can be obtained from scalar seeds via \\reef{eq:spinning_cb_p0}.\\footnote{Some explicitly worked out cases in 3d are for external operator pairs being (current)-(current) \\cite{Costa:2011dw}, scalar-(current or stress tensor) \\cite{Li:2015itl}, and (stress tensor)-(stress tensor) \\cite{Dymarsky:2017yzx}.} The 3d spinning-up operators were also extended to external spinors and exchanged spin $\\ell$ by \\textcite{Iliesiu:2015qra}. \n\nIf a representation $r$ does not couple to two scalars, its conformal block cannot be reduced to the scalar seed using this method. One therefore needs more seed blocks for such representations. As an example, consider the half-integer spin representations in 3d. The simplest pair of external operators to which they couple are a scalar $\\phi$ and a Majorana fermion $\\psi$. The corresponding conformal block $ \\langle\\phi_3 \\psi_4 | \\mathcal P_{\\Delta,j}|\\phi_1 \\psi_2\\rangle$ for half-integer $j$ can be taken as a seed. It was computed by \\textcite{Iliesiu:2015akf}, using recursion relations as in Sec.~\\ref{sec:analyticrecursion}, making the list of 3d seeds complete. \n\nA similar discussion holds in 4d. 
In this case the complete set of seed blocks corresponds to the representations $r=(\\ell,\\ell+p)$ and $(\\ell+p,\\ell)$ appearing in the 4pt function of two scalars, one $(p,0)$ tensor, and one $(0,p)$ tensor:\n\\beq\n\\langle \\phi_3(x_3)\\mathcal O_{\\Delta_4,(0,p)}(x_4)| \\mathcal P_{\\Delta,r}|\n \\phi_1(x_1)\\mathcal O_{\\Delta_2,(p,0)}(x_2) \\rangle\\,.\n\\end{equation}\nAll of these seeds were computed in closed form by \\textcite{Echeverri:2016dun}, making use of the shadow formalism from Sec.~\\ref{sec:shadow}.\n\nOnce the seeds are known, a relation analogous to (\\ref{eq:spinning_cb_p0}) allows one to relate any conformal block to a combination of seed blocks through a suitable set of spinning-up operators $D^{(a)}_{r_i,r_j}$. The latter can be nicely written in the embedding formalism discussed in Appendix~\\ref{sec:embedding} or one of its generalizations. The precise expressions can be found in \\textcite{Costa:2011dw,Iliesiu:2015akf} in 3d or \\textcite{Echeverri:2015rwa} in 4d. In 4d, a comprehensive {\\tt Mathematica} package {\\tt CFTs4D} \\cite{Cuomo:2017wme} is also available, designed to facilitate general spinning 4d conformal block computations. {Spinors and spinor-tensor correlators in arbitrary dimensions were instead studied in \\textcite{Isono:2017grm}.}\n\nLet us mention briefly several other ideas which have proved useful when dealing with spinning blocks. \\textcite{Karateev:2017jgd} introduced a more general class of ``weight-shifting\" operators which act on correlation functions. In addition to reproducing the spinning-up operators as a special case, they have a further interesting consequence: when acting on a conformal block these operators can change the $SO(d)$ {(or $Spin(d)$)} representation of the exchanged state {by utilizing the $6j$ symbols of the conformal group. 
Through} repetitive use of these operators, it is possible to express any conformal block, including the seeds, in terms of the scalar ones.\\footnote{Explicit formulas expressing the seed blocks in 3d and 4d are provided in \\textcite{Karateev:2017jgd}.} {These methods also lead to efficient derivations of various recursion relations satisfied by the conformal blocks.}\n\nThe Casimir recursion approach from Sec.~\\ref{sec:casimirrecursion} was extended to arbitrary external bosonic operators by \\textcite{Costa:2016xah}. More recently, \\textcite{Kravchuk:2017dzd} considered similar expansions for arbitrary external operators, and related the recursion relation coefficients to the $6j$ symbols of $Spin(d-1)$, which are known in closed form for arbitrary representations in {$d=3,4$}, and for representations entering the seed blocks in arbitrary $d$. He also discusses how to convert from the $z$ to the $\\rho$ coordinate, as is needed for practical applications. \n\nThe pole expansion of Sec.~\\ref{sec:analyticrecursion} has also been generalized to spinning conformal blocks \\cite{Penedones:2015aga,Costa:2016xah}. Although no closed form expressions are known for the analogues of $h_\\infty$ and of the residues $R_A$ in Eq.~(\\ref{eq:recscalar}), these ingredients can sometimes be found by combining this approach with the spinning-up\/weight-shifting operators, as in {\\textcite{Iliesiu:2015akf}, \\textcite{Dymarsky:2017xzb}, and \\textcite{Karateev:2017jgd}.} Commuting these operators with the pole expansion sum, one obtains the expected pole expansion for spinning conformal blocks. 
By truncating the pole expansion, rational approximations similar to those considered in Sec.~\\ref{sec:rational} can then be constructed for each spinning block tensor structure.\n\nFinally, the shadow block technique discussed in Sec.~\\ref{sec:shadow} has been used to compute the conformal blocks appearing in the 4pt function of two scalars and two identical conserved currents \\cite{Rejon-Barrera:2015bpa}.\n\n\n\n\\subsubsection{2pt functions}\n\\label{sec:2pt}\n\nIt follows from Eq.~(\\ref{eq:correlations_covariance}) that the 2pt function of two operators ${\\cal O}_{\\Delta_1,r_1}$ and ${\\cal O}_{\\Delta_2,r_2}$ vanishes unless $\\Delta_1=\\Delta_2$ and $r_1=r_2^\\dagger$.\\footnote{Here $\\dagger$ means complex conjugation in Lorentzian signature, or taking the dual reflected representation in Euclidean signature, where reflected means replacing generators $R_{1\\nu}$ by $-R_{1\\nu}$. In 3d all representations are real, so the requirement $r_1 = r_2^{\\dagger}$ reduces to $r_1=r_2$, while in 4d if $r_1=(\\ell,\\bar \\ell)$ then $r_2=(\\bar\\ell,\\ell)$.} As a consequence, for every physical operator ${\\cal O}_{\\Delta,r}$, one can identify an operator ${\\cal O}^\\dagger_{\\Delta,r^\\dagger}$ which transforms in the conjugate representation.\\footnote{The precise action of Hermitian conjugation on Hilbert space operators depends on the signature and choice of quantization surface. For a detailed discussion see \\textcite{Simmons-Duffin:2016gjk}.}\n\nFurther, one can almost always work in a basis of operators such that ${\\cal O}$ has a nonzero 2pt function only with ${\\cal O}^\\dagger$, which is usually stated as ``the 2pt function is diagonal\".\\footnote{Examples of nonunitary conformal theories in which the 2pt functions cannot be so diagonalized occur in logarithmic CFTs, see e.g.~\\textcite{Hogervorst:2016itc}. We will not consider them in this review.} \nFor example, this is always possible in unitary theories. 
For operators in real $SO(d)$ representations $r^{\\dagger} = r$, like traceless symmetric tensors, we can choose a real operator basis so that ${\\cal O}^\\dagger = {\\cal O}$.\n\nSpecializing to traceless symmetric tensors, the 2pt function takes the form\\footnote{For the purposes of this review, it is sufficient to consider correlation functions in Euclidean signature. Most equations can also be used in Lorentzian signature, provided that all points are spacelike separated. For timelike separation one needs modifications, such as an $i\\epsilon$ prescription, which we will not discuss.}\n\\begin{gather}\n\\langle {\\cal O}_{\\Delta,\\ell}(x_1,\\zeta_1) {\\cal O}_{\\Delta,\\ell}(x_2,\\zeta_2) \\rangle = \\frac{\\left( I_{\\mu\\nu}(x_{12})\\zeta_1^\\mu \\zeta_2^\\nu\\right)^\\ell-\\text{traces} }{(x_{12}^2)^{\\Delta}}\\,,\\nn\\\\\nI_{\\mu\\nu}(x) =\\eta_{\\mu\\nu} -2 {x_\\mu x_\\nu}\/{x^2}\\,, \\label{eq:2points}\n\\end{gather}\nwhere $x_{ij} \\equiv x_i-x_j$, and ``traces\" are terms proportional to $\\zeta_1^2$, $\\zeta_2^2$, which are uniquely fixed by the tracelessness of ${\\cal O}_{\\Delta,\\ell}$. This generalizes Eq.~\\reef{eq:2pt} for scalars. It is customary to normalize such 2pt functions to unity, with exceptions being conserved currents and the stress tensor, see Sec.~\\ref{sec:ward}. The nontrivial part of the correlator is its numerator, which {specifies the dependence on the operator indices}. We will refer to such numerators as ``tensor structures\".\n\nIf the CFT contains a global symmetry, operators are grouped into global symmetry multiplets $\\pi$. In this case Eq.~\\reef{eq:2points} still applies to the individual components of the multiplets, with obvious appropriate modifications.\\footnote{If $\\pi$ is a complex representation, then it is not convenient to use the real operator basis. 
The nonzero 2pt function will then be between ${\\cal O}$ and ${\\cal O}^\\dagger$ transforming in $\\bar\\pi$.} We will discuss the consequences of global {symmetries} further in Sec.~\\ref{sec:global}.\n\n\n\\subsubsection{3pt functions}\n\\label{sec:3pt}\n\nNext we turn to 3pt functions, focusing on the case where the first two operators are scalars. Then it turns out that the third operator can only be a traceless symmetric tensor. {Generalizing Eq.~\\reef{eq:3pt} for three scalars, the 3pt function takes the form \\cite{Mack:1976pa}}\n\\begin{multline}\n\\label{eq:3points}\n\\langle \\mathcal O_{\\Delta_1}(x_1) \\mathcal O_{\\Delta_2}(x_2) \\mathcal O_{\\Delta_3,\\ell}(x_3, \\zeta) \\rangle= \\\\\n\\lambda_{123}\\, [\\left(Z^\\mu_{123} \\zeta_\\mu \\right)^\\ell -\\text{traces}]\\, {\\bf K}_3\\,,\n\\end{multline} \nwhere ${\\bf K}_3={\\bf K}_3(\\Delta_i,x_i)$ is given by\n\\beq\n{\\bf K}_3 =\\frac{1} {(x^2_{12})^{\\frac{h_{123}+\\ell}2} (x^2_{13})^{\\frac{h_{132}-\\ell}2}\\,\n\t(x^2_{23})^{\\frac{h_{231}-\\ell}2}}\\,,\n\\end{equation}\n$h_{ijk} \\equiv \\Delta_i+\\Delta_j-\\Delta_k$, and $Z_{123}^{\\mu}= \\frac{x^\\mu_{13}}{x_{13}^2}-\\frac{x^\\mu_{23}}{x_{23}^2}$. This 3pt function is unique up to the overall coefficient $\\lambda_{123}$. Notice that as defined,\n\\beq\n\\label{eq:flipsign}\n\\lambda_{123} = (-1)^\\ell \\lambda_{213}\\,,\n\\end{equation}\nwhile if $\\ell=0$ we can exchange any pair of fields and $\\lambda_{123}$ is fully symmetric.\nThe normalization of these coefficients is unambiguous, since the operators are assumed to be unit-normalized according to Eq.~(\\ref{eq:2points}). \nTogether with the {spectrum}, the $\\lambda$'s constitute the \\emph{CFT data}, which distinguish one CFT from another, as discussed in Sec.~\\ref{sec:informal}. \n\nIn unitary theories, {the CFT data must satisfy a set of} general well-understood constraints, see Sec.~\\ref{sec:unitarity}. 
{Significantly more} nontrivial constraints on the CFT data come from the crossing relations to be discussed in Sec.~\\ref{sec:crossing}.\n\nFor operators in three general $SO(d)$ representations{, the 3pt} functions take a form more complicated than \\reef{eq:3points}. They are also in general not unique, although for any three representations there is at most a finite-dimensional space of allowed tensor structures. The problem of their construction has been completely solved in the most physically important cases of $d=3$~\\cite{Costa:2011mg,Iliesiu:2015qra} and $d=4$~\\cite{Elkhidir:2014woa}. For general $d$ there are partial results, e.g.~\\textcite{Costa:2011mg} for 3pt functions of traceless symmetric tensors,~\\textcite{Costa:2016hju} for traceless mixed-symmetry tensors, and~\\textcite{Kravchuk:2016qvl} for a general approach to classifying the structures.\n\n\\subsubsection{4pt functions}\n\nFinally let us consider 4pt functions, which as mentioned in Sec.~\\ref{sec:informal} play {a} fundamental role in the conformal bootstrap. Focusing here on the case of scalars, the 4pt function must take the general form\n\\beq\n\\label{eq:4p}\n\\langle \\mathcal O_{\\Delta_1}(x_1) \\mathcal O_{\\Delta_2}(x_2) \\mathcal O_{\\Delta_3}(x_3) \\mathcal O_{\\Delta_4}(x_4)\\rangle = g(u,v)\\, \\mathbf{K}_4\\,.\n\\end{equation}\nThe factor $\\mathbf{K}_4=\\mathbf{K}_4(\\Delta_i,x_i)$ is given by\n\\beq\n\\label{eq:K4}\n\\mathbf{K}_4 = \n\\frac{1}{(x_{12}^2)^{\\frac{\\Delta_1+\\Delta_2}2}(x_{34}^2)^{\\frac{\\Delta_3+\\Delta_4}2}} \\left(\\frac{x^2_{24}}{x^2_{14}}\\right)^{\\frac{\\Delta_{12}}2}\n\\left(\\frac{x^2_{14}}{x^2_{13}}\\right)^{\\frac{\\Delta_{34}}2}\n\\,, \n\\end{equation}\nwhere {$\\Delta_{ij} \\equiv \\Delta_i - \\Delta_j$}. This factor by itself transforms under conformal transformations as {prescribed by Eq.~(\\ref{eq:correlations_covariance}). 
}\nThe remaining part of the correlator, $g(u,v)$, must be a function of two \\emph{cross ratios} $u,v$:\n\\begin{eqnarray}\n\\label{eq:uv}\nu=\\frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2}\\,, \\qquad v=\\frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}\\,,\n\\end{eqnarray}\nwhich are invariant under all conformal transformations.\n\nWhile no further information about $g(u,v)$ can be obtained from conformal invariance alone, it can in fact be computed in terms of the CFT data using additional tools such as the OPE and conformal blocks. This will be discussed in Secs.~\\ref{sec:OPE} and~\\ref{sec:cb}.\n\n\n\\subsubsection{Conformal frames} \n\\label{sec:confframe}\nHere we will give a more group-theoretical intuition for the number of degrees of freedom contained in a given correlator, and in particular for why conformal invariance fixes 2pt and 3pt functions up to a few constants, but allows arbitrariness in 4pt functions. \n \nGiven a set of $n$ points, we can make use of conformal transformations to arrange them in convenient configurations. {For instance, given} 3 arbitrary points, we can find a conformal transformation which maps them to $x_{1,2,3}=0,\\hat{e},\\infty$, where $\\hat{e}$ is a fixed unit vector.\n\n\\begin{figure}[t]\n\t\\begin{centering}\n\t\t\\includegraphics[width=0.6\\columnwidth]{fig02-z-frame.pdf}\n\t\t\\caption{\\label{fig:z-frame}\n\t\t\t(Color online) Conformal frame defining the $z$ coordinate. Figure from \\cite{Hogervorst:2013sma}.\n\t\t}\n\t\\end{centering}\n\\end{figure}\n\nFor 4 points, we can first find a conformal transformation fixing 3 of them as above, and then rotate around the axis to put the fourth point into a fixed plane (we assume that $d\\geqslant 2$). 
The resulting configuration can be parametrized in Euclidean signature as ($\\mathbf{0} \\equiv \\mathbf{0}_{d-2}$)\\footnote{We define ${\\cal O}(\\infty)$ {by taking} the limit of $|x_4|^{2\\Delta_{\\cal O}} {\\cal O}(x_4)$ {as} $x_4\\to\\infty$, {which yields a finite value for the correlation function.}}\n\\begin{align}\nx_1 &= (0,0,\\mathbf{0})\\,,\\quad x_2 = (\\sigma,\\tau,\\mathbf{0})\\,,\\nn\\\\\nx_3 &= (1,0,\\mathbf{0})\\,,\\quad x_4 = (\\infty,0,\\mathbf{0})\\,.\n\t\\label{eq:conformalframe1}\n\\end{align}\nIt is customary to define (see Fig.~\\ref{fig:z-frame})\n\\beq\n\\label{eq:z}\nz=\\sigma+i\\tau,\\quad \\bar{z}=\\sigma-i\\tau\\,,\n\\end{equation} \nwhich are complex conjugate variables if we are working in the Euclidean.\\footnote{Notice that we can analytically continue to {the Lorentzian} via $\\tau\\to i t$, and then $z$ and $\\bar z$ become independent real variables, but this will not play a role in this review.} The conformal cross ratios can be expressed in terms of $z$, $\\bar{z}$ as\n\\begin{eqnarray}\n\\label{eq:zzb}\n\tu= z\\bar{z} \\,, \\qquad v= (1-z)(1-\\bar{z})\\,.\n\\end{eqnarray}\n\nA choice of points $x_i$, as in Eq.~(\\ref{eq:conformalframe1}), is called a \\emph{conformal frame}. It can be thought of as a gauge fixing of most or all of the conformal symmetry. By construction, any coordinate configuration can be reduced to the conformal frame form. Therefore, the knowledge of a correlation function in the conformal frame is sufficient to reconstruct it at any other point through its covariance properties {\\cite{Osborn:1993cr}}. The functional forms of 2pt and 3pt functions are fixed because their conformal frames do not contain any free parameters. The 4pt conformal frame\n\\reef{eq:conformalframe1} has 2 real parameters, explaining the functional freedom of the conformal 4pt function. 
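As a concrete illustration (a numerical sketch of ours, not part of the standard presentation), one can verify Eq.~(\ref{eq:zzb}) directly: place four points in the frame (\ref{eq:conformalframe1}), compute $u,v$ from the squared-distance definition (\ref{eq:uv}), and compare with $z\bar z$ and $(1-z)(1-\bar z)$. The point at infinity is approximated here by a large but finite coordinate.

```python
# Check u = z*zbar and v = (1-z)(1-zbar) in the conformal frame
# x1=(0,0), x2=(sigma,tau), x3=(1,0), x4 -> infinity (large finite proxy).

def sq(a, b):
    """Squared Euclidean distance x_ab^2 between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def cross_ratios(x1, x2, x3, x4):
    """Cross ratios u, v built from squared distances, Eq. (uv)."""
    u = sq(x1, x2) * sq(x3, x4) / (sq(x1, x3) * sq(x2, x4))
    v = sq(x1, x4) * sq(x2, x3) / (sq(x1, x3) * sq(x2, x4))
    return u, v

sigma, tau = 0.4, 0.1                      # arbitrary frame parameters
z, zbar = complex(sigma, tau), complex(sigma, -tau)

u, v = cross_ratios((0.0, 0.0), (sigma, tau), (1.0, 0.0), (1e8, 0.0))
assert abs(u - (z * zbar).real) < 1e-6     # u = z zbar = sigma^2 + tau^2
assert abs(v - ((1 - z) * (1 - zbar)).real) < 1e-6
```

Sending $x_4\to\infty$ exactly corresponds to inserting the rescaled operator ${\cal O}(\infty)$ of the footnote above; numerically, the $1/|x_4|$ corrections from the finite proxy are far below the tolerance used.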
See Sec.~\\ref{sec:radial} for another frequently used conformal frame.\n\nConformal frames provide a way to construct conformal correlators {which is sometimes more convenient than the embedding formalism described in App.~\\ref{sec:embedding}.} This method can also be used to classify the allowed tensor structures. {An important role is then played by the stabilizer group, defined as the set of conformal transformations leaving the conformal frame configuration invariant. It is $SO(d-1)$ for 3pt functions and $SO(d-2)$ for 4pt functions. One classifies tensor structures invariant under the stabilizer group, and each of them lifts to an independent conformally invariant tensor structure \\cite{Kravchuk:2016qvl}. This method is particularly useful when dealing with 4pt functions of tensor operators: it does not overcount tensor structures, which may happen in the embedding formalism unless special care is taken.}\n\n\n\n\\subsubsection{Explicit solutions to crossing}\n\\label{sec:explicit}\n\nMany nontrivial 2d CFTs have exact solutions (e.g.~the minimal models), and the conformal block decompositions of their 4pt functions provide explicit solutions to crossing relations. \nHere we will discuss a few explicit solutions to crossing known in $d>2$. Their existence is important, even though as we will see they come from theories which are not physically the most interesting ones. For example, it is common to check the numerical algorithms against the known explicit solutions to exclude coding errors, before proceeding to study more physically interesting solutions numerically. \n\nEssentially all explicit solutions in $d>2$ are provided by scale-invariant ``gaussian theories\", i.e.~theories coming from a quadratic action written in terms of a fundamental field and not having any massive parameter.\\footnote{The only exceptions known to us are the ``fishnet theories\" --- nonunitary bi-scalar field theories integrable in the large-$N$ limit \\cite{Gurdogan:2015csr}. 
Recently some conformally-invariant 4pt functions and their conformal block decompositions were computed in such theories in 4d \\cite{Grabner:2017pgm}, and in their nonlocal generalizations to arbitrary $d$ \\cite{Kazakov:2018qbr}.}\nThe correlation functions of such theories are generated by Wick's theorem from the basic 2pt function of the fundamental field. The simplest examples are the massless free scalar and massless free fermion theory, which are conformally invariant in any $d$, and the free abelian gauge theory, conformally invariant in $d=4$. In 4d, explicit conformal block decompositions of 4pt scalar correlation functions $\\< {\\cal O} {\\cal O} {\\cal O} {\\cal O} \\>$ in these theories (for ${\\cal O}=\\phi$, $\\phi^2$, $\\bar\\psi\\psi$, $F_{\\mu\\nu}^2$) were obtained by \\textcite{DO1}. \n\nAnother class of gaussian theories are mean field theories (MFTs), also called generalized free fields~\\cite{Heemskerk:2009pn}, \\cite[{section 4}]{ElShowk:2011a}. Correlation functions in these theories have the same disconnected structure generated by Wick's theorem as in the above mentioned free theories. The only difference is that the scaling dimension of the fundamental field, fixed to a particular value in free theories, becomes a free parameter in MFT.\\footnote{This structure naturally emerges in large-$N$ CFTs as a consequence of large-$N$ factorization. This is particularly transparent in CFTs with holographic duals, since MFT correlation functions are generated by free massive fields in AdS${}_{d+1}$ and the arbitrary scaling dimension is determined by the mass.} For example, we can consider the MFT of a scalar field $\\phi$ of arbitrary dimension $\\Delta_\\phi$. Such a MFT is unitary as long as $\\Delta_\\phi$ satisfies the unitarity bound, and reduces to the free massless scalar for $\\Delta_\\phi=(d-2)\/2$. 
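Concretely, Wick's theorem gives for the 4pt function of $\\phi$ in scalar MFT the sum of the three pairwise contractions (a standard expression; $u$ and $v$ denote the usual conformal cross ratios):\n\\begin{align}\n\\langle\\phi(x_1)\\phi(x_2)\\phi(x_3)\\phi(x_4)\\rangle &= \\frac{1}{(x_{12}^2 x_{34}^2)^{\\Delta_\\phi}}+\\frac{1}{(x_{13}^2 x_{24}^2)^{\\Delta_\\phi}}+\\frac{1}{(x_{14}^2 x_{23}^2)^{\\Delta_\\phi}}\\nn\\\\\n&= \\frac{1+u^{\\Delta_\\phi}+(u\/v)^{\\Delta_\\phi}}{(x_{12}^2 x_{34}^2)^{\\Delta_\\phi}}\\,,\n\\end{align}\nwhere $u=x_{12}^2 x_{34}^2\/(x_{13}^2 x_{24}^2)$ and $v=x_{14}^2 x_{23}^2\/(x_{13}^2 x_{24}^2)$. Expanding $u^{\\Delta_\\phi}+(u\/v)^{\\Delta_\\phi}$ in conformal blocks is what produces the explicit MFT OPE data discussed below.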
Just like for the usual free theories, the full space of operators in MFTs can be classified by considering normal-ordered products of the fundamental field and its derivatives.\\footnote{The OPE $\\phi\\times\\phi$ contains only operators of the schematic form $\\phi (\\partial^{2})^n\\partial^\\ell\\phi$, which have spin $\\ell$ and dimension \\mbox{$2\\Delta_\\phi+2n+\\ell$}.} For example, there is an operator $\\phi^2$ which has dimension $2\\Delta_\\phi$. \n\nAlthough relatively trivial and nonlocal, MFTs satisfy most CFT axioms (except for the existence of a local stress tensor). As we will see below, they frequently fall inside regions allowed by the bootstrap bounds, so it helps to be familiar with them. Explicit conformal block decompositions of MFT 4pt functions containing scalars were obtained by \\textcite{Heemskerk:2009pn} for $d=2,4$ and by \\textcite{Fitzpatrick:2011dm} in general $d$. \n\n\n\n\n\n\n\n\\subsubsection{Unitarity bounds}\n\\label{sec:UB}\n\nWe already get a simple and powerful constraint by considering radial quantization states $|\\Psi\\>$ produced by a local operator ${\\cal O}$ acting at the origin. In this case the conjugate operator is inserted at infinity. For {a} primary ${\\cal O}$ we recover that its 2pt function must have positive normalization and hence can be normalized to one as in \\reef{eq:2points}. Additional constraints arise from considering descendants of ${\\cal O}$. The conformal algebra determines the norms of descendants as polynomials in the primary dimension $\\Delta$. Imposing that all descendants have a non-negative norm gives a lower bound on $\\Delta$. This \n``unitarity bound\" depends on the representation $r$ of $SO(d)$ (or its double cover {for spinor representations}) in which the primary transforms.\\footnote{Standard CFT references are \\textcite{Ferrara:1974pt}, \\textcite{Mack:1975je}, and \\textcite{Minwalla:1997ka}. An early physics reference is \\textcite{doi:10.1063\/1.1705183}. 
In the mathematics literature, these bounds were derived by \\textcite{jantzen_kontravariante_1977}, although the relevance of this work for physics was realized only recently \\cite{Penedones:2015aga,Yamazaki:2016vqi}. See also \\textcite{Rychkov:2016iqz,Simmons-Duffin:2016gjk} for a review. Unitarity bounds can be equivalently derived by studying the positivity of the Fourier transform of the 2pt function analytically continued to Lorentzian signature (the Wightman function), see \\textcite{Ferrara:1974pt}, \\textcite{Mack:1975je} (in the sufficiency part of the argument), as well as \\textcite{Grinstein:2008qk} for a recent exposition emphasizing physics.} \\footnote{In Lorentzian signature, operators satisfying the unitarity bounds correspond to the unitary representations of the universal covering group of the Lorentzian conformal group $SO(d,2)$ having positive energy. Notice that in Euclidean signature operators satisfying the unitarity bounds have no relation to the representation of the Euclidean conformal group $SO(d+1,1)$ which are unitary in the usual mathematical sense of the term. This is already clear from looking at the principal series unitary representations of $SO(d+1,1)$ which have complex scaling dimensions $d\/2+i\\bR$.}\n\nIn 3d, the representation $r$ is labeled by a half-integer $j$, with $j=\\ell$ for traceless symmetric spin-$\\ell$ tensors. 
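It is instructive to sketch the derivation for a scalar primary $|\\Delta\\>$. In conventions where $[K_\\mu,P_\\nu]=2(\\delta_{\\mu\\nu}D-M_{\\mu\\nu})$, with $K_\\mu|\\Delta\\>=0$ and $M_{\\mu\\nu}|\\Delta\\>=0$, the level-one and level-two norms evaluate to (see \\textcite{Simmons-Duffin:2016gjk} for the general computation)\n\\begin{align}\n\\langle\\Delta|K_\\nu P_\\mu|\\Delta\\rangle &= 2\\Delta\\,\\delta_{\\mu\\nu}\\,,\\nn\\\\\n\\|P^2|\\Delta\\rangle\\|^2 &\\propto \\Delta\\left(\\Delta-{\\textstyle\\frac{d-2}{2}}\\right)\\,,\n\\end{align}\nwith a positive proportionality constant. Positivity at level one gives $\\Delta\\geqslant 0$, while level two gives the scalar bound $\\Delta\\geqslant (d-2)\/2$, saturated by the free massless scalar, for which $P^2|\\Delta\\>$ is the null state corresponding to $\\partial^2\\phi=0$.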
The unitarity bounds are\n\\begin{align}\n\td=3:\\quad &\\Delta\\geqslant 1\/2 &&(\\text{scalar, } j=0)\\,,\\nn\\\\\n\t&\\Delta\\geqslant 1 &&(\\text{smallest spinor, } j=1\/2) \\,,\\\\\n\t&\\Delta\\geqslant j+1\\ &&(j>1\/2)\\,.\\nn\n\t\\end{align}\n\nIn 4d, we can label the representation $r$ by two integers $(\\ell,\\bar \\ell)$, with traceless symmetric spin-$\\ell$ tensors having $\\ell=\\bar\\ell$.\\footnote{It is also common in the literature to label by half-integers $j=\\ell\/2,\\, \\bar{j}=\\bar{\\ell}\/2$.} The unitarity bounds then read\n\\begin{align}\n\td=4:\\quad &\\Delta\\geqslant 1&&(\\text{scalar, } \\ell=\\bar\\ell=0)\\,,\\nn\\\\\n\t&\\Delta\\geqslant {\\textstyle\\frac 12} \\ell+1 &&(\\ell>0, \\bar\\ell=0) \\,,\\label{eq:4dunitarity}\\\\\n\t&\\Delta\\geqslant {\\textstyle\\frac 12}(\\ell+\\bar\\ell)+2 &&(\\ell \\bar\\ell\\ne0)\\,.\\nn\n\\end{align}\n\nFor the 5d and 6d unitarity bounds see \\textcite{Minwalla:1997ka}. For some representations occurring in all dimensions the unitarity bounds can be written in dimension-independent form as follows:\n\\begin{align}\n\t &\\Delta\\geqslant {\\textstyle\\frac 12}(d-2)&&(\\text{scalar})\\,,\\nn\\\\\n\t&\\Delta\\geqslant {\\textstyle\\frac 12}(d-1) &&(\\text{smallest spinor}) \\,,\\\\\n\t&\\Delta\\geqslant \\ell+d-2 &&(\\text{traceless symmetric, spin }\\ell\\geqslant 1)\\,.\\nn\n\\end{align}\n\nAs a final comment, in physics literature the unitarity bounds are often derived by imposing positivity of the descendant norm{s} on the first (and the second, for scalars) level. It is a nontrivial fact that no further constraints arise from higher levels. See \\textcite[{Tables 3 and 5}]{Bourget:2017kik} for a review of rigorous mathematical results for unitary bounds in any $d$.\n\n\\subsubsection{OPE coefficients}\n\nUnitarity also gives reality constraints on OPE coefficients of real operators. 
Consider the 3pt function \\reef{eq:3points} between two scalars and a traceless symmetric tensor, assuming all three operators are real. Then the 3pt function coefficient must be real:\n\\beq\n\\label{eq:reality}\n\\lambda_{123}\\in \\bR\\,.\n\\end{equation}\nTo argue this, we can consider a 6pt function $\\<{\\cal O}_1 {\\cal O}_2 (\\Theta {\\cal O}_3) {\\cal O}_3 {\\cal O}_2 {\\cal O}_1\\>$, with the operators arranged mirror-symmetrically against a plane into two compact groups positioned a large distance from each other (see Fig.~\\ref{fig:6pt}). Here $\\Theta$ is the reflection factor mentioned in footnote \\ref{note:Theta}. Reflection positivity implies that this 6pt function should be real and positive.\\footnote{For this argument we are thus using the standard Osterwalder-Schrader reflection positivity and not the ``inversion-positivity\".} On the other hand, by cluster decomposition this 6pt function is equal to the product of two distant 3pt functions, which is easily seen to be $\\lambda_{123}^2$ times a positive number. So \\reef{eq:reality} follows. We stress that this conclusion holds for both even and odd $\\ell$.\\footnote{In essence we argued that the complex conjugate of a 3pt function is equal to the 3pt function of conjugate fields at reflected positions. This (for general $n$-point functions) is sometimes taken as an additional axiom for unitary theories, encoded by the equation ${\\cal O}(\\tau,\\mathbf{x})^\\dagger = {\\cal O}^\\dagger(-\\tau,\\mathbf{x})$ valid in Euclidean quantization by planes. Upon analytic continuation to Lorentzian signature, this leads to commutativity of operators at spacelike separation, used to prove reality of OPE coefficients in \\textcite{Rattazzi:2008pe}. 
Our 6pt argument shows that this axiom is not independent but follows from reflection positivity and cluster decomposition.}\n\n\\begin{figure}[t]\n\t\\begin{centering}\n\t\t\\includegraphics[width=0.6\\columnwidth]{fig03-6pt.pdf}\n\t\t\\caption{\\label{fig:6pt}\n Positivity of this 6pt function implies reality of the 3pt function coefficient $\\lambda_{123}$, see the text.\n\t\t}\n\t\\end{centering}\n\\end{figure}\n\nIt was important for the above argument that the tensor structure entering \\reef{eq:3points} was parity invariant (i.e., it did not involve the $\\epsilon$-tensor). This argument can be generalized to OPE coefficients for general 3pt tensor structures. The OPE coefficients of tensor structures must be purely imaginary or real depending on whether they involve the $\\epsilon$-tensor or not. One must similarly be careful with OPE coefficients involving spinors.\n\nConsider now the 4pt function $\\<{\\cal O}_2 {\\cal O}_1 {\\cal O}_1 {\\cal O}_2\\>$ where ${\\cal O}_1$ and ${\\cal O}_2$ are real scalars and the point configuration is reflection-symmetric or inversion-symmetric. This 4pt function should be non-negative as a basic consequence of unitarity, and Eq.~\\reef{eq:reality} implies that a more nuanced statement is true: the individual contribution of every primary ${\\cal O}$ to this 4pt function is non-negative, see Eq.~\\reef{eq:CBdec} below. This can be generalized to external operators in general $SO(d)$ (or $Spin(d)$) representations, including the case when there are multiple 3pt function tensor structures.\n\nTo summarize, the unitarity bounds say that the CFT Hilbert space has a positive-definite norm, and the OPE coefficient reality constraints say that {the} OPE preserves this positive-definite structure. If the CFT data satisfies both of these constraints, we are guaranteed that the CFT will be unitary. 
The bootstrap {obtains} further constraints on CFT data by combining unitarity with crossing relations.\n\n\n\\subsubsection{Averaged null energy condition}\n\\label{sec:ANEC}\n\nIn a QFT in Lorentzian signature, we can consider the integral of the stress tensor component $T_{++}$ along a light ray: the light-like direction $x^+$ {with} all other coordinates fixed to zero. The averaged null energy condition (ANEC) says that this light-ray operator has a non-negative expectation value in any state:\\footnote{Such conditions were first introduced in general relativity, with integration along a null geodesic, in connection with singularity theorems and wormholes. Here we focus on the ANEC in flat space, first discussed by \\textcite{Klinkhammer:1991ki}.}\n\\beq\n\\label{eq:ANEC}\n\\<\\Phi|\\int_{-\\infty}^\\infty dx^+\\,T_{++} |\\Phi\\>\\geqslant 0\\,.\n\\end{equation}\nThe ANEC should hold in any unitary QFT. Two general proofs of the ANEC were given recently, one via quantum information \\cite{Faulkner:2016mzt}, and one by causality \\cite{Hartman:2016lgu}.\\footnote{See also \\textcite{Kravchuk:2018htv} for a recent discussion of light-ray operators in Lorentzian CFTs and an alternative proof of the ANEC.} \nSpecializing to CFTs, the causality argument makes it clear that the ANEC is not an extra assumption but follows from other CFT axioms such as unitarity, the OPE, and crossing relations for correlation functions involving $T_{\\mu\\nu}$.\\footnote{This is also suggested by the fact that {bounds following from the ANEC can be reproduced in the numerical bootstrap, see Sec.~\\ref{sec:JandT}.}} Notice however that any results following from the ANEC will require the existence of a local stress-tensor operator.\n\nChoosing $|\\Phi\\>$ in \\reef{eq:ANEC} to be generated by a local operator ${\\cal O}$ acting on the vacuum, the ANEC leads to positivity constraints on 3pt functions $\\<{\\cal O} T_{\\mu\\nu} {\\cal O}\\>$ called ``conformal collider bounds\" 
\\cite{Hofman:2008ar}.\\footnote{Conformal collider bounds in general dimensions for states created by the stress tensor or global symmetry currents were obtained in \\textcite{Buchel:2009sk} and \\textcite{Chowdhury:2012km}. A proof of these bounds independent from the ANEC was given in \\textcite{Hofman:2016awc}; see also \\textcite{Hartman:2015lfa,Hartman:2016dxc}. Other generalizations of these bounds have been explored in~\\textcite{Li:2015itl}, \\textcite{Komargodski:2016gci}, \\textcite{Chowdhury:2017vel}, \\textcite{Cordova:2017zej}, \\textcite{Meltzer:2017rtf}, and \\textcite{Cordova:2017dhq}. Sum rules involving the same coefficients were also recently presented in~\\textcite{Witczak-Krempa:2015pia}, \\textcite{Chowdhury:2016hjy}, \\textcite{Chowdhury:2017zfu}, and \\textcite{Gillioz:2016jnn,Gillioz:2018kwh}. } \n\nRecently, \\textcite{Cordova:2017dhq} used the ANEC to argue that primaries of high chirality (large $|\\ell-\\bar\\ell|$) in unitary 4d CFTs should satisfy unitarity bounds stronger than \\reef{eq:4dunitarity}. From partial checks for $\\bar\\ell=0,1$, they conjecture the general bound (assuming $\\ell\\geqslant \\bar\\ell$)\n\\beq\n\\label{eq:Cordova}\n\\Delta \\geqslant \\ell\\,.\n\\end{equation}\nFor $\\bar\\ell=0$ this is stronger than \\reef{eq:4dunitarity} when $\\ell>2$, while for $\\bar\\ell>0$ it is stronger when $\\ell>\\bar\\ell+4$. This can be viewed as a CFT strengthening of the theorem of \\textcite{Weinberg:1980kq}.\n\n\\subsubsection{Stress tensor}\n\nIn the axiomatic approach considered here, a local CFT is simply defined as a CFT having a local conserved stress tensor operator $T_{\\mu\\nu}$. In the operator classification, $T_{\\mu\\nu}$ is a traceless symmetric spin-2 primary of scaling dimension $d$.\\footnote{Conformal invariance allows one to consistently impose conservation of the stress tensor. 
In technical language, the divergence of the dimension $d$ traceless symmetric spin-2 primary is a null descendant and can be set to zero.}\n\nIn local CFTs, the conformal algebra generators \\reef{eq:algebra} are obtained by integrating the stress tensor against {a vector field $\\epsilon^{\\cal J}_\\nu(x)$, describing the corresponding infinitesimal conformal transformation, over a surface $\\Sigma$ surrounding the origin. Thus we have}\n\\begin{eqnarray}\n\\label{eq:TwardIdentites}\n{\\cal J} = -\\int_{\\Sigma} dS_\\mu \\epsilon^{\\cal J}_\\nu(x) T^{\\mu\\nu}(x)\\,,\n\\end{eqnarray}\nwhich is independent of the shape of $\\Sigma$. See \\textcite{Simmons-Duffin:2016gjk} for a detailed review of this way of introducing the conformal algebra.\\footnote{However, it should be stressed that there are physically interesting theories which satisfy all CFT axioms except for the existence of the local stress tensor. Examples include defect and boundary CFTs (see Sec.~\\ref{sec:bdry} and footnote \\ref{note:defects}), and critical points of models with long-range interactions, see~e.g.~\\textcite{Paulos:2015jfa} and~\\textcite{Behan:2017dwr,Behan:2017emf}.} In particular, the dilatation generator $D$ is given by \\reef{eq:TwardIdentites} with $\\epsilon^D_\\nu=x_\\nu$.\n\nIt is conventional to normalize the stress tensor via Eq.~\\reef{eq:TwardIdentites}. Namely, inserting the above surface operator in any correlator should have the effect of replacing the operator at the origin by $[{\\cal J},{\\cal O}(0)]$, assuming the other operators are outside of the region enclosed by $\\Sigma$. This constraint is called an (integrated) Ward identity.\n\nA frequently occurring case is to consider the 3pt function $\\<{\\cal O}(0) T_{\\mu\\nu}(x){\\cal O}(y)\\>$ which by the Ward identity should reduce to $\\<[{\\cal J},{\\cal O}(0)] {\\cal O}(y)\\>$ after integration. 
Since $[{\\cal J},{\\cal O}(0)]$ is known, this provides constraints on the coefficients of various tensor structures in the 3pt function. \n\nThese constraints should be imposed in addition to constraints from conservation of $T_{\\mu\\nu}$. Vanishing of the divergence is automatic for 2pt functions, while in general it must be imposed on 3pt functions containing $T_{\\mu\\nu}$, placing constraints on the allowed tensor structures. Such constraints are not independent if the other operators are scalars, but become nontrivial if they have spin, see \\textcite{Osborn:1993cr} and \\textcite{Costa:2011mg}.\\footnote{Some important cases are when ${\\cal O}$ is a conserved spin-1 vector or the stress tensor itself. In both these cases there are several tensor structures allowed by conformal invariance and conservation, and only one independent Ward identity, see \\textcite{Osborn:1993cr} and \\textcite{Dymarsky:2017xzb,Dymarsky:2017yzx}. Ward identity constraints on 3pt functions $\\<\\psi T \\bar\\psi\\>$ with $\\psi$ a fermion were studied in 3d by \\textcite{Iliesiu:2015qra} and in 4d by \\textcite{Elkhidir:2017iov}. In these cases there are two independent tensor structures allowed by conservation, and their coefficients can both be fixed by considering the Ward identity for $D$ as well as for $P_{\\mu}$ or $M_{\\mu\\nu}$.}\n\nIn particular, when ${\\cal O} = \\phi$ is a scalar, there is just one tensor structure. Using the Ward identity e.g.~for ${\\cal J}=D$ one fixes the OPE coefficient completely. 
In the notation of \\reef{eq:3points} we have \\cite{Osborn:1993cr}\n\\begin{gather}\n\t\\langle\\phi(x_1) \\phi(x_2) T(x_3,\\zeta)\\rangle =\\lambda_{\\phi\\phi T} [(Z_{123} \\cdot \\zeta)^2-{\\textstyle\\frac 12} \\zeta^2] \\, \\mathbf{K}_3\\,,\\nonumber\\\\\n\t\\lambda_{\\phi \\phi T} = - \\frac{d \\Delta_\\phi }{(d-1)S_d},\\quad S_d = \\frac{2\\pi^{d\/2}}{\\Gamma(d\/2)} \\,.\n\t\t\\label{eq:TOO1}\n\\end{gather}\nIt can also be shown that the stress tensor does not couple to two scalars of unequal dimension, as the 3pt function structure \\reef{eq:3points} is then incompatible with conservation.\n\nSince we normalize via Eq.~\\reef{eq:TwardIdentites}, the stress tensor 2pt function will not be unit-normalized but will contain a constant $C_T$ called the central charge:\\footnote{This corresponds to one of the central charge definitions in $d=2$. Notice however that in $d>2$, there is no known analogue of the Virasoro algebra interpretation of the central charge.}\n\\begin{eqnarray}\n\\label{eq:CT}\n\\langle T(x_1,\\zeta_1)T(x_2,\\zeta_2)\\rangle = \\frac{C_T}{S_d^2}\\frac{(\\zeta_1 \\cdot I \\cdot \\zeta_2)^2-\\frac1d \\zeta_1^2\\zeta_2^2}{(x_{12}^2)^{d}}\\,.\n\\end{eqnarray}\nA similar convention will be set below for conserved spin-1 currents, while the rest of primaries are kept unit-normalized. 
\n\nIn the normalization Eqs.~\\reef{eq:TOO1} and~\\reef{eq:CT}, the contribution of the stress tensor to 4pt functions of scalars is given by:\\footnote{This is easy to find by rescaling $T_{\\mu\\nu}$ to match the normalization in Eq.~(\\ref{eq:2points}).} \n\\begin{eqnarray}\n&&\\langle\\phi(x_1)\\phi(x_2) \\phi'(x_3)\\phi'(x_4)\\rangle \\supset p_{d,2}\\, g^{0,0}_{d,2}(u,v) \\, \\mathbf K_4\\,,\\nonumber\\\\\n&&p_{d,2} = \\lambda_{\\phi\\phi T} \\lambda_{\\phi'\\phi' T}\\frac{S_d^2}{ C_T} = \\frac{d^2}{(d-1)^2}\\frac{\\Delta_\\phi \\Delta_{\\phi'}}{C_T}\\,.\n\\label{eq:lambdaT}\n\\end{eqnarray}\nAs usual, the conformal block is normalized according to Eq.~(\\ref{eq:cb_ope}). This constraint can play an important role in bootstrap analyses involving multiple 4pt functions, as it implies that the stress tensor contributes to different 4pt functions in a correlated way. \n\nWhile outside of 2d there is no analogue of the ``$c$-theorem\"~\\cite{Zamolodchikov:1986gt} for $C_T$,\\footnote{Instead, it is known that in 3d the sphere free energy satisfies an ``$F$-theorem\", see \\textcite{Jafferis:2011zi}, \\textcite{Klebanov:2011gs}, and \\textcite{Casini:2012ei}, while in 4d the $a$ anomaly coefficient satisfies an ``$a$-theorem\", see \\textcite{Cardy:1988cwa}, \\textcite{Osborn:1989td}, \\textcite{Jack:1990eb}, \\textcite{Komargodski:2011vj}, and \\textcite{Komargodski:2011xv}.} the central charge typically scales with the number of degrees of freedom. This is illustrated by the values of the central charge of a free theory containing $n_\\phi$ scalars, $n_\\psi$ Dirac fermions, and $n_A$ gauge vectors (in 4d only), given by~\\cite{Osborn:1993cr}\n\\beq\nC_T= \\frac{d}{d-1} n_\\phi +2^{\\left\\lfloor{d\/2}\\right\\rfloor-1} d \\, n_\\psi +16 \\,\\delta_{d,4}\\, n_A \\,.\n\\end{equation}\n\n\n\\subsubsection{Global symmetry currents}\n\nThe case of a continuous global symmetry in a local CFT is analogous. 
In this case there are conserved spin-1 currents $J_\\mu^A$ which transform in the adjoint representation of $G$ {and have scaling dimension $d-1$.} Global symmetry generators are obtained by integrating $J_\\mu^A$ {over} a surface, which defines a normalization for the current and leads to Ward identities. \n\nFor concreteness, consider a scalar operator $\\phi_i$ transforming in some representation $r$ of $G$ with generators $(T^A)_{i}^{j}$, as well as $\\phi^\\dagger{}^{j}$ transforming in $\\bar r$. We assume that the scalar 2pt function is unit-normalized{, $\\langle\\phi_i \\phi^{\\dagger}{}^{j} \\rangle \\propto \\delta_{i}^{j}$,} as discussed in Sec.~\\ref{sec:global}. The generators of the global symmetry transformations are {then} $Q^A=-i\\int_\\Sigma dS^\\mu J^A_\\mu $ and the associated Ward identity requires $[Q^A,\\phi_i] = -(T^A)_{i}^{j} \\phi_j$. The 3pt function with $J^A$ is then fixed to be \\cite{Osborn:1993cr,Poland:2010wg}\n\\beq\n\\label{eq:JOO}\n\\langle \\phi_i(x_1) \\phi^{\\dagger}{}^{j}(x_2) J^A(x_3,\\zeta)\\rangle =-\\frac{i}{S_d} (T^A)_{i}^{j} [Z_{123} \\cdot \\zeta ] \\, \\mathbf{K}_3 \\,.\n\\end{equation}\nIn this normalization one can define a \\emph{current central charge} $C_J$ by\n\\begin{eqnarray}\\label{eq:CJ}\n&& \\langle J^A(x_1,\\zeta_1)J^B(x_2,\\zeta_2)\\rangle =\\tau^{AB}\\frac{C_J}{S_d^2} \\frac{\\zeta_1 \\cdot I \\cdot \\zeta_2}{(x_{12}^2)^{d-1}}\\,,\\hspace{2em} \n\\end{eqnarray}\nwhere $\\tau^{AB}= {\\rm Tr}\\left[T^{A}T^{B}\\right]$. 
\n\nIn the end, rescaling $J^A_\\mu$ to match the normalization of Eq.~(\\ref{eq:2points}), the contribution of a spin-1 conserved current to the scalar 4pt function is\n\\begin{eqnarray}\n&&\\langle \\phi_i(x_1) \\phi^{\\dagger}{}^{j}(x_2) \\phi_k(x_3) \\phi^{\\dagger}{}^{l}(x_4) \\rangle \\supset - \\frac{\\mathcal T_{ik}^{jl} }{C_J } \\, g_{d-1,1}(u,v) \\,\\mathbf K_4 \\,,\n\\nonumber\\\\\n&&\\mathcal T_{ik}^{j l} = (\\tau^{-1})_{AB}(T^A)_{i}^{j}(T^B)_{k}^{l}\\,.\n\\label{eq:lambdaJ}\n\\end{eqnarray}\nNotice that $\\tau^{AB}$ in \\reef{eq:CJ}, $(T^A)_{i}^{j}$ in \\reef{eq:JOO}, and $\\calT_{ik}^{jl}$ are examples of 2pt, 3pt, {and} 4pt $G$-invariant tensors as discussed in Sec.~\\ref{sec:global}.\n\nFor example, if $\\phi$ is a complex scalar charged under a $U(1)$ with charge 1, then $\\mathcal T_{ik}^{jl}=\\mathcal T=1$. In the case in which $\\phi_i$ is in the fundamental representation of $SU(N)$ or $SO(N)$ (where $\\bar{r} = r$) we have instead \n\\begin{align}\n\t\\mathcal T_{ik}^{jl} &= \\delta_i^{l} \\delta_k^{j} - \\frac{1}N \\delta_i^{j} \\delta_k^{l} \\qquad &(G = SU(N))\\,,\\\\\n\t\\mathcal T_{ijkl} &= \\frac12\\left(\\delta_{il} \\delta_{kj} - \\delta_{ik} \\delta_{jl}\\right) \\qquad &(G = SO(N)) \\,.\n\\end{align}\n\nA note about normalization is in order: once the generators $T^A$ are chosen, the Ward identity fixes the normalization of $J^A$ and determines $C_J$ according to our definition. Clearly, if we use a different generator normalization, then the value of $C_J$ would be modified accordingly. Moreover, once Eq.~(\\ref{eq:JOO}) is established, the Ward identity fixes the normalization of any other generator in any other representation. \n\nFinally, it should be mentioned that while free theories contain higher-spin conserved currents, there exist no-go theorems showing that interacting CFTs in $d\\ge3$ dimensions do not have conserved currents of spin $\\ell\\geqslant 3$, see \\textcite{Maldacena:2011jn} and \\textcite{Alba:2013yda}. 
This can be thought of as a CFT analogue of the Coleman-Mandula theorem for S-matrices.\n\n\n\\subsubsection{General results}\n\\label{sec:Z2-general}\n\nWe are not aware of any unitary 3d CFTs which do not possess any global symmetry.\\footnote{The Lee-Yang model has no global symmetry but is nonunitary, see Sec.~\\ref{sec:nonunitary}. {In any CFT with a global symmetry $G$, the singlet sector is closed under OPE. From the bootstrap point of view the singlet sector can be studied in isolation, results of Sec.~\\ref{sec:multicrit} being an example, and would appear as a perfectly consistent CFT with no global symmetry. Dealing only with local operators, we do not consider this construction as defining a complete CFT, as the singlet sector can in principle be extended back by including the other sectors (although it is not known how to decide in practice whether such an extension is possible by looking at the correlators of the singlet sector).}} Actually, most 3d CFTs have \\emph{continuous} global symmetries. Here we will start by considering the effect of having a discrete $\\bZ_2$,\\footnote{The bounds described in this section will also hold if the $\\bZ_2$ is taken to be a parity or time-reversal symmetry.} which may be a full symmetry as for the 3d Ising model, or a subgroup of a larger group.\\footnote{Another physically important discrete symmetry is cubic symmetry, see Sec.~\\ref{sec:O3} and footnote~\\ref{note:B3}.} \n\nIn the CFT context, a $\\bZ_2$ symmetry imposes selection rules on the possible operators appearing in different OPE channels. Let us take a {$\\bZ_2$-odd scalar} operator $\\sigma$ and consider the $\\sigma \\times \\sigma$ OPE. 
It can only contain $\\bZ_2$-even operators: \n\\beq\n\\sigma \\times \\sigma \\sim \\unit + \\lambda_{\\sigma\\s\\epsilon} \\epsilon + \\lambda_{\\sigma\\s T} T^{\\mu\\nu} + \\ldots .\n\\end{equation}\nHere, $\\unit$ is the identity operator, $\\epsilon$ is the leading $\\bZ_2$-even scalar, $T^{\\mu\\nu}$ is the stress-energy tensor, and so on. In particular, unlike in \\reef{eq:O0}, $\\sigma$ does not appear in the OPE.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig07-3dkinklarge}\n \\caption{\\label{fig:Z2-epsbound}\n(Color online) Upper bound on $\\Delta_\\epsilon$ as a function of $\\Delta_\\sigma$ in 3d CFTs~\\cite{ElShowk:2012ht}.}\n \\end{figure}\n\nIn this setup, we would like to ask for the maximal allowed value of $\\Delta_\\epsilon$. A numerical bootstrap analysis of the 4pt function $\\<\\sigma\\s\\sigma\\s\\>$ (via linear or semidefinite programming) produces an upper bound on $\\Delta_{\\epsilon}$ as a function of $\\Delta_{\\sigma}$, shown in Fig.~\\ref{fig:Z2-epsbound}.\\footnote{\\textcite{Nakayama:2016jhq} observed empirically that the bounds in Figs.~\\ref{fig:multicrit} and~\\ref{fig:Z2-epsbound} coincide. A priori one may have expected a stronger bound in Fig.~\\ref{fig:Z2-epsbound} due to the extra constraint of not allowing $\\sigma$ in the r.h.s.~of the OPE.} The point $\\{1\/2,1\\}$ corresponds to the theory of a free massless scalar, while the point $\\sim\\{0.518, 1.413\\}$, sitting near a discontinuity in the boundary, corresponds to the critical 3d Ising model, which we discuss further below. 
Other theories that live in the interior of this region are the critical $O(N)$ models (see Sec.~\\ref{sec:ON}), where we can identify $\\sigma$ with a component of the $O(N)$ fundamental $\\phi_i$ and $\\epsilon$ with a component of the $O(N)$ symmetric tensor $t_{ij}$, as well as the line of mean field theory CFTs with $\\Delta_{\\epsilon} = 2 \\Delta_{\\sigma}$ (see Sec.~\\ref{sec:explicit}).\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig08-deltae-eprime-above3}\n \\caption{\\label{fig:Z2-epsgap3bound}\n(Color online) Allowed region in the $\\{\\Delta_\\sigma,\\Delta_\\epsilon\\}$ plane under the assumption that $\\epsilon$ is the only relevant scalar \\cite{ElShowk:2012ht}.}\n \\end{figure}\n\nA particularly physically interesting class of $\\bZ_2$-symmetric CFTs are those with only one relevant $\\bZ_2$-even operator (i.e., they have $S=1$). If the microscopic realization of the theory preserves the $\\bZ_2$ symmetry, then this condition ensures that only one parameter must be tuned in order to reach the critical point. This allowed region in $\\{\\Delta_\\sigma,\\Delta_\\epsilon\\}$ space was also computed in \\textcite{ElShowk:2012ht} from the $\\<\\sigma\\s\\sigma\\s\\>$ correlator, assuming that all scalars aside from the contribution at $\\Delta_{\\epsilon}$ are irrelevant. 
This region is shown in Fig.~\\ref{fig:Z2-epsgap3bound}, with the assumption having the effect of carving into the allowed region from both the left and from the bottom.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig09-ctmin-old}\n \\caption{\\label{fig:Z2-cbound}\n (Color online) Lower bound on the central charge as a function of $\\Delta_\\sigma$ \\cite{ElShowk:2012ht}.}\n \\end{figure}\n\nAnother general result from this 4pt function is a lower bound on the central charge $C_T$ shown in Fig.~\\ref{fig:Z2-cbound}, obtained by computing an upper bound on the coefficient $\\lambda_{\\sigma\\s T} \\propto \\frac{\\Delta_{\\sigma}}{\\sqrt{C_T}}$ (see Sec.~\\ref{sec:ward} and Eq.~\\reef{eq:lambdaT}). As $\\Delta_{\\sigma} \\rightarrow 1\/2$, the lower bound on $C_T$ approaches the free scalar value, while near the critical 3d Ising dimension $\\Delta_{\\sigma} \\sim 0.518$, the lower bound on $C_T$ is seen to have a minimum. This particular bound was computed with the mild assumption $\\Delta_{\\epsilon} \\geqslant 1$, so it is applicable to any theory living in the allowed region seen in Fig.~\\ref{fig:Z2-epsbound}. 
\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig10-allowed-Tprime-vs-sigma-corrected1}\n \\caption{\\label{fig:Z2-spin2bound}\n (Color online) Upper bound on the dimension $\\Delta_{T'}$ of the first $\\bZ_2$-even spin-2 operator after the stress tensor, as a function of $\\Delta_\\sigma$ \\cite{ElShowk:2012ht}.}\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig11-spin4-bound}\n \\caption{\\label{fig:Z2-spin4-bound}\n (Color online) Upper bound on the dimension of the leading $\\bZ_2$-even spin-4 operator \\cite{ElShowk:2012ht}.}\n \\end{figure}\n\nBefore we move on to discussing what can be learned from systems of several 4pt functions, we would like to highlight that upper bounds on the leading unknown scaling dimension in other channels can also be computed and are often quite strong. For example, an upper bound on the leading unknown spin-2 dimension $\\Delta_{T'}$ (the first $\\bZ_2$-even spin-2 operator after the stress tensor) is shown in Fig.~\\ref{fig:Z2-spin2bound}, and an upper bound on the leading spin-4 dimension $\\Delta_C$ is shown in Fig.~\\ref{fig:Z2-spin4-bound}. The bound on $\\Delta_{T'}$ shows a sharp jump near the critical 3d Ising value, while no such transition is seen in the bound on $\\Delta_C$ (which is close to being saturated by MFT: $\\Delta_C = 2\\Delta_{\\sigma} + 4$). The jump in $\\Delta_{T'}$ shows that it is possible for the low-dimension spin-2 operator present in the spectrum for $\\Delta_{\\sigma} \\gtrsim 0.52$ to decouple at smaller values {of $\\Delta_{\\sigma}$}. 
We discuss operator decoupling phenomena further in Sec.~\\ref{sec:Z2-spectrum}.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig12-mixed-sigpgap}\n \\caption{\\label{fig:Z2-mixed-sigpgap}\n (Color online) Allowed region following from the analysis of three 4pt functions assuming $\\Delta_{\\sigma'}\\geqslant 3$ with no assumption on $\\Delta_{\\epsilon'}$ \\cite{Kos:2014bka}.}\n \\end{figure}\n\nNext, one can ask about the effect of adding constraints from other 4pt functions. So far the main system that has been studied in the literature is $\\{\\<\\sigma\\s\\sigma\\s\\>, \\<\\sigma\\s\\epsilon\\e\\>, \\<\\epsilon\\e\\epsilon\\e\\>\\}$, though other systems may also prove interesting. An advantage of including the correlator $\\<\\sigma\\s\\epsilon\\e\\>$ is that it allows one to probe the $\\bZ_2$-odd operators appearing in the OPE: \n\\beq\n\\sigma \\times \\epsilon \\sim \\lambda_{\\sigma\\epsilon\\sigma} \\sigma + \\lambda_{\\sigma\\epsilon\\sigma'} \\sigma' + \\ldots .\n\\end{equation}\nIn \\textcite{Kos:2014bka} it was found that with no assumptions this system leads to an allowed region identical to Fig.~\\ref{fig:Z2-epsbound}, while inputting the assumption of a single relevant $\\bZ_2$-odd operator (i.e., $\\Delta_{\\sigma'} \\geqslant 3$) leads to the allowed region shown in Fig.~\\ref{fig:Z2-mixed-sigpgap}. In this plot one can see a detached ``island\" containing the critical Ising model as well as a ``bulk\" region further to the right. This ``bulk\" region has so far not been systematically explored in the literature: it would be very interesting to understand what other CFTs lie inside it.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig13-mixed-differentgaps}\n \\caption{\\label{fig:Z2-mixed-differentgaps}\n (Color online) This plot assumes $\\Delta_{\\epsilon'} \\geqslant 3$ (light blue), $\\Delta_{\\sigma'} \\geqslant 3$ (medium blue), or both gaps simultaneously (dark blue).
Figure from \\cite{Kos:2014bka}.}\n \\end{figure}\n\nIn Fig.~\\ref{fig:Z2-mixed-differentgaps} we also show the difference between assuming $\\Delta_{\\epsilon'} \\geqslant 3$, $\\Delta_{\\sigma'} \\geqslant 3$, and both assumptions simultaneously. One can see that the assumption of a gap in the $\\bZ_2$-odd spectrum is primarily responsible for creating the detached region. In the next section we describe the connection to the critical Ising model in more detail, as well as the techniques and additional inputs that can be used to make this detached island as small as possible.\n\n\n\\subsubsection{Critical Ising model}\n\\label{sec:Z2-Ising}\n\nPerhaps the most well-known 3d CFT is the critical 3d Ising model. The study of this model has a long history~\\cite{DombGreenVol3}, in part because it describes critical behavior in uniaxial magnets, liquid-vapor transitions, binary fluid mixtures, the quark-gluon plasma, and more \\cite{Pelissetto:2000ek}. While these applications are predominantly for systems in three spatial dimensions at finite temperature, described by a 3d Euclidean CFT, the critical Ising model can also be realized as a Lorentzian (2+1)d quantum critical point~\\cite{Fradkin-Susskind,Henkel-Ising3d}. Here we work in the Euclidean signature; the Lorentzian version is obtainable by Wick rotation and has the same set of CFT data.\n\nIn its original formulation as a model of ferromagnetism, the 3d Ising model is described using a set of spins $s_i=\\pm 1$ on a cubic lattice in $\\bR^3$ with nearest neighbor interactions, with partition function\n\\beq\n\\label{eq:Ising}\nZ = \\sum_{\\{s_i\\}} \\exp\\Bigl(- J \\sum_{\\<ij\\>} s_i s_j \\Bigr).\n\\end{equation}\nAt a critical value of the coupling $J$, the model becomes a nontrivial CFT at long distances. Notice that the lattice model has a manifest $\\bZ_2$ symmetry under which $s_i \\rightarrow -s_i$.
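As a concrete illustration (ours, not part of the works under review), the lattice model above is straightforward to sample with a Metropolis sketch. We use the usual ferromagnetic weight $\exp(K \sum_{\langle ij\rangle} s_i s_j)$; the value $K_c \approx 0.2216544$ is the known critical coupling of the cubic-lattice model, and the lattice size and sweep count below are arbitrary toy choices.

```python
import math
import random

def metropolis_ising3d(L=4, K=0.2216544, sweeps=200, seed=0):
    """Toy Metropolis sampler for the 3d Ising model on an L^3 periodic cubic
    lattice with weight exp(K * sum_<ij> s_i s_j). Returns the magnetization
    per site after the given number of sweeps."""
    rng = random.Random(seed)
    s = [[[1] * L for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L ** 3):
            x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(L)
            # Sum of the six nearest-neighbor spins (periodic boundaries).
            nn = (s[(x + 1) % L][y][z] + s[(x - 1) % L][y][z]
                  + s[x][(y + 1) % L][z] + s[x][(y - 1) % L][z]
                  + s[x][y][(z + 1) % L] + s[x][y][(z - 1) % L])
            # Flipping s[x][y][z] changes the log-weight by -2*K*s*nn.
            d_logw = -2.0 * K * s[x][y][z] * nn
            if d_logw >= 0 or rng.random() < math.exp(d_logw):
                s[x][y][z] *= -1
    m = sum(s[x][y][z] for x in range(L) for y in range(L) for z in range(L))
    return m / L ** 3
```

Scanning $K$ through $K_c$ while measuring $|m|$ on lattices of increasing size is the standard way such simulations locate the transition numerically.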
This symmetry is inherited by the CFT, which contains local operators that are either even or odd under its $\\bZ_2$ global symmetry.\n\nAnother microscopic realization is in terms of a continuous scalar field theory in 3 dimensions, with action\n\\beq\nS = \\int d^3 x \\left( \\frac12 (\\partial \\sigma)^2 + \\frac12 m^2 \\sigma^2 + \\frac{1}{4!} \\lambda \\sigma^4 \\right),\n\\end{equation}\nwhich also has a $\\bZ_2$ symmetry under which $\\sigma \\rightarrow -\\sigma$. Because both $m^2$ and $\\lambda$ describe relevant couplings, this theory is described by a free scalar at short distances but has nontrivial behavior at long distances. At a critical value of the dimensionless ratio $m^2\/\\lambda^2$ the long-distance behavior is described by a CFT, which is the same as for the above lattice model.\n\nFrom the conformal bootstrap perspective, the Ising CFT has a $\\bZ_2$ global symmetry, one relevant $\\bZ_2$-odd scalar operator $\\sigma$, and one relevant $\\bZ_2$-even scalar operator $\\epsilon$. 
This counting is consistent with experimental realizations: $\\bZ_2$-preserving microscopic realizations require one tuning (e.g., tuning the temperature in uniaxial magnets), while $\\bZ_2$-breaking microscopic realizations require two tunings (e.g., tuning both temperature and pressure in liquid-vapor transitions).\\footnote{The existence of $\\bZ_2$-breaking liquid-vapor experimental realizations, allowing one to get $\\bZ_2$ as an emergent symmetry and predict the total number of relevant scalars, is a {nice feature of} the Ising model which does not have analogues for the $O(N)$ models.} Note that the assumption that the only relevant scaling dimensions are $\\Delta_{\\sigma}$ and $\\Delta_{\\epsilon}$ is the same assumption that went into producing the dark blue detached region of Fig.~\\ref{fig:Z2-mixed-differentgaps}.\n\n\\textcite{Kos:2016ysd} pursued a numerical analysis of the mixed-correlator bootstrap system containing $\\sigma$ and $\\epsilon$ to high derivative order. In addition, they studied the impact of scanning over different possible values of the ratio $\\lambda_{\\epsilon\\e\\epsilon}\/\\lambda_{\\sigma\\s\\epsilon}$.
This scan effectively inputs the information that there is a single operator in the OPE occurring at the scaling dimension $\\Delta_{\\epsilon}$, whereas the plot of Fig.~\\ref{fig:Z2-mixed-differentgaps} allowed for the possibility of multiple degenerate operator contributions at the dimension $\\Delta_{\\epsilon}$.\\footnote{More precisely, the scan inputs that the outer product of OPE coefficients $\\left(\\lambda_{\\sigma\\s\\epsilon} \\quad \\lambda_{\\epsilon\\e\\epsilon}\\right) \\otimes \\left(\\lambda_{\\sigma\\s\\epsilon} \\quad \\lambda_{\\epsilon\\e\\epsilon}\\right)$ appearing in Eq.~(\\ref{eq:crossingequationwithv}) at dimension $\\Delta_{\\epsilon}$ is a rank 1 matrix, rather than the more generic rank 2 possibility which occurs if there are degenerate contributions.} This led to the three-dimensional allowed region shown in Fig.~\\ref{fig:Z2-3dIsingIsland} and its projection to the $\\{\\Delta_{\\sigma},\\Delta_{\\epsilon}\\}$ plane shown in Fig.~\\ref{fig:Z2-IsingIsland}. In addition, for each point in this region the magnitudes of the leading OPE coefficients were also bounded, with the result shown in Fig.~\\ref{fig:Z2-OPEBound}. These world-record numerical determinations are summarized below in Table~\\ref{tab:lowestdim}.\n\nFinally, let us mention that recent studies of the conformal bootstrap for stress-tensor 4pt functions have also made contact with the 3d Ising model. In particular, after inputting known values of the leading parity-even spectrum, \\textcite{Dymarsky:2017yzx} gave a new bound on the leading parity-odd $\\mathbb{Z}_2$-even scalar, $\\Delta_{\\text{odd}} < 11.2$, and constrained the independent coefficient in the stress-tensor 3pt function (parametrized by the variable $\\theta$) to be in the range $0.01 < \\theta < 0.05$ if $\\Delta_{\\text{odd}} > 3$ and in a tighter range $0.01 < \\theta < 0.018$--$0.019$ if $\\Delta_{\\text{odd}}$ is close to saturating its bound.
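The rank condition in the footnote above is easy to visualize numerically. In this toy sketch (our illustration; the OPE-coefficient values are made up) a single operator at $\Delta_\epsilon$ contributes a rank-1 matrix of coefficient products, while two degenerate operators generically give rank 2:

```python
import numpy as np

# One operator at Delta_eps: its OPE-coefficient vector (lambda_sse, lambda_eee)
# enters crossing only through the outer product with itself -> rank 1.
v = np.array([1.05, 1.53])            # made-up illustrative values
single = np.outer(v, v)

# Two degenerate operators at the same dimension: contributions add, and a sum
# of two outer products of independent vectors is rank 2.
w = np.array([0.30, -0.90])
degenerate = np.outer(v, v) + np.outer(w, w)

print(np.linalg.matrix_rank(single))      # 1
print(np.linalg.matrix_rank(degenerate))  # 2
```

Scanning over the ratio of the two entries of $v$ is what lets the bootstrap impose the rank-1 (single-operator) case.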
We will discuss these constraints in more detail in Sec.~\\ref{sec:JandT}.\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig14-3dIsingIsland}\n \\caption{\\label{fig:Z2-3dIsingIsland}\n (Color online) Allowed region in the $\\{\\Delta_\\sigma,\\Delta_\\epsilon,\\lambda_{\\epsilon\\e\\epsilon}\/\\lambda_{\\sigma\\s\\epsilon}\\}$ space obtained in \\textcite{Kos:2016ysd}. \n }\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig15-IsingIsland}\n \\caption{\\label{fig:Z2-IsingIsland}\n (Color online) Projection of the 3d region in Fig.~\\ref{fig:Z2-3dIsingIsland} on the $\\{\\Delta_\\sigma,\\Delta_\\epsilon\\}$ plane and its comparison with a Monte Carlo prediction for the same quantities \\cite{Kos:2016ysd}.\n }\n \\end{figure}\n\n \\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig16-IsingOPEBound}\n \\caption{\\label{fig:Z2-OPEBound}\n (Color online) Variation of $\\lambda_{\\epsilon\\e\\epsilon}$ and $\\lambda_{\\sigma\\s\\epsilon}$ within the allowed region in Fig.~\\ref{fig:Z2-3dIsingIsland} \\cite{Kos:2016ysd}.\n }\n \\end{figure}\n\n\\subsubsection{Spectrum extraction and rearrangement}\n\\label{sec:Z2-spectrum}\n\nWe have seen in the previous section the remarkable precision with which the leading scaling dimensions of the critical 3d Ising model can be determined. This raises the immediate question of how well we can extract other operator dimensions and OPE coefficients in the spectrum using bootstrap methods (specifically the strategies described in Sec.~\\ref{sec:spectrum-extraction}).\n\nEven prior to the mixed-correlator studies mentioned above, \\textcite{El-Showk:2014dwa} extracted the spectrum using the primal simplex method strategy, from a solution to crossing for the $\\<\\sigma\\s\\sigma\\s\\>$ correlator which minimizes the central charge $C_T$.
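Since this subsection leans on linear-programming searches for extremal functionals, a deliberately tiny toy may help fix ideas (entirely our illustration: the two-component vectors below are made up and are not actual conformal blocks). A putative spectrum is excluded when a linear functional $\alpha$ exists that is normalized on the identity vector and nonnegative on every operator vector:

```python
import numpy as np
from scipy.optimize import linprog

def excluded(F_identity, F_ops):
    """Return True if a linear functional alpha exists with
    alpha . F_identity = 1 and alpha . F >= 0 for every F in F_ops,
    which rules out the putative spectrum (toy bootstrap exclusion step)."""
    F_ops = np.asarray(F_ops, dtype=float)
    n = len(F_identity)
    res = linprog(c=np.zeros(n),                        # pure feasibility problem
                  A_ub=-F_ops, b_ub=np.zeros(len(F_ops)),  # alpha.F >= 0
                  A_eq=[F_identity], b_eq=[1.0],           # alpha.F_id = 1
                  bounds=[(None, None)] * n)
    return res.success

print(excluded([1.0, 0.0], [[0.5, 1.0], [0.7, -0.2]]))   # True: functional found
print(excluded([1.0, 0.0], [[-1.0, 0.3], [0.5, -1.0]]))  # False: constraints conflict
```

In real applications the vectors are high-order derivative truncations of conformal blocks, and semidefinite rather than linear programming is used for the mixed-correlator systems.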
For example, Fig.~\\ref{fig:Z2-ct-spin0} shows the scalar operators in the extracted spectrum as a function of $\\Delta_\\sigma$ near the 3d Ising model. A fascinating feature of these plots is the bifurcation of operators that occurs at the Ising value of $\\Delta_\\sigma$, which can be interpreted as a decoupling of one of the operators in the spectrum. This ``spectrum rearrangement\" phenomenon has yet to be fully understood, but speculatively it could be connected to the nonperturbative equations of motion (i.e., the 3d analogue of the relation $\\sigma \\partial^2 \\sigma \\sim \\sigma^4$ at the Wilson-Fisher fixed point) or a higher-dimensional extension of the null state conditions in the 2d Ising CFT (see also Sec.~\\ref{sec:why}).\\footnote{The 2d analogue of Fig.~\\ref{fig:Z2-epsbound} also displays a sharp kink exactly at the location of the 2d Ising model \\cite{Rychkov:2009ij}, at which the corresponding extremal solution displays a decoupling of states expected from the null state conditions \\cite{El-Showk:2014dwa}. The upper bound to the right of the kink can be interpreted as a one-parameter family of unitary 4pt functions which for a discrete sequence of $\\Delta_\\sigma$'s reduce to the 4pt function of the $\\phi_{1,2}$ operator in the higher unitary minimal models, see \\textcite{Liendo:2012hy} and \\textcite{Behan:2017rca}. While these higher minimal models exhibit further null state conditions, they are not visible in this 4pt function, and hence do not lead to kinks in this bound. \n\t\n\t}\n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig17-ct-spin0}\n \\caption{\\label{fig:Z2-ct-spin0}\n (Color online) The spectrum of $\\bZ_2$-even scalar operators appearing in the solution to crossing minimizing $C_T$ near $\\Delta_\\sigma$ corresponding to the 3d Ising model \\cite{El-Showk:2014dwa}. Line 1 corresponds to the $\\epsilon$ operator and shows little variation on the scale of this plot. 
All other lines exhibit the spectrum rearrangement phenomenon.\n }\n \\end{figure}\n\nOn the other hand, spectrum extraction using the extremal functional method was applied to the critical 3d Ising model by \\textcite{Komargodski:2016auf,Simmons-Duffin:2016wlq}. In the latter work, for a set of 20 trial points distributed within the island of Fig.~\\ref{fig:Z2-3dIsingIsland}, $C_T$-minimization was performed and the zeros of the extremal functional were found. While some zeros jump significantly when moving from point to point, many of them are found to be present in all families with tiny variations. About a hundred such ``stable zeros\" were identified, and are believed to represent operators which truly exist in the 3d Ising CFT, providing a remarkable view of the spectrum of this theory. The subset of stable operators with dimensions $\\Delta \\leqslant 8$, and their OPE coefficients, are shown in Table~\\ref{tab:lowestdim}. \n\nThis approach, while not fully rigorous, is intuitively justified as a means to extend the reach of rigorous analysis which produced the island in Fig.~\\ref{fig:Z2-3dIsingIsland}. The errors on stable operator dimensions and OPE coefficients are assigned as standard deviations in the set of trial points. Although these errors are not rigorous, as opposed to rigorous errors implied by Figs.~\\ref{fig:Z2-3dIsingIsland}, \\ref{fig:Z2-IsingIsland}, and \\ref{fig:Z2-OPEBound}, we believe that they represent realistic estimates. 
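The error-assignment procedure just described amounts to simple statistics over the trial points; schematically (with made-up numbers standing in for one stable zero tracked across a handful of points):

```python
import numpy as np

# Hypothetical dimensions of one "stable zero" extracted at several trial
# points inside the island (made-up values; the actual study used 20 points).
deltas = np.array([3.8297, 3.8295, 3.8299, 3.8301, 3.8293, 3.8296])
estimate, error = deltas.mean(), deltas.std(ddof=1)
print(f"Delta = {estimate:.4f} +/- {error:.4f}")
```

A zero whose spread across trial points is tiny compared to the spacing between zeros is classified as stable; large jumps mark spurious zeros.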
In future studies the error estimates can be further checked by enlarging the set of trial points and by extremizing multiple quantities as opposed to just $C_T$.\n\nResults of this approach for the leading towers of low-twist operators (of increasing spin) have also been tested against the analytical bootstrap computations in the lightcone limit\\footnote{{See~\\textcite{Fitzpatrick:2012yx}, \\textcite{Komargodski:2012ek}, \\textcite{Alday:2015ewa}, and \\textcite{Simmons-Duffin:2016wlq}.}} which yield analytical expressions for the large-spin asymptotics. In Fig.~\\ref{fig:tauSigSig0}, the data points extracted from the extremal functional approach show the leading twist ($\\tau=\\Delta-\\ell$) trajectory in the $\\hbox{$\\bb Z$}_2$-even sector as a function of $\\bar{h} = \\ell+\\tau\/2$, while the curve shows the analytical computation, displaying excellent agreement with the data even down to small spins. Similar good agreement was also found with the extracted OPE coefficients and subleading trajectories, as well as in the $\\hbox{$\\bb Z$}_2$-odd sector. \n\nWe also report here the prediction for the central charge from the above $C_T$-minimization over the 20 points in the island \\cite{Simmons-Duffin-private1}:\n\\beq\n\\label{eq:central-charge}\nC^{\\rm Ising}_T\/C_T^{\\text{free boson}} =0.9465389(12)\\,,\n\\end{equation}\nimproving the previous $C_T$-minimization determination by \\textcite{El-Showk:2014dwa}.\\footnote{One can also extract $C_T$ from Table \\ref{tab:lowestdim} using $\\lambda_{\\sigma\\sigma T}\\propto {\\Delta_\\sigma}\/{\\sqrt{C_T}}$. 
While consistent with \\reef{eq:central-charge}, this would have a larger error, because the errors on $\\lambda_{\\sigma\\sigma T}$ and $\\Delta_\\sigma$ are correlated.}\n\n\\begin{table}\n\\begin{center}\n{\\small\n\\begin{tabular}{|c|c|l|l|l|l|}\n\\hline\n${\\cal O}$ & $\\hbox{$\\bb Z$}_2$ & $\\ell$ & $\\Delta$ & $f_{\\sigma\\s{\\cal O}}$ & $f_{\\epsilon\\e{\\cal O}}$ \\\\\n\\hline\n$\\epsilon$ & $+$ & 0 & $1.412625{\\bf\\boldsymbol(10\\boldsymbol)}$ & $1.0518537{\\bf\\boldsymbol(41\\boldsymbol)}$ & $1.532435{\\bf\\boldsymbol(19\\boldsymbol)}$ \\\\\n$\\epsilon'$ & $+$ & 0 & $3.82968(23)$ & $0.053012(55)$ & $1.5360(16)$ \\\\\n& $+$ & 0 & $6.8956(43)$ & $0.0007338(31)$ & $0.1279(17)$ \\\\\n& $+$ & 0 & $7.2535(51)$ & $0.000162(12)$ & $0.1874(31)$ \\\\\n$T_{\\mu\\nu}$ & $+$ & 2 & $3$ & $0.32613776(45)$ & $0.8891471(40)$ \\\\\n$T'_{\\mu\\nu}$ & $+$ & 2 & $5.50915(44)$ & $0.0105745(42)$ & $0.69023(49)$ \\\\\n& $+$ & 2 & $7.0758(58)$ & $0.0004773(62)$ & $0.21882(73)$ \\\\\n$C_{\\mu\\nu\\rho\\sigma}$ & $+$ & 4 & $5.022665(28)$ & $0.069076(43)$ & $0.24792(20)$ \\\\\n& $+$ & 4 & $6.42065(64)$ & $0.0019552(12)$ & $-0.110247(54)$ \\\\\n& $+$ & 4 & $7.38568(28)$ & $0.00237745(44)$ & $0.22975(10)$ \\\\\n& $+$ & 6 & $7.028488(16)$ & $0.0157416(41)$ & $0.066136(36)$ \\\\\n\\hline\n\\hline\n${\\cal O}$ & $\\hbox{$\\bb Z$}_2$ & $\\ell$ & $\\Delta$ & $f_{\\sigma\\epsilon{\\cal O}}$ &-\\\\\n\\hline\n$\\sigma$ & $-$ & 0 & $0.5181489{\\bf\\boldsymbol(10\\boldsymbol)}$ & $1.0518537{\\bf\\boldsymbol(41\\boldsymbol)}$ &\\\\\n$\\sigma'$ & $-$ & 0 & $5.2906(11)$ & $0.057235(20)$ &\\\\\n& $-$ & 2 & $4.180305(18)$ & $0.38915941(81)$ &\\\\\n& $-$ & 2 & $6.9873(53)$ & $0.017413(73)$ &\\\\\n& $-$ & 3 & $4.63804(88)$ & $0.1385(34)$ &\\\\\n& $-$ & 4 & $6.112674(19)$ & $0.1077052(16)$ &\\\\\n& $-$ & 5 & $6.709778(27)$ & $0.04191549(88)$ &\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{\nStable operators in the critical 3d Ising model with dimensions $\\Delta\\leqslant 8$
\\cite{Simmons-Duffin:2016wlq}. Conventional names are shown in the leftmost column when available. Errors in bold are rigorous. All other errors are non-rigorous but, in our opinion, realistic. See Eq.~\\reef{eq:central-charge} for the central charge prediction from the same study.\nBecause we have chosen a different conformal block normalization convention, the OPE coefficients are related to our convention by $\\lambda_{ij{\\cal O}} = 2^{\\ell\/2} f_{ij{\\cal O}}$ (see Table \\ref{tab:cb_norm}). \n}\n\\label{tab:lowestdim}\n\\end{table}\n\n\\begin{figure}[t!]\n \\centering\n \\scalebox{0.8}{\n\\includegraphics[width=\\figwidth]{fig18-tauSigSig0}\n}\n \\caption{\\label{fig:tauSigSig0}\n (Color online) Comparison of the extremal functional spectrum with the analytic bootstrap \\cite{Simmons-Duffin:2016wlq}; see the text.\n }\n \\end{figure}\n\n\n\\subsubsection{Why kink? Why island?}\n\\label{sec:why}\n\nOne may be wondering why the 3d Ising model happens to live at a kink in Fig.~\\ref{fig:Z2-epsbound}. Plausibly, this has to do with the minimality of the spectrum of exchanged operators required to satisfy the crossing relation. In the interior of the allowed region in Fig.~\\ref{fig:Z2-epsbound}, the solution to crossing is not unique. When working numerically, a typical solution contains as many operators as the number of derivatives at $z=\\bar z=1\/2$ one is keeping in \\reef{eq:crossingvec}, \\reef{eq:F}. On the other hand, when one approaches the boundary of the allowed region in Fig.~\\ref{fig:Z2-epsbound}, the nature of the solution changes in that the operators first organize into pairs with nearby dimensions, and the pairs then merge into single operators at the boundary \\cite{El-Showk:2014dwa}. 
Thus the extremal solutions to crossing are quite economical, containing roughly half as many operators as the interior solutions.\\footnote{It is a bit more than half because doubling never occurs for operators which remain at the unitarity bound, such as the stress tensor {(if present in the extremal solution)}, and for operators which saturate the gaps that one is imposing. In general, whether doubling occurs in the bulk of the spectrum depends on how many second-order zeros the extremal functional has. If there are too many zeros, then for some of them, called ``singles\" in \\textcite{El-Showk:2016mxr}, doubling will not occur. See also Sec.~\\ref{sec:flow} for the flow method which uses such considerations to move along the boundary of the allowed region.}\n\nFurther reduction of the spectrum occurs at the kink. When one approaches the kink moving along the boundary, squared OPE coefficients of certain operators tend to zero. Further analytic continuation of the solution beyond the kink would be inconsistent with unitarity. Thus two different solution branches meet at the kink \\cite{El-Showk:2014dwa}, and the spectrum exhibits the rearrangement phenomena mentioned in Sec.~\\ref{sec:Z2-spectrum}.\n\nTo summarize, the fact that the 3d Ising model lives at a kink suggests that it is a CFT with a particularly minimal spectrum of operators. If this idea can be made precise, perhaps it can pave the way to an exact solution.\n\nLeaving the kink aside, let us discuss the island. It is perhaps not surprising that imposing crossing for several 4pt functions shrinks the allowed region compared to what was allowed when considering just one 4pt function.
It is however altogether unexpected and remarkable that considering only three 4pt functions, plus a {physically motivated} and robust\\footnote{Islands can be also produced for the Ising and other CFTs using a single 4pt function and reasonable assumptions about gaps in the spin-1 and spin-2 operator spectrum \\cite{Li:2017kck}. The robustness of these results (i.e.~their independence of the numerical values of the assumed gaps in a certain range) needs further investigation.} assumption of only two relevant operators, allows one to produce the tiny island shown in Figs.~\\ref{fig:Z2-mixed-sigpgap} and \\ref{fig:Z2-IsingIsland}.\n\nIt is not currently understood why this happens. Would the island continue to shrink indefinitely with increasing the number of included derivatives? Or would it stabilize, requiring one to add further correlators to fully fix the CFT? More generally, is it sufficient to include only 4pt functions of relevant operators or are external irrelevant operators also needed to have a unique solution? These are fascinating questions for the future.\n\nWe will see many kinks and islands in the subsequent sections of this review, about which similar considerations can be made.\n\n\\subsubsection{Nongaussianity}\n\n\\label{sec:nongauss}\nSince the leading spectrum and OPE coefficients of the critical 3d Ising model are now known to a high degree of precision, it is possible to reconstruct the full 4pt function $\\<\\sigma\\s\\sigma\\s\\>$ over a wide range of cross ratios. One can then probe the question of how much this 4pt function deviates from the ``gaussian\", i.e.~fully disconnected, form $\\<\\sigma\\s\\sigma\\s\\> = \\<\\sigma\\s\\>\\<\\sigma\\s\\> + \\text{perms}$. This question is also motivated by the fact that the Ising model contains higher-spin operators with dimensions that deviate by a small amount from those of higher-spin currents, see Fig.~\\ref{fig:tauSigSig0}. 
The first two of these operators are the $\\bZ_2$-even spin-4 and spin-6 operators in Table \\ref{tab:lowestdim}, of dimensions close to 5 and 7, respectively.\n\n\\textcite{Rychkov:2016mrc} probed this question quantitatively using bootstrap data to reconstruct the ratio $Q(z,\\bar{z}) = \\frac{g(z,\\bar{z})}{1+(z\\bar{z})^{\\Delta_\\sigma} + \\left(\\frac{z\\bar{z}}{(1-z)(1-\\bar{z})}\\right)^{\\Delta_\\sigma}}$ in the critical 3d Ising model, where the denominator corresponds to the ``gaussian\" expectation. A plot of this deviation over a fundamental domain in the complex $z$ plane is shown in Fig.~\\ref{fig:3d}. They found, e.g., that $Q < 0.75$ over a wide range of cross-ratio space and that it attains a minimum value of $Q_{\\min} \\approx 0.683$. Thus, any attempt to explain the small anomalous dimensions of higher-spin operators must account for this significant nongaussianity.\n\n\\begin{figure}[t!]\n \\centering\n\\includegraphics[width=\\figwidth]{fig19-nongaussianity.pdf}\n \\caption{\\label{fig:3d}\n (Color online) The nongaussianity ratio $Q$ in the critical 3d Ising model \\cite{Rychkov:2016mrc}.\n }\n \\end{figure}\n\n\\subsubsection{Boundary and defect bootstrap, nontrivial geometries, off-criticality}\n\n\\label{sec:bdry}\n\nIt is also interesting to study the physics of defects in the 3d Ising model.
These include both co-dimension one defects (e.g., flat 2d boundaries or interfaces) and co-dimension two defects (i.e., 1d line defects).\\footnote{\\label{note:defects}See~\\textcite{Gadde:2016fbj}, \\textcite{Billo:2016cpy}, \\textcite{Lauria:2017wav}, \\textcite{Fukuda:2017cup}, \\textcite{Rastelli:2017ecj}, \\textcite{Herzog:2017xha}, \\textcite{Herzog:2017kkj}, and \\textcite{Lemos:2017vnx} for some recent general discussions of defects in CFT.} Here we would like to highlight for the reader some recent numerical bootstrap studies of such defects.\n\nBootstrap constraints in the 3d Ising model in the presence of a flat 2d boundary were first studied using linear programming techniques in \\textcite{Liendo:2012hy}, where a number of rigorous bounds were placed on the scaling dimensions and OPE coefficients of boundary operators for different choices of boundary conditions, corresponding to the ``special\" and ``extraordinary\" transitions, assuming positivity of the bulk channel expansion coefficients. Estimates of the leading boundary data using the truncation method were also computed by \\textcite{Gliozzi:2015qsa} and \\textcite{Gliozzi:2016cmg}, where precise estimates applicable to the boundary condition of the ``ordinary\" transition could also be made. \n\nStudies have also been performed of the $\\mathbb{Z}_2$ twist line defect in the 3d Ising model, constructed on the lattice by reversing the Ising coupling on a semi-infinite half-plane. The 1d boundary of this half-plane then yields the twist line defect, which can also be defined in terms of its simple monodromy properties. Local operators living on this defect were studied using both Monte Carlo techniques~\\cite{Billo:2013jda} and numerical bootstrap (linear programming) techniques~\\cite{Gaiotto:2013nva}, with excellent agreement. 
\n \nA related line of inquiry is to study CFTs such as the critical 3d Ising model on nontrivial geometries, {the nontrivial case being manifolds not globally conformally equivalent to infinite flat space.}\\footnote{CFT correlation functions on manifolds conformally equivalent to flat space, such as the sphere $S^d$ or the ``cylinder\" $S^{d-1}\\times \\bR$, can be obtained from the flat space correlators via a Weyl transformation.} This is motivated in part by the search for a higher-dimensional analogue of modular invariance. One concrete realization has been to study the 3d Ising model on real projective space~\\cite{Nakayama:2016cim}. In this case the unknown coefficients in one-point functions of scalar primary operators $\\<\\mathcal{O}\\> \\propto A_{\\mathcal{O}}$ enter into a variant of the bootstrap equations called the cross-cap bootstrap equations. Numerical truncation studies of the cross-cap bootstrap equations in this work have yielded new nontrivial predictions, e.g. $A_{\\epsilon} = 0.667(2)$ and $A_{\\epsilon'} = 0.896(5)$ in the critical 3d Ising model on real projective space. Another interesting geometry is $S^1\\times \\bR^{d-1}$, which corresponds to putting the CFT at finite temperature. This was studied for the 3d Ising model and other higher-dimensional CFTs in \\textcite{Iliesiu:2018fao}. \\textcite{Gobeil:2018fzy} also discussed a generalization of the conformal block concept relevant for this geometry.\n\nLet us finally mention an interesting study \\cite{Caselle:2016mww} which combined the knowledge of the 3d Ising model CFT data acquired by the bootstrap with conformal perturbation theory. 
They achieved a remarkable agreement with the experimental data describing the 2pt function $\\<\\sigma\\sigma\\>$ off criticality, i.e.~at temperatures slightly different from the critical temperature, which corresponds to perturbing the CFT by a $\\int d^3x\\, \\epsilon(x)$ perturbation.\n\n\\section{Introduction} \\label{sec:intro}\nIn recent years, the development and deployment of Artificial Intelligence (AI) has grown dramatically and the European Commission (EC) expects that by 2025 the economic impact will reach between 6.5 and 12 trillion annually \\cite{Factsheet_EU}.\nTo shape the future of the technological innovation produced by AI, the EC took an active role in 2018 by introducing its own strategy on AI, in which it proposed to work with all Member States on a coordinated plan to foster synergies across the European Union (EU) and identify common priorities to address societal challenges with AI solutions, while taking into account the ethical implications. The European initiative is based on three pillars \\cite{AIStrategy}:\n\n\\begin{enumerate}[nolistsep]\n \\item Boost the EU's technological and industrial capacity and AI uptake across the economy by private and public sectors. This implies strengthening research and development investments in AI in the EU.\n \\item Prepare for socio-economic changes brought by the transformation of AI in the labour market. Member States will need to prepare society to develop basic digital skills; re-skill or up-skill workers affected by automation, robotics and AI; and train more AI specialists, aiming for academic excellence.\n \\item Ensure an appropriate ethical and legal framework to promote trustworthy and accountable AI made in Europe.\n\\end{enumerate}\n\nThis approach aims to make the most of the opportunities offered by AI to develop solutions for social good, i.e.
technology that has a positive impact on society and the environment, based on European values and respecting fundamental rights. As a result of this coordinated action, the EC invited all Member States to develop their national strategies, including the expected investments and the implementation measures \\cite{Coordinated_Plan}.\nThis effort builds upon an ambitious goal set out in the European AI Strategy: ``to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI, promoting a human-centric approach in the global context'' \\cite{Coordinated_Plan}. In this regard, the EC has taken important steps. First, it tasked a group of independent experts to set up an ethical AI framework (the so-called Trustworthy AI guidelines \\cite{Trustworthy_EU}) and, more recently, put forward a proposal to regulate high-risk AI applications \\cite{Whitepaper_EU}.\n\nThe aim of this paper is to explore how Member States are approaching AI through the lens of their National Strategies. In particular, we focus on their investment plans and how these commit to the human-centric approach proposed by the Coordinated Plan on AI. Our guiding questions are: What do Member States plan to do for a responsible development of AI? Do they translate the ethical and social concerns into actual prevention measures? What are their plans to make AI developments more democratic and open to society? In other words, is AI made in Europe truly fostering social good? Though the presented work is a limited investigation, which is part of a wider project and has no pretension of exhaustiveness, we take it as an opportunity to open a discussion on the Ethical, Legal, Social, Economic and Cultural (ELSEC) issues of AI within Europe.\n\nIn \\S\\ref{sec:soa} we introduce similar studies and the main differences from our work. In \\S\\ref{sec:methods} we describe our method along with the data that has been used.
We present the main findings of our qualitative analysis in \\S\\ref{sec:results} and discuss them in \\S\\ref{sec:disc}. We conclude with a brief summary in \\S\\ref{sec:concl}. \n\n\\section{State of the Art}\\label{sec:soa}\n\nThis work connects to a large literature dealing with a variety of guidelines and frameworks, which have rapidly sprung up worldwide to promote a responsible and sustainable development of AI. Several studies have carried out comparative analyses to identify similarities and divergences among these initiatives.\nExamples include studies mapping keywords of different guidelines \\cite{zeng2018linking} and broad-scope reviews \\cite{Human_right_standford,jobin2019global,hagendorff2020ethics}.\nMost of these studies consider a heterogeneous set of documents released by a variety of entities, including private companies, non-profit organisations and public institutions. Also, their main purpose is to study common ethical topics and their coverage across principles and guidelines issued in the last few years.\n\nCompared to these works, our study presents some points of contact, but also important distinctions. On the one hand, it shares the attention towards the ethical development of AI. On the other hand, it focuses on a more homogeneous set of documents (i.e. National Strategies) which are all part of a challenging European strategy. So, rather than (dis)agreements on AI ethical principles, our focus is more on how these principles translate into plans and measures taken by the European countries.\n\nThe present analysis is part of a wider research project comparing National Strategies on distinct topics, such as re-skilling and education plans, and aims at supporting the ongoing debate on ethical AI principles.
In particular, we agree with \\cite{fjeld2020principled} that principles are better understood in their cultural, linguistic, geographic, and organizational context, and that investigating Europe's AI strategy from the perspective of different Member States adds value to the study of the European AI landscape.\n\n\\section{Method} \\label{sec:methods}\n\\subsection{Document selection}\nWe conducted a qualitative analysis of the investment plans stated in the European AI National Strategies.\nIn order to generate our data-set, we evaluated all the EU National Strategies currently available. According to AI Watch \\cite{van2020ai}, 23 nations out of 27 have presented their strategy so far.\nHowever, we reduced the selection to 15 nations based on the following requirements: \n\n\\begin{enumerate}[nolistsep]\n \\item The national strategy needs to be official. Neither drafts nor action plans were considered.\n \\item Only AI strategies from Member States of the EU were included, to ensure a common commitment towards the objectives of the Commission. \n \n \\item The documents need to be available in English, to avoid language misinterpretations. \n\\end{enumerate}\nWe monitored the release of new strategies from May 2020 to August 2020 to keep the data-set updated. This process generated the following list of countries: Austria\\cite{Austria}, Belgium\\cite{Belgium}, Czech Republic\\cite{CzechRepublic}, Denmark\\cite{Denmark}, Finland\\cite{Finland}, France\\cite{France}, Germany\\cite{Germany}, Lithuania\\cite{Lithuania}, Luxembourg\\cite{Luxembourg}, Malta\\cite{Malta}, Portugal\\cite{Portugal}, Slovakia\\cite{Slovakia}, Spain\\cite{Spain}, Sweden\\cite{Sweden}, and the Netherlands\\cite{Netherlands}.\n\n\\subsection{Qualitative Analysis}\nIn order to evaluate the documents, two researchers conducted independent analyses to minimize the risk of bias. 
In the first place, the researchers selected the portions of text in each strategy associated with investments. This purposeful selection was guided by the following criteria: \\textit{(i)} investments with clear estimations in money allocation; \\textit{(ii)} investments made by the nation under consideration; and \\textit{(iii)} investments planned for the year of the publication or in the future. \n \nAs a first step, the researchers analyzed the text by assigning a label to each portion of the text that referred to a similar theme, according to the open coding method \\cite{strauss1998basics}.\nThen the labels from each analysis were inspected and compared to create a final, unified version, which resulted in a total of 18 first-order codes. As a second step, we aggregated the labels according to common characteristics into 8 high-level second-order codes using axial coding \\cite{strauss1998basics}. Table \\ref{tab:qualitative} reports the first- and second-order codes, along with the nations they refer to.\nThe table includes the results from the 11 Member States that report investment information meeting our requirements. \n\n\\subsection{Limitations} \\label{subsec:limit}\nOur analysis is subject to some limitations that arise from the nature of the selected documents.\nEven if the European Commission provided some guidance in \\cite{AIStrategy} on the objectives that should be covered, each document is structured in its own way, with differences in the level of uniformity and detail provided.\nTherefore it was not possible to obtain a comparison of monetary investments complete enough to give a representative picture of the European landscape. Also, we cannot rule out that some nations decided not to present economic estimations, or to refer to them only broadly, in these strategies. 
However, we decided to exclude from our research any discussion that did not include an economic value, in order to focus our data on concrete actions. Nevertheless, we suggest that research with a broader analysis of investments (e.g. including examples of investments from other countries or intentions to invest in specific fields) could provide a more holistic vision of the topic. \n\n\\begin{table*}[t]\n\\centering\n\\caption{Qualitative analysis of the investment areas in the European AI National Strategies}\n\\small\n\\begin{tabular}{|p{4cm}|p{3.9cm}|p{1cm}|p{2.5cm}|p{1cm}|}\n\\hline\n\\multicolumn{5}{|c|}{\\textbf{Investment Codes}}\\\\\n\\hline \\textbf{Nations (\\# of occurrences)} & \\textbf{1st Order Codes} & \\textbf{N(\\#occ)} & \\textbf{2nd Order Codes} & \\textbf{N(\\#occ)}\\\\\n\\hline\n\\multicolumn{1}{|l|}{Netherlands} & {\\cellcolor[gray]{0.9} Social Impact} & {\\cellcolor[gray]{0.9} 1} & {\\cellcolor[gray]{0.8}Society} & {\\cellcolor[gray]{0.8} 2}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9} Digital Welfare Solution} & {\\cellcolor[gray]{0.9} 1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}} \\\\ \\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Netherlands} & {\\cellcolor[gray]{0.9}Public Collaboration} & {\\cellcolor[gray]{0.9} 1} & {\\cellcolor[gray]{0.8} Cooperation} & {\\cellcolor[gray]{0.8} 3}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9} Public-Private Collaboration} & {\\cellcolor[gray]{0.9} 1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9}International Collaboration} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Belgium, Denmark (2), Germany, Malta, Spain} & {\\cellcolor[gray]{0.9} Current Investment} & {\\cellcolor[gray]{0.9}6} & {\\cellcolor[gray]{0.8}National Fund} & {\\cellcolor[gray]{0.8}11}\\\\ 
\\cline{1-3}\n\\multicolumn{1}{|l|}{Belgium (2), Denmark, Finland, Netherlands } & {\\cellcolor[gray]{0.9}Future Investments} & {\\cellcolor[gray]{0.9}5} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Belgium, Denmark (3)} & {\\cellcolor[gray]{0.9}Digital Technology} & {\\cellcolor[gray]{0.9}4} & {\\cellcolor[gray]{0.8}Innovation} & {\\cellcolor[gray]{0.8}7}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9}Cybersecurity} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9}Data Collection} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Netherlands} & {\\cellcolor[gray]{0.9}Supercomputing} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Malta} & {\\cellcolor[gray]{0.9}National Promotion} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}International Representation} & {\\cellcolor[gray]{0.8}1}\\\\ \n\\hline\n\\multicolumn{1}{|l|}{Malta, Netherlands (5)} & {\\cellcolor[gray]{0.9}Employee Training} & {\\cellcolor[gray]{0.9}6} & {\\cellcolor[gray]{0.8}Education} & {\\cellcolor[gray]{0.8}12}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark, France} & {\\cellcolor[gray]{0.9}AI Literacy for Citizens} & {\\cellcolor[gray]{0.9}2} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Denmark, Finland, Netherlands (2)} & {\\cellcolor[gray]{0.9}Educational Fund} & {\\cellcolor[gray]{0.9}4} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Denmark (3), Germany, Malta (2), Netherlands (2)} & {\\cellcolor[gray]{0.9}Companies Investment} & {\\cellcolor[gray]{0.9}8} & {\\cellcolor[gray]{0.8}Private} & {\\cellcolor[gray]{0.8}8}\\\\ 
\\cline{1-3}\n\\hline\n\\multicolumn{1}{|l|}{Denmark} & {\\cellcolor[gray]{0.9}Investment for Local Administrations} & {\\cellcolor[gray]{0.9}1} & {\\cellcolor[gray]{0.8}Public} & {\\cellcolor[gray]{0.8}5}\\\\ \\cline{1-3}\n\\multicolumn{1}{|l|}{Austria, Lithuania, Denmark, Sweden} & {\\cellcolor[gray]{0.9}AI Research} & {\\cellcolor[gray]{0.9}4} & {\\cellcolor[gray]{0.8}} & {\\cellcolor[gray]{0.8}}\\\\ \\cline{1-3}\n\\hline\n\\end{tabular}\n\\label{tab:qualitative} \n\n\\end{table*} \n\n\\section{Results} \\label{sec:results}\nAccording to the results, 11 National Strategies report investment plans that meet our requirements. These include Austria, Belgium, Denmark, Finland, France, Germany, Lithuania, Malta, Spain, Sweden and the Netherlands. In the following subsections we highlight the main findings, distinguishing between general investment plans and investments with an explicit commitment to society (e.g. welfare solutions, education and social impact).\n\n\\subsection{General Trends}\nMost of the National Strategies (7 out of 11) report packages of investments in AI initiatives (\\textbf{National Fund}).\nThese investments vary depending on whether they refer to ongoing efforts (\\textbf{Current Investment}) or future plans (\\textbf{Future Investments}). Their description is usually generic and reports total volumes which often cover different areas of application (e.g. Belgium plans to invest at least EUR 1 billion by 2030, focusing on specific areas such as healthcare\/life sciences). \n\nSome strategies provide figures which connect to the digital transformation (\\textbf{Innovation}). 
For example, the Netherlands is investing EUR 18 million in a new national supercomputer (\\textbf{Supercomputing}), while Denmark allocated DKK 1.5 billion to cyber and information security (\\textbf{Cybersecurity}) and DKK 250 million to data quality and cross-sectoral cooperation on health data (\\textbf{Data Collection}).\nAnother emerging trend concerns investments in the private sector (\\textbf{Private}), with special attention to supporting start-ups and SMEs in the uptake of AI, as they account for 99\\% of businesses in Europe \\cite{AIStrategy}. The expectation is that the early adoption of new technologies will help boost innovation and competition in the AI landscape. \nIn some strategies there are figures which refer more specifically to the public sector (\\textbf{Public}). For instance, Denmark allocates resources for testing and deploying digital welfare solutions in municipalities and regions (\\textbf{Investment for Local Administration}), while Austria, Lithuania, Denmark, and Sweden report investments in academic research (\\textbf{AI Research}). Another interesting case concerns Malta, which plans to spend EUR 1 million per annum to promote its international visibility and become an emerging hub for technologies in Europe (\\textbf{International Representation}). \n \n\n\\subsection{Social Goods}\nWhile the documents include details about general investments (like AI research and public collaboration), only a few of them report quantified investments related to social good. For example, the Netherlands reports investments to study the impact of AI on work and employment (\\textbf{Social Impact}). Denmark specifies allocated resources for digital welfare solutions (\\textbf{Digital Welfare Solution}), which connect to a wider reform of the Public Sector aimed at contributing to better and more cohesive welfare services. Regarding education, seven strategies propose economic plans. 
For example, Denmark and France describe investments to support the population in acquiring new digital competences and preparing for the new workplaces expected to be created by the rise of AI technologies, which will require a new generation of experts in different fields (\\textbf{AI Literacy for Citizens}).\nThe Netherlands describes multiple economic initiatives (5 times) for training workers and promoting a learning culture in SMEs (\\textbf{Employee Training}), and, along with Denmark and Finland, it proposes concrete investments in higher education. For example, the Danish government set aside a pool of DKK 190 million (EUR 25 million) to cover all technical fields, including new technologies like AI (\\textbf{Educational Fund}). \nAnother interesting proposal concerns investments in cooperation. Indeed, the Netherlands reported an open call, worth EUR 2.3 million, on explainable, socially aware and responsible AI (\\textbf{Public Collaboration}). \n\n\\section{Discussion} \\label{sec:disc}\nIn this section we consider the collected results in the light of the European strategy \n(see the three pillars outlined in \\S\\ref{sec:intro}) and provide our vision for the way forward.\\\\\nIn order to boost the EU's technological and industrial capacity, Member States seem to embrace the direction suggested by the EC to encourage the progress of AI in the private and public sectors. This is well reflected in our analysis. Indeed, 7 countries out of 11 provide specific estimates for their investments in the private and public sectors, and 3 of them (Denmark, the Netherlands and Malta) report more than one measure (e.g. the Danish strategy includes 5 instances of the codes \\textbf{``Private''} and \\textbf{``Public''}, see Table \\ref{tab:qualitative}). 
Our results also align with a recent survey by the EC, which found that 42\\% of European companies are already using AI \\cite{European2020enterprise}.\nEducation is key to preparing society for the socio-economic changes ahead, in line with the second pillar. Even though it is a recurrent objective in AI strategies, we observed fewer nations reporting investments in numeric terms (4 nations out of 11). Those plans concern the re-training and upskilling of the population, which will play an important role in including society in the transformation. Indeed, AI literacy and education can contribute to bridging the gap, created by the rapid growth of AI, between the producers, who know the strengths and limits of this technology, and the consumers, who may lack knowledge about AI and be more exposed to harmful applications.\nOn the one hand, this will lead to new opportunities for citizens to develop AI-based competences at work and to contribute to the digital transformation that will shape our society. On the other hand, a wider AI culture will favour a faster acceptance and penetration of new technologies in society, bringing to life Europe's stated aim of improving society.\nDifferent nations point out the importance of including citizens in the process of defining the future applications of AI, especially those that will be deployed and used by public administrations \n(e.g. Austria intends to support societal discussion to increase acceptance, and the Czech Republic plans to involve employees in the technological transformation). \n\nThe third pillar relies on the creation of an ethical and legal framework, which is tackled through different initiatives in the National Strategies. For example, 5 of them (Belgium, Denmark, Luxembourg, Malta, Spain) state that they want to create an ethical committee to supervise the use and development of AI systems. Malta puts forward the proposal of a national AI certification program based on its Ethical AI Framework. 
However, all these propositions lack details about allocated resources. While some of these proposals build upon existing initiatives and investment schemes, we expect to see further measures, as the ambitious goal of Trustworthy AI cannot be achieved without costs. The set-up of an appropriate ethical and legal framework is in fact a demanding effort which implies a long-term view and the mobilization of substantial resources (e.g. experts in different fields, new business processes, holistic assessment methodologies, audits, etc.). \n\n\\subsection*{Our Vision for a better strategy}\nWe have identified four key areas that could improve the impact of the current investments to develop AI technology for social good.\n\n\\textbf{Global benefit:} We suggest envisioning an accessible and inclusive approach that \\textit{(i)} includes the needs and opinions of different actors and stakeholders (e.g. the Netherlands and Denmark report investments for citizens and for international cooperation); \\textit{(ii)} focuses on diversified fields aligned with the Sustainable Development Goals \\cite{SGDs} (health, agriculture, environment, etc.); and \\textit{(iii)} considers direct and indirect consequences of the use and development of AI-based solutions. In our opinion, it is important to understand the knock-on effects of those investments in order to predict possible opportunities and limitations in the long term.\n\n\\textbf{Legal Frameworks\/Support:} We suggest providing measurable efforts in the promotion of the legal and ethical aspects of AI, including \\textit{(i)} transparency and trustworthiness of the AI system; \\textit{(ii)} safeguarding of the physical or psychological integrity and the dignity of the human being; and \\textit{(iii)} dissemination to society. We recommend that each initiative and decision affected by an AI system, especially those coming from the public sector, should be easily accessible to citizens. 
Progressing towards a competitive AI landscape in Europe requires building upon an empowered society, able to interact with the technology and aware of its technical and ethical limits and of the legal processes that protect it.\n\n\\textbf{Social Implications:} The implications of the use of AI can be unpredictable; therefore, we believe it is important to understand the challenges that our society will face. Initiatives such as the one presented by the Netherlands, to invest in research lines studying the social impact of AI, will be necessary to promote a responsible use of new technologies. Even if expert consultations can offer a general overview of the risks, real-world applications could bring new issues to light. To obtain a more comprehensive view of these implications, we believe that it will be necessary to create multidisciplinary teams that can analyse different dimensions of this challenge.\n \n\\textbf{Societal Participation:} Citizens are already playing a key role in the AI landscape as users, but also by generating and sharing data that is used for multiple purposes. Thus it would be fair to include them in the definition of the ethical, legal, socio-economic and cultural strategies that will shape the future of Europe. To promote a human-centric approach to AI, society should be put in the loop of the life cycle of AI systems through direct participation or open consultations. As part of the AI4EU Observatory, we are planning a citizen consultation to further analyse this vision of the European strategy on AI. \n\n\\section{Conclusions} \\label{sec:concl}\nIn this paper we present an ongoing study on how European countries are approaching the field of AI, with its promises and risks, through the lens of their national AI strategies. In particular, we aimed to investigate how European countries are investing in AI and to what extent the stated plans can contribute to the benefit of the whole society. 
To understand how Member States are investing in AI-based technologies for social good, we conducted a qualitative analysis of 15 nations, highlighting the distribution of economic investments. Although the sources we used were limited (see \\S\\ref{subsec:limit}), our findings show that National Strategies are aligned with the pillars of the European strategic vision of AI. However, they still lack concrete actions that will define the path towards a human-centric and trustworthy AI. There is a need for a stronger commitment to boost the collaboration between the public and private sectors to reach a network of excellence in Europe, able to attract talent and generate innovation, without leaving behind an ethical and legal framework able to protect and prioritise European citizens' rights and interests.\n\n\\section*{Acknowledgement}\nThe authors are partially supported by the project A European AI On Demand Platform and Ecosystem (AI4EU) H2020-ICT-26 \\#825619. The views expressed in this paper are not necessarily those of the consortium AI4EU.\n\n\\bibliographystyle{plain}\n
\\newcommand{\\sect}[1]{\\section{#1}}\n\\newcommand{\\subsect}[1]{\\subsection{#1}}\n\\newcommand{\\subsubsect}[1]{\\subsubsection{#1}}\n\\renewcommand{\\theequation}\n {\\arabic{section}.\\arabic{equation}}\n\\renewcommand{\\thefootnote}{\\fnsymbol{footnote}}\n\n\n\n\\def{\\it i.e.}{{\\it i.e.}}\n\\def{\\it e.g.}{{\\it e.g.}}\n\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\begin{array}{\\begin{array}}\n\\def\\end{array}{\\end{array}}\n\n\\def{\\cal U}{{\\cal U}}\n\\def\\hbox{Fun}{\\hbox{Fun}}\n\\def\\co{\\Delta}
\n\n\\def\\alpha{\\alpha}\n\\def{\\beta}{{\\beta}}\n\n\\def\\bar{\\alpha}{\\bar{\\alpha}}\n\\def\\bar{\\beta}{\\bar{\\beta}}\n\n\\def\\gamma{\\gamma}\n\\def\\g{{\\cal G}} \n\\def\\varepsilon{\\varepsilon}\n\n\\def\\k{\\omega} \n\n\\def\\J{{\\extr J}} \n\\def\\P{{\\extr P}} \n\n\\def\\R{{\\extr R}} \n\\def\\T{{\\extr T}} \n\\def\\X{{\\extr X}} \n\\def\\W{{\\extr W}} \n\n\\def\\C{\\mbox{\\ C}} \n\\def\\S{\\mbox{\\ S}} \n\n\\def\\s{{{so}}} \n\\def\\is{{{iso}}} \n\n\\def\\k_1,\\k_2,\\dots,\\k_N{\\k_1,\\k_2,\\dots,\\k_N}\n\\def0,\\k_2,\\dots,\\k_N{0,\\k_2,\\dots,\\k_N}\n\\def\\k_2,\\dots,\\k_N{\\k_2,\\dots,\\k_N}\n\\def\\k_2,\\k_3,\\k_4{\\k_2,\\k_3,\\k_4}\n\\def\\k_2,\\k_3{\\k_2,\\k_3}\n\\def\\k_2{\\k_2}\n\n\\def\\lambda{\\lambda}\n\n\n\\def\\Gamma^{(m)}_\\pardeform{\\Gamma^{(m)}_\\lambda}\n\\def\\Gamma^{(m)}{\\Gamma^{(m)}}\n\n\n\\catcode`\\@=11\n\\font\\tenmsa=msam10\n\\font\\sevenmsa=msam7\n\\font\\fivemsa=msam5\n\\font\\tenmsb=msbm10\n\\font\\sevenmsb=msbm7\n\\font\\fivemsb=msbm5\n\\newfam\\msafam\n\\newfam\\msbfam\n\\textfont\\msafam=\\tenmsa \\scriptfont\\msafam=\\sevenmsa\n \\scriptscriptfont\\msafam=\\fivemsa\n\\textfont\\msbfam=\\tenmsb \\scriptfont\\msbfam=\\sevenmsb\n \\scriptscriptfont\\msbfam=\\fivemsb\n\n\\def\\hexnumber@#1{\\ifnum#1<10 \\number#1\\else\n \\ifnum#1=10 A\\else\\ifnum#1=11 B\\else\\ifnum#1=12 C\\else\n \\ifnum#1=13 D\\else\\ifnum#1=14 E\\else\\ifnum#1=15 
F\\fi\\fi\\fi\\fi\\fi\\fi\\fi}\n\n\\def\\msa@{\\hexnumber@\\msafam}\n\\def\\msb@{\\hexnumber@\\msbfam}\n\\mathchardef\\blacktriangleright=\"3\\msa@49\n\\mathchardef\\blacktriangleleft=\"3\\msa@4A\n\\catcode`\\@=12\n\n\n\\def\\triangleright\\!\\!\\!\\blacktriangleleft{\\triangleright\\!\\!\\!\\blacktriangleleft}\n\\def\\triangleright\\!\\!\\!<{\\triangleright\\!\\!\\!<}\n\\def>\\!\\!\\blacktriangleleft{>\\!\\!\\blacktriangleleft}\n\\def\\triangleright{\\triangleright}\n\\def\\triangleright\\!\\!\\!\\blacktriangleleft{\\triangleright\\!\\!\\!\\blacktriangleleft}\n\\def\\blacktriangleright\\!\\!\\!\\triangleleft{\\blacktriangleright\\!\\!\\!\\triangleleft}\n\\def\\triangleleft{\\triangleleft}\n\\def\\bar\\triangleright{\\bar\\triangleright}\n\\def\\triangleright\\!\\!\\!<{\\triangleright\\!\\!\\!<}\n\\def>\\!\\!\\!\\blacktriangleleft{>\\!\\!\\!\\blacktriangleleft}\n\\def>\\!\\!\\!\\triangleleft{>\\!\\!\\!\\triangleleft}\n\\def\\blacktriangleright\\!\\!\\!<{\\blacktriangleright\\!\\!\\!<}\n\n\n\\defH{H}\n\\defA{A}\n\\defK{K}\n\\defh{h}\n\\defg{g}\n\\defa{a}\n\\defb{b}\n\\defc{c}\n\n\n\n\n\\begin{document}\n\n\\rightline{DAMTP 96--100}\n\\rightline{To appear in J. Phys. {\\bf A}}\n\\vspace{1.5cm}\n\n\\begin{center} \n{\\large{\\bf{GRADED CONTRACTIONS AND BICROSSPRODUCT STRUCTURE}}}\n{\\large{\\bf{OF DEFORMED INHOMOGENEOUS ALGEBRAS \n\\footnote{PACS numbers: 02.20, 02.20Sv, 02.40}\n}}}\n\\end{center} \n\n\\bigskip\\bigskip\n\n\\begin{center} \nJ. A. de Azc\\'arraga$^{1}$ \n\\footnote{St. John's College Overseas Visiting Scholar.}\n\\footnote{On sabbatical (J.A.) leave and on leave of absence (J.C.P.B.)\nfrom Departamento de F\\'{\\i}sica Te\\'orica and IFIC, Centro Mixto Univ. \nde Valencia-CSIC, E--46100 Burjassot (Valencia), Spain. E-mails: \nj.azcarraga@damtp.cam.ac.uk (azcarrag@evalvx.ific.uv.es), \npbueno@lie.ific.uv.es.},\nM.A. del Olmo$^2$ \\footnotemark[4], \nJ.C. P\\'erez Bueno$^{1}$ \\footnotemark[3], \nand M. 
Santander$^2$ \n\\footnote{e-mails: olmo@cpd.uva.es, santander@cpd.uva.es.}\n\\end{center}\n\n\\begin{center} {\\it $^1$ Department of Applied Mathematics and Theoretical \nPhysics, \\\\\nSilver St., Cambridge CB3 9EW, UK}\n\\end{center}\n\n\\begin{center}{\\it{$^2$ Departamento de F\\'{\\i}sica Te\\'orica,\nUniversidad de Valladolid}\\\\ E--47011, Valladolid, Spain} \n\\end{center}\n\n\n\n\\begin{abstract}\nA family of deformed Hopf algebras corresponding to the classical\nmaximal isometry algebras of zero-curvature $N$-dimensional spaces (the\ninhomogeneous algebras ${iso}(p,q), \\ p+q=N,$ as well as some of their\ncontractions) are shown to have a bicrossproduct structure. This is done for\nboth the algebra and, in a low-dimensional example, for the (dual) group\naspects of the deformation.\n\\end{abstract}\n\n\\vfill\\eject\n\\sect{Introduction}\n\nThe procedure to deform simple algebras and groups\nwas established by Drinfel'd\n\\cite{Dri}, Jimbo \\cite{Ji} and Faddeev, Reshetikhin and Takhtajan \\cite{FRT}. \nThe algorithm, which leads to the\nso-called `quantum' algebras, does not cover, however, the case of\nnon-semisimple algebras. Since the contraction process leads to\ninhomogeneous algebras by starting from simple ones, it is natural\nto use it as a way to deform inhomogeneous Lie ({\\it i.e.}, `classical'\nor undeformed) algebras. This path of extending the classical idea\nof the Lie algebra contraction to the case of deformed algebras\nwas proposed by Celeghini {\\em et al.} \\cite{CGST}. 
The basic\nrequirement to define a deformed inhomogeneous algebra is\nthe commutativity of the processes of contraction and\ndeformation: when considering a simple algebra and one of its\ninhomogeneous contractions, both at classical and deformed\nlevels, the deformation of the contracted inhomogeneous Lie algebra \nshould coincide with the contraction of the deformed simple algebra.\nThis commutativity is not always guaranteed, and in general requires \n\\cite{CGST} a redefinition of the deformation parameter $q$ in \nterms of the contraction parameter and the new deformation one, so that\n$q$ is not a passive element in the contraction. \nThis was used, for instance, to obtain the $\\kappa$-Poincar\\'e algebra\n\\cite{LNRT}, for which the deformation parameter\n$\\kappa$ has dimensions of inverse length. \n\nThe concept of contraction of Lie\nalgebras (or groups) was discussed in the early fifties by \\.In\\\"on\\\"u \nand Wigner \\cite{IW} (see also \\cite{Saletan}). \nThe idea of group contraction itself arose in the group analysis\nof the non-relativistic limit, and its applications to\nmathematical physics problems have been very fruitful. \nThe study of the details\nbehind this procedure unveils interesting mathematical structures,\nwhich in many important cases are linked to physical properties. \nIn particular, the contraction process may increase the group cohomology\n\\cite{AA} (see also \\cite{AHPS}), \nas is the case in the standard non-relativistic limit. \nSeveral attempts have been made to systematise the study of\ncontractions, \nand recently a new approach has\nbeen put forward by Moody, Montigny and Patera \\cite{dMPMP},\nunder the name of graded contractions. \nThe key idea there is to preserve\na given grading of the original Lie algebra. 
This condition may fit\nneatly with physical requirements and is automatically\nsatisfied in the simplest case of the \\.In\\\"on\\\"u--Wigner contractions,\nwhich correspond to the simplest ${\\extr Z}_2$-grading.\n\nA class of Lie algebras describing a whole family of\ncontractions is the so-called orthogonal Cayley-Klein (CK) algebras. The\nname is due to historical reasons: these are the Lie algebras of the\nmotion groups of real spaces with a \nprojective metric \\cite{SommerYRY} (see also \\cite{SHO}). \nThe same family appears as a natural subset of all ${\\extr Z}_2^{\\otimes\nN}$-graded contractions which can be obtained from $\\s(N+1)$ \\cite{HMOS}. \nAnd furthermore, among orthogonal CK\nalgebras we find not only all simple pseudo-orthogonal algebras,\nbut many non-semisimple algebras of physical importance, as the\nkinematical Poincar\\'e and Galilei algebras in $(N-1,1)$ dimensions, the\nEuclidean algebra in $N$ dimensions, etc. The CK scheme\ndoes not deal with a single Lie algebra, but with a whole\nfamily of them simultaneously,\neach of which is parametrised by a set of real\nnumbers with a well-defined geometrical and physical significance. \nThe main point to be stressed is the ability of this kind of approach to\ndescribe some properties of many Lie algebras in a single unified form.\nThis is possible as the Lie algebras in the CK family, though not\nsimple, are `very near' to the simple ones, and many structural\nproperties of the simple algebras, when suitably reformulated, still\nsurvive for the CK algebras.\n\nIt is possible to give deformations of algebras in the CK\nfamily; naturally enough these will be \nsaid to belong to the CK family of Hopf `quantum' algebras. \nIn \\cite{BHOSab} deformations of the\nenveloping algebras of all algebras in the CK family of \n$\\s(p,q),\\ p+q=3, 4$ were given. 
For higher dimensions, {\\it i.e.}\\ for \nalgebras in the family of $\\s(p,q),\\ (p+q=N+1)$ with $N>3$, a\nquantum deformation of the general parent member of the CK family \nis still not known, yet there exists a scheme of quantum deformations\nencompassing all motion algebras of flat affine spaces in $N$\ndimensions, which include the ordinary inhomogeneous \n$\\is(p,q),\\ (p+q=N)$ \n\\cite{BHOScd}. This scheme provides a Hopf algebra\ndeformation for each algebra in the family. Some of its members \nare physically relevant non-semisimple algebras, and include as\nparticular cases most of the deformations of these algebras found in the\nliterature. \n\nAn important fact in quantum algebra\/group theory is the (co)existence\nof two closely linked algebraic structures: the algebra (as expressed by the \ncommutators or the commuting properties of the algebra \nof functions on the group) and the coalgebra (as given by the coproduct). \nMost of the complications found when doing quantum contractions can\nbe traced to the need to deal simultaneously with these two aspects.\nFor instance, a naive contraction might lead to divergences either in\nthe coproduct or in the $R$--matrix \\cite{CGST,BGHOS}.\nOne of the main motivations behind the CK scheme was to\nbe able to describe at the same time a family of algebras, including\nsome simple and some contracted algebras, in such a way that the\npossible origin of divergences under contractions is clearly seen\nand controlled. \n\nIn this paper we address a specific problem where the advantages of a\nCK type scheme are exhibited.\nIn the classical case, an \\.In\\\"on\\\"u--Wigner (IW) contraction\nof a simple algebra leads to a non-semisimple one which is the\nsemidirect sum of an abelian algebra and the preserved subalgebra of\nthe original algebra with respect to which the contraction was made.\n{\\it All} IW contractions of simple algebras have a semidirect\nstructure. 
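As a standard illustration of this semidirect pattern (a textbook example, recalled here for completeness), consider the \\.In\\\"on\\\"u--Wigner contraction of $\\s(3)$, with commutators $[J_i,J_j]=\\epsilon_{ijk}J_k$, made with respect to the subalgebra spanned by $J_3$. Setting $P_1=\\varepsilon J_1$, $P_2=\\varepsilon J_2$, $J=J_3$ and letting $\\varepsilon\\to 0$ gives\n\\begin{equation}\n[J,P_1]=P_2\\ ,\\qquad [J,P_2]=-P_1\\ ,\\qquad\n[P_1,P_2]=\\varepsilon^2 J\\to 0\\ ,\n\\end{equation}\n{\\it i.e.}, the Euclidean algebra $\\is(2)$, the semidirect sum of the abelian translations $\\{P_1,P_2\\}$ and the preserved rotation subalgebra $\\s(2)$ generated by $J$.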
\nIt is then natural to ask: is there a similar pattern for the contracted \ndeformations {\\it i.e.},\nfor the Hopf algebra deformations of contracted simple Lie algebras? \nThe analogue of the semidirect product is an example of the bicrossproduct \nof Hopf algebras, introduced by Majid \\cite{MajMajid} \n(see also \\cite{SingMoln,bicII}).\nThe aim of this paper is to show that all deformed algebras in\nthe affine\\footnote{We use the word `affine' in the sense of inhomogeneous. \nNot all deformed inhomogeneous groups have a bicrossproduct structure; this \nis, for instance, the case of ${\\cal U}_q({\\cal E}(2))$ \nas discussed in \\cite{APb}.} \nCK family $\\is_{\\k_2,\\dots,\\k_N}(N)$ have indeed a\nbicrossproduct structure, as is the case of the $\\kappa$-Poincar\\'e \n\\cite{KPoinBicros}. \nThis result opens the possibility of\nrecovering more easily the deformed dual groups\n$\\hbox{Fun}_q(ISO_{\\k_2,\\dots,\\k_N}(N))$ by using the dual bicrossproduct\n`group-like' expressions (see \\cite{APb} for some group-like (rather\nthan algebra-like) examples of this construction).\nClassically, the $\\is_{\\k_2,\\dots,\\k_N}(N)$ family includes all inhomogeneous\nLie algebras $\\is (p,q),\\ (p+q=N)$, so we will refer loosely to the aim of \nthe paper as showing the bicrossproduct structure of deformed inhomogeneous\ngroups. It should be kept in mind, however, that we are referring to a\nspecific deformation, and that there exist examples \n(see \\cite{APb}) where \na contraction of a deformed algebra has no bicrossproduct structure. \n\nThe paper is organised as follows. In Section II we \nbriefly describe the classical Cayley--Klein algebras and \npresent a discussion on contractions and \ndimensional analysis since this is relevant for the assignment of physical \ndimensions to the deformation parameters. \nIn Sec. III we give the explicit expressions for their\n$q$--deformations. 
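To fix ideas, recall the best-known example, the $\\kappa$-Poincar\\'e algebra, which in the basis of \\cite{KPoinBicros} is the bicrossproduct ${\\cal U}(so(1,3))\\triangleright\\!\\!\\!\\blacktriangleleft T$, $T$ being the translation sector. The translation generators commute among themselves, but carry the twisted coproduct\n\\begin{equation}\n\\co P_0=P_0\\otimes 1+1\\otimes P_0\\ ,\\qquad\n\\co P_i=P_i\\otimes 1+e^{-P_0\/\\kappa}\\otimes P_i\\ ,\n\\end{equation}\nwhile the Lorentz generators act on $T$ in a deformed way; in the limit $\\kappa\\to\\infty$ the classical semidirect structure is recovered.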
The bicrossproduct structure of these $q$--deformed\nCayley--Klein Hopf algebras is shown in Section IV. Examples of this structure\nfor physically interesting algebras are presented in Section V. \nIn Section VI we show, as an example, how to obtain the (dual) \ngroup deformation in the case of lowest dimension $N=2$. \nSome conclusions close the paper. \n\n\n\n\\sect{ Affine Cayley--Klein Lie algebras and dimensional analysis} \n\n\n\\subsect{The CK scheme of geometries and Lie algebras}\n\n\\bigskip\nThe complete family of the $so(N+1)$ CK algebras is a set of real Lie algebras \nof dimension $(N+1)N\/2$, characterised by\n$N$ real parameters $(\\k_1,\\k_2,\\dots,\\k_N)$ \\cite{SHO}. \nThis family appears, {\\it e.g.}, as a\nnatural subfamily \\cite{HMOS} of all the graded\ncontractions from the Lie algebra $so(N+1)$ \\cite{HS} corresponding to a\n${\\extr Z}_2^{\\otimes N}$ grading of $so(N+1)$, \nand its elements will be denoted\n$\\s_{\\k_1,\\k_2,\\dots,\\k_N} (N+1)$; \nin particular, \n$\\s_{1, 1, \\dots, 1} (N+1) \\equiv \\s(N+1)$. \nIn terms of a basis of\n$\\s_{\\k_1,\\k_2,\\dots,\\k_N} (N+1)$ adapted to the grading, $\\{ \\J_{ab};\\ a 0)$. \nIf all the $\\k_i$ are non-zero we can introduce also\n$\\J_{ba}, (a