\section{Introduction}

Quantum metrology is an emerging cross-disciplinary field between precision measurement and quantum technology,
and has now become one of the most promising fields in quantum technology due to the general belief that it
could reach industrial-grade applications in the near future~\cite{Giovannetti2004,Giovannetti2011,Degen2017,
Braun2018,Pezze2018}. Meanwhile, its development benefits not only applied technologies like magnetometry,
thermometry, and gravimetry, but also studies in fundamental physics such as the detection of gravitational
waves~\cite{LIGO2013} and the search for dark matter~\cite{Backes2021,Jiang2021}. As the theoretical support of
quantum metrology, quantum parameter estimation started in the 1960s~\cite{Helstrom1967} and has become an
indispensable component of quantum metrology nowadays~\cite{Paris2009,Toth2014,Szczykulska2016,Liu2020,Rafal2020,
Sidhu2020,Albarelli2020,Liu2022,Reich2013a,Reich2013b,Goerz2014}.

One of the key challenges in quantum parameter estimation is to design optimal schemes with quantum apparatuses
and quantum resources, leading to enhanced precision when compared with their classical counterparts. A typical
scheme in quantum parameter estimation usually contains four steps: (1) preparation; (2) parameterization; (3)
measurement; and (4) classical estimation. The first step is the preparation of the probe state.
The parameters
to be estimated are involved in the second step, which is also known as sensing in the field of quantum sensing.
With the parameterized state given in the second step, the third step is to perform the quantum measurement, which
results in a set of probability distributions. Estimating the unknown parameters from the obtained probability
distributions is done in the last step. The design of an optimal scheme usually requires the optimization of
some or all of the steps above.

In quantum parameter estimation, there exist various mathematical bounds to depict the theoretical precision
limit. A given bound may be more or less informative depending on the type of estimation scenario considered,
be it a single-shot vs. many-repetition scenario, a single- vs. multiple-parameter scenario, etc. Moreover, by
choosing different objective functions when optimizing quantum estimation schemes, one may arrive at solutions
with contrastingly different robustness properties, complexity of practical implementation, and so on. Hence,
the design of optimal schemes has to be performed case by case most of the time. This is the reason why a general
quantum parameter estimation toolkit is needed. Developing such a toolkit is the major motivation of this work.

Currently, there exist many useful toolkits based on various platforms in quantum information and quantum metrology.
A famous one is QuTiP, developed by Johansson, Nation, and Nori~\cite{Johansson2012,Johansson2013} in 2012,
which can execute many basic calculations in quantum information. In the field of quantum control, Machnes et
al.~\cite{Machnes2011} developed DYNAMO and Hogben et al. developed Spinach~\cite{Hogben2011}, both based on Matlab.
Goerz et al. developed Krotov~\cite{Goerz2019}, which has three versions based on Fortran, Python, and Julia,
respectively. G\"{u}nther et al. developed Quandary~\cite{Gunther2021} based on C++.
Moreover, there exist other
packages like Kwant~\cite{Groth2014} for quantum transport and ProjectQ~\cite{Steiger2018} for quantum computing.
In quantum metrology, Chabuda and Demkowicz-Dobrza\'{n}ski developed TNQMetro~\cite{Chabuda2021}, a tensor-network
based Python package to perform efficient quantum metrology computations.

Here we present a new toolkit, QuanEstimation, based on both Python and Julia for quantum parameter estimation, and
provide some examples to demonstrate its usage and performance. QuanEstimation contains several widely used metrological
tools, such as the asymptotic Fisher-information-based quantities as well as their Bayesian counterparts (including direct
Bayesian cost minimization, Bayesian versions of the classical and quantum Cram\'{e}r-Rao bounds, as well as the quantum
Ziv-Zakai bound). For the sake of scheme design, QuanEstimation can execute the optimizations of the probe state, control,
and measurement, as well as simultaneous optimizations among them, with both gradient-based and gradient-free methods.
Due to the fact that most of the time adaptive measurement schemes are the best practical way to realize the asymptotic
advantage indicated by the quantum Fisher information, QuanEstimation can also execute online adaptive measurement
schemes, such as adaptive phase estimation, and provide the real-time values of the tunable parameters that can be
directly used in an experiment.

\section{Overview}

\begin{figure*}[bt]
\centering\includegraphics[width=17cm]{Fig_schematic.pdf}
\caption{Schematic of the package structure of QuanEstimation.
The blue boxes, white boxes with blue edges, white boxes with
orange edges, gray boxes, and gray boxes with dotted orange
boundaries represent the folders, files, classes, functions
or methods, and wrapped Julia methods which are solved in
Julia scripts, respectively.
\label{fig:package_structure}}
\end{figure*}

QuanEstimation is a scientific
computing package focusing on the calculations and optimizations in quantum
parameter estimation. It is based on both Python and Julia. The interface is written in Python due to the fact
that nowadays Python is one of the most popular platforms for scientific computing. However, QuanEstimation
contains many optimization processes which need to execute massive numbers of elementary operations such as loops.
These elementary operations could be very time-consuming in Python, and thus strongly affect the efficiency of the
optimizations. This is why Julia is involved in this package. Julia has many attractive features, such as optional
typing and multiple dispatch, and these features make loops and other elementary operations significantly faster
than in Python. Hence, the optimizations in QuanEstimation are all performed in Julia. Nevertheless, the community
of Julia is currently not comparable to that of Python, and the hybrid structure of this package allows people who
are not familiar with Julia to use the package without any obstacle. In the meantime, QuanEstimation
has a full Julia version for users experienced in Julia.

The package structure of QuanEstimation is illustrated in Fig.~\ref{fig:package_structure}. The blue boxes and white
boxes with blue edges represent the folders and files. The white boxes with orange edges and gray boxes represent
classes and functions/methods. The gray boxes with dotted orange boundaries are wrapped Julia methods which are
solved in Julia; that is, these parts of the calculation are sent to Julia for execution.

The functions for the calculation of the parameterization process and dynamics are in the folder named
"Parameterization". In this folder, the file "GeneralDynamics.py" contains the functions to solve the Lindblad-type
master equation. Currently, the master equation is solved via the matrix exponential.
To improve the efficiency, the
calculation of the dynamics via the matrix exponential is executed in Julia, and when the calculation is finished,
the data is sent back to Python for further use. The file "NonDynamics.py" contains the non-dynamical methods
for the parameterization, which currently include the description via Kraus operators. Details and the usage of
these functions will be thoroughly introduced in Sec.~\ref{sec:para}.

The functions for the calculation of the metrological tools and bounds are distributed in two folders named
"AsymptoticBound" and "BayesianBound". In the folder "AsymptoticBound", the file "CramerRao.py" contains the
functions to calculate the quantities related to the quantum Cram\'{e}r-Rao bounds, and the file "Holevo.py"
contains those to calculate the Holevo-type quantum Cram\'{e}r-Rao bound. In the folder "BayesianBound", the file
"BayesCramerRao.py" contains the functions to calculate several versions of the Bayesian classical and quantum
Cram\'{e}r-Rao bounds, and "ZivZakai.py" contains the function to calculate the quantum Ziv-Zakai bound. The file
"BayesEstimation.py" contains the functions to execute the Bayesian estimation and the maximum likelihood estimation.
The aforementioned metrological tools and the corresponding rules to call them will be given in Sec.~\ref{sec:tools}.

The functions for the calculation of metrological resources are placed in the folder named "Resource". In this folder,
the file "Resource.py" currently contains two types of resources, the spin squeezing and the target time to reach a
given value of an objective function, which will be thoroughly introduced in Sec.~\ref{sec:resource}.
The resources
that can be readily calculated via QuTiP~\cite{Johansson2012,Johansson2013} are not included at this moment.

The scripts for the control optimization, state optimization, measurement optimization, and comprehensive
optimization are in the folders named "ControlOpt", "StateOpt", "MeasurementOpt", and "ComprehensiveOpt",
respectively. The structures of these folders are basically the same, and here we only take the folder "ControlOpt"
as a demonstration to explain the basic structure. In this folder, the file "ControlStruct.py" contains a
function named {\fontfamily{bch}\selectfont\small\itshape ControlOpt()} and a class named {\fontfamily{bch}\selectfont\small\itshape ControlSystem()}. The function
{\fontfamily{bch}\selectfont\small\itshape ControlOpt()} is used to receive the initialized parameters given by the user, and then delivers them
to one of the classes in the files "GRAPE\_Copt.py", "PSO\_Copt.py", "DE\_Copt.py", and "DDPG\_Copt.py" according to
the user's choice of the algorithm. These classes inherit the attributes in {\fontfamily{bch}\selectfont\small\itshape ControlSystem()}. Then, based on
the choice of the objective function, the related parts in {\fontfamily{bch}\selectfont\small\itshape ControlSystem()} are called in these classes to
further run the scripts in Julia. {\fontfamily{bch}\selectfont\small\itshape ControlSystem()} contains all the common parts that different algorithms
would use and the interface with the scripts in Julia. This design avoids repeated code in the algorithm
files and keeps the extension neat and simple when more algorithms need to be included in the future.
The usage of
QuanEstimation for control optimization, state optimization, measurement optimization, and comprehensive optimization,
as well as the corresponding illustrations, will be thoroughly discussed in Secs.~\ref{sec:control_opt}, \ref{sec:state_opt},
\ref{sec:measurement_opt}, and~\ref{sec:comprehensive_opt}, respectively.

The scripts for the adaptive measurement are in the folder named "AdaptiveScheme". In this folder, the file "Adaptive.py"
contains the class to execute the adaptive measurement scheme, and "Adapt\_MZI.py" contains the class to generate
online and offline adaptive schemes in the Mach-Zehnder interferometer. The details of the adaptive scheme and how to
perform it with QuanEstimation will be given in Sec.~\ref{sec:adapt}.

The folder "Common" contains some common functions that are regularly called in QuanEstimation. Currently it
contains three functions. {\fontfamily{bch}\selectfont\small\itshape SIC()} is used to generate a set of rank-one symmetric informationally complete
positive operator-valued measure. {\fontfamily{bch}\selectfont\small\itshape suN\_generator()} is used to generate a set of su($N$) generators.
{\fontfamily{bch}\selectfont\small\itshape BayesInput()} is used to generate a legitimate form of Hamiltonian (or a set of Kraus operators) and its
derivative, which can be used as the input of some functions in "BayesEstimation.py" and "Adaptive.py".

All the Julia scripts are located in the folder named "JuliaSrc". One design principle of QuanEstimation for the
optimizations is that once the calculation goes into the parts in Julia, it will stay in Julia until all the calculations
are finished and the data generated. Hence, "JuliaSrc" also contains the scripts to calculate the metrological tools and
resources for the sake of internal calling in Julia.
To keep a high extendability, the optimizations are divided into
four elements in Julia, including the scenario of optimization, the algorithm, the parameterization process, and the
objective function, which are distributed in the files "OptScenario.jl", "Algorithm.jl", "Parameterization.jl", and
"ObjectiveFunc.jl" in the folders "OptScenario", "Algorithm", "Parameterization", and "ObjectiveFunc", respectively.
Once the information and parameter settings of all elements are input by the user, they are sent to the file "run.jl",
which is further used to execute the program. As a matter of fact, "JuliaSrc" is also an independent package. If the
users are familiar with Julia, they can directly use the full Julia package.

Similar to other packages, the usage of QuanEstimation requires some other packages to be present in the environment.
In Python it requires the pre-installation of numpy, scipy, sympy, cvxpy, and more\_itertools. In Julia it requires
the pre-installation of LinearAlgebra, Zygote, Convex, SCS, ReinforcementLearning, SparseArrays, DelimitedFiles,
StatsBase, BoundaryValueDiffEq, Random, Trapz, Interpolations, Printf, IntervalSets, StableRNGs, and Flux. The package
can be imported in Python with the following line of code:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
from quanestimation import *
\end{lstlisting}
All the scripts demonstrated in the following are based on this calling form.


\section{Parameterization process}
\label{sec:para}

The parameterization process is a key step in quantum parameter estimation, and in physical terms this process
corresponds to a parameter-dependent quantum dynamics. Hence, the ability to solve the dynamics is an indispensable
element of numerical calculations in quantum parameter estimation.
In QuanEstimation, we mainly focus on the dynamics
governed by the quantum master equation
\begin{align}
\partial_t\rho &=\mathcal{L}\rho \nonumber \\
&=-i[H,\rho]+\sum_i \gamma_i\left(\Gamma_i\rho\Gamma^{\dagger}_i
-\frac{1}{2}\left\{\rho,\Gamma^{\dagger}_i \Gamma_i \right\}\right) \label{eq:mastereq},
\end{align}
where $\rho$ is the evolved density matrix, $H$ is the Hamiltonian of the system, and $\Gamma_i$ and $\gamma_i$
are the $i$th decay operator and decay rate, respectively. The total Hamiltonian $H$ includes two terms, the
free Hamiltonian $H_0(\bold{x})$, which is a function of the parameters $\bold{x}$, and the control Hamiltonian
$H_{\mathrm{c}}$. In quantum parameter estimation, most calculations require the dynamical information
of $\rho$ and its derivatives with respect to $\bold{x}$, which are denoted by $\partial_{\bold{x}}\rho:=
(\partial_{0}\rho,\partial_1\rho,\dots)$ with $\partial_a$ short for $\partial_{x_a}$. Hence, in the package
$\rho$ and $\partial_{\bold{x}}\rho$ can be obtained simultaneously via the codes:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
dynamics = Lindblad(tspan,rho0,H0,dH,
                    decay=[],Hc=[],ctrl=[])
rho,drho = dynamics.expm()
\end{lstlisting}
Here the input {\fontfamily{bch}\selectfont\small\itshape tspan} is an array representing the time points of the evolution and {\fontfamily{bch}\selectfont\small\itshape rho0}
is a matrix representing the initial (probe) state. {\fontfamily{bch}\selectfont\small\itshape H0} is a matrix or a list representing the free
Hamiltonian. It is a matrix when the free Hamiltonian is time-independent and a list (whose length equals
that of {\fontfamily{bch}\selectfont\small\itshape tspan}) when it is time-dependent. {\fontfamily{bch}\selectfont\small\itshape dH} is a list containing the derivatives of
$H_0(\bold{x})$ with respect to $\bold{x}$, i.e., $[\partial_a H_0,\partial_b H_0,\dots]$.
{\fontfamily{bch}\selectfont\small\itshape decay} is a list including
both decay operators and decay rates, and its input rule is {\fontfamily{bch}\selectfont\small\itshape decay=[[Gamma1,gamma1],[Gamma2,gamma2],\dots]},
where {\fontfamily{bch}\selectfont\small\itshape Gamma1} ({\fontfamily{bch}\selectfont\small\itshape Gamma2}) and {\fontfamily{bch}\selectfont\small\itshape gamma1} ({\fontfamily{bch}\selectfont\small\itshape gamma2}) represent $\Gamma_1$
($\Gamma_2$) and $\gamma_1$ ($\gamma_2$), respectively. The default value is empty, which means the dynamics is
unitary. {\fontfamily{bch}\selectfont\small\itshape Hc} is a list of matrices representing the control Hamiltonians, and when it is empty, the
dynamics is only governed by the free Hamiltonian. {\fontfamily{bch}\selectfont\small\itshape ctrl} (default value is empty) is a list of arrays
containing the control amplitudes with respect to the control Hamiltonians in {\fontfamily{bch}\selectfont\small\itshape Hc}. The output {\fontfamily{bch}\selectfont\small\itshape rho}
is a list representing the density matrices in the dynamics. {\fontfamily{bch}\selectfont\small\itshape drho} is also a list, and its $i$th entry is
a list containing all derivatives $\partial_{\bold{x}}\rho$ at the $i$th time interval. The dynamics in the package
is solved by the matrix exponential, i.e., the density matrix at the $j$th time interval is calculated via
$\rho_j=e^{\Delta t_j\mathcal{L}}\rho_{j-1}$ with $\Delta t_j$ a small time interval and $\rho_{j-1}$ the density
matrix at the previous time interval.
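As a standalone illustration of this matrix-exponential propagation (a minimal NumPy/SciPy sketch, independent of the package's internal implementation), one can vectorize $\mathcal{L}$ in column-stacking convention and iterate $\rho_j=e^{\Delta t_j\mathcal{L}}\rho_{j-1}$; the {\fontfamily{bch}\selectfont\small\itshape decay} argument below mirrors the {\fontfamily{bch}\selectfont\small\itshape decay=[[Gamma1,gamma1],\dots]} input structure:

```python
import numpy as np
from scipy.linalg import expm

def liouvillian(H, decay):
    """Vectorized Liouvillian, using vec(A rho B) = (B^T kron A) vec(rho)
    with column-stacking (Fortran) order."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Gamma, gamma in decay:
        GdG = Gamma.conj().T @ Gamma
        L += gamma * (np.kron(Gamma.conj(), Gamma)
                      - 0.5 * (np.kron(I, GdG) + np.kron(GdG.T, I)))
    return L

def evolve(rho0, H, decay, tspan):
    """Propagate rho_j = expm(dt_j * L) rho_{j-1} over the time grid tspan."""
    d = rho0.shape[0]
    L = liouvillian(H, decay)
    rho = rho0.astype(complex)
    for j in range(1, len(tspan)):
        prop = expm((tspan[j] - tspan[j - 1]) * L)
        rho = (prop @ rho.flatten(order="F")).reshape((d, d), order="F")
    return rho
```

For a qubit with $H=\omega\sigma_z/2$ and no decay, the coherence evolves as $\rho_{01}(t)=\rho_{01}(0)e^{-i\omega t}$, which provides a quick sanity check of the propagation.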
$\partial_{\bold{x}}\rho_j$ is solved by the iterative equation
\begin{align}
\partial_{\bold{x}}\rho_j &=\Delta t_j(\partial_{\bold{x}}\mathcal{L})\rho_j
+e^{\Delta t_j \mathcal{L}}(\partial_{\bold{x}}\rho_{j-1}) \nonumber \\
&=-i\Delta t_j[\partial_{\bold{x}}H_0, \rho_j]+e^{\Delta t_j \mathcal{L}}(\partial_{\bold{x}}\rho_{j-1}).
\end{align}
In the package $\Delta t_j$ is automatically obtained by calculating the difference between the $j$th and $(j-1)$th
entries in {\fontfamily{bch}\selectfont\small\itshape tspan}. The numerical accuracy of the equation above is limited by the set of $\{\Delta t_j\}$,
indicating that smaller $\{\Delta t_j\}$ would in general improve the accuracy. However, smaller $\{\Delta t_j\}$
also mean a larger number of calculation steps for a fixed evolution time, resulting in a greater time consumption.
Hence, in practice reasonable values of $\{\Delta t_j\}$ should be chosen to balance the accuracy and the time
consumption.

The calculation of metrological bounds, which will be discussed in the next section, does not rely on calling the
intrinsic dynamics above in the package, as the bounds only require the input of $\rho$ and $\partial_{\bold{x}}\rho$
(and other essential parameters), not any dynamical information. Hence, the dynamics can also be solved by other
packages like QuTiP~\cite{Johansson2012,Johansson2013}.

In certain cases, the parameterization process can be described by some non-dynamical methods, such as the Kraus
operators.
In this case, the parameterized density matrix can be expressed by
\begin{equation}
\rho(\bold{x})=\sum_i K_i(\bold{x})\rho_0 K_i^{\dagger}(\bold{x}),
\label{eq:kraus_opt}
\end{equation}
where $K_i(\bold{x})$ is a Kraus operator satisfying $\sum_{i}K^{\dagger}_i K_i=\openone$ with $\openone$ the
identity operator, and $\rho_0$ is the probe state, which is independent of the unknown parameters. In QuanEstimation,
$\rho$ and $\partial_{\bold{x}}\rho$ obtained from Kraus operators can be solved via the codes:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
rho,drho = Kraus(rho0,K,dK)
\end{lstlisting}
Here {\fontfamily{bch}\selectfont\small\itshape rho0} is a matrix representing the probe state, {\fontfamily{bch}\selectfont\small\itshape K} is a list of matrices with each
entry a Kraus operator, and {\fontfamily{bch}\selectfont\small\itshape dK} is a list whose $i$th entry is also a list representing the derivatives
$\partial_{\bold{x}}K_i$.

The aforementioned functions only calculate $\rho$ and $\partial_{\bold{x}}\rho$ at a fixed point of $\bold{x}$.
However, in Bayesian scenarios, the values of $\rho$ and $\partial_{\bold{x}}\rho$ over a regime
of $\bold{x}$ may be needed. In this case, if the users can provide the specific functions of $H$ and
$\partial_{\bold{x}}H$, or the Kraus operators $\{K_i\}$ and derivatives $\{\partial_{\bold{x}} K_i\}$, the variables
{\fontfamily{bch}\selectfont\small\itshape H}, {\fontfamily{bch}\selectfont\small\itshape dH} (or {\fontfamily{bch}\selectfont\small\itshape K}, {\fontfamily{bch}\selectfont\small\itshape dK}) can be generated by the function
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
H0,dH = BayesInput(x,func,dfunc,
                   channel="dynamics")
\end{lstlisting}
Here {\fontfamily{bch}\selectfont\small\itshape x} is a list of arrays representing the regime of $\bold{x}$.
{\fontfamily{bch}\selectfont\small\itshape H0} is a list of matrices
representing the free Hamiltonian with respect to the values in {\fontfamily{bch}\selectfont\small\itshape x}, and it is multidimensional in the
case that {\fontfamily{bch}\selectfont\small\itshape x} has more than one entry. {\fontfamily{bch}\selectfont\small\itshape dH} is a (multidimensional) list with each entry also a
list representing $\partial_{\bold{x}}H$ with respect to the values in {\fontfamily{bch}\selectfont\small\itshape x}. {\fontfamily{bch}\selectfont\small\itshape func} and
{\fontfamily{bch}\selectfont\small\itshape dfunc} are the handles of the functions {\fontfamily{bch}\selectfont\small\itshape func()} and {\fontfamily{bch}\selectfont\small\itshape dfunc()}, which are defined
by the user to represent $H(\bold{x})$ and $\partial_{\bold{x}}H(\bold{x})$. Notice that the output of
{\fontfamily{bch}\selectfont\small\itshape dfunc()} should also be a list representing $[\partial_0 H,\partial_1 H,\dots]$. The output of
{\fontfamily{bch}\selectfont\small\itshape BayesInput()} can be switched between {\fontfamily{bch}\selectfont\small\itshape H}, {\fontfamily{bch}\selectfont\small\itshape dH} and {\fontfamily{bch}\selectfont\small\itshape K}, {\fontfamily{bch}\selectfont\small\itshape dK} by
setting {\fontfamily{bch}\selectfont\small\itshape channel="dynamics"} or {\fontfamily{bch}\selectfont\small\itshape channel="Kraus"}.
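To make the expected form of {\fontfamily{bch}\selectfont\small\itshape func()} and {\fontfamily{bch}\selectfont\small\itshape dfunc()} concrete, here is a minimal single-parameter example in plain NumPy; the grid assembly at the end only mimics the role of {\fontfamily{bch}\selectfont\small\itshape BayesInput()} and is not the package function itself:

```python
import numpy as np

# Pauli-z matrix for a qubit probe
sz = np.diag([1.0, -1.0]).astype(complex)

def func(x):
    # User-defined Hamiltonian H(x) = x * sigma_z / 2 for a single parameter x
    return 0.5 * x * sz

def dfunc(x):
    # Its derivatives, returned as a list [dH/dx_0, ...]
    return [0.5 * sz]

# Assemble H and dH over a grid of x values, mimicking what
# BayesInput(x, func, dfunc, channel="dynamics") automates for one parameter
x_grid = np.linspace(0.0, 2.0, 5)
H0_list = [func(x) for x in x_grid]
dH_list = [dfunc(x) for x in x_grid]
```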
After calling {\fontfamily{bch}\selectfont\small\itshape BayesInput()},
$\rho$ and $\partial_{\bold{x}}\rho$ can be further obtained via calling {\fontfamily{bch}\selectfont\small\itshape Lindblad()} and {\fontfamily{bch}\selectfont\small\itshape Kraus()}.

\section{Quantum metrological tools}
\label{sec:tools}

In this section, we briefly introduce the metrological tools implemented in QuanEstimation and
demonstrate how to calculate them with our package. Both asymptotic and Bayesian tools are included, such as the
quantum Cram\'{e}r-Rao bounds, the Holevo Cram\'{e}r-Rao bound, Bayesian estimation, and Bayesian versions of the
Cram\'{e}r-Rao bound like the Van Trees bound and the Tsang-Wiseman-Caves bound.

\subsection{Quantum Cram\'{e}r-Rao bounds}
\label{sec:QCRB}

Quantum Cram\'{e}r-Rao bounds~\cite{Helstrom1976,Holevo1982} are among the most renowned metrological tools in quantum
parameter estimation. Let $\rho=\rho(\bold{x})$ be a parameterized density matrix and
$\{\Pi_y\}$ a positive operator-valued measure (POVM); then the covariance matrix
$\mathrm{cov}(\hat{\bold{x}},\{\Pi_y\}):=\sum_y\mathrm{Tr}(\rho\Pi_y)(\hat{\bold{x}}-\bold{x})(\hat{\bold{x}}
-\bold{x})^{\mathrm{T}}$ for the unknown parameters $\bold{x}=(x_0,x_1,\dots)^{\mathrm{T}}$ and the corresponding
unbiased estimators $\hat{\bold{x}}=(\hat{x}_0,\hat{x}_1,\dots)^{\mathrm{T}}$ satisfies the following
inequalities~\cite{Helstrom1976,Holevo1982}
\begin{equation}
\mathrm{cov}\left(\hat{\bold{x}}, \{\Pi_y\}\right)
\geq \frac{1}{n}\mathcal{I}^{-1}\left(\{\Pi_y\}\right)
\geq \frac{1}{n} \mathcal{F}^{-1},
\end{equation}
where $n$ is the repetition of the experiment, $\mathcal{I}$ is the classical Fisher information matrix (CFIM), and
$\mathcal{F}$ is the quantum Fisher information matrix (QFIM).
Note that the estimators $\hat{\bold{x}}$ are in
fact functions of the measurement outcomes $y$, and formally should always be written as $\hat{{\bold{x}}}(y)$.
Still, we drop this explicit dependence on $y$ for conciseness of the formulas. A thorough derivation
of this bound can be found in a recent review~\cite{Liu2020}.

For a discrete probability
distribution $\{p(y|\bold{x})=\mathrm{Tr}(\rho\Pi_y)\}$, the CFIM is defined by
\begin{equation}
\mathcal{I}_{ab}=\sum_{y}\frac{1}{p(y|\bold{x})}[\partial_a p(y|\bold{x})][\partial_b p(y|\bold{x})].
\label{eq:CFIM}
\end{equation}
Here $\mathcal{I}_{ab}$ is short for $\mathcal{I}_{x_a,x_b}$, the $ab$th entry of the CFIM. For a
continuous probability density, the equation above becomes $\mathcal{I}_{ab}=\int \frac{1}{p(y|\bold{x})}
[\partial_a p(y|\bold{x})][\partial_b p(y|\bold{x})]\mathrm{d}y$. The diagonal entry $\mathcal{I}_{aa}$
is the classical Fisher information (CFI) for $x_a$.

The QFIM does not depend on the actual measurement performed, and one can encounter a few equivalent definitions
of this quantity. The one most often used reads
\begin{equation}
\mathcal{F}_{ab}=\frac{1}{2}\mathrm{Tr}(\rho\{L_a, L_b\})
\end{equation}
with $\mathcal{F}_{ab}$ being the $ab$th entry of $\mathcal{F}$ and $L_{a(b)}$ the symmetric logarithmic
derivative (SLD) operator for $x_{a(b)}$. $\{\cdot,\cdot\}$ represents the anti-commutator.
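Eq.~(\ref{eq:CFIM}) is straightforward to implement directly; the following is a minimal NumPy sketch (independent of the package's own functions), together with a single-parameter example where the CFI is known to equal $1$:

```python
import numpy as np

def cfim(p, dp, eps=1e-8):
    """CFIM of a discrete distribution: I_ab = sum_y (d_a p)(d_b p) / p.

    p  : probabilities p(y|x), shape (m,)
    dp : dp[a][y] = d_a p(y|x), shape (n_par, m)
    """
    p = np.asarray(p, dtype=float)
    dp = np.asarray(dp, dtype=float)
    mask = p > eps              # outcomes with vanishing probability are skipped
    return np.einsum("ay,by->ab", dp[:, mask] / p[mask], dp[:, mask])

# Single-parameter example: measuring sigma_x on (|0> + e^{i phi}|1>)/sqrt(2)
# gives p(+|phi) = cos^2(phi/2) and p(-|phi) = sin^2(phi/2); the CFI equals 1.
phi = 0.3
p = [np.cos(phi / 2) ** 2, np.sin(phi / 2) ** 2]
dp = [[-0.5 * np.sin(phi), 0.5 * np.sin(phi)]]
I = cfim(p, dp)
```

In this example the CFI saturates the QFI of the underlying pure state, as expected for the optimal measurement.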
The
SLD operator is Hermitian and determined by the equation
\begin{equation}
\partial_{a}\rho=\frac{1}{2}(\rho L_{a}+L_{a}\rho).
\end{equation}
The mathematical properties of the SLD operator and the QFIM can be found in a recent review~\cite{Liu2020}.
The diagonal entry $\mathcal{F}_{aa}$ is the quantum Fisher information (QFI) for $x_a$.
Utilizing the spectral decomposition $\rho=\sum_{i}\lambda_i |\lambda_i\rangle\langle \lambda_i|$, the
SLD operator can be calculated via the equation
\begin{equation}
\langle\lambda_i|L_{a}|\lambda_j\rangle=\frac{2\langle\lambda_i| \partial_{a}\rho |\lambda_j\rangle}
{\lambda_i+\lambda_j}, \label{eq:SLD_eigen}
\end{equation}
for $\lambda_i$ or $\lambda_j$ not equal to zero. For $\lambda_i=\lambda_j=0$, the corresponding matrix entry of
$L_a$ can be set to zero.

In QuanEstimation, the SLD operator can be calculated via the function:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
SLD(rho,drho,rep="original",eps=1e-8)
\end{lstlisting}
Here the input {\fontfamily{bch}\selectfont\small\itshape rho} is a matrix representing the parameterized density matrix, and {\fontfamily{bch}\selectfont\small\itshape drho}
is a list of matrices representing the derivatives of the density matrix with respect to $\bold{x}$, i.e.,
$[\partial_0\rho,\partial_1\rho,\dots]$. When {\fontfamily{bch}\selectfont\small\itshape drho} only contains one entry ($[\partial_0 \rho]$),
the output of {\fontfamily{bch}\selectfont\small\itshape SLD()} is a matrix ($L_0$), and it is a list ($[L_0,L_1,\dots]$) otherwise. The basis
of the output SLD can be adjusted via the variable {\fontfamily{bch}\selectfont\small\itshape rep}. The default choice {\fontfamily{bch}\selectfont\small\itshape rep="original"}
means the basis is the same as that of the input density matrix.
The other choice is {\fontfamily{bch}\selectfont\small\itshape rep="eigen"},
which means the SLD is written in the eigenspace of the density matrix. Due to the fact that the entries of the SLD
in the kernel are arbitrary, in the package they are simply set to zero for simplicity. The default machine
epsilon is {\fontfamily{bch}\selectfont\small\itshape eps=1e-8}, which can be modified as required. Here the machine epsilon means that if an
eigenvalue of the density matrix is less than the given number ($10^{-8}$ by default), it will be treated as
zero in the calculation of the SLD.

Apart from the SLD operator, the QFIM can also be defined via other types of logarithmic derivatives.
Some widely used ones are the right and left logarithmic derivatives (RLD, LLD)~\cite{Holevo1982,Yuen1973}. The RLD
and LLD are determined by $\partial_{a}\rho=\rho \mathcal{R}_a$ and $\partial_{a}\rho=\mathcal{R}_a^{\dagger}\rho$,
respectively. Utilizing the spectral decomposition, the entries of the RLD and LLD can be calculated as
\begin{align}
\langle\lambda_i| \mathcal{R}_{a} |\lambda_j\rangle
&= \frac{1}{\lambda_i}\langle\lambda_i| \partial_{a}\rho |\lambda_j\rangle,~~\lambda_i\neq 0; \\
\langle\lambda_i| \mathcal{R}_{a}^{\dagger} |\lambda_j\rangle
&= \frac{1}{\lambda_j}\langle\lambda_i| \partial_{a}\rho |\lambda_j\rangle,~~\lambda_j\neq 0.
\end{align}
The corresponding QFIM is $\mathcal{F}_{ab}=\mathrm{Tr}(\rho \mathcal{R}_a \mathcal{R}^{\dagger}_b)$. In QuanEstimation,
the RLD and LLD can be calculated via the functions {\fontfamily{bch}\selectfont\small\itshape RLD()} and {\fontfamily{bch}\selectfont\small\itshape LLD()}. The inputs are the same
as for {\fontfamily{bch}\selectfont\small\itshape SLD()}. Notice that the RLD and LLD only exist when the support of $\rho$ contains the support of
$\partial_a\rho$.
Hence, if this condition is not satisfied, the calculation will be terminated and a warning
will be raised, indicating that {\fontfamily{bch}\selectfont\small\itshape RLD()} and {\fontfamily{bch}\selectfont\small\itshape LLD()} do not exist in this case.

In QuanEstimation, the QFIM and QFI can be calculated via the function:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
QFIM(rho,drho,LDtype="SLD",exportLD=False,
     eps=1e-8)
\end{lstlisting}
Here {\fontfamily{bch}\selectfont\small\itshape LDtype=" "} specifies the type of logarithmic derivative, including {\fontfamily{bch}\selectfont\small\itshape "SLD"}, {\fontfamily{bch}\selectfont\small\itshape "RLD"},
and {\fontfamily{bch}\selectfont\small\itshape "LLD"}. Notice that the values of the QFIM based on the RLD and LLD are actually the same when
the RLD and LLD exist. If {\fontfamily{bch}\selectfont\small\itshape exportLD=True}, apart from the QFIM, the corresponding values of the logarithmic
derivatives in the original basis will also be exported.

In the case that the parameterization is described via the Kraus operators, the QFIM can be calculated via the function:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
QFIM_Kraus(rho0,K,dK,LDtype="SLD",
           exportLD=False,eps=1e-8)
\end{lstlisting}
The input {\fontfamily{bch}\selectfont\small\itshape rho0} is a matrix representing the density matrix of the initial state.
{\\fontfamily{bch}\\selectfont\\small\\itshape K} is a\nlist of matrices with each entry a Kraus operator, and {\\fontfamily{bch}\\selectfont\\small\\itshape dK} is a list whose $i$th entry is itself a list\ncontaining the derivatives $\\partial_{\\bold{x}}K_i$.\n\nThe CFIM and CFI for a fully classical scenario can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nFIM(p,dp,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape p} is an array representing the probability distribution and {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is a list\nwhose $i$th entry is itself a list containing the derivatives of $p_i$ with respect to $\\bold{x}$, i.e.,\n$[\\partial_0 p_i,\\partial_1 p_i,\\dots]$. For a quantum scenario, the CFIM can be calculated by\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nCFIM(rho,drho,M=[],eps=1e-8)\n\\end{lstlisting}\nThe variable {\\fontfamily{bch}\\selectfont\\small\\itshape M} is a list containing a set of POVM elements. The default measurement is a set of rank-one symmetric\ninformationally complete POVM (SIC-POVM)~\\cite{Gour2014,Fuchs2017,Renes2004}. A set of rank-one SIC-POVM\n$\\{\\frac{1}{d}|\\phi_j\\rangle\\langle\\phi_j|\\}^{d^2}_{j=1}$ satisfies $|\\langle\\phi_j|\\phi_k\\rangle|^2=(d\\delta_{jk}+1)\/(d+1)$\nfor any $j$ and $k$ with $|\\phi_j\\rangle$ being a normalized quantum state and $d$ the dimension of the Hilbert space.\nOne way to construct a set of SIC-POVM is to utilize the Weyl\u2013Heisenberg operators~\\cite{Renes2004,Scott2010},\ndefined by $D_{ab}=(-e^{i\\pi\/d})^{ab}A^{a}B^{b}$.
The operators $A$ and $B$ satisfy $A|k\\rangle=|k+1\\rangle$,\n$B|k\\rangle=e^{i2\\pi k\/d}|k\\rangle$ with $\\{|k\\rangle\\}^{d-1}_{k=0}$ an orthonormal basis in the Hilbert space.\nThere exists a normalized fiducial vector $|\\psi\\rangle$ in the Hilbert space such that $\\{\\frac{1}{d}D_{ab}\n|\\psi\\rangle\\langle\\psi|D^{\\dagger}_{ab}\\}^d_{a,b=1}$ is a set of SIC-POVM. In the package, $|\\psi\\rangle$ is\ntaken as the one numerically found by Fuchs et al. in Ref.~\\cite{Fuchs2017}. To see the\nspecific form of the SIC-POVM, the function {\\fontfamily{bch}\\selectfont\\small\\itshape SIC(n)} can be called. The input {\\fontfamily{bch}\\selectfont\\small\\itshape n} is\nthe dimension of the density matrix. Currently, the function {\\fontfamily{bch}\\selectfont\\small\\itshape SIC(n)} is only valid when $n\\leq 151$.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_QFI_illus.pdf}\n\\caption{Demonstration code for the calculation of the QFI and CFI with QuanEstimation.\nThe inset is the evolution of $\\mathcal{F}_{\\omega\\omega}\/t$ (solid blue line) and\n$\\mathcal{I}_{\\omega\\omega}\/t$ (dashed red line). The initial state is $|+\\rangle$.\nThe true value of $\\omega$ ($\\omega_{\\mathrm{tr}}$) is set to be $1$, and\nthe decay rates are set to be $\\gamma_{+}\/\\omega_{\\mathrm{tr}}=0$ and\n$\\gamma_{-}\/\\omega_{\\mathrm{tr}}=0.1$. Planck units are applied here.\n\\label{fig:QFI_code}}\n\\end{figure}\n\nIn both functions {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} and {\\fontfamily{bch}\\selectfont\\small\\itshape CFIM()}, the outputs are real numbers ($\\mathcal{F}_{aa}$ and\n$\\mathcal{I}_{aa}$) in the single-parameter case, namely, when {\\fontfamily{bch}\\selectfont\\small\\itshape drho} only contains one entry, and\nthey are real symmetric or Hermitian matrices in the multi-parameter scenarios.
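As a cross-check of these conventions, the single-parameter output of {\fontfamily{bch}\selectfont\small\itshape QFIM()} can be reproduced with a few lines of numpy that implement the spectral-decomposition formula for the SLD-based QFI, $\mathcal{F}=\sum_{\lambda_i+\lambda_j>0}2|\langle\lambda_i|\partial\rho|\lambda_j\rangle|^2/(\lambda_i+\lambda_j)$. The qubit family below is our own toy illustration, independent of the package:

```python
import numpy as np

def qfi_sld(rho, drho, eps=1e-8):
    """Single-parameter SLD-based QFI via the spectral decomposition of rho."""
    vals, vecs = np.linalg.eigh(rho)
    drho_eig = vecs.conj().T @ drho @ vecs   # derivative in the eigenbasis
    d = len(vals)
    F = 0.0
    for i in range(d):
        for j in range(d):
            if vals[i] + vals[j] > eps:      # skip the kernel contributions
                F += 2 * abs(drho_eig[i, j]) ** 2 / (vals[i] + vals[j])
    return F

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

r, x = 0.8, 0.3   # Bloch radius and the unknown phase (toy values)
rho = 0.5 * (I2 + r * np.cos(x) * s1 + r * np.sin(x) * s3)
drho = 0.5 * r * (-np.sin(x) * s1 + np.cos(x) * s3)
print(qfi_sld(rho, drho))   # analytically the QFI equals r**2 = 0.64
```

For this rotated mixed state the derivative of the Bloch vector is orthogonal to the vector itself, so the known Bloch-representation result gives exactly $r^2$, which the sketch reproduces.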
The parameter ordering of the QFIM and CFIM is determined by the order\nof entries in {\\fontfamily{bch}\\selectfont\\small\\itshape drho}. For example, when {\\fontfamily{bch}\\selectfont\\small\\itshape drho} is $[\\partial_0\\rho,\\partial_1\\rho,\\dots]$,\nthe QFIM and CFIM are given in the basis $\\{x_0,x_1,\\dots\\}$.\n\nFor some specific scenarios, the calculation method in {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} may not be efficient enough. Therefore,\nwe also provide dedicated functions for such scenarios. The first one is the calculation in the\nBloch representation. In this case, the function for the calculation of the QFIM is of the form:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM_Bloch(r,dr,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape r} is an array representing a Bloch vector and {\\fontfamily{bch}\\selectfont\\small\\itshape dr} is a list of arrays representing\nthe derivatives of the Bloch vector with respect to $\\bold{x}$. Gaussian states are very commonly used in quantum metrology, and the\ncorresponding QFIM can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM_Gauss(R,dR,D,dD)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape R} is an array representing the first-order moment, i.e., the expected value\n$\\langle\\bold{R}\\rangle:=\\mathrm{Tr}(\\rho\\bold{R})$ of the vector $\\bold{R}=(q_1,p_1,q_2,p_2,\\dots)^{\\mathrm{T}}$,\nwhere $q_i=(a_i+a^{\\dagger}_i)\/\\sqrt{2}$ and $p_i=(a_i-a^{\\dagger}_i)\/(i\\sqrt{2})$ are the quadrature operators\nwith $a_i$ ($a^{\\dagger}_i$) the annihilation (creation) operator of the $i$th bosonic mode. {\\fontfamily{bch}\\selectfont\\small\\itshape dR} is a list\nwhose $i$th entry is itself a list containing the derivatives $\\partial_{\\bold{x}}\\langle[\\bold{R}]_i\\rangle$. Here\n$[\\cdot]_i$ represents the $i$th entry of the vector.
{\\fontfamily{bch}\\selectfont\\small\\itshape D} is a matrix representing the second-order\nmoment, $D_{ij}=\\langle [\\bold{R}]_i [\\bold{R}]_j+[\\bold{R}]_j[\\bold{R}]_i\\rangle\/2$, and {\\fontfamily{bch}\\selectfont\\small\\itshape dD} is a list\nof matrices representing the derivatives $\\partial_{\\bold{x}}D$. Notice that {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM\\_Bloch()} and\n{\\fontfamily{bch}\\selectfont\\small\\itshape QFIM\\_Gauss()} can only compute the SLD-based QFIM.\n\n\n\\emph{Example.} Now we present an example to show the usage of these functions. Consider the single-qubit\nHamiltonian $H=\\omega\\sigma_3\/2$ with $\\sigma_3$ a Pauli matrix and $\\omega$ the frequency. Take $\\omega$\nas the parameter to be estimated and assume its true value (denoted by $\\omega_{\\mathrm{tr}}$) is 1.\nPlanck units ($\\hbar=1$) are applied in the Hamiltonian. The dynamics is governed by the master equation\n\\begin{eqnarray}\n\\partial_t\\rho&=&-i\\left[H, \\rho\\right]+\\gamma_{+}\\left(\\sigma_+\\rho\\sigma_{-}-\\frac{1}{2}\n\\left\\{\\sigma_{-}\\sigma_{+}, \\rho\\right\\}\\right) \\nonumber\\\\\n& &+\\gamma_{-}\\left(\\sigma_-\\rho\\sigma_{+}-\\frac{1}{2}\\left\\{\\sigma_{+}\\sigma_{-}, \\rho\\right\\}\\right),\n\\label{eq:ME_spon}\n\\end{eqnarray}\nwhere $\\sigma_{\\pm}=(\\sigma_1\\pm i\\sigma_2)\/2$ with $\\sigma_{1}$, $\\sigma_{2}$ also Pauli matrices.\n$\\gamma_{+}$ and $\\gamma_{-}$ are the decay rates. The measurement is taken as\n$\\{|+\\rangle\\langle+|,|-\\rangle\\langle-|\\}$ with\n\\begin{equation}\n|\\pm\\rangle:=\\frac{1}{\\sqrt{2}}(|0\\rangle\\pm|1\\rangle).\n\\end{equation}\nHere $|0\\rangle$ ($|1\\rangle$) is the eigenstate of $\\sigma_3$ with respect to the eigenvalue $1$ ($-1$).
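This example can also be cross-checked with a minimal numpy sketch that is independent of the package: the master equation is integrated with a simple fourth-order Runge-Kutta loop, $\partial_\omega\rho$ is obtained by a central finite difference, and the QFI and CFI follow from their defining formulas. Everything below is our own illustration, not the code of Fig.~\ref{fig:QFI_code}:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sm = (s1 - 1j * s2) / 2          # sigma_-
sp = sm.conj().T                 # sigma_+
gamma = 0.1                      # gamma_- (gamma_+ = 0, as in the example)

def evolve(omega, T, steps=3000):
    """RK4 integration of the master equation from the initial state |+><+|."""
    rho = np.full((2, 2), 0.5, dtype=complex)
    H = omega * s3 / 2
    def rhs(r):
        return (-1j * (H @ r - r @ H)
                + gamma * (sm @ r @ sp - 0.5 * (sp @ sm @ r + r @ sp @ sm)))
    dt = T / steps
    for _ in range(steps):
        k1 = rhs(rho); k2 = rhs(rho + dt * k1 / 2)
        k3 = rhs(rho + dt * k2 / 2); k4 = rhs(rho + dt * k3)
        rho = rho + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return rho

def qfi(rho, drho, eps=1e-8):
    vals, vecs = np.linalg.eigh(rho)
    dr = vecs.conj().T @ drho @ vecs
    return sum(2 * abs(dr[i, j]) ** 2 / (vals[i] + vals[j])
               for i in range(2) for j in range(2) if vals[i] + vals[j] > eps)

T, w, dw = 3.0, 1.0, 1e-5
rho = evolve(w, T)
drho = (evolve(w + dw, T) - evolve(w - dw, T)) / (2 * dw)  # finite difference

plus = np.array([1, 1]) / np.sqrt(2)
p = (plus.conj() @ rho @ plus).real          # probability of outcome "+"
dp = (plus.conj() @ drho @ plus).real
cfi = dp ** 2 / p + dp ** 2 / (1 - p)        # two-outcome CFI

print(qfi(rho, drho), cfi)
```

For this amplitude-damping model the coherence decays as $e^{-\gamma_- t/2}$ while its length is independent of $\omega$, so the QFI reduces to $t^2 e^{-\gamma_- t}$; the sketch reproduces this value, with the CFI of the fixed $\{|\pm\rangle\}$ measurement lying below it.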
The\nspecific code for the calculation of the QFI and CFI is given in Fig.~\\ref{fig:QFI_code}, and the corresponding evolutions\nof $\\mathcal{F}_{\\omega\\omega}\/t$ (solid blue line) and $\\mathcal{I}_{\\omega\\omega}\/t$ (dashed red line)\nare shown in the inset.\n\n\n\\subsection{Holevo Cram\\'{e}r-Rao bound}\n\nThe Holevo Cram\\'{e}r-Rao bound (HCRB) is another useful asymptotic bound in quantum parameter\nestimation, and is in general tighter than the quantum Cram\\'{e}r-Rao bound. The HCRB can be expressed\nas~\\cite{Holevo1973,Rafal2020,Nagaoka1989,Hayashi2008}\n\\begin{equation}\n\\mathrm{Tr}(W\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\}))\\geq \\min_{\\bold{X},V} \\mathrm{Tr}(WV)\n\\end{equation}\nwith $W$ the weight matrix and $V$ a matrix satisfying $V\\geq Z(\\bold{X})$. Here $Z(\\bold{X})$ is a\nHermitian matrix and its $ab$th entry is defined by $[Z(\\bold{X})]_{ab}:=\\mathrm{Tr}(\\rho X_a X_b)$,\nwhere $\\bold{X}=[X_0,X_1,\\cdots]$ is a vector of operators and its $a$th entry is defined by\n$X_a:=\\sum_y (\\hat{x}_a-x_a)\\Pi_y$ with $\\hat{x}_a$ the $a$th entry of $\\hat{\\bold{x}}$.\nFor the local estimator $\\hat{\\bold{x}}$ to be unbiased, $\\bold{X}$ needs to satisfy\n$\\mathrm{Tr}(X_a\\partial_b\\rho)=\\delta_{ab},\\,\\forall a, b$. Here $\\delta_{ab}$ is the Kronecker\ndelta function. An equivalent formulation of the HCRB is~\\cite{Holevo1973,Rafal2020,Nagaoka1989,Hayashi2008}\n\\begin{equation}\n\\min_{\\bold{X},V}\\mathrm{Tr}(WV)\\!=\\!\\min_{\\bold{X}}~\\!\\mathrm{Tr}(W\\mathrm{Re}(Z))\n\\!+\\!\\Vert\\sqrt{W}\\mathrm{Im}(Z)\\sqrt{W}\\Vert,\n\\end{equation}\nwhere $\\mathrm{Re}(Z)$ and $\\mathrm{Im}(Z)$ represent the real and imaginary parts of $Z$, and $\\Vert\\cdot\\Vert$\nis the trace norm, i.e., $\\Vert A\\Vert:=\\mathrm{Tr}\\sqrt{A^{\\dagger}A}$ for a matrix $A$.
Numerically, in\na specific matrix basis $\\{\\lambda_i\\}$ which satisfies $\\mathrm{Tr}(\\lambda_i\\lambda_j)=\\delta_{ij}$, the HCRB\ncan be solved via semidefinite programming as it can be reformulated into a linear semidefinite\nproblem~\\cite{Albarelli2019}:\n\\begin{align}\n& \\min_{\\bold{X},V}~\\mathrm{Tr}(WV), \\nonumber \\\\\n& \\mathrm{subject}~\\mathrm{to}~\n\\begin{cases}\n\\left(\\begin{array}{cc}\nV & \\Lambda^{\\mathrm{T}}R^{\\dagger} \\\\\nR\\Lambda & \\openone\\\\\n\\end{array}\\right)\\geq 0, \\\\\n\\sum_i[\\Lambda]_{ai}\\mathrm{Tr}(\\lambda_i\\partial_b\\rho)=\\delta_{ab}.\n\\end{cases}\n\\end{align}\nHere the $ij$th entry of $\\Lambda$ is obtained by decomposing $\\bold{X}$ in the basis $\\{\\lambda_i\\}$,\n$X_i=\\sum_j [\\Lambda]_{ij}\\lambda_j$, and $R$ satisfies $Z=\\Lambda^{\\mathrm{T}}R^{\\dagger}R\\Lambda$.\nThe semidefinite program can be solved by the package CVXPY~\\cite{Diamond2016,Agrawal2018} in Python\nand Convex~\\cite{Udell2014} in Julia. In QuanEstimation, the HCRB can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nHCRB(rho,drho,W,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape W} is the weight matrix and {\\fontfamily{bch}\\selectfont\\small\\itshape rho}, {\\fontfamily{bch}\\selectfont\\small\\itshape drho} have been introduced\npreviously. Since $Z_{aa}$ is equivalent to the variance of the unbiased observable $O:=\\sum_y\\hat{x}_a\\Pi_y$\n[the unbiased condition is $\\mathrm{Tr}(\\rho O)=x_a$], i.e., $Z_{aa}=\\mathrm{Tr}(\\rho O^2)-[\\mathrm{Tr}(\\rho O)]^2$,\nin the case of single-parameter estimation the optimal $V$ is nothing but $Z_{aa}$ itself. Furthermore, it\ncan be proved that $Z_{aa}\\geq 1\/\\mathcal{F}_{aa}$ and the equality is attainable asymptotically.
Hence,\none can see that $\\min_{X_a}Z_{aa}=1\/\\mathcal{F}_{aa}$, which means the HCRB is equivalent to the quantum\nCram\\'{e}r-Rao bound in single-parameter estimation. Since the QFI can be computed far more efficiently,\nwhenever {\\fontfamily{bch}\\selectfont\\small\\itshape drho} has only one entry, a call to {\\fontfamily{bch}\\selectfont\\small\\itshape HCRB()} is automatically redirected to\n{\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} in the package. Similarly, if $W$ is a rank-one matrix, the HCRB reduces to\n$\\mathrm{Tr}(W\\mathcal{F}^{-1})$, and thus in this case the calculation of the HCRB is also replaced by\nthe calculation of the QFIM.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.0cm]{Fig_HCRB.pdf}\n\\caption{Time evolution of $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (solid red line),\n$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ (dashed black line) and the HCRB (dash-dotted blue\nline) for a two-qubit system with the XX coupling. The probe state is\n$(|00\\rangle+|11\\rangle)\/\\sqrt{2}$. $W=\\openone$ and $\\omega_1=1$. The true values\nof $\\omega_2$ and $g$ are $1$ and $0.1$, respectively. The decay rates\n$\\gamma_1=\\gamma_2=0.05\\omega_1$. The POVM for $\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nis $\\{\\Pi_1$, $\\Pi_2$, $\\openone-\\Pi_1-\\Pi_2\\}$ with $\\Pi_1=0.85|00\\rangle\\langle 00|$\nand $\\Pi_2=0.1|\\!+\\!+\\rangle\\langle+\\!+\\!|$. Planck units are applied here. }\n\\label{fig:HCRB}\n\\end{figure}\n\n\\emph{Example.} Now let us take a two-parameter estimation as an example to demonstrate the calculation of the HCRB with\nQuanEstimation. Consider a two-qubit system with the XX coupling.
The Hamiltonian of this system is\n\\begin{equation}\nH=\\omega_1\\sigma^{(1)}_3+\\omega_2\\sigma^{(2)}_3+g\\sigma^{(1)}_1\\sigma^{(2)}_1,\n\\end{equation}\nwhere $\\omega_{1}$, $\\omega_2$ are the frequencies of the first and second qubit,\n$\\sigma^{(1)}_{i}=\\sigma_{i}\\otimes\\openone$, and $\\sigma^{(2)}_{i}=\\openone\\otimes\\sigma_{i}$ for $i=1,2,3$.\n$\\openone$ is the identity matrix. Planck units are applied here ($\\hbar=1$). The parameters $\\omega_2$ and $g$ are\nthe ones to be estimated. The dynamics is governed by the master equation\n\\begin{equation}\n\\partial_t\\rho=-i\\left[H, \\rho\\right]+\\sum_{i=1,2}\\gamma_i\\left(\\sigma_3^{(i)}\\rho\\sigma_3^{(i)}-\\rho \\right)\n\\end{equation}\nwith $\\gamma_i$ the decay rate for $i$th qubit. The time evolutions of quantum Cram\\'{e}r-Rao bound\n[$\\mathrm{Tr}(W\\mathcal{F}^{-1})$], classical Cram\\'{e}r-Rao bound [$\\mathrm{Tr}(W\\mathcal{I}^{-1})$], and\nHCRB are shown in Fig.~\\ref{fig:HCRB}. The POVM for $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ is\n$\\{\\Pi_1$, $\\Pi_2$, $\\openone-\\Pi_1-\\Pi_2\\}$ with $\\Pi_1=0.85|00\\rangle\\langle 00|$ and\n$\\Pi_2=0.1|\\!++\\rangle\\langle++\\!|$. The probe state is $(|00\\rangle+|11\\rangle)\/\\sqrt{2}$ and the weight\nmatrix $W=\\openone$. 
As shown in this plot, the HCRB (dash-dotted blue line) is tighter than $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\n(solid red line), which is in agreement with the fact that the HCRB is in general tighter than the quantum Cram\\'{e}r-Rao\nbound, unless the quantum Cram\\'{e}r-Rao bound is attainable, in which case the two bounds coincide~\\cite{Rafal2020}.\nThe gap between $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ and the HCRB indicates that the chosen measurement is not optimal.\n\n\\subsection{Bayesian estimation}\n\\label{sec:Bayesian}\n\nBayesian estimation is another widely used method in parameter estimation, in which the prior distribution is updated\nto the posterior distribution via Bayes' rule\n\\begin{equation}\np(\\bold{x}|y)=\\frac{p(y|\\bold{x})p(\\bold{x})}{\\int p(y|\\bold{x})p(\\bold{x})\\mathrm{d}\\bold{x}},\n\\label{eq:Bayes_posterior}\n\\end{equation}\nwhere $p(\\bold{x})$ is the current prior distribution, $y$ is the result obtained in practice, and\n$\\int\\mathrm{d}\\bold{x}:=\\int\\mathrm{d}x_0\\int\\mathrm{d}x_1\\cdots$. The prior distribution is then updated with\n$p(\\bold{x}|y)$, and the estimated value of $\\bold{x}$ is obtained via a reasonable estimator, such as the\nexpected value $\\hat{\\bold{x}}=\\int\\bold{x} p(\\bold{x}|y)\\mathrm{d}\\bold{x}$ or the maximum a posteriori\nestimation (MAP), $\\hat{\\bold{x}}=\\mathrm{argmax}_{\\bold{x}}\\,p(\\bold{x}|y)$.\n\nIn QuanEstimation, the Bayesian estimation can be performed via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\npout,xout = Bayes(x,p,rho,y,M=[],\n estimator=\"mean\",savefile=False)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape x} is a list of arrays representing the regimes of $\\bold{x}$, which is the same as in the\nfunction {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} discussed in Sec.~\\ref{sec:para}.
Notice that in the package all the\ncalculations of the integrals over the prior distributions are performed discretely. Hence, for now the input prior\ndistribution is required to be an array, instead of a continuous function. {\\fontfamily{bch}\\selectfont\\small\\itshape p} is an array representing the\nvalues of $p(\\bold{x})$ with respect to $\\bold{x}$. It is multidimensional in the case of multiparameter estimation, i.e.,\nwhen {\\fontfamily{bch}\\selectfont\\small\\itshape x} contains at least two arrays. The input {\\fontfamily{bch}\\selectfont\\small\\itshape rho} is a (multidimensional) list of matrices\nrepresenting the values of the density matrix with respect to all values of $\\bold{x}$, which can be alternatively generated\nvia the function {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} if specific functions of $H$ and $\\partial_{\\bold{x}}H$ on $\\bold{x}$ can be\nprovided. {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]} is a list of matrices representing a set of POVM and its default setting is a SIC-POVM.\n{\\fontfamily{bch}\\selectfont\\small\\itshape y} is an array representing the results obtained in an experiment. Each result is the index of the corresponding\nPOVM operator in {\\fontfamily{bch}\\selectfont\\small\\itshape M}, i.e., an integer between 0 and $d-1$ with $d$ the number of\nPOVM elements. The type of estimator can be set via {\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\" \"} and currently it has two choices. When\n{\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\"mean\"} the estimator is the expected value, and when {\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\"MAP\"} the estimator\nis the MAP. The outputs {\\fontfamily{bch}\\selectfont\\small\\itshape pout} (a multidimensional array) and {\\fontfamily{bch}\\selectfont\\small\\itshape xout} (an array) are the final\nposterior distribution and the estimated value of $\\bold{x}$ obtained via the chosen estimator.
When {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True},\ntwo files \"pout.npy\" and \"xout.npy\" will be generated, which include the updated $p(\\bold{x})$ and the corresponding optimal\n$\\bold{x}$ in all rounds. If the users call this function in the full-Julia package, the output files are \"pout.csv\"\nand \"xout.csv\".\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_Bayes_est.pdf}\n\\caption{Iteration of the posterior distribution by Bayes' rule. The inset shows the change\nof the estimated value as a function of the iteration for MAP (solid red line), MLE (dashed blue line),\nand expectation (dash-dotted green line). The dotted black line represents the true value.}\n\\label{fig:bayes_mle}\n\\end{figure}\n\n\\emph{Example.} Now let us consider a simple example with the Hamiltonian\n\\begin{equation}\nH = \\frac{\\kappa\\omega_0}{2}(\\sigma_1\\cos x + \\sigma_3\\sin x),\n\\label{eq:Bayes_demo}\n\\end{equation}\nwhere $x$, $\\kappa$ are two dimensionless parameters and $x$ is taken as the unknown one. Planck units are applied here\n($\\hbar=1$) and $\\omega_0$ is set to be 1. The initial state is taken as $|+\\rangle$ and the target time $\\omega_0 T=1$.\nThe prior distribution is assumed to be uniform in the regime $[0,\\pi\/2]$. The measurement is\n$\\{|+\\rangle\\langle +|,|-\\rangle\\langle-|\\}$. The results in the experiment are simulated by random sampling according\nto the probabilities $p(\\pm|x)=\\langle\\pm|\\rho|\\pm\\rangle$ with respect to the value $x=\\pi\/4$. As shown in\nFig.~\\ref{fig:bayes_mle}, with the growth of the iteration number, the deviation decreases monotonically and the estimated\nvalue (center value of the distribution) approaches $\\pi\/4$, which can also be confirmed by the convergence of the\nestimated value (solid red line) shown in the inset.
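The discrete update loop that {\fontfamily{bch}\selectfont\small\itshape Bayes()} performs can be sketched directly on a grid. The snippet below is our own illustration with a simplified stand-in likelihood, $p(0|x)=\cos^2 x$ and $p(1|x)=\sin^2 x$, and a fixed outcome record instead of simulated dynamics; it shows the posterior concentrating around the value $\pi/4$:

```python
import numpy as np

x = np.linspace(0.01, np.pi / 2 - 0.01, 500)   # grid for the unknown parameter
dx = x[1] - x[0]
p = np.ones_like(x)
p /= p.sum() * dx                              # uniform prior, normalized on the grid

# stand-in two-outcome likelihood (hypothetical model, for illustration only)
lik = [np.cos(x) ** 2, np.sin(x) ** 2]

for y in [0, 1] * 50:                          # fixed, balanced outcome record
    p = lik[y] * p                             # Bayes' rule: posterior ~ likelihood * prior
    p /= p.sum() * dx                          # renormalize after each update

x_map = x[np.argmax(p)]                        # MAP estimator
x_mean = (x * p).sum() * dx                    # expected-value estimator
print(x_map, x_mean)                           # both approach pi/4
```

A balanced record of the two outcomes makes the accumulated likelihood $\propto(\cos^2 x\,\sin^2 x)^{50}$, which peaks at $\pi/4$, so both estimators converge there as in the figure.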
In fact, maximum likelihood estimation (MLE)\nprovides a similar performance here by maximizing the likelihood function,\n$\\hat{\\bold{x}}=\\mathrm{argmax}_{\\bold{x}}\\,\\prod_i p(y_i|\\bold{x})$ (dashed blue line in the inset). In QuanEstimation,\nthis MLE can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nLout,xout = MLE(x,rho,y,M=[],savefile=False)\n\\end{lstlisting}\nWhen {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, two files \"Lout.npy\" and \"xout.npy\" will be generated including all the data in the\niterations.\n\nIn Bayesian estimation, another useful tool is the average Bayesian cost~\\cite{Robert2007} with a quadratic cost function,\nwhich is defined by\n\\begin{equation}\n\\bar{C}:=\\int p(\\bold{x})\\sum_y p(y|\\bold{x})(\\bold{x}-\\hat{\\bold{x}})^{\\mathrm{T}}\nW(\\bold{x}-\\hat{\\bold{x}})\\,\\mathrm{d}\\bold{x}\n\\end{equation}\nwith $W$ the weight matrix. In QuanEstimation, this average Bayesian cost can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBayesCost(x,p,xest,rho,M,W=[],eps=1e-8)\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same as in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. {\\fontfamily{bch}\\selectfont\\small\\itshape xest} is a list of arrays\nrepresenting the estimator $\\hat{\\bold{x}}$. The $i$th entry of each array in {\\fontfamily{bch}\\selectfont\\small\\itshape xest} represents the estimator\nwith respect to the $i$th result.
In the single-parameter scenario, $W$ is set to 1 regardless of the input.\nThe average Bayesian cost satisfies the inequality~\\cite{Rafal2020}\n\\begin{equation}\n\\bar{C}\\geq\\int p(\\bold{x})\\left(\\bold{x}^{\\mathrm{T}}W\\bold{x}\\right)\\mathrm{d}\\bold{x}\n-\\sum_{ab}W_{ab}\\mathrm{Tr}\\left(\\bar{\\rho}\\bar{L}_a \\bar{L}_b\\right),\n\\label{eq:BCB}\n\\end{equation}\nwhere $\\bar{\\rho}:=\\int p(\\bold{x})\\rho\\,\\mathrm{d}\\bold{x}$ and the operator $\\bar{L}_a$ is determined by the equation\n$\\int x_a p(\\bold{x})\\rho\\,\\mathrm{d}\\bold{x}=(\\bar{L}_a\\bar{\\rho}+\\bar{\\rho}\\bar{L}_a)\/2$. In the\nsingle-parameter scenario, the inequality above reduces to\n\\begin{equation}\n\\bar{C}\\geq \\int p(x) x^2\\,\\mathrm{d}x-\\mathrm{Tr}(\\bar{\\rho}\\bar{L}^2)\n\\end{equation}\nand represents a bound which is always saturable---the optimal measurement corresponds to the projective measurement in the\neigenbasis of $\\bar{L}$, while the corresponding eigenvalues represent the estimated values of the parameter. If the\nprior mean $\\int p(x) x\\,\\mathrm{d}x$ is shifted to zero, then the inequality above can be rewritten as\n$\\bar{C}\\geq \\delta^2 x-\\mathrm{Tr}(\\bar{\\rho}\\bar{L}^2)$ with $\\delta^2 x:=\\int p(x) x^2\\,\\mathrm{d}x-\\left(\\int p(x)\nx\\,\\mathrm{d}x\\right)^2$ the variance of $x$ under the prior distribution.
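Both the average cost and this single-parameter bound are straightforward to evaluate on a grid. The sketch below (our own toy family of pure qubit states with a zero-mean uniform prior, not the package implementation) computes $\bar{C}$ for the measurement $\{|+\rangle\langle+|,|-\rangle\langle-|\}$ with posterior-mean estimators, solves for $\bar{L}$ in the eigenbasis of $\bar{\rho}$, and checks the inequality:

```python
import numpy as np

a, n = 0.5, 2001
x = np.linspace(-a, a, n); dx = x[1] - x[0]
p = np.ones(n); p /= p.sum() * dx              # zero-mean uniform prior

# family of pure qubit states (|0> + e^{ix}|1>)/sqrt(2)
rho = np.array([[[0.5, 0.5 * np.exp(-1j * xi)],
                 [0.5 * np.exp(1j * xi), 0.5]] for xi in x])

# measurement {|+>,|->} and the posterior-mean estimator for each outcome
plus = np.array([1, 1]) / np.sqrt(2); minus = np.array([1, -1]) / np.sqrt(2)
py_x = np.array([[(v.conj() @ r @ v).real for v in (plus, minus)] for r in rho])
xhat = [(x * p * py_x[:, y]).sum() / (p * py_x[:, y]).sum() for y in (0, 1)]

# average Bayesian cost (W = 1 in the single-parameter case)
Cbar = sum((p * py_x[:, y] * (x - xhat[y]) ** 2).sum() * dx for y in (0, 1))

# lower bound: int p x^2 dx - Tr(rho_bar Lbar^2), with Lbar solved in the
# eigenbasis of rho_bar from (Lbar rho_bar + rho_bar Lbar)/2 = int x p rho dx
rho_bar = (p[:, None, None] * rho).sum(0) * dx
A_bar = (x[:, None, None] * p[:, None, None] * rho).sum(0) * dx
vals, vecs = np.linalg.eigh(rho_bar)
L_eig = 2 * (vecs.conj().T @ A_bar @ vecs) / (vals[:, None] + vals[None, :])
Lbar = vecs @ L_eig @ vecs.conj().T
bound = (p * x ** 2).sum() * dx - np.trace(rho_bar @ Lbar @ Lbar).real
print(Cbar, bound)   # Cbar is lower-bounded by `bound`
```

Here both posterior means vanish by symmetry, so $\bar{C}$ equals the prior variance $a^2/3$, which indeed stays above the computed bound.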
In QuanEstimation, the bound given in Eq.~(\\ref{eq:BCB})\ncan be calculated via the following function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCB(x,p,rho,W=[],eps=1e-8)\n\\end{lstlisting}\nHere the inputs {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same as in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} and {\\fontfamily{bch}\\selectfont\\small\\itshape BayesCost()}.\n{\\fontfamily{bch}\\selectfont\\small\\itshape W} represents the weight matrix and the default value is the identity matrix.\n\n\\subsection{Bayesian Cram\\'{e}r-Rao bounds}\n\nIn the Bayesian scenarios, the quantum Cram\\'{e}r-Rao bounds and the Holevo Cram\\'{e}r-Rao bound are not\nappropriate for characterizing the ultimate precision limits\nas they are ignorant of the prior information. Instead, Bayesian Cram\\'{e}r-Rao bounds can be used. In these scenarios,\nthe covariance matrix is redefined as\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\!=\\!\\int \\!p(\\bold{x})\\sum_y\\mathrm{Tr}(\\rho\\Pi_y)\n(\\hat{\\bold{x}}\\!-\\!\\bold{x})(\\hat{\\bold{x}}\\!-\\!\\bold{x})^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\end{equation}\nwhere the integral $\\int\\mathrm{d}\\bold{x}:=\\iiint\\mathrm{d}x_0\\mathrm{d}x_1\\cdots$. In such cases, one version of\nthe Bayesian Cram\\'{e}r-Rao bound (BCRB) is of the form\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\left(B\\mathcal{I}^{-1}B+\\bold{b}\\bold{b}^{\\mathrm{T}}\\right)\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type1}\n\\end{equation}\nwhere $\\mathcal{I}$ is the CFIM, and $\\bold{b}=(b(x_0),b(x_1),\\dots)^{\\mathrm{T}}$ is the vector of biases,\ni.e., $b(x_a)=\\sum_y\\hat{x}_a p(y|\\bold{x})-x_a$ for each $x_a$ with $p(y|\\bold{x})$ the conditional probability.\n$B$ is a diagonal matrix with the $a$th entry $B_{aa}=1+[\\bold{b}']_{a}$.
Here $\\bold{b}':=(\\partial_0 b(x_0),\n\\partial_1 b(x_1),\\dots)^{\\mathrm{T}}$. The quantum correspondence of this bound (BQCRB) reads\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq\\int p(\\bold{x})\n\\left(B\\mathcal{F}^{-1}B+\\bold{b}\\bold{b}^{\\mathrm{T}}\\right)\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type1}\n\\end{equation}\nwhere $\\mathcal{F}$ is the QFIM of any type. As a matter of fact, there exists a similar version of\nEq.~(\\ref{eq:BCRB_type1}), which can be expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\mathcal{B}\\,\\mathcal{I}_{\\mathrm{Bayes}}^{-1}\n\\,\\mathcal{B}+\\int p(\\bold{x})\\bold{b}\\bold{b}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type2}\n\\end{equation}\nwhere $\\mathcal{I}_{\\mathrm{Bayes}}=\\int p(\\bold{x})\\mathcal{I}\\mathrm{d}\\bold{x}$ is the average CFIM with\n$\\mathcal{I}$ the CFIM defined in Eq.~(\\ref{eq:CFIM}). $\\mathcal{B}=\\int p(\\bold{x})B\\mathrm{d}\\bold{x}$ is the\naverage of $B$.
Its quantum correspondence reads\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\mathcal{B}\\,\\mathcal{F}_{\\mathrm{Bayes}}^{-1}\n\\,\\mathcal{B}+\\int p(\\bold{x})\\bold{b}\\bold{b}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type2}\n\\end{equation}\nwhere $\\mathcal{F}_{\\mathrm{Bayes}}=\\int p(\\bold{x})\\mathcal{F}\\mathrm{d}\\bold{x}$ is the average QFIM with $\\mathcal{F}$\nthe QFIM of any type.\n\nAnother version of the Bayesian Cram\\'{e}r-Rao bound is of the form\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\mathcal{G}\\left(\\mathcal{I}_p+\\mathcal{I}\\right)^{-1}\\mathcal{G}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type3}\n\\end{equation}\nand its quantum correspondence can be expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\mathcal{G}\\left(\\mathcal{I}_p+\\mathcal{F}\\right)^{-1}\\mathcal{G}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type3}\n\\end{equation}\nwhere the entries of $\\mathcal{I}_{p}$ and $\\mathcal{G}$ are defined by\n\\begin{equation}\n[\\mathcal{I}_{p}]_{ab}:=[\\partial_a \\ln p(\\bold{x})][\\partial_b \\ln p(\\bold{x})],\n\\label{eq:BayesIp}\n\\end{equation}\nand $\\mathcal{G}_{ab}:=[\\partial_b\\ln p(\\bold{x})][\\bold{b}]_a+B_{aa}\\delta_{ab}$.
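In the unbiased single-parameter case ($\bold{b}=0$, so $B=\mathcal{B}=1$), the type-1 and type-2 bounds reduce to $\int p\,\mathcal{F}^{-1}\mathrm{d}x$ and $(\int p\,\mathcal{F}\mathrm{d}x)^{-1}$, and the former is never smaller by the convexity of $1/\mathcal{F}$. A small numpy sketch on a toy qubit family (our own illustration, not the package API) makes this concrete:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def qfi(rho, drho, eps=1e-8):
    """Single-parameter SLD-based QFI via the spectral decomposition."""
    vals, vecs = np.linalg.eigh(rho)
    dr = vecs.conj().T @ drho @ vecs
    return sum(2 * abs(dr[i, j]) ** 2 / (vals[i] + vals[j])
               for i in range(2) for j in range(2) if vals[i] + vals[j] > eps)

x = np.linspace(-0.3, 0.3, 601); dx = x[1] - x[0]
p = np.ones_like(x); p /= p.sum() * dx          # uniform prior

# toy family rho(x) = (I + x*sigma1 + 0.5*sigma3)/2, so drho/dx = sigma1/2
F = np.array([qfi(0.5 * (np.eye(2) + xi * s1 + 0.5 * s3), 0.5 * s1) for xi in x])

type1 = (p * (1 / F)).sum() * dx                # unbiased type-1 BQCRB
type2 = 1 / ((p * F).sum() * dx)                # unbiased type-2 BQCRB
print(type1, type2)                             # type1 >= type2 by convexity
```

Here $\mathcal{F}$ varies with $x$, so the two averaged bounds differ slightly, with the type-1 value above the type-2 value as expected.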
The derivations and thorough\ndiscussions of these bounds will be presented in a forthcoming paper.\n\nThe functions in QuanEstimation to calculate $\\mathcal{I}_{\\mathrm{Bayes}}$ and $\\mathcal{F}_{\\mathrm{Bayes}}$ are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCFIM(x,p,rho,drho,M=[],eps=1e-8)\nBQFIM(x,p,rho,drho,LDtype=\"SLD\",eps=1e-8)\n\\end{lstlisting}\nAnd the functions for the calculations of the BCRBs and BQCRBs are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCRB(x,p,dp,rho,drho,M=[],b=[],db=[],\n btype=1,eps=1e-8)\nBQCRB(x,p,dp,rho,drho,b=[],db=[],btype=1,\n LDtype=\"SLD\",eps=1e-8)\n\\end{lstlisting}\nThe inputs {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same as in the function {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is\na (multidimensional) list of arrays representing the derivatives of the prior distribution, which is only needed when\n{\\fontfamily{bch}\\selectfont\\small\\itshape btype=3}; for {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1} and {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2} it can be set to {\\fontfamily{bch}\\selectfont\\small\\itshape []}.\n{\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} are (multidimensional) lists representing the values of $\\rho$ and\n$\\partial_{\\bold{x}}\\rho$.
For example, if the input {\\fontfamily{bch}\\selectfont\\small\\itshape x} includes three arrays, which are the values of $x_0$,\n$x_1$ and $x_2$ for the integral, then the $ijk$th entries of {\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} are, respectively, a matrix $\\rho$ and\na list $[\\partial_{0}\\rho,\\partial_{1}\\rho,\\partial_{2}\\rho]$ with respect to the values $[x_0]_i$, $[x_1]_j$, and $[x_2]_k$.\nHere $[x_0]_i$, $[x_1]_j$, and $[x_2]_k$ represent the $i$th, $j$th, and $k$th value in the first, second, and\nthird array in {\\fontfamily{bch}\\selectfont\\small\\itshape x}. If the users can provide specific functions of $H$ and\n$\\partial_{\\bold{x}}H$ on $\\bold{x}$, {\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} can be alternatively generated via the\nfunctions {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} and {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad()} [or {\\fontfamily{bch}\\selectfont\\small\\itshape Kraus()}]. {\\fontfamily{bch}\\selectfont\\small\\itshape b} and {\\fontfamily{bch}\\selectfont\\small\\itshape db}\nare two lists of arrays representing $\\bold{b}$ and $\\bold{b}'$, and the default settings for both of them are zero vectors\n(unbiased). In {\\fontfamily{bch}\\selectfont\\small\\itshape BCRB()} the measurement is input via {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]}, and if it is empty, a set of rank-one\nSIC-POVM will be automatically applied, similar to that in {\\fontfamily{bch}\\selectfont\\small\\itshape CFIM()}.
Moreover, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape btype=3} represent the calculation of Eqs.~(\\ref{eq:BCRB_type1}), (\\ref{eq:BCRB_type2}), and (\\ref{eq:BCRB_type3}).\nSimilarly, in {\\fontfamily{bch}\\selectfont\\small\\itshape BQCRB()}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2}, and {\\fontfamily{bch}\\selectfont\\small\\itshape btype=3} represent the\ncalculation of Eqs.~(\\ref{eq:BQCRB_type1}), (\\ref{eq:BQCRB_type2}) and (\\ref{eq:BQCRB_type3}). Similar to {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()},\n{\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"} here is the type of logarithmic derivatives, including three choices: {\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}. Ref.~\\cite{Liu2016} provided an optimal biased bound based on the type-1 BQCRB in the case of\nsingle-parameter estimation, which can be calculated in QuanEstimation via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nOBB(x,p,dp,rho,drho,d2rho,LDtype=\"SLD\",\n eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is an array containing the derivatives $\\partial_x p$. {\\fontfamily{bch}\\selectfont\\small\\itshape d2rho} is a list\ncontaining the second-order derivative of the density matrix with respect to the unknown parameter.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8cm]{Fig_bayes.pdf}\n\\caption{(a) The performance of classical Bayesian bounds, including the BCRB of type 1 (solid\nred line), type 2 (dashed green line), type 3 (dotted blue line), and the VTB (dash-dotted\nblack line).
(b) The performance of quantum Bayesian bounds, including the BQCRB\nof type 1 (solid red line), type 2 (dashed green line), type 3 (dotted blue line),\nthe QVTB (dash-dotted black line), and the QZZB (solid cyan pentagram line). The parameters\n$\\mu=0$ and $\\kappa=\\pi\/2$ in the plots. Planck units are applied here. }\n\\label{fig:bayes}\n\\end{figure}\n\nAnother famous Bayesian version of the Cram\\'{e}r-Rao bound was introduced by Van Trees in 1968~\\cite{vanTrees1968},\nand is known as the Van Trees bound (VTB). The VTB is expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\left(\\mathcal{I}_{\\mathrm{prior}}\n+\\mathcal{I}_{\\mathrm{Bayes}}\\right)^{-1},\n\\end{equation}\nwhere $\\mathcal{I}_{\\mathrm{prior}}=\\int p(\\bold{x})\\mathcal{I}_{p}\\mathrm{d}\\bold{x}$ is the CFIM for $p(\\bold{x})$\nwith $\\mathcal{I}_p$ defined in Eq.~(\\ref{eq:BayesIp}). In the derivation, the assumption\n\\begin{equation}\n\\int\\partial_{a}\\left[b(x_b)p(\\bold{x})\\right]\\mathrm{d}\\bold{x}=0\n\\end{equation}\nis applied for all subscripts $a$ and $b$. In 2011, Tsang, Wiseman and Caves~\\cite{Tsang2011} provided a quantum\ncorrespondence of the VTB (QVTB).
The Tsang-Wiseman-Caves bound is of the form
\begin{equation}
\mathrm{cov}(\hat{\bold{x}},\{\Pi_y\})\geq \left(\mathcal{I}_{\mathrm{prior}}
+\mathcal{F}_{\mathrm{Bayes}}\right)^{-1}.
\end{equation}
The functions in QuanEstimation for the calculation of the VTB and QVTB are:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
VTB(x,p,dp,rho,drho,M=[],eps=1e-8)
QVTB(x,p,dp,rho,drho,LDtype="SLD",eps=1e-8)
\end{lstlisting}
Here {\fontfamily{bch}\selectfont\small\itshape dp} is a (multidimensional) list of arrays representing the derivatives of the prior distribution.
For example, if {\fontfamily{bch}\selectfont\small\itshape x} includes 3 arrays, which are the values of $x_0$, $x_1$, and $x_2$ for the integral,
then the $ijk$th entry of {\fontfamily{bch}\selectfont\small\itshape dp} is an array $(\partial_0 p,\partial_1 p,\partial_2 p)$ with respect to the
values $[x_0]_i$, $[x_1]_j$, and $[x_2]_k$.

\emph{Example.} Let us still take the Hamiltonian in Eq.~(\ref{eq:Bayes_demo}) and the initial state $|+\rangle$ as
an example, with $x$ the parameter to be estimated. The prior distribution is taken as a Gaussian distribution
\begin{equation}
p(x)=\frac{1}{c\eta\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\eta^2}}
\label{eq:Bayes_prior}
\end{equation}
in a finite regime $[-\pi/2, \pi/2]$, where $\mu$ is the expectation, $\eta$ is the standard deviation, and
$c=\frac{1}{2}\big[\mathrm{erf}(\frac{\pi-2\mu}{2\sqrt{2}\eta})+\mathrm{erf}(\frac{\pi+2\mu}{2\sqrt{2}\eta})\big]$
is the normalization coefficient. Here $\mathrm{erf}(x):=\frac{2}{\sqrt{\pi}}\int^x_0 e^{-t^2}\mathrm{d}t$ is the error
function. The measurement in the classical bounds is taken as a set of SIC-POVM. The performance of the classical and
quantum Bayesian bounds is shown in Figs.~\ref{fig:bayes}(a) and \ref{fig:bayes}(b). 
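As a quick sanity check of the prior above, the normalization coefficient $c$ can be verified numerically. Below is a minimal stand-alone Python sketch (using only NumPy and the standard library, not the package itself; the values of $\mu$, $\eta$, and the grid size are arbitrary illustrative choices):

```python
import numpy as np
from math import erf, sqrt, pi

mu, eta = 0.0, 0.2   # illustrative mean and standard deviation

# normalization coefficient of the Gaussian truncated to [-pi/2, pi/2]
c = 0.5*(erf((pi - 2*mu)/(2*sqrt(2)*eta)) + erf((pi + 2*mu)/(2*sqrt(2)*eta)))

# discretized prior on the truncated regime
x = np.linspace(-pi/2, pi/2, 10001)
p = np.exp(-(x - mu)**2/(2*eta**2))/(c*eta*sqrt(2*pi))

# trapezoidal rule: the prior should integrate to 1 on the regime
dx = x[1] - x[0]
norm = dx*(0.5*p[0] + p[1:-1].sum() + 0.5*p[-1])
```

Grids of this kind (the arrays `x`, `p`, and the corresponding derivative `dp`) are exactly what the Bayesian-bound functions take as input.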
As shown in Fig.~\ref{fig:bayes}(a),
in this case the BCRB of type 1 (solid red line) and type 2 (dashed green line) are tighter than type 3 (dotted blue
line) and the VTB (dash-dotted black line) when the deviation $\eta$ is small. With the increase of $\eta$, the BCRB of type 1
and type 3 coincide with each other, as do the BCRB of type 2 and the VTB. Furthermore, the BCRB of type 1 and type 3 are always
tighter than type 2 and the VTB in this example. The performance of the quantum Bayesian bounds is similar, as shown in
Fig.~\ref{fig:bayes}(b). The BQCRB of type 1 (solid red line) and type 2 (dashed green line) are tighter than type 3
(dotted blue line) and the QVTB (dash-dotted black line) when $\eta$ is small, and the BQCRB of type 1 (type 2) and type 3
(QVTB) coincide with each other for a large $\eta$.

\subsection{Quantum Ziv-Zakai bound}

Apart from the Cram\'{e}r-Rao bounds, the Ziv-Zakai bound is another useful bound in Bayesian scenarios. It was
first provided by Ziv and Zakai in 1969~\cite{Ziv1969} for single-parameter estimation and then extended to
the linear combination of multiple parameters by Bell et al.~\cite{Bell1997}, which is also referred to
as the Bell-Ziv-Zakai bound. In 2012, Tsang provided a quantum correspondence of the Ziv-Zakai bound~\cite{Tsang2012}
(QZZB), and in 2015 Berry et al.~\cite{Berry2015} provided a quantum correspondence of the Bell-Ziv-Zakai bound.
In the QZZB, the variance $\mathrm{var}(\hat{x},\{\Pi_y\})$, a diagonal entry of the covariance matrix, satisfies the
following inequality
\begin{eqnarray}
\mathrm{var}(\hat{x},\{\Pi_y\}) &\geq & \frac{1}{2}\int_0^\infty \mathrm{d}\tau\tau
\mathcal{V}\int_{-\infty}^{\infty} \mathrm{d}x\min\!\left\{p(x), p(x+\tau)\right\} \nonumber \\
& & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right),
\end{eqnarray}
where $||\cdot||$ is the trace norm. 
$\mathcal{V}$ is the "valley-filling" operator
satisfying $\mathcal{V}f(\tau)=\max_{h\geq 0}f(\tau+h)$. In the numerical calculations, the prior distribution has
to be limited or truncated to a finite regime $[\alpha,\beta]$, i.e., $p(x)=0$ when $x>\beta$ or $x<\alpha$, and
then the QZZB reduces to
\begin{eqnarray}
\mathrm{var}(\hat{x},\{\Pi_y\}) &\geq & \frac{1}{2}\int_0^{\beta-\alpha}\mathrm{d}\tau\tau
\mathcal{V}\int_{\alpha}^{\beta}\mathrm{d}x\min\left\{p(x), p(x+\tau)\right\} \nonumber \\
& & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right).
\end{eqnarray}
The function in QuanEstimation for the calculation of the QZZB is:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
QZZB(x,p,rho,eps=1e-8)
\end{lstlisting}
The performance of the QZZB is also demonstrated with the Hamiltonian in Eq.~(\ref{eq:Bayes_demo}) and the prior
distribution in Eq.~(\ref{eq:Bayes_prior}), as shown in Fig.~\ref{fig:bayes}(b). In this example, its performance
(solid cyan pentagram line) is worse than that of the BQCRB and QVTB. However, this tightness relation may change
dramatically in other systems or with other prior distributions. Hence, in a specific scenario, using QuanEstimation
to perform a thorough comparison is a good way to find the tightest bound for the scheme design.

\section{Metrological resources}
\label{sec:resource}

The improvement of precision usually means a higher consumption of resources. For example, repeating the experiment
makes the deviation of the unknown parameter scale as $1/\sqrt{n}$ in theory, with $n$ the number of repetitions.
The repetition number or the total time is thus the resource responsible for this improvement.
Constraints on quantum resources are an important aspect in the study of quantum parameter estimation, and are crucial
to reveal the quantum advantage achievable in practical protocols. 
The numerical calculation of some typical resources, such as various types of entropy and the concurrence, is
already available in QuTiP, and hence does not need to be re-implemented in QuanEstimation. Currently, two additional
metrological resources, spin squeezing and the time to reach a given precision limit, are provided in the package.
The spin squeezing can be calculated via the function:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
SpinSqueezing(rho,basis="Dicke",output="KU")
\end{lstlisting}
Here the input {\fontfamily{bch}\selectfont\small\itshape rho} is a matrix representing the state. The basis of the state can be adjusted via
{\fontfamily{bch}\selectfont\small\itshape basis=" "}. The two options {\fontfamily{bch}\selectfont\small\itshape "Dicke"} and {\fontfamily{bch}\selectfont\small\itshape "Pauli"} represent the Dicke basis
and the original basis of each spin, respectively. {\fontfamily{bch}\selectfont\small\itshape basis="Pauli"} here is equivalent to choosing {\fontfamily{bch}\selectfont\small\itshape basis="uncoupled"}
in the function {\fontfamily{bch}\selectfont\small\itshape jspin()} in QuTiP. 
Two types of spin squeezing can be calculated in this function.
{\fontfamily{bch}\selectfont\small\itshape output="KU"} means the output is the one given by Kitagawa and Ueda~\cite{Kitagawa1993}, and
{\fontfamily{bch}\selectfont\small\itshape output="WBIMH"} means the output is the one given by Wineland et al.~\cite{Wineland1992}.

The time to reach a given precision limit can be calculated via the function:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
TargetTime(f,tspan,func,*args,**kwargs)
\end{lstlisting}
Notice that {\fontfamily{bch}\selectfont\small\itshape Lindblad()} and {\fontfamily{bch}\selectfont\small\itshape Lindblad.expm()} should be called before using this function.
The input {\fontfamily{bch}\selectfont\small\itshape f} is a float number representing the given value of the precision limit. The time is searched
within the regime defined by the input {\fontfamily{bch}\selectfont\small\itshape tspan} (an array). {\fontfamily{bch}\selectfont\small\itshape func} is the handle of a function
{\fontfamily{bch}\selectfont\small\itshape func()} depicting the precision limit. {\fontfamily{bch}\selectfont\small\itshape *args} contains the corresponding input parameters, in
which {\fontfamily{bch}\selectfont\small\itshape rho} and {\fontfamily{bch}\selectfont\small\itshape drho} should be the output of {\fontfamily{bch}\selectfont\small\itshape Lindblad.expm()}. {\fontfamily{bch}\selectfont\small\itshape **kwargs}
contains the keyword arguments of {\fontfamily{bch}\selectfont\small\itshape func()}. The difference between input parameters and keyword arguments in
QuanEstimation is that the keyword arguments have default values, and thus one does not have to assign values to
them when calling the function. 
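The kind of search that {\fontfamily{bch}\selectfont\small\itshape TargetTime()} performs can be sketched generically as a scan over the time grid. The following stand-alone Python sketch is only an illustration of the idea (the toy precision limit $1/\sqrt{t}$ is an assumption for the example, not the package implementation):

```python
import numpy as np

def target_time(f, tspan, func, *args, **kwargs):
    """Return the first time in tspan at which the precision limit
    computed by func(t, ...) reaches the target value f (illustrative sketch)."""
    for t in tspan:
        if func(t, *args, **kwargs) <= f:
            return t
    return None  # target precision not reached within tspan

# toy example: a precision limit decaying as 1/sqrt(t)
tspan = np.linspace(0.1, 10.0, 100)
t_star = target_time(0.5, tspan, lambda t: 1.0/np.sqrt(t))
```

Here the returned time is the first grid point where the target is met; a finer `tspan` gives a finer answer.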
Currently, all the asymptotic bounds discussed in Sec.~\ref{sec:tools} can be called here.


\section{Control optimization}
\label{sec:control_opt}

Quantum control is a leading approach in quantum metrology to improve the measurement
precision and boost the resistance to decoherence. This is possible thanks to the high controllability of typical
quantum metrological setups. A paradigmatic controllable Hamiltonian is of the form
\begin{equation}
H=H_0(\bold{x})+\sum^K_{k=1}u_k(t) H_k,
\end{equation}
where $H_0(\bold{x})$ is the free Hamiltonian containing the unknown parameters $\bold{x}$ and $H_k$ is the
$k$th control Hamiltonian with the corresponding control amplitude $u_k(t)$. In quantum parameter estimation,
the aim of control is to improve the precision of the unknown parameters. Hence, natural choices for the
objective function $f$ are the various metrological bounds. The quantum Cram\'{e}r-Rao bounds are the easiest to calculate
and hence will typically be the first choice. In single-parameter estimation, the QFI or CFI can be taken as
the objective function, depending on whether the measurement can be optimized or is fixed. In the multiparameter scenario,
the objective function can be $\mathrm{Tr}(W\mathcal{F}^{-1})$ or $\mathrm{Tr}(W\mathcal{I}^{-1})$. In QuanEstimation,
$1/\mathrm{Tr}(W\mathcal{F}^{-1})$ and $1/\mathrm{Tr}(W\mathcal{I}^{-1})$ are used as the objective functions
instead, since the maximization is more precise than the minimization in practice. In the following, this technical
aspect will not be brought up again for conciseness.

Searching for the optimal controls that achieve the maximum or minimum value of an objective function is the core
task in quantum control. Most existing optimization algorithms are capable of providing useful control strategies in
quantum parameter estimation. 
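Whatever algorithm is used, the scalar objective described above is cheap to evaluate once the QFIM (or CFIM) is known. A minimal Python sketch with a hypothetical $2\times 2$ QFIM (the matrix entries are made-up illustrative numbers):

```python
import numpy as np

F = np.array([[4.0, 1.0],
              [1.0, 2.0]])            # a hypothetical 2x2 QFIM
W = np.eye(2)                          # default weight matrix

cost = np.trace(W @ np.linalg.inv(F))  # Tr(W F^{-1}), to be minimized
obj = 1.0/cost                         # the equivalent quantity to maximize
```

Maximizing `obj` is equivalent to minimizing `cost`; the former is the convention used in the package.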
The gradient-based algorithms usually perform well in small-scale systems. For complex
problems where the gradient-based methods are difficult to apply or even fail to work, gradient-free algorithms
are a good alternative. Here we introduce several control algorithms in quantum parameter estimation that have been
added to our package and give some illustrations.

First, we present the specific codes in QuanEstimation for the execution of the control optimization:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
control = ControlOpt(savefile=False,
    method="auto-GRAPE",**kwargs)
control.dynamics(tspan,rho0,H0,dH,Hc,
    decay=[],ctrl_bound=[])
control.QFIM(W=[],LDtype="SLD")
control.CFIM(M=[],W=[])
control.HCRB(W=[])
\end{lstlisting}
The input {\fontfamily{bch}\selectfont\small\itshape tspan} is an array representing the time for the evolution. {\fontfamily{bch}\selectfont\small\itshape rho0} is a matrix
representing the density matrix of the initial state. {\fontfamily{bch}\selectfont\small\itshape H0} is a matrix representing the free Hamiltonian
$H_0(\bold{x})$ and {\fontfamily{bch}\selectfont\small\itshape Hc} is a list containing the control Hamiltonians, i.e., $[H_1,H_2,\dots]$.
{\fontfamily{bch}\selectfont\small\itshape dH} is a list of matrices representing $\partial_{\bold{x}}H_0$. In the case that only one entry
exists in {\fontfamily{bch}\selectfont\small\itshape dH}, the objective functions in {\fontfamily{bch}\selectfont\small\itshape control.QFIM()} and {\fontfamily{bch}\selectfont\small\itshape control.CFIM()}
are the QFI and CFI, and if more than one entry is input, the objective functions are $\mathrm{Tr}(W\mathcal{F}^{-1})$
and $\mathrm{Tr}(W\mathcal{I}^{-1})$. 
Different types of QFIM can be selected as the objective function via the
variable {\fontfamily{bch}\selectfont\small\itshape LDtype=" "}, which includes three options: {\fontfamily{bch}\selectfont\small\itshape "SLD"}, {\fontfamily{bch}\selectfont\small\itshape "RLD"}, and
{\fontfamily{bch}\selectfont\small\itshape "LLD"}. The measurement for the CFI/CFIM is input via {\fontfamily{bch}\selectfont\small\itshape M=[]} in {\fontfamily{bch}\selectfont\small\itshape control.CFIM()} and
the default value is a SIC-POVM. The weight matrix $W$ can be manually input via {\fontfamily{bch}\selectfont\small\itshape W=[]}, and the default
value is the identity matrix.

In some cases, the control amplitudes have to be limited to a regime, for example $[a,b]$, which
can be realized by inputting {\fontfamily{bch}\selectfont\small\itshape ctrl\_bound=[a,b]}. If no value is input, the default regime is $[-\infty,\infty]$.
{\fontfamily{bch}\selectfont\small\itshape decay=[]} is a list of decay operators and corresponding decay rates for the master equation in
Eq.~(\ref{eq:mastereq}), and its input rule is {\fontfamily{bch}\selectfont\small\itshape decay=[[Gamma\_1,gamma\_1],...]}. The default value
for {\fontfamily{bch}\selectfont\small\itshape savefile} is {\fontfamily{bch}\selectfont\small\itshape False}, which means only the controls obtained in the final episode
will be saved in the file named "controls.csv"; if it is set to {\fontfamily{bch}\selectfont\small\itshape True}, the controls obtained
in all episodes will be saved in this file. The values of the QFI, CFI, $\mathrm{Tr}(W\mathcal{F}^{-1})$, or
$\mathrm{Tr}(W\mathcal{I}^{-1})$ in all episodes will be saved, regardless of this setting, in the file named
"f.csv". Another file named "total\_reward.csv" will also be generated to record the total rewards in all episodes
when DDPG is chosen as the optimization method. 
Here the word "episode" refers to a round of
updating of the objective function during the optimization.

\begin{table}[tp]
\begin{tabular}{c|c|c|c}
\hline
\hline
Algorithms & method= & \multicolumn{2}{c}{~**kwargs and default values~}\\
\hline
\multirow{6}{*}{auto-GRAPE} & \multirow{6}{*}{"auto-GRAPE"} & "Adam" & True \\
\multirow{6}{*}{(GRAPE)} & \multirow{6}{*}{("GRAPE")} & "ctrl0" & [] \\
 & & "max\_episode" & 300 \\
 & & "epsilon" & 0.01 \\
 & & "beta1" & 0.90 \\
 & & "beta2" & 0.99 \\
\hline
\multirow{7}{*}{PSO} & \multirow{7}{*}{"PSO"} & "p\_num" & 10 \\
 & & "ctrl0" & [] \\
 & & "max\_episode" & [1000,100] \\
 & & "c0" & 1.0 \\
 & & "c1" & 2.0 \\
 & & "c2" & 2.0 \\
 & & "seed" & 1234 \\
\hline
\multirow{6}{*}{DE} & \multirow{6}{*}{"DE"} & "p\_num" & 10 \\
 & & "ctrl0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "c" & 1.0 \\
 & & "cr" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{5}{*}{DDPG} & \multirow{5}{*}{"DDPG"} & "ctrl0" & [] \\
 & & "max\_episode" & 500 \\
 & & "layer\_num" & 3 \\
 & & "layer\_dim" & 200 \\
 & & "seed" & 1234 \\
\hline
\hline
\end{tabular}
\caption{Available control methods in QuanEstimation and corresponding
default parameter settings. Notice that auto-GRAPE and GRAPE are not
available when {\fontfamily{bch}\selectfont\small\itshape control.HCRB()} is called.}
\label{table:ctrl_paras}
\end{table}

The switch of optimization algorithms can be realized by {\fontfamily{bch}\selectfont\small\itshape method=" "}, and the corresponding parameters
can be set via {\fontfamily{bch}\selectfont\small\itshape **kwargs}. All available algorithms in QuanEstimation are given in Table~\ref{table:ctrl_paras}
together with the corresponding default parameter settings. 
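To give a flavor of how the gradient-free methods in Table~\ref{table:ctrl_paras} work, here is a minimal stand-alone differential-evolution (DE) loop on a toy objective, reusing the hyperparameter names {\fontfamily{bch}\selectfont\small\itshape p\_num}, {\fontfamily{bch}\selectfont\small\itshape c} (mutation scale), {\fontfamily{bch}\selectfont\small\itshape cr} (crossover rate), {\fontfamily{bch}\selectfont\small\itshape max\_episode}, and {\fontfamily{bch}\selectfont\small\itshape seed} from the table. This is an illustration of the generic DE scheme, not the package implementation:

```python
import numpy as np

def de_maximize(f, dim, p_num=10, c=1.0, cr=0.5, max_episode=1000, seed=1234):
    """Toy DE maximizer over the search domain [-1, 1]^dim (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (p_num, dim))
    fit = np.array([f(ind) for ind in pop])
    for _ in range(max_episode):
        for i in range(p_num):
            a, b, d = pop[rng.choice(p_num, 3, replace=False)]
            mutant = a + c*(b - d)            # mutation with scale factor c
            cross = rng.random(dim) < cr      # crossover with rate cr
            cross[rng.integers(dim)] = True   # keep at least one mutant entry
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial > fit[i]:              # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmax(fit)]

# toy objective: recover a fixed "control" vector (made-up target values)
target = np.array([0.3, -0.5, 0.1, 0.7])
best = de_maximize(lambda u: -np.sum((u - target)**2), dim=4)
```

In the package, the population members are candidate control sequences and the fitness is the metrological objective, but the mutation/crossover/selection loop has the same structure.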
Notice that some algorithms may need more than one set
of guessed controls; if not enough sets are input, random-value controls will be generated
automatically to make up the number, and if an excessive number of sets is input, only the required number
of controls will be used. {\fontfamily{bch}\selectfont\small\itshape LDtype="SLD"} is the only choice when {\fontfamily{bch}\selectfont\small\itshape method="GRAPE"}, as the QFIMs
based on the RLD and LLD cannot be taken as the objective function for GRAPE in the package. All the aforementioned
algorithms will be thoroughly introduced and discussed with examples in the following subsections.

\begin{algorithm*}[tp]
\SetArgSty{}
\caption{GRAPE} \label{algorithm:grape}
Initialize the control amplitude $u_k(t)$ for all $t$ and $k$; \\
\For {episode=1, $M$}{
Receive initial state $\rho_{0}$ ($\rho_{\mathrm{in}}$); \\
\For {$t=1, T$}{
Evolve with the control $\rho_t=e^{\Delta t\mathcal{L}_t} \rho_{t-1}$; \\
Calculate the derivatives $\partial_\bold{x}\rho_t=-i\Delta t [\partial_\bold{x} H_0(\bold{x})]^{\times}\rho_t
+e^{\Delta t\mathcal{L}_t} \partial_\bold{x} \rho_{t-1}$; \\
Save $\rho_t$ and $\partial_\bold{x}\rho_t$; \\
\For {$k=1, K$}{
Calculate $\frac{\delta \rho_t}{\delta u_k(t)}=-i\Delta t H^{\times}_k\rho_t$,
$\partial_\bold{x}\!\left(\!\frac{\delta\rho_t}{\delta u_k(t)}\!\right)=
-i\Delta t H^{\times}_k(\partial_{\bold{x}}\rho_t)$; \\
\For {$j=t-1, 1$}{
Calculate $\frac{\delta \rho_t}{\delta u_k(j)}=e^{\Delta t\mathcal{L}_t}
\frac{\delta \rho_{t-1}}{\delta u_k(j)}$,
$\partial_\bold{x}\!\left(\frac{\delta\rho_t}{\delta u_k(j)}\right)=\left(\partial_\bold{x}
e^{\Delta t\mathcal{L}_t}\right)\frac{\delta\rho_{t-1}}{\delta u_k(j)}
+e^{\Delta t\mathcal{L}_t}\partial_\bold{x}\!\left(\frac{\delta \rho_{t-1}}
{\delta u_k(j)}\right)$;}
Save 
$\\frac{\\delta \\rho_t}{\\delta u_k(t)}$, $\\partial_\\bold{x}\\!\\left(\\frac{\\delta\\rho_t}\n{\\delta u_k(t)}\\right)$\nand all $\\frac{\\delta \\rho_t}{\\delta u_k(j)}$,\n$\\partial_\\bold{x}\\!\\left(\\frac{\\delta\\rho_t}{\\delta u_k(j)}\\right)$;\n}}\nCalculate the SLDs for all $\\bold{x}$ and the objective function $f(T)$; \\\\\n{\\For {$t=1, T$}{\n\\For {$k=1, K$}{\nCalculate the gradient $\\frac{\\delta f(T)}{\\delta u_k(t)}$ with $\\frac{\\delta\\rho_T}\n{\\delta u_k(t)}$ and $\\partial_\\bold{x}\\!\\left(\\frac{\\delta \\rho_T}{\\delta u_k(t)}\\right)$; \\\\\nUpdate control $u_k(t)\\!\\leftarrow\\! u_k(t)\\!+\\!\\epsilon\\frac{\\delta f(T)}{\\delta u_k(t)}$.\n}}}\n}\nSave the controls $\\{u_k\\}$ and corresponding $f(T)$.\n\\end{algorithm*}\n\nApart from the QFIM and CFIM, the HCRB can also be taken as the objective function in the case of multiparameter\nestimation, which can be realized by calling {\\fontfamily{bch}\\selectfont\\small\\itshape control.HCRB()}. Notice that auto-GRAPE and GRAPE are\nnot available in {\\fontfamily{bch}\\selectfont\\small\\itshape method=\" \"} here as the calculation of HCRB is performed via optimizations (semidefinite\nprogramming), not direct calculations. 
Due to the equivalence between the HCRB and the quantum Cram\'{e}r-Rao bound in
single-parameter estimation, if {\fontfamily{bch}\selectfont\small\itshape control.HCRB()} is called in this case, the entire program will
be terminated and a reminder will be printed to prompt the users to invoke {\fontfamily{bch}\selectfont\small\itshape control.QFIM()} instead.


\subsection{Gradient ascent pulse engineering}

The gradient ascent pulse engineering algorithm (GRAPE) was developed by Khaneja et al.~\cite{Khaneja2005}
in 2005 for the design of pulse sequences in nuclear magnetic resonance systems, and was then applied to
quantum parameter estimation for the generation of optimal controls~\cite{Liu2017a,Liu2017b}, in which the
gradients of the objective function $f(T)$ at a fixed time $T$ were obtained analytically. In the pseudocode given
in Ref.~\cite{Liu2022}, the propagators between any two time points have to be saved, which would occupy a large
amount of memory during the computation and make it difficult to deal with high-dimensional Hamiltonians or long-time
evolutions. 
To solve this problem, a modified pseudocode is provided in Algorithm~\ref{algorithm:grape}.
In this modified version, after obtaining the evolved state $\rho_t$ and $\partial_{\bold{x}}\rho_t$, the gradient
$\delta\rho_t/\delta u_k(t)$ and its derivatives with respect to $\bold{x}$ are calculated via the equations
\begin{equation}
\frac{\delta\rho_t}{\delta u_k(t)}=-i\Delta t H^{\times}_k(\rho_t)
\end{equation}
with $\Delta t$ a small time interval and $H^{\times}_k(\cdot)=[H_k,\cdot]$ the commutator between $H_{k}$ and other
operators, and
\begin{equation}
\partial_\bold{x}\!\left(\frac{\delta\rho_t}{\delta u_k(t)}\!\right)
=-i\Delta t H^{\times}_k(\partial_\bold{x}\rho_t).
\end{equation}
The gradients $\delta\rho_t/\delta u_k(j)$ ($j<t$) and the corresponding derivatives with respect to $\bold{x}$
are then obtained iteratively via
\begin{equation}
\frac{\delta\rho_t}{\delta u_k(j)}=e^{\Delta t\mathcal{L}_t}\frac{\delta\rho_{t-1}}{\delta u_k(j)}
\end{equation}
and
\begin{equation}
\partial_\bold{x}\!\left(\frac{\delta\rho_t}{\delta u_k(j)}\right)=\left(\partial_\bold{x}
e^{\Delta t\mathcal{L}_t}\right)\frac{\delta\rho_{t-1}}{\delta u_k(j)}
+e^{\Delta t\mathcal{L}_t}\partial_\bold{x}\!\left(\frac{\delta\rho_{t-1}}{\delta u_k(j)}\right),
\end{equation}
so that the propagators between any two time points do not need to be stored. The pseudocode of auto-GRAPE, in
which the gradients are instead obtained with automatic differentiation, is given in
Algorithm~\ref{algorithm:autogrape}.

\begin{algorithm}[tp]
\SetArgSty{}
\caption{auto-GRAPE} \label{algorithm:autogrape}
Initialize the control amplitude $u_k(t)$ for all $t$ and $k$; \\
\For {episode=1, $M$}{
Receive initial state $\rho_{0}$ ($\rho_{\mathrm{in}}$); \\
\For {$t=1, T$}{
Evolve with the control $\rho_t=e^{\Delta t\mathcal{L}_t} \rho_{t-1}$; \\
Calculate the derivatives $\partial_\bold{x} \rho_t=-i\Delta t [\partial_\bold{x} H_0(\bold{x})]^{\times}\rho_t
+e^{\Delta t\mathcal{L}_t} \partial_\bold{x} \rho_{t-1}$; \\
Save $\rho_t$ and $\partial_\bold{x} \rho_t$;\\
}
Calculate the SLD and objective function $f(T)$. \\
Calculate the gradient $\frac{\delta f(T)}{\delta u_k(t)}$ with the auto-differential method
for all $t$ and $k$.\\
{\For {$t=1, T$}{
\For {$k=1, K$}{
Update control $u_k(t)\!\leftarrow\!
u_k(t)\\!+\\!\\epsilon\\frac{\\delta f(T)}{\\delta u_k(t)}$.\n}}}\n}\nSave the controls $\\{u_k\\}$ and corresponding $f(T)$.\n\\end{algorithm}\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_ADillus.pdf}\n\\caption{The schematic of chain rules in automatic differentiation with the logarithmic\nderivative related functions as the objective function.}\n\\label{fig:AD_illus}\n\\end{figure}\n\n\\begin{table}[tp]\n\\def1.15{1.15}\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n\\hline\n\\multirow{3}{*}{$N$} & \\multicolumn{2}{c|}{M1} & \\multicolumn{2}{c}{M2}\\\\\n\\cline{2-5}\n& ~~computing~~& ~~~~memory~~~~& ~~computing~~ &~~~~memory~~~~\\\\\n& time & allocation & time & allocation \\\\\n\\hline\n$2$ & 4.46\\,$\\mu$s & 2.99\\,KB & 5.14\\,$\\mu$s & 2.24\\,KB \\\\\n$2^{2}$ & 18.09\\,$\\mu$s & 17.01\\,KB & 11.17\\,$\\mu$s & 5.46 \\,KB \\\\\n$2^{3}$ & 257.65\\,$\\mu$s & 217.63\\,KB & 35.84\\,$\\mu$s & 18.79\\,KB \\\\\n$2^{4}$ & 4.55\\,ms & 3.34\\,MB & 151.51\\,$\\mu$s & 90.18\\,KB \\\\\n$2^{5}$ & 174.61\\,ms & 53.01\\,MB & 962.17\\,$\\mu$s & 501.85\\,KB \\\\\n$2^{6}$ & 9.45\\,s & 846.18\\,MB & 11.05\\,ms & 3.31 \\,MB \\\\\n$2^{7}$ & 6151.51\\,s & 137.95\\,GB & 45.70\\,ms & 230.98\\,MB \\\\\n$2^{8}$ & - & - & 347.50\\,ms & 1.73\\,GB \\\\\n$2^{9}$ & - & - & 3.29\\,s & 13.36\\,GB \\\\\n$2^{10}$ & - & - & 41.51\\,s & 105.08\\,GB \\\\\n\\end{tabular}\n\\begin{ruledtabular}\n\\begin{tabular}{ccccccc}\n$\\omega T$ & 5 & 10 & 15 & 20 & 30 & 40\\\\\\specialrule{0.05em}{0pt}{3pt}\nGRAPE & 5.23\\,s & 21.75\\,s & 44.95\\,s & 71.00\\,s &178.56\\,s &373.89\\,s \\\\\nauto-GRAPE & 0.32\\,s & 0.77\\,s & 1.45\\,s & 2.19\\,s & 4.14\\,s & 7.00\\,s \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\caption{Upper table: comparison of the average computing time and memory\nallocation for the calculation of the gradient of QFI between two realization\nmethods of AD. M1 and M2 represent the first and second methods. $N$ is the\ndimension of the density matrix. 
The density matrix and its derivative are
generated randomly in the test. Lower table: comparison of the average
computing time per episode between GRAPE and auto-GRAPE with different target
time $T$. Parallel computing is not applied here. KB, MB, and GB represent
Kilobyte, Megabyte, and Gigabyte, respectively.}
\label{table:auto}
\end{table}

The core of AD is to utilize the chain rule to evaluate the derivatives of the objective function. As illustrated in
Fig.~\ref{fig:AD_illus}, in AD the value of the objective function $f$ is evaluated from left to right (red arrows),
and the derivatives are calculated backwards (blue arrows), which is also called pullback in the language of
AD. In our case, the derivative of $f$ with respect to a control amplitude $u_k$ needs to be evaluated through all three
paths, from $f$ to $\rho$, from $f$ to $\partial_{\bold{x}}\rho$ (if $f$ is a function of $\partial_{\bold{x}}\rho$),
and from $f$ to $G$ to $L$. Here $L$ represents the SLDs of all parameters and $G:=G(L)=G(\rho,\partial_{\bold{x}}\rho)$
could be any intermediate function. For example, the contribution of the path from $f$ to $\rho$ to the derivative
$\mathrm{d}f/\mathrm{d}u_k$ is $\frac{\partial f}{\partial \rho}\frac{\partial \rho}{\partial u_k}$. Notice that
here $\partial f/\partial \rho$ is a formal derivative. The paths to $\rho$ and $\partial_{\bold{x}}\rho$ can be
routinely handled by Zygote; however, the path to $L$ cannot, due to the entry-by-entry calculation of the
SLD in Eq.~(\ref{eq:SLD_eigen}), which makes it difficult to generate $\partial L/\partial \rho$
and $\partial L/\partial(\partial_{\bold{x}}\rho)$, and therefore $\partial G/\partial\rho$
and $\partial G/\partial(\partial_{\bold{x}}\rho)$ cannot be obtained. 
The chain rule in AD thus cannot be applied directly.
Hence, we need to manually provide $\partial G/\partial \rho$ and $\partial G/\partial (\partial_{\bold{x}}\rho)$ to
let AD work in our case. To do so, one should first notice that the total differential $\mathrm{d}G_{\alpha\beta}$
(the $\alpha\beta$th entry of $\mathrm{d}G$) can be evaluated via the equation
\begin{equation}
\mathrm{d}G_{\alpha\beta}=\sum_{ij}\frac{\partial G_{\alpha\beta}}{\partial L_{ij}}\mathrm{d} L_{ij}
+\frac{\partial G_{\alpha\beta}}{\partial (L_{ij})^{*}}\mathrm{d} (L_{ij})^{*},
\end{equation}
which can be written into a more compact matrix form
\begin{equation}
\mathrm{d}G_{\alpha\beta}=\mathrm{Tr}\!\left(\left(\frac{\partial G_{\alpha\beta}}
{\partial L}\right)^{\mathrm{T}}\mathrm{d}L+\left(\frac{\partial G_{\alpha\beta}}
{\partial L^{*}}\right)^{\mathrm{T}}\mathrm{d}L^{*}\right).
\end{equation}
Due to the fact that the SLD is a Hermitian matrix, one has $\mathrm{d}L^{*}=\mathrm{d}L^{\mathrm{T}}$, and the equation above
reduces to
\begin{align}
\mathrm{d}G_{\alpha\beta}&=\mathrm{Tr}\!\left(\left(\frac{\partial G_{\alpha\beta}}
{\partial L}\right)^{\mathrm{T}}\mathrm{d}L+\frac{\partial G_{\alpha\beta}}
{\partial L^{\mathrm{T}}}\mathrm{d}L\right) \nonumber \\
&= 2\mathrm{Tr}\!\left(\left(\frac{\partial G_{\alpha\beta}}{\partial L}\right)^{\mathrm{T}}\mathrm{d}L\right).
\label{eq:dG}
\end{align}
Now we introduce an auxiliary function $h$ which satisfies
\begin{equation}
\left(\frac{\partial G_{\alpha\beta}}{\partial L}\right)^{\mathrm{T}}=\rho h^{\mathrm{T}}+h^{\mathrm{T}}\rho.
\end{equation}
This equation is a typical Lyapunov equation and can be numerically solved. 
Substituting the equation above into
the expression of $\mathrm{d}G_{\alpha\beta}$, one can find that
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\left(h^{\mathrm{T}}\mathrm{d}L\rho+h^{\mathrm{T}}\rho\mathrm{d}L\right).
\end{equation}
Due to the fact that $\partial_{\bold{x}}\rho=(\rho L+L\rho)/2$, we have
$\rho\mathrm{d}L+(\mathrm{d}L)\rho=2\mathrm{d}(\partial_{\bold{x}}\rho)-(\mathrm{d}\rho) L-L\mathrm{d}\rho$,
which means
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\!\left(2h^{\mathrm{T}}\mathrm{d}(\partial_{\bold{x}}\rho)\right)
-2\mathrm{Tr}\!\left(\left(Lh^{\mathrm{T}}+h^{\mathrm{T}}L\right)\mathrm{d}\rho\right).
\label{eq:dG_h}
\end{equation}
Next, since $G=G(\rho,\partial_{\bold{x}}\rho)$, $\mathrm{d}G_{\alpha\beta}$ can also be expressed as
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\!\left(\!\left(
\frac{\partial G_{\alpha\beta}}{\partial\rho}\right)^{\!\mathrm{T}}
\!\mathrm{d}\rho\!+\!\left(\frac{\partial G_{\alpha\beta}}
{\partial(\partial_{\bold{x}}\rho)}\right)^{\!\mathrm{T}}
\!\mathrm{d}(\partial_{\bold{x}}\rho)\!\right).
\end{equation}
This equation is derived through a calculation procedure similar to that for Eq.~(\ref{eq:dG}). Comparing this equation
with Eq.~(\ref{eq:dG_h}), one can see that
\begin{align}
\frac{\partial G_{\alpha\beta}}{\partial\rho}&=-hL^{\mathrm{T}}-L^{\mathrm{T}}h, \\
\frac{\partial G_{\alpha\beta}}{\partial (\partial_{\bold{x}}\rho)} &=2h.
\end{align}
With these expressions, $\partial G/\partial\rho$ and $\partial G/\partial(\partial_{\bold{x}}\rho)$ can be obtained
correspondingly. In this way, the entire path from $f$ to $L$ is connected. Together with the other two paths, AD can
be fully applied in our case. 
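As a side illustration of the linear-equation structure exploited here, the SLD equation $\partial_{x}\rho=(\rho L+L\rho)/2$ is itself a Lyapunov equation and can be solved by vectorization. Below is a minimal single-qubit sketch in Python (an illustration with an assumed state model and parameter values, not the package's internal routine):

```python
import numpy as np

x, r = 0.5, 0.9                      # assumed parameter value and Bloch radius
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

rho = 0.5*(I2 + r*(np.sin(x)*sx + np.cos(x)*sz))   # mixed qubit state
drho = 0.5*r*(np.cos(x)*sx - np.sin(x)*sz)          # d rho / dx

# solve rho@L + L@rho = 2*drho via row-major vectorization:
# vec(rho L) = (rho kron I) vec(L), vec(L rho) = (I kron rho^T) vec(L)
A = np.kron(rho, I2) + np.kron(I2, rho.T)
L = np.linalg.solve(A, (2.0*drho).reshape(-1)).reshape(2, 2)

qfi = np.trace(rho @ L @ L).real   # QFI; equals r**2 = 0.81 for this model
```

Note that this vectorized solve works in the $N^2$-dimensional space, which is precisely why such direct constructions become expensive for large $N$, as the comparison below shows.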
The computing time and memory allocation for the calculation of the
gradient of the QFI with these two realization methods of AD are compared for density matrices of different dimensions.
The dimension is denoted by $N$. As shown in the upper table in Table~\ref{table:auto}, the computing time and memory
allocation of the second method are better than those of the first one except for the case of $N=2$, and this advantage
becomes very significant when $N$ is large. Moreover, the computing time and memory allocation of the first method grow
fast with the increase of the dimension, which is reasonable as the calculations, especially the diagonalization, in the
first method are performed in the $N^2$-dimensional space. There are no data for the first method when $N$ is larger than
$2^7$ as the memory occupation exceeded the memory of our computer. From this comparison, one can see that the second
method performs better than the first one in basically all aspects and hence is chosen as the default method of
auto-GRAPE in QuanEstimation.

\emph{Example.} Consider the dynamics in Eq.~(\ref{eq:ME_spon}) and the control Hamiltonian in Eq.~(\ref{eq:ctrl_demo}).
Now define
\begin{eqnarray}
\delta_{\mathrm{c}}\omega &:=& 1/\sqrt{\mathcal{I}_{\omega\omega}}, \label{eq:c_deviation} \\
\delta_{\mathrm{q}}\omega &:=& 1/\sqrt{\mathcal{F}_{\omega\omega}} \label{eq:q_deviation}
\end{eqnarray}
as the theoretical optimal deviations with and without a fixed measurement, respectively. The performance of controls
generated via GRAPE and auto-GRAPE is shown in Figs.~\ref{fig:SPara_ctrl}(a) and \ref{fig:SPara_ctrl}(b), which are
obtained with 300 episodes in general. In QuanEstimation, the number of episodes can be set via the variable
{\fontfamily{bch}\selectfont\small\itshape max\_episode=300}
in {\fontfamily{bch}\selectfont\small\itshape **kwargs} in Table~\ref{table:ctrl_paras}. 
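The Adam-style update applied to the control amplitudes follows the standard Adam scheme, with the learning rate {\fontfamily{bch}\selectfont\small\itshape epsilon} and the decay rates {\fontfamily{bch}\selectfont\small\itshape beta1} and {\fontfamily{bch}\selectfont\small\itshape beta2} listed in Table~\ref{table:ctrl_paras}. A compact stand-alone sketch on a toy scalar objective (standard Adam, not the package implementation) reads:

```python
import numpy as np

def adam_ascent(grad, u0, epsilon=0.01, beta1=0.90, beta2=0.99,
                max_episode=300, eps_num=1e-8):
    """Gradient *ascent* with standard Adam updates (illustrative sketch)."""
    u = np.array(u0, dtype=float)
    m = np.zeros_like(u)
    v = np.zeros_like(u)
    for k in range(1, max_episode + 1):
        g = grad(u)
        m = beta1*m + (1.0 - beta1)*g        # first-moment estimate
        v = beta2*v + (1.0 - beta2)*g**2     # second-moment estimate
        m_hat = m/(1.0 - beta1**k)           # bias corrections
        v_hat = v/(1.0 - beta2**k)
        u = u + epsilon*m_hat/(np.sqrt(v_hat) + eps_num)
    return u

# toy objective f(u) = -(u - 2)^2, whose gradient is -2(u - 2)
u_opt = adam_ascent(lambda u: -2.0*(u - 2.0), [0.0])
```

With {\fontfamily{bch}\selectfont\small\itshape Adam=False}, the last line of the loop would simply become `u = u + epsilon*g`, i.e., a constant-step update.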
As shown in these plots, the values of
$\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ in (a) and $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$
in (b) obtained via GRAPE (red pentagrams) and auto-GRAPE (blue circles) basically coincide with each other, which is
reasonable as they are intrinsically the same algorithm, just with different methods for the calculation of the gradients.
However, auto-GRAPE shows a significant improvement in the computing time consumption, as given in the lower table in
Table~\ref{table:auto}, especially for a large target time $T$. The growth of the average computing time per episode with
the increase of $T$ in auto-GRAPE is quite insignificant compared to that in GRAPE. Adam can be applied by setting
{\fontfamily{bch}\selectfont\small\itshape Adam=True} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}. To obtain a good performance, one can set appropriate Adam parameters
in {\fontfamily{bch}\selectfont\small\itshape **kwargs}, including the learning rate {\fontfamily{bch}\selectfont\small\itshape epsilon} and the exponential decay rate for the first (second)
moment estimates {\fontfamily{bch}\selectfont\small\itshape beta1} ({\fontfamily{bch}\selectfont\small\itshape beta2}). The default values of these parameters in the package are 0.01 and
0.90 (0.99). If {\fontfamily{bch}\selectfont\small\itshape Adam=False}, the controls are updated with the constant step {\fontfamily{bch}\selectfont\small\itshape epsilon}. Due to the
convergence problem of Adam in some cases, several points in the figure are obtained by a second run of the code with
a constant step, which takes the optimal control obtained in the first round (with Adam) as the initial guess.

In some scenarios, the time resolution of the control amplitude could be limited if the dynamics is
too fast or the target time is too short. 
Hence, in the numerical optimization in such cases, the time step
of the control cannot equal that of the dynamics. Here we use the total control amplitude number $N_{\mathrm{c}}
=T/\Delta t_{\mathrm{c}}$, with $\Delta t_{\mathrm{c}}$ the control time step, to represent the time resolution
of the control, and we assume $\Delta t_{\mathrm{c}}$ is fixed during the dynamics. A full $N_{\mathrm{c}}$ in
Figs.~\ref{fig:SPara_ctrl}(a) and \ref{fig:SPara_ctrl}(b) means that $\Delta t_{\mathrm{c}}$ equals the dynamical
time step $\Delta t$. In the numerical calculation, it is possible that the quotient of $\Delta t_{\mathrm{c}}$ by
$\Delta t$ is not an integer, meaning that the control amplitudes cannot all last for the same number of dynamical
time steps. To avoid this problem, in QuanEstimation the input number ($N_{t}$) of dynamical time steps is automatically
adjusted to $kN_{\mathrm{c}}$, with $k$ the smallest integer satisfying $kN_{\mathrm{c}}>N_t$, if $N_t$ is not already
an integer multiple of $N_{\mathrm{c}}$. For example, if $N_{\mathrm{c}}=3$ and $N_{\mathrm{t}}=100$,
then $N_{\mathrm{t}}$ is adjusted to 102. Notice that in the package GRAPE is not available for a non-full
$N_{\mathrm{c}}$ scenario for technical reasons. If GRAPE is invoked in this case, the program automatically falls
back to auto-GRAPE. As a matter of fact, auto-GRAPE outperforms GRAPE in most aspects; therefore, we strongly
suggest that users choose auto-GRAPE instead of GRAPE in practice.

The performance of controls with limited $N_{\mathrm{c}}$ is also demonstrated in Figs.~\ref{fig:SPara_ctrl}(a)
and \ref{fig:SPara_ctrl}(b) with the dynamics in Eq.~(\ref{eq:ME_spon}) and the control Hamiltonian in
Eq.~(\ref{eq:ctrl_demo}). It can be seen that the constant-value controls ($N_{\mathrm{c}}=1$, orange upward
triangles) cannot reduce the values of $\delta_{\mathrm{c}}\omega$ and $\delta_{\mathrm{q}}\omega$. 
In the case
of a fixed measurement, it can only suppress the oscillation of $\delta_{\mathrm{c}}\omega$. The performance improves
with the increase of $N_{\mathrm{c}}$, and when $N_{\mathrm{c}}=10$, the values of $\delta_{\mathrm{q}}\omega$ and
$\delta_{\mathrm{c}}\omega$ are very close to those with a full $N_{\mathrm{c}}$. This fact indicates that inputting
10 control amplitudes is good enough in this case and a full-$N_{\mathrm{c}}$ control is unnecessary. A limited
$N_{\mathrm{c}}$ could be easier to realize in practice and hence benefits the experimental realization.

\begin{figure*}[tp]
\centering\includegraphics[width=17.5cm]{Fig_GRAPE.pdf}
\caption{The performance of control-enhanced (a) $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$
and (b) $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$ with different $N_{\mathrm{c}}$. The
optimal controls are generated via GRAPE and auto-GRAPE with the dynamics in Eq.~(\ref{eq:ME_spon}) and
the control Hamiltonian in Eq.~(\ref{eq:ctrl_demo}). The dotted black lines in (a) and (b) represent
$\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ and $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$
without control. The red pentagrams are those obtained via GRAPE with a full $N_{\mathrm{c}}$, i.e., $N_{\mathrm{c}}$
equals the number of time steps. The blue circles, green crosses, purple diamonds, cyan downward triangles and
orange upward triangles represent those obtained via auto-GRAPE with $N_{\mathrm{c}}$ being full, 10, 6, 3 and 1,
respectively. Other parameters are set to be the same as those in Fig.~\ref{fig:QFI_code}. (c1-c4) The optimal
controls in the case of $\omega_{\mathrm{tr}}T=20$ with $N_{\mathrm{c}}$ being (c1) 1, (c2) 3, (c3) 6 and (c4) 10,
respectively. The true value $\omega_{\mathrm{tr}}$ is set to be $1$. 
Planck units are applied here.}
\label{fig:SPara_ctrl}
\end{figure*}

\subsection{Particle swarm optimization}

\begin{figure*}[tp]
\centering\includegraphics[width=17.5cm]{Fig_PSO.pdf}
\caption{(a) Illustration of the basic operation of PSO in the $m$th round of episodes.
The personal best (with the blue subscript pb) of each particle in this
round is obtained by comparing all the values of $f$ of this particle in
all previous rounds, including the current one. The global best (with the red
subscript gb) is obtained by comparing the values of $f$ of all personal bests in
this round. The light gray areas represent the comparison processes, which select
the $\{u_k\}$ corresponding to the maximum value of $f$. (b) The
control-enhanced values of $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$
with a full $N_{\mathrm{c}}$ (red pentagrams) and $N_{\mathrm{c}}=6$ (green circles),
where the controls are generated via PSO. (c) The optimal controls for $N_{\mathrm{c}}=6$
in the case of $\omega_{\mathrm{tr}}T=20$. The true value $\omega_{\mathrm{tr}}$ is set
to be $1$. 
Planck units are applied here.}
\label{fig:PSO}
\end{figure*}

\begin{algorithm*}[tp]
\SetArgSty{}
\caption{PSO} \label{algorithm:pso}
Initialize the control $\left\{u_k\right\}^i_1$ for each $i \in \left[1,P\right]$; \\
Initialize $\left\{\delta u_k\right\}^i_1=0$ for each $i \in \left[1,P\right]$;\\
Assign $f(\left\{u_k\right\}^i_{0,\mathrm{pb}})=0$ for each $i \in \left[1,P\right]$; \\
\For {$m=1, M$\ \do}{
\For {$i=1, P$\ \do}{
Receive the control $\left\{u_k\right\}^i_m$;\\
Evolve the state with $\left\{u_k\right\}^i_m$ and calculate the objective function
$f(\left\{u_k\right\}^i_m)$ at the target time $T$; \\
Compare $f(\left\{u_k\right\}^i_m)$ with the value of the personal best in the last episode
$f(\left\{u_k\right\}^i_{m-1,\mathrm{pb}})$ and assign the new personal best
$\left\{u_k\right\}^i_{m,\mathrm{pb}}=\mathrm{arg}
\left(\max\left\{f(\left\{u_k\right\}^i_{m-1,\mathrm{pb}}),
f(\left\{u_k\right\}^i_m)\right\}\right)$;}
Compare all $f(\left\{u_k\right\}^i_{m,\mathrm{pb}})$ with $i\in[1,P]$ and assign the global best
$\left\{u_k\right\}_{m, \mathrm{gb}}=\mathrm{arg}\left(\max\limits_{i\in\left[1,P\right]}
f(\left\{u_k\right\}^i_{m, \mathrm{pb}})\right)$;\\
\For {$i=1, P$\ \do}
{Calculate $\left\{\delta u_k\right\}^i_m= c_0 \left\{\delta u_k\right\}^i_{m-1} +
\mathrm{rand}() \cdot c_1\big(\left\{u_k\right\}^i_{m, \mathrm{pb}}-\left\{u_k\right\}^i_m\big) +
\mathrm{rand}() \cdot c_2\big(\left\{u_k\right\}_{m,\mathrm{gb}}-\left\{u_k\right\}^i_m\big)$;\\
Update the control $\left\{u_k\right\}^i_{m+1} = \left\{u_k\right\}^i_m+\left\{\delta u_k\right\}^i_m$.
}}
Save the global best $\{u_k\}_{M,\mathrm{gb}}$.
\end{algorithm*}

Particle swarm optimization (PSO) is a widely used gradient-free method in
optimization~\cite{Kennedy1995,Eberhart2001}, and has been applied
in the detection
of gravitational waves~\cite{Michimura2018}, the characterization of open systems~\cite{Stenberg2016},
and the prediction of crystal structures~\cite{Wang2010}; in quantum metrology it has been used
to generate adaptive measurement schemes in phase estimation~\cite{Hentschel2010,Hentschel2011}.

A typical version of PSO includes a certain number (denoted by $P$) of parallel particles. In quantum
control, these particles are simply $P$ sets of controls $\{u_k\}$ labelled by $\{u_k\}^i$ for $i=1,\dots,P$.
The value of $\{u_k\}$ of the $i$th particle in the $m$th round of episodes is further denoted by $\{u_k\}^i_m$.
The basic optimization philosophy of PSO is given in Fig.~\ref{fig:PSO}(a) and the pseudocode is given
in Algorithm~\ref{algorithm:pso}. In the pseudocode, $\left\{u_k\right\}^i_{0,\mathrm{pb}}$ and
$f(\left\{u_k\right\}^i_{0,\mathrm{pb}})$ are just formal notations representing the initialization
of the personal bests. There exist two basic concepts in PSO, the personal best and the global best. In
the $m$th round of episodes, the personal best of the $i$th particle ($\{u_k\}^i_{m,\mathrm{pb}}$) is
assigned by the $\{u_k\}$ corresponding to the maximum value of $f$ among all previous episodes of
this particle, namely,
\begin{equation}
\{u_k\}^i_{m,\mathrm{pb}}=\mathrm{arg}\left(\max \limits_{n\in\left[1,m\right]}
f(\left\{u_k\right\}^i_n)\right)
\end{equation}
with $\mathrm{arg}(\cdot)$ the argument. 
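In code, the personal-best and global-best bookkeeping, together with the velocity update of Algorithm~\ref{algorithm:pso}, can be sketched as follows (a toy, package-independent implementation; the function and variable names are illustrative):

```python
import random

def pso_episode(f, controls, velocities, pbest, c0=1.0, c1=2.0, c2=2.0):
    """One round of PSO (maximization): update personal bests, pick the
    global best, then move every particle with the velocity rule."""
    # personal best: the best (f, controls) pair of each particle so far
    for i, u in enumerate(controls):
        fi = f(u)
        if pbest[i] is None or fi > pbest[i][0]:
            pbest[i] = (fi, list(u))
    # global best: the best among all personal bests
    gbest = max(pbest, key=lambda t: t[0])[1]
    for i in range(len(controls)):
        velocities[i] = [c0 * v
                         + random.random() * c1 * (pb - u)
                         + random.random() * c2 * (gb - u)
                         for v, u, pb, gb in zip(velocities[i], controls[i],
                                                 pbest[i][1], gbest)]
        controls[i] = [u + v for u, v in zip(controls[i], velocities[i])]
    return gbest
```

By construction the best personal value never decreases, so the global best improves monotonically; the defaults $c_0=1$, $c_1=c_2=2$ match the package's, though damped values (e.g., $c_0=0.5$) converge more reliably on simple test objectives.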
For example, as illustrated in Fig.~\\ref{fig:PSO}, if $f^1_j$ is\nthe maximum in $\\{f^1_1,f^1_2,\\dots,f^1_m\\}$, then $\\{u_k\\}^1_{m,\\mathrm{pb}}$ is assigned by $\\{u_k\\}^1_j$.\nOnce the personal bests are obtained for all particles, the global best is assigned by the $\\{u_k\\}$ with\nrespect to the maximum value of $f$ among all personal bests, i.e.,\n\\begin{equation}\n\\{u_k\\}_{m, \\mathrm{gb}}=\\mathrm{arg}\\left(\\max\\limits_{i\\in\\left[1,P\\right]}\nf(\\{u_k\\}^i_{m,\\mathrm{pb}})\\right).\n\\end{equation}\nWith all personal bests and the global best, the velocity $\\{\\delta u_k\\}^i_m$ for the $i$th particle is\ncalculated by\n\\begin{align}\n\\{\\delta u_k\\}^i_m =& c_0 \\{\\delta u_k\\}^i_{m-1}\n\\!+\\!\\mathrm{rand}()\\cdot c_1\\left(\\{u_k\\}^i_{m,\\mathrm{pb}}-\\{u_k\\}^i_m\\right) \\nonumber \\\\\n& +\\mathrm{rand}()\\cdot c_2\\left(\\{u_k\\}_{m,\\mathrm{gb}}-\\{u_k\\}^i_m\\right),\n\\end{align}\nwhere rand() represents a random number within $[0,1]$ and $c_0$, $c_1$, $c_2$ are three positive\nconstant numbers. In the package, these parameters can be adjusted in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}, shown in\nTable~\\ref{table:ctrl_paras}, via the variables {\\fontfamily{bch}\\selectfont\\small\\itshape c0}, {\\fontfamily{bch}\\selectfont\\small\\itshape c1} and {\\fontfamily{bch}\\selectfont\\small\\itshape c2}. A\ntypical choice for these constants is $c_0=1$, $c_1=c_2=2$, which are also the default values in the package.\n{\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} represents the episode number to run. If it is only set to be\na number, for example {\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode=1000}, the program will continuously run 1000 episodes. 
However,
if it is a list, for example {\fontfamily{bch}\selectfont\small\itshape max\_episode=[1000,100]}, the program will also run 1000 episodes
in total but replace $\{u_k\}$ of all particles with the current global best every 100 episodes.
{\fontfamily{bch}\selectfont\small\itshape p\_num} represents the particle number and is set to be 10 by default. The initial guesses
of control can be input via {\fontfamily{bch}\selectfont\small\itshape ctrl0} and the default choice {\fontfamily{bch}\selectfont\small\itshape ctrl0=[]} means all the guesses
are randomly generated. In the case that the number of input guessed controls is less than the particle number,
the algorithm will generate the remaining ones randomly. On the other hand, if the number is larger than the
particle number, only the required number of controls will be used. The optimization result can be reproduced
by fixing the value of the variable {\fontfamily{bch}\selectfont\small\itshape seed}, and its default value is 1234 in the package.

\begin{algorithm*}[tp]
\SetArgSty{}
\caption{DE}
Initialize the control $\{u_k\}^i$ for $i\in[1,P]$; \\
Evolve the state with $\{u_k\}^i$ and calculate the objective function $f(\{u_k\}^i)$
at the target time $T$ for $i\in[1,P]$; \\
\For {episode=1, $M$}{
\For {$i=1,P$}{
Randomly generate three distinct integers $p_1$, $p_2$, $p_3$ in the regime $[1,P]$; \\
Generate $\{G_k\}$ via the equation $\{G_k\}=\{u_k\}^{p_1}+c(\{u_k\}^{p_2}-\{u_k\}^{p_3})$; \\
\For {$k=1, K$}{
Generate a random integer $a\in[1, N_{\mathrm{c}}]$; \\
\For {$j=1, N_{\mathrm{c}}$}{
Generate a random number $r\in[0,1]$ and assign
$ [Q_k]_j=
\begin{cases}
[G_k]_j, & {\mathrm{if}~r\leq c_r~\mathrm{or}~j=a}, \\
[u_k]_j, & {\mathrm{if}~r>c_r~\mathrm{and}~j\neq a};
\end{cases}$
}}
Evolve the state with the control $\{Q_k\}$ and calculate $f(\{Q_k\})$ at time $T$; \\
\If
{$f(\\{u_k\\}^i)c_r~\\mathrm{and}~j\\neq a},\n\\end{cases}\n\\end{equation}\nwhere $[G_k]_j$ is the $j$th entry of $G_k$ and $[u_k]_j$ is the $j$th entry of a $u_k$ in $\\{u_k\\}^i$. This\nequation means if $r$ is no larger than a given constant $c_r$ (usually called crossover constant in DE),\nthen assign $[G_k]_j$ to $[Q_k]_j$, otherwise assign $[u_k]_j$ to $[Q_k]_j$. In the meantime, the $a$th\nentry of $Q_k$ always takes the value of $[G_k]_j$ regardless the value of $r$ to make sure at least one\npoint mutates. After the crossover, the values of objective functions $f(\\{u_k\\}^i)$ and $f(\\{Q_k\\})$ are compared,\nand $\\{u_k\\}^i$ is replaced by $\\{Q_k\\}$ if $f(\\{Q_k\\})$ is larger. In the package, $c$ and $c_r$ can be adjusted\nvia the variables {\\fontfamily{bch}\\selectfont\\small\\itshape c} and {\\fontfamily{bch}\\selectfont\\small\\itshape cr} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}, and the default values are 1.0 and 0.5.\n\n\\emph{Example.} The performance of controls generated via DE is also illustrated with the dynamics in Eq.~(\\ref{eq:ME_spon})\nand control Hamiltonian in Eq.~(\\ref{eq:ctrl_demo}). $\\delta_{\\mathrm{q}}\\omega$ is defined in Eq.~(\\ref{eq:q_deviation}).\nAs shown in the Fig.~\\ref{fig:DE}(b), different with PSO, the performance of DE with a full $N_{\\mathrm{c}}$ (red pentagrams)\nis very close to that of auto-GRAPE (dash-dotted gray line), even for a large target time $T$, which indicates that DE works\nbetter than PSO in this example. More surprisingly, in the case of $N_{\\mathrm{c}}=6$, DE (green circles) not only outperforms\nPSO, but also significantly outperforms auto-GRAPE (dashed light-blue line). This result indicates that no algorithm has the\nabsolute advantage in general. Comparison and combination of different algorithms are a better approach to design optimal\ncontrols in quantum metrology, which can be conveniently finished via QuanEstimation. 
The optimal controls obtained via DE\nfor $N_{\\mathrm{c}}=6$ are given in Fig.~\\ref{fig:DE}(c) in the case of $\\omega_{\\mathrm{tr}}T=20$. The results above are\nobtained with 1000 episodes, which can be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode=1000} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}.\n\n\\subsection{Deep Deterministic Policy Gradients}\n\n\\begin{figure}[bp]\n\\centering\\includegraphics[width=8.5cm]{Fig_DDPG.pdf}\n\\caption{(a) The control-enhanced values of $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$\nwith a full $N_{\\mathrm{c}}$ (red pentagrams) and $N_{\\mathrm{c}}=6$ (green circles),\nwhere the controls are generated via DDPG. (b-c) The change of total reward in the\nepisodes in the case of (b) a full $N_{\\mathrm{c}}$ and (c) $N_{\\mathrm{c}}=6$.\n(d) The controls obtained via DDPG for $N_{\\mathrm{c}}=6$ in the case of\n$\\omega_{\\mathrm{tr}}T=20$. The true value $\\omega_{\\mathrm{tr}}$ is set to be $1$.\nPlanck units are applied here.}\n\\label{fig:DDPG}\n\\end{figure}\n\nDeep Deterministic Policy Gradients (DDPG) is a powerful tool in machine learning~\\cite{Lillicrap2015}\nand has already been applied in quantum physics to perform quantum multiparameter estimation~\\cite{Xu2021}\nand enhance the generation of spin squeezing~\\cite{Tan2021}. The pseudocode of DDPG for quantum estimation\nand the corresponding flow chart can be found in Ref.~\\cite{Liu2022}, and the details will not be repeatedly\naddressed herein.\n\n\\emph{Example.} The performance of controls generated via DDPG in the case of single-parameter estimation is also\nillustrated with the dynamics in Eq.~(\\ref{eq:ME_spon}) and control Hamiltonian in Eq.~(\\ref{eq:ctrl_demo}), as shown\nin Fig.~\\ref{fig:DDPG}(a). $\\delta_{\\mathrm{q}}\\omega$ is defined in Eq.~(\\ref{eq:q_deviation}). The reward is taken\nas the logarithm of the ratio between the controlled and non-controlled values of the QFI at time $t$. 
It can be seen
that the performance of DDPG with a full $N_{\mathrm{c}}$ (red pentagrams) shows a significant disparity with that of
auto-GRAPE (dash-dotted gray line). A more surprising fact is that it is even worse than the performance of both
auto-GRAPE (dashed light-blue line) and DDPG (green circles) with $N_{\mathrm{c}}=6$. Moreover, the performance of DDPG with
$N_{\mathrm{c}}=6$ presents no advantage over PSO and DE. However, we cannot rashly conclude that PSO and DE
outperform DDPG here, as DDPG involves many more parameters, and a suitable set of parameters might make its performance
comparable to, or even better than, that of PSO and DE. Nevertheless, we can still safely say that PSO and DE, especially DE, find
optimal controls more easily in this example, and DDPG does not present a general advantage here. The total rewards in the
case of $\omega_{\mathrm{tr}}T=20$ with a full $N_{\mathrm{c}}$ and with $N_{\mathrm{c}}=6$ are given in Figs.~\ref{fig:DDPG}(b)
and \ref{fig:DDPG}(c), respectively. The total reward indeed increases and converges for a full $N_{\mathrm{c}}$, but the
final performance is only slightly better than the non-controlled value [dotted black line in Fig.~\ref{fig:DDPG}(a)].
For $N_{\mathrm{c}}=6$, the total reward does not significantly increase, which means the corresponding performance
of $\delta_{\mathrm{q}}\omega$ basically comes from the average performance of random controls. The controls obtained
via DDPG for $N_{\mathrm{c}}=6$ are shown in Fig.~\ref{fig:DDPG}(d).

\subsection{Performance of the convergence speed}

Apart from the improvement of the objective function, the convergence speed is also an important aspect in
evaluating the performance of an algorithm. 
Here we illustrate the convergence performance of different algorithms\nin Fig.~\\ref{fig:converg} in the single-parameter scenario discussed previously, namely, the dynamics in\nEq.~(\\ref{eq:ME_spon}) and control Hamiltonian in Eq.~(\\ref{eq:ctrl_demo}) with a full $N_{\\mathrm{c}}$. As\nshown in Fig.~\\ref{fig:converg}(a), GRAPE (dashed red line) and auto-GRAPE (dotted black line) show higher\nconvergence speed than PSO (solid green line) and DE (dash-dotted cyan line). This phenomenon coincides with\nthe common understanding that the gradient-based methods converge faster than gradient-free methods in general.\nDE converges slower than GRAPE and auto-GRAPE, but the final performance of QFI basically coincides with them.\nPSO presents the slowest speed in this example and the final result of QFI is also worse than others. DDPG is\nnot involved in this figure as its improvement on the QFI is not as significant as others.\n\nThe effect of Adam in auto-GRAPE is also illustrated in Fig.~\\ref{fig:converg}(b). Denote $\\epsilon$ as the\nlearning rate in Adam. In the case of constant-step update, auto-GRAPE with $\\epsilon=0.01$ (dotted black line)\nconverges faster than that with $\\epsilon=0.005$ (dash-dotted green line), which is common and reasonable as a\nlarge step usually implies a higher convergence speed. However, when Adam is invoked, this difference becomes\nvery insignificant and both lines (solid gray line for $\\epsilon=0.01$ and dashed blue line for $\\epsilon=0.005$)\nconverge faster than constant-step updates. 
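For reference, the two update rules compared here can be sketched as follows (a standard Adam implementation for gradient ascent on a scalar parameter; the package's internal implementation may differ in details):

```python
def constant_step(x, grad, epsilon=0.01):
    # plain gradient ascent with a constant step
    return x + epsilon * grad

def adam_step(x, grad, state, epsilon=0.01, beta1=0.90, beta2=0.99,
              tol=1e-8):
    # standard Adam update with bias correction, written for ascent;
    # defaults match the package's epsilon, beta1, beta2
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    mhat = m / (1 - beta1 ** t)
    vhat = v / (1 - beta2 ** t)
    return x + epsilon * mhat / (vhat ** 0.5 + tol), (m, v, t)
```

Since Adam normalizes by the root-mean-square gradient, its early step sizes are close to `epsilon` regardless of the gradient magnitude, which is why the two learning rates behave so similarly once Adam is invoked.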
However, it should be noticed that a large $\\epsilon$ in Adam may\nresult in a strong oscillation of $\\delta_{\\mathrm{q}}\\omega$ in the episodes, and it should be adjusted to smaller\nvalues if one wants to avoid this phenomenon.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_convergence.pdf}\n\\caption{(a) The convergence performance of different algorithms, including\nGRAPE (dashed red line), auto-GRAPE (dotted black line), PSO (solid green line)\nand DE (dash-dotted cyan line). (b) The convergence performance of auto-GRAPE\nwith constant step $\\epsilon=0.01$ (dotted black line), $\\epsilon=0.005$\n(dash-dotted green line), and with Adam (solid gray line for $\\epsilon=0.01$\nand dashed blue line for $\\epsilon=0.005$). The target time $\\omega_{\\mathrm{tr}}T=20$,\nand the true value $\\omega_{\\mathrm{tr}}$ is set to be 1. Planck units are\napplied here. }\n\\label{fig:converg}\n\\end{figure}\n\n\\subsection{Multiparameter estimation}\n\\label{sec:multi}\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_multipara.pdf}\n\\caption{(a) The performance of controls generated via different algorithms,\nincluding GRAPE (red pentagrams), auto-GRAPE (cyan triangles) with full\n$N_{\\mathrm{c}}$, PSO (blue crosses) with full $N_{\\mathrm{c}}$, DE (yellow\ncircles) with full $N_{\\mathrm{c}}$ and DDPG (orange pluses) with full\n$N_{\\mathrm{c}}$. (b) The performance of controls generated via PSO (dark\nblue diamonds), DE (small red hollow circles) and DDPG (large purple hollow\ncircles) with limited $N_{\\mathrm{c}}$ ($N_{\\mathrm{c}}=10$). $W$ is chosen\nto be $\\openone$.}\n\\label{fig:multipara}\n\\end{figure}\n\nCompared to the single-parameter estimation, multiparameter estimation is a more challenging problem in\nquantum metrology. 
In this case, $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ cannot be used as the objective function\nin the implementation of GRAPE as the analytical calculation of $\\mathcal{F}^{-1}$ is very difficult,\nif not fully impossible, when the number of parameter is large. Hence, in GRAPE when $W=\\openone$,\n$\\sum_{a}1\/\\mathcal{F}_{aa}$, a lower bound of $\\mathrm{Tr}(\\mathcal{F}^{-1})$, is taken as\nthe superseded objective function~\\cite{Liu2020,Liu2017b,Liu2022}. Unfortunately, $\\sum_{a}W_{aa}\/\\mathcal{F}_{aa}$\nfails to be a valid lower bound for a general $W$. In this case, to keep $\\sum_{a}W_{aa}\/\\mathcal{F}_{aa}$\na valid lower bound, the parameters for estimation have to be reorganized by the linear combination of the\noriginal ones to let $W$ be diagonal, which causes the inconvenience to implement GRAPE insuch cases.\nDifferent with GRAPE, this problem naturally vanishes in auto-GRAPE as the inverse matrix $\\mathcal{F}^{-1}$\nis calculated automatically and so does the gradient. 
In the meantime, PSO and DE do not face such
problems as they are gradient-free.

\emph{Example.} Here we take an electron-nuclear spin system, which can be readily realized in nitrogen-vacancy
centers, as an example to demonstrate and compare the performance of different algorithms included in QuanEstimation.
The Hamiltonian of this system reads~\cite{Barry2020,Schwartz2018,Rembold2020}
\begin{equation}
H_0/\hbar=DS^2_3+g_{\mathrm{S}}\vec{B}\cdot\vec{S}+g_{\mathrm{I}}\vec{B}\cdot\vec{I}
+\vec{S}^{\,\mathrm{T}}\mathcal{A}\vec{I},
\label{eq:NV_H}
\end{equation}
where $S_i=s_i\otimes\openone$ and $I_i=\openone\otimes\sigma_i$ ($i=1,2,3$) represent the
electron and nuclear ($^{15}\mathrm{N}$) operators, with $s_1$, $s_2$ and $s_3$ spin-1 operators.
Their specific expressions are
\begin{eqnarray}
s_1 = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{array}\right),
s_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & -i & 0\\
i & 0 & -i\\
0 & i & 0
\end{array}\right)\!\!, \nonumber
\end{eqnarray}
and $s_3=\mathrm{diag}(1,0,-1)$. The vectors of operators are $\vec{S}=(S_1,S_2,S_3)^{\mathrm{T}}$ and
$\vec{I}=(I_1,I_2,I_3)^{\mathrm{T}}$, and $\mathcal{A}$ is the hyperfine tensor. In this case,
$\mathcal{A}=\mathrm{diag}(A_1,A_1,A_2)$ with $A_1$ and $A_2$ the transverse and axial magnetic hyperfine
coupling coefficients. The coupling between the magnetic field and the electron is approximated to be
isotropic. The coefficients are $g_{\mathrm{S}}=g_\mathrm{e}\mu_\mathrm{B}/\hbar$ and
$g_{\mathrm{I}}=g_\mathrm{n}\mu_\mathrm{n}/\hbar$, where $g_\mathrm{e}$ ($g_\mathrm{n}$) is the $g$ factor
of the electron (nucleus), $\mu_\mathrm{B}$ ($\mu_\mathrm{n}$) is the Bohr (nuclear) magneton, and $\hbar$ is
the reduced Planck constant. 
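The operator construction above can be sketched in a few lines of pure Python (for illustration only; the package builds these operators internally):

```python
r2 = 2 ** -0.5
# spin-1 operators s1, s2, s3 and the 2x2 identity
s1 = [[0, r2, 0], [r2, 0, r2], [0, r2, 0]]
s2 = [[0, -1j * r2, 0], [1j * r2, 0, -1j * r2], [0, 1j * r2, 0]]
s3 = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]
eye2 = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Kronecker product, used to build S_i = s_i (x) 1 and I_i = 1 (x) sigma_i
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

S3 = kron(s3, eye2)   # electron operator on the joint 6-dimensional space

# sanity check of the spin-1 algebra: [s1, s2] = i s3
lhs, rhs = matmul(s1, s2), matmul(s2, s1)
comm = [[lhs[i][j] - rhs[i][j] for j in range(3)] for i in range(3)]
```

The commutator check confirms the matrices above satisfy the angular-momentum algebra, and `S3` is the diagonal operator $\mathrm{diag}(1,1,0,0,-1,-1)$ acting on the six-dimensional electron-nuclear space.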
The control Hamiltonian is
\begin{equation}
H_{\mathrm{c}}/\hbar=\sum^3_{i=1}\Omega_i(t)S_i,
\label{eq:NV_c}
\end{equation}
where $\Omega_i(t)$ is a time-varying Rabi frequency. In practice, the electron suffers from dephasing
noise, which means the dynamics of the full system is described by the master equation
\begin{equation}
\partial_t\rho=-i[H_0+H_{\mathrm{c}},\rho]+\frac{\gamma}{2}(S_3\rho S_3-S^2_3\rho-\rho S^2_3),
\label{eq:NV_ME}
\end{equation}
with $\gamma$ the dephasing rate, which is usually inversely proportional to the dephasing time $T^{*}_2$.

Now we use this system as a controllable magnetometer to estimate the magnetic field $\vec{B}$, which is
a three-parameter estimation problem. The optimal controls can be obtained via different algorithms.
In this case, the initial state is taken as $(|1\rangle+|\!-\!1\rangle)\otimes|\!\!\uparrow\rangle/\sqrt{2}$,
where $(|1\rangle+|\!-\!1\rangle)/\sqrt{2}$ is an electron state with $|1\rangle$ ($|\!-\!1\rangle$) the
eigenstate of $s_3$ corresponding to the eigenvalue $1$ ($-1$), and $|\!\!\uparrow\rangle$ is a nuclear state,
the eigenstate of $\sigma_3$ corresponding to the eigenvalue 1. $W$ is chosen to be $\openone$. The system
parameters are chosen as $D=2\pi\times 2.87$\,GHz, $g_{\mathrm{S}}=2\pi\times 28.03$\,GHz/T,
$g_{\mathrm{I}}=2\pi\times 4.32$\,MHz/T, $A_1=2\pi\times 3.65$\,MHz, $A_2=2\pi\times 3.03$\,MHz, and the
true values of $\vec{B}$ are $B_1=B_2=B_3=0.50$\,mT. The dephasing rate is $\gamma=2\pi\times 1$\,MHz. All the
parameter values are selected according to Refs.~\cite{Barry2020,Felton2009}.

The performance of the controls given by different algorithms is shown in Fig.~\ref{fig:multipara}. The control
amplitude is limited to the regime $[-20\,\mathrm{MHz},20\,\mathrm{MHz}]$. 
In the case of a full $N_{\mathrm{c}}$
[$N_{\mathrm{c}}=2000T/(0.01\,\mathrm{\mu s})$], as shown in Fig.~\ref{fig:multipara}(a), the performances of GRAPE
(red pentagrams),
auto-GRAPE (cyan triangles), PSO (blue crosses), DE (yellow circles) and DDPG (orange pluses) basically coincide
for a small target time ($T\leq 0.01\,\mathrm{\mu s}$), and the reduction of $\mathrm{Tr}(W\mathcal{F}^{-1})$
is limited compared to the non-controlled values (solid black line). In the regime of a large target time
($T> 0.01\,\mathrm{\mu s}$), auto-GRAPE shows the best performance. GRAPE is not applied at these points as its
time consumption is too heavy for our computers. PSO and DE only find controls that provide a slight improvement
of $\mathrm{Tr}(W\mathcal{F}^{-1})$ in this regime. The different behaviors of the performance are due to the
large search space in this case. For example, the total control number for $T=0.08\,\mathrm{\mu s}$ is 48000,
including all three controls $\Omega_{1}$, $\Omega_{2}$ and $\Omega_{3}$. In such a large parameter space, different
from the gradient-based methods, the gradient-free methods cannot be guaranteed to find optimal values. Hence, the
gradient-based methods would be a good choice in such cases. However, one should notice that gradient-based
methods like auto-GRAPE could be more memory consuming than gradient-free methods. In the case that the computer
memory is limited, one may have to choose gradient-free methods.

In the case of a small search space, for example $N_{\mathrm{c}}=10$, the performance of PSO and DE improves
significantly, as shown in Fig.~\ref{fig:multipara}(b). Both PSO (dark blue diamonds) and DE (small red hollow
circles) with $N_{\mathrm{c}}=10$ outperform the full-$N_{\mathrm{c}}$ cases, yet
DDPG with $N_{\mathrm{c}}=10$ (large purple hollow circles) does not show this behavior. 
Similar to the
single-parameter scenario, DE provides a better performance than PSO and DDPG when the control number
$N_{\mathrm{c}}$ is limited. A more interesting fact is that for some target times, like $T=0.03\,\mathrm{\mu s}$,
PSO and DE even provide performance comparable to that of auto-GRAPE with a full $N_{\mathrm{c}}$, indicating that
the control schemes given by PSO and DE in this case not only meet the best precision limit, but are also simpler
to implement in experiments than the full-$N_{\mathrm{c}}$ scheme given by auto-GRAPE.

\subsection{Minimum parameterization time optimization}

The control optimizations discussed in the previous subsections are performed with a fixed target time $T$. In some
scenarios, the goal is not to achieve the highest precision within a fixed time, but to reach a given precision as
soon as possible. This problem requires the search for the minimum time needed to reach a given value of the objective
function, which can be realized in QuanEstimation via the class {\fontfamily{bch}\selectfont\small\itshape ControlOpt()}. After calling
{\fontfamily{bch}\selectfont\small\itshape control=ControlOpt()} and {\fontfamily{bch}\selectfont\small\itshape control.dynamics()}, one can use the following codes to solve this problem:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
control.mintime(f,W=[],M=[],method="binary",
    target="QFIM",LDtype="SLD")
\end{lstlisting}
Here the input {\fontfamily{bch}\selectfont\small\itshape f} is a float number representing the given value of the objective function. The type of
objective function can be adjusted via {\fontfamily{bch}\selectfont\small\itshape target=" "}, which includes three options {\fontfamily{bch}\selectfont\small\itshape "QFIM"}
(default), {\fontfamily{bch}\selectfont\small\itshape "CFIM"}, and {\fontfamily{bch}\selectfont\small\itshape "HCRB"}. 
The measurement can be input via {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]} if\nnecessary, and in this case the objective function will be chosen as the CFIM regardless of the setting in\n{\\fontfamily{bch}\\selectfont\\small\\itshape target=\" \"}. In the case of {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"QFIM\"}, the type of QFIM can be changed via\n{\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"}. The choices include {\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"}, and {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}.\n{\\fontfamily{bch}\\selectfont\\small\\itshape method=\"binary\"} represents the binary search (logarithmic search) and {\\fontfamily{bch}\\selectfont\\small\\itshape method=\"forward\"}\nrepresents the forward search from the beginning of time. Choosing a suitable method may help to improve the\ncalculation efficiency. For example, if the users already know that the minimum time is very small compared to $T$,\nthe forward search would be more efficient than the binary search. Notice that the search is restricted in the regime\n$[0,T]$ where $T$ is given by the array {\\fontfamily{bch}\\selectfont\\small\\itshape tspan} input in {\\fontfamily{bch}\\selectfont\\small\\itshape ControlOpt()}, and the current codes\ncan only deal with full-$N_{\\mathrm{c}}$ controls. 
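The two search strategies can be sketched as follows. Here `f` is a stand-in for the objective as a function of time (e.g., the QFI at time $t$), assumed non-decreasing for the binary search; this is an illustrative sketch, not the package's internal routine:

```python
def min_time(f, target, tspan, method="binary"):
    """Smallest grid time t in tspan with f(t) >= target, or None."""
    if method == "forward":
        # forward search: scan from the beginning of time
        for t in tspan:
            if f(t) >= target:
                return t
        return None
    # binary (logarithmic) search over the time grid
    lo, hi = 0, len(tspan) - 1
    if f(tspan[hi]) < target:
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if f(tspan[mid]) >= target:
            hi = mid
        else:
            lo = mid + 1
    return tspan[lo]
```

As noted above, the forward search wins when the minimum time is near the start of the grid, since it stops after a few evaluations, while the binary search needs only a logarithmic number of evaluations in all cases.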
The outputs are two files \"mtspan.csv\" and \"controls.csv\" containing\nthe array of time from the start to the searched minimum time and the corresponding length of optimal controls,\nrespectively.\n\n\\section{State optimization}\n\\label{sec:state_opt}\n\n\\begin{table}[tp]\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\hline\n~~Algorithms~~ & ~~method=~~ & \\multicolumn{2}{c}{~~**kwargs and default values~~}\\\\\n\\hline\n\\multirow{6}{*}{AD} & \\multirow{6}{*}{\"AD\"} & \"Adam\" & False \\\\\n & & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & 300 \\\\\n & & \"epsilon\" & 0.01 \\\\\n & & \"beta1\" & 0.90 \\\\\n & & \"beta2\" & 0.99 \\\\\n\\hline\n\\multirow{7}{*}{PSO} & \\multirow{7}{*}{\"PSO\"} & \"p\\_num\" & 10 \\\\\n & & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & [1000,100] \\\\\n & & \"c0\" & 1.0 \\\\\n & & \"c1\" & 2.0 \\\\\n & & \"c2\" & 2.0 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{6}{*}{DE} & \\multirow{6}{*}{\"DE\"} & \"p\\_num\" & 10 \\\\\n & & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & 1000 \\\\\n & & \"c\" & 1.0 \\\\\n & & \"cr\" & 0.5 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{9}{*}{NM} & \\multirow{9}{*}{\"NM\"} & \"p\\_num\" & 10 \\\\\n & & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & 1000 \\\\\n & & \"ar\" & 1.0 \\\\\n & & \"ae\" & 2.0 \\\\\n & & \"ac\" & 0.5 \\\\\n & & \"as0\" & 0.5 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{3}{*}{RI} & \\multirow{3}{*}{\"RI\"} & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & 300 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{5}{*}{DDPG} & \\multirow{5}{*}{\"DDPG\"} & \"psi0\" & [] \\\\\n & & \"max\\_episode\" & 500 \\\\\n & & \"layer\\_num\" & 3 \\\\\n & & \"layer\\_dim\" & 200 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Available methods for state optimization in QuanEstimation and\ncorresponding default parameter settings. 
Notice that AD is not available
when the HCRB is taken as the objective function.}
\\label{table:StateOpt_paras}
\\end{table}

Quantum resources like entanglement and squeezing are key to demonstrating a quantum-enhanced precision in
quantum parameter estimation. In contrast to dynamical resources like time or control, entanglement and squeezing are
usually embedded in the probe state, indicating that different probe states would present dramatically different
performance in precision. The search for optimal probe states is thus an essential step in the design of optimal
schemes. Various methodologies, including direct analytical calculations~\\cite{Caves1981,Liu2013,Jarzyna2012,Lang2013,
Lang2014,Modi2011,Monras2006,Fiderer2019,Safranek2016,Knysh2014,Fujiwara2001}, semi-analytical methods~\\cite{Dorner2009,
Rafal2009,Forsgren2002,Maccone2009,Knysh2011,Yuan2017} and fully numerical approaches~\\cite{Frowis2014,Knott2016,
Rafal2020a,Basilewitsch2020,Larrouy2020}, have been proposed and discussed. More advances in state optimization
in quantum metrology can be found in a recent review~\\cite{Liu2022}. QuanEstimation performs state optimization
with various methods, including both gradient-based and gradient-free ones. The specific codes in
QuanEstimation for the execution of state optimization are as follows:
\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
state = StateOpt(savefile=False,method=\"AD\",
    **kwargs)
state.dynamics(tspan,H0,dH,Hc=[],ctrl=[],
    decay=[])
state.QFIM(W=[],LDtype=\"SLD\")
state.CFIM(M=[],W=[])
\\end{lstlisting}
In the case that the parameterization is described by the Kraus operators, replace {\\fontfamily{bch}\\selectfont\\small\\itshape state.dynamics()}
with the code {\\fontfamily{bch}\\selectfont\\small\\itshape state.Kraus(K,dK)}. The parameters above have already been introduced previously.
The
default settings {\\fontfamily{bch}\\selectfont\\small\\itshape W=[]} and {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]} mean that $W=\\openone$ and that the measurement is a SIC-POVM.
The optimization method can be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape method=\" \"} and the corresponding parameters can be set via
{\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}. The available optimization methods and the corresponding default parameter settings are given
in Table~\\ref{table:StateOpt_paras}. Two files \"f.csv\" and \"states.csv\" will be generated at the end of the program,
which contain the values of the objective function in all episodes and the optimal probe state finally obtained,
respectively. When {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, the states obtained in all episodes will be saved in \"states.csv\". In
multiparameter estimation, the HCRB can also be chosen as the objective function by calling the codes:
\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
state.HCRB(W=[])
\\end{lstlisting}
Notice that if {\\fontfamily{bch}\\selectfont\\small\\itshape method=\"AD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape state.HCRB()} is not available.
Similar to the control
optimization, if the users invoke {\\fontfamily{bch}\\selectfont\\small\\itshape state.HCRB()} in the single-parameter scenario, a warning will
arise to remind them to call {\\fontfamily{bch}\\selectfont\\small\\itshape state.QFIM()} instead.

\\begin{algorithm}[tp]
\\SetArgSty{}
\\caption{AD for pure states} \\label{algorithm:AD}
Receive the guessed probe state $\\rho_{0}=|\\psi_{\\mathrm{in}}\\rangle\\langle\\psi_{\\mathrm{in}}|$; \\\\
\\For {episode=1, $M$}{
Calculate the density matrix $\\rho_{T}$ and its derivative $\\partial_{\\bold{x}}\\rho_{T}$ at the target time $T$; \\\\
Calculate the objective function $f(T)$ with $\\rho_{T}$ and $\\partial_{\\bold{x}}\\rho_{T}$; \\\\
Calculate the gradients $\\big\\{\\frac{\\delta f(T)}{\\delta c_{i}}\\big\\}$ with the automatic differentiation; \\\\
Update the coefficients $\\{c_{i}\\} \\leftarrow \\{c_{i}\\}+\\epsilon\\big\\{\\frac{\\delta f(T)}{\\delta c_{i}}\\big\\}$; \\\\
Normalize the coefficients $\\{c_i\\}\\leftarrow \\frac{1}{\\sqrt{\\sum_j |c_j|^2}}\\{c_i\\}$.
}
Save the coefficients and reconstruct the state.
\\end{algorithm}

In the previous section, we already showed the power of automatic differentiation (AD) in the construction of
auto-GRAPE. Similarly, it can also be used in state optimization. Due to the convexity of the QFI and
QFIM~\\cite{Toth2014,Liu2020}, the optimal probe states are pure states in most scenarios. Hence, we first
consider state optimization within the set of pure states. The pseudocode of AD in state optimization
for pure states is given in Algorithm~\\ref{algorithm:AD}. In a specific basis $\\{|i\\rangle\\}$, a probe
state can be expanded as $|\\psi\\rangle=\\sum_i c_i|i\\rangle$, and the search for optimal probe states is equivalent
to the search for a set of normalized complex coefficients $\\{c_i\\}$.
In AD, a guessed probe state is first given or
generated and evolved to the target time $T$ according to the given dynamics, during which the density matrices
and the corresponding derivatives with respect to $\\bold{x}$ are calculated and saved. Then, after calculating the objective
function $f(T)$ at time $T$, all gradients $\\{\\delta f(T)\/\\delta c_i\\}$ are evaluated via automatic
differentiation, and the coefficients $\\{c_i\\}$ are updated accordingly with the step $\\epsilon$. This step can
be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape epsilon} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}. Finally, the updated coefficients are normalized,
as required by quantum mechanics. In the package, Adam is not applied by default in AD; it can be turned
on by setting {\\fontfamily{bch}\\selectfont\\small\\itshape Adam=True} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}.

Regarding the gradient-free methods, apart from PSO, DE, and DDPG, QuanEstimation also contains the Nelder-Mead
algorithm (NM)~\\cite{Nelder1965}, which has already been used by Fr\\\"{o}wis et al.~\\cite{Frowis2014} to perform
state optimization in the case of collective spins. The detailed flow chart of NM for locating the minimum value
of an objective function can be found in Ref.~\\cite{Liu2022}.
For the sake of self-consistency of the paper, here\nwe present its pseudocode in Algorithm~\\ref{algorithm:NM} for the search of the maximum value of $f$ at the target\ntime $T$.\n\n\\begin{algorithm}[tp]\n\\SetArgSty{}\n\\caption{NM for pure states} \\label{algorithm:NM}\nReceive a set of guessed states $|\\psi_1\\rangle,\\cdots,|\\psi_{n+1}\\rangle$;\\\\\n\\For {episode=1, $M$}{\nEvolve all states according to the given dynamics and calculate the objective function $f$\nat time $T$; \\\\\nSort the states and reassign the indices to let\n$f(|\\psi_1\\rangle)\\geq f(|\\psi_2\\rangle)\\geq\\cdots\\geq f(|\\psi_{n+1}\\rangle)$;\\\\\nCalculate the average state $|\\psi_{\\mathrm{a}}\\rangle=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{a}}}}\n\\sum^n_{k=1}|\\psi_{k}\\rangle$; \\\\\nCalculate the reflected state\n$|\\psi_{\\mathrm{r}}\\rangle=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{r}}}}\n[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{r}}(|\\psi_{\\mathrm{a}}\\rangle-|\\psi_{n+1}\\rangle)]$; \\\\\n\\uIf {$f(|\\psi_{\\mathrm{r}}\\rangle)>f(|\\psi_1\\rangle)$}\n{Calculate the expanded state\n$|\\psi_{\\mathrm{e}}\\rangle=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{e}}}}\n[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{e}}(|\\psi_{\\mathrm{r}}\\rangle-|\\psi_{\\mathrm{a}}\\rangle)]$; \\\\\n\\eIf {$f(|\\psi_{\\mathrm{r}}\\rangle)\\geq f(|\\psi_{\\mathrm{e}}\\rangle)$}\n{Replace $|\\psi_{n+1}\\rangle$ with $|\\psi_{\\mathrm{r}}\\rangle$;}\n{Replace $|\\psi_{n+1}\\rangle$ with $|\\psi_{\\mathrm{e}}\\rangle$;}}\n\\uElseIf {$f(|\\psi_1\\rangle) \\geq f(|\\psi_{\\mathrm{r}}\\rangle) > f(|\\psi_n\\rangle)$}\n{Replace $|\\psi_{n+1}\\rangle$ with $|\\psi_{\\mathrm{r}}\\rangle$;}\n\\uElseIf {$f(|\\psi_n\\rangle) \\geq f(|\\psi_{\\mathrm{r}}\\rangle) > f(|\\psi_{n+1}\\rangle)$}\n{Calculate the outside contracted 
state
$|\\psi_{\\mathrm{oc}}\\rangle=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{oc}}}}
[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{c}}(|\\psi_{\\mathrm{r}}\\rangle-|\\psi_{\\mathrm{a}}\\rangle)]$;\\\\
\\eIf{$f(|\\psi_{\\mathrm{oc}}\\rangle) \\geq f(|\\psi_{\\mathrm{r}}\\rangle)$}
{Replace $|\\psi_{n+1}\\rangle$ with $|\\psi_{\\mathrm{oc}}\\rangle$;}
{Replace all $|\\psi_k\\rangle$ for $k\\in[2,n+1]$ with
$\\frac{1}{\\sqrt{\\mathcal{N}_k}}[|\\psi_1\\rangle+a_{\\mathrm{s}}(|\\psi_k\\rangle-|\\psi_1\\rangle)]$;}
}
\\Else {Calculate the inside contracted state
$|\\psi_{\\mathrm{ic}}\\rangle=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{ic}}}}
[|\\psi_{\\mathrm{a}}\\rangle-a_{\\mathrm{c}}(|\\psi_{\\mathrm{a}}\\rangle-|\\psi_{n+1}\\rangle)]$;\\\\
\\eIf {$f(|\\psi_{\\mathrm{ic}}\\rangle) > f(|\\psi_{n+1}\\rangle)$}
{Replace $|\\psi_{n+1}\\rangle$ with $|\\psi_{\\mathrm{ic}}\\rangle$;}
{Replace all $|\\psi_k\\rangle$ for $k\\in[2,n+1]$ with
$\\frac{1}{\\sqrt{\\mathcal{N}_k}}[|\\psi_1\\rangle+a_{\\mathrm{s}}(|\\psi_k\\rangle-|\\psi_1\\rangle)]$;}
}}
\\end{algorithm}

In NM, $n+1$ guessed states are input and sorted in descending order according to the corresponding values of $f$, namely,
$f(|\\psi_1\\rangle)\\geq\\cdots\\geq f(|\\psi_{n+1}\\rangle)$. In one episode of the optimization, the average state
$|\\psi_{\\mathrm{a}}\\rangle:=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{a}}}}\\sum^n_{k=1}|\\psi_k\\rangle$ and the reflected state
$|\\psi_{\\mathrm{r}}\\rangle:=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{r}}}}[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{r}}
(|\\psi_{\\mathrm{a}}\\rangle-|\\psi_{n+1}\\rangle)]$ are first calculated.
In the case that the reflected state is
better than $|\\psi_1\\rangle$, i.e., $f(|\\psi_{\\mathrm{r}}\\rangle)$ is larger than $f(|\\psi_1\\rangle)$, the expanded
state $|\\psi_{\\mathrm{e}}\\rangle:=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{e}}}}[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{e}}
(|\\psi_{\\mathrm{r}}\\rangle-|\\psi_{\\mathrm{a}}\\rangle)]$ is then calculated and compared to the reflected state. If
the reflected state is still better, $|\\psi_{n+1}\\rangle$ is replaced with $|\\psi_{\\mathrm{r}}\\rangle$; otherwise,
it is replaced with $|\\psi_{\\mathrm{e}}\\rangle$. In the case that the performance of the reflected
state lies between those of $|\\psi_1\\rangle$ and $|\\psi_n\\rangle$, $|\\psi_{n+1}\\rangle$ is simply replaced with it. If its
performance lies between those of $|\\psi_n\\rangle$ and $|\\psi_{n+1}\\rangle$, the outside contracted state
$|\\psi_{\\mathrm{oc}}\\rangle:=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{oc}}}}[|\\psi_{\\mathrm{a}}\\rangle+a_{\\mathrm{c}}
(|\\psi_{\\mathrm{r}}\\rangle-|\\psi_{\\mathrm{a}}\\rangle)]$ is calculated and compared to the reflected state.
$|\\psi_{n+1}\\rangle$ is replaced with $|\\psi_{\\mathrm{oc}}\\rangle$ if $|\\psi_{\\mathrm{oc}}\\rangle$ outperforms the
reflected state; otherwise, all states $\\{|\\psi_k\\rangle\\}$, except the best one $|\\psi_1\\rangle$, are replaced with
the states $\\frac{1}{\\sqrt{\\mathcal{N}_k}}[|\\psi_1\\rangle+a_{\\mathrm{s}}(|\\psi_k\\rangle-|\\psi_1\\rangle)]$ and the
program goes to the next episode. In the case that $|\\psi_{\\mathrm{r}}\\rangle$ is no better than any state in
$\\{|\\psi_k\\rangle\\}$, the inside contracted state
$|\\psi_{\\mathrm{ic}}\\rangle:=\\frac{1}{\\sqrt{\\mathcal{N}_{\\mathrm{ic}}}}
[|\\psi_{\\mathrm{a}}\\rangle-a_{\\mathrm{c}}(|\\psi_{\\mathrm{a}}\\rangle-|\\psi_{n+1}\\rangle)]$ is then calculated and
compared to $|\\psi_{n+1}\\rangle$.
If it is better than $|\\psi_{n+1}\\rangle$, $|\\psi_{n+1}\\rangle$ is replaced with it;
otherwise, the same replacement operation as above is performed on all states. At the beginning of the next round,
all states are sorted in descending order again.
$\\mathcal{N}_{\\mathrm{a}}:=\\langle\\psi_{\\mathrm{a}}|\\psi_{\\mathrm{a}}\\rangle$
is the normalization coefficient, and similarly for $\\mathcal{N}_{\\mathrm{r}}$, $\\mathcal{N}_{\\mathrm{e}}$,
$\\mathcal{N}_{\\mathrm{oc}}$ and $\\mathcal{N}_{\\mathrm{ic}}$. A general setting of the coefficients is
$a_{\\mathrm{r}}=1.0$, $a_{\\mathrm{e}}=2.0$, $a_{\\mathrm{c}}=a_{\\mathrm{s}}=0.5$, which are also the default
values in the package. These coefficients can be adjusted in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} (shown in
Table~\\ref{table:StateOpt_paras}) via {\\fontfamily{bch}\\selectfont\\small\\itshape ar}, {\\fontfamily{bch}\\selectfont\\small\\itshape ae}, {\\fontfamily{bch}\\selectfont\\small\\itshape ac} and {\\fontfamily{bch}\\selectfont\\small\\itshape as0}.
In the meantime, {\\fontfamily{bch}\\selectfont\\small\\itshape p\\_num} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} represents the state number $n+1$.

\\begin{figure*}[tp]
\\centering\\includegraphics[width=17.5cm]{Fig_StataOpt_unitary.pdf}
\\caption{(a) The performance of the optimal probe states searched via AD (cyan triangles),
RI (red pluses), PSO (blue crosses), DE (yellow circles) and NM (purple squares) in the
Lipkin--Meshkov--Glick model in the absence of noise. The blue dots represent the
value of $\\sqrt{\\lambda T}\\delta g$ for the coherent spin state $|\\pi\/2,\\pi\/2\\rangle$,
and the dash-dotted black and dashed black lines represent $1\/\\sqrt{N}$ and $1\/N$,
respectively. (b) The convergence performance of AD (dash-dotted cyan line), RI (solid
red line), PSO (dotted blue line), DE (dashed yellow line), and NM (dotted star purple
line) in the case of $N=500$.
(c1-c5) The searched optimal states with different algorithms
in the case of $N=100$. The target time is chosen as $\\lambda T=10$. The true value of $g$
is 0.5, and the value of $h\/\\lambda$ is set to be $0.1$. Planck units are applied here.}
\\label{fig:StateOpt_unitary}
\\end{figure*}

\\begin{algorithm}[tp]
\\SetArgSty{}
\\caption{RI}
Receive the guessed probe state $\\rho_0$; \\\\
\\For {episode=1, $M$}{
Evolve the state with $\\rho=\\sum_i K_i \\rho_0 K^{\\dagger}_i$;\\\\
Calculate the derivative $\\partial_{a}\\rho = \\sum_i(\\partial_{a}K_i)\\rho_0 K^{\\dagger}_i
+ K_i\\rho_0(\\partial_{a}K^{\\dagger}_i)$; \\\\
Calculate the QFI and the SLD $L_a$ with $\\rho$ and $\\partial_{a}\\rho$; \\\\
Calculate the matrix $\\mathcal{M}$; \\\\
Find the eigenvector ${|\\psi_{\\mathrm{m}}\\rangle}$ of $\\mathcal{M}$ corresponding to
its largest eigenvalue; \\\\
Replace $\\rho_0$ with $|\\psi_{\\mathrm{m}}\\rangle\\langle\\psi_{\\mathrm{m}}|$.}
Return the optimal state ${|\\psi\\rangle}$ and the QFI.
\\label{algorithm:Iter}
\\end{algorithm}

\\begin{figure*}[tp]
\\centering\\includegraphics[width=17.5cm]{Fig_StateOpt_noise.pdf}
\\caption{The performance of probe states obtained via different algorithms for
(a) $N=8$ and (c) $N=30$ when the collective dephasing exists. The solid red line,
dashed star blue line, dash-dotted circle cyan line, and dashed purple line represent
the values of $\\sqrt{\\lambda T}\\delta g$ for the searched states obtained via AD,
PSO, DE, and NM, respectively. The dash-dotted green line represents that of NM
with 20 parallel sets. The dotted black line represents the result of
$|\\pi\/2,\\pi\/2\\rangle$. (b1-b5) The searched optimal states for $N=8$. (d1-d5) The
searched optimal states for $N=30$. The target time $\\lambda T=10$, and the true
value of $g$ is 0.5. The value of $h\/\\lambda$ is set to be $0.1$ and the decay
rate $\\gamma\/\\lambda=0.1$.
Planck units are applied here.}
\\label{fig:StateOpt_noise}
\\end{figure*}

Apart from the aforementioned algorithms, there also exist dedicated algorithms for state optimization
in quantum parameter estimation. Here we introduce a reverse iterative algorithm (RI), which was first proposed
in Refs.~\\cite{Demkowicz2011,Macieszczak2014} in the Bayesian estimation context, and then applied to the QFI
in Ref.~\\cite{Macieszczak2013a}. In the case of single-parameter estimation, the QFI can be rewritten as
\\begin{equation}
\\label{eq:qfisup}
\\mathcal{F}_{aa} = \\sup_{A} \\left[2\\mathrm{Tr}(A \\partial_a\\rho)-\\mathrm{Tr}(\\rho A^2)\\right].
\\end{equation}
This form is equivalent to the standard definition of the QFI, as can be seen by maximizing
$2\\mathrm{Tr}(A\\partial_a\\rho)-\\mathrm{Tr}(\\rho A^2)$, which is a quadratic function of the matrix $A$,
with respect to $A$: the extremum condition yields the standard linear equation
$\\partial_a\\rho=\\frac{1}{2}(A\\rho+\\rho A)$, i.e., the optimal $A=L_a$ is just the SLD operator. When this
solution is plugged back into the formula, it yields $\\mathrm{Tr}(\\rho L^2_a)$, in agreement with
the standard definition of the QFI. Consider the parameterization process described by the Kraus operators
given in Eq.~(\\ref{eq:kraus_opt}), $\\rho=\\sum_i K_i(x)\\rho_0 K_i^\\dagger(x)$.
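Equation~(\\ref{eq:qfisup}) can be checked numerically for a single qubit. The sketch below (an illustration independent of the package; the Bloch-vector family $\\rho(x)$ is chosen purely as an example) solves the SLD equation with SciPy's Lyapunov solver and confirms both that the objective attains the QFI at $A=L_a$ and that any Hermitian deviation from the SLD can only decrease it, as expected from the concavity of the functional in $A$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Qubit family rho(x) = (I + r(x).sigma)/2 with Bloch vector
# r(x) = 0.8*(cos x, sin x, 0); here the QFI equals |d_x r|^2 = 0.64.
x = 0.3
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

rho = 0.5 * (np.eye(2) + 0.8 * (np.cos(x) * sx + np.sin(x) * sy))
drho = 0.4 * (-np.sin(x) * sx + np.cos(x) * sy)

# SLD from d_x rho = (L rho + rho L)/2; rho is full rank here, so the
# continuous Lyapunov solver (A X + X A^H = Q with A = rho, Q = 2 d_x rho)
# is directly applicable.
L = solve_continuous_lyapunov(rho, 2 * drho)

def objective(A):
    # The functional maximized in the sup-form of the QFI
    return np.real(2 * np.trace(A @ drho) - np.trace(rho @ A @ A))

qfi = objective(L)   # equals Tr(rho L^2), i.e., the QFI

# A Hermitian perturbation of the SLD decreases the (concave) objective.
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
delta = G + G.conj().T
```

The perturbation test reflects the exact identity $f(L+\\delta)=f(L)-\\mathrm{Tr}(\\rho\\,\\delta^2)$, which is never positive for Hermitian $\\delta$.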
Taking into account
Eq.~(\\ref{eq:qfisup}), we see that the problem of identifying the optimal input state $\\rho_0$ that maximizes
the QFI can be written as a double maximization problem:
\\begin{equation}
 \\sup_{\\rho_0}\\mathcal{F}_{aa} = \\sup_{A,\\rho_0}
 \\left[2\\mathrm{Tr}(A \\partial_a\\rho)-\\mathrm{Tr}(\\rho A^2)\\right].
\\end{equation}
This observation leads to an effective iterative protocol: for a fixed $\\rho_0$ we find the optimal $A$
that maximizes the above expression, and then, fixing the optimal $A$ found in the previous step, we look for the
optimal $\\rho_0$. In order to implement the procedure, note that the QFI can be rewritten in the `Heisenberg
picture' form, where the Kraus operators effectively act on the $L_a$ operator, as
\\begin{equation}
\\mathcal{F}_{aa}=\\mathrm{Tr}\\left(\\rho_0 \\mathcal{M}\\right)
\\end{equation}
with
\\begin{equation}
\\mathcal{M}=\\sum_i\\left\\{2\\left[(\\partial_a K^{\\dagger}_i)L_a K_i
+K^{\\dagger}_i L_a(\\partial_a K_i)\\right]-K_i^\\dagger L^2_a K_i\\right\\}.
\\end{equation}
This equation indicates that for a fixed $\\mathcal{M}$ (i.e.~a fixed $A=L_a$), the optimal probe state is nothing
but the eigenvector corresponding to the maximum eigenvalue of $\\mathcal{M}$. The pseudocode of this algorithm
is given in Algorithm~\\ref{algorithm:Iter}. In one round of the optimization, $\\mathcal{M}$ is calculated, and
its eigenvector corresponding to the maximum eigenvalue is used as the probe
state in the next round. In the package, this method can be invoked via {\\fontfamily{bch}\\selectfont\\small\\itshape method=\"RI\"}. The number
of episodes and the seed can be adjusted in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} (shown in Table~\\ref{table:StateOpt_paras})
via {\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode} and {\\fontfamily{bch}\\selectfont\\small\\itshape seed}.
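To make one RI update concrete, the following self-contained sketch (a toy single-qubit phase estimation with the unitary Kraus operator $K=e^{-\\mathrm{i}xJ_3}$, chosen purely for illustration and not taken from the package) iterates the eigenvector update starting from a deliberately non-optimal probe; it converges to the equal-weight superposition of the $J_3$ eigenstates, for which the QFI reaches its maximum value of 1.

```python
import numpy as np
from scipy.linalg import expm

def sld(rho, drho, tol=1e-8):
    """Solve drho = (L rho + rho L)/2 in the eigenbasis of rho,
    dropping the (kernel, kernel) block as usual."""
    w, v = np.linalg.eigh(rho)
    d = v.conj().T @ drho @ v
    L = np.zeros_like(d)
    for i in range(len(w)):
        for j in range(len(w)):
            if w[i] + w[j] > tol:
                L[i, j] = 2 * d[i, j] / (w[i] + w[j])
    return v @ L @ v.conj().T

x = 0.5                        # hypothetical true value of the phase
Sz = np.diag([0.5, -0.5]).astype(complex)
K = expm(-1j * x * Sz)         # single (unitary) Kraus operator
dK = -1j * Sz @ K              # its derivative with respect to x

theta = np.pi / 8              # non-optimal initial guess, QFI = 0.5
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
rho0 = np.outer(psi, psi.conj())

fs = []
for episode in range(5):
    rho = K @ rho0 @ K.conj().T
    drho = dK @ rho0 @ K.conj().T + K @ rho0 @ dK.conj().T
    L = sld(rho, drho)
    fs.append(np.real(np.trace(rho @ L @ L)))        # current QFI
    M = (2 * (dK.conj().T @ L @ K + K.conj().T @ L @ dK)
         - K.conj().T @ L @ L @ K)
    w, v = np.linalg.eigh(M)
    psi_m = v[:, -1]           # eigenvector of the largest eigenvalue
    rho0 = np.outer(psi_m, psi_m.conj())
```

In this toy model the update already lands on the optimal probe after a single round; for generic noisy channels more episodes are needed.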
Notice that this method is only available when
{\\fontfamily{bch}\\selectfont\\small\\itshape state.Kraus()} is invoked, and in the current version of the package it only works for
single-parameter quantum estimation, i.e., when the objective function is the QFI. The extension to the CFI and
to multiparameter estimation will be thoroughly discussed in an independent paper.

\\emph{Example.} Here we use the Lipkin--Meshkov--Glick model as an example to show the state optimization with
QuanEstimation. The Hamiltonian of this model is~\\cite{Lipkin1965}
\\begin{equation}
H_{\\mathrm{LMG}}=-\\frac{\\lambda}{N}(J_1^2+gJ_2^2)-hJ_3,
\\end{equation}
where $J_i=\\frac{1}{2}\\sum_{j=1}^N \\sigma_i^{(j)}$ ($i=1,2,3$) is the collective spin operator with $\\sigma_i^{(j)}$
the $i$th Pauli matrix for the $j$th spin. $N$ is the total number of spins, $\\lambda$ is the spin--spin interaction
strength, $h$ is the strength of the external field and $g$ is the anisotropy parameter. All searches with
different algorithms start from the coherent spin state $|\\theta=\\pi\/2,\\phi=\\pi\/2\\rangle$, which is defined
by~\\cite{Ma2011}
\\begin{equation}
|\\theta,\\phi\\rangle=\\exp\\left(-\\frac{\\theta}{2}e^{-i\\phi} J_{+}+\\frac{\\theta}{2}e^{i\\phi}J_-\\right)|J,J\\rangle,
\\end{equation}
where $|J,J\\rangle$ is a Dicke state with $J=N\/2$ and $J_{\\pm}=J_1\\pm iJ_2$. Here we consider the case
in which the search is constrained to pure states with fixed $J=N\/2$, which can be expressed as
$|\\psi\\rangle=\\sum^J_{m=-J}c_m|J,m\\rangle$ with $|J,m\\rangle$ a general Dicke state and $c_m$ a complex
coefficient. Let us first study the single-parameter scenario with $g$ the parameter to be estimated.

\\begin{figure*}[tp]
\\centering\\includegraphics[width=15.5cm]{Fig_StateOpt_multipara.pdf}
\\caption{The performance of different algorithms for the weight matrix (a)
$W=\\mathrm{diag}(1\/2,1\/2)$ and (b) $W=\\mathrm{diag}(1\/3,2\/3)$.
The solid red
line, dashed star blue line, dash-dotted circle cyan line, dashed purple line
and dash-dotted green line represent the results obtained via AD, PSO, DE, NM,
and NM with 20 parallel sets, respectively. The dotted black line represents the
result of $|\\pi\/2,\\pi\/2\\rangle$. (c1-c2) The optimal states obtained from AD and
DE for $W=\\mathrm{diag}(1\/2,1\/2)$. (d1-d2) The optimal states obtained from AD
and DE for $W=\\mathrm{diag}(1\/3,2\/3)$. The target time $\\lambda T=10$. The true
values of $g$ and $h\/\\lambda$ are set to be 0.5 and $0.1$. Planck units are
applied here.}
\\label{fig:StateOpt_multipara}
\\end{figure*}

The performance of the optimal probe states searched via AD (cyan triangles), RI (red pluses), PSO (blue crosses),
DE (yellow circles) and NM (purple squares) in the absence of noise is given in Fig.~\\ref{fig:StateOpt_unitary}(a).
Here $\\delta g=1\/\\sqrt{\\mathcal{F}_{gg}}$ is the theoretical optimal deviation of $g$. The target time is taken
as $\\lambda T=10$ (Planck units are applied). The performance of DDPG is not good enough and is thus not shown in
the figure. For a very small $N$, the searched optimal states do not show an obvious advantage over the state
$|\\pi\/2,\\pi\/2\\rangle$ (blue dots). However, when $N$ is large the advantage becomes significant: all searched
states outperform $|\\pi\/2,\\pi\/2\\rangle$ and $1\/\\sqrt{N}$ (dash-dotted black line) once
$N$ is larger than around 6. For a large $N$, the performance of the states obtained via AD and RI is the best and
very close to $1\/N$ (dashed black line). The performance of DE and PSO basically coincides (more
precisely, DE performs slightly better than PSO), but is worse than that of AD and RI. The
performance of NM is the worst in this example.
Please note that one cannot conclude from this plot alone that NM generally performs worse than DE or PSO in
state optimization, since different parameter settings in the algorithms can sometimes dramatically affect their
behaviors; here we basically used the generally recommended settings for all algorithms. Nevertheless, the
different sensitivities of the final result to the parameter settings still indicate
that DE and PSO locate optimal states more easily than NM, at least in this example.

Regarding the convergence performance in this example, as shown in Fig.~\\ref{fig:StateOpt_unitary}(b), RI shows
the fastest convergence and the best optimized value. AD is slightly slower than RI but still considerably faster than the
gradient-free methods. However, the disadvantage of AD is that its memory occupation grows very fast as
$N$ increases. Hence, RI would be the best choice to try first for state optimization in the case of unitary parameterization.
Finally, as a demonstration, the optimal states searched via different algorithms in the case of $N=100$ are shown
in Figs.~\\ref{fig:StateOpt_unitary}(c1-c5).

\\emph{Example.} When the collective dephasing is involved, the dynamics of this system is governed by the following
master equation
\\begin{equation}
\\partial_t\\rho = -i[H_{\\mathrm{LMG}},\\rho]+\\gamma \\left(J_3\\rho J_3-\\frac{1}{2}\\left\\{\\rho, J^2_3\\right\\}\\right)
\\label{eq:dephasing_LMG}
\\end{equation}
with $\\gamma$ the decay rate. The performance of the optimal probe states searched via AD (solid red line), PSO (dashed
star blue line), DE (dash-dotted circle cyan line) and NM (dashed purple line) is illustrated for $N=8$ and $N=30$
in Figs.~\\ref{fig:StateOpt_noise}(a) and \\ref{fig:StateOpt_noise}(c), respectively. The corresponding optimal probe
states are given in Figs.~\\ref{fig:StateOpt_noise}(b1-b4) for $N=8$ and Figs.~\\ref{fig:StateOpt_noise}(d1-d4) for
$N=30$.
In both cases, the states obtained via AD, PSO and DE present basically coincident performance at time $T$,
and outperform $|\\pi\/2,\\pi\/2\\rangle$ (dotted black lines). Similar to the unitary scenario, the state obtained via
NM shows a worse performance at time $T$, and it even fails to find a better state than $|\\pi\/2,\\pi\/2\\rangle$ in the
case of $N=30$. In this figure, the number of parallel sets (also called particles in PSO and populations in DE) is
10 for NM, DE and PSO. After increasing the number of parallel sets from 10 to 20 [labelled by NM (20) in the
plot], the performance of NM (dash-dotted green line) improves in the case of $N=8$ and basically coincides with
the others. However, it still fails to find a better state when $N=30$; more parallel sets may be required for
NM in this case. The states obtained via NM (20) are shown in Figs.~\\ref{fig:StateOpt_noise}(b5) and
\\ref{fig:StateOpt_noise}(d5) for $N=8$ and $N=30$, respectively.

\\begin{table}[bp]
\\begin{tabular}{c|c|c|c}
\\hline
\\hline
~~Algorithms~~ & ~~method=~~ & \\multicolumn{2}{c}{~~**kwargs and default values~~}\\\\
\\hline
\\multirow{7}{*}{PSO} & \\multirow{7}{*}{\"PSO\"} & \"p\\_num\" & 10 \\\\
 & & \"measurement0\" & [] \\\\
 & & \"max\\_episode\" & [1000,100] \\\\
 & & \"c0\" & 1.0 \\\\
 & & \"c1\" & 2.0 \\\\
 & & \"c2\" & 2.0 \\\\
 & & \"seed\" & 1234 \\\\
\\hline
\\multirow{6}{*}{DE} & \\multirow{6}{*}{\"DE\"} & \"p\\_num\" & 10 \\\\
 & & \"measurement0\" & [] \\\\
 & & \"max\\_episode\" & 1000 \\\\
 & & \"c\" & 1.0 \\\\
 & & \"cr\" & 0.5 \\\\
 & & \"seed\" & 1234 \\\\
\\hline
\\multirow{5}{*}{AD} & \\multirow{6}{*}{\"AD\"} & \"Adam\" & False \\\\
\\multirow{5}{*}{(available when} & & \"measurement0\" & [] \\\\
\\multirow{5}{*}{{\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"})} & & \"max\\_episode\" & 300 \\\\
 & & \"epsilon\" & 0.01 \\\\
 & & \"beta1\" & 0.90 \\\\
 & & \"beta2\" & 0.99
\\\\
\\hline
\\hline
\\end{tabular}
\\caption{Available methods for measurement optimization in QuanEstimation and
corresponding default parameter settings. Notice that AD is only available
when {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"}. Here {\\fontfamily{bch}\\selectfont\\small\\itshape measurement0} is the initial
guess of the measurement.}
\\label{table:MeasOpt_paras}
\\end{table}

Next we discuss state optimization in multiparameter estimation. Consider the simultaneous estimation
of $g$ and $h\/\\lambda$ in the Lipkin--Meshkov--Glick model with the dynamics in Eq.~(\\ref{eq:dephasing_LMG}).
Figures~\\ref{fig:StateOpt_multipara}(a) and \\ref{fig:StateOpt_multipara}(b) show the performance of the optimal
states obtained via different algorithms for $W=\\mathrm{diag}(1\/2,1\/2)$ and $W=\\mathrm{diag}(1\/3,2\/3)$, respectively.
In both cases AD (solid red line) and DE (dash-dotted circle cyan line) present the best performance at the target
time $\\lambda T=10$, and DE even slightly outperforms AD in the case of $W=\\mathrm{diag}(1\/2,1\/2)$. The performance
of PSO (dashed star blue line) is worse than that of AD and DE, yet still better than NM (dashed purple line) and NM with
20 parallel sets (dash-dotted green line). The performance of NM does not even surpass that of the coherent spin state
$|\\pi\/2,\\pi\/2\\rangle$ (dotted black line) in the case of $W=\\mathrm{diag}(1\/2,1\/2)$. Hence, apart from gradient-based
algorithms like AD, PSO and DE would also be good choices for state optimization. The optimal states obtained from AD
and DE for $W=\\mathrm{diag}(1\/2,1\/2)$ and $W=\\mathrm{diag}(1\/3,2\/3)$ are demonstrated in
Figs.~\\ref{fig:StateOpt_multipara}(c1-c2) and Figs.~\\ref{fig:StateOpt_multipara}(d1-d2), respectively.
Although the
performance on $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ is basically the same for these states, they may still differ in
other properties, such as the difficulty of preparation, the robustness against imperfect preparation and so on. Hence,
in practice one needs to compare these optimal states comprehensively, case by case, to make a wise choice.


\\section{Measurement optimization}
\\label{sec:measurement_opt}

\\begin{figure*}[tp]
\\centering\\includegraphics[width=17.5cm]{Fig_Mopt.pdf}
\\caption{(a) The performance of optimal projective measurements obtained via PSO
(blue crosses) and DE (yellow circles) in the case of single-parameter estimation.
The dashed cyan line represents the values of $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$
and the dotted black line represents the values of $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{c}}\\omega$
with respect to the projective measurement $\\{\\Pi_{+}\\!=\\!|+\\rangle\\langle+|,
\\Pi_{-}\\!=\\!|-\\rangle\\langle-|\\}$. The true value $\\omega_{\\mathrm{tr}}=1$. Planck units
are applied in this plot. (b) The performance of optimal projective measurements obtained
via PSO (blue crosses) and DE (yellow circles) in the case of multiparameter estimation in
the absence of control. The black underlines and cyan triangles represent the values of
$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ without and with optimal control. The red pentagrams
represent the controlled values of $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ with the optimal
measurements obtained in the non-controlled scenario. (c) Demonstration of the optimal
projective measurement obtained by DE in the multiparameter estimation at the target time
$T=0.04\\,\\mu$s.
The red and blue bars represent the real and imaginary parts of the coefficients
of the optimal measurement in the basis $\\{|1\\!\\uparrow\\rangle,|1\\!\\downarrow\\rangle,
|0\\!\\uparrow\\rangle,|0\\!\\downarrow\\rangle,|\\!-\\!1\\!\\uparrow\\rangle,|\\!-\\!1\\!\\downarrow\\rangle\\}$.}
\\label{fig:Mopt}
\\end{figure*}

Measurement is critical in quantum parameter estimation~\\cite{Yu2021,Rath2021,Zhang2020,XuL2021}. On one hand,
an asymptotic bound, when attainable, requires certain optimal measurements to be attained, and hence the search
for optimal measurements is a natural theoretical requirement for approaching the ultimate precision limit. On the
other hand, the choice of measurements is usually limited in practice, and finding the optimal
measurements conditioned on the practically available ones is an important step towards the design of a realizable
scheme. QuanEstimation includes the optimization of measurements for several scenarios. The first one is the
optimization of rank-one projective measurements. A set of projective measurement operators $\\{\\Pi_i\\}$ satisfies
$\\Pi_i\\Pi_j=\\Pi_i\\delta_{ij}$ and $\\sum_i\\Pi_i=\\openone$, and it can be rewritten as
$\\{|\\phi_i\\rangle\\langle\\phi_i|\\}$ with $\\{|\\phi_i\\rangle\\}$ an orthonormal basis of the Hilbert space. In
this way, the optimization of a rank-one projective measurement is equivalent to identifying the optimal basis,
which can be realized with PSO and DE in QuanEstimation. In this case automatic differentiation does not
work very well due to the Gram-Schmidt orthogonalization procedure applied after the update of $\\{|\\phi_i\\rangle\\}$
according to the gradients. In some cases, the realizable measurement has to be limited to
linear combinations of a given set of POVM operators; hence, the second scenario is to find the optimal linear
combination of an input measurement.
Moreover, in some cases the measurement $\\{\\Pi_i\\}$ has to be fixed, but\nan arbitrary unitary operation can be invoked before performing the measurement, which is equivalent to a new\nmeasurement $\\{U\\Pi_i U^{\\dagger}\\}$. Based on this, the third scenario is to find the optimal rotated\nmeasurement of an input measurement.\n\nThe codes in QuanEstimation for the execution of measurement optimization are as follows:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\nm = MeasurementOpt(mtype=\"projection\",\n minput=[],savefile=False,\n method=\"DE\",**kwargs)\nm.dynamics(tspan,rho0,H0,dH,Hc=[],ctrl=[],\n decay=[])\nm.CFIM(W=[])\n\\end{lstlisting}\nIn the case that the parameterization is described by the Kraus operators, replace {\\fontfamily{bch}\\selectfont\\small\\itshape m.dynamics()} with\nthe code {\\fontfamily{bch}\\selectfont\\small\\itshape m.Kraus(rho0,K,dK)}. The optimization method can be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape method=\" \"}\nand corresponding parameters can be set via {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}. The available optimization methods and\ncorresponding default parameter settings are given in Table~\\ref{table:MeasOpt_paras}. Two files \"f.csv\" and\n\"measurements.csv\" will be generated at the end of the program. When {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, the\nmeasurements obtained in all episodes will be saved in \"measurements.csv\".\n\nThe variable {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\" \"} defines the type of scenarios for the optimization, and currently it includes\ntwo options: {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"projection\"} and {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"}. The first one means the optimization\nis performed in the first scenario, i.e., within the set of projective measurements. 
In this case,\n{\\fontfamily{bch}\\selectfont\\small\\itshape minput=[]} should be kept empty. Since $|\\phi_i\\rangle$ in a rank-one projective measurement\n$\\{|\\phi_i\\rangle\\langle\\phi_i|\\}$ can be expanded as $|\\phi_i\\rangle=\\sum_j C_{ij}|j\\rangle$ in a given\northonormal basis $\\{|j\\rangle\\}$, the optimization of the rank-one projective measurement is equivalent to\nthe optimization of a complex matrix $C$. When the gradient-free methods are applied, all entries in $C$ are\nupdated via the given algorithm in each episode, then adjusted via the Gram-Schmidt orthogonalization\nprocedure to make sure $\\{|\\phi_i\\rangle\\langle\\phi_i|\\}$ is a legitimate projective measurement, i.e.,\n$\\langle\\phi_i|\\phi_j\\rangle=\\delta_{ij},~\\forall i,j$ and $\\sum_i|\\phi_i\\rangle\\langle\\phi_i|=\\openone$. The\nsecond option {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"} means the optimization is performed in the second and third scenarios.\nThe input rule of {\\fontfamily{bch}\\selectfont\\small\\itshape minput} for the second scenario is {\\fontfamily{bch}\\selectfont\\small\\itshape minput=[\"LC\", [Pi1,Pi2,...], m]} and\nfor the third one is {\\fontfamily{bch}\\selectfont\\small\\itshape minput=[\"rotation\", [Pi1,Pi2,...]]}. Here {\\fontfamily{bch}\\selectfont\\small\\itshape [Pi1,Pi2,...]} is a list of\nmatrices representing the input measurement $[\\Pi_1,\\Pi_2,\\dots]$. The variable {\\fontfamily{bch}\\selectfont\\small\\itshape m} in the second\nscenario is an integer representing the number of operators of the output measurement, and thus should be no\nlarger than that of the input measurement. For example, assume the input measurement is $\\{\\Pi_i\\}^6_{i=1}$ and\ninputting 4 in the position of {\\fontfamily{bch}\\selectfont\\small\\itshape m} means that the output measurement is $\\{\\Pi^{\\prime}_{i}\\}^4_{i=1}$ where\n$\\Pi^{\\prime}_i=\\sum^{6}_{j=1}B_{ij}\\Pi_j$. 
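As a quick sanity check of this linear-combination construction, the following numpy sketch (with a hypothetical 6-operator projective input, independent of the package) verifies that a matrix $B$ with non-negative entries and unit column sums indeed yields a legitimate 4-operator POVM:

```python
import numpy as np

# Hypothetical 6-operator input measurement: projectors onto the
# computational basis of a 6-dimensional Hilbert space.
dim = 6
basis = np.eye(dim)
Pi = [np.outer(basis[j], basis[j].conj()) for j in range(dim)]

# Random 4x6 matrix B with non-negative entries and unit column sums;
# these constraints keep the combined operators positive and complete.
rng = np.random.default_rng(0)
B = rng.random((4, dim))
B /= B.sum(axis=0)                  # normalize each column to sum to one

# Combined measurement Pi'_i = sum_j B_ij Pi_j
Pi_prime = [sum(B[i, j] * Pi[j] for j in range(dim)) for i in range(4)]

# Completeness: sum_i Pi'_i = sum_j (sum_i B_ij) Pi_j = identity
assert np.allclose(sum(Pi_prime), np.eye(dim))
# Positivity: every eigenvalue of each Pi'_i is non-negative
assert all(np.linalg.eigvalsh(P).min() >= -1e-12 for P in Pi_prime)
```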
The optimization is to find an optimal real matrix $B$ for the optimal\nCFI or $\\mathrm{Tr}(W\\mathcal{I}^{-1})$. To make sure the updated measurement in each episode is still a legitimate\nPOVM, all entries of $B$ are restricted to the interval $[0,1]$ and $\\sum_{i}B_{ij}$ is required to be 1, which is\nrealized by a normalization process. In this scenario, apart from PSO and DE, AD can also be implemented. In\nthe third scenario, the unitary operation is expressed by $U=\\prod_k \\exp(i s_k\\lambda_k)$ where $\\lambda_k$ is\nan SU($N$) generator and $s_k$ is a real number in the interval $[0,2\\pi]$. The optimization is to find an optimal\nset of $\\{s_k\\}$ for the optimal CFI or $\\mathrm{Tr}(W\\mathcal{I}^{-1})$, and similarly to the second scenario, AD\nis also available here besides PSO and DE. In the case that {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"projection\"}, each entry of\n{\\fontfamily{bch}\\selectfont\\small\\itshape measurement0} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} is a list of arrays, and in the case that {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"}, each\nentry is an array.\n\n\\emph{Example.} Now we consider two models to demonstrate the measurement optimizations in the first scenario. The\nfirst one is a single-parameter case with the single-qubit Hamiltonian $H=\\omega\\sigma_3\/2$ and dynamics in\nEq.~(\\ref{eq:ME_spon}). $\\delta_{\\mathrm{c}}\\omega$ and $\\delta_{\\mathrm{q}}\\omega$ are defined in\nEqs.~(\\ref{eq:c_deviation}) and (\\ref{eq:q_deviation}). As shown in Fig.~\\ref{fig:Mopt}(a), $\\delta_{\\mathrm{c}}\\omega$\nfor the projective measurement $\\{\\Pi_{+}\\!=\\!|+\\rangle\\langle+|,\\Pi_{-}\\!=\\!|-\\rangle\\langle-|\\}$ (dotted black line) can\nonly reach $\\delta_{\\mathrm{q}}\\omega$ (dashed cyan line) at some specific time points, which has already been shown in\nSec.~\\ref{sec:QCRB}. 
However, utilizing the optimal projective measurements obtained via PSO (blue crosses) and DE (yellow\ncircles), $\\delta_{\\mathrm{c}}\\omega$ saturates $\\delta_{\\mathrm{q}}\\omega$ for all target times. This performance coincides\nwith the common understanding that the QFI can be theoretically attained by certain optimal measurements.\n\nIn the case of multiparameter estimation, we use the Hamiltonian in Eq.~(\\ref{eq:NV_H}) and dynamics in\nEq.~(\\ref{eq:NV_ME}) to demonstrate the performance of the optimal projective measurements. The magnetic field\n$\\vec{B}$ is still the quantity to be estimated. Different from the single-parameter case, the values of\n$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ for the optimal measurements found by PSO (blue crosses) and DE (yellow circles)\ncannot attain $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (black underlines) in the absence of control, as shown in\nFig.~\\ref{fig:Mopt}(b). The gap between $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ and $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ is\ndue to the fact that the quantum Cram\\'{e}r-Rao bound is not attainable here. Next, together with the optimal\nmeasurement which gives the lowest $\\mathrm{Tr}(W\\mathcal{I}^{-1})$, the control is also invoked to further evaluate\nthe reduction of $\\mathrm{Tr}(W\\mathcal{I}^{-1})$. Utilizing the optimal controls obtained via auto-GRAPE, the values\nof $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ (red pentagrams) are further reduced compared to the non-controlled case, yet they\nare still unable to attain the controlled values of $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (cyan triangles) in general due\nto the attainability problem. 
Nevertheless, their differences are quite small for some target times, indicating\nthat the combined performance of the optimal measurement and optimal control approaches the ultimate precision limit.\nThe optimal measurement $\\{|\\phi_1\\rangle\\langle\\phi_1|,\\cdots,|\\phi_6\\rangle\\langle\\phi_6|\\}$ obtained by DE in the\nabsence of control is demonstrated in Fig.~\\ref{fig:Mopt}(c). The red and blue bars represent the real and imaginary\nparts of the coefficients of $|\\phi_1\\rangle$ to $|\\phi_6\\rangle$ in the basis\n$\\{|1\\!\\uparrow\\rangle,|1\\!\\downarrow\\rangle,|0\\!\\uparrow\\rangle,|0\\!\\downarrow\\rangle,|\\!-\\!1\\!\\uparrow\\rangle,\n|\\!-\\!1\\!\\downarrow\\rangle\\}$.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_Mopt_input.pdf}\n\\caption{Demonstration of the measurement optimization in the second (LC) and\nthird scenarios (rotation). The cyan upward triangles, blue crosses and\nyellow circles represent the performance of optimal measurements found by AD, PSO,\nand DE, respectively, in the second scenario. The red downward triangles, green\ndiamonds and orange pentagrams represent the performance of optimal measurements\nfound by AD, PSO, and DE in the third scenario. }\n\\label{fig:Mopt_input}\n\\end{figure}\n\nThe optimizations in the second and third scenarios are also demonstrated with the Hamiltonian in\nEq.~(\\ref{eq:NV_H}) and dynamics in Eq.~(\\ref{eq:NV_ME}). The input measurement is taken as\n$\\{|ij\\rangle\\langle ij|\\}_{i=0,\\pm 1;j=\\uparrow,\\downarrow}$, which includes 6 operators. In the second\nscenario, the number of output POVM operators is set to be 4. As shown in Fig.~\\ref{fig:Mopt_input}, the\nperformance of measurements found by AD (cyan upward triangles), PSO (blue crosses) and DE (yellow circles)\napproaches and even reaches that of the input measurement (magenta pluses). 
This fact indicates that in this\ncase, an optimal 4-operator measurement can reach the performance of the original 6-operator measurement, and\nreducing the number of operators may benefit the practical precision of the measurements in experiments.\nIn the third scenario, the performance of optimal measurements found by AD (red downward triangles), PSO\n(green diamonds) and DE (orange pentagrams) is not only significantly better than that of the input measurement,\nbut also approaches the ultimate precision limit given by $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (black underlines),\nindicating that the performance of these optimal measurements is very close to that of the globally optimal\nmeasurements, if any exist. The probe states, the true values of the parameters to be estimated and other\nparameters are set to be the same as those in Sec.~\\ref{sec:multi}.\n\n\n\\section{Comprehensive optimization}\n\\label{sec:comprehensive_opt}\n\n\\begin{table}[tp]\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\hline\n~~Algorithms~~ & ~~method=~~ & \\multicolumn{2}{c}{~~**kwargs and default values~~}\\\\\n\\hline\n\\multirow{9}{*}{PSO} & \\multirow{9}{*}{\"PSO\"} & \"p\\_num\" & 10 \\\\\n & & \"psi0\" & [] \\\\\n & & \"ctrl0\" & [] \\\\\n & & \"measurement0\" & [] \\\\\n & & \"max\\_episode\" & [1000,100] \\\\\n & & \"c0\" & 1.0 \\\\\n & & \"c1\" & 2.0 \\\\\n & & \"c2\" & 2.0 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{8}{*}{DE} & \\multirow{8}{*}{\"DE\"} & \"p\\_num\" & 10 \\\\\n & & \"psi0\" & [] \\\\\n & & \"ctrl0\" & [] \\\\\n & & \"measurement0\" & [] \\\\\n & & \"max\\_episode\" & 1000 \\\\\n & & \"c\" & 1.0 \\\\\n & & \"cr\" & 0.5 \\\\\n & & \"seed\" & 1234 \\\\\n\\hline\n\\multirow{7}{*}{AD} & \\multirow{7}{*}{\"AD\"} & \"Adam\" & False \\\\\n\\multirow{7}{*}{(available} & & \"psi0\" & [] \\\\\n\\multirow{7}{*}{{for SC})} & & \"ctrl0\" & [] \\\\\n & & \"measurement0\" & [] \\\\\n & & \"max\\_episode\" & 300 \\\\\n & & \"epsilon\" & 0.01 \\\\\n & & \"beta1\" & 
0.90 \\\\\n & & \"beta2\" & 0.99 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\caption{Available methods for comprehensive optimization in QuanEstimation and\ncorresponding default parameter settings. Notice that AD is only available\nwhen {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()} is called. }\n\\label{table:CompOpt_paras}\n\\end{table}\n\n\\begin{figure*}[tp]\n\\centering\\includegraphics[width=16cm]{Fig_compre.pdf}\n\\caption{Illustration of the comprehensive optimization (first lines with\ngray background) and combination of univariate optimizations (second lines)\nin four types of multivariate optimizations, including the optimizations of\n(a) the probe state and measurement (SM), (b) the probe state and control (SC),\n(c) control and measurement (CM), and (d) the probe state, control, and measurement\n(SCM).}\n\\label{fig:compre}\n\\end{figure*}\n\nThe previous sections focused on the univariate (single variable) optimizations. However, in a\npractical scenario the probe state, control (if available) and measurement may all need to be\noptimized. More importantly, the optimal results obtained for a univariate optimization may cease\nto be optimal when other variables are involved. For example, the optimal probe state and\nmeasurement for the non-controlled case may not be optimal anymore in the controlled case. Hence,\nsometimes a comprehensive optimization, i.e., simultaneous multivariate optimization, is needed.\n\nQuanEstimation can deal with four types of multivariate optimizations, including the optimizations of\nthe probe state and measurement (SM), the probe state and control (SC), control and measurement (CM),\nand all three together (SCM). In these scenarios, the key feature of comprehensive optimization is\nthat all variables are optimized simultaneously. 
Regarding the objective function, in the cases of SM,\nCM, and SCM, namely, when the measurement is involved, it has to depend on the measurement.\nIn the current version of the package it is chosen as the CFI or $\\mathrm{Tr}(W\\mathcal{I}^{-1})$.\nIn the case of SC, the objective function could be either QFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ or\nCFI\/$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ for a flexible or fixed choice of measurement. The processes of the\ncomprehensive optimizations and the corresponding objective functions are illustrated in the first\nlines (with gray background) in Figs.~\\ref{fig:compre}(a-d). In QuanEstimation, the codes for the\nexecution of comprehensive optimization are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\ncom = ComprehensiveOpt(savefile=False,\n method=\"DE\",**kwargs)\ncom.dynamics(tspan,H0,dH,Hc=[],ctrl=[],\n decay=[],ctrl_bound=[])\ncom.SM(W=[])\ncom.SC(W=[],M=[],target=\"QFIM\",LDtype=\"SLD\")\ncom.CM(rho0,W=[])\ncom.SCM(W=[])\n\\end{lstlisting}\nIn the case that the parameterization is described by the Kraus operators, replace {\\fontfamily{bch}\\selectfont\\small\\itshape com.dynamics()}\nwith the code {\\fontfamily{bch}\\selectfont\\small\\itshape com.Kraus(K,dK)}. All four types of comprehensive optimizations can be called through\n{\\fontfamily{bch}\\selectfont\\small\\itshape com.SM()}, {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()}, {\\fontfamily{bch}\\selectfont\\small\\itshape com.CM()}, and {\\fontfamily{bch}\\selectfont\\small\\itshape com.SCM()}. Notice that if\n{\\fontfamily{bch}\\selectfont\\small\\itshape com.Kraus()} is invoked, only {\\fontfamily{bch}\\selectfont\\small\\itshape com.SM()} is available as control is not suitable for the\nparameterization process described by the Kraus operators. In {\\fontfamily{bch}\\selectfont\\small\\itshape com.CM()}, the input {\\fontfamily{bch}\\selectfont\\small\\itshape rho0}\nis a matrix representing the fixed probe state. 
In {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()}, the objective function can be set via\n{\\fontfamily{bch}\\selectfont\\small\\itshape target=\" \"}, including three choices {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"QFIM\"} (default), {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"CFIM\"},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"HCRB\"}. If a set of measurements is input via {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]}, the objective function\nwill be automatically chosen as the CFIM regardless of the input in {\\fontfamily{bch}\\selectfont\\small\\itshape target=\" \"}. The type of QFIM\ncan be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"} ({\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}). The\navailable methods for the comprehensive optimization and corresponding default parameter settings are given in\nTable~\\ref{table:CompOpt_paras}. Notice that AD is only available when {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()} is called and\nthe objective function is not the HCRB. At the end of the program, \"f.csv\" will be generated, containing the values\nof the objective function in all episodes. In the meantime, some or all of the files \"controls.csv\", \"states.csv\",\nand \"measurements.csv\" will also be generated according to the type of comprehensive optimization.\n\nAlternatively, the multivariate optimization can also be finished by the combination of univariate\noptimizations, as shown in the second lines in Figs.~\\ref{fig:compre}(a-d). In the case of SM (or\nCM) shown in Fig.~\\ref{fig:compre}(a) [Fig.~\\ref{fig:compre}(c)], one could first perform the state\n(control) optimization with QFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ as the objective function. 
Next,\ntake the found optimal state (control) as the fixed input, and further optimize the measurement\nwith CFI\/$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ as the objective function. If the optimized values of the\nCFI\/$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ in the second process reach the optimized values of the\nQFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ in the first process, the entire scheme is then optimal. Things\ncould be more complex in the multiparameter estimation due to the attainability problem. The existence\nof the gap between the optimized $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ and $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\ndoes not necessarily mean the scheme is not optimal. Nevertheless, there is no doubt that a smaller gap\nalways implies a better scheme at least in theory. In the case of SC, the state optimization and\ncontrol optimization can be performed in turn with the optimal quantity found in the previous\nturn as the fixed input [Fig.~\\ref{fig:compre}(b)]. As in the comprehensive optimization, both\nQFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ and CFI\/$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ can be taken as the\nobjective function in this case. Finally, in the case of SCM, the combination strategy in SC\ncan be performed first with QFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ as the objective function, and\nthe measurement is then further optimized with the found optimal state and control as the fixed input\n[Fig.~\\ref{fig:compre}(d)]. As in the SM scenario, if the optimized CFI\/$\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nobtained in the second process reaches the optimized QFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ in the\nfirst process, the entire scheme is optimal.\n\n\\emph{Example.} Now we provide some demonstrations of the comprehensive optimization with QuanEstimation\nand compare its performance with the combination strategy. First, consider a non-controlled example\nwith the single-qubit Hamiltonian $\\omega\\sigma_3\/2$, which is an SM scenario. 
The dynamics is\ngoverned by Eq.~(\\ref{eq:ME_spon}) with decay rates $\\gamma_{-}\/\\omega_{\\mathrm{tr}}=0$ and\n$\\gamma_{+}\/\\omega_{\\mathrm{tr}}=0.1$. The target time is $\\omega_{\\mathrm{tr}}T=20$. In this case, the optimized\nvalues of $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{c}}\\omega$ in the comprehensive optimization and\ncombination strategy are both 0.608 (in the units of $\\omega_{\\mathrm{tr}}$, same below), equivalent to the optimal\n$\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$ obtained in the state optimization alone, indicating\nthat the schemes found by both strategies are indeed optimal in theory. Next we invoke the controls described\nin Eq.~(\\ref{eq:ctrl_demo}). In the case of SC, the optimized $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$\nobtained in the combination strategy is 0.441, and that in the comprehensive optimization is 0.440. Furthermore,\nin the case of SCM, the optimized $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{c}}\\omega$ provided by the\ncombination strategy is 0.441, equivalent to the optimal $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$\nobtained in the SC, and that provided by the comprehensive optimization is 0.443. The performance of these two\nstrategies basically coincides in this example.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_compre_NV.pdf}\n\\caption{Performance comparison between the comprehensive optimization\nand combination strategy in the multiparameter estimation in the case of SCM.\nThe dashed blue line represents the optimization of $\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nin the comprehensive optimization. The solid red lines represent the optimization\nof $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ in the SC (first 500 episodes) and that of\n$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ in the measurement optimization (last 500\nepisodes) in the combination strategy. 
The inset shows the performance of\ndifferent combination strategies in the SC part, which differ in the episode number\nallotted to each optimization. All the optimizations in the figure are finished by DE.}\n\\label{fig:compre_multi}\n\\end{figure}\n\nThis equivalent performance may be due to two facts: the example is simple and the QFI is attainable in\ntheory. In the multiparameter estimation, these two strategies may show divergent performance as the\nQFIM is not always guaranteed to be attainable. For example, in the case of SCM, $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\nis first optimized in the SC. However, it is hard to say whether the optimal probe state and control\nfor an unattainable $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ can still provide a good $\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nand benefit the subsequent measurement optimization. To investigate it, we still take the nitrogen-vacancy\ncenter as an example. The free Hamiltonian, control Hamiltonian, and dynamics are described in\nEqs.~(\\ref{eq:NV_H}), (\\ref{eq:NV_c}) and (\\ref{eq:NV_ME}). The performance of comprehensive optimization and\ncombination strategy in the SCM are shown in Fig.~\\ref{fig:compre_multi}. The comprehensive optimization (dashed\nblue line), which takes $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ as the objective function, basically converges at\naround $110$ episodes. The combination strategy (solid red line) splits into two parts: the first\n500 episodes are the combined optimization of SC, and the last 500 episodes are the optimization of the\nmeasurement. The gap between these two lines is actually the gap between the optimal $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\nand the value of $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ with a random measurement. In the SC part, the alternating\noptimizations of the probe state and control can be arranged in different ways according to the episode number allotted to each\noptimization. 
As shown in the inset of Fig.~\\ref{fig:compre_multi}, here we test several selections, including\n20 episodes for each optimization (solid blue line with circles), 50 episodes for each optimization (dashed green\nline), 100 episodes for each optimization (dash-dotted cyan line), 200 episodes for state optimization and 300\nepisodes for control optimization (solid red line), and 300 episodes for state optimization and 200 episodes\nfor control optimization (dotted black line). In these selections, the fourth one, 200 episodes for state\noptimization and 300 episodes for control optimization, shows the best performance at the end of the 500\nepisodes, and the corresponding optimal state and control are chosen for the subsequent measurement optimization.\nIn this example, the final performance of the combination strategy is better than that of the simultaneous\nstrategy, indicating that the unattainability of $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ in the SC does not negatively\naffect the final performance. However, this result does not mean the combination strategy is always\nbetter in general. In practice, the comparison of these two strategies might still be needed case by case in\nthe scheme design.\n\n\\section{Adaptive measurement schemes}\n\\label{sec:adapt}\n\nAdaptive measurement is another common scenario in quantum parameter estimation. In this scenario, apart from the\nunknown parameters $\\bold{x}$, the Hamiltonian also includes a set of tunable parameters $\\bold{u}$. A typical case\nis that the tunable parameters are invoked in the same way as $\\bold{x}$, resulting in the total Hamiltonian\n$H(\\bold{x}+\\bold{u})$. In the point estimation approach, the QFIM and CFIM computed at the true values of $\\bold{x}$\nmay not always provide the practically achievable precision due to the fact that the actual working point may be\nslightly away from the true values. 
Hence, the tunable parameters $\\bold{u}$ are invoked to let the Hamiltonian\n$H(\\bold{x}+\\bold{u})$ work at the optimal point $\\bold{x}_{\\mathrm{opt}}$. An obvious difficulty for the\nimplementation of this scheme is that one actually does not know the true values in practice, which means $\\bold{u}$\nhas to be given according to the estimated values $\\hat{\\bold{x}}$, and the entire scheme would only be useful when\nit is implemented adaptively. In the meantime, a pre-estimation of $\\bold{x}$ is usually needed, since an inaccurate\n$\\hat{\\bold{x}}$ would result in an inaccurate $\\bold{u}$, leaving $\\bold{x}+\\bold{u}$ inevitably far\nfrom $\\bold{x}_{\\mathrm{opt}}$ and degrading the performance of the scheme. This scheme has been applied by Berni\net al.~\\cite{Berni2015} in optical phase estimation with additional real-time feedback controls.\n\nNow let us introduce in detail all the steps required to implement this scheme. Consider the Hamiltonian $H(\\bold{x})$\nwhere $\\bold{x}$ is restricted to a finite region with a prior distribution $p(\\bold{x})$. The first step is to find\nthe optimal value $\\bold{x}_{\\mathrm{opt}}$ in this region with respect to the minimum $\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nwhen the measurement is fixed. If the measurement can be altered flexibly in practice, $\\bold{x}_{\\mathrm{opt}}$,\ntogether with the corresponding optimal measurement, can be obtained with $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\nas the objective function. Next, perform the pre-estimation via the Bayesian estimation with the fixed or\noptimal measurement and update the prior distribution with the posterior distribution in Eq.~(\\ref{eq:Bayes_posterior}).\nWhen $p(\\bold{x})$ has been updated to a reasonably narrow distribution, the tunable parameters $\\bold{u}$ are then\ninvoked into the system. 
In the $n$th round of this step, with the observed result $y^{(n)}$, the posterior distribution\nis obtained via Bayes' rule as\n\\begin{equation}\np(\\bold{x},\\bold{u}^{(n)}|y^{(n)})=\\frac{p(y^{(n)}|\\bold{x},\\bold{u}^{(n)})\np(\\bold{x})}{\\int p(y^{(n)}|\\bold{x},\\bold{u}^{(n)})p(\\bold{x})\\mathrm{d}\\bold{x}},\n\\end{equation}\nwhere $\\bold{u}^{(n)}$ is obtained in the $(n-1)$th round. The estimated value $\\hat{\\bold{x}}^{(n)}$ can be\nobtained through the MAP, $\\hat{\\bold{x}}^{(n)}=\\mathrm{argmax}\\,p(\\bold{x},\\bold{u}^{(n)}|y^{(n)})$. The\nvalue of $\\bold{u}$ used in the next round is obtained via the formula\n$\\bold{u}^{(n+1)}=\\bold{x}_{\\mathrm{opt}}-\\hat{\\bold{x}}^{(n)}$, and the prior distribution is also replaced by\nthe current posterior distribution. In QuanEstimation, the pre-estimation can be finished with the function\n{\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} discussed in Sec.~\\ref{sec:Bayesian}, and the adaptive process can be executed with the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\napt = adaptive(x,p,rho0,savefile=False,\n max_episode=1000,eps=1e-8)\napt.dynamics(tspan,H,dH,Hc=[],ctrl=[],\n decay=[])\napt.CFIM(M=[],W=[])\n\\end{lstlisting}\nIn the case that the parameterization process is described by the Kraus operators, replace\n{\\fontfamily{bch}\\selectfont\\small\\itshape apt.dynamics()} with {\\fontfamily{bch}\\selectfont\\small\\itshape apt.Kraus(K,dK)}. The inputs {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the\nsame as those in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. The input {\\fontfamily{bch}\\selectfont\\small\\itshape H} is a list of matrices representing the Hamiltonian\nwith respect to the values in {\\fontfamily{bch}\\selectfont\\small\\itshape x}, and it is multidimensional in the multiparameter case. 
{\\fontfamily{bch}\\selectfont\\small\\itshape dH}\nis a (multidimensional) list with each entry also a list representing $\\partial_{\\bold{x}}H$ with respect to the\nvalues in {\\fontfamily{bch}\\selectfont\\small\\itshape x}. In the case that specific functions of $H$ and $\\partial_{\\bold{x}}H$ can be provided,\n{\\fontfamily{bch}\\selectfont\\small\\itshape H} and {\\fontfamily{bch}\\selectfont\\small\\itshape dH} can be alternatively generated via the function {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} discussed\nin Sec.~\\ref{sec:para}. In {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, {\\fontfamily{bch}\\selectfont\\small\\itshape M} is the input measurement and the default one is a\nset of SIC-POVM.\n\nDuring the running of the codes, three files \"xout.csv\", \"y.csv\", and \"pout.csv\" will be generated, containing the\ndata of $\\hat{\\bold{x}}$, the results $y$ in all rounds of iteration, and the final $p(\\bold{x})$. In the case\nthat {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, \"pout.csv\" contains the data of $p(\\bold{x})$ in all rounds. If the choice of\nmeasurement is flexible in the experiment, before the invocation of {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, the optimal measurement\nwith respect to $\\bold{x}_{\\mathrm{opt}}$ can be first obtained via calling {\\fontfamily{bch}\\selectfont\\small\\itshape M = apt.Mopt(W=[])}.\nIn the case that the users would like to run the pre-estimation with the optimal measurement, they can just call\n{\\fontfamily{bch}\\selectfont\\small\\itshape apt.Mopt()} first and input the optimal measurement to {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} for the pre-estimation.\n\nDuring the running of {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, the users should enter the result $y$ obtained in practice and\nwill receive the values of $\\bold{u}$ to be used in the next round of the experiment. 
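For illustration, a single-parameter version of the adaptive loop above can be sketched in plain numpy. The likelihood $p(+|x,u)=[1+\\sin(x+u)]\/2$ and the choice $x_{\\mathrm{opt}}=0$ are hypothetical stand-ins, independent of the package; each round performs the Bayes update, the MAP estimate, and the feedback $u^{(n+1)}=x_{\\mathrm{opt}}-\\hat{x}^{(n)}$:

```python
import numpy as np

# Hypothetical single-parameter model (not the package's dynamics):
# outcome "+" occurs with probability p(+|x,u) = [1 + sin(x+u)]/2,
# and the optimal working point is taken as x_opt = 0.
x_opt = 0.0
x_grid = np.linspace(-np.pi / 4, 3 * np.pi / 4, 1000)
prior = np.full(x_grid.size, 1.0 / x_grid.size)    # flat prior on the grid

def adaptive_round(prior, u, y):
    """Bayes update for result y, MAP estimate, and the next feedback u."""
    p_plus = (1.0 + np.sin(x_grid + u)) / 2.0
    likelihood = p_plus if y == 0 else 1.0 - p_plus
    posterior = likelihood * prior
    posterior /= posterior.sum()                   # renormalize on the grid
    x_hat = x_grid[np.argmax(posterior)]           # MAP estimate
    return posterior, x_hat, x_opt - x_hat         # u^(n+1) = x_opt - x_hat

rng = np.random.default_rng(0)
x_true, u = 0.5, 0.0
for _ in range(200):
    p_plus_true = (1.0 + np.sin(x_true + u)) / 2.0
    y = 0 if rng.random() < p_plus_true else 1     # simulated experiment
    prior, x_hat, u = adaptive_round(prior, u, y)

# The MAP estimate converges towards the (hidden) true value
assert abs(x_hat - x_true) < 0.3
```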
In the case that the users\nhave already done the pre-estimation by themselves, they can directly use {\\fontfamily{bch}\\selectfont\\small\\itshape adaptive()} without calling\n{\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} first.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_adpt.pdf}\n\\caption{Performance comparison between the adaptive (dashed blue line)\nand non-adaptive (solid red line) schemes. The adaptive measurement starts\nafter 500 rounds of pre-estimation. The non-adaptive scheme is a full\nBayesian estimation.}\n\\label{fig:adpt}\n\\end{figure}\n\nLet us still take the Hamiltonian in Eq.~(\\ref{eq:Bayes_demo}) as an example. The initial state is $|+\\rangle$ and\nthe target time is $\\omega_0 T=1$ (Planck units are applied). The prior distribution is uniform in the interval\n$(-\\pi\/4,3\\pi\/4)$. The measurement is $\\{|+\\rangle\\langle +|,|-\\rangle\\langle-|\\}$. $x_{\\mathrm{opt}}$ is taken\nas zero. The results are simulated by generating random values in the interval $[0,1]$; when the value is smaller (larger)\nthan $p(+|x)$, the posterior distribution is calculated with $p(+|x)$ [$p(-|x)$]. As shown in Fig.~\\ref{fig:adpt},\nafter 500 rounds of pre-estimation, the adaptive scheme (dashed blue line) indeed shows better performance (smaller\nvariance) compared to the non-adaptive scheme (solid red line), which relies on Bayesian estimation throughout.\n\nAnother famous adaptive scheme in quantum parameter estimation is the online adaptive phase estimation proposed by\nBerry et al.~\\cite{Berry2000,Berry2001}. In this scheme, after reading the result $y^{(n)}$ in the $n$th round,\nthe value of the tunable phase $\\Phi_{n+1}$ or phase difference $\\Delta\\Phi_{n+1}$ is generated. 
The relation between $\\Phi_{n+1}$ and $\\Delta\\Phi_{n+1}$ can be taken as $\\Phi_{n+1}=\\Phi_{n}-(-1)^{y^{(n)}}\\Delta\\Phi_{n+1}$. Hentschel and Sanders~\\cite{Hentschel2010,Hentschel2011} further provided an offline strategy with PSO, and the optimization methods have been further extended to DE~\\cite{Lovett2013} and genetic algorithms~\\cite{Rambhatla2020} in recent years. Apart from the original references, details of this scheme can also be found in a recent review~\\cite{Liu2022}. In QuanEstimation, this scheme can be executed by the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\napt = adaptMZI(x,p,rho0)\napt.general()\napt.online(output=\"phi\")\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape rho0} is a matrix representing the probe state. The output can be switched between $\\Phi$ and $\\Delta\\Phi$ by setting {\\fontfamily{bch}\\selectfont\\small\\itshape output=\"phi\"} or {\\fontfamily{bch}\\selectfont\\small\\itshape output=\"dphi\"} in {\\fontfamily{bch}\\selectfont\\small\\itshape apt.online()}. The offline strategy can also be executed by replacing {\\fontfamily{bch}\\selectfont\\small\\itshape apt.online()} with {\\fontfamily{bch}\\selectfont\\small\\itshape apt.offline(method=\"DE\",**kwargs)}; PSO is also available here ({\\fontfamily{bch}\\selectfont\\small\\itshape method=\"PSO\"}). When the entire program is finished, a file named \"xout.csv\" containing the output data of all rounds will be generated. In the case of the online scheme, an additional file \"y.csv\" containing the results $y$ of all rounds will also be generated.
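A minimal sketch of the update rule just quoted, outside the toolkit: the single-photon interferometer outcome probability $p(y=0)=[1+\cos(\phi-\Phi)]/2$ and the $1/n$ step schedule below are illustrative assumptions, not the optimized feedback policies that PSO or DE would return:

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = 1.2  # unknown interferometer phase (illustrative)
Phi = 0.0       # tunable feedback phase

def p0(phi, Phi):
    # assumed Mach-Zehnder probability of the outcome y = 0
    return 0.5 * (1 + np.cos(phi - Phi))

feedbacks = [Phi]
for n in range(1, 101):
    y = int(rng.random() > p0(phi_true, Phi))  # simulated result y^(n)
    dPhi = np.pi / (2 * n)                     # illustrative step schedule
    Phi = Phi - (-1) ** y * dPhi               # Phi_{n+1} = Phi_n - (-1)^y dPhi_{n+1}
    feedbacks.append(Phi)
```

Note that the fixed point of this rule sits where the two outcomes are equally likely, which is the usual operating point of adaptive interferometric phase estimation.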
The role of {\\fontfamily{bch}\\selectfont\\small\\itshape apt.general()} here is to leave room for the future inclusion of adaptive phase estimation in other optical scenarios, such as SU(1,1) interferometers.\n\n\\section{Summary}\n\nIn this paper, we present a new open-source toolkit, QuanEstimation, for the design of optimal schemes in quantum parameter estimation. The source code of the package, as well as the demonstration codes for all examples discussed in this paper, can be downloaded from GitHub~\\cite{github1}. The package is built on both Python and Julia. This combined structure guarantees that the computational efficiency of Julia is fully utilized, while users with no knowledge of Julia face no obstacle in using the package. In the meantime, a full Julia version of the package is also available on GitHub~\\cite{github2}, which is suitable for those familiar with Julia. QuanEstimation includes several well-studied metrological tools in quantum parameter estimation, such as the various types of Cram\\'{e}r-Rao bounds and their quantum correspondences, the quantum Ziv-Zakai bound, and Bayesian estimation. For scheme design, QuanEstimation can execute the optimization of the probe state, control, and measurement, as well as comprehensive optimizations, namely, simultaneous optimizations among them. General adaptive measurement schemes, as well as adaptive phase estimation, can also be performed with this toolkit.\n\nQuanEstimation is suitable for many practical quantum systems, especially those with finite-dimensional Hilbert spaces, such as trapped ions, nitrogen-vacancy centers, and quantum circuits.
Therefore, it is not only useful for theorists working in the field of quantum parameter estimation, but could also be particularly useful for experimentalists who are not yet familiar with the theories in this field but intend to utilize them to design experimental schemes. More functions and features will be constantly added to the package, and the calculation efficiency for certain specific scenarios will be further improved in the future. We believe that there is a good chance that this package will become a common toolkit in the field of quantum metrology for numerical calculations and scheme design.\n\n\\begin{acknowledgments}\nThe authors would like to thank Prof.~Re-Bing Wu, Prof.~Christiane P. Koch, Prof.~Lijian Zhang, Jinfeng Qin, and Yuqian Xu for helpful discussions. This work was supported by the National Natural Science Foundation of China (Grants No.\\,12175075, No.\\,11805073, No.\\,11935012 and No.\\,11875231), and the National Key Research and Development Program of China (Grants No.\\,2017YFA0304202 and No.\\,2017YFA0205700). H.Y. also acknowledges the support from the Research Grants Council of Hong Kong (Grant No.\\,14307420). R.D.D. was supported by National Science Center (Poland) with Grant No.~2020\/37\/B\/ST2\/02134.\n\\end{acknowledgments}\n\n\\section{Introduction} \\label{sec:intro}\n\nWeinberg (1992, Paper I) identified color-selected variables in the IRAS Point Source Catalog (PSC) with AGB stars based on color consistency and the circumstantial sensitivity of the IRAS survey to long-period variables (cf. Harmon \\& Gilmore 1988). These were then used as rough standard candles to infer a large-scale asymmetry in the stellar distribution. The identification of IRAS variables with AGB stars was strengthened by an in-depth study of a bright subset (Allen, Kleinmann \\& Weinberg 1993).
Carbon-selected AGB stars (carbon stars) have also proven to be effective tracers (see e.g. Metzger \\& Schechter 1994). Advantages of AGB tracers are reviewed in Weinberg (1994). In general, standard candle analyses have the advantage over flux or star count analyses in providing direct information about the three-dimensional structure of the Galaxy. However, uncertainties in their selection and intrinsic properties may bias any inference and, especially for the IRAS-selected sample, the census is incomplete.\n\nPaper I described an approach to large-scale Galactic structure using a star count analysis which allows the information to be reconstructed and possibly corrected in the observer's coordinate system before translating to a Galactocentric system. Unfortunately, this translation approach is only natural if the coverage is complete, and it suffered in application to the IRAS sample because of spatial gaps due to an incomplete second full-sky epoch. Here, we present the results of a different approach to the problem: direct density estimation by maximum likelihood. A Bayesian density estimation has the advantage of directly incorporating selection effects and missing data.\n\nThe number of ongoing surveys that bear on Galactic structure---SDSS, 2MASS, DENIS---which at various stages will have surveyed parts of the sky is a second motivation for this study; there is a need for a systematic method suited to inferential studies using possibly incomplete data from many wave bands. Recent analyses (e.g. Bahcall \\& Soneira 1980 in the optical; Wainscoat et al. 1992 in the infrared) have modeled the Galactic components with standard profiles and structural parameters chosen to provide a match to star count data.
To explore the structural parameters themselves, we propose a Bayesian density estimation technique to treat data from scattered fields during the survey and to easily incorporate data from multiple wave bands. Conceptually, this approach is midway between a classical inversion and modeling.\n\nThe first part of the paper describes and characterizes the method. More specifically, \\S\\ref{sec:iras} reviews the IRAS selection procedure described in Paper I and motivates the approach. The new analysis based on statistical density estimation is presented in \\S\\ref{sec:bayes} and precisely defined in \\S\\ref{sec:likelihood}. The second part of the paper describes Monte-Carlo tests and the results of applying the method to the IRAS data (\\S\\ref{sec:results}). We conclude in \\S\\ref{sec:summary} with a summary and discussion.\n\n\\section{IRAS source selection} \\label{sec:iras}\n\nThe analysis in Paper I was based on the variables selected in the IRAS Point Source Catalog (1988) by both color and $P_{var}$. Following the source selection procedure described in Paper I, we selected stars from the IRAS Point Source Catalog with $F_{12}>2$ Jy and variability flag $P_{var}\\ge 98\\%$. Although the flux limit reduces the confusion in source identification toward the center of the Galaxy, it also restricts the sensitivity to distant sources. The limiting distance to a star ($d$) is estimated using a simple exponential layer with vertical scale height $h$ and mid-plane extinction coefficient $K_{12}$:\n\\begin{equation}\nm = M + 5 \\lg d - 5 + K_{12}\\,h\\,(1-e^{-d \\sin |b| \/ h})\\,\/\\sin |b|.\n\\label{eq:b1}\n\\end{equation}\nFor a typical AGB star ($L = 3500 L_{\\odot}$, see Appendix A) and $K_{12}=0.18$ kpc$^{-1}$, the limiting distance in the plane is $R_{lim}=7$ kpc. We assume that the extinction is dominated by the molecular gas, that $h=100$ pc, and that the extincting layer is horizontally isotropic.
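Equation (b1) can be inverted numerically for this limiting distance. A sketch, assuming the stated $K_{12}=0.18$ mag/kpc and $h=100$ pc; the absolute magnitude and limiting apparent magnitude used below are illustrative placeholders, not the calibrated 12 $\mu$m values:

```python
import numpy as np

def apparent_mag(d, M, b, K12=0.18, h=0.1):
    """Eq. (b1) with d and h in kpc, K12 in mag/kpc, b in radians."""
    sb = np.sin(abs(b))
    if sb < 1e-8:
        ext = K12 * d  # in-plane limit of the extinction term
    else:
        ext = K12 * h * (1 - np.exp(-d * sb / h)) / sb
    return M + 5 * np.log10(d * 1e3) - 5 + ext

def limiting_distance(m_lim, M, b, lo=1e-4, hi=100.0):
    """Bisect m(d) = m_lim for d (kpc); m(d) increases monotonically with d."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if apparent_mag(mid, M, b) < m_lim:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with a hypothetical $M=-5$ the in-plane limiting distance for $m_{lim}\approx 10.49$ comes out near the 7 kpc quoted above.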
The true extinction toward the inner Galaxy is most likely dominated by the molecular ring and nuclear region given the molecular gas distribution. However, a precise estimate of the true distribution is not available, and a horizontally isotropic model will adequately represent its systematic effect on the photometric distances.\n\nOf the more than 158,000 good flux-quality sources listed in the IRAS PSC, 5,736 satisfy both the flux limit and variability criteria. Their spatial distribution is shown in Figure \\ref{fig:sp_distr.1}. To obtain variability data, at least two epochs are needed. Unfortunately, IRAS' multiple epochs did not have complete sky coverage. Most of the coverage (77\\% in the galactic plane) was achieved in HCON 2 and HCON 3, separated by roughly 7.5 months on average. The rest of the galactic plane is poorly sampled (shaded regions in Figure \\ref{fig:sp_distr.1}). For this analysis, all the data in the poorly sampled sectors have been excised, reducing the size of the sample to 5,500 stars.\n\n\\section{Method overview} \\label{sec:bayes}\n\nAll of the selection effects, but especially data incompleteness, greatly complicate the analysis. Bayesian techniques are ideally suited to parameter estimation over data with general but well-defined selection criteria and underlie both the maximum entropy and maximum likelihood procedures. Below, we will parameterize the source density by an exponentiated orthogonal series with unknown coefficients $A_{ij}$ and $B_{ij}$ (cf. eq. \\ref{eq:d15}). In this context, the basic theorem of the theory reads:\n\\begin{equation}\nP \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,) = \n {{ P \\,(\\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, I \\,) \\cdot\n P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,) } \\over\n P \\,( D \\, | \\, I \\,)}.
\\label{eq:d1}\n\\end{equation}\nThe probability $P \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,)$ is the conditional (or {\\it posterior}) probability of the coefficients of the source density given the data ($D$) and the information ($I$) describing its incompleteness. The probability $P\\,(\\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, I \\,)$ is the prior probability (or simply, {\\it prior}) of the coefficients given only the information. Following Bretthorst (1990), we assign the prior using the maximum entropy principle. In our case it is constant, implying that all coefficient values are equally likely initially. The function $P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,)$ is the direct probability, which describes the likelihood of the data given the coefficients. Finally, $P \\,( D \\, | \\, I \\,)$ is a normalization constant which may be omitted provided that the posterior probability is normalized.\n\nWith these definitions, it follows that\n\\begin{equation}\nP \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,) = \\mbox{Const} \\cdot\n P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,), \\label{eq:d2}\n\\end{equation}\nor in words, the posterior probability is proportional to the likelihood function. Therefore, the best estimate of the posterior probability is obtained for the set of coefficients which maximizes the likelihood function.\n\n\\section{Likelihood function} \\label{sec:likelihood}\n\nThe likelihood is the joint probability of the observed stars given a source density. We may then consider the probability for a star with intrinsic luminosity in the range $( L, L+dL )$ to be detected in the distance interval $( s, s+ds )$, in the azimuth interval $( l, l+dl )$, in the galactic latitude interval $( b, b+db )$, and with magnitude in the range $( m, m+dm )$.
Assuming a normal\ndistribution of intrinsic luminosities $L$ and a normal error\ndistribution for the apparent magnitudes $m$ this becomes:\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m,\\,L\\,|\\,\\sigma_m,\\,\\sigma_L,\\,K_{12},\\,h,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dL\\,dm = \\nonumber \\\\ \nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(L-\\overline L)}^2\/2 \\sigma_L^2}\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dL\\,dm. \\label{eq:d3}\n\\end{eqnarray}\nHere $s$, $l$, $b$ are coordinates about the observer's position, $r$,\n$\\phi$, $z$ are coordinates about the center of the Galaxy, $C$ is the\nnormalization constant, $\\Sigma (r, \\phi, z)$ is the source density at\ngalactocentric radius $R_0$, $\\overline L$ and $\\sigma_L$ are the mean\nintrinsic luminosity and the dispersion of the sample, $\\sigma_m$ is\nthe measurement error in magnitudes and $\\overline m = \\overline\nm\\,(s, b)$ is given by equation (\\ref{eq:b1}). Alternatively, we may\nreplace luminosity by absolute magnitude:\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m,\\,M\\,|\\,\\sigma_m,\\,\\sigma_M,\\,K_{12},\\,h,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dM\\,dm = \\nonumber \\\\\nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(M-\\overline M)}^2\/2 \\sigma_M^2}\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dM\\,dm,\\label{eq:d4}\n\\end{eqnarray}\nwhere $\\overline M$ and $\\sigma_M$ correspond to $\\overline L$ and\n$\\sigma_L$. The Gaussian distributions in $L$ or $M$ in the above two\nequations can be generalized to an arbitrary luminosity function for\ntraditional star count applications. 
Although we will not give the general expressions below, the development is parallel.\n\nSince the convolution of two Gaussians is a new Gaussian whose variance is the sum of the two individual variances,\n\\begin{equation}\n\\sigma_{m, eff}^2 = \\sigma_m^2 + \\sigma_M^2, \\label{eq:d5}\n\\end{equation}\nequation (\\ref{eq:d4}) can be rewritten as\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m\\,|\\,\\sigma_{m, eff},\\,K_{12},\\,h,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dm = \\nonumber \\\\\nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(m-\\overline m)}^2\/2 \\sigma_{m, eff}^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dm \\label{eq:d6}\n\\end{eqnarray}\nafter integrating over the unmeasured absolute magnitude $M$. For notational clarity, we will omit the subscript ``eff'' and write simply $\\sigma_m$. The constant $C$ is determined from the normalization condition:\n\\begin{equation}\nC \\int _{-\\infty} ^{+\\infty} e^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,dm\n\\int dl \\int _0 ^{s_{max}(b)}\\,s^2\\,ds \\int _{-{\\pi \\over 2}} ^{\\pi \\over 2}\n{ \\Sigma (r,\\,\\phi,\\,z)\\,\\cos b\\,db} = 1. \\label{eq:d7}\n\\end{equation}\nThe integration over $l$ runs over the entire circle except the missing azimuthal sectors, explicitly accounting for missing data at particular ranges in azimuth. The limiting distance $s_{max}$ in the $l$, $b$ direction incorporates the 2 Jy flux limit.\n\nIn a standard star count analysis no explicit distance information is provided, and $s$ is eliminated from the analysis by integration, yielding\n\\begin{eqnarray}\nP_n\\,(l,\\,b,\\,m\\,|\\,\\ldots)\\,\\cos b\\,db\\,dl\\,dm = \\nonumber \\\\\nC \\,\\int _0 ^{s_{max}(b)} {\\Sigma (r,\\,\\phi,\\,z)\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,s^2\\,ds} \\cos b\\,db\\,dl\\,dm. \n\\label{eq:d9}\n\\end{eqnarray}\nFor our relatively small sample of IRAS stars, sensitivity to vertical structure will be poor.
This motivates replacing the general unknown three-dimensional disk density with a density which depends on radial position and azimuth alone: $\\Sigma (r,\\,\\phi,\\,z) = {\\overline \\Sigma} (r,\\,\\phi)$.\n\nFinally, the joint probability of observing $N$ stars selected from the IRAS PSC is\n\\begin{equation} \nL \\equiv P_{total} = \\prod _{n=1} ^N P_n ( l,b,m | \\ldots).\\label{eq:d11}\n\\end{equation}\nExpressing the likelihood function in logarithmic form, our desired solution is the set of parameters which maximizes\n\\begin{equation} \n\\log L = \\sum _{n=1} ^N \\log P_n ( l,b,m | \\ldots).\\label{eq:d12}\n\\end{equation}\n\nThis and nearly all star count analyses reduce to the standard problem of density estimation: find the density function $f ( x )$ which satisfies the non-negativity constraint\n\\begin{equation}\nf(x) \\ge 0 \\label{eq:d13}\n\\end{equation}\nand the integral constraint\n\\begin{equation}\n\\int f(x) dx = 1 \\label{eq:d14}\n\\end{equation}\nand which best describes the observed data distribution. Both parametric and non-parametric estimation techniques have been used to solve this problem (e.g. Silverman 1986; Izenman 1991). For inhomogeneous multidimensional data, the positivity constraint is cumbersome. However, searching for the unknown function $f ( x )$ in the form of an exponentiated orthogonal series (Clutton-Brock 1990) guarantees positivity. A candidate stellar surface density is:\n\\begin{equation}\n\\overline \\Sigma (r,\\phi) = \\exp \\biggl\\{ \\sum_{i=1} ^{i_{max}} \\sum_{j=0}\n^{j_{max}} \\left[ A_{ij}\\cos j\\phi + B_{ij}\\sin j\\phi \\right]\nJ_j(k_i^j r) \\biggr\\} ,\\label{eq:d15}\n\\end{equation}\nwhere $J_j(x)$ is the Bessel function of $j^{\\hbox{th}}$ order and $k_i^j$ is the $i^{\\hbox{th}}$ root of the Bessel function of $j^{\\hbox{th}}$ order, scaled to produce a complete orthogonal set over the disk of radius $R_{max}$. The coefficients $A_{ij}, B_{ij}$ are the parameters to be determined.
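For concreteness, the exponentiated Fourier-Bessel form of Eq. (\ref{eq:d15}) can be evaluated with standard special-function routines; the coefficients and disk radius below are arbitrary illustrative choices, not fitted values:

```python
import numpy as np
from scipy.special import jv, jn_zeros

R_MAX = 15.0  # disk radius (kpc), illustrative

def surface_density(r, phi, A, B):
    """Eq. (d15): exp{ sum_ij [A_ij cos(j phi) + B_ij sin(j phi)] J_j(k_i^j r) },
    with k_i^j chosen so that J_j vanishes at r = R_MAX."""
    imax, jmax_plus1 = A.shape  # j runs from 0, so columns = j_max + 1
    s = 0.0
    for j in range(jmax_plus1):
        k = jn_zeros(j, imax) / R_MAX  # the scaled roots k_i^j
        for i in range(imax):
            s += (A[i, j] * np.cos(j * phi) + B[i, j] * np.sin(j * phi)) * jv(j, k[i] * r)
    return np.exp(s)  # exponentiation enforces positivity automatically

# a toy bar: an axisymmetric (j = 0) term plus a j = 2 distortion
A = np.zeros((2, 3)); B = np.zeros((2, 3))
A[0, 0], A[0, 2] = 1.0, 0.5
```

A model built only from even azimuthal harmonics, as in the bisymmetric reconstructions discussed later, satisfies $\overline\Sigma(r,\phi)=\overline\Sigma(r,\phi+\pi)$ by construction.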
There is no loss of generality in taking the Fourier-Bessel series, although the choice is arbitrary.\n\n\\section{Results} \\label{sec:results}\n\n\\subsection{Sensitivity to incompleteness}\n\nA major advantage of the approach presented here over that in Paper I is that the significance of inferred structure is robustly quantified. In particular, we can test the sensitivity of the detection of a bar to the selection effects. To test the effect of the coverage gaps, we generated four sample disks of 1,000 stars each using the source density (\\ref{eq:d15}) with $\\sqrt{A_{ij}^2+B_{ij}^2}=1$ for $j=0,2$ and zero otherwise, and the following bar position angles: $0^\\circ$, $\\pm45^\\circ$, and $90^\\circ$. The root sum square of the coefficients $A_{ij}$ and $B_{ij}$ represents the strength of the $i^{th}$ radial component for the $j^{th}$ polar harmonic. Figure \\ref{fig:test} shows the restored strength of a harmonic $\\sqrt{A_{ij}^2 + B_{ij}^2}$ as a function of the position angle of the bar. The insensitivity of these strengths to the bar position angle suggests that missing azimuths will not obscure the inference of a true bar. The computed values are consistent with the expected value of unity.\n\nConversely, regions of missing data can produce non-axisymmetric distortions and, in principle, suggest the existence of a bar in an initially axisymmetric sample. However, analysis of a simulated axisymmetric disk ($A_{10}=A_{20}=1$; all others = 0) with the same azimuthal incompleteness as in the real sample shows that the power in the non-axisymmetric harmonics is about 3\\% of the axisymmetric contribution.
Together these tests suggest that the misidentification of a bar due to missing azimuthal sectors alone is unlikely.\n\n\\subsection{Application to IRAS data}\n\nThe formalism developed in \\S\\ref{sec:likelihood} requires the distance to the Galactic center $R_0$, the extinction in the plane $K_{12}$, and the average luminosity of the AGB stars $\\overline L$. We adopted $R_0=8.0$ kpc, $K_{12}=0.18$ mag\/kpc and $\\overline L = 3500 L_{\\odot}$. The method can be straightforwardly modified for complex models (e.g. patchy or non-uniform extinction); the only limitations here are the available CPU time and sufficient data to attain a satisfactory measure of confidence.\n\nChoosing the truncation of the series in equation (\\ref{eq:d15}) poses a problem common to many non-parametric density estimations: because too few terms result in large bias and too many terms increase variance, $i_{max}$, $j_{max}$ would be best determined by jointly minimizing the bias and the variance. However, this approach is computationally prohibitive due to the integral in equation (\\ref{eq:d9}) and the normalization (\\ref{eq:d7}). Therefore, a heuristic approach was adopted in which $i_{max}$, $j_{max}$ are selected based on the increase in the likelihood function when a particular term or set of terms is added. Significance could be quantified in terms of the likelihood ratio (Wilks 1962), but we have not done this here. In addition, the hardware available to us makes it impossible to sample the parameter space beyond $i_{max}=4$, $j_{max}=4$. Nevertheless, up to that limit, the space was sampled thoroughly, with some of the solutions shown in Figure \\ref{fig:dens_all} along with the corresponding offsets of the likelihood function (the lowest value of the likelihood is set to $0$ for ease of comparison). Some of the figures feature ghost peaks due to the absence of data beyond the galactic center or in the missing azimuthal sectors (see Figs.
\\ref{fig:sp_distr.1} and \\ref{fig:sp_distr.2}). The likelihood analysis may attempt to place a non-existent source density peak in such a region, provided this increases the overall likelihood. We will pursue penalizing the likelihood function, and other procedures for choosing an alternative prior (dropping the assumption that all coefficients in (\\ref{eq:d15}) are equally likely initially), in future work.\n\nMore importantly, all reconstructions in Figure \\ref{fig:dens_all} imply a jet-like feature in the first quadrant. As in Paper I, the depth of our sample (estimated to correspond to a mean distance of 7 kpc in the plane) prevents ascertaining whether this feature corresponds to a bisymmetric bar or is a lopsided distortion. However, decreasing the flux limit to 1 Jy leads to the detection of a similar feature on the far side of the Galaxy, suggesting a real bar. This motivates a reconstruction with enforced bisymmetry, shown in Figure \\ref{fig:dens_even}. Here the corresponding prior assigns zero values to coefficients of odd azimuthal order. The likelihood value (the origin is the same as in Figure \\ref{fig:dens_all}) has dropped substantially, because the resulting density lacks data support beyond the Galactic center. In both figures, the bar is well defined and has a similar length and position angle.\n\nTo quantify the strength and position angle of the bar, we fitted the isodensity contours ($i_{max}=j_{max}=4$) by ellipses. The logarithm of a suitable likelihood function for estimating the semi-major axes, eccentricity and position angle is\n\\begin{equation}\n\\log L = -\\sum _{i=1} ^M {\\biggl[ \\Sigma_{rec}(r_i, \\phi_i)-C \\biggr]}^2,\n\\label{eq:d17}\n\\end{equation}\nwhere $\\Sigma_{rec}(r, \\phi)$ is the reconstructed density function and $C$ is the isodensity level. The summation runs over equally spaced points on the ellipse.
For a given ellipse, a grid of semimajor axis values is specified, and the surface density $C$, position angle $\\phi_0$ and eccentricity $e$ which maximize $\\log L$ are found. The results are presented in Figures \\ref{fig:levels} and \\ref{fig:angles}.\n\nFigure \\ref{fig:levels} indicates that the density profile drops to half of its central value at about 4 kpc. The half-length would then be about 4 kpc, in good agreement with the value obtained in Paper I. If we take this value as the size of the major axis of the bar, then the axis ratio varies from 2.2 in the central regions to 2.7 in the outer regions of the bar. The value of the position angle for the entire extent of the bar (out to 4 kpc) is $\\approx 19^{\\circ}$. The accuracy of the position angle determination can be quantified in terms of a confidence interval, making use of the fact that in the limit of a large number of sources $N$, the likelihood in $n$ dimensions is distributed as $\\chi^2\/2$ with $n$ degrees of freedom (e.g. Lehmann 1959). We analyzed the likelihood as a function of a single variable, the orientation angle of the bar in the plane. The analysis gives an uncertainty of $1^{\\circ}$ at the $3\\sigma$ level.\n\nAnother way to determine the parameters of the bar is to look at the map of the ratio of the non-axisymmetric to axisymmetric components of the density. The ratio displays two peaks at $3.3\\pm0.1$ kpc located on opposite sides of the center; the line connecting them has a position angle of $\\sim24^{\\circ}\\pm2^{\\circ}$. The peak ratio, the relative strength of the bar, is $0.73$. This implies the existence of a strong bar in the intermediate-age population responsible for the AGB stars.\n\n\\subsection{Disk scale length}\n\nHaving calculated the source density, we are in a position to characterize the parent population of the IRAS variables.
In Paper I, we assumed that these variables represented a disk population based on their flux distribution, but several colleagues have suggested in discussion that the IRAS variables are more likely to be bulge stars. Here, we determine the scale length of the population in the Galactic plane. For comparison, we fit our reconstruction with an oblate spheroid model (the G0 bulge model from the DIRBE study by Dwek et al. 1995):\n\\begin{equation} \n\\Sigma_{G0} ( x, y ) = \\Sigma_0 e^{-0.5 r^2},\n\\label{eq:d18}\n\\end{equation}\nwith $r^2 = ( x^2 + y^2 ) \/ r_0^2$. The scale length $r_0$ is found by minimizing the following cost function while simultaneously satisfying the overall normalization constraint for $\\Sigma_{G0}$ (eq. \\ref{eq:d14}):\n\\begin{equation} \n\\hbox{cost} = \\int d^2 r {\\biggl[ \\Sigma_{rec}-\\Sigma_{G0} \\biggr]}^2.\n\\label{eq:d19}\n\\end{equation}\nTo estimate the value of $r_0$, we used the covariance matrix from the likelihood analysis used to determine $\\Sigma_{rec}$ to make 5000 Monte Carlo realizations of the source density. The ensemble of realizations then has $\\Sigma_{rec}$ as its mean. For each realization, we found $r_0$ by minimizing the cost function (\\ref{eq:d19}), and the resulting distribution of scale lengths is shown in Figure \\ref{fig:dwek}. Our result, $r_0 = 4.00\\pm0.55$ kpc, indicates that the IRAS variables have the scale length of the old disk population. This value is in good agreement with the scale length of $4.5$ kpc reported by Habing (1988), derived from an analysis of a color-selected IRAS sample. Dwek's value, obtained by analyzing bulge emission, was $r_0=0.91\\pm0.01$ kpc.
The factor of $4$ difference between the scale lengths suggests that the IRAS bar and the bulge-bar belong to distinct populations.\n\n\\subsection{Optical depth due to microlensing}\n\nOriginally proposed as a test for dark matter in the Milky Way halo (Paczy\\'nski 1986), gravitational microlensing was later shown (Griest et al. 1991; Paczy\\'nski 1991) to be potentially useful for extracting information about the inner regions of our Galaxy. Three groups (OGLE, MACHO and EROS) are monitoring stars in the Galactic bulge for gravitational microlensing and have found higher event rates than most theoretical estimates. Udalski et al. (1994) derived a lensing optical depth $\\tau = (3.3 \\pm 1.2) \\times 10^{-6}$ toward Baade's window ($l = 1^{\\circ}, b = -3.9^{\\circ}$) based on the analysis of the OGLE data, and the MACHO group reported $\\tau = 3.9^{+1.8}_{-1.2} \\times 10^{-6}$ (Alcock et al. 1995a) estimated from the sample of clump giants, while theoretical estimates give optical depths in the range $0.5 - 2.0 \\times 10^{-6}$ (e.g. Alcock et al. 1995a; Evans 1994). Following the suggestion of Paczy\\'nski et al. (1994) that a bar with a small inclination angle could enhance the optical depth, Zhao et al.
(1995) have developed a detailed bar model and found $\\tau = (2.2 \\pm 0.5) \\times 10^{-6}$. Here, we estimate the optical depth using our density reconstruction, $\\Sigma_{rec}$, assuming that our AGB sample represents the entire stellar disk.\n\nThe lensing optical depth is defined as the probability of any of the sources being lensed with magnification factor $A > 1.34$, with\n\\begin {equation}\nA = {u^2+2 \\over u \\sqrt{u^2+4}}, \\qquad u \\equiv {r \\over R_E}\n\\label {eq:d20}\n\\end {equation}\n(Refsdal 1964), where $r$ is the distance between the projected position of the source and the lensing mass, and $R_E$ is the radius of the Einstein ring. Kiraga \\& Paczy\\'nski (1994) derived\n\\begin {equation}\n\\tau = {4 \\pi G \\over c^2}\\,\\, \n{\\int _0 ^{\\infty} \\left[ \\int _0 ^{D_s} \\rho \\, {D_d(D_s-D_d) \\over D_s} \\,\\, dD_d \\right] \n\\rho \\, D_s^{2+2\\beta}\\, dD_s \\over \\int _0 ^{\\infty} \\rho \\, D_s^{2+2\\beta}\\, dD_s},\n\\label {eq:d21}\n\\end {equation}\nwhere $D_s$ is the distance to the sources, $D_d$ is the distance to the deflectors, and the free parameter $\\beta$ accounts for the detectability of sources in a flux-limited survey. The reasonable range is $-3 \\le \\beta \\le -1$, and we take $\\beta = -1$ following Evans (1994) and Kiraga \\& Paczy\\'nski (1994). The density is $\\rho = \\rho_{bulge} + \\rho_{disk}$, with $\\rho_{bulge}$ given by equation (1) of Kent (1992), and\n\\begin {equation}\n\\rho_{disk} = C \\, \\Sigma_{44} (r, \\phi) \\, e^{-|z|\/h}, \n\\label {eq:d22}\n\\end {equation}\nwhere $\\Sigma_{44}$ is the surface density of our $i=4, j=4$ model (\\ref{eq:d15}) and $h=0.325$ kpc is the scale height. We explored two possible normalization prescriptions: (1) Assign a local column density of $\\sim 50\\, M_{\\odot}\\, pc^{-2}$ (``canonical disk'' following Kuijken \\& Gilmore 1989; Gould 1990). The mass of the disk in this case is $M_{disk} = 1.95 \\times 10^{10} M_{\\odot}$.
\n(2) Assign the total disk mass of $M = 6 \\times 10^{10} M_{\\odot}$ (Bahcall \\& Soneira 1980). The second normalization gives a local column density of approximately $100\\, M_{\\odot}\\, pc^{-2}$ (``maximal disk'' of Alcock et al. 1995b). We prefer the latter here because the optical depth estimate depends on the global mass distribution rather than the local density. In addition, there are some indications that the variation of the column density with galactic longitude may be quite significant -- a factor of $2-3$ (Rix \\& Zaritsky 1995; Gnedin, Goodman \\& Frei 1995). The mass of the bulge is $M_{bulge} = 1.65 \\times 10^{10} M_{\\odot}$.\n\nFor the canonical disk case, the total lensing optical depth at Baade's window is $1.1 \\times 10^{-6}$, and bulge and disk lenses each contribute $50\\%$ of that number. Most of the optical depth (76\\%) is due to lensing of bulge sources. If the disk is maximal, the optical depth is $1.6 \\times 10^{-6}$. Disk lenses now account for $1.1 \\times 10^{-6}$ (68\\% of the total optical depth), and the contribution by bulge sources still dominates (59\\%). For both scenarios, the optical depth is a function of the orientation of the bar. We investigate the enhancement produced by the bar over axisymmetric models of the disk $\\rho\\propto e^{-r\/R} \\, e^{-|z|\/h}$, where $R = 3.5$ kpc for fixed disk mass. Figure \\ref{fig:baadetau} displays the ratio of optical depths of the non-axisymmetric to axisymmetric disk models as a function of the position angle of the bar for both normalization scenarios. The difference between the two curves illustrates the role of the disk in lensing. The largest enhancement, of approximately 30\\%, is obtained when the bar is aligned along the line of sight, as expected.
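The ingredients of this calculation, Eq. (\ref{eq:d20}) for the magnification and Eq. (\ref{eq:d21}) for the optical depth, are straightforward to evaluate numerically. The sketch below does so for a single line of sight and an arbitrary density law; the constant-density test case and the numerical settings are illustrative only:

```python
import numpy as np

FOUR_PI_G_OVER_C2 = 6.01e-16  # 4*pi*G/c^2 in kpc / M_sun (approximate)

def magnification(u):
    """Eq. (d20), Refsdal (1964); u = 1 gives A = 3/sqrt(5) ~ 1.34."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def trapezoid(y, x):
    # version-independent trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def optical_depth(rho, beta=-1.0, d_max=20.0, n=400):
    """Eq. (d21) along one line of sight; rho(D) in M_sun/kpc^3, D in kpc."""
    Ds = np.linspace(1e-3, d_max, n)
    weight = rho(Ds) * Ds**(2 + 2 * beta)  # source detectability weighting
    inner = np.empty(n)
    for k, D in enumerate(Ds):
        Dd = np.linspace(1e-3, D, n)
        inner[k] = trapezoid(rho(Dd) * Dd * (D - Dd) / D, Dd)
    return FOUR_PI_G_OVER_C2 * trapezoid(inner * weight, Ds) / trapezoid(weight, Ds)
```

For a constant density the inner integral is $\rho D_s^2/6$, so with $\beta=-1$ the weighted average reduces to $(4\pi G/c^2)\,\rho\, d_{max}^2/18$, a convenient sanity check on the quadrature.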
The ratio\nof optical depths decreases gradually when the bar is in the first\nGalactic quadrant, with $\\ge 20\\%$ enhancement out to $\\phi_0 =\n50^{\\circ}$.\n\nCurrent-generation optical-band lensing surveys have concentrated on\nlow-extinction bulge-centered windows to maximize the lensing event\nrate. An infrared-band microlensing survey would be less\nconstrained by extinction and would therefore be a more efficient probe of the\noverall structure of the Galaxy. In particular, any bar which is not\nperfectly aligned along the Sun--Galactic Center axis will produce an\nasymmetry in the optical depth. We describe this asymmetry by the ratio\nof the difference in optical depths at positive and negative longitude\nto their arithmetic mean. This ratio is shown in Figure \\ref{fig:reldiff} for\nour model (cf. eqns. \\ref{eq:d21} and \\ref{eq:d22}). Comparison with\nthe Bahcall \\& Soneira (1980) model suggests that $\\beta\\approx-1$ is\na fair approximation of the high-luminosity end of the disk luminosity\nfunction. Therefore, equation (\\ref{eq:d21}) also applies at large\n$|l|$ where both lenses and sources are disk members. The large 40\\%\nasymmetry about $|l|\\approx30^{\\circ}$ is due to a local increase in\nthe surface density at negative longitudes close to the observer\n(Figure \\ref{fig:dens_even}). More important than the details of the\nasymmetry is the suggestion that a pencil-beam microlensing survey in\nthe infrared would be sensitive to global asymmetries in the stellar\ndisk component. Confusion is not a limitation at $b=0^\\circ$ for larger\nvalues of $| l |$, and the optical depth there has a magnitude similar to that at\nBaade's window.\n\n\\section{Summary and discussion} \\label{sec:summary}\n\nThis paper explores a model-independent Bayesian estimation of the\nstellar density from star counts, rigorously accounting for incomplete\ndata. The general approach can incorporate multiple colors and even\ndifferent databases. 
The usual high dimensionality and topological\ncomplexity of the posterior distribution, however, complicate both\noptimization algorithms and subsequent moment analyses. We propose\nhere a hybrid downhill plus directed-search Monte Carlo algorithm; the\nformer speeds convergence and the latter facilitates the location of\nthe global extremum. Other similar and potentially more efficient\ntechniques which can bypass the extremization step altogether (such as\ngeneral Markov Chain Monte Carlo) are worth careful consideration.\n\nApplication of the technique to the variability-selected sample\ndescribed in Weinberg (1992), assumed to be AGB stars, confirms the\npresence of a strong non-axisymmetric feature in the first Galactic\nquadrant. By imposing bisymmetry on the source density, a clear\nsignature of a bar is obtained. The size and shape of the density\nisophotes suggest a bar semi-major axis of approximately 4 kpc and a\nposition angle of $\\phi_0 = 18^\\circ \\pm 2^\\circ$ at the outer edge of\nthe bar. The analysis of the scale length for the AGB candidate\ndistribution gives $r_0=4.00\\pm0.55$ kpc, indicating that these\nobjects are part of the old disk population.\n\nFinally, we use our estimate of the non-axisymmetric Galactic disk to\nexplore the optical depth to gravitational microlensing\nby bulge and disk stars. The disk bar does enhance the optical depth\n$\\tau$ towards Baade's window by roughly 30\\%, but the overall value is\nstill roughly a factor of two below the MACHO result $\\tau =\n3.9^{+1.8}_{-1.2} \\times 10^{-6}$. Of interest for future\nmicrolensing surveys is the finding that our inferred large-scale bar\nwill produce a significant asymmetry in $\\tau$ at positive and\nnegative longitudes beyond the bulge. The peak asymmetry for our\nmodel occurs at $|l|=30^\\circ$, and at $b=0$ we predict values\nof $\\tau$ similar to those in the Baade's window field. 
Such a survey might best be\ncarried out in the infrared to take advantage of the low interstellar\nextinction and colors of the late-type giants. At\n$|l|\\gtrsim30^\\circ$, confusion should not be a limitation at\n$b=0^\\circ$.\n\n\\acknowledgements\n\nWe thank Steve Price and Mike Skrutskie for comments. This work was\nsupported in part by NASA grant NAG 5-1999 and the Alfred P. Sloan\nFoundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper, we study the problem of producing near-optimal solutions of random optimization problems by polynomials of low degree in the input data. Namely, we prove that no low-degree polynomial can succeed at achieving a certain objective value in two optimization problems: (a) optimizing the Hamiltonian of the (spherical or Ising) $p$-spin glass model, and (b) finding a large independent set in a sparse \\ER graph, with high probability in the realization of the problem. We rule out polynomials of degree as large as $cn$ for the $p$-spin glass models and as large as $cn\/\\log n$ for the independent set problem for a constant $c$, provided the algorithm is assumed to succeed modulo exponentially small in $n$ probability, where $n$ is the problem dimension. More generally, we provide a tradeoff between the degree of polynomials that we rule out and the success probability assumed. \nFor the spherical $p$-spin model, we also give a lower bound against Langevin dynamics.\n\nOur motivation for focusing on ``low-degree'' approximations is two-fold. Firstly, from an approximation theory perspective, producing near-optimal solutions by a polynomial in the input is very natural.\nIndeed, in many problems of interest the best known polynomial-time algorithms can be placed within the family of low-degree methods. 
For example, in the settings we consider here, the best known polynomial-time optimization results can be captured by the approximate message passing (AMP) framework~\\cite{montanari-sk,EMS-opt} (for the $p$-spin) and by the class of local algorithms on sparse graphs~\\cite{LauerWormald} (for the independent set problem), respectively. Both of these families of algorithms are captured by constant-degree polynomials; see Appendix~\\ref{app:low-deg-alg} for more details.\nFor spherical $p$-spin glass models, earlier work of \\cite{subag-full-rsb} introduced an algorithm which performs as well as AMP; we expect this algorithm to also fall into the family of low-degree methods, but verifying this is less clear. Secondly, a recent line of work~\\cite{p-cal,HS-bayesian,sos-hidden,sam-thesis} on the \\emph{sum-of-squares hierarchy} has produced compelling evidence that the power of low-degree polynomials is a good proxy for the intrinsic computational complexity of a broad class of \\emph{hypothesis testing} problems. Below, we briefly review this theory of low-degree polynomials in hypothesis testing.\n\nThe low-degree framework was initiated in \\cite{HS-bayesian,sos-hidden,sam-thesis} to study computational hardness in hypothesis testing problems. Specifically, this line of work has focused on high-dimensional testing problems where the goal is to determine whether a given sample (e.g., an $n$-vertex graph) was drawn from the ``null'' distribution $\\QQ_n$ (e.g., the \\ER model) or the ``planted'' distribution $\\PP_n$ (e.g., a random graph with planted structure such as a large clique or a small cut). Through an explicit and relatively straightforward calculation, one can determine whether there exists a (multivariate) polynomial $f$ (in the entries of the observed sample) of a given degree $D = D(n)$ that can distinguish $\\PP_n$ from $\\QQ_n$ (in a particular sense) \\cite{HS-bayesian,sos-hidden,sam-thesis}. 
A conjecture of Hopkins~\\cite{sam-thesis} (inspired by~\\cite{p-cal,HS-bayesian,sos-hidden}) postulates that for ``natural'' high-dimensional testing problems, if there is a polynomial-time algorithm to distinguish $\\PP_n, \\QQ_n$ (with error probability $o(1)$) then there is also an $O(\\log n)$-degree polynomial that can distinguish $\\PP_n, \\QQ_n$. One justification for this conjecture is its deep connection with the \\emph{sum-of-squares (SoS) hierarchy}---a powerful class of meta-algorithms---and in particular the \\emph{pseudo-calibration} approach~\\cite{p-cal}, which suggests that low-degree polynomials are as powerful as any SoS algorithm (see~\\cite{sos-hidden,sam-thesis,sos-survey} for details). Another justification for the conjecture is that $O(\\log n)$-degree polynomials can capture a very broad class of spectral methods (see~\\cite[Theorem~4.4]{lowdeg-notes} for specifics), which in turn capture the best known algorithms for many high-dimensional testing problems (e.g., \\cite{tensor-pca-sos,fast-sos,sos-hidden}). For many classical statistical tasks---planted clique, sparse PCA, community detection, tensor PCA, etc.---it has indeed been verified that $O(\\log n)$-degree polynomials succeed (at testing) in the same parameter regime as the best known polynomial-time algorithms (e.g.,~\\cite{HS-bayesian,sos-hidden,sam-thesis,sk-cert,lowdeg-notes,subexp-sparse}). (Oftentimes, the hypothesis testing variants of these types of problems seem to be equally hard as the more standard task of recovering the planted signal.) Lower bounds against low-degree polynomials are one concrete form of evidence that the existing algorithms for these problems cannot be improved (at least without drastically new algorithmic techniques). 
For more details on the low-degree framework for hypothesis testing, we refer the reader to~\\cite{sam-thesis,lowdeg-notes}.\n\nOne goal of the current work is to extend the low-degree framework to the setting of random optimization problems. This includes defining what it means for a low-degree polynomial to succeed at an optimization task, and giving techniques by which one can prove lower bounds against all low-degree polynomials. One difference between the optimization and testing settings is that many existing optimization algorithms can be represented as constant-degree polynomials (see Appendix~\\ref{app:low-deg-alg}), instead of the $O(\\log n)$-degree required in the testing case. A substantial difficulty that we face in the optimization setting is that, in contrast to the testing setting, it does not seem possible to prove lower bounds against low-degree polynomials via a straightforward explicit calculation. To overcome this, our proofs take a more indirect route and leverage a certain structural property---the \\emph{overlap gap property (OGP)}---of the optimization landscape, combined with stability properties of low-degree polynomials. We also use similar techniques to give lower bounds against Langevin dynamics, a canonical Monte Carlo analogue of gradient descent; while this is not a low-degree polynomial (due to its continuous-time nature), it is similar in spirit and has similar stability properties.\n\nWhile the OGP has been used to rule out various classes of other algorithms previously (see below), its usage in our current setting presents some substantial technical difficulties which we need to overcome. Roughly speaking, the property states that for every pair of nearly-optimal solutions $x_1$ and $x_2$, their normalized overlap (normalized inner product) measured with respect to the ambient Hilbert space must lie in a disjoint union of intervals $[0,\\nu_1]\\cup [\\nu_2,1]$. 
This property extends to the case of families of instances as well in the sense that even if one considers a natural interpolation between two independent instances of the problem, for every two members of the interpolated family and every pair of solutions $x_1,x_2$ which are near optimizers for these two members, respectively, it is still the case that the overlap of $x_1$ and $x_2$ belongs to $[0,\\nu_1]\\cup [\\nu_2,1]$. The main idea of the proof via the OGP is a contradiction argument. If the result of the algorithm is known to be stable then, denoting by $x(t)$ the result of the algorithm corresponding to the interpolation step $t$, it should be the case that the overlap between $x(0)$ and $x(t)$ changes ``continuously''. At the same time we show separately that the starting solution $x(0)$ and terminal solution $x(1)$ have an overlap of at most $\\nu_1$, and thus at some point the overlap between $x(0)$ and $x(t)$ belongs to $(\\nu_1,\\nu_2)$, which is a contradiction. \n\nEstablishing stability for low-degree polynomials and Langevin dynamics is quite non-trivial and constitutes the key technical contribution of the paper. For the case of polynomials, these stability results harness tools from Gaussian and Boolean Fourier analysis. We prove two separate variants of this stability result, depending on whether the random input is Gaussian- or Bernoulli-distributed. A key technical result in the Gaussian case is Theorem~\\ref{thm:hyp-stable} which informally states that if we have two $\\rho$-correlated random instances $X$ and $Y$ of a random tensor, and $f$ is a vector-valued low-degree polynomial defined on such tensors, then the distance $\\|f(X)-f(Y)\\|_2$ is unlikely to exceed a certain value which depends continuously on $\\rho$. In particular this distance is small when $\\rho\\approx 1$. 
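The flavor of this stability phenomenon is easy to observe numerically. The following Python sketch is a toy illustration of the statement, not of the proof; the particular quadratic map and the dimensions are arbitrary choices. It draws $\\rho$-correlated Gaussian inputs $Y = \\rho X + \\sqrt{1-\\rho^2}\\,\\tilde{X}$ and estimates the mean squared displacement $\\|f(X)-f(Y)\\|_2^2$ of a fixed vector-valued degree-2 polynomial:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_out, trials = 40, 10, 2000

# A fixed vector-valued degree-2 polynomial f(X) = (W X) * (V X) (entrywise).
W = rng.standard_normal((n_out, m)) / np.sqrt(m)
V = rng.standard_normal((n_out, m)) / np.sqrt(m)
def f(X):
    return (W @ X) * (V @ X)

def mean_sq_displacement(rho):
    """Average of ||f(X) - f(Y)||^2 over rho-correlated Gaussian pairs."""
    X = rng.standard_normal((m, trials))
    Y = rho * X + np.sqrt(1.0 - rho**2) * rng.standard_normal((m, trials))
    return float(np.mean(np.sum((f(X) - f(Y))**2, axis=0)))

d = {rho: mean_sq_displacement(rho) for rho in (0.5, 0.9, 0.99)}
print(d)  # mean squared displacement shrinks as rho -> 1
```

The displacement decreases monotonically in $\\rho$, in line with the continuous dependence asserted by Theorem~\\ref{thm:hyp-stable}; the theorem itself gives quantitative tail bounds valid for all degree-$D$ polynomials, which no simulation can certify.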
Proving this result relies on a well-known consequence of hypercontractivity for low-degree polynomials,\nand basic properties of Hermite polynomials (the orthogonal polynomials of the Gaussian measure).\nIn the case of Bernoulli-distributed inputs, we prove a related stability result (Theorem~\\ref{thm:binary-stable}) which shows that when the input variables are resampled one at a time, the output of a vector-valued low-degree polynomial will never change significantly in one step, with nontrivial probability. The proof involves the notion of total influence from Boolean analysis, as well as a direct proof by induction on the dimension.\nThe proof of stability for Langevin dynamics is based on the continuous dependence of stochastic differential equations {on their coefficients}. \n\nThe OGP emerged for the first time in the context of spin glass theory and random constraint satisfaction problems. It was first proven implicitly in~\\cite{achlioptas2008algorithmic}, \\cite{AchlioptasCojaOghlanRicciTersenghi}, and \\cite{mezard2005clustering}. These papers established that the set of satisfying assignments of a random K-SAT formula partitions into clusters above a certain clause-to-variables density. This was postulated as evidence of algorithmic hardness of finding satisfying assignments for such densities. Implicitly, the proof reveals that the overlaps of satisfying assignments exhibit the OGP, and clustering is inferred from this. It is worth noting that while OGP implies the existence of clusters, the converse is not necessarily the case, as one can easily construct a clustered space of solutions with overlaps spanning the entire interval $[0,1]$. \nA direct algorithmic implication of the OGP was shown for the first time in~\\cite{gamarnik2014limits}, where OGP was proven to be a barrier for local algorithms---defined as the so-called \\emph{factors of i.i.d.\\ (FIID)}---designed to find large independent sets in sparse \\ER graphs. 
The OGP was used to show that, asymptotically, these algorithms cannot find independent sets larger than a multiplicative factor $1\/2+1\/(2\\sqrt{2}) \\approx 0.85$ of optimal. The present paper recovers this result as a special case, since (as we discuss in Appendix~\\ref{app:low-deg-alg}) local algorithms can be captured by constant-degree polynomials. The lower bound against local algorithms was improved by~\\cite{rahman2017local} to a multiplicative factor of $1\/2$. This is the best possible since $1\/2$-optimal independent sets can be found by local algorithms; more precisely, this was shown in \\cite{LauerWormald} for the case of random regular graphs, but a similar result is expected to hold for sparse \\ER graphs as well (although we are not aware of any literature formally verifying this). It is not clear how to improve the multiplicative factor in the lower bound to $1\/2$ for low-degree polynomials, as~\\cite{rahman2017local} uses a more sophisticated variant of OGP than we use here. Several subsequent papers used OGP to rule out various classes of algorithms, including local algorithms for finding large cuts in random hypergraphs~\\cite{chen2019suboptimality}, random walk--based algorithms (WALKSAT)~\\cite{coja2017walksat}, and AMP-type algorithms for optimizing the Hamiltonian of the Ising $p$-spin model~\\cite{gamarnik2019overlap}. \nThe current work draws inspiration from a key idea in \\cite{chen2019suboptimality,gamarnik2019overlap}, namely that a particular variant of OGP---the same variant that we use in the current work---implies failure of any sufficiently ``stable'' algorithm.\n\n\nWe emphasize that the class of algorithms ruled out by the lower bounds in this paper (namely, low-degree polynomials) not only captures existing methods such as AMP and local algorithms, but contains a strictly larger (in a substantial way) class of algorithms than prior work on random optimization problems. 
We now illustrate this claim in the setting of the $p$-spin optimization problem. The best known polynomial-time algorithms for optimizing the $p$-spin Hamiltonian are captured by the AMP framework \\cite{montanari-sk,EMS-opt}. Roughly speaking, AMP algorithms combine a linear update step (tensor power iteration) with entry-wise non-linear operations. For a fairly general class of $p$-spin optimization problems (including spherical and Ising mixed $p$-spin models), it is now known precisely what objective value can be reached by the best possible AMP algorithm~\\cite{EMS-opt}. While this may seem like the end of the story, we point out that for the related \\emph{tensor PCA} problem---which is a variant of the $p$-spin model with a planted rank-1 signal---AMP is known to be substantially sub-optimal compared to other polynomial-time algorithms~\\cite{RM-tensor}. None of the best known polynomial-time algorithms~\\cite{RM-tensor,tensor-pca-sos,fast-sos,kikuchi,hastings-quantum,replicated-gradient} use the tensor power iteration step as in AMP, and there is evidence that this is fundamental~\\cite{algorithmic-tensor}; instead, the optimal algorithms include spectral methods derived from different tensor operations such as \\emph{tensor unfolding}~\\cite{RM-tensor,tensor-pca-sos} (which can be interpreted as a higher-order ``lifting'' of AMP~\\cite{kikuchi}). These spectral methods are captured by $O(\\log n)$-degree polynomials. With this in mind, we should \\emph{a priori} be concerned that AMP might also be sub-optimal for the (non-planted) $p$-spin optimization problem. This highlights the need for lower bounds that rule out not just AMP, but all low-degree polynomial algorithms. 
While the lower bounds in this paper do not achieve the precise optimal thresholds for objective value, they rule out quite a large class of algorithms compared to existing lower bounds for random optimization problems.\n\nWe refer the reader to Appendix~\\ref{app:low-deg-alg} for a more detailed discussion of how various optimization algorithms can be approximated by low-degree polynomials.\n\n\n\n\\subsubsection*{Notation}\n\nWe use $\\| \\cdot \\|_2$ and $\\langle \\cdot,\\cdot \\rangle$ to denote the standard $\\ell^2$ norm and inner product of vectors. We also use the same notation to denote the Frobenius norm and inner product of tensors. We use the term \\emph{polynomial} both to refer to (multivariate) polynomials $\\RR^m \\to \\RR$ in the usual sense, and to refer to vector-valued polynomials $\\RR^m \\to \\RR^n$ defined as in~\\eqref{eq:vec-val-poly}. We abuse notation and use the term \\emph{degree-$D$ polynomial} to mean a polynomial of degree \\emph{at most} $D$. A \\emph{random polynomial} has possibly-random coefficients, as defined in Section~\\ref{sec:poly-alg}. We use $A^c$ to denote the complement of an event $A$. Unless stated otherwise, asymptotic notation such as $o(1)$ or $\\Omega(n)$ refers to the limit $n \\to \\infty$ with all other parameters held fixed. In other words, this notation may hide constant factors depending on other parameters such as the degree $d$ in the independent set problem.\n\n\n\n\\section{Main Results}\n\\label{sec:main-results}\n\n\n\\subsection{Optimizing the $p$-Spin Glass Hamiltonian}\n\n\nThe first class of problems we consider here is optimization of the (pure) $p$-spin glass Hamiltonian, defined as follows. Fix an integer $p \\geq 2$ and let $Y \\in (\\RR^n)^{\\otimes p}$ be a $p$-tensor with real coefficients. 
For $x \\in \\RR^n$, consider the objective function\n\\begin{equation}\\label{eq:p-spin-def}\nH_n(x;Y) = \\frac{1}{n^{(p+1)\/2}} \\g{Y,x^\\tp}.\n\\end{equation}\nNote that all homogeneous polynomials of degree $p$ (in the variables $x$) can be written in this form for some $Y$. We focus on the case of a random coefficient tensor $Y$. In this setting, the function $H_n$ is \nsometimes called the Hamiltonian for a $p$-spin glass model in the statistical physics literature. More precisely, for various choices of a (compact) domain $\\cX_n \\subset \\R^n$, we are interested in approximately solving the optimization problem\n\\begin{equation}\\label{eq:max-H}\n\\max_{x \\in \\cX_{n}} H_n(x;Y)\n\\end{equation}\ngiven a random realization of the coefficient tensor $Y$ with i.i.d.\\ $\\mathcal{N}(0,1)$ entries. Here and in the following we let $\\PP_Y$ denote the law of $Y$. (When it is clear from context we omit the subscript $Y$.)\n\n\nWe begin with a simple norm constraint, namely, we take as domain \n$\\cS_n =\\{x\\in\\R^n: \\norm{x}_2 =\\sqrt n\\}$, the sphere in $\\R^n$ of radius $\\sqrt{n}$.\nWe then turn to a binary constraint, where the domain is the discrete hypercube $\\Sigma_n =\\{+1,-1\\}^n$. Following the statistical physics literature, we call the objective in the former setting the \\emph{spherical} $p$-spin glass Hamiltonian and in the latter setting the \\emph{Ising} $p$-spin glass Hamiltonian.\n\nIn both settings, quite a lot is known about the maximum. It can be shown \\cite{Ton02,JagTob17} that the maximum value of $H_n$ has an almost sure limit (as $n \\to \\infty$ with $p$ fixed), \ncalled the \\emph{ground state energy}, which we will denote by $E_p(\\cS)$ \nfor the spherical setting and $E_p(\\Sigma)$ for the Ising setting. 
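For concreteness, the objective \\eqref{eq:p-spin-def} is straightforward to evaluate for any given realization of $Y$. The following Python sketch (with illustrative dimensions $n=12$, $p=4$) contracts the coefficient tensor against $x^{\\otimes p}$ one index at a time:

```python
import numpy as np

def p_spin_hamiltonian(x, Y):
    """H_n(x;Y) = <Y, x^{tensor p}> / n^((p+1)/2), eq. (p-spin-def)."""
    n, p = x.shape[0], Y.ndim
    v = Y
    for _ in range(p):
        v = v @ x                       # contract one tensor index at a time
    return float(v) / n ** ((p + 1) / 2)

rng = np.random.default_rng(0)
n, p = 12, 4
Y = rng.standard_normal((n,) * p)       # i.i.d. N(0,1) coefficient tensor
x = rng.standard_normal(n)
x *= np.sqrt(n) / np.linalg.norm(x)     # a point on the sphere S_n
H = p_spin_hamiltonian(x, Y)
print(H)                                # O(1) for typical x on the sphere
```

The normalization $n^{-(p+1)\/2}$ makes $H_n$ of constant order on $\\cS_n$ and $\\Sigma_n$: for fixed $x$ with $\\|x\\|_2 = \\sqrt{n}$, the inner product $\\g{Y,x^\\tp}$ is Gaussian with standard deviation $n^{p\/2}$.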
Explicit \nvariational formulas are known for $E_p(\\cS)$ \\cite{ABC13,JagTob17,ChenSen17} and $E_p(\\Sigma)$ \\cite{AuffChen18,JS17}.\n\nAlgorithmically, it is known how to find, in polynomial time, a solution of value $E_p^\\infty(\\cS) - \\varepsilon$ or $E_p^\\infty(\\Sigma) - \\varepsilon$ (respectively for the spherical and Ising settings) for any constant $\\varepsilon > 0$~\\cite{subag-full-rsb,montanari-sk,EMS-opt}. In both the spherical and Ising settings, these constants satisfy $E_2^\\infty = E_2$ and $E_p^\\infty < E_p$ for $p \\ge 3$. In other words, it is known how to efficiently optimize arbitrarily close to the optimal value in the $p=2$ case, but not when $p \\ge 3$.\n\n\n\n\\subsubsection{Low-Degree Polynomial Algorithms}\\label{sec:poly-alg}\n\nOur goal here is to understand how well one can optimize~\\eqref{eq:max-H} via the output of a vector-valued low-degree polynomial in the coefficients $Y$. To simplify notation, we will often refer to the space of \n$p$-tensors on $\\R^n$ by $\\R^m \\cong (\\R^n)^\\tp$, where $m=n^p$.\n\nWe say that a function $f:\\RR^m\\to\\RR^n$ is a polynomial of degree (at most) $D$ if it may be written in the form \n\\begin{equation}\\label{eq:vec-val-poly}\nf(Y) = (f_1(Y),\\ldots,f_n(Y)),\n\\end{equation}\nwhere each $f_i:\\RR^m\\to\\R$ is a polynomial of degree at most $D$. \n\nWe will also consider the case where $f$ is allowed to have random coefficients, \nprovided that these coefficients are independent of $Y$. That is, we will assume that \nthere is some probability space $(\\Omega,\\PP_\\omega)$ and that \n$f:\\RR^m\\times\\Omega\\to\\RR^n$ is such that $f(\\cdot,\\omega)$ is a polynomial of degree \nat most $D$ for each $\\omega\\in\\Omega$. We will abuse notation and refer to this as a \\emph{random polynomial} $f: \\RR^m \\to \\RR^n$.\n\nOur precise notion of what it means for a polynomial to optimize~$H_n$ will depend somewhat on the domain~$\\cX_n$. 
This is because it is too much to ask for the polynomial's output to lie in $\\cX_n$ exactly, and so we fix a canonical rounding scheme that maps the polynomial's output to $\\cX_n$. We begin by defining this notion for the sphere: $\\cX_n = \\cS_n$.\n\n\\paragraph{The spherical case.}\n\nWe will round a polynomial's output to the sphere $\\cS_n$ by normalizing it in the standard way. To this end, for a random polynomial $f: \\RR^m \\to \\RR^n$ we define the random function $g_f: \\RR^m \\to \\cS_n \\cup \\{\\infty\\}$ by\n\\[\ng_f(Y,\\omega) = \\sqrt n \\frac{f(Y,\\omega)}{\\norm{f(Y,\\omega)}_2},\n\\]\nwith the convention $g_f(Y,\\omega) = \\infty$ if $f(Y,\\omega)=0$.\n\n\\begin{definition}\nFor parameters $\\mu \\in \\RR$, $\\delta \\in [0,1]$, $\\gamma \\in [0,1]$, and a random polynomial $f: \\RR^m \\to \\RR^n$, we say that $f$ $(\\mu,\\delta,\\gamma)$-optimizes the objective \\eqref{eq:p-spin-def} on $\\cS_n$ if the following are satisfied when $(Y,\\omega)\\sim\\PP_Y\\otimes \\PP_\\omega$:\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 = n$ \\; (normalization).\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have both $H_n(g_f(Y,\\omega);Y) \\ge \\mu$ and $\\|f(Y,\\omega)\\|_2 \\ge \\gamma \\sqrt{n}$.\n\\end{itemize}\n\\end{definition}\n\n\n\\noindent Implicitly in this definition, the case $f(Y,\\omega)=0$ must occur with probability at most $\\delta$. The meaning of the parameters $(\\mu,\\delta,\\gamma)$ is as follows: $\\mu$ is the objective value attained after normalizing the polynomial's output to the sphere, and $\\delta$ is the algorithm's failure probability. Finally, $\\gamma$ is involved in the norm bound $\\|f(Y,\\omega)\\|_2 \\ge \\gamma \\sqrt{n}$ that we need for technical reasons. Since the domain is $\\cS_n$, $f$ is ``supposed to'' output a vector of norm $\\sqrt{n}$. 
While we do not require this to hold exactly (and have corrected for this by normalizing $f$'s output), we do need to require that $f$ usually does not output a vector of norm too much smaller than $\\sqrt{n}$. This norm bound is important for our proofs because it ensures that a small change in $f(Y,\\omega)$ can only induce a small change in $g_f(Y,\\omega)$.\n\nWe now state our main result on low-degree hardness of the spherical $p$-spin model, with the proof deferred to Section~\\ref{sec:pf-lowdeg-pspin}.\n\n\\begin{theorem}\\label{thm:spherical-lowdeg}\nFor any even integer $p \\ge 4$ there exist constants $\\mu < E_p(\\cS)$, $n^* \\in \\NN$, and $\\delta^* > 0$ such that the following holds. For any $n \\ge n^*$, any $D \\in \\NN$, any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$, and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma)$-optimizes \\eqref{eq:p-spin-def} on $\\cS_n$.\n\\end{theorem}\n\n\n\\noindent A number of remarks are in order. First, this result exhibits a tradeoff between the degree $D$ of polynomials that we can rule out and the failure probability $\\delta$ that we need to assume. In order to rule out polynomials of \\emph{any} constant degree, we need only the mild assumption $\\delta = o(1)$. On the other hand, if we are willing to restrict to algorithms of failure probability $\\delta = \\exp(-cn)$ (which we believe is reasonable to expect in this setting), we can rule out all polynomials of degree $D \\le c'n$ for a constant $c' = c'(c)$. It has been observed in various hypothesis testing problems that the class of degree-$n^\\delta$ polynomials is at least as powerful as all known $\\exp(n^{\\delta-o(1)})$-time algorithms~\\cite{sam-thesis,lowdeg-notes,subexp-sparse}. 
This suggests that optimizing arbitrarily close to the optimal value in the spherical $p$-spin (for $p \\ge 4$ even) requires fully exponential time $\\exp(n^{1-o(1)})$.\n\nThe best known results for polynomial-time optimization of the spherical $p$-spin were first proved by~\\cite{subag-full-rsb} but can also be recovered via the AMP framework of~\\cite{EMS-opt}. As discussed in Appendix~\\ref{app:low-deg-alg}, these AMP algorithms can be captured by constant-degree polynomials. Furthermore, the output of such an algorithm concentrates tightly around $\\sqrt{n}$ and thus easily satisfies the norm bound with $\\gamma = (2\/3)^D$ required by our result. We also expect that these AMP algorithms have failure probability $\\delta = \\exp(-\\Omega(n))$; while this has not been established formally, a similar result on concentration of AMP-type algorithms has been shown by~\\cite{gamarnik2019overlap}.\n\nOur results are limited to the case where $p \\ge 4$ is even and $\\mu$ is a constant slightly smaller than the optimal value $E_p(\\cS)$. These restrictions are in place because the OGP property used in our proof is only known to hold for these values of $p$ and $\\mu$. If the OGP were proven for other values of $p$ or for a lower threshold $\\mu$, our results would immediately extend to give low-degree hardness for these parameters (see Theorem~\\ref{thm:spherical-ogp-lowdeg}). Note that we cannot hope for the result to hold when $p=2$ because this is a simple eigenvector problem with no computational hardness: there is a constant-degree algorithm to optimize arbitrarily close to the maximum (see Appendix~\\ref{app:low-deg-alg}).\n\n\n\n\\paragraph{The Ising case.}\n\nWe now turn to low-degree hardness in the Ising setting, where the domain is the hypercube: $\\cX_n = \\Sigma_n$. In this case, we round a polynomial's output to the hypercube by applying the sign function. 
For $x \\in \\RR$, let\n\\[ \\sgn(x) = \\left\\{\\begin{array}{ll} +1 & \\text{if } x \\ge 0 \\\\ -1 & \\text{if } x < 0, \\end{array}\\right. \\]\nand for a vector $x \\in \\RR^n$ let $\\sgn(x)$ denote entry-wise application of $\\sgn(\\cdot)$. We now define our notion of near optimality for a low-degree polynomial.\n\n\\begin{definition}\nFor parameters $\\mu \\in \\RR$, $\\delta \\in [0,1]$, $\\gamma \\in [0,1]$, $\\eta \\in [0,1]$, and a random polynomial $f: \\RR^m \\to \\RR^n$, we say that $f$ $(\\mu,\\delta,\\gamma,\\eta)$-optimizes the objective \\eqref{eq:p-spin-def} on $\\Sigma_n$ if the following are satisfied.\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 = n$ \\; (normalization).\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have both $H_n(\\sgn(f(Y,\\omega));Y) \\ge \\mu$ and \\mbox{$|\\{i \\in [n] \\;:\\; |f_i(Y,\\omega)| \\ge \\gamma\\}| \\ge (1 - \\eta)n$}. \n\\end{itemize}\n\\end{definition}\n\n\\noindent The interpretation of these parameters is similar to the spherical case, with the addition of $\\eta$ to take into account issues related to rounding. More precisely, as in the spherical case, $\\mu$ is the objective value attained after rounding the polynomial's output to the hypercube, and $\\delta$ is the failure probability. The parameters $\\gamma, \\eta$ are involved in an additional technical condition, which requires $f$'s output not to be too ``small'' in a particular sense. Specifically, all but an $\\eta$-fraction of the coordinates of $f$'s output must exceed $\\gamma$ in magnitude. The need for this condition in our proof arises in order to prevent a small change in $f(Y,\\omega)$ from inducing a large change in $\\sgn(f(Y,\\omega))$.\n\nWe have the following result on low-degree hardness in the Ising setting. 
The proof is deferred to Section~\\ref{sec:pf-lowdeg-pspin}.\n\\begin{theorem}\\label{thm:ising-lowdeg}\nFor any even integer $p \\ge 4$ there exist constants $\\mu < E_p(\\Sigma)$, $n^* \\in \\NN$, $\\delta^* > 0$, and $\\eta > 0$ such that the following holds. For any $n \\ge n^*$, any $D \\in \\NN$, any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$, and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma, \\eta)$-optimizes \\eqref{eq:p-spin-def} on $\\Sigma_n$.\n\\end{theorem}\n\n\\noindent This result is very similar to the spherical case, and the discussion following Theorem~\\ref{thm:spherical-lowdeg} also applies here. The best known algorithms for the Ising case also fall into the AMP framework~\\cite{montanari-sk,EMS-opt} and are thus captured by constant-degree polynomials. These polynomials output a solution ``close'' to the hypercube in a way that satisfies our technical condition involving $\\gamma, \\eta$. As in the spherical case, the case $p=2$ is computationally tractable; here it is not a simple eigenvector problem but can nonetheless be solved by the AMP algorithm of~\\cite{montanari-sk,EMS-opt}.\n\n\n\n\n\\subsubsection{Langevin Dynamics and Gradient Descent}\n\n\n\nOne natural motivation for understanding low-degree hardness is to investigate the performance of natural iterative schemes, such as power iteration or gradient descent. In the spherical $p$-spin model, the natural analogues of these algorithms (in continuous time) are \\emph{Langevin dynamics} and \\emph{gradient flow}. \nWhile these are not directly low-degree methods, the overlap gap property can still be seen to imply hardness for these methods in a fairly transparent manner. \n\nTo make this precise, let us introduce the following.\nLet $B_t$ denote spherical Brownian motion. (For a textbook introduction to spherical Brownian motion see, e.g., \\cite{Hsu02}.) 
For any variance $\\sigma\\geq 0$, we define \\emph{Langevin dynamics} for $H_n$ \nto be the strong solution to the stochastic differential equation\n\\[\ndX_t = \\sigma dB_t + \\nabla H_n(X_t;Y)dt,\n\\]\nwith $X_0=x$, where here $\\nabla$ denotes the spherical gradient. Note that since $H_n(x;Y)$ is a polynomial in $x$, $H_n$ is (surely) smooth and consequently the solution is well-defined in the strong sense \\cite{Hsu02}. The case $\\sigma = 0$ is referred to as \\emph{gradient flow} on the sphere. \n\nIn this setting, it is natural to study the performance from random starts that are independent of $Y$, e.g., a uniformly random start. If the initial distribution is given by $X_0\\sim\\nu$ for some $\\nu\\in\\cM_1(\\cS_n)$, the space of probability measures on $\\cS_n$, we denote the resulting law by $Q_\\nu$. We then have the following result, which is, again, a consequence of the overlap gap property. \n\n\\begin{theorem}\\label{thm:langevin-main}\nLet $p\\geq 4$ be even. There exist $\\mu < E_p(\\cS)$ and $c>0$ such that for any fixed $\\sigma\\geq0$ and $T\\geq0$, \n $n$ sufficiently large, and $\\nu\\in\\cM_1(\\cS_n)$, if $X_t$ denotes Langevin dynamics for $H_n(\\cdot;Y)$ with variance $\\sigma$ and initial data $\\nu$, then\n\\[\n\\PP_Y\\otimes Q_\\nu(H_n(X_T;Y) \\leq \\mu )\\geq 1-\\exp(-c n).\n\\]\nIn particular, the result holds for $\\nu_n = \\mathrm{Unif}(\\cS_n)$, the uniform measure on $\\cS_n$.\n\\end{theorem}\n\n\\noindent The proof can be found in Section~\\ref{sec:pf-langevin}. To our knowledge, this is the first proof that neither Langevin dynamics nor gradient descent reaches the ground state when initialized uniformly at random. We note, furthermore, that the above applies even to $T \\leq c' \\log n$ for some sufficiently small $c'>0$.\n\nThere has been a tremendous amount of attention paid to the Langevin dynamics of spherical $p$-spin glass models. 
It is impossible to provide a complete list of references here, so we point the reader to the surveys \\cite{BCKM98,Cug03,Gui07,jagannath2019dynamics}. \nTo date, much of the analysis of the dynamics in the \\emph{non-activated} regime considered here ($n\\to \\infty$ and then $t\\to\\infty$) has concentrated on the Crisanti--Horner--Sommers--Cugliandolo--Kurchan (CHSCK) equations approach \\cite{crisanti1993sphericalp,CugKur93}.\nThis approach centers around the analysis \nof a system of integro-differential equations satisfied by the scaling limit of natural observables of the underlying system. While this property of the scaling limit has now been established rigorously \\cite{BADG01,BADG06}, there is limited rigorous understanding of the solutions of the CHSCK equations beyond the case $p=2$.\nA far richer picture, related to the phenomenon of \\emph{aging}, is expected here \\cite{Gui07,BA02}. \n\nMore recently, a new, differential inequality--based approach to understanding this regime was introduced in \\cite{BGJ20}, which provides upper and lower bounds on the energy level reached for given initial data. That said, the upper bound is nontrivial only for $\\sigma$ sufficiently large.\n\nWe end by noting that overlap gap--like properties, namely ``free energy barriers,'' have been used to develop spectral gap estimates for Langevin dynamics which control the corresponding $L^2$-mixing time \\cite{GJ16,arous2018spectral}. In \\cite{arous2018spectral}, it was shown that exponentially small spectral gaps are connected to the existence of free energy barriers for the overlap, which at very low temperatures can be shown to be equivalent to a variant of the overlap gap property in this setting. To our knowledge, however, this is the first work to analyze the behavior of Langevin dynamics in the non-activated regime ($n\\to\\infty$ and then $t\\to\\infty$) by means of the overlap distribution. 
Finally we note here that the overlap gap property has been connected to the spectral gap for local, reversible dynamics of Ising spin glass models in \\cite{arous2018spectral} as well as to gradient descent and approximate message passing schemes in \\cite{gamarnik2019overlap}. \n\n\n\n\\subsection{Maximum Independent Set Problem in Sparse Random Graphs}\n\nWe now consider the problem of finding a large independent set in a sparse random graph. Here, we are given the adjacency matrix of an $n$-vertex graph, represented as $Y \\in \\{0,1\\}^m$ where $m = \\binom{n}{2}$. We write $Y \\sim G(n,d\/n)$ to denote an \\ER graph on $n$ nodes with edge probability $d\/n$, i.e., every possible edge occurs independently with probability $d\/n$. We are interested in the regime where first $n \\to \\infty$ (with $d$ fixed) and then $d \\to \\infty$. A subset of nodes $S\\subseteq [n]$ is an \\emph{independent set} if it spans no edges, i.e., for every $i,j \\in S$, $(i,j)$ is not an edge. Letting $\\cI(Y)$ denote the set of all independent sets of the graph $Y$, consider the optimization problem\n\\begin{equation}\\label{eq:max-indep}\n\\max_{S \\in \\cI(Y)} |S|\n\\end{equation}\nwhere $Y \\sim G(n,d\/n)$.\n\nAs $n \\to \\infty$ with $d$ fixed, the rescaled optimum value of~\\eqref{eq:max-indep} is known to converge to some limit with high probability:\n\\begin{align*}\n \\frac{1}{n}\\, {\\max_{S \\in \\cI(Y)} |S|}\\to\\alpha_d,\n\\end{align*}\nas shown in~\\cite{BayatiGamarnikTetali}. 
The limit $\\alpha_d$ is known to have the following asymptotic behavior as $d\\to\\infty$:\n\\begin{align*}\n \\alpha_d=(1+o_d(1)){2\\log d\\over d},\n\\end{align*}\nas has been known since the work of Frieze~\\cite{FriezeIndependentSet}.\nThe best known polynomial-time performance for this problem is achieved by a straightforward greedy algorithm, which constructs a $1\/2$-optimal independent set, i.e., an independent set of size $\\frac{\\log d}{d} n$ asymptotically as $n \\to \\infty$ and then $d \\to \\infty$.\n\nWe will study the ability of low-degree polynomials to find a large independent set. It is too much to ask for a polynomial to exactly output the indicator vector of an independent set, so we fix the following rounding scheme that takes a polynomial's output and returns an independent set. Recall the terminology for random polynomials defined in Section~\\ref{sec:poly-alg}.\n\n\\begin{definition}\nLet $f: \\{0,1\\}^m \\to \\RR^n$ be a random polynomial. For $Y \\in \\{0,1\\}^m$, and $\\eta > 0$, let $V^\\eta_f(Y,\\omega) \\in \\cI(Y)$ be the independent set obtained by the following procedure. Let\n\\[A = \\{i \\in [n] \\,:\\, f_i(Y,\\omega) \\ge 1\\},\\] \\[\\tilde A = \\{i \\in A \\,:\\, \\text{$i$ has no neighbors in $A$ in the graph $Y$}\\},\\]\nand\n\\[B = \\{i \\in [n] \\,:\\, f_i(Y,\\omega) \\in (1\/2,1)\\}.\\]\nLet\n\\[ V^\\eta_f(Y,\\omega) = \\left\\{\\begin{array}{ll} \\tilde A & \\text{if } |A \\setminus \\tilde A| + |B| \\le \\eta n, \\\\ \\emptyset & \\text{otherwise.} \\end{array}\\right. \\]\n\\end{definition}\n\n\\noindent In other words, $f$ should output a value $\\ge 1$ to indicate that a vertex is in the independent set and should output a value $\\le 1\/2$ to indicate that it is not. It is allowed to make up to $\\eta n$ ``errors'', each of which can either be a vertex for which the output value lies in $(1\/2,1)$, or a vertex that violates the independent set constraint. 
Vertices that violate the independent set constraint are thrown out, and if too many errors are made then the empty set $\\emptyset$ is returned. For our proofs it is crucial that this definition of $V_f^\\eta$ ensures that a small change in $f(Y,\\omega)$ cannot induce a large change in the resulting independent set $V_f^\\eta(Y,\\omega)$ (without encountering the failure event $\\emptyset$).\n\nWe now formally define what it means for a polynomial to find a large independent set.\n\\begin{definition}\nFor parameters $k \\in \\NN$, $\\delta \\in [0,1]$, $\\gamma \\ge 1$, $\\eta > 0$, and a random polynomial $f: \\{0,1\\}^m \\to \\RR^n$, we say that $f$ $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep} if the following are satisfied.\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 \\le \\gamma k$.\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have $|V_f^\\eta(Y,\\omega)| \\ge k$.\n\\end{itemize}\n\\end{definition}\n\n\\noindent The parameter $k$ denotes the objective value attained (after rounding), i.e., the size of the independent set. For us, $k$ will be a fixed multiple of $\\frac{\\log d}{d} n$, since this is the scale of the optimum. The parameter $\\delta$ is the algorithm's failure probability. Note that if $f$ were to ``perfectly'' output the $\\{0,1\\}$-valued indicator vector of a size-$k$ independent set, then we would have $\\|f(Y,\\omega)\\|^2_2 = k$. The parameter $\\gamma$ controls the degree to which this can be violated. Finally, $\\eta$ is the fraction of ``errors'' tolerated by the rounding process $V_f^\\eta$.\n\nWe now state our main result of low-degree hardness of maximum independent set, with the proof deferred to Section~\\ref{sec:pf-lowdeg-indep}.\n\n\\begin{theorem}\\label{thm:MIS-main}\nFor any $\\alpha > 1 + 1\/\\sqrt{2}$ there exists $d^* > 0$ such that for any $d \\ge d^*$ there exist $n^* > 0$, $\\eta > 0$, and $C_1, C_2 > 0$ such that the following holds. 
Let $n \\ge n^*$, $\\gamma \\ge 1$, and $D \\le \\frac{C_2 n}{\\gamma \\log n}$, and suppose $\\delta \\ge 0$ satisfies\n\\[ \\delta < \\exp\\left(-C_1 \\gamma D \\log n\\right). \\]\nThen for $k = \\alpha \\frac{\\log d}{d} n$, there is no random degree-$D$ polynomial that $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep}.\n\\end{theorem}\n\n\\noindent This shows that low-degree polynomials cannot find an independent set of size (asymptotically) exceeding $(1 + 1\/\\sqrt{2}) \\frac{\\log d}{d} n$, which is roughly $85$\\% of the optimum. This is the threshold above which OGP can be shown using a first moment argument as in~\\cite{gamarnik2014limits}.\n\nIf $\\gamma$ is a constant, Theorem~\\ref{thm:MIS-main} gives a similar tradeoff between $D$ and $\\delta$ as our results for the $p$-spin model, although here there is an extra factor of $\\log n$. If we are willing to restrict to algorithms of failure probability $\\delta = \\exp(-cn)$ then we can rule out all polynomials of degree $D \\le c'n\/\\log n$ for a constant $c' = c'(c)$. As in the $p$-spin model, this suggests that exponential time $\\exp(n^{1-o(1)})$ is needed in order to find an independent set larger than $(1 + 1\/\\sqrt{2}) \\frac{\\log d}{d} n$.\n\nAs discussed in the introduction, the best known polynomial-time algorithm can find an independent set $1\/2$ as large as the optimum (asymptotically), and we expect this can also be achieved by a local algorithm (although this has only been shown rigorously for regular graphs). Any such local algorithm can be represented as a constant-degree polynomial (see Appendix~\\ref{app:low-deg-alg}). 
We expect that this polynomial satisfies our technical assumptions with parameters $k=(1+o_d(1)){\\log d\\over d}n$, $\\gamma = O(1)$, $\\delta = \\exp(-\\Omega(n))$, and any constant $\\eta > 0$ (although we have not included a formal proof of this).\n\n\n\n\n\n\\subsection{The Overlap Gap Property}\n\nAs discussed in the introduction, the preceding results will follow from certain geometric properties of the super-level sets of the objectives. The main property is called the \\emph{overlap gap property (OGP)}. Let us begin by defining this formally in a general setting. \n\n\n\\begin{definition}\\label{definition:OGP}\nWe say that a family of real-valued functions $\\mathcal{F}$ with common domain $\\mathcal{X}\\subset \\R^n$ satisfies the \\emph{overlap gap property} for an overlap $R:\\cX\\times \\cX\\to \\R_{\\ge 0}$ \nwith parameters $\\mu \\in \\RR$ and $0\\leq\\nu_1<\\nu_2\\leq 1$ if for every $f_1,f_2\\in\\mathcal{F}$ and every $x_1,x_2\\in\\cX$ satisfying\n$f_k(x_k)\\geq \\mu$ for $k=1,2$, we have that\n$ R(x_1,x_2) \\in [0,\\nu_1]\\cup [\\nu_2,1].$\n\\end{definition}\n\\noindent For ease of notation, when this holds, we simply say that $\\mathcal{F}$ satisfies the $(\\mu,\\nu_1,\\nu_2)$-OGP for $R$ on $\\cX$. Furthermore, as it is often clear from context, we omit the dependence of the above on $R$.\n\nWhile the definition above might be satisfied for trivial reasons and thus not be informative, it will be used in this paper in the setting where $\\|x\\|_2^2\\le n$ for every $x\\in \\mathcal{X}$, $R(x_1,x_2)=|\\langle x_1,x_2\\rangle|\/n$, and with parameters chosen so that with high probability $\\mu<\\sup_{x\\in \\mathcal{X}}H(x)$ for every $H\\in\\mathcal{F}$. Thus, in particular $R(x_1,x_2)\\le 1$ for every $x_1,x_2\\in \\mathcal{X}$, and $\\mu$ measures proximity to the optimal value of each objective function $H$. 
The definition says informally that for every two $\\mu$-optimal solutions with respect to any two choices of objective functions, their normalized inner product is either at least $\\nu_2$ or at most $\\nu_1$. \n\nIn the following, we require one other property of functions, namely separation of their superlevel sets.\n\\begin{definition}\\label{definition:well-separated}\nWe say that two real-valued functions $f,g$ with common domain $\\cX$ are $\\nu$-separated above $\\mu$ with respect to the overlap $R:\\cX\\times \\cX \\to \\R_{\\ge 0}$ if for any $x,y \\in \\cX$ with $f(x)\\geq \\mu$ and $g(y)\\geq \\mu$, we have that $R(x,y) \\leq \\nu$.\n\\end{definition}\n\\noindent This property can be thought of as a strengthening of OGP for two distinct functions. In particular, the parameter $\\nu$ will typically equal the parameter $\\nu_1$ in the definition of OGP.\n\nLet us now turn to stating the precise results regarding these properties in the settings we consider here. It can be shown that the overlap gap property holds for $p$-spin glass Hamiltonians in both the spherical and Ising settings with respect to the overlap $R(x,y) =\\frac{1}{n}\\abs{\\g{x,y}}$. More precisely, let $Y$ be a $p$-tensor with i.i.d.\\ $\\mathcal{N}(0,1)$ entries and let $Y'$ denote an independent copy of $Y$. \nConsider the corresponding family of real-valued functions\n\\begin{equation}\\label{eq:interpolated-family-p-spin}\n\\cA(Y,Y') =\\{\\cos(\\tau) H_n(\\cdot\\,;Y)+\\sin(\\tau)H_n(\\cdot\\,;Y') \\,:\\, \\tau \\in [0,\\pi\/2]\\}.\n\\end{equation}\nWe then have the following, which will follow by combining bounds from \\cite{ChenSen17,AuffChen18}. The second result is a restatement of \\cite[Theorem 3.4]{gamarnik2019overlap}. The proof can be found in Section~\\ref{sec:pf-ogp-pspin}.\n\n\\begin{theorem} \\label{thm:pspin-ogp}\nTake as overlap $R(x,y) =\\frac{1}{n}\\abs{\\g{x,y}}$ and let $Y$ and $Y'$ be independent $p$-tensors with i.i.d.\\ $\\mathcal{N}(0,1)$ entries. 
For every even $p\\geq4$\nthere exists an $\\eps>0$ such that the following holds:\n\\begin{enumerate}\n \\item For the domain $\\cS_n$, there are some $0\\leq\\nu_1<\\nu_2\\leq1$ and some $c>0$ such that the following holds with probability at least $1-\\exp(-c n)$:\n \\begin{itemize}\n \\item $\\cA(Y,Y')$ has the overlap gap property for $R$ with parameters $(E_p(\\cS)-\\eps,\\nu_1,\\nu_2)$. \n \\item $H_n(\\cdot\\,;Y)$ and $H_n(\\cdot\\,;Y')$ are $\\nu_1$-separated above $E_p(\\cS)-\\eps$ with respect to $R$.\n \\end{itemize}\n \\item For the domain $\\Sigma_n$, there are some $0\\leq\\nu_1<\\nu_2\\leq1$ and some $c>0$ such that the following holds with probability at least $1-\\exp(-c n)$:\n \\begin{itemize}\n \\item $\\cA(Y,Y')$ has the overlap gap property for $R$ with parameters $(E_p(\\Sigma)-\\eps,\\nu_1,\\nu_2)$.\n \\item $H_n(\\cdot\\,;Y)$ and $H_n(\\cdot\\,;Y')$ are $\\nu_1$-separated above $E_p(\\Sigma)-\\eps$ with respect to $R$.\n \\end{itemize}\n\\end{enumerate}\n\\end{theorem}\n\n\nLet us now turn to the maximum independent set problem. Let us begin by observing that we may place this family of optimization problems on a common domain. To this end, \nconsider as domain the Boolean hypercube $\\cB_n =\\{0,1\\}^n$. \nNote that by viewing a vector $x$ as the indicator function of the set $S=S(x):=\\{i:x_i =1\\}$, we have a correspondence between the points $x\\in\\cB_n$ and subsets of the vertex set $[n]$.\nLet $m = \\binom{n}{2}$, let $Y \\in \\{0,1\\}^m$ denote the adjacency matrix of some graph on the vertex set $[n]$, and consider the function $F(x;Y)$ given by \n\\[\nF(x;Y) = \\abs{S(x)} \\cdot \\One\\{S(x)\\in \\mathcal{I}(Y)\\}.\n\\]\nThe maximum independent set problem for $Y$ can then be written in the form \n\\[\n\\max_{x\\in\\cB_n} F(x;Y).\n\\]\n\n\n\\noindent Let us now construct the analogue of the family $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin} in this setting. 
\n\\begin{definition}\\label{def:path}\nFor $Y, Y' \\in \\{0,1\\}^m$, the \\emph{path from $Y$ to $Y'$} is $Y = Z_0 \\to Z_1 \\to \\cdots \\to Z_m = Y'$ where $(Z_i)_j = Y_j$ for $j > i$ and $(Z_i)_j = Y'_j$ otherwise. The path is denoted by $Y\\mapsto Y'$. \n\\end{definition}\n\\noindent Here (and throughout) we have fixed an arbitrary order by which to index the edges of a graph (the coordinates of $Y$).\n\n\n\nNow let $Y, Y' \\in \\{0,1\\}^m$ be (the adjacency matrices of) independent $G(n,d\/n)$ random graphs. We can then consider the family of functions \n\\begin{equation}\\label{eq:interpolated-family-MIS}\n\\cF(Y,Y') = \\{F(\\cdot\\,;Z) \\,:\\, Z \\text{ is on the path } Y\\mapsto Y'\\}.\n\\end{equation}\n\n\\noindent We can now state the relevant overlap gap property.\n\n\\begin{theorem}\\label{thm:ogp-graph}\nFor any $\\alpha > 1 + 1\/\\sqrt{2}$ there exist constants $0 \\le \\tilde\\nu_1 < \\tilde\\nu_2 \\le 1$ and $d^* > 0$ such that for any constant $d \\ge d^*$ the following holds. If $Y,Y' \\sim G(n,d\/n)$ independently, then with probability at least $1-\\exp(-\\Omega(n))$ both of the following hold.\n\\begin{itemize}\n\\item The family of functions $\\mathcal{F}$ from \\eqref{eq:interpolated-family-MIS} with domain $\\mathcal{X}=\\mathcal{B}_n$ satisfies the overlap gap property with overlap $R(x_1,x_2)=\\frac{1}{n} |\\langle x_1,x_2\\rangle|$ and parameters $\\mu = k := \\alpha \\frac{\\log d}{d} n$, $\\nu_1 = \\tilde \\nu_1 \\frac{k}{n}$, $\\nu_2 = \\tilde \\nu_2 \\frac{k}{n}$.\n\\item The functions $F(\\cdot\\,;Y)$ and $F(\\cdot\\,;Y')$ are $\\nu_1$-separated above $\\mu$.\n\\end{itemize}\n\\end{theorem}\n\n\\noindent Above (and throughout), $\\Omega(n)$ pertains to the limit $n \\to \\infty$ with $\\alpha,d$ fixed, i.e., it hides a constant factor depending on $\\alpha,d$. 
Note that here the overlap is simply the (normalized) cardinality of the intersection of the two sets: $R(x_1,x_2) = \\frac{1}{n}|S(x_1) \\cap S(x_2)|$.\n\nThe proof of Theorem~\\ref{thm:ogp-graph}---which is deferred to Section~\\ref{sec:pf-ogp-indep}---is an adaptation of the first moment argument of~\\cite{gamarnik2014limits}: we compute the expected number of pairs of independent sets whose overlap lies in the ``forbidden'' region, and show that this is exponentially small.\n\n\n\n\n\\section{Proofs for $p$-Spin Model}\\label{sec:pf-pspin}\n\n\\subsection{Low-Degree Polynomials are Stable}\n\nIn this section we prove a noise stability--type result for polynomials of Gaussians, which will be a key ingredient in our proofs. Throughout this section, let $d\\geq 1$ and let $Y\\in \\RR^d$ be a vector with i.i.d.\\ standard Gaussian entries. Denote the standard Gaussian measure on $\\R^d$ by $\\Gamma^d$. For two standard Gaussian random vectors defined on the same probability space, we write $X\\sim_\\rho Y$ if their covariance satisfies $\\mathrm{Cov}(X,Y) = \\rho\\, I$ for some $\\rho \\in [0,1]$, where $I$ denotes the identity matrix. Throughout this section, all polynomials have non-random coefficients. The goal of this section is to prove the following stability result.\n\\begin{theorem}\n\\label{thm:hyp-stable}\nLet $0\\leq \\rho\\leq 1$.\nLet $X,Y$ be a pair of standard Gaussian random vectors on $\\R^d$ such that $X\\sim_\\rho Y$. Let $P$ denote the joint law of $X, Y$. Let $f: \\R^d \\to \\RR^k$ be a (deterministic) polynomial of degree at most $D$ with $\\EE \\norm{f(X)}_2^2 = 1$. For any $t \\ge (6e)^D$,\n\\[ P(\\|f(X) - f(Y)\\|_2^2 \\ge 2t(1-\\rho^D)) \\le \\exp\\left(-\\frac{D}{3e} t^{1\/D}\\right). 
\\]\n\\end{theorem}\n\n\nWe begin by recalling the following standard consequence of hypercontractivity; see Theorem~5.10 and Remark~5.11 of \\cite{janson-gaussian} or \\cite[Sec.\\ 3.2]{LedouxTalagrand}.\n\\begin{proposition*}[Hypercontractivity for polynomials]\nIf $f: \\R^d \\to \\RR$ is a degree-$D$ polynomial and $q \\in [2,\\infty)$ then\n\\begin{equation}\\label{eq:hyp-moment}\n\\Ex\\left[|f(Y)|^q\\right] \\le (q-1)^{qD\/2} \\Ex[f(Y)^2]^{q\/2}. \n\\end{equation} \n\\end{proposition*}\n\n\\noindent Let us now note the following useful corollary of this result for vector-valued polynomials.\n\n\\begin{lemma}\n\\label{lem:2-norm-moment}\nIf $f: \\RR^d \\to \\RR^k$ is a degree-$D$ polynomial and $q \\in [2,\\infty)$ then\n\\[ \n\\Ex[\\|f(Y)\\|_2^{2q}] \\le [3(q-1)]^{qD} \\Ex[\\|f(Y)\\|_2^2]^q.\n\\]\n\\end{lemma}\n\\begin{proof}\nLet us begin by observing that by the Cauchy-Schwarz inequality and \\eqref{eq:hyp-moment}, \n\\begin{align}\n\\EE[\\|f(Y)\\|_2^4] \n&\\le \\sum_i \\EE[f_i(Y)^4] + 2 \\sum_{i < j} \\sqrt{\\EE[f_i(Y)^4]\\EE[f_j(Y)^4]}\\nonumber \\\\\n&\\le \\sum_i 9^D \\EE[f_i(Y)^2]^2 + 2 \\sum_{i < j} 9^D \\EE[f_i(Y)^2] \\EE[f_j(Y)^2]= 9^D \\left(\\E \\norm{f(Y)}_2^2\\right)^2.\\label{eq:2-norm-moment-1}\n\\end{align}\nOn the other hand, since $\\|f(Y)\\|_2^2$ is a polynomial of degree at most $2D$, we may again apply \\eqref{eq:hyp-moment} to obtain\n\\[\n\\EE[\\|f(Y)\\|_2^{2q}] \\le (q-1)^{qD} \\EE[\\|f(Y)\\|_2^4]^{q\/2} \\leq [3(q-1)]^{qD}\\E[\\norm{f(Y)}_2^2]^q\n\\]\nas desired, where in the last inequality we used \\eqref{eq:2-norm-moment-1}.\n\\end{proof}\n\n\\noindent With these results in hand we may now prove the following preliminary tail bound.\n\\begin{proposition}\n\\label{prop:2-norm-tail}\nIf $f: \\R^d \\to \\RR^k$ is a degree-$D$ polynomial, then for any $t \\ge (6e)^D$,\n\\[ \n\\Gamma^d(\\|f(Y)\\|_2^2 \\ge t \\,\\EE[\\|f(Y)\\|_2^2]) \\le \\exp\\left(-\\frac{D}{3e} t^{1\/D}\\right). 
\n\\]\n\\end{proposition}\n\\noindent (Recall that $\\Gamma^d(\\cdot)$ denotes probability under standard Gaussian measure.)\n\\begin{proof}\nUsing Lemma~\\ref{lem:2-norm-moment}, for any $q \\in [2,\\infty)$,\n\\begin{align*}\n\\Gamma^d(\\|f(Y)\\|_2^2 \\ge t) &= \\Gamma^d(\\|f(Y)\\|_2^{2q} \\ge t^q) \\le \\EE[\\|f(Y)\\|_2^{2q}]t^{-q}\\\\\n& \\le [3(q-1)]^{qD} \\EE[\\|f(Y)\\|_2^2]^q t^{-q} \n\\le (3q)^{qD}\\, \\EE[\\|f(Y)\\|_2^2]^q\\, t^{-q}\n\\end{align*}\nand so, letting $q = t^{1\/D}\/(3e) \\ge 2$,\n\\[ \\Gamma^d(\\|f(Y)\\|_2^2 \\ge t \\,\\EE[\\|f(Y)\\|_2^2]) \\le [(3q)^D\/t]^q = \\exp(-Dq) = \\exp(-D t^{1\/D}\/(3e)).\\qedhere \\]\n\\end{proof}\n\n\\noindent It will be helpful to recall the \\emph{noise operator}, $T_\\rho:L^2(\\Gamma^d)\\to L^2(\\Gamma^d)$, defined by\n\\[\nT_\\rho f(x) = \\E f(\\rho x+\\sqrt{1-\\rho^2}Y)\n\\]\nwhere $\\rho \\in [0,1]$. Recall that for $t\\geq 0$, $P_t := T_{e^{-t}}$ is the classical Ornstein--Uhlenbeck semigroup. In particular,\nif $(h_\\ell)$ are the Hermite polynomials on $\\R$ normalized to be an orthonormal basis for $L^2(\\Gamma^1)$, then the eigenfunctions of $T_\\rho$\nare given by products of Hermite polynomials \\cite{LedouxTalagrand}. Specifically, for any $\\psi(x)$ of the form $\\psi(x) = h_{\\ell_1}(x_1)\\cdots h_{\\ell_d}(x_d)$, \nwe have\n\\begin{equation}\\label{eq:hermite-eigenval}\nT_\\rho \\psi (x) = \\rho^D \\psi(x)\n\\end{equation}\nwhere $D=\\sum_j \\ell_j$. With this in hand we are now in a position to prove the following inequality. \n \\begin{lemma}\\label{lem:ex-noise}\nIf $f: \\RR^d \\to \\RR^k$ is a degree-$D$ polynomial with $\\EE \\|f(Y)\\|_2^2 = 1$, then for any $\\rho \\in [0,1]$, if $X\\sim_\\rho Y$,\n\\[ \n\\Ex \\|f(X) - f(Y)\\|_2^2 \\le 2(1 - \\rho^D). \n\\] \n\\end{lemma}\n\\begin{proof}\nLet $X_\\rho$ be given by\n\\[\nX_\\rho = \\rho Y + \\sqrt{1-\\rho^2} Y',\n\\]\nwhere $Y'$ is an independent copy of $Y$. Observe that $(X_\\rho,Y)$ is equal in law to $(X,Y)$. 
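Indeed, this is a one-line check (which we spell out here): both pairs are centered jointly Gaussian vectors with standard marginals, and\n\\[\n\\mathrm{Cov}(X_\\rho, Y) = \\rho\\, \\mathrm{Cov}(Y,Y) + \\sqrt{1-\\rho^2}\\, \\mathrm{Cov}(Y',Y) = \\rho\\, I = \\mathrm{Cov}(X,Y).\n\\]\n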
In this case, we see that\n\\begin{align*}\n \\Ex\\|f(X) - f(Y)\\|_2^2 \n = 2 - 2 \\EE \\langle f(X), f(Y) \\rangle \n = 2 - 2 \\EE \\langle f(X_\\rho), f(Y) \\rangle\n = 2 - 2 \\EE \\langle T_\\rho f(Y), f(Y) \\rangle.\n\\end{align*}\nConsider the collection of products of real-valued Hermite polynomials of degree at most $D$,\n\\[\n\\mathcal{H}_D =\\{\\psi:\\R^d\\to\\R \\,:\\, \\psi(x) = h_{\\ell_1}(x_1)\\cdots h_{\\ell_d}(x_d) \\text{ with } \\sum_i \\ell_i \\leq D\\}.\n\\]\nObserve that $\\mathcal{H}_D$ is an orthonormal system in $L^2(\\Gamma^d)$ and that the collection of real-valued polynomials $p:\\R^d\\to\\R$ of degree at most $D$ is contained in its closed linear span. As such, since $\\rho^D\\leq \\rho^s$ for $0\\leq s\\leq D$, we see that for any $1\\leq i \\leq k$,\n\\[\n\\rho^D \\E f_i(Y)^2 \\leq \\E\\, T_\\rho f_i(Y) f_i(Y) \\leq \\E f_i(Y)^2\n\\]\nby \\eqref{eq:hermite-eigenval}. Summing in $i$ yields \n\\[\n\\rho^D\\leq \\E \\langle T_\\rho f(Y),f(Y)\\rangle \\leq 1.\n\\]\nCombining this with the preceding bound yields the desired inequality.\n\\end{proof}\n\n\\noindent We are now in a position to prove the main theorem of this section.\n\\begin{proof}[Proof of Theorem~\\ref{thm:hyp-stable}]\nLet $Y'$ be an independent copy of $Y$. Then \nif we let $\\tilde{Y} = (Y,Y')$, this is a standard Gaussian vector on $\\R^{2d}$. Furthermore, if we let\n\\[\nh(\\tilde{Y})= f(Y) - f(\\rho Y + \\sqrt{1-\\rho^2}Y'),\n\\]\nthen $h$ is a polynomial of degree at most $D$ in $\\tilde{Y}$ and, by Lemma~\\ref{lem:ex-noise}, \n\\[\n\\EE \\|h(\\tilde Y)\\|_2^2=\\E\\norm{f(X)-f(Y)}_2^2 \\le 2(1-\\rho^D).\n\\]\nThe result now follows from Proposition~\\ref{prop:2-norm-tail}.\n\\end{proof}\n\n\n\n\n\n\\subsection{Failure of Low-Degree Algorithms}\\label{sec:pf-lowdeg-pspin}\n\nIn this section we prove our main results on low-degree hardness for the spherical and Ising $p$-spin models (Theorems~\\ref{thm:spherical-lowdeg} and \\ref{thm:ising-lowdeg}). 
The main content of this section is to show that the OGP and separation properties imply failure of stable algorithms, following an interpolation argument similar to~\\cite{gamarnik2019overlap}. The main results then follow by combining this with the stability of low-degree polynomials (Theorem~\\ref{thm:hyp-stable}) and the fact that OGP and separation are known to hold (Theorem~\\ref{thm:pspin-ogp}).\n\n\\paragraph{The spherical case.}\n\nWe begin by observing the following elementary fact: when two vectors of norm at least $\\gamma$ are normalized onto the unit sphere, the distance between them increases by at most a factor of $\\gamma^{-1}$.\n\\begin{lemma}\\label{lem:norm-bound}\nIf $\\|x\\|_2 = \\|y\\|_2 = 1$ and $a \\ge \\gamma$, $b \\ge \\gamma$ then $\\|x - y\\|_2 \\le \\gamma^{-1} \\|ax - by\\|_2$.\n\\end{lemma}\n\\begin{proof}\n We have\n \\[ \\|ax - by\\|_2^2 = a^2 + b^2 - 2ab \\langle x,y \\rangle = (a-b)^2 + ab \\|x - y\\|_2^2 \\ge \\gamma^2 \\|x - y\\|_2^2.\\qedhere \\]\n\\end{proof}\n\n\\noindent Throughout the following, it will be convenient to work with the interpolated family of tensors $(Y_\\tau)_{\\tau \\in [0,\\pi\/2]}$ defined by\n\\begin{equation}\\label{eq:Y-tau-def}\nY_\\tau = \\cos(\\tau) Y+ \\sin(\\tau) Y'.\n\\end{equation}\nNote that by linearity of inner products, we may equivalently write $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin} as\n\\[\n\\cA(Y,Y') =\\{ H_n(\\cdot\\,;Y_\\tau) \\,:\\, \\tau \\in [0,\\pi\/2]\\}.\n\\]\n\n\\noindent The following result shows that, together, the OGP and separation properties imply failure of low-degree polynomials for the spherical $p$-spin.\n\n\\begin{theorem}\\label{thm:spherical-ogp-lowdeg}\nFor any $0 \\le \\nu_1 < \\nu_2 \\le 1$, there exists a constant $\\delta^* > 0$ such that the following holds. 
Let $p, n, D \\in \\NN$ and $\\mu \\in \\RR$.\nSuppose that $Y,Y'$ are independent $p$-tensors with i.i.d.\\ standard Gaussian entries and let $\\cA(Y,Y')$ be as in \\eqref{eq:interpolated-family-p-spin}. Suppose further that with probability at least $3\/4$ over $Y,Y'$, we have that $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on domain $\\cS_n$ with overlap $R=|\\langle \\cdot,\\cdot\\rangle|\/n$, and that $H_n(\\cdot\\,,Y)$ and $H_n(\\cdot\\,,Y')$ are $\\nu_1$-separated above $\\mu$. Then for any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$ and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma)$-optimizes \\eqref{eq:p-spin-def} on $\\cS_n$.\n\\end{theorem}\n\n\\begin{proof}\nLet $Y,Y'$ be as in the statement of the theorem, and let $P=\\PP_Y \\otimes \\PP_\\omega$ denote the joint law of $(Y,\\omega)$.\nSuppose, for the sake of contradiction, that $f$ is a random degree-$D$ polynomial that $(\\mu,\\delta,\\gamma)$-optimizes \\eqref{eq:p-spin-def} on $\\cS_n$. We first reduce to the case where $f$ is deterministic. \n\nLet $A(Y,\\omega)$ denote the ``failure'' event\n\\[\nA(Y,\\omega)= \\{H_n(g_f(Y,\\omega);Y) < \\mu \\;\\vee\\; \\|f(Y,\\omega)\\|_2 < \\gamma \\sqrt{n}\\}.\n\\]\nSince $\\EE \\|f(Y,\\omega)\\|_2^2 = n$ and $\\PP(A(Y,\\omega)) \\le \\delta$, we have by Markov's inequality, \n\\[\n\\PP_\\omega\\{\\EE_Y \\|f(Y,\\omega)\\|_2^2 \\ge 3n\\} \\le 1\/3\n\\quad \\text{ and }\\quad \n\\PP_\\omega(\\PP_Y(A(Y,\\omega)) \\ge 3\\delta) \\le 1\/3.\n\\]\nThis means that there exists an $\\omega^* \\in \\Omega$ such that $\\EE_Y \\|f(Y,\\omega^*)\\|_2^2 \\le 3n$ and $\\PP_Y\\{A(Y,\\omega^*)\\} \\le 3\\delta$. 
Fix this choice of $\\omega = \\omega^*$ so that $f(\\cdot) = f(\\cdot,\\omega^*)$ becomes a deterministic function.\n\nLet $Y,Y' \\in (\\RR^n)^\\tp$ be independent tensors with i.i.d.\\ $\\mathcal{N}(0,1)$ entries, let $Y_\\tau$ be as in \\eqref{eq:Y-tau-def}, and $\\cA(Y,Y')$ as in \\eqref{eq:interpolated-family-p-spin}.\nFor some $L \\in \\NN$ to be chosen later, divide the interval $[0,\\pi\/2]$ into $L$ equal sub-intervals: $0 = \\tau_0 < \\tau_1 < \\cdots < \\tau_L = \\pi\/2$, and let $x_\\ell = g_f(Y_{\\tau_\\ell})$. We claim that with positive probability (over $Y,Y'$), all of the following events occur simultaneously and that this leads to a contradiction:\n\\begin{enumerate}\n \\item [(i)] The family $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on $\\cS_n$ and $H_n(\\cdot,Y)$ and $H_n(\\cdot, Y')$ are $\\nu_1$-separated above $\\mu$.\n \\item [(ii)] For all $\\ell \\in \\{0,1,\\ldots,L\\}$, $f$ succeeds on input $Y_{\\tau_\\ell}$, i.e., the event $A(Y_{\\tau_\\ell},\\omega^*)^c$ holds.\n \\item[(iii)] For all $\\ell \\in \\{0,1,\\ldots,L-1\\}$, $\\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 < \\gamma^2 cn$ for some $c = c(\\nu_1,\\nu_2) > 0$ to be chosen later.\n\\end{enumerate}\nFirst, let us see why (i)-(iii) imply a contradiction. Combining (i) and (ii) gives $|\\frac{1}{n} \\langle x_0,x_\\ell \\rangle| \\in [0,\\nu_1] \\cup [\\nu_2,1]$ for all $\\ell$ (by the OGP), and $|\\frac{1}{n} \\langle x_0,x_L \\rangle| \\in [0,\\nu_1]$ (by $\\nu_1$-separation, since $Y_{\\tau_0} = Y$ and $Y_{\\tau_L} = Y'$). 
Since we also have $|\\frac{1}{n} \\langle x_0,x_0 \\rangle| = 1$, there must exist an $\\ell$ that crosses the OGP gap in the sense that\n\\[ \n\\nu_2 - \\nu_1 \\le \\frac{1}{n}\\Big|\\abs{\\g{x_0,x_\\ell}} - \\abs{\\g{x_0,x_{\\ell+1}}}\\Big| \\leq\n\\frac{1}{n}|\\langle x_0,x_\\ell \\rangle - \\langle x_0,x_{\\ell+1} \\rangle| \\leq \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2.\n\\]\nSince $\\|f(Y_{\\tau_\\ell})\\|_2, \\|f(Y_{\\tau_{\\ell+1}})\\|_2 \\ge \\gamma \\sqrt{n}$ by (ii), Lemma~\\ref{lem:norm-bound} gives\n\\[ \\nu_2 - \\nu_1 \\le \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2 \\le \\frac{1}{\\gamma \\sqrt{n}} \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2, \\]\nwhich contradicts (iii) provided we choose $c \\le (\\nu_2-\\nu_1)^2$.\n\nIt remains to show that (i)-(iii) occur simultaneously with positive probability. By assumption, (i) fails with probability at most $1\/4$, so it is sufficient to show that (ii) and (iii) each fail with probability at most $1\/3$. By a union bound, (ii) fails with probability at most $3\\delta (L+1)$, which is at most $1\/3$ provided\n\\begin{equation}\\label{eq:L-cond-1}\nL \\le \\frac{1}{9\\delta} - 1.\n\\end{equation}\nFor (iii), we will apply Theorem~\\ref{thm:hyp-stable} with some $\\tilde D \\ge D$ (since we are allowed to use any upper bound on the degree) and $t = (6e)^{\\tilde D}$. For any $\\ell$ we have $Y_{\\tau_\\ell} \\sim_\\rho Y_{\\tau_{\\ell+1}}$ with $\\rho = \\cos\\left(\\frac{\\pi}{2L}\\right)$. 
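(To verify this correlation claim directly: each entry of $Y_\\tau$ is standard Gaussian since $\\cos^2(\\tau) + \\sin^2(\\tau) = 1$, while corresponding entries satisfy\n\\[\n\\mathrm{Cov}\\left((Y_{\\tau_\\ell})_i, (Y_{\\tau_{\\ell+1}})_i\\right) = \\cos(\\tau_\\ell)\\cos(\\tau_{\\ell+1}) + \\sin(\\tau_\\ell)\\sin(\\tau_{\\ell+1}) = \\cos(\\tau_{\\ell+1} - \\tau_\\ell) = \\cos\\left(\\frac{\\pi}{2L}\\right),\n\\]\nusing that $Y$ and $Y'$ are independent with i.i.d.\\ $\\mathcal{N}(0,1)$ entries.)\n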
Using $\\EE_Y \\|f(Y)\\|_2^2 \\le 3n$,\n\\begin{equation}\\label{eq:ffd}\n\\PP( \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge 6n(6e)^{\\tilde D}(1-\\rho^{\\tilde D}) ) \\le \\exp(-2 \\tilde{D}).\n\\end{equation}\nSince\n\\[ 1-\\rho^{\\tilde D} = 1-\\cos^{\\tilde D}\\left(\\frac{\\pi}{2L}\\right) \\le 1 - \\left(1 - \\frac{1}{2}\\left(\\frac{\\pi}{2L}\\right)^2\\right)^{\\tilde D} \\le \\frac{\\tilde D}{2}\\left(\\frac{\\pi}{2L}\\right)^2, \\]\nequation~\\eqref{eq:ffd} implies\n\\[ \\PP( \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge \\gamma^2 cn ) \\le \\exp(-2\\tilde{D}) \\]\nprovided\n\\begin{equation}\\label{eq:L-cond-2}\nL \\ge \\frac{\\pi}{2\\gamma} \\sqrt{\\frac{3\\tilde{D}}{c}}(6e)^{\\tilde D\/2}.\n\\end{equation}\nThus, (iii) fails with probability at most $L \\exp(-2\\tilde{D})$, which is at most $1\/3$ (as desired) provided\n\\begin{equation}\\label{eq:L-cond-3}\nL \\le \\frac{1}{3} \\exp(2\\tilde{D}).\n\\end{equation}\nTo complete the proof, we need to choose integers $\\tilde D \\ge D$ and $L$ satisfying~\\eqref{eq:L-cond-1}, \\eqref{eq:L-cond-2}, \\eqref{eq:L-cond-3}, i.e.,\n\\begin{equation}\\label{eq:L-final}\n\\frac{\\pi}{2\\gamma} \\sqrt{\\frac{3\\tilde{D}}{c}}(\\sqrt{6e})^{\\tilde D} \\le L \\le \\min\\left\\{\\frac{1}{9\\delta}-1, \\frac{1}{3}(e^2)^{\\tilde D}\\right\\}.\n\\end{equation}\nRequire $\\delta \\le \\frac{1}{4} \\exp(-2\\tilde{D})$ so that the second term in the $\\min\\{\\cdots\\}$ is smaller (when $\\tilde{D}$ is sufficiently large). Since $\\gamma \\ge (2\/3)^D \\ge (2\/3)^{\\tilde D}$ and $\\frac{3}{2}\\sqrt{6e} < e^2$, there now exists an $L \\in \\NN$ satisfying~\\eqref{eq:L-final} provided that $\\tilde D$ exceeds some constant $D^* = D^*(c)$. 
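To see concretely that the window \eqref{eq:L-final} is nonempty for large degree bounds, one can evaluate both sides numerically. The snippet below is purely illustrative and uses the example value $c = 0.01$ (the proof takes $c = c(\nu_1,\nu_2)$), with $\gamma = (2/3)^D$ and $\delta = \frac14 e^{-2D}$ at their extreme allowed values.

```python
import math

# Illustrative check (example constants, not from the paper): with
# gamma = (2/3)^D and delta = (1/4) exp(-2 D), the admissible window
# for the integer L in eq:L-final is nonempty once D is large enough.
def L_window(D, c):
    gamma = (2 / 3) ** D
    delta = 0.25 * math.exp(-2 * D)
    lo = (math.pi / (2 * gamma)) * math.sqrt(3 * D / c) * math.sqrt(6 * math.e) ** D
    hi = min(1 / (9 * delta) - 1, math.exp(2 * D) / 3)
    return lo, hi

c = 0.01  # example value of c = c(nu1, nu2)
assert all(lo < hi for lo, hi in (L_window(D, c) for D in range(40, 80)))
assert L_window(5, c)[0] > L_window(5, c)[1]  # window can be empty for small D
```

The crossover degree plays the role of the constant $D^* = D^*(c)$ in the proof.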
Set $\\tilde D = \\max\\{D,D^*\\}$ and $\\delta^* = \\frac{1}{4} \\exp(-2D^*)$ to complete the proof.\n\\end{proof}\n\n\\noindent Our main result on low-degree hardness of the spherical $p$-spin now follows by combining the above with the fact that OGP and separation hold in a neighborhood of the optimum.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:spherical-lowdeg}]\nThis result follows by combining Theorem~\\ref{thm:spherical-ogp-lowdeg} with Theorem~\\ref{thm:pspin-ogp}.\n\\end{proof}\n\n\\paragraph{The Ising case.} We now turn to the corresponding result for the Ising $p$-spin model, which again shows that together, OGP and separation imply failure of low-degree polynomials. \n\n\\begin{theorem}\\label{thm:ising-ogp-lowdeg}\nFor any $0 \\le \\nu_1 < \\nu_2 \\le 1$ there exist constants $\\delta^* > 0$ and $\\eta > 0$ such that the following holds. Let $p, n, D \\in \\NN$ and $\\mu \\in \\RR$. Suppose that $Y,Y'$ are independent $p$-tensors with i.i.d.\\ standard Gaussian entries and let $\\cA(Y,Y')$ be as in \\eqref{eq:interpolated-family-p-spin}. Suppose further that with probability at least $3\/4$ over $Y,Y'$, we have that $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on domain $\\Sigma_n$ with overlap $R=|\\langle \\cdot,\\cdot\\rangle|\/n$, and that $H_n(\\cdot\\,,Y)$ and $H_n(\\cdot\\,,Y')$ are $\\nu_1$ separated above $\\mu$. Then for any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$ and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma, \\eta)$-optimizes \\eqref{eq:p-spin-def} on $\\Sigma_n$.\n\\end{theorem}\n\\begin{proof}\nThe proof is nearly identical to that of Theorem~\\ref{thm:spherical-ogp-lowdeg} above, so we only explain the differences. 
We now define $A(Y,\\omega)$ to be the failure event \n\\[A(Y,\\omega) = \\{H_n(\\sgn(f(Y,\\omega));Y) < \\mu \\;\\vee\\; |\\{k \\in [n] \\;:\\; |f_k(Y,\\omega)| \\ge \\gamma\\}| < (1 - \\eta)n\\},\n\\]\nand define $x_\\ell = \\sgn(f(Y_{\\tau_\\ell}))$. The only part of the proof we need to modify is the proof that (i)-(iii) imply a contradiction, including the choice of $c$. As above, combining (i) and (ii) gives the existence of an $\\ell$ for which $\\nu_2 - \\nu_1 \\le \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2$, i.e., $\\frac{1}{4}\\|x_\\ell - x_{\\ell+1}\\|_2^2 \\ge \\frac{1}{4}(\\nu_2 - \\nu_1)^2 n$, implying that $x_\\ell$ and $x_{\\ell+1}$ differ in at least $\\Delta := \\frac{1}{4}(\\nu_2 - \\nu_1)^2 n$ coordinates. Let $\\eta = \\Delta\/(2n) = \\frac{1}{8}(\\nu_2 - \\nu_1)^2$ so that there must be at least $\\Delta\/2$ coordinates $i$ for which $|f_i(Y_{\\tau_\\ell}) - f_i(Y_{\\tau_{\\ell+1}})| \\ge \\gamma$. This implies $\\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge \\gamma^2 \\cdot \\frac{\\Delta}{2} = \\frac{1}{8}\\gamma^2 (\\nu_2 - \\nu_1)^2 n$, which contradicts (iii) provided we choose $c \\le \\frac{1}{8}(\\nu_2 - \\nu_1)^2$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:ising-lowdeg}]\nThis result follows by combining Theorems~\\ref{thm:ising-ogp-lowdeg} and \\ref{thm:pspin-ogp}.\n\\end{proof}\n\n\n\n\n\\subsection{Stability of Langevin and Gradient Flows}\nLet $U\\in C^\\infty(\\cS_n)$ be some smooth function and for any $\\sigma\\geq0$ we can consider \\emph{Langevin Dynamics with potential $U$ and variance $\\sigma$} to be the strong solution of the stochastic differential equation (in It\\^o form)\n\\[\n\\begin{cases}\ndX_t = \\sigma dB_t - \\nabla U dt\\\\\nX_0 \\sim \\nu,\n\\end{cases}\n\\]\nwhere $B_t$ is spherical Brownian motion, $\\nabla$ is the spherical gradient, and $\\nu\\in\\mathcal{M}_1(\\cS_n)$ is some probability measure on the sphere called \\emph{the initial data}. 
Note that in the case $\\sigma=0$ this is simply gradient flow for $U$.\n\nWe recall here the following basic fact about the well-posedness of such equations, namely their continuous dependence on the function $U$. In the following, for a vector-valued function $F: \\cS_n\\to T\\cS_n$, we let $\\norm{F}_\\infty$ denote the essential supremum of the norm of $F$ induced by the canonical metric. (Here $T\\cS_n$ denotes the tangent bundle to $\\cS_n$.)\n\\begin{lemma*}\nLet $U,V\\in C^\\infty(\\cS_n)$ and $\\sigma\\geq 0$. Fix $\\nu\\in \\mathcal{M}_1(\\cS_n)$. Let $X^U_t$ and $X^V_t$ denote the corresponding solutions to Langevin dynamics with potentials $U$ and $V$ respectively and with the same variance $\\sigma$ with respect to the same Brownian motion $B_t$. Suppose further that their initial data are the same. Then there is a universal $C>0$ such that for any $t>0$\n\\begin{equation}\\label{eq:well-posed}\n\\sup_{s\\leq t}\\norm{X^U_s - X^V_s}_2 \\leq C t e^{Ct \\norm{\\nabla U}_\\infty \\vee \\norm{\\nabla V}_\\infty}\\norm{\\nabla U-\\nabla V}_\\infty \\quad \\text{ a.s.,}\n\\end{equation}\nwhere $\\norm{\\cdot}_2$ denotes Euclidean distance in the canonical embedding of $\\cS_n\\subseteq \\RR^n$.\n\\end{lemma*}\n\\noindent The proof of this result is a standard consequence of Gronwall's inequality and can be seen, e.g., in \\cite{varadhan,Tes12}.\n\n In this section, for a $p$-tensor $A$ we will write $A(x_1,\\cdots,x_p)$ to denote the action of $A$ on $p$ vectors, i.e., $A(x_1,\\cdots, x_p)= \\g{A,x_1\\otimes\\cdots\\otimes x_p}.$ Viewing this as a multilinear operator, we denote the operator norm by\n\\[\n\\norm{A}_{\\op} = \\sup_{\\norm{x_1}_2=\\cdots=\\norm{x_p}_2=1} A(x_1,...,x_p).\n\\]\nAs a consequence of the above, we note the following. 
\\begin{lemma}\\label{lem:langevin-stability-main}\nLet $\\delta = n^{-\\alpha}$ for some $\\alpha>0$ and let $\\{\\tau_i\\}$ denote a partition of $[0,\\pi\/2]$ consisting of $\\lceil\\delta^{-1}\\rceil+1$ points with $|\\tau_{i+1}-\\tau_i| \\leq \\delta$. Let $(X^{\\tau_i})_i$ denote the family of strong solutions to Langevin dynamics with variance $\\sigma\\geq0$, potentials $H_n(\\cdot;Y_{\\tau_i})$, and initial data $\\nu\\in\\mathcal{M}_1(\\cS_n)$. Then there is a $C>0$ independent of $n$ such that for any $T>0$\n\\[\n\\sup_{i}\\sup_{s\\leq T}\\norm{X_s^{\\tau_i}-X_s^{\\tau_{i+1}}}_2 \\leq C T e^{C T} n^{-\\alpha}\n\\]\nwith probability at least $1-e^{-\\Omega(n)}$.\n\\end{lemma}\n\\begin{proof}\nEvidently, the proof will follow by \\eqref{eq:well-posed} upon controlling the gradients of $H_n(\\cdot;Y)$. To this end, we see that\n\\[\n\\nabla H_n(x;Y_\\tau) = \\frac{1}{n^{\\frac{p+1}{2}}}(Y_{\\tau}(\\pi_x,x,\\ldots,x)+\\cdots+Y_{\\tau}(x,\\ldots,x,\\pi_x))\n\\]\nwhere $\\pi_x$ denotes the projection onto $T_x \\cS_n$. In particular, since each of the $p$ terms above is a vector of norm at most $\\norm{Y_\\tau}_{\\op} n^{(p-1)\/2}$,\n\\[\n\\norm{\\nabla H_n(x;Y_{\\tau})}_2 \\leq \\frac{p}{n} \\norm{Y_{\\tau}}_{\\op} \\leq \\frac{p}{n}(\\norm{Y}_{\\op}+\\norm{Y'}_{\\op}).\n\\]\nBy a standard epsilon-net argument (see, e.g., \\cite[Lemma 3.7]{BGJ20}), we have that\n\\[\n\\norm{Y}_{\\op}\\leq C\\sqrt{n}\n\\]\nwith probability $1-e^{-\\Omega(n)}$ (while the lemma in~\\cite{BGJ20} states the result for the expectation, one can either apply this to the probability by Borell's inequality, or simply note that the penultimate step in that proof is the desired high-probability bound).
\nThus after a union bound, with probability $1-\\exp(-\\Omega(n))$\n\\[\n\\sup_{0\\leq\\tau\\leq \\pi\/2} \\norm{\\nabla H_n(\\cdot\\,;Y_\\tau)}_\\infty \\leq 2C.\n\\]\nOn the other hand, in law we have that\n$Y_{\\tau_i}-Y_{\\tau_{i+1}} = Z$ satisfies \n\\[\nZ \\stackrel{(d)}{=} Y\\sqrt{(\\cos(\\tau_i)-\\cos(\\tau_{i+1}))^2+(\\sin(\\tau_i)-\\sin(\\tau_{i+1}))^2}.\n\\]\nSince both cosine and sine are 1-Lipschitz, we see that the entries of $Z$ are i.i.d.\\ and have variance at most $2\\delta^2$. \nConsequently, by the same epsilon-net argument, we have with probability $1-O(n^\\alpha e^{-c n})$,\n\\[\n\\max_{i}\\norm{\\nabla H_n(\\cdot\\,,Y_{\\tau_i})-\\nabla H_n(\\cdot\\,,Y_{\\tau_{i+1}})}_2\\leq C \\delta\n\\]\nas desired.\n\\end{proof}\n\n\\subsection{Failure of Langevin Dynamics}\\label{sec:pf-langevin}\n\nWe begin by noting the following concentration result. \n\\begin{lemma}\\label{lem:concetration-langevin}\nFix $T\\geq 0$ and $\\sigma\\geq0$. Let $X_T$ denote the solution of Langevin dynamics with potential $H_n$, variance $\\sigma$, and initial data $\\nu\\in\\cM_1(\\cS_n)$, and let $Q_\\nu$ denote its law conditionally on $Y$. Then we have that there is some $c>0$ such that for every $\\eps>0$\n\\[\n\\PP_Y\\otimes Q_\\nu( \\abs{H_n(X_T;Y)-\\EE H_n(X_T;Y)} \\geq \\eps) \\leq \\exp(-c \\eps^2 n).\n\\]\n\\end{lemma}\n\\begin{proof}\nNote as before that for any two tensors $Y$ and $Y'$, we have\n\\[\n\\norm{\\nabla H_n(\\cdot\\,;Y)-\\nabla H_n(\\cdot\\,;Y')}_2\\leq \\frac{p}{n}\\norm{Y-Y'}_{\\op} \\leq \\frac{p}{n}\\norm{Y-Y'}_2,\n\\]\nwhere here for a tensor $A$, $\\norm{A}_2$ denotes the square root of the sum of the squares of its entries. Consequently, by \\eqref{eq:well-posed} and the above gradient bound, the map $Y\\mapsto H_n(X_T;Y)$ is uniformly $Cn^{-1\/2}$-Lipschitz for some $C=C(T)>0$ independent of $n$. The result then follows by Gaussian concentration of measure.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:langevin-main}]\nIn the following, we let $P=\\PP\\otimes Q_\\nu$.
Recall the family $(Y_\\tau)$ from \\eqref{eq:Y-tau-def} and $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin}. Let $\\delta=n^{-\\alpha}$ for some $\\alpha>0$ and define $(\\tau_i)$ as in Lemma~\\ref{lem:langevin-stability-main}.\nFix an $\\eps>0$ and let $G$ denote the event that the overlap gap property holds for $\\cA(Y,Y')$ with parameters $(E_p(\\cS)-3\\eps,\\nu_1,\\nu_2)$\nas well as $\\nu_1$-separation of $H_n(\\cdot\\,;Y_0)$ and $H_n(\\cdot\\,;Y_{\\pi\/2})$ above level $E_p(\\cS)-3\\eps$. By Theorem~\\ref{thm:pspin-ogp}, this holds for every sufficiently small $\\eps>0$ with probability $1-\\exp(-\\Omega(n))$. \n\n\nLet $X^{\\tau_i}$ denote the solutions to Langevin dynamics\ncorresponding to the potentials $H_n(\\cdot\\,;Y_{\\tau_i})$.\nLet $B_n$ and $\\tilde{B}_n$ denote the bad events\n\\begin{align*}\n\\tilde{B}_n &= \\{\\exists i \\,:\\, H_n(X_T^{\\tau_i};Y_{\\tau_i}) \\geq E_p(\\cS)-\\eps \\}\\\\\nB_n &= \\{ H_n(X_T^{\\tau_i};Y_{\\tau_i}) \\geq E_p(\\cS)- 3\\eps \\; \\forall i\\}.\n\\end{align*}\nLet $E_i(\\eps)$ denote the complement of the event bounded in Lemma~\\ref{lem:concetration-langevin} applied to $X^{\\tau_i}_T$, and let $E(\\eps)= \\cap E_i(\\eps)$ which has probability at least $1-\\exp(-\\Omega(n))$. Note that on $\\tilde{B}_n\\cap E(\\eps)$, we have that $\\E H_n(X^{\\tau_i}_T;Y_{\\tau_i})\\geq E_p(\\cS)-2\\eps$ for some $i$. As the expectation is non-random and independent of $i$, this holds for all $i$. Consequently, $\\tilde{B}_n\\cap E(\\eps)\\subset B_n$. Thus we have $P(\\tilde{B}_n)\\leq P(B_n) + \\exp(-\\Omega(n))$. \n\nSuppose now that the events $B_n$ and $G$ have non-empty intersection. Let us work on this intersection.\nBy $\\nu_1$-separation, recalling the overlap function $R(x,y)=\\abs{\\frac{1}{n}\\g{x,y}}$, we have that \n\\[\nR(X^0_T,X^{\\pi\/2}_T) \\leq \\nu_1\n\\]\nwhereas $R(X^0_T,X^0_T)=1$.
On the other hand, by Lemma~\\ref{lem:langevin-stability-main}, it follows that\n\\[\n\\abs{R(X_T^0,X_T^{\\tau_i})-R(X_T^0,X_T^{\\tau_{i+1}})}\\leq \\sqrt{n} C T e^{C T}n^{-\\alpha}.\n\\]\nThus, choosing $\\alpha>1\/2$, we see that for $n$ sufficiently large, there must be some (random) $j$ such that \n\\[\n\\nu_1 < R(X_T^0,X_T^{\\tau_j}) <\\nu_2.\n\\]\nThis contradicts the overlap gap property. Thus $B_n\\subseteq G^c$.\nConsequently, we have that \n\\[\nP(\\tilde{B}_n)\\leq P(B_n) +e^{-\\Omega(n)} \\leq P(G^c)+e^{-\\Omega(n)}=e^{-\\Omega(n)}.\n\\]\nObserving that $\\tilde{B}_n^c$ is contained in the event we are trying to bound yields the desired result by monotonicity of probabilities.\n\\end{proof}\n\n\\subsection{Proof of Overlap Gap Property}\n\\label{sec:pf-ogp-pspin}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:pspin-ogp}]\nWe begin with the spherical setting. Let us view $H_n(x;Y)$ as a Gaussian process on $\\cS_n$. It was shown in \\cite[Theorem 3]{ChenSen17} that for any $\\tau,\\eps>0$ with \n$\\tau \\leq \\pi\/2$ there are\n$C,c,\\tilde \\mu>0$ such that with probability at least $1-Ce^{-c n}$, \n\\[\n\\max_{R(x,y)>\\eps} \\left( H_n(x;Y_\\tau) + H_n(y;Y) \\right)\n<\\max_{x\\in\\cS_n} H_n(x;Y_\\tau)+\\max_{x\\in\\cS_n} H_n(x;Y)- \\tilde \\mu,\n\\]\nso that if both $u,v$ satisfy\n\\begin{equation*}\n \\begin{aligned}\n \\max_{x\\in\\cS_n} H_n(x;Y_\\tau) - \\tilde \\mu \/2 &\\leq H_n(u;Y_\\tau)\\\\\n \\max_{x\\in\\cS_n} H_n(x;Y) - \\tilde \\mu \/2 &\\leq H_n(v;Y)\n \\end{aligned}\n\\end{equation*}\nthen it must be that $|R(u,v)|<\\eps$. (The result is stated there for $\\tau <\\pi\/2$, but can be extended to the easier case of $\\tau = \\pi\/2$. See Remark~\\ref{rem:tau} below.) One can then replace the maximum on the right-hand side of the above upon recalling that by Borell's inequality, \n\\[\n\\PP( \\abs{ \\max_{x\\in \\cS_n} H_n(x;Y) -\\E \\max_{x\\in\\cS_n} H_n(x;Y)} \\geq \\eps)\\leq C\\exp(-cn\\eps^2)\n\\]\nfor some $C,c>0$.
In particular, upon recalling that $\\E\\max_{x\\in\\cS_n} H_n(x;Y)\\to E_p(\\cS)$ \\cite{JagTob17}, for $n$ sufficiently large the conclusion $|R(u,v)|<\\eps$ continues to hold whenever $u,v$ satisfy\n\\begin{equation}\\label{eq:disorder-overlap}\n \\begin{aligned}\n E_p(\\cS) - \\tilde \\mu \/4 &\\leq H_n(u;Y_\\tau)\\\\\n E_p(\\cS) - \\tilde \\mu \/4 &\\leq H_n(v;Y).\n \\end{aligned}\n\\end{equation}\nOn the other hand, as shown in \\cite[Theorem 6]{AuffChen18}, \\eqref{eq:disorder-overlap} holds with $\\tau =0$ as well, except now we have that the inner products of the near-maximal $u,v$ must satisfy $R(u,v)\\in[0,\\nu_1]\\cup[\\nu_2,1]$ for some $0\\leq\\nu_1<\\nu_2\\leq1$. By combining these results we can obtain the overlap gap \nproperty with parameters $(E_p(\\cS)-\\tilde\\mu\/4,\\nu_1,\\nu_2)$ \nby applying the discretization argument from \n\\cite{gamarnik2019overlap}. Note that \n\\eqref{eq:disorder-overlap} in the case $\\tau = \\pi\/2$ implies \n$\\eps$-separation above level $E_p(\\cS)-\\tilde \\mu\/4$. As $\\eps$ was arbitrarily small we can take $\\eps=\\nu_1$.\n\nAfter recalling that $\\E \\max_{\\Sigma_n} H_n(x;Y)\\to E_p(\\Sigma)$ \\cite{Ton02}, we see that the second result is a restatement of \\cite[Theorem 3.4]{gamarnik2019overlap} after applying Borell's inequality as in \\eqref{eq:disorder-overlap}.\n\\end{proof}\n\\begin{remark}\\label{rem:tau}\nWhile the result of \\cite[Theorem 3]{ChenSen17} is only stated for $0<\\tau<\\pi\/2$, it easily extends to the case $\\tau = \\pi\/2$ by differentiating in the Lagrange multiplier term $\\lambda$ in the ``RSB bound'' from \\cite[Eq.\\ 59]{ChenSen17}. \nFor the reader's convenience, we sketch this change. We follow here the notation of \\cite{ChenSen17}. By comparing to \\cite[Eq.\\ 78]{ChenSen17}, one sees that $E(0,u,\\lambda)$ from \\cite[Eq.\\ 61]{ChenSen17} satisfies $E(0,u,0)= 2 E_p(\\cS) \\, (= 2GS )$.
On the other hand for $u>0$ we have $\\partial_\\lambda E(0,u,0) = - u<0$, from which it follows that $\\min_\\lambda T(0,u,\\lambda)<2 E_p(\\cS)$ as desired. The case $u<0$ follows by symmetry.\n\\end{remark}\n\n\n\n\\section{Proofs for Maximum Independent Set}\n\n\n\\subsection{Low-Degree Polynomials are Stable}\n\nIn this section we prove a key structural property (Theorem~\\ref{thm:binary-stable}) of low-degree polynomials \non the Boolean hypercube. Roughly speaking, with nontrivial probability, a low-degree polynomial will not change its output significantly at any step when its input coordinates are resampled one at a time.\n\nThroughout this section, we work with the Boolean hypercube $\\{0,1\\}^m$ and let $Y=(Y_1,...,Y_m)$\ndenote a Bernoulli random vector, $Y\\in\\{0,1\\}^m$, with independent entries that satisfy\n\\[\nP(Y_i = 1) = p_i,\n\\]\nfor some $0 < p_i < 1$. We view the hypercube as a graph where the vertex set is $V = \\{0,1\\}^m$ and the edge set consists of those edges $(x,y)$ such that $x$ and $y$ differ in exactly one coordinate.\n\nWe introduce the following local regularity property of (non-random) functions $\\{0,1\\}^m \\to \\RR^n$.\n\n\\begin{definition}\nLet $f:\\{0,1\\}^m\\to\\RR^n$ and let $c>0$. \nAn edge $(x,y)$ in $\\{0,1\\}^m$ is said to be \\emph{$c$-bad}\nfor $f$ if \n\\[\n\\|f(x)-f(y)\\|_2^2 \\geq c\\, \\E \\|f(Y)\\|_2^2. \n\\]\n\\end{definition}\n\n\\noindent For $x,y \\in \\{0,1\\}^m$, recall the definition of the path $x \\mapsto y$ (Definition~\\ref{def:path}), which naturally corresponds to a walk on the edges of the hypercube graph. We now turn to the main result of this section, which shows that for a low-degree polynomial, a random path has no bad edges with nontrivial probability.\n\n\\begin{theorem}\\label{thm:binary-stable}\nLet $Y$ be a Bernoulli random vector with $P(Y_i=1)=p_i$, let $Y'$ be an independent copy of $Y$, and let $\\lambda = \\min_i (p_i\\wedge 1-p_i)$. 
For any $c>0$ and any (deterministic) degree-$D$ polynomial $f: \\{0,1\\}^m \\to \\RR^n$ we have\n\\[\nP( Y \\mapsto Y' \\text{ has no } c\\text{-bad edge for } f ) \\geq \\lambda^{4D\/c}.\n\\]\n\\end{theorem}\n\n\\noindent The key steps in the proof of Theorem~\\ref{thm:binary-stable} are contained in the following two lemmas. Throughout the following, for a point $x\\in\\{0,1\\}^m$, we let $x_{-i}$ denote the vector of all but the $i$th coordinate of $x$, and let $q(x)=P(x\\mapsto Y' \\text{ has no } c\\text{-bad edge})$. \n\n\\begin{lemma}\\label{lem:total-inf}\nLet $f: \\{0,1\\}^m \\to \\RR^n$ be a polynomial of degree $D$ and let $Y$ be a Bernoulli random vector with $P(Y_i=1)=p_i$.\nLet $B_i$ denote the event that the edge corresponding to flipping the $i$th coordinate of $Y$ is $c$-bad for $f$. Then\n\\begin{equation}\\label{eq:total-inf}\n\\frac{c}{2} \\sum_{i=1}^m (p_i \\wedge 1-p_i) P(B_i) \\le D.\n\\end{equation}\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:potential}\nIf $Y$ is a Bernoulli random vector with $P(Y_i = 1)=p_i$, then \n\\begin{equation}\\label{eq:potential}\n-\\E \\log q(Y) \\le \\sum_{i=1}^m S(p_i) P(B_i)\n\\end{equation}\nwhere $S$ denotes the binary entropy $S(p) = -p \\log p - (1-p) \\log(1-p)$.\n\\end{lemma}\n\n\\noindent Intuitively,~\\eqref{eq:total-inf} states that if $D$ is small, there cannot be too many bad edges. The proof will be based on the fact that low-degree polynomials have small \\emph{total influence}. Intuitively,~\\eqref{eq:potential} states that if most paths contain a bad edge then there must be many bad edges in total. The actual definition of ``bad'' will not be used in the proof of the latter lemma. We defer the proofs of these lemmas momentarily.\n\nWe first show how to deduce Theorem~\\ref{thm:binary-stable} from the above lemmas, and then we prove the lemmas.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:binary-stable}]\n\nIf $p \\le 1\/2$ then $-p \\log p \\ge -(1-p)\\log(1-p)$ and so $S(p) \\le -2p \\log p$.
If instead $p > 1\/2$ then $-p \\log p \\le -(1-p)\\log(1-p)$ and so $S(p) \\le -2(1-p) \\log(1-p)$. Therefore, in either case we have\n\\begin{equation}\\label{eq:H-bound}\nS(p_i) \\le 2 (p_i \\wedge 1-p_i) \\log(1\/\\lambda).\n\\end{equation}\n\n\\noindent We now have\n\\begin{align*}\n- \\log \\E\\, q(Y) \n\\le -\\E \\log q(Y)\n\\le \\sum_{i=1}^m S(p_i) P(B_i) \n\\le 2 \\log(\\frac{1}{\\lambda}) \\sum_{i=1}^m (p_i \\wedge 1-p_i) P(B_i) \\le 2 \\log(\\frac{1}{\\lambda}) \\cdot \\frac{2D}{c} \n\\end{align*}\nwhere in the first inequality we used Jensen's inequality, the second we used \\eqref{eq:potential}, the third we used \\eqref{eq:H-bound}, and the last we used \\eqref{eq:total-inf}.\nThe result follows by re-arrangement.\n\\end{proof}\n\nBefore turning to the proof of the above lemmas, let us pause and recall here some basic facts from Fourier analysis on the Boolean cube. For more on this see \\cite{o-book}. For $i \\in [m]$, let $\\phi_i(Y_i) = \\frac{Y_i - p_i}{\\sqrt{p_i(1-p_i)}}$, and for $S \\subseteq [m]$, let $\\phi_S(Y) = \\prod_{i \\in S} \\phi_i(Y_i)$.\nRecall that the functions $\\{\\phi_S\\}_{S\\subseteq[m]}$ form an orthonormal basis for $L^2(P)$. For a function $f$ we denote its fourier coefficients by $\\hat f(S) = \\E f(Y) \\phi_S(Y)$.\nObserve that Parseval's theorem in this setting reads: for a function $f:\\{0,1\\}^m\\to\\R$, we have\n\\[\n\\E[f(Y)^2] = \\sum_{S\\subseteq[m]} \\hat f(S)^2.\n\\]\nFor a function $f$ we denote the \\emph{total influence} by \n\\[\nI(f) = \\sum_{S\\subseteq [m]}\\abs{S} \\cdot \\hat f (S)^2.\n\\]\nFinally, consider the \\emph{Laplacian} operator $L_i$, defined by \n\\[\nL_i f = \\sum_{S\\ni i} \\hat f(S) \\phi_S(x),\n\\]\nwhich can be thought of as ``the part of $f$ that depends on $i$''.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:total-inf}]\nLet us begin by first fixing an entry $f_j$ of $f$. Since $f_j$ is of degree at most $D$, its spectrum is such that $\\hat f_j (S) =0 $ for any $S$ with $\\abs{S}>D$. 
As such,\n\\begin{align*}\nD\\,\\E[f_j(Y)^2] = D \\sum_{\\abs{S}\\leq D} \\hat f_j(S)^2 &\\geq I(f_j) = \\sum_{i}\\E(L_i f_j(Y))^2\\\\\n&= \\sum_{i} \\E [(1-p_i) (L_i f_j(Y_{-i}[0]))^2 +p_i (L_i f_j(Y_{-i}[1]))^2]\n\\end{align*}\nwhere $Y_{-i}[\\ell] \\in \\{0,1\\}^m$ is obtained from $Y_{-i}$ by setting the $i$th coordinate to $\\ell$.\nUsing that for any $a,b\\in\\R$ and $p\\in[0,1]$, $(p\\wedge 1-p) (a-b)^2\\leq 2((1-p)a^2 +p b^2),$ we see that the above display is bounded below by \n\\[ \\frac{1}{2}\\sum_{i} (p_i \\wedge 1-p_i)\\E(L_if_j(Y_{-i}[1])-L_if_j(Y_{-i}[0]))^2\n= \\frac{1}{2}\\sum_i (p_i \\wedge 1-p_i) \\E(f_j(Y_{-i}[1])-f_j(Y_{-i}[0]))^2. \\]\nSumming over $j$ and applying the definition of $c$-bad edge, we obtain\n\\begin{align*}\nD\\,\\E\\norm{f(Y)}_2^2 &\\geq \\frac{1}{2}\\sum_i (p_i \\wedge 1-p_i) \\E\\|f(Y_{-i}[1])-f(Y_{-i}[0])\\|_2^2\\\\\n&\\geq \\frac{1}{2} \\sum_i (p_i\\wedge 1-p_i) P(B_i) \\,c \\,\\E\\|f(Y)\\|_2^2.\n\\end{align*}\nCancelling the net factor of $\\E\\norm{f(Y)}_2^2$ yields the result.\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:potential}]\nProceed by induction on $m$. The base case $m = 1$ is straightforward: if the single edge is bad then both sides of~\\eqref{eq:potential} equal $S(p_1)$, and otherwise both sides equal 0.\n\nFor the inductive step, let $q_0(Y_{-1})$ denote the probability over $Y'_{-1}$ that the path $Y_{-1}[0] \\mapsto Y'_{-1}[0]$ has no $c$-bad edge. Similarly define $q_1(Y_{-1})$ for the path $Y_{-1}[1] \\mapsto Y'_{-1}[1]$. (Note that these are probabilities on $\\{0,1\\}^{m-1}$.) \nIntegrating in the first coordinate, we have by independence,\n\\begin{equation}\\label{eq:pot-exp}\n-\\E \\log q(Y) = -\\E[(1-p_1) \\log q(Y_{-1}[0]) + p_1 \\log q(Y_{-1}[1])]\n\\end{equation}\nwhere\n\\[ \nq(Y_{-1}[0]) = (1-p_1)q_0(Y_{-1}) + p_1 q_1(Y_{-1}) \\One_{ B_1^c} \\]\nand\n\\[ q(Y_{-1}[1]) = (1-p_1)q_0(Y_{-1}) \\One_{B_1^c} + p_1 q_1(Y_{-1}). 
\\]\nIf $B_1^c$ holds (i.e., the edge corresponding to flipping the first coordinate is ``good'') then the expression inside the expectation in~\\eqref{eq:pot-exp} is\n\\begin{align*}\n(1-p_1) \\log q(Y_{-1}[0]) + p_1 \\log q(Y_{-1}[1]) &= \\log [(1-p_1)q_0(Y_{-1}) + p_1 q_1(Y_{-1})] \\\\\n&\\ge (1-p_1) \\log q_0(Y_{-1}) + p_1 \\log q_1(Y_{-1})\n\\end{align*}\nusing concavity of $t \\mapsto \\log t$. If instead $B_1$ holds (i.e., the edge is bad),\n\\begin{align*}\n(1-p_1) \\log q(Y_{-1}[0]) + p_1 \\log q(Y_{-1}[1]) &= (1-p_1) \\log[(1-p_1) q_0(Y_{-1})] + p_1 \\log[p_1 q_1(Y_{-1})] \\\\\n&= -S(p_1) + (1-p_1) \\log q_0(Y_{-1}) + p_1 \\log q_1(Y_{-1}).\n\\end{align*}\nPutting it all together,\n\\begin{align}\n-\\E \\log q(Y) &\\le -\\E[-S(p_1)\\One_{B_1} + (1-p_1) \\log q_0(Y_{-1}) + p_1 \\log q_1(Y_{-1})] \\nonumber \\\\\n&= S(p_1) P(B_1) - (1-p_1) \\E \\log q_0(Y_{-1}) - p_1 \\E\\log q_1(Y_{-1}).\n\\label{eq:pot-final}\n\\end{align}\nBy the induction hypothesis,\n\\[ -\\E \\log q_0(Y_{-1}) \\le \\sum_{i > 1} S(p_i) P(B_i\\vert Y_1 =0). \\] \nSimilarly,\n\\[ -\\E \\log q_1(Y_{-1}) \\le \\sum_{i > 1} S(p_i) P(B_i\\vert Y_1 = 1). \\]\nCombining these yields \n\\[ - (1-p_1) \\E \\log q_0(Y_{-1}) - p_1 \\E \\log q_1(Y_{-1}) \\le \\sum_{i > 1} S(p_i) P(B_i). \\]\nPlugging this into~\\eqref{eq:pot-final} completes the proof.\n\\end{proof}\n\n\n\n\n\\subsection{Failure of Low-Degree Algorithms}\n\\label{sec:pf-lowdeg-indep}\n\nThis section is devoted to proving Theorem~\\ref{thm:MIS-main}. We start with the following result which shows that together, OGP and separation imply that low-degree polynomials fail to find large independent sets. Recall the family of functions $\\cF(Y,Y')$ from~\\eqref{eq:interpolated-family-MIS}.\n\\begin{theorem}\\label{thm:indep-ogp-lowdeg}\nSuppose $d \\le n\/2$ and $\\nu_2 n \\le k$. 
Suppose that with probability at least $1-\\Delta$ when $Y,Y' \\sim G(n,d\/n)$ independently, $\\cF(Y,Y')$ has $(k,\\nu_1,\\nu_2)$-OGP, and $F(\\cdot\\,;Y)$ and $F(\\cdot\\,;Y')$ are $\\nu_1$-separated above $k$. If\n\\begin{equation}\\label{eq:Delta-cond}\n\\Delta + 3\\delta(m+1) < \\exp\\left(-\\frac{96 \\gamma D k \\log(n\/d)}{(\\nu_2 - \\nu_1)^2 n}\\right)\n\\end{equation}\nthen for any $\\eta \\le \\frac{1}{4}(\\nu_2 - \\nu_1)^2$, there is no random degree-$D$ polynomial that $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep}.\n\\end{theorem}\n\n\\begin{proof}\nAssume on the contrary that $f$ $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep}. Let $A(Y,\\omega)$ denote the ``failure'' event\n\\[\nA(Y,\\omega) =\\{|V_f^\\eta(Y,\\omega)| < k\\}.\n\\]\nAs in the proof of Theorem~\\ref{thm:spherical-ogp-lowdeg}, we can reduce to the case where $f$ is deterministic: there exists $\\omega^* \\in \\Omega$ such that the resulting deterministic function $f(\\cdot) = f(\\cdot,\\omega^*)$ satisfies $\\EE_Y \\|f(Y)\\|_2^2 \\le 3\\gamma k$ and $\\PP_Y(A(Y,\\omega^*)) \\le 3\\delta$.\n\n\nLet $Y,Y' \\sim G(n,d\/n)$ independently, and let $Y = Z_0 \\to Z_1 \\to \\cdots \\to Z_m = Y'$ be the path $Y \\mapsto Y'$. \nLet $S_j = V_f^\\eta(Z_j)$. Consider the following events.\n\\begin{enumerate}\n \\item[(i)] The family $\\cF(Y,Y')$ has the $(k,\\nu_1,\\nu_2)$-OGP on $\\{0,1\\}^m$ and the functions $F(\\cdot\\,;Y)$ and $F(\\cdot\\,;Y')$ are $\\nu_1$-separated above $k$.\n \\item[(ii)] For all $j \\in \\{0,1,\\ldots,m\\}$, $f$ succeeds on input $Z_j$, i.e., the event $A(Z_j,\\omega^*)^c$ holds.\n\\end{enumerate}\nWith probability at least $1 - \\Delta - 3\\delta(m+1)$, the events (i) and (ii) occur simultaneously. We will show that when this happens, the path $Y \\mapsto Y'$ must contain a $c$-bad edge (for a particular choice of $c$).
This will allow us to derive a contradiction with Theorem~\\ref{thm:binary-stable}.\n\nToward this end, suppose (i) and (ii) both occur. Since $\\nu_2 n \\le k$, it follows that some $j$ must cross the OGP gap in the sense that $|S_0 \\cap S_j| \\ge \\nu_2 n$ and $|S_0 \\cap S_{j+1}| \\le \\nu_1 n$. Thus, letting $\\One_{S} \\in \\{0,1\\}^n$ be the indicator of $S$,\n\\[ (\\nu_2 - \\nu_1)n \\le |\\langle \\One_{S_0}, \\One_{S_j} - \\One_{S_{j+1}}\\rangle| \\le \\|\\One_{S_0}\\|_2 \\cdot \\|\\One_{S_j} - \\One_{S_{j+1}}\\|_2 = \\sqrt{|S_0|} \\cdot \\sqrt{|S_j \\triangle S_{j+1}|} \\le \\sqrt{n} \\cdot \\sqrt{|S_j \\triangle S_{j+1}|} \\]\nwhere $\\triangle$ denotes symmetric difference.\nFrom the definition of $V_f^\\eta$, there must be at least $|S_j \\triangle S_{j+1}| - 2 \\eta n$ coordinates $i$ for which $|f_i(Z_j) - f_i(Z_{j+1})| \\ge 1\/2$. This means\n\\[ \\|f(Z_j) - f(Z_{j+1})\\|_2^2\n\\ge \\frac{1}{4}(|S_j \\triangle S_{j+1}| - 2\\eta n)\n\\ge \\frac{1}{4}\\left[(\\nu_2 - \\nu_1)^2 n - 2 \\eta n\\right]\n\\ge \\frac{n}{8}(\\nu_2 - \\nu_1)^2 \\]\nprovided $\\eta \\le \\frac{1}{4}(\\nu_2 - \\nu_1)^2$. Since $\\EE_Y \\|f(Y)\\|_2^2 \\le 3\\gamma k$, we now have that $(Z_j,Z_{j+1})$ is a $c$-bad edge for $c = (\\nu_2 - \\nu_1)^2 n\/(24 \\gamma k)$.\n\nApplying Theorem~\\ref{thm:binary-stable} yields\n\\[ \\Delta + 3\\delta(m+1) \\ge (d\/n)^{4D\/c} = \\exp\\left(-\\frac{96 \\gamma D k \\log(n\/d)}{(\\nu_2-\\nu_1)^2 n}\\right). \\]\nThis contradicts~\\eqref{eq:Delta-cond}, completing the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:MIS-main}]\nLet $\\alpha > 1 + 1\/\\sqrt{2}$, and let $0 \\le \\tilde\\nu_1 < \\tilde\\nu_2 \\le 1$ be the constants from Theorem~\\ref{thm:ogp-graph}. Provided $d$ is sufficiently large, Theorem~\\ref{thm:ogp-graph} allows us to apply Theorem~\\ref{thm:indep-ogp-lowdeg} with parameters $k = \\alpha \\frac{\\log d}{d} n$, $\\nu_j = \\tilde\\nu_j \\frac{k}{n}$ (for $j = 1,2$), $\\Delta = \\exp(-\\Omega(n))$.
This requires $\\eta \\le \\frac{1}{4}(\\nu_2 - \\nu_1)^2 = \\frac{\\alpha^2\\log^2 d}{4d^2}(\\tilde\\nu_2 - \\tilde\\nu_1)^2$ and gives the desired result provided\n\\[ \\exp(-\\Omega(n)) + 3\\delta\\left(\\binom{n}{2}+1\\right) < \\exp\\left(-\\frac{96 \\gamma D d \\log(n\/d)}{(\\tilde\\nu_2-\\tilde\\nu_1)^2 \\alpha \\log d}\\right). \\]\nTo satisfy this (for $n$ sufficiently large), it is sufficient to have\n\\[ \\delta < \\frac{1}{3n^2}\\left[\\exp(-\\tilde C_1 \\gamma D \\log n) - \\exp(-\\tilde C_2 n)\\right] \\]\nwhere $\\tilde C_1, \\tilde C_2 > 0$ are constants depending on $\\alpha, d$. It is in turn sufficient to have $\\tilde C_1 \\gamma D \\log n \\le \\frac{1}{2} \\tilde C_2 n$ and $\\delta < \\frac{1}{4n^2} \\exp(-\\tilde C_1 \\gamma D \\log n) = \\exp(-\\tilde C_1 \\gamma D \\log n - \\log 4 - 2 \\log n)$. This completes the proof with $C_2 = \\frac{\\tilde C_2}{2\\tilde C_1}$ and $C_1 = \\tilde C_1 + 3$.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Overlap Gap Property}\n\\label{sec:pf-ogp-indep}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:ogp-graph}]\nFix integers $k_1,k_2,\\ell$ satisfying $k_1 \\ge k$, $k_2 \\ge k$, and $1 \\le \\ell \\le k$. Fix $j_1,j_2 \\in \\{0,1,\\ldots,m\\}$. Let $T(k_1,k_2,\\ell,j_1,j_2)$ denote the expected number of ordered pairs $(S_1,S_2)$ where $S_1$ is an independent set in $Z_{j_1}$ with $|S_1| = k_1$, $S_2$ is an independent set in $Z_{j_2}$ with $|S_2| = k_2$, and $|S_1 \\cap S_2| = \\ell$ (where the expectation is over $Y,Y'$). Define $\\alpha_1, \\alpha_2, \\beta$ by the relations $k_1 = \\alpha_1 \\frac{\\log d}{d} n$, $k_2 = \\alpha_2 \\frac{\\log d}{d} n$, and $\\ell = \\beta \\frac{\\log d}{d} n$. Restrict to the case $\\delta \\le \\beta \\le \\alpha-\\delta$ for an arbitrary but fixed constant $\\delta > 0$ (which may depend on $\\alpha$ but not $d$); we will show an interval of forbidden overlaps within this interval for $\\beta$. 
Note that $\\alpha_1 \\ge \\alpha$ and $\\alpha_2 \\ge \\alpha$, and we can assume $\\alpha_1 \\le 2+\\delta$ and $\\alpha_2 \\le 2+\\delta$ since (for sufficiently large $d$) there are no independent sets of size exceeding $(2+\\delta)\\frac{\\log d}{d} n$ with high probability. We have\n\\begin{equation}\\label{eq:TE}\n \\begin{aligned}\nT(k_1,k_2,\\ell,j_1,j_2) &= \\binom{n}{\\ell}\\binom{n-\\ell}{k_1-\\ell}\\binom{n-k_1}{k_2-\\ell} (1-d\/n)^E \\\\\n&\\le \\binom{n}{\\ell}\\binom{n}{k_1-\\ell}\\binom{n}{k_2-\\ell} (1-d\/n)^E\n\\end{aligned}\n\\end{equation}\nwhere $E \\ge \\binom{k_1}{2} + \\binom{k_2}{2} - \\binom{\\ell}{2}$ (the worst case being $j_1 = j_2$). Using the standard bounds ${n \\choose k} \\le (\\frac{ne}{k})^k$ (for $1 \\le k \\le n$) and $\\log(1+x) \\le x$ (for $x > -1$),\n\\begin{align*}\nT(k_1,k_2,\\ell,j_1,j_2) &\\le \\exp\\left(\\ell \\log\\frac{ne}{\\ell} + (k_1 - \\ell)\\log \\frac{ne}{k_1-\\ell} + (k_2 - \\ell)\\log \\frac{ne}{k_2-\\ell} - E \\frac{d}{n}\\right) \\\\\n&\\le \\exp\\left[\\frac{\\log^2 d}{d} n \\left(\\beta + (\\alpha_1-\\beta) + (\\alpha_2-\\beta) - \\frac{1}{2}(\\alpha_1^2 + \\alpha_2^2 - \\beta^2) + \\varepsilon_d + o(1)\\right)\\right] \\\\\n&= \\exp\\left[\\frac{\\log^2 d}{d} n \\left(\\alpha_1 - \\frac{1}{2} \\alpha_1^2 + \\alpha_2 - \\frac{1}{2} \\alpha_2^2 - \\beta + \\frac{1}{2}\\beta^2 + \\varepsilon_d + o(1)\\right)\\right]\n\\end{align*}\nwhere $\\varepsilon_d \\to 0$ as $d \\to \\infty$. Since $\\alpha_1 \\ge \\alpha > 1 + 1\/\\sqrt{2}$, we have $\\alpha_1 - \\frac{1}{2} \\alpha_1^2 < 1\/4$ and likewise for $\\alpha_2$. Note that $\\beta \\mapsto \\beta - \\frac{1}{2} \\beta^2$ has maximum value $1\/2$ at $\\beta = 1$. 
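To verify the claim about the quadratic terms (an elementary computation, recorded here for completeness), note that $x \\mapsto x - \\frac{1}{2} x^2$ is strictly decreasing for $x > 1$ and\n\\[ x - \\frac{1}{2} x^2 \\, \\Big|_{x = 1 + 1\/\\sqrt{2}} = 1 + \\frac{1}{\\sqrt{2}} - \\frac{1}{2}\\left(\\frac{3}{2} + \\sqrt{2}\\right) = \\frac{1}{4} \\ , \\]\nso $\\alpha_1 \\ge \\alpha > 1 + 1\/\\sqrt{2}$ indeed gives $\\alpha_1 - \\frac{1}{2}\\alpha_1^2 < \\frac{1}{4}$, and similarly for $\\alpha_2$.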
Thus if we choose $\\delta > 0$ small enough (depending on $\\alpha$ but not $d$), for any $\\beta \\in [1-\\delta,1+\\delta]$ and any $\\alpha_1 \\ge \\alpha$, $\\alpha_2 \\ge \\alpha$, we have\n\\[ \\alpha_1 - \\frac{1}{2} \\alpha_1^2 + \\alpha_2 - \\frac{1}{2} \\alpha_2^2 - \\beta + \\frac{1}{2}\\beta^2 \\le -\\delta, \\]\nimplying $T(k_1,k_2,\\ell,j_1,j_2) \\le \\exp\\left[-\\frac{\\log^2 d}{d} n \\left(\\delta - \\varepsilon_d - o(1)\\right)\\right]$, which is $\\exp(-\\Omega(n))$ for sufficiently large $d$. Accordingly, let $\\nu_1 = (1-\\delta)\/\\alpha$ and $\\nu_2 = (1+\\delta)\/\\alpha$. We now have that OGP (with the desired parameters) holds with high probability, using Markov's inequality and a union bound over the $\\le n^7$ possible values for $(k_1, k_2, \\ell, j_1, j_2)$.\n\nIt remains to show $\\nu_1$-separation, which pertains to the case $j_1 = 0$, $j_2 = m$. In this case,~\\eqref{eq:TE} holds with the stronger statement $E = \\binom{k_1}{2} + \\binom{k_2}{2}$. As a result, the expression in~\\eqref{eq:TE} is non-increasing in $\\beta$ (provided $\\beta \\ge \\delta$ and $d$ is sufficiently large). By the above argument, we can again conclude $T(k_1,k_2,\\ell,0,m) \\le \\exp(-\\Omega(n))$ but now under the weaker condition $\\beta \\ge 1-\\delta$ in place of $\\beta \\in [1-\\delta,1+\\delta]$. This completes the proof.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Among the principal analytical tools of conformal field theory are conformal\npartial wave (or block) expansions, proposed early on in \\cite{Ferrara:1973vz}.\nThe role they play in the study of models with conformal symmetry is very similar to the\nrole of Fourier analysis in systems with translational symmetry. While conformal blocks\nare entirely determined by kinematics, they allow one to separate very neatly the dynamical\nmeat of a theory from its kinematical bones. For example, an $N$-point function of local\noperators in a conformal field theory can be a very complicated object. If expanded in\nconformal blocks, however, the coefficients factorize into a set of three-point couplings,\ni.e.\\ most of the complicated dependence on the insertion points resides in the kinematical\nskeleton of a conformal field theory. This is the reason conformal block expansions\nare so important.\n\nConformal blocks for four-point functions of local operators in bosonic conformal field\ntheories are relatively well studied by now, see e.g. \\cite{Dolan:2000ut,Dolan:2003hv,\nDolan:2011dv,Costa:2011dw,SimmonsDuffin:2012uy,Penedones:2015aga,Hogervorst:2013sma,\nEcheverri:2016dun,Schomerus:2016epl,Karateev:2017jgd,Isachenkov:2017qgn,Dyer:2017zef,Erramilli:2019njx,Fortin:2019fvx,Fortin:2019dnq,\nFortin:2020ncr} and references therein. On the other hand, while we know of many\nexamples of such theories in $d=3$ dimensions, most conformal field theories in\n$d\\geq 4$ seem to possess supersymmetry. The enhancement from conformal to superconformal\nsymmetry should lead to simplifications, at least once the kinematical aspects are well\nunder control. This, however, is not yet the case. In fact, while four-point blocks of\nhalf-BPS operators or the superprimary components of more general supermultiplets have\nbeen constructed and applied, see e.g. 
\\cite{Dolan:2001tt,Dolan:2004mu,Nirschl:2004pa,\nPoland:2010wg,Fortin:2011nq,Fitzpatrick:2014oza,Khandker:2014mpa,Bobev:2015jxa,Bissi:2015qoa,\nDoobary:2015gia,Lemos:2015awa,Liendo:2016ymz,Lemos:2016xke,Chang:2017xmr,Bobev:2017jhk,\nLiendo:2018ukf,Berkooz:2014yda,Li:2016chh,Li:2017ddj,Gimenez-Grau:2019hez}, relatively little\nis actually known about blocks and block expansions for more generic external multiplets\nthat span long(er) representations of the superconformal algebra. On the other hand, it\nhas been shown in \\cite{Cornagliotto:2017dup} that the bootstrap with long multiplets\nis significantly more constraining on CFT data than the bootstrap with e.g. external\nBPS operators, see also \\cite{Kos:2018glc}. This provides strong motivation\nto investigate blocks and crossing symmetry for long multiplets, which is the main\ngoal of our work.\n\\medskip\n\nIn order to explain the main results of this paper, let us briefly review a few basic\nfacts about conformal partial wave expansions in bosonic conformal field theories. We\nstart from some four-point correlator $G(x_i)$ with its full dependence on the insertion\npoints $x_i$ of the fields. As is well known, conformal symmetry implies that $G(x_i)$\nis fully determined by a single function of the two cross ratios $u,v$ that one can form from\nfour points in $\\mathbb{R}^d$. More precisely, it is possible to write the correlation\nfunction $G$ as\n\\begin{equation}\nG(x_i) = \\Omega(x_i) g(u,v) \\ .\n\\end{equation}\nWe stress that such a behavior is not restricted to scalar correlation functions. If\nthe fields carry spin, then $G$ takes values in the space of polarizations of the\nfour fields. The function $g$, on the other hand, takes values in the space of four-point\ntensor structures whose dimension is smaller than that of the space of polarizations,\nin general, at least for $d > 3$. Hence, one should think of $\\Omega$ as a rectangular\nmatrix. 
We shall refer to such a matrix valued function $\\Omega$ of the insertion points\nas a four-point \\textit{tensor factor}. In a sense that will become clear below, it combines all\nfour-point tensor structures into one single object $\\Omega$. Many authors have studied\ntensor structures for spinning four-point functions in conformal field theories, see\ne.g. \\cite{Osborn:1993cr,Costa:2011mg,Costa:2011dw,Kravchuk:2016qvl,Cuomo:2017wme,Karateev:2018oml,\nKarateev:2019pvw}.\n\nThe tensor factor $\\Omega(x_i)$ is restricted but not determined by conformal symmetry.\nIn fact, there is an obvious `gauge' freedom associated with matrix-valued\nfunctions $\\zeta(u,v)$ that one can move back and forth between the tensor factor $\\Omega$ and the function $g(u,v)$,\ni.e.\\ the gauge symmetry acts as $(\\Omega,g) \\rightarrow (\\Omega \\zeta^{-1}, \\zeta g)$. The\nfunction $g$ of the cross ratios may be expanded in terms of conformal partial waves which,\nafter the influential work of Dolan and Osborn \\cite{Dolan:2000ut,Dolan:2003hv}, are characterised\nas eigenfunctions of the so-called Casimir differential equations. The form of these equations,\nhowever, depends on the gauge choice that is made when splitting $G$ into $\\Omega$ and $g$. For\nfour-point functions of identical scalar fields of weight $\\Delta_0$, for example, Dolan and Osborn\nchose $\\Omega_s = x_{12}^{-2\\Delta_0} x^{-2\\Delta_0}_{34}$. Note that this factor $\\Omega =\n\\Omega_s$ also depends on a split of the four points into two sets of two, a choice usually\nreferred to as a channel. Here we have displayed the factor $\\Omega$ for the so-called\n$s$-channel. The $t$-channel is obtained by exchanging the fields inserted at $x_2$ and\n$x_4$. With their choice of $\\Omega_s$, Dolan and Osborn worked out the associated\nCasimir differential equation for the function $g_s$ and similarly for $g_t$. 
Solutions\nof these Casimir equations provided them with a set of blocks $g_{s\/t}^{\\Delta, l}(u,v)$\nin which one can then expand $g_s$ and $g_t$,\n\\begin{equation}\nG(x_i) = \\Omega_s(x_i) \\sum p_{\\Delta,l} g^{\\Delta,l}_s (u,v) =\n\\Omega_t(x_i) \\sum p_{\\Delta,l} g^{\\Delta,l}_t (u,v) \\ .\n\\end{equation}\nThe equality between the first and second sum is the famous crossing symmetry equation.\nAn important observation is that writing this equation does not actually require a complete\nknowledge of the tensor factors. It is sufficient to know the ratio of the $s$- and\n$t$-channel $\\Omega$\n$$\nM(u,v) = \\Omega_t^{-1}(x_i) \\Omega_s (x_i) = \\left(\\frac{v}{u}\\right)^{\\Delta_0}\\ ,\n$$\nwhich is a function of the two cross ratios only. We call this important object\nthe \\textit{crossing factor} $M$. In the case of spinning fields the crossing factor\nbecomes a matrix. This ratio of $s$- and $t$-channel tensor factors is not to be\nconfused with the crossing or fusing matrix of the conformal group. While the\ncrossing factor relates the $s$- and $t$-channel tensor factors, the crossing matrix\nrelates the conformal blocks in the two channels by providing the expansion\ncoefficients of $s$-channel blocks in terms of $t$-channel ones, see\n\\cite{Liu:2018jhs,Sleight:2018ryu,Chen:2019gka} for some recent discussion\nin the context of higher dimensional conformal field theory.\n\nIn \\cite{Isachenkov:2016gim} it was noticed that scalar four-point functions $G$ admit a different\ngauge choice for the factor $\\Omega$ such that the associated Casimir equations take the\nform of an eigenvalue equation for an integrable 2-particle Hamiltonian of Calogero-Sutherland\ntype. This was later explained in \\cite{Schomerus:2016epl,Schomerus:2017eny} through harmonic\nanalysis on the conformal group and then extended to fields with spin, in which case the quantum\nmechanical potential becomes matrix valued. 
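To make the scalar crossing factor fully explicit, recall the standard conventions for the cross ratios, which we assume here,\n$$ u = \\frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2} \\quad , \\quad v = \\frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2} \\ , $$\nwith $x_{ij} = x_i - x_j$. Exchanging the insertion points $x_2$ and $x_4$ in $\\Omega_s$ gives $\\Omega_t = x_{14}^{-2\\Delta_0} x^{-2\\Delta_0}_{32}$, and hence\n$$ M(u,v) = \\Omega_t^{-1}(x_i) \\Omega_s(x_i) = \\left(\\frac{x_{14}^2 x_{23}^2}{x_{12}^2 x_{34}^2}\\right)^{\\Delta_0} = \\left(\\frac{v}{u}\\right)^{\\Delta_0} \\ , $$\nin agreement with the formula displayed above.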
For spinning fields, the tensor structures of such a\nCalogero-Sutherland gauge were constructed recently in \\cite{Buric:2019dfk}. The goal of our work\nis to extend all this to the case of superconformal symmetry. In \\cite{Buric:2019rms} we have\nconstructed the Casimir equations for superconformal symmetries of type I. The form of these equations\nallows us to compute superblocks systematically as finite sums of spinning bosonic blocks.\nWhat was missing up to now is the construction of the associated tensor structures and in particular\nthe crossing factor $M$. Below we fill this gap and construct both the tensor structures\nand the crossing factor for all superconformal algebras of type I. Explicit formulas\nfor the crossing factors in 4-dimensional superconformal algebras will be given in our\nforthcoming paper \\cite{N1D4_paper}. Early work on tensor structures for four-point\ncorrelators of superconformal field theories includes \\cite{Park:1997bq,Park:1999pd,Osborn:1998qu,Heslop:2002hp,Heslop:2004du,Nirschl:2004pa}.\n\\medskip\n\nLet us now describe the plan of this work in more detail. The next section contains some\nbasic background material on superconformal algebras, where we introduce the notion of\nsuperspace and discuss the infinitesimal and global action of the conformal symmetry\nthereon. Special attention will be paid to the action of the so-called Weyl inversion,\nwhich plays an important role in later sections. Section 3 contains the first new\nresult of this work. There we construct a special family $g(x_i)$ of supergroup elements\nthat depend on the insertion points of the fields along with a matrix realization that\nuniquely encodes the correlation function $G(x_i)$. 
This generalizes a similar formula\nfor bosonic conformal field theories in \\cite{Buric:2019dfk} to the supersymmetric setup.\nIn section 4 we begin to specialize the discussion to superconformal algebras of type I,\ni.e.\\ to cases in which the R-symmetry group contains an abelian factor $U(1)$. After\nintroducing super Cartan coordinates through a particular KAK factorization of the\nsuperconformal group we can construct the tensor factors $\\Omega$ for any choice\nof spins and any channel through an elegant group theoretical construction. This\nthen allows us to build the crossing factor $M$ as a quotient of $s$- and $t$-channel\ntensor factors and prove its conformal invariance explicitly. Our main result, which is\nstated in eqs.\\ \\eqref{eq:crossingmatdef}, \\eqref{eq:crossingmatrix}, expresses the crossing\nfactor $M$ through representation matrices of some particular family of elements of\nthe group $K$ that is generated by dilations, rotations and R-symmetry transformations.\nAll constructions in sections 2-4 are illustrated with the example of the algebra $\\mathfrak{g} =\n\\mathfrak{sl}(2|1)$, i.e.\\ the $\\mathcal{N} = 2$ superconformal algebra in $d=1$ dimensions.\nLet us also note that our discussion includes the purely bosonic case $\\mathcal{N}=0$\nfor which the crossing factor was not constructed previously beyond a few special spin\nassignments. As a corollary to our discussion we state the crossing factor for arbitrary\nspinning four-point functions in 3-dimensional conformal field theories. For all other\nhigher dimensional examples, bosonic as well as supersymmetric, our results for the\ncrossing factor are stated in the form of a precise, easy-to-follow algorithm. In\norder to obtain ready-to-use formulas one needs to input some classical results\nfrom the group theory of rotations $SO(d)$. 
We will discuss this for certain\nmixed correlators in $\\mathcal{N}=1$ superconformal theories in an accompanying\nwork \\cite{N1D4_paper}.\n\n\n\n\\section{Superspace and Superconformal Symmetry}\n\nIn order to state and prove our main results we need some background on supergroups,\nsuperspaces and the action of superconformal symmetry thereon. Here we want to review\nthese concepts and at the same time introduce a mathematical language that is appropriate\nfor our subsequent discussion. In particular, we recall the notion of superspace\nin the second subsection and explain how one constructs an infinitesimal action of the\nsuperconformal algebra thereon. This action is lifted to global transformations in the\nthird subsection, with some special focus on the so-called Weyl inversion, a close\nrelative of the conformal inversion which is guaranteed to exist in any superconformal\nfield theory. For more mathematically minded readers we have incorporated a more abstract\nand introductory subsection on the concept of supergroups. While this helps to make\nequations in subsequent subsections mathematically rigorous, readers who feel familiar\nwith supergroups and superspaces are encouraged to skip the first subsection, at least\nupon first reading.\n\n\\subsection{Some basics on superalgebras and supergroups}\n\nIn this subsection we introduce some very basic notions and notations concerning\nsupergroups. Our conventions agree with \\cite{Kostant:1975qe,Leites:1980rna,Wess:1992cp}.\nLet $\\mathfrak{h}$ be some Lie superalgebra, i.e. a graded vector space $\\mathfrak{h} =\n\\mathfrak{h}_{\\bar 0} \\oplus \\mathfrak{h}_{\\bar 1}$ with a graded Lie bracket.\nWe denote the latter by $[. , . ]_\\pm$. 
The associated \\textit{universal enveloping\nalgebra} $U(\\mathfrak{h})$ is the graded associative algebra generated by elements $X \\in\n\\mathfrak{h}$, with relations such that graded commutators are given by the Lie bracket.\nIn a slight abuse of notation we shall denote the graded commutators in the universal\nenveloping algebra by $[.,.]_\\pm$ as well.\n\nThe universal enveloping algebra comes equipped with a co-product $\\Delta$, i.e. with\na homomorphism\n$$ \\Delta: U(\\mathfrak{h}) \\rightarrow U(\\mathfrak{h}) \\otimes U(\\mathfrak{h})\\ . $$\nHere, the tensor product is to be understood in the graded sense, i.e. elements are\nmultiplied as\n$$ (a_1 \\otimes b_1) \\cdot (a_2 \\otimes b_2) = (-1)^{|a_2||b_1|} a_1 a_2 \\otimes\nb_1 b_2 \\ ,$$\nwhere $|a|=0$ if $a$ is even and $|a|=1$ if $a$ is odd, as usual. On the generating\nelements $X \\in \\mathfrak{h} \\subset U(\\mathfrak{h})$, the co-product is given by\n\\begin{equation}\n\\Delta(X) = X \\otimes 1 + 1 \\otimes X \\ .\n\\end{equation}\nFrom here one can extend $\\Delta$ uniquely to the entire universal enveloping algebra\nas a homomorphism of graded algebras. The co-product is the algebraic structure that\nallows us to build tensor products of any two representations of the Lie superalgebra\n$\\mathfrak{h}$ or its universal envelope $U(\\mathfrak{h})$.\n\\medskip\n\nLet us now turn to another algebra that we can associate to $\\mathfrak{h}$, namely the so-called\n\\textit{structure algebra} $\\mathcal{F}(\\mathfrak{h})$. By definition, $\\mathcal{F}$ is a\ngraded commutative algebra whose generators $x_A$ are associated to the basis elements $X^A$\nof the Lie superalgebra $\\mathfrak{h}$. The elements $x_A$ possess the same degree $|x_A|= |A|$\nas the generators $X^A$, i.e. $x_A$ is an ordinary bosonic variable if $X^A$ is even while\n$x_A$ is a Grassmann variable in case $X^A$ is odd. 
From the construction we have sketched\nhere it is evident that $\\mathcal{F}$ can be thought of as the \\textit{algebra of functions}\non the supergroup associated with $\\mathfrak{h}$ which is generated here from a set of coordinate\nfunctions, one for each element of the Lie superalgebra.\n\\smallskip\n\n\n\nThe two algebras we have associated to $\\mathfrak{h}$ up to now are actually closely related.\nIn the case of bosonic groups, the generators $X$ of the Lie algebra give rise to (right)\ninvariant vector fields that act on functions as some first order differential operators.\nThese differential operators $\\mathcal{R}_X$ can be multiplied and added and thereby\nprovide an action of elements $a$ in the universal enveloping algebra $U(\\mathfrak{h})$ through\ndifferential operators $\\mathcal{R}_a$ of higher order. One may combine the application\nof any such differential operator to a function on the group with the evaluation at the\ngroup unit $e$ to obtain a map that assigns a number\n\\begin{equation} \\label{eq:duality}\n\\mathcal{R}_a(f)(e) = (a,f) = f(a) \\in \\mathbb{C}\n\\end{equation}\nto a pair of an element $ a \\in U(\\mathfrak{h})$ and a (complex valued) function $f$ on the\ngroup. In other words, elements of $U(\\mathfrak{h})$ give linear functionals on the algebra of\nfunctions or structure algebra $\\mathcal{F}(\\mathfrak{h})$ and vice versa. In this form, the\nstatement remains true for Lie superalgebras and is often expressed by saying that\n$\\mathcal{F}(\\mathfrak{h})$ and $U(\\mathfrak{h})$ are dual to each other, see also \\cite{Sternberg:1975}\nfor a nice discussion of this point.\n\\medskip\n\nEquipped with these two algebraic structures, namely the universal enveloping algebra\n$U(\\mathfrak{h})$ and the structure algebra $\\mathcal{F}(\\mathfrak{h})$, we want to introduce the concept\nof \\textit{supergroup elements} $h$. 
Let us first give a formal definition according to\nwhich $h$ is an even element of the graded tensor product $U(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$\nthat satisfies\n\\begin{equation} \\label{eq:Deltah}\n(\\Delta \\otimes \\textit{id}) h = \\ \\stackrel{1}{h}\\ \\stackrel{2}{h} \\ .\n\\end{equation}\nHere, the application of the co-product $\\Delta$ to the first tensor factor of $h$\nproduces an element in $U(\\mathfrak{h}) \\otimes U(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$. The\nfactors on the right hand side are elements in the same threefold tensor product.\nMore concretely, $\\stackrel{2}{h}$ is the element $1 \\otimes h$ with trivial entry\nin the first tensor factor. Similarly $\\stackrel{1}{h}$ denotes the element $h$ with\ntrivial entry in the second tensor factor.\n\nThe element $h$ is not uniquely characterized by these properties, but we do not need\nto be more specific. It might be helpful to think of $h$ as the object $h = \\exp (x_A\nX^A)$. The element $x_A X^A$ in the exponent is even and upon expansion of the\nexponential provides us with an even element in the graded tensor product $U(\\mathfrak{h})\n\\otimes \\mathcal{F}(\\mathfrak{h})$. In order to construct this element one moves all the\nelements $x_A$ of the structure algebra to the right of the superalgebra generators\n$X^B$ using\n$$ x_A X^B = (-1)^{|A| |B|} X_B x_A \\ ,$$\nwhich implements our convention to consider the graded tensor product of $U(\\mathfrak{h})$\nand $\\mathcal{F}(\\mathfrak{h})$ rather than the ordinary one. After the reordering we indeed\nobtain an infinite sum of products between elements in the universal enveloping algebra\n$U(\\mathfrak{h})$ with elements of the structure algebra $\\mathcal{F} (\\mathfrak{h})$. 
If we apply\nthe co-product in the universal enveloping algebra we formally\nobtain\n\\begin{equation}\n(\\Delta \\otimes \\textit{id}) h = e^{x_A (X^A \\otimes 1 + 1 \\otimes X^A)} = e^{x_A (X^A \\otimes 1)}\ne^{x_A (1 \\otimes X^A)} =\\ \\stackrel{1}{h}\\ \\stackrel{2}{h} \\ .\n\\end{equation}\nIn writing the single exponential as a product of exponentials we used the fact that\nthe exponent is an even object so that $x_A (X^A \\otimes 1)$ commutes with $x_A (1\n\\otimes X^A)$. In conclusion, we have constructed an object $h$ with the properties\nwe demanded in the previous paragraph, at least formally. In physics, it is\ncustomary to evaluate $h$ in some representation $\\pi$ of the Lie superalgebra $\\mathfrak{h}$\nor, equivalently, its universal enveloping algebra. Thereby one obtains a finite\ndimensional supermatrix $h^\\pi = (\\pi \\otimes \\textit{id}) h$ with entries from the structure\nalgebra $\\mathcal{F}$. In the following we often use the symbol $h$ for such a\nmatrix rather than the universal element $h \\in U(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$.\n\\medskip\n\nWhat we have explained so far actually suffices as background for most of our\ndiscussion below, except for the construction of an infinitesimal action of the\nconformal superalgebra on superspace in the next subsection. 
To obtain explicit\nformulas for the first order derivative operators $\\mathcal{R}_X$ that are\nassociated with the elements $X \\in \\mathfrak{h}$ let us first extend the structure\nalgebra $\\mathcal{F}(\\mathfrak{h})$ of ``functions on the supergroup'' to a differentially\ngraded algebra $d\\mathcal{F}(\\mathfrak{h})$ of ``differential forms on the supergroup''.\nThe latter is a bi-graded commutative algebra generated by elements $x_A$ and\n$dx_A$, with a second grading associated to the form degree.\nOn the algebra $d\\mathcal{F}(\\mathfrak{h})$ we can define a differential $d$ that\nsquares to zero $d^2 = 0$ and satisfies the graded Leibniz rule\n$$ d (f \\wedge g) = df \\wedge g + (-1)^{\\textit{deg}(f)} f \\wedge dg \\ . $$\nHere $\\textit{deg}(f)$ denotes the form degree of $f$. Let us stress that there is no\nadditional sign associated with the $\\mathbb{Z}_2$ grading that distinguishes between even\n(bosonic) and odd (fermionic) elements. This means that $d$ is treated as an even object.\nHence, for a given $A$, $x_A$ and $dx_A$ possess the same degree, i.e.\\ $dx_A$ is even\n[odd] in case $x_A$ is even [odd].\n\\medskip\n\nSince the structure algebra $\\mathcal{F}(\\mathfrak{h})$ is contained in the larger differentially\ngraded algebra $d\\mathcal{F}(\\mathfrak{h})$ we can also think of the supergroup element $h \\in\nU(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$ as an element of the differential graded algebra\n$U(\\mathfrak{h}) \\otimes d\\mathcal{F}(\\mathfrak{h})$ with the additional rule that $dX^A= X^Ad$, i.e.\\\nwe consider the generators $X^A$ of the Lie superalgebra as constants and the differential\n$d$ as even. 
Now it makes sense to consider the Maurer-Cartan form\n\\begin{equation}\ndh h^{-1} \\in U(\\mathfrak{h}) \\otimes d\\mathcal{F}(\\mathfrak{h}) \\ .\n\\end{equation}\nIf we apply the differential to the equation \\eqref{eq:Deltah} that characterizes\n$h$ we obtain\n\\begin{equation}\n\\Delta(dh h^{-1}) = \\left(\\stackrel{\\phantom{0}}{d}\\stackrel{1}{h} \\ \\stackrel{2}{h} +\n\\stackrel{1}{h} \\ \\stackrel{\\phantom{0}}{d}\\stackrel{2}{h}\\right)\\ \\stackrel{2}{h}\\!^{-1} \\\n\\stackrel{1}{h}\\!^{-1}\n= \\ \\stackrel{\\phantom{0}}{d}\\stackrel{1}{h} \\ \\stackrel{1}{h}\\!^{-1} +\n \\ \\stackrel{\\phantom{0}}{d}\\stackrel{2}{h} \\ \\stackrel{2}{h}\\! ^{-1}\\ .\n\\end{equation}\nWe conclude that the Maurer-Cartan form takes values in the Lie superalgebra $\\mathfrak{h}\n\\subset U(\\mathfrak{h})$, as it is the case for usual bosonic Lie groups. Consequently, it\nmay be expanded as\n\\begin{equation}\ndh h^{-1} = dx_A C_{AB} X^B\\quad \\textit{where} \\quad C_{AB} \\in\n\\mathcal{F}(\\mathfrak{h})\\ .\n\\end{equation}\nThe matrix elements $C_{AB}$ possess degree $|A|+|B|$, i.e.\\ they are even elements\nof the structure algebra if $|A|=|B|$ and odd otherwise. We also stress that the elements\n$C_{AB}$ depend on the choice of the supergroup element $h$. One of the main uses of the\nmatrix elements $C_{AB}$ is to construct the right-invariant vector fields, i.e. an action\nof the Lie superalgebra $\\mathfrak{h}$ through first order differential operators acting on the\nstructure algebra $\\mathcal{F}(\\mathfrak{h})$. These vector fields are given by\n\\begin{equation} \\label{eq:RHA}\n \\mathcal{R}_{X^A} = \\mathcal{R}_A :=\n \\mathcal{C}^G_{AB}\\partial_B, \\nonumber\n\\end{equation}\nwhere $\\mathcal{C} = C^{-1}$ denotes the inverse of $C$ and $\\partial_B$ is the (graded)\nderivative with respect to the coordinate $x_B$. Its action on an arbitrary function $f\n\\in \\mathcal{F}(\\mathfrak{h})$ can be read off from $df = dx_B (\\partial_B f)$. 
In particular,\nwhen acting on the individual coordinate functions $x_A$, it obeys $(\\partial_B x_A) =\n\\delta_{A,B}$. The action of partial derivatives on products of functions satisfies the\ngraded Leibniz rule which implies that\n\\begin{equation}\n\\partial_B x_A = (\\partial_B x_A) + (-1)^{|A||B|} x_A \\partial_B = \\delta_{A,B} +\n(-1)^{|A||B|} x_A \\partial_B \\ .\n\\end{equation}\nSince we have assumed that the differential $d$ acts trivially on the generators\n$X^A$ of the universal enveloping algebra, i.e. $(dX^A) = 0$, we conclude that\n$(\\partial_B X^A) = 0$, i.e.\\ the generators $X^A$ are constant objects on\nthe supergroup satisfying\n\\begin{equation}\n\\partial_B X^A = (-1)^{|A||B|} X^A \\partial_B \\ \\ .\n\\end{equation}\nWith this list of properties of the partial derivatives we conclude our construction\nof the right invariant vector fields \\eqref{eq:RHA} and thereby our short mathematical\nreview of superalgebras and the theory of supergroups. The formulation we have introduced\nhere is well adapted to our needs below and also paves the way for some interesting\nextensions, see the concluding section.\n\n\\subsection{Superspace and the infinitesimal action of superconformal symmetry}\n\nThis subsection serves two purposes. On the one hand we need to introduce the notion\nof superspace, which is one of the crucial ingredients throughout the rest of the paper.\nIn addition we shall also construct an action of the superconformal algebra $\\mathfrak{g}$\nthrough ``first order differential operators'' on superspace. This infinitesimal action of\nthe superconformal symmetry on superspace will play only a minor role below since\nmost of our analysis is based on global transformations.\n\nTo set up notation let us denote the superconformal algebra by $\\mathfrak{g}$. 
Its\nbosonic subalgebra $\\mathfrak{g}_{\\bar 0}$ consists of $d$-dimensional conformal transformations\nin $\\mathfrak{so}(1,d+1)$ as well as R-symmetry transformations in some Lie algebra\n$\\mathfrak{u}$. To define superspace we pick some decomposition\n\\begin{equation}\\label{eq:decomposition}\n \\mathfrak{g} = \\mathfrak{m} \\oplus \\mathfrak{p} \\nonumber\n\\end{equation}\nof $\\mathfrak{g}$ into two Lie subalgebras $\\mathfrak{p}$ and $\\mathfrak{m}$. The standard\nchoice would be to define $\\mathfrak{p}$ as the span of all elements in $\\mathfrak{g}$ that\ndo not raise the eigenvalue of the dilation generator $D \\in \\mathfrak{g}_{\\bar 0}$, i.e.\n$$ \\mathfrak{p} := \\mathfrak{g}_{\\leq 0} =\n\\textit{span}\\left(\\, X \\in \\mathfrak{g}\\, | \\, [D,X] = \\alpha X\n\\, , \\, \\alpha \\leq 0 \\right)\\ . $$\nFor this choice, $\\mathfrak{m}$ then consists of generators $P$ of translations and\nthe supercharges $Q$. We shall briefly comment on other choices below. We also choose\na basis $X^A$ of elements in $\\mathfrak{g}$ that is compatible with the decomposition\n\\eqref{eq:decomposition}. Elements $X^A$ that lie in the subspace $\\mathfrak{m}$ will be\nlabeled by lower case Latin indices while those that lie in the complement $\\mathfrak{p}$\ncarry Greek indices.\n\nThe decomposition of the Lie superalgebra $\\mathfrak{g}$ into $\\mathfrak{m}$ and $\\mathfrak{p}$ determines\na decomposition of the corresponding universal enveloping algebra $U(\\mathfrak{g})=\nU(\\mathfrak{m})\\otimes U(\\mathfrak{p})$ as well as of the structure algebra\n$\\mathcal{F}(\\mathfrak{g})=\\mathcal{F}(\\mathfrak{m})\\otimes\\mathcal{F}(\\mathfrak{p})$. Recall that the\nstructure algebras $\\mathcal{F}(\\mathfrak{m})$ and $\\mathcal{F}(\\mathfrak{p})$ are generated by\nthe coordinates $x_a$ and $x_\\alpha$, respectively, with $x_a$ and $x_\\alpha$ being\nGrassmann variables if the corresponding elements $X^a$ and $X^\\alpha$ are fermionic\ngenerators of the Lie superalgebra. 
The structure algebra $\\mathcal{F}(\\mathfrak{m})$ is what\nis referred to as \\textit{superspace} $\\mathcal{M} = \\mathcal{F}(\\mathfrak{m})$. Loosely\nspeaking one may think of it as the algebra of ``functions on the supergroup $M$'',\nthough we have not defined what we mean by a supergroup and do not intend to do so.\n\\medskip\n\nNow that we know what superspace is, let us construct an infinitesimal action of\nthe superconformal symmetry thereon. Here we shall closely follow the general\nconstructions we outlined in the previous subsection and introduce supergroup\nelements $m=m(x_a)$ and $p=p(x_\\alpha)$. In the case of $m$ we work with the\nfollowing standard choice\n\\begin{equation}\nm(x_a) = e^{x_a X^a} .\n\\end{equation}\nThe infinitesimal action of the conformal algebra on the coordinates $x_a$ of our\nsuperspace descends from the left-regular action of $\\mathfrak{g}$ and thus can be\ncomputed from the Maurer-Cartan form,\n\\begin{equation}\n dg g^{-1} = dx_A C^{G}_{AB}X^B\\ .\n\\end{equation}\nIn computing the Maurer-Cartan form for $\\mathfrak{g}$ it is usual to relate it to the\nMaurer-Cartan forms that are associated with $\\mathfrak{m}$ and $\\mathfrak{p}$\n\\begin{equation}\n dm m^{-1} = dx_a C^M_{ab} X^b \\quad , \\quad\n dp p^{-1} = dx_\\alpha C^{P}_{\\alpha\\beta} X^\\beta\\ .\\nonumber\n\\end{equation}\nWith our choice $g=mp$ of the supergroup element $g$ as a product of the two\nelements $m$ and $p$ it follows that\n\\begin{align}\n & dg g^{-1} = dx_A\\partial_A(m p) (m p)^{-1} = dx_a (\\partial_a m) m^{-1} +\n dx_\\alpha m (\\partial_\\alpha p) p^{-1} m^{-1} =\\nonumber\\\\[2mm]\n & = dx_a C^M_{ab} X^b + dx_\\alpha m C^P_{\\alpha\\beta}X^\\beta m^{-1} =\n dx_a C^M_{ab} X^b + dx_\\alpha C^P_{\\alpha\\beta}\\Big((M_1)_{\\beta a} X^a +\n (M_2)_{\\beta\\gamma}X^\\gamma\\Big) \\ . 
\\label{MC-form}\n\\end{align}\nThe last equality defines the two matrices $M_{1,2}$,\n\\begin{equation}\n m X^\\beta m^{-1} = (M_1)_{\\beta a} X^a + (M_2)_{\\beta\\gamma} X^\\gamma\\ .\n\\end{equation}\nFrom eq.\\ \\eqref{MC-form} we can read off the coefficients $C^G_{AB}$\nof the Maurer-Cartan form for $\\mathfrak{g}$. The inverse $\\mathcal{C}^G$ of this matrix\nis easily seen to take the form\n\\begin{equation}\n \\mathcal{C}^G = \\begin{pmatrix}\n \\mathcal{C}^M & 0 \\\\\n -M_2^{-1}M_1\\mathcal{C}^M & M_2^{-1} \\mathcal{C}^P\n \\end{pmatrix} \\ , \\nonumber\n\\end{equation}\nwhere the first row/column corresponds to the directions in $\\mathfrak{m}$ while the second\nrow/column collects all the directions in $\\mathfrak{p}$. As stated before, the matrix\n$\\mathcal{C}^G$ provides us with the right-invariant vector fields \\eqref{eq:RHA}\non the conformal supergroup. To project these operators to the superspace one\nsimply sets $\\partial_\\alpha=0$,\n\\begin{equation}\\label{eq:resultRM}\n \\mathcal{R}^{(M)} = \\begin{pmatrix}\n \\mathcal{C}^M & 0\\\\\n -M_2^{-1}M_1\\mathcal{C}^M & M_2^{-1} \\mathcal{C}^P\n \\end{pmatrix}\n \\begin{pmatrix} \\partial\\\\ 0 \\end{pmatrix} =\n \\begin{pmatrix} \\mathcal{C}^M_{ab}\\partial_b\\\\\n -(M_2^{-1}M_1\\mathcal{C}^M)_{\\alpha b}\\partial_b\n \\end{pmatrix}\\ . \\nonumber\n\\end{equation}\nThis is the main result of this subsection. As mentioned above, the differential\noperators on superspace depend on $C^M$ and hence on the choice of the supergroup\nelement $m$. The choice of the supergroup element $p$, on the other hand, is\nirrelevant since the coefficients $C^P$ of the Maurer-Cartan form $dp p^{-1}$\ndropped out in the last step when we set all derivatives $\\partial_\\alpha$ to\nzero.\n\\smallskip\n\nOur result \\eqref{eq:resultRM} applies to all decompositions of $\\mathfrak{g}$ into two Lie\nsubalgebras $\\mathfrak{m}$ and $\\mathfrak{p}$. 
As we pointed out in the first paragraph, the standard\nchoice is to take $\\mathfrak{p}$ to contain generators that do not increase the conformal weight.\nIn that case, the structure algebra $\\mathcal{M} = \\mathcal{F}(\\mathfrak{m})$ is called the standard\nsuperspace. If the superconformal algebra $\\mathfrak{g}$ is of type I, however, there\nexist other natural choices to which the constructions of this subsection apply. In a\ntype I superalgebra the R-symmetry contains a $U(1)$ subalgebra which commutes\nwith all bosonic generators but assigns the fermionic ones a non-trivial\nR-charge $\\pm 1$. As usual, we can decompose the Lie superalgebra $\\mathfrak{g} =\n\\mathfrak{g}_{\\leq 0} \\oplus \\mathfrak{g}_{> 0}$ by splitting off those generators in\n$\\mathfrak{g}_{>0}$ that strictly increase the conformal weight. These consist\nof supercharges $Q$ and generators of translations. In a type I superalgebra\nwe can now split the space $\\mathfrak{q}$ of supercharges $Q$ according to\nthe sign of their $U(1)$ R-charge as $\\mathfrak{q} = \\mathfrak{q}_+ \\oplus\n\\mathfrak{q}_-$. With this in mind we can introduce two new decompositions\n$\\mathfrak{g} = \\mathfrak{m}_\\pm \\oplus \\mathfrak{p}_\\pm$ of the superconformal algebra where\n \\begin{equation}\n \\mathfrak{p}_\\pm = \\mathfrak{g}_{\\leq 0} \\oplus \\mathfrak{q}_\\pm \\ , \\quad\n \\mathfrak{m}_\\pm = \\mathfrak{g}_1\\oplus\\mathfrak{q}_\\mp = \\mathfrak{g}/\\mathfrak{p}_\\pm \\ . 
\\nonumber\n\\end{equation}\nFrom the properties of type I Lie superalgebras, one may easily show that both\n$\\mathfrak{p}_\\pm$ and $\\mathfrak{m}_\\pm$ are subalgebras of $\\mathfrak{g}$.\nThe associated superspaces $\\mathcal{M}_\\pm = \\mathcal{F}(\\mathfrak{m}_\\pm)$ are\ncalled the chiral and anti-chiral superspace, respectively.\n\\bigskip\n\n\\noindent\n{\\bf Example:} As an example, let us illustrate the construction of superspace and the differential\noperators in the case of the 1-dimensional $\\mathcal{N}=2$ superconformal algebra $\\mathfrak{g}=\\mathfrak{sl}(2|1)$.\nThe smallest faithful representation of $\\mathfrak{g}$ is 3-dimensional. We may choose the generators\nas\n\\begin{equation} \\label{eq:bosrep}\n D = \\begin{pmatrix}\n 1\/2 & 0 & 0\\\\\n 0 & -1\/2 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\ P = \\begin{pmatrix}\n 0 & 1 & 0\\\\\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\ K = \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 1 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\ R = \\begin{pmatrix}\n -1 & 0 & 0\\\\\n 0 & -1 & 0\\\\\n 0 & 0 & -2\n \\end{pmatrix},\\nonumber\n\\end{equation}\nfor the four bosonic generators and\n\\begin{equation} \\label{eq:fermrep}\n Q_- = \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\\\\\n 0 & 1 & 0\n \\end{pmatrix},\\ Q_+ = \\begin{pmatrix}\n 0 & 0 & 1\\\\\n 0 & 0 & 0\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\ S_- = \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 0\\\\\n 1 & 0 & 0\n \\end{pmatrix},\\ S_+ = \\begin{pmatrix}\n 0 & 0 & 0\\\\\n 0 & 0 & 1\\\\\n 0 & 0 & 0\n \\end{pmatrix},\\nonumber\n\\end{equation}\nfor the fermionic ones. Here we shall consider the decomposition $\\mathfrak{g} = \\mathfrak{m} \\oplus \\mathfrak{p}$\nwith the Lie superalgebra $\\mathfrak{m}$ spanned by $P, Q_+$ and $Q_-$. The corresponding superspace\n$\\mathcal{M}$ is generated by one bosonic variable $u$ along with two Grassmann variables\n$\\theta$ and $\\bar \\theta$. 
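The matrices above can be checked mechanically. The following short script (our illustration, assuming numpy is available; not part of the original derivation) verifies a sample of the $\\mathfrak{sl}(2|1)$ (anti-)commutation relations that the construction relies on:

```python
import numpy as np

# 3x3 matrices of eqs. (eq:bosrep)/(eq:fermrep); indices 0,1 bosonic, 2 fermionic
D = np.diag([0.5, -0.5, 0.0])
R = np.diag([-1.0, -1.0, -2.0])
P = np.zeros((3, 3)); P[0, 1] = 1
K = np.zeros((3, 3)); K[1, 0] = 1
Qp = np.zeros((3, 3)); Qp[0, 2] = 1   # Q_+
Qm = np.zeros((3, 3)); Qm[2, 1] = 1   # Q_-
Sp = np.zeros((3, 3)); Sp[1, 2] = 1   # S_+
Sm = np.zeros((3, 3)); Sm[2, 0] = 1   # S_-

com = lambda a, b: a @ b - b @ a      # commutator (at least one bosonic entry)
acom = lambda a, b: a @ b + b @ a     # anticommutator (two fermionic entries)

assert np.allclose(com(D, P), P)             # [D,P] = P
assert np.allclose(com(D, K), -K)            # [D,K] = -K
assert np.allclose(com(D, Qp), 0.5 * Qp)     # [D,Q_+] = Q_+/2
assert np.allclose(com(R, Qp), Qp)           # Q_+ has R-charge +1
assert np.allclose(com(R, Qm), -Qm)          # Q_- has R-charge -1
assert np.allclose(com(R, P), 0 * P)         # P is R-neutral
assert np.allclose(acom(Qp, Qm), P)          # {Q_+,Q_-} = P
assert np.allclose(acom(Sp, Sm), K)          # {S_+,S_-} = K
assert np.allclose(acom(Qp, Sm), D - R / 2)  # {Q_+,S_-} = D - R/2
assert np.allclose(acom(Qm, Sp), -D - R / 2) # {Q_-,S_+} = -D - R/2
print("sl(2|1) relations verified")
```

Note in particular that the anticommutators of the supercharges close onto $P$, $K$ and the Cartan generators $D$, $R$, as required for a superconformal algebra.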
In this case the supergroup element $m$ we introduced above takes\nthe following matrix form\n\\begin{equation} \\label{eq:m-1d}\nm(x) = e^{u P + \\theta Q_+ + \\bar \\theta Q_-} = \\begin{pmatrix}\n 1 & X & \\theta \\\\\n 0 & 1 & 0 \\\\\n 0 & -\\bar\\theta & 1\n \\end{pmatrix} \\ ,\n\\end{equation}\nwhere $X = u-\\frac12 \\theta \\bar \\theta$ and $x = (u,\\theta,\\bar\\theta)$ represents the three\ngenerators of the structure algebra.\n\\smallskip\n\nThe construction we outlined above provides us with an action of the superconformal algebra\n$\\mathfrak{g}$ on this superspace through differential operators $\\mathcal{R}_X$ of the form\n\\begin{align}\n & p = \\partial_u\\ ,\\quad & k = -u^2\\partial_u - u\\theta\\partial_{\\theta} -\n u\\bar\\theta\\partial_{\\bar\\theta}\\ , \\label{eq:sldop1} \\\\[2mm]\n & d = u\\partial_u + \\frac12\\theta\\partial_{\\theta} + \\frac12\\bar\\theta\n \\partial_{\\bar\\theta}\\ ,\\quad\n & r =\\theta\\partial_{\\theta} - \\bar\\theta\\partial_{\\bar\\theta}\\ ,\n \\label{eq:sldop2}\\\\[2mm]\n & q_+ = \\partial_{\\theta} - \\frac12\\bar\\theta\\partial_u \\ ,\\quad &\n q_- = \\partial_{\\bar\\theta} - \\frac12\\theta\\partial_u\\ ,\n \\label{eq:sldop3} \\\\[2mm]\n & s_+ = -(u+\\frac12\\theta\\bar\\theta)q_+\\ ,\\quad &\n s_- = (u-\\frac12\\theta\\bar\\theta)q_-\\ .\n \\label{eq:sldop4}\n\\end{align}\nAs we pointed out in our discussion above, the choice of $p$ is not relevant for\nthe final result. We encourage the reader to derive these explicit expressions\nfrom our general formula \\eqref{eq:resultRM}.\n\n\\subsection{Global superconformal symmetry and Weyl inversions}\n\nHaving constructed superspace along with an action of the superconformal algebra\nthereon, our next task is to construct the action of global conformal transformations.\nAs we shall see in a moment, most of the global transformations act in an obvious\nway. The only exceptions are special conformal transformations. 
For bosonic conformal\nsymmetry, the easiest way to construct these is through conjugation of translations\nwith the conformal inversion. We follow essentially the same strategy in the supersymmetric context,\nexcept that we need to replace the conformal inversion by a closely related Weyl inversion.\nThe latter extends nicely to superconformal algebras while conformal inversions may\nnot actually exist, see below.\n\nDefining the action of global conformal transformations on superspace requires a\nlittle bit of preparation. We shall think of a global symmetry transformation as\nbeing associated to a supergroup element $h=h(s)$. We may consider $h$ as a matrix\nwhose matrix elements are functions on the supergroup, i.e.\\ elements of the structure\nalgebra generated by the coordinates $s_a$ and $s_\\alpha$. The graded commutative\nalgebra that is generated by these coordinates is just another copy of the algebra\nthat is generated by $x_a$ and $x_\\alpha$. From now on we shall suppress the\ndependence on $s$ again. The left action of such an element $h$ on the supergroup\nelement $g(x)= m(x_a)p(x_\\alpha)$ is simply given by the left multiplication $g(x)\n\\mapsto h g(x)$. In order to obtain the action on superspace, we need to\nfactorize $h g(x)$ as\n\\begin{equation}\nh g(x) = m(y(x,h)) p(x,h) = e^{y(x,h)_a X^a} p(x,h) \\ .\n\\end{equation}\nThis factorization defines the $h$-transform $y(x,h)_a$ of the superspace\ncoordinates $x_a$. Note that $y(x,h)_a$ are elements in the tensor product of\ntwo structure algebras, the one generated by coordinates $x$ and the one that\nis generated by $s$. It is particularly easy to apply this definition to\nrotations, dilations and R-symmetries since these form a subgroup $K$\nthat respects the split of $\\mathfrak{g}$ into $\\mathfrak{m}$ and $\\mathfrak{p}$. In fact, the\nLie algebra $\\mathfrak{k}$ is even a subalgebra of $\\mathfrak{p}$. 
In order to factorize\n$$k g(x) = k m(x) p(x) = m(y(x,k)) p(x,k)$$\nfor some $k \\in K$\\footnote{Here we assume that all the matrix elements are\nconstant functions on the supergroup, i.e. they are proportional to the\nidentity element of the structure algebra.} all we need to do is move\n$k$ through $m$. Since the generators $X^a$ transform in some representation\n$\\kappa$ of $K$, the effect can be captured by a linear transformation of the\ncoordinates $x_a$, i.e. $y(x,k)_a =\\kappa_{ab}(k) x_b$. Also (super-)translations\nare easy to discuss. These are associated with elements $h(c) = m(c)$ so that\nmultiplication of $h$ with $g(x)$ only requires multiplying $m(c) m(x) =\nm(y(x,c))$. Since bosonic translations commute among themselves and with the\nsupercharges $Q$, the only non-trivial terms in the product $m(c) m(x)$ come\nfrom the non-vanishing anti-commutators of the supercharges. But these can be\nevaluated easily in concrete examples and hence the computation of $y(x,c)$ is\nstraightforward.\n\\medskip\n\nIt now remains to discuss the action of special (super-)conformal transformations.\nWe will not discuss these directly but instead focus on one particular global\nsuperconformal transformation, namely the superconformal extension of the Weyl\ninversion $w$. As we shall see, this Weyl inversion relates special superconformal\ntransformations to supertranslations, just as in the bosonic case.\n\nBefore we enter the discussion of the Weyl inversion, let us briefly recall\nhow the ordinary inversion of conformal field theories is constructed. By\ndefinition, the \\textit{conformal group} is a Lie group with $\\mathfrak{g}=\n\\mathfrak{so}(d+1,1)$ as its Lie algebra. Let $O(d+1,1)$\nbe the group of pseudo-orthogonal matrices. Its identity component is\ndenoted by $SO^+(d+1,1)$. 
This group can be realised as the quotient\n\\begin{equation}\n SO^+(d+1,1) = \\textit{Spin}(d+1,1)/\\mathbb{Z}_2 \\nonumber\n\\end{equation}\nof the universal covering group $\\textit{Spin}(d+1,1)$ by its centre. Both $SO^+(d+1,1)$ and\n$\\textit{Spin}(d+1,1)$ act on the compactified Euclidean space, but only the first action is\nfaithful. In the case of $\\textit{Spin}(d+1,1)$, both elements of the centre act trivially.\nObviously, both $SO^+(d+1,1)$ and $\\textit{Spin}(d+1,1)$ possess the same Lie algebra $\\mathfrak{g}\n= \\mathfrak{so}(d+1,1)$. The conformal inversion\n\\begin{equation}\n I x^\\mu = \\frac{x^\\mu}{x^2} \\nonumber\n\\end{equation}\nis an element of $O(d+1,1)$, but it resides in a component that is not connected to the\nidentity component, i.e. the conformal inversion $I$ is not an element of $SO^+(d+1,1)$.\nWe can remedy this by multiplying the inversion with some spatial reflection.\nThe so-called Weyl inversion $w=s_{e_d}\\circ I$ involves the reflection on $\\mathbb{R}^d$\nthat sends $x_d$ to $- x_d$, and it belongs to $SO^+(d+1,1)$. We can actually construct\nthe Weyl inversion explicitly through the following exponential of conformal generators,\n\\begin{equation}\n w = e^{\\pi\\frac{K_d-P_d}{2}}. \\label{Weyl-inversion}\n\\end{equation}\nThere are two elements of $\\textit{Spin}(d+1,1)$ which project to $w$. We use the\nexpression \\eqref{Weyl-inversion} as our definition of the Weyl inversion for\n$\\textit{Spin}(d+1,1)$. One can check that its square is the non-trivial element of\nthe centre, i.e.\\ that $w^2=-1$.\n\\medskip\n\nIn passing to the superconformal algebra we use the same formula \\eqref{Weyl-inversion} to\ndefine the Weyl element and hence the Weyl inversion. The bosonic part $\\mathfrak{g}_{\\bar 0}$ of the\nsuperconformal algebra $\\mathfrak{g}$ is generated by the bosonic conformal algebra\n$\\mathfrak{g}_\\textit{bos}$ along with the generators $U \\in \\mathfrak{u}$ of R-symmetry\ntransformations. 
The latter commute with all elements of $\\mathfrak{g}_\\textit{bos}$ and hence the\nassociated universal enveloping algebras satisfy $U(\\mathfrak{g}_{\\bar 0}) \\cong U(\\mathfrak{g}_\\textit{bos})\n\\otimes U(\\mathfrak{u})$. By construction, $w$ lies in $U(\\mathfrak{g}_\\textit{bos})$ and is\ntrivial in $U(\\mathfrak{u})$,\n$$ w = w \\otimes e \\in U(\\mathfrak{g}_\\textit{bos}) \\otimes U(\\mathfrak{u}) \\cong\n U(\\mathfrak{g}_{\\bar 0})\\ . $$\nWhile the action of the element $w$ on generators of the R-symmetry transformations\nis trivial, its action on the fermionic generators is not. Using that conjugation of\nthe generator $D$ of dilations with the Weyl inversion is given by $\\text{Ad}_{w}\n(D)=-D$ we obtain\n\\begin{equation}\n \\frac12\\text{Ad}_w(Q) = \\text{Ad}_w([D,Q]) = [\\text{Ad}_w(D),\\text{Ad}_w(Q)] =\n - [D,\\text{Ad}_w(Q)]\\ , \\nonumber\n\\end{equation}\ni.e.\\ when a supercharge $Q$ is acted upon by the Weyl inversion it is sent to a generator\nwhose conformal weight is $-1/2$. Consequently, the Weyl inversion interchanges generators\nof supertranslations and super special conformal transformations. For superconformal\nalgebras of type I (see the final paragraph of the previous subsection for a definition),\none can similarly use that $\\text{Ad}_w(R) = w R w^{-1} = R$ to deduce\n\\begin{equation}\n \\text{Ad}_w(\\mathfrak{q}_\\pm) \\subset \\mathfrak{s}_\\pm \\ . \\label{odd-generators}\n\\end{equation}\nIn conclusion, we have seen that the super Weyl inversion exists for all superconformal\nalgebras and we stated some of its most important properties. This is to be contrasted\nwith the fact that a supersymmetric analogue of the ordinary conformal inversion may\nactually not exist. 
Assuming that one could choose the superconformal group such that\nthe inversion $I$ belonged to the bosonic conformal subgroup, the arguments\nleading to eq.\\ \\eqref{odd-generators} with $w\\otimes e$ replaced by $I\\otimes e$ would\nremain valid. On the other hand, as the example $\\mathfrak{g} = \\mathfrak{sl}(4|1)$ shows,\nthe fact that $I$ commutes with rotations is inconsistent with eq.\\ \\eqref{odd-generators},\nbearing in mind that $\\mathfrak{q}_+$ and $\\mathfrak{s}_+$ are non-isomorphic modules of the\nrotation group. Fortunately for us, the existence of the super Weyl inversion will\nsuffice.\n\n\n\\medskip\n\n\\noindent\n{\\bf Example:} Let us briefly discuss superconformal transformations and in\nparticular the super Weyl inversion for the Lie superalgebra $\\mathfrak{sl}(2|1)$.\nAs we discussed at the end of the previous subsection, this Lie superalgebra admits\na 3-dimensional representation. All generators have been spelled out in this\nrepresentation above. Within this representation, the supergroup element\n$m(x)$ takes the form \\eqref{eq:m-1d}. The subgroup $K$ consists of dilations\nand $U(1)_R$ symmetry transformations, generated by $D$ and $R$, i.e.\\ $k\n= \\exp(\\lambda D + \\vartheta R)$. Under global transformations with elements $k \\in K$\nthe superspace coordinates $x=(u,\\theta,\\bar \\theta)$ transform as\n\\begin{equation}\ny(x,k) = (e^\\lambda u, e^{\\frac12 \\lambda +\\vartheta}\\theta, e^{\\frac12\\lambda - \\vartheta} \\bar \\theta) \\ .\n\\end{equation}\nHere we can either think of $\\lambda$ and $\\vartheta$ as some real parameters of the\ntransformation or as coordinates on the supergroup, i.e.\\ as two generators of the\nstructure algebra. 
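For the 3-dimensional representation this transformation law is easy to test numerically. The snippet below is our illustration, not part of the derivation; since every identity checked is at most bilinear in the Grassmann variables, ordinary numbers can stand in for $\\theta$ and $\\bar\\theta$. It factorizes $k\\, m(x) = m(y(x,k))\\, k$:

```python
import numpy as np

# diagonal generators D and R of eq. (eq:bosrep)
D = np.diag([0.5, -0.5, 0.0])
R = np.diag([-1.0, -1.0, -2.0])

def m(u, th, thb):
    """Supergroup element m(x) of eq. (eq:m-1d)."""
    X = u - 0.5 * th * thb
    return np.array([[1, X, th], [0, 1, 0], [0, -thb, 1]])

lam, vt = 0.4, -0.9            # parameters of k = exp(lam*D + vt*R)
u, th, thb = 1.7, 0.3, 0.5     # sample superspace point (numerical placeholders)

# D and R are diagonal, so the matrix exponential acts entrywise
k = np.diag(np.exp(np.diag(lam * D + vt * R)))

# k m(x) = m(y(x,k)) k  with  y(x,k) = (e^lam u, e^(lam/2+vt) th, e^(lam/2-vt) thb)
lhs = k @ m(u, th, thb)
rhs = m(np.exp(lam) * u,
        np.exp(lam / 2 + vt) * th,
        np.exp(lam / 2 - vt) * thb) @ k
assert np.allclose(lhs, rhs)
print("transformation law y(x,k) verified")
```

Here the factor $p(x,k)$ is simply $k$ itself, since conjugation by the diagonal matrix $k$ maps $m(x)$ back into the subgroup generated by $P$, $Q_+$ and $Q_-$.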
Supertranslations with an element $m(c) = m(v,\\eta,\\bar \\eta)$\nact as $m(c) m(x) = m(c(x))$ with\n\\begin{equation}\ny(x,c) = c(x) = (u+v + \\frac12 \\theta \\bar \\eta + \\frac12 \\bar \\theta \\eta, \\theta+\\eta,\n\\bar \\theta + \\bar \\eta)\\ .\n\\end{equation}\nThe components of $c = (v,\\eta,\\bar \\eta)$ are generators of the structure algebra.\nIt remains to discuss the Weyl inversion. Within the 3-dimensional representation\nit is straightforward to compute the Weyl inversion from eq.\\ \\eqref{Weyl-inversion},\n\\begin{equation} \\label{eq:wmatrix}\n w = e^{\\pi\\frac{K-P}{2}} = \\begin{pmatrix}\n 0 & -1 & 0\\\\\n 1 & 0 & 0\\\\\n 0 & 0 & 1\n \\end{pmatrix}.\\nonumber\n\\end{equation}\nNote that $w^2 = \\textit{diag}(-1,-1,1)$, i.e.\\ it squares to $-1$ within the\nbosonic conformal group and is trivially extended within the R-symmetry group.\nIt is now straightforward to compute the action of the Weyl inversion on superspace\nby decomposing the matrix $w m(x) = m(w(x)) p(x,w)$ with $w(x)\n= y(x,w)$ given by\n\\begin{equation}\n \\ w(u) = -\\frac{1}{u}\\ ,\\quad w(\\theta) = \\frac{\\theta}{u}\\ ,\\quad\n w(\\bar\\theta) = \\frac{\\bar\\theta}{u}\\ . \\label{w-action-1d}\n\\end{equation}\nNote that the action of $w$ on the bosonic coordinate $u$ is the same as in\nbosonic conformal field theory. This had to be the case, since in the chosen coordinate\nsystem on the superspace $\\mathcal{M}$ the action of the conformal algebra generators on\n$x$ is the same as in bosonic theory. Furthermore, we have $w(p,q_+,q_-)w^{-1}=(-k,-s_+,s_-)$,\nin accordance with the relations $w^{-1}(P,Q_+,Q_-)w = (-K,-S_+,S_-)$ satisfied by $3\\times3$\nmatrices. Often such conditions are used to derive the action of the inversion. 
In the\napproach here, this is not necessary as the action of $w$ can be computed directly.\nThis concludes our discussion of the 1-dimensional $\\mathcal{N} =2$ superspace and\nthe global action of the superconformal symmetry on it.\n\n\n\\section{Lifting Correlators to the Supergroup}\n\nThis section contains the first new result of the present work. We establish an isomorphism\nbetween the solutions of superconformal Ward identities that are satisfied by a four-point\nfunction of arbitrary spinning fields and certain covariant functions on the superconformal\ngroup, to which one may also refer as $K$-spherical functions. The construction finds its\nroots in ideas from \\cite{Dobrev:1977qv}, and generalises that of \\cite{Buric:2019dfk} to\nthe superconformal setting. One key ingredient in our formula is a family of supergroup\nelements $g(x_i)$ that depends on the insertion points of the four fields in superspace.\nThese will play an important role in the following sections as well. In the first subsection\nwe state all this precisely before we illustrate the formulas with the example of $\\mathfrak{g}=\n\\mathfrak{sl}(2|1)$ in the second. The third subsection contains the proof of our\nstatement.\n\n\\subsection{Statement of the result}\n\nLet us now consider a four-point function in some superconformal field theory.\nTo each field we associate a copy of our superspace $\\mathcal{M}$. The generators\n$x_{ia}$ of these spaces carry a label $i=1, \\dots, 4$ in addition to\nthe label $a$ we introduced in the previous section. The corresponding supergroup\nelements $m_i = m(x_i)$ are given by\n\\begin{equation}\nm(x_i) = e^{x_{ia} X^a} .\n\\end{equation}\nHere the summation over $a$ is understood. 
Given any pair of labels $i,j$ we\ndefine the variables $x_{ij} = (x_{ija}) \\in \\mathcal{M}_i \\otimes \\mathcal{M}_j$\nthrough\n\\begin{equation} \\label{eq:xij}\nm(x_{ij}) = m(x_j)^{-1} m(x_i) \\ .\n\\end{equation}\nConcrete expressions for the components of $x_{ij}$ can be worked out from the\nanti-commutator relations of the supercharges $Q$. One may think of $m(x_i)$ as\na function on superspace with values in the universal enveloping algebra or,\nmore concretely, after evaluation in a fundamental representation of the Lie\nsuperalgebra $\\mathfrak{g}$, as a matrix valued function on superspace.\n\nIn the last section we also introduced the Weyl element $w$ through equation\n\\eqref{Weyl-inversion}. Note that $w$ is constructed out of generators of the\nbosonic conformal group only. In particular it acts trivially within the\nR-symmetry group $U$. We can think of $w$ as a grouplike element in the\nuniversal enveloping algebra or, after application of a fundamental\nrepresentation, as a concrete matrix such as in eq.\\ \\eqref{eq:wmatrix}.\nWith the help of the Weyl inversion, let us define a new family of\nsupergroup elements $n$ through\n\\begin{equation} \\label{eq:nx}\nn(x) = w^{-1} m(x) w\\ .\n\\end{equation}\nSince $m$ involves only generators $X^a \\in \\mathfrak{g}_{>0}$ of the superconformal\nalgebra that raise the conformal weight, i.e.\\ generators $P$ of translations\nand supercharges $Q$, the element $n$ is built using generators $Y^a$ from the\nalgebra $\\mathfrak{g}_{<0}$ that lower the conformal weight, see our previous discussion\nof the Weyl inversion. This means that $n$ involves special conformal generators\n$K$ as well as the fermionic generators $S$.\n\nIn order to proceed, let us introduce another supergroup element $k=k(t)$ using\nthe remaining generators $X \\in \\mathfrak{g}_0$ that commute with the generator of\ndilations and therefore neither appear in $n$ nor in $m$. 
This means that $k$ is\nbuilt from the generators of dilations, rotations and R-symmetry\ntransformations, all of which are even (bosonic). Given the three supergroup\nelements $m,n,k$ we can now decompose $w m(x)$ as\n\\begin{equation} \\label{eq:factorization}\nw m(x) = m(y(x))\\, n(z(x)) \\, k(t(x)) \\ ,\n\\end{equation}\nwhere the components of $y(x) = (y(x)_a)$, $z(x) = (z(x)_a)$ and $t(x)\n= (t(x)_\\varrho)$ are certain functions of the superspace coordinates $x$ that\ncan be worked out concretely on a case-by-case basis. We shall state concrete\nformulas in some examples below. Let us stress that it is through this factorization\n\\eqref{eq:factorization} that we introduce the action of the Weyl inversion\n$w$ on superspace, i.e. by definition $y(x) = w x$. We consider the\nfunctions $y,z$ and $t$ as given for now and use them to introduce\n\\begin{equation}\ny_{ij} = y(x_{ij}) = w x_{ij} \\ , \\quad z_{ij}= z(x_{ij}) \\ , \\quad\nt_{ij} = t(x_{ij}) \\ .\n\\end{equation}\nBy definition we have\n\\begin{equation} \\label{eq:factorizationij}\nw m(x_{ij}) = m(y_{ij}) \\, n(z_{ij}) \\, k(t_{ij}) \\ .\n\\end{equation}\nThe components of $x_{ij}, y_{ij}, z_{ij}$ and $t_{ij}$ are elements in the\nfour-fold tensor product $\\mathcal{M}^4 \\cong \\mathcal{M}^{\\otimes 4}$ of the\nsuperspace $\\mathcal{M}$, one copy for each insertion point. This is all we\nneed to know about the superconfiguration space of the four insertion points.\n\\medskip\n\nSo, let us now consider some four-point correlation function $G$ in a quantum\nfield theory with superconformal symmetry given by $\\mathfrak{g}$. The fields $\\Phi$ of our\ntheory are organized in supermultiplets. We label these supermultiplets through\nthe quantum numbers of their superprimaries. These consist of a conformal weight\n$\\Delta$, a spin $\\lambda$ and the R-charges $q$. 
The collection of these\nquantum numbers determines a finite-dimensional irreducible representation\n$\\rho = \\rho_{\\Delta, \\lambda,q}$ of the Lie algebra $\\mathfrak{k} = \\mathfrak{g}_0$\nthat is spanned by dilations, rotations and R-symmetries. We denote the carrier\nspace of this representation by $V = V_\\rho$ and shall often refer to it as the\nspace of superpolarizations. Let us stress that elements of $V_\\rho$ are associated\nwith polarizations of the superprimary in the supermultiplet $\\Phi$. In our\nfour-point function we have four supermultiplets whose superprimary components\ntransform in representations $\\rho_i,\\ i = 1, \\dots, 4$. The polarizations of\nthese four superprimary fields span the vector spaces $V_i$.\n\nGiven these data, we now consider the space $\\mathcal{F}(\\mathfrak{g})\\otimes\nV_{1234}$ of ``functions $F$ on the supergroup'' that take values in the vector\nspace $V_{1234} = V_1 \\otimes \\dots \\otimes V_4$. Among its elements we restrict\nto those functions $F$ that possess the following covariance property\\footnote{\nMathematically minded readers should think of $g$ as a supergroup element $g \\in\nU(\\mathfrak{g}) \\otimes \\mathcal{F}$ where $\\mathcal{F}$ can be any graded commutative\nalgebra, see section 2.1. The object $F(g) \\in \\mathcal{F} \\otimes V_{1234}$ is\nthen obtained using the duality between $\\mathcal{F}(\\mathfrak{g})$ and $U(\\mathfrak{g})$,\nsee eq.\\ \\eqref{eq:duality}.}\n\\begin{align}\\label{eq:covariance}\n F(k_l g k_r)= \\Big(\\rho_1(k_l)\\otimes\\rho_2(w k_l w^{-1})\\otimes\n \\rho_3(k_r^{-1})\\otimes\\rho_4(w k_r^{-1}w^{-1})\\Big) F(g) \\ ,\n\\end{align}\nfor all $k_l,k_r \\in K$. In analogy with ordinary Lie theory, such an $F$ will\nbe called a $K$-spherical function. To digest the mathematical meaning of this\nformula a bit better, let us pretend for a moment that we are dealing with some\nordinary Lie algebra $\\mathfrak{g}$ rather than a superalgebra. 
In that case, $g$ as\nwell as $k_l, k_r$ are elements of the bosonic group $G$. When we write $F(g)$\nwe let the group element $g$ act as a global symmetry transformation on the\nspace $\\mathcal{F}(\\mathfrak{g})$ of functions on the group and evaluate the result\nat the group unit. Stated more directly, we simply evaluate the vector-valued\nfunction $F$ at the point $g$ of the group manifold. Almost the same is true\nfor superalgebras except that $g$ is a matrix whose matrix elements are taken\nfrom some Grassmann algebra and $F$ is a prescription that turns such a matrix\ninto a vector $F(g)$ whose components are elements of that Grassmann algebra.\nTo evaluate $F(k_l g k_r)$ we employ the left-right action of $K \\times K$ on\nthe space $\\mathcal{F}(\\mathfrak{g})$ of functions on the supergroup to transform $F$\ninto a new element $F^{(k_l,k_r)}$ of the space $\\mathcal{F}(\\mathfrak{g}) \\otimes\nV_{1234}$. When we apply this transformed $F^{(k_l,k_r)}$ to $g$ we obtain\nanother vector $F^{(k_l,k_r)}(g) = F(k_lgk_r)$ with Grassmann-valued components.\nThe covariance condition \\eqref{eq:covariance} selects those elements $F$ for\nwhich the two vectors $F(g)$ and $F(k_lg k_r)$ are related by a specific matrix\nrotation that is obtained from representation matrices of $k_l$ and $k_r$ in the\nrepresentations $\\rho_i$. The precise construction of this matrix, which\nalso involves conjugation with the Weyl element $w$ in two of the four\ntensor factors, will become clear in the third subsection.\n\nLet us now come back to our correlation function $G_4$. By construction, $G_4(x_i)$\nis a function on the four-fold tensor product $\\mathcal{M}^4$ of superspace that takes\nvalues in the space $V_{1234}$ of polarizations, i.e.\\ $G_4 \\in \\mathcal{M}^4 \\otimes\nV_{1234}$. Being the four-point function in some superconformal field theory, $G_4$\ntransforms in a very special way under superconformal transformations. 
This can be\nexpressed in terms of a set of superconformal Ward identities. As a consequence of\nthese covariance properties one may show that, given $G_4$, there exists a unique\nfunction $F \\in \\mathcal{F}(\\mathfrak{g}) \\otimes V_{1234}$ on the supergroup with\ncovariance property \\eqref{eq:covariance} such that\n\\begin{eqnarray}\nG_4(x_i) & = & \\Big(1\\otimes\\rho_2(k(t_{21}))^{-1}\\otimes1\\otimes\n\\rho_4(k(t_{43}))^{-1}\\Big) F(g(x_i))\\, , \\label{magic-formula}\\\\[2mm]\n& & \\textit{where}\\ g(x_i) = n(y_{21})^{-1} m(x_{31}) n(y_{43})\\ . \\label{eq:gxi}\n\\end{eqnarray}\nThe argument of $F$ is a product of supergroup elements, i.e.\\ an element of $U(\\mathfrak{g})\n\\otimes \\mathcal{M}^4$ or some matrix representation thereof. After the application of\n$F$ we obtain an element of $\\mathcal{M}^4 \\otimes V_{1234}$. We may think of this as\na vector-valued function on the four-fold tensor product of superspaces which can be\ncompared to $G_4$. The factor in front of $F$, which relates $F(g(x_i))$ to $G_4(x_i)$,\nis a certain matrix of functions on $\\mathcal{M}^4$ that acts non-trivially on the two\nfactors $V_2$ and $V_4$. We shall also refer to eq.\\ \\eqref{magic-formula} as the\nsupersymmetric \\textit{lifting formula}.\n\\smallskip\n\nLet us remark that there is a quick sanity check of our formula, namely one may\nverify that both sides of the lifting formula \\eqref{magic-formula} satisfy the same\nWard identities for infinitesimal transformations generated by elements $X\n\\in \\mathfrak{g}_{\\geq 0}$. The latter is spanned by translations, supercharges $Q$,\nrotations, dilations and R-symmetry transformations. The key observation is\nthat\n\\begin{equation}\\label{eq:diffrel}\n\\sum_{j=1}^4 \\mathcal{R}_X^{(j)} g(x_i)\n= \\left[ X \\otimes \\textit{id}, g(x_i) \\right]\\ .\n\\end{equation}\nRecall that the argument $g(x_i)$ of $F$ may be considered as a matrix whose\nentries are functions on the four-fold product of superspace. 
On these matrix\nelements we act with the sum of right-invariant vector fields $\\mathcal{R}_X$\nfor $X \\in \\mathfrak{g}_{\\geq 0}$, acting on one set of superspace coordinates each.\nThe differential operators $\\mathcal{R}$ were constructed in the previous section.\nOur claim is that the resulting matrix of functions on superspace is the same\nas for the matrix commutator of the representation matrix for $X$ with the\nproduct of supergroup elements. This property holds essentially by construction\nof the argument of $F$. This is not a full proof of our formula yet since the\nargument cannot easily be extended to special (super-)conformal transformations.\nWe give a complete derivation in the third subsection after we have\nillustrated the notations and constructions we introduced in this section\nfor the $\\mathcal{N}=2$ superconformal algebra in $d=1$ dimension.\n\n\\subsection{Illustration for 1-dimensional superconformal algebra}\n\nLet us continue to illustrate our constructions and statements in the example of the\n$\\mathcal{N}=2$ superconformal algebra in $d=1$. Recall that the fundamental representation\nof this algebra is 3-dimensional and hence we realize all our supergroup elements as\n$3\\times 3$ matrices with components in the superspace. The elements $m(x)$ were\nconstructed in eq.\\ \\eqref{eq:m-1d} already. The Weyl inversion $w$ and its action on\nsuperspace were worked out in eqs.\\ \\eqref{eq:wmatrix} and \\eqref{w-action-1d},\nrespectively. It is easy to check that the $3 \\times 3$ matrices $n(x)$ take\nthe form\n\\begin{equation} \\label{eq:n-1d}\n n(x) = w^{-1} m(x) w = \\begin{pmatrix}\n 1 & 0 & 0\\\\\n -X & 1 & -\\theta\\\\\n -\\bar\\theta & 0 & 1\n \\end{pmatrix}\\ ,\n\\end{equation}\nwhere $X = u - \\frac12 \\theta \\bar\\theta$ is the same even combination of\nsuperspace coordinates $(u,\\theta,\\bar \\theta)$ that appeared in our formula\n\\eqref{eq:m-1d} for $m(x)$. 
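All statements involving the Weyl element can be tested within the 3-dimensional representation. The following sketch (our addition; a truncated power series replaces a library matrix exponential, and plain numbers stand in for the Grassmann variables since conjugation by $w$ is linear in the entries of $m(x)$) verifies eq.\\ \\eqref{Weyl-inversion}, the conjugation relations $w^{-1}(P,Q_+,Q_-)w = (-K,-S_+,S_-)$ and the formula for $n(x)$:

```python
import numpy as np

# generators of sl(2|1) in the 3-dimensional representation (eqs. eq:bosrep/eq:fermrep)
P = np.zeros((3, 3)); P[0, 1] = 1
K = np.zeros((3, 3)); K[1, 0] = 1
Qp = np.zeros((3, 3)); Qp[0, 2] = 1
Qm = np.zeros((3, 3)); Qm[2, 1] = 1
Sp = np.zeros((3, 3)); Sp[1, 2] = 1
Sm = np.zeros((3, 3)); Sm[2, 0] = 1

def expm(A, terms=40):
    """Matrix exponential via a truncated power series (sufficient here)."""
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Weyl inversion w = exp(pi (K-P)/2), its matrix form and its square
w = expm(np.pi * (K - P) / 2)
assert np.allclose(w, [[0, -1, 0], [1, 0, 0], [0, 0, 1]])
assert np.allclose(w @ w, np.diag([-1, -1, 1]))

# w^{-1} (P, Q_+, Q_-) w = (-K, -S_+, S_-)
wi = np.linalg.inv(w)
assert np.allclose(wi @ P @ w, -K)
assert np.allclose(wi @ Qp @ w, -Sp)
assert np.allclose(wi @ Qm @ w, Sm)

# n(x) = w^{-1} m(x) w, with numbers as placeholders for theta, thetabar
u, th, thb = 2.0, 0.3, 0.7
X = u - 0.5 * th * thb
m = np.array([[1, X, th], [0, 1, 0], [0, -thb, 1]])
n = np.array([[1, 0, 0], [-X, 1, -th], [-thb, 0, 1]])
assert np.allclose(wi @ m @ w, n)
print("Weyl inversion and n(x) verified")
```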
The central ingredient in our construction above is\nthe factorization formula \\eqref{eq:factorization} for $w m(x)$. In the case\nof $\\mathfrak{g} = \\mathfrak{sl}(2|1)$ this reads\n\\begin{equation}\n \\begin{pmatrix}\n 0 & -1 & 0\\\\\n 1 & X & \\theta\\\\\n 0 & -\\bar\\theta & 1\n \\end{pmatrix} = \\begin{pmatrix}\n 1 & -\\frac1u \\left(1+\\frac{\\theta\\bar\\theta}{2u}\\right) & \\theta\/u\\\\\n 0 & 1 & 0\\\\\n 0 & -\\bar\\theta\/u & 1\n \\end{pmatrix} \\begin{pmatrix}\n 1 & 0 & 0\\\\\n u+\\frac12\\theta\\bar\\theta & 1 & \\theta\\\\\n \\bar\\theta & 0 & 1\n \\end{pmatrix} \\begin{pmatrix}\n \\frac1u\\left(1-\\frac{\\theta\\bar\\theta}{2u}\\right) & 0 & 0\\\\\n 0 & u\\left(1-\\frac{\\theta\\bar\\theta}{2u}\\right) & 0\\\\\n 0 & 0 & 1-\\frac{\\theta\\bar\\theta}{u}\n \\end{pmatrix} \\ . \\label{eq:matrixfactorization-1d}\n\\end{equation}\nComparing the first of the three factors with the expression \\eqref{eq:m-1d} for\n$m(y)$ we deduce\n\\begin{equation}\ny(x) = (Y+\\frac12\\eta\\bar\\eta,\\eta,\\bar\\eta) = w(u,\\theta,\\bar\\theta) =\n\\left(\\frac{-1}{u},\\frac{\\theta}{u},\\frac{\\bar\\theta}{u}\\right)\\ . \\label{eq:wact-1d}\n\\end{equation}\nThis agrees of course with the result we found in eq. \\eqref{w-action-1d}. 
Turning\nto the second matrix factor in the factorization formula and comparing with eq.\\\n\\eqref{eq:n-1d} for $n(z)$ we conclude\n\\begin{equation} \\label{eq:zcoord-1d}\nz(x) = (Z+\\frac12\\zeta\\bar\\zeta,\\zeta,\\bar\\zeta) = (-u,-\\theta,-\\bar\\theta) \\ .\n\\end{equation}\nUsing the representation matrices for $D$ and $R$ that we spelled out in\neq.\\ \\eqref{eq:bosrep}, the third factor, finally, can be written as\n\\begin{equation}\nk(t(x)) = e^{-\\log u^2 D + \\frac{\\theta\\bar\\theta}{2u}R}\\ .\n\\end{equation}\nWe lift the matrix equation \\eqref{eq:matrixfactorization-1d} to the following\nfactorization identity for supergroup elements\n\\begin{equation}\n w m(x) = w e^{x\\cdot X} =\n e^{w(x)\\cdot X} e^{-x\\cdot X^w} e^{-\\log u^2 D + \\frac{\\theta\\bar\\theta}{2u}R} \\ ,\n \\label{fund-1d}\n\\end{equation}\nwhere $X^w = w^{-1}(P,Q_+,Q_-)w = (-K,-S_+,S_-)$. Given several points $x_i$ in superspace,\nwe can now compute the supercoordinates $x_{ij}$ by evaluating the product $m(x_j)^{-1}\nm(x_i)$. The result is given by $x_{ij} = (u_{ij},\\theta_{ij}, \\bar \\theta_{ij})$ with\n\\begin{equation}\n u_{ij} = u_i - u_j -\\frac12\\theta_i\\bar\\theta_j - \\frac12\\bar\\theta_i\\theta_j \\ ,\\quad\n \\theta_{ij} = \\theta_i - \\theta_j \\ ,\\quad\n \\bar\\theta_{ij} = \\bar\\theta_i - \\bar\\theta_j \\ .\n \\label{distance}\n\\end{equation}\nFor completeness let us also state how the Weyl inversion acts on $x_{ij}$\n\\begin{equation}\n w(x_{ij}) = (-u_{ij}^{-1},u_{ij}^{-1}\\theta_{ij},\n u_{ij}^{-1}\\bar\\theta_{ij})\\ . \\label{inverse}\n\\end{equation}\nOf course this coincides with the formula \\eqref{eq:wact-1d} applied to the\nsuperspace coordinates $x_{ij}$. 
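The superspace distances \\eqref{distance} can be checked in the same mechanical fashion. The sketch below is our illustration; it uses the explicit form $m(x) = \\left(\\begin{smallmatrix} 1 & X & \\theta \\\\ 0 & 1 & 0 \\\\ 0 & -\\bar\\theta & 1\\end{smallmatrix}\\right)$ with $X = u - \\frac12\\theta\\bar\\theta$, which is consistent with the factors displayed in \\eqref{eq:matrixfactorization-1d}, realizes the two sets of Grassmann variables as anticommuting nilpotent matrices, and verifies that $m(x_2)^{-1} m(x_1) = m(x_{12})$ with $x_{12}$ as in \\eqref{distance}.

```python
import numpy as np

def grassmann(n):
    """n anticommuting nilpotent generators as 2^n x 2^n matrices
    (a Jordan-Wigner-type construction)."""
    Z = np.diag([1.0, -1.0])              # sign operator
    C = np.array([[0.0, 0.0], [1.0, 0.0]])  # nilpotent "creation" operator
    gens = []
    for k in range(n):
        op = np.eye(1)
        for f in [Z] * k + [C] + [np.eye(2)] * (n - k - 1):
            op = np.kron(op, f)
        gens.append(op)
    return gens

# two superspace points x_1, x_2 -> four Grassmann generators
th1, tb1, th2, tb2 = grassmann(4)
Id = np.eye(16)

def m(u, th, tb):
    """m(x) with X = u - theta*thetabar/2; u may be a Grassmann-even matrix."""
    X = u - 0.5 * th @ tb
    O = 0 * Id
    return np.block([[Id, X, th], [O, Id, O], [O, -tb, Id]])

u1, u2 = 0.8, -1.3
prod = np.linalg.inv(m(u2 * Id, th2, tb2)) @ m(u1 * Id, th1, tb1)

# eq. (distance): u_12 = u_1 - u_2 - th_1 tb_2 / 2 - tb_1 th_2 / 2, etc.
u12 = (u1 - u2) * Id - 0.5 * th1 @ tb2 - 0.5 * tb1 @ th2
assert np.allclose(prod, m(u12, th1 - th2, tb1 - tb2))
```

The mixed terms $-\\frac12\\theta_1\\bar\\theta_2 - \\frac12\\bar\\theta_1\\theta_2$ in $u_{12}$ are exactly what is needed to absorb the cross terms produced when the two unipotent supermatrices are multiplied.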
At this point we have explained all the\ningredients that are needed to construct the supergroup elements $g(x_i)$\nthat were introduced in eq.\\ \\eqref{eq:gxi}.\n\\smallskip\n\nLet us now consider a four-point function $G_4$ of primary fields with conformal\nweights $\\Delta_i$ and R-charges $r_i$ for $i=1, \\dots, 4$. Given $\\Delta$ and\n$r$, the corresponding representation $\\rho_i$ of the group $K = SO(1,1) \\times\nU(1)$ reads\n\\begin{equation} \\label{eq:rho-1d}\n \\rho_{\\Delta,r}(e^{\\lambda D + \\kappa R}) = e^{-\\Delta\\lambda + r\\kappa}\\ .\n\\end{equation}\nSince the group $K$ is abelian, the space $V$ of polarizations is 1-dimensional\nand so is the tensor product $V_{1234} = V_1 \\otimes \\dots \\otimes V_4$. According\nto our general result \\eqref{magic-formula}, there exists a unique functional $F$\nwith the covariance properties\n\\begin{align}\n F(e^{\\lambda_l D + \\kappa_l R} g e^{\\lambda_r D + \\kappa_r R})=\n e^{(\\Delta_2-\\Delta_1)\\lambda_l+(r_1+r_2)\\kappa_l} e^{(\\Delta_3-\\Delta_4)\\lambda_r\n - (r_3+r_4)\\kappa_r} F(g) \\ ,\n\\end{align}\nsuch that the lifting formula reads\n\\begin{equation}\n G_4(x_i) = \\Omega(x_i) \\, F(e^{-w(x_{21})\\cdot X^w} e^{x_{31}\\cdot X}\n e^{w(x_{43})\\cdot X^w})\n \\label{magic-1d}\n\\end{equation}\nand the prefactor $\\Omega$ is given by\n\\begin{equation} \\label{eq:Omegasl2}\n\\Omega =\\Omega(x_i) = \\frac{e^{r_2\\frac{\\theta_{12}\\bar\\theta_{12}}{2u_{12}}+\n r_4\\frac{\\theta_{34}\\bar\\theta_{34}}{2u_{34}}}}{u_{12}^{2\\Delta_2}\n u_{34}^{2\\Delta_4}}\\ .\n\\end{equation}\nIt is instructive to verify that the commutation relations \\eqref{eq:diffrel}\nhold for the argument of $F$ and to evaluate $G_4$ for Weyl-inverted arguments\n$w(x_i)$, thereby showing that the right hand side of the lifting formula\n\\eqref{magic-1d} indeed satisfies the same conformal Ward identities as the\nfour-point function.\n\n\\subsection{Proof of the lifting formula}\n\nThe goal of this subsection is to prove the 
main result \\eqref{magic-formula} for\nan arbitrary superconformal group. Before doing that, let us give one more definition,\nan extension of the factorization formula \\eqref{eq:factorization}\n\\begin{equation}\n h m(x) = m(y(x,h)) n(z(x,h)) k(t(x,h)) \\ , \\label{matrix-identity}\n\\end{equation}\nfrom the Weyl inversion $h = w$ to arbitrary elements $h$ of the superconformal\ngroup. This formula also extends our analysis in section 2.3 where we studied the action\nof global conformal transformations on superspace. At the time we only cared about the\nfirst factor $m(y(x,h))$ in the product on the right hand side. The new formula\n\\eqref{matrix-identity} extends the action of global superconformal transformations\nto the whole superconformal group. Otherwise all the additional explanations we\nprovided in section 2.3 remain applicable. The extended factorization formula involves\nthree sets of functions $y(x,h) = (y(x,h)_a)$, $z(x,h) = (z(x,h)_a)$ and $t(x,h) =\n(t(x,h)_\\varrho)$. For $h=w$ we recover the functions we introduced in the\nprevious section.\n\nA four-point correlation function $G_4$ satisfies a set of Ward identities. For global\nsuperconformal transformations $h$ these may be written in the form\n\\begin{equation} \\label{eq:G4Wardid}\n G_4 (x_i^h) = \\Big(\\bigotimes_{i=1}^4 \\rho_i (k(t(x_i,h)))\\Big) G_4(x_i) \\ .\n\\end{equation}\nNote that correlation functions are essentially invariant under these transformations\nexcept for some factors depending on the weight, spin and the R-charges. This dependence\nis encoded in the choice of representations $\\rho_i$, as we explained above. 
In a first\nstep we want to lift the correlator $G_4$ to an object $F_4\\in\\mathcal{F}_1\\otimes V_1\n\\otimes \\dots \\otimes \\mathcal{F}_4\\otimes V_4$, where $\\mathcal{F}_i$ are supercommuting\ncopies of the structure algebra $\\mathcal{F}(\\mathfrak{g})$ of functions on the supergroup.\nThis can be done in a unique way if we require\n\\begin{equation}\\label{eq:F4rightcov}\n F_4(m(x_i)) = G_4(x_i)\\ ,\\quad \\quad F_4 (g_i n_i k_i) =\n \\bigotimes_{i=1}^{4} \\rho_i (k_i^{-1})\n F_4(g_i) \\ .\n\\end{equation}\nHere our notations are the same as in section 3.1, see our extended discussion before\nequation \\eqref{magic-formula}. The Ward identities \\eqref{eq:G4Wardid} satisfied by\n$G_4$ imply the following invariance conditions satisfied by $F_4$ under simultaneous\nleft multiplication of its four arguments by an element $h$ of the superconformal group,\n\\begin{eqnarray}\\label{eq:F4leftinv}\n F_4(h m(x_i)) & = & F_4\\Big( m(x_i^h) n(z(x_i,h)) k(t(x_i,h)) \\Big) \\\\[2mm]\n & = & \\Big( \\bigotimes_{i=1}^4 \\rho_i (k(t(x_i,h))^{-1}) \\Big) G_4(x_i^h) =\n G_4(x_i) = F_4(m(x_i)) \\ .\n\\end{eqnarray}\nBesides the Ward identity, we have used the definitions $(\\ref{matrix-identity})$\nand $(\\ref{eq:F4rightcov})$. Given this element $F_4$ and the Weyl inversion $w$ we\ncan construct a new object $F\\in\\mathcal{F}(\\mathfrak{g})\\otimes V_{1234}$ through\nthe prescription\n\\begin{equation}\\label{eq:FfromF4}\n F(g) := F_4 (e,w^{-1},g,gw^{-1}) \\ .\n\\end{equation}\nWhile this might look a bit bizarre at first, it is easy to verify that it\ndefines a $K$-spherical function $F$, i.e. that $F$ satisfies the covariance\nlaw \\eqref{eq:covariance}. 
Indeed, from the definition \\eqref{eq:FfromF4} of\n$F$, the left invariance condition \\eqref{eq:F4leftinv} and the right\ncovariance law in eq.\\ \\eqref{eq:F4rightcov} of $F_4$ we obtain\n\\begin{align*}\n F(k_l g k_r) &= F_4(e,w^{-1},k_l g k_r, k_l g k_r w^{-1}) =\n F_4(k_l^{-1},w^{-1} w k_l^{-1}w^{-1},g k_r, g w^{-1} w k_r w^{-1} )\\\\[2mm]\n &= \\Big(\\rho_1(k_l)\\otimes\\rho_2(w k_l w^{-1})\n \\otimes\\rho_3(k_r^{-1})\\otimes\\rho_4(wk_r^{-1}w^{-1})\\Big) F(g) \\ .\n\\end{align*}\nIn conclusion we have shown that a correlation function $G_4$ provides us with a\n$K$-spherical function $F$. It is actually not difficult to invert the map and\nrecover $G_4$ from $F$. Suppressing the last two arguments and their corresponding\nprefactors for simplicity, we have\n\\begin{eqnarray*}\nF_4(m(x_1),m(x_2)) & = & \\left(1 \\otimes \\rho_2(k(t_{21})^{-1})\\right)\nF_4\\left(m(x_1) n(y_{21}), m(x_2) k(t_{21})^{-1} n(z_{21})^{-1} \\right) \\\\[2mm]\n& = & \\left(1 \\otimes \\rho_2(k(t_{21})^{-1})\\right)\nF_4\\left(m(x_1) n(y_{21}), m(x_1) m(x_{21}) k(t_{21})^{-1} n(z_{21})^{-1} \\right) \\\\[2mm]\n& = & \\left(1 \\otimes \\rho_2(k(t_{21})^{-1})\\right)\nF_4\\left(m(x_1) n(y_{21}), m(x_1) w^{-1} m(y_{21}) \\right) \\\\[2mm]\n& = & \\left(1 \\otimes \\rho_2(k(t_{21})^{-1})\\right)\nF_4\\left(m(x_1) n(y_{21}), m(x_1) n(y_{21}) w^{-1} \\right).\n\\end{eqnarray*}\nIn the first step we used the covariance property \\eqref{eq:F4rightcov} of $F_4$ in the\nfirst two arguments to multiply the first argument with $n(y_{21})$ and the second with\n$k(t_{21})^{-1} n(z_{21})^{-1}$. Since the latter contains a factor $k$ it needed to\nbe compensated by a rotation in the second factor of the space of superpolarizations.\nThen we inserted the definition of $m(x_{21})$ and used that\n$$ m(x_{21}) = w^{-1} m(y_{21}) n(z_{21}) k(t_{21})\\ . 
$$\nThis factorization formula is essentially the definition of $y_{21}, z_{21}$ and $t_{21}$.\nFinally we moved the Weyl element $w^{-1}$ through $m$ using that $n = w^{-1} m w$. We can now\napply the same steps to the third and fourth argument to obtain\n\\begin{equation}\\label{eq:F4ggwggw}\nF_4(m(x_i)) = \\left(1 \\otimes \\rho_2(k(t_{21})^{-1}) \\otimes 1 \\otimes \\rho_4(k(t_{43})^{-1})\\right)\nF_4\\left(g_{12}(x_i),g_{12}(x_i) w^{-1}, g_{34}(x_i), g_{34}(x_i) w^{-1} \\right),\n\\end{equation}\nwhere we introduced the elements\n$$ g_{ij}= m(x_i) n(y_{ji})\\ . $$\nFinally, we can use the invariance property \\eqref{eq:F4leftinv} of\n$F_4$ for $h = g_{12}^{-1}$ to obtain\n$$ F_4(m(x_i)) = \\left(1 \\otimes \\rho_2(k(t_{21})^{-1})\n \\otimes 1 \\otimes \\rho_4(k(t_{43})^{-1})\\right)\n F_4\\left(e, w^{-1}, g(x_i), g(x_i) w^{-1}\\right) \\ .\n$$\nHere $g(x_i)$ is the element we introduced in eq.\\ \\eqref{eq:gxi}. Using our definition of the functional\n$F$ in eq.\\ \\eqref{eq:FfromF4} and the relation between $F_4$ and $G_4$ we have thereby established the\nlifting formula \\eqref{magic-formula}.\n\\medskip\n\nFrom the above derivation, one may deduce the following transformation properties of $g_{ij}$ and $k(t_{ji})$\nunder superconformal transformations\n\\begin{eqnarray}\ng_{ij}(x^h) & = & h \\, g_{ij}(x)\\, k(t(x_i,h))^{-1} \\ , \\label{eq:gijtrafoh}\\\\[2mm]\nk(t_{ji}^h) & = & k^{w}(t(x_i,h))\\, k(t_{ji})\\, k(t(x_j,h))^{-1}\\ , \\label{eq:ktrafoh}\n\\end{eqnarray}\nwhere $k^{w} = w k w^{-1}$. Indeed these are necessary for the right hand\nside of eq. \\eqref{eq:F4ggwggw} to satisfy the same Ward identities as the left hand side. A complete proof\nof the two transformation laws can be found in appendix A. 
These two formulas will play a significant\nrole in the computation of the crossing factor to which we turn next.\n\n\n\\section{Tensor Structures and Crossing Symmetry Equations}\n\n\nHaving lifted the spinning four-point function $G_4$ from superspace to the\nsuperconformal group through eq.\\ \\eqref{magic-formula} we can now employ\n(super)group theoretic constructions to study superconformal correlators.\nIn the first subsection we employ a supersymmetric version of the Cartan\nor KAK decomposition for superconformal groups of type I to factorize\nfour-point functions into the product of a tensor factor $\\Theta =\n\\Theta(x_i)$ and a function $\\Psi$ that depends on superconformal cross ratios\nonly.\\footnote{As explained in the introduction, our group theoretic factorization\n$G_4 = \\Theta \\Psi$ is reminiscent of the factorization $G_4 = \\Omega g$ used in\nmost of the CFT literature. The difference between the two factorizations can be\nquantified through the ratio $\\Theta \\Omega^{-1}$ which has a non-trivial\ndependence on cross ratios. As we shall see below, $\\Theta$ and $\\Psi$ are\nmore universal than the factors $\\Omega$ and $g$.} This part of our analysis\nextends constructions in \\cite{Buric:2019dfk} to the superconformal setting.\nWe can perform the factorization for different channels. The supercrossing\nfactor, i.e. the ratio of the corresponding tensor factors $\\Theta_s$ and\n$\\Theta_t$ for the $s$- and $t$-channel, is studied at the end of the first\nsubsection. There we establish its superconformal invariance and compute it\nfor bosonic conformal symmetries in any dimension $d$.\n\nAt this stage, all quantities depend on fermionic variables. In particular,\nthe function $\\Psi$ still depends on some number of nilpotent invariants.\nBy expanding all quantities in the Grassmann variables we construct the\ncrossing factor in the second subsection. 
This is then used to write the\ncrossing symmetry constraints on the independent coefficients of the\noperator product expansions in terms of functions of two bosonic cross\nratios only. As shown in \\cite{Buric:2019rms}, the latter may be expanded\ninto wave functions of some Calogero-Sutherland Hamiltonian. By collecting\nall the material we have put together through our discussion of the\nexample $\\mathfrak{g} = \\mathfrak{sl}(2|1)$ we can finally calculate the\ncrossing factor for $\\mathcal{N}=2$ superconformal field theories in\n$d=1$ dimension, see the third subsection.\n\n\n\\subsection{Cartan coordinates, tensor and crossing factors}\n\nWe will now construct the tensor structures, starting from the lifting formula \\eqref{magic-formula}.\nNote that eq.\\ \\eqref{magic-formula} treats each of the four insertion points differently\nand hence it breaks the permutation symmetry of correlators in a Euclidean theory. Different\npermutations $\\sigma$ of the four points are associated with different channels. We refer to\nthe channel that is associated with the identity permutation $\\sigma = \\sigma_s = \\textit{id}$\nas the $s$-channel. Another important case for us is the permutation $\\sigma = \\sigma_t = (24)$\nwhich we call the $t$-channel. In any case, given the choice of the channel $\\sigma$, we can\nextend the lifting formula \\eqref{magic-formula} to become\n\\begin{equation} \\label{eq:magic-formulasigma}\nG_4(x_i) = \\rho_{\\sigma(2)}(k(t_{\\sigma(2)\\sigma(1)})^{-1})\n\\rho_{\\sigma(4)}(k(t_{\\sigma(4)\\sigma(3)})^{-1}) F_\\sigma(g(x_{\\sigma(i)})) \\ .\n\\end{equation}\nHere, the factor $\\rho_{\\sigma(i)}$ acts on the $\\sigma(i)^\\textit{th}$ tensor factor in\nthe space of superpolarizations and it acts trivially on all other tensor factors.\n\nIn order to proceed we adopt a new coordinate system that we refer to as\nCartan coordinates. So far we have decomposed supergroup elements $g$ into factors\n$m$, $n$ and $k$. 
Now we consider a different decomposition in which supergroup\nelements $g$ are written as\n\\begin{equation} \\label{eq:sCartan}\ng = k_l \\eta_l a \\eta_r k_r \\ ,\n\\end{equation}\nwhere $k_{l}$ and $k_{r}$ are associated with the subgroup $K$ that is obtained through\nexponentiation of rotations, dilations and R-symmetry transformations. Similarly, the\nfactors $\\eta_l$ and $\\eta_r$ are associated with fermionic generators. More specifically,\neach of these factors contains half of the generators in the odd subspace of $\\mathfrak{g}$. In the\nfollowing we consider Lie superalgebras $\\mathfrak{g}$ of type I, for which the internal symmetry\ngroup contains a $U(1)$-factor which allows us to decompose the fermionic generators according\nto the sign of the $U(1)$ R-charge. It turns out that half of the supercharges $Q$ possess\npositive R-charge while the others possess negative R-charge and similarly for the super\nspecial conformal transformations $S$. Let us agree that $\\eta_l$ uses generators of\nnegative charge while $\\eta_r$ is built from generators with positive charge. The central\nfactor $a = a(u_1,u_2)$, finally, depends on two bosonic coordinates $u_1,u_2$ only and\nit is assumed to take the form\n\\begin{equation}\n\\label{adef}\na(u_1,u_2) = e^{\\frac{u_1+u_2}{4}(P_1+K_1) -i \\frac{u_1-u_2}{4}(P_2 - K_2)}\\ .\n\\end{equation}\nLet us note that a factorization of supergroup elements $g$ in the form \\eqref{eq:sCartan}\nis not unique. In fact, given any such factorization we can produce another factorization\nof the very same form by the transformation\n\\begin{equation} \\label{eq:gaugeB}\n\\left(k_l,\\eta_l;k_r,\\eta_r\\right) \\rightarrow\n\\left(k_l b, b^{-1} \\eta_l b ; b^{-1} k_r, b^{-1} \\eta_r b\\right),\n\\end{equation}\nwhere $b$ are elements associated with the subalgebra $\\mathfrak{so}(d-2)\\oplus \\mathfrak{u}_r\n\\subset \\mathfrak{k}$ and therefore commute with $a = a(u_1,u_2)$. 
At the same time, the elements\n$b^{-1} \\eta_{l\/r} b$ can still be written as exponentials of fermionic generators with\nnegative ($l$) and positive ($r$) $U(1)$ R-charge, respectively. Hence our gauge transformation\n\\eqref{eq:gaugeB} respects the Cartan decomposition. Elements $b$ form the stabilizer\ngroup $B = SO(d-2) \\times U_r$ of the Cartan decomposition \\eqref{eq:sCartan}. For later\nuse we introduce a projector $P$ by integrating $b$ over the entire stabilizer group $B$,\n\\begin{equation}\\label{eq:Pdef}\nP = \\frac{1}{\\text{Vol}\\ B} \\int_B d\\mu b \\ =\n\\frac{1}{\\text{Vol}\\ B} \\int_B d\\mu(\\beta) b(\\beta) \\ ,\n\\end{equation}\nwhere $\\mu$ is the Haar measure on $B$. For pedagogical reasons we have introduced some\ncoordinates $\\beta$ on $B$ so that the element $b$ could be written explicitly as a function\n$b = b (\\beta)$ on $B$ with values in $U(\\mathfrak{b})$. As we indicated, $P$ can be considered\nas an element in the universal enveloping algebra $U(\\mathfrak{g})$. More concretely, after\nevaluation in some representation we can also think of $P$ as a matrix. By construction, this\nmatrix has two important properties\n\\begin{equation} \\label{eq:Pprop}\nP^2 = P \\quad , \\quad b P = P\\ .\n\\end{equation}\nWe can verify the second equation very easily using the integral representation of $P$ and\nthe left invariance of the Haar measure. The first property then follows from $bP = b(\\beta)\nP$ by performing an additional integration over $B$ since $P$ is a constant on $B$.\n\nIn our analysis below we will apply the projector $P$ to a function $f(u_i,\\theta,\\bar \\theta)$\nthat takes values in the representation space $V_{1234}$ of $K$. The latter may be considered a\ncarrier space for a representation of $B$ by restriction from $K$ to its subgroup $B$. 
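The two properties of $P$ stated above are easy to see at work in a toy example. The following sketch is an illustration we add here, not tied to the specific stabilizer group $B = SO(d-2) \\times U_r$ of the main text: we take $B = U(1)$ acting on three states of definite charge, approximate the Haar average defining $P$ by a Riemann sum over the circle, and check that the result projects onto the charge-zero subspace with $P^2 = P$ and $bP = P$.

```python
import numpy as np

# Toy stabilizer group B = U(1) acting on states of charge n = (-1, 0, 2).
charges = np.array([-1, 0, 2])

def b(beta):
    """Group element b(beta) in this representation."""
    return np.diag(np.exp(1j * charges * beta))

# Haar average over B, approximated by a Riemann sum over the circle.
# For equally spaced points the sum of the phases vanishes exactly
# unless the charge is zero, so the approximation is essentially exact.
betas = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
P = sum(b(beta) for beta in betas) / len(betas)

assert np.allclose(P, np.diag([0, 1, 0]))  # projects onto charge-0 states
assert np.allclose(P @ P, P)               # P^2 = P
assert np.allclose(b(0.7) @ P, P)          # b P = P for any b in B
```

The same averaging argument underlies both identities in general: left invariance of the Haar measure gives $bP = P$, and averaging that relation once more over $B$ gives $P^2 = P$.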
The\nrepresentation of $P$ on such an object $f$ is denoted by $\\mathcal{P}$, i.e.\n\\begin{equation} \\label{eq:calP}\n\\mathcal{P} [f(u_i,\\theta,\\bar \\theta)]= \\frac{1}{\\text{Vol}\\ B} \\int_B d\\mu \\chi(b)\nf(u_i,\\theta^b,\\bar\\theta^b)\\ ,\n\\end{equation}\nwhere $\\theta^b$ and $\\bar \\theta^b$ denote the action of $b$ on the Grassmann coordinates\n$\\theta$ and $\\bar \\theta$ and $\\chi(b)$ is a shorthand for the action of $b$ on the finite\ndimensional vector space $V_{1234}$ of superpolarizations,\n\\begin{equation} \\label{eq:chi}\n\\chi(b)=\\rho_1(b)\\otimes \\rho_2(w b w^{-1})\\otimes \\rho_3(b)\\otimes \\rho_4(w b w^{-1})\\ .\n\\end{equation}\nIn practical computations it is convenient to make some specific choices for the\nCartan factors that remove the gauge freedom \\eqref{eq:gaugeB}. Such gauge fixing\nconditions are arbitrary and at the end of every calculation one has to check that\nthe result does not depend on them.\n\\medskip\n\nLet us now apply the Cartan factorization to the argument $g(x_{\\sigma(i)})$ of the\nfunctional $F_\\sigma$ in eq.\\ \\eqref{eq:magic-formulasigma},\n\\begin{equation}\\label{Cartan-factors}\n g(x_{\\sigma(i)}) = k_{\\sigma,l}(x_i) \\eta_{\\sigma,l}(x_i)a_\\sigma(x_i)\n \\eta_{\\sigma,r}(x_i) k_{\\sigma,r}(x_i) \\ .\n\\end{equation}\nThe formula \\eqref{eq:magic-formulasigma} and covariance properties of $F_\\sigma$ give\n\\begin{align}\nG_4(x_i) & = \\rho_{\\sigma(2)}(k(t_{\\sigma(2)\\sigma(1)})^{-1})\n\\rho_{\\sigma(4)}(k(t_{\\sigma(4)\\sigma(3)})^{-1}) F_\\sigma(g(x_{\\sigma(i)})) \\nonumber \\\\[2mm]\n& \\hspace*{-4pt} = \\rho_{\\sigma(2)}\\left(k(t_{\\sigma(2)\\sigma(1)})\\right)^{-1} \\rho_{\\sigma(4)}\n\\left(k(t_{\\sigma(4)\\sigma(3)})\\right)^{-1} F_\\sigma(k_{\\sigma,l}\\eta_{\\sigma,l}\na_\\sigma \\eta_{\\sigma,r}k_{\\sigma,r}) \\\\[2mm]\n& \\hspace*{-4pt} = 
\\rho_{\\sigma(1)}(k_{\\sigma,l})\\rho_{\\sigma(2)}\\left(k(t_{\\sigma(2)\\sigma(1)})^{-1}\nk_{\\sigma,l}^{w}\\right)\\rho_{\\sigma(3)}(k_{\\sigma,r}^{-1})\\rho_{\\sigma(4)}\n\\left(k(t_{\\sigma(4)\\sigma(3)})^{-1} (k^{-1}_{\\sigma,r})^{w}\\right)\nF_\\sigma(\\eta_{\\sigma,l}a_\\sigma \\eta_{\\sigma,r}) . \\nonumber\n\\end{align}\nFor simplicity, we dropped the dependence of Cartan factors on the insertion points, i.e.\\\nfor example $k_{\\sigma,l} = k_{\\sigma,l}(x_i) = k_l(x_{\\sigma(i)})$. We will\ndiscuss the concrete functional dependence on the insertion points a bit later.\n\nLet us spell out the previous formula for the $s$- and $t$-channel. In the\n$s$-channel one obtains\n\\begin{equation} \\label{eq:G4schannel}\nG_4(x_i) = \\rho_{1} (k_{s,l}) \\rho_{2}(k(t_{21})^{-1}\nk^{w}_{s,l}) \\rho_{3}(k_{s,r}^{-1})\n\\rho_{4}(k(t_{43})^{-1} (k^{w}_{s,r})^{-1})\n\\mathcal{P}_s F_s(\\eta_{s,l}a_s \\eta_{s,r})\\ ,\n\\end{equation}\nwhile the $t$-channel gives\n\\begin{equation} \\label{eq:G4tchannel}\nG_4(x_i) = \\rho_{1} (k_{t,l}) \\rho_{4}(k(t_{41})^{-1} k^{w}_{t,l}) \\rho_{3}(k_{t,r}^{-1})\n\\rho_{2}(k(t_{23})^{-1} (k^{w}_{t,r})^{-1}) \\mathcal{P}_t F_t(\\eta_{t,l}a_t \\eta_{t,r})\\ .\n\\end{equation}\nHere we introduced the projector $\\mathcal{P}$ that was defined in eq.\\ \\eqref{eq:calP} explicitly\nto stress that $F(\\eta_l a\\eta_r)$ takes values in the space of $B$-invariants. Roughly\nspeaking, the two factors in front of $F_{s}$ and $F_t$ are the $s$- and $t$-channel tensor\nstructures.\n\nThe ratio of these $s$- and $t$-channel tensor structures is referred to as the supercrossing\nfactor and we denote it by $\\mathcal{M}$. 
As we can read off from the previous two formulas\nthe supercrossing factor takes the form\n\\begin{equation} \\label{eq:crossingmatdef}\n\\mathcal{M}_{st}(x_i) = \\mathcal{P}_t \\, \\bigotimes_{i=1}^4 \\rho_i(\\kappa_i) \\, \\mathcal{P}_s\n\\ , \\end{equation}\nwhere the four elements $\\kappa_i$ are given by\n \\begin{eqnarray}\n\\kappa_1 = k_{t,l}^{-1}k_{s,l} \\quad & , & \\quad\n\\kappa_{2} = k^{w}_{t,r} k(t_{23})k(t_{21})^{-1} k^{w}_{s,l} \\\\[2mm]\n\\kappa_{3} = k_{t,r}k_{s,r}^{-1} \\quad & , & \\quad\n\\kappa_{4} = (k^{w}_{t,l})^{-1} k(t_{41})k(t_{43})^{-1} (k^{w}_{s,r})^{-1} \\ .\n\\end{eqnarray}\nIt is important to stress that the two projectors in eq.\\ \\eqref{eq:crossingmatdef} make the supercrossing\nfactor independent of any gauge fixing conditions for our gauge symmetry \\eqref{eq:gaugeB}. In fact\none can easily check using eq. \\eqref{eq:Pprop} that any gauge transformation with some element $b$\nis absorbed by the projectors.\n\nOur main goal is to compute the matrix $\\mathcal{M}$ explicitly. Note that it depends on the\ninsertion points $x_i$ in superspace through the dependence of the factors $k(t_{ij}) = k(t(x_{ij}))$,\nthat were defined in eq. \\eqref{eq:factorizationij}, as well as through the factors $k_{l,r}$\nin the Cartan decomposition \\eqref{eq:sCartan} of the supergroup elements $g_{s,t}(x_i)$. In\norder to compute the matrix $\\mathcal{M}_{st}$ we first show that it is invariant under superconformal\ntransformations, i.e. $\\mathcal{M}_{st}(x_i^h) = \\mathcal{M}_{st}(x_i)$. This then implies that it is a\nfunction of cross ratios only and so it can be computed after moving the insertion points into\nspecial positions.\n\nTo see that $\\mathcal{M}_{st}$ is a conformal invariant we must study the dependence of the four\ntensor components one after another. We have already stated the transformation behavior of the\nfactors $k(t_{ij})$ at the end of the previous section, see eq.\\ \\eqref{eq:ktrafoh}. 
What we need\nto study now is the transformation behavior of the factors $k_{l,r}$ in the Cartan decomposition\n\\eqref{eq:sCartan}. To this end let us first note that, according to eq. \\eqref{eq:gijtrafoh},\nthe supergroup elements $g_\\sigma(x_i)$ transform as\n\\begin{equation}\ng_\\sigma(x_i^h) = k(t(x_{\\sigma(1)},h))\\, g_\\sigma(x_i)\\, k(t(x_{\\sigma(3)},h))^{-1}\\ .\n\\end{equation}\nBecause of the gauge freedom of the Cartan decomposition which we described in eq.\\ \\eqref{eq:gaugeB},\nknowing the behavior of $g_\\sigma(x_i)$ under conformal transformations does not allow us to uniquely\ndetermine the transformation law of the factors, but we can conclude that\n\\begin{equation}\nk_{\\sigma,l}(x^h_i) = k(t(x_{\\sigma(1)},h)) k_{\\sigma,l}(x_i) b_\\sigma(x_i,h) \\quad , \\quad\nk_{\\sigma,r}(x^h_i) = b^{-1}_\\sigma(x_i,h) k_{\\sigma,r}(x_i) k(t(x_{\\sigma(3)},h))^{-1} \\\n\\end{equation}\nfor some factor $b$ that may depend on the channel, the superspace insertion points $x_i$\nand the superconformal transformation $h$, yet must be the same for the left and right\nfactors $k_l$ and $k_r$. For the case of $s$- and $t$-channels, these become\n\\begin{equation}\nk_{s\/t,l}(x^h_i) = k(t(x_{1},h)) k_{s\/t,l} b_{s\/t}(x_i,h)\\quad , \\quad\nk_{s\/t,r}(x^h_i) = b_{s\/t}^{-1}(x_i,h) k_{s\/t,r} k(t(x_{3},h))^{-1} \\ .\n\\end{equation}\nWith these transformation laws it is now easy to verify that all four tensor components\n$\\kappa_i$ of the crossing factor $\\mathcal{M}_{st}$ are indeed invariant under superconformal\ntransformations, up to gauge transformations, i.e.\n\\begin{equation}\n\\kappa_i(x^h_k) = b^{-1}_t(x_k,h)\\, \\kappa_i(x_k)\\, b_s(x_k,h) \\quad , \\quad\n\\kappa_j(x^h_k) = w b^{-1}_t(x_k,h) w^{-1}\\, \\kappa_j(x_k)\\, w b_s(x_k,h) w^{-1}\\ ,\n\\end{equation}\nwhere $i=1,3$ and $j=2,4$. To get the last two relations one employs the formula for\n$k(t^h_{ji})$ given in eq.\\ \\eqref{eq:ktrafoh}. 
Using the definition \\eqref{eq:calP}\nof the projectors $\\mathcal{P}_s =\\mathcal{P}_t$ and the property \\eqref{eq:Pprop} of\n$P \\in U(\\mathfrak{b})$ we see that $\\mathcal{M}_{st}(x_i)$ is indeed invariant under\nconformal transformations.\n\\medskip\n\nThe analysis we have performed in this section holds for conformal and superconformal\nsymmetries alike. It is actually quite instructive to evaluate the final formula\n\\eqref{eq:crossingmatdef} for the crossing factor for spinning correlators in bosonic\nconformal field theories. In this case it is in fact rather easy to obtain $\\mathcal{M}_{st}$\nsince we can effectively reduce the problem to one on the 2-dimensional conformal group. We\nwill deviate from previous notations and use $G$ to denote the bosonic conformal group\n$\\textit{SO}(d+1,1)$ and assume $d>2$.\n\nSince the crossing factor is conformally invariant, in computing $\\mathcal{M}(u,v)$ we may assume\nthat $x_i$ are any points that give the correct cross ratios $u$ and $v$. In particular,\nall points can be assumed to lie in the 2-dimensional plane $P$ that is spanned by the\nfirst two unit vectors $e_1,e_2$ of the $d$-dimensional space $\\mathbb{R}^d$. In this\ncase, the element $g_\\sigma(x_i)$ is seen to belong to the conformal group of the plane,\ni.e.\\ $g_\\sigma(x_i) \\in G_P=SO(3,1)\\subset G$. Within this group $g_\\sigma(x_i)$ admits\na unique Cartan decomposition, which can also serve as its Cartan decomposition in $G$,\nbearing in mind that the torus $A\\subset G_P \\subset G$ of the Cartan decomposition of\n$G$ is actually a subgroup of $G_P$. Put another way, the Cartan decomposition of\n$G_P$ defines a particular gauge fixing for Cartan factors of $g(x_i)$. Note that all\nrelevant rotations are generated by the element $M_{12}$, which commutes with the Weyl\ninversion $w$ when $d>2$. 
Hence we conclude that the factors $\\kappa_i$ that arise\nin the transition from $s$- to $t$-channel must be of the form\n\\begin{equation}\\label{form-of-kappas}\n \\kappa_i = e^{\\gamma_i D} e^{\\varphi_i M_{12}} \\ ,\n\\end{equation}\nfor some functions $\\gamma_i$ and $\\varphi_i$ that depend on the insertion points $x_i$\nof the four fields through their two cross ratios. Having determined the general form of\n$\\kappa_i$, we can find the undetermined coefficients by a direct calculation. Since we\ncan perform the calculation in any conformal frame we set for convenience,\n\\begin{align}\\label{point-configurations-1}\n & x_1 = \\frac{\\cosh^2\\frac{u_1}{2}+\\cosh^2\\frac{u_2}{2}}{2\\cosh^2\\frac{u_1}{2}\\cosh^2\\frac{u_2}{2}} e_1 - i\n \\frac{\\cosh^2\\frac{u_1}{2}-\\cosh^2\\frac{u_2}{2}}{2\\cosh^2\\frac{u_1}{2}\n \\cosh^2\\frac{u_2}{2}} e_2\\ ,\\ x_2 = 0\\ ,\\ x_3 = e_1\\ ,\\ x_4 = \\infty e_1\\ .\n\\end{align}\nThen it follows\n\\begin{equation}\n \\kappa_1 = \\kappa_3 = e^{\\gamma D + \\alpha M_{12}}\\, , \\quad\n \\kappa_2 = \\kappa_4 = e^{\\gamma D - \\alpha M_{12}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\n e^{4\\gamma} = \\frac{x_{12}^2 x_{34}^2}{x_{14}^2 x_{23}^2}\\, ,\\quad\n e^{2i\\alpha} = \\frac{\\cosh{\\frac{u_1}{2}}}{\\cosh{\\frac{u_2}{2}}}\\ .\n\\end{equation}\nTo complete this description let us also quote from \\cite{Buric:2019dfk} that\n\\begin{eqnarray}\n\\label{eq:uz}\ne^{u_i} & = & 1 - \\frac{2}{z_i}\\left(1+\\sqrt{1-z_i}\\right)\\ , \\\\[2mm]\n\\textit{where} \\quad u = z_1 z_2 & = & \\frac{x_{12}^2x_{34}^2}{x_{13}^2x_{24}^2}\n\\quad , \\quad v = (1-z_1)(1-z_2) = \\frac{x_{14}^2x_{23}^2}{x_{13}^2 x_{24}^2}\\ .\n\\end{eqnarray}\nLet us note that $\\mathcal{M}$ was originally defined using representations of $K = SO(1,1)\n\\times SO(d)$, but is computed using only representation theory of $SO(1,1)\\times SO(2)$.\n\\medskip\n\n\\noindent\n{\\bf Example:} To make the last point manifest, let us give some more details for\nconformal 
theories in $d=3$ dimensions. Let us decompose the factors $k_l = d_l r_l$\nand $k_r = d_r r_r$ into dilations $d_{l\/r}$ and rotations $r_{l\/r}$. Following\n\\cite{Schomerus:2016epl,Schomerus:2017eny,Buric:2019dfk} we parametrize the elements\n$r$ of the 3-dimensional rotation group through Euler angles,\n\\begin{equation}\n r(\\phi,\\theta,\\psi) = e^{-\\phi M_{12}} e^{-\\theta M_{23}} e^{-\\psi M_{12}} \\ .\n\\end{equation}\nWith this choice of coordinates, the elements $\\kappa_i$ have $\\phi =\\pm\\alpha$ and\n$\\theta = \\psi = 0$. Next let us recall that matrix elements of the spin-$j$\nrepresentation of $SU(2)$ read\n\\begin{equation}\nt^j_{m n} (\\phi,\\theta,\\psi) = \\langle j,m| g(\\phi,\\theta,\\psi) | j,n\\rangle =\ne^{-i(m\\phi+n\\psi)} d^j_{m n}(\\theta) \\ .\n\\end{equation}\nHere, the function $d^j_{m n}$ is known as Wigner's $d$-function. It is expressed\nin terms of Jacobi polynomials $P^{(\\alpha,\\beta)}_n$ as\n\\begin{equation}\nd^j_{m n}(\\theta) = i^{m-n} \\sqrt{\\frac{(j+m)!(j-m)!}{(j+n)!(j-n)!}}\n\\Big(\\sin\\frac{\\theta}{2}\\Big)^{m-n} \\Big(\\cos\\frac{\\theta}{2}\\Big)^{m+n}P^{(m-n,m+n)}_{j-m}(\\cos\\theta) \\ .\n\\end{equation}\nFor $\\theta=0$, the only non-zero matrix elements are those with $m=n$. Furthermore\n\\begin{equation}\nt^j_{n n}(\\pm\\alpha,0,0) = e^{\\mp in\\alpha} P^{(0,2n)}_{j-n}(1) =\ne^{\\mp in\\alpha} = \\left(\\frac{\\cosh\\frac{u_1}{2}}{\\cosh\\frac{u_2}{2}}\\right)^{\\mp\\frac{n}{2}} \\ .\n\\end{equation}\nThe stabilizer group $B = SO(d-2)$ for a bosonic conformal field theory in $d=3$\ndimensions is trivial, and so is the projector $P$. 
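The evaluation of $t^j_{nn}(\\pm\\alpha,0,0)$ used that Jacobi polynomials satisfy $P^{(\\alpha,\\beta)}_n(1) = \\binom{n+\\alpha}{n}$, so that $P^{(0,2n)}_{j-n}(1) = 1$, and that $e^{2i\\alpha}$ equals the ratio of hyperbolic cosines. Both steps are easy to check numerically; the sketch below is our addition and uses SciPy's Jacobi polynomials.

```python
import numpy as np
from scipy.special import eval_jacobi

# P^{(0, 2n)}_{j-n}(1) = 1 for all 0 <= n <= j, so the diagonal matrix
# elements t^j_{nn}(alpha, 0, 0) reduce to the pure phases e^{-i n alpha}.
for j in range(6):
    for n in range(j + 1):
        assert np.isclose(eval_jacobi(j - n, 0, 2 * n, 1.0), 1.0)

# Consistency of the phase with e^{2 i alpha} = cosh(u1/2)/cosh(u2/2):
# for this (imaginary) alpha, e^{-i n alpha} equals the ratio to the power -n/2.
u1, u2, n = 0.9, 1.6, 3
ratio = np.cosh(u1 / 2) / np.cosh(u2 / 2)
alpha = np.log(ratio) / 2j
assert np.isclose(np.exp(-1j * n * alpha), ratio ** (-n / 2))
```

Note that $\\alpha$ comes out purely imaginary for real $u_1 \\neq u_2$, so the "phases" are in fact real scaling factors, exactly as in the closed-form result for $t^j_{nn}$ above.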
Putting all this together we conclude\nthat the crossing factor reads\n\\begin{equation}\n (\\mathcal{M}_{st})^{ijkl}_{pqrs} = \\left(\\frac{u}{v}\\right)^{-\\frac14\\sum\\Delta_i}\n \\left(\\frac{\\cosh\\frac{u_1}{2}}{\\cosh\\frac{u_2}{2}}\\right)^{\\frac12(i+k-j-l)}\n \\delta^i_p \\delta^j_q \\delta^k_r \\delta^l_s\\ ,\n\\end{equation}\nwhere $u,v$ are the usual $s$-channel cross ratios and $u_i = u_i(u,v)$ are functions\nthereof, see eq.\\ \\eqref{eq:uz}. The first factor in this result for the spinning crossing\nfactor is well known from scalar correlators. The corrections it receives for spinning\ncorrelators are diagonal in the space of polarizations but depend on the eigenvalues\nof the generator $J_z$ for rotations around one particular direction $e_z$.\n\n\\subsection{Blocks and crossing symmetry equation}\n\nIn the case of bosonic conformal theories, the crossing factor we have just computed\nalong with spinning conformal blocks is all it takes to write down crossing symmetry\nconstraints. For superconformal symmetries of type I, some more work is needed in order\nto spell out these equations. We describe the additional\nelements in this subsection before we illustrate the entire formalism with the\nexample of $\\mathcal{N}=2$ superconformal theories in $d=1$ dimension in the\nnext subsection. Along the way we also review the construction of conformal blocks from\n\\cite{Buric:2019rms}. In order not to clutter the presentation too much, the first\npart of our discussion focuses on the $s$-channel. 
Other channels can be dealt with\nsimilarly.\n\nIn Subsection 4.1 we have shown that the four-point function of primary fields in\narbitrary representations of a conformal superalgebra of type I can be written as\n\\begin{equation} \\label{eq:GThetaPsi}\nG_4(x_i) = \\Theta_{s}(x_i) \\Psi_s(u_i;\\theta,\\bar \\theta)\\ ,\n\\end{equation}\nwhere the supertensor factor $\\Theta_{s}(x_i)$ depends on the insertion points of\nthe fields through\n\\begin{equation} \\label{eq:defOmegas}\n\\Theta_{s} (x_i) = \\omega^{-1\/2}(u_1,u_2)\n\\rho_{1} (k_{s,l}) \\rho_{2}(k(t_{21})^{-1}k^{w}_{s,l})\n\\rho_{3}(k_{s,r}^{-1}) \\rho_{4}(k(t_{43})^{-1} (k^{w}_{s,r})^{-1}) P_s\\ ,\n\\end{equation}\nand $\\Psi_s$ is a function of the cross ratios, including all nilpotent\/fermionic superconformal\ninvariants, that is given by\n\\begin{equation} \\label{eq:deffs}\n\\Psi_s(u_i;\\theta,\\bar \\theta) = \\omega^{1\/2}(u_1,u_2) F_s(\\eta_{s,l}a_s \\eta_{s,r})\\ .\n\\end{equation}\nIn splitting eq.\\ \\eqref{eq:G4schannel} into a product of a supertensor factor and a\nfunction $\\Psi_s$ of the cross ratios we have included a scalar factor\n\\begin{equation}\n\\omega(u_1,u_2) = 4(-1)^{2-d}(\\sinh\\frac{u_1}{2} \\sinh\\frac{u_2}{2})^{2d-2}\\coth\\frac{u_1}{2}\n\\coth\\frac{u_2}{2} |\\sinh^{-2}\\frac{u_1}{2}-\\sinh^{-2}\\frac{u_2}{2}|^{d-2}\\ ,\n\\end{equation}\nwhich depends on the bosonic cross ratios $u_1,u_2$ only and may in fact be interpreted as\nthe volume of \\(K\\times K\\) bosonic orbits on the conformal group, see \\cite{Schomerus:2017eny}\nfor details. The factor $\\omega$ is conventional, but has some advantages that will be pointed\nout below.\n\nLet us now further analyse the factor $F_s$ in formula \\eqref{eq:deffs} by expanding it in\nthe fermionic variables. 
The Grassmann variables $\\theta$ that multiply the odd generators\nof negative $U(1)$ R-charge in the exponent of $\\eta_l$ generate an algebra $\\Lambda_\\theta$\nwhile those variables $\\bar\\theta$ that multiply the positively charged odd generators in the\nexponent of $\\eta_r$ give rise to a Grassmann algebra $\\Lambda_{\\bar \\theta}$. Before the\nexpansion, the wave functions $\\Psi_s(u_1,u_2;\\theta,\\bar\\theta)$ are vector valued, with\ntwo copies of the bosonic subgroup $K$ acting in the image of $F$.\nThe first copy, which we refer to as $K_l$, acts on $V_{(12)} = V_1 \\otimes V'_2$. Except\nfor the conjugation with the Weyl inversion in the second tensor component, one may think\nof $V_{(12)}$ as the space of superpolarizations for the first two fields. Similarly, the\nsecond copy $K_r$ acts on $V_{(34)} = V_3 \\otimes V'_4$. When we perform the fermionic\nexpansion, the coefficients sit in the representation spaces\n\\begin{equation}\nV_l = V_{(12)} \\otimes \\Lambda_\\theta \\ , \\quad V_r = V_{(34)} \\otimes\n\\Lambda_{\\bar \\theta} \\\n\\end{equation}\nof $K_l$ and $K_r$. Note that the bosonic subgroup $K$ acts on the two Grassmann algebras\nso that indeed both spaces form a representation of $K$. We also refer to the spaces\n$V_l$ and $V_r$ as spaces of polarizations, as opposed to $V_{(12)}$ and $V_{(34)}$ which\nwe have called the spaces of superpolarizations.\n\nAs we explained before, the covariance properties of $F$ imply that $\\Psi$ takes\nvalues in the subspace of $B$-invariants, i.e. in the space\n$$ \\mathcal{T} = \\left(V_{l} \\otimes V_r \\right)^B \\ , $$\nwhich we also refer to as the space of tensor structures. One may think of its\nelements as $B$-invariant elements in the space of functions of the Grassmann variables\n$\\theta$ and $\\bar \\theta$ that take values in the space of superpolarizations. 
Let us fix\nsome basis of elements $\\omega^I$ in $\\mathcal{T}$ and denote the dual basis by $\\hat{\\omega}_I$.\nWe can collect these elements into two objects\n\\begin{equation} \\label{eq:vdef}\nv_s(x_i) = (\\omega^1(x_i), \\dots, \\omega^T(x_i)) \\, , \\quad \\ \\hat v_s(x_i) = (\\hat \\omega_1(x_i),\n\\dots, \\hat \\omega_T(x_i)) \\ ,\n\\end{equation}\nwhere $T = \\dim \\mathcal{T}$ is the number of tensor structures. One may think of\n$v_s$ as a rectangular matrix from the space $\\mathcal{T}$ of tensor structures to the space\n$V_{(12)} \\otimes V_{(34)}$ with matrix elements in the Grassmann algebra $\\Lambda_\\theta \\otimes\n\\Lambda_{\\bar\\theta} = \\Lambda \\mathfrak{g}_{\\bar 1}$. Through the Cartan decomposition\nof $g(x_i)$, the Grassmann variables $\\theta$ and $\\bar \\theta$ are concrete functions\non the superspace $\\mathcal{M}^\\otimes_4$. We have displayed this dependence on the\nsupercoordinates $x_i$ explicitly.\n\nThe coefficients in the fermionic expansion of $\\Psi$ are functions $\\psi_I$ of the two\nbosonic cross ratios $u_1,u_2$ that take values in the space $V_{(12)} \\otimes V_{(34)}$\nof superpolarizations. We can write this in the form\n\\begin{equation} \\label{Psivpsi}\n\\Psi(u_1,u_2;\\theta,\\bar \\theta) = v_s(x_i) \\cdot \\psi(u_1,u_2) = v_s(x_i) P_s\n\\psi(u_1,u_2) \\ .\n\\end{equation}\nPutting eqs.\\ \\eqref{eq:GThetaPsi} and \\eqref{Psivpsi} together we can now write the\nfour-point function as\n\\begin{equation}\nG_4(x_i) = \\Theta_s(x_i) v_s(x_i) \\cdot \\psi(u_1,u_2) \\ .\n\\end{equation}\nWe want to expand $G_4$ into superblocks, i.e. eigenfunctions of the super Casimir\noperator. The latter turns out to take a particularly simple form when evaluated on\n$\\psi(u_1,u_2)$. 
As we have shown in \\cite{Buric:2019rms}, one finds that\n\\begin{equation} \\label{eq:sCasimir}\n\\textit{Cas}_s G_4(x_i) = \\Theta_s(x_i) v_s(x_i)\\cdot (H^{V_l,V_r}_0 + A) \\psi(u_1,u_2)\\ .\n\\end{equation}\nHere $H_0$ is the spinning Calogero-Sutherland Hamiltonian for bosonic blocks, i.e.\n$H_0$ takes the form\n\\begin{equation}\nH_0 = - \\frac{\\partial^2}{\\partial u_1^2} - \\frac{\\partial^2} {\\partial u_2^2} +\nV(u_1,u_2)\\ ,\n\\end{equation}\nwhere $V(u_1,u_2)$ is a potential that takes values in the space of $T\\times T$\nmatrices. The precise form of the potential depends on the pair $(V_l,V_r)$ of\nrepresentations of $K$, but it is the same one obtains for the spinning Casimir\noperator of the bosonic conformal algebra in Calogero-Sutherland gauge. In\n$d=3,4$ dimensions such matrix potentials were worked out explicitly in\n\\cite{Schomerus:2016epl,Schomerus:2017eny}. The second term $A$ is a matrix\nvalued potential that was shown to be nilpotent and the precise form of these\nterms is remarkably simple, see \\cite{Buric:2019rms}.\n\nThe eigenfunctions of the Hamiltonian $H_0 = H_0^{V_l,V_r}$ we have just described\nwill be denoted by $\\psi_0(\\lambda_i;u_i) = \\psi^{V_l,V_r}_0(\\lambda_i,u_i)$. Here\n$\\lambda_i$ denote the eigenvalues of the (second and higher order) Hamiltonians\nwhich are directly related to the (spin and weight) quantum numbers of the intermediate\nfields in the conformal field theory. Functions $\\psi_0(u_i)$\nare well studied, and explicit expressions exist at least in dimension $d \\leq 4$, see in\nparticular \\cite{Echeverri:2016dun}. Eigenfunctions of the full Hamiltonian $\\mathcal{H}$\nwill be denoted by $\\psi(\\lambda_i;u_i)$. Nilpotency of $A$ guarantees that quantum mechanical\nperturbation theory truncates at some order $N-1\\leq\\text{dim}\\mathfrak{g}_+$, so that we\ncan obtain exact results by summing just a few orders of the perturbative expansion. 
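Schematically, the truncation works as follows: writing the full Hamiltonian as $H_0 + A$, the perturbative corrections to an eigenfunction $\\psi_0$ of $H_0$ with eigenvalue $\\varepsilon$ contain one additional power of $A$ at each order,\n\\begin{equation}\n\\psi \\ \\sim \\ \\sum_{k=0}^{N-1} \\left[(H_0-\\varepsilon)^{-1} A\\right]^k \\psi_0\\ ,\n\\end{equation}\nwhere the symbol $\\sim$ indicates that we have suppressed the projectors of standard quantum mechanical perturbation theory; all terms containing $N$ or more powers of the nilpotent perturbation $A$ vanish identically. 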
It\nturns out that, at any order of the expansion, the perturbation may be evaluated explicitly\nwith some input from the\nrepresentation theory of $SO(d+2)$. It results in expressions for superconformal blocks as\nfinite linear combinations of spinning bosonic blocks. In this sense our results provide\na complete solution of the Casimir equations for type I superconformal symmetry and in\nparticular for 4-dimensional conformal field theories with any number $\\mathcal{N}$ of\nsupersymmetries.\n\nHere we have described the Casimir equation and its solution for the $s$-channel but it\nis clear that similar discussions apply to all channels. Reinstating the subscripts $s$\nand $t$ we end up with blocks $\\psi_s(\\lambda_i;u^s_i)$ and $\\psi_t (\\lambda_i;u^t_i)$.\nThe eigenvalues $\\lambda_i = \\lambda_i(\\mathcal{O})$ are related to the quantum numbers\n(weight, spin, $R$ charges) of the intermediate supermultiplets $\\mathcal{O}$. Let us\nalso stress that these blocks $\\psi$ are multi-component objects with $T$ components\n$\\psi^I, I=1, \\dots, T$ labeled by a basis of four-point tensor structures. For each\neigenvalue one can actually find $T$ independent solutions which are usually labeled\nby pairs $(a,b)=ab$ of three-point tensor structures for the relevant operator\nproducts. Consequently, the blocks $\\psi = (\\psi^{I,ab})$ carry two sets of labels,\nan index $I$ running over four-point tensor structures and an index $ab$ that enumerates\npairs of three-point structures. The arguments $u_i^{s\/t}$ of the blocks are functions\non superspace that are invariant under superconformal transformations. 
They are related\nby an exchange of the labels $2$ and $4$ and we can express one in terms of the other.\nEquating the $s$- and $t$-channel expansions of the four-point function $G_4$ one finds\nthat\n\\begin{equation}\\label{eq:crossing}\n\\sum_I\\sum_{\\mathcal{O}} \\lambda_{12\\mathcal{O},a}\n \\lambda_{34\\mathcal{O},b} M^{JI}_{st}(u^s_i)\n\\psi^{I,ab}_s(\\lambda_i(\\mathcal{O});u^s_i)\n=\n\\sum_{\\mathcal{O}} \\lambda_{14\\mathcal{O},a} \\lambda_{23\\mathcal{O},b}\n\\psi^{J,ab}_t(\\lambda_i(\\mathcal{O});u^t_i)\\ ,\n\\end{equation}\nwhere the indices $a,b$ and $I,J$ enumerate three- and four-point tensor structures,\nrespectively, and the crossing factor $M_{st}= M_{st}(u_i)$ is given by\n\\begin{equation} \\label{eq:crossingmatrix}\nM_{st} = \\hat v_t(x_i)^t \\sqrt{\\frac{\\omega(u^t_i)}{\\omega(u^s_i)}}\n\\mathcal{M}_{st}(u_i,\\theta_s,\\bar \\theta_s)\nv_s(x_i)\\ .\n\\end{equation}\nNote that the matrix elements of $M_{st}$ depend on the bosonic cross ratios\nonly. The summation in eq.\\ \\eqref{eq:crossing} runs over all superprimary\nfields $\\mathcal{O}$ in the theory. Here we have expressed the crossing factor\n$\\mathcal{M}_{st}(x_i)$ which we defined in eq.\\ \\eqref{eq:crossingmatdef}\nin terms of the $s$-channel invariants and we think of the $t$-channel\ninvariants on the right hand side as functions of the $s$-channel ones.\nPractically, it is easier to relate the $t$- and $s$-channel invariants\n$u^t_i$ and $u^s_i$ to the usual bosonic cross ratios and expand on both \nsides in the nilpotent invariants. \n\nThe additional factors $v_s$ and $v_t$ express the supercrossing factor\n$\\mathcal{M}$ in terms of its action in the space $\\mathcal{T}$ of tensor\nstructures. 
Let us note that all three factors that appear in eq.\\\n\\eqref{eq:crossingmatrix} are well defined and straightforward to\ncompute explicitly, even though the computations can be a bit cumbersome.\nThe only additional information one then needs in order to evaluate the\ncrossing symmetry constraint \\eqref{eq:crossing} is the relation\nbetween the bosonic cross ratios $u^s_i$ and $u^t_i$ in the two\ndifferent channels. These are not difficult to determine from the\nCartan decomposition. Let us stress, however, that the relations\nbetween $u^s_i$ and $u^t_i$ involve fermionic invariants so that\nthere is an additional fermionic Taylor expansion to be performed\non the right hand side of eq.\\ \\eqref{eq:crossing} when we\nexpress $u^t_i$ in terms of $u^s_i$. We will illustrate all this\nnow in the case of $\\mathfrak{g} = \\mathfrak{sl}(2|1)$.\n\n\n\n\\subsection{Illustration for 1-dimensional superconformal algebra}\n\nWe can now put all the above together and compute the crossing factor between the $s$- and the $t$-\nchannel for the $\\mathcal{N} = 2$ superconformal algebra in one dimension. To this end, the first step\nis to find the group elements $g_s(x_i)$ and $g_t(x_i)$ which appear in the argument of the covariant\nfunction $F$. In turn, this requires the supergroup elements $m(x_i)$, see eq.\\ \\eqref{eq:m-1d}, and\nthe Weyl element \\eqref{eq:wmatrix}. Then one computes the products $m(x_j) m(x_i)$ and $w\nm(x)$ to construct the variables $x_{ij}$ and the action of the Weyl inversion on superspace. For the\nexample at hand this was carried out in Subsection 3.2.\n\nThese calculations provide all the input that is needed to determine $g_s(x_i)$. The supergroup elements\n$g_t(x_i)$ for the $t$-channel are obtained by exchanging the labels $2$ and $4$. At this point, $g_s$\nand $g_t$ depend on four sets of superspace variables, i.e. they are $3\\times3$ matrices whose elements\nare functions in all the $u_i,\\theta_i,\\bar\\theta_i$ for $i=1, \\dots, 4$. 
Since the crossing factor is\na superconformal invariant, we can apply superconformal transformations to gauge fix the coordinates of\nthe four insertion points. The following choice turns out to be convenient\n\\begin{equation} \\label{eq:xigauge}\n x_1 = (x,\\theta_1,\\bar\\theta_1),\\ x_2 = (0,0,0),\\ x_3 =\n (1,\\theta_3,\\bar\\theta_3),\\ x_4 = (\\infty,0,0)\\ .\n\\end{equation}\nWith this gauge choice, the entries of the matrices $g_s(x_i)$ and $g_t(x_i)$ depend on the bosonic\ncoordinate $x$ and the four Grassmann variables $\\theta_{1,3}$ and $\\bar \\theta_{1,3}$ only.\n\\medskip\n\nIn the second step we have to find the Cartan decomposition for both families $g_s$ and $g_t$. For our\n1-dimensional theory, the Cartan coordinates are introduced as\n\\begin{gather}\ng=e^{\\kappa R}e^{\\lambda_l D}e^{\\bar q Q_-+\\bar s S_-}e^{\\frac{u}{2}(P+K)}e^{qQ_++sS_+}e^{\\lambda_r D}\\ .\n\\end{gather}\n\nThis agrees with the general prescription \\eqref{eq:sCartan}, except that the torus of\nelements $a$ is parametrized by a single variable $u$ in this case. Through straightforward\nmanipulations of supermatrices one finds the following expressions for the Cartan coordinates\nof $g_s$ and $g_t$ in our gauge \\eqref{eq:xigauge}. 
For the bosonic Cartan coordinates in\n$s$-channel one has\n\\begin{align}\\label{eq:kls}\n & \\cosh^2 \\frac{u_s}{2} = \\frac1x \\Big( 1 - \\frac12\\theta_3\\bar\\theta_3 -\\frac{\\theta_1\\bar\\theta_1}{2x} +\n\\frac{\\theta_1\\bar\\theta_3}{x} +\\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x} \\Big)\\ ,\\quad e^{-2\\kappa_s} = 1 +\n\\frac{\\theta_1}{x}(\\bar\\theta_1 - \\bar\\theta_3)\\ ,\\\\[2mm]\n& e^{\\lambda_{s,l}-\\lambda_{s,r}} = \\Big(1-x-\\frac12\\theta_1\\bar\\theta_1-\\frac12\\theta_3\\bar\\theta_3+\n\\theta_1\\bar\\theta_3\\Big) \\Big(x-\\frac12\\theta_1\\bar\\theta_1\\Big)\\ ,\\label{eq:lsm}\\\\[2mm]\n& e^{\\lambda_{s,l}+\\lambda_{s,r}} = \\Big(1+\\frac12\\theta_3\\bar\\theta_3\\Big)\\Big(x-\\frac12\\theta_1\\bar\\theta_1\\Big)\\ ,\n\\label{eq:lsp}\n\\end{align}\nwhile in the $t$-channel these coordinates read\n\\begin{align}\\label{eq:klt}\n& \\cosh^2 \\frac{u_t}{2} = x\\Big( 1 + \\frac12\\theta_3\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1}{2x}\n- \\theta_1\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x}\\Big)\\ ,\\quad\ne^{-2\\kappa_t} = 1 + \\bar\\theta_3 (\\theta_3 - \\theta_1)\\ , \\\\[2mm]\n& e^{\\lambda_{t,l}-\\lambda_{t,r}} = - \\Big(1-x-\\frac12\\theta_1\\bar\\theta_1-\\frac12\\theta_3\\bar\\theta_3+\n\\theta_1\\bar\\theta_3\\Big)\\Big(1+\\frac12\\theta_3\\bar\\theta_3\\Big)\\ ,\\label{eq:ltm} \\\\[2mm]\n& e^{\\lambda_{t,l}+\\lambda_{t,r}} = \\Big(1-\\frac12\\theta_3\\bar\\theta_3\\Big)\n\\Big(x+\\frac12\\theta_1\\bar\\theta_1\\Big)\\ .\\label{eq:ltp}\n\\end{align}\nIn order to extract $\\sinh(u_t\/2)$ from the first and $\\exp \\lambda_{t,l}\\ ,\\ \\exp\\lambda_{t,r}$ from the\nlast two lines one has to take some square roots. 
Here we use the following convention\n\\begin{equation}\ne^{\\lambda_{t,r}}=i (\\frac{x}{1-x})^\\frac{1}{2}- \\dots\\ , \\quad\ne^{\\lambda_{t,l}}=-i \\sqrt{x(1-x)}- \\dots \\ , \\quad\n\\sinh(u_t\/2)= i\\sqrt{1-x}-\\dots \\ .\n\\end{equation}\nThe fermionic Cartan coordinates, on the other hand, are given by the following expressions\n\\begin{align} \\label{eq:sqs}\n & q_s = e^{\\frac12\\lambda_{s,r}}\\Big(\\theta_3 -\\frac{\\theta_1}{x} \\Big( 1-\\frac12\\theta_3\\bar\\theta_3\\Big)\\Big)\\ ,\n \\quad s_s = e^{-\\frac12\\lambda_{s,r}}\\frac{\\theta_1}{x}\\ ,\\\\[2mm]\n & {\\bar q_s = e^{-\\frac12\\lambda_{s,l}}(\\bar\\theta_3 -\\bar\\theta_1)\\ ,\n \\quad \\bar s_s = -e^{\\frac12\\lambda_{s,l}}\\frac{\\bar\\theta_3}{x}}\\ ,\\\\[2mm]\n & q_t = e^{\\frac12\\lambda_{t,r}}(\\theta_3 - \\theta_1)\\ ,\n \\quad s_t = -e^{-\\frac12\\lambda_{t,r}}\\theta_1\\Big(1-\\frac12\\theta_3\\bar\\theta_3\\Big)\\ ,\\\\[2mm]\n & {\\bar q_t = -e^{-\\frac12\\lambda_{t,l}}\\Big(\\bar\\theta_1 - \\bar\\theta_3\\Big(x+\\frac12\\theta_3\\bar\\theta_1\\Big)\\Big)}\\ ,\n \\quad \\bar s_t = e^{\\frac12\\lambda_{t,l}}\\bar\\theta_3\\ . \\label{eq:sqt}\n\\end{align}\nThis concludes the second step of the construction, namely the determination of the Cartan coordinates\nin the two channels.\n\\medskip\n\nAs a third step we want to compute the supercrossing factor $\\mathcal{M}_{st}$ between the two channels\nthat was defined in eq.\\ \\eqref{eq:crossingmatdef}. Note that for our superconformal algebra the group\n$K$ is generated by dilations $D$ and R-symmetry transformations $R$ only. It is abelian and hence\nall its irreducible representations are 1-dimensional. Therefore, the supercrossing factor $\\mathcal{M}_{st}$\nconsists just of a single function in the variables $x, \\theta_{1,3}$ and $\\bar \\theta_{1,3}$. It depends,\nof course, on the choice of representations for the external superfields. 
We shall pick four such\nrepresentations $(\\Delta_i,r_i)$, corresponding to the conformal weights and the R-charges of the\nsuperprimaries, as before. The associated representations $\\rho$ of $K = \\textit{SO}(1,1) \\times U(1)$\nwere introduced in eq.\\ \\eqref{eq:rho-1d}. Note that in our gauge \\eqref{eq:xigauge} the factors\n$k(t_{41})$ and $k(t_{43})$ are trivial. Therefore, we have\n\\begin{align}\n & \\kappa_1 = e^{(\\lambda_{s,l}-\\lambda_{t,l})D + (\\kappa_s - \\kappa_t) R}\\ ,\\quad\n \\kappa_4 = e^{(\\lambda_{t,l}+\\lambda_{s,r})D-\\kappa_t R},\\\\[2mm]\n & \\kappa_3 = e^{(\\lambda_{t,r}-\\lambda_{s,r})D}\\ ,\\quad \\kappa_2 =\n e^{-(\\lambda_{t,r}+\\lambda_{s,l}-\\log x^2)D + (\\kappa_s - \\frac12\\theta_3\\bar\\theta_3 +\n \\frac{\\theta_1\\bar\\theta_1}{2x})R}.\n\\end{align}\nThe matrix ${\\mathcal{M}}$ can be written in terms of superspace coordinates by inserting our\nexplicit formulas \\eqref{eq:kls}-\\eqref{eq:ltp} for the Cartan coordinates in the $s$- and\n$t$-channel. This gives\n\\begin{align}\n \\mathcal{M}_{st} & = e^{\\frac{i\\pi}{2}(\\Delta_2+\\Delta_4-\\Delta_1-\\Delta_3)} x^{-2\\Delta_1}\n \\alpha^{\\frac32\\Delta_1-\\frac12\\Delta_2-\\frac12\\Delta_3-\\frac12\\Delta_4} \\times \\nonumber \\\\[2mm]\n & \\hspace*{2cm} \\times \\beta^{\\frac12\\Delta_1+\\frac12\\Delta_2-\\frac32\\Delta_3+\\frac12\\Delta_4}\n e^{r_1(\\kappa_s-\\kappa_t) +r_2(\\kappa_s-\\frac12\\theta_3\\bar\\theta_3+\n \\frac{\\theta_1\\bar\\theta_1}{2x})-r_4\\kappa_t},\n\\end{align}\nwhere $\\alpha$ and $\\beta$ denote the following superspace elements\n\\begin{equation}\\label{alpha-beta}\n \\alpha = x + \\frac12\\theta_1\\bar\\theta_1,\\ \\beta = 1 - \\frac12\\theta_3 \\bar\\theta_3\\ .\n\\end{equation}\nIn order to compute the crossing factor $M_{st}$ we are now instructed to find the map \\eqref{eq:vdef}\nin both the $s$- and the $t$-channel. The general construction of $v$ is easy to implement since all representations\nare 1-dimensional. 
One finds\n\\begin{align}\n v = (1,q\\bar q,q\\bar s,s\\bar q,s\\bar s,qs\\bar q\\bar s)\\ .\n\\end{align}\nOnce we insert the expressions \\eqref{eq:kls}-\\eqref{eq:sqt} for Cartan coordinates in the two channels\nwe obtain\n\\begin{align} \\label{eq:vs-1d}\n & v_s = \\Big(1,-\\frac{(\\bar\\theta_1 - \\bar\\theta_3)(\\theta_1-x\\theta_3)}{x^{3\/2}\\sqrt{1-x}},\n \\frac{(\\theta_1 - x\\theta_3)\\bar\\theta_3+\\frac14\\Omega}{x^{3\/2}},\\frac{(\\bar\\theta_1-\\bar\\theta_3)\n \\theta_1+\\frac14\\Omega}{x^{3\/2}},\\frac{- \\theta_1\\bar\\theta_3\\sqrt{1-x}}{x^{3\/2}}, \\frac{\\Omega}{x^2} \\Big)\\ ,\\\\[2mm]\n & v_t = \\Big(1, i\\frac{(\\theta_1-\\theta_3)(\\bar\\theta_1-x\\bar\\theta_3)}{\\sqrt{1-x}},\n \\frac{x\\bar\\theta_3(\\theta_1 - \\theta_3)+\\frac14\\Omega}{\\sqrt{x}}, \\frac{\\theta_1 (\\bar\\theta_1 - x\\bar\\theta_3)+\n \\frac14\\Omega}{\\sqrt{x}}, i\\theta_1\\bar\\theta_3\\sqrt{1-x}\\ , \\Omega\\Big),\n \\label{eq:vt-1d}\n\\end{align}\nwhere $\\Omega = \\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3$. 
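As a simple consistency check, note that every component of $v_s$ and $v_t$ except for the first one is proportional to Grassmann bilinears, so that in the purely bosonic limit\n\\begin{equation}\nv_s\\big|_{\\theta=\\bar\\theta=0} = v_t\\big|_{\\theta=\\bar\\theta=0} = (1,0,0,0,0,0)\\ ,\n\\end{equation}\nand only the first tensor structure survives. 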
Now we have all the elements that are\nneeded to compute the crossing factor $M_{st}$ which we defined in eq.\\ \\eqref{eq:crossingmatrix} as\n\\begin{equation} \\label{eq:crossingmatrix-1d}\nM_{st} = \\hat v_t^T \\sqrt{\\frac{\\sinh u_t}{\\sinh u_s}}\n\\mathcal{M}_{st} v_s \\ ,\n\\end{equation}\nwhere $\\sinh u$ is a special instance of the function \\(\\omega(u)\\) that we introduced in eq.\\ \\eqref{eq:deffs}.\nAll factors that enter our expression for $\\mathcal{M}_{st}$ belong to the algebra $\\mathbb{C}[x,x^{-1}]\n\\otimes\\mathcal{A}$ where $\\mathcal{A}$ is the 6-dimensional algebra that is spanned by the elements\n\\begin{equation}\n e_1 = 1\\ ,\\ e_2 = \\theta_1 \\bar\\theta_1\\ ,\\ e_3 = \\theta_1 \\bar\\theta_3\\ ,\\ e_4 = \\theta_3 \\bar\\theta_1\\ ,\\\n e_5 = \\theta_3 \\bar\\theta_3\\ ,\\ e_6=\\Omega \\ .\n\\end{equation}\nIf we represent the $e_i$ by the canonical (column) vectors, the row vectors $v_{s\/t}$ become $6\\times 6$\nmatrices whose entries are functions of $x$. Similarly we can also turn the factor $\\sqrt{\\frac{\\sinh u_t}\n{\\sinh u_s}}\\mathcal{M}_{st}$ into a $6\\times 6$ matrix if we replace the elements $e_i$ by their matrix\nrepresentation in the left regular representation of $\\mathcal{A}$. Multiplying all these matrices, the\nfinal result is a $6 \\times 6$ matrix of functions in $x$ which is given by\n\\begin{equation} \\label{eq:crossingmatrix-1dvs2}\nM_{st} = v_t^{-1} \\sqrt{\\frac{\\sinh u_t}{\\sinh u_s}}\n\\mathcal{M}_{st} v_s \\ .\n\\end{equation}\nHaving computed the crossing factor between $s$- and $t$-channel there is only one final step left,\nnamely to relate the $s$- and $t$-channel cross ratios. Since the arguments of the functions $f$ in\nthe two channels are related by a change of variables that involves Grassmann coordinates, we need\nto perform a fermionic Taylor expansion in order to write the crossing equation in terms of functions\nof the bosonic cross ratio $x$ only, e.g. 
in the $t$-channel this expansion of \\(f_t(\\cosh^2\\frac{u_t}{2})\\)\ntakes the following form\n\\begin{equation}\n f_t = \\left(1 + x\\Big(\\frac12\\theta_3\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1}{2x} - \\theta_1\\bar\\theta_3\n + \\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x}\\Big)\\partial + \\frac14 x\\Omega\\partial^2\\right) f_t(x)\\ .\n\\end{equation}\nUpon substitution, the crossing factor is a $6\\times6$ matrix of second order differential operators\nin $x$. This concludes our construction of the crossing symmetry equations for long multiplets of\n$\\mathcal{N}=2$ superconformal field theories in $d=1$ dimension.\n\n\\section{Conclusions and Outlook}\n\n In this work we have laid out a systematic theory that allows one to decompose four-point functions\nof local operators into superconformal blocks. It applies to all superconformal field theories\nin which the superconformal symmetry is of type I, i.e.\\ for which the R-symmetry contains a\n$U(1)$ subgroup. This is the case for all superconformal field theories in $d=4$ dimensions and\na few other cases, in particular in $d=1,2$ and also 3-dimensional $\\mathcal{N}=2$ theories.\nIn a first step we lifted the four-point correlation function of arbitrary (long) operators to\na function on the conformal supergroup, see eq.\\ \\eqref{magic-formula}. A crucial ingredient in\nthis auxiliary step was to assign a special family of supergroup elements $g(x_i)$ to the\nsuperspace insertion points $x_i$ of the four fields. Let us stress that this first step is\nstill entirely general in that it applies to all superconformal algebras and not just those of type\nI. The specialization became necessary for the second step in which we introduced a special\nsupersymmetric version of the Cartan or KAK coordinates on the supergroup. 
As we had shown in\nour previous work \\cite{Buric:2019rms}, these coordinates are chosen to bring the Casimir\nequations into a remarkably simple form that allows one to construct all superblocks as finite\nlinear combinations of spinning bosonic blocks. The main purpose of the present work was to\ndetermine the associated tensor factors that map functions of two cross ratios back to\nthe original correlation function $G_4(x_i)$ on superspace. These tensor factors consist\nof two pieces, a map $\\Theta(x_i)$ on the space of superpolarizations, see eq.\\\n\\eqref{eq:defOmegas}, and a map $v(x_i)$ from the space of superpolarizations to the\nspace of tensor structures that was defined in eq.\\ \\eqref{eq:vdef}. The full evaluation of\nthese two factors required performing the Cartan decomposition of $g(x_i)$ explicitly. This\nis in principle straightforward, but can be a bit cumbersome, in particular for higher\ndimensions $d>2$. We have illustrated the explicit computation with the example of the\n$\\mathcal{N}=2$ superconformal algebra in $d=1$ dimension in the last subsection. Higher\ndimensional examples will be treated in forthcoming work.\n\nFor some applications and in particular in order to write down crossing symmetry equations, the\ntensor factors are actually not that important. What is needed, in addition to the conformal\nblocks, of course, is only the ratio of tensor factors between different channels. This\nquantity, which we dubbed the \\textit{crossing factor}, is a superconformal invariant and hence it\ncan be computed in any (super)conformal frame. Here we computed it for the $\\mathcal{N}=2$ superconformal\nalgebra in $d=1$. Along with our previous results on conformal blocks for this symmetry algebra, see\n\\cite{Buric:2019rms}, this allows one to write down crossing symmetry constraints for long multiplets,\nrecovering results from \\cite{Cornagliotto:2017dup}. 
In the latter paper it was shown that the\nnumerical (super-)conformal bootstrap involving long multiplets is significantly more constraining\nthan the bootstrap with the short or the superprimary components of long multiplets. Our new\nderivation of these constraints, however, is now entirely algorithmic and it can be extended\nwithout any significant additional difficulty to higher dimensional superconformal algebras of\ntype I. Let us also stress once again that the computation of the crossing factor in higher\ndimensional theories is significantly simpler than the computation of tensor factors. We\nhave illustrated this with the example of bosonic conformal algebras where the computation of the\ncrossing factor was reduced to computations in the subgroup $\\textit{SO}(1,3)$ of the $d$-dimensional\nconformal group $\\textit{SO}(1,d+1)$.\n\nOur focus here was on developing the general theory. Concrete applications in particular to\n4-dimensional superconformal theories will be addressed in forthcoming work. In particular\nwe will spell out the crossing symmetry constraint between two channels of a four-point\nfunction involving two half-BPS and two long operators in a 4-dimensional $\\mathcal{N}=1$\nsuperconformal theory. This requires combining all the elements of our approach. On the\none hand we apply the constructions and results of \\cite{Buric:2019rms} to spell out the\nCasimir equations in Calogero-Sutherland gauge and we use them to construct analytic expressions\nfor the conformal blocks as finite sums of spinning bosonic blocks. When restricted to the\nsuperprimary fields at the bottom of the long multiplets, our new blocks coincide with those\nconstructed in \\cite{Li:2017ddj}. On the other hand, we evaluate our formula\n\\eqref{eq:crossingmatrix} for the crossing factor in the example of $\\mathfrak{sl}(1|4)$.\nCombining these two types of input we obtain crossing equations that can be exploited with\nexisting numerical techniques. 
Since the superblocks are finite linear combinations of\nspinning bosonic blocks with coefficients whose analytical form is known, the evaluation\nof the superblocks only requires the numerical evaluation of 4-dimensional spinning\nbosonic blocks which has been developed in the past, see in particular \\cite{Karateev:2019pvw}.\nGiven the experience with the long multiplet bootstrap in $d=2$ dimensions, see\n\\cite{Cornagliotto:2017dup}, we expect that numerical studies of the extended crossing\nequations can improve on the constraints obtained from the restricted equations in\n\\cite{Li:2017ddj}. This may provide new clues also on the elusive minimal\n$\\mathcal{N}=1$ superconformal field theory.\n\nOf course it would also be interesting to spell out crossing symmetry constraints for\nother correlators in superconformal theories such as the multiplets of R-currents or\nthe stress tensor multiplets. In principle our approach applies to such quantities as\nwell, as long as the superconformal algebra is of type I. Of course, applications to\nvarious types of shorter multiplets containing conserved operators should provide\nsome simplifications which we did not address in this work. It would be interesting to \nstudy these in more detail with a view on possible extensions of recent results in \n\\cite{Manenti:2019jds}. Similarly, when summing over all operators in the crossing \nequations, one has to take shortening conditions for the blocks of short intermediate \nexchanges into account. Usually this is done on a case-by-case basis \n\\cite{Arutyunov:2002fh,Doobary:2015gia,Aprile:2017bgs,Sen:2018del}. It would be \ntempting to adopt the Calogero-Sutherland approach for a systematic analysis, at \nleast in \\(d=4\\). Let us also mention that the situation becomes even more \ncomplicated in the case of non-unitary theories \\cite{Yamazaki:2019yfd}. 
\n\nAnother interesting direction concerns correlation functions involving non-local\noperators such as boundaries, interfaces and (line, surface, $\\dots$) defects\nBlock expansions for a large class of defect two-point functions are known, see e.g.\n\\cite{Liendo:2012hy,Billo:2016cpy,Lauria:2017wav,Lauria:2018klo}. A Calogero-Sutherland\ntheory of such blocks was developed in \\cite{Isachenkov:2018pef}. It would be\ninteresting to supplement this by a theory of tensor structures and to extend\nboth ingredients, blocks and tensor structures, to the superconformal algebras.\nWe will return to this issue in future work. An example of physically relevant\n1-dimensional defects are superconformal light-ray operators which model\nhigh-energy scattering in supersymmetric gauge theories. Their two and\nthree-point correlation functions in the BFKL limit was already calculated in\n\\cite{Balitsky:2013npa,Balitsky:2015tca,Balitsky:2015oux}. These results may\nbe considered as a first step in the realisation of the bootstrap programme\nfor super lightray operators. Block expansions and bootstrap equations for\nsupersymmetric defects have also been studied and applied e.g.\\ in\n\\cite{Liendo:2016ymz,Liendo:2018ukf,Bianchi:2018zpb,Gimenez-Grau:2019hez,\nBianchi:2019sxz}.\n\nThe restriction to type I superalgebras is certainly a limiting one that we would\nlike to overcome in view of possible applications of the bootstrap e.g. to the\n6-dimensional $(2,0)$ theory \\cite{Beem:2015aoa} or to many relevant examples of\nsuperconformal field theories in $d=3$, see \\cite{Abl:2019jhh,Alday:2020tgi,Rong:2018okz,\nAtanasov:2018kqw,Agmon:2019imm} for some recent work and further references. With the\nexception of the $\\mathcal{N=2}$ superconformal algebra in $d=3$, non of the\nsuperalgebras in these examples is of type I. 
While it is possible to treat some\nspecial cases with methods similar to those described here, in particular in low\ndimensions, it is not clear to us whether our approach does admit a systematic\nextension. This remains an interesting challenge for future research.\n\\bigskip\n\n\\noindent\n{\\bf Acknowledgements:} We thank James Drummond, Aleix Gimenez-Grau, Paul Heslop, Mikhail Isachenkov,\nMadalena Lemos, Pedro Liendo, Junchen Rong, and Philine van Vliet for comments and fruitful discussions. The work of ES was supported\nby ERC grant 648630 IQFT. VS and IB acknowledge support by\nthe Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's\nExcellence Strategy \u2013 EXC 2121 ,,Quantum Universe'' \u2013 390833306.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nDeep convolutional neural networks (CNN) are now arguably the most popular computer vision algorithms. Models such as VGG \\cite{vgg} and ResNet \\cite{he2016deep} are widely used. However, these models contain up to hundreds of millions of parameters, resulting in a high memory footprint, long inference time and even longer training time. \n\nThe memory footprint and inference time of deep CNNs directly translate to application size and latency in production. Popular techniques based on model sparsification are able to deliver orders of magnitude reduction in the number of parameters in the network~\\cite{han2015deep}. Together with emerging efficient sparse convolution kernel implementations, deep CNNs can now be realistically used in production after training \\cite{gray2017gpu, park2016faster,chen2018escort}. \n\nHowever, the training of these deep CNNs is still a lengthy and expensive process. The hundreds of millions of parameters in the model must all be iteratively updated hundreds of thousands of times in a typical training process based on back-propagation. 
Recent research has attempted to address the training time issue by demonstrating effective training on large-scale computing clusters consisting of thousands of GPUs or high-end CPUs \\cite{you2018imagenet,akiba2017extremely,jia2018highly}. However, these computing clusters are still extremely expensive and labor-intensive to set up or maintain, even if the actual training process is reduced to minutes. \n\nAn alternative to using large computing clusters is to accelerate the computations of the gradients themselves. One option is to introduce highly optimized software \\cite{chetlur2014cudnn} or new hardware \\cite{markidis2018nvidia, jouppi2017datacenter}. The training can also be performed in lower precision, which can lead to massive speedups with appropriate hardware support \\cite{micikevicius2017mixed}. Another less pursued option, complementary to the previous two, is to approximate the gradient computations themselves \\cite{sun2017meprop, sun2018training, wei2017minimal,adelman2018faster}. Other recent works have also suggested that the exact gradient might not be necessary for efficient training of deep neural networks. Studies have shown that only the sign of the gradient is necessary for efficient back-propagation \\cite{xiao2018biologically,wen2017terngrad}. Surprisingly, even random gradients can be used to efficiently train neural networks \\cite{lillicrap2016random,nokland2016direct}. However, these findings are mostly limited to small fully connected networks on smaller datasets. The approximation algorithms proposed also cannot directly translate into real wall-clock speedups in training time due to the lack of efficient GPU implementations.\n\nIn this work, we hypothesize that we can extend gradient approximation methods to deep neural networks to speed up gradient computations in the training process. 
We hypothesize that we can apply these approximations to only a subset of the layers and maintain the validation accuracy of the trained network. We validate our hypotheses on three deep CNNs (2-layer CNN \\cite{krizhevsky2009learning}, ResNet-20 \\cite{he2016deep}, VGG-19 \\cite{vgg}) on CIFAR-10. Our methods are fully compatible with classic deep CNN architectures and do not rely on explicit sparsity information that must be input to the network, unlike approaches such as SBNet and Sub-manifold networks \\cite{ren2018sbnet,graham2017submanifold}. \n\nWe summarize our contributions as follows: \n\\begin{itemize}\n \\item We present three gradient approximation methods for training deep CNNs, along with an efficient GPU implementation for one of them. \n \\item We explore the application of these methods to deep CNNs and show that they allow for training convergence with minimal validation accuracy loss.\n \\item We describe the concept of approximation schedules, a way to reason about applying different approximation methods across different layers and training batches.\n\\end{itemize}\n\n\\section{Approximation Methods}\n\nIn a forward-backward pass of a deep CNN during training, a convolutional layer requires three convolution operations: one for forward propagation and two for backward propagation, as demonstrated in Figure \\ref{fig:back}. We approximate the convolution operation which calculates the gradients of the filter values, which constitutes roughly a third of the computational time. We aim to apply the approximation a quarter of the time across layers\/batches. This leads to a theoretical maximum speedup of around 8 percent (a third of the convolution cost, saved a quarter of the time). \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{forwardprop.png}\n\\includegraphics[width=0.8\\linewidth]{backprop.png}\n\\end{center}\n \\caption{Forward and backward propagation through a convolutional layer during training. 
Asterisks indicate convolution operations and the operation in the red box is the one we approximate.}\n\\label{fig:back}\n\\label{fig:onecol}\n\\end{figure}\n\n\\subsection{Zero Gradient}\nThe first method passes back zero as the weight gradient of a chosen layer for a chosen batch. If done for every training batch, it effectively freezes the filter weights. \n\n\\subsection{Random Gradient}\nThe second method passes back random numbers sampled from a normal distribution with mean 0 and standard deviation $\\frac{1}{128}$ (inverse of batch size) as the weight gradient of a chosen layer for a chosen batch. Different values in the weight gradient are chosen independently. Importantly, this is different from the random feedback alignment method discussed in \\cite{lillicrap2016random} and \\cite{nokland2016direct}, as we regenerate the random numbers every training batch. We implement this using tf.py\_func, where np.random.normal is used to generate the random values. This approach is extremely inefficient, though surprisingly faster than a naive cuRAND implementation in a custom Tensorflow operation for most input cases. We are working on a more efficient implementation. \n\n\\subsection{Approximated Gradient}\nThe third method we employ is based on the top-k selection algorithms popular in the literature~\\cite{wei2017minimal}. In the gradient computation for a filter in a convolutional layer, only the largest-magnitude gradient value is retained for each output channel and each batch element. It is scaled according to the sum of the gradients in its respective output channel so that the gradient estimate is unbiased, similar to the approach employed in \\cite{wangni2018gradient}. All other gradients are set to zero. This results in a sparsity ratio of $1-\\frac{1}{HW}$, where $H$ and $W$ are the height and width of the output hidden layer. 
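As a minimal illustration of this sparsification step, the following NumPy sketch (shapes and names are ours, not the paper's CUDA kernel) keeps one value per (batch element, output channel) map and rescales it by that map's sum:

```python
import numpy as np

def sparsify_output_grad(dO):
    """Keep only the largest-magnitude entry of each (batch, output-channel)
    gradient map, rescaled by the sum of that map so that the resulting
    filter-gradient estimate stays unbiased. dO has NCHW shape (N, C, H, W)."""
    N, C, H, W = dO.shape
    flat = dO.reshape(N, C, H * W)
    top1 = np.abs(flat).argmax(axis=2)          # position of the top-1 magnitude
    scale = flat.sum(axis=2)                    # per-map rescaling factor
    sparse = np.zeros_like(flat)
    n, c = np.meshgrid(np.arange(N), np.arange(C), indexing="ij")
    sparse[n, c, top1] = scale                  # one nonzero per output map
    return sparse.reshape(N, C, H, W)           # sparsity ratio: 1 - 1/(H*W)
```

Note that the sum of the sparse tensor equals the sum of the dense one, which is what the rescaling is for.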
The filter gradient is then calculated from this sparse version of the output gradient tensor with the saved input activations from the forward pass. The algorithm can be trivially modified to admit the top-k magnitude gradient values with an adjustment of the scaling parameter, a direction of future research. Similar to the random gradient method, we find that we need to scale our approximated gradient by a factor inversely proportional to the batch size for effective training. In the experiments here, we scale it by $\\frac{1}{128}$.\n\n\\subsection{Efficient GPU Implementation}\n\nA major contribution of this work is an implementation of the approximated gradient method in CUDA. This is critical to achieving actual wall-clock training speedups. A naive Tensorflow implementation using tf.image.extract\_glimpse does not use the GPU and results in significantly slower training times. \n\nEfficient GPU implementations for dense convolutions frequently use matrix lowering or transforms such as FFT or Winograd \\cite{chetlur2014cudnn,liu2018efficient}. However, the overheads of these transformations might not be worth the benefit in a sparse setting. Recent approaches have sought to perform the sparse convolution directly on CPU or GPU \\cite{park2016faster,chen2018escort}. Here we opt for the latter approach. We interpret the sparse convolution in the calculation of the filter gradient as a patch extraction procedure, as demonstrated in Figure \\ref{fig:algo}.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{algo.png}\n\\end{center}\n \\caption{The approximation algorithm illustrated for an example with two filters and three input elements. 
For each filter, we extract a patch from each batch element's input activations and accumulate the patches.}\n\\label{fig:algo}\n\\end{figure}\n\nFormally, assume we have an input tensor $I$ and an output gradient tensor $dO$ in $NCHW$ format, where $N$ is the batch dimension, $C$ the channel dimension and $H$, $W$ the height and width of the hidden layer. The filter tensor, $f$, has dimension $KKC_iC_o$, where $K$ is the filter size, $C_i$ is the number of channels in $I$ and $C_o$ is the number of channels in $dO$. We will use the symbol $\\ast$ to denote the convolution operation. In order to compute $df$, we have to convolve $I$ with $dO$. If we zero out all elements in $dO$ except one per output channel, then, for each batch element, the convolution reduces to extracting $C_o$ patches of shape $KKC_i$ from $I$, as specified below in Algorithm 1. \n\n\\begin{algorithm}\n\\caption{Max Gradient Approximation}\\label{euclid}\n\\begin{algorithmic}[1]\n\n\\State $df[:,:,:,:] = 0$\n\\For {$c = 1:C_o$}\n\\For {$n = 1:N$}\n\\State $row, col\\gets \\arg\\max abs(dO[n,c,:,:])$ \n\\State $sum \\gets \\sum dO[n,c,:,:]$ (sum is a scalar)\n\n\\State $df[:,c,:,:] += I[n,:,row:row+K,col:col+K] * sum$\n\\EndFor\n\\EndFor\n\n\\end{algorithmic}\n\\end{algorithm}\n\nOur kernel implementation expects input activations in $NHWC_i$ format and output gradients in $NC_oHW$ format. It produces the filter gradient in $C_oKKC_i$ format. In $NHWC$ format, GPU global memory accesses from the patch extractions can be efficiently coalesced across the channel dimension, which is typically a multiple of 8. Each thread block is assigned to process several batch elements for a fixed output channel. Each thread block first computes the indices and values of the nonzero weight values from the output gradients. Then, it extracts the corresponding patches from the input activations and accumulates them to the result. \n\nWe benchmark the performance of our code against NVIDIA cuDNN v7.4.2 library APIs. 
Approaches such as cuSPARSE have been demonstrated to be less effective in a sparse convolution setting and are not pursued here \\cite{chen2018escort}. All timing metrics are obtained on a workstation with a Titan-Xp GPU and 8 Intel Xeon CPUs at 3.60GHz.\n\nAll training experiments are conducted in $NCHW$ format, the preferred data layout of cuDNN. As a result, we incur a data transpose overhead for the input activations from $NCHW$ to $NHWC$. In addition, we also incur a slight data transpose overhead for the filter gradient from $C_oKKC_i$ to $KKC_iC_o$. \n\n\\subsection{Approximation Schedules}\n\nHere, we introduce the concept of approximation schedules. This concept allows us to specify when particular approximations are applied in the training process and how to combine different approximations. Existing approximation methods such as DropBack \\cite{golub2018dropback} and PruneTrain \\cite{lym2019prunetrain} can be applied to a specific layer, but are applied over all training batches, since their application at a particular training batch changes the structure of the network, thus affecting all subsequent batches. The three approximation methods we have mentioned approximate the gradient computation of a weight filter for a single training batch. They can thus be applied to a specific layer for a specific training batch. We use the term ``approximation schedule'' for a specification, consistent with the above rules, of which approximation method to apply for each layer and each training batch. An example of an approximation schedule is shown in Figure \\ref{fig:schedule}. More aggressive approximation schedules might lead to a higher loss in accuracy, but would also result in higher speedups. Here, we demonstrate that simple heuristics to pick approximation schedules can lead to good results on common networks such as ResNet-20 and VGG-19. 
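To make the notion concrete, one hypothetical way to encode such a schedule is as a lookup from (layer, batch) to the gradient computation to use; the helper below (names are ours) mirrors the simple every-$k$-th-layer heuristics discussed in the text:

```python
def make_schedule(method="approx", layer_stride=4, start_layer=1):
    """Return a lookup (layer, batch) -> gradient computation to use.

    Encodes one simple family of schedules: apply `method` to every
    `layer_stride`-th layer, starting at `start_layer`, on every batch;
    compute the full (exact) gradient everywhere else.
    """
    def lookup(layer, batch):
        if layer >= start_layer and (layer - start_layer) % layer_stride == 0:
            return method
        return "full"
    return lookup

schedule = make_schedule()
# Layers 1, 5, 9, ... are approximated on every batch; the rest use full gradients.
```

A schedule for a concrete network is then just this function evaluated over its layers and training batches.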
While the efficacy of simple heuristics is crucial for the applicability of the proposed approximation methods in practice, determining the optimal approximation schedule for different neural network architectures is an interesting direction of future research.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{schedule.png}\n\\end{center}\n \\caption{Example approximation schedule for a 5-layer network over 4 training batches. Full Grad denotes regular gradient computation without approximation.}\n\\label{fig:schedule}\n\\end{figure}\n\n\\section{Evaluation}\nWe test our approach on three common neural network architectures (2-layer CNN \\cite{krizhevsky2009learning}, VGG-19 \\cite{vgg} and ResNet-20 \\cite{he2016deep}) on the CIFAR-10 dataset. The local response normalization in the 2-layer CNN is replaced by the more modern batch normalization method \\cite{ioffe2015batch}. For all three networks, we aim to use the approximation methods 25 percent of the time. In this work, we test all three approximation methods separately and do not combine them. On the 2-layer CNN, we apply the selected approximation method to the second convolutional layer every other training batch. On VGG-19 and ResNet-20, we apply the selected approximation method to every fourth convolutional layer every training batch, starting from the second convolutional layer. For example, the three approximation schedules for the 2-layer CNN are shown in Figure \\ref{fig:2-layer-schedule}. We start from the second layer because recent work has shown that approximating the first convolutional layer is difficult \\cite{adelman2018faster}. This results in four approximated layers for VGG-19 and five approximated layers for ResNet-20. For the ResNet-20 model, we train a baseline ResNet-14 model as well. Training a smaller model is typically done in practice when training time is of concern. 
Ideally, training the larger ResNet-20 model with our approximation methods should result in higher validation accuracy than the ResNet-14 model. For ResNet-20, we also experiment with other approximation schedules to show that our approximation methods are robust to schedule choice. \n\nWe train the networks until the validation accuracy stabilizes. It took around 500 epochs for the 2-layer CNN, 250 epochs for the ResNet-20 model, and 200 epochs for the VGG-19 model. We use an exponentially decaying learning rate with the Adam optimizer for all three models. We apply typical data augmentation techniques such as random cropping and flipping to each minibatch. All training was performed within Tensorflow 1.13 \\cite{abadi2016tensorflow}. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{2-layer-schedules.png}\n\\end{center}\n \\caption{The three approximation schedules studied for the 2-layer network using a) zero gradient method b) random gradient method c) approximated gradient method.}\n\\label{fig:2-layer-schedule}\n\\end{figure}\n\n\\subsection{Performance Comparisons}\n\nWe compare the performance of our GPU kernel for the approximated gradient method with the full gradient computation for the weight filter as implemented in cuDNN v7.4.2. cuDNN offers state-of-the-art performance in dense gradient computation and is used in almost every deep learning library. For each input case, cuDNN tests several hand-assembled kernels to pick the fastest one. The kernels fully utilize the high floating point throughput of the GPU to perform the dense gradient computations. In contrast, sparse approximations of the gradient usually involve a lower arithmetic-to-memory ratio and do not admit as efficient kernel implementations on GPU. It is oftentimes necessary to impose structure or a high sparsity ratio to achieve actual performance gains \\cite{zhu2018structurally}. 
Here we demonstrate that our gradient approximation method does yield an efficient GPU implementation that can lead to actual speedups compared to cuDNN. \n\nWe present timing comparisons for a few select input cases encountered in the network architectures used in this work in Table \\ref{table:perf}. We aggregate the two data transpose overheads of the input activations and the filter gradients. (In almost every case, the data transpose overhead of the input activations dominates.) We make three observations. \n\nFirstly, in most cases, the gradient approximation, including data transposition, is at least three times as fast as the cuDNN baseline. Secondly, we observe that cuDNN timing scales with the number of input channels times the height and width of the hidden layer, whereas our approximation kernel timing scales with the number of input channels alone. This is expected from the nature of the computations involved: the performance bottleneck of our kernel is the memory-intensive patch extractions, the sizes of which scale with the number of input channels times the filter size. Thirdly, we observe that in many cases, the data transposition overhead is over fifty percent of the kernel time, suggesting that our implementation can be further improved by fusing the data transpose into the kernel as in SBNet \\cite{ren2018sbnet}. This is left for future work. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{performance.png}\n\\end{center}\n \\caption{Performance comparisons. All timing statistics in microseconds. The Approx. total column is the sum of the CUDA kernel time and the transpose overhead.}\n\\label{table:perf}\n\\end{table}\n\n\\subsection{Training Convergence}\n\nWe present convergence results for the training of our three neural networks using the chosen approximation schedules with two metrics, training loss and validation accuracy. 
In Figure \\ref{fig:2acc}, we see that for the 2-layer CNN, all approximation methods result in training loss curves and validation accuracy curves similar to the ones obtained by full gradient computation. We can even see that the random gradient method surpasses full gradient computation in terms of validation accuracy. The zero gradient method is very similar to full gradient computation, while the approximated gradient method does slightly worse. \\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{2-layer-results2.png}\n\\end{center}\n \\caption{a) Training loss of 2-layer CNN with different approximation methods. b) Validation accuracy of 2-layer CNN with different approximation methods.}\n\\label{fig:2acc}\n\\end{figure}\n\nThe approximation methods remain robust on larger networks, such as ResNet-20, shown in Figure \\ref{fig:racc}. In this case, we can see from both the loss curves and the validation accuracy that our approximated gradient method does slightly worse than full gradient computation, but better than both the random gradient and zero gradient methods. Curiously, the random gradient method maintains a high training loss throughout the training, but is still able to achieve good validation accuracy. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{resnet_results_2.png}\n\\end{center}\n \\caption{a) Training loss of ResNet-20 with different approximation methods. b) Validation accuracy of ResNet-20 with different approximation methods. The loss curve of the random gradient method stagnates but the validation accuracy is competitive.}\n\\label{fig:racc}\n\\end{figure}\n\nFor VGG-19, shown in Figure \\ref{fig:vacc}, we see that full gradient descent actually lags behind the approximated methods in reaching the target validation accuracy. In this case, all three approximation methods perform very well. 
However, full gradient descent eventually overtakes all approximation methods in terms of validation accuracy. This suggests that perhaps a fruitful approach to explore, at least for networks similar to VGG, would be to use approximations early on in training and switch to full gradient computation when a validation accuracy plateau has been reached.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{vgg_results_2.png}\n\\end{center}\n \\caption{a) Training loss of VGG-19 model with different approximation methods. b) Validation accuracy of VGG-19 model with different approximation methods.}\n\\label{fig:vacc}\n\\end{figure}\n\n\\subsection{Speedup-Accuracy Tradeoffs}\n\nHere, we present the wall-clock speedups achieved for each network and approximation method. We compare the speedups against the validation accuracy loss, measured from the best validation accuracy achieved during training. Validation accuracy was calculated every ten epochs. As mentioned above, the random gradient implementation is quite inefficient and its optimization is pending future work. The speedup takes into account the overhead of defining a custom operation in Tensorflow, as well as the significant overhead of switching the gradient computation based on the global training step. For the 2-layer CNN, we are unable to achieve a wall-clock speedup for any of the approximation methods, even the zero gradient one, because of this overhead. (Table \\ref{tab:2acc}) However, all approximation methods suffer little validation accuracy loss. The random gradient method even outperforms full gradient computation by 0.8\\%. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_2layer.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on 2-layer CNN. Negative speedup indicates a slowdown.}\n\\label{tab:2acc}\n\\end{table}\n\nFor ResNet-20, the approximation schedule we choose does not involve switching gradient computations. 
We avoid the switching overhead and can achieve speedups for both the zero gradient method and the approximated gradient method. As shown in Table \\ref{tab:racc}, the zero gradient method achieves roughly a third of the speedup obtained by training the baseline ResNet-14 model instead. The approximated gradient method also achieves a 3.5\\% wall-clock speedup, and is the only method to suffer less accuracy loss than just using a smaller ResNet-14. In the following section, we demonstrate that with other approximation schedules, the approximated gradient method can achieve as little as 0.1\\% accuracy loss. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_resnet.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on ResNet-20. Negative speedup indicates a slowdown.}\n\\label{tab:racc}\n\\end{table}\n\nFor VGG-19, despite being quicker to converge, the approximation methods all have worse validation accuracy than the baseline method (Table \\ref{tab:vacc}). The best approximation method appears to be the random gradient method, though it is extremely slow due to our inefficient implementation in Tensorflow. The other two methods also achieve high validation accuracies, with the approximated gradient method slightly better than the zero gradient method. Both methods are able to achieve speedups in training. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_vgg.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on VGG-19. Negative speedup indicates a slowdown.}\n\\label{tab:vacc}\n\\end{table}\n\n\n\\subsection{Robustness to Approximation Schedule}\n\nHere, we explore two new approximation schedules for ResNet-20, keeping the total proportion of the time we apply the approximation at 25 percent. We will refer to the approximation schedule presented in the section above as schedule 1. 
Schedule 2 applies the selected approximation method every other layer for every other batch. Schedule 3 applies the selected approximation method every layer for every fourth batch. We also present the baseline result of the ResNet-14 model. \n\nAs we can see from Figure \\ref{fig:robust} and Table \\ref{tab:robust-2}, under schedules 2 and 3, both the zero gradient and the approximated gradient methods perform well. In fact, for the approximated gradient and the zero gradient methods the validation accuracy loss is smaller than under schedule 1. Indeed, in schedule 3, the approximated gradient's best validation accuracy is within 0.1\\% of that of the full gradient computation. The random gradient method's validation accuracy is now in line with its poor loss curve for these two approximation schedules. This suggests that the random gradient method does not work well for the ResNet-20 architecture.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{robust.png}\n\\end{center}\n \\caption{a) Training loss of ResNet-20 with different approximation methods for approximation schedule 2. b) Validation accuracy of ResNet-20 with different approximation methods for approximation schedule 2. c) Training loss of ResNet-20 with different approximation methods for approximation schedule 3. d) Validation accuracy of ResNet-20 with different approximation methods for approximation schedule 3.}\n\\label{fig:robust}\n\\end{figure}\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{robust_table.png}\n\\end{center}\n \\caption{Validation accuracy for different approximation schedules on ResNet-20. Schedule 1 is the same as presented above.}\n\\label{tab:robust-2}\n\\end{table}\n\n\\section{Discussion and Conclusion}\n\nWhile research on accelerating deep learning inference abounds, there is relatively limited work focused on accelerating the training process. 
Recent works such as PruneTrain prune the neural network during training, but suffer a rather serious loss in validation accuracy \\cite{lym2019prunetrain}. Approaches such as DropBack \\cite{golub2018dropback} and MeProp \\cite{wei2017minimal,sun2017meprop} show that approximated gradients are sufficient to successfully train neural networks but do not yet offer real wall-clock speedups. In this work, we show that we can train deep neural networks to good validation accuracy with very minimal gradient information on a subset of the layers, leading to wall-clock speedups for training. \n\nWe are surprised by the consistently strong performance of the zero gradient method. For ResNet-20, for two of the three approximation schedules tested, the validation accuracy loss is smaller than that of a smaller baseline network. Its performance is also satisfactory on VGG-19 as well as the 2-layer CNN. It admits an extremely fast implementation that delivers consistent speedups. This points to a simple way to potentially boost training speed in deep neural networks, while maintaining their performance advantage over shallower alternatives. \n\nWe also demonstrate that random gradient methods can train deep neural networks to convergence, provided they are only applied to a subset of the layers. For the 2-layer CNN and VGG-19, this method leads to the least validation accuracy loss of all three approximation methods. However, its performance seriously lags other methods on ResNet-20, suggesting that its effectiveness is network-architecture-specific. Naive feedback alignment, where the random gradient signal is fixed before training starts, has been shown to be difficult to extend to deep convolutional architectures \\cite{han2019efficient,bartunov2018assessing}. We show here that if the random gradients are newly generated every batch and applied to a subset of layers, they can be used to train deep neural networks to convergence. 
Interestingly, generating new random gradients every batch effectively abolishes any kind of possible ``alignment'' in the network, calling for a new explanation of why the network converges. Evidently, this method holds the potential for an extremely efficient implementation, something we are currently working on. \n\nFinally, we present a gradient approximation method with an efficient GPU implementation. Our approximation method is consistent in terms of validation accuracy across different network architectures and approximation schedules. Although the wall-clock training speedup is not large, the validation accuracy loss is also small. We wish to re-emphasize here the small validation accuracy difference observed between the baseline ResNet-14 and ResNet-20, leading us to believe that novel training speed-up methods must incur minimal validation accuracy loss to be more practical than simply training a smaller network.\n\nIn conclusion, we show that we can ``fool'' deep neural networks into training properly while supplying them with only very minimal gradient information on select layers. The approximation methods are simple and robust, holding the promise to accelerate the lengthy training process for state-of-the-art deep CNNs. \n\n\n\\section{Future Work}\nBesides those already mentioned, there are several more interesting directions of future work. One direction is predicting the validation accuracy loss that a neural network would suffer from a particular approximation schedule. With such a predictor, we can optimize for the fastest approximation schedule while constraining the final validation accuracy loss before the training run. We can also examine the effects of mingling different approximation methods and integrating existing methods such as PruneTrain and Dropback \\cite{lym2019prunetrain, golub2018dropback}. Another direction is approximating the gradient of the hidden activations, as is done in meProp \\cite{sun2017meprop}. 
However, if we approximate the hidden activations at a deeper layer of the network, the approximation error will be propagated to the shallower layers. Due to this concern, we start with approximating filter weight gradients, where the effect of errors are local. Finally, we are working on integrating this approach into a distributed training setting, where the approximation schedule is now 3-dimensional (machine, layer, batch). This approach would be crucial for the approximation methods to work with larger scale datasets such as ImageNet, thus potentially allowing for wall-clock speed-up in large scale training. \n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzafxy b/data_all_eng_slimpj/shuffled/split2/finalzzafxy new file mode 100644 index 0000000000000000000000000000000000000000..f687f15bbabf8a3c34e11cb49217e6ee1f2f9589 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzafxy @@ -0,0 +1,5 @@ +{"text":"\\section{Submission of conference papers to ICLR 2022}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Method}\n\n\\begin{figure}\n\\vspace{-3mm}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/CLIP_framework.pdf}\n\\vspace{-6mm}\n\\caption{Overall architecture of FILIP, a dual-stream model with Transformer-based image and text encoders. On top of the image and text encoders, the representations of textual tokens and visual tokens are linearly projected to the multi-modal joint space. A novel fine-grained contrastive learning equipped with cross-modal late interaction is proposed, which uses a token-wise maximum similarity between visual and textual tokens. 
}\n\\label{fig:framework}\n\\end{center}\n\\vspace{-4mm}\n\\end{figure}\n\n\\label{sec:model_overview}\n\nIn this paper, we propose a new cross-modal pre-training model that excels in fine-grained interaction between the image encoder and the text encoder for mining more detailed semantic alignment, named FILIP, as shown in Figure \\ref{fig:framework}. \nParticularly, FILIP is a dual-stream model with Transformer-based image and text encoders.\nFor the visual modality, the image encoder is a Vision Transformer \\citep{dosovitskiy2020image} which takes the concatenation of an extra [CLS] token embedding and linearly projected image patches as input.\nFor the textual modality, following \\cite{radford2021learning}, we use the lower-cased byte pair encoding (BPE) \\citep{sennrich2016neural}\nwith a vocabulary size of 49,408 to tokenize the text. \nEach text sequence starts with a [BOS] token and ends with an [EOS] token.\nAfter the word embedding layer, the token embeddings are fed into a modified decoder-only Transformer model as in \\citep{radford2019language}. \nOn top of the image and text encoders, the representations of textual tokens and visual tokens are linearly projected to the multi-modal common space, and are separately L2-normalized. 
\nDifferent from existing dual-stream models (e.g., CLIP and ALIGN) which model cross-modal interaction via only the global features of the entire image and text sequence, \nwe introduce a novel fine-grained contrastive learning objective equipped with cross-modal late interaction which takes into account the fine-grained interaction between image patches and textual tokens, detailed in Section \\ref{sec:late_interaction}.\n\n\n\n\\vspace{-1mm}\n\\subsection{Fine-grained Contrastive Learning}\n\\label{sec:late_interaction}\n\nContrastive representation learning has recently been found to learn better representations than its predictive counterpart \nin both visual \\citep{tian2020contrastive} and vision-language cross-modal pre-training \\citep{radford2021learning}.\nUnder a general formulation of cross-modal contrastive learning \\citep{radford2021learning}, we want to learn encoders $f_\\theta$ for image data $\\mathcal{I}$ and $g_\\phi$ for text data $\\mathcal{T}$ such that, given an image $\\vx^{img} \\in {\\mathcal{I}}$ and a text $\\vx^{text} \\in {\\mathcal{T}}$, the encoded representations $f_\\theta(\\vx^{img})$ and $g_\\phi(\\vx^{text})$ are close if they are related and far apart if not, under a distance metric. \nIn each training batch, we sample $b$ image-text pairs $\\{\\vx^{img}_k, \\vx^{text}_k\\}_{k=1}^b$.\nFor image $\\vx^{img}_k$ in image-text pair $\\{\\vx^{img}_k, \\vx^{text}_k\\}$, $\\vx^{text}_k$ is its positive, while the other texts \nwill be used as in-batch negatives. 
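In implementation terms, this in-batch objective reduces to a softmax cross-entropy over a b-by-b similarity matrix, computed in both directions; a minimal NumPy sketch (our names, assuming precomputed global features and omitting the temperature and FILIP's token-wise similarities):

```python
import numpy as np

def in_batch_contrastive_loss(img_feats, txt_feats):
    """img_feats, txt_feats: (b, d) arrays of image/text features
    (L2-normalized in FILIP); matched image-text pairs sit on the
    diagonal of the similarity matrix."""
    s = img_feats @ txt_feats.T                 # s[i, j]: image i vs text j
    b = s.shape[0]
    # Log-softmax over rows (texts as classes) and columns (images as classes).
    log_p_i2t = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    log_p_t2i = s - np.log(np.exp(s).sum(axis=0, keepdims=True))
    diag = np.arange(b)
    loss_i2t = -log_p_i2t[diag, diag].mean()    # image-to-text direction
    loss_t2i = -log_p_t2i[diag, diag].mean()    # text-to-image direction
    return 0.5 * (loss_i2t + loss_t2i)
```

The sharper the diagonal dominates the similarity matrix, the smaller this loss.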
\nThe image-to-text contrastive loss ${\mathcal{L}}^I_k$ for $\vx_{img}_k$ can then be formulated as\n\[\n{\mathcal{L}}^I_k (\vx_{img}_k, \{\vx_{text}_j\}_{j=1}^b) = -\frac{1}{b} \log \frac{\exp(s_{k,k}^I)}{\sum_{j} \exp(s_{k,j}^I)},\n\]\nwhere $s_{k,j}^I$ denotes the similarity of the $k$-th image to the $j$-th text.\nSimilarly, the text-to-image contrastive loss for $\vx_{text}_k$ is\n\[\n{\mathcal{L}}^T_k (\vx_{text}_k, \{\vx_{img}_j\}_{j=1}^b) = -\frac{1}{b} \log \frac{\exp(s_{k,k}^T)}{\sum_{j} \exp(s_{j,k}^T)}.\n\]\nThe total loss of this mini-batch can be represented by \n\begin{equation}\n{\mathcal{L}} = \frac{1}{2}\sum\limits_{k=1}^b ({\mathcal{L}}^I_k + {\mathcal{L}}^T_k). \label{eq:contrastive_loss}\n\end{equation}\n\n\vspace{-1mm}\n\subsubsection{Cross-modal Late Interaction}\n\label{sec:Cross-modal-late}\nFrom the contrastive loss (\ref{eq:contrastive_loss}), the cross-modal interaction is reflected in how we compute the similarities $s_{i,j}^I$ and $s_{i,j}^T$ for the $i$-th image and $j$-th text.\nPrevious methods like CLIP~\citep{radford2021learning} and ALIGN~\citep{jia2021scaling} simply encode each image or text separately to a global feature, i.e., $f_\theta(\vx_{img}_i) \in \mathbb{R}^{d}$ and $g_\phi(\vx_{text}_j) \in \mathbb{R}^{d}$, and compute these two similarities as\n\begin{equation}\n s^I_{i,j} = s^T_{i,j} = f_\theta(\vx_{img}_i)^\top g_\phi(\vx_{text}_j), \label{eq:orig_loss}\n\end{equation}\nneglecting finer-grained interactions (e.g., word-patch alignment) between the two modalities.\nTo alleviate this problem, while simultaneously maintaining the training and inference efficiency of dual-stream models, we apply a cross-modal late interaction inspired by \cite{khattab2020colbert} to model the token-wise cross-modal interaction.\n\nSpecifically, \ndenote $n_1$ and $n_2$ as the number of (non-padded) tokens of the $i$-th image and $j$-th text, respectively, \nand the corresponding encoded 
features \nare $f_\\theta(\\vx_{img}_i) \\in \\mathbb{R}^{n_1 \\times d}$ and $g_\\phi(\\vx_{text}_j) \\in \\mathbb{R}^{n_2 \\times d}$.\nFor the $k$-th visual token, we compute its similarities with all textual tokens of $\\vx_{text}_j$, and use the largest one \n\\begin{equation}\n\\max_{0\\le r < n_2} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_r\n\\label{eq:tokenwise_max_sim}\n\\end{equation} \nas its token-wise maximum similarity with $\\vx_{text}_j$.\nWe then use the average token-wise maximum similarity of all non-padded tokens \nin the image (resp. text) as the similarity of an image to a text (resp. a text to an image). \nThe similarity of the $i$-th image to the $j$-th text\ncan thus be formulated as:\n\\begin{equation}\ns_{i,j}^I (\\vx_{img}_i, \\vx_{text}_j) = \\frac{1}{n_1}\\sum_{k=1}^{n_1} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_{m_k^I}, \\label{eq:late_sim_i}\n\\end{equation}\nwhere $m_k^I = \\arg \\max_{0\\le r < n_2} [f_\\theta(\\vx_{img}_i)]_k^\\top [g_\\phi(\\vx_{text}_j)]_r$.\nSimilarly, the similarity of the $j$-th text to the $i$-th image is\n\\begin{equation}\ns_{i,j}^T (\\vx_{img}_i, \\vx_{text}_j) = \\frac{1}{n_2}\\sum_{k=1}^{n_2} [f_\\theta(\\vx_{img}_i)]_{m_k^T}^\\top [g_\\phi(\\vx_{text}_j)]_k, \\label{eq:late_sim_t}\n\\end{equation}\nwhere $m_k^T = \\arg \\max_{0\\le r < n_1} [f_\\theta(\\vx_{img}_i)]_r^\\top [g_\\phi(\\vx_{text}_j)]_k$.\nNote that $s_{i,j}^I (\\vx_{img}_i, \\vx_{text}_j)$ in Equation (\\ref{eq:late_sim_i}) does not necessarily equal $s_{i,j}^T (\\vx_{img}_i, \\vx_{text}_j)$ in Equation (\\ref{eq:late_sim_t}).\n\n\n\\begin{remark}\n\\label{rmk:late_interaction_loss}\nIntuitively, the token-wise maximum similarity in Equation~(\\ref{eq:tokenwise_max_sim}) means that\nfor each image patch, we find its most similar textual token.\nSimilarly, for each textual token, we also find its closest image patch.\nBy applying this to the similarity calculation in (\\ref{eq:late_sim_i}) and 
(\ref{eq:late_sim_t}) \nfor contrastive loss (\ref{eq:contrastive_loss}),\n the dual-stream model learns fine-grained alignment between image patches and textual tokens.\n\end{remark}\n\n\nThe original late interaction mechanism in \citep{khattab2020colbert} computes the relevance score of a document to a query \textit{padded with mask tokens}, as a \textit{sum} of token-wise maximum similarities,\nand is optimized via a \textit{pairwise} softmax cross-entropy loss.\nThough inspired by \citet{khattab2020colbert}, our proposed cross-modal late interaction differs in several aspects.\nFirstly, we exclude the padded textual tokens when computing the similarity, as they harm the performance. \nWe speculate that this is because these padded tokens also learn textual representations and will mislead the model to align image patches to these meaningless padded tokens rather than meaningful non-padded words.\nSecondly, when computing similarities (\ref{eq:late_sim_i}) and (\ref{eq:late_sim_t}), we use the average of the token-wise maximum similarities instead of the summation used in \citep{khattab2020colbert}. This is because the number of non-padded tokens varies from text to text, and the summation over all non-padded tokens can have quite different magnitudes, leading to less stable training and worse final performance.\nThirdly, we optimize the late interaction mechanism via a contrastive loss (\ref{eq:contrastive_loss}), which has been found powerful in vision-language pre-training~\citep{radford2021learning}, instead of the original pairwise loss in \citep{khattab2020colbert}.\n\n\textbf{Training Efficiency.}\nThough the cross-modal late interaction is able to capture finer-grained features \ncompared with the original loss, \nit relies on the token-wise representations of both modalities, \nand can be inefficient in terms of communication, memory and computation, especially when the batch size is large. 
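As a concrete illustration, the late-interaction similarities described above can be sketched in a few lines of NumPy (a toy sketch under our own naming; it covers a single image-text pair and omits padding and batching):

```python
import numpy as np

def late_interaction_sims(img_feats, txt_feats):
    """Cross-modal late interaction: token-wise max similarity, then mean.

    img_feats: (n1, d) L2-normalized patch features of one image;
    txt_feats: (n2, d) L2-normalized non-padded token features of one text.
    Returns (s_I, s_T), which are generally not equal.
    """
    sim = img_feats @ txt_feats.T      # (n1, n2) pairwise token similarities
    s_I = sim.max(axis=1).mean()       # per-patch max over words, averaged
    s_T = sim.max(axis=0).mean()       # per-word max over patches, averaged
    return s_I, s_T

# Toy example: two orthogonal patches, one word matching the first patch.
img = np.array([[1.0, 0.0], [0.0, 1.0]])
txt = np.array([[1.0, 0.0]])
s_I, s_T = late_interaction_sims(img, txt)
print(s_I, s_T)  # 0.5 1.0 -- the two directions differ
```

The toy example makes the asymmetry explicit: every word finds a perfectly matching patch, but not every patch finds a matching word.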
\nTo alleviate this problem, we utilize several methods.\nFirstly, we reduce the embedding size\nto 256.\nBesides, we reduce the precision of the last-layer features of both modalities from fp32 to fp16 before node communication in a distributed learning setting, \nand perform the multiplication in Equations (\\ref{eq:late_sim_i}) and (\\ref{eq:late_sim_t}) under the reduced precision.\nIn addition, since the complexity of similarity calculation scales with the sequence length of \ntextual tokens and image patches,\nfor each image (resp. text), we select the 25\\% tokens with the highest token-wise maximum similarity score (Equation (\\ref{eq:tokenwise_max_sim})) among all texts (resp. images) in the same local worker before node communication, based on the intuition that each sample can be represented by a few of the most representative tokens. Effects of these modifications are studied in Section \\ref{sec:efficiency-study-of-late-loss}.\n \n\n\n\n\\subsubsection{Prompt Ensemble and Templates}\n\\label{sec:prompt_ensemble}\n\nDue to the problem of polysemy and inconsistency with the pre-training process, following \\citet{radford2021learning}, we also use prompt templates to augment the original label for some downstream tasks.\nFor visualizations, for simplicity, we \nuse only one prompt template across the paper, i.e. 
``a photo of a \{label\}.'' following \citet{radford2021learning}.\nFor other experiments, we\nreport results using prompt ensemble following~\citet{radford2021learning}.\nWhen multiple prompts are allowed, the token-wise representations of different prompt templates for the same class label are different, and cannot be summed together to form \na mean textual representation as in \citep{radford2021learning}.\nThus, instead of ensembling different prompt templates by their mean textual representation, we ensemble them\nby their mean token-wise similarity.\nSpecifically, suppose there are $C$ prompt templates; each label is then augmented to $C$ different texts $\vx_{text}_1, \vx_{text}_2, \cdots, \vx_{text}_C$.\nThe \nsimilarity between an image $\vx_{img}$ and this label is computed as\n$\n\frac{1}{C}\sum_{c=1}^C s_{\cdot,\cdot}^I (\vx_{img}, \vx_{text}_c),\n$\nwhere $s_{\cdot,\cdot}^I$ is defined in Equation (\ref{eq:late_sim_i}).\n\nWe use a unified rule-based method inspired by \citet{radford2018improving}\nto construct prompt templates for image classification tasks.\nSpecifically, each template consists of four components:\n\n\vspace{-3mm}\n\begin{equation}\n\text{[prefix] \{label\}, [category description]. 
[suffix].} \\label{eq:prompt_template}\n\\end{equation}\nHere, the ``[prefix]'' is an in-context description like ``a photo of a\" similar as~\\cite{radford2021learning};\n``{label}'' is a class label of the dataset;\n``[category description]'' describes the category which is found helpful for some fine-grained image classification datasets \\citep{radford2021learning}, e.g., `` a type of pet'' for dataset Oxford-IIIT Pets.\nAn interesting finding is that,\nadding a suffix that includes the reference word ``it\" (e.g., ``I like it.\") at the end of the prompt empirically improves the zero-shot classification performance of the proposed model.\nWe speculate this is because the reference word ``it\" strengthens the fine-grained cross-modal alignment, as it\ncan also be aligned to image patches of the target object. Detailed prompt templates for different datasets can be found in Appendix~\\ref{apdx:prompt_template}.\n\n\n\n\n\\vspace{-1mm}\n\\subsection{Image and Text Augmentation}\n\n\\label{sec:augmentation}\nTo obtain better generalization and data-efficiency of the model, we perform data augmentation on both images and texts during the pre-training phase to construct more image-text pairs. \nWe apply AutoAugment \\citep{krizhevsky2012imagenet,sato2015apac,cubuk2019autoaugment,hoffer2020augment} for image augmentation, following the SOTA vision recognition methods \\citep{touvron2021training,xie2020self}.\nTo ensure the augmented texts are semantically similar as the original one, for\ntext augmentation, we rewrite the original text using \nback-translation \\citep{xie2020unsupervised,sennrich2016improving}.\nSpecifically,\nthe texts are first translated to the target language and then translated back to the source language. \nWe choose German and Russian as the target language and get extra two texts for each image-text pair. 
\nWhen constructing a batch of image-text pairs during the pre-training, the text of each image-text pair is randomly sampled from the three candidate texts, i.e., the original text and two back-translated texts.\n\n\n\vspace{-1mm}\n\subsection{Pre-training Dataset}\n\label{sec:dataset_construction}\n\nA sufficiently large image-text dataset is a prerequisite for vision-language pre-training. \nRecent CLIP \citep{radford2021learning} and ALIGN \citep{jia2021scaling} construct datasets with 400M and 1800M image-text pairs, respectively. \nIn this work, we also construct a large-scale dataset called FILIP300M, which consists of 300M image-text pairs and covers broad vision and language concepts.\nSpecifically, we collect image-text pairs from the Internet,\nand apply the following image- and text-based filtering rules to clean the data.\nFor image-based filtering, we remove the images whose shorter dimension is smaller than 200 pixels and whose aspect ratio is larger than 3. \nFor text-based filtering, we keep only English texts, and exclude the meaningless ones, e.g., img\_0.jpg. \nWe also discard image-text pairs whose texts are repeated over 10 times. \nBesides, we also use 3 public datasets, including Conceptual Captions 3M (CC3M) \citep{sharma2018conceptual}, Conceptual 12M (CC12M) \citep{changpinyo2021cc12m} and Yahoo Flickr Creative Commons 100M (YFCC100M) \citep{thomee2016yfcc100m}. We apply the same filtering rules on YFCC100M. Finally, we use about 340M image-text pairs for pre-training. \nDespite using a smaller training dataset than CLIP and ALIGN, our models still outperform them in most downstream tasks (see Section~\ref{sec:expt}).\n\n\begin{table}\n\vspace{-2mm}\n\Large\n\caption{Top-1 accuracy (\%) of zero-shot image classification on 12 datasets. 
Our FILIP boosts the average accuracy by 3$\sim$5\%.}\n\label{zeroshot-classification-table}\n\vspace{-1mm}\n\begin{center}\n\resizebox{\textwidth}{!}{\n\begin{tabular}{l|cccc cccc cccc |c}\n\toprule\n&\rotatebox{90}{\Large{CIFAR10}}~~ &\n\rotatebox{90}{\Large{CIFAR100}}~~ &\n\rotatebox{90}{\Large{Caltech101}}~~ &\n\rotatebox{90}{\Large{StanfordCars}}~~ &\n\rotatebox{90}{\Large{Flowers102}}~~ &\n\rotatebox{90}{\Large{Food101}}~~ &\n\rotatebox{90}{\Large{SUN397}}~~ &\n\rotatebox{90}{\Large{DTD}}~ &\n\rotatebox{90}{\Large{Aircrafts}}~~ &\n\rotatebox{90}{\Large{OxfordPets}}~~ &\n\rotatebox{90}{\Large{EuroSAT}}~~ & \n\rotatebox{90}{\Large{\textbf{ImageNet}}}~~ &\n\rotatebox{90}{\Large{\textbf{Average}}}~~ \\\n\midrule\nCLIP-ViT-B\/32 & 91.3 & 65.1 & 87.9 & 59.4 & 66.7 & 84.4 & 63.2 & 44.5 & 21.2 & 87.0 & 49.4 & 63.2 & 65.3 \\\n$\text{FILIP}_{\text{base}}$-ViT-B\/32 & 86.9 & 65.5 & 91.9 & 55.4 & 85.3 & 82.8 & 69.1 & 49.3 & 57.2 & 88.1 & 49.9 & 68.8 & \textbf{70.9}$^{+5.6}$ \\\n\n\midrule\nCLIP-ViT-L\/14 & 96.2 & 77.9 & 92.6 & 77.3 & 78.7 & 92.9 & 67.7 & 55.3 & 36.1 & 93.5 & 59.9 & 75.3 & 75.3 \\ \n$\text{FILIP}_{\text{large}}$-ViT-L\/14 & 95.7 & 75.3 & 93.0 & 70.8 & 90.1 & 92.2 & 73.1 & 60.7 & 60.2 & 92.0 & 59.2 & 77.1 & \textbf{78.3}$^{+3.0}$ \\\n\n\bottomrule\n\end{tabular}}\n\end{center}\n\vspace{-2mm}\n\end{table}\n\n\n\n\n\n\section{Conclusion and Future Work}\n\label{sec:conclusion}\nThis paper introduces FILIP, a simple yet generic framework towards fine-grained vision-language pre-training.\nBy using a token-wise maximum similarity, our method learns fine-grained representations for patches in the images and words in the sentences.\nWhile it achieves competitive results against several large-scale multi-modal pre-training models on various downstream tasks, both its architecture and training procedure can still be optimized to improve its performance. 
In the future, a\nmore advanced image encoder as well as a well-designed interaction layer can be used to boost the performance.\nFurthermore, we can add masked language\/image modeling losses to support more generation tasks.\nTo this end, we hope to extend FILIP as a generic and unified interface for solving a large variety of vision-language tasks.\n\n\n\section{Related Work}\n\paragraph{Vision-Language Pre-training Models.} The pre-train-and-fine-tune scheme has achieved great success in the domains of natural language processing~\citep{devlin2018bert,brown2020language} and computer vision~\citep{dosovitskiy2020image}.\nIt is then naturally extended to a joint cross-modal domain of \nVision-and-Language Pre-training (VLP). \nThe pre-training datasets of recent VLP models \ninclude publicly available \ndatasets \nlike YFCC100M \citep{thomee2016yfcc100m} and CC12M \citep{changpinyo2021cc12m}, as well as \nlarger-scale datasets with more than 100M samples\nin CLIP \citep{radford2021learning} and ALIGN \citep{jia2021scaling}, which are shown to be even more powerful. 
\nThe pre-training tasks of VLP models fall into two categories: image-text contrastive learning tasks and Language Modeling (LM) based tasks:\n(i) CLIP \citep{radford2021learning}, ALIGN \citep{jia2021scaling} and UNIMO \citep{li2020unimo} make use of cross-modal contrastive learning which aligns the textual and visual information into a unified semantic space; (ii) VisualBERT \citep{li2019visualbert}, UNITER \citep{chen2020uniter}, M6 \citep{lin2021m6}, and DALL-E \citep{ramesh2021zeroshot} employ LM-like objectives, including both masked LM (e.g., Masked Language\/Region Modeling) and autoregressive LM (e.g., image captioning, text-grounded image generation).\nOn the other hand, some methods rely on a pre-trained object detection model such as Faster-RCNN \citep{ren2015faster} to extract regional image\nfeatures offline, which requires extra labeled bounding-box data and makes the approach less scalable.\nRecent efforts such as SOHO \citep{huang2021seeing} and SimVLM \citep{wang2021simvlm} try to eliminate this burden via a visual dictionary or PrefixLM \n\citep{raffel2020exploring}. \nIn this paper, we \n directly \nlearn fine-grained vision-language representations\nin an end-to-end and simpler manner while maintaining the benefit of inference efficiency.\n\n\paragraph{Multi-Modality Interaction Mechanism.} \nThe core of vision-language pre-training models lies in modeling the interaction between the two modalities. \nThere are mainly two types of cross-modal interaction architectures: single-stream and dual-stream models. \nSingle-stream models like VisualBERT \citep{li2019visualbert} and ViLT \citep{kim2021vilt} directly concatenate the patch-wise or regional visual features and textual embeddings \nand feed them to the transformer-based model.\nDual-stream models such as ViLBERT \citep{lu2019vilbert} and CLIP~\citep{radford2021learning} have separate encoders for different modalities. 
\nThis allows flexible use of different models for different modalities, and\n efficient inference for downstream tasks like image-text retrieval, through the ability to decouple the encoders and pre-compute image\/text features offline.\nIn this paper, while following the dual-stream approach for its flexible and efficient inference,\nwe further propose a new multi-modal interaction mechanism to capture the fine-grained representations. \n\n\n\n\section{Introduction}\nLarge-scale Vision-Language Pre-training (VLP) models like CLIP \citep{radford2021learning} and ALIGN \citep{jia2021scaling} have recently demonstrated success across \nvarious downstream tasks. They learn visual and textual representations from millions of image-text pairs collected from the Internet and show superior zero-shot ability and robustness. The core technique of these models lies in the global contrastive alignment of the images and texts through a dual-stream model. Such an architecture is inference-efficient for downstream tasks like retrieval because the encoders for the two modalities can be decoupled and the image or text representations can be pre-computed offline. 
\nHowever, CLIP and ALIGN\nmodel the cross-modal interaction solely via the similarity of the global feature of each modality, lacking the ability to capture finer-level information like the relationship between visual objects and textual words.\nIn this paper, we develop a simple yet efficient cross-modal finer-grained interaction mechanism for large-scale VLP.\n\nTo achieve finer-grained cross-modal interaction, previous work mainly exploits two kinds of methods.\n(1) One line of work \citep{chen2020uniter,li2020oscar,m5product,li2020unimo,zhang2021vinvl,capture} uses a pre-trained object detector to extract region-of-interest (ROI) features from images, and then fuses them with the paired text through a VLP model.\nThis design complicates the pre-training due to pre-computing and storing a large number of ROI features. \nIn addition, the zero-shot ability of these approaches is usually limited by the predefined number of classes and their performance is also restricted by the quality of the detector.\n(2) Another line of work \citep{li2021align, kim2021vilt} \nenforces the token-wise or patch-wise representations from both modalities into the same space and models these finer-grained interactions via cross-attention \citep{li2021align} or self-attention \citep{kim2021vilt}. \nHowever, these methods are usually less efficient in terms of both training and inference. In particular, during training, cross-attention in \citep{li2021align} needs to be performed in an encoder-decoder structure, while the complexity of the self-attention \citep{kim2021vilt} grows quadratically with the length of the prolonged concatenated sequences of both modalities. 
During inference, the data from both modalities are intertwined to compute the cross-attention or self-attention, and cannot be pre-computed offline as in dual-stream models like CLIP and ALIGN.\nThis can be less efficient for downstream tasks like image\/text retrieval and image classification.\n\nIn this paper, we propose a large-scale Fine-grained Interactive Language-Image Pre-training framework named FILIP \nto address these limitations.\nInspired by \cite{khattab2020colbert}, \nwe model the fine-grained semantic alignment through a novel cross-modal\nlate interaction mechanism in the contrastive loss, instead of using cross or self-attention.\nSpecifically, our fine-grained contrastive learning uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective.\nIn this way,\nFILIP successfully leverages the finer-grained expressiveness\namong image patches and textual words \nwhile simultaneously gaining the ability to pre-compute image and text representations offline.\nUnlike \cite{khattab2020colbert}, we discard the padded tokens and use the average \ninstead of the summation \nof token-wise maximum similarities when computing the image-text alignment,\nwhich enhances the cross-modal representation learning and stabilizes training.\nFurthermore, we construct a large-scale pre-training dataset named FILIP300M from the Internet.\nData cleaning and image-text data augmentation are also explored and proved useful in this work.\n\nExtensive experiments show that by effectively learning fine-grained representations,\nFILIP achieves state-of-the-art performance on multiple downstream vision-language tasks, including zero-shot image classification and image-text retrieval. 
For example, FILIP reaches 77.1\% top-1 accuracy for zero-shot ImageNet classification, surpassing CLIP with less training data.\nVisualizations on word-patch alignment further show that FILIP learns meaningful finer-grained features with promising localization ability. \n\n\n\n\n\n\n\section{Experiments}\n\label{sec:expt}\n\n\vspace{-1mm}\n\subsection{Experimental Setup}\n\label{sec:experiment_details}\n\textbf{Model Architectures.}\nWe train two \nmodels from scratch, i.e., $\text{FILIP}_{\text{base}}$ and $\text{FILIP}_{\text{large}}$. \nThe model architectures follow\nCLIP \citep{radford2021learning}, i.e., the image encoder is ViT-B\/32 for $\text{FILIP}_{\text{base}}$ and ViT-L\/14 for $\text{FILIP}_{\text{large}}$. More details can be found in Appendix \ref{apdx:expt_setting}.\n\n\textbf{Pre-training Details.}\nTo save memory and scale up the batch size, automatic mixed-precision \citep{micikevicius2018mixed} and gradient checkpointing \citep{griewank2000algorithm, chen2016training} are used.\nThe input images are resized to $224 \times 224$ resolution during pre-training and the maximum length of the text is limited to $77$ tokens following \citet{radford2021learning}.\nThe training is mainly conducted on Nvidia V100 GPUs and Ascend cards.\n$\text{FILIP}_{\text{base}}$ is trained on 128 cards for about 9 days and $\text{FILIP}_{\text{large}}$ takes about 24 days to train on 192 cards. \nUnless otherwise specified, we use $\text{FILIP}_{\text{large}}$ to compare with other methods and $\text{FILIP}_{\text{base}}$ for ablation.\nWe train both models using the LAMB optimizer \citep{2019Large} and a cosine learning rate schedule \citep{2016SGDR} with a linear warmup. \nWeight decay regularization is applied to all parameters except the bias, layer normalization, token embedding, positional embedding and temperature in the contrastive loss. 
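For illustration, the cosine schedule with linear warmup can be sketched as follows (a generic implementation with hypothetical argument names; the actual hyperparameter values used for each model are those listed in the appendix, not the ones shown here):

```python
import math

def lr_at_step(step, total_steps, warmup_steps, peak_lr, min_lr=0.0):
    """Cosine learning-rate schedule with linear warmup.

    Linearly ramps from 0 to peak_lr over warmup_steps, then decays to
    min_lr following a half-cosine over the remaining steps.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The schedule peaks exactly at the end of warmup and reaches `min_lr` at the final step.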
\nDetailed values of hyperparameters for different datasets and models can be found in Appendix \\ref{apdx:expt_setting}.\n\n\n\n\\vspace{-1mm}\n\\subsection{Zero-Shot Image Classification}\n\\label{sec:zeroshot_classification}\nIn this section, we evaluate our proposed FILIP on the zero-shot image classification task.\nWe compare our FILIP with CLIP \\citep{radford2021learning} on 12 downstream classification datasets, using the same evaluation setting as in CLIP. As described in Section \\ref{sec:prompt_ensemble}, we apply a set of prompts for each dataset and ensemble them to get the final results, see Appendix \\ref{apdx:prompt_template} for details. We only compare the zero-shot performance with CLIP here as ALIGN does not release its model and the related performances are not reported in their paper.\n\nTable \\ref{zeroshot-classification-table} shows the results on 12 datasets.\nDespite using less training data (340M vs. 400M), both $\\text{FILIP}_{\\text{base}}$ and $\\text{FILIP}_{\\text{large}}$ considerably outperform their CLIP counterparts in terms of average top-1 accuracy over 12 datasets, i.e., achieving absolute improvements of 5.6\\% and 3.0\\%, respectively. In particular, our FILIP surpasses CLIP on ImageNet, the largest dataset among 12 datasets.\nFILIP also achieves substantial performance gains on some domain-specific datasets, e.g., for Aircrafts, the two FILIP models reach a 30\\% improvement over CLIP on average. 
We speculate this is because,\nunlike CLIP which aggregates the information of the whole image into the representation of the [CLS] token, our proposed FILIP model focuses more on the target object by directly aligning the image patches corresponding to the target object with the textual tokens corresponding to the class label (visualizations of word-patch alignment are in Section \\ref{sec:Visualization of Fine-grained Alignment}).\n\n\n\n\\begin{table}\n\\vspace{-3mm}\n\\center\n\\caption{Results of zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. The last two rows (marked with *) report the zero-shot results on Flickr30K dataset of model fine-tuned on MSCOCO dataset, following the setting of ALBEF \\citep{li2021align}.}\n\\huge\n\\label{tab:zero-shot-retrieval-table}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ccccccccccccc}\n\\toprule\n & \\multicolumn{6}{c}{Flickr30K} & \\multicolumn{6}{c}{MSCOCO} \\\\\n & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} \\\\\n & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\\\\n\\midrule\nUnicoder-VL & 64.3 & 85.8 & 92.3 & 48.4 & 76.0 & 85.2 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nImageBERT & 70.7 & 90.2 & 94.0 & 54.3 & 79.6 & 87.5 & 44.0 & 71.2 & 80.4 & 32.3 & 59.0 & 70.2 \\\\\nUNITER & 83.6 & 95.7 & 97.7 & 68.7 & 89.2 & 93.9 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nCLIP & 88.0 & 98.7 & 99.4 & 68.7 & 90.6 & 95.2 & 58.4 & 81.5 & 88.1 & 37.8 & 62.4 & 72.2 \\\\\nALIGN & 88.6 & 98.7 & 99.7 & \\textbf{75.7} & \\textbf{93.8} & \\textbf{96.8} & 58.6 & 83.0 & 89.7 & 45.6 & 69.8 & 78.6 \\\\\n\\textbf{FILIP} & \\textbf{89.8} & \\textbf{99.2} & \\textbf{99.8} & {75.0} & {93.4} & {96.3} & \\textbf{61.3} & \\textbf{84.3} & \\textbf{90.4} & \\textbf{45.9} & \\textbf{70.6} & \\textbf{79.3} \\\\ \\hline\nALBEF* & 94.1 & 99.5 & 99.7 & 82.8 & 96.3 & 98.1 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ 
\\\\\n\\textbf{FILIP}* & \\textbf{95.4} & \\textbf{99.8} & \\textbf{100.0} & \\textbf{84.7} & \\textbf{97.0} & \\textbf{98.7} & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\n\\bottomrule \n\\end{tabular}}\n\\vspace{-3mm}\n\\end{table}\n\n\n\n\\vspace{-1mm}\n\\subsection{Image-Text Retrieval}\n\\label{sec:image_text_retrieval}\nImage-text retrieval consists of two sub-tasks: image-to-text retrieval and text-to-image retrieval. \nWe evaluate our FILIP model on two retrieval benchmark datasets: Flickr30K \\citep{dataset_flickr30k} and MSCOCO \\citep{dataset_mscoco}, under both zero-shot and fine-tuned settings. \nMore details of experimental setting can be found in Appendix \\ref{apdx:expt_setting}.\n\nTables \\ref{tab:zero-shot-retrieval-table} and \\ref{tab:finetuned-retrieval-table} show the results of \nzero-shot and fine-tuned \nimage-text retrieval,\nrespectively. We compare our FILIP model against methods with complex attention layers including Unicoder-VL \\citep{unicoder_vl}, ImageBERT \\citep{imagebert}, UNITER \\citep{chen2020uniter}, VILLA \\citep{villa}, ERNIE-ViL \\citep{ernie_vil}, Oscar \\citep{li2020oscar}, VinVL \\citep{zhang2021vinvl}, ALBEF \\citep{li2021align}, and methods trained on larger-scale image-text datasets including CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling}. As we can see, FILIP achieves state-of-the-art performances under all metrics on both Flickr30K and MSCOCO datasets, except for zero-shot text-to-image retrieval on Flickr30K, where FILIP achieves competitive performance with SOTA. 
For zero-shot image-to-text retrieval on MSCOCO dataset, the absolute R@1 of our proposed FILIP is 2.7\\% higher than ALIGN, which is trained on a much larger dataset.\n\n\\begin{table}\n\\vspace{-3mm}\n\\center\n\\caption{Results of \nfine-tuned \nimage-text retrieval on Flickr30K and MSCOCO datasets.}\n\\Large\n\\label{tab:finetuned-retrieval-table}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{ccccccccccccc}\n\\toprule\n & \\multicolumn{6}{c}{Flickr30K} & \\multicolumn{6}{c}{MSCOCO} \\\\\n & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} & \\multicolumn{3}{c}{image-to-text} & \\multicolumn{3}{c}{text-to-image} \\\\\n & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\\\\n\\midrule\nUnicoder-VL & 86.2 & 96.3 & 99.0 & 71.5 & 90.9 & 94.9 & 62.3 & 87.1 & 92.8 & 48.4 & 76.7 & 85.9 \\\\\nImageBERT & 87.0 & 97.6 & 99.2 & 73.1 & 92.6 & 96.0 & 66.4 & 89.8 & 94.4 & 50.5 & 78.7 & 87.1 \\\\\nUNITER & 87.3 & 98.0 & 99.2 & 75.6 & 94.1 & 96.8 & 65.7 & 88.6 & 93.8 & 52.9 & 79.9 & 88.0 \\\\\nVILLA & 87.9 & 97.5 & 98.8 & 76.3 & 94.2 & 96.8 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nERNIE-ViL & 88.1 & 98.0 & 99.2 & 76.7 & 93.6 & 96.4 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\\\\nOscar & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 73.5 & 92.2 & 96.0 & 57.5 & 82.8 & 89.8 \\\\\nVinVL & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & 75.4 & 92.9 & 96.2 & 58.8 & 83.5 & 90.3 \\\\\nALIGN & 95.3 & 99.8 & 100.0 & 84.9 & 97.4 & 98.6 & 77.0 & 93.5 & 96.9 & 59.9 & 83.3 & 89.8 \\\\\nALBEF & 95.9 & 99.8 & \\textbf{100.0} & 85.6 & 97.5 & 98.9 & 77.6 & 94.3 & 97.2 & 60.7 & \\textbf{84.3} & 90.5 \\\\\nOur FILIP & \\textbf{96.6} & \\textbf{100.0} & \\textbf{100.0} & \\textbf{87.1} & \\textbf{97.7} & \\textbf{99.1} & \\textbf{78.9} & \\textbf{94.4} & \\textbf{97.4} & \\textbf{61.2} & \\textbf{84.3} & \\textbf{90.6} \\\\\n\\bottomrule\n\\end{tabular}}\n\\vspace{-3mm}\n\\end{table}\n\n\n\n\\begin{table}\n\\vspace{-3mm}\n\\caption{Ablation study of different components on 
pre-training subset of YFCC100M. I2T and T2I are abbreviations for image-to-text and text-to-image retrieval, respectively. ``ZS'' means zero-shot performance. Underlined numbers indicate\nthe highest improvements for the corresponding metrics. }\n\label{ablation-yfcc-table}\n\begin{center}\n\begin{tabular}{l|rrrrr}\n\toprule \multirow{2}{*}{ Model } & \multicolumn{4}{c} { MSCOCO } & ImageNet \\\n & I2T R@1 & I2T R@5 & T2I R@1 & T2I R@5 & ZS Top1 \\\n\midrule Baseline (ViT-B\/32) & $25.0$ & $49.5$ & $14.7$ & $34.7$ & 30.4 \\\n~w\/ image augmentation & $26.1$ & $51.8$ & $16.5$ & $37.5$ & $ 32.5 $ \\\ \n~w\/ back translation & $29.2$ & $55.0$ & $17.9$ & $39.8$ & $33.9$ \\\n~w\/ cross-modal late interaction & $\underline{30.5}$ & $\underline{55.3}$ & $\underline{18.5}$ & $\underline{40.0}$ & $\underline{34.3}$\\\nOur $\text{FILIP}_{\text{base}}$ & $\mathbf{33.4}$ & $\mathbf{60.1}$ & $\mathbf{23.0}$ & $\mathbf{46.2}$ & $\mathbf{37.8}$ \\ \n\bottomrule\n\end{tabular}\n\end{center}\n\vspace{-3mm}\n\end{table}\n\n\n\begin{table}[t!]\n\vspace{-3mm}\n \caption{Efficiency study of the cross-modal late interaction. ``orig'' and ``late'' stand for the contrastive loss based on the original cosine similarity \n in CLIP and our proposed cross-modal late interaction, respectively. ``ZS'' means zero-shot performance.\n We report results for ViT-B\/32 trained on filtered YFCC100M with 8 V100 GPUs, with a batch size of 512 per GPU. Training time and memory consumption are tested using the same gradient checkpoint configuration. 
\n \n * denotes our final setting used in other experiments.}\n \\label{tab:efficiency-late}\n \\begin{center}\n\n \\begin{tabular}{cccccccc}\n \\toprule \\multirow{2}{*}{ Loss } &Embed&Embed & Token & Training time & Memory & ImageNet \\\\\n & dim & precision& \\% & (sec\/iter) & (MB) & ZS Top1 \\\\\n \\midrule \n orig (baseline)& 512 & fp32 & - &1.31 & 14300 & 30.4 \\\\\n late & 512 & fp32 &100\\% & 2.85 & 26000 & 34.6 \\\\\n late & 512 & fp16 &100\\% & 2.67 & 23468 & 34.5 \\\\\n late & 256 & fp16 &100\\% & 2.31 & 22382 & \\textbf{35.2} \\\\\n late & 256 & fp16 &50\\% & 1.61 & 16336 & 34.5 \\\\\n late* & 256 & fp16 &25\\% & 1.39 & 16100 & 34.3 \\\\\n \n \n \n \n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\vspace{-3mm}\n\\end{table}\n\n\\vspace{-1mm}\n\\subsection{Ablation Study}\n\\label{sec:ablation_yfcc}\n\n\\textbf{Effectiveness of Each Component.}\nWe study the effectiveness of each component in FILIP, i.e., image\/text augmentations and cross-modal late interaction. Experiments are conducted on \n$\\text{FILIP}_{\\text{base}}$,\nwith a filtered subset of YFCC100M as the training dataset (as described in Section \\ref{sec:dataset_construction}),\non both zero-shot retrieval and classification tasks. \nWe measure models' performance on MSCOCO zero-shot image-text retrieval and ImageNet zero-shot classification, which are two effective indicators \nfor the quality of the learned vision-language representations. \n\nTable \\ref{ablation-yfcc-table} reports the results. As can be seen, all three components \nare beneficial for both tasks.\nDespite the simple design, cross-modal late interaction brings significant performance improvements over the baseline (the vanilla CLIP ViT-B\/32), with an absolute R@1 gain of 5.5\\% (resp. 3.8\\%) for image-to-text (resp. text-to-image) retrieval on MSCOCO and an absolute top-1 accuracy gain of 3.9\\% for zero-shot classification on ImageNet. 
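The quoted gains can be read directly off Table \ref{ablation-yfcc-table}; as a quick arithmetic check, the following Python snippet recomputes them (numbers transcribed from the table):

```python
# Absolute gains of "w/ cross-modal late interaction" over the ViT-B/32
# baseline, with numbers transcribed from the ablation table.
baseline = {"I2T R@1": 25.0, "T2I R@1": 14.7, "ImageNet ZS Top1": 30.4}
late = {"I2T R@1": 30.5, "T2I R@1": 18.5, "ImageNet ZS Top1": 34.3}

gains = {k: round(late[k] - baseline[k], 1) for k in baseline}
print(gains)  # {'I2T R@1': 5.5, 'T2I R@1': 3.8, 'ImageNet ZS Top1': 3.9}
```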

Further improvements are observed when all components are combined together.


\begin{figure}
\vspace{-3mm}
	\centering
	\includegraphics[width=\linewidth]{figures/ImageNetVis_vsCLIP.pdf}
	\vspace{-5mm}
	\caption{Visualizations of word-patch alignment for 4 classes of the ImageNet dataset, using ``a photo of a \{label\}.'' as the prompt. Numbers in the parentheses after the class label indicate the location indices of the class label in the tokenized textual sequence. Correct predictions are highlighted by opaque patches with the class label indices in red.}
	\label{fig:single_obj}
	\vspace{-3mm}
\end{figure}


\textbf{Efficiency Study of Cross-modal Late Interaction.}
\label{sec:efficiency-study-of-late-loss}
Since the late interaction mechanism in Section~\ref{sec:Cross-modal-late} requires calculating the similarity between all visual and textual tokens,
its efficiency can become a bottleneck when
employed in large-scale distributed training.
As described in Section \ref{sec:Cross-modal-late}, we make several attempts to address this issue.
Table \ref{tab:efficiency-late} shows how training efficiency and zero-shot classification accuracy on ImageNet change as
these attempts are applied.
As can be seen, these attempts improve the efficiency of late interaction
with only a marginal accuracy drop. With all three attempts combined, training is only slightly slower and memory consumption only slightly larger than with the original loss in CLIP.


\vspace{-1mm}
\subsection{Visualization of Fine-grained Alignment}
\label{sec:Visualization of Fine-grained Alignment}

In this section, we visualize FILIP's capability of capturing fine-grained cross-modal correspondence using the method of word-patch alignment. For a fair comparison, we use our $\text{FILIP}_{\text{base}}$ trained on YFCC100M and CLIP's ViT-B/32, which are of the same size, for visualization. Each image is patchified into $7\times 7$ image patches. 

More visualization results can be found in Appendix~\ref{apdx:more_vis}.

\textbf{Visualization Method.} The word-patch alignment is performed based on the token-wise similarity between image patches and textual tokens. Specifically, for the $k$-th image patch, the location index of the textual token with the largest similarity to it ($m_k^I$ in Equation~(\ref{eq:late_sim_i})) is taken as its predicted label and placed at the center of the patch.
Take the class ``balloon'' as an example.
There are 8 tokens in the tokenized textual sequence ``[BOS] a photo of a balloon. [EOS]'',
and the location index of the class label ``balloon'' is ``5''.
Note that one class label may be tokenized into more than one token.
Location indices of textual tokens corresponding to the class label are highlighted in red, while the others are marked in white.
A model that learns fine-grained representations should assign the image patches covering the target object to the red indices.

\textbf{Observations.}
Figure~\ref{fig:single_obj} shows the word-patch alignment results for FILIP and CLIP on 4 classes from the ImageNet dataset.
As can be seen,
FILIP exhibits a finer-grained understanding of
an image in the following aspects.
(i) A single object:
in the visualization of the class ``small white butterfly'', the image patches covering the object are all classified correctly;
(ii) The same object in different shapes:
in the visualizations of the classes ``balloon'' and ``lifeboat'', image patches corresponding to all target objects, despite their different shapes and locations, are correctly classified;
(iii) Key components of an object: for the class ``electric locomotive'', there are two key components crucial to correctly classifying the image, i.e., ``electric'' and ``locomotive'', whose corresponding textual token indices are ``5'' and ``6'', respectively. 
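In code, this prediction rule is simply a row-wise argmax over the patch-token similarity matrix. The following numpy sketch mimics the ``balloon'' example; the one-hot toy features are made up for illustration, and only the argmax rule itself is taken from the text:

```python
import numpy as np

def predict_patch_labels(patch_feats, token_feats):
    # For the k-th image patch, return the location index of the textual token
    # with the largest cosine similarity (the m_k^I of the visualization method).
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = token_feats / np.linalg.norm(token_feats, axis=1, keepdims=True)
    return (p @ t.T).argmax(axis=1)

# Toy setup: 8 textual tokens ("[BOS] a photo of a balloon . [EOS]") with
# one-hot features, and 3 image patches; patches 0 and 1 are built to be
# closest to token 5 (the class label "balloon"), patch 2 to token 2.
tokens = np.eye(8)
patches = np.stack([0.9 * tokens[5] + 0.1 * tokens[1],
                    tokens[5],
                    0.8 * tokens[2] + 0.2 * tokens[7]])
print(predict_patch_labels(patches, tokens))  # [5 5 2]
```

In the actual visualization, patches whose predicted index falls on a class-label token are the ones drawn opaque with red indices.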
As can be seen, image patches matching these two key components are both correctly classified.
In contrast, CLIP cannot correctly align image patches with the corresponding textual tokens.
Compared with \cite{kim2021vilt}, which uses an extra optimal transport step to align the textual word and image patch distributions, our method learns the word-patch alignment automatically.




\section{Appendix}

\subsection{Datasets Summary}
Table \ref{tab:dataset_statistics} shows the number of image-text pairs of each dataset used in different pre-training methods.
\begin{table}[h]
\centering
\caption{Number of image-text pairs used in the pre-training of FILIP, CLIP and ALIGN.}
\label{tab:dataset_statistics}
\begin{tabular}{ c|cccc|c|c } 
 \hline
\multirow{2}{*}{} & \multicolumn{4}{c|} { FILIP } & { CLIP } & { ALIGN } \\
& CC3M & CC12M & YFCC100M & FILIP300M & \citep{radford2021learning} & \citep{jia2021scaling} \\ 
 \hline
 \# & 3M & 10M & 26M & 300M & 400M & 1800M \\
 \hline
\end{tabular}
\end{table}




\subsection{Detailed Experimental Settings}
\label{apdx:expt_setting}


\begin{table}[htbp]
\caption{The architecture parameters for FILIP models.}
\label{tab:Filip_model_Hyperparameter}
 \centering
 \begin{tabular}{l|cc ccc ccc}
 \toprule 
 \multirow{2}{*}{ Model } & Embedding & Input & \multicolumn{3}{c}{ Image Encoder } & \multicolumn{3}{c}{ Text Encoder } \\
 & dimension & resolution & \#layers & width & \#heads & \#layers & width & \#heads \\
 \midrule 
 $\text{FILIP}_{\text{base}}$ & 256 & $224\times 224$ & 12 & 768 & 12 & 12 & 512 & 8 \\
 $\text{FILIP}_{\text{large}}$ & 256 & $224\times 224$ & 24 & 1024 & 16 & 12 & 768 & 12 \\
 \bottomrule
 \end{tabular}
\end{table}


\paragraph{Model Architectures.} 
We follow the same architecture design as CLIP for both $\text{FILIP}_{\text{base}}$ and 
$\\text{FILIP}_{\\text{large}}$, except that we reduce the embedding dimension from 512\/768 to 256 for the efficiency of loss computation.\nTable \\ref{tab:Filip_model_Hyperparameter} describes the details of architectures.\n\n\\begin{table}\n \\centering\n \\caption{Common hyperparameters used for FILIP pre-training.}\n \n \\label{tab:pre-training hyperparams}\n \\centering\n \\begin{tabular}{l|c}\n \\toprule Hyperparameter & Value \\\\\n \\midrule\n Vocabulary size & 49408 \\\\\n Initial temperature & $0.07$ \\\\\n LAMB $\\beta_{1}$ & $0.9$ \\\\\n LAMB $\\beta_{2}$ & $0.999$ \\\\\n LAMB $\\epsilon$ & $10^{-4}$ \\\\\n Warm-up iters & 3000 \\\\\n Training epochs & 30 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\end{table}\n \n\\begin{table}\n \\centering\n \\caption{Model- and dataset-specific hyperparameters used for FILIP pre-training.\n \n Numbers in batch size represent the total batch size across all workers and are calculated as: batch size per GPU $\\times$ \\#GPUs. FILIP340M is the combination of FILIP300M, YFCC100M, CC12M and CC3M.}\n \\label{tab:Model_and_dataset_specific hyperparameters}\n \\begin{minipage}{\\textwidth}\n \\centering\n \\begin{tabular}{l|l|cccc}\n \\toprule \n Model & Dataset & Batch size & Base LR & Weight decay & \\\\\n \\midrule \n $\\text{FILIP}_{\\text{base}}$ & YFCC100M & $1024 \\times 8 $ & $6 \\times 10^{-3}$ & 3e-2 \\\\\n $\\text{FILIP}_{\\text{base}}$ & FILIP340M & $ 320 \\times 128 $ & $ 2 \\times 10^{-3}$ & 3e-3 \\\\\n \\midrule\n $\\text{FILIP}_{\\text{large}}$ & FILIP340M & $ 160 \\times 192 $ & $ 8 \\times 10^{-4}$ & 3e-3 \\\\ \n \n \n \n \n \n \\bottomrule\n \\end{tabular}\n \\end{minipage}\n\\end{table}\n\n\n\\paragraph{Details for Pre-training and Hyperparameters.} \nFor the implementation of the contrastive loss, following CLIP \\citep{radford2021learning} and ALIGN \\citep{jia2021scaling}, we also set the temperature in the softmax function to be a learnable parameter and initialize it as 0.07. 

For pre-training, we use the LAMB optimizer as implemented in cybertronai's open-source repository (\url{https://github.com/cybertronai/pytorch-lamb}). 
For the learning rate schedule, we first assign a base learning rate and then linearly warm it up to the peak learning rate, which is determined from the effective total batch size by a square-root scaling strategy, $peak\_lr = base\_lr \times \sqrt{\frac{total\_bs}{512}}$. 
We note that a large weight decay is crucial for stabilizing training and improving generalization.
Specifically, we found training stability to be a challenging issue when applying mixed-precision training to large-scale models, i.e., the training is extremely unstable and NaN losses occur easily. The recent works DALL-E \citep{ramesh2021zeroshot} and Cogview \citep{ding2021cogview} also noticed this issue and provided their own solutions. 
However, we found that simply increasing the weight decay and removing the weight decay of specific parameters, as described in Section \ref{sec:experiment_details}, works in our case.
The base learning rate and weight decay are selected manually by observing the performance at an early stage of training.
Table \ref{tab:pre-training hyperparams} summarizes the common hyperparameters and Table \ref{tab:Model_and_dataset_specific hyperparameters} shows the model- and dataset-specific hyperparameters for FILIP pre-training.




\paragraph{Details for Image-text Retrieval.} Following previous works \citep{jia2021scaling,li2021align}, for Flickr30K, we test on the 1K test set with or without fine-tuning on the 30K training set, while for MSCOCO, we test on the 5K test set with or without fine-tuning on the 113K training set. We use the similarity between image and text for ranking, and use the contrastive loss for fine-tuning. 
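As an illustration of the square-root warm-up rule used for pre-training, the snippet below computes the peak learning rate implied by the $\text{FILIP}_{\text{base}}$/YFCC100M row of Table \ref{tab:Model_and_dataset_specific hyperparameters} (the helper name is ours):

```python
import math

def peak_lr(base_lr, total_batch_size, ref_batch_size=512):
    # peak_lr = base_lr * sqrt(total_bs / 512): the warm-up target of the
    # square-root scaling strategy described above.
    return base_lr * math.sqrt(total_batch_size / ref_batch_size)

# FILIP_base on YFCC100M: base LR 6e-3, batch size 1024 per GPU x 8 GPUs.
print(peak_lr(6e-3, 1024 * 8))  # 0.024
```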
Since there are multiple texts for each image in these two datasets, we change the ground-truth label of contrastive loss to consider multiple positives, by assigning a probability of 1\/\\#positive to each positive following ALBEF~\\citep{li2021align}. Besides, we also use prompts during evaluation for both datasets, see Appendix \\ref{apdx:prompt_template} for details. Table \\ref{tab:hyperparameter_image_text_retrieval} shows the hyperparameters for image-text retrieval fine-tuning.\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Hyperparameters used for image-text retrieval fine-tuning.}\n \\label{tab:hyperparameter_image_text_retrieval}\n \\begin{tabular}{l|c}\n \\toprule Hyperparameter & Value \\\\\n \\midrule\n Image size & 392 $\\times$ 392 \\\\\n Training epochs & 3 \\\\\n Optimizer & LAMB \\\\ \n Batch size & 5120 \\\\\n Base LR & $2 \\times 10^{-4}$ \\\\\n Weight decay & $3 \\times 10^{-4}$ \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\n\\subsection{More visualizations of Word-patch Alignment and Grad-cam Heatmaps}\n\\label{apdx:more_vis}\nIn Figure~\\ref{fig:MoreImageNetVis}, we visualize the cross-modal alignment of the proposed method for more images, in terms of\nboth word-patch alignment as described in Section~\\ref{sec:Visualization of Fine-grained Alignment} and Grad-CAM heatmaps~\\citep{selvaraju2017grad}.\nWe compute the Grad-CAM heatmaps based on the average self-attention maps over the image patches classified to targeted textual tokens (i.e., the textual token(s) corresponding to the class label in the ImageNet dataset) in the last layer of the image encoder. We average the heatmaps over all attention heads.\nAs can be seen, our proposed model learns meaningful alignment between image patches and textual tokens.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/ImageNetVis_appendix.pdf}\n\\caption{More visualizations on different classes of ImageNet dataset. 
Numbers in the parentheses after the class label indicate the location indices of the class label in the tokenized textual sequence. }
\label{fig:MoreImageNetVis}
\end{center}
\end{figure}


\subsection{Prompt Templates for Downstream Tasks}
\label{apdx:prompt_template}
\paragraph{Image Classification.} Table~\ref{tbl:prompt_templates} shows the prompt templates for the different image classification datasets
in the form of ``
$
\text{[prefix] \{label\}, [category description]. [suffix].} 
$
''
in Equation (\ref{eq:prompt_template}).
There are three components to be determined in the template, i.e., the prefix, the category description and the suffix.
For each component, we select several well-performing candidates for each dataset. 
We then use all combinations of the three components as the set of prompt templates for ensembling.
For instance, we use 5 prefixes, no category descriptions, and 6 suffixes for the ImageNet dataset, 
so the total number of prompt templates for this dataset is $5 \times 1 \times 6 = 30$.

\paragraph{Image-text Retrieval.} Following CLIP \citep{radford2021learning}, we use prompts for zero-shot image-text retrieval on both the Flickr30K and MSCOCO datasets.
The prompts are selected by the same rule as described in Section~\ref{sec:prompt_ensemble}, except that we do not use the ``[category description]'' component here. Table \ref{tab:prompts_for_retrieval} shows the prompt templates for zero-shot image-text retrieval on the Flickr30K and MSCOCO datasets. 


\begin{table}[!t]
\small
 \centering
 \caption{Prompt templates used for zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. 
\n }\n \\label{tbl:prompt_templates}\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{m{1.5cm}|m{4cm}|m{2cm}|m{4cm}}\n \t\\toprule\n \tDataset & Prefix & Category description & Suffix \\\\ \\midrule\n \tCIFAR10 & ``a photo of a\", ``a jpeg photo of a\", ``a painting of a\", ``itap of a\", ``graffiti of a\", ``a cartoon\", ``a doodle\" & None & None, ``It's common in daily life\", ``It's cute\", ``It's ugly\", ``It's weird\", ``Hope you like it\" \\\\ \\midrule\n \tCIFAR100 & ``a jpeg photo of a\", ``a painting of a\", ``a good photo of a\", ``a bad photo of a\", ``a photo of a\", ``itap of a\", ``a rendering of a\" & None & None, ``It's common in daily life\", ``It's beautiful\", ``It's ugly\", ``I like it\", ``I take it today\" \\\\ \\midrule\n \n \tCaltech101 & ``a photo of a\", ``a cropped photo of a\", ``a good photo of a\", ``a bad photo of a\" & None & None, ``I like it\", ``I hate it\", ``It's ugly\", ``It's cute\" \\\\ \\midrule\n \tStanford-Car & ``a photo of a\", ``a close-up photo of a\", ``a good photo of a\", ``a bad photo of a\" & ``a type of car\", ``a type of automobile\" & ``I like it\", ``It belongs to my friend\", ``It's brand new\", ``It's popular recently\", ``It's important to me\", ``I take it today\" \\\\ \\midrule\n \tFlowers102 & ``a photo of a (many) \", ``a rendering of a (many) \", ``itap of a (many) \" & ``a type of flower\", ``a type of bloom\" & ``It's beautiful\", ``It's from my best friend\", ``It gives out a sweet perfume\/fragrance\" \\\\ \\midrule\n \tImageNet & ``a photo of a\", \"a good photo of a\", ``a bad photo of a\", ``a close-up photo of a\", ``itap of a\" & None & ``I like it\", ``It's common in daily life\", ``It's not common in daily life\", ``It's ugly\", ``It's cute\", ``It's beautiful\" \\\\ \\midrule\n \tFood101 & ``a photo of my\", ``a close-up photo of my\", ``itap of my\" & ``a type of food\", ``a type of nourishment\" & ``I made it today\", ``I like it\", ``I hate it\", ``It's delicious\", ``It's with nice 
flavour\", ``It's with terrible flavour\", ``It's popular recently\" \\\\ \\midrule\n \tSUN397 & ``a photo of a\", ``a good photo of a\", ``a bad photo of a\", ``a bright photo of a\", a dark photo of a\", ``a black and white photo of a\", ``a nice scene of a\", ``a terrible scene of a\" & None & None, ``I like it\", ``I hate it\", ``It's beautiful\", ``It's common in daily life\", ``It's important to me\" \\\\ \\midrule\n \tDTD & ``itap of a\", ``a close-up photo of a\" & ``texture\", ``surface\", ``material\" & None, ``It's out of style\", ``It's popular in old days\", ``It's ugly\", ``It's beautiful\" \\\\ \\midrule\n \tAircrafts & ``a photo of the\", ``a close-up photo of the\", ``a good photo of the \", ``a pixelated photo of the\" & ``a type of plane\", ``a type of aircraft\", ``a type of airliner\" & None,``I like it\", ``It's important to me\", ``I take it today\", ``Hope you like it\" \\\\ \\midrule\n \tOxford Pet & ``a photo of my\", ``a low resolution photo of my\", ``a good photo of my\" & ``a type of pet\", ``a type of dog or cat\" & None, ``It's cute\", ``It's important to me\", ``I like it\", ``It's beautiful\" \\\\ \\midrule\n \tEuroSAT & ``a photo of a\", ``a painting of a\", ``a cropped photo of a\", ``a good photo of a\", ``a blurry photo of a\" & None, ``an example of aerial or satellite images\" & None, ``I like it\", ``It's taken from an aircraft or some flying object\", ``It's collected by imaging satellites\" \\\\ \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\n\\begin{table}[!t]\n\\small\n \\centering\n \\caption{Prompt templates used for zero-shot image-text retrieval on Flickr30K and MSCOCO datasets. 

 }
 \label{tab:prompts_for_retrieval}
 \resizebox{0.8\textwidth}{!}{
 \begin{tabular}{m{1.5cm}|m{3cm}|m{3cm}|m{1.5cm}}
 	\toprule
 	Dataset & Task & Prefix & Suffix \\ \midrule
 	\multirow{2}{*}{ Flickr30K } & image-to-text retrieval & ``a good photo of the'' & ``I hate it.'' \\ 
 	 & text-to-image retrieval & ``a good photo of'' & None \\ \midrule
 	\multirow{2}{*}{ MSCOCO } & image-to-text retrieval & ``a good photo of'' & ``It is ugly.'' \\ 
 	 & text-to-image retrieval & None & None \\ \bottomrule

 \end{tabular}
 }
\end{table}

In the
same manner, a confluent family of hypergeometric differential systems
with two independent variables is associated with the $2$-dimensional
Garnier system according to the following diagram of degeneration:
\begin{figure}[htbp]
$$
\xymatrix@!C=25pt@R=7pt{
&&
&& \text{$K(1,2,2)$} \ar@{->}[rr] \ar@{->}[rrdd]
&& \text{$K(2,3)$} \ar@{->}[rrd] 
&&
&&
\\
\text{$K(1,1,1,1,1)$} \ar@{->}[rr] 
&& \text{$K(1,1,1,2)$} \ar@{->}[rru] \ar@{->}[rrd] 
&&
&&
&& \text{$K(5)$} 
\\
&&
&& \text{$K(1,1,3)$} \ar@{->}[rr] \ar@{->}[rruu]
&& \text{$K(1,4)$} \ar@{->}[rru] 
&&
&&
}
$$
\label{fig:confluence}
\end{figure}

\noindent
Here $K(1,1,1,1,1)$ designates the $2$-dimensional Garnier system and, 
in general, the symbol $(\#)=(r_1, \ldots, r_m)$ means that the underlying 
monodromy-preserving deformation concerns a linear equation 
with $m$ singular points of Poincar\'e ranks $r_1-1, \ldots, r_m-1$. 
In what follows, a hypergeometric differential system with two independent 
variables is called a hypergeometric system of type $(\#)$ when it is 
associated with the confluent $2$-dimensional Garnier system $K(\#)$. 

Among them, in this article, we consider the hypergeometric systems of 
types $(1,4)$ and $(2,3)$, or rather the following two third-order ordinary 
differential equations obtained from the hypergeometric systems of 
types $(1,4)$ and $(2,3)$ by fixing the second variable $x_2 = t$. 
The first one is 
\begin{equation}
\label{eq:intro:(1,4)_eq(d/dx)} 
	\left\{ 3 \hbar^3 \frac{d^3}{dx^3} + 2t \hbar^2 \frac{d^2}{dx^2} + x \hbar \frac{d}{dx} 
			- \hat{\lambda}_\infty \right\} \psi = 0, 
\end{equation} 
which is called the hypergeometric equation of type $(1,4)$, and the second one is 
\begin{equation}
\label{eq:intro:(2,3)_eq(d/dx)}
	\left\{ 4 \hbar^3 \frac{d^3}{dx^3} - 2 x \hbar^2 \frac{d^2}{dx^2} 
			+ 2 ( \hat{\lambda}_\infty - \hbar ) \hbar \frac{d}{dx} - t \right\} \psi = 0, 
\end{equation}
which is called the hypergeometric equation of type $(2,3)$. 

The purpose of this article is to show that the Voros coefficients 
of \eqref{eq:intro:(1,4)_eq(d/dx)} and \eqref{eq:intro:(2,3)_eq(d/dx)} 
are expressed as the difference values of the free energy 
defined through the topological recursion due to Eynard and Orantin \cite{EO} 
(\thmref{main(i)}), and further that the explicit forms 
of the Voros coefficients and the free energy of 
\eqref{eq:intro:(1,4)_eq(d/dx)} and \eqref{eq:intro:(2,3)_eq(d/dx)} can be 
obtained by using this relation between the two quantities 
(\thmref{main(iii)} and \thmref{main(iv)}). 

Voros coefficients are defined as contour integrals of the logarithmic 
derivative of WKB solutions in the theory of the exact WKB analysis. 
Their importance in the study of the global behavior of solutions was already recognized in the pioneering work of Voros (\cite{Voros83}). The explicit form of the Voros coefficients plays an important role in describing parametric Stokes phenomena, which are Stokes phenomena with respect to parameters contained in the equation. 
\nThe explicit form of the Voros coefficients is now known for the confluent family of the Gauss hypergeometric equation and the hypergeometric equation of type (1,4) (\\cite{SS, Takei08, KoT11, ATT, Aoki-Tanda, AIT, IKo}). \nVoros coefficients are also studied for the Painlev\\'e equations to clarify the parametric Stokes phenomena (\\cite{I14}). \n\nOn the other hand, the topological recursion introduced by Eynard and \nOrantin (\\cite{EO}) is a generalization of the loop equations that the \ncorrelation functions of the matrix model satisfy. For a Riemann surface \n$\\Sigma$ and meromorphic functions $x$ and $y$ on $\\Sigma$, it \nproduces an infinite tower of meromorphic differentials $W_{g,n}(z_1, \\ldots, z_n)$ on $\\Sigma$. \nA triplet $(\\Sigma, x, y)$ is called a spectral curve and $W_{g,n}(z_1, \\ldots, z_n)$ is called a correlation function. As is shown in \\cite{GS, DM, BE} etc., \nthe quantization scheme connects WKB solutions of \ndifferential equations with the topological recursion. More precisely, \nWKB solutions of a differential equation are constructed also by \ncorrelation functions for the spectral curve corresponding to the \nclassical limit of the differential equation provided that the spectral curve \nsatisfies the so-called ``admissibility condition\" (cf. \\ \\cite[Definition 2.7]{BE}). \nMoreover, for a spectral curve, we can define free energies \n(also called symplectic invariants) $F_g$. For more details about the topological recursion, \nsee, e.g., the review paper \\cite{EO-08}. \nThe main results of this paper as well as those of \\cite{IKT-part1,IKT-part2} \nstrengthen the interplay between the WKB theory and the topological recursion \nand these interplays are expected to produce more profound insights in these theories. \n\nThe paper is organized as follows: \nIn \\S2 we recall some fundamental facts about \nthe exact WKB analysis and Eynard-Orantin's topological recursion. \nIn \\S3 we study quantization of the spectral curve. 

In \S4 we state our main theorems (\thmref{main(i)}, \thmref{main(ii)}, \thmref{main(iii)} and \thmref{main(iv)}). 
We give a proof of our results only for the $(1,4)$ curve; the $(2,3)$ curve can be treated similarly. 
\section*{Acknowledgement}
The author would like to express his special thanks to the late Professor Tatsuya Koike. 
The author is also very grateful to Professors 
Takashi Aoki, 
Sampei Hirose, 
Kohei Iwaki, 
Shingo Kamimoto, 
Takahiro Kawai, 
Saiei Matsubara,
Genki Shibukawa, 
Takahiro Shigaki, 
Nobuki Takayama, 
Yoshitsugu Takei 
and 
Mika Tanda 
for helpful discussions and communications. 


\section{Voros coefficients and the topological recursion}
\label{sec:review}


\subsection{WKB solutions}
\label{subsec:WKB-sol}

In this article we discuss the third-order ordinary differential equation with a small parameter $\hbar \ne 0$ 
of the form
\begin{equation}
\label{eq:3rd-ODE}
	\left\{
		p_0(x, \hbar) \hbar^3 \frac{d^3}{dx^3} + p_1(x, \hbar) \hbar^2 \frac{d^2}{dx^2} 
		+ p_2(x, \hbar) \hbar \frac{d}{dx} + p_3(x, \hbar)
	\right\}\psi = 0,
\end{equation}
where $x \in \mathbb{C}$, and
\begin{equation}
	p_i(x, \hbar) = p_{i,0}(x) + \hbar p_{i,1}(x) \quad (i = 0, 1, 2, 3)
\end{equation}
with rational functions $p_{i,j}(x)$ ($i = 0, 1, 2, 3$, $j = 0, 1$) and 
\begin{equation}
	p_{0,1}(x) = p_{1,1}(x) = 0. 
\end{equation} 
We consider \eqref{eq:3rd-ODE} as a differential equation on the Riemann sphere $\mathbb{P}^1$ 
with regular or irregular singular points. 
\nFor \\eqref{eq:3rd-ODE} we can construct a formal solution, called a WKB solution, of the form \n\\begin{align}\n\\label{eq:WKB-type}\n\t\\psi (x, \\hbar) \n\t\t&= \\exp \\left[ \\int^x S(x, \\hbar) dx \\right].\n\\end{align}\nThe logarithmic derivative $S(x, \\hbar)$ of WKB solutions of \\eqref{eq:3rd-ODE} satisfies the equation \n\\begin{equation}\n\\begin{split}\n\\label{eq:Riccati-gen}\n\tp_0(x, \\hbar) \\hbar^3 \n\t\t\\left( \\frac{d^2}{dx^2} S(x, \\hbar) + 3 S(x, \\hbar) \\frac{d}{dx} S(x, \\hbar) + {S(x, \\hbar)}^3 \\right) \n\t+ p_1(x, \\hbar) \\hbar^2 \\left( \\frac{d}{dx} S(x, \\hbar) + {S(x, \\hbar)}^2 \\right) \\\\\n\t\\qquad + \\hbar p_2(x, \\hbar) S(x, \\hbar) + p_3(x, \\hbar) = 0. \n\\end{split}\n\\end{equation}\nEq. \\eqref{eq:Riccati-gen} is a counterpart of the Riccati equation in the second-order case \nand admits a solution of the form \n\\begin{align}\n\\label{eq:Riccati-gen-expansion}\n\tS(x, \\hbar) \n\t\t&:= \\hbar^{-1} S_{-1}(x) + S_0(x) + \\hbar S_1(x) + \\cdots \n\t\t= \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x). 
\n\\end{align}\nIn fact, by substituting \\eqref{eq:Riccati-gen-expansion} into \\eqref{eq:Riccati-gen}, and comparing like powers of both sides with respect to $\\hbar$, we obtain \n\\begin{align}\n\\label{eq:Riccati-gen-1}\n\tp_{0,0}(x) S_{-1}^3 + p_{1,0}(x) S_{-1}^2 + p_{2,0}(x) S_{-1} + p_{3,0}(x) &= 0,\\\\\n\\label{eq:Riccati-gen-2}\n\t\\left(3 p_{0,0}(x) S_{-1}{}^2 +2 p_{1,0}(x) S_{-1} +p_{2,0}(x)\\right) S_0 \n\t+ 3 p_{0,0}(x) S_{-1} \\frac{d S_{-1}}{dx} +p_{1,0}(x) \\frac{d S_{-1}}{dx} \\\\\n\t+p_{2,1}(x) S_{-1} +p_{3,1}(x) &= 0, \\notag \n\\end{align}\nand\n\\begin{align}\n\\label{eq:Riccati-gen-3}\n\t\\left(3 p_{0,0}(x) S_{-1}{}^2 +2 p_{1,0}(x) S_{-1} +p_{2,0}(x)\\right) S_{m + 1} \n\t+ \\sum_{\\substack{i + j + k = m-1 \\\\ i, j, k \\geq 0}} S_{i} S_{j} S_{k} + 3 \\sum_{j = 0}^{m-1} S_{m - j - 1} S_{j} \n\t\\\\ \n\t+ 3 p_{0,0}(x) S_m \\frac{d S_{-1}}{dx} + 3 p_{0,0}(x) S_{-1} \\frac{d S_{m}}{dx} \n\t+ p_{0,0}(x)\\frac{d^2 S_{m-1}}{dx^2} \n\t+ p_{1,0}(x) \\sum_{j = 0}^m S_{m - j} S_{j} \\notag \\\\\n\t+ p_{1,0}(x) \\frac{d S_{m}}{dx} + p_{2,1}(x) S_m = 0 \\quad (m \\geq 0). \\notag \n\\end{align} \nEq. \\eqref{eq:Riccati-gen-1} has three solutions, and once we fix one of them, \nwe can determine $S_m$ for $m \\geq 0$ uniquely and recursively \nby \\eqref{eq:Riccati-gen-2} and \\eqref{eq:Riccati-gen-3}. \n\n\n\n\\subsection{Voros coefficients}\n\\label{subsec:Voros-coeff}\n\nA Voros coefficient is defined as a properly regularized integral of $S(x, \\hbar)$ along \na path connecting singular points of \\eqref{eq:3rd-ODE}. 
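As a numerical sanity check of \eqref{eq:Riccati-gen-1} and \eqref{eq:Riccati-gen-2}, the following Python sketch specializes them to the type $(1,4)$ equation \eqref{eq:intro:(1,4)_eq(d/dx)}, for which $p_{0,0} = 3$, $p_{1,0} = 2t$, $p_{2,0} = x$, $p_{3,0} = -\hat{\lambda}_\infty$ and $p_{2,1} = p_{3,1} = 0$; the parameter values and the branch choice are ours, made only for illustration:

```python
import numpy as np

# Type (1,4) equation: p_{0,0} = 3, p_{1,0} = 2t, p_{2,0} = x,
# p_{3,0} = -lam, and p_{2,1} = p_{3,1} = 0.  Parameter values are arbitrary.
t, lam, x = 1.0, 2.0, 1.0

# Leading order (eq:Riccati-gen-1): 3 S_{-1}^3 + 2t S_{-1}^2 + x S_{-1} - lam = 0.
roots = np.roots([3.0, 2.0 * t, x, -lam])
s_m1 = roots[np.isreal(roots)].real[0]        # fix one of the three branches

# Implicit differentiation of the cubic with respect to x gives
#   S_{-1}' = -S_{-1} / (9 S_{-1}^2 + 4t S_{-1} + x),
# and (eq:Riccati-gen-2) then yields
#   S_0 = -(9 S_{-1} + 2t) S_{-1}' / (9 S_{-1}^2 + 4t S_{-1} + x).
denom = 9 * s_m1**2 + 4 * t * s_m1 + x
ds_m1 = -s_m1 / denom
s_0 = -(9 * s_m1 + 2 * t) * ds_m1 / denom
print(s_m1, s_0)
```

The higher-order terms $S_m$ with $m \geq 1$ are obtained in the same way from \eqref{eq:Riccati-gen-3}.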
\nWhen $S_m(x)$ with $m \\geq 1$ is integrable at any singular point of \\eqref{eq:3rd-ODE}, \nwe can define Voros coefficients by \n\\begin{equation}\\label{eq:def-Voros-coeff}\nV_{\\gamma_{b_1, b_2}}(\\hbar)\n:= \\int_{\\gamma_{b_1, b_2}}\n\\big( S(x, \\hbar) -\\hbar^{-1}S_{-1}(x) - S_0(x)\\big) dx\n= \\sum_{m = 1}^{\\infty} \\hbar^m \\int_{\\gamma_{b_1, b_2}} S_m(x) dx,\n\\end{equation}\nwhere $\\gamma_{b_1, b_2}$ is a path from a singular point $b_1$ to a singular point $b_2$.\n(When there is no need to specify a path $\\gamma_{b_1,b_2}$, we use the abbreviated \nnotation $V(\\hbar)$ instead of $V_{\\gamma_{b_1, b_2}}(\\hbar)$.)\nNote that Voros coefficients only depend on the class \n$[\\gamma_{b_1, b_2}]$ of paths in the relative homology group\n$$\nH_1 \\big(\\mathbb{P}^1 \\setminus\n\\{\\text{Turning points}\\},\n\\{\\text{Singular points}\\}; \\mathbb{Z} \\big).\n$$\nSuch an integration contour (or a relative homology class) \ncan be understood as a lift of a path on $x$-plane \nonto the Riemann surface of $S_{-1}(x)$ \n(i.e., three sheeted covering of $x$-plane). \nThe lift of a path is specified by drawing branch cuts and distinguishing the \nfirst, second and third sheets of the Riemann surface.\n\n\n\\subsection{The global topological recursion}\n\\label{sec:TR}\n\nLet us first fix notation. \nWe restrict ourselves to the case when a spectral curve is of genus $0$ \nbecause we will not discuss the general case in this paper \n(see \\cite{BE-12} for the general definition). \n\n\\begin{dfn} \\label{def:spectral-curve}\nA spectral curve (of genus $0$) is a pair $(x(z), y(z))$\nof non-constant rational functions on $\\mathbb{P}^1$, \nsuch that their exterior differentials \n$dx$ and $dy$ never vanish simultaneously. 
\n\\end{dfn}\n\nLet $R$ be the set of ramification points of $x(z)$, \ni.e., $R$ consists of zeros of $dx(z)$ of any order and poles of $x(z)$ \nwhose orders are greater than or equal to two \n(here we consider $x$ as a branched covering map from $\\mathbb{P}^1$ to itself). \nWe further assume that \n\n\\begin{itemize}\n\\item[(A1)]\nA function field $\\mathbb{C}(x(z), y(z))$ coincides with $\\mathbb{C}(z)$.\n\n\\item[(A2)]\n\nIf $r$ is a ramification point which is a pole of $x(z)$, \nand if $Y(z) = - x(z)^2 y(z)$ is holomorphic near $r$,\nthen $dY(r) \\neq 0$.\n\n\\item[(A3)]\nAll of the ramification points of $x(z)$ are simple,\ni.e., the ramification index of each ramification point\nis two.\n\n\\item[(A4)]\nWe assume branch points are all distinct,\nwhere a branch point is defined as the image of\na ramification point by $x(z)$.\n\\end{itemize}\n\nWe need to introduce some notation to define the topological recursion. \n\n\\begin{dfn} \\label{def:effective-ramification}\nA ramification point $r$ is said to be ineffective if \nthe correlation functions $W_{g,n}(z_1,\\dots,z_n)$ \nfor $(g,n) \\ne (0,1)$ are holomorphic at $z_i = r$ for each $i=1,\\dots,n$. \nA ramification point which is not ineffective is called effective. \nThe set of effective ramification points is denoted by $R^{\\ast}$ $(\\subset R)$. \n\\end{dfn}\n\n\n\\begin{dfn}\nFor two sets $A$ and $B$, $A \\subseteq_k B$ means $A \\subseteq B$ and $|A| = k$. \n\\end{dfn}\n\n\\begin{dfn}\n$\\mathcal{S}(\\bm{t})$ denotes the set of set partitions of $\\bm{t} = \\{ t_1, \\ldots, t_k \\}$. \n\\end{dfn}\n\nThen, we define the recursive structure: \n\n\\begin{dfn}[{\\cite[Definition 3.4]{BE}}]\nLet $\\{ W_{g, n} \\}$ be an arbitrary collection of symmetric multidifferential on $(\\mathbb{P}^1)^n$ \nwith $g \\geq 0$ and $n \\geq 1$. Let $k \\geq 1$, \n$\\bm{t} = \\{ t_1, \\ldots, t_k \\}$ and $\\bm{z} = \\{ z_1, \\ldots, z_n \\}$. 
Then, we define \n\\begin{align}\n\\label{eq:R(k)}\n\t{\\mathcal{R}}^{(k)} \\left(W_{g, n+1}(\\bm{t}; \\bm{z})\\right) \n\t\t&:= \\sum_{\\mu \\in \\mathcal{S}(\\bm{t})} \n\t\t\t\\sum_{\\sqcup_{i=1}^{l(\\mu)} I_i = \\{1, 2, \\cdots, n\\}} \n\t\t\t\\sum'_{\\sum_{i=1}^{l(\\mu)} g_i = g + l(\\mu) - k} \n\t\t\t\\left\\{ \\prod_{i=1}^{l(\\mu)} W_{g_i, |\\mu_i| + |I_i|}(\\mu_i, z_{I_i}) \\right\\}. \n\\end{align}\nThe first summation in \\eqref{eq:R(k)} is over set partitions of $\\bm{t}$, \nand $l(\\mu)$ is the number of subsets in the set partition $\\mu$. \nThe third summation in \\eqref{eq:R(k)} is over all $l(\\mu)$-tuples \nof non-negative integers $(g_1, \\ldots, g_{l(\\mu)})$ such that $\\sum_{i=1}^{l(\\mu)} g_i = g + l(\\mu) - k$. \n$\\sqcup$ denotes the disjoint union, \nand the prime ${}'$ on the summation symbol in \\eqref{eq:R(k)} means that we exclude the terms with\n$(g_i, |\\mu_i| + |I_i|) = (0, 1)$ for some $i \\in \\{1, \\ldots, l(\\mu)\\}$\n(so that $W_{0, 1}$ does not appear in the sum). \nWe also define \n\\begin{align}\n\t{\\mathcal{R}}^{(0)} W_{g, n+1}(\\bm{z}) &:= \\delta_{g,0} \\delta_{n,0}, \n\\end{align}\nwhere $\\delta_{i,j}$ is the Kronecker delta symbol. \n\\end{dfn}\n\n\\begin{ex} \nFor $k=2$, $\\mathcal{S}(\\bm{t})$ is given by \n\\begin{align}\n\\mathcal{S}(\\{ t_1, t_2 \\}) \n= \\Bigl\\{ \\bigl\\{ \\{ t_1, t_2 \\} \\bigr\\}, \\bigl\\{\\{ t_1 \\}, \\{ t_2 \\} \\bigr\\} \\Bigr\\}. \n\\end{align}\nTherefore, we have \n\\begin{align}\n{\\mathcal{R}}^{(2)} \\left(W_{g, n+1}(\\bm{t}; \\bm{z})\\right) \n&= W_{g-1,n+2}(\\bm{t}, \\bm{z}) \n\t+ \\sum_{I_1\\sqcup I_2 = \\{1, 2, \\cdots, n\\} } \n\t\t\\sum'_{g_1 + g_2 = g} \n\t\t\\left\\{ \\prod_{i=1}^{2} W_{g_i, 1 + |I_i|}(t_i, z_{I_i}) \\right\\}. \n\\end{align}\n\\end{ex}\n \nWe now define the topological recursion. 
\n\n\\begin{dfn}[{\\cite[Definition 3.6]{BE}}]\nEynard-Orantin's correlation function\n$W_{g, n}(z_1, \\cdots, z_n)$ for $g \\geq 0$ and $n \\geq 1$ \nis defined as a multidifferential \non $(\\mathbb{P}^1)^n$ using the recurrence relation\n\\begin{align}\n\\label{eq:gTR}\n\t&W_{g, n+1}(z_0, z_1, \\cdots, z_n) \\\\\n\t&:= \\sum_{r \\in R} \\mathop{\\rm{Res}}_{z = r} \\left\\{ \n\t\t\t\\sum_{k=1}^{r-1} \\sum_{\\beta(z) \\subseteq_k \\tau'(z)} \n\t\t\t(-1)^{k+1} \\frac{w^{z - \\alpha}(z_0)}{E^{(k)}(z;\\beta(z))} \n\t\t\t{\\mathcal{R}}^{(k+1)} \\left(W_{g, n+1}(z, \\beta(z);z_1, \\cdots, z_n)\\right) \n\t\t\\right\\} \\notag\n\\end{align}\nfor $2g + n \\geq 2$ with initial conditions\n\\begin{align}\nW_{0, 1}(z_0) &:= y(z_0) dx(z_0),\n\\quad\nW_{0, 2}(z_0, z_1) = B(z_0, z_1)\n:= \\frac{dz_0 dz_1}{(z_0 - z_1)^2}.\n\\end{align}\nHere we set $W_{g,n} \\equiv 0$ for a negative $g$ and \n\\begin{align}\n\\label{eq:E(k)}\n\tE^{(k)}(z; t_1, \\ldots, t_k)\n\t\t&:= \\prod_{i=1}^k (W_{0,1}(z) - W_{0,1}(t_i)). \n\\end{align}\nHere $\\tau'(z) := x^{-1}(x(z)) \\setminus \\{z\\}$ denotes the set of points, other than $z$ itself, \nhaving the same image as $z$ under $x$, and $w^{z - \\alpha}(z_0) := \\int_{\\alpha}^{z} B(z_0, \\, \\cdot \\,)$. \nThe second and third summations in \\eqref{eq:gTR} together mean that \nwe are summing over all subsets of $\\tau'(z)$. \n$\\alpha$ is an arbitrary base point on $\\mathbb{P}^1$, but it can be checked (see \\cite{BE-12}) that the definition is actually independent of the choice of the base point $\\alpha$.\nWe have also used the multi-index notation:\nfor $I = \\{i_1, \\cdots, i_m\\} \\subset \\{1, 2, \\cdots, n\\}$\nwith $i_1 < i_2 < \\cdots < i_m$, $z_I:= (z_{i_1}, \\cdots, z_{i_m})$.\n\\end{dfn}\n\nNote that this recursion was called ``global topological recursion'' in \\cite{BE-12}. \nIt was shown in \\cite{BE-12} that it is indeed equivalent to the \nusual local formulation of the topological recursion when the ramification points are all simple. 
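\nFor the reader's convenience, we recall the shape of this local formulation, stated here in the standard Eynard-Orantin form (see \\cite{BE-12} for the precise statement in this setting): when every ramification point $r \\in R$ is simple, denoting by $\\sigma_r$ the non-trivial local deck transformation of $x(z)$ near $r$, the recursion reads\n\\begin{align*}\nW_{g, n+1}(z_0, z_1, \\cdots, z_n)\n= \\sum_{r \\in R} \\mathop{\\rm{Res}}_{z = r} K(z_0, z)\n\\bigg[ W_{g-1, n+2}(z, \\sigma_r(z), z_1, \\cdots, z_n)\n+ \\sum^{\\prime}_{\\substack{g_1 + g_2 = g \\\\ I_1 \\sqcup I_2 = \\{1, \\cdots, n\\}}}\nW_{g_1, |I_1| + 1}(z, z_{I_1}) \\, W_{g_2, |I_2| + 1}(\\sigma_r(z), z_{I_2}) \\bigg],\n\\end{align*}\nwhere the recursion kernel is\n\\begin{equation*}\nK(z_0, z) := \\frac{\\int_{\\sigma_r(z)}^{z} B(z_0, \\, \\cdot \\,)}{2 \\, \\big( W_{0,1}(z) - W_{0,1}(\\sigma_r(z)) \\big)},\n\\end{equation*}\nand the prime on the summation means that the terms containing $W_{0, 1}$ are excluded.\n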
\n\n\n\\subsection{Free energy through the topological recursion}\n\\label{subsec:free-energy}\n\nThe $g$-th free energy $F_g$ ($g\\geq 0$) is a complex number\ndefined for the spectral curve,\nand one of the most important objects in Eynard-Orantin's theory.\nIt is also called a symplectic invariant since it is \n``almost'' invariant under symplectic transformations\nof spectral curves (see \\cite{EO-13} for the details). \n\n\\begin{dfn}[{\\cite[Definition 4.3]{EO}}]\nFor $g \\geq 2$, the $g$-th free energy $F_g$ is defined by\n\\begin{equation}\n\\label{def:Fg2}\nF_g := \\frac{1}{2- 2g} \\sum_{r \\in R} \\mathop{\\rm{Res}}_{z = r}\n\\big[\\Phi(z) W_{g, 1}(z) \\big]\n\\quad (g \\geq 2),\n\\end{equation}\nwhere $\\Phi(z)$ is a primitive of $y(z) dx(z)$. \nFor $g=1$, we define the free energy $F_1$ satisfying \\eqref{eq:variational_free-energy}. \nThe free energies $F_0$ for $g=0$ is also defined, but in a different manner \n(see \\cite[\\S 4.2.3]{EO} for the definition). \n\\end{dfn}\nNote that the right-hand side of \\eqref{def:Fg2} does not\ndepend on the choice of the primitive\nbecause $W_{g, 1}$ has no residue at each ramification point. \n\nIn applications (and in our article), the generating series\n\\begin{equation} \\label{eq:total-free-energy}\nF := \\sum_{g = 0}^{\\infty} \\hbar^{2g-2} F_g\n\\end{equation}\nof $F_g$'s is crucially important. \nWe also call the generating series \\eqref{eq:total-free-energy} \nthe free energy of the spectral curve. \n\n\n\\subsection{Variational formulas for the correlation functions}\n\\label{subsec:variational-formula-TR}\n\nIn \\S \\ref{subsection:quantum-(1,4)} and \\S \\ref{subsection:quantum-(2,3)} \nwe will consider a family of spectral curves parametrized by complex parameters. 
\nFor our purpose, we briefly recall the variational formulas obtained \nin \\cite[\\S 5]{EO} which describe the differentiation \nof the correlation functions $W_{g,n}$ and the free energies $F_g$ \nwith respect to the parameters.\n\nSuppose that we are given a family \n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nof spectral curves parametrized by a complex parameter \n$\\varepsilon$ which lies in a certain domain $U \\subset {\\mathbb C}$ \nsuch that \n\\begin{itemize}\n\\item \n$x_\\varepsilon(z), y_\\varepsilon(z)$ depend \nholomorphically on $\\varepsilon \\in U$. \n\\item \n$x_\\varepsilon(z), y_\\varepsilon(z)$ \nsatisfy the assumptions (A1) -- (A4) \nfor any $\\varepsilon \\in U$. \n\\item \nThe cardinality of the set $R_\\varepsilon$ of \nramification points of $x_\\varepsilon(z)$ is constant for $\\varepsilon \\in U$\n(i.e. ramification points of $x_\\varepsilon(z)$ \nare distinct for any $\\varepsilon \\in U$).\n\\end{itemize}\nThen, the correlation functions $W_{g,n}(z_1, \\dots, z_n; \\varepsilon)$ \nand the $g$-th free energy $F_g(\\varepsilon)$ defined from the spectral curve \n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nare holomorphic in $\\varepsilon \\in U$ \nas long as $z_i \\notin R_\\varepsilon$ for any $i=1,\\dots,n$. \n\nIn order to formulate a variational formula for correlation functions, \nwe need to introduce the notion of ``differentiation with fixed $x$''. \nFor a meromorphic differential $\\omega(z; \\varepsilon)$ on ${\\mathbb P}^1$, \nwhich depends on $\\varepsilon$ holomorphically, define \n\\begin{equation}\n\\delta_{\\varepsilon} \\, \\omega(z; \\varepsilon) \n:= \\left( \n\\frac{\\partial}{\\partial \\varepsilon} \\omega(z_{\\varepsilon}(x); \\varepsilon) \n\\right) \\biggl|_{x=x_{\\varepsilon}(z)} \n\\quad (z \\notin R_\\varepsilon), \n\\end{equation}\nwhere $z_{\\varepsilon}(x)$ is (any branch of) the inverse function of \n$x = x_{\\varepsilon}(z)$ which is defined away from branch points \n(i.e. 
points in $x_{\\varepsilon}(R_\\varepsilon)$). \nIn \\cite{EO} the notation \n$\\delta_{\\Omega} \\, \\omega(z; \\varepsilon) \\big|_{x(z)}$ \nis used for $\\delta_{\\varepsilon} \\omega(z; \\varepsilon)$ defined above.\nSuch a differentiation $\\delta_{\\varepsilon}$ can be generalized \nto multidifferentials in an obvious way. \nThen, under these assumptions, the variational formula is formulated as follows.\n\n\\begin{thm}[{\\cite[Theorem 5.1]{EO}}] \\label{thm:VariationFormula}\nIn addition to the above conditions, for any $\\varepsilon \\in U$, \nwe further assume that \n\\begin{itemize}\n\\item \nIf $r_\\varepsilon \\in R_\\varepsilon$ is a zero of \n$dx_\\varepsilon(z)$, then the functions\n$\\partial x_\\varepsilon\/ \\partial \\varepsilon$ and \n$\\partial y_\\varepsilon\/ \\partial \\varepsilon$ are holomorphic\n(as functions of $z$) at $r_\\varepsilon$, \nand $dy_\\varepsilon(z)$ does not vanish\n(as a differential of $z$) at $r_\\varepsilon$.\n\\item \nIf $r_\\varepsilon \\in R_\\varepsilon$ is a pole of $x_\\varepsilon(z)$ \nwith an order greater than or equal to two, then \n\\[\n\\frac{\\Omega_\\varepsilon(z) \\, B(z_1, z) \\, B(z_2 , z)}\n{dy_\\varepsilon(z) dx_\\varepsilon(z)}\n\\]\nis holomorphic (as a differential in $z$) at $r_\\varepsilon$, where \n\\begin{equation} \\label{eq:Omega}\n\\Omega_\\varepsilon(z) := \n\\frac{\\partial y_\\varepsilon}{\\partial \\varepsilon}(z) \\, dx_\\varepsilon(z)\n- \\frac{\\partial x_\\varepsilon}{\\partial \\varepsilon}(z) \\, dy_\\varepsilon(z).\n\\end{equation}\n\\item \nThere exists a path $\\gamma$ in $\\mathbb{P}^1$ passing through\nno ramification point and a function $\\Lambda_\\varepsilon (z)$ \nholomorphic in a neighborhood of $\\gamma$ for which the following holds.\n\\begin{equation}\n\\Omega_\\varepsilon(z) = \n\\int_{\\zeta \\in \\gamma} \\Lambda_\\varepsilon(\\zeta) \\, B(z, \\zeta).\n\\end{equation} \n\\end{itemize}\nThen, $W_{g,n}(z_1, \\dots, z_n; \\varepsilon)$ \nand $F_g(\\varepsilon)$ defined from the spectral curve 
\n$(x_\\varepsilon(z), y_\\varepsilon(z))$\nsatisfy the following relations: \n\\begin{itemize}\n\\item[{\\rm{(i)}}]\nFor $2g + n \\geq 2$, \n\\begin{equation}\n\\delta_{\\varepsilon} \\, W_{g, n} \n(z_1, \\cdots, z_n; \\varepsilon)\n= \\int_{\\zeta \\in \\gamma} \\Lambda_\\varepsilon(\\zeta) \\,\nW_{g, n + 1}(z_1, \\cdots, z_n, \\zeta; \\varepsilon) \n\\end{equation}\nholds for $\\varepsilon \\in U$ as long as each of $z_1, \\cdots, z_n$ satisfies \n$z_i \\notin R_\\varepsilon$. \n\n\n\\item[{\\rm{(ii)}}]\nFor $g \\geq 1$,\n\\begin{equation}\n\\label{eq:variational_free-energy}\n\\frac{\\partial F_g}{\\partial \\varepsilon}(\\varepsilon)\n= \\int_{\\gamma}\\Lambda_\\varepsilon(z) \\, W_{g, 1}(z;\\varepsilon)\n\\end{equation}\nholds for $\\varepsilon \\in U$.\n\n\n\\end{itemize}\n\n\\end{thm}\n\nSee \\cite[\\S 5.1]{EO} \n(based on Rauch's variational formula; see \\cite{KK} for example) \nfor the proof.\nWe note that, since we modify the definition of the topological recursion \nby adding higher order poles of $x(z)$ as ramification points, \nwe also need to require the second condition in the above claim. \n\n\n\n\n\n\n\\section{Quantization of spectral curves}\n\\label{subsec:quantum-curve}\n\n\nWe treat the quantization by using the divisor with parameters which was introduced in \\cite{BE}. \n\nIn this article, we consider the defining equation of the spectral curve\n\\begin{equation}\n\\label{eq:spectral-curve}\n\tP(x, y) = p_0(x) y^3 + p_1(x) y^2 + p_2(x) y + p_3(x) = 0. \n\\end{equation}\n\n\\begin{dfn}[{\\cite[Definition 2.3]{BE}}]\nLet us rewrite the defining equation \\eqref{eq:spectral-curve} of the spectral curve as \n\\begin{equation}\n\tP(x, y) = \\sum_{(i, j) \\in A} \\alpha_{i, j} x^i y^j = 0 \\quad (\\alpha_{i, j} \\ne 0).\n\\end{equation}\nThen the Newton polygon $\\Delta$ of \\eqref{eq:spectral-curve} is the convex hull of the set $A$. 
\n\\end{dfn}\n\n\\begin{dfn}[{\\cite[Definition 2.5]{BE}}]\nFor $m = 2, 3$, we define the following meromorphic function on $\\mathbb{P}^1$: \n\\begin{equation}\n\tP_m(x, y) = \\sum_{k = 1}^{m-1} p_{m-1-k}(x) y^k. \n\\end{equation}\n\\end{dfn}\n\n\\begin{dfn}[{\\cite[Definition 2.7]{BE}}]\nWe say that a spectral curve is admissible if: \n\\begin{itemize}\n\t\\item[1.] Its Newton polygon $\\Delta$ has no interior point; \n\t\\item[2.] If the origin $(x, y) = (0, 0) \\in \\mathbb{C}^2$ is on the curve \n\t\t\t$\\{ P(x, y) = 0 \\} \\subset \\mathbb{C}^2$, then the curve is smooth at this point. \n\\end{itemize}\n\\end{dfn}\n\nWe assume that our spectral curve $(x(z), y(z))$ is admissible. \nThen the following theorem holds according to \\cite{BE}. \n\n\\begin{thm}[{\\cite[Lemma 5.14]{BE}}]\n\\label{thm:WKB-Wg,n-BE}\nLet $\\beta_i \\quad (1 \\leqq i \\leqq n)$ be simple poles of $x(z)$ and \n\\begin{equation}\n\\label{eq:D}\n\\begin{split}\n\tD(z ; \\underline{\\nu}) &= [z] - \\sum_{i=1}^{n} \\nu_i [\\beta_i]\n\\end{split}\n\\end{equation}\nbe a divisor on $\\mathbb{P}^1$, where $\\nu_i \\quad (1 \\leqq i \\leqq n)$ are complex numbers satisfying \n$\\sum_{i=1}^{n} \\nu_i = 1$. \nFor a differential $\\omega(z)$, we define its integration along the divisor $D(z; \\underline{\\nu})$ by \n\\[\n\\int_{D(z; \\underline{\\nu})} \\omega(z) = \n\\sum_{i=1}^{n} \\nu_i \\int^{z}_{\\beta_i} \\omega(z) \n\\]\nand extend the definition to multidifferentials in an obvious way. \nLet $W_{g, n}(z_1, \\cdots, z_{n})$ be the correlation functions of a spectral curve $(x(z), y(z))$ defined from (\\ref{eq:spectral-curve}). \nThen,\n\\begin{equation}\n\\label{eq:WKB-Wg,n}\n\\begin{split}\n\t\\psi(x, \\hbar)\n\t&= \\exp \\Bigg[ \\hbar^{-1} \\int^z W_{0, 1}(z) \n\t\t+ \\frac{1}{2!} \\int_{D(z ; \\nu)} \\int_{D(z ; \\nu)} \n\t\t\t\\left( W_{0, 2}(z_1, z_2) - \\frac{dx(z_1) \\, dx(z_2)}{(x(z_1) - x(z_2))^2} \\right) \\\\\n\t&\\quad \\left. 
\\left.\n\t + \\sum_{m = 1}^{\\infty} \\hbar^m \n\t \t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\\frac{1}{n!} \\int_{D(z ; \\nu)} \\cdots \\int_{D(z ; \\nu)} W_{g, n}(z_1, \\ldots, z_n) \n\t\t\\right\\} \\right] \\right|_{z = z(x)}\n\\end{split}\n\\end{equation}\nis a WKB type formal solution of \n\\begin{equation}\n\\begin{split}\n\\label{eq:quantization}\n\t\\left[ D_1 D_2 \\frac{p_0(x)}{x^{\\lfloor \\alpha_{3} \\rfloor}} D_{3} \n\t\t\t+ D_1 \\frac{p_1(x)}{x^{\\lfloor \\alpha_{2} \\rfloor}} D_{2} \n\t\t\t+ \\frac{p_2(x)}{x^{\\lfloor \\alpha_{1} \\rfloor}} D_{1} \n\t\t\t+ \\frac{p_3(x)}{x^{\\lfloor \\alpha_{0} \\rfloor}} \n\t\t\t- \\hbar C_1 D_1 \\frac{x^{\\lfloor \\alpha_{2} \\rfloor}}{x^{\\lfloor \\alpha_{1} \\rfloor}} \n\t\t\t- \\hbar C_2 \\frac{x^{\\lfloor \\alpha_{1} \\rfloor}}{x^{\\lfloor \\alpha_{0} \\rfloor}} \n\t\\right] \\psi = 0, \n\\end{split}\n\\end{equation}\nwhere \n\\begin{align*}\n\t\\alpha_m &= \\inf \\{ a \\mid (a, m) \\in \\Delta \\} \\quad (m = 0, 1, 2, 3), \\\\\n\tD_i \n\t&= \\hbar \\frac{x^{\\lfloor \\alpha_{i} \\rfloor}}{x^{\\lfloor \\alpha_{i-1} \\rfloor}} \\frac{d}{dx} \n\t\t\\quad (i = 1, 2, 3), \\\\\n\tC_k \n\t&= \\sum_{i = 1}^{n} \\nu_i \\left( \n\t\t\\lim_{z \\rightarrow {\\beta_i}} \\frac{P_{k+1}(x(z), y(z))}{{x(z)}^{\\lfloor \\alpha_{3-k} \\rfloor + 1}}\n\t\t\\right) \\quad (k = 1, 2). \n\\end{align*}\n\\end{thm}\n\n\\begin{rem}\nIt is mentioned in \\cite[Remark 5.12]{BE} that \nit is also possible to choose a pole of $x$ of order more than one as $\\beta_i$ in \\eqref{eq:D}\nwhen $\\beta_i \\notin R^{\\ast}$. \nTherefore, we can use \\thmref{WKB-Wg,n-BE} in the cases of the (1,4) and (2,3) curves in the next section. 
\n\\end{rem}\n\n\n\\subsection{Quantum (1,4) curve}\n\\label{subsection:quantum-(1,4)}\n\nLet us consider the (1,4) curve defined by \n\\begin{equation}\n\\label{eq:(1,4)_P(x,y)}\n\tP(x, y) = 3 y^3 + 2t y^2 + x y - {\\lambda_\\infty} = 0, \n\\end{equation}\nwith parameters $t, {\\lambda_\\infty} \\ne 0$. \nA rational parameterization of this curve is \n\\begin{equation}\n\\label{eq:(1,4)_parameterization}\n\\begin{cases}\n\t\\displaystyle\n\tx = x(z) \n\t= \\frac{-3 z^3 - 2t z^2 + {\\lambda_\\infty}}{z} = -3 z^2 - 2t z + \\frac{{\\lambda_\\infty}}{z}, \\\\[10pt]\n\t\\displaystyle\n\ty = y(z) = z. \n\\end{cases}\n\\end{equation}\nFirst few terms of the correlation functions and free energies are computed as \n\\begin{align*}\n\tW_{0, 3}(z_1, z_2, z_3) \n\t&= \\biggl\\{ \n\t\t\\frac{2 z_1(15 {z_1}^5 - (9{z_2} + 9{z_3} - 4t) {z_1}^4 \n\t\t\t\t- (2t{z_2} + 2t{z_3} - 3{z_2}{z_3}) {z_1}^3 + {\\lambda_\\infty} {z_1}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_2}{z_3})}\n\t\t\t{({z_1} - {z_2})^3 ({z_1} - {z_3})^3 (6 {z_1}^3 + 2t {z_1}^2 + {\\lambda_\\infty})^2} \\\\ \n\t&\\quad\n\t\t+ \\frac{2 z_2(15 {z_2}^5 - (9{z_3} + 9{z_1} - 4t) {z_2}^4 \n\t\t\t\t- (2t{z_3} + 2t{z_1} - 3{z_3}{z_1}) {z_2}^3 + {\\lambda_\\infty} {z_2}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_3}{z_1})}\n\t\t\t{({z_2} - {z_3})^3 ({z_2} - {z_1})^3 (6 {z_2}^3 + 2t {z_2}^2 + {\\lambda_\\infty})^2} \\\\ \n\t&\\quad\n\t\t+ \\frac{2 z_3(15 {z_3}^5 - (9{z_1} + 9{z_2} - 4t) {z_3}^4 \n\t\t\t\t- (2t{z_1} + 2t{z_2} - 3{z_1}{z_2}) {z_3}^3 + {\\lambda_\\infty} {z_3}^2 \n\t\t\t\t- {\\lambda_\\infty} {z_1}{z_2})}\n\t\t\t{({z_3} - {z_1})^3 ({z_3} - {z_2})^3 (6 {z_3}^3 + 2t {z_3}^2 + {\\lambda_\\infty})^2}\n\t\t\\biggl\\} \\\\\n\t&\\quad\n\t\t\\times d{z_1} \\, d{z_2} \\, d{z_3}, \\\\\n\tW_{1, 1}(z)\n\t&= \\frac{z^2 (27 z^6 - 99 z^3 - 36 {\\lambda_\\infty} t z^2 - 4 {\\lambda_\\infty} t^2 z + 3 {\\lambda_\\infty}^2)}\n\t\t\t{(6 z^3 + 2t z^2 + {\\lambda_\\infty})^4} \\, dz,\n\\end{align*}\n\\begin{align*}\t\n\tF_0(\\lambda_\\infty, 
t)\n\t&= - \\frac{t^6}{972} + \\frac{2 {\\lambda_\\infty} t^3}{27} - \\frac{3 {\\lambda_\\infty}^2}{4} \n\t\t+ \\frac{{\\lambda_\\infty}^2}{4} \\log{(-3 {\\lambda_\\infty}^2)}, \\quad\n\tF_1(\\lambda_\\infty, t) = - \\frac{1}{12} \\log{\\lambda_\\infty}.\n\\end{align*}\n\\begin{rem}\n\tAt first glance, $W_{0,3}$ appears to have singularities along the diagonals $z_i = z_j$, \n\tbut we can verify that $W_{0,3}$ is in fact holomorphic there. \n\\end{rem}\n\nWe choose \n\\begin{equation}\n\\label{eq:(1,4)_D}\n\\begin{split}\n\tD(z ; \\nu) \n\t&= [z] - (1 - \\nu_\\infty) [0] - \\nu_\\infty [\\infty] \\\\\n\t&= (1 - \\nu_\\infty) ([z] - [0]) + \\nu_\\infty ([z] - [\\infty]) \n\\end{split}\n\\end{equation}\nas the divisor for the quantization. \n\\begin{rem}\n\t$z = \\infty$ is a double pole of $x(z)$, i.e., $\\infty \\in R$, but we can verify that $\\infty \\notin R^{\\ast}$. \n\tTherefore, we can choose $\\beta = \\infty$ as a base point. \n\\end{rem}\nThen, \\thmref{WKB-Wg,n-BE} gives the quantum curve of the (1,4) curve (quantum (1,4) curve): \n\\begin{equation}\n\\label{eq:(1,4)_eq(d\/dx)} \n\t\\left\\{ 3 \\hbar^3 \\frac{d^3}{dx^3} + 2t \\hbar^2 \\frac{d^2}{dx^2} + x \\hbar \\frac{d}{dx} \n\t\t\t- \\hat{\\lambda}_\\infty \\right\\} \\psi = 0. \n\\end{equation} \nHere we used the notation \n\\begin{equation} \\label{eq:lambda-hat-(1,4)}\n\t\\hat{\\lambda}_\\infty = \\lambda_\\infty - \\nu_{\\infty} \\hbar.\n\\end{equation}\nNote that the special case $t = 0$\nof the equation has already been constructed as \na quantum curve in \\cite[\\S6.2.2]{BE}. \n\nLet $S_m(x, \\lambda, \\nu)$ be the coefficient of $\\hbar^m$ in the expansion \n$S(x, \\hbar) = \\sum_{m \\geq -1} \\hbar^m S_m(x, \\lambda, \\nu)$ \nof the logarithmic derivative of the WKB type formal solution of \\eqref{eq:(1,4)_eq(d\/dx)}. Then $S_m(x, \\lambda, \\nu)$ satisfies the following lemma. 
\n\\begin{lem}\n\\label{lem:(1,4)_Sn} \nFor $m = 1, 2, \\cdots$, we have\n\\begin{align} \nS_m(x, \\lambda, \\nu) = O(x^{-{2}}) \n\\quad\n(x \\rightarrow \\infty).\n\\end{align} \n\\end{lem} \n\n\n\\subsection{Quantum (2,3) curve}\n\\label{subsection:quantum-(2,3)}\n\nLet us consider the (2,3) curve defined by \n\\begin{equation}\n\\label{eq:(2,3)_P(x,y)}\n\tP(x, y) = 4 y^3 - 2 x y^2 + 2 {\\lambda_\\infty} y - t = 0\n\\end{equation}\nwith parameters $t, {\\lambda_\\infty} \\ne 0$. \nA rational parameterization of this curve is \n\\begin{equation}\n\\label{eq:(2,3)_parameterization}\n\\begin{cases}\n\t\\displaystyle\n\tx = x(z) \n\t= \\frac{4 z^3 + 2 {\\lambda_\\infty} z - t}{2 z^2} = 2 z + \\frac{{\\lambda_\\infty}}{z} - \\frac{t}{2 z^2}, \\\\[10pt]\n\t\\displaystyle\n\ty = y(z) = z. \n\\end{cases}\n\\end{equation}\nThe first few terms of the correlation functions and free energies are computed as \n\\begin{align*}\n\tW_{0, 3}(z_1, z_2, z_3) \n\t&= \\biggl\\{ \n\t\t- \\frac{{z_1}^2 (8 {z_1}^5 - 4({z_2} + {z_3}) {z_1}^4 - 2 {\\lambda_\\infty} {z_1}^3 \n\t\t\t\t+ t {z_1}^2 + (2 {\\lambda_\\infty} {z_2}{z_3} + t {z_2} + t {z_3}) {z_1} - 3t {z_2}{z_3})}\n\t\t\t{({z_1} - {z_2})^3 ({z_1} - {z_3})^3 (2 {z_1}^3 - {\\lambda_\\infty} {z_1} + t)^2} \\\\ \n\t&\\qquad\n\t\t- \\frac{{z_2}^2 (8 {z_2}^5 - 4({z_3} + {z_1}) {z_2}^4 - 2 {\\lambda_\\infty} {z_2}^3 \n\t\t\t\t+ t {z_2}^2 + (2 {\\lambda_\\infty} {z_3}{z_1} + t {z_3} + t {z_1}) {z_2} - 3t {z_3}{z_1})}\n\t\t\t{({z_2} - {z_3})^3 ({z_2} - {z_1})^3 (2 {z_2}^3 - {\\lambda_\\infty} {z_2} + t)^2} \\\\ \n\t&\\qquad\n\t\t- \\frac{{z_3}^2 (8 {z_3}^5 - 4({z_1} + {z_2}) {z_3}^4 - 2 {\\lambda_\\infty} {z_3}^3 \n\t\t\t\t+ t {z_3}^2 + (2 {\\lambda_\\infty} {z_1}{z_2} + t {z_1} + t {z_2}) {z_3} - 3t {z_1}{z_2})}\n\t\t\t{({z_3} - {z_1})^3 ({z_3} - {z_2})^3 (2 {z_3}^3 - {\\lambda_\\infty} {z_3} + t)^2}\n\t\t\\biggr\\} \\\\\n\t&\\quad\n\t\t\\times d{z_1} \\, d{z_2} \\, d{z_3}, \\\\\n\tW_{1, 1}(z)\n\t&= - \\frac{(4 z^3 - t) (8 {\\lambda_\\infty} z^4 
- 20t z^3 + 2 {\\lambda_\\infty} t z - t^2)}\n\t\t\t{8(2 z^3 - {\\lambda_\\infty} z + t)^4} \\, dz,\n\\end{align*}\n\\begin{align*}\t\n\tF_0(\\lambda_\\infty, t)\n\t= - \\frac{{\\lambda_\\infty}^2}{4} \\log{(-2 t)}, \\quad\n\tF_1(\\lambda_\\infty, t)\n\t= - \\frac{1}{8} \\log{t}.\n\\end{align*}\n\\begin{rem}\n\tAt first glance, $W_{0,3}$ appears to have singularities along the diagonals $z_i = z_j$, \n\tbut we can verify that $W_{0,3}$ is in fact holomorphic there. \n\\end{rem}\n\nWe choose \n\\begin{equation}\n\\label{eq:(2,3)_D}\n\\begin{split}\n\tD(z ; \\nu) \n\t&= [z] - (1 - \\nu_\\infty) [0] - \\nu_\\infty [\\infty] \\\\\n\t&= (1 - \\nu_\\infty) ([z] - [0]) + \\nu_\\infty ([z] - [\\infty]) \n\\end{split}\n\\end{equation}\nas the divisor for the quantization. \n\\begin{rem}\n\t$z = 0$ is a double pole of $x(z)$, i.e., $0 \\in R$, but we can verify that $0 \\notin R^{\\ast}$. \n\tTherefore, we can choose $\\beta = 0$ as a base point. \n\\end{rem}\nThen, \\thmref{WKB-Wg,n-BE} gives the quantum curve of the (2,3) curve (quantum (2,3) curve): \n\\begin{equation}\n\\label{eq:(2,3)_eq(d\/dx)}\n\t\\left\\{ 4 \\hbar^3 \\frac{d^3}{dx^3} - 2 x \\hbar^2 \\frac{d^2}{dx^2} \n\t\t\t+ 2 ( \\hat{\\lambda}_\\infty - \\hbar ) \\hbar \\frac{d}{dx} - t \\right\\} \\psi = 0, \n\\end{equation}\nwhere \n\\begin{equation} \\label{eq:lambda-hat-(2,3)}\n\t\\hat{\\lambda}_\\infty = \\lambda_\\infty - \\nu_{\\infty} \\hbar.\n\\end{equation}\nLet $S_m(x, \\lambda, \\nu)$ be defined from \\eqref{eq:(2,3)_eq(d\/dx)} in the same way as in \\S \\ref{subsection:quantum-(1,4)}. Then we have the following. \n\n\\begin{lem}\n\\label{lem:(2,3)_Sn} \nFor $m = 1, 2, \\cdots$, we have\n\\begin{align} \nS_m(x, \\lambda, \\nu) = O(x^{-{3\/2}}) \n\\quad\n(x \\rightarrow \\infty).\n\\end{align} \n\\end{lem} \n\n\n\n\n\n\\section{Voros coefficients and the free energy}\n\\label{sec:Voros-vs-TR}\n\n\n\\subsection{Relations between Voros coefficients and the free energy}\n\\label{subsec:Voros-vs-TR}\n\n\nIn this subsection we formulate the main results which allow us to express the Voros coefficients \nof the quantum curves discussed in \\S \\ref{subsec:quantum-curve} in terms of the free energy \nwith a parameter 
shift. \n\nLet \n\\begin{equation} \n\tF({\\lambda_{\\infty}}, t; \\hbar)\n\t= \\sum_{g = 0}^{\\infty} \\hbar^{2g - 2} F_g({\\lambda_{\\infty}}, t)\n\\end{equation}\nbe the free energy for the spectral curve in \\S \\ref{subsec:quantum-curve}. \nThen, the precise statement is formulated as follows. \n\n\\begin{thm}\n\\label{thm:main(i)}\nThe Voros coefficient and the free energy are related as follows:\n\\begin{equation} \n\\label{eq:V-and-F-general}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar)\n\t= F(\\hat{\\lambda}_{\\infty} + \\hbar, t; \\hbar) - F(\\hat{\\lambda}_{\\infty}, t; \\hbar) \n\t\t- \\frac{\\partial F_0}{\\partial \\lambda_{\\infty}} \\hbar^{-1} \n\t\t+ \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \n\\end{equation}\nHere $\\hat{\\lambda}_{\\infty} = {\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar$ as we have introduced in \\eqref{eq:lambda-hat-(1,4)}. \n\\end{thm}\n\nWe can prove \\thmref{main(i)} similarly to the case of the Weber equation because the proof does not depend on $t$. \n\nTo prove \\thmref{main(i)}, we need the following identity. \n\n\\begin{lem}\n\\label{lem:variation}\n\\begin{equation}\n\\label{eq:variation}\n\t\\frac{\\partial^n}{\\partial{\\lambda_{\\infty}}^n} F_g\n\t= \\int_{\\zeta_1 = 0}^{\\zeta_1=\\infty}\\cdots \\int_{\\zeta_n = 0}^{\\zeta_n=\\infty}\n\t\tW_{g, n}(\\zeta_1, \\cdots, \\zeta_n)\\qquad (2g + n \\geq 3).\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation}]\nBecause\n\\begin{equation}\n\t\\Omega(z) \n\t= \\frac{\\partial y(z)}{\\partial {\\lambda_{\\infty}}} \\cdot dx(z)\n\t\t- \\frac{\\partial x(z)}{\\partial {\\lambda_{\\infty}}} \\cdot dy(z)\n\t= - \\frac{dz}{z}\n\t= \\int^{\\zeta = \\infty}_{\\zeta = 0} B(z, \\zeta)\n\\end{equation}\nholds, Theorem \\ref{thm:VariationFormula}\ngives \\eqref{eq:variation}, except for the case $g=0$. 
\nBy using the expressions of $W_{0,3}$ and $F_0$, \nwe can verify \\eqref{eq:variation} holds for $(g,n) = (0,3)$ directly. \nTherefore, thanks to Theorem \\ref{thm:VariationFormula}, \nwe can conclude that \\eqref{eq:variation} is also valid for $g=0$ and $n \\ge 3$. \nThis completes the proof. \n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(i)}]\nBy Theorem \\ref{thm:WKB-Wg,n-BE}, the Voros coefficient can be rewritten as\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t&= \\int_0^\\infty \n\t\t\\Bigl( S(x(z), {\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) - \\hbar^{-1} S_{-1}(x(z), {\\lambda_{\\infty}}, t) \n\t\t\t- S_0(x(z), {\\lambda_{\\infty}}, t, \\nu_{\\infty}) \n\t\t\\Bigr) \\frac{dx}{dz} \\, dz \\\\ \n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \\int_0^\\infty \n\t\t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\t\\frac{1}{n!} \\frac{d}{dz} \\int_{\\zeta_1 \\in D(z; \\underline{\\nu})}\n\t\t\t\t\\cdots \\int_{\\zeta_n \\in D(z; \\underline{\\nu})} W_{g, n}(\\zeta_1, \\ldots, \\zeta_n) \n\t\t\\right\\} dz \\notag \\\\\n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \n\t\t\\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \\frac{1}{n!} \n\t\t\t\\left( \\int_{\\zeta_1 \\in D(\\infty; \\underline{\\nu})} \\cdots \n\t\t\t\t\t\\int_{\\zeta_n \\in D(\\infty; \\underline{\\nu})} \\right. \\notag \\\\\n\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\left. \n\t\t\t\t\t- \\int_{\\zeta_1 \\in D(0; \\underline{\\nu})} \\cdots \n\t\t\t\t\t\\int_{\\zeta_n \\in D(0; \\underline{\\nu})} \n\t\t\t\\right) W_{g, n}(\\zeta_1, \\ldots, \\zeta_n). 
\\notag\n\\end{align}\nBecause\n\\begin{equation}\n\tD(\\infty; \\underline{\\nu}) = (1 - \\nu_{\\infty}) ([\\infty] - [0]) \n\t\\quad\\text{and}\\quad \n\tD(0; \\underline{\\nu}) = - \\nu_{\\infty} ([\\infty] - [0]), \n\\end{equation}\nwe have\n\\begin{equation}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t= \\sum_{m = 1}^{\\infty} \\hbar^m \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \\int_0^\\infty \\cdots \\int_0^\\infty \n\t\t\t\tW_{g, n}(\\zeta_1, \\ldots, \\zeta_n). \n\\end{equation}\nNow we use Lemma \\ref{lem:variation}:\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t&= \\sum_{m = 1}^{\\infty} \\hbar^m \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \\frac{ \\partial^n F_g }{ \\partial {\\lambda_{\\infty}}^n } \\\\\n\t&= \\sum_{n = 1}^{\\infty} \\frac{(1 - \\nu_{\\infty})^n - (- \\nu_{\\infty})^n}{n!} \n\t\t\\hbar^n \\frac{ \\partial^n F({\\lambda_{\\infty}}, t; \\hbar) }{ \\partial {\\lambda_{\\infty}}^n } \n\t\t\t- \\frac{(1 - \\nu_{\\infty}) - (- \\nu_{\\infty})}{\\hbar}\\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \n\t\t\\notag\\\\\n\t&\\qquad \n\t\t\t- \\frac{(1 - \\nu_{\\infty})^2 - (- \\nu_{\\infty})^2}{2!} \\frac{\\partial^2 F_0}{\\partial{\\lambda_{\\infty}}^2} \n\t\t\\notag\\\\\n\t&= F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar + \\hbar, t; \\hbar \\right) \n\t\t- F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar, t; \\hbar \\right) \n\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t+ \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. 
\\notag \n\\end{align}\n\\end{proof}\n\n\\begin{rem} \\label{rem:regularization}\nIn the definition (\\ref{eq:def-Voros-coeff}) of the Voros coefficient, we subtracted the first two terms \n$\\hbar^{-1}S_{-1}$ and $S_0$ because these terms are singular at endpoints of the path \n$\\gamma$. \nHowever, a regularization procedure of divergent integral (see \\cite{Voros-zeta} for example) \nallows us to define the regularized Voros coefficient as follows:\n\\begin{equation} \n\tV_{\\rm reg}({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t:= \\hbar^{-1}V_{-1}({\\lambda_{\\infty}}, t, \\nu_{\\infty}) + V_0({\\lambda_{\\infty}}, t, \\nu_{\\infty}) \n\t\t+ V({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar), \n\\end{equation}\nwhere $V_{-1}({\\lambda_{\\infty}}, t, \\nu_{\\infty})$ and $V_0({\\lambda_{\\infty}}, t, \\nu_{\\infty})$ are obtained by solving \n\\begin{equation} \n\\label{eq:zeta-regularization-equation}\n\t\\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} V_{-1} \n\t= \\int_{\\gamma} \\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} S_{-1}(x) \\, dx, \\quad \n\t\\frac{\\partial}{\\partial {\\lambda_{\\infty}}} V_{0}\n \t= \\int_{\\gamma} \\frac{\\partial}{\\partial {\\lambda_{\\infty}}} S_{0}(x) \\, dx.\n\\end{equation}\nActually, we can verify that $\\partial_{{\\lambda_{\\infty}}}^2 S_{-1}(x) dx$ and \n$\\partial_{{\\lambda_{\\infty}}}S_0(x) dx$ are holomorphic at $x=\\infty$ although \n$S_{-1}$ and $S_0$ are singular there.\nHence, the equations \\eqref{eq:zeta-regularization-equation} make sense\nand we can find $V_{-1}$ and $V_{0}$. For example, in the case of the (1,4) quantum curve, \nwe obtain \n\\begin{align}\n\t\\frac{\\partial^2}{\\partial {\\lambda_{\\infty}}^2} V_{-1} \n\t= \\frac{1}{{\\lambda_{\\infty}}}, \\qquad \n\t\\frac{\\partial}{\\partial {\\lambda_{\\infty}}} V_{0} \n\t= - \\frac{2 \\nu_{\\infty} - 1}{2 {\\lambda_{\\infty}}}. 
\n\\end{align}\nActually, we can verify that the regularized integrals are given by \n\\begin{equation}\n\tV_{-1} = \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}}, \\qquad\n\tV_{0} = - \\frac{2 \\nu_{\\infty} - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}, \n\\end{equation}\nwhich precisely cancel the correction terms in the right-hand side of the relation \\eqref{eq:V-and-F-general}.\nThus we conclude that the regularized Voros coefficient satisfies \n\\begin{equation} \n\\label{eq:Vreg-and-free-energy}\n\tV_{\\rm reg}({\\lambda_{\\infty}}, t, \\nu_{\\infty}; \\hbar) \n\t= F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar + \\hbar, t; \\hbar \\right) \n\t\t- F \\left({\\lambda_{\\infty}} - \\nu_{\\infty} \\hbar, t; \\hbar \\right). \n\\end{equation}\n\\end{rem}\n\n\n\n\\subsection{Three-term difference equations satisfied by the free energy} \n\\label{subsec:Three-term_difference-eq.} \n\n\nIn this subsection, we derive the three-term difference equation that the generating series of the free energies satisfies. The precise statement is formulated as follows. \n\n\\begin{thm}\n\\label{thm:main(ii)}\nThe free energy \\eqref{eq:total-free-energy} satisfies the following difference equation.\n\\begin{equation}\n\\label{eq:free-energy_difference-eq.}\n\tF({\\lambda_{\\infty}} + \\hbar, t; \\hbar) - 2 F({\\lambda_{\\infty}}, t; \\hbar) + F({\\lambda_{\\infty}} - \\hbar, t; \\hbar) \n\t= \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. \n\\end{equation}\n\\end{thm}\n\nWe will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. 
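\n\nBefore turning to the proof, we note that \\eqref{eq:free-energy_difference-eq.} can be checked symbolically, order by order in $\\hbar$, for the (1,4) curve, using $F_0$ and $F_1$ from \\S \\ref{subsection:quantum-(1,4)} together with the closed form of $F_g$ $(g \\geq 2)$ given in Theorem \\ref{thm:main(iii)} below. The following short sympy sketch is an illustration only and not part of the argument.\n

```python
import sympy as sp

lam, t, h = sp.symbols('lam t h')

# Free energies of the (1,4) curve: F_0 and F_1 as computed in the text,
# F_g (g >= 2) taken in the closed form B_{2g} / (2g(2g-2) lam^{2g-2}).
F0 = (-t**6/sp.Integer(972) + 2*lam*t**3/sp.Integer(27)
      - sp.Rational(3, 4)*lam**2 + sp.Rational(1, 4)*lam**2*sp.log(-3*lam**2))
F1 = -sp.log(lam)/12
G = 4  # genus truncation
F = F0/h**2 + F1 + sum(
    sp.bernoulli(2*g)/(2*g*(2*g - 2))/lam**(2*g - 2) * h**(2*g - 2)
    for g in range(2, G + 1)
)

# Three-term difference minus the claimed right-hand side, expanded in h.
# With F truncated at genus G, the identity holds through the computed orders.
second_diff = F.subs(lam, lam + h) - 2*F + F.subs(lam, lam - h)
residual = sp.series(second_diff - sp.diff(F0, lam, 2), h, 0, 2*G - 2).removeO()
residual = sp.simplify(sp.expand(residual))
print(residual)
```

\nThe residual of the expansion vanishes up to the truncation order, in agreement with the theorem.\n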
\n\nTo prove Theorem \\ref{thm:main(ii)}, we need the following identity.\n\n\\begin{lem}\n\\label{lem:Voros-parameter}\n\\begin{equation}\n\\label{eq:Voros-difference}\n\tV({\\lambda_{\\infty}}, t, - \\nu_{\\infty}; \\hbar) - V({\\lambda_{\\infty}}, t, 1 - \\nu_{\\infty}; \\hbar) \n\t\t= - \\log{ \\left( 1 - \\frac{\\nu_{\\infty} \\hbar}{{\\lambda_{\\infty}}} \\right) }.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(ii)}]\nFrom \\lemref{Voros-parameter}, \n\\begin{equation}\n\\label{eq:main:tmpeq}\n\tV({\\lambda_{\\infty}}, t, 0; \\hbar) = V({\\lambda_{\\infty}}, t, 1; \\hbar).\n\\end{equation}\nIt follows from \\thmref{main(i)} that\n\\begin{align}\n\tV({\\lambda_{\\infty}}, t, 0; \\hbar)\n\t\t&= F \\left({\\lambda_{\\infty}} + \\hbar, t; \\hbar \\right) - F \\left({\\lambda_{\\infty}}, t; \\hbar \\right)\n\t\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t\t- \\frac{1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}, \\\\\n\tV({\\lambda_{\\infty}}, t, 1; \\hbar) \n\t\t&= F \\left({\\lambda_{\\infty}}, t; \\hbar \\right) - F \\left({\\lambda_{\\infty}} - \\hbar, t; \\hbar \\right)\n\t\t\t- \\frac{\\partial F_0}{\\partial {\\lambda_{\\infty}}} \\hbar^{-1}\n\t\t\t+ \\frac{1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}.\n\\end{align}\nBy substituting these two relations into \\eqref{eq:main:tmpeq}, we obtain \\thmref{main(ii)}.\n\\end{proof}\n\n\n\n\\subsection{The explicit form of the free energy}\n\\label{subsec:FreeEnergy} \n\n\nWe now give explicit formulas for the coefficients of the free energy and of the Voros coefficients; in this subsection we provide those for the free energy. We will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. 
\n\n\\begin{thm}\n\\label{thm:main(iii)}\nFor $g \\geq 2$, the $g$-th free energy of \nthe spectral curve $(C)$\nhas the following expression.\n\\begin{itemize}\n\t\\item[$\\bullet$] For (1,4) curve (\\S \\ref{subsection:quantum-(1,4)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(1,4)_Fg(concrete-form)}\n\tF_g({\\lambda_{\\infty}}, t) = \\frac{B_{2g}}{2g(2g - 2)} \\dfrac{1}{{{\\lambda_\\infty}}^{2g-2}} \\quad (g \\geq 2), \n\\end{equation}\nwhere $\\{B_n\\}_{n \\geq 0}$ denotes the Bernoulli numbers, defined by \n\\begin{equation}\n\\label{def:Bernoulli}\n\t\\frac{w}{e^w - 1} = \\sum_{n = 0}^{\\infty} B_n \\frac{w^n}{n!}.\n\\end{equation}\n($F_0$ and $F_1$ for (1,4) curve are given in \\S \\ref{subsection:quantum-(1,4)}.) \n\\begin{itemize}\n\t\\item[$\\bullet$] For (2,3) curve (\\S \\ref{subsection:quantum-(2,3)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(2,3)_Fg(concrete-form)}\n\tF_g({\\lambda_{\\infty}}, t) = 0 \\quad (g \\geq 2).\n\\end{equation}\n($F_0$ and $F_1$ for (2,3) curve are given in \\S \\ref{subsection:quantum-(2,3)}.) \n\\end{thm}\n\n\nTo prove \\thmref{main(iii)}, we need the following lemma. \n\n\\begin{lem}\n\\label{lem:t-dependence}\n\\begin{equation}\n\\label{eq:t-dependence}\n\t\\frac{ \\partial F_g }{ \\partial t} = 0 \\qquad (g \\geq 1).\n\\end{equation}\n\\end{lem}\n\nLemma \\ref{lem:t-dependence} is obtained from the following lemma. \n\n\\begin{lem}\n\\label{lem:variation-t}\nFor the (1,4) equation \n\\begin{equation}\n\\label{eq:variation-t}\n\t\\frac{ \\partial F_{g} }{ \\partial t} \n\t= - \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\, W_{g,1}(z) \n\\end{equation}\nholds. 
\n\\end{lem}\n\n\\begin{lem}\n\\label{lem:variation-t()}\nFor the (1,4) equation the following relations hold: \n\\begin{align}\n\\label{eq:variation-t()_1}\n\t\\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x(z)) dx(z) \n\t= C_{-1}(z, {\\lambda_{\\infty}}, \\nu_{\\infty}) \\hbar^{-1} + C_0(z, {\\lambda_{\\infty}}, \\nu_{\\infty}), \\\\\n\\label{eq:variation-t()_3}\n\t\\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{\\substack{g \\geq 0, \\, n \\geq 2 \\\\ (g, n) \\ne (0, 2)}} \n\t\t\\frac{\\hbar^{2g - 2 + n}}{(n-1)!} \n\t\t\\int_{\\infty}^z \\cdots \\int_{\\infty}^z W_{g, n}(z, z_2, \\ldots, z_n)\n\t= \\sum_{g \\geq 1} \\hbar^{2g} C_{g, 2}, \n\\end{align}\nwhere $C_{-1}$, $C_{0}$ and $C_{g, 2}$ $(g \\geq 1)$ are constant with respect to $\\hbar$. \n\\end{lem}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation-t}]\nBy using the Riccati equation, \nwe can verify \\eqref{eq:variation-t()_1} directly. \nBecause\n\\begin{equation}\n\t\\Omega(z) \n\t= \\frac{\\partial y(z)}{\\partial t} \\cdot dx(z)\n\t\t- \\frac{\\partial x(z)}{\\partial t} \\cdot dy(z)\n\t= 2z \\, dz \n\t= - \\frac{1}{2 \\pi i} \\int_{\\zeta \\in \\gamma} {\\zeta}^2 B(z, \\zeta)\n\\end{equation}\nholds, Theorem \\ref{thm:VariationFormula}\ngives \\eqref{eq:variation-t}. \n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:variation-t()}]\nBecause $W_{g, n}(z_1, \\cdots, z_n)$ are holomorphic at $z_i = \\infty$ $(1 \\leq i \\leq n)$ \nfor $2g - 2 + n \\geq 1$, we find that \n\\begin{align*}\n\tW_{g, n}(z, z_2, \\ldots, z_n) \n\t\\sim \\frac{d z_i}{{z_i}^2} \\left( {C}^{(i)}_{g,n} + O(1\/z_i) \\right)\n\t\\quad (z_i \\rightarrow \\infty ). 
\n\\end{align*}\nThen, since lower order terms $O(1\/z_i)$ vanish in the limit $z_i \\to \\infty$ $(1 \\leq i \\leq n)$,\nwe obtain\n\\begin{align*}\n\t\\int_{\\zeta_2 = \\infty}^{\\zeta_2 = z_2} W_{g, n}(z, \\zeta_2, \\ldots, \\zeta_n) \n\t= \\int_{\\zeta_2 = \\infty}^{\\zeta_2 = z_2} \n\t\t\t\\frac{C_{g,n} d \\zeta_2 \\cdots d \\zeta_n}\n\t\t\t\t{{z}^2 {\\zeta_2}^2 {\\zeta_3}^2 \\cdots {\\zeta_n}^2} d z\n\t= - \\frac{C_{g,n} d \\zeta_3 \\cdots d \\zeta_n}{{z}^2 {z_2} {\\zeta_3}^2 \\cdots {\\zeta_n}^2} d z.\n\\end{align*}\nTherefore, \n\\begin{align*}\n\t\\left. \\int_{\\infty}^{z_2} \\cdots \\int_{\\infty}^{z_n} W_{g, n}(z, z_2, \\ldots, z_n) \n\t\\right|_{z_2 = \\cdots = z_n = z}\n\t\\sim \\left( \\frac{(-1)^{n+1}C_{g,n}}{z^{n+1}} + \\cdots \\right) dz \n\\end{align*}\nholds. Multiplying both sides of the equation by $z^2$ and calculating residues, we obtain \\eqref{eq:variation-t()_3}. \n\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:t-dependence}]\nBy taking $\\nu = 0$ in \\thmref{WKB-Wg,n-BE} we obtain \n\\begin{align}\n\t&\\left. \\log{\\psi} \\right|_{x = x(z)} = \\sum_{m = -1}^{\\infty} \\hbar^m \\int^{x(z)} S_m dx \\\\\n\t&= \\sum_{m = -1}^{\\infty} \\hbar^m \n\t \t\\left\\{ \\sum_{\\substack{2g + n - 2 = m \\\\ g \\geq 0, \\, n \\geq 1}} \n\t\t\t\\frac{1}{n!} \\int_{\\infty}^z \\cdots \\int_{\\infty}^z \n\t\t\t\t\\left( W_{g, n}(z_1, \\ldots, z_n) \n\t\t\t\t\t\t- \\delta_{g,0} \\delta_{n,2} \\frac{dx(z_1) \\, dx(z_2)}{(x(z_1) - x(z_2))^2} \n\t\t\t\t\\right)\n\t\t\\right\\}. 
\\notag \n\\end{align}\nIt follows from this equation that \n\\begin{align}\n\t\\sum_{g \\geq 0} \\hbar^{2g - 1} \\left(- \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{g, 1}(z) \\right) \n\t&= - \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{m = -1}^{\\infty} \\hbar^m S_m(x(z)) dx(z) \\\\\n\t&\\qquad \\notag \n\t\t+ \\mathop{\\rm{Res}}_{z = \\infty} z^2 \n\t\t\t\\int_{\\infty}^z \\left( W_{0, 2}(z, z_2) - \\frac{dx(z) \\, dx(z_2)}{(x(z) - x(z_2))^2} \\right) \\\\\n\t&\\qquad \\notag\n\t\t+ \\mathop{\\rm{Res}}_{z = \\infty} z^2 \\sum_{\\substack{g \\geq 0, \\, n \\geq 2 \\\\ (g, n) \\ne (0, 2)}} \n\t\t\t\\frac{\\hbar^{2g - 2 + n}}{(n-1)!} \n\t\t\t\\int_{\\infty}^z \\cdots \\int_{\\infty}^z W_{g, n}(z, z_2, \\ldots, z_n). \n\\end{align}\nBecause the left-hand side of this equation can be written as \n\\begin{align*}\n\t\\sum_{g \\geq 0} \\hbar^{2g - 1} \\left(- \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{g, 1}(z) \\right) \n\t= - \\hbar^{-1} \\mathop{\\rm{Res}}_{z = \\infty} z^2 W_{0, 1}(z) \n\t\t+ \\sum_{g \\geq 1} \\hbar^{2g - 1} \\frac{ \\partial F_g }{ \\partial t}, \n\\end{align*}\nwe compare the odd terms with respect to $\\hbar$ on both sides. \nBy using \\lemref{variation-t()} we find that there is no odd term whose order with respect to \n$\\hbar$ is greater than or equal to one on the right-hand side. \nThis means that \\eqref{eq:t-dependence} holds. \n\\end{proof}\n\nNow we give a proof of \\thmref{main(iii)}.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main(iii)}]\nBy using a shift operator (or an infinite-order differential operator) \n$e^{\\hbar\\partial_{{\\lambda_{\\infty}}}}$, the equation (\\ref{eq:free-energy_difference-eq.}) \nin \\thmref{main(ii)} becomes\n\\begin{equation}\n\\label{prop:difference-eq:sol:tmp:1}\n\te^{-\\hbar\\partial_{{\\lambda_{\\infty}}}} (e^{\\hbar\\partial_{{\\lambda_{\\infty}}}} - 1)^2 F({\\lambda_{\\infty}}, t; \\hbar)\n\t= \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}. 
\n\\end{equation}\nIt follows from \n\\begin{equation}\n\te^{-w} (e^w - 1)^2 \n\t\\left\\{ \\frac{1}{w^2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \\frac{\\, w^n \\,}{\\, n! \\,}\n\t\\right\\} = 1\n\\end{equation} \n(which follows from the definition \\eqref{def:Bernoulli} of the Bernoulli numbers) that\n\\begin{equation}\n\te^{-\\hbar\\partial_{{\\lambda_{\\infty}}}} (e^{\\hbar\\partial_{{\\lambda_{\\infty}}}} - 1)^2\n\t\\left\\{ (\\hbar\\partial_{{\\lambda_{\\infty}}})^{-2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \n\t\t\t\\frac{\\, (\\hbar\\partial_{{\\lambda_{\\infty}}})^n \\,}{\\, n! \\,}\n\t\\right\\} = {\\rm{id}}.\n\\end{equation}\nHence we find that\n\\begin{align}\n\\label{sol:FreeEnergy}\n\t\\hat{F}({\\lambda_{\\infty}}, t;\\hbar)\n\t&:= \\left\\{ (\\hbar\\partial_{{\\lambda_{\\infty}}})^{-2} - \\sum_{n = 0}^{\\infty} \\frac{B_{n + 2}}{\\, n + 2 \\,} \n\t\t\t\t\\frac{\\, (\\hbar\\partial_{{\\lambda_{\\infty}}})^n \\,}{\\, n! \\,}\n\t\t\\right\\} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} \\\\\n\t&= \\hbar^{-2} F_0({\\lambda_{\\infty}}, t) \n\t\t- \\frac{1}{12} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} \n\t\t+ \\sum_{g = 2}^{\\infty} \\frac{B_{2g}}{2g(2g-2)} \\frac{\\hbar^{2g - 2}}{{\\lambda_{\\infty}}^{2g-2}} \n\t\t+ \\hat{F}_t (t)\n\t\t\\notag\n\\end{align}\nis a solution of \\eqref{eq:free-energy_difference-eq.}. 
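\nIndeed, since $\\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} = \\frac{1}{2} \\log{(-3 {\\lambda_{\\infty}}^2)}$ (see below) coincides with $\\log {\\lambda_{\\infty}}$ up to an additive constant, the second line of \\eqref{sol:FreeEnergy} follows from \n\\begin{equation}\n\t\\partial_{{\\lambda_{\\infty}}}^{\\, n} \\log {\\lambda_{\\infty}} = \\frac{(-1)^{n-1} (n-1)!}{{\\lambda_{\\infty}}^{n}} \\quad (n \\geq 1), \n\\end{equation}\nwhich shows that the term with index $n \\geq 1$ of the sum contributes \n\\begin{equation}\n\t- \\frac{B_{n + 2}}{n + 2} \\frac{(\\hbar\\partial_{{\\lambda_{\\infty}}})^{n}}{n!} \\log {\\lambda_{\\infty}} \n\t= \\frac{(-1)^{n} B_{n + 2}}{n (n + 2)} \\left( \\frac{\\hbar}{{\\lambda_{\\infty}}} \\right)^{n}. \n\\end{equation}\nThe odd terms vanish because $B_{2k + 1} = 0$ for $k \\geq 1$, the term with $n = 0$ gives $- \\frac{1}{12} \\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2}$, and the even terms with $n = 2g - 2$ give the coefficients $\\frac{B_{2g}}{2g(2g - 2)}$.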
\nHere we note that \n\\begin{equation}\n\\frac{\\partial^2 F_0}{\\partial {\\lambda_{\\infty}}^2} = \\frac{1}{2} \\log{(-3 {\\lambda_{\\infty}}^2)} \n\\end{equation}\nholds.\n\nSince $F$ and $\\hat{F}$ satisfy the same difference equation \n\\eqref{eq:free-energy_difference-eq.}, their difference \n\t$G := F - \\hat{F} = \\sum_{g=2}^{\\infty} \\hbar^{2g-2} G_{g}({\\lambda_{\\infty}}, t)$ \nsatisfies \n\\begin{equation}\n\tG({\\lambda_{\\infty}} + \\hbar, t; \\hbar) - 2G({\\lambda_{\\infty}}, t; \\hbar) + G({\\lambda_{\\infty}} - \\hbar, t; \\hbar) = 0.\n\\end{equation}\nThis relation implies that each coefficient $G_{g}({\\lambda_{\\infty}}, t)$ of $G$ must satisfy \n$\\partial_{\\lambda_{\\infty}}^2 G_{g} = 0$.\nTherefore, each term of $G$ must be linear in ${\\lambda_{\\infty}}$. \nHowever, due to the homogeneity\nand \\lemref{t-dependence}, \n$F_g - \\hat{F}_g$ must be zero for all $g$. \nThis shows the desired equality \\eqref{eq:(1,4)_Fg(concrete-form)}. \n\\end{proof}\n\n\n\\subsection{The explicit form of Voros coefficients} \n\\label{subsec:voros} \n\n \nIn this subsection we provide the explicit expressions for the Voros coefficients. We will only give the proof for the (quantum) (1,4) curve because the result for the (quantum) (2,3) curve is proved in a similar manner. \n\n\\begin{thm}\n\\label{thm:main(iv)}\nThe Voros coefficients\nfor the following quantum curves \nhave the following expressions. 
\n\\begin{itemize}\n\t\\item[$\\bullet$] For (1,4) curve (\\S \\ref{subsection:quantum-(1,4)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(1,4)_Voros(concrete-form)}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}, \\hbar) \n\t= \\sum_{m = 1}^{\\infty} \\frac{B_{m+1}(\\nu_{\\infty})}{m(m + 1)} \n\t\t\\left( \\frac{\\hbar}{{\\lambda_{\\infty}}} \\right)^{m}.\n\\end{equation}\nHere $B_m(X)$ is the Bernoulli polynomial defined through the generating function as\n\\begin{equation}\n\\label{def:BernoulliPoly}\n\\frac{w e^{X w}}{e^w - 1} = \\sum_{m = 0}^{\\infty} B_m(X) \\frac{w^m}{m!}.\n\\end{equation}\n(These expressions were also obtained in \\cite{IKo}.)\n\\begin{itemize}\n\t\\item[$\\bullet$] For (2,3) curve (\\S \\ref{subsection:quantum-(2,3)}):\n\\end{itemize} \\vspace{-1.3em}\n\\begin{equation}\n\\label{eq:(2,3)_Voros(concrete-form)}\n\tV({\\lambda_{\\infty}}, t, \\nu_{\\infty}, \\hbar) = 0. \n\\end{equation}\n\\end{thm}\n\n\n\n\\begin{proof}\nThe relation \\eqref{eq:Vreg-and-free-energy} \nbetween the regularized Voros coefficient \nand the free energy can be written as\n\\begin{equation}\n\tV_{\\rm reg}(\\lambda_\\infty, t, \\nu_\\infty; \\hbar) \n\t= e^{- \\nu_\\infty \\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big) \n\t\tF(\\lambda_\\infty, t; \\hbar) \n\\end{equation}\nby the shift operators. 
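\nHere we use that the shift operator acts on formal power series as $e^{a \\hbar \\partial_{\\lambda_\\infty}} F(\\lambda_\\infty, t; \\hbar) = F(\\lambda_\\infty + a \\hbar, t; \\hbar)$, so that \n\\begin{equation}\n\te^{- \\nu_\\infty \\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big) F(\\lambda_\\infty, t; \\hbar) \n\t= F(\\lambda_\\infty - \\nu_\\infty \\hbar + \\hbar, t; \\hbar) - F(\\lambda_\\infty - \\nu_\\infty \\hbar, t; \\hbar), \n\\end{equation}\nwhich is precisely the right-hand side of \\eqref{eq:Vreg-and-free-energy}.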
\nUsing the three-term relation \\eqref{eq:free-energy_difference-eq.} of $F$, \nwe have\n\\begin{equation} \n\\label{eq:(1,4)-diffrence-eq-for-V}\n\\begin{split}\n\te^{(\\nu_\\infty - 1) \\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big) \n\t\tV(\\lambda_\\infty, t, \\nu_\\infty; \\hbar)\n\t= e^{-\\hbar \\partial_{\\lambda_\\infty}} \\Big( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\Big)^2 \n\t\tF(\\lambda_\\infty, t; \\hbar) \n\t= \\frac{1}{2} \\log{(-3 {\\lambda_\\infty}^2)}.\n\\end{split}\n\\end{equation}\n\nLet us invert the shift operator \n$e^{(\\nu_\\infty - 1) \\hbar \\partial_{\\lambda_\\infty}} \\left( e^{\\hbar\\partial_{\\lambda_\\infty}} - 1 \\right)$\n(i.e., solve the difference equation) \nto obtain an expression for $V_{\\rm reg}$. \nFor this purpose, we use a technique similar to the one used in the previous subsection. \nNamely, it follows from \n\\begin{equation}\n\te^{- X w} (e^{w} - 1) \n\t\\left(\\frac{1}{w} + \\sum_{m=0}^{\\infty} \\frac{B_{m+1}(X)}{m+1} \\frac{w^m}{m!} \\right) = 1\n\\end{equation}\n(cf.\\,\\eqref{def:BernoulliPoly}) that \n\\begin{equation}\n\te^{- X \\hbar \\partial_{\\lambda_\\infty}} (e^{\\hbar \\partial_{\\lambda_\\infty}} - 1) \n\t\\left( (\\hbar \\partial_{\\lambda_\\infty})^{-1} \n\t\t\t+ \\sum_{m=0}^{\\infty} \\frac{B_{m+1}(X)}{m+1} \\frac{(\\hbar \\partial_{\\lambda_\\infty})^m}{m!} \n\t\\right) \n\t= {\\rm id}. 
\n\\end{equation}\nThe last equality with $X = 1 - \\nu_\\infty$ shows that the formal series\n\\begin{align} \n\\label{eq:expression-Vreg}\n\tV_{\\rm reg} \n\t&= \\hbar^{-1} \\frac{\\partial F_0}{\\partial \\lambda_\\infty} \n\t\t- \\frac{\\nu_\\infty - 1}{2} \\frac{\\partial^2 F_0}{\\partial {\\lambda_\\infty}^2} \n\t\t+ \\sum_{m=1}^{\\infty} \\frac{B_{m+1}(1- \\nu_\\infty)}{m+1} \n\t\t\\frac{(\\hbar \\partial_{\\lambda_\\infty})^m \\log \\lambda_\\infty}{m!} \\\\\n\t\t\\notag \n\t&= \\hbar^{-1} V_{-1} + V_0 + \n\t\t\\sum_{m=1}^{\\infty} \\frac{(-1)^{m+1}B_{m+1}(1 - \\nu_\\infty)}{m(m+1)} \n\t\t\\left(\\frac{\\hbar}{\\lambda_\\infty}\\right)^m \\\\\n\t\t\\notag \n\t&= \\hbar^{-1} V_{-1} + V_0 + \n\t\t\\sum_{m=1}^{\\infty} \\frac{B_{m+1}(\\nu_\\infty)}{m(m+1)} \n\t\t\\left(\\frac{\\hbar}{\\lambda_\\infty}\\right)^m \n\\end{align}\nsatisfies the difference equation \\eqref{eq:(1,4)-diffrence-eq-for-V}.\nHere we used $B_1(X) = X - 1\/2$ and the equality \n$B_m(X) = (-1)^{m}B_m(1-X)$.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nF\\\"urstenberg's correspondence principle creates a fruitful link between finite combinatorics and ergodic theory. It connects additive combinatorics with the study of shift invariant measures on the Cantor set $\\{0,1\\}^\\mathbb{Z}$. In particular it leads to various strengthenings and generalizations of Szemer\\'edi's celebrated theorem on arithmetic progressions. \n\nThe goal of this paper is to study a similar correspondence principle between finite large girth $d$-regular graphs and ${\\rm Aut}(T_d)$ invariant probability measures on $F^{V(T_d)}$ where $F$ is a finite set and $T_d$ is the $d$-regular tree with vertex set $V(T_d)$. The case $d=2$ is basically classical ergodic theory however the case $d\\geq 3$ is much less developed. \n\nOur approach can be summarized as follows. Assume that $G$ is a $d$-regular graph of girth $g$. 
We think of $d$ as a fixed number (say $10$) and $g$ as something very large. We wish to scan the large scale structure of $G$ in the following way. We put a coloring $f:V(G)\\rightarrow F$ on the vertices of $G$ with values in a finite set $F$. (It does not have to be a proper coloring i.e. neighboring vertices can have identical color.) Then we look at the colored neighborhoods (of bounded radius) of randomly chosen points $v\\in V(G)$. By this sampling we obtain a probability distribution on $F$-colored (bounded) trees that carries valuable information on the global structure of $G$. For example, if there is a coloring $f:V(G)\\rightarrow\\{0,1\\}$ such that, with high probability, a random vertex $v$ has a color different from its neighbours, then $G$ is essentially bipartite. \n\nIt turns out to be very convenient to regard the information obtained from a specific coloring as an approximation of a probability measure on $F^{V(T_d)}$ that is invariant under ${\\rm Aut}(T_d)$. This can be made precise by using Benjamini--Schramm limits of colored graphs (see Section \\ref{invproc}, or \\cite{bs} for the original formulation). We will use the following definition.\n\n\\begin{definition} Let $\\mathcal{S}=\\{G_i\\}_{i=1}^\\infty$ be a sequence of $d$-regular graphs. We say that $\\mathcal{S}$ is a large girth sequence if for every $\\varepsilon>0$ there is an index $n$ such that for every $i\\geq n$ the probability that a random vertex in $G_i$ is contained in a cycle of length at most $\\lceil 1\/\\varepsilon\\rceil$ is at most $\\varepsilon$.\n\\end{definition}\n\n\\begin{definition}\\label{profile} Let $\\mathcal{S}=\\{G_i\\}_{i=1}^\\infty$ be a large girth sequence of $d$-regular graphs, and $F$ a finite set. We denote by $[\\mathcal{S}]_F$ the set of ${\\rm Aut}(T_d)$ invariant probability measures on $F^{V(T_d)}$ that arise as Benjamini--Schramm limits of $F$-colorings $\\{f_i:V(G_i)\\rightarrow F\\}_{i=1}^\\infty$ of $\\mathcal{S}$. 
We denote by $[\\mathcal{S}]$ the set $\\bigcup_{n\\in\\mathbb{N}}[\\mathcal{S}]_{\\{1,2,\\dots,n\\}}$.\n\\end{definition}\n\n It is clear that if $\\mathcal S'$ is a subsequence of $\\mathcal S$, then $[\\mathcal S]\\subseteq [\\mathcal S']$. If $[\\mathcal{S}]=[\\mathcal S']$ holds for every subsequence $S'$ of $S$, then $\\mathcal{S}$ is called {\\it local-global convergent} (see Subsection \\ref{corresp} and \\cite{HLSz}). Local-global convergent sequences of graphs have limit objects in the form of a {\\it graphing} \\cite{HLSz}. For a convergent sequence $\\mathcal S$ the set $[\\mathcal{S}]$ carries important information on the structure of the graphs in $\\mathcal{S}$.\n\nWe call a process $\\mu$ {\\it universal} if $\\mu\\in[\\mathcal{S}]$ for every large girth sequence $\\mathcal{S}$. Universality means, roughly speaking, that it defines a structure that is universally present in every large girth $d$-regular graph.\n Weakening the notion of universality, we call a process $\\mu$ {\\it typical} if $\\mu\\in [\\{\\mathbb G_{n_i}\\}_{i=1}^\\infty]$ holds with probability 1 for some fixed sequence $\\{n_i\\}_{i=1}^{\\infty}$, where $\\{\\mathbb G_{n_i}\\}_{i=1}^{\\infty}$ is a sequence of independently and uniformly chosen random $d$-regular graphs with $|V(\\mathbb G_{n_i})|=n_i$. We will see that understanding typical processes is basically equivalent with understanding the large scale structure of random $d$-regular graphs. More precisely, we will formulate a correspondence principle (see Subsection \\ref{corresp}) between the properties of random $d$-regular graphs and typical processes. \n\n\\medskip\n\nAmong universal processes, factor of i.i.d processes on $T_d$ (see \\cite{russ} and the references therein) have a distinguished role because of their close connection to local algorithms \\cite{gamarnik, HLSz, kungabor}. 
They can be used to give estimates for various structures (such as large independent sets \\cite{csoka, harangi, hoppen, mustazee}, matchings \\cite{csokalipp, nazarov}, subgraphs of large girth \\cite{damien, kungabor}, etc., see also \\cite{goldberg}) in $d$-regular graphs. On the other hand, \\cite{cordec} characterizes the covariance structure of weak limits of factor of i.i.d. processes and thus it gives a necessary condition for a process to be factor of i.i.d. However, there are only few general and widely applicable sufficient conditions. This is a difficult question even for branching Markov processes that are important in statistical physics (e.g. Ising model, Potts model). In Section \\ref{glauber} we give a Dobsrushin-type sufficient condition for a branching Markov chain to be factor of i.i.d. \nWe use standard methods from statistical physics, in particular, a heat-bath version of Glauber dynamics. The idea behind this goes back to Ornstein and Weiss: sufficient conditions for fast mixing of Glauber dynamics often imply that the process is factor of i.i.d. See also the paper of H\\\"aggstr\\\"om, Jonasson and Lyons \\cite{russregi}.\nWe will see that the necessary condition on the covariance structure given in \\cite{cordec} is not sufficient for a branching Markov chain to be factor of i.i.d. To show this, we use our necessary conditions for typical processes (Section \\ref{entropy}), which automatically apply for factor of i.i.d. processes.\n\n\\medskip\n\nOur paper is built up as follows. In the first part we summarize various known and new facts about factor of i.i.d, universal and typical processes, local-global convergence and graphings. Moreover, in this part, we formulate our correspondence principle between typical processes and random $d$-regular graphs. In Section \\ref{glauber} we focus more on branching Markov chains on $T_d$. We give a Dobrushin-type sufficient condition for a branching Markov chain to be factor of i.i.d. 
In the last part (Section \\ref{entropy}) we give necessary conditions for a process to be typical using joint entropy functions. We will see that this result implies necessary conditions on the large scale structure of random $d$-regular graphs. (Note that our entropy method is closely related to the F-invariant, introduced by Lewis Bowen \\cite{lewis} in ergodic theory, and also to the ideas developed by Molloy and Reed \\cite{molloyreed} to study random $d$-regular graphs in combinatorics.) In particular, we prove that the value distributions of eigenvectors of random $d$-regular graphs can not be concentrated around boundedly many values (this is even true for approximative eigenvectors). Moreover, we show that random $d$-regular graphs do not cover bounded $d$-regular weighted graphs (for precise formulation, see Theorem \\ref{thm:combap}). These results are closely related to the papers of Molloy and Reed \\cite{molloyreed} about dominating ratio and Bollob\\'as \\cite{bollind} about independence numbers. \n\n\n\n\n\n\\section{Invariant processes}\\label{invproc}\n\nLet $T_d$ be the (infinite) $d$-regular tree with vertex set $V(T_d)$ and edge set $E(T_d)$. \nLet $M$ be a topological space. We denote by $I_d(M)$ the set of $M$-valued random processes on the $d$-regular tree \n$T_d$ that are invariant under automorphisms of $T_d$. More precisely, $I_d(M)$ is the set of ${\\rm Aut}(T_d)$ \ninvariant Borel probability measures on the space $M^{V(T_d)}$. (If $\\Psi\\in {\\rm Aut}(T_d)$, then $\\Psi$ induces \na map naturally from $M^{V(T_d)}$ to itself: given a labelling of the vertices of $T_d$, the new label of a vertex is \nthe label of its inverse image at $\\Psi$. The probability measures should be invariant with respect to this induced map.) \nThe set $I_d(M)$ possesses a topological structure; namely the restriction of the weak topology for probability measures on $M^{V(T_d)}$ to $I_d(M)$. Note that most of the time in this paper $M$ is a finite set. 
We denote by $I_d$ the set of invariant processes on $T_d$ with finitely many values.\n\nLet $T_d^*$ denote the rooted $d$-regular tree: it is $T_d$ with a distinguished vertex $o$, which is called the root. \nLet $N$ be a topological space and $f:M^{V(T_d^*)}\\rightarrow N$ be a Borel measurable function that is invariant under \n${\\rm Aut}(T_d^*)$, which is the set of root-preserving automorphisms of $T_d^*$. For every $\\mu\\in I_d(M)$ the function $f$ defines a new process $\\nu \\in I_d(N)$ by evaluating \n$f$ simultaneously at every vertex $v$ (by placing the root on $v$) on a $\\mu$-random element in $M^{V(T_d)}$.\nWe say that $\\nu$ is a {\\it factor} of $\\mu$. \n\nA possible way to get processes in $I_d$ goes through Benjamini--Schramm limits. For the general definition see \\cite{bs}. We will use and formulate it for colored large-girth graph sequences, as follows. Let $F$ be a finite set. Assume that $\\{G_i\\}_{i=1}^\\infty$ is a large girth sequence of $d$-regular graphs. Let $\\{f_i:V(G_i)\\rightarrow F\\}_{i=1}^\\infty$ be a sequence of colorings of $G_i$. For every pair of numbers $r,i\\in\\mathbb{N}$ we define the probability distribution $\\mu_{r,i}$ concentrated on rooted $F$-colored finite graphs as follows. We pick a random vertex $v\\in V(G_i)$ and then we look at the neighborhood $N_r(v)$ of radius $r$ of $v$ (rooted by $v$) together with the coloring $f_i$ restricted to $N_r(v)$. The colored graphs $(G_i,f_i)$ are Benjamini--Schramm convergent if for every $r\\in\\mathbb{N}$ the sequence $\\{\\mu_{r,i}\\}_{i=1}^\\infty$ weakly converges to some measure $\\mu_r$. The limit object is the probability measure $\\mu$ on $F^{V(T_d^*)}$ with the property that the marginal of $\\mu$ in the neighborhood of radius $r$ of the root is $\\mu_r$. It is easy to see that the measure we get from $\\mu$ by forgetting the root is in $I_d(F)$. 
\n\nWe list various classes of invariant processes on $T_d$ that are related to large girth sequences of finite graphs. \n\n\\bigskip\n\n\\noindent{\\bf Factor of i.i.d. processes:}~Let $\\mu\\in I_d([0,1])$ be the uniform distribution on $[0,1]^{V(T_d)}$, which is \nthe product measure of the uniform distributions on the interval $[0,1]$. A {\\it factor of i.i.d. process} is a factor of the process $\\mu$. Let $F_d$ denote the set of such processes in $I_d$. \nSee Lemma \\ref{ebred} for an easy example producing independent sets as factor of i.i.d. processes.\n\n\\bigskip\n\n\\noindent{\\bf Local processes:}~We say that a process is {\\it local} if it is in the closure of factor of i.i.d. \nprocesses in the weak topology. Let $L_d$ denote the set of such processes in $I_d$.\n\n\\bigskip\n\n\\noindent{\\bf Universal processes:}~A process $\\mu\\in I_d$ is called universal if $\\mu\\in [\\mathcal{S}]$ holds for every large girth sequence $\\mathcal{S}$ of $d$-regular graphs. We denote the set of such processes by $U_d$. \n\n\\bigskip\n\n\\noindent{\\bf Typical processes:}~A process $\\mu\\in I_d$ is called typical if $\\mu\\in [\\{\\mathbb G_{n_i}\\}_{i=1}^\\infty]$ holds with probability 1 for some fixed sequence $\\{n_i\\}_{i=1}^{\\infty}$, where $\\{\\mathbb G_{n_i}\\}_{i=1}^{\\infty}$ is a sequence of independently chosen uniform random $d$-regular graphs with $|V(\\mathbb G_{n_i})|=n_i$. We denote the set of typical processes by $R_d$. \n\n\\bigskip\n\n\\begin{lemma} \\label{lem:bovul}We have the following containments:\n\n$$F_d\\subseteq L_d\\subseteq U_d\\subseteq R_d.$$\n\n\\end{lemma}\n\n\\begin{proof} The first and last containments are trivial. The containment $L_d\\subseteq U_d$ is easy to see. For a proof we refer to \\cite{HLSz} where a much stronger theorem is proved. 
\\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\nWe also know by recent results of Gamarnik and Sudan \\cite{gamarnik} and Rahman and Vir\\'ag \\cite{mustazee} that $L_d\\neq R_d$ for \nsufficiently large $d$. Their result implies that the indicator function of a maximum independent set \n(a set of vertices that does not contain any neighbors) in a random $d$-regular graph is not in $L_d$ (that is, the largest independent set cannot be approximated by \nfactor of i.i.d. processes); on the other hand, it is in $R_d$.\n\nIt is sometimes useful to consider variants of $F_d,L_d,U_d$ and $R_d$ where the values are in an infinite topological space $N$. The definitions can be easily modified using the extension of Benjamini--Schramm limits to colored graphs where the colors are in a topological space. We denote by $F_d(N),L_d(N),U_d(N)$ and $R_d(N)$ the corresponding sets of processes.\nUsing this notation, it was proved in \\cite{harangi} that $F_d(\\mathbb{R})\\neq L_d(\\mathbb{R})$. In that paper Harangi and Vir\\'ag used random Gaussian wave functions \\cite{wave} to show this. \n See also Corollary 3.3 in the paper of Lyons \n\\cite{russ}: it provides a discrete-valued example of a process in $L_d(\\lbrace 0,1\\rbrace)\\setminus U_d(\\lbrace 0,1\\rbrace)$.\n\\medskip\n\nThe following question remains after these results.\n\n\\begin{question} Is it true that $U_d=L_d$?~ Is it true that $U_d=R_d$? \n\\end{question}\n\n\\medskip\n\nIt is an important goal of this paper to give sufficient conditions (for particular models) and necessary conditions for processes to be in one of the above classes.\nA recent result \\cite{cordec} in this direction is the following.\n\n\\begin{theorem}\\label{thmcordec} Let $\\mu\\in L_d(\\mathbb{R})$ and let $v,w\\in V(T_d)$ be two vertices of distance $k$. Let $f:T_d\\rightarrow\\mathbb{R}$ be a $\\mu$-random function. Then the correlation of $f(v)$ and $f(w)$ is at most $(k+1-2k\/d)(d-1)^{-k\/2}$. 
\n\\end{theorem}\n\nNote that the statement also holds for processes in $R_d$; however, the proof of that extension uses the very hard theorem of J. Friedman \\cite{friedman} on the second eigenvalue of random $d$-regular graphs. There are various examples showing that the condition of Theorem \\ref{thmcordec} is not sufficient. We also give a family of such examples using branching Markov processes (see Theorem \\ref{exnotsuff}).\nBranching Markov processes will play an important role in this paper, so we give a brief description of them.\n\n\\medskip\n\n\\noindent{\\bf Branching Markov processes:}~Now choose $M$ to be a finite state space $S$ with the \ndiscrete topology. Let $Q$ be the transition matrix of a reversible Markov chain on the \nstate space $S$. Choose the state of the root uniformly at random. Then make random steps according \nto the transition matrix $Q$ to obtain the states of the neighbors of the root. These steps \nare made conditionally independently, given the state of the root. Continue this: given the \nstate of a vertex at distance $k$ from the root, choose the states of its neighbors which are \nat distance $k+1$ from the root conditionally independently and according to the transition matrix $Q$.\nIt is easy to see that reversibility implies that the distribution of the collection of the random variables we get is invariant, \nhence the distribution of the branching Markov process (which will be denoted by $\\nu_Q$) is in $I_d(S)$.\n\nIn the particular case when there is a fixed probability \nof staying at a given state, and another fixed probability of transition between distinct states, the branching Markov process is identical to \nthe Potts model on the tree, and for $|S|=2$ we get the Ising model. See e.g. \\cite{evans, sly} for the description of the connection of the parameters \nof the two models. 
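\n\nAs an illustration, take $S = \\{1,2\\}$ and let $Q$ be the symmetric transition matrix with off-diagonal entries $Q_{12} = Q_{21} = p$ for some $0 < p < 1$; this is the Ising model mentioned above. A standard computation shows that if $v, w \\in V(T_d)$ have distance $k$, then the correlation of the states at $v$ and $w$ (encoded as $\\pm 1$-valued spins) equals $(1-2p)^k$. Comparing this with Theorem \\ref{thmcordec}, we see that $\\nu_Q$ can belong to $L_d$ only if $|1-2p| \\leq (d-1)^{-1\/2}$.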
\n\n\\medskip\n\n\\subsection{Correspondence between typical processes and random $d$-regular graphs}\n\n\\label{corresp}\n\nTypical processes might be of interest on their own, being the processes that can be modelled on random $d$-regular graphs. In addition to this, we can go in the other direction. As we will see later, results on typical processes imply statements for random $d$-regular graphs. In the last section, based on entropy estimates we give necessary conditions for an invariant process to be typical. In this section we show how these results can be translated to statements about random $d$-regular graphs. We will present a correspondence principle between these objects. \n\n\\subsubsection{Local-global convergence and metric} \n\n\nWhen we want to study the correspondence between typical processes (which are defined on \nthe vertex set of the $d$-regular tree) and random $d$-regular graphs, another notion of convergence of bounded \ndegree graphs will be useful. In this subsection we briefly summarize the concept of local-global convergence (also called colored neighborhood convergence) based on the papers of Bollob\\'as and Riordan \\cite{BR} (where this notion was introduced) and Hatami, Lov\\'asz and Szegedy \\cite{HLSz}. \n\nAt the beginning of this section, we defined the notion of local (Benjamini--Schramm) convergence of bounded degree graphs. However, we need a finer convergence notion that captures more of the global structure than local convergence.\n Recall that if $F$ is a finite set (colors) and $G$ is a finite graph with some $f: V(G)\\rightarrow F$, then by picking a random vertex $v\\in V(G)$ and looking at its neighborhood $N_r(v)$ of radius $r$, we get a probability distribution $\\mu_{r, G,f}$, which is concentrated on rooted $F$-colored finite graphs. (These distributions are called the local statistics of the coloring $f$.) 
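\nFor example, for $d \\geq 3$ a large girth sequence of bipartite $d$-regular graphs and a sequence of uniform random $d$-regular graphs both converge to $T_d$ in the Benjamini--Schramm sense, but their local statistics with respect to $2$-colorings differ: a bipartite graph admits a $2$-coloring with no monochromatic edge, while (with high probability) every $2$-coloring of a random $d$-regular graph leaves a positive fraction of the edges monochromatic, since the maximum cut of such graphs is bounded away from the total number of edges. This is exactly the kind of global information that the colored neighborhood statistics capture.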
\nLet $[k]=\\lbrace 1, \\ldots, k\\rbrace$, and we define \n\\[Q_{r,G,k}=\\lbrace \\mu_{r,G,f}\\vert f:V(G)\\rightarrow [k]\\rbrace. \\] \n\nLet $U^{r,k}$ be the set of triples $(H, o, f)$ where $(H, o)$ is a rooted graph of radius at most $r$ and $f: V(H)\\rightarrow [k]$ is a coloring of its vertices with (at most) $k$ colors. Let $\\mathcal M(U^{r,k})$ be the set of probability measures on $U^{r,k}$. With this notation, we have that $Q_{r, G, k}\\subseteq \\mathcal M(U^{r,k})$. The space $\\mathcal M(U^{r,k})$ is a compact metric space equipped with the total variation \ndistance of probability measures: \n\\[d_{TV}(\\mu, \\nu)=\\sup_{A\\subseteq U^{r,k}}|\\mu(A)-\\nu(A)|.\\]\n\n(Note that we will use an equivalent definition of total variation distance later in this paper.)\n\n\\begin{definition}[Local-global convergence, \\cite{HLSz}.] A sequence of finite graphs $(G_n)_{n=1}^{\\infty}$ with uniform degree bound $d$ is locally-globally convergent if for every $r, k\\geq 1$, the sequence $(Q_{r,G_n, k})$ converges in the Hausdorff distance inside the compact metric space $(\\mathcal M(U^{r,k}),\\, d_{TV})$.\n\\end{definition}\n\nFor every locally-globally convergent sequence $(G_n)$ of bounded degree graphs there is a limit object called a graphing such that the sets of local statistics of $G_n$ converge to the local statistics of the limit object; see Theorem 3.2 of \\cite{HLSz} for the precise statement, and e.g. \\cite{aldous, cordec, gabor} for more about graphings. \n\n The following metrization of local-global convergence was defined by Bollob\\'as and Riordan \\cite{BR}.\n\n\\begin{definition}[Colored neighborhood metric, \\cite{BR}]\n Let $G, G'$ be finite graphs. 
Their colored neighborhood distance is the following:\n\begin{equation}\label{dcn}d_{CN}(G,G')=\sum_{k=1}^{\infty}\sum_{r=1}^{\infty} 2^{-k-r} d_H(Q_{r,G,k}, Q_{r, G',k}),\end{equation}\nwhere $d_H$ denotes the Hausdorff distance of sets in the compact metric space $(\mathcal M(U^{r,k}),\, d_{TV})$. \n\end{definition}\n\nLet $X_d$ be the set of all finite graphs with maximum degree at most $d$. It is clear from the definition that every \nsequence in $X_d$ contains a locally-globally convergent subsequence \cite{HLSz}. It follows that the completion \n$\overline {X_d}$ of the metric space $(X_d, d_{CN})$ is a compact metric space. It was proved in \cite{HLSz} that the elements of $\overline {X_d}$ can be represented by certain measurable graphs called \ngraphings. \n\n\begin{definition} [Graphing, \cite{HLSz}.]Let $\Omega$ be a Polish topological space and let $\nu$ be a probability measure on the Borel sets in $\Omega$. A graphing is a graph $\mathcal G$ on $V(\mathcal G)=\Omega$ with Borel measurable edge set $E(\mathcal G)\subset \Omega\times \Omega$ in which all degrees are at most $d$ and \n\[\int_A e(x, B)d\nu(x)=\int_B e(x, A)d\nu(x)\] \nfor all measurable sets $A, B\subset \Omega$, where $e(x, S)$ is the number of edges from \n$x\in\Omega$ to $S\subseteq \Omega$.\n\end{definition}\nIf $\mathcal G$ is a graphing, then $Q_{r, \mathcal G, k}$ makes sense with the additional condition that the coloring $f: \Omega\rightarrow [k]$ is measurable. Hence local-global convergence and the metric $d_{CN}$ both extend to graphings. \n\nWe will need the following two lemmas about the metric $d_{CN}$. We remark that for the sake of simplicity we will use the notion of random $d$-regular graphs with $n$ vertices in the sequel without any restriction on $d$ and $n$. If $d$ and $n$ are both odd, then there are no such graphs. 
We will formulate the statements such that they trivially hold for the empty set as well.\n\n\begin{lemma} \label{lem:halo}For all $d\geq 1$ and $\varepsilon>0$ there exists $F(\varepsilon)$ such that for all $n\geq 1$ in \nthe set of $d$-regular graphs with $n$ vertices endowed with $d_{CN}$ there exists an $\varepsilon$-net of size at most $F(\varepsilon)$. \label{lem:net}\n\end{lemma} \n\begin{proof}Using compactness, we can choose an $\varepsilon\/2$-net $N$ in the space $(\overline {X_d}, d_{CN})$. We show that $F(\varepsilon):=|N|$ is a good choice. Let $N'$ be the subset of $N$ consisting of points $x$ such that the ball of radius \n$\varepsilon\/2$ around $x$ contains a $d$-regular graph with $n$ vertices. To each element in $N'$ we assign a $d$-regular graph with $n$ vertices of distance at most $\varepsilon\/2$. It is clear that the set of these \ngraphs has the desired property. \hfill $\square$\end{proof}\n\n\begin{lemma}\label{lem:lip}\nFor all $\delta>0$ there exists $i_0$ such that for all $i\geq i_0$, all graphs $G_1, G_2\in X_d$, both on the vertex set $[i]$, with $|E(G_1)\triangle E(G_2)|=1$ satisfy \n$d_{CN}(G_1, G_2)\leq \delta$.\n\end{lemma}\n\begin{proof} \nSince the sum of the weights is finite in \eqref{dcn}, and all the Hausdorff distances are at most 1, it is enough to prove the statement for a single term. Let us fix $k$ and $r$. Let $\mu_{r,G_1,f}\in Q_{r, G_1, k}$ be an arbitrary element corresponding to a coloring $f: [i]\rightarrow [k]$. It is enough to prove that the \ntotal variation distance of $\mu_{r,G_1,f}$ and $\mu_{r,G_2,f}$ can be bounded from above by a quantity depending only on $i$ and tending to zero as $i$ goes to $\infty$. Let $e$ be the only edge in $E(G_1)\triangle E(G_2)$. In both $G_1$ and $G_2$ there are boundedly many vertices $v$ such that $e$ intersects the neighborhood of radius $r$ of $v$. It is easy to see that $2(d+1)^r$ is such a bound. 
The colored neighborhoods of the rest of the vertices are the same in $G_1$ and $G_2$. It follows that the total variation distance of $\mu_{r,G_1,f}$ and $\mu_{r,G_2,f}$ is at most $2(d+1)^r\/i$. This completes the proof. \hfill $\square$\n\end{proof}\n\n\subsubsection{Typical processes} \n\nIn this section we prove a correspondence principle between typical processes and random $d$-regular graphs. \n\nThroughout this section, $d\geq 3$ will be fixed, and $\mathbb G_n$ will be a uniformly chosen random $d$-regular graph on $n$ vertices. \n\n\begin{lemma}\label{typl1} For fixed $d\geq 3$ there is a sequence $\lbrace B_n\rbrace_{n=1}^{\infty}$ of $d$-regular graphs with $|V(B_n)|=n$ such that $d_{CN}(B_n, \mathbb G_n)$ tends to $0$ in probability as $n\rightarrow \infty$.\n\end{lemma}\n\n\begin{proof}\nGiven $\varepsilon>0$, for all $n\geq 1$, by using Lemma \ref{lem:net}, we choose an $\varepsilon\/4$-net $N_n$ of size at most $F(\varepsilon\/4)$ in the set of $d$-regular graphs with $n$ vertices with respect to the colored neighborhood metric. (We emphasize that the size of the net does not depend on the number of vertices of the graph.) For each $n$, let $B_{n, \varepsilon}\in N_n$ be a (deterministic) $d$-regular graph on $n$ vertices such that \n\begin{equation}\label{conce1}\mathbb P(d_{CN}(B_{n, \varepsilon}, \mathbb G_n)\leq \varepsilon\/4)\geq \frac{1}{F(\varepsilon\/4)},\end{equation} \nwhere $\mathbb G_n$ is a uniform random $d$-regular graph on $n$ vertices. \nSuch a $B_{n, \varepsilon}$ must exist according to the definition of the $\varepsilon\/4$-net $N_n$. \n\nWe define $f_{n, \varepsilon}(H_n)=d_{CN}(B_{n, \varepsilon}, H_n)$ for $d$-regular graphs $H_n$ on $n$ vertices. By Lemma \ref{lem:lip}, if $n\geq n_0$ for some fixed $n_0$, then $f_{n, \varepsilon}$ changes by at most $\delta$ whenever a single edge of its argument is changed. 
By well-known concentration inequalities (based on the exploration process and Azuma's inequality on martingales; see e.g. \cite[Chapter 7]{alon}), this implies the following. For all $\eta>0$ there exists $n_1=n_1(\eta)$ such that \n\begin{equation}\label{conce2}\mathbb P(|f_{n, \varepsilon}(\mathbb G_n)-\mathbb E(f_{n, \varepsilon}(\mathbb G_n))|>\eta)\leq \eta \qquad (n\geq n_1).\end{equation}\nBy choosing $0<\eta<\min(\varepsilon\/4, 1\/F(\varepsilon\/4))$, inequalities \eqref{conce1} and \eqref{conce2} together imply $\mathbb E(f_{n, \varepsilon}(\mathbb G_n))\leq \varepsilon\/2$ $(n\geq n_1)$. That is, since $f_{n, \varepsilon}$ is concentrated around its expectation (due to its Lipschitz property) for large $n$, and $\mathbb G_n$ is close to some fixed graph with probability bounded from below by a positive constant not depending on $n$, we conclude that this expectation has to be small for $n$ large enough. \n\nPutting this together, we get \n\[\mathbb P(f_{n, \varepsilon}(\mathbb G_n)>\varepsilon)=\mathbb P(d_{CN}(B_{{n, \varepsilon}},\mathbb G_n)>\varepsilon)\leq \varepsilon \qquad (n\geq n(\varepsilon)).\]\n\nBy a standard diagonalization argument, let $k(n)=\max\lbrace k\, \vert \, n(1\/k)\leq n\rbrace$ and set $B_n:=B_{n, 1\/k(n)}$; then $d_{CN}(B_n, \mathbb G_n)$ tends to $0$ in probability as $n\rightarrow\infty$. \hfill $\square$\n\end{proof}\n\n\begin{proposition}\label{prop:corres}\nLet $C$ be a closed subset of $(\overline {X_d}, d_{CN})$ that does not contain any typical graphing. Then $\mathbb P(\mathbb G_n\in C)\rightarrow 0$ as $n\rightarrow \infty$.\n\end{proposition}\n\n\begin{proof}\nSuppose for contradiction that the set $S=\lbrace i\, \vert \, \mathbb P(\mathbb G_i\in C)>\varepsilon\rbrace$ is infinite for some $\varepsilon>0$. Choose $S'\subseteq S$ by Proposition \ref{prop:graphing}; that is, $(\mathbb G_i)_{i\in S'}$ locally-globally converges to a fixed graphing $\mathcal G$ with probability 1. On the other hand, by independence, it follows that with probability 1 we have $\mathbb G_i\in C$ for infinitely many $i\in S'$. \nSince $C$ is closed in the local-global topology, and $\mathcal G$ is the limit of the whole sequence almost surely, this implies that $\mathcal G$ has to be in $C$. But, by definition, $\mathcal G$ is typical. This contradicts our assumption on $C$. 
\\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\n\n\nThe main application of Proposition \\ref{prop:corres} is that we can turn statements about typical processes into statements about random $d$-regular graphs. \nAs we have explained before, typical processes are exactly the processes coming from typical graphings. Therefore if we succeed in excluding typical processes from a closed set within the weak topology of invariant processes, then at the same time we exclude typical graphings from a closed set within the local-global topology, and through Proposition \\ref{prop:corres} we obtain a result for random $d$-regular graphs. \nWe will demonstrate this principle on concrete examples in Section \\ref{dominating}. \n\n\n\n\n\\subsection{Joinings and related metric}\n\n\\label{joining}\n\n\nAn invariant coupling, or shortly {\\it joining}, of two elements $\\mu,\\nu\\in I_d(M)$ is a process $\\psi\\in I_d(M\\times M)$ such that the two marginal processes of $\\psi$ (with respect to the first and second coordinate in $M\\times M$) are $\\mu$ and $\\nu$.\nWe denote by $C(\\mu,\\nu)$ the set of all joinings of $\\mu$ and $\\nu$.\n\nAssume that the topology on $M$ is given by a metric $m:M\\times M\\rightarrow\\mathbb{R}^+\\cup\\{0\\}$. Then we define a distance $m_c$ on $I_d(M)$ in the following way.\n\\begin{equation}m_c(\\mu,\\nu)=\\inf_{\\psi\\in C(\\mu,\\nu)}\\mathbb{E}(m(\\psi|_v)),\\label{eq:metric}\\end{equation} \nwhere $v$ is an arbitrary fixed vertex of $T_d$ and $\\psi|_v$ is the restriction of $\\psi$ to $v$. Note that automorphism invariance implies that $m_c$ does not depend on the choice of $v$. \nIf $M$ has finite diameter, then $m_c(\\mu,\\nu)$ is a finite number bounded by this diameter. \n\nThis is basically Ornstein's $\\bar d$-metric, which was originally defined for $\\mathbb Z$-invariant processes, see e.g. \\cite{glasner}. 
See also the recent papers of Lyons and Thom \cite{russ, monoton1} where \nseveral results and open questions on $T_d$ are presented, connecting factor of i.i.d. processes to \nthis metric.\n \n\n The key to the proof of the fact that this is a metric is the notion of relatively independent joining \cite[Chapter 15, Section 7]{glasner}. Assume that $\psi_{1,2}\in C(\mu_1,\mu_2)$ and $\psi_{2,3}\in C(\mu_2,\mu_3)$. Let us consider the unique joining of $\psi_{1,2}$ and $\psi_{2,3}$ that identifies the marginal $\mu_2$ and has the property that $\mu_1$ and $\mu_3$ are conditionally independent with respect to $\mu_2$. \nWe remark that using relatively independent joinings and a Borel--Cantelli type argument one can check that the space of invariant processes is complete with respect to the $\bar d$-metric.\n\n\n\nThe case when $M$ is a finite set plays a special role in our paper. In this case we define \n$m(x,y)=1$ if $x\neq y$ and $m(x,x)=0$ for $x,y\in M$. The corresponding metric $m_c$ is regarded as the Hamming distance for processes in $I_d(M)$.\n\n\medskip \n\n\n\section{Glauber dynamics and branching Markov processes}\n\n\label{glauber}\n\nGlauber dynamics is an important tool in statistical physics. In this chapter we consider a variant of \nheat-bath Glauber dynamics that is an $m_c$-continuous transformation on $I_d(M)$. \nWe begin with the finite case, then we define the Dobrushin coefficient, and formulate the main result: a Dobrushin-type sufficient condition for branching Markov chains to be factor of i.i.d. \nThen we give a brief description of the Poisson Glauber dynamics, which seems to be the closest analogue of classical Glauber dynamics, and then we define a similar but more technical variant that is more useful in our applications. 
\n\n\\subsection{Glauber dynamics on finite graphs}\n\n\\label{finiteglaub}\n\nFirst suppose that $G$ is a (potentially infinite) $d$-regular graph, and we have a reversible Markov chain with finite state space $S$ \nand transition matrix $Q$. We think of $G$ such that each vertex has a state from $S$; the state of the graph is an element in $S^{V(G)}$. A {\\it Glauber step at vertex} $v\\in V(G)$ is a way of generating a random state from a given state of the graph. \nWe do this by randomizing the state of $v$ conditionally on the states of its neighbors, as follows. \n\nLet $N(v)$ denote the set of the neighbors of $v$. Let $C=v\\cup N(v)$ and $\\mu_C$ the distribution of the branching Markov process restricted to $C$. For a state $\\omega\\in S^{N(v)}$, we define $B_{v, \\omega}$ to be the conditional distribution of the state of $v$ given $\\omega$. The Glauber step at $v$ (the so called heat-bath version) is the operation of randomizing the state of $v$ from $B_{v, \\omega}$.\n\n\nNow we define the Glauber dynamics on a finite graph. It is a Markov chain on the state space of the graph $S^{V(G)}$ obtained by choosing a vertex $v$ uniformly at random, and performing the Glauber step at $v$. \nSee e.g. Section 3.3. in \\cite{markovmixing} on Glauber dynamics for various models. \n\n\n It is also clear from the theory of \nfinite state space Markov chains that (with appropriate conditions on $Q$) this Markov chain has a unique stationary \ndistribution, which is the limiting distribution of the Glauber dynamics. However, the order of the mixing time depends on $Q$; the question typically is whether the mixing \ntime can be bounded by a linear \nfunction of the number of vertices. Our main result will show that the so called Dobrushin condition, which implies fast mixing, also implies that the process is factor of i.i.d. Note that the connection between fast mixing and factor of i.i.d. property was also implicitly used in \\cite{gamarnik}. 
A paper of Berger, Kenyon, Mossel and Peres \cite{berger} deals with the problem of fast mixing on trees for the Ising model, i.e. when there are only two states. See Theorem 1.4 of \cite{berger}. Furthermore, Mossel and Sly \cite{exact} gave a sharp threshold for \ngeneral bounded degree graphs. The recent paper \nof Lubetzky and Sly \cite{spacetime} contains more refined results for the Ising model with underlying graph \n$(\mathbb Z\/n\mathbb Z)^d$, and its Theorem 4 refers to analogous results for general graphs. \n\nIt is important to mention the paper of Bubley and Dyer \cite{pathcoupling} on fast mixing of \nthe Glauber dynamics of Markov chains and on the path coupling technique, which \nis applied in \cite{berger}, \nand whose ideas will be used in what follows. \nSee also the paper of Dembo and Montanari \cite{dembo} and Chapter 15 in \cite{markovmixing} for more details on the mixing time of the Glauber dynamics.\n\n\subsection{The Dobrushin coefficient and factor of i.i.d. processes} \n\nWhen we examine how the properties of the Glauber dynamics depend on the transition matrix $Q$, it is helpful to investigate the following: how does a change in the state of a single neighbor of $v$ affect the conditional \ndistribution of the state of $v$ at the Glauber step? This is the idea behind the definition of the Dobrushin coefficient (see e.g. \n\cite{pathcoupling, dobrushin}). \n\n\begin{definition}[Dobrushin coefficient] \label{def:dobr}Let us consider a reversible Markov chain on a finite state space $S$ with transition matrix $Q$. 
\nThe Dobrushin coefficient of the Markov chain is defined by \n\begin{multline*}D=\sup \bigl \lbrace d_{TV}( B_{v,\omega}, B_{v,\omega'}): \omega, \omega'\in S^{N(v)},\ |\lbrace u\in N(v): \omega(u)\neq \omega'(u)\rbrace|=1 \bigr \rbrace, \end{multline*} \nwhere $d_{TV}$ is the total variation distance of \nprobability distributions:\n\begin{multline*}d_{TV}(P_1,P_2)\n=\frac{1}{2}\sum_{s\in S}|P_1(s)-P_2(s)|\\=\inf\lbrace \mathbb P(X\neq Y): X\sim P_1,\ Y\sim P_2,\ \mathbb P \textrm{\ is a coupling of } X \textrm{\ and\ }Y\n\rbrace.\end{multline*}\n\end{definition}\n\nTo put it another way, we consider pairs of configurations on the neighbours of $v$ that differ at only one place. \nWe calculate the total variation distance of the conditional distributions at $v$ given the two configurations. \nFinally we take the supremum over all such pairs. Note that this definition depends only on $Q$ and \nthe number of neighbors of $v$.\n\n\medskip\n\nNow we can formulate the main result of this section, which will be proved in Subsection \ref{proofthm1}. \n\n\begin{theorem}\label{thm1}\nIf the condition $D<1\/d$ holds for a reversible Markov chain with transition matrix $Q$ on a finite state space $S$, then the \nbranching Markov process $\nu_Q$ corresponding to $Q$ on the $d$-regular tree $T_d$ is a factor of i.i.d. process; that is, $\nu_Q\in F_d(S)$.\n\end{theorem}\n\nThis theorem is \nheuristically in accordance with the results of Bubley and Dyer \cite{pathcoupling}, who proved fast mixing of the Glauber \ndynamics if the condition $D<1\/d$ holds. Moreover, this condition has other consequences for correlation decay and the \nuniqueness of the Gibbs measure under various circumstances \cite{dobrushin, lovasz, sokal, weitz}. 
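Since the Dobrushin coefficient depends only on $Q$ and the degree, it can be computed by brute force for small state spaces. A Python sketch (our own illustration; the conditional distribution $B_{v,\omega}\propto\pi(s)\prod_u Q(s,\omega(u))$ again assumes reversibility of $(Q,\pi)$):

```python
from itertools import product

def cond_dist(Q, pi, omega):
    """B_{v,omega}: conditional law of a vertex given neighbor states omega
    under the branching Markov process, proportional to pi[s]*prod Q[s][x]."""
    states = sorted(pi)
    w = {s: pi[s] for s in states}
    for x in omega:
        for s in states:
            w[s] *= Q[s][x]
    z = sum(w.values())
    return {s: w[s] / z for s in states}

def dobrushin(Q, pi, d):
    """Brute-force Dobrushin coefficient for a vertex of degree d: maximize
    d_TV(B_{v,omega}, B_{v,omega'}) over pairs differing in one coordinate."""
    states = sorted(pi)
    best = 0.0
    for omega in product(states, repeat=d):
        for i in range(d):
            for x in states:
                if x == omega[i]:
                    continue
                omega2 = omega[:i] + (x,) + omega[i + 1:]
                b1, b2 = cond_dist(Q, pi, omega), cond_dist(Q, pi, omega2)
                tv = sum(abs(b1[s] - b2[s]) for s in states) / 2
                best = max(best, tv)
    return best

# Symmetric two-state chain with flip probability 0.4 on T_3: here D = |1-2*0.4| = 0.2,
# so the condition D < 1/d of the theorem holds.
Q = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.4, 1: 0.6}}
pi = {0: 0.5, 1: 0.5}
D = dobrushin(Q, pi, 3)
```

For this symmetric binary chain the extremal pair is a single flip among balanced neighbor configurations, giving $D=0.2<1/3$.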
However, we do not know in general \nwhether fast mixing or the uniqueness of the Gibbs measure implies that the branching Markov process is factor of i.i.d.\n \n\n\n\n\n\subsection{Poisson Glauber dynamics on $T_d$}\n\nWhen the vertex set of the underlying graph is finite, as we have already seen in Subsection \ref{finiteglaub}, it is easy to define the Glauber dynamics. \nFrom now on we return to the infinite $d$-regular tree, where it is not possible to choose a vertex uniformly at \nrandom, and perform Glauber dynamics step by step this way. \nIn this subsection we give a heuristic description of the continuous time Glauber dynamics on the infinite tree for motivation. However, for our purposes the discrete version defined in the next subsection is \nmore convenient, hence we omit the precise details of the definition of the continuous time model.\n\n\nWe assign independent Poisson processes with rate 1 to the vertices of the tree. That is, each vertex has a \nsequence of random times when it wakes up. At the beginning, at time zero, the vertices are in random \nstates chosen independently and uniformly from the finite state space $S$. When a vertex wakes up, it performs a single \nGlauber step defined earlier. This depends only on the state of the neighbors of the \nvertex. However, to know these states, we have to know what has happened when the neighbors have performed Glauber steps earlier. \nThis continues, hence it is not trivial whether this process is well-defined. To see that it is, one can check that the \nexpectation of the number of Glauber steps that affect the randomization of a vertex waking up is finite. \n\nThis argument could be made precise (see e.g. \cite[Theorem 1]{howard} for the definition of the joint distribution of the Poisson processes on $T_3$). The advantage of the continuous time Glauber dynamics is the fact that \nthe probability that neighbors wake up at the same time is zero. 
When we define the discrete time Glauber step \nin the next subsection, we will have to pay attention to avoid the event that neighbors are waking up simultaneously. \n\n\n \n\subsection{The factor of i.i.d. Glauber step on $T_d$}\n\n\nAs we have seen in Subsection \ref{finiteglaub}, the single Glauber step for finite graphs maps each configuration in \n$S^{V(G)}$ to a random configuration. Now we are working with the infinite $d$-regular tree $T_d$, hence \nwe deal with random processes, which are probability distributions on $S^{V(T_d)}$. We \nwill describe a way of performing Glauber steps simultaneously at different vertices such that our procedure produces factor of i.i.d. \nprocesses from factor of i.i.d. processes. \n\nGiven a configuration $\omega \in S^{V(T_d)}$, which is a labelling of \nthe vertices of the $d$-regular tree with labels from the finite state space $S$ of the Markov chain, we will perform a \nsingle Glauber step to get a random configuration $G\omega$ in $S^{V(T_d)}$. Fix the \ntransition matrix $Q$. The scheme is \nthe following; we give the details afterwards. \n\begin{enumerate}\n\item Choose an invariant random subset $U$ of $V(T_d)$ such that it has positive density and it does not contain any two vertices of distance less than 3. \n\item For each vertex $v\in U$ perform the usual Glauber step at $v$: randomize the state of vertex $v$ according to \nthe conditional distribution with respect to the states of its neighbours.\n\end{enumerate}\n\nMore precisely, for the first part we need the following lemma. \n\n\begin{lemma} \label{ebred}It is possible to find an invariant random subset $U$ \nof $V(T_d)$ such that \begin{itemize}\n\item it is factor of i.i.d.: the distribution of the indicator function of $U$ is in $F_d(\lbrace 0,1\rbrace)$;\n\item it has positive density: the probability \nthat the root $o$ is in $U$ is positive;\n\item it does not contain any two vertices of distance less than 3. 
\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} We start with $[0,1]^{V(T_d)}$ endowed with $\\mu$, the product measure of the uniform distributions on the \ninterval $[0,1]$. That is, vertices have independent and uniformly distributed labels from $[0,1]$. \n\nA vertex $v\\in V(T_d)$ will be in $U$ if its label is larger than the labels of the vertices in its neighbourhood of \nradius 2. That is, for $\\omega\\in [0,1]^{V(T_d)}$ we set $f(\\omega)=1$ if $\\omega$ at the root $o$ is larger than $\\omega_u$ for all $u\\in V(T_d)$ at distance at most 2 from the root. Otherwise $f(\\omega)=0$. Then we get the characteristic function of $U$ by placing the root to each vertex and applying $f$. This is a factor of i.i.d. process satisfying all conditions. \\hfill $\\square$\n\n\\end{proof}\n\n\\medskip\n\nThis lemma ensures that we can perform the first part of the Glauber step as a factor of i.i.d. process. \nAs for the second part, we just refer to the definition of the Glauber step at a single vertex: each vertex $v\\in U$ randomizes its \nstate given the state of its neighbors and according to the distribution of the branching Markov process \nconstrained on the finite subset $v\\cup N(v)$. Since the distance of any two vertices \nin $U$ is at least 3, these randomizations can be performed simoultaneously and independently. \n\n\nIt is straightforward to extend the definition of the Glauber step to a map from the set of probability measures on $S^{V(T_d)}$ to \nitself. Namely, choose a random configuration from $S^{V(T_d)}$ according to the given measure, and perform the \nGlauber step described above. This gives a new probability measure on $S^{V(T_d)}$. It is also easy to see \nthat if we apply this for an invariant probability measure, then the resulting measure will also be invariant. 
\nHence we have extended the definition of the Glauber step to a transformation of the form $G: I_d(S)\rightarrow I_d(S)$.\n\nMoreover, note that if $\nu$ is factor of i.i.d., then $G(\nu)$ is also factor of i.i.d., \nsince the set of vertices performing Glauber steps is chosen by a factor of i.i.d. process by Lemma \ref{ebred}, and \nGlauber steps depend only on the state of the neighbors of these vertices.\n\n\subsection{The invariance of the branching Markov process under the Glauber step} \n\nIn order to prove Theorem \ref{thm1}, we will need the fact that the Glauber step defined above does not change \nthe distribution of the branching Markov process. \n\n\begin{proposition}[Invariance] \label{prop:inv}\nIf $\nu_Q\in I_d(S)$ is the branching Markov process with transition matrix $Q$, \nthen it is a fixed point of the Glauber step corresponding to $Q$ and $d$ (i.e., $G(\nu_Q)=\nu_Q$).\n\end{proposition}\n\n\begin{proof}\nFirst we check that the Glauber step at a single vertex $u$ does not change the \ndistribution of the branching Markov process. This follows from the fact that the distribution of the state of $u$ and the joint distribution of the states at $V(T_d)\setminus (\lbrace u\rbrace\cup N(u))$ are conditionally independent given the states of the vertices in $N(u)$. \n\nLet $U$ be the set of vertices performing Glauber steps when we apply $G$. Since these vertices are far away from each other \n(their distance is at least 3 according to Lemma \ref{ebred}), the randomizations are independent, and therefore, since the Glauber step at a \nsingle vertex does not change the distribution, the distribution is also invariant under finitely many steps. On the other hand, for arbitrary $U$ it is possible to \nfind finite sets of vertices $U_n$ such that (i) $U_n\subseteq U_{n+1}$ for all $n$; (ii) $\bigcup_{n=1}^{\infty} U_n=V(T_d)$; (iii) if a vertex is in $U\cap U_n$, then all its neighbors are in $U_n$. 
For example, one can use balls of appropriate radius with a few vertices \nomitted from the boundary. Since every $U_n$ contains finitely many vertices, and vertices on the boundary of $U_n$ do not perform Glauber steps, the distribution of the branching Markov process is invariant under the \nGlauber steps at the vertices of $U\cap U_n$. This also implies that the branching Markov process is invariant for $G$, when we perform Glauber steps at \nthe vertices of $U$ simultaneously.\n\hfill $\square$\n\end{proof}\n\n\subsection{The Glauber step as a contraction}\n\n\n\n\n\nWe will prove that if the Dobrushin coefficient (Definition \ref{def:dobr}) is small enough, then the factor of i.i.d. Glauber step is a contraction \nwith respect to the metric \n$m_c$ derived from the Hamming distance on $S$.\nFirst we need some notation and a lemma. \n\n\begin{definition}[Coupling Hamming distance] Let $S$ be a finite state space with the discrete topology and with the Hamming distance: $m(s,s)=0$ for all $s\in S$ and $m(s,t)=1$ if $s\neq t$. We denote by $h_c$ the metric defined by equation \eqref{eq:metric} on $I_d(S)$ corresponding to the Hamming distance (see Section \ref{joining}). \n\end{definition}\n\nRecall that $B_{v,\omega}$ is the distribution of the state of vertex $v$ at the Glauber step if the states of its \nneighbors are given by $\omega\in S^{N(v)}$.\n\n\begin{lemma}\label{lem:pc}Suppose that we have a branching Markov process on $T_d$ with Dobrushin coefficient $D$. Fix \n$v\in V(T_d)$ and $\omega, \omega'\in S^{N(v)}$ such that $|\lbrace u\in N(v): \omega(u)\neq \omega'(u)\rbrace|=k$. Then we have that \n\[d_{TV}(B_{v,\omega}, B_{v, \omega'})\leq k D.\]\n\n\end{lemma}\n\begin{proof} The case $k=1$ is immediate from the definition of $D$. The general case follows by induction using the triangle inequality. \hfill $\square$\n\end{proof}\n\n\medskip\n\nNow we can prove that the factor of i.i.d. 
Glauber step is a contraction if the Dobrushin condition holds.\n\begin{proposition}\label{prop:contract}\nIf $D<{1\/d}$, then $G: I_d(S)\rightarrow I_d(S)$ is a contraction with respect to the coupling Hamming distance $h_c$; that is, \nthere exists $r<1$ such that\n\[h_c(G(\nu_1), G(\nu_2))\leq r\, h_c(\nu_1, \nu_2)\]\nfor all $\nu_1, \nu_2\in I_d(S)$.\n\end{proposition}\n\n\begin{proof}\nChoose $\varepsilon>0$ such that $r:=(1+\varepsilon)(1-p+pdD)<1$, where $p>0$ is the density of $U$ in the Glauber step. This is possible if $D<1\/d$. Fix $\nu_1, \nu_2\in I_d(S)$. Denote their distance $h_c(\nu_1, \nu_2)$ by $h$. By the definition of the metric $h_c$, there is \na joining $\Psi$ of $\nu_1$ and $\nu_2$ such that $\mathbb E(m(\Psi|_v))<(1+\varepsilon)h$ holds at any given vertex $v$, where $m$ denotes the Hamming distance on $S$. \n\nOur goal is to construct a joining $\Psi'$ of $G(\nu_1)$ and $G(\nu_2)$ such that $\mathbb{E}(m(\Psi'|_v))\leq rh$.\nWe construct this joining in a way that the set of vertices that perform the Glauber step is the same for $\nu_1$ and $\nu_2$. \nAs a first step we choose an invariant random set $U$ according to Lemma \ref{ebred} such that $U$ is independent of $\Psi$. \n\nWe define $\Psi'$ from $\Psi$ and $U$ as follows. When we randomize the state of a given vertex $v\in U$, conditionally on the states of vertices in $N(v)$, we use the best possible coupling of the conditional distributions in total variation (the probability that the two random variables are different is minimal). Since we deal with a finite number of configurations and a discrete probability space for fixed $v$, such an optimal coupling exists. For the distinct vertices in $U$ we join these couplings independently to get $\Psi'$ for a fixed $U$. This defines $\Psi'$ on the whole extended probability space. \n\nSince $U$ is invariant and the randomizations depend only on the states of the neighbors, $\Psi'$ is also invariant. 
\nIt is clear that the marginal distributions $\nu_1'$ and $\nu_2'$ of $\Psi'$ are identical to $G(\nu_1)$ \nand $G(\nu_2)$, respectively. \n\nNow we give an upper bound on the coupling Hamming distance of $\nu_1'$ and $\nu_2'$. \n\nFix $v\in V(T_d)$. The probability that $v\in U$ is $p$ by definition. With probability $1-p$ its state is not changed; in that case there is a difference at $v$ in $\Psi'$ exactly when there is one in $\Psi$, which happens with probability $\mathbb E(m(\Psi|_v))<(1+\varepsilon)h$.\n\nFix $\varepsilon>0$. We denote by $G_{n,\varepsilon}$ the set of $S$-colored $d$-regular graphs on the vertex set $V_n$ with the restriction that the distribution of vertex colors is $\varepsilon$-close to $\nu_v$ and the distribution of colored (directed) edges is $\varepsilon$-close to $\nu_e$ in total variation distance.\nSince $\nu$ is typical we know that if $n$ is large enough and belongs to the sequence $\{n_i\}_{i=1}^{\infty}$, then almost every $d$-regular graph on $n$ vertices is in $G_{n,\varepsilon}$. It follows that \n\begin{equation}\label{entpr1}\n\limsup_{n\rightarrow\infty} \frac{|G_{n,\varepsilon}|}{t_n}\geq 1\n\end{equation}\n holds for every $\varepsilon>0$, where $t_n$ is the number of $d$-regular graphs on $n$ vertices.\n\n\nIn the rest of the proof we basically compute the asymptotic behavior of $\log|G_{n,\varepsilon}|$ if $\varepsilon$ is small and $n$ is large enough depending on $\varepsilon$. We start by assigning $d$ half-edges to each element of $V_n$. Let $V_n^*$ denote the set of these half-edges. We first color the vertices according to the distribution $\nu_v$. We color $V_n^*$ such that each half-edge inherits the color of its incident vertex. Then we match these half-edges \nsuch that the distribution of the colors of the endpoints of a uniform random edge is $\nu_e$.\nTo be more precise, in each coloring throughout this proof, we allow an $\varepsilon$ error in the total variation distance of distributions. 
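The half-edge pairing underlying this counting argument is the configuration model; a minimal uncolored Python sketch (our own illustration, not part of the proof) of the $PM(nd)$ pairings:

```python
import random

def configuration_model(n, d, rng):
    """Pair the n*d half-edges uniformly at random; the number of such perfect
    matchings is PM(nd). Returns a multigraph as an edge list (loops and
    multiple edges are possible; conditioning on simplicity yields the uniform
    random d-regular graph)."""
    assert (n * d) % 2 == 0, "a d-regular graph on n vertices needs nd even"
    half_edges = [(v, j) for v in range(n) for j in range(d)]
    rng.shuffle(half_edges)
    edges = []
    for i in range(0, len(half_edges), 2):
        (u, _), (v, _) = half_edges[i], half_edges[i + 1]
        edges.append((u, v))
    return edges

# Example: a random 3-regular multigraph on 8 vertices.
rng = random.Random(1)
edges = configuration_model(8, 3, rng)
```

Every vertex ends up with exactly $d$ edge endpoints, and each labelled graph arises from $(d!)^n$ orderings of the half-edges, which is the overcounting factor divided out in the formulas below.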
\n\nThere are $H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{n(1+o(1))}$ ways \nto color $V_n$ with distribution $\\nu_v$. \nHere $o(1)$ means a quantity that goes to $0$ if first $n$ goes to infinity and then $\\varepsilon$ goes to $0$.\n \nAssume that the vertices of $V_n$ have a fix coloring. Let $M$ denote the set of perfect macthings on $V_n^*$ that satisfy the above requirement. By Lemma \\ref{entlem} we have that $$|M|= PM(nd)H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{nd\/2(1+o(1))}\/H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{nd(1+o(1))}.$$\n\n\nFinally we have to take into consideration that the order of the half-edges does not matter, hence we \nget every coloring $(d!)^{n}$ times. \n\nPutting everything together, the number of colored $d$-regular graphs on $V_n$ with the required property is the following:\n\\[\\frac{H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{n(1+o(1))} PM(nd)H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{nd\/2(1+o(1))}}{H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{nd(1+o(1))}(d!)^n}.\\] \n\nUsing the same argument about the half-edges but forgetting about all colorings, one can see that the number of \n$d$-regular graphs on $n$ vertices is \n\\[\\frac{PM(nd)}{(d!)^n}.\\]\n\nBy (\\ref{entpr1}) we conclude that\n\\[\\limsup_{n\\rightarrow\\infty} \\frac{H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{n(1+o(1))}H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{nd\/2(1+o(1))}}{H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{nd(1+o(1))}}\\geq 1;\\]\n\\[ 
H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{d\/2(1+o(1))}\\geq H(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture})^{(d-1)(1+o(1))}.\\]\n \nBy tending to $0$ with $\\varepsilon$, taking the logarithm of both sides and rearranging we get the statement of the theorem. \\hfill $\\square$\n\\end{proof}\n\nSimilarly to the proof of Theorem \\ref{edgevertex}, one can show the following.\n\n\\begin{theorem}\\label{staredge} For any typical process $\\nu\\in R_d$ the following holds:\n\\[h(\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d)\\geq \\frac{d}{2} h(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture}),\\]\nwhere \\ \\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}$\\ \\ \\ \\ _d$\n\\end{picture} \\ \\ is the star of degree $d$.\n\\end{theorem}\n\n\\begin{proof} The proof is very similar to the proof of Theorem \\ref{edgevertex} so we only give the details that are different. Let $\\nu\\in R_d\\cap I_d(S)$. Let $C$ denote the star of degree $d$. We label the root of $C$ by $0$ and the endpoints of the rays by $\\{1,2,\\dots,d\\}$. 
Let \\ $\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d$\\ and\\ $\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture}$\\ denote the marginal distributions of $\\nu$ on the degree $d$ star and on an edge in $T_d$. Again we count $S$-colored $d$-regular graphs on $n$ vertices with the restriction that the distribution on random stars and edges are close to\\ $\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d$ and\\ $\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture}$. Let $V_n$ be a set of $n$ elements. To each element $v_i\\in V_n$ we assign $d$ half-edges $\\{v_{i,j}\\}_{j=1}^d$. We denote by $V_n^*$ the set of half-edges. Let $f:V_n^*\\rightarrow S\\times S$ be a coloring of the half-edges with pairs of elements from $S$ such that the first coordinates of $f(v_{i,j})$ and $f(v_{i,k})$ are the same, say $g(i)\\in S$, for every triple $1\\leq i\\leq n$ and $1\\leq j,k\\leq d$. \nTo each number $1\\leq i\\leq n$ we can assign an $S$-colored version of the star $C$ such that the color of the root $0$ is $s_i$ and the color of $j\\in V(C)$ is the second coordinate of $f(v_{i,j})$ for $1\\leq j\\leq d$. 
\nWe say that $f$ is \"good\" if the distribution of these colored stars is \\ $\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d$ when $1\\leq i\\leq n$ is chosen uniformly at random. The number of good colorings is $H(\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d)^{n(1+o(1))}$. \nWe obtain a $d$-regular graph $G$ with a desired coloring $g$ by using a perfect matching on the set of half-edges such that the second coordinate of each half-edge is equal to the first coordinate of its pair in the matching. \nUsing Lemma \\ref{entlem}, we obtain that the number of such perfect matchings is $$PM(nd)H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{(dn\/2)(1+o(1))}H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{-dn(1+o(1))}.$$ Thus the number of $d$-regular graphs with a desired coloring is\n$$\\frac{PM(nd)H(\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d)^{n(1+o(1))}}{H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{(dn\/2)(1+o(1))}d!^n}.$$ Similarly to the proof of Theorem \\ref{edgevertex} we obtain that
$H(\\begin{picture}(10,12)\n\\put(5,3){\\circle*{2}}\n\\put(5,8){\\circle*{2}}\n\\put(5,-2){\\circle*{2}}\n\\put(0,3){\\circle*{2}}\n\\put(10,3){\\circle*{2}}\n\\put(5,3){\\line(0,1){5}}\n\\put(5,-2){\\line(0,1){5}}\n\\put(5,3){\\line(1,0){5}}\n\\put(0,3){\\line(1,0){4}}\n\\end{picture}_d)\\geq H(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})^{d\/2}$. This completes the proof. \n\n\\end{proof}\n\n\\subsection{Entropy inequalities and branching Markov chains}\n\nIn Theorem \\ref{thm1} we gave a sufficient condition for a branching Markov process to be a factor of i.i.d. process. This condition cannot be necessary, as the example of the Ising model shows. The \nIsing model with parameter $\\vartheta$ is the particular case where the Markov chain has only two states and the transition \nmatrix $Q=\\left(\n\\begin{tabular}{cc} \n$\\frac{1+\\vartheta}{2}$ & $\\frac{1-\\vartheta}{2}$\\\\ $\\frac{1-\\vartheta}{2}$ & $\\frac{1+\\vartheta}{2}$\n\\end{tabular}\n\\right)$ is symmetric. \nThat is, when we propagate the states from the root along the tree, $\\frac{1+\\vartheta}{2}$ is the probability that we keep \nthe current state. The model is called ferromagnetic if $\\vartheta\\geq 0$; i.e. if it is more likely to keep the current state than to change it. The Dobrushin coefficient of the Ising model with parameter $\\vartheta\\geq 0$ is just $\\vartheta$. \nTherefore our theorem implies that when $|\\vartheta|<1\/d$, the ferromagnetic Ising model is a factor of i.i.d. \nprocess. But a stronger statement is known: the Ising model is a factor of i.i.d. if $-1\/(d-1)\\leq\\vartheta\\leq 1\/(d-1)$. To prove this, \none can use that the clusters in the random cluster representation of the Ising model are almost surely finite in this \nregime. See e.g.\nSection 3 of \\cite{russ} for the details.
See also the paper of H\\\"aggstr\\\"om, Jonasson and Lyons \\cite{russregi} for a generalization of this result to random-cluster and Potts models.\n\nIt is also known that the Ising model with parameter $|\\vartheta|>1\/\\sqrt{d-1}$ cannot be a factor of i.i.d. (not even a weak limit of factor of i.i.d. processes); see \\cite{russ} and \\cite{cordec}. \nIt is an open question whether the Ising model with $1\/(d-1)< |\\vartheta|\\leq 1\/\\sqrt{d-1}$ is a factor of i.i.d. or not (or whether it is a limit of \nfactor of i.i.d. processes). \n\n\nFor the ferromagnetic Ising model, the parameter $\\vartheta$ is equal to the spectral radius of the transition matrix $Q$, which is, in general, the second largest eigenvalue in absolute value after the eigenvalue $1$. \nMore generally, the results of \\cite{cordec} imply that a branching Markov process is not the weak limit of factor of i.i.d. \nprocesses if the spectral radius $\\varrho$ of its transition matrix $Q$ is larger than $1\/\\sqrt{d-1}$. We will \nuse Theorem \\ref{edgevertex} to show that for general branching Markov processes the correlation bound is far from being optimal. \n\n\\begin{theorem}\\label{exnotsuff} For every $d\\geq 3$ and $\\varepsilon>0$ there exists a transition matrix $Q$ such that \n\\begin{itemize}\n\\item its spectral radius is less than $\\varepsilon$;\n\\item the branching Markov process on the $d$-regular tree $T_d$ according to $Q$ is not a typical process, and hence it is not the weak limit of factor of i.i.d. processes.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nChoose a prime $p$ which is congruent to $1$ modulo $4$ and which satisfies $\\frac{2\\sqrt{p}}{p+1}<\\varepsilon$. \nLet $G$ be a $(p+1)$-regular Ramanujan graph (see the definition below) on $k$ vertices such that \n\\[k>(p+1)^{\\frac{d}{d-2}}.\\] By a result of Lubotzky, Phillips and Sarnak \\cite{lbs}, this is possible. Let $Q$ be the\ntransition matrix of the simple random walk on the vertices of $G$.
(That is, $Q$ is the adjacency matrix of $G$ normalized \nby $p+1$.) Let $r$ be the spectral radius of $Q$. \nBy the definition of Ramanujan graphs we \nhave that $r\\leq \\frac{2\\sqrt{p}}{p+1}< \\varepsilon$.\n\nThe branching Markov process on $T_d$ according to $Q$ is an invariant process in $I_d(N)$, where $N$ represents the \nvertices of $G$, that is, it has $k$ elements. Since $G$ is regular, the stationary random walk is uniformly distributed \non its \nvertices, and therefore the vertex entropy of this branching Markov process is just $\\ln k$. \n\nAs for the edge entropy: we can choose the first vertex uniformly at random, and then one of its \n$p+1$ neighbors arbitrarily, but the order does not matter. Therefore the edge entropy is $\\ln k+\\ln (p+1)$. \n\nFrom Theorem \\ref{edgevertex} we get that if the branching Markov process according to the transition matrix $Q$ were a typical process, \nthen the following would be true: \n\\[\\frac{d}{2}h(\\begin{picture}(4,10)\n\\put(2,0){\\line(0,1){6}}\n\\put(2,0){\\circle*{2}}\n\\put(2,6){\\circle*{2}}\n\\end{picture})\\geq (d-1)h(\\begin{picture}(4,10)\n\\put(2,3){\\circle*{2}}\n\\end{picture});\\]\n\\[\\frac{d}{2}[\\ln k+\\ln(p+1)]\\geq (d-1)\\ln k;\\]\n\\[d\\ln(p+1)\\geq (d-2)\\ln k;\\]\n\\[(p+1)^{d\/(d-2)}\\geq k.\\]\n\nThis contradicts the choice of $k$. Therefore the branching Markov process according to $Q$ is not a typical process. \n\\end{proof}\n\n\\begin{remark} The example of the Potts model shows that the typicality of a process or whether it is a factor of i.i.d. cannot be decided based only on the \nnumber of states and the spectral radius. \nLet $Q_1$ be the \ntransition matrix of the Potts model on $k$ states (see e.g. \\cite{sly}): with a \ngiven probability $p$ it stays in the current state, otherwise it chooses another state uniformly at random. Its \nspectral radius is equal to $1-\\frac{pk}{k-1}$.
Moreover, it is also known that the Potts model satisfies the Dobrushin condition if $k>2d$ \\cite{sokal}. \nBy choosing $p$ such that the spectral radius is so small that the previous theorem can be applied, we get that the branching Markov chain in the previous theorem is not a limit of factor of i.i.d. processes, while Theorem \\ref{thm1} implies that the branching Markov process according to $Q_1$ \nis a factor of i.i.d. process. \n\n \n\\end{remark}\n\n\\begin{remark}\nWe have seen that the entropy inequality can lead to a stronger bound than the correlation decay when the number of states is sufficiently large. However, for the Ising model, when $k=2$, the correlation decay bound is stronger than the bound we get from this entropy inequality. \n\\end{remark}\n\n\\medskip\n\n\\subsection{Entropy inequalities and random $d$-regular graphs}\n\n\\label{dominating}\n\nIn this section we show how to use entropy inequalities to obtain results about random $d$-regular graphs. Our strategy is that we use Theorem \\ref{staredge} to show that certain invariant processes cannot be typical. Then, by the correspondence principle, we translate this to statements about random $d$-regular graphs. Throughout this section we assume that $d\\geq 3$. \n\n\nWe denote by $C$ the degree $d$ star in $T_d$ with root $o$ and leaves $w_1,w_2,\\dots,w_d$.\nLet $\\mu\\in I_d(S)$ be an invariant process. If $F$ is a finite subset of $V(T_d)$, then we denote by $\\mu_F$ the marginal distribution of $\\mu$ restricted to $F$, and by $\\nu_F$ the product measure of the marginals of $\\mu_F$.\nWe denote by $t(F)$ the total correlation of the joint distribution of $\\mu_F$; that is, $t(F)=h(\\nu_F)-h(F)$.\n\n\n\n\n\n\\begin{proposition} \\label{prop:41}\n\n\n\nLet $\\mu$ be a typical process and suppose that $h(C)-h(C\\setminus \\{ w_1\\})\\leq b$ for some $b\\geq 0$.
\nThen $t(C\\setminus \\{ w_1\\})\\leq b\\frac{2d-2}{d-2}$ and \\[d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\leq \\sqrt{b(d-1)\/(d-2)}. \\]\n\\end{proposition}\n\n\\begin{proof}\nBy Theorem \\ref{staredge} and the condition of the proposition we get \n\\begin{equation}\\label{eq:p41}0\\leq h(C)-\\frac d2 h(\\{o, w_1\\})\\leq h(C\\setminus \\{w_1\\})-\\frac d2 h(\\{o, w_1\\})+b.\\end{equation}\nBy using a simple upper bound on the entropy of $C\\setminus \\{w_1\\}$ we get \n\\begin{equation*}0\\leq h(o)+(d-1)[h(\\{o,w_1\\})-h(o)]- \\frac d2 h(\\{o, w_1\\})+b.\n\\end{equation*}\nBy rearranging and multiplying by $d\/(d-2)$, this implies \n\\[-\\frac d2h(\\{o, w_1\\})\\leq \\frac{db}{d-2}-dh(o).\\]\nPutting this together with inequality \\eqref{eq:p41}, we conclude \n\\[0\\leq h(C\\setminus \\{w_1\\})-dh(o)+\\frac{2d-2}{d-2}b.\\] \nSince $h(\\nu_F)=dh(o)$ for an invariant process if $F$ consists of $d$ vertices, this concludes the proof of the first inequality.\n\nObserve that $t(C\\setminus \\{ w_1\\})=D\\big(\\mu_{C\\setminus \\{ w_1\\}}||\\nu_{C\\setminus \\{ w_1\\}}\\big)$, where $D$ denotes the relative entropy. \nRecall that Pinsker's inequality says that $D(P||Q)\\geq 2d_{TV}(P, Q)^2$, where $P$ and $Q$ are two probability distributions on the same set. This implies the statement. \\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\n\nAs a first application of Proposition \\ref{prop:41}, we use it in the case of $b=0$. \n\n\\begin{definition} \\label{def:rigid}\nLet $S$ be a finite set and $\\mu\\in I_d(S)$ be an invariant process. Assume that $C$ is a degree $d$ star in $T_d$ with root $o$ and leaves $w_1,w_2,\\dots,w_d$. We say that $\\mu$ is {\\it rigid} if \n\\begin{enumerate} \n\\item the values on $C\\setminus\\{w_1\\}$ uniquely determine the value on $w_1$;\n\\item $\\mu$ restricted to $C\\setminus\\{w_1\\}$ is not i.i.d. at the vertices. 
\n\\end{enumerate}\n\\end{definition}\n\n\n\n\n\n\n\\begin{proposition}\\label{apstaredge} If $\\mu\\in I_d(S)$ is a rigid process, then it is not typical.\n\\end{proposition}\n\\begin{proof} The first assumption in Definition \\ref{def:rigid} implies that Proposition \\ref{prop:41} holds for $\\mu$ with $b=0$, and thus we obtain that $\\mu_{C\\setminus\\{w_1\\}}=\\nu_{C\\setminus\\{w_1\\}}$, which contradicts the second assumption. \\hfill $\\square$ \n\\end{proof}\n\n\\medskip\n\nWe give an example of a family of rigid processes. \n\n\\begin{lemma}\\label{rig1} Assume that $S$ is a finite set in $\\mathbb{R}$ and that $\\mu$ satisfies the eigenvector equation; namely, that a $\\mu$-random function $f:T_d\\rightarrow S$ satisfies $\\lambda f(o)=f(w_1)+f(w_2)+\\dots+f(w_d)$ with probability $1$. Then $\\mu$ is rigid.\n\\end{lemma} \n\\begin{proof}\nObserve that $f(w_1)=\\lambda f(o)-(f(w_2)+f(w_3)+\\dots+f(w_d))$, which shows that the first condition is satisfied. We want to exclude the possibility that $f(o), f(w_2), f(w_3), \\ldots, f(w_d)$ are identically distributed independent random variables. We can assume that all values in $S$ are taken with positive probability. This means that for every pair $(c_1,c_2)\\in S\\times S$ we have with positive probability that $f(w_2)=f(w_3)=\\dots=f(w_d)=c_1$, $f(o)=c_2$, and thus $f(w_1)=\\lambda c_2-(d-1)c_1$. It follows that $\\lambda S+(1-d)S\\subseteq S$ (using Minkowski sum), which is impossible if $S$ is finite.\\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\nWe give further applications of Proposition \\ref{prop:41} in extremal combinatorics.\n\n\\begin{definition}\\label{def:cover}\nLet $G=(V, E)$ be a $d$-regular (not necessarily finite) graph. Let $M: S\\times S\\rightarrow \\mathbb N \\cup \\{0\\}$. We assume that $\\sum_{q\\in S} M(s,q)=d$ holds for every $s\\in S$. Furthermore, we suppose that the weighted directed graph with adjacency matrix $M$ is connected.
Let $f: V\\rightarrow S$ be an arbitrary function. We say that $f$ is a covering at $v\\in V$ if \n\\[\\big |\\,\\{w\\ | \\ f(w)=q, w\\in N(v)\\}\\,\\big |=M(f(v), q),\\] \nwhere $N(v)$ is the set of neighbors of $v$. \n\\end{definition}\n\n\\begin{lemma}\\label{lem:covrig} Assume that $M: S\\times S\\rightarrow \\mathbb N \\cup \\{0\\}$ is as in the previous definition. Fix $\\varepsilon\\geq 0$ and $d\\geq 3$. Assume furthermore that $\\mu\\in I_d(S)$ is an invariant process such that a $\\mu$-random function $f: V(T_d^*)\\rightarrow S$ is a covering at the root $o$ with probability $1-\\varepsilon$, and the distribution of $f(o)$ is supported on at least two elements. Then the following hold. \n\\begin{enumerate}[(a)]\n\\item $h(C)-h(C\\setminus \\{ w_1\\})\\leq \\varepsilon \\log |S|$. \n\\item There exists $\\delta=\\delta(M, \\varepsilon)>0$ such that $\\mathbb P(f(o)=s)\\geq \\delta$ holds for all $s\\in S$.\n\\item By using the notation of Proposition \\ref{prop:41}, we have \\[d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\geq \\frac12 (\\delta^d-\\varepsilon).\\] \n\\item If $\\varepsilon=0$, then $\\mu$ is rigid.\n\\end{enumerate} \n\\end{lemma}\n\\begin{proof} We denote by $A$ the event that $f$ is a covering at $o$, and by $B$ its complement. Then $\\mathbb P(B)=\\varepsilon$.\n\n\\noindent $(a)$ For $\\varepsilon=0$: observe that $f(w_1)$ is the unique element $q\\in S$ with the following property: \n\\[\\big |\\,\\{w\\ | \\ f(w)=q, w\\in \\{w_2, w_3, \\ldots, w_d\\}\\}\\,\\big |=M(f(o), q)-1,\\] \nwhich depends only on the values of $f$ on $C\\setminus \\{w_1\\}$. Therefore the values on $C\\setminus \\{w_1\\}$ uniquely determine the value on $w_1$, and the two entropies are equal.\nFor $\\varepsilon>0$, we define the conditional entropy with respect to an event of positive probability as the entropy of the conditional distribution.
Then we have \n\\[\nh(C)=h(C|A)\\mathbb P(A)+h(C|B)\\mathbb P(B)-\\mathbb P(A)\\log \\mathbb P(A)-\\mathbb P(B)\\log \\mathbb P(B);\\]\n\\[h(C\\setminus \\{w_1\\})=h(C\\setminus \\{w_1\\}|A)\\mathbb P(A)+h(C\\setminus \\{w_1\\}|B)\\mathbb P(B)-\\mathbb P(A)\\log \\mathbb P(A)-\\mathbb P(B)\\log \\mathbb P(B).\\]\nIf $A$ holds, then by the argument above, the value on $w_1$ is uniquely determined by the other ones. Hence \n$h(C\\setminus \\{w_1\\}|A)=h(C|A)$. On the other hand, $h(C|B)\\leq h(C\\setminus \\{w_1\\}|B)+\\log |S|$ is a trivial upper bound. Therefore we obtain\n\\[h(C)-h(C\\setminus \\{ w_1\\})=[h(C|B)-h(C\\setminus \\{ w_1\\}|B)]\\mathbb P(B)\\leq \\varepsilon \\log |S|.\\]\n\n \\noindent $(b)$ We show that $\\delta(M, \\varepsilon)\\geq \\frac{a}{d^k}-\\frac{\\varepsilon}{d-1}$ holds, where $k$ is the diameter of the directed graph with adjacency matrix $M$. If $s\\in S$ has probability $a$, then any of its neighbors $t$ has probability at least $(a-\\varepsilon)\/d$, due to the following. The probability of the event $D$ that $f(o)=s$ and $f$ is a covering at the root is at least $a-\\varepsilon$. Given $D$, the joint distribution of the neighbors is permutation invariant. On the event $D$, the values of $f$ evaluated at the neighbors of the root are exactly the neighbors of $s$ with multiplicity in $M$. Hence the probability that the value of $f$ at a fixed neighbor of the root is $t$ is at least $1\/d$ conditionally on $D$. Using the invariance of the process, this proves the lower bound for the probability of $t$. \n \n We can choose an element $s_0\\in S$ which has probability at least $1\/|S|$. By induction, we have that an element of distance $m$ from $s_0$ in the directed graph $M$ has probability at least \n \\[\\frac{1}{|S|d^m}-\\varepsilon \\bigg [\\frac{1}{d}+\\frac{1}{d^2}+\\ldots+\\frac{1}{d^m} \\bigg].\\]\n Since every other element in $S$ can be reached by a directed path of length at most $k$ in $M$, the proof is complete. 
\n \n \\noindent $(c)$ Choose $s_1,s_2\\in S$ such that $M(s_1, s_2)\\leq d\/2$. The covering property at $o$ implies that the probability of the event $\\{f(o)=s_1, f(w_2)=s_2, f(w_3)=s_2, \\ldots, f(w_d)=s_2\\}$ is zero. That is, this event has conditional probability 0 with respect to $A$. It follows that \n \\[\\mathbb P(f(o)=s_1, f(w_2)=s_2, f(w_3)=s_2, \\ldots, f(w_d)=s_2)\\leq \\mathbb P(B)=\\varepsilon.\\]\nOn the other hand, by part $(b)$ and invariance, the same event has probability at least $\\delta^d$ when we consider $\\nu$ restricted to $C\\setminus \\{w_1\\}$ (recall that $\\nu$ is the product measure of the marginals). This implies the statement.\n\n\\noindent $(d)$ The first property follows from the argument in $(a)$. In addition, we have seen in part $(c)$ that the probability of a given configuration is 0. On the other hand, by $(b)$, the probability of each value is positive. \n This excludes the possibility that $\\mu$ restricted to $C\\setminus\\{w_1\\}$ is i.i.d. \\hfill $\\square$\n\n\\end{proof}\n\n\\medskip\n\nFor the combinatorial applications, we need the following definition.\n\n\\begin{definition} \nLet $G=(V, E)$ be a finite $d$-regular graph, and $M: S\\times S\\rightarrow \\mathbb N\\cup \\{0\\}$ as in Definition \\ref{def:cover}. For an arbitrary function $g: V\\rightarrow S$ let $W\\subset V$ be the subset of vertices $v$ at which $g$ is not a covering. We introduce the quantity $e(g):=|W|\/|V|$. Furthermore, we define the covering error ratio of $G$ with respect to $M$ by \\[c(G,M)=\\min_{g: V\\rightarrow S} e(g).\\] \n\\end{definition}\n\nIt will be important that the covering error ratio can be extended to graphings in a natural way such that the extension is continuous in the local-global topology. Let $\\mathcal{G}$ be a graphing on the vertex set $\\Omega$. Let $g:\\Omega\\rightarrow S$ be an arbitrary measurable function. Let $W\\subseteq\\Omega$ be the set of vertices at which $g$ is not a covering of $M$.
We denote by $e(g)$ the measure of $W$. We define $c(\\mathcal{G},M)$ as the infimum of $e(g)$ where $g$ runs through all measurable maps $g:\\Omega\\rightarrow S$. We can also obtain $c(\\mathcal{G},M)$ as a minimum taken on processes. For $\\mu\\in I_d(S)$ let $e(\\mu)$ denote the probability that a $\\mu$-random function $f:T_d^*\\rightarrow S$ is not a covering of $M$ at $o$. Using the fact that $e(\\mu)$ is continuous in the weak topology and that $\\gamma(\\mathcal{G},S)$ is compact in the weak topology, we obtain that\n\\begin{equation}\\label{cermin}\nc(\\mathcal{G},M)=\\min_{\\mu\\in\\gamma(\\mathcal{G},S)}e(\\mu).\n\\end{equation}\n\nNow we are ready to prove the next combinatorial statement. Recall that $\\delta(M, 0)>0$, and hence $\\varepsilon_0$ defined in the theorem is also positive.\n\n\\begin{theorem}\\label{thm:combap} Fix $d\\geq 3$ and $M$ as in Definition \\ref{def:cover}. Let \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12(\\delta(M, \\varepsilon)^d-\\varepsilon)\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}\\bigg\\},\\] \nwhere $\\delta(M, \\varepsilon)$ is defined in Lemma \\ref{lem:covrig} $(b)$.\n Then for every $0<\\varepsilon<\\varepsilon_0$ the probability $\\mathbb P(c(\\mathbb G_i, M)<\\varepsilon)$ converges to $0$ as $i\\rightarrow \\infty$, where $\\mathbb G_i$ is a random $d$-regular graph on $i$ vertices. \n\\end{theorem}\n\n\\begin{proof} Suppose that the invariant process $\\mu\\in I_d(S)$ satisfies the conditions of Lemma \\ref{lem:covrig} for some $\\varepsilon>0$, and it is typical. Part $(a)$ implies that Proposition \\ref{prop:41} can be applied with $b=\\varepsilon \\log |S|$.
Putting this together with part $(c)$ of the lemma, we obtain\n\\[\\frac 12[\\delta(M, \\varepsilon)^d-\\varepsilon]\\leq d_{TV}(\\mu_{C\\setminus \\{ w_1\\}}, \\nu_{C\\setminus \\{ w_1\\}})\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}.\\]\n\nBy equation (\\ref{cermin}) it follows that $c(\\mathcal{G},M)\\geq \\varepsilon_0$ holds for every typical graphing in $\\overline{X_d}$. Let $0<\\varepsilon<\\varepsilon_0$ be an arbitrary real number and let $Q_\\varepsilon=\\{\\mathcal{G}|c(\\mathcal{G},M)\\leq\\varepsilon\\}$. By applying Proposition \\ref{prop:corres} to $Q_\\varepsilon$, the proof is complete. \\hfill $\\square$\n\\end{proof}\n\n\\medskip\n\nTheorem \\ref{thm:combap} provides a family of combinatorial statements depending on the matrix $M$. \nAn interesting application of Theorem \\ref{thm:combap} is when $M$ is the adjacency matrix of a $d$-regular simple graph $H$. In this case we obtain that random $d$-regular graphs do not cover (not even in an approximate way) the graph $H$. If we apply Proposition \\ref{apstaredge} to such a matrix $M$ we get the following. \nLet $\\mu\\in I_d(V(H))$ be the invariant process on $T_d$ that is a covering map from $T_d$ to $H$. Then $\\mu$ is not typical and thus it is not in the weak closure of factor of i.i.d. processes. \n\n\n\n\n\n\n\n\n We show two concrete examples, using only $2\\times 2$ matrices, to illustrate how our general statement of Theorem \\ref{thm:combap} is related to known results. Note that in these special cases the literature has better bounds than ours; our goal is only to demonstrate the connection between different areas.\n\n\\begin{equation*}\nM_1=\\begin{pmatrix} 0 & d \\\\ 1 & d-1 \\end{pmatrix}~~~,~~~M_2=\\begin{pmatrix} 0~~ & d \\\\ d~~ & 0 \\end{pmatrix}\n\\end{equation*}\n\nThe dominating ratio of a finite graph $G$ is the following.
Let $m$ be the size of the smallest set of vertices $V'$ of $G$ such that each vertex of $G$ is either in $V'$ or connected to a vertex in $V'$. The dominating ratio is defined as $dr(G)=m\/|V(G)|$. It is clear that the dominating ratio of a $d$-regular graph is at least $1\/(d+1)$. It is easy to see that the dominating ratio of a $d$-regular graph $G$ is equal to $1\/(d+1)$ if and only if $c(G,M_1)=0$. For this particular matrix, one can use a better bound than the general one given in Lemma \\ref{lem:covrig}. Namely, as a simple calculation shows, $\\delta(M_1, \\varepsilon)=1\/(d+1)-\\varepsilon\/(d+1)$ can be chosen.\nTheorem \\ref{thm:combap} applied to $M_1$ gives the following combinatorial statement. \n\n\\begin{proposition} For every $d\\geq 3$ we define \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12\\bigg[\\bigg(\\frac{1-\\varepsilon}{d+1}\\bigg)^d-\\varepsilon\\bigg]\\leq \\sqrt {\\varepsilon\\log |S|\\frac{d-1}{d-2}}\\bigg\\}.\\] \nThen $\\mathbb P(dr(\\mathbb G_i)<1\/(d+1)+\\varepsilon)$ converges to $0$ as $i\\rightarrow \\infty$ for all $0<\\varepsilon<\\varepsilon_0$. \\end{proposition}\n\nThis gives the following for small values of $d$. \n\n\\begin{center}\n\\begin{tabular}{lcccc}\n$d$ & 3&4&5&6\\\\ \\hline\n$\\varepsilon_0$ & $4.38\\cdot 10^{-5}$ & $6.15\\cdot 10^{-7}$ & $4.47\\cdot 10^{-9}$&$2.08\\cdot10^{-11}$ \n\\end{tabular}\n\\end{center}\n\nFor $d=3$ Molloy and Reed \\cite{molloyreed} gave a much better bound $0.2636$ for the dominating ratio; our result gives $0.2500438$. It would be interesting to improve our bounds for larger $d$ as well. \n\n\n\\medskip\n\nThe next application shows that random $d$-regular graphs are separated from being bipartite, which was first proved by Bollob\\'as \\cite{bollind}.
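The $\varepsilon_0$ values in the table above can be reproduced numerically. The following is a minimal sketch of our own (the helper name `eps0` is ours, not from the paper): it assumes the natural logarithm in the entropy bound, consistent with the $\ln$ used earlier, with $|S|=2$ states, and locates the crossing point of the two sides of the defining inequality by bisection.

```python
from math import log, sqrt

def eps0(d, n_states=2):
    """Numerically evaluate the infimum defining epsilon_0 for the matrix M_1."""
    def gap(eps):
        # left-hand side decreases in eps, right-hand side increases,
        # so the infimum is the unique crossing point of the two curves
        lhs = 0.5 * (((1.0 - eps) / (d + 1)) ** d - eps)
        rhs = sqrt(eps * log(n_states) * (d - 1) / (d - 2))
        return lhs - rhs

    lo, hi = 1e-16, 0.1  # gap(lo) > 0 > gap(hi) for every d >= 3
    for _ in range(200):  # bisection to full floating-point precision
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for d in (3, 4, 5, 6):
    print(d, eps0(d))
```

For $d=3$ this yields $\varepsilon_0\approx 4.38\cdot 10^{-5}$, matching the table.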
To put it in another way, it says that the independence ratio (size of the largest independent set divided by the number of vertices) of a random $d$-regular graph is at most $1\/2-\\varepsilon_0$ with probability tending to $1$ with the number of vertices for some $\\varepsilon_0>0$. We can obtain this by applying Theorem \\ref{thm:combap} to the matrix $M_2$. In fact, $\\delta(M_2, \\varepsilon)=1\/2-\\varepsilon$ can be chosen, due to the following argument. One of the states has probability at least $1\/2$, let us say state $0$. Fix a neighbor of the root. If the root is in state $0$, and the random function is a covering at $o$, then its neighbor is in state $1$. This event has probability at least $1\/2-\\varepsilon$, hence the probability of state $1$ is at least $1\/2-\\varepsilon$. \n\nTherefore \n\\[\\varepsilon_0=\\inf\\bigg\\{\\varepsilon>0: \\frac 12[(1\/2-\\varepsilon)^d-\\varepsilon]\\leq \\sqrt {\\varepsilon\\log 2\\cdot \\frac{d-1}{d-2}}\\bigg\\}.\\] \n\n\nFor the best known bounds, see McKay \\cite{mckay} for small $d$. \nFor large $d$, the independence ratio of random $d$-regular graphs is concentrated around $2\\log d\/d$ \\cite{bollind, sly}. Our results do not improve their bounds.\n\n\n\\medskip\n\n\n\n\\begin{remark} From Lemma \\ref{rig1} and Proposition \\ref{apstaredge} we obtain that any typical process $\\mu$ (and thus any factor of i.i.d. process) that satisfies the eigenvector equation must take infinitely many values. It would be good to see a finer statement about the possible value distributions. Maybe these distributions are always Gaussian. \n\\end{remark}\n\n\\begin{remark} The proof of Theorem \\ref{thm:combap} makes use of the fact that $c(G,M)$ is continuous in the local-global topology. The continuity of various combinatorial parameters in the Benjamini--Schramm topology was studied in e.g. \\cite{miklos1, miklos2, gabor}.
In those cases it is also possible to prove combinatorial statements through continuity and the analytic properties of the limit objects.\n\\end{remark}\n\n\\subsubsection*{Acknowledgement.} The authors are grateful to Mikl\\'os Ab\\'ert and to B\\'alint Vir\\'ag for helpful discussions and for organizing active seminars in Budapest related to this topic. The research was supported by the MTA R\\'enyi Institute Lend\\\"ulet Limits of Structures Research Group.\n\n\\section{Introduction}\n\nThe notion of causality is deeply rooted in our understanding of nature. In ordinary situations with a fixed spacetime background we can always say that the cause belongs to the past light cone of the effect, and the effect to the future light cone of the cause. This familiar idea might be untenable at regimes at which the quantum mechanical properties of the systems under study are of comparable relevance to their gravitational properties \\cite{isham1993canonical, hartle1993spacetime, butterfield2001spacetime}, for instance if the metric tensor, and thus the causal relations, are subject to quantum fluctuations.\n\nThe crucial role played by probability in quantum mechanics on the one hand, and the dynamical causal structure of general relativity on the other hand, led to the conjecture that a theory unifying general relativity and quantum mechanics should be a probabilistic theory on a dynamical causal structure \\cite{hardy2005probability}. Adopting an operational point of view, we can ask what the measurable consequences of an indefinite causal structure would be. The process matrix framework \\cite{oreshkov2012quantum} is a possible way to address this question, and exploits techniques typical of quantum information to deal with the problem. The framework retains the validity of ordinary quantum mechanics at a local level, i.e.
in local laboratories where quantum operations are performed, but makes no assumptions on the global causal structure outside the laboratories. Interestingly, the framework allows for processes that are more general than those allowed by the standard (causal) quantum formalism. In particular, they include situations in which the direction of the signaling, and thus causality in the operational sense, is not fixed. Nonetheless, logical paradoxes, such as signaling to the past, are ruled out by consistency conditions. \n\nWe call a process matrix causally ordered if it allows for signalling only in one fixed direction between the parties. A (bipartite) process matrix is {\\it causally separable} if it can be decomposed as a convex combination of causally ordered processes. An example of a {\\it causally nonseparable} process is the `quantum switch' \\cite{oreshkov2012quantum, oreshkov2015causal}. This is a quantum system with an auxiliary degree of freedom which can coherently control the order in which operations are applied. The quantum switch provides quantum computational advantages with respect to quantum circuits with fixed gate order \\cite{chiribella2013quantum, chiribella2012perfect, araujo2014computational} and has recently been implemented with linear optics \\cite{procopio2014experimental}.\n\nIn their original formulation, process matrices were only defined for finite dimensional Hilbert spaces \\cite{oreshkov2012quantum, araujo2014computational, araujo2015witnessing, oreshkov2015causal}. Despite providing an arena for the experimental verification of systems like the quantum switch, finite-dimensional systems are too restrictive for the purpose of studying indefinite causality. The generalization of the formalism to continuous variables broadens the class of systems which can be described with the formalism. 
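The interference characteristic of the quantum switch mentioned above can already be illustrated for qubits. The following is a toy sketch of our own (not a construction from this paper; the convention that the control state $|0\rangle$ applies $A$ before $B$, and the choice $A=X$, $B=Z$, are ours): with the control prepared in $|+\rangle$, projecting the control on $|-\rangle$ after the switch leaves the target in a state proportional to the commutator of the two operations, so anticommuting unitaries give a deterministic interference signal.

```python
import numpy as np

# Control in |+>, target in |0>; the switch applies the two unitaries
# in an order coherently controlled by the control qubit.
psi = np.array([1.0, 0.0])

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

# (|0>_c ⊗ B A |psi> + |1>_c ⊗ A B |psi>) / sqrt(2)
out = (np.kron([1.0, 0.0], B @ A @ psi)
       + np.kron([0.0, 1.0], A @ B @ psi)) / np.sqrt(2)

# Probability of finding the control in |->: amplitude is (BA - AB)|psi>/2,
# i.e. proportional to the commutator of the two operations
minus = np.array([1.0, -1.0]) / np.sqrt(2)
bra = np.kron(minus, np.eye(2))  # <-|_c ⊗ identity on the target
amp = bra @ out
p_minus = np.vdot(amp, amp).real
print(p_minus)  # anticommuting A, B give p = 1: deterministic interference
```

Since $X$ and $Z$ anticommute, the control is found in $|-\rangle$ with certainty, which is the interference signature of the superposed gate order.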
In particular, Gaussian quantum optics, which describes an important class of continuous-variable quantum systems, plays a very important role in quantum information processing \\cite{weedbrook2012gaussian}. The generalization proposed here can be straightforwardly used to devise new experiments. As an example of such applications, we propose an infinite-dimensional version of the quantum switch.\n\nIn addition, quantum fluctuations of the metric and of the causal structures are expected at high energies, where both quantum and gravitational effects become relevant. At these regimes a description in terms of quantum fields is required. The generalization proposed here is a necessary step towards this goal and paves the way for a more thorough study of quantum fields on indefinite causal structures. With this in mind, it is worth noting that a proper treatment of quantum fields requires solving problems related to the localisation of the local laboratories and the tensor product structure of the Hilbert spaces. The study of this problem is beyond the scope of this work and is likely to require the framework of algebraic quantum field theory \\cite{haag1964algebraic, haag1992local}.\n\nIn contrast to the finite-dimensional case, in this work we face difficulties related to singularities. These singularities arise from the straightforward generalization of the approach used in finite dimensions when the dimension of the Hilbert space tends to infinity. We solve this problem by using a phase space representation of the process matrices in terms of Wigner functions. We also show that the notion of causal nonseparability is maintained in infinite dimensions and we provide an argument for the causal nonseparability of the quantum switch.
Specifically, we show that it exhibits interference due to the superposition of the order in which the operations are applied.\n\n\section{The W-matrix formalism}\nIn this section we give a brief introduction to the W-matrix formalism in finite-dimensional Hilbert spaces, following the first formulation given in \cite{oreshkov2012quantum}. Here we restrict the discussion to a two-party scenario, but the formalism is valid for an arbitrary number of local observers. Let us consider the two observers A and B, situated in separate local laboratories. We assume that standard quantum mechanics is valid in each local laboratory. However, we make no assumptions on the global causal structure outside the laboratories. This means that each observer is free to perform local quantum operations on a physical system in a finite-dimensional Hilbert space. More specifically, the local operations performed in the laboratories are completely positive (CP) maps $\mathcal{M}_i^{A}: \mathcal{L}(\mathcal{H}^{A_1}) \rightarrow \mathcal{L}(\mathcal{H}^{A_2})$ and $\mathcal{M}_j^{B}: \mathcal{L}(\mathcal{H}^{B_1}) \rightarrow \mathcal{L}(\mathcal{H}^{B_2})$, where $\mathcal{L}(\mathcal{H})$ denotes linear operators acting on the finite-dimensional Hilbert space $\mathcal{H}$, and where $\mathcal{H}^{A_1},\, \mathcal{H}^{A_2}$ and $ \mathcal{H}^{B_1}, \, \mathcal{H}^{B_2}$ are respectively the input and output Hilbert spaces of A and B. It is convenient to use the Choi-Jamio{\l}kowski (CJ) isomorphism \cite{jamiolkowski1972linear, choi2000completely}, which associates with a map between two Hilbert spaces an operator on their tensor product. 
We write the CJ-equivalent of the local operations as $M_i^{X}= (\mathbb{1}\otimes \mathcal{M}_i^{X})\left| \Phi^+ \right> \left< \Phi^+ \right|$ on $\mathcal{H}^{X_1}\otimes \mathcal{H}^{X_2}$, $X=A,\, B$, where $\left| \Phi^+ \right>= \sum_i \left| i \right>_{X_1} \left| i \right>_{X_1}$ is the non-normalized maximally entangled state in the input Hilbert space. Given the set of CP maps $\left\lbrace \mathcal{M}_i^{X} \right\rbrace_{i=1}^{n}$ corresponding to the $n$ possible local outcomes, the sum $\sum_i \mathcal{M}_i^X$ is also trace preserving (TP). Physically this means that an outcome always occurs in an experiment. Using the Choi-Jamio{\l}kowski isomorphism, we can write this condition (CPTP condition) as $\sum_i\text{Tr}_{X_2}M_i^X= \mathbb{1}_{X_1}$.\n\nGiven the set of CP maps accounting for all possible local operations, we can ask what the most general correlations between the outcomes of the two observers are. The most general way to linearly map local quantum operations to probability distributions can be written as $p(\mathcal{M}_i^A,\, \mathcal{M}_j^B)= \text{Tr}\left[ W (M_i^A \otimes M_j^B) \right]$, where we introduce the process matrix $W \in \mathcal{L}\left( \mathcal{H}^{A_1}\otimes \mathcal{H}^{A_2} \otimes \mathcal{H}^{B_1} \otimes \mathcal{H}^{B_2}\right)$, a positive linear operator $W \geq 0$. The non-negativity of the probabilities (including the case when the two parties share entanglement) is ensured by the positivity of the W-matrix. Moreover, we require that probabilities are normalized, i.e. 
$\sum_{ij}p(\mathcal{M}_i^A,\, \mathcal{M}_j^B)=1$.\n\nIn \cite{araujo2015witnessing} it was shown that the characterization of the W-matrix in the two-party scenario and finite-dimensional Hilbert spaces can be given as\n\begin{align} \label{eq:Characterization}\n\t& W \geq 0, \\\n\t& \text{Tr}W =d_{A_2} d_{B_2}, \qquad d_X= \text{dim}(\mathcal{H}_{X}), \nonumber\\\n\t& _{B_1 B_2} W = _{A_2 B_1 B_2} W, \nonumber\\\n\t& _{A_1 A_2} W = _{B_2 A_1 A_2} W,\nonumber\\\n\t& W= _{A_2} W + _{B_2} W - _{A_2 B_2} W, \nonumber\n\end{align}\nwhere $_{X} W= \frac{\mathbb{1}_X}{d_X} \otimes \text{Tr}_X W$.\nThis means that not every positive operator on the total Hilbert space is an allowed process matrix: the excluded terms would give rise to non-normalized probabilities. In \cite{oreshkov2012quantum} it is shown that these terms can be interpreted as logical paradoxes. As an example, let us assume a one-party scenario in which the input and output Hilbert spaces are two-dimensional and a basis is provided by the two states $\left|0\right>$ and $\left|1\right>$. Let the W-matrix be an identity channel from the observer's output to the observer's input. Then if the observer applies a local operation which flips the qubit, we get the paradox $\left|0\right>=\left|1\right>$. This paradox is of the type of the `grandfather paradox', in which an agent goes back in time and kills his grandfather. Such situations are automatically ruled out in the W-matrix formalism by the conditions \eqref{eq:Characterization}. 
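The qubit version of this paradox is easy to check numerically. The sketch below is our own illustration, not part of the original formalism: it builds the "backwards identity channel" $W$ from the unnormalized $|\Phi^+\rangle$ convention of the text, and verifies both that it assigns inconsistent probabilities to deterministic operations and that it violates the one-party reduction of the characterization, $W = {}_{A_2}W$ (equivalently, $W = \rho \otimes \mathbb{1}_{A_2}$).

```python
import numpy as np

d = 2  # dimension of the input (A_1) and output (A_2) spaces

def cj(U):
    """CJ operator of the map rho -> U rho U^dag, built from the
    non-normalized |Phi+> = sum_i |ii>, as in the text."""
    phi = sum(np.kron(np.eye(d)[:, [i]], U[:, [i]]) for i in range(d))
    return phi @ phi.conj().T

def proj_A2(W):
    """The reduction _{A_2}W = (Tr_{A_2} W) (x) 1/d on the output factor."""
    tr_out = W.reshape(d, d, d, d).trace(axis1=1, axis2=3)
    return np.kron(tr_out, np.eye(d) / d)

def prob(U, W):
    """p = Tr[W M] for the deterministic operation rho -> U rho U^dag."""
    return np.trace(W @ cj(U)).real

I2 = np.eye(d)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # bit flip

W_loop = cj(I2)                             # identity channel from output back to input
W_valid = np.kron(np.diag([1.0, 0.0]), I2)  # a state on A_1, identity on A_2

# A deterministic operation must occur with probability one. The valid W
# does this for both unitaries; the loop gives |Tr U|^2 instead:
# 4 for the identity and 0 for the bit flip, which is the paradox.
print(prob(I2, W_valid), prob(X, W_valid))  # 1.0 1.0
print(prob(I2, W_loop), prob(X, W_loop))    # 4.0 0.0

# One-party reduction of the characterization, W = _{A_2}W:
print(np.allclose(proj_A2(W_valid), W_valid))  # True
print(np.allclose(proj_A2(W_loop), W_loop))    # False: ruled out
```

Both candidate matrices are positive and satisfy the trace condition $\operatorname{Tr}W = d_{A_2}$; it is the reduction condition that singles out the paradoxical loop.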
On the other hand, the requirements \eqref{eq:Characterization} together with the local CPTP maps give rise to correlations which are more general than those of standard quantum mechanics.\n\nIn the formulation in finite-dimensional Hilbert spaces the characterization of the process matrix heavily relies on the dimension of the Hilbert spaces of the observers, so that taking the representation of W and letting the dimensions tend to infinity would lead to singularities. Therefore, a straightforward generalization to infinite dimensions is not possible. An alternative formulation, suitable for infinite-dimensional Hilbert spaces, is given in terms of Wigner functions, which provide an equivalent description to the usual operator representation. We will see that the requirement that W gives rise to consistent probabilities restricts the possible Wigner representations, and provides a characterization of the process matrix equivalent to that of the finite-dimensional case.\n\n\section{Extension to infinite dimensions}\nThe extension of the W-matrix formalism to continuous variables presents some novel features in contrast to the original framework in finite-dimensional Hilbert spaces. These features are analogous to those encountered in the infinite-dimensional limit of the ordinary quantum mechanics of finite-dimensional systems \cite{peres2006quantum}, and mainly concern the boundedness of the operators representing a quantum state.\n\nWe consider two local observers, $A$ and $B$, each provided with a local laboratory and free to perform local operations on a quantum system. In infinite dimensions we have to restrict the space $\mathcal{L}(\mathcal{H})$ of linear operators on the Hilbert space $\mathcal{H}$ to the bounded linear operators on $\mathcal{H}$. We call this space $\mathcal{B}(\mathcal{H})$. 
The maps describing the local operations in A and B are represented by completely positive (CP) maps $\mathcal{M}_i^{A}: \mathcal{B}(\mathcal{H}_{A_{1}}) \rightarrow \mathcal{B}(\mathcal{H}_{A_{2}})$, $\mathcal{M}_j^{B}: \mathcal{B}(\mathcal{H}_{B_{1}}) \rightarrow \mathcal{B}(\mathcal{H}_{B_{2}})$, where $\mathcal{H}_{X_{1}}, \, \mathcal{H}_{X_{2}}$, $X=A,\,B$, are the (infinite-dimensional) input and output Hilbert spaces of each laboratory. Each map $\mathcal{M}_i^{X}$ describes a transformation of a state $\rho$ which yields the outcome $i$ and the output state $\mathcal{M}_i^{X}(\rho)$. A convenient way of representing CP maps is through the Choi-Jamio{\l}kowski (CJ) isomorphism (see \cite{jamiolkowski1972linear, choi2000completely} for the original definition in finite dimensions, \cite{holevo2011entropy} for the extension to infinite dimensions), which associates an operator $M_i^X$ to a CP map $\mathcal{M}_i^X$ through\n\t$M_i^{X}= \left( \mathbb{1} \otimes \mathcal{M}_i^{X} \right) \left| \Phi^+ \right> \left< \Phi^+ \right|$.\nHere $\left| \Phi^+ \right>= \int dx \left| xx \right>_{X_{1}}$ is the non-normalized maximally entangled state in $\mathcal{H}_{X_{1}} \otimes \mathcal{H}_{X_{1}}$ and $\mathbb{1}$ is the identity operator. Since some outcome is obtained with unit probability, the sum over all possible $\mathcal{M}_i^X$ is a completely positive trace-preserving (CPTP) map. 
This condition, which we refer to as the CPTP condition, is expressed in terms of the CJ equivalent $M^X= \sum_i M_i^X$ as $\operatorname{Tr}_{X_{2}}(M^{X})=\mathbb{1}_{X_{1}}$.\n\nThe process matrix is an operator $W \in \mathcal{B}(\mathcal{H}_{A_{1}} \otimes \mathcal{H}_{A_{2}}\otimes \mathcal{H}_{B_{1}}\otimes \mathcal{H}_{B_{2}})$ such that $W \geq 0$ and the probability of two measurement outcomes $i$ and $j$ is\n\begin{equation}\n\tp(\mathcal{M}_i^{A},\, \mathcal{M}_j^{B}) = \operatorname{Tr} \left[ W (M_i^{A} \otimes M_j^{B}) \right].\n\end{equation}\nThe probability should satisfy $0 \leq p(\mathcal{M}_i^{A},\, \mathcal{M}_j^{B}) \leq 1$. In particular, the condition $\sum_{ij} p(\mathcal{M}_i^{A},\, \mathcal{M}_j^{B}) = 1$ implies that $\operatorname{Tr}\left[W (M^{A}\otimes M^{B})\right]=1$ for every pair of CPTP maps $\mathcal{M}^{A},\, \mathcal{M}^{B}$. From now on we will only consider the CJ representation of the CP maps. \n\n\subsection{Characterization of the one-party scenario}\nThe one-party scenario can be obtained from the two-party one by taking the Hilbert spaces of one of the observers to be one-dimensional.\nThe Wigner equivalent of a CPTP map $M$ (we omit here the index relative to the observer) and of a process matrix $W$ is a function of four variables on the phase space, namely $M(\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2})$ and $W(\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2})$. Here the subscripts $1$ and $2$ refer respectively to the input and output Hilbert space, and the quantity $\boldsymbol{\xi}_i=(x_i, p_i)$ denotes a point in phase space. 
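As a quick sanity check on these phase-space conventions, the following numerical sketch (our own illustration, not from the paper; we set $\hbar=1$ and assume the trace rules $\operatorname{Tr}\rho = \frac{1}{2\pi}\int W\, d\boldsymbol{\xi}$ and $\operatorname{Tr}[\rho\sigma] = \frac{1}{2\pi}\int W_\rho W_\sigma\, d\boldsymbol{\xi}$, consistent with the $1/2\pi$ factors in the formulas of this section) verifies normalization and overlap for Gaussian Wigner functions of coherent states.

```python
import numpy as np

# Phase-space grid (hbar = 1).
x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)

def wigner_coherent(x0, p0):
    """Wigner function of a coherent state centred at (x0, p0), normalized
    so that (1/2 pi) * integral of W over phase space equals 1."""
    return 2.0 * np.exp(-((X - x0) ** 2 + (P - p0) ** 2))

W1 = wigner_coherent(0.0, 0.0)
W2 = wigner_coherent(1.0, 1.0)

trace_rule = W1.sum() * dx * dx / (2 * np.pi)      # Tr rho, should be ~1
overlap = (W1 * W2).sum() * dx * dx / (2 * np.pi)  # Tr[rho1 rho2]
print(trace_rule)             # ~ 1.0
print(overlap, np.exp(-1.0))  # both ~ 0.3679 = |<alpha|beta>|^2
```

The overlap reproduces $|\langle\alpha|\beta\rangle|^2 = e^{-(\Delta x^2+\Delta p^2)/2}$ for a displacement $(\Delta x, \Delta p)=(1,1)$, confirming that the $1/2\pi$ factors are consistent.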
In terms of Wigner functions, the CPTP condition becomes\n\t$\frac{1}{2\pi} \int d\boldsymbol{\xi}_{2} M (\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2})=1$.\nBy computing the Fourier transform $\tilde{M}(\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})= \frac{1}{(2\pi)^2}\int d\boldsymbol{\xi}_{1} d\boldsymbol{\xi}_{2} M(\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2}) e^{-i \boldsymbol{\xi}_{1} \cdot \boldsymbol{\eta}_{1}}e^{-i \boldsymbol{\xi}_{2} \cdot \boldsymbol{\eta}_{2}}$, with $\boldsymbol{\eta}_i= (\kappa_i, \omega_i)$, the previous condition reads\n\t\begin{equation} \label{eq:CPTPcondition}\n\t\t\tilde{M} (\boldsymbol{\eta}_{1},\boldsymbol{0})= 2\pi \delta(\boldsymbol{\eta}_{1}),\n\t\end{equation}\nwhere $\delta(\boldsymbol{\eta}_1)= \delta(\kappa_1)\delta(\omega_1)$ and $\delta$ is the Dirac delta function.\n\nWe use the CPTP condition \eqref{eq:CPTPcondition} to characterize the $W$-matrix. In terms of the Wigner representation, the normalization of probability $\operatorname{Tr}(W M^A)=1$ is\n\begin{equation} \label{eq:norm_oneparty}\n\t\frac{1}{(2\pi)^2} \int d \boldsymbol{\eta}_{1} d \boldsymbol{\eta}_{2} \tilde{W} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})\tilde{M} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})=1.\n\end{equation}\t\nFor each $\tilde{M}(\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})$ we identify a small interval $S_{2}(\tilde{M}) \subset \mathbb{R}^2$ around $\boldsymbol{\eta}_{2} = \boldsymbol{0}$ where we can approximate $\tilde{M} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})$ with $\tilde{M} (\boldsymbol{\eta}_{1}, \boldsymbol{0})$. We assume that the function $\tilde{M}$ has a well-defined limit at $\boldsymbol{\eta}_2=\boldsymbol{0}$. For all possible $\tilde{M} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})$ we choose the smallest interval $S_{2}=\min_{\tilde{M}} S_{2}(\tilde{M})$. 
We set\n\t$S_{2} \equiv \left[ -\frac{\epsilon}{2},\,\frac{\epsilon}{2}\right]\times \left[ -\frac{\delta}{2},\,\frac{\delta}{2}\right]$.\nWe now split our integral into two parts: in the first one the output variables are integrated over $S_{2}$; in the second one the integration is performed on $\mathbb{R}^2\setminus S_{2}$. By using equation \eqref{eq:CPTPcondition} in the integral on $S_{2}$, equation \eqref{eq:norm_oneparty} reads\n\begin{equation} \label{eq:splitoneparty}\n\t1= \frac{\epsilon \delta}{2\pi} \tilde{W} (\boldsymbol{0}, \boldsymbol{0}) + \left<\tilde{W}\tilde{M}\right>_{\mathbb{R}^2, \mathbb{R}^2\setminus S_{2}},\n\end{equation}\nwhere $\left< f \right>_{R_i, R_j}= \frac{1}{(2\pi)^2}\int_{R_i} d \boldsymbol{\eta}_{1} \int_{R_j} d \boldsymbol{\eta}_{2} f(\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})$. Note that, in order to satisfy equation \eqref{eq:splitoneparty}, $\tilde{W}(\boldsymbol{\eta}_1,\boldsymbol{0})$ cannot diverge faster than $1\/(\epsilon \delta)$. Since the first term does not depend on $\tilde{M}$, the second term in the sum must be equal to the same constant for all possible $\tilde{M} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})$ restricted to the domain $\mathbb{R}^2 \times (\mathbb{R}^2 \setminus S_2)$. This can only happen if the second term in the sum in equation \eqref{eq:splitoneparty} vanishes, so we conclude that \n\t$\tilde{W} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})=0$ when $\boldsymbol{\eta}_{2} \notin S_{2}$ and $\tilde{W} (\boldsymbol{0}, \boldsymbol{\eta}_{2}) = \frac{2\pi}{\epsilon \delta}$ when $\boldsymbol{\eta}_{2} \in S_{2}$.\nWe now send $\epsilon$ and $\delta$ to zero. 
In the limit we find\n\begin{equation} \label{eq:Woneparty}\n\t\tilde{W} (\boldsymbol{\eta}_{1}, \boldsymbol{\eta}_{2})= 2 \pi w(\boldsymbol{\eta}_{1}) \delta(\boldsymbol{\eta}_{2}),\n\end{equation}\nwhere $w(\boldsymbol{\eta}_1)$ is a function to be determined.\n\nWe now ask which conditions $w(\boldsymbol{\eta}_{1})$ should satisfy in order for the probability to be normalized. If we substitute the result \eqref{eq:Woneparty} in the condition for the normalization of the probability \eqref{eq:norm_oneparty} we see that\n\t$1=\t\frac{1}{2\pi} \int d \boldsymbol{\eta}_{1} w(\boldsymbol{\eta}_{1}) \tilde{M} (\boldsymbol{\eta}_{1},\boldsymbol{0})= w(\boldsymbol{0})$.\nMoreover, we can write the complete expression for the Wigner function as\n\t$W (\boldsymbol{\xi}_{1}, \boldsymbol{\xi}_{2}) =\frac{1}{2\pi} \int d \boldsymbol{\eta}_{1} e^{i\boldsymbol{\xi}_{1} \cdot \boldsymbol{\eta}_{1}}w (\boldsymbol{\eta}_{1})$.\nThe Wigner equivalent of the $W$-matrix does not depend on the variables of the second Hilbert space. In the operator representation this result is equivalent to having the identity in the second Hilbert space. This is compatible with the finite-dimensional case shown in \cite{oreshkov2012quantum}. Moreover, given $W = W_1 \otimes \mathbb{1}_2$, taking the trace over the first system leads to\n\t$\operatorname{Tr}_{1} W_1 =\frac{1}{(2\pi)^2} \int d \boldsymbol{\xi}_{1} d \boldsymbol{\eta}_{1} e^{i\boldsymbol{\xi}_{1} \cdot \boldsymbol{\eta}_{1}} w (\boldsymbol{\eta}_{1})= w(\boldsymbol{0})=1$.\nThis means that in $\mathcal{H}_1$ the $W$-matrix is a state with unit trace. 
Therefore, the most general form of the total $W$ for the one-party case is $W= \\rho \\otimes \\mathbb{1}$, consistent with the finite-dimensional case.\n\n\\subsection{Characterization of the two-party scenario}\nIn the bipartite case the Wigner equivalent of the $W$-matrix is a function of eight variables in the phase space $W(\\boldsymbol{\\xi}_{A_{1}}, \\boldsymbol{\\xi}_{A_{2}}, \\boldsymbol{\\xi}_{B_{1}}, \\boldsymbol{\\xi}_{B_{2}})$, where the notation is consistent with the previous case.\nThe probability normalization in terms of the Fourier transform of the Wigner equivalents of the operators is\n\\begin{align} \\label{eq:normprobAB}\n\t1=&\\frac{1}{(2\\pi)^4} \\int d \\boldsymbol{\\eta}_{A_{1}} d \\boldsymbol{\\eta}_{A_{2}} d \\boldsymbol{\\eta}_{B_{1}} d \\boldsymbol{\\eta}_{B_{2}}\\tilde{W}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})\\nonumber \\\\ \n\t& \\times\\tilde{M}^{A}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}) \\tilde{M}^{B}( \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}}),\n\\end{align}\nwhere the CPTP condition for $\\tilde{M}^A$ and $\\tilde{M}^B$ is described by equation \\eqref{eq:CPTPcondition}.\nConsider now a specific local operation for one of the two parties, say Alice, given by\n$\\tilde{M}^{A}(\\boldsymbol{\\eta}_{A_{1}}, \\boldsymbol{\\eta}_{A_{2}}) = 2\\pi \\delta(\\boldsymbol{\\eta}_{A_{1}}) \\chi(R_{A_{2}})$, where $\\chi(R_{A_{2}})$ is the characteristic function over the set $R_{A_{2}}$, $\\chi(R_{A_{2}})= 1$ when $\\boldsymbol{\\eta}_{A_{2}} \\in R_{A_{2}}$, $\\chi(R_{A_{2}})= 0$ otherwise. $R_{A_{2}}$ is a two-dimensional set defined as $R_{A_{2}} = \\left[ -\\frac{1}{2\\alpha_1},\\frac{1}{2\\alpha_1} \\right]\\times \\left[ -\\frac{1}{2\\alpha_2},\\frac{1}{2\\alpha_2} \\right]$ and $\\alpha_1, \\, \\alpha_2$ are two arbitrary positive numbers. 
This choice of the measurement satisfies the CPTP condition for all $\alpha_1,\, \alpha_2$. By inserting this in equation \eqref{eq:normprobAB} we obtain\n\begin{align}\n\t1=&\frac{1}{(2\pi)^3}\frac{1}{\alpha_1 \alpha_2} \int d \boldsymbol{\eta}_{A_{2}} d \boldsymbol{\eta}_{B_{1}} d \boldsymbol{\eta}_{B_{2}} \tilde{W}(\boldsymbol{0}, \boldsymbol{\eta}_{A_{2}}, \boldsymbol{\eta}_{B_{1}}, \boldsymbol{\eta}_{B_{2}})\nonumber \\\n\t&\times\alpha_1 \alpha_2\, \chi(R_{A_{2}}) \tilde{M}^{B}(\boldsymbol{\eta}_{B_{1}}, \boldsymbol{\eta}_{B_{2}}) \nonumber\n\end{align}\nIf we now let $\alpha_1, \alpha_2$ be very large, but still finite, we can approximate $\alpha_1 \alpha_2 \chi(R_{A_{2}})$ with the product of two delta functions, so that we can perform the integration in $\boldsymbol{\eta}_{A_{2}}$ by evaluating the $W$-matrix in the origin. Therefore, the condition to impose on the total $\tilde{W}$ for the integral to converge to one is\n\t$\tilde{W}(\boldsymbol{0},\, \boldsymbol{\eta}_{A_{2}}, \boldsymbol{\eta}_{B_{1}}, \boldsymbol{\eta}_{B_{2}})= 2\pi\alpha_1 \alpha_2 \tilde{W}_{B}(\boldsymbol{\eta}_{B_{1}}, \boldsymbol{\eta}_{B_{2}})$\n whenever $\boldsymbol{\eta}_{A_{2}} \in R_{A_{2}}$, and $\tilde{W}=0$ otherwise. $\tilde{W}_{B}(\boldsymbol{\eta}_{B_{1}}, \boldsymbol{\eta}_{B_{2}})$ is the reduced $W$-matrix of observer B. 
As a consequence, in the limit $\\alpha_1,\\,\\alpha_2 \\rightarrow \\infty$ we obtain\n$1=\\frac{1}{(2\\pi)^2} \\int d \\boldsymbol{\\eta}_{B_{1}} d \\boldsymbol{\\eta}_{B_{2}} \\tilde{W}_{B}(\\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})\\tilde{M}^{B}( \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})$.\nThe previous equation describes exactly the one-party case, so we can apply the result \\eqref{eq:Woneparty} and write\n\\begin{equation} \\label{eq:middlecondW_A}\n\t\\tilde{W}(\\boldsymbol{0},\\, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{\\eta}_{B_{1}}, \\boldsymbol{\\eta}_{B_{2}})= (2 \\pi)^2 \\tilde{w}_{B_{1}}(\\boldsymbol{\\eta}_{B_{1}})\\delta(\\boldsymbol{\\eta}_{B_{2}})\\delta(\\boldsymbol{\\eta}_{A_{2}}).\n\\end{equation}\nThis decomposition of $\\tilde{W}$ is correct only if $\\boldsymbol{\\eta}_{A_{2}}$ is arbitrarily close to the origin. If we now repeat the same procedure by swapping the measurements of Alice and Bob we find an analogous condition\n\\begin{equation} \\label{eq:middlecondW_B}\n\t\\tilde{W}(\\boldsymbol{\\eta}_{A_1},\\, \\boldsymbol{\\eta}_{A_{2}}, \\boldsymbol{0}, \\boldsymbol{\\eta}_{B_{2}})= (2\\pi)^2\\tilde{w}_{A_{1}}(\\boldsymbol{\\eta}_{A_{1}})\\delta(\\boldsymbol{\\eta}_{A_{2}})\\delta(\\boldsymbol{\\eta}_{B_{2}}),\n\\end{equation}\nwhich holds when $\\boldsymbol{\\eta}_{B_{2}}$ is arbitrarily close to the origin. \n\nWe now go back to the equation \\eqref{eq:normprobAB} for the normalization of probability. 
Similarly to the one-party case, we define two intervals\n$S_{A_{2}} = \left[ -\frac{\epsilon_A}{2},\frac{\epsilon_A}{2} \right]\times \left[ -\frac{\delta_A}{2},\frac{\delta_A}{2} \right] \subset \mathbb{R}^2$ and\n\t$S_{B_{2}} = \left[ -\frac{\epsilon_B}{2},\frac{\epsilon_B}{2} \right]\times \left[ -\frac{\delta_B}{2},\frac{\delta_B}{2} \right] \subset \mathbb{R}^2$, where we can approximate the functions $\tilde{M}^{A}$ and $\tilde{M}^{B}$ with their values at $\boldsymbol{\eta}_{A_{2}} = \boldsymbol{0}$ and $\boldsymbol{\eta}_{B_{2}} = \boldsymbol{0}$, respectively. We can now split the probability condition into four parts, writing the integrals over $A_2$ and $B_2$ as sums of integrals over $S_{A_2}$ and $S_{B_2}$ and over the rest of the integration region, $\bar{S}_{A_{2}}$ and $\bar{S}_{B_{2}}$. Using the CPTP condition for the local operations, we find\n\begin{equation}\n\t1=P_{S_{A_{2}},S_{B_{2}}}+ P_{S_{A_{2}}, \bar{S}_{B_{2}}} + P_{\bar{S}_{A_{2}}, S_{B_{2}}} + P_{\bar{S}_{A_{2}},\bar{S}_{B_{2}}}\n\end{equation}\nwhere\n\begin{align*}\n\t&P_{S_{A_{2}},S_{B_{2}}}= \text{const},\\\n\t&P_{S_{A_{2}}, \bar{S}_{B_{2}}}= k_A \int d \boldsymbol{\eta}_{B_{1}} \tilde{w}_{B_{1}}(\boldsymbol{\eta}_{B_{1}})\int_{\mathbb{R}^2 \setminus S_{B_{2}}}d\boldsymbol{\eta}_{B_{2}}\delta(\boldsymbol{\eta}_{B_{2}}),\\\n\t&P_{\bar{S}_{A_{2}}, S_{B_{2}}}= k_B \int d \boldsymbol{\eta}_{A_{1}} \tilde{w}_{A_{1}}(\boldsymbol{\eta}_{A_{1}})\int_{\mathbb{R}^2 \setminus S_{A_{2}}}d\boldsymbol{\eta}_{A_{2}}\delta(\boldsymbol{\eta}_{A_{2}}),\\\n\t&P_{\bar{S}_{A_{2}},\bar{S}_{B_{2}}}= \left< \tilde{W}\tilde{M}^{A} \tilde{M}^{B}\right>_{\mathbb{R}^2, \mathbb{R}^2, \mathbb{R}^2\setminus S_{A_{2}}, \mathbb{R}^2 \setminus S_{B_{2}}}.\n\end{align*}\nHere, $k_A,\ k_B$ are constants and the notation for the last term is analogous to the one used in the one-party case. 
$P_{S_{A_{2}}, \bar{S}_{B_{2}}}$ and $P_{\bar{S}_{A_{2}}, S_{B_{2}}}$ are identically zero because the delta functions vanish on the respective integration regions.\nSince the integral is equal to the same constant for all local operations, we conclude that the fourth term is zero in the interval considered. For this to be the case, the $W$-function should be zero outside $S_{A_{2}}$ or $S_{B_{2}}$, at least in one of the outputs. Setting $\tilde{W}$ equal to zero in the input would instead lead to the trivial solution $W=0$. By taking the limit when the intervals $S_{A_{2}}, S_{B_{2}}$ reduce to a point, and following an analogous procedure to the one-party case, it is possible to show that the $W$-matrix is a delta function in at least one of the two outputs. Applying the inverse Fourier transform, in the original variables $\boldsymbol{\xi}_i$ the conditions on the $W$ imply that the Wigner equivalent of the process matrix cannot depend on both outputs at the same time, i.e. it is of the form $W(\boldsymbol{\xi}_{A_{1}}, \boldsymbol{\xi}_{A_{2}}, \boldsymbol{\xi}_{B_{1}})$ or $W(\boldsymbol{\xi}_{A_{1}}, \boldsymbol{\xi}_{B_{1}}, \boldsymbol{\xi}_{B_{2}})$. As we have already pointed out in the one-party scenario, this condition is equivalent to having an identity in at least one of the two output Hilbert spaces when $W$ is represented in the space of linear operators on the tensor product of the four Hilbert spaces.\n\nThe results for the infinite-dimensional process matrices show that the bipartite $W$ allows for three different situations. The first case consists of a shared state between $A$ and $B$ with no signaling between the two observers. In the framework of infinite-dimensional W-matrices this is described as $W(\boldsymbol{\xi}_{A_1}, \boldsymbol{\xi}_{B_1})$. The fact that W does not depend on the output variables corresponds to the condition, shown in \cite{oreshkov2012quantum}, $W= \rho_{A_{1} B_{1}} \otimes \mathbb{1}_{A_{2} B_{2}}$. 
The second and third cases describe signaling from one observer to the other. In this case the W-matrix is written as $W(\boldsymbol{\xi}_{A_1}, \boldsymbol{\xi}_{B_1}, \boldsymbol{\xi}_{B_2})$, with correlations at least between $\boldsymbol{\xi}_{A_1}$ and $\boldsymbol{\xi}_{B_2}$, when B signals to A, or as $W(\boldsymbol{\xi}_{A_1}, \boldsymbol{\xi}_{A_2}, \boldsymbol{\xi}_{B_1})$, where at least $\boldsymbol{\xi}_{B_1}$ and $\boldsymbol{\xi}_{A_2}$ are correlated, when A signals to B. These two terms are described respectively as $W_{A_{1} B_{1} B_{2}} \otimes \mathbb{1}_{A_{2}}$ and $W_{A_{1} A_{2} B_{1}} \otimes \mathbb{1}_{B_{2}}$ in the finite-dimensional case.\n\nWe are interested in processes, which we refer to as \emph{causally nonseparable}, where it is not possible to decompose the $W$-matrix as \cite{araujo2015witnessing, oreshkov2015causal}\n\begin{equation} \label{eq:causallyseparable}\n\tW= \lambda W^{A \prec B} + (1-\lambda)W^{B \prec A},\n\end{equation}\nwhere $0 \leq \lambda \leq 1$. If equation \eqref{eq:causallyseparable} holds, the $W$-matrix can always be understood as a classical (convex) mixture of a term which allows signaling from A to B with probability $\lambda$ and a term which allows signaling from B to A with probability $1-\lambda$. The possibility for A and B to share an entangled state with no-signaling correlations is also included in equation \eqref{eq:causallyseparable}.\n\n\section{Quantum switch in infinite dimensions}\n\n\begin{figure}\n\t\centering\n\t\includegraphics[scale=0.35]{switch.pdf}\n\t\caption{A quantum system is prepared in a state $\left| \psi_I \right>$ at time $t_I$ and is sent in a superposition of two paths. Along each path, realized by sending the particle through a fiber (solid and dotted lines in the figure), the particle enters the two laboratories A and B in a fixed order and, after exiting them, is detected by C at time $t_O$. 
In each local laboratory the state undergoes local quantum operations described as measurement and repreparation. The probability of measurement outcomes shows an interference pattern due to the superposition of two causal orders. The interference cannot be reproduced by local operations performed in a fixed causal order.}\n\t\label{fig:switch}\n\end{figure}\nA scheme of the quantum switch is provided in Figure \ref{fig:switch}. The switch involves three local observers, which we denote A, B and C. The observers perform local quantum operations, here chosen to be a measurement followed by a repreparation of a quantum state. Outside the laboratories the system propagates along two ``fibers'' (solid and dotted lines in Figure \ref{fig:switch}), which represent the propagation of the quantum system along an additional spatial degree of freedom. A quantum state $\left|\psi_I \right>$ is prepared at time $t_I$ and sent in a superposition of two paths. In one of the paths the particle enters laboratory A at time $t_1$ and laboratory B at time $t_2 > t_1$; in the second path the order of the operations A and B is reversed. After exiting the laboratories A and B, the system is detected by the observer C at time $t_O$. Note that, in order to preserve the coherence of the process, the measurements should not reveal the time.\n \nThe switch describes a quantum process in which the order of the local operations is in a superposition. In finite dimensions it has been proved that the $W$-matrix which describes the switch is causally nonseparable \cite{araujo2015witnessing}, i.e. it cannot be written as $W=\lambda W^{A \prec B \prec C}+(1-\lambda)W^{B \prec A \prec C}$, where C always comes after A and B and $0 \leq\lambda \leq 1$. 
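The interference at the heart of this argument can be illustrated with a minimal finite-dimensional sketch. The code below is our own simplification: it uses unitary local operations on a single qubit instead of the measure-and-reprepare operations of the switch, with the control measured in the $|\pm\rangle$ basis. The outcome statistics depend on whether the two operations commute, which no fixed ordering reproduces.

```python
import numpy as np

def switch_probs(psi, UA, UB):
    """Probabilities of the control-qubit outcomes |+>, |-> after the switch:
    one branch applies UA then UB, the other UB then UA, coherently."""
    v_plus = (UB @ UA @ psi + UA @ UB @ psi) / 2.0
    v_minus = (UB @ UA @ psi - UA @ UB @ psi) / 2.0
    return np.vdot(v_plus, v_plus).real, np.vdot(v_minus, v_minus).real

psi = np.array([1.0, 0.0], dtype=complex)  # target qubit in |0>
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Noncommuting operations: the two orderings interfere destructively in |+>.
print(switch_probs(psi, X, Z))  # p_plus = 0, p_minus = 1
# Commuting operations: no |-> component, the order is irrelevant.
print(switch_probs(psi, X, X))  # p_plus = 1, p_minus = 0
```

A classical (convex) mixture of the two orders would decohere the control and give $p_\pm = 1/2$ regardless of the operations, so the deviation of $p_\pm$ from $1/2$ witnesses the coherence of the order.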
Here we generalize the switch to infinite dimensions, and provide an alternative proof of its causal nonseparability.\n\nThe $W$-matrix is an operator acting on the tensor product of six Hilbert spaces, $W \in \mathcal{B}(\mathcal{H}_{A_1}\otimes \mathcal{H}_{A_2} \otimes \mathcal{H}_{B_1} \otimes \mathcal{H}_{B_2} \otimes \mathcal{H}_{C_1} \otimes \mathcal{H}_{p})$. The first five spaces are infinite-dimensional and $\mathcal{H}_{p}$ is a two-dimensional Hilbert space spanned by the vectors $\left| 0 \right>$ and $\left| 1 \right>$, which label each of the paths (fibers) taken by the particle (see Figure \ref{fig:switch}). The W-matrix of the switch is pure and can be written as\n\t$W= \left| w \right> \left< w \right|$,\nwhere $\left| w \right>= \int d\bar{r}\, w(\bar{r})\left| \bar{r} \right>$, with $\bar{r}=(r_{A_{1}},\ r_{A_{2}},\ r_{B_{1}}, \ r_{B_{2}}, r_{C_{1}})$. Explicitly,\n\begin{equation} \label{eq:Wfunction}\n\tw(\bar{r})= \frac{1}{\sqrt{2}}\int dr_I \psi_I(r_I) \left( w^{A \prec B \prec C}\left| 0 \right> + w^{B \prec A\prec C} \left| 1 \right>\right).\n\end{equation} \nHere, $\psi_I (r_I)$ is a normalized square-integrable function. The arguments of the functions $w^{A \prec B \prec C}=w^{A \prec B \prec C}(r_I, \bar{r})$ and $w^{B \prec A \prec C}=w^{B \prec A \prec C}(r_I,\bar{r})$, which parametrize the propagation along the fibers, are omitted in \eqref{eq:Wfunction} for simplicity. 
The total state $\left| w \right>$ is a superposition of two terms, described by $w^{A \prec B \prec C}$ and $w^{B \prec A \prec C}$, which can be explicitly written as\n\begin{align}\n\tw^{A \prec B \prec C} = &G_{I1}(r_{A_{1}}-r_{I}) G_{12}(r_{B_{1}}-r_{A_{2}})G_{2O}(r_{C_1}-r_{B_{2}}) \label{eq:wABC}\\\n\tw^{B \prec A \prec C} = &G_{I1}(r_{B_{1}}-r_{I}) G_{12}(r_{A_{1}}-r_{B_{2}})G_{2O}(r_{C_1}-r_{A_{2}}) \label{eq:wBAC}\n\end{align}\nwhere $G_{ab}(r_b-r_a)= \left< r_b \right| e^{-\frac{i}{\hbar}\hat{H}(t_b - t_a)}\left| r_a \right>$ is the Green's function between $r_a$ and $r_b$ and $\hat{H}$ is the Hamiltonian which generates the evolution along the fiber.\n\nConsider now the local operations performed by one of the parties, say A. Suppose that A measures the state in a region $R_i$ of its laboratory. Afterwards, the state is reprepared in $\left| \phi_A \right>$. The Choi-Jamio{\l}kowski equivalent of this local operation in A's laboratory is\n\t$M_i^A = \int_{R_i} dy_A \left| y_A \right> \left< y_A \right| \otimes \left| \phi_A \right>\left< \phi_A \right|$. \nThe intervals $R_i$ satisfy $R_i \cap R_j= \emptyset$ for $i \neq j$ and $\cup_i R_i= V_{A}$, where $V_A$ is the volume of the local laboratory. The same considerations hold for B. The observer C detects the state he receives by projecting it onto the region $R_k$ of the volume of his laboratory $V_C$ and by recombining the two paths via a measurement in the $\left| \pm \right>= (\left| 0 \right>\pm \left| 1 \right>)\/\sqrt{2}$ basis. 
As a consequence, the local operation performed by C is $M_{k \pm}^C= M_k^C \otimes \left| \pm \right>\left< \pm \right|$, where $M_k^C= \int_{R_k} dy_C \left| y_C \right> \left< y_C \right|$ and the output Hilbert space of C is taken to be one-dimensional.\n\nThe probability of the measurement outcomes is then given by\n\t$p_{ijk \pm}= p(\mathcal{M}_i^A, \ \mathcal{M}_j^B, \mathcal{M}_{k \pm}^C)= \left< w\right| (M_i^A \otimes M_j^B \otimes M_{k \pm}^C) \left| w \right>$. For simplicity, we first consider a probability density $\Pi_{ijk \pm}=\Pi_{ijk \pm}(r_I, r'_I)$ such that\n\t$p_{ijk \pm}= \int dr_I dr'_I \psi_I (r_I) \psi^*_I (r'_I) \Pi_{ijk \pm}(r_I, r'_I)$.\nThen we can write\n\begin{equation} \label{eq:densityswitch}\n\t\Pi_{ijk \pm}= \frac{1}{2} \left[ \pi^{A \prec B \prec C}_{ijk \pm} + \pi^{B \prec A \prec C}_{ijk \pm} + 2 \operatorname{Re} \pi^{int}_{ijk \pm}\right],\n\end{equation}\nwhere the individual terms in the sum can be expressed by adopting a vector notation with $\left| w^{A\prec B \prec C} \right> = \int d \bar{r}\, w^{A\prec B \prec C} \left| \bar{r} \right>$ and $\left| w^{B\prec A \prec C} \right> = \int d \bar{r}\, w^{B\prec A \prec C} \left| \bar{r} \right>$,\n\begin{align}\n\t&\pi^{A \prec B \prec C}_{ijk \pm}=\frac{1}{2}\left< w^{A\prec B\prec C} \right| M_i^A \otimes M_j^B \otimes M_{k}^C \left| w^{A\prec B \prec C} \right> \nonumber\\\n\t&\pi^{B \prec A \prec C}_{ijk \pm}=\frac{1}{2}\left< w^{B\prec A \prec C} \right| M_i^A \otimes M_j^B \otimes M_k^C \left| w^{B\prec A \prec C}\right> \nonumber\\\n\t&\pi^{int}_{ijk \pm}=\pm\frac{1}{2}\left< w^{A\prec B \prec C} \right| M_i^A \otimes M_j^B \otimes M_k^C \left| w^{B\prec A \prec C} \right>.\n\end{align}\n\nAssuming $t_1-t_I=t_2-t_1=t_O-t_2=\Delta t$, we can show that $p_{ijk \pm}$ describes a two-way signaling from A to B to C and from B to A to C. 
Specifically, we show that the two terms $\\pi_{ijk \\pm}^{A \\prec B \\prec C}$ and $\\pi_{ijk \\pm}^{B \\prec A \\prec C}$ correspond to a process in which the order of the events is fixed. Instead, $\\pi^{int}_{ijk \\pm}$ is an interference term, due to the superposition of causal orders, describing a two-way signaling between the three observers. In order to show this we can sum over the outputs of the observers and show how the marginals depend on the settings $\\phi_A$ of $M_i^A$ and $\\phi_B$ of $M_j^B$.\n\nWe assume that the states $\\psi_I, \\phi_A$ and $\\phi_B$ are prepared so that the probability of detection in the three local laboratories is almost one. This means that the integration over the volume of any local laboratory (A, B or C) can be extended to an integral over the whole space, since this would amount to adding a negligible term to the sum. Defining $p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= \\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{A \\prec B \\prec C}$ the integral of the first term in equation \\eqref{eq:densityswitch}, we find that $\\sum_{jk \\pm}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(i)$, which means that A does not receive information from B and C. Moreover, since $\\sum_{ij}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(k\\pm | \\phi_B)$, C receives information from B. Finally, the fact that $\\sum_{ik\\pm}p^{ABC}(ijk\\pm | \\phi_A, \\phi_B)= p^{ABC}(j | \\phi_A)$ means that B receives information from A but not from C. Therefore, we conclude that the probability describes a causally ordered process where A signals to B and B signals to C. 
The situation is symmetric under the exchange of A and B if we consider the integral of the second term in equation \\eqref{eq:densityswitch}, $p^{BAC}(ijk\\pm | \\phi_A, \\phi_B)= \\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{B \\prec A \\prec C}$.\n\nA probabilistic mixture of the two terms corresponds to a process with no fixed causal order, but causally separable in the sense previously discussed. In contrast, when the quantum switch is considered an additional interference term appears. The interference corresponds to $\\pi^{int}_{ijk \\pm}$ in equation \\eqref{eq:densityswitch} and it can be shown to be\n\\begin{align}\n\t&\\pi^{int}_{ijk\\pm}= \\pm \\frac{1}{2}\\int_{R_i} dr_{A_1} \\int_{R_j} dr_{B_1} \\int_{R_k} dr_{C_1} \\times \\nonumber\\\\\n\t&\\int dr_{A_2} dr'_{A_2} dr_{B_2} dr'_{B_2} w^{A\\prec B \\prec C}w^{*B\\prec A \\prec C}\\times \\nonumber\\\\\n\t&\\phi_A(r'_{A_2})\\phi^*_A(r_{A_2})\\phi_B(r'_{B_2})\\phi^*_B(r_{B_2}),\n\\end{align}\nwhere $w^{A\\prec B \\prec C}= w^{A\\prec B \\prec C}(r_I, r_{A_1}, r_{A_2}, r_{B_1}, r_{B_2}, r_{C_1})$ and $w^{B\\prec A \\prec C}=w^{B\\prec A \\prec C}(r'_I, r_{A_1}, r'_{A_2}, r_{B_1}, r'_{B_2}, r_{C_1})$ were defined in equations \\eqref{eq:wABC} and \\eqref{eq:wBAC}. To show that there is two-way signaling, we define $p^{int}(ijk \\pm| \\phi_A, \\phi_B)=\\int d r_I d r'_I \\psi_I (r_I) \\psi^*_I (r'_I)\\pi_{ijk\\pm}^{int}$ and sum over the outputs of the three observers. We find that $\\sum_{ij}p^{int}(ijk \\pm| \\phi_A, \\phi_B)= p^{int}(k \\pm | \\phi_A, \\phi_B)$, so both A and B signal to C. Moreover, the two conditions $\\sum_{j}p^{int}(ijk \\pm | \\phi_A, \\phi_B)= p^{int}(ik \\pm | \\phi_A, \\phi_B)$ and $\\sum_{i}p^{int}(ijk \\pm| \\phi_A, \\phi_B)= p^{int}(jk \\pm | \\phi_A, \\phi_B)$ mean respectively that B signals to A and C, and A signals to B and C. Therefore, we conclude that there is two-way signaling. 
Since the W-matrix is pure and the correlations can exhibit signaling in both directions A to B to C and B to A to C, we conclude that the process is causally nonseparable.\n\nTo summarise, in this paper we generalize the process matrix framework to continuous-variable quantum systems. This means that, just as in finite dimensions, it is possible to describe the correlations between the measurement outcomes of two (or more) observers who can receive or send signals in the absence of a global causally-ordered background. The correlations obtained are more general than those allowed by ordinary (causal) quantum mechanics. This generalization is suitable for devising new experiments using continuous-variable quantum systems, such as those considered in Gaussian quantum optics. Moreover, this work constitutes the first step towards the goal of formulating quantum fields on indefinite causal structures. As an example application of this work, we implemented an infinite-dimensional version of the quantum switch exhibiting correlations stemming from quantum superposition of channels. \n\n\n\\begin{acknowledgments}\n\tWe thank Bernhard Baumgartner, Fabio Costa, Adrien Feix and Magdalena Zych for useful discussions. We acknowledge support from the European Commission project RAQUEL (No. 323970); the Austrian Science Fund (FWF) through the Special Research Program Foundations and Applications of Quantum Science (FoQuS), the doctoral programme CoQuS, and Individual Project (No. 24621).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe problem of string matching can be defined as follows. Given a text $T=t_1\nt_2\\cdots t_n$ and a pattern $P=p_1p_2\\cdots p_m$, with letters from an alphabet $\\Sigma$,\nfind all the occurrences of the pattern in the text.\nThis problem can be solved in $O(n+m)$ time by using well known algorithms\n(e.g., KMP \\cite{KMP77}). 
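For concreteness, the classic failure-table formulation of KMP can be sketched in a few lines of Python (a minimal sketch; the function name and 0-based indexing are our choices, not taken from the paper):

```python
def kmp_search(text, pattern):
    """Return all 0-based starting indices where pattern occurs in text.

    Runs in O(n + m) time using the KMP failure (border) table."""
    n, m = len(text), len(pattern)
    if m == 0:
        return list(range(n + 1))
    # fail[j] = length of the longest proper border of pattern[:j+1]
    fail = [0] * m
    k = 0
    for j in range(1, m):
        while k > 0 and pattern[j] != pattern[k]:
            k = fail[k - 1]
        if pattern[j] == pattern[k]:
            k += 1
        fail[j] = k
    out = []
    k = 0
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == m:                  # full match ending at position i
            out.append(i - m + 1)
            k = fail[k - 1]         # keep scanning for overlapping matches
    return out
```

Because the scan index never moves backwards and `k` can only decrease as often as it increased, the total work is linear in $n+m$.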
\n\nA more general formulation allows ``don't care'' or ``wild card''\ncharacters in the text and\/or the pattern. Pattern matching with don't cares can\nbe solved in $O(n \\log |\\Sigma| \\log m)$ as shown in \\cite{FP74}. A more\nrecent result \\cite{CC07} gives a deterministic $O(n \\log m)$ time algorithm.\n\nYet another enhancement is to allow for mismatches. We can\nformulate two versions of this problem: {\\bf 1) pattern matching with mismatches:} \n find the distance between the pattern and the text for every alignment\nbetween the pattern and the text or {\\bf 2) pattern matching with $k$\nmismatches:} find only alignments for which the distance is no\nmore than a given threshold $k$. \n\n\nThe distance metric used can be the Hamming distance, the edit distance or\nother criteria such as the number of non-overlapping inversions (e.g.\n\\cite{CC+13}). In this paper we focus on the Hamming distance.\nThe Hamming distance between two strings $A$ and $B$ is defined as the number\nof positions where the two strings differ and is denoted by $Hd(A,B)$. \n\nPattern matching with mismatches can be solved, naively, by computing the\nHamming distance for every alignment of the pattern in the text, in time $O(nm)$. However, the fastest known exact algorithm is\nAbrahamson's algorithm \\cite{ABR87} that runs in $O(n \\sqrt{m \\log m})$\ntime.\n\nPattern matching with $k$ mismatches can be solved in $O(nk)$\ntime (see \\cite{LV85} and \\cite{GG86}). These algorithms are based on a\ntechnique called the Kangaroo method (see section \\ref{sec_kangaroo}). This\nmethod computes the Hamming distance for every alignment in $O(k)$ time by\n``jumping'' from one error to the next. A faster algorithm for pattern\nmatching with $k$ mismatches runs in $O(n\\sqrt{k \\log k})$ \\cite{ALP04}. 
A\nsimpler version of this algorithm was given in \\cite{NR15}.\n\nRecent work has also addressed the online\nversion of pattern matching, where the text is received in a\nstreaming model, one character at a time, and it cannot be stored in its\nentirety (see e.g., \\cite{CKP08}, \\cite{PP09}, \\cite{PL07}).\nAnother version of this problem matches the pattern against multiple input\nstreams (see e.g., \\cite{CEP+07}). Yet another interesting problem is to sample a\nrepresentative set of mismatches for every alignment (see e.g., \\cite{CEP+12}).\nA survey of string matching with mismatches is given in \\cite{Nav01}.\nA description of practical on-line string searching algorithms can be\nfound in \\cite{NR02}.\n\nYet another formulation allows for don't care or wild card characters.\nPattern matching with mismatches and don't cares can be solved in $O(n \\sqrt{g \\log m})$ time, where $g$ is the number of non-wild card\npositions in the pattern (see \\cite{NR15}). This is done by a simple extension of Abrahamson's algorithm.\n\nPattern matching with $k$ mismatches and don't cares can be solved in time\n$O(nk^2\\log^2m)$ as shown in \\cite{Clif10}. The runtime can be improved to \n $O(nk\\;\\text{polylog}\\,m)$ as shown in \\cite{Clif10, CEP+09}.\nIf we allow don't cares only in the pattern, the problem can be solved in\n$O(n\\sqrt[3]{mk\\log^2m})$ time as shown in \\cite{CP07}. \nThis is also the problem we discuss in this paper.\n\n{\\bf Notation:} Let $T_i$ denote $t_i t_{i+1}\\ldots t_{i+m-1}$ for all\n$i=1..n-m+1$.\n \n{\\bf Pattern matching with $k$ mismatches and don't cares in the pattern:}\nGiven a text $T=t_1t_2\\ldots t_n$ and a pattern $P=p_1p_2\\ldots p_m$ from an\nalphabet $\\Sigma$, with $|\\Sigma|\\leq n$, and an integer $k$. 
Output all $i$, $1\\leq\ni\\leq n-m+1$, for which $Hd(P, T_i) \\leq k$.\nThe pattern may contain don't care characters, which match any character.\n\nGiven a pattern $P$, with don't cares, a maximal length substring of $P$ that\nhas no don't cares is called an ``{\\bf island}''. We will denote the number of\nislands in $P$ as $q$.\nIn this paper we give two algorithms for pattern matching with $k$ mismatches\nwhere there are don't cares in the pattern. The first one runs in\n$O(n\\sqrt{(q+k)\\log m})$ time. The second one runs in time $O(n\\sqrt[3]{qk\\log^2 m} +\nn\\sqrt{k\\log m})$, where $q$ is the number of islands in $P$. By combining the\ntwo, we show that pattern matching with $k$ mismatches and don't cares in the\npattern can be solved in $O(n\\sqrt{k\\log m}+n\\min\\{\\sqrt[3]{qk\\log^2 m},\\sqrt{q\\log m}\\})$ time.\nIf the number of islands is $O(k)$ our runtime becomes $O(n\\sqrt{k \\log m})$,\nwhich essentially matches the best known runtime for pattern matching with $k$\nmismatches without don't cares ($O(n\\sqrt{k\\log k})$). Since $q$ is always less\nthan $m$, our algorithm outperforms the $O(n\\sqrt[3]{mk\\log^2m})$ algorithm of\n\\cite{CP07}.\nFor $q=O(k^2)$, our algorithm outperforms the best known $O(nk\n\\;\\text{polylog}\\; m)$ algorithms of \\cite{Clif10, CEP+09}.\n\n\n\\section{Methods}\n\nBoth algorithms in this paper have the same basic structure (see section\n\\ref{sec_basic}).\nThe difference is in how fast we can answer the single alignment verification question:\n\n\\begin{question}\nGiven $i$, is the Hamming distance between $P$ and $T_i$ no more than\n$k$?\n\\end{question}\n\nIn the first algorithm (section \\ref{sec_alg1}), we can answer this question in\n$O(q+k)$ time. 
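Under the definition above, the islands of $P$ (and hence $q$) can be extracted in a single pass; a small Python sketch, using `'?'` as the don't care symbol (our encoding choice):

```python
def islands(pattern, wildcard="?"):
    """Return (start, end) half-open index pairs of the maximal
    wildcard-free runs of `pattern`; len(result) is the quantity q."""
    out, start = [], None
    for i, c in enumerate(pattern):
        if c != wildcard and start is None:
            start = i                      # an island begins here
        elif c == wildcard and start is not None:
            out.append((start, i))         # island ends before this wildcard
            start = None
    if start is not None:                  # pattern ends inside an island
        out.append((start, len(pattern)))
    return out
```

For example, `islands("ab??c?dd")` yields three islands, so $q=3$ for that pattern.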
In the second algorithm (section \\ref{sec_alg2}), we can answer\nthis question in $O(\\sqrt[3]{k^2q^2\\log m} + k)$ time.\n\n\\subsection{Background}\n\nWe start by reviewing a number of well known techniques\nused in the literature for pattern matching with $k$ mismatches (e.g., see\n\\cite{ALP04}), namely:\nconvolution, marking, filtering and the Kangaroo method.\n\n\\subsubsection{Convolution}\nGiven two arrays $T=t_1t_2\\ldots t_n$ and $P=p_1p_2\\ldots\np_m$ (with $m\\leq n$), the convolution of $T$ and $P$ is a sequence\n$C=c_1,c_2,\\ldots,c_{n-m+1}$ where $c_i=\\sum_{j=1}^mt_{i+j-1}p_j$, for $1\\leq\ni\\leq (n-m+1)$. \n\n \n\nConvolution can be applied to pattern matching with mismatches, as follows.\nGiven a string $S$ and a character $\\alpha$ define string $S^{\\alpha}$\nas $S^\\alpha[i]=1$ if $S[i]=\\alpha$ and $0$ otherwise.\nLet $C^\\alpha=convolution(T^\\alpha, P^\\alpha)$. Then $C^\\alpha[i]$ gives the\nnumber of matches between $P$ and $T_i$\nwhere the matching character is $\\alpha$. Therefore, one convolution gives us\nthe number of matches contributed by a single character to each of the\nalignments. Then $\\sum_{\\alpha \\in \\Sigma}C^{\\alpha}[i]$ is the total number of\nmatches between $P$ and $T_i$.\n\nOne convolution can be computed in\n$O(n\\log m)$ time by using the Fast Fourier Transform. \nIf the convolutions are applied to binary inputs, as is often the case in\npattern matching applications, some speedup techniques are presented in \\cite{FG09}.\n\n\\subsubsection{Marking}\\label{sec_marking} \n\n\nMarking is an algorithm that counts the number of matches of\nevery alignment, as follows.\nThe algorithm scans the text one character at a time\nand ``marks'' all the alignments that would produce a match between the current\ncharacter in the text and the corresponding character in the pattern. \nThe marking algorithm is generally used only on a subset of the pattern. 
That\nis, given a set $A$ of positions in $P$ the marking algorithm counts matches\nbetween the text and the subset of $P$ given by $A$. The pseudocode of the\nmarking algorithm is given in Algorithm \\ref{alg_counting}.\n\n\\begin{algorithm}\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\caption{Mark$(T, P, A)$}\\label{alg_counting} \n\\Input{Text $T$, pattern $P$ and a set $A$ of positions in $P$} \n\\Output{An array $M$ where $M[i]$ gives the number of matches between $T_i$\nand $P$, on the subset of positions of $P$ given by $A$}\n\\lFor {$i\\leftarrow 1$ \\KwTo $n$}{$M[i]=0$}\n\\For {$i\\leftarrow 1$ \\KwTo $n$}{\n \\For{$j \\in A$ s.t. $P[j] = T[i]$}{\n \\lIf{$i-j+1 > 0$}{\n $M[i-j+1]${\\bf $++$}\n }\n }\n}\n\\Return $M$\\;\n\\end{algorithm}\n\n\\subsubsection{Filtering}\n\nFiltering is a method for reducing the number of alignments to look\nat. Filtering is based on the following principle.\nIf we restrict our pattern to only $2k$ positions, any alignment that has no more than\n$k$ mismatches, must have at least $k$ matches among the $2k$ positions.\nTo count matches among the $2k$ positions selected, for every alignment, we use\nthe marking algorithm. If the total number of marks generated is $B$ then there can\nbe no more than $B\/k$ positions that have at least $k$ marks. Therefore, instead\nof $n-m+1$ alignments we only have to look at $B\/k$ alignments. Each alignment\nis then verified using other methods.\n\n\\subsubsection{The Kangaroo method}\\label{sec_kangaroo} \n\nThe Kangaroo method allows us to check if the number of mismatches for a\nparticular alignment is no more than $k$, in $O(k)$ time. The Kangaroo method\nconstructs a generalized suffix tree of $T+P$, where $+$ means concatenation.\nThis suffix tree can be enhanced to answer Lowest Common Ancestor\n(LCA) queries in $O(1)$ time \\cite{AH+76}. 
LCA queries give us the longest\ncommon prefix between any portion of the text and any portion of the pattern,\nessentially telling us where the first mismatch appears. \nSpecifically, to count mismatches between\n$P$ and $T_i$, first perform an LCA query to find the position of the\nfirst mismatch between $P$ and $T_i$. Let this position be $j$. Then,\nperform another LCA to find the first mismatch between $P_{j+1..m}$ and\n$T_{i+j+1.. i+m-1}$, which gives the second mismatch of alignment $i$.\nContinue to ``jump'' from one mismatch to the next, until the end\nof the pattern is reached or we have found more than $k$ mismatches.\nTherefore, after $O(k)$ LCA queries we will either find all the mismatches or\ndetermine that there are more than $k$ of them. \nThe Kangaroo pseudocode is given in Algorithm \\ref{alg_kangaroo}.\n\n\\begin{algorithm}\n\\SetKw{LCA}{LCA}\n\\SetKw{True}{true}\n\\SetKw{False}{false}\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\caption{Kangaroo$(P, T_i, k)$}\\label{alg_kangaroo}\n\\Input{A pattern $P$, an alignment $T_i$ and an integer $k$}\n\\Output{\\True if the pattern matches the alignment with no more than $k$\nmismatches, \\False otherwise}\n$j=0$\\;\n$d=0$\\;\n\\While{$d \\leq k$} {\n $j = j + \\LCA(T_{i+j}, P_{j+1})+1$\\;\n \\If {$j > m$} {\n \\Return{\\True}\\;\n }\n $d=d+1;$\n} \n\\Return{\\False}\\;\n\\end{algorithm}\n\n\\subsection{General Algorithm}\\label{sec_basic}\n\nWe are now ready to present the main algorithms given in this paper. 
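As a concrete illustration of this verification primitive, the loop of Algorithm \ref{alg_kangaroo} can be sketched in Python; here the $O(1)$ suffix-tree LCA query is replaced by a naive character scan (our simplification, so this sketch does not achieve the $O(k)$ bound, but it follows the same control flow):

```python
def lce(text, i, pattern, j):
    """Naive longest common extension of text[i:] and pattern[j:].
    A suffix tree with O(1) LCA queries answers this in constant time."""
    k = 0
    while (j + k < len(pattern) and i + k < len(text)
           and text[i + k] == pattern[j + k]):
        k += 1
    return k

def kangaroo(text, i, pattern, k):
    """True iff Hd(pattern, text[i:i+m]) <= k, jumping mismatch to mismatch."""
    j, d = 0, 0
    while d <= k:
        j += lce(text, i + j, pattern, j) + 1   # jump past the next mismatch
        if j > len(pattern):
            return True     # ran off the pattern: at most d <= k mismatches
        d += 1
    return False
```

With the suffix-tree LCA in place of `lce`, each call costs $O(k)$, as stated above.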
The\ngeneral structure of both the algorithms is given in Algorithm \\ref{alg_basic}.\n\n\n\\begin{algorithm}\n\\caption{$K$-Mismatches with Wild Cards}\\label{alg_basic}\nLet $F_a$ be the number of occurrences of character $a$ in $T$ for all $a \\in\n\\Sigma$\\; \nLet $Cost(A)=\\Sigma_{i \\in A}F_{P[i]}$\\;\nLet $A$ be a set of positions in $P$ such that $|A|\\leq 2k$\nand $Cost(A) \\leq B$\\; \n$M=Mark(T, P, A)$\\;\n\\eIf{$|A| == 2k$}{\n $R=\\{\\}$\\;\n \\For{$i=1$ to $n$} {\n \\If {$M_i \\geq k$ {\\bf and} $DistNoMoreThanK(T_i, P, k)$} {\n $R = R \\cup \\{i\\}$\\;\n }\n }\n}{\n \n \\For{$a \\in \\Sigma$ s.t. $a \\neq P[i], \\forall i \\in A$} {\n $M'=Convolution(T,P,a)$\\;\n $M \\text{+=} M'$\\;\n }\n $R = \\{i \\in [1..n] | M_i \\geq m - k\\}$\\; \n}\n\\Return{$R$}\\;\n\\end{algorithm}\n\n{\\bf Algorithm and analysis:} \nFor each position $i$ in $P$ such that $P[i]=a$, we assign a cost $F_a$ where\n$F_a$ is the number of occurrences of $a$ in $T$. The algorithm starts by\nchoosing up to $2k$ positions from the pattern such that the total cost does not exceed a ``budget''\n$B$. The positions are chosen by a simple greedy strategy: sort all the\ncharacters by their cost $F_a$. Start choosing positions equal to the ``cheapest''\ncharacter, then choose positions equal to the next cheapest character, and\nso on until we have chosen $2k$ positions or we have exceeded the budget $B$.\n\n{\\bf Case 1:} If we can find $2k$ positions that cost no more than $B$, then\nwe call the marking algorithm with those $2k$ positions. Any\nposition in $T$ that receives less than $k$ marks, has more than $k$ mismatches,\nso we now focus on positions in $T$ that have at least $k$ marks.\nIf the total number of marks is $B$, then there will be no more than\n$B\/k$ positions that have at least $k$ marks. We verify each of these\npositions to see if they have more than $k$ mismatches. 
Let the time for a\nsingle verification be $O(V)$.\nThen, the runtime is $O(BV\/k)$.\n\n{\\bf Case 2:} If we cannot find $2k$ positions that cost no more than $B$,\nthen we compute marking for the positions that we did choose before we ran out\nof budget.\nThen, for each of the characters that we did not choose, we compute one\nconvolution to count how many matches they contribute to each alignment. It\nis easy to see that each of the characters not chosen for marking must have $F_a\n> B\/(2k)$.\nTherefore, the total number of such characters is no more than $n\/(B\/(2k))$. Therefore, the runtime of the convolution stage is $O(nk\/B * n \\log m)$. The runtime of the marking\nstage is $O(B)$, therefore the total runtime is $O(B + nk\/B * n \\log m)$.\n\nIf we make the runtime of the two cases equal, we can find the optimal value of\n$B$.\n\n\\begin{align*}\nBV\/k = B+n^2k\/B \\log m \\Rightarrow B=nk\\sqrt{\\frac{\\log m}{V}}\n\\end{align*}\n\nThis gives an asymptotic runtime of $O(BV\/k)=O(n\\sqrt{V \\log m})$. Therefore,\nthe runtime of the algorithm depends on $V$, which is the time it takes to\nverify whether a given single alignment has no more than $k$ mismatches.\n \n\\subsection{Single alignment distance in $O(q+k)$ time}\n\\label{sec_alg1}\n\nWe can answer the single alignment question \nin $O(q+k)$ time where $q$ is the number of {\\it islands} in the pattern as\nshown in Algorithm \\ref{alg_verif1}.\nThe algorithm uses Kangaroo jumps \\cite{LV85} to go to the next mismatch within\nan island in $O(1)$ time. If there is no mismatch left in the island, the algorithm goes\nto the next island also in $O(1)$ time.\nTherefore, the runtime is $O(q+k)$. With $V=O(q+k)$, Algorithm \\ref{alg_basic} does pattern matching\nwith $k$ mismatches in $O(n\\sqrt{(q+k)\\log m})$ time.\n\n\n\\begin{algorithm}\n\\label{alg_verif1}\n\\caption{$DistNoMoreThanK\\_V1(T_i, P, k)$}\n$d=0$\\;\n$j=1$\\;\n\\While{$d \\leq k$ {\\bf and} $j \\leq q$}{\n $r =$ no. 
of mismatches between island $j$ and\n corresponding region of $T_i$ (use Kangaroo jumps)\\; \n $d \\text{+=} r$\\; \n $j \\text{+=} 1$\\; \n}\n\\Return{$d \\leq k$}\n\\end{algorithm}\n\n\\subsection{Single alignment distance in $O(k^{2\/3}q^{2\/3}\\log^{1\/3}m+k)$ time}\n\\label{sec_alg2}\n\nThis idea is based on splitting the pattern into sections. We know that no more\nthan $k$ sections can have mismatches. The remaining sections have to match\nexactly. Consider exact pattern matching with don't cares.\nWe can check where a pattern matches the text exactly by using a constant number of convolutions. This is\ntrue because we can compute the values $C_i = \\Sigma_{j=0}^{m-1}(T_{i+j}-P_j)^2T_{i+j}P_j$ using a constant\nnumber of convolutions (see \\cite{CC07}). If $C_i=0$ then the pattern matches\nthe text at position $i$. \n\nUsing this result, we will split the pattern into $S$\nsections. In each section we include $q\/S$ islands. For each of the $S$\nsections, we use a constant number of convolutions to check where the section\nmatches the text. If $P$ has no more than $k$ mismatches at a particular\nalignment, then at least $S-k$ sections have to match exactly. Each of the at\nmost $k$ sections that do not match exactly are verified using Kangaroo jumps as seen\nearlier. One section takes at most $O(q\/S+k')$ time, where $k'$ is the number\nof mismatches discovered in that section. Over all the sections, the $k'$\nterms add up to no more than $k$, therefore the entire alignment can be verified\nin time $O(S+k+kq\/S)$.\n\n\nIf we make $V=O(S + k + kq\/S)$ in Algorithm \\ref{alg_basic}, then its runtime\nbecomes $O(n \\sqrt{V\\log m}) = O(n \\sqrt{(S + k + kq\/S)\\log m})$. \nThe preprocessing time for the $S$ sections is $O(Sn \\log m)$. 
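The exact-match test that underpins this sectioning (from \cite{CC07}) can be made concrete: $C_i=\sum_j(t_{i+j}-p_j)^2t_{i+j}p_j$ expands into three correlations, $\sum t^3p-2\sum t^2p^2+\sum tp^3$. Below is a minimal Python sketch over integer-encoded strings, with $0$ encoding a don't care (our convention); the naive `corr` here stands in for an FFT-based convolution, which is what yields the $O(n\log m)$ cost per section:

```python
def corr(a, b):
    """Direct cross-correlation over valid alignments (FFT in the real thing)."""
    m = len(b)
    return [sum(x * y for x, y in zip(a[i:i + m], b))
            for i in range(len(a) - m + 1)]

def wildcard_exact_matches(text, pattern):
    """Alignments i with C_i = sum_j (t_{i+j}-p_j)^2 t_{i+j} p_j == 0,
    i.e. exact matches of `pattern` in `text` when 0 encodes a don't care."""
    t3p  = corr([x ** 3 for x in text], list(pattern))
    t2p2 = corr([x ** 2 for x in text], [y ** 2 for y in pattern])
    tp3  = corr(list(text), [y ** 3 for y in pattern])
    c = [u - 2 * v + w for u, v, w in zip(t3p, t2p2, tp3)]
    return [i for i, v in enumerate(c) if v == 0]
```

Each term of $C_i$ is non-negative and vanishes exactly when $t_{i+j}=p_j$ or one of the two symbols is a don't care, which is why $C_i=0$ characterizes an exact match.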
The\noptimal value of $S$ is such that the preprocessing equals the main runtime:\n\n\\begin{align*}\n & n \\sqrt{(S + k + kq\/S)\\log m} = Sn \\log m \\\\\n\\Rightarrow & S + k + kq\/S = S^2 \\log m\\\\\n\\Rightarrow & S^2\/\\log m + kS\/\\log m + kq\/\\log m = S^3\\\\\n\\Rightarrow & S \\approx O(\\sqrt[3]{kq\/\\log m})\n\\end{align*}\n\nThis makes $V=O(S + k + kq\/S)=\nO(k +\\sqrt[3]{k^2q^2\\log m})$. This gives a\nruntime for pattern matching with $k$ mismatches of:\n\n\\begin{align*}\nO(nS\\log m + n \\sqrt{V\\log m}) = & O\\left(n\\sqrt[3]{kq\\log^2\nm} + n\\sqrt{ (k + \\sqrt[3]{k^2q^2\\log m}) \\log m }\\right)\\\\\n= & O\\left( n \\sqrt[3]{kq\\log^2 m} + n\\sqrt{k \\log m} \\right)\\\\\n\\end{align*}\n\n\\subsection{Combined result}\nIf $q < k^2$ then we can use the algorithm of section\n\\ref{sec_alg1}, which runs in $O(n\\sqrt{(q+k)\\log m})$ time. Otherwise, if\n$q > k^2$, we use the algorithm of section \\ref{sec_alg2}, which runs in\n$O(n\\sqrt[3]{qk\\log^2 m} + n\\sqrt{k \\log m})$ time.\nThus we have the following:\n\n\\begin{theorem}\nPattern matching with $k$ mismatches, with don't care\nsymbols in the pattern, can be solved in\n$O\\left(n\\sqrt{k \\log m} + n\\min\\{\\sqrt{q\\log m}, \\sqrt[3]{qk\\log^2\nm}\\}\\right)$ time.\n\\end{theorem}\n\n\\section{Conclusions}\nIn this paper we have offered efficient algorithms for the problem of pattern matching with $k$ mismatches. Specifically,\nwe have presented an algorithm that runs in\n$O(n\\sqrt{k\\log m}+n\\min\\{\\sqrt[3]{qk\\log^2 m},\\sqrt{q\\log m}\\})$ time, where $q$ is the number of islands. 
If the number of islands $q$ is $o(m)$, this algorithm is asymptotically\nfaster than the previous best algorithm for pattern matching with $k$ mismatches\nwith don't cares in the pattern.\n\n\\section{Acknowledgments}\nThis work has been supported in part by the following grants: NSF 1447711 and NIH R01-LM010101.\n\n\n\n\\section*{Bibliography}\n\\bibliographystyle{elsarticle-num} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzajvh b/data_all_eng_slimpj/shuffled/split2/finalzzajvh new file mode 100644 index 0000000000000000000000000000000000000000..bf655dab8293881e5aff0dd2e48d28654f4e8736 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzajvh @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{intro}\n\nNoncommutative Quantum Mechanics is an extensively studied subject \\cite{Gamboa:2001,Nair:2001,Duval:2001,Pei-Ming:2002,Horvathy:2002,Zhang:2004,Acatrinei:2005,Bastos:2006ps,2010CMaPh.299..709B,Bertolami:2005jw,Bernardini} and its interest arises for many reasons, particularly from the fact that noncommutativity is present in string theory, quantum gravity, and black hole models (see e.g. \\cite{SW,Connes,BastosQCBH}). NCQM can be viewed as the low-energy, finite-particle-number limit of noncommutative field theories and its main difference from standard quantum mechanics is the inclusion of an additional set of commutation relations for position and momentum operators. 
The Heisenberg-Weyl algebra for these operators,\n\n\\begin{equation} \\label{heisenberg}\n\\left[\\hat{x}_i,\\hat{x}_j\\right]=0, \\hspace{40pt}\n\\left[\\hat{p}_i,\\hat{p}_j\\right]=0, \\hspace{40pt}\n\\left[\\hat{x}_i,\\hat{p}_j\\right]=\\mathrm{i}\\hbar\\delta_{ij}\n\\end{equation}\nis deformed to the NC algebra:\n\n\\begin{equation} \\label{NC algebra}\n\\left[\\hat{q}_i,\\hat{q}_j\\right]=\\mathrm{i}\\theta_{ij}, \\hspace{40pt}\n\\left[\\hat{\\pi}_i,\\hat{\\pi}_j\\right]=\\mathrm{i}\\eta_{ij}, \\hspace{40pt}\n\\left[\\hat{q}_i,\\hat{\\pi}_j\\right]=\\mathrm{i}\\hbar\\delta_{ij},\n\\end{equation}\nwhere $\\theta_{ij}$ and $\\eta_{ij}$ are anti-symmetric real matrices. The two sets of variables, $\\{\\hat{x}_i,\\hat{p}_i\\}$ and $\\{\\hat{q}_i,\\hat{\\pi}_i\\}$, are related by a non-canonical linear transformation usually referred to as the Darboux transformation, also known as the Seiberg-Witten (SW) map. It is known that, although this map is not unique, all physical observables are independent of the chosen map \\cite{Bastos:2006ps,2010CMaPh.299..709B}. Moreover, since the NC operators are defined in the same Hilbert space as the commutative ones, one can obtain a representation of them, up to some order of the noncommutative parameters, without the need for the Darboux transformation. However, in most cases, it is simpler to use this transformation in order to recover some known aspects of quantum mechanics. \n\nBesides the well-known operator formulation of quantum mechanics, a phase-space formulation of NCQM has been constructed \\cite{Bastos:2006ps,2010CMaPh.299..709B} which allows for a straightforward implementation of noncommutativity. This formulation is useful for treating general problems such as, for instance, cases where the potential is not specified. 
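As a quick consistency check, inserting the Darboux map used below, Eq. (\ref{Darboux}), into the Heisenberg-Weyl algebra (\ref{heisenberg}) reproduces the deformed algebra (\ref{NC algebra}) at first order in the noncommutative parameters (the last commutator picks up an $\mathcal{O}(\theta\eta)$ correction):

```latex
\begin{align*}
\left[\hat{q}_i,\hat{q}_j\right] &= -\frac{\theta_{jl}}{2\hbar}\left[\hat{x}_i,\hat{p}_l\right]
  -\frac{\theta_{ik}}{2\hbar}\left[\hat{p}_k,\hat{x}_j\right]
  = -\frac{\mathrm{i}}{2}\theta_{ji}+\frac{\mathrm{i}}{2}\theta_{ij}
  = \mathrm{i}\theta_{ij},\\
\left[\hat{\pi}_i,\hat{\pi}_j\right] &= \frac{\eta_{jl}}{2\hbar}\left[\hat{p}_i,\hat{x}_l\right]
  +\frac{\eta_{ik}}{2\hbar}\left[\hat{x}_k,\hat{p}_j\right]
  = -\frac{\mathrm{i}}{2}\eta_{ji}+\frac{\mathrm{i}}{2}\eta_{ij}
  = \mathrm{i}\eta_{ij},\\
\left[\hat{q}_i,\hat{\pi}_j\right] &= \left[\hat{x}_i,\hat{p}_j\right]
  -\frac{\theta_{ik}\eta_{jl}}{4\hbar^2}\left[\hat{p}_k,\hat{x}_l\right]
  = \mathrm{i}\hbar\delta_{ij}+\frac{\mathrm{i}}{4\hbar}\theta_{ik}\eta_{jk},
\end{align*}
```

where the antisymmetry of $\theta_{ij}$ and $\eta_{ij}$ was used in the first two lines.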
In this case, the position noncommutativity may be treated by a change in the product of functions to the Moyal $\\star$-product, defined as:\n\n\\begin{equation}\nA(x)\\star_{\\theta}B(x):=A(x)\\mathrm{e}^{(\\mathrm{i}\/2)(\\overleftarrow{\\partial_{x_i}})\\theta_{ij}(\\overrightarrow{\\partial_{x_j}})}B(x),\n\\end{equation}\nand the momentum noncommutativity is introduced via a Darboux transformation. In the case of simple potentials, the use of the Darboux transformation ensures on its own, up to some order of the noncommutative parameter, a suitable noncommutative formulation.\n\nThroughout the following sections, whenever needed, the Darboux transformation to be used is as follows \\cite{Bastos:2006ps}:\n\n\\begin{equation} \\label{Darboux}\n\\hat{q}_i=\\hat{x}_i-\\frac{\\theta_{ij}}{2\\hbar}\\hat{p}_j, \\qquad \\hat{\\pi}_i=\\hat{p}_i+\\frac{\\eta_{ij}}{2\\hbar}\\hat{x}_j.\n\\end{equation}\n\n\n\n\n\n\n\\section{Gauge Invariance}\n\nIn order to study the effects of NCQM we shall consider some physical systems of interest and investigate the implications of the NC deformation. The first example to consider is that of a particle with mass $m$ and charge $q$ in a magnetic field, with the Hamiltonian given by\n\\begin{equation}\n\\hat{H}= \\frac{1}{2m}\\left[\\boldsymbol{\\hat{\\pi}}-q\\boldsymbol{A}(\\boldsymbol{q})\\right]^2.\n\\end{equation}\n\nIn order to study this system we use the Moyal $\\star$-product for the product of terms and then use the Darboux transformation, Eq. (\\ref{Darboux}), to write the noncommuting Hamiltonian in terms of the commuting variables, $\\hat{x}$ and $\\hat{p}$. 
Thus, considering,\n\\begin{equation}\n\\hat{H}(\\hat{q},\\hat{\\pi})\\Psi(q)=\\hat{H}(\\hat{x},\\hat{\\pi})\\star_\\theta\\Psi(x)=\\hat{H}(\\hat{x},\\hat{\\pi})\\mathrm{e}^{(\\mathrm{i}\/2)(\\overleftarrow{\\partial_{x_i}})\\theta_{ij}(\\overrightarrow{\\partial_{x_j}})}\\Psi(x),\n\\end{equation}\nat first order in the parameter $\\theta$,\n\\begin{gather}\n\\left[\\hat{H}(\\hat{x},\\hat{\\pi})+\\frac{\\mathrm{i}\\theta_{ab}}{2}\\partial_a\\hat{H}(\\hat{x},\\hat{\\pi})\\partial_b\\right]\\Psi(x)= \\nonumber\\\\\n=\\left[\\frac{1}{2m}\\left(\\boldsymbol{\\hat{\\pi}}^2-2q\\boldsymbol{\\hat{\\pi}}\\cdot\\boldsymbol{A}(\\boldsymbol{q})+q^2A^2(q)\\right)+\\frac{\\mathrm{i}\\theta_{ab}}{2}\\partial_a\\left(q^2A^2(x)-2q\\boldsymbol{A}(x)\\cdot\\hat{\\pi}\\right)\\partial_b\\right]\\Psi(x)\n\\end{gather}\n\n\nIf we now consider that $\\theta_{ab}=\\theta\\epsilon_{ab}$, where $\\epsilon_{ab}$ is the 2-dimentional antisymmetric symbol, the effective noncommutative Hamiltonian, at first order in $\\theta$, becomes:\n\\begin{equation}\n\\hat{H}=\\frac{1}{2m}\\left(\\boldsymbol{\\hat{\\pi}}^2-2q\\boldsymbol{\\hat{\\pi}}\\cdot\\boldsymbol{A}(\\boldsymbol{q})+q^2A^2(q)\\right)+\\frac{\\mathrm{i}}{4m}\\left[\\nabla\\left(q^2A^2(x)-2q\\boldsymbol{A}(x)\\cdot\\hat{\\pi}\\right)\\times\\nabla\\right]\\cdot\\boldsymbol{\\theta}\n\\end{equation}\nwhere $\\boldsymbol{\\theta}=\\theta(1,-1,1)$. We now make use of the Darboux transformation, Eq. 
(\\ref{Darboux}), in the momentum operator (which is now the only noncommutative operator in the Hamiltonian) to obtain:\n\\begin{multline} \\label{eq ncemg}\n\\hat{H}=\\frac{1}{2m}\\left[\\left(\\hat{\\boldsymbol{p}}-q\\boldsymbol{A}(\\boldsymbol{x})\\right)^2-\\frac{1}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\hat{\\boldsymbol{p}})\\cdot\\boldsymbol{\\eta}-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{A}(\\boldsymbol{x}))\\cdot\\boldsymbol{\\eta}+\\frac{1}{4\\hbar^2}\\eta^2\\epsilon_{ij}\\epsilon_{ik}\\hat{x}_j\\hat{x}_k\\right] \\\\\n-\\frac{1}{4m\\hbar}\\left[\\nabla\\left(q^2A^2(\\boldsymbol{x})-2q\\boldsymbol{A}(\\boldsymbol{x})\\cdot\\hat{\\boldsymbol{p}}-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{A}(\\boldsymbol{x}))\\cdot\\boldsymbol{\\eta}\\right)\\times\\hat{\\boldsymbol{p}}\\right]\\cdot\\boldsymbol{\\theta},\n\\end{multline}\nwhere, as in the case of $\\theta$, $\\boldsymbol{\\eta}=\\eta(1,-1,1)$. We aim now to see how a gauge transformation modifies the Hamiltonian and study the condition under which the Hamiltonian is gauge invariant. Gauge invariance must be imposed, otherwise a gauge change would lead to a modification of the system energy for the same physical configuration. For this purpose, we consider a gauge transformation to the vector potential $\\boldsymbol{A}\\rightarrow\\boldsymbol{A}'=\\boldsymbol{A}+\\boldsymbol{\\nabla}\\alpha$, where $\\alpha$ is a scalar function of position. Consider now the first set of terms in the Hamiltonian, Eq. (\\ref{eq ncemg}). Under the stated transformation, we get:\n\\begin{equation}\n\\begin{split}\n\\frac{1}{2m}\\left[\\left(\\hat{\\boldsymbol{p}}-q\\boldsymbol{A}(\\boldsymbol{x})-q\\boldsymbol{\\nabla}\\alpha\\right)^2-\\frac{1}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\hat{\\boldsymbol{p}})\\cdot\\boldsymbol{\\eta}- \\right. \\\\\n\\left. 
-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{A}(\\boldsymbol{x}))\\cdot\\boldsymbol{\\eta}-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{\\nabla}\\alpha)\\cdot\\boldsymbol{\\eta}+\\frac{1}{4\\hbar^2}\\eta^2\\epsilon_{ij}\\epsilon_{ik}\\hat{x}_j\\hat{x}_k\\right].\n\\end{split}\n\\end{equation}\n\nIf we now change the wave function on which the Hamiltonian acts, to $\\Psi=\\mathrm{e}^{\\mathrm{i}q\\alpha\/\\hbar}\\Psi'$, the first set of extra terms in Eq. (\\ref{eq ncemg}) coming from the gauge transformation will be cancelled and so we may conclude that this set of terms is not problematic. However, this is not true for the second set of terms, which is transformed to\n\n\\begin{equation}\n\\left[\\nabla\\left(q^2(A(\\boldsymbol{x})+\\boldsymbol{\\nabla}\\alpha)^2-2q\\boldsymbol{A}(\\boldsymbol{x})\\cdot\\hat{\\boldsymbol{p}}-2q\\boldsymbol{\\nabla}\\alpha\\cdot\\hat{\\boldsymbol{p}}-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{A}(\\boldsymbol{x}))\\cdot\\boldsymbol{\\eta}-\\frac{q}{\\hbar}(\\hat{\\boldsymbol{x}}\\times\\boldsymbol{\\nabla}\\alpha)\\cdot\\boldsymbol{\\eta}\\right)\\times\\hat{\\boldsymbol{p}}\\right]\\cdot\\boldsymbol{\\theta}.\n\\end{equation}\n\nIf we now consider the wave function transformation, $\\Psi=\\mathrm{e}^{\\mathrm{i}q\\alpha\/\\hbar}\\Psi'$, we verify that the gauge transformation is not cancelled due to the momentum operator outside the divergence acting on the exponential. Thus, the phase transformation that absorbs the gauge transformation terms in the first part of the Hamiltonian, Eq. (\\ref{eq ncemg}), does not do so for the second set of terms. This comes from the fact that, in the first term, the change in $\\boldsymbol{A}$ can be seen as a change in $\\hat{\\boldsymbol{p}}$, and a constant change in momenta can always be absorbed by a phase change. The same does not occur for the change in the second term, making it impossible to accommodate it into a change in phase. 
Therefore, in order to make the Hamiltonian gauge invariant, this term must vanish. To accomplish this for any $\\boldsymbol{A}$, $\\theta$ must vanish. This result is consistent with an explicit computation in the context of the Hamiltonian of fermionic fields \\cite{Bertolami:2011rv}.\n\n\n\\section{Gravitational Quantum Well and the Equivalence Principle in NCQM}\n\nA very interesting system to directly connect gravity to quantum mechanics is the gravitational quantum well \\cite{Landau,LF,Nesvizhesky}. As we shall see, this connection can be used to constrain quantum measurements of gravity phenomena and to test the equivalence principle (see also Refs. \\cite{Bertolami:2003,Bastos:2010au}). It is easy to show that this principle holds for usual quantum mechanics, in the sense that a gravitational field is equivalent to an accelerated reference frame. We shall see that this also holds in the context of NCQM for isotropic noncommutativity parameters. In the following we shall study the noncommutative GQW \\cite{Bertolami:2005jw} and its connection to accelerated frames of reference.\n\n\\subsection{Fock space formulation of NC Gravitational Quantum Well}\n\nLet us consider the GQW in the context of NCQM.\nTo start with, we review some aspects of the usual GQW in standard quantum mechanics. 
The Hamiltonian is given by:\n\n\\begin{equation} \\label{QGWH}\n\\hat{H}=\\frac{1}{2m}\\hat{\\boldsymbol{p}}^2+mg\\hat{x}_i,\n\\end{equation}\nfor a particle with mass $m$ in a gravitational field with acceleration $g$ in the $x_i$ direction.\n\nWith the Fock space treatment in mind, we define creation and annihilation operators for this Hamiltonian:\n\\begin{gather}\n\\hat{b}=\\left(\\frac{m^2}{\\hbar^2g}\\right)^{\\frac{1}{3}}\\left[\\hat{x}+\\frac{i}{2}\\left(\\frac{g^2\\hbar}{m^4}\\right)^{\\frac{1}{3}}\\hat{p}_x\\right], \\label{annihilation}\\\\\n\\hat{b}^\\dagger=\\left(\\frac{m^2}{\\hbar^2g}\\right)^{\\frac{1}{3}}\\left[\\hat{x}-\\frac{i}{2}\\left(\\frac{g^2\\hbar}{m^4}\\right)^{\\frac{1}{3}}\\hat{p}_x\\right], \\label{creation}\n\\end{gather}\nwhere the definition concerns only the $x$ direction, as the $y$ component of the Hamiltonian is just that of a free particle. The normalization factors are chosen so that the operators $\\hat{b}$ and $\\hat{b}^\\dagger$ are dimensionless. The Hamiltonian can then be rewritten as \n\\begin{equation} \\label{comm hamiltonian}\n\\hat{H}=K_1\\left(\\hat{\\Gamma}_x+\\hat{\\Gamma}_y\\right)+K_2\\left(\\hat{b}^\\dagger_x+\\hat{b}_x\\right),\n\\end{equation}\nwhere\n\\begin{gather}\n\\hat{\\Gamma}_i=\\hat{b}^\\dagger_i\\hat{b}_i+\\hat{b}_i\\hat{b}^\\dagger_i-\\hat{b}^\\dagger_i\\hat{b}^\\dagger_i-\\hat{b}_i\\hat{b}_i, \\\\\nK_1=\\frac{1}{16}\\left(\\frac{\\hbar^3m^2}{g}\\right)^{2\/3}, \\\\\nK_2=\\frac{mg}{2}\\left(\\frac{\\hbar^2g}{m^2}\\right)^{1\/3}.\n\\end{gather}\n\nGiven the form of the Hamiltonian, it is evident that it is not diagonal in this representation, so it is not particularly useful for calculations of eigenstates and eigenvalues. This is expected from the usual solution to this problem, in which the energies involve the zeros of the Airy function, $Ai(x)$. 
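As a purely numerical aside (not in the original text), the scale of this Airy spectrum can be evaluated from $E_n=(mg^2\\hbar^2\/2)^{1\/3}|\\alpha_n|$, the standard GQW eigenvalues also quoted below, using CODATA constants for a neutron, the particle used in GQW experiments:

```python
import numpy as np
from scipy.special import ai_zeros

# GQW spectrum E_n = (m g^2 hbar^2 / 2)^(1/3) |alpha_n|, with alpha_n the
# (negative) zeros of Ai; evaluated for a neutron with standard constants.
hbar = 1.054571817e-34   # J s
m = 1.67492750e-27       # neutron mass, kg
g = 9.80665              # m / s^2
eV = 1.602176634e-19     # J per eV

alpha = ai_zeros(4)[0]   # first four zeros of the Airy function Ai
E = (m * g**2 * hbar**2 / 2) ** (1 / 3) * np.abs(alpha) / eV  # in eV

assert 1.3e-12 < E[0] < 1.5e-12   # lowest level at the peV scale
assert np.all(np.diff(E) > 0)     # energies increase with n
```

The first level comes out at roughly $1.4$ peV, the scale probed with ultracold neutrons.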
We now examine the noncommutative Hamiltonian \\cite{Bertolami:2005jw},\n\n\\begin{equation} \\label{NCGQWH}\n\\hat{H}^{NC}=\\frac{1}{2m}\\left[\\hat{p}_x^2+\\hat{p}_y^2\\right]+mg\\hat{x}+\\frac{\\eta}{2m\\hbar}(\\hat{x}\\hat{p}_y-\\hat{y}\\hat{p}_x)+\\frac{\\eta^2}{8m\\hbar^2}\\left(\\hat{x}^2+\\hat{y}^2\\right),\n\\end{equation}\nwhich is the Hamiltonian of a particle under the influence of a gravitational field plus a fictitious ``magnetic field\", $\\overrightarrow{B_{NC}}=-(\\eta\/q\\hbar)\\overrightarrow{\\mathrm{e}_z}$, plus a harmonic restoring force. Through the definitions, Eqs. (\\ref{annihilation}) and (\\ref{creation}), it can be rewritten, up to first order in $\\theta$ and $\\eta$, as:\n\\begin{equation} \\label{noncomm hamiltonian}\n\\hat{H}^{NC}=K_1\\left(\\hat{\\Gamma}_x+\\hat{\\Gamma}_y\\right)+K_2\\left(\\hat{b}^\\dagger_x+\\hat{b}_x\\right)+\\frac{i\\eta}{4m\\hbar^{\\frac{2}{3}}}\\left(\\hat{b}^\\dagger_y\\hat{b}_x-\\hat{b}^\\dagger_x\\hat{b}_y\\right).\n\\end{equation}\n\nIt should be pointed out that this treatment considers only first order terms in either $\\eta$ or $\\theta$, although the latter does not show up in the Hamiltonian as its effect can be absorbed by a phase factor of the wave function. Noting the similarities between the commutative and noncommutative Hamiltonians, we might ask whether there is a transformation that can turn one into the other. That would be an interesting finding as, then, noncommutativity, at least for this system, could be regarded as a modification of the commutative case, and noncommutative eigenfunctions could be constructed from commutative ones, which are well known. Furthermore, it would make noncommutativity the result of a transformation of variables, and not a fundamental property of the system under study. 
In order to pursue this analysis, we must introduce an operator transformation in which the new operators, $\\hat{a}_i$ and $\\hat{a}^\\dagger_i$ for $i=x,y$, obey the same commutation relations as the original operators. Thus we define,\n\n\\begin{gather} \\label{a operators}\n\\hat{b}_i:=\\sum_{j=1}^2u_{ij}\\hat{a}_j+s_{ij}\\hat{a}^\\dagger_j, \\\\\n\\hat{b}^\\dagger_i:=\\sum_{j=1}^2u^*_{ij}\\hat{a}^\\dagger_j+s^*_{ij}\\hat{a}_j,\n\\end{gather}\nwhere we impose the commutation relations\n\n\\begin{equation}\n\\left[\\hat{a}_i,\\hat{a}^\\dagger_j\\right]=\\delta_{ij},\n\\end{equation}\nand all the other commutation relations vanish. These conditions introduce a set of constraints on the parameters $u_{ij}$ and $s_{ij}$, namely:\n\n\\begin{gather}\n\\lvert u_{11}\\lvert^2-\\lvert s_{11} \\lvert^2+\\lvert u_{12}\\lvert^2-\\lvert s_{12}\\lvert^2=1, \\nonumber\\\\\n\\lvert u_{21}\\lvert^2-\\lvert s_{21} \\lvert^2+\\lvert u_{22}\\lvert^2-\\lvert s_{22}\\lvert^2=1.\n\\end{gather}\n\nConsidering Eq. (\\ref{comm hamiltonian}) in terms of operators $\\hat{b}_i$ and $\\hat{b}^\\dagger_i$ and using the definitions, Eq. (\\ref{a operators}), we get the Hamiltonian in terms of the operators $\\hat{a}_i$ and $\\hat{a}^\\dagger_i$ as \n\n\\begin{multline} \\label{new hamiltonian}\n\\hat{H}=K_1\\left[\\gamma_1\\hat{a}^\\dagger_x\\hat{a}_x+\\gamma_1\\hat{a}_x\\hat{a}^\\dagger_x+\\gamma_2\\hat{a}^\\dagger_x\\hat{a}^\\dagger_x+\\gamma^*_2\\hat{a}_x\\hat{a}_x+\\gamma_3\\hat{a}^\\dagger_y\\hat{a}_y+\\gamma_3\\hat{a}_y\\hat{a}^\\dagger_y+\\gamma_4\\hat{a}^\\dagger_y\\hat{a}^\\dagger_y+ \\right. \\\\\n\\left. 
+\\gamma^*_4\\hat{a}_y\\hat{a}_y+2\\gamma_5\\hat{a}^\\dagger_x\\hat{a}^\\dagger_y+2\\gamma^*_5\\hat{a}_x\\hat{a}_y+2\\gamma_6\\hat{a}^\\dagger_x\\hat{a}_y+2\\gamma^*_6\\hat{a}^\\dagger_y\\hat{a}_x\\right]+ \\\\\n+K_2\\left[\\hat{a}^\\dagger_x\\left(u^*_{11}+s_{11}\\right)+\\hat{a}_x\\left(s^*_{11}+u_{11}\\right)+\\hat{a}^\\dagger_y\\left(u^*_{12}+s_{12}\\right)+\\hat{a}_y\\left(s^*_{12}+u_{12}\\right)\\right],\n\\end{multline}\nwhere, for simplicity, we have defined\n\n\\begin{subequations}\n\\begin{equation}\n\\gamma_1:=\\lvert u_{11}\\lvert^2+\\lvert s_{11}\\lvert^2-u^*_{11}s^*_{11}-u_{11}s_{11}+\\lvert u_{21}\\lvert^2+\\lvert s_{21}\\lvert^2-u^*_{21}s^*_{21}-u_{21}s_{21},\n\\end{equation}\n\\begin{equation}\n\\gamma_2:=2u^*_{11}s_{11}-\\left(u^*_{11}\\right)^2-s_{11}^2+2u^*_{21}s_{21}-\\left(u^*_{21}\\right)^2-s_{21}^2,\n\\end{equation}\n\\begin{equation}\n\\gamma_3:=\\lvert u_{12}\\lvert^2+\\lvert s_{12}\\lvert^2-u^*_{12}s^*_{12}-u_{12}s_{12}+\\lvert u_{22}\\lvert^2+\\lvert s_{22}\\lvert^2-u^*_{22}s^*_{22}-u_{22}s_{22},\n\\end{equation}\n\\begin{equation}\n\\gamma_4:=2u^*_{12}s_{12}-\\left(u^*_{12}\\right)^2-s_{12}^2+2u^*_{22}s_{22}-\\left(u^*_{22}\\right)^2-s_{22}^2,\n\\end{equation}\n\\begin{equation}\n\\gamma_5:=u^*_{11}s_{12}+s_{11}u^*_{12}-u^*_{11}u^*_{12}-s^*_{11}s^*_{12}+u^*_{21}s_{22}+s_{21}u^*_{22}-u^*_{21}u^*_{22}-s^*_{21}s^*_{22},\n\\end{equation}\n\\begin{equation}\n\\gamma_6:=u^*_{11}u_{12}+s_{11}s^*_{12}-u^*_{11}s^*_{12}-s^*_{11}u^*_{12}+u^*_{21}u_{22}+s_{21}s^*_{22}-u^*_{21}s^*_{22}-s^*_{21}u^*_{22}.\n\\end{equation}\n\\end{subequations}\n\nComparing the Hamiltonian in Eq. (\\ref{new hamiltonian}) to the one in Eq. 
(\\ref{noncomm hamiltonian}), we can immediately set the conditions for the $\\gamma_i$'s:\n\n\\begin{subequations}\n\\begin{equation}\n\\gamma_1=1,\n\\end{equation}\n\\begin{equation}\n\\gamma_2=-1,\n\\end{equation}\n\\begin{equation}\n\\gamma_3=1,\n\\end{equation}\n\\begin{equation}\n\\gamma_4=-1,\n\\end{equation}\n\\begin{equation}\n\\gamma_5=0,\n\\end{equation}\n\\begin{equation}\n\\gamma_6=\\mathrm{i}\\frac{\\eta}{4m\\hbar^{\\frac{2}{3}} K_1}:=\\mathrm{i}\\eta c, \\quad c\\in \\mathbb{R}.\n\\end{equation}\n\\end{subequations}\n\n\nFurthermore, comparing the terms that are linear in the $\\hat{a}$ operators, we get two additional equations for the $u$ and $s$ parameters,\n\n\\begin{subequations}\n\\begin{equation}\nu^*_{11}+s_{11}=1,\n\\end{equation}\n\\begin{equation}\nu^*_{12}+s_{12}=0.\n\\end{equation}\n\\end{subequations}\n\nIn total we now have 16 variables and 16 distinct equations constraining the values of these variables. Hence, this system of equations has either a single solution or none. It is found that this system has no solution for $\\eta\\neq 0$, which can be verified using well-known Mathematica or MATLAB procedures. Therefore, it is not possible, as expected, to describe the noncommutative Hamiltonian as a mixture of eigenstates of the commutative Hamiltonian, and so it is a completely different problem. Once again we stress that this result is only valid at first order in both noncommutative parameters. However, it is reassuring to confirm that, at least at this level, noncommutativity is indeed a completely different problem from the commutative one.\n\n\\subsection{Equivalence Principle}\n\nHaving verified that the noncommutative Hamiltonian of the GQW is in fact a different problem from the commutative one, we can try to examine the issue of the noncommutative Equivalence Principle. 
We have seen that the only parameter having an effect on the eigenstates and eigenvalues is $\\eta$, as the $\\theta$ factor can be absorbed into a phase factor in the wave function of the system. The WEP states that, locally, any gravitational field is equivalent to an accelerated reference frame. This is one of the basic tenets of General Relativity and holds with great accuracy (see e.g. Ref. \\cite{Bertolami:2012}, chapter 22, for a review of the experimental status of relativity). In standard QM, for the GQW, this can be verified to hold in a quite simple way. We will show how it can be verified in the context of NCQM in what follows. For this purpose we consider the noncommutative GQW Schr\\\"{o}dinger equation,\n\n\\begin{equation}\n\\hat{H}^{NC}_g\\Psi=\\left[\\frac{1}{2m}\\left(\\hat{\\pi}_x^2+\\hat{\\pi}_y^2\\right)+mg\\hat{Q}_x\\right]\\Psi=E\\Psi,\n\\end{equation}\nand apply the Darboux transformation to write it in terms of the commutative variables, that is, Eq. (\\ref{NCGQWH}):\n\n\\begin{equation} \\label{schrodinger of NC GQW}\n\\left[\\frac{1}{2m}\\left(\\hat{p}_x^2+\\hat{p}_y^2\\right)+mg\\hat{x}+\\frac{\\eta}{2m\\hbar}(\\hat{x}\\hat{p}_y-\\hat{y}\\hat{p}_x)+\\frac{\\eta^2}{8m\\hbar^2}\\left(\\hat{x}^2+\\hat{y}^2\\right)\\right]\\Psi=\\mathrm{i}\\hbar\\frac{\\partial\\Psi}{\\partial t},\n\\end{equation}\nwhere we have considered the time-dependent problem as we have to use a change of coordinates evolving in time. 
We now consider the noncommutative free particle equation:\n\n\\begin{equation} \\label{free hamiltonian}\n\\left[-\\frac{\\hbar^2}{2m}\\left(\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}\\right)-\\frac{\\mathrm{i}\\eta}{2m}\\left(x\\frac{\\partial}{\\partial y}-y\\frac{\\partial}{\\partial x}\\right)+\\frac{\\eta^2}{8m\\hbar^2}\\left(x^2+y^2\\right)\\right]\\Psi=\\mathrm{i}\\hbar\\frac{\\partial\\Psi}{\\partial t},\n\\end{equation}\nand introduce a change of coordinates defined as\n\n\\begin{subequations} \\label{acc coordinates}\n\\begin{equation}\nx'=x+\\sigma(t),\n\\end{equation}\n\\begin{equation}\ny'=y.\n\\end{equation}\n\\end{subequations}\n\nIn order for the WEP to be preserved we require that\n\n\\begin{equation} \\label{equality}\n\\hat{H}^{NC}_g(\\hat{\\boldsymbol{x}},\\hat{\\boldsymbol{p}})\\Psi(x,y)=\\hat{H}^{NC}_{free}(\\hat{\\boldsymbol{x'}},\\hat{\\boldsymbol{p'}})\\Psi'(x',y'),\n\\end{equation}\nwhere $\\hat{H}^{NC}_g$ is the noncommutative GQW Hamiltonian, $\\hat{H}^{NC}_{free}$ is the noncommutative Hamiltonian of a free particle, and $\\Psi'(x',y')=\\mathrm{e}^{\\mathrm{i}\\phi(x',y')}\\Psi(x',y')$, so that the eigenfunctions are the same up to a phase. Starting from the free particle Hamiltonian we write it in terms of the accelerated-reference-frame coordinates, and thus\n\\begin{subequations} \\label{acc differentials}\n\\begin{equation}\n\\frac{\\partial}{\\partial x'}\\Psi(x',y')=\\frac{\\partial}{\\partial x}\\Psi(x,y),\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial}{\\partial y'}\\Psi(x',y')=\\frac{\\partial}{\\partial y}\\Psi(x,y),\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial}{\\partial t'}\\Psi(x',y')=\\left(\\frac{\\partial}{\\partial t}-\\frac{\\mathrm{d}\\sigma(t)}{\\mathrm{d}t}\\frac{\\partial}{\\partial x}\\right)\\Psi(x,y).\n\\end{equation}\n\\end{subequations}\n\nHence, combining Eqs. (\\ref{acc coordinates}) and (\\ref{acc differentials}), the right-hand side of Eq. 
(\\ref{free hamiltonian}) becomes:\n\\begin{multline}\n\\left[-\\frac{\\hbar^2}{2m}\\left(\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}\\right)-\\frac{\\mathrm{i}\\eta}{2m}\\left(x\\frac{\\partial}{\\partial y}-y\\frac{\\partial}{\\partial x}\\right)-\\frac{\\mathrm{i}\\eta}{2m}\\sigma(t)\\frac{\\partial}{\\partial y}+\\frac{\\eta^2}{8m\\hbar^2}\\left(x^2+y^2\\right)+ \\right. \\\\\n\\left. \\frac{\\eta^2}{8m\\hbar^2}\\left(-2x\\sigma(t)+\\sigma^2(t)\\right)\\right]\\Psi'(x,y)=\\mathrm{i}\\hbar\\left(\\frac{\\partial}{\\partial t}-\\frac{\\mathrm{d}\\sigma(t)}{\\mathrm{d}t}\\frac{\\partial}{\\partial x}\\right)\\Psi'(x,y).\n\\end{multline}\n\nIn order to check if Eq. (\\ref{equality}) is consistent we must either compute the phase $\\phi$ or prove that no wave function satisfies the mentioned relation. For this we consider the relation between $\\Psi$ and $\\Psi'$ and compute the action of the operators on the wave function $\\Psi'(x',y')=\\mathrm{e}^{\\mathrm{i}\\phi(x',y')}\\Psi(x',y')$. The obtained result is as follows:\n\\begin{multline} \\label{full equation}\n\\left[-\\frac{\\hbar^2}{2m}\\left(\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}\\right)-\\frac{\\mathrm{i}\\eta}{2m}\\left(x\\frac{\\partial}{\\partial y}-y\\frac{\\partial}{\\partial x}\\right)+\\frac{\\eta^2}{8m\\hbar^2}\\left(x^2+y^2\\right)\\right]\\Psi'+\\left[-\\frac{\\mathrm{i}\\hbar^2}{2m}\\frac{\\partial^2\\phi}{\\partial x^2}+\\frac{\\hbar^2}{2m}\\left(\\frac{\\partial\\phi}{\\partial x}\\right)^2 \\right. \\\\\n\\left. -\\frac{\\mathrm{i}\\hbar^2}{2m}\\frac{\\partial^2\\phi}{\\partial y^2}+\\frac{\\hbar^2}{2m}\\left(\\frac{\\partial\\phi}{\\partial y}\\right)^2+\\frac{\\eta}{2m}y\\frac{\\partial\\phi}{\\partial x}-\\frac{\\eta}{2m}x\\frac{\\partial\\phi}{\\partial y}+\\frac{\\eta}{2m}\\sigma(t)\\frac{\\partial\\phi}{\\partial y}-\\frac{\\eta^2}{4m\\hbar^2}x\\sigma(t)+\\frac{\\eta^2}{8m\\hbar^2}\\sigma^2(t)+ \\right. \\\\\n\\left. 
\\hbar\\frac{\\partial\\phi}{\\partial t}+\\hbar\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t}\\frac{\\partial\\phi}{\\partial x}\\right]\\Psi'+\\left[-\\frac{\\mathrm{i}\\hbar^2}{m}\\frac{\\partial\\phi}{\\partial x}+\\mathrm{i}\\hbar\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t}\\right]\\frac{\\partial\\Psi'}{\\partial x}+\\left[-\\frac{\\mathrm{i}\\hbar^2}{m}\\frac{\\partial\\phi}{\\partial y}-\\frac{\\mathrm{i}\\eta}{2m}\\sigma(t)\\right]\\frac{\\partial\\Psi'}{\\partial y}=\\mathrm{i}\\hbar\\frac{\\partial\\Psi'}{\\partial t}.\n\\end{multline}\n\nNow, for the purpose of retrieving the noncommutative GQW we must compare both Schr\\\"{o}dinger equations to set constraints on the form of the phase $\\phi$. Imposing that the term multiplying the $x$-derivative of $\\Psi'$ vanishes, we get:\n\\begin{equation}\n\\frac{\\partial\\phi}{\\partial x}=\\frac{m}{\\hbar}\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t},\n\\end{equation}\nwhich implies, taking into account the fact that $\\sigma$ only depends on time, that:\n\\begin{equation} \\label{first form}\n\\phi=\\frac{m}{\\hbar}\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t}x+f(y,t).\n\\end{equation}\n\nConsidering that the last term on the left-hand side of Eq. (\\ref{full equation}) must vanish, and Eq. (\\ref{first form}), it follows that\n\\begin{equation}\n\\frac{\\hbar^2}{m}\\frac{\\partial f}{\\partial y}=-\\frac{\\eta}{2m}\\sigma(t)\\space\\Rightarrow\\space f(y,t)=-\\frac{\\eta}{2\\hbar^2}\\sigma(t)y+\\mu(t);\n\\end{equation}\nreplacing this result into the second term of Eq. (\\ref{full equation}) and comparing with the Hamiltonian, Eq. (\\ref{schrodinger of NC GQW}), yields\n\\begin{equation}\nm\\frac{\\mathrm{d}^2\\sigma}{\\mathrm{d}t^2}x+\\nu(t)=mgx,\n\\end{equation}\nwhere $\\nu(t)$ is the sum of all time-dependent terms and can be made to vanish through a suitable choice of the function $\\mu(t)$. There is only one non-vanishing remaining term and in order for Eq. 
(\\ref{equality}) to hold we must impose that\n\\begin{equation}\n\\frac{\\mathrm{d}^2\\sigma}{\\mathrm{d}t^2}=g\\space\\Rightarrow\\space\\sigma(t)=\\sigma_0+vt+\\frac{1}{2}gt^2.\n\\end{equation}\n\nThus, we can see that Eq. (\\ref{equality}) holds as long as\n\\begin{equation}\nx'=x+\\sigma_0+vt+\\frac{1}{2}gt^2,\n\\end{equation}\nwhich corresponds to an accelerated reference frame. The WEP is then verified to hold for NCQM, at least as long as we consider that the noncommutative parameters are isotropic. Hence, bounds on the WEP turn out to be limits on the isotropy of the NC parameters.\n\nFinally, the phase difference between the wave functions $\\Psi$ and $\\Psi'$ is given by:\n\\begin{equation}\n\\Psi=\\mathrm{e}^{\\mathrm{i}\\left(\\frac{m}{\\hbar}\\frac{\\mathrm{d}\\sigma}{\\mathrm{d}t}x-\\frac{\\eta}{2\\hbar^2}\\sigma(t)y+\\mu(t)\\right)}\\Psi'\n\\end{equation}\nand, as analysed in Ref. \\cite{Bastos:2008b}, this does not give rise to any physically meaningful effect.\n\n\\subsection{Anisotropic noncommutativity}\n\nAs we have seen in the last subsection, the WEP holds in NCQM unless the NC parameters are anisotropic, i.e. $\\eta_{xy}\\neq\\eta_{yz}$. In what follows we use the bounds on the WEP to constrain the difference between components of the $\\eta$ matrix. The ensuing discussion is similar to the one carried out in Ref. \\cite{Bastos:2010au} in the context of the entropic gravity proposal \\cite{Verlinde:2010hp}.\nThe noncommutative Hamiltonian for the GQW is given by Eq. (\\ref{NCGQWH}).\nIn order to find the eigenstates for this problem we use perturbation theory up to first order in $\\eta$, which is sufficient to obtain differences in the energy spectrum for different directions of the gravitational field. 
For this purpose we define\n\\begin{equation}\n\\hat{H}^{NC}=\\hat{H}_0^{NC}+\\hat{V},\n\\end{equation}\nwhere $\\hat{V}$ is treated as a perturbation to the exactly soluble Hamiltonian $\\hat{H}_0^{NC}$, defined by\n\n\\begin{subequations}\n\\begin{equation}\n\\hat{H}_0^{NC}:=\\frac{\\hat{p}_x^2}{2m}+\\frac{\\hat{p}_y^2}{2m}+mg\\hat{x},\n\\end{equation}\n\\begin{equation} \\label{perturbation}\n\\hat{V}:=\\frac{\\eta}{2m\\hbar}\\left(\\hat{x}\\hat{p}_y-\\hat{y}\\hat{p}_x\\right)+\\frac{\\eta^2}{8m\\hbar^2}\\left(\\hat{x}^2+\\hat{y}^2\\right).\n\\end{equation}\n\\end{subequations}\n\nSince we are only interested in the corrections of order $\\eta$, we can disregard the second term in $\\hat{V}$. The soluble Hamiltonian is that of a free particle in the $y$ direction and that of the GQW in the $x$ direction. Solutions to these problems are well known and are given by (e.g. Ref. \\cite{Landau})\n\n\\begin{equation} \\label{solution}\n\\Psi_{nk}(x,y)=A_nAi\\left(\\left(\\frac{2m^2g}{\\hbar^2}\\right)^{1\/3}\\left(x-\\frac{E_{n}}{mg}\\right)\\right)\\chi(y),\n\\end{equation}\nwhere $Ai(z)$ is the Airy function, $\\chi(y)$ is the solution for the free particle, and $E_{n}$ and $A_n$ are the energy eigenvalues in the $x$ direction and the normalization factor for the Airy function, given, respectively, by,\n\n\\begin{equation}\nE_{n}=-\\left(\\frac{mg^2\\hbar^2}{2}\\right)^{1\/3}\\alpha_n,\n\\end{equation}\n\n\\begin{equation}\nA_n=\\left[\\left(\\frac{\\hbar^2}{2m^2g}\\right)^{1\/3}\\int_{\\alpha_n}^{+\\infty}\\mathrm{d}z\\,Ai^2(z)\\right]^{-1\/2},\n\\end{equation}\nwhere $\\alpha_n$ are the zeros of the Airy function. The energy eigenvalues in the $y$ direction are given by,\n\\begin{equation}\nE_{y}=\\frac{\\hbar^2k^2}{2m},\n\\end{equation}\nwhere $\\hbar k$ is the momentum of the particle. The change in energy is given by the expectation value of the operator $\\hat{V}$ in a general state given by Eq. 
(\\ref{solution}); the leading-order perturbation to the energy of the system in any state is given by\n\n\\begin{equation}\n\\Delta E_n=\\bra{\\Psi_{nk}}\\hat{V}\\ket{\\Psi_{nk}}=\\frac{\\eta k}{2m}\\left[\\left(\\frac{2m^2g}{\\hbar^2}\\right)^{-2\/3}\\mathrm{I}_1^{(n)}+\\frac{E_n}{mg}\\right].\n\\end{equation}\nIt must be noted that we computed the energy eigenvalues for the case of a two-dimensional Hamiltonian in the $xy$ plane, so we can write\n\\begin{equation}\nE_{nk}^{xy}=-\\left(\\frac{mg^2\\hbar^2}{2}\\right)^{1\/3}\\alpha_n+\\frac{\\hbar^2k^2}{2m}+\\frac{\\eta_{xy} k}{2m}\\left[\\left(\\frac{2m^2g}{\\hbar^2}\\right)^{-2\/3}\\mathrm{I}_1^{(n)}+\\frac{E_n}{mg}\\right].\n\\end{equation}\nThus an anisotropy in momentum space breaks the equivalence principle. \n\nConsider now the NC GQW for a particle moving along the $y$ direction with a gravitational field in the $x$ direction and the same equation for a particle traveling along the $x$ direction with a gravitational field in the $z$ direction. Assuming that the test particles have the same momentum in the direction in which they are free, we obtain:\n\n\\begin{equation} \\label{deltag}\nmx(g_x-g_z)=\\frac{k}{2m}\\left[\\left(\\frac{2m^2g}{\\hbar^2}\\right)^{-2\/3}\\mathrm{I}_1^{(n)}+\\frac{E_n}{mg}\\right]\\left(\\eta_{xy}-\\eta_{yz}\\right),\n\\end{equation}\nwhere $x$ is the position of the test particle. Thus, using the bound on the WEP for two different directions (see e.g. Ref. \\cite{PhysRevLett.100.041101}):\n\n\\begin{equation} \\label{torsion balance}\n\\frac{\\Delta a}{a}:=\\frac{|a_1-a_2|}{a}\\lesssim 10^{-13},\n\\end{equation}\nplus data from Ref. \\cite{Nesvizhesky}, namely that $k=1.03\\times 10^8\\>\\>\\mathrm{m}^{-1}$ and $x=12.2\\>\\>\\mu\\mathrm{m}$ for the lowest-energy eigenstate and $g=9.80665\\>\\>\\mathrm{m\/s^2}$, Eq. (\\ref{deltag}) yields:\n\n\\begin{equation} \\label{relation}\n\\frac{\\Delta g}{g}=1.4\\times 10^{60}\\Delta\\eta.\n\\end{equation}\n\nApplying the bound from Eq. 
(\\ref{torsion balance}) to Eq. (\\ref{relation}), the bound for $\\Delta\\eta$ is computed to be:\n\n\\begin{equation}\n\\Delta\\eta\\lesssim 10^{-73} \\>\\mathrm{kg}^2\\mathrm{m}^2\\mathrm{s}^{-2},\n\\end{equation}\nwhich bounds the noncommutative momentum anisotropy in a quite stringent way. In natural units:\n\n\\begin{equation}\n\\sqrt{\\Delta\\eta}\\lesssim10^{-10}\\>\\>\\mathrm{eV}.\n\\end{equation}\n\n\n\\section{Lorentz invariance}\nLorentz symmetry is a fundamental cornerstone of all known physical theories. Thus, it is natural to consider experimental bounds on this invariance to constrain noncommutativity, which explicitly violates Lorentz symmetry. A major tool for these tests is the relativistic dispersion relation,\n\\begin{equation} \\label{dispersion relation}\nE^2=p^2c^2+m^2c^4.\n\\end{equation} \n\nThis relation is tested with great accuracy at very high energies. Indeed, ultra-high energy cosmic rays allow one to constrain an extra term quadratic in the energy in this relation at the $1.7\\times 10^{-25}$ level \\cite{Bertolami:1999da}. This estimate is confirmed through direct measurements by the Auger collaboration \\cite{Auger}. \n\nThus, assuming a correction of the dispersion relation proportional to $E^2$ at the $1.7\\times 10^{-25}$ level \\cite{Bertolami:1999da}, it is possible to constrain the $\\eta$ parameter, that is: \n\\begin{equation}\n\\eta \\leqslant (1.7\\times 10^{-25}) E^2,\n\\end{equation}\nhence for ultra-high energy cosmic rays, with $E\\sim 10^{20} \\> \\mathrm{eV}$, we can establish that $\\sqrt{\\eta}\\leqslant 4.1\\times 10^{7}\\>\\mathrm{eV}$, which is not at all a very stringent upper bound. A much more constraining bound can be set through low-energy tests of Lorentz symmetry. 
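Before turning to those, the arithmetic behind this cosmic-ray estimate and behind the anisotropy bound of the previous section can be checked directly; in the sketch below the unit conversions (in particular treating $\\sqrt{\\eta}\\,c$ as an energy) are our own bookkeeping, not taken from the original text:

```python
import math

eV = 1.602176634e-19  # J per eV
c = 2.99792458e8      # m/s

# Cosmic-ray bound: eta <= 1.7e-25 * E^2 at E ~ 1e20 eV
sqrt_eta = math.sqrt(1.7e-25) * 1e20       # in eV
assert 4.0e7 < sqrt_eta < 4.2e7            # ~ 4.1e7 eV, as quoted

# Anisotropy bound: Delta eta <~ 1e-13 / 1.4e60 (kg^2 m^2 s^-2);
# sqrt(Delta eta) has momentum units, so multiply by c to get an energy
delta_eta = 1e-13 / 1.4e60
sqrt_de = math.sqrt(delta_eta) * c / eV    # converted to eV
assert 1e-10 < sqrt_de < 1e-9              # order 10^-10 eV, as quoted
```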
Indeed, assuming limits arising from the nuclear Zeeman levels, one can establish that $\\eta \\leqslant 10^{-22}E^2$, which, for $E \\sim \\mathrm{MeV}$ \\cite{PhysRevLett.57.3125}, implies that $\\sqrt{\\eta}\\leqslant 10^{-11}\\>\\mathrm{MeV}\\simeq 10^{-5}\\>\\>\\mathrm{eV}$. This result is competitive with the most stringent bound on $\\eta$, namely $\\sqrt{\\eta}\\leqslant 2 \\times 10^{-6}\\>\\>\\mathrm{eV}$ \\cite{Bertolami:2011rv}, obtained from the hydrogen hyperfine transition, the most accurate experimental result ever obtained. \n\n\n\\section{Discussion and Conclusions}\n\nIn this work we have addressed several issues in NCQM. Gauge invariance of the electromagnetic field is verified to hold only if the parameter $\\theta$ vanishes, which is consistent with previous work for fermionic fields \\cite{Bertolami:2011rv}. This result implies that, for abelian gauge theories, spatial directions do commute and noncommutative effects are expected only for the momenta.\n\n\nAlso, we have compared the GQW Hamiltonian in the context of NCQM with the Hamiltonian for the same problem in QM. Using the Fock space formalism with creation and annihilation operators, we found no evidence for a connection between these two problems at first order in the parameter $\\eta$. This shows that NCQM poses a different problem from QM, at least in the context of the GQW.\nFollowing this result, we studied the WEP in the noncommutative scenario. It is concluded that this principle holds for NCQM in the sense that an accelerated frame of reference is locally equivalent to a gravitational field, as long as noncommutativity is isotropic. If an anisotropy is introduced in the noncommutative parameters, using data from Refs. \\cite{PhysRevLett.100.041101,Nesvizhesky}, we set a bound on the anisotropy of the $\\eta$ parameter, $\\sqrt{\\Delta\\eta}\\lesssim10^{-10}\\>\\>\\mathrm{eV}$. 
It is then clear that the anisotropy of the noncommutative momentum parameter is many orders of magnitude smaller than the NC parameter itself. This result also implies that the existence of a preferential observer to whom the spatial $x$, $y$ and $z$ directions are well defined is limited to the same degree as the anisotropy factor.\n\nAdditionally, the breaking of Lorentz symmetry is examined in the context of NCQM. Assuming a violation of the relativistic dispersion relation proportional to $E^2$, bounds from ultra-high energy cosmic rays (see Refs. \\cite{Bertolami:1999da,Auger}) imply that $\\sqrt{\\eta}\\leq 4.1\\times 10^7\\>\\>\\mathrm{eV}$. Considering instead bounds arising from nuclear Zeeman levels, one obtains $\\sqrt{\\eta}\\leq 10^{-5}\\>\\>\\mathrm{eV}$, which is competitive with the bound arising from the hydrogen hyperfine transition, $\\sqrt{\\eta}\\leq 2\\times 10^{-6}\\>\\>\\mathrm{eV}$ \\cite{Bertolami:2011rv}, the most stringent bound ever obtained. \n\n\\vspace{0.5cm} \n\n\\noindent\n{ \\bf Acknowledgements}\n\n\\noindent\nThe authors would like to thank Catarina Bastos for relevant discussions on the matter of this work.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe theory of optimal transport has drawn much attention in recent years.\nIts applications to geometry and PDEs have in particular been largely \ndisseminated. In this paper, we would like to show its effectiveness in a \ndynamical context. We are interested in arguably the simplest dynamical\nsystem where the action on measures is significantly different from the action \non points, namely expanding circle maps.\n\nAnother goal of the paper is to exemplify the rigorous differential structure\ndefined by N. 
Gigli \\cite{Gigli}, for the simplest possible compact manifold.\nNote that one can use absolutely continuous curves to define the almost \neverywhere differentiability of maps, see in particular\n\\cite{Gigli2} where this method is applied to the exponential map.\nOther previous uses of variants of this manifold structure\ninclude the definition of gradient flows, as in the pioneering \\cite{Otto} and in \n\\cite{Ambrosio-Gigli-Savare}, and of curvature, as in \\cite{Lott}.\nBut to our knowledge, no explicit example of the derivative of a measure-defined\nmap at a given point had been computed.\n\n\\subsection{An important model example}\n\nLet us first consider the usual degree $d$\nself-covering map of the circle $\\mathbb{S}^1 = \\mathbb{R}\/\\mathbb{Z}$ defined by\n\\[\\Phi_d(x) = dx \\mod 1.\\]\nIt acts on the set $\\mathscr{P}(\\mathbb{S}^1)$ of Borel probability measures,\nendowed with the topology of weak convergence, by the push-forward map\n$\\Phi_{d\\#}$. \n\nA map like $\\Phi_d$ can act by composition on the right on a\nfunction space (e.g. Sobolev spaces). \nThe adjoint of this map is usually called\na Perron-Frobenius operator or a transfer operator, and a great \ndeal of effort has been made to understand these\noperators, especially their spectral properties (see for example \\cite{Baladi}). \nOne can consider\n$\\Phi_{d\\#}$ as an analogue for possibly singular measures of the Perron-Frobenius\noperator of $\\Phi_d$.\n\nAs pointed out by the referee of a previous version of this paper, using the\nfinite-to-one maps \n\\[(x_1,\\ldots,x_n)\\mapsto \\frac1n\\delta_{x_1}+\\dots+\\frac1n\\delta_{x_n}\\]\nit is easy to prove that $\\Phi_{d\\#}$ is topologically transitive and has infinite\ntopological entropy. 
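Concretely, on such empirical measures $\\Phi_{d\\#}$ simply pushes each atom forward through $\\Phi_d$ and merges collisions; a minimal numerical sketch (illustrative only, with $d=2$):

```python
import numpy as np

def push_forward(atoms, weights, d):
    # Phi_d#: each atom x of an empirical measure is sent to d*x mod 1,
    # weights are carried along, and atoms that collide are merged.
    new_atoms = np.mod(d * atoms, 1.0)
    uniq, inv = np.unique(new_atoms, return_inverse=True)
    merged = np.zeros_like(uniq)
    np.add.at(merged, inv, weights)
    return uniq, merged

d, n = 2, 8
atoms = np.arange(d * n) / (d * n)        # uniform measure on a 16-point grid
weights = np.full(d * n, 1.0 / (d * n))
a2, w2 = push_forward(atoms, weights, d)

# Phi_2 maps the 16-point uniform grid two-to-one onto the 8-point grid,
# so the push-forward is again a uniform empirical measure.
assert len(a2) == n
assert np.allclose(w2, 1.0 / n)
```

In particular, empirical approximations of $\\lambda$ are mapped to empirical approximations of $\\lambda$, in accordance with the invariance of the uniform measure.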
To refine this last remark, we shall prove that $\\Phi_{d\\#}$ \nhas positive metric mean dimension (a metric dynamical invariant of infinite-entropy\nmaps).\n\\begin{theo}\\label{theo:mdim}\nFor all integer $d\\geqslant 2$ and all exponent $p\\in[1,+\\infty)$ we have \n\\[\\mathop \\mathrm{mdim}_M\\nolimits(\\Phi_{d\\#},\\mathop \\mathrm{W}\\nolimits_p)\\geqslant p(d-1)\\]\nwhere $\\mathop \\mathrm{W}\\nolimits_p$ is the Wasserstein metric with cost $|\\cdot|^p$.\n\\end{theo}\nThe definition of Wasserstein metrics is given below; for the definiton\nof metric mean dimension and the proof of the above result, see \nSection \\ref{sec:entropy}.\nExcept in this result, we shall only use the quadratic Wasserstein metric.\n\nOur main goal is to study the first-order dynamics of $\\Phi_{d\\#}$ near the uniform\nmeasure $\\lambda$. The precise setting will be exposed latter; let us just give\na few elements. The tangent space $T_\\mu$ to $\\mathscr{P}(\\mathbb{S}^1)$ at a measure\n$\\mu$ that is absolutely continuous with continuous density identifies\nwith the Hilbert space $L^2_0(\\mu)$ of all vector fields \n$v:\\mathbb{S}^1\\to\\mathbb{R}$ that are $L^2$ with respect to $\\mu$,\nand such that $\\int v \\,\\lambda =0$. More generally,\nif $\\mu$ is atomless $T_\\mu$ identifies with a Hilbert subspace\n$L^2_0(\\mu)$ of $L^2(\\mu)$.\n\nWe have a kind of exponential map: $\\exp_\\mu(v)=\\mu+v:=(\\mathrm{Id}+v)_\\#\\mu$. 
\nThen we say that a map $f$ acting on $\\mathscr{P}(\\mathbb{S}^1)$\nhas G\\^ateaux derivative $L$ at $\\mu$ if $f(\\mu)$ has no atom\nand $L:L^2_0(\\mu)\\to L^2_0(f(\\mu))$ is a continuous linear operator\nsuch that for all $v$ we have\n\\[\\mathop \\mathrm{W}\\nolimits(f(\\mu+tv),f(\\mu)+tLv)=o(t).\\]\n\nOur first differentiability result is the following.\n\\begin{theo}\\label{theo:diff}\nThe map $\\Phi_{d\\#}$ has a G\\^ateaux derivative at $\\lambda$,\nequal to $d$ times\nthe Perron-Frobenius operator of $\\Phi_d$ acting on $L^2_0(\\lambda)$.\nIn particular its spectrum is the disc of radius $d$ and all numbers of modulus $<d$ are eigenvalues.\n\\end{theo}\n\n\\begin{theo}\\label{theo:almost-invariant}\nFor all $\\varepsilon>0$ and all integer $K$\nthere is a radius $r>0$ such that for all $k\\leqslant K$ and\nall $a\\in B^n(0,r)$ the following holds:\n\\[\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_{d\\#}^k(F(a)),F(a)\\big)\\leqslant \\varepsilon |a|.\\]\n\\end{theo}\nHere $B^n$ denotes the unit Euclidean ball centered at $0$ and $\\mathop \\mathrm{W}\\nolimits$ is the quadratic\nWasserstein distance (whose definition is recalled below).\n\nIt is easy to construct invariant measures near the \nabsolutely continuous one, for example supported on a union of periodic orbits.\nOne can also\nconsider convex sums $(1-a)\\rho\\lambda+a\\mu$ where $\\mu$ is any invariant measure\nand $a\\ll 1$. But note that the curves $a\\mapsto (1-a)\\rho\\lambda+a\\mu$ need not\nbe rectifiable, let alone Lipschitz. Bernoulli measures are also examples;\nthey are singular, atomless, fully supported invariant measures of $\\Phi_d$\nthat can be arbitrarily close to $\\lambda$. \n\nThe nearly invariant measures above seem of a different\nnature, and a natural question is how regular they are.\nThey are given by push-forwards of the uniform measure by continuous \nfunctions; \nfor example in the model case a one-parameter family is given by\n\\[\\big(\\mathrm{Id}+t\\sum_{\\ell=0}^\\infty d^{-\\ell}\\cos(2\\pi d^\\ell \\cdot)\\big)_\\#\\lambda \\]\nwhere $t\\in[0,\\varepsilon)$. 
This makes it easy to prove that almost all\nof them are atomless.\n\\begin{prop}\\label{prop:atomless}\nIf $\\mu$ is an atomless measure and $v\\in L^2(\\mu)$,\nthen for all but a countable number of values of $t\\in[0,1]$, the\nmeasure $\\mu+tv=(\\mathrm{Id}+tv)_{\\#}\\mu$ has no atom.\n\nIn particular, with the notation of Theorem \\ref{theo:almost-invariant},\nfor almost all $a$ the measure $F(a)$ has no atom.\n\\end{prop}\n\nThis leaves open the following, antagonistic questions.\n\\begin{ques}\nIs the measure $F(a)$ absolutely continuous for most, or at least some $a\\neq 0$?\n\\end{ques}\n\n\\begin{ques}\nIs the measure $F(a)$ invariant for most, or at least some $a\\neq 0$?\n\\end{ques}\n\nThe next natural questions, not addressed at all here, concern the \ndynamical properties of the action\non measures of higher-dimensional hyperbolic dynamical systems like Anosov\nmaps or flows, or of discontinuous systems like interval exchange maps.\n\n\n\\subsection{Recalls and notations}\n\nThe most convenient point of view here is to construct the circle as\nthe quotient $\\mathbb{R}\/\\mathbb{Z}$. We shall often and without notice write\na real number $x\\in[0,1)$ to mean its image by the canonical projection. We proceed\nsimilarly for intervals of length less than $1$.\n\nRecall that the push-forward of a measure is defined by \n$\\Phi_\\#\\mu(A)=\\mu(\\Phi^{-1}A)$ for every Borel set $A$.\n\nFor a detailed introduction to optimal transport, the interested reader can for\nexample consult \\cite{Villani}. 
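To make the push-forward concrete, here is a small numerical illustration of ours (the grid size is an arbitrary choice): an empirical approximation of $\\lambda$ supported on a rational grid with odd denominator is merely permuted by $\\Phi_2$, reflecting the invariance $\\Phi_{d\\#}\\lambda=\\lambda$.

```python
import numpy as np

d, N = 2, 101                  # N odd, so multiplication by d permutes the grid
grid = np.arange(N) / N        # atoms of a uniform empirical measure on R/Z

# push-forward by Phi_d: each atom x is sent to d*x mod 1
pushed = (d * grid) % 1.0

# the atom set is unchanged, so the empirical measure is Phi_d-invariant,
# mirroring Phi_d# lambda = lambda
assert set(np.round(pushed * N).astype(int)) == set(range(N))
```

For a grid with even denominator the atoms would instead collide in pairs, which is the discrete shadow of the fact that $\\Phi_d$ is $d$-to-one.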
Let us give an overview of the properties we shall need.\nGiven an exponent $p\\in[1,\\infty)$, if $(X,d)$ is a general metric space, assumed to be Polish (complete and\nseparable) to avoid measurability issues and endowed with its Borel \n$\\sigma$-algebra, its $L^p$ \\emph{Wasserstein space} is\nthe set $\\mathscr{W}_p(X)$ of probability measures $\\mu$ on $X$ whose $p$-th moment is finite:\n\\[\\int d^p(x_0,x) \\,\\mu(dx)<\\infty\\qquad\\mbox{ for some, hence all }x_0\\in X\\]\nendowed with the following metric: given $\\mu,\\nu\\in\\mathscr{W}_p(X)$ one sets\n\\[\\mathop \\mathrm{W}\\nolimits_p(\\mu,\\nu)=\\left(\\inf_\\Pi \\int_{X\\times X} d^p(x,y)\\, \n \\Pi(dx dy)\\right)^{1\/p}\\]\nwhere the infimum is over all probability measures $\\Pi$ on $X\\times X$\nthat project to $\\mu$ on the first factor and to $\\nu$ on the second one.\nSuch a measure is called a transport plan between $\\mu$ and $\\nu$, and is\nsaid to be optimal when it achieves the infimum. In this setting, an optimal\ntransport plan always exists. Note that when $X$ is compact, the set $\\mathscr{W}_p(X)$\nis equal to the set $\\mathscr{P}(X)$ of all probability measures on $X$.\n\nThe name ``transport plan'' is suggestive: it describes what amount of\nmass is transported from one region to another.\n\nThe function $\\mathop \\mathrm{W}\\nolimits_p$ is a metric, called the ($L^p$) Wasserstein metric, \nand when $X$ is compact it induces the weak topology. We sometimes\ndenote $\\mathop \\mathrm{W}\\nolimits_2$ simply by $\\mathop \\mathrm{W}\\nolimits$.\n\n\\section{Metric mean dimension}\\label{sec:entropy}\n\nMetric mean dimension is a metric invariant of dynamical systems introduced by\nLindenstrauss and Weiss \\cite{Lindenstrauss-Weiss}, that refines topological entropy\nfor infinite-entropy systems.\n\nLet us briefly recall the definitions. 
Given a\nmap $f:X\\to X$ acting on a metric space, for any\n$n\\in\\mathbb{N}$ one defines a new metric on $X$ by\n\\[d_n(x,y):= \\max\\{d(f^k(x),f^k(y));0\\leqslant k\\leqslant n\\}.\\]\nGiven $\\varepsilon>0$, one says that a subset $S$ of $X$ is\n$(n,\\varepsilon)$-separated if $d_n(x,y)\\geqslant \\varepsilon$ whenever\n$x\\neq y\\in S$. Denoting by $N(f,\\varepsilon,n)$ the maximal size of an \n$(n,\\varepsilon)$-separated set, the topological entropy of $f$ is defined as\n\\[h(f) := \\lim_{\\varepsilon\\to 0} \\limsup_{n\\to+\\infty} \n\\frac{\\log N(f,\\varepsilon,n)}{n}.\\]\nNote that this limit exists since $\\limsup_{n\\to+\\infty} \\frac1n \\log N(f,\\varepsilon,n)$\nis nonincreasing in $\\varepsilon$.\nThe adjective ``topological'' is relevant since $h(f)$ does not depend upon the\ndistance on $X$, but only on the topology it defines.\nThe topological entropy is in some sense a global measure of the dependence on initial conditions\nof the considered dynamical system. \nThe map $\\Phi_d$ is a classical example, whose topological entropy is $\\log d$.\n\nNow, the metric mean dimension is\n\\[\\mathop \\mathrm{mdim}_M\\nolimits(f,d) := \\liminf_{\\varepsilon\\to 0} \\limsup_{n\\to+\\infty} \n \\frac{\\log N(f,\\varepsilon,n)}{n|\\log\\varepsilon|}.\\]\nIt is zero as soon as the topological entropy is finite. Note that this quantity\ndoes depend upon the metric; here we shall use $\\mathop \\mathrm{W}\\nolimits_p$.\nLindenstrauss and Weiss define the metric mean dimension using\ncovering sets rather than separated sets, but this does not matter since\ntheir sizes are comparable.\n\nLet us prove Theorem \\ref{theo:mdim}:\nthe metric mean dimension of $\\Phi_{d\\#}$ is at least $p(d-1)$ when\n$\\mathscr{P}(\\mathbb{S}^1)$ is endowed with the $\\mathop \\mathrm{W}\\nolimits_p$ metric.\nIn another paper \\cite{Kloeckner2}, we prove the same kind of result,\nreplacing $\\Phi_d$ by any map having positive entropy. 
However,\nTheorem \\ref{theo:mdim} has a better constant and its proof is simpler.\n\n\\begin{proof}[Proof of Theorem \\ref{theo:mdim}]\nTo construct\na large $(n,\\varepsilon)$-separated set, we proceed as follows: we start with the point\n$\\delta_0$, and choose an $\\varepsilon$-separated set of its inverse images. Then we inductively\nchoose $\\varepsilon$-separated sets of inverse images of each element of the \npreviously defined set.\nDoing this, we need not control the distance between inverse images of two different elements.\n \nLet $k\\gg 1$ and $\\alpha>0$ be integers; $\\varepsilon$ will be exponential in $-k$. Let\n$A_k$ be the set of all $\\mu\\in\\mathscr{P}(\\mathbb{S}^1)$ such that $\\mu((1-2^{-k},1))=0$\nand $\\mu([0,1\/d])\\geqslant 1\/2$. These conditions are designed to bound from\nbelow the distances between the antecedents to be constructed: a given amount \nof mass (second condition) will have to travel a given distance (first\ncondition).\n\nAn element $\\mu\\in A_k$ decomposes as $\\mu=\\mu_h+\\mu_t$ where\n$\\mu_h$ is supported on $[0,1-d2^{-k}]$ and $\\mu_t$ is supported\non $(1-d2^{-k},1-2^{-k})$. Let $e_1,\\ldots, e_d$ be the right inverses to\n$\\Phi_d$ taking values in $[0,1\/d), [1\/d,2\/d),\\ldots, [(d-1)\/d,1)$ respectively.\nFor all integer tuples $\\ell=(\\ell_1,\\ldots,\\ell_d)$ such that $\\ell_1\\geqslant 2^{\\alpha k-1}$\nand $\\sum \\ell_i=2^{\\alpha k}$, define\n\\[\\mu_\\ell=e_{1\\#}(\\ell_1 2^{-\\alpha k}\\mu_h+\\mu_t)+\\sum_{i>1} e_{i\\#}(\\ell_i 2^{-\\alpha k}\\mu_h)\\]\n(see figure \\ref{fig:antecedents}, which illustrates the case $d=2$).\nIt is a probability measure on $\\mathbb{S}^1$,\nlies in $A_k$, and satisfies $\\Phi_{d\\#}(\\mu_\\ell)=\\mu$. Moreover, if $\\ell'\\neq\\ell$\nthen any transport plan from $\\mu_\\ell$ to $\\mu_{\\ell'}$ has to move a \nmass at least $2^{-\\alpha k-1}$ by a distance at least $2^{-k}d^{-1}$. 
Therefore,\n\\[\\mathop \\mathrm{W}\\nolimits_p(\\mu_\\ell,\\mu_{\\ell'})\\geqslant d^{-1}2^{-k(\\alpha\/p+1)-1\/p}.\\]\n\n\\begin{figure}[htp]\\begin{center}\n\\input{antecedents.pstex_t}\n\\caption{Construction of separated antecedents of a given measure.}\n\\label{fig:antecedents}\n\\end{center}\\end{figure}\n\nLet $\\varepsilon=d^{-1}2^{-k(\\alpha\/p+1)-1\/p}$\nand define $S_n$ inductively as follows.\nFirst, $S_0=\\{\\delta_0\\}$. Given $S_n\\subset A_k$, $S_{n+1}$\nis the set of all $\\mu_\\ell$ constructed above, where $\\mu$ runs through\n$S_n$.\n\nBy construction, $S_{n+1}$ has at least $C2^{\\alpha k(d-1)}$ times\nas many elements as $S_n$, for some constant $C$ depending only on $d$. \nThen $S_n$ has at least $C^n 2^{n\\alpha k(d-1)}$ elements.\nLet $\\mu$, $\\nu$\nbe two distinct elements of $S_n$ and $m$ be the greatest index such that\n$\\Phi_{d\\#}^m\\mu\\neq \\Phi_{d\\#}^m\\nu$. Since $\\Phi_{d\\#}^n\\mu=\\delta_0=\\Phi_{d\\#}^n\\nu$,\n$m$ exists and is at most $n-1$. The measures $\\mu'=\\Phi_{d\\#}^m\\mu$ and \n$\\nu'=\\Phi_{d\\#}^m\\nu$ both lie in $S_{n-m}$ and have the same image. Therefore,\nthey are $\\varepsilon$-separated. 
This shows that $S_n$ is $(n,\\varepsilon)$-separated.\n\nIt follows that \n\\begin{eqnarray*}\n\\frac{\\log N(\\Phi_{d\\#},\\varepsilon,n)}{n|\\log\\varepsilon|}\n &\\geqslant& \n \\frac{C}{|\\log\\varepsilon|}+\\frac{\\alpha(d-1)}{\\frac{\\alpha}{p}+1}\n \\left(\\frac{-\\frac1p-\\frac{\\log d}{\\log 2}}{|\\log\\varepsilon|}+1 \\right) \\\\\n &\\geqslant& \\frac{\\alpha(d-1)}{\\frac{\\alpha}{p}+1}(1+o(1))+o(1).\n\\end{eqnarray*}\nIn the case of a general $\\varepsilon$, we get the same bound on\n$\\log N$ up to an additive term $n\\alpha(d-1)\\log 2$, so that\n\\[\\mathop \\mathrm{mdim}_M\\nolimits(\\Phi_{d\\#},\\mathop \\mathrm{W}\\nolimits_p) \\geqslant \\frac{\\alpha(d-1)}{\\frac{\\alpha}{p}+1}.\\]\nBy taking $\\alpha\\to\\infty$ we get $\\mathop \\mathrm{mdim}_M\\nolimits(\\Phi_{d\\#},\\mathop \\mathrm{W}\\nolimits_p)\\geqslant p(d-1)$.\n\\end{proof}\n\n\n\\section{The first-order differential structure on measures}\n\nIn this section we give a short account of the work of Gigli \\cite{Gigli}\nin the particular case of the circle.\nNote that considering the Wasserstein space of a Riemannian manifold as an\ninfinite-dimensional Riemannian manifold dates back to the work\nof Otto \\cite{Otto}. \nHowever, in many ways it remained a formal point of view until the work of Gigli.\n\n\\subsection{Why bother with this setting?}\n\nBefore getting started, let us explain why we do not simply use the natural affine\nstructure on $\\mathscr{P}(\\mathbb{S}^1)$,\nthe tangent space at a point simply consisting of signed measures having\nzero total mass. Similarly, one could think it simpler to just\ntake the smooth functions on $\\mathbb{S}^1$ as coordinates to define a smooth structure\non $\\mathscr{P}(\\mathbb{S}^1)$. 
\n\nThe first argument against these points of view is that optimal transportation is\nabout pushing mass, not (directly) about recording the variation of density at each point.\n\nMore importantly, these simple ideas would make a path of the form \n$\\gamma_t=t\\delta_x+(1-t)\\delta_y$ smooth. However, the Wasserstein\ndistance between $\\gamma_t$ and $\\gamma_s$ has the order of $\\sqrt{|t-s|}$,\nso that $\\gamma_t$ is not rectifiable (it has infinite length)! This also holds,\nfor example, for convex sums of measures with different supports.\n\nOne could argue that the previous paths can be made Lipschitz by using $\\mathop \\mathrm{W}\\nolimits_1$\ninstead of $\\mathop \\mathrm{W}\\nolimits_2$, so let us give another argument:\nin the affine structure, the Lebesgue measure does not have a tangent space but only a \ntangent cone, since $\\lambda+t\\mu$ is not a positive measure for all small\n$t$ unless $\\mu\\ll\\lambda$. If one wants to consider singular measures in the same\nsetting as regular ones, the $\\mathop \\mathrm{W}\\nolimits_2$ setting seems to be the right tool.\n\nNote that it will appear that the differential structure on $\\mathscr{P}(\\mathbb{S}^1)$ depends\nnot only on the differential structure of the circle, but also on its metric.\nThis should not be considered surprising: in finite dimension, the fact that the\ndifferential structures are defined independently of any reference to a metric comes\nfrom the equivalence of norms in Euclidean space; here, in infinite dimension, even the simple\nformula $\\mathop \\mathrm{W}\\nolimits(f(\\mu+tv),f(\\mu)+tD_xf(v)) = o(t)$ involves a metric in a crucial way.\n\nOne could also be\nsurprised that this differential structure involving the metric of the circle could\nbe preserved by expanding maps of non-constant derivative. 
This point shall be\nclarified in Section \\ref{sec:general}; see Proposition \\ref{prop:centering} and the\ndiscussion before it.\n\n\\subsection{The exponential map}\n\n Note that, as is customary\nin these topics, by a geodesic we mean a non-constant globally minimizing geodesic segment\nor line, parametrized proportionally to arc length.\n\nGiven $\\mu\\in\\mathscr{P}(\\mathbb{S}^1)$, there are several equivalent ways to define its\ntangent space $T_\\mu$. In fact, $T_\\mu$ has a vectorial structure only when \n$\\mu$ is atomless; otherwise it is only a tangent cone. Note that the atomless\ncondition has to be replaced by a more intricate one in higher dimension.\n\nThe most Riemannian way to construct $T_\\mu$ is to use the exponential map.\nLet $\\mathscr{P}(T\\mathbb{S}^1)_\\mu$ be the set of probability measures \non the tangent bundle\n$T\\mathbb{S}^1$ that are mapped to $\\mu$ by the canonical projection.\n\nGiven $\\xi,\\zeta\\in \\mathscr{P}(T\\mathbb{S}^1)_\\mu$, one defines\n\\[\\mathop \\mathrm{W}\\nolimits_\\mu(\\xi,\\zeta) = \\left(\\inf_\\Pi \\int_{T\\mathbb{S}^1\\times T\\mathbb{S}^1} d^2(x,y)\n \\,\\Pi(dx dy)\\right)^{1\/2}\\]\nwhere $d$ is any metric whose restriction to the fibers is the Riemannian\ndistance (here the fibers are isometric to $\\mathbb{R}$), and the infimum \nis over transport plans $\\Pi$ that are mapped to the diagonal plan\n$(\\mathrm{Id},\\mathrm{Id})_\\#\\mu$ by the canonical projection on $\\mathbb{S}^1\\times \n\\mathbb{S}^1$. This means that we allow mass to move only \\emph{along}\nthe fibers. Equivalently, one can disintegrate $\\xi$ and $\\zeta$ along $\\mu$,\nwriting $\\xi=\\int\\xi_x \\,\\mu(dx)$ and $\\zeta=\\int \\zeta_x \\,\\mu(dx)$, with\n$(\\xi_x)_{x\\in\\mathbb{S}^1}$ and $(\\zeta_x)_{x\\in\\mathbb{S}^1}$ two families\nof probability measures on $T_x\\mathbb{S}^1\\simeq \\mathbb{R}$ uniquely\ndefined up to sets of measure zero. 
Then one gets\n\\[\\mathop \\mathrm{W}\\nolimits_\\mu^2(\\xi,\\zeta)=\\int_{\\mathbb{S}^1} \\mathop \\mathrm{W}\\nolimits^2(\\xi_x,\\zeta_x) \\,\\mu(dx)\\]\nwhere one integrates the squared Wasserstein metric defined with respect to the\nRiemannian metric, that is $|\\cdot|$.\n\nThere is a natural cone structure on $\\mathscr{P}(T\\mathbb{S}^1)_\\mu$, extending the scalar \nmultiplication on the tangent bundle: letting $D_r$ be the \ndilation of ratio $r$ along fibers, acting on $T\\mathbb{S}^1$, one defines \n$r\\cdot \\xi:=(D_r)_\\#\\xi$.\n\nThe exponential map $\\exp:T\\mathbb{S}^1\\to \\mathbb{S}^1$ now gives a map\n\\[\\exp_\\# : \\mathscr{P}(T\\mathbb{S}^1)_\\mu\\to\\mathscr{P}(\\mathbb{S}^1).\\]\nThe point is that it is not true that for every \n$\\xi\\in \\mathscr{P}(T\\mathbb{S}^1)_\\mu$ there is an $\\varepsilon>0$ such that \n$t\\mapsto \\exp_\\#(t\\cdot\\xi)$ defines a geodesic of $\\mathscr{P}(\\mathbb{S}^1)$ on \n$[0,\\varepsilon)$. Consider for example $\\mu=\\lambda$, and let $\\xi$ be defined\nby $\\xi_x=\\delta_1$ for all $x$. 
Then $\\exp_\\#(t\\cdot\\xi)=\\lambda$ for all $t$: this rotates all\nthe mass, while leaving it in place would be more efficient.\n\nThe first definition is that $T_\\mu$ is the closure in $\\mathscr{P}(T\\mathbb{S}^1)_\\mu$ of the subset\nof all $\\xi$ such that $\\exp_\\#(t\\cdot\\xi)$ defines a geodesic for small \nenough $t$.\n\n\n\\subsection{Another definition of the tangent space}\n\nLet us now give another definition, assuming $\\mu$ is atomless.\nWe denote by $|\\cdot|_{L^2(\\mu)}$ the norm defined by the measure $\\mu$, and\nby $|\\cdot|_2$ the usual $L^2$ norm defined by the Lebesgue measure\n$\\lambda$.\n\nGiven a smooth \nfunction $f:\\mathbb{S}^1\\to\\mathbb{R}$, its gradient \n$\\nabla f:\\mathbb{S}^1\\to T\\mathbb{S}^1$ can be used to push $\\mu$\nto an element $\\xi_f=(\\nabla f)_\\#\\mu$ of $\\mathscr{P}(T\\mathbb{S}^1)_\\mu$.\n This element has the \nproperty that $\\exp_\\#(t\\cdot\\xi_f)=(\\mathrm{Id}+t\\nabla f)_\\#\\mu$ defines a geodesic for small \nenough $t$, with a time bound depending on \n$\\nabla f$ and not on $\\mu$. More precisely,\nthe geodesicness holds as soon as no mass is moved\na distance more than $1\/2$ and no element of mass crosses another one,\nand these conditions translate to $t (\\nabla f)'(x)\\geqslant -1$ for all\n$x$. This is a particular case of Kantorovich duality, see for example\n\\cite{Villani2}, especially figure 5.2.\n\nNow, let $L^2_0(\\mu)$ be the set of all vector fields $v\\in L^2(\\mu)$\nthat are $L^2(\\mu)$-approximable by gradients of smooth functions.\nThen the image of the map $v\\mapsto (\\mathrm{Id},v)_\\#\\mu$ defined on $L^2_0(\\mu)$ \nwith values in $\\mathscr{P}(T\\mathbb{S}^1)_\\mu$ is precisely $T_\\mu$.\nIn particular, this means that as soon as $\\mu$ is atomless, the disintegration\n$(\\xi_x)_x$ of an element of $T_\\mu$ writes $\\xi_x=\\delta_{v(x)}$ for some\nfunction $v$ and $\\mu$-almost all $x$. 
Moreover, $v$ is $L^2(\\mu)$-approximable\nby gradients of smooth functions; note that among smooth vector fields,\ngradients are characterized by the condition $\\int \\nabla f \\,\\lambda = 0$.\nWe shall freely identify the tangent space with $L^2_0(\\mu)$ whenever $\\mu$\nhas no atom.\n\nIn the important case when $\\mu=\\rho\\lambda$ for some continuous density $\\rho$,\na vector field $v\\in L^2(\\mu)$ is approximable by gradients of smooth functions\nif and only if $\\int v\\,\\lambda = 0$.\nWe get that in this case, $T_\\mu$ can be \nidentified with the set of functions $v:\\mathbb{S}^1\\to \\mathbb{R}$\nthat are square-integrable with respect to $\\mu$ and of mean zero \nwith respect to $\\lambda$. When $\\mu$\nis the uniform measure, we write $L^2_0$ instead of $L^2_0(\\lambda)$.\nNote that if $v\\in L^2(\\mu)$ has neither its negative part nor its\npositive part $\\lambda$-integrable, then it can be approximated in\n$L^2(\\mu)$ norm by gradients of smooth functions, and that\nif $\\mu$ does not have full support, then $L^2_0(\\mu)=L^2(\\mu)$.\n\n\nFor simplicity, given $v\\simeq \\xi\\in L^2_0(\\mu)\\simeq T_\\mu$ we shall denote\n$\\exp_\\#(t\\cdot\\xi)$ by $\\mu+tv$. In other words,\n$\\mu+tv=(\\mathrm{Id}+tv)_\\#\\mu$.\n\nThis point of view is convenient, in particular because the distance between\nexponential curves issued from $\\mu$ can be estimated easily:\n\\[\\mathop \\mathrm{W}\\nolimits(\\mu+tv,\\mu+tw)\\underset{t\\to 0}\\sim t|v-w|_{L^2(\\mu)}.\\]\n Note that when $v$ is differentiable,\nthen by geodesicness for $t$ small enough we have\n\\[\\mathop \\mathrm{W}\\nolimits(\\mu,\\mu+tv) = t |v|_{L^2(\\mu)}\\]\nand not merely an equivalent. 
This will prove useful in the next subsection,\nwhere several measures and vector fields will be involved.\n\n\n\\subsection{Two properties}\n\nWe shall prove that the exponential map can be used to construct\nbi-Lipschitz embeddings of small, finite-dimensional balls into $\\mathscr{P}(\\mathbb{S}^1)$,\nthen we shall study how the density of an absolutely continuous\nmeasure evolves when pushed by a small vector field.\n\n\nThe following natural result shall be used in the proof of Theorem \n\\ref{theo:almost-invariant}.\n\\begin{prop}\\label{prop:embedding}\nGiven $\\mu\\in \\mathscr{P}(\\mathbb{S}^1)$ and $(v_1,\\ldots,v_n)$\ncontinuous, linearly independent vector fields in $L^2_0(\\mu)$,\nthere is an $\\eta>0$ such that the map $B^n(0,\\eta)\\to\\mathscr{P}(\\mathbb{S}^1)$ defined\nby $E(a)=\\mu+\\sum a_i v_i$ is bi-Lipschitz.\n\\end{prop}\nThe difficulty is only technical: we already know that $E$ is bi-Lipschitz\nalong rays, and we need some uniformity in the distance estimates to prove\nthe global bi-Lipschitzness. The continuity hypothesis is not satisfactory\nbut is all we need in the sequel.\n\nNote that we did not assume that $\\mu$ has no atom; when it has, $L^2_0(\\mu)$\n(still defined as the closure in $L^2(\\mu)$ of gradients of smooth functions)\nis not the tangent cone $T_\\mu\\mathscr{P}(\\mathbb{S}^1)$ but only a part of it. Note that\nif $v$ is a $C^1$ vector field of vanishing $\\lambda$-mean,\n$(\\mu+tv)_t$ still defines a geodesic as long as $tv'\\geqslant -1$.\n\n\\begin{proof}\nLet $a,b\\in B^n$. The plan $(\\mathrm{Id}+\\sum a_i v_i,\\mathrm{Id}+\\sum b_i v_i)_\\#\\mu$\ntransports $E(a)$ to $E(b)$\nat a cost \n\\[\\left|\\sum (a_i-b_i)v_i\\right|_{L^2(\\mu)}^2 \n\\leqslant \\left(\\sum |v_i|_{L^2(\\mu)}^2\\right)\\, |a-b|^2\\]\nso that $E$ is Lipschitz.\n\nUp to a linear change of coordinates, we assume that the $v_i$ form\nan orthonormal family of $L^2_0(\\mu)$. 
To bound the distance between\n$E(a)$ and $E(b)$ from below, we shall design a vector field $\\tilde v$\nsuch that pushing $E(a)$ by $\\tilde v$ gives a measure close to \n$E(b)$.\n\nChoose $\\varepsilon>0$\nsuch that for all $i$ we have \n\\[|x-y|\\leqslant\\varepsilon \\Rightarrow |v_i(x)-v_i(y)|\\leqslant \\frac{1}{4\\sqrt{n}}.\\]\nAssume moreover $\\varepsilon<1\/8$.\n\nLet $w_i$ be gradients of smooth functions such that\n$|v_i-w_i|_\\infty\\leqslant \\varepsilon$.\nLet $\\eta>0$ be small enough to ensure $2\\sqrt{n}\\eta\\leqslant 1$ and\n$w_i'\\geqslant -(4n\\eta)^{-1}$ for all $i$.\n\nFix $a,b\\in B^n(0,\\eta)$ and introduce two maps defined by\n$\\psi(y)=y+\\sum a_i v_i(y)$ and $\\tilde\\psi(y)=y+\\sum a_i w_i(y)$.\nNote that $\\tilde\\psi'\\geqslant 1\/2$, so that $\\tilde\\psi$ is\na diffeomorphism and $\\tilde\\psi^{-1}$ is $2$-Lipschitz. Let\n$\\tilde v = \\sum (b_i-a_i)v_i\\circ\\tilde\\psi^{-1}$.\n\nOn the one hand, given any $y\\in\\mathbb{S}^1$, we have\n\\[|\\tilde\\psi(y)-\\psi(y)|\\leqslant |a|\\left(\\sum(w_i(y)-v_i(y))^2\\right)^{1\/2} \n \\leqslant |a|\\sqrt{n}\\varepsilon\\]\nso that\n\\[|y-\\tilde\\psi^{-1}\\psi(y)|\\leqslant 2\\sqrt{n}|a|\\varepsilon\\leqslant \\varepsilon\\]\nand\n\\[\\left|v_i(\\tilde\\psi^{-1}\\psi(y))-v_i(y)\\right|\\leqslant\\frac1{4\\sqrt{n}}.\\]\nIt follows that\n\\[\\left|\\sum(b_i-a_i)(v_i(\\tilde\\psi^{-1}\\psi(y)) -v_i(y))\\right|\\leqslant\\frac14|b-a|,\\]\nand therefore\n\\begin{equation}\n\\left|\\tilde v\\circ\\psi-\\sum(b_i-a_i)v_i\\right|_{L^2(\\nu)}\\leqslant\\frac14|b-a|\n\\label{eq:lip1}\n\\end{equation}\nwhere $\\nu$ could be any probability measure. 
We shall take\n$\\nu=\\mu+\\sum a_i v_i$.\n\nSimilarly,\n\\begin{eqnarray}\n|\\tilde v|_{L^2(\\nu)} &=& \\left(\\int \\tilde v^2(x) \\,(\\psi_\\#\\mu)(dx)\\right)^{1\/2} \\nonumber\\\\\n &=& \\left(\\int \\tilde v^2(\\psi x)\\,\\mu(dx)\\right)^{1\/2} \\nonumber\\\\\n &=& \\left|\\sum(b_i-a_i)v_i\\circ\\tilde\\psi^{-1}\\circ\\psi\\right|_{L^2(\\mu)} \\nonumber\\\\\n &\\geqslant& \\frac34\\left|\\sum(b_i-a_i)v_i\\right|_{L^2(\\mu)} \\nonumber\\\\\n|\\tilde v|_{L^2(\\nu)} &\\geqslant& \\frac34 |b-a|.\n\\end{eqnarray}\n\nOn the other hand, we have\n\\[ \\mathop \\mathrm{W}\\nolimits\\left(\\mu+\\sum a_i v_i, \\mu+\\sum b_i v_i\\right)\\geqslant\n \\mathop \\mathrm{W}\\nolimits(\\nu,\\nu+\\tilde v)-\\mathop \\mathrm{W}\\nolimits\\left(\\nu+\\tilde v,\\mu+\\sum b_i v_i\\right).\\]\n\nLet $\\tilde w=\\sum(b_i-a_i)w_i\\circ\\tilde\\psi^{-1}$. We have\n$|\\tilde v-\\tilde w|_\\infty\\leqslant \\varepsilon |b-a|$.\nIn particular, $|\\tilde w|_{L^2(\\nu)}\\geqslant\\frac58|b-a|$.\nThe choice of $\\eta$ ensures that $\\tilde w'\\geqslant-1$, so that\n\\[\\mathop \\mathrm{W}\\nolimits(\\nu,\\nu+\\tilde w)=|\\tilde w|_{L^2(\\nu)}\\geqslant \\frac58|b-a|.\\]\nSince $\\mathop \\mathrm{W}\\nolimits(\\nu+\\tilde v,\\nu+\\tilde w)\\leqslant |\\tilde v-\\tilde w|_\\infty$\nwe get\n\\begin{equation}\n\\mathop \\mathrm{W}\\nolimits(\\nu,\\nu+\\tilde v)\\geqslant \\frac12|b-a|.\n\\end{equation}\nFinally, since $\\nu+\\tilde v= (\\psi+\\tilde v\\circ\\psi)_\\#\\mu$,\n\\eqref{eq:lip1} shows that \n\\[\\mathop \\mathrm{W}\\nolimits\\left(\\nu+\\tilde v,\\mu+\\sum b_i v_i\\right)\\leqslant\\frac14|b-a|\\]\nso that\n\\[ \\mathop \\mathrm{W}\\nolimits\\left(\\mu+\\sum a_i v_i, \\mu+\\sum b_i v_i\\right)\\geqslant \\frac14|b-a|.\\]\n\\end{proof}\n\n\n\\begin{prop}\\label{prop:density}\nLet $\\rho$ be a $C^1$ density and $v:\\mathbb{S}^1\\to \\mathbb{R}$\nbe a $C^1$ vector field. 
Then for $t\\in\\mathbb{R}$ small enough\n$\\rho\\lambda+tv$ is absolutely continuous and its density\n$\\rho_t$ is continuous and satisfies\n\\[\\rho_t(x) = \\rho(x) -t(\\rho v)'(x) + o(t)\\]\nwhere the remainder term is uniform in $x$.\n\\end{prop}\n\n\\begin{proof}\nLet $t$ be small enough so that $\\mathrm{Id}+tv$ is a diffeomorphism.\nThen for every integrable function $f$, one has\n\\begin{eqnarray*}\n\\int f(x) (\\rho\\lambda+tv)(dx) &=& \\int f(x) (\\mathrm{Id}+tv)_\\#(\\rho\\lambda)(dx)\\\\\n &=& \\int f(x+tv(x)) \\rho(x) dx\\\\\n &=& \\int f(y) \\left(\\frac{\\rho}{1+tv'}\\right)\\circ(\\mathrm{Id}+tv)^{-1}(y) dy\n\\end{eqnarray*}\nby a change of variables. It follows that \n\\begin{eqnarray*}\n\\rho_t &=& \\frac{\\rho}{1+tv'}\\circ(\\mathrm{Id}+tv)^{-1}\\\\\n &=& \\left(\\rho(1-tv')\\right)\\circ(\\mathrm{Id}-tv)+o(t)\\\\\n &=& \\rho-t(\\rho'v+v'\\rho)+o(t)\n\\end{eqnarray*}\nwhere the $o(t)$ term depends upon $\\rho$\nand $v$ but is uniform in $x$.\n\\end{proof}\nNote that the $o(t)$ depends in particular on the\nmoduli of continuity of $v'$ and $\\rho'$ and need not\nbe an $O(t^2)$ unless $v$ and $\\rho$ are $C^2$.\n\n\n\\section{First-order dynamics in the model case}\\label{sec:firstorder}\n\nIn this section we show that $\\Phi_{d\\#}$ is (weakly) differentiable at the point \n$\\lambda$. 
Its derivative is an\nexplicit, simple endomorphism of a Hilbert space, and we shall give a brief\nstudy of its spectrum.\n\n\\begin{theo}\\label{theo:differential}\nLet $\\mathscr{L}_d:L^2_0\\to L^2_0$ be the linear operator defined by\n\\[\\mathscr{L}_d v(x)= v(x\/d)+v((x+1)\/d)+\\dots+v((x+d-1)\/d).\\]\n Then $\\mathscr{L}_d$ is the\nderivative of $\\Phi_{d\\#}$ at $\\lambda$ in the following sense:\nfor all $v\\in L^2_0\\simeq T_\\lambda$, one has\n\\[\\mathop \\mathrm{W}\\nolimits\\left(\\Phi_{d\\#}(\\lambda+tv),\\lambda+t\\mathscr{L}_d(v)\\right)=o(t).\\]\n\\end{theo}\nFirst, we recognize in $\\mathscr{L}_d$ a multiple of\nthe Perron-Frobenius operator of $\\Phi_d$,\nthat is the adjoint of the map $u\\mapsto u\\circ \\Phi_d$, acting on the space $L^2_0$.\nSecond, we only get a G\\^ateaux derivative, whereas one would prefer a Fr\\'echet one,\nthat is a formula of the kind\n\\[\\mathop \\mathrm{W}\\nolimits(\\Phi_{d\\#}(\\lambda+v),\\lambda+\\mathscr{L}_d(v))=o(|v|).\\]\nHowever, we shall see that such a uniform bound does not\nhold. \nNevertheless, one easily gets uniform remainder terms in restriction to any finite-dimensional\nsubspace of $L^2_0$.\n\n\\subsection{Differentiability of $\\Phi_{d\\#}$}\n\nThe main point to prove in the above theorem is the following estimate.\n\\begin{lemm}\\label{lemm:composition}\nGiven a density $\\rho$, vector fields $v_1,\\ldots,v_n\\in L^2(\\rho\\lambda)$\nand positive numbers\n$\\alpha_1,\\ldots,\\alpha_n$ adding up to $1$, one has\n\\[\\mathop \\mathrm{W}\\nolimits\\left(\\rho\\lambda+t\\sum_i \\alpha_i v_i\\,,\\,\n \\sum_i\\alpha_i(\\rho\\lambda+tv_i)\\right)=o(t).\\]\n\\end{lemm}\nWe could deduce this result from Proposition \\ref{prop:density},\nbut for the sake of diversity let us give a different proof,\nwhich is almost contained in Figure \\ref{fig:transport}.\n\n\\begin{proof}\nWe prove the case $n=2$, since the general case can then be deduced by a straightforward\ninduction.\nLet $\\varepsilon$ be any positive number. 
Let $\\bar \\rho$, $\\bar v_1$ \nand $\\bar v_2$ be a piecewise constant density and two piecewise constant\nvector fields that approximate $\\rho$ in $L^1$ norm and\n$v_1$ and $v_2$ in $L^2$ norm:\n$|\\rho-\\bar\\rho|_1\\leqslant \\varepsilon^2$ and\n$|v_i-\\bar v_i|_{L^2(\\rho\\lambda)}\\leqslant\\varepsilon$.\n\nThe measure $((\\mathrm{Id}+v_i)\\times(\\mathrm{Id}+\\bar v_i))_\\#\\rho\\lambda$\nis a transport plan from \n$\\rho\\lambda+v_i$ to $\\rho\\lambda+\\bar v_i$, whose cost is \n$|v_i-\\bar v_i|_{L^2(\\rho\\lambda)}^2$.\nThis shows that $\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda+v_i,\\rho\\lambda+\\bar v_i)\\leqslant \\varepsilon$.\nA transport plan $\\Pi$ from $\\rho\\lambda$ to $\\bar\\rho\\lambda$\nthat leaves the common mass in place and transports the rest in any way\nmoves a mass $\\frac12|\\rho-\\bar\\rho|_1$ by a distance at most $\\frac12$,\nthus $\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda,\\bar\\rho\\lambda)\\leqslant 2^{-3\/2}\\varepsilon$.\nNow $\\big(\\mathrm{Id}+\\bar v_i,\\mathrm{Id}+\\bar v_i\\big)_\\#\\Pi$ is a transport\nplan from $\\rho\\lambda+\\bar v_i$ to $\\bar\\rho\\lambda+\\bar v_i$\nwith the same cost as $\\Pi$, so that \n$\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda+\\bar v_i,\\bar\\rho\\lambda+\\bar v_i)\n\\leqslant 2^{-3\/2}\\varepsilon$. 
It follows that\n\\[\\mathop \\mathrm{W}\\nolimits\\left(\\sum\\alpha_i(\\rho\\lambda+ t v_i),\n \\sum\\alpha_i(\\bar\\rho\\lambda+t \\bar v_i)\\right)\\leqslant C\\varepsilon t\\]\nfor a constant $C=2^{-3\/2}+1$, and similarly\n\\[\\mathop \\mathrm{W}\\nolimits\\left(\\rho\\lambda+\\sum\\alpha_i t v_i,\n \\bar\\rho\\lambda+\\sum\\alpha_it \\bar v_i\\right)\\leqslant C\\varepsilon t.\\]\n\nWe can moreover assume that $\\bar\\rho$ and $\\bar v_i$ are constant on each interval\nof the form $[i\/k,(i+1)\/k)$ for some fixed $k$ (depending upon\n$\\rho$, $v_1$, $v_2$ and $\\varepsilon$).\n\nTo see what happens on such an interval $I$, temporarily denoting by\n$\\rho$, $v_1$ and $v_2$\nthe values taken by the functions $\\bar\\rho$ and $\\bar v_i$ on $I$, let us \nconstruct for $t$ small enough an economical transport plan from \n$(\\mathrm{Id}+t(\\alpha_1v_1+\\alpha_2v_2))_\\# \\rho\\lambda_{|I}$ to \n$\\alpha_1(\\mathrm{Id}+tv_1)_\\#\\rho\\lambda_{|I}+\\alpha_2(\\mathrm{Id}+tv_2)_\\#\\rho\\lambda_{|I}$. \nSince the intervals $(\\mathrm{Id}+tv_1)(I)$ and $(\\mathrm{Id}+tv_2)(I)$ meet for $t$ small enough, \none can simply leave the common mass in \nplace and move at each side a mass $\\alpha_1\\alpha_2\\rho|v_1-v_2|t$\nby a distance at most $|v_1-v_2|t$ (see\nfigure \\ref{fig:transport}; this is not optimal but sufficient for our purpose).\nThis transport plan has a cost of order \n$t^3 \\alpha_1\\alpha_2\\rho|v_1-v_2|^3$. Summing over the intervals, we get a transport plan\nfrom $\\bar\\rho\\lambda+t\\sum\\alpha_i\\bar v_i$ to $\\sum\\alpha_i(\\bar\\rho\\lambda+t\\bar v_i)$\nwhose total cost is $O(t^3)$, so that these two measures are at distance $O(t^{3\/2})$.\nCombining this with the two estimates above gives\n$\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda+t\\sum\\alpha_i v_i, \\sum\\alpha_i(\\rho\\lambda+tv_i))\n\\leqslant 2C\\varepsilon t+O(t^{3\/2})$;\nsince $\\varepsilon$ was arbitrary, this quantity is $o(t)$.\n\\end{proof}\n\n\\section{The case of a general expanding map}\\label{sec:general}\n\nLet now $\\Phi$ be a $C^2$ expanding map of the circle of some degree $d\\geqslant 2$,\nwith $\\Phi'>1$ everywhere. Such a map is a self-covering,\nand has a unique absolutely continuous invariant measure \n(see e.g. \\cite{Katok-Hasselblatt})\nwhich has a positive and $C^1$ density \\cite{Krzyzewski}, denoted by $\\rho$. 
The measure itself is denoted\nby $\\rho\\lambda$.\nNote that as sets, $L^2(\\rho\\lambda)=L^2$, although they differ as Hilbert spaces.\nAll integrals where the variable is implicit are with respect to the Lebesgue measure $\\lambda$.\n\nThe result is as follows.\n\\begin{theo}\\label{theo:expanding}\nThe map $\\Phi_\\#$ has a G\\^ateaux derivative\n$\\mathscr{L} : L^2_0(\\rho\\lambda) \\to L^2_0(\\rho\\lambda)$ at $\\rho\\lambda$,\ngiven by\n\\[\\mathscr{L}v(x) = \\sum_{y\\in\\Phi^{-1}(x)} \\frac{\\rho(y)}{\\rho(x)} v(y)\n -\\frac{\\int{v\\Phi'\\frac{\\rho}{\\rho\\circ\\Phi}}}{\\rho(x)\\int1\/\\rho}.\n\\]\nMoreover the adjoint operator of $\\mathscr{L}$ in $L^2_0(\\rho\\lambda)$\nis given by\n\\[\\mathscr{L}^* u = \\Phi'\\, u\\circ\\Phi.\\]\n\\end{theo}\n\n\\subsection{Proof of Theorem \\ref{theo:expanding}}\n\nFirst, as in the case of $\\Phi_{d\\#}$, Lemma \\ref{lemm:composition} shows that\nfor $v\\in L^2_0(\\rho\\lambda)$,\n\\begin{equation}\n\\mathop \\mathrm{W}\\nolimits\\left(\\Phi_\\#(\\rho\\lambda+tv), \\rho\\lambda+t\\tilde{\\mathscr{L}}v\\right)=o(t)\n\\label{eq:tildoperator}\n\\end{equation}\nwhere \n\\[\\tilde{\\mathscr{L}}v(x) = \\sum_{y\\in\\Phi^{-1}(x)} \\frac{\\rho(y)}{\\rho(x)} v(y)\\]\nis the first term in the expression of $\\mathscr{L}$.\nIn words, each inverse image $y$ of $x$ gives a contribution to the local displacement\nof mass that is proportional to $v(y)$ and to $\\rho(y)$.\n\nThis seems very similar to the case of $\\Phi_{d\\#}$, except that $\\tilde{\\mathscr{L}}$ need\nnot map $L^2_0(\\rho\\lambda)$ to itself! Let us stress, once again, that the condition\nthat $v\\in L^2_0(\\rho\\lambda)$ has mean zero is to be understood \\emph{with respect to\nthe uniform measure} $\\lambda$, since it translates the \\emph{metric} property of being (close to)\nthe gradient of a smooth function. 
This does not prevent Equation \\eqref{eq:tildoperator}\nfrom making sense, but shows that $\\tilde{\\mathscr{L}}v$ cannot be considered as the\ndirectional derivative of $\\Phi_\\#$ since it does not belong to $T_{\\rho\\lambda}=L^2_0(\\rho\\lambda)$.\nIn fact, we shall see that there is another vector field that lies in $L^2_0(\\rho\\lambda)$ and\ngives the same pushed measure (at least at first order).\n\n\\begin{prop}\\label{prop:centering}\nGiven a $C^1$ vector field $\\tilde w\\in L^2(\\rho\\lambda)$, there \nis a $C^1$\nvector field $w\\in L^2_0(\\rho\\lambda)$ such that\n$\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda+t\\tilde w,\\rho\\lambda+tw)=o(t)$. Moreover, $w$ is given by\n\\[w=\\tilde w-\\frac{\\int \\tilde w}{\\rho\\int 1\/\\rho}.\\]\n\\end{prop}\n\n\\begin{proof}\nThis is a direct application of Proposition \\ref{prop:density}: \nwe search for a $w$ such that $(\\rho w)'=(\\rho\\tilde w)'$,\nso that the densities $\\rho_t$ and $\\tilde\\rho_t$ of $\\rho\\lambda+tw$\nand $\\rho\\lambda+t\\tilde w$ are $L^\\infty$ and therefore $L^1$ close\nto each other. 
This ensures that \n$\\mathop \\mathrm{W}\\nolimits(\\rho\\lambda+t\\tilde w,\\rho\\lambda+tw)\\leqslant |\\rho_t-\\tilde\\rho_t|=o(t)$.\n\nBut there exists exactly one vector field $w$ that is $C^1$, \nhas mean zero, and such that\n$(\\rho w)'=(\\rho\\tilde w)'$: it is given by the claimed formula.\n\\end{proof}\n\nNote that we did not bother to prove the uniqueness of $w$: Gigli's construction\nshows that the first-order\nperturbation of the measure (with respect to the $L^2$ Wasserstein metric)\ncharacterizes a tangent vector \nin $T_\\mu$, see Theorem 5.5 in \\cite{Gigli}.\n\nNow if one considers the ``centering'' operator \n$\\mathscr{C}:L^2(\\rho\\lambda)\\to L^2_0(\\rho\\lambda)$\ndefined by \n\\[\\mathscr{C} v= v-\\frac{\\int v}{\\rho\\int 1\/\\rho},\\]\nthe derivative of $\\Phi_\\#$ at $\\rho\\lambda$ is given by the composition \n$\\mathscr{C}\\tilde{\\mathscr{L}}$.\nIndeed, the previous proposition shows this for a $C^1$ argument, but \n$C^1$ vector fields are dense\nin $L^2_0(\\rho\\lambda)$ and the involved operators are continuous\nin the $L^2(\\rho\\lambda)$ topology.\n\nTo get the expression of $\\mathscr{L}$ given in Theorem \\ref{theo:expanding}, one \nonly needs a change of variables:\ndenoting by $\\Phi_i^{-1}$ ($i=1,2,\\ldots, d$) the right inverses to $\\Phi$ \nthat are onto intervals $[a_1=0,a_2), [a_2,a_3), \\ldots,\n[a_d,a_{d+1}=1)$, one has\n\\begin{eqnarray*}\n\\int\\tilde{\\mathscr{L}}v \n &=& \\sum_i \\int \\frac{\\rho\\circ\\Phi_i^{-1}}{\\rho} v\\circ\\Phi_i^{-1} \\\\\n &=& \\sum_i \\int_{a_i}^{a_{i+1}} \\frac{\\rho}{\\rho\\circ\\Phi} v \\Phi' \\\\\n &=& \\int v \\Phi'\\frac{\\rho}{\\rho\\circ\\Phi}.\n\\end{eqnarray*}\n\nThe computation of the adjoint is a similar change of variables that we omit. 
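For the reader's convenience, the omitted computation can be sketched along the same lines (this is our reconstruction, using the same branch-by-branch substitution $x=\Phi(y)$ as above): for $u,v\in L^2(\rho\lambda)$,
\[
\int \tilde{\mathscr{L}}v(x)\, u(x)\, \rho(x)\,dx
 = \sum_i \int \rho\circ\Phi_i^{-1}\; v\circ\Phi_i^{-1}\; u
 = \int \rho\, v\, (u\circ\Phi)\, \Phi',
\]
so that $\tilde{\mathscr{L}}^* u=\Phi'\, u\circ\Phi$ in $L^2(\rho\lambda)$; the centering term of $\mathscr{L}$ pairs with $u$ to give $-\big(\int v\Phi'\frac{\rho}{\rho\circ\Phi}\big)\big(\int u\big)\big/\int 1/\rho$, which vanishes as soon as $u$ has mean zero with respect to $\lambda$.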
\nNote that the adjoint\nof the extension to $L^2(\\rho\\lambda)$ of $\\mathscr{L}$ (with the same expression) is\n\\[u\\mapsto \\Phi'\\, u\\circ\\Phi - \\frac{\\Phi'\\int u}{\\rho\\circ\\Phi \\int{1\/\\rho}}\\]\nand the second term vanishes when $u$ is in $L^2_0(\\rho\\lambda)$. \nThe first term is also the adjoint in $L^2(\\rho\\lambda)$\nof $\\tilde{\\mathscr{L}}$, and this adjoint preserves $L^2_0(\\rho\\lambda)$. \nIn other words,\n$\\mathscr{L}$ is the adjoint in $L^2_0(\\rho\\lambda)$ of the adjoint in\n$L^2(\\rho\\lambda)$ of $\\tilde{\\mathscr{L}}$.\nAn interesting feature of the expression of $\\mathscr{L}^*$ is that it does not \ninvolve the invariant measure.\n\n\\subsection{Spectral study}\n\nEven if $\\mathscr{L}$ is not a multiple of the Perron-Frobenius operator\nof $\\Phi$, its first term $\\tilde{\\mathscr{L}}$\nis a weighted transfer operator, with weight\n$g=\\frac{\\rho}{\\rho\\circ\\Phi}$. According to Theorem 2.5 in \\cite{Baladi},\nevery number of modulus less than $R_g=\\lim_n(\\sup\\tilde{\\mathscr{L}}^n1)^{1\/n}$\nis an eigenvalue of infinite multiplicity with continuous eigenfunctions.\n\n\\begin{prop}\nWe have $R_g\\geqslant \\min \\Phi'>1$, and as a consequence\nthere is an infinite linearly independent family $(v_i)_i$\nof continuous functions in $L^2_0(\\rho\\lambda)$ such\nthat $\\mathscr{L}v_i=v_i$.\n\\end{prop}\n\n\\begin{proof}\nLet $m=\\min \\Phi'$: we have $m>1$ and, since $\\rho\\lambda$ is\ninvariant,\n\\[\\rho(x)=\\sum_{y\\in\\Phi^{-1}(x)}\\frac{\\rho(y)}{\\Phi'(y)}\n \\leqslant \\frac1m\\sum_{y\\in\\Phi^{-1}(x)} \\rho(y). \\]\nIt follows that for all positive continuous functions $f$,\n\\[\\tilde{\\mathscr{L}}f(x)=\\sum_{y\\in\\Phi^{-1}(x)}\\frac{\\rho(y)}{\\rho(x)}f(y)\n \\geqslant m\\inf f;\\]\nin particular, $R_g\\geqslant m>1$ and there is a linearly independent\ninfinite family $u_0,u_1,\\ldots,u_i,\\ldots$\nof continuous $1$-eigenfunctions of $\\tilde{\\mathscr{L}}$.\nIf not all $u_i$ have mean $0$ (with respect to 
Lebesgue's measure\n$\\lambda$), assume the mean of $u_0$ is not zero and\nlet $v_i=u_i-\\alpha_i u_0$ where $\\alpha_i$ is chosen such that\n$\\int v_i\\,d\\lambda=0$. Otherwise, simply put $v_i=u_i$.\n\nNow, since $\\tilde{\\mathscr{L}}v_i=v_i$ and $v_i$ has mean zero,\nwe get $\\mathscr{L} v_i=\\tilde{\\mathscr{L}}v_i=v_i$.\n\\end{proof}\n\nIn the same way, we see that all numbers of modulus less than $m$ are eigenvalues\nof $\\mathscr{L}$ (with infinite multiplicity and continuous eigenfunctions).\n\n\\section{Nearly invariant measures}\n\nIn this section we prove Theorem \\ref{theo:almost-invariant} and Proposition\n\\ref{prop:atomless}.\n\n\\subsection{Construction}\n\nFix some positive integer $n$ and let $v_1,\\ldots,v_n$ be continuous,\nlinearly independent eigenfunctions for \n$\\mathscr{L}=D_{\\rho\\lambda}(\\Phi_\\#)$.\n\nFor all $a=(a_1,\\ldots,a_n)\\in B^n(0,\\eta)$, define\n$E(a)=\\rho\\lambda+\\sum_i a_i v_i\\in\\mathscr{P}(\\mathbb{S}^1)$ and, using Proposition\n\\ref{prop:embedding}, choose $\\eta$ small\nenough to ensure that $E$ is bi-Lipschitz. Then define\n$F(a)=E(\\eta a)$ on the unit ball $B^n$.\n\n\\begin{prop}\nWe have\n\\[\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_\\#(F(a)),F(a)\\big) = o(|a|)\\]\nand, as a consequence, for all $\\varepsilon>0$\nand all integers $K$, there is a radius $r$ \nsuch that for all $k\\leqslant K$\nand all $a\\in B^n(0,r)$ the following holds:\n\\[\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_{\\#}^k(F(a)),F(a)\\big)\\leqslant \\varepsilon |a|.\\]\n\\end{prop}\n\n\\begin{proof}\nSince we have restricted ourselves to a finite-dimensional space, \nwe have $\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_\\#(\\rho\\lambda+\\eta\\sum a_iv_i),\n \\rho\\lambda+\\eta\\sum a_i\\mathscr{L}(v_i)\\big) = o(|a|)$\nand, since $\\mathscr{L}(v_i)=v_i$, we get\n$\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_\\#(F(a)),F(a)\\big) = o(|a|)$.\n\nThe second inequality follows easily. 
The map $\\Phi_\\#$ is $L$-Lipschitz for some $L>1$\n($L=d$ in the model case, $L>d$ otherwise). For all $\\varepsilon>0$\nand for all integers $K$, let $r>0$ be small enough to ensure that\n\\[|a|<r\\Rightarrow \\mathop \\mathrm{W}\\nolimits\\big(\\Phi_\\#(F(a)),F(a)\\big) \n \\leqslant \\frac{L-1}{L^{K}-1}\\varepsilon |a|.\\]\nThen, for all $k\\leqslant K$,\n\\begin{eqnarray*}\n\\mathop \\mathrm{W}\\nolimits\\big(\\Phi_{\\#}^k(F(a)),F(a)\\big) \n &\\leqslant& \\sum_{\\ell=1}^{k} \\mathop \\mathrm{W}\\nolimits\\big(\\Phi_{\\#}^\\ell(F(a)),\n \\Phi_{\\#}^{\\ell-1}(F(a))\\big)\\\\\n &\\leqslant& \\sum_{\\ell=1}^{k} L^{\\ell-1} \\mathop \\mathrm{W}\\nolimits\\big(\\Phi_{\\#}(F(a)),F(a)\\big)\\\\\n &\\leqslant& \\varepsilon |a|.\n\\end{eqnarray*}\n\\end{proof}\n\nThis ends the proof of Theorem \\ref{theo:almost-invariant}. It would be \ninteresting to have explicit control on $r$ in terms of $\\varepsilon$,\n$n$ and $K$, and in particular to replace the $o(|a|)$\nby an $O(|a|^\\alpha)$ for some $\\alpha>1$. This seems difficult because,\neven in the model\ncase where the $v_i$ are explicit, we can approximate them by $C^\\infty$\nvector fields $w_i$ with a good control on $(-w_i')^{-1}$ and\n$w_i'$, but only bad bounds on $w_i''$ (and therefore on the modulus of continuity\nof $w_i'$).\n\n\n\\subsection{Regularity}\n\nLet us prove that given $\\mu$ an atomless measure and\n$v\\in L^2_0(\\mu)$ (or, equivalently, $v\\in L^2(\\mu)$), for all but\ncountably many values of the parameter $t$, the measure $\\mu+tv$\nhas no atom.\n\n\\begin{proof}[Proof of Proposition \\ref{prop:atomless}]\nBy a line in $T\\mathbb{S}^1\\simeq \\mathbb{S}^1\\times \\mathbb{R}$,\nwe mean the image of a non-horizontal line of $\\mathbb{R}^2$ by\nthe quotient map $(x,y)\\mapsto (x \\mod 1,y)$. 
We sometimes\nrefer to a line by an equation of one of its lifts in $\\mathbb{R}^2$.\n\nThe measure $\\mu+tv$ has an atom at $s$ if and only if\nthe measure $\\Gamma=(\\mathrm{Id},v)_\\#\\mu$ defined on $T\\mathbb{S}^1$\ngives a positive mass to the line $(x+ty=s)$. Since\n$\\mu$ has no atom, neither does $\\Gamma$, and since two distinct lines intersect in a\ncountable set, the intersection of two lines is $\\Gamma$-negligible.\nIt follows that there can be at most $n$ different lines that are given\na mass at least $1\/n$ by $\\Gamma$. In particular, at most countably many lines\nare given a positive mass by $\\Gamma$, and the result follows.\n\\end{proof}\n\nFor a general $L^2$ vector field, we cannot hope for more.\nThe following folklore example shows an $L^2_0$ function such that\n$\\lambda +tv$ is singular with respect to $\\lambda$ for almost all $t$.\n\\begin{exem}\nLet $K$ be a four-corner Cantor set of $\\mathbb{R}^2$.\nMore precisely, $A,B,C,D$ are the vertices of a square,\n$S_A,S_B,S_C,S_D$ are the homotheties of coefficient $1\/4$ centered\nat these points, and $K$ is the unique fixed point of the map\ndefined on compact sets $M\\subset\\mathbb{R}^2$ by\n\\[\\mathscr{S}(M)=S_A(M)\\cup S_B(M)\\cup S_C(M)\\cup S_D(M).\\]\nThe Cantor set $K$ projects on a well-chosen line to an interval,\nsee figure \\ref{fig:Cantor}, while in almost all directions\nit projects to $\\lambda$-negligible sets, see e.g. \n\\cite{Peres-Simon-Solomyak} for a proof.\nChoose the square so that $K$ projects vertically to $[0,1]$ (identified\nto $\\mathbb{S}^1$), and for $x\\in[0,1]$ define $v(x)$ as the least $y$\nsuch that $(x,y)\\in K$. Then $v$ is $L^2$ and, up to a vertical translation,\nwe can even assume that $v\\in L^2_0$. 
But for almost all $t$,\nthe measure $\\lambda+tv$ is concentrated on a negligible set.\n\\end{exem}\n\n\\begin{figure}\\begin{center}\n\\includegraphics[scale=.5]{Cantor}\n\\caption{A square Cantor set that projects vertically to a segment,\n but projects in almost all directions to negligible sets.\n On the right, an approximation of the graph of the function $v$.}\n \\label{fig:Cantor}\n\\end{center}\\end{figure}\n\n\n\\subsection*{Acknowledgements} I am indebted to Artur Oscar Lopes for his\nnumerous questions and comments on the various versions of this paper,\nand it is a pleasure to thank him.\n\nI also wish to thank Fr\\'ed\\'eric Faure, \n\\'Etienne Ghys, Nicola Gigli, Antoine Gournay, Nicolas Juillet\nand Herv\\'e Pajot for \ninteresting discussions and their comments on earlier versions of this paper,\nand an anonymous referee for her or his constructive criticism.\n\n\\bibliographystyle{smfalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe spatial localization of neurons in the brain plays a critical role since\ntheir connectivity patterns largely depend on their type and their position\nrelative to nearby neurons and regions (short-range and\/or long-range\nconnections). Interestingly enough, while the neuroscience literature provides\nabundant data about the spatial distribution of neurons in different areas and\nspecies (e.g. \\cite{Pasternak:1975} about the spatial distribution of neurons\nin the mouse barrel cortex, \\cite{McCormick:2000} about the neuron spatial\ndistribution and morphology in the human cortex, \\cite{Blazquez-Llorca:2014}\nabout the spatial distribution of neurons innervated by chandelier cells), the\ncomputational literature exploiting such data is rather scarce and the spatial\nlocalization is hardly taken into account in most neural network models (be they\ncomputational, cognitive or machine learning models). 
One reason may be the\ninherent difficulty in describing the precise topography of a population, such\nthat most of the time, only the overall topology is described in terms of\nlayers, structures or groups with their associated connectivity patterns (one\nto one, one to all, receptive fields, etc.). One can also argue that such\nprecise localization is not necessary because for some models, it is not\nrelevant (machine learning) while for some others, it may be subsumed into the\nnotion of cell assemblies \\cite{Hebb:1949} that represent the spatiotemporal\nstructure of a group of neurons wired and acting together. Considering cell\nassemblies as the basic computational unit, one can consider there is actually\nlittle or no interaction between assemblies of the same group and consequently,\ntheir spatial position is not relevant. However, while cell assemblies allow to\ngreatly simplify models, they also bring implicit limitations, some of which have\nbeen highlighted in \\cite{Nallapu:2017}. To overcome such limitations, we\nthink the spatial localization of neurons is an important criterion worth\nstudying because it could induce original connectivity schemes from which new\ncomputational properties can be derived, as illustrated on figure\n\\ref{fig:diffusion}.\\\\\n\\begin{figure}[htbp]\n \\includegraphics[width=.5\\textwidth]{.\/boots.jpg}\n \\includegraphics[width=.5\\textwidth]{.\/boots-stipple.png}\n \\caption{\\textbf{Stippling.} According to\n Wikipedia\\protect\\footnotemark, {\\em Stippling is the creation of a pattern\n simulating varying degrees of solidity or shading by using small\n dots. Such a pattern may occur in nature and these effects are frequently\n emulated by artists.} The pair of boots (left part) has been first\n converted into a gray-level image and processed into a stippling figure\n (right part) using the weighted Voronoi stippling technique by\n \\cite{Secord:2002} and replicated in \\cite{Rougier:2017}. 
Image from\n \\cite{Rougier:2017} (CC-BY license).}\n \\label{fig:boots}\n\\end{figure}\n\nHowever, before studying the influence of the spatial localisation of neurons,\nit is necessary to first design a method for the arbitrary placement of\nneurons. This article introduces a graphical, scalable and intuitive method for\nthe placement of neurons (or any other type of cells actually) over a\ntwo-dimensional manifold and provides as well the necessary information to\nconnect neurons together using either an automatic mapping or a user-defined\nfunction. This graphical method is based on a stippling technique originating\nfrom the computer graphics domain for non-photorealistic rendering as\nillustrated on figure \\ref{fig:boots}.\n\n\\footnotetext{\n Stippling Wikipedia entry at {\\tt https:\/\/en.wikipedia.org\/wiki\/Stippling}}\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{.\/figure-diffusion.pdf} \n \\caption{\\textbf{Influence of spatial distribution on signal propagation.}\n \\textbf{\\textsf{A.}} A k-nearest neighbours (k=5) connectivity pattern\n shows mid-range connection lengths in low local density areas (left part)\n and short-range connection lengths in high density areas (right\n part). \\textbf{\\textsf{B.}} Shortest path from top to bottom using a\n k-nearest neighbours connectivity pattern (k=5). The lower the density, the\n shorter the path and the higher the density, the longer the path. On the\n far left, the shortest path from top to bottom is only 6 connections while\n this size more than triples on the far right to reach 19 connections. Said\n differently, the left part is the fast pathway while the right part is the\n slow pathway relative to some input data that would feed the architecture\n from the top. \\textbf{\\textsf{C.}} Due to the asymmetry of cell positions,\n a signal entering on the top side (materialized with small arrows) travels\n at different speeds and will consequently reach the bottom side at\n different times. 
This represents a spatialization of\n time. \\textbf{\\textsf{D.}} Due to the asymmetry of cell positions, a signal\n entering on the left side (materialized with small arrows) slows down while\n traveling before reaching the right side. This represents a compression of\n time and may serve as a short-term working memory.}\n \\label{fig:diffusion}\n\\end{figure}\n\n\n\\section{Methods}\n\nBlue noise \\cite{Ulichney:1987} is {\\em an even, isotropic yet unstructured\n distribution of points} \\cite{Mehta:2012} and has {\\em minimal low frequency\n components and no concentrated spikes in the power spectrum energy}\n\\cite{Zhang:2016}. Said differently, blue noise (in the spatial domain) is a\ntype of noise with intuitively good properties: points are evenly spread\nwithout visible structure (see figure \\ref{fig:CVT} for the comparison of a\nuniform distribution and a blue noise distribution). This kind of noise has been\nextensively studied in the computer graphics domain and image processing because\nit can be used for object distribution, sampling, printing, half-toning,\netc. One specific type of spatial blue noise is the Poisson disc distribution,\nwhich is a 2D uniform point distribution in which all points are separated from\neach other by a minimum radius (see right part of figure\n\\ref{fig:CVT}). Several methods have been proposed for the generation of such\nnoise, from the best in quality (dart throwing \\cite{Cook:1986}) to faster ones\n(rejection sampling \\cite{Bridson:2007}), see \\cite{Lagae:2008} for a\nreview. An interesting variant of the Poisson disc distribution is a\nnon-isotropic distribution where local variations follow a given density function\nas illustrated on figure \\ref{fig:boots} where the density function has been\nspecified using the image gray levels. On the stippling image on the right,\ndarker areas have a high concentration of dots (e.g. the boot soles) while lighter\nareas such as the background display a sparse number of dots. 
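The dart-throwing idea mentioned above is simple enough to sketch in a few lines of Python (a naive, unoptimized illustration; function and variable names are ours, not from the cited papers):

```python
import random

def dart_throwing(n, radius, max_trials=100_000, seed=1):
    """Naive Poisson disc sampling in the unit square: throw darts
    uniformly at random and reject any candidate that falls closer
    than `radius` to an already accepted point."""
    rng = random.Random(seed)
    points = []
    for _ in range(max_trials):
        if len(points) == n:
            break
        x, y = rng.random(), rng.random()
        if all((x - px) ** 2 + (y - py) ** 2 >= radius ** 2
               for px, py in points):
            points.append((x, y))
    return points

samples = dart_throwing(64, 0.05)
```

With a small radius and few points, rejections are rare; quality and speed degrade as the target density approaches the packing limit, which is why faster methods are preferred in practice.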
There exist\nseveral techniques for computing such density-driven stippling patterns (optimal\ntransport \\cite{Mehta:2012}, variational approach \\cite{Chen:2012}, least\nsquares quantization \\cite{Lloyd:1982}, etc.) but the one by \\cite{Secord:2002}\nis probably the most straightforward and simple and has been recently replicated\nin \\cite{Rougier:2017}.\n\n\\begin{figure}[htbp]\n \\includegraphics[width=\\textwidth]{.\/figure-CVT.pdf}\n \\caption{\\textbf{Centroidal Voronoi Tessellation.} \\textbf{\\textsf{A.}}\n Voronoi diagram of a uniform distribution (n=256) where black dots\n represent the uniform distribution and white circles represent the\n centroid of each Voronoi cell. \\textbf{\\textsf{B.}} Centroidal Voronoi\n diagram where the point distribution matches the centroid distribution.}\n \\label{fig:CVT}\n\\end{figure}\n\n\\subsection{Distribution}\n\nThe desired distribution is given through a bitmap RGBA image that provides two\ntypes of information. The three color channels indicate the identity of a cell\n(using a simple formula of the type $identity = 256 \\times 256 \\times R + 256\n\\times G + B$ for $0 \\leq R,G,B < 256$) and the alpha channel indicates the\ndesired local density. This input bitmap has first to be resized (without\ninterpolation) such that the mean pixel area of a Voronoi cell is 500\npixels. For example, if we want a final number of 1000 cells, the input image\nneeds to be resized such that it contains at least 500x1000 = 500,000 pixels. 
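The colour-to-identity encoding just described is a simple base-256 packing; a round-trip sketch in Python (helper names are ours):

```python
def encode_identity(r, g, b):
    # identity = 256*256*R + 256*G + B, with integer channels 0 <= R, G, B < 256
    return 256 * 256 * r + 256 * g + b

def decode_identity(identity):
    # invert the packing to recover the three colour channels
    r, remainder = divmod(identity, 256 * 256)
    g, b = divmod(remainder, 256)
    return r, g, b
```

This gives $256^3$ distinct identities, more than enough to tag every structure of a model with its own colour.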
For\ncomputing the weighted centroid, we apply the definition proposed in\n\\cite{Secord:2002} over the discrete representation of the domain and use a\nLloyd relaxation scheme.\n\\[\n {\\bf C}_i = \\frac{\\int_A {\\bf x}\\rho({\\bf x})dA}{\\int_A \\rho({\\bf x})dA}\n\\]\nMore precisely, each Voronoi cell is rasterized (as a set of pixels) and the\ncentroid is computed (using the optimization proposed by the author, which avoids\ncomputing the integrals over the whole set of pixels composing the\nVoronoi cell). As noted by the author, the precision of the method is directly\nrelated to the size of the Voronoi cell. Consequently, if the original density\nimage is too small relative to the number of cells, there might be quality\nissues. We use a fixed number of iterations ($n=50$) instead of using the\ndifference in the standard deviation of the area of the Voronoi regions as\nproposed in the original paper. Lastly, we added a threshold parameter that\nallows a pre-processing of the density image: any pixel with an\nalpha level above the threshold is set to the threshold value before\nnormalizing the alpha channel. Figure \\ref{fig:gradient} shows the distribution\nof four populations with respective sizes 1000, 2500, 5000 and 10000 cells,\nusing the same linear gradient as input. It is remarkable to see that the local\ndensity is approximately independent of the total number of cells.\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{.\/figure-density.pdf} \n \\caption{\\textbf{Non-uniform distribution (linear gradient).} Different\n population distributions (sizes of 1000, 2500, 5000 and 10000 cells) using\n the same linear gradient as input have been computed. Each distribution has\n been split into four equal areas and the respective proportion and number\n of cells present in the area is indicated at the bottom of the area. 
The\n proportion of cells present in each area is approximately independent\n ($\\pm$2.5\\%) of the overall number of cells. }\n \\label{fig:gradient}\n\\end{figure}\n\n\n\\subsection{Connection}\n\nMost computational models need to define the connectivity between the different\npopulations that compose the model. This can be done by specifying projections\nbetween a source population and a target population. Such projections\ncorrespond to the axon of the source neuron making a synaptic contact with the\ndendritic tree of the target neuron. In order to define the overall model\nconnectivity, one can specify each individual projection if the model is small\nenough (a few neurons). However, for larger models (hundreds, thousands or\nmillions of neurons), this individual specification would be too cumbersome and\nwould hide any structure in the connectivity scheme. Instead, one can use\ngeneric connectivity descriptions \\cite{Djurfeldt:2014} such as one-to-one,\none-to-all, convergent, divergent, receptive fields, convolutional, etc. For\nsuch connectivity schemes to be enforced, they require either a well-structured\npopulation (e.g. a grid) or a simple enclosing topology \\cite{Ekkehard:2015}\nsuch as a rectangle or a disc. In the case of arbitrary shapes as shown on\nfigure \\ref{fig:mapping}, these methods cannot be used directly. However, we\ncan use an indirect mapping from a reference shape such as the unit disc and\ntake advantage of the Riemann mapping theorem that states (definition from\n\\cite{Bolt:2010}):\\\\\n\n\\textbf{Riemann mapping theorem} (from \\cite{Bolt:2010}). {\\em Let $\\Omega$ be\n a (non empty) simply connected region in the complex plane that is not the\n entire plane. Then, for any $z_0 \\in \\Omega$, there exists a bianalytic\n (i.e. 
biholomorphic) map $f$ from $\\Omega$ to the unit disc such that\n $f(z_0)=0$ and $f'(z_0)>0$.}\\\\\n\nSuch a mapping is {\\em conformal}, that is, it preserves angles, while an {\\em\n isometric} mapping preserves lengths (developable surfaces) and an {\\em\n equiareal} mapping preserves areas. \\citet{Kerzman:1986} introduced a method\nto compute the Riemann mapping function using the Szeg\u00f6 kernel that is\nnumerically stable while \\citet{Trefethen:1980} introduced numerical methods\nfor solving the more specific conformal Schwarz-Christoffel transformation\n(conformal transformation of the upper half-plane onto the interior of a simple\npolygon). Furthermore, a Matlab toolkit is available in \\cite{Driscoll:1996} as\nwell as a Python translation (\\url{https:\/\/github.com\/AndrewWalker\/cmtoolkit})\nthat has been used to produce figure \\ref{fig:mapping}, which shows some examples of\narbitrary shapes and the automatic mapping of the polar and Cartesian domains.\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{.\/figure-conformal-maps.png} \n \\caption{\\textbf{Conformal mappings.} Examples of conformal mappings on\n arbitrary spline shapes using the conformal Riemann mapping via the Szeg\u00f6\n kernel \\cite{Kerzman:1986}. Top line shows conformal mapping of the polar\n domain, bottom line shows conformal mapping of the Cartesian domain.}\n \\label{fig:mapping}\n\\end{figure}\nHowever, even if automatic, this mapping can be perceived as not\nintuitive. Provided the shapes are not too distorted, we'll see in the results\nsection that ad-hoc mappings can also be used.\n\n\n\\subsection{Visualization}\n\nHaving now a precise localization for each cell of each population, we have\nseveral ways of visualizing the activity within the model. The most\nstraightforward way is to simply draw the activity of a cell at its position\nusing a disc of varying color (a.k.a. colormap) or varying size, correlated\nwith cell activity. 
This requires the total number of cells to be not too large\nor the display would be cluttered. For a moderate number of cells, we can take\nadvantage of the dual Voronoi diagram of the cell positions as illustrated on\nfigure \\ref{fig:diffusion}, using a colormap to paint each Voronoi\ncell. Finally, if the number of cells is really high, a two-dimensional\nhistogram of the mean activity (with a fixed number of bins) can be used as\nshown on figure \\ref{fig:BG}C using a bicubic interpolation filter.\n\n\n\n\\section{Results}\n\nWe'll now illustrate the use of the proposed method on three different cases.\n\n\\subsection{Case 1: Retina cells}\n\nThe human retina contains two main types of photoreceptors, namely rods and\ncones (L-cones, M-cones and S-cones). They are distributed over the retinal\nsurface in a non-uniform way, with a high concentration of cones (L-cones and\nM-cones) in the foveal region while the rods are to be found mostly in the\nperipheral region with a peak density at around 18-20$^\\circ$ of foveal\neccentricity. Furthermore, the respective size of those cells is different,\nrods being much smaller than cones. The distribution of rods and cones in the\nhuman retina has been extensively studied in the literature and is described\nprecisely in a number of works \\cite{Curcio:1990,Ahnelt:2000}. Our goal here is\nnot to fit the precise distribution of cones and rods but rather to give a\ngeneric procedure that can potentially be used to fit those figures, for a\nspecific region of the retina or the whole retina. The main difficulty is the\npresence of two types of cells having different sizes. 
Even though there exist\nblue-noise sampling procedures taking different sizes into account\n\\cite{Zhang:2016}, we'll use instead the aforementioned method using a\ntwo-stage procedure as illustrated on figure \\ref{fig:retina}.\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{.\/figure-rods-cones.pdf} \n \\caption{\\textbf{Cones and rods distribution.} \\textbf{\\textsf{A.}} The\n density map for cones placement (n=25) is a circular and quadratic gradient\n with highest density in the center. \\textbf{\\textsf{B.}} The density map\n for rods placement (n=2500) is built using the cones distribution. Starting\n from a linear density, ``holes'' with different sizes are created at the\n location of each cone, preventing rods from spreading over these areas during\n the stippling procedure. \\textbf{\\textsf{C.}} Final distribution of cones\n and rods. Cones are represented as white blobs (splines) while rods are\n represented as Voronoi regions using random colors to better highlight the\n covered area.}\n \\label{fig:retina}\n\\end{figure}\n\nA first radial density map is created for the placement of 25 cones and the\nstippling procedure is applied for 15 steps to get the final positions of the 25\ncones. A linear rod density map is created where discs of varying (random)\nsizes of null density are created at the position of the cones. These discs will\nprevent the rods from spreading over these areas. Finally, the stippling procedure\nis applied a second time over the built density map for 25 iterations. The\nfinal result can be seen on figure \\ref{fig:retina}C where rods are tightly\npacked on the left, loosely packed on the right and nicely circumvent the cones.\n\n\n\\subsection{Case 2: Neural field}\n\nNeural fields describe the dynamics of a large population of neurons by taking\nthe continuum limit in space, using coarse-grained properties of single neurons\nto describe the activity\n\\cite{Wilson:1972,Wilson:1973,Amari:1977,Coombes:2014}. 
In this example, we\nconsider a neural field with activity $u$ that is governed by an equation of\nthe type:\n\\[\n\\tau\\frac{\\partial u(x,t)}{\\partial t} = -u(x,t) + \\int_{-\\infty}^{+\\infty} w(x,y) f(u(y,t)) dy + I(x) + h\n\\]\nThe lateral connection kernel $w$ is a difference of Gaussians (DoG) with\nshort-range excitation and long-range inhibition and the input $I(x)$ is constant\nand noisy. In order to solve the neural field equation, the spatial domain has\nbeen discretized into $40 \\times 40$ cells and the temporal resolution has been\nset to $10ms$. On figure \\ref{fig:DNF}A, one can see the characteristic Turing\npatterns that have formed within the field. The number and size of clusters\ndepend on the lateral connection kernel. Figure \\ref{fig:DNF}B shows the\ndiscretized and homogeneous version of the dynamic neural field (DNF) where each cell has been assigned\na position on the field, the connection kernel function and the parameters\nbeing the same as in the continuous version. The result of the simulation shown\non figure \\ref{fig:DNF}B is the histogram of cell activities using $40 \\times\n40$ regular bins. One can see the formation of Turing patterns that are\nsimilar to those of the continuous version. On figure \\ref{fig:DNF}C however, the\npositions of the cells have been changed (using the proposed stippling method)\nsuch that there is a torus of higher density. This is the only difference from\nthe previous model. While the output can still be considered to be Turing\npatterns, one can see clearly that the activity clusters are precisely\nlocalized onto the higher density regions. Said differently, the functional\nproperties of the field have been modified by a mere change in the\nstructure. 
This tends to suggest\nthat the homogeneous condition of neural fields\n(that is the standard hypothesis in most works because it facilitates the\nmathematical study) is actually quite a strong limitation that constrains the\nfunctional properties of the field.\n\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=.32\\textwidth]{.\/figure-DNF-A.pdf}\n \\includegraphics[width=.32\\textwidth]{.\/figure-DNF-B.pdf}\n \\includegraphics[width=.32\\textwidth]{.\/figure-DNF-C.pdf}\n \\end{center}\n \\caption{\\textbf{Non-homogeneous discrete neural field}.\n \\textbf{\\textsf{A.}} Turing patterns resulting from a continuous and\n homogeneous neural field with constant and noisy\n input. \\textbf{\\textsf{B.}} Turing patterns resulting from a discrete and\n homogeneous neural field with constant and noisy input. White dots indicate\n the position of the cells. Mean activity is computed from the histogram of\n cells activity using $40 \\times 40$ bins. \\textbf{\\textsf{C.}} Localized\n Turing patterns resulting from a discrete and non-homogeneous neural field\n with constant and noisy input. White dots indicate the position of the\n cells. Mean activity is computed from the histogram of cells activity using\n $40 \\times 40$ bins. }\n \\label{fig:DNF}\n\\end{figure}\n\n\n\n\n\\subsection{Case 3: Basal ganglia}\n\nThe basal ganglia is a group of sub-cortical nuclei (striatum, globus pallidus,\nsubthalamic nucleus, substantia nigra) associated with several functions such as\nmotor control, action selection and decision making. There exists a functional\ndissociation of the ventral and the dorsal part of the striatum (caudate,\nputamen and nucleus accumbens) that is believed to play an important role in\ndecision making \\cite{ODoherty:2004,Balleine:2007,Meer:2011} since these two\nregions do not receive input from the same structures. 
For a number of models,\nthis functional dissociation results in the dissociation of the striatum into\ntwo distinct neural groups even though such anatomical dissociation does not\nexist {\\em per se} (see \\cite{Humphries:2010}). Without any proper topography\nof the striatal nucleus, it is probably the most straightforward way to\nproceed. However, if each group possessed its own topography, it would become\npossible to distinguish the ventral from the dorsal part of the basal ganglia (BG), as\nillustrated on figure \\ref{fig:BG} on a coronal view of the BG. We do not\npretend this simplified view is sufficient to account for all the intricate\nconnections between the different nuclei composing the basal ganglia, but it\nmight nonetheless help to have a better understanding of the structure because it\nbecomes possible to link external input to specific parts of this or that\nstructure (e.g. the ventral or dorsal part of the striatum). This could lead to\ndifferential processing in different parts of the striatum and may reconcile\ndifferent theories regarding the role of the ventral and the dorsal part.\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{.\/figure-BG.pdf} \n \\caption{\\textbf{Coronal view of the basal ganglia.} \\textbf{\\textsf{A.}}\n Scalable Vector Graphic (SVG) source file defining each structure in terms\n of border (solid black lines), major and minor axis (dashed lines), input\n (red line) and output (blue line). Local density is given by the alpha\n channel and structure identity is given by the color. In this coronal view\n of the basal ganglia, the Caudate is red (RGB=(0.83,0.15,0.15)), the GPe is\n blue (RGB=(0.12,0.46,0.70)) and the GPi is green (RGB=(0.17,0.62,0.17)).\n \\textbf{\\textsf{B.}} Distribution of 2500 neurons respecting the local\n density and structural organization (Caudate: 1345 cells, GPe: 884 cells,\n GPi: 271 cells). Neurons receiving input are drawn in red, neurons sending\n output are drawn in blue. 
Each neuron possesses two sets of coordinates:\n one global Cartesian coordinate set and one local curvilinear coordinate set\n defined as the distances to the major and the minor axis of the structure\n the neuron belongs to. \\textbf{\\textsf{C.}} Mean activity histogram of the\n different structures using $32 \\times 32$ bins and a bi-cubic interpolation\n filter. Each bin includes from zero to several neurons. \\textbf{\\textsf{D.}}\n Cell activities represented using the dual Voronoi diagram of the cell\n positions. Each Voronoi region is painted according to the activity of the\n corresponding centroid (i.e. neuron). }\n \\label{fig:BG}\n\\end{figure}\n\n\n\n\n\\section{Discussion}\n\nWe have introduced a graphical, scalable and intuitive method for the placement\nand the connection of biological cells and we have illustrated its use on three\nuse-cases. We believe this method, even if simple and obvious, might be\nworth considering in the design of a new class of models, in between\nsymbolic and realistic models. Our intuition is that such topography may\nbe an important aspect that needs to be taken into account and studied in order\nfor the model to benefit from structural functionality. Furthermore, the\nproposed specification of the architecture as an SVG file, together with the\nscalability of the method, could guarantee to some extent the scalability of the\nproperties of the model.\\\\
All sources are available on GitHub \\cite{rougier:2017b}.\n\n\n\\renewcommand*{\\bibfont}{\\small}\n\\printbibliography[title=References]\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nIn traditional quantum physics courses at the undergraduate level, only linear Hermitian operators are discussed, in keeping with the conventional wisdom that a quantum observable in a measurement experiment must possess real eigenvalues and that Hermiticity ensures this. However, later Bender and \nBoettcher~\\cite{bender:boettcher:prl98} showed that Hermiticity is not a necessary condition (though a sufficient one) for an observable (say, a Hamiltonian) to have real eigenvalues. If a Hamiltonian preserves the combined parity ($\\pazo{P}$) and time-reversal ($\\pazo{T}$) symmetry, it can still exhibit real eigenvalues or eigenenergies within a certain parameter regime. Such Hamiltonians are dubbed $\\pazo{PT}$ symmetric Hamiltonians. Beyond one or more particular points in the parameter space, the Hamiltonian starts picking up complex eigenenergies, and those special points are labeled \\emph{exceptional points} (EPs). An EP is the degeneracy point where the complex eigenenergies coalesce. However, unlike at a Hermitian degeneracy point, the eigenfunctions become identical (up to a phase factor) instead of being orthogonal to each other. EPs have attracted interest over the past decades as they signal $\\pazo{PT}$-symmetry-breaking phase transitions.
EPs can signal several exotic phenomena such as unidirectional invisibility~\\cite{lin:etal:christo:gr:prl11,regensburger:etal:nat12,zhu:etal:ol13,feng:etal:nmat13}, loss-induced transparency~\\cite{guo:etal:prl09}, topological mode switching or energy transfer~\\cite{liu:etal:pra21,geng:etal:prsa21}, single mode lasing operation~\\cite{hodaei:etal:sc14,feng:etal:sc14}, on-chip control of light propagation~\\cite{peng:etal:nphys14}, optical sensitivity against external perturbation~\\cite{wiersig:prl14,lin:etal:christo:gr:prl11,hodaei:etal:nat17}, and dynamic phase transition in condensed matter systems~\\cite{tripathi:galda:barman:vinokur:prb16}. \n\n\nTo demonstrate the possibility of real eigenvalues out of a non-Hermitian matrix (which turns out to be $\\pazo{PT}$\nsymmetric), let us consider a simple two-level or two-state system that can be defined by the following $2\\times 2$ matrix.\n\\begin{align}\n{\\bf H}_\\text{TLS}=\n \\begin{bmatrix}\n \\epsilon_1 & 0\\\\\n 0 & \\epsilon_2 \n \\end{bmatrix}\\,.\n\\end{align}\nHere the eigenenergies $\\epsilon_1$ and $\\epsilon_2$ denote the two separate quantum states (if $\\epsilon_1\\ne \\epsilon_2$) or degenerate quantum states (if $\\epsilon_1=\\epsilon_2$). Now if there is mixing between the separated states (say, due to photon absorption\/emission, a particle from the lower\/higher energy level reaches the higher\/lower energy level), we get a finite off-diagonal term (say, $t$). 
Then the Hamiltonian looks like\n\\begin{align}\n{\\bf H}_\\text{TLS}^\\text{mix}=\n \\begin{bmatrix}\n \\epsilon_1 & t\\\\\n t & \\epsilon_2 \n \\end{bmatrix}\\,.\n\\end{align}\nThe mixing Hamiltonian is also known as the Landau-Zener Hamiltonian in the context of avoided level \ncrossing~\\cite{rubbmark:etal:pra81,shevchenko:ashhab:nori:pr10}.\nIf $\\epsilon_1$, $\\epsilon_2$, and $t$ are real, ${\\bf H}_\\text{TLS}$ and ${\\bf H}_\\text{TLS}^\\text{mix}$ are \nHermitian as they satisfy the Hermiticity condition $a_{ji}^*=a_{ij}$ where $a_{ij}$ is the matrix element at $i$-th row and $j$-th column. Now if we make the diagonal parts complex: $\\epsilon_1=\\epsilon+i\\gamma$ and $\\epsilon_2=\\epsilon-i\\gamma$ (gain term $i\\gamma$ and loss term $-i\\gamma$ added to a degenerate energy level $\\epsilon$),\nwe have\n\\begin{align}\n{\\bf H}_\\text{TLS}^1=\n \\begin{bmatrix}\n \\epsilon+i\\gamma & t\\\\\n t & \\epsilon-i\\gamma \n \\end{bmatrix}\\,\n=\\epsilon{\\bf 1}+i\\gamma\\sigma^z+t\\sigma^x\\,.\n\\label{eq:H:TLS:loss:gain:1}\n\\end{align}\nThe Hamiltonian ${\\bf H}_\\text{TLS}^1$ fails to satisfy the Hermiticity condition and hence non-Hermitian. \nHowever, we can easily write down the following eigenvalue or characteristic equation. 
\n\\blgn\n(E-\\epsilon)^2+\\gamma^2-t^2=0\n\\elgn\nproviding the eigenenergies:\n\\blgn\nE_1,E_2=\\epsilon\\pm \\sqrt{t^2-\\gamma^2}\\,.\n\\elgn \nAs in the previous example, non-Hermiticity can also be introduced via asymmetry in the off-diagonal terms in the TLS matrix, for example,\n\\blgn\n{\\bf H}_{\\text{TLS}}^{2}=\n \\begin{bmatrix}\n \\epsilon & t+\\lambda\\\\\n t-\\lambda & \\epsilon \n \\end{bmatrix}\\,\n\\elgn\nleading to the characteristic equation:\n\\blgn\n(E-\\epsilon)^2=t^2-\\lambda^2\n\\elgn\nwhich provides the eigenenergies:\n\\blgn\nE_1,E_2=\\epsilon\\pm \\sqrt{t^2-\\lambda^2}\\,.\n\\label{eq:Es:TLS:2}\n\\elgn\nDespite ${\\bf H}_{\\text{TLS}}^{1}$ and ${\\bf H}_{\\text{TLS}}^{2}$ being non-Hermitian, their characteristic\nequations show that eigenenergies can become real within certain non-Hermiticity parameter regimes: $|\\gamma|\\le t$ and $|\\lambda|\\le t$ respectively, provided these parameters are real. Both these Hamiltonians preserve the $\\pazo{PT}$ symmetry~\\cite{wang:ptrsa13} and beyond the above-mentioned regimes, complex eigenenergies emerge leading to $\\pazo{PT}$ symmetry broken phases. In our paper, we shall address both these scenarios and study the nature of EPs. We dub the first kind of Hamiltonian (${\\bf H}_{\\text{TLS}}^{1}$) \\emph{diagonal or orbital} $\\pazo{PT}$-symmetric and the second kind (${\\bf H}_{\\text{TLS}}^{2}$) \\emph{off-diagonal or kinetic} $\\pazo{PT}$-symmetric. We construct both of these scenarios in the context of the hydrogen molecule: our test model.\n\n \nOur paper is organized in the following way. We first discuss the non-interacting version of the hydrogen molecule\nand how the eigenenergies are obtained after constructing the basis set and the Hamiltonian matrix upon that. \nThen we introduce the asymmetry into the hopping elements keeping the $\\pazo{PT}$-symmetry preserved for the \nHamiltonian and discuss the behavior of its complex eigenenergies.
We then introduce the Hubbard interaction\nterm to it and discuss the complex eigenenergies. Finally, we add complex gain and loss terms to the orbital energies (maintaining the $\\pazo{PT}$-symmetry again) and discuss the existence of multiple sets of EPs and their dependence on the interaction strength. \n \n\n\n\n\\section{Noninteracting hydrogen molecule}\n A hydrogen molecule consists of two hydrogen atoms where each atomic electron participates in covalent bonding with the other one. This scenario (neglecting vibrational modes and other interactions) can be modeled by a two-site electronic problem where electrons can hop from one site to another site (mimicking the orbital overlap)~\\cite{book:ashcroft:mermin76:ssp,alvarez:blanco:ejp01}. In the second quantization notation, the Hamiltonian is equivalent to the two-site tight-binding Hamiltonian:\n\\begin{align}\n\\hat H^0 = \\epsilon\\sum_\\sigma (c\\y_{1\\sigma} c\\py_{1\\sigma} + c\\y_{2\\sigma} c\\py_{2\\sigma}) + t\\sum_\\sigma (c\\y_{1\\sigma} c\\py_{2\\sigma} + c\\y_{2\\sigma} c\\py_{1\\sigma})\\,\n\\label{eq:H0}\n\\end{align}\nwhere the operator $c\\y_{i\\sigma}$ ($c\\py_{i\\sigma}$) creates (annihilates) an electron of spin $\\sigma$ at site $i$ ($i\\in 1,2$; $\\sigma \\in \\uparrow,\\downarrow$) $\\big[ c\\y_{i\\sigma}|0\\rangle_i=|\\sigma\\rangle_i$; $c\\py_{i\\sigma}|\\sigma\\rangle_i=|0\\rangle_i \\big]$, $\\epsilon$ is the atomic energy of a hydrogen atom, and $t$ is the amplitude of hopping from site 1 to site 2 or vice versa. \n\nWe get six possible atomic states for the above Hamiltonian, which form the basis $\\{\\ket{i}\\}$, $i= 1,2,3,4,5,6$; the nonzero matrix elements of the Hamiltonian are (see Appendix~\\ref{app:construct:H0})\n\\blgn\nH^0_{11}&=H^0_{22}=H^0_{33}=H^0_{44}=H^0_{55}=H^0_{66}=2\\epsilon\\\\\nH^0_{23}&=t=H^0_{32}\\\\\nH^0_{24}&=-t=H^0_{42}\\\\\nH^0_{35}&=t=H^0_{53}\\\\\nH^0_{45}&=-t=H^0_{54}\n\\elgn\nwhere $H_{ij}=\\bra{i} \\hat H \\ket{j}$ denotes a generic Hamiltonian matrix element.
Thus the Hamiltonian \nappears in the matrix form: \n\\begin{align}\n{\\bf H^0}=\n \\begin{bmatrix}\n \n 2\\epsilon &0 &0 &0 &0 &0 \\\\ \n 0 &2\\epsilon &t &-t &0 &0 \\\\ \n 0 &t &2\\epsilon &0 &t &0 \\\\ \n 0 &-t &0 &2\\epsilon &-t &0 \\\\ \n 0 &0 &t &-t &2\\epsilon &0 \\\\ \n 0 &0 &0 &0 &0 &2\\epsilon\n \\end{bmatrix}\n \\,.\n\\end{align}\n\nThe above matrix can be divided into three block-diagonal matrices and one can note\nthey represent three distinguished sectors of total spin $S_z=1,0,-1$ (considering each electron \nis a spin-$\\frac{1}{2}$ particle):\n\\begin{align}\n{\\bf H^0}\n&=\n\\begin{bmatrix}\n ~\\boxed{S_z=1} & &\\\\\n &\\boxed{S_z=0} &\\\\\n & &\\boxed{S_z=-1}\n\\end{bmatrix}\n\\,.\n\\end{align}\nFor $S_z=\\pm 1$, the eigenenergies are trivial: $E=2\\epsilon$. \nFor $S_z=0$ matrix:\n\\blgn\n\\begin{bmatrix}\n 2\\epsilon &t &-t &0\\\\ \n t &2\\epsilon &0 &t \\\\ \n -t &0 &2\\epsilon &-t \\\\ \n 0 &t &-t &2\\epsilon\\\\ \n\\end{bmatrix}\n\\,,\n\\label{eq:Sz:0:matrix}\n\\elgn\nthe characteristic equation becomes \n\\blgn\n\\begin{vmatrix}\n 2\\epsilon-E &t &-t &0\\\\ \n t &2\\epsilon-E &0 &t \\\\ \n -t &0 &2\\epsilon-E &-t \\\\ \n 0 &t &-t &2\\epsilon-E\\\\ \n\\end{vmatrix}\n=0\\,\n\\elgn\n\\blgn\n\\Rightarrow (2\\epsilon-E)^2[(2\\epsilon-E)^2 - 4t^2]=0\n\\elgn\nsolving which we obtain the following eigenenergies:\n$2\\epsilon$ (degeneracy=4), $2(\\epsilon-t)$, and $2(\\epsilon+t)$.\nBy setting $\\epsilon$ to 0, we get: $0$, $-2t$, and $2t$ as three distinct eigenenergies.\nFor positive values of $t$, the states with eigenenergy $\\pm 2t$ correspond to \nantibonding (energy $> \\epsilon$) and bonding states (energy $< \\epsilon$) respectively.\n \n\n\n\n\n\\section{Non-interacting hydrogen molecule with off-diagonal $\\pazo{PT}$ symmetry}\nOpen quantum systems or dissipative systems have been studied for a long time \nwhere non-Hermiticity occurs naturally as a decay term in the 
Hamiltonian~\\cite{frensley:rmp90,dalibard:etal92,hatano:nelson:prl96,fukui:kawakmi:prb98,bertlmann:etal:pra06}. \nIn our model Hamiltonian $H^0$, we introduce non-Hermiticity through the following dissipative current (asymmetric hopping) term $H^\\lambda$~\\cite{cabib:prb75}:\n\\blgn\n\\hat H^\\lambda = \\lambda \\sum_\\sigma(c\\y_{1\\sigma} c\\py_{2\\sigma} - c\\y_{2\\sigma} c\\py_{1\\sigma})\\,.\n\\elgn\nOne can easily check that\n$(\\hat H^\\lambda)\\y=\\lambda \\sum_\\sigma(c\\y_{2\\sigma} c\\py_{1\\sigma} - c\\y_{1\\sigma} c\\py_{2\\sigma})\\ne \\hat H^\\lambda$.\nWe rewrite our new Hamiltonian as\n\\blgn\n\\hat H^1\n&=H^0+H^\\lambda\\nonumber\\\\\n&= \\epsilon\\sum_\\sigma (c\\y_{1\\sigma} c\\py_{1\\sigma} + c\\y_{2\\sigma} c\\py_{2\\sigma}) + \\sum_\\sigma[t^+ c\\y_{1\\sigma} c\\py_{2\\sigma} + t^{-}c\\y_{2\\sigma} c\\py_{1\\sigma}]\\,\n\\label{eq:H:primed}\n\\elgn \nwhere $t^+\\equiv t+\\lambda;\\quad t^-\\equiv t-\\lambda$.\n\n\\subsection*{$\\pazo{PT}$ symmetry:} \nSince $\\hat H^0$ is Hermitian and hence also $\\pazo{PT}$ symmetric, to prove that $\\hat H^1$ \nis $\\pazo{PT}$ symmetric as well, we only need to show that $\\hat H^\\lambda$ is $\\pazo{PT}$ symmetric.\n$\\lambda$ is equivalent to a hopping amplitude and hence it changes sign under time-reversal: \n\\blgn\n\\pazo{T} \\hat H^\\lambda \\pazo{T}^{-1} = -\\lambda \\sum_\\sigma(c\\y_{1\\sigma} c\\py_{2\\sigma} - c\\y_{2\\sigma} c\\py_{1\\sigma})\\,. \n\\elgn\nNow under the parity ($\\pazo{P}$) operation, sites 1 and 2 get interchanged and we finally obtain \n\\blgn\n\\pazo{P}\\pazo{T} \\hat H^\\lambda\\pazo{T}^{-1}\\pazo{P}^{-1} = -\\lambda \\sum_\\sigma(c\\y_{2\\sigma} c\\py_{1\\sigma} - c\\y_{1\\sigma} c\\py_{2\\sigma}) \n= \\hat H^\\lambda\\,.
\n\\elgn \nHence $\\hat H^\\lambda$ is invariant under $\\pazo{PT}$ symmetry operation and \nthe Hamiltonian in matrix form: \n\\begin{align}\n{\\bf H^1}=\n \\begin{bmatrix}\n \n 2\\epsilon &0 &0 &0 &0 &0 \\\\ \n 0 &2\\epsilon &t^- &-t^- &0 &0 \\\\ \n 0 &t^+ &2\\epsilon &0 &t^- &0 \\\\ \n 0 &-t^+ &0 &2\\epsilon &-t^- &0 \\\\ \n 0 &0 &t^+ &-t^+ &2\\epsilon &0 \\\\ \n 0 &0 &0 &0 &0 &2\\epsilon\n \\end{bmatrix}\n \\,.\n\\label{eq:H1:matrix}\n\\end{align}\nLike in the earlier case, we find this matrix also bears a block-diagonal form where the blocks represent three distinguished sectors of total spin ($S_z$) 1, 0 and -1 respectively.\nThe characteristic equation of the $S_z=0$ block is \n\\blgn\n\\begin{vmatrix}\n 2\\epsilon-E &t_- &-t_- &0\\\\ \n t_+ &2\\epsilon-E &0 &t_- \\\\ \n -t_+ &0 &2\\epsilon-E &-t_- \\\\ \n 0 &t_+ &-t_+ &2\\epsilon-E\\\\ \n\\end{vmatrix}\n= 0\\,.\n\\elgn\n\\begin{comment}\n\\blgn\n\\Rightarrow\n(2\\epsilon-E)\\, \n\\begin{vmatrix}\n2\\epsilon-E & 0 & t_- \\\\ \n0 & 2\\epsilon-E & -t_- \\\\\nt_+ & -t_+ & 2\\epsilon-E\n\\end{vmatrix}\n-t_-\n\\begin{vmatrix}\nt_+ & 0 & t_- \\\\ \n-t_+ & 2\\epsilon-E & -t_- \\\\\n0 & -t_+ & 2\\epsilon-E \n\\end{vmatrix}\n\\\\-t_-\n\\begin{vmatrix}\nt_+ & 2\\epsilon-E & t_- \\\\ \n-t_+ & 0 & -t_- \\\\\n0 & t_+ & 2\\epsilon-E \n\\end{vmatrix}\n=0\n\\elgn\n\\blgn\n&\\Rightarrow (2\\epsilon-E) \\bigg[\n (2\\epsilon-E)\\big\\{(2\\epsilon-E)^2-t_+t_-\\big\\}\n +t_-\\big\\{-t_+(2\\epsilon-E)\\big\\}\n\\bigg]\\nonumber\\\\\n&\\quad -t_-\\bigg[\nt_+\\big\\{(2\\epsilon-E)^2-t_+t_-\\big\\}+t_+(t_+t_-)\n\\bigg]\\nonumber\\\\\n&\\quad -t_-\\bigg[\nt_+(+t_+t_-)-(2\\epsilon-E)\\big\\{-t_+(2\\epsilon-E)\\big\\}\n+t_-(-t_+^2)\n\\bigg]=0\\\\\n\\elgn\n\\end{comment}\n\\blgn\n&\\Rightarrow (2\\epsilon-E)^2[(2\\epsilon-E)^2-2t_+t_-]\n-2\\tmt_+(2\\epsilon-E)^2=0\\nonumber\\\\\n&\\Rightarrow (2\\epsilon-E)^2[(2\\epsilon-E)^2-4t_+t_-]=0\\,.\n\\label{eq:sing:diss:nonint}\n\\elgn\nThus the eigenenergies of $\\hat H^1$ are $2\\epsilon$ 
(degeneracy 4), $2(\\epsilon\\pm \\sqrt{t^2-\\lambda^2})$. \nWhen $|\\lambda|>t$, the last two eigenenergies (we name this pair $E^\\pm$) \nbecome complex: $E^\\pm=2(\\epsilon\\pm i\\sqrt{\\lambda^2-t^2})$.\nThus, symmetrically around $\\lambda=0$, a pair of EPs arises at $\\lambda_e=\\pm t$ in the parameter space of $\\lambda$. \nIn \\fref{fig:ImE:vs:lambda:nonint}, we plot the real and imaginary parts of $E^\\pm$ as functions of $\\lambda$. For our parameter choice $t=1$ and $\\epsilon=0.5$, we find that at $|\\lambda|\\ge t$ the real parts become degenerate and the imaginary parts become finite, signifying EPs at $\\lambda_e=\\pm t=\\pm 1$. The eigenenergies are very similar to those of the typical TLS Hamiltonian in~\\eref{eq:Es:TLS:2} discussed in the Introduction. \n\\begin{figure}[tph!]\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=6cm]{.\/ImE_vs_lambda_nonint.eps}\n\\label{fig:ImE:vs:lambda:nonint}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=6cm]{.\/ReE_vs_lambda_nonint.eps}\n\\label{fig:ReE:vs:lambda:nonint}\n}\n\\caption{(a) Imaginary and (b) real parts of the two complex eigenenergies of the Hamiltonian $H^1$ plotted as functions of $\\lambda$ for $t=1.0$, $\\epsilon=0.5$.}\n\\label{fig:E:vs:lambda:nonint}\n\\end{figure}\n\n\n\n\\section{Hubbard hydrogen molecule with off-diagonal $\\pazo{PT}$ symmetry}\nWe turn on the Coulomb interaction between the atoms in the hydrogen molecule and, for simplicity, we consider it to be the on-site Hubbard interaction ($H^U$) which is routinely used in studies of correlated materials~\\cite{book:gebhard10:mott:mit,book:ashcroft:mermin76:ssp}.
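As a quick numerical check of the non-interacting results above, the following Python sketch (illustrative only, not part of the original derivation; it assumes NumPy and uses the figure parameters $t=1$, $\epsilon=0.5$) builds the $6\times 6$ matrix ${\bf H^1}$ of \eref{eq:H1:matrix} and verifies the EPs at $\lambda_e=\pm t$:

```python
import numpy as np

def H1(eps=0.5, t=1.0, lam=0.0):
    """6x6 matrix of the non-interacting PT-symmetric Hamiltonian H^1
    in the two-electron basis; t^+ = t + lambda, t^- = t - lambda."""
    tp, tm = t + lam, t - lam
    H = np.diag([2.0 * eps] * 6).astype(complex)
    # S_z = 0 block (basis states 2..5, i.e. rows/columns 1..4 here)
    H[1, 2], H[1, 3] = tm, -tm
    H[2, 1], H[2, 4] = tp, tm
    H[3, 1], H[3, 4] = -tp, -tm
    H[4, 2], H[4, 3] = tp, -tp
    return H

# PT-unbroken regime (|lambda| < t): all eigenvalues are real,
# including the pair 2*eps +- 2*sqrt(t^2 - lambda^2)
E = np.linalg.eigvals(H1(lam=0.5))
assert np.allclose(E.imag, 0.0, atol=1e-9)

# PT-broken regime (|lambda| > t): a complex-conjugate pair appears
E = np.linalg.eigvals(H1(lam=1.5))
assert np.count_nonzero(np.abs(E.imag) > 1e-8) == 2
```

Sweeping `lam` through $\pm t$ reproduces the onset of finite imaginary parts seen in \fref{fig:ImE:vs:lambda:nonint}.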
The Hubbard interaction term is expressed as\n\\blgn\nH^U \\equiv U(\\hat{n}_{1\\uparrow}\\hat{n}_{1\\downarrow}+\\hat{n}_{2\\uparrow}\\hat{n}_{2\\downarrow})\n\\elgn\nwhere $\\hat n_{i\\sigma}$ is the occupation number operator (${\\hat n}_{i\\sigma}=c\\y_{i\\sigma} c_{i\\sigma}$) and $U$ \namounts to the Coulomb energy one must pay to bring two electrons of opposite spins together. \nThe full interacting Hamiltonian then becomes\n\\blgn\nH^2 = H^0+ H^\\lambda + H^U = H^1 + H^U\\,.\n\\elgn\nSince $\\hat n_{i\\sigma}$ is the occupation number operator, we can easily notice the action of $H^U$ on the basis states:\n\\blgn\nH^U\\ket{1}&=0\\\\\nH^U\\ket{2}&=U\\ket{2}\\\\\nH^U\\ket{3}&=0\\\\\nH^U\\ket{4}&=0\\\\\nH^U\\ket{5}&=U\\ket{5}\\\\\nH^U\\ket{6}&=0\n\\elgn\nWorking with the same basis states as before, the total Hamiltonian in matrix form can be written as the sum of the respective matrices for $H^U$ and $H^1$:\n\\begin{align}\n{\\bf H^2}\n=\n \\begin{bmatrix}\n \n 2\\epsilon &0 &0 &0 &0 &0 \\\\ \n 0 &2\\epsilon+U &t^- &-t^- &0 &0 \\\\ \n 0 &t^+ &2\\epsilon &0 &t^- &0 \\\\ \n 0 &-t^+ &0 &2\\epsilon &-t^- &0 \\\\ \n 0 &0 &t^+ &-t^+ &2\\epsilon+U &0 \\\\ \n 0 &0 &0 &0 &0 &2\\epsilon\n \\end{bmatrix}\n \\,.\n\\label{eq:H2:Sz:0:matrix}\n\\end{align}\n The characteristic equation for the $S_z=0$ sector of \\eref{eq:H2:Sz:0:matrix} is \n\\blgn\n\\begin{vmatrix}\n 2\\epsilon+U-E &t_- &-t_- &0\\\\ \n t_+ &2\\epsilon-E &0 &t_- \\\\ \n -t_+ &0 &2\\epsilon-E &-t_- \\\\ \n 0 &t_+ &-t_+ &2\\epsilon+U-E\\\\ \n\\end{vmatrix}\n=0\n\\nonumber\n\\elgn\n\\begin{figure}[tph!]\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=6cm]{.\/ImE_vs_lambda_int.eps}\n\\label{fig:ImE:vs:lambda:int}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=6cm]{.\/ReE_vs_lambda_int.eps}\n\\label{fig:ReE:vs:lambda:int}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=6cm]{.\/lambda_e_vs_U.eps}\n\\label{fig:le:vs:U}\n}\n\\caption{(a) Imaginary and (b) real parts of the two complex eigenenergies plotted as
functions of $\\lambda$ for $t=1.0$, $\\epsilon=0.5$, and $U=2.0$. (c) The exceptional-point position $|\\lambda_e|$, varying with the \nHubbard interaction strength $U$, marks the boundary between the $\\pazo{PT}$ broken and unbroken phases.}\n\\label{fig:eig:int}\n\\end{figure}\n\n\\blgn\n&\\Rightarrow (2\\epsilon-E)(2\\epsilon+U-E)\\nonumber\\\\\n&\\qquad\\times\\bigg[(2\\epsilon-E)(2\\epsilon+U-E)-4t_+t_-\\bigg]=0\n\\label{eq:sing:diss:Hubb:form0}\\\\\n&\\Rightarrow (2\\epsilon-E)(2\\epsilon+U-E)\\nonumber\\\\\n&\\qquad\\times\\bigg[(2\\epsilon-E+U\/2)^2-U^2\/4-4t_+t_-\\bigg]=0\\,.\n\\label{eq:sing:diss:Hubb}\n\\elgn\nThus the eigenenergies of $\\hat H^2$ are $2\\epsilon$ (degeneracy 3), $2\\epsilon+U$, $\\frac{1}{2}(4\\epsilon\\pm \\sqrt{16t_+t_- + U^2}+U)=\\frac{1}{2}(4\\epsilon\\pm \\sqrt{16(t^2-\\lambda^2) + U^2}+U)$. We can check that by setting $U=0$ in \\eref{eq:sing:diss:Hubb}, we get back the non-interacting limit (\\eref{eq:sing:diss:nonint}). \nWe have complex eigenenergies when the discriminant (the term inside the square root) becomes negative, i.e. when $|\\lambda|> \\sqrt{t^2+U^2\/16}$. \nThus the presence of interaction shifts the positions of the EPs and we have $\\lambda_e=\\pm \\sqrt{t^2+U^2\/16}$. For our choice of parameters, $t=1$, $U=2$, $\\epsilon=0.5$, we find $\\lambda_e\\simeq \\pm 1.118$ (see \\fref{fig:ImE:vs:lambda:int} and \\fref{fig:ReE:vs:lambda:int} for the imaginary and real parts of $E^\\pm$).\n\\fref{fig:le:vs:U} shows that $\\lambda_e$ shifts symmetrically from the non-interacting limit ($\\lambda_e(U=0)=1$) as $U$ moves in both the positive and negative directions. The parabolic curve for $|\\lambda_e|$ marks the boundary between the $\\pazo{PT}$ broken and unbroken phases on the $|\\lambda|$-$U$ plane.\n\n\n\\section{Hubbard hydrogen molecule with diagonal $\\pazo{PT}$ symmetry}\nWe now consider the case when the orbital energies of the hydrogen atoms get tuned to different \nenergy levels by the addition of complex loss and gain terms.
For simplicity, let $\\eps_+=\\epsilon+i\\gamma$, $\\eps_-=\\epsilon-i\\gamma$ be the energies, i.e. there are equal amounts of loss and gain terms added to the orbital energies. Hence the orbital part of our Hamiltonian becomes \n\\blgn\n\\hat H^\\gamma= \\eps_+\\sum_\\sigma c\\y_{1\\sigma} c\\py_{1\\sigma} +\\eps_-\\sum_\\sigma c\\y_{2\\sigma} c\\py_{2\\sigma}\\,.\n\\elgn\nTwo-level or two-band systems with loss and gain terms have been successfully realized in several photonic and optical setups~\\cite{person:rotter:stockmann:barth:prl00,makris:elganainy:christodoulides:musslimani:prl08,guo:etal:prl09,feng:elganainy:ge:nphoton17}. Considering both diagonal and off-diagonal non-Hermiticity, our most generic $\\pazo{PT}$ symmetric Hamiltonian reads\n\\blgn\n\\hat H^3 \n&= H^\\lambda + H^\\gamma + H^U\\nonumber\\\\\n&= \\sum_\\sigma\\big[\\eps_+ c\\y_{1\\sigma} c\\py_{1\\sigma} + \\eps_- c\\y_{2\\sigma} c\\py_{2\\sigma} \n+ t_+ c\\y_{1\\sigma} c\\py_{2\\sigma} + t_- c\\y_{2\\sigma} c\\py_{1\\sigma}\\big]\\nonumber\\\\ \n&\\quad+ U(\\hat{n}_{1\\uparrow}\\hat{n}_{1\\downarrow}+\\hat{n}_{2\\uparrow}\\hat{n}_{2\\downarrow})\\,. \n\\label{eq:H3}\n\\elgn \n\\subsection*{$\\pazo{PT}$ symmetry:}\n$H^\\gamma$ is $\\pazo{PT}$ symmetric as we can check:\nUnder $\\pazo{T}$ operation\n\\blgn\n\\pazo{T} H^\\gamma \\pazo{T}^{-1}= \\sum_\\sigma\\big[\\eps_- c\\y_{1\\sigma} c\\py_{1\\sigma} + \\eps_+ c\\y_{2\\sigma} c\\py_{2\\sigma}\\big] \n\\elgn\nand under $\\pazo{PT}$ operation\n\\blgn\n\\pazo{P}\\pazo{T} H^\\gamma\\pazo{T}^{-1} \\pazo{P}^{-1} = \\sum_\\sigma\\big[\\eps_- c\\y_{2\\sigma} c\\py_{2\\sigma} + \\eps_+ c\\y_{1\\sigma} c\\py_{1\\sigma}\\big]\n= H^\\gamma\\,. 
\n\\elgn\nFollowing the same basis formulation, we get the Hamiltonian in matrix form:\n\\begin{align}\n&{\\bf H^3}\\nonumber\\\\\n&=\n \\begin{bmatrix}\n \n \\eps_++\\eps_- &0 &0 &0 &0 &0 \\\\ \n 0 &2\\eps_-+U &t_- &-t_- &0 &0 \\\\ \n 0 &t_+ &\\eps_++\\eps_- &0 &t_- &0 \\\\ \n 0 &-t_+ &0 &\\eps_++\\eps_- &-t_- &0 \\\\ \n 0 &0 &t_+ &-t_+ &2\\eps_++U &0 \\\\ \n 0 &0 &0 &0 &0 &\\eps_++\\eps_-\n \\end{bmatrix}\n \\,.\n\\label{eq:H3:Sz:0:matrix}\n\\end{align}\nAgain like in the earlier cases, the $S_z=0$ sector of the block-diagonal form\nyields the characteristic equation:\n\\begin{comment} \n\\blgn\n\\begin{vmatrix}\n 2\\eps_-+U-E &t_- &-t_- &0\\\\ \n t_+ &\\eps_++\\eps_--E &0 &t_- \\\\ \n -t_+ &0 &\\eps_++\\eps_--E &-t_- \\\\ \n 0 &t_+ &-t_+ &2\\eps_++U-E\\\\ \n\\end{vmatrix}\\nonumber\\\\\n=0\n\\nonumber\n\\elgn\n\\end{comment} \n\\begin{comment}\n\\blgn\n\\Rightarrow\n&(2\\eps_-+U-E)\\, \n\\begin{vmatrix}\n\\eps_++\\eps_--E & 0 & t_- \\\\ \n0 & \\eps_++\\eps_--E & -t_- \\\\\nt_+ & -t_+ & 2\\eps_++U-E\n\\end{vmatrix}\n-t_-\n\\begin{vmatrix}\nt_+ & 0 & t_- \\\\ \n-t_+ & \\eps_++\\eps_--E & -t_- \\\\\n0 & -t_+ & 2\\eps_++U-E \n\\end{vmatrix}\n\\nonumber\\\\\n&\\qquad\\qquad\n-t_-\n\\begin{vmatrix}\nt_+ & \\eps_++\\eps_--E & t_- \\\\ \n-t_+ & 0 & -t_- \\\\\n0 & t_+ & 2\\eps_++U-E \n\\end{vmatrix}\n=0\\nonumber\\\\\n&\\Rightarrow \n(2\\eps_-+U-E) \\bigg[\n (\\eps_++\\eps_--E)\\big\\{(\\eps_++\\eps_--E)(2\\eps_++U-E)-t_+t_-\\big\\}\n +t_-\\big\\{-t_+(\\eps_++\\eps_--E)\\big\\}\n \\bigg]\\nonumber\\\\\n&\\quad \n-t_-\\bigg[t_+\\big\\{(\\eps_++\\eps_--E)(2\\eps_++U-E)-t_+t_-\\big\\}+\\tmt_+^2\n\\bigg]\\nonumber\\\\\n&\\quad 
\n-t_-\\bigg[t_+(+t_+t_-)-(\\eps_++\\eps_--E)\\big\\{-t_+(2\\eps_++U-E)\\big\\}\n+t_-(-t_+^2)\n\\bigg]=0\\nonumber\\\\\n\\Rightarrow\n&(2\\eps_-+U-E)\\bigg[(\\eps_++\\eps_--E)\\,\\big\\{(\\eps_++\\eps_--E)(2\\eps_++U-E)-t_+t_-\\big\\}\n-\\tmt_+(\\eps_++\\eps_--E)\\bigg]\\nonumber\\\\\n&\\quad-t_+t_-\\bigg[(\\eps_++\\eps_--E)(2\\eps_++U-E)\\bigg]\\nonumber\\\\\n&\\quad-t_+t_-\\bigg[(\\eps_++\\eps_--E)(2\\eps_++U-E)\\bigg]=0\\nonumber\\\\\n\\elgn\n\\blgn\n\\Rightarrow \n&(2\\eps_-+U-E)(\\eps_++\\eps_--E)\\nonumber\\\\\n&\\quad\\times\\bigg[(\\eps_++\\eps_--E)(2\\eps_++U-E)-2t_+t_-\\bigg]\\nonumber\\\\\n&\\quad -2t_+t_-(\\eps_++\\eps_--E)(2\\eps_++U-E)=0\\nonumber\\\\\n\\end{comment}\n\\blgn\n&(\\eps_++\\eps_--E)\\nonumber\\\\\n&\\quad\\times\\bigg[(2\\eps_-+U-E)(\\eps_++\\eps_--E)(2\\eps_++U-E)\\nonumber\\\\\n&\\quad-4t_+t_-(\\eps_++\\eps_-+U-E)\\bigg]=0\n\\label{eq:doub:diss:Hubb:form0}\\\\\n&\\Rightarrow (\\eps_++\\eps_--E)\\nonumber\\\\\n&\\quad\\times\\bigg[(2\\eps_-+U-E)(\\eps_++\\eps_--E)(2\\eps_++U-E)\\nonumber\\\\\n&\\quad-4t_+t_-(\\eps_++\\eps_--E)-4t_+t_- U\\bigg]=0\\,.\n\\label{eq:doub:diss:Hubb}\n\\elgn\n\\eref{eq:doub:diss:Hubb:form0} reproduces \\eref{eq:sing:diss:Hubb:form0} once we set $\\gamma=0$ (then we have $\\eps_+=\\eps_-=\\epsilon$). The eigenenergies of $\\hat H^3$ are\n$2\\epsilon$ (degeneracy 3), and the three roots of the cubic equation inside the \nbracket of \\eref{eq:doub:diss:Hubb}:\n\\defS{S}\n\\defD{D}\n\\blgn\n&(2\\eps_-+U-E)(\\eps_++\\eps_--E)(2\\eps_++U-E)\\nonumber\\\\\n&\\quad-4t_+t_-(\\eps_++\\eps_--E)-4t_+t_- U = 0\\,\n\\label{eq:cubic:doub:diss:Hubb}\n\\elgn\nwhich can be simplified as (see Appendix~\\ref{app:cubic})\n\\blgn\nX^3-U X^2-K X - L=0\n\\label{eq:doub:diss:Hubb:cubic:form}\n\\elgn\nwith $X\\equiv x+U$; $x\\equiv \\eps_++\\eps_--E$; $K\\equiv 4(t^2-\\gamma^2-\\lambda^2)$; $L\\equiv 4\\gamma^2 U$. 
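As a sanity check, the cubic in $X$ can be solved numerically. The short Python sketch below (an illustration assuming NumPy; the function and parameter names are ours, with defaults matching the figure parameters) recovers the $S_z=0$ levels via $E=2\epsilon+U-X$ and, at $\gamma=0$, reproduces the spectrum of $\hat H^2$:

```python
import numpy as np

def levels(eps=0.5, t=1.0, lam=0.0, gamma=0.0, U=2.0):
    """S_z = 0 levels of H^3 from the cubic X^3 - U X^2 - K X - L = 0,
    with K = 4(t^2 - gamma^2 - lambda^2), L = 4 gamma^2 U, and
    E = 2*eps + U - X (since X = x + U and x = 2*eps - E)."""
    K = 4.0 * (t**2 - gamma**2 - lam**2)
    L = 4.0 * gamma**2 * U
    X = np.roots([1.0, -U, -K, -L])   # roots via the companion matrix
    return 2.0 * eps + U - X

# gamma = 0 check: reproduces the H^2 levels 2*eps + U and
# (4*eps + U +- sqrt(16(t^2 - lam^2) + U^2))/2
E = np.sort(np.asarray(levels(gamma=0.0, lam=0.0), complex).real)
assert np.allclose(E, [2.0 - np.sqrt(5.0), 3.0, 2.0 + np.sqrt(5.0)])
```

Scanning `lam` at fixed `gamma` and monitoring the imaginary parts of these roots locates the exceptional points directly.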
\n\n\n\n\\begin{figure}[htp!]\n\\subfigure[]{\n\\includegraphics[totalheight=4.5cm]{.\/ImE_vs_lambda_fixed_gamma.eps}\n\\label{fig:ImE:vs:lambda:fixed:gamma}\n}\n\\subfigure[]{\n\\includegraphics[totalheight=4.5cm]{.\/ReE_vs_lambda_fixed_gamma.eps}\n\\label{fig:ReE:vs:lambda:fixed:gamma}\n}\n\\caption{(a) Imaginary and (b) real parts of the complex eigenenergy pair plotted as a function of \ndissipative parameter $\\lambda$ for $t=1.0$, $\\epsilon=0.5$, and $U=2.0$ at $\\gamma=0.1$.}\n\\label{fig:E:vs:lambda:fixed:gamma}\n\\end{figure}\nThus once we solve for $X$ in \\eref{eq:doub:diss:Hubb:cubic:form} by typical Cardano's method~\\cite{book:tignol01:galois} or numerically~\\cite{book:press:etal02:nrecipe:inC}, we\nexpect to have at least one real root all the time, the other two roots become complex\nconjugates of each other (since the coefficients of $X$ are real) beyond a certain parameter space. This pair of complex conjugate roots give rise to EPs at the parameter space when the complex roots just become real. \nSince we introduce two kinds of non-Hermiticity via the orbital energy and the hopping \nterms, it may be natural to expect observing additional EPs. These EPs are different from higher order EPs~\\cite{hodaei:etal:nat17}, since we are focusing always\non the pair of energy levels that can become complex in certain parameter regimes, while the other levels always promise to be real. \nWe notice, for a fixed \n$\\gamma$, as we shift $\\lambda$ from zero, $\\text{Im}\\,E^\\pm$ start becoming finite beyond a point $\\lambda_{e1}$, then again disappear at $\\lambda_{e2}$, and then become finite above $\\lambda_{e3}$. \n (see \\fref{fig:ImE:vs:lambda:fixed:gamma}). $\\lambda_{e1}$, $\\lambda_{e2}$, and $\\lambda_{e3}$: all these are EPs as they are degenerate onset points of imaginary eigenenergies and like in the previous cases, they appear symmetrically around $\\lambda=0$. 
Though the presence of additional EPs can be anticipated due to the two non-Hermitian terms in the Hamiltonian and the cubic nature of the characteristic equation (\\eref{eq:doub:diss:Hubb:cubic:form}), the behaviors of all of them are not alike.\nUnlike the previous cases, the additional EPs break the mirror symmetry between $\\text{Re}\\,E^\\pm$ \nseen in the earlier case: the energy levels are not equally distributed around the EPs (see \\fref{fig:ReE:vs:lambda:fixed:gamma}). These additional EPs are different because the eigenenergies arise from a complex-conjugate pair of roots of a cubic equation, whose discriminant depends on an additional coefficient compared to the quadratic case.\nThe asymmetry in the real parts of $E^\\pm$ gets reversed once we change the sign of $U$. \nThe asymmetry becomes more evident when we plot them against $\\gamma$ for fixed\n$\\lambda$ or even when $H^\\lambda$ is turned off (see \\fref{fig:ReE:vs:gamma:fixed:lambda}). However, when we set $U=0$,\nwe recover the symmetric real eigenenergy pair of a typical TLS (see \\fref{fig:ReE:vs:gamma:fixed:lambda:U0}).\nThis can be easily understood by noticing that \\eref{eq:doub:diss:Hubb:cubic:form} reduces effectively to the quadratic equation $x^2-4(t^2-\\gamma^2-\\lambda^2)=0$ (for $t^2\\ne \\gamma^2+\\lambda^2$) which produces typical square-root EPs at $\\gamma_e=\\pm \\sqrt{t^2-\\lambda^2}$ and $\\lambda_e=\\pm \\sqrt{t^2-\\gamma^2}$ in the $\\gamma$ and $\\lambda$ parameter spaces respectively, similar to the form $\\lambda_e$ has for $H^1$ and
$H^2$.\n\\begin{figure}[htp!]\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=4.3cm,clip]{.\/ImE_vs_gamma_fixed_lambda.eps}\n\\label{fig:ImE:vs:gamma:fixed:lambda}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=4.3cm,clip]{.\/ReE_vs_gamma_fixed_lambda.eps}\n\\label{fig:ReE:vs:gamma:fixed:lambda}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=4.3cm,clip]{.\/ImE_vs_gamma_fixed_lambda_U0.eps}\n\\label{fig:ImE:vs:gamma:fixed:lambda:U0}\n}\n\\subfigure[]{\n\\centering\\includegraphics[totalheight=4.3cm,clip]{.\/ReE_vs_gamma_fixed_lambda_U0.eps}\n\\label{fig:ReE:vs:gamma:fixed:lambda:U0}\n}\n\\caption{(a) Imaginary and (b) real parts of the complex eigenenergy pair plotted as a function of \nloss\/gain parameter $\\gamma$ for $t=1.0$, $\\epsilon=0.5$, and $U=2.0$ at $\\lambda=0$. (c) Imaginary and (d) real parts of the complex eigenenergy pair plotted against $\\gamma$ for the non-interacting case ($U=0$) at $\\lambda=0.6$ while other parameters remain the same. In the non-interacting situation, the TLS eigenenergy symmetry is recovered.}\n\\label{fig:E:vs:gamma:fixed:lambda}\n\\end{figure}\nThe $\\pazo{PT}$ broken and unbroken phase diagrams are shown in \\fref{fig:gammae:vs:U}. For no other non-Hermiticity parameter, the phase boundary hits unity in the non-interacting limit ($U=0$) at $t=1$, agreeing with the result recently obtained by Pan {\\it et al.}~\\cite{pan:wang:cui:chen:pra20}. However, as soon as the off-diagonal non-Hermiticity parameter is turned on (e.g. $\\lambda=0.5$ case shown \\fref{fig:gammae:vs:U}), the boundary diminishes implying $PT$-symmetry breaking at lower values of $\\gamma$. 
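The phase boundary can also be traced numerically by scanning $\gamma$ for each $U$ and recording where the cubic first acquires a complex-conjugate pair of roots. A minimal Python sketch (assuming NumPy; the function names, grid, and tolerances are our own choices, not from the original code):

```python
import numpy as np

def pt_broken(gamma, U, t=1.0, lam=0.0):
    """True if the cubic X^3 - U X^2 - K X - L = 0 has a complex pair."""
    K = 4.0 * (t**2 - gamma**2 - lam**2)
    L = 4.0 * gamma**2 * U
    X = np.asarray(np.roots([1.0, -U, -K, -L]), complex)
    return bool(np.any(np.abs(X.imag) > 1e-8))

def gamma_e(U, lam=0.0, gmax=3.0, n=3001):
    """Smallest gamma >= 0 on a uniform grid where PT symmetry breaks."""
    for g in np.linspace(0.0, gmax, n):
        if pt_broken(g, U, lam=lam):
            return g
    return np.nan

# U = 0, lam = 0 reduces to the TLS-like result gamma_e = t = 1
assert abs(gamma_e(0.0) - 1.0) < 2e-2
# a finite off-diagonal lambda lowers the boundary, as in the figure
assert gamma_e(0.0, lam=0.5) < gamma_e(0.0)
```

A grid scan is crude but sufficient here; bisection on `pt_broken` would sharpen the boundary at little extra cost.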
\n\\begin{figure}[htp!]\n\\centering\\includegraphics[height=6cm,clip]{.\/gamma_e_vs_U.eps}\n\\caption{$\\pazo{PT}$ broken and unbroken phases on the $\\gamma$-$U$ plane for $t=1$, $\\epsilon=0.5$.\nThe upper and lower curves show the phase boundary for zero ($\\lambda=0$) and finite ($\\lambda=0.5$) off-diagonal non-Hermiticity parameters.}\n\\label{fig:gammae:vs:U}\n\\end{figure}\n\n\\subsection*{Dependence of exceptional points on the Hubbard interaction $U$:}\nSince we notice three sets of EPs, and interaction plays a role in creating an asymmetry in the real eigenvalues, we plot their positions $\\lambda_{e1}$, $\\lambda_{e2}$, and $\\lambda_{e3}$ against the interaction strength. \\fref{fig:lambdae1:vs:U} shows that $\\lambda_{e1}$ always exists (even when $U=0$) and it decreases as $U$ is increased. On the other hand, \\fref{fig:lambdae2:vs:U} and \\fref{fig:lambdae3:vs:U} clearly show that both $\\lambda_{e2}$ and $\\lambda_{e3}$ arise only at a finite value of $U$ and, depending on the value of the loss-gain parameter $\\gamma$, they monotonically increase with $U$. $\\lambda_{e3}$'s positions do not vary as significantly as $\\lambda_{e2}$'s do for the different $\\gamma$ values (e.g. $\\gamma=0.1$ and $\\gamma=0.2$) shown in the figures. In the non-interacting case, the loop structures in $\\text{Im}\\,E^\\pm$ (hence $\\lambda_{e2}$ and $\\lambda_{e3}$) disappear and we only obtain $\\lambda_{e1}$. \nThus we can categorize two distinguishable kinds of EPs: (A) \\emph{interaction generated} ($\\lambda_{e2}$ and $\\lambda_{e3}$) and (B) \\emph{self-generated} ($\\lambda_{e1}$).
These interaction-generated EPs are different from the traditional EPs often discussed in the literature and deserve special attention and further theoretical and experimental research.\n\begin{figure}[htp!]\n\subfigure[]{\n\centering\includegraphics[totalheight=5cm]{.\/lambdae1_vs_U.eps}\n\label{fig:lambdae1:vs:U}\n}\n\subfigure[]{\n\centering\includegraphics[totalheight=5cm]{.\/lambdae2_vs_U.eps}\n\label{fig:lambdae2:vs:U}\n}\n\subfigure[]{\n\centering\includegraphics[totalheight=5cm]{.\/lambdae3_vs_U.eps}\n\label{fig:lambdae3:vs:U}\n}\n\caption{Positions of exceptional points (a) $\lambda_{e1}$, (b) $\lambda_{e2}$, and (c) $\lambda_{e3}$ as the Hubbard interaction strength $U$ is varied for $\gamma=0.1$ and $\gamma=0.2$, keeping $t=1$, $\epsilon=0.5$.}\n\label{fig:lambdae1:lambdae2:lambdae3:vs:U}\n\end{figure}\n\section{Conclusion}\n$\pazo{PT}$ symmetric non-Hermitian physics has been successfully observed in several two-level photonic and optical systems. \nOne particular feature of such Hamiltonians is the existence of exceptional points (EPs) beyond which complex eigenenergies emerge, signaling breaking of the symmetry in the eigenfunctions. As a simplistic model, we consider a hydrogen molecule with a Hubbard interaction acting between its atoms' electrons. We then introduce both diagonal and off-diagonal $\pazo{PT}$ symmetries and notice that the\ninteraction affects the different kinds of EPs generated by the parameters of the Hamiltonian differently. Since varying the interaction strength shifts the position of one kind of EP upward and the other kind downward, it offers flexibility in fine-tuning EPs and more control over their potential applications. In a realistic hydrogen molecule, non-Hermitian loss-gain terms might be introduced through laser-induced molecular ionization and dissociation~\cite{lefebvre:etal:prl09,wrona:etal:srep20}. 
Besides this, a more precise two-site Hubbard model could be emulated in an ultracold double-well system~\cite{murmann:etal:prl15} or via NMR~\cite{melo:etal:nmr:po21}. \nThe role of the Hubbard interaction on the EPs has been studied recently~\cite{pan:wang:cui:chen:pra20}. However, the interplay of the diagonal and off-diagonal $\pazo{PT}$-symmetries and the role\nof interaction on them have, to the best of our knowledge, not been studied before. Such interplay might be extended to the fermionic or bosonic lattice Hubbard models, and its effect on interesting physics, such as the closure of the Mott gap~\cite{tripathi:galda:barman:vinokur:prb16,tripathi:vinokur:srep20} or multiple $\pazo{PT}$-broken phases~\cite{jin:song:ap13}, can be studied.\n\n\section{Acknowledgement and announcement}\nThe authors thank the HBCSE, Mumbai for providing an opportunity to collaborate through their NIUS Physics 15.3 camp.\n\nOur codes are available on the GitHub repository: \newline\url{https:\/\/github.com\/hbaromega\/PT-symmetric-2-site-Hubbard-hydrogen}, \newline under the GNU General Public License.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nWe consider periodic adiabatic processes of spinless short-range entangled\nphases with period $T$ at zero temperature. The ultimate goal when attacking\nthis type of time-dependent problem on general grounds would be to obtain\nan expression for the physical observables induced by the time evolution in\nterms of \\textit{instantaneous} eigenstates and eigenenergies of the system.\n\nIn their pioneering works, Niu and Thouless~\cite{thouless1983,niu1984} found\nsuch an expression for the current operator uniformly averaged over the entire\nspace. In the formulation, they assumed periodic boundary conditions with\nperiod $L_i$ for $i=x,y$ and introduced the solenoidal flux\n$\bm{\phi}=(\phi_x,\phi_y)$ as illustrated in Fig.~\ref{fig:current}. 
\nFor concreteness we work in two spatial dimensions throughout this work. Then\nthe current operator can be expressed as\n\begin{equation}\n\hat{j}_{t\bm{\phi}}^i\equiv\frac{1}{L_i}\int d^2x\hat{j}_{t\bm{\phi}}^i(\bm{x})=\partial_{\phi_i}\hat{H}_{t\bm{\phi}},\quad i=x,y.\label{eq:currentphi}\n\end{equation}\n(For brevity we show the dependence on time $t$, flux $\bm{\phi}$, etc., in\nthe subscript.) Further taking an average over all values of $\bm{\phi}$, the\nexpectation value of the current operator induced by the adiabatic\ntime-evolution can be expressed as the time derivative of the many-body Berry\nphase for varying $\bm{\phi}$\n\begin{eqnarray}\n\int\frac{d^2\phi}{(2\pi)^2}\langle\hat{\bm{j}}_{t\bm{\phi}}\rangle=\partial_t\left(\int\frac{d^2\phi}{(2\pi)^2}\langle\Phi_{t\bm{\phi}}\vert i\partial_{\bm{\phi}}\vert\Phi_{t\bm{\phi}}\rangle\right),\t\n\t\label{eq:jx}\n\end{eqnarray}\nwhere $\vert\Phi_{t\bm{\phi}}\rangle$ is the instantaneous ground state of the\nHamiltonian $\hat{H}_{t\bm{\phi}}$. This expression assumes the periodicity in\n$\bm{\phi}$ (see Eqs.~\eqref{eq:p1}, \eqref{eq:p2} below). It is \textit{not}\npossible to further impose the periodicity in time simultaneously. 
Instead we\nhave $\vert\Phi_{T\bm{\phi}}\rangle\n=e^{-i\bm{\phi}\cdot\bm{Q}}\vert\Phi_{0\bm{\phi}}\rangle$, where\n$\bm{Q}=\int_0^Tdt\int\frac{d^2\phi}{(2\pi)^2}\langle\hat{\bm{j}}_{t\bm{\phi}}\rangle\in\mathbb{Z}^2$ is the charge\npumped during one cycle.\n\nThe result~(\ref{eq:jx}) is formally similar to the constitutive relation for\nMaxwell's equations\n\begin{align}\n\t\bm j(t, \bm x)&=\partial_t\bm p(t, \bm x)+\bm\nabla\times\bm m(t, \bm x),\label{eq:jmeso}\n\end{align}\nwhere $\bm p$ and $\bm m$ are the bulk polarization and the bulk magnetization.\nLater it was shown~\cite{king-smith1993,vanderbilt1993,resta1994,resta2007}\nthat the Thouless result~(\ref{eq:jx}) combined with the constitutive\nrelation~(\ref{eq:jmeso}) gives a useful formula for the bulk polarization ---\nthis development marked the birth of ``the modern theory'' of electric\npolarization. The bulk polarization is given by the integral of the Berry\nconnection (the \textit{first} Chern-Simons form) $P_1$ [see\nEq.~\eqref{eq:modthp} for the formula for band insulators].
In this\nsetting it is useful to integrate over time rather than the solenoidal flux.\nThen the Thouless result reads~\cite{niu1984}\n\begin{align}\n&\int_0^Tdt\langle \hat{\bm{j}}_{t\bm{\phi}}\rangle=-\partial_{\bm{\phi}}\varphi_{\bm{\phi}},\n\t\label{eq:jTh}\\\n&\varphi_{\bm{\phi}}\equiv \int_0^Tdt\langle\Phi_{t\bm{\phi}}\vert i\partial_t\vert\Phi_{t\bm{\phi}}\rangle,\n\label{eq:Berry}\n\end{align}\nwhere $\varphi_{\bm{\phi}}$ is the many-body Berry phase associated with the\nadiabatic time-evolution. In the thermodynamic limit,\n$\partial_{\bm{\phi}}\varphi_{\bm{\phi}}$ is independent of\n$\bm{\phi}$~\cite{niu1984,PhysRevB.98.155137} and one can set\n$\bm{\phi}=\bm{0}$ for instance. There is also a contribution from\n$\partial_{\bm{\phi}}E_{t\bm{\phi}}$ in Eqs.~\eqref{eq:jx}, \eqref{eq:jTh} but\nit is negligibly small for the same reason. We find this formulation of the\nThouless pump more useful because it can be generalized to a wider class of\nphysical observables as we discuss below.\n\nThe persistent current associated with a part of the orbital magnetization can also\nbe expressed using the instantaneous eigenstates and eigenenergies of the\nHamiltonian. For band insulators, it can be written as the curl of a vector\n(see Fig.~\ref{fig:magnetization}a), which, together with the constitutive\nrelation~(\ref{eq:jmeso}), allows one to define the orbital magnetization\n$\bm{m}_{\text{pers}}$. (The subscript refers to the contribution associated\nwith the persistent current.) Alternatively, one can evaluate the change of the\ninstantaneous ground state energy with respect to the external magnetic field.\nThis recent development~\cite{xiao2005,thonhauser2005,ceresoli2006,shi2007}\ngoes under the name of ``the modern theory'' of the orbital magnetization [see\nEq.~\eqref{eq:modthm}]. 
Unlike the bulk polarization, $\bm{m}_{\text{pers}}$ is\nnot related to a topological response.\n\nIn this work we develop a general formulation of the remaining contribution to\nthe electric current in the constitutive relation~\eqref{eq:jmeso} that is\nneither captured by the averaged current in Eqs.~\eqref{eq:jx}, \eqref{eq:jTh}\nnor by the persistent current\n$\bm{\nabla}\times\bm{m}_{\text{pers}}(t,\bm{x})$. We find that, after\ncoarse-graining in time, this contribution can be expressed as the curl of an\nadditional term $\mathbcal{m}$ to the orbital magnetization so that $\bm{m}$ in\nEq.~\eqref{eq:jmeso} is given by\n\begin{equation}\n\bm{m}=\bm{m}_{\text{pers}}+\mathbcal{m}.\n\end{equation}\nOur main result is that $\mathbcal{m}$ can be obtained as a derivative of the\nmany-body Berry phase with respect to an external magnetic field $B_z$ applied\nin the $z$ direction\n\begin{align}\n\tTV\mathcal{m}_z&=\partial_{B_z}\varphi_{B_z}\rvert_{B_z=0},\n\t\label{eq:calMz}\n\end{align}\nwhere $V$ represents the system size and $\varphi_{B_z}$ is defined by\nEq.~(\ref{eq:Berry}) upon substitution $\bm\phi\rightarrow B_z$. This\nexpression is well-defined in two-dimensional systems with open boundary\nconditions in at least one direction. There are known subtleties when applying a\nuniform magnetic field to periodic systems. See Sec.~\ref{subsec:general} for\nthe detailed discussion. In the following we assume vanishing Chern numbers in\n$(\phi_x,\phi_y)$, $(t,\phi_x)$ and $(t,\phi_y)$ spaces.\n\nAs a comparison, in the presence of an external uniform magnetic field $\bm\nB=(0,0,B_z)^{\rm T}$, the instantaneous orbital magnetization\n$\bm{m}_{\text{pers}}$ gives an energy shift\n$\int_0^Tdt(E_{tB_z}-E_{t0})\/T=-V\bm{m}_{\text{pers}}\cdot\bm B+O(\bm{B}^2)$ of\nthe many-body ground state. 
(Throughout this work we set $\hbar=1$.)\nAccordingly, after the period $T$, the ground state acquires an additional\nphase proportional to $TV\bm{m}_{\text{pers}}\cdot\bm B$. On the other hand, a\nnon-zero value of $\mathbcal{m}$ shows up as a Berry phase\n$TV\mathbcal{m}\cdot\bm B+O(\bm{B}^2)$ acquired by the many-body ground state.\nFor this reason, we name $\mathbcal{m}$ \textit{geometric orbital\nmagnetization}. The bulk quantity $T\mathbcal{m}$ is independent of the period\n$T$ and is defined ``mod $e$''. This ambiguity reflects the possibility of\ndecorating the boundary with a one-dimensional Thouless pump.\n\nFor band insulators, we perform perturbation theory with respect to the\napplied magnetic field following Refs.~\onlinecite{shi2007,essin2010} and find\nthat the geometric orbital magnetization consists of two contributions \n\begin{equation}\n\mathbcal{m}=\mathbcal{m}^{\text{top}}+\mathbcal{m}^{\text{non-top}},\n\label{eq:calm2}\n\end{equation}\nthe topological contribution $\mathbcal{m}^{\text{top}}$ is expressed as an\nintegral of the \textit{third} Chern-Simons form $P_3$ in $(t,k_x,k_y)$ space\n[see Eq.~\eqref{eq:3Dtopo2}], while the non-topological contribution\n$\mathbcal{m}^{\text{non-top}}$ is written in terms of instantaneous Bloch\nstates and energies [Eq.~\eqref{eq:nontopcalM}]. The obtained expression for\n$\mathbcal{m}$ of band insulators has a formal similarity with the expression\nfor the magnetoelectric polarizability of three-dimensional band\ninsulators~\cite{essin2010,Malashevich2010,Chen2011} upon the identification\n$t\/T\leftrightarrow k_z\/2\pi$. 
It is worth mentioning that due to the relatively\nlarge gap (of the order of electronvolts) of band insulators, the adiabaticity\ncondition is not particularly restrictive; the period $T$ can be as small as\nseveral femtoseconds.\n\n\begin{figure}\n\t\begin{center}\n\t\t\includegraphics[width=1.0\columnwidth]{magnetization2.pdf}\t\t\n\t\t\caption{\label{fig:magnetization}\n\t\tDifferent contributions to the orbital magnetization of a\n\t\ttwo-dimensional periodic adiabatic process with period $T$. a): Persistent current $\bm\n\t\tj_{\text{pers}}$ within each unit cell produces the\n\t\tinstantaneous orbital magnetization $\bm m$. b): An adiabatic\n\t\tprocess where an electron is trapped in a potential well whose\n\t\tcenter $\bm{x}=\bm{r}(t)$ moves along the dashed curve. In the\n\t\tpresence of an externally applied magnetic field, the many-body\n\t\tBerry phase $\varphi_{B_z}$ is given by the Aharonov-Bohm flux\n\t\t(the hatched area). c): Periodic boundary conditions are necessary\n\t\twhen each of the two potential wells comes back to its initial\n\t\tposition after time $T$ by passing through the seam. Two\n\t\tpossible areas to define the Aharonov-Bohm flux (the hatched one and\n\t\tthe non-hatched one) differ by an integer number of flux quanta. d): The unit\n\t\tcell consists of a single anisotropic potential well that is\n\t\tspinning during the adiabatic process. e): Two identical potential\n\t\twells that exchange their positions after a single adiabatic\n\t\tcycle. All adiabatic processes shown here have vanishing\n\t\tintegrated current~(\ref{eq:jTh}).}\n\t\end{center}\n\end{figure}\n\nTo gain an intuitive understanding of the two contributions~(\ref{eq:calm2}), one\ncan think of the topological piece $\mathbcal{m}^{\text{top}}$ as\noriginating from the Aharonov-Bohm contribution to the many-body Berry phase in\nthe magnetic field. 
Thus $\\mathbcal{m}^{\\text{top}}$ describes the\nmagnetization from electrons, whose positions are moving during the adiabatic\nprocess, as depicted with dashed lines and arrows in\nFig.~\\ref{fig:magnetization}b-c and e. Although Fig.~\\ref{fig:magnetization}a\nmay look similar, $\\bm j_{\\text{pers}}$ in Fig.~\\ref{fig:magnetization}a\nrepresents a \\emph{static} persistent current that is uniformly distributed on\nthe ring. In contrast, the current density in Fig.~\\ref{fig:magnetization}b at\neach time is localized to the position of the potential well and it becomes\ndivergence-free only after averaging over the period $T$. Similarly, the\nnon-topological piece $\\mathbcal{m}^{\\text{non-top}}$ can be understood to be\noriginating from ``spinning'' of anisotropic crystalline potentials, see\nFig~\\ref{fig:magnetization}d.\n\nLet us mention at this point several related works. Adiabatic\ndynamics can be induced by time-dependent lattice deformations (phonons), which\nis the subject of studies on dynamical deformations of\ncrystals.~\\cite{ceresoli2002,juraschek2017,juraschek2018,dong2018,stengel2018}\nIn Refs.~\\onlinecite{juraschek2017,juraschek2018} it was shown that\na time-varying polarization gives rise to a contribution to the orbital\nmagnetization, and the semi-classical description developed in\nRef.~\\onlinecite{dong2018} found the same effect within their framework. The\ntime-varying polarizations in these works correspond to the situation depicted\nin Fig.~\\ref{fig:magnetization}b. In the case of band insulators, we find that\nthey are captured by the \\textit{abelian} third Chern-Simons form. Furthermore,\nRefs.~\\onlinecite{ceresoli2002,stengel2018} showed that rotation of molecules,\nas in Fig.~\\ref{fig:magnetization}d, gives rise to an orbital magnetization\ncontribution that can be captured by relation~(\\ref{eq:calMz}). The present\napproach gives unified description of the above-mentioned effects. 
More\nimportantly, it properly describes the orbital magnetization in adiabatic\nprocesses that have not been previously considered: the process in\nFig.~\ref{fig:magnetization}e has inversion symmetry at all times, thus the\npolarization is time-independent, yet it gives rise to non-zero $\mathbcal{m}$,\nwhich for the case of band insulators is captured by the \textit{non-abelian} third\nChern-Simons form, see Sec.~\ref{subsec:tgeoM}.\n\nAnalogous to the bulk polarization, crystalline symmetries can quantize the topological\ngeometric orbital magnetization $\mathbcal{m}^{\text{top}}$. We show that,\nunder certain crystalline symmetries, $\mathbcal{m}^{\text{top}}$ is related to\nrecently discussed higher-order topological\nphases.~\cite{parameswaran2017,schindler2018,peng2017,langbehn2017,song2017,fang2018,ezawa2018,shapourian2017,zhu2018,yan2018,wang2018,wang2018b,khalaf2018,khalaf2018b,trifunovic2019,nobuyuki2018}\nAmong them, the topological insulators that exhibit quantized corner charges in\nthe presence of certain crystalline symmetries recently attracted a lot of\ntheoretical~\cite{benalcazar2017,benalcazar2018} and\nexperimental~\cite{serra-garcia2018,peterson2018} attention. Although, due to\ncrystalline symmetries, the bulk quadrupole moment is well defined in these\nsystems (see Fig.~\ref{fig:C4}a), it is still disputed in the literature\nwhether such a definition is possible in the absence of any quantizing crystalline\nsymmetries.~\cite{kang2018,metthew2018,ono2019} Recent work in\nRef.~\onlinecite{vanmiert2018b} revealed a connection between higher-order\ntopological\ninsulators~\cite{parameswaran2017,schindler2018,peng2017,langbehn2017,song2017,fang2018,ezawa2018,shapourian2017,zhu2018,yan2018,wang2018,wang2018b,khalaf2018,khalaf2018b,trifunovic2019,nobuyuki2018}\nprotected by roto-inversion symmetries and adiabatic processes that involve\ntopological insulators with quantized corner charges. 
In\nSec.~\\ref{sec:symmetries} we show that the adiabatic processes discussed by van\nMiert and Ortix~\\cite{vanmiert2018b} are characterized by quantized geometric\norbital magnetization, and we relate the value of $T\\mathcal{m}_z^{\\text{top}}$ to\nthe quantized corner charge.\n\nThe remaining of this article is organized as follows: in Sec.~\\ref{sec:prelim}\nwe review the modern theory of the polarization and the orbital\nmagnetization, Sec.~\\ref{sec:noninteracting} contains derivation of our main\nresults, Sec.~\\ref{sec:symmetries} discusses the role of symmetries in the\nadiabatic process, and Sec.~\\ref{sec:examples} presents various non-interacting\nexamples that illustrate difference between instantaneous orbital\nmagnetization, topological and non-topological geometric orbital magnetization.\nMore precisely, we consider toy models illustrating systems depicted in\nFig.~\\ref{fig:magnetization}. As a more realistic application of physics\nconsidered in this work, we present in Sec.~{\\ref{subsec:rotoM}} calculation of\nmagnetization induced by rotation of an insulator. A long time\nago,~\\cite{barnett1915,barnett1935} Barnett considered magnetization of an\nuncharged paramagnetic material when spun on its axis. Modeling paramagnetic\nmaterial as collection of local magnetic moments that are randomly oriented,\nBarnett~\\cite{barnett1915} argued that rotation creates a torque that acts to\nalign local magnetic moments with rotation axis. This torque gives rise to\nmagnetization $M=\\chi\\Omega\/\\gamma$, where $\\chi$ is paramagnetic\nsusceptibility, $\\Omega$ is rotation frequency and $\\gamma$ is electron\ngyromagnetic ratio. Barnett's measurement of this\neffect~\\cite{barnett1915} provided first accurate measurement of electron\ngyromagnetic ratio. We calculate $\\mathbcal{m}$ for this model, which, as seen from\nEq.~(\\ref{eq:calMz}), is also proportional to rotational frequency\n$\\Omega=2\\pi\/T$ and estimate quantum correction to Barnett effect. 
The electron\ncontribution to $\mathbcal{m}$ has both topological and non-topological pieces,\nbut since the system is uncharged, we find that the electron contribution to\n$\mathbcal{m}^{\text{top}}$ is canceled by the corresponding ionic contribution. Thus\nthe resulting $\mathbcal{m}$ is solely due to the anisotropy of the crystalline potential,\nanalogous to the toy model in Fig.~\ref{fig:magnetization}d. In\nSec.~\ref{sec:examples2} we consider examples of general interacting systems\nwhere the periodic adiabatic process consists of\n``spinning''~\cite{ceresoli2002,stengel2018} or\n``shaking''~\cite{juraschek2017,juraschek2018,dong2018} of the whole\nsystem.~\cite{goldman2014} Our conclusions and outlook can be found in\nSec.~\ref{sec:conclusions}.\n\n\section{Preliminaries}\label{sec:prelim}\nHere we review the formulation of the polarization and the orbital\nmagnetization for band insulators in $2+1$ dimensions developed in\nRefs.~\onlinecite{king-smith1993,vanderbilt1993,resta1994,resta2007,xiao2005,thonhauser2005,ceresoli2006,shi2007}.\nTo simplify notation, we assume primitive lattice vectors of the square\nlattice type, but this general framework is \textit{not} restricted to this\nspecial choice. \n\n\subsection{Modern theory}\nLet us denote by\n$\psi_{t\bm{k}n}(\bm{x})=(a\/\sqrt{V})e^{i\bm{k}\cdot\bm{x}}u_{t\bm{k}n}(\bm{x})$\nthe instantaneous Bloch function of the $n$-th occupied band, satisfying\n$h_{t}|\psi_{t\bm{k}n}\rangle=\varepsilon_{t\bm{k}n}|\psi_{t\bm{k}n}\rangle$.\nHere $h_t$ is the single-particle Hamiltonian with a periodic potential, $V=L_xL_y$ is the\nsystem size and $a$ is the lattice constant. 
We choose the cell-periodic gauge\nso that they obey the following conditions for any lattice vector $\\bm{R}$ and\nreciprocal lattice vector $\\bm{G}$~\\cite{vanderbilt2018}\n\\begin{align}\n&u_{t\\bm{k}n}(\\bm{x}+\\bm{R})=u_{t\\bm{k}n}(\\bm{x}),\\\\\n&u_{t\\bm{k}+\\bm Gn}(\\bm{x})=e^{-i\\bm G\\cdot\\bm{x}}u_{t\\bm{k}n}(\\bm{x}).\\label{eq:uk_b1}\n\\end{align}\n\nAccording to the modern theory, the bulk polarization density $\\bm{p}(t)$ is\ngiven by\n\\begin{equation}\n\\bm{p}(t)=\\frac{ei}{V}\\sum_{\\bm{k}n\\in\\text{occ}}\\langle u_{t\\bm{k}n}|\\bm{\\nabla}_{\\bm{k}}u_{t\\bm{k}n}\\rangle\\,\\,\\text{ mod }\\,\\,\\frac{e}{a}.\\label{eq:modthp}\n\\end{equation}\nwhere $e$ $(<0)$ is the electric charge. The sum over $\\bm{k}$ can be replaced\nwith the integral $V\\int \\frac{d^2k}{(2\\pi)^2}$ over the first Brillouin zone.\nSimilarly, the orbital magnetization density $\\bm{m}_{\\text{pers}}(t)$ is given\nby\n\\begin{equation}\n\\bm{m}_{\\text{pers}}(t)=\\frac{ei}{2V}\\sum_{\\bm{k}n\\in\\text{occ}}\\langle\\bm{\\nabla}_{\\bm{k}}u_{t\\bm{k}n}|\\times(h_{t\\bm{k}}+\\varepsilon_{t\\bm{k}n})|\\bm{\\nabla}_{\\bm{k}}u_{t\\bm{k}n}\\rangle,\\label{eq:modthm}\n\\end{equation}\nwhere $h_{t\\bm{k}}\\equiv e^{-i\\bm{k}\\cdot\\bm{x}}h_{t}e^{i\\bm{k}\\cdot\\bm{x}}$. The ambiguity in\nEq.~\\eqref{eq:modthp} can be seen by a smooth gauge transformation\n$|u_{t\\bm{k}n}\\rangle'= \\sum_m|u_{t\\bm{k}m}\\rangle(U_{\\bm{k}})_{m,n}$ that\nchanges the integral in Eq.~\\eqref{eq:modthp} by an integer multiple of $e\/a$,\nwhile the integral in~\\eqref{eq:modthm} remains unchanged. 
\n\n\nIn addition to the derivation via the Thouless pump as we described in the\nintroduction, the formula \eqref{eq:modthp} was also verified in terms of the\nWannier state localized around the unit cell $\bm{R}$:\n\begin{equation}\n|w_{tn\bm{R}}\rangle\equiv\frac{a}{\sqrt{V}}\sum_{\bm k}e^{-i\bm k\cdot\bm R}|\psi_{tn\bm k}\rangle.\label{eq:Wannier}\n\end{equation}\nIn terms of the Wannier function, $\bm{p}(t)$ is the deviation of the Wannier center from $\bm{R}$, i.e., $\bm{p}(t)=\frac{e}{a^2}\int d^2x(\bm{x}-\bm{R})|w_{t n\bm{R}}(\bm{x})|^2$. \n\n\nWhen the origin of the unit cell is changed by $\bm{\delta}$, we find\n\begin{align}\n&|u_{t\bm{k}n}\rangle'=e^{i\bm k\cdot \bm\delta}|u_{t\bm{k}n}\rangle,\label{origin}\\\n&\bm{p}'(t)=\bm{p}(t)-\frac{e\bm{\delta}}{a^2},\\\n&\bm{m}_{\text{pers}}'(t)=\bm{m}_{\text{pers}}(t).\n\end{align}\nNamely, $\bm{p}(t)$ depends on the specific choice of the origin, while $\bm{m}_{\text{pers}}(t)$ does\nnot. Therefore, it is not $\bm{p}(t)$ itself but rather the change $\Delta \bm{p}(t)$\nthat is of physical interest. It also follows that for a periodic adiabatic\nprocess, where the system is translated by a certain number of unit\ncells during the period $T$, the orbital magnetization is periodic in time\nwhile the polarization is not.\n\nFor interacting systems under periodic boundary conditions, the combination\nin the parentheses in Eq.~\eqref{eq:jx} replaces Eq.~\eqref{eq:modthp}. The\nperiodicity in $\phi_i$ in this formulation is encoded in the relation\n\begin{align}\n&\vert\Phi_{t2\pi\phi_y}\rangle =e^{-2\pi i\hat{P}_x}\vert\Phi_{t0\phi_y}\rangle,\label{eq:p1}\\\n&\vert\Phi_{t\phi_x2\pi}\rangle =e^{-2\pi i\hat{P}_y}\vert\Phi_{t\phi_x0}\rangle,\label{eq:p2}\n\end{align}\nwhere $\hat{P}_i$ is the polarization operator (see\nRef.~\onlinecite{watanabe2018} for example). 
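The Berry-connection integral behind Eq.~\eqref{eq:modthp} has a standard lattice evaluation as the phase of a Wilson-loop product of overlaps of cell-periodic states, which is gauge invariant by construction. A minimal sketch, assuming a hypothetical two-band Rice-Mele-type Bloch Hamiltonian with made-up parameters (in units $e=a=1$; this is not a model taken from the paper):

```python
import numpy as np

def bloch_h(k, t=1.0, delta=0.3, m=0.4):
    # Hypothetical two-band Rice-Mele-type Bloch Hamiltonian
    # (illustrative parameters; not a model from the text).
    hx = (t + delta) + (t - delta) * np.cos(k)
    hy = (t - delta) * np.sin(k)
    return np.array([[m, hx - 1j * hy],
                     [hx + 1j * hy, -m]])

def zak_phase(nk=400):
    # Discretized Berry (Zak) phase of the lower band,
    # phi = -arg prod_k <u_k|u_{k+dk}>: the lattice version of
    # the Berry-connection integral in the polarization formula.
    ks = np.linspace(0.0, 2 * np.pi, nk, endpoint=False)
    us = [np.linalg.eigh(bloch_h(k))[1][:, 0] for k in ks]
    prod = 1.0 + 0.0j
    for i in range(nk):
        prod *= np.vdot(us[i], us[(i + 1) % nk])
    return -np.angle(prod)  # polarization p = phi/(2*pi), mod 1

print(zak_phase())
```

Because each eigenvector enters the product once as a bra and once as a ket, the arbitrary phases returned by the diagonalization cancel, realizing the "mod $e\/a$" ambiguity only through the branch of the argument.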
\n\n\\subsection{Topological response}\\label{sec:top_res}\nTo discuss the physical consequence of $\\Delta \\bm{p}(t)$, let us recall first the\ntopological linear response in $(1+1)$ dimension~\\cite{thouless1983} that holds\nat a mesoscopic scale after coarse-graining\n\\begin{align}\n&j^\\mu(t,x)=-\\sum_\\nu\\varepsilon^{\\mu\\nu}\\partial_\\theta P_1(\\theta)\\partial_\\nu\\theta,\\label{eq:1Dtopo}\\\\\n&P_1(\\theta)\\equiv-e\\int\\frac{dk}{2\\pi} \\tr A_{\\theta k},\\label{eq:1Dtopo2}\\\\\n&(A_{\\theta k})_{n,m}\\equiv-i\\langle u_{\\theta kn}|\\partial_{k}u_{\\theta km}\\rangle.\n\\end{align}\nHere, $x^\\mu$ ($\\mu=0,1$) represents $(t,x)$, and $j^\\mu$ corresponds to\n$(n,j)$. $A_{\\theta k}$ is a (finite dimensional) matrix constructed by\n\\textit{occpied} Bloch states and the trace in Eq.~\\eqref{eq:1Dtopo2} is the\nmatrix trace. Comparing with above equations, we see that $P_1(\\theta(t))$ is\nthe 1D version of $\\bm{p}(t)$ in Eq.~\\eqref{eq:modthp}. This response is\nderived starting from the Chern-Simons theory\n$j^\\mu=\\frac{C_1}{2\\pi}\\sum_{\\nu\\lambda}\\varepsilon^{\\mu\\nu\\lambda}\\partial_\\nu\nA_\\lambda^{\\text{ex}}$ in $(2+1)$ dimensions that describes the response toward\nan external field $A^{\\text{ex}}$ and reducing the dimension to $(1+1)$\ndimensions.\n\nThe parameter $\\theta$ in Eq.~\\eqref{eq:1Dtopo} is a slowly varying field\ninterpolating between two different systems. For example, an adiabatic time\nevolution $\\theta(t)$ induces the bulk current $j(t,x)=\\partial_t\nP_1(\\theta(t))$. The bulk charge transfer from $t=0$ to $t=T$ is thus given\nby $\\int_{0}^{T}dt j(t,x)=P_1(\\theta(T))-P_1(\\theta(0))$.\nSimilarly, a transition of one 1D system to another can be described by\n$\\theta(x)$, giving rise to a charge density $n(t,x)=-\\partial_x\nP_1(\\theta(x))$. Therefore, the total charge $Q^{\\text{edge}}$ accumulated to\nthe boundary is $Q^{\\text{edge}}=\\int_{x_0}^{x_1}dx\nn(t,x)=P_1(\\theta(x_0))-P_1(\\theta(x_1))$. 
For a given $\\theta$ that specifies\n$P_1(\\theta)$ as a continuous function of $t$ and $x$, even the integer part of\n$Q^{\\text{edge}}$ is well-defined. However, only the fractional part of\n$Q^{\\text{edge}}$ is independent of the detailed choice of the interpolation\n--- the fractional part depends only on the initial and the final\nvalues of $P_1$ that can be individually computed by Eq.~\\eqref{eq:1Dtopo2}.\nWhat we described here can be straightforwardly translated to 2D systems. The\npumped charge through the bulk per unit length along $\\bm n$ is given by\n$\\bm{Q}\\cdot\\bm n$, where\n\\begin{equation}\n\t\\bm{Q}\\equiv\\int_{0}^{T}dt\\,\\partial_t\\bm{p}(t)=\\frac{1}{T}[\\bm{p}(T)-\\bm{p}(0)].\\label{eq:Jb}\n\\end{equation}\n\nThe analog of Eq.~\\eqref{eq:1Dtopo} in $(3+1)$ dimensions reads~\\cite{qi2008}\n\\begin{align}\n&j^\\mu(t,\\bm x)=-\\frac{1}{2\\pi}\\sum_{\\nu,\\lambda,\\rho}\\varepsilon^{\\mu\\nu\\lambda\\rho}\\partial_\\theta P_3(\\theta)\\partial_\\nu\\theta \\partial_\\lambda A_\\rho^{\\text{ex}},\\label{eq:3Dtopo}\\\\\n&P_3(\\theta)\\equiv -e\\int\\frac{d^3k}{8\\pi^2}\\tr \\bm A_{\\theta\\bm{k}}\\cdot(\\bm\\nabla_{\\bm k}+\\tfrac{2i}{3}\\bm A_{\\theta\\bm{k}})\\times\\bm A_{\\theta\\bm{k}},\\\\\n&(\\bm A_{\\theta\\bm{k}})_{n,m}\\equiv-i\\langle u_{\\theta\\bm{k}n}|\\bm\\nabla_{\\bm k}u_{\\theta\\bm{k}m}\\rangle.\n\\end{align}\nHere, $\\mu,\\nu,\\rho,\\lambda=0,1,2,3$. Again, $\\bm A_{\\theta\\bm{k}}$ is defined\nby occupied Bloch states. This response is derived from the\nChern-Simons theory\n$j^\\mu=\\frac{C_2}{8\\pi^2}\\varepsilon^{\\mu\\nu\\lambda\\rho\\sigma}\\partial_\\nu\nA_\\lambda^{\\text{ex}}\\partial_\\rho A_\\sigma^{\\text{ex}}$ in $(4+1)$ dimensions by a\ndimensional reduction. This topological response implies, for example, the\nmagnetoelectric effect~\\cite{qi2008,essin2009,essin2010}\n$\\rho(z)=-\\frac{1}{2\\pi} \\partial_zP_3(\\theta(z))B_z^{\\text{ex}}$. 
\n\n\\section{Geometric orbital magnetization}\\label{sec:noninteracting}\nIn this section we present the derivations of our main results. We start with\nverifying the most general expression ~\\eqref{eq:calMz} for the geometric\norbital magnetization. Then we derive the formula for the topological and\nnon-topological contributions in Eq.~\\eqref{eq:calm2}.\n\n\\subsection{Berry phases in adiabatic process}\\label{subsec:general}\nSuppose we are interested in the expectation value of the quantity $\\hat{X}$,\ngiven by\n\\begin{equation}\n\\hat{X}=\\partial_\\epsilon \\hat{H}_{\\epsilon}|_{\\epsilon=0}\\label{eq:other}\n\\end{equation}\nfor some parameter $\\epsilon$ in the Hamiltonian. For example, in the case of\nthe averaged current operator, $\\epsilon$ can be identified with the solenoidal\nflux $\\bm{\\phi}$ [see Eq.~\\eqref{eq:currentphi}]. Likewise, for the orbital\nmagnetization we use the external magnetic field $B_z$.\n\nNow suppose that the Hamiltonian $\\hat{H}_{t}$ has a periodic adiabatic\ndependence on $t$, and let $\\vert\\Phi_{t}\\rangle$ be the instantaneous\nground state with the energy eigenvalue $E_{t}$. We assume an excitation\ngap $\\Delta_{t}$ and the time-dependence of the Hamiltonian must be slow\nenough so that $\\Delta_{t}T\\gg1$. Using the density matrix\n$\\hat\\rho_t=\\vert\\Psi_t\\rangle\\langle\\Psi_t\\vert$ obeying the time-dependent\nSchr{\\\"o}dinger equation $\\partial_t\\hat\\rho_t=-i[\\hat H_t,\\hat\\rho_t ]$, we\nexpress the time-average of the expectation value of $\\hat{X}_t$ as\n\\begin{equation}\n\tX\\equiv\\int_0^T\\frac{dt}{T}{\\rm tr}[\\hat\\rho_t\\hat{X}_t].\\label{eq:Mz}\n\\end{equation}\nIn the absence of the time-evolution, the density matrix is identical to\n$\\vert\\Phi_{t}\\rangle\\langle\\Phi_{t}\\vert$. It acquires contributions from\nexcited states $\\hat{H}_{t}|\\Phi_{t}^M\\rangle=E_{t}^M|\\Phi_{t}^M\\rangle$ due to\nthe time evolution. 
To lowest order in perturbation theory with respect to\n$(\Delta_{t}T)^{-1}$, the relevant matrix elements are given\nby~\cite{thouless1983,watanabe2018}\n\begin{equation}\n\langle\Phi_{t}^M\vert\hat\rho_t\vert\Phi_{t}\rangle=\langle\Phi_{t}\vert\hat\rho_t\vert\Phi_{t}^M\rangle^*=\frac{i\langle\Phi_{t}^M\vert\partial_t\vert\Phi_{t}\rangle}{E_{t}^M-E_{t}}.\n\label{eq:rho_adiabatic}\n\end{equation}\nNow we plug in\n$\hat{X}_t\equiv\partial_{\epsilon}\hat{H}_{t\epsilon}|_{\epsilon=0}$ and make\nuse of the Sternheimer identity\n\begin{equation}\n(E_{t}-E_{t}^M)\langle\Phi_{t}^M\vert\partial_{\epsilon}\vert\Phi_{t\epsilon}\rangle|_{\epsilon=0}=\langle\Phi_{t}^M\vert\partial_{\epsilon}\hat{H}_{t\epsilon}|_{\epsilon=0}\vert\Phi_{t}\rangle,\label{SI}\n\end{equation}\nwhich follows by differentiating\n$\hat{H}_{t\epsilon}\vert\Phi_{t\epsilon}\rangle=E_{t\epsilon}\vert\Phi_{t\epsilon}\rangle$\nwith respect to $\epsilon$ at $\epsilon=0$. In our notation,\n$\hat{H}_{t\epsilon}|_{\epsilon=0}=\hat{H}_t$ and\n$\vert\Phi_{t\epsilon}\rangle|_{\epsilon=0}=\vert\Phi_{t}\rangle$. Combining\nthese equations, we find\n\begin{align}\n\tX&=\int_0^T\frac{dt}{T}(\partial_{\epsilon} E_{t\epsilon}+{\cal F}_{t\epsilon})\rvert_{\epsilon=0},\label{eq:BerryF}\n\end{align}\nwhere \n\begin{align}\n\t{\cal F}_{t\epsilon}&\equiv i\partial_t\langle\Phi_{t\epsilon}\vert \partial_{\epsilon}\vert\Phi_{t\epsilon}\rangle-i\partial_\epsilon\langle\Phi_{t\epsilon}\vert \partial_t\vert\Phi_{t\epsilon}\rangle,\n\end{align}\nis the Berry curvature in $(t,\epsilon)$ space. 
Further assuming the\nperiodicity in time\n\begin{align}\n\t\vert\Phi_{T\epsilon}\rangle=\vert\Phi_{0\epsilon}\rangle,\label{eq:GSp}\n\end{align}\nwe arrive at our general expression\n\begin{align}\n\tX&=X_{\text{inst}}+X_{\text{geom}},\\\n\tX_{\text{inst}}&\equiv\int_0^T\frac{dt}{T}\langle\Phi_t\vert\hat{X}_t\vert\Phi_t\rangle=\int_0^T\frac{dt}{T}\partial_{\epsilon} E_{t\epsilon}\rvert_{\epsilon=0},\label{eq:Xinst}\\\n\tX_{\text{geom}}&\equiv -\frac{1}{T}\partial_\epsilon\varphi_\epsilon\rvert_{\epsilon=0},\quad\varphi_\epsilon\equiv \int_0^Tdt\langle\Phi_{t\epsilon}\vert i\partial_t\vert\Phi_{t\epsilon}\rangle,\label{eq:Xgeom}\n\end{align}\nwhere $X_{\text{inst}}$ is the time average of the expectation value using the\ninstantaneous ground state and $X_{\text{geom}}$ is the geometric contribution\noriginating from the adiabatic time dependence. This is the generalization of\nEq.~\eqref{eq:jTh} for the electric current to physical observables written as\nthe derivative of the Hamiltonian as in Eq.~\eqref{eq:other}. The following\nbasis-independent expressions may also be useful:\n\begin{align}\n\tX_{\text{geom}}&=\int_0^T\frac{dt}{T} i{\rm tr}\hat P_{t\epsilon}[\partial_t\hat P_{t\epsilon},\partial_\epsilon\hat P_{t\epsilon} ]|_{\epsilon=0}\label{eq:MzSPint}\\\n\t&={\rm Re}\int_0^T\frac{dt}{T}\oint\frac{dz}{\pi}{\rm tr}[(\partial_t\hat{H}_{t})\hat{G}_{t}^2\partial_{\epsilon}\hat{H}_{t\epsilon}|_{\epsilon=0}\hat{G}_{t}],\n\end{align}\nwhere $\hat P_{t\epsilon}=|\Phi_{t\epsilon}\rangle\langle\Phi_{t\epsilon}\vert$\nis the projector onto the many-body ground state,\n$\hat{G}_{t}=(z-\hat{H}_{t})^{-1}$ is the many-body Green function, and the\nintegration contour encloses only the ground state at $z=E_{t}$.\n\nLet us now specialize to the case $\epsilon=B_z$. 
Then $X_{\text{geom}}$ in\nEq.~\eqref{eq:Xgeom} gives the geometric orbital magnetization $\mathbcal{m}$\nin Eq.~\eqref{eq:calMz}, while $X_{\text{inst}}$ is the persistent\ncurrent contribution. Note the additional minus sign because of\n$\hat{M}_z=-\partial_{B_z} \hat{H}_{B_z}$. Previously,\nRefs.~\onlinecite{ceresoli2002,stengel2018} considered the Berry curvature in\n$(t,B_z)$ space to describe orbital magnetization induced by rotation of\nmolecules. See also examples in Sec.~\ref{subsec:nontgeoM}\nand~\ref{subsec:rotom} below.\n\nWhen applying these formulae, one has to be careful about boundary conditions.\nIf open boundary conditions in at least one direction are imposed, the\nresult~(\ref{eq:MzSPint}) is directly applicable. However, a process that is\nperiodic in time under periodic boundary conditions may lose its\nperiodicity in time under open boundary conditions. For example, the system in\nFig.~\ref{fig:magnetization}c is not periodic in time if an open boundary\ncondition in the $y$ direction is imposed. Similarly, the periodicity in time\nrequires periodic boundary conditions in both directions for the $C_4$-symmetric\nsystem in Fig.~\ref{fig:C4e}. Keeping (original) periodic boundary conditions\nin both directions in the presence of a magnetic field implies that the net flux\nthrough the system has to vanish. If the system is homogeneous, local\ncontributions to $\mathbcal{m}$ cancel out and we cannot obtain useful\ninformation about the system. (For single-particle problems there is a\nresolution as we discuss below.) Finally, one can impose \textit{magnetic}\nperiodic boundary conditions assuming that the total magnetic flux applied to\nthe system $B_zL_xL_y$ is an integer multiple of\n$2\pi$.~\cite{brown1964,zak1964} However, each eigenstate of $\hat{H}_{B_z}$\nmay not be analytic as a function of $B_z$ despite the fact that the magnetic\nfield $B_z=2\pi\/L_xL_y$ itself can be made small for large systems. 
The\nexpression~\eqref{eq:MzSPint} is still applicable if the projector onto the\ninstantaneous ground state is an analytic function of $B_z$, which is the case for\nband insulators with vanishing Chern\nnumber.~\cite{essin2010,Malashevich2010,gonze2011,Chen2011} However, to our\nknowledge there is no general proof for gapped interacting systems. \n\n\subsection{Noninteracting systems}\n\label{nis}\nLet us apply this general expression to noninteracting\nelectrons described by the quadratic Hamiltonian\n\begin{equation}\n\hat{H}_{t\epsilon}=\sum_{n}\varepsilon_{t\epsilon n}\hat{\gamma}_{t\epsilon n}^\dagger \hat{\gamma}_{t\epsilon n}.\n\end{equation}\nWe label single-particle states in such a way that $\varepsilon_{t\epsilon\nn+1}\geq\varepsilon_{t\epsilon n}$ for all $n=1,2,\cdots$. We also assume a\nfinite gap $\Delta=\varepsilon_{t\epsilon N+1}-\varepsilon_{t\epsilon N}$\nbetween the $N$-th and $(N+1)$-th levels. We write the single-particle state as\n$|\gamma_{t\epsilon n}\rangle\equiv \hat{\gamma}_{t\epsilon\nn}^\dagger|0\rangle$. Then the $N$-particle ground state can be written as\n\begin{equation}\n|\Phi_{t\epsilon}\rangle=\prod_{n=1}^N\hat{\gamma}_{t\epsilon n}^\dagger |0\rangle.\n\end{equation}\nFor later use, we allow for a unitary transformation \textit{among the\noccupied levels}\n\begin{equation}\n\hat{\psi}_{t\epsilon \ell}^\dagger=\sum_{n=1}^N\hat{\gamma}_{t\epsilon n}^\dagger U_{n\ell},\quad \ell=1,2,\cdots,N.\n\end{equation}\nAlthough such a basis change may sound unnecessary, in the actual application\nof this framework it is sometimes important to work in the proper basis by\nchoosing $U_{n\ell}$ appropriately. (See Sec.~\ref{subsec:GOMBI} for an\nexample.) 
Altogether, we find that the many-body Berry phase\n$\varphi_{\epsilon}$ is given by the sum of single-particle Berry phases\n$\varphi_{\epsilon\ell}$ of the occupied levels\n\begin{align}\n\varphi_{\epsilon}= \sum_{\ell=1}^N\varphi_{\epsilon\ell},\label{eq:manysingle}\quad\varphi_{\epsilon\ell}\equiv\int_0^Tdt\langle\psi_{t\epsilon\ell}\vert i\partial_t\psi_{t\epsilon\ell}\rangle.\n\end{align}\nTherefore, we get the following expressions for single-particle problems. The\nlatter two expressions are basis-independent:\n\begin{align}\nX_{\text{geom}}\n&=\int_0^T\frac{dt}{T}\,i\sum_{\ell=1}^N\langle \partial_t\psi_{t\epsilon\ell}\vert \partial_\epsilon\psi_{t\epsilon\ell}\rangle|_{\epsilon=0}\\\n&=\int_0^T\frac{dt}{T}\,i{\rm tr}P_{t\epsilon}[\partial_t P_{t\epsilon},\partial_\epsilon P_{t\epsilon}]|_{\epsilon=0}\\\n&={\rm Re}\oint\frac{dz}{\pi}\int_0^T\frac{dt}{T}\tr[(\partial_th_{t})g_{t}^2\partial_{\epsilon}h_{t\epsilon}\rvert_{\epsilon=0}g_{t}],\n\t\label{eq:MzSP}\n\end{align}\nwhere\n$P_{t\epsilon}=\sum_{\ell=1}^N\vert\psi_{t\epsilon\ell}\rangle\langle\psi_{t\epsilon\ell}\vert=\sum_{n=1}^N\vert\gamma_{t\epsilon\nn}\rangle\langle\gamma_{t\epsilon n}\vert$ is the projector onto the occupied\nsingle-particle states, $h_{t\epsilon}=\sum_n\varepsilon_{t\epsilon\nn}\vert\gamma_{t\epsilon n}\rangle\langle\gamma_{t\epsilon n}\vert$\nis the single-particle Hamiltonian, $g_{t}=(z-h_{t})^{-1}$ is the single-particle\nGreen function, and the integration contour encloses all the occupied states at\n$z=\varepsilon_{t n}$ ($n=1,2,\cdots,N$).\n\nFor the orbital magnetization, we again set $\epsilon=B_z$. The\nsame remarks as in the previous section apply here. In the case of band\ninsulators, one may want to impose periodic boundary conditions to preserve the\ntranslation symmetry. As discussed in the previous section, there are two\npossibilities to achieve this. 
One can change the boundary conditions to\nmagnetic periodic boundary conditions.~\cite{brown1964,zak1964} For\nsingle-particle systems, assuming the symmetric gauge $\bm A^{\rm ex}(\bm x)=\bm\nB\times\bm x\/2$, the magnetic periodic boundary conditions can be taken into\naccount explicitly by restricting the form of the projector $\hat{P}_{tB_z}$\nto~\cite{essin2010,gonze2011} $\langle\bm x_1\vert\n\hat{P}_{tB_z}\vert\bm x_2\rangle=P^\prime_{tB_z}(\bm x_2,\bm x_1)\ne^{ieB_z\bm x_1\times\bm x_2\cdot\hat{\bm z}\/2}$, where $P^\prime_{tB_z}(\bm\nx_1,\bm x_2)$ is an arbitrary $N\times N$ matrix function (not necessarily\na projector) that satisfies $P^\prime_{tB_z}(\bm x_1+\bm R,\bm x_2+\bm\nR)=P^\prime_{tB_z}(\bm x_1,\bm x_2)$, where $\bm R$ is an element of the Bravais\nlattice. The expression for $P^\prime_{tB_z}(\bm x_1,\bm x_2)$ can be found\nperturbatively in $B_z$,~\cite{essin2010,gonze2011} which, after substituting\nback into Eq.~(\ref{eq:manysingle}), yields an expression for the Berry phase and\n$\mathbcal{m}$. The second option is to apply a spatially modulated magnetic\nfield as we discuss below. \n\n\subsection{Geometric orbital magnetization for band insulators}\n\label{subsec:GOMBI}\nBelow we consider band insulators and show that the geometric orbital magnetization\nhas two contributions as in Eq.~\eqref{eq:calm2}. Since we assume periodic\nboundary conditions in both $x$ and $y$, we apply a slowly modulated magnetic\nfield~\cite{shi2007,essin2010} in order to avoid changing the boundary\nconditions as discussed in the previous subsection. We use the vector potential\n\begin{align}\n&\bm{A}^{\text{ex}}(\bm{x})=\frac{\epsilon}{2q}(-\sin qy,\sin qx,0)^{\rm T},\\\n&\bm{B}(\bm{x})=\bm{\nabla}\times\bm{A}^{\text{ex}}(\bm{x})=\bm{e}_z\epsilon f(\bm{x})\n\end{align}\nwith $\bm{e}_z\equiv(0,0,1)^{\rm T}$, $q\equiv2\pi\/L$, and $f(\bm{x})=(\cos\nqx+\cos qy)\/2$. 
(To simplify the notation, we assume $L=L_x=L_y$ in this\nsubsection.) Such a magnetic field induces a change of the Bloch function\n\begin{align}\n&|\partial_\epsilon\psi_{t\epsilon n\bm{k}}\rangle|_{\epsilon=0}\notag\\\n&=-\sum_{n'\bm{k}'}|\psi_{tn'\bm{k}'}\rangle\frac{\langle\psi_{tn'\bm{k}'}|\partial_\epsilon h_{t\epsilon}|_{\epsilon=0}|\psi_{tn\bm{k}}\rangle}{\varepsilon_{tn'\bm{k}'}-\varepsilon_{tn\bm{k}}}\n\end{align}\nand $|\partial_\epsilon w_{t\epsilon n\bm{R}}\rangle|_{\epsilon=0}$ is given\nvia Eq.~\eqref{eq:Wannier}.\n\nWe compute the Berry phase using the formula~\eqref{eq:manysingle} derived\nabove. It is important to work in the Wannier basis, for which the magnetic\nfield effectively becomes uniform in the limit $q\rightarrow0$. The\nsingle-particle Berry phase in this basis, summed over occupied bands, takes\nthe following form:\n\begin{align}\n\t\partial_{\epsilon}\varphi_{\epsilon \bm{R}}|_{\epsilon=0}&=-\int_0^Tdt\sum_{n\in\text{occ}}\langle i\partial_tw_{tn \bm{R}}\vert \partial_{\epsilon}w_{t\epsilon n \bm{R}}\rangle|_{\epsilon=0}+\text{c.c.}\notag\\\n\t&=Ta^2\mathcal{m}_zf(\bm{R}).\n\end{align}\nIf we further sum over $\bm{R}$, or equivalently if we work in the Bloch basis\n$|\psi_{tn\bm{k}}\rangle$, we get $0$, reflecting the fact that for bulk systems\nthe Fourier component $\mathcal{m}_z(\bm q)$ vanishes for $\bm q\neq0$. Thus, care\nmust be taken to correctly read off the local contribution to $\mathcal{m}_z$---an\nunintentional integration over $\bm R$ of a term proportional to $f(\bm R)$\nmakes it impossible to find the correct value of $\mathcal{m}_z$. The rest of the\ncalculation follows the appendix of Ref.~\onlinecite{essin2010}. 
Upon taking\nthe limit $q\rightarrow0$, we find\n\begin{widetext}\n\begin{align}\n\mathcal{m}_z&=\lim_{q\rightarrow0}\frac{1}{T}\int_0^Tdt\int\frac{d^2k}{(2\pi)^2}\sum_{n\in\text{occ}}\sum_{n'\bm{k}'}\,\langle i\partial_t\psi_{tn\bm{k}}|\psi_{tn'\bm{k}'}\rangle \frac{\langle\psi_{tn'\bm{k}'}|\partial_\epsilon h_{t\epsilon}|_{\epsilon=0}|\psi_{tn\bm{k}}\rangle}{\varepsilon_{tn'\bm{k}'}-\varepsilon_{tn\bm{k}}}+\text{c.c.}\notag\\\n&=-\frac{e}{2T}\int_0^Tdt\int\frac{d^2k}{(2\pi)^2}\sum_{n\in\text{occ}}\sum_{n'}\langle\partial_t u_{tn\bm k}\vert u_{tn^\prime\bm k}\rangle\n\frac{\langle u_{tn'\bm k}\vert\bm\nabla_{\bm k}(h_{t\bm k}+\varepsilon_{tn\bm k})\times\vert\bm\nabla_{\bm k}u_{tn\bm k}\rangle}{\varepsilon_{tn'\bm k}-\varepsilon_{tn\bm k}}+\text{c.c.}\n\end{align}\nThis last expression can be written as the sum of two terms,\n$\mathbcal{m}^{\text{top}}+\mathbcal{m}^{\text{non-top}}$. The topological\npiece $\mathbcal{m}^{\text{top}}$ reads\n\begin{align}\n&\mathbcal{m}^{\text{top}}= \bm{e}_zP_3\/T,\label{eq:topc}\\\n&P_3\equiv -\frac{e}{2}\int_0^Tdt\int\frac{d^2k}{(2\pi)^2}\tr[\bm A_{\bm{K}}\cdot\bm\nabla_{\bm{K}}\times\bm A_{\bm{K}}+\tfrac{2i}{3}\bm A_{\bm{K}}\cdot\bm A_{\bm{K}}\times\bm A_{\bm{K}}].\label{eq:3Dtopo2}\n\end{align}\nThe Berry connection $(\bm A_{\bm{K}})_{n,m}\equiv-i\langle\nu_{\bm{K}n}|\bm\nabla_{\bm{K}}u_{\bm{K}m}\rangle$ is defined using occupied\nBloch states as a function of $\bm{K}\equiv(t,\bm{k})$. The smoothness and the\nperiodicity of $\bm A_{\bm{K}}$ are assumed in the integral in\nEq.~\eqref{eq:3Dtopo2}. Such a choice is possible only when both the pumped\ncharge through the bulk $\bm Q$ in Eq.~\eqref{eq:Jb} and the 2D Chern\nnumber for $(k_x,k_y)$ vanish. 
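The projector-based expressions for the geometric response used in this derivation can be sanity-checked numerically. The minimal sketch below (a hypothetical two-level Hamiltonian $h=\\hat{\\bm n}\\cdot\\bm\\sigma$, chosen only for illustration and not a model from this paper) verifies with finite differences that the gauge-invariant combination $i\\,{\\rm tr}\\,P[\\partial_t P,\\partial_\\epsilon P]$ of Eq.~\\eqref{eq:MzSPint} reproduces the closed-form two-level Berry curvature $\\tfrac{1}{2}\\hat{\\bm n}\\cdot(\\partial_t\\hat{\\bm n}\\times\\partial_\\epsilon\\hat{\\bm n})$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def nhat(t, e):
    # smooth unit vector parameterizing h(t, e) = nhat . sigma
    # (an arbitrary test choice, not a model from the text)
    v = np.array([np.cos(t) + e, np.sin(t), 0.5 + e * t])
    return v / np.linalg.norm(v)

def proj(t, e):
    # projector onto the lower level of h(t, e)
    n = nhat(t, e)
    return (np.eye(2) - (n[0] * sx + n[1] * sy + n[2] * sz)) / 2

t0, e0, d = 0.7, 0.3, 1e-5
P = proj(t0, e0)
dPt = (proj(t0 + d, e0) - proj(t0 - d, e0)) / (2 * d)
dPe = (proj(t0, e0 + d) - proj(t0, e0 - d)) / (2 * d)
# gauge-invariant curvature density: i tr P [dP/dt, dP/de]
curv = 1j * np.trace(P @ (dPt @ dPe - dPe @ dPt))

# closed-form lower-band curvature: (1/2) n . (dn/dt x dn/de)
dn_t = (nhat(t0 + d, e0) - nhat(t0 - d, e0)) / (2 * d)
dn_e = (nhat(t0, e0 + d) - nhat(t0, e0 - d)) / (2 * d)
exact = 0.5 * nhat(t0, e0) @ np.cross(dn_t, dn_e)
```

Because projectors carry no phase ambiguity, the finite differences here need no gauge fixing; this is the practical advantage of the projector form over eigenstate-based Berry-phase formulas.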
\n\nThe non-topological contribution depends also on the instantaneous eigenenergies of the Bloch Hamiltonian:\n\begin{align}\n\t\mathbcal{m}^{\text{non-top}}&=\sum_{n\in\text{occ}}\sum_{n'\in\text{unocc}}\frac{e}{2T}\int_0^Tdt\int\frac{d^2k}{(2\pi)^2}\frac{\n\t\langle u_{tn\bm k}|\partial_tP_{t\bm k}|u_{tn'\bm k}\rangle\langle u_{tn'\bm k}|\{\bm\nabla_{\bm k}h_{t\bm k}\times\bm\nabla_{\bm k}P_{t\bm k}\}|u_{tn\bm k}\rangle\n\t}{\varepsilon_{tn\bm k}-\varepsilon_{tn'\bm k}}+\text{c.c.}\notag\\\n\t&=\frac{e}{2T}\int_0^Tdt\int\frac{d^2k}{(2\pi)^2}\oint\frac{dz}{2\pi i}{\rm tr}\left[ \partial_tP_{t\bm k}g_{t\bm k}\{\bm\nabla_{\bm k}h_{t\bm k}\times\bm\nabla_{\bm k}P_{t\bm k}\}g_{t\bm k} \right]+\text{c.c.}\n\t\label{eq:nontopcalM}\n\end{align}\n\end{widetext}\nHere, $h_{t\bm k}$ is the Bloch Hamiltonian, $P_{t\bm k}=\sum_{n\in\text{occ}}\vert u_{tn\bm{k}}\rangle\langle u_{tn\bm{k}}\vert$ is the projector\nonto the occupied bands at $\bm k$, $g_{t\bm k}=(z-h_{t\bm k})^{-1}$ is the Bloch\nGreen function, the curly brackets denote the symmetrization $\{\bm A\times\bm\nB\}=\bm A\times\bm B+\bm B\times\bm A$, and the integration contour encloses\nall the filled Bloch states at $z=\varepsilon_{tn\bm{k}}$. See the appendix of Ref.~\onlinecite{essin2010}\nfor the details. Note that neither $\mathbcal{m}^{\text{top}}$ nor $\mathbcal{m}^{\text{non-top}}$ is affected by the shift of the origin in Eq.~\eqref{origin}.\n\n\subsection{Topological contribution from response theory}\label{subsec:topom}\nHere we give an alternative, easier derivation of $\mathbcal{m}^{\text{top}}$\nin Eq.~\eqref{eq:topc} from topological response theory. 
To this end, let\nus further reduce one spatial dimension in Eq.~\eqref{eq:3Dtopo} to obtain the\ntopological quadratic response in $(2+1)$d~\cite{qi2008}:\n\begin{align}\n&\mathcal{j}^\mu(t,\bm x)=-\frac{1}{2\pi}\sum_{\nu,\lambda}\varepsilon^{\mu\nu\lambda}G_2(\theta,\phi)\partial_\nu\theta \partial_\lambda \phi,\label{eq:2Dtopo}\\\n&\frac{1}{2\pi}G_2(\theta,\phi)\equiv-e\int\frac{d^2k}{32\pi^2} \,\varepsilon^{\mu\nu\rho\sigma}\tr F_{\mu\nu} F_{\rho\sigma},\label{eq:2Dtopo2}\n\end{align}\nwhere $F_{\mu\nu}\equiv\partial_\mu A_{\nu}-\partial_\nu\nA_{\mu}+i[A_{\mu},A_{\nu}]$ is the Berry curvature in the\n$(k_x,k_y,\theta,\phi)$ space and $\theta$ and $\phi$ are two slowly varying\nfields: $\theta(t)$ denotes an adiabatic and periodic time dependence and\n$\phi(\bm{x})$ describes a smooth interface of domains\n(Fig.~\ref{fig:Qboundary}). In this setting, we find\n\begin{align}\n\mathbcal{j}(t,\bm{x})=\frac{1}{2\pi}G_2(\theta,\phi)\partial_t\theta(t)\bm{\nabla}\phi(\bm{x})\times\bm{e}_z\n\end{align}\nso that\n\begin{align}\n\label{eq:j4}\n\t{\mathbcal{j}}(\bm{x})&\equiv\int_{0}^{T}\frac{dt}{T}\mathbcal{j}(t,\bm{x})=\partial_\phi P_3(\phi)\bm{\nabla}\phi(\bm{x})\times\bm{e}_z\/T\notag\\\n\t&=\bm{\nabla}\times[\bm{e}_zP_3(\phi(\bm{x}))\/T]=\bm{\nabla}\times\mathbcal{m}^{\text{top}}(\bm{x}).\n\end{align}\nThis reproduces Eq.~\eqref{eq:topc}. In the derivation we used the relation $\int_0^{2\pi}\frac{d\theta}{2\pi}\nG_2(\theta,\phi)=\partial_\phi P_3(\phi)$. It is important to note that\n$\mathbcal{j}(t,\bm x)$ itself cannot be written as the curl of a vector field---\nEq.~\eqref{eq:j4} holds only after the time convolution (or equivalently the\ntime average). 
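Equation~\\eqref{eq:j4} can be illustrated with a one-dimensional numerical sketch across a domain wall. The profiles $P_3(\\phi)$ and $\\phi(x)$ below are hypothetical, chosen purely for illustration; the check is that the time-averaged current is concentrated where $\\bm\\nabla\\phi\\neq0$, and that its integral across the wall equals the jump of $P_3\/T$ between the two bulk regions:

```python
import numpy as np

T = 1.0
# hypothetical smooth profiles, chosen only for illustration
P3  = lambda phi: 0.5 * np.sin(phi)                    # some model's P_3(phi)
dP3 = lambda phi: 0.5 * np.cos(phi)                    # its derivative
phi = lambda x: 0.4 + 1.8 / (1.0 + np.exp(-x / 0.5))   # smooth domain wall

x = np.linspace(-20.0, 20.0, 200001)
# j_y(x) = (dP3/dphi) (dphi/dx) / T : the only nonzero component at the wall
jy = dP3(phi(x)) * np.gradient(phi(x), x) / T

# trapezoidal integral of j_y across the wall
I_edge = np.sum(0.5 * (jy[1:] + jy[:-1]) * np.diff(x))
jump = (P3(phi(x[-1])) - P3(phi(x[0]))) / T
```

The current vanishes deep in either bulk and integrates to the difference of the two bulk values of $P_3\/T$, mirroring the edge-current discussion below.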
\n\nThe above derivation relies on the connection~(\ref{eq:jmeso}) between\n$\mathbcal{m}^{\text{top}}$ and the topological edge current in adiabatically\ndriven two-dimensional systems. To see this more concretely, let us consider\nthe boundary of two regions with $\phi_0\equiv\phi(\bm x_0)$ and\n$\phi_1\equiv\phi(\bm x_1)$ (see Fig.~\ref{fig:Qboundary}). Just like in the\ncase of polarization, only the fractional part of the edge current is the bulk\ncontribution that depends only on $\phi_0$ and $\phi_1$. This can be understood\nby noticing that decorating the boundary with a 1D chain leads to an integer\ncharge transfer through the Thouless pump.~\cite{thouless1983} To capture the\nfractional bulk contribution to the edge current, one can separately compute\n$\mathcal{m}_z(\bm x_0)$ and $\mathcal{m}_z(\bm x_1)$ without paying attention\nto their continuity. The geometric contribution to the charge transfer along\nthe $i$ direction ($i=x,y$) between two bulk systems with $\mathcal{m}_z(\bm{x}_0)$\nand $\mathcal{m}_z(\bm{x}_1)=\mathcal{m}_z(\bm{x}_1')$\n(Fig.~\ref{fig:Qboundary}) is given by\n\begin{align}\nI_i^{\text{edge}}&\equiv\int_{\bm x_0}^{\bm x_1} dx\,\mathcal{j}_i(\bm{x})=\mathcal{m}_z(\bm x_0)-\mathcal{m}_z(\bm x_1')\mod e.\n\end{align}\n\nNotice that $I^{\text{edge}}$ of two adjacent edges may differ by an integer multiple of $e\/T$. To\nsee this formally, let us consider a charge flow $\Delta Q^{\text{corner}}$ into a corner\nsurrounded by a closed curve $\bm x_\alpha$ with $\bm x_1=\bm x_0$ (see\nFig.~\ref{fig:Qboundary}). 
The net charge flow in the process is given by the\nsecond Chern number\n\begin{equation}\n\t\label{eq:Qc}\n\Delta Q^{\text{corner}}\equiv T\oint d\bm{x}_\alpha\times\mathbcal{j}(\bm{x}_\alpha)\cdot\bm{e}_z=\int \frac{d\theta d\phi}{2\pi}G_2(\theta,\phi).\n\end{equation}\nFor example, when the corner is formed by two edges along the $x$ and $y$\ndirections, we have \n\begin{equation}\n\Delta Q^{\text{corner}}=T(I_x^{{\text{edge}}}-I_y^{\text{edge}}),\n\label{n1n2}\n\end{equation}\nmeaning that the charge transfer along two intersecting edges can only differ\nby an integer multiple of $e$. Clearly, $\Delta Q^{\text{corner}}$ is \textit{not} a bulk\ntopological invariant in general, since its value can be changed by closing the\nboundary gap, i.e., by attaching a 1D Thouless pump at certain boundaries\n(Figs.~\ref{fig:Qboundary} and~\ref{fig:C4}b). \n\n\begin{figure}\n\t\begin{center}\n\t\t\includegraphics[width=0.7\columnwidth]{fig2.pdf}\t\t\n\t\t\caption{\label{fig:Qboundary}The boundary current along the\n\t\tinterface of two adiabatic processes $h_{\phi(\bm x_i)\theta(t)\bm{k}}$ with $i=0$ and $1$. A 1D decoration with a Thouless\n\t\tpump changes the edge charge transfer by an integer and leads\n\t\tto an integer corner charge accumulation. Hatched parts denote the boundary area between the two systems.}\n\t\end{center}\n\end{figure}\n\n\subsection{Symmetry constraints and corner charge}\n\label{sec:symmetries}\nHere we consider adiabatic processes of two-dimensional systems constrained by\ncertain symmetries that quantize $T\mathcal{m}_z^\text{top}$. We show that, if\nthe symmetry allows one to define the bulk contribution to the quadrupole moment,\nthe quantized quadrupole moment is equal to $T\mathcal{m}_z^\text{top}$. 
Such adiabatic\nprocesses were recently discussed by van Miert and Ortix, who found the\nconnection between the quantized corner charge and a higher-order topological\ninvariant.~\cite{vanmiert2018}\n\nFor concreteness, let us consider the four-fold rotation $C_4$ mapping\n$\bm{x}=(x,y,0)$ to $C_4\bm{x}=(-y,x,0)$. It is easy to see that boundary\ndecorations by polarized one-dimensional chains do not affect the fractional\npart of the corner charge $\Delta Q^{\text{corner}}$, see Fig.~\ref{fig:C4}a. We\nconsider an \textit{arbitrary} interpolation between the system of interest\n$h_{0\bm k}$ and the reference system $h_{T\/2\,\bm k}$ that has no\ncorner charge. The second half of the cycle is performed in a $C_4$-symmetric\nmanner\n\begin{equation}\n\tU_{C_4}h_{t\bm k}U_{C_4}^\dagger=h_{T-t\,C_4\bm k}.\n\t\label{eq:C4cycle}\n\end{equation}\nThe $C_4$ symmetry defined above behaves as the roto-inversion $IC_4$ in\n$(t,k_x,k_y)$-space, resulting in the following transformation law for\n$\mathcal{m}_z^\text{top}$:\n\begin{equation}\n\tC_4:\quad \mathcal{m}_z^\text{top}\rightarrow-\mathcal{m}_z^\text{top}.\n\t\label{eq:P3_C4}\n\end{equation}\nThis does not mean that $T\mathcal{m}_z^\text{top}$ vanishes since it is defined only ${\rm\nmod}\,e$. Thus, in the presence of $C_4$ symmetry, $T\mathcal{m}_z^\text{top}$ is quantized\nto either $0$ or $e\/2\mod e$. When $T\mathcal{m}_z^\text{top}=e\/2\mod e$, the circulating\nedge current as in Fig.~\ref{fig:Qboundary} violates the $C_4$\nsymmetry constraint~(\ref{eq:C4cycle})---the only allowed edge current\ndistribution is shown by black arrows in Fig.~\ref{fig:C4}b. 
Note that the\ninversion symmetry, for example, also quantizes $T\mathcal{m}_z^\text{top}$, but\nthe total corner charge accumulation during inversion-symmetric cycles needs to\nvanish since the charge distribution of the quadrupole moment is invariant under\nthe inversion (see Fig.~\ref{fig:C4}c).\n\nNow we show that the parity of the corner charge accumulation $\Delta\nQ^{\text{corner}}$ is actually a bulk topological invariant for symmetric\nadiabatic processes satisfying the constraint~\eqref{eq:C4cycle}, see also\nFig.~\ref{fig:C4}b. To this end, consider two perpendicular edges along the $x$\nand $y$ directions, related to each other by $C_4$ symmetry. The\nrelations~\eqref{n1n2} and \eqref{eq:P3_C4} suggest that\n$I^{\text{edge}}_y=-I^{\text{edge}}_x$ and that\n\begin{equation}\n\t\Delta Q^{\text{corner}}=2TI^{\text{edge}}_x=2T\mathcal{m}_z^\text{top}(\bm x_0)\mod 2e.\n\t\label{eq:qcorner}\n\end{equation}\nFurthermore, Fig.~\ref{fig:C4}c tells us that the corner charge accumulation\nduring the symmetric process is $\Delta Q^{\text{corner}}=2 q^{\text{corner}}$. Therefore,\n\begin{equation}\n\tq^{\text{corner}}=T\mathcal{m}_z^\text{top}(\bm x_0) =P_3(\phi_0) \mod e.\label{corner}\n\end{equation}\nWe will discuss an example of quadrupole insulators with $P_3=e\/2$ in\nSec.~\ref{subsec:tgeoM} using this result. On the other hand, a $C_4$-symmetric\nphase that hosts a corner charge of $q^{\text{corner}}=e\/4$ was recently\nreported.~\cite{benalcazar2018} The fact that $\Delta Q^{\text{corner}}\in e\mathbb{Z}$ forces us to conclude that a $C_4$-symmetric\nadiabatic process cannot be constructed for such a phase.\n\n\begin{figure}\n\t\begin{center}\n\t\t\includegraphics[width=\columnwidth]{C4.pdf}\n\t\t\caption{\label{fig:C4} a): The four-fold rotation symmetry $C_4$ of $h_{0\bm{k}}$ imposes constraints on\n\t\tthe time-independent boundary decorations, so they cannot alter the corner\n\t\tcharge. 
b): Decorating the boundary with one-dimensional\n\t\tThouless pumps while respecting the $C_4$ symmetry (combined with the time flip) of the adiabatic process can change $\Delta\n\t\tQ^\text{corner}$ by an \textit{even} integer. c): Comparison of\n\t\tthe action of $C_4$ and the inversion $\mathcal{I}$ on the corner charge distribution\n\t\tafter one period of the adiabatic process.}\n\t\end{center}\n\end{figure}\n\nAlternatively, as discussed in detail in Ref.~\onlinecite{vanmiert2018}, the\n$C_4$-symmetric adiabatic process $h_{t\bm k}$ considered above can be\nviewed as a 3D topological insulator protected by the roto-inversion symmetry\n$IC_4$ upon the identification $k_z=2\pi t\/T$. In fact, the 3D topological insulator\nwith $P_3=e\/2$ obtained this way is a second-order topological insulator. If we\nconsider a geometry with open boundary conditions in the $xy$-plane and\nperiodic boundary conditions in the $z$-direction, such a second-order phase can be\ntranslationally invariant in the $z$-direction both in the bulk and on the boundary.\nThe boundary hosts an odd number of chiral modes running along each of the four\nhinges in an $IC_4$-symmetric manner. Going back to the picture of an adiabatic\nprocess, it becomes clear that the corner charge accumulation is an odd integer\nas $t$ is varied from $0$ to $T$, which is consistent with the above\nresult~(\ref{eq:qcorner}).\n\n\section{Examples: noninteracting systems}\label{sec:examples}\nIn this section, we discuss a simple model of noninteracting spinless electrons\nin a periodic potential, which highlights the distinction between the two contributions\nto the bulk orbital magnetization, $\bm{m}_{\text{pers}}$ and $\mathbcal{m}$.\nAdditionally, we want to consider examples where there is only topological\ngeometric magnetization, Sec.~\ref{subsec:tgeoM}, only non-topological\ngeometric magnetization, Sec.~\ref{subsec:nontgeoM}, and both topological and\nnon-topological contributions, Sec.~\ref{subsec:rotoM}. 
To keep the discussions\nsimple while capturing the relevant physics, we focus on isolated orbitals\nwithout any overlap between them.\n\n\subsection{Bloch functions in the localized limit}\nLet us consider a time-dependent deep potential $v_t^0(\bm{x})$ centered at\n$\bm{x}=\bm{r}(t)$ that accommodates at least one bound state. Let $\phi_t^0(\bm x)$ be the wavefunction of the instantaneous lowest-energy bound state, satisfying\n$h_t^0\phi_t^0(\bm{x})=\varepsilon_t^0\phi_t^0(\bm{x})$ with \n\begin{equation}\nh_{t}^0=\frac{1}{2m}[\tfrac{1}{i}\bm{\nabla}-e\bm{A}_t^{\text{ex}}(\bm{x})]^2+v_t^0(\bm{x}).\label{simplemodel}\n\end{equation}\nHere $\bm{A}_t^{\text{ex}}(\bm{x})$ describes an external field. In these expressions, the superscript $0$ denotes quantities for an isolated orbital. \nWhen the\npotential $v_t^0(\bm{x})$ is deep enough, $\phi_t^0(\bm{x})$ should be\nwell-localized around $\bm{x}=\bm{r}(t)$ with a localization length $\xi\ll a$.\nHence, we assume that \n\begin{equation}\n\int d^2x |\phi_t^0(\bm{x})|^2=1,\quad \int d^2x \bm{x}|\phi_t^0(\bm{x})|^2=\bm{r}(t)\label{exp}\n\end{equation}\nand that both $\phi_t^0(\bm{x})$ and $v_t^0(\bm{x})$ decay fast enough, i.e.,\n$|v_t^0(\bm{x})|,|\phi_t^0(\bm{x})|\rightarrow0$ for $|\bm{x}-\bm{r}(t)|\gg\xi$. \n\nWith these building blocks, we construct a periodic potential and the\ncell-periodic Bloch state:\n\begin{align}\n&v_t(\bm{x})\equiv\sum_{\bm{R}}v_t^0(\bm{x}-\bm{R}),\label{pot}\\\n&u_{t\bm k}(\bm{x})\equiv\frac{a}{\sqrt{V}}\sum_{\bm{R}}e^{i\bm{k}\cdot(\bm{R}-\bm{x})}\phi_t^0(\bm{x}-\bm{R}).\label{eq:Bloch}\n\end{align}\nWe assume that $\bm{A}^{\text{ex}}(\bm{x})$ respects the\nperiodicity, i.e.,\n$\bm{A}^{\text{ex}}(\bm{x}-\bm{R})=\bm{A}^{\text{ex}}(\bm{x})$. 
Then, as long\nas $\phi_t^0(\bm{x}-\bm{R})^*\phi_t^0(\bm{x})$ and\n$v_t^0(\bm{x}-\bm{R})\phi_t^0(\bm{x})$ ($\bm{R}\neq\bm{0}$) can be entirely neglected,\n$u_{t\bm k}(\bm{x})$ is an eigenstate of the periodic Hamiltonian\n\begin{equation}\nh_{t\bm{k}}=\frac{1}{2m}[\tfrac{1}{i}\bm{\nabla}-e\bm{A}_t^{\text{ex}}(\bm{x})+\bm{k}]^2+v_t(\bm{x})\label{egH}\n\end{equation}\nwith a completely flat band dispersion $\varepsilon_{t\bm{k}}=\varepsilon_t^0$. \n\n\subsection{Polarization and instantaneous magnetization}\nLet us first demonstrate the modern-theory formulae for the polarization and the\norbital magnetization by deriving $\bm{p}$ and $\bm{m}$ in two different ways.\n\nFirst, we present a direct calculation of the polarization and the orbital\nmagnetization from the microscopic charge distribution and the persistent\ncurrent densities in this insulator. The instantaneous contribution to the\nlocal charge and current distribution from a single orbit $\phi_t^0(\bm{x})$\ncan be written as\n\begin{align}\n\t\label{eq:nt0}\n&n_t^0(\bm{x})\equiv e|\phi_t^0(\bm{x})|^2,\\\n&\bm{j}_t^0(\bm{x})\equiv\frac{e}{m i}\phi_t^0(\bm{x})^*(\bm{\nabla}-ie\bm{A}^{\text{ex}}(\bm{x}))\phi_t^0(\bm{x}).\n\label{eq:jt0}\n\end{align}\nWe introduce vector fields $\bm{p}_t^0(\bm{x})$ and $\bm{m}_t^0(\bm{x})$ such\nthat\n\begin{equation}\nn_t^0(\bm{x})=\bar{n}^0-\bm{\nabla}\cdot\bm{p}_t^0(\bm{x}),\quad\n\bm{j}_t^0(\bm{x})=\bm{\nabla}\times\bm{m}_t^0(\bm{x}).\label{defm0}\n\end{equation}\nThe existence of such $\bm{m}_t^0(\bm{x})$ is guaranteed by the divergence-free\nnature of the instantaneous current density $\bm{j}_t^0(\bm{x})$. The current\ndensity induced by the adiabatic motion of $\bm{r}(t)$ is captured by\n$\mathbcal{j}_t^0(\bm{x})$ in Eq.~\eqref{idcurrent}, whose divergence may not\nvanish. 
We assume both $\bm{p}_t^0(\bm{x})$ and $\bm{m}_t^0(\bm{x})$ decay\nrapidly for $|\bm{x}-\bm{r}(t)|>\xi$, which specifies the boundary\ncondition for the differential equations~(\ref{defm0}).\n\nPhysical quantities of the insulator composed of periodically arranged\nlocalized orbits can be written as the sum of the contributions from each\norbit. For example, the microscopic current is given by\n\begin{equation}\n\bm{j}^{\text{micro}}(t,\bm{x})\equiv\sum_{\bm{R}}\bm{j}_t^0(\bm{x}-\bm{R})\n\end{equation}\nand analogously for $n$, $\bm{p}$, and $\bm{m}$. These \textit{microscopic}\nexpressions have a strong spatial dependence, periodically oscillating at the\nscale of $a$. To arrive at a smooth mesoscopic description, we need to perform\na convolution in space (Sec. 6.6 of Ref.~\onlinecite{jackson1999}). Here we\nchoose the Gaussian $g(\bm{x})=(\pi R^2)^{-1}e^{-|\bm{x}|^2\/R^2}$ ($R\gg a$):\n\begin{equation}\n\bm{j}(t,\bm{x})\equiv\int d^2x' g(\bm{x}-\bm{x}')\bm{j}^{\text{micro}}(t,\bm{x}').\n\end{equation}\nWe do the same for the other quantities. Relations such as\n$\bm{j}(t,\bm{x})=\bm{\nabla}\times\bm{m}(t,\bm{x})$ are preserved by the\nconvolution. Because the convolution is identical to the average for a\nperiodic distribution, we find $n(t,\bm{x})=\bar{n}=\frac{e}{a^2}$,\n$\bm{j}(t,\bm{x})=\bm{0}$,\n\begin{align}\n&\bm{p}(t,\bm{x})=\frac{1}{a^2}\int d^2x'\bm{p}_t^0(\bm{x}'),\label{modd2}\\\n&\bm{m}_{\text{pers}}(t,\bm{x})=\frac{1}{a^2}\int d^2x'\bm{m}_t^0(\bm{x}').\label{mod2}\n\end{align}\n This is the part of the orbital magnetization produced by the persistent current as illustrated in Fig.~\ref{fig:magnetization}a.\n\nLet us check that we get the same results using the general formulae of the\nmodern theory. 
Because of the non-overlapping assumption for $\phi_t^0(\bm{x})$,\nit can be readily shown that the formulae in Eqs.~\eqref{eq:modthp}\nand \eqref{eq:modthm} for the Bloch function~\eqref{eq:Bloch} simplify to \n\begin{align}\n&\bm{p}(t)=\frac{1}{a^2}\int d^2x\bm{x}n_t^0(\bm{x})\,\,\left(=\frac{e}{a^2}\bm{r}(t)\right),\label{modd1}\\\n&\bm{m}_{\text{pers}}(t)=\frac{1}{2a^2}\int d^2x\bm{x}\times\bm{j}_t^0(\bm{x}),\label{mod1}\n\end{align}\nwhere we used Eqs.~\eqref{eq:nt0} and \eqref{eq:jt0}. These are well-known\nexpressions in classical electrodynamics for the charge and current\ndistributions in a confined region (see Secs.~4.1 and 5.6 of\nRef.~\onlinecite{jackson1999}). The equivalence of Eqs.~\eqref{modd2},\n\eqref{mod2} and \eqref{modd1}, \eqref{mod1} can be easily checked by using\nthe definition of $\bm{p}_t^0$ and $\bm{m}_t^0$ in Eq.~(\ref{defm0}) and\nintegrating by parts. The second equality of Eq.~\eqref{modd1} follows from\nEqs.~\eqref{exp} and \eqref{eq:nt0}.\n\n\subsection{Topological geometric magnetization}\label{subsec:tgeoM}\nNext, we discuss the topological geometric contribution\n$\mathbcal{m}^{\text{top}}$ for this model. To this end, suppose that the\nposition of the potential minimum $\bm{r}(t)$ adiabatically moves as a function\nof $t\in[0,T]$ and forms a closed curve as illustrated in\nFig.~\ref{fig:magnetization}b. We assume that the form of the potential, and thus\nthe localization length, remains unchanged during the adiabatic process.\n\nWe first apply our general expression for $\mathbcal{m}^{\text{top}}$ in\nEq.~\eqref{eq:topc} to the Bloch function~\eqref{eq:Bloch}. 
Thanks to the\nnon-overlapping assumption, the vector potential $\\bm{A}_{(t,\\bm{k})}$ is\n$\\bm{k}$-independent:\n\\begin{equation}\n\t\\bm{A}_{\\bm{K}}=(A_t,-\\bm{r}(t))^{\\rm T}\\label{eq:AK}\n\\end{equation}\nwith $A_t\\equiv -i\\int d^2x\\phi_t^0(\\bm{x})^*\\partial_t\\phi_t^0(\\bm{x})$.\nPlugging this into Eq.~\\eqref{eq:3Dtopo2}, we find\n\\begin{equation}\nP_3=\\frac{e}{2a^2}\\int_0^{T} dt\\,\\bm{r}(t)\\times \\partial_t\\bm{r}(t)\\cdot\\bm{e}_z=\\frac{eS_{\\bm{r}}}{a^2},\n\\end{equation}\nwhere $S_{\\bm{r}}$ represents the area enclosed by the orbit of $\\bm{r}(t)$ in\none cycle. Therefore,\n\\begin{equation}\n\t\\mathbcal{m}^{\\text{top}}=\\bm{e}_z\\frac{eS_{\\bm{r}}}{Ta^2}=\\frac{e}{2Ta^2}\\oint \\bm{r}(t)\\times d\\bm{r}(t).\\label{MArea1}\n\\end{equation}\nObserve the analogy to $\\bm{m}$ in Eq.~\\eqref{mod1}. This expression does not\nhave an integer ambiguity because it is given by the \\textit{abelian} third\nChern-Simons form.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[width=0.5\\columnwidth]{C4e.pdf}\n\t\t\\caption{\\label{fig:C4e} An adiabatic process with four-fold\n\t\trotational symmetry $C_4$. Each unit cell contains two\n\t\toccupied Wannier orbitals, whose trajectories during the adiabatic\n\t\tprocess are shown with dashed red and blue lines. The hatched\n\t\tarea is the Aharonov-Bohm flux per unit cell acquired during such an\n\t\tadiabatic process under an applied magnetic field. Letters $a$\n\t\tand $b$ denote Wyckoff positions.}\n\t\\end{center}\n\\end{figure}\n\nLet us verify this result from a direct calculation. 
The adiabatic motion of\nthe single orbit following the potential minimum at $\\bm{x}=\\bm{r}(t)$ induces\na local current distribution\n\\begin{equation}\n\\mathbcal{j}_t^0(\\bm{x})=\\partial_t\\bm{r}(t)n_t^0(\\bm{x}).\\label{idcurrent}\n\\end{equation}\nIt becomes divergence-free if averaged over one period:\n\\begin{align}\n&\\mathbcal{j}^0(\\bm{x})=\\int_0^T\\frac{dt}{T}\\mathbcal{j}_t^0(\\bm{x})=\\frac{1}{T}\\oint d\\bm{r}(t)n_t^0(\\bm{x}),\\\\\n&\\bm{\\nabla}\\cdot\\mathbcal{j}^0(\\bm{x})=-\\frac{e}{T}\\oint d\\bm{r}\\cdot\\bm{\\nabla}_{\\bm{r}}n_t^0(\\bm{x})=0.\n\\end{align}\nAs we have seen above, the sum of such microscopic currents from each unit cell\nproduces the bulk magnetization\n\\begin{align}\n\\mathbcal{m}^{\\text{top}}&=\\frac{1}{2a^2}\\int d^2x\\,\\bm{x}\\times\\mathbcal{j}^0(\\bm{x})\\notag\\\\\n&=\\frac{e}{2Ta^2}\\oint \\left(\\int d^2x \\bm{x}|\\phi_0(\\bm{x}-\\bm{r}(t))|^2\\right)\\times d\\bm{r}(t).\n\\end{align}\nThis agrees with Eq.~\\eqref{MArea1} because the integral in parentheses is\nprecisely $\\bm{r}(t)$ due to Eq.~\\eqref{exp}.\n\nThe result in Eq.~\\eqref{MArea1} can be readily generalized to the case with\nmultiple orbitals, such as the examples in Fig.~\\ref{fig:magnetization}c and e. Let\nus introduce potential minima $\\bm{x}=\\bm{r}_n(t)$ ($n=1,2,\\cdots$) in each\nunit cell, which are adiabatically varied as a function of $t\\in[0,T]$. This\ntime, each orbit is allowed to form an \\textit{open} curve, as long as the total\npolarization $\\bm{p}(t)=(e\/a^2)\\sum_{n}\\bm{r}_n(t)$ satisfies\n$\\bm{p}(T)=\\bm{p}(0)$. Under such an assumption, we find that\n\\begin{align}\n\t\\label{eq:P3ukN}\n\t\\mathbcal{m}^{\\text{top}}=&\\sum_{n}\\frac{e}{Ta^2}\\bigg(\\bm{S}_{\\bm{r}_n}+\\frac{1}{2}\\bm{r}_n(0)\\times\\bm{r}_n(T)\\bigg),\\\\\n\t\\bm S_{\\bm{r}_n}&\\equiv\\frac{1}{2}\\int_{\\bm{r}_n(0)}^{\\bm{r}_n(T)}\\bm{r}_n(t)\\times d\\bm{r}_n(t).\\label{Sn}\n\\end{align}\nWe present the proof in Appendix~\\ref{app:MzKP}. 
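The geometry behind these formulae can be checked numerically. The following illustrative sketch (our own construction, not from the paper) discretizes the loop integrals: for a closed orbit, $\frac{1}{2}\oint\bm{r}\times d\bm{r}$ reproduces the enclosed area entering Eq.~(\ref{MArea1}); for two open orbits that together restore the total polarization, the combination in Eq.~(\ref{eq:P3ukN}) is independent of the choice of origin even though each single-orbit term is not:

```python
import numpy as np

# Closed orbit r(t) = R (cos 2*pi*t/T, sin 2*pi*t/T): the discretized loop
# integral (1/2) sum r x dr should give the enclosed area pi R^2.
R, N = 0.3, 20000
t = np.linspace(0.0, 1.0, N, endpoint=False)
r = np.stack([R * np.cos(2 * np.pi * t), R * np.sin(2 * np.pi * t)], axis=1)
dr = np.roll(r, -1, axis=0) - r
S = 0.5 * np.sum(r[:, 0] * dr[:, 1] - r[:, 1] * dr[:, 0])
print(abs(S - np.pi * R ** 2) < 1e-6)

# Open-orbit case: each orbit contributes S_n + (1/2) r_n(0) x r_n(T).
def total(r1, r2, origin):
    out = 0.0
    for rn in (r1 - origin, r2 - origin):
        drn = np.diff(rn, axis=0)
        mid = 0.5 * (rn[:-1] + rn[1:])
        Sn = 0.5 * np.sum(mid[:, 0] * drn[:, 1] - mid[:, 1] * drn[:, 0])
        out += Sn + 0.5 * (rn[0, 0] * rn[-1, 1] - rn[0, 1] * rn[-1, 0])
    return out

# Two orbits that swap positions, so the total polarization is periodic.
s = np.linspace(0.0, 1.0, N)
r1 = np.stack([np.cos(np.pi * s), np.sin(np.pi * s)], axis=1)  # upper arc
r2 = -r1                                                       # lower arc
shift = np.array([0.7, -1.3])
print(abs(total(r1, r2, 0 * shift) - total(r1, r2, shift)) < 1e-6)
```

Shifting the origin changes each single-orbit term by $-\bm{o}\times(\bm{r}_n(T)-\bm{r}_n(0))$, which cancels in the sum precisely because $\sum_n\bm{r}_n(T)=\sum_n\bm{r}_n(0)$.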
Although the above\nexpression appears to be the sum of single-band contributions, the ``would-be''\ncontribution from each band depends on the specific choice of the origin when\nit does not form a closed loop. Only after performing the summation over\nall occupied bands, or in other words, only after fully taking into account the\n\\textit{non-abelian} nature of the third Chern-Simons form, does the result recover\nits independence of the choice of origin.\n\nAs an application of the formula \\eqref{eq:P3ukN}, let us discuss the corner\ncharge of the $C_4$-symmetric quadrupole insulator introduced in\nRef.~\\onlinecite{benalcazar2017}.\nFor the wallpaper group\n$p4$, there exist three special Wyckoff positions: the unit cell origin at\n$\\bm{x}_a=(0,0)$, the center of the plaquette at $\\bm{x}_b=(a\/2,a\/2)$, and the\ncenters of bonds at $\\bm{x}_c=(a\/2,0)$, $(0,a\/2)$.~\\cite{ITC} In the nontrivial\nphase, the two occupied Wannier orbitals are located at $\\bm{x}_b$, while in the\ntrivial phase they are at $\\bm{x}_a$. We consider a periodic adiabatic process\nillustrated in Fig.~\\ref{fig:C4e} starting with the nontrivial phase at $t=0$\nand passing the trivial phase at $t=T\/2$. The\ninstantaneous Hamiltonian $h_{t\\bm{k}}$ itself breaks the $C_4$-symmetry down\nto $C_2$ symmetry except when $t=0$ and $T\/2$, while the adiabatic process as a\nwhole implements the full $C_4$ in the sense of Eq.~\\eqref{eq:C4cycle}. We can\nreadily compute $P_3$ of this process using Eq.~\\eqref{eq:P3ukN}, which turns\nout to be $e\/2$. 
This is the corner charge of the quadrupole insulator as\npredicted by Eq.~\\eqref{corner}, which agrees with the original\nstudy.~\\cite{benalcazar2017} A variant of this adiabatic\nprocess was also discussed in Ref.~\\onlinecite{vanmiert2018}.\n\n\\subsection{Non-topological geometric magnetization}\\label{subsec:nontgeoM}\nIn this example we first consider a single electron in an anisotropic and\nrotating two-dimensional well,~\\cite{goldman2014} see\nFig.~\\ref{fig:magnetization}d. We assume a harmonic confining potential, i.e.,\nthe Hamiltonian~(\\ref{simplemodel}) with\n\\begin{equation}\n\tv_t^0(\\bm x)=\\frac{1}{2}m(\\omega_x^2x_t^2+\\omega_y^2y_t^2),\n\t\\label{eq:assymv0}\n\\end{equation}\nwhere $\\bm x_t\\equiv(x\\cos\\Omega t+y\\sin\\Omega t,-x\\sin\\Omega t+y\\cos\\Omega t)$\nand $\\Omega=2\\pi\/T$. To obtain the geometric orbital magnetization for this\nmodel, we consider an external magnetic field $\\bm B=B_z\\bm e_z$ described by the\nvector potential $\\bm A^{\\rm ex}(\\bm x)=\\bm B\\times\\bm x\/2$. (Strictly speaking,\nthis form is valid only around the origin as it lacks the required\nperiodicity.) The wave function of the instantaneous ground state of this model\ncan be obtained based on Ref.~\\onlinecite{rebane2012}:\n\\begin{align}\n\t\\phi_{t}^0(\\bm x)=\\mathcal{N}e^{\\frac{im\\omega_c(\\omega_y-\\omega_x)x_ty_t}{2(\\omega_x+\\omega_y)}-\\frac{m\\sqrt{(\\omega_x+\\omega_y)^2+\\omega_c^2}(\\omega_xx_t^2+\\omega_yy_t^2)}{2(\\omega_x+\\omega_y)}},\n\t\\label{eq:phi0assym}\n\\end{align}\nwhere $\\cal N$ is the normalization factor and $\\omega_c\\equiv eB_z\/m$ is the\ncyclotron frequency. 
The Berry phase $\\varphi_{B_z}$ during the adiabatic\nprocess $t\\in[0,T]$ is\n\\begin{align}\n\t\\varphi_{B_z}^0&=\\int_0^{T}dt\\int d^2 x\\phi_{t}^0(\\bm x)^*i\\partial_t\\phi_{t}^0(\\bm x)\\notag\\\\\n\t&=\\frac{\\pi \\omega_c(\\omega_y-\\omega_x)^2}{2\\omega_x\\omega_y\\sqrt{(\\omega_x+\\omega_y)^2+\\omega_c^2}}.\n\t\\label{eq:phiberry}\n\\end{align}\nFrom Eq.~(\\ref{eq:calMz}) it follows that the adiabatic\nprocess~(\\ref{eq:assymv0}) has a non-zero geometric orbital magnetic moment\n$\\mathcal{m}_z^0$:\n\\begin{align}\n\t\t\\mathcal{m}_z^0=\\frac{e(\\omega_y-\\omega_x)^2\\Omega}{4ma^2\\omega_x\\omega_y(\\omega_x+\\omega_y)}.\\label{cmz0}\n\\end{align}\n\n\nNow we construct the Bloch function \\eqref{eq:Bloch} using $\\phi_{t}^0(\\bm x)$\nas the building block and compute the geometric orbital magnetization\n$\\mathcal{m}_z$ based on Eq.~\\eqref{eq:nontopcalM} for the corresponding band\ninsulator. To this end we need the instantaneous eigenstates and eigenenergies in\nthe absence of an external magnetic field, including unoccupied bands. The Hamiltonian\n$h_{t\\bm k}$ is given by Eq.~(\\ref{egH}) with $v_t^0(\\bm x)$ given by\nEq.~(\\ref{eq:assymv0}) and $\\bm A^{\\rm ex}=0$. We assume that there is no\noverlap between wavefunctions belonging to different unit cells, as before.\n(When $\\omega_{x}, \\omega_{y}$ are large enough, such an assumption is valid at\nleast for the relevant low-energy states.) The Bloch\nwavefunctions read\n\\begin{align}\nu_{t\\bm{n}\\bm k}(\\bm{x})&\\equiv\\frac{a}{\\sqrt{V}}\\sum_{\\bm{R}}e^{i\\bm{k}\\cdot(\\bm{R}-\\bm{x})}\\phi_{t\\bm{n}}^0(\\bm{x}-\\bm{R}),\n\\end{align}\nwhere $\\bm{n}\\equiv(n_x,n_y)$ labels the energy levels of the two-dimensional\nanisotropic harmonic oscillator and $\\bm{n}=(0,0)$ corresponds to the ground\nstate in Eq.~\\eqref{eq:phi0assym} with $\\omega_c=0$. 
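As a quick consistency check (our own aside, with hypothetical variable names), Eq.~(\ref{cmz0}) can be verified against Eq.~(\ref{eq:phiberry}) numerically, assuming, as used in the text, that the geometric moment is the magnetic-field derivative of the Berry phase, $\mathcal{m}_z^0=(Ta^2)^{-1}\,\partial\varphi_{B_z}^0/\partial B_z|_{B_z=0}$ with $\omega_c=eB_z/m$ and $\Omega=2\pi/T$:

```python
import numpy as np

# Numerical point check of Eq. (cmz0) from Eq. (eq:phiberry); all parameter
# values below are arbitrary test values (natural units).
wx, wy, e, m, a, T = 1.3, 2.1, 1.0, 1.0, 1.0, 1.0
Omega = 2 * np.pi / T

def berry_phase(wc):
    """Eq. (eq:phiberry) as a function of the cyclotron frequency."""
    return (np.pi * wc * (wy - wx) ** 2
            / (2 * wx * wy * np.sqrt((wx + wy) ** 2 + wc ** 2)))

# d(phi)/dB_z at B_z = 0 via central differences; chain rule d(wc)/dB_z = e/m.
h = 1e-6
dphi_dB = (e / m) * (berry_phase(h) - berry_phase(-h)) / (2 * h)
mz0 = dphi_dB / (T * a ** 2)

# Closed form of Eq. (cmz0).
target = e * (wy - wx) ** 2 * Omega / (4 * m * a ** 2 * wx * wy * (wx + wy))
print(abs(mz0 - target) < 1e-8)
```

The same relation reproduces the Aharonov-Bohm term of Sec.~\ref{subsec:rotoM}, since there the Berry phase gains the extra piece $eB_z\pi R^2$.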
Substituting the above\nexpressions into Eqs.~(\\ref{eq:topc}) and~(\\ref{eq:nontopcalM}), we find\n\\begin{align}\n\t&\\mathcal{m}_z^{\\text{top}}=0,\\\\\n\t&\\mathcal{m}_z^{\\text{non-top}}=\\sum_{\\bm{n}\\neq(0,0)}\\frac{\\vert\\langle\\phi_{t}^0\\vert\\bm x\\times\\bm\\nabla\\vert\\phi_{t\\bm{n}}^0\\rangle\\vert^2e\\Omega}{4ma^2(n_x\\omega_x+n_y\\omega_y)}\\notag\\\\\n\t&=\\frac{\\vert\\langle\\phi_{t}^0\\vert\\bm x\\times\\bm\\nabla\\vert\\phi_{t(1,1)}^0\\rangle\\vert^2e\\Omega}{4ma^2(\\omega_x+\\omega_y)}=\\mathcal{m}_z^0.\n\t\\label{eq:barnett}\n\\end{align}\n\n\n\\subsection{Geometric magnetization by rotation}\\label{subsec:rotoM}\nHere we calculate the contribution to the geometric orbital magnetization of a rotating\nuncharged body and compare it to the classical Barnett\neffect.~\\cite{barnett1915,barnett1935} The Barnett effect predicts a magnetization\n$\\chi\/\\gamma\\Omega$, where $\\chi$ is the paramagnetic susceptibility, $\\gamma$\nis the electron gyromagnetic ratio, and $\\Omega$ is the rotation frequency. Since the rotation\naxis does not necessarily coincide with the potential well minima, we have $v_t^0(\\bm x-\\bm\nr(t))$ with $v_t^0$ from Eq.~(\\ref{eq:assymv0}) and $\\bm r(t)=(R\\cos\\Omega\nt,R\\sin\\Omega t,0)^{\\rm T}$, where $R$ is the distance of the potential well minimum from the\nrotation axis. 
The lowest-energy instantaneous wavefunction $\\phi_t^0(\\bm x)$\ncan be obtained from Eq.~(\\ref{eq:phi0assym}) by performing a gauge\ntransformation\n\\begin{align}\n\t\\label{eq:phi0}\n\t\\phi_{t}^0(\\bm x)=\\mathcal{N}e^{\\frac{im\\omega_c(\\omega_y-\\omega_x)(x_t-R)y_t}{2(\\omega_x+\\omega_y)}+\\frac{i}{2}m\\omega_c\\bm{e}_z\\cdot\\bm r(t)\\times\\bm x}\\\\\n\t\\times e^{-\\frac{m\\sqrt{(\\omega_x+\\omega_y)^2+\\omega_c^2}(\\omega_x(x_t-R)^2+\\omega_yy_t^2)}{2(\\omega_x+\\omega_y)}}.\\notag\n\\end{align}\nAs compared to Eq.~\\eqref{eq:phiberry}, the Berry phase $\\varphi_{B_z}$ during\nthe adiabatic process $t\\in[0,T]$ acquires an additional contribution $eB_z \\pi\nR^2$ from the Aharonov-Bohm phase. Therefore the electrons contribute the\nfollowing geometric orbital magnetization\n\\begin{align}\n\t\\mathcal{m}_z&=\\frac{eR^2\\Omega}{2a^2}+\\mathcal{m}_z^0.\n\\end{align}\nThe first term can be identified with $\\mathcal{m}_z^{\\text{top}}$ in\nEq.~(\\ref{MArea1}) and the second term is the contribution in Eq.~\\eqref{cmz0}.\nSince the body is uncharged, the contribution from the ions cancels the topological\ncontribution, while $\\mathcal{m}_z^{\\text{non-top}}=\\mathcal{m}_z^0$ remains,\nsince the ions are much more localized than the electrons. Assuming an anisotropy\n$\\omega_x\/\\omega_y=2$ and confinement of the electrons on the scale of angstroms,\nwe find that the contribution~(\\ref{eq:barnett}) is of the same order as the Barnett\neffect for paramagnets with paramagnetic susceptibility $\\chi\\sim10^{-5}$. For\ncomparison, paramagnets typically have a magnetic susceptibility\n$\\chi\\sim10^{-3}-10^{-5}$.~\\cite{wiki:paramagnetism}\n\n\\section{Examples: finite interacting systems}\\label{sec:examples2}\nIn this section we demonstrate the validity of Eq.~\\eqref{eq:calMz} for\nfinite interacting systems. 
We consider two canonical ways of introducing the\ntime-dependence to the Hamiltonian: rotating~\\cite{ceresoli2002,stengel2018}\nand translating the whole system.~\\cite{goldman2014,juraschek2017,dong2018}\n\nWe consider many-body systems under the \\textit{open boundary condition} in two\nspatial dimensions. We start with a \\textit{time-independent} Hamiltonian\n$\\hat{H}$ that can contain arbitrary interactions. The total charge, current,\npolarization, and orbital magnetization operators for this Hamiltonian can be\nwritten as $\\hat{\\bm{N}}=\\int_Vd^2x \\hat{\\bm{n}}(\\bm{x})$, $\\hat{\\bm{J}}=\\int\nd^2x \\hat{\\bm{j}}(\\bm{x})$, $\\hat{\\bm{X}}=\\int_Vd^2x\n\\bm{x}\\hat{\\bm{n}}(\\bm{x})$, and $\\hat{\\bm{M}}=(1\/2)\\int_Vd^2x\n\\bm{x}\\times\\hat{\\bm{j}}(\\bm{x})$. We stress that these expressions are valid\nonly when the system is confined in a finite region; they need to be modified\nin extended systems under periodic boundary conditions, as is done in the\nmodern theory. We denote the many-body ground state of $\\hat{H}$ and its\nenergy by $|\\Phi\\rangle$ and $E$, respectively.\n\nTo compute the many-body Berry phase, let $\\hat{H}_{\\bm{B}}$ be the Hamiltonian\nwith the vector potential in the symmetric gauge\n$\\bm{A}^{\\text{ex}}(\\bm{x})=(1\/2)\\bm{B}\\times\\bm{x}$ with $\\bm{B}=B_z\\bm{e}_z$.\nExpanding to linear order in $B_z$ and using\n$\\hat{\\bm{j}}(\\bm{x})=-\\partial_{\\bm{A}(\\bm{x})}\\hat{H}$, we get\n\\begin{equation}\n\\hat{H}_{B_z}=\\hat{H}-\\hat{M}_zB_z+O(B_z^2).\\label{perturbationmB}\n\\end{equation}\nTherefore, the ground state of $\\hat{H}_{B_z}$ to leading order in $B_z$\ncan be expressed as\n\\begin{equation}\n|\\Phi_{B_z}\\rangle=|\\Phi\\rangle+\\hat{Q}\\frac{1}{\\hat{H}-E}\\hat{Q}\\hat{M}_zB_z|\\Phi\\rangle+O(B_z^2).\\label{firstB}\n\\end{equation}\nHere $\\hat{Q}\\equiv1-|\\Phi\\rangle\\langle\\Phi|$ is the projector onto excited\nstates.\n\n\\subsection{Rotation}\\label{subsec:rotom}\nHere we consider the 
time-dependence of the problem induced by the rotation of\nthe whole system\n\\begin{equation}\n\\hat{H}_t\\equiv e^{-i\\hat{L}_z\\Omega t}\\hat{H}e^{i\\hat{L}_z\\Omega t},\n\\end{equation}\nwhere $\\bm{\\Omega}=\\bm{e}_z\\Omega$ is the rotation frequency and $\\hat{\\bm{L}}$ is\nthe angular momentum operator. For the time-dependent Hamiltonian $\\hat{H}_t$,\nthe orbital magnetization operator $\\hat{\\bm{M}}_t\\equiv (1\/2)\\int_Vd^2x\n\\bm{x}\\times\\hat{\\bm{j}}_t(\\bm{x})$ becomes\n\\begin{equation}\n\\hat{\\bm{M}}_t=e^{-i\\hat{L}_z\\Omega t}\\hat{\\bm{M}}e^{i\\hat{L}_z\\Omega t}.\\label{eq:rotm}\n\\end{equation}\n\nWe evaluate the instantaneous contribution $\\bm{m}_{\\text{pers}}$ and the\ngeometric contribution $\\mathbcal{m}$ to the orbital magnetization via the\nformulae in Eqs.~\\eqref{eq:Xinst} and \\eqref{eq:Xgeom}. The instantaneous\ncontribution is given by the instantaneous ground state\n$|\\Phi_t\\rangle=e^{-i\\hat{L}_z\\Omega t}|\\Phi\\rangle$:\n\\begin{equation}\n\\bm{m}_{\\text{pers}}\\equiv\\int_0^T\\frac{dt}{T}\\frac{\\langle\\Phi_t|\\hat{\\bm{M}}_t|\\Phi_t\\rangle}{V}=\\frac{\\langle\\Phi|\\hat{\\bm{M}}|\\Phi\\rangle}{V}.\\label{M1}\n\\end{equation}\nThe geometric contribution is given by the many-body Berry phase. Since the\ninstantaneous ground state of the Hamiltonian $\\hat{H}_{tB_z}\\equiv\ne^{-i\\hat{L}_z\\Omega t}\\hat{H}_{B_z}e^{i\\hat{L}_z\\Omega t}$ is given by\n$|\\Phi_{tB_z}\\rangle=e^{-i\\hat{L}_z\\Omega t}|\\Phi_{B_z}\\rangle$, we have\n\\begin{equation}\n\\varphi_{B_z}=\\int_0^Tdt\\langle\\Phi_{tB_z}|i\\partial_t|\\Phi_{tB_z}\\rangle=T\\langle\\Phi_{B_z}|\\hat{L}_z\\Omega|\\Phi_{B_z}\\rangle.\n\\end{equation}\nThis is the expectation value of $\\hat{L}_z\\Omega$ in the presence of the\nperturbation $-\\hat{M}_z B_z$ in Eq.~\\eqref{perturbationmB}. 
Using\nEq.~\\eqref{firstB}, we get\n\\begin{equation}\n\t\\mathbcal{m}=\\langle\\Phi|\\hat{L}_z\\Omega\\hat{Q}\\frac{1}{\\hat{H}-E}\\hat{Q}\\frac{\\hat{\\bm{M}}}{V}|\\Phi\\rangle+\\text{c.c.}\\label{M2}\n\\end{equation}\n\nWe verify these results by solving the time-dependent problem. The solution to the\n\\textit{time-dependent} Schr\\\"odinger equation\n$i\\partial_t|\\Psi_t\\rangle=\\hat{H}_t|\\Psi_t\\rangle$ can be readily constructed\nusing the ground state $|\\Phi_{\\Omega}\\rangle$ of the \\textit{time-independent}\nHamiltonian\n\\begin{equation}\n\\hat{H}_{\\Omega}\\equiv \\hat{H}-\\hat{L}_z\\Omega.\\label{perturbationOL}\n\\end{equation}\nThe solution that is smoothly connected to the ground state in the static limit $\\Omega\\rightarrow0$ reads\n\\begin{equation}\n|\\Psi_t\\rangle=e^{-i\\hat{L}_z\\Omega t-iE_{\\Omega}t}|\\Phi_{\\Omega}\\rangle.\\label{eq:rotphi2}\n\\end{equation}\nThe time-average of the orbital magnetization is thus given by\n\\begin{equation}\n\\bm{m}=\\int_0^T\\frac{dt}{T}\\frac{\\langle\\Psi_t|\\hat{\\bm{M}}_t|\\Psi_t\\rangle}{V}=\\frac{\\langle\\Phi_{\\Omega}|\\hat{\\bm{M}}|\\Phi_{\\Omega}\\rangle}{V}.\n\\end{equation}\nThis is the expectation value of $\\hat{\\bm{M}}$ in the presence of the\nperturbation $-\\hat{L}_z\\Omega$ as in Eq.~\\eqref{perturbationOL}. First-order perturbation theory with respect to $\\Omega$ gives\n\\begin{equation}\n\t\\bm{m}=\\bm m_{\\text{pers}}+\\langle\\Phi|\\hat{L}_z\\Omega\\hat{Q}\\frac{1}{\\hat{H}-E}\\hat{Q}\\frac{\\hat{\\bm{M}}}{V}|\\Phi\\rangle+\\text{c.c.}\n\\end{equation}\nThis is precisely the $\\bm{m}_{\\text{pers}}+\\mathbcal{m}$ predicted above in\nEqs.~\\eqref{M1} and~\\eqref{M2}. 
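The first-order perturbation structure above can be illustrated on a finite matrix toy model (our own sketch; random Hermitian matrices stand in for the many-body operators $\hat{H}$, $\hat{L}_z$, $\hat{M}_z$):

```python
import numpy as np

# Toy check: the ground-state expectation of M in H - Omega*L should equal
# <M> plus the Omega * (<L Q (H-E)^{-1} Q M> + c.c.) correction (cf. Eq. M2).
rng = np.random.default_rng(1)

def herm(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

n, Omega = 6, 1e-5
H, L, M = herm(n), herm(n), herm(n)

w, v = np.linalg.eigh(H)
E, phi = w[0], v[:, 0]

# "Exact" ground state of the rotating-frame Hamiltonian H - Omega*L.
wO, vO = np.linalg.eigh(H - Omega * L)
exact = np.real(vO[:, 0].conj() @ M @ vO[:, 0])

# Perturbative prediction with the reduced resolvent Q (H-E)^{-1} Q,
# implemented via the pseudoinverse restricted to the excited subspace.
Q = np.eye(n) - np.outer(phi, phi.conj())
G = np.linalg.pinv(Q @ (H - E * np.eye(n)) @ Q)
corr = phi.conj() @ L @ Q @ G @ Q @ M @ phi
pred = np.real(phi.conj() @ M @ phi) + Omega * 2 * np.real(corr)
print(abs(exact - pred) < 1e-5)
```

The residual difference is $O(\Omega^2)$, consistent with truncating the perturbation series at first order.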
As is clear from the derivation, the\nagreement of the two independent approaches is guaranteed by the Maxwell\nrelation for the free energy\n$\\hat{F}\\equiv\\hat{H}-\\hat{L}_z\\Omega-\\hat{M}_zB_z$:\n\\begin{equation}\n\\partial_{B_z}\\langle\\hat{L}_z\\rangle=-\\partial_{B_z}\\partial_{\\Omega}\\langle\\hat{F}\\rangle=-\\partial_{\\Omega}\\partial_{B_z}\\langle\\hat{F}\\rangle=\\partial_{\\Omega}\\langle\\hat{M}_z\\rangle.\n\\end{equation}\n\n\\subsection{Translation}\nNext let us introduce the time-dependence by translation. All discussions\nproceed in essentially the same way, although there are still a few differences.\nFirst we define the time-dependent Hamiltonian by\n\\begin{equation}\n\\hat{H}_t'\\equiv \\hat T_t\\hat{H}\\hat T_t^\\dagger,\n\\end{equation}\nwhere $\\hat T_t=e^{-i\\hat{\\bm P}\\cdot\\bm r(t)}$ is the translation by the amount $\\bm\nr(t)$ and $\\hat{\\bm{P}}$ is the momentum operator. For $\\hat{H}_t'$ the orbital\nmagnetization operator becomes\n\\begin{equation}\n\\hat{\\bm{M}}_t'= \\hat T_t\\left(\\hat{\\bm{M}}+\\frac{1}{2}\\bm{r}(t)\\times\\hat{\\bm{J}}\\right)\\hat T_t^\\dagger,\\label{eq:transm}\n\\end{equation}\nwhere the second term in parentheses is due to the change of the origin. 
The\ninstantaneous ground state $|\\Phi_t'\\rangle=\\hat T_t|\\Phi\\rangle$\ngives $\\bm{m}_{\\text{pers}}$ as in Eq.~(\\ref{M1}), where we used\n$\\langle\\Phi|\\hat{\\bm{J}}|\\Phi\\rangle=\\bm{0}$.\n\nNext, we compute the geometric contribution via the many-body Berry phase.\nIn the presence of a magnetic field, the translation operator $\\hat\nT_{tB_z}\\equiv\\hat T_{B_z}(\\bm r(t))$ becomes a translation followed by a gauge\ntransformation~\\cite{brown1964,zak1964}\n\\begin{equation}\n\\partial_t\\hat{T}_{tB_z}\\equiv -i(\\hat{\\bm{P}}+\\tfrac{e}{2}\\bm{B}\\times\\hat{\\bm{X}})\\cdot\\partial_t\\bm{r}(t)\\hat{T}_{tB_z}.\n\\end{equation}\nThe instantaneous ground state of $\\hat{H}_{tB_z}\\equiv\n\\hat{T}_{tB_z}\\hat{H}_{B_z}\\hat{T}_{tB_z}^\\dagger$ is\n$|\\Phi_{tB_z}\\rangle=\\hat{T}_{tB_z}|\\Phi_{B_z}\\rangle$, thus the many-body\nBerry phase reads\n\\begin{equation}\n\\varphi_{B_z}=\\int_0^Tdt\\langle\\Phi_{B_z}|\\hat{T}_{tB_z}^\\dagger i\\partial_t\\hat{T}_{tB_z}|\\Phi_{B_z}\\rangle=eN\\bm{S}_{\\bm{r}}\\cdot \\bm{B}.\n\\end{equation}\nHere, $\\bm{S}_{\\bm{r}}\\equiv\\frac{1}{2}\\oint \\bm{r}(t)\\times d\\bm{r}(t)$\nrepresents the area swept by $\\bm{r}(t)$ in one cycle. In the derivation, we\nused\n\\begin{align}\n&\\hat{T}_{tB_z}^\\dagger i\\partial_t\\hat{T}_{tB_z}\\notag\\\\\n&=(\\hat{\\bm{P}}+\\tfrac{e}{2}\\bm{B}\\times\\hat{\\bm{X}})\\cdot\\dot{\\bm{r}}(t)+\\tfrac{eN}{2}\\bm{r}(t)\\times\\partial_t\\bm{r}(t)\\cdot \\bm{B}.\n\\end{align}\nTherefore, when the whole system is translated, the geometric contribution to\nthe orbital magnetization captures the Aharonov-Bohm phase\n\\begin{equation}\n\\mathbcal{m}=\\frac{eN}{TV}\\bm{S}_{\\bm{r}}.\\label{M4}\n\\end{equation}\n\nTo verify the above results, we consider the time-dependent Schr\\\"odinger\nequation $i\\partial_t|\\Psi_t'\\rangle=\\hat{H}_t'|\\Psi_t'\\rangle$. 
An\napproximate solution is given by $|\\Psi_t'\\rangle=\\hat\nT_t|\\Phi_{t\\bm{r}}\\rangle$, where $|\\Phi_{t\\bm{r}}\\rangle$ is the instantaneous\nground state of the Hamiltonian $\\hat{H}_{t\\bm{r}}\\equiv\n\\hat{H}-\\hat{\\bm{P}}\\cdot\\partial_t\\bm{r}(t)$. Therefore, the time-average of\nthe orbital magnetization is\n\\begin{align}\n&\\bm{m}=\\int_0^T\\frac{dt}{T}\\frac{\\langle\\Psi_t'|\\hat{\\bm{M}}_t'|\\Psi_t'\\rangle}{V}\\\\\n&=\\int_0^T\\frac{dt}{T}\\frac{\\langle\\Phi_{t\\bm{r}}|\\hat{\\bm{M}}|\\Phi_{t\\bm{r}}\\rangle}{V}+\\frac{eN}{2TV}\\int_0^Tdt\\,\\bm{r}(t)\\times \\partial_t\\bm{r}(t).\\notag\n\\end{align}\nIn the adiabatic limit, this reproduces $\\bm{m}_{\\text{pers}}+\\mathbcal{m}$ in\nEqs.~\\eqref{M1} and \\eqref{M4}. In the derivation we used\n$\\langle\\Phi_{t\\bm{r}}|\\hat{\\bm{J}}|\\Phi_{t\\bm{r}}\\rangle=eN\\partial_t\\bm{r}(t)$\nfor the ground state of $\\hat H_{t\\bm r}$.\n\n\\section{Conclusion}\\label{sec:conclusions}\nIn order to obtain the current and charge distributions in a medium, one needs to\nsolve Maxwell's equations together with two constitutive relations [see\nEq.~(\\ref{eq:jmeso})] that fully characterize the medium at the mesoscopic\nscale. The modern theories, developed in the last 30 years, provide handy\nformulae to calculate the electric\npolarization~\\cite{king-smith1993,vanderbilt1993,resta1994,resta2007} and\norbital magnetization~\\cite{xiao2005,thonhauser2005,ceresoli2006,shi2007} for\nrealistic materials.\n\nThe focus of this work is on spinless short-range entangled systems under\nperiodic adiabatic evolution. Our main result is the identification of an additional\ncontribution to the orbital magnetization that we name the geometric orbital\nmagnetization $\\mathbcal{m}$. This new contribution is defined only after\nperforming the time-average over the period of the adiabatic process, which\nmakes the current density divergence-free. 
We find that the geometric orbital\nmagnetization can be expressed compactly as the derivative of the many-body Berry\nphase with respect to an externally applied magnetic field. For band\ninsulators, we obtain handy formulae for the bulk geometric orbital\nmagnetization $\\mathbcal{m}$ in terms of instantaneous Bloch states and\nenergies. Interestingly, we find\nthat for band insulators\n$\\mathbcal{m}=\\mathbcal{m}^{\\text{top}}+\\mathbcal{m}^{\\text{non-top}}$ consists\nof two pieces, where the topological piece $\\mathbcal{m}^{\\text{top}}$ depends\nonly on the Bloch states of the occupied bands. For spinless systems only the electric\npolarization and orbital magnetization enter the constitutive relations, since the\ncontributions from higher moments are typically negligible.~\\cite{jackson1999}\nIn this sense, our results together with ``the modern theories'' provide a\ncomplete mesoscopic description of a medium under periodic adiabatic time\nevolution. In this work we have not considered adiabatic processes with ground\nstate degeneracy;~\\cite{meidan2011} it would be interesting to see to what\nextent our findings can be generalized to such systems.\n\nIn the present work, the adiabaticity assumption is crucial for the validity of the\nobtained results. In practice, for band insulators with band gaps on the order\nof an electronvolt, this condition requires that the period $T$ be larger than a\ncouple of femtoseconds. Nevertheless, a shorter period $T$ results in a larger\ngeometric orbital magnetization. It would therefore be interesting to extend\nour results to the case of a strong drive that excites unoccupied bands. 
In the\ncase of Thouless pumps such an extension was very fruitful and resulted in the recent\ndiscovery of shift currents.~\\cite{balz1981,morimoto2016}\n\nAlthough higher (than dipole) electric and magnetic multipole moments typically\ndo not enter the constitutive relations, the knowledge of these quantities may be\nuseful for certain systems.~\\cite{benalcazar2017,benalcazar2018} In fact, it is\na topic of current research whether higher moments can be established as bulk\nquantities in general.~\\cite{gao2018b,shitade2018,kang2018,metthew2018,ono2019}\nIn the presence of certain crystalline symmetries, both the electric\npolarization and the topological geometric orbital magnetization can be quantized,\nin which case they can serve as topological invariants. In this context, we\nshowed that the quantized quadrupole moment is related to\n$\\mathcal{m}^{\\text{top}}_z$ in systems with proper symmetries that allow a bulk\ndefinition of the quadrupole moment.~\\cite{ono2019}\n\nIn this work we succeeded in separating $\\mathbcal{m}$ into the topological and\nthe non-topological piece only for band insulators. There, we found that the\ntopological contribution is expressed as the third Chern-Simons form ($P_3$) in\n$(t,k_x,k_y)$ space. For interacting systems, based on the examples considered in\nSec.~\\ref{sec:examples2}, we conclude that it is possible to separate the\nAharonov-Bohm contribution originating from the center-of-mass motion. In fact,\nthis contribution can be captured by calculating $P_3$ formally defined for the\nmany-body ground state as a function of time and two solenoidal fluxes.\nClearly, the many-body $P_3$ defined in such a manner is abelian and does not\ncapture all possible topological contributions. For example, it vanishes for\nthe model in Fig.~\\ref{fig:C4}. As a future direction, it would be interesting\nto see if the separation achieved for band insulators is possible for general\nsingle-particle, or even many-body, systems. 
The affirmative answer to this\nquestion would provide a way to define $P_3$ in two-dimensional systems with\nadiabatic time-dependence lacking translational invariance or a\nsingle-particle description. A formula for $P_3$ in many-body\nthree-dimensional systems already exists in the\nliterature,~\\cite{wang2014,shiozaki2018b} where it was argued that $P_3$ is\nrelated to the magnetoelectric polarizability. The magnetoelectric\npolarizability of three-dimensional materials contains, at\nleast in the case of band insulators, not only a topological but also a\nnon-topological contribution,~\\cite{essin2010} thus the analogous ``separation\nquestion'' arises in that context as well. Additionally, defining a quantized\nquadrupole moment for interacting systems is one of the open\nquestions.~\\cite{kang2018,metthew2018,ono2019} Since for band insulators we\nfound a connection between $\\mathcal{m}_z^{\\text{top}}$ and the quantized quadrupole\nmoment, separating the $\\mathcal{m}_z^{\\text{top}}$ contribution in interacting\nsystems might provide a useful many-body definition of the quantized quadrupole\nmoment.\n\nWe hope that our work will also have practical implications, as it contributes to\nthe emerging field of ``dynamical material design'' by providing a way to calculate\nthe additional orbital magnetization contribution that appears in these\nsystems.~\\cite{ceresoli2002,juraschek2017,juraschek2018,dong2018,stengel2018}\n\n\\begin{acknowledgements}\nWe would like to thank David Vanderbilt for drawing\nRefs.~\\onlinecite{ceresoli2002,juraschek2017,juraschek2018,dong2018,stengel2018}\nto our attention. The work of S.O. is supported by the Materials Education program\nfor the future leaders in Research, Industry, and Technology (MERIT). The work\nof H.W. is supported by JSPS KAKENHI Grant No.~JP17K17678 and by JST PRESTO\nGrant No.~JPMJPR18LA. 
\n\\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzauce b/data_all_eng_slimpj/shuffled/split2/finalzzauce new file mode 100644 index 0000000000000000000000000000000000000000..00e5451b3cb7051439198d9a665d5f0d13c1178f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzauce @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\label{sec:intro}\nWith the initial LIGO detection of a black hole merger \\cite{Abbott:2016blz}, the multi-messenger observation of a merging neutron star binary \\cite{GBM:2017lvd} and the recent observation of an intermediate mass black hole merger \\cite{Abbott:2020khf} we are beginning to realise the discovery potential of gravitational waves (GWs). LIGO and other ground-based detectors are optimised for stellar-origin black holes and sensitive to the $10$ Hz -$10$ kHz frequency band. The exploration of signals from the merger of the much larger black holes at the centre of galaxies \nwill take place at future space-based GW observatories, where longer arm lengths open up the $10^{-4}$ Hz to $10^{-1}$ Hz range of the GW spectrum. Such experiments include the ESA-NASA mission LISA \\cite{Audley:2017drz}, \nTaiji \\cite{Guo:2018npi} and TianQin \\cite{Mei:2020lrl}, all aiming for launch in the mid-2030s. LISA and TianQin have both launched test satellites, the final LISA pathfinder mission results \\cite{Anderson:2018nqj} and the initial results from TianQin-1 \\cite{Luo:2020bls} are promising.\n\nAs well as massive black hole mergers, expected astrophysical sources in the millihertz band include galactic binaries \\cite{Postnov:2014tza}, extreme mass ratio binaries \\cite{AmaroSeoane:2007aw} and precursors for stellar origin black hole mergers \\cite{Sesana:2016ljz}. Cosmological sources could include stochastic gravitational wave backgrounds (SGWBs) from inflation, cosmic strings and cosmological phase transitions \\cite{Caprini:2018mtu}. 
\n\nIn this paper we focus on a SGWB from cosmological phase transitions, specifically those from around 10 picoseconds after the Big Bang, when it is expected the electroweak symmetry broke. In the Standard Model this process occurs via crossover \\cite{Kajantie:1996mn,Kajantie:1996qd}, and the GW signal is expected to be negligible at observable frequencies \\cite{Ghiglieri:2020mhm}. However, there are numerous extensions to the Standard Model in which a first order phase transition is possible. A review of possible extensions can be found in Ref.~\\cite{Caprini:2019egz}.\n \nIn such theories, below the critical temperature, bubbles of the stable phase spontaneously nucleate in the surrounding metastable phase. These bubbles expand, collide and merge until only the stable phase remains, leaving behind a characteristic spectrum of sound waves, which are a persistent source of GWs \\cite{Hindmarsh:2013xza,Hindmarsh:2015qta,Hindmarsh:2017gnf}. The collision of bubble walls \\cite{Kosowsky:1991ua,Cutting:2020nla,Lewicki:2020jiv,Lewicki:2020azd} and turbulent flows \n\\cite{Kosowsky:2001xp,Gogoberidze:2007an,Caprini:2009yp,Pol:2019yex} also generate GWs. \nHere, we consider only the contribution from sound waves as they are currently expected to be the dominant source \nover a wide range of parameters \\cite{Caprini:2019egz}. \n\nIf the critical temperature is in the range 100 -- 1000 GeV, the peak frequency of the GW power spectrum can be in the millihertz range, and potentially detectable at a space-based observatory. This means that the discovery potential of GW observations includes fundamental physics beyond the Standard Model. 
The new physics may include a mechanism for baryogenesis \\cite{Kuzmin:1985mm}, a strong motivation for \nconsidering Standard Model extensions with a first order phase transition.\nFor a recent introduction to baryogenesis see Ref.~\\cite{Cline:2018fuq}, \nand to phase transitions in the early Universe see Refs.~\\cite{Mazumdar:2018dfl,Hindmarsh:2020hop}. \nFor a review of the prospects for probing physics beyond the Standard Model, see Ref.~\\cite{Caprini:2019egz}.\n\nNumerical simulations for the acoustic contribution to the GW signature from a first order phase transition \\cite{Hindmarsh:2013xza,Hindmarsh:2015qta,Hindmarsh:2017gnf} have shown that the sound waves generated by the expanding bubble determine the GW power spectrum. \nThe simulations motivate a simple broken power-law model for the sound wave contribution used by the LISA cosmology working group \\cite{Caprini:2019egz}. The uncertainties that arise from various levels of approximations have been explored in \\cite{Guo:2021qcq}. \n\nCurrently the most sophisticated model for computing the GW power spectrum from sound waves \nis the sound shell model (SSM) \\cite{Hindmarsh:2016lnk,Hindmarsh:2019phv}. \nThe SSM shows how the GW power spectrum can be computed from the velocity power spectrum of the fluid, which in turn is dependent on a few key thermodynamic parameters that can be calculated from the underlying theory. \nThese key parameters effect the overall amplitude, frequency scale and the detailed shape of the power spectra. \nIn its simplest form, there are four thermodynamic parameters: the bubble nucleation temperature, \nthe transition rate, the transition strength, and the bubble wall speed. All are in principle computable from an \nunderlying theory, making them the interface between observation and theory. \nAt the moment there are significant uncertainties in these calculations \\cite{Croon:2020cgk}. 
\nOur work can be used to set targets for future developments of theoretical methods.\n\nThe SSM predicts two important frequency scales in the power spectrum, and a double broken power law has been proposed as an analytic fit \cite{Hindmarsh:2019phv}. The functional form depends on the peak power, peak frequency, the ratio of the frequencies of the two breaks \nand the slope between the two breaks. We call these the spectral parameters, and distinguish them from the thermodynamic parameters discussed above. We show that the double broken power law form is much closer to the SSM prediction than \nthe single broken power law fit given in \cite{Caprini:2019egz}. \n\nIn this work we use the Fisher matrix \cite{Fisher:1922saa} to explore LISA's ability to extract parameters \nthat describe a SGWB from a first order phase transition, also examining the effect of \nthe expected foregrounds from galactic and extragalactic compact binaries. The Fisher matrix is known to underestimate uncertainties, especially when there are degeneracies amongst parameters, as is thought to be the case with the thermodynamic parameters. Despite this, we can expect the Fisher matrix will give an insight into parameter sensitivity and provide a better understanding of the degeneracies themselves.\n\nWe calculate the relative uncertainty both of the spectral parameters and the thermodynamic parameters as described above, \nwith and without foregrounds, over a range of fiducial models.\nWe focus on LISA but the methods could be easily adapted to other missions by altering the noise model. 
This complements general power law searches in mock LISA data \cite{Adams:2013qma, Boileau:2021sni, Boileau:2020rpg}, a Fisher matrix analysis for a single broken power law with LISA, DECIGO and BBO mock data \cite{Hashino:2018wee}, searches for cosmological phase transition SGWB in LIGO and NANOGrav data \cite{Romero:2021kby,Arzoumanian:2021teu}, and methods for general SGWB searches where the search is agnostic about the spectral shape of the GW background \cite{Pieroni:2020rob,Flauger:2020qyi}.\n\nFor our fiducial models \nwe focus on a thermodynamic parameter space motivated by an electroweak-scale transition, by relevance for observation, \nand also by the reliability of predictions. \nThe electroweak scale motivates the choice of nucleation temperature $\TN=100$ GeV. \nRelevance for observation motivates examining supercooled transitions with \nmean bubble spacing to Hubble length ratio $r_* = 10^{-1}$ and $10^{-2}$, as much smaller values would render the signal too weak.\nThe reliability of the sound shell model predictions can be tested against numerical simulations \cite{Cutting:2019zws} \nin the range of wall speeds $0.24 < \vw < 0.92$ and with transition strength parameter $\al < 0.5$. \nWe study the range $0.4 < \vw < 0.9$, as lower wall speeds will also probably not be observable at LISA. \n\nThis parameter space produces signals with gravitational wave density fraction today up to $\Omega_\text{p,0} \sim 10^{-10}$ and \npeak frequencies $f_\text{p,0}$ in the range $10^{-2}$ mHz to $5$ mHz, which can produce SNRs well over 100. \nA transition with $r_* = 0.1$ should produce an observable signal over most of the \nparameter space.\n\nOf the spectral parameters, LISA will be most sensitive to the peak power and peak frequency, reaching \napproximately 10\% uncertainty in the peak power and frequency for signal-to-noise ratio (SNR) above 20. \nFor $r_* = 0.1$, SNR 20 can be reached over most of the range $0.5 < \vw < 0.8$ and $\al > 0.2$. 
\nFor $r_*= 0.01$, stronger transitions are required to reach the same SNR.\n\nOf the thermodynamic parameters, there is greatest sensitivity to the wall speed. \nAt SNR $=20$ the relative uncertainty in the wall speed is $10\\%$ in some regions of the thermodynamic parameter space, \nbut the sensitivity to the other parameters is reduced by degeneracies. \nExamining the principal components, \none finds an uncertainty of $3\\%$ or better for the two highest-order components, at SNR $=20$. \nHence there are good prospects for combinations of parameters. \nThe best-determined principal component is dominated by the wall speed. The second-best has $\\al$ as the most important contribution, but other parameters also contribute.\n\nIf a parameter combination could be predicted in the light of other data, the prospects for estimating the other parameters would be much better. \nAs a simple example, we consider a case where the nucleation temperature is known, \nfor a transition in which the mean bubble spacing parameter is $r_* = 0.1$.\nHere, the phase transition strength and the mean bubble spacing can be constrained to $10\\%$ and $30\\%$ respectively. \nIf the galactic binary foreground can be removed, \nthe uncertainty in the phase transition strength can be as low as 10\\% for transitions with $\\al \\simeq 0.1$. \n\nThe paper is organised as follows. In Sec.~\\ref{sec:cosmo} we review the production of GWs from a first order phase transition in the early universe, how they relate to the underlying thermodynamic parameters, and introduce the SSM \\cite{Hindmarsh:2016lnk}. The setup we consider for LISA and the noise model is outlined in Sec.~\\ref{sec:noise}. In Sec.~\\ref{sec:fm} we describe our method for calculating the Fisher matrix, relative uncertainties and principal components. The relative uncertainties in the spectral and thermodynamic parameters are presented in Sec.~\\ref{sec:results}. 
The discussion of the results is given in Sec.~\ref{sec:Discussion}.\n\nIn this work we set $c= 1$ and $k_{\text{B}} =1$, unless otherwise specified. \n\n\section{Gravitational waves from a first order cosmological phase transition}\n\label{sec:cosmo}\n\subsection{Cosmological phase transitions }\nAs the universe expanded and cooled, significant changes in the equation of state must have occurred at temperatures of around 100 GeV, when elementary particle rest masses were generated, and at \n100 MeV, when quarks and gluons became confined into hadrons. \n\nIt is known that both of these changes happened via a smooth cross-over in the Standard Model \n\cite{Borsanyi:2016ksw,Kajantie:1996qd,Kajantie:1996mn}, but in extensions of the Standard Model \nfirst order electroweak-scale transitions are common (see Ref.~\cite{Caprini:2019egz} for a survey). Such phase transitions are often associated with a change in the symmetry of the \nplasma, accompanied by a change in the value of an order parameter, which in the \ncase of the electroweak transition is the magnitude of the Higgs field. \n\nIn a symmetry-breaking phase transition such as the electroweak transition, one often refers to the high-temperature phase as the ``symmetric'' phase and the low-temperature phase as the ``broken'' phase. \n \nIn a first order phase transition, at a critical temperature $T_\text{c}$ there are \ntwo degenerate minima of the free energy separated by a barrier. \nAs the temperature cools below $\Tc$, the broken phase becomes lower in free energy, and the \nsystem can move to it via localised thermal or quantum fluctuations. \nThis leads to bubbles of broken phase nucleating within the symmetric region. These bubbles expand due to the pressure difference between the interior and exterior, inevitably present as the pressure is minus the free energy density. \nThe bubbles collide and merge until only the broken phase remains. 
\nSome of the latent heat of the transition is converted into kinetic energy of the cosmological fluid surrounding the bubbles, which is a source of shear stress, leading to the production of gravitational waves. \n\nNow we introduce the key thermodynamic parameters that determine the gravitational wave signature from a first order phase transition (see e.g.~\\cite{Hindmarsh:2020hop}). The first of these is the nucleation temperature $\\TN$, which we define as the peak of the globally-averaged bubble nucleation rate. The Hubble rate at the nucleation temperature sets the frequency scale of the GW power spectrum. \n\nThe second one is the nucleation rate parameter $\\beta$, which is often given as a ratio with $\\HN$, the Hubble parameter at $\\TN$\n\\begin{equation}{\\label{Eq:B\/H}}\n \\tilde{\\beta}=\\frac{\\beta}{\\HN} \\sim \\frac{\\vw}{\\HN R_*},\n\\end{equation}\nwhere $\\vw$ is the speed of the expanding bubble wall and $R_*$ is the mean bubble spacing. From this we see that $\\tilde{\\beta}$ controls $R_*$, which in turn sets the characteristic wavelength of the gravitational radiation. The constant of proportionality is $(8\\pi)^{1\/3}$ \\cite{Enqvist:1991xw} for detonations, but \nfor deflagrations it is also dependent on $\\al$ and $\\vw$, as the nucleation rate is reduced by \nthe reheating of the fluid in front of the bubble wall. In view of this uncertainty, it is more convenient to work in terms of $R_*$, and more precisely \nthe Hubble-scaled mean bubble spacing\n\\begin{equation}\\label{Eq:rstar}\nr_* = \\HN R_*.\n\\end{equation}\nNote that $\\be^{-1}$ is the time taken for a bubble wall to move a distance $R_*$, and therefore has \nan interpretation as the duration of the phase transition. 
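The detonation relation between the transition rate and the bubble spacing can be made concrete with a short sketch (a minimal illustration; the function name and interface are ours, and the $(8\pi)^{1/3}$ prefactor applies only to detonations, as noted above):

```python
import math

def r_star_from_beta(beta_over_H, v_w):
    """Hubble-scaled mean bubble spacing r_* = H_N R_*, inverting
    beta/H_N = (8*pi)**(1/3) * v_w / r_*  (Eq. (B/H) with the detonation
    prefactor).  For deflagrations the prefactor also depends on alpha
    and v_w, which is why the text prefers to work directly with r_*."""
    return (8 * math.pi) ** (1 / 3) * v_w / beta_over_H
```

For example, a detonation with $\vw = 0.9$ and $\beta/\HN \approx 260$ corresponds to $r_* \approx 10^{-2}$.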
\n\nAnother key parameter in the generation of GWs is the phase transition strength parameter $\\al$\n \\begin{equation}{\\label{Eq:alpha}}\n \\alpha=\\left.\\frac{4}{3} \\frac{\\Delta \\theta}{w_{\\mathrm{s}}}\\right|_{T=T_{\\mathrm{n}}}\n \\end{equation}\nwhere $w_\\text{s} $ is the enthalpy of the fluid in the symmetric phase, and $\\Delta \\theta=\\theta_{\\mathrm{s}}-\\theta_{\\mathrm{b}}$, where $\\theta$ is a quarter of the trace of the energy-momentum tensor, \nand subscripts s and b denote symmetric and broken phases. \nThe trace difference is the energy available to be converted to shear stress energy and thus GW power. A stronger transition means more energy is converted to shear stress energy and a larger overall amplitude for the GW signal. \n\nThe fourth parameter is the wall speed, $\\vw$, which (with $\\al$) determines the motion of the surrounding plasma induced by the passing bubble wall. Wall speeds are split into three categories relative to the speed of sound $\\cs$. Deflagrations occur when $\\vw<\\cs$, where the surrounding fluid is pushed in front of the expanding phase transition wall. When $\\vw$ is greater than a certain critical speed $c_\\text{J}$, the Jouguet speed, \nthe motion in the plasma is entirely behind the bubble wall, and the fluid configuration is called a detonation.\nThe Jouguet speed is given by \n\\begin{equation}\\label{Eq:Jouguet}\nc_\\text{J} = c_\\text{s} \\frac{\\left(1 + \\sqrt{\\al(2 + 3\\al) }\\right)}{\\left(1 + \\al\\right)}\n\\end{equation}\nIf the wall speed is between the sound speed and the Jouguet speed, \nthe velocity profile is a mix between deflagrations and detonations, non-zero both in front and behind the bubble wall. \nThese supersonic deflagrations \\cite{KurkiSuonio:1995pp}, sometimes called hybrids \\cite{Espinosa:2010hh}, \nare very finely tuned, and it is not clear that they exist in a real fluid. 
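Equation (\ref{Eq:Jouguet}) and the sound speed divide the wall speeds into the three regimes described above; a minimal sketch (function names are ours; $\cs = 1/\sqrt{3}$ as assumed in this work):

```python
import math

CS = 1 / math.sqrt(3)  # ultrarelativistic sound speed c_s

def jouguet_speed(alpha):
    """Jouguet speed c_J, Eq. (Jouguet), for transition strength alpha."""
    return CS * (1 + math.sqrt(alpha * (2 + 3 * alpha))) / (1 + alpha)

def flow_type(v_w, alpha):
    """Classify the fluid solution by wall speed, as described in the text."""
    if v_w < CS:
        return "deflagration"
    if v_w < jouguet_speed(alpha):
        return "supersonic deflagration (hybrid)"
    return "detonation"
```

For $\al = 0.2$ this gives $c_\text{J} \approx 0.83$, so a wall speed of $0.7$ falls in the supersonic deflagration regime.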
\n\nThe sound speeds in the two phases are also potentially important parameters \\cite{Giese:2020rtr,Giese:2020znk}. \nTo simplify this first analysis, we will take them both to be the ultrarelativistic value $\\cs = 1\/\\sqrt{3}$, \nto focus on LISA's sensitivity to the four parameters ($\\TN$,$\\al$,$r_*$,$\\vw$). \n\\subsection{Gravitational waves from a first order phase transition}\n\nIn a first order transition driven by thermal fluctuations, sound waves created by the expanding bubbles \nare the dominant source of gravitational waves \\cite{Hindmarsh:2013xza,Hindmarsh:2015qta,Hindmarsh:2017gnf}.\n\nApproximate fits to the numerical power spectra are available in \\cite{Hindmarsh:2017gnf}. They have a fixed broken power law shape, with peak intensity and frequency depending on \nthe four thermodynamic parameters in an easily computable way \\cite{Caprini:2019egz}. \nThe peak intensity depends on $\\al$, $r_*$ and $\\vw$, while the peak frequency \ndepends on $\\TN$ and $r_*$. \nIt is clear that there are likely to be degeneracies in the power spectrum with respect to \nthe thermodynamic parameters, which would intrinsically limit \nLISA's ability to measure them individually. \n\nHowever, the simulations make it clear that the shape of the GW power spectrum also \ndepends on wall speed and transition strength, \nand such dependence is found in a more sophisticated theoretical framework, \nthe sound shell model (SSM) \\cite{Hindmarsh:2016lnk,Hindmarsh:2019phv}.\nWe therefore use the sound shell model to model the GW power spectrum from phase transitions,\nand investigate LISA's constraining power on its parameters. \nWhile the sound shell model has not been tested in detail against a wide range of numerical simulations, \nit can act as guidance for data analysis techniques aimed at \nextracting phase transition parameters from phase transitions. 
\n\nTo characterise how the energy density in GWs is distributed over frequencies today we introduce the \ngravitational wave power spectrum \cite{Allen:1997ad}\n \begin{equation}{\label{Eq:GW_en_dens}}\n\Omega_{\text{gw,0}} (f) \equiv \frac{1}{\rho_{\text{c},0}}\frac{d \rho_{\text{gw},0}}{d \ln f},\n\end{equation}\nwhere $f$ is frequency and $d\rho_{\text{gw}}$ is the gravitational wave energy density within a frequency interval $df$. \nThe critical density is $\rho_{\text{c}} = {3H^2}\/{8\pi G}$, where $H$ is the Hubble rate and $G$ is the gravitational constant. Quantities evaluated at the present day are given the subscript 0. For the Hubble constant \n$H_0$ we take the central value measured by the Planck satellite $H_0= 67.4 \, \text{km s}^{-1}\text{Mpc}^{-1}$ as given in \cite{Aghanim:2018eyx}.\n\n The general form of the gravitational wave power spectrum from a first order phase transition \n is\n \begin{equation}\label{Eq:Omgw_ssm}\n \Omega_\text{gw}(z) = 3K^{2}(\vw,\al)\left(\HN \tau_{\mathrm{v}}\right)\left(\HN R_*\right) \frac{z^{3}}{2 \pi^{2}} \tilde P_{\text{gw}}\left(z\right),\n \end{equation}\n where $R_*$ is the mean bubble spacing, \n $z = k R_*$, $k$ is comoving wavenumber and $K(\vw,\al)$ \n is the fraction of the total energy converted into kinetic energy of the fluid. \n The Hubble rate at nucleation is $\HN$, $\tau_v $ is the lifetime of the shear stress source, \nthe factor $R_*$ appears as an estimate of the source coherence time and $\tilde P_{\text{gw}} \left(z\right)$ is the dimensionless spectral density. \nEq.~(\ref{Eq:Omgw_ssm}) can be regarded as the definition of $\tilde P_{\text{gw}}$. \n Its integral (denoted $\tilde\Omega_\text{gw}$ in Refs.~\cite{Hindmarsh:2015qta,Hindmarsh:2017gnf,Hindmarsh:2019phv}) \ndepends only weakly on the thermodynamic parameters, taking values of order $10^{-2}$. 
\n\n\nAs the notation of Eq.~(\ref{Eq:Omgw_ssm}) suggests, the important parametric dependences of the total power are \nthrough the kinetic energy fraction, the source lifetime, and the source coherence time.\nThe kinetic energy fraction depends only on the transition strength $\alpha$ and the wall speed $v_w$.\nThe lifetime of the GW source $\tau_v$ is the shorter of two timescales: \nthe Hubble time $\HN^{-1}$ and the fluid flow lifetime, \nwhich is estimated as $R_* \/ \sqrt{K}$, the timescale for \nnon-linearities to become important.\nDenoting the ratio of the two timescales by $x = \HN R_* \/ \sqrt{K} $, \nwe approximate the Hubble-scaled source lifetime as \cite{Guo:2020grp}\n \begin{equation}{\label{Eq:Hntv}}\n\HN \tau_v \simeq \left(1 - \frac{1}{\sqrt{1 + 2x}} \right).\n \end{equation}\nFrom this we see that even if the flow persists over many Hubble times it does not \ncontinue to contribute to the GW power spectrum. \nFor future convenience we will combine the factors of the source lifetime and source coherence time into one, \n\begin{equation}\label{Eq:scaling_factors_GW}\nJ = \left(\HN R_*\right) \left(\HN \tau_v\right) = r_* \left(1 - \frac{1}{\sqrt{1 + 2x}} \right).\n\end{equation}\n\n\nThe sound shell model \cite{Hindmarsh:2016lnk,Hindmarsh:2019phv} predicts the gravitational wave power spectrum as \na numerical function of a given set of thermodynamic parameters ($\TN$, $\alpha$, $r_* $, $\vw$) and \nscaled wavenumbers $z$. We denote this prediction $\Omega_\text{gw}^\text{ssm}(z)$. \nThe shape of the power spectrum has significant dependencies on $\vw$ and $\alpha$.\n \nRecent 3D hydrodynamic simulations for $\al$ up to $\mathcal{O}$(1) (strong transitions) found that as transition strength increases the efficiency of fluid kinetic energy production becomes less than previously expected \cite{Cutting:2019zws}. 
For deflagrations this is thought to be due to reheating which occurs in front of the expanding bubbles, which leads to a reduction in pressure difference, and a slowing of the bubble wall. The reduction in kinetic energy production leads to a suppression in gravitational wave power, which we approximate by a factor $\Sigma(\vw,\al)$. The estimation of this suppression factor from the numerical simulations is described in Appendix \ref{sec:suppression_factor}. \n\nThe gravitational wave power spectrum at dimensionless comoving wavenumber $z$ just after the transition, and before \nany further entropy production, is then\n\begin{equation}\label{Eq:SSM_suppressed}\n\Omega_{\text{gw}}(z) = \Omega_\text{gw}^\text{ssm}(z)\Sigma(\vw,\al),\n\end{equation}\nwhere $ \Omega_\text{gw}^\text{ssm}(z)$ is the sound shell model prediction.\n\nToday the power spectrum at physical frequency $f$ is \n\begin{equation}{\label{Eq:Omgw0_sup}}\n \Omega_{\text{gw,0}}(f) =F_{\text{gw,0}} \Omega_{\text{gw}}(z(f)),\n\end{equation}\nwhere \n\begin{equation}{\label{Eq:Fgw0_def}}\nF_{\text{gw,0}}=\Omega_{\gamma, 0}\left(\frac{g_{s 0}}{g_{s *}}\right)^{\frac{4}{3}} \frac{g_{*}}{g_{0}} = (3.57 \pm 0.05) \times 10^{-5} {\bigg( \frac{100}{g_*}\bigg)}^{\frac{1}{3}} ,\n\end{equation}\nis the power attenuation following the end of the radiation era. \nHere, $\Omega_{\gamma, 0}$ is the photon energy density parameter today, $g_{s}$ denotes entropic degrees of freedom and $g$ describes the pressure degrees of freedom. \nIn both cases the subscripts $0$ and $*$ refer to their value today and the value at the time the GWs were produced, respectively. \nWe evaluate $F_{\text{gw,0}}$ with the values given in \cite{Caprini:2019egz}, and use a reference value $g_* = 100 $. 
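The combined timescale factor $J$ of Eq.~(\ref{Eq:scaling_factors_GW}) above is simple to evaluate; a minimal sketch (the function name is ours):

```python
import math

def lifetime_factor(r_star, K):
    """J = (H_N R_*)(H_N tau_v) = r_* (1 - 1/sqrt(1 + 2x)),
    Eq. (scaling_factors_GW), with x = H_N R_* / sqrt(K) the ratio of
    the fluid flow lifetime R_*/sqrt(K) to the Hubble time 1/H_N.
    K is the kinetic energy fraction of the fluid."""
    x = r_star / math.sqrt(K)
    return r_star * (1 - 1 / math.sqrt(1 + 2 * x))
```

Since the bracket is always below one, $J < r_*$: a long-lived flow saturates at $J \to r_*$ rather than growing indefinitely, which is the statement above that persistence beyond a Hubble time does not add GW power.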
\n\nWe convert from dimensionless wavenumber $z$ to frequency today by taking into account redshift\n\\begin{equation}\n\\label{e:fzrstar}\nf =\\frac{z }{r_*} f_{*,0}\n\\end{equation}\nwhere \\cite{Caprini:2019egz}\n\\begin{equation} {\\label{Eq:f0} }\nf_{*,0}= 2.6 \\times 10^{-6} \\,\\textrm{Hz} \\left(\\frac{\\TN}{100\\,\\textrm{GeV}}\\right)\\left(\\frac{g_*}{100}\\right)^{\\frac{1}{6}}\n\\end{equation}\nis the Hubble rate at the phase transition redshifted to today. \nWe assume the phase transition takes place well within one Hubble time so all frequencies throughout the transition have the same redshift. \n\n\\begin{figure}[b!]\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Vary_vw_alpha_0_2_rs_0_1_T_100.pdf}\n \\caption{\\label{Fig: vary_vw} Fixed: $\\al =0.2, \\quad r_* =0.1, \\quad \\TN=100 $ GeV. }\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Vary_alpha_vw_0_6_rs_0_1_T_100.pdf}\n \\caption{\\label{Fig: vary_al} Fixed: $\\vw = 0.6, \\quad r_* =0.1, \\quad \\TN=100$ GeV. }\n \\end{subfigure}\n\n \\medskip\n\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Vary_rs_vw_0_6_alpha_0_2_T_100.pdf}\n \\caption{\\label{Fig: vary_rs}Fixed: $\\vw = 0.6, \\quad \\al =0.2, \\quad \\TN=100 $ GeV. }\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{Vary_T_vw_0_6_alpha_0_2_rs_0_1_T_100.pdf}\n \\caption{\\label{Fig: vary_Tn}Fixed: $\\vw = 0.6, \\quad \\al =0.2, \\quad r_* =0.1$. }\n \\end{subfigure}\n \\caption{{\\label{Fig:parameter_vary}} Gravitational wave power spectra for a first order phase transition calculated using the sound shell model, Eq.(\\ref{Eq:Omgw0_sup}). 
In each panel we vary one of the thermodynamic parameters $\vw$ (wall speed), $\al$ (phase transition strength), $r_*$ (Hubble-scaled bubble spacing) and $\TN$ (nucleation temperature). Shown also in solid black is the LISA instrument noise given by the science requirements (SR) document sensitivity curve (Eq.~(\ref{Eq:OmInst}), \cite{LISA_SR_doc}). The dashed line shows the predicted foreground from extragalactic binaries, Eq.~(\ref{Eq:Om_LV}), along with a grey uncertainty band. The dash-dotted line shows the estimated foreground from unresolved galactic binaries, Eq.~(\ref{Eq:Om_gb}).\nSignal-to-noise ratios for $\TN = 100$ GeV and $r_* = 0.1, 0.01$ are given in Fig.~\ref{fig:SNRs}. \n}\n\end{figure}\n\nWe compute the scale-free gravitational wave spectral density\n\begin{equation}{\label{Eq:power_gw_scaled_output}}\n\hat{\mathcal{P}}_{\text{gw}} (z)= 3K^{2} \frac{z^{3}}{2 \pi^{2}} \tilde P_{\text{gw}} \left(z\right) , \n\end{equation}\nusing the PTtools python module \cite{Hindmarsh:2019phv}. \n In this work we evaluated the power spectra at $100$ logarithmically spaced $z$ values between $1$ and $1000$.\n The number of points used in the fluid shell profiles was $70000$, with $7000$ wavevectors used in the velocity convolution integrations. The bubble lifetime distribution, taken to be exponential, was integrated with $200$ linearly spaced values between $0$ and $20\beta^{-1}$. \nThe high wavenumber resolution was used to ensure the integration over the velocity power spectrum converges. 
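The mapping to observed frequencies, Eqs.~(\ref{e:fzrstar}) and (\ref{Eq:f0}), and the attenuation factor of Eq.~(\ref{Eq:Fgw0_def}) can be sketched as follows (central values only; the function names are ours, not part of PTtools):

```python
def f_star_0(T_N_GeV, g_star=100.0):
    """Hubble frequency at nucleation redshifted to today, Eq. (f0), in Hz."""
    return 2.6e-6 * (T_N_GeV / 100.0) * (g_star / 100.0) ** (1 / 6)

def f_today(z, r_star, T_N_GeV, g_star=100.0):
    """Frequency today of dimensionless wavenumber z = k R_*, Eq. (fzrstar)."""
    return (z / r_star) * f_star_0(T_N_GeV, g_star)

def F_gw_0(g_star=100.0):
    """Central value of the power attenuation factor, Eq. (Fgw0_def)."""
    return 3.57e-5 * (100.0 / g_star) ** (1 / 3)
```

For example, a peak at $z_\text{p} = 10$ with $r_* = 0.1$ and $\TN = 100$ GeV lands at $f \approx 0.26$ mHz, within the LISA band.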
\nAs mentioned in the introduction, we explore the prospects for estimation in the parameter space $0.4 < \vw < 0.9$, $\alpha < 0.5$, $r_* = 0.01,0.1$ and $\TN = 100$ GeV.\n\nWe show, using this framework to calculate the GW power spectra, how varying the thermodynamic parameters affects the shape, frequency scales and amplitudes of the power spectrum in Fig.~\ref{Fig:parameter_vary}.\n\nFrom Fig.~\ref{Fig: vary_vw} we see the wall speed $\vw$ has a strong effect on the shape of the power spectrum, \nespecially between the sound speed and the Jouguet speed. \nAt low $\vw$ the power spectrum is narrow and as $\vw$ approaches the speed of sound the peak broadens, due to the \nnarrowing of the sound shell around expanding bubbles. \nOnce $\vw>\cs$ the peak begins to narrow again. As $\vw$ increases we also see a decrease in overall amplitude, \nbecause the efficiency of converting \nlatent heat into fluid motion depends on $\vw$. \n\nAs the strength of the phase transition, $\al$, increases so does the overall amplitude of the GW power spectrum, as more kinetic energy is deposited into the plasma (see Fig.~\ref{Fig: vary_al}). In Fig.~\ref{Fig: vary_rs} we note that $r_*$ contributes both to the frequency scale and overall amplitude of the power spectrum. In Fig.~\ref{Fig: vary_Tn} we see that the nucleation temperature $\TN$ affects only the frequency scale, see Eq.~(\ref{Eq:f0}). \n\n\nWe note that there is more structure in these power spectra than can be captured by a broken power law, motivating \nan improved approximation in the next section. \nThe precise functional dependence on the thermodynamic parameters is likely to change as our understanding improves, \nbut our analysis can be easily adapted to include future developments. \nWe believe that the double broken power law form is \nlikely to remain adequate. 
\n\n\n\n\n\\subsection{Double broken power law }\n\\label{ssec:cosmo_dbl_brkn}\nThe full calculation in the SSM can be computationally intensive when one is calculating many power spectra over a large parameter space. This motivates the use of an analytic fit that that can be used for rapid evaluation. The LISA Cosmology Working Group \nput forward a single broken power law to describe the acoustic contribution to the GW power spectrum, with two parameters, the peak amplitude $\\Omega_\\text{p}$ and the peak frequency $f_\\text{p}$, whose scale is set by the bubble spacing $R_*$ \\cite{Caprini:2019egz}.\n\nIn the SSM there are in fact two characteristic length scales, $R_*$ and the width of the sound shell $\\Delta\\Rstar \\sim |\\vw - \\cs|\/\\be$, which indicate a double broken power law may be a good fit for the power spectrum \\cite{Hindmarsh:2019phv}. A general form for such a double broken power law can be defined by four spectral parameters $(\\Omega_\\text{p}, f_\\text{p}, \\rb, b)$, with the power spectrum taking the form \n\\begin{equation}{\\label{Eq:omgw_dbl_brkn}}\n \\Omega_\\text{gw}^\\text{fit} =F_{\\text{gw,0}}\\Omega_\\text{p} M(s,\\rb, b)\n\\end{equation}\nwhere $\\Omega_\\text{p}$ is the peak power of the power spectrum, $s = f\/f_\\text{p}$, $f_\\text{p}$ is the frequency corresponding to $\\Omega_\\text{p}$ and $\\rb = f_{\\text{b}} \/f_\\text{p}$ describes the ratio between the two breaks in the spectrum. The parameter $b$ defines the spectral slope between the two breaks. The spectral shape $M(s,\\rb, b)$ is a double broken power law with a spectral slope $9$ at low frequencies and $-4$ at high frequencies. 
\n\\begin{equation}{\\label{Eq: M double_break}}\n M ( s, \\rb , b ) = s^ { 9 } {\\left( \\frac { 1 + \\rb^4 } { \\rb^4 + s^4}\\right)}^{(9 -b)\/4} \\left( \\frac { b +4 } { b + 4 - m + m s ^ { 2 } } \\right) ^ { (b +4) \/ 2 } .\n\\end{equation}\nIn this function, $m$ has been chosen to ensure that for $\\rb<1$ the peak occurs at $s=1$ and $M(1,\\rb,b) = 1$, giving \n\\begin{equation}{\\label{Eq: m}}\n m = \\left( 9 {\\rb}^4+ b\\right) \/ \\left( {\\rb}^4 +1 \\right).\n\\end{equation}\n\nUltimately, we want to connect these spectral parameters quantitatively with the thermodynamic parameters in order to understand the underlying theory, however these relationships are not straightforward. An outline of how the spectral parameters depend on the thermodynamic parameters is as follows \n\\begin{equation} {\\label{Eq:spectral_parameters_thermo}}\n\\begin{aligned}\n\\Omega_\\text{p,0} &= F_{\\text{gw,0}} J\\left(r_{*}, K\\left(\\alpha, \\vw\\right)\\right) \\hat{\\Omega}_{\\text{p}}\\left(\\alpha, v_{w}\\right) \\Sigma_\\text{ssm}(\\vw,\\al)\\\\\nf_\\text{p,0} &= f_{*,0}(\\TN)z_{\\text{p}}\\left(\\alpha, \\vw \\right)\/r_* \\\\\n\\rb &=\\rb\\left(\\alpha, \\vw\\right)\\\\\nb &=b \\left(\\alpha, \\vw\\right),\n\\end{aligned} \n\\end{equation} \nwhere $\\hat{\\Omega}_{\\text{p}}$ is the maximum of $\\Omega_\\text{gw}^\\text{ssm}(z)$, \n$\\zp$ is $z$ at the peak power and $\\zb$ is the scale-free wavenumber at the second break. \n$J$ is the timescale pre-factor defined in Eq.~(\\ref{Eq:scaling_factors_GW}).\n\n In Fig.~\\ref{fig:omp_hat_zp_hat_SSM} we show the peak power today $\\Omega_{\\text{p,0}}^{\\text{ssm}}$ and the corresponding peak frequency $f_{\\text{p,0}}^{\\text{ssm}}$ calculated in the SSM, using Eq.(\\ref{Eq:Omgw0_sup}), for the $\\vw$ and $\\al$ parameter space of interest, with $r_* =0.1$ and $\\TN = 100$ GeV. Fig.~\\ref{fig:4_param_contours} shows the corresponding best fit spectral parameters for the double broken power law model. 
\n\n\\begin{figure}[h!]\n\\centering \n\\includegraphics[width=0.45\\textwidth]{4_param_model_Omp0ssmContours.pdf}\n\\includegraphics[width=0.45\\textwidth]{4_param_model_fp0_ssmContours.pdf}\n\\caption{ \\label{fig:omp_hat_zp_hat_SSM} The peak power today $\\Omega_{\\text{p,0}}^{\\text{ssm}}$ and the peak frequency today $f_{\\text{p,0}}^{\\text{ssm}}$ caluclated with the sound shell model, for a range of wall speeds, $\\vw$, and phase transition strengths,$\\al$, The Hubble-scaled mean bubble spacing $r_* = 0.1$ and nucleation temperature $\\TN$. The turquoise dashed line shows the Jouguet speed Eq.~(\\ref{Eq:Jouguet}).}\n\\end{figure}\n\\begin{figure}[h!]\n\\centering \n\\includegraphics[width=0.45\\textwidth]{4_param_model_Omp0Contours.pdf}\n\\includegraphics[width=0.45\\textwidth]{4_param_model_fp0Contours.pdf}\n\\includegraphics[width=0.45\\textwidth]{4_param_model_rbContours.pdf}\n\\includegraphics[width=0.45\\textwidth]{4_param_model_bContours.pdf}\n\n\\caption{\\label{fig:4_param_contours} The best fit spectral parameters from fitting the double broken power law model to the power spectra from the sound shell model Eq.~(\\ref{Eq:Omgw0_sup}). $\\Omega_{\\text{p,0}}$ is the peak power today with the Hubble-scaled mean bubble spacing $r_* = 0.1$ and $\\Tn = 100$ GeV, $f_\\text{p,0}$ is the corresponding position of the peak (scaled using Eq.(\\ref{Eq:spectral_parameters_thermo})), $\\rb$ is the ratio of the frequency positions of the two breaks in the spectrum and $b$ the spectral slope of the power law between the two breaks. 
The turquoise dashed line is the Jouguet speed, Eq.~(\ref{Eq:Jouguet}).}\n\end{figure}\n\n\nA comparison of the quality of the fit of the single broken power law and double broken power law models to the GW power spectrum from a first order phase transition, as described by the sound shell model, can be found in Appendix \ref{sec:fit_compare}.\n\section{Noise model}\n\label{sec:noise}\n\subsection{LISA sensitivity curve}\nThe sensitivity of a gravitational wave detector can be characterised by the effective noise power spectral density\footnote{In this paper we consider two-sided power spectral densities, meaning the frequencies range from $- f_{\text{max}}$ to $+f_{\text{max}}$.} (PSD) $S(f)$, which is the gravitational strain \nspectral density required to produce a signal equal to the instrument noise $N(f)$.\nIf $\mathcal{R}(f)$ is the detector response function for gravitational waves, \n\begin{equation}{\label{Eq:S_def}}\n S(f) \equiv \frac{N(f)}{\mathcal{R}(f)} .\n\end{equation}\n\nLISA \cite{Audley:2017drz} is designed to be a triangular constellation of spacecraft connected by three pairs of laser links, through which changes in the distance between three pairs of free-falling test \nmasses can be measured. The changes in distance are monitored through the differences in phase between the local oscillators and the remote spacecraft oscillators, communicated by the laser. The phase differences can be combined in different ways with different time delays \nto eliminate the laser noise \cite{Tinto:2001ii,Estabrook:2000ef}, by using the technique of time delay interferometry (TDI).\n\nWe work with the three noise-orthogonal TDI variables $\mit{A}$, $\mit{E}$ and $\mit{T}$ as described in \cite{Tinto:2002de,Prince:2002hp}. \nThe $\mit{T}$ channel is insensitive to GWs at low frequencies. We will make the simplifying assumption that the $\mit{T}$ channel allows us to completely characterise the instrument noise. 
\n\nThe instrument noise in LISA is expected to be dominated by two main sources: the test mass acceleration noise (acc), due to local disturbances of the test mass,\nand the optical metrology noise (oms) which includes shot noise. \nAs outlined in the LISA Science Requirements Document \cite{LISA_SR_doc} the target for the single link optical path-length fluctuations is\n\begin{equation}{\label{Eq:P_oms}}\nP_{\text{oms}}(f)=\left(\frac{1.5 \times 10^{-11} \mathrm{m}}{L}\right)^{2}\mathrm{Hz}^{-1},\n\end{equation}\nwhere $L = 2.5\times10^8\;\text{m}$ is the constellation arm length. \nThe single test mass acceleration noise target is \n\begin{equation}{\label{Eq:P_acc}} \nP_{\text{acc}}(f) = \left(\frac{3 \times 10^{-15} \,\text{m} \,\text{s}^{-2}}{(2 \pi f )^2L}\right)^{2}\left(1+\left(\frac{0.4 \mathrm{mHz}}{f}\right)^{2}\right) \mathrm{Hz}^{-1}.\n\end{equation}\nIn the $\mit{A}$ and $\mit{E}$ channels the instrument noise is then (see e.g. \cite{Smith:2019wny})\n\begin{equation}\label{Eq:Na}\nN_{\A} = N_{\E} = \left[\left(4+2 \cos \left(f \/ \ft \right)\right) P_{\text{oms}}+8\left(1+\cos \left(f \/ \ft\right)+\cos ^{2}\left(f \/ \ft\right)\right) P_{\text{acc}}\right]|W|^{2},\n\end{equation}\nwhere $\ft = c\/(2 \pi L)$ is the transfer frequency and $c$ is the speed of light.\nThe function $W (f,\ft)= 1 - \exp(-2if\/\ft) $ is the modulation caused by one round trip of a signal along a link. \nWe use a simplified version of Eq.~(\ref{Eq:Na}) with $\cos\left(f \/ \ft\right) = 1 $, \n\begin{equation} \label{Eq:Na_simp}\nN_{\A}(f) \simeq \left( 6 P_{\text{oms}}(f) + 24P_{\text{acc}}(f)\right)|W(f)|^{2} ,\n\end{equation} \nwhich gives a reasonable fit to the true sensitivity curve. 
\n\nThe gravitational wave response function for the $\\mit{A}$ and $\\mit{E}$ channels is known only numerically,\\footnote{Recently, an analytic expression for the \nresponse function in the TDI $X$ channel has been derived \\cite{Zhang:2020khm}, but \nthe $\\mit{A}$ and $\\mit{E}$ channels also require the response function of the $XY$ cross-correlation. \n} \nbut an approximate fit is \n\\begin{equation}\\label{Eq:Ra_fit} \n\\mathcal{R}_{\\mit{A}}^{\\mathrm{Fit}} =\\mathcal{R}_{\\mit{E}}^{\\mathrm{Fit}}\\simeq \\frac{9}{20}|W|^{2}\\left[1+\\left(\\frac{f}{4 \\ft\/3}\\right)^{2}\\right]^{-1}.\n\\end{equation}\n\nWe can now construct the approximate noise power spectral density for the $\\mit{A}$ and $\\mit{E}$ channels using Eqs.~(\\ref{Eq:S_def}), (\\ref{Eq:Na_simp}) and (\\ref{Eq:Ra_fit}):\n\\begin{equation}\\label{Eq:SA}\nS_{\\A}= S_{\\E} =\\frac{N_{\\A}}{\\mathcal{R}_{\\mit{A}}} \\simeq \\frac{40}{3}\\left(P_{\\text{oms}} + 4P_{\\text{acc}} \\right)\\left[1+\\left(\\frac{f}{4 \\ft\/ 3}\\right)^{2}\\right].\n\\end{equation} \nIn this work we will be interested in the sensitivity to the GW fractional energy density power spectrum, which is related to the PSD by \n\\begin{equation}\\label{Eq:OmInst}\n\\OmInst =\\left(\\frac{4 \\pi^{2}}{3 H_{0}^{2}}\\right) f^{3} S_{\\A}(f) ,\n\\end{equation} \nwhich we will refer to as the LISA instrument noise.\n\n\\subsection{Extragalactic compact binaries}\n\nA stochastic gravitational wave background (SGWB) from a superposition of unresolved extragalactic compact binaries is expected in the millihertz GW frequency band \\cite{Regimbau:2011rp}. This signature is expected to be stationary, Gaussian and isotropic, distinguishable from cosmological signatures, such as a SGWB from a first order phase transition, only by its frequency spectrum. \nIt is composed of signals from stellar origin black hole binaries, neutron star binaries, \nand white dwarf binaries. 
\nThese objects include precursors to compact binary mergers seen by the LIGO-Virgo collaboration \\cite{LIGOScientific:2019vic}. \nWe will refer to this as the extragalactic binary foreground (eb), \nwhich has the GW power spectrum \n\\begin{equation}\\label{Eq:Om_LV}\n\\Omeb(f) = \\Omega_\\text{ref,eb}\\left(\\frac{f}{f_\\text{ref,eb}}\\right)^{\\frac{2}{3}} .\n\\end{equation}\nWe will assume that it is dominated by stellar origin black hole binaries, and take\n$\\Omega_{\\textrm{ref,eb}}$ to be the energy density of the LIGO-Virgo compact binaries at the reference frequency $f_{\\textrm{ref,eb}} = 25$ Hz. \nThe current estimate is $\\Omega_{\\textrm{ref,eb}} = 8.9 ^{+12.6}_{-5.6} \\times 10^{-10}$ \\cite{LIGOScientific:2019vic}. \nIt is well below the instrument noise, and therefore not a significant contributor to the overall noise\nrelevant for stochastic backgrounds. This foreground is shown in Fig.~\\ref{Fig:parameter_vary}. \nThe contribution to the \namplitude $\\Omega_{\\textrm{ref,eb}}$ from black hole and neutron star binaries\nwill be more accurately measured by LIGO\/Virgo once it reaches design \nsensitivity \\cite{Martynov:2016fzi,TheLIGOScientific:2016wyq}, \nand by future ground-based detectors that may be online at a similar time to LISA \\cite{Maggiore:2019uih,Sathyaprakash:2012jk}. \n\n\\subsection{Unresolved galactic compact binaries}\nA significant noise source for LISA is due to the large number of white dwarf binaries \nlocated within our galaxy \\cite{Bender:1997hs,Evans:1987qa}. \nSome loud binaries will be individually resolvable, and as the mission progresses more will be identified. At any mission stage, \nunresolved binaries will produce a confusion noise, which can be estimated using an iterative subtraction procedure outlined in \\cite{Cornish:2017vip}. 
\nAfter a 4-year mission, estimates suggest around $20,000$ of the estimated $20$ million galactic binaries (gb) will be resolved, \nleaving a foreground\\footnote{In this work we use the correction to the sign of the coefficient $b$ given in \\cite{Schmitz:2020rag}.} \n\\begin{equation}\\label{Eq:S_gb}\n S_{\\text{gb}}(f) = A \\left(\\frac{f}{1\\,\\textrm{mHz}}\\right)^{-7\/3} \\exp \\left( - \\left(\\frac{f}{f_{\\text{ref,gb}}}\\right)^{a} - b f \\sin ( c f ) \\right) \\left[1 + \\tanh \\left( d \\left( f_{k} - f \\right) \\right)\\right],\n\\end{equation}\nwhere $A = 9 \\times 10^{-38}$ $\\textrm{mHz}^{-1}$ and $f_{\\text{ref,gb}} =1000$ mHz. \n The parameters $a$, $b$, $c$, $d$ and $f_k$ depend on the observation time: for a 4-year observation period, $a = 0.138$, $b = -0.221\\,\\text{mHz}^{-1}$, $c = 0.521\\,\\text{mHz}^{-1}$, $d = 1.680\\,\\text{mHz}^{-1}$, and the frequency of the knee of the power spectrum is $f_k = 1.13$ mHz. $S_{\\text{gb}}$ can be expressed in terms of energy density,\n \\begin{equation}\\label{Eq:Om_gb}\n\\Omgb = \\left(\\frac{4 \\pi^{2}}{3 H_{0}^{2}}\\right) f^{3} S_{\\textrm{gb}}(f) ,\n\\end{equation}\nwhich we will refer to as the galactic binary foreground, and show in Fig.~\\ref{Fig:parameter_vary}. There is potential for this foreground to be extracted separately, due to the annual modulation in the signal as LISA's direction of maximum sensitivity sweeps past the galactic plane \\cite{Adams:2013qma}.\nIf no attempt to remove the annually modulated stochastic signals is made, galactic binaries will be a significant source of noise around 1 mHz.\nWe will consider parameter sensitivity both with and without the galactic binary foreground, to estimate upper and lower bounds. \n\nIn addition to the foregrounds considered here there are a number of other sources that may need to be considered when trying to extract a stochastic GW background from a first order phase transition. 
These include confusion noise from unresolved extreme mass ratio inspirals \\cite{Bonetti:2020jku} and a foreground from unresolved massive black hole binaries \\cite{Sesana:2004gf}. \nIn addition, extragalactic white dwarf binaries could contribute significantly to the compact binary foreground \\cite{Farmer:2003pa}. \nWe choose to leave the inclusion of these foregrounds for future work: current models are not as well characterised, \nand, at least in the case of massive black hole binaries, are expected to be less significant. \n\n\\subsection{Signal-to-noise ratio}\n\\begin{figure}[ht!]\n\\centering \n\\includegraphics[width=0.8\\textwidth]{SNRs_Thermo_params_fixed_T100_KE_sup__rs_0_01without_and_with_GB.pdf}\n\\includegraphics[width=0.8\\textwidth]{SNRs_Thermo_params_fixed_T100_KE_sup__rs_0_1without_and_with_GB.pdf}\n\\caption{\\label{fig:SNRs} The signal-to-noise ratio $\\rho$ for different combinations of the wall speed $\\vw$, phase transition strength $\\al$ and Hubble-scaled mean bubble spacing $r_*$, with the nucleation temperature $\\TN = 100$ GeV. \nIn the left column the noise model includes the LISA instrument noise, Eq.~(\\ref{Eq:OmInst}), and the foreground from unresolved stellar origin black hole binaries, Eq.~(\\ref{Eq:Om_LV}). In the right-hand column we also include the unresolved galactic binary foreground, Eq.~(\\ref{Eq:Om_gb}). The turquoise dashed line shows the Jouguet detonation speed, the minimum speed of a detonation for each $\\al$, given in Eq.~(\\ref{Eq:Jouguet}). 
}\n\\end{figure}\n\nAs a first assessment of whether a signal is observable or not, one can calculate the signal-to-noise ratio $\\rho$ by comparing the signal $\\Omega_\\text{gw}$ with the noise model $\\Omega_\\text{n}$ \\cite{Allen:1997ad,Maggiore:1999vm}:\n\\begin{equation}\\label{Eq:SNR}\n\\rho = \\sqrt{T_{\\text{obs}}\\int^{f_{\\text{max}}}_{f_{\\text{min}}} df \\frac{\\left(h^2\\Omega_{\\text{gw,0}}\\right)^{2}}{\\left(h^2\\Omega_\\text{n}\\right)^{2}}},\n\\end{equation}\nwhere $\\Omega_\\text{n}$ is the sum of all sources of noise. Our base noise model consists of the LISA instrument noise as given in Eq.~(\\ref{Eq:OmInst}), the extragalactic background Eq.~(\\ref{Eq:Om_LV}) and the galactic binaries Eq.~(\\ref{Eq:Om_gb}), so that \n\\begin{equation}\\label{Eq:base_noise}\n\\Omega_\\text{n} = \\OmInst + \\Omeb + \\Omgb.\n\\end{equation}\nIn this work we take the observation time $T_{\\text{obs}} = 4$ years, which is LISA's design mission lifetime \\cite{Audley:2017drz}, so that we can use our noise model in combination with the prediction of the galactic binary foreground given in \\cite{Cornish:2017vip}. \nLISA's science operational time is expected to be $\\approx 75\\%$ of the total mission lifetime, but the mission may last up to 10 years. \n\nIn Fig.~\\ref{fig:SNRs}, we calculate $\\rho$ for the thermodynamic parameter space explored in this paper. The power spectra are described by Eq.~(\\ref{Eq:Omgw0_sup}). Generally, $\\rho$ is larger for stronger phase transitions, corresponding to larger $\\al$, and the sensitivity to the wall speed $\\vw$ peaks in the region of the speed of sound $\\cs$. \n\n\\section{Fisher matrix analysis}\n\\label{sec:fm}\n\nAn estimate of LISA's sensitivity to parameters that describe a first order phase transition can be obtained by Fisher matrix (FM) analysis. The FM is \nthe curvature of a Gaussian approximation to the posterior likelihood around the maximum. 
\nThe inverse of the FM is the covariance matrix, the diagonal elements of which give an approximation of the uncertainty in any given parameter. \n\n\\subsection{LISA likelihood model}\n\nHere we outline how we model the LISA data, explain the assumptions made, and define the likelihood used. \nWe model the LISA data as an unbroken $T_{\\text{obs}} = 4$ yr stream with a regular data sampling interval $ T_{\\text{samp}}=5$ s. \nThe frequency domain gravitational wave strain amplitude $h(f)$ is the discrete Fourier transform of the strain time series $h(t)$:\n\\begin{equation}\\label{Eq:hf} \nh(f_n) = \n\\frac{1}{\\sqrt{N}}\\sum_{m = 0}^{N-1} h(t_m) \\exp{\\left(- 2 \\pi i f_n t_m \\right )},\n\\end{equation}\nwhere $t_m = m T_{\\text{samp}}$ and $f_n = n \/T_{\\text{obs}} $, \nwith $-N\/2 < n \\le N\/2$, and $N = T_{\\text{obs}}\/ T_{\\text{samp}}$. \nThe strain amplitude is related to the gravitational wave power spectrum by \n\\begin{equation}\n\\Omega_{\\text{gw,0}}(f_n) = \\left(\\frac{4 \\pi^{2}}{3 H_{0}^{2}}\\right) f_n^{3} {|h(f_n)|^2}.\n\\end{equation}\nWe consider the time series of $A$ and $E$ TDI channels, which in the frequency domain have \npower spectral densities $D^{\\A}_n =|\\mit{A}(f_n)|^2$ and $D^{\\E}_n = |\\mit{E}(f_n)|^2$. The variances of the Fourier amplitudes in the $A$ and $E$ channels are taken to be identical and independent, written $\\Sn$, and depend on a vector of model parameters $\\vec{\\theta}$. \n\nAs $N \\simeq 2\\times 10^7$, it saves computation time to group the data by frequency binning, which is a crude way \nof grouping the data, but sufficient for the level of analysis we carry out.\nWe split the frequency range into a set of $N_\\text{b}$ logarithmically spaced positive frequency bins, with\nbin boundaries $\\fbin$, where $b $ ranges from $0$ to $N_\\text{b}$. 
In each bin there are \n\\begin{equation}\\label{Eq:nb}\nn_{b} = \\left[ (f_{b } - f_{b-1}) T_{\\text{obs}} \\right]\n\\end{equation}\ndifferent frequencies, where the square brackets denote the integer part. \n\n\nFirst considering $D^{\\A}_n$, we define $\\bar{D}^{\\A}_b$ to be the weighted mean value of $D^{\\mit{A}}_n$ in bin $b$, so that \n\\begin{equation}\n\\bar{D}^{\\A}_b = \\frac{S_b}{n_b} \\sum_{n \\in I_b} \\frac{D^{\\A}_n}{S_{n}} ,\n\\end{equation}\nwith \n\\begin{equation} \n\\frac{1}{S_{b}} = \\frac{1}{n_b} \\sum_{n \\in I_b} \\frac{1}{S_{n}} ,\n\\end{equation} \nwhere $I_b$ is the set of integers $n$ such that $|f_n|$ is in frequency bin $b$, and $S_n$ is the mean value of $D^{\\mit{A}}_n$.\n\nAs $\\bar{D}^{\\A}_b$ is the average of squares of $2n_b$ normally distributed real random variables, \nthe likelihood is a chi-squared distribution\\footnote{In this paper we echo the notation used in \\cite{Pieroni:2020rob}.}\n\\begin{equation}\np\\left(\\bar{D}^{\\A}_b \\mid S_{b}\\right)=\\prod_{b=1}^{N_{b}} \\frac{1}{\\left(n_{b}-1\\right) !} \\frac{n_{b}}{S_{b}}\\left(n_{b} \\frac{\\bar{D}^{\\A}_b}{S_{b}}\\right)^{n_{b}-1} \\exp \\left(-n_{b} \\frac{\\bar{D}^{\\A}_b}{S_{b}}\\right).\n\\end{equation}\nIt will be convenient to approximate the likelihood with a Gaussian distribution, \nusing the central limit theorem with the assumption $n_b \\gg 1$ for all bins. 
The distribution \nfor $\\bar{D}^{\\A}_b$ has mean $S_{b}$ and \nvariance $ S_{b}^2\/n_{b} $, giving the Gaussian approximation \n\n\\begin{equation}\\label{Eq:Likeli_CLT} \np(\\bar{D}^{\\A}_b| S_{b}) = \\prod_{b=1}^{N_\\text{b}} \\left(\\frac{n_{b}}{2\\pi S_{b}^2}\\right)^{\\frac{1}{2}} \\exp{\\left(-\\frac{1}{2}\\frac{n_{b}\\left(\\bar{D}^{\\A}_b - S_{b}\\right)^{2}}{S_{b}^2}\\right)}.\n\\end{equation} \nAs the $\\mit{A}$ and $\\mit{E}$ channels are uncorrelated and are assumed, to a first approximation, to have identical noise, \nwe can combine them into an average variable $\\bar{D}_{b} = \\left(\\bar{D}^{\\A}_b +\\bar{D}^{\\E}_b \\right)\/2$ with variance $S_{b}^2\/2n_{b}$.\nThe likelihood for the binned average spectral density $\\bar{D}_{b}$ is then \n\\begin{equation}\\label{Eq:Likeli_CLT_A_E} \np(\\bar{D}_{b}| S_{b}) = \\prod_{b=1}^{N_\\text{b}} \\left(\\frac{2n_{b}}{2\\pi S_{b}^2}\\right)^{\\frac{1}{2}} \\exp{\\left(-\\frac{1}{2}\\frac{2n_{b}\\left(\\bar{D}_{b} - S_{b}\\right)^{2}}{S_{b}^2}\\right)}.\n\\end{equation} \nThe Gaussian approximation is known to be biased \\cite{Bond:1998qg,Verde:2003ey,Hamimeche:2008ai}, \nand one could improve the accuracy of the likelihood with a log \nnormal correction \\cite{Verde:2003ey,Hamimeche:2008ai,Flauger:2020qyi}.\nOne can also evaluate the Fisher matrix directly with the chi-squared distribution. \nOn the other hand, \nusing the Gaussian approximation simplifies the calculations. As we now show,\nwe always have $2n_b \\gtrsim \\text{O}(10^2)$, for which the Gaussian approximation is sufficiently accurate.\n\nFirstly, when working with the double broken power law model, we take $N_\\text{b} = 100$ logarithmically spaced \nfrequency bins, for which $n_{b} \\gtrsim 155$. 
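The bin occupancies are easy to verify numerically. Below is a small sketch of the logarithmic binning and the per-bin frequency count; the helper name and the assumed band of $10^{-5}$ to $1$ Hz are ours, taken from the grid used later for the double broken power law model.

```python
import numpy as np

T_OBS = 4 * 365.25 * 86400.0   # 4-year observation time [s]

def bin_counts(f_min=1e-5, f_max=1.0, n_bins=100):
    """Logarithmic bin edges f_b and the count n_b of data frequencies
    per bin, n_b = floor((f_b - f_{b-1}) * T_obs)."""
    edges = np.logspace(np.log10(f_min), np.log10(f_max), n_bins + 1)
    n_b = np.floor(np.diff(edges) * T_OBS).astype(int)
    return edges, n_b

edges, n_b = bin_counts()
# Even the narrowest (lowest-frequency) bin holds >~150 frequencies,
# so the central-limit Gaussian approximation is well justified.
```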
The frequency binning for calculations in the sound shell model is a little more complicated, as \nwe calculate the theoretical model in terms of the angular frequency scaled by the \nmean bubble separation $z = k\\Rstar$, rather than an absolute frequency. \nThis avoids having to recompute power spectra for different $r_*$ and $\\TN$, \nas the shape of the GW power spectrum depends only on the thermodynamic parameters $\\vw$ and $\\al$.\n\nThe transformation from $z$ to a frequency today is \n\\begin{equation}\nf = f_{*,0} z \/r_* ,\n\\end{equation}\nwhere $f_{*,0}$ is given in Eq.~(\\ref{Eq:f0}),\nand we recall that $r_* = \\Rstar \\HN$. \nIn this case, the number of data frequencies in bin $b$ is \n \\begin{equation}\n n_{b} = f_{*,0} T_{\\text{obs}} \\Delta z_b \/r_*\n \\end{equation}\nwhere $\\Delta z_b = z_{b} - z_{b-1}$. \nIn this work we compute $100$ $z $ values with logarithmic spacing between $1$ and $1000$. \nFor the LISA data described here, the minimum $n_{b} \\simeq 42 $.\n\nThe Fisher matrix is \n\\begin{equation}\nF_{ij} = \\left\\langle \\frac{\\partial l_G}{\\partial\\theta_i}\\frac{\\partial l_G}{\\partial\\theta_j}\\right\\rangle,\n\\end{equation}\nwhere $\\theta_i$ denotes the $i$th component of the vector of model parameters $\\vec{\\theta}$, \nand the Gaussian approximation to the log-likelihood is \n\\begin{equation} \nl_G = \\ln(p) = -\\frac{1}{2} \\sum_{b=1}^{N_\\text{b}} \\frac{2n_{b}\\left(\\bar{D}_{b} - S_{b}\\right)^{2}}{S_{b}^2} - \\sum_{b=1}^{N_\\text{b}} \\ln S_{b}+ \\text{const} .\n\\end{equation} \nHence the Gaussian approximation to the Fisher matrix is \n\\begin{equation}\nF_{ij}^{G} = \\sum_{b=1}^{N_\\text{b}} \\frac{2n_{b}}{S_{b}^2} \\frac{\\partial S_{b}}{\\partial\\theta_i}\\frac{\\partial S_{b}}{\\partial\\theta_j} .\n\\end{equation}\n\nOne can show that the Fisher matrix calculated using the full chi-squared distribution is \n\n\\begin{equation}\nF_{ij} = \\left(1+\\frac{1}{2 
n_{b}}\\right)F_{ij}^{G}.\n\\end{equation}\nHence, with the smallest value $2n_{b} = 84$, the difference is minimal, and \nwe take $F_{ij}$ to be its Gaussian approximation from now on. \n\nWe will also use the power spectra, $\\Omega_{\\textrm{t}}(f_b,\\vec{\\theta}) $, rather than the spectral densities $S_{b}$, to formulate the theoretical model of the data:\n\\begin{equation}\\label{Eq:Omn}\n\\Omega_{\\textrm{t}}(f_b,\\vec{\\theta}) = \\Omega_{\\textrm{n}}(f_b) + \\Omega_{\\textrm{fg}}(f_b) + \\Omega_{\\textrm{pt}}(f_b,\\vec{\\theta}) ,\n\\end{equation}\nwhere we assume that the instrumental noise $\\Omega_{\\textrm{n}}$ and foregrounds $\\Omega_{\\textrm{fg}}$ are much \nbetter known than the parameters of the phase transition signal $\\Omega_{\\textrm{pt}}$, \nand so the parameters in the Fisher matrix are just those describing the phase transition. \nWe consider two kinds of foregrounds: one from extragalactic binaries, Eq.~(\\ref{Eq:Om_LV}), and one with both extragalactic and galactic binaries, Eq.~(\\ref{Eq:Om_gb}). As only the ratio of spectra appears, the Fisher matrix in terms of the power spectrum is simply \n\\begin{equation}\\label{Eq:Fij}\nF_{ij} = T_{\\text{obs}} \\sum_{b=1}^{N_\\text{b}} \\frac{2\\Delta f_b }{\\Omt^2} \\frac{\\partial \\Omt}{\\partial\\theta_i}\\frac{\\partial \\Omt}{\\partial\\theta_j} .\n\\end{equation} \nThe sum on the right-hand side can be viewed as a numerical approximation to an integral over frequencies.\n\nThe covariance matrix is the inverse of the Fisher matrix, \n\\begin{equation}\\label{Eq:rel_u}\nC_{ij} = F^{-1}_{ij} ,\n\\end{equation}\nwhere the square roots of the diagonal entries give the uncertainties $\\Delta \\theta_i$ in the parameters. \nThese uncertainties include correlations between parameters. 
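The Fisher-to-covariance pipeline above is straightforward to sketch in code. The toy peaked spectrum, the flat effective noise, and all names below are illustrative assumptions of ours, not the sound shell model or the paper's actual pipeline; only the Fisher sum and the matrix inversion follow the equations above.

```python
import numpy as np

T_OBS = 4 * 365.25 * 86400.0   # observation time [s]

def fisher_matrix(f_edges, omega_t, domega, t_obs=T_OBS):
    """Binned Fisher matrix F_ij = T_obs * sum_b 2*df_b/Omega_t^2 dO_i dO_j.

    f_edges : bin boundaries f_b, shape (N_b + 1,)
    omega_t : total model spectrum at bin centres, shape (N_b,)
    domega  : derivatives d Omega_t / d theta_i, shape (n_par, N_b)
    """
    w = 2.0 * t_obs * np.diff(f_edges) / omega_t ** 2
    return np.einsum('b,ib,jb->ij', w, domega, domega)

def relative_uncertainties(fisher):
    """Square roots of the diagonal of C = F^{-1}; for logarithmic
    parameters these are directly the relative uncertainties."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy peaked signal Omega_pt = A x^3 / (1 + x^7), x = f/f_p, on a flat
# effective noise, with log-parameters (ln A, ln f_p).
edges = np.logspace(-4, -1, 101)
fc = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centres
A, f_p = 1e-10, 3e-3
x = fc / f_p
om_pt = A * x**3 / (1.0 + x**7)
slope = 3.0 - 7.0 * x**7 / (1.0 + x**7)     # d ln Omega_pt / d ln x
domega = np.stack([om_pt,                   # d Omega / d ln A
                   -slope * om_pt])         # d Omega / d ln f_p
F = fisher_matrix(edges, 1e-11 + om_pt, domega)
sigma = relative_uncertainties(F)
```

Because the local slope varies across the peak, the two derivative vectors are linearly independent and the toy Fisher matrix is invertible; a pure power law would make $\ln A$ and $\ln f_p$ exactly degenerate.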
\n We define $\\theta_i$ to be the logarithmic model parameters, so the square roots of the diagonal entries in the covariance matrix are the relative uncertainties in the parameters.\n\n\\subsection{Principal components}\nThe Fisher matrix can be used to construct a set of uncorrelated and orthonormal observables, the principal components. \nAs the Fisher matrix is a symmetric $n \\times n$ matrix, we can find its eigenvectors and eigenvalues, \n\\begin{equation}\\label{Eq:Fisher_eigen}\nF = U\\Lambda U^{\\dagger}, \\quad \\Lambda = \\text{diag}(\\lambda_1,\\lambda_2,\\lambda_3,\\lambda_4),\n\\end{equation}\nwhere $U$ is a matrix of the orthonormal eigenvectors of the Fisher matrix, $u_n$ is the $n^{\\text{th}}$ eigenvector and $\\lambda_n$ is the $n^{\\text{th}}$ eigenvalue. We can then construct a new set of variables $\\vec{X} = (X_1,X_2,X_3,X_4)$, each $X_n$ being a linear combination of our original parameters, obtained by using $U^{\\dagger}$ as a projection matrix, \n\\begin{equation}\\label{Eq:principal_components}\n\\vec{X} = U^{\\dagger} \\vec{\\theta}.\n\\end{equation}\n$X_n$ is the $n^{\\text{th}}$ principal component and the standard deviation in $X_n$ is $\\lambda_n^{-1\/2}$. The principal components are ordered according to the size of their corresponding eigenvalues, meaning $X_1$ is the highest-order and best constrained combination, and $X_4$ is the lowest-order and worst constrained \\cite{Efstathiou:1998xx}.\n\n\\section{Fisher matrix calculation and relative uncertainties}\n\\label{sec:results}\nIn this section we calculate the Fisher matrix and the relative uncertainties as described in the previous section, for several scenarios. Firstly, we evaluate the FM for the spectral parameters of the double broken power law fit to the SSM. Then we evaluate the FM for the four key thermodynamic parameters of the SSM, \n$(\\vw,\\al,r_*,\\TN)$, for two cases: free and fixed nucleation temperature $\\TN$. 
We also calculate the expected sensitivity to the principal components of the GW power spectrum calculated with the SSM. In each case we investigate the impact of including the foreground from galactic binaries. \n\n\\subsection{Double broken power law model}\n\n\\begin{figure}[h!]\n\\centering \n \\includegraphics[width=1\\textwidth]{Relative_uncertainty_spectral_params_b_freeLISA_BBH_GB.pdf}\n \\includegraphics[width=1\\textwidth]{Relative_uncertainty_spectral_params_b_freeLISA_BBH.pdf}\n\n\\caption{\\label{fig:obs_P_rel_uncer} \nColoured contours show relative uncertainties calculated from the Fisher matrix for the \nparameters of the double broken power law model (Eq.~\\ref{Eq:omgw_dbl_brkn}):\npeak power $\\Omega_\\text{p,0}$, peak frequency $f_\\text{p,0}$, break ratio $\\rb$ and intermediate power law $b$. The line styles indicate the break ratio values $r_b$. The black lines show contours of signal-to-noise ratio $\\rho = 20 $ for different $\\rb$, with the same line styles. \nThe grey shaded area indicates the region where the peak signal power is above the combined instrumental noise and foregrounds. \nIn the upper panel the noise model consists of the LISA instrument noise, Eq.~(\\ref{Eq:OmInst}), foreground from compact binaries, Eq.~(\\ref{Eq:Om_LV}) and the galactic binary foreground, Eq.~(\\ref{Eq:Om_gb}). In the lower panel the galactic binary foreground is removed. }\n\n\\end{figure}\n\nFirst we look at the relative uncertainty for the spectral parameters as described in the proposed double broken power law model given in Eq.~(\\ref{Eq:omgw_dbl_brkn}). \n In this case the parameters are \n $\\vec{\\theta} = \\left(\\ln(\\Omega_\\text{p,0}),\\ln (f_\\text{p,0}),\\ln(\\rb), \\ln(b)\\right)$. The \n Fisher matrix entries can be calculated analytically. The gravitational wave power spectrum is evaluated at $100$ frequencies with logarithmic spacing between $10^{-5}\\;\\text{Hz}$ and $1\\;\\text{Hz}$. 
\nWe sample the parameter space as follows: 200 peak powers $\\Omega_\\text{p,0}$, with logarithmic spacing between $ 10^{-13}$ and $ 10^{-8}$; 200 peak frequencies $ f_\\text{p,0} $, with logarithmic spacing between $10^{-5}$ and $1$; 4 frequency break ratios $\\rb = [0.1,0.2,0.3,0.4]$; and intermediate power law with spectral slope $b=1$, the generic value, as explained in \\cite{Hindmarsh:2019phv}.\n\n\nThe range of spectral parameters at which we evaluate the relative uncertainties was chosen such that they could be produced by thermodynamic parameters currently explored in models and simulations (as displayed in Fig.~\\ref{fig:4_param_contours}). \nThe resulting relative uncertainties, with and without the foreground from \nunresolved galactic binaries, are shown in Fig.~\\ref{fig:obs_P_rel_uncer} as contours in the \n$(\\Omega_\\text{p,0},f_\\text{p,0})$ plane. The line style shows the frequency break ratio $\\rb$, and the colour \nthe relative uncertainty. Also plotted is the curve at which the signal-to-noise ratio $\\rho$ is 20, \nand for comparison, the noise model (which includes foregrounds) \n$\\Omega_{\\textrm{n}}(f) $ as a black line.\n\nIt can be seen that $\\rho=20$ is reached for peak powers well below the noise \nlevel, which is an effect of the integration over frequencies. One can regard the $\\rho=20$ line \nas a peak-integrated sensitivity \\cite{Schmitz:2020rag}, \nwhich generalises the idea of power law sensitivity \\cite{Thrane:2013oya} to \npeaked power spectra. \n\n The results for the relative uncertainty in $\\Omega_\\text{p,0}$ and $f_\\text{p,0}$ are consistent with \n those in Ref.~\\cite{Hashino:2018wee}, which studied the two-parameter single broken power law model \n advocated by the LISA Cosmology Working group \\cite{Caprini:2019egz}. 
\n One can summarise the conclusion \nin a parameter-independent way by the statement that a SNR of about 20 allows a measurement of the \n peak power and peak frequency at around a 10\\% level of uncertainty. \n If the unresolved galactic binaries are not \n removed, the region of parameter space in which $\\rho=20$ is achieved shrinks. \n \n A 10\\% measurement of $\\rb$, which encodes information about the wall speed, \n requires higher signal-to-noise ratios, with the best resolved break ratio being $\\rb = 0.4$. \n This is the value of $\\rb$ giving a power spectrum with the narrowest peak, \nand so the whole peak is likely to be in the sensitivity window of the detector. \n\n\\subsection{Sound shell model}\nIn the simplest version of the sound shell model we study, the parameters are the logarithms of the \nwall speed, the phase transition strength, the Hubble-scaled mean bubble spacing and the nucleation temperature,\ngiving a parameter vector \n$\\vec{\\theta} = (\\ln \\vw, \\ln\\al,\\ln r_*, \\ln(\\TN\/\\text{GeV}))$. \n\n\nWe evaluate the Fisher matrix at all combinations of our parameter space using Eq.~(\\ref{Eq:Fij}). \nThe parameter space was sampled with 50 wall speeds $\\vw$ in the range $0.4 \\le \\vw \\le 0.9$, 51 phase transition strengths $ \\al $ logarithmically spaced between $0.01$ and $0.5$, 2 Hubble-scaled mean bubble spacings $r_* = 0.01,0.1$, and nucleation temperature $\\TN = 100\\;\\text{GeV}$.\n\nTo construct the Fisher matrix we need to calculate the partial derivatives of the GW power spectrum with respect to each of our thermodynamic parameters. \nThe gradients with respect to $\\vw$ and $\\al$ were computed numerically. 
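For such numerical gradients, a central difference in the logarithm of the parameter suffices. The sketch below is a generic helper under our own naming, not the paper's actual code; it assumes a callable that returns the model spectrum for a parameter vector of positive entries.

```python
import numpy as np

def dlog_central(func, theta0, i, eps=1e-3):
    """Central-difference derivative d func / d ln(theta_i) at theta0.

    func   : callable mapping a parameter vector to the model spectrum
    theta0 : parameter vector (e.g. (vw, al, ...)), all entries positive
    i      : index of the parameter being varied
    eps    : step in ln(theta_i); truncation error is O(eps^2)
    """
    up, down = theta0.copy(), theta0.copy()
    up[i] *= np.exp(eps)      # ln theta_i -> ln theta_i + eps
    down[i] *= np.exp(-eps)   # ln theta_i -> ln theta_i - eps
    return (func(up) - func(down)) / (2.0 * eps)
```

On a test function with a known answer, e.g. $f(\theta) = \theta^2$ with $df/d\ln\theta = 2\theta^2$, the helper reproduces the analytic result to the expected $O(\epsilon^2)$ accuracy.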
\nThe derivatives with respect to the Hubble-scaled mean bubble spacing $r_*$ and the nucleation temperature $\\TN$ are calculated as follows.\n\nWith the phase transition model spectrum $\\Omega_{\\textrm{pt}}$ given by $\\Omega_{\\text{gw,0}}$ in Eq.~(\\ref{Eq:Omgw0_sup}), \nwe recall that \n\\begin{equation}\nJ = (\\HN \\Rstar)(\\HN \\tau_v) = r_* \\left(1 - \\frac{1}{\\sqrt{1 + 2x}} \\right), \n\\end{equation}\nwhere $x = r_*\/\\sqrt{K(\\al,\\vw)}$. The gravitational wave frequency today $f$ is related to the dimensionless wavenumber $z$ \nthrough $ z = r_*(f\/f_{*,0})$, with the reference frequency depending on $\\TN$ through Eq.~(\\ref{Eq:f0}).\nHence we find \n\\begin{equation}\n\\frac{\\partial \\Omega_{\\textrm{t}}(f)}{\\partial \\ln r_*} = \\Omega_{\\text{gw,0}} \\left( \\frac{\\partial \\ln J}{\\partial \\ln r_*} + \\gamma_\\text{gw}(z)\\right),\n\\end{equation}\nwhere \n\\begin{equation}\n \\frac{\\partial \\ln J}{\\partial \\ln r_*} = 1 + \\frac{r_*}{J} \\frac{x}{\\left( 1 + 2x \\right)^{3\/2}},\n\\end{equation}\nand \n$ \\gamma_\\text{gw} = d \\ln \\Omega_\\text{gw}\/d\\ln z$ is the local power law index of the gravitational wave power spectrum, \nwhich we compute numerically.\n\nThe partial derivative with respect to $\\TN$ is then \n\\begin{equation}\n\\frac{\\partial \\Omega_{\\textrm{t}}(f)}{\\partial \\ln( \\TN\/\\text{GeV})} = - {\\Omega_{\\text{gw,0}}}\\gamma_\\text{gw}(z).\n\\end{equation}\n\nThe resulting relative uncertainties are shown \n in Fig.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_GB_T_100_as_param} (with the galactic binary foreground) and Fig.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_T_100_as_param} (without it). 
Below the Jouguet speed, indicated by a dashed line, \nthe fluid shell becomes a supersonic deflagration, with a significant change in \nthe sound wave power spectrum, and hence the gravitational wave power spectrum \\cite{Hindmarsh:2019phv}.\nThus one expects to see features in the signal-to-noise ratio and the relative uncertainties \nto the left of this line. \nThe intricate shape of the contours is also partly due to the complex degeneracies, discussed below, \nand inaccuracies in the interpolation of the numerically-determined GW suppression factor.\n\\begin{figure}[h!]\n\\begin{subfigure}[b]{\\textwidth}\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_free_T100_KE_sup_LISA_BBH_GB_rs_0_01.pdf}\\\\[-10pt]\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_free_T100_KE_sup_LISA_BBH_GB_rs_0_1.pdf}\n\\caption{\\label{fig:thermo_params_fixed_rs_LISA_BBH_GB_T_100_as_param} Noise model: LISA instrument noise, foregrounds from extragalactic compact binaries Eq.~(\\ref{Eq:Om_LV}) and unresolved galactic compact binaries (\\ref{Eq:Om_gb}).\n}\n\\end{subfigure}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_free_T100_KE_sup_LISA_BBH_rs_0_01.pdf}\\\\[-10pt]\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_free_T100_KE_sup_LISA_BBH_rs_0_1.pdf}\n\\caption{\\label{fig:thermo_params_fixed_rs_LISA_BBH_T_100_as_param} Noise model: same as above, \nwith the foreground from unresolved galactic binaries removed.}\n\\end{subfigure}\n\n\\caption{\\label{fig:thermo_params_fixed_rs_LISA_BBH_GB_T 100_as_param } Contours of relative uncertainty in the thermodynamic parameters wall speed $\\vw$, transition strength $\\al$, scaled mean bubble 
spacing $r_*$ and nucleation temperature $\\TN$. \nIn each sub-figure, the upper and lower panels have \nHubble-scaled bubble spacing $r_*$ as annotated. In both panels $\\TN= 100$GeV. \nThe black solid line shows contours of signal-to-noise ratio $\\rho$. \nThe turquoise dashed line is the Jouguet speed, the minimum for a detonation. }\n\\end{figure}\n\nA general conclusion is that, even when $\\rho=20$, the only parameter \nwhich has relative uncertainty less than 1 is the wall speed. \nThat the wall speed $\\vw$ is the best determined parameter is perhaps \nsurprising, but it can be understood as follows. \n\n\nLooking at the upper left panel of Fig.~\\ref{Fig:parameter_vary}, one can see \nthat varying the wall speed significantly changes the shape of the power spectrum, \nwhich none of the other parameters do. \nOn the other hand, the other parameters have complex degeneracies. \nFor example, $r_*$ and $\\TN$ both affect the overall frequency scale, \nand $\\al$ and $r_*$ both affect the overall amplitude of the power spectrum. \nIncreasing $\\TN$ (see Fig.~\\ref{Fig:parameter_vary}, bottom right panel) \nshifts the peak frequency, which can be compensated by a combination \nof increasing $r_*$ (Fig.~\\ref{Fig:parameter_vary}, bottom left panel) \nand reducing $\\al$. \n\nAnother general conclusion, clear from the comparison between Figs.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_GB_T_100_as_param} and \\ref{fig:thermo_params_fixed_rs_LISA_BBH_T_100_as_param}, is the importance of \nremoving the galactic binary foreground for parameter estimation. \nThe two figures represent the extremes of what can be achieved in practice.\nThe study of Ref.~\\cite{Adams:2013qma} indicates that the annual variation of the \ngalactic binary foreground will enable its near-complete removal, and \nso Fig.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_T_100_as_param} \nis likely to be a better approximation. 
\n\n\\FloatBarrier\n\\subsection{Principal component analysis}\nThe degeneracy between $\\al$, $r_*$ and $\\TN$ gives the impression that there will be virtually no sensitivity to these parameters, even at high signal-to-noise ratio. The Fisher matrix may be overestimating the uncertainties, so we look to the principal components to see if there is greater sensitivity to linear combinations of the thermodynamic parameters. The contours of the standard deviation of our principal components, $\\lambda_n^{-1\/2}$, can be seen in Fig.~\\ref{fig:principal_component_uncertainty_fixed_rs_LISA_BBH_GB_T 100_as_param}.\n\n\\begin{figure}[h!]\n\\begin{subfigure}[b]{\\textwidth}\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{eigen_values\/eigen_values_Thermo_params_free_T100_KE_sup_LISA_BBH_GB_rs_0_01.pdf}\\\\[-10pt]\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{eigen_values\/eigen_values_Thermo_params_free_T100_KE_sup_LISA_BBH_GB_rs_0_1.pdf}\n\\caption{\\label{fig:eigenalues_fixed_rs_LISA_BBH_GB_T_100_as_param} Noise model: LISA instrument noise, foregrounds from extragalactic compact binaries Eq.~(\\ref{Eq:Om_LV}) and unresolved galactic compact binaries (\\ref{Eq:Om_gb}).\n}\n\\end{subfigure}\n\n\\caption{\\label{fig:principal_component_uncertainty_fixed_rs_LISA_BBH_GB_T 100_as_param} Contours of standard deviation ($1\/\\sqrt{\\lambda_n}$) for the principal components constructed from the eigenvectors of the Fisher matrix evaluated across the wall speed $\\vw$ and phase transition strength $\\al$ parameter space.\nIn each sub-figure, the upper and lower panels have Hubble-scaled bubble spacing $r_*$ as annotated. In both panels $\\TN= 100$ GeV. \nThe black solid line shows contours of signal-to-noise ratio $\\rho$. \nThe turquoise dashed line is the Jouguet speed, the minimum for a detonation. 
}\n\n\\end{figure}\n\nComparing Figs.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_GB_T_100_as_param} and ~\\ref{fig:principal_component_uncertainty_fixed_rs_LISA_BBH_GB_T 100_as_param} it is immediately obvious that there is greater sensitivity to the principal components over a broader region of parameter space, even when the foreground from galactic binaries is present. In general, for GW power spectra with $\\rho > 20$ the two highest-order principal components reach $1\/\\sqrt{\\lambda_n} < 3\\% $ for both values of the Hubble-scaled mean bubble spacing. For GW power spectra with $r_* = 0.01$, \nwhilst there is only a small region of sensitivity to $\\vw$, there is broad sensitivity to the two highest-order principal components. \nIn the $r_* = 0.1$ case $1\/\\sqrt{\\lambda_n }< 30\\%$ for the majority of the parameter space for the two highest-order principal components (see Fig.~\\ref{fig:principal_component_uncertainty_fixed_rs_LISA_BBH_GB_T 100_as_param}).\n\\FloatBarrier\nTo investigate the contribution of the principal components to the thermodynamic parameters, \nwe assigned to each of the first three principal components the colours red, green and blue respectively.\nWe took the four thermodynamic parameter eigenvectors in the principal component basis, \nand constructed an RGB colour from the square of the corresponding entry in the eigenvector. \nA significant mixture of the fourth principal component would then appear as a dark colour.\n\nWe show the result in Fig.~\\ref{fig:eigen_vector}. \nWe see the wall speed $\\vw$ is predominantly red, meaning the first principal component provides the largest contribution, which confirms that we would expect greatest sensitivity to $\\vw$.\nThe other parameters show an interesting mix of colours, which is partly noise introduced when we interpolate the kinetic energy suppression data (see Appendix \\ref{sec:suppression_factor}). 
We believe the remaining sudden changes of colour come from the degeneracy between parameters, in particular the streak originating around the speed of sound on the wall speed axis. \n\\begin{figure}[h!]\n\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{eigen_vector\/eigen_vector_contributions_to_Thermo_params_free_T100_LISA_BBH_GB_rs_0_01.pdf}\\\\[-10pt]\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{eigen_vector\/eigen_vector_contributions_to_Thermo_params_free_T100_LISA_BBH_GB_rs_0_1.pdf}\n\n\\caption{\\label{fig:eigen_vector} The contributions of the first three principal components to the thermodynamic parameters wall speed $\\vw$, transition strength $\\al$, scaled mean bubble spacing $r_*$ and nucleation temperature $\\TN$. Red, green and blue correspond to the first, second and third principal components respectively. The upper and lower panels have Hubble-scaled bubble spacing $r_*$ as annotated. In both panels $\\TN= 100$ GeV. 
Noise model: LISA instrument noise, foregrounds from extragalactic compact binaries Eq.~(\\ref{Eq:Om_LV}) and unresolved galactic compact binaries (\\ref{Eq:Om_gb}).\n}\n\\end{figure}\n\\FloatBarrier\n\n\\subsection{Sound shell model with fixed nucleation temperature}\n\\label{ssec:TP_fixedT}\n\n\\begin{figure}[ht!]\n\\begin{subfigure}[b]{\\textwidth}\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_fixed_T100_KE_sup_LISA_BBH_GB_rs_0_01.pdf}\\\\\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_fixed_T100_KE_sup_LISA_BBH_GB_rs_0_1.pdf}\n\\caption{\nNoise model: LISA instrument noise, foregrounds from extragalactic compact binaries Eq.~(\\ref{Eq:Om_LV}) and unresolved galactic compact binaries (\\ref{Eq:Om_gb}).\n}\n\\label{fig:thermo_params_fixed_T_BBH_GB_KE_suppress} \n\\end{subfigure}\n\\begin{subfigure}[b]{\\textwidth}\n\\centering \n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.01$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_fixed_T100_KE_sup_LISA_BBH_rs_0_01.pdf}\\\\\n\\rotatebox{90}{\\hspace{20mm}$r_* = 0.1$}\n\\includegraphics[width=0.8\\textwidth]{Relative_uncertainty_Thermo_params_fixed_T100_KE_sup_LISA_BBH_rs_0_1.pdf}\n\\caption{Noise model same as above, \nwith the foreground from unresolved galactic binaries removed.}\n\\label{fig:thermo_params_fixed_T_BBH_KE_suppress}\n\\end{subfigure}\n\\caption{Contours of relative uncertainty in the thermodynamic parameters wall speed $\\vw$, transition strength $\\al$ and scaled mean bubble spacing $r_*$ with \nnucleation temperature $\\TN= 100$ GeV, for gravitational wave power spectra calculated using the sound shell model, Eq.~(\\ref{Eq:Omgw0_sup}). \nIn each sub-figure, \nthe upper and lower panels have Hubble-scaled bubble spacing $r_*$ as annotated. 
The turquoise dashed line is the Jouguet speed, the minimum for a detonation.\n}\n\\end{figure}\n\nFor the final analysis we explore the impact of information from particle physics data. \nWhile the information is likely to constrain a combination of parameters, we \ntake as a limiting example a known nucleation temperature $\\TN = 100$ GeV. \nThe nucleation temperature is likely to be close to the critical temperature of the phase transition, \nwhich is the most straightforward thermodynamic parameter to calculate from an underlying theory.\n\n\nThe Fisher matrix, covariance matrix and relative uncertainties are calculated following the same procedure as above for two scenarios: first with our base noise model, Fig.~\\ref{fig:thermo_params_fixed_T_BBH_GB_KE_suppress}, and then with the unresolved galactic binaries foreground removed, Fig.~\\ref{fig:thermo_params_fixed_T_BBH_KE_suppress}.\n\n\\FloatBarrier\n\nPrior knowledge of the nucleation temperature $\\TN$ greatly improves the power of LISA to estimate other parameters, as the degeneracies become partly broken. With fixed $\\TN$, for GW power spectra with $\\rho>20$, the wall speed has relative uncertainty of less than $10 \\%$. Fixed $\\TN$ also improves sensitivity to the phase transition strength, $\\al$, and the Hubble-scaled mean bubble spacing, $r_*$.\nFor example, if the phase transition has $r_* = 0.1$, one can achieve relative uncertainties $\\Delta\\al\/\\al <10\\%$ and $\\Delta r_*\/r_* < 30\\%$ \nwith a signal-to-noise ratio greater than $50$.\n\nThere is an interesting feature in the relative uncertainty contours at $r_* = 0.1$, where the SNR is higher:\na small ridge of lower uncertainty in $\\alpha$, \nfor wall speeds just over 0.6. \nThis is accompanied by a reduction in the sensitivity to $\\vw$. \n\nThe origin of this ridge is perhaps as follows. 
\nReferring to Fig.~\\ref{Fig:parameter_vary}, one can see that \nat around $\\vw = 0.6$ with $r_* = 0.1$, changes in the wall speed and $r_*$ \nhave the effect of moving the closest part of the signal to the sensitivity curve \nin a direction tangent to the sensitivity curve, without changing the shape. \nThis would mean that the likelihood changes little in these directions, and it would be difficult to distinguish between possible parameter values, leading to a reduction in sensitivity. \nChanges in $\\al$, on the other hand, change the signal power, and \nwill change the likelihood. Thus the likelihood is most sensitive \nto changes in $\\al$ in this region.\n\n\\section{Discussion}\n\\label{sec:Discussion}\nIn this paper we have explored the prospect of extracting the model parameters of a stochastic gravitational wave background from a first order phase transition at future space-based gravitational wave observatories. We focused on LISA, \nand on the impact of including expected foregrounds from compact binaries. \nHere we studied the gravitational wave power spectra predicted by the sound shell model (SSM), and \nused Fisher matrix analysis to investigate the sensitivity both to the four parameters of a double broken power law approximation, \nand to the underlying thermodynamic parameters in the SSM. The key thermodynamic parameters are the nucleation temperature\n$\\TN$, the transition strength $\\al$, the mean bubble spacing in Hubble units $r_*$ and the wall (phase boundary) speed $\\vw$. \nWe assumed a sound speed $\\cs =1\/\\sqrt{3}$ in both phases, and leave an investigation of sensitivity to this parameter to \nfuture work. The fact that different sound speeds significantly change the kinetic energy fraction of the fluid \\cite{Giese:2020znk,Giese:2020rtr}\nsuggests that there will be sensitivity to this parameter as well. 
\n\n\nIn Sec.~\\ref{ssec:cosmo_dbl_brkn} we studied the double broken power law approximation to the gravitational wave power spectrum, which was advocated in Ref.~\\cite{Hindmarsh:2019phv}. \nIt has parameters characterising the peak power $\\Omega_\\text{p}$ and two frequency scales, the peak frequency $f_\\text{p}$, and a lower ``break'' frequency $f_\\text{b} = \\rb f_\\text{p}$. \nIn its original form, the indices of the three \npower laws were fixed by arguments based on the limits of certain integrals. \nWe introduced a fourth parameter, the spectral slope of the intermediate power law $b$, \nto improve the fit between $f_\\text{b}$ and $f_\\text{p}$ for phase transitions proceeding by supersonic deflagrations.\nThis form, given in Eq.~(\\ref{Eq:omgw_dbl_brkn}), is a significant improvement on the single broken power law in fitting the predictions of the \nsound shell model (see Fig.~\\ref{fig:mean_res}). \n\nWe performed a Fisher matrix analysis to calculate the relative uncertainty for the four parameters of the \ndouble broken power law spectrum. \nIn Fig.~\\ref{fig:obs_P_rel_uncer} we see that the peak power $\\Omega_\\text{p}$ and the peak frequency $f_\\text{p}$ are expected to be best constrained, \nwith a signal-to-noise ratio of 20 delivering determinations to around 10\\% \nfor peak frequencies between $10^{-4}$ and $10^{-2}$ Hz. \nThe other parameters, the break frequency ratio $\\rb$ and the intermediate power law $b$, are less well determined, but \ncan still be measured with less than 1\\% relative uncertainty for signals with peak power and peak frequencies that lie on \nLISA's sensitivity curve, that is, signals at the same level as or above the instrument noise. 
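Eq.~(\ref{Eq:omgw_dbl_brkn}) itself is not reproduced in this excerpt; as a rough illustration, the sketch below implements a generic piecewise double broken power law in the same four parameters $(\Omega_\text{p}, f_\text{p}, \rb, b)$. The asymptotic slopes ($f^9$ below the break, $f^{-3}$ above the peak) and the sharp piecewise joining are assumptions for illustration, not the paper's fitted form.

```python
import numpy as np

def omega_gw(f, omega_p, f_p, r_b, b, n1=9.0, n3=-3.0):
    """Illustrative double broken power law (assumed form, not
    Eq. (Eq:omgw_dbl_brkn)): slope n1 below f_b = r_b * f_p, slope b
    between f_b and f_p, slope n3 above f_p, with omega_gw(f_p) = omega_p
    and continuity at both breaks."""
    f = np.asarray(f, dtype=float)
    f_b = r_b * f_p
    out = np.empty_like(f)
    lo = f < f_b
    mid = (f >= f_b) & (f < f_p)
    hi = f >= f_p
    out[hi] = omega_p * (f[hi] / f_p) ** n3
    out[mid] = omega_p * (f[mid] / f_p) ** b
    out[lo] = omega_p * r_b ** b * (f[lo] / f_b) ** n1   # continuous at f_b
    return out
```

With $b > 0$ this rises to the peak at $f_\text{p}$ and falls beyond it, matching the qualitative shape described in the text.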
\n\n\nThe extragalactic compact binary foreground expected from LIGO\/Virgo data \\cite{Abbott:2017xzg} is not an important contributor to the total noise, \nbut the galactic binary foreground would be significant if it could not be removed.\nThe main effect is to somewhat reduce the range over which parameters can be determined within a given uncertainty; \nthe magnitude of the effect can be judged in the difference between Figs.~\\ref{fig:thermo_params_fixed_rs_LISA_BBH_GB_T_100_as_param} and \\ref{fig:thermo_params_fixed_rs_LISA_BBH_T_100_as_param}.\nHowever, it is expected that the galactic binary foreground will be at least partially removable through its annual modulation \\cite{Adams:2013qma}. \n\nWe also studied LISA's sensitivity to the four principal thermodynamic parameters of a first order phase transition, as described above. The GW power spectrum model used was the sound shell model, Eq.~(\\ref{Eq:Omgw0_sup}), incorporating kinetic energy suppression in slow deflagrations \\cite{Cutting:2019zws}. \n\nWe investigated scenarios with a nucleation temperature $\\TN = 100$ GeV and \nHubble-scaled mean bubble spacing $r_* = 0.1, 0.01$, scanning over \na range in $(\\vw,\\al)$ space with $0.4 \\le \\vw \\le 0.9$, $0.005 \\le \\al \\le 0.5$, where \nnumerical simulations have been performed \\cite{Cutting:2019zws} and the model can be calibrated. 
\nWe observed that in order to match the total power in the simulations a suppression factor had to be applied \nto the sound shell model (see Appendix \\ref{sec:suppression_factor}).\n \nWe found that the wall speed $\\vw$ would be the best constrained parameter, with a relative uncertainty of better \nthan $30$\\% provided the signal-to-noise ratio is above 20, and the wall is supersonic, \neven in the worst-case scenario where the foreground from unresolved galactic binaries cannot be removed.\nAs the Hubble-scaled mean bubble spacing $r_*$ gets smaller the signal power decreases, leading to a reduction in the region of parameter space over which a signal-to-noise ratio of 20 can be achieved.\nFor example, with $r_* = 0.1$, \nSNR 20 can be produced by phase transition strengths down to about $\\al \\simeq 0.13$, \nwhile at $r_* = 0.01$ the corresponding figure is $\\al \\simeq 0.35$.\n\nThere is limited sensitivity to the parameters $\\al$, $r_*$ and $\\TN$ due to \ndegeneracies. For example, the peak frequency is left unchanged by simultaneous changes in the\nnucleation temperature $\\TN$ and the mean bubble spacing $r_*$. Changing $r_*$ changes the peak \npower, which can be brought back to its original value, without changing the peak frequency much, by a change in \nthe transition strength $\\al$. \nThis will mean that the parameters most easily computable from underlying models, \n$\\TN$, $\\tilde\\be$, and $\\al$, will not be individually well determined. \n\n\nHowever, there is much better sensitivity to the principal components \nacross the explored parameter space. The two highest-order principal components have relative uncertainty less than $3\\%$ for GW power spectra with $\\rho> 20$, for both values of the Hubble-scaled mean bubble spacing. The highest-order component is found to be dominated by the wall speed, \nas is consistent with the wall speed being the best determined parameter. 
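The principal-component construction used here is an eigendecomposition of the Fisher matrix: the eigenvectors define linear combinations of the thermodynamic parameters, and the standard deviation of the $n$-th combination is $1/\sqrt{\lambda_n}$. A minimal sketch, again with an illustrative placeholder Fisher matrix rather than a sound shell model computation:

```python
import numpy as np

# Eigendecompose a toy symmetric Fisher matrix; each eigenvector is a
# principal component with standard deviation 1/sqrt(lambda_n).
# F is an illustrative placeholder.

F = np.array([[400.0,  30.0,  20.0,  0.5],
              [ 30.0, 120.0,  60.0,  0.8],
              [ 20.0,  60.0,  90.0,  0.4],
              [  0.5,   0.8,   0.4,  0.05]])

lam, V = np.linalg.eigh(F)        # ascending eigenvalues, orthonormal columns
lam, V = lam[::-1], V[:, ::-1]    # highest-order (best determined) first

pc_sigma = 1.0 / np.sqrt(lam)     # standard deviations of the components
print("principal component sigmas:", pc_sigma)
```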
\n\n\nIf one of the parameters is known, the other thermodynamic parameters are much better constrained. \nFor example, with a known nucleation temperature $\\TN = 100$ GeV, the wall speed would have an estimated uncertainty of less than $10\\%$ for the majority of the parameter space, \nand the phase transition strength would \nbe almost as accurately measured as the wall speed. \nThe mean bubble spacing would be less accurately measured. \nA more realistic situation would involve constraints on masses and coupling constants in an underlying \nparticle physics model, which we will explore elsewhere. \n\nThe parameter degeneracies we have found mean that \n one must consider the reliability of the Fisher matrix as an indicator of parameter uncertainties. \n For the spectral parameters of the double broken power law fit, the Fisher matrix can be trusted, as there is little or no degeneracy between parameters and the Gaussian approximation made with the Fisher matrix is reasonable. \nTo check, we carried out preliminary Markov Chain Monte Carlo (MCMC) estimation using the Gaussian \napproximation to the likelihood, which returned ellipsoidal posteriors for the spectral parameters. We also found\napproximate agreement between the marginalised posteriors and the uncertainties predicted by Fisher analysis. On the other hand, the thermodynamic parameters do have significant degeneracies. In some regions of parameter space where the signal-to-noise ratio is high, \n preliminary MCMC has shown ellipsoidal posteriors, supporting the case that the principal component analysis \n of the Fisher matrix should be a reasonably good indicator.\nThe general tendency of the Fisher matrix is to over-estimate uncertainties \\cite{Efstathiou:1998xx}, and so we expect \nthat the uncertainty estimates presented here are conservative. 
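The consistency check between Fisher widths and sampled posteriors can be illustrated in miniature: for an exactly Gaussian likelihood, samples drawn with covariance $F^{-1}$ have marginal widths equal to the Fisher prediction $\sqrt{(F^{-1})_{ii}}$. The two-parameter Fisher matrix below is a toy placeholder, not the paper's.

```python
import numpy as np

# Draw samples from a Gaussian likelihood with covariance F^{-1} and compare
# the marginal sample widths with the Fisher prediction sqrt(diag(F^{-1})).
# F is a toy placeholder.

rng = np.random.default_rng(0)

F = np.array([[400.0,  30.0],
              [ 30.0, 120.0]])
C = np.linalg.inv(F)

samples = rng.multivariate_normal(mean=np.zeros(2), cov=C, size=200_000)
marginal_std = samples.std(axis=0)
fisher_std = np.sqrt(np.diag(C))

print("sampled :", marginal_std)
print("Fisher  :", fisher_std)
```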
\n \nIn summary, we have presented \nthe relative uncertainties calculated with the Fisher matrix as a preliminary guide to the expected power of LISA to resolve the spectral and thermodynamic parameters. The analysis makes it clear there are significant degeneracies that limit the accuracy of the direct \ndetermination of the thermodynamic parameters. However, the principal components show that at least two combinations \nof parameters can be well-determined, and the wall speed will be the best measured phase transition parameter. \nFor GW signals with signal-to-noise ratio greater than 20 we found the relative uncertainty for the two highest-order principal components to be less than $3\\%$. This provides a target for the accuracy required from theoretical models.\nWe plan to carry out a more detailed MCMC analysis, exploring more realistic noise models, and with the sound speeds as parameters in an extended sound shell model.\n\n\n\\acknowledgments\n\nWe thank Oliver Gould, Elina Keih\\\"anen, Antony Lewis, Jes\\'us Torrado, Ville Vaskonen, Essi Vilhonen and Graham White for useful comments and feedback. We are grateful to Daniel Cutting for supplying data from Ref.~\\cite{Cutting:2019zws} and F\\\"eanor Reuben Ares for helpful discussions. CG (ORCID ID 0000-0002-7955-4465) is supported by a STFC Studentship. MH (ORCID ID 0000-0002-9307-437X) acknowledges support from the Academy of Finland (grant number 333609).\n\n\\newpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nSpacetime fluctuations are expected to be a generic feature of a\ntheory which combines quantum theory and gravitation. While there is \nstill no complete quantum theory of gravity, it is possible to\ninvestigate some of the characteristics expected of fluctuating spacetime\ngeometries. 
One can roughly classify these fluctuations as being either\nactive or passive.\nActive fluctuations arise from fluctuations of the dynamical degrees\nof freedom of gravity itself, that is, from the quantization of gravity.\n Passive fluctuations arise from fluctuations in the stress tensor due to\nquantum matter fields. In general, one expects both types of fluctuations to \nbe present. One approach to the study of fluctuating spacetimes is\nstochastic gravity~\\cite{Stochastic}. \nMore generally, there has been considerable\nactivity in recent years in the area of quantum gravity phenomenology,\nwhich seeks to find observational signatures of the quantum nature of \nspacetime~\\cite{Moffat,JR94,GLF,HS97,EMN00,NvD00,AC00,NCvD03,AC04,DHR04,\nBorgman}. \nThere has also been considerable attention given to the effects of classical\nstochastic gravitational fields~\\cite{Zipoy,Sachs,Kaufmann,BM71,BKPN90}\n and to scattering of probe particles by gravitons in\nan S-matrix approach~\\cite{Weber,DeWitt}. \n\nSpacetime geometry fluctuations should in principle produce observable\neffects on test particles, such as light rays. Several effects of\nfluctuating gravitational fields on light propagation have been discussed\nby previous authors. For example, Sachs and Wolfe~\\cite{Sachs} treated the \nscattering of cosmic microwave photons from cosmological density perturbations.\nZipoy~\\cite{Zipoy} argued that there will be apparent luminosity variations\nin a source seen through a stochastic gravitational field. More recently,\nthis effect has been treated~\\cite{Borgman} using a Langevin form of the \nRaychaudhuri equation~\\cite{Moffat}. In the latter approach, luminosity\nfluctuations are a signature of passive spacetime geometry fluctuations\ncaused by quantum stress tensor fluctuations. 
Zipoy~\\cite{Zipoy} also\nexamined changes in angular position due to a stochastic gravitational field.\nKaufmann~\\cite{Kaufmann} considered redshift fluctuations from a bath\nof gravity waves. If one goes beyond the geometric optics approximation\nto consider electromagnetic waves propagating in a stochastic gravitational\nfield, there will be phase shift variations~\\cite{BM71,BKPN90,HS97}.\n \n\nThe purpose of the present paper is to analyze in more detail two particular\nsignatures of spacetime geometry fluctuations, redshift fluctuations\nand angular blurring of images. We derive simple expressions \ninvolving the Riemann tensor correlation function and use these \nto examine fluctuations in\nredshift and angular position of a source. We will illustrate this approach\nusing the cases of gravitons in a squeezed state and in a thermal state, \nand calculate the broadening of spectral lines and an angular blurring of \nan object viewed through a region of spacetime filled with gravitons.\nThis will serve as a simple example of spacetime undergoing active\nquantum fluctuations.\n\n\nThis paper is organized as follows: In Sect.~\\ref{sec:rab} we provide a\ndetailed derivation of the equations for redshift fluctuations and\nangular blurring given in Ref.~\\cite{peyresq10}. The derivation is\nindependent of the source of fluctuations, being based on the\nRiemann tensor correlation function. In Sect.~\\ref{sec:squeeze} we assume\nthat gravitons in a general squeezed state act as the source of\nfluctuations and are described by the correlation function, and we then\nspecialize the results from Sect.~\\ref{sec:rab} to this model. Several\nexamples are calculated for the variance of both redshift and\nangular position, and are compared with the expected classical time\nvariation of these quantities. We then express these results in\nterms of the energy density of the gravity wave and make an\norder of magnitude estimate for the size of the effect. 
\nIn Sect.~\\ref{sec:thermal} we assume a\nthermal bath of gravitons as the source of fluctuations and again\nspecialize the results of Section~\\ref{sec:rab} to this model. The results of\nthe paper are summarized and discussed in Sect.~\\ref{sec:final}. A brief review\nof the pertinent information regarding squeezed quantum states\nrequired for this paper can be found in the appendix. Units in which\n$G = \\hbar = c =1$ will be used, where $G$ is Newton's constant.\n\n\n\\section{Redshift and Angular Blurring in Linearized Gravity}\n\\label{sec:rab}\n\n\\subsection{Linearized Redshift}\n\nLet $t^{\\mu}$ be the 4-velocity of a source and $v^{\\mu}$ the\n4-velocity of a detector at the events of emission and absorption,\nrespectively, and let $k^{\\mu}$ be tangent to the null geodesic\nconnecting them (see Fig.~\\ref{spacetime}, e.g.,\\ path $DA$). Let\n$\\lambda$ be an affine parameter that runs from $\\lambda =0$\n(emission) to $\\lambda=\\lambda_0$ (absorption). Since $k^{\\mu}$ is\ntangent to a geodesic,\n\\begin{equation} \\label{geodesiceqn}\n \\frac{dk^{\\mu}}{d\\lambda} + \\Gamma^{\\mu}_{\\alpha\n \\beta}k^{\\alpha}k^{\\beta} = 0\n\\end{equation}\nand so\n\\begin{equation}\n k^{\\mu}(\\lambda)=\nk^{\\mu}(0) - \\int_0^{\\lambda} d\\lambda' \\, \\Gamma^{\\mu}_{\\alpha\n \\beta}(\\lambda')k^{\\alpha}(\\lambda')k^{\\beta}(\\lambda').\n\\end{equation}\nThis is an integral equation for $k^{\\mu}(\\lambda)$, but in the\nlinearized theory, we let $k^{\\mu}(\\lambda') \\approx k^{\\mu}(0)$ on\nthe right hand side. So at the point of detection,\n\\begin{equation}\n k^{\\mu}(\\lambda_0)=k^{\\mu}(0) - \\int_0^{\\lambda_0} d\\lambda' \\, \n\\Gamma^{\\mu}_{\\alpha\n \\beta}(\\lambda')k^{\\alpha}(0)k^{\\beta}(0).\n\\end{equation}\nThe frequency of emission is proportional to $-k^{\\mu}t_{\\mu}$. 
The\nproportionality constant depends on the choice of affine parameter.\nIf we choose the affine parameter such that, for example, $k^{\\mu} =\n(\\omega,\\omega,0,0)$ in the rest frame of the source, \nthen $\\omega = -t_{\\mu}k^{\\mu}.$ Since Eq.\n(\\ref{geodesiceqn}) is scale invariant, we may also select an affine\nparametrization such that $k^{\\mu} = (1,1,0,0).$ This amounts to\nchoosing the affine parameter to coincide with the source's proper\ntime at the evaluation point. We adopt the latter parametrization of\n$\\lambda$, in which case\n the ratio of the detected to the emitted frequency is\n\\begin{equation}\n \\frac{\\omega(\\lambda_0)}{\\omega_0} = -v_{\\mu}k^{\\mu}(\\lambda_0) \n= -v_{\\mu}k^{\\mu}(0) + v_{\\mu}\n \\int_0^{\\lambda_0} d\\lambda' \\, \\Gamma^{\\mu}_{\\alpha\n \\beta}(\\lambda')k^{\\alpha}(0)k^{\\beta}(0) \\, ,\n\\end{equation}\nwhere $\\omega_0 = \\omega(0)$ is the emitted frequency.\nThe first term on the right hand side of this equation can\nbe written as\n\\begin{equation}\n -v_{\\mu}k^{\\mu}(0)= -t_{\\mu}k^{\\mu}(0) - (v_{\\mu}-t_{\\mu})k^{\\mu}(0)\n = 1 - (v_{\\mu}-t_{\\mu})k^{\\mu}(0).\n\\end{equation}\nWith this, the fractional change in detected frequency between\nsource and detector is\n\\begin{equation} \\label{linearizedRF}\n \\frac{\\Delta \\omega}{\\omega_0} = \\frac{\\omega(\\lambda_0)-\\omega_0}{\\omega_0} \n= -(v_{\\mu}-t_{\\mu})k^{\\mu}(0) + v_{\\mu}\n \\int_0^{\\lambda_0} d\\lambda' \\, \\Gamma^{\\mu}_{\\alpha \\beta}(\\lambda')k^{\\alpha}(0)k^{\\beta}(0).\n\\end{equation}\nThe first term on the right hand side is a Doppler shift which may\nnot be zero even in flat space. The second term is a linearized\ngravitational redshift which depends on the intermediate geometry\nbetween source and detector. 
We will focus our attention on this\nsecond term.\n\n\\begin{figure}\n \n \\centering\n \\scalebox{.5}{\\includegraphics{spacetime.eps}}\\\\\n \\caption{A source moves along a worldline with tangent $t^{\\mu}$\n while a detector a proper distance $s$ away moves along a worldline\n with tangent $v^{\\mu}$. The source emits a ray at point D which has\n tangent $k^{\\mu}(\\lambda=0)$ at point D and tangent $k^{\\mu}(\\lambda_0)$ \n at A.\n Parallel propagation of $k^{\\mu}$\n around ABCD results in a slightly rotated vector $k^{\\mu} + \\Delta k^{\\mu}$.\n The closed path ABCD encloses the spacetime region of interest.}\n \\label{spacetime}\n\\end{figure}\n\n\n\\subsection{Rate of Change of Redshift}\n\nThe next step is to examine the rate of change of the redshift. We\nexamine the behavior of successive photons emitted from the source.\nOur interest lies in the change in frequency due to the effects of\ngravity rather than as a result of a variation of the output of the\nsource. This condition is enforced by requiring that the values of\n$k^{\\mu}$ at the start of any two null geodesics are related by\nparallel transport along the worldline of the source. Thus we assume\nthat $\\omega_0$ is constant between successive emissions and want\nto find $\\dot{\\omega}(\\lambda_0)$ at the detector due to changes in\nspacetime geometry. Initially at the detector,\n\\begin{equation} \\label{omega1}\n \\omega(\\tau_1,\\lambda_0) = \n-v_{\\mu}(\\tau_1)k^{\\mu}(\\tau_1,\\lambda_0)\\, \\omega_0\n\\end{equation}\nis the frequency at the detector at proper time $\\tau_1$. 
After a time $\\Delta\\tau$, at proper time $\\tau_2$, the frequency at\nthe detector is\n\\begin{equation} \\label{omega2}\n \\omega(\\tau_2,\\lambda_0) =\n -v_{\\mu}(\\tau_2)k^{\\mu}(\\tau_2,\\lambda_0)\\, \\omega_0 \\, .\n\\end{equation}\nTo find $v^{\\mu}(\\tau_2)$, we first note that $v^{\\mu}$ is tangent to\na geodesic:\n\\begin{equation}\n \\frac{dv^{\\mu}}{d\\tau} + \\Gamma^{\\mu}_{\\alpha\n \\beta}v^{\\alpha}v^{\\beta} = 0 \\, ,\n\\end{equation}\nso\n\\begin{equation}\n v^{\\mu}(\\tau_2) - v^{\\mu}(\\tau_1) \\approx \\frac{dv^{\\mu}}{d\\tau}\\Delta\\tau\n \\simeq -\\Gamma^{\\mu}_{\\alpha\\beta}v^{\\alpha}v^{\\beta}\\Delta\\tau.\n\\end{equation}\nThis depends on $\\Gamma^{\\mu}_{\\alpha\\beta}$ at the location of the\ndetector. For simplicity, we assume $\\Gamma^{\\mu}_{\\alpha\\beta} =\n0$ at the location of the detector. This can be achieved by\nassuming the detector is located in a flat region; more will be said\nabout this in the next subsection. Thus $v^{\\mu}(\\tau_2) =\nv^{\\mu}(\\tau_1) = v^{\\mu}.$\n\nTo find the change in $k^{\\mu}$, we parallel transport the vector\naround the closed path ABCD (see Fig.~\\ref{spacetime}). Recall that\nif we parallel transport a vector around a closed path, the change\nin the vector can be expressed as an integral of the Riemann tensor\nover the area enclosed by the path. 
For parallel transport around\nan infinitesimal parallelogram, we have for an arbitrary vector\n$V^{\\mu}$\n\\begin{equation}\n V^{\\mu} \\rightarrow V^{\\mu} + \\delta V^{\\mu}\\, ,\n\\end{equation} \nwhere\n\\begin{equation}\n \\delta V^{\\mu} =\n -R^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}V^{\\alpha}t^{\\nu}k^{\\beta}\\Delta\\lambda\\Delta\\tau.\n\\end{equation}\nIntegrating over $\\lambda$ and $\\tau$ transports $V^{\\mu}$ around a finite\nparallelogram so that\n\\begin{equation}\n \\Delta V^{\\mu} = -\\int d\\tau \\int d\\lambda \\,\n R^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}V^{\\alpha}t^{\\nu}k^{\\beta}.\n\\end{equation}\nSince we assume the detector is located in a flat region, we can\nfurther assume that $k^\\mu$ may be trivially transported from point $A$ to \npoint $B$, and we have \n\\begin{equation}\nk^{\\mu}(\\tau_1,\\lambda_0) + \\Delta k^{\\mu} = \nk^{\\mu}(\\tau_2,\\lambda_0) \\,.\n\\end{equation} \nThus\n\\begin{equation}\n \\Delta k^{\\mu} = k^{\\mu}(\\tau_2,\\lambda_0) - k^{\\mu}(\\tau_1,\\lambda_0) =\n - \\int_{\\tau_1}^{\\tau_2} d\\tau \\int_0^{\\lambda_0} d\\lambda \\,\n R^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}k^{\\alpha}t^{\\nu}k^{\\beta}\\,,\n\\end{equation}\nand from Eqs.~(\\ref{omega1}) and (\\ref{omega2})\n\\begin{equation}\n \\frac{\\Delta\\omega(\\lambda_0)}{\\omega_0} =\n \\frac{\\omega(\\tau_2,\\lambda_0)-\\omega(\\tau_1,\\lambda_0)}{\\omega_0}=\n -v_{\\mu}\\Delta k^{\\mu} = v_{\\mu}\\int_{\\tau_1}^{\\tau_2}\n d\\tau \\int_0^{\\lambda_0} d\\lambda \\,\n R^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}k^{\\alpha}t^{\\nu}k^{\\beta}.\n\\end{equation}\nThis equation can be thought of as relating the difference between\ntaking path DAB and path DCB. 
Taking limits in the previous equation\nyields the rate of change of the fractional redshift in the\nlinearized theory:\n\\begin{equation}\\label{RateOfChange}\n \\frac{d}{d\\tau}\\left(\\frac{\\omega(\\lambda_0)}{\\omega_0}\\right) \n= v_{\\mu} \\int_0^{\\lambda_0}\n d\\lambda \\,\n R^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}k^{\\alpha}t^{\\nu}k^{\\beta}.\n\\end{equation}\nAn equivalent formula has been obtained by Braginsky and Menskii~\\cite{BM71} \nusing the geodesic deviation equation.\n\n\\subsection{Fluctuating Redshift}\n\nNow suppose the Riemann tensor is subject to fluctuations - active,\npassive, or both. We first have\nto specify $t^{\\mu}$ and $v^{\\mu}$ and how they behave under the\nfluctuations. The simplest assumption is that $t^{\\mu}$ and\n$v^{\\mu}$ do not fluctuate, and in the underlying flat geometry\n$v^{\\mu} = t^{\\mu}$. Physically, this might be achieved via several\nmethods. We have already mentioned that we can assume the detector\nto be located in a flat region. However, one could also consider\nthe source and the detector rigidly attached to one another by\nnon-gravitational forces. The same effect might be achieved if the\nsource and detector are separately attached to platforms (e.g.\\\nplanets) which are large enough that they travel on an average\ngeodesic in the mean spacetime. This can happen if the spatial\naverage of the fluctuations over the platform is small. For our\npurposes it is sufficient to assume the perturbation vanishes at\nboth the source and detector, e.g., a gravity wave passes between\nsource and detector, but far from either source or detector; so we can\nassume the source and detector to be located in flat regions. 
The\nfluctuations of the Riemann tensor are described by the correlation\nfunction\n\\begin{equation}\\label{CorFunction}\n C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x') = \\langle\n R_{\\alpha\\beta\\mu\\nu}(x)\n R_{\\gamma\\delta\\rho\\sigma}(x')\\rangle -\n \\langle R_{\\alpha\\beta\\mu\\nu}(x)\\rangle\n \\langle R_{\\gamma\\delta\\rho\\sigma}(x')\\rangle,\n\\end{equation}\nwhere the indices $\\alpha\\beta\\mu\\nu$ refer to point $x$ while the\nindices $\\gamma\\delta\\rho\\sigma$ refer to point $x'$. With $v^{\\mu}\n= t^{\\mu}$, we get from Eq.~(\\ref{linearizedRF})\n\\begin{equation}\n \\xi\\equiv\\frac{\\Delta \\omega}{\\omega_0} = \n\\frac{\\omega(\\lambda_0)-\\omega_0}{\\omega_0} = \\int_0^{\\lambda_0} d\\lambda \\,\n \\Gamma^{\\mu}_{\\alpha\\beta}(\\lambda)k^{\\alpha}(0)k^{\\beta}(0)t_{\\mu}.\n\\end{equation}\nWe let $\\Gamma^{\\mu}_{\\alpha\\beta}$ fluctuate with fixed $k^{\\alpha}\n= k^{\\alpha}(0)$ and $t_{\\mu},$ so that\n\\begin{equation}\n \\langle\\xi\\rangle =\n \\int_0^{\\lambda_0}d\\lambda \\,\n \\langle\\Gamma^{\\mu}_{\\alpha\\beta}(\\lambda)\n \\rangle k^{\\alpha}k^{\\beta}t_{\\mu}.\n\\end{equation}\nThe integral in this equation is evaluated along a single line,\ne.g., along the paths DA or CB. The equation may be\ninterpreted in the following way: Suppose the spacetime is subject\nto quantum fluctuations. Then, given an ensemble of systems,\nmeasurement of the fractional redshift along the same line will\nyield different results. This equation gives the expectation value\nof those measurements. Therefore, comparing the result of an\nintegration along path DA with that along path CB will give the\nexpectation of the difference of the fractional redshifts. Another,\nmore convenient way to obtain the same information is via \nEq.~(\\ref{RateOfChange}). 
Notice that
\begin{equation}
 \frac{d\xi}{d\tau} =
 \frac{d}{d\tau}\left(\frac{\omega(\lambda_0)-\omega_0}{\omega_0}\right) =
 \frac{d}{d\tau}\left(\frac{\omega(\lambda_0)}{\omega_0}\right)\,,
\end{equation}
and thus from Eq.~(\ref{RateOfChange}),
\begin{equation}
 \frac{d\xi}{d\tau} = \int_0^{\lambda_0}
 d\lambda \,
 R_{\alpha\beta\mu\nu}(\tau,\lambda)t^{\alpha}k^{\beta}t^{\mu}k^{\nu}.
\end{equation}
Integrating this expression yields
\begin{equation}
 \Delta\xi = \xi\bigr|_{\tau_2}-\xi\bigr|_{\tau_1} =
 \int_{\tau_1}^{\tau_2} d\tau \int_0^{\lambda_0}
 d\lambda \,
 R_{\alpha\beta\mu\nu}(\tau,\lambda)t^{\alpha}k^{\beta}t^{\mu}k^{\nu}.
\end{equation}
We now let $R_{\alpha\beta\mu\nu}(\tau,\lambda)$ fluctuate, and find
that the expectation value $\langle\Delta\xi\rangle$ is
\begin{equation}
 \langle\Delta\xi\rangle = \int_{\tau_1}^{\tau_2}d\tau
 \int_0^{\lambda_0}d\lambda \, \langle
 R_{\alpha\beta\mu\nu}(\tau,\lambda)\rangle
 t^{\alpha}k^{\beta}t^{\mu}k^{\nu}.
\end{equation}

The expectation of the square, $\langle(\Delta\xi)^2\rangle$, is
related to the line broadening an observer will see. Recall that
$\xi$ is the fractional change in frequency of an observed spectral
line, the fractional redshift. Once the perturbation between
source and detector is quantized, the fractional redshift along a
given path must, strictly speaking, be described as an ensemble average.
An observer, over some proper time interval $\Delta\tau$, collects
information on a distribution of fractional redshifts;
$\langle(\Delta\xi)^2\rangle$ is the squared width
of this distribution. The physical realization of this measurement
is a broadening of the observed spectral line.

However, there is a contribution to spectral line broadening due to
regular time dependent variations of the spacetime, for example as a
result of passing classical gravity waves.
We thus characterize the\nfluctuation of redshift about the classical time dependent variation\nthat arises from a nonzero expectation value of the Riemann tensor.\nUsing Eq.~(\\ref{CorFunction}), we can express the variance of the\nfractional redshift, $\\delta\\xi^2$, as\n\\begin{equation}\n \\delta\\xi^2 = \\langle(\\Delta\\xi)^2\\rangle -\n \\langle\\Delta\\xi\\rangle^2 =\n \\int da \\int da' \\,\n C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x')\n t^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}t^{\\gamma}k^{\\delta}t^{\\rho}k^{\\sigma}.\n \\label{eq:delta_xi}\n\\end{equation}\nThe integration over the spacetime region $\\int da$ corresponds to\n$\\int_{\\tau_1}^{\\tau_2} d\\tau \\int_0^{\\lambda_0} d\\lambda$ and\nsimilarly for $da'$. This is an integration over the spacetime\nregion enclosed by the path ABCD in Fig.~\\ref{spacetime}. The\npoint $x$ corresponds to the point $(\\tau,\\lambda)$ and similarly\nfor $x'$. \n\n\n\\subsection{Fluctuating Angular Position}\n\nWe can also relate the degree of angular blurring of the source\nobserved by the detector to the Riemann tensor correlation function.\nLet $s^{\\mu}$ be a unit spacelike vector in a direction orthogonal\nto the direction of propagation of the null rays; thus\n$s_{\\mu}t^{\\mu} = s_{\\mu}k^{\\mu}(\\lambda=0) = 0$. Then at the\nobservation point\n\\begin{equation}\n s_{\\mu}k^{\\mu}(\\lambda_0) = \\tan\\Theta \\approx \\Theta,\n\\end{equation}\nwhere $\\Theta$ is an angle in the plane defined by the pair of\nspacelike vectors $s^{\\mu}$ and $n^{\\mu} = k^{\\mu}-t^{\\mu}$ and is\nassumed to be small, $|\\Theta| \\ll 1$. \nThe angle $\\Theta$ is the angular deviation in\nthe direction of $s^{\\mu}$ of\nthe image of the source from its classical flat space position. 
\nA treatment similar to that shown for fractional redshift allows\nus to express the change in angle, $\\Delta\\Theta$, in terms of an\nintegral of the Riemann tensor as\n\\begin{equation}\n \\Delta\\Theta = s_{\\mu}\\Delta k^{\\mu} = \\int da \\, R_{\\alpha\\beta\\mu\\nu}\n s^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}.\n\\end{equation}\nA fluctuating spacetime results in an ensemble distribution of image\npositions about the classical flat space position. Analogously to\nthe line broadening effects, the fluctuating angular position\nmanifests itself as a blurring of the source's image.\n$\\langle(\\Delta\\Theta)^2\\rangle$ is therefore a measure of the\nangular size of the image. The variance of $\\Delta\\Theta$,\n$\\delta\\Theta^2$, due to fluctuations in the Riemann tensor is\n\\begin{equation}\n \\delta\\Theta^2 = \\langle(\\Delta\\Theta)^2\\rangle -\n \\langle\\Delta\\Theta\\rangle^2 = \\int da\\int da' \\,\n C_{\\alpha\\beta\\mu\\nu \\, \\gamma\\delta\\rho\\sigma}(x,x')\n s^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}s^{\\gamma}k^{\\delta}t^{\\rho}k^{\\sigma}.\n \\label{eq:delta_theta}\n\\end{equation}\n\n\n\\subsection{Quantization of the Riemann Tensor} \n\nOur key results for the redshift fluctuations, Eq.~(\\ref{eq:delta_xi}), \nand for the angular blurring, Eq.~(\\ref{eq:delta_theta}), apply both to \nactive and passive fluctuations of spacetime geometry. We will present\nsome explicit examples for the case of active fluctuations. \n We examine fluctuations produced by gravitons\noccupying a squeezed state and then a thermal bath of gravitons in\nthe following two sections, respectively. First we describe some of\nthe formalism relevant to both cases. A linearized quantum field\ntheory for gravity is used, with the field operator expanded as\n\\begin{equation}\n \\hat{h}_{\\mu\\nu} = \n\\sum_{\\ell, p} \\left( A_{\\mu\\nu}\\hat{a}_{\\ell,p}\ne^{i\\ell_{\\tau} x^{\\tau}} + h.c. 
\\right)\\, ,\n\\end{equation}\nwhere $\\ell^\\mu$ is the wavevector and $p$ labels the polarization\nof a mode with polarization tensor $A_{\\mu\\nu} = A_{\\mu\\nu}(\\ell,p)$.\nIn this theory, the Riemann tensor operator is given by\n\\begin{equation} \\label{MetricDerivatives}\n \\hat{R}_{\\alpha\\beta\\mu\\nu}(x)=\n\\partial_{\\nu}\\partial_{[\\alpha}\\hat{h}_{\\beta]\\mu}\n -\\partial_{\\mu}\\partial_{[\\alpha}\\hat{h}_{\\beta]\\nu}.\n\\end{equation}\nHere the convention for antisymmetrization as found in Ref.~\\cite{MTW} is\nused, i.e.\\ $\\partial_{\\nu}\\partial_{[\\alpha}h_{\\beta]\\mu}=\n\\frac{1}{2}(\\partial_{\\nu}\\partial_{\\alpha}h_{\\beta\\mu} -\n\\partial_{\\nu}\\partial_{\\beta}h_{\\alpha\\mu})$. Henceforth $h_{\\mu\\nu}$\nand $R_{\\alpha\\beta\\mu\\nu}$ are understood to be operators, and the\nhat may be suppressed without confusion. The expectation value of\n$R_{\\alpha\\beta\\mu\\nu}$ is\n\\begin{equation}\n \\langle R_{\\alpha\\beta\\mu\\nu}(x)\\rangle =\n \\langle\\partial_{\\nu}\\partial_{[\\alpha}h_{\\beta]\\mu}\\rangle -\n \\langle\\partial_{\\mu}\\partial_{[\\alpha}h_{\\beta]\\nu}\\rangle \\,.\n\\end{equation}\nIt is convenient to define\n\\begin{eqnarray}\n K_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x') &=& \n \\langle\\partial_{\\nu}\\partial_{\\alpha}h_{\\beta\\mu}(x)\n \\partial_{\\sigma}'\\partial_{\\gamma}'h_{\\delta\\rho}\n (x')\\rangle -\n \\langle\\partial_{\\nu}\\partial_{\\alpha}h_{\\beta\\mu}(x)\\rangle\n \\langle\\partial_{\\sigma}'\\partial_{\\gamma}'\n h_{\\delta\\rho}(x')\\rangle \\nonumber \\\\\n&=& \\partial_{\\nu}\\partial_{\\alpha}\\partial_{\\sigma}'\n \\partial_{\\gamma}'\n \\left(\\langle h_{\\beta\\mu}(x)h_{\\delta\\rho}(x')\\rangle\n - \\langle h_{\\beta\\mu}(x)\\rangle\n \\langle h_{\\delta\\rho}(x')\\rangle\\right) \\, ,\n\\end{eqnarray}\nwhere $\\partial'$ denotes differentiation with respect to\n$x^{\\prime}$. 
The Riemann tensor correlation function may be expressed as\n\\begin{equation} \\label{CorFun}\n C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x') =\n 4 K_{[\\alpha\\beta][\\mu\\nu]\\,[\\gamma\\delta][\\rho\\sigma]}(x,x')\\,.\n\\end{equation}\n\n\n\n\\section{Gravitons in a Squeezed State}\n\\label{sec;squeeze}\n\\subsection{Single Mode Squeezed State}\n \nGravitons in a squeezed state are the natural result of quantum\ngraviton creation in a background gravitational field, such as in\nan expanding universe~\\cite{GS90}. Here we suppose that the region between \na source and a detector is filled with gravitons in a squeezed state, which\nproduce spacetime geometry fluctuations.\n A brief summary of the properties of squeezed\nstates needed for the following calculations may be found in the\nappendix. \nTo calculate $C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x')$, \nwe must calculate $\\langle h_{\\beta\\mu}(x)h_{\\delta\\rho}(x')\\rangle$ \nand $\\langle h_{\\beta\\mu}(x)\\rangle$. \nWe are interested in the change in quantum \nfluctuations between the vacuum state and a squeezed state, so we\nmay take the correlation function to be normal-ordered.\n We choose to evaluate the expectation\nvalue with respect to a gravity wave in a general single mode\nsqueezed state $\\vert\\alpha,\\zeta\\rangle$ (see the appendix), \nwhere $\\alpha$ and $\\zeta$ are the\ndisplacement and squeeze parameters, respectively, and the \nonly mode excited has\na specific wave vector $\\ell^\\mu$, frequency $\\omega_g=\\ell^0$ and \npolarization $p$. We may\nnow proceed to calculate $\\langle\\zeta,\\alpha\\vert\nh_{\\mu\\nu}\\vert\\alpha,\\zeta\\rangle$, $\\langle\\zeta,\\alpha\\vert\nh_{\\mu\\nu}(x)h_{\\alpha\\beta} (x')\\vert\\alpha,\\zeta\\rangle$ and then find\n$:K_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x'):$. 
The result\nis:\n\\begin{equation}\n :K_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x'):\n =\\ell_{\\alpha}\\ell_{\\nu}\\ell_{\\gamma}\\ell_{\\sigma}A_{\\beta\\mu}\n A_{\\delta\\rho}f(x,x')\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n f(x,x') =\n [\\cosh(2r)-1] \\cos[\\ell_{\\tau}(x^{\\tau}-x^{\\prime\\tau})]-\n \\sinh(2r)\\cos[\\ell_{\\tau}(x^{\\tau}+x^{\\prime\\tau})+\\theta].\n\\end{equation}\nThe parameters $r$ and $\\theta$ are defined such that\n$\\zeta=re^{i\\theta}$ (see the appendix). Using\nEq.~(\\ref{CorFun}), the normal ordered Riemann tensor correlation \nfunction can be expressed as:\n\\begin{equation}\n :C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x'): = 4\\;\n (\\ell_{[\\alpha}A_{\\beta][\\mu}\\ell_{\\nu]})(\\ell_{[\\gamma}A_{\\delta ][\n \\rho}\\ell_{\\sigma]})f(x,x').\n\\end{equation}\nNote that the correlation function depends only upon the squeeze parameter\n$\\zeta$, and not upon the coherent state parameter $\\alpha$.\n\nWe now use this normal ordered version of the Riemann tensor correlation\nfunction in Eqs.~(\\ref{eq:delta_xi}) and (\\ref{eq:delta_theta}). 
\nWhen performing the\n$da$ and $da'$ spacetime integrations on $f(x,x')$, it is convenient to\nuse null coordinates, $u=t-x$, $v=t+x$ (see Fig.~\\ref{spacetime2}).\n\\begin{figure}\n \n \\centering\n \\scalebox{.5}{\\includegraphics{spacetime2.eps}}\\\\\n \\caption{The spacetime region between the source and detector in \n null coordinates.\n For each fixed $u = u^*$ between $u = 0$ and $u = t_0$, one integrates\n along the line $u = u^*$ from $v = u^*$ to $v = u^*+2s$.}\n \\label{spacetime2}\n\\end{figure}\nThe result is\n\\begin{eqnarray} \n F(\\omega_g,s,t_0) &=& \\int da \\int da' \\, f(x,x') = \\int_0^{t_{0}}du\n \\int_0^{t_{0}} du' \\int_u^{u+2s} dv \\int_{u'}^{u'+2s} dv' \\,\n f(x,x') \\nonumber \\\\\n &=& \\frac{16\\{1-\\cos[s(\\ell_x-\\omega_g)] \\} \\,[1-\\cos(\\omega_g t_0)]}\n {\\omega_g^2(\\ell_x-\\omega_g)^2} \\nonumber \\\\\n &\\times& [\\cosh(2r)-\\sinh(2r)\\cos(\\theta+s \\ell_x-\\omega_g(t_{0}+s))-1]\\,.\n \\label{F}\n\\end{eqnarray}\nHere $s$ is the spatial separation of source and detector, and $t_0$\nis the observation time. It should be noted that, since we integrate\nover a spacetime slice of constant y and z, we may \nset $y=y'=z=z'=0$. 
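As a consistency check, the closed form for $F(\omega_g,s,t_0)$ in Eq.~(\ref{F}) can be compared with a direct numerical quadrature of $f(x,x')$ over the null-coordinate region of Fig.~\ref{spacetime2}. The following sketch (a check only, not part of the derivation) specializes to $\ell_x=0$, uses metric signature $(-,+,+,+)$ so that $\ell_\tau x^\tau = -\omega_g t$ on the integration slice, and takes arbitrary illustrative parameter values:

```python
import numpy as np

# Illustrative parameters: graviton frequency, separation, observation
# time, squeeze magnitude r and squeeze phase theta.
wg, s, t0, r, th = 1.3, 0.7, 1.1, 0.4, 0.9

# Midpoint grids: u, u' in [0, t0]; v = u + w with w in [0, 2s].
n = 40
u = (np.arange(n) + 0.5) * t0 / n
w = (np.arange(n) + 0.5) * 2 * s / n
du, dw = t0 / n, 2 * s / n

U, W, U2, W2 = np.meshgrid(u, w, u, w, indexing='ij', sparse=True)
A, A2 = 0.5 * wg * (2*U + W), 0.5 * wg * (2*U2 + W2)   # omega_g * t at x, x'

# f(x,x') with ell_x = 0, so that ell_tau x^tau = -omega_g t.
f = (np.cosh(2*r) - 1) * np.cos(A - A2) \
    - np.sinh(2*r) * np.cos(-(A + A2) + th)
F_num = f.sum() * (du * dw)**2

# Closed form, Eq. (F) with ell_x = 0.
F_exact = (16 / wg**4) * (1 - np.cos(wg*s)) * (1 - np.cos(wg*t0)) \
          * (np.cosh(2*r) - np.sinh(2*r)*np.cos(th - wg*(t0 + s)) - 1)
print(abs(F_num / F_exact - 1))   # small relative difference
```

A midpoint rule on this modest grid reproduces the closed form to within about a percent for these parameters.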
Finally, from Eq.~(\\ref{eq:delta_xi}), the\nredshift fluctuations become:\n\\begin{equation}\n \\delta\\xi^2 =\n 4\\, (\\ell_{[\\alpha} A_{\\beta ][\n \\mu}\\ell_{\\nu]})(\\ell_{[\\gamma}A_{\\delta ] [\n \\rho}\\ell_{\\sigma]})t^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}\n t^{\\gamma}k^{\\delta}t^{\\rho}k^{\\sigma}F(\\omega_g,s,t_0).\n\\end{equation}\nWhile from Eq.~(\\ref{eq:delta_theta}), the angular fluctuations are given by\n\\begin{equation}\n \\delta\\Theta^2 =\n 4\\, (\\ell_{[\\alpha} A_{\\beta ][\n \\mu}\\ell_{\\nu]})(\\ell_{[\\gamma}A_{\\delta ][\n \\rho}\\ell_{\\sigma]})s^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}s^{\\gamma}k^{\\delta}\n t^{\\rho}k^{\\sigma}F(\\omega_g,s,t_0).\n\\end{equation}\nInterestingly, since $F(\\omega_g,s,t_0)$ is independent of the\ndisplacement parameter, $\\alpha$, so are $\\delta\\xi^2$ and\n$\\delta\\Theta^2$. Therefore, the fluctuations depend only on the\nsqueezing parameter, $\\zeta$, and we can immediately say that a coherent\nstate (classical wave) for which $\\zeta=0$ induces no fluctuations. For\nthe following calculations, we assume the passing gravity waves are\nin the Transverse Tracefree~(TT) Gauge. However, note that since our\nresults derive from the Riemann tensor, the equations are gauge\ninvariant, and we use the TT gauge only for calculational convenience.\n\nTo begin, note that if the gravity waves and photons travel in\nparallel, then one finds $(\\ell_{[\\alpha} A_{\\beta ][ \\mu}\\ell_{\\nu]})\nt^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu} = 0$. This indicates that there\nis no induced redshift fluctuation nor angular blurring due to a\nsqueezed state gravity wave traveling with the photons. This is as\none might expect, since a gravity wave in the TT gauge has only\nnon-zero transverse components. 
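This vanishing contraction, together with the transverse-propagation values used in the following subsections, can be checked numerically by constructing $\ell_{[\alpha}A_{\beta][\mu}\ell_{\nu]}$ explicitly. The sketch below (illustrative amplitudes, signature $(-,+,+,+)$) is an independent check, not part of the derivation:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # metric, signature (-,+,+,+)
t = np.array([1.0, 0.0, 0.0, 0.0])          # detector four-velocity
k = np.array([1.0, 1.0, 0.0, 0.0])          # photon null vector (+x)
sy = np.array([0.0, 0.0, 1.0, 0.0])         # transverse unit vector (y)

def contraction(l_up, A, a, b, c, d):
    # (l_[alpha A_beta][mu l_nu]) a^alpha b^beta c^mu d^nu,
    # antisymmetrized on the index pairs [alpha beta] and [mu nu].
    l = eta @ l_up
    T = 0.25 * (np.einsum('a,bm,n->abmn', l, A, l)
              - np.einsum('b,am,n->abmn', l, A, l)
              - np.einsum('a,bn,m->abmn', l, A, l)
              + np.einsum('b,an,m->abmn', l, A, l))
    return np.einsum('abmn,a,b,m,n->', T, a, b, c, d)

wg, Ap, Ax = 2.0, 0.3, 0.5                  # illustrative values

# Gravity wave parallel to the photons (+x); TT polarization in the y-z plane.
A_par = np.zeros((4, 4))
A_par[2, 2], A_par[3, 3] = Ap, -Ap
A_par[2, 3] = A_par[3, 2] = Ax
l_par = wg * np.array([1.0, 1.0, 0.0, 0.0])
print(contraction(l_par, A_par, t, k, t, k))   # -> 0: no effect

# Transverse wave (+z); TT polarization in the x-y plane.
A_tr = np.zeros((4, 4))
A_tr[1, 1], A_tr[2, 2] = Ap, -Ap
A_tr[1, 2] = A_tr[2, 1] = Ax
l_tr = wg * np.array([1.0, 0.0, 0.0, 1.0])
print(contraction(l_tr, A_tr, t, k, t, k))     # -> -wg**2 * Ap / 4
print(contraction(l_tr, A_tr, sy, k, t, k))    # -> +wg**2 * Ax / 4
```

The parallel case gives zero for either polarization, while the transverse case reproduces the contractions quoted below for the $(+)$ and $(\times)$ polarizations.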
We may subsequently restrict our
attention to transversely propagating gravity waves and will assume
the gravity waves to propagate with $\ell^{\mu}=\omega_g(1,0,0,1)$
while the photons continue to have wave vector $k^{\mu}=(1,1,0,0)$.
We define
\begin{equation}
 \mathfrak{F}(\omega_g,s,t_0)= [1-\cos(\omega_g s)][1-\cos(\omega_g t_0)]
 [\cosh(2r)-\sinh(2r)\cos(\theta-\omega_g(t_{0}+s))-1]\, ,
\end{equation}
so that for a gravitational wave propagating in the $z$-direction, for which
$\ell_x=0$,
\begin{equation}
 F(\omega_g,s,t_0) = \frac{16}{{\omega_g}^4}\mathfrak{F}(\omega_g,s,t_0) \,.
\end{equation}


\subsubsection{Redshift Fluctuations} \label{rfcase}

With transversely propagating gravity waves,
\begin{equation} (\ell_{[\alpha} A_{\beta ][
 \mu} \ell_{\nu]})t^{\alpha}k^{\beta}t^{\mu}k^{\nu} = 
-\frac{1}{4}\omega_g^2 A_{+}\, , 
\end{equation}
and therefore
\begin{equation} \label{rfcase2eq}
 \delta\xi^2 = 4\, A_{+}^2 \mathfrak{F}(\omega_g,s,t_0).
\end{equation}
There exists a non-zero effect which in this case depends only on
the $(+)$ polarization. For a gravity wave propagating in the $z$-direction,
$A_+ = A_{xx} =-A_{yy}$.

\subsubsection{Angular Blurring} \label{abcase}
\paragraph{Case 1} 
Let the photons
propagate in the $x$-direction while the gravity waves propagate in
the $z$-direction, and probe the $y$ component of angular blurring,
$s^{\mu}=(0,0,1,0)$. We find
\begin{equation} (\ell_{[\alpha} A_{\beta ][
 \mu}\ell_{\nu]})s^{\alpha}k^{\beta}t^{\mu}k^{\nu} = 
\frac{1}{4}\omega_g^2 A_{\times}
\end{equation}
and
\begin{equation} \label{abcase2eq}
 \delta\Theta_y^2 = 4\, A_{\times}^2\mathfrak{F}(\omega_g,s,t_0).
\end{equation}
Note that here the $y$ component of blurring depends only on the
$(\times)$ polarization of the gravity waves.
\nHere $A_{\\times} = A_{xy}=A_{yx}$.\n\n\\paragraph{Case 2} \nTo probe the z component of blurring, set\n$s^{\\mu}=(0,0,0,1)$. The result is\n\\begin{equation} \\label{abcase3eq}\n \\delta\\Theta_z^2 = 4\\, A_{+}^2\\mathfrak{F}(\\omega_g,s,t_0)\\, ,\n\\end{equation}\nwhich is the same as Eq.~(\\ref{rfcase2eq}) for redshift fluctuations,\nand only depends on the $(+)$ polarization of the gravity waves.\n\n\n\\subsection{Classical Time Dependent Variation} \n\n\nAs noted above, the fluctuations in redshift and angle depend upon\nthe degree of squeezing, measured by the parameter $\\zeta$. In this\nsubsection, we will examine the expectation value of the change in\nfractional redshift, $\\langle\\Delta\\xi\\rangle$ and in angle,\n$\\langle\\Delta\\Theta\\rangle$. These quantities will depend only\nupon the displacement parameter $\\alpha$, and hence would be the same\nin the coherent state $|\\alpha,0 \\rangle$ as they are in the squeezed\nstate $|\\alpha,\\zeta \\rangle$. For this reason, we regard these\nquantities as giving the classical time-dependence. Alternative approaches \nto the classical time-dependence of these quantities may be found in \nRefs.~\\cite{Weber, Zipoy, Kaufmann}\n\n\nHere it is possible to\ncalculate $\\langle\\Delta\\xi\\rangle$ as a single integration of the\nRiemann tensor over the spacetime region $da$ via \nEq.~(\\ref{RateOfChange}). However, for comparison purposes, we calculate\n$\\langle\\Delta\\xi\\rangle^2$ directly. 
In this case,
\begin{equation}
 \langle\Delta\xi\rangle^2 = \int da \int da'
 \,\langle\hat{R}_{\alpha\beta\mu\nu}(x)\rangle
 \langle\hat{R}_{\gamma\delta\rho\sigma}(x')\rangle
 t^{\alpha}k^{\beta}t^{\mu}k^{\nu}t^{\gamma}k^{\delta}t^{\rho}k^{\sigma}\, ,
\end{equation}
where now
\begin{equation}
 \langle\hat{R}_{\alpha\beta\mu\nu}(x)\rangle
 \langle\hat{R}_{\gamma\delta\rho\sigma}(x')\rangle = 4 \sum_{\ell}
 (\ell_{[\alpha} A_{\beta][\mu}\ell_{\nu]})(\ell_{[\gamma}
 A_{\delta][\rho}\ell_{\sigma]}) f'(x,x')
\end{equation}
with
\begin{equation}
 f'(x,x') =
 \alpha^2 e^{i\ell_{\tau} (x^{\tau}+x^{\prime\tau})} +
 (\alpha^{\ast})^2 e^{-i\ell_{\tau} (x^{\tau}+x^{\prime\tau})} +
 2\vert\alpha\vert^2\cos[\ell_{\tau} (x^{\tau}-x^{\prime\tau})].
\end{equation}
Integration of $f'(x,x')$ over the spacetime region of 
Fig.~\ref{spacetime} yields
\begin{eqnarray}
& & F'(\omega_g,s,t_0) = \int da \int da' \, f'(x,x') = \nonumber \\
& & \frac{16\{1-\cos[s(\ell_x-\omega_g )]\} [1-\cos(\omega_g t_0)]}
 {\omega_g^2(\ell_x-\omega_g)^2} 
 \left(\alpha e^{\frac{i}{2}[\ell_x s-\omega_g(s+t_0)]}
 + \alpha^{\ast} e^{-\frac{i}{2}[\ell_x s-\omega_g(s+t_0)]}\right)^2.
\end{eqnarray}
Thus the classical time variation of the redshift is characterized by
\begin{equation}\label{classical}
 \langle\Delta\xi\rangle^2 =
 4\, F'(\omega_g,s,t_0)
 (\ell_{[\alpha} A_{\beta][\mu}\ell_{\nu]})(\ell_{[\gamma}
 A_{\delta][\rho}\ell_{\sigma]})
 t^{\alpha}k^{\beta}t^{\mu}k^{\nu}t^{\gamma}k^{\delta}t^{\rho}k^{\sigma}.
\end{equation}
For the case of redshift variations with photons and a gravity wave
propagating perpendicularly (Sect.~\ref{rfcase}),
\begin{equation} \label{rfclassical}
 \langle\Delta\xi\rangle^2 =
 4 \,A_{+}^2 \mathfrak{F'}(\omega_g,s,t_0).
\end{equation}
Here $\mathfrak{F'}(\omega_g,s,t_0)$ is
defined in a similar way as
$\mathfrak{F}(\omega_g,s,t_0)$ when $\ell_x=0$:
\begin{equation}
 \mathfrak{F'}(\omega_g,s,t_0) = [1-\cos(\omega_g s)][1-\cos(\omega_g t_0)]
 \left(\alpha e^{-\frac{i\omega_g}{2}(s+t_0)}
 + \alpha^{\ast} e^{\frac{i\omega_g}{2}(s+t_0)}\right)^2.
\end{equation}
The classical time variation of angular position yields similar
results. In particular, for perpendicularly propagating photons and
gravity waves, probing the $y$ component of blurring (case 1 of
Sect.~\ref{abcase}) one finds
\begin{equation} \label{abclassical1}
 \langle\Delta\Theta_y\rangle^2 = 4\,A_{\times}^2 \mathfrak{F}'(\omega_g,s,t_0),
\end{equation}
while probing the $z$ component of blurring (case 2 of Sect.~\ref{abcase})
gives
\begin{equation} \label{abclassical}
 \langle\Delta\Theta_z\rangle^2 = 
 4 \,A_{+}^2 \mathfrak{F}'(\omega_g,s,t_0).
\end{equation}
Note that the function $F'(\omega_g,s,t_0)$, and hence 
$\mathfrak{F'}(\omega_g,s,t_0)$, depends on the
displacement parameter $\alpha$, but not on the squeeze parameter
$r$. Therefore the same can be said of the results in
Eqs.~(\ref{rfclassical}), (\ref{abclassical1}), and (\ref{abclassical}).
In particular, $\mathfrak{F'}(\omega_g,s,t_0)=0$ for $\alpha=0$. This means that a
coherent state (classical wave), for which $r=0, \alpha\neq 0$,
exhibits regular time variations but no fluctuations. Indeed,
recall from Eq.~(\ref{F}) that $\delta\xi^2=\delta\Theta^2=0$
for $r=0$. We will exploit this fact shortly by considering a state
for which $\alpha = 0$.

In a time-averaged measurement, the classical time variation will
produce line broadening and angular blurring, just as the quantum
spacetime fluctuations from a squeezed vacuum state do. The two effects can, 
however, be distinguished in principle.
If one were to make repeated\nmeasurements at the same point in the cycle of a gravity wave, the\neffects of classical time dependence would disappear, but those of\nquantum fluctuations would remain.\n\n\n\\subsection{Stress Tensor for Squeezed State Gravity Waves} \n\nWe\nwould like to obtain an order of magnitude estimate for\n$\\langle\\left(\\Delta\\xi\\right)^2\\rangle$ and\n$\\langle(\\Delta\\Theta)^2\\rangle$. However, the gravitational wave\namplitude $A_{\\mu\\nu}$ is not directly measurable. We therefore\nwould like to express the previous results in terms of the energy\ndensity of the waves. While the stress-energy tensor is not well\ndefined for a gravitational wave, one can define an effective stress\ntensor in the linearized theory (see, e.g., Ref.~\\cite{MTW}).\nClassically, this effective stress tensor is defined as\n\\begin{equation}\n T_{\\mu\\nu}^{GW} = \\frac{1}{32 \\pi}\\, \\langle\n h^{TT}_{\\alpha\\beta,\\mu}h^{TT}_{\\alpha\\beta,\\nu} \\rangle\\, .\n\\end{equation}\nHere the brackets denote a spatial average over several\nwavelengths and the TT superscript indicates we are working in the\nTransverse Tracefree gauge. 
We use this expression with the
expectation value of a gravity wave in a squeezed state, and
perform a spatial average on the quantity
\begin{equation}
 \langle\zeta,\alpha\vert :\hat{h}^{TT}_{\alpha\beta,\mu}
 \hat{h}^{TT}_{\alpha\beta,\nu}:
 \vert\alpha,\zeta\rangle \, .
\end{equation}
This operation results in an effective stress tensor of the form
\begin{equation}
 T_{\mu\nu} = \frac{1}{32 \pi}\, \sum_{\alpha,\beta}A_{\alpha\beta}^2
 \ell_{\mu}\ell_{\nu} [2\vert\alpha\vert^2+\cosh(2r)]\, ,
\end{equation}
while
\begin{equation}\label{T}
 :T_{\mu\nu}: \, = \frac{1}{32 \pi}\,\sum_{\alpha,\beta}A_{\alpha\beta}^2
 \ell_{\mu}\ell_{\nu} [2\vert\alpha\vert^2+\cosh(2r)-1].
\end{equation}
The vacuum energy term, $T_{00}^{vac} = T_{00}-:T_{00}:$, is then
\begin{equation}
 T_{00}^{vac} =
 \frac{1}{32 \pi}\,\sum_{\alpha,\beta}A_{\alpha\beta}^2\omega_g^2\,.
\end{equation}
Examining the contribution from the $(+)$ polarization, where
$A_+ = A_{xx} = -A_{yy}$ and $A_\times = A_{xy} = A_{yx} = 0$, one finds
\begin{equation}
 T_{00}^{vac} = \frac{1}{16 \pi}\, A_{+}^2\omega_g^2.
\end{equation}
Since the quantum vacuum energy is $\frac{1}{2}\omega_g$ per mode, we
can use the vacuum term to fix the normalization:
\begin{equation}
 VT_{00}^{vac}= \frac{1}{16 \pi}\, A_{+}^2\omega_g^2V=\frac{1}{2}\omega_g
\end{equation}
for each $\bm{\ell}$, where $V$ is the quantization volume, which leads to
\begin{equation}\label{Amplitude}
 A_{+}= \sqrt{\frac{8 \pi}{\omega_gV}}.
\end{equation}
For the $(\times)$ polarization, one also finds
\begin{equation}
 A_{\times}= \sqrt{\frac{8 \pi}{\omega_gV}}.
\end{equation}
The previous results, Eqs.~(\ref{rfcase2eq}),
(\ref{abcase2eq}), and (\ref{abcase3eq}), become
\begin{equation}
 \delta\xi^2 = \delta\Theta_y^2 = \delta\Theta_z^2 =
 \frac{32 \pi}{\omega_gV} \, 
\\mathfrak{F}(\\omega_g,s,t_0)\\,.\n\\end{equation}\nSo far, we have considered a single mode. However, we may also have a \nsituation in which many modes are excited. In this case, we would insert a\nsum on modes on the right hand side of the above equation.\nIf the density of states is large, we let\n\\begin{equation}\n \\sum_{\\ell} \\rightarrow \\frac{V}{(2\\pi)^3}\\int{d^3 \\bm{\\ell}}. \n\\end{equation}\nSuppose the distribution of gravity waves in \n$\\bm{\\ell}$-space is narrowly\npeaked about some $\\omega_g$ with characteristic width\n$\\Delta\\omega_g$. If $\\Delta\\omega_g$ is small, then the integrand\nis essentially constant over the region of integration. The result\nof the integration is just the integrand multiplied by a volume,\n$(\\Delta \\ell_x)(\\Delta \\ell_y)(\\Delta\n \\ell_z)$, in $\\bm{\\ell}$-space, with \n$\\Delta \\ell_i$ a bandwidth in the $i$-direction of\n$\\bm{\\ell}$-space. Specifically,\n\\begin{equation} \\label{Eint}\n \\delta\\xi^2 = \\frac{\\omega_g^3}{4 \\pi^2}\\, \n(\\Delta \\ell_x)(\\Delta \\ell_y)(\\Delta \\ell_z)\\, F(\\omega_g,s,t_0).\n\\end{equation}\n\n\n\\subsection{Estimate of $\\delta\\xi^2$ and $\\delta\\Theta^2$} \n\nRecall that from Eqs.~(\\ref{rfclassical}),\n(\\ref{abclassical1}), and (\\ref{abclassical}), the classical time\nvariation depends on the displacement parameter, $\\alpha$, but not\non the squeeze parameter $\\zeta$. For the purpose of obtaining order of\nmagnitude estimates of $\\delta\\xi^2$ and $\\delta\\Theta^2$, suppose\n$\\alpha = 0$ and $r \\gg 1$; further assume the $+$ polarization, so\nthat $A_{\\times}=0$. Since for $\\alpha = 0$, \n$\\mathfrak{F'}(\\omega_g,s,t_0) = 0$,\nit is clear that $\\langle\\Delta\\xi\\rangle^2 = 0$ and\n$\\langle\\Delta\\Theta\\rangle^2 = 0$, from Eqs.~(\\ref{rfclassical}) and\n(\\ref{abclassical}), respectively. 
Therefore, for the case
$\alpha=0$ one finds $\delta\xi^2 = \langle(\Delta\xi)^2\rangle$ and
$\delta\Theta^2 = \langle(\Delta\Theta)^2\rangle$.


Using Eqs.~(\ref{T}) and (\ref{Amplitude}), and integrating over 
a sharply peaked distribution
function in $\bm{\ell}$-space, the energy density for large $r$ becomes
\begin{equation}
 :T_{00}: \, \approx 
\frac{\omega_g e^{2r}}{4(2\pi)^3}\Delta \ell_x \Delta \ell_y
 \Delta \ell_z.
\end{equation}
From Eqs.~(\ref{F}) and (\ref{Eint}), we have for large $r$,
\begin{equation}
 \langle(\Delta\xi)^2\rangle \approx
 \frac{2 \,e^{2r}}{\pi^2 \omega_g}\Delta \ell_x \Delta \ell_y \Delta \ell_z.
\end{equation}
Thus
\begin{equation}
 \langle(\Delta\xi)^2\rangle \approx
 64 \pi \frac{:T_{00}:}{\omega_g^2} =
 64 \pi \frac{:T_{00}:}{\omega_g^2}\,\ell_{\rm Pl}^2\, ,
\end{equation}
where the first form is in Planck units, the second form restores
conventional units, and $\ell_{\rm Pl}$ is the Planck length.
With $\omega_g = 2 \pi/\lambda_g$, we have
\begin{equation}
 \langle(\Delta\xi)^2\rangle \approx
\frac{16}{\pi} \ell_{\rm Pl}^2\lambda_g^2 :T_{00}: 
\quad \mbox{or} \quad 
 (\Delta\xi)_{rms}\approx
 \ell_{\rm Pl} \lambda_g \sqrt{:T_{00}:} \, .
\end{equation}
Suppose, for example, a gravity wave with $\lambda_g = 1\,\mbox{ly}
=10^{18}\, \mbox{cm}$ and the closure energy density $:T_{00}:\,=10^{8}\,
\mbox{cm}^{-4}$. Then with $\ell_{\rm Pl}=10^{-33}\,\mbox{cm}$,
\begin{equation} \label{magnitude}
 (\Delta\xi)_{rms} \approx 10^{-33}
 \times 10^{18} \times \sqrt{10^{8}} = 10^{-11}\,.
\end{equation}
By comparing Eqs.~(\ref{rfcase2eq}) and (\ref{abcase3eq}), the
order of magnitude of $(\Delta\Theta)_{rms}$ will be the
same as that of $(\Delta\xi)_{rms}$.

In principle, the effect can be made as large as desired by
increasing the squeezing parameter $r$, which increases the energy
density of the wave.
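The order-of-magnitude arithmetic leading to Eq.~(\ref{magnitude}) can be checked directly, using the estimate $(\Delta\xi)_{rms}\approx \ell_{\rm Pl}\lambda_g\sqrt{:T_{00}:}$ with all quantities in centimeter units:

```python
l_pl  = 1e-33   # Planck length in cm
lam_g = 1e18    # gravity-wave wavelength, 1 ly in cm
T00   = 1e8     # closure energy density in cm^-4
dxi_rms = l_pl * lam_g * T00**0.5
print(dxi_rms)  # -> about 1e-11
```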
However, as shown from the estimate\ngiven, this is a very small effect and is likely to be unobservable\nin the present day universe. This example serves as a useful model for\nspacetime geometry fluctuations, which will have large effects in the early\nuniverse and in the vicinity of black holes. \n\n\n\\section{Thermal Bath of Gravitons}\n\\label{sec:thermal}\n\n\nIn the previous section, the fluctuation effects of a gravitational\nwave in a single mode squeezed state were examined. Another useful\nexample is a thermal bath of gravitons, such as might be created by\nan evaporating black hole. We now consider such a\nthermal bath as the source of spacetime fluctuations.\nIn particular, suppose the spacetime geometry fluctuates in such a\nway that $\\langle\\Gamma_{\\alpha\\beta}^{\\mu}\\rangle = \\langle\nR^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}\\rangle = 0$, but $\\langle\nR^{\\mu}_{\\phantom{\\mu}\\alpha\\nu\\beta}R^{\\gamma}_{\\phantom{\\gamma}\n\\delta\\rho\\sigma}\\rangle_{\\beta} \\neq 0$. In effect, we are\nignoring the average spacetime curvature due to the bath of\ngravitons. 
In this case the Riemann tensor correlation function is\n\\begin{equation}\n C_{\\alpha\\beta\\mu\\nu\\,\\gamma\\delta\\rho\\sigma}(x,x') = \n\\langle R_{\\alpha\\beta\\mu\\nu}(x)\n R_{\\gamma\\delta\\rho\\sigma}(x')\\rangle_{\\beta}\\, ,\n\\end{equation}\nand therefore\n\\begin{equation} \\label{thermalRF1}\n \\delta\\xi^2 = \\langle(\\Delta\\xi)^2\\rangle = \\int da \\int da' \\, \\langle\n R_{\\alpha\\beta\\mu\\nu}(x)\n R_{\\gamma\\delta\\rho\\sigma}(x')\\rangle_{\\beta} \\,\n t^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}t^{\\gamma}k^{\\delta}t^{\\rho}k^{\\sigma}\n\\end{equation}\nand\n\\begin{equation} \\label{thermalAB1}\n \\delta\\Theta^2 = \\langle(\\Delta\\Theta)^2\\rangle = \\int da \\int da' \\,\n \\langle R_{\\alpha\\beta\\mu\\nu}(x)\n R_{\\gamma\\delta\\rho\\sigma}(x')\\rangle_{\\beta} \\,\n s^{\\alpha}k^{\\beta}t^{\\mu}k^{\\nu}s^{\\gamma}k^{\\delta}t^{\\rho}k^{\\sigma}\\, ,\n\\end{equation}\nwhere\n$\\langle R_{\\alpha\\beta\\mu\\nu}(x)\nR_{\\gamma\\delta\\rho\\sigma}(x')\\rangle_{\\beta}$ is the thermal normal-ordered\nRiemann tensor two point function.\n\n\n\\subsection{Redshift Fluctuations}\n\nIn the average rest frame of the source and detector, let $v^{\\mu} =\nt^{\\mu} = (1,0,0,0)$ and $k^{\\mu} = (1,1,0,0)$ as\npreviously. With this and the symmetry and cyclic properties of the\nRiemann tensor, Eq.~(\\ref{thermalRF1}) reduces to\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle = \\int da \\int da' \\,\n \\langle R_{0101}(x)R_{0101}(x')\\rangle_{\\beta}.\n\\end{equation}\nWe construct the thermal Riemann tensor two point function via the\nMatsubara method as an infinite image sum in imaginary time of the \nvacuum two point function. We proceed for the moment in the\nFeynman gauge, but since we are strictly dealing with functions of\nthe Riemann tensor, the result will be gauge independent. 
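In the reduction below, primed derivatives acting on the vacuum scalar two point function $D$ can be traded for unprimed ones, e.g.\ $D,_{txt'x'} = \partial_t^2\partial_x^2 D$, because $D$ depends only on the coordinate differences $t-t'$ and $x-x'$. This bookkeeping is easy to confirm symbolically; the sketch below (a check, not part of the derivation) uses $D$ restricted to the $t$-$x$ plane:

```python
import sympy as sp

t, x, tp, xp = sp.symbols("t x tp xp", real=True)
# Vacuum scalar two-point function restricted to the t-x plane
# (tp, xp stand for the primed coordinates t', x').
D = 1 / (4 * sp.pi**2 * ((x - xp)**2 - (t - tp)**2))

d_txtx = sp.diff(D, t, x, tp, xp)    # D_{,t x t' x'}
d_xxtt = sp.diff(D, x, 2, tp, 2)     # D_{,x x t' t'}
d_ttxx = sp.diff(D, t, 2, xp, 2)     # D_{,t t x' x'}
target = sp.diff(D, t, 2, x, 2)      # del_t^2 del_x^2 D

assert sp.simplify(d_txtx - target) == 0
assert sp.simplify(d_xxtt - target) == 0
assert sp.simplify(d_ttxx - target) == 0
print("all three mixed derivatives equal del_t^2 del_x^2 D")
```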
From\nEq.~(\\ref{MetricDerivatives}) and the symmetry properties of\nthe metric tensor, the Riemann tensor vacuum two-point function can\nbe written as\n\\begin{multline}\n \\langle R_{0101}(x)R_{0101}(x')\\rangle \\\\ \\shoveleft{=\\frac{1}{4}\\langle(\n 2h_{01,01} - h_{00,11} - h_{11,00})(x)(2h_{01,01} - h_{00,11} - h_\n {11,00})(x')\\rangle} \\\\ = \\frac{1}{4}[4G_{0101,0101} -\n 2G_{0100,0111}-2G_{0111,0100} -2G_{0001,1101} + G_{0000,1111}\n \\\\ + G_{0011,1100} -2G_{1101,0001} + G_{1100,0011} +\n G_{1111,0000}].\n\\end{multline}\nHere, $G_{\\alpha\\beta\\mu\\nu}$ is the metric two point function. The\nfirst two indices on $G_{\\alpha\\beta\\mu\\nu}$ refer to point $x$\nwhile the second two refer to point $x'$ and similarly for the\nderivative indices. In the Feynman gauge\n\\begin{equation}\n G_{\\alpha\\beta\\mu\\nu} = (\\eta_{\\alpha\\mu}\\eta_{\\beta\\nu}+\n \\eta_{\\alpha\\nu}\\eta_{\\beta\\mu} -\n \\eta_{\\alpha\\beta}\\eta_{\\mu\\nu})D\n\\end{equation}\nwhere $D$ is the vacuum scalar two point function\n\\begin{equation}\n D = \\frac{1}{4\\pi^2}\\frac{1}{(\\Delta \\vec{x})^2 - (\\Delta t)^2}\\,.\n\\end{equation}\n One finds that the only nonzero components of \n$G_{\\alpha\\beta\\mu\\nu}$ are $G_{0101}, G_{0000}, G_{0011},\nG_{1100}$, and $G_{1111}$ and thus\n\\begin{equation}\n \\langle R_{0101}(x)R_{0101}(x')\\rangle = \\\\\n \\frac{1}{4}[-4D,_{txt'x'} + D,_{xxx'x'} + D,_{xxt't'} +\n D,_{ttx'x'} +D,_{ttt't'}].\n\\end{equation}\nOne can easily see from the function $D$ that $D,_{txt'x'} =\nD,_{xxt't'} = D,_{ttx'x'} = \\partial_t^2\\partial_x^2D$ and therefore\n\\begin{equation}\n \\langle R_{0101}(x)R_{0101}(x')\\rangle =\n \\frac{1}{4}(\\partial_t^4 - 2\\partial_t^2\\partial_x^2 +\n \\partial_x^4)D.\n\\end{equation}\nIt is now clear that the thermal Riemann tensor two point function\nmay be constructed from the vacuum two point function by making the\nreplacement $D\\rightarrow D_{\\beta}$ whence\n\\begin{equation} \\label{ThermalRFCorr}\n \\langle 
R_{0101}(x)R_{0101}(x')\\rangle_{\\beta} =\n \\frac{1}{4}(\\partial_t^4 - 2\\partial_t^2\\partial_x^2 +\n \\partial_x^4)D_{\\beta}.\n\\end{equation}\nAs mentioned earlier, $D_{\\beta}$ is constructed via the Matsubara\nmethod. First make $D$ periodic in imaginary time and then take an\ninfinite image sum so that\n\\begin{equation}\n D_{\\beta} = \n\\frac{1}{4\\pi^2}\\sideset{}{'}{\\sum}_{n=-\\infty}^{\\infty}\n\\frac{1}{(\\Delta \\vec{x})^2 -\n (t-t'+in\\beta)^2}.\n\\end{equation}\nThe prime on the summation indicates that we leave out the $n=0$\nterm, which corresponds to the vacuum term. As a result, \nEq.~(\\ref{ThermalRFCorr})\ngives a normal-ordered two-point function. We can set $y-y' = z-z' = 0$,\nand use null coordinates to write\n\\begin{equation}\n D_{\\beta} = \\frac{1}{4\\pi^2}\\sideset{}{'}{\\sum}_{n=-\\infty}^{\\infty}\\frac{1}{n^2\\beta^2 - \\Delta v\n \\Delta u - in\\beta(\\Delta v + \\Delta u)}.\n\\end{equation}\nThis, with the null coordinate version of Eq.~(\\ref{ThermalRFCorr}), gives\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle = 4 \\int da \\int da' \\,\n \\partial_u\\partial_{u'}\\partial_v\\partial_{v'}D_{\\beta}.\n\\end{equation}\nThe integral can be evaluated in the following way. 
First note\nthat we are interested in the real part of $D_{\\beta}$, which is\neven in $n$, so that we may make the replacement\n\\[\\sideset{}{'}{\\sum}_{n=-\\infty}^{\\infty} \\rightarrow\n2\\sum_{n=1}^{\\infty}\\] and let\n\\begin{equation}\n Re (D_{\\beta}) = \\frac{1}{2\\pi^2}\\sum_{n=1}^{\\infty} G\n\\end{equation}\nwhere\n\\begin{equation}\n G = Re \\left[\\frac{1}{n^2\\beta^2 - \\Delta v\n \\Delta u - in\\beta(\\Delta v + \\Delta u)}\\right].\n\\end{equation}\nIn null coordinates,\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle = \n\\frac{2}{\\pi^2}\\sum_{n=1}^{\\infty}\\int_0^{t_{0}}du\n \\int_0^{t_{0}} du' \\int_u^{u+2s} dv \\int_{u'}^{u'+2s} dv' \\,\n \\partial_v\\partial_{v'} \\partial_u\\partial_{u'}G.\n\\end{equation}\nPerforming the integrations on $v$ and $v'$ yields\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle = \n\\frac{2}{\\pi^2}\\sum_{n=1}^{\\infty}\\int_0^{t_{0}}du\n \\int_0^{t_{0}} du' \\, I(\\Delta u)\\, ,\n\\end{equation}\nwhere we make use of the fact that $\\partial_u\\partial_{u'}G =\n-\\partial_u^2G$ in writing\n\\begin{equation}\n I(\\Delta u) = (\\partial_u^2 G)\\Bigr|_{\\Delta v = \\Delta u+2s} +\n (\\partial_u^2 G)\\Bigr|_{\\Delta v = \\Delta u-2s} - 2(\\partial_u^2 G)\\Bigr|_{\\Delta v = \\Delta u}.\n\\end{equation}\nThe function $I(\\Delta u)$ is a function of $\\Delta u$ only, for\nwhich\n\\begin{equation}\n \\int_0^{t_0} du \\int_0^{t_0} du' \\, I(\\Delta u) = \\int_0^{t_0} d(\\Delta\n u)(t_0-\\Delta u)I(\\Delta u)+\\int_{-t_0}^0 d(\\Delta\n u)(t_0+\\Delta u)I(\\Delta u).\n\\end{equation}\nThis now leads to the expression\n\\begin{equation} \\label{gravrf}\n \\langle(\\Delta\\xi)^2\\rangle =\n \\frac{2}{\\pi^2}\\sum_{n=1}^{\\infty}\\int_0^{t_0} d(\\Delta\n u)(t_0-\\Delta u)I(\\Delta u)+\\int_{-t_0}^0 d(\\Delta\n u)(t_0+\\Delta u)I(\\Delta u).\n\\end{equation}\nEquation~(\\ref{gravrf}) can be evaluated using a program such as\nMaple. 
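Independently of a computer algebra evaluation, the closed-form limiting behavior quoted below for $s \gg t_0,\beta$ can be cross-checked numerically. The following sketch (our own; the function name is not from the paper) works in units $\beta = \ell_{\rm Pl} = 1$ with $a = t_0/\beta$, and compares the bracketed $\mbox{csch}$ expression against its small-$a$ and large-$a$ asymptotics, $4\pi^2 a^2/45$ and $4/9$:

```python
import math

def line_width_variance(a):
    """Closed-form <(Delta xi)^2> in units beta = ell_Pl = 1, where a = t_0/beta.

    This is the s >> t_0, beta limiting expression quoted in the text."""
    csch2 = 1.0 / math.sinh(math.pi * a) ** 2
    return (4.0 / (9.0 * math.pi**2 * a**2)) * (
        3.0 * math.pi**2 * a**2 * csch2 + math.pi**2 * a**2 - 3.0
    )

# Short observation time (t_0 << beta): variance ~ 4 pi^2 a^2 / 45,
# i.e. the rms line width grows linearly with t_0.
a = 0.01
print(line_width_variance(a) / (4 * math.pi**2 * a**2 / 45))  # close to 1

# Long observation time (t_0 >> beta): variance saturates at 4/9.
print(line_width_variance(50.0))  # close to 4/9
```

The crossover between linear growth and saturation occurs when the observation time is comparable to the thermal correlation time $\beta$.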
The complete result is a rather lengthy expression and\nno insight is gained by writing it out. However, in the limit where\n$s \\gg t_0$ and $s \\gg\\beta$, i.e., the observationally reasonable limit where\nthe distance between source and detector is much larger than both\nthe observation time and the thermal wavelength, the expression\nreduces to\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle\\sim\n \\frac{4}{9\\pi^2 a^2\\beta^2}[3\\pi^2 a^2\n \\mbox{csch}^2(\\pi a)+\n \\pi^2 a^2 - 3],\n\\end{equation}\nwith $a={t_0}\/{\\beta}$. In the limit where the observation time\nis short compared to the correlation time $\\beta$, i.e.\\ $t_0 \\ll\n\\beta$, we find\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle \\sim \\frac{4t_0^2\\pi^2}{45\\beta^4} =\n \\frac{4t_0^2\\pi^2\\ell_{\\rm Pl}^2}{45\\beta^4}.\n\\end{equation}\nIn this case, the rms line width grows linearly with observation\ntime,\n\\begin{equation}\n (\\Delta\\xi)_{rms} = \\frac{2\\pi\\ell_{\\rm Pl} t_0}{\\beta^2\\sqrt{45}}.\n\\end{equation}\nOn the other hand, if $t_0 \\gg \\beta$,\n\\begin{equation}\n \\langle(\\Delta\\xi)^2\\rangle \\sim \\frac{4}{9\\beta^2} =\n \\frac{4\\ell_{\\rm Pl}^2}{9\\beta^2}.\n\\end{equation}\nHere the rms line width approaches a constant,\n\\begin{equation}\n (\\Delta\\xi)_{rms} = \\frac{2\\ell_{\\rm Pl}}{3\\beta}.\n\\end{equation}\n\n\n\\subsection{Angular Blurring}\n\nTurning our attention to Eq.~(\\ref{thermalAB1}), once again let\n$v^{\\mu} = t^{\\mu} = (1,0,0,0)$ and $k^{\\mu} = (1,1,0,0)$;\nadditionally let $s^{\\mu}=(0,0,1,0)$. With this and the symmetry and\ncyclic properties of the Riemann tensor, Eq.~(\\ref{thermalAB1})\nreduces to\n\\begin{equation}\n \\langle(\\Delta\\Theta)^2\\rangle = \\int da \\int da' \\,\n \\langle\n (R_{2101}(x)-R_{2010}(x))(R_{2101}(x')-R_{2010}(x'))\\rangle_{\\beta}.\n\\end{equation}\nProceeding in the Feynman gauge, the calculations follow those for\nredshift fluctuations. 
It is straightforward to show that\n\\begin{multline}\n \\langle(R_{yxtx}(x)-R_{ytxt}(x))(R_{yxtx}(x')-R_{ytxt}(x'))\\rangle\n \\\\ = \\frac{1}{4}(\\partial_t^4 + 2\\partial_t^3\\partial_x -\n 2\\partial_t\\partial_x^3 - \\partial_x^4)D = -4\\partial_u\n \\partial_v^2 \\partial_{v'}D\\,.\n\\end{multline}\nIn null coordinates, after the replacement $D\\rightarrow D_{\\beta}$, the equation for angular blurring becomes\n\\begin{equation}\n \\langle(\\Delta\\Theta)^2\\rangle = -4 \\int_0^{t_0}du \\int_0^{t_{0}}\n du' \\int_u^{u+2s} dv \\int_{u'}^{u'+2s} dv' \\, \\partial_u\n \\partial_v^2 \\partial_{v'}D_{\\beta}.\n\\end{equation}\nThis integral can be solved via the same method invoked in the\nprevious subsection when dealing with redshift fluctuations. While\nit is not immediately obvious from this form of the integral, the\nlimiting results are the same as for line broadening. Namely, in the\ncase when $s \\gg t_0,\\beta$ we again have\n\\begin{equation}\n \\langle(\\Delta\\Theta)^2\\rangle\\sim\n \\frac{4}{9\\pi^2 a^2\\beta^2}[3\\pi^2 a^2\n \\mbox{csch}^2(\\pi a)+\n \\pi^2 a^2 - 3].\n\\end{equation}\nIn the limit where $a \\rightarrow 0$, or $t_0 \\ll \\beta$,\n\\begin{equation}\n (\\Delta\\Theta)_{rms} \\sim \\frac{2\\pi\\ell_{\\rm Pl} t_0}{\\beta^2\\sqrt{45}}\\, ,\n\\end{equation}\nwhile on the other hand when $a \\rightarrow \\infty$, or $t_0 \\gg \\beta$,\n\\begin{equation}\n (\\Delta\\Theta)_{rms} \\sim \\frac{2 \\ell_{\\rm Pl}}{3 \\beta} \\,.\n \\label{eq:theta_therm}\n\\end{equation}\nAs emphasized earlier, these results are independent of gauge\nchoice. As a check, these results were also calculated in the TT\ngauge using the thermal two point function and Hadamard function for\nthe TT gauge derived in Ref.~\\cite{Yu}. \n\nWe may compare Eq.~(\\ref{eq:theta_therm}) with \nthe results of Ref.~\\cite{Borgman}, where a heuristic result is found for \nthe angular blurring of the image of a source caused by passive fluctuations \nrather than active fluctuations as done here. 
The source of fluctuation \nthere is taken to be thermal fluctuations in the stress tensor of a scalar \nfield. In the high temperature limit, the passive fluctuation result is \n\\begin{equation}\n (\\Delta\\Theta)_{pass} \\sim \n\\frac{\\ell_{\\rm Pl}^2 s^{\\frac{3}{2}}}{b \\beta^{\\frac{5}{2}}} \\, ,\n\\end{equation}\nwhere $b$ is a characteristic width of the bundle of rays.\n The effect is seen to increase with \nsource-detector separation, though it should be mentioned that this result \nis valid for the regime $s\\alt {\\beta^3}\/{\\ell_{\\rm Pl}^2}$. An important \npoint is that the result for passive fluctuations has two powers of \n$\\ell_{\\rm Pl}$, \nwhile those for the active fluctuations have only one. One may therefore \nconclude that the effect tends to be larger for active fluctuations than \nfor passive fluctuations.\n\n\n\\section{Summary and Discussion}\n\\label{sec:final}\n\n\nIn this paper, two effects of spacetime geometry fluctuation\nare examined. Expressions are derived for fluctuations of redshift\nand angular position of a source as observed by a detector. The\nphysical manifestation of these effects is found to be a broadening\nof observed spectral lines and angular blurring of the image of\na source. The fluctuation of these observables depends on the\nRiemann tensor correlation function, which in turn characterizes\nfluctuations in the spacetime geometry. These effects should arise\nfor both active and passive metric fluctuations. However, in the case\nof passive fluctuations, it may be necessary to do some additional\nspacetime averaging, as was discussed in Ref.~\\cite{Borgman}. The\nexplicit examples discussed in the present paper concerned active \nfluctuations.\n\nThe effects of active spacetime fluctuations are examined by\nconsidering a linearized model of quantum gravity with gravitons\noccupying a squeezed vacuum state. 
In the absence of squeezing there\nis no effect, but the effects grow exponentially with the squeezing\nparameter and so in principle can be made quite large. The redshift\nand angular position fluctuations are related to the energy density\nof the wave, and finally an order of magnitude is estimated for a\ngravity wave with large energy density. The result of this\nestimation is quite small with questionable observability in the present day\nuniverse. The\nresults, however, indicate this is an interesting model for\ninvestigating the properties of fluctuating spacetime geometries.\n\nFurther, a thermal bath of gravitons is considered as the source of\nspacetime geometry fluctuation. An expression for spectral line\nbroadening is provided for the limit that the source-detector\nseparation is much larger than both the observation time and the\nthermal wavelength. For observation times short compared to the\nthermal wavelength, the rms spectral line width increases linearly\nwith the observation time. On the other hand, for observation times\nlong compared to the thermal wavelength, the rms line width\napproaches a constant. The results for angular blurring mirror the\nresults for spectral line broadening in the same limit of large\nsource-detector separation.\n\n\n\n\\begin{acknowledgments} \n This work was supported in part by the National\nScience Foundation under Grant PHY-0244898.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper we introduce a new class of strategic games that have appealing properties, and whose set of pure Nash equilibria can be described in a convenient encoding by rational generating functions using techniques pioneered by \\cite{Barvinok1994} and expanded by \\cite{Barvinok2003}. 
Generating function techniques based on Barvinok's results have been applied to discrete optimization (see, for instance, \\cite{deloera-hemmecke-koeppe-weismantel:mixedintpoly-fixeddim-fullpaper,DeLoera2007}), and various other areas of applied and pure mathematics (see \\cite{DeLoera2005count} for a survey). To the authors' knowledge this is the first application of the theory of rational generating functions to the study of games.\n\nThis paper is motivated by open questions regarding the computational complexity of deciding the existence of pure Nash equilibria in strategic games. For a general reference on complexity issues in game theory see \\cite{Papadimitriou2007}. As opposed to the case of mixed-strategy Nash equilibria, which are guaranteed to exist for every game, the general problem of deciding if a strategic game has a pure Nash equilibrium is NP-hard (\\cite{Gottlob2005}). In view of this difficulty, the problem has been explored under various restrictions on the actions and payoffs of the players; for instance, in graphical games (\\cite{Alvarez2005,Gottlob2005}), congestion games (\\cite{Dunkel2008,fabrikant-papadimitriou-talwar:2004:pure-nash}) and action graph games (\\cite{Jiang2007}). This paper continues this tradition by introducing a class of games that will be shown to have convenient algorithms to decide if instances have pure Nash equilibria, and if they exist, to compute them.\n\nWe consider \\emph{integer programming games}, which are simultaneous games where the players' actions (pure strategies) are lattice points (i.e., integer points) inside polytopes described by systems of linear inequalities. Since the sets of actions are given implicitly by the description of polytopes, they may be of exponential size with respect to the input size. In our setting, each player's payoffs are given as \\emph{differences of piecewise linear convex (DPLC) functions}. 
As an aside, optimization problems involving the difference of convex functions are a well-studied class of nonconvex programs (see for example \\cite{Horst1999}).\n\nThe main result of the paper is that the set of pure Nash equilibria of integer programming games with DPLC payoffs can be encoded as a short rational generating function in polynomial time when the number of players, dimensions of the polytopes that define the action sets and the number of ``convex\" linear pieces in the payoffs are all fixed. Although these conditions are restrictive, note that each player may have an exponential number of actions. Indeed integer programming games with DPLC payoffs are a subset of a general class of games where deciding if a pure Nash equilibrium exists is $\\Sigma_2^p$-complete with a fixed number of players and exponential-sized strategy spaces \\cite{Alvarez2005}.\n\nBesides questions of complexity, a short rational generating function encoding is a convenient data structure for answering other questions of interest regarding the structure of pure Nash equilibria and related concepts. 
For instance, several questions analogous to those explored in \\cite{Conitzer2003} regarding \\emph{mixed} strategy Nash equilibria can be answered efficiently in our setting for \\emph{pure} Nash equilibria by using the rational generating function data structure.\n\nWe feel the main contributions of the paper are:\n\\begin{itemize}\n\\item Introducing the use of Barvinok short rational generating functions to the study of strategic games and demonstrating the power of encoding sets of pure equilibria as generating functions.\n\\item Presenting a tractable class of games, integer programming games with DPLC payoffs, for which pure strategy Nash equilibria and related quantities can be computed in polynomial time when certain dimensions are fixed.\n\\end{itemize}\n\nAlso of note are two ideas used in several places in this paper:\n\\begin{itemize}\n\\item In order to represent sets of equilibria, or other sets of interest, as rational generating functions we express the set as an overall feasible set in which unwanted elements, expressed as the union of projections of lattice point sets in polytopes, are removed. See for instance the proof of Theorem~\\ref{theorem:extended-main-result} where the set of pure Nash equilibria is defined in this way. This is a general technique that is adapted to various settings in Sections~\\ref{s:pure-nash}~to~\\ref{s:stack-nash}.\n\\item Some results are easier to show when the actions for each player in the game are \\emph{extended} to include a component that captures the payoff of that player. This extension allows for descriptions of \\emph{extended} strategy profiles and equilibria that are more amenable to generating function techniques and can readily be converted back to normal strategy profiles and equilibria. See for instance Definition~\\ref{def:extended-game}.\n\\end{itemize}\n\nThe paper is organized into the following sections. 
Section \\ref{s:integer-programming-games} introduces integer programming games and discusses an application of this framework to a model of competing firms producing identical indivisible goods. Section \\ref{s:gen-fun} reviews the basic theory of Barvinok generating functions and major results that will be used in the remainder of the paper. Section \\ref{s:pure-nash} discusses pure Nash equilibria and contains the main contributions of the paper -- demonstrating how generating functions can be used to encode sets of pure Nash equilibria. Section \\ref{s:computations} details several other applications of generating function constructions to the computation of Pareto optima, the price of anarchy, and pure minmax values. Lastly, Section \\ref{s:stack-nash} describes a sequential (Stackelberg--Nash) version of an integer programming game where a leader's actions affect the description of the polytopes defining the action sets of a group of followers, who then play a simultaneous game.\n\n\\section{Integer Programming Games}\\label{s:integer-programming-games}\n\nWe begin by introducing the following class of strategic games:\n\n\\begin{definition}[\\textbf{Integer Programming Game}]\nAn \\emph{integer programming game} with $n$ players is a noncooperative game where the set $S_i$ of actions for each player $i$ is the set of lattice points inside a polytope; that is,\n\\begin{equation}\\label{eq:strategySets}\nS_i=P_i\\cap{\\mathbb Z}^{d_i}\n\\end{equation}\nwhere $P_i = \\{\\ve x \\in \\mathbb R^{d_i}:M_i\\ve x \\le \\ve b_i\\}$ is a rational polytope.\n\nLet $I = \\{1,\\dots,n\\}$ denote the set of players. The set $S$ of action profiles $\\ve s = (\\ve s_1,\\ldots,\\ve s_n)$ is the Cartesian product of the $S_i$'s:\n$$S=\\prod_{i=1}^n S_i \\subseteq {\\mathbb Z}^d$$\nwhere $d = d_1+\\cdots+d_n$. 
The payoff functions are integer-valued, of the form $u_i:S\\rightarrow {\\mathbb Z}$ for $i \\in I$.\n\\end{definition}\n\nAs noted in the introduction, a distinguishing feature of this class of games is that the action sets $S_i$ are defined succinctly by linear systems $M_i\\ve x \\le \\ve b_i$, even though $|S_i|$ may be exponential in size with respect to the input size of $M_i$ and $b_i$. We use rational generating functions to avoid explicitly enumerating each player's action set.\n\n\\begin{definition}[\\textbf{DPLC payoffs}]\nAn integer-valued payoff function $u:S\\rightarrow {\\mathbb Z}$ of a game is a \\emph{difference of piecewise-linear convex functions} or \\emph{DPLC} function if it can be expressed as:\n\\begin{equation*}\nu(\\ve s) = \\max_{k\\in K} f_k(\\ve s) - \\max_{l \\in L} g_l(\\ve s)\n\\end{equation*}\nwhere the $f_k$ and $g_l$ are affine functions with integer coefficients and where $K$ and $L$ are finite index sets. We refer to the $f_k$ as the ``convex\" pieces of $u$ and the $g_l$ as the ``concave\" pieces of $u$.\n\\end{definition}\n\nWe consider integer programming games where each player $i$ has a DPLC payoff function\n\\begin{equation}\\label{eq:DPLC}\nu_i(\\ve s) = \\max_{k\\in K_i} f_{ik}(\\ve s) - \\max_{l \\in L_i} g_{il}(\\ve s)\n\\end{equation}\nagain where the $f_{ik}$ and $g_{il}$ are given affine functions with integer coefficients and the sets $K_i$ and $L_i$ are given finite index sets.\n\nA first comment regarding this class of games is that they can express any finite game given in normal form. A normal form game is defined by action sets $A_1,\\ldots,A_n$ and payoffs $\\pi_i(\\ve a)$ for $\\ve a \\in A_1\\times\\cdots\\times A_n$ and $i \\in I$. We refer to the set $A = \\prod_{i=1}^n A_i$ as the set of action profiles. A normal form game is finite if $A$ is finite. 
We use the alternate notation $A_i$ and $\\pi_i$, as opposed to $S_i$ and $u_i$, for normal form games to draw a contrast between the fact that in a normal form game the action sets and payoffs for each action profile are given explicitly as part of the input, whereas in integer programming games the action sets $S_i$ and payoffs $u_i$ are given implicitly. This contrast between implicit and explicit representations is important when one considers the computational complexity of finding pure Nash equilibria (see \\cite{Alvarez2005} for a technical discussion).\n\n\\begin{proposition}\\label{prop:expressibility}\nEvery finite normal form game is equivalent to an integer programming game with DPLC payoffs.\n\\end{proposition}\n\\begin{Proof}\nLet finite normal form game $G$ be defined by action sets $A_1,\\ldots,A_n$ and payoffs $\\pi_i(\\ve a)$ for $\\ve a \\in A$. Let $d_i$ equal the number of elements in the action set $A_i$. Let the vector $\\ve x_i =(x_{i,1},\\ldots,x_{i,d_i})$ denote a mixed strategy for player $i$. Mixed strategies $\\ve x_i$ lie in the unit simplex:\n$$P_i=\\{\\ve x_i=(x_{i,1},\\ldots,x_{i,d_i}) \\in \\mathbb R^{d_i}:\\ve x_i \\ge 0, ~\\sum_{j=1}^{d_i} x_{i,j}=1\\}.$$\nThis is a polytope, and the lattice points inside $P_i$ are exactly its vertices, which correspond to pure strategies (actions) for player $i$; namely, an action $a_i$ in $A_i$ is represented by its characteristic vector $\\chi_i(a_i)$. Thus, we represent the set of pure strategies for player $i$ as $S_i = P_i \\cap {\\mathbb Z}^{d_i}$. As above let $S = S_1 \\times \\dots \\times S_n$. This conforms to the framework of integer programming games.\n\nAs for the payoffs, our goal is to convert the explicitly given payoffs $\\pi_i$ over the set $A$ of action profiles into equivalent DPLC payoff functions $u_i$ over $S$. Let $A_i^+=\\{\\ve a\\in A:\\pi_i(\\ve a)>0\\}$ and $A_i^-=\\{\\ve a\\in A:\\pi_i(\\ve a)<0\\}$. 
For every $\\ve x = (\\chi_1(a_1),\\dots,\\chi_n(a_n)) \\in S$, define the DPLC payoff\n$$u_i(\\ve x) = \\max\\left\\{0,\\, \\max_{\\ve a \\in A_i^+} \\pi_i(\\ve a) \\left(\\sum_{j=1}^n x_{j,a_j} - n + 1\\right)\\right\\} - \\max\\left\\{0,\\, \\max_{\\ve a \\in A_i^-}\\pi_i(\\ve a) \\left( n-\\sum_{j=1}^n x_{j,a_j} - 1\\right)\\right\\}.$$\nIncluding the constant function $0$ among the affine pieces keeps $u_i$ in DPLC form and ensures each maximum is well defined even when $A_i^+$ or $A_i^-$ is empty. Note that for every $\\ve a \\in A$ we have $u_i(\\chi_1(a_1),\\dots,\\chi_n(a_n))=\\pi_i(\\ve a)$.\nThus the normal form game $G$ is equivalent to the integer programming game with pure strategy sets $S_i$ and DPLC payoffs $u_i$.\n\\end{Proof}\n\nNote that this representation has the same input size as the normal form game itself. Further computational complexity consequences of this proposition are discussed in Remark~\\ref{remark:expressibility}.\n\nAnother useful result demonstrates the power of modeling payoffs as DPLC functions. \\citet[Theorem 4.2]{Zalgaller2000} shows that every continuous piecewise linear function can be represented as the difference of two piecewise linear convex functions. Thus, a DPLC function can be used to describe any continuous piecewise linear payoff function, which in turn can be used to approximate an arbitrary continuous payoff function.\n\n\\begin{example}[\\textbf{Cournot game with indivisible goods and fixed production costs}]\\label{example}~\\\\\nA set of $n$ manufacturers (the players) produce indivisible goods which are then sold in a market. Player $i$ chooses an integer production level $s_{ij}$ for each of its indivisible products $j \\in \\{1,\\dots,d_i\\}$ subject to resource constraints $M_i \\ve s_i \\le \\ve b_i$ where $\\ve s_i = (s_{i1},\\ldots,s_{id_i})$. Thus, player $i$'s action set $S_i$ is the set of integer points in a polytope in $\\mathbb R^{d_i}$. The payoff to each player consists of revenues and production costs. Under usual assumptions, the revenue to each player $i$ is a concave function of the action profile $\\ve s$, which can be expressed as a piecewise linear concave function $\\min_{l \\in R_i} r_{il}(\\ve s)$. 
For each player $i$ and product $j$ there may be a fixed production cost $F_{ij}$. The variable production cost is a convex function of the production levels $\\ve s_i$ expressed as a piecewise linear convex function $\\max_{l \\in C_i} c_{il} (\\ve s_i)$. The payoff to player $i$ is thus\n\\begin{displaymath}\nu_i(\\ve s) = \\sum_{j=1}^{d_i} \\max\\{-F_{ij}, -F_{ij}s_{ij}\\} - \\left(\\max_{l \\in R_i} (-r_{il}(\\ve s)) + \\max_{l \\in C_i} c_{il} (\\ve s_i) \\right)\n\\end{displaymath}\nwhich can be expressed as a DPLC function. As will be discussed further in Remark~\\ref{remark:example}, this is precisely the structure that is analyzed in this paper.\n\\end{example}\n\n\\section{Introduction to rational generating functions}\\label{s:gen-fun}\n\nWe give a very brief overview of the theory of short rational generating\nfunctions used in this paper. \\cite{Barvinok1994}\nintroduced an algorithm for determining the exact\nnumber of lattice points in a rational polytope $P= \\{\\, \\ve x \\in \\mathbb R^n : A\n\\ve x \\leq \\ve b \\,\\}\\,$, that runs in polynomial time for every fixed\ndimension~$n$. This algorithmic breakthrough provided a strengthening\nof the famous algorithm of \\cite{Lenstra83}, which allows to\n\\emph{decide} whether $P$ contains a lattice point in polynomial time for\nevery fixed dimension.\n\nBarvinok's method works as follows. Consider the \\emph{generating\n function} of the lattice point set~$P\\cap{\\mathbb Z}^n$, which is defined as the\nmultivariate Laurent polynomial\n\\begin{equation}\\label{eq:long-generating-function}\n g(P;\\ve\\xi) = \\sum_{\\ve x\\in P\\cap{\\mathbb Z}^n} \\ve\\xi^{\\ve x}\n = \\sum_{\\ve x\\in P\\cap{\\mathbb Z}^n} \\xi_1^{x_1}\\cdots \\xi_n^{x_n} \\in {\\mathbb Z}[\\xi_1^{\\pm1},\\dots,\\xi_n^{\\pm1}].\n\\end{equation}\nThis Laurent polynomial, when encoded as a list of monomials, has exponential\nencoding size. 
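The gap between the explicit monomial list and a short rational form is already visible in one dimension. For the interval $P = [0,N]$, the monomial sum $\sum_{x=0}^{N}\xi^x$ has $N+1$ terms, whereas the Barvinok-style form with one unimodular cone per vertex has constant size, and the count $|P\cap{\mathbb Z}|$ is recovered from the removable singularity at $\xi = 1$. A toy sketch of this (our own illustration, not code from the cited works):

```python
from fractions import Fraction

def short_gf(xi, N):
    """Two-term Barvinok-style rational form of g([0, N]; xi):
    one unimodular cone at each endpoint of the interval."""
    return 1 / (1 - xi) + xi**N / (1 - 1 / xi)

# The short form agrees with the explicit monomial sum sum_{x=0}^{N} xi^x,
# which has N + 1 terms, at any evaluation point away from the pole xi = 1:
xi = Fraction(1, 2)
assert short_gf(xi, 5) == sum(xi**x for x in range(6))

# Counting: |[0, N] cap Z| = lim_{xi -> 1} g(P; xi). Evaluating in exact
# arithmetic at xi = 1 + 10**(-9) recovers the count N + 1 after rounding.
N = 1000
print(round(short_gf(Fraction(10**9 + 1, 10**9), N)))  # → 1001
```

In higher dimensions the residue calculation at $\ve\xi = \ve 1$ replaces this naive near-pole evaluation, but the principle is the same.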
Barvinok's algorithm computes a different representation of\nthe function $g(P;\\ve\\xi)$ as a sum of basic rational functions in the form\n\\begin{equation}\\label{eq:barvinok-formula}\n g(P;\\ve\\xi) = \\sum_{i \\in I} \\gamma_i\n \\frac{\\ve \\xi^{\\ve c_i}}{(1-\\ve \\xi^{\\ve d_{i1}})(1-\\ve \\xi^{\\ve d_{i2}})\\dots\n (1-\\ve \\xi^{\\ve d_{in}})},\n\\end{equation}\nwhere $I$ is a polynomial-size index set and all data are integer. This\nalgorithm runs in polynomial time whenever the dimension~$n$ is a fixed\nconstant. A formula of the type~\\eqref{eq:barvinok-formula} is called a\n\\emph{short rational generating function}.\n\nNote that each of the basic rational functions has poles (the\npoint~$\\ve\\xi=\\ve 1$ in particular is a pole of all the basic rational\nfunctions), but after summing up only removable singularities remain.\nObtaining the exact number of lattice points of~$P$ is easy\nin~\\eqref{eq:long-generating-function}, since clearly $|P\\cap{\\mathbb Z}^n| =\ng(P;\\ve1)$. Since~\\eqref{eq:barvinok-formula} is a formula for the same\nfunction (except for removable singularities), we also have $|P\\cap{\\mathbb Z}^n| =\n\\lim_{\\ve\\xi\\to\\ve1} g(P;\\ve\\xi)$, which can be evaluated in polynomial time by performing\na residue calculation with each basic rational function in the\nsum~\\eqref{eq:barvinok-formula}.\nAn important point to note is that this evaluation is possible with arbitrary\nrational generating functions that correspond to finite lattice point sets.\nIn other words, if we can compute in polynomial time a rational generating function of\na finite lattice point set~$S\\subseteq{\\mathbb Z}^n$, we can also compute in polynomial time its\ncardinality $|S|$. Therefore, we can also decide in polynomial time whether $S \\not =\\emptyset$.\n\nA first, trivial observation that allows to combine rational generating functions is\nthe following. 
Let $X\\subseteq{\\mathbb Z}^n$ and $Y\\in{\\mathbb Z}^k$ be lattice point sets\ngiven by their short rational generating functions~$g(X; \\ve\\xi)$ and\n$g(Y;\\ve\\eta)$. Then the direct product (Cartesian product) $X\\times Y$ also\nhas a short rational generating function that is simply the product of the\nrational functions:\n\\begin{displaymath}\n g(X\\times Y; \\ve\\xi,\\ve\\eta) = g(X;\\ve\\xi) \\times g(Y;\\ve\\eta).\n\\end{displaymath}\n\n\\cite{Barvinok2003} developed powerful algorithms to obtain\nshort rational generating functions of more general lattice point sets.\nThe first of these algorithms concerns constant-length Boolean combinations of\nfinite lattice\npoint sets that are already given by rational generating functions.\n\\begin{theorem}[Boolean Operations Theorem] \\label{theorem:boolean-operations}\n \\emph{(Corollary 3.7 in\n \\cite{Barvinok2003})} Let $m$ and $\\ell$ be fixed integers,\n and let $\\phi\\colon \\{0,1\\}^m\\to \\{0,1\\}$ be any Boolean function\n such that $\\phi(\\ve0) = 0$.\n \n \n Then there exists a constant $s = s(\\ell, m)$ and\n a polynomial-time algorithm for the following problem.\n Given as \\emph{input}, in binary encoding,\n \\begin{inputlist}\n \\item the dimension~$n$ and\n \\item rational generating functions\n $$ g(S_p; \\ve \\xi) = \\sum_{i \\in I_p} \\gamma_{pi} \\frac { \\ve \\xi^{\\ve c_{pi}} } { (1-\\ve\n \\xi^{\\ve d_{pi1}}) \\dots (1-\\ve \\xi^{\\ve d_{pis}}) },$$ of $m$ finite\n sets~$S_p\\subseteq{\\mathbb Z}^n$, represented by the rational numbers $\\gamma_{pi}$,\n integer vectors $\\ve c_{pi}$ and $\\ve d_{pij}$ for $p=1,\\dots,m$, $i\\in I_p$,\n $j=1,\\dots,\\ell_{mp}$ such that the numbers $\\ell_{mp}$ of terms in the\n denominators are at most~$\\ell$,\n \\end{inputlist}\n \\emph{output}, in binary encoding,\n \\begin{outputlist}\n \\item rational numbers $\\gamma_i$, integer vectors $\\ve c_i$, $\\ve d_{ij}$ for\n $i\\in I$, $j=1,\\dots,s_i$, where $s_i \\leq s$, such that\n $$ g(S; \\ve \\xi) = 
\\sum_{i \\in I} \\gamma_i \\frac { \\ve \\xi^{\\ve c_i} } { (1-\\ve\n \\xi^{\\ve d_{i1}}) \\dots (1-\\ve \\xi^{\\ve d_{is_i}}) }$$ is a rational generating\n function of the finite set~$S$ that is the Boolean combination of\n $S_1,\\dots,S_p$ corresponding to the function~$\\phi$.\n \\end{outputlist}\n\\end{theorem}\nNote that the restriction~$\\phi(\\ve0) = 0$ ensures that the set~$S$ will be finite.\nThe essential part of the construction of Theorem~\\ref{theorem:boolean-operations} is the\nimplementation of set intersections, which are based on the\n\\emph{Hadamard product} \\citep[Definition 3.2]{Barvinok2003}, which is the\nbilinear extension of the operation defined on monomials as\n\\begin{displaymath}\n \\alpha\\ve\\xi^{\\ve x} * \\alpha' \\ve\\xi^{\\ve x'}\n = \\begin{cases}\n \\alpha\\alpha' \\ve\\xi^{\\ve x} & \\text{if $\\ve x=\\ve x'$,} \\\\\n 0 & \\text{otherwise.}\n \\end{cases}\n\\end{displaymath}\nWith this definition, clearly\n\\begin{displaymath}\n g(S_1\\cap S_2;\\ve\\xi) = g(S_1;\\ve\\xi) * g(S_2;\\ve\\xi).\n\\end{displaymath}\n\nAnother powerful method to define lattice point sets is by integer\nprojections.\nLet~$S\\subseteq{\\mathbb Z}^n$ be a finite lattice point set, given by its\nrational generating function~$g(S;\\ve\\xi)$. Let $\\psi\\colon {\\mathbb Z}^n\\to{\\mathbb Z}^k$ be a\nlinear function and denote by~$T=\\psi(S)$ the image (projection) of~$S$.\nIf the map~$\\psi$ is\none-to-one (injective) from~$S$, then\nthe generating function~$g(T; \\ve\\eta)$ of the\nprojection~$T$ can be computed by making a \\emph{monomial substitution} in\n$g(S;\\ve\\xi)$; see \\citet[Theorem~2.6]{Barvinok2003}. 
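On explicit (dense) generating functions, the Hadamard product is simply coefficient-wise matching of exponent vectors, which makes the identity $g(S_1\cap S_2) = g(S_1)*g(S_2)$ transparent. The sketch below (our own, with hypothetical helper names) works on this dense dictionary representation; Barvinok's contribution is performing the same operation symbolically on the short rational forms, in polynomial time for a fixed number of binomials, which the dictionary version deliberately sidesteps:

```python
def gen_fun(points):
    """Explicit generating function of a finite lattice set S: maps each
    exponent vector x to the coefficient of xi^x (here always 1)."""
    return {pt: 1 for pt in points}

def hadamard(g1, g2):
    """Hadamard product: multiply coefficients of matching monomials;
    monomials appearing in only one factor are dropped."""
    return {e: g1[e] * g2[e] for e in g1.keys() & g2.keys()}

S1 = {(0, 0), (1, 2), (3, 1)}
S2 = {(1, 2), (3, 1), (4, 4)}

# g(S1; xi) * g(S2; xi) is exactly the generating function of S1 cap S2.
g = hadamard(gen_fun(S1), gen_fun(S2))
print(sorted(g))  # → [(1, 2), (3, 1)]
```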
This fact is used in the\nproof of Corollary~\\ref{theorem:pure-nash-ratgenfun-encoding-concave}.\n\n\nWhen $S$ is the set of lattice points in a polytope~$P$,\nthe integer projection method of \\cite{Barvinok2003} can be\nemployed to construct a rational generating function of\nthe projection~$T$.\n\n\\begin{theorem}[Projection Theorem] \\label{theorem:projection}\n\\emph{(Theorem 1.7 in \\cite{Barvinok2003})} Let the dimension $n$ be a fixed constant.\nThen there exists a constant~$s = s(n)$ and a polynomial-time algorithm for the\nfollowing problem.\nGiven as \\emph{input}, in binary encoding,\n\\begin{inputlist}\n\\item an inequality description of a rational polytope~$P \\subset\\mathbb R^n$;\n\\item a positive integer~$k$; and\n\\item a linear map $\\psi\\colon{\\mathbb Z}^n\\to{\\mathbb Z}^k$ given by an integral matrix;\n\\end{inputlist}\n\\emph{output}, in binary encoding,\n\\begin{outputlist}\n\\item rational numbers $\\gamma_i$, integer vectors $\\ve c_i$, $\\ve d_{ij}$ for\n $i\\in I$, $j=1,\\dots,s_i$, where $s_i \\leq s$, such that\n$$ g(T; \\ve \\xi) = \\sum_{i \\in I} \\gamma_i \\frac { \\ve \\xi^{\\ve c_i} } { (1-\\ve\n \\xi^{\\ve d_{i1}}) \\dots (1-\\ve \\xi^{\\ve d_{is_i}}) }$$ is a rational generating\nfunction of the set $T = \\psi(P \\cap {\\mathbb Z}^n)$.\n\\end{outputlist}\n\\end{theorem}\n\nOnce a rational generating function of a set~$S$ has been computed, various\npieces of information can be extracted from it. We have already mentioned that it is\npossible to compute the cardinality of~$S$.\nIn addition to that, we can \\emph{explicitly enumerate} all elements of~$S$. Since the\ncardinality of~$S$ can be exponential in the encoding length of the input, we use\noutput-sensitive complexity analysis, i.e., we measure the\ncomplexity of the enumeration algorithm in terms of both the input and the\noutput. 
The strongest notion of an output-sensitive polynomial-time\nenumeration algorithm is that of a \\emph{polynomial-space polynomial-delay\n enumeration algorithm}. Such an algorithm only uses space that is\npolynomial in the encoding length of the input data. In addition, the time\nspent between outputting two items, and before outputting the first item and\nafter outputting the last item, is bounded by a polynomial in the encoding\nlength of the input data. The following result is a version of Theorem~7 of\n\\cite{DeLoera2007}.\n\n\\begin{theorem}[{\\bf Enumeration Theorem}] \\label{theorem:output-sensitive-enumeration}\n Let the dimension $k$ and the maximum number~$\\ell$ of binomials in the\n denominator be fixed. Then there exists a polynomial-space polynomial-delay\n enumeration algorithm for the following enumeration problem.\n Given as \\emph{input}, in binary encoding,\n \\begin{inputlist}\n \\item a number $M\\in{\\mathbb Z}_+$;\n \\item rational numbers $\\gamma_i$, integer vectors\n $\\ve c_i$, $\\ve d_{ij}$ for $i\\in I$, $j=1,\\dots,\\ell_i$, where $\\ell_i\n \\leq \\ell$ such that\n $$\n \\sum_{i \\in I} \\gamma_i \\frac{\\ve \\xi^{\\ve c_i}}{(1-\\ve \\xi^{\\ve d_{i1}})(1-\\ve\n \\xi^{\\ve d_{i2}})\\dots (1-\\ve \\xi^{\\ve d_{i\\ell_i}})}\n $$\n is a\n rational generating function of a set $V\\subseteq{\\mathbb Z}^{k}$ of lattice\n points with $V\\subseteq[-M,M]^{k}$;\n \\item an integer~$p$ with $1\\leq p \\leq k$;\n \\end{inputlist}\n \\emph{output}, in binary encoding,\n \\begin{outputlist}\n \\item all points in the projection of $V$ onto the last $p$~components,\n \\[\n W = \\{\\, \\ve w\\in{\\mathbb Z}^p: \\exists \\ve t\\in{\\mathbb Z}^{k-p} \\text{ such that } (\\ve\n t,\\ve w)\\in V \\,\\},\n \\]\n in lexicographic order.\n \\end{outputlist}\n\\end{theorem}\n\nIn addition, binary search can be used to optimize a linear function over a lattice point set encoded\nas a rational generating function.\n\n\\begin{theorem}[{\\bf Linear Optimization Theorem}] 
\\label{theorem:linear-optimization-binary-search}\n Let the dimension $k$ and the maximum number~$\\ell$ of binomials in the\n denominator be fixed. Then there exists a polynomial-time algorithm\n for the following problem.\n Given as \\emph{input}, in binary encoding,\n \\begin{description}\n \\item[~~~\\emph{($\\text{I}_1$)}] and \\emph{($\\text{I}_2$)} as in Theorem \\ref{theorem:output-sensitive-enumeration};\n \\item[~~~\\emph{($\\text{I}_3$)}] a vector $\\ve f\\in{\\mathbb Z}^k$,\n \\end{description}\n \\emph{output}, in binary encoding,\n \\begin{outputlist}\n \\item an optimal solution~$\\ve v^*\\in{\\mathbb Z}^k$ of the optimization problem\n $\\max \\{\\, \\langle \\ve f, \\ve v\\rangle : \\ve v\\in V \\,\\}$.\n \\end{outputlist}\n\\end{theorem}\n\n\nWe will use all the above results in the following constructions.\n\n\n\n\n\n\n\n\n\n\\section{Calculating Pure Nash Equilibria in Integer Programming Games}\\label{s:pure-nash}\n\nConsider an integer programming game with DPLC payoffs as defined in Section \\ref{s:integer-programming-games}. Our goal is to encode the Nash equilibria of such a game as a short rational generating function. The most general setting in which we provide an efficient algorithm for such an encoding is when the number of players and the dimension of their action sets are fixed and each player's DPLC payoff function has the form\n\\begin{equation}\\label{eq:DPLC}\nu_i(\\ve s) = \\max_{k\\in K_i} f_{ik}(\\ve s) - \\max_{l \\in L_i} g_{il}(\\ve s),\n\\end{equation}\nwhere we now assume that the size of $K_i$ is fixed. Since $S$ is bounded we assume without loss of generality that $u_i(\\ve s) \\ge 0$ for all $i \\in I$ and $\\ve s \\in S$. The analysis proceeds with two fundamental insights. First, when there are a fixed number of ``convex'' pieces, i.e., when $|K_i|$ is fixed, each player's payoff is piecewise linear and concave within each region where a particular $f_{ik}$ remains maximal. 
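For intuition, the small Python check below evaluates a DPLC payoff of the form \eqref{eq:DPLC} and reports which ``convex'' piece $f_{ik}$ attains the maximum; the affine pieces are invented for illustration. On each region where the same piece attains the maximum, the payoff is the sum of an affine function and the concave function $-\max_l g_{il}$, hence concave there.

```python
# Toy DPLC payoff u(s) = max_k f_k(s) - max_l g_l(s); the affine pieces
# below are made up for illustration only.
f = [lambda s: 2 * s[0] + s[1],          # f_1
     lambda s: s[0] + 3 * s[1] - 1]      # f_2
g = [lambda s: s[0],                     # g_1
     lambda s: s[1] + 1]                 # g_2

def u(s):
    return max(fk(s) for fk in f) - max(gl(s) for gl in g)

def piece(s):
    """Smallest index k attaining max_k f_k(s): the region containing s."""
    vals = [fk(s) for fk in f]
    return vals.index(max(vals))

assert u((1, 0)) == 2 - 1 == 1
assert piece((1, 0)) == 0 and piece((0, 2)) == 1
```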
The second insight is that when payoffs are piecewise linear concave, the hypograph of the payoff function is a polyhedral set, encodable as a short rational generating function.\n\nFirst, we record a simple result that partitions the action profile space into regions according to the values of the linear pieces of the payoffs. We assume that each $K_i$ is a totally ordered set.\n\n\\begin{lemma} For each player $i$, the set of all action profiles can be expressed as a disjoint union\n$$S = \\biguplus_{k \\in K_i} S_{ik}$$\nwhere\n$$S_{ik} = \\left\\{\\ve s \\in S: \\begin{array}{ll} f_{ik}(\\ve s) \\ge f_{ij}(\\ve s), & ~j > k \\\\\n\t\t\t\t\t\t\t\t\t\t\t\t f_{ik}(\\ve s) > f_{ij}(\\ve s), & ~j < k \\end{array}\\right\\}.$$\n\\end{lemma}\n\\begin{Proof}\nEvery $\\ve s \\in S$ lies in $S_{ik}$ where $k$ is the smallest index attaining $\\max_{j \\in K_i} f_{ij}(\\ve s)$, so the sets $S_{ik}$ cover $S$. To see that they are pairwise disjoint, suppose $\\ve s \\in S_{ik} \\cap S_{ik'}$ with $k \\neq k'$. If $k>k'$ then since $\\ve s \\in S_{ik}$ and $k' < k$ this implies $f_{ik}(\\ve s) > f_{ik'}(\\ve s)$. However, since $\\ve s \\in S_{ik'}$ and $k > k'$ this yields that $f_{ik'}(\\ve s) \\ge f_{ik}(\\ve s)$, a contradiction. The case $k < k'$ is symmetric. The result follows.\n\\end{Proof}\n\n\\begin{corollary}\\label{cor:partition}\nThe set of action profiles can be expressed as a disjoint union\n$$S = \\biguplus_{\\ve k \\in \\ve K} S_{\\ve k}, \\qquad \\text{where } \\ve K = \\prod_{i \\in I} K_i \\text{ and } S_{\\ve k} = \\bigcap_{i \\in I} S_{ik_i}.$$\n\\end{corollary}\n\nNote that we could equivalently write $S_{ik}$ as follows,\n$$S_{ik} = \\left\\{\\ve s \\in S: \\begin{array}{ll} f_{ik}(\\ve s) \\ge f_{ij}(\\ve s), & ~j > k \\\\\n\t\t\t\t\t\t\t\t\t\t\t\t f_{ik}(\\ve s) \\ge f_{ij}(\\ve s) + 1, & ~j < k \\end{array}\\right\\},$$\nsince the affine functions $f_{ij}$ take integer values at lattice points. In this description each $S_{ik}$, and hence each $S_{\\ve k}$, is the set of lattice points inside a rational polytope.\n\nWe now define an \\emph{extended game} $\\hat G = (\\hat S, \\hat u_1, \\ldots, \\hat u_n)$ in which each player $i$ chooses an extended action $\\hat{\\ve s}_i = (\\ve s_i; y_i)$, consisting of an action $\\ve s_i \\in S_i$ together with an integer $y_i$ representing a candidate payoff. For $i \\in I$ and $k \\in K_i$ let\n$$F_{ik} = \\left\\{(\\ve s; y_i) \\in S \\times {\\mathbb Z}: \\ve s \\in S_{ik},~ 0 \\le y_i \\le f_{ik}(\\ve s) - g_{il}(\\ve s) \\text{ for all } l \\in L_i\\right\\},$$\nand for $\\ve k \\in \\ve K$ let $\\hat S_{\\ve k} = \\{(\\ve s; \\ve y) \\in S \\times {\\mathbb Z}^n : (\\ve s; y_i) \\in F_{ik_i} \\text{ for all } i \\in I\\}$. The set of feasible extended action profiles is the disjoint union\n\\begin{equation}\\label{eq:ShatUnion}\n\\hat S = \\biguplus_{\\ve k \\in \\ve K} \\hat S_{\\ve k}.\n\\end{equation}\nSince the constraints $y_i \\le f_{ik_i}(\\ve s) - g_{il}(\\ve s)$ for all $l \\in L_i$ amount to $y_i \\le f_{ik_i}(\\ve s) - \\max_{l \\in L_i} g_{il}(\\ve s)$, we have $(\\ve s; \\ve y) \\in \\hat S_{\\ve k}$ if and only if $\\ve s \\in S_{\\ve k}$ and $0 \\le y_i \\le u_i(\\ve s)$ for all $i \\in I$. The extended payoff of player $i$ is $\\hat u_i(\\hat{\\ve s}) = \\hat u_i(\\ve s; \\ve y) = y_i$.\n\n\\begin{definition}\nAn \\emph{extended pure Nash equilibrium} of the extended game $\\hat G$ is an extended action profile $\\hat{\\ve s} \\in \\hat S$ such that no player $i \\in I$ has a deviation $\\hat{\\ve s}_i^\\prime = (\\ve s_i^\\prime; y_i^\\prime)$ with $(\\hat{\\ve s}_{-i}, \\hat{\\ve s}_i^\\prime) \\in \\hat S$ and $\\hat u_i(\\hat{\\ve s}_{-i}, \\hat{\\ve s}_i^\\prime) > \\hat u_i(\\hat{\\ve s}).$\n\\end{definition}\n\n\\begin{lemma}\\label{lemma:bijection}\nConsider the game $G = (S,u_1,\\ldots, u_n)$ and its extended game $\\hat{G} = (\\hat{S}, \\hat{u}_1, \\ldots, \\hat{u}_n)$ as defined above.\n\\begin{enumerate}[\\rm(i)]\n\\item An extended pure Nash equilibrium of $\\hat{G}$ must be of the form\n\\begin{equation}\\label{eq:form} \\hat{\\ve s} = (\\ve s;\\ve u(\\ve s))=(\\ve s_1,\\ldots,\\ve s_n;u_1(\\ve s),\\ldots,u_n(\\ve s)). \\end{equation}\n\\item There is a bijection between the set $N$ of pure Nash equilibria of the original game and the set $\\hat N$ of extended pure Nash equilibria of the extended game.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{Proof}\n(i) Let $\\hat{\\ve s} = (\\ve s; \\ve y) \\in \\hat{S}$. 
By the disjoint union \\eqref{eq:ShatUnion} there exists a unique $\\ve k$ such that $\\hat {\\ve s} \\in \\hat S_{\\ve k}$. It follows that $\\ve s \\in S_{\\ve k}$. For all $i\\in I$ we have $\\ve s \\in S_{ik_i}$ and $y_i \\le f_{ik_i}(\\ve s) - \\max_{l \\in L_i} g_{il}(\\ve s) = u_i(\\ve s).$\nThus, $\\hat u_i(\\hat {\\ve s}) = y_i \\le u_i(\\ve s) = \\hat u_i(\\ve s_1,u_1(\\ve s);\\ldots;\\ve s_n,u_n(\\ve s))$.\nHence, whenever $y_i < u_i(\\ve s)$ it is profitable for player $i$ to deviate to the extended action $(\\ve s_i;u_i(\\ve s))$.\nTherefore, an extended pure Nash equilibrium must have the form \\eqref{eq:form}.\n(ii) Consider the mapping $\\varphi : N \\longrightarrow \\hat{N}$ defined by $\\ve s \\longmapsto (\\ve s; u(\\ve s))$. We claim $\\varphi$ is a well-defined bijection.\nFirst we show that $\\varphi$ is well-defined, that is $\\varphi(\\ve s) \\in \\hat N$ for all $\\ve s \\in N$. Clearly $\\varphi(\\ve s)$ is in $\\hat{S}$. Let $\\ve s \\in S_{\\ve k}$ for some $\\ve k$. Now consider a feasible deviating action for player $i$, $\\hat {\\ve s}_i^\\prime = (\\ve s_i^\\prime, y_i^\\prime)$, where $(\\hat{ \\ve s}_{-i}, \\hat{\\ve s}_i^\\prime) \\in \\hat S_{(\\ve k_{-i}, k_i^\\prime)}$ for some $k_i^\\prime \\in K_i$. In other words, player $i$ deviates from choosing $\\ve s_i \\in S_{ik_i}$ to choosing $\\ve s_i^\\prime \\in S_{ik_i^\\prime}$ and changing $y_i$ to $y_i^\\prime$. We have\n$$\\hat u_i(\\hat{\\ve s}_{-i},\\hat{\\ve s}_i^\\prime) = y_i^\\prime \\le u_i(\\ve s_{-i},\\ve s_i^\\prime) \\le u_i(\\ve s) = \\hat u_i(\\hat{\\ve s})$$\nwhere the second inequality holds since $\\ve s$ is a Nash equilibrium for the game $G$ and the final equality follows from the definition of $\\varphi$. It follows that $\\hat{\\ve s}$ is an extended pure Nash equilibrium for the game $\\hat{G}$ and the mapping $\\varphi$ is well-defined.\n\nIt is clear that $\\varphi$ is injective. 
As for surjectivity, by part (i) it follows that every Nash equilibrium in $\\hat{N}$ has the form $(\\ve s;u(\\ve s))$ for some $\\ve s \\in S$. It just remains to show that all such equilibria arise with $\\ve s \\in N$. Suppose the contrary, that is $\\hat{\\ve s} = (\\ve s;u(\\ve s)) \\in \\hat{N}$ but $\\ve s \\not \\in N$. Then there must be a profitable deviation $\\ve s_i^\\prime \\in S_i$ for some player $i$ in the original game $G$; that is,\n$u_i(\\ve s_{-i}, \\ve s_i^\\prime) > u_i(\\ve s)$. This implies that there is a profitable deviation $\\hat{\\ve s}_i^\\prime = (\\ve s_i^\\prime; u_i(\\ve s_{-i}, \\ve s_i^\\prime))$ in the extended game since\n$$\\hat{u}_i(\\hat{\\ve s})= u_i(\\ve s) < u_i(\\ve s_{-i}, \\ve s_i^\\prime) = \\hat{u}_i(\\hat{\\ve s}_{-i},\\hat{\\ve s}_i^\\prime).$$\n\\end{Proof}\n\nWith this bijection, we can now state the main result of the paper:\n\n\\begin{theorem}\\label{theorem:extended-main-result}\nConsider an integer programming game with DPLC payoffs given by the following\ninput in binary encoding:\n\\begin{inputlist}\n\\item the number $n$ of players, and a bound $B \\in {\\mathbb N}$;\n\\item for each $i \\in I = \\{1,\\dots,n\\}$, the dimension $d_i$ and an inequality description $(M_i,\\ve b_i)$ of a rational polytope $P_i = \\{\\ve x \\in \\mathbb R^{d_i}: M_i\\ve x \\le \\ve b_i\\}\\subseteq [-B,B]^{d_i}$;\n\\item for each $i \\in I$, nonnegative integers $|K_i|$ and $|L_i|$, and for all integers $k$, $l$ such that $1\\le k \\le |K_i|$ and $1\\le l \\le |L_i|$, integer vectors $\\ve \\alpha_{ik} \\in {\\mathbb Z}^{d}$, $\\ve \\gamma_{il} \\in {\\mathbb Z}^{d}$ (where $d = d_1 + \\cdots + d_n$) and integers $\\beta_{ik}$, $\\delta_{il}$ defining the affine functions $f_{ik}:S \\rightarrow {\\mathbb Z}$ and $g_{il}:S\\rightarrow {\\mathbb Z}$ by $f_{ik}(\\ve s) = \\ve \\alpha_{ik}\\cdot\\ve s + \\beta_{ik}$ and $g_{il}(\\ve s) = \\ve \\gamma_{il}\\cdot\\ve s + \\delta_{il}$ for all $\\ve s\\in S = \\prod_{i=1}^n (P_i \\cap 
{\\mathbb Z}^{d_i})$.\n\\end{inputlist}\nThe set $\\hat N$ of extended pure Nash equilibria of the extended game $\\hat G = (\\hat S, \\hat u_1, \\ldots, \\hat u_n)$ has a short rational generating function encoding, which can be computed in polynomial time when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{theorem}\n\\begin{Proof}\nWe express the set of extended pure Nash equilibria as follows:\n$$\\hat N = \\hat S \\setminus \\bigcup_{i=1}^n D_i$$\nwhere\n\\begin{equation}\\label{eq:deviation-set}\nD_i = \\biguplus_{\\ve k\\in \\ve K}~\\bigcup_{k_i^\\prime \\in K_i} \\proj_{\\hat {\\ve s}}\\left\\{(\\hat {\\ve s}, \\hat {\\ve s}_i^\\prime):\n\\hat {\\ve s} \\in \\hat S_{\\ve k},~ (\\hat {\\ve s}_{-i}, \\hat {\\ve s}_i^\\prime) \\in \\hat S_{(\\ve k_{-i},k_i^\\prime)},~ \\hat u_i(\\hat {\\ve s}_{-i}, \\hat {\\ve s}_i^\\prime) \\ge \\hat u_i(\\hat {\\ve s}) + 1\n\\right\\}.\n\\end{equation}\nNote that some of the projected sets may be empty.\n\nThe set $D_i$ is the set of action profiles where player $i$ has a profitable deviation. The description of $D_i$ in \\eqref{eq:deviation-set} is a union over profitable deviations from one set in the partition of $\\hat S$ to another. This description of $\\hat N$ is easy to verify using the definition of extended pure Nash equilibria.\n\nWe now establish that $\\hat N$ can be encoded as a short rational generating function. First we claim $\\hat S$ admits such an encoding. Consider the description of $\\hat S$ given in \\eqref{eq:ShatUnion}. The sets $F_{ik}$ are sets of lattice points inside rational polytopes and thus encodable as short rational generating functions. This in turn implies that $\\hat S_{\\ve k}$ admits such an encoding since it too is the set of lattice points inside a polytope. 
By the Boolean Operations Theorem, it follows that $\\hat S$ can be encoded as a short rational generating function in polynomial time, since there is a constant number of sets $\\hat S_{\\ve k}$ under the assumption that the sets $I$ and $K_i$ are of fixed size.\n\nNote in addition that the sets to be projected in \\eqref{eq:deviation-set}\nare again sets of lattice points inside rational polytopes, by observing\nthat the extended payoff functions are linear. By the Projection Theorem it\nfollows that each set in the union can be encoded as a short rational\ngenerating function. Using again the Boolean Operations Theorem we conclude\nthat each $D_i$, and thus $\\hat N$, admit short rational generating function\nencodings which can be computed in polynomial time when the total dimension~$d$ and the sizes of the sets\n$K_i$ are fixed for all $i \\in I$. We remark that the outer union\nin~\\eqref{eq:deviation-set} (indexed by $\\ve k\\in \\ve K$) is a disjoint union;\nthus its rational generating function can be computed by adding the rational\ngenerating functions of the parts, rather than using the construction of the\nBoolean Operations Theorem.\n\\end{Proof}\n\n\\begin{corollary} \\label{theorem:pure-nash-ratgenfun-encoding-concave}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. The set $N$ of pure Nash equilibria has a short rational generating function encoding which can be computed in polynomial time when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{corollary}\n\\begin{Proof}\nBy the previous theorem, we can encode the set of \\emph{extended} pure Nash equilibria, $\\hat N$, in polynomial time. 
Using the bijective map $\\varphi$ given in the proof of Lemma~\\ref{lemma:bijection}, an appropriate monomial substitution in the rational generating function description of $\\hat N$ yields a short rational generating function encoding for $N$.\n\\end{Proof}\n\nThe true power of the preceding results lies in the fact that having a short rational encoding of a set allows for efficient counting, optimizing and enumerating procedures as discussed in Section \\ref{s:gen-fun}. Using these techniques we can answer numerous questions of interest on the existence, uniqueness and structure of equilibria.\n\n\\begin{corollary}\\label{theorem:count-find-sample-Nash} Consider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. There is a polynomial time algorithm to compute the number of pure Nash equilibria, when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$. In addition, under the same assumptions there is a polynomial time algorithm to find a sample pure strategy Nash equilibrium, when at least one exists.\n\\end{corollary}\n\\begin{Proof}\nGiven the short rational generating function encoding of $N$ we calculate $|N|$ in polynomial time by the counting methods discussed near the beginning of Section \\ref{s:gen-fun}. If an equilibrium exists, we can output one by running the polynomial-delay enumeration algorithm on $N$ described in the Enumeration Theorem and terminating just after one equilibrium has been generated. This can be done in polynomial time.\n\\end{Proof}\n\nNote that the algorithm runs in time polynomial in the total number $\\sum_{i \\in I} |L_i|$ of ``concave\" pieces in the payoff functions. This corollary can be used to answer the question of whether a unique pure Nash equilibrium exists -- simply check whether $|N|=1$. 
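On instances small enough to enumerate, the quantities in Corollary~\ref{theorem:count-find-sample-Nash} can be checked by brute force. The Python sketch below uses an invented two-player game with one-dimensional integer action sets; it performs exactly the exponential enumeration that the generating function machinery is designed to avoid, but it makes the object being counted concrete.

```python
# Brute-force computation of the set N of pure Nash equilibria for a
# tiny invented two-player game with integer actions in {0, 1, 2}.
from itertools import product

S1, S2 = range(3), range(3)

def u1(s):  # player 1 wants to match player 2
    return -abs(s[0] - s[1])

def u2(s):  # player 2 wants to exceed player 1 by one
    return -abs(s[1] - (s[0] + 1))

def is_nash(s):
    return (all(u1(s) >= u1((a, s[1])) for a in S1) and
            all(u2(s) >= u2((s[0], b)) for b in S2))

N = [s for s in product(S1, S2) if is_nash(s)]
assert N == [(2, 2)]   # here |N| = 1: a unique pure Nash equilibrium
```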
The following result is also immediate by the Enumeration Theorem.\n\n\\begin{corollary}\\label{theorem:Nash-enumeration}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. There is a polynomial-space polynomial-delay algorithm to enumerate all the pure Nash equilibria of the game when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{corollary}\n\nReflecting on the content and context of the results of this section, we make the following useful remarks.\n\n\\begin{remark}\nConsidering the number of elements of the game we need to fix in Corollary~\\ref{theorem:count-find-sample-Nash} -- fixed number of players, fixed dimension of the polytopes, fixed sizes of the $K_i$ -- one might ask if there is an alternative to generating functions that might yield similar results. The key observation is that the action sets are described implicitly as lattice points in polytopes given by linear inequalities, and thus the number of actions for each player may be exponential in the input size. Thus, simple enumeration of all the action profiles in $S$ is a computationally intractable approach to the problem.\n\\end{remark}\n\n\\begin{remark}\\label{remark:expressibility}\nIt was shown in Proposition \\ref{prop:expressibility} that every normal form game can be expressed as an integer programming game with DPLC payoffs. Note, however, that the dimensions of the action spaces are equal to the numbers of corresponding actions in the normal form game. Indeed, using the notation of the proof of Proposition~\\ref{prop:expressibility}, we have $S_i \\subseteq {\\mathbb Z}^{A_i}$. From a complexity point of view this representation is unsatisfactory. In Corollary~\\ref{theorem:count-find-sample-Nash} we require the dimension of the action spaces to be fixed, and thus we can only handle a fixed number of actions in the underlying normal form game. 
Normal form games with a fixed number of players and fixed number of actions are computationally uninteresting.\n\\end{remark}\n\n\n\\begin{remark}\\label{remark:example}\nThe Cournot game in Example \\ref{example} fits the assumptions of Corollary~\\ref{theorem:count-find-sample-Nash}. We assume that the number $n$ of players (manufacturers) is ``small\", i.e., fixed, in order for each manufacturer to have appreciable market power. We also assume that the total number $d$ of products is small. The sets $K_i$ have cardinality $O(2^{d_i})$, which is fixed, and thus the decomposition of $S$ in Corollary~\\ref{cor:partition} consists of a constant number of subsets. Since the algorithm in Corollary~\\ref{theorem:count-find-sample-Nash} scales polynomially with the sizes of the sets $L_i$, we can afford a variable number of ``concave\" pieces in the description of the payoff functions. These ``concave\" pieces are used to represent general concave revenue functions and the convex parts of the cost functions, when restricted to integer points.\n\\end{remark}\n\n\\section{Related computations}\\label{s:computations}\n\nIn addition to counting and enumerating pure Nash equilibria, generating function techniques can be used to derive efficient algorithms for related computations for integer programming games. Several of these are discussed in the following subsections.\n\nFirst, note that the encoding of the set of Nash equilibria as a short rational generating function, being a compact representation, is useful for learning about the specific structure of a game's equilibria. For instance, a simple calculation suffices for deciding the existence of a pure Nash equilibrium where player $i$ plays a given action $\\bar{\\ve s}_i$. 
Indeed, simply find the short rational generating function encoding of\n$$N^{(\\bar{\\ve s}_i)} = N \\cap \\{\\ve s \\in S: \\ve s_i = \\bar{\\ve s}_i\\}$$\nin polynomial time under the same assumptions as in the previous section, and compute the cardinality of this set.\n\nWe now turn to some more sophisticated calculations.\n\n\\subsection{Pareto optimality}\\label{ss:pareto}\n\nConsider the question of finding Pareto optimal pure Nash equilibria, if any exist, in an integer programming game with DPLC payoffs. To tackle this, we start by encoding the set of Pareto optimal action profiles of the game.\n\\begin{theorem}\\label{theorem:pareto}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. The set $PO$ of Pareto optimal action profiles has a short rational generating function encoding, which can be computed in polynomial time when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{theorem}\n\\begin{Proof}\nThe proof is similar to the proofs of Lemma~\\ref{lemma:bijection}, Theorem~\\ref{theorem:extended-main-result} and Corollary~\\ref{theorem:pure-nash-ratgenfun-encoding-concave}:\n\\begin{enumerate}[(i)]\n\\item Define $\\widehat{PO}$ as the set of Pareto optimal points in the extended game and find a generating function encoding for $\\widehat{PO}$.\\label{aa}\n\\item Derive a bijection between $PO$ and $\\widehat{PO}$.\\label{bb}\n\\item Use the generating function of $\\widehat {PO}$ and the bijection to obtain the generating function of $PO$.\\label{cc}\n\\end{enumerate}\nFor part \\eqref{aa} consider the following decomposition of $\\widehat{PO}$:\n\\begin{eqnarray*}\n\\widehat{PO} & = & \\{\\hat {\\ve s} \\in \\hat S: \\nexists~\\hat{\\ve s}^\\prime \\in \\hat S \\;\t\\text{ such that }\t\n\t\t\t\t\t\t\t\t\t\t\t(\\hat u_j (\\hat {\\ve s}^\\prime) \\ge \\hat u_j (\\hat {\\ve s}) \\text{ for all } j \\in I) \\text{ and }\n\t\t\t\t\t\t\t\t\t\t\t(\\hat u_i (\\hat {\\ve s}^\\prime) > \\hat 
u_i (\\hat {\\ve s}) \\text{ for some } i \\in I) \\} \\\\\n & = & \\hat S \\setminus \\bigcup_{i=1}^n PD_i\n\\end{eqnarray*}\nwhere\n\\begin{equation*}\nPD_i = \\bigcup_{\\ve k, \\ve k^\\prime \\in \\ve K} \\proj_{\\hat {\\ve s}}\\left\\{(\\hat {\\ve s}, \\hat {\\ve s}^\\prime):\n\\hat {\\ve s} \\in \\hat S_{\\ve k},~~\n\\hat {\\ve s}^\\prime \\in \\hat S_{\\ve k'},~~\n\\hat u_j(\\hat{\\ve s}^\\prime) \\ge \\hat u_j(\\hat {\\ve s}) ~~ \\text{ for all } j \\not = i, ~~\n\\hat u_i(\\hat {\\ve s}^\\prime) \\ge \\hat u_i(\\hat {\\ve s})+1\n\\right\\}\n\\end{equation*}\nis the set of Pareto dominated points due to a better alternative for player $i$. By arguments analogous to those in the proof of Theorem~\\ref{theorem:extended-main-result} we can encode $\\widehat{PO}$ as a short rational generating function in polynomial time.\n\nAs for \\eqref{bb} the argument is nearly identical to that in the proof of Lemma~\\ref{lemma:bijection} and is therefore omitted. Finally, \\eqref{cc} uses the same idea as found in the proof of Corollary~\\ref{theorem:pure-nash-ratgenfun-encoding-concave}.\n\\end{Proof}\n\nUsing the Boolean Operations Theorem we obtain a short rational generating function encoding of the set $N \\cap PO$ of all Pareto optimal pure Nash equilibria of the original game:\n\n\\begin{corollary}\\label{cor:pareto-nash}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. 
The set of Pareto optimal pure Nash equilibria has a short rational generating function encoding, which can be computed in polynomial time when the total dimension $d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{corollary}\n\nAs in Section \\ref{s:pure-nash} we can use this generating function encoding to count and enumerate Pareto optimal equilibria.\n\n\\subsection{Pure prices of anarchy and stability}\\label{ss:anarchy}\n\nThe pure price of anarchy of a game measures the negative effects of competition on social welfare. The \\emph{social welfare} of an action profile is the corresponding total payoff of all players. The \\emph{pure price of anarchy} is the ratio of the maximum social welfare where the agents act together to maximize their total payoff to the \\emph{worst} social welfare that arises from a pure Nash equilibrium. The \\emph{pure price of stability} is the ratio of maximum social welfare to the \\emph{best} social welfare that arises from a pure Nash equilibrium. The pure price of anarchy has been studied in various network games, see \\cite{Dunkel2008} and the references therein for recent results on weighted congestion games. Using rational generating function techniques we can calculate the pure price of anarchy and stability efficiently.\n\n\\begin{theorem}\\label{theorem:price}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. There exist algorithms to compute the pure price of anarchy and the pure price of stability that run in polynomial time when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{theorem}\n\\begin{Proof}\nLet $w^*$ denote the maximum social welfare attainable under cooperation of the players; that is,\n\\begin{equation}\\label{11} w^* = \\max \\left\\{ \\sum_{i=1}^n u_i(\\ve s): \\ve s \\in S\\right\\}.\\end{equation}\n\n\nAgain we work with the extended game. 
The first step in calculating $w^*$ is to note that\n\\begin{equation}\\label{22}\nw^* = \\max \\left\\{\\sum_{i=1}^n y_i : (\\ve s; \\ve y) \\in \\hat S\\right\\}.\n\\end{equation}\nThe equivalence of \\eqref{11} and \\eqref{22} is verified by first noting that for all $(\\ve s; \\ve y) \\in \\hat S$ we have $y_i \\le u_i(\\ve s)$, and hence $\\sum_{i=1}^n y_i \\le \\sum_{i=1}^n u_i(\\ve s)$, with equality attained by $\\ve y = \\ve u(\\ve s)$.\nTherefore, if $(\\ve s^*, \\ve y^*)$ is an optimal solution to the right-hand side of \\eqref{22} then we must have $y_i^* = u_i(\\ve s^*)$ for all $i \\in I$. This implies $\\ve s^*$ is an optimal solution to the right-hand side of \\eqref{11}.\n\nTo find $w^*$ we optimize the linear function $\\sum_i y_i$ over $\\hat S$. Every $(\\ve s, \\ve y) \\in \\hat S$ satisfies $\\ve s \\in [-B,B]^d$ and\n \\[y_i \\le u_i(\\ve s) \\le B\\left(\\max_{k\\in K_i}(||\\ve \\alpha_{ik}||_1 + |\\beta_{ik}|) + \\max_{l\\in L_i}(||\\ve \\gamma_{il}||_1 + |\\delta_{il}|)\\right).\\]\n That is, $(\\ve s; \\ve y) \\in [-M,M]^{d+n}$ for some polynomially sized integer $M$.\n Since in Section \\ref{s:pure-nash} we found a short rational generating function encoding of $\\hat S$, we apply the Linear Optimization Theorem to calculate $w^*$ in polynomial time.\n\nLet $\\tilde{w}$ denote the worst social welfare attained by a pure Nash equilibrium; that is,\n$$\\tilde{w} = \\min \\left\\{ \\sum_{i=1}^n u_i(\\ve s): \\ve s \\in N\\right\\}.$$\nTo calculate $\\tilde{w}$ we note\n$$\\tilde{w} = \\min \\left\\{\\sum_{i=1}^n y_i : (\\ve s;\\ve y) \\in \\hat N\\right\\},$$\nwhich follows from Lemma~\\ref{lemma:bijection}(i).\nUsing the short generating function encoding of $\\hat N \\subseteq \\hat{S}$ found in Section \\ref{s:pure-nash} we again apply the Linear Optimization Theorem to calculate $\\tilde{w}$ in polynomial time. 
Thus, we obtain the pure price of anarchy $w^*\/\\tilde{w}$ in polynomial time.\n\nThe method for calculating the pure price of stability is similar.\n\\end{Proof}\n\n\\subsection{Pure Threat Point}\\label{ss:threat}\nThe \\emph{pure minmax value to player $i$} in a game is defined as:\n\\begin{equation}\\label{eq:minmax-value}\n\\min_{\\ve s_{-i} \\in S_{-i}} ~ \\max_{\\ve s_i \\in S_i} ~u_i(\\ve s_i, \\ve s_{-i}).\n\\end{equation}\nAlthough mixed strategies are usually considered in calculating the (mixed) minmax values, here we restrict attention to pure strategies. The vector of \\emph{mixed} minmax values is known as the (mixed) \\emph{threat point}, which has drawn recent attention in the study of repeated games and explorations of computational implications of the Folk Theorem (see \\cite{Borgs2007}). Analogously, we define the \\emph{pure threat point} as the vector of pure minmax values.\n\nIt was recently shown that, in various restrictive settings, the problem of calculating the (mixed) threat point is NP-hard. For instance, it can be shown that computing the (mixed) threat point of a three-player game with binary payoffs $\\{0,1\\}$ is NP-hard to approximate (see \\cite{Borgs2007}, Theorem 1, for a precise statement and proof). Despite this negative result we show that pure threat points can be computed efficiently in our setting.\n\n\\begin{theorem}\\label{theorem:threat}\nConsider an integer programming game with DPLC payoffs given by the same input as in Theorem \\ref{theorem:extended-main-result}. There exists a polynomial time algorithm to compute the \\emph{pure} threat point when the total dimension~$d$ and the sizes $|K_i|$ are fixed for all $i \\in I$.\n\\end{theorem}\n\\begin{Proof}\nWe begin by demonstrating how to calculate the pure minmax value of each player $i$ in polynomial time. 
Observe that the optimal value of the following bilevel optimization problem is the pure minmax value of player $i$:\n\\begin{equation}\\label{eq:bilevel-minmax}\n\\min_{\\ve s_i,\\ve s_{-i}} \\left\\{u_i(\\ve s_i,\\ve s_{-i}) : \\ve s_{-i} \\in S_{-i}, ~\\ve s_i \\in \\arg \\max_{\\ve s'_i \\in S_i} u_i(\\ve s'_i,\\ve s_{-i})\\right\\}.\n\\end{equation}\nThis bilevel optimization problem (see \\cite{Colson2007}) has essentially two players: a lower level player, or follower, who is player $i$, and an upper level player, or leader, who represents all the other players cooperating to ``punish\" $i$. Let\n$$G_i = \\big\\{\\ve s \\in S: \\ve s_i \\in \\arg \\max_{\\ve s'_i \\in S_i} \\{u_i(\\ve s'_i,\\ve s_{-i})\\}\\big\\}$$\ndenote the set of bilevel feasible solutions to (\\ref{eq:bilevel-minmax}). Note that \\eqref{eq:bilevel-minmax} is equivalent to $\\min \\{u_i(\\ve s): \\ve s \\in G_i\\}$.\n\n\nAs before we turn our attention to the extended game. We define the analogous set $\\hat G_i$:\n\\begin{equation*}\n\\hat G_i = \\left\\{\\hat {\\ve s} \\in \\hat S: \\hat{\\ve s}_i \\in \\arg \\max_{\\hat{\\ve s}_i^\\prime} \\{\\hat u_i(\\hat{\\ve s}_{-i}, \\hat{\\ve s}_i^\\prime): (\\hat{\\ve s}_{-i}, \\hat{\\ve s}_i^\\prime) \\in \\hat S\\}\\right\\}.\n\\end{equation*}\nObserve that if $\\hat {\\ve s} = (\\ve s, \\ve y) \\in \\hat G_i$ then $y_i = u_i(\\ve s)$.\nThe set $\\hat G_i$ can be expressed as $\\hat G_i = \\hat S \\setminus D_i$,\nwhere $D_i$ is defined as in \\eqref{eq:deviation-set}. This follows since the optimization problem facing player $i$ is the same as the problem of determining extended pure Nash equilibria in a game where player $i$ is the only player free to deviate.\nThus by a direct application of Theorem~\\ref{theorem:extended-main-result} we can encode $\\hat{G}_i$ as a short rational generating function. Note that $\\hat G_i \\subseteq \\hat S \\subseteq [-M,M]^{d+n}$ where $M$ is as defined in the proof of Theorem~\\ref{theorem:price}. 
By applying the Linear Optimization Theorem we find the optimal value of $\\min\\{y_i: (\\ve s, \\ve y) \\in \\hat {G}_i\\} = \\min \\{u_i(\\ve s): \\ve s \\in G_i\\}$ in polynomial time under the stated assumptions. The pure threat point can thus be calculated in polynomial time by finding the minmax value for each player.\n\\end{Proof}\n\n\n\\section{Stackelberg--Nash equilibria}\\label{s:stack-nash}\n\nWe now turn to applying these techniques to a sequential setting. In a \\emph{Stackelberg--Nash game}, Player~$0$ (the leader) chooses an action, described by a vector $\\ve s_0 \\in S_0$. The remaining players~$i\\in I =\\{1,\\ldots,n\\} $ (the followers)\nthen simultaneously choose their actions $\\ve s_i \\in S_i(\\ve s_0)$. Each player $i \\in I_0 = \\{0,1,\\ldots,n\\}$ then collects a payoff $u_i(\\ve s_0, \\ve s)$ where $\\ve s = (\\ve s_1,\\ldots, \\ve s_n)$.\n\nWe assume $S_0 = P_0 \\cap {\\mathbb Z}^{d_0}$ where $P_0$ is a rational polytope. For each $\\ve s_0 \\in S_0$ the followers play an integer programming game with DPLC payoffs. The action of each follower $i\\in I$ is described by a vector~$\\ve s_i \\in {\\mathbb Z}^{d_i}$\nfrom the set $S_i(\\ve s_0) = P_i(\\ve s_0) \\cap {\\mathbb Z}^{d_i}$. We assume $P_i(\\ve s_0) $ is the rational polytope\n$$P_i(\\ve s_0) = \\{\\ve x \\in \\mathbb R^{d_i}: M_i\\ve x \\le \\pi_i(\\ve s_0)\\} ~ \\text{for \\ } ~ i \\in I$$\nwhere $\\pi_i(\\ve s_0)$ is an integer valued affine function. 
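The way the leader's action reshapes the followers' feasible sets can be seen on a one-dimensional toy instance. In the Python sketch below all data are invented: the right-hand side pi(s0) = Phi*s0 + psi moves affinely with the leader's action s0, so each choice of s0 induces a different lattice point set for the follower, and hence a different follower game.

```python
# Toy parameterized follower set: lattice points of P(s0) for invented
# data with constraints  x <= 1*s0 + 2  and  -x <= 0,  i.e. 0 <= x <= s0 + 2.
def follower_actions(s0):
    phi, psi = (1, 0), (2, 0)       # pi_j(s0) = phi[j]*s0 + psi[j]
    hi = phi[0] * s0 + psi[0]       # from   x <= pi_1(s0)
    lo = -(phi[1] * s0 + psi[1])    # from  -x <= pi_2(s0)
    return list(range(lo, hi + 1))

assert follower_actions(0) == [0, 1, 2]
assert follower_actions(3) == [0, 1, 2, 3, 4, 5]
```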
Let $d = d_1 + \\dots + d_n$ and $d^+ = d_0 + d$.\n\nRegarding payoffs, we assume each follower has a DPLC payoff $u_i(\\ve s)$, independent of the leader's choice $\\ve s_0$ and given by\n\\begin{equation}\\label{eq:DPLC-followers}\nu_i(\\ve s) = \\max_{k\\in K_i} f_{ik}(\\ve s) - \\max_{l \\in L_i} g_{il}(\\ve s).\n\\end{equation}\nThe leader's payoffs are defined as the DPLC function\n\\begin{equation}\\label{eq:DPLC-leader}\nu_0(\\ve s_0,\\ve s) = \\max_{k\\in K_0} f_{0k}(\\ve s_0, \\ve s) - \\max_{l \\in L_0} g_{0l}(\\ve s_0, \\ve s).\n\\end{equation}\nWe assume all $K_i$ and $L_i$ are finite index sets and all $f_{ik}$ and $g_{il}$ are integer valued affine functions.\n\nObserve that given $\\ve s_0 \\in S_0$ we have a setup identical to that of Section \\ref{s:pure-nash}, where the set of action profiles for the followers is $S(\\ve s_0) = \\prod_{i=1}^n S_i(\\ve s_0) \\subseteq {\\mathbb Z}^d$.\n\nWe are interested in computing an optimal action for the leader while guaranteeing there exists a pure Nash equilibrium between the\nfollowers; see Remark \\ref{remark:focus-on-pure} below for justification of our interest in pure Nash equilibria.\nLet $N(\\ve s_0)$ denote the set of pure Nash equilibria between the\nfollowers when the leader has chosen action~$\\ve s_0\\in S_0$.\nAs in Section \\ref{s:pure-nash}, a pure Nash equilibrium in $N(\\ve s_0)$\nis an action profile $\\ve s \\in S(\\ve s_0)$ such that for every $i \\in I$ there does not exist a deviation $\\ve s_i^\\prime \\in S_i(\\ve s_0)$ such that $u_i(\\ve s_{-i},\\ve s_i^\\prime) > u_i(\\ve s)$.\n\nThe leader faces the following optimization problem:\n\\begin{equation}\\label{eq:Stackelberg--Nash}\n\\max_{\\ve s_0, \\ve s}\\{u_0(\\ve s_0, \\ve s) : \\ve s_0 \\in S_0 \\text{ and } \\ve s \\in N(\\ve s_0)\\}.\n\\end{equation}\nLet $N^+$ denote the set of all \\emph{Stackelberg--Nash equilibria}, i.e., optimal\nsolutions~$(\\ve s_0; \\ve s)$ to the 
optimization\nproblem~\\eqref{eq:Stackelberg--Nash}.\n\n\\begin{remark}\\label{remark:optimistic}\nNote that this formulation implicitly assumes that the leader, after choosing $\\ve s_0$, can choose a pure Nash equilibrium $\\ve s \\in N(\\ve s_0)$ in order to maximize her payoff. This is a generalization, to the case of competing followers, of the ``optimistic assumption'' common in the multilevel optimization literature. The simplest illustration of the assumption is in the bilevel setting, where the leader has the ability to choose among alternate optima to the follower's problem (see \\cite{Colson2007}). Here we assume more generally that the leader can choose among the alternate pure Nash equilibria between the followers.\n\\end{remark}\n\n\\begin{remark}\\label{remark:focus-on-pure}\nThe focus solely on pure strategies may need some motivation. Some choice of $\\ve s_0 \\in S_0$ may give rise to no pure Nash equilibria in the followers' game, leaving only mixed Nash equilibria. We assume that the leader will avoid such an $\\ve s_0$, even if it would give rise to higher \\emph{expected} payoffs. Consider this an extreme form of risk aversion: any equilibrium in pure strategies is preferred by the leader so as to avoid any uncertainty in payoffs. By similar reasoning, we also assume the leader will not be interested in the mixed equilibria when pure equilibria exist. 
Extending the optimistic assumption discussed in Remark~\\ref{remark:optimistic}, we assume the leader can compel the followers to reach a pure equilibrium whenever it exists.\n\\end{remark}\n\n\n\\begin{theorem}\\label{theorem:stack-nash}\n Consider a Stackelberg--Nash game with DPLC payoffs defined by the following\n input, given in binary encoding:\n \\begin{inputlist}\n\\item the number $n$ of followers, and a bound $B \\in {\\mathbb N}$;\n\\item the dimension $d_0$ and an inequality description $(M_0,\\ve b_0)$ of a rational polytope $P_0 = \\{\\ve x \\in \\mathbb R^{d_0}: M_0\\ve x \\le \\ve b_0\\}\\subseteq [-B,B]^{d_0}$ defining the leader's feasible set $S_0 = P_0 \\cap {\\mathbb Z}^{d_0}$;\n\\item for each $i \\in I = \\{1,\\dots,n\\}$, the dimension $d_i$, number $m_i$ of constraints, integer $m_i\\times d_i$ matrix $M_i$, integer $m_i \\times d_0$ matrix $\\Phi_{i}$ and integer vector $\\psi_{i} \\in {\\mathbb Z}^{m_i}$ defining the affine function $\\pi_{i}:S_0 \\rightarrow {\\mathbb Z}^{m_i}$ by $\\pi_{i}(\\ve s_0) = \\Phi_i \\ve s_0 + \\psi_{i}$, and defining the follower $i$'s parameterized polytope $P_i(\\ve s_0) = \\{\\ve x \\in \\mathbb R^{d_i}: M_i\\ve x \\le \\pi_i(\\ve s_0)\\}$;\n\\item for each $i \\in I$, nonnegative integers $|K_i|$ and $|L_i|$, and for all integers $k$, $l$ such that $1\\le k \\le |K_i|$ and $1\\le l \\le |L_i|$, integer vectors $\\ve \\alpha_{ik} \\in {\\mathbb Z}^{d}$, $\\ve \\gamma_{il} \\in {\\mathbb Z}^{d}$ (where $d = d_1 + \\cdots + d_n$) and integers $\\beta_{ik}$, $\\delta_{il}$ defining the affine functions $f_{ik}:{\\mathbb Z}^{d} \\rightarrow {\\mathbb Z}$ and $g_{il}:{\\mathbb Z}^{d}\\rightarrow {\\mathbb Z}$ by $f_{ik}(\\ve s) = \\ve \\alpha_{ik}\\cdot\\ve s + \\beta_{ik}$ and $g_{il}(\\ve s) = \\ve \\gamma_{il}\\cdot\\ve s + \\delta_{il}$ for all $\\ve s\\in {\\mathbb Z}^{d}$;\n\\item nonnegative integers $|K_0|$ and $|L_0|$, and for all integers $k$, $l$ such that $1\\le k \\le |K_0|$ and $1\\le l \\le |L_0|$, integer vectors $\\ve \\alpha_{0k} \\in {\\mathbb Z}^{d^+}$, $\\ve \\gamma_{0l} \\in {\\mathbb Z}^{d^+}$ (where $d^+ = d_0 + d$) and integers $\\beta_{0k}$, $\\delta_{0l}$ defining the affine functions $f_{0k}:{\\mathbb Z}^{d^+} \\rightarrow {\\mathbb Z}$ and $g_{0l}:{\\mathbb Z}^{d^+}\\rightarrow {\\mathbb Z}$ by $f_{0k}(\\ve s_0, \\ve s) = \\ve \\alpha_{0k}\\cdot(\\ve s_0,\\ve s) + \\beta_{0k}$ and $g_{0l}(\\ve s_0,\\ve s) = \\ve \\gamma_{0l}\\cdot(\\ve s_0,\\ve s) + \\delta_{0l}$ for all $(\\ve s_0,\\ve s) \\in {\\mathbb Z}^{d^+}$.\n\\end{inputlist}\nThen there exists a polynomial-time\nalgorithm to compute the leader's optimum payoff and a short rational generating\nfunction encoding of the set~$N^+$ of all Stackelberg--Nash equilibria when the total dimension $d^+$ and the sizes $|K_0|,|K_1|, \\dots,|K_n|$ are fixed.\n\\end{theorem}\n\\begin{Proof}\nWe mimic the development leading up to Theorem~\\ref{theorem:extended-main-result} in Section~\\ref{s:pure-nash} by defining an extended game with extended strategy profiles $(\\ve s_0, \\ve s, \\ve y)$, where $y_i \\le u_i(\\ve s)$ for all $i \\in I$. As before, denote $\\hat{\\ve s} = (\\ve s, \\ve y)$. 
Let\n\\begin{displaymath}\nS_{ik} = \\left\\{(\\ve s_0, \\ve s) : \\ve s_0 \\in S_0,~\\ve s \\in S(\\ve s_0), ~ f_{ik}(\\ve s) \\ge f_{ij}(\\ve s) \\text{ for } ~j > k,~ f_{ik}(\\ve s) > f_{ij}(\\ve s) \\text{ for } ~j < k\\right\\}.\n\\end{displaymath}\n\nAfter the quench, the system evolves under the $g > 0$ dynamics, so that the density matrix evolves unitarily as\n\\begin{equation}\n\\hat{\\rho}(\\tau)=e^{-i\\hat{H}_g\\tau}\\hat{\\rho}^{ }_I\\,e^{i\\hat{H}_g\\tau}.\\label{eq:rho_tau} \n\\end{equation}\n\nWe study the dynamics of few-body observables $\\hat{O}$, whose expectation values can be written as\n\\begin{equation}\nO(\\tau)=\\text{Tr}[\\hat{O}\\hat{\\rho}(\\tau)].\n\\end{equation} \nAfter sufficiently long time evolution, $O(\\tau)$ is expected to equilibrate at its steady-state value given by the diagonal ensemble (DE)\n\\begin{equation}\nO_{\\text{DE}}=\\text{Tr}[\\hat{O}\\hat{\\rho}^{ }_{\\text{DE}}], \\quad\\text{where}\\quad\\hat{\\rho}^{ }_{\\text{DE}}=\\lim_{\\tau\\rightarrow\\infty}\\frac{1}{\\tau}\\int_{0}^{\\tau}\\hat{\\rho}(\\tau')d\\tau'\\label{O_DE}.\n\\end{equation}\nIn the absence of extensive degeneracies in the energy spectrum of $\\hat{H}_g$, $\\hat{\\rho}^{ }_{\\text{DE}}$ can be written as~\\cite{rigol2008thermalization, dalessio_kafri_16}:\n\\begin{equation}\n\\hat{\\rho}^{ }_{\\text{DE}}=\\mathcal{P}^{ }_g[\\hat{\\rho}^{ }_I]\n=\\sum_{j}\\left(\\bra{E_j^g}\\hat{\\rho}^{ }_I\\ket{E_j^g}\\right)\\ket{E_j^g}\\bra{E_j^g}\\label{eq:rho_DE},\n\\end{equation}\nwhere $\\{\\ket{E_j^g}\\}$ are the energy eigenkets of $\\hat{H}_g$.\n\nTo describe the expectation values of observables after equilibration when $\\hat{H}_g$ is nonintegrable, because of eigenstate thermalization~\\cite{deutsch1991quantum, srednicki1994chaos, rigol2008thermalization, dalessio_kafri_16}, the diagonal ensemble $\\hat{\\rho}^{ }_{\\text{DE}}$ can be replaced with a Gibbs ensemble $\\hat{\\rho}^{ }_{\\text{GE}}$, \n\\begin{equation}\nO_{\\text{GE}}=\\text{Tr}[\\hat{O}\\hat{\\rho}^{ }_{\\text{GE}}]\\label{O_GE}.\n\\end{equation}\nThis can be done because one can 
show under very general physically relevant assumptions that, as a result of eigenstate thermalization, $O_{\\text{DE}}=O_{\\text{GE}}$ for few-body or local operators $\\hat{O}$~\\cite{dalessio_kafri_16}. We stress that $\\hat{\\rho}^{ }_{\\text{GE}}$ needs to be determined taking into account the conserved quantities of $\\hat{H}_g$. In this work, we consider a nonintegrable $\\hat H_0$ with only one conserved quantity, the number of particles $\\hat{N}$. \n\nFor a perturbation that breaks particle-number conservation, the thermal state $\\hat{\\rho}^{ }_{\\text{GE}}$ is given by\n\\begin{equation}\\label{eq:rho_GE}\n\\hat{\\rho}^{ }_{\\text{GE}}= \n\\dfrac{ e^{-\\hat{H}_g\/T}}{ \\text{Tr}[e^{-\\hat{H}_g\/T}]}, \\quad\\text{when}\\quad[\\hat{H}_g,\\hat{N}]\\ne0.\n\\end{equation}\nThe temperature $T$ is the only parameter that needs to be determined in $\\hat{\\rho}^{ }_{\\text{GE}}$. To find $T$ one uses the conservation of energy under the unitary evolution dictated by $\\hat{H}_g$:\n\\begin{equation}\n\\text{Tr}[\\hat\\rho^{ }_{\\text{GE}}\\hat H_g]=\\text{Tr}[\\hat \\rho^{ }_{I}\\hat H_g]\\label{finalT}.\n\\end{equation}\n\nIf the perturbation does not break particle-number conservation, then $\\hat{\\rho}^{ }_{\\text{GE}}$ has the form\n\\begin{equation}\\label{eq:rho_GE_N}\n\\hat{\\rho}^{ }_{\\text{GE}}= \n\\dfrac{ e^{-(\\hat{H}_g+\\mu\\hat{N})\/T}}{ \\text{Tr}[e^{-(\\hat{H}_g+\\mu\\hat{N})\/T}]}, \\quad\\text{when}\\quad [\\hat{H}_g,\\hat{N}]=0.\n\\end{equation}\nThe parameters $T$ and $\\mu$ are determined in this case using the conservation of energy~\\eqref{finalT} and of the number of particles\n\\begin{equation}\n\\text{Tr}[\\hat\\rho^{ }_{\\text{GE}}\\hat N]=\\text{Tr}[\\hat \\rho_{I}\\hat N]\\label{finalmu}.\n\\end{equation} \n\n\\subsection{Prethermalization and projected dynamics}\n\nHere we summarize the framework of prethermalization and the assumptions involved, as introduced in Ref.~\\cite{mallayya2019prethermalization}. 
The main assumption is a ``weak coupling'' condition $g\\tau_0\\ll 1$, where $\\tau_0$ is the time scale at which equilibration occurs under the reference (unperturbed) dynamics dictated by $\\hat{H}_0$. Since we want to avoid having $\\tau_0$ and the perturbed dynamics depend on the system size, something that would happen if there is net transport of particles and\/or energy between different parts of the system, we assume that the initial states and the Hamiltonians are translationally invariant. Under the weak coupling assumption, thermalization occurs at times $\\tau\\gg\\tau_0$.\n\nAnother condition needed to observe a two-step relaxation process is that the initial value of the conserved quantity per site, in our case $n_I=\\text{Tr}[\\hat{N}\\hat{\\rho}^{ }_I]\/L$ ($L$ is the number of lattice sites), should be sufficiently different from its equilibrated value, $n^{ }_{\\text{DE}}=\\text{Tr}[\\hat{N}\\hat{\\rho}^{ }_{\\text{DE}}]\/L$, namely, that $|n_I-n^{ }_{\\text{DE}}|\\sim\\mathcal{O}(1)>\\mathcal{O}(g)$. This ensures that the prethermal and thermal results for the observable are separated beyond $\\mathcal{O}(g)$ corrections. This condition was not explicitly stated in Ref.~\\cite{mallayya2019prethermalization}, and we will justify it with numerical results in Sec.~\\ref{Sec_Perturbations}.\n\nNow let us specify what we mean, in the two-step relaxation process, by fast prethermalization followed by a slow thermalization.\\\\\n(i) \\textbf{\\textit{Fast prethermal dynamics}} at $\\tau\\lesssim\\tau_0$: $O(\\tau)$ resembles the reference dynamics and relaxes to a prethermal value. This value is determined by the DE of $\\hat{H}_0$, which is $\\mathcal{P}^{ }_0[\\hat{\\rho}^{ }_I]$ [Eq.~\\eqref{eq:rho_DE}], with $\\mathcal{O}(g)$ corrections.\n\n(ii) \\textbf{\\textit{Slow thermalization}} at $\\tau\\gg\\tau_0$: $O(\\tau)$ relaxes slowly to the true equilibrium value $O_{\\text{DE}}$ [Eq.~\\eqref{O_DE}]. 
The slow relaxation proceeds via intermediate equilibrium states of $\\hat{H}_0$. These intermediate equilibrium states are projected diagonal ensembles (P-DEs) of $\\hat{H}_0$, $\\hat{\\rho}^{ }_{\\text{P-DE}}(\\tau)$. They are obtained by acting with the projector $\\mathcal{P}^{ }_0$ on $\\hat{\\rho}(\\tau)$,\n\\begin{eqnarray}\n\\hat{\\rho}^{ }_{\\text{P-DE}}(\\tau)&=&\\mathcal{P}^{ }_0[\\hat{\\rho}(\\tau)]\\nonumber\\\\\n&=&\\sum_{i}\\left(\\bra{E_i^0}\\hat{\\rho}(\\tau)\\ket{E_i^0}\\right)\\ket{E_i^0}\\bra{E_i^0}\\label{proj_DE},\n\\end{eqnarray}\nwhere $\\{\\ket{E_i^0}\\}$ are the simultaneous eigenkets of $\\hat{H}_0$ and $\\hat{N}$. $\\hat{\\rho}^{ }_{\\text{P-DE}}(\\tau)$ defines the projected dynamics for the observable $\\hat{O}$\n\\begin{equation}\nO_{\\text{P-DE}}(\\tau)=\\text{Tr}\\left\\{\\hat{O}\\hat{\\rho}^{ }_{\\text{P-DE}}(\\tau)\\right\\}.\\label{eq:O_proj_DE}\n\\end{equation}\nBy construction, the exact dynamics and the projected dynamics are identical for the conserved quantities of the reference Hamiltonian, namely, for $\\hat{N}$ and $\\hat{H}_0$. For other observables $\\hat{O}$, an $\\mathcal{O}(g)$ discrepancy is expected between $O(\\tau)$ and $O_{\\text{P-DE}}(\\tau)$ for $\\tau\\gg\\tau_0$. 
The initial value $O_{\\text{P-DE}}(\\tau=0)$ is that of the prethermal state.\n\nIn the thermodynamic limit when $\\hat{H}_0$ is nonintegrable, because of eigenstate thermalization, the expectation values of observables can be computed replacing P-DE by a Gibbs ensemble (P-GE):\n\\begin{equation}\nO_{\\text{P-GE}}(\\tau)=\\text{Tr}[\\hat O \\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)]\\label{eq:O_PGE},\n\\end{equation}\nwhere $\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)$ is given by\n\\begin{equation}\n\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)=\\dfrac{ e^{-\\left[\\hat{H}_0+\\mu(\\tau)\\hat{N}\\right]\/T(\\tau)}}{ \\text{Tr}\\left\\{e^{-\\left[\\hat{H}_0+\\mu(\\tau)\\hat{N}\\right]\/T(\\tau)}\\right\\}},\\label{eq:rho_Proj_GE}\n\\end{equation}\nwith $T(\\tau)$ and $\\mu(\\tau)$ determined by the instantaneous values of the conserved quantities of $\\hat{H}_0$:\n\\begin{eqnarray}\n\\text{Tr}[\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)\\hat H_0]&=&\\text{Tr}[\\hat \\rho(\\tau)\\hat H_0]\\label{eq:proj_finalT},\\\\\n\\text{Tr}[\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)\\hat N]&=&\\text{Tr}[\\hat \\rho(\\tau)\\hat N]\\label{eq:proj_finalmu}.\n\\end{eqnarray}\nThe dynamics of the P-GE is thus dictated by the evolution of the broken conserved quantities (per site) of $\\hat{H}_0$, namely, $n(\\tau)=\\text{Tr}[\\hat \\rho(\\tau)\\hat N]\/L$ and $e_0(\\tau)=\\text{Tr}[\\hat \\rho(\\tau)\\hat H_0]\/L$. Their dynamics is described by an autonomous equation based on Fermi's golden rule (FGR). 
For $n(\\tau)$, this is given by\n\\begin{eqnarray}\n\\dfrac{dn}{d\\tau}&=&\\dfrac{2\\pi g^2}{L}\\sum_{i,j} \\delta(E^0_j-E^0_i) \\left(\\bra{E^0_j}\\hat N\\ket{E^0_j}-\\bra{E^0_i}\\hat N\\ket{E^0_i}\\right)\\nonumber\\\\&&\\times\\left|\\bra{E^0_j}\\hat{V}\\ket{E^0_i}\\right|^2P^0_i(\\tau) + \\mathcal{O}(g^3),\\label{fermi_rate}\n\\end{eqnarray}\nwhere $\\{\\ket{E_i^0}\\}$ are the simultaneous eigenkets of $\\hat{H}_0$ and $\\hat{N}$, and $P^0_i(\\tau)$ are the diagonal matrix elements of the P-DE at $\\tau$. $P^0_i(\\tau)$ can equivalently be replaced with the diagonal matrix elements of the P-GE, $\\bra{E_i^0}\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)\\ket{E_i^0}$, which in turn are determined by $n(\\tau)$ and $e_0(\\tau)$. In Sec.~\\ref{sec_rate_NLCE}, we refine this FGR expression to obtain a relaxation rate. \n\nThe reference energy $e_0(\\tau)$ is approximately constant as $\\hat{H}_0$ and $\\hat{H}_g$ differ only by the perturbation term, and $\\hat{H}_g$ is an exact conserved quantity in the dynamics. If energy is not conserved, such as when $\\hat{V}$ is time dependent, a similar autonomous equation can describe the evolution of $e_0(\\tau)$. In the context of periodically driven perturbations, the FGR based equation was used to compute heating rates in Ref.~\\cite{mallayya2019heating}.\n\n(iii) \\textbf{\\textit{Thermal equilibrium}} at $\\tau\\rightarrow\\infty$:\nAfter sufficiently long time, the system attains its final equilibrium state under $\\hat{H}_g$, predicted by the DE [Eq.~\\eqref{eq:rho_DE}]. 
Since we consider a nonintegrable $\\hat{H}_g$, $\\hat\\rho^{ }_{\\text{DE}}$ can be replaced by $\\hat\\rho^{ }_{\\text{GE}}$ [Eq.~\\eqref{eq:rho_GE}] to compute the expectation values of observables after equilibration.\n\nThe projected dynamics $O_{\\text{P-DE}}(\\tau)$ [Eq.~\\eqref{eq:O_proj_DE}] also attains an equilibrium steady-state value $\\overline{O}_{\\text{P-DE}}=\\lim_{\\tau\\rightarrow\\infty}\\tau^{-1}\\int_{0}^{\\tau}O_{\\text{P-DE}}(\\tau')d\\tau'$ given by\n\\begin{equation}\n\\overline{O}_{\\text{P-DE}}=\\text{Tr}\\left\\{\\hat{O}\\mathcal{P}^{ }_0\\left[\\hat{\\rho}^{ }_{\\text{DE}}\\right]\\right\\},\\label{eq:Obar_PDE}\n\\end{equation}\nwhere the density matrix $\\mathcal{P}^{ }_0\\left[\\hat{\\rho}^{ }_{\\text{DE}}\\right]$ describes the equilibrium state of the projected dynamics ($\\overline{\\text{P-DE}}$). \n\nWith nonintegrable $\\hat{H}_0$, $\\overline{O}_{\\text{P-DE}}$ can be replaced with the thermal value $\\overline{O}_{\\text{P-GE}}$ given by\n\\begin{equation}\n\\overline{O}_{\\text{P-GE}}=\\text{Tr}\\left[\\hat{\\overline{\\rho}}^{ }_{\\text{P-GE}}\\hat{O}\\right]\\label{eq:O_PGE_bar},\n\\end{equation}\nwhere $\\hat{\\overline{\\rho}}^{ }_{\\text{P-GE}}$ is the Gibbs ensemble describing this thermal equilibrium ($\\overline{\\text{P-GE}}$). $\\hat{\\overline{\\rho}}^{ }_{\\text{P-GE}}$ is given by Eq.~\\eqref{eq:rho_Proj_GE} but with $\\bar{T}$ and $\\bar{\\mu}$ instead of $T(\\tau)$ and $\\mu(\\tau)$, respectively, determined by the equations\n\\begin{eqnarray}\n\\text{Tr}[\\hat{\\overline{\\rho}}^{ }_{\\text{P-GE}}\\hat H_0]&=&\\text{Tr}[\\hat \\rho^{ }_{\\text{GE}}\\hat H_0]\\label{eq:proj_finalT_bar},\\\\\n\\text{Tr}[\\hat{\\overline{\\rho}}^{ }_{\\text{P-GE}}\\hat N]&=&\\text{Tr}[\\hat \\rho^{ }_{\\text{GE}}\\hat N]\\label{eq:proj_finalmu_bar},\n\\end{eqnarray}\nwhere $\\hat \\rho^{ }_{\\text{GE}}$ is the true thermal state of $\\hat{H}_g$ [Eq.~\\eqref{eq:rho_GE}]. 
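The Lagrange parameters in these Gibbs ensembles are fixed by conservation conditions such as Eq.~\\eqref{finalT} and Eqs.~\\eqref{eq:proj_finalT} and \\eqref{eq:proj_finalmu}. As a minimal numerical sketch of that step, the code below fits the temperature of a Boltzmann distribution over a hypothetical random spectrum by bisection; fitting $\\mu$ as well would add a second condition and a two-dimensional root search. The spectrum, bracket, and tolerance are illustrative assumptions, not the model of this work.

```python
import numpy as np

# Hypothetical stand-in spectrum for the eigenvalues of the Hamiltonian.
rng = np.random.default_rng(0)
energies = np.sort(rng.uniform(-5.0, 5.0, size=200))

def gibbs_energy(T):
    """Ensemble energy <H> at temperature T, with Boltzmann weights e^{-E/T}
    (energies shifted by the ground state for numerical stability)."""
    w = np.exp(-(energies - energies.min()) / T)
    return float(np.sum(energies * w) / np.sum(w))

def fit_temperature(e_target, t_lo=1e-3, t_hi=1e3, tol=1e-10):
    """Bisection on T: <H>(T) is monotonically increasing in T, so the
    energy-matching condition has a unique solution inside the bracket."""
    for _ in range(200):
        t_mid = 0.5 * (t_lo + t_hi)
        if gibbs_energy(t_mid) < e_target:
            t_lo = t_mid
        else:
            t_hi = t_mid
        if t_hi - t_lo < tol:
            break
    return 0.5 * (t_lo + t_hi)
```

In practice the traces are evaluated with NLCE or ED rather than over an explicit list of eigenvalues, but the root-finding structure of the parameter fit is the same.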
\n\n\n\\section{Hamiltonian, observables, and computational techniques}\\label{sec:hamnlce}\n\nIn this section we introduce the model Hamiltonian, the observables of interest, and the numerical techniques used in our calculations. \n\n\\subsection{Hamiltonians and observables}\n\nWe study the dynamics of strongly interacting hard-core bosons in a translationally invariant one-dimensional lattice with $L$ sites. Let $\\hat{b}_i^{\\dagger}$ $(\\hat{b}_i^{})$ be the creation (annihilation) operator of a hard-core boson at site $i$. They satisfy bosonic commutation relations with the hard-core constraint: $\\hat{b}_i^2=(\\hat{b}_i^{\\dagger})^2=0$. \n\nOur reference Hamiltonian is\n\\begin{eqnarray}\n&&\\hat{H}_0=\\sum_i \\left[ -t\\left( \\hat{b}^\\dagger_i \\hat{b}^{}_{i+1} + \\textrm{H.c.} \\right) -t'\\left( \\hat{b}^\\dagger_i \\hat{b}^{}_{i+2} + \\textrm{H.c.} \\right) \n\\right.\\label{model_H}\\\\\n&&\\left.+V\\left(\\hat{n}^{}_i-\\dfrac{1}{2}\\right)\\hspace*{-0.1cm}\\left(\\hat{n}^{}_{i+1}-\\dfrac{1}{2}\\right)+V'\\left(\\hat{n}^{}_i-\\dfrac{1}{2}\\right)\\hspace*{-0.1cm}\\left(\\hat{n}^{}_{i+2}-\\dfrac{1}{2}\\right)\\hspace*{-0.05cm}\\right],\\nonumber\n\\end{eqnarray}\nwhere $\\hat{n}_i=\\hat{b}^{\\dagger}_{i}\\hat{b}^{ }_{i}$, $t$ ($t'$) is the nearest (next-nearest)-neighbor hopping and $V$ ($V'$) is the nearest (next-nearest)-neighbor interaction strength. We always consider $t\\ne0$ and $V\\ne0$, and fix $t'=V'=0.7$, so that $\\hat{H}_0$ is nonintegrable~\\cite{santos2010onset}. The total number of particles, $\\hat{N}=\\sum_i \\hat{b}_i^{\\dagger}\\hat{b}_i^{}$, is the only local conserved quantity of this Hamiltonian, $[\\hat{H}_0,\\hat{N}]=0$. 
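As a minimal, self-contained illustration of the model, the sketch below builds the matrix of $\\hat{H}_0$ [Eq.~\\eqref{model_H}] for hard-core bosons mapped to bit strings on a short chain. The small size ($L=6$) and open boundary conditions are assumptions made purely for this sketch; the results in this work are obtained with NLCE and periodic chains.

```python
import numpy as np

# Illustrative parameters: t = V = 1 and t' = V' = 0.7, as in the text;
# L = 6 sites with open boundaries is an assumption for this sketch only.
L, t, tp, V, Vp = 6, 1.0, 0.7, 1.0, 0.7
dim = 2 ** L
# occ[state, i] = occupation n_i of site i in basis state `state`
occ = np.array([[(state >> i) & 1 for i in range(L)] for state in range(dim)])

H0 = np.zeros((dim, dim))
for state in range(dim):
    n = occ[state]
    # diagonal interactions V (n_i - 1/2)(n_{i+1} - 1/2) and the V' analog
    for i in range(L - 1):
        H0[state, state] += V * (n[i] - 0.5) * (n[i + 1] - 0.5)
    for i in range(L - 2):
        H0[state, state] += Vp * (n[i] - 0.5) * (n[i + 2] - 0.5)
    # hoppings -t b†_i b_{i+1} + H.c. and -t' b†_i b_{i+2} + H.c.
    # (hard-core bosons carry no fermionic signs)
    for i, amp, j in [(i, t, i + 1) for i in range(L - 1)] + \
                     [(i, tp, i + 2) for i in range(L - 2)]:
        if n[i] == 0 and n[j] == 1:        # move a particle from site j to i
            new = (state | (1 << i)) & ~(1 << j)
            H0[new, state] += -amp
            H0[state, new] += -amp

# Total particle number operator, diagonal in this basis
N_op = np.diag(occ.sum(axis=1).astype(float))
```

One can check directly that the resulting matrix is Hermitian and that it commutes with $\\hat{N}$, i.e., that the hopping terms conserve the particle number.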
We consider two perturbations of $\\hat{H}_0$, which we denote as $g_1\\hat{V}_1$ and $g_N\\hat{V}_N$.\n\nOur first perturbed Hamiltonian has the form $\\hat{H}_{g_1}=\\hat{H}_0+g_1\\hat{V}_1$, where\n\\begin{equation}\ng_1\\hat{V}_1 = g_1\\sum_i\\left[\\hat{b}^{}_{i}+\\frac{1}{2}\\left(\\hat{b}^{}_{i}\\hat{b}^{}_{i+1} \\right)+ \\text{H.c.}\\right].\\label{H1}\n\\end{equation}\nWe note that $\\hat{H}_{g_1}$ is not particle-number conserving, because $[\\hat{V}_1,\\hat{N}]\\ne0$, but it is particle-hole symmetric.\n\nOur second perturbed Hamiltonian has the form $\\hat{H}_{g_N}=\\hat{H}_0+g_N\\hat{V}_N$, where\n\\begin{equation}\ng_N\\hat{V}_N = g_N\\sum_i\\left[\\dfrac{1}{2}\\left(\\hat{b}^{\\dagger}_{i}\\hat{n}_{i+1}\\hat{b}^{ }_{i+2}\\right)+ \\text{H.c.}\\right].\\label{eq:HN}\n\\end{equation}\nThis perturbed Hamiltonian is particle-number conserving, because $[\\hat{V}_N,\\hat{N}]=0$, but it is not particle-hole symmetric, because $\\hat{V}_N$ breaks that symmetry of $\\hat{H}_0$. In what follows we use $\\hat{H}_g=\\hat{H}_0+g\\hat V$ as a common notation for $\\hat{H}_{g_1}=\\hat H_0+g_1\\hat V_1$ and $\\hat{H}_{g_N}=\\hat H_0+g_N\\hat V_N$. We only make a distinction between the two Hamiltonians when needed. \n\nThe pre-quench Hamiltonian $\\hat{H}_I$, which determines the initial state $\\hat{\\rho}^{ }_I$, is also described by Eq.~\\eqref{model_H} but with different coupling parameters $t_I$ and $V_I$. This ensures that $[\\hat H_I,\\hat H_0]\\ne 0$ so that $\\hat{\\rho}^{ }_I$ has nontrivial reference dynamics. $\\hat{\\rho}^{ }_I$ is in thermal equilibrium with respect to $\\hat{H}_I$ at a temperature $T_I$ and a chemical potential $\\mu_I$:\n\\begin{equation}\n\\hat{\\rho}^{ }_I=\\dfrac{ e^{-(\\hat{H}_I+\\mu_I\\hat{N})\/T_I}}{ \\text{Tr}[e^{-(\\hat{H}_I+\\mu_I\\hat{N})\/T_I}]}.\\label{eq:rho_I}\n\\end{equation}\n$\\mu_I$ allows us to control the site occupation of the initial state, $n(\\tau=0)=\\text{Tr}[\\hat{N}\\hat{\\rho}^{ }_I]\/L$. 
Due to the particle-hole symmetry of $\\hat{H}_I$, $\\mu_I=0$ corresponds to half filling. \n\n\\textit{Parameters for the numerical calculations}: We take $\\hat{H}_0$ [Eq.~\\eqref{model_H}] to have $t=V=1$, so that both nearest- and next-nearest-neighbor terms make $\\hat{H}_0$ nonintegrable. For the pre-quench Hamiltonian $\\hat{H}_I$, we take $t_I=0.5$ and $V_I=1.5$ so that the quench is not small and the system is far from equilibrium at time $\\tau=0$. The numerical results are generic and not sensitive to the choice of these coupling parameters. For the initial state $\\hat{\\rho}^{ }_I$ [Eq.~\\eqref{eq:rho_I}], we take $T_I=10$. We consider the initial chemical potential $\\mu_I=0$ to study quenches at half filling, and $\\mu_I=1.5$ to study quenches away from half filling (so that the initial site occupation is $n_I= 0.47$). \n\n\\textit{Observables:} We study the dynamics of two extensive observables: \\\\\n(i) The total number of particles $\\hat{N}$, whose expectation value per site (the site occupation) has been denoted as $n$. $\\hat{N}$ is the only local conserved quantity of $\\hat{H}_0$. \\\\\n(ii) The nearest neighbor density correlator \n\\begin{equation}\n\\hat{U}=\\sum_i\\left(\\hat{n}^{}_i-\\dfrac{1}{2}\\right)\\hspace*{-0.1cm}\\left(\\hat{n}^{}_{i+1}-\\dfrac{1}{2}\\right),\\label{eq:U}\n\\end{equation} \nwhose expectation value per site is denoted as $u$. This is an experimentally accessible local observable that exhibits nontrivial dynamics under both $\\hat{H}_0$ and $\\hat{H}_g$. 
The dynamics of other local observables such as the nearest-neighbor one-body correlator $\\left(\\sum_i \\hat{b}^\\dagger_i \\hat{b}^{}_{i+1} + \\textrm{H.c.}\\right)$ are qualitatively similar to that of $u(\\tau)$.\n\n\\subsection{Computational approaches}\n\n\\subsubsection{Numerical linked cluster expansion (NLCE)}\n\nWe use a numerical linked cluster expansion (NLCE) to calculate the expectation value of extensive observables $\\hat{O}$ per site, $\\langle \\hat{O}\\rangle\/L$, in the thermodynamic limit~\\cite{rigol2006numerical, *rigol2007numerical1, *rigol2007numerical2}. NLCE allows one to compute $\\langle \\hat{O}\\rangle\/L$ as a sum over contributions from all connected clusters $c$ that can be embedded on the lattice:\n\\begin{equation}\\label{nlce_eq}\n\\langle \\hat{O}\\rangle\/L=\\sum_{c}M(c)\\times W_{O}(c),\n\\end{equation}\nwhere $W_{O}(c)$ is the weight of cluster $c$, and $M(c)$ is the number of ways per site to embed the cluster $c$ in the lattice. $W_O(c)$ is computed for each cluster $c$ using the inclusion-exclusion principle:\n\\begin{equation}\\label{weight_subtraction}\nW_{O}(c)=\\langle\\hat{O}\\rangle_c- \\sum_{s \\subset c} W_{O}(s),\n\\end{equation}\nwhere $\\langle\\hat{O}\\rangle_c$ is the expectation value of $\\hat{O}$ in cluster $c$ and $s\\subset c$ denotes all connected sub-clusters of $c$. For the smallest cluster $c_0$, $W_{O}(c_0)=\\langle\\hat{O}\\rangle_{c_0}$. \n\nFor each cluster $c$, $\\langle\\hat{O}\\rangle_c = \\text{Tr}[\\hat{\\rho}^c\\hat{O}]$, where $\\hat{\\rho}^c$ is the relevant density matrix in the cluster. The appropriate cluster Hamiltonians $\\hat{H}^c$ that define $\\hat{\\rho}^c$ are modified from their definition in the thermodynamic limit to fit the sites and bonds in the cluster. For example, the initial state, $\\hat{\\rho}^{c}_I$ is\ngiven by Eq.~\\eqref{eq:rho_I} with the Hamiltonian $\\hat{H}_I\\rightarrow\\hat{H}^{c}_I$ for the cluster. 
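For one-dimensional chains, the recursion of Eq.~\\eqref{weight_subtraction} takes a particularly simple form: a chain with $c$ contiguous sites contains $(c-s+1)$ connected sub-chains with $s$ sites, and in Eq.~\\eqref{nlce_eq} each chain has one embedding per site. The sketch below runs this inclusion-exclusion on a hypothetical toy ``observable'' (an extensive bulk term plus a boundary correction, standing in for a full ED expectation value).

```python
# Toy cluster expectation value: bulk value `a` per site plus a boundary
# correction `b` (a hypothetical stand-in for <O>_c from exact diagonalization).
a, b = 0.5, 0.2

def O_cluster(c):
    return a * c + b

def nlce_partial_sums(max_order):
    """Inclusion-exclusion weights for 1D chains: a c-site chain contains
    (c - s + 1) connected sub-chains with s sites, and M(c) = 1 per site."""
    weights = {}
    sums = []
    total = 0.0
    for c in range(1, max_order + 1):
        w = O_cluster(c)
        for s in range(1, c):
            w -= (c - s + 1) * weights[s]   # subtract embedded sub-chains
        weights[c] = w
        total += w                          # add W(c) with multiplicity M(c)=1
        sums.append(total)
    return sums
```

For this toy observable the weights vanish beyond second order and the partial sums converge to the bulk value per site; for a genuine quantum $\\hat{O}$ the weights decay with cluster size instead of truncating exactly.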
Similarly, $\\hat{\\rho}^c(\\tau)$ [Eq.~\\eqref{eq:rho_tau}], $\\hat{\\rho}^{c}_{\\text{DE}}$ [Eq.~\\eqref{eq:rho_DE}], $\\hat{\\rho}^{c}_{\\text{GE}}$ [Eqs.~\\eqref{eq:rho_GE} and~\\eqref{eq:rho_GE_N}], $\\hat{\\rho}^{c}_{\\text{P-DE}}(\\tau)$ [Eq.~\\eqref{proj_DE}] and $\\hat{\\rho}^{c}_{\\text{P-GE}}(\\tau)$ [Eq.~\\eqref{eq:rho_Proj_GE}] are evaluated using the cluster modified Hamiltonians $\\hat{H}_g\\rightarrow\\hat{H}^c_g$ and $\\hat{H}_0\\rightarrow\\hat{H}^c_0$. $\\langle\\hat{O}\\rangle_c$ is calculated numerically using full exact diagonalization. \n\nSince our model has nearest- and next-nearest-neighbor bonds, one has the freedom to choose different building blocks to construct the clusters in NLCE~\\cite{mallayya2017numerical}. We use the maximally connected expansion~\\cite{rigol2014quantum}, in which each cluster $c$ is made of contiguous sites with all possible bonds present in the cluster modified Hamiltonians. The maximally connected expansion is ideal for dynamics~\\cite{mallayya2018quantum}, as well as for diagonal and grand canonical ensemble calculations, in our one-dimensional lattices~\\cite{rigol2014quantum, mallayya2017numerical}.\n\nThe order of the NLCE is set by the number of sites of the largest cluster used in the expansion. We compute the NLCE up to the 19$^{\\text{th}}$ order for quenches that conserve $\\hat{N}$, after exploiting reflection symmetry in the particle number sectors (the dimension of the largest sector is 46252). For quenches that break particle conservation, we exploit particle-hole and reflection symmetry to compute the NLCE up to the 18$^{\\text{th}}$ order (the largest sector dimension is 65792).\n\n\\subsubsection{Exact diagonalization (ED)}\n\nFor the quenches that break particle-number conservation, we also study dynamics in finite lattices with $L$ sites and periodic boundary conditions (PBC) solved using ED. 
The largest chain we solve has $L=19$ sites (whose largest sector dimension is 13797).\n\nWe also use ED to compute the FGR rate in Eq.~\\eqref{eq:FGR_rate_N} [see Sec.~\\ref{sec_rate_NLCE} and Appendix~\\ref{appd}]. Evaluating Eq.~\\eqref{eq:FGR_rate_N} does not involve dynamics, and requires only the diagonalization of $\\hat{H}_0$, which conserves $\\hat{N}$. We evaluate Eq.~\\eqref{eq:FGR_rate_N} in chains with $L=22$ sites (whose largest sector dimension is 32065).\n\n\\section{Results}\\label{sec:results}\n\n\\subsection{Perturbations and initial states}\\label{Sec_Perturbations}\n\nIn Ref.~\\cite{mallayya2019prethermalization}, we studied prethermalization and thermalization in the context of perturbations that break a conserved quantity, and initial states for which the expectation value of the conserved quantity was different from the value after equilibration. Whether this two-step relaxation process occurs for perturbations or initial states that do not change the value of the conserved quantity is the question that we address next. \n\nIn Fig.~\\ref{Fig:Pert_dynamics} we show results for $u(\\tau)$ [Eq.~\\eqref{eq:U}] under three different scenarios. In Fig.~\\ref{Fig:Pert_dynamics}(a), the system evolves under a particle-number-conserving Hamiltonian, $\\hat{H}_{g_N}=\\hat{H}_0+g_N\\hat{V}_N$ [Eq.~\\eqref{eq:HN}]. In Fig.~\\ref{Fig:Pert_dynamics}(b), the system evolves under $\\hat{H}_{g_1}=\\hat{H}_0+g_1\\hat{V}_1$ [Eq.~\\eqref{H1}], which breaks particle-number conservation, but the initial state is at half filling. Due to the particle-hole symmetry of $\\hat{H}_{g_1}$, the system remains at half filling and the expectation value of the broken conserved quantity does not evolve in time. In Fig.~\\ref{Fig:Pert_dynamics}(c), the system evolves under $\\hat{H}_{g_1}=\\hat{H}_0+g_1\\hat{V}_1$ [Eq.~\\eqref{H1}] as in Fig.~\\ref{Fig:Pert_dynamics}(b), but the initial state is away from half filling. 
After equilibration under the dynamics dictated by $\\hat{H}_{g_1}$, the system must be at half filling due to the particle-hole symmetry of $\\hat{H}_{g_1}$. Thus the expectation value of the broken conserved quantity evolves in time.\n\n\\begin{figure}[!t]\n\\includegraphics[width=0.985\\linewidth]{ Pert_dynamics.pdf}\n \\caption{Dynamics of the nearest-neighbor correlation $u(\\tau)$ [Eq.~\\eqref{eq:U}] after a quench under three scenarios (see text). (a) The perturbation $g_N\\hat{V}_N$ conserves the particle number. The initial site occupation is $n_I=0.47$ [set with $\\mu_I=1.5$ and $T_I=10$ in Eq.~\\eqref{eq:rho_I}]. (b) The perturbation $g_1\\hat{V}_1$ breaks particle-number conservation. $n_I=1\/2$ ($\\mu_I=0$), and $T_I=10$. (c) (main panel and inset) The same perturbation $g_1\\hat{V}_1$ as in (b) and the same initial state as in (a) ($n_I=0.47$). All main panels show $u(\\tau)$ evaluated at the highest two orders of the NLCE: 19$^{\\text{th}}$ order (NLCE-19) and 18$^{\\text{th}}$ order (NLCE-18) for $\\hat{N}$-commuting Hamiltonians [all values of $g_N$ in (a) and $g_1=0$ in (b) and (c)], and 18$^{\\text{th}}$ order and 17$^{\\text{th}}$ order (NLCE-17) in all others. The inset in (c) shows $u(\\tau)$ in a finite chain with $L=19$ sites and periodic boundary conditions (PBC) solved using ED. All panels show the final equilibrium values given by the DE (shown for the highest order of the NLCE in the main panels, and $L=19$ sites in the inset) and GE prediction (in the main panels, evaluated to machine precision with NLCE). Also shown in (c) are the projected dynamics (P-DE) results for $g_1>0$, obtained using the 18$^{\\text{th}}$-order NLCE (main panel) and $L=19$ sites with ED (inset).}\n\\label{Fig:Pert_dynamics}\n\\end{figure}\n\nThe main panels in Fig.~\\ref{Fig:Pert_dynamics} show $u(\\tau)$ in the thermodynamic limit as obtained using the NLCE. 
The fact that the curves for the last two orders of the NLCE overlap indicates that convergence errors are small during the dynamics. The equilibrium values predicted by the diagonal ensemble (DE) are evaluated in the highest order of the NLCE, and are shown as horizontal dashed lines. The thermal equilibrium values predicted by the grand canonical ensemble (GE) are also evaluated using the NLCE, and are shown as horizontal solid lines. The GE results converge exponentially faster than the DE ones, and attain machine precision already at much lower orders of the NLCE~\\cite{rigol2014quantum, *rigol2016fundamental, mallayya2017numerical}. To obtain the exact value in the GE within machine precision, a 15$^{\\text{th}}$-order NLCE is sufficient as the relative convergence errors become $<10^{-12}$. The agreement between the DE and GE results in the main panels in Figs.~\\ref{Fig:Pert_dynamics}(a)--\\ref{Fig:Pert_dynamics}(c) shows that the system attains thermal equilibrium in the three scenarios considered, as expected for nonintegrable systems~\\cite{rigol2014quantum, rigol2016fundamental}. The small discrepancy between the DE and GE results is a consequence of the relatively slower convergence of the DE calculations. In Appendix~\\ref{appa}, we show that the DE estimate for $\\hat{U}$ converges towards the GE result with increasing orders of the NLCE. \n\nOne can see in Figs.~\\ref{Fig:Pert_dynamics}(a) and~\\ref{Fig:Pert_dynamics}(b) that $u(\\tau)$ relaxes to the thermal equilibrium result in a single step. On the other hand, Fig.~\\ref{Fig:Pert_dynamics}(c) shows the characteristic two-step relaxation process of prethermalization and thermalization. For $g_1=0.06$ and 0.12, the prethermal dynamics dominates at $\\tau\\lesssim 5$, and it is followed by a slow relaxation to the thermal equilibrium results. The dynamics in the slow relaxation regime is closely described by the projected dynamics [P-DE, see Eq.~\\eqref{eq:O_proj_DE}]. 
The (small) discrepancy between the actual dynamics and the projected one at late times is quadratic in $g_1$, and its relative magnitude is $\\lesssim 10^{-3}$ (see Appendix~\\ref{appb} and Ref.~\\cite{mallayya2019prethermalization}).\n\nThe three scenarios considered in Fig.~\\ref{Fig:Pert_dynamics} help us sharpen the conditions needed to observe a two-step relaxation process. Having in mind that the energy of the perturbed Hamiltonian differs from the energy of the reference Hamiltonian only by an $\\mathcal{O}(g)$ correction, one can see that if the perturbation conserves $\\hat{N}$ [the case in Fig.~\\ref{Fig:Pert_dynamics}(a)] then the ``prethermal'' and the thermal equilibrium states also differ only by an $\\mathcal{O}(g)$ correction. Hence, no two-step relaxation process will be seen. \n\nBreaking a conservation law can make a big difference between the prethermal and thermal results even if the perturbation is small. To understand this, let us focus on the case in which the particle-number conservation is broken. In that case, the thermal equilibrium result is described by $\\hat{\\rho}^{ }_{\\text{GE}}$ in Eq.~\\eqref{eq:rho_GE}, which ensures maximal entropy at fixed energy without any constraint on $\\hat{N}$, while the prethermal result is given by $\\hat{\\rho}^{ }_{\\text{GE}}$ in Eq.~\\eqref{eq:rho_GE_N}, which ensures maximal entropy at fixed energy and fixed particle number. Even if the energy of the perturbed Hamiltonian differs from the energy of the reference Hamiltonian only by an $\\mathcal{O}(g)$ correction, the two ensembles are different if $n_I=\\text{Tr}[\\hat{N}\\hat{\\rho}^{ }_{I}]\/L$ is different from $n^{ }_{{\\text{GE}}}(g)=\\text{Tr}[\\hat{N}\\hat{\\rho}^{ }_{\\text{GE}}]\/L$. Namely, observables in the prethermal state will generally be $\\mathcal{O}(1)$ different from those in thermal equilibrium if the site occupations have an $\\mathcal{O}(1)$ difference. 
Then, the slow (because $g$ is small) dynamics driven by the perturbation will bring the system from the prethermal equilibrium to the thermal one [Fig.~\\ref{Fig:Pert_dynamics}(c)]. However, if the initial state has $n_I=n^{ }_{{\\text{GE}}}(g)$, as is the case in Fig.~\\ref{Fig:Pert_dynamics}(b) where $n_I=n^{ }_{{\\text{GE}}}(g)=0.5$, then the difference between the prethermal and the thermal ensembles is only $\\mathcal{O}(g)$, and the dynamics will exhibit a single-step relaxation like the one observed when particle-number conservation is not broken [Fig.~\\ref{Fig:Pert_dynamics}(a)].\n\nIn Fig.~\\ref{Fig:O_GE_vs_g}, we show the expectation value of $\\hat{U}$ after thermalization, $u^{ }_{\\text{GE}} = \\text{Tr} [\\hat{U} \\hat{\\rho}^{ }_{\\text{GE}}]\/L$, as a function of the perturbation strength for the three cases considered in Fig.~\\ref{Fig:Pert_dynamics}. Only the red dashed line in Fig.~\\ref{Fig:O_GE_vs_g}, corresponding to the GE of the dynamics studied in Fig.~\\ref{Fig:Pert_dynamics}(c), shows an $\\mathcal{O}(1)$ difference between the prethermal $(g_1=0)$ and the thermal $(g_1>0)$ equilibrium results in Fig.~\\ref{Fig:O_GE_vs_g}. This was the dynamics that exhibited two-step relaxation. On the other hand, the two cases that show only a single step relaxation have $\\mathcal{O}(g)$ differences between the predictions of the prethermal and the thermal equilibrium ensembles.\n\n\\begin{figure}[!t]\n\\includegraphics[width=0.95\\linewidth]{ UE1_GE_vs_g.pdf}\n \\caption{Thermal equilibrium (GE) value of the nearest-neighbor density correlation ($u^{ }_{\\text{GE}}$), after evolution under the three scenarios considered in Fig.~\\ref{Fig:Pert_dynamics}: $g_N\\hat{V}_N$ with $n_I=0.47$ $(\\mu_I=1.5)$, $g_1\\hat{V}_1$ with $n_I=1\/2$ $(\\mu_I=0)$, and $g_1\\hat{V}_1$ with $n_I=0.47$ ($\\mu_I=1.5$). 
All the results for $u^{ }_{\\text{GE}}$ reported in this figure were obtained with the 15$^{\\text{th}}$-order NLCE and are converged to machine precision accuracy.} \n\\label{Fig:O_GE_vs_g}\n\\end{figure}\n\nWhile the previous discussion pertained to the thermodynamic limit, the two-step relaxation picture can also be seen in the dynamics of a finite-size system. In the inset in Fig.~\\ref{Fig:Pert_dynamics}(c), we show the relaxation of $u(\\tau)$ under $\\hat{H}_{g_1}$ (also when the initial state is away from half filling), in a finite chain with $19$ sites and periodic boundary conditions. Those results were obtained using exact diagonalization (ED). $u(\\tau)$ for the $19$-site chain looks qualitatively similar to the thermodynamic limit result, with an early fast prethermal dynamics followed by a slow relaxation to the DE of the finite-size system. The slow relaxation regime follows the projected dynamics given by the P-DE of the finite chain. That said, as we discuss in Sec.~\\ref{sec:finsize}, finite-size effects can lead to misleading conclusions about the dependence of the relaxation rates on the strength of the perturbation so care should be taken when studying finite-size systems.\n\n\\KM{Another point to be highlighted about the results reported in Fig.~\\ref{Fig:Pert_dynamics} is that they make apparent the need to sharpen analyses reported in recent papers that explored prethermalization and thermalization in the context of random matrix theory and typicality~\\cite{reimann2019typicality, dabelow2020modification, dabelow2020relaxation, *dabelow2021typical, richter_20, heitmann2021nontrivial}. 
In those works prethermalization was argued to be generic in systems with perturbed dynamics, with no explicit conditions on the nature of the perturbations.}\n\n\\subsection{Projected Dynamics}\\label{sec_proj_dy}\n\nHaving identified two essential conditions to observe a two-step relaxation dynamics (under perturbation $g_1\\hat{V_1}$ and an initial state with $n_I\\ne1\/2$), we focus next on the slow thermalization regime described by the projected dynamics of $\\hat{U}$. In the thermodynamic limit, the projected dynamics can be described equivalently with a P-DE [Eq.~\\eqref{proj_DE}] or a P-GE [Eq.~\\eqref{eq:rho_Proj_GE}]. The final equilibrium value of the projected dynamics can be evaluated equivalently with $\\overline{\\text{P-DE}}$ defined in Eq.~\\eqref{eq:Obar_PDE} or its thermal counterpart $\\overline{\\text{P-GE}}$ [Eq.~\\eqref{eq:O_PGE_bar}]. \n\\KM{In Ref.~\\cite{mallayya2019prethermalization}, the projected dynamics were computed only using the diagonal ensemble (P-DE). In Fig.~\\ref{Fig:Proj_Dynamics}, we show results for the projected dynamics in both the DE and the GE} evaluated within the last two orders (17 and 18) of the NLCE, and in the two largest chains (18 and 19 sites) with periodic boundary conditions solved using ED. \n\nLet us first focus on the inset in Fig.~\\ref{Fig:Proj_Dynamics}, which shows results for P-GE evaluated using NLCE and ED. To evaluate P-GE at the $l^{\\text{th}}$-order NLCE ($L$ sites with ED), the dynamics of the reference energy $e_0(\\tau)=\\text{Tr}[\\hat \\rho(\\tau)\\hat H_0]\/L$ and particle number $n(\\tau)=\\text{Tr}[\\hat \\rho(\\tau)\\hat N]\/L$ are evaluated using the $l^{\\text{th}}$-order NLCE ($L$ sites with ED). From $e_0(\\tau)$ and $n(\\tau)$, the temperature $T(\\tau)$ and chemical potential $\\mu(\\tau)$ that define $\\hat{\\rho}_{\\text{P-GE}}(\\tau)$ are determined by numerically solving Eqs.~\\eqref{eq:proj_finalT} and~\\eqref{eq:proj_finalmu} at each $\\tau$. 
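Determining the temperature and chemical potential that match the instantaneous energy and particle number is, at its core, a root-finding task. The sketch below is not the paper's code: it uses a hypothetical five-level toy spectrum and matches only the energy at fixed chemical potential, purely to illustrate the bisection idea behind solving such matching conditions.

```python
import math

# Hypothetical toy spectrum: (eigenenergy, particle number) pairs standing
# in for the eigenvalues of a small reference Hamiltonian (illustration only).
levels = [(-2.0, 1), (-0.5, 1), (0.3, 2), (1.7, 0), (2.5, 2)]

def grand_canonical(beta, mu):
    """Energy and particle number in the Gibbs ensemble ~ exp(-beta*(E - mu*N))."""
    weights = [math.exp(-beta * (E - mu * N)) for E, N in levels]
    Z = sum(weights)
    e = sum(w * E for w, (E, N) in zip(weights, levels)) / Z
    n = sum(w * N for w, (E, N) in zip(weights, levels)) / Z
    return e, n

def solve_beta(target_e, mu=0.0, lo=1e-4, hi=50.0):
    """Bisect for beta such that <E>(beta, mu) = target_e.  At mu = 0,
    <E> decreases monotonically with beta, so bisection converges once
    the target energy lies between the high- and low-beta limits."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        e, _ = grand_canonical(mid, mu)
        if e > target_e:
            lo = mid  # ensemble still too hot: increase beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = solve_beta(-1.0)
e_check, _ = grand_canonical(beta, 0.0)
assert abs(e_check + 1.0) < 1e-9
```

In the actual calculation the energy and particle-number conditions are coupled, so a two-dimensional root finder is required; the monotonic one-dimensional case above only illustrates the matching step.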
The two equations are solved to an accuracy of relative errors $\\lesssim 10^{-11}$. With $\\hat{\\rho}^{ }_{\\text{P-GE}}(\\tau)$ thus obtained, the P-GE of $\\hat{U}$ is evaluated to machine precision with NLCE. With all other calculations converged to machine precision, the only source of error in the P-GE calculations stems from the convergence errors (finite-size errors) in obtaining $e_0(\\tau)$ and $n(\\tau)$ in the dynamics with the $l^{\\text{th}}$-order NLCE ($L$ sites with ED). The inset in Fig.~\\ref{Fig:Proj_Dynamics} shows that the errors in the P-GE dynamics of $\\hat{U}$ are apparent only at long times ($\\tau \\gtrsim20$). At earlier times, the relative error between the 18$^{\\text{th}}$-order NLCE and the $L=19$ sites' ED calculation is less than $10^{-7}$ for $\\tau <2$, $10^{-5}$ for $\\tau<3$ and $10^{-3}$ for $\\tau\\lesssim 10$. Thus the P-GE at short times is accurately estimated both using NLCE and ED.\n\n\\begin{figure}[!t]\n\\includegraphics[width=0.985\\linewidth]{ PQ_dynamics.pdf}\n \\caption{Projected dynamics of $u(\\tau)$ following the quench $\\hat{H}_I\\rightarrow\\hat{H}_{g_1}$, with $g_1=0.06$. The initial state has $\\mu_I=1.5$ ($n_I=0.47$). The main panel shows the projected DE (P-DE) results evaluated with NLCE to 17$^{\\text{th}}$-order (NLCE-17) and 18$^{\\text{th}}$-order (NLCE-18), and ED in chains with $18$ sites (ED-18) and 19 sites (ED-19) and periodic boundary conditions. The equilibrium value of the projected DE $(\\overline{\\text{P-DE}})$ [Eq.~\\eqref{eq:Obar_PDE}] is evaluated with NLCE-18 and shown as a horizontal dotted line. The inset shows the projected GE (P-GE) results obtained using NLCE-17 and NLCE-18 (the P-GE results for NLCE-18 are also shown in main panel), as well as ED-18 and ED-19. 
The thermal equilibrium result of the projected dynamics $(\\overline{\\text{P-GE}})$ [Eq.~\\eqref{eq:O_PGE_bar}] is shown in the main panel and the inset as a horizontal dashed line (computed within machine precision using the 15$^{\\text{th}}$-order NLCE).}\n\\label{Fig:Proj_Dynamics}\n\\end{figure}\n\nIn the main panel in Fig.~\\ref{Fig:Proj_Dynamics}, we show the projected dynamics of $\\hat{U}$ given by the P-DE, evaluated within the 17$^{\\text{th}}$ and 18$^{\\text{th}}$-order NLCE, $L=18$ and 19 sites using ED, as well as the P-GE result from the 18$^{\\text{th}}$-order NLCE (the same curve shown in the inset). The results in the main panel in Fig.~\\ref{Fig:Proj_Dynamics} show that the P-DE results from the ED calculations exhibit much larger deviations from the P-GE results at short times than the P-DE results from the NLCE calculations (we remind the reader that the P-GE results at short times exhibit negligible convergence errors, as shown in the inset). With increasing $L$, the P-DE results approach those of the P-GE, albeit slowly. With increasing order of the NLCE, the already closer NLCE results for the P-DE at short times rapidly approach those of the P-GE. \n\nThe thermal equilibrium result of the projected dynamics $\\overline{\\text{P-GE}}$ (expected to be reached at long times) is given by Eq.~\\eqref{eq:O_PGE_bar} where $\\bar{T}$ and $\\bar{\\mu}$ are obtained numerically using Eqs.~\\eqref{eq:proj_finalT_bar} and~\\eqref{eq:proj_finalmu_bar}. For the expectation value of $\\hat{U}$ in $\\overline{\\text{P-GE}}$, $\\bar{u}^{ }_{\\text{P-GE}}$, no step in the calculation involves dynamics and all the density matrices are Gibbs ensembles, which means that one can use the NLCE to calculate $\\bar{u}^{ }_{{\\text{P-GE}}}$ exactly to machine precision. The result is shown in the main panel and the inset in Fig.~\\ref{Fig:Proj_Dynamics} as a dashed horizontal line marked $\\overline{\\text{P-GE}}$. 
The equilibrium value of the P-DE predicted by $\\overline{\\text{P-DE}}$ [Eq.~\\eqref{eq:Obar_PDE}] for $\\hat{U}$, $\\bar{u}^{ }_{\\text{P-DE}}$, evaluated with NLCE, is shown in the main panel as a dotted horizontal line. The discrepancy between $\\bar{u}^{ }_{\\text{P-DE}}$ and $\\bar{u}^{ }_{\\text{P-GE}}$ is due to the convergence errors of the former. Like other DE predictions, these errors decrease with increasing order of the NLCE (see Appendix~\\ref{appa}).\n\nFrom the above analysis, we conclude that the slow relaxation regime of observables in the thermodynamic limit is most accurately described by the P-GE at short times, and by $\\overline{\\text{P-GE}}$ at $\\tau\\rightarrow\\infty$. The P-DE results evaluated with NLCE are close to, and rapidly approach, the P-GE results at short times. On the other hand, the P-DE results obtained using ED in a periodic chain with $L$ sites exhibit large deviations from the P-GE results at short times, and approach the P-GE results slowly with increasing $L$.\n\n\\begin{figure*}[!t]\n\\includegraphics[width=0.98\\linewidth]{ Rates_NLCE_FGR.pdf}\n \\caption{Slow relaxation dynamics of $u(\\tau)$ and $n(\\tau)$ as described by projected-dynamics calculations following a quench $\\hat{H}_I\\rightarrow\\hat{H}_{g_1}$ in the thermodynamic limit. The initial state is taken to have $\\mu_I=1.5$ and $T_I=10$. In (a) we show P-GE results for $u(\\tau)$ and in (b) for $n(\\tau)$. In both panels exponential relaxation is captured by the distance to thermalization $\\delta_l[u(\\tau)]$ [Eq.~\\eqref{eq:delta_l_u}] and $\\delta_l[n(\\tau)]$ [Eq.~\\eqref{eq:delta_l_n}], respectively. The results in (a) and (b) are evaluated within the $18^{\\text{th}}$ (NLCE-18) and $l=17^{\\text{th}}$ (NLCE-17) orders of the NLCE, for three perturbation strengths, $g_1=0.03,\\,0.06,$ and 0.12. The solid lines are exponential fits to the NLCE-18 results in the time interval $3\\le\\tau\\le 16$. 
(c) The relaxation rates obtained from exponential fits to $\\delta_l[u(\\tau)]$ [Rate($u$)] and $\\delta_l[n(\\tau)]$ [Rate($n$)] with NLCE-18 and NLCE-17 in the P-GE, as well as the Rate($u$) obtained from fits to the P-DE with NLCE-18, for various values of $g_1$. All exponential fits were done for $3\\le\\tau\\le 16$. The lines give the rate, $\\Gamma$, predicted by the FGR formula [Eq.~\\eqref{eq:FGR_rate_N}] evaluated using ED for a periodic chain with $L=22$ (solid line) and $L=20$ sites (dashed line). Rate($n$) and $\\Gamma$ are multiplied by a factor 2 to be compared with Rate($u$), see text.}\n\\label{Fig:Rate_NLCE}\n\\end{figure*}\n\n\\subsection{Relaxation rates in the thermodynamic limit}\\label{sec_rate_NLCE} \n\nHere we analyze the slow thermalization of $\\hat{U}$ and $\\hat{N}$ for the quench $\\hat{H}_I\\rightarrow\\hat{H}_{g_1}$ in which the initial state has $\\mu_I=1.5$ ($n_I=0.47$). This slow relaxation regime is described by the projected dynamics. We showed in the previous section that, in the thermodynamic limit, the projected dynamics and its long-time equilibrium are described most accurately by the P-GE [Eq.~\\eqref{eq:O_PGE}] and $\\overline{\\text{P-GE}}$ [Eq.~\\eqref{eq:O_PGE_bar}], respectively, evaluated with NLCE. \n\nIn Fig.~\\ref{Fig:Rate_NLCE}(a), we show that the relaxation dynamics of $\\hat{U}$ predicted by the P-GE is well described by an exponential for $g_1\\lesssim 0.12$. 
We show results for the normalized distance to thermalization:\n\\begin{equation}\n\\delta_l[u(\\tau)]=\\left|\\dfrac{u^{l}_{{\\text{P-GE}}}(\\tau)-\\bar{u}^{ }_{{\\text{P-GE}}}}{\\bar{u}^{ }_{{\\text{P-GE}}}}\\right|, \\label{eq:delta_l_u}\n\\end{equation}\nwhere $u^{l}_{{\\text{P-GE}}}(\\tau)$ is the prediction of the P-GE at time $\\tau$ evaluated at the $l^{\\text{th}}$-order NLCE, and $\\bar{u}^{ }_{{\\text{P-GE}}}$ is the thermal equilibrium value of the projected dynamics evaluated to machine precision.\n\nSimilarly, the broken conserved quantity $\\hat{N}$ relaxes exponentially as shown in Fig.~\\ref{Fig:Rate_NLCE}(b), with its distance to thermalization $\\delta_l[n(\\tau)]$ given by\n\\begin{equation}\n\\delta_l[n(\\tau)]=\\left|\\dfrac{n^{ }_{l}(\\tau)-n^{ }_{{\\text{GE}}}}{n^{ }_{{\\text{GE}}}}\\right|, \\label{eq:delta_l_n}\n\\end{equation}\nwhere $n^{ }_{l}(\\tau)$ is the particle number at $\\tau$ evaluated at the $l^{\\text{th}}$-order NLCE and $n^{ }_{{\\text{GE}}}=1\/2$ is the equilibrium site occupation. For both $\\delta_l[u(\\tau)]$ and $\\delta_l[n(\\tau)]$, the last two orders of the NLCE are well converged at short times and fit well to an exponential (from which we extract the relaxation rate). To fit the exponential, we select a time window $3\\le\\tau\\le16$ that excludes long times at which the convergence is not as good, and early times at which transient prethermal dynamics is present. The rates thus extracted for $0.03\\le g_1\\le0.12$ are shown in Fig.~\\ref{Fig:Rate_NLCE}(c).\n\nA well defined, time-independent, relaxation rate of $\\hat{N}$ can be obtained using the drift $dn\/d\\tau$ given by the FGR formula [Eq.~\\eqref{fermi_rate}] when $n(\\tau)$ is sufficiently close to $n^{ }_{{\\text{GE}}}$. This eventually occurs at long times for any $n_I\\ne n^{ }_{{\\text{GE}}}$. 
In the case of nonintegrable $\\hat{H}_0$ and the perturbation $g_1\\hat{V}_1$ [Eq.~\\eqref{H1}], we can simplify Eq.~\\eqref{fermi_rate} using the P-GE, and obtain the $\\tau$ independent relaxation rate (see Appendix~\\ref{appc}). The FGR equation involves two separate contributions from the perturbation: $\\hat{V}_1=\\hat{\\mathcal{V}}_{\\eta=1}+ \\hat{\\mathcal{V}}_{\\eta=2}$, where $\\hat{\\mathcal{V}}_{\\eta}$ connects the eigenkets of $\\hat{H}_0$ that differ by $\\eta$ particles. In a block diagonalized symmetry sector $s$ of both $\\hat{H}_{0}$ and $\\hat{N}$, let $F^s_{N,\\eta}(E_0)$ be the coarse grained value of the squared matrix elements of $\\hat{\\mathcal{V}}_{\\eta}$ given by\n\\begin{equation}\nF^s_{N,\\eta}(E_0)=\\text{Avg}_{\\Delta E}\\left(|\\bra{E^{N+\\eta}_j}\\hat{\\mathcal{V}}_{\\eta}\\ket{E^{N}_i}|^2\\right),\\label{eq:Vij_sq}\n\\end{equation}\nwhere $\\ket{E^{N}_i}$ and $\\ket{E^{N+\\eta}_j}$ are the eigenkets of $\\hat{H}_0$ with energies within a small window $(E_0-\\Delta E\/2, E_0+\\Delta E\/2)$, and particle number $N$ and $N+\\eta$, respectively. The FGR rate $\\Gamma$ for this system, when $L\\gg 1$, is given by $\\Gamma=\\Gamma_{(\\eta=1)}+\\Gamma_{(\\eta=2)}$ with\n\\begin{eqnarray}\n\\Gamma_{\\eta}=&&\\dfrac{2\\pi\\eta^2g_1^2}{\\text{Tr}[\\hat{N}^2 e^{-\\bar{\\beta}\\hat{H}_0}]}\\sum_{s}\\sum_{N=0}^{L-\\eta}\\int dE_0\\ e^{-\\bar{\\beta} E_0}\\label{eq:FGR_rate_N}\\nonumber\\\\ &&\\times F^s_{N,\\eta}(E_0)D^{s}_{N+\\eta}(E_0)D^{s}_{N}(E_0)\\label{eq:fgrtl}\n\\end{eqnarray}\nwhere $\\bar{\\beta}^{-1}=\\bar{T}$ is the temperature of the $\\overline{\\text{P-GE}}$ [Eq.~\\eqref{eq:proj_finalT_bar}] in the limit of $g_1\\rightarrow 0$, $D^s_N(E_0)$ is the density of states of $\\hat{H}_0$ at energy $E_0$, particle number $N$, in sector $s$, and $F^s_{N,\\eta}(E_0)$ is defined in Eq.~\\eqref{eq:Vij_sq}. 
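The coarse graining that defines the matrix-element function above is simply a windowed average of squared off-diagonal matrix elements over a narrow energy shell. A minimal sketch of that averaging step, using synthetic random numbers in place of the actual matrix elements (in a real calculation these come from exact diagonalization of the reference Hamiltonian):

```python
import random

random.seed(0)

# Synthetic stand-ins for (E_0, |<E_j^{N+eta}|V_eta|E_i^N>|^2) pairs;
# the values here are random placeholders, not the model's matrix elements.
pairs = [(random.uniform(-1.0, 1.0), random.uniform(0.0, 1e-4))
         for _ in range(5000)]

def coarse_grained_F(E0, dE):
    """Average the squared matrix elements whose energy lies in the
    window (E0 - dE/2, E0 + dE/2)."""
    window = [v for e, v in pairs if abs(e - E0) < dE / 2]
    return sum(window) / len(window) if window else 0.0

F = coarse_grained_F(0.0, 0.2)
assert 0.0 < F < 1e-4  # a windowed mean of values drawn from [0, 1e-4)
```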
For $\\Gamma$ in Eq.~\\eqref{eq:FGR_rate_N} to be well defined in the thermodynamic limit $F^s_{N,\\eta}(E_0)$ needs to be $\\propto [D^{s}_{N}(E_0)]^{-1}$, where we used that $D^{s}_{N+\\eta}(E_0)\\simeq D^{s}_{N}(E_0)$ in large systems. This is expected to be the case both in nonintegrable and in integrable-interacting systems for local operators that connect different sectors of a Hamiltonian~\\cite{leblondrigol2020}. We note that Eq.~\\eqref{eq:fgrtl} has similarities with the heating rate formula derived in Ref.~\\cite{mallayya2019heating} for periodically driven perturbations. \n\nWe calculate $\\Gamma$ evaluating Eq.~\\eqref{eq:FGR_rate_N} numerically for a finite system with periodic boundary conditions using ED (see Appendix~\\ref{appd}). Since this calculation does not involve dynamics and requires only the diagonalization of $\\hat{H}_0$, which conserves $\\hat{N}$, we are able to compute the rates using a larger periodic chain (with $22$ sites). In Fig.~\\ref{Fig:Rate_NLCE}(c), we show that the rates of $\\hat{N}$ [Rate($n$)] extracted from exponential fits like the ones shown in Fig.~\\ref{Fig:Rate_NLCE}(b) are in excellent agreement with $\\Gamma$ evaluated using Eq.~\\eqref{eq:FGR_rate_N}. \n\nOn the other hand, the rates of $\\hat{U}$ [Rate($u$)] estimated from fits of the P-GE results [as in Fig.~\\ref{Fig:Rate_NLCE}(a)] are twice as large. This factor of 2 can be understood by noticing that in the P-GE dictated by the evolution of $n(\\tau)$, the particle-hole symmetry of $\\hat{U}$ implies that $\\left( u^{l}_{{\\text{P-GE}}} (\\tau)-\\bar{u}^{ }_{{\\text{P-GE}}} \\right) \\propto \\left(n(\\tau)-0.5\\right)^2 + \\mathcal{O} \\left( \\left|n(\\tau)-0.5\\right|^4 \\right)$\\KM{, when $n(\\tau)$ is close to 0.5 [so that an expansion in powers of $n(\\tau)-0.5$ is meaningful]}. Thus $\\delta_l[n(\\tau)]\\propto \\exp\\left(-\\Gamma \\tau\\right)$ results in $\\delta_l[u(\\tau)]\\propto \\exp \\left(-2\\Gamma \\tau\\right)$. 
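This factor of 2 is easy to verify with synthetic data: if the deviation of the site occupation from 1/2 decays at some rate, any quantity whose leading deviation is quadratic in that difference decays at twice the rate. A sketch (the rate and amplitude below are made up for illustration):

```python
import math

gamma = 0.05                                 # hypothetical rate for n(tau)
taus = [3.0 + k for k in range(14)]          # fit window 3 <= tau <= 16
n = [0.5 + 0.03 * math.exp(-gamma * t) for t in taus]
u = [(x - 0.5) ** 2 for x in n]              # leading particle-hole-symmetric term

def rate(ts, ds):
    """Decay rate from a linear least-squares fit of log(d) versus t."""
    ys = [math.log(d) for d in ds]
    tb, yb = sum(ts) / len(ts), sum(ys) / len(ys)
    return -sum((t - tb) * (y - yb) for t, y in zip(ts, ys)) / \
        sum((t - tb) ** 2 for t in ts)

r_n = rate(taus, [abs(x - 0.5) for x in n])  # decays as exp(-gamma*tau)
r_u = rate(taus, u)                          # decays as exp(-2*gamma*tau)
assert abs(r_u / r_n - 2.0) < 1e-9
```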
The Rate($u$) and $2\\times$Rate($n$) estimated from exponential fits to the P-GE dynamics evaluated using the 17$^{\\text{th}}$ and 18$^{\\text{th}}$ orders of the NLCE, and $2\\Gamma$ evaluated with FGR in chains with $L=20$ and $L=22$ sites show excellent agreement with each other, with small discrepancies developing at larger $g_1\\sim 0.12$ (where FGR becomes less accurate). \n\nIn Fig.~\\ref{Fig:Rate_NLCE}(c), we also show the thermalization rates for $\\hat{U}$ obtained using fits to the P-DE dynamics, $u^{ }_{{\\text{P-DE}}}(\\tau)$, evaluated at the $18^{\\text{th}}$-order NLCE. Those rates are computed using exponential fits to the equivalent of Eq.~\\eqref{eq:delta_l_u} in the P-DE, involving $u^{ }_{{\\text{P-DE}}}(\\tau)$ and the equilibrium value $\\bar{u}^{ }_{{\\text{P-DE}}}$ [Eq.~\\eqref{eq:Obar_PDE}]. The P-DE rates are in good agreement with those obtained in the other calculations, as expected given the results in Fig.~\\ref{Fig:Proj_Dynamics}.\n\n\n\\section{Relaxation rates in finite systems}\\label{sec:finsize} \n\nIn this section, we discuss what happens when one uses finite-system calculations to study relaxation rates during dynamics at even smaller values of the perturbation strength than the ones considered in Fig.~\\ref{Fig:Rate_NLCE}. Since the relaxation dynamics is very slow in that regime, one needs long times to be able to fit exponentials and extract relaxation rates. This is something that can always be done using exact diagonalization in finite systems. On the other hand, because of the lack of convergence of NLCE calculations at long times, this is a regime that cannot be studied using NLCEs.\n\n\\begin{figure*}[!t]\n\\includegraphics[width=0.99\\linewidth]{ Rates_ED_PQDE_FGR.pdf}\n\\caption{The projected dynamics of $u(\\tau)$ and $n(\\tau)$ as described by the P-DE in finite chains with $L$ sites and periodic boundaries. 
The exponential relaxation of (a) $u(\\tau)$ and (b) $n(\\tau)$ is captured by the distance to equilibrium, $\\delta_L[u(\\tau)]$ [Eq.~\\eqref{eq:delta_L_u}] and $\\delta_L[n(\\tau)]$ [Eq.~\\eqref{eq:delta_L_n}], respectively. The results in (a) and (b) are obtained in chains with $L=19$ sites, for the perturbation strengths $g_1=0.005,\\, 0.01,\\, 0.03,\\,0.06,$ and 0.12. For clarity, the results for $\\delta_L[u(\\tau)]$ when $g_1=0.005$ and 0.01 are shown as an inset in (a). The solid lines are exponential fits in the time interval $3\\le\\tau\\le 20$. (c) The relaxation rates obtained from exponential fits to $\\delta_L[u(\\tau)]$ [Rate($u$)] and $\\delta_L[n(\\tau)]$ [Rate($n$)] for $L=17,\\,18,$ and 19 sites, and various \\KM{positive and negative} values of $g_1$ \\KM{($0.001\\le |g_1|\\le 0.12$)}. All exponential fits were done for $3\\le\\tau\\le 20$. The solid line shows the result of a power-law fit \\KM{$\\alpha |g_1|^{\\beta}$ to $\\delta_{19}[u(\\tau)]$ for $10^{-3}\\le |g_1|\\le10^{-2}$}. The dotted line shows the FGR rate $\\Gamma$ [Eq.~\\eqref{eq:FGR_rate_N}] evaluated using ED for $L=22$ sites. Rate($n$) and $\\Gamma$ are multiplied by a factor 2 to be compared with Rate($u$), see text.}\\label{Fig:Rate_ED}\n\\end{figure*}\n\nWe study the P-DE dynamics of $\\hat{U}$ and $\\hat{N}$ for the same quench discussed in Sec.~\\ref{sec_rate_NLCE}, but explore much smaller perturbations $g_1\\in(0.001, 0.12)$. As shown in the inset in Fig.~\\ref{Fig:Pert_dynamics}(c), the P-DE dynamics in finite systems closely follows the exact dynamics, but it can be significantly different from the P-GE dynamics (see Fig.~\\ref{Fig:Proj_Dynamics}). The differences are expected to increase in finite systems for smaller perturbations, because the assumption of thermalization at each time during the slow relaxation (to a new thermal equilibrium determined by the instantaneous site occupation) is less justified. 
Hence, in this section, we do not assume thermalization and use the P-DE [Eq.~\\eqref{eq:O_proj_DE}] to describe dynamics and $\\overline{\\text{P-DE}}$ [Eq.~\\eqref{eq:Obar_PDE}] to describe the final equilibrium state of the P-DE. \n\nIn Fig.~\\ref{Fig:Rate_ED}(a), we show how the expectation value of $\\hat{U}$ equilibrates in finite systems for different values of $g_1$. There we plot the distance to equilibrium \n\\begin{equation}\n\\delta_L[u(\\tau)]=\\left|\\dfrac{u^{L}_{{\\text{P-DE}}}(\\tau)-\\bar{u}^{L}_{{\\text{P-DE}}}}{\\bar{u}^{L}_{{\\text{P-DE}}}}\\right|, \\label{eq:delta_L_u}\n\\end{equation}\nwhere $u^{L}_{{\\text{P-DE}}}(\\tau)$ is the expectation value in the P-DE at time $\\tau$, and $\\bar{u}^{L}_{{\\text{P-DE}}}$ is the equilibrated result predicted by $\\overline{\\text{P-DE}}$, for a chain with $L$ sites. Similarly, in Fig.~\\ref{Fig:Rate_ED}(b), we show the equilibration dynamics of $\\hat{N}$ as characterized by the distance to equilibrium\n\\begin{equation}\n\\delta_L[n(\\tau)]=\\left|\\dfrac{n^{ }_{L}(\\tau)-n^{ }_{{\\text{DE}}}}{n^{ }_{{\\text{DE}}}}\\right|, \\label{eq:delta_L_n}\n\\end{equation}\nwhere $n^{ }_{L}(\\tau)$ is the particle number (per site) at $\\tau$, and the equilibrium site occupation $n^{ }_{{\\text{DE}}}=1\/2$ (because of the particle-hole symmetry of $\\hat{H}_{g_1}$). \n\nFigures~\\ref{Fig:Rate_ED}(a) and~\\ref{Fig:Rate_ED}(b) show that both $\\delta_L[u(\\tau)]$ and $\\delta_L[n(\\tau)]$ exhibit exponential regimes, for a chain with $L=19$ sites, similar to those identified in Figs.~\\ref{Fig:Rate_NLCE}(a) and~\\ref{Fig:Rate_NLCE}(b) in the NLCE calculations for the thermodynamic limit. A fit to $\\delta_L[u(\\tau)]$ and $\\delta_L[n(\\tau)]$ in the time window $3\\le\\tau\\le 20$ agrees well with an exponential. 
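Extracting the relaxation rate from such data amounts to a linear least-squares fit of the logarithm of the distance to equilibrium versus time over the chosen window. A self-contained sketch with synthetic data (the amplitude, rate, and transient below are illustrative, not the paper's values):

```python
import math

def fit_exponential(taus, deltas, t_min=3.0, t_max=20.0):
    """Fit delta(tau) ~ A*exp(-rate*tau) on the window [t_min, t_max] via
    linear least squares on log(delta); returns (A, rate)."""
    pts = [(t, math.log(d)) for t, d in zip(taus, deltas) if t_min <= t <= t_max]
    tb = sum(t for t, _ in pts) / len(pts)
    yb = sum(y for _, y in pts) / len(pts)
    slope = sum((t - tb) * (y - yb) for t, y in pts) / \
        sum((t - tb) ** 2 for t, _ in pts)
    return math.exp(yb - slope * tb), -slope

# Synthetic distance-to-equilibrium data: a short early transient followed
# by a clean exponential regime (all numbers are made up).
taus = [0.5 * k for k in range(61)]          # 0 <= tau <= 30
deltas = [0.1 * math.exp(-0.02 * t) + (0.05 if t < 2 else 0.0) for t in taus]
A, r = fit_exponential(taus, deltas)
assert abs(r - 0.02) < 1e-9 and abs(A - 0.1) < 1e-9
```

Restricting the fit window, as done in the text, keeps both the early transient and the poorly converged late times out of the fit.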
The relaxation rates thus obtained from exponential fits for $\\hat{U}$ [Rate($u$)] and $\\hat{N}$ [Rate($n$)] for various \\KM{positive and negative} values of $g_1$ and chains with three different sizes $L$, along with the FGR rate $\\Gamma$ from Fig.~\\ref{Fig:Rate_NLCE} for $L=22$, are shown in Fig.~\\ref{Fig:Rate_ED}(c). \n\nIn Fig.~\\ref{Fig:Rate_ED}(c), the results for Rate($u$) exhibit a surprising (and potentially misleading) finite-size effect. The Rate($u$) for \\KM{$|g_1|<0.01$} deviates significantly from the expected FGR prediction of $2\\Gamma$, and scales almost linearly with \\KM{$|g_1|$}. The latter behavior is different from the signature $g_1^2$ scaling of the FGR prediction. Increasing $L$ brings the Rate($u$) slowly towards $2\\Gamma$ by pushing the spurious scaling regime to smaller values of \\KM{$|g_1|$}. Currently, we do not understand why such a linear scaling regime emerges in the Rate($u$) at small $|g_1|$ in finite-system calculations. A recent study exploring how integrability and Anderson localization are broken in finite systems showed that a highly nontrivial regime precedes the onset of quantum chaos~\\cite{leblond2020}. We expect similar nontrivial finite-size effects when breaking only one conservation law. Another point to be noted about the results for the Rate($u$) in Fig.~\\ref{Fig:Rate_ED}(c) is that, as expected because of finite-size effects, the deviations from the FGR predictions are larger than those seen in the NLCE calculations in Fig.~\\ref{Fig:Rate_NLCE} for $0.03\\le g_1\\le0.12$. 
This can be understood by noticing that $n^{ }_{L}(\\tau)$, being the conserved quantity of $\\hat{H}_0$, has the same value in the P-DE (by definition) and the P-GE (by construction) as for the actual dynamics under $\\hat{H}_{g_1}$. Since the actual dynamics at short times has small finite-size effects (because of locality, at short times the system does not ``know'' its extent), $n^{ }_{L}(\\tau)$ at short times agrees well with its value in the thermodynamic limit. At long times, both in finite systems and in the thermodynamic limit, $n^{ }_{{\\text{DE}}}=1\/2$ for all values of $g_1$ because this is set by the particle-hole symmetry. Hence, the thermodynamic limit behavior of $\\delta_L[n(\\tau)]$ at short times is properly captured by the exact diagonalization calculations and so are the values of Rate($n$) obtained from the (short-)time evolution fits. \n\n\\section{Summary}\\label{sec:summary}\n\nWe used numerical simulations to carry out an in-depth analysis of the framework of prethermalization and thermalization introduced in Ref.~\\cite{mallayya2019prethermalization}. We considered far-from-equilibrium initial states evolving under Hamiltonians of the form $\\hat{H}_0+g\\hat{V}$, where $\\hat{H}_0$ is nonintegrable and has one extensive conserved quantity (the total number of particles). We studied the dynamics of local observables in the thermodynamic limit using an NLCE, and in finite chains with periodic boundary conditions using ED. \n\nBy exploring three distinct scenarios with different perturbations and initial states, we showed that in order for a two-step relaxation process to occur in the dynamics of local observables (other than the conserved quantity), not only does the perturbation $g\\hat{V}$ have to break the extensive conserved quantity of $\\hat{H}_0$, but also the initial-state expectation value of the conserved quantity per site needs to be $\\mathcal{O}(1)$ different from the final equilibrium one. 
Observables (other than the conserved quantity) then evolve with a fast prethermal dynamics at short times followed by a slow relaxation to thermal equilibrium at long times. The slow relaxation regime is characterized by intermediate equilibrium states of $\\hat{H}_0$, which in the thermodynamic limit can be equivalently described using the projected DE (P-DE) or the projected GE (P-GE).\n\nWe argued that the thermodynamic limit results for the slow thermalization regime are most accurately described numerically using NLCE calculations for the P-GE. Using such calculations we showed that the slow thermalization \\KM{regime} is exponential, with a rate that can be accurately predicted using Fermi's golden rule [Eq.~\\eqref{eq:FGR_rate_N}]. We also showed that the NLCE results for the rates obtained using the P-DE are in good agreement with the P-GE and Fermi's golden rule ones. \n\nOn the other hand, in a finite system and for quantities that are not conserved in the reference dynamics, such as the expectation value of the nearest-neighbor density correlator, we showed that the P-DE calculations exhibit large finite-size effects. Strikingly, for very small perturbations, finite-size effects result in rates that are linear \\KM{in the absolute value} of the perturbation strength, at odds with Fermi's golden rule prediction. Increasing the system size pushes this linear scaling regime to smaller values of the perturbation so that it disappears in the thermodynamic limit. \n\n\\begin{acknowledgements}\nThis work was supported by the U.S. Office of Naval Research, Grant No.~N00014-14-1-0540 (K.M.), and by the National Science Foundation, Grant No.~2012145 (M.R.). 
The computations were carried out at the Institute for CyberScience at Penn State.\n\\end{acknowledgements}\n\n\\section{Introduction}\n\nAs usual in algebraic dynamics, given a self-map $\\Phi\\colon X\\longrightarrow X$ of a quasi-projective variety $X$, we denote by $\\Phi^n$ the $n$-th iterate of $\\Phi$. Given a point $x\\in X$, we let $\\mathcal{O}_\\Phi(x)=\\{\\Phi^n(x)\\colon n\\in\\mathbb N\\}$ be the orbit of $x$. Recall that a point $x$ is periodic if there exists some $n\\in\\mathbb{N}$ such that $\\Phi^n(x)=x$; a point $y$ is preperiodic if there exists $m\\in\\mathbb{N}$ such that $\\Phi^m(y)$ is periodic. Our first result is the following.\n\n\n\\begin{theorem}\n\\label{thm:uniform-bound-fibers}\nLet $X$ and $Y$ be quasi-projective varieties defined over a field $K$ of characteristic $0$, let $f\\colon X\\longrightarrow Y$ be a morphism defined over $K$, let $\\Phi\\colon X\\longrightarrow X$ be an \\'etale endomorphism, and let $x\\in X(K)$. \nIf $|\\mathcal{O}_\\Phi(x)\\cap f^{-1}(y)|<\\infty$ for each $y\\in Y(K)$, then there is a constant $N$ such that $$|\\mathcal{O}_\\Phi(x)\\cap f^{-1}(y)|<N$$ for each $y\\in Y(K)$.\n\\end{theorem}\n\nOur second result is a height gap for orbits under endomorphisms.\n\n\\begin{theorem}\n\\label{thm:gaps}\nLet $X$ be a quasi-projective variety defined over $\\overline{\\mathbb{Q}}$, let $\\Phi\\colon X\\longrightarrow X$ be an endomorphism, and let $f\\colon X\\dashrightarrow \\mathbb{P}^1$ be a rational function. Then for $x\\in X(\\overline{\\mathbb{Q}})$, either $f(\\mathcal{O}_\\Phi(x))$ is finite or\n\\[\n\\limsup_{n\\to\\infty} \\frac{h\\!\\left(f(\\Phi^n(x))\\right)}{\\log(n)}>0,\n\\]\nwhere $h(\\cdot)$ is the logarithmic Weil height for algebraic numbers.\n\\end{theorem}\n\nNote that if $X=\\mathbb{A}^1$, the map $\\Phi\\colon X\\longrightarrow X$ is given by $\\Phi(x)=x+1$, and $f\\colon X\\hookrightarrow \\mathbb{P}^1$ is the usual embedding, then $h(f(\\Phi^n(0)))=\\log(n)$ for $n\\in\n\\mathbb{N}$. This example shows that Theorem~\\ref{thm:gaps} is, in some sense, the best possible. 
However, we believe that this gap result should hold more generally for rational self-maps. Specifically, we make the following conjecture.\n\n\\begin{conjecture} (Height Gap Conjecture)\n\\label{conj:gaps}\nLet $X$ be a quasi-projective variety defined over $\\overline{\\mathbb{Q}}$, let $\\Phi\\colon X\\dashrightarrow X$ be a rational self-map, and let $f\\colon X\\dashrightarrow \\mathbb{P}^1$ be a rational function. Then for $x\\in X(\\overline{\\mathbb{Q}})$ with the property that $\\Phi^n(x)$ avoids the indeterminacy locus of $\\Phi$ for every $n\\ge 0$, either $f(\\mathcal{O}_\\Phi(x))$ is finite or $$\\limsup_{n\\to\\infty} \\frac{h(f(\\Phi^n(x)))}{\\log(n)}>0.$$\n\\end{conjecture} \n\nTheorem \\ref{thm:gaps} proves this conjecture in the case of endomorphisms. Many interesting number theoretic questions fall under the umbrella of the gap conjecture stated above. As an example, we recall that a power series $F(x)\\in \\overline{\\mathbb{Q}}[[x]]$ is called $D$-\\emph{finite} if it is the solution to a non-trivial homogeneous linear differential equation with rational function coefficients. It is known that if $\\sum_{n\\geq0} a(n) x^n$ is a $D$-finite power series over a field of characteristic zero, then there is some $d\\ge 2$, a rational endomorphism $\\Phi\\colon\\mathbb{P}^d\\dashrightarrow \\mathbb{P}^d$, a point $c\\in \\mathbb{P}^d$ and a rational map $f\\colon \\mathbb{P}^d\\dashrightarrow \\mathbb{P}^1$ such that $a(n)=f\\circ \\Phi^n(c)$ for $n\\ge 0$, see \\cite[Section 3.2.1]{DML-book}. 
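To make the representation a(n) = f(Phi^n(c)) concrete, here is a toy instance for the D-finite sequence a(n) = n!, which satisfies the recurrence a(n+1) = (n+1)a(n): the pair (a(n), n) evolves under a polynomial self-map of the affine plane, and f projects onto the first coordinate. This particular map is our own illustrative choice, not the general construction of the cited reference:

```python
import math

def phi(state):
    """Polynomial self-map of A^2 encoding the recurrence a(n+1) = (n+1)*a(n)."""
    a, z = state
    return ((z + 1) * a, z + 1)

def f(state):
    """Projection recovering the coefficient a(n) from the state (a(n), n)."""
    return state[0]

state = (1, 0)                     # c = (a(0), 0) with a(0) = 0! = 1
for n in range(8):
    assert f(state) == math.factorial(n)
    state = phi(state)
```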
Heights of coefficients of $D$-finite power series have been studied independently, notably by van der Poorten and Shparlinski \\cite{vdPS}, who showed a gap result holds in this context that is somewhat weaker than what is predicted by our height gap conjecture above; specifically, they showed that if $\\sum_{n\\geq0} a(n)x^n\\in \\overline{\\mathbb{Q}}[[x]]$ is $D$-finite and $$\\limsup_{n\\to\\infty} \\frac{h(a(n))}{\\log\\log(n)}=0,$$ then the sequence $\\{a(n)\\}$ is eventually periodic. This was improved recently \\cite{BNZ}, where it is shown that if $\\limsup_{n\\to\\infty} \\frac{h(a(n))}{\\log(n)}=0$, then the sequence $\\{a(n)\\}$ is eventually periodic. This gives additional underpinning to Conjecture \\ref{conj:gaps}. Furthermore, with the notation as in Theorem~\\ref{thm:gaps}, assume now that \n\\begin{equation}\n\\label{limsup is zero}\n\\limsup_{n\\to\\infty} \\frac{h\\!\\left(f(\\Phi^n(x))\\right)}{\\log(n)}=0.\n\\end{equation}\nThen Theorem~\\ref{thm:gaps} asserts that Equation~\\eqref{limsup is zero} yields that $f(\\mathcal{O}_\\Phi(x))$ is finite. We claim that actually this means that the set $\\{f(\\Phi^n(x))\\}_{n\\in\\mathbb{N}}$ is eventually periodic. Indeed, for each $m\\in\\mathbb{N}$, we let $Z_m$ be the Zariski closure of $\\{\\Phi^n(x)\\}_{n\\ge m}$. Then $Z_{m+1}\\subseteq Z_m$ for each $m$ and thus, by the Noetherian property, we get that there exists some $M\\in\\mathbb{N}$ such that $Z_{m}=Z_M$ for each $m\\ge M$. So, there exists a suitable positive integer $\\ell$ such that $\\Phi^\\ell$ induces an endomorphism of each irreducible component of $Z_M$; moreover, each irreducible component of $Z_M$ contains a Zariski dense set of points from the orbit of $x$. Furthermore, because $f(\\mathcal{O}_\\Phi(x))$ is a finite set, we get that $f$ must be constant on each irreducible component of $Z_M$ and thus, in particular, $f$ is constant on each orbit $\\mathcal{O}_{\\Phi^\\ell}(\\Phi^r(x))$ for $r$ sufficiently large.
Hence, Theorem~\\ref{thm:gaps} actually yields that once Equation~\\eqref{limsup is zero} holds, then $\\{f(\\Phi^n(x))\\}_{n\\in\\mathbb{N}}$ is eventually periodic. \n \nIt is important to note that one cannot replace $\\limsup$ with $\\liminf$ in Conjecture \\ref{conj:gaps}, even in the case of endomorphisms. To see this, consider the map $\\Phi\\colon\\mathbb{A}^3\\to\\mathbb{A}^3$ given by $(x,y,z)\\mapsto (yz, xz, z+1)$.\nThen, letting $c=(0,1,1)$, it is easily shown by induction that for $n\\ge 0$, we have\n\\[\n\\Phi^{2n}(c)=(0, (2n)!, 2n+1)\\quad\\quad\\textrm{and}\\quad\\quad\\Phi^{2n+1}(c)=((2n+1)!,0,2n+2).\n\\]\nConsequently, if $f\\colon\\mathbb{A}^3\\to \\mathbb{A}^1$ is given by $f(x,y,z)=x+1$, then we see that $f(\\Phi^{2n}(c))=1$ and $f(\\Phi^{2n+1}(c))=(2n+1)!+1$ for every $n\\ge 0$, and so\n\\[\n\\liminf_{n\\to\\infty} \\frac{h(f(\\Phi^{n}(c)))}{\\log(n)}=0, \\quad\\quad\\textrm{while}\\quad\\quad \\limsup_{n\\to\\infty} \\frac{h(f(\\Phi^n(c)))}{\\log(n)}=\\infty.\n\\]\nDespite the fact that the conjecture does not hold when one replaces $\\limsup$ with $\\liminf$, we believe the following variant of Conjecture \\ref{conj:gaps} holds:\n\n\n\\begin{conjecture}\n\\label{conj:gaps-dense}\nLet $X$ be an irreducible quasi-projective variety defined over $\\overline{\\mathbb{Q}}$, let $\\Phi\\colon X\\dashrightarrow X$ be a rational self-map, and let $f\\colon X\\dashrightarrow \\mathbb{P}^1$ be a non-constant rational function. Let $x\\in X(\\overline{\\mathbb{Q}})$ with the property that $\\Phi^n(x)$ avoids the indeterminacy locus of $\\Phi$ for every $n\\ge 0$, and further suppose that $\\mathcal{O}_\\Phi(x)$ is Zariski dense in $X$. Then $$\\liminf_{n\\to\\infty} \\frac{h(f(\\Phi^n(x)))}{\\log(n)}>0.$$\n\\end{conjecture} \n\n\n\nWe point out that, if true, this would be a powerful result and would imply the Dynamical Mordell--Lang conjecture for rational self-maps when we work over a number field. 
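The closed form for the orbit in the $\liminf$ counterexample above is easy to verify by direct iteration; the following is a small sanity check in Python.

```python
import math

# Phi(x, y, z) = (yz, xz, z+1) on A^3, started at c = (0, 1, 1).
# The induction claimed above gives
#   Phi^(2n)(c)   = (0, (2n)!, 2n+1),
#   Phi^(2n+1)(c) = ((2n+1)!, 0, 2n+2),
# so f(x, y, z) = x + 1 alternates between 1 and (2n+1)! + 1 on the orbit.
def Phi(p):
    x, y, z = p
    return (y * z, x * z, z + 1)

def iterate(p, n):
    for _ in range(n):
        p = Phi(p)
    return p

even = [iterate((0, 1, 1), 2 * n) for n in range(5)]
odd = [iterate((0, 1, 1), 2 * n + 1) for n in range(5)]
```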
To see this, let $Z$ be a quasi-projective variety defined over $\\overline{\\mathbb{Q}}$, let $\\Phi\\colon Z\\dashrightarrow Z$ be a rational self-map, $Y$ be a subvariety of $Z$, and suppose that the orbit of $x\\in Z(\\overline{\\mathbb Q})$ avoids the indeterminacy locus of $\\Phi$. As before, denote by $Z_n$ the Zariski closure of $\\{\\Phi^j(x) \\colon j\\ge n\\}$. Since $Z$ is a Noetherian topological space, there is some $m$ such that $Z_n=Z_m$ for every $n\\ge m$. Letting $X=Z_m$, and replacing $Y$ with $Y\\cap X$, it suffices to show that the conclusion to the Dynamical Mordell--Lang conjecture holds for the data $(X,\\Phi, x, Y)$. We let $X_1,\\ldots,X_d$ denote the irreducible components of $X$ and let $Y_i=Y\\cap X_i$. Since $\\Phi|_X$ is a dominant self-map, it permutes the components $X_i$, so there is some $b$ such that $\\Phi^b(X_i)\\subset X_i$ for each $i$. Then if we let $x_1,\\ldots ,x_d$ be elements in the orbit of $x$ with the property that $x_i\\in X_i$, then it suffices to show that the conclusion to the statement of the Dynamical Mordell--Lang conjecture holds for the data $(X_i, \\Phi^b, x_i, Y_i)$ for $i=1,\\ldots,d$. Then by construction, the orbit of $x_i$ under $\\Phi^b$ is Zariski dense. We prove that either $\\mathcal{O}_{\\Phi^b}(x_i)\\subset Y_i$ or that $\\O_{\\Phi^b}(x_i)$ intersects $Y_i$ finitely many times. If $Y_i=X_i$ or $Y_i=\\emptyset$ then the result is immediate; thus we may assume without loss of generality that $Y_i$ is a non-empty proper subvariety of $X_i$. We pick a non-constant morphism $f_i\\colon X_i\\longrightarrow \\mathbb{P}^1$ such that $f_i(Y_i)=1$. If $\\Phi^{bn}(x_i)\\in Y_i$, then $h(f(\\Phi^{bn}(x_i)))=0$. Conjecture \\ref{conj:gaps-dense} implies that this can only happen finitely many times, and so $\\{n\\colon \\Phi^{bn}(x_i)\\in Y_i\\}$ is finite.\n\n\n\\section{Proof of our main results}\nWe recall the following definitions. 
The ring of strictly convergent power series $\\mathbb{Q}_p\\langle z\\rangle$ is the collection of elements $P(z):=a_0+a_1 z+a_2 z^2 + \\cdots \\in \\mathbb{Q}_p[[z]]$ such that $|a_n|_p\\to 0$ as $n\\to \\infty$ and which consequently converge uniformly on $\\mathbb{Z}_p$. The \\emph{Gauss norm} is given by\n$\n|P(z)|_{\\rm Gauss}:=\\max_{n\\geq0} |a_n|_p.\n$ \nThe ring $\\mathbb{Z}_p\\langle z\\rangle\\subset\\mathbb{Q}_p\\langle z\\rangle$ is the set of $P(z)$ with $|P(z)|_{\\rm Gauss}\\leq1$, i.e.~the set of $P$ with $a_i\\in\\mathbb{Z}_p$.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:uniform-bound-fibers}.]\nClearly, we may reduce immediately (at the expense of replacing $\\Phi$ by an iterate of it) to the case where $X$ and $Y$ are irreducible.\n\nA standard spreading out argument (similar to the one employed in the proof of \\cite[Theorem~4.1]{DML-etale}) allows us to choose a model of $X$, $Y$, $f$, $\\Phi$, and $x$ over an open subset $U\\subseteq\\spec R$, where $R$ is an integral domain which is a finitely generated $\\mathbb{Z}$-algebra. In other words, $K$ is a field extension of the fraction field of $R$, and we can find a map $\\mathcal{X}\\longrightarrow\\mathcal{Y}$ over $U$, a section $U\\longrightarrow\\mathcal{X}$, and an \\'etale endomorphism $\\mathcal{X}\\longrightarrow\\mathcal{X}$ over $U$ which base change over $K$ to be $f\\colon X\\longrightarrow Y$, $x\\colon\\spec K\\longrightarrow X$, and $\\Phi\\colon X\\longrightarrow X$, respectively. After replacing $U$ by a possibly smaller open subset, we can assume $U=\\spec R[g^{-1}]$ for some $g\\in R$. Since $R[g^{-1}]$ is a finitely generated $\\mathbb{Z}$-algebra, it is generated over $\\mathbb{Z}$ by finitely many elements $u_1,\\dots,u_r$. Applying \\cite[Lemma 3.1]{generalized-SML}, we can find a prime $p\\geq 5$ and an embedding of $R[g^{-1}]$ into $\\mathbb{Q}_p$ which maps the $u_i$ into $\\mathbb{Z}_p$. Base changing by the resulting map $\\spec\\mathbb{Z}_p\\longrightarrow U$, we can assume $U=\\spec\\mathbb{Z}_p$.
We will abusively continue to denote the map $\\mathcal{X}\\longrightarrow\\mathcal{Y}$ by $f$, the \\'etale endomorphism $\\mathcal{X}\\longrightarrow\\mathcal{X}$ by $\\Phi$, and the section $\\spec\\mathbb{Z}_p=U\\longrightarrow\\mathcal{X}$ by $x$. We let $\\overline{\\mathcal{X}}=\\mathcal{X}\\times_{\\mathbb{Z}_p}{\\mathbb F}_p$, let $\\overline{\\Phi}\\colon\\overline{\\mathcal{X}}\\longrightarrow\\overline{\\mathcal{X}}$ be the reduction of $\\Phi$, and let $\\overline{x}\\in\\mathcal{X}({\\mathbb F}_p)$ be the reduction of $x\\in\\mathcal{X}(\\mathbb{Z}_p)$.\n\nNotice that if $f(\\Phi^n(x))=y$, then since $x$ extends to a $\\mathbb{Z}_p$-point of $\\mathcal{X}$, necessarily $y\\in Y(K)$ extends to a $\\mathbb{Z}_p$-point of $\\mathcal{Y}$ as well. In particular, it suffices to give a uniform bound on the sets $\\{n:f(\\Phi^n(x))=y\\}$ as $y$ varies through the elements $\\mathcal{Y}(\\mathbb{Z}_p)$.\n\nTo prove Theorem \\ref{thm:uniform-bound-fibers}, we may replace $x$ by $\\Phi^\\ell(x)$ for some $\\ell\\in\\mathbb N$. Since $|\\mathcal{X}({\\mathbb F}_p)|<\\infty$, we can therefore assume $\\overline{x}$ is $\\overline{\\Phi}$-periodic, say of period $D$. Replacing $x$ by $\\Phi^i(x)$ for each $0\\leq i<D$ and $\\Phi$ by $\\Phi^D$, we may further assume $\\overline{x}$ is fixed by $\\overline{\\Phi}$. Since $\\Phi$ is \\'etale, one can find $p$-adic analytic functions $\\phi_1,\\dots,\\phi_d\\in\\mathbb{Z}_p\\langle z\\rangle$ such that in a $p$-adic analytic neighborhood, we have $$\\Phi^n(x)=(\\phi_1(n),\\dots,\\phi_d(n))\\in\\mathbb{Z}_p^d;$$ more precisely, letting $\\phi(z):=(\\phi_1(z),\\dots,\\phi_d(z))$, if $B\\subset\\mathcal{X}(\\mathbb{Z}_p)$ is the $p$-adic ball of points whose reduction mod $p$ is $\\overline{x}$, then there is an analytic bijection $\\iota\\colon B\\longrightarrow\\mathbb{Z}_p^d$, such that $\\iota\\!\\left(\\Phi^n(x)\\right)=\\phi(n)$.\n\nNext, fix an embedding $\\mathcal{Y}\\subset\\mathbb{P}^r_{\\mathbb{Z}_p}$, let $\\{V_i\\}_i$ be an open affine cover of $\\mathcal{Y}$, and for each $i$, let $\\{U_{ij}\\}_j$ be an open affine cover of $f^{-1}(V_i)$.
We can further assume that each $V_i$ is contained in one of the coordinate spaces $\\AA^r_{\\mathbb{Z}_p} \\subset \\mathbb{P}^r_{\\mathbb{Z}_p}$. Since $\\mathcal{X}$ and $\\mathcal{Y}$ are quasi-compact, we can assume the $\\{U_{ij}\\}_{i,j}$ and $\\{V_i\\}_i$ are finite covers. Then we can view $f|_{U_{ij}}\\colon U_{ij} \\longrightarrow V_i \\subseteq \\AA^r_{\\mathbb{Z}_p}$ as a tuple of polynomials $(p_{ij0}, \\dots, p_{ijr})$. Letting $P_{ijk}(z)=p_{ijk}\\iota^{-1}\\phi(z)$, we see $f|_{\\O_{\\Phi}(x)}$ is given by the following piecewise analytic function: $$f(\\Phi^n(x))=(P_{ij0}(n), \\dots, P_{ijr}(n))$$ whenever $\\Phi^n(x)\\in U_{ij}$.\n\n\nIt therefore suffices to prove that for each $i,j$, there exists $N_{ij}$ such that for all $(y_1,\\dots,y_r)\\in V_i(\\mathbb{Z}_p)\\subseteq\\AA^r(\\mathbb{Z}_p)$, the number of simultaneous roots of $P_{ijk}(z)-y_k$ (for $k=1,\\dots,r$) is bounded by $N_{ij}$. In other words, we have reduced to proving the lemma below, where $S=\\{n:\\Phi^n(x)\\in U_{ij}\\}$ and $V=V_i(\\mathbb{Z}_p)$. \n\n\n\\begin{lemma}\n\\label{l:uniform-bound-piecewise-analytic}\nLet $r$ be a positive integer, let $V\\subset\\mathbb{Z}_p^r$, and let $S\\subset\\mathbb N$ be a subset. For each $1\\leq k\\leq r$, let $P_k\\in\\mathbb{Z}_p\\langle z\\rangle$,\nand consider the function $P\\colon S\\longrightarrow\\mathbb{Z}_p^r$ given by\n\\[\nP(n):=(P_1(n),\\dots,P_r(n)).\n\\]\nSuppose the set $\\{n\\in S:P(n)=y\\}$ is empty if $y\\in \\mathbb{Z}_p^r\\setminus V$ and is finite if $y\\in V$. Then there exists $N\\geq0$ such that\n\\[\n|\\{n\\in S:P(n)=y\\}|\\leq N\n\\]\nfor all $y \\in V$.\n\\end{lemma}\n\\begin{proof}\nWe may assume $S$ is infinite since otherwise we can take $N=|S|$. We claim that $P_k(z)$ is not a constant power series for some $k$. Suppose to the contrary that $P_k(z)=c_k\\in\\mathbb{Z}_p$ for each $k$. If $y:=(c_1,\\dots,c_r)\\in\\mathbb{Z}_p^r\\setminus V$, then we can take $N=0$.
If $y\\in V$, then $\\{n\\in S:P(n)=y\\}=S$ which is infinite, contradicting the hypotheses of the lemma.\n\nWe have therefore shown that some $P_k(z)$ is non-constant. Let $\\mathcal{K}$ be the set of $k$ for which $P_k(z):=\\sum_{m\\geq0} c_{k,m}z^m$ is non-constant. Given any non-constant element $Q(z):=\\sum_{m\\geq0} c_mz^m$ of $\\mathbb{Z}_p\\langle z\\rangle$, let\n\\begin{equation}\n\\label{eqn:max-coeff-Gauss}\nD(Q):=\\max\\{m:|c_m|=|Q|_{\\rm Gauss}\\}.\n\\end{equation}\nRecall from Strassman's Theorem (see \\cite{strassman} or \\cite[Theorem 4.1, p.~62]{cassels}) that the number of zeros of $Q(z)$ is bounded by $D(Q)$. As a result, if $\\alpha\\in\\mathbb{Z}_p$, then the number of zeros of $Q(z)-\\alpha$ is bounded by $1+D(Q)$. Letting\n\\[\nN:=1+\\max_{k\\in\\mathcal{K}} D(P_k),\n\\]\nwe see then that for all $(y_1,\\dots,y_r)\\in\\mathbb{Z}_p^r$, the number of simultaneous zeros of $P_1(z)-y_1$, $\\dots$, $P_r(z)-y_r$ is bounded by $N$. In particular, $|\\{n\\in S:P(n)=y\\}|\\leq N$ for all $y\\in V$.\n\\end{proof}\nThis concludes the proof of Theorem~\\ref{thm:uniform-bound-fibers}.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:gaps}.] \nAs before, at the expense of replacing $\\Phi$ by an iterate, we may assume $X$ is irreducible. Furthermore, arguing as in the last paragraph of the introduction, we may assume $\\O_\\Phi(x)$ is Zariski dense.\n\nLet $K$ be a number field such that $X$, $\\Phi$, and $f$ are defined over $K$ and moreover, $x\\in X(K)$. As proven in \\cite{Schanuel}, there exists a constant $c_0>0$ such that for each real number $N\\ge 1$, there exist less than $c_0N^2$ algebraic points in $K$ of logarithmic height bounded above by $\\log(N)$.
So, there exists a constant $c_1>1$ such that for each real number $N\\ge 1$, there are less than $c_1^N$ points in $K$ of logarithmic height bounded above by $N$.\n\nArguing as in the proof of Theorem~\\ref{thm:uniform-bound-fibers}, we can find a suitable prime number $p$, a model $\\mathcal{X}$ of $X$ over some finitely generated $\\mathbb{Z}$-algebra $R$ which embeds into $\\mathbb{Z}_p$ such that the endomorphism $\\Phi$ extends to an endomorphism of $\\mathcal{X}$, and a section ${\\rm Spec}(\\mathbb{Z}_p)\\longrightarrow \\mathcal{X}$ extending $x$; we continue to denote by $\\Phi$ and $x$ the endomorphism of $\\mathcal{X}$ and the section ${\\rm Spec}(\\mathbb{Z}_p)\\longrightarrow \\mathcal{X}$, respectively. At the expense of replacing both $\\Phi$ and $x$ by suitable iterates, we may assume the reduction of $x$ modulo $p$ (called $\\overline{x}$) is fixed under the induced action of $\\overline{\\Phi}$ on the special fiber of $\\mathcal{X}$. Consider the $p$-adic neighborhood in $B\\subset\\mathcal{X}(\\mathbb{Z}_p)$ consisting of all points whose reduction modulo $p$ is $\\overline{x}$. Then there is an analytic isomorphism $\\iota\\colon B\\to\\mathbb{Z}_p^m$ so that in these coordinates\n $$\\overline{x}=(0,\\ldots, 0)\\in \\mathbb{F}_p^m$$ \n and \n$\\Phi$ is given by $(x_1,\\cdots, x_m)\\mapsto (\\phi_1(x_1,\\dots, x_m),\\cdots, \\phi_m(x_1,\\dots, x_m))$, where\n$$\\phi_i(x_1,\\dots, x_m)\\equiv \\sum_{j=1}^m a_{i,j}x_j\\pmod{p}$$\n for each $i=1,\\dots, m$, for some suitable constants $a_{i,j}\\in\\mathbb{Z}_p$ (for more details, see \\cite[Section~11.11]{DML-book}). 
Applying \\cite[Theorem~11.11.1.1]{DML-book} (see also the proof of \\cite[Theorem~11.11.3.1]{DML-book}), there exists a $p$-adic analytic function $G\\colon\\mathbb{Z}_p\\longrightarrow \\mathbb{Z}_p^m$ such that for each $n\\ge 1$, we have\n\\begin{equation}\n\\label{eq:p-adic approximation}\n\\|\\Phi^n(x)-G(n)\\|\\le p^{-n},\n\\end{equation}\nwhere for any point $(x_1,\\dots, x_m)\\in\\mathbb{Z}_p^m$, we let\n$$\\|(x_1,\\dots, x_m)\\|:=\\max_{1\\leq i\\leq m} |x_i|_p.$$\n\n\n\nAs in the proof of Theorem~\\ref{thm:uniform-bound-fibers}, let $V_1\\simeq\\AA^1$ and $V_2\\simeq\\AA^1$ be the standard affine cover of $\\mathbb{P}^1$, and let $\\{U_{ij}\\}$ be a finite open affine cover of $\\mathcal{X}$ minus the indeterminacy locus of $f$ such that $f(U_{ij})\\subset V_i\\simeq\\AA^1$. Let\n\\[\nS_{ij}:=\\{n:\\Phi^n(x)\\in U_{ij}\\}.\n\\]\nSince $f|_{U_{ij}}$ is given by a polynomial with $p$-adic integral coefficients, there exist $H_{ij}(z)\\in\\mathbb{Z}_p\\langle z\\rangle$ such that\n\\[\nf(G(n))=H_{ij}(n)\n\\]\nwhenever $n\\in S_{ij}$. Notice that if $f(\\Phi^n(x))=y$, then since $x$ extends to a $\\mathbb{Z}_p$-point of $\\mathcal{X}$, necessarily $y\\in \\mathbb{P}^1(K)$ extends to a $\\mathbb{Z}_p$-point of $\\mathbb{P}^1$ as well. Thus, we need only concern ourselves with roots of $H_{ij}(z)-t$ for $t\\in\\mathbb{Z}_p$.\n\n\\begin{lemma}\n\\label{l:constant-rate-of-return}\nThere is some choice of $i$ and $j$ with the following properties:\n\\begin{enumerate}[label=$(\\arabic*)$]\n\\item\\label{constant-rate-of-return::infinite} $\\{f(\\Phi^n(x)):n\\in S_{ij}\\}$ is an infinite set,\n\\item\\label{constant-rate-of-return::Banach-density} $\\mathbb N\\smallsetminus S_{ij}$ has upper Banach density zero,\n\\item\\label{constant-rate-of-return::return} there exist a constant $\\kappa$ and a sequence $M_1<M_2<\\cdots$ such that\n\\begin{equation}\n\\label{eq:constant-rate-of-return-inequality}\n\\#\\{n\\le \\kappa M_\\ell:n\\in S_{ij}\\}\\geq M_\\ell.\n\\end{equation}\n\\end{enumerate}\n\\end{lemma}\n\nFix such a choice of $i$ and $j$, and write $S:=S_{ij}$ and $H(z):=H_{ij}(z)=\\sum_{j\\geq0}a_jz^j$. First suppose that $H(z)$ is not constant. Since $|a_j|_p\\to0$ as $j\\to\\infty$, there exists some $L\\ge 1$ such that $|a_L|_p>|a_j|_p$ for all $j>L$.
As proven in Lemma~\\ref{l:uniform-bound-piecewise-analytic}, since $H(z)$ is not constant, there exists a uniform bound $C$ such that for each $t\\in \\mathbb{Z}_p$, the number of solutions to $H(z)=t$ is at most $C$. Furthermore, if $n$ is an element of $S$ such that $f(\\Phi^n(x))=t$, then equation \\eqref{eq:p-adic approximation}\nyields $$|H(n)-t|_p\\le p^{-n}.$$ \nAs mentioned above, by the Weierstrass Preparation Theorem, we can write\n\\[\nH(z)-t= q_t(z)u_t(z)\n\\]\nwith $q_t(z)$ a polynomial of degree $D(H-t)\\leq L$ and $u_t(z)$ a unit of Gauss norm $1$; moreover, the leading coefficient of $q_t(z)$ has $p$-adic norm equal to the Gauss norm of $H-t$. Hence, we can write\n\\[\nq_t(z)=b_t(z-\\beta_{1,t})\\cdots (z-\\beta_{D(H-t),t})\n\\]\nwith $b_t\\in \\mathbb{Q}_p$, the $\\beta_{j,t}\\in \\overline{\\mathbb Q}_p$, and\n\\[\n|b_t|_p=|H-t|_{\\rm Gauss}\\geq |a_L|_p.\n\\]\nWe have therefore bounded $|b_t|_p$ below independent of $t\\in\\mathbb{Z}_p$. As noted before the proof of the lemma, we know $|u_t(n)|_p=1$ for all $t\\in\\mathbb{Z}_p$ and $n\\in\\mathbb N$. Hence, there is a constant $c_2>0$ (independent of $t$) such that for all $t\\in\\mathbb{Z}_p$, if $|H(n)-t|_p\\le p^{-n}$ then there exists $1\\leq j\\leq D(H-t)$ such that\n\\[\n|n-\\beta_{j,t}|_p\\le c_2^{-1}p^{-n\/L}.\n\\]\nIn particular, if $n_{k_1}<n_{k_2}$ are two elements of $S$ with $f(\\Phi^{n_{k_1}}(x))=f(\\Phi^{n_{k_2}}(x))=t$ corresponding to the same root $\\beta_{j,t}$, then\n\\[\nn_{k_2}-n_{k_1}\\ge |n_{k_1}-n_{k_2}|_p^{-1}\\ge c_2 p^{\\min(n_{k_1},n_{k_2})\/L};\n\\]\ntherefore there exists a positive constant $c_3$ (independent of $t$, since both $L$ and $c_2$ are independent of $t$) such that for all $M\\ge 1$ and all $t\\in\\mathbb{P}^1(K)$,\n\\begin{equation}\n\\label{eq:c3-bound}\n\\#\\{n\\le M:n\\in S \\textrm{\\ and\\ } f(\\Phi^n(x))=t\\}\\leq c_3\\log(M).\\footnote{In fact, we have a substantially better bound. Let $\\exp^k$ denote the $k$-th iterate of the exponential function and let $L_p(M)$ be the smallest integer $k$ such that $\\exp^k(p)>M$.
Then $\\#\\{n\\le M: n\\in S \\textrm{\\ and\\ } f(\\Phi^n(x))=t\\}\\leq c_3 L_p(M)$, however we will not need this stronger bound.}\n\\end{equation}\nAs an aside, we note that this type of gap is similar to the one obtained for the Dynamical Mordell--Lang problem in \\cite{gap-Compo}.\n\nNow, let $\\kappa$ be as in Lemma \\ref{l:constant-rate-of-return}, and choose a constant $c_4>1$ such that\n\\begin{equation}\n\\label{eq:c43}\nc_3\\cdot \\log(\\kappa c_4^r)\\cdot c_1^r<c_4^{r-1} \\textrm{\\ \\ for all sufficiently large\\ } r.\n\\end{equation}\n\nTo conclude the proof, we show that for all $\\ell$ sufficiently large, there exists some $n_\\ell\\le \\kappa c_4^{N_\\ell}$ with the property that $n_\\ell\\in S$ and $h(f(\\Phi^{n_\\ell}(x)))\\ge N_\\ell$. If this were not the case, then since there are less than $c_1^{N_\\ell}$ algebraic numbers $t\\in\\mathbb{P}^1(K)$ of logarithmic Weil height bounded above by $N_\\ell$, by (\\ref{eq:constant-rate-of-return-inequality}) there would be such an algebraic number $t$ with \n\\[\n\\#\\{n\\le \\kappa c_4^{N_\\ell}:n\\in S \\textrm{\\ and\\ } f(\\Phi^n(x))=t\\}>\\frac{c_4^{N_\\ell-1}}{c_1^{N_\\ell}}>c_3\\log(\\kappa c_4^{N_\\ell})\n\\]\nand this violates inequality (\\ref{eq:c3-bound}). We have therefore proven our claim that for all $\\ell$ sufficiently large, there exists a positive integer $n_\\ell\\leq \\kappa c_4^{N_\\ell}$ with $h(f(\\Phi^{n_\\ell}(x)))\\ge N_\\ell$.
So, \n\\[\n\\limsup_{n\\to\\infty}\\frac{h(f(\\Phi^n(x)))}{\\log(n)} \\geq \\lim_{\\ell\\to\\infty}\\frac{N_\\ell}{\\log(\\kappa)+N_\\ell\\log(c_4)}=\\frac{1}{\\log(c_4)}>0\n\\]\nas desired in the conclusion of Theorem~\\ref{thm:gaps}.\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lem:approx_2}\nIf $H(z)$ is a constant, then $\\limsup_{n\\to\\infty}\\frac{h(f(\\Phi^n(x)))}{\\log(n)}=\\infty$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:approx_2}.]\nBy property \\ref{constant-rate-of-return::infinite} of Lemma \\ref{l:constant-rate-of-return}, we can find a sequence $n_1<n_2<\\cdots$ of elements of $S$ such that $S$ contains no integers strictly between $n_{2k-1}$ and $n_{2k}$. Suppose to the contrary that there is a constant $C'>0$ such that for all sufficiently large $k$, we have\n\\[\nC'>\\frac{h(f(\\Phi^{n_{2k}}(x)))}{\\log(n_{2k})}\\geq \\frac{1}{2\\log(n_{2k})}(c_5 n_{2k-1} - \\log(2)),\n\\]\nwhere we have made use here of inequality (\\ref{eq:one height is large}). In particular, there is a constant $C>1$ such that for all $k$ sufficiently large,\n\\begin{equation}\n\\label{eqn:big-gaps}\nn_{2k}>C^{\\hspace{0.1em}n_{2k-1}}.\n\\end{equation}\nRecalling that $S$ does not contain any positive integers between $n_{2k-1}$ and $n_{2k}$, inequality (\\ref{eqn:big-gaps}) implies that $\\mathbb N\\smallsetminus S$ has positive upper Banach density. This contradicts property \\ref{constant-rate-of-return::Banach-density} of Lemma \\ref{l:constant-rate-of-return}, and so our initial assumption that $C'>\\frac{h(f(\\Phi^{n_{2k}}(x)))}{\\log(n_{2k})}$ is incorrect. This proves equation (\\ref{eq:even-term-subseq-limsup-infty}), and hence Lemma \\ref{lem:approx_2}.\n\\end{proof}\n\nClearly, Lemmas~\\ref{lem:approx_1} and \\ref{lem:approx_2} finish the proof of Theorem~\\ref{thm:gaps}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSince the early work of Bloch and P{\\'o}lya \\cite{bloch} in the 30's, the study of random algebraic equations now has a long history \\cite{bharucha, farahmand}.
In the last few years, it attracted a renewed interest in the context of probability and number theory \\cite{edelman}, as well as in the field of quantum chaos\n\\cite{bogo}. Recently, we showed that there are also interesting\nconnections between random polynomials and persistence properties of\nphysical systems \\cite{us_short, us_long}. \n\nHere we consider real random polynomials, {\\it i.e.} polynomials with\nreal random coefficients, of degree $n$. While these polynomials have exactly $n$ roots in the complex plane, \nthe number of roots on the {\\it real} line $N_n$ is a random variable. One would like to characterize the\nstatistics of this random variable and a natural question is thus :\nwhat is the mean number $\\langle N_n \\rangle$ of real roots and how\ndoes it behave with $n$ for large $n$ \\cite{edelman}? This question \nhas been widely studied in the past for Kac's polynomials $K_n(x) =\n\\sum_{k=0}^n a_k\\, x^k$ where $a_k$ are independent and identically\ndistributed (i.i.d.) random variables of\nfinite variance $\\langle a_k^2 \\rangle = \\sigma^2$. In that case it is\nwell known \nthat $\\langle N_n \\rangle \\sim \\frac{2}{\\pi}\\log \nn$, independently of $\\sigma$. This result was first obtained by\nKac \\cite{kac} for Gaussian \nrandom variables and it was later shown to hold also for a wider class\nof distributions of the coefficients $a_k$ \\cite{bharucha,\n farahmand}. Interesting generalizations of Kac's polynomials have\nbeen studied in the literature where $a_k$ are independent Gaussian variables but\nnon identical, such that \n$\\langle a_k^2\\rangle = k^{d-2}$, where $d>0$ is a real number,\nleading to $\\langle N_n \\rangle \\sim \\pi^{-1}(1+\\sqrt{d\/2})\\log{n}$\n\\cite{us_long, das}. Given the robustness of this asymptotic\nlogarithmic behavior of $\\langle N_n \\rangle$, it is natural to search for random\npolynomials for which $\\langle N_n \\rangle$ increases faster than $\\log{n}$, for instance algebraically. 
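Kac's logarithmic law is easy to probe numerically. The following Monte Carlo sketch counts real roots by sign changes on a grid (which can miss rare close pairs of roots); the interval, grid size, and sample sizes are ad hoc illustrative choices.

```python
import math
import random

random.seed(0)

def count_real_roots(coeffs, lo=-3.0, hi=3.0, steps=2000):
    """Estimate the number of real roots in [lo, hi] by sign changes on a
    grid; real roots of Kac polynomials concentrate near x = -1 and x = 1."""
    def value(x):
        v = 0.0
        for c in reversed(coeffs):  # Horner evaluation
            v = v * x + c
        return v
    signs = [value(lo + (hi - lo) * i / steps) > 0 for i in range(steps + 1)]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

n, trials = 50, 50
mean_roots = sum(
    count_real_roots([random.gauss(0.0, 1.0) for _ in range(n + 1)])
    for _ in range(trials)
) / trials
# Kac's asymptotics predict roughly (2 / pi) * log(n), i.e. a few real
# roots for n = 50, out of 50 roots in the complex plane.
```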
\n\nOne such instance is provided by the real Weyl polynomials $W_n(x)$\ndefined by \n\\begin{eqnarray}\\label{weyl}\nW_n(x) = \\sum_{k=0}^n \\epsilon_k \\frac{x^k}{\\sqrt{k!}} \\;,\n\\end{eqnarray}\nwhere $\\epsilon_k$ are i.i.d. random variables of zero mean and unit\nvariance. Thus here, $a_k = \\epsilon_k\/\\sqrt{k!}$ and the variance is\n$\\langle a_k^2 \\rangle = 1\/k!$, which for large $k$ behaves as\n$\\langle a_k^2 \\rangle \\propto e^{-k \\log k}$. For these real polynomials\nin Eq. (\\ref{weyl}), it is known that $\\langle N_n \\rangle \\propto\nn^{1\/2}$. For instance, in the special case where $\\epsilon_k$ are\nGaussian random \nvariables of unit variance, one has $\\langle N_n \\rangle \\sim\n\\frac{2}{\\pi} \\sqrt{n}$ \\cite{us_long, leboeuf}. Another interesting\nand intriguing instance of real random polynomials was introduced a\nlong time ago by Littlewood and Offord \\cite{littlewood} who studied\nthe random polynomials $L_n(x)$ given by \n\\begin{eqnarray}\\label{little}\nL_n(x) = \\frac{1}{2}+\\sum_{k=1}^n \\epsilon_k \\frac{x^k}{(k!)^{k}} \\;,\n\\end{eqnarray}\nwhere $\\epsilon_k = \\pm 1$ with equal probability. Thus in this case\n$a_k = \\epsilon_k \/(k!)^{k}$ and the variance is $\\langle a_k^2\n\\rangle = 1\/(k!)^{2k}$, which behaves for large $k$ as $\\langle a_k^2\n\\rangle \\propto e^{-2k^2 \\log k}$. Using algebraic\nmethods, they showed that such polynomials $L_n(x)$ have all their\nroots real and therefore $\\langle N_n \\rangle = n$. \n\nWe thus have here two examples of real random polynomials in\nEq.~(\\ref{weyl}) and Eq.~(\\ref{little}) where, at variance with Kac's\npolynomials, $\\langle N_n \\rangle$ grows algebraically with $n$. In the\nsecond example (\\ref{little}), the number of real roots is\n``macroscopic'' in the sense that, for large $n$, there is a finite\nfraction $\\langle \nN_n \\rangle\/n$ of the roots which are on the real axis. For $L_n(x)$\nin Eq. (\\ref{little}) this fraction is exactly one. 
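The Littlewood--Offord statement can be spot-checked for small degree. The sketch below enumerates all sign patterns for $n=3$ and counts real roots by sign changes; the search window and grid step are ad hoc choices that comfortably contain and separate all roots in this case.

```python
from itertools import product

# L_n(x) = 1/2 + sum_{k=1}^n eps_k x^k / (k!)^k with eps_k = +-1.
# For n = 3, every one of the 8 sign patterns should yield 3 real roots.
def factorial(k):
    out = 1
    for i in range(2, k + 1):
        out *= i
    return out

def L(signs, x):
    val = 0.5
    for k, eps in enumerate(signs, start=1):
        val += eps * x**k / factorial(k)**k
    return val

def count_real_roots(signs, lo=-200.0, hi=200.0, steps=20000):
    vals = [L(signs, lo + (hi - lo) * i / steps) > 0 for i in range(steps + 1)]
    return sum(1 for s, t in zip(vals, vals[1:]) if s != t)

root_counts = [count_real_roots(s) for s in product((1, -1), repeat=3)]
```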
We thus say\nthat there is a {\\it condensation} of the roots on the real line,\nsimilar to a Bose-Einstein condensation where a finite fraction of the\nparticles of a quantum-mechanical system (Bosons) condense into the lowest\nenergy level. In the case of random polynomials, the roots play the\nrole of the particles and the equivalent of the ground state is the real line.\n\n\nThe purpose of this paper is to understand what types of polynomials\nlead to this condensation phenomenon. Of course, it is very difficult to\naddress this question for any random coefficients $a_k$. However,\nguided by the two examples above in Eq.~(\\ref{weyl}) and\nEq.~(\\ref{little}), and in particular by the large $k$ behavior of \n$\\langle a_k^2 \\rangle$, we introduce a family of random polynomials\n$P_n(x)$ indexed by a real $\\alpha \\geq 0$ defined by \n\\begin{eqnarray}\\label{def_poly} \nP_n(x) = \\sum_{k=0}^{n} a_k \\, x^k \\;, \\; \\langle a_k^2 \\rangle = e^{-k^\\alpha} \\;,\n\\end{eqnarray} \nwhere $a_k$ are real independent Gaussian random variables of zero\nmean. While $\\alpha=0$ corresponds to Kac's polynomials, we recall that, for $W_n(x)$ in Eq. (\\ref{weyl}), $\\langle\na_k^2 \\rangle \\propto e^{-k \\log k}$ and for $L_n(x)$ in\nEq. (\\ref{little}), $\\langle a_k^2 \\rangle \\propto e^{-2 k^2 \\log\n k}$. Therefore, due to the extra \nlogarithmic factor, these random polynomials are not exactly of the form\nintroduced above (\\ref{def_poly}). However, for $\\alpha \\to 1^+$, one\nexpects to recover the behavior of $W_n(x)$ in Eq. (\\ref{weyl}) while\nfor $\\alpha \\to 2^+$, one\nexpects $P_n(x)$ to behave similarly to $L_n(x)$ in\nEq. (\\ref{little}) : this is depicted schematically in Fig. \\ref{fig1}. \n\nOur main results can be summarized as follows. As $\\alpha \\geq 0$ is varied\none finds three different {\\it phases}. The first phase corresponds to $0 \\leq\n\\alpha < 1 $, where one finds that $\\langle N_n \\rangle \\sim (2\/\\pi) \\log{n}$. 
In the second one,\ncorresponding to $1 < \\alpha < 2$, one has $\\langle N_n \\rangle \\sim \n\\frac{2}{\\pi}\\sqrt{\\frac{\\alpha-1}{\\alpha}} \\, n^{\\alpha\/2}$. And in the third phase, for $\\alpha > 2$, one\nfinds $\\langle N_n \\rangle \\sim n$. The condensation of the roots\non the real axis thus happens for $\\alpha \\geq 2$ and as one increases\n$\\alpha$, the condensation transition sets in at the critical value\n$\\alpha_c = 2$. Furthermore, one finds that these real\nroots condense into a quasi-periodic structure such that there is, on\naverage, one root in the interval \n$[-x_{m+1},-x_m] \\cup [x_m,x_{m+1}]$, with $x_m =\ne^{\\frac{\\alpha}{2}m^{\\alpha-1}}$, with $1 \\ll m < n$. Viewing $\\alpha$ as playing the role of an inverse temperature, the phase $0 \\leq \\alpha < 1$ corresponds to the high-temperature (disordered) phase, while $\\alpha > 2$\ncorresponds to the low-temperature (ordered) phase. \n\\begin{figure}\n\\includegraphics[angle=0,scale=0.6]{recap.eps}\n\\caption{Asymptotic behavior of the mean number of real roots $\\langle\n N_n \\rangle$ of $P_n(x)$ in Eq. (\\ref{def_poly}) as a function of\n $\\alpha$. These polynomials exhibit a condensation of their roots on\n the real axis for $\\alpha \\geq 2$.}\\label{fig1}\n\\end{figure}\nRoughly speaking, one can consider our results as an interesting example where the transition from the\nhigh temperature where $\\langle N_n \\rangle \\propto \\log{n}$ (governed\nby an ``$\\alpha = 0$ fixed point'') to the\nlow temperature phase where $\\langle N_n \\rangle \\propto n$ (governed\nby an ``$\\alpha = \\infty$'' fixed point) happens through a \n{\\it marginal phase}, for $1< \\alpha < 2 $, where $\\langle N_n \\rangle\n\\sim n^{\\phi}$ with an exponent $\\phi = \\alpha\/2$ which depends\ncontinuously on $\\alpha$. \n\nThe paper is organized as follows. In section 2, we describe the general\nframework to compute the local density of real roots, which directly\nleads to $\\langle N_n \\rangle$. In sections 3 to 6 we then analyse\nseparately the \ncases $0 \\leq \\alpha < 1$, $1 < \\alpha < 2$, $\\alpha > 2$ and the\n``critical case'' $\\alpha = 2$.
In section 7, we\ngive a qualitative argument to explain the condensation transition\noccurring at \n$\\alpha_c = 2$ before we conclude in section 8. The Appendix contains\nsome useful technical details. \n \n\\section{General framework}\n\n\nFirst we notice that, since $P_n(x)$, as a function of\n$x$, is a Gaussian process, it is completely characterized by its\ntwo-point correlation function $C_n(x,y)$\n\\begin{equation}\\label{def_correl}\nC_n(x,y) = \\langle P_n(x) P_n(y) \\rangle = \\sum_{k=0}^n e^{-k^\\alpha} \\,x^k\\,y^k \\;,\n\\end{equation}\nwhere we used the notation $\\langle ... \\rangle$ to denote an average over the random variables $a_k$. \nA central object involved in the calculation of $\\langle N_n \\rangle$\nis $\\rho_n(x)$, the mean density of real roots at point $x$. If we denote by $\\lambda_1, \\lambda_2, ..., \\lambda_p$ the $p$ real\nroots (if any) of $P_n(x)$, one has $\\delta(P_n(x)) = \\sum_{i=1}^p \\delta(x-\\lambda_i)\/|P_n'(\\lambda_i)|$ such that \n$\\rho_n(x)$ can be written as\n\\begin{eqnarray}\n\\rho_n(x) &=& \\sum_{i=1}^p \\langle \\delta(x-\\lambda_i) \\rangle = \n\\langle |P_n'(x)|\\delta(P_n(x)) \\rangle \\nonumber \\\\\n&=& \\int_{-\\infty}^\\infty dy |y| \\langle \\delta(P_n'(x)-y) \\delta(P_n(x)) \\rangle \\;. \\label{def_density}\n\\end{eqnarray}\nUnder this form (\\ref{def_density}), one observes that the computation of \nthe mean density involves the joint distribution of the polynomial\n$P_n(x)$ and its derivative $P'_n(x)$ which is simply a bivariate\nGaussian distribution. After Gaussian integration over $y$, one obtains \n\\begin{eqnarray}\\label{def_density_inter}\n&&\\rho_n(x) = \\frac{\\sqrt{c_n(x) (c_n'(x)\/x + c_n''(x)) - [c_n'(x)]^2\n}}{2 \\pi c_n(x)} \\;, \\\\\n&&c_n(x) = C_n(x,x) = \\sum_{k=0}^n e^{-k^\\alpha} x^{2k}\\;.
\\nonumber\n\\end{eqnarray}\nThis formula (\\ref{def_density_inter}) can be written in a very\ncompact way \\cite{edelman} : \n\\begin{eqnarray}\n\\rho_n(x) = \\frac{1}{\\pi} \\sqrt{\\partial_u \\partial_v \\log C_n(u,v)}\n\\bigg \n|_{u=v=x} \\;.\\label{ek_formula}\n\\end{eqnarray} \nGiven that the random coefficients $a_k$ are drawn from a symmetric distribution, we can restrict our study of $\\rho_n(x)$ on ${\\mathbb {R}}^+$ from which one obtains the mean number of real roots $\\langle N_n \\rangle$ as\n\\begin{equation}\n\\langle N_n \\rangle = 2 \\int_0^\\infty \\rho_n(x) dx \\;.\n\\end{equation}\n\n\n{\\bf An important change of variable.} We will see below that it is useful to consider these polynomials $P_n(x)$ in terms of another variable\n$Y$ defined as\n\\begin{equation}\\label{new_variable}\nY = \\left(\\frac{2}{\\alpha} \\log{x} \\right)^{\\frac{1}{\\alpha-1}} \\;.\n\\end{equation}\nWe denote $\\hat \\rho_n(Y)$ the mean density of the real roots in terms of this new variable such that one \nhas also $\\langle N_n \\rangle = \\int_0^\\infty \\hat \\rho_n(Y) dY$. For $0 < \\alpha < 1$ we will see that, for large $n$, most of the real roots of $P_n$ are located close to $Y = n$ while for $\\alpha > 1$, the density extends over the whole interval $Y \\in [1,n]$. This change of variable (\\ref{new_variable}) is motivated by the following analysis. \n\nFirst we notice that $C_n(x,y) = \\sum_{k=0}^n e^{-k^\\alpha} x^k y^k$ in Eq. (\\ref{def_correl}) is of the form $C_n(x,y)=\nc_n(\\sqrt{x y})$. Anticipating a saddle point analysis, one writes $c_n(x)$ as \n\\begin{eqnarray}\\label{def_series}\n&&c_n(x) = \\sum_{k=0}^n e^{-k^\\alpha} x^{2k} = \\sum_{k=0}^n\n\\exp{\\left(-\\phi(k,x) \\right)} \\;, \\;\\phi(k,x) = k^\\alpha - 2 k\n\\log{x} \\;. \n\\end{eqnarray}\nAlthough $\\phi(k,x)$ is defined for integers $k = 0, 1, 2, \\cdots, n$,\nit is readily extended to the real axis and denoted $\\phi(u,x) =\nu^\\alpha - 2u\\log{x}$ for $u \\in\n\\mathbb{R}^+$. 
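The density formula above is straightforward to evaluate numerically. The sketch below (an illustrative check, not taken from the paper) integrates $\rho_n$ over the real line via the substitution $x=\tan t$; for $n=1$ the integral must equal $\langle N_1\rangle=1$ exactly, since a random degree-one polynomial always has a single real root, while for $\alpha=0$ (Kac's case) and moderate $n$ one recovers a value close to $(2/\pi)\log n$.

```python
import math

# Mean density of real roots from Eq. (def_density_inter):
#   rho_n(x) = sqrt(c (c'/x + c'') - c'^2) / (2 pi c),
#   c(x) = sum_{k=0}^n exp(-k^alpha) x^{2k}.
# Keep n moderate (n <= 20 or so): powers up to x^(4n-2) appear below.
def rho(x, n, alpha):
    v = [math.exp(-float(k) ** alpha) for k in range(n + 1)]
    c = sum(v[k] * x ** (2 * k) for k in range(n + 1))
    c1 = sum(2 * k * v[k] * x ** (2 * k - 1) for k in range(1, n + 1))
    c2 = sum(2 * k * (2 * k - 1) * v[k] * x ** (2 * k - 2) for k in range(1, n + 1))
    disc = max(c * (c1 / x + c2) - c1 * c1, 0.0)  # guard against round-off
    return math.sqrt(disc) / (2 * math.pi * c)

def mean_num_real_roots(n, alpha, pts=2000):
    """<N_n> = integral of rho over R, computed via x = tan(t)."""
    total = 0.0
    for i in range(pts):
        t = -math.pi / 2 + math.pi * (i + 0.5) / pts
        x = math.tan(t)
        total += rho(x, n, alpha) * (1.0 + x * x)  # dx = (1 + tan^2 t) dt
    return total * math.pi / pts
```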
The behavior of $c_n(x)$ is essentially governed by the\nbehavior of $\\phi(u,x)$ as a function of $u$ (and fixed $x$). In\nparticular, for $\\alpha < 1$, $\\phi(u,x)$ has a single maximum while\nfor $\\alpha > 1$, it has a single minimum for $u = u^*(x)$ given by\n\\begin{eqnarray}\\label{def_ustar} \n&&\\partial_u \\phi(u^*(x),x) = 0 \\; , \\; \\partial^2_u \\phi(u^*(x),x) =\n\\alpha(\\alpha-1) u^*(x)^{\\alpha-2} > 0 \\;, \\nonumber \\\\\n&& u^*(x) = \\left(\\frac{2}{\\alpha} \\log{x} \\right)^{\\frac{1}{\\alpha-1}} \\;.\n\\end{eqnarray} \nThe new variable $Y$ introduced above in Eq. (\\ref{new_variable}) is\nthus precisely $Y = u^*(x)$. As a consequence, the density behaves\nquite differently in both cases $\\alpha < 1$ and $\\alpha > 1$.\n\nFor $\\alpha < 1$, most of the real roots on $\\mathbb{R}^+$ are located\nin $[1, \\infty]$. For \nfixed $x>1$, $\\phi(u,x)$ as a function of $u$ in the interval $[0,n]$ has a\nglobal minimum for $u=n$. Therefore, the sum entering in the expression of \n$c_n(x)$ in Eq. (\\ref{def_series}) will be dominated by the terms with $k \\sim n$. The expansion\nof $\\phi(k,x)$ in Taylor series around $k=n$ yields\n\\begin{eqnarray}\n\\phi(k,x) &=& \\phi(n,x) + (k-n) (\\alpha n^{\\alpha-1} - 2 \\log{x} ) + \\cdots \\nonumber \\\\\n&=& (1-\\alpha)n^{\\alpha} - k(\\alpha n^{\\alpha-1}-2\\log{x}) + \\cdots \\;,\n\\end{eqnarray}\nwhere the higher order terms can be neglected in the large $n$ limit because $\\partial^j \\phi(n,x)\/\\partial u^j = {\\cal O}(n^{\\alpha - j})$ for $j \\geq 2$. Thus, for $\\alpha < 1$ one has \n\\begin{eqnarray}\\label{kac_sim}\nc_n(x) \\sim e^{-(1-\\alpha)n^\\alpha} \\sum_{k=0}^n (x e^{-\\frac{\\alpha}{2} n^{\\alpha-1}})^{2k} \\;,\n\\end{eqnarray}\nwhich, in terms of the rescaled variable $\\tilde x = x \\,e^{-\\frac{\\alpha}{2} n^{\\alpha-1}}$, is the correlator of Kac's polynomials. 
From this observation (\\ref{kac_sim}), one can straightforwardly obtain the mean number of real roots $\\langle N_n \\rangle$, this will be done in section 3. \n\nFor $\\alpha > 1$, the situation is quite different and in that case, $\\phi(u,x)$ has a single\nminimum for $u = u^*(x) = (\\frac{2}{\\alpha}\n\\log{x})^{\\frac{1}{\\alpha-1}}$ (\\ref{def_ustar}). Besides, we will see \nbelow that the main contribution \nto $\\langle N_n \\rangle$ on $\\mathbb{R}^+$ comes from the interval $1 < x < \\exp{\\left(\\frac{\\alpha}{2} n^{\\alpha-1} \\right)}$ where $1<\nu^*(x) < n$. In that case the sum entering in the definition of\n$c_n(x)$ in Eq. (\\ref{def_series}) is indeed dominated by $k \\sim\nu^*(x)$ and $c_n(x)$ can be evaluated by a saddle point\ncalculation. For this purpose, one obtains after some algebra explained in the Appendix, a convenient expression of $\\rho_n(x)$ as\n\\begin{eqnarray}\\label{start_expr_rho}\n\\hspace*{-1cm}\\rho_n(x) = \\frac{1}{\\pi x} \\left( \\frac{ \\sum_{k=0}^n (k-u^*(x))^2 e^{-\\phi(k,x)}}{\\sum_{k=0}^n e^{-\\phi(k,x)}} - \n \\left[ \\frac{ \\sum_{k=0}^n (k-u^*(x)) e^{-\\phi(k,x)}}{\\sum_{k=0}^n e^{-\\phi(k,x)}} \\right]^2\n\\right)^{\\frac{1}{2}} \\;,\n\\end{eqnarray}\nwhich is the starting point of our analysis for $\\alpha > 1$. For $1 < x < \\exp{\\left(\\frac{\\alpha}{2} n^{\\alpha-1} \\right)}$, one has $u^*(x)2$ and $\\alpha=2$ separately. This will be done in section 4, 5 and 6 respectively. \n\n\n\n\\section{The case $0 < \\alpha < 1$}\n\nIn that case, from the expression for $c_n(x)$ in Eq. (\\ref{kac_sim}),\nwe can use the results of Kac's polynomials to obtain that most of the\nreal roots will be such that, for large $n$, $x\ne^{-\\frac{\\alpha}{2}n^{\\alpha-1}} -1 = {\\cal O}(n^{-1})$\n\\cite{fyodorov}. 
\nIn other words, the real roots are distributed in a region of width\n$1\/n$ around $e^{\\frac{\\alpha}{2}n^{\\alpha-1}} = 1 + \\frac{\\alpha}{2}\nn^{\\alpha-1} + {\\cal O}(n^{\\alpha-2})$ and this distribution is\nexactly the same as the one for Kac's polynomials (corresponding to\n$\\alpha=0$). The number of real roots is thus also the same and given\nby \n\\begin{equation}\\label{last_kac}\n\\langle N_n \\rangle \\sim \\frac{2}{\\pi} \\log{n} \\;,\n\\end{equation}\nindependently of $\\alpha < 1$.\n\n\n\n\\section{The case $1 < \\alpha < 2$}\n\nIn that case $[u^*(x)]^{\\alpha-2} \\to 0$ for large $u^*(x)$ and one thus sees \non the asymptotic expression in\nEq. (\\ref{asympt_largex}) that the discrete sum can be replaced by an\nintegral. This yields, for large $n$ and large $x$ with $x <\n\\exp{(\\frac{\\alpha}{2} n^{\\alpha-1}})$ \n\\begin{equation}\\label{discrete_integral}\n\\hspace*{-0.5cm}\\sum_{k=0}^n\n g\\left( k-u^*(x)\\right) \\exp{(- \\phi(k,x))} \\sim e^{-\\phi(u^*(x),x)}\\int_{-\\infty}^\\infty g(y)\n e^{-\\frac{\\alpha(\\alpha-1)}{2} y^2 u^*(x)^{{\\alpha-2}} } \\, dy \\;.\n\\end{equation}\nNote that the prefactor $e^{-\\phi(u^*(x),x)}$ is unimportant for the computation of $\\rho_n(x)$ because it disappears between the numerator and the denominator in Eq. (\\ref{start_expr_rho}) and it will be omitted below. In particular, setting $g(z) = 1$ in Eq. (\\ref{discrete_integral}) one has\n\\begin{eqnarray}\\label{eq_g1}\n\\sum_{k=0}^n \\exp{(-\\phi(k,x))} \\propto \\sqrt{2 \\pi}\n\\left[\\frac{u^*(x)^{2-\\alpha}}{\\alpha(\\alpha-1)} \\right]^{\\frac{1}{2}} \\;,\n\\end{eqnarray}\nand similarly, setting $g(z)=z^2$ in Eq. (\\ref{discrete_integral}) one has\n\\begin{eqnarray}\\label{eq_gx2}\n\\sum_{k=0}^n (k-u^*(x))^2 \\exp{(-\\phi(k,x))} \\propto \\sqrt{2 \\pi}\n\\left[\\frac{u^*(x)^{2-\\alpha}}{\\alpha(\\alpha-1)} \\right]^{\\frac{3}{2}} \\;,\n\\end{eqnarray}\nwhile $\\sum_{k=0}^n (k-u^*(x))\\exp{(-\\phi(k,x))} \\sim 0$ to\nlowest order in $n$. 
Therefore using the exact expression given in\nEq. (\\ref{start_expr_rho}) together with the \nasymptotic behaviors given in Eq.~(\\ref{eq_g1}, \\ref{eq_gx2}), one obtains\nthe large $x$ behavior of \n$\\rho_n(x)$ as\n\\begin{equation}\\label{asympt_largex_alleq2}\n\\rho_n(x) \\sim \\frac{1}{\\pi x}\n\\frac{1}{\\sqrt{\\alpha(\\alpha-1)}} \\left(\\frac{2}{\\alpha} \\log{x}\n\\right)^{\\frac{2-\\alpha}{2(\\alpha-1)}} \\;.\n\\end{equation} \nFor a clear comparison with the case $\\alpha > 2$ (which will be analysed in\nthe next section), it is convenient to\nwrite the density $\\hat \\rho_n(Y)$, in terms of the variable $Y =\n\\left(\\frac{2}{\\alpha} \\log{x} \\right)^{\\frac{1}{\\alpha-1}}$, \nwhich reads, for $1 \\ll Y < n$ \n\\begin{eqnarray}\\label{asympt_largeX_alleq2}\n\\hat \\rho_n(Y) \\sim \\frac{\\sqrt{\\alpha(\\alpha-1)}}{2 \\pi} Y^{-\\frac{1}{2}(2-\\alpha)} \\;,\n\\end{eqnarray}\nand in Fig. \\ref{fig2} a), we show a sketch of this asymptotic\nbehavior (\\ref{asympt_largeX_alleq2}) of $\\hat \\rho_n(Y)$ for $1 \\ll Y\n< n$. \n\nWe can now compute $\\langle N_n \\rangle = \\int_{-\\infty}^\\infty \\rho_n(x) \\, dx$. First, one notices that for $\\alpha > 1$, the series entering in the definition of $c_n(x)$\nin Eq. (\\ref{def_series}) has an infinite radius of convergence so\nthat one readily obtains that $\\int_{-1}^{+1} \\rho_n(x) \\, dx$ is of\norder ${\\cal O}(1)$ in the \nlimit $n \\to \\infty$. Besides, for large $x \\gg e^{\\frac{\\alpha}{2}\n n^{\\alpha-1}}$, one has (see also Ref. \\cite{us_long}) \n\\begin{equation}\\label{very_largeX}\n\\rho_n(x) \\sim \\sqrt{\\frac{\\langle a_{n-1}^2\\rangle}{\\langle a_{n}^2\n \\rangle}}\\frac{1}{\\pi x^2} \\sim \\frac{e^{\\frac{\\alpha}{2}\n n^{\\alpha-1}}}{\\pi x^2} \\;,\n\\end{equation} \nwhich implies that $\\int_{e^{\\frac{\\alpha}{2}\n n^{\\alpha-1}}}^\\infty \\rho_n(x) \\, dx$ is also of order ${\\cal\n O}(1)$ in the limit $n\\to \\infty$. 
From these properties, it follows\nthat the main contributions to $\langle N_n \rangle$ on ${\mathbb R}^+$ come from the\ninterval $[1, e^{\frac{\alpha}{2} n^{\alpha-1}}]$ where the asymptotic\nbehavior of $\rho_n(x)$ is given in\nEq. (\ref{asympt_largex_alleq2}). Therefore one has\n\begin{equation}\label{N_alleq2}\n\langle N_n \rangle \sim 2 \int_1^{e^{\frac{\alpha}{2} n^{\alpha-1}}} \rho_n(x) \, dx\n\sim \frac{2}{\pi} \sqrt{\frac{\alpha-1}{\alpha}} \,n^{\alpha\/2} \;,\n\end{equation}\nwhere the factor $2$ comes from the additional contribution coming from $[-e^{\frac{\alpha}{2} n^{\alpha-1}},-1]$. We thus have here an algebraic growth\n$\langle N_n \rangle \propto n^{\alpha\/2}$ with a continuously varying exponent $\alpha\/2$. This exponent tends to $1\/2$ as $\alpha \to 1^+$, which is\nexpected from the analysis of Weyl polynomials $W_n(x)$ in Eq. (\ref{weyl}) for which $\langle a_k^2 \rangle \propto e^{-k \log k}$ (although the variance is not exactly of the form $\langle a_k^2 \rangle = e^{-k^\alpha}$). Besides, from Eq. (\ref{N_alleq2}), one also obtains that \nthe amplitude of this term proportional to $n^{\alpha\/2}$ vanishes when $\alpha \to 1$. We recall that for $\alpha \leq 1$, one has instead $\langle N_n \rangle \sim \frac{2}{\pi} \log{n}$ (\ref{last_kac}), characteristic of Kac's polynomials. This suggests that this limit $\alpha \to 1$ is rather singular in the sense that the asymptotic behavior of $\langle N_n \rangle$\nfor large $n$ changes \"discontinuously\" from $\log{n}$ to $\sqrt{n}$.\n\n\n\n\section{The case $\alpha > 2$}\n\nIn that case, the behavior of the discrete sum in\nEq. (\ref{asympt_largex}), which \nenters in the computation of $\rho_n(x)$ (\ref{start_expr_rho}), is quite\ndifferent. 
Indeed, in that case $[u^*(x)]^{\\alpha-2} \\propto (\\log{x})^{(\\alpha-2)\/(\\alpha-1)}\\to \\infty$ for large\n$x$ and therefore the leading term for large $x$ in\nEq.~(\\ref{asympt_largex}) corresponds to $m=0$ if $b < 1\/2$ or $m=1$ in\n$b>1\/2$. \nKeeping these leading contributions, one has\n\\begin{eqnarray}\n&&\\sum_{k=0}^n\n g\\left( k-u^*(x)\\right) \\exp{(-\\phi(k,x))} \\propto g(-b) \\exp{\\left[-\\frac{\\alpha (\\alpha-1)}{2} b^2 u^*(x)^{{\\alpha-2}}\\right]} \\nonumber \\\\\n&&+ g(1-b)\\exp{\\left[-\\frac{\\alpha (\\alpha-1)}{2} \n (1-b)^2 u^*(x)^{{\\alpha-2}}\\right]}\\label{asympt_largex_alphaleq2} \\;.\n\\end{eqnarray}\nwhere, again, we have omitted the unimportant prefactor $e^{-\\phi(u^*(x),x)}$. Using this large $x$ expansion (\\ref{asympt_largex_alphaleq2}), one obtains $\\rho_n(x)$ in Eq. (\\ref{start_expr_rho}) as\n\\begin{eqnarray}\n \\rho_n(x) \\sim \\frac{2}{(\\pi x)\\cosh{\\left[\\frac{\\alpha(\\alpha-1)}{2} Y^{\\alpha-2}(1-2b) \\right]} } \\, , \\, Y = \\left(\\frac{2}{\\alpha} \\log{x} \\right)^{\\frac{1}{\\alpha-1}} \\;.\n\\end{eqnarray}\nIn terms of the variable $Y$, the density $\\hat \\rho_n(Y)$ reads, \n\\begin{eqnarray}\\label{pseudo_periodic}\n\\hat \\rho_n(Y = \\lfloor Y \\rfloor + b) \\sim \\frac{\\alpha(\\alpha-1)\n Y^{\\alpha-2}}{2\\pi\\cosh{\\left[ \\frac{\\alpha(\\alpha-1)}{2}\n Y^{\\alpha-2}(1-2b) \\right]}} \\;. \n\\end{eqnarray}\nIn Fig. \\ref{fig2} c), one shows a sketch of $\\hat \\rho_n(Y)$ for\nlarge $Y < n$ given by Eq. (\\ref{pseudo_periodic}) : it is\nqualitatively very different from the case\n$\\alpha < 2$ (see Fig. \\ref{fig2} a)). Indeed, $\\hat \\rho_n(Y)$\nexhibits peaks centered around $k + \\frac{1}{2}$ for large integers\n$1 \\ll k < n$. The height of these peaks is given by $\\alpha(\\alpha-1)\nk^{\\alpha-2}\/(2 \\pi)$ whereas its width scales like $k^{2 - \\alpha}$. \n\nFrom $\\rho_n(x)$, one can now compute the mean number of real\nroots. As in the case $\\alpha < 2$ (see Eq. 
(\ref{very_largeX}) and above), one\ncan show that the main contribution to $\langle N_n \rangle$ comes\nfrom the intervals $[-e^{\frac{\alpha}{2} n^{\alpha-1}},-1]$ and \n$[1, e^{\frac{\alpha}{2} n^{\alpha-1}}]$. One thus\nhas from Eq. (\ref{pseudo_periodic})\n\begin{eqnarray}\label{condensation}\n\langle N_n \rangle &=& 2 \int_0^\infty \rho_n(x) \, dx \sim 2 \int_0^n \hat\n\rho_n(Y) \, dY \\\n&\sim&\n \sum_{k \gg 1}^n \int_0^1 \frac{\alpha(\alpha-1)\n k^{\alpha-2}}{\pi\cosh{\left[ \frac{\alpha(\alpha-1)}{2}\n k^{\alpha-2}(1-2b) \right]}} \,db \sim \sum_{k \gg 1}^n\n \int_{-\frac{\alpha(\alpha-1)}{2}k^{\alpha-2}}^{\frac{\alpha(\alpha-1)}{2}k^{\alpha-2}} \frac{dz}{\pi \cosh{z}} \,, \nonumber\n\end{eqnarray}\nand finally \n\begin{eqnarray}\n\langle N_n \rangle \sim n \;,\n\end{eqnarray}\nwhere we have used $\int_{-\infty}^\infty dz\/\cosh{z} = \pi$. This\ncondensation of the roots on the real axis, characterized by the fact\nthat $\langle N_n \rangle \sim n$, thus occurs via the\nformation of this quasi-periodic structure (see Fig. \ref{fig2}\nc)). More precisely, this computation in Eq. (\ref{condensation})\nshows that for large $k$, $2 \int_k^{k+1} \hat \rho_n(Y) \,dY \sim 1$, which\nmeans, going back to the original variable $x$, that there is, on\naverage, one root in the interval $[-x_{k+1},-x_k] \cup [x_k,x_{k+1}]$,\nwith $x_k = e^{\frac{\alpha}{2} k^{\alpha-1}}$. \n\n\n\section{The special case $\alpha = 2$}\n\nIn view of the previous analysis, it is tempting to consider the fraction of real roots $\Phi = \lim_{n\n \to \infty} \langle N_n \rangle \/ n$ as an ``order parameter''. For\n $\alpha < 2$, \n one has $\Phi = 0$ whereas $\Phi = 1$ for $\alpha > 2$. 
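The statement that there is on average one root per period can be checked without any of the above asymptotics, directly from the exact expression (\ref{start_expr_rho}) written in the variable $Y$, i.e. $\hat\rho_n(Y)=\rho_n(x)\,dx/dY$ with $x=e^{\frac{\alpha}{2}Y^{\alpha-1}}$. The sketch below is plain Python with illustrative values of $\alpha$ and $k$; it evaluates $2\int_k^{k+1}\hat\rho_n(Y)\,dY$ by a midpoint rule.

```python
import math

alpha = 3.0

def rho_hat(Y, window=40):
    """rho_n(x) dx/dY from Eq. (start_expr_rho), with x = exp(a/2 Y^{a-1})."""
    ks = range(max(0, int(Y) - window), int(Y) + window + 1)
    # phi-tilde(k) = k^a - a k Y^{a-1} + (a-1) Y^a >= 0, zero at k = Y,
    # so the exponentials never overflow.
    w = [math.exp(-(k ** alpha - alpha * k * Y ** (alpha - 1.0)
                    + (alpha - 1.0) * Y ** alpha)) for k in ks]
    z = sum(w)
    m1 = sum((k - Y) * wk for k, wk in zip(ks, w)) / z
    m2 = sum((k - Y) ** 2 * wk for k, wk in zip(ks, w)) / z
    x_rho = math.sqrt(max(m2 - m1 * m1, 0.0)) / math.pi   # x * rho_n(x)
    dlogx_dY = 0.5 * alpha * (alpha - 1.0) * Y ** (alpha - 2.0)
    return x_rho * dlogx_dY

def roots_per_period(k0, m=4000):
    """2 * integral of rho_hat over Y in [k0, k0 + 1] (midpoint rule)."""
    return 2.0 * sum(rho_hat(k0 + (i + 0.5) / m) for i in range(m)) / m

vals = [roots_per_period(k0) for k0 in (6, 10, 15)]
```

Each entry of `vals` should be very close to 1, confirming the condensation mechanism: one real root per period of the quasi-periodic structure.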
One can\n however interpolate smoothly between these two limiting cases by\n considering the case $\alpha = 2$ and introducing an additional real\nparameter $\mu$ such that\n\begin{equation}\label{def_mu}\n\langle a_k^2 \rangle = e^{-\mu k^2} \;.\n\end{equation}\nPerforming the same algebra as explained in the Appendix, one obtains the same formula as given in Eq. (\ref{start_expr_rho}) with $u^*(x) = \mu^{-1} \log{x}$. The new variable is thus here $Y = \mu^{-1} \log{x}$ and, setting $Y = \lfloor Y \rfloor + b$, it is easy to see that the\ndensity $\hat \rho_n(Y)$ is given, for $1 \ll Y < n$, by \n\begin{equation}\label{start_expr_rho_mu}\n\hspace*{-2cm}\hat \rho_n(Y) = \frac{\mu}{\pi} \left[ \frac{\sum_{m=-\infty}^\infty (m-b)^2 e^{-\mu(m-b)^2} }{\sum_{m=-\infty}^\infty e^{-\mu(m-b)^2}} - \left[ \n\frac{\sum_{m=-\infty}^\infty (m-b) e^{-\mu(m-b)^2} }{\sum_{m=-\infty}^\infty e^{-\mu(m-b)^2}}\right]^2 \right]^{1\/2} \;,\n\end{equation}\nwhich is thus 1-periodic for all $\mu$. In Fig. \ref{fig2} b), one\nshows a sketch of $\hat \rho_n(Y)$ for $\alpha = 2$ given by\nEq. (\ref{start_expr_rho_mu}). For $\mu \to 0$, the\ndensity is almost constant and $\hat \rho_n(Y) \sim\n\pi^{-1}\sqrt{\mu\/2}$ and the modulation of the density increases\nwith $\mu$. For large $\mu$, the sum in Eq.~(\ref{start_expr_rho_mu})\nis dominated by the terms corresponding to $m=0$ and $m=1$ and $\hat\n\rho_n(Y)$ is thus given by a formula similar to Eq. (\ref{pseudo_periodic})\nsetting $\alpha=2$ and replacing $Y^{\alpha-2}$ by $\mu\/2$. For the\naverage number of real roots one has \n\begin{eqnarray}\n\langle N_n \rangle \sim \n\cases{\n\frac{\sqrt{2\mu}}{\pi} n \;,\; \mu \ll 1 \\\nn \;,\; \mu \gg 1\;,}\n\end{eqnarray}\nwhich shows that this family of real random polynomials (\ref{def_mu})\ninterpolates smoothly between the cases $\alpha < 2$\n(\ref{N_alleq2}) and $\alpha > 2$ (\ref{condensation}). 
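The two limits quoted above can be read off numerically from Eq. (\ref{start_expr_rho_mu}). The sketch below is plain Python; the values of $\mu$ and $b$ are arbitrary illustrative choices. It truncates the sums at $|m|\le 60$, compares the small-$\mu$ behavior with the constant $\pi^{-1}\sqrt{\mu/2}$, and compares the large-$\mu$ behavior with the two-term ($m=0,1$) reduction of the sums.

```python
import math

def rho_hat_mu(mu, b, mmax=60):
    """Eq. (start_expr_rho_mu) with the sums truncated at |m| <= mmax."""
    ms = range(-mmax, mmax + 1)
    w = [math.exp(-mu * (m - b) ** 2) for m in ms]
    z = sum(w)
    m1 = sum((m - b) * wk for m, wk in zip(ms, w)) / z
    m2 = sum((m - b) ** 2 * wk for m, wk in zip(ms, w)) / z
    return (mu / math.pi) * math.sqrt(m2 - m1 * m1)

# Small mu: the density is almost flat, rho_hat ~ sqrt(mu/2)/pi for all b.
small = [rho_hat_mu(0.01, b) for b in (0.1, 0.37, 0.5)]
small_pred = math.sqrt(0.005) / math.pi

# Large mu: only the m = 0 and m = 1 terms matter, giving a cosh form.
large = rho_hat_mu(6.0, 0.3)
large_pred = (6.0 / math.pi) * 0.5 / math.cosh(0.5 * 6.0 * (1.0 - 2.0 * 0.3))
```

The three small-$\mu$ values should coincide with the flat prediction independently of $b$, while the large-$\mu$ value should match the two-term reduction to well under a percent.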
\n\n\\begin{figure}\n\\includegraphics[angle=0,scale=0.8]{combined.ps}\n\\caption{{\\bf a)} : Sketch of $\\hat \\rho_n(Y)$ (in arbitrary units)\n given in Eq. (\\ref{asympt_largeX_alleq2}) as \n a function of $Y$ for $1 \\ll Y < n$ for $\\alpha < 2$. {\\bf b)} : Sketch of $\\hat \\rho_n(Y)$ (in arbitrary units)\n given in Eq. (\\ref{start_expr_rho_mu}) as \n a function of $Y$ for $1 \\ll Y < n$ for $\\alpha = 2$. {\\bf c)} : Sketch of $\\hat \\rho_n(Y)$ (in arbitrary units)\n given in Eq. (\\ref{pseudo_periodic}) as \n a function of $Y$ for $1 \\ll Y < n$ for $\\alpha > 2$. Here $k$\n denotes an integer with $1 \\ll k < n$.}\\label{fig2} \n\\end{figure}\n\n\n\\section{A qualitative argument for the transition at $\\alpha=2$}\n\nThis condensation of the roots on the real axis can be qualitatively\nunderstood if one considers the random polynomials (for $x >0$) $\\hat P_n(Y) =\nP_n(x)$ of the variable\n$Y$, which one writes as \n\\begin{eqnarray}\\label{P_hat}\n\\hat P_n(Y) = \\sum_{k=0}^n \\hat a_k w(k,Y) \\;, \\; w(k,Y) =\n\\exp{\\left[-\\frac{1}{2}(k^\\alpha - \\alpha k Y^{\\alpha-1})\\right]} \\;,\n\\end{eqnarray}\nand $\\hat a_k$ are i.i.d. Gaussian variables of unit variance. It is\neasy to see that the weights $w(k,Y)$, as a function of $k$, have a\nsingle maximum for $k = Y$ where the second derivative is proportional to\n$k^{\\alpha - 2}$. Thus for $\\alpha > 2$, the weights get more and\nmore peaked around this maximum for large $k$, whereas $\\hat a_k$ is typically of order ${\\cal O}(1)$. \nTherefore, given a large integer $m$, $\\hat P_n(m)$ is, for $\\alpha > 2$, \ndominated by a single term corresponding to $k=m$. Consequently, the sign\nof $\\hat P_n(m)$ is essentially the sign of $\\hat a_m$. 
This in turn implies that, if $\\hat\na_m$ and $\\hat a_{m+1}$ have an opposite \nsign, $P_n(x)$ has, with a probability close to $1$, a root in the interval\n$[e^{\\frac{\\alpha}{2}m^{\\alpha-1}},e^{\\frac{\\alpha}{2}(m+1)^{\\alpha-1}}]$.\nIn the case where $\\hat a_m$ and $\\hat a_{m+1}$ have the same sign, the same\nargument shows that $P_n(x)$ has, with a probability close to $1$, a root in the interval\n$[-e^{\\frac{\\alpha}{2}(m+1)^{\\alpha-1}},-e^{\\frac{\\alpha}{2}(m)^{\\alpha-1}}]$.\nOne thus recovers qualitatively the result we had found from the\ncomputation of $\\hat \\rho_n(Y)$ in Eq. (\\ref{condensation}) where we\nhave shown that $P_n(x)$ has, on average, one root in the interval\n$[-e^{\\frac{\\alpha}{2}(m+1)^{\\alpha-1}},-e^{\\frac{\\alpha}{2}(m)^{\\alpha-1}}]\n\\cup\n [e^{\\frac{\\alpha}{2}m^{\\alpha-1}},e^{\\frac{\\alpha}{2}(m+1)^{\\alpha-1}}]$. This shows finally that $P_n(x)$ has, on average, $\\langle N_n \\rangle \\propto n$real \nroots. \n\nWe also point out that our argument explains in a\nrather intuitive way the result obtained \nby Littlewood and Offord \\cite{littlewood} for the random polynomials\n$L_n(x)$ (\\ref{little}). For these specific polynomials, \ndefining $x_0 = 0$, $x_{m} = m^m m !$, they rigorously proved, using\nalgebraic (and rather cumbersome) methods, that $L_n(x)$ has a root either on $[x_m,x_{m+1}]$\nif $\\epsilon_m \\epsilon_{m+1} =-1$ or in $[-x_{m+1},-x_{m}]$ if\n$\\epsilon_{m} \\epsilon_{m+1} = 1$. Our argument gives some insight on their intriguing result and allows to understand it in a rather simple way. \n\n \n \n\n\\section{Conclusion}\n\nTo conclude we have introduced a new family of random polynomials\n(\\ref{def_poly}), indexed by a real $\\alpha$. For these random\npolynomials, we have computed the mean density of real roots\n$\\rho_n(x)$ from which we computed the mean number of real roots\n$\\langle N_n \\rangle$\nfor large $n$. 
We have shown that, while for $0 \\leq \\alpha < 1$,\n$\\langle N_n \\rangle \\sim (\\frac{2}{\\pi}) \\log{n}$, the behavior of\n$\\langle N_n \\rangle$ for $\\alpha > 1$ deviates \nsignificantly from the logarithmic behavior characteristic for \nKac's polynomials. For $1< \\alpha < 2$, we have shown that $\\langle\nN_n \\rangle\n\\propto n^{\\alpha\/2}$ whereas for $\\alpha > 2$, $\\langle N_n \\rangle\n\\sim n$. This \nfamily of real random polynomials thus displays an interesting\ncondensation phenomenon \nof their roots on the real axis, which is accompanied by an ordering\nof the roots in \na quasi periodic structure : this is depicted in Fig. \\ref{fig2}. \n\nOf course, the occurrence of this transition raises several interesting\nquestions like the behavior of the variance of the number of real\nroots for large $n$ as $\\alpha$ is varied. It would be also interesting to\ncompute the two-point correlation function of the \nreal roots, which is a rather natural tool to characterize this periodic\nstructure we have found. In view of this, we hope that this interesting\nphenomenon will stimulate further research on random polynomials. \n\n\n\\begin{appendix}\n\\section{A useful expression for the mean density $\\rho_n(x)$}\n\nIn this appendix, we derive the expression for the mean density $\\rho_n(x)$ as given in Eq. (\\ref{start_expr_rho}) \nstarting from Eq. (\\ref{ek_formula}). We first write $c_n(x) = \\langle P_n(x) P_n(x)\\rangle$ as\n\\begin{eqnarray}\\label{c_app1}\nc_n(x) = e^{-\\phi(u^*(x),x)} \\sum_{k=0}^n e^{-\\tilde \\phi(k,x)} \\;,\n\\end{eqnarray}\nwhere $u^*(x)$ is the location of the minimum of $\\phi(u,x)$ given in\nEq. 
(\ref{def_ustar}) \n\begin{equation}\label{def_ustar_app}\nu^*(x) = \left(\frac{2}{\alpha} \log{x}\right)^{\frac{1}{\alpha-1}} \;,\n\end{equation}\nand\n\begin{eqnarray}\label{phi_tilde}\n&& \phi(u^*(x),x) = (1-\alpha)u^*(x)^\alpha \\\n&&\tilde \phi(k,x) = \phi(k,x) - \phi(u^*(x),x) = k^\alpha - \alpha k\n	 [u^*(x)]^{\alpha-1} + (\alpha-1) [u^*(x)]^\alpha \;. \nonumber\n\end{eqnarray}\nThe correlator $C_n(x,y) = c_n(\sqrt{xy})$ is given by\nEq. (\ref{c_app1}) together with Eq. (\ref{phi_tilde}) where $x$\nis replaced by $\sqrt{xy}$. All the dependence of $C_n(x,y)$ on\n$x,y$ is thus contained in $u^*(\sqrt{xy})$ only. From its definition\nin Eq.~(\ref{def_ustar_app}) one has immediately\n\begin{equation}\n\partial_x u^*(\sqrt{xy}) = \frac{1}{\alpha(\alpha-1)} \frac{1}{x}\n	[u^*(\sqrt{xy})]^{2-\alpha} \;,\n\end{equation}\nfrom which we obtain a set of useful relations\n\begin{eqnarray}\label{relations}\n&&\partial_{x,y}^2 \phi(u^*(\sqrt{xy}),\sqrt{xy}) = -\frac{1}{\alpha(\alpha-1)} \frac{1}{xy} [u^*(\sqrt{xy})]^{2-\alpha}\\\n&& \partial_x \tilde \phi(k,\sqrt{xy}) = \frac{1}{x}\n(u^*(\sqrt{xy})-k) \nonumber \\\n&& \partial_{x,y}^2 \tilde \phi(k,\sqrt{xy}) =\n\frac{1}{\alpha(\alpha-1)} \frac{1}{xy} [u^*(\sqrt{xy})]^{2-\alpha}\n\;. \nonumber\n\end{eqnarray}\nFor the computation of $\rho_n(x)$ from Eq. 
(\\ref{ek_formula}), it is\nuseful to introduce the notation, for any function $g(k)$ \n\\begin{equation}\n\\langle g(k) \\rangle_Z = \\frac{\\sum_{k=0}^n g(k)\\exp{(-\\tilde\n \\phi(k,\\sqrt{xy}))} \n}{\\sum_{k=0}^n \\exp{(-\\tilde \\phi(k,\\sqrt{xy}))}} \\;.\n\\end{equation}\nFrom $C_n(x,y) = c_n(\\sqrt{xy})$ and\n$c_n(x)$ given in Eq.(\\ref{c_app1}) one obtains \n\\begin{eqnarray}\\label{last_eq_app}\n&&\\partial_x \\partial_y \\log{C_n(\\sqrt{xy})} = - \\partial_{x,y}^2\n\\phi(u^*(\\sqrt{xy}),\\sqrt{xy}) - \\langle \\partial_x \\tilde\n\\phi(k,\\sqrt{xy}) \\partial_y \\tilde \\phi(k,\\sqrt{xy}) \\rangle_Z\n\\nonumber \\\\\n&&-\n\\langle \\partial_{x}\\tilde \\phi(k,\\sqrt{xy}) \\rangle_Z \\langle\n\\partial_{x}\\tilde \\phi(k,\\sqrt{xy}) \\rangle_Z \n- \\langle \\partial^2_{x,y}\\tilde \\phi(k,\\sqrt{xy}) \\rangle_Z \\;.\n\\end{eqnarray}\nFrom the above relations in Eq. (\\ref{relations}), it is readily seen\nthat the first and the last term in Eq. (\\ref{last_eq_app}) cancel\neach other. Using the relation in Eq. (\\ref{ek_formula}), one finally obtains\nthe relation given in the text in Eq. (\\ref{start_expr_rho}).\n\n\n\\end{appendix}\n\n\n\\section*{References}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section[#1]{\\centering\\normalfont\\scshape #1}}\n\\newcommand{\\ssubsection}[1]{%\n \\subsection[#1]{\\raggedright\\normalfont\\itshape #1}}\n\\newcommand{\\mathfrak{p}}{\\mathfrak{p}}\n\\newcommand{\\mathfrak{q}}{\\mathfrak{q}}\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nThis paper has the following main results. Let $S$ be a polynomial ring in $n$ variables, over an arbitrary field. 
Let $\\mathscr{M}$ be the family of all monomial ideals in $S$.\n\\begin{enumerate}[(i)]\n \\item We give an explicit characterization of all $M\\in \\mathscr{M}$, such that $\\pd(S\/M)=n$.\n \\item We give the total, graded, and multigraded Betti numbers of $S\/M$, in homological degree $n$, for all $M\\in \\mathscr{M}$.\n \\item Let $M\\in \\mathscr{M}$. If $\\pd(S\/M)=n$, then $\\sum\\limits_{i=0}^n \\betti_i(S\/M)\\geq 2^n$.\n \\item Let $M\\in \\mathscr{M}$. If $M$ is Artinian and $\\betti_n(S\/M)=1$, then $M$ is a complete intersection. \n \\end{enumerate}\n \\end{abstract}\n \n\n \\section{Introduction}\n Let $S=k[x_1,\\ldots,x_n]$ be a polynomial ring in $n$ variables, over a field $k$. The title of this paper makes reference to those monomial ideals $M$ in $S$, for which the quotient module $S\/M$ has projective dimension $n$, and the present work is entirely concerned with the study of such ideals.\n \n We begin to examine projective dimension $n$ in the context of squarefree monomial ideals. We show that the only squarefree monomial ideal $M$ for which the projective dimension of $S\/M$ equals $n$ is the maximal ideal $M=(x_1,\\ldots,x_n)$. This result turns out to be instrumental in the proof of a later theorem, where we characterize the class of all monomial ideals with large projective dimension. This characterization, in turn, is an avenue to three results that we discuss below.\n \n General consensus says that the problem of describing the Betti numbers of an arbitrary monomial ideal of $S$ is utopian. In homological degree $n$, however, such description is particularly simple. In fact, we give the total, graded, and multigraded Betti numbers of $S\/M$, in homological degree $n$, for every monomial ideal $M$ of $S$.\n \n Another theorem proven in this article states that when the quotient $S\/M$ has projective dimension $n$, the sum of its Betti numbers is at least $2^n$. 
This result, already known for Artinian monomial ideals [Ch, CE], is related to the Buchsbaum-Eisenbud, Horrocks conjecture, which has been investigated and generalized over the course of the years [CE], [PS, Conjectures 6.5, 6.6, and 6.7]. The proof of our theorem has a strong combinatorial flavor.\n \n Finally, we show that when $M$ is Artinian and the $n^{th}$ Betti number of $S\/M$ is 1, $M$ must be of the form $M=(x_1^{\alpha_1},\ldots,x_n^{\alpha_n})$, where the $\alpha_i$ are\n positive integers. Combining this result with [Pe, Theorem 25.7] (a criterion for $S\/M$ to be Gorenstein), we obtain the following. If $\betti_n(S\/M)=1$, then $S\/M$ is Cohen-Macaulay if and only if $S\/M$ is Gorenstein if and only if $M=(x_1^{\alpha_1},\ldots,x_n^{\alpha_n})$, for some ${\alpha_1},\ldots,{\alpha_n}\geq 1$.\n \n The organization of the article is as follows. Section 2 is about background and notation. Sections 3 and 4 prepare the ground to characterize all monomial ideals with large projective dimension. \n This characterization is the content of section 5. Section 6 is the heart of this work; it is in this section that we prove the three theorems advertised above.\n \n \section{Background and Notation}\n Throughout this paper $k$ is an arbitrary field, and $S$ represents a polynomial ring over $k$, in a finite number of variables. The letter $n$ is always used to denote the number of variables of $S$. The letter $M$ represents a monomial ideal\nin $S$. With minor modifications, the construction that we give below can be found in [Me].\n \n\begin{construction}\nLet $M$ be generated by a set of monomials $\{l_1,\ldots,l_q\}$. For every subset $\{l_{i_1},\ldots,l_{i_s}\}$ of $\{l_1,\ldots,l_q\}$, with $1\leq i_1<\ldots1$, perhaps favoring an episodic nuclear obscuration and blowout governed by radiation pressure. 
\n\nThe largest limitation for the $\\it Swift$\/BAT survey, which is relatively unbiased and complete for local AGNs, is that its shallow sensitivity misses luminous quasars in the distant universe. Here, we explore high-luminosity\/redshift samples of optical quasars (e.g., \\citealt{Sch10}), optical--IR red quasars with large color excess, where $E(B-V)\\lesssim1.5$ mag (e.g., \\citealt{Gli07}; \\citealt{Ban12}; \\citealt{Ros15}; \\citealt{Ham17}), dust-obscured galaxies (DOGs, \\citealt{Dey08}; Hot DOGs, \\citealt{Eis12}), and submillimeter galaxies (SMGs, \\citealt{Bla02}), where the latter two are likely subsets and distant analogs of local ultraluminous infrared galaxies (ULIRGs, $\\log L_{\\rm IR}>10^{12} L_{\\odot}$; \\citealt{San88}) and their higher-luminosity cousins (e.g., hyLIRGs, \\citealt{San96}; ELIRGs, \\citealt{Tsa15}). We also add the highest-obscuration Compton-thick AGNs, that is, AGNs with X-ray-obscuring column densities of $N_{\\rm H}\\gtrsim10^{24}\\, \\rm cm^{-2}$, observed with $\\it NuSTAR$ \\citep{Har13}.\n\n\\begingroup\n\\begin{deluxetable*}{ccccccc}\n\\tablecolumns{7}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{AGN samples}\n\\tablehead{\n\\colhead{Name} & \\colhead{Sample} & \\colhead{Selection} & \\colhead{Obscuration} & \\colhead{$z$} & \\colhead{$\\log L_{\\rm bol}\\,(\\ergs)$} & \\colhead{N}}\n\\startdata\nB15b; B16 & Compton-thick & Hard X-ray & $N_{\\rm H}$ & 0.001--0.051 & 42.5--45.8 & 16\\\\\nR17c & {\\it Swift}\/BAT & Hard X-ray & $N_{\\rm H}$, $E(B-V)_{\\rm nl}$ & 0.00--0.27 & 40.8--46.9 & 366\\\\\nY09; M17; V18b & Type 1 & Optical & $N_{\\rm H}$, $E(B-V)_{\\rm cont}$ & 0.15--4.26 & 44.8--48.7 & 174\\\\\nJ13 & Type 1 & Optical & $E(B-V)_{\\rm cont}$ & 0.14--4.13 & 45.7--48.2 & 14,531\\\\ \nL16; L17; G17& Red & Optical--IR & $N_{\\rm H}$, $E(B-V)_{\\rm cont}$ & 0.14--2.48 & 45.2--46.9 & 12\\\\ \nU12; K18 & Red & Optical--IR & $E(B-V)_{\\rm cont}$ & 0.29--0.96 & 45.8--47.1 & 23\\\\ \nG18 & Extremely red & Optical--IR & 
$N_{\\rm H}$ & 2.32 & 47.5 & 1\\\\ \nP19 & Extremely red & Optical--IR & $E(B-V)_{\\rm cont}$ & 2.24--2.95 & 46.7--47.8 & 28\\\\ \nL20 & Heavily reddened & Optical--IR & $N_{\\rm H}$ & 2.09--2.66 & 45.7--46.9 & 7\\\\ \nB12; B13; B15a; T19 & Heavily reddened & Optical--IR & $E(B-V)_{\\rm cont}$ & 1.46--2.66 & 46.0--48.6 & 51\\\\\nC16; T20 & DOG & Optical--IR & $N_{\\rm H}$ & 1.22--5.22 & 43.8--48.2 & 15\\\\\nS14; A16; R17a; V18a; Z18; A20 & Hot DOG & IR & $N_{\\rm H}$ & 1.01--4.60 & 46.2--48.1 & 9\\\\ \nA15 & Hot DOG & IR & $E(B-V)_{\\rm cont}$ & 0.29--4.59 & 45.4--48.9 & 129\\\\\nA08 & ULIRG, SMG & IR\/submm & $N_{\\rm H}$ & 0.04--2.05 & 44.6--45.6 & 3\\\\\n\\enddata\n\\tablecomments{AGN samples used in this work. We refer to each reference as having the objects with values or constraints on obscuration or accretion rate, while the original catalog paper is provided in the text (\\S2). Here, $N$ denotes the number of objects for each set of references having obscuration and accretion rate information used in \\S5, so they could be smaller than the number of sources from the references. The subscripts ``cont'' and ``nl'' under $E(B-V)$ values are derived using the continuum SED and narow-line ratios, respectively. 
The abbreviated references are \\citet{Bri15} (B15b); \\citet{Bri16} (B16); \\citet{Ric17c} (R17c); \\citet{You09} (Y09); \\citet{Mar17} (M17); \\citet{Vie18} (V18b); \\citet{Jun13} (J13); \\citet{Lam16,Lam17} (L16, L17); \\citet{Gli17a} (G17); \\citet{Urr12} (U12); \\citet{Kim18} (K18); \\citet{Gou18} (G18); \\citet{Per19} (P19); \\citet{Lan20} (L20); \\citet{Ban12,Ban13,Ban15} (B12, B13, B15a); \\citet{Tem19} (T19); \\citet{Cor16} (C16); \\citet{Tob20} (T20); \\citet{Ste14} (S14); \\citet{Ass16} (A16); \\citet{Ric17a} (R17a); \\citet{Vit18} (V18a); \\citet{Zap18} (Z18); \\citet{Ass20} (A20); \\citet{Ass15} (A15); \\citet{Ale08} (A08).} \n\\end{deluxetable*} \n\\endgroup \n\nThe latest studies of obscured quasars with large $E(B-V)$ values, through careful analysis to quantify and minimize the effect of obscuration, have reported near-Eddington to Eddington-limited accretion ($f_{\\rm Edd}\\sim$\\,0.1--1, e.g., \\citealt{Ale08}; \\citealt{Urr12}; \\citealt{Kim15}; \\citealt{Ass20}; \\citealt{Jun20}). Furthermore, \\citet{Gli17b} and \\citet{Lan20} find many obscured quasars with $f_{\\rm Edd, dust}>1$ at high $N_{\\rm H}$. These observations suggest that radiation pressure on dusty gas is effective, but is potentially less effective for obscured, luminous quasars since the length of time that luminous quasars are active is shorter than the length of time that less-luminous AGN are active (e.g., \\citealt{Hop05, Hop06}). Alternatively, luminous, obscured quasars are thought to be observed in a short phase in which they are blowing out the material through outflows stronger at higher luminosities (e.g., \\citealt{Lam17}; \\citealt{Per19}; \\citealt{Tem19}; \\citealt{Jun20}), perhaps requiring a different nuclear or galactic environment from less-luminous, obscured AGNs. 
Hence, there is growing interest in which AGN property drives radiation-pressure feedback, and on which temporal and spatial scales it is effective.\n\nIn this work, we attempt to constrain the $N_{\\rm H}$--$f_{\\rm Edd}$ and $E(B-V)$--$f_{\\rm Edd}$ planes for quasars from multiwavelength AGN samples (\\S2) and through a consistent method to estimate $N_{\\rm H}$, $E(B-V)$ (\\S3), and $f_{\\rm Edd}$ values (\\S4). We present (\\S5) and discuss (\\S6) the $N_{\\rm H}$--$f_{\\rm Edd}$ and $E(B-V)$--$f_{\\rm Edd}$ distributions for quasars in terms of various feedback mechanisms. Throughout, including the luminosities from the literature, we use a flat $\\Lambda$CDM cosmology with $H_{0}=\\mathrm{70\\,km\\,s^{-1}\\,Mpc^{-1}}$, $\\Omega_{m}=0.3$, and $\\Omega_{\\Lambda}=0.7$. \n\n\n\\section{The sample}\nProbing the distribution of $N_{\\rm H}$--$f_{\\rm Edd}$ and $E(B-V)$--$f_{\\rm Edd}$ values from a statistically complete AGN sample is complicated for several reasons. AGNs radiate across almost the entire electromagnetic spectrum, but show a wide range of spectral energy distributions (SEDs) due to physical processes governing the radiation, host galaxy contamination, and obscuration on various scales around the accreting BHs (e.g., \\citealt{Lan17}; \\citealt{Hic18}). Therefore, we found it beneficial to compile quasar samples selected at various wavelengths over a wide range of luminosity and redshift. Still, we chose to add only the data from the literature that meaningfully increase the sample size for a given wavelength selection.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=.95]{f2.eps}\n\\caption{Bolometric luminosity ($L_{\\rm bol}$) as a function of redshift (left: $z<0.3$, right: $z>0.3$) for AGNs having $f_{\\rm Edd}$ and measurements of either $N_{\\rm H}$ (filled symbols) or $E(B-V)$ (open symbols, plotted together if they have both $N_{\\rm H}$ and $E(B-V)_{\\rm cont}$). 
The samples plotted are ULIRGs\/SMGs (A08, red circles), Hot DOGs (S14; A15; A16; R17a; V18a; Z18; A20, red stars), optical--IR red AGN samples named red quasars (U12; L16, L17; G17; K18, yellow circles), extremely red quasars (P19, yellow squares), heavily reddened quasars (B12; B13; B15a; T19; L20, yellow stars), Compton-thick AGNs (B15b; B16, black stars), $\\it Swift$\/BAT AGNs (R17c, black circles), and optically selected SDSS type 1 quasars (J13, blue squares for $z<0.3$, density plot for $z>0.3$ (due to a large sample), matched with Y09 in blue circles; WISSH quasars from M17 and V18b as blue stars). The $L_{\\rm bol} \\ge 10^{45.7}\\ergs$ boundary is marked (dashed line), and a constant 14--195\\,keV flux of $10^{-11} \\ergs \\rm cm^{-2}$ with bolometric correction applied (\\S4), roughly denotes the detection limit of {\\it Swift}\/BAT X-ray data (dotted line, drawn up to $z=1$).}\n\\end{figure*}\n\nIn Figure 2 we plot the AGN samples from X-ray, optically blue, optical--IR red, and IR\/submillimeter-bright populations, also summarized in Table 1. At $z<0.3$, the $\\it Swift$\/BAT AGN in R17c (with $N_{\\rm H}$ from \\citealt{Ric17d}; $E(B-V)$ from \\citealt{Kos17}) has the advantage of minimal obscuration bias from the hard X-ray selection and covers a wide range of luminosities, reaching down to low-luminosity AGNs and up to quasar luminosities ($10^{41}\\lesssim L_{\\rm bol} \\lesssim 10^{47} \\ergs$). However, the $\\it Swift$\/BAT survey lacks the sensitivity to probe the distant or luminous AGN populations marked in Figure 2. 
We complemented the highest obscurations using Compton-thick ($N_{\\rm H} \\gtrsim 10^{24}\\, \\rm cm^{-2}$) AGNs observed with $\\it NuSTAR$ (B15b; B16), and the higher luminosities\/redshifts from optical Sloan Digital Sky Survey (SDSS) quasars (e.g., \\citealt{Sch10}; $N_{\\rm H}$ from Y09; $E(B-V)$ from J13), red quasars (L16, originally from \\citealt{Gli12}; K18, averaged between \\citealt{Gli07} and \\citealt{Urr09}), and quasars from ULIRGs (broad-line\\footnote{Throughout, we classify narrow-line AGNs to be type $\\ge1.8$ (weak broad H$\\alpha$ and H$\\beta$, [\\ion{O}{3}]\/H$\\beta>3$, \\citealt{Win92}), and broad-line AGNs to be type $\\le1.5$ (comparable broad-to-narrow H$\\beta$ and [\\ion{O}{3}]\/H$\\beta<3$). When the AGN types are not specified, we follow the visual classifications from the literature.} ULIRGs in A08, with $N_{\\rm H}$ from \\citealt{Sev01}). \n\nWe further include $z>0.3$ quasars to search for luminous quasars, adding type 1 quasars from the SDSS (i.e., WISSH quasars, $N_{\\rm H}$ from M17 and $E(B-V)$ from V18b), and a variety of quasars with red optical-to-infrared colors, that is, heavily reddened quasars (B12; B13, B15a; T19), red quasars (U12; G17; L17; K18), extremely red quasars (P19 with $N_{\\rm H}$ from G18 and $E(B-V)$ from \\citealt{Ham17}), and DOGs (C16; T20). Hot DOGs (S14; A15\\footnote{This sample is being updated by P. R. M. Eisenhardt et al. (in preparation), but we simply refer to the numbers from A15 at this time.}; A16; R17a; V18a; Z18; A20), and broad-line (AGN-like) SMGs (A08 with $N_{\\rm H}$ from \\citealt{Ale05}) were also included, adding part of some samples at $z<0.3$ that extend to $z>0.3$ (A08; Y09; J13; L16; R17c; K18). 
\n\nDuplication among the samples was found in Compton-thick AGNs (B15b; B16), heavily reddened quasars (B12; T19) and red quasars (U12; K18), where we used the most recent values, except for those between B15b\/B16 and R17c, where we kept both the $N_{\\rm H}$ and $f_{\\rm Edd}$ estimates as they were based on multiple X-ray observations. The samples based on follow-up studies of SDSS quasars were separated into those with $N_{\\rm H}$ (Y09; M17) and those with $E(B-V)$ (J13; V18b), where the $f_{\\rm Edd}$ values from signal-to-noise ratio (S\/N) $>$20 spectra in \\citet{She11} and \\citet{Par12} were added. We removed beamed sources (R17c, flagged by \\citealt{Kos17} using the blazar catalog from \\citealt{Mas15}) for reliable $N_{\\rm H}$ and $f_{\\rm Edd}$ values (but see also, e.g., \\citealt{Bae19}, for estimation of $f_{\\rm Edd}$ in radio-bright AGNs). We used line widths corrected for instrumental resolution in estimating $M_{\\rm BH}$ (\\S4).\n\n\n\\section{Gas and Dust Obscuration}\nWe compiled $N_{\\rm H}$ and $E(B-V)$ values, representing gas and dust obscuration, for the AGN samples. For $N_{\\rm H}$, we used the line-of-sight X-ray obscuration from sources with enough X-ray counts to model the spectra ($\\gtrsim$40--60, defined by the respective references). Exceptions are obviously large absorption ($N_{\\rm H}\\ge10^{24}\\, \\rm cm^{-2}$) constraints in S14, V18a, and A20, where the exposure times are longer than 20 ks but have a relatively smaller number of X-ray counts due to Compton-thick absorption. We add these values to our analysis. The choice of models to fit or estimate the X-ray obscuration varies in the literature: \\citet{Mur09} (S14; L16; G17; V18a; Z18), \\citet{Bri11} (B15b; A16; B16; C16; A20), hardness-ratio-based $N_{\\rm H}$ conversion (L17), (absorbed) power-law fit (Y09; C16; M17; G18; L20), and a combination of models (A08; R17c; T20). 
Still, when the $N_{\\rm H}$ values are compared between various models (e.g., B15b; B16; G17; Z18; L20), they are mostly consistent within the uncertainties (but see also B15b and \\citealt{Liu15}, for the limitations of the models at Compton-thick column densities).\n\nFor $E(B-V)$, we used the UV\/optical--IR continuum SED-based $E(B-V)_{\\rm cont}$\\footnote{Throughout, $E(B-V)_{\\rm cont\/bl\/nl}$ are those derived from the continuum SED and broad\/narrow-line ratios, respectively, and the $E(B-V)_{\\rm nl}$ values are only mentioned as lower limits to $E(B-V)_{\\rm cont}$. We used the Milky Way extinction curve with total-to-selective extinction of 3.1 when transforming extinction to $E(B-V)$.} from the literature. Lower limits in $E(B-V)$ were given to the P19 data from \\citet{Ham17} because of likely underestimation using a narrow range of wavelengths to determine $E(B-V)$. For optical quasars, we determined the rest-frame $>0.3\\mu$m power-law continuum slope ${\\alpha}$ following $F_{\\nu} \\propto \\nu^{\\alpha}$, fit to the photometric SED. We assumed an intrinsic slope of ${\\alpha}=0.1\\pm0.2$ from the bluest (hot dust-poor, $\\sim3\\sigma$ outliers) quasars in J13, consistent with accretion disk models and polarized observations of quasar SEDs ($\\alpha \\approx 1\/3$, e.g., \\citealt{Sha73}; \\citealt{Kis08}). We limited the sample to quasars with at least three SDSS optical or UKIDSS near-IR \\citep{Law07} photometric detections at rest-frame $0.3-1\\mu$m and rest-frame near-IR detections out to at least 2.3$\\mu$m to decompose the SED into the power-law continuum and dust emission (see J13 for details). We converted $\\alpha$ into $E(B-V)$ by reddening the intrinsic slope using a Milky Way extinction curve at $0.3-1\\mu$m to match the observed value of $\\alpha$, while fixing $E(B-V)=0$ when $\\alpha>0.1$. 
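As a rough numerical sketch of this slope-to-reddening conversion (not the actual SED-fitting code of J13): the extinction-curve values k = A/E(B-V) at 0.3 and 1 micron below are assumed, approximate Milky Way (R_V = 3.1) numbers, and a two-point slope replaces the full photometric fit.

```python
import math

# Assumed, approximate Milky Way (R_V = 3.1) extinction-curve values,
# k(lambda) = A(lambda)/E(B-V) in mag, at the endpoints of the
# rest-frame 0.3-1 micron fitting window.
K_03UM, K_1UM = 5.1, 1.5
ALPHA_INT = 0.1                   # assumed intrinsic slope, F_nu ~ nu^alpha
DLOG_NU = math.log10(1.0 / 0.3)   # log nu(0.3um) - log nu(1um)

def ebv_from_alpha(alpha_obs):
    """Two-point E(B-V) estimate: the reddening that steepens the
    intrinsic slope ALPHA_INT to the observed slope alpha_obs."""
    ebv = (ALPHA_INT - alpha_obs) * DLOG_NU / (0.4 * (K_03UM - K_1UM))
    return max(ebv, 0.0)  # fix E(B-V) = 0 for slopes bluer than intrinsic
```

Under these assumed curve values, an observed slope of alpha = -1 corresponds to E(B-V) of roughly 0.4 mag.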
We checked if the $E(B-V)$ estimates from J13 are consistent with the literature by comparing the values cross-matched with 17 sources in V18b and 277 sources in C. Carroll et al. (submitted) at $E(B-V)<0.5$ mag. The J13 $E(B-V)$ values have a median offset and scatter of $0.05\\pm0.05$ and $0.04\\pm0.06$ mag, respectively, consistent within the uncertainties. We adopt the J13 values for the cross-matched sources (Y09; M17; V18b).\n\nThe $E(B-V)_{\\rm cont}$ values can suffer from host galaxy contamination in the rest-frame optical\/near-IR. We limited the samples with the SEDs decomposed into an AGN and a host galaxy (U12; A15; L17) to $L_{\\rm bol} \\ge 10^{45.2} \\ergs$ for the decomposition to contain a sufficient AGN contribution, and the samples without an SED decomposition (the remaining samples with $E(B-V)_{\\rm cont}$ in Table 1) to $L_{\\rm bol} \\ge 10^{45.7} \\ergs$, to minimize host galaxy contamination. Above the luminosity limits, the average host contamination at 5100\\AA\\ drops below 50\\% and 10\\%, respectively, for type 1 quasars \\citep{She11} and is consistent with the growing AGN contribution to the observed SEDs for red quasars at higher $L_{\\rm bol}$ (L17). The $L_{\\rm bol} \\ge 10^{45.7} \\ergs$ limit corresponds to $L_{\\rm bol} =10^{12.1} L_{\\odot}$, selecting ULIRG luminosities for IR-bright AGNs (e.g., \\citealt{Fan16}; \\citealt{Tob17}). The majority of the hard X-ray selected AGNs from R17c are less luminous than the $L_{\\rm bol}$ limits for $E(B-V)_{\\rm cont}$, but instead have robust measurements of $N_{\\rm H}$ from their hard X-ray spectra.\n\nThe nuclear dust-to-gas ratio traced by the $\\log E(B-V)_{\\rm cont\/bl}\/N_{\\rm H}$ (mag cm$^{2}$) values in Figure 3 is constant if the gas and dust obscuration are proportional (e.g., \\citealt{Usm14}). 
The values are overall smaller than the Galactic value ($-$21.8, e.g., \\citealt{Boh78}),\\footnote{One of the reasons for the offset may be that the majority of the AGNs have an excess of dust-free gas within the sublimation radius (e.g., \\citealt{Ris02, Ris07}; \\citealt{Mai10}; \\citealt{Bur16}; \\citealt{Ich19}), but we focus here on the overall value when including the more luminous AGNs.} with reported average values ranging between $-$22.8 \\citep{Mai01} and $-$22.3 (L20). L20 find relatively higher $E(B-V)\/N_{\\rm H}$ values for a sample of heavily reddened broad-line quasars at high luminosity, but there are similarly luminous quasars with relatively smaller $E(B-V)\/N_{\\rm H}$ values (i.e., the Hot DOGs or some optical--IR red quasars in Figure 3). Apart from the type 1 AGNs where the large scatter in $E(B-V)\/N_{\\rm H}$ could in part arise from the uncertainty constraining the lowest values in either quantity, we find the mean and scatter of $\\log E(B-V)\/N_{\\rm H} \\,(\\rm mag\\,\\,cm^{2})=-22.77 \\pm 0.51$ (observed) or $\\pm 0.41$\\footnote{We refer to the intrinsic scatter of the quantity $x=\\log E(B-V)\/N_{\\rm H}$, $\\sigma_{\\rm int}$, as the observed scatter with measurement error $\\Delta x$ subtracted in quadrature, that is, $\\sigma_{\\rm int}^{2}=\\Sigma_{i=1}^{n}\\{(x_{i}+22.77)^{2}-\\Delta x_{i}^{2}\\}\/(n-1)$. The errors on $E(B-V)$ values are missing for the L16 and G17 samples, but the intrinsic scatter of $\\log E(B-V)\/N_{\\rm H}$ decreases by only 0.01 dex if we assign the mean error of $\\Delta E(B-V)=0.12$ mag from the L17 sample used here.} (intrinsic) from 31 obscured AGNs (type 2 AGNs, optical--IR red quasars, and Hot DOGs) without upper\/lower limits in Figure 3, spanning absorption-corrected $L_{\\rm 2-10 keV}=10^{42.4-45.6}\\ergs$. 
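The intrinsic-scatter definition in the footnote (observed scatter about the mean with measurement errors subtracted in quadrature) can be sketched as follows; the mean of -22.77 is the value quoted above, and the sample arrays are placeholders.

```python
import math

def intrinsic_scatter(x, dx, mean=-22.77):
    """Intrinsic scatter of x = log E(B-V)/N_H about `mean`:
    observed variance minus the measurement errors `dx` in quadrature,
    following the footnote definition in the text."""
    n = len(x)
    s2 = sum((xi - mean) ** 2 - dxi ** 2 for xi, dxi in zip(x, dx)) / (n - 1)
    return math.sqrt(max(s2, 0.0))  # clip in case errors exceed the spread
```

With zero measurement errors this reduces to the ordinary sample scatter about the fixed mean.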
The ratios are close to the \\citet{Mai01} value, but are highly scattered for any combination of AGN type over the luminosity probed, complicating a simple correspondence between dust and gas. We thus refer to both $E(B-V)$ and $N_{\\rm H}$ when selecting AGNs with dusty gas, using a conversion of $\\log E(B-V)\/N_{\\rm H}=-$22.8.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.95]{f3.eps}\n\\caption{Log ratio between $E(B-V)_{\\rm cont\/bl}$ and $N_{\\rm H}$, plotted against intrinsic 2--10 keV luminosity. We plot only the data with $N_{\\rm H}$ uncertainty values based on sufficient X-ray counts. The data come from \\citet{Mai01} (marked M01, blue and red squares for type 1--1.5 and 1.8--2 classifications, respectively), red quasars (L16; G17; L17, yellow circles), heavily reddened quasars (L20, yellow stars), and Hot DOGs (S14; A16; R17a; Z18; A20, red stars). We show Galactic, L20, and \\citet{Mai01} $\\log E(B-V)\/N_{\\rm H}$ values and the range of their applicable luminosities for obscured AGNs (dashed line), and likewise from this work (solid line).}\n\\end{figure}\n\nThe $N_{\\rm H}$ and $E(B-V)$ values are thought to be nuclear obscuration close to the AGN center, but as the AGN geometry consists of an extended, kpc order narrow line region outside the central dusty structure (e.g., \\citealt{Kan18}; \\citealt{Min19}), we expect the narrow-line-based $E(B-V)_{\\rm nl}$ values to be smaller than the $E(B-V)_{\\rm cont\/bl}$ values measured closer to the nucleus (e.g., \\citealt{Zak03}; \\citealt{Zak05}; \\citealt{Gre14}; \\citealt{Jun20}). For the R17c sample providing narrow Balmer decrements, we find that the $E(B-V)_{\\rm nl}\/N_{\\rm H}$ values for obscured (type 1.8--2.0) AGNs are about an order of magnitude smaller than the $E(B-V)_{\\rm cont}\/N_{\\rm H}$ values in Figure 3, although showing an even larger scatter. This demonstrates that $E(B-V)_{\\rm nl}$ is simply much lower than $E(B-V)_{\\rm cont}$. 
Furthermore, the dust-to-gas ratios may decrease when using the global $N_{\\rm H}$, as it is larger than the nuclear line-of-sight $N_{\\rm H}$, such as for red quasars in L16, implying extended gas. The extended obscuration in obscured quasars will be considered to assess the effect of radiation pressure (\\S6.2), but for a better comparison of nuclear dust and gas obscuration, we remove $E(B-V)_{\\rm nl}$ values from further analysis, and we use $E(B-V)_{\\rm cont}$ as the fiducial estimate of $E(B-V)$ hereafter.\n\n\n\\section{Eddington Ratio}\nEstimating $f_{\\rm Edd}$ throughout the samples relies on several bolometric correction methods and black hole scaling relations. For the bolometric correction, we primarily relied on the hard X-ray (2--10\\,keV) luminosity to minimize the absorption correction for the X-ray samples in Table 1. The 2--10 keV intrinsic luminosities are based on a simple conversion of the 14--195 keV luminosities from R17c using a typical X-ray spectral slope; namely, $L_{\\rm 2-10 keV} = 2.67 L_{\\rm 14-195 keV}$ \\citep{Rig09}. The absorption-corrected X-ray-to-bolometric correction depends on $L_{\\rm bol}$ or $f_{\\rm Edd}$ (e.g., \\citealt{Marc04}; \\citealt{Vas07}; \\citealt{Lus12}). We used the \\citet{Marc04} bolometric correction as a function of $L_{\\rm bol}$, as the dynamic range of $L_{\\rm bol}$ ($\\sim$3--4 dex) is wider than that of $f_{\\rm Edd}$ ($\\sim$2 dex). When the X-ray luminosity was absent, we adopted the monochromatic bolometric correction from IR or extinction-corrected UV\/optical continuum or line luminosities, which are relatively insensitive to $L_{\\rm bol}$ (e.g., \\citealt{Ric06}; \\citealt{Lus12}). 
We used the corrections from $L_{\\rm 1350}$, $L_{\\rm 3000}$, and $L_{\\rm 5100}$\\footnote{Throughout, subscripts of $L$ indicate monochromatic continuum luminosity at that wavelength, measured in units of \\AA.} (3.81, 5.15, and 9.26, respectively, \\citealt{She11}) for optical quasars (J13; V18b) and obscured AGNs with extinction-corrected continuum luminosities (B12; B13; A15; B15a; objects in L17 without $N_{\\rm H}$; T19; A20), $L_{\\rm P\\beta}$ ($\\log L_{\\rm bol}\/10^{44} \\ergs = 1.29 +0.969 \\log L_{\\rm P\\beta}\/10^{42} \\ergs$, \\citealt{Kim15}) for K18, $L_{\\rm 3.4\\mu m}$ (8, \\citealt{Ham17}) for P19, $L_{\\rm 15\\mu m}$ (9, \\citealt{Ric06}) for U12, with each correction having systematic uncertainties of a few tens of percent up to a factor of few (e.g., \\citealt{Hec04}; \\citealt{Ric06}; \\citealt{Lus12}).\n\nWe estimated $M_{\\rm BH}$ mostly through stellar absorption or broad emission lines, using a mutually consistent methodology. The mass constant of single-epoch estimators for AGNs ($f$-factor), is determined assuming that the reverberation mapped AGNs lie on the $M_{\\rm BH}$--$\\sigma_{*}$ relation for inactive galaxies.\nWe thus use the same $M_{\\rm BH}$--$\\sigma_{*}$ relation (e.g., \\citealt{Woo15}), \n\\begin{eqnarray}\\begin{aligned}\n\\log &\\Big(\\frac{M_{\\rm BH}}{M_{\\odot}}\\Big)=(8.34\\pm0.05)\\\\\n&+(5.04\\pm0.28)\\log \\Big(\\frac{\\sigma_{*}}{200\\kms}\\Big),\n\\end{aligned}\\end{eqnarray}\nto estimate $\\sigma_{*}$-based $M_{\\rm BH}$ values for narrow-line AGNs where the host absorption lines are better seen, and to derive the $f$-factor in the broad FWHM-based single-epoch $M_{\\rm BH}$ estimators for broad-line AGNs where AGN emission dominates over the host galaxy. 
The $M_{\\rm BH}$($\\sigma_{*}$) estimates based on other $M_{\\rm BH}$--$\\sigma_{*}$ relations with a shallower slope, e.g., \\citet{Kor13}, are systematically offset from Equation (2) by 0.35 and $-$0.05 dex at $\\sigma_{*}=100$ and 400\\kms, respectively.\n\n\\begin{deluxetable*}{ccccc}\n\\tablecolumns{5}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{AGN $M_{\\rm BH}$ estimators}\n\\tablehead{\n\\colhead{Type} & \\colhead{$a$} & \\colhead{$b$} & \\colhead{$c$} & \\colhead{$d$}}\n\\startdata\n$M_{\\rm BH}$($L_{1350}$, FWHM$_{\\rm C\\,IV}$) & 6.99$\\pm$0.16 & 0.547$\\pm$0.037 & 2.11$\\pm$0.11 & 0\\\\\n$M_{\\rm BH}$($L_{1350}$, FWHM$_{\\rm C\\,IV}$, $\\Delta \\rm v_{C\\,IV}$) & 6.62$\\pm$0.16 & 0.547$\\pm$0.037 & 2.11$\\pm$0.11 & 0.335$\\pm$0.022\\\\\n$M_{\\rm BH}$($L_{3000}$, FWHM$_{\\rm Mg\\,II}$) & 6.57$\\pm$0.13 & 0.548$\\pm$0.035 & 2.45$\\pm$0.06 & 0\\\\\n$M_{\\rm BH}$($L_{5100}$, FWHM$_{\\rm H\\beta}$) & 6.88$\\pm$0.12 & 0.533$\\pm$0.034 & 2 & 0\\\\ \n$M_{\\rm BH}$($L_{5100}$, FWHM$_{\\rm H\\alpha}$) & 6.99$\\pm$0.12 & 0.533$\\pm$0.034 & 2.12$\\pm$0.03 & 0\\\\\n$M_{\\rm BH}$($L_{\\rm P\\beta}$, FWHM$_{\\rm P\\beta}$) & 7.24$\\pm$0.16 & 0.45$\\pm$0.03 & 1.69$\\pm$0.16 & 0\\\\ \n$M_{\\rm BH}$($L_{\\rm P\\alpha}$, FWHM$_{\\rm P\\alpha}$) & 7.20$\\pm$0.16 & 0.43$\\pm$0.03 & 1.92$\\pm$0.18 & 0\\\\\n\\enddata\n\\tablecomments{$L$ is the continuum or broad-line luminosity, FWHM is the full width at half maximum of the best-fit broad-line model, and $\\Delta \\rm v_{CIV}$ is the broad \\ion{C}{4} line offset to the systemic redshift (\\citealt{She11} in J13, negative for blueshifts). 
$a$, $b$, $c$, $d$ are the coefficients in Equation (3).} \n\\end{deluxetable*} \n\nThe single-epoch $M_{\\rm BH} (L, \\rm FWHM)$ estimators were empirically calibrated between H$\\beta$ and H$\\alpha$, \\ion{Mg}{2}, or \\ion{C}{4} (\\citealt{Jun15} using the \\citealt{Ben13} $R_{\\rm BLR}$--$L$ relation) with \\ion{C}{4} blueshift correction when broad-line shifts were available (\\citealt{Jun17}), or were calibrated between hydrogen Balmer and Paschen series (\\citealt{Kim10}\\footnote{Using our adopted cosmology, we find that the $R_{\\rm BLR}$ values from \\citet{Ben13} are higher than from \\citet{Ben09} by 0.00--0.03 dex for the luminosity range used to derive Paschen line $M_{\\rm BH}$ values ($L_{5100}=10^{43.5-46}\\ergs$, A08; K18). K18 also note that using a single Gaussian to fit the broad Paschen lines will underestimate the $M_{\\rm BH}$ values by 0.06--0.07 dex, but these amounts are negligible compared to the significance of the results (\\S5).}, using the \\citealt{Ben09} $R_{\\rm BLR}$--$L$ relation), over a wide range of redshift and luminosity. This approach reduces the systematic offset from the choice of emission line by up to an order of magnitude\\footnote{We note that a nonlinear relation between $\\sigma$ and FWHM values could further result in positive\/negative biases in the FWHM-based $M_{\\rm BH}$ estimate at notably high and low FWHM values (e.g., \\citealt{Pet04}; \\citealt{Col06}), as well as whether to construct the UV or IR mass estimators to match the $M_{\\rm BH}$ values to the Balmer line based, or to match the UV or IR broad-line widths and the luminosities to the optical values separately. Our choice of $M_{\\rm BH}$ estimators has its own merits and limitations, and we test the systematic uncertainty of $M_{\\rm BH}$ in \\S5.} at extreme $M_{\\rm BH}$ values \\citep{Jun15}, or at extreme \\ion{C}{4} blueshifts \\citep{Jun17}. 
The estimators were updated using a common $f$-factor and uncertainty of $1.12 \\pm 0.31$ for the FWHM-based $M_{\\rm BH}$ \\citep{Woo15}, as shown below:\n\\begin{eqnarray}\\begin{aligned}\n\\log \\Big(&\\frac{M_{\\rm BH}}{M_{\\odot}}\\Big)=a+\\log\\Big(\\frac{f}{1.12}\\Big)+b\\,\\log\\Big(\\frac{L}{\\rm 10^{44} \\ergs}\\Big)\\\\\n&+c\\,\\log\\Big(\\frac{\\mathrm{FWHM}}{\\rm10^{3}\\kms}\\Big)+d\\,\\log\\Big(\\frac{\\Delta \\mathrm{v_{C\\,IV}}}{\\rm10^{3}\\kms}\\Big).\n\\end{aligned}\\end{eqnarray}\nThe set of $(a, b, c, d)$ values for each combination of $M_{\\rm BH}$($L$, FWHM, $\\Delta \\rm v$) is given in Table 2. For broad-line AGNs with X-ray observations and single-epoch UV\/optical $M_{\\rm BH}$ estimates, we converted the X-ray-based $L_{\\rm bol}$ into $L_{1350}$, $L_{3000}$, $L_{5100}$ using the aforementioned bolometric corrections, to minimize host galaxy contamination in 1350--5100\\AA. We removed sources with Balmer line widths similar to [\\ion{O}{3}] (A08) to distinguish broad lines from broadening by ionized gas outflows. We also limited the FWHM values to $\\le$10,000\\kms, as larger values (e.g., 4\\%\\ of the J13 sample) are potentially affected by rotating accretion disks and show double-peaked lines (e.g., \\citealt{Che89}; \\citealt{Era94}; Table 4 in \\citealt{Jun17}). Meanwhile, R17c removed single-epoch $M_{\\rm BH}$ estimates for $N_{\\rm H}\\ge10^{22}\\, \\rm cm^{-2}$ AGNs as the emission line profiles could be modified by obscuration or are dominated by the narrow component \\citep{Kos17}. However, as we already removed type $\\ge$1.8 sources when estimating $M_{\\rm BH}$ for broad-line AGNs, we keep the $N_{\\rm H}\\ge10^{22}\\, \\rm cm^{-2}$ sources. These obscured type $\\le$1.5 AGNs with $M_{\\rm BH}$(FWHM) in R17c do not significantly change the distribution of $N_{\\rm H}$--$f_{\\rm Edd}$ with respect to using $M_{\\rm BH}$($\\sigma_{*}$) values. 
This hints that obscuration does not significantly bias the single-epoch $M_{\\rm BH}$ estimates for broad-line AGNs, also consistent with the independence of broad \\ion{C}{4}-to-H$\\beta$ line width ratios with respect to the continuum slope for type 1 quasars (e.g., \\citealt{Jun17}). We thus carefully selected only the type $\\le$1.5 sources when using rest-frame UV--optical spectra to estimate $M_{\\rm BH}$(FWHM) for AGNs.\n\nAmong single-epoch $M_{\\rm BH}$ estimates with multiple broad-line detections, we adopted the estimators in the order of decreasing rest wavelength, while direct dynamical (B15b; B16; R17c) or reverberation-mapped (\\citealt{Ben15} in R17c) $M_{\\rm BH}$ values were adopted over other methods. Hot DOGs, which are heavily obscured AGNs typically showing strong, narrow lines, often display signatures of narrow-line outflows instead of ordinary broad emission lines (e.g., \\citealt{Wu18}; \\citealt{Jun20}). Unless the sources are thought to show scattered or leaked light from the broad-line region (A16; A20), we utilized the SED fit from A15 when deriving the $M_{\\rm BH}$ constraints. Applying their maximal stellar mass ($M_{*}$) estimates from the SED fit, we gave upper limits to the $M_{\\rm BH}$ values using the $M_{\\rm BH}$--$M_{*}$ relation. The $M_{\\rm BH}\/M_{*}$ values are thought to evolve less with redshift ($\\propto (1+z)^{\\gamma}$, $\\gamma\\lesssim1$) than $M_{\\rm BH}\/M_{\\rm bulge}$ (e.g., \\citealt{Ben11}; \\citealt{Din20}; \\citealt{Suh20}). We adopt $M_{\\rm BH}\/M_{*}\\sim0.003$ from the $z\\sim$1--2 AGNs in \\citet{Din20} and \\citet{Suh20}. The same relation was used to estimate $M_{\\rm BH}$ for the DOGs in C16 and T20. 
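Equation (3) is straightforward to evaluate; the sketch below uses the L_5100/FWHM(H-beta) coefficients from Table 2 and the common f-factor of 1.12 as defaults (a simplified illustration, not the calibration machinery of the references; the C IV shift term only enters when d is nonzero, and the sign convention for blueshifts follows the calibration papers).

```python
import math

def log_mbh(L, fwhm, a=6.88, b=0.533, c=2.0, d=0.0, dv_civ=1.0, f=1.12):
    """Single-epoch log(M_BH/M_sun) following Equation (3).
    Defaults are the L5100/FWHM(Hbeta) row of Table 2 (d = 0, so the
    C IV shift term vanishes). L in erg/s; fwhm, dv_civ in km/s."""
    return (a + math.log10(f / 1.12)
            + b * math.log10(L / 1e44)
            + c * math.log10(fwhm / 1e3)
            + d * math.log10(dv_civ / 1e3))
```

For example, L_5100 = 10^44 erg/s and FWHM(H-beta) = 3000 km/s give log(M_BH/M_sun) of about 7.83.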
\n\nAlthough this analysis attempted to consistently estimate $M_{\\rm BH}$ for the various samples, systematic uncertainties of a factor of several are expected from the intrinsic scatter in the BH--host mass scaling relations (e.g., \\citealt{Kor13}) and the $R_{\\rm BLR}$--$L$ relation (e.g., \\citealt{Ben13}; \\citealt{Du14}). Overall, the compiled $L_{\\rm bol}$ and $M_{\\rm BH}$ estimates each have systematic uncertainties of up to a factor of several or more, and although the AGN $M_{\\rm BH}$ estimators include the $\\sim L^{0.5}$ dependence, reducing uncertainty from the bolometric correction going into $f_{\\rm Edd}\\propto L_{\\rm bol}\/M_{\\rm BH}$, we still expect systematic uncertainties of a factor of several in $f_{\\rm Edd}$. We thus will interpret only the group behavior of each AGN sample within the uncertainties in $f_{\\rm Edd}$. \n\n\n\\section{Results}\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.95]{f4.eps}\n\\caption{The $N_{\\rm H}$--$f_{\\rm Edd}$ plane showing AGNs selected at different wavelengths. Horizontal lines separate obscured\/unobscured AGNs at $N_{\\rm H}=10^{22}\\, \\rm cm^{-2}$, and the effective Eddington ratio curves are plotted as solid\/dashed lines with respect to $N_{\\rm H}=10^{22}\\, \\rm cm^{-2}$. The symbols and color format follow that of Figure 2, except that the symbols are now filled when $L_{\\rm bol} \\ge 10^{45.7}\\ergs$ and open otherwise. Data outside the plotted region are shown along the boundary.}\n\\end{figure*}\nIn Figure 4 we plot the distributions of $N_{\\rm H}$--$f_{\\rm Edd}$ and $E(B-V)$--$f_{\\rm Edd}$ for the collection of AGN samples. It is clear that the forbidden region for dusty gas (Figure 1), previously less occupied by the AGNs from R17c, is well populated with IR\/submillimeter-selected and optical--IR red quasars, with a minor fraction of type 1 quasars. This is seen in both the $N_{\\rm H}$--$f_{\\rm Edd}$ and the $E(B-V)$--$f_{\\rm Edd}$ diagrams. 
We investigate this further in Figure 5 where we show the fraction of sources in the forbidden zone (i.e., $N_{\\rm H} \\ge 10^{22}\\, \\rm cm^{-2}$, $f_{\\rm Edd, dust}>1$) per bolometric luminosity bin as a function of $L_{\\rm bol}$. In the following, this fraction is referred to as $\\varphi$. It is clear that $\\varphi$ is minimal among the X-ray-selected AGNs with $\\varphi_{N_{\\rm H}}\\lesssim$10\\%\\ at $L_{\\rm bol} \\sim 10^{42-47}\\ergs$. Similarly, optically selected quasars have $\\varphi_{N_{\\rm H}}$ and $\\varphi_{E(B-V)}\\lesssim$20\\%\\ at $L_{\\rm bol} \\sim 10^{44-48}\\ergs$, with some uncertainty for $\\varphi_{N_{\\rm H}}$ at $L_{\\rm bol} \\sim 10^{47-48}\\ergs$. In contrast, the optical--IR red and IR\/submillimeter-bright quasars (hereafter referred to together as luminous, obscured quasars) commonly lie mostly in the forbidden region over a wide range of $N_{\\rm H}$ and $E(B-V)$ values, and we combined their statistics.\\footnote{A potential caveat is the difference in the dust-to-gas ratio observed between optical--IR red and IR\/submillimeter-bright quasars, which may bias the $\\varphi$ value between the populations. The average dust-to-gas ratios for each population from \\S3, are $\\langle\\log E(B-V)\/N_{\\rm H}\\rangle=-22.33$ and $-22.88$, respectively. Using the separate ratios, however, $\\varphi_{E(B-V)}$ (Figure 5 right) still remains consistent between the two populations.} The luminous, obscured quasars show $\\varphi_{N_{\\rm H}}$ and $\\varphi_{E(B-V)}\\gtrsim$\\,60\\%\\ at $L_{\\rm bol} \\sim 10^{46-48}\\ergs$, significantly higher than the less-luminous X-ray AGNs at comparable obscuration, or the comparably luminous but less obscured optical quasars. 
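The binned fraction defined here can be tabulated generically as below; this is a sketch in which the bin edges and the per-source forbidden-zone flags (i.e., whether a source has N_H >= 10^22 cm^-2 and f_Edd,dust > 1, judged against the curve in Figure 4) are inputs computed elsewhere.

```python
def forbidden_fraction(log_lbol, in_forbidden, edges):
    """Fraction of sources in the forbidden zone per log L_bol bin.
    `in_forbidden` is a boolean flag per source; `edges` are bin
    boundaries in log L_bol. Empty bins return NaN."""
    phi = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = [f for l, f in zip(log_lbol, in_forbidden) if lo <= l < hi]
        phi.append(sum(sel) / len(sel) if sel else float("nan"))
    return phi
```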
These findings confirm earlier studies by \\citet{Gli17b} and L20 on optical--IR red quasars, with our results applicable to general luminous, obscured quasars.\n\nSystematic uncertainties of a factor of several in $f_{\\rm Edd}$ (\\S4) may change the fraction of the samples in the forbidden region. We test this by giving a $\\pm$0.5 dex offset to the $f_{\\rm Edd, dust} (N_{\\rm H})=1$ curve and recalculating $\\varphi$. The $\\varphi$ values are nearly unchanged for the X-ray AGNs and optical quasars, whereas for the luminous, obscured quasars, $\\varphi$ may drop down to 40\\%--50\\% at $L_{\\rm bol} \\sim 10^{46-48}\\ergs$ if the observed $f_{\\rm Edd}$ values are overestimated by 0.5 dex. Still, the $\\varphi$ values for the luminous, obscured quasars are several times those of the X-ray AGNs or optical quasars at a given luminosity, and the main trend in Figure 5 remains unchanged. Modifications to the $f_{\\rm Edd, dust}$ curve may also occur when considering the effect of dust-to-gas ratios closer to the Milky Way value than the value adopted in this work, or radiation trapping. The enhanced absorption of the incident radiation by dust or trapping of reprocessed radiation lowers the $f_{\\rm Edd, dust}$ curve at $N_{\\rm H} \\ge 10^{22}\\, \\rm cm^{-2}$ \\citep{Ish18}. 
Still, we note that both effects simply increase $\\varphi$ for luminous, obscured quasars, reinforcing our findings in Figures 4 and 5.\n\nThe $f_{\\rm Edd, dust}$ values can be further shifted by nuclear stars.\nAdopting the sphere of influence from the BH, we have\n\\begin{eqnarray}\\begin{aligned}\nr_{\\rm BH}=GM_{\\rm BH}&\/\\sigma_{*}^{2}\\\\\n=107\\,{\\rm pc}\\,&\\Big(\\frac{M_{\\rm BH}}{10^{9}\\,M_{\\odot}}\\Big)\\Big(\\frac{\\sigma_{*}}{200\\,\\kms}\\Big)^{-2},\n\\end{aligned}\\end{eqnarray} \nand the enclosed mass $M(10^{46} \\ergs$ quasars in this work are similar ($\\sim10^{9}M_{\\odot}$) for both the obscured and unobscured populations, the timescale of the luminous, obscured quasar phase with $f_{\\rm Edd, dust}>1$ is presumed to be similar to that of the $f_{\\rm Edd, dust}<1$ quasars. In contrast, the nearly complete absence of lower-luminosity AGNs with $f_{\\rm Edd, dust}>1$ (\\S5) suggests a much shorter obscured phase for lower-luminosity AGNs. This appears as a challenge for the radiation-pressure feedback in regulating the nuclear obscuration for luminous quasars, and we next consider possible evolutionary scenarios to achieve a coherent picture of dust obscuration in luminous quasars. \n\n\\subsection{Active timescale}\nFirst, the AGN timescale (hereafter $t_{\\rm AGN}$) is thought to be shorter for more luminous quasars, and this may explain higher $\\varphi$ values for luminous quasars. The nearby AGN fraction is measured to be tens of percent of the galaxy lifetime (e.g., \\citealt{Ho97}; \\citealt{Kau03}), with a corresponding $t_{\\rm AGN}$ of $\\sim10^{9}$\\,yr assuming typical galaxy lifetimes of $\\sim10^{10}$\\,yr. More luminous quasars are more rare, with expected $t_{\\rm QSO}\\sim10^{7-8}$\\,yr (e.g., \\citealt{Mart04}; \\citealt{Hop05}; \\citealt{Hop09}). 
To explain $\\varphi\\lesssim$1--10\\% for $L_{\\rm bol} \\lesssim 10^{44}\\ergs$ AGNs in Figure 5, we constrain the timescale for radiation pressure to clear the nuclear obscuration (hereafter the radiation feedback timescale, $t_{\\rm rad}=t_{\\rm AGN}\\,\\varphi$) to be $t_{\\rm rad}\\lesssim10^{9}\\,(0.01-0.1)\\sim10^{7-8}$\\,yr. Assuming that luminous, obscured quasars will evolve into comparably luminous unobscured quasars through radiation-pressure feedback, so as to explain the comparable number density between the populations, $t_{\\rm rad}$ for luminous ($L_{\\rm bol} \\gtrsim 10^{46}\\ergs$) quasars with $\\varphi\\gtrsim$\\,60\\%\\ would be $t_{\\rm rad}\\sim0.5t_{\\rm QSO}\\,\\varphi \\sim0.5(10^{7-8})(0.6-1)=(3-5)\\times10^{6-7}$\\,yr, roughly comparable to $t_{\\rm rad}$ for less-luminous AGNs. We note that if AGN activity is more episodic (e.g., \\citealt{Par11}; \\citealt{Yaj17}), feedback timescales may be shortened accordingly, although it seems more likely that luminous quasars have few episodes of vigorous accretion (e.g., \\citealt{Hop09}). Luminous, obscured quasars may thus appear to show higher $\\varphi$ values due to a shorter $t_{\\rm AGN}$ than less-luminous, obscured AGNs, even if they feel the same radiation pressure.\n\nWe have referred to $t_{\\rm QSO}\\lesssim10^{7-8}$\\,yr for quasars as a whole (e.g., $M_{\\rm B}\\lesssim-23$ mag or $L_{\\rm bol}\\gtrsim10^{45}\\ergs$), but if more luminous, obscured quasars are in a shorter phase of AGN evolution (shorter $t_{\\rm AGN}$), it better explains the highest $\\varphi$ values observed at $L_{\\rm bol} \\gtrsim 10^{46}\\ergs$. 
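The order-of-magnitude bookkeeping above can be laid out explicitly; all inputs are the rough values quoted in the text, and the arithmetic is illustrative only.

```python
# Order-of-magnitude timescales quoted in the text (years).
T_AGN_LOW = 1e9          # active timescale, less-luminous AGNs
PHI_LOW = (0.01, 0.1)    # forbidden-zone fraction, L_bol <~ 1e44 erg/s
T_QSO = (1e7, 1e8)       # luminous quasar timescale
PHI_HIGH = (0.6, 1.0)    # forbidden-zone fraction, luminous obscured quasars

# t_rad = t_AGN * phi for less-luminous AGNs: ~1e7-1e8 yr
t_rad_low = [T_AGN_LOW * p for p in PHI_LOW]

# t_rad ~ 0.5 * t_QSO * phi for luminous quasars: ~(3-5)e6 to (3-5)e7 yr
t_rad_high = [0.5 * t * p for t, p in zip(T_QSO, PHI_HIGH)]
```

The two ranges overlap at ~10^7 yr, which is the sense in which the text calls them roughly comparable.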
L20 note outflow timescales ($t_{\\rm out}$) for nuclear obscuration to clear away in an expanding shell by radiation pressure on dust,\n\\begin{equation}\nt_{\\rm out} \\approx 2\\times10^{5}\\,{\\rm yr}\\,\\Big(\\frac{r_0}{30\\,{\\rm pc}}\\Big) \\Big(\\frac{v_{\\rm out}}{1000\\kms}\\Big)^{-1}\n\\end{equation}\nfinding $t_{\\rm out} \\approx 2\\times 10^{5}$ yr for Compton-thick gas expanding from an initial distance of $r_{0}=30$\\,pc until it reaches $N_{\\rm H}=10^{22}\\, \\rm cm^{-2}$, assuming $v_{\\rm out}=10^{3}$\\kms. If the dusty gas outflows are triggered by radiation pressure, we expect $t_{\\rm out}$ to be equal to $t_{\\rm rad}$. However, it is shorter than our estimated $t_{\\rm rad}$ values for luminous quasars, by $\\sim \\{(3-5)\\times10^{6-7}\\}\/(2\\times 10^{5})$ or $\\sim$1--2 orders of magnitude. This can be explained if $t_{\\rm AGN}$ for $L_{\\rm bol}\\gtrsim10^{46}\\ergs$ quasars is $\\sim$1--2 orders of magnitude shorter than the $t_{\\rm QSO}\\sim10^{7-8}$\\,yr we adopt, qualitatively consistent with the drop of $t_{\\rm AGN}$ for more luminous quasars in simulations (e.g., \\citealt{Hop05,Hop06}).\n\n\n\\subsection{Extended obscuration}\nAn alternative description is that it takes a longer $t_{\\rm rad}$ for luminous, obscured quasars to clear their obscuration than at lower luminosity. Radiation pressure from luminous, obscured quasars should effectively reach larger distances in the galaxy according to the decreasing small-scale dust covering factor observed in high $L_{\\rm bol}$ or $f_{\\rm Edd}$ AGNs (e.g., \\citealt{Mai07}; \\citealt{Tob14}; \\citealt{Ezh17}). Thus, observing high $\\varphi$ values in luminous, obscured quasars implies that dusty gas may be spatially extended into their hosts, in contrast to lower-luminosity AGNs. 
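The outflow-timescale scaling quoted above can be evaluated directly for different launch radii. A sketch:

```python
def t_out_yr(r0_pc, v_out_kms):
    """L20 scaling: t_out ~ 2e5 yr * (r0 / 30 pc) * (v_out / 1000 km/s)^-1."""
    return 2e5 * (r0_pc / 30.0) * (1000.0 / v_out_kms)

# Nuclear case quoted in the text: r0 = 30 pc, v_out = 1000 km/s.
assert t_out_yr(30, 1000) == 2e5

# If the obscuration is instead spread over ~1e2-1e3 pc, t_out grows
# by 1-2 orders of magnitude:
assert t_out_yr(300, 1000) == 2e6
assert t_out_yr(3000, 1000) == 2e7
```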
This is supported by observations of obscured quasars showing an extended distribution of disturbed emission (e.g., \\citealt{San88}; \\citealt{San96}; \\citealt{Urr08}; \\citealt{Gli15}; \\citealt{Fan16}; \\citealt{Ric17b}). An increased fraction of obscured yet broad-line AGNs (e.g., \\citealt{Lac15}; \\citealt{Gli18}) or extended dust extinction through lines of sight kiloparsecs away from narrow-line AGNs is seen (\\S3) at quasar luminosities, also in agreement with global column densities much larger than the line-of-sight $N_{\\rm H}$ for red quasars (\\S3).\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.95]{f6.eps}\n\\caption{A schematic diagram showing the maximally allowed accretion rate and radiation pressure\/outflow track on the $N_{\\rm H}$--$f_{\\rm Edd}$ plane for $N_{\\rm H}=10^{23}, 10^{24}\\, \\rm cm^{-2}$ quasars. At $L_{\\rm bol}\\lesssim10^{45}$\\ergs, the lack of objects in the forbidden region suggests that radiation pressure on dusty gas controls nuclear ($\\lesssim$10 pc order) obscuration and quickly drops the obscuration down to $N_{\\rm H}\\lesssim10^{22}\\, \\rm cm^{-2}$, while extended AGN outflows are less commonly observed. At $L_{\\rm bol}\\gtrsim10^{46}$\\ergs, $f_{\\rm Edd, dust}>1$ accretion (solid line) may likewise clear the nuclear obscuration, but the high fraction of obscured quasars in the forbidden region suggests a short luminous quasar timescale and\/or an extended, $\\sim10^{2-3}$ pc-scale obscuration being cleared slowly by outflows (dotted line).}\n\\end{figure}\n\nAccording to simplified models for AGNs and host galaxy obscuration at multiple scales (e.g., \\citealt{Buc17}; \\citealt{Hic18}), the host galaxy kiloparsecs away from the nucleus is responsible for $N_{\\rm H} \\sim 10^{22}\\, \\rm cm^{-2}$, whereas obscuration from the inner AGN structure ($\\lesssim$10\\,pc) or circumnuclear starbursts ($\\sim$10--100\\,pc) can reach Compton-thick column densities. 
In addition, obscuration from gas-rich mergers (e.g., \\citealt{Hop08}) or higher gas fractions in high-redshift galaxies (e.g., \\citealt{Tac10}; \\citealt{Buc17}) may enhance the obscuration up to kiloparsec scales. Coming back to Equation (5), we find that $t_{\\rm out}$ for luminous, obscured quasars will be extended by 1--2 orders of magnitude ($t_{\\rm out}\\sim10^{6-7}$ yr) if extended obscuration due to mergers is spread over $\\sim10^{2-3}$ pc, closing the gap between the timescale arguments (\\S6.1) without even changing $t_{\\rm AGN}$. This scenario is also consistent with the $t_{\\rm out}$ values estimated by modeling expanding shells of dusty gas located $\\sim10^{2-3}$ pc away from luminous quasars \\citep{Ish17}. We direct readers to the theoretical and observational studies on how the nuclear outflows triggered by radiation pressure extend to the host galaxy (e.g., \\citealt{Har14}; \\citealt{Ish15}; \\citealt{Tho15}; \\citealt{Ish17}; \\citealt{Kan18}).\n\nAlthough the impact of radiation pressure from the AGN itself is weaker at extended regions of the galaxy, and R17c separate radiation-pressure feedback from inflows and outflows, radiation pressure has still been considered to launch outflows that may reach large distances (e.g., \\citealt{Hop10} and the discussion in L20). In this work, we separately considered radiation pressure to regulate $\\lesssim10$ pc order obscuration and outflows $\\sim10^{2-3}$pc scales, but note that radiation pressure is thought to cause extended outflows that eventually clear obscured quasars, according to gas-rich, merger-driven quasar evolution models (e.g., \\citealt{Hop08}; \\citealt{Hic09}). 
Not only are the highly ionized gas outflows on the order of $\\sim10^{3}\\kms$ found in the majority of quasars with $L_{[\\rm O\\,III]}\\gtrsim10^{42}$\\ergs\\ (or $L_{\\rm bol}\\gtrsim10^{45.5}$\\ergs), or $f_{\\rm Edd} \\gtrsim 0.1$ (e.g., \\citealt{Woo16}; \\citealt{Rak18}; \\citealt{Shi19}; \\citealt{Jun20}), they extend over kiloparsec scales together with Balmer line outflows with a weaker ionization potential or molecular outflows (e.g., \\citealt{Fio17}; \\citealt{Kan17}; \\citealt{Fle19}). This is in line with higher merger fractions seen in $L_{\\rm bol}\\gtrsim10^{46}$\\ergs\\ quasars (e.g., \\citealt{Tre12}; \\citealt{Fan16}; \\citealt{Dia18}), which is also the transitional luminosity where radiation-pressure feedback appears less effective (Figure 5). \n\nWe thus consider radiation pressure to be responsible for regulating not only the $\\lesssim$10 pc-order dusty structure (e.g., \\citealt{Law91}; R17c), but also the host galaxy environment in obscured $L_{\\rm bol} \\gtrsim 10^{46}$\\ergs\\ quasars where the triggered nuclear outflows may reach and clear $\\sim10^{2-3}$ parsec-scale material, slowly over a timescale of $\\sim10^{6-7}$\\,yr. This is consistent with the high-$f_{\\rm Edd}$ AGN outflows discussed in R17c, though their sample lacked the luminous quasars that we argue are responsible for producing extended outflows at $f_{\\rm Edd, dust}>1$ values. We summarize our discussion in Figure 6.\\\\\n\n\\section{Summary}\nUsing a collection of AGN samples spanning a wide dynamic range of luminosity, obscuration, and redshift, we probed the distribution of obscuration and accretion rate values to comparatively examine the role of radiation pressure in blowing out obscured quasars. We summarize our findings below:\n \n1. 
The fraction of AGNs in the forbidden zone for radiation pressure, $\\varphi$, is kept to $\\lesssim$20\\%\\ for all of the multi-wavelength-selected AGN samples compiled over a wide range of luminosity and redshift, consistent with previous findings that nuclear obscuration is quickly blown away by radiation pressure once the accretion rate exceeds the Eddington limit for dusty gas.\n\n2. This radiation-pressure feedback, that is, the acceleration of nuclear dusty gas, appears limited for luminous, obscured quasars at $N_{\\rm H}\\gtrsim10^{22}\\, \\rm cm^{-2}$ or $E(B-V) \\gtrsim 0.2$ mag, and $L_{\\rm bol}\\gtrsim10^{46}$\\ergs, where they show $\\varphi\\gtrsim60\\%$ over a wide range of AGN selection wavelengths or amount of obscuration. This may be explained by a combination of shorter luminous quasar lifetimes and extended obscuration cleared by outflows over a longer timescale than to clear the nuclear obscuration. \n\nUltimately, we expect to see the $M_{\\rm BH}$ values grow while luminous, obscured quasars become unobscured if extended outflows, slower than radiation pressure clearing the nuclear obscuration, are the bottleneck for AGN feedback. Ongoing hard X-ray surveys probing fainter sources (e.g., \\citealt{Lan17}; \\citealt{Oh18}) will confirm if distant, gas-obscured quasars are going through similar strengths of radiation-pressure feedback as dust-obscured quasars. Spatially resolved or global $N_{\\rm H}$ and $E(B-V)$ estimates for luminous, obscured quasars will better tell whether obscuration is indeed more extended in luminous quasars and will quantify the relative effect of radiation pressure and outflows to their parsec-to-kiloparsec scale gas and dust environments. 
\n\n\\acknowledgments\nWe thank the anonymous referee for the comments that greatly improved the paper and Andrew Fabian for kindly providing the $f_{\\rm Edd, dust}(N_{\\rm H})=1$ curves.\nThis research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A3A04005158). R.J.A. was supported by FONDECYT grant No. 1191124. R.C.H. and C.M.C. acknowledge support from the National Science Foundation under CAREER award no. 1554584. C.R. acknowledges support from the Fondecyt Iniciacion grant 11190831. \n\nThis work makes use of data from the $\\it NuSTAR$ mission, a project led by Caltech, managed by the Jet Propulsion Laboratory, and funded by NASA.\nThis research has made use of data and\/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA\/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory.\nThis publication makes use of data products from the United Kingdom Infrared Deep Sky Survey. UKIRT is owned by the University of Hawaii (UH) and operated by the UH Institute for Astronomy; operations are enabled through the cooperation of the East Asian Observatory. 
When the data reported here were acquired, UKIRT was operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the U.K.\nThis publication makes use of data products from the Wide-field Infrared Survey Explorer, \nwhich is a joint project of the University of California, Los Angeles, and the Jet Propulsion \nLaboratory\/California Institute of Technology, funded by the National Aeronautics and Space Administration.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhcuu b/data_all_eng_slimpj/shuffled/split2/finalzzhcuu new file mode 100644 index 0000000000000000000000000000000000000000..283022ac15217210b8059feb00c3abe1a9a9339f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhcuu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ \nbe a L\\'evy process on $\\mathbb R ^{2}$.\nThe generalized Ornstein-Uhlenbeck process $\\{V_t , t \\geqslant 0 \\}$ on $\\mathbb R$ \nbased on $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ with initial condition $V_0$\nis defined as\n\\begin{equation}\\label{1.1}\nV_t = e^{-\\xi _t }\\left( V_0 + \\int _{0}^{t} e^{\\xi _{s-}} {d}\\eta _s \\right) , \n\\quad t\\geqslant 0,\n\\end{equation}\nwhere $V_0$ is a random variable independent of \n$\\{(\\xi _t ,\\eta _t) , t\\geqslant 0\\}$.\nThis process has recently been well-studied by \nCarmona, Petit, and Yor \\cite{CPY97}, \\cite{CPY01}, Erickson and Maller \\cite{EM05}, and \nLindner and Maller \\cite{LM05}.\n\nLindner and Maller \\cite{LM05} find that the generalized Ornstein-Uhlenbeck \nprocess $\\{V_t , t \\geqslant 0 \\}$ based on $\\{ (\\xi _t ,\\eta _t) , t \\geqslant 0 \\}$ \n turns out to be\n a stationary process with a suitable choice of $V_0$ if and only if \n\\begin{equation}\\label{1.2}\nP\\left(\n\\int _{0}^{\\infty -} e^{-\\xi_{s-}}dL_s \\text{ exists and is finite}\\right)=1,\n\\end{equation}\nwhere 
\n\\begin{equation}\\label{1.2a}\n\\int _{0}^{\\infty -} e^{-\\xi_{s-}}dL_s =\n\\lim _{t \\rightarrow \\infty} \\int _{0}^{t} e^{-\\xi_{s-}}dL_s \n\\end{equation}\nand $\\{ (\\xi _t ,L_t) , t \\geqslant 0 \\}$ \nis a L\\'evy process on $\\mathbb R^2$ defined by\n\\begin{equation}\\label{1.3}\nL_t=\\eta_t+\\sum_{00$ such that $h_X(x)>0$ for all $x\\geqslant c$ and that\n$\\{Y_t\\}$ is not the zero process. Then\n\\begin{equation}\\label{2.1}\nP\\left(\n\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s \\text{ exists and is finite}\\right)=1\n\\end{equation}\nif and only if\n\\begin{equation}\\label{2.2}\n\\lim_{t\\to\\infty}X_t=+\\infty\\text{ a.\\,s.\\ and }\\int_{|y|\\geqslant e^c}\n\\frac{\\log|y|}{h_X(\\log|y|)} \\nu_Y (dy)<\\infty,\n\\end{equation}\nwhere $|y|$ is the Euclidean norm of $y \\in \\mathbb R^d$.\n\\end{thm}\n\n\\begin{proof}\nFirst, for $d=1$, this theorem is established in \\cite{EM05}.\nSecond, for $j=1,\\ldots,d$, the $j$th coordinate process $\\{Y_t^{(j)}, t\\ges0\\}$\nis a L\\'evy process on $\\mathbb R$ with L\\'evy measure \n$\\nu_{Y^{(j)}}(B)=\\int_{\\mathbb R^d} 1_B(y_j)\\nu_Y (dy)$ for any Borel set $B$ in \n$\\mathbb R$\nsatisfying $0\\not\\in B$, where $y=(y_1,\\ldots,y_d)$. Third,\nthe property \\eqref{2.1} is equivalent to\n\\begin{equation}\\label{2.1a}\nP\\left(\n\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)} \\text{ exists and is finite}\\right)=1\n\\quad\\text{for }j=1,\\ldots,d.\n\\end{equation}\n\nNext, we claim that the following \\eqref{2.3} and \\eqref{2.4}\nare equivalent:\n\\begin{gather}\n\\int_{|y|>M}\\frac{\\log |y|}{h_X(\\log |y|)} \\nu_Y(dy)<\\infty\\quad\n\\text{ for some }M\\geqslant e^c,\\label{2.3}\\\\\n\\int_{\\{y\\colon |y_j|>M\\}}\n\\frac{\\log |y_j|}{h_X(\\log |y_j|)} \\nu_Y(dy)<\\infty,\\quad\nj=1,\\ldots,d,\\quad\\text{for some }M\\geqslant e^c.\\label{2.4}\n\\end{gather}\nPut $f(u)=\\log u\/h_X(\\log u)$ for $u\\geqslant e^c$. This $f(u)$ is not necessarily\nincreasing for all $u\\geqslant e^c$. 
We use the words {\\em increasing} and \n{\\em decreasing} in the wide sense allowing flatness. But\n$f(u)$ is increasing for sufficiently large $u$ ($>M_0$, say), because, for $x>c$,\n\\[\n\\frac{h_X(x)}{x}=\\frac{h_X(c)}{x}+\\frac{1}{x} \\int_c^x n(y)dy\n\\]\nwith $n(y)=\\nu_X(\\,(y,\\infty)\\,)$ and, with \n$d\/dx$ meaning the right derivative, we have\n\\begin{align*}\n&\\frac{d}{dx}\\left( \\frac{1}{x} \\int_c^x n(y)\ndy\\right)=\\frac{1}{x^2}\\left(-\\int_c^x n(y)\ndy+xn(x)\\right)\\\\\n&\\qquad=\\frac{1}{x^2}\\left(\\int_c^x (n(x)-n(y))\ndy+cn(x)\\right)<0\n\\end{align*}\nfor sufficiently large $x$\nif $n(c)>0$ (note that $\\int_c^x (n(x)-n(y))dy$\nis nonpositive and decreasing).\nThus we see that \\eqref{2.3} implies \\eqref{2.4}.\nIndeed, letting $M_1 = M\\lor M_0$, we have\n\\[\n\\int_{\\{y\\colon |y_j|>M_1\\}} f(|y_j|)\\nu_Y(dy)\\leqslant \\int_{\\{y\\colon \n|y_j|>M_1\\}} f(|y|)\\nu_Y(dy)\\leqslant \n\\int_{|y|>M_1} f(|y|)\\nu_Y(dy)<\\infty.\n\\]\nIn order to show that \\eqref{2.4} implies \\eqref{2.3}, let $g(x)=h_X(x)$ for \n$x\\geqslant c$ and $=h_X(c)$ for $-\\infty< x<c$. Then\n\\begin{align*}\n&\\int_{|y|>M_1} f(|y|)\\nu_Y(dy)\\leqslant\\int_{|y|>M_1} f(|y_1|+\\cdots+|y_d|)\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\n\\int_{|y|>M_1}\\frac{\\log(|y_1|+\\cdots+|y_d|+1)}{h_X(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\n\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{h_X(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy)\\\\\n&\\qquad=\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_1|+\\cdots+|y_d|))}\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\\sum_{j=1}^d \\int_{|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_Y(dy)\\\\\n&\\qquad\\leqslant\\sum_{j=1}^d \\left(\\int_{|y_j|>M_1}\n\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_Y(dy)+\\int_{|y_j|\\leqslant M_1,\\,|y|>M_1}\\frac{\\log(|y_j|+1)}{g(\\log(|y_j|))}\n\\nu_Y(dy)\\right).\n\\end{align*}\nThe first integral in each summand is finite due to \\eqref{2.4} \n and the second integral is also finite because the 
integrand is bounded.\nThis finishes the proof of equivalence of \\eqref{2.3} and \\eqref{2.4}.\n\nNow assume that \\eqref{2.2} holds. Then \\eqref{2.4} holds. Hence, by the theorem\nfor $d=1$, $\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)}$ exists and is finite \na.\\,s.\\ for all $j$ such that $\\{Y_t^{(j)}\\}$ is not the zero process.\nFor $j$ such that $\\{Y_t^{(j)}\\}$ is the zero process, we have\n$\\int _{0}^{\\infty -} e^{-X_{s-}}dY_s^{(j)}=0$. Hence \\eqref{2.1a} holds, that is,\n\\eqref{2.1} holds.\n\nConversely, assume that \\eqref{2.1} holds.\nLet\n\\[\nI_j=\\int_{\\{y\\colon |y_j|\\geqslant e^c\\}}\n\\frac{\\log |y_j|}{h_X(\\log |y_j|)} \\nu_Y(dy).\n\\]\nSince $\\{Y_t\\}$ is not the zero process, $\\{Y_t^{(j)}\\}$ is not the zero process\nfor some $j$. Hence, by the theorem for $d=1$, $\\lim_{t\\to\\infty}X_t=+\\infty$ \na.\\,s.\\ and $I_j<\\infty$ for such $j$. For $j$ such that $\\{Y_t^{(j)}\\}$ is the \nzero process, $\\nu_{Y^{(j)}}=0$ and $I_j=0$. Hence we have \\eqref{2.4} and thus\n\\eqref{2.2} holds due to the equivalence of \\eqref{2.3} and \\eqref{2.4}.\n\\end{proof}\n\n\\begin{rem}\\label{r2.1}\n(i) Suppose that $\\{X_t\\}$ satisfies $0-\\infty$ and $00$;\\\\\n(b$'$) $\\int_1^{\\infty} \\nu_X(\\,(y,\\infty)\\,)dy=\\infty$ and \\eqref{2.6} holds.\\\\\nSee also Doney and Maller \\cite{DM02}.\n\n(iii) If $\\lim_{t\\to\\infty}X_t=+\\infty$ a.\\,s., then $h_X(x)>0$ for all large\n$x$, as is explained in \\cite{EM05} after their Theorem 2.\n\\end{rem}\n\nWhen $\\{X_t\\}$ and $\\{Y_t\\}$ are\nindependent, the result in Remark \\ref{r2.1} (i) can be extended to more general\nexponential integrals of L\\'evy processes.\n\n\\begin{thm}\\label{t2.2}\nSuppose that $\\{X_t\\}$ and $\\{Y_t\\}$ are\nindependent and that $00$. 
Then\n\\begin{equation}\\label{2.7}\nP\\left(\n\\int _{0}^{\\infty -} e^{-(X_{s-})^{\\alpha}}dY_s \\text{ exists and is finite}\\right)=1\n\\end{equation}\nif and only if\n\\begin{equation}\\label{2.8}\n\\int_{\\mathbb R^d} (\\log^+ |y|)^{1\/\\alpha}\\nu_Y(dy)<\\infty.\n\\end{equation}\n\\end{thm}\n\nWe use the following result, which is a part of Proposition 4.3 of \n\\cite{S05b}.\n\n\\begin{prop}\\label{p2.1}\nLet $f$ be a locally square-integrable nonrandom function on $[0,\\infty)$\nsuch that there are positive constants $\\alpha$, $c_1$, and $c_2$ satisfying\n\\begin{equation*}\ne^{-c_2 s^{\\alpha}}\\leqslant f(s)\\leqslant e^{-c_1 s^{\\alpha}}\\quad\\text{for all large $s$.}\n\\end{equation*}\nThen \n\\[\nP\\left(\n\\int _{0}^{\\infty -} f(s)dY_s \\text{ exists and is finite}\\right)=1\n\\]\nif and only if \\eqref{2.8} holds.\n\\end{prop}\n\n\\begin{proof}[Proof of Theorem \\ref{t2.2}] \nLet $E[X_1]=b$. By assumption, $00$, and define \n$$\nT_c = \\inf \\{ t\\colon X_t =c\\}.\n$$\nSince we are assuming that $X_{t}$ does not have positive jumps and that\n$00$. This is Samorodnitsky's remark mentioned in \\cite{KLM06}.\nThe integral $\\int_0^{\\infty-} \\exp(-N_{s-}) ds$ is a special case of (ii) with\n$\\alpha=1$.\n\n\\begin{proof}[Proof of Theorem \\ref{t3.2}]\n(i) Let $Z=\\int_0^{\\infty-} e^{-N_{s-}} dY_s$.\nIf $\\{Y_t\\}$ is the zero process, then $Z=0$. \nIf $\\{Y_t\\}$ is not the zero process, then existence and finiteness of $Z$\nfollows from Theorem \\ref{t2.1}. Let $T_n=\\inf\\{s\\ges0\\colon N_s=n\\}$. \nClearly $T_n$ is finite and tends to infinity as $n\\to\\infty$ a.\\,s. We have\n\\begin{equation*}\nZ=\\sum_{n=0}^{\\infty}\\int_{T_n}^{T_{n+1}} e^{-N_{s-}} dY_s=\\sum_{n=0}^{\\infty}\ne^{-n}(Y(T_{n+1})-Y(T_n)).\n\\end{equation*}\nFor each $n$, $T_n$ is a stopping time for $\\{(N_s,Y_s)\\colon s\\ges0\\}$. 
Hence\n$\\{(N(T_n +s)-N(T_n), Y(T_n +s)-Y(T_n)), s\\ges0\\}$ and $\\{(N_s,Y_s), 0\\leqslant s\\leqslant\nT_n\\}$ are independent and the former process is identical in law with \n$\\{(N_s,Y_s), s\\ges0\\}$. It follows that the family\n$\\{Y(T_{n+1})-Y(T_n), n=0,1,2,\\ldots\\}$ is independent and identically distributed.\nThus, denoting $W_n= Y(T_{n+1})-Y(T_n)$, we have the representation\n\\begin{equation}\\label{3.4}\nZ=\\sum_{n=0}^{\\infty} e^{-n} W_n,\n\\end{equation}\nwhere $W_0,W_1,\\ldots$ are independent and identically distributed\n and $W_n\\overset{\\mathrm d}{=} Y(T_1)$ ($\\overset{\\mathrm d}{=}$ stands for\n\\lq\\lq has the same law as\"). Consequently we have\n\\begin{equation}\\label{3.5}\nZ=W_0+e^{-1}Z',\n\\end{equation}\nwhere $W_0$ and $Z'$ are independent and $Z'\\overset{\\mathrm d}{=} Z$.\nThe distribution of $W_0$ is infinitely divisible, since $W_0=Y(T_1)\\overset{\\mathrm d}{=} U_1$,\nwhere $\\{U_s\\}$ is a L\\'evy process given by subordination of $\\{Y_s\\}$ by a \ngamma process.\nHere we use our assumption of independence of $\\{N_t\\}$ and $\\{Y_t\\}$. Thus\n$\\mu$ is\n$e^{-1}$-semi-selfdecomposable and hence infinitely divisible.\nAn alternative proof of the infinite divisibility of $\\mu$ is to look at \nthe representation \\eqref{3.4} and to use that $\\mathcal L(Y(T_1))$\nis infinitely divisible. 
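The series representation just established lends itself to a quick Monte Carlo illustration. A sketch, with the purely illustrative choice of standard exponential W_n (not a distribution taken from the paper), so that E[Z] = sum_n e^{-n} = e/(e-1) ≈ 1.582:

```python
import math
import random

random.seed(0)

N = 30            # truncation level; e^{-30} makes the neglected tail negligible
SAMPLES = 20_000
weights = [math.exp(-n) for n in range(N)]

def sample_Z():
    # One draw of Z = sum_{n>=0} e^{-n} W_n with i.i.d. W_n ~ Exp(1).
    return sum(w * random.expovariate(1.0) for w in weights)

zs = [sample_Z() for _ in range(SAMPLES)]
mean = sum(zs) / SAMPLES
exact = math.e / (math.e - 1.0)   # exact mean for Exp(1) increments
assert abs(mean - exact) < 0.05   # Monte Carlo agreement (several-sigma margin)
```

The same construction also illustrates the distributional identity Z = W_0 + e^{-1}Z' with Z' an independent copy of Z, since dropping the n = 0 term and rescaling by e reproduces the series.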
\n\n(ii) \nUse the representation \\eqref{3.4} with $W_n\\overset{\\mathrm d}{=} U_1$,\nwhere we obtain a L\\'evy process $\\{U_s\\}$ by subordination of $\\{Y_s\\}$ \nby a gamma process.\nSince gamma distributions are selfdecomposable, the results of Sato \\cite{S01b}\non inheritance of selfdecomposability in subordination\nguarantee that $\\mathcal L(U_1)$ is selfdecomposable under our assumption on $\\{Y_s\\}$.\nHence $\\mu$ is selfdecomposable, as selfdecomposability is preserved under \nconvolution and convergence.\nFurther, since selfdecomposability implies $b$-semi-selfdecomposability for each\n$b$, \\eqref{3.5} shows that $\\mu$ is of class $L_1(e^{-1},\\mathbb R^d)$.\n\n(iii) \nThe process $\\{Y_t\\}$ is a compound Poisson process on $\\mathbb R$ with $\\nu_Y$ \nconcentrated on the integers (see Corollary 24.6 of \\cite{S}). Let us consider\nthe L\\'evy measure $\\nu^{(0)}$ of $Y(T_1)$. \nLet $a>0$ be the parameter of the Poisson process $\\{N_t\\}$.\nAs in the proofs of (i)\nand (ii), $Y(T_1)\\overset{\\mathrm d}{=} U_1$, where \n$\\{U_s\\}$ is given by subordination of $\\{Y_s\\}$, by a gamma process\nwhich has L\\'evy measure $x^{-1}e^{-ax}dx$. \nHence, using Theorem 30.1 of \\cite{S}, we see that\n\\begin{equation*}\n\\nu^{(0)}(B)=\\int_0^{\\infty}P(Y_s\\in B)s^{-1}e^{-as}ds\n\\end{equation*}\nfor any Borel set $B$ in $\\mathbb R$. Thus $\\nu^{(0)}(\\mathbb R\\setminus\\mathbb Z)=0$.\n\nSuppose that $\\{Y_t\\}$ is not a decreasing process. Then some positive integer\nhas positive $\\nu^{(0)}$-measure. Denote by $p$ the minimum of such positive integers.\nSince $\\{Y_t\\}$ is compound Poisson, $P(Y_s=kp)>0$ for any $s>0$ for \n$k=1,2,\\ldots$. Hence $\\nu^{(0)}(\\{kp\\})>0$ for $k=1,2,\\ldots$. Therefore,\nfor each nonnegative\ninteger $n$, the L\\'evy measure $\\nu^{(n)}$ of $e^{-n}Y(T_1)$ satisfies\n$\\nu^{(n)}(\\{e^{-n}kp\\})>0$ for $k=1,2,\\ldots$. 
\nClearly, $\\nu^{(n)}$ is also discrete.\nThe representation \\eqref{3.4} shows that\n\\[\n\\nu_{\\mu}=\\sum_{n=0}^{\\infty} \\nu^{(n)}.\n\\]\nHence, $\\nu_{\\mu}$ is discrete and\n\\[\n\\nu_{\\mu}(\\{e^{-n}kp\\})>0\\quad\\text{for all $n=0,1,2,\\ldots$ and $k=1,2,\\ldots$\\;\\,.} \n\\]\nThus the points in $(0,\\infty)$ of positive $\\nu_{\\mu}$-measure are dense in\n$(0,\\infty)$.\n\nSimilarly, if $\\{Y_t\\}$ is not an increasing process, then the points in \n$(-\\infty,0)$ of positive $\\nu_{\\mu}$-measure are dense in $(-\\infty,0)$.\n\\end{proof}\n\nThe following remarks give information on continuity properties of the law $\\mu$.\nA distribution on $\\mathbb R^d$ is called\nnondegenerate if its support is not contained in any \naffine subspace of dimension $d-1$. \n\n\\begin{rem}\\label{r3.1}\n(i) Any nondegenerate selfdecomposable distribution on $\\mathbb R^d$ for $d\\ges1$\nis absolutely continuous (with respect to Lebesgue measure on $\\mathbb R^d$)\n although, for $d\\ges2$, its L\\'evy measure is not necessarily\nabsolutely continuous. This is proved by Sato \\cite{S82} (see also Theorem 27.13\nof \\cite{S}). 
\n\n(ii) Nondegenerate semi-selfdecomposable distributions on $\\mathbb R^d$ for $d\\ges1$\nare absolutely continuous or continuous singular, as Wolfe \\cite{W83} proves\n(see also Theorem 27.15 of \\cite{S}).\n\\end{rem}\n\n\\vskip 5mm\n\\section{An example of type $G$ random variable}\n\nIn Maejima and Niiyama \\cite{MNi05}, an improper integral \n\\begin{equation}\\label{4.1}\nZ= \\int_0^{\\infty -} e^{-(B_s+\\lambda s)}dS_s\n\\end{equation}\nwas studied,\nin relation to a stationary solution of the stochastic differential equation\n\\begin{equation*}\ndZ_t = - \\lambda Z_{t} dt + Z_{t-} dB_t + dS_t, \\quad t\\geqslant 0,\n\\end{equation*} \nwhere $\\{B_t, t\\geqslant 0\\}$ is a standard Brownian motion on $\\mathbb R$, $\\lambda >0$, and\n$\\{S _t, t\\geqslant 0\\}$ is a symmetric $\\alpha$-stable L\\'evy process with $0<\\alpha \\leqslant 2$\non $\\mathbb R$, independent of $\\{B_t\\}$.\nThey showed that $Z$ is {\\it of type $G$} in the sense that $Z$ is a variance mixture\nof a standard normal random variable by some infinitely divisible distribution.\nNamely, $Z$ is of type $G$ if \n\\begin{equation*}\nZ\\overset{\\mathrm d}{=} V^{1\/2}W\n\\end{equation*}\nfor some nonnegative infinitely divisible random variable\n$V$ and a standard normal random variable $W$ independent of each other.\nEquivalently, $Z$ is of type $G$ if and only if $Z\\overset{\\mathrm d}{=} U_1$, where $\\{U_t, t\\ges0\\}$ \nis given by subordination of a standard Brownian motion.\nIf $Z$ is of type $G$, then $\\mathcal L(V)$ is uniquely \ndetermined by $\\mathcal L(Z)$ (Lemma 3.1 of \\cite{S01b}).\n\nThe $Z$ in \\eqref{4.1} is a special case of those exponential integrals of L\\'evy\nprocesses which we are dealing with. Thus Theorem \\ref{t3.1} says that \nthe law of $Z$ is selfdecomposable. But the class of type $G$ distributions\n(the laws of type $G$ random variables) is neither larger nor smaller \nthan the class of symmetric selfdecomposable distributions. 
\nAlthough the proof that $Z$ is of type $G$ is found in \\cite{MNi05},\nthe research report is not well distributed.\nHence we reproduce their proof below for the reader.\nWe will show that the law of $Z$ belongs to a special \nsubclass of selfdecomposable distributions.\n\n\\begin{thm}\\label{t4.1}\nUnder the assumptions on $\\{B_t\\}$ and $\\{S_t\\}$ stated above, \n$Z$ in \\eqref{4.1} is of type $G$ and furthermore the mixing distribution for \nvariance, $\\mathcal L(V)$, is not only\ninfinitely divisible but also selfdecomposable.\n\\end{thm}\n\n\\begin{proof}\nIt is known (Proposition 4.4.4 of \nDufresne \\cite{D90}) that for any $a\\in\\mathbb R\\setminus \\{0\\}$, $b>0$,\n$$\n\\int_0^{\\infty} e^{aB_s-bs} ds \\overset{\\mathrm d}{=} 2\\left (a^2 \\Gamma _{2ba^{-2}}\\right )^{-1},\n$$\nwhere $\\Gamma _\\gamma$ is a gamma random variable with parameter $\\gamma >0$,\nnamely, $P(\\Gamma _\\gamma \\in B) = \\Gamma (\\gamma) ^{-1}\\int_{B\\cap (0,\\infty)}\nx^{\\gamma -1}e^{-x}dx$.\nThe law of the reciprocal of a gamma random variable is\ninfinitely divisible and, furthermore, selfdecomposable (Halgreen \\cite{H79}). \nWe have\n\\begin{align*}\nE \\left[ e^{iz Z } \\right] \n& = E \\left[ \\exp \\left(iz \\int_0^{\\infty -} e^{-(B_s+\\lambda s)} dS_s \n\\right) \\right] \\\\\n& = E \\left[ E \\left[ \\left. \\exp \\left( iz \\int_0^{\\infty -} \ne^{-(B_s+\\lambda s)} dS_s\\right)\\,\\right|\\,\\{B_s\\} \\right]\\right] .\n\\end{align*}\nWe have $Ee^{izS_t}=\\exp(-ct|z|^{\\alpha})$ with some $c>0$. \nFor any nonrandom measurable function $f(s)$ satisfying $\\int_0^{\\infty}\n|f(s)|^{\\alpha}ds<\\infty$, we have\n$$\nE\\left [\\exp \\left( iz \\int_0^{\\infty -} f(s)dS_s\\right)\\right ]\n= \\exp \\left( -c|z |^{\\alpha}\\int_0^{\\infty} |f(s)|^{\\alpha}ds\\right)\n$$\n(see, e.\\,g.\\ Samorodnitsky and Taqqu \\cite{ST}). 
Hence\n\\begin{align*}\nE\\left [e^{iz Z}\\right ]\n& = E\\left [\\exp \\left( -c|z |^{\\alpha}\\int_0^{\\infty}\ne^{ -\\alpha B_s -\\alpha \\lambda s} ds \\right)\\right]\\\\\n& = E\\left [\\exp\\left(-c|z |^{\\alpha} 2 \n\\left (\\alpha ^2 \\Gamma_{2\\alpha ^{-1}\\lambda }\\right )^{-1}\\right)\\right ] .\n\\end{align*}\nIf we put\n$$\nH(dx) = P\\left ( 2 c\n\\left (\\alpha ^2 \\Gamma_{2\\alpha ^{-1}\\lambda }\\right )^{-1}\n\\in dx\\right ),\n$$\nthen \n$$\nE[e^{iz Z}] = \\int_0^{\\infty} e^{-u|z |^{\\alpha}} H(du).\n$$\nThis $H$ is the distribution of a positive\ninfinitely divisible (actually selfdecomposable) random variable.\nThis shows that $Z$ is a mixture of a symmetric\n$\\alpha$-stable random variable $S$ with $Ee^{izS}=e^{-|z|^{\\alpha}}$ \nin the sense that\n\\begin{equation}\\label{4.2}\nZ \\overset{\\mathrm d}{=} \\Gamma ^{-1\/\\alpha}S,\n\\end{equation}\nwhere $\\Gamma$ and $S$ are independent and $\\Gamma$ is a gamma random variable with\n$\\mathcal L(\\Gamma^{-1})=H$, that is, \n$\\Gamma=(2c)^{-1}\\alpha^2 \\Gamma_{2\\alpha^{-1}\\lambda}$.\nTo see that $Z$ is of type $G$, we need to rewrite \\eqref{4.2} as\n$$\nZ \\overset{\\mathrm d}{=} \\Gamma ^{-1\/\\alpha} S \\overset{\\mathrm d}{=} V^{1\/2}W,\n$$\nfor some infinitely divisible random variable $V>0$ independent of a standard \nnormal random variable $W$.\nLet $S^+_{\\alpha\/2}$ be a positive strictly $(\\alpha\/2)$-stable random variable\nsuch that\n$$\nE\\left[\\exp(-uS_{\\alpha\/2}^+)\\right]=\\exp\\left( -(2u)^{\\alpha\/2}\\right),\\quad u\\geqslant 0\n$$\nand $\\Gamma$, $W$, and $S_{\\alpha\/2}^+$ are independent. Then\n$$\nS\\overset{\\mathrm d}{=} (S_{\\alpha\/2}^+)^{1\/2} W,\n$$\nand hence $S$ is of type $G$. 
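As an aside (not part of the original proof), the Dufresne identity invoked at the start of the proof is easily sanity-checked on first moments: for 2b > a^2 both sides have mean 1/(b - a^2/2), using E[e^{aB_s}] = e^{a^2 s/2} on the left and E[1/Gamma_gamma] = 1/(gamma - 1) on the right. A sketch:

```python
from math import isclose

def lhs_mean(a, b):
    # E[ int_0^inf e^{a B_s - b s} ds ] = int_0^inf e^{(a^2/2 - b) s} ds
    # by Fubini; finite for b > a^2 / 2.
    assert b > a * a / 2
    return 1.0 / (b - a * a / 2)

def rhs_mean(a, b):
    # E[ 2 * (a^2 * Gamma_gamma)^{-1} ] with gamma = 2 b / a^2, using
    # E[1 / Gamma_gamma] = 1 / (gamma - 1) for gamma > 1.
    gamma = 2 * b / a ** 2
    assert gamma > 1
    return 2.0 / (a ** 2 * (gamma - 1))

for a, b in [(1.0, 1.0), (0.5, 2.0), (2.0, 3.0)]:
    assert isclose(lhs_mean(a, b), rhs_mean(a, b))
```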
Let\n$$\nV=\\Gamma^{-2\/\\alpha}S_{\\alpha\/2}^+.\n$$\nThen\n$$\nV^{1\/2} W=(\\Gamma^{-2\/\\alpha}S_{\\alpha\/2}^+)^{1\/2} W\n=\\Gamma^{-1\/\\alpha}(S_{\\alpha\/2}^+)^{1\/2}W\n\\overset{\\mathrm d}{=}\\Gamma^{-1\/\\alpha} S\\overset{\\mathrm d}{=} Z.\n$$\nUsing a positive strictly $(\\alpha\/2)$-stable L\\'evy process $\\{S_{\\alpha\/2}^+(t),\nt\\ges0\\}$ independent of $\\Gamma$ with $\\mathcal L(S_{\\alpha\/2}^+(1))=S_{\\alpha\/2}^+$, \nwe \nsee that \n$$\nV\\overset{\\mathrm d}{=} S_{\\alpha\/2}^+(\\Gamma^{-1}).\n$$\nSince $\\Gamma^{-1}$ is selfdecomposable, $V$ is also selfdecomposable due to the\ninheritance of selfdecomposability in subordination of strictly stable L\\'evy\nprocesses (see \\cite{S01b}).\nTherefore $Z$ is of type $G$ with $\\mathcal L(V)$ being selfdecomposable. Also,\nthe selfdecomposability of $Z$ again follows.\n\\end{proof}\n\nIn their recent paper \\cite{AMR06}, Aoyama, Maejima, and Rosi\\'nski\n have introduced a new strict subclass \n(called $M(\\mathbb R^d)$) of\nthe intersection of the class of type $G$ distributions and the class of selfdecomposable\ndistributions on $\\mathbb R^d$ (see Maejima and Rosi\\'nski \\cite{MR} for the \ndefinition of \ntype $G$ distributions on $\\mathbb R^d$ for general $d$).\nIf we write the polar decomposition of the L\\'evy measure $\\nu$ by\n\\begin{equation*}\n\\nu (B) = \\int_K \\lambda (d\\xi)\\int_0^{\\infty} 1_B(r\\xi)\\nu_{\\xi}(dr),\n\\end{equation*}\nwhere $K$ is the unit sphere $\\{\\xi\\in\\mathbb R^d\\colon |\\xi|=1\\}$ \nand $\\lambda$ is a probability measure on $K$,\nthen the element of $M(\\mathbb R^d)$ is characterized as a symmetric infinitely\ndivisible distribution such that\n\\begin{equation*}\n\\nu_{\\xi}(dr) = g_{\\xi}(r^2)r^{-1}dr\n\\end{equation*}\nwith $g_{\\xi}(u)$ being completely monotone as a function of \n$u\\in(0,\\infty)$ and measurable with respect to $\\xi$.\nRecall that if we write $\\nu_{\\xi}(dr) = g_{\\xi}(r^2)dr$ instead, this gives a \ncharacterization of type 
$G$ distributions on $\\mathbb R^d$ (\\cite{MR}).\nIn \\cite{AMR06} it is shown that\n$$\n\\{ \\text{type $G$ distributions on $\\mathbb R$ with selfdecomposable mixing \ndistributions}\\} \\subsetneqq M(\\mathbb R).\n$$\n\nNow, by Theorem \\ref{t4.1} combined with the observation above, we see that\n$\\mathcal L (Z)$ in \\eqref{4.1} belongs to $M(\\mathbb R)$.\nIt is of interest as a\nconcrete example of a random variable whose distribution belongs to $M(\\mathbb R)$.\n\nWe end the paper with a remark that, by Proposition 3.2 of \\cite{CPY01}, \nif $\\alpha =2$, our $\\mathcal L (Z)$ is also a \nPearson type IV distribution with parameters $\\lambda$ and $0$.\n\n\\bigskip\n{\\bf Acknowledgments.} The authors would like to thank Alexander Lindner and \nJan Rosi\\'nski for their helpful comments while this\npaper was written.\n\n\\bigskip \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nTransducers compiled from simple replace expressions {\\tt UPPER} {\\tt\n->} {\\tt LOWER} (Karttunen 1995, Kempe and Karttunen 1996) are\ngenerally nondeterministic in the sense that they may yield multiple\nresults even if the lower language consists of a single string. For\nexample, let us consider the transducer in Figure \\ref{net1},\nrepresenting {\\tt a b | b | b a | a b a -> x}.\\footnote{The regular\nexpression formalism and other notational conventions used in\nthe paper are explained in the Appendix at the end.}\n\n\\begin{figure}\n\\begin{center}\n \\centerline{\\psfig{file=acl1.eps}}\n\\caption{\\label{net1}\\verb+ a b | b | b a | a b a -> x +. 
The four\npaths with ``aba'' on the upper side are: $<$0~{\\tt a}~0~{\\tt\nb:x}~2~{\\tt a}~0$>$, $<$0~{\\tt a}~0~{\\tt b:x}~2~{\\tt a:0}~0$>$,\n$<$0~{\\tt a:x}~1~{\\tt b:0}~2~{\\tt a}~0$>$, and $<$0~{\\tt a:x}~1~{\\tt\nb:0}~2~{\\tt a:0}~0$>$.}\n\\end{center}\n\\vspace*{-8mm}\n\\end{figure}\n\nThe application of this transducer to the input ``aba'' produces\nfour alternate results, ``axa'', ``ax'', ``xa'', and ``x'', as shown\nin Figure \\ref{net1}, since there are four paths in the network that\ncontain ``aba'' on the upper side with different strings on the lower\nside.\n\nThis nondeterminism arises in two ways. First of all, a replacement\ncan start at any point. Thus we get different results for ``aba''\ndepending on whether we start at the beginning of the string or in the\nmiddle at the ``b''. Secondly, there may be alternative replacements\nwith the same starting point. In the beginning of ``aba'', we can\nreplace either ``ab'' or ``aba''. Starting in the middle, we can\nreplace either ``b'' or ``ba''. The underlining in Figure \\ref{tab1}\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n a b a      a b a      a b a      a b a\n   -          ---      ---        -----\n a x a      a x        x a        x\n\\end{verbatim}\n\n\\caption{\\label{tab1}Four factorizations of ``aba''.}\n\\vspace*{-2mm}\n\\end{figure}\nshows the four alternate factorizations of the input string, that is,\nthe four alternate ways to partition the string ``aba'' with respect\nto the upper language of the replacement expression. The corresponding\npaths in the transducer are listed in Figure \\ref{net1}.\n\nFor many applications, it is useful to define another version of\nreplacement that produces a unique outcome whenever the lower language\nof the relation consists of a single string. To limit the number of\nalternative results to one in such cases, we must impose a unique\nfactorization on every input.\n\nThe desired effect can be obtained by constraining the directionality\nand the length of the replacement. 
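The four-way ambiguity can be reproduced with a small brute-force search. The sketch below is a toy illustration, not from the paper: it covers only the special case where the upper language is a finite set of strings and the lower language is the single string `repl`. Because replacement is obligatory, a stretch that is copied unchanged may not itself contain an instance of the upper language.

```python
def outputs(s, patterns, repl):
    """All outputs of the obligatory replacement patterns -> repl:
    the input is partitioned into factors; factors in the pattern set
    are replaced, and no copied stretch may contain a pattern."""
    def clean(t):
        return not any(p in t for p in patterns)

    def go(i, copied):
        if i == len(s):
            if clean(copied):
                yield ""
            return
        if clean(copied):                      # may start a replacement here
            for p in patterns:
                if s.startswith(p, i):
                    for rest in go(i + len(p), ""):
                        yield repl + rest
        for rest in go(i + 1, copied + s[i]):  # or copy one symbol
            yield s[i] + rest

    return sorted(set(go(0, "")))

print(outputs("aba", ["ab", "b", "ba", "aba"], "x"))
# ['ax', 'axa', 'x', 'xa']
```

The four strings returned for "aba" are exactly the four alternate results discussed above.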
Directionality means that the\nreplacement sites in the input string are selected starting from the\nleft or from the right, not allowing any overlaps. The length\nconstraint forces us always to choose the longest or the shortest\nreplacement whenever there are multiple candidate strings starting at\na given location. We use the term {\\bf directed replacement} to\ndescribe a replacement relation that is constrained by directionality\nand length of match. (See the end of Section 2 for a discussion about\nthe choice of the term.)\n\nWith these two kinds of constraints we can define four types of\ndirected replacement, listed in Figure \\ref{tab2}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n                longest   shortest\n                match     match\n left-to-right  @->       @>\n right-to-left  ->@       >@\n\\end{verbatim}\n\\caption{\\label{tab2}Directed replacement operators}\n\\vspace*{-2mm}\n\\end{figure}\n\nFor reasons of space, we discuss here only the left-to-right,\nlongest-match version. The other cases are similar.\n\nThe effect of the directionality and length constraints is that some\npossible replacements are ignored. For example, {\\tt a b | b | b a |\na b a @-> x } maps ``aba'' uniquely into ``x'', Figure \\ref{net2}.\n\n\\begin{figure}[here]\n\\begin{center}\n \\centerline{\\psfig{file=acl2.eps}}\n\\caption{\\label{net2}\\verb+a b | b | b a | a b a @-> x+. The single\npath with ``aba'' on the upper side is: $<$0~{\\tt a:x}~1~{\\tt b:0}~2\n{\\tt a:0}~0$>$.}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\nBecause we must start from the left and have to choose the longest\nmatch, ``aba'' must be replaced, ignoring the possible replacements\nfor ``b'', ``ba'', and ``ab''. The {\\tt @->} operator allows only the\nlast factorization of ``aba'' in Figure \\ref{tab1}.\n\nLeft-to-right, longest-match replacement can be thought of as a\nprocedure that rewrites an input string sequentially from left to\nright. It copies the input until it finds an instance of {\\tt\nUPPER}. 
At that point it selects the longest matching substring, which\nis rewritten as {\\tt LOWER}, and proceeds from the end of that\nsubstring without considering any other alternatives. Figure\n\\ref{pict1} illustrates the idea.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{center}\n \\centerline{\\psfig{file=acl3.eps}}\n\\caption{\\label{pict1}Left-to-right, longest-match replacement}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\nIt is not obvious at the outset that the operation can in fact be\nencoded as a finite-state transducer for arbitrary regular patterns.\nAlthough a unique substring is selected for replacement at each point,\nin general the transduction is not unambiguous because {\\tt LOWER} is\nnot required to be a single string; it can be any regular language.\n\nThe idea of treating phonological rewrite rules in this way was the\nstarting point of Kaplan and Kay (1994). Their notion of obligatory\nrewrite rule incorporates a directionality constraint. They observe\n(p. 358), however, that this constraint does not by itself guarantee a\nsingle output. Kaplan and Kay suggest that additional restrictions,\nsuch as longest-match, could be imposed to further constrain rule\napplication.\\footnote{\\label{foot1}The tentative formulation of the\nlongest-match constraint in \\cite[p. 358]{Kaplan+Kay:regmod} is too\nweak. It does not cover all the cases.} We consider this issue in more\ndetail.\n\nThe crucial observation is that the two constraints, left-to-right and\nlongest-match, force a unique factorization on the input string thus\nmaking the transduction unambiguous if the {\\tt LOWER} language\nconsists of a single string. In effect, the input string is\nunambiguously {\\bf parsed} with respect to the {\\tt UPPER} language.\nThis property turns out to be important for a number of applications.\nThus it is useful to provide a replacement operator that implements\nthese constraints directly.\n\nThe definition of the \\verb.UPPER @-> LOWER. 
relation is presented in\nthe next section. Section 3 introduces a novel type of replace\nexpression for constructing transducers that unambiguously recognize\nand mark instances of a regular language without actually replacing\nthem. Section 4 identifies some useful applications of the new\nreplacement expressions.\n\n\\section{Directed Replacement}\nWe define directed replacement by means of a composition of regular\nrelations. As in Kaplan and Kay (1994), Karttunen (1995), and other\nprevious works on related topics, the intermediate levels of the\ncomposition introduce auxiliary symbols to express and enforce\nconstraints on the replacement relation. Figure \\ref{tab3} shows the\ncomponent relations and how they are composed with the input.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n Input string\n .o.\n Initial match\n .o.\n Left-to-right constraint\n .o.\n Longest-match constraint\n .o.\n Replacement\n\\end{verbatim}\n\\caption{\\label{tab3}Composition of directed replacement}\n\\vspace*{-2mm}\n\\end{figure}\n\nIf the four relations on the bottom of Figure \\ref{tab3} are composed in\nadvance, as our compiler does, the application of the replacement to\nan input string takes place in one step without any intervening levels\nand with no auxiliary symbols. But it helps to understand the logic to\nsee where the auxiliary marks would be in the hypothetical\nintermediate results.\n\nLet us consider the case of {\\tt a b | b | b a | a b a} {\\tt @-> x }\napplying to the string ``aba'' and see in detail how the mapping\nimplemented by the transducer in Figure \\ref{net2} is composed from\nthe four component relations. We use three auxiliary symbols, caret\n(\\verb+^+), left bracket (\\verb+<+) and right bracket (\\verb+>+),\nassuming here that they do not occur in any input. 
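Before stepping through the mark-and-constrain construction, the intended input-output behaviour can be stated procedurally. The sketch below is a toy, not the compiler's method; it assumes a finite upper language and a single-string lower language. It scans left to right, always takes the longest candidate match, and never reconsiders a decision:

```python
def directed_replace(text, patterns, repl):
    """Left-to-right, longest-match replacement: the behaviour of
    UPPER @-> LOWER for a finite UPPER and a one-string LOWER."""
    out, i = [], 0
    while i < len(text):
        # longest member of the upper language starting at position i
        best = max((p for p in patterns if text.startswith(p, i)),
                   key=len, default=None)
        if best is None:
            out.append(text[i])      # no match starts here: copy
            i += 1
        else:
            out.append(repl)         # replace the longest match
            i += len(best)           # resume after it, no backtracking
    return "".join(out)

print(directed_replace("aba", ["ab", "b", "ba", "aba"], "x"))  # x
```

On "aba" the longest match at the start is "aba" itself, so the single output is "x", in agreement with the unique factorization imposed by the two constraints.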
The first step,\nshown in Figure \\ref{tab4}, composes the input string with a\ntransducer that inserts a caret at the beginning of every substring\nthat belongs to the upper language.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n   a   b a\n ^ a ^ b a\n\\end{verbatim}\n\\caption{\\label{tab4}Initial match. Each caret marks the beginning of\na substring that matches ``ab'', ``b'', ``ba'', or ``aba''.}\n\\vspace*{-2mm}\n\\end{figure}\n\nNote that only one \\verb+^+ is inserted even if there are several\ncandidate strings starting at the same location.\n\nIn the left-to-right step, we enclose in angle brackets all the\nsubstrings starting at a location marked by a caret that are instances\nof the upper language. The initial caret is replaced by a {\\tt <}, and\na closing {\\tt >} is inserted to mark the end of the match. We permit\ncarets to appear freely while matching. No carets are permitted\noutside the matched substrings and the ignored internal carets are\neliminated. In this case, there are four possible outcomes, shown in\nFigure \\ref{tab5}, but only two of them are allowed under the\nconstraint that there can be no carets outside the brackets.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n ALLOWED\n\n ^ a ^ b a      ^ a ^ b a\n < a b > a      < a b a >\n\n NOT ALLOWED\n\n ^ a ^ b a        ^ a ^ b a\n ^ a < b > a      ^ a < b a >\n\\end{verbatim}\n\\caption{\\label{tab5}Left-to-right constraint. {\\it No caret\noutside a bracketed region.}}\n\\vspace*{-2mm}\n\\end{figure}\n\nIn effect, no starting location for a replacement can be skipped over\nexcept in the context of another replacement starting further left in\nthe input string. (Roche and Schabes (1995) introduce a similar\ntechnique for imposing the left-to-right order on the transduction.)\nNote that the four alternatives in Figure \\ref{tab5} represent the four\nfactorizations in Figure \\ref{tab1}.\n\nThe longest-match constraint is the identity relation on a certain set\nof strings. 
It forbids any replacement that starts at the same\nlocation as another, longer replacement. In the case at hand, it means\nthat the internal {\\tt >} is disallowed in the context \\verb+< a b >\na+. Because ``aba'' is in the upper language, there is a longer, and\ntherefore preferred, \\verb+< a b a >+ alternative at the same starting\nlocation, Figure \\ref{tab5a}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n ALLOWED        NOT ALLOWED\n\n < a b a >      < a b > a\n\\end{verbatim}\n\\caption{\\label{tab5a}Longest match constraint. {\\it No upper language\nstring with an initial }{\\tt < }{\\it and a nonfinal }{\\tt > }{\\it in\nthe middle}.}\n\\vspace*{-2mm}\n\\end{figure}\n\nIn the final replacement step, the bracketed regions of the input\nstring, in the case at hand, just \\verb+< a b a >+ , are replaced by\nthe strings of the lower language, yielding ``x'' as the result for\nour example.\n\nNote that the longest-match constraint ignores any internal brackets. For\nexample, the bracketing {\\tt < a > < a >} is not allowed if the upper\nlanguage contains ``aa'' as well as ``a''. Similarly, the\nleft-to-right constraint ignores any internal carets.\n\nAs the first step towards a formal definition of {\\tt UPPER @-> LOWER}\nit is useful to make the notion of ``ignoring internal brackets'' more\nprecise. Figure \\ref{tab5b} contains the auxiliary definitions. For\nthe details of the formalism (briefly explained in the Appendix),\nplease consult Karttunen (1995), Kempe and Karttunen\n(1996).\\footnote{\\label{foot2}{\\tt UPPER'} is the same language as\n{\\tt UPPER} except that carets may appear freely in all nonfinal\npositions. 
Similarly, {\\tt UPPER''} accepts any nonfinal brackets.}\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n UPPER' = UPPER\/\n UPPER'' = UPPER\/\n\\end{verbatim}\n\\caption{\\label{tab5b}Versions of {\\tt UPPER} that freely allow\nnonfinal diacritics.}\n\\vspace*{-2mm}\n\\end{figure} \n\nThe precise definition of the \\verb+UPPER @-> LOWER+ relation is given\nin Figure \\ref{tab6}. It is a composition of many auxiliary relations.\nWe label the major components in accordance with the outline in Figure\n\\ref{tab3}. The formulation of the longest-match constraint is based\non a suggestion by Ronald M. Kaplan (p.c.).\n\\begin{figure}[here]\n\\vspace*{-2mm}\n{\\it Initial match}\n\\begin{verbatim}\n ~$[\n .o.\n [. .] ->\n .o.\n\\end{verbatim}\n{\\it Left to right}\n\\begin{verbatim}\n [~$\n .o.\n \n .o.\n\\end{verbatim}\n{\\it Longest match}\n\\begin{verbatim}\n ~$\n .o.\n\\end{verbatim}\n{\\it Replacement}\n\\begin{verbatim}\n \n\\end{verbatim}\n\\caption{\\label{tab6}Definition of {\\tt UPPER @-> LOWER}}\n\\vspace*{-2mm}\n\\end{figure}\n\nThe logic of {\\tt @->} replacement could be encoded in many other\nways, for example, by using the three pairs of auxiliary brackets,\n{\\tt i}, {\\tt c}, and {\\tt a},\nintroduced in Kaplan and Kay (1994). We take here a more minimalist\napproach. One reason is that we prefer to think of the simple\nunconditional (uncontexted) replacement as the basic case, as in\nKarttunen (1995). Without the additional complexities introduced by\ncontexts, the directionality and length-of-match constraints can be\nencoded with fewer diacritics. (We believe that the conditional case\ncan also be handled in a simpler way than in Kaplan and Kay (1994).)\nThe number of auxiliary markers is an important consideration for some\nof the applications discussed below.\n\nIn a phonological or morphological rewrite rule, the center part of\nthe rule is typically very small: a modification, deletion or\ninsertion of a single segment. 
On the other hand, in our text\nprocessing applications, the upper language may involve a large\nnetwork representing, for example, a lexicon of multiword\ntokens. Practical experience shows that the presence of many auxiliary\ndiacritics makes it difficult or impossible to compute the\nleft-to-right and longest-match constraints in such cases. The size of\nintermediate states of the computation becomes a critical issue, while\nit is irrelevant for simple phonological rules. We will return to\nthis issue in the discussion of tokenizing transducers in Section 4.\n\nThe transducers derived from the definition in Figure \\ref{tab6} have\nthe property that they unambiguously parse the input string into a\nsequence of substrings that are either copied to the output unchanged or\nreplaced by some other strings. However, they do not fall neatly into\nany standard class of transducers discussed in the literature\n(Eilenberg 1974, Sch\\\"{u}tzenberger 1977, Berstel 1979). If the {\\tt\nLOWER} language consists of a single string, then the relation encoded\nby the transducer is in Berstel's terms a {\\bf rational function}, and\nthe network is an {\\bf unambiguous} transducer, even though it may\ncontain states with outgoing transitions to two or more destinations\nfor the same input symbol. An unambiguous transducer may also be {\\bf\nsequentiable}, in which case it can be turned into an equivalent {\\bf\nsequential} transducer \\cite{Mohri:fsa+nlp}, which can in turn be\nminimized. A transducer is sequential just in case there are no states\nwith more than one transition for the same input symbol. Roche and\nSchabes (1995) call such transducers {\\bf deterministic}.\n\nOur replacement transducers in general are not unambiguous because we\nallow {\\tt LOWER} to be any regular language. It may well turn out\nthat, in all cases that are of practical interest, the lower language\nis in fact a singleton, or at least some finite set, but it is not so\nby definition. 
Even if the replacement transducer is unambiguous, it\nmay well be unsequentiable if {\\tt UPPER} is an infinite language.\nFor example, the simple transducer for {\\tt a+ b @-> x} in Figure\n\\ref{net3} cannot be sequentialized. It has to replace any string of\n``a''s by ``x'' or copy it to the output unchanged depending on\nwhether the string eventually terminates at ``b''. It is obviously\nimpossible for any finite-state device to accumulate an unbounded\namount of delayed output. On the other hand, the transducer in Figure\n\\ref{net2} is sequentiable because there the choice between {\\tt a}\nand {\\tt a:x} just depends on the next input symbol.\n\\begin{figure}\n\\begin{center}\n \\centerline{\\psfig{file=acl4.eps}}\n\\caption{\\label{net3}\\verb| a+ b @-> x|. This transducer is\nunambiguous but cannot be sequentialized.}\n\\end{center}\n\\vspace*{-8mm}\n\\end{figure}\n\nBecause none of the classical terms fits exactly, we have chosen a\nnovel term, {\\bf directed transduction}, to describe a relation\ninduced by the definition in Figure \\ref{tab6}. It is meant to suggest\nthat the mapping from the input into the output strings is guided by the\ndirectionality and length-of-match constraints. Depending on the\ncharacteristics of the {\\tt UPPER} and {\\tt LOWER} languages, the\nresulting transducers may be unambiguous and even sequential, but\nthat is not guaranteed in the general case.\n\n\\section{Insertion}\nThe effect of the left-to-right and longest-match constraint is to\nfactor any input string uniquely with respect to the upper language of\nthe replace expression, to parse it into a sequence of substrings that\neither belong or do not belong to the language. Instead of replacing\nthe instances of the upper language in the input by other strings, we\ncan also take advantage of the unique factorization in other ways. 
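The input-output behaviour of {\tt a+ b @-> x} itself is easy to emulate with an ordinary regular-expression engine, because for this particular pattern Python's greedy, leftmost matching happens to coincide with the left-to-right, longest-match regimen (this does not hold for arbitrary alternations, where Python takes the first alternative rather than the longest). What cannot be done is turning the scan into a one-symbol-lookahead sequential process: the decision about the leading ``a''s stays suspended until a ``b'' arrives or the string ends.

```python
import re

rule = re.compile(r'a+b')       # UPPER = a+ b, LOWER = x

print(rule.sub('x', 'aaab'))    # 'x'    -- the whole a+ b block is replaced
print(rule.sub('x', 'aaa'))     # 'aaa'  -- no b ever arrives: plain copy
print(rule.sub('x', 'aabaab'))  # 'xx'
```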
For\nexample, we may insert a string before and after each substring that\nis an instance of the language in question simply to mark it as such.\n\nTo implement this idea, we introduce the special symbol ... on the\nright-hand side of the replacement expression to mark the place around\nwhich the insertions are to be made. Thus we allow replacement\nexpressions of the form {\\tt UPPER @-> PREFIX ... SUFFIX}. The\ncorresponding transducer locates the instances of {\\tt UPPER} in the\ninput string under the left-to-right, longest-match regimen just\ndescribed. But instead of replacing the matched strings, the\ntransducer just copies them, inserting the specified prefix and\nsuffix. For the sake of generality, we allow {\\tt PREFIX} and {\\tt\nSUFFIX} to denote any regular language.\n\nThe definition of {\\tt UPPER @-> PREFIX ...} {\\tt SUFFIX} is just as\nin Figure \\ref{tab6} except that the Replacement expression is\nreplaced by the Insertion formula in Figure \\ref{tab7}, a simple\nparallel replacement of the two auxiliary brackets that mark the\nselected regions. Because the placement of \\verb+<+ and \\verb+>+ is\nstrictly controlled, they do not occur anywhere else.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n{\\it Insertion}\n\\begin{verbatim}\n %< -> PREFIX, %> -> SUFFIX ;\n\\end{verbatim}\n\\caption{\\label{tab7}Insertion expression in the definition of\n{\\tt UPPER @-> PREFIX ... SUFFIX}.}\n\\vspace*{-2mm}\n\\end{figure}\n\nWith the ... expressions we can construct transducers that mark\nmaximal instances of a regular language. For example, let us assume\nthat noun phrases consist of an optional determiner, {\\tt (d)}, any\nnumber of adjectives, {\\tt a*}, and one or more nouns, {\\tt n+}. The\nexpression \\verb|(d) a* n+ @-> %[...%]| yields a transducer\nthat inserts brackets around maximal instances of the noun phrase\npattern. 
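A quick way to see the intended effect is a regex stand-in for the marking transducer (a sketch, usable here only because Python's greedy quantifiers deliver the longest match for this particular pattern):

```python
import re

# (d) a* n+ @-> %[ ... %]  approximated with a greedy regular expression
np = re.compile(r'd?a*n+')
marked = np.sub(lambda m: '[' + m.group(0) + ']', 'dannvaan')
print(marked)  # [dann]v[aan]
```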
For example, it maps {\\tt \"dannvaan\"} into {\\tt\n\"[dann]v[aan]\"}, as shown in Figure \\ref{tab8}.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{verbatim}\n d a n n v a a n\n ------- -----\n [ d a n n ] v [ a a n ]\n\\end{verbatim}\n\\caption{\\label{tab8}Application of\\verb| (d) a* n+ @-> \\%[...\\%] |to\n{\\tt \"dannvaan\"}}\n\\vspace*{-2mm}\n\\end{figure}\n\nAlthough the input string \\verb+\"dannvaan\"+ contains many other\ninstances of the noun phrase pattern, \\verb+\"n\"+, \\verb+\"an\"+,\n\\verb+\"nn\"+, etc., the left-to-right and longest-match constraints\npick out just the two maximal ones. The transducer is displayed\nin Figure \\ref{net4}. Note that {\\tt ?} here matches symbols, such as\n{\\tt v}, that are not included in the alphabet of the network.\n\n\\begin{figure}[here]\n\\vspace*{-2mm}\n\\begin{center}\n \\centerline{\\psfig{file=acl5.eps}}\n\\caption{\\label{net4}\\verb|(d) a* n+ @-> \\%[...\\%]|. The\none path with ``dannvaan'' on the upper side is: $<$0 {\\tt 0:[} 7\n{\\tt d} 3 {\\tt a} 3 {\\tt n} 4 {\\tt n} 4 {\\tt 0:]} 5 {\\tt v} 0 {\\tt\n0:[} 7 {\\tt a} 3 {\\tt a} 3 {\\tt n} 4 {\\tt 0:]} 5$>$.}\n\\end{center}\n\\vspace*{-6mm}\n\\end{figure}\n\n\\section{Applications}\nThe directed replacement operators have many useful applications. We\ndescribe some of them. Although the same results could often be\nachieved by using lex and yacc, sed, awk, perl, and other Unix\nutilities, there is an advantage in using finite-state transducers for\nthese tasks because they can then be smoothly integrated with other\nfinite-state processes, such as morphological analysis by lexical\ntransducers (Karttunen 1994) and rule-based part-of-speech\ndisambiguation (Chanod and Tapanainen 1995, Roche and Schabes 1995).\n\n\\subsection{Tokenization}\nA tokenizer is a device that segments an input string into a sequence\nof tokens. 
The insertion of end-of-token marks can be accomplished by\na finite-state transducer that is compiled from tokenization\nrules. The tokenization rules may be of several types. For example,\n\\verb|[WHITE_SPACE+ @-> SPACE]| is a rule\nthat reduces any sequence of tabs, spaces, and newlines to a single\nspace. \\verb|[LETTER+| \\verb|@-> ... END_OF_TOKEN]| inserts a\nspecial mark, e.g. a newline, at the end of a letter sequence.\n\nAlthough a space generally counts as a token boundary, it can also be\npart of a multiword token, as in expressions like ``at least'', ``head\nover heels'', ``in spite of'', etc. Thus the rule that introduces the\n\\verb+END_OF_TOKEN+ symbol needs to combine the \\verb|LETTER+| pattern\nwith a list of multiword tokens which may include spaces, periods and\nother delimiters.\n\nFigure \\ref{tab9} outlines the construction of a simple tokenizing transducer\nfor English.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n WHITE_SPACE+ @-> SPACE\n .o.\n [ LETTER+ |\n   a t % l e a s t |\n   h e a d % o v e r % h e e l s |\n   i n % s p i t e % o f ]\n @-> ... END_OF_TOKEN\n .o.\n SPACE -> [] || .#. | END_OF_TOKEN _ ;\n\\end{verbatim}\n\\caption{\\label{tab9}A simple tokenizer}\n\\vspace{-2mm}\n\\end{figure}\n\nThe tokenizer in Figure \\ref{tab9} is composed of three\ntransducers. The first reduces strings of whitespace characters to a\nsingle space. The second transducer inserts an \\verb+END_OF_TOKEN+\nmark after simple words and the listed multiword expressions. The\nthird removes the spaces that are not part of some multiword\ntoken. The percent sign here means that the following blank is to be\ntaken literally, that is, parsed as a symbol.\n\nWithout the left-to-right, longest-match constraints, the tokenizing\ntransducer would not produce deterministic output. Note that it must\nintroduce an \\verb+END_OF_TOKEN+ mark after a sequence of letters just\nin case the word is not part of some longer multiword token. This\nproblem is complicated by the fact that the list of multiword tokens\nmay contain overlapping expressions. 
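The longest-match logic that the compiled tokenizer encodes can be imitated procedurally. The sketch below is a toy, not the transducer (punctuation and word-boundary subtleties are ignored): at each position, the longest listed multiword token wins over the plain-word reading.

```python
MULTIWORD = ["at least", "head over heels", "in spite of"]

def tokenize(text):
    text = " ".join(text.split())           # whitespace -> single space
    tokens, i = [], 0
    while i < len(text):
        if text[i] == " ":
            i += 1
            continue
        # longest multiword token starting here beats a simple word
        best = max((m for m in MULTIWORD if text.startswith(m, i)),
                   key=len, default=None)
        if best is None:
            j = text.find(" ", i)
            best = text[i:] if j < 0 else text[i:j]
        tokens.append(best)
        i += len(best)
    return tokens

print(tokenize("she was  head over heels in love"))
# ['she', 'was', 'head over heels', 'in', 'love']
```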
A tokenizer for French, for\nexample, needs to recognize ``de plus'' (moreover), ``en plus''\n(more), ``en plus de'' (in addition to), and ``de plus en plus'' (more\nand more) as single tokens. Thus there is a token boundary after ``de\nplus'' in {\\it de plus on ne le fait plus} (moreover one doesn't do it\nanymore) but not in {\\it on le fait de plus en plus} (one does it more\nand more) where ``de plus en plus'' is a single token.\n\nIf the list of multiword tokens contains hundreds of expressions, it\nmay require a lot of time and space to compile the tokenizer even if\nthe final result is not too large. The number of auxiliary symbols\nused to encode the constraints has a critical effect on the efficiency\nof that computation. We first observed this phenomenon in the course\nof building a tokenizer for the British National Corpus according to\nthe specifications of the {\\sc bnc} Users Guide \\cite{Leech:bnc},\nwhich lists around 300 multiword tokens and 260 foreign phrases. With\nthe current definition of the directed replacement we have now been\nable to compute similar tokenizers for several other languages\n(French, Spanish, Italian, Portuguese, Dutch, German).\n\n\\subsection{Filtering}\nSome text processing applications involve a preliminary stage in which\nthe input stream is divided into regions that are passed on to the\ncalling process and regions that are ignored. For example, in\nprocessing an {\\sc sgml}-coded document, we may wish to delete all the\nmaterial that appears or does not appear in a region bounded by\ncertain {\\sc sgml} tags, say {\\tt <A>} and {\\tt <\/A>}.\n\nBoth types of filters can easily be constructed using the\ndirected replace operator. 
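In plain regular-expression terms the two kinds of filter behave as follows (a rough stand-in for the transducer construction, assuming the tag pairs are not nested; the input string is our own invented example):

```python
import re

text = '<B>one</B><A>two</A>three<A>four</A>'

# negative filter: delete every <A>...</A> region, tags included
negative = re.sub(r'<A>.*?</A>', '', text)
print(negative)   # <B>one</B>three

# positive filter: keep only the <A>...</A> regions
positive = ''.join(re.findall(r'<A>.*?</A>', text))
print(positive)   # <A>two</A><A>four</A>
```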
A negative filter that deletes all the\nmaterial between the two {\\sc sgml} codes, including the codes themselves,\nis expressed as in Figure \\ref{tab10}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n \"<A>\" ~$[\"<A>\"|\"<\/A>\"] \"<\/A>\" @-> [] ;\n\\end{verbatim}\n\\caption{\\label{tab10} A negative filter}\n\\vspace{-2mm}\n\\end{figure}\n\nA positive filter that excludes everything else can be expressed as in\nFigure \\ref{tab11}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n ~$\"<\/A>\" \"<A>\" @-> \"<A>\"\n .o.\n \"<\/A>\" ~$\"<A>\" @-> \"<\/A>\" ;\n\\end{verbatim}\n\\caption{\\label{tab11}A positive filter}\n\\vspace{-2mm}\n\\end{figure}\n\nThe positive filter is composed of two transducers. The first reduces\nto {\\tt <A>} any string that ends with it and does not contain the\n{\\tt <\/A>} tag. The second transducer does a similar transduction on\nstrings that begin with {\\tt <\/A>}. Figure \\ref{tab12} illustrates the effect\nof the positive filter.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n<B>one<\/B><A>two<\/A><C>three<\/C><A>four<\/A>\n-------------   ----------------\n      <A>two<\/A>            <A>four<\/A>\n\\end{verbatim}\n\\caption{\\label{tab12}Application of a positive filter}\n\\vspace{-2mm}\n\\end{figure}\n\nThe idea of filtering by finite-state transduction of course does not\ndepend on {\\sc sgml} codes. It can be applied to texts where the\ninteresting and uninteresting regions are defined by any kind of\nregular pattern.\n\n\\subsection{Marking}\n\nAs we observed in section 3, by using the ... symbol on the lower side\nof the replacement expression, we can construct transducers that mark\ninstances of a regular language without changing the text in any other\nway. Such transducers have a wide range of applications. They can be\nused to locate all kinds of expressions that can be described by a\nregular pattern, such as proper names, dates, addresses, social\nsecurity and phone numbers, and the like. 
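For instance, a regex sketch of such a ``spotter'' (the patterns below are invented purely for illustration) wraps every hit in a labelled bracket pair and copies everything else unchanged:

```python
import re

def mark(pattern, tag, s):
    # wrap every match in [TAG ... ], leaving the rest of the text intact
    return re.sub(pattern, lambda m: '[' + tag + ' ' + m.group(0) + ']', s)

s = "call 555-0123 before 12/31/1996"
s = mark(r'\d{3}-\d{4}', 'PHONE', s)
s = mark(r'\d{1,2}/\d{1,2}/\d{4}', 'DATE', s)
print(s)  # call [PHONE 555-0123] before [DATE 12/31/1996]
```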
Such a marking transducer\ncan be viewed as a deterministic parser for a ``local grammar'' in the\nsense of Gross (1989), Roche (1993), Silberztein (1993) and others.\n \nBy composing two or more marking transducers, we can also construct a\nsingle transducer that builds nested syntactic structures, up to any\ndesired depth. To make the construction simpler, we can start by\ndefining auxiliary symbols for the basic regular patterns. For example,\nwe may define {\\tt NP} as \\verb|[(d) a* n+]|. With that abbreviatory\nconvention, a composition of a simple {\\tt NP} and {\\tt VP} spotter\ncan be defined as in Figure \\ref{tab13}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n NP @-> %[NP ... %]\n .o.\n v %[NP ~$[%[|%]] %] @-> %[VP ... %] ;\n\\end{verbatim}\n\\caption{\\label{tab13}Composition of an {\\tt NP} and a {\\tt VP} spotter}\n\\vspace{-2mm}\n\\end{figure}\n\nFigure \\ref{tab14} shows the effect of applying this composite transducer to\nthe string {\\tt \"dannvaan\"}.\n\n\\begin{figure}[here]\n\\vspace{-2mm}\n\\begin{verbatim}\n d a n n   v a a n\n -------   - -----\n [NP d a n n ] [VP v [NP a a n ] ]\n\\end{verbatim}\n\\caption{\\label{tab14}Application of an {\\tt NP-VP} parser}\n\\vspace{-2mm}\n\\end{figure}\n\nBy means of this simple ``bottom-up'' technique, it is possible to\ncompile finite-state transducers that approximate a context-free\nparser up to a chosen depth of embedding. Of course, the\nleft-to-right, longest-match regimen implies that some possible\nanalyses are ignored. To produce all possible parses, we may introduce\nthe ... notation to the simple replace expressions in Karttunen\n(1995).\n\n\n\\section{Extensions}\n\nThe definition of the left-to-right, longest-match replacement can\neasily be modified for the three other directed replace operators\nmentioned in Figure \\ref{tab2}. 
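As with the earlier marking examples, the bottom-up {\tt NP}--{\tt VP} cascade of Figures \ref{tab13} and \ref{tab14} can be imitated with two cascaded regex passes (a toy stand-in for the composed transducer; the bracket spacing is simplified):

```python
import re

def mark_np(s):
    # (d) a* n+  ->  [NP ... ]
    return re.sub(r'd?a*n+', lambda m: '[NP ' + m.group(0) + ' ]', s)

def mark_vp(s):
    # a v followed by an already-bracketed NP  ->  [VP ... ]
    return re.sub(r'v\[NP [^\]]*\]', lambda m: '[VP ' + m.group(0) + ' ]', s)

print(mark_vp(mark_np('dannvaan')))  # [NP dann ][VP v[NP aan ] ]
```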
Another extension, already\nimplemented, is a directed version of parallel replacement (Kempe and\nKarttunen 1996), which allows any number of replacements to be done\nsimultaneously without interfering with each other. Figure \\ref{tab15}\nis an example of a directed parallel replacement. It yields a\ntransducer that maps a string of ``a''s into a single ``b'' and \na string of ``b''s into a single ``a''.\n\n\\begin{figure}[here]\n\\begin{verbatim}\n a+ @-> b, b+ @-> a ;\n\\end{verbatim}\n\\caption{\\label{tab15}Directed, parallel replacement}\n\\vspace{-2mm}\n\\end{figure}\n\nThe definition of directed parallel replacement requires no\nadditions to the techniques already presented. In the near future we\nalso plan to allow directional and length-of-match constraints in the\nmore complicated case of conditional context-constrained replacement.\n\n\\section{Acknowledgements}\n\nI would like to thank Ronald M. Kaplan, Martin Kay, Andr\\'{e} Kempe,\nJohn Maxwell, and Annie Zaenen for helpful discussions at the\nbeginning of the project, as well as Paula Newman and Kenneth\nR. Beesley for editorial advice on the first draft of the paper. The\nwork on tokenizers and phrasal analyzers by Anne Schiller and Gregory\nGrefenstette revealed the need for a more efficient implementation of\nthe idea. The final version of the paper has benefited from detailed\ncomments by Ronald M. Kaplan and two anonymous reviewers, who\nconvinced me to discard the ill-chosen original title (``Deterministic\nReplacement'') in favor of the present one.\n\n\\section{Appendix: Notational conventions}\n\nThe regular expression formalism used in this paper is essentially the\nsame as in Kaplan and Kay (1994), in Karttunen (1995), and in Kempe\nand Karttunen (1996). Upper-case strings, such as {\\tt UPPER},\nrepresent regular languages, and lower-case letters, such as {\\tt x},\nrepresent symbols. 
We recognize two types of symbols: unary symbols\n({\\tt a}, {\\tt b}, {\\tt c}, etc.) and symbol pairs ({\\tt a:x}, {\\tt\nb:0}, etc.).\n\nA symbol pair {\\tt a:x} may be thought of as the crossproduct of {\\tt\na} and {\\tt x}, the minimal relation consisting of {\\tt a} (the upper\nsymbol) and {\\tt x} (the lower symbol). To make the notation less\ncumbersome, we systematically ignore the distinction between the\nlanguage {\\tt A} and the identity relation that maps every string\nof {\\tt A} into itself. Consequently, we also write {\\tt a:a} as\njust {\\tt a}.\n\nThree special symbols are used in regular expressions: {\\tt 0} (zero)\nrepresents the empty string (often denoted by $\\epsilon$); {\\tt ?}\nstands for any symbol in the known alphabet and its extensions; in\nreplacement expressions, {\\tt .\\#.} marks the start (left context) or\nthe end (right context) of a string. The percent sign, {\\tt \\%}, is\nused as an escape character. It allows letters that have a special\nmeaning in the calculus to be used as ordinary symbols. Thus {\\tt \\%[}\ndenotes the literal square bracket as opposed to {\\tt [}, which has a\nspecial meaning as a grouping symbol; {\\tt \\%0} is the ordinary zero symbol.\nDouble quotes around a symbol have the same effect as the percent\nsign.\n\nThe following simple expressions appear frequently in the formulas:\n{\\tt []} the empty string language, {\\tt ?*} the universal (``sigma\nstar'') language.\n\nThe regular expression operators used in the paper are: {\\tt *} zero\nor more (Kleene star), {\\tt +} one or more (Kleene plus), \\verb+~+ not\n(complement), {\\tt \\$} contains, {\\tt \/} ignore, {\\tt |} or (union),\n{\\tt \\&} and (intersection), {\\tt -} minus (relative complement), {\\tt\n.x.} crossproduct, {\\tt .o.} composition, \\verb+->+ simple replace.\n\nIn the transducer diagrams (Figures \\ref{net1}, \\ref{net2}, etc.),\nthe nonfinal states are represented by single circles, final states\nby double circles. 
State 0 is the initial state. The symbol {\\tt ?}\nrepresents any symbols that are not explicitly present in the\nnetwork. Transitions that differ only with respect to the label\nare collapsed into a single multiply labelled arc.\n\n\\section{Introduction} \\label{section_intro}\n\nTime synchronization between nodes of a communication network is a common assumption made to\nanalyze and design such networks. However, in practice, it is very difficult to exactly\nsynchronize separate nodes either in time or frequency. As an example, in systems with different\ntransmitters, the transmitters must use their own locally generated clock. However, the\ninitialization might be different for each clock and the frequencies at the local signal generators\nmay not be perfectly matched \\cite{Hui_Humblet:85}. Indeed, achieving time, phase or frequency synchronization in practical communication systems has been a major engineering issue and still remains an active area of research (see e.g., \\cite{Wornell_asynchronism:2009}). Thus, fundamental limits of communication in the presence of time asynchronism should be explicitly addressed as a tool to better understand and tackle\nreal-world challenges in the context of multiuser information theory.\n\nThe problem of finding the capacity region of multiuser channels with no time synchronization\nbetween the encoders is considered in \\cite{Cover_McEliece:81}, \\cite{Hui_Humblet:85},\n\\cite{Farkas_Koi:2012}, and \\cite{Grant_Rimoldi_Urbanke_Whiting:2001} from a channel coding\nperspective only for the specific case of multiple access channels (MAC). In \\cite{Verdu_memory:1989}, a frame asynchronous MAC with memory is considered and it is shown that the capacity region can be drastically reduced in the presence of frame asynchronism.\nIn \\cite{Verdu:1989}, an asynchronous MAC is also considered, but with symbol asynchronism. 
All of\nthese works constrain themselves to the study of channel coding only and disregard the source-channel communication\nof correlated sources over asynchronous channels. In this paper, we are interested in the problem of joint source-channel coding (JSCC) of a set of correlated sources over time-asynchronous multiuser channels which can include relaying as well. In particular, we focus on the analysis of JSCC for a MAC with the presence of a relay, also known as a multiple access relay channel (MARC).\n\nThe problem of JSCC for multiuser networks is open in general. However, numerous results have been published on different aspects of the problem for specific channels and under specific assumptions such as phase or time asynchronism between the nodes. In \\cite{Cover_ElGamal_Salehi:1980}, a sufficient condition for lossless communication of correlated sources over a discrete memoryless MAC is given. Although not always optimal, as shown in \\cite{Dueck:1981}, the achievable scheme of \\cite{Cover_ElGamal_Salehi:1980} outperforms separate source-channel coding. In \\cite{FadiAbdallah_Caire:2008}, however, the authors show that under phase fading, separation is optimal for the important case of a Gaussian MAC. Also, \\cite{Saffar:phase}, \\cite{Saffar_Globecom:2012} show the optimality of separate source-channel coding for several Gaussian networks with phase uncertainty among the nodes. Other authors have derived JSCC coding results for the broadcast channels \\cite{Coleman:2006}, \\cite{Tian_Diggavi_Shamai_BCfull:2011}, interference relay channels \\cite{Saffar_ISIT:2012}, and other multiuser channels \\cite{Gunduz:2009}. 
Furthermore, for lossy source-channel coding, a separation approach is shown in \\cite{Tian_Diggavi_Shamai_full_version:2012} to be optimal or approximately optimal for certain classes of sources and networks.\n\n\nIn \\cite{Saffar_Mitran_ISIT2013}, we have considered a two user time asynchronous Gaussian MAC with a pair of correlated sources. There, we have derived necessary and sufficient conditions for reliable communication and consequently derived a separation theorem for the problem. This paper extends the work of \\cite{Saffar_Mitran_ISIT2013} to a more general setup with $K$ nodes and a relay. Also, the recent work \\cite{Yemini_Asynchronous_Side:2014} considers the point-to-point state-dependent and cognitive multiple access channels with time asynchronous side information.\n\nIn \\cite{Cover_McEliece:81}, the authors have considered a MAC with no common time base between encoders.\nThere, the encoders transmit with an unknown offset with respect to each other, and the offset is\nbounded by a maximum value $\\dm(n)$ that is a function of coding block length $n$. Using a\ntime-sharing argument, it is shown that the capacity region is the same as the capacity of the\nordinary MAC as long as $\\dm(n)\/n \\rightarrow 0$. On the other hand, \\cite{Hui_Humblet:85}\nconsiders a {\\em totally asynchronous} MAC in which the coding blocks of different users can\npotentially have no overlap at all, and thus potentially have several block lengths of shifts between\nthemselves (denoted by random variables $\\Delta_i$). Moreover, the encoders have different clocks\nthat are referenced with respect to a standard clock, and the offsets between the start of code\nblocks for the standard clock and the clock at transmitter $i$ are denoted by random variables\n$D_i$. For such a scenario, in \\cite{Hui_Humblet:85}, it is shown that the capacity region differs\nfrom that of the synchronous MAC only by the lack of the convex hull operation. 
In\n\\cite{Poltyrev:83}, Poltyrev also considers a model with arbitrary delays, known to the receiver\n(as opposed to \\cite{Hui_Humblet:85}). Among other related works is the recent paper\n\\cite{Farkas_Koi:2012} that finds a single-letter capacity region for the case of a $3$-sender MAC,\nin which $2$ of the senders are synchronized with each other and both are asynchronous with respect to the third one.\n\n\nIn this paper, we study the communication of $K$ correlated sources over a $K$-user Gaussian time-asynchronous MARC\n(TA-MARC) where the encoders cannot synchronize the starting times of their codewords. Rather, they\ntransmit with unknown nonnegative time delays $d_1,d_2,\\cdots,d_{K+1}\\geq 0$ with respect to a time reference, where the index $K+1$ indicates the relay transmitter. The time shifts are also bounded by $d_{\\ell} \\leq \\dm(n),$ $\\ell=1,\\cdots,K+1$, where $n$ is the codeword block length. Moreover, as a practical assumption, the offsets $d_1,d_2,\\cdots,d_{K+1}$ are taken to be unknown to the transmitters, since they are not controlled by the transmitters. We further assume that the\nmaximum possible offset\n$\\dm(n) \\rightarrow \\infty$ as $n\\rightarrow \\infty$ while ${\\dm(n) \/ n} \\rightarrow 0$.\n\nThe rest of this paper is organized as follows. In Section \\ref{section_preliminaries}, we present\nthe problem statement and preliminaries along with a key lemma that is useful in the derivation of\nthe converse. In Section \\ref{section_converse}, as our main result, the converse part of the capacity theorem (i.e., a theorem stating coinciding necessary and sufficient conditions for reliable source-channel communication) is proved. Then, under specific gain conditions, using separate source and channel coding and the results of \\cite{Cover_McEliece:81} combined with block Markov coding, it is shown in Section\n\\ref{section_achievability} that the achievable region thus obtained matches the outer bound. 
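As a numerical sanity check of this asymptotic regime (a sketch with hypothetical values, not part of the model), a choice such as $\\dm(n)=\\lceil\\sqrt{n}\\rceil$ satisfies both conditions, and a per-symbol edge cost of the generic form $(\\dm\/n)\\log(1+ n a\/\\dm)$ for a fixed constant $a>0$ indeed vanishes:

```python
import math

def gamma(d: int, n: int, a: float) -> float:
    # Generic edge-interval cost (d/n) * log2(1 + (n/d) * a); the base
    # of the logarithm is immaterial for the limit.
    return (d / n) * math.log2(1 + (n / d) * a)

a = 10.0  # hypothetical SNR-like constant
prev = float("inf")
for n in (10**3, 10**5, 10**7):
    d = math.ceil(math.sqrt(n))  # d_max(n) -> infinity ...
    assert d / n < 0.05          # ... while d_max(n)/n -> 0
    val = gamma(d, n, a)
    assert 0 < val < prev        # the cost term decays toward zero
    prev = val
```

The same computation underlies the vanishing penalty terms encountered in the converse proof, where the offsets cost at most a fraction $\\dm\/n$ of the block.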
Section \\ref{results_statements} then states a separation theorem under specific gain conditions for the TA-MARC as the combination of converse and achievability parts along with a corollary that results for the interference channel. Finally, Section \\ref{section_conclusion} concludes the paper. \\vspace{-.2cm}\n\n\n\n\n\n\\section{Problem Statement and a Key Lemma} \\label{section_preliminaries}\n\n{\\em Notation}: In what follows, we denote random variables by upper case letters, e.g., $X$, their realizations by lower case\nletters, e.g., $x$, and their alphabets by calligraphic letters, e.g., $\\mathcal{X}$. For integers $0 \\leq a \\leq b$, $Y_{a}^{b}$ denotes\nthe $(b-a+1)$-tuple $(Y[a],\\cdots,Y[b])$, and $Y^{b}$ is a shorthand for $\\Y{0}{b-1}$. Without confusion, $X_{\\ell}^{n}$ denotes the\nlength-$n$ MARC input codeword $(X_{\\ell}[0],\\cdots,X_{\\ell}[n-1])$ of the $\\ell$th transmitter, and based on\nthis, we also denote $(X_{\\ell}[a],\\cdots,X_{\\ell}[b])$ by $X_{\\ell,a}^{b}$. The $n$-point\ndiscrete Fourier transform (DFT) of the length-$n$ codeword $X_{\\ell}^{n}$ is denoted by $\\hat{X}_{\\ell}^{n} =\n{\\rm{DFT}}(\\X{\\ell}{n})$. Furthermore, let $[1,K] \\triangleq \\{1,\\cdots,K\\}$ for all $K \\in \\mathbb{N}$.\n\n\\begin{figure}\n\\centering\n{\\includegraphics[keepaspectratio=true, width=12.68cm]{system_model_channel2.eps}}\n\\caption{{Gaussian time asynchronous multiple access relay channel (TA-MARC), with delays} $d_{1},\\cdots,d_{K+1}$.}\n\\label{fig:TA-MAC}\n\\end{figure}\n\n\nConsider $K$ finite-alphabet sources $\\{(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])\\}_{i=0}^{\\infty}$ as correlated random variables drawn according to a distribution $p(u_1,u_2,\\cdots,u_K)$. The sources are memoryless, i.e., the tuples\n$(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])$ are independent and identically distributed (i.i.d.) for $i=0,1,\\cdots$. The indices $1,\\cdots,K$ represent the transmitter nodes and the index $K+1$ represents the relay transmitter. 
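The DFT notation above is later used to impose power constraints simultaneously in the time and frequency domains; this is consistent provided the DFT is taken to be unitary, so that Parseval's theorem preserves power (the unitary normalization is our assumption here, since the paper does not spell it out). A quick numerical check of that identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# A length-n complex codeword X^n and its unitary ("ortho") DFT.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_hat = np.fft.fft(x, norm="ortho")

# Parseval: sum_i |X[i]|^2 == sum_i |X_hat[i]|^2 (up to float error).
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(x_hat) ** 2))
```

With the default (non-unitary) normalization of `np.fft.fft`, the frequency-domain power would instead be scaled by $n$, which is why the normalization matters for stating the constraint in both domains.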
All of the sources are to\nbe transmitted to a destination by the help of a relay through a continuous alphabet, discrete-time memoryless\nmultiple-access relay channel (MARC)\nwith time asynchronism between different transmitters and the relay. Specifically, as depicted in Fig. \\ref{fig:TA-MAC}, the encoders use different time references and thus we assume that the encoders start transmitting with offsets of\n\\begin{align}\n0 \\leq d_{\\ell} \\leq \\dm(n), \\quad \\ell=1,\\cdots,K+1,\n\\end{align}\nsymbols with respect to a fixed time reference, where $d_{K+1}$ is the offset for the relay transmitter with respect to the time reference.\n\nHence, the probabilistic characterization of the time-asynchronous Gaussian MARC, referred to as a Gaussian TA-MARC and denoted by $\\mathcal{M}([1,K+1])$ throughout the paper, is\ndescribed by the relationships\n\\begin{align}\\label{channel-model-1}\nY_{\\mathsf{D}}[i] & = \\sum_{\\ell=1}^{K+1} g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}] + Z_{\\mathsf{D}}[i], \\quad i=0,1,\\cdots,n+\\dm(n)-1,\n\\end{align}\n\\noindent as the $i$th entry of the received vector $Y_{\\sf{D}}^{n+\\dm(n)}$ at the destination ($\\sf{D}$), and\n\\begin{align}\\label{channel-model-2}\nY_{\\mathsf{R}}[i] & = \\sum_{\\ell=1}^{K} g_{\\ell \\mathsf{R}}X_{\\ell}[i-d_{\\ell}] + Z_{\\mathsf{R}}[i], \\quad i=0,1,\\cdots,n+\\dm(n) - 1,\n\\end{align}\n\\noindent as the $i$th entry of the received vector $Y_{\\mathsf{R}}^{n+\\dm(n)}$ at the relay ($\\mathsf{R}$), where\n\\begin{itemize}\n\\item $g_{\\ell \\mathsf{D}},\\ell=1,\\cdots,K+1,$ are complex gains from transmission nodes as well as the relay (when $\\ell=K+1$) to the destination, and $g_{\\ell\\mathsf{R}}, \\ell = 1,\\cdots,K,$ are complex gains from the transmission nodes to the relay,\n\\item $X_{\\ell}[i-d_{\\ell}], \\ell=1,\\cdots,K+1$, are the delayed channel inputs such that $X_{\\ell}[i-d_{\\ell}] = 0$ if $(i-d_{\\ell})$$ \\notin \\{0,1,\\cdots,n-1\\}$ and $X_{\\ell}[i-d_{\\ell}]\\in \\mathbb{C}$ 
otherwise,\n\\item $Z_{\\mathsf{D}}[i],Z_{\\mathsf{R}}[i] \\sim {\\mathcal{C}\\mathcal{N}(0,N)}$ are circularly symmetric complex Gaussian noises at the destination and relay, respectively.\n\\end{itemize}\nFig. \\ref{fig:TA-MAC} depicts the delayed codewords of the encoders, and the formation of the received codeword for the TA-MARC.\n\n\n\n\nWe now define a joint source-channel code and the notion of reliable communication for a Gaussian\nTA-MARC in the sequel.\n\n\\begin{definition}\nA block joint source-channel code of length $n$ for the Gaussian TA-MARC with the block of correlated\nsource outputs $$\\{(U_1[i],U_2[i],\\cdots,U_K[i])\\}_{i=0}^{n-1}$$ is defined by\n\\begin{enumerate}\n\\item {A set of encoding functions with the bandwidth mismatch factor of unity\\footnote{The assumption of unity mismatch factor is without loss of generality and for simplicity of exposition. Extension to the more general setting with different mismatch factors can be achieved by a simple modification (cf. 
Remark \\ref{Remark:about_mismatch}).}, i.e.,\n\\begin{align*}\nf_{\\ell}^{n}&: \\mathcal{U}_{\\ell}^n \\rightarrow \\mathbb{C}^n, \\quad \\ell=1,2,\\cdots,K,\n\\end{align*}\n\\noindent that map the source outputs to the codewords, and the relay encoding function\n\\begin{align}\\label{Eq:relay_encoding_function}\nx_{(K+1)}^{i+1} = f_{(K+1)}^{i+1}(y_{\\mathsf{R}}[0],y_{\\mathsf{R}}[1],\\cdots,y_{\\mathsf{R}}[i]), \\quad i=0,1,\\cdots,n-2.\n\\end{align}\n\\noindent The sets of encoding functions are denoted by the {\\em codebook} $\\mathcal{C}^{n} = \\Big\\{f_{1}^{n},\\cdots,f_{K}^{n},\\{f_{(K+1)}^{i+1}\\}_{i=0}^{n-2}\\Big\\}$}.\n\n\\item{Power constraints $P_\\ell$, $\\ell=1,\\cdots,K+1,$ on the codeword vectors $X^{n}_{\\ell}$, i.e.,\n\\begin{align}\\label{Power_constraint}\n\\mathbb{E}\\left[{1 \\over n} \\sum_{i=0}^{n-1}\\vert X_{\\ell }[i]\\vert^2\\right] =\n\\mathbb{E}\\left[{1 \\over n} \\sum_{i=0}^{n-1}\\vert \\hat{X}_{\\ell }[i]\\vert^2\\right] \\leq P_\\ell, \\ \\\n\\end{align}\n\\noindent for $\\ell=1,\\cdots,K+1$, where we recall that $\\hat{X}^{n}_{\\ell}=\\text{DFT}\\{X_{\\ell}^{n}\\}$, and $\\mathbb{E}[\\cdot]$ represents the expectation operator.}\n\n\\item{A decoding function $g^n(y_{\\sf{D}}^{n+\\dm} \\vert d_{1}^{K+1}) : \\mathbb{C}^{n+\\dm} \\times [0,\\dm]^{K+1} \\rightarrow \\mathcal{U}_{1}^n \\times\\cdots \\times \\mathcal{U}_{K}^n. $ }\n\\end{enumerate}\n\\end{definition}\n\\begin{definition} \\label{reliability_definition} We say the source $\\{(U_{1}[i],U_{2}[i],\\cdots,U_{K}[i])\\}_{i=0}^{n-1}$ of i.i.d. 
discrete random variables with joint probability mass function $p(u_1,u_2,\\cdots,u_K)$ {\\em can be reliably sent} over a Gaussian TA-MARC, if there exists a sequence (in $n$) of codebooks $\\mathcal{C}^{n}$ and decoders $g^n$ such that the output sequences $\\Uo,\\Ut,\\cdots,U_{K}^{n}$ of the source can be estimated from $Y_{\\mathsf{D}}^{n+\\dm(n)}$ with asymptotically vanishing probability of error, uniformly over {\\em all} choices of delays $0 \\leq d_{\\ell} \\leq \\dm(n)$, $\\ell=1,\\cdots,K+1$, i.e.,\n\\begin{align}\\label{main_error_probability}\n\\sup_{0 \\leq d_{1}, \\cdots, d_{K+1} \\leq \\dm(n)} P_e^n(d_{1}^{K+1}) \\longrightarrow 0, \\ \\ {\\rm as} \\ \\ n \\rightarrow \\infty,\n\\end{align}\n\\noindent where\n\\begin{align}\nP_e^n(d_{1}^{K+1}) \\triangleq P[g(Y_{\\sf{D}}^{n+\\dm(n)} \\vert d_{1}^{K+1}) \\neq (\\Uo,\\Ut,\\cdots,U_{K}^{n}) \\vert d_{1}^{K+1}],\n\\end{align}\nis the error probability for a given set of offsets $d_{1}^{K+1}$. \\thmend\n\\end{definition}\n\nWe now present a key lemma that plays an important role in the derivation of our results. In order\nto state the lemma, we first need to define the notions of a {\\em sliced} MARC and a {\\em sliced cyclic} MARC as follows:\n\n\\begin{definition}\nLet $\\mS \\subseteq [1,K+1]$ be a subset of transmitter node indices. 
A Gaussian sliced TA-MARC ${\\mathcal{M}}(\\mS)$ corresponding to the Gaussian TA-MARC ${\\mathcal{M}}([1,K+1])$ defined by \\eqref{channel-model-1}-\\eqref{channel-model-2}, is a MARC in which only the codewords of the encoders with indices in $\\mS$ contribute to the destination's received signal, while the received signal at the relay is the same as that of the original Gaussian TA-MARC $\\mathcal{M}([1,K+1])$.\n\nIn particular, for the Gaussian sliced MARC ${\\mathcal{M}}(\\mS)$, the received signals at the destination and the relay at the $i$th time index, denoted by ${Y}_{\\mathsf{D}(\\mS)}[i]$ and ${Y}_{\\mathsf{R}(\\mS)}[i]$ respectively, are given by\n\\begin{align}\\label{sliced_destination}\n{Y}_{\\mathsf{D}(\\mS)}[i] = \\sum_{\\ell \\in \\mS} g_{\\ell\\mathsf{D}}X_{{{\\ell}}}[i-d_{\\ell}] + {Z}_{\\mathsf{D}}[i], \\quad {i=0,\\cdots,n+\\dm-1},\n\\end{align}\n\\noindent and\n\\begin{align}\\label{sliced_relay}\n{Y}_{\\mathsf{R}(\\mS)}[i] = Y_{\\mathsf{R}}[i], \\quad {i=0,\\cdots,n+\\dm-1}.\n\\end{align}\n\\end{definition}\n\n\\begin{figure}\n\\hspace{-1.5cm}\\centering{\\includegraphics[keepaspectratio = true, height=14.2cm]{Partial_AMARC3.eps}}\n\\caption{Codewords of a Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ (top) and the corresponding sliced cyclic MARC $\\tilde{\\mathcal{M}}(\\mS)$ (bottom).}\n\\label{fig:partial-TA-MAC}\n\\end{figure}\n\n\\begin{definition}\nA sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$, corresponding to the sliced TA-MARC $\\mathcal{M}(\\mS)$ defined by \\eqref{sliced_destination}-\\eqref{sliced_relay}, is a sliced TA-MARC in which the codewords are cyclicly shifted around the $n$th time index to form new received signals at the destination \\textit{only}. 
Specifically, the corresponding outputs of the sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$ at the destination and the relay at the $i$th time index, denoted by $\\tilde{Y}_{\\mathsf{D}(\\mS)}[i]$ and $\\tilde{Y}_{\\mathsf{R}(\\mS)}[i]$ respectively, can be written as\n\\begin{align}\n\\tilde{Y}_{\\mathsf{D}(\\mS)}[i] = \\sum_{\\ell \\in \\mS} g_{\\ell \\mathsf{D}}X_{{\\ell}}[(i-d_{\\ell})\\hspace{0mm}\\mod n] + Z_{\\mathsf{D}}[i], \\quad i = 0,\\cdots,n-1,\n\\end{align}\n\\noindent and\n\\begin{align}\n\\tilde{Y}_{\\mathsf{R}(\\mS)}[i] &= \\sum_{\\ell = 1}^{K} g_{\\ell \\mathsf{R}}X_{{{\\ell}}}[i-d_{\\ell}] + Z_{\\mathsf{R}}[i], \\quad i = 0,\\cdots,n-1,\n\\nonumber\\\\\n&=Y_{\\mathsf{R}}[i].\n\\end{align}\n\nIn particular, as shown in Fig. \\ref{fig:partial-TA-MAC}, the tails of the codewords are cyclically shifted to the beginning of the block, where the start point of the block is aligned with the first time instant. The destination's output $\\tilde{Y}_{\\mathsf{D}(\\mS)}^{n}$ of the sliced cyclic MARC is the $n$-tuple that results from adding the shifted versions of the codewords $X_{\\ell}^{n},{\\ell \\in \\mS}$. As indicated in Fig. \\ref{fig:partial-TA-MAC}, we divide the entire time interval $[0,n+\\dm-1]$ into three subintervals $\\mA, \\mB$, and $\\mC$, where\n\\begin{itemize}\n\\item $\\mA$ is the sub-interval representing the left tail of the received codeword, {i.e.}, $[0,\\dm-1]$,\n\\item $\\mB$ represents the right tail, {i.e.}, $[n,n+\\dm-1]$,\n\\item $\\mC$ represents a common part between the sliced TA-MARC and the sliced cyclic MARC, {i.e.}, $[\\dm,n-1]$.\n\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n\\label{Remark_01}\nIn both the sliced TA-MARC and the sliced cyclic MARC, the observation $Y^{n+\\dm}_{\\mathsf{R}}$ of the relay remains unchanged. 
Therefore, the generated channel input at the relay $X_{K+1}^{n}$ is the same as in the original TA-MARC due to \\eqref{Eq:relay_encoding_function} when the same relay encoding functions are used.\n\\end{remark}\n\nThe following lemma implies that, for every choice of $\\mS \\subseteq [1,K+1]$, the mutual information rates between the inputs and the destination's output in the Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ and the sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$ are asymptotically the same, i.e., their difference asymptotically vanishes. This fact will be useful in the analysis of the problem in Section\n\\ref{section_converse}, where we can replace a sliced TA-MARC with the corresponding sliced cyclic MARC.\n\nBefore stating and proving the key lemma, we define the following notation:\n\\begin{align}\nY_{\\mathsf{D}{(\\mS)}}[\\mA] & \\triangleq \\{Y_{\\mathsf{D}{(\\mS)}}[i]: i \\in \\mA \\}, \\\\\n\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]& \\triangleq \\{\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[i]: i \\in \\mA \\}, \\\\ \\label{Eq:definitionofXs}\nX_{\\mathcal{S}}^{n} & \\triangleq \\{X_{\\ell}^{n}: \\ell \\in \\mS\\}, \\\\\n\\vec{X}_{\\mathcal{S}}[{\\mA}] & \\triangleq \\{X_{\\ell}[i-d_{\\ell}]: \\ell\\in \\mS, i \\in {\\mA}\\},\\\\\n\\tilde{\\vec{X}}_{\\mathcal{S}}[{\\mA}] & \\triangleq \\{X_{\\ell}[i-d_{\\ell}\\ \\text{mod}\\ n]: \\ell\\in \\mS, i \\in {\\mA}\\},\n\\end{align}\n\\noindent where $\\mS \\subseteq [1,K+1]$ is an arbitrary subset of transmitter node indices, and recall that $X_{\\ell}[i-d_{\\ell}] = 0$, for $i-d_{\\ell} \\not\\in \\{0,1,\\cdots,n-1\\}$. 
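The distinction between $\\vec{X}_{\\mathcal{S}}[\\cdot]$ (zero-padded linear delay) and $\\tilde{\\vec{X}}_{\\mathcal{S}}[\\cdot]$ (cyclic delay) can be checked numerically; as a sketch with arbitrary toy values, the two delayed codewords agree on the common interval $\\mC=[\\dm,n-1]$, where no cyclic foldover occurs, and differ only on the tails:

```python
import numpy as np

n, d_max, d = 16, 4, 3
x = np.arange(1.0, n + 1)      # a toy codeword X^n

lin = np.zeros(n + d_max)      # X[i - d], zero outside {0,...,n-1}
lin[d:d + n] = x

cyc = np.roll(x, d)            # X[(i - d) mod n], for i = 0,...,n-1

C = np.arange(d_max, n)        # common interval C = [d_max, n-1]
assert np.array_equal(lin[C], cyc[C])                 # identical: no foldover on C
assert not np.array_equal(lin[:d_max], cyc[:d_max])   # the left tails differ
```

This is exactly the observation exploited below: the two channels can only differ on the short edge intervals $\\mA$ and $\\mB$, whose length $\\dm$ is negligible relative to $n$.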
Similarly, we can define $Y_{\\mathsf{D}{(\\mS)}}[\\mB]$, $Y_{\\mathsf{D}{(\\mS)}}[\\mC]$, $\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mB],\\cdots$, by replacing $\\mA$ with $\\mB$ or $\\mC$ in the above definitions.\n\n\n\\begin{lemma} \\label{Key_lemma} For a Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$, and the corresponding sliced cyclic MARC $\\widetilde{\\mathcal{M}}(\\mS)$,\n\\begin{align}\n{1 \\over n} \\left| I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) -\nI(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}) \\right| & \\leq \\ep_{n}, \\quad \\forall \\ d^{K+1}_{1} \\in [0,\\dm(n)]^{K+1}, \\label{lemma_main_expression}\n\\end{align}\n\\noindent for all $\\mS \\subseteq [1,K+1]$, where $\\ep_{n}$ does not depend on $d_{1}^{K+1}$ and $\\ep_{n} \\rightarrow\n0$, as $n\\rightarrow \\infty$. \\thmend \n\\end{lemma}\n\\begin{IEEEproof}\n\nNoting that the mutual information between subsets of two random vectors is a lower bound on the mutual information between the original random vectors, we first lower bound the original mutual information $I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}})$:\n\\begin{align}\n& I(\\vec{X}_{\\mS}[\\mC]; Y_{{\\mathsf{D}(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}) \\leq I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}). \\label{first_MI_lower}\n\\end{align}\n\\noindent Then, by splitting the entropy terms over the intervals $\\mA, \\mB$, and $\\mC$ as depicted in Fig. 
\\ref{fig:partial-TA-MAC}, we upper bound the same mutual information term $I(\\X{\\mS}{n}$$;{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm}$$ \\vert {d_{1}^{K+1}})$ as follows:\n\\begin{align}\n I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) & = h({Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) - h({Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert \\X{\\mS}{n}, {d_{1}^{K+1}}) \\nonumber \\\\\n& \\leq h(Y_{{\\mathsf{D}(\\mS)}}[\\mA] \\vert {d_{1}^{K+1}}) + h(Y_{{\\mathsf{D}(\\mS)}}[\\mB] \\vert {d_{1}^{K+1}}) + h(Y_{{\\mathsf{D}(\\mS)}}[\\mC] \\vert {d_{1}^{K+1}}) - \\sum_{i=0}^{n+\\dm-1} h(Z_{\\mathsf{D}}[i]) \\nonumber \\\\\n& = I(\\vec{X}_{\\mS}[\\mA];Y_{\\mathsf{D}{(\\mS)}}[{\\mA}] \\vert {d_{1}^{K+1}}) + I(\\vec{X}_{\\mS}[\\mB];Y_{\\mathsf{D}{(\\mS)}}[{\\mB}] \\vert {d_{1}^{K+1}})\n+ I(\\vec{X}_{\\mS}[\\mC];Y_{\\mathsf{D}{(\\mS)}}[{\\mC}] \\vert {d_{1}^{K+1}}). \\label{first_MI_upper}\n\\end{align}\n\n\nAlso, the mutual information term $I(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}})$ which is associated to the sliced cyclic MARC can be similarly lower bounded as\n\\begin{align}\n& I(\\tilde{\\vec{X}}_{\\mS}[\\mC]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}) \\leq I(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}), \\label{second_MI_lower}\n\\end{align}\n\\noindent and upper bounded as\n\\begin{align}\nI(X^{n}_{\\mS}; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}\\vert {d_{1}^{K+1}}) & = h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}} \\vert {d_{1}^{K+1}}) - h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}} \\vert X^{n}_{\\mS}, {d_{1}^{K+1}}) \\nonumber \\\\\n& \\leq h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert {d_{1}^{K+1}}) + h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC] \\vert {d_{1}^{K+1}})- {\\sum_{i=0}^{n-1}} h(Z_{\\mathsf{D}}[i]) \\nonumber \\\\ \\nonumber\n& = I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})+ I(\\tilde{\\vec{X}}_{\\mS}[\\mC]; 
\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}})\\\\\n& = I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})+ I(\\vec{X}_{\\mS}[\\mC]; Y_{\\mathsf{D}{(\\mS)}}[\\mC]\\vert {d_{1}^{K+1}}),\n\\label{second_MI_upper}\n\\end{align}\n\\noindent where in the last step, we used the fact that for any $\\mathcal{S}\\subseteq [1,K+1]$, ${\\tilde{Y}}_{\\mathsf{D}{(\\mS)}}[{\\mC}] = Y_{{\\mathsf{D}(\\mS)}}[{\\mC}]$ and $\\tilde{\\vec{X}}_{\\mS}[\\mC]=\\vec{X}_{\\mS}[\\mC]$, as there is no cyclic foldover for $i\\in {\\mC}$.\n\nHence, combining \\eqref{first_MI_lower}-\\eqref{first_MI_upper}, and \\eqref{second_MI_lower}-\\eqref{second_MI_upper}, we can\nnow bound the difference between the mutual information terms as\n\\begin{align}\n&{1 \\over n} \\left| I(\\X{\\mS}{n};{Y}_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert {d_{1}^{K+1}}) -\nI(\\X{\\mS}{n}; \\tilde{Y}^{n}_{\\mathsf{D}(\\mS)} \\vert {d_{1}^{K+1}}) \\right| \\nonumber \\\\\n& \\quad \\leq {1\\over n}I(\\vec{X}_{\\mS}[\\mA]; Y_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}}) + {1 \\over n} I(\\vec{X}_{\\mS}[\\mB]; Y_{\\mathsf{D}{(\\mS)}}[\\mB]\\vert {d_{1}^{K+1}}) +{1 \\over n} I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}}). \\label{three_terms_expansion}\n\\end{align}\n\\noindent But all of the terms on the right-hand side of \\eqref{three_terms_expansion} can also be\nbounded as follows. 
Consider the first term:\n\\begin{align}\n\\nonumber\n{1\\over n}I(\\vec{X}_{\\mS}[\\mA]; Y_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})& = {1\\over n} \\left[ h(Y_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[{\\mA}]) \\right] \\\\ \\nonumber\n& \\leq {1\\over n} \\sum_{i \\in \\mA} \\left[ h(Y_{{\\mathsf{D}(\\mS)}}[i] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& = {1 \\over n} \\sum_{i \\in \\mA} \\left[ h\\left(\\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[i-d_\\ell] + Z_{\\mathsf{D}}[i] \\right)\n- h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& \\stackrel{\\rm (a)}{\\leq} {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[i-d_\\ell] \\right\\vert}^{2} \\over {N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (b)}{\\leq} {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}} \\over {N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (c)}{\\leq} {\\vert \\mA \\vert \\over n} \\log\\left(1 +{ { \\sum_{i \\in \\mA} \\left[ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}} \\right] } \\over {\\vert \\mA \\vert N} } \\right)\\\\ \\nonumber\n& \\stackrel{\\rm (d)}{=} {\\dm \\over n} \\log\\left(1 +{ { {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\left[\\sum_{i \\in \\mA}\\vert X_{\\ell}[i-d_\\ell]\\vert^{2}\\right]}} \\over {\\dm N} } \\right)\\\\ \\nonumber\n& \\leq {\\dm \\over n} \\log\\left(1 +{ { {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}\\mathbb{E}\\left[\\sum_{i=0}^{n-1}\\vert X_{\\ell}[i]\\vert^2\\right]}} \\over {\\dm N} } \\right)\\\\ \\nonumber\n&\\stackrel{\\rm 
(e)}{\\leq} {\\dm \\over n} \\log\\left(1 +{n\\over \\dm}{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}} \\over { N} } \\right) \\\\\n& \\triangleq \\gamma\\left(\\dfrac{\\dm}{n}\\right), \\label{lemma_inequality_1}\n\\end{align}\n\\noindent where $\\rm{(a)}$ follows from the fact that the Gaussian distribution maximizes the differential entropy \\cite[Thm. 8.4.1]{Cover:2006}, $\\rm{(b)}$ follows from the Cauchy-Schwarz inequality:\n\\begin{align}\n\\left \\vert \\sum_{\\ell\\in \\mathcal{S}} g_{\\ell\\mathsf{D}}{X}_{\\ell}[i-d_{\\ell}] \\right\\vert^{2} & \\leq {\\left(\\sum_{\\ell\\in \\mathcal{S}}\\vert g_{\\ell\\mathsf{D}} \\vert ^{2} \\right)} \\left(\\sum_{\\ell\\in \\mathcal{S}} \\vert X_{\\ell}[i-d_{\\ell}] \\vert^{2}\\right), \\label{eq_Cauchy}\n\\end{align}\n\\noindent $(\\rm c)$ follows from the concavity of the $\\log$ function, $(\\rm d)$ follows from the fact that $\\vert \\mA \\vert = \\dm$, and $(\\rm e)$ follows from the power constraint in \\eqref{Power_constraint}.\n\nSimilarly, for the second term on the right-hand side of \\eqref{three_terms_expansion}, it can be shown that\n\\begin{align}\n{1 \\over n} I(\\vec{X}_{\\mS}[\\mB]; Y_{\\mathsf{D}{(\\mS)}}[\\mB]\\vert {d_{1}^{K+1}}) \\leq \\gamma\\left({\\dm\\over n}\\right). 
\\label{lemma_inequality_2}\n\\end{align}\n\n\nFollowing steps similar to those that resulted in \\eqref{lemma_inequality_1}, we now upper bound the third term on the right-hand side of \\eqref{three_terms_expansion} as follows:\n\\begin{align}\n\\nonumber\n{1 \\over n} I(\\tilde{\\vec{X}}_{\\mS}[\\mA]; \\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA]\\vert {d_{1}^{K+1}})& = {1\\over n} \\left[ h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[\\mA] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[{\\mA}]) \\right] \\\\ \\nonumber\n& \\leq {1\\over n} \\sum_{i \\in \\mA} \\left[ h(\\tilde{Y}_{\\mathsf{D}{(\\mS)}}[i] \\vert d_{1}^{K+1}) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& = {1 \\over n} \\sum_{i \\in \\mA} \\left[ h\\left(\\sum_{\\ell \\in \\mS} g_{\\ell\\mathsf{D}}X_{{{\\ell}}}[(i-{d_{{\\ell}}})\\hspace{-2mm} \\mod n] + Z_{\\mathsf{D}}[i] \\Big\\vert d_{1}^{K+1}\\right) - h(Z_{\\mathsf{D}}[i]) \\right] \\\\ \\nonumber\n& \\leq {1 \\over n} \\sum_{i \\in \\mA} \\log\\left(1 +{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} g_{{\\ell}\\mathsf{D}}X_{\\ell}[(i-{d_{{\\ell}}})\\hspace{-2mm} \\mod n] \\right\\vert}^{2} \\over {N} } \\right)\n\\\\ \\nonumber\n& \\leq {\\dm \\over n} \\log\\left(1 +{n\\over \\dm}{ {\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2} \\cdot \\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}} \\over { N} } \\right)\\\\\n& = \\gamma\\left(\\dfrac{\\dm}{n}\\right). 
\\label{lemma_inequality_3}\n\\end{align}\n\nBased on \\eqref{lemma_inequality_1}, \\eqref{lemma_inequality_2}, and \\eqref{lemma_inequality_3}, the absolute difference between the mutual information terms in \\eqref{lemma_main_expression} is upper bounded by $3\\gamma(\\dm\/n)$.\nOne can see that $3\\gamma\\left(\\dm(n)\/n\\right) \\rightarrow 0$ as $n \\rightarrow \\infty$, since for any $a>0$, $z_{n}\\log(1 + a\/{z_{n}}) \\rightarrow 0$ as\n$z_{n} \\rightarrow 0$, and the lemma is proved by taking $z_{n}=\\dm(n)\/n$ and $a={\\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert ^{2}\\sum_{\\ell\\in \\mathcal{S}}P_{\\ell}}\/N$.\n\\end{IEEEproof}\n\n\\vspace{-.4cm}\n\\section{Converse}\\label{section_converse}\n\n\\begin{lemma}\n\\label{Lemma:Reliable_Communication}\nConsider a Gaussian TA-MARC with power constraints\n$P_1,P_2,\\cdots,P_K$ on the transmitters, power constraint $P_{K+1}$ on the relay, and the set of encoders' offsets $d_{1}^{K+1}$. Moreover, assume that the offsets $d_{1}^{K+1}$ are known to the receiver, $\\dm(n) \\rightarrow \\infty$, and ${\\dm(n)\n\/ n} \\rightarrow 0$ as $n\\rightarrow \\infty$. 
Then, a necessary condition for reliably\ncommunicating a source tuple $(U^{n}_1,U^{n}_2,\\cdots,U^{n}_{K}) \\sim {\\prod_{i=0}^{n-1}}p(u_{1}[i],u_{2}[i],\\cdots,u_{K}[i])$, over such a Gaussian\nTA-MARC, in the sense of Definition \\ref{reliability_definition}, is given by\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq \\log\\left(1+{\\sum_{\\ell \\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert ^{2}P_{\\ell} \\over N}\\right),\\quad \\forall \\mS \\subseteq [1,K+1] \\label{separation_TAMAC_1}\n\\end{align}\n\\noindent where $\\mS$ includes the relay, {i.e.}, $K+1\\in \\mS$, and where by definition $U_{K+1}\\triangleq \\emptyset$ and $\\mathcal{S}^{c}\\triangleq[1,K+1]\\setminus\\mathcal{S}$.\n\\thmend\n\\end{lemma}\n\\begin{remark}\n\\label{Remark:about_mismatch}\nThe result of \\eqref{separation_TAMAC_1} can be readily extended to the case of mapping blocks of source outputs of length $m_{n}$ to channel inputs of length $n$. In particular, for the bandwidth mismatch factor $\\kappa \\triangleq \\lim_{n \\rightarrow \\infty} {n \\over {m_{n}}}$, the converse result in \\eqref{separation_TAMAC_1}, to be proved as an achievability result in Section \\ref{section_achievability} as well, can be generalized to\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq \\kappa \\log\\left(1+{\\sum_{\\ell \\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert ^{2}P_{\\ell} \\over N}\\right), \\quad \\forall \\mS \\subseteq [1,K+1].\n\\end{align}\nSince considering a general mismatch factor $\\kappa>0$ obscures the proof, in the following, without essential loss of generality, we present the proof for the case of $\\kappa=1$.\n\\end{remark}\n\\begin{IEEEproof}\n\nFirst, fix a TA-MARC with given offset vector $d_{1}^{K+1}$, a codebook $\\mathcal{C}^{n}$, and\ninduced {\\em empirical} distribution\n\\[p(\\uo,\\cdots,u_{K}^{n},\\x{1}{n},\\cdots,\\x{K+1}{n},y_{\\mathsf{R}}^{n+\\dm},y_{\\mathsf{D}}^{n+\\dm} \\vert d_{1}^{K+1}).\\]\nSince for this fixed choice of the offset vector 
$d_{1}^{K+1}$, $P^n_e(d_{1}^{K+1}) \\rightarrow 0$, from Fano's inequality, we have\n\\begin{align}\n{1 \\over n}H(\\Uo,\\Ut,\\cdots,U_{K}^{n} \\vert Y_{\\sf{D}}^{n+\\dm},d_{1}^{K+1}) \\leq {1 \\over n}{P_e^n(d_{1}^{K+1})} \\log \\|{{\\mathcal{U}}^{n}_{1}}\\times{{\\mathcal{U}}^{n}_{2}}\\times\\cdots\\times{{\\mathcal{U}}^{n}_{K}}\\| + {1 \\over n}\n\\triangleq \\de_n, \\label{Fano_inequality}\n\\end{align}\nand $\\de_n \\rightarrow 0$, where convergence is uniform in $d_{1}^{K+1}$ by \\eqref{main_error_probability}.\n\nNow, we can upper bound $H(U_{\\mS}\\vert U_{\\mSc})$ as follows:\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & = {1 \\over n} H(\\Us \\vert \\Usc, d_{1}^{K+1}) \\nonumber \\\\\n& \\stackrel{(\\rm a)}{=} {1 \\over n} H(\\Us \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) \\nonumber \\\\\n& = {1 \\over n} I(\\Us; Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) + {1 \\over n} H(\\Us \\vert Y_{\\mathsf{D}}^{n+\\dm}, \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) \\nonumber \\\\\n& \\stackrel{(\\rm b)}{\\leq} {1 \\over n} I(\\X{\\mS}{n}; Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\ \\nonumber\n& \\stackrel{(\\rm c)}{=} {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc, \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) - {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc,X^{n}_{[1,K+1]}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\\n& \\stackrel{(\\rm d)}{\\leq} {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1}) - {1 \\over n} h(Y_{\\mathsf{D}}^{n+\\dm} \\vert \\Usc,X_{[1,K+1]}^{n}, d_{1}^{K+1}) + \\de_{n} \\nonumber \\\\ \\nonumber\n&={1\\over n} h(\\big\\{\\sum_{\\ell=1}^{K+1}g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}]+Z_{\\mathsf{D}}[i]\\big\\}_{i=0}^{n+\\dm-1}\\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1})-{1\\over n}h(Z_{\\mathsf{D}}^{n+\\dm})+\\delta_{n}\\\\ \\nonumber\n&={1\\over n} h(\\big\\{\\sum_{\\ell\\in 
\\mathcal{S}}g_{\\ell\\mathsf{D}}X_{\\ell}[i-d_{\\ell}]+Z_{\\mathsf{D}}[i]\\big\\}_{i=0}^{n+\\dm-1}\\vert \\X{\\mathcal{S}^{c}}{n}, d_{1}^{K+1})-{1\\over n}h(Z_{\\mathsf{D}}^{n+\\dm})+\\delta_{n} \\\\\n& \\leq {1 \\over n} h(Y_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert d_{1}^{K+1}) - {1 \\over n}h(Z_{\\mathsf{D}}^{n+\\dm}) + \\de_{n} \\nonumber \\\\\n& = {1 \\over n} I(\\X{\\mS}{n}; Y_{\\mathsf{D}(\\mS)}^{n+\\dm} \\vert d_{1}^{K+1}) + \\de_{n} \\label{new_MARC_equation}\n\\end{align}\n\\noindent where in $(\\rm a)$ we used the fact that $\\X{\\mSc}{n}$ is a function of only ${U}_{\\mSc}^{n}$, in $(\\rm b)$ we used the data processing inequality and \\eqref{Fano_inequality}, in $(\\rm c)$ we used the definition of $X^{n}_{[1,K+1]}$ in \\eqref{Eq:definitionofXs}, and in $(\\rm d)$ we used the fact that conditioning does not increase the entropy.\n\nBut \\eqref{new_MARC_equation} represents the mutual information at the destination's output of the Gaussian sliced TA-MARC $\\mathcal{M}(\\mS)$ corresponding to the original Gaussian TA-MARC. Thus, using Lemma \\ref{Key_lemma}, we can further upper bound the mutual information term in \\eqref{new_MARC_equation} by the corresponding term for the sliced cyclic MARC and obtain\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) \\leq {1 \\over n} I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert d_{1}^{K+1}) + \\ep_{n} + \\de_{n}. \\label{after_lemma}\n\\end{align}\n\nNow, let $D_{\\ell}, \\ell=1,\\cdots,K+1,$ be a sequence of independent random variables that are each uniformly distributed on the set $\\{0,1,\\cdots,\\dm(n)\\}$ and also independent of $\\{U^{n}_{\\ell}\\}_{\\ell=1}^{K+1}$, $\\{Z_{\\mathsf{D}}[i]\\}_{i=0}^{n-1}$, and $\\{Z_{\\mathsf{R}}[i]\\}_{i=0}^{n-1}$. 
Since\n\\eqref{after_lemma} is true for every choice of $d_{1}^{K+1} \\in \\{0,1,\\cdots,\\dm(n)\\}^{K+1}$, $H(U_{\\mS}\\vert U_{\\mSc})$ can\nalso be upper bounded by the average over $d_{1}^{K+1}$ of $I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert d_{1}^{K+1})$. Hence,\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq I(\\X{\\mS}{n}; {\\tilde{Y}_{\\mathsf{D}(\\mS)}}^{n} \\vert D_{1}^{K+1})+\\epsilon_{n}+\\delta_{n} \\nonumber \\\\\n& \\stackrel{\\rm(a)}{=} I(\\X{\\mS}{n}; \\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1}) +\\epsilon_{n}+\\delta_{n}, \\label{eq:middle_step}\n\\end{align}\n\\noindent where $\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} = {\\rm{DFT}}({\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n})$, and $\\rm(a)$ follows from the\nfact that the DFT is a bijection.\n\nExpanding $I(\\X{\\mS}{n}; \\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1})$ in the right hand side of \\eqref{eq:middle_step},\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc}) & \\leq {1 \\over n} [h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n} \\vert D_{1}^{K+1}) - h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mathcal{S})}^{n} \\vert X_{\\mS}^{n}, D_{1}^{K+1})] + \\ep_{n} + \\de_{n} \\nonumber \\\\\n& \\leq {1 \\over n} [h(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n}) - h(\\hat{Z}_{\\mathsf{D}}^{n})] + \\ep_{n} + \\de_{n}, \\nonumber\n\\end{align}\n\\noindent where $\\hat{Z}_{\\mathsf{D}}^{n}={\\rm{DFT}}(Z_{\\mathsf{D}}^{n})$ has {i.i.d.} entries with $\\hat{Z}_{\\mathsf{D}}[i] \\sim\n\\mathcal{C}\\mathcal{N}(0,N)$. Recall $\\hat{X}_{{\\ell}}^{n} = {\\rm{DFT}}(X_{{\\ell}}^{n})$. 
Then,\n\\begin{align}\nh(\\hat{\\tilde{Y}}_{\\mathsf{D}(\\mS)}^{n}) & = h\\left(\\sum_{\\ell \\in \\mS} {e^{-j\\Bt(D_{{\\ell}})}} \\odot g_{{\\ell}\\mathsf{D}}\\hat{X}_{{\\ell}}^{n} + \\hat{Z}_{\\mathsf{D}}^{n} \\right) \\nonumber \\\\\n& \\leq \\sum_{i=0}^{n-1} h\\left(\\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{{\\ell}\\mathsf{D}} \\hat{X}_{{\\ell}}[i] + \\hat{Z}_{\\mathsf{D}}[i]\\right), \\nonumber\n\\end{align}\n\\noindent where ${e^{-j\\Bt(D)}} \\triangleq (e^{-j2\\pi i D \\over n})_{i=0}^{n-1}$ is an $n$-length vector, and $\\odot$ denotes\nelement-wise vector multiplication. Thus,\n\\begin{align}\nH(U_{\\mS}\\vert U_{\\mSc})\n&\\leq {1 \\over n} \\sum_{i=0}^{n-1} \\left[h\\left(\\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{\\ell}}[i] + \\hat{Z}_{\\mathsf{D}}[i]\\right)- h(\\hat{Z}_{\\mathsf{D}}[i])\\right] + \\ep_{n} + \\de_{n} \\nonumber\\\\\n& \\leq {1 \\over n} \\sum_{i=0}^{n-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right)\n+ \\ep_{n} + \\de_{n}. \\label{before_split_into_three}\n\\end{align}\n\nWe now divide the sum in \\eqref{before_split_into_three} into three terms for $0 \\leq i \\leq \\al(n)-1$, $\\al(n) \\leq i \\leq n-\\al(n)-1$, and $n-\\al(n) \\leq i \\leq n-1$, where $\\al(n): \\mathbb{N} \\rightarrow \\mathbb{N}$ is a function such that\n\\begin{align}\n{\\al(n) \\over n} \\rightarrow 0, \\ \\ {\\al(n)\\dm(n) \\over n} \\rightarrow \\infty. \\label{condition_on_alpha}\n\\end{align}\n\\noindent An example of such an $\\al(n)$ is the function $\\alpha(n) = \\lceil {n \\over\n\\dm(n)}{\\log\\dm(n)} \\rceil $. 
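Indeed, for this choice of $\al(n)$, since $\lceil x \rceil \leq x+1$,\n\begin{align}\n{\al(n) \over n} \leq {\log\dm(n) \over \dm(n)} + {1 \over n} \rightarrow 0, \qquad {\al(n)\dm(n) \over n} \geq \log\dm(n) \rightarrow \infty, \nonumber\n\end{align}\nwhere the limits follow from $\dm(n) \rightarrow \infty$ and $\dm(n)\/n \rightarrow 0$.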
In the sequel, we first upper bound the two tail terms and then the main term.\n\nFor the terms in $0\leq i \leq \al(n)-1$, we have\n\n\\begin{align}\n\\nonumber\n{1 \\over n} \\sum_{i=0}^{\\al(n)-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right) & \\stackrel{\\rm(a)}{\\leq} {1 \\over n} \\sum_{i=0}^{\\al(n)-1} \\log\\left(1 + { {\\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\cdot \\sum_{\\ell \\in \\mS}\\mathbb{E} \\vert \\hat{X}_{\\ell}[i]\\vert^{2} } \\over {N}}\\right) \\\\ \\nonumber\n& \\stackrel{\\rm(b)}{\\leq} {\\al(n) \\over n} \\log\\left(1 + { {\\sum_{i=0}^{\\al(n)-1} \\left[\\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2}\\cdot \\sum_{\\ell \\in \\mS}\\mathbb{E} \\vert \\hat{X}_{\\ell}[i]\\vert^{2} \\right]} \\over {\\alpha(n)N}}\\right)\\\\ \\nonumber\n& \\stackrel{\\rm(c)}{\\leq}{\\al(n) \\over n} \\log\\left(1 +{n\\over \\alpha(n)} {{ \\sum_{\\ell\\in \\mS}\\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\cdot \\sum_{\\ell \\in \\mS}P_{\\ell}} \\over {N}}\\right) \\\\\n& \\triangleq \\lambda_{n}, \\label{1st_sum}\n\\end{align}\n\\noindent where $(\\rm a)$ follows by the Cauchy--Schwarz inequality (cf. \\eqref{eq_Cauchy}), $(\\rm b)$ follows by the concavity of the $\\log$ function, and $(\\rm c)$ follows by the power constraints \\eqref{Power_constraint}.\nAlso, for $n-\\al(n) \\leq i \\leq n-1$, a similar upper bound can be derived by the symmetry of the problem as follows\n\n\\begin{align}\n{1 \\over n} \\sum_{i=n-\\al(n)}^{n-1} \\log\\left(1 + { { {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}} \\over {N}}\\right) \\leq \\lambda_{n}. 
\\label{2nd_sum}\n\\end{align}\n\nTo bound the third component of \\eqref{before_split_into_three} for $\\alpha(n)\\leq i\\leq n-\\alpha(n)-1$, we first obtain that\n\\begin{align}\n\\label{Eq:dependent_terms}\n{ {\\mathbb{E}\\left\\vert \\sum_{\\ell \\in \\mS} {e^{-j2\\pi i {D_{{\\ell}}}\\over n}} g_{\\ell\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i] \\right\\vert^{2}}}\n = \\sum_{\\ell \\in \\mS} \\vert g_{\\ell\\mathsf{D}}\\vert^{2} \\mathbb{E}\\vert \\hat{X}_{\\ell}[i] \\vert^{2} + {\\sum_{ \\substack {(\\ell,\\ell^{'}) \\in \\mS^{2} \\\\ \\ell < \\ell^{'}} }} 2\\Re\\mathbb{E}\\left\\{{e^{-j2\\pi i ({D_{{\\ell}} - D_{{\\ell^{'}}}})\\over n}} g_{\\ell\\mathsf{D}}g^{*}_{\\ell'\\mathsf{D}} \\hat{X}_{{{\\ell}}}[i]\\hat{X}^{*}_{{{\\ell^{'}}}}[i] \\right\\},\n\\end{align}\n\\noindent where $\\Re(z)$ is the real part of $z\\in \\mathbb{C}$. Now, the following two cases can occur:\n\n$i$) $\\ell< \\ell'$\n\t\\begin{assumption}\n\t\t\\label{assum:Homogeneous}\n\t\tEach part $r_j$ of a manipulated object has a homogeneous mass distribution, where $\\rho_{j}>0$ is part $r_j$'s constant density.\n\t\\end{assumption}\n\t\\noindent Critically, \\Cref{assum:Homogeneous} can be used to simplify the identification problem by exploiting measurements of the shape of a manipulated object.\n\tThis is accomplished by noting that, once a cobot has determined the shape of an object, the object's inertial parameters depend solely on the mass of each of its homogeneous parts. 
\n\tWe refer to our overall approach as \\emph{Homogeneous Part Segmentation} (HPS).\\footnote{We provide code, our complete simulation dataset, and a video showcasing our algorithm at \\url{https:\/\/papers.starslab.ca\/part-segmentation-for-inertial-identification\/}.} \n\tOur main contributions are:\n\t\\begin{itemize}\n\t\t\\item a formulation of inertial parameter identification that incorporates \\Cref{assum:Homogeneous};\n\t\t\\item a method combining the algorithms proposed in \\cite{attene_hierarchical_2008} and \\cite{lin_toward_2018} to improve part segmentation speed;\n\t\t\\item a dataset of models of 20 common workshop tools with 3D meshes, point clouds, ground truth inertial parameters, and ground truth part-level segmentation for each object; and\n\t\t\\item experiments highlighting the benefits of HPS when compared to two benchmark algorithms.\n\t\\end{itemize}\n\tWe undertake a series of simulation studies with our novel dataset and carry out a real-world experiment to assess the performance of the proposed algorithm on a real cobot. We show that, under noisy conditions, our approach produces more accurate inertial parameter estimates than competing algorithms that do not utilize shape information. 
\n\t\n\t\\vspace{2mm}\n\t\\section{Related Work}\n\t\\label{sec:RelatedWork}\n\t\n\tThis section provides a brief overview of work related to the two main algorithmic components of our system: inertial parameter or load identification and part-level object segmentation.\n\tA more thorough survey of inertial parameter identification for manipulation is provided by Golluccio et al.\\ in \\cite{golluccio_robot_2020}.\n\t\n\t\\subsection{Load Identification}\n\t\n\tIn \\cite{atkeson_estimation_1986}, a least squares method is applied to determine the inertial parameters of a load manipulated by a robot arm.\n\tThe authors of \\cite{atkeson_estimation_1986} underline that poor performance can be caused by low signal-to-noise ratios in the data used for the regression.\n\tA recursive total least squares (RTLS) approach is proposed in \\cite{kubus_-line_2008} to account for noise in the regressor matrix.\n\tHowever, an experimental evaluation of RTLS on multiple datasets in \\cite{farsoni_real-time_2018} reports large estimation errors, demonstrating that RTLS has limited utility with noisy sensors.\n\tIn this work, we also minimize a least squares cost, but we employ a formulation that is particularly well-suited to data gathered by a slower-moving cobot. \n\t\n\tWithout appropriate constraints on regression variables, many identification algorithms find unphysical solutions~\\cite{sousa_physical_2014, traversaro_identification_2016}.\n\tSufficient conditions for the physical consistency of the identified parameters are stated in \\cite{traversaro_identification_2016}, where a constrained optimization problem is formulated that guarantees the validity of the solution.\n\tSimilarly, physical consistency is enforced through linear matrix inequalities as part of the method proposed in \\cite{wensing_linear_2017}. 
Geodesic distance approximations from a prior solution are used in \\cite{lee_geometric_2019} to regularize the optimization problem without introducing very long runtimes.\n\tIn this work, we \n\t\\emph{implicitly} enforce the physical consistency of estimated inertial parameters by discretizing the manipulated object with point masses \\cite{ayusawa_identification_2010, nadeau_fast_2022}. \n\tFixed-sized voxels can also be used to achieve the same effect~\\cite{song_probabilistic_2020}.\n\t\n\tFinally, the authors of \\cite{sundaralingam_-hand_2021} augment the traditional force-torque sensing system with tactile sensors to estimate the inertial parameters and the friction coefficient of a manipulated object. %\n\tOur contribution similarly makes use of an additional sensing modality in the form of a camera. \n\t\n\t\\subsection{Part-Level Object Segmentation}\n\t\n\tAlthough there is no formal definition of a ``part\" of a manipulated item, humans tend to follow the so-called \\emph{minima rule}: the decomposition of objects into approximately convex contiguous parts bounded by negative curvature minima \\cite{hoffman_parts_1984}.\n\tThis rule has provided inspiration for many part segmentation methods reviewed in \\cite{rodrigues_part-based_2018}, which benchmarks several techniques on the Princeton dataset \\cite{chen_benchmark_2009} and divides approaches into surface-based, volume-based, and skeleton-based algorithms.\n\tSurface-based and volume-based mesh segmentation algorithms are reviewed in \\cite{shamir_survey_2008}, highlighting a tradeoff between segmentation quality and the number of parts produced by the algorithm.\n\tSkeleton-based segmentation methods for 3D shapes, which capture structural information as a lower-dimensional graph (i.e., the shape's \\emph{skeleton}), are reviewed in \\cite{tagliasacchi_3d_2016}.\n\tThe approach in \\cite{lin_seg-mat_2020}, which is based on the medial axis transform~\\cite{li_q-mat_2015}, exploits prior 
knowledge of an object's skeleton to produce a \\emph{top-down} segmentation about 10 times faster than \\emph{bottom-up} methods that use local information only. \n\t\n\tThe method of Kaick et al. \\cite{kaick_shape_2014} also segments incomplete point clouds into approximately convex shapes, but is prohibitively slow for manipulation tasks, with an average computation time of 127 seconds on the Princeton benchmark \\cite{chen_benchmark_2009} as reported in \\cite{lin_seg-mat_2020}.\n\tIn contrast, learning-based methods for part segmentation can struggle to generalize to out-of-distribution shapes but are usually faster than geometric techniques~\\cite{dou_coverage_2021, lin_point2skeleton_2021, rodrigues_part-based_2018}.\n\tOur approach to segmentation does not apply any learned components but is fast enough for real-time use by a collaborative robot.\n\t\n\tA hierarchical volume-based segmentation technique (HTC) proposed in \\cite{attene_hierarchical_2008}, enabled by fast tetrahedralization \\cite{si_tetgen_2015} and quick convex hull computation \\cite{barber_quickhull_1996}, can perform well if a watertight mesh can be reconstructed from the input point cloud.\n\tThis technique, which we describe in detail in Section \\ref{sec:visual_part_segmentation}, is an important element of our part segmentation pipeline.\n\tAnother key component of our approach is the surface-based algorithm proposed in \\cite{lin_toward_2018}, which uses a dissimilarity metric to iteratively merge nearby mesh patches, taking special care to preserve part boundaries.\n\n\t\\section{Inertial Parameters of Homogeneous Parts}\n\t\n\tIn this section, we describe our inertial parameter identification technique, which assumes that an object has been segmented into its constituent parts.\n\tBy determining the mass of each segment, we are able to identify the full set of inertial parameters, or to provide an approximate solution when \\Cref{assum:Homogeneous} is not 
respected.%\n\t\n\t\\subsection{Notation}\n\tReference frames $\\ObjectFrame$ and $\\SensorFrame$ are attached to the object and to the force-torque (FT) sensor, respectively.\n\tThe reference frame $\\WorldFrame$ is fixed to the base of the robot and is assumed to be an inertial frame, such that the gravity vector expressed in the sensor frame is given by $\\Vector{g}_{s} = \\Rot{w}{s} [0,0,-9.81]^\\Transpose$.\n\tThe orientation of $\\ObjectFrame$ relative to $\\SensorFrame$ is given by $\\Rot{b}{s}$ and the origin of $\\ObjectFrame$ relative to $\\SensorFrame$, expressed in $\\WorldFrame$, is given by $\\Pos{b}{s}{w}$.\n\tThe skew-symmetric operator $\\Skew{\\cdot}$ transforms a vector $\\Vector{u} \\in\\Real^3$ into a $\\Real^{3\\times3}$ matrix such that $\\Skew{\\Vector{u}} \\Vector{v} = \\Vector{u} \\times \\Vector{v}$.\n\t\n\t\\subsection{Formulation of the Optimization Problem}\n\tFor a part $\\Part{j}$, under \\Cref{assum:Homogeneous}, the $k$-th moment of a mass distribution discretized into $n$ point masses is given by\n\t\\begin{equation}\n\t\t\\label{eqn:Moments}\n\t\t\\int_{V_j} \\Vector{\\Position}^k \\rho_j(\\Vector{\\Position}) dV_j \\approx \\frac{\\Mass}{n}\\sum_{i}^{n} (\\Vector{\\Position}_i)^k ~,\n\t\\end{equation}\n\twhere the position of the $i$-th point mass relative to $\\ObjectFrame$ is given by $\\Vector{\\Position}_i$.\n\tFor a homogeneous mass density, the centre of mass corresponds to the centroid\n\t\\begin{equation}\n\t\t\\Pos{\\Part{j}}{b}{b} = \\frac{1}{n}\\sum_{i}^{n}\\Vector{\\Position}_i~.\n\t\\end{equation}\n\tThe inertia tensor of the $i$-th point mass relative to $\\ObjectFrame$ is\n\t\\begin{align}\n\t\t\\InertiaMatrix(\\Vector{\\Position}_i) \n\t\t&= - \\Mass_i \\Skew{\\Vector{\\Position}_i} \\Skew{\\Vector{\\Position}_i}\\\\[1mm]\n\t\t&= \\Mass_i \\begin{bmatrix}\n\t\t\ty^2 + z^2 & -xy & -xz\\\\\n\t\t\t-yx & x^2+z^2 & -yz\\\\\n\t\t\t-zx & -zy & x^2+y^2\n\t\t\\end{bmatrix},\n\t\\end{align}\n\twhere 
$\\Vector{\\Position}^{\\Transpose}_i = [x, y, z]$ and $\\Mass_i$ is the mass of the point.\n\t\n\tThe part's inertial parameters with respect to the sensor frame $\\SensorFrame$ are\n\t\\begin{align}\n\t\t\\label{eqn:ParamsFromPointMasses}\n\t\t&{}^s\\Vector{\\phi}^{\\Part{j}} = \\bbm m,\\!\\! & {}^s\\Vector{c}^{\\Part{j}},\\!\\! & {}^s\\InertiaMatrix^{\\Part{j}} \\ebm^\\Transpose = \\\\[1mm]\n\t\t&\\Mass\\! \\bbm \n\t\t1\\\\ \n\t\t\\Pos{\\Part{j}}{s}{s}\\\\ \n\t\t\\Vech\\!\\left(\\Rot{b}{s} \\frac{1}{n}\\sum_{i}^{n}\\left(-\\Skew{\\Vector{\\Position}_i}\\!\\Skew{\\Vector{\\Position}_i}\\!\\right) \\Rot{s}{b} - \\Skew{\\Pos{\\Part{j}}{s}{s}}\\! \\Skew{\\Pos{\\Part{j}}{s}{s}}\\! \\right) \n\t\t\\ebm , \\notag\n\t\\end{align}\n\twhere $\\Rot{b}{s}$ is the rotation that aligns $\\ObjectFrame$ to $\\SensorFrame$, $\\Pos{\\Part{j}}{s}{s}$ is the translation that brings the centroid of $\\Part{j}$ to $\\SensorFrame$, and $\\Vech(\\cdot)$ is the \\emph{vector-half} operator defined in \\cite{henderson_vec_1979} that extracts the elements on and above the main diagonal.\n\t\n\tBy keeping $\\Vector{\\Position}_i$ fixed and by measuring $\\Pos{b}{s}{s}$ and $\\Rot{b}{s}$ with the robot's perception system, it becomes clear that only the part's mass $\\Mass$ needs to be inferred in \\Cref{eqn:ParamsFromPointMasses} as $\\Pos{\\Part{j}}{s}{s} = \\Rot{b}{s}\\Pos{\\Part{j}}{b}{b}+\\Pos{b}{s}{s}$.\n\tHence, assuming that the robot's perception system can provide $\\Vector{\\Position}_i$ and that $\\Pos{b}{s}{s}$ and $\\Rot{b}{s}$ are either known or measured, the inertial parameters of a homogeneous part depend solely on its mass.\n\tSimilarly, the inertial parameters of a rigid object can be expressed as a function of the masses of its constituent parts.\n\t\n\tFor ``stop-and-go\" motions, where force measurements are taken while the robot is immobile, only the mass and \\TextCOM are identifiable \\cite{nadeau_fast_2022}.\n\tNonetheless, a stop-and-go trajectory greatly reduces 
noise in the data matrix, because accurate estimates of the end-effector kinematics are not needed \\cite{nadeau_fast_2022}.\n\tAssuming that the manipulated object is built from up to four homogeneous parts\\footnote{For stop-and-go trajectories, the rank of the data matrix is four when using non-degenerate poses as described in the appendix of \\cite{nadeau_fast_2022}. Dynamic trajectories can increase the rank of $\\DataMatrix$ to 10, enabling mass identification of up to 10 unique homogeneous parts.}, finding the mass of each part enables the identification of the complete set of inertial parameters even when stop-and-go trajectories are performed.\n\tThis identification requires measuring the wrench $\\Vector{b}_j$ at time step $j$ and relating the wrench to the masses $\\Vector{\\Mass}$ via\n\t\\begin{equation}\n\t\t\\underset{\\Vector{b}_j}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\\Vector{\\Force}_s\\\\\n\t\t\t\t\t\\Vector{\\Torque}_s\n\t\t\t\t\\end{bmatrix}\n\t\t}}\n\t\t=\n\t\t\\underset{\\RedModel_j}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\t\\Vector{\\Gravity}_s & \\Vector{\\Gravity}_s & \\Vector{\\Gravity}_s & \\Vector{\\Gravity}_s\\\\\n\t\t\t\t\t-\\Skew{\\Vector{\\Gravity}_s}\\Pos{\\Part{1}}{s}{s} & -\\Skew{\\Vector{\\Gravity}_s}\\Pos{\\Part{2}}{s}{s} & -\\Skew{\\Vector{\\Gravity}_s}\\Pos{\\Part{3}}{s}{s} & -\\Skew{\\Vector{\\Gravity}_s}\\Pos{\\Part{4}}{s}{s}\n\t\t\t\t\\end{bmatrix}\n\t\t}}\n\t\t\\underset{\\Vector{\\Mass}}{\\underbrace{\n\t\t\t\t\\begin{bmatrix}\n\t\t\t\t\tm_{\\Part{1}}\\\\m_{\\Part{2}}\\\\m_{\\Part{3}}\\\\m_{\\Part{4}}\n\t\t\t\t\\end{bmatrix}\n\t\t}},\n\t\\end{equation}\n\tstacking $K$ matrices such that $\\DataMatrix = [\\RedModel_1^\\Transpose, ..., \\RedModel_K^\\Transpose]^\\Transpose$ and $\\Vector{b} = [\\Vector{b}_1^\\Transpose, ..., \\Vector{b}_K^\\Transpose]^\\Transpose$.\n\tMinimizing the Euclidean norm leads to the convex optimization problem\n\t\\begin{align}\n\t\t\\label{eqn:ObjFunc}\n\t\t&\\min_{\\Vector{\\Mass} \\in\\Real^4} \\quad\\vert\\vert \\DataMatrix \\Vector{\\Mass} - \\Vector{b} \\vert\\vert_2 
\\\\\n\t\t&\\quad\\text{\\emph{s.t.}} \\quad \\Mass_{\\Part{j}} \\ge 0 \\enspace \\forall j \\in \\{1, \\ldots, 4\\}, \\notag\n\t\\end{align}\n\twhich can be efficiently solved with standard methods.\n\t\n\t\\section{Visual Part Segmentation}\n\t\\label{sec:visual_part_segmentation}\n\tIn this section, we combine a part segmentation method that uses local information (e.g., surface normals) with a second method that relies on structural information (e.g., shape convexity).\n\tThe Python implementation of our part segmentation algorithm, which is described by \\Cref{algo:WarmStartedHTC}, includes an open-source version of \\cite{attene_hierarchical_2008} as well as our variant of \\cite{lin_toward_2018} that makes use of colour information.\n\t\n\tDefining the shape of an object from a point cloud involves reconstructing the object surface (i.e., shape reconstruction).\n\tIn this work, the ball-pivoting algorithm \\cite{bernardini_ball-pivoting_1999} is used owing to its speed and relative effectiveness on point clouds of low density.\n\tShape reconstruction can be challenging for objects with very thin parts (e.g., a saw) and a thickening of the object through voxelization is performed beforehand. \n\t\n\tAs stated by \\Cref{eqn:Moments}, the moments of a mass distribution are computed by integrating over the shape of the distribution. 
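To make the discretization concrete, the moments in \Cref{eqn:Moments} can be sketched in a few lines of Python (an illustrative fragment, not our released implementation; `points` is a hypothetical $n \times 3$ array holding the point-mass positions $\Vector{\Position}_i$ expressed in $\ObjectFrame$):

```python
import numpy as np

def part_inertial_params(points, mass):
    """Moments of a homogeneous part discretized into n equal point masses.

    Returns the total mass, the centroid (first moment over mass), and the
    inertia tensor about the body-frame origin, I = sum_i -m_i [r_i]x [r_i]x.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    m_i = mass / n                      # equal point masses under homogeneity
    centroid = points.mean(axis=0)
    inertia = np.zeros((3, 3))
    for r in points:
        # skew-symmetric matrix [r]x such that [r]x v = r x v
        rx = np.array([[0.0, -r[2], r[1]],
                       [r[2], 0.0, -r[0]],
                       [-r[1], r[0], 0.0]])
        inertia += -m_i * rx @ rx
    return mass, centroid, inertia
```

For a single point mass of 2\,kg at $(0,1,0)$, this yields the inertia tensor $\mathrm{diag}(2,0,2)$, matching the point-mass formula above; the full set of parameters then depends only on the part's mass, mirroring \Cref{eqn:ParamsFromPointMasses}.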
\n\tHence, volumetric part segmentation is sufficient for identification of the true inertial parameters of an object.\n\tTo obtain such a representation of the object from its surface mesh, tetrahedralization is performed via TetGen \\cite{si_tetgen_2015} and the resulting tetrahedral mesh is supplied as an input to the part segmentation algorithm.\n\t\n\tOur method makes use of the Hierarchical Tetrahedra Clustering (HTC) algorithm \\cite{attene_hierarchical_2008}, which iteratively merges clusters of tetrahedra such that the result is as convex as possible while also prioritizing smaller clusters.\n\tHTC maintains a graph with nodes representing clusters and edges representing adjacency, and with an edge cost based on the concavity and size of the connected nodes. \n\tThe concavity of a cluster is computed by subtracting its volume from the volume of its convex hull, and an edge cost is defined by\n\t\\begin{align}\n\t\t\\label{eqn:htc_cost}\n\t\tc_{ij} &= \\text{CVXHULL}\\left(C_i \\cup C_j\\right) - \\text{VOL}(C_i \\cup C_j) \\notag\\\\\n\t\t\\text{Cost}(i,j) &= \\begin{cases}\n\t\t\t\t\t\t\t\tc_{ij}+1, &\\text{if }c_{ij}>0\\\\\n\t\t\t\t\t\t\t\t\\frac{\\vert C_i\\vert^2+\\vert C_j\\vert^2}{N^2}, &\\text{otherwise}\n\t\t\t\t\t\t\t\\end{cases},\n\t\\end{align}\n\twhere $\\text{CVXHULL}(\\cdot)$ denotes the volume of the convex hull, $\\text{VOL}(\\cdot)$ denotes the volume of the cluster, $\\vert C_i\\vert$ is the number of elements in cluster $C_i$, and $N$ is the total number of elements.\n\tThe edge associated with the lowest cost is selected iteratively and the connected nodes are merged into a single cluster, resulting in the hierarchical segmentation.\n\t\n\tTo make part segmentation faster, we perform an initial clustering such that HTC requires fewer convex hull computations, by far its most expensive operation.\n\tThe initial clustering is provided through a bottom-up point cloud segmentation algorithm \\cite{lin_toward_2018} that clusters points together based on heuristic features and chooses a representative point for each cluster.\n\tTwo adjacent clusters are merged 
if the dissimilarity between their representative points is lower than some threshold $\\beta$ (n.b.\\ the $\\lambda$ symbol is used in \\cite{lin_toward_2018}). \n\tThe value of $\\beta$ increases with each iteration; the process halts when the desired number of clusters is obtained.\n\tIn our implementation, the dissimilarity metric is defined as:\n\t\\begin{equation}\n\t\t\\label{eqn:similarity_metric}\n\t\t\\text{D}(C_i,C_j) = \\lambda_p\\Norm{\\Vector{p}_i-\\Vector{p}_j} + \\lambda_l\\Norm{\\Vector{l}_i-\\Vector{l}_j} + \\lambda_n \\left(1-\\vert \\Vector{n}_i\\cdot \\Vector{n}_j\\vert\\right)\n\t\\end{equation}\n\twhere each $\\lambda$ is a tunable weight, $\\Vector{p}_i$ is the position of the representative point of cluster $C_i$, $\\Vector{l}_i \\in \\Natural^{3}$ is the RGB representation of its colour, and $\\Vector{n}_i$ is its local surface normal. \n\t\n\tTo enable accurate part segmentation, the initial clustering should not cross the boundaries of the parts that will subsequently be defined by the HTC algorithm.\n\tTherefore, initial clustering is stopped when the number of clusters (e.g., 50 in our case) is much larger than the desired number of parts.\n\tThe desired number of clusters does not need to be tuned on a per-object basis as long as it is large enough.\n\n\t\\SetAlgoSkip{SkipBeforeAndAfter}\n\t\\begin{algorithm}[h]\n\t\t\\DontPrintSemicolon\n\t\t\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\t\t\\Input{A point cloud}\n\t\t\\Output{A segmented tetrahedral mesh}\n\t\t\\nlset{1}Initialize similarity threshold $\\beta$ as done in \\cite{lin_toward_2018}\\;\n\t\tInitialize each point as a cluster\\;\n\t\t\\While{$number~of~clusters > desired~number$}{\n\t\t\t\\ForEach{existing cluster $C_i$}{\n\t\t\t\t\\ForEach{neighboring cluster $C_j$}{\n\t\t\t\t\t\\If{$\\text{D}(C_i,C_j)<\\beta$}{\n\t\t\t\t\t\tMerge $C_j$ into $C_i$\\;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t$\\beta \\leftarrow 2\\beta$\n\t\t}\n\t\t\\ForEach{point $\\Vector{p}$ 
at the border of two clusters}{\n\t\t\tMerge $\\Vector{p}$ into the most similar cluster\\;\n\t\t}\n\t\t\\nlset{2}Perform surface reconstruction and tetrahedralization\\;\n\t\tAssociate each tet. to its nearest cluster\\;\n\t\tInitialize a node in the graph for each cluster\\;\n\t\t\\ForEach{node}{\n\t\t\t\\If{two tet. from two nodes share a face}{\n\t\t\t\tCreate an edge between the nodes and compute edge-cost with \\Cref{eqn:htc_cost}\n\t\t\t}\n\t\t}\n\t\t\\While{$number~of~edges > 0$}{\n\t\t\tFind edge with lowest cost and merge its nodes\\;\n\t\t\tCreate a parent node that contains merged nodes\\;\t\n\t\t}\n\t\tLabel each tet. with its associated cluster number\\;\n\t\t\\caption{HTC (\\textbf{2}) with initial clustering (\\textbf{1})}\n\t\t\\label{algo:WarmStartedHTC}\n\t\\end{algorithm}\n\t\n\t\\section{Experiments}\n\tIn this section, each component of the proposed method is evaluated on 20 objects with standard metrics, and our entire identification pipeline is benchmarked in 80 scenarios.\n\tThe practicality of the proposed approach is also demonstrated through a real `hammer balancing act' experiment using relatively inexpensive robot hardware (see \\Cref{fig:demo}). \n\t\n\tTo conduct simulation experiments with realistic objects and evaluate the performance of part segmentation and identification, a dataset with ground truth inertial parameters and segments is needed.\n\tTo the best of the authors' knowledge, no such dataset is freely available.\n\tFrom CAD files contributed by the community, we built a dataset containing 20 commonly-used workshop tools. 
\n\tFor each object, our dataset contains a watertight mesh, a coloured surface point cloud with labelled parts, the object's inertial parameters, and a reference frame specifying where the object is usually grasped.\n\tThis dataset of realistic objects enables the evaluation of shape reconstruction, part segmentation, and inertial parameter identification.\n\n\t\\subsection{Experiments on Objects from Our Dataset} \\label{sec:experiments_on_dataset_objects}\n\tThe quality of a shape reconstructed from point cloud data is evaluated by computing the Hausdorff distance between the ground truth mesh and the reconstructed mesh, both scaled such that the diagonal of their respective bounding box is one metre in length.\n\tPart segmentation performed with our proposed variation of HTC is evaluated with the undersegmentation error (USE) \\cite{levinshtein_turbopixels_2009} and the global consistency error (GCE) \\cite{martin_database_2001, chen_benchmark_2009}, with results summarized in \\cref{tab:dataset_partseg_eval}.\n\tThe USE measures the proportion of points that cross segmentation boundaries, or \\emph{bleed out} from one segment to another.\n\tThe GCE measures the discrepancy between two segmentations, taking into account the fact that one segmentation can be more refined than the other.\n\tBoth evaluation metrics represent dimensionless error ratios and are insensitive to over-segmentation, \n\tas discussed in Section \\ref{sec:Discussion}.\n\t\n\t\\Cref{tab:AverageComputeTime} summarizes the runtime performance obtained by augmenting HTC with the initial clustering in \\Cref{algo:WarmStartedHTC}.\n\tWhile the average segmentation error is almost identical in both cases, \\Cref{algo:WarmStartedHTC} executes in about a third of the time, owing to the smaller number of convex hull computations performed. 
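As a rough illustration of how such a metric can be computed (our sketch, in the spirit of \cite{levinshtein_turbopixels_2009}; the exact normalization in the original formulation may differ), given per-point ground-truth part labels and predicted cluster labels:

```python
import numpy as np

def undersegmentation_error(gt_labels, pred_labels):
    """Sketch of the undersegmentation error (USE).

    For every ground-truth part, sum the sizes of all predicted clusters
    that overlap it, so clusters 'bleeding' across part boundaries are
    counted more than once; subtract the point count and normalize.
    """
    gt = np.asarray(gt_labels)
    pred = np.asarray(pred_labels)
    n = gt.size
    total = 0
    for g in np.unique(gt):
        in_g = (gt == g)
        for p in np.unique(pred[in_g]):       # clusters touching part g
            total += np.count_nonzero(pred == p)
    return (total - n) / n
```

A perfect segmentation yields a USE of zero, while clusters that straddle part boundaries increase the error.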
\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Average computation time and segmentation error per object from our dataset with standard deviations in parentheses. Initial clustering significantly reduces the runtime with little impact on the part segmentation.}\n\t\t\\label{tab:AverageComputeTime}\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule\n\t\t\tAlgorithm & USE & GCE & Time (s)\\\\\n\t\t\t\\midrule\n\t\t\tHTC & 0.1 (0.13) & 0.05 (0.10) & 9.73 (5.56)\\\\\n\t\t\tHTC with Initial Clustering & 0.1 (0.11) & 0.07 (0.11) & 3.48 (1.00)\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-1mm}\n\t\\end{table}\n\t\\begin{table}[h!]\n\t\t\\centering\n\t\t\\caption{Evaluation of part segmentation (USE and GCE) and shape reconstruction (Hausdorff). As expected, objects with a single part do not have GCE and USE errors while objects with many parts have larger segmentation errors.}\n\t\t\\label{tab:dataset_partseg_eval}\n\t\t\\begin{tabular}{cccccc}\n\t\t\t\\toprule\n\t\t\tObject & USE & GCE & Hausdorff & Mass & \\#Parts\\\\\n\t\t\t\t & & & (mm) & (g) & \\\\\n\t\t\t\\midrule\n\t\t\tAllen Key\t\t\t&0\t\t&0\t\t&8.2\t&128 &1\\\\\n\t\t\tBox Wrench\t\t\t&0\t\t&0\t\t&10.9\t&206 &1\\\\\n\t\t\tMeasuring Tape\t\t&0\t\t&0\t\t&11.4\t&136 &1\\\\\n\t\t\tRuler\t\t\t\t&0\t\t&0\t\t&14.8\t&9 \t &1\\\\\n\t\t\tScrewdriver\t\t\t&0.013\t&0.001\t&12.1\t&30 &2\\\\\n\t\t\tNut Screwdriver\t\t&0.002\t&0.002\t&16\t\t&81 &2\\\\\n\t\t\tRubber Mallet\t\t&0.011\t&0.009\t&15.8\t&237 &2\\\\\n\t\t\tBent Jaw Pliers\t\t&0.176\t&0.01\t&16.5\t&255 &3\\\\\n\t\t\tMachinist Hammer\t&0.018\t&0.012\t&13.6\t&133 &2\\\\\n\t\t\tPliers\t\t\t\t&0.123\t&0.041\t&10.7\t&633 &3\\\\\n\t\t\tC Clamp\t\t\t\t&0.103\t&0.077\t&9.6\t&598 &5\\\\\n\t\t\tAdjustable Wrench\t&0.117\t&0.098\t&14.2\t&719 &4\\\\\n\t\t\tHammer\t\t\t\t&0.08\t&0.098\t&14.7\t&690 &3\\\\\n\t\t\tFile\t\t\t\t&0.057\t&0.102\t&17.9\t&20 &3\\\\\n\t\t\tSocket Wrench\t\t&0.141\t&0.123\t&7.4\t&356 &5\\\\\n\t\t\tHacksaw\t\t\t\t&0.129\t&0.128\t&11.1\t&658 
&7\\\\\n\t\t\tClamp\t\t\t\t&0.259\t&0.181\t&17.6\t&340 &7\\\\\n\t\t\tVise Grip\t\t\t&0.158\t&0.287\t&17.2\t&387 &8\\\\\n\t\t\tElectronic Caliper\t&0.42\t&0.316\t&9.1\t&174 &14\\\\\n\t\t\tVise Clamp\t\t\t&0.296\t&0.373\t&15.1\t&225 &9\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-5mm}\n\t\\end{table}\n\t\\begin{figure}\n\t\t\\vspace{-3mm}\n\t\t\\centering\n\t\t\\begin{overpic}[width=1\\linewidth]{figs\/Real_Seg_vs_Pictures_Smaller}\n\t\t\t\\put(12,-1){(a)}\n\t\t\t\\put(43,-1){(b)}\n\t\t\t\\put(82.5,-1){(c)}\n\t\t\\end{overpic}\n\t\t\\vspace{-3mm}\n\t\t\\caption{Pictures of objects scanned by the RGB-D camera on the manipulator, next to their segmented meshes; points are located at the tetrahedra's centroids (screwdriver is rotated).}\n\t\t\\label{fig:realsegvspictures}\n\t\\end{figure}\n\tThe reconstruction and part segmentation can also be qualitatively evaluated on point clouds obtained from RGB-D images taken by a Realsense D435 camera at the wrist of the manipulator.\n\tTo complete a shape that is partly occluded by the support table, the point cloud is projected onto the table plane, producing a flat bottom that might not correspond to the true shape of the object. For instance, the left side of the rotated screwdriver point cloud in \\Cref{fig:realsegvspictures} is the result of such a projection.\n\t\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Standard deviations of the zero-mean Gaussian noise added to accelerations and force signals in simulation.}\n\t\t\\label{tab:noise_std_dev}\n\t\t\\begin{tabular}{ccccc}\n\t\t\t\\toprule\n\t\t\tScenario \t\t& Ang. Acc. & Lin. Acc. 
& Force & Torque\\\\\n\t\t\t\\midrule\n\t\t\tLow Noise \t\t& 0.25 & 0.025 & 0.05 & 0.0025\\\\\n\t\t\tModerate Noise \t& 0.5 & 0.05 & 0.1 & 0.005\\\\\n\t\t\tHigh Noise \t\t& 1\t\t& 0.1 & 0.33 & 0.0067\\\\\n\t\t\t\\bottomrule \n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\begin{figure}\n\t\t\\vspace{-1mm}\n\t\t\\centering\n\t\t\\includegraphics[width=1\\linewidth]{figs\/Kinematics_Rubber_Mallet_v2}\n\t\t\\caption{Kinematics of the identification trajectory performed with the Rubber Mallet. The norm of the velocity (black) goes slightly above the standard maximum speed for a cobot.}\n\t\t\\label{fig:kinematicsrubbermallet}\n\t\t\\vspace{-5mm}\n\t\\end{figure}\n\t\n\tThe proposed identification algorithm is tested in simulation under four noise scenarios where zero-mean Gaussian noise is added to the signals. The standard deviations of the noise values are based on the specifications of the Robotiq FT-300 sensor as described in \\Cref{tab:noise_std_dev}.\n\tThe identification trajectory used to generate the regressor matrix $\\DataMatrix$ in \\Cref{eqn:ObjFunc} has the robot lift the object and successively orient it such that gravity is aligned with each axis of $\\ObjectFrame$, stopping for a moment at each pose and moving relatively quickly between poses, as shown in \\Cref{fig:kinematicsrubbermallet}.\n\tThe average condition number~\\cite{gautier_exciting_1992} of the scaled regressor matrix for all simulated trajectories is 74, 92, and 99 for the low, moderate, and high noise scenarios, respectively.\n\tThese relatively low condition numbers confirm that our identification trajectory is non-degenerate.\n\n\tThe performance of our proposed algorithm (HPS) is compared against the classical algorithm proposed in \\cite{atkeson_estimation_1986} (OLS) and to the more modern algorithm proposed in \\cite{lee_geometric_2019} (GEO). 
\n\tThe latter was provided with a prior solution consisting of the true mass of the object and the \\TextCOM and inertia tensor resulting from a homogeneous mass distribution for the object ($\\alpha=10^{-5}$ was used).\n\tHPS only uses measurement data when\n\tboth the linear and angular accelerations are below 1 unit$\/s^2$,\n\tcorresponding to the \\textit{stalled} timesteps of the stop-and-go motion, whereas OLS and GEO use all available data.\n\t\n\tThe accuracy of inertial parameter identification is measured via the Riemannian geodesic distance ${e_{\\text{Rie}} = \\sqrt{\\frac{1}{2}\\sum\\log^{2}\\Vector{\\lambda}}}$~\\cite{lang_fundamentals_1999}, where $\\Vector{\\lambda}$ is the vector of eigenvalues of $P({}^s\\phi^1)^{-1}P({}^s\\phi^2)$ and $P(\\phi)$ is the \\textit{pseudoinertia} matrix, which is required to be symmetric positive-definite (SPD) \\cite{wensing_linear_2017}.\n\tThe metric $e_{\\text{Rie}}$ is the distance between the estimated and ground truth inertial parameters in the space of SPD matrices.\n\tTo assist interpretation of the identification error, we also compute the size-based error metrics proposed in \\cite{nadeau_fast_2022}, which use the bounding box and mass of the object to produce average \\textit{percentage} errors $\\bar{\\Vector{e}}_m$, $\\bar{\\Vector{e}}_C$, and $\\bar{\\Vector{e}}_J$ associated, respectively, with the mass, centre of mass, and inertia tensor estimates.\n\t\\Cref{tab:PerfoComparison} reports the mean of the entries of the error vector $\\bar{\\Vector{e}}$ for each quantity of interest. \n\t\n\t\\begin{table}[t]\n\t\t\\centering\n\t\t\\caption{Comparison between HPS (ours), OLS, and GEO with various levels of noise, indicating the percentage of solutions that were physically consistent (Cons).}\n\t\t\\label{tab:PerfoComparison}\n\t\t\\begin{tabular}{ccccccc}\n\t\t\t\\toprule\n\t\t\tNoise \t& Algo. & Cons. 
(\\%) & $\\bar{\\Vector{e}}_m$(\\%) & $\\bar{\\Vector{e}}_C$(\\%) & $\\bar{\\Vector{e}}_J$(\\%) & $e_{\\text{Rie}}$\\\\\n\t\t\t\\midrule\n\t\t\tNo\t & OLS & 100 & \\textbf{$<$0.1} & \\textbf{$<$0.1} & \\textbf{0.09} & \\textbf{0.03}\\\\\n\t\t\t\t & GEO & 100 & $<$0.1 & 1.35 & 53.22\t & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.27 & 0.1 & 10.28 & 0.72\\\\\n\t\t\tLow & OLS & 14 & 0.19 & 2.13 & $>$500 & N\/A\\\\\n\t\t\t\t & GEO & 100 & \\textbf{0.18}\t & 1.34 & 52.79 & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.40 & \\textbf{0.32} & \\textbf{11.12} & \\textbf{0.74}\\\\\n\t\t\tMod. & OLS & 4.5 & 0.36 & 5.33 & $>$500 & N\/A\\\\\n\t\t\t\t & GEO & 100 & \\textbf{0.35} & 1.37 & 51.73 & 1.14\\\\\n\t\t\t\t & HPS & 100 & 0.74 & \\textbf{0.48} & \\textbf{11.81} & \\textbf{0.77}\\\\\n\t\t\tHigh & OLS & 2 & 0.69 & 10.11 & $>$500 & N\/A\\\\\n\t\t\t \t & GEO & 100 & \\textbf{0.64} & 1.58 & 48.49 & 1.13\\\\\n\t\t\t\t & HPS & 100 & 2.79 & \\textbf{1.07} & \\textbf{15.00} & \\textbf{0.87}\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{0mm}\n\t\\end{table}\n\n\t\\subsection{Demonstration in Real Settings}\n\tTo test our proposed method in a realistic setting, we used a uFactory xArm 7 manipulator equipped with a RealSense D435 camera and a Robotiq FT-300 force-torque sensor, as shown in \\Cref{fig:gladyswieldingobject}.\n\tFirst, the hammer in \\Cref{fig:realsegvspictures} was scanned with the camera, producing 127 RGB-D images of the scene in about 30 seconds.\n\tThe object was then picked up at a predetermined grasping pose where the Robotiq 2F-85 gripper could fit into plastic holders attached to the object (enabling a stable grasp of the handle).\n\tA short trajectory that took about 10 seconds to execute was performed while the dynamics of the object were recorded at approximately 100 Hz.\n\tPoint cloud stitching, mesh reconstruction, and part segmentation can be performed concurrently while the robot executes the trajectory, since these operations take 2.87, 0.24, and 2.82 seconds, 
respectively.\n\tFinally, our proposed method identified the inertial parameters in about 0.5 seconds with MOSEK \\cite{andersen2000mosek}. \n\tUsing only its estimate of the \\TextCOMMA, the robot autonomously balanced the hammer on a cylindrical target with a radius of 17.5 mm.\n\tThe entire process is shown in our accompanying video, with summary snapshots provided in \\Cref{fig:demo}.\n\tIn contrast, both OLS and GEO returned inaccurate parameter estimates, causing hammer balancing to fail.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{overpic}[width=1\\linewidth]{figs\/Balancing_Hammer_Demo}\n\t\t\t\\put(12,-4){(a)}\n\t\t\t\\put(47,-4){(b)}\n\t\t\t\\put(82.5,-4){(c)}\n\t\t\\end{overpic}\n\t\t\\vspace{0.1mm}\n\t\t\\caption{The hammer is (a) scanned, (b) picked up for inertial identification, and (c) balanced onto the small green target.}\n\t\t\\label{fig:demo}\n\t\t\\vspace{-4mm}\n\t\\end{figure}\n\t\n\t\\section{Discussion}\n\t\\label{sec:Discussion}\n\tThis work exploits object shape information to produce fast and accurate inertial parameter estimates.\n\tThe object reconstructions used by our method to determine object shape are often already computed as components of planning and manipulation algorithms, limiting the computational burden introduced by our approach.\n\tThe decomposition of objects into parts also enables fast inertial parameter identification of bodies that have joints (e.g., hedge scissors), since the parameters can be trivially updated following a change in object configuration.\n\t\n\tWrong or inaccurate part segmentation may lead to erroneous parameter estimates.\n\tHowever, if \\Cref{assum:Homogeneous} holds, \\emph{over}-segmenting an object does not affect the result of the identification as for a given part, \\Cref{eqn:Moments} can be decomposed into\n\t\\begin{equation}\n\t\t\\label{eqn:Oversegmentation}\n\t\t \\int_{V_1} \\Vector{\\Position}^k \\rho_1(\\Vector{\\Position}) dV_1 + \\int_{V_2} \\Vector{\\Position}^k 
\\rho_2(\\Vector{\\Position}) dV_2 = \\int_{V} \\Vector{\\Position}^k \\rho(\\Vector{\\Position}) dV,\n\t\\end{equation}\n\twhich is true since $V=V_1 \\cup V_2$ and $\\rho_1(\\Vector{\\Position}) = \\rho_2(\\Vector{\\Position})$.\n\tSimilarly, if two conceptually distinct parts with the \\emph{same} mass density are erroneously combined into one, the result of the identification will remain unaffected.\n\tHowever, if parts with \\emph{different} mass densities are considered to be a single part by the algorithm, the identification will fail since \\Cref{eqn:Oversegmentation} does not hold when $\\rho_1(\\Vector{\\Position}) \\neq \\rho_2(\\Vector{\\Position})$.\n\t\n\tThe comparison in \\Cref{tab:PerfoComparison} suggests that OLS outperforms other algorithms in the noiseless scenario, which is expected since it is not biased by any prior information.\n\tHowever, OLS nearly always converges to physically inconsistent solutions and becomes inaccurate in the presence of even a small amount of sensor noise.\n\tThe GEO algorithm performs similarly across noise levels, possibly due to the very good prior solution (i.e., correct mass, homogeneous density) provided, corresponding to the exact solution for objects that have a single part. 
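The over-segmentation invariance discussed above can be checked numerically with a one-dimensional, point-sampled stand-in for the volume integrals (uniform density per part; the function and values below are illustrative only).

```python
def moment(points, density, k):
    """Discrete k-th moment of uniformly weighted sample points,
    a 1-D stand-in for the volume integral of x^k * rho(x) dV
    in the over-segmentation identity above."""
    return density * sum(x ** k for x in points)
```

Splitting a homogeneous point set into two parts leaves each moment unchanged, while assigning the two parts different densities breaks the identity, mirroring the failure case described above.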
\n\tOn average, HPS outperforms OLS and GEO for the identification of \\TextCOM and $\\InertiaMatrix$ when the signals are noisy.\n\tThe slightly higher $\\bar{\\Vector{e}}_m$ for HPS may be caused by the approximation made when using stalling motions~\\cite{nadeau_fast_2022}.\n\t\n\tExperiments with objects from our dataset do not reveal any obvious trends relating the quality of the shape reconstruction (measured via the Hausdorff distance), the quality of the part segmentation (measured via USE and GCE), and the quality of the inertial parameter identification (measured via $e_{\\text{Rie}}$).\n\tThis can be explained by the fact that an object's mass and shape largely determine the signal-to-noise ratios that any identification algorithm has to deal with.\n\t\n\tAs demonstrated by experiments with objects that are mostly symmetrical (e.g., the screwdriver), if the shape of the object is such that the parts' centroids are coplanar, the optimizer will `lazily' zero out the mass of some parts since they are not required to minimize \\Cref{eqn:ObjFunc}.\n\tAn improved version of HPS could use the hierarchy from HTC to intelligently define segments whose centroids are not coplanar.\n\t\n\t\\section{Conclusion}\n\t\\label{sec:Conclusion}\n\t\n\tIn this paper, we leveraged the observation that man-made objects are often built from a few parts with homogeneous densities. \n\tWe proposed a method to quickly perform part segmentation and inertial parameter identification.\n\tWe ran 80 simulations in which our approach outperformed two benchmark methods, and we demonstrated real-world applicability by autonomously balancing a hammer on a small target. 
\n\tOn average, our proposed algorithm performs well in noisy conditions and can estimate the full set of inertial parameters from `stop-and-go' trajectories that can be safely executed by collaborative robots.\n\tPromising lines of future work include formulating the optimization problem as a mixed-integer program where mass densities are chosen from a list of known materials, and improving the segmentation algorithm such that parts' centroids are never coplanar.\n\t\n\t\\bibliographystyle{ieeetr}\n\t\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Intro}\n\nAs the field of Artificial Intelligence matures and becomes ubiquitous, a growing number of systems have emerged in which people and agents work together. These systems, often called Human-Agent Systems or Human-Agent Cooperatives, have moved from theory to reality in many forms, including digital personal assistants, recommendation systems, training and tutoring systems, service robots, chat bots, planning systems and self-driving cars \\cite{amir2013plan,azaria2015strategic,barrett2017making,biran2017explanation,XAIP2017,jennings2014human,kleinerman2018providing,langley2017explainable,richardson2008coach,rosenfeld2017intelligent,rosenfeld2012learning,rosenfeld2015learning,salem2015would,sheh2017did,sierhuis2003human,traum2003negotiation,vanlehn2011relative,xiao2007commerce}. One key question surrounding these systems is the type and quality of the information that must be shared between the agents and the human users during their interactions. \n\nThis paper focuses on one aspect of this human-agent interaction \u2014 the internal level of explainability that agents using machine learning must have regarding the decisions they make. The overall goal of this paper is to provide an extensive study of this issue in Human-Agent Systems. 
Towards this goal, our first step is to formally and clearly define explainability in Section \\ref{sec:definitions}, as well as the concepts of interpretability, transparency, explicitness, and faithfulness that make a system explainable. Using these definitions, we provide a clear taxonomy of the \\emph{Why, Who, What, When, and How} of explainability and stress the relationship of interpretability, transparency, explicitness, and faithfulness to each of these issues. \n\nOverall, we believe that the solutions presented to all of these issues need to be considered in tandem as they are intertwined. The type of explainability needed directly depends on the motivation for the type of human-agent system being implemented and thus stems from the first question about the overall reason, or reasons, why the system must be explainable. If the system is human-centric, as is the case in recommendation \\cite{kleinerman2018providing,xiao2007commerce}, training \\cite{traum2003negotiation}, and tutoring systems \\cite{amir2013plan,vanlehn2011relative}, then the information will likely need to persuade the person to choose a certain action, for example through arguments about the agent's decision \\cite{rosenfeld2016providing}, its policy \\cite{rosenfeld2017intelligent} or its presentation \\cite{azaria2014agent}. If the system is agent-centric, such as in knowledge discovery or self-driving cars, the agent might need to provide information about its decision to help convince the human participant of the correctness of its solution, aiding in the adoption of these agent-based technologies \\cite{Ribeiro2016}. In both cases, the information the agent provides should build trust to ensure its decisions are accepted \\cite{abs-1806-00069,Guidotti2018,jennings2014human,lee2004trust}. Furthermore, these explanations might be necessary for legal considerations \\cite{doshi2017towards,vlek2016method}. 
In all cases we need to consider and then evaluate \\emph{how} these explanations were generated, presented, and if their level of detail correctly matches the system's need, something we address in Section \\ref{How}. \n\nThis paper is structured as follows. First, in Section \\ref{sec:definitions}, we provide definitions for the terms of explainability, interpretability, transparency, fairness, explicitness and faithfulness and discuss the relationship between these terms. Based on these definitions, in Section \\ref{Why} we present a taxonomy of three possibilities for \\emph{why} explainability might be needed, ranging from not helpful, beneficial and critical. In Section \\ref{Who}, we suggest three possible targets for \\emph{who} the explanation is geared for: ``regular users\", ``expert users\", or entities external to the users of the system. In Section \\ref{What}, we address \\emph{what} mechanism is used to create explanations. We consider six possibilities: directly from the machine learning algorithm, using feature selection and analysis, through a tool separate from the learning algorithm to model all definitions, a tool to explain a specific outcome, visualization tools and prototype analysis. In Section \\ref{when} we address \\emph{when} the generated explanations should be presented: before, after and\/or during the task execution. In Section \\ref{How} we introduce a general framework to evaluate explanations. Section \\ref{discuss} includes a discussion about the taxonomy presented in the paper, including a table summarizing previous works and how they relate. Section \\ref{conclusion} concludes.\n\n\n\\section{Definitions of Explainable Systems and Related Terms}\n\\label{sec:definitions}\n Several works have focused on the definitions of a system's explainability and also the related definitions of interpretability, transparency, fairness, explicitness and faithfulness. 
As we demonstrate in this section, of all of these terms, we believe that the objective of making a system explainable is the most central and important, for three reasons. First, chronologically, this term was introduced earliest and thus has the longest research history. Second, and possibly due to the first factor, this is the most general term. As we explain in this section, a system's level of explainability is created through the interpretations that the agent provides. These interpretable elements can be transparent, fair, explicit, and\/or faithful. Last, and most importantly, this term connotes the key objective for the system: facilitating the human user's understanding of the agent's logic. \n\n\\subsection{Theoretical Foundations for Explainability}\nIt has been noted that a thorough study of the term explanation would need to start with Aristotle, since whose time explanations and causal reasoning have been understood to be intrinsically intertwined \\cite{hoffman2017explaining}. Specific to computer systems, as early as 1982, expert systems such as MYCIN and NEOMYCIN were developed for encoding the logical process within complex systems \\cite{clancey1983epistemology,clancey1982neomycin}. The objective of these systems, as is still the case, was to provide a set of clear explanations for a complex process. However, no clear definition of what constituted an explanation was provided.\n\nWork by Gregor and Benbasat in 1999 defined the nature of explainability within ``intelligent\" or ``knowledge-based\" systems as a ``declaration of the meaning of words spoken, actions, motives, etc., with a view to adjusting a misunderstanding or\nreconciling differences\" \\cite{gregor1999explanations}. As they point out in their paper, this definition assumes that the explanation is provided by the provider of the information, in our case the intelligent agent, and that the explanation is geared to resolve some type of misunderstanding or disagreement. 
This definition is in line with other work that assumed that explanations were needed to help understand a system malfunction, an anomaly or to resolve conflict between the system and the user \\cite{gilbert1989explanation,ortony1987surprisingness,schank1986explanation}. Given this definition, it is not surprising that the first agent explanations were basic reasoning traces that assume the user will understand the technical information provided, without taking a user other than the system designer into account. As these explanations are not typically processed beyond the raw logic of the system, they are referred to as ``na\\\"ive explanations\" by previous work \\cite{sormo2004explanation}. In our opinion, explainability of this type is more appropriate for system debugging than for other uses.\n\n\nPossibly more generally, the Philosophy of Science community also provided several definitions of explainability. Most similar to the previous definition, work by Schank \\cite{schank1986explanation} specifies that explanations address anomalies where a person is faced with a situation that does not fit her internalized model of the world. This type of definition can be thought of as goal-based, as the goal of the explanation is to address a specific need (e.g. disharmony within a user's internalized model) \\cite{sormo2004explanation}. Thus, explanations focus on an operational goal of addressing why the system isn't functioning as expected.\n\nA second theory by van Fraassen \\cite{van198511} claims that an explanation is always an answer to an implicit or explicit why-question comparing two or more possibilities. As such, an explanation provides information about why possibility $S_0$ was chosen and not options $S_1 \\dots S_n$ \\cite{sormo2004explanation,van198511}. 
This definition suggests a minimum criterion that any explanation must fulfill, namely that it facilitates a user choosing a specific option $S_0$, as well as a framework for understanding explanations as answers to why-questions contrasting two or more states \\cite{sormo2004explanation}. One limitation of this approach is that the provided explanation has no use beyond helping the user understand why possibility $S_0$ was preferable relative to other possibilities. \n\nMost generally, a third theory by Achinstein \\cite{achinstein1983nature} focuses on explanations as a process of communication between people. Here, the goal of an explanation is to provide the knowledge a recipient requests from a designated sender. Accordingly, this theory does not necessarily require a complete explanation if the system's user does not require it. Consider a previously described example \\cite{sormo2005explanation}: a neural network is trained to compare two pictures of a certain type and gives a similarity measure, e.g. from 0 to 1, yet most people cannot understand how it came up with this score. Presenting the pictures to the user so she can validate the similarity for herself can itself serve as an explanation. As the very definition of a proper explanation is dependent on the interaction between the sender and the receiver, such an explanation is sufficient. Similarly, explanations can be motivated by many situations, and not exclusively by van Fraassen's why-questions. Conversely, a proper definition can and should be limited only to the information needed to address the receiver's request. \n\n\\subsection{The Need for Precisely Defining Explainability in Human-Agent Systems}\nRecently, questions have arisen as to the definition of explainability in machine learning and agent systems. 
An explosive growth of interest has been registered within various research communities, as is evidenced by workshops on Explanation-aware Computing (ExaCt), Fairness, Accountability, and Transparency (FAT-ML), the Workshop on Human Interpretability in Machine Learning (WHI), Interpretable ML for Complex Systems, the Workshop on Explainable AI, Human-Centred Machine Learning, and Explainable Smart Systems \\cite{CHI2018}. However, no consensus exists about the meaning of various terms related to explainability, including interpretability, transparency, explicitness, and faithfulness. It has been pointed out that the Oxford English Dictionary does not have a definition for the term ``explainable\" \\cite{DoranSB17}. One suggested definition of an explanation, a ``statement or account that makes something clear; a reason or justification given for an action or belief\", is not always true for systems that claim to be explainable \\cite{DoranSB17}. Thus, providing an accepted and unified definition of explainability and other related terms is of great importance.\n\nPart of the confusion likely stems from the fact that the terms ``explainability, interpretability and transparency\" are often used synonymously, while other researchers implicitly define these terms differently \\cite{DoranSB17,doshi2017towards,abs-1806-00069,Guidotti2018,Lipton16a,samek2017explainable}. Artificial intelligence researchers tend to use the term Explainable AI (XAI) \\cite{CHI2018,gunning2017explainable}, and focus on how explainable an artificial intelligence system is without necessarily directly addressing the machine learning algorithms. For example, work on explainable planning, coined XAIP, takes a system view of planning without considering any machine learning algorithms. Its authors distance themselves from machine learning and deep learning systems, which they claim are still far from being explainable \\cite{XAIP2017}. 
\n\nIn contrast, the machine learning community often focuses on the ``interpretability\" of a machine learning system by focusing on how a machine learning algorithm makes its decisions and how interpretations can be derived either directly or secondarily from the machine learning component \\cite{doshi2017towards,letham2015interpretable,rudin2014algorithms,vellido2012making,wang2017bayesian}. However, this term is equally poorly defined. In fact, one paper has gone so far as to recently write that, ``at present, interpretability has no formal technical meaning\" and that, ``the term interpretability holds no agreed upon meaning, and yet machine\nlearning conferences frequently publish papers which wield the term in a quasimathematical way\" \\cite{Lipton16a}. In these papers, there is no syntactical technical difference between interpretable and explainable systems, as both terms refer to aspects of providing information to a human actor about the agent's decision-making process. Previous work generally defined interpretability as the ability to explain or present the decisions of a machine learning system using understandable terms \\cite{doshi2017towards}. More technically, Montanavon et al. propose that ``an interpretation is the mapping of an abstract concept (e.g. a predicted class) into a domain that the human can make sense of\" which in turn forms explanations \\cite{post}. Similarly, Doran et al. define interpretability as ``a system where a user cannot only see, but also study and understand how inputs are mathematically mapped to outputs.\" To them, the opposite of interpretable systems are ``opaque\" or ``black box\" systems which yield no insight about the mapping between a decision and the inputs that yielded that decision \\cite{DoranSB17}. \n\nWithin the Machine Learning \/ Agent community, transparency has been informally defined to be the opposite of opacity or ``blackbox-ness\" \\cite{Lipton16a}. 
In order to clarify the difference between interpretability and transparency, we build upon the definition of transparency as an explanation of how the system reached its conclusion \\cite{sormo2005explanation}. More formally, transparency has been defined as a decision model where the decision-making process can be directly understood without any additional information \\cite{Guidotti2018}. It is generally accepted that certain decision models are inherently transparent and others are not. For example, decision trees, and especially relatively small decision trees, are transparent, while deep neural networks cannot be understood without the aid of an explanation tool outside of the decision process \\cite{Guidotti2018}. We consider this difference in the next section and again in Section \\ref{What}.\n\n\\label{inter_subsection}\n\n\n\n\n\\subsection{Formal Definitions for Explainability, Interpretability and Transparency in Human-Agent Systems}\n\\label{definitions}\nThis paper's first contribution is a clear definition of explainability and of the related terms: interpretability and transparency. In defining these terms, we also define how explicitness and faithfulness are used within the context of Human-Agent Systems. A summary of these definitions is found in Table \\ref{table2}. \n\nIn defining these terms, we focus on the features and records that are used as training input in the system, the supervised targets that need to be identified, and the machine learning algorithm used by the agent. We define $L$ as the machine learning algorithm that is created from a set of training records, $R$.\nEach record $r \\in R$ contains values for a tuple of ordered features, $F$.\nEach feature is defined as $f \\in F$. Thus, the entire training set consists of $R \\times F$. For example, assume that the features are $f1 = age$ (years), $f2 = height$ (cm), and $f3 = weight$ (kg), so $F = \\{age, height, weight\\}$. 
A possible record $r \\in R$ might be $r=\\{35,160,70\\}$. While this model naturally lends itself to tabular data, it can as easily be applied to other forms of input such as texts, whereby $f$ are strings, or images whereby $f$ are pixels. The objective of $L$ is to properly fit $R \\times F$ with regard to the labeled targets $t \\in T$. \n\n\n\\begin{table}[]\n\\begin{tabular}{|l|c|l|}\n\\hline\nTerm & \\begin{tabular}[c]{@{}c@{}}Notation\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}Short Description\\end{tabular} \\\\ \\hline\nFeature & $F$ & \\begin{tabular}[c]{@{}l@{}}One field within the input.\\end{tabular} \\\\ \\hline\nRecord & $R$ & \\begin{tabular}[c]{@{}l@{}}A collection of one item of information (e.g. picture, row in datasheet).\\end{tabular} \\\\ \\hline\nTarget & $T$ & \\begin{tabular}[c]{@{}l@{}}The labelled category to be learned. Can be categorical or numeric.\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Algorithm\\end{tabular} & $L$ & \\begin{tabular}[c]{@{}l@{}}The algorithm used to predict the value of $T$ from the collection of data \\\\(all features and records).\\end{tabular} \\\\ \\hline\n\\begin{tabular}[c]{@{}l@{}}Interpretation\\end{tabular} & $\\varmathbb I$ & \\begin{tabular}[c]{@{}l@{}}A function that takes as its input $F,R,T,$ and $L$ \\\\and returns a representation of $L$'s logic.\\end{tabular} \\\\ \\hline\nExplanation & \\multicolumn{1}{c|}{$\\varmathbb E$} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}} The human-centric objective for the user to understand $L$ using $\\varmathbb I$.\\end{tabular}} \\\\ \\hline\nExplicitness & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The extent to which $\\varmathbb I$ is understandable to the intended user.\\end{tabular}} \\\\ \\hline\nFairness & & \\begin{tabular}[c]{@{}l@{}}The lack of bias in $L$ for a field of importance (e.g. 
gender, age, ethnicity).\\end{tabular} \\\\ \\hline\nFaithfulness & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The extent to which the logic within $\\varmathbb I$ is similar to that of $L$.\\end{tabular}} \\\\ \\hline\nJustification & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}Why the user should accept $L$'s decision. \\\\Not necessarily faithful as no connection assumed between $L$ and $\\varmathbb I$.\\end{tabular}} \\\\ \\hline\nTransparency & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{\\begin{tabular}[c]{@{}l@{}}The connection between $\\varmathbb I$ and $L$ is both explicit and faithful.\\end{tabular}} \\\\ \\hline\n\\end{tabular}\n\\caption{Notation and short definitions of the key concepts of explainability, interpretability, transparency, fairness, and explicitness in this paper. The concepts of features, records, targets, machine learning algorithms, and explanations are also included as they are used to define the key concepts.}\n\\label{table2}\n\\end{table}\n\nWe define explainability as the ability of the human user to understand the agent's logic. This definition is consistent with several papers that considered the difference between explainability and interpretability within Human-Agent Systems. For example, Doran et al. define explainable systems as those that explain the decision-making process of a model using reasoning about the most human-understandable features of the input data \\cite{DoranSB17}. Following their logic, interpretability and transparency can help form explanations, but are only part of the process. Guidotti et al. state that ``an interpretable model is required to provide an explanation\" \\cite{Guidotti2018}, thus an explanation is obtained by the means of an interpretable model. \nSimilarly, Montavon et al. define explanations as ``a collection of features of the interpretable domain, that have contributed for a given example to produce a decision\" \\cite{post}. 
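To make the formal definitions above concrete, consider the following minimal sketch. It is a hypothetical illustration only: the toy records, the threshold learner, and the rule-producing interpretation function are invented for the example and are not a reference implementation. A learner $L$ is fit on $R \\times F$ against the targets $T$, and an interpretation function $\\varmathbb I$ returns a human-readable representation of $L$'s logic, from which a user can form an explanation $\\varmathbb E$:

```python
# Hypothetical illustration of the formal setup: features F, records R,
# targets T, a learner L, and an interpretation function I.
F = ("age", "height", "weight")             # ordered tuple of features
R = [(35, 160, 70), (62, 175, 90),          # each r in R holds one value per feature
     (24, 168, 62), (70, 158, 80)]
T = [0, 1, 0, 1]                            # labelled targets, one per record

def fit_L(R, T, feature_index=0):
    """A toy transparent learner: pick the single-feature threshold
    that best separates the targets T."""
    best_threshold, best_acc = None, -1.0
    for threshold in sorted(r[feature_index] for r in R):
        preds = [1 if r[feature_index] >= threshold else 0 for r in R]
        acc = sum(p == t for p, t in zip(preds, T)) / len(T)
        if acc > best_acc:
            best_threshold, best_acc = threshold, acc
    def L(r):  # the fitted model
        return 1 if r[feature_index] >= best_threshold else 0
    return L, best_threshold

def interpret(threshold, feature):
    """I: a faithful, explicit representation of L's logic as an if-then rule."""
    return f"IF {feature} >= {threshold} THEN target = 1 ELSE target = 0"

L, threshold = fit_L(R, T)
explanation_basis = interpret(threshold, F[0])
print(explanation_basis)  # the rule from which the user's explanation E is formed
```

Because the rule is read directly off $L$'s own decision logic, this toy interpretation is fully faithful; how explicit it is depends on the intended user.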
\n\nThus, the objective of any such system is explainability, meaning it provides an explanation $\\varmathbb E$, which is the human-centric aim of understanding $L$. An explanation is derived based on the human user's understanding of the connection between $T$ and $R \\times F$. The user will create $\\varmathbb E$ based on her understanding of an interpretation function, $\\varmathbb I$, that takes as its inputs $L$, $R \\times F$ and $T$ and returns a representation of the logic within $L$ that can be understood. Consequently, in this paper we refer to the explainability of systems as the understanding the human user has achieved from the explanation and do not use this term interchangeably with ``interpretability\" and ``transparency\". We reserve the terms ``interpretability\" and ``transparency\" for descriptions of the agent's logic. Specifically, we define $\\varmathbb E$ as:\n\\begin{equation} \n\\varmathbb E = \\varmathbb I(L(R \\times F,T))\n\\label{equ:explain}\n\\end{equation}\n\nWe claim that the connection between $\\varmathbb I$ and $L$, $R$, $F$ and $T$ will also determine the type of explanation that is generated. A globally explainable model provides an explanation for all outcomes within $T$ taking into consideration $R \\times F$, thus using all information in $\\{L,R,F,T\\}$. A locally explainable model provides explanations for a specific outcome, $t \\in T$ (and by extension for specific records $r \\in R$), using $\\{L,r,F,t\\}$ as input. \n\nWe use three additional terms, explicitness, faithfulness and justification, to quantify the relationships of $\\varmathbb I$ to $\\varmathbb E$ and to $L$, respectively. Following recent work \\cite{abs-1806-07538}, we refer to \\textit{explicitness} as the level to which the output of $\\varmathbb I$ is immediate and understandable. 
As we further explore in the next section, the level of explicitness depends on \\emph{who} the target of the explanation is and what her level of expertise is in understanding $\\varmathbb I$. It is likely that two users will obtain different values for $\\varmathbb E$ even given the same value for $\\varmathbb I$, making quantifying $\\varmathbb I$'s explicitness difficult due to this level of subjectivity. We define \\textit{faithfulness}, also previously defined as \\textit{fidelity} \\cite{4938655,Ribeiro2016}, as the degree to which the logic within $\\varmathbb I$ is similar to that of $L$. Especially within less faithful models, a concept of \\textit{completeness} was recently suggested to refer to the ability of $\\varmathbb I$ to provide an accurate description for all possible actions of $L$ \\cite{abs-1806-00069}. Given the similarity of these terms, we only use the term faithful due to its general connotation. Justification was previously defined as an explanation about why a decision is correct without any information about the logic by which it was made \\cite{biran2017explanation}. According to this definition, justifications can be generated even within non-interpretable systems. Consequently, justification requires no connection between $\\varmathbb I$ and $L$ and no faithfulness. Instead, justification methods are likely to provide implicit or explicit arguments about the correctness of the agent's decision, such as through persuasive argumentation \\cite{yetim2008framework}.\n\nIn order for a model to be transparent, two elements are needed: the decision-making model must be readily understood by the user, and the explanation must map directly to how the decision is made. More precisely, a transparent explanation is one where the connection between $\\varmathbb E$, $\\varmathbb I$ and $L$ is explicit and faithful, as the logic within $\\varmathbb I$ is readily understandable and identical to $L$, i.e. $\\varmathbb I \\simeq L$. 
When a tool or model is used to provide information about the decision-making process secondary to $L$, the system contains elements of interpretability, but not transparency. \n\nSection \\ref{What} discusses the different types of interpretations that can be generated, including transparent ones. Non-transparent interpretations will lack faithfulness, explicitness, or both. Examples include tools to create model and outcome interpretations, feature analysis, visualization methods and prototype analysis. Each of these methods will focus on different parameters within the input, $\\{R,F,T\\}$, and their relationship to $L$. Model and outcome interpretation tools create $\\varmathbb I$ without a direct connection to the logic in $L$. Feature analysis is a method of providing interpretations via analyzing a subset of features $f \\in F$. Prototype selection is a method of providing interpretations via analyzing a subset of records $r \\in R$. Visualization tools are used to understand the connection between $L$ and $T$, and thus $\\varmathbb I$ takes this interpretable form. \n\nTo help visualize the relationship between explainability, interpretability and transparency, see Figure \\ref{Figure1}. Note that interpretability includes six methods: transparent models, and the non-transparent possibilities of model and outcome tools, feature analysis, visualization methods, and prototype analysis. In the figure, interpretability points to the objective of explainability to signify that interpretability is a means for providing explainability, as per these terms' definitions in Table \\ref{table2}. Note the overlaps within the figure. Feature analysis can serve as a basis for creating transparent models, on its own as a method of interpretability, or as an interpretable component within model, outcome and visualization tools. 
Similarly, visualization tools can help explain the entire model as a global solution or as a localized interpretable element for specific outcomes of $t \\in T$. Prototype analysis uses $R$ as the basis for interpretability, and not $F$, and can be used for visualization and\/or outcome analysis of $r \\in R$. We explore these points further in Section \\ref{What}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=4.5in]{fig1v2.png}\n\\caption{A Venn Diagram of the relationship between Explainability, Interpretability and Transparency. Notice the centrality of Feature Analysis to 4 of the 5 interpretable elements.}\n\\label{Figure1} \\label{fig::example0}\n\\end{figure}\n\nThe level of interpretability and transparency needed within an explanation will be connected to either hard or soft constraints defined by the user's requirements. At times, there may be a hard constraint based on a legal requirement for transparency, or a soft constraint that transparency exists in cases where one suspects that the agent made a mistake or does not understand why the agent chose one possibility over others \\cite{gregor1999explanations,Guidotti2018,schank1986explanation,van198511,vlek2016method}. Explainability can be important for other reasons, including building trust between the user and system even when mistakes were not made \\cite{chen2014situation} -- something we now explore.\n\n\\section{\\emph{Why} Should a Human-Agent System be Explainable?}\\label{Why}\n\nWe believe that the single most important question one must ask about this topic is \\emph{why} we need an explanation, and how important it is for the user to understand the agent's logic. In answering this question one must establish whether a system truly needs to be explainable. 
We posit that one can generalize the need for explainability with a taxonomy of three levels: \n\\begin{enumerate}\n\\item Not helpful\n\\item Beneficial\n\\item Critical\n\\end{enumerate}\n\nAdjustable autonomy is a well-established concept within human-agent and human-robot groups that refers to the amount of control an agent\/robot has compared to the human user \\cite{goodrich2001experiments,scerri2001adjustable,yanco2004classifying}. Under this approach, the need for explainability can be viewed as a function of the degree of cooperation between the agent and the human user. If the agent is fully controlled by the human operator (e.g. teleoperated), then no explainability is needed as the agent is fully an extension of the human participant. Conversely, if the robot is given full control, particularly if the reason for the decision is obvious (e.g. a recommendation agent gives advice based on a well-established collaborative filtering algorithm), it again stands to reason that no explainability is needed. Additionally, Doshi-Velez and Kim pointed out that an explanation at times is not needed if there are no significant consequences for unacceptable results or the agent's decision is universally accepted and trusted \\cite{doshi2017towards}.\n\nAt the other extreme, many Human-Agent Systems are built whereby the agent's role is to support a human's task. In many of these cases, we argue that the agent's explanation is a critical element within the system. The need for an agent to be transparent or to explicitly and faithfully explain its actions is tied directly to task execution. For example, Intelligent Tutoring Systems (ITS) typically use step-based granularities of interaction whereby the agent confirms one skill has been learned or uses hints to guide the human participant \\cite{vanlehn2011relative}. The system must provide concrete explanations for its guidance (called \\textit{hints} in ITS terminology) to better guide the user. 
Similarly, explanations form a critical component of many negotiation, training, and argumentation systems \\cite{rahwan2003towards,rosenfeld2016providing,rosenfeld2016negochat,sierhuis2003human,traum2003negotiation}. For example, effective explanations might be critical to aid a person in making the final life-or-death decision within Human-Agent Systems \\cite{sierhuis2003human}. Rosenfeld et al.'s NegoChat-A negotiation agent uses arguments to present the logic behind its position \\cite{rosenfeld2016negochat}. Traum et al. explained the justification within choices of their training agent to better convince the trainee, as well as to teach the factors to\nlook at in making decisions \\cite{traum2003negotiation}. Rosenfeld and Kraus created agents that use argumentation to better persuade people to engage in positive behaviors, such as choosing healthier foods to eat \\cite{rosenfeld2016providing}. Azaria et al. demonstrate how an agent that learns the best presentation method for proposals given to a user improves their acceptance rate \\cite{azaria2014agent}. Many of these systems can be generally described as Decision Support Systems (DSS). A DSS is typically defined as helping people make semi-structured decisions requiring some human judgment and at the same time with some agreement on the solution method \\cite{Adam2008}. An agent's effective explanation is critical within a DSS as the system's goal is providing the information to help facilitate improved user decisions.\n\nA middle category in our taxonomy exists when an explanation is beneficial, but not critical. The Merriam-Webster dictionary defines beneficial as something that ``produces good or helpful results\"\\footnote{https:\/\/www.merriam-webster.com\/dictionary\/benefit}. In general, the defining characteristic of explanations within this category is that they are not needed in order for the system to behave optimally or with peak efficiency. 
\n\nTo date, many reasons have been suggested for making systems explainable \\cite{CHI2018,DoranSB17,doshi2017towards,gregor1999explanations,Guidotti2018,Lipton16a,sormo2005explanation}:\n\n\\begin{enumerate}\n\\item To justify its decisions so the human participant can decide to accept them (provide control)\n\\item To explain the agent's choices to guarantee safety concerns are met\n\\item To build trust in the agent's choices, especially if a mistake is suspected or the human operator does not have experience with the system\n\\item To explain the agent's choices to ensure fair, ethical, and\/or legal decisions are made\n\\item To facilitate knowledge \/ scientific discovery \n\\item To explain the agent's choices to better evaluate or debug the system in previously unconsidered situations\n\\end{enumerate}\n\nThe importance of these types of explanations will likely vary greatly across systems. If the user will not accept the system without this explanation, then a critical need for explainability exists. This can particularly be the case in Human-Agent Systems where the agent supports a life-or-death task, such as search and rescue or medical diagnostic systems, where ultimately the person is tasked with the final decisions \\cite{jennings2014human}. In these types of tasks the explanation is critical to facilitate a person's decision whether to accept the agent's suggestion and\/or to allow that person to decide if safety concerns are met, such as a patient's health or that of a person at-risk in a rescue situation. In other situations, explanations are beneficial for the overall function of the human-agent system, but are not critical. \n\nOne key and common example where explanations can range in significance from critical to beneficial is situations where explanations help instill trust. Previous work on trust between people in a work setting identified two types of trust that develop over time, ``knowledge-based\" and ``identification-based\" \\cite{inbook}. 
Of the two types of trust, they claim that knowledge-based trust requires less time, fewer interactions and less information to develop as it is grounded primarily in the other party's predictability. Identification-based trust requires a mutual understanding of the other's desires and intentions, and requires more information, interactions and time to develop. \n\nWe posit that previous work has focused on elements of this trust model in identifying what types of explanations are necessary to foster this type of trust within Human-Agent Systems. Following our previous definitions of interpretability and transparency, it seems that interpretable elements may be sufficient for knowledge-based definitions of trust, while transparent elements are required for identification-based models. When a person has not yet developed enough positive experience with the agent she interacts with, both knowledge-based and identification-based trust are missing. As it has been previously noted that people are slow to adopt systems that they do not understand and trust, and that ``if the users do not trust a model or a prediction they will not use it\" \\cite{Ribeiro2016}, even providing a non-transparent interpretable explanation will likely help instill confidence about the system's predictability, thus facilitating the user's knowledge-based trust in the system. Ribeiro et al. demonstrate how interpretability of this type is important for identifying models that have high accuracy for the wrong reasons \\cite{Ribeiro2016}. For example, they show that text classifiers are often wrongly based on the heading rather than the content. In contrast, image classifiers that capture the main part of the image, in a similar manner to the human eye, instill a feeling that the model is functioning correctly even if accuracy is not particularly high. 
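The faithfulness (fidelity) introduced earlier can be given a simple operational reading in this context: the rate at which a surrogate interpretation agrees with the black box it describes. The sketch below is a hypothetical illustration only; the black-box rule, the surrogate rule, and the sample records are all invented for the example:

```python
# Hypothetical sketch: quantifying faithfulness (fidelity) as the rate at
# which a simple surrogate interpretation I agrees with a black-box model L.
def black_box_L(r):
    # Stand-in for an opaque model (e.g. a deep network): some nonlinear rule.
    age, height, weight = r
    return 1 if (age * weight) > 2500 else 0

def surrogate_I(r):
    # A simple, explicit rule offered to the user as the interpretation.
    age, _, _ = r
    return 1 if age >= 40 else 0

def fidelity(records):
    """Fraction of records on which I's prediction matches L's."""
    agree = sum(surrogate_I(r) == black_box_L(r) for r in records)
    return agree / len(records)

records = [(35, 160, 70), (62, 175, 90), (24, 168, 62),
           (70, 158, 80), (45, 150, 40)]
print(f"fidelity = {fidelity(records):.2f}")  # 4 of 5 records agree: 0.80
```

A fidelity of 1.0 would indicate a fully faithful interpretation; lower values flag exactly the risk Ribeiro et al. describe, where a plausible-looking interpretation diverges from what $L$ actually computes.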
\n\nHowever, it has been claimed that when the person suspects the agent has made a mistake and\/or is unreliable, the agent should act with transparency, and not merely be interpretable, as explanations generated from transparent methods will aid the user to trust the agent in the future \\cite{XAIP2017}. In extreme cases, if the user completely disregards the agent, then the human-agent system breaks down, making transparent explanations critical to help restore trust. Furthermore, explanations based on $L$'s transparency may be needed to help facilitate the higher level of identification-based trust. Only transparent interpretations directly link $L$ and $\\varmathbb I$, thus providing full information about the agent's intention. We suggest that designers of systems that require this higher level of trust, such as health-care \\cite{crockett2016data}, recommender systems \\cite{knijnenburg2012explaining}, planning \\cite{XAIP2017} and human-robot rescue systems \\cite{rosenfeld2017intelligent,salem2015would}, should make their systems transparent, and not merely interpretable. \n\nOther types of explanations are geared towards people beyond the immediate users of the system. Examples of these types of explanations include those designed for legal and policy experts to confirm that the decisions \/ actions of the agent fulfill legal requirements such as being fair and ethical \\cite{DoranSB17,doshi2017towards,dwork2012fairness,Garfinkel2017,Guidotti2018}. Both the EU and UK governments have adopted guidelines requiring agent designers to provide users information about agents' decisions. In the words of the EU's ``General Data Protection Regulation\" (GDPR), users are legally entitled to obtain a ``meaningful explanation of the logic involved\" in these decisions. Additional legislation exists to ensure that agents are not biased against any ethnic or gender groups \\cite{doshi2017towards,Guidotti2018} such that they demonstrate fairness \\cite{dwork2012fairness}. 
Similarly, the ACM has published guidelines for algorithmic accountability and transparency \\cite{Garfinkel2017}. \nHere, the system's explanation is not critical for the effective performance of the agent, but instead confirms that a secondary legal requirement is being met. \n\nExplanations geared beyond the immediate user can also be those geared for researchers to help facilitate scientific knowledge discovery \\cite{doshi2017towards,Guidotti2018} or for system designers to evaluate or test a system \\cite{doshi2017towards,sormo2004explanation,sormo2005explanation}. For example, a medical diagnostic system may work with peak efficiency exclusively as a black box, and users may be willing to rely on this black box as the agent is trusted due to an exemplary historical record. Nonetheless, explanations can still be helpful for knowledge discovery, helping researchers gain understanding of various medical phenomena. Explainability has also been suggested as being necessary for properly evaluating a system or for the agent's designer to confirm that the system is properly functioning, even within situations that were not considered when the agent was built. For example, Doshi-Velez and Kim claimed that due to the inherent inability to enumerate all possible situations, it is impossible for a system designer to evaluate an agent in each of them \\cite{doshi2017towards}. Explanations can be useful in these situations to help make evident any possible gaps between an agent's formulation and implementation and its performance. In all cases, the explanation is not geared to the end-user of the system, but rather to an expert user who requires the explanation for a reason beyond the day-to-day operation of the system. \n\nAs we have shown in this section, the question about explainability can be divided into questions about its necessity, e.g. not helpful, beneficial or critical, which is directly connected to the objective of that explanation. 
From a user perspective, the primary objective of the explanation is related to factors that help her use the system, and particularly elements that help foster trust. In these cases, a system may need to be transparent, even if this level of explanation entails a sacrifice of the system's performance. We further explore this possibility and relationship in Section \\ref{What}. \nAt times, explanations are needed or beneficial for entities beyond the typical end-user, such as the designer, researcher or legal expert. As the objective of explanations of this type is different, it stands to reason that the type of explanation may be fundamentally different based on \\emph{who} the target of this information is, something we address in the next section. This in turn may impact the type of interpretation the agent must present, something we explore in Section \\ref{What}. \n\n\\section{\\emph{Who} is the Target of the Explanation?}\n\\label{Who}\nThe type of interpretable element needed to form the basis of the explanation is highly dependent on the question of \\emph{who} the explanation is for. We suggest three possibilities:\n\\begin{enumerate}\n\\item Regular user\n\\item Expert user\n\\item External entity\n\\end{enumerate}\n\n\nThe level of explanation detail needed depends on \\emph{why} the Human-Agent System needs the user to understand the agent's logic (Section \\ref{Why}) and how the explanation has been generated (Section \\ref{What}). If the need for explanation is for legal purposes, then it follows that legal experts need the explanation, and not the regular user. Similarly, it stands to reason that the type of explanation that is given should be directed specifically to this population. If the purpose of the explanation is to support experts' knowledge discovery, then the explanation should be directed towards researchers with knowledge of the specific problem. 
In these cases, the system might not even need to present its explanations to the regular users and may thus only focus on presenting information to these experts. Most systems will still likely benefit by directing explanations to the regular users to help them better understand the system's decisions, thus aiding in their acceptance and\/or trust. In these cases, the system should be focused on providing justifications in addition to providing the logic behind its decisions, through arguments \\cite{rosenfeld2016providing,yetim2008framework} and\/or through Case Based Reasoning \\cite{corchado2003constructing,kim2014bayesian,kwon2004applying} that help reassure the user about the correctness of the agent's decision.\n\nThe same explanation might be considered extremely helpful by a system developer, but useless by a regular user. Thus, the expertise level of the target will play a large part in defining an explanation and how explicit $\\varmathbb I$ is. Deciding on how to generate and present $\\varmathbb I$ will be covered in later sections.\n\nSimilarly, what level of detail constitutes an adequate explanation likely depends on precisely how long the user will study the explanation. If the goal is knowledge discovery and\/or complying with legal requirements, then an expert will likely need to spend large amounts of time meticulously studying the inner workings of the decision-making process. In these cases, it seems likely that great amounts of detail regarding the justification of the explanations will be necessary. If a regular user is assumed, and the goal is to build user trust and understanding, then shorter, very directed explanations are likely more beneficial. This touches upon the larger danger that additional information may overload a given user \\cite{shrot2014crisp}.\n\nAt times, the recipient of the explanation is not the user directly interacting with the system. 
This is true in cases where explanations are mandated by an external regulative entity, such as is proposed by the EU's GDPR regulation. In this case, the system must follow explanation guidelines provided by the external entity. In contrast, developers providing explanations to users will typically follow different guidelines, such as user usability studies. As these two types of explanations are not exclusive, it is possible that the agent will generate multiple types of explanations for the different targets (e.g. the user and the regulatory entity). In certain types of systems, such as security systems, multiple potential targets of the explanation also exist. Vigano and Magazzeni explain that security systems have many possible targets, such as the designer, the attacker, and the analyst \\cite{vigano2018explainable}. Obviously, an explanation provided for the designer can be very dangerous in the hands of an attacker. Thus, aside from the question of how ``helpful\" an explanation is for a certain type of user, one must consider what the implications of providing an unsuitable explanation are. In these cases, the explanation must be provided for a given user while also considering the implications on the system's security goals. \n\n\\section{\\emph{What} Interpretation can be Generated?}\\label{What}\nOnce we have established the \\emph{why} and \\emph{who} about explanations, a key related question one must address is \\emph{what} interpretation can be generated as the basis for the required explanation. Different users will need different types of explanations, and the interpretations required for effective explanations will differ accordingly \\cite{vigano2018explainable}. 
We posit that six basic approaches exist as to how interpretations can be generated:\n\\begin{enumerate}\n\\item Directly from a transparent machine learning algorithm \n\\item Feature selection and\/or analysis of the inputs\n\\item Using an algorithm to create a post-hoc model tool\n\\item Using an algorithm to create a post-hoc outcome tool\n\\item Using an interpretation algorithm to create a post-hoc visualization of the agent's logic\n\\item Using an interpretation algorithm to provide post-hoc support for the agent's logic via prototypes\n\\end{enumerate}\n\nIn Figure \\ref{fig::Explicit-Faithful} we describe how these various methods for generating interpretations have different degrees of faithfulness and explicitness. Each of these methods contains some level of trade-off between explicitness and faithfulness. For example, as described in Section \\ref{definitions}, transparent models are inherently more explicit and faithful than other possibilities. Nonetheless, we present this figure only as a guideline, as many implementations and possibilities exist within each of these six basic approaches. These differences will impact the levels of both faithfulness and explicitness, something we indicate via the arrows pointing to both higher levels of faithfulness and explicitness for a specific implementation. \n\\begin{figure}\n\\label{Figure2}\n\\centering\n\\includegraphics[width=3.3in]{Explicit-Faithful.pdf}\n\\caption{Faithfulness versus explicitness within the six basic approaches for generating interpretations}\n\\label{fig::Explicit-Faithful}\n\\end{figure}\n\n\\subsection{Generating Transparent Interpretations Directly from Machine Learning Algorithms}\n\\label{What-transparent}\nThe first approach, and the most explicit and faithful method, is to generate $\\varmathbb I$ directly from the output of the machine learning algorithm, $L$. These types of interpretations can be considered ante-hoc, or ``before this\" (e.g. 
an explanation is needed), as this type of connection between $\\varmathbb I$ and $L$ facilitates providing interpretations at any point, including as the task is being performed \\cite{ante-2018,holzinger2017we}.\nThese transparent algorithms, often called white box algorithms, include decision trees, rule-based methods, k-nn (k-nearest neighbor), Bayesian and logistic regression \\cite{dreiseitl2002logistic}. As per our definitions in Section \\ref{definitions}, these algorithms have not been designed for generating interpretations, but interpretations can be readily derived from the understandable logic inherent in the algorithms. As we explain in this section, all of these algorithms are faithful, and are explicit to varying degrees. A clear downside to these approaches is that one is then limited to these machine learning algorithms, and\/or a specific algorithmic implementation. It has been previously noted that an inverse relationship often exists between machine learning algorithms' accuracy and their explainability \\cite{Guidotti2018,gunning2017explainable}. Black box algorithms, especially deep neural networks but including other less explainable algorithms such as ensemble methods and support vector machines, are often used due to their exceptional accuracy on some problems. However, these types of algorithms are difficult to glean explicit interpretations from and are typically not transparent \\cite{dreiseitl2002logistic}. Figure \\ref{fig::Explicit-Predict} is based on previous work \\cite{Explain2018,gunning2017explainable} and quantifies the general relationship between algorithms' explicitness and accuracy. This figure describes the relationships as they stand at the time of writing, and may change as algorithmic solutions develop and evolve. 
Additionally, this figure may be somewhat over-simplified, as we now describe.\n\n\\begin{figure}\n\\label{Figure3}\n\\centering\n\\includegraphics[width=3.3in]{Explicit-Predict.pdf}\n\\caption{Typical trade-off between prediction accuracy versus explicitness}\n\\label{fig::Explicit-Predict}\n\\end{figure}\n\nDecision trees are often cited as the most understandable (e.g. explicit) models \\cite{Explain2018,doshi2017towards,Freitas2014,gunning2017explainable,quinlan1986induction}. The hierarchical structure inherent in decision trees lends itself to understanding which attributes are most important, of second-most importance, etc. \\cite{Freitas2014}. Furthermore, assuming the size of the tree is relatively small due to Occam's Razor \\cite{murphy1993exploring}, the if-then rules that can be derived directly from decision trees are both particularly explicit and faithful \\cite{Freitas2014,RosenfeldSGBHL14}. \n\nHowever, in practice not all decision trees are easily understood. Large decision trees with hundreds of nodes and leaves are often more accurate than smaller ones, despite the assumption inherent within Occam's Razor \\cite{murphy1993exploring}. Such trees are less explicit, especially if they contain many attributes and\/or multiple instances of nodes using the same attribute for different conditions. Assuming the decision tree is too large to fully understand (e.g. thousands of rules) \\cite{hara2016making} and\/or overfitted due to noise in the training data \\cite{Freitas2014}, it will lose its explicitness. One approach to address this issue is suggested by Last and Maimon \\cite{last2004compact}, who reason about the added value of additional attributes versus the complexity they add, facilitating more explicit models.\n\nClassification rules \\cite{clark1989cn2,michalski1999learning} have also been suggested as a highly explicit machine learning model \\cite{Explain2018,Freitas2014,gunning2017explainable}. 
As is the case with decision trees, the if-then rules within such models provide faithful interpretations and are potentially explicit. The flat, non-hierarchical structure in such models can be an advantage in allowing the user to focus on individual rules separately, which at times has been shown to be advantageous \\cite{clancey1983epistemology,Freitas2014}. However, in contrast to decision trees, this structure does not inherently give a person insight as to the relative importance of the rules within the system. Furthermore, conflicts between rules need to be handled, often through an ordered rule-list, adding to the model's complexity and reducing its level of explicitness.\n\nNearest neighbor algorithms, such as k-nn, can potentially be transparent machine learning models as they can provide interpretations based on the similarity between an item needing interpretation and other similar items. This is reminiscent of the picture classification example in Section \\ref{sec:definitions}, as the person is actually performing an analysis similar to k-nn in understanding why certain pictures are similar. This process is also similar to the logic within certain Case Based Reasoning algorithms, which often also use logic akin to k-nn algorithms to provide an interpretation for why two items are similar \\cite{sormo2005explanation}. However, as has been previously pointed out, these interpretations are typically only explicit if k is kept small, e.g. k=1 or close to 1 \\cite{sormo2005explanation}. Furthermore, k-nn is a lazy model that classifies each new instance separately. As such, every instance could potentially have a different ``interpretation'', making this a local interpretation. In contrast, both decision trees and rule-based systems construct general rules that are to be applied across all instances \\cite{Freitas2014}.
In addition, if the number of attributes in the dataset is very large, it might be difficult for a person to appreciate the similarities and differences between different instances, again reducing the explicitness of the model.\n\nBayesian network classifiers have also been suggested as another transparent machine learning model. Knowing the probability of a successful outcome is often needed in many applications, something that probabilistic models, including Bayesian models, excel at \\cite{bellazzi2008predictive}. Bayesian models have been previously suggested to be the most transparent of these types of models as each attribute can be independently analyzed and the relative strength of that attribute be understood \\cite{Freitas2014}. This approach is favored in many medical applications for this reason \\cite{zupan2000machine,kononenko1993inductive,lavravc1999selected}. More complex, non-na\\\"ive Bayesian models can be constructed \\cite{cheng1999comparing}, although one may then potentially lose both model accuracy and transparency.\n\nSimilar to Bayesian models, logistic regression also outputs outcome probabilities by fitting the output of its regression model to values between 0 and 1. The logit function inherent in this model is also constructed from probabilities-- here in the form of log-odds \/ odds-ratios. This makes the model popular for creating medical applications \\cite{bagley2001logistic,katafigiotis2018stone}. At times, the interpretations that can be generated by these relationships are explicit \\cite{dreiseitl2002logistic}.\n\nSupport Vector Machines (SVM) are based on finding a hyperplane to separate between different instances and are potentially explicit, particularly if a linear kernel is used \\cite{bellazzi2008predictive}. Once again, if many attributes exist in the model, the explicitness of the model might be limited even if a linear kernel is used.
An SVM becomes even less explicit if more complex kernels are used, including RBF and polynomial kernels. As is the case with the last three of these algorithms (SVM, k-nn and Bayesian), feature selection \/ reduction could significantly help the explicitness of the model, something we explore in the next section.\n\nAs no one algorithm provides both high accuracy and explicitness, it is important to consider new machine learning algorithms that include explainability as an objective within the learning algorithm. One example of this approach is work by Kim, Rudin and Shah, who have suggested a Bayesian Case Model for case-based reasoning \\cite{kim2014bayesian}. Another example is introduced by Lou et al. \\cite{LouCG12}. Their generalized additive models (GAMs) combine univariate models called shape functions through a linear function. On one hand, the shape functions can be arbitrarily complex, making GAMs more accurate than simple linear models. On the other hand, GAMs do not contain any interactions between features, making them more explicit than black box models. Lou et al. also suggested adding selected terms of interacting pairs of features to standard GAMs \\cite{lou2013accurate}. This method increases the accuracy of the models, while maintaining better explicitness than black box methods. Caruana et al. propose an extension of the GAM, GA$^2$M, which considers pairwise interactions between features, and provide a case study showing its success in accurately and transparently explaining a health-care dataset \\cite{Caruana2015}.\n\nWe believe these approaches are worthy of further consideration and provide an important future research area, as new combinations of machine learning algorithms that provide both high accuracy and explainability could potentially be developed. Several of these methods use an element of feature analysis as the basis of their transparency \\cite{Caruana2015,last2004compact,LouCG12,lou2013accurate}.
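The additive structure that makes GAMs relatively explicit can be illustrated with a toy backfitting loop over univariate shape functions. This is a simplification under our own assumptions (synthetic data, binned-mean smoothers), not the cited authors' implementations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

def binned_mean_smoother(x, r, bins=20):
    """Crude univariate 'shape function': mean residual within equal-width bins."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    means = np.array([r[idx == b].mean() if (idx == b).any() else 0.0
                      for b in range(bins)])
    return means[idx]

# Backfitting: repeatedly refit each shape function to the partial residuals.
intercept = y.mean()
f = [np.zeros(len(y)), np.zeros(len(y))]
for _ in range(10):
    for j in range(2):
        resid = y - intercept - f[1 - j]
        f[j] = binned_mean_smoother(X[:, j], resid)
        f[j] -= f[j].mean()  # center each shape function for identifiability

mse = np.mean((y - (intercept + f[0] + f[1])) ** 2)
```

Because the model is a sum of univariate pieces, each learned shape function can be plotted and inspected on its own, which is exactly the source of a GAM's explicitness.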
In general, feature selection can be a critical element in creating transparent and non-transparent interpretations, as we now detail.\n\n\\subsection{Generating Interpretations from Feature Selection \/ Analysis}\n\\label{What-feature analysis}\nA second approach to create the interpretation, $\\varmathbb I$, is through performing feature selection and\/or feature analysis of all features, $F_1 \\ldots F_i$, before or after a model has been built. Theoretically, this approach can be used alone and exclusively to generate interpretations within the non-transparent ``black box'' algorithms, or in conjunction with the above ``white box'' algorithms to help further increase their explicitness. Feature selection has long been established as an effective way of building potentially better models which are simpler and thus better overcome the curse of dimensionality \\cite{guyon2003introduction}. Additionally, models with fewer attributes are potentially more explicit as the true causal relationship between the dependent and independent variables is clearer and thus easier to present to the user \\cite{Kononenko99}. The strong advantage of this approach is that the information presented to the user is generated directly from the mathematical relationship between a small set of features and the target being learned.\n\nThree basic types of feature selection approaches exist: filters, wrappers, and embedded methods. We believe that filter methods are typically best suited for generating explicit interpretations as the analysis is derived directly from the data without any connection to a specific machine learning model \\cite{guyon2003introduction,Saeys2007survey}. Univariate scores such as information gain or $\\chi^2$ can be used to evaluate each of the attributes independently. Either the top $n$ features could then be selected, or only those with a score above a previously defined threshold.
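A univariate filter of this kind takes only a few lines. The sketch below scores each feature independently with scikit-learn's chi-squared test and keeps the top two; the dataset and the choice of scorer are our own illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)
# Score every feature independently, then keep the n highest-scoring ones.
selector = SelectKBest(score_func=chi2, k=2).fit(X, y)
print("score per feature:", selector.scores_)
print("kept feature indices:", selector.get_support(indices=True))
```

The per-feature scores themselves are part of the interpretation: they tell the user which attributes matter most before any model is trained.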
The user's attention could then be focused on relationships between these attributes, facilitating explicitness. Multivariate filters, such as CFS \\cite{hall1999correlation}, allow us to potentially discover interconnections between attributes. The user's attention could again then be focused on this small subset of features with the assumption that interrelationships between features have become more explicit. Previous work by Vellido et al. \\cite{vellido2012making} recommends using principal component analysis (PCA) to generate interpretations. Not only does PCA reduce the number of attributes needing to be considered, but the new features generated by PCA are linear combinations of the original ones. As such, the user could understand an explanation based on these interrelationships, especially if both the number and size of these derived features are small.\n\nAs filter methods are independent of the machine learning algorithm used, it has been suggested that this approach can be used in conjunction with black box algorithms to make them more explicit \\cite{vellido2012robust}. One example is previous work that used feature selection to reduce the number of features from nearly 200 to 3 before using a neural network for classification \\cite{vellido2012robust}. As neural networks are becoming increasingly popular due to their superior accuracy in many datasets, we believe this is a general approach that is worth consideration to help make neural networks more explicit.\n\n\\subsection{Tools to Generate Model Interpretations Independently from L}\n\\label{What-Model outcome}\nThe above methods are faithful in that the transparent algorithms and feature analysis are done in conjunction with $L$. However, other approaches exist that create $\\varmathbb I$ as a process independent of the logic within $L$.
In the best case, $\\varmathbb I$ does faithfully approximate the actual and complete logic within $L$, albeit found differently, and thus represents a reverse-engineered version of the logic within $L$ \\cite{Augasta2012}. Even when $\\varmathbb I$ is not 100\\% faithful, the goal is to be as faithful and explicit as possible, making these approaches a type of metacognition process, or reasoning about the reasoning process (e.g. $L$) \\cite{cox2011metareasoning}. A key difference within the remaining approaches in this section is that $\\varmathbb I$ is created through an analysis after $L$'s learning has been done, something referred to as postprocessing \\cite{Strumbelj2010} or post-hoc analysis \\cite{Lipton16a,post}. Examples of post-hoc approaches that we consider in the remainder of this section include: model and outcome interpretations, visualization, and prototyping similar records.\n\nWhile disconnecting $L$ and $\\varmathbb I$ can lead to a loss of faithfulness, it can also bring other benefits and challenges. Designing tools that focus on $\\varmathbb I$ could potentially lead to very explicit models, something we represent in Figure \\ref{fig::Explicit-Faithful}. Additionally, interpretations that are derived directly from the machine learning algorithm or the features are strongly restricted by the nature of the algorithm \/ features. In contrast, interpretations that are created in addition to the decision-making algorithm can be made to comply with various standards. For example, Miller demonstrates how interpretations are often created by the same people who develop the system. They tend to generate explanations that are understandable to software designers, but are not explicit for the system's users \\cite{miller2017explanation}. He suggests using insights from the social sciences when discussing explainability in AI.
Other factors, such as legal and practical considerations, might limit researchers as to what constitutes a sufficient explanation. For example, as these tools disconnect the logic in $\\varmathbb I$ from $L$, they cannot guarantee the fairness of the agent's decision, which may be a critical need and even require transparency (see Section \\ref{Why}).\n\nThe first possibility creates a ``model interpretation tool'' that is used to explain the logic behind $L$'s predictions for all values of $T$ given all records, $R$. A group of these approaches create simpler, transparent decision trees or rules secondary to $L$. While these approaches will have the highest level of explicitness, they will generally lack faithfulness. For example, Frosst \\cite{frosst2017distilling} presents a specific interpretation model for neural networks in an attempt to resolve the tension between the generalization of neural networks and the explicitness of decision trees. The authors show how to use a deep neural network to train a decision tree. The new model does not perform as well as a neural network, but is explicit. Many other approaches have used decision trees to provide explanations for neural networks \\cite{Boz2002,Craven1995,KRISHNAN1999}, decision rules \\cite{arbatli1997rule,Augasta2012,craven1994using,Kahramanli,zhou2003extracting} and a combination of genetic algorithms with decision trees or rules \\cite{arbatli1997rule,4938655,Mohamed2011}. Similarly, decision trees \\cite{Chipman_makingsense,Domingos1998,zhou2016interpreting} and decision rules \\cite{deng2014interpreting,hara2016making,4167900,tan2016tree} have been suggested to explain tree ensembles.\n\nSome explanations secondary to $L$ are generated by using feature analysis and thus are most similar to the approaches in the previous section. One example of these algorithms is SP-LIME, which provides explanations that are independent of the type of machine learning algorithm used \\cite{Ribeiro2016}.
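The common mechanic behind these secondary, surrogate explanations can be sketched in a few lines: train an explicit model on the black box's outputs rather than on the original labels. The random forest stand-in, the dataset, and the fidelity measure below are our own illustrative choices, not the cited methods:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate mimics the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the explicit surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.3f}")
```

The gap between fidelity and 1 quantifies exactly the loss of faithfulness discussed above: the shallow tree is easy to read, but it is an approximation of $L$, not $L$ itself.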
It is noteworthy that SP-LIME includes feature engineering as part of its analysis, showing the potential connection between the second and third approaches. The feature engineering in SP-LIME tweaks examples that are tagged as positive and observes how changing them affects the classification. A similar method has been used to show how Random Forests can be made explainable \\cite{tolomei2017interpretable,whitmore2018explicating}. The Random Forest can be considered a black box that determines the class of a given feature set. $L$'s interpretability is obtained by determining how the different features contribute to the classification of a feature set \\cite{tolomei2017interpretable}, or even which features should be changed, and how, in order to obtain a different classification \\cite{whitmore2018explicating}. This type of interpretation is extremely valuable. For example, consider a set of medical features, such as weight, blood pressure, age, etc., and a model to determine heart attack risk. Assume that for a specific feature set the model classifies the patient as high risk. The model's interpretation facilitates knowing what parameters need to change in order to change the prediction to low risk.\n\n\\subsection{Tools to Generate Outcome Interpretations Independently from L}\n\\label{What-outcome explanation}\nThe second possibility for creating interpretations independently from $L$ creates an ``outcome explanation'' that is localized and explains the prediction for a given instance $r \\in R$ and its prediction, $t \\in T$. It has been claimed that feature selection approaches are useful for obtaining a general, global understanding of the model, but not for specific classifications of an instance, $t$. Consequently, some authors advocate using local interpretations \\cite{Baehrens2010}. One example is an approach that uses vectors which are constructed independently of the learning algorithm for generating localized interpretations \\cite{Baehrens2010}.
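A crude local attribution in this spirit, which is not any of the cited methods but illustrates the idea, jitters one feature at a time around the instance being explained and records how much the model's prediction moves:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def local_attribution(model, x, scales, n=200, seed=0):
    """Score each feature by how much jittering it alone moves P(class 1)."""
    rng = np.random.default_rng(seed)
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(len(x))
    for j in range(len(x)):
        Xp = np.tile(x, (n, 1))
        Xp[:, j] += rng.normal(0.0, scales[j], size=n)  # perturb feature j only
        scores[j] = np.abs(model.predict_proba(Xp)[:, 1] - base).mean()
    return scores

scores = local_attribution(model, X[0], X.std(axis=0))
print("most influential feature for this instance:", int(np.argmax(scores)))
```

Because the scores are computed around a single instance, a different $r \\in R$ may yield a completely different ranking, which is precisely what makes this a local, rather than global, interpretation.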
Another example advocates using coalition game theory to evaluate the effect of combinations of features for predicting $t$ \\cite{Strumbelj2010}. Work by Lundberg and Lee presents a unified framework for interpreting\npredictions using game-theoretic Shapley values \\cite{lundberg2017unified}. Certain algorithms have both localized and global versions. One example is the local algorithm LIME and its global variant, SP-LIME \\cite{Ribeiro2016}.\n\n\\subsection{Algorithms to Visualize the Algorithm's Decision}\n\\label{What-visualization}\nWhile the explanations in the previous sections focused on ways a person could better understand the logic within $L$, visualization techniques typically focus on explaining how a subset of features within $F$ are connected to $L$. However, the level of explicitness within visualization is lower than that of feature selection and model and outcome interpretations. This is because feature selection and model and outcome interpretations all aim to understand the logic within $L$, thus giving them a relatively higher level of faithfulness and explicitness. As visualization tools do not focus on understanding the logic within $L$, they are less faithful than feature analysis methods that do, and at times the level of understanding they provide is not high, especially for regular users.\n\nOverall, many of these approaches seem to have the primary goal of justification for a specific outcome of $L$ and are not focused on even localized interpretations of $L$'s logic. As justification is more concerned with persuading a user that a decision is correct than providing information about $L$'s logic \\cite{biran2017explanation}, it seems that justification methods likely have the least amount of faithfulness, as there is no need to make any direct connection between $\\varmathbb I$ and $L$.
Consistent with this aim, work by Lei, Barzilay and Jaakkola generated rationales, which they defined as justifications for an agent's local decision, through creating a visualization tool that highlighted which sections of text, e.g. $f \\in F$, were responsible for making a specific classification \\cite{lei2016rationalizing}.\n\nConsider explanations that can potentially be generated within image classification, a task many visualization tools address \\cite{fong2017interpretable,guo2010novel,Simonyan2013DeepIC,xu2015show,zhou2016learning}. A visualization tool will typically identify the portion of the picture (a subset of $F$) that was most responsible for yielding a prediction, $T_k$. However, typical visualizations, such as those generated by saliency masks, class activation mapping, sensitivity analysis and partial dependency plots, all focus only on highlighting important portions of input, without explaining the logic within the model, and the output is often hard for regular users to understand. Nonetheless, these approaches are useful in explaining high accuracy, low-explicitness machine learning algorithms, particularly neural networks, often within image classification tasks.\n\nSaliency maps are visualizations that identify important, e.g. salient, objects which are groups of features \\cite{xu2015show}. In general, saliency can be defined as identifying the region of an image $r \\in R$, that $L(r\\times F,T_k)$ will identify \\cite{fong2017interpretable}. For example, a picture may include several items, such as a person, house and car. $r$ can represent the car and $L(r\\times F,T_k)$ is used to properly identify it ($T_k$). Somewhat similar to the previous types of explanations, these salient features could then generate a textual explanation of an image. For example, Xu et al.
focused on identifying objects within a deep neural network (CNN) for picture identification to automatically create text descriptions \\cite{xu2015show} for a given picture (outcome description). Kim et al. created textual explanations for neural networks of self-driving cars \\cite{kim2018textual}. More generally, saliency masks can be used to identify the $r*n$ areas that represent the $t*n$ targets that were identified in the picture \\cite{fong2017interpretable,guo2010novel,Simonyan2013DeepIC,xu2015show,zhou2016learning}. They generally use the gradient of the output corresponding to each of the targets with respect to the inputted features \\cite{Lipton16a}. While earlier works constrained the neural network to provide this level of explicitness \\cite{zhou2016learning}, recent works provide visual explanations without altering the structure of the neural network \\cite{fong2017interpretable,hu2018explainable,selvaraju2017grad}. Still, serious concerns exist that many of these visualizations are too complex for regular users and thus reserved for experts, as some of these explanations are only appropriate for people researching the inner workings of the algorithm to diagnose and understand mistakes \\cite{selvaraju2017grad}.\n\nNeural activation is a visualization for the inspection of neural networks that helps focus a person on what neurons are activated with respect to particular input records. As opposed to the previous visualizations that focus on $F$ and $R$, this visualization helps provide an understanding about neural networks' decisions, making them less of a black box. Consequently, these approaches provide interpretation and not justification and are more faithful. For example, work by Yosinski et al. \\cite{yosinski2015understanding} proposes two tools for visualizing and understanding what computations and neuron activations occur in the intermediate layers of deep neural networks (DNN).
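The gradient-with-respect-to-input computation that underlies many of these saliency methods can be mimicked with finite differences, without access to the network's internals. A toy sketch (the small scikit-learn MLP on 8x8 digit images is our own assumption, standing in for the large CNNs used in the cited work):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1]
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=0).fit(X, y)

def saliency(net, x, target, eps=1e-3):
    """Finite-difference |change in P(target) per unit change in pixel|."""
    base = net.predict_proba(x[None, :])[0, target]
    grad = np.zeros_like(x)
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        grad[j] = abs(net.predict_proba(xp[None, :])[0, target] - base) / eps
    return grad.reshape(8, 8)  # back to the image layout for display

mask = saliency(net, X[0], target=int(y[0]))
```

High values in the mask mark pixels the classifier is locally sensitive to; as the text above cautions, such maps highlight input regions without explaining the model's internal logic.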
Work by Shwartz-Ziv and Tishby suggests using an Information Plane visualization which captures the Mutual Information values that each layer preserves regarding the input and output variables of DNNs \\cite{shwartz2017opening}.\n\nOther visualizations exist for other machine learning algorithms and learning tasks. Similar to saliency maps, sensitivity analysis provides a visualization that connects the inputs and outputs of $L$ \\cite{saltelli2002sensitivity}. Moreover, sensitivity analysis maps have been applied to tasks beyond image classification and to other black box machine learning algorithms such as ensemble trees \\cite{cortez2013using}. For example, Cortez and Embrechts present five sensitivity analysis methods appropriate for both classification and regression tasks \\cite{cortez2013using}. Zhang and Wallace present a sensitivity analysis for convolutional neural networks used in text classification \\cite{zhang2015sensitivity}.\n\nPartial Dependency Plots (PDP) help visualize the average partial relationship between the predicted response of $L$ and one or more features within $F$ \\cite{friedman2001greedy,goldstein2015peeking}. PDPs use feature analysis as a critical part of their interpretation, and are much more faithful and explicit than many of the other visualization approaches in this section. However, as the primary output and interpretation tool is visual \\cite{friedman2001greedy}, we have categorized it in this section. Examples include work by Hooker that uses ANOVA decomposition to help create a Variable Interaction Network (VIN) visualization \\cite{hooker2004discovering} and work by Goldstein et al. that extends the more classic PDP model by graphing the functional relationship between the predicted response and the feature for individual observations, thus making this a localized visualization \\cite{goldstein2015peeking}. Similarly, Krause et al.
provide a localized visualization to create partial dependence bars, a color bar representation of a PDP \\cite{krause2016interacting}.\n\n\\subsection{Generating Explanations from Prototyping the Dataset's Input as Examples}\n\\label{What-protyping}\nSimilar to visualization tools, prototype selection also seeks to clarify the link between $L$'s input and output. However, while visualization tools focus on the input from $F$, prototyping focuses on $R$, seeking the existence of a subset of records similar to the record, $r \\in R$, being classified. This subset is meant to serve as an implicit explanation as to the correctness of the model, as prototyping aims to find the minimal subset of input records that can serve as a distillation or condensed view of the dataset \\cite{bien2011prototype}.\n\nPrototypes have been shown to help people better understand $L$'s decisions. For example, work by Hendricks et al. focuses on providing visual explanations for images that include class-discriminative information about other images that share common characteristics with the image being classified \\cite{HendricksARDSD16}. The assumption here is that the information about similar pictures in the same class helps people better understand the decision of the algorithm. Bien and Tibshirani propose two methods for generating prototypes-- an LP relaxation with randomized rounding and a greedy approach \\cite{bien2011prototype}. Work by Kim et al. suggested using maximum mean\ndiscrepancy to generate prototypes \\cite{kim2016examples}. In other work, Kim et al. suggest using a Bayesian Case Model (BCM) to generate prototypes \\cite{kim2014bayesian}.\n\n\\subsection{Comparing the Six Basic Approaches for Generating Interpretations}\nReferring back to Figure \\ref{fig::Explicit-Faithful}, each of these approaches will differ along the axis of their level of explicitness and faithfulness.
It has been previously noted that many of the visualization approaches produce interpretations that are not easily understood by people without an expert-level understanding of the problem being solved \\cite{post}, making them not very explicit. As they often provide justification and no direct interpretation of the logic in $L$, they are also not very faithful. As prototypes provide examples of similar classifications, they are often more explicit than visualizations, as regular users can more easily understand their meaning. However, as they also do not attempt to directly explain $L$'s logic, they are not more faithful. Other approaches, such as transparent ones, have high levels of both explicitness and faithfulness, but are typically limited to white box methods that facilitate these types of interpretability. Model and outcome tool approaches can potentially be geared to any user, making them very explicit, but are less faithful as the logic generated in $\\varmathbb I$ is not necessarily the same as that in $L$. When taken in combination with a white box algorithm, feature analysis methods can be very explicit and faithful. At times, they are used independently of $L$, potentially making them less faithful.\n\nReferring back to Figure \\ref{Figure1}, each of the approaches described in this section is labeled with the corresponding term within the explainability model described in Section \\ref{definitions}. However, note the overlaps within the Venn diagram, as overlaps do exist between some of the approaches described in this section. While transparent approaches do link $\\varmathbb I$ and $L$, sometimes the link between these two elements is strengthened and\/or described through an analysis of the $F$ commonly seen in Feature Analysis approaches. For example, the GAM and GA$^2$M approaches \\cite{Caruana2015,LouCG12} use univariate and pairwise feature analysis methods respectively in their transparent models.
While model outcome models such as SP-LIME pride themselves on being agnostic, i.e. no direct connection is assumed between $\\varmathbb I$ and $L$, they do use elements of feature analysis and visualization in creating their global interpretation of $L$ \\cite{Ribeiro2016}. Similarly, the outcome explanation model, LIME, also uses feature analysis and visualization in creating its local interpretations of $L$ \\cite{Ribeiro2016} for an instance of $r \\in R$. Saliency maps are visualizations based on identifying the features used for classifying a given picture \\cite{fong2017interpretable}, showing the potential overlap between visualization methods and feature analysis. However, at times, the identified salient features are used to create an outcome interpretation, as is the case in other work \\cite{xu2015show}. Similarly, work by Lei, Barzilay and Jaakkola generated visualizations of outcomes through analyzing which features were most useful to the model, again showing the intersection of these three approaches. Last, some prototype analysis tools, such as work by Hendricks et al., use visual methods \\cite{HendricksARDSD16}. Thus, we stress that the different types of interpretation approaches are often complementary and not mutually exclusive.\n\nGiven these differences in the explicitness and faithfulness of each of these approaches, it seems logical that the type of interface used for disseminating the system's interpretation will likely depend upon the level of the user's expertise and the type of interpretation that was generated. The idea of adaptable interfaces based on people's expertise was previously noted \\cite{grudin1989case,shneiderman2002promoting,SteinGNRJ17}. In these systems, the type of information presented in the interface depends on the user's level of expertise. Accordingly, an interface might consider different types of interpretation or interpretation algorithms based on \\emph{who} the end-user will be.
Even among experts, it is reasonable to assume that different users will need different types of information. The different backgrounds of legal experts, scientists, safety engineers, and researchers may necessitate different types of interfaces \\cite{doshi2017towards}.\n\n\\section{\\emph{When} Should Information be Presented?}\n\\label{when}\n\nExplanations can be categorized based on \\emph{when} the interpretation is presented:\n\\begin{enumerate}\n\\item Before the task\n\\item Continuing explanations throughout the task\n\\item After the task\n\\end{enumerate}\n\nSome agents may present their interpretation before the task is executed as either justification \\cite{biran2017explanation}, conceptualization or proof of fairness of an agent's intended action \\cite{dwork2012fairness}. Other agents may present their explanation during task execution, especially when this information helps explain an agent's failure so that the agent can be trusted to correct the error \\cite{jennings2014human,abs-1806-00069,Guidotti2018}. Other agents provide explanations after actions are carried out \\cite{langley2017explainable}, to be used for retrospective reports \\cite{Lipton16a}.\n\nIt is important to note that not all of the approaches for \\emph{what} can be generated, described in Section \\ref{What}, support all of these possibilities. While all methods can be used for analysis after the task, many of these methods use post-hoc analysis that separates $L$ from $\\varmathbb I$. Thus, if fairness needs to be checked before task execution, the lack of connection between $L$ and $\\varmathbb I$ in model and outcome explanations, visualizations, and prototypes makes this difficult to accurately check. Transparent methods could fulfill this requirement due to their inherent faithfulness.
Feature analysis methods including, but not limited to, GAM, GA$^2$M, and PDP \\cite{Caruana2015,friedman2001greedy,goldstein2015peeking,lou2013accurate} can check the connection between inputs and outputs, thus confirming that fairness or other legal requirements are met even before task execution.\n\nThe choice of \\emph{when} to present the explanation is not exclusive. Agents might supply various explanations at various times, before, during and after the task is carried out. Building on the taxonomy in Section \\ref{Why}, if explainability is critical for the system to begin functioning, then it stands to reason that this knowledge must be presented at the beginning of the task, thus enabling the user to determine whether to accept the agent's recommendation \\cite{sheh2017did}. However, if it is beneficial to build trust \/ user acceptance, then the explanation might be best presented during the task, especially if the agent erred. If the purpose of the explanation is to justify the agent's choice from a legal perspective, then we may need to certify that decision before the agent acts (preventative) or after the act (accusatory). But, if the goal is conceptualization, especially in the form of knowledge discovery and\/or to support future decisions, then the need for explanation after task execution is equally critical. These possibilities are not inherently mutually exclusive. For example, work by Vigano and Magazzeni \\cite{vigano2018explainable} claims that explanations should be provided throughout all stages of the system's lifecycle within security systems. They describe how explanations should begin as the system is designed and implemented, continue through use, analysis and change, and perhaps even when the system is replaced.
One may argue whether this is crucial for all systems or only for the security systems discussed in their work, but it is surely a point to consider.\n\n\\section{\\emph{How} can Explanations be Evaluated?} \\label{How}\nIt was previously noted that little agreement currently exists about how to define explainability and interpretability, which may add to the difficulty of properly evaluating them \\cite{doshi2017towards}. In order to address this point, we first clearly defined these terms in Section \\ref{definitions}, and then proceeded to consider questions of \\emph{why}, \\emph{what}, \\emph{when} and \\emph{how} based on these definitions. \n\nAs we discuss in this section, creating a general evaluation framework is still an open challenge as these issues are often intrinsically connected. For example, the detail of an explanation is often dependent on \\emph{why} that explanation is needed. An expert will likely differ from a regular user regarding \\emph{why} an explanation is needed, will often need these explanations at different times, e.g. before or after the task (\\emph{when}), and may require different types of explanations and interfaces (\\emph{what} and \\emph{how}). At other times multiple facets of explanation exist even within one category. A DSS is built to support a user's decision, thus making explainability a critical issue. However, these systems will still likely benefit from better explanations, so that the user trusts those explanations. Similarly, a scientist pursuing knowledge discovery may need to analyze and interact with information presented before, during and after a task's completion (\\emph{when}). Thus, multiple goals must often be considered and evaluated.\n\nTo date, there is little consensus about how to quantify these interconnections. Many works evaluated explainability as a binary value -- either it works or it doesn't. 
Within these papers, an explanation is inspected to ensure that it provides the necessary information in the context of a specific application \\cite{lei2016rationalizing,Ribeiro2016}. If it does so, it is judged as a success, even if other approaches may have been more successful. \nTo help quantify evaluations, Doshi-Velez and Kim suggested a taxonomy of three types of evaluation tasks: application, human, and functionally grounded \\cite{doshi2017towards}. In their model, application-grounded tasks are meant for experts attempting to execute a task, and the evaluation focuses on how well the task was completed. Human-grounded tasks are simplified and can be performed with regular users. They conceded that it is not clear what the evaluation goal of this task needs to be, but recommended simplified evaluation metrics, such as reporting which explanation the users preferred. Work by Mohseni and Ragan suggested creating canonical datasets of this type that could quantify differences between interpretable algorithms \\cite{mohseni2018human}. They proposed annotating regions in images and words in texts that provide an explanation. The output of any new interpretation algorithm, $\\varmathbb I$, could be compared to the user annotations that provide a ground-truth. This approach is still goal-oriented and thus they classify their task as a human-grounded task (e.g. having $\\varmathbb I$ match the human explanation). Doshi-Velez and Kim's last type of evaluation task is functionally-grounded, where some objective criterion for evaluation is predefined (e.g. the ratio of decision tree size to model accuracy). 
The main advantage of this type of evaluation is that the evaluation of $\\varmathbb I$ can be quantified without any need for user studies.\n\nThis taxonomy provides three important types of tasks that can be used to evaluate explainability and interpretability, but these researchers do not propose how to quantify the effectiveness of the key components within $\\varmathbb E$ and $\\varmathbb I$. This paper's main point is that questions surrounding the system's need for \\emph{why}, \\emph{what}, \\emph{when} and \\emph{how} about explainability must be addressed. These elements can and should be quantified, while also considering trade-offs between these categories as well as elements within $\\varmathbb I$. Issues about algorithms' explicitness, faithfulness and transparency must be explicitly evaluated while balancing the agent's and user's performance requirements, including the agent's fairness and prediction accuracy, and the user's performance and acceptance of $\\varmathbb E$'s.\n\nGiven this, we suggest explicitly evaluating the following three elements in Human-Agent Systems: the quantifiable performance of the agent's learning, $L$, its level of interpretation, $\\varmathbb I$, and the human's understanding, $\\varmathbb E$. For example, in a movie recommendation system the three scores would be described as follows: The score for $L$ is based on standard metrics for evaluating recommendation predictions (i.e. accuracy, precision and\/or recall). A score can also be given to $\\varmathbb I$ that reflects how much explicitness, faithfulness and transparency exist in $\\varmathbb I$ according to objective criteria described below. The score for $\\varmathbb E$ should be quantified based on the user's performance. As the goal of the system is to yield predictions that the user understands so they will be accepted, we should quantify the impact of $\\varmathbb I$ on the user's behavior. 
Thus, we suggest an evaluation score that quantifies:\n\n\\begin{enumerate}\n\\item A score for $L$, the performance of the agent's prediction \n\\item A score for $\\varmathbb I$, the interpretation given to the user \n\\item A score for $\\varmathbb E$, the user's acceptance of $\\varmathbb I$\n\\end{enumerate}\n\nAs described in previous sections, a complex interplay exists between these three elements. There is often a trade-off between the performance of $L$ and the explicitness of $\\varmathbb I$ that can be produced from $L$ (see Figure \\ref{fig::Explicit-Predict}). White-box algorithms are more explicit and can even be transparent, but typically have lower performance. Higher accuracy algorithms, such as neural networks, are typically less explicit and faithful (see Figure \\ref{fig::Explicit-Faithful}). Thus, agents with lower performance scores for $L$ will likely have higher scores for $\\varmathbb E$, especially if explicitness and faithfulness are important and quantifiable within the system. Furthermore, different user types and interfaces will be affected by the type of agent design, and a total measure is needed that takes all parameters required by a system into account. For example, an agent that was designed to support an expert user is different from one provided to a regular user. \n\nAnother equally important element of the system is how well the person executed her system task(s) given $\\varmathbb I$. In theory, multiple goals for $\\varmathbb E$ may exist for the human user, such as immediate performance vs. long-term knowledge acquisition. These may be complementary or in conflict. For example, assume the explanation goal of a system is to support a person's ability to purchase items in a time-constrained environment (e.g. online stock purchasing). 
Greater detail within the agent's explanations will, on the one hand, instill improved confidence within the user, but it will also take more time to read and process, which may prevent the user from capitalizing on certain quickly passing market fluctuations. Thus, some measure should likely be introduced to reason about different goals for the explanation and the relative strengths of various explanations, their interfaces, and the algorithms that generate those explanations.\n\nTo capture these properties, we propose an overall utility to quantify the complementary and contradictory goals for $L, \\varmathbb I$, and $\\varmathbb E$ as the weighted product:\n\n\\begin{equation}\nUtility = \\prod_{n=1}^{NumGoals} Imp_n*Grade_n\n\\label{equation-utility}\n\\end{equation}\n\\begin{equation}\n\\sum_{n=1}^{NumGoals} Imp_n = 1\n\\end{equation}\n\nWe define $NumGoals$ as the number of goals in the system. $L, \\varmathbb I$, and $\\varmathbb E$ each have an overall objective, and the system meets this objective through all of these goals. The objective for $L$ is to provide predictions for $T$ using \n$R \\times F$. The ability of the system to meet this objective is measured through machine learning performance metrics that quantify the goals of high accuracy, recall, precision, F-measure, and mean average precision, and low mean squared error. A goal for $L$ can also be that $L$ exhibits fairness, which often is a hard-constraint due to legal considerations. The objective for $\\varmathbb I$ is to provide a representation of $L$'s logic that is understandable to the user. The success of this objective can be measured by goals for $\\varmathbb I$ to have the highest levels of explicitness, faithfulness, and transparency. Other papers have suggested additional goals for $\\varmathbb I$, including justification \\cite{kofod2008explanatory} and completeness \\cite{abs-1806-00069}, which we argue are included in the goals of explicitness and faithfulness respectively. 
The objective for $\\varmathbb E$ is that the person will understand $L$ using $\\varmathbb I$. This can be measured through the goal that the user's performance be improved given $\\varmathbb I$. Additional goals include those specified in Section \\ref{Why}, including addressing safety concerns, trust, and knowledge \/ scientific discovery. Goals relating to the timing of when interpretations were presented (e.g. ``presenting during the task as required\") are likely hard-constraints (e.g. either it was done at the correct time or not). $Imp_n$ is the importance weight we give to the $n^{th}$ goal such that $0=a_0$ is unbiased and its error function $ \\delta a$, which\ncan be read off Rel. (\\ref{appb5}), allows one to compute its standard deviation \n$\\sigma_a$ ($\\sigma_a^2=<[\\delta a]^2>$)~:\n\n\\begin{equation}\n\\displaystyle \\frac{1}{\\sigma_a^2}=g_i g_jV^{-1}_{ij}\n\\label{appb6}\n\\end{equation} \n\nWhen there is no correlation ($ V_{ij}\\simeq \\delta_{ij}$), this expression gives \nthe usual result ($\\sigma_a^{-2}=\\sum_i \\sigma_{a_i}^{-2}$). \n\n\nIn general, $\\delta a_i$ and $\\delta a_j$ are given by expressions like those in Rels. (\\ref{appb1})\nwith two different path lengths $u_i$ and $u_j$, both unknown. Correspondingly,\nthe quantity $V_{ij}=<\\delta a_i(u_i) \\delta a_j(u_j)>$ can only be estimated\nby performing the average as presented in Section A1 of Appendix A. This gives~:\n\n\\begin{equation}\nV_{ij}=<<\\delta a_i(u_i) \\delta a_j(u_j)>>_{u_iu_j}=\\left [ B E + \\frac{1}{24}A I \\right]_{ij}\n\\label{appb7}\n\\end{equation} \n\n\\noindent where $E$ is a rank 1 matrix \nsuch that each $E_{ij}=1$~; here $B=[<[\\delta \\theta]^2>+1\/3A]\/4$ and $A=c^2L\/X_0$.\nIn order to compute $\\sigma_a^2$ using Rel. (\\ref{appb6}), we need to invert $V$\njust defined. 
Indeed, using $E^2=nE$, it is easy to prove that~:\n\n\\begin{equation}\nV= \\lambda I + \\mu E \\Longleftrightarrow V^{-1}= \\frac{1}{\\lambda}[I- \\frac{\\mu}{n\\mu+\\lambda} E]\n\\label{appb8}\n\\end{equation}\n\n\\noindent \nwhere $n=$ dim $I=$ dim $E$ is also the number of points (photons).\nUsing this formula with the matrix in Rel. (\\ref{appb7}), we easily find~:\n\n\\begin{equation}\n\\sigma_a^2= \\frac{1}{4}[<[\\delta \\theta]^2> +\\frac{1}{3}A+\\frac{1}{6n}A] \n\\label{appb9} \n\\end{equation}\n\nThis relation could be expected beforehand. It shows that the correlated part of the error\naffecting each $a_i$ (2\/3 of the variance associated with multiple scattering \nas shown by Rel. (\\ref{app8}), plus the variance provided by the device in front of the \nDIRC bar) is transferred to $a$ without changes, while the uncorrelated part \n(1\/3 of the multiple scattering contribution to the full variance) is reduced by a factor $n$.\nIf we had neglected the terms generated by the multiple scattering effects\nin $V_{ij}$ (for $i \\ne j$), we would instead have obtained $\\sigma_a^2=[<[\\delta \\theta]^2>\n+A\/(2n)]\/4$, which can be considerably smaller at low track momentum.\n\n\nActually, the finite size sample corrections (see Section A2) may be accounted for. \nConcerning the uncorrelated part, this would amount to $1\/n^2$ corrections and can be \nneglected~; the correlated part however gives a $1\/n$ contribution which corrects\nthe term $A\/6n$ in Rel. (\\ref{appb9}) by a factor of 4. Therefore, \nan improved expression for $\\sigma_a^2$ taking into account all $1\/n$ corrections is~: \n\n\n\\begin{equation}\n\\sigma_a^2 \\displaystyle\n\\left|_{n} \\right.= \\frac{1}{4}[<[\\delta \\theta]^2> +\\frac{1}{3}A+\\frac{2}{3n}A] \n\\label{appb9a} \n\\end{equation}\n\nIt is also interesting to compare Rels. (\\ref{appb9}) and (\\ref{appb9a}) with \nthe first Rel. 
(\\ref{appb2}).\nIndeed, one clearly sees that the variance for $a$ is smaller than the variance\nat mid path inside the quartz as soon as the number of photons is greater than 1\n(large $n$ limit) or 4 (finite $n$ corrected). Then, in all practical applications,\nRels. (\\ref{appb2}) do not reflect the sharing of the variance\nbetween correlated and uncorrelated parts and lead to an overestimate of the\nmultiple scattering contribution to the center errors.\n\n\n\\subsection*{B3 The Full Charged Track Error Problem}\n\n\\indent \\indent We have just treated (as a one--dimensional problem) the \ndetermination of the error on the $a$ coordinate of the circle center,\ntaking into account the number $n$ of emitted photons. We have seen that\nthe uncorrelated part of its variance decreases as $1\/n$ while the\ncorrelated part is unaffected. However, our actual problem is two--dimensional\nand because of correlation terms like $<\\delta a_i \\delta b_j>$, it is not equivalent\nto the conjunction of two one--dimensional problems. \n\nIn order to complete the treatment, let us display the following identity\nfor the inverse $V^{-1}$ of a symmetric matrix $V$\nof rank and dimension $n$~:\n\n\\begin{equation}\nV^{-1}=\n\\left (\n\\begin{array}{lll}\nS_1 & \\widetilde{C}\\\\[0.5cm]\nC & S_2\n\\end{array}\n\\right )^{-1}\n=\n\\left (\n\\begin{array}{lll}\n(S_1-\\widetilde{C}S_2^{-1}C)^{-1} &- (S_1-\\widetilde{C}S_2^{-1}C)^{-1}\\widetilde{C}S_2^{-1}\\\\[0.5cm]\n-S_2^{-1}C (S_1-\\widetilde{C}S_2^{-1}C)^{-1} & S_2^{-1}+S_2^{-1}C (S_1-\\widetilde{C}S_2^{-1}C)^{-1}\n\\widetilde{C}S_2^{-1}\n\\end{array}\n\\right )\n\\label{appb10}\n\\end{equation}\n\n\\noindent where the submatrices $S_1$ and $S_2$ are square matrices of dimensions\nresp. $k$ and $n-k$. In our case, $S_{1~ij}=<\\delta a_i \\delta a_j>$, $S_{2~ij}=<\\delta b_i \\delta b_j>$ and\n$C_{ij}=<\\delta a_i \\delta b_j>$, where the error functions are given by expressions\nlike Rel. (\\ref{appb1}) with appropriate path lengths. 
It is easy to compute \nthese (sub--)matrices~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{ll}\n\\displaystyle S_1=B_{\\theta} ~E + \\frac{1}{24}A~I &\n~~~,~~~\\displaystyle B_{\\theta}=\\frac{1}{4} [<[\\delta \\theta]^2> + \\frac{1}{3} A] \\\\[0.5cm]\n\\displaystyle S_2=B_{\\phi} ~E + \\frac{1}{24}A~I &\n~~~,~~~\\displaystyle B_{\\phi}=\\frac{1}{4} [\\sin^2{\\theta}<[\\delta \\phi]^2> + \\frac{1}{3} A ]\\\\[0.5cm]\n\\displaystyle C=B_{\\theta \\phi} ~E &\n~~~,~~~\\displaystyle B_{\\theta \\phi}=\\frac{1}{4} \\sin{\\theta}<[\\delta \\theta \\delta \\phi]> \n\\end{array}\n\\right.\n\\label{appb11}\n\\end{equation}\n\n\\noindent where the rank 1 matrix $E$ has already been defined and still \n$A=c^2 L\/X_0$. We clearly have $C=\\widetilde{C}$ and all submatrices here are $n \\times n$.\nRel. (\\ref{appb10}) is useful in our case because these three submatrices\neach have a special form. \n\nWe can now define two sets of relations analogous to (\\ref{appb3}) for the $a_i$ and $b_i$,\nintroducing in this way $a_0$ and $b_0$,\nand the vector of dimension $2n$~: $(\\cdots, ag_i-a_i,\\cdots,bg_j-b_j,\\cdots)$. Then we can\ndefine a function $\\chi_c^2=F(a,b)$ in a way analogous to Rel. (\\ref{appb3}),\nusing this vector and the matrix of Rel. (\\ref{appb10}). Doing as previously,\nit is easy to find that the solution which minimizes $F(a,b)$ is a couple\nof random variables $(a,b)$ with expectation values $(a_0,b_0)$ (to be fit), whose inverse\ncovariance matrix is\ndefined by the sum of the elements of each of the four submatrices in Rel. (\\ref{appb10}).\nThese sums can be easily computed knowing the submatrices $S_1$, $S_2$ and $C$ (Rels. (\\ref{appb11})),\nby means of Rels. 
(\\ref{appb10}) and (\\ref{appb8}).\n\nAfter tedious algebra, it follows that the center fixing term can be written~:\n\n\\begin{equation}\n\\chi^2_C=\\left ( a_0,b_0 \\right )\n\\left (\n\\begin{array}{ll}\n\\displaystyle \\frac{1}{4}[<[\\delta \\theta]^2> + \\frac{1}{3} A + \\frac{1}{6n} A] &\n\\displaystyle ~~~~~~~\\frac{1}{4} \\sin{\\theta} <\\delta \\theta \\delta \\phi> \\\\[0.5cm]\n\\displaystyle ~~~~~~~\\frac{1}{4} \\sin{\\theta} <\\delta \\theta \\delta \\phi> &\n\\displaystyle \\frac{1}{4}[\\sin^2{\\theta} <[\\delta \\phi]^2> + \\frac{1}{3} A + \\frac{1}{6n} A] \\\\[0.5cm] \n\\end{array}\n\\right )^{-1}\n\\left (\n\\begin{array}{l}\na_0 \\\\[0.5cm]\nb_0\n\\end{array}\n\\right )\n\\label{appb12}\n\\end{equation}\n\nIn writing this expression, we have used $a_0$ and $b_0$ instead of\n$a_0-a_{measured}$ and $b_0-b_{measured}$, taking into account that the corresponding \nmeasured values are zero by definition.\nThis relation defines the error covariance matrix of the charged track direction \nassociated with $n$ detected photons~; it differs from the matrix $\\Sigma_0$\nin that it takes into account the multiple scattering undergone by the charged track inside the\nradiator. It can simply be written $\\Sigma=\\Sigma_0+[A\/3+A\/(6n)]I$.\n\nOne can apply the finite size sample corrections found in Section A2,\nas we did in the previous subsection. This \namounts to changing the expression for $\\Sigma$ to $\\Sigma=\\Sigma_0+[A\/3+2A\/(3n)]I$.\n\n\nFinally, it should be noted that the $a_0b_0$ covariance\nterm is not affected\nby effects due to multiple scattering~; this could have been inferred from Rels. (\\ref{appb1})\n(and explains the covariance term in Rels. (\\ref{appb2}), as the expectation value\n$<\\varepsilon_1(u) \\varepsilon_2(v)>$ is zero for any values of $u$ and $v$). \nFrom now on, we name for clarity the parameters to be fit $a$ and $b$\ninstead of $a_0$ and $b_0$. 
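As a numerical sanity check on the algebra above (not part of the original derivation), the inversion identity of Rel. (\ref{appb8}) for matrices of the form $V=\lambda I+\mu E$ can be verified directly in a few lines of pure Python; the values of $\lambda$, $\mu$ and $n$ below are arbitrary illustrative choices, not detector parameters:

```python
# Sanity check (illustrative, not the paper's code): verify Rel. (appb8),
#   V = lam*I + mu*E  =>  V^{-1} = (1/lam) * (I - mu/(n*mu + lam) * E),
# where E is the all-ones matrix, so that E^2 = n*E.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def check_appb8(lam, mu, n, tol=1e-12):
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    E = [[1.0] * n for _ in range(n)]
    V = [[lam * I[i][j] + mu * E[i][j] for j in range(n)] for i in range(n)]
    c = mu / (n * mu + lam)
    Vinv = [[(I[i][j] - c * E[i][j]) / lam for j in range(n)] for i in range(n)]
    P = matmul(V, Vinv)  # should reproduce the identity matrix
    return all(abs(P[i][j] - I[i][j]) < tol for i in range(n) for j in range(n))

assert check_appb8(lam=0.7, mu=0.3, n=5)
assert check_appb8(lam=1.0 / 24.0, mu=0.25, n=8)
```

The check goes through for any $\lambda$, $\mu$ with $n\mu+\lambda\neq 0$, since $(\lambda I+\mu E)\frac{1}{\lambda}(I-cE)=I+\frac{1}{\lambda}[\mu-c(\lambda+n\mu)]E=I$ when $c=\mu/(n\mu+\lambda)$.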
\n\nIt should be noted, however, that multiple scattering effects imply that\nthere are as many centers (charged track directions) as photons. Therefore, \nany fit procedure can only provide a determination of the\n$mean$ center coordinates for each track considered, as an approximation\nof the actual center value at the DIRC bar entrance. \n\n\\subsection*{B4 The Minimization Function and the Circle Parameters}\n\n\\indent \\indent Given a set of points, each known with some error, and assuming they\nshould lie on a circle arc, the problem we state here is to define a function\n$F(a,b,R)$, the minimum of which provides the circle parameters. A usual approach\nis actually to choose as function $F$ a $\\chi^2$. Denoting by $R$ the circle radius\nand by $(a,b)$ the center coordinates, the function is~:\n\n\\begin{equation}\n\\chi^2_n=\\sum_{i,j=1,n}(d_i-Rg_i) V^{-1}_{ij}(d_j-Rg_j) \n\\label{appb13}\n\\end{equation}\n\n\\noindent where $d_i=\\sqrt{(x_i-a)^2+(y_i-b)^2}$ is the distance of each point $(x_i,y_i)$\nto the fit center, and $g_i=1$ defines a constant vector $g$ introduced only in order to\nhave a correct matching of repeated indices. A priori, the covariance matrix \nis defined by $ V_{ij}=<\\delta d_i \\delta d_j>$. Usually, the error functions\n$\\delta d_i$ are obtained by differentiating the expression for $d_i$~:\n\n\n\\begin{equation}\n\\delta d_i=\\frac{(x_i-a)\\delta x_i+ (y_i-b)\\delta y_i}{d_i}\n\\label{appb14}\n\\end{equation}\n\n\\noindent where $\\delta x_i$ and $\\delta y_i$ are the error functions affecting\nthe photon measurements $x_i$ and $y_i$.\n\nIn our case, the points are spread onto a small arc\nand\/or the relative size of the errors compared with the circle radius is large~;\nthen, one has to introduce additional information for reasons already quoted at several\nplaces in the body of this paper. The most\nobvious additional information which is available in our case refers to the circle center. 
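To make the role of the covariance matrix $V$ in Rel. (\ref{appb13}) concrete: if the center $(a,b)$ is held fixed, minimizing $\chi^2_n$ over $R$ alone is an ordinary generalized-least-squares problem with the closed form $R=(g^TV^{-1}d)/(g^TV^{-1}g)$, i.e. the estimator written as Rel. (\ref{challenge}). The sketch below illustrates this with made-up points and an uncorrelated covariance; it is not the reconstruction code used in this work:

```python
# Illustrative sketch of the fixed-center generalized-least-squares radius
# estimate,  R = (g^T V^{-1} d) / (g^T V^{-1} g)  with  g = (1,...,1),
# obtained by minimizing Rel. (appb13) over R alone.  Pure Python, toy data.
import math

def solve(V, b):
    """Solve V x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(V)
    A = [row[:] + [b[i]] for i, row in enumerate(V)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def gls_radius(points, V, center=(0.0, 0.0)):
    a, b = center
    d = [math.hypot(x - a, y - b) for x, y in points]  # the d'_i
    g = [1.0] * len(points)
    num = sum(gi * vdi for gi, vdi in zip(g, solve(V, d)))
    den = sum(gi * vgi for gi, vgi in zip(g, solve(V, g)))  # = 1/sigma_R^2
    return num / den

# Toy data: four points exactly on a unit circle, uncorrelated errors.
pts = [(math.cos(t), math.sin(t)) for t in (0.1, 0.9, 1.7, 2.5)]
V = [[0.01 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(gls_radius(pts, V))  # close to 1.0 for exact circle data
```

With a correlated $V$ (off-diagonal terms as in Rels. (\ref{appb19})), only the matrix changes; the estimator keeps the same form, which is precisely why the correlations cannot be ignored at low momentum.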
\n\nWe have as {\it a priori} information the measurement provided by the tracking device located in front\nof the DIRC bar entrance~; this is summarized by a central value and a covariance error matrix (referred to\neverywhere above as $\\Sigma_0$). Obviously, this defines a distribution\n(normal, assuming we are lucky) but not the actual location of the center \nindeed associated with the track under consideration.\n\nThe question is now~: how to introduce the approximate knowledge\n$(0,0)$ of the given charged track direction and keep anyway its actual center coordinates\n$(a,b)$ to be fit? In the previous case (no charged track information), the measured \nquantities could be written~: \n\n\\begin{equation}\nx_i=x^0_i+a+\\delta x_i ~~,~~ y_i=y^0_i+b+\\delta y_i\n\\label{appc15a}\n\\end{equation}\n\n\\noindent\nwhere $x^0_i=R \\cos{\\varphi_i}$ and $y^0_i=R \\sin{\\varphi_i}$. Here \n$R$ is the true radius, $\\varphi_i$ is the true azimuth on the circle and\n$(a,b)$ the true center. Then the true values for the $x_i$ and $y_i$\nare respectively $x^0_i+a$ and $y^0_i+b$. Fixing the center at the ``measured''\nvalue $(a_i,b_i)$ (actually $(0,0)$) amounts to rewriting these equations~:\n\n\\begin{equation}\nx_i=x^0_i+a_i+\\delta x_i ~~,~~ y_i=y^0_i+b_i+\\delta y_i\n\\label{appc15b}\n\\end{equation}\n\nHowever, for a given well defined track, we can write~:\n\n\\begin{equation}\na_i=a+\\delta a_i~~~,~~~~ b_i=b+\\delta b_i ~~~~,~~~ \\forall i=1, \\cdots ~n\n\\label{appb15}\n\\end{equation}\n \n\\noindent where $\\delta a_i$ and $\\delta b_i$ are the error functions\nwhich take into account the errors at the entrance of the DIRC bar $and$ the multiple \nscattering undergone by the charged track up to the point where it\nemits photon $i$. In this way $(a,b)$ is the $actual$ center \nwhen it exists (no multiple scattering), otherwise it can be formally\ndefined as the mean value of the quantity corresponding to\n$G$ in Rel. 
(\\ref{app13}), which is then non--zero on a track by track basis. \nThen Eqs. (\\ref{appc15b}) can be rewritten~:\n\n\\begin{equation}\nx_i=x^0_i+a+ \\delta a_i+\\delta x_i ~~,~~ y_i=y^0_i+b+ \\delta b_i+\\delta y_i\n\\label{appc15c}\n\\end{equation}\n \nThe difference between the case when the center is left free and when it \nis constrained is transferred to the error functions, which become $\\delta a_i+\\delta x_i$ and \n$\\delta b_i+\\delta y_i$ instead of respectively $\\delta x_i$ and $\\delta y_i$.\nConceptually, the difference comes from what is submitted to fit in both cases.\nIn the former case (free center), the measured quantities submitted to fit are\nthe measured points $(x_i,y_i)$, while in the latter case (constrained center),\nthe quantities\nsubmitted to fit are actually $(x_i-a_{measurement},y_i-b_{measurement})$.\nIn the former case, the center $(a,b)$ is fully fit~; in the latter case,\none fits the departure of the actual center from the measured point $(0,0)$.\n\nThen, Rel. (\\ref{appb14}) is still valid but should be rewritten~:\n\n\n\\begin{equation}\n\\delta d_i = \\frac{(x_i-a)(\\delta x_i+\\delta a_i)+(y_i-b)(\\delta y_i+\\delta b_i)}{d_i}\n\\label{appb16}\n\\end{equation}\n\n\\noindent so that $\\delta x_i$ and $\\delta y_i$ keep their original meaning\n(errors due to the measurement of the photon direction without reference to the charged\ntrack). \nIf, moreover, we choose the origin in the plane so that it coincides\nwith the image of the track direction provided by the tracking device, Rel. (\\ref{appb16})\ncan be approximated by~: \n\n\\begin{equation}\n\\delta d_i = \\frac{x_i(\\delta x_i+\\delta a_i)+y_i(\\delta y_i+\\delta b_i)}{d'_i}\n\\label{appb17}\n\\end{equation}\n\n\\noindent where $d'_i$ in the denominator is $d'_i=\\sqrt{x^2_i+y^2_i}$.\n$\\delta d_i$ in Rels. 
(\\ref{appb16}) and (\\ref{appb14}) differ only\nat first order and only by the differentials $\\delta a_i$ and $\\delta b_i$.\n\nTherefore, the quantity $\\chi^2_n$ (Rel. (\\ref{appb13})) can be used with\n$V_{ij}=<\\delta d_i \\delta d_j>$, where the error functions are given by\nRel. (\\ref{appb17}). As they depend on the path length followed up to the\nemission of each photon, this expression has to be approximated by its mean value\n$V_{ij}=<<\\delta d_i \\delta d_j>>_{u_iu_j}$, easily computable\nusing all information given above.\n\nIn this way, we are in a position to define $\\chi^2_n$ as a function of \n$(a,b)$ and $R$. It remains to account for forcing the coordinates \n$(a,b)$ to remain in the neighborhood of the measured point $(0,0)$~;\nthis is achieved{\\footnote{One may ask oneself whether Rel. (\\ref{appb18})\nactually exhausts the problem. Indeed, one may be tempted to introduce\nin the $\\chi^2$ to be minimized terms coupling $a$ and $b$ with the\n$d_i-Rg_i$~; this is not studied here. Anyway, such additional terms\nwould surely degrade the algorithm speed. From the results already at hand, \none may conclude that their effect is small for track momenta above 500 MeV\/c.\n}} \nby defining the following $\\chi^2$~:\n\n\\begin{equation}\n\\chi^2 = \\chi^2_n + \\chi^2_C\n\\label{appb18}\n\\end{equation}\n\n\\noindent using Rels. (\\ref{appb13}) and (\\ref{appb12}), where \nthe influence of multiple scattering is already taken\ninto account, including correlations. While $\\chi^2_n$\nhas $n-3$ degrees of freedom, the $\\chi^2$ just defined\nhas $n-1$ degrees of freedom.\n\nOne could ask oneself about the influence of the additional\nterm $\\chi^2_C$ when minimising, if the circle arc happens to be large enough\nthat such a term is actually useless. We have checked numerically\nthis case using trivial simulations, where the populated arc length \nand the number of \"measured\" points could be varied at will. 
We have \nfound that for large circle arcs (about 180$^{\\circ}$ or more)\nthe additional term $\\chi^2_C$ did not prevent obtaining the same solution as\nwhen it is removed, with completely negligible fluctuations.\n\n\nAnother possibility could be considered. This amounts to giving up \nfitting the center, accepting the measured value $(0,0)$ as optimum.\nUsing the notation $d'_i \\equiv d_i(a=0,b=0)$, this amounts\nto estimating the radius by~:\n\n\\begin{equation}\nR=\\displaystyle \\sigma_R^2 \\sum_{ij} g_i V^{-1}_{ij} d'_j ~~~, \n~~~\\frac{1}{\\sigma_R^2}=\\sum_{ij} g_i V^{-1}_{ij} g_j\n\\label{challenge}\n\\end{equation}\n\nThis gives results close in quality to minimising Eq. (\\ref{appb18})\nif the fixed center $(a,b)=(0,0)$ is correctly measured. If, however, there\nis any bias in this estimate, it may become much worse. Indeed, in the case\nwhen the measured center is slightly biased, minimising Eq. (\\ref{appb18})\nis safe as one recovers the correct center location, even starting from a \nwrong center~; instead, using Eq. (\\ref{challenge}), the estimate of \nthe radius suffers pile-up effects which add up to appreciable distortions. \nFor instance, assuming a 2.8 mr systematic error on the charged track direction\n(known with statistical accuracy 2 mr rms),\nthe radius pull gets an rms of 1.12, instead of 0.98\nunder the same conditions for the procedure we recommend.\nThis difference of behaviour degrades in the presence of background \n(1.4 compared to 1.2) or if the statistical accuracy on the charged \ntrack direction worsens. \n\n\\subsection*{B5 Correlations among Photons in the Equatorial Plane}\n\n\\indent \\indent \n\nThe radius $\\tan{\\theta_C\/2}$ of the circle is approximated by each of the photon\ndistances to the common center ($d'_i =\\sqrt{x_i^2+y_i^2}$~). Each such estimate\nis affected by an error function which is given by Rel. 
(\\ref{appb17}),\n with the measured center set at the origin in the equatorial plane.\nFrom expressions given above, we can then deduce~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{llll}\n<[\\delta d_i]^2>&=&\\displaystyle \\frac{1}{d^{'2}_i}\n\\left[\nx_i^2[<[\\delta x_i]^2> +\\frac{1}{4}<[\\delta \\theta]^2>]+\ny_i^2[<[\\delta y_i]^2> +\\frac{1}{4}\\sin^2{\\theta}<[\\delta \\phi]^2>]+ \\right.\\\\[0.5cm]\n~&~&\\left. \\displaystyle 2 x_iy_i[<\\delta x_i\\delta y_i>+\\frac{1}{4}\\sin{\\theta}<\\delta \\theta \\delta \\phi>]\\right]\n+ \\displaystyle \\frac{1}{8}c^2 \\frac{L}{X_0}\n\\\\[0.5cm]\n<\\delta d_i \\delta d_j>&=&\\displaystyle \\frac{1}{4d'_id'_j}\n\\left[\nx_ix_j<[\\delta \\theta]^2>+y_iy_j\\sin^2{\\theta}<[\\delta \\phi]^2>+ \\right.\\\\[0.5cm]\n~&~& \\left. \\displaystyle (x_iy_j+x_jy_i) \\sin{\\theta} <\\delta \\theta\\delta \\phi>\\right] \n+ \\displaystyle \\frac{(x_ix_j+y_iy_j)}{d'_id'_j}\\frac{1}{12}c^2 \\frac{L}{X_0}\n\\end{array}\n\\right.\n\\label{appb19}\n\\end{equation}\n\n\\noindent where we have assumed that the measurements ($x$ and $y$) for photons $i$ and $j$\nare statistically independent for ease of reading.\n\nThese relations give the expression for the elements of the error covariance\nmatrix which enter the fit procedure described in the body of the text and just above.\nThe variance for each estimate of the circle radius ($d'_i$) depends on the photon errors, \nthe track direction errors and the multiple scattering it undergoes~;\nthe covariance term exhibits an interesting feature~: up to the fact that the metric along $\\vec{v}$\nand $\\vec{w}$ are different, one sees a surprising correlation pattern. 
Qualitatively,\ncorrelations are the strongest (and positive) for photons close to each other in azimuth,\ncorrelations are the strongest (and negative) for pairs of photons opposite in azimuth, while\nthere is no correlation for photon pairs having azimuthal distance of $\\pi\/2$.\n\n\\subsection*{B6 Multiple Scattering Effects in the General Case}\n\n\\indent \\indent When the arc to be fit is large enough (possibly $2 \\pi$ radians),\nfixing the circle center becomes irrelevant. In this case, a question remains\nabout the influence of multiple scattering on the error definition and the fit \nprocedure.\n\nFor each photon, $d_i=\\sqrt{(x_i-a_i)^2+(y_i-b_i)^2}$ remains the basic quantity which \nenters the fit procedure. Whatever is the way to express the\nproblem, we have as free parameter the charged track direction\nand as ``data'' the angular distance of this direction with photon directions.\nAs noted above, when multiple scattering is active, the direction of the charged\ntrack varies from photon to photon with theoretically known statistical fluctuations.\nTherefore the estimate of the radius provided by each photon inherits the\nfluctuations of the charged track. Stated otherwise, the error due\nto multiple scattering can either be treated separately (as we did)\nor included in the error function of the measurement ($x_i,y_i$),\ntogether with the other contributions (geometrical errors, chromaticity error).\nThis means that Rel. (\\ref{appb17}) is still relevant. Therefore,\nwhen there is no center fixing term, \nit is equivalent to consider that the error\nfunction on ($x_i,y_i$) is ($\\delta x_i+\\delta a_i,\\delta y_i+\\delta b_i$) ,\nwhere the second term of each component is reduced to only the multiple \nscattering contribution. \n\nIn this case, Rels. 
(\\ref{appb19}) become~:\n\n\\begin{equation}\n\\left \\{\n\\begin{array}{llll}\n<[\\delta d_i]^2>&=&\\displaystyle \\frac{1}{d^{'2}_i}\n\\left[ \\displaystyle\nx_i^2<[\\delta x_i]^2> + y_i^2<[\\delta y_i]^2> + 2 x_iy_i<\\delta x_i\\delta y_i>\\right]\n+ \\displaystyle \\frac{1}{8}c^2 \\frac{L}{X_0}\n\\\\[0.5cm]\n<\\delta d_i \\delta d_j>&=&\\displaystyle \\frac{(x_ix_j+y_iy_j)}{d'_id'_j}\n\\frac{1}{12}c^2 \\frac{L}{X_0}\n\\end{array}\n\\right.\n\\label{appb20}\n\\end{equation}\n\nThis shows that correlations among photons always exist, due however\nonly to the properties of multiple scattering. Therefore, for low momentum tracks,\nwhen the multiple scattering is dominant, correlations among photons can never\nbe ignored in any reconstruction procedure for devices like the DIRC.\nThis is clearly independent of the representation chosen for the data (here\nthe stereographic projection)~; it only relies on the fact that the measured\nquantities are angles between photons and\na single charged track direction which changes in a correlated way from one photon to the other. \n\n\\newpage\n\\section*{Acknowledgements}\n\\indent \\indent We thank our colleagues of the BaBar DIRC group for the\ninterest they manifested in the successive steps of this work. We also\nwarmly acknowledge J. Chauveau for important remarks and comments.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{The monster diagram} \\lbl{diagram}\n\n\\subsection{The vertices} Let ${\\mathfrak g}_{\\mathbf R}$ be the (semi-simple) \nLie-algebra of some compact Lie group $G$, let ${\\mathfrak g}={\\mathfrak g}_{\\mathbf R}\n\\otimes{\\mathbf C}$, let ${\\mathfrak h}\\subset i{\\mathfrak g}_{\\mathbf R}$ be a Cartan\nsubalgebra of ${\\mathfrak g}$, and let $W$ be the Weyl group of ${\\mathfrak h}$ in\n${\\mathfrak g}$. 
Let $\\Delta_+\\subset{\\mathfrak h}^\\star$ be a set of positive\nroots of ${\\mathfrak g}$, and let $\\rho\\in i{\\mathfrak g}_{\\mathbf R}^\\star$ be half\nthe sum of the positive roots. Let $\\hbar$ be an indeterminate, and\nlet ${\\mathbf C}[[\\hbar]]$ be the ring of formal power series in $\\hbar$\nwith coefficients in ${\\mathbf C}$.\n\n\\begin{myitemize}\n\n\\item ${{\\mathcal K}^F}$ is the set of all framed knots in ${\\mathbf R}^3$.\n\n\\item ${\\mathcal A}$ is the algebra of not-necessarily-connected chord diagrams,\n as in page~\\pageref{Adefinition}.\n\n\\item ${\\calB'_\\times}$ and ${\\calB'_{\\mathaccent\\cdot\\cup}}$ denote the space of Chinese characters (allowing\nconnected components that have no univalent vertices), as in\npage~\\pageref{Bdefinition}, taken with its two algebra structures.\n\n\\item ${U(\\frakg)^\\frakg[[\\hbar]]}$ is the ${\\mathfrak g}$-invariant part of the universal enveloping\nalgebra $U({\\mathfrak g})$ of ${\\mathfrak g}$, with the coefficient ring extended to be\n${\\mathbf C}[[\\hbar]]$.\n\n\\item ${S(\\frakg)^\\frakg_\\times[[\\hbar]]}$ \\lbl{Stdefinition} and ${S(\\frakg)^\\frakg_{\\mathaccent\\cdot\\cup}[[\\hbar]]}$ denote the ${\\mathfrak g}$-invariant\npart of the symmetric algebra $S({\\mathfrak g})$ of ${\\mathfrak g}$, with the\ncoefficient ring extended to be ${\\mathbf C}[[\\hbar]]$. In ${S(\\frakg)^\\frakg_{\\mathaccent\\cdot\\cup}[[\\hbar]]}$ we take\nthe algebra structure induced from the natural algebra structure of the\nsymmetric algebra. 
In ${S(\\frakg)^\\frakg_\\times[[\\hbar]]}$ we take the algebra structure induced from\nthe algebra structure of ${U(\\frakg)^\\frakg[[\\hbar]]}$ by the symmetrization map ${\\beta_\\frakg}:{S(\\frakg)^\\frakg_\\times[[\\hbar]]}\\to{U(\\frakg)^\\frakg[[\\hbar]]}$,\nwhich is a linear isomorphism by the Poincare-Birkhoff-Witt theorem.\n\n\\item ${P(\\frakh^\\star)^W[[\\hbar]]}$ is the space of Weyl-invariant polynomial functions on\n${\\mathfrak h}^\\star$, with coefficients in ${\\mathbf C}[[\\hbar]]$.\n\n\\item ${P(\\frakg^\\star)^\\frakg}[[\\hbar]]$ is the space of ad-invariant polynomial functions on\n${\\mathfrak g}^\\star$, with coefficients in ${\\mathbf C}[[\\hbar]]$.\n\n\\end{myitemize}\n\n\\subsection{The edges} \\lbl{TheEdges}\n\n\\begin{myitemize}\n\n\\item $Z$ is the framed version of the Kontsevich integral for knots as\ndefined in~\\cite{LeMurakami:Universal}. A simpler (and equal) definition for\na framed knot $K$ is\n\\[\n Z(K)=e^{\\Theta\\cdot\\text{writhe}(K)\/2}\n \\cdot S\\left(\\tilde{Z}(K)\\right)\n \\in{\\mathcal A}\\subset{\\mathcal A},\n\\]\nwhere $\\Theta$ is the chord diagram $\\silenteepic{theta}{0.5}$, $S$ is the\nstandard algebra map ${\\mathcal A}^r={\\mathcal A}\/<\\Theta>\\to{\\mathcal A}$ defined by mapping\n$\\Theta$ to $0$ and leaving all other primitives of ${\\mathcal A}$ in place, and\n$\\tilde{Z}$ is the Kontsevich integral as in~\\cite{Kontsevich:Vassiliev}.\n\n\\item $\\chi$ is the symmetrization map ${\\calB'_\\times}\\to{\\mathcal A}$, as on\npage~\\pageref{chidefinition}. It is an algebra isomorphism\nby~\\cite{Bar-Natan:Vassiliev} and the definition of $\\times$.\n\n\\item ${\\hat{\\Omega}}$ is the wheeling map as in page~\\pageref{Ohdefinition}. 
We\nargue that it should be an algebra (iso-)morphism\n(Conjecture~\ref{WheelingConjecture}).\n\n\item ${RT_\frakg}$ denotes the Reshetikhin-Turaev knot invariant\nassociated with the Lie algebra ${\mathfrak g}$~\cite{Reshetikhin:QUEA,\nReshetikhin:Quasitriangle, ReshetikhinTuraev:Ribbon, Turaev:Invariants}.\n\n\item ${{\mathcal T}_\frakg}$ (in all three instances) is the usual\n``diagrams to Lie algebras'' map, as in~\cite[Section~2.4 and\nexercise~5.1]{Bar-Natan:Vassiliev}. The only variation we make\nis that we multiply the image of a degree $m$ element of ${\mathcal A}$\n(or ${\calB'_\times}$ or ${\calB'_{\mathaccent\cdot\cup}}$) by $\hbar^m$. In the construction of ${{\mathcal T}_\frakg}$\nan invariant bilinear form on ${\mathfrak g}$ is needed. We use\nthe standard form $(\cdot,\cdot)$ used in~\cite{ReshetikhinTuraev:Ribbon}\nand in~\cite[Appendix]{ChariPressley:QuantumGroups}. See\nalso~\cite[Chapter~2]{Kac:InfiniteDimensionalLieAlgebras}.\n\n\item The isomorphism ${\beta_\frakg}$ was already discussed when ${S(\frakg)^\frakg_\times[[\hbar]]}$ was defined on\npage~\pageref{Stdefinition}.\n\n\item The definition of the ``Duflo map'' ${D(j_\frakg^{1\/2})}$ requires some\npreliminaries. If $V$ is a vector space, there is an algebra map\n$D:P(V)\to\operatorname{Diff}(V^\star)$ between the algebra $P(V)$ of polynomial\nfunctions on $V$ and the algebra $\operatorname{Diff}(V^\star)$ of constant-coefficient\ndifferential operators on the symmetric algebra $S(V)$. $D$ is defined on\ngenerators as follows: If $\alpha\in V^\star$ is a degree 1 polynomial\non $V$, set $D(\alpha)(v)=\alpha(v)$ for $v\in V\subset S(V)$, and\nextend $D(\alpha)$ to all of $S(V)$ using the Leibniz rule. A different\n(but less precise) way of defining $D$ is via the Fourier transform:\nIdentify $S(V)$ with the space of functions on $V^\star$. 
A polynomial\nfunction on $V$ becomes a differential operator on $V^\star$ after\ntaking the Fourier transform, and this defines our map $D$. Either way,\nif $j\in P(V)$ is homogeneous of degree $k$, the differential operator\n$D(j)$ lowers degrees by $k$ and thus vanishes on the low degrees of\n$S(V)$. Hence $D(j)$ makes sense even when $j$ is a power series instead\nof a polynomial. This definition has a natural extension to the case\nwhen the spaces involved are extended by ${\mathbf C}[[\hbar]]$, or even\n${\mathbf C}(\!(\hbar)\!)$, the algebra of formal Laurent series in $\hbar$.\n\nNow use this definition of $D$ with $V={\mathfrak g}$ to\ndefine the Duflo map ${D(j_\frakg^{1\/2})}$, where $j_{\mathfrak g}(X)$ is defined for $X\in{\mathfrak g}$ by\n\[ j_{\mathfrak g}(X) =\n \det\left(\frac{\sinh\operatorname{ad} X\/2}{\operatorname{ad} X\/2}\right).\n\]\nThe square root $j_{\mathfrak g}^{1\/2}$ of\n$j_{\mathfrak g}$ is defined as in~\cite{Duflo:Resolubles}\nor~\cite[Section~8.2]{BerlineGetzlerVergne:DiracOperators}, and is a power\nseries in $X$ that begins with $1$. We note that by Kirillov's formula\nfor the character of the trivial representation (see e.g.~\cite[Theorem\n8.4 with $\lambda=i\rho$]{BerlineGetzlerVergne:DiracOperators}),\n$j_{\mathfrak g}^{1\/2}$ is the Fourier transform of the symplectic\nmeasure on $M_{i\rho}$, where $M_{i\rho}$ is the co-adjoint\norbit of $i\rho$ in ${\mathfrak g}_{\mathbf R}^\star$ (see\ne.g.~\cite[Section~7.5]{BerlineGetzlerVergne:DiracOperators}):\n\begin{equation} \lbl{FourierTransform}\n j_{\mathfrak g}^{1\/2}(X) = \int_{r\in M_{i\rho}} e^{ir(X)}dr.\n\end{equation}\n(We consider the symplectic measure as a measure on\n${\mathfrak g}_{\mathbf R}^\star$, whose support is the subset $M_{i\rho}$\nof ${\mathfrak g}_{\mathbf R}^\star$. 
Its Fourier transform is a function on\n${\mathfrak g}_{\mathbf R}$ that can be computed via integration on the support\n$M_{i\rho}\subset{\mathfrak g}_{\mathbf R}^\star$ of the symplectic measure.)\nDuflo~\cite[th\'eor\`eme~V.2]{Duflo:Resolubles} proved that ${D(j_\frakg^{1\/2})}$ is an\nalgebra isomorphism.\n\n\item ${\psi_\frakg}$ is the Harish-Chandra isomorphism $U({\mathfrak g})^{\mathfrak g}\to\nP({\mathfrak h}^\star)^W$ extended by $\hbar$. Using the representation theory\nof ${\mathfrak g}$, it is defined as follows. If $z$ is in $U({\mathfrak g})^{\mathfrak g}$\nand $\lambda\in{\mathfrak h}^\star$ is a positive integral weight, we set\n${\psi_\frakg}(z)(\lambda)$ to be the scalar by which $z$ acts on the irreducible\nrepresentation of ${\mathfrak g}$ whose highest weight is $\lambda-\rho$. It\nis well known (see e.g.~\cite[Section~23.3]{Humphreys:LieAlgebras})\nthat this partial definition of ${\psi_\frakg}(z)$ extends uniquely to a\nWeyl-invariant polynomial (also denoted ${\psi_\frakg}(z)$) on ${\mathfrak h}^\star$,\nand that the resulting map ${\psi_\frakg}:U({\mathfrak g})^{\mathfrak g}\to P({\mathfrak h}^\star)^W$\nis an isomorphism.\n\n\item The two equalities at the lower right quarter of the monster diagram\nneed no explanation. We note though that if the space of polynomials ${P(\frakg^\star)^\frakg}[[\hbar]]$\nis endowed with its obvious algebra structure, only the lower equality is\nin fact an equality of algebras.\n\n\item ${\iota_\frakg}$ is the restriction map induced by the identification of\n${\mathfrak h}^\star$ with a subspace of ${\mathfrak g}^\star$ defined using the\nform $(\cdot,\cdot)$ of ${\mathfrak g}$. The map ${\iota_\frakg}$ is an isomorphism by\nChevalley's theorem (see e.g.~\cite[Section~23.1]{Humphreys:LieAlgebras}\nand~\cite[Section~VI-2]{BrockerTomDieck:Representations}).\n\n\item ${S_\frakg}$ is the extension by $\hbar$ of an integral operator. 
If\n$p(\\lambda)$ is an invariant polynomial of $\\lambda\\in{\\mathfrak g}^\\star$, then\n\\[ {S_\\frakg}(p)(\\lambda)=\\int_{r\\in M_{i\\rho}}p(\\lambda-ir)dr. \\]\n${S_\\frakg}$ can also be viewed as a convolution operator (with a measure\nconcentrated on $M_\\rho$), and like all convolution operators, it maps\npolynomials to polynomials.\n\n\\end{myitemize}\n\n\\subsection{The faces}\n\n\\begin{myitemize}\n\n\\item The commutativity of the face labeled ${\\silenteepic{Ca}{0.6}}$\nwas proven by Kassel~\\cite{Kassel:QuantumGroups} and\nLe and Murakami~\\cite{LeMurakami:Universal} following\nDrinfel'd~\\cite{Drinfeld:QuasiHopf, Drinfeld:GalQQ}. We comment that\nit is this commutativity that makes the notion of ``canonical Vassiliev\ninvariants''~\\cite{Bar-NatanGaroufalidis:MMR} interesting.\n\n\\item The commutativity of the face labeled ${\\silenteepic{Cb}{0.5}}$ is immediate from the\ndefinitions, and was already noted in~\\cite{Bar-Natan:Vassiliev}.\n\n\\item The commutativity of the face labeled ${\\silenteepic{Cc}{0.4}}$ (notice that this face\nfully encloses the one labeled ${\\silenteepic{Ce}{0.5}}$) is due to\nDuflo~\\cite[th\\'eor\\`eme~V.1]{Duflo:Resolubles}.\n\n\\end{myitemize}\n\n\\begin{proposition} \\lbl{PreciseWheelingTheorem}\nThe face labeled ${\\silenteepic{Cd}{0.5}}$ is commutative.\n\\end{proposition}\n\n\\begin{remark} Recalling that ${D(j_\\frakg^{1\/2})}$ is an algebra isomorphism,\nProposition~\\ref{PreciseWheelingTheorem} becomes the precise formulation\nof Theorem~\\ref{WheelingTheorem}.\n\\end{remark}\n\n\\begin{proof}[Proof of Proposition~\\ref{PreciseWheelingTheorem}] Follows\nimmediately from the following two lemmas, taking $C=\\Omega$ in\n\\eqref{ChatC}.\n\\def$\\hfill\\smily${$\\hfill\\smily$}\n\\end{proof}\n\n\\begin{lemma} \\lbl{FirstLemma} Let $\\kappa:{\\mathfrak g}\\to{\\mathfrak g}^\\star$ be the\nidentification induced by the standard bilinear form $(\\cdot,\\cdot)$ of\n${\\mathfrak g}$. 
Extend $\\kappa$ to all symmetric powers of ${\\mathfrak g}$, and let\n$\\kappa^\\hbar:S({\\mathfrak g})^{\\mathfrak g}[[\\hbar]]\\to S({\\mathfrak g}^\\star)(\\!(\\hbar)\\!)$\nbe defined for a homogeneous $s\\in S({\\mathfrak g})^{\\mathfrak g}[[\\hbar]]$ (relative to\nthe grading of $S({\\mathfrak g})$) by $\\kappa^\\hbar(s)=\\hbar^{-\\deg s}\\kappa(s)$.\nIf $C\\in{\\mathcal B}'$ is a Chinese character, $\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$ is\nthe operator corresponding to $C$ as in Definition~\\ref{ChatDef}, and\n$C'\\in{\\mathcal B}'$ is another Chinese character, then\n\\begin{equation} \\lbl{ChatC}\n {{\\mathcal T}_\\frakg}\\hat{C}(C') = D(\\kappa^\\hbar{{\\mathcal T}_\\frakg} C){{\\mathcal T}_\\frakg} C'.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} If $\\kappa j$ is a tensor in\n$S^{k}({\\mathfrak g}^\\star)\\subset{\\mathfrak g}^{\\star\\otimes k}$, the $k$'th\nsymmetric tensor power of ${\\mathfrak g}^\\star$, and $j'$ is a tensor in\n$S^{k'}({\\mathfrak g})\\subset{\\mathfrak g}^{\\otimes k'}$, then\n\\begin{equation} \\lbl{DInFull}\n D(\\kappa j)(j')=\\begin{cases}\n 0 & \\text{if $k>k'$,} \\\\\n \\parbox{2.6in}{\n the sum of all ways of contracting all the tensor components of $j$\n with some (or all) tensor components of $j'$\n }\\quad & \\text{otherwise.}\n \\end{cases}\n\\end{equation}\nBy definition, the ``diagrams to Lie algebras'' map carries gluing to\ncontraction, and hence carries the operation in Definition~\\ref{ChatDef}\nto the operation in~\\eqref{DInFull}, namely, to $D$. 
Counting powers of\n$\hbar$, this proves~\eqref{ChatC}.\n\def$\hfill\smily${$\hfill\smily$}\n\end{proof}\n\n\begin{lemma} \lbl{TgOmega}\n$\kappa^\hbar{{\mathcal T}_\frakg}\Omega=j_{\mathfrak g}^{1\/2}$.\n\end{lemma}\n\n\begin{proof} It follows easily from the definition of ${{\mathcal T}_\frakg}$ and $\kappa^\hbar$\nthat $(\kappa^\hbar{{\mathcal T}_\frakg}\omega_n)(X)=\operatorname{tr}(\operatorname{ad} X)^n$ for any $X\in{\mathfrak g}$.\nHence, using the fact that $\kappa^\hbar\circ{{\mathcal T}_\frakg}$ is an algebra morphism if\n${\mathcal B}'$ is taken with the disjoint union product,\n\[ (\kappa^\hbar{{\mathcal T}_\frakg}\Omega)(X)\n = \exp\sum_{n=1}^\infty b_{2n}(\kappa^\hbar{{\mathcal T}_\frakg}\omega_{2n})(X)\n = \exp\sum_{n=1}^\infty b_{2n}\operatorname{tr}(\operatorname{ad} X)^{2n}\n = \det\exp\sum_{n=1}^\infty b_{2n}(\operatorname{ad} X)^{2n}.\n\]\nBy the definition of the modified Bernoulli numbers~\eqref{MBNDefinition},\nthis is\n\begin{equation}\n \det\exp\frac{1}{2}\log\frac{\sinh \operatorname{ad} X\/2}{\operatorname{ad} X\/2}\n = \det\left(\frac{\sinh \operatorname{ad} X\/2}{\operatorname{ad} X\/2}\right)^{1\/2}\n = j_{\mathfrak g}^{1\/2}(X).\n \prtag{\smily}\n\end{equation}\n\renewcommand{$\hfill\smily$}{}\n\end{proof}\n\n\begin{proposition} \nThe face labeled ${\silenteepic{Ce}{0.5}}$ is commutative.\n\end{proposition}\n\n\begin{proof} According to M.~Vergne (private communication), this is a\nwell known fact. We could not find a reference, so here's the gist of\nthe proof. Forgetting about powers of $\hbar$ and ${\mathfrak g}$-invariance\nand taking the Fourier transform (over ${\mathfrak g}_{\mathbf R}$), the\ndifferential operator ${D(j_\frakg^{1\/2})}$ becomes the operator of multiplication by\n$j_{\mathfrak g}^{1\/2}(iX)$ on $S({\mathfrak g})$. 
Taking the inverse Fourier transform,\nwe see that ${D(j_\frakg^{1\/2})}$ is the operator of convolution with the inverse Fourier\ntransform of $j_{\mathfrak g}^{1\/2}(iX)$, which is the symplectic measure on\n$M_\rho$ (see~\eqref{FourierTransform}). So ${D(j_\frakg^{1\/2})}$ is convolution with\nthat measure, as required.\n\def$\hfill\smily${$\hfill\smily$}\n\end{proof}\n\n\section{Introduction}\n\n\subsection{The conjectures}\nLet us start with the statements of our conjectures; the rest of\nthe paper is concerned with motivating and justifying them. We\nassume some familiarity with the theory of Vassiliev invariants. See\ne.g.~\cite{Bar-Natan:Vassiliev, Birman:Bulletin, BirmanLin:Vassiliev,\nGoussarov:New, Goussarov:nEquivalence, Kontsevich:Vassiliev,\nVassiliev:CohKnot, Vassiliev:Book} and~\cite{Bar-Natan:VasBib}.\n\nVery briefly, recall that any complex-valued knot invariant $V$ can be\nextended to an invariant of knots with double points ({\em singular\nknots}) via the formula $V(\!\silenteepic{DoublePoint}{0.5}\!\!) =\nV(\!\silenteepic{OverCrossing}{0.5}\!\!) -\nV(\!\silenteepic{UnderCrossing}{0.5}\!\!)$. An invariant of knots\n(or framed knots) is called a {\em Vassiliev invariant}, or a {\em\nfinite type invariant of type $m$}, if its extension to singular knots\nvanishes whenever evaluated on a singular knot that has more than $m$\ndouble points. Vassiliev invariants are in some sense analogous to\npolynomials (on the space of all knots), and one may hope that they\nseparate knots. While this is an open problem and the precise power of the\nVassiliev theory is yet unknown, it is known (see~\cite{Vogel:Structures})\nthat Vassiliev invariants are strictly stronger than the\nReshetikhin-Turaev invariants (\cite{ReshetikhinTuraev:Ribbon}), and in\nparticular they are strictly stronger than the Alexander-Conway, Jones,\nHOMFLY, and Kauffman invariants. 
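The analogy with polynomials can be made concrete in a toy model: replacing the crossing-change difference by a finite difference, a function on $\mathbf{Z}$ is a polynomial of degree at most $m$ exactly when its $(m+1)$-fold difference vanishes identically, just as a type $m$ invariant vanishes on singular knots with more than $m$ double points. The following sketch (ours, purely illustrative; not part of the original text) spells this out:

```python
def diff(f):
    # discrete analogue of the relation V(double point) = V(over) - V(under)
    return lambda n: f(n + 1) - f(n)

def iterate_diff(f, k):
    # apply the difference operator k times ("k double points")
    for _ in range(k):
        f = diff(f)
    return f

p = lambda n: 2 * n**3 - 5 * n + 1   # a degree-3 polynomial, the "type 3" analogue
d4 = iterate_diff(p, 4)              # 4-fold difference: vanishes identically
d3 = iterate_diff(p, 3)              # 3-fold difference: a nonzero constant
print([d4(n) for n in range(5)])     # → [0, 0, 0, 0, 0]
print(d3(0))                         # → 12
```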
Hence one is interested in a detailed\nunderstanding of the theory of Vassiliev invariants.\n\nThe set ${\mathcal V}$ of all Vassiliev invariants of framed knots is a linear\nspace, filtered by the ``type'' of an invariant. The fundamental theorem\nof Vassiliev invariants, due to Kontsevich~\cite{Kontsevich:Vassiliev},\nsays that the associated graded space $\operatorname{gr}{\mathcal V}$ of ${\mathcal V}$ can be\nidentified with the graded dual ${\mathcal A}^\star$ of a certain completed\ngraded space ${\mathcal A}$ \lbl{Adefinition} of formal linear combinations of\ncertain diagrams, modulo certain linear relations. The ``diagrams'' in\n${\mathcal A}$ are connected graphs made of a single distinguished directed line\n(the ``skeleton''), some number of undirected ``internal edges'', some\nnumber of trivalent ``external vertices'' in which an internal edge ends\non the skeleton, and some number of trivalent ``internal vertices''\nin which three internal edges meet. It is further assumed that the\ninternal vertices are ``oriented''; that is, for each internal vertex\none of the two possible cyclic orderings of the edges emanating\nfrom it is specified. An example of a diagram in ${\mathcal A}$ is in\nfigure~\ref{BasicDefinitions}. The linear relations in the definition\nof ${\mathcal A}$ are the well-known $AS$, $IHX$, and $STU$ relations, also\nshown in figure~\ref{BasicDefinitions}. The space ${\mathcal A}$ is graded by\nhalf the total number of trivalent vertices in a given diagram.\n\n\begin{figure}[htpb]\n\[ \eepic{BasicDefinitions}{0.85} \]\n\caption{\n A diagram in ${\mathcal A}$, a diagram in ${\mathcal B}$ (a Chinese character),\n and the $AS$, $IHX$, and $STU$ relations. 
All internal vertices shown\n are oriented counterclockwise.\n}\n\\lbl{BasicDefinitions}\n\\end{figure}\n\nThe most difficult part of the currently known proofs of the\nisomorphism $\\operatorname{gr}{\\mathcal V}\\cong{\\mathcal A}^\\star$ is the construction of a\n``universal Vassiliev invariant''; an ${\\mathcal A}$-valued framed-knot\ninvariant that satisfies a certain universality property. Such a\n``universal Vassiliev invariant'' is not unique; the set of universal\nVassiliev invariants is in a bijective correspondence with the set of all\nfiltration-respecting maps ${\\mathcal V}\\to\\operatorname{gr}{\\mathcal V}$ that induce the identity\nmap $\\operatorname{gr}{\\mathcal V}\\to\\operatorname{gr}{\\mathcal V}$. But it is a noteworthy and not terribly well\nunderstood fact that all known constructions of a universal Vassiliev\ninvariant are either known to give the same answer or are conjectured to\ngive the same answer as the original ``framed Kontsevich integral'' $Z$\n(see Section~\\ref{TheEdges}). Furthermore, the Kontsevich integral is\nwell behaved in several senses, as shown in~\\cite{Bar-Natan:Vassiliev,\nBar-NatanGaroufalidis:MMR, Kassel:QuantumGroups, Kontsevich:Vassiliev,\nLeMurakami2Ohtsuki:3Manifold, LeMurakami:Universal, LeMurakami:Parallel}.\n\nThus it seems that $Z$ is a canonical and not an accidental object. It\nis therefore surprising how little we know about it. While there are\nseveral formulas for computing $Z$, they are all of limited use beyond\nthe first few degrees. Presently, we do not know how to compute $Z$\nfor {\\em any} knot; not even the unknot!\n\nOur first conjecture is about the value of the Kontsevich integral of the\nunknot. We conjecture a completely explicit formula, written in terms\nof an alternative realization of the space ${\\mathcal A}$, the space ${\\mathcal B}$\nof ``Chinese characters'' (see~\\cite{Bar-Natan:Vassiliev}). 
The space\n${\\mathcal B}$ is also a completed graded space of formal linear combinations\nof diagrams modulo linear relations: the diagrams are the so-called\nChinese characters, which are the same as the diagrams in ${\\mathcal A}$\nexcept that a skeleton is not present, and instead a certain number\nof univalent vertices are allowed (the connectivity requirement is\ndropped, but one insists that every connected component of a Chinese\ncharacter would have at least one univalent vertex). An example of a\nChinese character is in figure~\\ref{BasicDefinitions}. The relations\nare the $AS$ and $IHX$ relations that appear in the same figure (but\nnot the $STU$ relation, which involves the skeleton). The degree of a\nChinese character is half the total number of its vertices. There is a\nnatural isomorphism $\\chi:{\\mathcal B}\\to{\\mathcal A}$ \\lbl{chidefinition} which maps\nevery Chinese character to the average of all possible ways of placing\nits univalent vertices along a skeleton line. In a sense that we will\nrecall below, the fact that $\\chi$ is an isomorphism is an analog of\nthe Poincare-Birkhoff-Witt (PBW) theorem. 
We note that the inverse map\n$\\sigma$ of $\\chi$ is more difficult to construct and manipulate.\n\n\\begin{conjecture} \\lbl{UnknotConjecture} (Wheels)\nThe framed Kontsevich integral of the unknot,\n$Z(\\bigcirc)$, expressed in terms of Chinese characters, is equal to\n\\begin{equation} \\lbl{OmegaDef}\n \\Omega=\\exp_{\\mathaccent\\cdot\\cup} \\sum_{n=1}^\\infty b_{2n}\\omega_{2n}.\n\\end{equation}\n\\end{conjecture}\n\nThe notation in~\\eqref{OmegaDef} means:\n\\begin{myitemize}\n\\item The `modified Bernoulli numbers' $b_{2n}$ are defined by the power\nseries expansion\n\\begin{equation} \\lbl{MBNDefinition}\n \\sum_{n=0}^\\infty b_{2n}x^{2n} = \\frac{1}{2}\\log\\frac{\\sinh x\/2}{x\/2}.\n\\end{equation}\nThese numbers are related to the usual Bernoulli numbers $B_{2n}$ and\nto the values of the Riemann $\\zeta$-function on the even integers via\n(see e.g.~\\cite[Section~12.12]{Apostol:AnalyticNumberTheory})\n\\[\n b_{2n}\n =\\frac{B_{2n}}{4n(2n)!}\n =\\frac{(-1)^{n+1}}{2n(2\\pi)^{2n}}\\zeta(2n).\n\\]\nThe first three modified Bernoulli numbers are $b_2=1\/48$, $b_4=-1\/5760$,\nand $b_6=1\/362880$.\n\\item The `$2n$-wheel' $\\omega_{2n}$ is the degree $2n$ Chinese character\nmade of a $2n$-gon with $2n$ legs:\n\\[\n \\omega_2=\\eepic{2wheel}{0.6},\\quad\n \\omega_4=\\eepic{4wheel}{0.6},\\quad\n \\omega_6=\\eepic{6wheel}{0.6},\\quad\\ldots,\n\\]\n(with all vertices oriented counterclockwise).\\footnote{\n Wheels have appeared in several noteworthy places before:\n \\cite{Chmutov:CombinatorialMelvinMorton, ChmutovVarchenko:sl2,\n KrickerSpenceAitchison:Cabling, Vaintrob:Primitive}. 
Similar but slightly\n different objects appear in Ng's beautiful work on ribbon\n knots~\\cite{Ng:Ribbon}.\n}\n\n\\item $\\exp_{\\mathaccent\\cdot\\cup}$ means `exponential in the disjoint union sense'; that is,\n it is the formal-sum exponential of a linear combination of Chinese\n characters, with the product being the disjoint union product.\n\\end{myitemize}\n\nLet us explain why we believe the Wheels Conjecture\n(Conjecture~\\ref{UnknotConjecture}). Recall (\\cite{Bar-Natan:Vassiliev}) that\nthere is a parallelism between the space ${\\mathcal A}$ (and various variations\nthereof) and a certain part of the theory of Lie algebras. Specifically,\ngiven a metrized Lie algebra ${\\mathfrak g}$, there exists a commutative square\n(a refined version is in Theorem~\\ref{CommutativityTheorem} below)\n\\[ {\n \\def{\\mathcal A}{{\\mathcal A}}\n \\def{\\mathcal B}{{\\mathcal B}}\n \\def{{\\mathcal T}_\\frakg}{{{\\mathcal T}_{\\mathfrak g}}}\n \\def{{U}^\\frakg(\\frakg}){{{U}^{\\mathfrak g}({\\mathfrak g}})}\n \\def{{S}^\\frakg(\\frakg}){{{S}^{\\mathfrak g}({\\mathfrak g}})}\n \\def\\Udef{{\\left(\\parbox{2.4in}{\\small the ${\\mathfrak g}$-invariant part\n of the completed universal enveloping algebra of ${\\mathfrak g}$}\n \\right)}}\n \\def\\Sdef{{\\left(\\parbox{2.1in}{\\small the ${\\mathfrak g}$-invariant part\n of the completed symmetric algebra of ${\\mathfrak g}$}\n \\right)}}\n \\eepic{CommutativeSquare}{0.8}\n} \\]\nin which the left column is the above mentioned formal PBW isomorphism\n$\\chi$, and the right column is the symmetrization map ${\\beta_\\frakg}:S({\\mathfrak g})\\to\nU({\\mathfrak g})$, sending an unordered word of length $n$ to the average of the\n$n!$ ways of ordering its letters and reading them as a product in\n$U({\\mathfrak g})$. The map ${\\beta_\\frakg}$ is an isomorphism by the honest PBW theorem. 
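As a numerical aside, the values $b_2=1\/48$, $b_4=-1\/5760$, and $b_6=1\/362880$ of the modified Bernoulli numbers quoted after~\eqref{MBNDefinition} can be recovered by expanding $\frac{1}{2}\log(\sinh(x\/2)\/(x\/2))$ with exact rational arithmetic. The following sketch (ours, not part of the original text) does this with truncated power series:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncate all power series beyond x^(N-1)

def mul(p, q):
    # product of two truncated power series with rational coefficients
    r = [Fraction(0)] * N
    for i, a in enumerate(p):
        if a:
            for j, c in enumerate(q):
                if i + j < N:
                    r[i + j] += a * c
    return r

# f(x) = sinh(x/2)/(x/2) = sum_k x^(2k) / (4^k (2k+1)!)
f = [Fraction(0)] * N
for k in range(N // 2):
    f[2 * k] = Fraction(1, 4**k * factorial(2 * k + 1))

# log f = sum_{m>=1} (-1)^(m+1) u^m / m, where u = f - 1 starts at order 2
u = f[:]
u[0] -= 1
logf = [Fraction(0)] * N
power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # u^0
for m in range(1, N):
    power = mul(power, u)
    for i in range(N):
        logf[i] += Fraction((-1) ** (m + 1), m) * power[i]

# b_{2n} is the coefficient of x^(2n) in (1/2) log f
b = [c / 2 for c in logf]
print(b[2], b[4], b[6])  # → 1/48 -1/5760 1/362880
```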
The\nleft-to-right maps ${\\mathcal T}_{\\mathfrak g}$ are defined as in\n\\cite{Bar-Natan:Vassiliev} by contracting copies of the structure\nconstants tensor, one for each vertex of any given diagram, using the\nstandard invariant form $(\\cdot,\\cdot)$ on ${\\mathfrak g}$ (see citations in\nsection~\\ref{TheEdges} below). The maps ${\\mathcal T}_{\\mathfrak g}$ seem\nto `forget' some information (some high-degree elements on the left\nget mapped to 0 on the right no matter what the algebra ${\\mathfrak g}$ is,\nsee~\\cite{Vogel:Structures}), but at least up to degree 12 it is faithful\n(for some Lie algebras); see~\\cite{Kneissler:Twelve}.\n\n\\begin{theorem} \\lbl{UnknotTheorem} Conjecture~\\ref{UnknotConjecture} is\n``true on the level of semi-simple Lie algebras''. Namely,\n\\[ {\\mathcal T}_{\\mathfrak g}\\Omega = {\\mathcal T}_{\\mathfrak g}\\chi^{-1}Z(\\bigcirc). \\]\n\\end{theorem}\n\nWe now formulate our second conjecture. Let\n${\\mathcal B}'=\\operatorname{span}\\left\\{\\silenteepic{CCExample}{0.5}\\right\\}\/(AS,IHX)$\n\\lbl{Bdefinition} be the same as ${\\mathcal B}$, only dropping the connectivity\nrequirement (so that\nwe also allow\nconnected components that have no univalent vertices). The space ${\\mathcal B}'$\nhas two different products, and thus is an algebra in two different ways:\n\\begin{myitemize}\n\\item The disjoint union $C_1{\\mathaccent\\cdot\\cup} C_2$ of two Chinese characters $C_{1,2}$\n is again a Chinese character. The obvious bilinear extension of ${\\mathaccent\\cdot\\cup}$\n is a well defined product ${\\mathcal B}'\\times{\\mathcal B}'\\to{\\mathcal B}'$, which turns\n ${\\mathcal B}'$ into an algebra. 
For emphasis we will call this algebra ${\\calB'_{\\mathaccent\\cdot\\cup}}$.\n\\item ${\\mathcal B}'$ is isomorphic (as a vector space) to the space\n ${\\mathcal A}'=\\operatorname{span}\\left\\{\\silenteepic{CDExample}{0.5}\\right\\}\/(AS,IHX,STU)$\n of diagrams whose skeleton is a single oriented interval\n (like ${\\mathcal A}$, only that here we also\n allow non-connected diagrams). The isomorphism is the map\n $\\chi:{\\mathcal B}'\\to{\\mathcal A}'$ that maps a Chinese\n character with $k$ ``legs'' (univalent vertices) to the average\n of the $k!$ ways of arranging them along an oriented interval\n (in~\\cite{Bar-Natan:Vassiliev} the sum was used instead of the\n average). ${\\mathcal A}'$ has a well known ``juxtaposition'' product $\\times$,\n related to the ``connect sum'' operation on knots:\n \\[ \\silenteepic{AnotherCD}{0.5}\\times\\ \\silenteepic{CDExample}{0.5}=\n \\silenteepic{ProductCD}{0.5}.\n \\]\n The algebra structure on ${\\mathcal A}'$ defines another algebra structure on\n ${\\mathcal B}'$. 
For emphasis we will call this algebra ${\\calB'_\\times}$.\n\\end{myitemize}\n\nAs before, ${\\mathcal A}'$ is graded by half the number of trivalent\nvertices in a diagram, ${\\mathcal B}'$ is graded by half the total number of\nvertices in a diagram, and the isomorphism $\\chi$ as well as the two\nproducts respect these gradings.\n\n\\begin{definition} \\lbl{ChatDef} If $C$ is a Chinese character, let\n$\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$ be the operator defined by\n\\[ \\hat{C}(C')=\\begin{cases}\n 0 & \\text{if $C$ has more legs than $C'$,} \\\\\n \\parbox{2.6in}{\n the sum of all ways of gluing all the legs of $C$ to some (or all)\n legs of $C'$\n }\\quad & \\text{otherwise.}\n \\end{cases}\n\\]\nFor example,\n\\[ \\widehat{\\omega_4}(\\omega_2)=0; \\qquad\n \\widehat{\\omega_2}(\\omega_4)=\n 8\\eepic{SideGluing}{0.5}+4\\eepic{DiagonalGluing}{0.5}.\n\\]\nIf $C$ has $k$ legs and total degree $m$, then $\\hat{C}$ is an operator\nof degree $m-k$. By linear extension, we find that every $C\\in{\\mathcal B}'$\ndefines an operator $\\hat{C}:{\\mathcal B}'\\to{\\mathcal B}'$, and in fact, even infinite\nlinear combinations of Chinese characters with an {\\em increasing} number\nof legs define operators ${\\mathcal B}'\\to{\\mathcal B}'$.\n\\end{definition}\n\nAs $\\Omega$ is made of wheels, we call the action of the (degree\n$0$) operator $\\hat{\\Omega}$ \\lbl{Ohdefinition} ``wheeling''. As\n$\\Omega$ begins with $1$, the wheeling map is invertible. We argue\nbelow that $\\hat{\\Omega}$ is a diagrammatic analog of the Duflo\nisomorphism ${S}^{\\mathfrak g}({\\mathfrak g})\\to{S}^{\\mathfrak g}({\\mathfrak g})$\n(see~\\cite{Duflo:Resolubles} and see below). 
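The coefficients $8$ and $4$ in the example $\widehat{\omega_2}(\omega_4)$ of Definition~\ref{ChatDef} can be recovered by brute-force enumeration. The sketch below (ours; the labelling of the legs by corners of the $4$-gon is an arbitrary choice) glues the two distinguishable legs of $\omega_2$ to ordered pairs of distinct legs of $\omega_4$ and sorts the $4\cdot 3=12$ gluings by whether the two chosen legs sit at adjacent or at opposite corners:

```python
from itertools import permutations

# label the 4 legs of the wheel omega_4 by the corner 0..3 of its 4-gon
side = diagonal = 0
for i, j in permutations(range(4), 2):
    # (i, j): ordered pair of distinct legs of omega_4, receiving the
    # two legs of omega_2
    if (i - j) % 4 == 2:
        diagonal += 1  # opposite corners: the "diagonal" gluing
    else:
        side += 1      # adjacent corners: the "side" gluing
print(side, diagonal)  # → 8 4
```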
The Duflo isomorphism\nintertwines the two algebra structures that ${S}^{\mathfrak g}({\mathfrak g})$ has:\nthe structure it inherits from the symmetric algebra and the structure\nit inherits from ${U}^{\mathfrak g}({\mathfrak g})$ via the PBW isomorphism.\nOne may hope that $\hat{\Omega}$ has the parallel property:\n\n\begin{conjecture} \lbl{WheelingConjecture} (Wheeling\footnote{Conjectured\nindependently by Deligne~\cite{Deligne:Letter}.}) Wheeling intertwines\nthe two products on Chinese characters. More precisely, the map\n$\hat{\Omega}:{\calB'_{\mathaccent\cdot\cup}}\to{\calB'_\times}$ is an algebra isomorphism.\n\end{conjecture}\n\nThere are several good reasons to hope that\nConjecture~\ref{WheelingConjecture} is true. If it is true, one would\nbe able to use it along with Conjecture~\ref{UnknotConjecture} and\nknown properties of the Kontsevich integral (such as its behavior under the\noperations of change of framing, connected sum, and taking the parallel of\na component as in~\cite{LeMurakami:Parallel}) to get explicit formulas\nfor the Kontsevich integral of several other knots and links. Note\nthat change of framing and connect sum act on the Kontsevich integral\nmultiplicatively using the product in ${\mathcal A}$, but the conjectured formula\nwe have for the Kontsevich integral of the unknot is in ${\mathcal B}$. Using\nConjecture~\ref{WheelingConjecture} it should be possible to perform\nall operations in ${\mathcal B}$.\n\nPerhaps a more important reason is that in essence, ${\mathcal A}$ and ${\mathcal B}$\ncapture that part of the information about $U({\mathfrak g})$ and $S({\mathfrak g})$\nthat can be described entirely in terms of the bracket and the structure\nconstants. Thus a proof of Conjecture~\ref{WheelingConjecture} would yield an\nelementary proof of the intertwining property of the Duflo isomorphism,\nwhose current proofs use representation theory and are quite involved. 
We\nfeel that the knowledge missing to give an elementary proof of the\nintertwining property of the Duflo isomorphism is the same knowledge\nthat is missing for a proof of the Kashiwara-Vergne conjecture\n(\cite{KashiwaraVergne:CampbellHausdorff}).\n\n\begin{theorem} \lbl{WheelingTheorem} Conjecture~\ref{WheelingConjecture}\nis ``true on the level of semi-simple Lie algebras''. A precise statement is\nin Proposition~\ref{PreciseWheelingTheorem} and the remark following it.\n\end{theorem}\n\n\begin{remark} As semi-simple Lie algebras ``see'' all of the\nVassiliev theory at least up to degree~12 \cite{Bar-Natan:Vassiliev,\nKneissler:Twelve}, Theorems~\ref{UnknotTheorem} and~\ref{WheelingTheorem}\nimply Conjectures~\ref{UnknotConjecture} and~\ref{WheelingConjecture} up to that\ndegree. It should be noted that semi-simple Lie algebras do not ``see''\nthe whole Vassiliev theory at high degrees, see~\cite{Vogel:Structures}.\n\end{remark}\n\n\begin{remark} As the Duflo isomorphism has no known elementary proof, the\nLie algebra techniques used in this paper are unlikely to give full proofs\nof Conjectures~\ref{UnknotConjecture} and~\ref{WheelingConjecture}.\n\end{remark}\n\n\begin{remark} We've chosen to work over the complex numbers to allow for\nsome analytical arguments below. 
The rationality of the Kontsevich\nintegral~\\cite{LeMurakami:Universal} and the uniform classification of\nsemi-simple Lie algebras over fields of characteristic 0 implies that\nConjectures~\\ref{UnknotConjecture} and~\\ref{WheelingConjecture} and\nTheorems~\\ref{UnknotTheorem} and~\\ref{WheelingTheorem} are independent\nof the (characteristic 0) ground field.\n\\end{remark}\n\n\\subsection{The plan} Theorem~\\ref{UnknotTheorem} and\nTheorem~\\ref{WheelingTheorem} both follow from a delicate assembly of\nwidely known facts about Lie algebras and related objects; the main\nnovelty in this paper is the realization that these known facts can\nbe brought together and used to prove Theorems~\\ref{UnknotTheorem}\nand~\\ref{WheelingTheorem} and make Conjectures~\\ref{UnknotConjecture}\nand~\\ref{WheelingConjecture}. The facts we use about Lie-algebras amount to the\ncommutativity of a certain monstrous diagram. In Section~\\ref{diagram}\nbelow we will explain everything that appears in that diagram,\nprove its commutativity, and prove Theorem~\\ref{WheelingTheorem}. In\nSection~\\ref{proof} we will show how that commutativity implies\nTheorem~\\ref{UnknotTheorem} as well. We conclude this introductory\nsection with a picture of the monster itself:\n\n\\begin{theorem} \\lbl{CommutativityTheorem} (definitions and proof in\nSection~\\ref{diagram})\nThe following monster diagram is commutative:\n\\[ \\eepic{CommutativeDiagram}{0.8} \\]\n\\end{theorem}\n\n\\begin{remark} Our two conjectures ought to be related---one talks\nabout $\\Omega$, and another is about an operator $\\hat{\\Omega}$\nmade out of $\\Omega$, and the proofs of Theorems~\\ref{UnknotTheorem}\nand~\\ref{WheelingTheorem} both use the Duflo map (${D(j_\\frakg^{1\/2})}$ in the above\ndiagram). But looking more closely at the proofs below, the relationship\nseems to disappear. 
The proof of Theorem~\\ref{WheelingTheorem} uses\nonly the commutativity of the face labeled ${\\silenteepic{Cd}{0.5}}$, while the proof of\nTheorem~\\ref{UnknotTheorem} uses the commutativity of all faces but\n${\\silenteepic{Cd}{0.5}}$. No further relations between the conjectures are seen in the\nproofs of our theorems. We are still missing the deep relation that\nought to exist between `Wheels' and `Wheeling'. Why is it that the same\nstrange combination of Chinese characters $\\Omega$ plays a role in these\ntwo seemingly unrelated affairs?\n\\end{remark}\n\n\\subsection{Postscript} According to\nKontsevich~\\cite{Kontsevich:DeformationQuantization},\nConjecture~\\ref{WheelingConjecture} seems to follow from the results he proves\nin Section~8.3 of that paper, but a full proof of the conjecture is not\ngiven there. \\cite{LeThurston:Unknot} have shown that\nConjecture~\\ref{WheelingConjecture} implies\nConjecture~\\ref{UnknotConjecture}, but\nunfortunately their proof does not shed light on the fundamental\nrelationship that ought to exist between the two conjectures.\n\n\\subsection{Acknowledgement} Much of this work was done when the\nfour of us were visiting \\AA rhus, Denmark, for a special semester on\ngeometry and physics, in August 1995. We wish to thank the organizers,\nJ.~Dupont, H.~Pedersen, A.~Swann and especially J.~Andersen for their\nhospitality and for the stimulating atmosphere they created. We wish\nto thank the Institute for Advanced Studies for their hospitality,\nand P.~Deligne for listening to our thoughts and sharing his. His\nletter~\\cite{Deligne:Letter} introduced us to the Duflo isomorphism;\ninitially our proofs relied more heavily on the Kirillov character\nformula. 
A.~Others made some very valuable suggestions; we thank them\nand also thank J.~Birman, A.~Haviv, A.~Joseph, G.~Perets, J.~D.~Rogawski,\nM.~Vergne and S.~Willerton for additional remarks and suggestions.\n\n\n\\section{Proof of Theorem~\\ref{UnknotTheorem}} \\lbl{proof}\n\nWe prove the slightly stronger equality\n\\begin{equation} \\lbl{PreciseUnknotEquation}\n {\\mathcal T}^\\hbar_{\\mathfrak g}\\Omega\n = {\\mathcal T}^\\hbar_{\\mathfrak g}\\chi^{-1}Z(\\bigcirc).\n\\end{equation}\n\n\\begin{proof} We compute the right hand side\nof~\\eqref{PreciseUnknotEquation} by computing\n${S_\\frakg}{\\iota_\\frakg}^{-1}{\\psi_\\frakg}{RT_\\frakg}(\\bigcirc)$ and using the commutativity of the monster\ndiagram. It is known (see\ne.g.~\\cite[example~11.3.10]{ChariPressley:QuantumGroups}) that if\n$\\lambda-\\rho\\in{\\mathfrak h}^\\star$ is the highest weight of some irreducible\nrepresentation $R_{\\lambda-\\rho}$ of ${\\mathfrak g}$, then\n\\[ ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda)\n = \\frac{1}{\\dim R_{\\lambda-\\rho}} \\prod_{\\alpha\\in\\Delta_+} \\frac\n {\\sinh\\hbar(\\lambda,\\alpha)\/2}\n {\\sinh\\hbar(\\rho,\\alpha)\/2},\n\\]\nwhere $\\Delta_+$ is the set of positive roots of ${\\mathfrak g}$ and\n$(\\cdot,\\cdot)$ is the standard invariant bilinear form on ${\\mathfrak g}$.\nBy the Weyl dimension formula and some minor arithmetic, we get (see\nalso~\\cite[section~7]{LeMurakami:Parallel})\n\\begin{equation} \\lbl{ProductOverRoots}\n ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda) = \\prod_{\\alpha\\in\\Delta_+}\n \\frac\n {\\hbar(\\rho,\\alpha)\/2}\n {\\sinh\\hbar(\\rho,\\alpha)\/2}\n \\cdot\\frac\n {\\sinh\\hbar(\\lambda,\\alpha)\/2}\n {\\hbar(\\lambda,\\alpha)\/2}\n .\n\\end{equation}\nWe can identify ${\\mathfrak g}$ and ${\\mathfrak g}^\\star$ using the form $(\\cdot,\\cdot)$,\nand then expressions like `$\\operatorname{ad}\\lambda$' make sense. 
By definition,\nif ${\\mathfrak g}_\\alpha$ is the weight space of the root $\\alpha$, then\n$\\operatorname{ad}\\lambda$ acts as multiplication by $(\\lambda,\\alpha)$\non ${\\mathfrak g}_\\alpha$, while acting trivially on ${\\mathfrak h}$. From this\nand~\\eqref{ProductOverRoots} we get\n\\[ ({\\psi_\\frakg}{RT_\\frakg}(\\bigcirc))(\\lambda) =\n \\det\\left(\\frac{\\operatorname{ad}\\hbar\\rho\/2}{\\sinh\\operatorname{ad}\\hbar\\rho\/2}\\right)^{1\/2}\n \\cdot\n \\det\\left(\\frac{\\sinh\\operatorname{ad}\\hbar\\lambda\/2}{\\operatorname{ad}\\hbar\\lambda\/2}\\right)^{1\/2} =\n j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\cdot j_{\\mathfrak g}^{1\/2}(\\hbar\\lambda).\n\\]\n\nThe above expression (call it $Z(\\lambda)$) makes sense\nfor all $\\lambda\\in{\\mathfrak g}^\\star$, and hence it is also\n${\\iota_\\frakg}^{-1}{\\psi_\\frakg}{RT_\\frakg}(\\bigcirc)$. So we're only left with computing ${S_\\frakg}\nZ(\\lambda)$:\n\\[ {S_\\frakg} Z(\\lambda)=\\int_{r\\in M_{i\\rho}}dr\\,Z(\\lambda-ir)=\n j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\n \\int_{r\\in M_{i\\rho}}dr\\,j_{\\mathfrak g}^{1\/2}(\\hbar(\\lambda-ir)).\n\\]\nBy~\\eqref{FourierTransform}, this is\n\\[ j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\int_{r\\in M_{i\\rho}}dr\\int_{r'\\in M_{i\\rho}}dr'\\,\n e^{i\\hbar( r',\\lambda-ir)}\n = j_{\\mathfrak g}^{-1\/2}(\\hbar\\rho)\\int_{r'\\in M_{i\\rho}}dr'\\,\n e^{i\\hbar( r',\\lambda)}\\int_{r\\in M_{i\\rho}}dr\\,\n e^{i\\hbar( -ir',r)}.\n\\]\nUsing~\\eqref{FourierTransform} again, we find that the inner-most integral\nis equal to $j_{\\mathfrak g}^{1\/2}(\\hbar\\rho)$ independently of $r'$, and hence\n\\[ {S_\\frakg} Z(\\lambda)\n = \\int_{r'\\in M_{i\\rho}}dr'\\,e^{i\\hbar( r',\\lambda)},\n\\]\nand using~\\eqref{FourierTransform} one last time we find that\n\\begin{equation} \\lbl{SgZ}\n {S_\\frakg} Z(\\lambda) = j_{\\mathfrak g}^{1\/2}(\\hbar\\lambda).\n\\end{equation}\n\nThe left hand side of~\\eqref{PreciseUnknotEquation} was already computed\n(up to duality and powers 
of $\\hbar$) in Lemma~\\ref{TgOmega}. Undoing the\neffect of $\\kappa^\\hbar$ there, we get the same answer as in~\\eqref{SgZ}.\n\\def$\\hfill\\smily${$\\hfill\\smily$}\n\\end{proof}\n\n\\[ \\eepic{WheelsWheeling}{0.25} \\]\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Motivation}\n\nExperimental efforts to develop useful solid state quantum information processors have encountered a host of practical problems that have substantially limited progress. While the desire to reduce noise in solid state qubits appears to be the key factor that drives much of the recent work in this field, it must be acknowledged that there are formidable challenges related to architecture, circuit density, fabrication variation, calibration and control that also deserve attention. For example, a qubit that is inherently exponentially sensitive to fabrication variations with no recourse for in-situ correction holds little promise in any large scale architecture, even with the best of modern fabrication facilities. Thus, a qubit designed in the absence of information concerning its ultimate use in a larger scale system may prove to be of little utility in the future. In what follows, we present an experimental demonstration of a novel superconducting flux qubit \\cite{fluxqubit} that has been specifically designed to address several issues that pertain to the implementation of a large scale quantum information processor. While noise is not the central focus of this article, we nonetheless present experimental evidence that, despite its physical size and relative complexity, the observed flux noise in this flux qubit is comparable to the quietest such devices reported upon in the literature to date. \n\nIt has been well established that rf-SQUIDs can be used as qubits given an appropriate choice of device parameters. 
Such devices can be operated as a flux biased phase qubit using two intrawell energy levels \\cite{FluxBiasedPhaseQubit} or as a flux qubit using any pair of interwell levels \\cite{fluxqubit}. This article will focus upon an experimental demonstration of a novel rf-SQUID flux qubit that can be tuned in-situ using solely {\\it static} flux biases to compensate for fabrication variations in device parameters, both within single qubits and between multiple qubits. It is stressed that this latter issue is of critical importance in the development of useful large scale quantum information processors that could foreseeably involve thousands of qubits \\cite{DiVincenzo}. Note that in this regard, the ion trap approach to building a quantum information processor has a considerable advantage in that the qubits are intrinsically identical, albeit the challenge is then to characterize and control the trapping potential with high fidelity \\cite{Wineland}. While our research group's express interest is in the development of a large scale superconducting adiabatic quantum optimization [AQO] processor \\cite{AQC,Santoro}, it should be noted that many of the practical problems confronted herein are also of concern to those interested in implementing gate model quantum computation [GMQC] processors \\cite{GMQC} using superconducting technologies.\n\nThis article is organized as follows: In Section II, a theoretical argument is presented to justify the rf-SQUID design that has been implemented. It is shown that this design is robust against fabrication variations in Josephson junction critical current. Second, it is argued why it is necessary to include a tunable inductance in the flux qubit to account for differences in inductance between qubits in a multi-qubit architecture and to compensate for changes in qubit inductance during operation. Thereafter, the focus of the article shifts towards an experimental demonstration of the rf-SQUID flux qubit. 
The architecture of the experimental device and its operation are discussed in Section III and then a series of experiments to characterize the rf-SQUID and to highlight its control are presented in Section IV. Section V contains measurements of properties that indicate that this more complex rf-SQUID is indeed a flux qubit. Flux and critical current noise measurements and a formula for converting the measured flux noise spectral density into a free induction (Ramsey) decay time are presented in Section VI. A summary of key conclusions is provided in Section VII. Detailed calculations of rf-SQUID Hamiltonians have been placed in the appendices.\n\n\\section{rf-SQUID Flux Qubit Design}\n\nThe behavior of most superconducting devices is governed by three types of macroscopic parameters: the critical currents of any Josephson junctions, the net capacitance across the junctions and the inductance of the superconducting wiring. The Hamiltonian for many of these devices can generically be written as\n\\begin{equation}\n\\label{eqn:Hphase}\n{\\cal H}=\\sum_i\\left[\\frac{Q_i^2}{2C_i}-E_{Ji}\\cos(\\varphi_i)\\right]+\\sum_{n}U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2} \\; ,\n\\end{equation}\n\n\\noindent where $C_i$, $E_{Ji}=I_i\\Phi_0\/2\\pi$ and $I_i$ denote the capacitance, Josephson energy and critical current of Josephson junction $i$, respectively. The terms in the first sum are readily recognized as being the Hamiltonians of the individual junctions for which the quantum mechanical phase across the junction $\\varphi_i$ and the charge collected on the junction $Q_i$ obey the commutation relation $[\\Phi_0\\varphi_i\/2\\pi,Q_j]=i\\hbar\\delta_{ij}$. The index $n$ in the second summation is over closed inductive loops. External fluxes threading each closed loop, $\\Phi_n^x$, have been represented as phases $\\varphi_n^x\\equiv 2\\pi\\Phi_n^x\/\\Phi_0$. 
The quantum mechanical phase drop experienced by the superconducting condensate circulating around any closed loop is denoted as $\\varphi_n$. The overall potential energy scale factor for each closed loop is given by $U_n\\equiv(\\Phi_0\/2\\pi)^2\/L_n$. Here, $L_n$ can be either a geometric inductance from wiring or Josephson inductance from large junctions \\cite{vanDuzer}. Hamiltonian (\\ref{eqn:Hphase}) will be used as the progenitor for all device Hamiltonians that follow.\n\n\\subsection{Compound-Compound Josephson Junction Structure}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{RFSSummary.pdf}\n\\caption{\\label{fig:rfssummary} (color online) a) A single junction rf-SQUID qubit. b) Compound Josephson Junction (CJJ) rf-SQUID qubit. c) Compound-Compound Josephson Junction (CCJJ) rf-SQUID qubit. Junction critical currents $I_i$ and junction phases $\\varphi_i$ ($1\\leq i \\leq 4$) as noted. Net device phases are denoted as $\\varphi_{\\alpha}$, where $\\alpha\\in\\left(\\ell ,r,q\\right)$. External fluxes, $\\Phi_n^x$, are represented as phases $\\varphi_{n}^x\\equiv2\\pi\\Phi_n^x\/\\Phi_0$, where $n\\in\\left(L,R,\\text{cjj},\\text{ccjj},q\\right)$. Inductance of the rf-SQUID body, CJJ loop and CCJJ loop are denoted as $L_{\\text{body}}$, $L_{\\text{cjj}}$ and $L_{\\text{ccjj}}$, respectively.}\n\\end{figure}\n\nA sequence of rf-SQUID architectures is depicted in Fig.~\\ref{fig:rfssummary}. The most primitive version of such a device is depicted in Fig.~\\ref{fig:rfssummary}a, and more complex variants in Figs.~\\ref{fig:rfssummary}b and \\ref{fig:rfssummary}c. For the single junction rf-SQUID (Fig.~\\ref{fig:rfssummary}a), the phase across the junction can be equated to the phase drop across the body of the rf-SQUID: $\\varphi_1=\\varphi_q$. 
The Hamiltonian for this device can then be written as\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:1JHeff}\n{\\cal H}=\\frac{Q_q^2}{2C_q}+V(\\varphi_q) \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{equation}\n\\label{eqn:1JV}\nV(\\varphi_q)=U_q\\Big\\{\\frac{\\left(\\varphi_q-\\varphi_q^x\\right)^2}{2}-\\beta\\cos\\left(\\varphi_q\\right)\\Big\\} \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{equation}\n\\label{eqn:1Jbeta}\n\\beta=\\frac{2\\pi L_q I_q^c}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent with the qubit inductance $L_q\\equiv L_{\\text{body}}$, qubit capacitance $C_q\\equiv C_1$ and qubit critical current $I_q^c\\equiv I_1$ in this particular case. If this device has been designed such that $\\beta>1$ and is flux biased such that $\\varphi_q^x\\approx\\pi$, then the potential energy $V(\\varphi_q)$ will be bistable. With increasing $\\beta$ an appreciable potential energy barrier forms between the two local minima of $V(\\varphi_q)$, through which the two lowest lying states of the rf-SQUID may couple via quantum tunneling. It is these two lowest lying states, which are separated from all other rf-SQUID states by an energy of order of the rf-SQUID plasma energy $\\hbar\\omega_p\\equiv\\hbar\/\\sqrt{L_qC_1}$, that form the basis of a qubit. One can write an effective low energy version of Hamiltonian (\\ref{eqn:1JHeff}) as \\cite{Leggett}\n\\begin{equation}\n\\label{eqn:Hqubit}\n{\\cal H}_{q}=-{\\frac{1}{2}}\\left[\\epsilon\\sigma_z+\\Delta_q\\sigma_x\\right] \\;\\; ,\n\\end{equation}\n\n\\noindent where $\\epsilon=2\\left|I_q^p\\right|\\left(\\Phi_q^x-\\Phi_0\/2\\right)$, $\\left|I_q^p\\right|$ is the magnitude of the persistent current that flows about the inductive $q$ loop when the device is biased hard [$\\epsilon\\gg\\Delta_q$] to one side and $\\Delta_q$ represents the tunneling energy between the otherwise degenerate counter-circulating persistent current states at $\\Phi^x_q=\\Phi_0\/2$. 
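As a numerical sanity check on the discussion above, the following sketch (not part of the original text; $U_q$ is set to unity and $\beta=1.5$ is an illustrative value) evaluates the potential of Eq.~(\ref{eqn:1JV}) and the gap of the effective two-level Hamiltonian (\ref{eqn:Hqubit}):

```python
import math

def potential(phi, beta, phi_x=math.pi):
    """Single-junction rf-SQUID potential of Eq. (eqn:1JV), in units of U_q."""
    return (phi - phi_x) ** 2 / 2.0 - beta * math.cos(phi)

def curvature_at_pi(beta):
    """V''(pi)/U_q = 1 + beta*cos(pi) = 1 - beta; negative => barrier at phi = pi."""
    return 1.0 - beta

def qubit_gap(eps, delta):
    """Energy gap of the two-level Hamiltonian of Eq. (eqn:Hqubit): sqrt(eps^2 + Delta^2)."""
    return math.hypot(eps, delta)

# beta > 1 at phi_x = pi: a barrier separates two degenerate wells (flux qubit regime)
assert curvature_at_pi(1.5) < 0
# beta < 1: a single well, no bistability
assert curvature_at_pi(0.5) > 0

# Finite barrier height for beta = 1.5, found by a coarse scan over phi in [pi-2, pi+2]
beta = 1.5
wells = min(potential(math.pi + 1e-3 * k, beta) for k in range(-2000, 2001))
assert potential(math.pi, beta) - wells > 0

# At degeneracy (eps = 0) the splitting between |g> and |e> is exactly Delta_q
assert abs(qubit_gap(0.0, 0.25) - 0.25) < 1e-12
```

The sign of $V''(\varphi_q=\pi)\propto 1-\beta$ reproduces the statement that a barrier, and hence a bistable flux qubit, requires $\beta>1$.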
\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QubitComparison_ver2.pdf}\n\\caption{\\label{fig:qubitcomparison} (color online) Depiction of the two lowest lying states of an rf-SQUID at degeneracy ($\\epsilon=0$) with nomenclature for the energy basis ($\\ket{g}$,$\\ket{e}$) and flux basis ($\\ket{\\downarrow}$,$\\ket{\\uparrow}$) as indicated.}\n\\end{figure}\n\nA depiction of the one-dimensional potential energy and the two lowest energy states of an rf-SQUID at degeneracy ($\\Phi_q^x=\\Phi_0\/2$) for nominal device parameters is shown in Fig.~\\ref{fig:qubitcomparison}. In this diagram, the ground and first excited state are denoted by $\\ket{g}$ and $\\ket{e}$, respectively. These two energy levels constitute the energy eigenbasis of a flux qubit. An alternate representation of these states, which is frequently referred to as either the flux or persistent current basis, can be formed by taking the symmetric and antisymmetric combinations of the energy eigenstates: $\\ket{\\downarrow}=\\left(\\ket{g}+\\ket{e}\\right)\/\\sqrt{2}$ and $\\ket{\\uparrow}=\\left(\\ket{g}-\\ket{e}\\right)\/\\sqrt{2}$, which yield two roughly gaussian shaped wavefunctions that are centered about each of the wells shown in Fig.~\\ref{fig:qubitcomparison}. The magnitude of the persistent current used in Eq.~(\\ref{eqn:Hqubit}) is then defined by $\\left|I_q^p\\right|\\equiv\\left|\\bra{\\uparrow}\\left(\\Phi_q-\\Phi_0\/2\\right)\/L_q\\ket{\\uparrow}\\right|$. The tunneling energy is given by $\\Delta_q=\\bra{e}{\\cal H}_q\\ket{e}-\\bra{g}{\\cal H}_q\\ket{g}$. \n\nThe aforementioned dual representation of the states of a flux qubit allows two distinct modes of operation of the flux qubit as a binary logical element with a logical basis defined by the states $\\ket{0}$ and $\\ket{1}$. In the first mode, the logical basis is mapped onto the energy eigenbasis: $\\ket{0}\\rightarrow\\ket{g}$ and $\\ket{1}\\rightarrow\\ket{e}$. 
This mode is useful for optimizing the coherence times of flux qubits as the dispersion of Hamiltonian (\\ref{eqn:Hqubit}) is flat as a function of $\\Phi_q^x$ to first order for $\\epsilon\\approx 0$, thus providing some protection from the effects of low frequency flux noise \\cite{optimalpoint}. However, this is not a convenient mode of operation for implementing interactions between flux qubits \\cite{parametriccoupling1,parametriccoupling2}. In the second mode, the logical basis is mapped onto the persistent current basis: $\\ket{0}\\rightarrow\\ket{\\downarrow}$ and $\\ket{1}\\rightarrow\\ket{\\uparrow}$. This mode of operation facilitates the implementation of inter-qubit interactions via inductive couplings, but does so at the expense of coherence times. GMQC schemes exist that attempt to leverage the benefits of both of the above modes of operation \\cite{IBM,Oliver,NiftyItalianPaper}. On the other hand, those interested in implementing AQO strictly use the second mode of operation cited above. This, very naturally, leads to some interesting properties: First and foremost, in the coherent regime at $\\epsilon=0$, the groundstate maps onto $\\ket{g}=\\left(\\ket{0}+\\ket{1}\\right)\/\\sqrt{2}$, which implies that it is a superposition state with a fixed phase between components in the logical basis. Second, the logical basis is not coincident with the energy eigenbasis, except in the extreme limit $\\epsilon\/\\Delta_q\\gg 1$. As such, the qubit should not be viewed as an otherwise free spin-1\/2 in a magnetic field, rather it maps onto an Ising spin subjected to a magnetic field with both a longitudinal ($B_z\\rightarrow\\epsilon$) and a transverse ($B_x\\rightarrow\\Delta_q$) component \\cite{Ising}. 
In this case, it is the competition between $\\epsilon$ and $\\Delta_q$ which dictates the relative amplitudes of $\\ket{\\downarrow}$ and $\\ket{\\uparrow}$ in the groundstate wavefunction $\\ket{g}$, thereby enabling logical operations that make {\\it no} explicit use of the excited state $\\ket{e}$. This latter mode of operation of the flux qubit has connections to the fields of quantum magnetism \\cite{Anderson} and optimization theory \\cite{Kirkpatrick}. Interestingly, systems of coupled flux qubits that are operated in this mode bear considerable resemblance to Feynman's original vision of how to build a quantum computer \\cite{Feynman}.\n\nWhile much seminal work has been done on single junction and the related 3-Josephson junction rf-SQUID flux qubit\\cite{3JJfluxqubits,MooijMore3JJFluxQubits,MooijSuperposition,MooijCoherentDynamics,MooijCoupledSpectroscopy,OliverMachZehnder,OliverLandauZener,OliverAmplitudeSpectroscopy,ClarkeQubits,IPHT4Q,1OverFFluxQubit1,1OverFFluxQubit2}, it has been recognized that such devices would be impractical in a large scale quantum information processor as their properties are exceptionally sensitive to fabrication variations. In particular, in the regime $E_{J1}\\gg\\hbar\\omega_p$, $\\Delta_q\\propto\\exp(-\\hbar\\omega_p\/E_{J1})$. Thus, it would be unrealistic to expect a large scale processor involving a multitude of such devices to yield from even the best superconducting fabrication facility. Moreover, implementation of AQO requires the ability to actively tune $\\Delta_q$ from being the dominant energy scale in the qubit to being essentially negligible during the course of a computation. Thus the single junction rf-SQUID flux qubit is of limited practical utility and has passed out of favor as a prototype qubit.\n\nThe next step in the evolution of the single junction flux qubit and related variants was the compound Josephson junction (CJJ) rf-SQUID, as depicted in Fig.~\\ref{fig:rfssummary}b. 
This device was first reported upon by Han, Lapointe and Lukens \\cite{CJJ} and was the first type of flux qubit to display signatures of quantum superposition of macroscopic states \\cite{LukensSuperposition}. The CJJ rf-SQUID has been used by other research groups\\cite{ItaliansCJJ,NiftyItalianPaper,MoreNiftyItalianPaper} and a related 4-Josephson junction device has been proposed \\cite{3JJfluxqubits,MooijMore3JJFluxQubits}. The CJJ rf-SQUID flux qubit and related variants have reappeared in a gradiometric configuration in more recent history \\cite{HOMRT,IBM,Delft}. Here, the single junction of Fig.~\\ref{fig:rfssummary}a has been replaced by a flux biased dc-SQUID of inductance $L_{\\text{cjj}}$ that allows one to tune the critical current of the rf-SQUID in-situ. Let the applied flux threading this structure be denoted by $\\Phi^x_{\\text{cjj}}$. It is shown in Appendix A that the Hamiltonian for this system can be written as\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:2JHeff}\n{\\cal H}=\\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2}\\right]-U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n\\end{equation}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{cjj}\\right\\}$, $C_q\\equiv C_1+C_2$, $1\/C_{\\text{cjj}}\\equiv 1\/C_1+1\/C_2$ and $L_q\\equiv L_{\\text{body}}+L_{\\text{cjj}}\/4$. 
The 2-dimensional potential energy in Hamiltonian (\\ref{eqn:2JHeff}) is characterized by\n\\begin{equation}\n\\label{eqn:2JBeff}\n\\beta_{\\text{eff}}=\\beta_+\\cos\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\sqrt{1+\\left[\\frac{\\beta_-}{\\beta_+}\\tan(\\varphi_{\\text{cjj}}\/2)\\right]^2} \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:2JOffset}\n\\varphi_q^0\\equiv 2\\pi\\frac{\\Phi_q^0}{\\Phi_0} =-\\arctan\\left(\\frac{\\beta_-}{\\beta_+}\\tan\\left(\\varphi_{\\text{cjj}}\/2\\right)\\right) \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:2Jbetapm}\n\\beta_{\\pm}\\equiv 2\\pi L_q\\left(I_{1}\\pm I_{2}\\right)\/\\Phi_0 \\; .\n\\end{equation}\n\\end{subequations}\n\n\\noindent Note that if $\\cos\\left(\\varphi_{\\text{cjj}}\/2\\right)<0$, then $\\beta_{\\text{eff}}<0$ in Hamiltonian (\\ref{eqn:2JHeff}). This feature provides a natural means of shifting the qubit degeneracy point from $\\varphi_q^x=\\pi$, as in the single junction rf-SQUID case, to $\\varphi_q^x\\approx 0$. It has been assumed in all that follows that this $\\pi$-shifted mode of operation of the CCJ rf-SQUID has been invoked.\n\nHamiltonian (\\ref{eqn:2JHeff}) is similar to that of a single junction rf-SQUID modulo the presence of a $\\varphi_{\\text{cjj}}$-dependent tunnel barrier through $\\beta_{\\text{eff}}$ and an effective critical current $I_q^c\\equiv I_1+I_2$.\nFor $L_{\\text{cjj}}\/L_q\\ll 1$ it is reasonable to assume that $\\varphi_{\\text{cjj}}\\approx 2\\pi\\Phi^x_{\\text{cjj}}\/\\Phi_0$. Consequently, the CJJ rf-SQUID facilitates in-situ tuning of the tunneling energy through $\\Phi^x_{\\text{cjj}}$. While this is clearly desirable, one does pay for the additional flexibility by adding more complexity to the rf-SQUID design and thus more potential room for fabrication variations. The minimum achievable barrier height is ultimately limited by any so called {\\it junction asymmetry} which leads to finite $\\beta_{-}$. 
In practice, for $\\beta_-\/\\beta_+=(I_{1}-I_{2})\/(I_{1}+I_{2})\\lesssim0.05$, this effect is of little concern. However, a more insidious effect of junction asymmetry can be seen via the change of variables $\\varphi_q-\\varphi_q^0\\rightarrow\\varphi_q$ in Eq.~(\\ref{eqn:2JHeff}), namely an apparent $\\Phi^x_{\\text{cjj}}$-dependent flux offset: $\\Phi^x_q\\rightarrow\\Phi^x_q-\\Phi_q^0(\\Phi^x_{\\text{cjj}})$. If the purpose of the CJJ is to simply allow the experimentalist to target a particular $\\Delta_q$, then the presence of $\\Phi_q^0(\\Phi^x_{\\text{cjj}})$ can be readily compensated via the application of a static flux offset. On the other hand, any mode of operation that explicitly requires altering $\\Delta_q$ during the course of a quantum computation \\cite{IBM,Oliver,Kaminsky,Aharonov,NiftyItalianPaper,MoreNiftyItalianPaper} would also require active compensation for what amounts to a nonlinear crosstalk from $\\Phi^x_{\\text{cjj}}$ to $\\Phi^x_q$. While it may be possible to approximate this effect as a linear crosstalk over a small range of $\\Phi^x_{\\text{cjj}}$ if the junction asymmetry is small, one would nonetheless need to implement precise {\\it time-dependent} flux bias compensation to utilize the CJJ rf-SQUID as a flux qubit in any quantum computation scheme. While this may be feasible in laboratory scale systems, it is by no means desirable nor practical on a large scale quantum information processor.\n\nA second problem with the CJJ rf-SQUID flux qubit is that one cannot homogenize the qubit parameters $\\left|I_q^p\\right|$ and $\\Delta_q$ between a multitude of such devices that possess different $\\beta_{\\pm}$ over a broad range of $\\Phi^x_{\\text{cjj}}$. While one can accomplish this task to a limited degree in a perturbative manner about carefully chosen CJJ biases for each qubit \\cite{synchronization}, the equivalence of $\\left|I_q^p\\right|$ and $\\Delta_q$ between those qubits will be approximate at best. 
Therefore, the CJJ rf-SQUID does not provide a convenient means of accommodating fabrication variations between multiple flux qubits in a large scale processor.\n\nGiven that the CJJ rf-SQUID provides additional flexibility at a cost, it is by no means obvious that one can design a better rf-SQUID flux qubit by adding even more junctions. Specifically, it is desirable to have a device whose imperfections can be mitigated purely by the application of {\\it time-independent} compensation signals. The novel rf-SQUID topology shown in Fig.~\\ref{fig:rfssummary}c, hereafter referred to as the compound-compound Josephson junction (CCJJ) rf-SQUID, satisfies this latter constraint. Here, each junction of the CJJ in Fig.~\\ref{fig:rfssummary}b has been replaced by a dc-SQUID, which will be referred to as left ($L$) and right ($R$) minor loops, and will be subjected to external flux biases $\\Phi_L^x$ and $\\Phi_R^x$, respectively. The role of the CJJ loop in Fig.~\\ref{fig:rfssummary}b is now played by the CCJJ loop of inductance $L_{\\text{ccjj}}$ which will be subjected to an external flux bias $\\Phi^x_{\\text{ccjj}}$. It is shown in Appendix B that if one chooses {\\it static} values of $\\Phi_L^x$ and $\\Phi_R^x$ such that the net critical currents of the minor loops are equal, then it can be described by an effective two-dimensional Hamiltonian of the form\n\\begin{subequations}\n\\begin{equation}\n\\label{eqn:4JHeff}\n{\\cal H}=\\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{\\left(\\varphi_n-\\varphi_n^x\\right)^2}{2}\\right]-U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n\\end{equation}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{ccjj}\\right\\}$, $C_q\\equiv C_1+C_2+C_3+C_4$, $1\/C_{\\text{ccjj}}\\equiv 1\/(C_1+C_2)+1\/(C_3+C_4)$ and $L_q\\equiv L_{\\text{body}}+L_{\\text{ccjj}}\/4$. 
The effective 2-dimensional potential energy in Hamiltonian (\\ref{eqn:4JHeff}) is characterized by\n\\begin{equation}\n\\label{eqn:4JBeffbalanced}\n\\beta_{\\text{eff}}=\\beta_+(\\Phi^x_{L},\\Phi^x_{R})\\cos\\left(\\frac{\\varphi_{\\text{ccjj}}-\\varphi^0_{\\text{ccjj}}}{2}\\right) \\;\\; ,\n\\end{equation}\n\n\\noindent where $\\beta_+(\\Phi^x_{L},\\Phi^x_{R})=2\\pi L_q I_q^c(\\Phi^x_{L},\\Phi^x_{R})\/\\Phi_0$ with \n\\begin{displaymath}\nI_q^c(\\Phi^x_{L},\\Phi^x_{R})\\equiv (I_1+I_2)\\cos\\left(\\frac{\\pi\\Phi^x_{L}}{\\Phi_0}\\right)+(I_3+I_4)\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right) \\; . \n\\end{displaymath}\n\n\\noindent Given an appropriate choice of $\\Phi^x_{L}$ and $\\Phi^x_{R}$, the $q$ and ccjj loops will possess apparent flux offsets of the form\n\\begin{equation}\n\\label{eqn:qoffset}\n\\Phi_q^0=\\frac{\\Phi_0\\varphi_q^0}{2\\pi}=\\frac{\\Phi_{L}^0+\\Phi_{R}^0}{2}\\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:ccjjoffset}\n\\Phi^0_{\\text{ccjj}}=\\frac{\\Phi_0\\varphi^0_{\\text{ccjj}}}{2\\pi}=\\Phi_{L}^0-\\Phi_{R}^0\\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where $\\Phi_{L(R)}^0$ is given by Eq.~(\\ref{eqn:4JMinorOffset}), which is purely a function of $\\Phi^x_{L(R)}$ and junction critical currents. As such, the apparent flux offsets are {\\it independent} of $\\Phi^x_{\\text{ccjj}}$. Under such conditions, we deem the CCJJ to be {\\it balanced}. Given that the intended mode of operation is to hold $\\Phi_L^x$ and $\\Phi_R^x$ constant, then the offset phases $\\varphi_L^0$ and $\\varphi_R^0$ will also be constant. The result is that Hamiltonian (\\ref{eqn:4JHeff}) for the CCJJ rf-SQUID becomes homologous to that of an ideal CJJ rf-SQUID [$\\beta_-=0$ in Eqs.~(\\ref{eqn:2JBeff}) and (\\ref{eqn:2JOffset})] with apparent {\\it static} flux offsets. 
Such static offsets can readily be calibrated and compensated for in-situ using either analog control lines or on-chip programmable flux sources \\cite{PCC}. For typical device parameters and junction variability on the order of a few percent, these offsets will be $\\sim 1\\rightarrow 10\\,$m$\\Phi_0$. Equations \\ref{eqn:4JHeff}-\\ref{eqn:ccjjoffset} with $\\Phi_q^0=\\Phi^0_{\\text{ccjj}}=0$ will be referred to hereafter as the ideal CCJJ rf-SQUID model.\n\nThe second advantage of the CCJJ rf-SQUID is that one can readily accommodate variations in critical current between multiple flux qubits. Note in Eq.~(\\ref{eqn:4JBeffbalanced}) that the maximum height of the tunnel barrier is governed by $\\beta_+(\\Phi^x_{L},\\Phi^x_{R})\\equiv\\beta_L(\\Phi^x_{L})+\\beta_R(\\Phi^x_{R})$, where $\\beta_{L(R)}$ is given by Eq.~(\\ref{eqn:4JMinorOffset}). One is free to choose any pair of $(\\Phi^x_{L},\\Phi^x_{R})$ such that $\\beta_L(\\Phi^x_{L})=\\beta_R(\\Phi^x_{R})$, as dictated by Eq.~(\\ref{eqn:balancedapprox}). Consequently, $\\beta_+=2\\beta_R(\\Phi^x_{R})$ in Eq.~(\\ref{eqn:4JBeffbalanced}). One can then choose $\\Phi^x_{R}$, which then dictates $\\Phi^x_{L}$, so as to homogenize $\\beta_+$ between multiple flux qubits. The result is a set of nominally uniform flux qubits where the particular choice of $(\\Phi^x_{L},\\Phi^x_{R})$ for each qubit merely results in unique static flux offsets $\\Phi^0_q$ and $\\Phi^0_{\\text{ccjj}}$ for each device.\n\nTo summarize up to this point, the CCJJ rf-SQUID is robust against Josephson junction fabrication variations both within an individual rf-SQUID and between a plurality of such devices. 
The variations can be effectively tuned out purely through the application of {\\it static} flux biases, which is of considerable advantage when envisioning the implementation of large scale quantum information processors that use flux qubits.\n\n\\subsection{$L$-tuner}\n\nThe purpose of the CCJJ structure was to provide a means of coming to terms with fabrication variations in Josephson junctions both within individual flux qubits and between sets of such devices. However, junctions are not the only key parameter that may vary between devices, nor are fabrication variations responsible for all of the potential variation. In particular, it has been experimentally demonstrated that the inductance of a qubit $L_q$ that is connected to other qubits via tunable mutual inductances is a function of the coupler configuration \\cite{cjjcoupler}. Let the bare inductance of the qubit in the presence of no couplers be represented by $L_q^0$ and the mutual inductance between the qubit and coupler $i$ be represented by $M_{\\text{co},i}$. If the coupler possesses a first order susceptibility $\\chi_i$, as defined in Ref.~\\onlinecite{cjjcoupler}, then the net inductance of the qubit can be expressed as \n\\begin{equation}\n\\label{eqn:LqNoTuner}\nL_q=L_q^0-\\sum_{i}M^2_{\\text{co},i}\\chi_i\\; . \n\\end{equation}\n\n\\noindent Given that qubit properties such as $\\Delta_q$ can be exponentially sensitive to variations in $L_q$, then it is undesirable to have variations in $L_q$ between multiple flux qubits or to have $L_q$ change during operation. This could have a deleterious impact upon AQO in which it is typically assumed that all qubits are identical and they are intended to be annealed in unison \\cite{AQC}. 
From the perspective of GMQC, one could very well attempt to compensate for such effects in a CJJ or CCJJ rf-SQUID flux qubit by adjusting the tunnel barrier height to hold $\\Delta_q$ constant, but doing so alters $\\left|I_q^p\\right|$, which then alters the coupling of the qubit to radiative sources, thus demanding further compensation. Consequently, it also makes sense from the perspective of GMQC that one find a means of rendering $L_q$ uniform between multiple qubits and insensitive to the settings of inductive coupling elements.\n\n\\begin{figure}\n\\includegraphics[width=2.5in]{LTuner.pdf}\n\\caption{\\label{fig:LTuner} (color online) A CCJJ rf-SQUID with $L$-tuner connected to multiple tunable inductive couplers via transformers with mutual inductances $M_{\\text{co},i}$ and possessing susceptibilities $\\chi_i$. The $L$-tuner is controlled via the external flux bias $\\Phi^x_{LT}$}\n\\end{figure}\n\nIn order to compensate for variations in $L_q$, we have inserted a tunable Josephson inductance \\cite{vanDuzer} into the CCJJ rf-SQUID body, as depicted in Fig.~\\ref{fig:LTuner}. We refer to this element as an inductance ($L$)-tuner. This relatively simple element comprises a dc-SQUID whose critical current vastly exceeds that of the CCJJ structure, thus ensuring negligible phase drop across the $L$-tuner. Assuming that the inductance of the $L$-tuner wiring is negligible, the $L$-tuner modifies Eq.~(\\ref{eqn:LqNoTuner}) in the following manner:\n\\begin{equation}\n\\label{eqn:LqWithTuner}\nL_q=L_q^0-\\sum_{i}M^2_{\\text{co},i}\\chi_i + \\frac{L_{J0}}{\\cos(\\pi\\Phi^x_{LT}\/\\Phi_0)}\\; ,\n\\end{equation}\n\n\\noindent where $L_{J0}\\equiv\\Phi_0\/2\\pi I^c_{LT}$, $I^c_{LT}$ is the net critical current of the two junctions in the $L$-tuner and $\\Phi^x_{LT}$ is an externally applied flux bias threading the $L$-tuner loop. 
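Equation (LqWithTuner) can be inverted to find the $\Phi^x_{LT}$ that holds $L_q$ at a fixed target as the couplers move. The sketch below uses illustrative pH-scale inductances and nH$^{-1}$-scale susceptibilities; the bare inductance `Lq_bare` is an assumed value, not a device parameter from the text.

```python
import math

def l_tuner_bias(target_Lq, Lq_bare, couplings, L_J0):
    """Invert Eq. (LqWithTuner) for Phi^x_LT (units of Phi_0):
    Lq = Lq_bare - sum(M_i^2 chi_i) + L_J0 / cos(pi Phi^x_LT / Phi_0).
    `couplings` is a list of (M_co_i, chi_i) pairs."""
    L_net = Lq_bare - sum(M * M * chi for M, chi in couplings)
    ratio = L_J0 / (target_Lq - L_net)
    if ratio <= 0.0 or ratio > 1.0 + 1e-12:
        raise ValueError("target inductance outside L-tuner range")
    ratio = min(ratio, 1.0)  # guard against floating-point overshoot
    return math.acos(ratio) / math.pi

# illustrative values in pH and 1/pH
L_J0, M_co, chi_afm, Lq_bare = 19.6, 15.8, 6.3e-3, 250.0

couplers_afm = [(M_co, +chi_afm)] * 4   # four couplers fully AFM
couplers_fm = [(M_co, -chi_afm)] * 4    # four couplers fully FM

# hold Lq at the largest value encountered over the coupler range
Lq_max = Lq_bare + 4 * M_co**2 * chi_afm + L_J0
phi_afm = l_tuner_bias(Lq_max, Lq_bare, couplers_afm, L_J0)
phi_fm = l_tuner_bias(Lq_max, Lq_bare, couplers_fm, L_J0)
print(phi_afm, phi_fm)  # a few tenths of Phi_0 at the AFM extreme, ~0 at FM
```

The required tuner bias is largest when the couplers pull $L_q$ furthest below its maximum, and vanishes at the opposite extreme, as expected.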
For modest flux biases such that $I^c_{LT}\\cos(\\pi\\Phi^x_{LT}\/\\Phi_0)\\gg I_q^c$, Eq.~(\\ref{eqn:LqWithTuner}) is a reliable model of the physics of the $L$-tuner.\n\nGiven that the $L$-tuner is only capable of augmenting $L_q$, one can only choose to target $L_q>L_q^0-\\sum_iM_{\\text{co},i}^2\\chi^{\\text{AFM}}_i+L_{J0}$, where $\\chi^{\\text{AFM}}_i$ is the maximum antiferromagnetic (AFM) susceptibility of inter-qubit coupler $i$. In practice, we choose to restrict operation of the couplers to the range $-\\chi_i^{\\text{AFM}}<\\chi_i<\\chi_i^{\\text{AFM}}$ such that the maximum qubit inductance that will be encountered is $L_q=L_q^0+\\sum_iM_{\\text{co},i}^2\\chi^{\\text{AFM}}_i+L_{J0}$. We then choose to prebias $\\Phi^x_{\\text{LT}}$ for each qubit to match the maximum realized $L_q\\equiv L^{\\text{max}}_q$ amongst a set of flux qubits. Thereafter, one can hold $L_q=L^{\\text{max}}_q$ as couplers are adjusted by inverting Eq.~(\\ref{eqn:LqWithTuner}) to solve for an appropriate value of $\\Phi^x_{LT}$. Thus, the $L$-tuner provides a ready means of compensating for small variations in $L_q$ between flux qubits and of holding $L_q$ constant as inductive inter-qubit coupling elements are adjusted.\n\n\\section{Device Architecture, Fabrication and Readout Operation}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{Architecture_Schematic.pdf} \\\\\n\\includegraphics[width=3.25in]{Architecture_CrossSection.pdf} \\\\\n\\includegraphics[width=3.25in]{Architecture_OpticalQubit.pdf}\n\\caption{\\label{fig:architecture} (color online) a) High level schematic of the analog devices on the device reported upon herein. Qubits are represented as light grey elongated objects and denoted as $q_0\\ldots q_7$. One representative readout (RO), CCJJ and $L$-tuner ($LT$) each have been indicated in dashed boxes. Couplers (CO) are represented as dark objects located at the intersections of the qubit bodies. b) SEM of a cross-section of the fabrication profile. 
Metal layers denoted as BASE, WIRA, WIRB and WIRC. Insulating layers labeled as SiO$_2$. Topmost insulator has not been planarized in this test structure, but is planarized in the full circuit process. An example via (VIA), Josephson junction (JUNC, AlO$_x$\/Al) and resistor (RESI) are also indicated. c) Optical image of a portion of a device completed up to WIRB. Portions of qubits $q_0\\ldots q_3$ and the entirety of $q_4$ are visible.}\n\\end{figure}\n\nTo test the CCJJ rf-SQUID flux qubit, we fabricated a circuit containing 8 such devices with pairwise interactions mediated by a network of 16 in-situ tunable CJJ rf-SQUID inter-qubit couplers \\cite{cjjcoupler}. Each qubit was also coupled to its own dedicated quantum flux parametron (QFP)-enabled readout \\cite{QFP}. A high level schematic of the device architecture is shown in Fig.~\\ref{fig:architecture}a. External flux biases were provided to target devices using a sparse combination of analog current bias lines to facilitate device calibration and an array of single flux quantum (SFQ) based on-chip programmable control circuitry (PCC) \\cite{PCC}. \n\nThe device was fabricated from an oxidized Si wafer with Nb\/Al\/Al$_2$O$_3$\/Nb trilayer junctions and four Nb wiring layers separated by planarized plasma enhanced chemical vapor deposited SiO$_{2}$. A scanning electron micrograph of the process cross-section is shown in Fig.~\\ref{fig:architecture}b. The Nb metal layers have been labeled as BASE, WIRA, WIRB and WIRC. The flux qubit wiring was primarily located in WIRB and consisted of $2\\,\\mu$m wide leads arranged as an approximately $900\\,\\mu$m long differential microstrip located $200\\,$nm above a groundplane in WIRA. CJJ rf-SQUID coupler wiring was primarily located in WIRC, stacked on top of the qubit wiring to provide inductive coupling. PCC flux storage loops were implemented as stacked spirals of 13-20 turns of $0.25\\,\\mu$m wide wiring with $0.25\\,\\mu$m separation in BASE and WIRA (WIRB). 
Stored flux was picked up by one-turn washers in WIRB (WIRA) and fed\ninto transformers for flux-biasing devices. External control lines were mostly located in\nBASE and WIRA. All of these control elements resided below a groundplane in WIRC. The groundplanes under the qubits and over the PCC\/external control lines were electrically connected using extended vias in WIRB so as to form a nearly continuous superconducting shield between the analog devices on top and the bias circuitry below. To provide biases to target devices with minimal parasitic crosstalk, transformers for biasing qubits, couplers, QFPs and dc-SQUIDs using bias lines and\/or PCC elements were enclosed in superconducting boxes with BASE and WIRC forming the bottom and top, respectively, and vertical walls formed by extended vias in WIRA and WIRB. Minimal sized openings were placed in the vertical walls through which the bias and target device wiring passed at opposing ends of each box. \n\nAn optical image of a portion of a device completed up to WIRB is shown in Fig.~\\ref{fig:architecture}c. Qubits are visible as elongated objects, WIRB PCC spirals are visible as dark rectangles and WIRB washers are visible as light rectangles with slits. Note that the extrema of the CCJJ rf-SQUID qubits are terminated in unused transformers. These latter elements allow this 8-qubit unit cell to be tiled in a larger device with additional inter-qubit CJJ couplers providing the connections between unit cells.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{unipolarannealingwaveforms.pdf}\n\\caption{\\label{fig:unipolarannealing} (color online) a) Schematic representation of a portion of the circuit reported upon herein. Canonical representations of all externally controlled flux biases $\\Phi_{\\alpha}^x$, readout current bias $i_{ro}$ and key mutual inductances $M_{\\alpha}$ are indicated. b) Depiction of latching readout waveform sequence. 
c) Example QFP state population measurement as a function of the dc level $\\Phi^x_{\\text{qfp}}$ with no qubit signal. Data have been fit to Eq.~(\\ref{eqn:transition}).}\n\\end{figure}\n\nWe have studied the properties of all 8 CCJJ rf-SQUID flux qubits on this chip in detail and report upon one such device herein. To clearly establish the lingua franca of our work, we have depicted a portion of the multi-qubit circuit in Fig.~\\ref{fig:unipolarannealing}a. Canonical representations of the external flux biases needed to operate a qubit, a coupler and a QFP-enabled readout are labeled on the diagram. The fluxes $\\Phi_L^x$, $\\Phi_R^x$, $\\Phi^x_{LT}$ and $\\Phi_{\\text{co}}^x$ were only ever subjected to dc levels in our experiments that were controlled by the PCC. The remaining fluxes and readout current biases were driven by a custom-built 128 channel room temperature current source. The mutual inductance between qubit and QFP ($M_{q-\\text{qfp}}$), between QFP and dc-SQUID ($M_{\\text{qfp-ro}}$), qubit and coupler ($M_{\\text{co},i}$) and $\\Phi^x_{\\text{co}}$-dependent inter-qubit mutual inductance ($M_{\\text{eff}}$) have also been indicated. Further details concerning cryogenics, magnetic shielding and signal filtering have been discussed in previous publications \\cite{LOMRT,PCC,QFP,cjjcoupler}.\n\nSince much of what follows depends upon a clear understanding of our novel QFP-enabled readout mechanism, we present a brief review of its operation herein. The flux and readout current waveform sequence involved in a single-shot readout is depicted in Fig.~\\ref{fig:unipolarannealing}b. Much like the CJJ qubit \\cite{LOMRT}, the QFP can be adiabatically {\\it annealed} from a state with a monostable potential ($\\Phi^x_{\\text{latch}}=-\\Phi_0\/2$) to a state with a bistable potential ($\\Phi^x_{\\text{latch}}=-\\Phi_0$) that supports two counter-circulating persistent current states. 
The matter of which persistent current state prevails at the end of an annealing cycle depends upon the sum of $\\Phi^x_{\\text{qfp}}$ and any signal from the qubit mediated via $M_{q-\\text{qfp}}$. The state of the QFP is then determined with high fidelity using a synchronized flux pulse and current bias ramp applied to the dc-SQUID. The readout process was typically completed within a repetition time $t_{\\text{rep}}<50\\,\\mu$s.\n\nAn example trace of the population of one of the QFP persistent current states $P_{\\text{qfp}}$ versus $\\Phi^x_{\\text{qfp}}$, obtained using the latching sequence depicted in Fig.~\\ref{fig:unipolarannealing}b, is shown in Fig.~\\ref{fig:unipolarannealing}c. This trace was obtained with the qubit potential held monostable ($\\Phi^x_{\\text{ccjj}}=-\\Phi_0\/2$) such that it presented minimal flux to the QFP and would therefore not influence $P_{\\text{qfp}}$. The data have been fit to the phenomenological form\n\\begin{equation}\n\\label{eqn:transition}\nP_{\\text{qfp}}=\\frac{1}{2}\\left[1-\\tanh\\left(\\frac{\\Phi^x_{\\text{qfp}}-\\Phi^0_{\\text{qfp}}}{w}\\right)\\right]\n\\end{equation}\n\n\\noindent with width $w\\sim 0.18\\,$m$\\Phi_0$ for the trace shown therein. When biased with constant $\\Phi^x_{\\text{qfp}}=\\Phi^0_{\\text{qfp}}$, which we refer to as the QFP degeneracy point, this transition in the population statistics can be used as a highly nonlinear flux amplifier for sensing the state of the qubit. Given that $M_{q-\\text{qfp}}=6.28\\pm0.01\\,$pH for the devices reported upon herein and that typical qubit persistent currents in the presence of negligible tunneling $\\left|I_q^p\\right|\\gtrsim 1\\,\\mu$A, then the net flux presented by a qubit was $2M_{q-\\text{qfp}}\\left|I_q^p\\right|\\gtrsim 6\\,$m$\\Phi_0$, which far exceeded $w$. By this means one can achieve the very high qubit state readout fidelity reported in Ref.~\\onlinecite{QFP}. 
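As a numerical check of this fidelity argument, the following evaluates Eq. (transition) with the quoted width $w \approx 0.18\,$m$\Phi_0$ and a qubit signal $M_{q\text{-qfp}}\left|I_q^p\right|$ built from the values in the text:

```python
import math

PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb
w = 0.18e-3             # transition width from the fit, units of Phi_0

def p_qfp(phi_x):
    """QFP persistent-current population versus flux, Eq. (transition),
    with the degeneracy point taken as the origin."""
    return 0.5 * (1.0 - math.tanh(phi_x / w))

# flux presented to the QFP by one qubit state: M_q-qfp * |I_q^p|
signal = 6.28e-12 * 1.0e-6 / PHI0   # ~3 mPhi_0, values from the text

p_up, p_down = p_qfp(+signal), p_qfp(-signal)
print(p_up, p_down)  # populations pinned deep in the tanh tails
```

Because the qubit signal exceeds $w$ by more than an order of magnitude, the two populations are pinned essentially at 0 and 1, which is the origin of the high single-shot fidelity.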
On the other hand, the QFP can be used as a linearized flux sensor by engaging $\\Phi^x_{\\text{qfp}}$ in a feedback loop and actively tracking $\\Phi^0_{\\text{qfp}}$. This latter mode of operation has been used extensively in obtaining many of the results presented herein.\n\n\n\\section{CCJJ rf-SQUID Characterization}\n\nThe purpose of this section is to present measurements that characterize the CCJJ, $L$-tuner and capacitance of a CCJJ rf-SQUID. All measurements shown herein have been made with a set of standard bias conditions given by $\\Phi^x_{L}=98.4\\,$m$\\Phi_0$, $\\Phi^x_{R}=-89.3\\,$m$\\Phi_0$, $\\Phi^x_{\\text{LT}}=0.344\\,\\Phi_0$ and all inter-qubit couplers tuned to provide $M_{\\text{eff}}=0$, unless indicated otherwise. The logic behind this particular choice of bias conditions will be explained in what follows.\nThis section will begin with a description of the experimental methods for extracting $L_q$ and $I_q^c$ from persistent current measurements. Thereafter, data that demonstrate the performance of the CCJJ and $L$-tuner will be presented. Finally, this section will conclude with the determination of $C_q$ from macroscopic resonant tunneling data.\n\n\\subsection{High Precision Persistent Current Measurements} \n\nThe most direct means of obtaining information regarding a CCJJ rf-SQUID is to measure the persistent current $\\left|I_q^p\\right|$ as a function of $\\Phi^x_{\\text{ccjj}}$. A reasonable first approach to measuring this quantity would be to sequentially prepare the qubit in one of its persistent current states and then the other, and use the QFP in feedback mode to measure the difference in flux sensed by the QFP, which equals $2M_{q-\\text{qfp}}\\left|I_q^p\\right|$. A fundamental problem with this approach is that it is sensitive to low frequency (LF) flux noise \\cite{1OverF}, which can alter the background flux experienced by the QFP between the sequential measurements. 
For a typical measurement with our apparatus, the act of locating a single QFP degeneracy point to within $20\\,\\mu\\Phi_0$ takes on the order of $1\\,$s, which means that two sequential measurements would only be immune to flux noise below $0.5\\,$Hz. We have devised a LF flux noise rejection scheme that takes advantage of the fact that such noise will generate a correlated shift in the apparent degeneracy points if the sequential preparations of the qubit can be interleaved with single-shot measurements that are performed in rapid succession. If these measurements are performed with repetition time $t_{\\text{rep}}\\sim 1\\,$ms, then the measurements will be immune to flux noise below $\\sim 1\\,$kHz.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{iqplockinwaveforms.pdf}\n\\caption{\\label{fig:iqplockin} (color online) a) Low frequency flux noise rejecting qubit persistent current measurement sequence. Waveforms shown are appropriate for measuring $\\left|I_q^p\\left(\\Phi^x_{\\text{ccjj}}\\right)\\right|$ for $-\\Phi_0\\leq\\Phi^x_{\\text{ccjj}}\\leq 0$. The $\\Phi^x_{\\text{ccjj}}$ waveform can be offset by integer $\\Phi_0$ to measure the periodic behavior of this quantity. Typical repetition time is $t_{\\text{rep}}\\sim 1\\,$ms. b) Depiction of QFP transition and correlated changes in QFP population statistics for the two different qubit initializations.}\n\\end{figure}\n\nA depiction of the LF flux noise rejecting persistent current measurement sequence is shown in Fig.~\\ref{fig:iqplockin}a. The waveforms comprise two concatenated blocks of sequential annealing of the qubit to a target $\\Phi^x_{\\text{ccjj}}$ in the presence of an alternating polarizing flux bias $\\pm\\Phi_q^i$ followed by latching and single-shot readout of the QFP. The QFP flux bias is engaged in a differential feedback mode in which it is pulsed in alternating directions by an amount $\\delta\\Phi_m$ about a mean level $\\Phi_m$. 
The two single-shot measurements yield binary results for the QFP state and the {\\it difference} between the two binary results is recorded. Gathering a statistically large number of such differential measurements then yields a differential population measurement $\\delta P_{\\text{qfp}}$. Conceptually, the measurement works in the manner depicted in Fig.~\\ref{fig:iqplockin}b: the two different initializations of the qubit move the QFP degeneracy point to some unknown levels $\\Phi_m^0\\pm\\delta\\Phi_m^0$, where $\\Phi_m^0$ represents the true mean of the degeneracy points at any given instant in time and $2\\delta\\Phi_m^0$ is the true difference in degeneracy points that is independent of time. Focusing on flux biases that are close to the degeneracy point, one can linearize Eq.~(\\ref{eqn:transition}):\n\\begin{equation}\n\\label{eqn:transitionlinear}\nP_{\\text{qfp},\\pm}\\approx\\frac{1}{2}+\\frac{1}{2w}\\left[\\Phi^x_{\\text{qfp}}-\\left(\\Phi_m^0\\pm\\delta\\Phi_m^0\\right)\\right] \\; .\n\\end{equation}\n\n\\noindent Assuming that the rms LF flux noise $\\Phi_n\\ll w$ and that one has reasonable initial guesses for $\\Phi_m^0\\pm\\delta\\Phi_m^0$, then the use of the linear approximation should be justified. Applying $\\Phi^x_{\\text{qfp}}=\\Phi_m\\pm\\delta\\Phi_m$ and sufficient repetitions of the waveform pattern shown in Fig.~\\ref{fig:iqplockin}a, the differential population will then be of the form\n\\begin{equation}\n\\label{eqn:diffpop}\n\\delta P_{\\text{qfp}}=P_{\\text{qfp},+}-P_{\\text{qfp},-}=\\frac{1}{w}\\left[\\delta\\Phi_m+\\delta\\Phi_m^0\\right]\\; ,\n\\end{equation}\n\n\\noindent which is {\\it independent} of $\\Phi_m$ and $\\Phi_m^0$. Note that the above expression contains only two independent variables, $w$ and $\\delta\\Phi_m^0$, and that $\\delta P_{\\text{qfp}}$ is purely a linear function of $\\delta\\Phi_m$. 
By sampling at three values of $\\delta\\Phi_m$, as depicted by the pairs of numbered points in Fig.~\\ref{fig:iqplockin}b, the independent variables in Eq.~(\\ref{eqn:diffpop}) will be overconstrained, thus readily yielding $\\delta\\Phi_m^0$. One can then infer the qubit persistent current as follows:\n\\begin{equation}\n\\label{eqn:iqplockin}\n\\left|I_q^p\\right|=\\frac{2\\delta\\Phi_m^0}{2M_{q-\\text{qfp}}}=\\frac{\\delta\\Phi_m^0}{M_{q-\\text{qfp}}} \\; .\n\\end{equation}\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1Lq0Extraction_ver2.pdf}\n\\caption{\\label{fig:Lq0extraction} (color online) Example measurements of $\\left|I_q^p\\left(\\Phi^x_{\\text{ccjj}}\\right)\\right|$.}\n\\end{figure}\n\n\\noindent Example measurements of $\\left|I_q^p\\right|\\left(\\Phi^x_{\\text{ccjj}}\\right)$ are shown in Fig.~\\ref{fig:Lq0extraction}. These data, for which $1.5\\lesssim\\left|\\beta_{\\text{eff}}\\right|\\lesssim 2.5$, have been fit to the ideal CCJJ rf-SQUID model by finding the value of $\\varphi_q\\equiv\\varphi^{\\text{min}}_q$ for which the potential in Eq.~(\\ref{eqn:4JHeff}) is minimized:\n\\begin{equation}\n\\label{eqn:iqp2d}\n\\left|I_q^p\\right|=\\frac{\\Phi_0}{2\\pi}\\frac{\\left|\\varphi^{\\text{min}}_q-\\varphi_q^x\\right|}{L_q} \\;\\; .\n\\end{equation}\n\n\\noindent The best fit shown in Fig.~\\ref{fig:Lq0extraction} was obtained with $L_q=265.4 \\pm 1.0\\,$pH, $L_{\\text{ccjj}}=26\\pm 1\\,$pH and $I_q^c=3.103\\pm 0.003\\,\\mu$A. For comparison, we had estimated $L_q=273\\,$pH at the standard bias condition for $\\Phi^x_{LT}$ and $L_{\\text{ccjj}}=20\\,$pH from design. \n\nIn practice, we have found that the LF flux noise rejecting method of measuring $\\left|I_q^p\\right|$ effectively eliminates any observable $1\/f$ component in that measurement's noise power spectral density, to within statistical error. 
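The extraction of $\delta\Phi_m^0$ from Eq. (diffpop) amounts to an overconstrained straight-line fit. A minimal noise-free simulation, with assumed illustrative values for $w$ and $\delta\Phi_m^0$ (in units of $\Phi_0$):

```python
# Eq. (diffpop): dP = (x + dPhi_m^0) / w, sampled at three settings x.
w_true, dphi0_true = 0.18e-3, 3.0e-3          # assumed values, Phi_0 units
M_q_qfp, PHI0 = 6.28e-12, 2.067833848e-15     # H, Wb

xs = [-0.5e-3, 0.0, 0.5e-3]                   # three delta Phi_m settings
ys = [(x + dphi0_true) / w_true for x in xs]  # ideal differential pops

# least-squares line through the three (overconstrained) points
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

w_fit = 1.0 / slope              # recovers w
dphi0_fit = intercept * w_fit    # recovers dPhi_m^0
I_p = dphi0_fit * PHI0 / M_q_qfp # Eq. (iqplockin), in amperes
print(w_fit, dphi0_fit, I_p)     # persistent current ~1 uA
```

With clean data the fit returns the input parameters exactly; in practice the third sample overconstrains the line and provides a consistency check.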
Finally, it should be noted that the LF flux noise rejecting method is applicable to any measurement of a difference in flux sensed by a linearized detector. In what follows herein, we have made liberal use of this technique to calibrate a variety of quantities in-situ using both QFPs and other qubits as flux detectors.\n\n\\subsection{CCJJ}\n\nIn this subsection, the CCJJ has been characterized as a function of $\\Phi^x_{L}$ and $\\Phi^x_{R}$ with all other static flux biases set to the standard bias condition cited above. Referring to Eq.~(\\ref{eqn:4JQOffset}), it can be seen that the qubit degeneracy point $\\Phi_q^0$ is a function of $\\Phi^x_{\\text{ccjj}}$ through $\\gamma_0$ if the CCJJ has not been balanced. To accentuate this functional dependence, one can anneal the CCJJ rf-SQUID with $\\Phi^x_{\\text{ccjj}}$ waveforms of opposing polarity about a minimum in $\\left|\\beta_{\\text{eff}}\\right|$, as found at $\\Phi^x_{\\text{ccjj}}=-\\Phi_0\/2$. The expectation is that the {\\it apparent} qubit degeneracy points will be antisymmetric about the mean given by setting $\\gamma_0=0$ in Eq.~(\\ref{eqn:4JQOffset}). The waveform sequence for performing a differential qubit degeneracy point measurement is depicted in Fig.~\\ref{fig:bipolarlockin}. In this case, the QFP is used as a latching readout and the qubit acts as the linearized detector of its own apparent annealing polarization-dependent flux offset. As with the $\\left|I_q^p\\right|$ measurement described above, this LF flux noise rejecting procedure returns a {\\it difference} in apparent flux sensed by the qubit and not the absolute flux offsets. \n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{bipolarannealingwaveforms.pdf}\n\\caption{\\label{fig:bipolarlockin} (color online) Schematic of low frequency noise rejecting differential qubit degeneracy point measurement sequence. 
The qubit is annealed with a $\\Phi^x_{\\text{ccjj}}$ signal of opposing polarity in the two frames and the qubit flux bias is controlled via feedback.}\n\\end{figure}\n\nTo find balanced pairs of $\\left(\\Phi^x_{L},\\Phi^x_{R}\\right)$ in practice, we set $\\Phi^x_{R}$ to a constant and used the LF flux noise rejecting procedure inside a software feedback loop that controlled $\\Phi^x_{L}$ to null the difference in apparent degeneracy point to a precision of $20\\,\\mu\\Phi_0$. Balanced pairs of $\\left(\\Phi^x_{L},\\Phi^x_{R}\\right)$ are plotted in Fig.~\\ref{fig:balanced}a. These data have been fit to Eq.~(\\ref{eqn:balancedapprox}) using $\\beta_-\/\\beta_+$ as a free parameter. The best fit shown in Fig.~\\ref{fig:balanced}a was obtained with $1-\\beta_{R,+}\/\\beta_{L,+}=(4.1\\pm0.3)\\times 10^{-3}$, which then indicates an approximately $0.4\\%$ asymmetry between the pairs of junctions in the $L$ and $R$ loops.\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1MinorLobeBalancing.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1BalancedIp_ver2.pdf} \\\\\n\\caption{\\label{fig:balanced} (color online) a) Minor lobe balancing data and fit to Eq.~(\\ref{eqn:balancedapprox}). The standard bias conditions for $\\Phi^x_{L}$ and $\\Phi^x_{R}$ are indicated by dashed lines. b) $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$ versus $\\Phi^x_{R}$, where $\\Phi^x_{L}$ has been chosen using Eq.~(\\ref{eqn:balancedapprox}). The data have been fit to the ideal CCJJ rf-SQUID model. The standard bias condition for $\\Phi^x_{R}$ and the resultant $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$ are indicated by dashed lines.}\n\\end{figure}\n\nA demonstration of how the CCJJ facilitates tuning of $I_q^c$ is shown in Fig.~\\ref{fig:balanced}b. Here, the measurable consequence of altering $I_q^c$ was a change in $\\left|I_q^p\\right|$ at $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$. 
These data have been fit to the ideal CCJJ rf-SQUID model with the substitution\n\\begin{equation}\n\\label{eqn:Icbalanced}\nI_q^c(\\Phi^x_{R},\\Phi^x_{L})=I_c^0\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right)\n\\end{equation}\n\n\\noindent and using the values of $L_{\\text{ccjj}}$ and $L_q$ obtained from fitting the data in Fig.~\\ref{fig:Lq0extraction}, but treating $I_c^0$ as a free parameter. Here, $\\Phi^x_{L}$ on the left side of Eq.~(\\ref{eqn:Icbalanced}) is a function of $\\Phi^x_{R}$ per the CCJJ balancing condition Eq.~(\\ref{eqn:balancedapprox}). The best fit was obtained with $I_c^0=3.25\\pm0.01\\,\\mu$A. This latter quantity is reasonably close to the design value of $3.56\\;\\mu$A for four $0.6\\,\\mu$m diameter junctions in parallel. Thus, it is possible to target a desired $I_q^c$ by using Eq.~(\\ref{eqn:Icbalanced}) to select $\\Phi^x_{R}$ and then Eq.~(\\ref{eqn:balancedapprox}) to select $\\Phi^x_{L}$. The standard bias conditions for $\\Phi^x_{L}$ and $\\Phi^x_{R}$ quoted previously were chosen so as to homogenize $I_q^c$ amongst the 8 CCJJ rf-SQUIDs on this particular chip.\n\n\\subsection{$L$-Tuner}\n\nTo characterize the $L$-tuner, we once again turned to measurements of $\\left|I_q^p(\\Phi^x_{\\text{ccjj}}=-\\Phi_0)\\right|$, but this time as a function of $\\Phi^x_{LT}$. Persistent current results were then used to infer $\\delta L_q=L_q(\\Phi^x_{LT})-L_q(\\Phi^x_{LT}=0)$ using the ideal CCJJ rf-SQUID model with $L_{\\text{ccjj}}$ and $I_q^c$ held constant and treating $L_q$ as a free parameter. The experimental results are plotted in Fig.~\\ref{fig:ltuner}a and have been fit to\n\\begin{equation}\n\\label{eqn:Ltunerfit}\n\\delta L_q=\\frac{L_{J0}}{\\cos\\left(\\pi\\Phi^x_{LT}\/\\Phi_0\\right)}-L_{J0} \\; ,\n\\end{equation}\n\n\\noindent and the best fit was obtained with $L_{J0}=19.60\\pm0.04\\,$pH. 
Modeling this latter parameter as $L_{J0}=\\Phi_0\/2\\pi I^c_{LT}$, we estimate $I^c_{LT}=16.79\\pm0.04\\,\\mu$A, which is close to the design value of $16.94\\,\\mu$A. The standard bias condition for $\\Phi^x_{\\text{LT}}$ was chosen so as to homogenize $L_q$ amongst the 8 CCJJ rf-SQUID flux qubits on this chip and to provide adequate bipolar range to accommodate inter-qubit coupler operation.\n\n\\begin{figure}[ht]\n\\includegraphics[width=3.25in]{QU1LTunerCalibration.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1LTunerCompensationComparison_ver2.pdf} \\\\\n\\caption{\\label{fig:ltuner} (color online) a) $L$-tuner calibration and fit to Eq.~(\\ref{eqn:Ltunerfit}). The standard bias condition for $\\Phi^x_{\\text{LT}}$ and the resultant $\\delta L_q$ are indicated by dashed lines. b) Observed change in maximum qubit persistent current with and without active $L$-tuner compensation and predictions for both cases.}\n\\end{figure}\n\nTo demonstrate the use of the $L$-tuner, we have probed a worst-case scenario in which four CJJ rf-SQUID couplers connected to the CCJJ rf-SQUID in question are tuned in unison. Each of the couplers had been independently calibrated per the procedures described in Ref.~\\onlinecite{cjjcoupler}, from which we obtained $M_{\\text{co},i}\\approx 15.8\\,$pH and $\\chi_i\\left(\\Phi^x_{\\text{co}}\\right)$ ($i\\in\\left\\{1,2,3,4\\right\\}$). Each of these devices provided a maximum AFM inter-qubit mutual inductance $M_{\\text{AFM}}=M^2_{\\text{co},i}\\chi_{\\text{AFM}}\\approx 1.56\\,$pH, from which one can estimate $\\chi_{\\text{AFM}}\\approx 6.3\\,$nH$^{-1}$. Measurements of $\\left|I_q^p\\right|$ with and without active $L$-tuner compensation as a function of coupler bias $\\Phi^x_{\\text{co}}$, as applied to all four couplers simultaneously, are presented in Fig.~\\ref{fig:ltuner}b. 
The predictions from the ideal CCJJ rf-SQUID model, obtained by using $L_q=265.4\\,\\text{pH}$ (with compensation) and $L_q$ obtained from Eq.~(\\ref{eqn:LqNoTuner}) (without compensation), are also shown. Note that the two data sets and predictions all agree to within experimental error at $\\Phi^x_{\\text{co}}=0.5\\,\\Phi_0$, which corresponds to the all zero coupling state ($M_{\\text{eff}}=0$). The experimental results obtained without $L$-tuner compensation agree reasonably well with the predicted $\\Phi^x_{\\text{co}}$-dependence. As compared to the case without compensation, it can be seen that the measured $\\left|I_q^p\\right|$ show considerably less $\\Phi^x_{\\text{co}}$-dependence when $L$-tuner compensation is provided. However, the data suggest a small systematic deviation from the inductance models Eqs.~(\\ref{eqn:LqNoTuner}) and (\\ref{eqn:LqWithTuner}). At $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$, for which it is estimated that $\\beta_{\\text{eff}}\\approx 2.43$, $\\left|I_q^p\\right|\\propto 1\/L_q$. Given that the data for the case without compensation are below the model, then it appears that we have slightly underestimated the change in $L_q$. Consequently, we have provided insufficient ballast inductance when the $L$-tuner compensation was activated.\n\n\\subsection{rf-SQUID Capacitance}\n\nSince $I_q^c$ and $L_q$ directly impact the CCJJ rf-SQUID potential in Hamiltonian (\\ref{eqn:4JHeff}), it was possible to infer CCJJ and $L$-tuner properties from measurements of the groundstate persistent current. In contrast, the rf-SQUID capacitance $C_q$ appears in the kinetic term in Hamiltonian (\\ref{eqn:4JHeff}). Consequently, one must turn to alternate experimental methods that invoke excitations of the CCJJ rf-SQUID in order to characterize $C_q$. 
One such method is to probe macroscopic resonant tunneling (MRT) from the lowest lying state in one well into either the lowest order [LO, $n=0$] state or into a higher order [HO, $n>0$] state in the opposing well of the rf-SQUID double well potential \\cite{HOMRT}. The spacing of successive HOMRT peaks as a function of rf-SQUID flux bias $\\Phi^x_q$ will be particularly sensitive to $C_q$. HOMRT has been observed in many different rf-SQUIDs and is a well established quantum mechanical phenomenon \\cite{HOMRT,Bennett,MRT3JJ}. LOMRT proved to be more difficult to observe in practice and was only reported upon relatively recently in the literature \\cite{LOMRT}. We refer the reader to this latter reference for the experimental method for measuring MRT rates.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1_HOMRT_Rate.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_HOMRT_GaussianWidth.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_HOMRT_PeakPosition.pdf}\n\\caption{\\label{fig:HOMRT} (color online) a) HOMRT peaks fitted to Eq.~(\\ref{eqn:HOMRTFit}). Data shown are for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0=-0.6677$, $-0.6735$, $-0.6793$, $-0.6851$, $-0.6911$ and $-0.6970$, from left to right, respectively. Number of levels in target well $n$ as indicated. b) Best fit Gaussian width parameter $W_n$ as a function of $n$. c) Best fit peak position $\\epsilon_p^n$ as a function of $n$.}\n\\end{figure}\n\nMeasurements of the initial decay rate $\\Gamma\\equiv dP_{\\downarrow}\/dt|_{t=0}$ versus $\\Phi^x_q$ are shown in Fig.~\\ref{fig:HOMRT}a with the order of the target level $n$ as indicated. The maximum observable $\\Gamma$ was imposed by the bandwidth of the apparatus, which was $\\sim 5\\,$MHz. The minimum observable $\\Gamma$ was dictated by experimental run time constraints. 
In order to observe many HO resonant peaks within our experimental bandwidth we have successively raised the tunnel barrier height in roughly equal intervals by tuning the target $\\Phi^x_{\\text{ccjj}}$. The result is a cascade of resonant peaks atop a monotonic background.\n\nThe authors of Ref.~\\onlinecite{Bennett} attempted to fit their HOMRT data to a sum of gaussian broadened lorentzian peaks. It was found that they could obtain satisfactory fits in the vicinity of the tops of the resonant features, but that the model was unable to correctly describe the valleys between peaks. We reached the same conclusion upon applying the very same model to our data. However, it was empirically observed that we could obtain excellent fits to all of the data by using a model composed of a sum of purely gaussian peaks plus a background that varies exponentially with $\\Phi^x_q$:\n\\begin{equation}\n\\label{eqn:HOMRTFit}\n\\frac{\\Gamma(\\Phi^x_q)}{\\hbar}=\\sum_{n}\\sqrt{\\frac{\\pi}{8}}\\frac{\\Delta_n^2}{W_n}e^{-\\frac{(\\epsilon-\\epsilon_p^n)^2}{2W_n^2}}+\\Gamma_{\\text{bkgd}}e^{\\Phi^x_q\/\\delta\\Phi_{\\text{bkgd}}}\\; ,\n\\end{equation}\n\n\\noindent where $\\epsilon\\equiv 2\\left|I_q^p\\right|\\Phi^x_q$. These fits are shown in Fig.~\\ref{fig:HOMRT}a. A summary of the gaussian width parameter $W_n$ in Fig.~\\ref{fig:HOMRT}b is shown solely for informational purposes. We will refrain from speculating on why there is no trace of lorentzian lineshapes or on the origins of the exponential background herein, but rather defer a detailed examination of HOMRT to a future publication.\n\nFor the purposes of this article, the key results to take from the fits shown in Fig.~\\ref{fig:HOMRT}a are the positions of the resonant peaks, as plotted in Fig.~\\ref{fig:HOMRT}c. These results indicate that the peak spacing is very uniform: $\\delta\\Phi_{\\text{MRT}}=1.55\\pm0.01\\,$m$\\Phi_0$.
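As an aside, the fit model of Eq.~(\ref{eqn:HOMRTFit}) is simple to implement. The sketch below evaluates a sum of gaussian peaks atop an exponential background; all parameter values are hypothetical placeholders, not the fitted values of Fig.~\ref{fig:HOMRT}.

```python
import numpy as np

def homrt_rate(phi, peaks, gamma_bkgd, dphi_bkgd, i_p):
    """Evaluate Gamma/hbar per Eq. (HOMRTFit): a sum of gaussian
    MRT peaks atop an exponentially varying background.

    phi    : flux bias Phi^x_q (arbitrary units)
    peaks  : iterable of (delta_n, w_n, eps_p_n), all in energy units
    i_p    : persistent current magnitude, so that eps = 2*i_p*phi
    """
    eps = 2.0 * i_p * phi
    rate = gamma_bkgd * np.exp(phi / dphi_bkgd)
    for delta_n, w_n, eps_p_n in peaks:
        rate = rate + (np.sqrt(np.pi / 8.0) * delta_n**2 / w_n
                       * np.exp(-((eps - eps_p_n) ** 2) / (2.0 * w_n**2)))
    return rate

# A cascade of three equally spaced peaks (hypothetical values)
peaks = [(0.8, 0.5, -1.0), (1.0, 0.5, 0.0), (1.2, 0.5, 1.0)]
phi = np.linspace(-1.0, 1.0, 2001)
rate = homrt_rate(phi, peaks, gamma_bkgd=1e-3, dphi_bkgd=0.5, i_p=1.0)
```

In practice, each cascade in Fig.~\ref{fig:HOMRT}a would be fit by optimizing the $\Delta_n$, $W_n$ and $\epsilon_p^n$ with a standard least-squares routine.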
One can compare $\\delta\\Phi_{\\text{MRT}}$ with the predictions of the ideal CCJJ rf-SQUID model using the previously calibrated $L_q=265.4\\,$pH, $L_{\\text{ccjj}}=26\\,$pH and $I_q^c=3.103\\,\\mu$A with $C_q$ treated as a free parameter. From such a comparison, we estimate $C_q=190\\pm 2\\,$fF. \n\nThe relatively large value of $C_q$ quoted above can be reconciled with the CCJJ rf-SQUID design by noting that, unlike other rf-SQUID flux qubits reported upon in the literature, our qubit body resides proximal to a superconducting groundplane so as to minimize crosstalk. In this case, the qubit wiring can be viewed as a differential transmission line of length $\\ell\/2\\sim 900\\,\\mu$m, where $\\ell$ is the total length of qubit wiring, with the effective Josephson junction and a short on opposing ends. The transmission line will present an impedance of the form $Z(\\omega)=-j Z_0\\tanh(\\omega\\ell\/2\\nu)$ to the effective Josephson junction, with the phase velocity $\\nu\\equiv1\/\\sqrt{L_0C_0}$ defined by the differential inductance per unit length $L_0\\sim 0.26\\,$pH$\/\\mu$m and capacitance per unit length $C_0\\sim 0.18\\,$fF$\/\\mu$m, as estimated from design. If the separation between differential leads is greater than the distance to the groundplane, then $\\ell\/2\\nu\\approx\\sqrt{L_{\\text{body}}C_{\\text{body}}\/4}$, where $C_{\\text{body}}\\sim 640\\,$fF is the total capacitance of the qubit wiring to ground. Thus, one can model the high frequency behavior of the shorted differential transmission line as an inductance $L_{\\text{body}}$ and a capacitance $C_{\\text{body}}\/4$ connected in parallel with the CCJJ. Taking a reasonable estimated value of $40\\,$fF\/$\\mu$m$^2$ for the capacitance per unit area of a Josephson junction, one can estimate the total capacitance of four $0.6\\,\\mu$m diameter junctions in parallel to be $C_J\\sim 45\\,$fF. 
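The arithmetic behind these two capacitance contributions is worth making explicit; the sketch below uses only the design values quoted in the text.

```python
import math

# Design values quoted in the text
specific_cap = 40.0   # junction capacitance per unit area, fF/um^2
diameter = 0.6        # junction diameter, um
n_junctions = 4       # CCJJ junctions in parallel
c_body = 640.0        # total qubit-wiring capacitance to ground, fF

# Total capacitance of four circular junctions in parallel
c_j = n_junctions * specific_cap * math.pi * (diameter / 2.0) ** 2
# Contribution of the shorted differential transmission line
c_line = c_body / 4.0

print(f"C_J ~ {c_j:.0f} fF, C_body/4 = {c_line:.0f} fF")  # C_J ~ 45 fF, C_body/4 = 160 fF
```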
Thus we estimate $C_q=C_J+C_{\\text{body}}\/4\\sim 205\\,$fF, which is in reasonable agreement with the best fit value of $C_q$ quoted above.\n\nWith all of the controls of the CCJJ rf-SQUID having been demonstrated, we reach the first key conclusion of this article: The CCJJ rf-SQUID is a robust device in that parametric variations, both within an individual device and between a multitude of such devices, can be accounted for using purely static flux biases. These biases have been applied to all 8 CCJJ rf-SQUIDs on this particular chip using a truly scalable architecture involving on-chip flux sources that are programmed by only a small number of address lines \\cite{PCC}.\n\n\n\\section{Qubit Properties}\n\nThe purpose of the CCJJ rf-SQUID is to provide a flux qubit that is as close to ideal as possible \\cite{fluxqubit}. By this statement, it is meant that the physics of the two lowest lying states of the device can be described by an effective Hamiltonian of the form Eq.~(\\ref{eqn:Hqubit}) with $\\epsilon=2\\left|I_q^p\\right|\\left(\\Phi_q^x-\\Phi^0_q\\right)$, $\\left|I_q^p\\right|$ being the magnitude of the persistent current that flows about the inductive loop when the device is biased hard to one side, $\\Phi_q^0$ being a static flux offset and $\\Delta_q$ representing the tunneling energy between the lowest lying states when biased at its degeneracy point $\\Phi_q^x=\\Phi^0_q$. Thus, $\\left|I_q^p\\right|$ and $\\Delta_q$ are the defining properties of a flux qubit, regardless of its topology \\cite{Leggett}. Given the complexity of a six junction device with five closed superconducting loops, it is quite justifiable to question whether the CCJJ rf-SQUID constitutes a qubit.
These concerns will be directly addressed herein by demonstrating that measured $\\left|I_q^p\\right|$ and $\\Delta_q$ agree with the predictions of the quantum mechanical Hamiltonian (\\ref{eqn:4JHeff}) given independently calibrated values of $L_q$, $L_{\\text{ccjj}}$, $I_q^c$ and $C_q$.\n\nBefore proceeding, it is worth providing some context with regard to the choice of experimental methods described below. For those researchers attempting to implement GMQC using resonant electromagnetic fields to prepare states and mediate interactions between qubits, experiments that involve high frequency pulse sequences to drive excitations in the qubit (such as Rabi oscillations\\cite{MooijSuperposition}, Ramsey fringes\\cite{MooijSuperposition,1OverFFluxQubit1} and spin-echo\\cite{MooijSuperposition,1OverFFluxQubit1,1OverFFluxQubit2}) are the natural modality for studying quantum effects. Such experiments are convenient in this case as the methods can be viewed as basic gate operations within this intended mode of operation. However, such methods are not the exclusive means of characterizing quantum resources. For those who wish to use precise dc pulses to implement GMQC or whose interests lie in developing hardware for AQO, it is far more convenient to have a set of tools for characterizing quantum mechanical properties that require only low bandwidth bias controls. Such methods, some appropriate in the coherent regime \\cite{Greenberg,gsip} and others in the incoherent regime \\cite{HOMRT,LOMRT,LZ}, have been reported in the literature.
We have made use of such low frequency methods as our apparatuses typically possess 128 low bandwidth bias lines to facilitate the adiabatic manipulation of a large number of devices.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1_LOMRT_ExampleTraces.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1_LOMRT_FitParameters.pdf} \\\\\n\\caption{\\label{fig:LOMRT} (color online) a) Example LOMRT peaks fitted to Eq.~(\\ref{eqn:LOMRTFit}). Data shown are for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0=-0.6621$, $-0.6642$ and $-0.6663$, from top to bottom, respectively. Data from the qubit initialized in $\\ket{\\downarrow}$ ($\\ket{\\uparrow}$) are indicated by solid (hollow) points. b) Energy scales obtained from fitting multiple LOMRT traces.}\n\\end{figure}\n\nOne possible means of probing quantum mechanical tunneling between the two lowest lying states of a CCJJ rf-SQUID is via MRT\\cite{LOMRT}. Example LOMRT decay rate data are shown in Fig.~\\ref{fig:LOMRT}a. We show results for both initializations, $\\ket{\\downarrow}$ and $\\ket{\\uparrow}$, and fits to gaussian peaks, as detailed in Ref.~\\onlinecite{LOMRT}:\n\\begin{equation}\n\\label{eqn:LOMRTFit}\n\\frac{\\Gamma(\\Phi^x_q)}{\\hbar}=\\sqrt{\\frac{\\pi}{8}}\\frac{\\Delta_q^2}{W}e^{-\\frac{(\\epsilon-\\epsilon_p)^2}{2W^2}} \\;\\; .\n\\end{equation}\n\n\\noindent A summary of the fit parameters $\\epsilon_p$ and $W$ versus $\\Phi^x_{\\text{ccjj}}$ is shown in Fig.~\\ref{fig:LOMRT}b. We also provide estimates of the device temperature using the formula\n\\begin{equation}\n\\label{eqn:TMRT}\nk_BT_{\\text{MRT}}=\\frac{W^2}{2\\epsilon_p} \\; .\n\\end{equation}\n\n\\noindent As expected, $T_{\\text{MRT}}$ shows no discernible $\\Phi^x_{\\text{ccjj}}$-dependence and is scattered about a mean value of $53\\pm2\\,$mK. A summary of $\\Delta_q$ versus $\\Phi^x_{\\text{ccjj}}$ will be shown in conjunction with more experimental results at the end of this section. 
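Eq.~(\ref{eqn:TMRT}) amounts to a one-line conversion from fitted MRT parameters to a temperature. A sketch, with $W$ and $\epsilon_p$ expressed as frequencies ($E/h$) and chosen as hypothetical round numbers rather than the fitted values of Fig.~\ref{fig:LOMRT}b:

```python
# Physical constants (SI, CODATA exact values)
H_PLANCK = 6.62607015e-34  # J/Hz
K_B = 1.380649e-23         # J/K

def t_mrt_mk(w_ghz, eps_p_ghz):
    """Eq. (TMRT): k_B*T_MRT = W^2/(2*eps_p), with W and eps_p in GHz.

    Returns the effective MRT temperature in mK.
    """
    kbt = H_PLANCK * 1e9 * w_ghz**2 / (2.0 * eps_p_ghz)  # Joules
    return 1e3 * kbt / K_B

# Hypothetical fit values, chosen only to illustrate the conversion
print(f"T_MRT ~ {t_mrt_mk(4.7, 10.0):.0f} mK")  # T_MRT ~ 53 mK
```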
For further details concerning LOMRT, the reader is directed to Ref.~\\onlinecite{LOMRT}.\n\nA second possible means of probing $\\Delta_q$ is via a Landau-Zener experiment \\cite{LZ}. In principle, this method should be applicable in both the coherent and incoherent regime. In practice, purely due to the low bandwidth of our bias lines, we have found it possible to probe the device at only modestly larger $\\Delta_q$ than we can reach via LOMRT. Results from such experiments on the CCJJ rf-SQUID flux qubit will be summarized at the end of this section. We see no fundamental limitation that would prevent others with higher bandwidth apparatuses from exploring the physics of the CJJ or CCJJ flux qubit at the crossover between the coherent and incoherent regimes using the Landau-Zener method. \n\nIn order to probe the qubit tunnel splitting in the coherent regime using low bandwidth bias lines, we have developed a new experimental procedure for sensing the expectation value of the qubit persistent current, similar in spirit to other techniques already reported in the literature \\cite{gsip}. An unfortunate consequence of the choice of design parameters for our high fidelity QFP-enabled readout scheme is that the QFP is relatively strongly coupled to the qubit, thus limiting its utility as a detector when the qubit tunnel barrier is suppressed. One can circumvent this problem within our device architecture by tuning an inter-qubit coupler to a finite inductance and using a second qubit as a latching sensor, in much the same manner as a QFP. Consider two flux qubits coupled via a mutual inductance $M_{\\text{eff}}$. The system Hamiltonian can then be modeled as\n\\begin{equation}\n\\label{eqn:H2Q}\n{\\cal H}=-\\sum_{i\\in\\left\\{q,d\\right\\}}\\frac{1}{2}\\left[\\epsilon_i\\sigma_z^{(i)}+\\Delta_i\\sigma_x^{(i)}\\right]+J\\sigma_z^{(q)}\\sigma_z^{(d)} \\; ,\n\\end{equation}\n\n\\noindent where $J\\equiv M_{\\text{eff}}|I_q^p||I_d^p|$.
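The spectrum of Hamiltonian (\ref{eqn:H2Q}) is straightforward to obtain numerically. The sketch below, using arbitrary dimensionless parameters, builds the $4\times 4$ matrix and checks the latched-detector limit against the closed-form dispersion of Eq.~(\ref{eqn:E2Q}):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
i2 = np.eye(2)

def h2q(eps_q, delta_q, eps_d, delta_d, j):
    """4x4 matrix of the two coupled flux qubits, Eq. (H2Q)."""
    hq = -0.5 * (eps_q * sz + delta_q * sx)
    hd = -0.5 * (eps_d * sz + delta_d * sx)
    return np.kron(hq, i2) + np.kron(i2, hd) + j * np.kron(sz, sz)

# Arbitrary dimensionless parameters, detector latched (delta_d -> 0)
eps_q, delta_q, eps_d, j = 0.3, 1.0, 0.2, 0.1
levels = np.sort(np.linalg.eigvalsh(h2q(eps_q, delta_q, eps_d, 0.0, j)))

# Closed-form dispersion in the delta_d -> 0 limit (tunneling energy Delta_q)
f = lambda s: np.sqrt((eps_q - s * 2.0 * j) ** 2 + delta_q**2)
analytic = np.sort([+0.5 * f(+1) - 0.5 * eps_d, -0.5 * f(+1) - 0.5 * eps_d,
                    +0.5 * f(-1) + 0.5 * eps_d, -0.5 * f(-1) + 0.5 * eps_d])
assert np.allclose(levels, analytic)
```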
Let qubit $q$ be the flux source and qubit $d$ serve the role of the detector whose tunnel barrier is adiabatically raised during the course of a measurement, just as in a QFP single shot measurement depicted in Fig.~\\ref{fig:unipolarannealing}. In the limit $\\Delta_d\\rightarrow 0$ one can write analytic expressions for the dispersion of the four lowest energies of Hamiltonian (\\ref{eqn:H2Q}):\n\\begin{equation}\n\\label{eqn:E2Q}\n\\begin{array}{ccc}\nE_{1\\pm} & = & \\pm\\frac{1}{2}\\sqrt{\\left(\\epsilon_q-2J\\right)^2+\\Delta_q^2}-\\frac{1}{2}\\epsilon_d \\; ;\\\\\nE_{2\\pm} & = & \\pm\\frac{1}{2}\\sqrt{\\left(\\epsilon_q+2J\\right)^2+\\Delta_q^2}+\\frac{1}{2}\\epsilon_d \\; .\n\\end{array}\n\\end{equation}\n\n\\noindent As with the QFP, let the flux bias of the detector qubit be engaged in a feedback loop to track its degeneracy point where $P_{d,\\downarrow}=1\/2$. Assuming Boltzmann statistics for the thermal occupation of the four levels given by Eq.~(\\ref{eqn:E2Q}), this condition is met when\n\\begin{equation}\n\\label{eqn:P2minus}\nP_{d,\\downarrow}=\\frac{1}{2}=\\frac{e^{-E_{2-}\/k_BT}+e^{-E_{2+}\/k_BT}}{\\sum_{\\alpha\\in\\left\\{1\\pm,2\\pm\\right\\}}e^{-E_{\\alpha}\/k_BT}} \\; .\n\\end{equation}\n\n\\noindent Setting $P_{d,\\downarrow}=1\/2$ in Eq.~(\\ref{eqn:P2minus}) and solving for $\\epsilon_d$ then yields an analytic formula for the balancing condition:\n\\begin{equation}\n\\label{eqn:HalfCondGeneral}\n\\epsilon_d= \\frac{F(+)-F(-)}{2}+k_BT\\ln\\left(\\frac{1+e^{-F(+)\/k_BT}}{1+e^{-F(-)\/k_BT}}\\right) \\; ;\n\\end{equation}\n\\vspace{-0.12in}\n\\begin{displaymath}\nF(\\pm)\\equiv\\sqrt{\\left(\\epsilon_q\\pm 2J\\right)^2+\\Delta_q^2} \\; .\n\\end{displaymath}\n\nWhile Eq.~(\\ref{eqn:HalfCondGeneral}) may look unfamiliar, it readily reduces to an intuitive result in the limit of small coupling $J\\ll \\Delta_q$ and $T\\rightarrow 0$:\n\\begin{equation}\n\\label{eqn:HalfCondSmallJ}\n\\Phi_d^x \\approx
M_{\\text{eff}}|I_q^p|\\frac{\\epsilon_q}{\\sqrt{\\epsilon_q^2+\\Delta_q^2}} = M_{\\text{eff}}\\bra{g}\\hat{I}_q^p\\ket{g} \\; ,\n\\end{equation}\n\n\\noindent where $\\ket{g}$ denotes the groundstate of the source qubit and $\\hat{I}_q^p\\equiv\\left|I_q^p\\right|\\sigma_z^{(q)}$ is the source qubit persistent current operator. Thus Eq.~(\\ref{eqn:HalfCondGeneral}) is an expression for the expectation value of the source qubit's groundstate persistent current in the presence of backaction from the detector and finite temperature. Setting $\\epsilon_i=2|I_i^p|\\Phi^x_{i}$ and rearranging then gives an expression for the flux bias of the detector qubit as a function of flux bias applied to the source qubit. Given independent calibrations of $M_{\\text{eff}}=1.56\\pm 0.01\\,$pH for a particular coupler set to $\\Phi^x_{\\text{co}}=0$ on this chip, $T=54\\pm 3\\,$mK from LOMRT fits and $|I_d^p|=1.25\\pm0.02\\,\\mu$A at the CCJJ bias where the LOMRT rate approaches the bandwidth of our bias lines, one can then envision tracing out $\\Phi_d^x$ versus $\\Phi_q^x$ and fitting to Eq.~(\\ref{eqn:HalfCondGeneral}) to extract the source qubit parameters $|I_q^p|$ and $\\Delta_q$ .\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1LargeDeltaTraceExample.pdf}\n\\caption{\\label{fig:largedeltatrace} (color online) Example coupled flux trace taken at $\\Phi^x_{\\text{ccjj}}=-0.6513\\,\\Phi_0$ used to extract large $\\Delta$ in the coherent regime. }\n\\end{figure}\n\nAn example $\\Phi_d^x$ versus $\\Phi_q^x$ data set for source CCJJ flux bias $\\Phi^x_{\\text{ccjj}}=-0.6513\\,\\Phi_0$ is shown in Fig.~\\ref{fig:largedeltatrace}. The solid curve in this plot corresponds to a fit to Eq.~(\\ref{eqn:HalfCondGeneral}) with a small background slope that we denote as $\\chi$. We have confirmed from the ideal CCJJ rf-SQUID model that $\\chi$ is due to the diamagnetic response of the source rf-SQUID to changing $\\Phi_q^x$. 
This feature becomes more pronounced with increasing $C_q$ and is peaked at the value of $\\Phi^x_{\\text{ccjj}}$ for which the source qubit potential becomes monostable, $\\beta_{\\text{eff}}=1$. Nonetheless, the model also indicates that $\\chi$ in no way modifies the dynamics of the rf-SQUID, and thus the qubit model still applies. From fitting these particular data, we obtained $|I_q^p|=0.72\\pm 0.04\\,\\mu$A and $\\Delta_q\/h=2.64\\pm 0.24\\,$GHz.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{deltawaveforms.pdf}\n\\caption{\\label{fig:deltawaveforms} (color online) Depiction of large $\\Delta_q$ measurement waveforms. The waveform sequence is similar to that of Fig.~\\ref{fig:iqplockin}, although the source qubit's tunnel barrier is partially suppressed ($-\\Phi_0<\\Phi^x_{\\text{ccjj}}<-\\Phi_0\/2$) and a second qubit (as opposed to a QFP) serves as the flux detector.}\n\\end{figure}\n\nIn practice we have found it inefficient to take detailed traces of $\\Phi_d^x$ versus $\\Phi_q^x$ as this procedure is susceptible to corruption by LF flux noise in the detector qubit. As an alternative approach, we have adapted the LF flux noise rejecting procedures introduced in the last section of this article to measure a series of three differential flux levels in the detector qubit. The waveforms needed to accomplish this task are depicted in Fig.~\\ref{fig:deltawaveforms}. Here, the dc-SQUID and QFP connected to the detector qubit are used in latching readout mode while the detector qubit is annealed in the presence of a differential flux bias $\\Phi_m\\pm\\delta\\Phi_m$ which is controlled via feedback. Meanwhile, the source qubit's CCJJ bias is pulsed to an intermediate level $-\\Phi_0<\\Phi^x_{\\text{ccjj}}<-\\Phi_0\/2$ in the presence of an initialization flux bias $\\pm\\Phi_q^i$.
By choosing two appropriate pairs of levels $\\pm\\Phi_q^i$, as indicated by the solid points $1\\pm$ and $2\\pm$ in Fig.~\\ref{fig:largedeltatrace}, one can extract $\\left|I_q^p\\right|$ and $\\chi$ from the two differential flux measurements. In order to extract $\\Delta_q$, we then choose a pair of $\\pm\\Phi_q^i$ in the centre of the trace, as indicated by the solid points $3\\pm$, from which we obtain the central slope $d\\Phi_d^x\/d\\Phi_q^x$. Taking the first derivative of Eq.~(\\ref{eqn:HalfCondGeneral}) and evaluating at $\\Phi_q^x=0$ yields\n\\begin{equation}\n\\label{eqn:centralslope}\n\\frac{d\\Phi_d^x}{d\\Phi_q^x}-\\chi=\\frac{2M_{\\text{eff}}\\left|I_q^p\\right|^2}{\\sqrt{\\left(2J\\right)^2+\\Delta_q^2}}\\tanh\\left[\\frac{\\sqrt{\\left(2J\\right)^2+\\Delta_q^2}}{2k_BT}\\right] \\; .\n\\end{equation}\n\n\\noindent Given independent estimates of all other parameters, one can then extract $\\Delta_q$ from this final differential flux measurement.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1IpSummary_ver6.pdf} \\\\\n\\includegraphics[width=3.25in]{QU1DeltaSummary_ver6.pdf}\n\\caption{\\label{fig:DeltaAndIp} (color online) a) Magnitude of the persistent current $\\left|I_q^p\\right|$ as a function of $\\Phi^x_{\\text{ccjj}}$. b) Tunneling energy $\\Delta_q$ between two lowest lying states of the CCJJ rf-SQUID as a function of $\\Phi^x_{\\text{ccjj}}$, as characterized by macroscopic resonant tunneling [MRT] and Landau-Zener [LZ] in the incoherent regime and coupled groundstate persistent current ($\\bra{g}\\hat{I}_q^p\\ket{g}$) in the coherent regime. Solid curves are the predictions of the ideal CCJJ rf-SQUID model using independently calibrated $L_q$, $L_{\\text{ccjj}}$, $I_q^c$ and $C_q$ with no free parameters.}\n\\end{figure}\n\nA summary of experimental values of the qubit parameters $\\left|I_q^p\\right|$ and $\\Delta_q$ versus $\\Phi^x_{\\text{ccjj}}$ is shown in Fig.~\\ref{fig:DeltaAndIp}.
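The reduction from Eq.~(\ref{eqn:HalfCondGeneral}) to Eq.~(\ref{eqn:centralslope}) can be checked numerically. In energy units the central slope is $d\epsilon_d/d\epsilon_q=(2J/F_0)\tanh\left(F_0/2k_BT\right)$ with $F_0\equiv\sqrt{(2J)^2+\Delta_q^2}$; multiplying by $\left|I_q^p\right|/\left|I_d^p\right|$ recovers the flux-units form. The sketch below verifies this with illustrative, dimensionless parameters:

```python
import numpy as np

def eps_d_balance(eps_q, delta_q, j, kbt):
    """Detector balancing condition, Eq. (HalfCondGeneral), energy units."""
    fp = np.sqrt((eps_q + 2.0 * j) ** 2 + delta_q**2)
    fm = np.sqrt((eps_q - 2.0 * j) ** 2 + delta_q**2)
    return (fp - fm) / 2.0 + kbt * np.log(
        (1.0 + np.exp(-fp / kbt)) / (1.0 + np.exp(-fm / kbt)))

delta_q, j, kbt = 1.0, 0.15, 0.3   # illustrative, dimensionless

# Central slope from a symmetric numerical derivative at eps_q = 0
h = 1e-6
slope_num = (eps_d_balance(h, delta_q, j, kbt)
             - eps_d_balance(-h, delta_q, j, kbt)) / (2.0 * h)

# Analytic form equivalent to Eq. (centralslope)
f0 = np.sqrt((2.0 * j) ** 2 + delta_q**2)
slope_ana = (2.0 * j / f0) * np.tanh(f0 / (2.0 * kbt))
assert abs(slope_num - slope_ana) < 1e-8
```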
In Fig.~\\ref{fig:DeltaAndIp}, we have taken $\\Delta_q$ from LOMRT and Landau-Zener experiments in the incoherent regime and from the LF flux noise rejecting persistent current procedure discussed above in the coherent regime. The large gap between the three sets of measurements arises for two reasons: First, the relatively low bandwidth of our bias lines does not allow us to perform MRT or Landau-Zener measurements at higher $\\Delta_q$ where the dynamics are faster. Second, while the coherent regime method worked for $\\Delta_q>k_BT$, it proved difficult to reliably extract $\\Delta_q$ in the opposite limit. As such, we cannot make any precise statements regarding the value of $\\Phi^x_{\\text{ccjj}}$ which serves as the delineation between the coherent and incoherent regimes based upon the data shown in Fig.~\\ref{fig:DeltaAndIp}b. Regulating the device at lower temperature would assist in extending the utility of the coherent regime method to lower $\\Delta_q$. On the other hand, given that Eq.~(\\ref{eqn:LOMRTFit}) predicts that $\\Gamma\\propto\\Delta_q^2$, one would have to augment the experimental bandwidth by at least two orders of magnitude to gain one order of magnitude in $\\Delta_q$ via either MRT or LZ experiments.\n\nThe solid curves in Fig.~\\ref{fig:DeltaAndIp} were generated with the ideal CCJJ rf-SQUID model using the independently calibrated $L_q=265.4\\,$pH, $L_{\\text{ccjj}}=26\\,$pH, $I_q^c=3.103\\,\\mu$A and $C_q=190\\,$fF. Note that there are no free parameters. It can be seen that the agreement between theory and experiment is quite reasonable.
Thus we reach the second key conclusion of this article: The CCJJ rf-SQUID can be identified as a flux qubit as the measured $\\left|I_q^p\\right|$ and $\\Delta_q$ agree with the predictions of a quantum mechanical Hamiltonian whose parameters were independently calibrated.\n\n\n\\section{Noise}\n\nWith the identification of the CCJJ rf-SQUID as a flux qubit firmly established, we now turn to assessing the relative quality of this device in comparison to other flux qubits reported upon in the literature. In this section, we present measurements of the low frequency flux and critical current spectral noise densities, $S_{\\Phi}(f)$ and $S_{I}(f)$, respectively. Finally, we provide explicit links between $S_{\\Phi}(f)$ and the free induction (Ramsey) decay time $T^*_{2}$ that would be relevant were this flux qubit to be used as an element in a gate model quantum information processor.\n\n\\subsection{Flux Noise}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{QU1FluxNoise_ver1.pdf}\n\\caption{\\label{fig:fluxnoise} (color online) Low frequency flux noise in the CCJJ rf-SQUID flux qubit. Data [points] have been fit to Eq.~(\\ref{eqn:1OverF}) [solid curve].}\n\\end{figure}\n\n\nLow frequency ($1\/f$) flux noise is ubiquitous in superconducting devices and is considered a serious impediment to the development of large scale solid state quantum information processors \\cite{1OverF}. We have performed systematic studies of this property using a large number of flux qubits of varying geometry \\cite{1OverFGeometry} and, more recently, as a function of materials and fabrication parameters. These latter studies have aided in the reduction of the amplitude of $1\/f$ flux noise in our devices and will be the subject of a forthcoming publication. Using the methods described in Ref.~\\onlinecite{1OverFGeometry}, we have generated the one-sided flux noise power spectral density $S_{\\Phi}(f)$ shown in Fig.~\\ref{fig:fluxnoise}. 
These data have been fit to the generic form\n\\begin{equation}\n\\label{eqn:1OverF}\nS(f)=\\frac{A^2}{f^{\\alpha}}+w_n\\; ,\n\\end{equation}\n\n\\noindent with best fit parameters $\\alpha=0.95\\pm 0.05$, $\\sqrt{w_n}=9.7\\pm 0.5\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$ and amplitude $A$ such that $\\sqrt{S_{\\Phi}(1\\,\\text{Hz})}=1.3^{+0.7}_{-0.5}\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$. Thus we reach the third key conclusion of this article: We have demonstrated that it is possible to achieve $1\/f$ flux noise levels with Nb wiring that are as low as the best Al wire qubits reported in the literature \\cite{1OverF,1OverFFluxQubit1,1OverFFluxQubit2}. Moreover, we have measured similar spectra from a large number of identical flux qubits, both on the same and different chips, and can state with confidence that the $1\/f$ amplitude reported herein is reproducible. Given the experimentally observed geometric scaling of $S_{\\Phi}(1\\,\\text{Hz})$ in Ref.~\\onlinecite{1OverFGeometry} and the relatively large size of our flux qubit bodies, we consider the prospects of observing even lower $1\/f$ noise in smaller flux qubits from our fabrication facility to be very promising.\n\n\\subsection{Critical Current Noise}\n\nA second noise metric of note is the critical current noise spectral density $S_I(f)$. This quantity has been studied extensively and a detailed comparison of experimental results is presented in Ref.~\\onlinecite{vanHarlingen}. A recent study of the temperature and geometric dependence of critical current noise has been published in Ref.~\\onlinecite{NewCriticalCurrentNoise}. Based upon Eq.~(18) of Ref.~\\onlinecite{vanHarlingen}, we estimate that the $1\/f$ critical current noise from a single $0.6\\,\\mu$m diameter junction, as found in the CCJJ rf-SQUID flux qubit, will have an amplitude such that $\\sqrt{S_I(1\\,\\text{Hz})}\\sim 0.2\\,$pA$\/\\sqrt{\\text{Hz}}$. Unfortunately, we were unable to directly measure critical current noise in the flux qubit. 
While the QFP-enabled readout provided high fidelity qubit state discrimination when qubits are fully annealed to $\\Phi^x_{\\text{ccjj}}=-\\Phi_0$, this readout mechanism simply lacked the sensitivity required for performing high resolution critical current noise measurements. In lieu of a measurement of $S_I(f)$ from a qubit, we have characterized this quantity for the dc-SQUID connected to the qubit in question. The dc-SQUID had two $0.6\\,\\mu$m junctions connected in parallel. A time trace of the calibrated switching current $I_{\\text{sw}}\\approx I_c$ was obtained by repeating the waveform sequence depicted in Fig.~\\ref{fig:unipolarannealing}b except with $\\Phi^x_{\\text{latch}}=-\\Phi_0\/2$ at all times (QFP disabled, minimum persistent current) and $\\Phi^x_{\\text{ro}}=0$ to provide minimum sensitivity to flux noise. Assuming that the critical current noise from each junction is uncorrelated, the best that we could establish was an upper bound of $\\sqrt{S_I(1\\,\\text{Hz})}\\lesssim 7\\,$pA$\/\\sqrt{\\text{Hz}}$ for a single $0.6\\,\\mu$m diameter junction.\n\nGiven the upper bound cited above for critical current noise from a single junction, we now turn to assessing the relative impact of this quantity upon the CCJJ rf-SQUID flux qubit. It is shown in Appendix B that fluctuations in the critical currents of the individual junctions of a CCJJ generate apparent flux noise in the flux qubit by modulating $\\Phi_q^0$. Inserting critical current fluctuations of magnitude $\\delta I_c\\lesssim 7\\,$pA$\/\\sqrt{\\text{Hz}}$ and a mean junction critical current $I_c=I_q^c\/4\\sim 0.8\\,\\mu$A into Eq.~(\\ref{eqn:4JOffsetFluctuation}) yields qubit degeneracy point fluctuations $\\left|\\delta\\Phi_q^0\\right|\\lesssim 0.1\\,\\mu\\Phi_0\/\\sqrt{\\text{Hz}}$. This final result is at least one order of magnitude smaller than the amplitude of $1\/f$ flux noise inferred from the data in Fig.~\\ref{fig:fluxnoise}.
As such, we consider the effects of critical current noise in the CCJJ rf-SQUID to be tolerable.\n\n\\subsection{Estimation of $T^*_{2}$}\n\nWhile measurements of noise power spectral densities are the most direct way of reporting upon and comparing between different qubits, our research group is frequently asked what the dephasing time of our flux qubits is. The answer presumably depends very strongly upon bias settings, for recall that we have measured properties of the CCJJ rf-SQUID flux qubit in both the coherent and incoherent regime. Given that our apparatuses contain only low bandwidth bias lines for enabling AQO, we are unable to measure dephasing within our own laboratory. Collaborative efforts to measure dephasing for our flux qubits are in progress. In the meantime, we provide below a rough estimate for our flux qubits if they were biased to the optimal point, $\\Phi^x_q=\\Phi_q^0$, based upon the measured $S_{\\Phi}(f)$ and subjected to a free induction decay, or Ramsey fringe, experiment. Referring to Eq.~(33a) of Ref.~\\onlinecite{Martinis} and key results from Ref.~\\onlinecite{Schnirman}, the mean squared phase noise for a flux qubit at the optimal point will be given by\n\\begin{equation}\n\\label{eqn:dephasing1}\n\\left<\\phi_n^2(t)\\right>=\\frac{1}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right|\\right)^4}{2\\Delta^2}\\int^{\\Delta\/h}_{f_m}\\! df S_{\\Phi^2}(f)\\frac{\\sin^2(\\pi f t)}{(\\pi f)^2} \\; ,\n\\end{equation}\n\n\\noindent where $S_{\\Phi^2}(\\omega)$ represents the quadratic flux noise spectral density and $f_m$ is the measurement cutoff frequency. Assuming that the first order spectral density $S_{\\Phi}(\\omega)=2\\pi A^2\/\\omega$, then $S_{\\Phi^2}(\\omega)$ can be written as \n\\begin{eqnarray}\n\\label{eqn:sphisquared}\nS_{\\Phi^2}(\\omega) & = & \\frac{1}{2\\pi}\\int\\! dt e^{-i\\omega t}\\left<\\Phi_n^2(t) \\Phi_n^2(0)\\right> \\nonumber\\\\\n & = & \\frac{1}{2\\pi}\\int\\!
dt e^{-i\\omega t} \\int d\\omega^{\\prime}\\frac{2\\pi A^2}{\\omega^{\\prime}}e^{i\\omega^{\\prime}t}\\int d\\omega^{\\prime\\prime}\\frac{2\\pi A^2}{\\omega^{\\prime\\prime}}e^{i\\omega^{\\prime\\prime}t} \\nonumber\\\\\n & = & 8\\pi^2 A^4\\frac{\\ln\\left(\\omega\/\\omega_{\\text{ir}}\\right)}{\\omega} \\; ,\n\\end{eqnarray}\n\n\\noindent where $\\omega_{\\text{ir}}\\equiv 2\\pi f_{\\text{ir}}$ denotes an infrared cutoff of the $1\/f$ noise spectral density. Inserting Eq.~(\\ref{eqn:sphisquared}) into Eq.~(\\ref{eqn:dephasing1}) and rendering the integral dimensionless then yields:\n\\begin{equation}\n\\label{eqn:dephasing2}\n\\left<\\phi_n^2(t)\\right>=\\frac{t^2}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right| A\\right)^4}{\\pi\\Delta^2}\\int^{\\Delta t\/h}_{f_{\\text{min}}t}\\! dx \\frac{\\ln\\left(x\/f_{\\text{ir}}t\\right)\\sin^2(\\pi x)}{x^3} \\; ,\n\\end{equation}\n\n\\noindent where $f_{\\text{min}}=\\max\\left(f_m,f_{\\text{ir}}\\right)$. We have numerically studied the behavior of the integral in Eq.~(\\ref{eqn:dephasing2}). In the very long measurement time limit the integral is cut off by $f_{\\text{ir}}$ and varies as $1\/t^2$, which then cancels the factor of $t^2$ in the numerator of Eq.~(\\ref{eqn:dephasing2}). This means that the mean squared phase noise eventually reaches a finite limit. However, the more experimentally relevant limit is $f_m\\gg f_{\\text{ir}}$, for which we found empirically that the integral varies roughly as $5\\times\\left[\\ln\\left(f_m\/f_{\\text{ir}}\\right)\\right]^2$ over many orders of magnitude in the argument of the logarithm.
In this latter limit the result is independent of $t$, so Eq.~(\\ref{eqn:dephasing2}) can be rewritten as $\\left<\\phi_n^2(t)\\right>=t^2\/(T^*_{2})^2$, which then yields the following formula for $T^*_{2}$:\n\\begin{equation}\n\\label{eqn:Tphi}\nT^*_{2}\\approx\\left[\\frac{1}{\\hbar^2}\\frac{\\left(2\\left|I_q^p\\right| A\\right)^4}{\\pi\\Delta^2}5\\left[\\ln\\left(f_m\/f_{\\text{ir}}\\right)\\right]^2\\right]^{-1\/2} \\; .\n\\end{equation}\n\nSince flux noise spectra seem to obey the $1\/f$ form down to at least $0.1\\,$mHz and researchers are generally concerned with dephasing over times of order $1\\,\\mu$s, then it is fair to consider $f_m\/f_{\\text{ir}}\\sim 10^{10}$. For a nominal value of $\\Phi^x_{\\text{ccjj}}$ such that the flux qubit is in the coherent regime, say $-0.652\\,\\Phi_0$, the qubit parameters are $\\Delta_q\/h\\approx 2\\,$GHz and $\\left|I_q^p\\right|\\approx 0.7\\,\\mu$A. Substituting these quantities into Eq.~(\\ref{eqn:Tphi}) then yields $T^*_{2}\\sim 150\\,$ns. This estimate of the dephasing time is comparable to that observed in considerably smaller flux qubits with comparable $1\/f$ flux noise levels \\cite{1OverFFluxQubit1,1OverFFluxQubit2}.\n\n\n\\section{Conclusions}\n\nOne can draw three key conclusions from the work presented herein: First, the CCJJ rf-SQUID is a robust and scalable device in that it allows for in-situ correction for parametric variations in Josephson junction critical currents and device inductance, both within and between flux qubits using only static flux biases. Second, the measured flux qubit properties, namely the persistent current $\\left|I_q^p\\right|$ and tunneling energy $\\Delta_q$, agree with the predictions of a quantum mechanical Hamiltonian whose parameters have been independently calibrated, thus justifying the identification of this device as a flux qubit.
Third, it has been experimentally demonstrated that the low frequency flux noise in this all Nb wiring flux qubit is comparable to the best all Al wiring devices reported upon in the literature. Taken together, these three conclusions represent a significant step forward in the development of useful large scale superconducting quantum information processors.\n\nWe thank J.~Hilton, P.~Spear, A.~Tcaciuc, F.~Cioata, M.~Amin, F.~Brito, D.~Averin, A.~Kleinsasser and G.~Kerber for useful discussions. Siyuan Han was supported in part by NSF Grant No. DMR-0325551.\n\n\\begin{appendix}\n\n\\section{CJJ rf-SQUID}\n\nLet the qubit and cjj loop phases be defined as \n\\begin{subequations}\n\\begin{equation}\n\\varphi_q\\equiv\\left(\\varphi_1+\\varphi_2\\right)\/2 \\; ,\n\\end{equation} \n\\begin{equation}\n\\varphi_{\\text{cjj}}\\equiv\\varphi_1-\\varphi_2 \\; ,\n\\end{equation}\n\\end{subequations}\nrespectively. Furthermore, assume that the CJJ loop has an inductance $L_{\\text{cjj}}$ that is divided symmetrically between the two paths.
Using trigonometric relations, one can write a Hamiltonian for this system in terms of modes in the $q$ and cjj loops that has the following form:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:2J2DH}\n{\\cal H} & = & \\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{(\\varphi_n-\\varphi_n^x)^2}{2}\\right] \\nonumber\\\\\n & & -U_q\\beta_+\\cos\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\cos\\left(\\varphi_q\\right) \\nonumber\\\\\n & & +U_q\\beta_-\\sin\\left(\\frac{\\varphi_{\\text{cjj}}}{2}\\right)\\sin\\left(\\varphi_q\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\beta_{\\pm}=\\frac{2\\pi L_q\\left(I_{1}\\pm I_{2}\\right)}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the sum is over $n\\in\\left\\{q,\\text{cjj}\\right\\}$, $C_q\\equiv C_1+C_2$, $1\/C_{\\text{cjj}}\\equiv 1\/C_1+1\/C_2$, $U_n\\equiv (\\Phi_0\/2\\pi)^2\/L_n$, $L_q\\equiv L_{\\text{body}}+L_{\\text{cjj}}\/4$ and $[\\Phi_0\\varphi_n\/2\\pi,Q_n]=i\\hbar$. The Josephson potential energy of Hamiltonian (\\ref{eqn:2J2DH}) can be rearranged by defining an angle $\\theta$ such that $\\tan\\theta=(\\beta_-\/\\beta_+)\\tan\\left(\\varphi_{\\text{cjj}}\/2\\right)$. Further trigonometric manipulation then yields Eqs.~(\\ref{eqn:2JHeff})-(\\ref{eqn:2Jbetapm}). 
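The $\theta$ rearrangement above is easy to verify numerically. The following sketch (with arbitrary illustrative values for $\beta_\pm$ and the CJJ bias, not measured device parameters) checks that the two Josephson terms combine into a single cosine whose amplitude and phase offset take the forms quoted for the analogous CCJJ expressions in Appendix B:

```python
import math

# Illustrative (not measured) junction asymmetry and CJJ bias
beta_p, beta_m = 2.0, 0.04            # beta_+ and beta_-
phi_cjj = 0.65 * 2 * math.pi          # external CJJ phase, 2*pi*Phi_cjj/Phi_0

# Single-cosine amplitude and offset produced by the theta strategy
x = beta_m / beta_p * math.tan(phi_cjj / 2)
beta_eff = beta_p * math.cos(phi_cjj / 2) * math.sqrt(1 + x ** 2)
phi_0 = -math.atan(x)

# Compare the two Josephson terms against -beta_eff * cos(phi_q - phi_0)
for k in range(1001):
    phi_q = -math.pi + 2 * math.pi * k / 1000
    two_terms = (-beta_p * math.cos(phi_cjj / 2) * math.cos(phi_q)
                 + beta_m * math.sin(phi_cjj / 2) * math.sin(phi_q))
    single_cos = -beta_eff * math.cos(phi_q - phi_0)
    assert abs(two_terms - single_cos) < 1e-12
```

Note that the identity holds even where $\cos(\varphi_{\text{cjj}}/2)<0$, in which case $\beta_{\text{eff}}$ simply changes sign.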
\n\n\\section{CCJJ rf-SQUID}\n\nFollowing the same logic as for the CJJ rf-SQUID, one can define four orthogonal quantum mechanical degrees of freedom as follows:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:ccjjphases}\n\\varphi_L & \\equiv & \\varphi_1-\\varphi_2 \\; ;\\\\\n\\varphi_R & \\equiv & \\varphi_3-\\varphi_4 \\; ;\\\\\n\\varphi_{\\text{ccjj}} & \\equiv & \\varphi_{\\ell}-\\varphi_r=\\frac{\\varphi_1+\\varphi_2}{2}-\\frac{\\varphi_3+\\varphi_4}{2} \\; ;\\\\\n\\varphi_q & \\equiv & \\frac{\\varphi_{\\ell}+\\varphi_r}{2}=\\frac{\\varphi_1+\\varphi_2+\\varphi_3+\\varphi_4}{4}\\; .\n\\end{eqnarray}\n\\end{subequations}\n\n\\noindent Employing the same strategy as in Appendix A, one can use trigonometric identities to first express the Josephson potential in terms of the $L$ and $R$ loop modes:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J4DH}\n{\\cal H} & = & \\sum_n\\frac{Q_n^2}{2C_n}+\\sum_mU_m\\frac{(\\varphi_m-\\varphi_m^x)^2}{2} \\nonumber\\\\\n & & -U_q\\beta_{L+}\\cos\\left(\\frac{\\varphi_L}{2}\\right)\\cos\\left(\\varphi_{\\ell}\\right) \\nonumber\\\\\n & & +U_q\\beta_{L-}\\sin\\left(\\frac{\\varphi_L}{2}\\right)\\sin\\left(\\varphi_{\\ell}\\right) \\nonumber \\\\\n & & -U_q\\beta_{R+}\\cos\\left(\\frac{\\varphi_R}{2}\\right)\\cos\\left(\\varphi_{r}\\right) \\nonumber\\\\\n & & +U_q\\beta_{R-}\\sin\\left(\\frac{\\varphi_R}{2}\\right)\\sin\\left(\\varphi_{r}\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:betalrpm}\n\\beta_{L(R),\\pm}\\equiv\\frac{2\\pi L_q\\left(I_{1(3)}\\pm I_{2(4)}\\right)}{\\Phi_0} \\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the first sum is over $n\\in\\left\\{L,R,\\ell,r\\right\\}$ and the second sum is over closed inductive loops $m\\in\\left\\{L,R,\\text{ccjj},q\\right\\}$. As before, each of the modes obeys the commutation relation $[\\Phi_0\\varphi_n\/2\\pi,Q_n]=i\\hbar$. 
Here, $1\/C_{L(R)}=1\/C_{1(3)}+1\/C_{2(4)}$, $C_{\\ell(r)}=C_{1(3)}+C_{2(4)}$ and $U_m=(\\Phi_0\/2\\pi)^2\/L_m$. \n\nWe have found it adequate for our work to assume that $L_{L,R}\/L_q\\ll 1$, which then allows one to reduce the four dimensional system given in Hamiltonian (\\ref{eqn:4J4DH}) to two dimensions. Consequently, we will substitute $\\varphi_{L(R)}=\\varphi^x_{L(R)}$ and ignore the $L$ and $R$ kinetic terms henceforth. Assuming that the inductance of the ccjj loop is divided equally between the two branches one can then write $L_q=L_{\\text{body}}+L_{\\text{ccjj}}\/4$. With these approximations and the $\\theta$ strategy presented in Appendix A, one can rearrange the Josephson potential terms to yield the following:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J2DHver1}\n{\\cal H} & = & \\sum_n\\frac{Q_n^2}{2C_n}+\\sum_mU_m\\frac{(\\varphi_m-\\varphi_m^x)^2}{2} \\nonumber\\\\\n & & -U_q\\beta_{L}\\cos\\left(\\varphi_{\\ell}-\\varphi_L^0\\right) \\nonumber\\\\\n & & -U_q\\beta_{R}\\cos\\left(\\varphi_{r}-\\varphi_R^0\\right) \\; ;\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{eqnarray}\n\\label{eqn:betalr}\n\\beta_{L(R)} & = & \\beta_{L(R),+}\\cos\\left(\\frac{\\varphi_{L(R)}^x}{2}\\right) \\\\\n & & \\times\\sqrt{1+\\left[\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\tan\\left(\\frac{\\varphi_{L(R)}^x}{2}\\right)\\right]^2} \\; ; \\nonumber\n\\end{eqnarray}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JMinorOffset}\n\\varphi_{L(R)}^0\n =-\\arctan\\left(\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\tan(\\varphi_{L(R)}^x\/2)\\right)\\; ,\n\\end{equation}\n\\end{subequations}\n\n\\noindent where the first sum is over $n\\in\\left\\{\\ell,r\\right\\}$ and the second sum is over $m\\in\\left\\{\\text{ccjj},q\\right\\}$. The Josephson potential is given by a sum of two cosines, as encountered in the CJJ rf-SQUID derivation of Hamiltonian (\\ref{eqn:2J2DH}) from Hamiltonian (\\ref{eqn:Hphase}). 
These two terms can be rewritten in the same manner by defining $\\beta_{\\pm}=\\beta_L\\pm\\beta_R$. The result, similar to Hamiltonian (\\ref{eqn:2J2DH}), can then be subjected to the $\\theta$ strategy to yield\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{eqn:4J2DHver2}\n{\\cal H} & = & \\sum_n\\left[\\frac{Q_n^2}{2C_n}+U_n\\frac{(\\varphi_n-\\varphi_n^x)^2}{2}\\right] \\nonumber\\\\\n & & -U_q\\beta_{\\text{eff}}\\cos\\left(\\varphi_q-\\varphi_q^0\\right) \\; ,\n \\end{eqnarray}\n \n\\noindent where the sum is over $n\\in\\left\\{q,\\text{ccjj}\\right\\}$ and the capacitances are defined as $C_q=C_1+C_2+C_3+C_4$ and $1\/C_{\\text{ccjj}}=1\/(C_1+C_2)+1\/(C_3+C_4)$. The other parameters are defined as\n\\begin{equation}\n\\label{eqn:4JBeff}\n\\beta_{\\text{eff}}=\\beta_+\\cos\\left(\\frac{\\gamma}{2}\\right)\\sqrt{1+\\left[\\frac{\\beta_-}{\\beta_+}\\tan\\left(\\frac{\\gamma}{2}\\right)\\right]^2} \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JQOffset}\n\\varphi_q^0=\\frac{\\varphi_L^0+\\varphi_R^0}{2}+\\gamma_0 \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JCCJJPhase}\n\\gamma\\equiv\\varphi_{\\text{ccjj}}-\\left(\\varphi_L^0-\\varphi_R^0\\right) \\; ;\n\\end{equation}\n\\vspace{-12pt}\n\\begin{equation}\n\\label{eqn:4JCCJJMonkey}\n\\gamma_0\\equiv -\\arctan\\left(\\frac{\\beta_-}{\\beta_+}\\tan(\\gamma\/2)\\right) \\; ;\n\\end{equation}\n\\begin{equation}\n\\label{eqn:betaccjjpm}\n\\beta_{\\pm}\\equiv \\beta_L\\pm\\beta_R \\; .\n\\end{equation}\n\\end{subequations}\n\nHamiltonian (\\ref{eqn:4J2DHver2}) inherits much of its complexity from junction asymmetry both within the minor loops, which gives rise to $\\varphi_{L(R)}^0$, and effective junction asymmetry between the minor loops, which gives rise to $\\gamma_0$. For arbitrary external flux biases and nominal spread in junction critical current, the CCJJ rf-SQUID offers no obvious advantage over the CJJ rf-SQUID. 
However, upon choosing biases $\\Phi_L^x$ and $\\Phi_R^x$ such that \n\\begin{equation}\n\\label{eqn:balanced}\n\\beta_L=\\beta_R \\;\\; ,\n\\end{equation}\n\n\\noindent one obtains $\\beta_-=0$, and consequently $\\gamma_0=0$. With these substitutions, Hamiltonian (\\ref{eqn:4J2DHver2}) yields Hamiltonian (\\ref{eqn:4JHeff}). Note that for $\\beta_{L(R),-}\/\\beta_{L(R),+}\\ll 1$ and modest $\\Phi^x_{L(R)}$, the so-called CCJJ balancing condition given by Eqs.~(\\ref{eqn:betalr}) and (\\ref{eqn:balanced}) can be written approximately as\n\\begin{displaymath}\n\\beta_{L,+}\\cos\\left(\\frac{\\varphi_L^x}{2}\\right) \\approx \\beta_{R,+}\\cos\\left(\\frac{\\varphi_R^x}{2}\\right) \\; ,\n\\end{displaymath}\n\n\\noindent which, upon solving for $\\varphi_L^x$, yields\n\\begin{equation}\n\\label{eqn:balancedapprox}\n\\Phi^x_{L} = \\frac{\\Phi_0}{\\pi}\\arccos\\left[\\frac{\\beta_{R,+}}{\\beta_{L,+}}\\cos\\left(\\frac{\\pi\\Phi^x_{R}}{\\Phi_0}\\right)\\right] \\; .\n\\end{equation}\n\nIt is possible for critical current noise to couple into the $\\varphi_q$ degree of freedom in any compound junction rf-SQUID qubit via modulation of the junction asymmetry-dependent apparent qubit flux offset $\\Phi_q^0$. In the case of the CCJJ rf-SQUID, all three quantities on the right side of Eq.~(\\ref{eqn:4JQOffset}) are ultimately related to the critical currents of the individual junctions. 
Given typical junction parameter spreads from our fabrication facility,\n\\begin{displaymath}\n\\left|\\frac{\\beta_{L(R),-}}{\\beta_{L(R),+}}\\right|=\\left|\\frac{I_{1(3)}-I_{2(4)}}{I_{1(3)}+I_{2(4)}}\\right|\\sim {\\cal O}(0.01) \\; ,\n\\end{displaymath}\n\n\\noindent so one can write an approximate expression for $\\varphi^0_{L(R)}$ using Eq.~(\\ref{eqn:4JMinorOffset}):\n\\begin{eqnarray}\n\\label{eqn:4JMinorOffsetApprox}\n\\varphi^0_{L(R)} & \\approx & -\\frac{I_{1(3)}-I_{2(4)}}{I_{1(3)}+I_{2(4)}}\\tan\\left(\\frac{\\varphi^x_{L(R)}}{2}\\right) \\nonumber\\\\\n & \\approx & -\\frac{I_{1(3)}-I_{2(4)}}{2I_c}\\tan\\left(\\frac{\\varphi^x_{L(R)}}{2}\\right) \\; ,\n\\end{eqnarray}\n\n\\noindent and for $\\gamma_0$ using Eqs.~(\\ref{eqn:betalr}), (\\ref{eqn:4JCCJJPhase}) and (\\ref{eqn:4JCCJJMonkey}):\n\\begin{eqnarray}\n\\label{eqn:4JCCJJMonkeyApprox}\n\\gamma_0 & \\approx & \\frac{(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)-(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)}{(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)+(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)}\\tan\\left(\\frac{\\gamma}{2}\\right) \\nonumber\\\\\n & \\approx & \\frac{(I_3+I_4)\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)-(I_1+I_2)\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)}{2I_c\\left[\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)+\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)\\right]}\\tan\\left(\\frac{\\gamma}{2}\\right) , \\nonumber\\\\\n & & \n\\end{eqnarray}\n\n\\noindent where $I_c$ represents the mean critical current of a single junction. The CCJJ rf-SQUID is intended to be operated with only small flux biases in the minor loops, thus $\\cos\\left(\\frac{\\varphi_L^x}{2}\\right)\\approx\\cos\\left(\\frac{\\varphi_R^x}{2}\\right)\\approx 1$. 
It is also reasonable to assume that $\\gamma\\approx\\varphi^x_{\\text{ccjj}}$\nas the corrections to $\\tan(\\gamma\/2)$ from $\\varphi^0_{L(R)}$ and from the effective two-dimensionality of the rf-SQUID potential will be very small. Inserting Eqs.~(\\ref{eqn:4JMinorOffsetApprox}) and (\\ref{eqn:4JCCJJMonkeyApprox}) into Eq.~(\\ref{eqn:4JQOffset}) then yields\n\\begin{eqnarray}\n\\label{eqn:4JOffsetApprox}\n\\varphi^0_q & \\approx & -\\frac{I_1}{2I_c}\\left[\\tan\\left(\\frac{\\varphi_L^x}{2}\\right)+\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n & & -\\frac{I_2}{2I_c}\\left[-\\tan\\left(\\frac{\\varphi_L^x}{2}\\right)+\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n & & -\\frac{I_3}{2I_c}\\left[\\tan\\left(\\frac{\\varphi_R^x}{2}\\right)-\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\nonumber\\\\\n& & -\\frac{I_4}{2I_c}\\left[-\\tan\\left(\\frac{\\varphi_R^x}{2}\\right)-\\frac{1}{2}\\tan\\left(\\frac{\\varphi^x_{\\text{ccjj}}}{2}\\right)\\right] \\; .\n\\end{eqnarray}\n\nFor the typical operating parameters described in this article, $\\Phi^x_{L(R)}\/\\Phi_0\\sim0.1$ and the device acts as a qubit for $\\Phi^x_{\\text{ccjj}}\/\\Phi_0\\sim 0.65$. For these flux biases, the magnitudes of the terms within the square braces in Eq.~(\\ref{eqn:4JOffsetApprox}) are all of order 1. Therefore, for general flux bias conditions, the apparent qubit flux offset is roughly given by\n\\begin{displaymath}\n\\Phi_q^0 \\approx -\\frac{\\Phi_0}{4\\pi}\\frac{(I_1+I_2)-(I_3+I_4)}{I_c} \\; .\n\\end{displaymath}\n\n\\noindent Assume that each junction experiences critical current fluctuations of magnitude $\\delta I_c$. 
If each junction's fluctuations are independent, \nthen the root mean square variation of the qubit degeneracy point $\\left|\\delta\\Phi_q^0\\right|$ will be \n\\begin{equation}\n\\label{eqn:4JOffsetFluctuation}\n\\left|\\delta\\Phi_q^0\\right| \\approx \\frac{\\Phi_0}{2\\pi}\\frac{\\delta I_c}{I_c} \\; .\n\\end{equation}\n\n\\noindent Thus, critical current fluctuations generate apparent flux noise in the CCJJ rf-SQUID flux qubit.\n\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{SecIntro}\nThe discovery of the interstellar object `Oumuamua in 2017 \\citep{MWM} led to a substantial increase in the expected number density of interstellar objects relative to certain earlier estimates \\citep{MTL09}. Recently, the identification of a putative interstellar meteor by \\citet{SL19a} enabled the determination of the flux of extrasolar objects impacting the Earth's atmosphere \\citep{SL19b}. \n\nThere are several avenues to analyze objects which originate beyond the Solar system (in short, extrasolar). First, one can send out spacecrafts to investigate interstellar dust in the neighborhood of Earth \\citep{LBG00}, unbound objects like `Oumuamua \\citep{SL18}, gravitationally captured objects within our Solar system \\citep{LL18}, or even nearby exoplanets like Proxima b.\\footnote{\\url{https:\/\/breakthroughinitiatives.org\/initiative\/3}} A second possibility entails remote sensing studies of interstellar meteors that burn up in Earth's atmosphere \\citep{SL19b} or objects that graze the Sun \\citep{FL19}. We will instead address a third route in this Letter: combing through lunar samples to search for extrasolar material. The same approach is utilizable, in principle, for detecting extrasolar material deposited on the surfaces of asteroids and comets.\n\nIt is very beneficial that extrasolar objects impact not only the Earth but also the Moon. The latter is advantageous from two different standpoints. 
First, as the Moon lacks an atmosphere, there is minimal ablation of small objects relative to Earth, consequently ensuring that they are preserved and do not burn up before impacting the surface. Second, it is well-known that the Moon is geologically inert with respect to the Earth over the past few Gyr \\citep{JHA12}. This feature ensures that the Moon, unlike the Earth, preserves a comprehensive geological record dating back almost to its formation around $4.5$ Ga.\n\nFrom a practical standpoint, the strategy of searching lunar samples has two benefits with respect to the alternatives mentioned earlier. First, the Apollo missions returned $\\sim 400$ kg of lunar material to the Earth, ensuring that it is feasible to examine these samples for extrasolar debris. Second, both the federal and private sectors have expressed an interest in going back to the Moon in the upcoming decade,\\footnote{\\url{https:\/\/www.nasa.gov\/specials\/apollo50th\/back.html}} and potentially establishing lunar bases in the long run.\\footnote{\\url{http:\/\/www.asi.org\/}} There are numerous benefits expected to accrue from the sustained \\emph{in situ} exploration of the Moon in areas as diverse as high-energy physics, medicine, planetary science and astrobiology \\citep{Cock10,CAC12}. We suggest that one should also include the detection of extrasolar material - in particular, the search for the building blocks of extraterrestrial life - to the list of benefits from lunar exploration.\n\nThe outline of the Letter is as follows. We predict the mass and number flux of extrasolar impactors striking the lunar surface in Section \\ref{SecMF}. We estimate the abundances of extrasolar material, organics, and biomolecular building blocks in Section \\ref{SecAbE}. Next, we briefly outline methodologies by which the extrasolar components may be detected in Section \\ref{SecSeaE}. 
Finally, we summarize our central results in Section \\ref{SecConc}.\n\n\\section{Mass flux of Extrasolar Impactors}\\label{SecMF}\nHenceforth, we use the subscript `S' to reference impactors whose origin lies within the Solar system (i.e., intrasolar) and the subscript `E' to denote impactors that originate outside the solar system (i.e., extrasolar). \n\nWe begin by assessing the number flux of extrasolar impactors on the Moon. In order to do so, we note that the contribution from gravitational focusing can be neglected since the correction factor $\\left(1 + v_\\mathrm{esc}^2\/v_\\infty^2\\right)$ is close to unity, where $v_\\mathrm{esc}$ is the escape velocity and $v_\\infty$ represents the excess velocity at a large distance. The probability distribution function for the impact flux is denoted by $\\mathcal{P}_E(m)$, in units of m$^{-2}$ s$^{-1}$ kg$^{-1}$, where $m$ is the mass of the impactor. We will work with a power-law function in the mass range of interest, i.e., $\\mathcal{P}_E(m) = C_E m^{-\\lambda_E}$, where $C_E$ is the proportionality constant and $\\lambda_E$ is the power-law index. This ansatz allows us to determine the number flux of impactors $\\dot{\\mathcal{N}}_E(m)$ with masses $> m$ as follows:\n\\begin{equation}\\label{PhiEI}\n \\dot{\\mathcal{N}}_E(m) = \\int_m^\\infty \\mathcal{P}_E(m')\\,dm' = \\frac{C_E}{|-\\lambda_E + 1|} m^{-\\lambda_E + 1}. \n\\end{equation}\nAs we are interested in extraterrestrial impactors, we will make the assumption that the number flux is roughly the same for the Moon and the Earth's atmosphere. This is fairly valid because the extra contribution from the orbital velocity of the Moon is smaller than $v_\\infty$ and $v_\\mathrm{esc}$ by approximately an order of magnitude. Note that the total number of impacts per unit time is \\emph{not} similar for both worlds, even if the number fluxes are comparable, because of their differing surface areas. 
In conjunction with the compiled data from Figure 1 and Section 2 of \\citet{SL19b}, we estimate $\\dot{\\mathcal{N}}_E(m)$ to be\n\\begin{equation}\\label{PhiEEmp}\n \\dot{\\mathcal{N}}_E(m) \\sim 4.4 \\times 10^{-22}\\,\\mathrm{m^{-2}\\,s^{-1}}\\, \\left(\\frac{m}{1\\,\\mathrm{kg}}\\right)^{-1.14}. \n\\end{equation}\nAs a consistency check, if we substitute $m = 10^{-14}$ kg in the above expression, we obtain $\\dot{\\mathcal{N}}_E \\sim 4 \\times 10^{-6}$ m$^{-2}$ s$^{-1}$. This result is in reasonable agreement with the empirical estimate of $\\dot{\\mathcal{N}}_E \\sim 1 \\times 10^{-6}$ m$^{-2}$ s$^{-1}$ based on \\emph{in situ} measurements carried out by the Ulysses and Galileo spacecraft \\citep{LBG00}. In addition, the power-law exponent of $-1.14$ specified in (\\ref{PhiEEmp}) exhibits very good agreement with the empirical value of $-1.1$ from spacecraft observations \\citep{LBG00}. It is straightforward to determine $C_E$ and $\\lambda_E$ from (\\ref{PhiEI}) and (\\ref{PhiEEmp}); for example, we find $\\lambda_E = 2.14$. \n\n\nIn a similar fashion, we can determine the flux of Solar system objects that impact the Moon. We define $\\mathcal{P}_S(m) = C_S m^{-\\lambda_S}$ and thereby compute $\\dot{\\mathcal{N}}_S(m)$ in the same fashion as (\\ref{PhiEI}). At very small masses, the values of $C_S$ and $\\lambda_S$ are not tightly constrained. Older empirical measurements of interplanetary dust particles (IDPs) indicated that $\\lambda_S \\approx 2.34$ for $m > 10^{-7}$ kg \\citep{GHS11}, whereas more recent studies based on the Lunar Dust Experiment (LDEX) onboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) concluded that $\\lambda_S \\approx 1.9$ for dust grains with masses $> 10^{-15}$ kg \\citep{SH16}. If we further suppose that the flux at Earth's atmosphere is comparable to that on the Moon, Figure 1 of \\citet{BA06} indicates that $\\lambda_S \\approx 1.9$. Thus, we find that $\\lambda_S$ is not very different from $\\lambda_E$. 
We introduce the ansatz\n\\begin{equation}\\label{PhiSEmp}\n \\dot{\\mathcal{N}}_S(m) \\sim 6 \\times 10^{-19}\\,\\mathrm{m^{-2}\\,s^{-1}}\\, \\left(\\frac{m}{1\\,\\mathrm{kg}}\\right)^{-0.9},\n\\end{equation}\nwhere the normalization constant is chosen to preserve consistency with Figure 1 of \\citet{BA06}. For $m = 0.1$ kg, the above formula yields $\\dot{\\mathcal{N}}_S \\sim 4.8 \\times 10^{-18}$ m$^{-2}$ s$^{-1}$. By using the observational data in Figure 4 of \\citet{GHS11}, we end up with $\\dot{\\mathcal{N}}_S \\sim 6.3 \\times 10^{-18}$ m$^{-2}$ s$^{-1}$, indicating that the above ansatz may be a reasonable estimate.\n\n\nAlong the same lines, we can determine the mass flux of impactors within a given mass range of $\\left(m_\\mathrm{min},\\,m_\\mathrm{max}\\right)$. The corresponding mass flux, denoted by $\\dot{\\mathcal{M}}_{E,S}$, is\n\\begin{equation}\n \\dot{\\mathcal{M}}_{E,S} = \\int_{m_\\mathrm{min}}^{m_\\mathrm{max}} m'\\,\\mathcal{P}_{E,S}(m')\\,dm'.\n\\end{equation}\nFor our lower bound, we choose approximately $\\mu$m-sized objects (with $m_\\mathrm{min} = 10^{-15}$ kg) as they represent the smallest particles that may host organic material \\citep{FKF03,Kwok}; in the most optimal circumstances, they might be capable of transporting living or extinct microbes \\citep{Wes10}. Our upper bound of $m_\\mathrm{max} = 10^{15}$ kg is based on the fact that objects with higher masses are unlikely to have impacted the Moon over its current age. The ratio of the two mass fluxes ($\\delta_{ES}$) is defined as\n\\begin{equation}\\label{RatFlux}\n \\delta_{ES} \\equiv \\frac{\\dot{\\mathcal{M}}_E}{\\dot{\\mathcal{M}}_S} \\sim 2.6 \\times 10^{-3},\n\\end{equation}\nwhere the last equality follows from employing the preceding relations. In other words, the mass flux of extrasolar objects striking the Moon is potentially three orders of magnitude smaller than the mass flux of impactors originating from within our Solar system. 
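Since $\dot{\mathcal{N}}_{E,S}$ and $\dot{\mathcal{M}}_{E,S}$ involve only elementary power-law integrals, the quoted numbers can be reproduced in a few lines. The sketch below uses only the normalizations, exponents, and mass cutoffs stated above:

```python
# Power-law impactor flux model: P(m) = C * m**(-lam), SI units throughout.
lam_E, lam_S = 2.14, 1.9
C_E = 4.4e-22 * (lam_E - 1.0)   # fixed by N_E(m) ~ 4.4e-22 (m/kg)^-1.14
C_S = 6.0e-19 * (lam_S - 1.0)   # fixed by N_S(m) ~ 6e-19 (m/kg)^-0.9

def number_flux(C, lam, m):
    """Flux of impactors with mass > m, per m^2 per s."""
    return C / (lam - 1.0) * m ** (1.0 - lam)

def mass_flux(C, lam, m_min=1e-15, m_max=1e15):
    """Integral of m * P(m) dm between the mass cutoffs, kg per m^2 per s."""
    p = 2.0 - lam
    return C / p * (m_max ** p - m_min ** p)

# Consistency checks quoted in the text
assert 3.5e-6 < number_flux(C_E, lam_E, 1e-14) < 4.5e-6    # ~4e-6
assert 4.5e-18 < number_flux(C_S, lam_S, 0.1) < 5.0e-18    # ~4.8e-18

delta_ES = mass_flux(C_E, lam_E) / mass_flux(C_S, lam_S)
assert 2.4e-3 < delta_ES < 2.9e-3                          # ~2.6e-3
```

The ratio is insensitive to the precise cutoffs: because both exponents are close to 2, the mass integrals are dominated by the end points only weakly.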
\n\nWe caution that the scaling relations specified for $\\dot{\\mathcal{N}}_E$ and $\\dot{\\mathcal{N}}_S$ constitute merely heuristic estimates as they are subject to numerous uncertainties (most notably for the flux of extrasolar objects). It is likely that a single power-law function will not suffice, thereby necessitating the use of broken power-laws in future studies. Another simplification introduced herein is that the flux of impactors remains roughly constant over time. While this is approximately correct when it comes to intrasolar objects over the past few Gyr and possibly valid for extrasolar objects, it is \\emph{not} valid for intrasolar objects during the early stages of our Solar system ($\\gtrsim 4.0$ Ga), when the impact rates were a few orders of magnitude higher \\citep{CS92}.\n\n\\section{Abundance of Extrasolar Material on the Moon}\\label{SecAbE}\nThe ratio $\\delta_{ES}$ is valuable because it enables us to calculate the abundance of extrasolar material present near the lunar surface. However, in doing so, we rely upon the assumption that the gardening depths of intrasolar and extrasolar objects are comparable. This is not entirely unreasonable because the specific kinetic energy is proportional to $v_\\mathrm{esc}^2 + v_\\infty^2$, implying that its value for extrasolar objects is conceivably an order of magnitude higher than for intrasolar objects. \n\nIf the variations in gardening depth are ignored, the abundance of extrasolar material by weight ($\\phi_E$) is approximately proportional to $\\dot{\\mathcal{M}}_E$, consequently yielding $\\phi_E \\sim \\delta_{ES} \\phi_S$ with $\\phi_S$ signifying the abundance of (micro)meteoritic material originating from the Solar system. Based on the analysis of lunar samples, it has been estimated that this component makes up $\\sim 1$-$1.5\\%$ (by weight) of the lunar soil and $\\sim 1.28\\%$ of the lunar regolith \\citep{AGKM,MVT15}. 
Therefore, by using the above expression for $\\phi_E$, we arrive at $\\phi_E \\sim 30$ ppm, namely, the mass fraction of extrasolar material is $\\sim 3 \\times 10^{-5}$. In comparison, material ejected from Earth subsequently deposited on the Moon is predicted to occur at an abundance of $\\sim 1$-$2$ ppm at the surface \\citep{Arm10}.\n\nOf this extrasolar material, we note that $\\sim 10^{-3}$, therefore amounting to an abundance of $\\sim 30$ ppb, is derived from halo stars \\citep{SL19c}. Another crucial point worth noting before proceeding further is that the preservation of older extrasolar material is feasible in principle because the Moon has been geologically inactive relative to Earth during the past few Gyr \\citep{JHA12}. If we suppose, for instance, that the material is uniformly distributed over time and adequately preserved, we find that $\\sim 10\\%$ of all extrasolar material would have been deposited $> 4$ Ga. In other words, the abundance of such material might be $\\sim 3$ ppm after using the previous result for $\\phi_E$.\n\nHowever, the extrasolar material deposited on the surface will comprise both inorganic and organic components. It is very difficult to estimate the abundance of the latter as we lack precise constraints on the abundance of organics in ejecta expelled from extrasolar systems as well as the likelihood of their survival during transit and impact with the lunar surface. Hence, our subsequent discussion must be viewed with due caution as we operate under the premise that (micro)meteorites and IDPs within the Solar system are not very atypical relative to other planetary systems.\\footnote{This line of reasoning goes by many names, including the Copernican Principle and the Principle of Mediocrity, and is often implicitly invoked in astrobiology.}\n\nWe begin by considering the abundance of extrasolar organic material. Even within the Solar system, the inventory of organic carbon varies widely across meteorites and IDPs. 
For instance, it is believed that organic carbon comprises $\\sim 1.5$-$4\\%$ by weight in carbonaceous chondrites \\citep{PS10}, whereas it is lower for other classes of meteorites. When it comes to IDPs, laboratory analyses indicate that they possess $\\sim 10\\%$ carbon by weight on average \\citep{CS92,PCF06}. If we err on the side of caution and choose a mean value of $\\sim 1\\%$ as not all carbon is incorporated in organic material, we find that the abundance of extrasolar organic material ($\\phi_{E,O}$) may be $\\sim 3 \\times 10^{-7}$, namely, we obtain $\\phi_{E,O} \\sim 0.3$ ppm.\n\nHowever, it should be noted that the majority of organic carbon ($> 70\\%$) in carbonaceous chondrites is locked up in the form of insoluble compounds that are ``kerogen-like'' in nature \\citep{PS10,QOB14}. As organics constitute a very broad category, it is more instructive to focus on specific classes. We will henceforth mostly restrict ourselves to amino acids because they are building blocks for proteins and are therefore essential for life-as-we-know-it. Other organic compounds that were identified in meteorites include aliphatic and aromatic hydrocarbons, phosphonic and sulfonic acids, and polyols. \n\nWe begin by considering the abundance of amino acids. Meteorites exhibit different concentrations of amino acids with values ranging from $\\ll 1$ ppm to $\\gtrsim 100$ ppm \\citep{MAO}. The uncertainty for IDPs is even larger because only a few amino acids such as $\\alpha$-amino isobutyric acid have been detected and the average abundance of amino acids in IDPs remains poorly constrained \\citep{MPT04}. Hence, we will resort to an alternative strategy instead. The analysis of lunar samples from the Apollo missions indicates that the concentration of amino acids is $\\sim 0.1$-$100$ ppb with typical values on the order of $\\sim 10$ ppb \\citep{GZK72,HHW71,ECD16}. 
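The abundance estimates in this section reduce to a short chain of multiplications; a minimal sketch using the fiducial values quoted above:

```python
delta_ES = 2.6e-3      # extrasolar-to-intrasolar mass flux ratio, Eq. (5)
phi_S = 0.0128         # intrasolar meteoritic fraction of the lunar regolith

phi_E = delta_ES * phi_S       # bulk extrasolar mass fraction
phi_E_old = 0.1 * phi_E        # portion deposited > 4 Ga (~10% of the total)
phi_E_org = 0.01 * phi_E       # ~1% organic carbon by weight (cautious mean)

assert 2.5e-5 < phi_E < 4.0e-5       # ~30 ppm
assert 2.5e-6 < phi_E_old < 4.0e-6   # ~3 ppm
assert 2.5e-7 < phi_E_org < 4.0e-7   # ~0.3 ppm
```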
\n\nEarlier, we determined that the extrasolar mass flux is lower by three orders of magnitude compared to the intrasolar mass flux. Hence, using the value of $\\delta_{ES}$ from (\\ref{RatFlux}), we find that the concentration of extrasolar amino acids is potentially $\\sim 30$ parts-per-trillion (ppt). However, this estimate is an upper bound in all likelihood because it presumes that the fiducial choice of $10$ ppb for amino acids in the lunar regolith arises solely from (micro)meteorite impacts. In actuality, on account of the high enantiomeric excesses detected, it is believed these samples have experienced some terrestrial biological contamination \\citep{ECD16}. \n\nIn analogy with the discovery of carboxylic acids and nucleobases - the building blocks of lipids and nucleic acids, respectively - in meteorites on Earth, it is plausible that these compounds might be found on the Moon. For example, analysis of meteorites has revealed that carboxylic acids may comprise $\\sim 40$-$300$ ppm \\citep{PCF06}. Adopting a fiducial value of $\\sim 10$ ppm for carboxylic acids in extrasolar material by erring on the side of caution, we estimate an abundance of $\\sim 0.3$ ppb for extrasolar carboxylic acids in the lunar regolith after using the prior estimate for $\\phi_E$. A similar analysis can be carried out for nucleobases by employing carbonaceous chondrites as a proxy. Choosing a nucleobase abundance of $\\sim 0.1$ ppm in chondrites \\citep{CSC11}, we obtain an estimate of $\\sim 3$ ppt for extrasolar nucleobases near the lunar surface. \n\nWe reiterate that the numbers described herein are rough estimates because a number of key processes are not tightly constrained. Apart from the direct contribution of extrasolar objects impacting the Moon, it is possible for extrasolar material to be deposited on intrasolar objects that subsequently impact the Moon and thereby deposit this material on the lunar surface. 
It is likely, however, that this contribution will be sub-dominant.\n\n\\section{Searching for Extrasolar Material on the Moon}\\label{SecSeaE}\nHitherto, we have calculated the abundance of extrasolar material deposited on the lunar surface. However, this raises an immediate question: how do we distinguish between material (e.g., micrometeorites and IDPs) derived from within and outside the Solar system?\n\nThe solution may lie, at least partly, in analyzing multiple isotope ratios of samples \\citep{LL18}. Of the various candidates, perhaps the best studied are the oxygen isotope ratios. In the oxygen three-isotope plot, involving the isotope ratios $^{17}$O\/$^{16}$O and $^{18}$O\/$^{16}$O, the terrestrial fractionation line has a slope of approximately $0.5$ whereas carbonaceous chondrites are characterized by a slope of $\\sim 1$\n\\citep{Clay03,KA09}. It should also be noted that the $^{17}$O\/$^{18}$O ratio exhibits a lower value in the Solar system in comparison to the Galactic average \\citep{NG12}. Thus, significant deviations from the Solar system values in the oxygen three-isotope plot might imply that the sample is extrasolar in origin. \n\nApart from oxygen isotopes, other extrasolar flags include carbon and nitrogen isotope ratios, corresponding to $^{12}$C\/$^{13}$C and $^{14}$N\/$^{15}$N, respectively \\citep{Mum11,FM15}. Note, for instance, that enhanced values of the $^{12}$C\/$^{13}$C ratio could arise in extrasolar objects that have traversed through regions in proximity to Young Stellar Objects \\citep{SPY15}. In addition to isotope ratios, anomalies in CN-to-OH ratios as well as the abundances of bulk elements, C$_2$ and C$_3$ molecules might also serve as effective methods for discerning extrasolar material \\citep{LS07,Sch08}. \n\nOnce the identification of extrasolar grains has been achieved, one could attempt to identify the organics present within them. 
A plethora of standard techniques can be employed such as liquid chromatography-mass spectrometry. Using such procedures, the identification of amino acids, nucleobases and other organic compounds is feasible at sub-ppb concentrations \\citep{GDA06,CSC11,BSE12}. The detection of either nucleobases or amino acids that are neither prevalent in terrestrial nor meteoritic material would lend further credence to the notion that the sample under question may be extrasolar in nature.\\footnote{It is worth appreciating that meteorites contain ``exotic'' organic compounds that are very rare on Earth. For instance, the analysis of carbonaceous meteorites has revealed the existence of nucleobase analogs (e.g., purine) whose abundances are extremely low on Earth \\citep{CSC11}.}\n\nHitherto, we have limited our discussion to extrasolar material and organic compounds. There is yet another scenario worth mentioning, albeit with a potentially much lower probability, namely, the detection of biosignatures corresponding to extinct extraterrestrial life.\\footnote{We have implicitly excluded the prospects for living extraterrestrial organisms because the Moon's habitability ``window'' appears to have come to a close just millions of years after its formation \\citep{SMC18}.} There are a number of methods that may be utilized to search for biomarkers. Some of the measurable characteristics of molecular biosignatures include: (a) enantiomeric excesses stemming from homochirality, (b) preference for certain diastereoisomers and structural isomers, and (c) isotopic heterogeneities at molecular or sub-molecular levels \\citep{SAM08}. A review of numerous life-detection experiments and their efficacy can be found in \\citet{NHV18}. 
The most ideal scenario arguably entails the discovery of extrasolar microfossils as they would provide clear-cut evidence for extraterrestrial life; on Earth, the oldest microfossils with unambiguous evidence of cell lumens and walls are from the $\\sim 3.4$ Ga Strelley Pool Formation in Western Australia \\citep{WKS11}.\n\n\\section{Conclusion}\\label{SecConc}\nIn light of recent discoveries of interstellar objects, we have studied the deposition of extrasolar material on the lunar surface by estimating the mass fluxes of impactors originating from within and outside our Solar system. Our choice of the Moon is motivated by the fact that it lacks an atmosphere (avoiding ablation of the impactors) and is mostly geologically inactive (allowing for long-lived retention of material). \n\nOur calculations suggest that the abundance of extrasolar material at the surface is $\\sim 30$ ppm, with the abundance of detritus deposited $> 4$ Ga being $\\sim 3$ ppm. Of this material, a small fraction will exist in the form of organic molecules. We estimated that the abundance of extrasolar organic carbon near the lunar surface is $\\sim 0.3$ ppm. Among the various organic compounds, the abundances of carboxylic acids, amino acids and nucleobases are of particular interest as they constitute the building blocks for life-as-we-know-it. Our results indicate that their maximal abundances might be $\\sim 300$ ppt, $\\sim 30$ ppt and $\\sim 3$ ppt, respectively.\n\nWe outlined how the detection of extrasolar debris may be feasible by analyzing lunar samples. A combination of isotope ratios (oxygen in particular), elemental abundances, and other diagnostics might allow us to identify extrasolar material on the Moon. This material can then be subjected to subsequent laboratory experiments to search for organic compounds such as amino acids as well as molecular biosignatures arising from extinct extraterrestrial life. 
Altogether, these analyses could provide important new clues for astrobiology.\n\nEven the ``mere'' discovery of inorganic extrasolar material will open up new avenues for research. In particular, by studying the chemical composition of this material, it may be possible to place constraints on planetary formation models, assess the habitability of early planetary systems, gauge the origin and evolution of exo-Oort clouds, and determine the chemical diversity of extrasolar planetary systems. Hence, a new channel for understanding these physical processes, separate from studying unbound interstellar objects such as `Oumuamua \\citep{TRR17,RAV18,MM19}, can be initiated.\n\nThe discovery of extrasolar organics could reveal new complex macromolecules that may possess practical value in medicine and engineering. Furthermore, the detection of such molecules would enable us to gain a deeper understanding of what types of organics were synthesized in other planetary systems, allowing us to gauge the latter's prospects for hosting life. Finally, the discovery of molecular biosignatures confirming the existence of (extinct) extraterrestrial life will indubitably have far-reaching consequences for humankind. 
In view of these potential benefits, we contend that there are additional compelling grounds for sustained \\emph{in situ} exploration of the lunar surface in the upcoming decades.\n\n\\acknowledgments\nThis work was supported in part by the Breakthrough Prize Foundation, Harvard University's Faculty of Arts and Sciences, and the Institute for Theory and Computation (ITC) at Harvard University.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{s_intro}\n\nExtreme oxygen line ratios (O{\\small 32}\\ $\\equiv$ [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$ $>4$) were proposed recently as a potential tracer of the escape of ionising radiation from galaxies through density-bounded \\ion{H}{ii}\\ regions \\citep{Jaskot13, Nakajima14}. The idea is the following: if a galaxy is leaking ionising photons through a density-bounded region, the ratio of O{\\small 32}\\ can be high if the \\ion{H}{ii}\\ region that we observe is truncated, for example if (part of) the [\\ion{O}{ii}]\\ region is missing. We see deeper into the ionised region and the external layer of [\\ion{O}{ii}]\\ is either nonexistent or thinner than in the classical ionisation-bounded scenario. Given their high O{\\small 32}\\ ratios, \\citet{Jaskot13} discuss the possibility of LyC escape from \"Green Pea\" (GP) galaxies, a population of extremely compact, strongly star-forming galaxies in the local Universe \\citep{Cardamone09, Izotov11, 2016ApJ...820..130Y}. \\citet{Nakajima14} and \\citet{Nakajima16} compare O{\\small 32}\\ ratios of different types of high-redshift galaxies, Lyman Break Galaxies (LBGs), and Lyman Alpha Emitters (LAEs) with GPs and Sloan Digital Sky Survey (SDSS) galaxies: high-redshift galaxies have on average higher O{\\small 32}\\ ratios than SDSS galaxies, but comparable to GPs. Furthermore, the observed O{\\small 32}\\ ratios of LAEs are larger than those of LBGs. 
Along the same lines, GPs are also strong LAEs \\citep{Henry15, 2016ApJ...820..130Y, 2017A&A...597A..13V, 2017ApJ...838....4Y}, which is very unusual for galaxies in the local Universe \\citep{Hayes11, Wold14}. \n \nWhile the O{\\small 32}\\ ratio of galaxies that are leaking ionising photons may be enhanced compared to those with a LyC escape fraction, $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$ , equal to zero, there are other situations that can lead to high O{\\small 32}\\ ratios. For example, the O{\\small 32}\\ ratio depends on metallicity: low stellar and nebular metallicities lead to higher O{\\small 32}\\ ratios \\citep{Jaskot13}. A harder ionising spectrum will also induce higher O{\\small 32}\\ ratios, as investigated in, for example, \\citet{Pellegrini12}, as will a higher ionisation parameter (e.g. \\citealt{Stasinska15}). Furthermore, shocks could also explain these ratios, as studied in detail in \\citet{Stasinska15}.\n\nDespite intensive searches for LyC emission from galaxies, only a few LyC leakers have been identified over the last decades in the local Universe \\citep{Bergvall06, Leitet13, Borthakur14, Leitherer16}, but most searches resulted in non-detections or upper limits \\citep{Siana15, Mostardi15, Grazian16, Rutkowski16, Rutkowski17}. The discovery of the link between LyC emission and O{\\small 32},\\ however, turned the tide, as demonstrated by, for example, \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I, 2018MNRAS.tmp.1318I}. In these studies, LyC emission was detected from all eleven galaxies at $z \\approx 0.3$ that were selected by their extreme O{\\small 32}\\ ratios (O{\\small 32}\\ > 4), among other criteria such as brightness, compactness, and strong \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ equivalent widths. Furthermore, a correlation between O{\\small 32}\\ and the escape of ionising photons was found, although the scatter of $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$\\ is large \\citep{2018MNRAS.tmp.1318I}. 
At high redshift ($z \\approx 3$), four galaxies with high escape fractions ($> 50$\\%), selected by similar criteria, have been reported \\citep{Vanzella15, 2016A&A...585A..51D, Shapley16, Bian17, 2018MNRAS.476L..15V}. Additionally, the recent results from the Lyman Continuum Escape Survey \\citep{2018arXiv180601741F} reveal an average escape fraction of $\\sim20\\%$ for galaxies at $z\\approx3$ with strong [\\ion{O}{iii}]\\ emission, and a weak correlation between $f_{\\mathrm{esc}}^{\\mathrm{LyC}}$ and the [\\ion{O}{iii}]\\ equivalent width for $\\sim20$ galaxies with directly detected LyC emission. Although the combination of these selection criteria has so far resulted in relatively few galaxies with confirmed LyC emission, the detection of extreme O{\\small 32}\\ emission from a local low-mass GP analogue \\citep{2017ApJ...845..165M} might suggest that low-mass extreme O{\\small 32}\\ emitters, and thus possible low-mass LyC emitters, are more common than the bright GP samples indicate. A statistical study of the O{\\small 32}\\ ratios of emission-line selected galaxies over a broad range of stellar masses has, however, not been performed so far. \n\nThe unique capabilities of the Multi-Unit Spectroscopic Explorer (MUSE) \\citep{2010SPIE.7735E..08B} allow us to study the properties of galaxies with extreme O{\\small 32}\\ ratios and how common they are in emission-line selected samples. For this study we combine four MUSE Guaranteed Time Observing (GTO) surveys and collect a sample of mainly emission-line detected galaxies with a high specific star formation rate and stellar masses between $\\sim10^6$ and $\\sim10^{10}$ \\Msun , from which we compute the distribution of O{\\small 32}\\ ratios in a blind survey of star-forming galaxies. 
Here we present the properties and occurrences of extreme oxygen emitters spanning the redshift range 0.28 < z < 0.85, where both lines are in the MUSE spectral range, in the largest statistical sample of emission-line detected galaxies in three-dimensional spectral data.\n\nThis article is organised as follows: in Sect.~\\ref{s_data} we describe the data from different programmes that we used for this study; in Sect.~\\ref{s_sample} we describe the sample selection; in Sect.~\\ref{s_results} we investigate the occurrence of high O{\\small 32}\\ ratios and study potential correlations with stellar mass ( \\Mstar ), star formation rate (\\ensuremath{\\mathrm{SFR}} ), and the metallicity indicator R{\\small 23}\\ line ratio; in Sect.~\\ref{s_discussion} we study the incidence rate of galaxies with high O{\\small 32}\\ ratios as a function of \\Mstar\\ and $z$, and we also discuss how our results compare to nebular models with no escape of ionising photons. We end with a discussion on the most extreme oxygen emitters. Throughout this paper we adopt a cosmology with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\\Omega_m$ = 0.3 and $\\Omega_\\Lambda$ = 0.7. \n\n\\section{Data}\n\\label{s_data}\n\\subsection{Different MUSE surveys}\nFor this study we used data taken with MUSE, as part of GTO observations, covering a wavelength range of 4800-9300 $\\AA$. We selected data from four surveys that together span an area of more than 55 arcmin$^2$. Below follows a description of the different surveys. \n\n\\subsubsection{Hubble Ultra Deep Field survey}\nThe MUSE Hubble Ultra Deep (HUDF) survey \\citep{2017A&A...608A...1B} consists of two fields of different size and depth in the original HUDF region. The Medium Deep Mosaic Field (i.e. UDF-mosaic) is a mosaic of nine pointings ($\\approx$3$\\arcmin$ x 3$\\arcmin$) with a depth of approximately ten hours. The UDF-10 Ultra Deep Field (i.e. 
UDF-10) is a deeper observation of a single pointing within the UDF-mosaic region, with a depth of approximately 31 hours, covering $\\approx$1.15 arcmin$^2$. The data reduction is described by \\citet{2017A&A...608A...1B} and is based on the MUSE standard pipeline version 1.7dev \\citep{2014ASPC..485..451W}. For the construction of the redshift catalogue \\citep{2017A&A...608A...2I}, Hubble Space Telescope (HST) priors are used, as well as a blind search for emission lines in the datacube using the software ORIGIN (Mary et al. in prep).\n\n\n\\subsubsection{MUSE-Wide survey}\nThe MUSE-Wide survey aims to cover a larger field of view than the HUDF survey with relatively short exposures of one hour per pointing. The data release of the full survey will be presented in Urrutia et al. (in prep). Here we focus on the first 24 pointings of MUSE-Wide, which together cover an area of 22.2 arcmin$^2$. We use the source catalogue that is presented in \\citet{2017A&A...606A..12H}, which is created using the emission-line detection software LSDCat \\citep{2017A&A...602A.111H}, but we do not use their supplied spectra since we extract the spectra of all surveys consistently. \n\n\\subsubsection{MUSE QuBES survey}\nThe MUSE Quasar Blind Emitter Survey (QuBES) consists of 24 individual fields centred on quasars. All datacubes have a minimum total exposure time of two hours, with selected fields observed to a depth of ten hours based on the availability of high-quality archival auxiliary data such as quasi-stellar object (QSO) spectra. The standard data reduction was carried out with the MUSE Data Reduction System (DRS; \\citealt{2014ASPC..485..451W}). Post-processing procedures for additional integral field unit (IFU) normalisation and sky subtraction were carried out using the CubEx package (Cantalupo et al. in prep; see \\citealt{2018ApJ...859...53M} and \\citealt{2016ApJ...831...39B} for details). The survey will be fully described in Straka et al. (in prep) and Segers et al. 
(in prep). A subset of 21 fields is used for this study based on the availability of galaxy catalogues. For this study, the presence of the QSO in the field is not important. The galaxy catalogues are compiled as follows. First, white light images are created from the MUSE datacubes by summing the entire cube along the wavelength axis. Then, SExtractor \\citep{1996A&AS..117..393B} is used to detect any objects down to 1$\\sigma$. The spectra for each object are extracted by selecting associated pixels in the segmentation maps produced by SExtractor. The spectra of the resulting detections are then inspected by eye in order to determine the galaxy redshifts based on nebular emission lines and stellar absorption features. \n\n\\subsubsection{Galaxy groups survey}\nThe last dataset added to our sample is that of the Galaxy group survey (Epinat et al. in prep), which targets galaxy groups at intermediate redshift ($z \\approx 0.5-0.7$) from the zCOSMOS 20k group catalogue \\citep{2012ApJ...753..121K}. We selected data from three galaxy groups, namely COSMOS-Gr30, 34, and 84, with the deepest MUSE data of 9.75, 5.25, and 5.25 hours respectively, \\textasciitilde 1$\\arcmin$ x 1$\\arcmin$ each. The data reduction followed the same approach as the Hubble Ultra Deep Field and is described in \\citet{2018A&A...609A..40E} for the galaxy group field COSMOS-Gr30. For the construction of the redshift catalogues, galaxies were selected from the COSMOS photometric catalogue by \\citet{2016ApJS..224...24L}, complemented by emission-line detection using ORIGIN for the deepest field COSMOS-Gr30 (see also \\citealt{2018A&A...609A..40E}). \n\n\\subsection{Spectrum extraction and emission-line flux measurements}\nThe spectra of all sources are extracted from the datacubes using the same method in order to make the line flux measurements comparable. 
We followed the approach of \\citet{2017A&A...608A...2I} and extracted the spectra using a mask region, which is the HST segmentation map convolved by the MUSE point spread function for all surveys except the MUSE QuBES survey, for which there is no HST coverage. Spectra in this survey are extracted using a mask region that is constructed from the MUSE white light image. We then used the simple unweighted sum of the flux in the mask region and used this as the spectrum of each galaxy. We measured the line fluxes of the galaxies in the catalogues using the software \\textsc{platefit} \\citep{2004ApJ...613..898T, 2008A&A...485..657B}. Since this method is the same as that used by \\citet{2017A&A...608A...2I} to construct the HUDF emission-line catalogue, the line flux measurements that we use here are identical to theirs. \n\n\\subsection{Deriving \\ensuremath{\\mathrm{SFRs}}\\ and dust extinction}\nThe \\ensuremath{\\mathrm{SFRs}}\\ of our galaxies are calculated using the method described in \\citet{2013MNRAS.432.2112B}. In short, we simultaneously fit the \\citet{2001MNRAS.323..887C} models to the brightest (signal-to-noise ratio (S\/N)$>$3) optical emission lines. From this we estimate the \\ensuremath{\\mathrm{SFR}}\\ marginalising over: metallicity, ionisation parameter, dust-to-metal ratio, and the optical depth of the dust attenuation (\\ensuremath{\\mathrm{\\tau}_{V}}). The advantage of using a multi-emission-line approach over using a single Balmer line to calculate the \\ensuremath{\\mathrm{SFR}}\\ is that it is less affected by sky line contamination of a single line and should therefore provide a more robust \\ensuremath{\\mathrm{SFR}} . Also, this method provides an estimate of \\ensuremath{\\mathrm{\\tau}_{V}} , which we adopt to correct the emission-line fluxes for dust extinction. 
\n\n\\subsection{Calculating stellar masses}\nWe obtained stellar masses for the galaxies by performing spectral energy distribution (SED) fitting using the \\textsc{fast} (Fitting and Assessment of Synthetic Templates) algorithm \\citep{2009ApJ...700..221K}, where we used the \\citet{2003MNRAS.344.1000B} library and assumed exponentially declining star formation histories (SFHs) with a \\citet{2000ApJ...533..682C} extinction law and a \\citet{2003ApJ...586L.133C} initial mass function (IMF). To test the influence of our SFH assumption, we compare our stellar masses with those derived by the \\textsc{magphys} code \\citep{2008MNRAS.388.1595D}. For these \\Mstar\\ estimations, the photometry is fitted to stellar population synthesis models assuming random bursts of star formation in addition to an exponentially declining SFH. We find consistent stellar masses; for example, the median difference between \\Mstar\\ derived from \\textsc{fast} and \\Mstar\\ from \\textsc{magphys} equals $\\log \\Mstar\/\\Msun = 0.08$ with a standard deviation of $\\log \\Mstar\/\\Msun = 0.3$, from which we conclude that the influence of the assumed SFH is small for this sample. For the UDF, we used HST Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) photometry from the catalogue of \\citet{2015AJ....150...31R}. The same approach is applied to the MUSE-Wide (photometry from \\citealt{2014ApJS..214...24S}) and the Galaxy groups (photometry from \\citealt{2016ApJS..224...24L}). Unfortunately there is no deep photometry available for the MUSE QuBES survey. We therefore used a set of 11 boxcar filters, each 400 $\\AA$ wide, in which emission lines are masked, to compute a pseudo-photometric SED, as will be described in more detail in Segers et al. (in prep). Since not all the bright emission lines lie within the MUSE spectral range for our redshifts, we decided to leave the photometry uncorrected for bright emission lines. 
To test the possible effect of strong emission lines on \\Mstar , we compared our stellar masses to those based on the photometry that is corrected for emission lines in the MUSE spectral range. We find that for the bulk of the galaxies in our sample, this effect is negligible; for example, the maximum difference between the emission-line corrected and the non-corrected \\Mstar\\ for the extreme O{\\small 32}\\ emitters, which will be introduced in Sect. \\ref{s_results}, corresponds to $\\log$ \\Mstar\/\\Msun = 0.03.\n\n\\subsection{Redshift distribution}\nFor our main sample, we only select galaxies with spectra of sufficient quality for our study, that is, spectra with S\/N > 3 in the [\\ion{O}{iii}]$\\lambda 5007$\\ line at $0.28 < z < 0.85,$ since for this redshift interval the [\\ion{O}{ii}]\\ and [\\ion{O}{iii}]\\ emission lines fall within the MUSE wavelength range. For galaxies that meet this criterion, but have S\/N([\\ion{O}{ii}] ) < 3, we use the 3-$\\sigma$ lower limit for the O{\\small 32}\\ ratio. We show two examples of spectra that meet these criteria in Fig. \\ref{fig:example_spec}. This leads to a total sample of 815 galaxies, of which the redshift distribution for each survey is shown in Fig. \\ref{fig:galaxies_histogram}. This histogram shows the number of galaxies in the redshift bins, separated based on the survey from which the data originates. Because the depths of the different surveys that we combine here are not uniform, care is required for selecting galaxies for our final sample. \n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{example_spec_o32.pdf}\n \\caption{Example spectra of O{\\small 32}\\ emitters in our sample. The upper panel shows the spectrum of the galaxy with the most extreme oxygen ratio (dust-corrected O{\\small 32}\\ = 23) from the UDF-mosaic catalogue with id = 6865 and z = 0.83. 
The orange shaded lines show the \\ifmmode {\\rm H}\\gamma \\else H$\\gamma$\\fi , \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi,\\ and [\\ion{O}{iii}]$\\lambda\\lambda 4959,5007$\\ lines of neighbouring sources at $z\\approx0.62$. An example spectrum of a galaxy with a lower O{\\small 32}\\ ratio, in this case with dust-corrected O{\\small 32}\\ = 0.25, is shown in the lower panel (UDF-mosaic, id = 892, z = 0.74).}\n \\label{fig:example_spec}\n\\end{figure*}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{hist_z_allsf_newselection_snhb_new.pdf}\n \\caption{Redshift distribution of 815 galaxies in total, in the range 0.28 < z < 0.85, with high-confidence, spectroscopically determined MUSE redshifts, in the four surveys that we are using for this project.}\n \\label{fig:galaxies_histogram}\n\\end{figure}\n\n\\subsection{Sample selection}\n\\label{s_sample}\n\\label{sec:selection}\nObservations of star-forming galaxies have shown that the star formation rate (\\ensuremath{\\mathrm{SFR}}) and stellar mass (\\Mstar) of these galaxies are tightly related (e.g. \\citealt{2004MNRAS.351.1151B, 2007ApJ...660L..43N}). This relation, also called the star-forming main sequence (SFMS), has been studied intensively in the last decade (e.g. \\citealt{2012ApJ...754L..29W, 2014ApJ...795..104W, 2015ApJ...801...80L, 2017ApJ...847...76S}). 
\\citet{Boogaardetal} have constrained the SFMS for galaxies that are detected in deep MUSE data, modelling them as a Gaussian distribution around a three-dimensional (3-D) plane, taking into account and obtaining the redshift evolution of the \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ relation, resulting in\n\\begin{equation}\n\\label{SFMS}\n\\begin{aligned}\n\\log \\ensuremath{\\mathrm{SFR}}\\ = {} & 0.83^{+0.07}_{-0.06} \\log \\left(\\frac{M_*}{M_0}\\right) -0.83^{+0.05}_{-0.05} \\\\ \n & + 1.74^{+0.66}_{-0.68} \\log \\left(\\frac{1 + z}{1 + z_0}\\right) \\pm 0.44^{+0.05}_{-0.04} \n\\end{aligned}\n,\\end{equation} \nwith $M_0 = 10^{8.5}$ \\Msun\\ and $z_0 = 0.55$. Because this relation is derived for a sample of galaxies with deep photometry and high S\/N Balmer lines from MUSE spectra of galaxies with stellar masses down to log \\Mstar \/\\Msun\\ $\\approx$ 7, comparable to the mass range of the galaxies in our sample, we use this $z$-dependent \\Mstar - \\ensuremath{\\mathrm{SFR}}\\ relation to select galaxies for our final sample. We calculate how much the \\ensuremath{\\mathrm{SFR}}\\ of a galaxy is offset from the redshift-corrected SFMS, which we will herein refer to as the 'distance to the SFMS' ($\\Delta$ SFMS) given in dex with a sign such that objects above the SFMS have a positive distance. The distribution of the distances to the SFMS is shown in Fig. \\ref{fig:distr_ssfr} for each survey that we use for this study separately. The dashed line represents the SFMS, with galaxies below it on the left side and galaxies above it on the right side. For all four surveys, this distribution peaks within 1-$\\sigma$ from the SFMS. Since our sample is mostly emission-line selected, we expect our sample to be most complete at high \\ensuremath{\\mathrm{SFRs}} . However, below the SFMS it is likely that a fraction of the galaxies are below the detection threshold. 
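The selection quantity defined above can be made concrete with a short numerical sketch of the central SFMS relation and the distance to it (a minimal illustration with our own function names; the quoted uncertainties and the 0.44 dex intrinsic scatter term are ignored):

```python
import math

# Central SFMS relation of Boogaard et al., central values only.
LOG_M0 = 8.5   # log10 of the pivot mass M0 in solar masses
Z0 = 0.55      # pivot redshift

def sfms_log_sfr(log_mstar, z):
    """Predicted log SFR on the SFMS for a given log(M*/Msun) and redshift z."""
    return (0.83 * (log_mstar - LOG_M0) - 0.83
            + 1.74 * math.log10((1.0 + z) / (1.0 + Z0)))

def delta_sfms(log_sfr_obs, log_mstar, z):
    """Distance to the SFMS in dex; positive values lie above the relation."""
    return log_sfr_obs - sfms_log_sfr(log_mstar, z)
```

A galaxy at the pivot mass and redshift sits on the relation at log SFR = -0.83; only galaxies with a positive distance enter the final sample.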
We therefore conservatively select galaxies with a distance > 0 dex from the SFMS, resulting in a final sample of 406 galaxies. Applying such a selection based on a fixed distance from the $z-$dependent SFMS relation also ensures that we select the same fraction of star-forming galaxies at each redshift.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{ssfr_histogram_mainsdependent_v2_sfms_rs1.pdf}\n \\caption{Distribution of the distance to the redshift-dependent SFMS from \\citet{Boogaardetal} (Equation \\ref{SFMS}) in dex of the four different surveys used for this study. We show the 1-$\\sigma$ variation around the mean $\\mu$ (grey area) calculated from the fitted normal distribution (black line). We included all galaxies with a distance > 0 dex (dashed line), so all galaxies that lie above the main sequence. The number in the upper right corner of the diagram shows the number of galaxies above this threshold. In our final sample we include 406 galaxies in total.} \n \\label{fig:distr_ssfr}\n\\end{figure*}\nFor an overview of the stellar mass and the \\ensuremath{\\mathrm{SFR}}\\ distribution of the four surveys, we refer the reader to Figs. \\ref{fig:app_distr_m} and \\ref{fig:app_distr_sfr} in the Appendix. \n\nBased on the stellar mass distribution, we assume that all the galaxies in the sample are pure star-forming systems, since studies show that the fraction of active galactic nucleus (AGN) host galaxies is low at these masses. For example almost all AGN hosts at $0 < z < 1$ have $\\log$ \\Mstar\/\\Msun\\ $\\gtrsim 10.2$ \\citep{2013A&A...556A..11V} and the fraction of AGNs over all galaxies with $\\log$ \\Mstar\/\\Msun\\ $\\approx 10$ at $z \\lesssim 0.3 $ is $\\sim$1 $\\%$ \\citep{2003MNRAS.346.1055K,2010ApJ...723.1447H}.\n\n\n\\section{Results}\n\\label{s_results}\nIn this section we explore whether there is a correlation between galactic properties and O{\\small 32} . 
We also study the location of these emitters with respect to the SFMS, since \\citet{2017A&A...605A..67C} report that LyC leakers lie further above the SFMS.\n \n\\subsection{Redshift distribution of extreme O{\\small 32}\\ emitters}\nIn Fig. \\ref{fig:redshift_frequency} we show the redshift distribution of our sample after the selection that is described in the previous section. In light grey we show the number density of all the galaxies, while in dark grey we show that of galaxies with O{\\small 32}\\ > 1. For galaxies with S\/N([\\ion{O}{ii}] ) $<$ 3, we use the 3-$\\sigma$ upper limits on [\\ion{O}{ii}] , resulting in 3-$\\sigma$ lower limits on O{\\small 32} . We determined the ratio of the number of galaxies with O{\\small 32}\\ > 1 to the total number of galaxies in a bin. A $\\chi^2$ statistical test against the null hypothesis that this fraction in each bin is independent of redshift gives $\\chi^2$ = 1.9 and a $P$-value of $\\sim$0.99, which indicates that the fraction of galaxies with oxygen ratios above unity does not evolve as a function of redshift. We adopted the same approach for the extreme oxygen emitters with O{\\small 32}\\ > 4, as shown in red, which yielded a comparable result ($\\chi^2$ = 4.4, $P$-value $\\approx$ 0.88). This latter result is not particularly robust, because we only have 15 extreme emitters in our sample. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{redshift_frequency_msselection_v2_sfms_rs1.pdf}\n \\caption{Redshift frequency of extreme O{\\small 32}\\ emitters with O{\\small 32}\\ > 4 (red). In dark grey we show the redshift distribution of the galaxies with O{\\small 32}\\ > 1 and in light grey the total sample.}\n \\label{fig:redshift_frequency}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ emitters on the star-formation main sequence}\nThe redshift-corrected distribution in the $\\log$ \\ensuremath{\\mathrm{SFR}}\\ - $\\log$ \\Mstar\\ plane is shown in Fig. \\ref{fig:main_sequence}. 
We normalised our results to $z=0$, and show the SFMS of Eq. \\ref{SFMS} corrected to $z=0$ (dashed line). The same diagram coloured by survey is shown in Fig. \\ref{fig:app_SFMS} in the Appendix. Only galaxies above this relation are selected for the final sample (circles), but for comparison we also show the galaxies that are left out by the SFMS selection (triangles). Galaxies with oxygen ratios larger than unity (blue points) are overall more abundant above the SFMS (104 out of 406 galaxies, 26$\\%$) than below this relation (46 out of 324 galaxies, 14$\\%$). The number of extreme O{\\small 32}\\ emitters deviates even more: 15 versus one galaxy in this O{\\small 32}\\ regime above and below the SFMS, respectively. Considering only galaxies in the final sample, there is no clear correlation between distance from the SFMS and O{\\small 32}\\ ratio. Moreover, we find that galaxies with O{\\small 32}\\ > 1 are more common at low masses ($\\log$\\Mstar\/\\Msun\\ < 9). A two-dimensional plot of O{\\small 32}\\ versus the distance to the SFMS is shown in Fig. \\ref{fig:deltasfms_o32}. We will now turn to a more quantitative discussion of the relation between the oxygen ratio and \\ensuremath{\\mathrm{SFR}}\\ and \\Mstar .\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{main_sequence_o3o2_zdep_msselection_v2_sfms_rs1.pdf}\n \\caption{ \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ diagram where we have corrected the \\ensuremath{\\mathrm{SFRs}}\\ to $z=0$ using the redshift evolution from Eq. \\ref{SFMS}. The dashed line shows this relation for $z = 0$, above which we selected galaxies in our sample, as described in Sect. \\ref{sec:selection}. The points are coloured by the logarithm of the dust-corrected O{\\small 32}\\ ratio. 
The symbols show galaxies in the final selection (circles), galaxies below the selection threshold (triangles), and galaxies with a lower limit on the O{\\small 32}\\ ratio (squares).}\n \\label{fig:main_sequence}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{DeltaSFMS_o3o2_rs1.pdf}\n \\caption{O{\\small 32}\\ ratio versus the distance to the SFMS. Only galaxies above $\\Delta$ SFMS = 0 (dashed line) are included in the final sample. Lower limits on O{\\small 32}\\ are shown by orange triangles.} \n \\label{fig:deltasfms_o32}\n\\end{figure} \n\n\n\\subsection{O{\\small 32}\\ as a function of stellar mass}\n\\label{subsec:o32M}\nIn Fig. \\ref{fig:m_o32} we plot the $\\log$ O{\\small 32}\\ line ratio versus the stellar mass (black dots). The red squares denote the median values in both the x- and y-directions, in the intervals of $\\log$ O{\\small 32}\\ between -1.0 and 1.0 with a step size of 0.5. Additionally we show 3-$\\sigma$ lower limits of the O{\\small 32}\\ ratio for galaxies without an [\\ion{O}{ii}]\\ detection above our S\/N threshold (orange triangles). The confirmed LyC-leaking galaxies from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I} are shown by green stars, and the extreme LyC emitter at $z=3.2$ from \\citet{2016A&A...585A..51D} and \\citet{2016ApJ...825...41V} is shown by the green square. The histograms along the y- and x-axis represent the distribution of O{\\small 32}\\ and \\Mstar , respectively. \n\nThe median values in the O{\\small 32}\\ bins show a clear anti-correlation between the oxygen ratio and the stellar mass. However, the Spearman's rank correlation coefficient for individual galaxies equals -0.30 ($P$-value $\\approx$ 0), indicating that, although statistically significant, the trend between $\\log$ O{\\small 32}\\ and $\\log$ \\Mstar\\ is weak for individual galaxies. 
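The rank statistic quoted here is simply the Pearson correlation of the ranked data; a minimal, self-contained sketch (our own helper names, not code from the actual analysis pipeline) is:

```python
def _ranks(values):
    """Average 1-based ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A value near -1 would indicate a tight monotonically decreasing relation; the measured -0.30 reflects the large scatter of individual galaxies around the median trend.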
The stellar masses of galaxies with O{\\small 32}\\ > 4 are lower than the average stellar mass of the entire population; for example, all the extreme oxygen emitters in our sample have stellar masses below $10^9$ \\Msun . Although the O{\\small 32}\\ ratios of the extreme emitters in our sample are similar to those of the confirmed LyC leakers, their stellar masses are smaller than those of most of the leaking galaxies from \\citet{Izotov16a, Izotov16b, 2018MNRAS.474.4514I}, \\citet{2016A&A...585A..51D} and \\citet{2016ApJ...825...41V}. Because these galaxies were selected not only by their extreme O{\\small 32}\\ ratios but also by their brightness, to increase the possibility of directly detecting LyC photons at $z \\approx 0.3$, their mass is not necessarily a reflection of the typical mass of an LyC emitter. For example, \\citet{2017MNRAS.471..548I} show that galaxies at $z<0.1$, which are only selected by extreme O{\\small 32}\\ ratios, all have masses between $10^{6} - 10^{7} \\Msun$. In addition, Mrk 71, a nearby green pea analogue and a LyC emitter candidate, has a stellar mass around $10^{5}$ \\Msun\\ \\citep{2017ApJ...845..165M}, suggesting that the mass of the bulk of LyC emitters might be lower than what is derived from confirmed LyC leakers. The approximately 20 recently discovered galaxies with LyC emission at $z\\approx3$ \\citep{2018arXiv180601741F} show an anti-correlation between the LyC escape fraction and the stellar mass, similar to our results.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{M_o3o2_withhist_msselection_v2_sfms_rs1.pdf}\n \\caption{Stellar mass as a function of $\\log$ O{\\small 32}\\ (black dots) and lower limits on O{\\small 32}\\ (orange triangles). The median per bin with 1-$\\sigma$ errors is shown by the red squares and error bars. We defined the bins by intervals of $\\log$O{\\small 32}\\ between -1.0 and 1.0 with a step size of 0.5. 
Three-$\\sigma$ lower limits on the O{\\small 32}\\ ratio for sources with a detection of [\\ion{O}{ii}]\\ below our S\/N cut are shown by the orange triangles. The green stars indicate the positions of the confirmed Lyman continuum leakers from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I} and the green square is a LyC emitter at $z$=3.2 \\citep{2016A&A...585A..51D, 2016ApJ...825...41V}. Galaxies above the grey dashed line have extreme O{\\small 32}\\ ratios (O{\\small 32}\\ > 4). }\n \\label{fig:m_o32}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ as a function of \\ensuremath{\\mathrm{SFR}}}\nIn Fig. \\ref{fig:sfr_o32} we show the oxygen ratio as a function of \\ensuremath{\\mathrm{SFR}}\\ with the same colour code as used in Fig. \\ref{fig:m_o32}. The median values of the \\ensuremath{\\mathrm{SFR}}\\ decrease with increasing O{\\small 32} . For individual galaxies the anti-correlation is again only weak (Spearman's rank correlation coefficient $\\approx$ -0.35, $P$-value $\\approx$ 0). The \\ensuremath{\\mathrm{SFR}}\\ of the confirmed LyC emitters, visualised by the green stars and square, is about two orders of magnitude larger than the median of our galaxies with comparable O{\\small 32}\\ emission.\n\nWhen comparing the O{\\small 32}\\ ratio with the specific star formation rate (\\ensuremath{\\mathrm{sSFR}}\\ = \\ensuremath{\\mathrm{SFR}}\/\\Mstar), we find that these values are also not correlated and that this also holds for the median values in the O{\\small 32}\\ bins. The \\ensuremath{\\mathrm{sSFR}}\\ of the confirmed LyC emitters is on average one order of magnitude larger than the \\ensuremath{\\mathrm{sSFR}}\\ of our sample. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{SFR_o3o2_withhist_msselection_v2_sfms_rs1.pdf}\n \\caption{O{\\small 32}\\ ratio versus \\ensuremath{\\mathrm{SFR}} . The colours are described in the caption of Fig. \\ref{fig:m_o32}. 
All \\ensuremath{\\mathrm{SFR}} s that are shown here are derived from emission lines, as described in Sect. \\ref{sec:selection}.}\n \\label{fig:sfr_o32}\n\\end{figure}\n\n\\subsection{O{\\small 32}\\ as a function of R{\\small 23}}\n\\label{results:r23}\nR{\\small 23}\\ is a diagnostic to estimate the gas-phase oxygen abundances (later referred to as metallicity, \\Z ) of galaxies and is based on the ratio of [\\ion{O}{ii}]\\ + [\\ion{O}{iii}]\\ over \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ \\citep{1979A&A....78..200A, 1979MNRAS.189...95P, 1980MNRAS.193..219P, 1991ApJ...380..140M}, given by \\citep{1991ApJ...380..140M}\n\\begin{equation}\n\\mathrm{R{\\small 23}} \\space = \\space \\frac{ [\\ion{O}{ii}]{\\ensuremath{\\lambda}3727} + [\\ion{O}{iii}]{\\ensuremath{\\lambda}4959} + [\\ion{O}{iii}]{\\ensuremath{\\lambda}5007}}{\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi} \\space. \n\\end{equation} \nSince it only relies on the blue rest-frame spectrum, this diagnostic is often used when the red emission lines are out of the spectrum. However, the relation between R{\\small 23}\\ and \\Z\\ is degenerate and therefore additional lines are still necessary to constrain the metallicity. In Fig. \\ref{fig:r23} we plot the logarithm of the oxygen line ratio against the logarithm of R{\\small 23} . At high O{\\small 32}\\ ($\\log$ O{\\small 32}\\ $\\gtrsim$ -0.2), we visually determine a trend between O{\\small 32}\\ and R{\\small 23} , which is followed by the LyC leakers of \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. However, below $\\log$ O{\\small 32}\\ $\\approx$ -0.2 the data points scatter in the $\\log$ R{\\small 23}\\ direction, as a result of uncertain \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ measurements as we will return to in the discussion. 
There we will also discuss how the stellar and gas-phase metallicity influences the oxygen ratio.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{r23_o3o2_withhist_MSselection_v2_sfms_rs1.pdf}\n \\caption{O{\\small 32}\\ versus R{\\small 23} , which is a proxy of the metallicity. The colours are used in the same way as in the previous plots. The orange triangles are now both lower limits of O{\\small 32}\\ and upper limits of R{\\small 23} . Galaxies with S\/N(\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi) $< 3$ are shown by open symbols.} \n \\label{fig:r23}\n \\end{figure} \n\n\n\\section{Discussion}\n\\label{s_discussion}\nIn the previous section we gave an overview of the galaxy properties of our sample. Here we aim to determine what processes and properties are responsible for a high oxygen line ratio and under which circumstances extreme O{\\small 32}\\ ratios are likely to be found. \n\\subsection{The occurrence of high O{\\small 32}}\nAlthough in previous studies `extreme' oxygen emitters are defined as having O{\\small 32}\\ > 4, we use O{\\small 32}\\ > 1 as the threshold for `high' O{\\small 32}\\ emitters to study their occurrence, which will be justified later in this section. This results in a significant sample of 104 high O{\\small 32}\\ emitters, in contrast to applying the O{\\small 32}\\ > 4 threshold, which would only leave 15 galaxies in the extreme regime. To study how our selection criterion influences our results and to assess their redshift dependence, we constructed a comparison sample from SDSS data. \n\\subsubsection{Creating a comparison sample from SDSS data}\n\\label{section:sdsscomp}\nWe created a comparison sample of star-forming galaxies in SDSS DR7, for which the derivation of the measurements is detailed in \\citet{2004MNRAS.351.1151B} and \\citet{2004ApJ...613..898T}. 
For each galaxy in the MUSE sample we selected a local analogue in SDSS that lies at the same position in the redshift-corrected \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ plane, so both samples contain the same number of galaxies. However, because the spectra and consequently the emission-line flux measurements of SDSS galaxies are, unlike those of the MUSE galaxies, biased by aperture effects due to the finite fibre size, we first corrected for this. Assuming that the slope of the SFMS from Eq. \\ref{SFMS} is unaffected, we refitted the relation to the SDSS data, resulting in a shift of +0.2 in the $\\log \\ensuremath{\\mathrm{SFR}} - 2.93 \\log(1+z)$ direction. Each SDSS galaxy in the same redshift-corrected \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ position is a potential analogue of a MUSE galaxy. We therefore selected all SDSS galaxies within the 1-$\\sigma$ error bars of the position of the MUSE galaxy. We used the median value and the 16$\\%$ and 84$\\%$ percentiles of the [\\ion{O}{iii}]\\ and [\\ion{O}{ii}]\\ emission-line fluxes of all selected SDSS galaxies as the fluxes and errors of the analogue. \n\nFigure \\ref{fig:oiii_oii_hist_sdss} shows the distribution of O{\\small 32}\\ of our sample (left) and the SDSS comparison sample (middle). We compared the O{\\small 32}\\ of each galaxy in our sample with its counterpart in the SDSS sample. The result of this is shown in the right panel of Fig. \\ref{fig:oiii_oii_hist_sdss}. At $\\Delta \\log$ O{\\small 32}\\ = 0 (black dashed line), the oxygen ratio of the galaxy in our sample is the same as that of its SDSS analogue. The median value of the $\\Delta \\log$ O{\\small 32}\\ is at 0.13 dex (blue solid line), which means that the O{\\small 32}\\ ratio of the MUSE galaxies on average exceeds the O{\\small 32}\\ of galaxies in the comparison sample. 
The dashed blue lines, however, indicate that this difference is within the 1-$\\sigma$ error bars.\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{oiii_oii_mosaic_sdss_onlysf_newplot_incmedian_errors_MSselection_v2.pdf}\n \\caption{Number of galaxies per $\\log$ O{\\small 32}\\ bin for our sample of 406 galaxies (left panel). The dashed lines correspond to O{\\small 32}\\ = 1 (left) and O{\\small 32}\\ = 4 (right). In the middle panel we show a similar plot for our SDSS comparison sample (see the text for details). The distribution of the difference in O{\\small 32}\\ of each galaxy in our sample with its counterpart in the SDSS comparison sample is shown in the right panel. The median and the 1-$\\sigma$ spread of the distribution are shown by the solid and dashed blue lines, respectively.}\n \\label{fig:oiii_oii_hist_sdss}\n\\end{figure*}\n\n\\subsubsection{Incidence rate of high O{\\small 32}\\ as a function of mass}\n\\label{incidenceratemass}\nWe divided the sample into three subsets based on stellar mass, with $\\log$ \\Mstar \/\\Msun\\ in the ranges [7.0, 8.0], [8.0, 9.0], and [9.0, 10.0]. We then derived the fraction of galaxies with O{\\small 32}\\ > 1 in each mass bin, shown as the black line in Fig. \\ref{fig:f_logM_notZcorr}, where the points indicate the centres of the mass bins. One-$\\sigma$ errors are derived by bootstrapping the sample 10,000 times (grey area). We then applied the same approach to the SDSS comparison sample (see the red points, line, and shaded area).\n\nThe fraction of galaxies in the MUSE sample with O{\\small 32}\\ > 1 decreases with increasing \\Mstar\\ (black line); we find $\\sim$30 $\\%$ for galaxies with stellar masses between $10^{7}$ and $10^{9}$ \\Msun, but $\\sim$10 $\\%$ in the highest mass bin. This trend is comparable to that of the median bins between O{\\small 32}\\ and \\Mstar , which we described in Sect. \\ref{subsec:o32M} and Fig. \\ref{fig:m_o32}. 
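The binned incidence rates and their bootstrap errors described above can be sketched as follows; the O32 values here are synthetic, and the routine is a minimal illustration of the procedure rather than the actual analysis code:

```python
import numpy as np

def fraction_with_error(o32, threshold=1.0, n_boot=10_000, seed=0):
    """Fraction of galaxies above an O32 threshold, with a 1-sigma
    uncertainty estimated by bootstrap resampling: draw the sample with
    replacement, recompute the fraction, and take the standard deviation."""
    rng = np.random.default_rng(seed)
    o32 = np.asarray(o32)
    frac = np.mean(o32 > threshold)
    resamples = rng.choice(o32, size=(n_boot, o32.size), replace=True)
    boot_fracs = np.mean(resamples > threshold, axis=1)
    return frac, np.std(boot_fracs)

# Illustrative use with synthetic log-normal O32 values for one mass bin.
rng = np.random.default_rng(1)
o32_bin = 10.0 ** rng.normal(-0.2, 0.4, size=195)
frac, err = fraction_with_error(o32_bin)
```

For large bins the bootstrap error approaches the binomial estimate $\sqrt{f(1-f)/N}$, which is a quick sanity check on the resampling.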
The SDSS comparison sample follows a comparable trend, but the fractions are offset by 0.05-0.2 towards lower fractions. Below $\\log$ \\Mstar\/\\Msun\\ $ \\approx$ 8 the SDSS sample is, however, incomplete (see Appendix \\ref{app:sdss}), so we caution that the SDSS results in the lowest mass bin are likely to be biased as a result. \n\n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_noZcorr_msselection_allfields_inclSDSSnew_v2_boxes_rs1.pdf}\n \\caption{Fraction of galaxies with O{\\small 32}\\ > 1 in bins of 1 dex with central $\\log$ \\Mstar\\\/\\Msun = 7.5, 8.5, and 9.5 of our MUSE sample (black\/grey) and the SDSS comparison sample (red) (see the text for details). The points denote the centres of the stellar mass bins and their size reflects the number of galaxies in the bin. The shaded areas cover the 1-$\\sigma$ errors as calculated by bootstrapping in the y-direction and the bin size in the x-direction.} \n \\label{fig:f_logM_notZcorr}\n \\end{figure} \n\n\\subsubsection{Adopting a metallicity-dependent threshold for the O{\\small 32}\\ ratio}\n\\label{sec:metalthreshold}\nHigh oxygen ratios can also be driven by low metallicity systems (see for example the nebular models of \\citealt{2016MNRAS.462.1757G}) due to the harder ionising spectrum of low metallicity stars and less efficient cooling. Here we perform the same analysis as in the previous section, but instead of the fixed threshold at O{\\small 32}\\ = 1, we adopt a metallicity dependent threshold on O{\\small 32}\\ that we derive as follows. For each galaxy we derive the metallicity \\Z\\ using the redshift dependent \\Mstar\\ - \\Z\\ relation of \\citet{2014ApJ...791..130Z}. We then set the threshold for this galaxy equal to the O{\\small 32}\\ ratio from photo-ionisation models from \\citet{2016MNRAS.462.1757G} with this metallicity and the ionisation parameter set to $\\log U = -3$. 
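The two-step construction just described (stellar mass and redshift to metallicity via a mass-metallicity relation, then metallicity to an O32 threshold via the $\log U = -3$ model sequence) can be sketched as below. The grid values and the relation coefficients are illustrative placeholders, not the actual Zahid et al. (2014) or Gutkin et al. (2016) numbers:

```python
import numpy as np

# Placeholder model sequence: O32 at log U = -3 as a function of
# metallicity Z (illustrative values, NOT the Gutkin et al. 2016 grid).
GRID_Z = np.array([0.0001, 0.002, 0.004, 0.008, 0.014, 0.030])
GRID_O32_LOGU_M3 = np.array([2.5, 1.7, 1.3, 0.9, 0.6, 0.5])

def o32_threshold(log_mstar, z):
    """Metallicity-dependent O32 threshold for a single galaxy."""
    # Placeholder redshift-dependent mass-metallicity relation.
    log_Z = -3.0 + 0.3 * (log_mstar - 8.0) - 0.2 * z
    # Interpolate the model sequence at this metallicity (GRID_Z is sorted).
    return float(np.interp(10.0 ** log_Z, GRID_Z, GRID_O32_LOGU_M3))
```

A galaxy then counts as a high O32 emitter when its measured ratio exceeds o32_threshold(log_mstar, z); lower-mass (lower-metallicity) galaxies face a higher bar, which is the sense of the correction applied here.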
We show the relation between the metallicity-dependent O{\\small 32}\\ threshold and stellar mass in Fig. \\ref{fig:threshold}, for the minimum and maximum redshifts of our sample ($z = 0.28$ and $z = 0.85$). Our assumption that galaxies are highly ionised above $\\log U = -3$ results in an O{\\small 32}\\ threshold between 0.5 and 1.7. This is the reason for setting the fixed threshold to O{\\small 32}\\ > 1 in the previous section. The incidence rate of galaxies with O{\\small 32}\\ above this metallicity-dependent threshold in each mass bin is shown in Fig. \\ref{fig:f_logM_Zcorr} for the MUSE sample (black\/grey) and the comparison sample (red).\n\nWe see that with the \\Z -dependent threshold there is no longer a strong trend between \\Mstar\\ and the fraction of high O{\\small 32}\\ emitters. For the bin with most galaxies ($8 < \\log$ \\Mstar\/\\Msun\\ $< 9$, see Table \\ref{table:fractions}), the SDSS and MUSE results are in agreement. This indicates that the relation between \\Mstar\\ and the incidence rate of high O{\\small 32}\\ emitters is most likely the result of the relation between metallicity and oxygen ratio.\n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{threshold_M_z.pdf}\n \\caption{Metallicity-dependent O{\\small 32}\\ threshold as a function of stellar mass, for the minimum and maximum redshift of our sample, derived using the redshift-dependent \\Mstar\\ - \\Z\\ relation of \\citet{2014ApJ...791..130Z} and the O{\\small 32}\\ ratio from photo-ionisation models from \\citet{2016MNRAS.462.1757G} (see the text for details).\n }\n \\label{fig:threshold}\n \\end{figure} \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_Zcorr_msselection_allfields_inclSDSS_v2_boxes_rs1.pdf}\n \\caption{Fraction of galaxies with O{\\small 32}\\ greater than the metallicity-dependent threshold for our MUSE sample (black\/grey) and the SDSS comparison sample (red) (see the text for details).} \n 
\\label{fig:f_logM_Zcorr}\n \\end{figure} \n\n\n\\subsubsection{Evolution of the fraction of galaxies with high O{\\small 32}}\nWe split the sample into three equally sized redshift bins, with median redshifts of $z = 0.42$, $z = 0.63$, and $z = 0.74$. We then calculated the fractions of galaxies above the metallicity-dependent threshold and show the result in Fig. \\ref{fig:f_logM_Zcorr_zdep}. For comparison we also show the incidence rates of the entire SDSS comparison sample, which has a median redshift of $z = 0.03$. \n\nIn the lowest mass bin there seems to be a weak trend between the incidence rate and the redshift; for example, the fraction in the highest $z$ bin is significantly higher than the fraction of the SDSS comparison sample. However, the lowest-redshift subsample in this mass bin is larger (44) than the intermediate- and high-redshift bins (26 and 28, respectively; see also Table \\ref{table:fractions}), which may indicate that we only include the most extreme star-forming systems in the high-redshift sample, which could explain the results that we observe in the lowest mass bin. In the two highest mass bins, there is no significant difference between the fraction of O{\\small 32}\\ > 1 at different redshifts. In Fig. \\ref{fig:fraction_time} the results for the galaxies with stellar masses between $\\log$ \\Mstar \/\\Msun\\ = 8 and $\\log$ \\Mstar \/\\Msun\\ = 9 are presented against the look-back time and redshift, which we calculated using the median redshifts of the redshift bins. \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{f_logM_Zcorr_msselection_allfields_zdep_inclSDSS_v2_boxes_rs1.pdf}\n \\caption{Incidence rate of galaxies with O{\\small 32}\\ above the metallicity-dependent threshold for three equally sized redshift-selected subsets, with median redshifts of z = 0.42 (green), z = 0.63 (purple), and z = 0.74 (orange) and the entire SDSS comparison sample (red) with median redshift z = 0.03. 
The vertical lines reflect the 1-$\\sigma$ errors in each bin.} \n \\label{fig:f_logM_Zcorr_zdep}\n \\end{figure} \n \\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{fractionversusredshift_v2_rs1.pdf}\n \\caption{Incidence rate of galaxies with high oxygen emission for galaxies with stellar masses between $\\log$ \\Mstar \/\\Msun\\ = 8 and $\\log$ \\Mstar \/\\Msun\\ = 9, versus look-back time and redshift, calculated from the median redshift of the subsets.} \n \\label{fig:fraction_time}\n \\end{figure} \n \nThe incidence rate of high oxygen emitters when controlled for metallicity is thus independent of redshift for our sample. Since we selected the galaxies based on their distance to the redshift-dependent SFMS (Eq. \\ref{SFMS}), we selected the same fraction of star-forming galaxies at each redshift, rather than selecting the same kind (same \\ensuremath{\\mathrm{SFR}}\\ and \\Mstar\\ and thus \\ensuremath{\\mathrm{sSFR}} ) of galaxies over cosmic time. The incidence rates that we derived here may therefore be extrapolated to the entire population of star-forming galaxies, and suggest that the fraction of high O{\\small 32}\\ emitters is constant over time between $z = 0.28$ and $z = 0.85$. \n\nIn the current paradigm, the O{\\small 32}\\ ratios of high-redshift galaxies are believed to be more extreme than those of low-redshift galaxies. Results from the MOSFIRE Deep Evolution Field (MOSDEF) and the Keck Baryonic Structure Survey (KBSS)-MOSFIRE surveys of galaxies at $z \\sim 2.3$ show that their O{\\small 32}\\ ratios are offset towards significantly larger values in comparison to those of local galaxies \\citep{2014ApJ...795..165S, 2016ApJ...816...23S, 2017ApJ...836..164S}. This may be interpreted as the result of a harder stellar radiation field at fixed mass at higher redshift \\citep{2014ApJ...795..165S, 2017ApJ...836..164S}, or of lower metallicities of high-redshift galaxies at fixed mass \\citep{2016ApJ...816...23S}. 
These results are supported by cosmological simulations of massive galaxies \\citep{2017MNRAS.472.2468H} that show that the ionisation parameter and the [\\ion{O}{iii}]\/\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ ratio increase with redshift at fixed \\Mstar\\ at $0 < z < 4$. Our results at $0.28 < z < 0.85$, however, support another scenario where the O{\\small 32}\\ ratio is constant over cosmic time. This difference may be the result of the different \\Mstar\\ regime that is probed in the high-redshift surveys (9 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 11) and in the cosmological simulations (9.5 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 11.5) with respect to the stellar mass of the galaxies in this work (7 $\\lesssim$ $\\log$ \\Mstar\/\\Msun\\ $\\lesssim$ 10). \n\\begin{table*}\n\\caption{Number of galaxies in mass and redshift bins.} \n\\label{table:fractions} \n\\centering \n\\begin{tabular}{c | c c c c} \n\\hline\\hline \n & 7 < $\\log$ \\Mstar \/\\Msun\\ < 8 & 8 < $\\log$ \\Mstar \/\\Msun\\ < 9 & 9 < $\\log$ \\Mstar \/\\Msun\\ < 10 & total \\\\\n \\hline \nall & 98 & 195 & 81 & 406 \\\\\nlow z & 44 & 64 & 11 & 135 \\\\\nintermediate z & 26 & 67 & 29 & 135 \\\\\nhigh z & 28 & 64 & 41 & 136 \\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\subsubsection{Completeness and robustness of the results}\nOur data sample consists of a combination of several surveys with depths between 1 and 30 hours. The catalogues for most fields are a mixture of emission-line- and continuum-detected galaxies, resulting in samples of different completeness limits in \\Mstar\\ and emission-line flux. In Appendix \\ref{app:sdss} we study how such a possible incompleteness of our data sample alters our results by simulating different completenesses in flux and \\Mstar\\ of SDSS data, and find that the effect is negligible. 
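The completeness test summarised above amounts to imposing artificial detection limits and re-deriving the incidence rate. A minimal sketch with synthetic inputs and hypothetical limits, not the actual Appendix procedure:

```python
import numpy as np

def fraction_after_cuts(o32, line_flux, log_mstar, flux_lim, mass_lim,
                        threshold=1.0):
    """High-O32 fraction after imposing completeness cuts in emission-line
    flux and stellar mass, to test how incompleteness biases the result."""
    o32, line_flux, log_mstar = map(np.asarray, (o32, line_flux, log_mstar))
    keep = (line_flux > flux_lim) & (log_mstar > mass_lim)
    return np.mean(o32[keep] > threshold)

# Synthetic inputs (illustrative only).
rng = np.random.default_rng(2)
o32 = 10.0 ** rng.normal(-0.1, 0.4, size=500)
line_flux = rng.lognormal(mean=1.0, sigma=0.8, size=500)
log_mstar = rng.uniform(7.0, 10.0, size=500)

full = fraction_after_cuts(o32, line_flux, log_mstar, -np.inf, -np.inf)
cut = fraction_after_cuts(o32, line_flux, log_mstar, flux_lim=2.0, mass_lim=8.0)
```

Comparing `full` with `cut` for a range of limits shows how sensitive the derived fraction is to incompleteness; in the paper's test the difference is negligible.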
\n\nWe studied the impact of our \\ensuremath{\\mathrm{SFR}}\\ calibration method on the results and compared them with \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi -derived \\ensuremath{\\mathrm{SFRs}}\\ that are de-reddened by the \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\/\\ifmmode {\\rm H}\\gamma \\else H$\\gamma$\\fi\\ ratio. We also re-analysed the data by adopting different stellar libraries for \\Mstar\\ calculations and derived similar results. However, for one of the surveys that is used for this study, the MUSE QuBES, we used the MUSE spectrum for the SED fitting instead of deep photometry as we used for the data of the other surveys. We are aware that this induces uncertainty on the mass estimates and therefore re-analysed the results in Sect. \\ref{sec:metalthreshold} without the data of the MUSE QuBES survey and obtained comparable results.\n\n\\subsection{Can nebular models with no escape of ionising photons predict the observed O{\\small 32} ?}\nIn Sect. \\ref{results:r23} we discussed the behaviour of our galaxies in the $\\log$ O{\\small 32}\\ versus R{\\small 23}\\ diagram (see also Fig. \\ref{fig:r23}). Here we compare these results with nebular models to study whether they are consistent with each other and whether we can derive the nebular metallicity of our galaxies with the R{\\small 23}\\ method (see Fig. \\ref{fig:r23_discussion}). We show the grids of line ratios from the nebular models calculated by \\citet{2016MNRAS.462.1757G} as the coloured squares, where each colour represents a model of a fixed metallicity \\Z \\ (see the figure legend; metallicity here refers to the combination of nebular and stellar oxygen abundances, since these are kept constant for these models). We connected the grids of models with constant metallicity by dashed lines, where the ionisation parameter $U$ increases towards the upper right from $\\log U = -4$ to $\\log U = -1$. 
For the calculation of these models the ionising photon escape fraction was assumed to be zero. The lower limits on O{\\small 32},\\ which are also upper limits on R{\\small 23},\\ are not shown here because it is difficult to compare these galaxies with the models. \n\nDeriving the nebular metallicity of our galaxies by comparing them to the model results is not straightforward, due to the degeneracy of the R{\\small 23}--metallicity relation. However, from Fig. \\ref{fig:r23_discussion} it is clear that the R{\\small 23}\\ ratio indicates that the metallicity of the majority of the galaxies in our sample is sub-solar and around 0.006 ($\\approx 1\/3$ \\Zsun). Hence the R{\\small 23}\\ of many of the galaxies exceeds the maximum R{\\small 23}\\ predicted by the models, which can partly be explained by an uncertain \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ flux measurement (galaxies with an \\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi\\ signal-to-noise ratio lower than three are shown by open squares). \n\nWe have differentiated between models with ionisation parameters between $\\log U$ = -4 and $\\log U$ = -2 (solid line) and those of higher values, since observations show that the bulk of star-forming galaxies have $\\log U$ < -2 (e.g. \\citealt{2014ApJ...787..120S}). The ionisation parameter of galaxies with extreme O{\\small 32}\\ ratios might, however, exceed those of normal galaxies, as pointed out by \\citet{Stasinska15}. However, if the O{\\small 32}\\ ratio of our galaxies exceeds the predicted ratio of models with $\\log U > -2$, the escape of LyC photons is also a likely scenario. For this reason the galaxy with the highest O{\\small 32}\\ ratio is a promising LyC escape candidate. 
Although the O{\\small 32}\\ ratios of the other extreme emitters in our sample (with O{\\small 32}\\ > 4) in this diagram are similar to those of the confirmed LyC leakers (green stars), the logarithm of R{\\small 23}\\ of our galaxies scatters within 0.4 dex of the values of the LyC leakers. Most of them, however, imply either low stellar and nebular metallicities, high ionisation parameters, the escape of ionising photons, or a combination of these factors. Comparing the data that lie inside the model grid to these nebular models with no escape of ionising photons is thus not sufficient to determine whether LyC escape is responsible for the extreme oxygen emission. \n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\hsize]{r23_o32_nosdss_newcolors_v2_resubmit_rs1.pdf}\n \\caption{Logarithm of O{\\small 32}\\ versus the logarithm of R{\\small 23} . The black points are galaxies from our sample, open symbols reflect galaxies with S\/N(\\ifmmode {\\rm H}\\beta \\else H$\\beta$\\fi) $< 3$, while the green stars are the confirmed LyC leakers from \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. The squares that are connected by solid ($\\log U < -2$) and dashed coloured lines show the position in this diagram of the nebular models of \\citet{2016MNRAS.462.1757G} for constant gas-phase metallicity (see legend) and with increasing $\\log U$ towards the upper right, with -4 < $\\log U$ < -1 and step size 0.5.} \n \\label{fig:r23_discussion}\n \\end{figure*} \n \n\\subsection{Extreme O{\\small 32}\\ emitters}\nIn the previous sections we discussed how the properties of galaxies with extreme oxygen ratios are related to the entire sample. Here we study the robustness of the O{\\small 32}\\ measurement and investigate the electron temperatures of the [\\ion{O}{iii}]\\ regime in extreme galaxies. \n\nTo confirm that the O{\\small 32}\\ ratios are well measured, we compare them to the [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratio. 
This ratio is an alternative diagnostic of the ionisation parameter because of its tight relation to the O{\\small 32}\\ ratio \\citep{2014ApJ...780..100L}. Even though [\\ion{Ne}{iii}]$\\lambda 3869$\\ is more than an order of magnitude fainter than [\\ion{O}{iii}]$\\lambda 5007$, the [\\ion{Ne}{iii}]\/[\\ion{O}{ii}]\\ ratio is less affected by reddening than the O{\\small 32}\\ ratio. \nThe relationship between [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$\\ and [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]$\\lambda 3727$\\ for the extreme emitters with a significant [\\ion{Ne}{iii}]$\\lambda 3869$\\ detection ($\\sigma > 3$) is shown in Fig.~\\ref{fig:ne3}. The solid grey line shows the predictions of the Starburst99\/Mappings III photo-ionisation models of \\citet{2014ApJ...780..100L} with \\Z\\ = 0.2 \\Zsun . We offset the models by +0.6 in the $\\log$ [\\ion{O}{iii}]$\\lambda 5007$\/[\\ion{O}{ii}]$\\lambda 3727$\\ direction to take into account the discrepancy between these models and the observations of \\citet{2006A&A...448..955I} and \\citet{Jaskot13}, as discussed by \\citet{2014ApJ...780..100L}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{o32_ne3o2_v2_resubmit.pdf}\n \\caption{Log O{\\small 32}\\ versus the $\\log$ [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratio for our extreme emitters (O{\\small 32}\\ > 4) with a [\\ion{Ne}{iii}]$\\lambda 3869$\\ detection of at least 3 $\\sigma$. Galaxies for which we do not have a significant [\\ion{O}{ii}]\\ detection are shown by red circles, which represent the 3-$\\sigma$ lower limits on both ratios. The rest of the extreme emitter sample is shown by black circles and the most extreme O{\\small 32}\\ emitter by the black diamond. 
The grey line corresponds to the \\citet{2014ApJ...780..100L} relation for \\Z\\ = 0.2 \\Zsun\\ between $\\log$ O{\\small 32}\\ and $\\log$ [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ , which is offset by +0.6 in the y-direction.} \n \\label{fig:ne3}\n \\end{figure} \n We use these results to conclude that the [\\ion{Ne}{iii}]$\\lambda 3869$\/[\\ion{O}{ii}]\\ ratios are consistent with extreme O{\\small 32}\\ ratios. There are, however, offsets between our observations and the corrected \\citet{2014ApJ...780..100L} relation of up to 0.2 dex, which are most likely caused by the lower significance of the [\\ion{Ne}{iii}]$\\lambda 3869$\\ line and\/or by an offset in the dust attenuation estimate. We note, however, that if we use these offsets to correct the [\\ion{O}{iii}]\\ line fluxes, the O{\\small 32}\\ ratios will still be in the regime of the extreme oxygen emitters.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\hsize]{O32_O3_gutkin_cl01.pdf}\n \\caption{Log O{\\small 32}\\ versus the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio (lower x-axis) and the electron temperature in the [\\ion{O}{iii}]\\ regime (upper x-axis) calculated using the ${\\tt nebular.ionic}$ routine in Pyneb \\citep{2015A&A...573A..42L}, assuming $n_e$ = 100 cm$^{-3}$. The coloured lines indicate the predictions from the \\citet{2016MNRAS.462.1757G} models of different metallicity as indicated by the colours, with increasing $\\log U$ towards the upper right, with $\\log U < -2$ (solid line) and $\\log U > -2$ (dashed line), and step size of $\\log U$ = 0.5 between the squares. The colours of the circles are used as in Fig. \\ref{fig:ne3}. }\n \\label{fig:te}\n \\end{figure} \n For most of the extreme emitters, we have high S\/N measurements (S\/N > 3) of the auroral [\\ion{O}{iii}]$\\lambda 4363$\\ emission line. 
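The temperature diagnostic in the figure above uses PyNeb's full atomic data; as a simpler cross-check, the classic analytic [O III] relation from Osterbrock (1989) can be inverted directly. This sketch is an approximation, assuming n_e = 100 cm^-3 and the fixed line ratio [O III]λ4959 = λ5007 / 2.98:

```python
import numpy as np
from scipy.optimize import brentq

def t_electron(ratio_4363_5007, n_e=100.0):
    """Electron temperature (K) in the [O III] zone from the observed
    [O III]4363/5007 flux ratio, via the analytic Osterbrock (1989)
    relation (an approximation to a full PyNeb calculation)."""
    # R = (I_4959 + I_5007) / I_4363, with I_4959 = I_5007 / 2.98.
    big_r = (1.0 + 1.0 / 2.98) / ratio_4363_5007

    def balance(temp):
        return (7.90 * np.exp(3.29e4 / temp)
                / (1.0 + 4.5e-4 * n_e / np.sqrt(temp)) - big_r)

    # Root-find within the range of typical H II-region temperatures.
    return brentq(balance, 5.0e3, 3.0e4)
```

A larger 4363/5007 ratio yields a hotter nebula, which is the sense of the positive trend between O32 and the electron temperature discussed in the text.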
As such we can use the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio to diagnose the electron temperatures (\\Te) in the [\\ion{O}{iii}]\\ regime of the \\ion{H}{ii}\\ region (see \\citealt{1989agna.book.....O}), using the ${\\tt nebular.ionic}$ routine in Pyneb \\citep{2015A&A...573A..42L}, assuming an electron density of $n_e = 100$ cm$^{-3}$.\nThe [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratios and the corresponding values of \\Te\\ are shown in Fig. \\ref{fig:te}. A complete discussion on this is outside the scope of the paper. However, there is presumably a positive correlation between O{\\small 32}\\ and \\Te\\ since the electron temperatures of the galaxies with the most extreme O{\\small 32}\\ ratios exceed those of the other extreme galaxies. \nFurthermore, comparing the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ and O{\\small 32}\\ ratios to the predictions from the \\citet{2016MNRAS.462.1757G} nebular models, the metallicity of the galaxies can be constrained, varying from \\Z$\\approx0.0001$ (\\Z\/\\Zsun$\\approx$0.005) for the galaxy with the highest O{\\small 32}\\ ratio (diamond) to \\Z$\\approx0.002-0.004$ (\\Z\/\\Zsun$\\approx$0.15) for the galaxies with O{\\small 32} $\\approx 4$. Again, for the calculation of these nebular models, the fraction of escaping ionising photons is assumed to equal zero. Although such a scenario would boost the O{\\small 32}\\ ratio, there is no obvious reason to expect an effect on the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio, and thus the real metallicity of a leaking system would be even lower than what is predicted by the models. Other scenarios, such as a top-heavy initial mass function, might increase the hardness of the ionising spectrum and could, however, also affect the position of a galaxy in this plot. \n \n\\subsubsection*{UDF object 6865}\nIn Fig. 
\\ref{fig:images} we show an HST image in the F775W filter and MUSE images of the [\\ion{O}{ii}]\\ and [\\ion{O}{iii}]\\ lines of the most extreme oxygen emitter in our sample, which has O{\\small 32}\\ = 23. This galaxy is observed in the UDF-mosaic at $z = 0.83$ (identified in the MUSE catalogue of \\citealt{2017A&A...608A...2I} as id = 6865). The spectrum of this object is shown in the upper panel of Fig. \\ref{fig:example_spec}, from which we derived an extinction \\ensuremath{\\mathrm{\\tau}_{V}}\\ = 0.49. The field is very crowded, as can be seen in the HST image. The MUSE narrow-band images of the [\\ion{O}{ii}]$\\lambda 3727$\\ and [\\ion{O}{iii}]$\\lambda 5007$\\ lines, however, confirm that the measured line fluxes originate from this source in the centre of the images. The $\\log$ R{\\small 23} , $\\log$ [\\ion{Ne}{iii}]\/[\\ion{O}{ii}],\\ and $\\log$ [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ of this object are shown by the diamond in Figs. \\ref{fig:r23_discussion}, \\ref{fig:ne3}, and \\ref{fig:te}. The $\\log$ [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ indicates that the metallicity of this galaxy is extremely low (\\Z$\\approx0.0001$), which deviates somewhat from what is predicted from the R{\\small 23}\\ ratio (\\Z$\\approx0.0005-0.001$), and the ionisation parameter is high ($\\log U \\approx -1.5$). However, all models assume that there is no escape of ionising photons, which may affect the O{\\small 32}\\ and R{\\small 23}\\ ratios. \n\nDue to the combination of the relatively low stellar mass, $\\log$(\\Mstar\/\\Msun)$=8.16^{+0.16}_{-0.06}$, and the redshift, we are not able to precisely derive the size of the object, because the apparent size is comparable to the PSF of the HST image and the source is not resolved in the MUSE data. This, however, indicates that the galaxy is compact, like the objects of \\citet{Izotov16a,Izotov16b, 2018MNRAS.474.4514I}. 
Together with the comparison of its oxygen ratio to those of nebular models, this suggests that this galaxy may be an LyC emitter candidate. \n\\begin{figure} \n \\centering\n \\includegraphics[width=\\hsize]{HST_and_nb_6865_scale.png}\n \\caption{HST F775W image (left), a MUSE narrow band image of the [\\ion{O}{ii}]$\\lambda 3727$\\ line (middle), and a MUSE narrow band image of the [\\ion{O}{iii}]$\\lambda 5007$\\ line (right) of the most extreme oxygen emitter with O{\\small 32}\\ = 23.} \n \\label{fig:images}\n \\end{figure} \n\n\n\\section{Conclusions}\nWe constructed a sample of emission-line galaxies in the redshift range $0.28 < z < 0.85$ that are detected in data from four MUSE GTO surveys. The galaxies are selected based on their position in the \\ensuremath{\\mathrm{SFR}}\\ - \\Mstar\\ plane such that we only included galaxies that are above the redshift-dependent SFMS from \\citet{Boogaardetal}. In this regime we expect the sample to be independent of selection effects. Our final sample consists of 406 galaxies, of which 104 (26$\\%$) have a high O{\\small 32}\\ ratio (O{\\small 32}\\ > 1) and 15 galaxies are extreme emitters with O{\\small 32} > 4 (3.7$\\%$). We studied the O{\\small 32}\\ ratio as a function of the position in the (redshift-corrected) \\ensuremath{\\mathrm{SFR}}\\ versus \\Mstar\\ diagram, as a function of stellar mass \\Mstar , \\ensuremath{\\mathrm{SFR}} , and metallicity indicator R{\\small 23} . We then studied the incidence rate of galaxies with high oxygen ratios, which is defined by either a fixed threshold, O{\\small 32}\\ > 1, or by a metallicity-dependent threshold as a function of \\Mstar\\ and redshift. The main conclusions of this study are: \n\\begin{itemize}\n\\item Galaxies with a high oxygen ratio are more common at lower masses (\\Mstar\\ < 9) and above the SFMS. There is no clear correlation between distance from the SFMS and the O{\\small 32}\\ ratio for galaxies in our final sample that are above the SFMS (Fig. 
\\ref{fig:main_sequence}). \\\\\n\\item We find no correlation between O{\\small 32}\\ ratio and \\Mstar , although the median values in O{\\small 32}\\ bins seem to be anti-correlated (Fig. \\ref{fig:m_o32}).\\\\\n\\item We observe the same trend between the median values of O{\\small 32}\\ and \\ensuremath{\\mathrm{SFR}} , but again no significant correlation for individual galaxies. The \\ensuremath{\\mathrm{SFR}}\\ values of most of our extreme emitters are two to three orders of magnitude smaller than those of confirmed leakers (Fig. \\ref{fig:sfr_o32}).\\\\\n\\item The fraction of galaxies with high O{\\small 32}\\ ratios is independent of stellar mass when we use a metallicity-dependent O{\\small 32}\\ threshold (Fig. \\ref{fig:f_logM_Zcorr}).\\\\\n\\item We find no significant correlation between the fraction of high O{\\small 32}\\ emitters and redshift, suggesting that there is no redshift evolution of the number of high O{\\small 32}\\ emitters in the redshift range $0.28 < z < 0.85$ (Figs. \\ref{fig:f_logM_Zcorr_zdep} and \\ref{fig:fraction_time}). \\\\\n\n\\item Comparing O{\\small 32}\\ and R{\\small 23}\\ of our galaxies with those of nebular models with no escape of ionising photons, we find that some of the high oxygen emitters can be reproduced by models with a high ionisation parameter ($\\log U \\approx -2$), a very low stellar and nebular metallicity (smaller than $\\sim 1\/3$ \\Zsun), or a combination of both. However, our extreme emitters are in the same regime as the confirmed leakers from \\citet{Izotov16a, Izotov16b, 2018MNRAS.474.4514I} and we therefore cannot exclude the escape of ionising photons from these galaxies. The O{\\small 32}\\ ratio of our most extreme oxygen emitter can only be explained by models with a very high ionisation parameter ($\\log U > -2$), from which we conclude that this galaxy may be a LyC leaker candidate (Fig. \\ref{fig:r23_discussion}). 
\\\\\n\n\\item For galaxies with a significant [\\ion{O}{iii}]$\\lambda 4363$\\ detection, we derived the [\\ion{O}{iii}]$\\lambda 4363$\/[\\ion{O}{iii}]$\\lambda 5007$\\ ratio and the electron temperature and find that these values are similar to or larger than those predicted by nebular models with extremely low metallicity, high ionisation parameters, and constant \\ensuremath{\\mathrm{SFR}}\\ at $t = 3 \\times 10^8$ years. From this we conclude that a part of the extreme O{\\small 32}\\ emitters may have light-weighted ages of $t < 3 \\times 10^8$ years (Fig. \\ref{fig:te}). \\\\\n\\end{itemize}\n\\label{s_conclusions}\n\n\n\\begin{acknowledgements}\nWe thank the referee for a constructive report that helped improve the paper. AV is supported by a Marie Heim-V\\\"{o}gtlin fellowship of the Swiss National Foundation. JB acknowledges support from the Funda\\c{c}\\~{a}o para a Ci\\^{e}ncia e a Technologia (FCT) through national funds (UID\/FIS\/04434\/2013) and Investigador FCT contract IF\/01654\/2014\/CP1215\/CT0003., and by FEDER through COMPETE2020 (POCI-01-0145-FEDER-007672). TC acknowledges support from the ANR FOGHAR (ANR-13-BS05-0010-02), the OCEVU Labex (ANR-11- LABX-0060), and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the \"Investissements d'avenir\" French government programme. JS and SM acknowledge support from The Netherlands Organisation for Scientific Research (NWO), VICI grant 639.043.409. 
SC gratefully acknowledges support from Swiss National Science Foundation grant PP00P2$\\_$163824.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n\n\\section{Introduction}\nServices running on the client-server model may crash or behave unintentionally from time to time due to software bugs or attacks by malicious users.\nTo prevent such problems and continuously provide services to clients, fault tolerance is an important consideration.\n\\emph{State machine replication} (SMR) \\cite{Schneider1990} is commonly used to improve fault tolerance by replicating a service over multiple replicas.\nIn SMR, the copies of the service are called replicas, and the states of all replicas are kept consistent by executing a replication protocol.\nHence, using this method, the service as a whole can continue operating even if a failure occurs in some of the replicas.\nSeveral SMR protocols have been proposed in previous studies \\cite{Moniz2011,Cachin2001,Castro2002,Nakamura2014a,Kotla2007,Sousa2012,Bessani2014}.\n\nAn SMR that deploys replicas on a continental scale is called \\emph{geographic SMR} \\cite{Sousa2015,Liu2017,Eischer2018,Mao2008,Veronese2010,Coelho2018}.\nReplicas in geographic SMR are separated by a large distance to withstand a catastrophic disaster, such as an earthquake.\nIf some of the replicas fail, the service can be continued by the replicas in other \\emph{sites} (regions).\nWith the development of public cloud services after the 2000s, geographic SMR can be easily realized.\n\n\nAlthough geographic SMR can be easily implemented, ways of obtaining the best performance using the optimal 
replica deployment remain unclear.\nPerformance of a replica deployment depends on several factors, including the location of the leader replica, distances between replicas, and distances between clients and replicas.\nFor example, if replicas are deployed in nearby regions, the time taken for request processing can be shortened, but the fault tolerance will be reduced.\nIn contrast, if the replicas are distributed farther apart from one another, the fault tolerance will increase, but the processing time for a normal request will be longer.\n\nIn this paper, we propose a performance-estimation method to determine the optimal replica deployment for building a service using geographic SMR.\nFirst, we define the task of finding the optimal replica deployment among all possible candidates as the \\emph{replica deployment decision problem}, which requires outputting a ranking of all possible replica deployments sorted by their latencies.\nThe proposed method solves this problem by using an evaluation function that estimates the latency of each replica deployment based on the \\emph{round-trip time} (RTT), which is generally regarded as an important parameter in geographic SMR.\nAlthough it is unrealistic to actually build all possible replica deployments and measure their latencies, RTTs can be measured relatively easily.\nTherefore, this evaluation function is practical and can be used to select the optimal deployment for actual service construction.\n\n\nFinally, we conduct an experimental evaluation using Amazon Web Services with 15 regions to demonstrate the effectiveness and practicality of the proposed method.\nIn the experiment, we actually build thousands of geographic replications and measure their latencies; then we create the measured latency ranking and compare it against the rankings generated by the proposed method.\nThe results show that the proposed method with the RTT-based evaluation function can generate a consistent ranking with reasonable calculation 
time.\n\nIn particular, this paper makes the following contributions:\n\\begin{enumerate}\n\\item It presents a new method that generates a ranking to assist in deciding a replica deployment for geographic SMR.\n\\item It also presents an evaluation function that consistently estimates the latency of a replica deployment using round-trip times between sites, which can be measured easily compared with the actual latency of the deployment.\n\\item It conducts exhaustive experiments with thousands of replications built on Amazon Web Services, and evaluates the proposed method and the evaluation function.\n\\end{enumerate}\n\n\\section{Background}\n\\label{sec:backgrund}\n\n\n\\subsection{State Machine Replication}\n\\label{sec:smr}\n\n\\emph{State machine replication} (SMR) \\cite{Schneider1990} is a replication method for the client-server model.\nIn SMR, the server is modeled by a state machine; thus, on receipt of a message, the server changes its state and sends messages to other processes if necessary.\nThe server's role is replicated over $n$ replicas that independently operate the functions on distinct hosts and interact with clients via request and response messages.\n\nClient requests to be executed are submitted to all replicas, and the order in which different replicas receive these requests may differ due to variations in the communication delays.\nTherefore, the replicas execute a replication protocol to guarantee that they process requests in the same order to maintain consistency.\nAfter a replica processes a request, it replies to the client with the execution result.\n\n\nThere are two variations of SMR;\nSMR that can withstand crash failures (resp. Byzantine failures) is called CFT SMR (resp. 
BFT SMR).\nThe number $f$ of faulty replicas that a replication can tolerate is related to $n$ as follows \\cite{Lamport2002}:\n$n \\geq 2f + 1$ for CFT SMR and $n \\geq 3f + 1$ for BFT SMR.\nHereafter, we assume BFT SMR and $n = 4$ (i.e., $f=1$); however, the proposed method is applicable to any $n$ and $f$ of both BFT SMR and CFT SMR.\n\n\\subsection{Related Work}\n\\label{sec:relatedwork}\n\nThe problem of determining the optimal replica deployment has been extensively studied in the field of data replication.\nCook et al. formulated the time required to read and write data as a cost in a simple read-write policy (a client reads a data object from one replica and writes data by transferring it to all servers that hold a replica of it) and proved that this problem is NP-complete \\cite{Cook2002}.\nThey also proposed an approximation algorithm for the problem.\nAlthough the target replication problem is different, their formulation is very similar to the evaluation function proposed in this paper.\nThe survey by Sen et al. 
\\cite{Sen2015} provides a comprehensive overview of the previous studies on the data location optimization problem using mathematical models.\n\nIn the field of geographic SMR, there are a few methods that optimize a replica deployment \\cite{Liu2017,Eischer2018}.\nIn \\cite{Liu2017}, Liu and Vukoli\\'{c} proposed two methods for geographic SMR: Droppy, which dynamically relocates a set of replication leaders according to given replication settings and workload situations, and Dripple, which divides the replicated system state into multiple partitions so that Droppy can efficiently relocate the leaders.\nEischer and Distler proposed Archer \\cite{Eischer2018}, which relocates leaders based on their response times as measured by clients.\nA hash-chain-based technique was employed in the protocol to allow clients to detect illegal phases caused by Byzantine replicas to prevent such replicas from being wrongly assigned as leaders.\n\nIn this paper, we propose a method that can help identify the best replica deployment when building a geographic SMR.\nThe proposed method differs from these prior studies in several ways.\nFirst, the proposed method can be used with any replication protocol by defining an evaluation function to calculate the estimated latency of different replica deployments.\nIn contrast, although Droppy and Archer can dynamically relocate the leader replicas, they only support leader-based replication protocols.\nSecond, the proposed method can also identify the best replica deployment from all possible replica deployments; this complements these existing methods, which are limited to determining an assignment of replication roles to the replicas in a replication.\n\n\\section{Replica Deployment Decision Problem}\n\\label{sec:problem-definition}\n\nWe formally define the problem addressed herein as a \\emph{replica deployment decision problem}.\nIn the definition, we refer to a location where a replica (or a client) can be deployed as a 
\\emph{site}\\footnote{For example, if the SMR is built on a public cloud service, each region is a site; if it is built in facilities on premises, each data center is a site.}.\nIn the problem, the following inputs are provided by a user.\n\\begin{itemize}\n \\item $n$: the number of replicas that the user wants to deploy\n \\item $SC$: a set of candidate sites where replicas can be deployed\n \\item $C$: a set of client locations\n\\end{itemize}\n\nThe goal of this problem is to output a ranking\\footnote{\nThe proposed method outputs not only the best replica deployment, but also the whole ranking of all possible deployments, because the best deployment may not be acceptable for some reason other than latency.\n}\nof replica deployments sorted by latency (a replica deployment with smaller latency is ranked higher).\nThe user will then choose the final replica deployment for the SMR from this ranking.\nHere, latency is defined as the time taken by a client from sending a request to the replicas until receiving its response.\n\n\n\n\n\\section{Proposed Method}\n\\label{sec:proposed-method}\n\nIn this section, we propose a method to solve the replica deployment decision problem and to determine the optimal replica deployment from all the possible deployments for geographic SMR.\nUsing the proposed method, any replication configuration can be evaluated without actually building it.\n\n\n\n\n\\subsection{Overview}\n\\label{sec:proposed-method-overview}\n\nFigure \\ref{fig:approach} illustrates the overview of the proposed method, which consists of the following steps:\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=65mm]{fig\/conceptual_diagram_of_propose_method.pdf}\n \\caption{Overview of the proposed method}\n \\label{fig:approach}\n\\end{figure}\n\\begin{enumerate}\n\\item First, a set, $DC$, of all possible replica deployments is created based on $SC$ and $n$.\nEach replica deployment is expressed as a pair of locations for 
the leader and the other replicas\\footnote{Here, we assume rotating coordinator-based SMR protocols similar to those \\cite{Lamport1998,Castro2002,Sousa2012,Kotla2007}.\nIf the proposed method is applied to leader-less SMR protocols similar to those \\cite{Moniz2011,Cachin2001,Nakamura2014a}, then each replica deployment is simply expressed as a set of replica locations of size $n$.\n}.\n\\item Next, for each replica deployment $x \\in DC$, its latency is estimated using the evaluation function $f(x, C)$ based on the measured RTTs.\nThis function is further described in Section \\ref{sec:evaluation-function}.\n\\item The elements in $DC$ are sorted based on their calculated latency; the sorted result is output as the ranking for the inputs.\n\\end{enumerate}\nThus, the replica deployment with the shortest latency is ranked as the best replica deployment.\n\n\\subsection{Evaluation Function $f(x, C)$} \n\\label{sec:evaluation-function}\n\nThe evaluation function $f(x, C)$ outputs an estimated latency based on the replica deployment $x$ and the client locations $C$ by tracing the message transmissions specific to the replication protocol being used.\nThe function plays an important role in the proposed method.\n\n\n\\subsubsection{Approach}\n\\label{subsubsec:evaluation-function-approach}\n\nIf the set of site candidates $SC$ is large, it is impractical to actually build SMRs with all possible replica deployments to evaluate their latencies.\nTherefore, the evaluation function estimates them based on the round-trip times (RTTs) between sites, which can be measured more easily, and outputs the estimate as the latency for that deployment.\nIn other words, before using the proposed method, a user must measure RTTs between candidate sites in advance.\nHere, the time required for message processing in a replica is disregarded because the communication delay between replicas is relatively large compared with the processing time in a geographic SMR.\n\nAssuming that a latency can only be estimated from the 
communication time, two factors must be considered: the types of message communications (i.e., \\emph{message transmission patterns}) that constitute latency and the communication time between sites.\nThe message transmission pattern can be found by referring to the SMR protocol used in a replication.\nThen, for a given set $C$ of clients and replica locations $x$, the function simulates the transmission and receipt of messages based on the message transmission pattern of the replication protocol and the measured RTTs.\n\nHere, we model the message transmission pattern of Mod-SMaRt \\cite{Sousa2012} of BFT-SMaRt \\cite{Bessani2014} as an example; however, we believe the same approach can be applied to other SMR protocols.\nIn Mod-SMaRt, a special replica (called a \\emph{leader} replica) determines the order in which requests are executed and communicates this order to the other replicas.\nThe message transmission pattern involves five types of messages that are exchanged among the client and replicas to process the request, as shown in Fig. 
\\ref{fig:bft-smart-message-flow}.\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[width=65mm]{fig\/bft-smart-message-flow.pdf}\n \\caption{The message transmission pattern for the Mod-SMaRt protocol \\cite{Sousa2012} in BFT-SMaRt \\cite{Bessani2014}.\n Replica 1 is the leader replica, and Req., P, W, A, and Res. indicate Request, Propose, Write, Accept, and Response messages, respectively.}\n \\label{fig:bft-smart-message-flow}\n\\end{figure}\nFirst, the client sends a request to each replica (Request).\nWhen the leader replica receives the request, it sends Propose messages to each replica to propose a candidate value for agreement (Propose).\nThen, Write and Accept messages are exchanged between all replicas to confirm the validity of the candidate values to determine the final agreed value (Write and Accept).\nFinally, the replicas execute the ordered request and return the result to the client (Response).\nHereafter, an RTT and a message transmission delay between sites $a$ and $b$ are denoted by $\\mathrm{RTT}(a, b)$ and $\\mathrm{delay}(a, b) = \\mathrm{RTT}(a, b)\/2$, respectively.\n\n\n\n\\subsubsection{Latency Formulation}\n\\label{subsubsec:latency-calculating}\n\nThe evaluation function $f$ estimates a latency for each client location $c \\in C$, and outputs the average of these latencies as follows:\n\\begin{equation}\n f(x, C) = \\sum_{c \\in C} f_c(x, c) \/ |C|,\n\\end{equation}\nwhere $f_c$ is an evaluation function for a single client.\nHereafter, we explain how $f_c(x, c)$ calculates a latency for a replica deployment.\nThe message pattern of Mod-SMaRt comprises five parts as depicted in Fig.~\\ref{fig:bft-smart-message-flow}, and we denote the timings of these parts by $S_{req}$, $S_{pro}$, $S_{wrt}$, $S_{acc}$, and $S_{res}$, respectively. 
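The stage-by-stage timing computation that this subsection formalizes can also be expressed compactly in code. The following Python sketch is our illustration only (function names and structure are assumptions, not BFT-SMaRt code); it simulates the five message stages for one client, given a function `delay(a, b)` that returns half the measured RTT between two sites, and then enumerates and sorts all deployments as in the overview. For simplicity, the sketch uses each client's own request delay rather than an average over all clients.

```python
from itertools import combinations

def find(values, k):
    # k-th smallest element (1-indexed) of a collection of timings
    return sorted(values)[k - 1]

def estimate_latency(leader, replicas, client, delay, f):
    # Sketch of the per-client evaluation f_c(x, c) for Mod-SMaRt.
    # delay(a, b) = RTT(a, b) / 2, taken from measured round-trip times.
    n = len(replicas)
    quorum = (n + 2) // 2                        # majority: ceil((n + 1) / 2)
    s_req = delay(client, leader)                # Request reaches the leader
    s_pro = {r: s_req + delay(leader, r) for r in replicas}         # Propose
    s_wrt = {r: find([s_pro[q] + delay(q, r) for q in replicas], quorum)
             for r in replicas}                  # Write quorum reached at r
    s_acc = {r: find([s_wrt[q] + delay(q, r) for q in replicas], quorum)
             for r in replicas}                  # Accept quorum reached at r
    # Response: the client accepts after f + 1 matching replies arrive
    return find([s_acc[r] + delay(r, client) for r in replicas], f + 1)

def rank_deployments(site_candidates, clients, delay, n, f):
    # Steps 1-3 of the proposed method: enumerate DC, score, and sort.
    scored = []
    for members in combinations(site_candidates, n):
        for leader in members:    # the leader choice distinguishes deployments
            avg = sum(estimate_latency(leader, list(members), c, delay, f)
                      for c in clients) / len(clients)
            scored.append((avg, leader, members))
    scored.sort()
    return scored
```

As a sanity check, with uniform inter-site delays the estimate reduces to five one-way hops (Request, Propose, Write, Accept, Response).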
\nIf necessary, we denote the timing for a specific replica $r_i$ by adding a superscript such as $S_{pro}^i$.\n\nFirst, we calculate the timing $S_{req}$ at which the leader receives a request.\nIn the replication protocol, a request message is sent from a client to each replica although only the leader replica processes the request in the fault-free case;\nthus, $S_{req}$ can be expressed as the average of the transmission delays from each client $c$ to the leader replica $l$:\n\\begin{equation}\n S_{req} = \\sum_{c \\in C} \\mathrm{delay}(c, l) \/ |C|.\n\\end{equation}\n\nThen, the leader sends the request to each replica as Propose messages; the timing $S_{pro}^i$ at which the replica $r_i$ receives the Propose message is expressed as follows:\n\\begin{equation}\n S_{pro}^i = S_{req} + \\mathrm{delay}(l, r_i).\n\\end{equation}\n\nWhen a replica receives the Propose message, it broadcasts a Write message to all replicas.\nEach replica accepts the Write message when it receives the same Write messages from a majority $\\lceil (n+1)\/2 \\rceil$ of the replicas.\nThe timing $S_{wrt}^i$ at which replica $r_i$ accepts the Write messages can be calculated based on the timing at which the replica $r_{i}$ receives the Write message sent from replica $r_j$:\n\\begin{equation}\n S_{wrt}^i = \\mathrm{find}(T^i_{wrt}, \\lceil (n+1)\/2 \\rceil),\n\\end{equation}\nwhere $t_{wrt}(r_i, r_j) = S_{pro}^j + \\mathrm{delay}(r_j, r_i)$, $T_{wrt}^i = \\{ t_{wrt}(r_i, r_j) \\mid 0 \\leq j < n \\}$, and $\\mathrm{find} (S, k)$ is a function that returns the $k$-th smallest element of set $S$.\n\nAn Accept message is sent in the same way as Write messages.\nTherefore, if we define $t_{acc}(r_i, r_j) = S_{wrt}^j + \\mathrm{delay}(r_j, r_i)$, $S_{acc}^i$ is\n\\begin{equation}\n S_{acc}^i = \\mathrm{find}(T^i_{acc}, \\lceil (n+1)\/2 \\rceil),\n\\end{equation}\nwhere $T_{acc}^i = \\{ t_{acc}(r_i, r_j) \\mid 0 \\leq j < n \\}$.\n\nFinally, when a replica receives a majority of Accept messages, it 
executes the request and sends the execution result to the client as a Response message.\nWhen a client receives the same response message from $f + 1$ distinct replicas, it accepts the result.\nTherefore,\n\\begin{equation}\n f_c(x, c) = S_{res} = \\mathrm{find}(T_{res}, f+1),\n\\end{equation}\nwhere $T_{res} = \\{ S_{acc}^i + \\mathrm{delay}(r_i, c) \\mid 0 \\leq i < n \\}$.\n\n\n\\section{Evaluation}\n\\label{sec:evaluation}\n\nIn this section, we examine the effectiveness of the proposed method described in Section \\ref{sec:proposed-method}.\nFirst, the evaluation of replica deployments in terms of the RTT is verified in Section \\ref{sec:rtt-experiment}.\nNext, the latencies of thousands of replica deployments on a public cloud service are measured to evaluate the accuracy of the ranking generated by the proposed method in Section \\ref{sec:latency-experiment}.\nFinally, Section \\ref{sec:ranking-experiment} characterizes the time that it takes to generate a ranking.\n\nAll experiments are conducted using Amazon Web Services EC2, a representative public cloud service.\nWe use 15 regions\\footnote{N.~Virginia, Ohio, N.~California, Oregon, Mumbai, Seoul, Singapore, Sydney, Tokyo, Canada Central, Frankfurt, Ireland, London, Paris, S\\~{a}o Paulo} of Amazon EC2 as site candidates $SC$ for replica deployments (i.e., $|SC| = 15$).\nReplica and client programs are executed on Ubuntu Server 16.04 (64 bit).\nFor replicas and clients, we use t2.micro instances that have one vCPU, 1 GiB memory, EBS storage, and a network interface of \"Low to Moderate\" performance.\n\n\\subsection{Validation of the Use of RTTs}\n\\label{sec:rtt-experiment}\nThe proposed method calculates latencies based on the RTTs between sites.\nHere, we evaluate whether it is appropriate to use the RTTs for estimating latency and how long the generated ranking remains valid.\n\n\\subsubsection{Method}\n\\label{subsubsec:rtt-measuring-method}\n\nAn instance is deployed in each of the regions, and the 
\\verb,ping, command is executed against the instances in the other 14 regions every two seconds.\nRTTs were measured during the following periods (all times are displayed in UTC in 24-h notation):\n\\begin{itemize}\n\\item Term A: March 7, 19:27 -- 22:13, 2018\n\\item Term B: January 11, 11:14 -- January 28, 3:41, 2019\n\\item Term C: April 15, 15:48 -- April 23, 11:15, 2019\n\\end{itemize}\n\n\n\\subsubsection{Results and Discussion}\n\\label{subsubsec:rtt-results}\n\nRTTs measured during Term C are shown as a boxplot in Fig. \\ref{fig:rtt-ireland-all} (only the results for the Ireland instance are shown due to space limitations).\nAlthough RTT varied from region to region, these variations were small.\nThe largest variation was observed between the Ireland and Singapore regions, and its mean and standard deviation were 180.3 and 24.1 ms, respectively.\n\n\\begin{figure}[tp]\n\t\\centering\n\t\\includegraphics[scale=0.80]{fig\/ireland-boxplot-all.pdf}\n\t\\caption{Distribution of RTT from Ireland to each region during Term C.}\n\t\\label{fig:rtt-ireland-all}\n\\end{figure}\n\nNext, we compare the RTTs from Ireland to Singapore (where the largest variations were observed) during Terms A, B, and C.\nThe average RTTs were 175.3 ms in Term A, 179.8 ms in Term B, and 180.3 ms in Term C.\nOver the 13 months between Term A and Term C, RTT increased by 5 ms.\nAlthough this may seem like a small difference, if similar changes occurred between all regions, it is likely that the ranking generated by the proposed method would change considerably.\n\n\n\nTo investigate how these differences affect a replica deployment ranking, we generated two rankings from the RTTs measured during Terms A and C with the client location Multiple (see Section \\ref{subsubsec:latency-measuring-method} for its definition).\nFigure \\ref{fig:rtt_termA_vs_termC} shows the correlation between these rankings. 
\nWe can observe that the RTT changes certainly affected the ranking, especially for ranks 2000--5000.\nThe largest difference occurred for the replica deployment of Tokyo (leader), Canada, Oregon, and Singapore.\nThe deployment was in 3523rd place in the Term A ranking, while it was in 2688th place in the Term C ranking.\n\n\\begin{figure}[tp]\n \\centering\n \\includegraphics[scale=0.31]{fig\/scatter_WPSDS_TermA-TermC_Fstrict.pdf}\n \\caption{Difference between the rankings of Terms A and C.}\n \\label{fig:rtt_termA_vs_termC}\n\\end{figure}\n\nThe results indicate that the RTT variations in the public cloud are sufficiently small in the short term; thus, evaluating a replica deployment based on the RTTs between sites is valid.\nIn contrast, RTTs between regions changed over long periods (on the order of one year).\nTherefore, a replica deployment that is found to be optimal may no longer be optimal after a long time has passed, suggesting that replicas should be relocated periodically to maintain optimal performance.\n\n\\subsection{Ranking Accuracy}\n\\label{sec:latency-experiment}\nHere, we discuss the accuracy of a ranking generated by the proposed method by comparing the rankings with those derived from the experimentally measured latencies of all possible replica deployments.\n\n\\subsubsection{Method}\n\\label{subsubsec:latency-measuring-method}\n\n\nWe introduce a baseline evaluation function $f_{simple}(x, C)$ to compare the accuracy of the evaluation function of the proposed method.\nThis function roughly estimates a latency based on a simplified message pattern for Mod-SMaRt.\nFirst, it divides the pattern into three parts: Request, Byzantine Consensus, and Response, as in Fig.~\\ref{fig:bft-smart-message-flow}, and calculates their timings $S_{req}$, $S_{con}$, and $S_{res}$ as follows:\n$S_{req}$ is the average of the half RTTs from each client to the leader replica.\n$S_{con}$ is the sum of the half RTTs between all pairs of 
replicas.\n$S_{res}$ is the average of the half RTTs from each replica to each client.\nFinally, this function outputs the sum of these timings as a latency.\n\nIn this experiment, all possible replica deployments are built on AWS and the latency of each one is measured.\nWe do not assume that multiple replicas are deployed in the same region.\nSince $|SC| = 15$ and $n = 4$, the total number of possible replica deployments $|DC| = |SC| \\times {}_{|SC|-1}C_{n-1} = 5,460$.\nEven if replicas are deployed to the same combination of regions, the location of the leader replica may differ; hence, such deployments are counted separately. \n\nAs with replicas, it is assumed that the clients are also located in the AWS regions.\nTo evaluate the effects of the number and locations of clients, clients are placed in geographically distant regions, namely Ireland, Sydney, and N.~Virginia.\nThe case wherein multiple clients are placed in multiple regions (we call this deployment ``Multiple'') is also evaluated: 10 clients are placed in Ireland, 3 clients are placed in Sydney, and 5 clients are placed in N.~Virginia.\n\nSMR is built using the open-source SMR library BFT-SMaRt \\cite{Bessani2014}\\footnote{\\url{https:\/\/github.com\/bft-smart\/library\/releases\/tag\/v1.1-beta}}.\nA replication is built to withstand Byzantine failures; the tolerable number of failures is $f=1$ and the number of replicas is $n=4$.\nThe defaults are used for all other BFT-SMaRt settings.\n\nAll latencies are measured using the sample programs LatencyClient and LatencyServer bundled in BFT-SMaRt.\nLatencyClient periodically sends requests to the service and measures the latency.\nLatencyServer is a dummy service that provides no functionality; it simply returns a response immediately after receiving a request from a client.\nThe payload sizes of the requests and responses are 1,024 bytes.\nLatencyClient sends a request every 2 sec, 50 times in total.\nThe top 10\\% (i.e., the highest five values) and 
bottom 10\\% (i.e., the lowest five values) of the measured values are considered as outliers and disregarded;\nthe average of the other values (40 values in total) is considered as the latency of the replica deployment.\nThe latency is estimated with the average RTTs measured during Term C in Section \\ref{subsubsec:latency-measuring-method}.\n\n\n\\subsubsection{Results and Discussion}\n\\label{subsubsec:latency-results}\n\nFigure \\ref{fig:multiple-5460} shows the correlations between the rankings generated via the proposed method using the evaluation functions.\nDue to space limitations, only the results for the multiple are shown.\nTable \\ref{tab:scatter-result-overall} also shows the root mean square error (RMSE)\ncalculated based on the ideal ranking (i.e., $y = x$), which perfectly matches the ranking based on the measured latencies, and the correlation coefficient (CC) for each client location.\nThe results indicate that the RMSE was lower and the CC was higher (exceeding 0.91 in all cases) for $f$ than for $f_{simple}$ for all client locations.\nThis implies that $f$ yielded more accurate rankings by tracing the communications between the replicas in detail.\n\n\\begin{figure}[tp]\n\t\\centering\n\t\\includegraphics[scale=0.31]{fig\/ScatterPlot_WPSDS_single_sydney.pdf}\n\t\\caption{\n \tScatter plots of measured latency rank and estimated latency rank ($C$ = Sydney, $|DC| = 5460$).\n Each plotted point represents the latency of a replica deployment (red for $f$ and blue for $f_{simple}$).\n The horizontal axis represents the ranking derived from the latencies output by the proposed method with $f$ or $f_{simple}$, and the vertical axis represents the ranking derived from the measured latencies. 
\n }\n\t\label{fig:multiple-5460}\n\end{figure}\n\n\begin{table}[tp]\n \caption{RMSE and correlation coefficient (CC)}\n \begin{center}\n \begin{tabular}{|c|c|c|c|c|}\n \hline\n \textbf{} & \multicolumn{2}{|c|}{\textbf{RMSE}} &\multicolumn{2}{|c|}{\textbf{CC}} \\\n \cline{2-5} \n \textbf{Client location} & \n \textbf{\textit{$f_{simple}$}}& \textbf{\textit{$f$}}&\n \textbf{\textit{$f_{simple}$}}& \textbf{\textit{$f$}} \\ \hline\n Ireland & 759.411 &\t620.686 & 0.884 & 0.922\\ \hline\n N. Virginia & 722.516 &\t548.982 &\t0.895 & 0.939\\ \hline\n Sydney & 985.598 & 638.275 & 0.804 & 0.918\\ \hline\n Multiple & 697.473 & 629.228 & 0.902 & 0.920\\ \hline\n \end{tabular}\n \label{tab:scatter-result-overall}\n \end{center}\n\end{table}\n\n\n\n\nThese experiments confirmed that the proposed method can generate consistent rankings in various client locations.\nFurther, it was revealed that the rankings generated by $f$ are more accurate than those generated by $f_{simple}$ (particularly for the higher-ranked deployments).\nHence, an evaluation function that reproduces the replication protocol more faithfully yields a more accurate replica deployment ranking.\n\n\n\subsection{Calculation Time to Generate a Ranking}\n\label{sec:ranking-experiment}\n\nFinally, we evaluate the calculation time required to generate a ranking with the proposed method\footnote{\nAll the rankings were calculated by the program implemented with Python 3.6 on the following PC: Intel Core i5 7400, Windows 10 Home 64-bit.\n}.\nThe ranking calculation times of $f_{simple}$ and $f$ were 1.88s and 10.88s, respectively, for $n=4$; that is, $f_{simple}$ is roughly six times faster than $f$. 
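The deployment counts used throughout these experiments follow directly from the formula $|DC| = |SC| \times {}_{|SC|-1}C_{n-1}$ (one choice of leader site times the combinations of follower sites among the remaining ones). A minimal sketch reproducing the counts quoted in the calculation-time tables, using only the Python standard library:

```python
from math import comb

def deployment_count(num_sites: int, num_replicas: int) -> int:
    """Number of distinct replica deployments: one choice of leader site
    times the combinations of follower sites among the remaining sites."""
    return num_sites * comb(num_sites - 1, num_replicas - 1)

# Varying the number of site candidates with n = 4 replicas:
for sc in (15, 20, 25, 30):
    print(sc, deployment_count(sc, 4))

# Varying the number of replicas with |SC| = 15 site candidates:
for n in (4, 7, 10, 13):
    print(n, deployment_count(15, n))
```

The printed values match the $|DC|$ columns of the two tables (5,460; 19,380; 50,600; 109,620 and 5,460; 45,045; 30,030; 1,365).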
\nThis finding indicates that improving the reproducibility of the communication comes at the cost of additional time to calculate the estimated latency with the evaluation function.\n\n\nNext, we investigate the influence of the size of $SC$ on the calculation time with $f$.\nTable \ref{tab:ranking-each-SC} shows the resulting calculation times for different $|SC|$ values and the corresponding calculation times per replica deployment $t\/|DC|$.\nThe result shows that as the size of $SC$ increased, the calculation time required to generate the rankings considerably increased because the total number of replica deployments $|DC|$ grows rapidly (as $O(|SC|^n)$) with $|SC|$.\n\begin{table}[tp]\n \centering\n \caption{Calculation time by size of site candidates $SC$ ($n = 4$)}\n \label{tab:ranking-each-SC}\n \begin{tabular}{|c|c|c|c|}\n \hline\n \textbf{$|SC|$} & \textbf{Time $t$ [sec]} & \textbf{$|DC|$} & \textbf{$t\/|DC|$ [msec]}\\ \hline\n 15 & 12.9 & 5,460 & 2.36\\ \hline\n 20 & 39.5 & 19,380 & 2.04\\ \hline\n 25 & 110.5 & 50,600 & 2.18\\ \hline\n 30 & 227.3 & 109,620 & 2.07\\ \hline\n \end{tabular}\n\end{table}\n\n\n\nFurthermore, we investigate the influence of the number of replicas $n$ on the calculation time with $f$.\nTable \ref{tab:ranking-each-n} shows the calculation time $t$ for different $|DC|$ values and the corresponding calculation times per replica deployment, $t\/|DC|$, as $n$ is varied.\nThe result shows that $t$ and $|DC|$ were maximized at different values of $n$ (10 and 7, respectively) because the calculation time of a replica deployment $t\/|DC|$ increases as $n$ increases.\n\n\begin{table}[tp]\n \centering\n \caption{Calculation time by the number of replicas $n$ ($|SC| = 15$)}\n \label{tab:ranking-each-n}\n \begin{tabular}{|c|c|c|c|}\n \hline\n \textbf{$n$} & \textbf{Time $t$ [sec]} & \textbf{$|DC|$} & \textbf{$t\/|DC|$ [msec]}\\ \hline\n 4 & 12.9 & 5,460 & 2.36\\ \hline\n 7 & 429.1 & 45,045 & 9.53\\ \hline\n 10 & 792.1 & 
30,030 & 26.38\\ \hline\n 13 & 77.7 & 1,365 & 56.90\\ \hline\n \end{tabular}\n\end{table}\n\n\nThese measurement results reveal that the rankings for replica deployments can be calculated in several hundred seconds when the replica number and site number are relatively small.\nThis is considered a reasonable calculation time since a deployed SMR is typically operated for more than one year. \nIn contrast, if large numbers of replicas and $SC$ are used, the calculation time becomes very long.\nIn such a case, some changes need to be made so that the solution is still practical, e.g., calculating latencies in parallel, discarding replica deployments that seem likely to be slow, and so on.\n\n\section{Conclusion}\n\label{sec:conclusion}\n\nIn this paper, we addressed the difficulty of determining the optimal replica deployment for geographic state machine replication by proposing a novel method to generate a ranking of all possible replica deployments.\nWe introduced an evaluation function that estimates the latency of each replica deployment based on the RTTs between sites, which are easy to measure without actually building the deployments.\nHence, all possible replica deployments can be evaluated and ranked accordingly to determine the optimal replica deployment for geographic SMR.\nWe confirmed the validity of evaluating replica deployments in terms of their RTTs.\nAfter that, we measured the latencies of thousands of replica deployments built on Amazon Web Services, and ranked the deployments accordingly.\nThen, we compared this experimentally derived ranking with those rankings generated using the proposed method.\nThe results showed that the proposed method can create a ranking with sufficient accuracy in a reasonable time.\n\n\n\n\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\section{Introduction}\n\nDeep X-ray surveys have recently revealed a population of moderately to heavily\nabsorbed active galactic 
nuclei (AGN) at faint fluxes. A few such objects \nare known to be at high redshift, for example one source discovered by\n{\\it ROSAT}\\ is at z=2.35 (Almaini et al 1995) and two others discovered by \n{\\it ASCA}\\ have z=0.9 and z=0.672 (Ohta et al 1996; Boyle et al 1998). \nThe so-called Narrow-Line X-ray Emitting Galaxies (NLXGs) might, in fact,\nbe the low redshift counterparts of these obscured objects, since both\nclasses are characterised by hard X-ray spectra (Carballo et al 1995; \nAlmaini et al 1996). The discovery of \nsuch sources at faint X-ray fluxes is of vital importance in explaining the \norigin of the X-ray background, since the brightest AGN in the X-ray sky \n(mostly type 1 AGN) generally have much softer X-ray spectra than the X-ray \nbackground spectrum (Fabian \\& Barcons 1992).\n\nThe UK {\\it ROSAT} Medium Sensitivity Survey (Branduardi - Raymond et al\n1994; Carballo et al 1995) was carried out in order to identify a\ncomplete sample of moderately faint X-ray selected sources (flux over\nthe 0.5--2 keV band in excess of $1.7\\times 10^{-14}{\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$) over a\nsignificant area of the sky (2.2 deg$^2$) in a region of minimal\nGalactic absorption. In this survey the source with the highest hardness\nratio is RX J1011.2+5545 with $HR=0.67$ ($HR=(H-S)\/(H+S)$ where $S$ and \n$H$ are the counts in PSPC channels 11--39 and 40--200 respectively). It is\nalso one of its brightest sources with a flux $S(0.5-2\\,\n{\\rm keV} )=6.6\\times 10^{-14}\\, {\\rm erg}\\, {\\rm cm}^{-2}\\, {\\rm s}^{-1}$. The hard X-ray spectrum together\nwith the fact that the source has no optical counterpart visible on the POSS\nplates (which is atypical of the X-ray sources at this flux level), \nsuggested a possibly highly obscured source and prompted us to start a\nprogram of follow-up optical and {\\it ASCA}\\ hard X-ray observations. 
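The hardness ratio used to flag this source is defined above as $HR=(H-S)/(H+S)$, with $S$ and $H$ the counts in PSPC channels 11--39 and 40--200. A minimal sketch of the definition (the count values below are purely illustrative, not the actual counts for RX J1011.2+5545):

```python
def hardness_ratio(soft_counts: float, hard_counts: float) -> float:
    """PSPC hardness ratio HR = (H - S) / (H + S)."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

# Illustrative numbers only: a hard-spectrum source has H >> S,
# pushing HR towards +1; a soft source pushes it towards -1.
print(hardness_ratio(soft_counts=20.0, hard_counts=100.0))  # 0.666...
```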
A\nNED search also revealed that the source is a radio-emitter at various\nfrequencies, with a double lobe morphology. The combination of\nradio, optical and X-ray data has enabled us to classify this object as a\nradio-loud, moderately obscured, high-excitation AGN at a redshift\n$z=1.246$. This is the first X-ray-selected obscured AGN \nat high redshift found to be radio loud. In this paper we report on all \nof the recent observations and discuss the nature of this source.\n\n \n\section {The Data}\n\n\subsection{{\it ROSAT} soft X-ray observations}\n\nThe discovery observation was carried out on May 11, 1992 with \nthe {\it ROSAT}\ PSPC-B, giving an exposure time of 18529s. The data were \nreduced and scanned for sources as described in Carballo et al (1995). \nAfter a number of sources in the PSPC image were identified with optical \ncounterparts, the astrometry of the X-ray field was corrected by applying \nshifts in RA and DEC. The final X-ray position for the source \nRX J1011.2+5545 is $10^h11^m12^s.4$ and $55^{\circ}44'50''$\n(J2000) with a 90 per cent error circle of radius $\sim 4''$.\nThe X-ray image showed no evidence for any extension in the source\n(the FWHM is $27''$, consistent with the PSF at an offset angle\nfrom the {\it ROSAT}\ field centre of $7.3'$).\nThe Galactic column density in this direction is $6.7\times 10^{19}\, {\rm\ncm}^{-2}$.\n\n\n\n\n We used the FTOOLS\/XSELECT V3.6 package to extract the counts contained\n within a circle of radius $1.5'$ centered on the source and used \n a ``source-free'' region of radius $6.5'$ at a similar off-axis angle\n in the background subtraction. For the purpose of spectral fitting we \n grouped the PSPC pulse-height data so that every spectral bin\n contained at least 20 counts, leading to a 0.1--2.0 keV source\n spectrum with just 6 bins. 
\n\n A single power-law fit, assuming only Galactic line-of-sight absorption, \n gives a very flat photon spectral index\n $\\Gamma=0.93_{-0.23}^{+0.20}$ (we always quote 90 per cent errors for\n a single parameter). However, the quality of the fit is not very good\n ($\\chi^2\/{\\nu}=10.2\/4$) corresponding to probability for the null \n hypothesis (PNH) of only 3.7 per cent. The inclusion in the spectral model\n of absorption \n intrinsic to the X-ray source produces a somewhat better fit \n ($\\chi^2\/{\\nu}=4.4\/3$) with a steeper underlying power law\n (although formally the improvement in the fit is not significant\n in terms of the F-test). Clearly data at higher energies are required\n in order to better constrain the continuum slope in this source.\n\n\\subsection{{\\it ASCA} hard X-ray observations}\n\nRX J1011.2+5545 was observed with {\\it ASCA}\\ on November 12-13,\n1995. The source was clearly seen in both the SIS0 and SIS1 cameras,\nwhich were operated in 1-CCD mode (Bright mode). Standard FTOOLS\/XSELECT V3.6\ntasks and techniques were used to clean the data (using default\nparameters), resulting in effective exposure times of 53054s (SIS0)\nand 52855s (SIS1). In this paper we ignore the GIS2 and GIS3\nobservations since the source is barely detected in these detectors. \nWe rely on the spectral calibration of the SIS0 data, in preference to that \nfor SIS1 when necessary.\n\nA spectrum was extracted from a $3'$ radius region centred on the\nsource. The background subtraction was found to be much more accurate\nwhen we chose a source-free region within the same image rather than\nusing the available archival background images. (For example, a\ndetector Fe fluorescent line in the 6-7 keV spectral region went away\nwhen we used the adopted method, but not when the archival background\nwas used). After background subtraction the resulting spectrum was\nagain binned in order to give a minimum of 20 counts per spectral\nchannel. 
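The "probability for the null hypothesis" (PNH) quoted with each fit is the chi-square survival probability $P(\chi^2 > \chi^2_{\rm obs})$ for $\nu$ degrees of freedom. For even $\nu$ this has a closed form, so the quoted percentages can be checked without a statistics package (a sketch for verification, not the original analysis code):

```python
from math import exp

def chi2_pnh(chi2: float, dof: int) -> float:
    """Survival function P(X > chi2) for a chi-square variable with an
    even number of degrees of freedom, via the closed-form series
    exp(-x/2) * sum_{k=0}^{dof/2-1} (x/2)^k / k!."""
    assert dof % 2 == 0, "this closed form requires an even number of dof"
    term, total = 1.0, 1.0
    for k in range(1, dof // 2):
        term *= (chi2 / 2.0) / k
        total += term
    return exp(-chi2 / 2.0) * total

# Single power-law fit to the ROSAT data: chi2/nu = 10.2/4 -> ~3.7 per cent.
print(round(chi2_pnh(10.2, 4), 3))  # 0.037
```

The same function reproduces the ASCA-only fit (34.5/28, ~18.5 per cent) and the combined single power-law fit (53.9/34, ~1.6 per cent).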
The result was 16 bins for SIS0 and 15 for SIS1. No\nsignificant source variability was found in the data.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.xray.ps,width=0.5\\textwidth,angle=270}}\n \\caption{The measured {\\it ROSAT}\\ and {\\it ASCA}\\ X-ray spectra of RX J1011.2+5545\n together with the residuals to the best fitting (power-law plus\n intrinsic absorption) model. The filled squares, empty squares and triangles\n are the {\\it ROSAT}\\ PSPC, the {\\it ASCA}\\ SIS0 and the {\\it ASCA}\\ SIS1 data points\n respectively. The {\\it ROSAT}\\ PSPC model is shown with a dashed line, the\n {\\it ASCA}\\ SIS0 one with a solid line and the {\\it ASCA}\\ SIS1 one with a\n dotted line.}\n\\end{figure}\n\nThe simultaneous fitting of the SIS0 and SIS1 data with a single power-law \nmodel (but with different normalisations applying to the two detectors\nto allow for calibration uncertainties) gives an acceptable fit\n($\\chi^2\/{\\nu}=34.5\/28$ with PNH of 18.5 per cent) with a rather flat photon\nindex $\\Gamma=1.43_{-0.23}^{+0.24}$. The inclusion of absorption\nintrinsic to the X-ray source again produces a steeper underlying power law\nbut with only a modest improvement in the fit ($\\chi^2\/{\\nu}=31.8\/27$)\n(which again is not a significant improvement in terms of the F-test).\n\n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.cont.ps,width=0.5\\textwidth,angle=270}}\n \\caption{Confidence contours (68, 90 and 99 per cent confidence) for the intrinsic absorption and photon\n index from the combined {\\it ROSAT}\\ and {\\it ASCA}\\ data. }\n\\end{figure}\n\n\nWe then combined the {\\it ROSAT}\\ and {\\it ASCA}\\ data so as to better constrain the\nspectral parameters. Our approach has been to assume the same value of the\nmodel normalisation for the {\\it ROSAT}\\ and {\\it ASCA}\\ SIS0 data but allow\na different normalisation for {\\it ASCA}\\ SIS1 data. 
This procedure\nproduces a significantly better fit than taking the same normalisation for all\nthree datasets (at 99.9 per cent using F-test), whereas introducing different\nnormalisations for each instrument does not result in a\nsignificant improvement. A single power law fit is only marginally\nacceptable ($\\chi^2\/{\\nu}=53.9\/34$ with a PNH=1.6 per cent), with\n$\\Gamma=1.13\\pm 0.16$. However, the fit improves if absorption intrinsic to \nthe source is included ($\\chi^2\/{\\nu}=47.2\/33$ with PNH of 5.2 per cent). \nThe best fit (see Fig. 1) corresponds to $\\Gamma=1.45^{+0.72}_{-0.28}$ and\n$N_H=(2.1^{+12.4}_{-1.6})\\times 10^{21}\\, {\\rm cm}^{-2}$ (at the redshift of\nthe source $z=1.246$). Fig. 2 shows the confidence\ncontours for the two free spectral parameters.\n\nThere is some evidence for significant residuals in all three\ninstruments at $\\sim 1$ ${\\rm keV}$ (Fig. 1). The inclusion of a\nGaussian-line component centered at this energy results in a further\nimprovement of the fit ($\\chi^2\/{\\nu}= 31.8\/29$). However, it is not\ncompletely obvious that such a feature is associated with the source;\nthe corresponding rest-frame energy is $2.2\\pm 0.1\\, {\\rm keV}$ with a\nrest-frame equivalent width $\\sim 165\\, {\\rm eV}$. One could identify\nthis as a SiXIV-SiXVI complex (Netzer \\& Turner 1997), but then the Fe\nK line should be seen in the spectrum and it is not (rest-frame\nequivalent width $< 418$ eV at 95 per cent confidence). 
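The rest-frame quantities above follow from the measured redshift: observed photon energies (and energy-unit equivalent widths) scale up by $(1+z)$ in the source frame, while observed wavelengths (and wavelength-unit equivalent widths) scale down by $1/(1+z)$. A sketch (the 0.98 keV and 8370.8 Å inputs are assumed observed values chosen only to illustrate the conversion):

```python
Z = 1.246  # measured redshift of RX J1011.2+5545

def to_rest_energy(e_obs: float, z: float = Z) -> float:
    """Energies (and energy-unit equivalent widths) scale up by (1+z)."""
    return e_obs * (1.0 + z)

def to_rest_wavelength(w_obs: float, z: float = Z) -> float:
    """Wavelengths (and wavelength-unit equivalent widths) scale down by (1+z)."""
    return w_obs / (1.0 + z)

print(round(to_rest_energy(0.98), 2))        # a ~1 keV observed feature -> ~2.2 keV rest frame
print(round(to_rest_wavelength(8370.8), 1))  # an [OII]-like observed centroid -> ~3727 A
```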
Attempts to\naccount for these residuals in the X-ray spectrum in terms of ionised\nabsorbers did not yield a significant improvement in the fit.\n\nWe conclude that the absorbed, power-law fit is the most tenable model.\nThe flux of the source is $S(0.5-2\, {\rm keV})=6.6\times 10^{-14}\, {\rm erg}\, {\rm cm}^{-2}\, {\rm s}^{-1}$ and\n$S(2-10\, {\rm keV})=1.9\times 10^{-13}\, {\rm erg}\, {\rm cm}^{-2}\, {\rm s}^{-1}$ and the K-corrected rest\nframe luminosity (using the measured redshift of $z=1.246$) is\n$L(0.5-2\, {\rm keV})= 4.8\times 10^{44} {\rm erg}\, {\rm s}^{-1}$ and $L(2-10\, {\rm keV})=\n2.1\times 10^{45} {\rm erg}\, {\rm s}^{-1}$ ($H_0=50\, {\rm km}\, {\rm s}^{-1}\, {\rm\nMpc}^{-1}$ and $q_0=0$).\n\n\subsection{Optical imaging}\n\nThe POSS plates show no counterpart within or near the position of the\n{\it ROSAT}\ source. In order to search for fainter candidate optical \ncounterparts, we imaged the field (as for all the other survey sources) \nwith the CCD\ncamera at the Cassegrain focus of the 2.2m telescope of the Centro\nAstron\'omico Hispano Alem\'an, Calar Alto on February 10,\n1994. A single exposure of 900s was taken with the Johnson R filter.\nThe photometric conditions were good and the seeing was\n$\sim 1.4''$. Data reduction and the astrometric and photometric \ncalibrations were performed as described by Carballo \net al (1995).\n\n\begin{figure}\n\centerline{\psfig{figure=6h23.caha.ps,width=0.5\textwidth,angle=0}}\n \caption{$R$-band image of the field around RX J1011.2+5545. The\n image is 1 arcmin each side. The\n circle is the 90 per cent error circle for the {\it ROSAT}\ X-ray position}\n\end{figure}\n\n\nThe R-band image (Fig. 3) reveals a single $R=21.02\pm 0.08$ source\nwithin or near the error circle of the X-ray source, whose position is\n$10^h11^m12^s.3$ and $55^{\circ}44'47''$ (J2000) and is the likely\ncounterpart, later confirmed by spectroscopy. 
The surface brightness\nprofile of the source does not show compelling evidence for any\nadditional extension to the profile of a bright star. \n\n\\subsection{Optical spectroscopy}\n\nOptical spectroscopy of the candidate counterpart was carried out at\nthe 4.2m William Herschel Telescope on the Observatorio del Roque de\nlos Muchachos (La Palma) with the ISIS double spectrograph, on \nFebruary 25, 1998. We used the 150 lines\/mm gratings and TEK CCD\ndetectors for both arms, covering a spectral range from 3400 to 8550\\AA .\nThe atmospheric conditions were very poor with bad and variable sky\ntransparency, dust and seeing, with the latter starting at $3.5''$ but \nlater improving to between $2''$ and $2.5''$. Two sets of observations were \ncarried out, the first set corresponding to the period of worst seeing \nwith a slit width of $2.5''$ and the second set with a slit width of \n$1.5''$. Here we ignore the first set of observations, although \nqualitatively they reveal much the same as the second set. \n\n\\begin{figure}\n\\centerline{\\psfig{figure=6h23.wht.ps,width=0.5\\textwidth,angle=0}}\n \\caption{Raw optical spectrum of RX J1011.2+5545.}\n\\end{figure}\n\nThe observations with the slit width set at $1.5''$ totalled 5 on-source\nexposures of 1800s each, all close to parallactic angle and\nwith airmass less than 1.2. The data were reduced using standard IRAF\nroutines. The optimally extracted source spectra were registered to a common\nwavelength origin using the sky spectrum. The resulting summed\nspectrum was wavelength calibrated using polynomial fits to standard\narc maps, yielding rms residuals of 0.72\\AA\\ and 0.37\\AA\\ in the blue\nand in the red respectively. The spectral resolution was measured from\nunblended arc lines to be 9.6\\AA\\ and 8.8\\AA\\ in the blue and in the\nred respectively. Given the poor conditions, no attempt\nwas made to flux calibrate the spectra.\n\nFig. 
4 shows the resulting spectra with markers on the most prominent\nemission lines. The redshift $z=1.246$ has been determined from the\nstrongest features [NeV]$\\lambda$3426 and [OII]$\\lambda$3727, although\nthe other emission lines are entirely consistent with this\nredshift. The presence of the high ionisation [NeV]$\\lambda$3346 and\n$\\lambda$3426 lines clearly reveals an AGN. Table 1 lists the emission\nfeatures detected in the spectrum, rest-frame equivalent widths and\nFWHM estimated via gaussian fitting with 90 per cent errors and\ncorrected for spectral dispersion.\n\n\\begin{table}\n\\centering\n\\begin{minipage}{70mm}\n\\caption{Detected emission lines in the optical spectrum}\n\\begin{tabular}{llcc}\n\n\\hline\n\nEmission & Redshift & $W_{\\lambda}^a$ & FWHM\\\\\nline & & (\\AA ) & (${\\rm km}\\, {\\rm s}^{-1}$)\\\\\n\n\\hline\nCIV$\\lambda$1550 & 1.2450 & 35$^b$ & $<2500^b$\\\\\nHeII$\\lambda$1640 & 1.2452 & 15$^b$ & $<800^b$\\\\\nCIII$]\\lambda$1909 & 1.2445 & 16 & $385^{+390}_{-380}$\\\\\n$[$NeIV$]\\lambda$2423& 1.2469 & 12 & $560^{+280}_{-270}$ \\\\\nMgII$\\lambda$2798 & 1.242$^b$ & 15$^b$ & $2000^{+2700}_{-1000}$ ($^b$)\\\\\n$[$NeV$]\\lambda$3346 & 1.2453 & 4 & $480^{+160}_{-130}$\\\\\n$[$NeV$]\\lambda$3426 & 1.2462 & 14 & $920^{+310}_{-240}$\\\\\n$[$OII$]\\lambda$3727 & 1.2462 & 42 & $625^{+60}_{-55}$\\\\ \\hline\n\\end{tabular}\n\n$^a$ Rest-frame equivalent width\\\\\n$^b$ Highly uncertain\\\\\n\\end{minipage}\n\\end{table}\n\nThe semi-forbidden CIII] line is clearly detected and narrow (see\nTable 1). Since this line is predicted to be broad in a type 1 AGN,\nthe implication is that the broad-line region in this AGN is\nobscured. The CIV and HeII lines appear also narrow, but in a low\nsignal to noise part of the spectrum. The MgII line is probably broad,\nbut with an equivalent width normalised to the equivalent width of the\nnarrow lines significantly smaller (10--20 times) than is typically\nfound in type 1 AGN (Francis et al 1991). 
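The FWHM values in Table 1 are corrected for the spectral dispersion; the standard recipe subtracts the instrumental width in quadrature and converts the intrinsic width to a velocity. A sketch using the ~8.8 Å red-arm resolution quoted above and an assumed observed line width (both inputs are illustrative, not the actual measurements):

```python
from math import sqrt

C_KMS = 299792.458  # speed of light in km/s

def velocity_fwhm(fwhm_obs_A: float, fwhm_instr_A: float, lam_obs_A: float) -> float:
    """Intrinsic velocity FWHM (km/s) after quadrature removal of the
    instrumental line width; all widths and wavelengths in Angstroms."""
    intrinsic = sqrt(fwhm_obs_A**2 - fwhm_instr_A**2)
    return C_KMS * intrinsic / lam_obs_A

# Assumed example: an [OII]3727-like line observed at ~8371 A
# with a measured (uncorrected) width of 19.5 A.
print(round(velocity_fwhm(19.5, 8.8, 8371.0)))
```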
Broad MgII has been found\nin IR hyperluminous galaxies (Hines \& Wills 1993; Hines et al 1995)\nand high-redshift radiogalaxies (di Serego Alighieri, Cimatti \&\nFosbury 1994; Stockton, Kellogg \& Ridgway 1995) and has been\ninterpreted as scattered emission from a hidden type 1 AGN.\n\n\n\subsection{Radio data}\n\nWe searched in various archives for radio observations of our source.\nThere are a number of detections, the most relevant of which are the\nWesterbork Northern Sky Survey (Rengelink et al 1997) at 326 MHz, the\nTexas Survey (Douglas et al 1996) at 365 MHz, the FIRST survey (White\net al 1997) at 1.4 GHz and the Green Bank 6cm survey (Gregory et al\n1996) at 4.85 GHz. Both the Texas and the FIRST surveys resolve the\nsource into two components aligned approximately N-S. The N component is\nthe brighter of the two in the FIRST data (0.090 Jy compared to the 0.071 Jy of\nthe S component). The optical position lies in between both\ncomponents (see Fig. 5). The separation between the components \nis $\sim 11''$ at 1.4 GHz and $\sim 15''$ at 365 MHz.\n\n\begin{figure}\n\centerline{\psfig{figure=6h23.first.ps,width=0.5\textwidth,angle=0}}\n \caption{A radio map of RX J1011.2+5545 at 1.4GHz from the FIRST\n survey. The cross shows the position of the optical source and the\n thick circle is the error circle of the X-ray source.}\n\end{figure}\n\n\nThe integrated radio fluxes together with the measurements at optical and \nX-ray frequencies are shown in Fig. 6 in the form of a spectral\nenergy distribution. The radio spectrum has a \n$S_{\nu}\propto \nu^{-0.9}$ shape from 326 MHz to 4.85 GHz, which is\ntypical of lobe-dominated radio sources. 
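A two-point spectral index $\alpha$ (with $S_\nu\propto\nu^{-\alpha}$) can be recovered from any pair of flux measurements. The sketch below uses synthetic fluxes generated from the $\alpha=0.9$ slope quoted above rather than the actual catalogue values:

```python
from math import log

def spectral_index(s1_jy: float, nu1_ghz: float, s2_jy: float, nu2_ghz: float) -> float:
    """Two-point spectral index alpha for S_nu ~ nu**(-alpha)."""
    return -log(s2_jy / s1_jy) / log(nu2_ghz / nu1_ghz)

# Synthetic fluxes on an alpha = 0.9 power law between 326 MHz and 4.85 GHz.
s_326 = 1.0
s_4850 = s_326 * (4.85 / 0.326) ** -0.9
print(round(spectral_index(s_326, 0.326, s_4850, 4.85), 3))  # 0.9
```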
Although\nfrom the spatial information at the various frequencies it is not\ncompletely clear that this is a lobe-dominated double source, both the\nspectral index and the position of the optical source strongly support\nthis hypothesis.\n\n\begin{figure}\n\centerline{\psfig{figure=6h23.sed.ps,width=0.5\textwidth,angle=270}}\n \caption{The spectral energy distribution of RX J1011.2+5545 from\n radio to X-rays (see text for details on the data points).}\n\end{figure}\n\n\section{Discussion}\n\nThere are various facts that support the conclusion that RX\nJ1011.2+5545 is an AGN, the most relevant being a high radio to optical\nflux ratio, a $2-10\, {\rm keV}$ luminosity exceeding $10^{45}\, {\rm erg}\, {\rm s}^{-1}$ and\nthe broad MgII emission. In addition, the strong [NeV]$\lambda$3426 line \nimplies the presence of an underlying hard ionising continuum.\n\nWe initially suspected obscuration in this source because of its high \n{\it ROSAT}\ PSPC hardness ratio. Also for a typical uncovered AGN, the average\noptical magnitude corresponding to its X-ray flux would be $R\sim 19$ (see,\ne.g., Hasinger 1996) instead of the observed value of $R\sim 21$.\nThe absence of a broad CIII] line confirms this obscuration hypothesis.\n\nA weak broad MgII line is detected, its equivalent width being 3 to 5\ntimes smaller than for a type I AGN (Francis et al 1991; Baker \&\nHunstead 1995). This cannot be explained as a simple obscuration\neffect, since in that case both the broad lines and the nuclear\ncontinuum would be equally suppressed, leaving the equivalent widths\nunchanged. The weakness of MgII and the absence of broad CIV and HeII\nmay be the result of dilution by a source of blue continuum over and\nabove that emanating directly from the nucleus. The requirement would\nbe that at a rest wavelength of $\sim 2800$\AA\ the nuclear continuum\nmay be only 20 to 50 per cent of the total. 
The nature of this extra\nblue component is unknown, but reflected nuclear radiation, nebular\ncontinuum and copious star formation are all possibilities. The\nnon-detection of a reflected Fe K line in X-rays and the strong [OII] line\nwith respect to typical type I situation favour the enhanced star\nformation scenario. The equivalent width of the broad CIII] component\nis expected to be roughly 2 to 5 times smaller than that of MgII in a\ntype I AGN, and therefore it would be very weak in this object.\nObscuration of the nuclear continuum could also lead to the narrow\n[NeV] lines having enhanced equivalent widths.\n\n\nThe power-law in the X-ray spectrum of this object is similar to that\nfound for other luminous radio-loud quasars at high redshifts,\n($\\Gamma\\sim 1.5$, Cappi et al 1997), distinctively flatter than for\nradio-quiet AGN. This has been associated with different emission\nmechanisms (synchrotron self-Compton with the radio-emitting electrons\nin radio-loud AGN versus nuclear emission in radio-quiet objects). It\nis then possible that in radio-loud active galaxies the line-of-sight\nto X-ray emitting regions intercepts less obscuring material than does\nthe direct path to the nucleus. Larger absorbing columns ($N_H\\sim\n10^{22}\\, {\\rm cm}^{-2}$) than that observed in RX J1011.2+5545 are common\nonly among radio-loud quasars at very high redshifts ($z>3$, Cappi et\nal 1997, Fiore et al 1998). The possible contribution to the X-ray\nflux from a cluster of galaxies hosting this source (which might be\ndominant in radiogalaxies, Crawford \\& Fabian 1996) is small, since\nthe X-ray data does not show evidence for a spectral cutoff consistent\nwith thermal emission.\n\nThe amount of X-ray absorption predicts an optical extinction for the\nX-ray source which is $A_V=1.1^{+6.7}_{-0.85}$, using standard dust to\ngas ratios. For moderate extinction ($A_V\\sim 1-2$), the nuclear light\nseen in the optical can be direct radiation from the nucleus. 
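The extinction quoted above follows from the fitted absorbing column via a standard Galactic dust-to-gas ratio, roughly $N_H/A_V \approx 1.9\times10^{21}\,{\rm cm^{-2}\,mag^{-1}}$ (the exact ratio adopted in the paper is an assumption here; the quoted best-fit value is consistent with it):

```python
NH_PER_AV = 1.9e21  # cm^-2 per magnitude of V-band extinction; approximate Galactic value

def optical_extinction(n_h: float) -> float:
    """Predicted V-band extinction A_V (mag) for a given absorbing column N_H (cm^-2)."""
    return n_h / NH_PER_AV

# Best-fit column from the combined ROSAT + ASCA fit, N_H = 2.1e21 cm^-2:
print(round(optical_extinction(2.1e21), 1))  # ~1.1 mag, as quoted in the text
```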
However, if \nthe obscuration is much larger, then the MgII\nbroad line would be seen through reflection only. It is even possible\nthat the nucleus is very heavily obscured in the optical ($A_V\gg 10$) in\nwhich case the direct X-ray continuum and nuclear Fe K emission might also be\nsuppressed, leaving a dominant X-ray component arising in the radio lobes \nwith only moderate associated photoelectric absorption. \nDisentangling both possibilities requires high spatial resolution\noptical and IR observations.\n\nIn any event, the discovery of this object demonstrates that\nhigh-redshift radio-loud obscured AGN are present at faint X-ray\nfluxes. Such objects may play a role, albeit probably minor, in \nproducing the X-ray background. Surveys to be carried out with AXAF and XMM \nwill undoubtedly find large numbers of obscured AGNs and quantify their \ncontribution to the X-ray background.\n\n\section*{Acknowledgments}\n\nXB and RC were visiting astronomers of the Centro-Astron\'omico\nHispano-Alem\'an, Calar Alto, operated by the Max-Planck-Institute for\nAstronomy, Heidelberg jointly with the Spanish `Comisi\'on Nacional\nde Astronom\'\i a'. The William Herschel Telescope is operated on the\nisland of La Palma by the Isaac Newton Group in the Spanish\nObservatorio del Roque de los Muchachos of the Instituto de Astrof\'\i\nsica de Canarias. This research has made use of the NASA\/IPAC\nExtragalactic Database (NED), which is operated by the Jet Propulsion\nLaboratory, California Institute of Technology under contract with the\nNational Aeronautics and Space Administration. 
XB, RC, MTC and JIGS\nacknowledge financial support by the DGES under project PB95-0122.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\section{Introduction}\n\label{sec1}\nRadiation therapy, chemotherapy, surgery or their combination are commonly used to control cancer in the clinic.\nCompared to conventional 3D conformal therapy, modern treatment methods, such as intensity-modulated radiation therapy (IMRT) and volumetric arc therapy (VMAT), focus more on delivering the prescribed dosage to the planning target volume (PTV) while protecting organs-at-risk (OAR)~\cite{Rifat}. Tumor proximity to these critical structures demands a high accuracy in tumor delineation to avoid toxicities from radiation therapy~\cite{lin2019deep}.\nTo obtain a treatment plan with a desirable dose distribution, a physicist is required to carefully go through multiple trial-and-error iterations that tune the treatment planning parameters and weightings to control the trade-offs between clinical objectives. Such a procedure is time-consuming and could suffer from large inter-\/intra-observer variability due to the differing experience and skills of physicists~\cite{van2020automatic}. \n\nKnowledge-based planning (KBP) provides a promising solution to overcome the above limitations by automatically generating dose distributions, patient-specific dose-volume histograms (DVH) and dose constraints of PTVs and OARs. It can serve as a reference for planning optimization and planning quality control, thereby streamlining the treatment planning process. Recently, advancements in the field of deep learning have inspired much research in radiation oncology~\cite{ge2019knowledge}, as the dose distribution can be directly predicted by data-driven approaches. Specifically, Nguyen~\textit{et al}.\cite{nguyen2019feasibility} used U-Net to predict the dose distribution for prostate cancer. 
Fan~\\textit{et al}.\\cite{fan2019automatic} further utilized ResUNet to predict the dose distribution for head-and-neck cancer. Kandalan~\\textit{et al}.\\cite{kandalan2020dose} aimed to study the generalizability of U-Net for prostate cancer dose prediction via transfer learning with minimal input data. However, existing methods typically employed U-Net and its variants~\\cite{nguyen20193d,willems2019feasibility,nguyen2019feasibility,barragan2019three}, where off-the-shelf networks cannot guarantee the applicability for various physicians, diseases and clinical settings. \nRecently, the model ensemble approach has been verified to improve the performance and robustness, which is constructed by a collection of neural networks whose predictions are combined at the test stage by weighted averaging or voting. The strategies for building ensembles typically include training the same network with various initializations~\\cite{lakshminarayanan2016simple}, different number of iterations~\\cite{huang2017snapshot}, and multiple subsets of the training data~\\cite{zhou2002ensembling}.\nAlthough diversity is believed to be essential for successful ensembles~\\cite{kuncheva2003measures}, this is overlooked by existing methods typically using a single network architecture coupled with different training strategies or a combination of a few off-the-shelf architectures. \n\nThe lack of diversity issue can be tackled by the method based on neural architecture search (NAS), which can generate a large number of diverse architectures, driving a natural bias towards diversity of predictions, and in turn to afford the opportunity to integrate these networks for a better result. However, several important research gaps are unexplored: \n1) Which one is more important to create an ensemble between base learners' performance and diversity?\n2) How to balance the trade-off between ensemble performance with computational complexity? 
\n3) How to encourage diversity in the NAS searching process?\n\nIn this study, we propose a learning-based ensemble approach with NAS for dose prediction, named LENAS, which adopts the teacher-student paradigm by leveraging the combination of diverse outputs from multiple automatically designed neural networks as a teacher model zoo to guide the target student network training. \nThe core of our LENAS is twofold. First, instead of using off-the-shelf networks, we present a novel U-shape differentiable neural architecture search framework, named U-NAS, which automatically and efficiently searches for neural architectures among an enormous space of architecture configurations to ensure both high performance and diversity of teacher models. \nSecond, to reduce the computational costs in the inference phase and meanwhile ensure high ensemble performance, we further present a knowledge distillation (KD) network with adversarial learning, named KDA-Net, which hierarchically transfers the distilled knowledge from the teacher networks to the student network. \nTo the best of our knowledge, the proposed LENAS is the first method to integrate NAS into KD in the medical imaging field, and the first method to investigate NAS and KD in ensemble learning. \nThe proposed method has been evaluated on two public datasets, \textit{i}.\textit{e}., the OpenKBP dataset of the 2020 AAPM Grand Challenge and the AIMIS dataset of the 2021 Tencent AIMIS Challenge (task 4). Our U-NAS ensembles achieved mean absolute errors (MAE) of 2.357 and 1.465 in dose score and DVH score on the OpenKBP dataset, respectively, superior to the champion of the AAPM challenge. Our single LENAS model achieved MAEs of 2.565 and 1.737 in dose score and DVH score on OpenKBP, respectively, superior to the state-of-the-art methods~\cite{babier2020openkbp,long2015fully,milletari2016v,ronneberger2015u}. 
\nIn addition, our single U-NAS model achieved a mean squared error (MSE) of 15611.6398 in dose score on AIMIS, winning first place in the AIMIS challenge.\n\nOur main contributions are fourfold:\n\\begin{itemize}\n \\item We present a novel learning-based ensemble framework, named LENAS, comprising the U-NAS framework, which efficiently and automatically searches for optimal architectures, and KDA-Net, which trades off computational cost against accuracy;\n \\item This is the first attempt to investigate NAS and KD in ensemble learning, especially in the field of medical image analysis;\n \\item We provide several in-depth analyses and empirical guidelines for base learner generation and selection in ensemble learning, in consideration of both diversity and performance;\n \\item Extensive experiments on two public datasets demonstrate the effectiveness of each module and the superior performance of our method over state-of-the-art methods. \n \n\\end{itemize}\n\\section{Related Work}\n\\subsection{Knowledge-Based Planning}\nKnowledge-based automatic treatment planning builds an atlas-based repository or a mathematical model from previously optimized plans to predict dosimetry (\\textit{i}.\\textit{e}., the dose distribution, the entire DVH curve, dose-volume metrics, etc.)~\\cite{momin2021knowledge}. For example, in atlas-based methods, manually designed geometric features are selected as metrics to define the similarity between previous plans and a new plan; the parameters of the most similar previous plan are then adopted to initialize the optimization of the new plan. The modeling methods use handcrafted features to regress and predict the DVH of a new plan to guide the optimization process~\\cite{zhu2011planning}.
The features include the overlap volume histogram (OVH)~\\cite{wu2009patient}, beam's eye view (BEV) projections, overlaps of regions of interest (ROIs), etc., which are applicable to both types of methods.\n\nHowever, the traditional KBP methods only predict 1-dimensional or 2-dimensional dosimetric metrics, which lack the full spatial distribution of the dose.\nIn the past few years, many researchers have therefore focused on deep learning-based KBP methods. Thanks to the powerful ability of convolutional neural networks (CNNs) to extract statistical and contextual features, 3-dimensional voxel-wise dose distributions can be predicted directly with high accuracy. The inputs of deep learning-based models are usually images (\\textit{e}.\\textit{g}., CT images and structure masks), and the model architectures are mainly U-Net~\\cite{willems2019feasibility, kajikawa2019convolutional, bohara2020using}. The two main directions for improving the performance of CNN-based dose prediction are: 1) designing different architectures, including modified U-Net~\\cite{ma2019individualized, gotz2020deep}, U-Res-Net~\\cite{liu2019deep}, HD U-net~\\cite{barragan2019three, nguyen20193d}, GAN-based models~\\cite{murakami2020fully, nguyen2020incorporating}, etc.; and 2) adding clinical parameters to the inputs, such as the isocenter~\\cite{willems2019feasibility}, beam geometry information~\\cite{barragan2019three}, and isodose lines and gradient information~\\cite{tan2021incorporating}.\n\n\n\\subsection{Ensemble Learning}\nEnsemble learning has shown impressive power in various deep learning tasks (\\textit{e}.\\textit{g}., the ILSVRC challenge~\\cite{russakovsky2015imagenet}), and a large body of literature has provided theoretical and empirical justifications for its success, including Bayesian model averaging~\\cite{domingos2000bayesian,monteith2011turning}, enriched representations~\\cite{domingos1997does}, and reduced stochastic optimization error~\\cite{dietterich2000ensemble,zhou2021ensemble}.
\nThese arguments reach a consensus that the individual learners in an ensemble should be \\textit{accurate and diverse}~\\cite{brown2005managing,zhang2013exploiting}. \nTo encourage the diversity of ensembles, the strategies for building them typically include: 1) training the same network with various settings, such as bagging~\\cite{altman2017ensemble}, random initializations~\\cite{kornblith2019similarity}, and different hyper-parameters~\\cite{morcos2018insights} (\\textit{e}.\\textit{g}., number of iterations, learning rate, and objective function); and 2) training networks with different architectures. One of the most famous techniques is dropout~\\cite{srivastava2014dropout}, in which some of the neurons are dropped in each iteration, so that the final model can be viewed as an ensemble composed of multiple different sub-models. \nIn addition, Lin~\\textit{et al}.~\\cite{lin2019seg4reg} won first place in the AASCE\\footnote{https:\/\/aasce19.grand-challenge.org} challenge with an ensemble of ResNet~\\cite{he2016deep}, DenseNet~\\cite{huang2017densely}, and EfficientNet~\\cite{tan2019efficientnet}. \nAs for combining the predictions of the base models in an ensemble, the most prevalent methods are majority voting~\\cite{opitz1996actively} for classification and segmentation (which can be viewed as pixel-wise classification), and simple averaging~\\cite{perrone1993networks} for regression tasks. \n\nDespite their success, most existing ensemble methods do not explicitly balance the two important factors, \\textit{i}.\\textit{e}., the performance of the individual learners and the diversity among them.
To the best of our knowledge, we are the first to apply NAS to ensemble learning, and we further provide empirical guidelines for selecting the members of an ensemble.\n\n\\subsection{Neural Architecture Search}\nNeural architecture search (NAS) aims at searching for a desirable neural architecture within a large collection of architectures.\nIt has received increasing interest in various medical image analysis tasks, such as image classification~\\cite{dondeti2020deep}, localization~\\cite{jiang2020elixirnet}, segmentation~\\cite{wang2021bix}, and reconstruction~\\cite{yan2020neural}. \nMuch of the focus has been on the design of the search space and the search strategy. \nFor example, Weng~\\textit{et al}.~\\cite{weng2019unet} introduced NAS-Unet for 2D medical image segmentation, which consists of different primitive operation sets for the down-sampling and up-sampling cells.\nZhu~\\textit{et al}.~\\cite{zhu2019v} proposed V-NAS for volumetric medical image segmentation, whose search space includes 2D, 3D, and pseudo-3D (P3D) convolutions. \nAs for the search strategy, existing research can be categorized into three classes: evolutionary algorithms~\\cite{real2019regularized}, reinforcement learning~\\cite{zoph2018learning}, and gradient-based differentiable methods~\\cite{liu2019darts}. \n\nHowever, two research gaps remain, which we address in this paper. First, we apply NAS to a pixel-wise regression task in medical imaging (\\textit{e}.\\textit{g}., dose prediction).
Second, we investigate the effectiveness of NAS in ensemble learning, as most existing methods focus on searching for the single best model, ignoring the value of the enormous pool of architecture candidates.\n\n\n\\section{Methods}\n\\label{sec_methods}\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/framework.pdf}\n \\caption{Overview of the proposed LENAS.\n $O_i$ in the hybrid module denotes an operation and $\\alpha_i$ denotes its weight. $Dis$ in KDA-Net denotes the discriminator.}\n \\label{fig_framwork}\n\\end{figure*}\n\nThe framework of the proposed LENAS is shown in Fig.~\\ref{fig_framwork}, which has two components: 1) a U-shape differentiable neural architecture search (U-NAS) pipeline for automatic architecture search, and 2) KDA-Net, which hierarchically transfers the distilled knowledge of the U-NAS ensembles to a single lightweight network via adversarial learning. In the following, we introduce each component in detail.\n\n\\subsection{U-NAS}\nAs shown in Fig.~\\ref{fig_framwork}, the proposed U-NAS follows the U-Net-style encoder-decoder structure~\\cite{ronneberger2015u} with four down cells (DCs) and four up cells (UCs). Each individual cell is learned in a large search space with about $4\\times 10^4$ architecture configurations. In the following, we first introduce the search space, then describe the training strategy for the joint optimization of the architecture and its weights.\n\n\\noindent\\textbf{Search Space.}\nThe yellow and red blocks in Fig.~\\ref{fig_framwork} show the network topologies of the DC and UC, respectively, which include several fundamental computing units called hybrid modules (HMs). Each HM is a weighted sum of different operations, and there are four types of HM: normal ($N$), downward ($D$), upward ($U$), and connect ($C$), corresponding to different operation groups in the search space.
As shown in Table~\\ref{tab_ops}, we include the following operations in the search space: convolution (conv), squeeze-and-excitation conv (se\\_conv), dilated conv (dil\\_conv), depthwise-separable conv (dep\\_conv), max pooling (max\\_pool), average pooling (avg\\_pool), trilinear interpolation (interpolate), and residual connection (identity). \n\n\\begin{table}[htbp]\n\\caption{Operation set used for searching cells.}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{cccccc}\n\\toprule[1.5pt]\nNormOps & DownOps & UpOps & \\multicolumn{1}{c|}{ConnectOps} & pre & post \\\\ \\hline\nidentity & avg\\_pool & up\\_se\\_conv & \\multicolumn{1}{c|}{identity} & conv & conv \\\\\nconv & max\\_pool & up\\_dep\\_conv & \\multicolumn{1}{c|}{no connection} & & \\\\\nse\\_conv & down\\_se\\_conv & up\\_conv & \\multicolumn{1}{c|}{} & & \\\\\ndil\\_conv & down\\_dil\\_conv & up\\_dil\\_conv & \\multicolumn{1}{c|}{} & & \\\\\ndep\\_conv & down\\_dep\\_conv & interpolate & \\multicolumn{1}{c|}{} & & \\\\\n & down\\_conv & & \\multicolumn{1}{c|}{} & & \\\\ \n\\bottomrule[1.5pt]\n\\end{tabular}}\n\\label{tab_ops}\n\\end{table}\n\nThe prefix `down' means the stride of the convolution operation is two, while the prefix `up' indicates a transposed convolution, which doubles the image resolution. \nFor the first three columns of Table~\\ref{tab_ops}, we use $3\\times 3\\times 3$ kernels for all convolution operations in the Conv-IN-ReLU order. In addition, a $3\\times 3\\times 3$ convolution (pre) and a $1 \\times 1 \\times 1$ convolution (post) are applied to adjust the number of channels.\n\n\\begin{algorithm}[!h]\n\\caption{Training Strategy of U-NAS}\n \\begin{algorithmic}\n \\STATE Create the mixed operations $\\overline{O}$ parametrized by $\\alpha$.\n \\WHILE{not converged}\n \\STATE 1. Update weights $\\omega$ by gradient descending $\\nabla _\\omega \\mathcal{L}_{\\mathrm{dose}}(\\omega , \\alpha)$ on $\\mathcal{D}_{\\mathrm{train}}$.\n \\STATE 2. 
Update $\\alpha$ by gradient descending $\\nabla _{\\alpha} \\mathcal{L}_{\\mathrm{dose}}(\\omega , \\alpha)$ on $\\mathcal{D}_{\\mathrm{val}}$.\n \\ENDWHILE\n \\STATE \\fontsize{8.5pt}{\\baselineskip}\\selectfont Replace $\\overline{O}$ with $O=O_i$, $i=\\arg \\max_k \\exp (\\alpha _k)\/\\sum ^{N}_{j=1} \\exp(\\alpha _j)$.\n \\STATE Re-train the network with the best learned cell structures on $\\mathcal{D}_{\\mathrm{train}}$.\n \\end{algorithmic}\n \\label{algo_1}\n\\end{algorithm}\n\n\\noindent\\textbf{Training Strategy.}\nThe training strategy of U-NAS contains two stages: the searching process and the re-training process. In the searching process, U-NAS is learned in a differentiable way~\\cite{liu2019darts}, which optimizes a super network consisting of HMs with mixed operations. As Fig.~\\ref{fig_framwork} shows, for each operation $O_i$ among the $N$ candidate operations in $O$, its weight is determined by the parameter $\\alpha_i \\in \\alpha$, whose softmax transformation $\\tilde{\\alpha}_i=\\exp(\\alpha_i) \/ \\sum^{N}_{j=1}\\exp(\\alpha_j)$ represents how much $O_i$ contributes to the HM. Then, the architecture parameters $\\alpha$ and the network weights $\\omega$ are learned alternately through the mixed operations. We repeat the searching process several times with different initializations to converge to different local optima, resulting in different searched architectures.\n\nOnce the searching process finishes, each HM keeps only the most likely operation based on the parameter $\\alpha$; we then replace the DCs and UCs with the best learned cell structures and re-train the network on $\\mathcal{D}_{\\mathrm{train}}$. Algorithm~\\ref{algo_1} describes the details of the training strategy of U-NAS.
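The continuous relaxation and subsequent discretization described above can be sketched numerically as follows. This is a minimal pure-Python illustration, not the authors' implementation: the toy scalar operations merely stand in for the convolutional candidates, and the parameter values are made up.

```python
import math

def softmax(alpha):
    """Softmax transform of the architecture parameters alpha."""
    m = max(alpha)                      # subtract max for numeric stability
    exps = [math.exp(a - m) for a in alpha]
    s = sum(exps)
    return [e / s for e in exps]

# Toy stand-ins for the candidate operations of one "normal" HM.
ops = [
    lambda x: x,          # identity
    lambda x: 2.0 * x,    # stand-in for conv
    lambda x: x + 1.0,    # stand-in for dil_conv
]

def mixed_op(x, alpha):
    """Search phase: the HM output is the softmax-weighted sum
    of all candidate operation outputs."""
    return sum(w * op(x) for w, op in zip(softmax(alpha), ops))

def discretize(alpha):
    """After searching: keep only the operation with the largest weight."""
    weights = softmax(alpha)
    return ops[weights.index(max(weights))]

alpha = [0.1, 2.0, -1.0]          # learned architecture parameters (illustrative)
y_search = mixed_op(3.0, alpha)   # used while alternating updates of (omega, alpha)
best_op = discretize(alpha)       # the single operation kept for re-training
```

Repeating this search from different random initializations of `alpha` is what yields the distinct architectures mentioned above.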
In both the searching and re-training processes, the $\\mathcal{L}_1$ norm is used to measure the difference between the dose prediction $\\hat{y}$ and the target $y$:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{dose}}=\\| y - \\hat{y} \\|_1.\n \\label{eq_dose}\n\\end{equation}\n\n\\noindent\\textbf{Diversity Encouraging Loss.}\nDiversity among the models obtained by U-NAS can potentially be achieved via different initializations. In many settings, however, the independent searching processes may converge to similar local optima, since they all optimize the same searching objective. To optimize for diversity directly in the architecture searching process, we propose a diversity encouraging loss that pushes the predictions of the model being learned away from those of the best model\\footnote{The model with the best performance among the multiple optimized architectures.}.\n\nSpecifically, in the searching process, the goal is to achieve high accuracy for the learned model while encouraging its predictions to differ from those of the best model.
Therefore, in the training stage of U-NAS, the final loss $\\mathcal{L}_{\\mathrm{nas}}$ is formulated by adding the diversity encouraging loss $\\mathcal{L}_{\\mathrm{div}}$ to the dose error loss $\\mathcal{L}_{\\mathrm{dose}}$ in Eq.~(\\ref{eq_dose}):\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{L}_{\\mathrm{nas}} &= \\mathcal{L}_{\\mathrm{dose}} + \\mathcal{L}_{\\mathrm{div}} \\\\\n &= \\|y - \\hat{y}\\|_1 + \\eta \\max \\left(0, m - \\frac{\\left\\|\\hat{y}-\\hat{y}^* \\right\\|_1}{\\left(\\|\\hat{y}\\|_1 + \\|\\hat{y}^*\\|_1\\right)\/2} \\right),\n \\end{aligned}\n\\end{equation}\nwhere $\\|\\cdot\\|_1$ is the voxel-wise $l_1$ norm; $y$, $\\hat{y}$, and $\\hat{y}^*$ denote the ground truth and the predictions of the model being trained and of the best model, respectively; $m$ is the margin (empirically set to 0.2) used to reduce the correlation between $\\hat{y}$ and $\\hat{y}^*$ while avoiding outliers; and $\\eta$ is a weighting hyper-parameter balancing the two loss terms (empirically set to 1). \n\n\\subsection{KDA-Net}\nThe proposed KDA-Net performs knowledge distillation from the U-NAS ensembles to a single target network with adversarial learning. As shown in Fig.~\\ref{fig_framwork}, we use a single U-Net as the student and the average of multiple U-NAS predictions as the teacher ensemble.
For all the $K=8$ blocks (four D blocks and four C blocks) of the network, we apply a similarity loss, based on the squared Euclidean distance, between the intermediate outputs of the teacher ensemble and the student:\\footnote{Instead of the ${L}_1$ loss in Eq.~(\\ref{eq_dose}), we adopt the ${L}_2$ loss for the deep supervision for faster optimization.}\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{sim}}=\\sum_{k=1}^8\\left\\|\\frac{1}{M}\\sum_{i=1}^M \\left(I_k^{T_i} - I^S_k\\right)\\right\\|_2^2,\n \\label{eq_sim}\n\\end{equation}\nwhere $I^{T_i}_k$ and $I^S_k$ denote the intermediate outputs of the $k$-th block of the $i$-th teacher network $T_i$ and of the student network $S$, respectively, and $M$ denotes the number of teacher networks.\n\nWe then further adopt adversarial learning in our knowledge distillation process to force the student to generate features more similar to the teachers'. Specifically, for the $k$-th block, we learn a discriminator $D_k$ to distinguish the output of the teachers from that of the student, which in turn encourages the student to produce output more similar to the teachers'. The adversarial loss is defined as:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{adv}}=\\sum_{k=1}^{8} \\mathbb{E}_{I_k\\sim P_T}\\log D_k\\left(I_k\\right)+ \\sum_{k=1}^{8} \\mathbb{E}_{I_k\\sim P_S}\\log \\left(1-D_k\\left(I_k\\right)\\right),\n \\label{eq_adv}\n\\end{equation}\nwhere $I_k\\sim P_T$ and $I_k\\sim P_S$ denote outputs from the $k$-th block of the teacher ensemble and of the student network, respectively.
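As a concrete illustration of the similarity loss, the following sketch computes it on toy feature vectors. This is a minimal pure-Python example under our own simplifying assumptions: two blocks instead of the paper's eight, three teachers, and made-up feature values.

```python
# Toy intermediate outputs: teacher_feats[k][i] is the (flattened) feature
# vector of block k from teacher i; student_feats[k] is the student's.
teacher_feats = [
    [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]],   # block 0: three teachers
    [[0.5, 0.5], [0.7, 0.3], [0.6, 0.4]],   # block 1: three teachers
]
student_feats = [
    [1.0, 2.0],                              # block 0: student
    [0.4, 0.6],                              # block 1: student
]

def similarity_loss(teacher_feats, student_feats):
    """Squared Euclidean distance between the teachers' mean feature
    vector and the student's feature vector, summed over blocks."""
    total = 0.0
    for t_block, s_block in zip(teacher_feats, student_feats):
        m = len(t_block)
        mean_t = [sum(t[j] for t in t_block) / m for j in range(len(s_block))]
        total += sum((mt - s) ** 2 for mt, s in zip(mean_t, s_block))
    return total

loss = similarity_loss(teacher_feats, student_feats)
```

In block 0 the student already matches the teachers' mean exactly, so only block 1 contributes to the loss; the adversarial discriminators would act on these same per-block outputs.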
Based on the above definition, we incorporate the dose loss in Eq.~(\\ref{eq_dose}), the similarity loss in Eq.~(\\ref{eq_sim}) and the adversarial loss in Eq.~(\\ref{eq_adv}) into our final loss function of KDA-Net:\n\\begin{equation}\n \\mathcal{L}_{\\mathrm{KDA}}=\\mathcal{L}_{\\mathrm{dose}} + \\lambda_1 \\mathcal{L}_{\\mathrm{sim}} + \\lambda_2 \\mathcal{L}_{\\mathrm{adv}},\n \\label{eq_final}\n\\end{equation}\nwhere $\\lambda_1$ and $\\lambda_2$ are weighting hyper-parameters which are empirically set to 0.05 and 0.01, respectively, in our experiments.\n\n\\section{Experiments}\n\\subsection{Datasets}\nIn this study, we evaluate the proposed method using two public datasets: the OpenKBP dataset and the AIMIS dataset.\n\n\\textbf{OpenKBP dataset.} The Open Knowledge-Based Planning (OpenKBP) dataset of 2020 AAPM Grand Challenge~\\cite{babier2020openkbp} is a public dataset consisting of 340 CT scans for the dose prediction task. The OpenKBP dataset includes subjects treated for head-and-neck cancer with radiation therapy. The data is partitioned into training ($n=200$), validation ($n=40$), and test ($n=100$) sets. The ROIs used in this study include the body, seven OARs (\\textit{i}.\\textit{e}., brainstem, spinal cord, right parotid, left parotid, larynx, esophagus and mandible) and three planning target volumes (PTVs) with gross disease (PTV70), intermediate-risk target volumes (PTV63), and elective target volumes (PTV56).\n\n\\textbf{AIMIS dataset.} The AIMIS dataset consists of 500 CT scans from the 2021 Tencent AI Medical Innovation System (AIMIS) Challenge (task 4).\\footnote{https:\/\/contest.taop.qq.com\/channelDetail?id=108} Each scan is from a patient treated for lung cancer with stereotactic body radiation therapy (SBRT). The dataset is officially partitioned into 300 scans for training, 100 scans for validation and 100 scans for testing. 
The ROIs used in this study include the body, five OARs (\\textit{i}.\\textit{e}., left lung, right lung, total lung, spinal cord, and heart), as well as the inner target volume (ITV) and the planning target volume (PTV). \n\n\\subsection{Implementation and Evaluation Metrics}\nThe pre-processing for the two datasets follows~\\cite{liu2021cascade}. For normalization, the CT values are truncated to [-1024 HU, 1500 HU].\nThe following data augmentations are performed during training: horizontal and vertical flips, translation, and rotation around the $z$-axis.\nFor each case of the OpenKBP dataset, the OAR masks (7 channels) and the merged target (1 channel) are concatenated with the CT scan (1 channel) as a $9\\times 128\\times 128\\times 128$ tensor and fed into the dose prediction models.\nFor the AIMIS dataset, the input consists of the OAR masks (5 channels), the CT scan (1 channel), the target volumes (2 channels), and the body (1 channel).\n\nFor U-NAS, in the searching process, we first train the super network for $8\\times 10^4$ iterations using an Adam optimizer with an initial learning rate of $3\\times 10^{-4}$ and a weight decay of $1\\times 10^{-4}$. \nAfter that, the architecture parameters $\\alpha$ are determined from the super network on the validation set. \nWe repeat the searching process multiple times with different random seeds to obtain various architectures. Then we re-train the searched models on the training set for $8\\times 10^4$ iterations with a learning rate of $3\\times 10^{-4}$.\nFor KDA-Net, we train the student network for $6\\times 10^4$ iterations using an Adam optimizer with an initial learning rate of $1\\times 10^{-5}$ and a weight decay of $1\\times 10^{-4}$. \n\nWe use the official evaluation code to validate the proposed method.
Specifically, for the OpenKBP dataset, the evaluation metrics include: (1) dose error, which calculates the mean absolute error (MAE) between the dose prediction and its corresponding ground-truth plan; and (2) DVH error, which calculates the absolute error of the DVH curves between the prediction and the ground truth. According to~\\cite{babier2020openkbp}, \\{D99, D50, D1\\} for PTVs and \\{D0.01cc, Dmean\\} for OARs are selected to measure the similarity of DVH curves in this task.\nFor the AIMIS dataset, the evaluation is performed by measuring the dose error with the mean squared error (MSE).\nIn addition, we use a paired t-test to assess the statistical significance of the results.\n\n\\begin{table*}[]\n\\caption{Performance comparison of NAS models with manually designed networks on $\\mathcal{D}_{\\mathrm{val}}$. BS, SC, RP, LA, ES, LE, MD, P$_{70}$, P$_{63}$, and P$_{56}$ denote Brainstem, Spinal Cord, Right Parotid, Left Parotid, Larynx, Esophagus, Mandible, PTV70, PTV63, and PTV56, respectively. 
$\\dagger$ represents significantly different results ($p < 0.05$, paired t-tests)}\n\\resizebox{1\\linewidth}{!}{\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\toprule\n\\multicolumn{2}{c|}{\\multirow{3}{*}{}} & \\multicolumn{5}{c|}{Single Model} & \\multicolumn{2}{c}{Ensemble} \\\\ \\cline{3-9} \n\\multicolumn{2}{c|}{} & \\multicolumn{1}{c|}{Conv} & \\multicolumn{1}{c|}{Se\\_conv} & \\multicolumn{1}{c|}{Dil\\_conv} & \\multicolumn{1}{c|}{Dep\\_conv} & NAS & \\multicolumn{1}{c|}{Manual} & NAS \\\\ \n\\hline\n\\hline\n\\multirow{11}{*}{\\makecell{Dose\\\\ Error}} & Body & $2.634\\pm 0.760$ & $2.736\\pm 0.861\\dagger$ & $2.741\\pm 0.805\\dagger$ & $2.728\\pm 0.789\\dagger$ & \\bm{$2.581\\pm 0.784$} & $2.503\\pm 0.762\\dagger$ & \\bm{$2.400\\pm 0.743$} \\\\\n & BS & $1.606\\pm 1.076$ & $1.646\\pm 1.152$ & $1.743\\pm 1.305\\dagger$ & $1.677\\pm 1.278$ & \\bm{$1.486\\pm 0.971$} & $1.541\\pm 1.136$ & \\bm{$1.442\\pm 1.035$} \\\\\n & SC & $2.095\\pm 1.014$ & $2.131\\pm 1.181$ & $2.184\\pm 1.115$ & $2.068\\pm 0.986\\dagger$ & \\bm{$2.018\\pm 0.936$} & $1.939\\pm 0.909$ & \\bm{$1.891\\pm 0.878$} \\\\\n & RP & \\bm{$3.040\\pm 0.870$} & $3.277\\pm 1.020$ & $3.152\\pm 0.921$ & $3.472\\pm 1.117$ & $3.074\\pm 0.909$ & $2.942\\pm 0.868$ & \\bm{$2.820\\pm 0.889$} \\\\\n & LA & $3.154\\pm 0.839$ & $3.208\\pm 0.979$ & $3.075\\pm 0.803$ & $3.383\\pm 1.212$ & \\bm{$3.029\\pm 0.874$} & $2.917\\pm 0.740$ & \\bm{$2.710\\pm 0.753$} \\\\\n & ES & \\bm{$2.428\\pm 1.004$} & $2.773\\pm 0.830$ & $2.749\\pm 1.036$ & $2.531\\pm 1.077$ & $2.467\\pm 1.009$ & $2.410\\pm 0.892$ & \\bm{$2.182\\pm 0.705$} \\\\\n & LE & $3.034\\pm 1.385$ & $3.204\\pm 1.419$ & $3.451\\pm 1.561$ & $3.148\\pm 1.481$ & \\bm{$2.883\\pm 1.366$} & $2.973\\pm 1.334$ & \\bm{$2.593\\pm 0.727$} \\\\\n & MD & $3.988\\pm 1.341$ & $3.992\\pm 1.413$ & $4.086\\pm 1.146$ & $4.051\\pm 1.218$ & \\bm{$3.800\\pm 1.189$} & $3.745\\pm 1.152$ & \\bm{$3.520\\pm 1.133$} \\\\\n & P$_{70}$ & $2.062\\pm 1.004\\dagger$ & $2.090\\pm 1.167\\dagger$ & 
$2.198\\pm 1.463\\dagger$ & $2.045\\pm 0.895\\dagger$ & \\bm{$1.620\\pm 0.789$} & $1.896\\pm 1.108\\dagger$ & \\bm{$1.570\\pm 0.784$} \\\\\n & P$_{63}$ & $2.345\\pm 1.137$ & $2.534\\pm 1.250\\dagger$ & $2.686\\pm 1.566\\dagger$ & $2.521\\pm 1.090\\dagger$ & \\bm{$2.135\\pm 0.950$} & $2.318\\pm 1.213\\dagger$ & \\bm{$2.057\\pm 0.926$} \\\\\n & P$_{56}$ & $2.227\\pm 0.916$ & $2.296\\pm 0.885$ & $2.326\\pm 1.029\\dagger$ & $2.371\\pm 0.758\\dagger$ & \\bm{$2.116\\pm 0.768$} & $2.122\\pm 0.841\\dagger$ & \\bm{$1.926\\pm 0.703$} \\\\ \\hline\n\\bottomrule\n\\end{tabular}}\n\\label{tab_NAS}\n\\end{table*}\n\n\\begin{figure}[thbp]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.49\\textwidth]{figs\/multi_dvh.pdf}\n\t\t\\label{fig:multi_dvh}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/visual_kda.pdf}\n\t\t\\label{fig:visual_kda}\n\t}\n \\caption{(a) DVHs of the dose distribution of the ground-truth plan (solid curves) and of the predictions by a single U-Net with and without the proposed KDA method, illustrated by dashed and dotted curves, respectively. The PTV70, PTV63, and PTV56 are shown as red, orange, and yellow curves, respectively. (b) An example of the dose distributions of the clinical plan and the predicted plans of a single U-Net with and without the KDA method.}\n \\label{fig:kda}\n\\end{figure}\n\n\n\n\\subsection{Experimental Results}\n\\subsubsection{Performance of U-NAS}\nWe first compare the performance of our U-NAS model with four manually designed architectures on the OpenKBP validation set. \nEach manually designed architecture employs a single convolution operation choice in the normal HMs, namely \\textit{conv}, \\textit{se\\_conv}, \\textit{dil\\_conv}, or \\textit{dep\\_conv}. \nWe apply the \\textit{max\\_pool}, \\textit{interpolate}, and \\textit{no connection} operations for all the manually designed architectures. \nTable~\\ref{tab_NAS} shows the performance comparison of the different models. 
Our U-NAS model outperforms the manually designed networks on most structures among the body, the seven OARs, and the three PTVs. \nIn summary, the single U-NAS model achieves MAEs of 2.580 and 1.736 in dose score and DVH score, respectively, outperforming the best manually designed network by 0.111 and 0.128 in dose error and DVH error, respectively.\nInterestingly, in most cases, the ensemble of four models outperforms the corresponding individual models (for both the manually designed and the NAS-learned models), and the ensemble of NAS models outperforms the ensemble of manually designed models. Please refer to Sec.~\\ref{sec.dis} for more discussion of ensemble learning.\n\n\\subsubsection{Performance of KDA-Net}\nWe compare the performance of a single U-Net with and without the proposed KDA module, including the dose distributions and DVHs, on the OpenKBP validation set.\nFig.~\\ref{fig:multi_dvh} shows an example of DVH curves from a patient of the validation set. The solid lines represent the DVH curves of the ground truth, while the dashed and dotted lines represent the DVHs extracted from the doses predicted by the U-Net with and without KDA (\\textit{i}.\\textit{e}., trained from scratch), respectively. For this example patient, the U-Net with KDA exhibits better agreement in predicting the dose to the PTVs. The predictions for the OARs are more variable between the two methods. Fig.~\\ref{fig:visual_kda} shows the corresponding dose color contours for the same patient as in Fig.~\\ref{fig:multi_dvh}, which suggests that the single U-Net model with KDA is able to achieve better dosimetric congruence with the original plan on the PTV. 
\n\n\\begin{table}[!t]\n\\caption{Comparison of performance with the state-of-the-art methods on the OpenKBP test set.}\n\\centering\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{l|l|l|l|l|l}\n\\toprule\n\\multicolumn{2}{c|}{\\multirow{2}{*}{Methods}} & \\multicolumn{2}{c|}{Dose Score} & \\multicolumn{2}{c}{DVH Score} \\\\\n\\cline{3-6}\n\\multicolumn{2}{c|}{} & MAE & MSE & MAE & MSE \\\\\n\\hline\n\\multirow{5}{*}{Leaderboard} &Top \\#1 & 2.429 & 15.488 & 1.478 & 5.913 \\\\\n &Top \\#2 & 2.564 & 16.550 & 1.704 & 6.812 \\\\\n &Top \\#3 & 2.615 & 17.774 & 1.582 & 6.961 \\\\\n &Top \\#4 & 2.650 & 18.091 & 1.539 & 6.031 \\\\\n &Top \\#5 & 2.679 & 18.023 & 1.573 & 6.525 \\\\\n\\hline\n\\hline\n\\multirow{6}{*}{Single Model} &FCN~\\cite{long2015fully} & 2.681 & 18.144 & 2.452 & 12.310 \\\\\n &V-Net~\\cite{milletari2016v} & 3.129 & 23.336 & 2.325 & 11.417 \\\\\n &U-Net~\\cite{ronneberger2015u} & 2.619 & 17.221 & 2.313 & 11.343 \\\\\n &ResUNet~\\cite{yu2017volumetric} & 2.601 & 16.932 & 2.209 & 10.591 \\\\\n &U-NAS (ours) & 2.597 & 16.962 & 1.803 & 7.628 \\\\\n &LENAS (ours) & \\textbf{2.565} & \\textbf{16.614} & \\textbf{1.737} & \\textbf{7.272} \\\\\n\\hline\n\\hline\n\\multirow{3}{*}{Cascade} &U-Net~\\cite{ronneberger2015u} & 2.461 & 15.489 & 1.588 & 6.511 \\\\\n &ResUNet~\\cite{yu2017volumetric} & 2.448 & 16.023 & 1.499 & 5.855 \\\\\n &U-NAS (ours) & \\textbf{2.434} & \\textbf{15.376} & \\textbf{1.496} & \\textbf{5.564} \\\\\n\\hline\n\\hline\n\\multirow{2}{*}{Ensemble} &Off-the-shelf & 2.521 & 16.060 & 1.771 & 6.851 \\\\\n &U-NAS (ours) & \\textbf{2.357} & \\textbf{14.326} & \\textbf{1.465} & \\textbf{5.560} \\\\\n\\bottomrule\n\\end{tabular}}\n\\label{tab_sota}\n\\end{table}\n\n\\begin{table}[!t]\n \\centering\n \\caption{Comparison with the state-of-the-art methods on the test set of AIMIS.\n }\n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|l|l|c|l|l}\n \\toprule\n \\multicolumn{3}{c|}{Primary Phase} & \\multicolumn{3}{c}{Final Phase} \\\\\n \\hline\n Rank 
& Team & Dose Score & Rank & Team & Dose Score\\\\\n \\hline\n \\textbf{\\#1} & \\textbf{qqll (ours)} & \\textbf{15611.6398} & \\textbf{\\#1} & \\textbf{qqll (ours)} & \\textbf{15571.6051} \\\\\n \\#2 & deepmedimg & 17223.3940 & \\#2 & gosnail & 15869.4256 \\\\\n \\#3 & gosnail & 18425.5708 & \\#3 & teamC & 16323.9720 \\\\\n \\#4 & adosepredictor & 18638.4767 & \\#4 & 27149 & 16486.1417 \\\\\n \\#5 & star & 19340.0643 & \\#5 & capsicummeat & 18137.9836 \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:aimis_test}\n\\end{table}\n\n\\begin{table}[!t]\n \\centering\n \\caption{Comparison of U-NAS with the off-the-shelf models on the validation set of AIMIS.}\n \n \\resizebox{\\linewidth}{!}{\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\n \\toprule\n \\multirow{2}{*}{Methods} & \\multirow{2}{*}{All} & \\multirow{2}{*}{Body} & \\multirow{2}{*}{Heart} & \\multirow{2}{*}{L-Lung} & \\multirow{2}{*}{R-Lung} & Total & Spinal & \\multirow{2}{*}{ITV} & \\multirow{2}{*}{PTV} \\\\\n & & & & & & -Lung & -Cord & & \\\\\n \\hline\n U-Net & 9801 & 56608 & 45643 & \\textbf{71894} & 75099 & \\textbf{64108} & \\textbf{68377} & 525499 & \\textbf{842770} \\\\\n ResUNet & 9782 & 56668 & \\textbf{41858} & 77288 & 77399 & 66066 & 71790 & 593486 & 904382 \\\\\n U-NAS & \\textbf{9484} & \\textbf{54839} & 43746 & 82291 & \\textbf{71597} & 66922 & 71175 & \\textbf{510381} & 858750 \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:aimis_val}\n\\end{table}\n\n\n\\subsubsection{Comparison with the State-of-the-art Methods}\nIn Table~\\ref{tab_sota}, we compare the proposed LENAS model with several state-of-the-art methods on the OpenKBP test set. The competing methods include 3D FCN~\\cite{long2015fully}, V-Net~\\cite{milletari2016v}, 3D U-Net~\\cite{ronneberger2015u}, 3D ResUNet~\\cite{yu2017volumetric}, and five top-ranking methods on the AAPM-2020 challenge leaderboard~\\cite{babier2020openkbp}. 
We thoroughly compare our LENAS model with existing methods using single-model, cascade, and ensemble strategies. The cascade strategy sequentially combines two networks to produce the results in a coarse-to-fine fashion.\n\\textbf{For single models}, our U-NAS achieves MAEs of 2.597 and 1.803 in dose score and DVH score, respectively, outperforming the best off-the-shelf method (\\textit{i}.\\textit{e}., ResUNet). Integrating the KDA mechanism (\\textit{i}.\\textit{e}., LENAS) further improves the performance to 2.565 and 1.737, respectively. \\textbf{For cascade models}, our cascade U-NAS model achieves MAEs of 2.434 and 1.496 in dose score and DVH score, respectively, outperforming the cascade ResUNet, which achieves 2.448 and 1.499. \n\\textbf{For five-model ensembles},\nour U-NAS ensemble achieves an MAE of 2.357 and an MSE of 14.326 in dose score, and an MAE of 1.465 and an MSE of 5.560 in DVH score, outperforming the ensembles of off-the-shelf models and the top-ranking solutions on the AAPM-2020 challenge leaderboard. \n\nWe further explore the generalization capability of our method on the AIMIS dataset. Specifically, we apply the best architecture learned from the OpenKBP dataset to the AIMIS challenge. The evaluation results are calculated by the organizers\\footnote{https:\/\/contest.taop.qq.com} of the challenge, as shown in Table~\\ref{tab:aimis_test}.\nOur U-NAS method achieves first place in the AIMIS challenge in both the primary and final phases\\footnote{In the primary phase, the test set consists of 50 scans; in the final phase, the test set consists of another 150 scans.}, outperforming the runner-up by 9.36\\% and 1.88\\%, respectively. In Table~\\ref{tab:aimis_val}, we further compare our U-NAS model with the two best-performing off-the-shelf models, \\textit{i}.\\textit{e}., U-Net and ResUNet, with respect to different ROIs. 
A consistent trend can be observed: our U-NAS outperforms the off-the-shelf models and other top ranking solutions on both the validation set and the test set of AIMIS.\n\n\\begin{figure}[thb]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/20_nas.pdf}\n\t\t\\label{fig:mba_nas_20}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/20_nas_en.pdf}\n\t\t\\label{fig:mba_nas_20_en}\n\t}\n \\caption{The dose score (MAE) of (a) 20 NAS models; (b) the ensembles.}\n \\label{fig:many_better_all}\n\\end{figure}\n\n\\section{Discussions}\n\\label{sec.dis}\nIn this section, we investigate the correlations between diversity and ensemble performance, and empirically provide insightful guidance for the ensemble learning with NAS in the task of dose prediction. \n\n\\subsection{Ensemble Many is Better than All}\nMost works, especially for the competitions~\\cite{tang2020gp,lin2019seg4reg,deng2014ensemble,fernandez2014we,ghimire2014extreme}, indiscriminately integrate all the obtained models to produce the final result. \nTo explore the correlation between the number of ensembled models and the corresponding performance, we follow~\\cite{zhou2002ensembling} to systematically conduct the searching processes multiple times and select the top 20 models for the experiment. Then, we sequentially average the results one-by-one w.r.t. the individual performance. The results are shown in Fig.~\\ref{fig:many_better_all}. \nFig.~\\ref{fig:mba_nas_20} shows the dose score of the 20 selected models, which range from 2.5806 (NAS\\_1) to 2.6979 (NAS\\_20) in MAE.\nFig.~\\ref{fig:mba_nas_20_en} shows the dose score of the ensembles of the top $k\\in[1,20]$ models. It can be seen that the ensembles achieve the best performance with the top 14 models (2.3621 in MAE), instead of all the 20 models (2.3693 in MAE). 
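The greedy "average the top-$k$ models and sweep $k$" procedure described above can be sketched as follows (a minimal illustration with synthetic predictions; the model outputs and noise levels here are stand-ins, not the paper's data):

```python
import numpy as np

def best_top_k_ensemble(preds, target):
    """Average predictions of the top-k models (ranked by individual MAE)
    for every k, and report which k gives the best ensemble MAE."""
    mae = lambda p: np.abs(p - target).mean()
    # Rank models by their individual performance (ascending MAE).
    order = sorted(range(len(preds)), key=lambda i: mae(preds[i]))
    scores = []
    for k in range(1, len(preds) + 1):
        ensemble = np.mean([preds[i] for i in order[:k]], axis=0)
        scores.append(mae(ensemble))
    best_k = int(np.argmin(scores)) + 1
    return best_k, scores

# Synthetic example: 20 "models" = noisy copies of a common target.
rng = np.random.default_rng(0)
target = rng.normal(size=1000)
preds = [target + rng.normal(scale=0.1 + 0.02 * i, size=1000) for i in range(20)]
k, scores = best_top_k_ensemble(preds, target)
print(k, min(scores))
```

With independent noise of increasing magnitude, the best $k$ is typically large but can be smaller than the full pool, mirroring the "many could be better than all" observation.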
\nIntuitively, the models in the ensembles with unacceptable performance could hurt the final ensemble results.\nOur next step is to explore the selection criterion for the members of the ensemble.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figs\/performance_diversity.pdf}\n \\caption{The dose score (MAE) of the ensembles with different numbers of models. The yellow bars indicate the models selected based on performance; the blue and green bars indicate the models selected based on diversity from top 20 models and 10 models, respectively.}\n \\label{fig:per_div}\n\\end{figure}\n\n\\subsection{Performance vs. Diversity}\nExtensive literature~\\cite{opitz1996actively} has shown that the ideal ensemble is one composed of accurate networks that make errors on different parts of the input space. In other words, the performance of an ensemble depends not only on the accuracy of the base learners, but also on their diversity. However, existing works typically implicitly encourage the diversity of the ensembles with different training strategies (\\textit{e}.\\textit{g}., random initializations, different hyper-parameters and loss functions), and then casually select the models based on the accuracy. So \\textit{does diversity matter when selecting the members in the ensemble?} To answer this question, we conduct the following experiments.\n\nThe diversity of two models is measured by averaging the root mean absolute error (RMAE) between the predictions over all $N$ elements of each sample as follows:\n $d(y_a, y_b) = \\frac{1}{N}\\sum_{i=1}^{N}\\frac{|y_a^i - y_b^i|}{y_a^i + y_b^i}$,\nwhere $y_a$ and $y_b$ are the outputs of two models. \nThen, we select different pairs of models based on individual models' performance and the diversities between them. The yellow and blue bars of Fig.~\\ref{fig:per_div} show that for all 20 models, the ensemble performance based on the individual models' performance is consistently better than that based on the diversity. 
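As a concrete illustration, the pairwise diversity above can be computed in a few lines (a minimal sketch; the normalized form with the mean over elements is assumed from the description, and the small `eps` guard against division by zero is an added assumption):

```python
import numpy as np

def rmae_diversity(y_a, y_b, eps=1e-8):
    """Pairwise diversity of two prediction maps: the mean of
    |y_a - y_b| / (y_a + y_b) over all elements (RMAE)."""
    y_a, y_b = np.asarray(y_a, float), np.asarray(y_b, float)
    return np.mean(np.abs(y_a - y_b) / (y_a + y_b + eps))

# Two toy "dose maps": identical maps give zero diversity,
# perturbed maps give a small positive value.
a = np.full((4, 4), 10.0)
b = a + 1.0
print(rmae_diversity(a, a))  # 0.0
print(rmae_diversity(a, b))  # close to 1/21
```

The same function applied to every pair in a model pool gives the per-strategy diversity numbers reported later in this section.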
\nThis reveals that \\textit{the individual models' performance is an essential factor in ensembling.} \nIn addition, the results of the yellow and green bars show that in the top 10 models, the observation contradicts the former one: the ensemble performance based on the individual models' performance is lower than that based on the diversity. \nThis suggests that diversity is indeed an important factor as well. In particular, \\textit{when the performances of the individual models are comparable, diversity is more important than accuracy.}\n\n\\begin{figure}[!t]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.22\\textwidth]{figs\/strategies_div.pdf}\n\t\t\\label{fig:strategies_div}\n\t}\n \\subfloat[]{\n\t\t\\includegraphics[width=0.22\\textwidth]{figs\/strategies_dose.pdf}\n\t\t\\label{fig:strategies_dose}\n\t}\n \\caption{(a) Diversity and (b) dose score (MAE) of individual models in different ensemble strategies: NAS, bagging, random initializations and iterations, and different off-the-shelf architectures (denoted as public).}\n \\label{fig:diff_strate}\n\\end{figure}\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{figs\/strategies_dose_en.pdf}\n \\caption{Overall Dose score (MAE) of the ensembles with different strategies.}\n \\label{fig:diff_strate_en}\n\\end{figure}\n\\subsection{Comparison of Different Ensemble Strategies}\nA unique feature of the proposed LENAS model is that it exploits a diverse collection of network structures that drives a natural bias towards diversity of predictions produced by individual networks. \nTo assess the impact of LENAS on the ensemble, we compare the diversity and performance of our method with four ensemble strategies on the OpenKBP validation set in Fig.~\\ref{fig:diff_strate}, including bagging, random initializations, different training iterations, and different off-the-shelf architectures. \nFor each strategy, we obtain four models. 
\nSpecifically, for NAS models, we select the top four models among the aforementioned 20 NAS models (\\textit{i}.\\textit{e}., NAS\\_1 to NAS\\_4). For bagging, we split the training set into four non-overlapping subsets and use different portions of the data (three subsets) to train four models. For random initialization, we repeat the training procedures four times with different initialization seeds. For different training iterations, we train a single network with $8 \\times 10^4$ iterations, and pick the last four checkpoints with a gap of 2000 training iterations in between. For the off-the-shelf architectures, we select the four most popular architectures in 3D medical image analysis, including FCN, VNet, U-Net, and ResUNet.\nThe diversities of the four ensemble strategies are illustrated in Fig.~\\ref{fig:strategies_div}. The diversity of the NAS models is 0.0326 in RMAE with a standard deviation of 0.0009, greater than that of the other three strategies (\\textit{i}.\\textit{e}., cross-validation, random initialization and iterations), and comparable to the off-the-shelf architectures, which achieve $0.0338\\pm0.0013$ diversity. \nThe results in Fig.~\\ref{fig:strategies_dose} show that the mean and standard deviation of the dose score of the NAS models is $2.587\\pm0.0082$, outperforming other strategies by a large margin.\nNote that the diversity of 4-fold cross-validation is close to that of the NAS models; however, the individual models' performance suffers from the limited representations of the subsets of training data. Similar trends are observed for the off-the-shelf models: the individual models' performance restricts the performance of the final ensemble.\nThe performance with respect to the dose score in MAE of the ensembles is depicted in Fig.~\\ref{fig:diff_strate_en}. \nObviously, the ensembles of NAS achieve the best performance with 2.392 in MAE, superior to the other ensemble strategies. 
The result reveals that \nproducing and ensembling different network architectures is superior to simply creating an ensemble containing duplicates of a single network architecture with different model parameters.\n\n\\subsection{Effectiveness of Diversity Encouraging Loss}\nWe investigate the effectiveness of the diversity encouraging loss in Fig.~\\ref{fig:div_loss}. Specifically, we searched 10 architectures with and without the diversity encouraging loss,\\footnote{We follow~\\cite{liu2019darts} to search a single architecture of DC and UC in each model to facilitate the optimization of the searching process.} and the learned down cells and up cells are shown in Fig.~\\ref{fig:div_loss}. We further calculate the variation of the operations in each module. \nSpecifically, we first rank the operations in each HM based on the frequency in all the architectures (\\textit{e}.\\textit{g}., the operation with the highest frequency is identified with ID 0), and then take the product of the standard deviations of all HMs.\nThe quantitative results of the variation w\/ and w\/o diversity encouraging loss are 328.3 and 31.1, respectively, indicating that with the diversity encouraging loss, the U-NAS method can generate architectures with a greater variation, and consequently encourages the diversity of the predictions.\n\n\\begin{figure}[!t]\n \\centering\n \\subfloat[]{\n\t\t\\includegraphics[width=0.49\\textwidth]{figs\/wo_loss.pdf}\n\t\t\\label{fig:wo_loss}\n\t}\\\\\n \\subfloat[]{\n\t\t\\includegraphics[width=0.48\\textwidth]{figs\/w_loss.pdf}\n\t\t\\label{fig:w_loss}\n\t}\n \\caption{The down cells and up cells of 10 learned architectures (a) w\/o and (b) w\/ diversity encouraging loss, where \\textit{id}, \\textit{n}, \\textit{co}, \\textit{se}, \\textit{di}, \\textit{de}, \\textit{av}, \\textit{m}, \\textit{in}, and \\textit{pr} denote \\textit{identity}, \\textit{no connection}, \\textit{conv}, \\textit{se\\_conv}, \\textit{dil\\_conv}, \\textit{dep\\_conv}, \\textit{avg\\_pool}, 
\\textit{max\\_pool}, \\textit{interpolate}, and \\textit{pre} in Table~\\ref{tab_ops}, respectively. The yellow operations are not used by the 10 architectures.}\n \\label{fig:div_loss}\n\\end{figure}\n\n\\section{Conclusion and Future Work}\nIn this paper, we proposed a learning-based ensemble approach, named LENAS, for 3D radiotherapy dose prediction. \nTwo key components of LENAS include 1) a U-NAS framework which automatically searches the neural architectures\nfrom numerous architecture configurations to form a teacher network zoo, and 2) a KDA-Net which hierarchically transfers the distilled knowledge from the teacher networks to the student network to reduce the inference time while maintaining competitive accuracy. \nIn addition, we conducted comprehensive experiments to investigate the impact of diversity in ensemble learning, and derived several empirical guidelines for producing and ensembling multiple base learners in consideration of individual accuracy and diversity.\nExtensive experiments on two public datasets demonstrated the effectiveness of our method and its superior performance over the state-of-the-art methods. \n\nWe would like to point out several limitations of our work. First, the NAS ensembles require multiple rounds of searching-retraining, which is very time-consuming in the training phase. Second, a few failure models may be generated by NAS. This situation is also common in gradient-based NAS methods. 
Third, the diversity between learners in an ensemble is hard to formulate appropriately; it could be task-specific and vary for different outputs (\\textit{e}.\\textit{g}., classification, segmentation, and regression).\nFuture studies will be focused on: 1) a more specific model selection criterion for the best ensemble strategies; 2) computationally efficient training strategies for multiple ensemble learners; and 3) an optimization method from the dose prediction map to the final radiotherapy treatment plan.\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecurrence relations are powerful tools for\nevaluating multi--loop Feynman integrals \\cite{ch-tk}.\nThey relate Feynman integrals with various\ndegrees of their denominators. In many cases\nthey provide the possibility to express an\nintegral with given degrees of denominators as a linear\ncombination of a few master integrals with some coefficients which\nwe will call weight factors.\n\nAt two--loop level the recurrence relations are relatively simple and one\ncan easily find and realize the corresponding recursive algorithm for\nthe calculation of the weight factors.\n\nAt three--loop level the recurrence relations are more complicated, and finding \nan algorithm to calculate the weight factors is a serious problem.\n\nFor vacuum integrals with one nonzero mass and various numbers of \nmassless lines the corresponding\nalgorithm was described in \n\\cite{REC}-\\cite{av-p}.\nHere a repeated application of various recurrence relations is performed\nuntil the integrals of a certain type are eliminated. 
\n\nIn recent works \\cite{REC},\\cite{AFMT}-\\cite{chet} the recursive algorithm\nwas successfully applied in two--loop and three--loop\ncalculations in QED and QCD.\n\nNevertheless, such recursive algorithms\nlead to very time- and memory-consuming calculations\nbecause the size of the intermediate\nexpressions grows exponentially with respect to the degrees of\nthe denominators in the initial integral. \nIn fact, the calculations mentioned above were made at the\nlimits of computer capabilities.\n\nIn this work we suggest a new approach based on explicit formulas for \nthe solutions of the recurrence relations. As an application, the case\nof three--loop vacuum integrals with four equal-mass and two\nmassless lines is considered. The efficiency of this approach\nis demonstrated by the calculation of previously unknown coefficients in \nthe Taylor expansion of the QED photon vacuum polarization at small $q^2$.\n\n\\section{General case}\n\nLet us consider the three--loop vacuum integrals with six different masses:\n\n\\begin{eqnarray}\nB(\\underline{n},D)\\equiv\nB(n_1,n_2,n_3,n_4,n_5,n_6,D)=\n\\frac{m^{2\\Sigma_1^6 n_i-3D}}\n{\\big[\\imath\\pi^{D\/2}\\Gamma(3-D\/2)\\big]^3}\n\\int\\int\\int \\frac{d^Dp\\,d^Dk\\,d^Dl} \n{D_1^{n_1}D_2^{n_2}D_3^{n_3}D_4^{n_4}D_5^{n_5}D_6^{n_6}}\n\\label{integral}\n\\end{eqnarray}\n\n\\noindent\nwhere\n\n\\centerline{\n\\begin{tabular}{lll}\n$D_1=k^2-\\mu_1 m^2$,&\n$D_2=l^2-\\mu_2 m^2$,&\n$D_3=(p+k)^2-\\mu_3 m^2$\\\\\n$D_4=(p+l)^2-\\mu_4 m^2$,&\n$D_5=(p+k+l)^2-\\mu_5 m^2$,&\n$D_6=p^2-\\mu_6 m^2$\\\\\n\\end{tabular}\n}\n\nLet us derive recurrence relations that result from integration by parts,\nby letting $(\\partial\/\\partial p_i)\\cdot p_j$ act on\nthe integrand, with $p_{i,j}\\in\\{p,k,l\\}$. 
For example, \nacting by $(\\partial\/\\partial k)\\cdot (p+k)$ we get\n\n\\begin{eqnarray}\n(D-2n_3-n_1-n_5) B(\\underline{n},D)&=&\n\\{ n_1 {\\bf 1}^+({\\bf 3}^- -{\\bf 6}^- + \\mu_3 -\\mu_6 +\\mu_1)\n+2n_3 {\\bf 3}^+ \\mu_3 \\nonumber\\\\\n&&+n_5 {\\bf 5}^+({\\bf 3}^- -{\\bf 2}^- + \\mu_3 -\\mu_2 +\\mu_5)\\}B(\\underline{n},D)\n\\label{rr}\n\\end{eqnarray}\n\n\\noindent\nwhere \n${\\bf 1}^\\pm B(n_1,\\ldots)\\equiv B(n_1\\pm1,\\ldots)$, etc.\n\nOther relations can be obtained from (\\ref{rr}) by proper \npermutations of the $n_i, \\mu_i$ and ${\\bf I}^\\pm$ objects. \n\nThe common way of using these relations is the step-by-step re-expression\nof the integral (\\ref{integral}) with some values of $n_i$ through a set of \nintegrals \nwith shifted values of $n_i$, with the final goal of reducing this set to\na few integrals with $n_i$ equal to $0$ or $1$, the so-called \"master\" \nintegrals. The result can be represented as\n\n\\begin{eqnarray}\nB(\\underline{n},D)=\\sum_k f^k(\\underline{n},D)N_k(D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere the index $k$ enumerates the master integrals $N_k(D)$ and the corresponding \ncoefficient functions $f^k(n_i,D)$. \n\nThere are two problems with this approach. First, there is no general \nmethod for constructing such a recursive procedure; finding \nproper combinations of these relations and a proper sequence of their use \nis a matter of art even for the case of one mass \n\\cite{av-p}. Second, even in cases when such procedures were \nconstructed, they lead to very time- and memory-consuming calculations\nbecause of the large reproduction rate at every recursion step.\nFor example, the relation (\\ref{rr}) expresses the integral through 7 \nothers.\n\n\nInstead, let us construct the coefficient functions $f^k(\\underline{n},D)$ \ndirectly as solutions of the given recurrence relations. \n\nFor that, let us diagonalize the recurrence relations with respect to \nthe $n_i\\,{\\bf I}^+$ operators. 
\nWe found that the recurrence relations can be represented in the following \nsimple form\n\n\\begin{eqnarray}\n\\{P(x_1,\\dots,x_6)\\cdot n_i{\\bf I}^+ - \n\\frac{D-4}{2}\\partial_i(P(x_1,\\dots,x_6)) \\}_{x_i={\\bf\\small I}^- \n+\\mu_i} B(\\underline{n},D)=0,\\quad i=1,\\dots, 6.\n\\label{rr2}\n\\end{eqnarray}\n\n\\noindent\nwhere \n\\begin{eqnarray}\nP(x_1,\\dots,x_6)&=&\n2(x_1x_2(x_1+x_2)+x_3x_4(x_3+x_4)+x_5x_6(x_5+x_6))\\nonumber\\\\\n&&+x_1x_3x_6+x_1x_4x_5+x_2x_3x_5+x_2x_4x_6\\nonumber\\\\\n&&-(x_1+x_2+x_3+x_4+x_5+x_6)(x_1x_2+x_3x_4+x_5x_6)\\nonumber\n\\end{eqnarray}\n\nThe differential equation corresponding to (\\ref{rr2}) has the solution\n$P^{D\/2-2}(x_i+\\mu_i)$. Let\nus consider the \"Laurent\" coefficients of this function:\n\n\\begin{eqnarray}\nf(n_i,D)=\n\\frac{1}{(2\\pi\\imath)^6}\n\\oint\\oint\\oint\\oint\\oint\\oint\n\\frac\n{dx_1dx_2dx_3dx_4dx_5dx_6}\n{x_1^{n_1}x_2^{n_2}x_3^{n_3}x_4^{n_4}x_5^{n_5}x_6^{n_6}}\n{P(x_1+\\mu_1,\\dots,x_6+\\mu_6)^{D\/2-2}}\n\\label{solution}\n\\end{eqnarray}\n\n\\noindent\nwhere the integral symbols denote six subsequent complex \nintegrations with contours \nwhich will be described below. If one acts with (\\ref{rr2}) on \n(\\ref{solution}), one gets, up to surface terms, \nthe same expression\nas acting with $P\\partial_i -(D\/2-2)(\\partial_iP)$ on $P^{D\/2-2}$, that is, \nzero. Then, the surface terms can be removed if we choose\ncontours that are either closed or end at infinity. To be more rigorous, one can \nconsider \nthe analytical continuation of the result in $D$ from large negative values.\nSo (\\ref{solution}) is the solution of the relations (\\ref{rr}),\nand the different choices of the contours correspond to different \nindependent solutions. 
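The statement that $P^{D/2-2}$ solves the differential equation behind the relations can be checked mechanically; a small SymPy sketch (an illustration added here, not part of the original derivation):

```python
import sympy as sp

x1, x2, x3, x4, x5, x6, D = sp.symbols('x1 x2 x3 x4 x5 x6 D')

# The polynomial P from the text.
P = (2*(x1*x2*(x1 + x2) + x3*x4*(x3 + x4) + x5*x6*(x5 + x6))
     + x1*x3*x6 + x1*x4*x5 + x2*x3*x5 + x2*x4*x6
     - (x1 + x2 + x3 + x4 + x5 + x6)*(x1*x2 + x3*x4 + x5*x6))

e = (D - 4) / 2        # the exponent D/2 - 2
f = P**e

# Check that  P * d_i f - (D/2 - 2) * (d_i P) * f = 0  for every x_i.
for x in (x1, x2, x3, x4, x5, x6):
    residual = sp.simplify(P*sp.diff(f, x) - e*sp.diff(P, x)*f)
    assert residual == 0
print("P**(D/2-2) satisfies the differential equation in every variable")
```

The check is purely symbolic in all six variables and in $D$, so it verifies the identity for arbitrary dimension.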
Note that if one chooses the contour as a small \ncircle around zero, one gets the true Laurent coefficient\nof the function $P^{D\/2-2}$, so this function can be called a generalized \ngenerating function for the solutions of the relations (\\ref{rr}).\n\nThen, in accordance with the dimensional regularization rules, \nthe integrals (\\ref{integral}) are nonzero only if at least three\namong the $n_i$ are positive. So it is natural to construct \nthe solutions from those that vanish if an index from a \ndefinite three--index set (\"Taylor\" indices) is not positive. \nOne can obtain such solutions \nif one chooses the contours corresponding to these indices\nas circles around zero. In this case these three integrations can be\nevaluated and lead to coefficients of the ordinary Taylor expansion in the\ncorresponding variables.\n\nThe three remaining integrations in the general case lead to sums of \ngeneralized hypergeometric series, but in some cases of practical interest\n(see below)\nthey can be reduced to finite sums of products of Pochhammer symbols. \n\n\\section{Example}\n\nAs an example let us consider the case of integrals with four equal-mass\nand two massless lines, that is $\\mu_1=\\mu_2=0,\\mu_3=\\mu_4=\\mu_5=\\mu_6=1$. 
\nLet us calculate the coefficient functions \nwhich correspond to the choice of the master integrals from \\cite{REC}.\nThat is, we expand $B(\\underline{n})$ as\n\n\\begin{eqnarray}\nB(\\underline{n},D)=\nN(\\underline{n},D)B(0,0,1,1,1,1,D)+\nM(\\underline{n},D)B(1,1,0,0,1,1,D)+\nT(\\underline{n},D)B(0,0,0,1,1,1,D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwith the following normalization conditions\n\\begin{eqnarray}\nN(0,0,1,1,1,1,D)=1,\\quad N(1,1,0,0,1,1,D)=0,\\quad \nN(0,0,0,1,1,1,D)=0,\\label{condN}\\\\\nM(0,0,1,1,1,1,D)=0,\\quad M(1,1,0,0,1,1,D)=1,\\quad\nM(0,0,0,1,1,1,D)=0,\\label{condM}\\\\\nT(0,0,1,1,1,1,D)=0,\\quad T(1,1,0,0,1,1,D)=0,\\quad \nT(0,0,0,1,1,1,D)=1,\\label{condT} \\end{eqnarray}\n\nThe practical rule for choosing the integration contours is: a circle around\nzero for an index equal to unity in the master integral, and a contour over the cut for \nan index equal to zero in the master integral.\n\nTo get $N(\\underline{n})$ one should first make the Taylor expansion \nin $x_3,x_4,x_5,x_6$\n\n\\begin{eqnarray}\nB(n_i,D)\\propto\\oint\\oint\n\\frac{dx_1dx_2}{x_1^{n_1}x_2^{n_2}}\n\\big(\\frac{\\partial_3^{n_3-1}\\dots\\partial_6^{n_6-1}}\n{(n_3-1)!\\dots(n_6-1)!}\nP(x_1,x_2,x_3+1,\\dots,x_6+1)^{D\/2-2}\\big)\n\\vert_{x_3,\\dots,x_6=0}\\nonumber\n\\end{eqnarray}\n\nThe remaining integrals over $x_1,x_2$ are of the type\n\n\\begin{eqnarray}\n\\oint\\oint\n\\frac{dx_1dx_2}{x_1^{n_1}x_2^{n_2}}\n[x_1x_2(x_1+x_2-4)]^{D\/2-2}\n\\propto\n(-4)^{-n_1-n_2}\\frac{(D\/2-1)_{-n_1}(D\/2-1)_{-n_2}}{(3D\/2-3)_{-n_1-n_2}}\n\\equiv N(n_1,n_2,1,1,1,1,D)\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere we follow the normalization (\\ref{condN}). \n\nThe case $M(\\underline{n},D)$ is analogous. The only difference is that \ndue to the \nsymmetry of the task we should take the sum of the solutions with the \nsignatures $(++\\pm\\pm++)$ and $(++++\\pm\\pm)$.\n\nThe case $T(\\underline{n},D)$ is more complicated. 
The symmetry of the task \nsuggests that one should try the following form of the solution\n\n\\begin{eqnarray}\nT(n_1,n_2,n_3,n_4,n_5,n_6,D)&=&\n\\phantom{+}t(n_1,n_2,n_3,n_4,n_5,n_6,D)+t(n_1,n_2,n_4,n_3,n_6,n_5,D)\\nonumber\\\\\n&&+t(n_1,n_2,n_5,n_6,n_3,n_4,D)+t(n_1,n_2,n_6,n_5,n_4,n_3,D)\n\\nonumber\n\\end{eqnarray}\n\n\n\\noindent\nwhere $t(\\underline{n},D)$ is non--zero only if $n_4,n_5,n_6>0$.\nLet us construct $t(\\underline{n},D)$ using (\\ref{solution}),\nkeeping in mind possible mixing with the $N(\\underline{n},D)$ solution.\nAfter differentiating \nover the last three indices the task reduces to the construction of \n$t(n_1,n_2,n_3,1,1,1,D)$. Let us consider the corresponding integral:\n\n\\begin{eqnarray}\n\\overline{t}(n_1,n_2,n_3,D)=\n\\frac{1}{(2\\pi\\imath)^3}\n\\oint\\oint\\oint\n\\frac\n{dx_1dx_2dx_3}\n{x_1^{n_1}x_2^{n_2}x_3^{n_3}}\n{(x_3^2-x_1x_2x_3+x_1x_2(x_1+x_2-4))^{D\/2-2}}\n\\label{tbar}\n\\end{eqnarray}\n\nFor $n_3<1$ one can calculate this integral immediately (the possible \n$N(\\underline{n},D)$ contribution vanishes). 
Taking into account the \nnormalization (\\ref{condT}) we get\n\n\\begin{eqnarray}\nt(n_1,n_2,n_3<1,1,1,1,D)&=&\\Df\n{\\overline{t}(n_1,n_2,n_3,D)}{\\overline{t}(0,0,0,D)}\\nonumber\\\\\n&=&\n\\Df\n{(2-D)_{(n_1+n_3)}\n(2-D)_{(n_2+n_3)}\n(\\Df{D-1}{2})_{(-n_1-n_3)}\n(\\Df{D-1}{2})_{(-n_2-n_3)}}\n{(-4)^{(n_1+n_2)}(-8)^{n_3}}\n\\nonumber\\\\\n&&\\sum_{k=0}^{[-n_3\/2]}\\Df{\n(\\Df{D-1}{2}-n_1-n_3)_{-k}\n(\\Df{D-1}{2}-n_2-n_3)_{-k}\n(n_3)_{(-n_3-2k)}\n}{\n(\\Df{3-D}{2})_{-k}\n(\\Df{1}{2})_{-k}\n(-n_3-2k)!\n}\n\\nonumber\n\\end{eqnarray}\n\nFor $n_3>1$, using integration by parts for $x_3$ in (\\ref{tbar}) (which \nreduces to the evaluation of the $(n_3-1)^{\\rm th}$ derivative of $P^{D\/2-2}$),\n$\\overline{t}(n_1,n_2,n_3,D)$ can be reduced to \na set of $\\overline{t}(n_1,n_2,1,D)$ with different $n_1,n_2$.\nLet us extract $t(n_1,n_2,1,1,1,1,D)$ from $\\overline{t}(n_1,n_2,1,D)$ \naccording to the conditions (\\ref{condT})\n\n\\begin{eqnarray}\nt(n_1,n_2,1,1,1,1,D)=\\Df{1}{\\overline{t}(0,0,0,D)}\n(\\overline{t}(n_1,n_2,1,D)\n-\\overline{t}(0,0,1,D)N(n_1,n_2,1,1,1,1))\n\\label{t2}\n\\end{eqnarray}\n\nOne can calculate $t(n_1,n_2,1,1,1,1,D)$\nby direct use of (\\ref{t2}), expanding it, for example, in a series in\n$D\/2-2$, but we found it more suitable to use the recurrence relations \nin $n_1,n_2$:\n\n\\begin{eqnarray}\nt(n_1,n_2,1,1,1,1,D)&=&\n-\\Df{(D-2)^2}{4(D-3)(2n_1-D+2)}\n(-\\Df{1}{2}t(n_1-1,n_2-1,0,1,1,1,D-2)\\nonumber\\\\\n&&+t(n_1-2,n_2-1,1,1,1,1,D-2))\\nonumber\\\\\n&&-\\Df{2(D-2)^2(11D-38)}{3(3D-10)(3D-8)(D-3)}\nN(n_1,n_2,1,1,1,1,D)\\label{rt1}\\\\\nt(n_1,n_2,1,1,1,1,D)&=&\n\\Df{(n_1-n_2-1)}{(2n_1-D+2)}\nt(n_1,n_2+1,0,1,1,1,D)\\nonumber\\\\\n&&+\\Df{(2n_2-D+4)}{(2n_1-D+2)}\nt(n_1-1,n_2+1,1,1,1,1,D)\n\\label{rt2}\n\\end{eqnarray}\n\nWith the help of (\\ref{rt1}) the $n_1+n_2$ can be reduced to $-1,0,1$ \nand with the help of (\\ref{rt2}) the $n_1-n_2$ can be reduced to $0,1$\n(note that $t(n_1,n_2,1,1,1,1,D)=t(n_2,n_1,1,1,1,1,D)$).\nHere at every recursion step 
one integral is re-expressed through\nanother one plus a term rational in $D$, that is, there is no \n\"exponential reproduction\". Then, the recursion acts separately\non the variables $n_1+n_2$ and $n_1-n_2$. So, although the relations \n(\\ref{rt1},\\ref{rt2}) can be solved in explicit form,\nthis \"safe\" variant\nof the recursion is in this case the most effective way of calculation.\n\nThe relations (\\ref{rt1},\\ref{rt2}) are a simple example of recurrence\nrelations with $D$-shifts, which can be derived in the following way.\nNote that if \n$f^k(n_i,D)$ is a solution of (\\ref{rr2}), then\n$P({\\bf I}^-+\\mu_i)f^k(n_i,D-2)$ is also a solution.\nHence, if $f^k(n_i,D)$ is a complete set of solutions, then\n\n\\begin{eqnarray}\nf^k(n_i,D)=\\sum_n S^k_n(D)P({\\bf I}^-+\\mu_i)f^n(n_i,D-2)\n\\label{rrD}\n\\end{eqnarray}\n\n\\noindent\nwhere the coefficients of the mixing matrix $S$ depend only on $D$.\nFor the solutions (\\ref{solution}) the matrix $S$ is the unit matrix.\nOn the other hand, the desire to come to some specific set of master \nintegrals leads to a nontrivial mixing matrix, and for the example considered \nabove these coefficients are\n\n\\vspace{3mm}\n\\begin{tabular}{lll}\n$S^n_n=-\\Df{3}{64}\\Df{(3D - 8)(3D - 10)}{(D - 4)^2}$\n&$S^m_m=\\Df{3}{16}\\Df{(3D - 8)(3D - 10)}{(2D - 7)(2D - 9)}$&\n$S^t_t=-\\Df{(D - 2)^2}{4(D - 3)(D - 4)}$\\\\\n$S^t_n=\\Df{(11D - 38)(D - 2)^2}{32(D - 3)(D - 4)^2}$& \n\\multicolumn{2}{l}{$S^n_t=S^t_m=S^m_t=S^m_n =S^n_m=0$}\\\\\n\\end{tabular}\n\n\\vspace{3mm}\nTo check the efficiency of this approach we \nevaluated, to 3 loops, the first 5 moments in the \n$z\\equiv q^2\/4m^2\\to 0$ \nexpansion of the QED photon vacuum polarization \n \\[\\Pi(z) = \\sum_{n>0} C_n\\,z^n + {\\rm O}(\\alpha^4)\\,,\\]\n\nThe $C_n$ are expressed through approximately $10^5$ scalar \nintegrals, but it is not necessary to evaluate these integrals separately.\nInstead, we evaluated a few integrals of (\\ref{solution}) type, but\nwith $P^{D\/2-2}$ multiplied by 
a long polynomial in $x_i$.\n\nAfter OS mass \\cite{GBGS,BGS} and charge \\cite{REC} renormalization,\nwe obtained the finite $D\\rightarrow4$ limits\n(the coefficients $C_1, C_2, C_3$ can be found in \\cite{3l}):\n\n\\begin{eqnarray}\nC_4 & = & \\Bigl\\{ N^2\\left[ \\Df{256}{693} \\,\\zeta_2\n + \\Df{2522821}{9437184} \\,\\zeta_3\n - \\Df{129586264289}{143327232000}\\right]\\nonumber\\\\\n & &{} + N \\left[ \\Df{160}{231} \\left(1-\\Df{8}{5}\\ln2\\right)\\zeta_2\n + \\Df{1507351507033}{1651507200} \\,\\zeta_3\n - \\Df{269240669884818833}{245806202880000} \\right]\n \\Bigr\\}\\frac{\\alpha^3}{\\pi^3} \n\\nonumber\\\\\n & &{}+\\Df{51986}{127575}\\,N\\frac{\\alpha^2}{\\pi^2}\n +\\Df{32}{693} \\,N\\frac{\\alpha }{\\pi }\\,,\\nonumber\\\\\nC_5 & = & \\Bigl\\{ N^2\\left[ \\Df{1024}{3003} \n\\,\\zeta_2\n + \\Df{1239683}{3932160} \\,\\zeta_3\n - \\Df{512847330943}{556351488000}\\right]\\nonumber\\\\\n & &{} + N \\left[ \\Df{640}{1001} \\left(1-\\Df{8}{5}\\ln2\\right)\\zeta_2\n + \\Df{939939943788973}{190749081600} \\,\\zeta_3\n - \\Df{360248170450504167133}{60837035212800000} \\right]\n \\Bigr\\}\\frac{\\alpha^3}{\\pi^3} \n\\nonumber\\\\\n & &{}+\\Df{432385216}{1260653625}\\,N\\frac{\\alpha^2}{\\pi^2}\n +\\Df{512}{15015} \\,N\\frac{\\alpha }{\\pi }\\,,\\nonumber\n\\end{eqnarray}\n\n\\noindent\nwhere we follow common practice \\cite{BKT}, by allowing for\n$N$ degenerate leptons. In pure QED, $N=1$; formally,\nthe powers of $N$ serve to count the number of electron loops.\n\nThe $N$ contribution of $C_4$ is in agreement with recent QCD \ncalculations \\cite{chet}; the $N^2$ part of $C_4$ and the \n$C_5$ are new.\n\nThe bare (non-renormalized) integrals were calculated for arbitrary $D$.\nCalculations for $C_4$ were made on a PC with a \nPentium-75 processor in REDUCE with 24 Mbytes of memory, within \napproximately 10 CPU hours. \nThe most difficult diagrams for $C_5$ were calculated\non an HP735 workstation. 
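As a quick numerical sanity check (an illustration added here, using only the rational numbers printed above for $C_4$), the $\alpha^3/\pi^3$ brackets can be evaluated in a few lines; the strong cancellation between the $\zeta_3$ term and the rational term in the $N$ part is what makes exact rational arithmetic convenient:

```python
from fractions import Fraction as F
import math

zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943  # zeta(3), Apery's constant

# N^2 and N parts of the alpha^3/pi^3 bracket of C_4, as printed above.
c4_N2 = (F(256, 693) * zeta2
         + F(2522821, 9437184) * zeta3
         - F(129586264289, 143327232000))
c4_N = (F(160, 231) * (1 - F(8, 5) * math.log(2)) * zeta2
        + F(1507351507033, 1651507200) * zeta3
        - F(269240669884818833, 245806202880000))
print(c4_N2, c4_N)
```

Both brackets come out as small positive numbers even though the individual $\zeta_3$ and rational terms in the $N$ part are each of order $10^3$.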
\n\nThese results demonstrate reasonable progress in comparison with the common \nrecursive \napproach. For example, the common way used in \\cite{3l} demands \nseveral CPU hours on a DEC-Alpha workstation to calculate the full $D$ \ndependence of the $C_2$ integrals, and further calculations became \npossible only after truncation in $(D\/2-2)$. In the present approach the \nfull $D$ calculations for $C_2$ demand about 5 minutes on a PC.\n\n\\section{Conclusions}\n\nThe new approach suggested in this work allows one to produce explicit \nformulas (\\ref{solution}) for the solutions of the recurrence relations for \n3--loop vacuum integrals.\nThese formulas can be used for direct calculations and demonstrate\nhigh efficiency.\nOn the other hand, they produce a new type of $D$-shifted recurrence \nrelations (\\ref{rrD}) for these integrals.\nFinally, we hope that the simple representation (\\ref{rr2}) of the \ntraditional recurrence relations, which allows one to obtain all these\nresults, is not intrinsic to the 3--loop vacuum case, and a \ngeneralization to the multi--loop and\/or non-vacuum cases is possible.\n\n\n\\section{Acknowledgment}\nI would like to thank D.Broadhurst for the possibility to use\nhis RECURSOR \\cite{REC}, which produced a lot of initial material for \ninvestigating the structure of the solutions, and V.Ilyin for drawing my \nattention to the problem and many fruitful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nIn statistical physics, \ncalculating the free energy of solvable lattice models\n at finite temperature is one of the \n important problems. \n For this purpose, thermodynamic Bethe ansatz (TBA) \nequations have often been used \\cite{Ta99}. In general, \nthe TBA equations are an infinite number of coupled nonlinear \nintegral equations (NLIE) with an infinite number of unknown functions. \nThen it is desirable to reduce the TBA equations \nto a finite number of coupled NLIE with a finite number of \nunknown functions. 
\n\nDestri, de Vega \\cite{DD92} and Kl\\\"umper \\cite{K93,K92}\nproposed NLIE with two (or one \\footnote{ if an integral contour \n with a closed loop is adopted}) \nunknown functions for the $XXZ$ (or $XYZ$) model. \nTo generalize their NLIE to models \n whose underlying algebras have arbitrary rank seems to be \na difficult problem, as we need considerable trial and error \nto find the auxiliary functions which are needed to derive the NLIE. \nThus there are NLIE of the abovementioned type only\n for models whose underlying algebras have \nat most rank 3 (for example, \\cite{KWZ97,FK99,D05}). \n\nSeveral years ago, \nTakahashi discovered \\cite{Ta01} another NLIE for the \n$XXZ$ model while simplifying the TBA equations. \nLater, the same NLIE was rederived \\cite{TSK01} from fusion relations \n($T$-system) \\cite{KR87} \namong quantum transfer matrices (QTM) \\cite{S85}. \nIn addition, it was also rederived \\cite{KW02} for the $XXX$ model \nfrom a fugacity expansion formula. \n\nIn view of these situations, we have derived NLIE of Takahashi type for \nthe $osp(1|2s)$ model \\cite{T02}, the $sl(r+1)$ model \\cite{T03},\nthe higher spin Heisenberg model \\cite{T04}, and \nthe $U_{q}(\\widehat{sl}(r+1))$ Perk-Schultz model \\cite{TT05}. \nIn these cases, \nthe number of unknown functions and NLIE coincides with the rank of the \nunderlying algebras. In this paper, we will further derive NLIE with a finite \nnumber of unknown functions \nfor the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model \\cite{PS81,Sc83}, \nwhich is a multicomponent generalization of the 6-vertex model and \none of the fundamental solvable lattice models in statistical mechanics. \nFor example, a special case of this model is related to the \nsupersymmetric $t-J$ model, which is important in \nstrongly correlated electron systems. \n\nIn section 2, we introduce the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model, \nand define the QTM for it. 
\nAs a summation over tableaux labeled by an $a \times m$ Young (super) diagram, we \nintroduce an auxiliary function (\ref{DVF}) \cite{T97,T98} \nwhich includes the eigenvalue formula (\ref{QTM-eigen}) of the QTM \nas a special case. \nWe also introduce a system of functional relations ($T$-system) which is satisfied by \nthis auxiliary function.\n\nIn section 3, we derive two kinds of NLIE which contain only $r+s+1$ unknown functions. \nThe first ones (\ref{nlie-general}), (\ref{nlie-generalb}) \nreduce to the NLIE for the $U_{q}(\widehat{sl}(r+1))$ Perk-Schultz model \nin \cite{TT05} if $s=-1$. \nHowever, our new NLIE are not a straightforward generalization of the ones in \nour previous paper \cite{TT05}. \nIn fact, in the case $r,s \ge 0$, \na straightforward generalization of our previous NLIE \nbecomes a system of an infinite number of coupled NLIE which contains an \ninfinite number of unknown functions \n(see (\ref{nlie4})). \nTo overcome this difficulty, we will use the \nquantum (supersymmetric) Jacobi-Trudi and Giambelli formula \n(\ref{jacobi-trudi}) and \na duality (\ref{dual}) for the auxiliary function, \nfrom which a closed set of NLIE can be derived. \nWe will also propose other NLIE (\ref{nlie-xi=-1}) and (\ref{nlie-xi=-1b}) \nin the latter part of \nsection 3, which have never been considered before, \neven for the $U_{q}(\widehat{sl}(2))$ case. \nIn deriving the NLIE, we assume that $q$ is generic. \nHowever, we expect that our results can also be analytically continued to \nthe case where $q$ is a root of unity. \n\nIn section 4, we calculate the high temperature expansion of the \nfree energy based on our NLIE. \nIn particular, we can derive the coefficients (\ref{coe1})-(\ref{coe5}) \nup to order 5 for arbitrary rank $r+s+1$. \nThe point is that if we fix the degree of the high temperature expansion, \nwe can write down a general formula for the coefficients. 
\nOn the other hand, if we specialize the parameters, we can \nderive the coefficients to much higher orders. For example, \nfor the case $(r,s)=(2,-1),(-1,2)$, $q=1$, $\mu_{a}=0$, \nthe coefficients of the high temperature expansion of the specific heat \nup to order 40 are presented in the appendix. \nIt would be difficult to derive coefficients of such a \nhigh order by other methods. \n \nSection 5 is devoted to concluding remarks. \n\section{The Perk-Schultz model and the quantum transfer matrix method} \nIn this section, we will introduce the $U_{q}(\widehat{sl}(r+1|s+1))$ \nPerk-Schultz model\n\footnote{$U_{q}(\widehat{sl}(r+1|s+1))$ is a quantum affine superalgebra, \nwhich characterizes the $R$-matrix of this model. \nSee, for example, \cite{Y99}. \nWe assume $\eta \in {\mathbb R}$ ($q=e^{\eta}$). \nA rational limit ($q \to 1$) of the Perk-Schultz model is \nthe Uimin-Sutherland model \cite{U70,S75}.} \n\cite{PS81,Sc83} and \nthe quantum transfer matrix (QTM) method \n\cite{S85,SI87,K87,SAW90,K92,K93} \nfor it. \nThe QTM method was applied to the Perk-Schultz model \nin ref. \cite{KWZ97} \n(see also refs. \cite{JKS97,JKS98,FK99}). \n\nLet us introduce three sets $B=\{1,2,\dots,r+s+2\}=B_{+}\cup B_{-}$, \nwhere $B_{+} \cap B_{-}=\phi $, $|B_{+}|=r+1$ and $|B_{-}|=s+1$ \n($r,s \in {\mathbb Z}_{\ge -1}$). \nWe define a grading parameter $p(a)$ ($a \in B$) such that \n$p(a)=0$ for $a \in B_{+}$ and \n$p(a)=1$ for $a \in B_{-}$. 
\nThe $R$-matrix of the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model \\cite{PS81} \nis given as \n\\begin{eqnarray}\nR(v)=\n\\sum_{a_{1},a_{2},b_{1},b_{2}\\in B}\nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v) \nE^{a_{1},a_{2}}\\otimes E^{b_{1},b_{2}},\n\\end{eqnarray}\nwhere $E^{a,b}$ is a $r+s+2$ by $r+s+2$ matrix \nwhose $(i,j)$ element is given as \n$(E^{a,b})_{i,j}=\\delta_{ai}\\delta_{bj}$; \n$R^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ is defined as \n\\begin{eqnarray}\n&& R^{a,a}_{a,a}(v)=[(-1)^{p(a)}v+1]_{q}, \\\\\n&& R^{a,b}_{a,b}(v)=(-1)^{p(a)p(b)}[v]_{q} \\quad (a \\ne b), \\\\\n&& R^{b,a}_{a,b}(v)=q^{\\mathrm{sign}(a-b)v}\n\\quad (a \\ne b), \\label{R-mat}\n\\end{eqnarray}\nwhere $v \\in \\mathbb{C}$ is the spectral parameter;\n$a,b \\in B$; \n $[v]_{q}=(q^{v}-q^{-v})\/(q-q^{-1})$; \n$q=e^{\\eta}$. \nNote that this $R$-matrix reduces to the one for the well known 6-vertex model \nif $(r,s)=(1,-1)$.\n\nLet $L$ be a positive integer (the number of lattice sites). \nThe row-to-row transfer matrix on $({\\mathbb C}^{r+s+2})^{\\otimes L}$ \nis defined as\n\\footnote{The lower index $i,j$ of $R_{ij}(v)$ is used as follows: \nfor example, $E^{a,b}_{k}$ \nis defined on $({\\mathbb C}^{r+s+2})^{\\otimes (L+1)}$: \n$E^{a,b}_{k}=I^{\\otimes k}\\otimes E^{a,b}\\otimes I^{\\otimes (L-k)}$, \nwhere $I$ is $r+s+2$ by $r+s+2$ identity matrix; \n$k=0,1,\\dots, L$. \nThen \n$R_{ij}(v)$ is defined as \n$\nR_{ij}(v)=\\sum_{a_{1},a_{2},b_{1},b_{2}} \nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v) \nE^{a_{1},a_{2}}_{i} E^{b_{1},b_{2}}_{j}\n$. 
The trace ${\\mathrm tr}_{0}$ is \ntaken over the auxiliary space indexed by $0$.} \n\\begin{eqnarray}\nt(v)={\\mathrm tr}_{0}(R_{0L}(v)\n \\cdots R_{02}(v)R_{01}(v)).\n\\label{rtr}\n\\end{eqnarray}\nThe main part of the Hamiltonian is proportional to \nthe logarithmic derivative of the row-to-row transfer matrix (\\ref{rtr}): \n\\begin{eqnarray}\n&& \\hspace{-20pt} \nH_{body}=\\frac{J\\sinh \\eta}{\\eta}\\frac{d}{dv}\\log t(v) |_{v=0}\n= J\\sum_{j=1}^{L}\\biggl\\{\n \\cosh \\eta \\sum_{a \\in B} (-1)^{p(a)}E^{a,a}_{j}E^{a,a}_{j+1} +\n\\nonumber \\\\ && \n \\sum_{\n {\\scriptsize \\begin{array}{c}\n a, b \\in B \\\\\n a\\ne b \n \\end{array}}\n }\n \\left( {\\rm sign}(a-b) \\sinh \\eta \n E^{a,a}_{j}E^{b,b}_{j+1} +\n (-1)^{p(a)p(b)}E^{b,a}_{j}E^{a,b}_{j+1}\n \\right)\n\\biggl\\}, \\label{ham0}\n\\end{eqnarray}\nwhere we adopt the periodic boundary condition \n$E^{a,b}_{L+1}=E^{a,b}_{1}$. \nWithout breaking the integrability, we can also add the chemical \npotential term\n\\begin{eqnarray}\nH_{ch}=-\\sum_{j=1}^{L}\\sum_{a \\in B}\\mu_{a}E^{a,a}_{j} \\label{hamch}\n\\end{eqnarray}\n to $H_{body}$. Then the total Hamiltonian is $H=H_{body}+H_{ch}$. 
\n \nTo treat the model at finite temperature $T$, \nwe introduce the so-called quantum transfer matrix (QTM)\\cite{S85}: \n\\begin{eqnarray}\n&& \\hspace{-30pt} t_{\\mathrm{QTM}}(v)=\\sum_{\\{\\alpha_{k}\\},\\{\\beta_{k}\\}}\nt_{\\mathrm{QTM}}(v)\n^{\\{\\beta_{1},\\dots, \\beta_{N} \\}}\n_{\\{\\alpha_{1},\\dots,\\alpha_{N} \\}}\nE^{\\beta_{1}\\alpha_{1}}_{1}\nE^{\\beta_{2}\\alpha_{2}}_{2}\n\\cdots \nE^{\\beta_{N}\\alpha_{N}}_{N}, \\label{QTM} \\\\\n&& \\hspace{-46pt}\nt_{\\mathrm{QTM}}(v)^{\\{\\beta_{1},\\dots, \\beta_{N} \\}}\n_{\\{\\alpha_{1},\\dots,\\alpha_{N} \\}}=\n\\sum_{\\{\\nu_{k}\\}}e^{\\frac{\\mu_{\\nu_{1}}}{T}}\n\\prod_{k=1}^{\\frac{N}{2}}\n R^{\\beta_{2k},\\nu_{2k+1}}_{\\alpha_{2k},\\nu_{2k}}(u+iv)\n \\widetilde{R}^{\\beta_{2k-1},\\nu_{2k}}_{\\alpha_{2k-1},\\nu_{2k-1}}(u-iv),\n \\nonumber \n\\end{eqnarray}\nwhere $N \\in 2{\\mathbb Z}_{\\ge 1} $ is the Trotter number; \n$\\nu_{N+1}=\\nu_{1}$; $\\nu_{k},\\alpha_{k},\\beta_{k}\n \\in B$; $u=-\\frac{J \\sinh \\eta }{\\eta N T}$; \n$\\widetilde{R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=\nR^{b_{1},a_{2}}_{b_{2},a_{1}}(v)$ is the \\symbol{\"60}$90^{\\circ}$ rotation' of $R(v)$. \n We can express \\cite{S85} the free energy per site \nin terms of only the largest eigenvalue $\\Lambda_{1}$ of \nthe QTM (\\ref{QTM}) at $v=0$:\n\\begin{eqnarray}\nf=\n-T\\lim_{N\\to \\infty}\\log \\Lambda_{1},\n\\label{free-en-qtm}\n\\end{eqnarray} \nwhere the Boltzmann constant is set to $1$. \n\nDue to the Yang-Baxter equation, the QTM (\\ref{QTM}) forms \ncommuting family for any $v$. \nThus it can be diagonalized by the \nBethe ansatz. \nThe eigenvalue formula\n\\footnote{To be precise, \nthis formula is a conjecture \nfor general parameters $r,s,q,\\mu_{a},N$. \nIn \\cite{KWZ97}, the \nalgebraic Bethe ansatz for a one particle state was \nexecuted for the QTM of the $U_{q}(\\hat{sl}(r+1|s+1))$ Perk-Schultz model. 
\nAs for the $U_{q}(\\hat{sl}(2))$ case, a proof of this formula by \n the algebraic Bethe ansatz is similar to the \nrow-to-row transfer matrix case (cf. \\cite{GKS04}). \nThis formula has a quite natural form (dressed vacuum form) \nfrom a point of view of the analytic Bethe ansatz \\cite{R83,KS95}. \nAn eigenvalue formula of the row to row transfer matrix (\\ref{rtr}) \nwas derived in \\cite{BVV82,Sc83}. It has essentially same form as\n (\\ref{QTM-eigen}) except for a part which is related to \n the vacuum eigenvalue. \nThere is also support by numerical calculations for small \n$r,s$.}\n of the QTM (\\ref{QTM}) will be (cf. \\cite{KWZ97,FK99}) \n\\begin{eqnarray}\nT^{(1)}_{1}(v)=\\sum_{a\\in B}z(a;v), \n \\label{QTM-eigen}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& z(a;v)=\\psi_{a}(v) \\xi_{a}\n \\nonumber \\\\ \n&& \\times \n\\frac{Q_{a-1}(v-\\frac{i\\sum_{j=1}^{a-1}(-1)^{p(j)}}{2}-i(-1)^{p(a)})\nQ_{a}(v-\\frac{i\\sum_{j=1}^{a}(-1)^{p(j)}}{2}+i (-1)^{p(a)})}\n{Q_{a-1}(v-\\frac{i\\sum_{j=1}^{a-1}(-1)^{p(j)}}{2})\nQ_{a}(v-\\frac{i\\sum_{j=1}^{a}(-1)^{p(j)}}{2})}, \\nonumber \\\\\n&& Q_{a}(v)=\\prod_{k=1}^{M_{a}}\\sin \\eta(v-v_{k}^{(a)}),\n \\\\\n&& \\psi_{a}(v)=e^{\\frac{\\mu_{a}}{T}}\n \\phi_{-}(v-i(-1)^{p(1)}\\delta_{a,1})\n\\phi_{+}(v+i(-1)^{p(r+s+2)}\\delta_{a,r+s+2}),\n \\nonumber \\\\\n&& \\hspace{20pt}\n\\phi_{\\pm}(v)=\\left(\n\\frac{\\sin \\eta (v\\pm iu)}{\\sinh \\eta }\\right)^{\\frac{N}{2}},\n\\nonumber \n\\end{eqnarray}\nwhere $M_{a}\\in {\\mathbb Z}_{\\ge 0}$; $Q_{0}(v)=Q_{r+s+2}(v)=1$. \n$\\xi_{a} \\in \\{-1,1\\}$ is a parameter which depends on the grading \nparameter $\\{p(b)\\}_{b \\in B}$. 
\n$\\{v^{(a)}_{k}\\}$ is a root of the Bethe ansatz equation \n(BAE)\n\\begin{eqnarray}\n&& \\hspace{-20pt} \n\\frac{\\psi_{a}(v^{(a)}_{k}+\\frac{i}{2}\\sum_{j=1}^{a}(-1)^{p(j)})}\n {\\psi_{a+1}(v^{(a)}_{k}+\\frac{i}{2}\\sum_{j=1}^{a}(-1)^{p(j)})} \\label{BAE} \\\\\n&& =\n-\\varepsilon_{a}\n\\frac{Q_{a-1}(v^{(a)}_{k}+\\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}-i(-1)^{p(a+1)})\n Q_{a+1}(v^{(a)}_{k}+\\frac{i(-1)^{p(a+1)}}{2})}\n {Q_{a-1}(v^{(a)}_{k}-\\frac{i(-1)^{p(a)}}{2})Q_{a}(v^{(a)}_{k}+i(-1)^{p(a)})\n Q_{a+1}(v^{(a)}_{k}-\\frac{i(-1)^{p(a+1)}}{2})}\n \\nonumber \\\\\n&& \\hspace{40pt} \\mbox{for} \\quad k\\in \\{1,2, \\dots, M_{a}\\} \\quad \n\\mbox{and} \\quad a\\in \\{1,2,\\dots, r+s+1 \\}, \\nonumber\n\\end{eqnarray}\nwhere $\\varepsilon_{a}=\\frac{\\xi_{a+1}}{\\xi_{a}} \\in \\{-1,1 \\} $.\nFrom now on, we assume the relation $p(1)=p(r+s+2)$ on \nthe grading parameter. \nIn this case, the eigenvalue formula (\\ref{QTM-eigen}) \nof the QTM has good analyticity to derive the NLIE. \nWe expect that this assumption does not spoil generality \nas the free energy will be independent of the order of the \ngrading parameters. \n\nLet us define \nan auxiliary function \\cite{T97,T98} (see also \\cite{T98-2}): \n\\begin{eqnarray}\nT_{m}^{(a)}(v)=\\sum_{\\{d_{j,k}\\}} \\prod_{j=1}^{a}\\prod_{k=1}^{m}\nz(d_{j,k};v-\\frac{i}{2}(a-m-2j+2k)),\n\\label{DVF}\n\\end{eqnarray}\nwhere $m,a \\in \\mathbb{Z}_{\\ge 1}$, and the summation is taken over \n$d_{j,k}\\in B$ ($ 1 < 2 < \\cdots < r+s+2$) \nsuch that\n\\begin{eqnarray}\n&& d_{j,k} \\le d_{j+1,k} \\quad {\\rm and} \\quad d_{j,k} \\le d_{j,k+1} \\label{rule1} \\\\ \n&& d_{j,k} < d_{j,k+1} \\quad {\\rm if} \\quad \n d_{j,k} \\in B_{-} \\quad {\\rm or} \\quad d_{j,k+1} \\in B_{-} \n \\label{rule2} \\\\ \n&& d_{j,k} < d_{j+1,k} \\quad {\\rm if} \\quad d_{j,k} \\in B_{+} \n\\quad {\\rm or} \\quad d_{j+1,k} \\in B_{+}. 
\\label{rule3}\n\\end{eqnarray}\nThis function contains \n$T_{1}^{(1)}(v)$ (\\ref{QTM-eigen}) as a special case \n$(a,m)=(1,1)$. \n(\\ref{DVF}) can be interpreted as a \nsummation over a Young (super) tableaux labeled by \n$a \\times m$ Young (super) diagram. \nIt is related to a system of eigenvalue formulae of the \nQTM for fusion models \\cite{KRS81}. \nNote that the condition (\\ref{rule2}) is void if $s=-1$, then \n(\\ref{DVF}) reduces to the Bazhanov-Reshetikhin formula \\cite{BR90}. \n\nFor $a,m \\in {\\mathbb Z}_{\\ge 1}$, we \n will normalize (\\ref{DVF}) as \n $ \\widetilde{T}^{(a)}_{m}(v)=\n T^{(a)}_{m}(v)\/{\\mathcal N}^{(a)}_{m}(v)$, \n where \n\\begin{eqnarray}\n\\hspace{-30pt} && {\\mathcal N}^{(a)}_{m}(v)=\n \\frac{\\phi_{-}(v- \\frac{a+m}{2} \\xi i)\n\\phi_{+}(v+ \\frac{a+m}{2}\\xi i)}{\n \\phi_{-}(v-\\frac{a-m}{2}i)\\phi_{+}(v+\\frac{a-m}{2}i)}\n \\nonumber \\\\ \n\\hspace{-30pt} && \\hspace{20pt} \\times\n \\prod_{j=1}^{a}\\prod_{k=1}^{m}\n \\phi_{-}(v-\\frac{a-m-2j+2k}{2}i)\\phi_{+}(v-\\frac{a-m-2j+2k}{2}i).\n \\label{normal}\n\\end{eqnarray}\nHere we introduce a parameter $\\xi \\in \\{-1,1 \\}$. \n$T^{(a)}_{m}(v)$ has no pole on $v$ due to the BAE (\\ref{BAE}). \nIn contrast, $\\widetilde{T}^{(a)}_{m}(v)$ has \npoles at $v=\\pm (\\frac{m+a}{2}\\xi i +iu)+\\frac{n \\pi}{\\eta}$ \n($n \\in {\\mathbb Z}$) for \n$(a,m) \\in {\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots,s+1 \\} \\cup \n \\{1,2,\\dots,r+1 \\}\\times {\\mathbb Z}_{\\ge 1}$. \n\nOne can show that \n$\\widetilde{T}^{(a)}_{m}(v)$ satisfies the \nso called $T$-system for $U_{q}(\\widehat{sl}(r+1|s+1))$ \\cite{T97,T98} \n(see also \\cite{JKS98} for a derivation of TBA equations from the \n$T$-system). 
\nFor $m,a \\in {\\mathbb Z}_{\\ge 1}$,\n\\begin{eqnarray}\n&& \\hspace{-10pt} \n\\widetilde{T}^{(a)}_{m}(v-\\frac{i}{2})\\widetilde{T}^{(a)}_{m}(v+\\frac{i}{2})=\n\\widetilde{T}^{(a)}_{m-1}(v)\\widetilde{T}^{(a)}_{m+1}(v)+\n\\widetilde{T}^{(a-1)}_{m}(v)\\widetilde{T}^{(a+1)}_{m}(v)\\label{T-sys} \\\\ \n&& \\hspace{-10pt} \\mbox{for} \\quad \na \\in \\{1,2,\\dots, r\\} \\quad \\mbox{or} \\quad m \\in \\{1,2,\\dots, s\\}\n \\quad \\mbox{or}\\quad (a,m)=(r+1,s+1), \\nonumber \\\\\n&& \\hspace{-10pt}\n\\widetilde{T}^{(r+1)}_{m}(v-\\frac{i}{2})\\widetilde{T}^{(r+1)}_{m}(v+\\frac{i}{2})=\n\\widetilde{T}^{(r+1)}_{m-1}(v)\\widetilde{T}^{(r+1)}_{m+1}(v) \n\\quad \\mbox{for} \\quad m \\in {\\mathbb Z}_{\\ge s+2}, \\label{T-sys-m} \\\\\n&& \\hspace{-10pt}\n\\widetilde{T}^{(a)}_{s+1}(v-\\frac{i}{2})\\widetilde{T}^{(a)}_{s+1}(v+\\frac{i}{2})=\n\\widetilde{T}^{(a-1)}_{s+1}(v)\\widetilde{T}^{(a+1)}_{s+1}(v) \n\\quad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge r+2}, \\label{T-sys-a}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\hspace{-35pt} \n \\widetilde{T}^{(a)}_{0}(v)=\\frac{\\phi_{-}(v-\\frac{a}{2}i)\\phi_{+}(v+\\frac{a}{2}i)}\n {\\phi_{-}(v-\\frac{a}{2}\\xi i)\\phi_{+}(v+\\frac{a}{2} \\xi i)}\n\\quad {\\rm for} \\quad a \\in {\\mathbb Z}_{\\ge 1},\\label{a0} \\\\\n&& \\hspace{-35pt}\n\\widetilde{T}^{(0)}_{m}(v)=\n \\frac{\\phi_{-}(v+\\frac{m}{2}i)\\phi_{+}(v-\\frac{m}{2}i)}\n {\\phi_{-}(v-\\frac{m}{2} \\xi i)\\phi_{+}(v+\\frac{m}{2} \\xi i)}\n\\quad {\\rm for} \\quad m \\in {\\mathbb Z}_{\\ge 1}. \\label{0m} \n\\end{eqnarray}\nThere is a duality relation for the auxiliary function.\n\\begin{eqnarray} \n&& \\hspace{-35pt}\n\\widetilde{T}^{(r+1)}_{a+s}(v)=\n\\zeta^{a-1} \n\\widetilde{T}^{(r+a)}_{s+1}(v) \\quad {\\rm for} \\quad a \\in Z_{\\ge 1} ,\n\\label{dual}\n\\end{eqnarray} \nwhere \n$\\zeta = \\frac{\\prod_{a \\in B_{+}} \\xi_{a}\n e^{\\frac{\\mu_{a}}{T}}}{\\prod_{b \\in B_{-}}\\xi_{b}e^{\\frac{\\mu_{b}}{T}}}$. \n(\\ref{a0}) (resp. 
(\ref{0m})) becomes $1$ if $\xi=1$ (resp. $\xi=-1$). \nNote that there is no upper bound for the index $a$ of $\widetilde{T}^{(a)}_{m}(v)$ \nfor $m \in \{1,2,\dots, s+1 \}$ if $s \in {\mathbb Z}_{\ge 0}$.\nFor $s=-1$, this $T$-system reduces to the one for $U_{q}(\widehat{sl}(r+1))$ \n\cite{KNS94} (see also \cite{KR87}). \nIn this case, (\ref{dual}) reduces to \n$\widetilde{T}^{(r+1)}_{a-1}(v)=\zeta^{a-1}=\ne^{\frac{(a-1)(\mu_{1}+\mu_{2}+\cdots +\mu_{r+1})}{T}}$ \nif $ \xi =1 $ (see eq. (2.21) in \cite{TT05}).\nFrom the relations (\ref{T-sys-m}), (\ref{T-sys-a}), (\ref{dual}) and \n(\ref{T-sys}) for $(a,m)=(r+1,s+1)$, one can derive the following relation \nfor $a \in {\mathbb Z}_{\ge 2}$: \n\begin{eqnarray}\n&& \hspace{-20pt} \widetilde{T}^{(r+1)}_{s+a}(v) =\n\zeta^{a-1}\n\widetilde{T}^{(r+a)}_{s+1}(v) \nonumber \\\n&& =\n \frac{\n\zeta^{a-1} \n\prod_{j=1}^{a} \widetilde{T}^{(r+1)}_{s+1}(v+\frac{a-2j+1}{2}i) }\n {\prod_{j=2}^{a} \bigl( \n \zeta\n\widetilde{T}^{(r+1)}_{s}(v+\frac{a-2j+2}{2}i)+\n \widetilde{T}^{(r)}_{s+1}(v+\frac{a-2j+2}{2}i) \bigr)} .\n \nonumber \\\n \label{sol}\n\end{eqnarray}\n$\widetilde{T}^{(a)}_{m}(v)$ can also be written in terms of a determinant \n(the quantum (supersymmetric) Jacobi-Trudi and Giambelli formula \cite{T97,T98}; \nfor the $s=-1$ case, \cite{BR90}; \nfor the $U_{q}(B_{r}^{(1)})$ case, \cite{KOS95}):\n\begin{eqnarray}\n\widetilde{T}^{(a)}_{m}(v)&=&\n W^{(a)}_{m}(v)\det _{1\le j,k \le m}\n\left(\widetilde{T}^{(a+j-k)}_{1}\n\left(\nv-\frac{j+k-m-1}{2}i\n\right) \n\right) \label{jacobi-trudi} \\\n&=& Z^{(a)}_{m}(v) \det _{1\le j,k \le a}\n\left(\widetilde{T}^{(1)}_{m+j-k}\n\left(\nv-\frac{a-j-k+1}{2}i\n\right) \n\right), \label{jacobi-trudi2}\n\end{eqnarray}\nwhere $\widetilde{T}^{(a)}_{1}(v)=0$ for $a <0$ and \n$\widetilde{T}^{(1)}_{m}(v)=0$ for $m <0$. 
\n$ W^{(a)}_{m}(v)$ and $ Z^{(a)}_{m}(v)$ are normalization functions: \n\\begin{eqnarray}\n&& W^{(a)}_{m}(v)=\\frac{1}{\\prod_{j=1}^{m-1}\\widetilde{T}^{(a)}_{0}(v+\\frac{m-2j}{2}i)}, \\\\\n&& Z^{(a)}_{m}(v)= \\frac{1}{\\prod_{j=1}^{a-1}\\widetilde{T}^{(0)}_{m}(v-\\frac{a-2j}{2}i)},\n\\end{eqnarray} \nwhere $\\prod_{j=1}^{0}(\\cdots )=1$. \nSubstituting (\\ref{jacobi-trudi}) into (\\ref{dual}), we obtain an equation\n\\begin{eqnarray}\n&& W^{(r+1)}_{a+s}(v) \\det _{1\\le j,k \\le a+s}\n\\left(\\widetilde{T}^{(r+1+j-k)}_{1}\n\\left(\nv-\\frac{j+k-a-s-1}{2}i\n\\right) \n\\right) \\nonumber \\\\\n&&=\n\\zeta^{a-1}\nW^{(r+a)}_{s+1}(v)\n\\det _{1\\le j,k \\le s+1}\n\\left(\\widetilde{T}^{(r+a+j-k)}_{1}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n\\nonumber \\\\\n&& \\hspace{180pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 1}. \\label{det-eq}\n\\end{eqnarray}\nExpanding partially (\\ref{det-eq}) on both side, \nwe obtain\n\\begin{eqnarray}\n&& \\widetilde{T}^{(a+r+s)}_{1}(v)=\n\\frac{\n\\widetilde{A}_{1}(v)-\n\\zeta^{a-1}\n\\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{2}(v)\n}\n{(-1)^{a+s}\\widetilde{A}_{3}(v)+(-1)^{s}\n\\zeta^{a-1} \n\\frac{W^{(r+a)}_{s+1}(v)}{W^{(r+1)}_{a+s}(v)}\n \\widetilde{A}_{4}(v)} \n \\nonumber \\\\ \n&& \\hspace{160pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 2}, \n\\label{a+r+s}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\widetilde{A}_{1}(v)=\\det _{1\\le j,k \\le a+s}\n\\left(\\widetilde{f}_{j,k}\n\\left(\nv-\\frac{j+k-a-s-1}{2}i\n\\right) \n\\right) \\\\\n&& \\quad \\widetilde{f}_{j,k}(v)=\\widetilde{T}^{(r+1+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (a+s,1), \n\\quad \\widetilde{f}_{a+s,1}(v)=0, \\nonumber \\\\\n&&\\widetilde{A}_{2}(v)=\n\\det _{1\\le j,k \\le s+1}\n\\left(\\widetilde{g}_{j,k}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad \\widetilde{g}_{j,k}(v)=\\widetilde{T}^{(r+a+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+1,1), \n\\quad 
\\widetilde{g}_{s+1,1}(v)=0, \\nonumber \\\\ \n&& \\widetilde{A}_{3}(v)=\\det _{1\\le j,k \\le a+s-1}\n\\left(\\widetilde{T}^{(r+j-k)}_{1}\n\\left(\nv-\\frac{j+k-a-s}{2}i\n\\right) \n\\right), \\\\\n&&\\widetilde{A}_{4}(v)=\n\\det _{1\\le j,k \\le s}\n\\left(\\widetilde{T}^{(r+a+j-k-1)}_{1}\n\\left(\nv-\\frac{j+k-s-1}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\nIt turns out that $\\widetilde{T}^{(a+r+s)}_{1}(v)$ is written in \nterms of $\\{\\widetilde{T}^{(d)}_{1}(v)\\}$ where $ \\max (0,r-s+2-a) \\le d \\le a+r+s-1$. \nThen $ \\widetilde{T}^{(a)}_{1}(v) $ for $a \\in {\\mathbb Z}_{\\ge r+s+2}$ \ncan be expressed in \nterms of $\\{\\widetilde{T}^{(d)}_{1}(v)\\}$ where $ 0 \\le d \\le r+s+1$.\nSimilarly, we can derive the \nfollowing relation from (\\ref{dual}) and (\\ref{jacobi-trudi2}).\n\\begin{eqnarray}\n&& \\widetilde{T}^{(1)}_{a+r+s}(v)=\n\\frac{\n\\zeta^{a-1}\n\\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{5}(v)-\n\\widetilde{A}_{6}(v)\n}\n{(-1)^{a+r}\n\\zeta^{a-1}\n\\frac{Z^{(r+a)}_{s+1}(v)}{Z^{(r+1)}_{a+s}(v)}\n\\widetilde{A}_{7}(v)+(-1)^{r} \n\\widetilde{A}_{8}(v)} \n\\nonumber \\\\\n&& \\hspace{140pt} \\mbox{for} \\quad \na \\in {\\mathbb Z}_{\\ge 2}, \n\\label{a+r+s-b}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& \\widetilde{A}_{5}(v)=\\det _{1\\le j,k \\le a+r}\n\\left(\\widetilde{h}_{j,k}\n\\left(\nv-\\frac{a+r+1-j-k}{2}i\n\\right) \n\\right) \\\\\n&& \\quad \\widetilde{h}_{j,k}(v)=\\widetilde{T}^{(1)}_{s+1+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (a+r,1), \n\\quad \\widetilde{h}_{a+r,1}(v)=0, \\nonumber \\\\\n&&\\widetilde{A}_{6}(v)=\n\\det _{1\\le j,k \\le r+1}\n\\left(\\widetilde{b}_{j,k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad \\widetilde{b}_{j,k}(v)=\\widetilde{T}^{(1)}_{a+s+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (r+1,1), \n\\quad \\widetilde{b}_{r+1,1}(v)=0, \\nonumber \\\\ \n&& \\widetilde{A}_{7}(v)=\\det _{1\\le j,k \\le 
a+r-1}\n\\left(\\widetilde{T}^{(1)}_{s+j-k}\n\\left(\nv-\\frac{a+r-j-k}{2}i\n\\right) \n\\right), \\\\\n&&\\widetilde{A}_{8}(v)=\n\\det _{1\\le j,k \\le r}\n\\left(\\widetilde{T}^{(1)}_{a+s-1+j-k}\n\\left(\nv-\\frac{r+1-j-k}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\n\nLet us consider the limit \n\\begin{eqnarray}\n&& Q^{(a)}_{m}:=\\lim_{v \\to i \\eta^{-1} \\infty} \\widetilde{T}^{(a)}_{m}(v) \n =\\sum_{\\{ d_{j,k}\\}}\n\\prod_{j=1}^{a}\\prod_{k=1}^{m} \\xi_{d_{j,k}}\n\\exp \\left(\\frac{\\mu_{d_{j,k}}}{T} \\right),\n \\label{limit}\n\\end{eqnarray} \nwhere the summation is taken over $\\{ d_{j,k}\\}$ ($d_{j,k} \\in B$)\n which obey the rules (\\ref{rule1})-(\\ref{rule3}).\nFor example, for $U_{q}(\\widehat{sl}(2|1))$ ($B_{+}=\\{1,3\\}$, $B_{-}=\\{2\\}$) case, we have, \n\\begin{eqnarray}\nQ^{(1)}_{1}&=& \\xi_{1}e^{\\frac{\\mu_{1}}{T}}+\\xi_{2}e^{\\frac{\\mu_{2}}{T}}\n +\\xi_{3} e^{\\frac{\\mu_{3}}{T}},\n \\label{Q11-sl21} \\\\\nQ^{(a)}_{1}&=&\n \\xi_{1} \\xi_{2}^{a-1} e^{\\frac{\\mu_{1}+(a-1)\\mu_{2}}{T}}\n+\\xi_{1} \\xi_{2}^{a-2} \\xi_{3} e^{\\frac{\\mu_{1}+(a-2)\\mu_{2}+\\mu_{3}}{T}}\n+\\xi_{2}^{a}e^{\\frac{a \\mu_{2}}{T}}\n+\\xi_{2}^{a-1} \\xi_{3} e^{\\frac{(a-1)\\mu_{2}+\\mu_{3}}{T}} \\nonumber \\\\\n&=& \\xi_{2}^{a-2}e^{ \\frac{(a-2) \\mu_{2}}{T}}Q^{(2)}_{1} \n\\qquad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge 2}. \n \\label{Q-sl21}\n\\end{eqnarray}\nWe can also rewrite (\\ref{Q-sl21}) as \n\\begin{eqnarray}\nQ^{(a)}_{1}=\\frac{{Q^{(3)}_{1}}^{a-2}}{{Q^{(2)}_{1}}^{a-3}}\n=\\frac{{Q^{(2)}_{1}}^{a-1}}{(\\zeta +Q^{(1)}_{1})^{a-2}}.\n\\label{Qa1-sl21}\n\\end{eqnarray}\nThis quantity (\\ref{limit}) corresponds to \n the character of $a$-th anti-(super)symmetric and \n$m$-th (super)symmetric tensor representation. \nWe will use $Q^{(a)}_{1}$ and $Q^{(1)}_{m}$ later. 
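The character limit (\ref{limit}) and the $U_{q}(\widehat{sl}(2|1))$ examples (\ref{Q11-sl21})-(\ref{Qa1-sl21}) can be checked by brute-force enumeration of the tableaux obeying (\ref{rule1})-(\ref{rule3}). The following sketch does this for the grading $B_{+}=\{1,3\}$, $B_{-}=\{2\}$, with $\xi_b=1$ and arbitrary sample fugacities $x_b=e^{\mu_b/T}$ (the numerical values of $\mu_b$ are assumptions made for the test):

```python
import itertools
import math

# Assumed example: U_q(sl(2|1)-hat), i.e. (r,s) = (1,0) with B_+ = {1,3},
# B_- = {2}; xi_b = 1, and sample fugacities x_b = exp(mu_b / T).
p = {1: 0, 2: 1, 3: 0}
B = (1, 2, 3)
x = {1: math.exp(0.3), 2: math.exp(-0.1), 3: math.exp(0.5)}
zeta = x[1] * x[3] / x[2]          # zeta for this grading with xi_b = 1

def admissible(tab, a, m):
    """Rules (rule1)-(rule3): rows weakly increase (strictly on B_- entries),
    columns weakly increase (strictly on B_+ entries)."""
    for j in range(a):
        for k in range(m):
            if k + 1 < m:                         # row rule (rule1)/(rule2)
                d1, d2 = tab[j][k], tab[j][k + 1]
                if d1 > d2 or (d1 == d2 and p[d1] == 1):
                    return False
            if j + 1 < a:                         # column rule (rule1)/(rule3)
                d1, d2 = tab[j][k], tab[j + 1][k]
                if d1 > d2 or (d1 == d2 and p[d1] == 0):
                    return False
    return True

def Q(a, m):
    """Character limit: sum over admissible a x m tableaux of prod x_d."""
    if a == 0 or m == 0:
        return 1.0
    total = 0.0
    for flat in itertools.product(B, repeat=a * m):
        tab = [flat[j * m:(j + 1) * m] for j in range(a)]
        if admissible(tab, a, m):
            w = 1.0
            for d in flat:
                w *= x[d]
            total += w
    return total
```

For instance, `Q(1, 1)` reproduces (\ref{Q11-sl21}), `Q(a, 1)` for $a\ge 2$ reproduces (\ref{Q-sl21}) and (\ref{Qa1-sl21}), and ${Q^{(1)}_{1}}^2 = Q^{(1)}_{2} + Q^{(2)}_{1}$ is the simplest instance of the functional relations these characters satisfy.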
\n\n$Q^{(a)}_{m}$ also satisfies the so called $Q$-system, \nwhich is the $T$-system (\\ref{T-sys})-(\\ref{dual})\n without the spectral parameter $v$: \nfor $ m,a \\in {\\mathbb Z}_{\\ge 1}$, we have \n\\begin{eqnarray}\n\\hspace{-20pt} && {Q^{(a)}_{m}}^{2}=Q^{(a)}_{m-1}Q^{(a)}_{m+1}+Q^{(a-1)}_{m}Q^{(a+1)}_{m}\n\\label{Q-sys} \\\\ \n&&\\hspace{10pt} \\mbox{for} \\quad \na \\in \\{1,2,\\dots, r\\} \\quad \\mbox{or} \\quad m \\in \\{1,2,\\dots, s\\}\n \\nonumber \\\\\n&& \\hspace{130pt} \\mbox{or}\\quad (a,m)=(r+1,s+1), \\nonumber \\\\\n&&{Q^{(r+1)}_{m}}^{2}=Q^{(r+1)}_{m-1}Q^{(r+1)}_{m+1}\n \\quad \\mbox{for} \\quad m \\in {\\mathbb Z}_{\\ge s+2},\\\\\n&&{Q^{(a)}_{s+1}}^{2} =Q^{(a-1)}_{s+1}Q^{(a+1)}_{s+1} \n\\quad \\mbox{for} \\quad a \\in {\\mathbb Z}_{\\ge r+2}, \n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& Q^{(a)}_{0}=Q^{(0)}_{m}=1\n\\quad {\\rm for} \\quad a,m \\in {\\mathbb Z}_{\\ge 1},\\nonumber \\\\\n&& Q^{(r+1)}_{a+s}=\n\\zeta^{a-1}\nQ^{(r+a)}_{s+1} \\quad {\\rm for} \\quad a \\in Z_{\\ge 1} .\n\\end{eqnarray} \nThe $Q$-system was introduced \\cite{K89,KR90} as functional relations among \ncharacters of finite dimensional representations of \nYangians (or quantum affine algebras) associated with simple Lie algebras. \nThe above system of equations is a superalgebra version of them. \n\nIn closing this section, \nlet us comment on the analyticity of the auxiliary function (\\ref{DVF}). \nAs mentioned before, the free energy (\\ref{free-en-qtm}) is given only by the \nlargest eigenvalue of the QTM (\\ref{QTM}). \nThen we are only interested in a root of the BAE (\\ref{BAE}) \nwhich gives the largest eigenvalue of the QTM. \nJudging from numerical calculations \\cite{JKS97,JKS98,T03,TT05}, \nsuch a root will exist in the sector \n$\\frac{N}{2}=M_{1}=\\cdots =M_{r+s+1}$ of the BAE, \nand it will form \\symbol{\"60}one-string' on the complex plane. 
\nFor this root, the zeros of the auxiliary function (\ref{DVF}) will \nexist near the lines ${\rm Im} v= \pm \frac{a+m}{2}$, \nat least for $\{\mu_{a}\}=\{0\}$ and small $u$ \n(see figures in \cite{JKS98,T03,TT05}). \nIn this sector, we have \n\begin{eqnarray}\n&& \xi_{b}=1 \qquad {\rm for} \qquad b \in B,\n\nonumber \\ \n&& \varepsilon_{b}=1 \qquad {\rm for} \qquad b \in B-\{r+s+2 \},\n\label{para}\n\\\n&& \zeta=\exp(\frac{\sum_{a\in B_{+}}\mu_{a}-\sum_{a\in B_{-}}\mu_{a}}{T}).\n\nonumber \end{eqnarray} \nFrom now on, we only consider the largest eigenvalue of the \nQTM, and assume these values (\ref{para}) of the parameters. \n\section{The nonlinear integral equations}\nIn this section, we will derive NLIE by using the formulae in the previous section. \nWe will treat two kinds of NLIE, paying attention to the value of the \nparameter $\xi \in \{-1,1\}$. \nAlthough the first step of the calculation (\ref{mustput})-(\ref{nlie2}) is similar to \nthe $s=-1$ case \cite{TSK01,T03,TT05}, we will present it for the reader's convenience. 
\n \nTaking note on \nthe limit (\\ref{limit}) and \n the fact that $\\widetilde{T}^{(a)}_{m}(v)$ has \npoles at $v=\\pm (\\frac{m+a}{2}\\xi i +iu)+ \\frac{n \\pi}{\\eta}$ \n($n \\in {\\mathbb Z}$) \nfor $(a,m) \\in \\{1,2,\\dots, r+1\\}\\times {\\mathbb Z}_{\\ge 1} \\cup \n{\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots, s+1\\}$, \nwe can expand ${\\widetilde T}^{(a)}_{m}(v)$ as follows.\n\\begin{eqnarray}\n&& {\\widetilde T}^{(a)}_{m}(v)=Q^{(a)}_{m}\n \\label{mustput} \\\\ \n&& \\hspace{20pt} +\n\\sum_{n \\in {\\mathbb Z}} \n\\sum_{j=1}^{\\frac{N}{2}} \n \\left\\{ \n\\frac{A^{(a)}_{m,j}}{(v-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi n}{\\eta})^{j}}\n+\n\\frac{{\\bar A}^{(a)}_{m,j}}{(v+\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi n}{\\eta})^{j}}\n\\right\\},\n\\nonumber \n\\end{eqnarray}\nwhere the coefficients $A^{(a)}_{m,j}, {\\bar A}^{(a)}_{m,j} \\in {\\mathbb C}$ \ncan be expressed as contour integrals:\n\\begin{eqnarray}\n&& A^{(a)}_{m,j}= \\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\widetilde{T}^{(a)}_{m}(v)(v-\\frac{a+m}{2}\\xi i-iu)^{j-1},\\nonumber \\\\\n&& \\overline{A}^{(a)}_{m,j}=\n \\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\widetilde{T}^{(a)}_{m}(v)(v+\\frac{a+m}{2}\\xi i+iu)^{j-1}.\n \\label{coeff}\n\\end{eqnarray}\nHere the contour ${\\tilde C}^{(a)}_{m}$ (resp. $\\overline{\\tilde C}^{(a)}_{m}$) \nis a counterclockwise closed loop \nwhich surrounds $v=\\frac{a+m}{2}\\xi i+iu$ (resp. $v=-\\frac{a+m}{2}\\xi i-iu$) \nand does not surround $v=-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi n}{\\eta},\n\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi k}{\\eta},$ \n(resp. 
$v=\\frac{a+m}{2}\\xi i+iu+\\frac{\\pi n}{\\eta}, \n-\\frac{a+m}{2}\\xi i-iu-\\frac{\\pi k}{\\eta}$), where $n \\in {\\mathbb Z}, k \\in {\\mathbb Z}-\\{0\\} $.\nUsing the $T$-system (\\ref{T-sys})-(\\ref{T-sys-a}), \nwe can rewrite (\\ref{coeff}) as \n\\begin{eqnarray}\n&& A^{(a)}_{m,j}= \\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n \\bigg\\{\n \\frac{\\widetilde{T}^{(a)}_{m-1}(v-\\frac{\\xi i}{2})\n \\widetilde{T}^{(a)}_{m+1}(v-\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v-\\xi i)} \\nonumber \\\\\n&& \\hspace{80pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(v-\\frac{\\xi i}{2})\n \\widetilde{T}^{(a+1)}_{m}(v-\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v-\\xi i)}\n \\bigg\\}\n (v-\\frac{a+m}{2}\\xi i-iu)^{j-1},\\nonumber \\\\\n&& \\overline{A}^{(a)}_{m,j}=\n \\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} v}{2\\pi i}\n\\bigg\\{\n \\frac{\\widetilde{T}^{(a)}_{m-1}(v+\\frac{\\xi i}{2})\n \\widetilde{T}^{(a)}_{m+1}(v+\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v+\\xi i)} \n\\label{coeff2} \\\\\n&& \\hspace{80pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(v+\\frac{\\xi i}{2})\n \\widetilde{T}^{(a+1)}_{m}(v+\\frac{\\xi i}{2})}\n {\\widetilde{T}^{(a)}_{m}(v+\\xi i)}\n \\bigg\\}\n (v+\\frac{a+m}{2}\\xi i+iu)^{j-1},\n \\nonumber \n\\end{eqnarray}\nwhere we admit $\\widetilde{T}^{(b)}_{n}(v)=0$ if \n$(b,n) \\in {\\mathbb Z }_{\\ge r+2}\\times {\\mathbb Z}_{\\ge s+2}$\n (cf. 
\\cite{DM92,MR92}).\nSubstituting (\\ref{coeff2}) into (\\ref{mustput}) and taking the summation\n over $j$, we obtain\n\\begin{eqnarray}\n&& \\hspace{-30pt}\n\\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \\nonumber \\\\ \n&& +\n\\sum_{n \\in {\\mathbb Z}}\n\\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{1-\\left(\\frac{y}{v-\\frac{a+m}{2} \\xi i-iu -\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}}\n {v-y-\\frac{a+m}{2} \\xi i-iu -\\frac{\\pi n}{\\eta}} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a)}_{m+1}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)} \\nonumber \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a+1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)}\n \\bigg\\} \\nonumber \\\\\n&& +\n\\sum_{n \\in {\\mathbb Z}}\n\\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{1-\\left(\\frac{y}{v+\\frac{a+m}{2} \\xi i+iu +\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}}\n {v-y+\\frac{a+m}{2} \\xi i+iu +\\frac{\\pi n}{\\eta}} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a)}_{m+1}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)} \n\\label{nlie1} \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a+1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)}\n \\bigg\\}.\n \\nonumber\n\\end{eqnarray}\nHere the contours are shifted as follows: \n the contour ${\\tilde C}^{(a)}_{m}$ (resp. $\\overline{\\tilde C}^{(a)}_{m}$) \nis a counterclockwise closed loop \nwhich surrounds $y=0 $ (resp. 
$y=0$)\nand does not surround $y=-(a+m)\\xi i-2iu-\\frac{\\pi n}{\\eta},\\frac{\\pi k}{\\eta}$ \n(resp. $y=(a+m)\\xi i+2iu+\\frac{\\pi n}{\\eta},\\frac{\\pi k}{\\eta}$), \nwhere $n \\in {\\mathbb Z}, k \\in {\\mathbb Z}-\\{0 \\}$.\nWe can neglect the terms $\\left(\\frac{y}{v \\pm \\frac{a+m}{2} \\xi i \\pm iu \\pm \n\\frac{\\pi n}{\\eta}}\\right)^{\\frac{N}{2}}$ in (\\ref{nlie1}) since the poles at $y=0$ in \n the two brackets $\\{\\cdots \\}$ \nare canceled by the zeros from these terms. \nBy using the following relation\n\\begin{eqnarray}\n\\lim_{m \\to \\infty}\n\\sum_{n=-m}^{m}\\frac{1}{v-\\frac{\\pi n}{\\eta}}\n=\\frac{\\eta}{\\tan \\eta v},\n\\end{eqnarray}\nwe can take the summation over $n \\in {\\mathbb Z}$.\n\\begin{eqnarray}\n&& \\hspace{-30pt}\n\\widetilde{T}^{(a)}_{m}(v)=Q^{(a)}_{m} \\nonumber \\\\ \n&& +\n\\oint_{{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{\\eta }\n {\\tan \\eta (v-y-\\frac{a+m}{2} \\xi i-iu)} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a)}_{m+1}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)} \\nonumber \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu) \n \\widetilde{T}^{(a+1)}_{m}(y+\\frac{a+m-1}{2} \\xi i+iu)}\n {\\widetilde{T}^{(a)}_{m}(y+\\frac{a+m-2}{2} \\xi i+iu)}\n \\bigg\\} \\nonumber \\\\\n&& +\n\\oint_{\\overline{\\tilde C}^{(a)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n\\frac{\\eta }\n {\\tan \\eta (v-y+\\frac{a+m}{2} \\xi i+iu)} \n \\nonumber \\\\\n&&\\hspace{20pt} \\times \n \\bigg\\{ \n \\frac{\\widetilde{T}^{(a)}_{m-1}(y-\\frac{a+m-1}{2} \\xi i-iu) \n \\widetilde{T}^{(a)}_{m+1}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)} \n\\label{nlie2} \\\\\n && \\hspace{50pt} +\n \\frac{\\widetilde{T}^{(a-1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu) \n 
\\widetilde{T}^{(a+1)}_{m}(y-\\frac{a+m-1}{2} \\xi i-iu)}\n {\\widetilde{T}^{(a)}_{m}(y-\\frac{a+m-2}{2} \\xi i-iu)}\n \\bigg\\},\n \\nonumber \\\\\n && {\\rm for} \\quad (a,m) \\in \n\\{1,2,\\dots,r+1\\} \\times {\\mathbb Z}_{\\ge 1} \\cup \n{\\mathbb Z}_{\\ge 1} \\times \\{1,2,\\dots,s+1\\}.\n \\nonumber \n\\end{eqnarray}\nIn the next subsection, we will consider specializations \nof this system of NLIE (\\ref{nlie2}). \n\\subsection{The nonlinear integral equations for $\\xi=1$}\nLet us consider the NLIE (\\ref{nlie2}) for $\\xi=1$ and $m=1$. \nTaking note of the fact ${\\widetilde T}^{(a)}_{0}(v)=1$ (cf. (\\ref{a0})), \nwe can drop the first terms in the two brackets $\\{\\cdots \\}$ in (\\ref{nlie2}) \nsince they have no poles at $y=0$.\nThen the NLIE (\\ref{nlie2}) reduce to the following NLIE on \n${\\mathcal T}^{(a)}_{1}(v)=\\lim_{N \\to \\infty}\\widetilde{T}^{(a)}_{1}(v)$ \n after the Trotter limit $N \\to \\infty $ with $u=-\\frac{J \\sinh \\eta }{\\eta N T}$:\n\\begin{eqnarray}\n{\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n&+&\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{120pt} \n {\\rm for} \\quad a \\in {\\mathbb Z}_{\\ge 1},\n \\label{nlie4}\n\\end{eqnarray}\nwhere \nthe contour $C^{(a)}_{1}$ (resp. $\\overline{C}^{(a)}_{1}$) \nis a counterclockwise closed loop around $y=0$ (resp. $y=0$) \nwhich satisfies the condition \n$y \\ne v-\\frac{a+1}{2}i+\\frac{\\pi n}{\\eta}$ \n(resp. 
$y \\ne v+\\frac{a+1}{2}i+\\frac{\\pi n}{\\eta}$) and \ndoes not surround \n$z^{(a)}_{1}-\\frac{a-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$-(a+1)i\n+\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$ \n(resp. \n$z^{(a)}_{1}+\\frac{a-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$(a+1)i +\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$); \n($n \\in \\mathbb{Z}$, $k \\in \\mathbb{Z}-\\{0\\}$). \nHere we put the zeros of $\\mathcal{T}^{(a)}_{1}(v)$ as $\\{ z^{(a)}_{1} \\} $: \n$\\mathcal{T}^{(a)}_{1}(z^{(a)}_{1})=0$. \n$\\mathcal{T}^{(0)}_{1}(v)$ is a known function:\n\\begin{eqnarray} \n\\mathcal{T}^{(0)}_{1}(v)=\n\\lim_{N \\to \\infty} \\widetilde{T}^{(0)}_{1}(v)\n=\\exp \\left(\\frac{2J (\\sinh \\eta)^{2} }\n{T(\\cosh \\eta -\\cos (2\\eta v))}\\right).\n\\end{eqnarray}\nNote that (\\ref{nlie4}) are an infinite number of coupled NLIE \nif $ s \\in {\\mathbb Z}_{\\ge 0} $. \nThis situation is quite different from the $U_{q}(\\widehat{sl}(r+1))$\n case \\cite{TT05,T03,TSK01}. \nHowever, these NLIE are not independent, so \nwe will take the first $r+s+1$ of them ((\\ref{nlie4}) for $a \\in \\{1,2,\\dots r+s+1 \\}$). 
\nThe NLIE for $a=r+s+1$ contains $\\mathcal{T}^{(r+s+2)}_{1}(v)$, so we \nwill eliminate it by using the relation (\\ref{a+r+s}), \nwhere $W^{(a)}_{m}(v)=1$ for $\\xi=1$.\n\\begin{eqnarray}\n&& {\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n+\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{40pt} +\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad a \\in \\{1,2,\\dots r+s \\},\n \\label{nlie-general} \\\\\n && {\\mathcal T}^{(r+s+1)}_{1}(v)=Q^{(r+s+1)}_{1}\n \\nonumber \\\\ \n&&\\hspace{20pt} +\n\\oint_{C^{(r+s+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r+s)}_{1}(y+\\frac{i (r+s+1)}{2}) \n \\mathcal{F}(y+\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y-\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(r+s+1)}_{1}(y+\\frac{i(r+s)}{2})}\n \\nonumber \\\\\n&& \\hspace{20pt}+\n\\oint_{\\overline{C}^{(r+s+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r+s)}_{1}(y-\\frac{i (r+s+1)}{2}) \n \\mathcal{F}(y-\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(r+s+1)}_{1}(y-\\frac{i(r+s)}{2})} ,\n\\label{nlie-generalb}\n \\\\\n&& \\hspace{20pt} \n\\mathcal{F}(v)=\\lim_{N \\to \\infty }\\widetilde{T}^{(r+s+2)}_{1}(v)=\n\\frac{\nA_{1}(v)-\n\\zeta \nA_{2}(v)\n}\n{(-1)^{s}A_{3}(v)+(-1)^{s} \n\\zeta \nA_{4}(v)},\n\\label{det-hashi}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& A_{1}(v)=\\det _{1\\le j,k \\le s+2}\n\\left(f_{j,k}\n\\left(\nv-\\frac{j+k-s-3}{2}i\n\\right) \n\\right) \\\\\n&& \\quad 
f_{j,k}(v)=\\mathcal{T}^{(r+1+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+2,1), \n\\quad f_{s+2,1}(v)=0, \\nonumber \\\\\n&& A_{2}(v)=\n\\det _{1\\le j,k \\le s+1}\n\\left(g_{j,k}\n\\left(\nv-\\frac{j+k-s-2}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad g_{j,k}(v)=\\mathcal{T}^{(r+2+j-k)}_{1}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (s+1,1), \n\\quad g_{s+1,1}(v)=0, \\nonumber \\\\ \n&& A_{3}(v)=\\det _{1\\le j,k \\le s+1}\n\\left(\\mathcal{T}^{(r+j-k)}_{1}\n\\left(\nv-\\frac{j+k-2-s}{2}i\n\\right) \n\\right), \\\\\n&& A_{4}(v)=\n\\det _{1\\le j,k \\le s}\n\\left(\\mathcal{T}^{(r+j-k+1)}_{1}\n\\left(\nv-\\frac{j+k-s-1}{2}i\n\\right) \n\\right) \n.\n\\end{eqnarray}\nIf $s=-1$, then $A_{1}(v)=A_{4}(v)=0$ and \n$A_{2}(v)=A_{3}(v)=1$, and consequently \n(\\ref{det-hashi}) reduces to \n${\\mathcal F}(v)=\\mathcal{T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1}=\n\\zeta =\ne^{\\frac{\\mu_{1}+\\cdots +\\mu_{r+1}}{T}}$, where \nthe determinants should be interpreted as \n$\\det_{1\\le j,k \\le 0} (\\cdots )=1$, $\\det_{1\\le j,k \\le -1} (\\cdots )=0$. 
Thus \n(\\ref{nlie-general}) and (\\ref{nlie-generalb}) \nreduce to the NLIE for $U_{q}(\\widehat{sl}(r+1))$ in \\cite{TT05}.\nIn particular, for $s=0$ (the $U_{q}(\\widehat{sl}(r+1|1))$ case), we can use \n(\\ref{sol}): \n\\begin{eqnarray}\n&& {\\mathcal T}^{(a)}_{1}(v)=Q^{(a)}_{1} \n+\n\\oint_{C^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y+\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y+\\frac{i a}{2})}\n {\\tan \\eta (v-y-\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y+\\frac{i(a-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{76pt} +\n\\oint_{\\overline{C}^{(a)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(a-1)}_{1}(y-\\frac{i a}{2}) \n \\mathcal{T}^{(a+1)}_{1}(y-\\frac{i a}{2})}\n {\\tan \\eta (v-y+\\frac{i(a+1)}{2})\n \\mathcal{T}^{(a)}_{1}(y-\\frac{i(a-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{100pt} \n {\\rm for} \\quad a \\in \\{1,2,\\dots r \\},\n \\label{nlie-s=0} \\\\\n&& {\\mathcal T}^{(r+1)}_{1}(v)=Q^{(r+1)}_{1} \n \\nonumber \\\\ \n&& \\hspace{10pt}+\n\\oint_{C^{(r+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r)}_{1}(y+\\frac{i (r+1)}{2}) \n \\mathcal{T}^{(r+1)}_{1}(y+\\frac{i(r+2)}{2})}\n {\\tan \\eta (v-y-\\frac{i(r+2)}{2})\n (\n \\zeta \n+\\mathcal{T}^{(r)}_{1}(y+\\frac{i(r+1)}{2}))}\n \\nonumber \\\\\n&& \\hspace{10pt}+\n\\oint_{\\overline{C}^{(r+1)}_{1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(r)}_{1}(y-\\frac{i (r+1)}{2}) \n \\mathcal{T}^{(r+1)}_{1}(y-\\frac{i(r+2)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+2)}{2})\n (\n \\zeta \n+\\mathcal{T}^{(r)}_{1}(y-\\frac{i(r+1)}{2}))}.\n \\nonumber \\\\\n && \\label{nlie-s=0b}\n\\end{eqnarray}\nThe free energy per site is given by a solution of these \nNLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}):\n\\begin{eqnarray}\nf=J \\cosh \\eta -T \\log \\mathcal{T}^{(1)}_{1}(0).\n \\label{free-en}\n\\end{eqnarray}\nIn these NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}), \nthe number of unknown functions and 
equations is \n$r+s+1$, which contrasts with TBA equations \\cite{Sch87,Sch92,EK94,JKS98,Sa99}.\n\\subsection{The nonlinear integral equations for $\\xi=-1$}\nNext, \nlet us consider the NLIE (\\ref{nlie2}) for $\\xi=-1$ and $a=1$. \nTaking note of the fact ${\\widetilde T}^{(0)}_{m}(v)=1$ (cf. (\\ref{0m})), \nwe can drop the second terms in the two brackets $\\{\\cdots \\}$ in (\\ref{nlie2}) \nsince they have no poles at $y=0$.\nThen the NLIE (\\ref{nlie2}) reduce to the following NLIE on \n${\\mathcal T}^{(1)}_{m}(v)=\\lim_{N \\to \\infty}\\widetilde{T}^{(1)}_{m}(v)$ \n after the Trotter limit $N \\to \\infty $ with $u=-\\frac{J \\sinh \\eta }{\\eta N T}$:\n\\begin{eqnarray}\n{\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} \n&+&\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad m \\in {\\mathbb Z}_{\\ge 1},\n \\label{infinitenlie-xi=-1}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray} \n\\mathcal{T}^{(1)}_{0}(v)=\n\\lim_{N \\to \\infty} \\widetilde{T}^{(1)}_{0}(v)\n=\\exp \\left(-\\frac{2J (\\sinh \\eta)^{2} }\n{T(\\cosh \\eta -\\cos (2\\eta v))}\\right),\n\\end{eqnarray}\nand the contour $C^{(1)}_{m}$ (resp. $\\overline{C}^{(1)}_{m}$) \nis a counterclockwise closed loop around $y=0$ (resp. $y=0$) \nwhich satisfies the condition \n$y \\ne v+\\frac{m+1}{2}i+\\frac{\\pi n}{\\eta}$ \n(resp. 
$y \\ne v-\\frac{m+1}{2}i+\\frac{\\pi n}{\\eta}$) and \ndoes not surround \n$z^{(1)}_{m}+\\frac{m-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$(1+m)i\n+\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$ \n(resp. \n$z^{(1)}_{m}-\\frac{m-1}{2}i+\\frac{\\pi n}{\\eta}$, \n$-(1+m)i +\\frac{\\pi n}{\\eta}$, $\\frac{\\pi k}{\\eta}$) \n ($n \\in \\mathbb{Z}$, $k \\in \\mathbb{Z}-\\{0\\}$). \nHere $\\{z^{(1)}_{m}\\}$ are the zeros of ${\\mathcal T}^{(1)}_{m}(v)$: \n${\\mathcal T}^{(1)}_{m}(z^{(1)}_{m})=0$. \nThese are an infinite number of coupled NLIE. \nWe can reduce them as in the $\\xi=1$ case. \nBy using (\\ref{a+r+s-b}) in the limit $N \\to \\infty$,\n we can reduce (\\ref{infinitenlie-xi=-1}) \nas follows, \nwhere $Z^{(a)}_{m}(v)=1$ for $\\xi=-1$.\n\\begin{eqnarray}\n{\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} \n&+&\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&+&\n\\oint_{\\overline{C}^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{70pt} \n {\\rm for} \\quad m \\in \\{1,2,\\dots r+s \\},\n \\label{nlie-xi=-1} \\\\\n{\\mathcal T}^{(1)}_{r+s+1}(v)=Q^{(1)}_{r+s+1} \n&+&\n\\oint_{C^{(1)}_{r+s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{r+s}(y-\\frac{i (r+s+1)}{2}) \n \\mathcal{G}(y-\\frac{i (r+s+1)}{2})}\n {\\tan \\eta (v-y+\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(1)}_{r+s+1}(y-\\frac{i(r+s)}{2})}\n \\nonumber \\\\\n&& \\hspace{-70pt}+\n\\oint_{\\overline{C}^{(1)}_{r+s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{r+s}(y+\\frac{i (r+s+1)}{2}) \n \\mathcal{G}(y+\\frac{i (r+s+1)}{2})}\n {\\tan \\eta 
(v-y-\\frac{i(r+s+2)}{2})\n \\mathcal{T}^{(1)}_{r+s+1}(y+\\frac{i(r+s)}{2})} ,\n \\label{nlie-xi=-1b}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\mathcal{G}(v)=\n\\lim_{N \\to \\infty}\n\\widetilde{T}^{(1)}_{r+s+2}(v)=\n\\frac{\n\\zeta \nA_{5}(v)-A_{6}(v)\n}\n{(-1)^{r}\n\\zeta \nA_{7}(v)+(-1)^{r} A_{8}(v)},\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&& A_{5}(v)=\\det _{1\\le j,k \\le r+2}\n\\left( h_{j,k}\n\\left(\nv-\\frac{r+3-j-k}{2}i\n\\right) \n\\right) \\\\\n&& \\quad h_{j,k}(v)={\\mathcal T}^{(1)}_{s+1+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (2+r,1), \n\\quad h_{r+2,1}(v)=0, \\nonumber \\\\\n&& A_{6}(v)=\n\\det _{1\\le j,k \\le r+1}\n\\left( b_{j,k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right) \n \\\\\n&& \\quad b_{j,k}(v)={\\mathcal T}^{(1)}_{s+2+j-k}(v) \n \\quad \\mbox{for} \\quad (j,k) \\ne (r+1,1), \n\\quad b_{r+1,1}(v)=0, \\nonumber \\\\ \n&& A_{7}(v)=\\det _{1\\le j,k \\le r+1}\n\\left({\\mathcal T}^{(1)}_{s+j-k}\n\\left(\nv-\\frac{r+2-j-k}{2}i\n\\right) \n\\right), \\\\\n&&A_{8}(v)=\n\\det _{1\\le j,k \\le r}\n\\left({\\mathcal T}^{(1)}_{s+1+j-k}\n\\left(\nv-\\frac{r+1-j-k}{2}i\n\\right) \n\\right) \n,\n\\end{eqnarray}\nwhere ${\\mathcal T}^{(1)}_{m}(v)=0$ for $m<0 $.\n\nIn particular, for $r=0$ (the $U_{q}(\\widehat{sl}(1|s+1))$ case), we can use \n(\\ref{sol}): \n\\begin{eqnarray}\n&& {\\mathcal T}^{(1)}_{m}(v)=Q^{(1)}_{m} +\n\\oint_{C^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y-\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y-\\frac{i m}{2})}\n {\\tan \\eta (v-y+\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y-\\frac{i(m-1)}{2})}\n \\nonumber \\\\\n&& \\hspace{76pt} +\n\\oint_{\\overline{C}^{(1)}_{m}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{m-1}(y+\\frac{i m}{2}) \n \\mathcal{T}^{(1)}_{m+1}(y+\\frac{i m}{2})}\n {\\tan \\eta (v-y-\\frac{i(m+1)}{2})\n \\mathcal{T}^{(1)}_{m}(y+\\frac{i(m-1)}{2})}\n \\nonumber \\\\ \n && \\hspace{100pt} \n {\\rm for} \\quad m \\in 
\\{1,2,\\dots s \\},\n \\label{nlie-r=0} \\\\\n&& {\\mathcal T}^{(1)}_{s+1}(v)=Q^{(1)}_{s+1} \n\\nonumber \\\\\n&& \\hspace{8pt} +\n\\oint_{C^{(1)}_{s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{s}(y-\\frac{i (s+1)}{2}) \n \\mathcal{T}^{(1)}_{s+1}(y-\\frac{i(s+2)}{2})}\n {\\tan \\eta (v-y+\\frac{i(s+2)}{2})\n (\n \\zeta^{-1}\n+\\mathcal{T}^{(1)}_{s}(y-\\frac{i(s+1)}{2}))}\n \\nonumber \\\\\n&& \\hspace{8pt}+\n\\oint_{\\overline{C}^{(1)}_{s+1}} \\frac{{\\mathrm d} y}{2\\pi i} \n \\frac{\\eta \n \\mathcal{T}^{(1)}_{s}(y+\\frac{i (s+1)}{2}) \n \\mathcal{T}^{(1)}_{s+1}(y+\\frac{i(s+2)}{2})}\n {\\tan \\eta (v-y-\\frac{i(s+2)}{2})\n (\n \\zeta^{-1}\n+\\mathcal{T}^{(1)}_{s}(y+\\frac{i(s+1)}{2}))}. \n\\nonumber \\\\ \n\\label{nlie-r=0b}\n\\end{eqnarray}\nThe free energy per site is given by a solution of these \nNLIE (\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b}):\n\\begin{eqnarray}\nf=-J \\cosh \\eta -T \\log \\mathcal{T}^{(1)}_{1}(0).\n \\label{free-en2}\n\\end{eqnarray}\nIn some sense, these NLIE are \\symbol{\"60}dual' to the ones in the previous section. \nThe NLIE (\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b}) have only $r+s+1$ unknown functions. \nThese NLIE have never been considered before even for the $U_{q}(\\widehat{sl}(2))$ case. \n\\section{High temperature expansions} \nIn this section, we will calculate the high temperature \nexpansion of the free energy from our new NLIE. 
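Since the free energy enters as a constant minus $T$ times a series in $t=J/T$ (cf. (\ref{free-en}), (\ref{free-en2})), the specific heat $C=-T\,\partial^{2}f/\partial T^{2}$ acts term by term: if $f=e_{0}-T\sum_{n}c_{n}t^{n}$, then $C=\sum_{n}n(n-1)c_{n}t^{n}$. A minimal sketch of this bookkeeping in Python (the coefficients below are generic symbols and toy numbers, not the actual series of the Perk-Schultz model), including an illustrative Padé approximant of the truncated series via `scipy.interpolate.pade`:

```python
# Sketch: specific heat from a high-temperature series of the free energy.
# The c_n are generic symbols / toy numbers, NOT the Perk-Schultz series.
import sympy as sp

T, J = sp.symbols('T J', positive=True)
deg = 6
c = sp.symbols('c0:6')

# f = -T * sum_n c_n (J/T)^n  (a constant e_0 would drop out of d^2/dT^2)
f = -T * sum(c[n] * (J / T)**n for n in range(deg))
C = sp.expand(-T * sp.diff(f, T, 2))

# the (J/T)^n coefficient of C is n(n-1) c_n
for n in range(deg):
    assert sp.simplify(C.coeff(J, n) * T**n - n * (n - 1) * c[n]) == 0

# A Pade approximant of the truncated series in t = J/T (toy coefficients):
from scipy.interpolate import pade
toy = [0.0, 0.0, 2.0, -1.5, 0.7, -0.2]   # stands for n(n-1) c_n
p, q = pade(toy, 2)                      # [3/2] approximant
t0 = 0.1
taylor = sum(a * t0**k for k, a in enumerate(toy))
assert abs(p(t0) / q(t0) - taylor) < 1e-4
```

The plain series and its Padé approximants agree to the order of the input data; the approximants typically extend the useful temperature range, which is how the specific-heat plots below are produced (there with Mathematica rather than Python).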
\nFor large $T\/|J|$, we assume the following expansion:\n\\begin{eqnarray}\n&&\\mathcal{T}^{(a)}_{1}(v)=\n \\exp \\left(\\sum_{n=0}^{{\\mathrm deg}}b_{n}^{(a)}(v)(\\frac{J}{T})^{n} \n+O((\\frac{J}{T})^{{\\mathrm deg}+1}) \\right)\n \\nonumber \n\\\\\n&& =Q^{(a)}_{1}\\Biggl\\{ 1+b^{(a)}_{1}(v)\\frac{J}{T}+\n\\left(b^{(a)}_{2}(v)+\\frac{(b^{(a)}_{1}(v))^2}{2}\\right)(\\frac{J}{T})^2\n+ \\label{hte-ta} \\\\\n&& \\left(b^{(a)}_{3}(v)+b^{(a)}_{2}(v)b^{(a)}_{1}(v)+\n\\frac{(b^{(a)}_{1}(v))^3}{6}\\right)\n(\\frac{J}{T})^3 +\\cdots \\Biggr\\}+O((\\frac{J}{T})^{{\\mathrm deg}+1}),\n\\nonumber \n\\end{eqnarray}\nwhere $b_{0}^{(a)}(v)=\\log Q^{(a)}_{1}$. \nHere we do not expand $\\{Q^{(b)}_{1}\\}_{b \\ge 1}$ with respect to $\\frac{J}{T}$. \nThus the coefficients $\\{b^{(a)}_{n}(v) \\}$ \nthemselves depend on $\\frac{1}{T}$.\nIn this sense, our high temperature expansion formula \n is different from the ordinary one. \nSubstituting (\\ref{hte-ta}) into some of the NLIE \n(\\ref{nlie4})-(\\ref{nlie-s=0b}), \nwe can calculate the coefficients $\\{b^{(a)}_{n}(v) \\}$ up to the order of $n={\\mathrm deg}$. \nNote that we only need $\\{b^{(1)}_{n}(0) \\}$ to calculate the free energy (\\ref{free-en}). \nTaking note of this fact, \nwe first use\n\\footnote{As for numerical calculations of the free energy, \nwe expect that the reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}) \nare easier to use than the non-reduced NLIE (\\ref{nlie4}).}\n a subset (NLIE for $a \\in \\{1,2,\\dots, {\\mathrm deg} \\}$) \nof the non-reduced NLIE (\\ref{nlie4}) \nrather than the reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}). \nWe have observed that $b^{(1)}_{n}(0)$ can be expressed in terms of \n\\footnote{For the $s=-1$ case, \nthey are \n$Q^{(1)}_{1},Q^{(2)}_{1}, \\dots ,Q^{(d)}_{1}$: \n$d=\\min (n+1,r+1)$ since \n$Q^{(a)}_{1}=0$ if $a \\ge r+2$.}\n$Q^{(1)}_{1},Q^{(2)}_{1}, \\dots ,Q^{(n+1)}_{1}$. \nWe have calculated the coefficients by using Mathematica. 
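The bracketed polynomial in (\ref{hte-ta}) is simply the re-expansion of the exponential ansatz; the bookkeeping can be checked symbolically with generic symbols $b_1,b_2,b_3$ (a sanity check of the expansion structure only, not a computation of the model's actual coefficients):

```python
import sympy as sp

t, b1, b2, b3 = sp.symbols('t b1 b2 b3')

# exp(b1 t + b2 t^2 + b3 t^3) re-expanded to O(t^4),
# as in the bracket of the high-temperature ansatz
ser = sp.series(sp.exp(b1*t + b2*t**2 + b3*t**3), t, 0, 4).removeO().expand()

assert sp.simplify(ser.coeff(t, 1) - b1) == 0
assert sp.simplify(ser.coeff(t, 2) - (b2 + b1**2 / 2)) == 0
assert sp.simplify(ser.coeff(t, 3) - (b3 + b2 * b1 + b1**3 / 6)) == 0
```

The same re-expansion, with $b^{(a)}_n(v)$ kept as unknown functions, is what turns the NLIE into a triangular system for the coefficients order by order in $J/T$.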
\nAs examples, we shall enumerate the coefficients $\\{b^{(1)}_{n}(0) \\}$ up to the \norder of $5$, where we put $\\Delta=\\cosh \\eta $. \n\\begin{eqnarray}\n&& \\hspace{-20pt}\nb^{(1)}_{1}(0)= \\frac{2 \\Delta Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}, \n \\label{coe1} \\\\\n&& \\hspace{-20pt}\nb^{(1)}_{2}(0)=-\\frac{6 \\Delta^2 {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^4}+\\frac{\\left(2 \\Delta^2+1\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}+\\frac{\\left(4 \\Delta^2-1\\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3},\n \\label{coe2} \\\\ \n&& \\hspace{-20pt}\nb^{(1)}_{3}(0)=\\frac{80 {Q^{(2)}_{1}}^3 \\Delta^3}{3\n {Q^{(1)}_{1}}^6}\n+\\frac{8 Q^{(3)}_{1} \\Delta^3}{{Q^{(1)}_{1}}^3}\n+\\frac{\\left(\\frac{4 \\Delta^3}{3}+2 \\Delta\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n\\nonumber \\\\\n&& \n\\hspace{-15pt}\n+\\frac{\\left(8 \\Delta-32 \\Delta^3\\right) Q^{(2)}_{1} Q^{(3)}_{1}}{{Q^{(1)}_{1}}^5}\n+\\frac{\\left(-12 \\Delta^3-6\n \\Delta\\right) {Q^{(2)}_{1}}^2\n+\\left(8 \\Delta^3-4 \\Delta\\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4},\n \\label{coe3} \\\\\n&&\\hspace{-20pt}\n b^{(1)}_{4}(0)=-\\frac{140 \\Delta^4\n {Q^{(2)}_{1}}^4}{{Q^{(1)}_{1}}^8}\n+\\frac{\\left(240 \\Delta^4-60 \\Delta^2\\right) Q^{(3)}_{1}\n {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&& \n+\\frac{\\left(\\frac{2 \\Delta^4}{3}+2 \\Delta^2+\\frac{1}{4}\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n+\\frac{\\left(\\frac{28 \\Delta^4}{3}+\\frac{14 \\Delta^2}{3}-\\frac{1}{4}\\right)\n Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-14 \\Delta^4-\\frac{56 \\Delta^2}{3}-\\frac{3}{2}\\right) \n {Q^{(2)}_{1}}^2+\\left(24 \\Delta^4-8\n \\Delta^2-1\\right) Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}\n\\nonumber \\\\\n&& \n+\\frac{\\left(80 \\Delta^4+40 \\Delta^2\\right) {Q^{(2)}_{1}}^3+\\left(40 \\Delta^2-80 \\Delta^4\\right)\n Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-40 \\Delta^4+20 \\Delta^2-\\frac{5}{2}\\right) {Q^{(3)}_{1}}^2}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-96 
\\Delta^4-8\n \\Delta^2+4\\right) Q^{(2)}_{1} Q^{(3)}_{1}\n +\\left(16 \\Delta^4-12 \\Delta^2+1\\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5},\n\\label{coe4} \n\\end{eqnarray}\n\\begin{eqnarray}\n&& \\hspace{-15pt} b^{(1)}_{5}(0)=\\frac{4032 \\Delta^5\n {Q^{(2)}_{1}}^5}{5 {Q^{(1)}_{1}}^{10}}\n +\\frac{\\left(448 \\Delta^3-1792 \\Delta^5\\right) Q^{(3)}_{1}\n {Q^{(2)}_{1}}^3}{{Q^{(1)}_{1}}^9}\n\\nonumber \\\\\n&& \n +\\frac{\\left(\\frac{4 \\Delta^5}{15}+\\frac{4 \\Delta^3}{3}+\\frac{\\Delta}{2}\\right)\n Q^{(2)}_{1}}{{Q^{(1)}_{1}}^2}\n+\\frac{\\left(8 \\Delta^5+10 \\Delta^3+\\frac{\\Delta}{2}\\right) Q^{(3)}_{1}}{{Q^{(1)}_{1}}^3}\n\\nonumber \\\\\n&& \n +\\frac{\\left(-12\n \\Delta^5-30 \\Delta^3-8 \\Delta\\right) {Q^{(2)}_{1}}^2+\\left(40 \\Delta^5-6 \\Delta\\right)\n Q^{(4)}_{1}}{{Q^{(1)}_{1}}^4}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-560 \\Delta^5-280\n \\Delta^3\\right) {Q^{(2)}_{1}}^4+\\left(672 \\Delta^5-336 \\Delta^3\\right) \n Q^{(4)}_{1} {Q^{(2)}_{1}}^2}{{Q^{(1)}_{1}}^8}\n \\nonumber \\\\\n&& \n+\\frac{\\left(672 \\Delta^5-336 \\Delta^3+42 \\Delta\\right)\n {Q^{(3)}_{1}}^2 Q^{(2)}_{1}}{{Q^{(1)}_{1}}^8}\n\\nonumber \\\\\n&& \n+\\frac{\\left(-160 \\Delta^5-100 \\Delta^3+11 \\Delta\\right) Q^{(2)}_{1} Q^{(3)}_{1}\n +\\left(64 \\Delta^5-40\n \\Delta^3\\right) Q^{(5)}_{1}}{{Q^{(1)}_{1}}^5}\n\\nonumber \\\\\n&& \n\\hspace{-10pt}\n+\\frac{\\left(960 \\Delta^5+120 \\Delta^3-60 \\Delta\\right) Q^{(3)}_{1} {Q^{(2)}_{1}}^2+\\left(-192\n \\Delta^5+144 \\Delta^3-12 \\Delta\\right) Q^{(5)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&&\n+\\frac{\\left(-192 \\Delta^5+144 \\Delta^3-24 \\Delta\\right) Q^{(3)}_{1}\n Q^{(4)}_{1}}{{Q^{(1)}_{1}}^7}\n\\nonumber \\\\\n&& \n+\\frac{\\left(\\frac{400 \\Delta^5}{3}+\\frac{500 \\Delta^3}{3}+20 \\Delta\\right) {Q^{(2)}_{1}}^3+\\left(-320\n \\Delta^5+80 \\Delta^3+30 \\Delta\\right) Q^{(4)}_{1} Q^{(2)}_{1}}{{Q^{(1)}_{1}}^6}\n\\nonumber \\\\\n&& \n+\\frac{\\left(40 \\Delta^3-160 \\Delta^5\\right) {Q^{(3)}_{1}}^2+\\left(32 \\Delta^5-32 
\\Delta^3+6\n \\Delta\\right) Q^{(6)}_{1}}{{Q^{(1)}_{1}}^6}.\n \\label{coe5} \n\\end{eqnarray}\nIn deriving these coefficients (\\ref{coe1})-(\\ref{coe5}), we \ndid not assume (\\ref{limit}). Of course, when one calculates the free energy of the model, \n one must assume (\\ref{limit}) and (\\ref{para}).\nWe can also rewrite the coefficient $b^{(1)}_{n}(0)$ in terms of \n$Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(d)}_{1}$ and $\\zeta$ \n\\footnote{\n$Q^{(r+1)}_{1}=\\zeta$ if $s=-1$.}\n( $d=\\min (n+1,r+s+1)$ ) since $Q^{(a)}_{1}$ for $a \\in {\\mathbb Z}_{\\ge r+s+2}$ can \nbe written in terms of $Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(r+s+1)}_{1}$ and $\\zeta$ \ndue to the relation (\\ref{a+r+s}) in the limit $v \\to i\\eta^{-1} \\infty $ \n(see also an example: (\\ref{Q11-sl21})-(\\ref{Qa1-sl21})). \nIf $b^{(1)}_{n}(0)$ is written in terms of \n$Q^{(1)}_{1},Q^{(2)}_{1},\\dots,Q^{(d)}_{1}$ and $\\zeta$ ( \n$d=\\min (n+1,r+s+1)$), it should be the coefficient \nof the high temperature expansion directly derived from \n the reduced NLIE (\\ref{nlie-general})-(\\ref{nlie-s=0b}). \n Of course, these two expressions of the coefficient $b^{(1)}_{n}(0)$ are \n equivalent under the relations (\\ref{limit}) and (\\ref{para}). \n \n For fixed values of parameters, we have calculated \n the high temperature expansion to much higher order (see the appendix). \nWe have plotted the high temperature expansion \nof the specific heat (Figures \\ref{specific2}-\\ref{specific4}).\n Here we have adopted the Pade approximation method. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific2.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 2 case ($r+s=1$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). 
We have plotted \nplain series (dotted lines) of $C$ in the Appendix and their Pade approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$, \n denominator: a degree $d$ polynomial of $1\/T$) \nby using Mathematica: \n each line denotes $C$ for \n$sl(3|0)$ with [20,20] (thin), $sl(2|1)$ with [17,17] (medium),\n$sl(1|2)$ with [17,17] (thick), $sl(0|3)$ with [20,20] (dashed thick), respectively. \nWe have also plotted (thick dots) \na result of a numerical calculation from another NLIE by J\\\"uttner \nand Kl\\\"umper \\cite{JK97} for the $sl(2|1)$ case. \n $C$ for the $sl(3|0)$ case was also \nconsidered in \\cite{FK02,FK99}.}\n\\label{specific2}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific3.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 3 case ($r+s=2$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). We have plotted \nplain series (dotted lines) of $C$ in the Appendix and their Pade approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$, \n denominator: a degree $d$ polynomial of $1\/T$): \n each line denotes $C$ for \n$sl(4|0)$ with [19,20] (thin), $sl(3|1)$ with [17,17] (medium),\n$sl(2|2)$ with [16,16] (thick), $sl(1|3)$ with [17,17] \n(dashed medium), $sl(0|4)$ with [18,21] (dashed thick), respectively. \n $C$ for the $sl(4|0)$ case was also \nconsidered in \\cite{FK02}.}\n\\label{specific3}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]\n{specific4.eps}\n\\end{center}\n\\caption{Temperature dependence of the high temperature \nexpansion of the specific heat $C$ \nfor the rank 4 case ($r+s=3$, $J=1$, $q=1$, \n$\\mu_{a}=0$ ($a \\in B$)). 
We have plotted \nplain series (dotted lines) of $C$ in the Appendix and their Pade approximations \nof order [$n$,$d$] (numerator: a degree $n$ polynomial of $1\/T$, \n denominator: a degree $d$ polynomial of $1\/T$): \n each line denotes $C$ for \n$sl(5|0)$ with [17,21] (thin), $sl(4|1)$ with [16,18] (medium),\n$sl(3|2)$ with [17,17] (thick), \n$sl(2|3)$ with [16,17] (dashed thin), $sl(1|4)$ with [16,18] \n(dashed medium), $sl(0|5)$ with [17,21] (dashed thick), respectively. }\n\\label{specific4}\n\\end{figure}\nThere is a duality among the specific heats with respect to the interchange of \n$r$ and $s$. \nIn particular, the $r=s$ case is self-dual, so \nthe specific heat becomes an even function of $T$ (see (\\ref{hte-sl22})). \nIn Figure \\ref{specific2}, we have also plotted a result of \n a numerical calculation from another NLIE \\cite{JK97}.\nWe find a good agreement \n between our result and their result except for the very low temperature region. \n \nWe can also calculate the high temperature expansion from the NLIE \nfor $\\xi=-1$ in subsection 3.2. \nSimilarly to the $\\xi=1$ case, we assume \n\\begin{eqnarray}\n&&\\mathcal{T}^{(1)}_{m}(v)=\n \\exp \\left(\\sum_{n=0}^{{\\mathrm deg}}\\widehat{b}_{m,n}(v)(\\frac{J}{T})^{n} \n+O((\\frac{J}{T})^{{\\mathrm deg}+1}) \\right) ,\n\\label{hte-tm}\n\\end{eqnarray}\nwhere $\\widehat{b}_{m,0}(v)=\\log Q^{(1)}_{m}$. \nHere we do not expand $\\{ Q^{(1)}_{k} \\}_{k \\ge 1}$ with respect to $\\frac{J}{T}$. \nThe expansion (\\ref{hte-ta}) for $a=1$ should coincide with \n(\\ref{hte-tm}) for $m=1$ up to a factor from \nthe normalization function (\\ref{normal}). 
\nThus we have \n\\begin{eqnarray}\nb^{(1)}_{n}(0)=\\widehat{b}_{1,n}(0)+2\\Delta \\delta_{n,1}.\n \\label{ty1}\n\\end{eqnarray}\nDue to the symmetry between the NLIE for $\\xi=1$ and the one for $\\xi=-1$, \nthe following relation holds:\n\\begin{eqnarray}\n\\widehat{b}_{1,n}(0)=(-1)^{n}b^{(1)}_{n}(0)|_{Q^{(a)}_{1} \\to Q^{(1)}_{a} \n \\ {\\rm for} \\ a \\ge 1}.\n \\label{ty2}\n\\end{eqnarray}\nFor example, (\\ref{ty1}) and (\\ref{ty2}) for $n=1$ \nand (\\ref{coe1}) reproduce \n the $Q$-system (\\ref{Q-sys}) for $(a,m)=(1,1)$. \nFrom the relations \n(\\ref{ty1}) and (\\ref{ty2}) for $n=2$ and (\\ref{coe2}), we obtain \n identities among characters: \n\\begin{eqnarray} \n&& \\hspace{-40pt} \n-3 {Q^{(2)}_{1}}^{2}+Q^{(2)}_{1}{Q^{(1)}_{1}}^{2}+2 Q^{(3)}_{1}Q^{(1)}_{1}\n=-3 {Q^{(1)}_{2}}^{2}+Q^{(1)}_{2}{Q^{(1)}_{1}}^{2}+2 Q^{(1)}_{3}Q^{(1)}_{1}, \\\\\n&& \\hspace{-40pt}\n Q^{(2)}_{1}Q^{(1)}_{1}-Q^{(3)}_{1}=Q^{(1)}_{2}Q^{(1)}_{1}-Q^{(1)}_{3},\n\\end{eqnarray}\nwhere we have used the fact that $Q^{(a)}_{m}$ does not depend on $\\Delta $. \nThese relations can be proved from the \nrelations (\\ref{jacobi-trudi}), (\\ref{jacobi-trudi2}) and (\\ref{limit}).\n\nSome comments on references on the high temperature expansion \nare in order. \nThe high temperature expansion of the free energy was\n calculated from Takahashi's NLIE for \n the $XXX$-model up to the order of 100 \\cite{ShT02} and for \n the $XXZ$-model up to the order of 99 \\cite{TT05}. \nAs for the higher rank or higher spin case, we have some results\n \\cite{T02,T03,T04,TT05} from NLIE. \nIn particular, our result on the $sl(r+1)$ Uimin-Sutherland model \nin \\cite{T03} was applied \\cite{BGOSTF03,YRFC04,YRZ04,BGO04,BGOF04,BGOT05} \nto spin ladder models, and \ngood agreement\nwas seen between theoretical results and \nexperimental data. \nWe note that \nthe coefficients (\\ref{coe1})-(\\ref{coe3}) coincide with eqs. \n(4.14)-(4.16) in \\cite{TT05}. 
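The character identities above can also be checked numerically at $q=1$, $\mu_{a}=0$, where $Q^{(a)}_{1}$ is the binomial sum of the appendix formula (\ref{Q-q=1}); the analogous binomial form used below for $Q^{(1)}_{m}$ (its $r\leftrightarrow s$ dual, i.e. the dimension of the $m$-th symmetric-type tensor representation) is our assumption in this sketch:

```python
# Numerical check of the two character identities at q=1, mu_a=0.
from math import comb

def Q_a1(a, r, s):
    # appendix formula (Q-q=1): a-th antisymmetric-type dimension
    return sum(comb(r + 1, j) * comb(a + s - j, a - j) for j in range(a + 1))

def Q_1m(m, r, s):
    # assumed r <-> s dual form: m-th symmetric-type dimension
    return sum(comb(s + 1, j) * comb(m - j + r, m - j) for j in range(m + 1))

for (r, s) in [(1, 0), (0, 1), (2, 0), (1, 1), (2, 1)]:
    Q1 = Q_a1(1, r, s)
    assert Q1 == Q_1m(1, r, s)          # Q^(1)_1 agrees in both families
    lhs = -3 * Q_a1(2, r, s)**2 + Q_a1(2, r, s) * Q1**2 + 2 * Q_a1(3, r, s) * Q1
    rhs = -3 * Q_1m(2, r, s)**2 + Q_1m(2, r, s) * Q1**2 + 2 * Q_1m(3, r, s) * Q1
    assert lhs == rhs                   # first identity
    assert Q_a1(2, r, s) * Q1 - Q_a1(3, r, s) == \
           Q_1m(2, r, s) * Q1 - Q_1m(3, r, s)   # second identity
```

For instance, for $sl(2|1)$ ($r=1$, $s=0$) both sides of the second identity evaluate to $4\cdot 3-4=5\cdot 3-7=8$, consistent with the supersymmetric Jacobi-Trudi relations.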
\nNote however that the coefficients in our paper are more general than the ones in \n\\cite{TT05} since the value of $Q^{(a)}_{1}$ (\\ref{limit})\n was restricted to the $s=-1$ case in \\cite{TT05}. \nThere are also several works on high temperature expansions by different methods \n(see for example, \\cite{DV95,RST02,BEU00,FK02,F03}).\n\\section{Concluding remarks}\nIn this paper, we have derived NLIE which contain only $r+s+1$ unknown functions \nfor the $U_{q}(\\widehat{sl}(r+1|s+1))$ Perk-Schultz model. \nThe key is a duality for the auxiliary function (\\ref{dual}) \nand the quantum (supersymmetric) Jacobi-Trudi and Giambelli \nformulae (\\ref{jacobi-trudi}) and (\\ref{jacobi-trudi2}). \nAlthough we assumed that $q$ is generic, \nwe expect that our NLIE (at least the reduced ones \n(\\ref{nlie-general})-(\\ref{nlie-s=0b}), \n(\\ref{nlie-xi=-1})-(\\ref{nlie-r=0b})) will also be \nvalid even for the case where $q$ is a root of unity, \nas we will not need to take into account the truncation of the \n$T$-system. \nThe high temperature expansion of the free energy \nin terms of characters was calculated from our NLIE. \n\nThere are NLIE with a finite number of unknown functions \nfor algebras of arbitrary rank in a different context \\cite{Z98,DDT00}. \nThese NLIE are different from the Takahashi type. \nWhether one can generalize (or modify) their NLIE for the finite \ntemperature case\n is still not clear. \n A deeper understanding of this subject is desirable. \n\nThere is another kind of formulation of transfer matrices \nwhich is based on the graded formulation of the \n quantum inverse scattering method.\nIn this formulation, the row-to-row transfer matrix \nis defined as a supertrace: \n$\\widehat{t}(v)={\\mathrm str}_{0}(\\widehat{R}_{0L}(v)\n \\cdots \\widehat{R}_{02}(v)\\widehat{R}_{01}(v))$, where \nthe $R$-matrix is defined as $\\widehat{ R}^{a_{1},b_{1}}_{a_{2},b_{2}}(v)=\n(-1)^{p(a_{1})p(b_{1})}\nR^{a_{1},b_{1}}_{a_{2},b_{2}}(v)$ and the graded tensor product is adopted. 
\nAs far as the free energy (in the thermodynamic limit)\n is concerned, we think that there is no difference \nbetween this graded formulation and the one we have adopted. \n\\section*{Acknowledgments}\nThe author would like to thank A. Kl\\\"umper and K. Sakai for \ncomments on a figure of specific heats. \nHe also thanks Y. Nagatani for a remark \n on the programming of Mathematica. \n\\noindent\n\\renewcommand{\\theequation}{A.1.\\arabic{equation}}\n\\begin{landscape}\n\\section*{Appendix: The high temperature expansion of the specific heat}\nWe will list the high temperature expansion of the \nspecific heat $C_{sl(r+1|s+1)}$ for the $U_{q}(\\widehat{sl}(r+1|s+1))$ \nPerk-Schultz model at $q=1$, \n$\\mu_{a}=0$ ($a \\in B$). \nHere we put $t=\\frac{J}{T}$. \nIn this case, $Q^{(a)}_{1}$ (cf. (\\ref{limit})) becomes \n\\begin{eqnarray}\nQ^{(a)}_{1}=\\sum_{j=0}^{a}\\binom{r+1}{j}\\binom{a+s-j}{a-j}, \\label{Q-q=1}\n\\end{eqnarray}\nwhich is the dimension of the $a$-th anti-(super)symmetric tensor representation \nof $sl(r+1|s+1)$. \nIf one substitutes (\\ref{Q-q=1}), $\\Delta=1$ and the values of $(r,s)$\n into (\\ref{coe1})-(\\ref{coe5}), \none can recover (\\ref{hte-sl30})-(\\ref{hte-sl32}) up to the order of 5 \nthrough $C=-T\\frac{\\partial^{2} f}{\\partial T^{2}}$. \n A formula for $r